\section{Introduction}\n\nAny programmable response of a material to external stimulation can be interpreted as computation. \nTo implement a logical function in a material one must map the space-time dynamics of the material's internal structure onto a space of logical values. \nThis is how experimental laboratory prototypes of unconventional computing devices are made: logical gates, circuits and binary adders employing interaction of wave-fragments in light-sensitive Belousov-Zhabotinsky media \cite{ref1}, swarms of soldier crabs \cite{ref2}, growing lamellipodia of slime mould \emph{Physarum polycephalum}~\cite{adamatzky2016logical}, crystallisation patterns in ``hot ice'' \cite{adamatzky2009hot}, and peristaltic waves in protoplasmic tubes \cite{adamatzky2014slime}. In many cases logical circuits are `built' or evolved from a previously disordered material \cite{miller2014evolution}, e.g. networks of slime mould \emph{Physarum polycephalum} \cite{whiting2016towards}, bulks of nanotubes \cite{broersma2012nascence}, and nanoparticle ensembles \cite{bose2015evolution, broersma2017computational}. In these works the computing structures can be seen as growing on demand, and logical gates develop in a continuum where an optimal distribution of material minimises internal energy. A continuum exhibiting such properties can be called a ``self-optimising continuum''. The slime mould \emph{Physarum polycephalum} exemplifies such a continuum: it is capable of solving many computational problems, including mazes and adaptive networks \cite{adamatzky2016advances}. 
Other examples of such material behaviour include bone remodelling \cite{christen2014bone}, \nroot elongation \cite{mazzolai2010plant}, sandstone erosion \cite{bruthans2014sandstone}, \ncrack and lightning propagation \cite{achtziger2000optimization}, growth of neurons and blood vessels, etc. \nSome other physical systems suitable for computation have also been proposed in \cite{miller2014evolution, turner2014neuroevolution, banzhaf2006guidelines, miller2002evolution}. In all these cases, the formation of an optimum material layout is related to non-linear laws of material behaviour, \nresulting in an evolution of material structure governed by algorithms similar to those used in the topology optimisation of structures~\cite{klarbring2010dynamical}. We develop the ideas of material optimisation further and show, in numerical models, how logical circuits can be built in a conductive material that self-optimises its structure governed by the configuration of inputs and outputs. \n\nThe paper is structured as follows. In Sect.~\ref{topologyoptimisation} we introduce topology optimisation aimed at solving a stationary heat conduction problem.\nGates {\sc and} and {\sc xor} are designed and simulated in Sects.~\ref{andgate} and \ref{xorgate}. We design a one-bit half-adder in Sect.~\ref{onebithalfadder}.\nDirections of further research are outlined in Sect.~\ref{discussion}.\n\n\n\n\section{Topology optimisation}\n\label{topologyoptimisation}\n\n\nA topology optimisation in continuum mechanics aims to find a layout of a material within a given \ndesign space that meets specific optimum performance targets \cite{bendsoe2013topology, hassani2012homogenization, huang2010evolutionary}. \nTopology optimisation is applied to solve a wide range of problems \cite{bendsoe2005topology}, \ne.g. 
\nmaximisation of heat removal for a given amount of heat conducting material \cite{bejan1997constructal}, \nmaximisation of fluid flow within channels \cite{borrvall2003topology},\nmaximisation of structure stiffness and strength \cite{bendsoe2005topology}, \ndevelopment of meta-materials satisfying specified mechanical and thermal physical properties \cite{bendsoe2005topology}, \noptimum layout of plies in composite laminates \cite{stegmann2005discrete}, \nthe design of an inverse acoustic horn \cite{bendsoe2005topology}, modelling of an amoeboid organism growing towards food sources \cite{Safonov20161},\nand optimisation of photonic-crystal band-gap structures \cite{men2014robust}. \n\n\nA standard method of topology optimisation models the material layout using a density of material, $\rho$, \nvarying from 0 (absence of material) to 1 (presence of material), where the dependence of structural properties on the density \nof material is described by a power law. This method is known as Solid Isotropic Material with Penalisation (SIMP) \cite{zhou1991coc}. \nThe optimisation of the objective function consists in finding an optimum distribution of $\rho$: $\min_\rho f(\rho)$.\n\nThe problem can be solved with various numerical schemes, including the sequential quadratic programming (SQP) \cite{wilson1963simplicial}, \nthe method of moving asymptotes (MMA) \cite{svanberg1987method}, and the optimality criterion (OC) method \cite{bendsoe2005topology}. \nThe topology optimisation problem can be replaced with a problem of finding a stationary point of an Ordinary Differential Equation (ODE) \cite{klarbring2010dynamical}. Considering the density constraints\non $\rho$, the right-hand side of the ODE is equal to a projection of the negative gradient of the objective function. Such\nan optimisation approach is widely used in the theory of projected dynamical systems \cite{nagurney2012projected}. 
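The SIMP power law just described maps material density to a physical property. For the heat conduction setting used later in this paper, a minimal sketch of such an interpolation could look as follows; the exact interpolation form is our assumption, while the default parameter values ($k_{\min}$, $k_{\max}$, $p$) are those listed in the section on specific parameters:

```python
def conductivity(rho, k_min=0.009, k_max=1.0, p=2):
    """SIMP-style interpolation: heat conduction coefficient as a power law
    of the material density rho in [0, 1]. Gives k_min at rho = 0 (void)
    and k_max at rho = 1 (solid); p > 1 penalises intermediate densities."""
    return k_min + (rho ** p) * (k_max - k_min)
```

The penalisation power $p>1$ makes intermediate densities "uneconomical", pushing the optimiser towards near-binary (void/solid) layouts.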
Numerical solutions of the topology\noptimisation problem can be obtained using a simple explicit Euler algorithm. As shown in \cite{klarbring2012dynamical}, such iterative schemes match the\nalgorithms used in the bone remodelling literature \cite{harrigan1994bone}.\n\n\nIn this work we consider the topology optimisation problem as applied to heat conduction problems \cite{gersborg2006topology}. \nConsider a region in the space $\Omega$ with a boundary \n$\Gamma=\Gamma _D \cup \Gamma _N$, $\Gamma _D \cap \Gamma_N= \emptyset$, \nseparated for setting the Dirichlet (D) and the Neumann (N) boundary conditions. \nFor the region $\Omega $ we consider the steady-state heat equation: \n\n\begin{equation}\n\nabla \cdot k \nabla T +f=0 \text{ in } \Omega \n\end{equation}\n\n\begin{equation}\nT = T_0 \text{ on } \Gamma_D \n\end{equation}\n\n\n\begin{equation}\n(k \nabla T) \cdot n = Q_0 \text{ on } \Gamma_N \n\end{equation}\n\n\n\n\nwhere $T$ is the temperature, \n$k$ is a heat conduction coefficient, \n$f$ is a volumetric heat source, and \n$n$ is an outward unit normal vector. \nAt the boundary $\Gamma_D$ a temperature $T=T_0$ is specified in the form of Dirichlet\nboundary conditions, and at the boundary $\Gamma _N$ the heat flux $(k \nabla T) \cdot n$ is specified using Neumann boundary conditions. \nThe condition $(k \nabla T) \cdot n = 0$ specified on a part of $\Gamma_N$ corresponds to thermal insulation (adiabatic conditions).\n\nWhen stating the topology optimisation problem for heat conduction, it is necessary to find an optimal distribution of a limited volume of conductive material that minimises heat release, which corresponds to designing a heat-conducting device. 
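In one dimension with no volumetric source ($f=0$), equations of this kind reduce to a constant-flux "conductors in series" problem. The following sketch, our own illustration rather than part of the paper's ABAQUS workflow, solves the steady state for a chain of cells with prescribed end temperatures:

```python
import numpy as np

def solve_steady_heat_1d(k, T_left, T_right):
    """Steady-state d/dx (k dT/dx) = 0 on a 1-D rod with Dirichlet ends.
    k: per-cell conductivities (length n); returns temperatures at n+1 nodes.
    With no source the flux q is constant, so the temperature drop across
    cell i is proportional to its thermal resistance 1/k[i]."""
    k = np.asarray(k, dtype=float)
    resistances = 1.0 / k
    q = (T_left - T_right) / resistances.sum()   # constant heat flux
    drops = q * resistances                      # per-cell temperature drops
    return np.concatenate(([T_left], T_left - np.cumsum(drops)))
```

With uniform conductivity the profile is linear, e.g. `solve_steady_heat_1d([1, 1, 1, 1], 100, 0)` gives temperatures 100, 75, 50, 25, 0; a low-conductivity cell concentrates the temperature drop, which is why material placement controls the cost function below.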
It is necessary to find an optimum distribution of material density $\rho$ within a given area $\Omega$ in order to minimise the cost function:\n\n\begin{equation}\n\text{Minimize } C(\rho) = \int_\Omega \nabla T \cdot (k (\rho) \nabla T) \, d\Omega\n\end{equation}\n\n\begin{equation}\n\text{Subject to } \int_\Omega \rho \, d\Omega \leq M, \hspace{5mm} \rho_{\min} \leq \rho \leq \rho_{\max}\n\end{equation}\n\n\begin{equation}\nk(\rho) = k_{\min} + \rho^p (k_{\max} - k_{\min})\n\end{equation}\nwhere $M$ is the available mass of the conductive material and $p$ is a penalisation power ($p > 1$).\n\nIn order to solve the problem (1)--(6) we apply the following techniques used in dynamical systems modelling. \nAssume that $\rho$ depends on a time-like variable $t$. \nLet us consider the following differential equation to determine the density in the $i$-th finite element, $\rho_i$, \nwhen solving the problem stated in (1)--(6):\n\n\begin{equation}\n\dot{\rho_i}=\lambda (\frac{C_i(\rho_i)}{\rho_i V_i} - \mu), \hspace{5mm} C_i(\rho_i) = \int_{\Omega_i} \nabla T \cdot (k_i(\rho) \nabla T) d\Omega\n\end{equation}\nwhere the dot above denotes the derivative with respect to $t$, \n$\Omega_i$ is the domain of the $i$-th finite element, \n$V_i$ is the volume of the $i$-th element, and \n$\lambda$ and $\mu$ are positive constants characterising the behaviour of the model. \nThis equation can be obtained by applying methods of the projected dynamical systems \cite{klarbring2012dynamical} \n or bone remodelling methods \cite{harrigan1994bone, mullender1994physiological, payten1998optimal}.\n\n\nFor the numerical solution of equation (7) a projected Euler method is used \cite{nagurney2012projected}. 
\nThis gives an iterative formulation for finding $\rho_i$ \cite{klarbring2010dynamical}:\n\begin{equation}\n\rho^{n+1}_i = \rho^n_i + q[\frac{C_i(\rho^n_i)}{\rho^n_i V_i} - \mu^n]\n\end{equation}\nwhere $q = \lambda \Delta t$, \n$\rho^{n+1}_i$ and $\rho^n_i$ are the numerical approximations of $\rho_i(t+\Delta t)$ and $\rho_i(t)$, \n$\mu^n =\frac{\sum_i C_i(\rho^n_i)}{\sum_i \int_{\Omega_i} \rho_{ev} d\Omega}$, and $\rho_{ev}$ is a specified mean value of density.\n\nWe consider a modification of equation (8):\n\begin{equation}\n\rho^{n+1}_i = \n\begin{cases}\n\rho^n_i + \theta \text{ if } \frac{C_i(\rho^n_i)}{\rho^n_i V_i} - \mu^n \geq 0, \\\n \rho^n_i - \theta \text{ if } \frac{C_i(\rho^n_i)}{\rho^n_i V_i} - \mu^n < 0,\n\end{cases}\n\end{equation}\nwhere $\theta$ is a positive constant.\n\nThen we calculate a value of $\rho _i^{n+1}$ using equation (9) and project $\rho _i$ onto the set of constraints:\n\n\begin{equation}\n\rho^{n+1}_i = \n\begin{cases}\n\rho_{\max} \text{ if } \rho^{n+1}_i > \rho_{\max}, \\\n\rho^{n+1}_i \text{ if } \rho_{\min} \leq \rho^{n+1}_i \leq \rho_{\max},\\\n\rho_{\min} \text{ if } \rho^{n+1}_i < \rho_{\min} \n\end{cases}\n\end{equation}\nwhere $\rho_{\min}$ is a specified minimum value of $\rho_i$ and $\rho_{\max}$ is a specified maximum value of $\rho_i$. \nThe minimum value is taken as the initial value of density for all finite elements: $\rho_i^0=\rho_{\min}$.\n\n\section{Specific parameters}\n\label{Specificparameters}\n\n\nThe algorithm above is implemented in ABAQUS \cite{Abaqus2014} using a modification of the structural topology optimisation plug-in, \nUOPTI, developed previously \cite{Safonov2015}. Calculations were performed using topology optimisation methods for a finite element model\nof $200 \times 200 \times 1$ elements. 
Cube-shaped linear hexahedral elements of DC3D8 type with unit-length\nedges were used in the calculations. The elements used have eight integration points. The cost function value is\nupdated for each finite element as the mean value over the integration points of the element under consideration \cite{Abaqus2014}. \n\n\nThe model can be described by the following parameters: \n$\rho_{\min}$ and $\rho_{\max}$ are the minimum and maximum values of $\rho_i$, \n$M=\sum_i \int_{\Omega_i} \rho_{ev} d\Omega$ is the mass of the conductive material, \n$\theta$ is the increment of $\rho_i$ at each time step, \n$p$ is the penalisation power, \n$k_{\max}$ is the heat conduction coefficient at $\rho_i = 1$, \n$k_{\min}$ is the heat conduction coefficient at $\rho_i = 0$. \nAll parameters but $M$ are the same for all six (three devices with two types of boundary conditions) implementations:\n$\rho_{\max}=1$,\n$\rho_{\min}=0.01$,\n$\theta=0.03$,\n$p=2$,\n$k_{\max}=1$,\n$k_{\min}=0.009$. \n\nThe parameter $M$ is specified as follows: \n$M=2000$ for {\sc and} and {\sc xor} with Dirichlet boundary conditions on inputs, and for the one-bit half-adder with both types of boundary conditions;\n$M=800$ for the {\sc and} gate and $M=400$ for the {\sc xor} gate with Neumann boundary conditions. \n\nWe use the following notations. Input logical variables are $x$ and $y$; the output logical variable is $z$. \nThey take values 0 ({\sc False}) and 1 ({\sc True}).\nSites where input stimuli are applied to the simulated material are $I_x$ and $I_y$ (inputs); $O$, $O_1$, $O_2$ are outputs. \nSites of outlets are $V$, $V_1$ and $V_2$ (the temperature is set to 0 at an outlet, \nso we use the symbol $V$ by analogy with vents in fluidic devices). Temperatures at the sites are denoted \n$T_{I_x}$, $T_{I_y}$, $T_{O}$, $T_{O_1}$ etc. We denote distances between sites as $l(I_x, I_y)$, $l(I_x, O)$ etc.\n\n\n\nLogical values are represented by temperature: $x=1$ is $T_{I_x}=100$ and $x=0$ is $T_{I_x}=0$, and the same for $y$. 
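With the parameter values listed above, a single iteration of the thresholded density update (9) followed by the projection (10) can be sketched as follows; the array-based formulation and the assumption of uniform element volumes are ours:

```python
import numpy as np

RHO_MIN, RHO_MAX, THETA = 0.01, 1.0, 0.03  # values from this section

def update_density(rho, C, V, mu):
    """One iteration of the thresholded update and projection.
    rho, C, V: per-element density, cost and volume arrays; mu: scalar
    threshold (the mu^n of the iterative scheme)."""
    drive = C / (rho * V) - mu
    rho_new = np.where(drive >= 0.0, rho + THETA, rho - THETA)  # eq. (9)
    return np.clip(rho_new, RHO_MIN, RHO_MAX)                   # eq. (10)
```

Elements whose specific cost exceeds the threshold gain material in fixed increments $\theta$, all others lose it, and the projection keeps every density inside $[\rho_{\min}, \rho_{\max}]$.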
\nWe input data into the gates by setting thermal boundary conditions at the input sites and adiabatic boundary conditions \nat the other nodes. The temperature at each point is specified by setting equal values in four neighbouring nodes belonging to the same finite \nelement. Temperature at outputs and outlets is set to zero in all experiments: $T_O=T_{O_1}=T_{O_2}=0$,\n $T_V=T_{V_1}=T_{V_2}=0$. To maintain the specified boundary conditions we set up a thermal flow through the boundary points. The intensity of the flows is determined via the solution of the heat conduction equation at each iteration. Therefore the intensity of the thermal streams via input, output and outlet sites changes during the simulation depending on the density distribution of the conductive material. Namely, if we define zero temperature at a site, the intensity of the stream through the site will be negative if the density of the conductive material is maximal; the intensity will be zero if the material density is minimal. When we do not define a temperature at a site, the intensity is non-zero if the density is maximal and zero if the density is minimal.\n Therefore, instead of talking about temperature at the output we talk about the thickness of the conductive material. \n Namely, if the material density value at the output site $O$ is minimal, $\rho_O = \rho_{\min}$, we assume logical output 0 ({\sc False}). \n If the density is $\rho_O = \rho_{\max}$ we assume logical output 1 ({\sc True}). The material density for all finite elements is set to the\nminimum value $\rho _i^0=\rho _{\min}$ at the beginning of computation. \n\n In the case of Dirichlet boundary conditions at the inputs, for $x=0$ and $y=0$ the temperature is constant and equal to zero at all points, therefore the temperature gradient is also zero, $\nabla T=0$. The cost function is also equal to zero at all points: $C_i(\rho _i)=0$. 
\nAs the initial density for all finite elements is set to the minimum value $\rho_i^0=\rho_{\min}$, it \n follows from equations (9) and (10) that the density stays constant and equal to its minimum value, \n $\rho_i^n=\rho_{\min}$. Therefore, the density value at the point $O$ is minimal, $\rho_O=\rho_{\min}$, which indicates \n logical output 0. Further we consider only situations when at least one of the inputs is non-zero.\n \n In the case of Neumann boundary conditions at the inputs, the flux at each site is specified by setting the flux through the face of the finite element to which the site under consideration belongs. Adiabatic boundary conditions are set for the other nodes. The logical value of $x$ is represented by the value of the given flux in $I_x$, $Q_{I_x}$. The logical value of $y$ is represented by the value of the given flux in $I_y$, $Q_{I_y}$. Flux $Q_{I_x}=0$ represents $x=0$ and flux $Q_{I_x}=1$ represents $x=1$. \n \n Figures in the paper show the density distribution of the conductive material. The maximum values of $\rho$ are shown in red, \n the minimum values in blue. \n \n\n \n\n\n\section{{\sc and} gate}\n\label{andgate}\n\n\subsection{Dirichlet boundary conditions}\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[]{\includegraphics[scale=0.25]{fig1a} }\n\subfigure[]{\includegraphics[scale=0.25]{fig1b}}\n\subfigure[]{\includegraphics[scale=0.25]{fig1c}}\n\caption{{\sc and} gate implementation with Dirichlet boundary conditions. \n(a) Scheme of inputs and outputs. (b,c) Density distribution $\rho$ for inputs (b) $x=1$ and $y=0$ and \n(c) $x=1$ and $y=1$. 
}\n\label{fig1}\n\end{figure}\n\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[$t=10$]{ \includegraphics[scale=0.85]{fig2a} }\n\subfigure[$t=20$]{ \includegraphics[scale=0.85]{fig2b} }\n\subfigure[$t=30$]{ \includegraphics[scale=0.85]{fig2c} }\n\subfigure[$t=40$]{ \includegraphics[scale=0.85]{fig2d} }\n\subfigure[$t=50$]{ \includegraphics[scale=0.85]{fig2e} }\n\subfigure[$t=100$]{ \includegraphics[scale=0.85]{fig2f} } \n\caption{Density distribution, $\rho$, in the implementation of the {\sc and} gate for inputs $x=1$ and $y=0$, \nDirichlet boundary conditions for input points. The snapshots are taken at $t=10$, 20, 30, 40, 50, and 100 steps.}\n\label{fig2}\n\end{figure}\n\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[$t=10$]{ \includegraphics[scale=0.85]{fig3a} }\n\subfigure[$t=20$]{ \includegraphics[scale=0.85]{fig3b} }\n\subfigure[$t=30$]{ \includegraphics[scale=0.85]{fig3c} }\n\subfigure[$t=40$]{ \includegraphics[scale=0.85]{fig3d} }\n\subfigure[$t=50$]{ \includegraphics[scale=0.85]{fig3e} }\n\subfigure[$t=100$]{ \includegraphics[scale=0.85]{fig3f} } \n\caption{Density distribution, $\rho$, in the implementation of the {\sc and} gate for inputs $x=1$ and $y=1$, \nDirichlet boundary conditions for input points. The snapshots are taken at $t=10$, 20, 30, 40, 50, and 100 steps.}\n\label{fig3}\n\end{figure}\n\nLet us consider the implementation of an {\sc and} gate in the case of Dirichlet boundary conditions at the input sites. \nThe input sites $I_x$ and $I_y$ and the output site $O$ are arranged at the vertices of an isosceles triangle (Fig.~\ref{fig1}a):\n$l(I_x, I_y)=102$, $l(I_x, O)=127$, $l(I_y, O)=127$. The Dirichlet boundary conditions are set at $I_x$, $I_y$ and $O$.\nThe material density distribution for inputs $x=1$ and $y=0$ is shown in Fig.~\ref{fig1}b. The maximum density region connects \n$I_x$ with $I_y$ and no material is formed at site $O$, thus the output is 0. The space-time dynamics of the gate is shown in Fig.~\ref{fig2}. 
\nWhen both inputs are {\sc True}, $x=1$ and $y=1$, domains with maximum density of the material connect the input sites with the output site, \n($I_x, O$) and ($I_y, O$) (Fig.~\ref{fig1}c). Therefore the density value at the output is maximal, $\rho_O = \rho_{\max}$, which indicates \nlogical output 1 ({\sc True}). Figure~\ref{fig3} shows intermediate results of the density distribution in the gate for $x=1$ and $y=1$. Supplementary videos can be found here \cite{Safonov2016}. \n\n\n\n\subsection{Neumann boundary conditions}\n\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[]{\includegraphics[scale=0.25]{fig5a}}\n\subfigure[]{\includegraphics[scale=0.25]{fig5b}}\\\n\subfigure[]{\includegraphics[scale=0.25]{fig5c}}\n\subfigure[]{\includegraphics[scale=0.25]{fig5d}}\n\caption{{\sc and} gate implementation in the case of Neumann boundary conditions.\n(a) Scheme of the gate. Density distribution, $\rho$, for inputs\n(b) $x=1$, $y=0$,\n(c) $x=0$, $y=1$,\n(d) $x=1$, $y=1$.}\n\label{fig5}\n\end{figure}\n\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[$t=10$]{ \includegraphics[scale=0.85]{fig6a} }\n\subfigure[$t=20$]{ \includegraphics[scale=0.85]{fig6b} }\n\subfigure[$t=30$]{ \includegraphics[scale=0.85]{fig6c} }\n\subfigure[$t=40$]{ \includegraphics[scale=0.85]{fig6d} }\n\subfigure[$t=50$]{ \includegraphics[scale=0.85]{fig6e} }\n\subfigure[$t=200$]{ \includegraphics[scale=0.85]{fig6f} } \n\caption{Density distribution, $\rho$, in the implementation of the {\sc and} gate for inputs $x=1$ and $y=1$, \nNeumann boundary conditions for input points. The snapshots are taken at $t=10$, 20, 30, 40, 50, and 200 steps.}\n\label{fig6}\n\end{figure}\n\nLet us consider the implementation of the {\sc and} gate in the case of Neumann boundary conditions for input points. \nThe scheme of the gate is shown in Fig.~\ref{fig5}a. The distance between $I_x$ and $I_y$ is 40 points, the distance between $I_x$ \nand $V$ is 70 points, and that between $I_y$ and the outlet $V$ is 90 points. 
The output site $O$ is positioned in the middle of the segment\n$(I_x, I_y)$. Boundary conditions in $I_x$, $I_y$ and $V$ are set as fluxes, i.e. Neumann boundary conditions.\n\nFigure~\ref{fig5}b shows the density distribution, $\rho$, for inputs $x=1$ and $y=0$. The maximum density region\ndevelops along the shortest path $(I_x, V)$. Therefore, the density value at $O$ is minimal, $\rho_O=\rho_{\min}$, which represents\nlogical output {\sc False}. For inputs $x=0$ and $y=1$ (Fig.~\ref{fig5}c) the maximal density region is formed along the path \n$(I_y, V)$, i.e. $\rho_O=\rho_{\min}$ and the logical output is {\sc False}. The material density distribution for inputs $x=1$ and $y=1$ is shown in \nFig.~\ref{fig5}d. The maximum density region develops along the path $(I_y, I_x, V)$. Thus $\rho_O=\rho_{\max}$ and the logical output is {\sc True}.\n\n\nFigure \ref{fig6} shows intermediate results of simulating the density distribution, $\rho$, for inputs $x=1$ and $y=1$. At the beginning of the computation \nthe material develops in the proximity of $I_x$, $I_y$ and $V$ (Fig.~\ref{fig6}a). Then $I_x$ and $V$ are connected by a domain with the highest density of the material (Fig.~\ref{fig6}b). A thinner region of high-density material further develops between $I_x$ and $I_y$ (Fig.~\ref{fig6}c--f).\n\n\n\n\n\section{{\sc xor} gate}\n\label{xorgate}\n\n\subsection{Dirichlet boundary conditions}\n\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[]{\includegraphics[scale=0.24]{fig4a} }\n\subfigure[]{\includegraphics[scale=0.24]{fig4b} }\n\subfigure[]{\includegraphics[scale=0.24]{fig4c} }\n\caption{{\sc xor} gate implementation with Dirichlet boundary conditions. \n(a) Scheme of inputs and outputs. (b,c) Density distribution $\rho$ for inputs (b) $x=1$ and $y=0$ and \n(c) $x=1$ and $y=1$. }\n\label{fig4}\n\end{figure}\n\n\nLet us consider the implementation of the {\sc xor} gate in the case of Dirichlet boundary conditions for input points. 
We use a design similar to that of the {\sc and} gate (Fig.~\ref{fig1}) but with two inputs $I_x$ and $I_y$, an output $O$ and an outlet $V$. The site of the output $O$ in the {\sc and} gate is assigned the outlet $V$ function,\nand the output site $O$ is positioned in the middle of the segment connecting sites $I_x$ and $I_y$ (Fig.~\ref{fig4}a). \nThe temperature at the point $V$ is set to 0, $T_V=0$; no temperature boundary conditions are set at $O$. If only one input is {\sc True}, a region of maximum density material is formed along a shortest path between $I_x$ and $I_y$. Therefore, the density value is $\rho_O=\rho_{\max}$, thus indicating output {\sc True} (Fig.~\ref{fig4}b, $x=1$, $y=0$). When both input variables are {\sc True}, $x=1$ and $y=1$, maximum density regions are formed along the paths \n$(I_x, V)$ and $(I_y, V)$ and not along $(I_x, I_y)$. Thus $\rho_O = \rho_{\min}$, i.e. logical output {\sc False} (Fig.~\ref{fig4}c, $x=1$, $y=1$). \n\n\n\n\subsection{Neumann boundary conditions}\n\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[]{\includegraphics[scale=0.25]{fig7a}}\n\subfigure[]{\includegraphics[scale=0.25]{fig7b}}\\\n\subfigure[]{\includegraphics[scale=0.25]{fig7c}}\n\subfigure[]{\includegraphics[scale=0.25]{fig7d}}\n\caption{{\sc xor} gate implementation in the case of Neumann boundary conditions.\n(a) Scheme of the gate. Density distribution, $\rho$, for inputs\n(b) $x=1$, $y=0$,\n(c) $x=0$, $y=1$,\n(d) $x=1$, $y=1$.}\n\label{fig7}\n\end{figure}\n\n\nLet us consider the implementation of the {\sc xor} gate in the case of Neumann boundary conditions for input points. \nThe gate has five sites: inputs $I_x$ and $I_y$, output $O$, outlets $V_1$ and $V_2$ (Fig.~\ref{fig7}a). \nSites $I_x$, $I_y$, $V_1$ and $V_2$ are vertices of a square with side length 42 points. The output site $O$\nis positioned at the intersection of the diagonals of the square. Boundary conditions in $I_x$, $I_y$, $V_1$ and $V_2$ \nare set as fluxes, i.e. Neumann boundary conditions. 
To ensure convergence of solutions of the stationary heat conduction problem (1), \nthe fluxes at $V_1$ and $V_2$ are set equal to the negative half-sum of the fluxes at $I_x$ and $I_y$:\n$Q_{V_1} = Q_{V_2} = - \frac{Q_{I_x}+Q_{I_y}}{2}$. \n\n\nFor $x=1$ and $y=0$ the maximum density domain is formed between $I_x$ and $V_1$ and \nbetween $I_x$ and $V_2$ (Fig.~\ref{fig7}b). The output site $O$ sits on the $(I_x, V_2)$ diagonal, \ntherefore $\rho_O = \rho_{\max}$, and thus the logical output is {\sc True}. When the inputs are $x=0$ and $y=1$\nthe maximum density domain is formed between $I_y$ and $V_2$ and between $I_y$ and $V_1$ (Fig.~\ref{fig7}c).\nThe output site $O$ sits on the $(I_y, V_1)$ diagonal, therefore $\rho_O = \rho_{\max}$, and thus the logical output is {\sc True}.\nWhen both inputs are {\sc True}, $x=1$ and $y=1$, domains of high-density material develop along the shortest paths \n$(I_x, V_1)$ and $(I_y, V_2)$ (Fig.~\ref{fig7}d). These domains do not cover the site $O$, therefore the logical output is {\sc False}.\n\n\n\section{One-bit half-adder}\n\label{onebithalfadder}\n\n\subsection{Dirichlet boundary conditions}\n\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[]{\includegraphics[scale=0.25]{fig8a}}\n\subfigure[]{\includegraphics[scale=0.25]{fig8b}}\n\subfigure[]{\includegraphics[scale=0.25]{fig8c}}\n\caption{One-bit half-adder implementation in the case of Dirichlet boundary conditions.\n(a) Scheme of the adder. Density distribution, $\rho$, for inputs\n(b) $x=1$, $y=0$,\n(c) $x=1$, $y=1$.\n}\n\label{fig8}\n\end{figure}\n\n\n\n\nTo implement the one-bit half-adder in the case of Dirichlet boundary conditions for input points we combine the designs of the {\sc and} and {\sc xor} gates\n(Figs.~\ref{fig1}a and \ref{fig4}a). We introduce the following changes to the scheme shown in Fig.~\ref{fig4}a: the former outlet $V$\nis designated as output $O_1$, and the former output $O$ is designated as output $O_2$ (Fig.~\ref{fig8}a). 
The temperature value at $O_1$ is set to zero, \n$T_{O_1}=0$. No temperature boundary conditions are set at $O_2$. The output $O_1$ indicates the logical value $xy$ and the output $O_2$ the logical \nvalue $x \oplus y$. When only one of the inputs is {\sc True} and the other {\sc False}, e.g. $x=1$ and $y=0$ as shown in Fig.~\ref{fig8}b, the density\nvalue at $O_1$ is minimal, $\rho_{O_1}=\rho_{\min}$, and the density value at $O_2$ is maximal, $\rho_{O_2}=\rho_{\max}$. Thus $O_1$ indicates {\sc False} and $O_2$ {\sc True}. For inputs $x=1$ and $y=1$ we have $\rho_{O_1}=\rho_{\max}$ and $\rho_{O_2}=\rho_{\min}$, i.e. logical outputs \n{\sc True} and {\sc False}, respectively. \n\n\n\subsection{Neumann boundary conditions}\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[]{\includegraphics[scale=0.25]{fig9a}}\n\subfigure[]{\includegraphics[scale=0.25]{fig9b}}\\\n\subfigure[]{\includegraphics[scale=0.25]{fig9c}}\n\subfigure[]{\includegraphics[scale=0.25]{fig9d}}\n\caption{One-bit half-adder implementation in the case of Neumann boundary conditions.\n(a) Scheme of the adder. Density distribution, $\rho$, for inputs\n(b) $x=1$, $y=0$,\n(c) $x=0$, $y=1$,\n(d) $x=1$, $y=1$.\n}\n\label{fig9}\n\end{figure}\n\n\n\n\begin{figure}[!tbp]\n\centering\n\subfigure[$t=10$]{ \includegraphics[scale=0.85]{fig10a} }\n\subfigure[$t=20$]{ \includegraphics[scale=0.85]{fig10b} }\n\subfigure[$t=30$]{ \includegraphics[scale=0.85]{fig10c} }\n\subfigure[$t=40$]{ \includegraphics[scale=0.85]{fig10d} }\n\subfigure[$t=50$]{ \includegraphics[scale=0.85]{fig10e} }\n\subfigure[$t=69$]{ \includegraphics[scale=0.85]{fig10f} } \n\caption{Density distribution, $\rho$, in the implementation of the one-bit half-adder for inputs $x=1$ and $y=1$, \nNeumann boundary conditions for input points. 
The snapshots are taken at $t=10$, 20, 30, 40, 50, and 69 steps.}\n\label{fig10}\n\end{figure}\n\n\nLet us consider the implementation of the one-bit half-adder in the case of Neumann boundary conditions for input points. \nThe device consists of seven sites: two inputs $I_x$ and $I_y$, two outputs $O_1$ and $O_2$, and \nthree outlets $V_1$, $V_2$ and $V_3$ (Fig.~\ref{fig9}a). Sites $I_x$, $I_y$, $O_1$ and $O_2$ are vertices of a square with \nside length 40 points. The output $O_2$ is positioned at the intersection of the diagonals of this square. The output $O_1$ is positioned \nat the middle of the segment connecting $V_2$ and $V_3$. The distance between $V_1$ and $V_3$ is 36 points, the distance between \n$V_3$ and $V_2$ is 51 points. The output $O_1$ represents the logical function $xy$ and the output $O_2$ the function $x \oplus y$. \nBoundary conditions in $I_x$, $I_y$, $V_1$, $V_2$ and $V_3$ are set as fluxes, thus corresponding to Neumann boundary \nconditions. To ensure convergence of solutions of the stationary heat conduction problem (1), the flux values at\n$V_1$, $V_2$ and $V_3$ are set equal to one third of the negative sum of the fluxes at $I_x$ and $I_y$: \n$Q_{V_1} = Q_{V_2} = Q_{V_3} = - \frac{Q_{I_x}+Q_{I_y}}{3}$.\n\nFigure~\ref{fig9}b shows the results of calculating the density distribution $\rho$ for inputs $x=1$ and $y=0$. There the \nmaximum density region connects $I_x$ with $V_1$, $V_2$ and $V_3$. The density domain $(I_x, V_3)$ is not a straight line because\nthe system benefits most when a part of the segment $(I_x, V_3)$ coincides with the segment $(I_x, V_1)$. The site $O_2$ is covered by the maximum \ndensity domain $(I_x, V_1)$, $\rho_{O_2} = \rho_{\max}$, thus representing logical value {\sc True}; the output $O_1$ is {\sc False} because\n$\rho_{O_1} = \rho_{\min}$. \n\nThe density distribution $\rho$ calculated for inputs $x=0$ and $y=1$ is shown in Fig.~\ref{fig9}c. 
The maximum density region \nconnects $I_y$ with $V_1$, $V_2$ and $V_3$ via paths $(I_y, V_1)$, $(I_y, V_2)$, $(I_y, V_3)$. The output $O_2$ belongs to $(I_y, V_2)$, therefore it \nindicates logical output {\sc True}. The output $O_1$ indicates {\sc False} because it is not covered by a high density domain. \n\nThe density distribution $\rho$ calculated for inputs $x=1$ and $y=1$ is shown in Fig.~\ref{fig9}d. The maximum density regions are developed along the paths\n$(I_x, V_2)$, $(I_y, V_1)$, $(I_y, V_3)$ and $(V_2, V_3)$. There is a heat flux between $V_2$ and $V_3$, which forms a segment of high density material.\nThe high density material covers $O_1$, therefore the output $O_1$ indicates logical value {\sc True}. The output $O_2$ is not covered by high density material, \nthus {\sc False}. \n\n\nFigure \ref{fig10} shows intermediate results of simulating the density distribution $\rho$ for inputs $x=1$ and $y=1$.\n\n\n\section{Discussion}\n\label{discussion}\n\nWe implemented logical gates and circuits using optimisation of conductive material when solving stationary problems of heat conduction. \nIn the simplest case of two sites with given heat fluxes the conductive material is distributed between the sites in a straight line. \nThe implementations of gates presented employ several sites; the exact configuration of topologically optimal structures of the conductive \nmaterial is determined by the values of the input variables. The algorithm of optimal layout of the conductive material is similar to the biological process of \nbone remodelling. The algorithm proposed can be applied to a wide range of biological networks, including neural networks, \nvascular networks, slime mould, plant roots, and fungal mycelium. These networks will be the subject of further studies. In the future we may also \nconsider experimental laboratory testing of the numerical implementations of logical gates, e.g. 
via dielectric breakdown tests, because\nthat phenomenon is also described by the stationary heat conduction (Laplace) equation, which takes into account the evolution of the conductivity of \na medium determined by the electric current. The approach to developing logical circuits proposed by us could be used in fast prototyping of \nexperimental laboratory unconventional computing devices. Such devices will compute by changing the properties of their material substrates. \nFirst steps in this direction have been made in designing Belousov-Zhabotinsky medium based computing devices for pattern recognition~\cite{fang2016pattern} and \nconfigurable logical gates~\cite{wang2016configurable}, a learning slime mould chip~\cite{whiting2016towards}, \nelectric current based computing~\cite{ayrinhac2014electric},\nprogrammable excitation wave propagation in living bioengineered tissues~\cite{mcnamara2016optically}, heterotic computing~\cite{kendon2015heterotic}, and memory devices in digital colloids~\cite{phillips2014digital}.\n\n\section*{Supplementary materials}\n\n\subsection*{{\sc xor} gate, Neumann boundary conditions}\n\begin{itemize}\n\item inputs $x=0$, $y=1$: \url{https://www.youtube.com/watch?v=osB12UqM3-w}\n\item inputs $x=1$, $y=0$: \url{https://www.youtube.com/watch?v=lKMeu1nFuak}\n\item inputs $x=1$, $y=1$: \url{https://www.youtube.com/watch?v=AxdCVVtIqgk}\n\end{itemize}\n\n\subsection*{One-bit half-adder, Neumann boundary conditions} \n\begin{itemize}\n\item inputs $x=0$, $y=1$: \url{https://www.youtube.com/watch?v=i81WTCrg8Lg}\n\item inputs $x=1$, $y=0$: \url{https://www.youtube.com/watch?v=impbwJXjCAM}\n\item inputs $x=1$, $y=1$: \url{https://www.youtube.com/watch?v=ubrgfzlAQQE}\n\end{itemize}\n\n\n\n\bibliographystyle{elsarticle-num}\n\n\n\section{Introduction}\n\n\n\nDespite rapid progress in computer technology, there are still computational 
problems that are\ndifficult to solve with any known algorithm on modern computers. However, the theory that\ndescribes physics on atomic length-scales, quantum mechanics, suggests a more efficient way to attack\nhard computational problems. A computer using operations involving\nthe entanglement of quantum states is known as a quantum computer. \cite{Deutsch1985} A number of algorithms proposed for\nquantum computers are expected to solve many classically hard problems. \cite{nielsen2010quantum} For example,\nShor's algorithm \cite{Shor1997} can efficiently solve the prime factorization problem, which is difficult to\nsolve even with state-of-the-art classical algorithms and computers.\nThe quantum analog of the bit -- the fundamental information storage unit in classical\ncomputers -- is called the qubit. Many physical realizations of qubits for quantum computers\nare being developed, including semiconductor, superconductor, nuclear, and optical qubit\nsystems. The mature semiconductor manufacturing industry today offers several advantages in\nthe construction of semiconductor qubits. In particular, the nanoscale semiconductor structure\nknown as the quantum dot has been proposed as a possible realization of the semiconductor\nqubit. \cite{LossDiVincenzo1998}\n\nSemiconductor qubits constructed from typical semiconductor (e.g. GaAs) quantum dots have limited usefulness, because\nquantum information stored in the device can be lost due to spin-decoherence. Sources of\nspin-decoherence in semiconductors include the spin-orbit, hyperfine, and electron-phonon interactions.\n{Graphene, \cite{RevModPhys.81.109,Kotov2012,DasSarma2011} a two-dimensional lattice structure formed by carbon, is a promising material for\navoiding spin-decoherence. \cite{trauzettel2007spin} Graphene has not only weak spin-orbit coupling, but also a\nnegligible hyperfine interaction, since carbon-12 has zero nuclear spin.\n
These advantages of\ngraphene have driven researchers towards developing a graphene-based qubit. One proposed\nmodel for the graphene qubit is the graphene nanoribbon (GNB) quantum dot, \\cite{trauzettel2007spin,BreyFertig2006,Gucclu2014} which has been\nexperimentally realized by several groups. \\cite{Han2007,Li2008,Stampfer2009,Guttinger2012,Liu2010,Wei2013}}\n\nPrevious theoretical efforts to model GNB quantum dots \\cite{trauzettel2007spin} offer only\nestimates of the electron-electron interaction and many-particle effects in graphene.\nConsequently, it is not clear if the predicted long-range Heisenberg exchange coupling between\nthe dots---which is necessary for qubit operations in universal quantum\ncomputing---can be achieved in practice. Nor are the effects of gate voltage changes on the\nelectronic structure and the exchange coupling in the multi-electron regime well understood.\nRealistic models of graphene qubit operation require a better understanding of these effects. \\cite{LossDiVincenzo1998}\nTo understand how the GNB quantum dot qubit functions in real\napplications, we study a more complete model of the graphene qubit using a numerical\napproach. { To study the exchange coupling between two graphene qubits, we use an unrestricted Hartree-Fock method with a generalized-valence-bond (GVB) wave function. \\cite{Ostlund1996,Hu2000,Yannouleas2001,Fang2002} This is a double-Slater-determinant approach, which takes into account the correlation effect due to charge separation and is suitable for describing diatomic molecules. In contrast, the conventional Hartree-Fock method, which uses factorized single-particle product states (i.e. a single determinant) cannot describe the correlation effect. The GVB method allows us to capture the physics of entangled states (crucial for quantum computing), which cannot be described by the single-determinant Hartree-Fock method. 
\\cite{Horodecki2009} The computational cost of the GVB method is much lower than that of the configuration interaction calculations. \\cite{Ostlund1996,Dutta2008} Thus, it is easier to analyze the exchange coupling between two qubits in various gate configurations with the GVB method.} In this work, we employ this numerical scheme and the Dirac equation to provide a realistic simulation of a GNB quantum dot qubit.\n\n\n\n\\section{Method}\n\n\\subsection{One-particle problem}\n\nConsider a graphene nanoribbon with width $W$ and length $L$ and armchair boundary conditions. Let the x-axis be the direction across the width of the nanoribbon and the y-axis be the direction parallel to the length of the ribbon. { The behavior of an electron with energy close to the Fermi level in this system can be described by the Dirac equation \\cite{Semenoff1984,DiVincenzo1984,trauzettel2007spin,BreyFertig2006,RevModPhys.81.109}\n\n\\begin{eqnarray}\n&& H_1 |\\psi \\rangle = \\epsilon | \\psi \\rangle \\\\\n&& H_1 = -i \\hbar v\n\\begin{pmatrix}\n\\sigma_x \\partial_x + \\sigma_y \\partial_y & 0 \\\\\n0 & -\\sigma_x \\partial_x + \\sigma_y \\partial_y \\\\\n\\end{pmatrix}\n +V(y) ,\\notag\n\\end{eqnarray}\nwhere $\\hbar$ is Planck's constant, $v$ is the Fermi velocity of graphene, $\\sigma_x,\\sigma_y$ are Pauli matrices for the pseudospin describing two sublattices of graphene, $\\partial_x,\\partial_y$ are partial derivatives, and $V(y)$ is the electrical confining potential along the y-axis.} $|\\psi \\rangle$ is a 4-component spinor in the form\n\\begin{eqnarray}\n|\\psi \\rangle =\n\\begin{pmatrix}\n\\psi (K,A)\\\\\n\\psi (K,B)\\\\\n-\\psi(K',A)\\\\\n-\\psi(K',B)\\\\\n\\end{pmatrix},\n\\end{eqnarray}\nwhere $K,K'$ label the two valleys in the Brillouin zone of graphene, $A,B$ label the two sublattices of graphene, and $\\psi$ is the envelope function.\n\n\nWe expand the electron envelope function within a basis set as follows\n\\begin{eqnarray}\n |\\psi \\rangle &=& 
\\sum_{m,s,n} \\phi^{m,n}_s |\\psi^{m,n}_s \\rangle, \\\\\n \\langle x,y |\\psi^{m,n}_s \\rangle &=&\n\\frac{1}{\\sqrt{2WL}} \\begin{pmatrix}\n\\chi_s e^{iq_nx} \\\\\n\\chi_s e^{-iq_nx} \\\\\n\\end{pmatrix}\n f_m (y) ,\n\\end{eqnarray}\n where $\\{ f_m(y) \\}$ is a set of basis functions of variable $y$, $s=A,B$ labels the two components of the pseudospin describing two sublattices. The basis vectors for the two-component pseudospinors are\n\\begin{eqnarray}\n&\\chi_A &=\n\\begin{pmatrix}\n1 \\\\\n0 \\\\\n\\end{pmatrix} \\\\\n&\\chi_B &=\n\\begin{pmatrix}\n0 \\\\\n1 \\\\\n\\end{pmatrix}.\n\\end{eqnarray}\nThe boundary conditions are\n\\begin{eqnarray}\n&\\langle x,y|\\psi \\rangle |_{x=0} &=\n\\begin{pmatrix}\n 0 & {\\bf I}_{2} \\\\\n {\\bf I}_{2} & 0 \\\\\n\\end{pmatrix}\n\\langle x,y |\\psi \\rangle |_{x=0} \\label{bc1} \\\\\n&\\langle x,y|\\psi \\rangle |_{x=W} &=\n\\begin{pmatrix}\n 0 & e^{+ i\\frac{2\\pi\\mu}{3} }{\\bf I}_{2} \\\\\n e^{ -i\\frac{2\\pi\\mu}{3} }{\\bf I}_{2} & 0 \\\\\n\\end{pmatrix}\n\\langle x,y |\\psi \\rangle |_{x=W} , \\label{bc2}\n\\end{eqnarray}\nwhere $\\mu =\\pm 1, 0$ is a constant determined by the termination of the ribbon edge. $\\mu =\\pm 1$ defines the semiconducting boundary condition. The symbol ${\\bf I}_{2}$ in Eqs. (\\ref{bc1}) and (\\ref{bc2}) denotes a $2\\times2$ identity matrix in pseudospin space. {Imposing the semiconducting boundary conditions on the basis functions leads to quantization of the electronic states in the x-direction\n\\begin{eqnarray}\n &q_n &=\\frac{\\pi}{W}(n + \\frac{\\mu}{3}) , n \\in \\mathbb{Z} \\\\\n &&= (3n + \\mu )q_0 ,\n\\end{eqnarray}\nwhere the characteristic momentum scale $q_0 = \\frac{\\pi}{3W}$ is defined by the width of the ribbon $W$. In this work, $1\/q_0$ is the characteristic length scale and $\\hbar v q_0$ is a characteristic energy scale. 
Throughout this paper, we consider only the condition with $n=0$ and $\\mu=+1$, since the ribbon is narrow enough such that the energies of higher confined states are outside the range of interest.}\n\n\nThe basis functions $f_m(y)$ are chosen to be sinusoidal functions confined within the interval $[0,L]$,\n\\begin{eqnarray}\nf_m(y) &=& \\sqrt{2}\\sin (\\frac{\\pi m y}{L}) .\n\\end{eqnarray}\nThe matrix elements of the one-particle Hamiltonian and overlap can then be written down analytically. {The Dirac equation is cast into the form of a generalized eigenvalue problem\n\\begin{eqnarray}\n\\sum_{ms} \\langle \\psi^{m',n}_{s'} | H_1 | \\psi^{m,n}_{s} \\rangle \\phi^{m,n}_s = \\epsilon \\sum_{ms} \\langle \\psi^{m',n}_{s'} | \\psi^{m,n}_s \\rangle \\phi^{m,n}_s , \\notag \\\\\n\\end{eqnarray}\nwhich is solved numerically.}\n\n\n\n\n\\subsection{Two-particle problem}\n\nTo evaluate the exchange coupling between two electrons separately located in two neighboring GNB quantum dots, we need to consider the mutual Coulomb interaction and exchange term between them. The Coulomb interaction in two-dimension is given by {\n\\begin{equation}\n v_{ee} = (\\frac{e^2}{4\\pi\\epsilon \\hbar v }) \\hbar v\n \\frac{1}{\\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}}, \\notag\n\\end{equation}}\nwhere $(\\frac{e^2 }{4\\pi\\epsilon \\hbar v })$ is a dimensionless Coulomb parameter, which can be viewed as the fine-structure constant of graphene. {One expects $\\frac{e^2 }{4\\pi\\epsilon \\hbar v }=2.2$ or smaller for a suspended graphene.\\cite{Hwang2012,Reed2010,Kotov2012} In this work we use $\\frac{e^2 }{4\\pi\\epsilon \\hbar v }=1.43$ for graphene on quartz substrate. \\cite{Hwang2012}}\n\nWe adopt the unrestricted Hartree-Fock method with generalized-valence-bond wave function \\cite{Fang2002} to solve the two-particle problem. 
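The generalized eigenvalue problem above maps directly onto standard dense solvers. The sketch below illustrates the same construction for a scalar (Schr\"odinger-like) square well expanded in the sine basis $f_m(y)$; the well depth and width are hypothetical, and the full calculation replaces the scalar kinetic term with the $2\times 2$ Dirac blocks of Eq.~(1):

```python
import numpy as np
from scipy.linalg import eigh

# Scalar analogue of the generalized eigenvalue problem H c = E S c,
# expanded in the sine basis f_m(y) on [0, L].  The square well below
# (depth 1.5, width 4, dimensionless units) is a hypothetical example.
L, M = 40.0, 50
y = np.linspace(0.0, L, 2001)
dy = y[1] - y[0]
f = np.array([np.sqrt(2.0 / L) * np.sin((m + 1) * np.pi * y / L)
              for m in range(M)])

V = np.where(np.abs(y - L / 2) < 2.0, 0.0, 1.5)   # single square well

kin = np.diag([((m + 1) * np.pi / L) ** 2 for m in range(M)])  # kinetic term
pot = f @ (V[None, :] * f).T * dy                 # <f_m'|V|f_m> by quadrature
H = kin + pot
S = f @ f.T * dy                                  # overlap matrix (~identity)

E, C = eigh(H, S)          # generalized eigenvalue solver
print(E[:3])               # ground state lies inside the well (0 < E[0] < 1.5)
```

The `eigh(H, S)` call solves the non-orthogonal-basis problem in one step; for the Dirac case the matrices simply acquire the pseudospin block structure.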
Because the spin-orbit interaction is omitted here, the total wavefunction can be written as the product of the spatial wavefunction, denoted by $\Psi_+ (\Psi_-)$, and the corresponding two-particle spinor for the singlet (triplet) state. The spatial wave function $\Psi_+ (\Psi_-)$ of the spin singlet (triplet) must be symmetric (antisymmetric) with respect to the exchange of the two particles to make the total wavefunction antisymmetric. The spatial wavefunctions take the form\n\begin{eqnarray}\n | \Psi_\pm \rangle =\n\frac{1}{\sqrt{2(1\pm S^2)}}( | \psi_L , \psi_R \rangle \pm | \psi_R , \psi_L \rangle),\n\end{eqnarray}\nwhere $\psi_L$ ($\psi_R$) denotes a four-component single-particle wavefunction (for the Dirac particle with two bands and two valleys at K and K') localized at the left (right) quantum dot, and $S=\langle \psi_L | \psi_R \rangle$ is their overlap, which is enhanced when the Klein tunneling condition is met. The two-particle Hamiltonian can be written as\n\begin{eqnarray}\nH = H_1 \otimes 1 + 1\otimes H_1 + v_{ee},\n\end{eqnarray}\nwhere $H_1$ is the single-particle Hamiltonian for a Dirac particle as defined in Eq.~(1).\nThe two-particle Schr\\"{o}dinger equation is\n\begin{eqnarray}\n& H | \Psi_\pm \rangle &= E | \Psi_\pm \rangle .\n\end{eqnarray}\n$\psi_L$ ($\psi_R$) appearing in $| \Psi_\pm \rangle$ is expanded in a non-orthonormal basis set $\{|\nu \rangle\}$ as defined in Eq.~(4), with $\nu$ being a composite index for $(m,n,s)$.\nIn each iteration with a given $\psi_R$, we write $| \psi_L \rangle=\sum_{\nu}C^L_{\nu}|\nu \rangle$ and solve for the expansion coefficients $C^L_{\nu}$ according to the following projected Schr\\"{o}dinger equation for $| \Psi_\pm \rangle$:\n\begin{eqnarray}\n& \langle \nu',\psi_R|H| \Psi_\pm \rangle &= E \langle \nu',\psi_R | \Psi_\pm \rangle ,\n\end{eqnarray}\nwhere $\langle\nu' , \psi_R|$ denotes a product state of the basis state $\langle \nu'|$ and $\langle 
\\psi_R|$.\n\nThe generalized eigenvalue problem to be solved becomes\n\\begin{eqnarray}\n\\sum_{\\nu} \\langle \\nu' | H_{GVB} |\\nu \\rangle C^L_{\\nu} = E \\sum_{\\nu} \\langle \\nu' | S_{GVB} |\\nu \\rangle C^L_{\\nu},\n\\end{eqnarray}\nwhere the Hamiltonian and overlap matrices elements are\n\\begin{eqnarray}\n\\langle \\nu' | H_{GVB} |\\nu \\rangle &=& \\langle \\nu' |H_1| \\nu \\rangle + \\langle \\nu' | \\nu \\rangle \\langle \\psi_R |H_1| \\psi_R \\rangle \\\\\n& \\pm & \\langle \\nu' |H_1| \\psi_R \\rangle \\langle \\psi_R |\\nu \\rangle \\pm \\langle \\nu' | \\psi_R \\rangle \\langle \\psi_R |H_1| \\nu \\rangle \\notag \\\\\n&+& \\langle \\nu',\\psi_R | v_{ee} | \\nu, \\psi_R \\rangle \\pm \\langle \\nu',\\psi_R | v_{ee} |\\psi_R , \\nu \\rangle \\notag \\\\\n \\langle \\nu' | S_{GVB} |\\nu \\rangle &=& \\langle \\nu' | \\nu\\rangle \\pm \\langle \\nu' | \\psi_R \\rangle \\langle \\psi_R |\\nu\\rangle .\n\\end{eqnarray}\nThe iteration continues until self-consistency is reached. For the triplet state, we carry out the reorthonormalization and projection procedure described in \\textcite{Fang2002} to resolve the linear-dependence problem in the generalized eigenvalue problem.\n\n\\subsection{The double well model}\n\n\\begin{figure}\n\\includegraphics[trim = 0mm 0mm 0mm 0mm,width=0.4\\textwidth]{fig1.eps}\n\\caption{Double well potential profile along the graphene ribbon length. The width of each well is $w$, the well-to-well separation is $d$, the { barrier height outside the double wells is $\\mu_{out}$, the height of the middle potential barrier is $\\mu_b$, and the potential at the bottom of the well is set to zero.}\\label{fig1}}\n\\end{figure}\n\n\nWe model a GNB double-dot system by a double square well potential in the y-direction, as shown in Fig.~\\ref{fig1}. Unless specified, we use the following parameters throughout this work. 
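The double-well profile of Fig.~1 and the characteristic scales can be coded up directly; a minimal sketch (the Fermi velocity $v\approx 9.5\times 10^5$~m/s is an assumption, chosen near the commonly quoted $10^6$~m/s):

```python
import numpy as np

hbar, e = 1.0545718e-34, 1.602177e-19   # J*s, C
v = 9.5e5        # m/s, assumed Fermi velocity of graphene
W = 20e-9        # m, ribbon width used in this work

q0 = np.pi / (3 * W)                  # characteristic momentum (n=0, mu=+1)
E0_meV = hbar * v * q0 / e * 1e3      # characteristic energy hbar*v*q0 in meV
print(1 / q0 * 1e9, E0_meV)           # length ~19.1 nm, energy ~33 meV

def double_well(y, w, d, mu_b, mu_out, L):
    """Potential of Fig. 1; y in units of 1/q0, energies in units of hbar*v*q0.
    Two wells of width w separated by a barrier of width d, centred in [0, L]."""
    y1 = (L - d) / 2 - w              # left edge of the left well
    y2 = (L + d) / 2 + w              # right edge of the right well
    in_wells = ((y >= y1) & (y < y1 + w)) | ((y > y2 - w) & (y <= y2))
    in_barrier = (y >= y1 + w) & (y <= y2 - w)
    return np.where(in_wells, 0.0, np.where(in_barrier, mu_b, mu_out))
```

With $W=20$~nm this reproduces $1/q_0\approx 19$~nm and $\hbar v q_0\approx 33$~meV, consistent with the values quoted below.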
The physical parameters used in the model are those reported in the experimental study by \\textcite{Liu2010} The width of the ribbon is {$W=20 (nm)\\approx 1q_0^{-1}$}, and hence our characteristic energy is {$\\hbar v q_0\\approx 32.9 (meV)$}. The length of the ribbon is {$L=800 (nm)\\approx 40q_0^{-1}$}. The width of each dot (i.e., the width of each confining well) is $w$. The separation between the dots (i.e., the width of the potential barrier between two dots) is $d$. {The potential heights of the barrier and outside region are given by $\\mu_b$ and $\\mu_{out}$, respectively. The potential at the bottom of the well is set to zero.} We use $\\mu_{out}=1.5 \\hbar vq_0 =49.4 (meV)$ as suggested by previous theoretical work. \\cite{trauzettel2007spin} The Fermi energy is fixed at $E_F=1 \\hbar vq_0$. The number of sinusoidal basis functions used is 50.\n\n\n\n\n\n\\section{Results and Discussion}\n\n\\subsection{{Single-particle solutions}}\n\n\\begin{figure}\n\\includegraphics[trim = 20mm 0mm 80mm 0mm,width=0.45\\textwidth]{fig2.eps}\n\\caption{ (a) Single particle energy levels as functions of confining potential height for a single well. The well width is $w=2q_0^{-1}$. The conducting states (states forming the left upper triangle) and the Klein tunneling states (states forming the right lower triangle) are also shown in our calculation. (b) Single particle energy levels as functions of inter-well distance for a double well. The width of the wells is $w=4q_0^{-1}$. The ground state (solid blue) is generally above but very close to the top of the barrier valence band (dotted black), and hence is a {Klein} tunneling assisted state. (c) Energy splittings ($\\Delta E_m= E_{m+1}-E_m$) as functions of inter-well distance for a double well. \\label{fig2}}\n\\end{figure}\n\n\nWe first examine the single-particle behavior of single and double potential wells. 
Figure \\ref{fig2}(a) shows the single-particle energy levels as functions of confining barrier height $\\mu_{out}$ for a single potential well, where the width of the well is $w=2q_0^{-1}$ and the length of the ribbon is $L=16 q_0^{-1}$. { The left upper triangle region contains many slanted lines which describe the discretized states of the conduction bands derived from the GNB regions outside the well. Similarly, the right lower triangle region also contains many slanted lines which describe the discretized states of the GNB valence bands. The discretization is a result of quantum confinement due to the finite length ($L$) of GNB considered. The two regions are separated by a constant $2 \\hbar v q_0$, which corresponds to the band gap of GNB. In the gap, there are 3 quantized levels, which are the quantum confined states derived from the middle well. The dependence of these energy levels on the barrier height $\\mu_{out}$ matches the results obtained by solving the transcendental equation as described in \\textcite{trauzettel2007spin}. The good agreement validates our numerical procedure. It should be noted that our numerical method based on the expansion within a nearly complete basis set can handle not only the simple case of a GNB quantum dot defined by a square well but also arbitrary potential profile along the $y$ axis. This makes it easy to extend to double wells and the two-particle problem for finding the exchange coupling between two neighboring qubits.}\n\n\nFigure~\\ref{fig2}(b) shows energy levels as functions of inter-well separation, $d$ for a double square well. The width of the wells is $w=4q_0^{-1}$. This figure involves some high-energy excited states, so the number of sinusoidal basis functions used in this calculation is enlarged to 100 for higher accuracy. 
These energy levels come in pairs (indicated by the same color but different curve types), corresponding to the two split levels associated with the inter-well coupling of one energy level in the left dot and the corresponding one in the right. The amount of energy splitting reflects the coupling strength of the two states located in the two dots, which would be degenerate in energy in the absence of the inter-well coupling. {Above the energy of the valence band maximum (VBM) of the barrier (black dotted line), there are three pairs of bound states indicated by blue, green, and red colors. The ground state (blue) pair is very close to the VBM of the barrier, and one expects to see enhanced inter-well coupling due to the Klein-tunneling effect.}\n\n\nFigure~\ref{fig2}(c) shows the energy splittings due to inter-well coupling ($\Delta E_m=E_{m+1}-E_m$) as functions of the well-to-well separation $d$ for the same double square well potential on a semi-log scale. $E_0$ is the ground-state energy, $E_1$ is the first-excited-state energy, etc. The curves are almost linear on the semi-log scale, indicating that the energy splittings decay exponentially with the well-to-well separation. For small separation the ground-state splitting $\Delta E_0$ is smaller than the excited-state splittings $\Delta E_m, m=2,4$. However, for large separation ($q_0d>4$) the ground-state splitting becomes larger than the excited-state splittings, indicating that the ground-state splitting has a smaller decay rate compared to the excited-state splittings.\n
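The decay rates behind this comparison can be extracted by a linear fit on the semi-log scale; a sketch with synthetic (placeholder) splittings and hypothetical rates, not the computed data:

```python
import numpy as np

# If the splittings decay as dE ~ A*exp(-kappa*d), the slope of
# log(dE) versus d on the semi-log plot gives the decay rate kappa.
# The arrays below are synthetic placeholders, not the computed data.
d = np.linspace(2.0, 8.0, 7)           # well-to-well separation (units of 1/q0)
dE_ground = 0.5 * np.exp(-0.4 * d)     # slower decay (Klein-tunneling assisted)
dE_excited = 1.0 * np.exp(-0.9 * d)    # faster decay of an excited-state pair

kappa_g = -np.polyfit(d, np.log(dE_ground), 1)[0]
kappa_e = -np.polyfit(d, np.log(dE_excited), 1)[0]
print(kappa_g, kappa_e)   # recovers the input rates 0.4 and 0.9
```

A smaller fitted $\kappa$ for the ground-state pair is the quantitative signature of the slower decay discussed here.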
This suggests that the inter-well coupling is enhanced by the Klein tunneling for the ground state pair, in qualitative agreement with the prediction in Ref.~\\onlinecite{trauzettel2007spin} based on simple estimation.\n\n\n\\subsection{{Effects of barrier height on two-particle solutions}}\n\n\n\\begin{figure}\n\\includegraphics[trim = 35mm 0mm 45mm 0mm,width=0.5\\textwidth]{fig3.eps}\n\\caption{Negative exchange coupling $-J_{ex}$ as a function of the inter-well barrier potential height $\\mu_b$ for $q_0d=2$ (blue solid) and $q_0d=4$ (red dashed). The width of the wells is $w=4q_0^{-1}$. (a) Linear plot. (b) Semi-log plot. \\label{fig3}}\n\\end{figure}\n\n\n{ Here, we study the effects of barrier height on the exchange coupling ($J_{ex}=E_{triplet}-E_{singlet}$) between two electrons in the GNB double well. Figure~\\ref{fig3}(a) shows $-J_{ex}$ as a function of barrier potential height $\\mu_b$. For small barrier heights, $-J_{ex}$ can be either negative (for $q_0d=2$, blue solid) or positive (for $q_0d=4$, red dashed) depending on the well-to-well separation.} For small barrier heights, the potential profile behaves like a single confining well instead of a double well. A singlet-triplet ground state transition is expected for barrier heights lower than a critical value, which shall be discussed further in Section \\ref{distance}. In this section we focus on the regime in which the barrier height is larger than or equal to the critical value. In our model, the critical value can be estimated by comparing the barrier height with both the single-particle ground state energy and the bottom of the conduction bands associated with the wells. In Fig.~\\ref{fig3}(a), the green dashed line marks the point when $\\mu_b$ crosses $\\langle H_1 \\rangle$, the expectation value of $H_1$ in the triplet solution for $q_0d=2$, which has a weak dependence on $\\mu_b$. We label this point by $\\bar E_1$. 
$\bar E_1$ happens to be almost the same as the average of $\langle H_1 \rangle$ over $\mu_b$ for $\mu_b$ from 0 to $2\hbar vq_0$.\nThe black dashed line indicates where the barrier height equals the bottom of the conduction bands of the wells, which is at $1\hbar vq_0$. {The critical value is $\approx 1.1\hbar vq_0$, which lies between $1\hbar vq_0$ and $\bar E_1$.}\nFor $\mu_b$ larger than the critical value {($\approx 1.1\hbar vq_0$) $-J_{ex}$} is always positive, and it increases monotonically up to $\mu_b= 1.9\hbar vq_0$.\n\n\nFigure~\ref{fig3}(b) shows $-J_{ex}$ for $\mu_b > 1.1\hbar vq_0$ on a semi-log scale. The magnitude of $-J_{ex}$ grows exponentially for $q_0d=2$, and super-exponentially for $q_0d=4$. The coupling for $q_0d=4$ can be almost as large as that for $q_0d=2$ as $\mu_b$ reaches $1.9\hbar vq_0$. The exchange coupling for $q_0d=2$ (blue solid) is linear in the semi-log plot, which indicates exponential growth. For the longer well-to-well separation $q_0d=4$, one can see super-exponential growth of the exchange coupling. For $q_0d=4$ and $\mu_b >1.92 \hbar vq_0$, the electrons in the singlet state are in the first-excited single-particle state of both dots, while the electrons in the triplet state are still in the single-particle ground state.\n
The exchange coupling can not be defined in this case.\n\n\n\n \\begin{table\n \\caption{Singlet total energy $E_{singlet}$, triplet total energy $E_{triplet}$, and triplet single particle energy $E_1$ for some selected inter-dot distance $d$ and barrier height $\\mu_b$.\\label{table1}}\n \\begin{ruledtabular}\n \\begin{tabular}{c c c c c}\n $q_0d$ & $\\mu_b\/\\hbar vq_0$ & $E_{singlet}\/\\hbar vq_0$ & $E_{triplet}\/\\hbar vq_0$ & $E_1\/\\hbar vq_0$\\\\ \\hline\n 2 & 1.5 & 2.63897 & 2.63869 & 1.20982 \\\\\n 2 & 1.9 & 2.65891 & 2.65711 & 1.21937 \\\\\n 4 & 1.5 & 2.57493 & 2.57489 & 1.20796 \\\\\n 4 & 1.9 & 2.59815 & 2.59668 & 1.21833 \\\\\n 8 & 1.5 & 2.51309 & 2.51309 & 1.20725 \\\\\n 8 & 1.9 & 2.53504 & 2.53501 & 1.21845 \\\\\n \\end{tabular}\n \\end{ruledtabular}\n \\end{table}\n\n\nThe super-exponential growth of $-J_{ex}$ as $\\mu_b$ increases for larger well-to-well separation as shown in Fig.~\\ref{fig3}(b) is a special characteristic of the GNB quantum dot qubit. The overlap between the wave functions of the electrons in the left dot and the right dot is expected to be enhanced by the Klein tunnelling of Dirac particles when the valence band maximum of the barrier is close to the energies of conduction band states in the wells. \\cite{trauzettel2007spin} This long-distance coupling of Dirac particles is suggested as a possible advantage of the GNB quantum dot qubit over qubits in conventional systems. For the GNB qubit, the exchange coupling for the long distance case ($q_0d=4$) is almost as large as that in the short distance case ($q_0d=2$) as the barrier height approaches $\\mu_b=2\\hbar vq_0$. This implies that the valence band maximum in the barrier region is approaching the bottom of conduction band in the well. 
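The entries of Table~\ref{table1} translate directly into exchange couplings and reproduce this trend; a short script (energies copied from Table~\ref{table1}, in units of $\hbar v q_0$):

```python
# (q0*d, mu_b / hbar v q0) -> (E_singlet, E_triplet), values from Table I
E = {
    (2, 1.5): (2.63897, 2.63869),
    (2, 1.9): (2.65891, 2.65711),
    (4, 1.5): (2.57493, 2.57489),
    (4, 1.9): (2.59815, 2.59668),
    (8, 1.5): (2.51309, 2.51309),
    (8, 1.9): (2.53504, 2.53501),
}

# -J_ex = E_singlet - E_triplet (positive when the triplet is the ground state)
mJ = {k: s - t for k, (s, t) in E.items()}

# Raising mu_b from 1.5 to 1.9 boosts -J_ex by a much larger factor at
# q0*d = 4 than at q0*d = 2: the Klein-assisted, super-exponential growth.
gain_d2 = mJ[(2, 1.9)] / mJ[(2, 1.5)]
gain_d4 = mJ[(4, 1.9)] / mJ[(4, 1.5)]
print(gain_d2, gain_d4)
```

The enhancement factor at $q_0d=4$ is several times that at $q_0d=2$, which is exactly the long-distance coupling advantage discussed in the text.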
This result lends support to the proposal in \textcite{trauzettel2007spin}.\n\n\n\n\begin{figure}\n\includegraphics[trim = 35mm 0mm 35mm 0mm,width=0.33\textwidth]{fig4.eps}\n\caption{Projected charge densities $\rho_\pm(y)$ along the ribbon for the case $q_0d=4$ for (a) the singlet state and (b) the triplet state for various barrier potential strengths. $\mu_b=0.3\hbar vq_0$ (red dotted), $\mu_b=0.6\hbar vq_0$ (green dash-dot), $\mu_b=1.2\hbar vq_0$ (blue dashed), $\mu_b=1.9\hbar vq_0$ (black solid). The density in the barrier region decreases as $\mu_b$ increases from zero to $0.6$ and $1.2 \hbar vq_0$, but increases as $\mu_b$ further increases to the higher value $1.9\hbar vq_0$. The increase of density in the barrier region is significantly larger for the singlet state than for the triplet state. The width of the wells is $w=4q_0^{-1}$.\label{fig4}}\n\end{figure}\n\n\n\n{This overlap enhancement can be illustrated by examining the projected charge density, defined as\n\begin{equation} \rho_\pm (y_1)=\int {\Psi}_\pm(x_1,y_1;x_2,y_2)\cdot \Psi_\pm(x_1,y_1;x_2,y_2) dx_1dx_2dy_2, \end{equation}\n where $\Psi_\pm$ is the four-dimensional vector defined in Eq.~(13). { The projected charge density can be written explicitly as\n\begin{eqnarray} \rho_\pm (y_1)&=&\int dx_1 [|\psi_L(x_1,y_1)|^2\/2+|\psi_R(x_1,y_1)|^2\/2 \nonumber \\\n&\pm & S \psi_L(x_1,y_1)\psi_R(x_1,y_1)]\/(1\pm S^2). \end{eqnarray} \nIn all cases, $\rho_\pm (y)$ is dominated by the sum of the first two terms, since the overlap $S$ is small. We plot $\rho_\pm (y)$ as a function of $y$ in Fig.~\ref{fig4} for the singlet and triplet states for various barrier heights. The electron density is mainly localized in the two potential wells. The charge density between the two wells decreases as the barrier potential is raised from $\mu_b=0.3\hbar vq_0$ (red dotted) to $0.6\hbar vq_0$ (green dash-dot) and $1.2\hbar vq_0$ (blue dashed).\n
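The explicit expression for $\rho_\pm$ above conserves the total charge for either sign of the exchange term, since $\int \rho_\pm \, dy = (1\pm S^2)/(1\pm S^2) = 1$. A minimal numerical check, with hypothetical Gaussian orbitals standing in for $\psi_L$ and $\psi_R$:

```python
import numpy as np

# Projected density rho_± built from two localized orbitals, following the
# explicit expression above.  psi_L, psi_R are hypothetical 1D Gaussians.
y = np.linspace(0.0, 40.0, 4001)
dy = y[1] - y[0]

def gaussian(y0, sigma=1.5):
    g = np.exp(-((y - y0) ** 2) / (2 * sigma**2))
    return g / np.sqrt(np.sum(g**2) * dy)      # unit norm

psi_L, psi_R = gaussian(16.0), gaussian(24.0)
S = np.sum(psi_L * psi_R) * dy                 # overlap integral

def rho(sign):
    num = 0.5 * psi_L**2 + 0.5 * psi_R**2 + sign * S * psi_L * psi_R
    return num / (1 + sign * S**2)

rho_s, rho_t = rho(+1), rho(-1)                # singlet / triplet
print(np.sum(rho_s) * dy, np.sum(rho_t) * dy)  # both integrate to 1
```

With the positive sign the cross term piles extra density midway between the orbitals, mirroring the singlet enhancement seen in Fig.~\ref{fig4}.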
However, as the barrier height is raised to the higher value $\mu_b=1.9\hbar vq_0$ (black solid), the charge density between the wells becomes significantly larger for the singlet state, as seen in Fig.~4(a). In contrast, there is a much smaller change in the charge density of the triplet state for the corresponding change in barrier height. The enhanced charge density in the middle for the singlet is caused by the long tails of $|\psi_L(x_1,y_1)|^2$ and $|\psi_R(x_1,y_1)|^2$ extending into the opposite well, which result from the Klein tunneling effect. The long tail is enhanced (reduced) for the singlet (triplet) state, which is driven by the positive (negative) exchange term.}\n\nThis counter-intuitive behavior due to Klein tunneling is one of the special characteristics of Dirac particles. As the barrier height approaches $\mu_b=2\hbar vq_0$, the VBM of the barrier is aligned with the conduction band minimum in the well. This leads to an enhancement of the overlap between the two electrons, consistent with the result shown in Fig.~\ref{fig3}.\n\n\n\n\n\subsection{{Effects of inter-dot distance on two-particle solutions}\label{distance}}\n\n\n\begin{figure}\n\includegraphics[trim = 30mm 0mm 40mm 0mm,width=0.45\textwidth]{fig5.eps}\n\caption{Negative exchange coupling ($-J_{ex}$) as a function of well-to-well distance $d$ for various barrier heights in (a) linear scale and (b) semi-log scale. The width of the wells is $w=4q_0^{-1}$. $\mu_b=0$ (red dash-dot), $\mu_b=1.5\hbar vq_0$ (blue solid), $\mu_b=1.9\hbar vq_0$ (green dashed). For zero barrier, the singlet-triplet ground state transition occurs roughly at the critical distance {$q_0d_c\approx 3$} (black vertical dashed). For finite barriers, $-J_{ex}$ decays exponentially for $q_0d>3$.\label{fig5}}\n\end{figure}\n\n\nFigure~\ref{fig5}(a) shows $-J_{ex}$ as a function of inter-dot distance for different barrier heights.\n
In the absence of a barrier (red dash-dot), $-J_{ex}$ starts with negative values and increases to positive values for $q_0d>3$. For $\mu_b=1.5\hbar vq_0$ and $\mu_b=1.9\hbar vq_0$, $-J_{ex}$ starts with a positive value and decays exponentially for $q_0d>3$, as shown in Fig.~\ref{fig5}(b).\n\n\nFor $\mu_b=0$ there is no barrier, and we have only one confining potential well; increasing $d$ is then the same as increasing the width of a single potential well (which equals $d+2w$, as shown in Fig.~1). {This situation has been studied using various first-principles calculations, as summarized in Ref.~\onlinecite{Rayne2011}. The ground state is a singlet for a short ribbon, and a triplet for a long ribbon. Our result is consistent with the previous studies for this limiting case. In particular, the red dash-dot line is similar to Fig. 2(a) in Ref.~\onlinecite{Rayne2011}. The change of sign of $J_{ex}$ as $q_0d$ varies has a simple physical explanation. Whether the ground state is the singlet or the triplet depends on the relative strength of the kinetic energy and the potential energy due to the mutual Coulomb interaction. Similar to interacting electrons in a jellium model, the kinetic energy of the system scales like $1\/r_s^2$ and the potential energy scales like $1\/r_s$, where $r_s$ denotes the average distance between electrons in the system. For small inter-dot separation, the kinetic energy dominates and the singlet state has the lower total energy, since its wavefunction has less spatial variation (due to its symmetric behavior) as compared to the triplet state.\n
As the separation gets larger, the potential energy dominates and the total energy of the singlet state becomes higher, since it has more charge piling up toward the center as compared to the triplet state, as illustrated in Fig.~4.} In our calculation, the singlet-triplet ground state transition occurs roughly at the critical distance {$d_c\approx 3q_0^{-1}$}, which is labeled by the black vertical dashed line. For larger barrier heights, the singlet state has a larger density in the central barrier region and hence a higher Coulomb energy. This is why the triplet state is the ground state and $-J_{ex}$ is always positive for $\mu_b=1.5\hbar vq_0$ and $\mu_b=1.9\hbar vq_0$.\n\nFor the medium barrier height $\mu_b=1.5\hbar vq_0$, the exchange coupling decreases exponentially as the inter-dot distance increases, as shown in Fig.~\ref{fig5}. For the higher barrier height $\mu_b=1.9\hbar vq_0$, where Klein-paradox-assisted tunneling occurs, $-J_{ex}$ is generally larger than that for $\mu_b=1.5\hbar vq_0$. For small values of $q_0d$, $-J_{ex}$ increases with increasing separation before reaching the singlet-triplet ground state transition point {$q_0d_c\approx 3$}. For inter-dot distances longer than the critical distance {$d_c\approx 3q_0^{-1}$}, we see the expected exponential decay. Hence, in the Klein tunneling regime, the location of the maximum of $-J_{ex}$ can be roughly predicted by looking at the zero-barrier solution.\n\n\n\subsection{Effects of well width on two-particle solutions}\n\n\n\begin{figure}\n\includegraphics[trim = 35mm 0mm 35mm 0mm,width=0.36\textwidth]{fig6.eps}\n\caption{Projected charge density $\rho_\pm(y)$ along the ribbon with $q_0d=4$ and $\mu_b=1.9\hbar vq_0$ for (a) the singlet state and (b) the triplet state for various well widths. $q_0w=4$ (red dotted), $q_0w=6$ (green dash-dot), $q_0w=8$ (blue dashed).\n
\label{fig6}}\n\end{figure}\n\n\begin{figure}\n\includegraphics[trim = 30mm 0mm 40mm 0mm,width=0.45\textwidth]{fig7.eps}\n\caption{Negative exchange coupling ($-J_{ex}$) as a function of well width $w$ for various barrier heights in (a) linear scale and (b) semi-log scale. \label{fig7}}\n\end{figure}\n\n\nFigure~\ref{fig6} shows the charge density along the y-axis for various well widths for (a) the singlet states and (b) the triplet states. The inter-well separation is fixed at $d=4q_0^{-1}$ and the barrier height is $\mu_b=1.9\hbar vq_0$. For a small well width, $q_0w=4$, the density in the barrier region for the singlet state is much higher than that of the triplet state. For larger well widths, the charge densities spread out from the center, and the difference between the charge densities of the singlet and triplet states in the barrier region becomes less significant. {The absolute value of the exchange coupling is hence expected to be very small for large well widths.} This is shown in Fig.~\ref{fig7}, where $-J_{ex}$ is plotted as a function of well width. For barrier heights larger than the critical value, $\mu_b=1.5\hbar vq_0$ (blue solid) and $\mu_b=1.9\hbar vq_0$ (green dashed), the exchange splitting decays exponentially, as shown on the semi-log scale in Fig.~\ref{fig7}(b).\n
{The exchange splitting of $\\mu_b=1.9\\hbar vq_0$ is much larger than that of $\\mu_b=1.5\\hbar vq_0$, since Klein tunneling emerges as the VBM of the barrier gets close to the conduction band states of the wells.} For zero barrier height (red dash-dot), there is only a single confining well, and the curve is just the long-distance extension of the same curve (red dash-dot) in Fig.~\\ref{fig5}(a) discussed in the previous section.\n\n\n\\subsection{Qubit operation}\n\nOne figure of merit for qubit operation is the ratio of coherence time to switching time ${T_c}\/{\\tau_s}$, where $T_c$ is the coherence time and $\\tau_s$ is the switching time required for a $\\mbox{SWAP}^{\\frac{1}{2}}$ gate operation. \\cite{LossDiVincenzo1998,Shi2014} Within our model and range of parameters, the maximum exchange splitting for a graphene nanoribbon quantum dot qubit is $|J_{ex}|_{graphene}\\approx 0.002 \\hbar v q_0=66 \\mu eV$, which is of the same order as the typical value for a GaAs quantum dot qubit $|J_{ex}|_{GaAs}\\approx 100 \\mu eV$. \\cite{Burkard2000,Schliemann2001,Engel2004} Various sources of spin decoherence in graphene quantum dots have been investigated, including spin-orbit coupling, \\cite{Kane2005,Min2006,Struck2010,Hachiya2014} electron-phonon interaction, \\cite{Mariani2008,Struck2010,Tikhonov2014} and hyperfine interaction. \\cite{Fischer2009,Recher2010,Fuchs2012} The coherence time for graphene is expected to be $T_c \\approx 80 \\mu s$, three orders of magnitude longer than that of GaAs. \\cite{Recher2010,Kloeffel2013} Since the coherence time is much longer and the switching time is similar, we expect the figure of merit of graphene to be much better than that of GaAs. In the original proposal, the exchange coupling is estimated to be $J_{ex}\\approx 0.1 \\sim 1.5 meV$ using a single-particle picture and an empirical value for the Coulomb interaction, \\cite{trauzettel2007spin} which is an overestimate compared to our calculation. 
Our calculation provides details of the exchange splitting versus various parameters associated with gate operation, and confirms that the magnitude of $J_{ex}$ required for qubit operation is achievable in the presence of electron-electron interaction, even though it is somewhat smaller than the previous estimate.\n\n{\nThe barrier height of the outside region $\\mu_{out}$ should be carefully chosen to avoid the electron leakage caused by Klein tunneling. In the current model, $\\mu_{out}=1.5\\hbar vq_0$, so an electron state will be confined in the double well if its single-particle energy lies in the band gap of the outside region ($0.5\\hbar vq_0 < E <2.5\\hbar vq_0$). However, if the single-particle energy is too close to the VBM ($0.5\\hbar vq_0$) or the bottom of the conduction band ($2.5\\hbar vq_0$) of the outside region, the electrons could also tunnel out from the double well. In our numerical tests, it is safe to set $\\mu_{out}=1.5\\hbar vq_0$ with the Fermi energy $E_F$ set slightly above the conduction band minimum ($1\\hbar vq_0$). This region of operation could be identified by measuring the charge stability diagram of a graphene double quantum dot device.}\n\n\n\n\\section{Conclusion}\n\nWe have performed theoretical studies of the electronic structures of GNB quantum dot qubits using the Dirac equation and a double square well potential. The two-electron wave functions and exchange splitting are calculated for various potential configurations by using a GVB wave function within the unrestricted Hartree-Fock approximation. As the barrier height approaches $2\\hbar v q_0$ (the band gap of the nanoribbon), the magnitude of the exchange coupling is enhanced by the Klein tunneling. This enhancement can make the long-distance coupling almost as large as the short-distance coupling. {We found that the exchange coupling between two GNB quantum dot qubits can switch sign as the average inter-particle distance varies. 
This behavior is consistent with previous first-principles studies for two electrons in a finite-length GNB (which corresponds to the zero barrier limit of our GNB double dot system).\\cite{Rayne2011} For higher barriers, the magnitude of the exchange coupling decays, but it can be magnified by the Klein tunneling when the valence band maximum of the barrier is close to the conduction band states of the wells. We found that the magnitude of the exchange splitting required for qubit operation is achievable, and the figure of merit of the GNB qubit is expected to be significantly better than that of the GaAs quantum dot qubit.}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{acknowledgments}\nWe thank Lance Cooper and Jason Chang for valuable discussions. This work was supported in part by Ministry of Science and Technology, Taiwan under Contract No. NSC 101-2112-M-001-024-MY3.\n\\end{acknowledgments}\n\n\n\\section{Introduction}\nData-driven optimal control of a nonlinear system is a problem of significant interest, with applications ranging from vehicle autonomy and robotics to manufacturing and power systems.\nThe traditional approach to the optimal control problem (OCP) relies on solving the Hamilton-Jacobi-Bellman (HJB) equation \\cite{fleming2012deterministic}. The HJB equation is a nonlinear partial differential equation and is challenging to solve. Existing algorithms for solving HJB equations rely on an iterative scheme \\cite{beard1997galerkin,bertsekas2011approximate}. This iterative scheme is also at the heart of a variety of reinforcement learning (RL) algorithms for data-driven optimal control \\cite{sutton2018reinforcement}. In this paper, we present an alternative approach based on the dual formulation of the OCP. This dual approach leads to a convex optimization formulation of the OCP, which can be solved using a single-shot algorithm. 
This is in contrast to the iterative scheme used for solving the HJB equation.\nFurthermore, the iterative algorithms for solving the HJB equation require an initial stabilizing controller. Finding a stabilizing controller for a nonlinear system is, in general, a nontrivial problem. However, the computational framework for solving the OCP in the dual space does not require an initial stabilizing controller. \n\nThe dual formulation of the OCP we present is based on the theory of linear operators, namely the Perron-Frobenius (P-F) and Koopman operators \\cite{Lasota}, and is developed in \\cite{vaidyaocpconvex}. However, there are differences between the results presented in this paper and \\cite{vaidyaocpconvex} as discussed in our contributions. The convex formulation of the OCP in the dual space of densities and occupation measures has been extensively studied in \\cite{henrion2013convex,korda2017convergence,lasserre2008nonlinear}. The computational framework in these works relies on a moment-based relaxation of the infinite-dimensional optimization problem. In contrast, our proposed computational framework uses data and depends on the linear operator theory for the finite-dimensional approximation of the infinite-dimensional convex optimization problem. The convex formulation for optimal control is also extended to study stochastic OCP, control design with safety constraints, and data-driven stabilization problems \\cite{safetyPF,choi2020convex1}. There is a growing body of literature on the use of the Koopman operator for data-driven control, where the control dynamical system is lifted to the space of functions or observables using the Koopman operator \\cite{kaiser2021data,huang2018feedback,arbabi2018data,ma2019optimal,korda2020optimal1,mauroy2013spectral,huang2020data}. However, in this paper, we lift the control system using the P-F operator, which is dual to the Koopman operator. 
Unlike Koopman-based lifting, P-F lifting of the control dynamical system leads to a convex formulation of the OCP \\cite{raghunathan2013optimal,vaidya2010nonlinear}. \n\nThe main contributions of this paper are stated as follows. First, we provide a convex formulation of the infinite-horizon OCP with discounted cost involving continuous-time dynamics. We consider OCP problems with both positive and negative discount factors. For the continuous-time OCP, the negative (positive) discount corresponds to the case where the cost function is exponentially decreasing (increasing) with time. There is extensive literature on the OCP with a negative discount factor \\cite{modares2014linear,modares2016optimal,ghosh1993optimal}. One of the main contributions of this paper is to provide a condition for the existence of a solution to the OCP with a positive discount. The condition arises in the form of a stronger notion of almost everywhere exponential stability \\cite{Vaidya_TAC}. Unlike \\cite{vaidyaocpconvex}, our computational framework relies on a polynomial basis for the approximation of the linear Koopman operator using the generator Extended Dynamic Mode Decomposition (gEDMD) algorithm \\cite{klus2020data}. Hence, we employ sum-of-squares (SOS) optimization methods for solving a finite-dimensional optimization problem. The finite-dimensional approximation of the infinite-dimensional optimization problem is written as a semi-definite program (SDP). An SOS-based optimization toolbox is then used to solve the SDP in a numerically efficient manner. Existing rigorous results for the convergence analysis of the Koopman operator approximation, in the limit as the amount of data and the number of basis functions go to infinity, are leveraged to provide a convergence analysis of the data-driven optimization problem. Simulation results are presented to verify the efficacy of the developed framework. The results presented in this paper are an extension of our conference paper \\cite{moyalan2021sum}. 
In particular, the data-driven framework for optimal control design and the theorems involving the OCP with discounted cost are new to this paper. \n\nThe rest of the paper is structured as follows. In Section \\ref{sec:background}, we introduce preliminaries and notations used throughout the paper. The main results of the paper on the convex formulation of the OCP are presented in Section \\ref{section_main}. The SOS and Koopman-based computation framework for the data-driven approximation of the convex optimization problem is discussed in Section \\ref{sec:SOS Control}. Simulation results are presented in Section \\ref{sec:examples}. Conclusions are presented in Section \\ref{sec:conclusion}. \n\n\n\n\n\n\n\n\n\n\\section{Preliminaries and Notations}\\label{sec:background}\n\\noindent{\\bf Notation:} ${\\mathbb R}^n$ denotes the $n$-dimensional Euclidean space and ${\\mathbb R}^n_{\\geq 0}$ is the positive orthant. Given ${\\mathbf X}\\subseteq {\\mathbb R}^n$ and ${\\mathbf Y}\\subseteq {\\mathbb R}^m$, let ${\\cal L}_1({\\mathbf X},{\\mathbf Y}), {\\cal L}_\\infty({\\mathbf X},{\\mathbf Y})$, and ${\\cal C}^k({\\mathbf X},{\\mathbf Y})$ denote the space of all real-valued integrable functions, essentially bounded functions, and the space of $k$ times continuously differentiable functions mapping from ${\\mathbf X}$ to ${\\mathbf Y}$, respectively. If the space ${\\mathbf Y}$ is not specified then it is understood that the underlying space is ${\\mathbb R}$. ${\\cal B}({\\mathbf X})$ denotes the Borel $\\sigma$-algebra on ${\\mathbf X}$ and ${\\cal M}({\\mathbf X})$ is the vector space of real-valued measures on ${\\cal B}({\\mathbf X})$. ${\\mathbf s}_t({\\mathbf x})$ denotes the solution of the dynamical system $\\dot {\\mathbf x}={\\bf f}({\\mathbf x})$ starting from the initial condition ${\\mathbf x}$. 
We will use the notation ${\\cal N}_\\delta$ to denote the $\\delta$-neighborhood of the equilibrium point at the origin for some fixed $\\delta>0$ and ${\\mathbf X}_1:={\\mathbf X}\\setminus {\\cal N}_\\delta$. \n\\subsection{Koopman and Perron-Frobenius Operators}\nConsider a dynamical system\n\\begin{align}\\dot {\\mathbf x}={\\mathbf f}({\\mathbf x}),\\;\\;{\\mathbf x}\\in {\\mathbf X}\\subseteq \\mathbb{R}^n \\label{dyn_sys}\\end{align} where the vector field is assumed to be ${\\bf f}({\\mathbf x})\\in {\\cal C}^1({\\mathbf X},{\\mathbb R}^n)$. There are two different ways of linearly lifting the finite-dimensional nonlinear dynamics from the state space to an infinite-dimensional space of functions: the Koopman and Perron-Frobenius operators. \n\n\\noindent{\\bf Koopman Operator:} $\\mathbb{K}_t :{\\cal L}_\\infty({\\mathbf X})\\to {\\cal L}_\\infty({\\mathbf X})$ for the dynamical system~\\eqref{dyn_sys} is defined as \n\\[[\\mathbb{K}_t \\varphi]({\\mathbf x})=\\varphi({\\mathbf s}_t({\\mathbf x})),\\;\\;\\varphi\\in {\\cal L}_\\infty. \n\\]\nThe infinitesimal generator for the Koopman operator is\n\\begin{equation}\n\\lim_{t\\to 0}\\frac{\\mathbb{K}_t\\varphi-\\varphi}{t}={\\mathbf f}({\\mathbf x})\\cdot \\nabla \\varphi({\\mathbf x})=:{\\cal K}_{{\\mathbf f}} \\varphi. \\label{K_generator}\n\\end{equation}\n\\noindent{\\bf Perron-Frobenius Operator:} $\\mathbb{P}_t:{\\cal L}_1({\\mathbf X})\\to {\\cal L}_1({\\mathbf X})$ for system~\\eqref{dyn_sys} is defined as\n\\begin{equation}[\\mathbb{P}_t \\psi]({\\mathbf x})=\\psi({\\mathbf s}_{-t}({\\mathbf x}))\\left|\\frac{\\partial {\\mathbf s}_{-t}({\\mathbf x}) }{\\partial {\\mathbf x}}\\right|,\\;\\;\\psi\\in {\\cal L}_1,\\label{pf-operator} \n\\end{equation}\nwhere $\\left|\\cdot \\right|$ stands for the determinant. 
The infinitesimal generator for the P-F operator is given by \n\\begin{align}\n\\lim_{t\\to 0}\\frac{\\mathbb{P}_t\\psi-\\psi}{t}=-\\nabla \\cdot ({\\mathbf f}({\\mathbf x}) \\psi({\\mathbf x}))=: {\\cal P}_{{\\mathbf f}}\\psi. \\label{PF_generator}\n\\end{align}\nThese two operators are dual to each other, where the duality is expressed as\n\\begin{align*}\n\\int_{\\mathbb{R}^n}[\\mathbb{K}_t \\varphi]({\\mathbf x})\\psi({\\mathbf x})d{\\mathbf x}=\n\\int_{\\mathbb{R}^n}[\\mathbb{P}_t \\psi]({\\mathbf x})\\varphi({\\mathbf x})d{\\mathbf x}.\n\\end{align*}\n\n\n\n\n\n\n\n\n\\subsection{Sum of squares} \\label{sec:SOS}\nSum of squares (SOS) optimization \\cite{Topcu_10,Pablo_03_SDP,parrilo2003minimizing,Pablo_2000} is a relaxation of positive polynomial constraints appearing in polynomial optimization problems. An SOS polynomial is one that can be written as a finite nonnegative combination of squared polynomials, i.e., $p = \\sum_{i=1}^\\ell d_i p_i^2$, where $p$ is an SOS polynomial, $p_i$ are polynomials, and $d_i$ are nonnegative coefficients. Hence, being SOS is a sufficient condition for the nonnegativity of a polynomial, and the SOS relaxation provides a lower bound for polynomial minimization problems. Using the SOS relaxation, a large class of polynomial optimization problems with positivity constraints can be formulated as the SOS optimization\n\\begin{align} \\label{eq:SOSOPT}\n\\begin{split}\n \\min_{{\\mathbf d}} \\, \\mathbf{w}^\\top \\mathbf{d} \\,\\,\\, \\mathrm{s.t.} \\,\\,\\, p_s({\\mathbf x},{\\mathbf d}) \\in \\Sigma[{\\mathbf x}], \\, p_e({\\mathbf x};{\\mathbf d}) = 0,\n\\end{split}\n\\end{align}\nwhere $\\Sigma[{\\mathbf x}]$ denotes the SOS set, $\\mathbf{w}$ is a vector of weighting coefficients, and $p_s$ and $p_e$ are polynomials parametrized by coefficients ${\\mathbf d}$. The problem in~\\eqref{eq:SOSOPT} can be translated into a semidefinite program (SDP)~\\cite{Pablo_03_SDP, Laurent_2009}. 
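As a concrete illustration of an SOS certificate, the following minimal sketch (our own toy example, not taken from the paper; the names `p`, `z`, and `Q` are ours) verifies that $p(x)=x^4+2x^2+1$ is SOS by exhibiting a positive semidefinite Gram matrix $Q$ with $p(x)=z(x)^\\top Q z(x)$ for the monomial vector $z(x)=[1,x,x^2]^\\top$; SOS solvers search for such a $Q$ via SDP.

```python
# Minimal sketch (our own example, not the paper's): certify that
# p(x) = x^4 + 2x^2 + 1 is a sum of squares by exhibiting a Gram matrix.
# With the monomial vector z(x) = [1, x, x^2], choosing Q = L L^T with
# L = [1, 0, 1]^T gives p(x) = z(x)^T Q z(x) = (1 + x^2)^2,
# and Q is positive semidefinite by construction.

def p(x):
    return x**4 + 2 * x**2 + 1

L = [1.0, 0.0, 1.0]                      # factor, so Q = L L^T is PSD
Q = [[L[i] * L[j] for j in range(3)] for i in range(3)]

def gram_form(x):
    z = [1.0, x, x**2]                   # monomial basis vector
    return sum(z[i] * Q[i][j] * z[j] for i in range(3) for j in range(3))

def sos_form(x):
    z = [1.0, x, x**2]
    return (sum(L[i] * z[i] for i in range(3))) ** 2   # (1 + x^2)^2

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(gram_form(x) - p(x)) < 1e-9      # z^T Q z reproduces p
    assert abs(sos_form(x) - p(x)) < 1e-9       # p is an explicit square
```

Here $Q$ is rank one by construction; in general an SDP solver returns some PSD Gram matrix, and any factorization $Q=L L^\\top$ then yields an explicit SOS decomposition.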
There are readily available SOS optimization packages such as SOSTOOLS~\\cite{sostools_Parrilo} and SOSOPT~\\cite{Seiler_2013_SOSOPT} for solving~\\eqref{eq:SOSOPT}.\n\n\n\\subsection{Almost everywhere uniform stability and Stabilization}\n\nThis section derives results on a stronger notion of stability used in formulating the optimal control problem with discounted cost. We first present the notion of almost everywhere (a.e.) uniform stability as introduced in \\cite{rajaram2010stability}. In the rest of the paper, we will use the following definitions.\n\n\n\\begin{definition}[a.e. uniform stability]\\label{definitiona.euniform}\nThe equilibrium point of \\eqref{dyn_sys} is said to be a.e. uniformly stable w.r.t. measure $\\mu\\in {\\cal M}({\\mathbf X})$ if, for every given $\\epsilon>0$, there exists a time $T(\\epsilon)$ such that\n\\begin{align}\n\\int_{T(\\epsilon)}^\\infty \\mu (B_t)dt<\\epsilon\\label{eq_aeunifrom}\n\\end{align}\nwhere $B_t:=\\{{\\mathbf x} \\in {\\mathbf X}: {\\mathbf s}_t({\\mathbf x})\\in B\\}$ for any set $B\\in {\\cal B}({\\mathbf X}_1)$. \n\\end{definition}\nThe above stability definition essentially means that, given any set $B$ not containing the origin, the measure of the set of initial conditions whose trajectories are in $B$ can be made arbitrarily small after a sufficiently long time. Note that the above definition of a.e. uniform stability is stronger than the almost everywhere stability notion as introduced in \\cite{Rantzer01} (refer to \\cite{rajaram2010stability} for the proof). The following definition of a.e. exponential stability, introduced here, is stronger than Definition \\ref{definitiona.euniform} and is a continuous-time counterpart of the discrete-time definition studied in \\cite{Vaidya_TAC}.\n\n\\begin{definition}[a.e. exponential stability]\n\\label{definition_aeexponential}\nThe equilibrium point is said to be almost everywhere exponentially stable w.r.t. 
measure $\\mu$\nwith rate $\\gamma>0$ if there exists a constant $M$ such that \n\\begin{align}\n\\mu(B_t)\\leq M e^{-\\gamma t}\n\\label{a.eexponential}\n\\end{align}\nwhere $B_t:=\\{{\\mathbf x}\\in {\\mathbb R}^n: {\\mathbf s}_t({\\mathbf x})\\in B\\}$ for any set $B\\in {\\cal B}({\\mathbf X}_1)$.\n\\end{definition}\n\n\n\n\n\n\n In the following, we state theorems providing necessary and sufficient condition for a.e. uniform and a.e uniform exponential stability. These results are proved under the following assumption on the equilibrium point of (\\ref{dyn_sys}). \n\\begin{assumption}\\label{assume_localstability} We assume that ${\\mathbf x}=0$ is locally stable equilibrium point for the system (\\ref{dyn_sys}) with local domain of attraction denoted by $\\mathcal{D}$ and let $0\\in {\\cal N}_\\delta\\subset \\mathcal{D}$.\n\\end{assumption}\n\\begin{theorem}\\label{theorem_necc_suffuniform}\nThe equilibrium point ${\\mathbf x}=0$ for the system (\\ref{dyn_sys}) satisfying Assumption \\ref{assume_localstability} is almost everywhere uniform stable w.r.t. measure $\\mu$ if and only if there exists a density function $\\rho({\\mathbf x})\\in{\\cal C}^1({\\mathbf X}\\setminus \\{0\\},{\\mathbb R}^+)$ which is integrable on ${\\mathbf X}_1$ and satisfies \n\\begin{align}\n\\nabla\\cdot ({\\bf f}\\rho )=h_0\\label{steady_pde_uniform}\n\\end{align}\nwhere $h_0\\in {\\cal C}^1({\\mathbf X})$ is the density function corresponding to the measure $\\mu$.\n\\end{theorem}\nRefer to \\cite[Theorem 13]{rajaram2010stability} for the proof.\n\n\n\n\n\n\n\n\n\\section{Convex Formulation of Optimal Control Problem}\\label{section_main}\nIn this section we briefly summarize the main results from \\cite{vaidyaocpconvex} on the convex formulation of the optimal control problem. 
Consider a control affine system of the form \n\\begin{align} \\label{cont_syst1}\n\\dot {\\mathbf x}=\\bar {\\bf f}({\\mathbf x})+{\\bf g}({\\mathbf x})\\bar {\\mathbf u},\\;\\;{\\mathbf x} \\in {\\mathbf X}\\subseteq{\\mathbb R}^n\n\\end{align}\nwhere ${\\mathbf x}$ is the state, $\\bar {\\mathbf u}=[\\bar u_1,\\ldots,\\bar u_m]^\\top\\in \\mathbb{R}^m$ is the control input, and ${\\mathbf g}({\\mathbf x})=({\\mathbf g}_1({\\mathbf x}),\\ldots,{\\mathbf g}_m({\\mathbf x}))$. All the vector fields are assumed to belong to ${\\cal C}^1({\\mathbf X},{\\mathbb R}^n)$.\n\n\\begin{remark}\nThe affine control assumption for a dynamical control system is not restrictive, as any non-affine dynamical control system can be converted to control-affine form by extending the state space. In particular, consider the control dynamical system of the form\n\\[\\dot {\\mathbf x}={\\mathbf f}({\\mathbf x},{\\mathbf u})\\]\nthen we can define ${\\mathbf u}$ as a new state and introduce $\\tilde {\\mathbf u}$ as another control input to write the above system as the following input-affine control system\n\\begin{align}\n&\\dot {\\mathbf x}={\\mathbf f}({\\mathbf x},{\\mathbf u})\\nonumber,\\;\\;\\;\\dot {\\mathbf u}=\\tilde {\\mathbf u}\n\\end{align}\n\\end{remark}\nThe following assumption is made on (\\ref{cont_syst1}).\n\\begin{assumption}\\label{assume_localstable}\nWe assume that the linearization of the nonlinear system dynamics (\\ref{cont_syst1}) at the origin is stabilizable, i.e., the pair $(\\frac{\\partial \\bar {\\bf f}}{\\partial {\\mathbf x}}(0),{\\mathbf g}(0))$ is stabilizable. \n\\end{assumption}\nUsing the above stabilizability assumption, we can design a locally stabilizing controller using data. The detailed procedure for the design of such a controller is given in Section \\ref{sec:SOS Control}. Let ${\\mathbf u}_\\ell$ be the locally stabilizing controller. 
Defining ${\\bf f}({\\mathbf x}):=\\bar {\\bf f}({\\mathbf x})+{\\mathbf g}({\\mathbf x}) {\\mathbf u}_\\ell$ and ${\\mathbf u}=\\bar {\\mathbf u}-{\\mathbf u}_\\ell$, we can rewrite control system (\\ref{cont_syst1}) as\n\\begin{align}\n\\dot {\\mathbf x}={\\bf f}({\\mathbf x})+{\\bf g}({\\mathbf x}) {\\mathbf u}.\\label{cont_syst}\n\\end{align}\nThe following is valid for the above dynamical system. With the control input ${\\mathbf u}=0$, the origin of system (\\ref{cont_syst}) is almost sure asymptotically stable locally in small neighborhood $\\mathcal{D}$ of the origin such that ${ B}_\\delta \\subset {\\cal D}$.\n\n\n\n\n\n\n\nConsider the discounted cost OCP of the form\n\\begin{align}\nJ^\\star(\\mu_0)\\!\\!=\\!&\\inf_{\\mathbf u}\\!\\!\\int_{{\\mathbf X}_1}\\!\\!\\left[\\!\\int_0^\\infty\\!\\!\\! e^{\\gamma t}(q({\\mathbf x}(t))+ \\beta {\\mathbf u}(t)^\\top {\\mathbf R} {\\mathbf u}(t)) \\;dt\\right] d\\mu_0\\nonumber\\\\\n&{\\rm subject\\;to\\;(\\ref{cont_syst})}\\label{cost_function}\n\\end{align}\nwhere $\\gamma\\in {\\mathbb R}$. The existing literature on OCP with discounted cost address the case where $\\gamma$ is negative, i.e., negative discount factor. In this paper, with the stronger notion of a.e. uniform stability with geometric decay, we can address the case of cost with a positive discount. 
Note that the cost function is a function of the initial measure $\\mu_0$, and this dependency on $\\mu_0$ can be explained as follows.\nThe cost function can be written as \n\\begin{align}J(\\mu_0)=\\int_{{\\mathbf X}_1}V({\\mathbf x})d\\mu_0({\\mathbf x})\\label{costmu}\n\\end{align}\nwhere $V({\\mathbf x})$ can be written as \n\\[V({\\mathbf x})=\\int_0^\\infty e^{\\gamma t}(q({\\mathbf x}(t))+\\beta {\\mathbf u}^\\top (t){\\mathbf R} {\\mathbf u}(t))dt\\]\nwith ${\\mathbf x}(\\cdot)$ being a trajectory with initial condition ${\\mathbf x}(0)={\\mathbf x}$. While $V({\\mathbf x})$ can be recognized as the familiar cost function used in the formulation of the OCP in the primal domain, the cost function $J(\\mu_0)$ is minimized over the set of initial conditions distributed according to the initial measure $\\mu_0$. In the rest of the paper we assume that the initial measure $\\mu_0$ is equivalent to the Lebesgue measure with density function $h_0({\\mathbf x})>0$, where $h_0\\in {\\cal L}_1({\\mathbf X},{\\mathbb R}_{\\geq 0})\\cap {\\cal C}^1({\\mathbb R}^n)$. \nWe make the following assumption on the OCP. \n\\begin{assumption}\\label{assumption_onocp}\n We assume that the state cost function $q: {\\mathbb R}^n\\to {\\mathbb R}_{\\geq 0}$ is zero at the origin and uniformly bounded away from zero outside the neighborhood ${\\cal N}_\\delta$, and that ${\\mathbf R}>0$. Furthermore, there exists a feedback control for which the cost function in (\\ref{cost_function}) is finite, and the optimal control is feedback in nature, i.e., ${\\mathbf u}^\\star={\\mathbf k}^\\star({\\mathbf x})$ with the function ${\\mathbf k}^\\star$\n being in ${\\cal C}^1({\\mathbf X},{\\mathbb R}^m)$. 
\n\\end{assumption}\nWith the assumed feedback form of the optimal control input, the OCP can be written as \n{\\small\n\\begin{eqnarray}\n \\inf\\limits_{{\\mathbf k}\\in {\\cal C}^1({\\mathbf X})} &\\int_{{\\mathbf X}_1}\\left[\\int_0^\\infty e^{\\gamma t}(q({\\mathbf x}(t))+ \\beta {\\mathbf k}({\\mathbf x}(t))^\\top {\\mathbf R} {\\mathbf k}({\\mathbf x}(t)))\\;dt\\right] d\\mu_0\\nonumber\\\\\n {\\rm s.t.}&\\dot {\\mathbf x}={\\bf f}({\\mathbf x})+{\\mathbf g}({\\mathbf x}){\\mathbf k}({\\mathbf x})\n\\label{ocp_main_discounted}\n\\end{eqnarray}\n}\nThe following is the main theorem on the OCP with discounted cost function.\n\n\\begin{theorem}\\label{theorem_maingeometric}\nConsider the OCP (\\ref{ocp_main_discounted}) with discount factor $\\gamma\\leq 0$ and assume that the cost function and optimal control satisfy Assumption \\ref{assumption_onocp}. Then the OCP (\\ref{ocp_main_discounted}) can be written as the following infinite dimensional convex optimization problem \n\\begin{eqnarray}\nJ^\\star(\\mu_0)&=&\\inf_{\\rho\\in {\\cal S},\\bar {\\boldsymbol \\rho}\\in {\\cal C}^1({\\mathbf X}_1)} \\;\\;\\; \\int_{{\\mathbf X}_1} q({\\mathbf x})\\rho({\\mathbf x})+\\beta\\frac{\\bar {\\boldsymbol \\rho}({\\mathbf x})^\\top {\\mathbf R}\\bar {\\boldsymbol \\rho}({\\mathbf x})}{\\rho} d{\\mathbf x}\\nonumber\\\\\n{\\rm s.t}.&&\\nabla\\cdot ({\\bf f}\\rho +{\\mathbf g}\\bar {\\boldsymbol \\rho})=\\gamma \\rho+ h_0\n\\label{eqn_ocpdiscount_L2}\n\\end{eqnarray}\nwhere $\\bar {\\boldsymbol \\rho}=(\\bar \\rho_1,\\ldots, \\bar\\rho_m)$ and ${\\cal S}:={\\cal L}_1({\\mathbf X}_1)\\cap {\\cal C}^1({\\mathbf X}_1,{\\mathbb R}_{\\geq 0})$. 
The optimal feedback control input is recovered from the solution of the above optimization problem as \n\\begin{align}\n{\\mathbf k}^\\star({\\mathbf x})=\\frac{\\bar {\\boldsymbol \\rho}^\\star({\\mathbf x})}{\\rho^\\star({\\mathbf x})}\\label{feedback_input}.\n\\end{align}\nFurthermore, if $\\gamma=0$, then the optimal control ${\\mathbf k}^\\star({\\mathbf x})$ is a.e. uniformly stabilizing w.r.t. measure $\\mu_0$.\n\\end{theorem}\n\nThe proof of Theorem \\ref{theorem_maingeometric} is given in the Appendix.\n\n\n\nNext, we consider the discounted cost OCP with an ${\\cal L}_1$ norm on the control term \n\\begin{eqnarray}\n \\inf\\limits_{{\\mathbf k}\\in {\\cal C}^1({\\mathbf X})} &\\int_{{\\mathbf X}_1}\\left[\\int_0^\\infty e^{\\gamma t}(q({\\mathbf x}(t))+ \\beta \\|{\\mathbf k}({\\mathbf x}(t))\\|_1)\\;dt\\right] d\\mu_0({\\mathbf x})\\nonumber\\\\\n {\\rm s.t.}&\\dot {\\mathbf x}={\\bf f}({\\mathbf x})+{\\mathbf g}({\\mathbf x}){\\mathbf k}({\\mathbf x}).\n\\label{ocp_main_discounted1}\n\\end{eqnarray}\nWe make the following assumption on the nature of the optimal control for the ${\\cal L}_1$-norm OCP (\\ref{ocp_main_discounted1}). \n\\begin{assumption}\\label{assume_OCP1}\nWe assume that the state cost function $q: {\\mathbb R}^n\\to {\\mathbb R}_{\\geq 0}$ is zero at the origin and uniformly bounded away from zero outside the neighborhood ${\\cal N}_\\delta$, and that ${\\mathbf R}>0$. Furthermore, there exists a feedback control input for which the cost function in \\eqref{ocp_main_discounted1} is finite. In addition, the optimal control is feedback in nature, i.e., ${\\mathbf u}^\\star={\\mathbf k}^\\star({\\mathbf x})$, with the function ${\\mathbf k}^\\star$ assumed to be in ${\\cal C}^1({\\mathbf X},{\\mathbb R}^m)$. \n\\end{assumption}\n\\begin{theorem}\\label{theorem_maingeometric1}\nConsider the OCP (\\ref{ocp_main_discounted1}) with discount factor $\\gamma\\leq 0$ and assume that the cost function and optimal control satisfy Assumption \\ref{assume_OCP1}. 
Then the OCP (\\ref{ocp_main_discounted1}) can be written as following infinite dimensional convex optimization problem \n\\begin{eqnarray}\nJ^\\star(\\mu_0)&=&\\inf_{\\rho\\in {\\cal S},\\bar {\\boldsymbol \\rho}\\in {\\cal C}^1({\\mathbf X}_1)} \\;\\;\\; \\int_{{\\mathbf X}_1} q({\\mathbf x})\\rho({\\mathbf x})+\\beta\\|\\bar {\\boldsymbol \\rho}({\\mathbf x})\\|_1 d{\\mathbf x}\\nonumber\\\\\n{\\rm s.t}.&&\\nabla\\cdot ({\\bf f}\\rho +{\\mathbf g}\\bar {\\boldsymbol \\rho})=\\gamma \\rho+ h_0\n\\label{eqn_ocpdiscount}\n\\end{eqnarray}\nwhere $\\bar {\\boldsymbol \\rho}=(\\bar\\rho_1,\\ldots, \\bar\\rho_m)$ and ${\\cal S}:={\\cal L}_1({\\mathbf X}_1)\\cap {\\cal C}^1({\\mathbf X}_1,{\\mathbb R}_{\\geq 0})$. The optimal feedback control input is recovered from the solution of the above optimization problem as \n\\begin{align}\n{\\mathbf k}^\\star({\\mathbf x})=\\frac{\\bar {\\boldsymbol \\rho}^\\star({\\mathbf x})}{\\rho^\\star({\\mathbf x})}.\n\\end{align}\nFurthermore, if $\\gamma=0$, then optimal control ${\\mathbf k}({\\mathbf x})$ is a.e. uniformly stabilizing w.r.t. measure $\\mu_0$.\n\\end{theorem}\n\n\n\\begin{proof}\nThe proof of Theorem \\ref{theorem_maingeometric1} follows along similar lines to the proof of Theorem \\ref{theorem_maingeometric}. \n\\end{proof}\n\n\\begin{remark}\\label{remark_singularity}\nIt is important to emphasize that the optimal feedback controller with discount factor $\\gamma= 0$ is stabilizing in almost everywhere sense. This is analogous to the optimal control design in the primal formulation. The optimal cost function also serves as a Lyapunov function, thereby ensuring the stability of the feedback control system. In our proposed dual setting, the optimal density function serves as a.e. stability certificate for the case of discount factor $\\gamma= 0$. However, due to the dual nature of the Lyapunov function and density function \\cite{Vaidya_TAC,Rantzer01}, the density function has a singularity at the origin. 
Because of this singularity at the origin, the cost function is evaluated on ${\\mathbf X}_1$, excluding a small region around the origin. Hence it may become necessary to design a locally stabilizing or locally optimal controller. The existence of such a locally stabilizing controller is ensured following Assumption \\ref{assume_localstable}. The local controller, say ${\\mathbf k}_\\ell$, can be blended with the global control ${\\mathbf k}^\\star$ using the following formula \\cite{rantzer2001smooth}: \n\n\\begin{equation}\n u({\\mathbf x}) = \\frac{\\rho_L}{\\rho_L+\\rho_N}{\\mathbf k}_\\ell({\\mathbf x}) + \\frac{\\rho_N}{\\rho_L+\\rho_N}{\\mathbf k}^\\star({\\mathbf x}) \\nonumber\n\\end{equation}\n \n\\begin{equation}\n \\rho_L({\\mathbf x}) = \\max\\{({\\mathbf x}^\\top P {\\mathbf x})^{-3}-\\Delta,0\\} \\nonumber\n\\end{equation}\nwhere $\\rho_N$ is the density function obtained from the optimization problem and the matrix $P > 0$ defines a local control Lyapunov function. The parameter $\\Delta$ determines the region of operation for the local controller. \n\\end{remark}\n\n\n\nThe optimal control results involving ${\\cal L}_2$ and ${\\cal L}_1$ norms on the control input with a positive discount factor, i.e., $\\gamma>0$, are proved under the following assumption. \n\n\\begin{assumption}\\label{assumption_ocppositivediscount}\nWe assume that the state cost function $q: {\\mathbb R}^n\\to {\\mathbb R}_{\\geq 0}$ is zero at the origin and uniformly bounded away from zero outside the neighborhood ${\\cal N}_\\delta$, and that ${\\mathbf R}>0$. Furthermore, there exists a feedback control for which the cost function in (\\ref{cost_function}) is finite, and the optimal control is feedback in nature, i.e., ${\\mathbf u}^\\star={\\mathbf k}^\\star({\\mathbf x})$ with the function ${\\mathbf k}^\\star$\n being in ${\\cal C}^1({\\mathbf X},{\\mathbb R}^m)$. 
Furthermore, the feedback controller is assumed to be almost everywhere exponentially stabilizing (Definition \\ref{definition_aeexponential}) with decay rate $\\gamma'>\\gamma>0$.\n\\end{assumption}\nNote that Assumption \\ref{assumption_ocppositivediscount} is the same as Assumption \\ref{assumption_onocp} except for the additional requirement that the feedback controller is a.e. exponentially stabilizing with decay rate strictly larger than $\\gamma$.\n\n\\begin{theorem}\\label{theorem_maingeometric_positivediscount}\nConsider the OCP (\\ref{ocp_main_discounted}) with discount factor $\\gamma> 0$ and assume that the cost function and optimal control satisfy Assumption \\ref{assumption_ocppositivediscount}. Then the OCP (\\ref{ocp_main_discounted}) can be written as the following infinite-dimensional convex optimization problem \n\\begin{eqnarray}\nJ^\\star(\\mu_0)&=&\\inf_{\\rho\\in {\\cal S},\\bar {\\boldsymbol \\rho}\\in {\\cal C}^1({\\mathbf X}_1)}\\int_{{\\mathbf X}_1} q({\\mathbf x})\\rho({\\mathbf x})+\\beta\\frac{\\bar {\\boldsymbol \\rho}({\\mathbf x})^\\top {\\mathbf R}\\bar {\\boldsymbol \\rho}({\\mathbf x})}{\\rho} d{\\mathbf x}\\nonumber\\\\\n{\\rm s.t.}&&\\nabla\\cdot ({\\bf f}\\rho +{\\mathbf g}\\bar {\\boldsymbol \\rho})=\\gamma \\rho+ h_0\n\\label{eqn_ocpdiscount_L2_positive}\n\\end{eqnarray}\nwhere $\\bar {\\boldsymbol \\rho}=(\\bar \\rho_1,\\ldots, \\bar\\rho_m)$ and ${\\cal S}:={\\cal L}_1({\\mathbf X}_1)\\cap {\\cal C}^1({\\mathbf X}_1,{\\mathbb R}_{\\geq 0})$. The optimal feedback control input is recovered from the solution of the above optimization problem as \n\\begin{align}\n{\\mathbf k}^\\star({\\mathbf x})=\\frac{\\bar {\\boldsymbol \\rho}^\\star({\\mathbf x})}{\\rho^\\star({\\mathbf x})}\\label{feedback_input_positive}.\n\\end{align}\n\\end{theorem}\nThe proof of this theorem is provided in the Appendix. 
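A minimal numeric check of the constraint in the above theorem, on a hand-built scalar example of ours (not from the paper): take $f(x)=x$, $g(x)=1$, and the stabilizing feedback $k(x)=-2x$, so the closed loop is $\\dot x=-x$. With $\\rho(x)=1/x^2$ and $\\bar\\rho(x)=k(x)\\rho(x)=-2/x$, the constraint $\\nabla\\cdot({\\bf f}\\rho+{\\mathbf g}\\bar\\rho)=\\gamma\\rho+h_0$ holds with $h_0(x)=(1-\\gamma)/x^2$, which remains a positive density precisely for discounts $\\gamma$ below the closed-loop decay rate, and the feedback is recovered as $\\bar\\rho/\\rho$.

```python
# Hand-built scalar check (ours, not the paper's) for the positive-discount
# constraint.  Take f(x) = x, g(x) = 1 and feedback k(x) = -2x, so the closed
# loop is dx/dt = -x.  With rho(x) = 1/x^2 and rbar(x) = k(x)*rho(x) = -2/x,
#   d/dx ( f*rho + g*rbar ) = 1/x^2 = gamma*rho(x) + h0(x)
# holds with h0(x) = (1 - gamma)/x^2, positive for gamma < 1, i.e. for
# discounts below the closed-loop decay rate.  Feedback: k = rbar / rho.

def rho(x):
    return 1.0 / x**2

def rbar(x):
    return -2.0 / x                  # rbar = k * rho with k(x) = -2x

def flux(x):                         # f*rho + g*rbar
    return x * rho(x) + 1.0 * rbar(x)

def div_flux(x, h=1e-6):             # central finite difference
    return (flux(x + h) - flux(x - h)) / (2 * h)

gamma = 0.5                          # positive discount, below decay rate
for x in [0.4, 1.0, 1.7, -2.0]:
    h0 = (1.0 - gamma) / x**2
    assert h0 > 0                                            # valid density
    assert abs(div_flux(x) - (gamma * rho(x) + h0)) < 1e-4   # PDE constraint
    assert abs(rbar(x) / rho(x) - (-2.0 * x)) < 1e-9         # k recovered
```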
A theorem analogous to Theorem \\ref{theorem_maingeometric1} can be stated and proved for the case involving the ${\\cal L}_1$ control norm with positive discount factor $\\gamma>0$. \\\\\n\n\\begin{remark}\nIn the above formulations of the OCPs, we did not explicitly impose constraints on the control input. Explicit constraints on the magnitude of the control input can be imposed in a convex manner as follows:\n\\begin{align}\n \\|{\\mathbf u}\\|_1\\leq M\\iff |\\bar \\rho_1({\\mathbf x})|^2+\\ldots+|\\bar \\rho_m({\\mathbf x})|^2\\leq M \\rho({\\mathbf x}),\\label{control_constraints}\n\\end{align}\nfor some positive constant $M$. To arrive at (\\ref{control_constraints}) we have used the formula for the optimal feedback control, i.e., (\\ref{feedback_input}), and the fact that $\\rho({\\mathbf x})>0$. \nThe above constraint is convex (a rotated second-order cone constraint) in the optimization variables $\\bar {\\boldsymbol \\rho}$ and $\\rho$. Hence, the OCP with explicit norm constraints on the control input remains convex after augmenting the optimization problem with the constraint in (\\ref{control_constraints}). \n\\end{remark}\n\n\\begin{remark}\nIn the above formulations of the OCP, we have assumed that the density function $h_0>0$, implying that the initial measure $\\mu_0$ is equivalent to the Lebesgue measure. This assumption is necessary as $h_0>0$ guarantees that $\\rho>0$ (refer to Eq. (\\ref{definingrho})) and hence the feedback control input ${\\mathbf k}=\\frac{\\bar {\\boldsymbol \\rho}}{\\rho}$ is well defined. However, it is possible to relax this assumption and work with a density function $h_0\\geq 0$. This corresponds to the case where the initial measure $\\mu_0$ is absolutely continuous w.r.t. the Lebesgue measure but not equivalent to it. In order to ensure that the feedback control input is well defined when $\\mu_0$ is absolutely continuous w.r.t. 
the Lebesgue measure, we need to impose the following constraints on the control input\n\\[ |\\bar \\rho_k({\\mathbf x})|^2\\leq M \\rho({\\mathbf x}),\\;\\;k=1,\\ldots,m\\]\nfor some large constant $M$. The above constraints ensure that, for a.e. ${\\mathbf x}$, $\\rho({\\mathbf x})=0$ implies $\\bar \\rho_k({\\mathbf x})=0$ for $k=1,\\ldots, m$, so that the feedback control input is well defined. Working with an absolutely continuous initial measure $\\mu_0$, or equivalently $h_0\\geq 0$, corresponds to the case where optimality is guaranteed only for the set of initial conditions in the support of $\\mu_0$. \n\\end{remark}\nIn the following, we demonstrate how the dual formulation of the OCP works out for the special case of a scalar linear system. The main conclusion is that the optimal control obtained using the dual formulation matches the control obtained using the primal formulation of the OCP, namely the linear quadratic regulator problem, for a particular choice of $h_0({\\mathbf x})$. Note that the initial measure, or the density function $h_0({\\mathbf x})$, is unique to our dual formulation with no parallel in the primal formulation. 
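As a quick numerical sanity check of this claim (a minimal sketch, not part of the paper's development; it assumes unit control weight ${\\mathbf R}=1$ and the reduced problem $\\min_{p>0}\\, q\/p+(a+p)^2\/p$ obtained from the parameterization $\\rho(x)=1\/(p x^2)$), the gain produced by the dual formulation can be compared against the scalar Riccati solution:

```python
import math

def dual_gain_scalar(a, q):
    """Gain from the dual (density) formulation for xdot = a*x + u with
    cost q*x^2 + u^2: minimizing q/p + (a + p)**2/p over p > 0 gives
    p = sqrt(q + a**2) and the feedback u = -(a + p)*x."""
    p = math.sqrt(q + a * a)
    return -(a + p)

def lqr_gain_scalar(a, q):
    """Scalar LQR gain: the stabilizing root of the Riccati equation
    p**2 - 2*a*p - q = 0 is p = a + sqrt(a**2 + q), and u = -p*x."""
    p = a + math.sqrt(a * a + q)
    return -p

a, q = 1.0, 3.0
assert abs(dual_gain_scalar(a, q) - lqr_gain_scalar(a, q)) < 1e-12
assert abs(dual_gain_scalar(a, q) + 3.0) < 1e-12  # -(1 + sqrt(1 + 3)) = -3
```

Both routes give $u=-(a+\\sqrt{a^2+q})x$, illustrating that the dual formulation reproduces the primal (LQR) answer for this choice of $h_0$.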
\n\\begin{comment}\n\\begin{example}\nConsider a linear scalar control system \n\\[\\dot x=ax+u,\\] with $q(x)=q x^2$ and $h_0(x)=\\frac{1}{x^2}$, we then have \n\\begin{align*}\n\\inf_{\\rho,\\bar \\rho}&\\int_{X_1} qx^2\\rho(x)+\\frac{\\bar \\rho^2(x)}{\\rho(x)}dx\n\\\\\n&{\\rm s.t.}\\;\\;\\frac{d}{dx}\\left(ax \\rho+\\bar \\rho\\right)=\\frac{1}{x^2}\n\\end{align*}\nAssuming following parameterization for $\\rho(x)=\\frac{1}{px^{2\\beta}}$ and $\\bar \\rho(x)=\\frac{1}{\\lambda x^\\alpha}$, we get\n\\[\\frac{d}{dx}\\left(ax \\rho+\\bar \\rho\\right)=\\frac{1}{x^2}\\implies \\beta=1, \\alpha=1, \\lambda=\\frac{-p}{a+p}\\]\nSubstituting for $\\alpha, \\beta$, and $\\lambda$ in the minimization, we get\n\\[\n\\inf_{\\rho,\\bar \\rho}\\int_{X_1} qx^2\\rho(x)+\\frac{\\bar \\rho^2(x)}{\\rho(x)}dx\\propto\\min_{p} \\frac{q}{p}+\\frac{(a+p)^2}{p}.\n\\]\n\nSolving for $p$, we obtain, $p=\\sqrt{q+a^2}$, and hence\n\n\\[\\bar\\rho(x)=-\\frac{(a+p)}{p x},\\;\\;\\rho(x)=\\frac{1}{px^2}\\]\nwith feedback control\n\\begin{align}u=\\frac{\\bar \\rho(x)}{\\rho(x)}=kx=-(a+p)x=-(a+\\sqrt{a^2+q})x.\\label{control_linearscalar}\\end{align}\nThe above formula for feedback control matches the solution for OCP obtained using the primal formulation. In particular, the Riccati solution for the linear system in variable $d$ and feedback control can be written as \n\\[ d=a\\pm\\sqrt{a^2+q},\\;\\;u=-(a+\\sqrt{a^2+q})x\\]\nwhich matches with (\\ref{control_linearscalar})\n\\end{example}\n\\end{comment}\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{comment}\n\\section{Koopman and SOS-based Computation Framework for Optimal Control} \\label{sec:SOS Control}\nIn this section, we provide Koopman and SOS-based computational framework for the finite dimensional approximation of optimal control formulation involving ${\\cal L}_1$\/${\\cal L}_2$ costs. 
\n\\chen{switch the order of the $L_1, L_2$ problems in the following presentation}\nConsider the parameterization of $\\rho({\\mathbf x})$ and $\\bar{{\\boldsymbol \\rho}}({\\mathbf x})$ in~\\eqref{steady_pde_gdecay} as follows:\n\\begin{align}\n \\rho({\\mathbf x}) = \\frac{a({\\mathbf x})}{b({\\mathbf x})^{\\alpha}},\\;\\; \\bar{{\\boldsymbol \\rho}}({\\mathbf x}) = \\frac{{\\mathbf c}({\\mathbf x})}{b({\\mathbf x})^{\\alpha}}, \\label{rational_parametrization}\n\\end{align}\nwhere $a({\\mathbf x})$ and $b({\\mathbf x})$ are positive polynomials (positive at ${\\mathbf x} \\neq 0$) and ${\\mathbf c}({\\mathbf x}) = \\begin{bmatrix} c_1({\\mathbf x}),\\, \\ldots,\\, c_m({\\mathbf x})\\end{bmatrix}^\\top$. Here, $b({\\mathbf x})$ is a fixed positive polynomial and $\\alpha$ is a positive constant which is sufficiently large for integrability condition (\\cite{Prajna04}, Theorem~1). Using~\\eqref{rational_parametrization}, we can restate~\\eqref{steady_pde_gdecay} as \\chen{use $h$ or $h_0$?}\n\\begin{align} \\label{eq:parameterized stability}\n&\\nabla \\cdot ({\\mathbf f}\\rho + {\\mathbf g}\\bar{{\\boldsymbol \\rho}}) = \\gamma \\rho + h_0 \\nonumber\\\\\n\\Rightarrow &\\nabla \\cdot \\left[\\frac{{\\mathbf f} a + {\\mathbf g}{\\mathbf c}}{b^{\\alpha}} \\right] = \\frac{\\gamma a}{b^\\alpha} + h_0 \\nonumber\\\\\n\\Rightarrow &(1 + \\alpha) b \\nabla \\cdot ({\\mathbf f} a + {\\mathbf g}{\\mathbf c}) - \\alpha \\nabla \\cdot (b{\\mathbf f} a + b{\\mathbf g}{\\mathbf c}) - \\gamma a b \\geq 0.\n\\end{align}\nThe inequality in~\\eqref{eq:parameterized stability} is from the fact that $h$ is a positive function. 
Using~\\eqref{rational_parametrization} and~\\eqref{eq:parameterized stability}, the $\\mathcal{L}_1$ OCP problem in~\\eqref{eqn_ocpdiscount} can be rewritten as follows:\n\\begin{align}\\label{eqn_ocp3}\n\\begin{split}\n\\inf_{a({\\mathbf x}), {\\mathbf c}({\\mathbf x})} &\\;\\;\\; \\int_{{\\mathbf X}_1} \\frac{q({\\mathbf x}) a({\\mathbf x})}{b({\\mathbf x})^\\alpha}+\\frac{\\beta||{\\mathbf c}({\\mathbf x})||_1}{b({\\mathbf x})^\\alpha} d{\\mathbf x}\\\\\n {\\rm s.t}.& \\;\\;\\;\\mathrm{LHS~of}~\\eqref{eq:parameterized stability} \\geq 0,\\,\\, a({\\mathbf x}) \\geq 0.\n\\end{split}\n\\end{align}\nRecall that we exclude a small neighborhood of the origin $\\mathcal{B}_\\delta$ due to singularity of the density $\\rho$ at the origin. To guarantee the optimality in $\\mathcal{B}_\\delta$, we design an optimal linear quadratic regulator (LQR) controller for $\\mathcal{B}_\\delta$ using the identified linear dynamics. \n\nTaking one step further, we introduce dummy polynomials ${\\mathbf s}({\\mathbf x}) = [s_1({\\mathbf x}),\\ldots,s_m({\\mathbf x})]^\\top$ such that $s_j({\\mathbf x}) \\ge |c_j({\\mathbf x})|$, which leads to the following additional constraints to~\\eqref{eqn_ocp3}:\n\\begin{align} \\label{eq:const on sx}\n {\\mathbf s}({\\mathbf x}) - {\\mathbf c}({\\mathbf x}) \\geq 0,\\, {\\mathbf s}({\\mathbf x}) + {\\mathbf c}({\\mathbf x}) \\geq 0.\n\\end{align}\nThe objective function of the OCP in~\\eqref{eqn_ocp3} is rational polynomial functions. 
To make~\\eqref{eqn_ocp3} a polynomial optimization, we define a vector of polynomial basis functions\n\\begin{align} \\label{eq:basis functions}\n {\\boldsymbol \\Psi}({\\mathbf x}) = [\\psi_1({\\mathbf x}), \\ldots, \\psi_Q({\\mathbf x})]^\\top,\n\\end{align}\nand coefficient vectors ${\\boldsymbol{\\mathcal C}}_a$, ${\\boldsymbol{\\mathcal C}}_{c_j}$, and ${\\boldsymbol{\\mathcal C}}_{s_j}$ such that\n\\begin{align} \\label{eq:a&cj}\n a({\\mathbf x}) \\hspace{-0.03in}=\\hspace{-0.03in} {\\boldsymbol{\\mathcal C}}_a^\\top {\\boldsymbol \\Psi}({\\mathbf x}), c_j({\\mathbf x})\\hspace{-0.03in}=\\hspace{-0.03in}{\\boldsymbol{\\mathcal C}}_{c_j}^\\top {\\boldsymbol \\Psi}({\\mathbf x}), s_j({\\mathbf x})\\hspace{-0.03in}=\\hspace{-0.03in}{\\boldsymbol{\\mathcal C}}_{s_j}^\\top {\\boldsymbol \\Psi}({\\mathbf x}),\n\\end{align}\nfor $j=1,\\ldots,m$. Furthermore, we introduce ${\\boldsymbol{\\mathcal C}}_{ab}$, ${\\boldsymbol{\\mathcal C}}_{bc_1}$, \\ldots, ${\\boldsymbol{\\mathcal C}}_{bc_m}$ such that \n\\begin{align} \\label{eq:ab&bc}\n a({\\mathbf x})b({\\mathbf x}) = {\\boldsymbol{\\mathcal C}}_{ab}^\\top {\\boldsymbol \\Psi} ({\\mathbf x}), b({\\mathbf x}) c_j({\\mathbf x}) = {\\boldsymbol{\\mathcal C}}_{bc_j}^\\top {\\boldsymbol \\Psi} ({\\mathbf x}).\n\\end{align}\nUsing the parameterization in~\\eqref{eq:const on sx}--\\eqref{eq:ab&bc} and also treating positive constraints as sum of squares constraints denoted by $\\Sigma[{\\mathbf x}]$, \\eqref{eqn_ocp3} is expressed as a SOS problem given below:\n\\begin{align} \\label{eqn_ocp5}\n \\min_{\\underset{j=1,\\ldots,m}{{\\boldsymbol{\\mathcal C}}_a, {\\boldsymbol{\\mathcal C}}_{c_j}, {\\boldsymbol{\\mathcal C}}_{s_j}}} &\\,\\,\\, {\\mathbf d}_1^\\top {\\boldsymbol{\\mathcal C}}_a + \\beta \\textstyle{\\sum}_{j=1}^{m} {\\mathbf d}_2^\\top {\\boldsymbol{\\mathcal C}}_{s_j} \\nonumber\\\\\n {\\rm s.t}. 
&\\,\\,\\, \\mathrm{LHS~of}~\\eqref{eq:parameterized stability} \\in \\Sigma[{\\mathbf x}],\\,\\, a({\\mathbf x}) \\in \\Sigma[{\\mathbf x}],\\\\\n &\\,\\,\\, ({\\mathbf s}({\\mathbf x}) - {\\mathbf c}({\\mathbf x})) \\in \\Sigma[{\\mathbf x}],\\, ({\\mathbf s}({\\mathbf x}) + {\\mathbf c}({\\mathbf x})) \\in \\Sigma[{\\mathbf x}].\\nonumber\n\\end{align}\nwhere\n\\begin{align} \\label{eq:integral}\n {\\mathbf d}_1 = \\int_{{\\mathbf X}_1} \\frac{q({\\mathbf x}){\\boldsymbol \\Psi}({\\mathbf x})}{b({\\mathbf x})^\\alpha} d{\\mathbf x},\\,\\, {\\mathbf d}_2 = \\int_{{\\mathbf X}_1} \\frac{{\\boldsymbol \\Psi}({\\mathbf x})}{b({\\mathbf x})^\\alpha} d{\\mathbf x}.\n\\end{align}\n\n\n\n\n\n\nSimilarly, using the same parameterization in~\\eqref{rational_parametrization} and~\\eqref{eq:parameterized stability}, $\\mathcal{L}_2$ OCP in~\\eqref{eqn_ocpdiscount_L2} is restated as below:\n\\begin{align}\\label{eq:eqn_ocp6}\n\\begin{split}\n \\min_{a,{\\mathbf c}} &\\quad \\int_{{\\mathbf X}_1} \\frac{q({\\mathbf x})a({\\mathbf x})}{b({\\mathbf x})^\\alpha} + \\frac{{\\mathbf c}({\\mathbf x})^\\top {\\mathbf R} {\\mathbf c}({\\mathbf x})}{a({\\mathbf x})b({\\mathbf x})^\\alpha} dx\\\\\n \\mathrm{s.t.} &\\quad \\mathrm{LHS~of}~\\eqref{eq:parameterized stability} \\geq 0,\\,\\, a({\\mathbf x}) \\geq 0.\n\\end{split}\n\\end{align}\nSubsequently, we introduce a dummy variable $w({\\mathbf x})$ such that $\\frac{{\\mathbf c}({\\mathbf x})^\\top {\\mathbf R} {\\mathbf c}({\\mathbf x})}{a({\\mathbf x})} \\leq w({\\mathbf x})$ and apply Schur complement lemma on the new inequality constraint. 
Then, \\eqref{eq:eqn_ocp6} is reformulated as follows:\n\\begin{align}\\label{eq:eqn_ocp7}\n\\begin{split}\n \\min_{a,{\\mathbf c},w} &\\quad \\int_{{\\mathbf X}_1} \\frac{q({\\mathbf x})a({\\mathbf x})}{b({\\mathbf x})^\\alpha} + \\frac{w({\\mathbf x})}{b({\\mathbf x})^\\alpha} d{\\mathbf x}\\\\\n \\mathrm{s.t.} &\\quad \\mathrm{LHS~of}~\\eqref{eq:parameterized stability} \\geq 0,\\,\\, a({\\mathbf x}) \\geq 0,\\\\\n &\\quad {\\boldsymbol{\\mathcal M}}({\\mathbf x}) = \\begin{bmatrix} w({\\mathbf x}) & {\\mathbf c}({\\mathbf x})^\\top \\\\ {\\mathbf c}({\\mathbf x}) & a({\\mathbf x}){\\mathbf R}^{-1} \\end{bmatrix} \\succcurlyeq 0.\n\\end{split}\n\\end{align}\nNow we algebraically express the positive-definite polynomial matrix ${\\boldsymbol{\\mathcal M}}({\\mathbf x})\\succcurlyeq 0$ by using the following lemma: \n\\begin{lemma}[Positive semidefinite polynomial] \\label{lem:polyPSD}\nA $p \\times p$ matrix ${\\mathbf H}({\\mathbf x})$ whose entries are polynomials is positive semidefinite with respect to the monomial vector ${\\mathbf z}({\\mathbf x})$, \\textit{if and only if}, there exist ${\\mathbf D} \\succcurlyeq 0$ such that\n\\begin{align*}\n {\\mathbf H}({\\mathbf x}) = \\left({\\mathbf z}({\\mathbf x}) \\otimes {\\mathbf I}_p \\right)^\\top {\\mathbf D} \\left({\\mathbf z}({\\mathbf x}) \\otimes {\\mathbf I}_p\\right),\n\\end{align*}\nwhere $\\otimes$ denotes a Kronecker product (tensor product) and ${\\mathbf I}_p$ is an identity matrix with dimension $p$ \\cite{Scherer_2006}.\n\\end{lemma}\nFollowing Lemma~\\ref{lem:polyPSD}, ${\\boldsymbol{\\mathcal M}}({\\mathbf x})$ is PSD when there exists ${\\mathbf D} \\succcurlyeq 0$ such that ${\\boldsymbol{\\mathcal M}}({\\mathbf x})={\\mathbf H}({\\mathbf x})$ with ${\\mathbf z}({\\mathbf x})$ being a monomial vector with the maximum degree equal to $\\mathrm{floor}(\\mathrm{deg}({\\boldsymbol \\Psi}({\\mathbf x}))\/2)+1$, then ${\\boldsymbol{\\mathcal M}}({\\mathbf x})$. 
As a result, a SOS problem equivalent to~\\eqref{eq:eqn_ocp7} can be formulated as follows:\n\\begin{align} \\label{ocp_9}\n \\min_{\\underset{j=1,\\ldots,m}{{\\boldsymbol{\\mathcal C}}_a, {\\boldsymbol{\\mathcal C}}_w, {\\boldsymbol{\\mathcal C}}_{c_j}}} &\\,\\,\\, {\\mathbf d}_1^\\top {\\boldsymbol{\\mathcal C}}_a + {\\mathbf d}_2^\\top {\\boldsymbol{\\mathcal C}}_{w}\\nonumber\\\\\n \\mathrm{s.t.} &\\quad \\mathrm{LHS~of}~\\eqref{eq:parameterized stability} \\in \\Sigma[{\\mathbf x}],\\,\\, a({\\mathbf x}) \\in \\Sigma[{\\mathbf x}],\\\\\n &\\quad w({\\mathbf x}) - {\\mathbf H}_{11}({\\mathbf x}) = 0,\\, {\\mathbf c}({\\mathbf x}) - {\\mathbf H}_{12}({\\mathbf x}) = 0,\\nonumber\\\\\n &\\quad a({\\mathbf x}) {\\mathbf R}^{-1} - {\\mathbf H}_{22}({\\mathbf x}) = 0,\\, {\\mathbf D} \\succcurlyeq 0\\nonumber\n\\end{align}\nwhere ${\\mathbf H}_{ij}({\\mathbf x})$ denotes the ${ij}$th entry of~${\\mathbf H}({\\mathbf x})$; and $\\boldsymbol{\\mathcal{C}}_w$ is a coefficient vector of~$w({\\mathbf x})$ with respect to ${\\boldsymbol \\Psi}({\\mathbf x})$, i.e., $w({\\mathbf x})=\\boldsymbol{\\mathcal{C}}_w^\\top {\\boldsymbol \\Psi}({\\mathbf x})$.\n\n\n\\subsection{Data-driven OCPs using linear operator approximations} \\label{sec:data-driven}\nIn this section, we solve the OCPs in~\\eqref{eqn_ocp5} and~\\eqref{ocp_9} using time-series data without known explicit models ${\\mathbf f}$ and ${\\mathbf g}$. 
Especially, we leverage infinitesimal P-F generator approximation using time-series data given as below:\n\\begin{align} \\label{PF_gen_approx}\n-{\\cal P}_{\\mathbf f} \\psi \\hspace{-0.02in} = \\hspace{-0.02in} \\nabla\\cdot({\\mathbf f} \\psi) \\hspace{-0.02in} = \\hspace{-0.02in} {\\mathbf f} \\cdot \\nabla \\psi+\\nabla\\cdot {\\mathbf f} \\psi \\hspace{-0.02in} = \\hspace{-0.02in} {\\cal K}_{\\mathbf f} \\psi+\\nabla \\cdot {\\mathbf f} \\psi\n\\end{align}\nwhere $\\cal P_{\\mathbf f}$ and $\\cal K_{\\mathbf f}$ denote P-F and Koopman generators corresponding to a function ${\\mathbf f}$. We approximate the Koopman generator $\\cal K_{\\mathbf f}$ using the algorithm shown in~\\cite{Klus_2019}.\nThe brief sketch of the procedure is given as follows. First, we collect finite-time-series data by injecting different control inputs to the system ${\\mathbf f}$ and ${\\mathbf g}$: i) zero control inputs, ${\\mathbf u}=0$, and ii) unit step control inputs, ${\\mathbf u}=\\mathbf{e}_j$, where ${\\mathbf e}_j \\in \\mathbb{R}^m$ denotes unit vectors, i.e., $j$th entry of $\\mathbf{e}_j$ is 1, otherwise 0, into the data matrices:\n\\begin{align} \\label{eq:data_matrix}\n \\mathbf{X}_i = \\left [ \\mathbf{x}_1, \\ldots, \\mathbf{x}_{T_i} \\right ], \\,\\, \\dot{\\mathbf{X}}_i = \\left [ \\dot{\\mathbf{x}}_1, \\ldots, \\dot{\\mathbf{x}}_{T_i} \\right ],\n\\end{align}\nwith $i=0,1,\\ldots,m$ for zero and step control inputs where $T_i$ are the number of data points for $i$th input case. The samples in $\\mathbf{X}_i$ and $\\mathbf{\\dot{X}}_i$ can be collected from a concatenation of multiple experiment\/simulation trajectories without imposing any specific orders. Also, state derivatives $\\dot{{\\mathbf x}}$ can be numerically identified using algorithms in~\\cite{Xiao_2011,Na_1979}. Next, we construct \\textit{dictionary functions} ${\\boldsymbol \\Psi}({\\mathbf x})$ which constitutes polynomial basis vector as shown in~\\eqref{eq:basis functions}. 
The types of polynomial basis include monomials, or Legendre\/Hermite polynomials. Based on the choice of the dictionary functions, we calculate the derivative of ${\\boldsymbol \\Psi}({\\mathbf x})$ as below:\n\\begin{align} \\label{eq:basis functions time derivatives}\n\\dot{{\\boldsymbol \\Psi}}({\\mathbf x},\\dot{{\\mathbf x}}) = [\\dot{\\psi}_1({\\mathbf x},\\dot{{\\mathbf x}}), \\ldots, \\dot{\\psi}_Q({\\mathbf x},\\dot{{\\mathbf x}})]^\\top,\n\\end{align}\n\nwhere $\\dot{\\psi}_k({\\mathbf x},\\dot{{\\mathbf x}}) = (\\nabla_{\\mathbf x} \\psi_k)^\\top \\dot{{\\mathbf x}} = \\textstyle{\\sum}_{j=1}^n \\frac{\\partial \\psi_k}{\\partial x_j} \\frac{dx_j}{dt}$.\n{\\color{red} How is $\\dot x$ computed from data is not clear here..as this will depend on $\\Delta t$ the sampling time and the dependence is not shown here explicitly..??}\nThen, the Koopman generator approximates ${\\mathbf L}_i$ for each input case are approximated as follows:\n\\begin{align} \\label{eq:finite Koopman generator}\n\\begin{split}\n {\\mathbf L}_i &= \\underset{{\\mathbf L}_i}{\\mathrm{argmin}} \\, ||{\\mathbf B}_i - {\\mathbf A}_i {\\mathbf L}_i||_F,\\\\\n \\mathbf{A}_i &= \\frac{1}{T_i} \\textstyle{\\sum}_{\\ell=1}^{T_i} \\, \\boldsymbol{\\Psi}({\\mathbf X}_{i,\\ell}) \\boldsymbol{\\Psi}({\\mathbf X}_{i,\\ell})^\\top,\\\\\n \\mathbf{B}_i &= \\frac{1}{T_i} \\textstyle{\\sum}_{\\ell=1}^{T_i} \\, \\boldsymbol{\\Psi}({\\mathbf X}_{i,\\ell}) \\dot{\\boldsymbol{\\Psi}}({\\mathbf X}_{i,\\ell},\\dot{{\\mathbf X}}_{i,\\ell})^\\top,\n\\end{split}\n\\end{align}\nand ${\\mathbf X}_{i,\\ell}$ and $\\dot{{\\mathbf X}}_{i,\\ell}$ denote $\\ell$th column of ${\\mathbf X}_i$ and $\\dot{{\\mathbf X}}$, respectively. \\eqref{eq:finite Koopman generator} admits explicit solution $\\mathbf{K}_i = \\mathbf{A}_i^\\dagger \\mathbf{B}_i$, where $\\dagger$ stands for pseudo-inverse. 
Since the Koopman generator has linear properties, i.e., ${\\cal K}_{{\\mathbf f} + {\\mathbf g}_j} = \\cal{K}_{\\mathbf f} +\\cal{K}_{{\\mathbf g}_j}$, Koopman generators corresponding to ${\\mathbf g}_j$ are found by:\n\\begin{align}\n{\\cal K}_{{\\mathbf g}_j}\\approx {{\\mathbf L}}_j-{\\mathbf L}_0,\\;\\;\\;j=1,\\ldots, m. \\label{K_gapprox}\n\\end{align}\nNote that ${\\cal K}_{{\\mathbf f}}$ and ${\\cal K}_{{\\mathbf g}_j}$ can be identified using different algorithms, e.g., standard \\textit{extended dynamic mode decomposition (EDMD)}, and also, can be approximated jointly by using trajectories subject to arbitrary inputs.\nLastly, the divergence of ${\\mathbf f}$ shown in~\\eqref{PF_gen_approx} can be found:\n\\begin{align}\n\\nabla \\cdot {{\\mathbf f}} =\\nabla \\cdot [{\\cal K}_0 x_1,\\ldots, {\\cal K}_0 x_n]^\\top \\approx \\nabla \\cdot({\\boldsymbol{\\mathcal C}}_x^\\top {\\mathbf L}_0 {\\boldsymbol \\Psi}) \\label{approx_divergence}\n\\end{align}\nwhere ${\\boldsymbol{\\mathcal C}}_x$ is a coefficient vector for ${\\mathbf x}$ with respect to ${\\boldsymbol \\Psi}({\\mathbf x})$, i.e., ${\\mathbf x} = {\\boldsymbol{\\mathcal C}}_x^\\top {\\boldsymbol \\Psi} ({\\mathbf x})$, which can be found easily if ${\\boldsymbol \\Psi}({\\mathbf x})$ includes 1st-order monomials (i.e., ${\\mathbf x}$ itself). \nSimilarly, the divergence of vector fields ${\\mathbf g}_j$ are approximated as \n\\begin{align}\n \\nabla\\cdot ({\\mathbf g}_j)\\approx \\nabla \\cdot({\\boldsymbol{\\mathcal C}}_x^\\top {\\mathbf L}_j {\\boldsymbol \\Psi}),\\;\\;j=1,\\ldots,m.\\label{divg_approx}\n\\end{align}\nUsing the results in~\\eqref{K_gapprox}--\\eqref{divg_approx}, P-F generators are approximated by:\n\\begin{align} \\label{eq:PF generators}\n\\begin{split}\n {{\\mathbf P}}_i &={\\mathbf L}_i+\\nabla \\cdot({\\boldsymbol{\\mathcal C}}_x^\\top {\\mathbf L}_i {\\boldsymbol \\Psi}) {\\bf I},\n\\end{split}\n\\end{align}\nfor $i=0,1,\\ldots,m$. 
Now, using approximated infinitesimal PF generators in~\\eqref{eq:PF generators}, the OCPs in~\\eqref{eqn_ocp5} and~\\eqref{ocp_9} using data by restating the LHS of~\\eqref{eq:parameterized stability} as follows:\n\\begin{align} \\label{eq:stability_approximation}\n\\begin{split}\n&(1+\\alpha) b({\\mathbf x}) \\left ( {\\boldsymbol{\\mathcal C}}_a^\\top {\\mathbf P}_0 {\\boldsymbol \\Psi}({\\mathbf x}) + \\textstyle{\\sum}_{j=1}^m {\\boldsymbol{\\mathcal C}}_c^\\top {\\mathbf P}_j {\\boldsymbol \\Psi}({\\mathbf x}) \\right )\\\\\n&- \\alpha \\left( {\\boldsymbol{\\mathcal C}}_{ab} {\\mathbf P}_0 {\\boldsymbol \\Psi}({\\mathbf x}) + \\textstyle{\\sum}_{j=1}^m {\\boldsymbol{\\mathcal C}}_{bc_j}^\\top {\\mathbf P}_j {\\boldsymbol \\Psi}({\\mathbf x}) \\right).\n\\end{split}\n\\end{align}\n\\end{comment}\n\n\\section{Koopman and SOS-based Computation Framework for Optimal Control} \\label{sec:SOS Control}\nThis section provides a Koopman- and SOS-based computational framework for the finite-dimensional approximation of the OCPs involving ${\\cal L}_1$\/${\\cal L}_2$ costs. We begin with the following parameterization for the optimization variables $\\rho$ and $\\bar{\\boldsymbol \\rho}$:\n\\begin{equation}\n \\rho({\\mathbf x}) = {a({\\mathbf x})}\/b({\\mathbf x})^{\\alpha},\\;\\; \\bar{{\\boldsymbol \\rho}}({\\mathbf x}) = {\\mathbf c}({\\mathbf x})\/b({\\mathbf x})^{\\alpha}, \\label{rational_parametrization}\n\\end{equation}\nwhere $a({\\mathbf x})\\geq 0$ and ${\\mathbf c}({\\mathbf x}) = \\begin{bmatrix} c_1({\\mathbf x}),\\, \\ldots,\\, c_m({\\mathbf x})\\end{bmatrix}^\\top$. Here, $b({\\mathbf x})$ is a positive polynomial (positive at ${\\mathbf x} \\ne 0$), and $\\alpha$ is a positive constant chosen sufficiently large for the integrability condition to hold. In fact, $b({\\mathbf x})$ is chosen to be a control Lyapunov function based on the control dynamics linearized at the origin. 
The data-driven procedure for the identification of the linear dynamics used in the construction of $b({\\mathbf x})$ is explained in Remark \\ref{remark_bconstrucntion}. The particular form of the parameterization of the optimization variables in (\\ref{rational_parametrization}) is chosen because $\\rho$ has a singularity at the origin (Remark \\ref{remark_singularity}). Using~\\eqref{rational_parametrization}, we can write the constraint of the optimization problems in~\\eqref{ocp_main_discounted1} and~\\eqref{eqn_ocpdiscount} as\n\\begin{align*} \nh &=\\nabla \\cdot ({\\mathbf f}\\rho + {\\mathbf g}\\bar{{\\boldsymbol \\rho}}) - \\gamma \\rho = \\nabla \\cdot \\left[({\\mathbf f} a + {\\mathbf g}{\\mathbf c})\/b^{\\alpha} \\right] - \\gamma \\frac{a}{b^{\\alpha}}\\\\\n&=\\frac{1}{b^{\\alpha +1}} [(1 + \\alpha) b \\nabla \\cdot ({\\mathbf f} a + {\\mathbf g}{\\mathbf c}) - \\alpha \\nabla \\cdot (b{\\mathbf f} a + b{\\mathbf g}{\\mathbf c})\\\\\n&\\;\\;\\;\\;-\\gamma a b].\n\\end{align*}\nWith the above form of the constraint, we assume the parameterization $h=\\frac{d}{b^{\\alpha+1}}$, \nwhere $d$ is an arbitrary positive polynomial. 
With the assumed form of $h$, we write the constraint in the optimization variables $a$ and ${\\mathbf c}$ as\n\\begin{align} \\label{rational_parametrization_1}\n(1 + \\alpha) b \\nabla \\cdot ({\\mathbf f} a + {\\mathbf g}{\\mathbf c}) - \\alpha \\nabla \\cdot (b{\\mathbf f} a + b{\\mathbf g}{\\mathbf c}) - \\gamma a b = d.\n\\end{align}\nThe above constraint can be written in terms of the P-F generators as follows:\n\\begin{align}\n &-(1+\\alpha)b \\left({\\cal P}_{\\bf f}a+\\sum_{i=1}^m{\\cal P}_{{\\mathbf g}_i}c_i\\right)+\\alpha \\left({\\cal P}_{\\bf f}(ba)+\\sum_{i=1}^m{\\cal P}_{{\\mathbf g}_i}(bc_i)\\right)\\nonumber\\\\\n &-\\gamma a b=d\\label{ss12}\n\\end{align}\n\\subsection{Data-driven Approximation of the Generators}\\label{section_datagenerator}\nFrom (\\ref{ss12}), it follows that the data-driven approximation of the constraints in the optimization problems (\\ref{eqn_ocpdiscount}) and (\\ref{eqn_ocpdiscount_L2}) involves the approximation of the P-F generators ${\\cal P}_{\\bf f}$ and ${\\cal P}_{{\\mathbf g}_i}$. Furthermore, the P-F generator can be expressed in terms of the Koopman generator as \n\\begin{eqnarray}\n-{\\cal P}_{\\mathbf f} \\psi \\hspace{-0.02in} = \\hspace{-0.02in} \\nabla\\cdot({\\mathbf f} \\psi) \\hspace{-0.02in} = \\hspace{-0.02in} {\\mathbf f} \\cdot \\nabla \\psi+\\nabla\\cdot {\\mathbf f} \\psi \\hspace{-0.02in} = \\hspace{-0.02in} {\\cal K}_{\\mathbf f} \\psi+\\nabla \\cdot {\\mathbf f} \\psi.\\label{PF_gen_approx}\n\\end{eqnarray}\n\nExpressing the P-F generator in terms of the Koopman generator allows us to reuse the data-driven methods developed for approximating the Koopman generator to approximate the P-F generator. 
In particular, we use the generator Extended Dynamic Mode Decomposition (gEDMD) algorithm from~\\cite{klus2020data} for the approximation.\nTo approximate the Koopman generators, we first collect time-series data from the dynamical system in~\\eqref{cont_syst1} by injecting different control inputs: i) zero control input, ${\\mathbf u}=0$, and ii) unit step control inputs, ${\\mathbf u}=\\mathbf{e}_j$\\footnote{${\\mathbf e}_j \\in \\mathbb{R}^m$ denotes the $j$th unit vector, i.e., the $j$th entry of $\\mathbf{e}_j$ is 1 and all other entries are 0.} for $j=1,\\ldots,m$, over a finite time horizon with sampling step $\\delta t$. Let\n\\begin{align} \\label{eq:data_matrix}\n \\mathbf{X}_i = \\left [ \\mathbf{x}_1, \\ldots, \\mathbf{x}_{T_i} \\right ], \\,\\, \\dot{\\mathbf{X}}_i = \\left [ \\dot{\\mathbf{x}}_1, \\ldots, \\dot{\\mathbf{x}}_{T_i} \\right ],\n\\end{align}\nwith $i=0,1,\\ldots,m$ for the zero and step control inputs, where $T_i$ is the number of data points for the $i$th input case. The samples in $\\mathbf{X}_i$ do not have to come from a single trajectory; they can be a concatenation of multiple experiment\/simulation trajectories. Also, the time derivatives of the states $\\dot{{\\mathbf x}}$ can be estimated accurately using numerical algorithms such as finite differences. Next, we construct a polynomial basis vector:\n\\begin{align} \\label{eq:basis functions}\n{\\boldsymbol \\Psi}({\\mathbf x}) = [\\psi_1({\\mathbf x}), \\ldots, \\psi_Q({\\mathbf x})]^\\top,\n\\end{align}\nwhich can include monomials or Legendre\/Hermite polynomials. The data-driven approximation of the generators essentially involves the projection of the infinite-dimensional Koopman and P-F generators on the finite-dimensional space spanned by the basis functions in (\\ref{eq:basis functions}). 
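For concreteness, a monomial dictionary of the form (\\ref{eq:basis functions}) can be generated programmatically; the sketch below (an illustrative helper, not part of the cited algorithm) enumerates all monomials in $x_1,\\ldots,x_n$ up to a given total degree:

```python
from itertools import combinations_with_replacement

def monomial_dictionary(n, max_deg):
    """Return a list of callables psi_k(x) spanning all monomials in
    x_1, ..., x_n of total degree <= max_deg (including the constant 1)."""
    basis = []
    for deg in range(max_deg + 1):
        for idx in combinations_with_replacement(range(n), deg):
            def psi(x, idx=idx):
                val = 1.0
                for i in idx:   # product of the selected coordinates
                    val *= x[i]
                return val
            basis.append(psi)
    return basis

# For n = 2 states and degree <= 2: {1, x1, x2, x1^2, x1*x2, x2^2}
Psi = monomial_dictionary(2, 2)
assert len(Psi) == 6
assert Psi[3]([2.0, 3.0]) == 4.0   # x1^2 evaluated at (2, 3)
```

The same list of callables can then be evaluated columnwise on the data matrices in~\\eqref{eq:data_matrix}.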
Following \\cite{klus2020data}, we define:\n\\begin{align} \\label{eq:basis functions time derivatives}\n\\dot{{\\boldsymbol \\Psi}}({\\mathbf x},\\dot{{\\mathbf x}}) = [\\dot{\\psi}_1({\\mathbf x},\\dot{{\\mathbf x}}), \\ldots, \\dot{\\psi}_Q({\\mathbf x},\\dot{{\\mathbf x}})]^\\top,\n\\end{align}\n\\begin{align}\n \\dot{\\psi}_k({\\mathbf x},\\dot{{\\mathbf x}}) = (\\nabla_{\\mathbf x} \\psi_k)^\\top \\dot{{\\mathbf x}} = \\textstyle{\\sum}_{j=1}^n \\frac{\\partial \\psi_k}{\\partial x_j} \\frac{dx_j}{dt}.\n\\end{align}\nThe partial derivatives of the basis functions, required for $\\dot{\\psi}_k({\\mathbf x},\\dot{{\\mathbf x}})$, are computed analytically. The state derivatives $\\dot{{\\mathbf x}}$ are approximated from the sampled data using finite differences:\n\\begin{align}\n \\dot{{\\mathbf x}}_\\ell \\approx \\frac{{\\mathbf x}_\\ell - {\\mathbf x}_{\\ell-1}}{\\Delta t},\n\\end{align}\nwhere ${\\mathbf x}_{\\ell-1}$ and ${\\mathbf x}_\\ell$ are consecutive data points along a trajectory and $\\Delta t$ is the time difference between them. More sophisticated methods can also be used, e.g., total variation regularization~\\cite{Chartrand2011-ni} for noisy data and discontinuous derivatives, or a second-order central difference for better accuracy. 
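The finite-difference step can be sketched as follows (a minimal first-order illustration; in practice the samples come from the measured trajectories in~\\eqref{eq:data_matrix}):

```python
def forward_difference(samples, dt):
    """Estimate derivatives along a sampled trajectory x_1, ..., x_T with
    sampling interval dt; returns the T-1 estimates (x_l - x_{l-1}) / dt."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

dt = 0.01
traj = [(k * dt) ** 2 for k in range(5)]   # x(t) = t^2, so xdot(t) = 2t
xdot = forward_difference(traj, dt)
# the forward difference of t^2 at step k is exactly (2k - 1)*dt
assert abs(xdot[1] - 3 * dt) < 1e-9
```

The first-order bias visible here (the estimate lags the true derivative by $\\Delta t$) is what the central-difference and total-variation alternatives mentioned above reduce.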
Then, the Koopman generator approximation $\\mathbf{L}_i$ for each input case is computed as:\n\\begin{align} \\label{eq:finite Koopman generator}\n\\begin{split}\n \\mathbf{L}_i &= \\underset{{\\mathbf L}_i}{\\mathrm{argmin}} \\, ||{\\mathbf B}_i - {\\mathbf A}_i {\\mathbf L}_i||_F,\\\\\n \\mathbf{A}_i &= \\frac{1}{T_i} \\textstyle{\\sum}_{\\ell=1}^{T_i} \\, \\boldsymbol{\\Psi}({\\mathbf X}_{i,\\ell}) \\boldsymbol{\\Psi}({\\mathbf X}_{i,\\ell})^\\top,\\\\\n \\mathbf{B}_i &= \\frac{1}{T_i} \\textstyle{\\sum}_{\\ell=1}^{T_i} \\, \\boldsymbol{\\Psi}({\\mathbf X}_{i,\\ell}) \\dot{\\boldsymbol{\\Psi}}({\\mathbf X}_{i,\\ell},\\dot{{\\mathbf X}}_{i,\\ell})^\\top,\n\\end{split}\n\\end{align}\nwhere ${\\mathbf X}_{i,\\ell}$ and $\\dot{{\\mathbf X}}_{i,\\ell}$ denote the $\\ell$th columns of ${\\mathbf X}_i$ and $\\dot{{\\mathbf X}}_i$, respectively. The solution of~\\eqref{eq:finite Koopman generator} is known explicitly, $\\mathbf{L}_i = \\mathbf{A}_i^\\dagger \\mathbf{B}_i$, where $\\dagger$ stands for the pseudo-inverse. Given the Koopman generator approximation for ${\\mathbf f}$, ${\\cal K}_{{\\mathbf f}}\\approx {\\mathbf L}_0$, and using the linearity of the generators,\n\\begin{equation}\\label{eq:L0}\n{\\cal K}_{{\\mathbf g}_j}\\approx {{\\mathbf L}}_j-{\\mathbf L}_0,\\;\\;\\;j=1,\\ldots, m.\n\\end{equation}\n\nThe above is one method to estimate ${\\cal K}_{{\\mathbf f}}$ and ${\\cal K}_{{\\mathbf g}_j}$. 
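To illustrate the least-squares construction in~\\eqref{eq:finite Koopman generator} on a toy problem (a self-contained sketch with a hand-rolled $2\\times 2$ solve, not the full gEDMD implementation of \\cite{klus2020data}): for the scalar drift $\\dot x=-x$ and the dictionary ${\\boldsymbol \\Psi}(x)=[x,\\,x^2]^\\top$, the exact generator action is ${\\cal K}x=-x$ and ${\\cal K}x^2=-2x^2$, so the recovered matrix should be $\\mathrm{diag}(-1,-2)$:

```python
def koopman_generator_2d(xs):
    """Approximate the Koopman generator of xdot = -x on the dictionary
    Psi(x) = [x, x**2] via L = A^{-1} B, with A = sum Psi Psi^T / T and
    B = sum Psi dPsi^T / T, where dPsi = [xdot, 2*x*xdot]."""
    A = [[0.0, 0.0], [0.0, 0.0]]
    B = [[0.0, 0.0], [0.0, 0.0]]
    T = len(xs)
    for x in xs:
        xdot = -x                       # exact samples of the drift
        psi = [x, x * x]
        dpsi = [xdot, 2.0 * x * xdot]   # chain rule on [x, x^2]
        for i in range(2):
            for j in range(2):
                A[i][j] += psi[i] * psi[j] / T
                B[i][j] += psi[i] * dpsi[j] / T
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    # L = Ainv @ B
    return [[sum(Ainv[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

L = koopman_generator_2d([0.5, 1.0, 1.5, 2.0])
# expected generator matrix: [[-1, 0], [0, -2]] up to round-off
assert all(abs(L[i][j] - [[-1.0, 0.0], [0.0, -2.0]][i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

Because the derivative samples here are exact, the least-squares solution recovers the generator exactly; with finite-difference derivative estimates the recovery holds only up to the discretization error discussed above.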
They can also be approximated jointly by using trajectories subject to arbitrary inputs.\nNext, we approximate the divergence of the vector field ${\\mathbf f}$ as\n\\begin{equation}\n\\nabla \\cdot {{\\mathbf f}} =\\nabla \\cdot [{\\cal K}_{\\mathbf f} x_1,\\ldots, {\\cal K}_{\\mathbf f} x_n]^\\top \\approx \\nabla \\cdot({\\boldsymbol{\\mathcal C}}_x^\\top {\\mathbf L}_0 {\\boldsymbol \\Psi}) \\label{approx_divergence}\n\\end{equation}\nwhere ${\\boldsymbol{\\mathcal C}}_x$ is the coefficient matrix for ${\\mathbf x}$, i.e., ${\\mathbf x} = {\\boldsymbol{\\mathcal C}}_x^\\top {\\boldsymbol \\Psi}$, which can be found easily if ${\\boldsymbol \\Psi}$ includes the 1st-order monomials (i.e., ${\\mathbf x}$ itself). \nSimilarly, the divergence of the vector fields ${\\mathbf g}_j$ is approximated as \n\\begin{equation}\n\\nabla\\cdot ({\\mathbf g}_j)\\approx \\nabla \\cdot({\\boldsymbol{\\mathcal C}}_x^\\top {\\mathbf L}_j {\\boldsymbol \\Psi}),\\;\\;j=1,\\ldots,m.\\label{divg_approx}\n\\end{equation}\nFrom~\\eqref{PF_gen_approx} and~\\eqref{eq:L0}--\\eqref{divg_approx}, the (negative) P-F generators $-{\\cal P}_{\\mathbf f}$ and $-{\\cal P}_{{\\mathbf g}_j}$ are approximated by\n\\begin{equation} \\label{eq:PF generators}\n{{\\mathbf P}}_j\\hspace{-0.03in}=\\hspace{-0.03in}{\\mathbf L}_j+\\nabla \\cdot({\\boldsymbol{\\mathcal C}}_x^\\top {\\mathbf L}_j {\\boldsymbol \\Psi}) {\\bf I}\n\\end{equation}\nfor $j=0,1,\\ldots,m$. \n\\begin{remark}\nWhile the above procedure describes an approach for the approximation of the Koopman generators corresponding to the drift vector field ${\\bf f}({\\mathbf x})$ and the control vector fields ${\\mathbf g}_j({\\mathbf x})$, $j=1,\\ldots,m$, using zero and step inputs, it is also possible to identify these vector fields using random inputs. The problem of data-driven identification of the system dynamics using random or arbitrary control inputs will involve identifying bilinear vector fields. 
It can again be reduced to a least-squares optimization problem similar to (\\ref{eq:finite Koopman generator}).\n\\end{remark}\n\nTo parameterize the optimization variables, we express the polynomial functions $a({\\mathbf x})$, $b({\\mathbf x})$, and $c_j({\\mathbf x})$ with respect to the basis ${\\boldsymbol \\Psi}({\\mathbf x})$. Let ${\\boldsymbol{\\mathcal C}}_a$, ${\\boldsymbol{\\mathcal C}}_b$, and ${\\boldsymbol{\\mathcal C}}_{c_j}$ be the coefficient vectors used in the expansions of $a({\\mathbf x})$, $b({\\mathbf x})$, and $c_j({\\mathbf x})$:\n\\begin{align} \\label{eq:a&cj}\n a({\\mathbf x}) \\hspace{-0.03in}=\\hspace{-0.03in} {\\boldsymbol{\\mathcal C}}_a^\\top {\\boldsymbol \\Psi}({\\mathbf x}), b({\\mathbf x}) \\hspace{-0.03in}=\\hspace{-0.03in} {\\boldsymbol{\\mathcal C}}_b^\\top {\\boldsymbol \\Psi}({\\mathbf x}), c_j({\\mathbf x})\\hspace{-0.03in}=\\hspace{-0.03in}{\\boldsymbol{\\mathcal C}}_{c_j}^\\top {\\boldsymbol \\Psi}({\\mathbf x}).\n\\end{align}\nNote that ${\\boldsymbol{\\mathcal C}}_b$ is a known constant vector since $b({\\mathbf x})$ is a fixed polynomial. Similarly, let ${\\boldsymbol{\\mathcal C}}_{ab}$, ${\\boldsymbol{\\mathcal C}}_{bc_1}$, \\ldots, ${\\boldsymbol{\\mathcal C}}_{bc_m}$ denote the coefficient vectors of the polynomials $a({\\mathbf x}) b({\\mathbf x})$ and $b({\\mathbf x}) c_j({\\mathbf x})$, for $j=1,\\ldots,m$, namely,\n\\begin{align} \\label{eq:ab&bc}\n a({\\mathbf x})b({\\mathbf x}) = {\\boldsymbol{\\mathcal C}}_{ab}^\\top {\\boldsymbol \\Psi} ({\\mathbf x}), b({\\mathbf x}) c_j({\\mathbf x}) = {\\boldsymbol{\\mathcal C}}_{bc_j}^\\top {\\boldsymbol \\Psi} ({\\mathbf x}).\n\\end{align}\nConstructing the coefficient vectors ${\\boldsymbol{\\mathcal C}}_{ab}$ and ${\\boldsymbol{\\mathcal C}}_{bc_j}$ from ${\\boldsymbol{\\mathcal C}}_{a}$, ${\\boldsymbol{\\mathcal C}}_b$, and ${\\boldsymbol{\\mathcal C}}_{c_j}$ requires only simple numerical procedures. 
If ${\\boldsymbol \\Psi}({\\mathbf x})$ is a monomial vector, finding these coefficient vectors is straightforward. If ${\\boldsymbol \\Psi}({\\mathbf x})$ is not a monomial vector, more involved numerical steps are needed. One approach is to express the basis with respect to a common monomial vector $\\mathcal{M}({\\mathbf x})$ such that ${\\boldsymbol \\Psi}({\\mathbf x}) = {\\boldsymbol{\\mathcal C}}_\\Psi^\\top \\mathcal{M}({\\mathbf x})$ where ${\\boldsymbol{\\mathcal C}}_\\Psi$ is a coefficient matrix that connects ${\\boldsymbol \\Psi}({\\mathbf x})$ and $\\mathcal{M}({\\mathbf x})$. Then, we can find coefficient vectors in terms of the monomial vector $\\mathcal{M}({\\mathbf x})$, i.e., $a({\\mathbf x}) b({\\mathbf x}) = {\\boldsymbol{\\mathcal C}}_{ab}^{'} \\mathcal{M}({\\mathbf x})$ and $b({\\mathbf x}) c_j({\\mathbf x}) = {\\boldsymbol{\\mathcal C}}_{bc_j}^{'} \\mathcal{M}({\\mathbf x})$, and convert these coefficient vectors back to the original basis ${\\boldsymbol \\Psi}({\\mathbf x})$ by multiplying by the pseudo-inverse of ${\\boldsymbol{\\mathcal C}}_\\Psi$, i.e., ${\\boldsymbol{\\mathcal C}}_{ab} = {\\boldsymbol{\\mathcal C}}_\\Psi^\\dagger {\\boldsymbol{\\mathcal C}}_{ab}^{'}$ and ${\\boldsymbol{\\mathcal C}}_{bc_j} = {\\boldsymbol{\\mathcal C}}_\\Psi^\\dagger {\\boldsymbol{\\mathcal C}}_{bc_j}^{'}$. 
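A small numerical illustration of this monomial-route construction (the Legendre basis, degrees, and polynomials below are assumed for concreteness): the product coefficients are formed by convolution in the monomial basis and mapped back through a pseudo-inverse:

```python
import numpy as np
from numpy.polynomial import legendre

# Illustrative sketch: coefficients of a(x)b(x) in a Legendre basis
# Psi = C_Psi^T M, obtained via the common monomial vector M = [1, x, x^2].
Ca_m = np.array([1.0, 2.0, 0.0])      # a(x) = 1 + 2x in the monomial basis
Cb_m = np.array([0.0, 1.0, 0.0])      # b(x) = x
Cab_m = np.convolve(Ca_m, Cb_m)[:3]   # a(x)b(x) = x + 2x^2 (degree <= 2 here)

# Columns of C_Psi hold the monomial coefficients of P0, P1, P2.
C_Psi = np.array([[1.0, 0.0, -0.5],
                  [0.0, 1.0,  0.0],
                  [0.0, 0.0,  1.5]])
Cab_psi = np.linalg.pinv(C_Psi) @ Cab_m   # coefficients of a*b in the Psi basis
value = legendre.legval(0.7, Cab_psi)     # evaluates a(0.7) * b(0.7)
```

Here the pseudo-inverse reduces to an ordinary inverse because the basis-change matrix is square and full rank; in general the pseudo-inverse handles degree mismatches between the two bases.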
The implementation of this procedure can be done easily using the polynomial toolbox provided with SOSOPT in Matlab~\\cite{Seiler_2013_SOSOPT}.\n\n\n\n\nNow, using the approximated infinitesimal P-F generators in~\\eqref{eq:PF generators}, we restate the LHS of~\\eqref{rational_parametrization_1} in~\\eqref{eqn_ocpdiscount_L2} and~\\eqref{eqn_ocpdiscount} as:\n\\begin{align} \\label{eq:stability_approximation}\n\\begin{split}\n&(1+\\alpha) b({\\mathbf x}) \\left ( {\\boldsymbol{\\mathcal C}}_a^\\top {\\mathbf P}_0 {\\boldsymbol \\Psi}({\\mathbf x}) + \\textstyle{\\sum}_{j=1}^m {\\boldsymbol{\\mathcal C}}_{c_j}^\\top {\\mathbf P}_j {\\boldsymbol \\Psi}({\\mathbf x}) \\right )\\\\\n&- \\alpha \\left( {\\boldsymbol{\\mathcal C}}_{ab}^\\top {\\mathbf P}_0 {\\boldsymbol \\Psi}({\\mathbf x}) + \\textstyle{\\sum}_{j=1}^m {\\boldsymbol{\\mathcal C}}_{bc_j}^\\top {\\mathbf P}_j {\\boldsymbol \\Psi}({\\mathbf x}) \\right) - \\gamma a({\\mathbf x}) b({\\mathbf x}).\n\\end{split}\n\\end{align}\n\n\n\\begin{remark}\nIn \\cite{korda2018convergence,klus2020data}, convergence results for the finite-dimensional approximation of the Koopman operator and generators in the limit as the number of basis functions and data points goes to infinity are studied. These convergence results, combined with the finite-dimensional approximation of the cost function for the optimization problem, can be used to provide theoretical justification for the solution obtained using the finite-dimensional approximation of the infinite-dimensional optimization problem. 
\n\\end{remark}\nIn the following sections, we discuss the finite-dimensional approximation of the cost function, leading up to the finite-dimensional approximation of the infinite-dimensional optimization problems (\\ref{eqn_ocpdiscount}) and (\\ref{eqn_ocpdiscount_L2}) involving ${\\cal L}_1$ and ${\\cal L}_2$ norms on the control input, respectively.\n\n\\subsection{Optimal Control with $\\mathcal{L}_1$ norm of feedback control}\n\n\n\n\nUsing the assumed parameterization for \n$\\rho$ and $\\bar{\\boldsymbol \\rho}$ from (\\ref{rational_parametrization}), the cost function for the ${\\cal L}_1$ OCP in~\\eqref{eqn_ocpdiscount} can be written as \n\\begin{align}\\label{eqn_ocp3}\n\\begin{split}\n\\inf_{a\\geq 0,{\\mathbf c}} &\\;\\;\\; \\int_{{\\mathbf X}_1} \\frac{q({\\mathbf x}) a({\\mathbf x})}{b({\\mathbf x})^\\alpha}+\\frac{\\beta||{\\mathbf c}({\\mathbf x})||_1}{b({\\mathbf x})^\\alpha} d{\\mathbf x}\n\\end{split}\n\\end{align}\nwhere a small neighborhood of the origin, ${\\cal N} = \\{{\\mathbf x} \\in {\\mathbf X}: |{\\mathbf x}| \\le \\epsilon,\\, \\epsilon > 0\\}$, \nis chosen as a polytope and excluded from the integration of the cost function to remove the singularity of the density at the origin (refer to Remark \\ref{remark_singularity}).\n\n\nTo make~\\eqref{eqn_ocp3} solvable, we introduce dummy polynomials ${\\mathbf s}({\\mathbf x}) = [s_1({\\mathbf x}),\\ldots,s_m({\\mathbf x})]^\\top$, adding the constraints:\n\\begin{align} \\label{eq:const on sx}\n {\\mathbf s}({\\mathbf x}) - {\\mathbf c}({\\mathbf x}) \\geq 0,\\, {\\mathbf s}({\\mathbf x}) + {\\mathbf c}({\\mathbf x}) \\geq 0.\n\\end{align}\nThe polynomials $s_j$ are expressed in terms of the basis functions using the coefficient vectors ${\\boldsymbol{\\mathcal C}}_{s_j}$ as:\n\\begin{align} \\label{eq:sj}\n s_j({\\mathbf x})\\hspace{-0.03in}=\\hspace{-0.03in}{\\boldsymbol{\\mathcal C}}_{s_j}^\\top {\\boldsymbol \\Psi}({\\mathbf x}),\n\\end{align}\nfor 
$j=1,\\ldots,m$.\nSubstituting (\\ref{eq:a&cj}) in the integral cost (\\ref{eqn_ocp3}), we obtain the following finite-dimensional approximation of the cost function:\n\\begin{align} \\label{eq:L1 cost precomp}\n {\\mathbf d}_1^\\top {\\boldsymbol{\\mathcal C}}_a + \\beta \\textstyle{\\sum}_{j=1}^{m} {\\mathbf d}_2^\\top {\\boldsymbol{\\mathcal C}}_{s_j}\n\\end{align}\nwhere ${\\mathbf d}_1$ and ${\\mathbf d}_2$ are the coefficient vectors given by\n\\begin{align} \\label{eq:integral}\n {\\mathbf d}_1 = \\int_{{\\mathbf X}_1} \\frac{q({\\mathbf x}){\\boldsymbol \\Psi}({\\mathbf x})}{b({\\mathbf x})^\\alpha} d{\\mathbf x},\\,\\, {\\mathbf d}_2 = \\int_{{\\mathbf X}_1} \\frac{{\\boldsymbol \\Psi}({\\mathbf x})}{b({\\mathbf x})^\\alpha} d{\\mathbf x}.\n\\end{align}\nUsing~\\eqref{eq:const on sx}--\\eqref{eq:integral} and SOS positivity constraints denoted by $\\Sigma[{\\mathbf x}]$, \\eqref{eqn_ocp3} can be expressed as an SOS problem:\n\\begin{align} \\label{eqn_ocp5}\n\\begin{split}\n \\min_{\\underset{j=1,\\ldots,m}{{\\boldsymbol{\\mathcal C}}_a, {\\boldsymbol{\\mathcal C}}_{c_j}, {\\boldsymbol{\\mathcal C}}_{s_j}}} &\\,\\,\\, {\\mathbf d}_1^\\top {\\boldsymbol{\\mathcal C}}_a + \\beta \\textstyle{\\sum}_{j=1}^{m} {\\mathbf d}_2^\\top {\\boldsymbol{\\mathcal C}}_{s_j}\\\\\n &{\\rm s.t.} 
\\,\\; \\eqref{eq:stability_approximation} \\in \\Sigma[{\\mathbf x}],\\,\\, a({\\mathbf x}) \\in \\Sigma[{\\mathbf x}],\\\\\n &\\, ({\\mathbf s}({\\mathbf x}) - {\\mathbf c}({\\mathbf x})) \\in \\Sigma[{\\mathbf x}],\\, ({\\mathbf s}({\\mathbf x}) + {\\mathbf c}({\\mathbf x})) \\in \\Sigma[{\\mathbf x}].\n\\end{split}\n\\end{align}\n\n\n\n\n\n\n\\subsection{Optimal Control with $\\mathcal{L}_2$ norm of feedback control}\nFollowing the same parameterization in~\\eqref{rational_parametrization}, the $\\mathcal{L}_2$ OCP in~\\eqref{eqn_ocpdiscount_L2} is restated as:\n\\vspace{-0.0in}\n\\begin{align}\\label{eq:eqn_ocp6}\n\\begin{split}\n \\min_{a,{\\mathbf c}} &\\quad \\int_{{\\mathbf X}_1} \\frac{q({\\mathbf x})a({\\mathbf x})}{b({\\mathbf x})^\\alpha} + \\beta \\frac{{\\mathbf c}({\\mathbf x})^\\top {\\mathbf R} {\\mathbf c}({\\mathbf x})}{a({\\mathbf x})b({\\mathbf x})^\\alpha} d{\\mathbf x}\\\\\n &\\mathrm{s.t.} \\quad \\eqref{eq:stability_approximation} \\geq 0,\\,\\, a({\\mathbf x}) \\geq 0.\n\\end{split}\n\\end{align}
Subsequently, we reformulate \\eqref{eq:eqn_ocp6} as follows:\n\\vspace{-0.0in}\n\\begin{align}\\label{eq:eqn_ocp7}\n\\begin{split}\n \\min_{a,{\\mathbf c},w} &\\quad \\int_{{\\mathbf X}_1} \\frac{q({\\mathbf x})a({\\mathbf x})}{b({\\mathbf x})^\\alpha} + \\beta \\frac{w({\\mathbf x})}{b({\\mathbf x})^\\alpha} d{\\mathbf x}\\\\\n \\mathrm{s.t.} &\\quad \\eqref{eq:stability_approximation} \\geq 0,\\,\\, a({\\mathbf x}) \\geq 0,\\\\\n &\\quad {\\boldsymbol{\\mathcal M}}({\\mathbf x}) = \\begin{bmatrix} w({\\mathbf x}) & {\\mathbf c}({\\mathbf x})^\\top \\\\ {\\mathbf c}({\\mathbf x}) & a({\\mathbf x}){\\mathbf R}^{-1} \\end{bmatrix} \\succcurlyeq 0,\n\\end{split}\n\\end{align}\nwhere the positive semidefiniteness (PSD) of ${\\boldsymbol{\\mathcal M}}({\\mathbf x})$ results from applying the Schur complement lemma to the $\\mathcal{L}_2$ cost bounded by $w({\\mathbf x})$, i.e., $\\frac{{\\mathbf c}({\\mathbf x})^\\top {\\mathbf R} {\\mathbf c}({\\mathbf x})}{a({\\mathbf x})} \\leq w({\\mathbf x})$. 
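The Schur-complement step can be sanity-checked numerically; the matrix R, vector c, and scalar a below are arbitrary test values, not data from the examples:

```python
import numpy as np

# Numerical sanity check (arbitrary test values) of the Schur-complement step:
# M = [[w, c^T], [c, a*inv(R)]] is PSD exactly when w >= c^T R c / a.
a = 2.0
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])
c = np.array([0.3, -0.4])

def M(w):
    Rinv = np.linalg.inv(R)
    top = np.concatenate(([w], c))
    bottom = np.hstack([c[:, None], a * Rinv])
    return np.vstack([top, bottom])

bound = c @ R @ c / a                                  # the Schur bound
eig_above = np.linalg.eigvalsh(M(bound + 0.1)).min()   # PSD: min eig >= 0
eig_below = np.linalg.eigvalsh(M(bound - 0.1)).min()   # not PSD: min eig < 0
```

Checking the smallest eigenvalue on either side of the bound confirms the equivalence used in the reformulation.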
Now, to algebraically express ${\\boldsymbol{\\mathcal M}}({\\mathbf x})\\succcurlyeq 0$, we introduce the following lemma:\n\\begin{lemma}[\\scriptsize Positive semidefinite polynomial matrix] ~\\cite{Scherer_2006} \\label{lem:polyPSD}\nA $p \\times p$ matrix ${\\mathbf H}({\\mathbf x})$ whose entries are polynomials is positive semidefinite with respect to the monomial vector ${\\mathbf z}({\\mathbf x})$, \\textit{if and only if}, there exists ${\\mathbf D} \\succcurlyeq 0$ such that\n\\begin{align*}\n {\\mathbf H}({\\mathbf x}) = \\left({\\mathbf z}({\\mathbf x}) \\otimes {\\mathbf I}_p \\right)^\\top {\\mathbf D} \\left({\\mathbf z}({\\mathbf x}) \\otimes {\\mathbf I}_p\\right),\n\\end{align*}\nwhere $\\otimes$ denotes the Kronecker product (tensor product) and ${\\mathbf I}_p$ is the identity matrix of dimension $p$.\n\\end{lemma}\n\nFollowing Lemma~\\ref{lem:polyPSD}, let ${\\mathbf z}({\\mathbf x})$ be a monomial vector with maximum degree equal to $\\mathrm{floor}(\\mathrm{deg}({\\boldsymbol \\Psi}({\\mathbf x}))\\/2)+1$; then ${\\boldsymbol{\\mathcal M}}({\\mathbf x})$ in~\\eqref{eq:eqn_ocp7} is PSD when there exists ${\\mathbf D} \\succcurlyeq 0$ such that ${\\boldsymbol{\\mathcal M}}({\\mathbf x})={\\mathbf H}({\\mathbf x})$. 
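A quick numerical illustration of the lemma (with an assumed small example: z(x) = [1, x] and p = 2): any PSD Gram matrix D yields an H(x) that is PSD for every x:

```python
import numpy as np

# Lemma illustration (assumed small example): with z(x) = [1, x] and p = 2,
# H(x) = (z kron I_p)^T D (z kron I_p) is PSD for every x whenever D is PSD.
rng = np.random.default_rng(0)
p = 2
G = rng.normal(size=(4, 4))
D = G @ G.T                    # a random PSD Gram matrix

def H(x):
    Z = np.kron(np.array([[1.0], [x]]), np.eye(p))   # (z kron I_p), shape (4, 2)
    return Z.T @ D @ Z

min_eigs = [np.linalg.eigvalsh(H(x)).min() for x in (-3.0, 0.5, 7.0)]
```

The converse direction (finding a certifying D for a given PSD polynomial matrix) is what the SOS solver performs.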
Using this result and~\\eqref{eq:integral}, an SOS problem equivalent to~\\eqref{eq:eqn_ocp7} can be formulated as follows:\n\\begin{align} \\label{ocp_9}\n \\min_{\\underset{j=1,\\ldots,m}{{\\boldsymbol{\\mathcal C}}_a, {\\boldsymbol{\\mathcal C}}_w, {\\boldsymbol{\\mathcal C}}_{c_j}}} &\\,\\,\\, {\\mathbf d}_1^\\top {\\boldsymbol{\\mathcal C}}_a + \\beta {\\mathbf d}_2^\\top {\\boldsymbol{\\mathcal C}}_{w}\\nonumber\\\\\n \\mathrm{s.t.} &\\quad \\eqref{eq:stability_approximation} \\in \\Sigma[{\\mathbf x}],\\,\\, a({\\mathbf x}) \\in \\Sigma[{\\mathbf x}],\\\\\n &\\quad w({\\mathbf x}) - {\\mathbf H}_{11}({\\mathbf x}) = 0,\\, {\\mathbf c}({\\mathbf x}) - {\\mathbf H}_{12}({\\mathbf x}) = 0,\\nonumber\\\\\n &\\quad a({\\mathbf x}) {\\mathbf R}^{-1} - {\\mathbf H}_{22}({\\mathbf x}) = 0,\\, {\\mathbf D} \\succcurlyeq 0\\nonumber\n\\end{align}\nwhere ${\\mathbf H}_{ij}({\\mathbf x})$ denotes the ${ij}$th entry of~${\\mathbf H}({\\mathbf x})$; and $\\boldsymbol{\\mathcal{C}}_w$ is a coefficient vector of~$w({\\mathbf x})$, i.e., $w({\\mathbf x})=\\boldsymbol{\\mathcal{C}}_w^\\top {\\boldsymbol \\Psi}({\\mathbf x})$.\n\n\n\n\n\n\n\\begin{remark}\\label{remark_bconstrucntion}\nTo obtain $b({\\mathbf x})$, the control Lyapunov function for the linearized dynamics, we first identify the linearized control system dynamics from time-series data collected near the origin. To identify the linear dynamics, we use the gEDMD algorithm discussed in Section \\ref{section_datagenerator} for the special case of identity basis functions, i.e., ${\\boldsymbol \\Psi}({\\mathbf x})={\\mathbf x}$. Once the linearized system dynamics are identified, we use linear quadratic regulator (LQR) based optimal control for the construction of $b({\\mathbf x})$, namely $b({\\mathbf x})={\\mathbf x}^\\top P{\\mathbf x}$, where $P$ is the solution of the algebraic Riccati equation (ARE). 
Following Assumption \\ref{assume_localstable}, we know that there exists a positive definite solution, $P$, to the ARE, which serves as a control Lyapunov function for the linearized control system. \n\\end{remark}\n\n\\section{Simulation Results}\\label{sec:examples}\nIn this section, we present simulation results to illustrate the proposed data-driven control framework. All simulations are performed using MATLAB on a desktop computer with 64GB RAM. We take $\\alpha = 4$ and $\\beta = 1$ for all our examples. The cost function and control matrix for each example are $q({\\mathbf x}) = {\\mathbf x}^\\top{\\mathbf x}$ and $R=1$, respectively. Furthermore, the cost function is computed outside the neighborhood $\\cal N$ of the origin. However, we did not implement a local stabilizing controller around the origin, and hence no blending controller is used (Remark \\ref{remark_singularity}). Also, the maximum degree of $a({\\mathbf x})$ is taken to be 1. We take the simulation time step $\\Delta t = 0.01 \\, \\mathrm{[s]}$ for sampling time-series data, and Legendre polynomials are used as dictionary functions for all examples. We use the SOSOPT~\\cite{Seiler_2013_SOSOPT} toolbox to solve the formulated SOS optimization problems for the OCPs in~\\eqref{eqn_ocp5} and~\\eqref{ocp_9}. 
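The LQR-based construction of b(x) = x^T P x described in the remark above can be sketched as follows; the matrix A is the linearization of Example 1 below at the origin, while the input matrix B is an assumed illustrative choice (the actual g(x) = (0, x1)^T of Example 1 vanishes at the origin):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch of the remark's construction of b(x) = x^T P x via LQR.
# A: linearization of Example 1 at the origin; B: assumed input matrix.
A = np.array([[-1.0,  1.0],
              [-0.5, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)
P = solve_continuous_are(A, B, Q, R)   # positive definite ARE solution

def b(x):
    return x @ P @ x                   # candidate control Lyapunov function
```

In the paper's pipeline, A and B would instead come from the gEDMD identification with identity basis functions.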
All other parameters used for each example are listed in Table \\ref{table:OCP param}.\n\n\\begin{table}[b]\n\\small\n\\centering\n\\caption{Parameters for different examples}\n\\begin{tabular} {*5c}\n\\toprule\n& \\textbf {Ex~1} & \\textbf {Ex~2}& \\textbf{Ex~3} &\\textbf{Ex~4}\\\\\n\\midrule\n$\\mathbf{X}$ &$[-5,5]^2$ & $[-5,5]^2$ & $[-5,5]^2$ & $[-5,5]^3$ \\\\\n$\\mathcal{N}$ & $[-0.1,0.1]^2$ & $[-0.1,0.1]^2$ & $[-0.1,0.1]^2$ & $[-0.1,0.1]^3$ \\\\\ndeg($c({\\mathbf x})$) & $2$ & $6$ & $3$ & $6$ \\\\ \ndeg($s({\\mathbf x})$) & $2$ & $7$ & $7$ & $6$ \\\\\n${\\boldsymbol \\Psi}({\\mathbf x})$ & ${4^{th}}$ order & ${9^{th}}$ order & ${7^{th}}$ order & ${8^{th}}$ order \\\\\n\\bottomrule\n\\end{tabular}\n\\label{table:OCP param}\n\\end{table}\n\n\n\n\n\n\n\\noindent\\textbf{Example 1:} Consider the following controlled nonlinear system:\n\\begin{align*}\n\\dot x_1=-x_1+x_2,\\;\\;\n\\dot x_2= -0.5(x_1+x_2) + 0.5x_1^2x_2 + x_1 u.\n\\end{align*}\nFor this example, we use the ${\\cal L}_2$ cost on the control input. The optimal control and optimal cost can be found analytically by solving the HJB equation~\\cite{primbs1996optimality} and are given below:\n\\[u^\\star({\\mathbf x})=-x_1x_2,\\;\\;V^\\star({\\mathbf x})=0.5 x_1^2+x_2^2.\\]\nNext, using the proposed method, we obtain the following optimal control with discount factor $\\gamma=0$:\n\\[k^\\star({\\mathbf x}) = -1.38x_1x_2 + 0.00005x_1 - 0.00004x_2 - 0.0063\\]\nRounding off the coefficients of $k^\\star({\\mathbf x})$, we see that $u^\\star({\\mathbf x}) \\approx k^\\star({\\mathbf x})$. The small mismatch in the coefficients is due to the choice of $h_0({\\mathbf x})$, which is unique to our formulation but is absent from the primal formulation of the OCP. The simulation results are obtained by solving the inequality constraints corresponding to the case $h_0\\geq 0$. 
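The stated closed-form pair (u*, V*) can be verified numerically; this is only a sanity check of the HJB solution quoted above, not part of the proposed method:

```python
import numpy as np

# Sanity check: V*(x) = 0.5 x1^2 + x2^2 and u*(x) = -x1 x2 satisfy the HJB
# equation q + gradV . (f + g u) + u^2 = 0 with q = x^T x, R = 1, gamma = 0.
def hjb_residual(x1, x2):
    f = np.array([-x1 + x2, -0.5 * (x1 + x2) + 0.5 * x1**2 * x2])
    g = np.array([0.0, x1])
    gradV = np.array([x1, 2.0 * x2])
    u = -0.5 * g @ gradV               # u* = -(1/2) R^{-1} g^T gradV = -x1 x2
    return x1**2 + x2**2 + gradV @ (f + g * u) + u**2

rng = np.random.default_rng(1)
pts = rng.uniform(-2.0, 2.0, size=(100, 2))
max_residual = max(abs(hjb_residual(p[0], p[1])) for p in pts)
```

The residual cancels term by term, so it vanishes (to floating-point precision) at every sampled point.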
In this example, we collected $2 \\times 10^4$ time-series data points by simulating the system to estimate the Koopman generator. Figure~\\ref{E7L2_1} shows the comparison of trajectories simulated from the closed-loop system using the optimal control solutions obtained by the HJB approach (dotted red) and the proposed data-driven convex approach (solid black). \n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{E7L2_1.eps}\n\\caption{$\\mathcal{L}_2$ OCP results of system in Example 1, showing converging state trajectories for $\\gamma=0$.}\n\\label{E7L2_1}\n\\end{figure}\n\n\\noindent{\\textbf{Example 2.}} Consider the controlled Van der Pol oscillator:\n\\begin{equation*}\n\\dot{x}_1 = x_2,\\,\\, \\dot{x}_2 = (1-x_1^2)x_2 - x_1 + u.\n\\end{equation*}\nIn this example, we solve the ${\\cal L}_2$ OCP for different values of the discount factor. A total of $2 \\times 10^4$ time-series data points is collected from repeated simulations to estimate the Koopman generator. Figs.~\\ref{E2L2_1} and~\\ref{E2L2_2} show the trajectories of the closed-loop system starting from arbitrary initial points for the discount factor values $\\gamma=0$ and $\\gamma=1$, respectively. We notice that the controller becomes more aggressive for larger $\\gamma$ and trajectories converge to the origin faster. This is expected, as for $\\gamma > 0$ the optimal control solutions enforce convergence at an exponential rate, so the closed-loop system converges faster than under uniform stability.\nOn the other hand, we observe that, for a negative discount factor $\\gamma < 0$, the control solution is not guaranteed to stabilize the system to the origin, and the closed-loop dynamics converge to a limit cycle, as shown in Fig.~\\ref{E2L2_6} for $\\gamma = -5$. 
This is again expected as the cost function is decreasing exponentially, and even without a stabilizing feedback controller, the optimal cost function is finite.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=.9\\linewidth]{tract_0.eps}\n\\caption{Trajectories of the closed-loop system with the $\\mathcal{L}_2$ OCP solution for $\\gamma=0$ for the Van der Pol oscillator.}\n\\label{E2L2_1}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=.9\\linewidth]{tract_1.eps}\n\\caption{Trajectories of the closed-loop system with the $\\mathcal{L}_2$ OCP solution for $\\gamma=1$ for the Van der Pol oscillator.}\n\\label{E2L2_2}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=.9\\linewidth]{tract_m_5.eps}\n\\caption{Trajectories of the closed-loop system with the $\\mathcal{L}_2$ OCP solution for $\\gamma=-5$ for the Van der Pol oscillator.}\n\\label{E2L2_6}\n\\end{figure}\n\n\\noindent\\textbf{Example 3:} Consider the controlled simple inverted pendulum with nonpolynomial dynamics:\n\\begin{equation*}\n\\dot x_1=x_2,\\;\\;\\dot x_2=-\\sin x_1 - 0.2 x_2 + u.\n\\end{equation*}\nThe number of data points used in the estimation of the Koopman operator equals $2 \\times 10^4$. \nThe simulation results for the ${\\cal L}_2$ optimal control for different discount factors $\\gamma$ are shown in Figs.~\\ref{E5L2_1}--\\ref{E5L2_5}. 
As in Example 2, we notice that the controller obtained with a positive discount factor stabilizes the origin at a faster rate than in the case $\\gamma = 0$, whereas, for a negative discount factor, the closed-loop system does not stabilize the origin.\n\n\n\n\n\\begin{comment}\n\\begin{figure} [ht]\n \\centering\n \\includegraphics[scale=0.17]{Lorentz_L2_2D.eps}\n \\caption{Trajectories in states vs. time of the Lorenz attractor simulated from open-loop\nas well as optimal control with $\\mathcal{L}_2$ control norm, starting from some disturbed\ninitial points converge to the origin while open-loop dynamics\nshows chaotic behavior.}\n \\label{Lorentz2}\n\\end{figure}\n\\begin{figure} [ht]\n \\centering\n \\includegraphics[scale=0.17]{lorentzL2_trajectory.eps}\n \\caption{Dynamics of the Lorenz attractor, for both the closed-loop system using optimal control with $\\mathcal{L}_2$ norm for the control input (blue) converging to the origin (denoted by a black dot) as well as the open-loop dynamics (red), starting from initial points (denoted by $\\times$).}\n \\label{lorentzL2_trajectory}\n\\end{figure}\n\\end{comment}\n\n\\noindent\\textbf{Example 4:} Consider the controlled Lorenz attractor:\n\\begin{equation*}\n\\dot x_1=\\sigma(x_2-x_1),\\dot x_2=x_1(\\rho-x_3) - x_2+u,\\dot x_3=x_1x_2 - \\eta x_3,\n\\end{equation*}\nwhere $\\sigma=10$, $\\rho = 28$, and $\\eta = \\frac{8}{3}$. The open-loop dynamics of the Lorenz system with the above parameter values are chaotic. For this example, we provide simulation results with ${\\cal L}_1$ optimal control with $\\gamma=0$. 
We notice that the optimal control can stabilize the system.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{pend_tract_0.eps}\n\\caption{$\\mathcal{L}_2$ OCP results of the simple inverted pendulum, showing converging state trajectories for $\\gamma=0$.}\n\\label{E5L2_1}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{pend_tract_2.eps}\n\\caption{$\\mathcal{L}_2$ OCP results of the simple inverted pendulum, showing converging state trajectories for $\\gamma=2$.}\n\\label{E5L2_2}\n\\end{figure}\n\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{pend_tract_m_5.eps}\n\\caption{$\\mathcal{L}_2$ OCP results of the simple inverted pendulum, showing converging state trajectories for $\\gamma=-5$.}\n\\label{E5L2_5}\n\\end{figure}\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=1\\linewidth]{Lor_tract1_0.eps}\n\\caption{$\\mathcal{L}_1$ OCP results of the Lorenz attractor, showing converging state trajectories for $\\gamma=0$.}\n\\label{E4L1_1}\n\\end{figure}\n\n\n\n\n\\section{Conclusion}\\label{sec:conclusion}\nA systematic convex optimization-based framework is provided for the optimal control of nonlinear systems with a discounted cost function. In contrast to the existing literature on OCPs with discounted cost, we consider the OCP with both positive and negative discount factors and provide a condition for the existence of optimal control. The OCP is formulated in the dual space of density functions as an infinite-dimensional convex optimization problem. A new data-driven algorithm is provided for the computation of optimal control, combining methods from sum-of-squares optimization and data-driven approximation of linear transfer operators. Simulation results are presented to verify the developed data-driven optimal control framework. 
\n\n\n\n\n\n\\section*{Appendix}\n\n\n\\textbf{ Proof of Theorem \\ref{theorem_maingeometric}}\\\\\nConsider the feedback control system\n\\[\\dot {\\mathbf x}={\\bf f}({\\mathbf x})+{\\mathbf g}({\\mathbf x}){\\mathbf k}({\\mathbf x})\\]\nand let ${\\mathbb P}^c_t$ and ${\\mathbb U}^c_t$ be the P-F and Koopman operators for the feedback control system. Using the definition of the Koopman operator, the cost in (\\ref{ocp_main_discounted}) can be written as \n\\[\nJ(\\mu_0)=\\int_{{\\mathbf X}_1}\\int_0^\\infty e^{\\gamma t}[{\\mathbb U}_t^c (q+{\\mathbf k}^\\top {\\mathbf R}{\\mathbf k})]({\\mathbf x})dt h_0({\\mathbf x})d{\\mathbf x}.\n\\]\nUsing the duality and linearity of the Koopman and P-F operators, we obtain\n\\[\nJ(\\mu_0)=\\int_{{\\mathbf X}_1}\\int_0^\\infty (q+{\\mathbf k}^\\top {\\mathbf R}{\\mathbf k})({\\mathbf x})e^{\\gamma t}[{\\mathbb P}_t^c h_0]({\\mathbf x})dt d{\\mathbf x}.\n\\]\nDefining \n\\begin{equation}\\rho({\\mathbf x}):=\\int_0^\\infty e^{\\gamma t}[{\\mathbb P}_t^c h_0]({\\mathbf x}) dt, \\label{definingrho}\n\\end{equation} \n$J(\\mu_0)$ can be written as \n\\begin{equation}\nJ(\\mu_0)=\\int_{{\\mathbf X}_1} (q({\\mathbf x})+{\\mathbf k}({\\mathbf x})^\\top {\\mathbf R}{\\mathbf k}({\\mathbf x})) \\rho({\\mathbf x})d{\\mathbf x}.\\label{ocp_costdiscountproof}\n\\end{equation}\nDefining $\\bar {\\boldsymbol \\rho}({\\mathbf x}):={\\mathbf k}({\\mathbf x})\\rho({\\mathbf x})$, the cost function can be written in the form given in \\eqref{eqn_ocpdiscount_L2}.\nWe next show that $\\rho({\\mathbf x})$ and $\\bar {\\boldsymbol \\rho}$ satisfy the constraints in \\eqref{eqn_ocpdiscount_L2}. 
Following Assumption \\ref{assumption_onocp}, we know that the state cost $q$ is uniformly bounded away from zero in ${\\mathbf X}_1$ and the optimal cost function is finite; hence we have\n\\begin{equation}\\infty>J(\\mu_0)\\geq \\int_{{\\mathbf X}_1}q({\\mathbf x})\\rho({\\mathbf x})d{\\mathbf x}\\geq c\\int_{{\\mathbf X}_1}\\rho({\\mathbf x})d{\\mathbf x}\\label{ss}\n\\end{equation}\nwhere $c$ is the lower bound for the state cost function $q({\\mathbf x})$ on ${\\mathbf X}_1$. The above proves that there exists a constant $M$ such that \n\\begin{eqnarray}\n\\int_{{\\mathbf X}_1}\\rho({\\mathbf x})d{\\mathbf x}=\\int_{{\\mathbf X}_1}\\int_0^\\infty e^{\\gamma t}[{\\mathbb P}_t^c h_0]({\\mathbf x})dt d{\\mathbf x}\\leq M.\\label{before_claim}\n\\end{eqnarray}\nWe next claim that\n\\begin{eqnarray}\n\\lim_{t\\to \\infty}e^{\\gamma t}[{\\mathbb P}^c_t h_0]({\\mathbf x})=0\\label{claim}\n\\end{eqnarray}\nfor $\\mu_0$ almost all ${\\mathbf x}\\in {\\mathbf X}_1$. We use Barbalat's lemma: if $f(t)\\in {\\cal C}^1$, $\\lim_{t\\to \\infty} f(t)=\\alpha$, and $f'(t)$ is uniformly continuous, then $\\lim_{t\\to \\infty} f'(t)=0$.\\\\\nLet $f(t)=\\int_0^t e^{\\gamma \\tau}\\int_{{\\mathbf X}_1}[{\\mathbb P}^c_\\tau h_0]({\\mathbf x})d{\\mathbf x} d\\tau$, so that $f'(t)=e^{\\gamma t} \\int_{{\\mathbf X}_1}[{\\mathbb P}^c_t h_0]({\\mathbf x})d{\\mathbf x}$. For $\\gamma\\leq 0$, $f'(t)$ is uniformly continuous w.r.t. time; this follows from the definition of the P-F semi-group and the fact that the solution of the dynamical system is uniformly continuous w.r.t. time. Barbalat's lemma then applies. 
We have \n\\[0=\\lim_{t\\to \\infty}e^{\\gamma t}\\int_{{\\mathbf X}_1}[{\\mathbb P}^c_t h_0]({\\mathbf x})d{\\mathbf x}=\\lim_{t\\to \\infty}\\int_{{\\mathbf X}_1}e^{\\gamma t}[{\\mathbb P}_t^c h_0]({\\mathbf x})d{\\mathbf x}\\]\nwhich implies \n\\begin{equation}\\lim_{t\\to \\infty}e^{\\gamma t}[{\\mathbb P}_t^c h_0]({\\mathbf x})=0\n\\label{convergence}\n\\end{equation}\nfor a.e. ${\\mathbf x}$ w.r.t. $\\mu_0$ in ${\\mathbf X}_1$.\nNext, we claim that $\\rho({\\mathbf x})$ as defined in (\\ref{definingrho}) can be obtained as a solution of the following equation\n\\begin{equation}\n\\nabla\\cdot (({\\bf f}({\\mathbf x})+{\\mathbf g}({\\mathbf x}){\\mathbf k}({\\mathbf x}))\\rho({\\mathbf x}))=\\gamma \\rho({\\mathbf x})+h_0({\\mathbf x}),\\label{steady_pdegeometric}\n\\end{equation}\nfor ${\\mathbf x}\\in {\\mathbf X}_1$.\nSubstituting (\\ref{definingrho}) in (\\ref{steady_pdegeometric}), we obtain\n\\begin{eqnarray}\n\\nabla\\cdot ({\\bf f}_c \\rho)=\\int_0^\\infty \\nabla\\cdot({\\bf f}_c ({\\mathbf x})e^{\\gamma t}[{\\mathbb P}^c_t h_0]({\\mathbf x}))dt\\nonumber\\\\\n=\\int_0^\\infty -e^{\\gamma t}\\frac{d}{dt}[{\\mathbb P}^c_t h_0]({\\mathbf x})dt\\nonumber\\\\\n=-e^{\\gamma t} [{\\mathbb P}_t^c h_0]({\\mathbf x})|_{t=0}^\\infty+\\int_0^\\infty \\gamma e^{\\gamma t} [{\\mathbb P}_t^c h_0]({\\mathbf x}) dt\\nonumber\\\\\n=h_0({\\mathbf x})+\\gamma \\rho({\\mathbf x}).\\label{eq22}\n\\end{eqnarray}\nIn deriving (\\ref{eq22}) we have used the infinitesimal generator property of the P-F operator, Eq. (\\ref{PF_generator}), and the fact that $\\lim_{t\\to \\infty} e^{\\gamma t}[\\mathbb{P}_t^c h_0]({\\mathbf x})=0$ following (\\ref{convergence}).\nFurthermore, since $h_0({\\mathbf x})> 0$, it follows that $\\rho({\\mathbf x})> 0$ from the positivity property of the P-F operator. Combining (\\ref{ocp_costdiscountproof}) and (\\ref{eq22}), along with the definition of $\\bar {\\boldsymbol \\rho}$, it follows that the OCP can be written as the convex optimization problem (\\ref{eqn_ocpdiscount}). 
That the optimal control ${\\mathbf k}^\\star({\\mathbf x})$ obtained as the solution of the optimization problem (\\ref{eqn_ocpdiscount}) renders the closed-loop system a.e. uniformly stable (for $\\gamma=0$) follows from the results of Theorem \\ref{theorem_necc_suffuniform}, using the fact that the closed-loop system satisfies (\\ref{steady_pdegeometric}) with $\\rho$ that is integrable. That the optimal solution $\\rho^\\star({\\mathbf x})\\in {\\cal L}_1({\\mathbf X}_1)\\cap {\\cal C}^1({\\mathbf X}_1,{\\mathbb R}_{\\geq 0})$ follows from the fact that $h_0\\in {\\cal L}_1({\\mathbb R}^n,{\\mathbb R}_{> 0})\\cap {\\cal C}^1({\\mathbb R}^n)$ and the definitions of $\\rho$ (\\ref{definingrho}) and the P-F operator (\\ref{pf-operator}). \\\\\n\n\n\\textbf{ Proof of Theorem \\ref{theorem_maingeometric_positivediscount}} The proof of this theorem follows exactly along the lines of the proof of Theorem \\ref{theorem_maingeometric} until equation (\\ref{before_claim}). Unlike the proof of Theorem \\ref{theorem_maingeometric}, where the claim (\\ref{claim}) is proved using Barbalat's lemma, in this proof $\\lim_{t\\to \\infty}[{\\mathbb P}_t h_0]({\\mathbf x})=0$ for $\\mu_0$ almost all ${\\mathbf x} \\in {\\mathbf X}_1$ follows from Assumption \\ref{assumption_ocppositivediscount}, since $\\mu_0(B_t)\\leq M e^{-\\gamma' t}$ implies $e^{\\gamma t}[{\\mathbb P}_t h_0]({\\mathbf x})\\to 0$ for $\\mu_0$-a.e. ${\\mathbf x}\\in {\\mathbf X}_1$. The rest of the proof then follows again along the lines of the proof of Theorem \\ref{theorem_maingeometric}. 
\n\n\n\n\n\n\n\\bibliographystyle{apalike}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSecure communication based on message encryption with controlled noise (pseudo-noise or PN)\nstarted with the work of the actress-engineer Hedy Lamarr and her husband-pianist \nGeorges Antheil in 1941 who were interested in military communications during World War II.\n\nLamarr invented frequency hopping to prevent an intruder from jamming a signal sent to \ncontrol torpedoes remotely, since using a single frequency might be easily detected and blocked.\nFrequency hopping is used presently in Bluetooth and other types of wireless communication \nand is called FHSS~\\cite{Carlson} (Frequency Hopping Spread Spectrum). \n\nGiven a set of frequency values $[f_1, f_2, f_3...]$, one selects a well-defined \nsequence of frequencies following an apparently random pattern (picked from PN values) \nshared solely between transmit and receive ends. \nFor an eavesdropper unaware of the sequence used, the signal appears as white noise containing no\nvaluable information and that is the reason why it is termed spread-spectrum communication\ngiven that noise has broader bandwidth than the signal.\n\nFHSS needs another important ingredient to be completely operational: sender-receiver \nperfect synchronization in order to be able to modulate-demodulate with the right frequency. 
\nIt is Antheil, exploiting his musician skills, who developed the \nsynchronization method~\\cite{Carlson} between sender and \nreceiver, enabling them to encrypt and decrypt the transmitted information.\n\nThe analog to digital conversion of frequency hopping gave birth to DSSS\n(Direct Sequence Spread Spectrum) methods based on Galois polynomials, the generators of\nPN sequences that we describe below and that are used presently in Wi-Fi and other \ntypes of digital communications (Smartphones...).\n\n\nConsequently, it is important to relate noise to communications: how it might \nbe used to alter the nature of the signal and ultimately transmit hidden information\nin a way such that it is properly retrieved by the target receiver.\n\nAn ordinary resistor has a fluctuating\nvoltage across it whether standing free or belonging to an electronic \ncircuit. The resistor embodies free electrons that are thermally agitated, \ninducing random voltage fluctuations. \n\nMost physicists\\/engineers refer to this thermal voltage fluctuation as \nJohnson-Nyquist noise, after J.B. Johnson~\\cite{Nyquist}, who was the first \nto observe the effect at Bell Laboratories in 1928, and H. Nyquist~\\cite{Nyquist}, \nwho first explained it. \n\nCircuit noise studies at Bell Laboratories have a very peculiar history since,\nin the 1950's, H. E. D. Scovil and his associates built the world's lowest-noise\nmicrowave amplifiers, cooled by liquid helium to reduce noise \nand incorporated in extremely sensitive radiometers used in radio-astronomy.\n\nRadiometers usually contain calibration noise sources consisting of a resistor \nat a known temperature. During the 1960's, Penzias and Wilson~\\cite{Penzias}, \nwhile improving these radiometers, serendipitously discovered the Big-Bang cosmic \nbackground radiation in 1965.\n\n\nB. 
Yurke~\\cite{Yurke} and his collaborators embarked, in the 1980's, \non a pioneering study of Quantum noise through the quantization of $LC$ networks, \ndrawing from an analogy between an $LC$ circuit and the harmonic oscillator. \nQuantum effects in circuits occur at low temperature \n(as in superconductors) or at very high frequencies. Usual telecommunication and \nsignal processing frequencies are in the kHz-GHz range, whereas \nTera-Hz ($10^{12}$ Hz) devices encountered in medical imaging and \noptical devices operate at $10^{14}$ Hz. Consequently, kHz-GHz frequencies\nare classical, whereas Tera-Hz and optical devices should be considered quantum.\nWith the progress of integrated circuits toward the nanometer scale (presently \nthe minimal feature used in the semiconductor industry is 14 nm) and \nsingle electron as well as quantum dot (akin to synthetic atom) devices, \nwe expect large quantum effects, with quantum noise becoming more \nimportant than thermal noise. \n\n\nThe equivalence between an impedance\nand an oscillator is a very important idea that will trigger and sustain steady\nprogress in several areas of Quantum information and communication.\n\nNyquist derived an expression for White Noise based on the interaction between \nelectrons and electromagnetic waves propagating along a transmission line, using\narguments based on black-body radiation. This means that Nyquist is in fact\na true pioneer in Quantum noise. \n\nHe based his work on Johnson's measurements, which showed that thermal agitation \nof electricity in conductors produces a random \nvoltage variation between the ends of a conductor of resistance $R$ of the form:\n\n\\begin{equation}\n\\langle(V - \\langle V \\rangle)^2 \\rangle= \\mean{\\delta V^2}= 4R k_B T \\Delta f\n\\end{equation}\n\n$\\langle ...\\rangle$ is the average value, the voltage\nfluctuation is $\\delta V = V-\\mean{V}$, and $V$ is the instantaneous voltage measured at\nthe ends of the resistance $R$. 
\n$k_B$ is the Boltzmann constant and $T$ is the absolute temperature. $ \\Delta f$ is the\nbandwidth of voltage fluctuations (see Appendix A).\nThis frequency interval spans the range of a few Hz to several tens of GHz.\n\nThe voltage fluctuation developed across the\nends of the conductor due to Thermal noise is unaffected by the presence or \nabsence of direct current. This can be explained by the fact that electron thermal \nvelocities in a conductor are much greater ($\\sim 10^3$ times) than electron drift velocities. \n\nSince electromagnetic waves are equivalent to photons through the Quantum Mechanics \nDuality principle~\\cite{Duality}, Nyquist based his derivation on black-body radiation, which was\nexplained earlier by Planck.\n\nIn Quantum Mechanics language, a mode of the electromagnetic field is a special case of \na harmonic oscillator since its energy levels are separated by the same energy $\\hbar \\omega$, \ni.e. the $n$-th level $E_n=n\\hbar \\omega$ (ignoring zero-point energy $\\hbar \\omega\/2$) \ncorresponds to an integer number $n$ of (zero rest mass) photons. \nMoreover, $\\omega=2\\pi f$ is the electromagnetic angular frequency\nand not the mechanical one, $\\sqrt{k\/m}$, where $k,m$ are the respective \nspring constant and mass of the mechanical oscillator.\n\n\n\nWhile classical pseudo-noise used in spread-spectrum communications hides the signal\nfrom intrusion by an eavesdropper through an encryption operation (FHSS frequencies follow\na PN sequence whereas in DSSS, the signal is directly multiplied by the PN sequence) using\na set of keys (corresponding to a given PN sequence) that are shared solely between \nthe transmitter and the receiver, Quantum mechanics can be used to encrypt the \nsignal in a completely different fashion.\n\n\nQuantum Mechanics can be used to generate naturally random instead of \ndeterministic pseudo-random numbers. 
In the early days of computing,\ncosmic rays or radioactive sources~\\cite{Schmidt} were used for generating \nnon-deterministic random numbers.\nQuantum phenomena, being essentially non-deterministic, \ncan produce truly \nrandom numbers, and the corresponding devices are called Quantum Random Number\nGenerators~\\cite{Schmidt} (QRNG). \n\nObviously, this is not the only advantage of Quantum Mechanics since,\nat the Garching Max Planck Institute for Quantum Optics (MPQ) in Germany \nand the Technical University of Vienna, \ncommunication experiments showed that Quantum Mechanics provides entanglement \nas an alternative concept for securing information transfer between two remote sites. \n\nEntanglement, first introduced by Einstein (who called it \"spooky \naction at a distance\"), Podolsky, and Rosen~\\cite{EPR}, and \nSchr\\\"odinger~\\cite{Schrodinger} in 1935, can arise when two \nquantum systems are produced from \na common source, e.g. when a spinless particle decays into two particles\ncarrying opposite spins. Such states violate a set of \ninequalities~\\cite{Bell} established by J.S. Bell in 1964, implying \nthat quantum theory embodies non-locality (see section IV for the mathematical\nimplication). Bell inequalities provide a statistical measure of entanglement,\nand their violation can be demonstrated by measuring correlations between\nquantum states.\n\nEntangled quantum systems behave as if they can \naffect each other instantaneously, even when they are extremely far from \neach other, due to the essential non-local~\\cite{Buscemi} character of \nentanglement. \n\nThe strongest advantage of noise-based communication is that, by hiding a\nsignal in noise, it is extremely difficult or even impossible to detect it\nif the eavesdropper does not know the keys or the algorithm used between \nthe transmitter and the receiver. 
In Quantum communication (QC), entanglement\nties together in a very stringent fashion both parties, and any intrusion\nattempted by an eavesdropper, when detected, immediately triggers disruption \nof the communication.\n\nThis work can be taught as an application chapter \nin a general Statistical Physics course at the Graduate level \nor in a specialized Graduate course \nrelated to applications of Quantum Mechanics and Statistical Physics,\nsince physicists generally interested in the applications of\nQuantum Mechanics and Statistical Physics are keen to expand their \nknowledge to areas of Quantum Information Processing and Communications (QIPC).\n\nThis paper is organized as follows: after reviewing several derivations\nof White noise by Nyquist and others in section II, we discuss in section III\nthe Fluctuation Dissipation theorem and its quantum version in order \nto derive in a rigorous way Nyquist's result with the modern quantum noise approach\nand lay the foundations of secure communication from the classical and quantum\npoints of view. In section IV we apply the analysis to secure communications\nwith classical noise (spread-spectrum) and entanglement-based Quantum information \nprocessing and transfer. Discussions and Conclusions are in section V.\n\n\\section{Derivations of Thermal Noise}\n\nNyquist's work is based on phenomenological thermodynamic considerations and electric\ncircuit theory, including the classical equipartition theorem.\nThe latter is based on the number of degrees of freedom of the physical system.\nThis number~\\cite{freedom} is well defined when the different contributions to the system energy \n(translational, rotational, vibrational, electromagnetic...) 
\nare quadratic and decoupled, in the presence of weak interactions.\nIn the general case (non-quadratic energy or strong interactions), \none evaluates the partition function in order to derive the thermodynamical properties.\n\n\\subsection{Nyquist derivation of Thermal Noise}\nNyquist based his derivation on Einstein's remark that many physical systems \nexhibit Brownian motion and that Thermal noise in circuits is nothing more \nthan Brownian motion of the electrons due to ambient temperature.\nDespite the fact that one might find several strange assumptions and even flaws in \nNyquist's derivation, it remains a pioneering and interesting approach since it paves \nthe way to the quantization of electrical circuits and of noise in circuits.\nNyquist considered thermal noise in a resistor $R$ as stemming from \nelectrons interacting with electromagnetic waves represented by \na one-dimensional black-body thermal radiator. Electromagnetic waves travel \nthrough an ideal (lossless) one-dimensional\ntransmission line of length $\\ell$ joining two resistances $R$.\nTaking the characteristic impedance of the transmission line equal to $R$ amounts \nto considering that the line is matched at both ends and \nthat any voltage wave propagating along the line is completely absorbed by the \nend resistor $R$ without any reflection, exactly like a black-body. \n\nEach resistor $R$ has a thermally-fluctuating voltage at\ntemperature $T$ which is transmitted down the line, \nwith a current and voltage wave appearing across the other resistor. 
\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[angle=0,width=50mm,clip=]{nyquist.pdf} \n\\vspace*{-3mm} \n\\caption{Transmission line of length $\\ell$ matched by two resistors $R$ at its ends.}\n \\label{fig1}\n\\end{figure} \n\nA voltage wave propagating along the transmission line is expressed at any point $x$\nand any time $t$ as:\n$V(x,t) = V_0 \\exp[i(\\kappa x - \\omega t)]$ \nwhere $\\kappa$ is the wavenumber and $\\omega$ the angular frequency of the wave.\n\n\nThe velocity $v = \\omega \/\\kappa$ in the line is typically $c\/10$ where \n$c$ is light velocity in vacuum.\nConsidering the transmission line of length $\\ell$ as a domain between $x=0$ and $x = \\ell$ and \nimposing the boundary condition $V (\\ell,t) = V (0,t), \\hspace{0.5cm} \\forall t$,\nwe infer that possible propagating wavenumbers are \ngiven by $\\kappa \\ell = 2n\\pi $, where $n$ is any integer, \nand there are $\\Delta n = (1\/2\\pi)d\\kappa=d\\omega\/(2\\pi v)$ such modes per unit \nlength of the line in the frequency range between $\\omega $ and $\\omega + d\\omega $. \n\n\nIn the Canonical ensemble, the Bose-Einstein distribution for photons gives the mean number\nof photons $\\langle n \\rangle$ per mode at energy $\\hbar \\omega$ and temperature $T$ as:\n\\begin{equation}\n\\langle n \\rangle= \\frac{1}{e^{\\beta \\hbar \\omega} -1}\n\\label{Bose}\n\\end{equation}\n\n$\\beta = \\frac{1}{k_B T}$ is inverse temperature~\\cite{Reif} coefficient. \nThus the energy of the photon gas (ignoring zero-point energy) \nis $\\langle n \\rangle \\hbar \\omega$.\n\n\nDetailed balance allows to equate the power absorbed by a resistor\n(in any angular frequency range $\\omega$ and $\\omega + d\\omega$) \nto the power emitted by it. The energy in the interval $\\omega$ and $\\omega + d\\omega$\nis proportional (see Appendix C) to the number of propagating modes per \nunit length in this frequency range. 
The mean energy per unit time incident\nupon a resistor in this frequency range is:\n\n\\begin{equation}\nP_{in}=v \\left(\\frac{d\\omega}{2\\pi v}\\right) E(\\omega)= \\frac{1}{2 \\pi} E(\\omega) d\\omega \n\\end{equation} \n\nwhere $E(\\omega)$ is the electromagnetic energy at $\\omega$.\nNyquist considers that the total resistance determining the circulating\ncurrent $I$ is $2R$, and thus $I = V \/2R$, as if the line, \nwhose characteristic impedance is $R$, did not contribute at all to\nthe total resistance. Perfect matching at both ends implies that no resistance is \ncontributed by the line since current\/voltage waves are not subjected to scattering. \nThe mean power emitted down the line and absorbed by the resistor at the other end is \n\\begin{equation}\nR\\langle I^2\\rangle= R \\left<\\frac{V^2}{4R^2}\\right>= \\frac{1}{4R} \n\\int\\limits_0^{\\infty} S_V(\\omega) d\\omega\n\\end{equation}\n\nwhere $S_V$ is the voltage Power Spectral Density (PSD) (see Appendix A).\n\nHence we have:\n\n\\begin{equation}\n\\frac{1}{2 \\pi} E(\\omega) d\\omega= \\frac{1}{4R} S_V(\\omega) d\\omega\n\\end{equation}\n\nwith:\n\n\\begin{equation}\nS_V(\\omega) = \\frac{2R}{\\pi} \\frac{\\hbar \\omega}{e^{\\beta \\hbar \\omega} -1}\n\\end{equation}\n\n\nMoving from angular to linear frequency, we get: \n\n\\begin{equation}\nS_V(f) = 4R\\frac{hf}{e^{\\beta hf} -1}\n\\label{frequency_dependent}\n\\end{equation}\n\nVoltage fluctuations are given by (see Appendix A):\n\n\\begin{equation}\n\\sigma_V^2=\\mean{\\delta V^2}=\\int\\limits_{0}^{\\infty}4R\\frac{hf}{e^{\\beta hf} -1} df\n\\end{equation}\n\nPerforming the change of variable $x=\\beta hf$, we get:\n\n\\begin{equation}\n\\sigma_V^2=\\frac{4R (k_B T)^2}{h} \\int\\limits_{0}^{\\infty}\\frac{x}{e^{x} -1} dx\n\\label{integral}\n\\end{equation}\n\nUsing the integral~\\cite{Landau,Gradstein}:\n\n\\begin{equation}\n\\int\\limits_{0}^{\\infty}\\frac{x^{2n-1}}{e^{x} -1} dx= \\frac{(2\\pi)^{2n}B_n}{4n}\n\\end{equation}\n\nwith $B_n$ the Bernoulli 
numbers (in the older convention where $B_1=\\frac{1}{6}$), we select $n=1$.\n\nThe value of the integral in eq.~\\ref{integral} is thus $\\frac{\\pi^2}{6}$. \n\nThe fluctuations are then given by:\n\n\\begin{equation}\n\\sigma_V^2=\\frac{2R (\\pi k_B T)^2}{3h}\n\\end{equation}\n\nThis surprising result implies that, in the classical limit \n($ h \\rightarrow 0$), the fluctuations diverge. In fact, \nordinary frequencies $f \\sim $kHz-GHz are low with respect to \nthe frequency $k_B T\/h \\approx$ 6.25 $\\times 10^{12}$ Hz that corresponds to \nthe thermal room-temperature energy $k_B T$. \nThus $hf \\ll k_B T$ and the number of photons per mode is large since \n$\\langle n \\rangle \\approx k_B T\/hf$. In the classical \nlimit, the number of photons being very large, we get wave-like behaviour\nwhereas in the quantum limit a small $\\langle n \\rangle$ produces particle-like \n(photon) behaviour. \n\n\nExpanding the PSD eq.~\\ref{frequency_dependent} at low frequency $hf \\ll k_B T$:\n\n\\begin{equation}\nS_V(f) \\approx 4R k_B T (1 -\\frac{hf}{2 k_B T})\n\\label{low_frequency}\n\\end{equation}\n\nThus quantum effects no longer intervene in the low frequency limit \n$ f \\rightarrow 0$, yielding Nyquist's result: \n\n\\begin{equation}\nS_V(0) = 4R k_B T\n\\label{frequency_independent}\n\\end{equation}\n\nAnother divergence is encountered when we ignore the frequency dependence of \n$S_V(f)$ in eq.~\\ref{frequency_dependent} and consider\n$S_V(0)$ to be valid for all frequencies, as usually assumed for \"White Noise\"\n(flat spectrum for all frequencies):\n\n\\begin{equation}\n\\sigma_V^2= \\int\\limits_{0}^{\\infty} 4R k_B T df= 4R k_B T \\int\\limits_{0}^{\\infty} df= \\infty\n\\end{equation}\n\n\nThis divergence is similar to the Ultra-Violet catastrophe encountered in black-body\nradiation since $hf \\ll k_B T$ corresponds to the Rayleigh-Jeans regime; its \nresolution is that voltages are filtered in practice, so that we never encounter \nan infinite frequency domain. 
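The size of the quantum correction over a realistic finite band can be checked numerically. The sketch below (the component values $R$, $T$ and the 1 GHz band are illustrative, not taken from the text) integrates the Planck-weighted PSD of eq.~\ref{frequency_dependent} up to a cutoff and compares the result with the flat white-noise value $4Rk_BT\Delta f$:

```python
import math

kB, h = 1.380649e-23, 6.62607015e-34   # exact SI values of kB and h
R, T = 1e3, 300.0                      # illustrative: 1 kOhm at room temperature
df = 1e9                               # finite band Delta f = 1 GHz

# Flat ("white") spectrum: sigma_V^2 = 4 R kB T Delta f
flat = 4 * R * kB * T * df

# Planck-weighted PSD S_V(f) = 4 R h f / (exp(hf/kBT) - 1),
# integrated over the same band with a simple midpoint rule
n = 100_000
dx = df / n
planck = sum(4 * R * h * f / math.expm1(h * f / (kB * T)) * dx
             for f in (dx * (i + 0.5) for i in range(n)))

# Both give sigma_V ~ 0.13 mV at these values
print(math.sqrt(flat), math.sqrt(planck))
```

Since $h\,\Delta f/k_BT \approx 1.6\times 10^{-4}$ here, the two results agree to a few parts in $10^5$, illustrating why the flat spectrum is an excellent approximation throughout the kHz-GHz range.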
\n\nTherefore let us assume we have a finite bandwidth $\\Delta f$ for the voltage \nfluctuations; then:\n\n\\begin{equation}\n\\sigma_V^2= \\int\\limits_{0}^{\\Delta f} 4R k_B T df= 4R k_B T \\Delta f\n\\end{equation}\n\nTo sum up, in order to recover the Johnson-Nyquist result we have to respect two\nconditions: a finite band $\\Delta f < \\infty$ and low frequencies (kHz-GHz range), \n$\\Delta f \\ll k_B T\/h$.\n\nAdditionally, it is surprising to note that Nyquist considered a 1D photon gas \nwith a single polarization,\ndespite the fact that the photon has two polarizations (circular left and right), in sharp \ncontrast with the evaluation of black-body radiation by Planck, who considered \na 3D gas with two polarizations (see Appendix C). Moreover, Nyquist ignored zero-point \nenergy in spite of its importance in quantum circuits \nand the fact that Planck introduced it in his second paper on black-body radiation \n(see further below).\n\n\n\\subsection{RC circuit classical derivation of Thermal Noise}\n\nNyquist's theorem can be proven with the help of a parallel $RC$ circuit containing\na random source representing interactions with a thermal reservoir.\nThe resistor $R$ is parallel to the capacitor $C$, and the random thermal \nagitation of the electrons in the resistor charges\nand discharges the capacitor in a random fashion. 
\n\nStarting from the time dependent equation of motion of the $RC$ circuit, we have:\n\n\\begin{equation}\nR\\frac{dq(t)}{dt}= -\\frac{q(t)}{C} + \\xi(t)\n\\label{langevin}\n\\end{equation}\n\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[angle=0,width=40mm,clip=]{RC.pdf} \n\\caption{RC circuit with the noise source $\\xi(t)$ originating from thermal contact with a reservoir.}\n\\vspace*{-3mm} \n \\label{RC}\n\\end{figure} \n\n$q(t)$ is the capacitor charge and $\\xi(t)$ is a stochastic voltage \n(see Appendix A) stemming from interactions with\na reservoir at temperature $T$ with the following statistical properties: \n\n\\begin{equation}\n\\mean{\\xi(t)}=0 \\mbox{ and } \\mean{\\xi(t)\\xi(t')} = \\lambda \\delta(t-t') \n\\label{white2}\n\\end{equation}\n\n$\\lambda$ is a constant that will be determined later and \neq.~\\ref{langevin} is called a Langevin equation~\\cite{Gardiner} (see Appendix B) due to the\npresence of the time-dependent random term $\\xi(t)$.\n\nAssuming the capacitor is uncharged at time $t=-\\infty$, direct integration of the \nfirst-order differential equation yields:\n\n\\begin{equation}\nq(t)= \\frac{1}{R} \\exp(-t\/RC) \\int\\limits_{-\\infty}^{t} \\exp(t'\/RC) \\xi(t') dt'\n\\label{langevin2}\n\\end{equation}\n\nThe voltage $V(t)$ across the capacitor is related to charge through: $q(t)=C V(t)$, \ntherefore evaluating the charge PSD (see Appendix A) is equivalent to voltage PSD.\n\nUsing properties of $\\xi(t)$ given in eq.~\\ref{white2}, we obtain:\n\n\\begin{equation}\n\\mean{q(t)q(t')}= C^2 \\mean{V(t) V(t')}= \\frac{\\lambda C}{R} \\exp \\left(-\\frac{|t-t'|}{RC}\\right)\n\\label{spectrum}\n\\end{equation}\n\nThis result is expected as discussed in Appendix A since the auto-correlation\n$\\mean{q(t)q(t')}$ must be a decreasing function of the argument $|t-t'|$\ncontrolled by the relaxation time $RC$. 
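As a numerical consistency check, this Langevin equation can be integrated directly. The sketch below (component values are illustrative; note that conventions for the strength of the $\delta$-correlated noise differ by factors of two between references, cf. note~\cite{factor2}) uses the two-sided convention $\mean{\xi(t)\xi(t')}=2Rk_BT\,\delta(t-t')$, for which the stationary state reproduces the equipartition value $\mean{V^2}=k_BT/C$:

```python
import math, random

kB = 1.380649e-23
R, C, T = 1e3, 1e-9, 300.0    # illustrative: 1 kOhm, 1 nF, room temperature
tau = R * C                   # relaxation time RC
dt = tau / 50.0

# Euler-Maruyama step for R dq/dt = -q/C + xi(t), i.e.
# dq = -(q/tau) dt + sqrt(2 kB T / R) dW   (two-sided noise convention)
sigma = math.sqrt(2.0 * kB * T / R)

random.seed(1)
q, vsq, nsamp = 0.0, 0.0, 0
for step in range(200_000):
    q += -(q / tau) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    if step >= 20_000:                    # discard the initial transient
        vsq += (q / C) ** 2
        nsamp += 1

# Sample <V^2> vs the equipartition value kB T / C
# (~4.1e-12 V^2, i.e. V_rms of about 2 microvolts)
print(vsq / nsamp, kB * T / C)
```

With these values the sample average typically matches $k_BT/C$ to within several percent, and the autocorrelation of $q$ decays on the scale of the relaxation time $RC$, as in eq.~\ref{spectrum}.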
\n\nSetting $t=t'$ we have the equality: $C^2 \\mean{V^2(t)}= \\frac{\\lambda C}{R}$, hence\n$\\lambda$ is determined as: $\\lambda= RC \\mean{V^2(t)}$.\n\nThe average $\\mean{V^2(t)}$ can be determined from the energy \n$\\frac{1}{2}C \\mean{V^2}$ stored in the capacitor\nthrough the classical equipartition theorem: \n$\\frac{1}{2}C \\mean{V^2}=\\frac{1}{2} k_B T$ as if a capacitor is equivalent\nto a single degree of freedom (see next section).\n\nThe equipartition theorem can be proven as follows.\nIf a system is at temperature $T$, the probability that it is in a state of energy $E$ \nis proportional to the Boltzmann factor $\\exp(-E\/k_B T )$.\n\n\nIn the $RC$ circuit the probability element $dp$ of finding\na voltage between $V$ and $(V + dV)$ is $dp= A \\exp(-E\/k_B T ) dV$ corresponding \nto an energy $E=\\frac{1}{2}C V^2$ stored in the capacitor $C$. \n\n\nThe prefactor $A$ normalizes the probability density:\n\\begin{equation}\n\\int\\limits_{-\\infty}^{\\infty} A e^{(-C V^2\/2 k_B T )} dV =1\n\\end{equation}\nUsing the result $\\int\\limits_{-\\infty}^{\\infty} e^{-x^2} dx= \\sqrt{\\pi}$ we get \n$A=\\sqrt{\\frac{C}{2 \\pi k_B T}}$.\nThe mean square value of the voltage is obtained from the probability density \nas:\n\\begin{equation}\n\\mean{V^2}= A \\int\\limits_{-\\infty}^{\\infty} V^2 e^{(-C V^2\/2 k_B T )} dV\n\\end{equation}\n\nThus $\\mean{V^2}= \\frac{k_B T}{C}$.\n\n\nThe voltage fluctuation PSD may be evaluated with the equipartition theorem as:\n\n\\begin{equation}\nS_V (\\omega)= \\mean{V^2} \\frac{2RC}{1+(RC \\omega)^2}= \\frac{2R k_B T}{1+(RC \\omega)^2}\n\\label{spectrum2}\n\\end{equation}\n\nAt frequencies such that $\\omega \\ll 1\/RC$ we get $S_V (\\omega)= 2R k_B T$\nrecovering the Johnson-Nyquist result (see note~\\cite{factor2}).\n\n\n\\subsection{Derivation of Thermal noise from Einstein thermodynamic fluctuation theory}\nA macroscopic system at thermodynamic equilibrium is an ensemble of subsystems which are in \nthermodynamic 
equilibrium with each other. The actual values of the variables, \nhowever, may differ from their mean equilibrium values. \n\nThe departure from equilibrium is due to fluctuations in the subsystems.\n\nThe probability, $p$, for an entropy fluctuation $\\delta S$ is obtained by inverting \nBoltzmann's principle $S=k_B \\ln W$ as $p(\\delta S)=p(S-\\langle S\\rangle) \\propto e^{\\delta S\/k_B}$.\nThe probability $p$ is proportional to $W$, the number of available microscopic states, \nand we assume the validity of applying Boltzmann's principle to the entropy fluctuation $\\delta S$.\n\nGenerally, entropy is a function of state variables $X_i$, \ni.e. $S = S(X_1, X_2, . . ., X_i, . . .)$. It \ncan be expanded as $dS$ around equilibrium since it is an analytical \nfunction for most thermodynamic systems when\nsmall fluctuations of the $X_i$ are considered.\n\nAt equilibrium $S$ is maximum and all the first-order derivatives vanish, \n$\\frac{\\partial S}{\\partial X_i}=0, \\forall i$,\nimplying that the first non-vanishing terms are quadratic.\n\n\nThus one has around equilibrium:\n\n\\begin{equation}\n\\delta S \\approx \\frac{1}{2} \\sum_{i,j} \\frac{\\partial^2 S}{\\partial X_i \\partial X_j} \\delta X_i \\delta X_j\n\\end{equation}\n\nThe series expansion is performed at the equilibrium value of $X_i$, hence \n$\\delta X_i = X_i -\\langle X_i\\rangle$. 
\n\nThe combination of Boltzmann's principle and the second-order expansion of entropy about equilibrium\nresults in a Gaussian probability density function (PDF) for finding subsystems with the non-equilibrium value of the variable $X_i$ (akin to the central limit theorem),\n\n\\begin{equation}\np(\\delta X_i)= \\frac{1}{\\sqrt{2\\pi \\sigma_i^2}} e^{-\\frac{\\delta X_i^2}{2\\sigma_i^2}}\n\\end{equation}\n\nmeaning that the average value (or macroscopic equilibrium value) of $X_i$ is $\\langle X_i\\rangle$\nand that the variance of the fluctuations away from equilibrium is \n$\\sigma_i^2=\\mean{\\delta X_i^2}=-k_B\/(\\frac{\\partial^2 S}{\\partial X_i^2})$.\n\n\nThe entropy of a single-phase, one-component system is given in terms of the energy $U$, volume \n$\\Phi$ and pressure $P$ as $dS=\\frac{dU}{T}+\\frac{P}{T}d\\Phi$.\n\n\nIn the presence of a voltage $V$, the entropy expression becomes \n$dS=\\frac{dU}{T}+\\frac{P}{T}d\\Phi -\\frac{q}{T}dV$ since the charge $q$ couples to\nthe voltage $V$. This yields the value \nof the entropy derivative $\\left(\\frac{\\partial S}{\\partial V}\\right)_{T,\\Phi}=-\\frac{q}{T}$.\nThe second derivative is thus obtained as:\n$\\left(\\frac{\\partial^2 S}{\\partial V^2}\\right)_{T,\\Phi}= -\\frac{1}{T} \\left(\\frac{\\partial q}{\\partial V}\\right)_{T,\\Phi}$.\n\nThe voltage fluctuation is expressed as:\n\n\\begin{equation}\n\\sigma_V^2=\\mean{\\delta V^2}=-k_B\/\\left(\\frac{\\partial^2 S}{\\partial V^2}\\right)_{T,\\Phi}=k_B T \\left(\\frac{\\partial V}{\\partial q}\\right)_{T,\\Phi}\n\\end{equation} \n\nAssuming the validity of Ohm's law: $I=\\frac{dq}{dt}=\\frac{V-V_0}{R}$ where \n$V_0$ is some reference voltage (e.g. 
$V_0= \\mean{V}$), \nwe infer that the voltage fluctuations are given by:\n\n\n\\begin{equation}\n\\sigma_V^2=k_B T R \\frac{\\left[d(V-V_0)\/dt\\right]}{(V-V_0)}\n\\end{equation}\n\nEstimating the time derivative as $\\frac{d(V-V_0)}{dt} \\sim \\frac{(V-V_0)}{\\tau}$, with\n$\\tau$ a typical variation time of the voltage, and noting \nthat a band-limited signal of bandwidth~\\cite{Carlson} $\\Delta f$ obeys \n$\\Delta f \\sim \\frac{1}{2 \\tau}$, we finally obtain for\nthe voltage fluctuation:\n\n\\begin{equation}\n\\sigma_V^2= 2R k_B T\\Delta f\n\\end{equation}\n\nwhich agrees with the Johnson-Nyquist result (see note~\\cite{factor2}).\n\n\\section{The quantum fluctuation-dissipation theorem (QFDT)}\n\\label{QFDT}\nThe classical fluctuation-dissipation theorem derived in Appendix B provides a\nrelation between equilibrium fluctuations and dissipative transport\ncoefficients. Besides, it is an interesting route to quantizing classical noise.\n\nCallen and Welton~\\cite{Callen} proved the QFDT using the correspondence theorem, \nwhich allows one to transpose classical results to quantum ones:\na classical physical quantity is replaced by its quantum counterpart,\nan observable operator.\n\nFor a single degree of freedom, linear response theory~\\cite{Hanggi} yields for the\nchange of the expectation value of an operator-valued observable $B$ due to\nthe action of a (classical) force $F(t)$ that couples to the conjugate\ndynamical operator $A$:\n\n\\begin{equation}\n\\langle\\delta B(t)\\rangle = \\int\\limits_{-\\infty}^{t}ds \\chi_{BA}(t-s) F(s)\\,.\n\\end{equation}\n\n$\\delta B(t)=B(t)-\\langle B\\rangle_0$ denotes the difference with respect to\nthe thermal equilibrium average $\\langle B\\rangle_0$ in the absence of the force.\nThe dissipative part of the response function $\\chi_{BA}(t)$ is given by:\n\n\\begin{equation}\n\\chi_{BA}^D(t)= \\frac{1}{2i}[\\chi_{BA}(t) - \\chi_{AB}(-t)]\\,.\n\\end{equation}\n\nThe fluctuations are described by the equilibrium 
correlation function\n\\begin{equation}\nC_{BA}(t) = \\langle\\delta B(t)\\delta A(0)\\rangle_\\beta\n\\end{equation}\n\n\nThe thermal average is taken at an inverse temperature $\\beta$ (see Appendix A).\n\nThe correlation function is\ncomplex-valued because the operators $B(t)$ and $A(0)$ in general do not\ncommute. While the antisymmetric part of $C_{BA}(t)$ is directly related \nto the response function by linear response theory, the symmetrized correlation function PSD:\n\n\\begin{equation}\nS_{BA}(t)=\\frac{1}{2}\\langle\\delta B(t)\\delta A(0)+\\delta A(0)\\delta B(t)\\rangle\n\\end{equation}\n\ndepends on the Fourier transform of the dissipative part of the response function:\n\n\\begin{equation}\nS_{BA}(\\omega) = \\hbar\\coth\\left(\\frac{\\beta \\hbar\\omega}{2}\\right) X_{BA}^D(\\omega)\\,.\n\\end{equation}\n\nwhere $X_{BA}^D(\\omega)$ is the Fourier transform of $\\chi_{BA}^D(t)$. \nThis is the quantum version of the fluctuation-dissipation theorem\nas it links the fluctuations $S_{BA}(\\omega)$ to dissipation as in the\nclassical case (Appendix B).\n\nNote that $S_{BA}$ is a two variable extension of the PSD\npreviously used and defined in Appendix A with a single variable.\nConsequently $S_V$ should be written in fact as $S_{VV}$.\n\nAnalyzing the response of a current $\\delta I$ through an \nelectric circuit subject to a voltage change $\\delta V$, \nimplies $B=I$ and $A=Q$, since voltage couples to charge $Q$.\n\nA circuit response is determined by $\\delta I(\\omega)=Y(\\omega) \\delta V(\\omega)$ \nwhere $Y(\\omega)$ is the admittance. 
\nGiven $I=\\dot Q$, the symmetrized current PSD is $S_{II}(\\omega)=i\\omega S_{IQ}(\\omega)$ yielding:\n\n\\bea\nS_{II} (\\omega) &= \\hbar\\omega\\coth\\left(\\frac{\\beta \\hbar\\omega}{2}\\right)\n\\Re Y(\\omega) \\nonumber \\\\\n&=2 (\\mean{n}+\\frac{1}{2})\\hbar\\omega \\Re Y(\\omega)\\,.\n\\eea\n\nwhere $\\mean{n}$ is the Bose-Einstein factor (given by eq.~\\ref{Bose}) and $\\Re$ is real part symbol.\nIn the high temperature limit $k_B T \\gg \\hbar\\omega$, we recover the Johnson-Nyquist result \n$ S_{II}(\\omega)= 2\\Re Y(\\omega) k_B T$ (see note~\\cite{factor2}). \n\nNyquist, in the last paragraph of his 1928 paper~\\cite{Nyquist},\nhad already anticipated the quantum case. However, he made use of\nPlanck first paper on black-body radiation which does not contain \nzero-point energy term $\\frac{1}{2} \\hbar \\omega$. By missing this term, Nyquist\nignored the 1912 second~\\cite{Planck} paper on black-body radiation by Planck\nwho aimed at correcting his previous work by introducing zero-point energy\nin order to recover the right classical~\\cite{classical} limit of an oscillator \nmean energy per mode. Let us add that zero-point energy can also be shown\nto originate from Heisenberg uncertainty as in the 1D harmonic oscillator \ncase~\\cite{Landauq}.\n\nIf the oscillator mean energy per mode is taken as $\\mean{n} \\hbar \\omega$, \nwe obtain to order $O(\\frac{1}{k_B T})$ the classical (high temperature) limit $k_B T \\gg \\hbar\\omega$:\n\n\\bea\n\\mean{n} \\hbar \\omega & \\approx & \\frac{\\hbar \\omega}{1+ \\frac{\\hbar \\omega}{k_B T} + \\frac{1}{2} {(\\frac{\\hbar \\omega}{k_B T})}^2... 
-1} \\nonumber \\\\\n & \\approx & \\frac{k_B T}{1+ \\frac{1}{2} (\\frac{\\hbar \\omega}{k_B T})...} \\approx k_B T - \\frac{1}{2} \\hbar \\omega.\n\\eea\n\nThus one should rather write the mean energy per mode as $(\\mean{n}+ \\frac{1}{2}) \\hbar \\omega$ \nin order to retrieve the right classical limit $ k_B T$, originating\nfrom the comparison between the Planck photon spectral density expression \n$\\frac{2 \\omega^2}{\\pi^2 c^3} \\left[\\frac{\\hbar \\omega}{e^{\\beta \\hbar \\omega} -1} \\right]$\nand the Rayleigh-Jeans~\\cite{electromag} expression $ \\frac{2 \\omega^2}{\\pi^2 c^3} [k_B T]$\n($c$ is the velocity of light).\n\n\\subsection{$LC$ circuit quantum derivation of Thermal Noise}\nWe start with an analogy~\\cite{Devoret} between the harmonic oscillator and the $LC$ resonator. \nMoving from an $RC$ to an $LC$ circuit\nstems from the fact that a resistance may be defined from $ R = \\sqrt{\\frac{L}{C}} $ \nand that an oscillator underlying a resistor allows a ready route to quantization.\n\nLater on, when we consider a semi-infinite transmission line with inductance $L$ \nand capacitance $C$ per unit length, the line resistance\n$R = \\sqrt{\\frac{L}{C}} $ is the same as that of the single $LC$ resonator.\nThus the transmission line might be viewed simply as a large collection \nof harmonic oscillators (normal modes) and hence can be readily quantized. 
\nThe resistance picture that links the resonator to the transmission line \nis very appealing and was first introduced by \nCaldeira and Leggett~\\cite{Devoret} to describe a continuum as a set of harmonic oscillators, \nas described below.\n\n\nLet us write the Hamiltonian of a single $LC$ resonator circuit in the form:\n\n\\begin{equation}\n\\mathscr{H}_0= \\frac{q^2}{2C_0} + \\frac{\\phi^2}{2L_0}\n\\end{equation}\n\n\nwhere the variables $q$ and $\\phi$ are the capacitor charge and the flux in the inductor.\nDrawing on the complete analogy with the harmonic oscillator, we quantize the variables with:\n\n\\begin{equation}\nq=\\sqrt{\\frac{\\hbar}{2R}}(a+a^{\\dagger}), \\phi=-i\\sqrt{\\frac{\\hbar R}{2}}(a-a^{\\dagger}) \n\\end{equation}\n\nwhere $ R = \\sqrt{\\frac{L_0}{C_0}} $ and $a^{\\dagger}$, $a$ are ladder operators \ncharacterized by the commutation property $[a,a^{\\dagger}]=1$.\n\nThe Hamiltonian is transformed into the standard harmonic oscillator form\n$\\mathscr{H}_0= \\hbar \\omega_0 (a^{\\dagger}a + \\frac{1}{2})$\nwith $\\omega_0= 1\/{\\sqrt{L_0 C_0}}$, the classical resonance frequency.\n \nMoving from a single oscillator to a continuum, \nwe follow the treatment of Goldstein~\\cite{Goldstein} by considering a transmission\nline of length $\\ell$ characterized by an inductance $L$ and capacitance $C$ per unit length. 
\n\nThe Hamiltonian of the system is \n\\begin{equation} \\mathscr{H}(t) = \\int\\limits_0^\\ell dx\\,\\left[ \\frac{q^2(x,t)}{2C} + \\frac{\\phi^2(x,t)}{2L}\\right], \n\\end{equation}\n\nwhere $\\phi(x,t)$ is the local flux density and $q(x,t)$ is the local charge density.\n\n\nWe define a new variable \n\\begin{equation}\n\\theta(x,t) = \\int\\limits_0^x dx'\\, q(x',t) \n\\end{equation} \n\nto express the current density $ j(x,t) = -\\frac{\\partial \\theta(x,t)}{\\partial t}$\nand the charge density $ q(x,t) = \\frac{\\partial \\theta(x,t)}{\\partial x}$\nsuch that the charge conservation rule:\n\n\\begin{equation}\n\\frac{\\partial}{\\partial x} j(x,t) + \\frac{\\partial}{\\partial t} q(x,t) = 0.\n\\end{equation} \n\nis obeyed.\n\nThe Hamiltonian is written as:\n\\begin{equation} \\mathscr{H}(t) = \\int\\limits_0^\\ell dx\\,\n\\left[ \\frac{1}{2C}\\left(\\frac{\\partial \\theta}{\\partial x}\\right)^2 \n+\\frac{L}{2}\\left(\\frac{\\partial \\theta}{\\partial t}\\right)^2 \\right] \n\\end{equation} \n\n\nFrom the Hamilton equations of motion, we get the wave equation \n$\\frac{\\partial^2 \\theta}{\\partial x^2} - \\frac{1}{v^2} \\frac{\\partial^2 \\theta}{\\partial t^2}= 0 $ \nwith velocity $v=1\/{\\sqrt{LC}}$.\n\nThe normal mode expansion, when the transmission line is considered with stationary\nboundary conditions at both ends, $\\theta(0,t) = \\theta(\\ell,t)=0$, is given by:\n\n\n\\begin{equation} \n\\theta(x,t) = \\sqrt{\\frac{2}{\\ell}}\\sum_{n=1}^\\infty b_n(t) \\sin k_n x, \n\\end{equation} \n\nwhere $b_n(t)$ is the time-dependent mode amplitude and the quantized wavevectors are $k_n = \\frac{n\\pi}{\\ell}$.\nSubstituting this form into the Hamiltonian, integrating over $x$ and exploiting the orthogonality\nof the basis functions $[\\cos k_n x, \\sin k_n x]$ over the interval $\\ell$ gives: \n\n\\begin{equation} \n\\mathscr{H}(t) =\\sum_{n=1}^\\infty \\left\\{ \\frac{k_n^2}{2C} {[b_n(t)]}^2 + \\frac{L}{2} \\left[\\frac{d b_n(t)}{dt}\\right]^2 \\right\\}\n\\end{equation} \n\nQuantizing the system 
in terms of harmonic oscillator ladder operator sets\nusing the correspondence: \n\\begin{equation}\nb_n(t) \\rightarrow \\sqrt{\\frac{\\hbar C}{2}} \\frac{\\sqrt{\\omega_n}}{k_n}[a^\\dagger_n(t) + a_n(t)]\n\\end{equation} \n\nwhere $\\omega_n= \\frac{n v \\pi}{\\ell}$ controls Heisenberg time dependence of ladder operators through\n$a^\\dagger_n(t)=\\exp(i \\omega_n t) a^\\dagger_n(0)$ and $a_n(t)~=~\\exp(-i\\omega_n t)~a_n(0)$,\nyields, from charge density $ q(x,t) = \\frac{\\partial \\theta(x,t)}{\\partial x}$, the voltage at $x=0$:\n\n\\bea \nV(t) & =&\\frac{1}{C}\\left[\\frac{\\partial \\theta(x,t)}{\\partial x}\\right]_{x=0} \\nonumber \\\\\n & =& \\sqrt{\\frac{\\hbar}{\\ell C}}\\sum_{n=1}^\\infty \n\\sqrt{ \\omega_n} [e^{i \\omega_n t} a^\\dagger_n(0) + e^{-i\\omega_n t}~a_n(0)] \\nonumber \\\\\n\\eea \n\n\n\nThe voltage PSD is obtained after quantum averaging the voltage time correlation (see Appendix A): \n\\begin{equation}\nS_{V}(\\omega) = \\frac{2 \\pi}{\\ell C} \\sum_{n=1}^\\infty \\hbar\\omega_n [ n (\\omega_n)\\delta(\\omega+\\omega_n) \n+ [n (\\omega_n)+1] \\delta(\\omega-\\omega_n) ], \n\\end{equation}\n\n\nwhere $n (\\omega)=\\mean{a^\\dagger_n(0)a_n(0)}$ is the photon Bose-Einstein distribution \nwith energy $\\hbar\\omega$ defined previously as $\\mean{n}$ in eq.~\\ref{Bose}. 
\n\nTaking the limit $\\ell \\rightarrow\\infty$ and converting the summation to an integration through the replacement\n\n\n\\begin{equation} \n \\sum_{n=1}^\\infty f(\\omega_n) \\approx \\frac{\\ell}{v\\pi} \\int_0^\\infty f(\\omega) d\\omega\n\\end{equation}\n\nyields \n\n\\begin{equation} \nS_{V}(\\omega) = 2R \\hbar\\omega \\{-n (-\\omega) \\Upsilon(-\\omega) +[n (\\omega)+1] \\Upsilon(\\omega) \\}, \n\\end{equation} \n\nwhere $\\Upsilon(\\omega)=\\int_0^\\infty \\delta(\\omega-x) dx$ is the Heaviside step function.\nPhysically, the negative-$\\omega$ term corresponds to energy absorption whereas, in the\npositive-$\\omega$ case, $n(\\omega)$ represents stimulated emission and the +1 represents\nspontaneous emission, making $S_{V}(\\omega)$ asymmetric with respect \nto $\\omega$, in contrast to the classical oscillator case (see Appendix A).\n\nIn the $\\omega > 0$ case ($\\Upsilon(\\omega)$ term retained), the spectral density: \n\\begin{equation} \nS_{V}(\\omega) = \\frac{2R \\hbar\\omega}{1-e^{-{\\hbar \\omega \/ k_{B}T}} },\n\\end{equation} \n\nreduces, in the classical limit~\\cite{classical} $k_{B}T \\gg \\hbar\\omega$,\nto the Johnson-Nyquist noise result $S_{V}(\\omega) = 2R k_{B}T$ (see note~\\cite{factor2}).\n\nIn order to retrieve the quantum fluctuation-dissipation theorem~\\cite{Callen},\nwe take the symmetric part of $S_{V}(\\omega)$ by adding the positive and negative \nspectral contributions:\n\n\\begin{equation} \nS_{V}(\\omega) + S_{V}(-\\omega) = 2R \\hbar\\omega \\coth \\left(\\frac{\\hbar\\omega}{2k_{B}T} \\right)\n\\end{equation} \n\n\n\\section{Application to secure communications}\nThis section examines how one may communicate securely, either classically, by acting on \ncommunication bits with controlled noise (shift-register generated pseudo-random bits), or through\nQuantum Communications based on entanglement.\n\n\\subsection{Spread spectrum communications}\nThe principle of spread-spectrum 
communications, such as DSSS used in Wi-Fi and \ncordless telephony, is based on multiplying the message (made of 0's and 1's) \nby a sequence of pseudo-random bits. \nPseudo-Random Binary Sequences (PRBS), the digital version of PN\nsequences, are produced in a controlled fashion\nwith a deterministic algorithm akin to the pseudo-random numbers used in \na Monte-Carlo algorithm or some other type of simulation~\\cite{Recipes}.\n\nThe main goal of PRBS generation is to draw 0 or 1 in an equally probable fashion\nin order to encrypt the message efficiently (largest bandwidth or spreading). \nA particularly efficient method for producing PRBS is based on primitive polynomials modulo 2, \nor Galois polynomials~\\cite{Knuth}, with the following arithmetic: \\\\\n$0\\oplus 0=0, 0\\oplus 1=1, 1\\oplus 0=1, 1\\oplus 1=0$. \\\\\n$\\oplus$ is the usual symbol for modulo 2 arithmetic, corresponding\nto the logical XOR operation.\n \n\n\\begin{figure}[htbp]\n \\centering\n \\resizebox{70mm}{!}{\\includegraphics[angle=0,clip=]{shift9.pdf}} \n\\vspace*{-3mm} \n\\caption{Shift register connections with feedback set up for PRBS~\\cite{Recipes} generation \non the basis of a (modulo 2) primitive polynomial given by $1+x^4+x^9$. The polynomial is of order 9 and \nthe tap connections (9,4,0) are shown.}\n\\label{XOR}\n\\end{figure} \n\nThe coefficients of primitive polynomials modulo 2 are zero or one, e.g. \n$x^4+x^3+1$; moreover, they cannot be decomposed into a product of simpler modulo 2 polynomials.\nAn illustrative example is $x^2+1$, which cannot be decomposed into simpler\npolynomials with real coefficients but can be decomposed into polynomials with \ncomplex coefficients, $x^2+1=(x+i)(x-i)$ with $i=\\sqrt{-1}$. 
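The modulo 2 rules above can be exercised mechanically on polynomial products; the small helper below (an illustrative sketch, not code from the text) shows how even powers cancel in GF(2) arithmetic:

```python
def polymul_mod2(p, q):
    """Multiply two GF(2) polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= a & b   # coefficients combine with AND, accumulate with XOR
    return out

# (1 + x)(1 + x) = 1 + 2x + x^2, and the 2x term vanishes modulo 2 (1 XOR 1 = 0):
assert polymul_mod2([1, 1], [1, 1]) == [1, 0, 1]   # i.e. 1 + x^2
```

This is the same cancellation that decides whether a given polynomial is primitive over GF(2).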
When this polynomial\nis viewed as a Galois polynomial, it is not primitive since it can be decomposed into a product \nof simpler polynomials $x^2+1 \\equiv x^2+2 x +1=(x+1)(x+1)$ since in modulo 2 arithmetic the term\n$2 x$ is equivalent to 0 according to the above arithmetic rule ($1\\oplus 1=0$).\n\n\nThe method for producing PRBS illustrated in fig.~\\ref{XOR} requires only a single shift \nregister $n$ bits long and a few XOR or mod 2 bit addition operations ($\\oplus$ gates).\n\nThe terms that are allowed to be XOR summed together are indicated by shift register taps.\nThere is precisely one term for each nonzero coefficient in the primitive polynomial \nexcept the constant (zero bit) term. Table~\\ref{Galois} contains a list of polynomials for $n \\le 15$,\nshowing that for a primitive polynomial of degree $n$, the first and last term are 1. \n \n\n\\begin{table}[htbp]\n\\begin{tabular}{l|r}\n\\hline\nConnection Nodes & Equivalent Polynomial \\\\\n\\hline\n (1,\\hspace{0.3cm}0) & $1+x$ \\\\ \n (2,\\hspace{0.3cm}1,\\hspace{0.3cm}0) & $1+x+x^2$ \\\\ \n (3,\\hspace{0.3cm}1,\\hspace{0.3cm}0) & $1+x+x^3$ \\\\\n (4,\\hspace{0.3cm}1,\\hspace{0.3cm}0) & $1+x+x^4$\\\\ \n (5,\\hspace{0.3cm}2,\\hspace{0.3cm}0) & $1+x^2+x^5$\\\\ \n (6,\\hspace{0.3cm}1,\\hspace{0.3cm}0) & $1+x+x^6$\\\\ \n (7,\\hspace{0.3cm}1,\\hspace{0.3cm}0) & $1+x+x^7$\\\\ \n (8,\\hspace{0.3cm}4,\\hspace{0.3cm}3,\\hspace{0.3cm} 2,\\hspace{0.3cm} 0) & $1+x^2+x^3+x^4+x^8$ \\\\\n (9,\\hspace{0.3cm}4,\\hspace{0.3cm}0) & $1+x^4+x^9$\\\\ \n (10,\\hspace{0.3cm}3,\\hspace{0.3cm}0) & $1+x^3+x^{10}$\\\\ \n (11,\\hspace{0.3cm}2,\\hspace{0.3cm}0) & $1+x^2+x^{11}$\\\\ \n (12,\\hspace{0.3cm}6,\\hspace{0.3cm}4,\\hspace{0.3cm}1,\\hspace{0.3cm} 0) & $1+x+x^4+x^6+x^{12}$ \\\\\n (13,\\hspace{0.3cm}4,\\hspace{0.3cm}3,\\hspace{0.3cm}1,\\hspace{0.3cm} 0) & $1+x+x^3+x^4+x^{13}$ \\\\ \n (14,\\hspace{0.3cm}5,\\hspace{0.3cm}3,\\hspace{0.3cm}1,\\hspace{0.3cm} 0) & $1+x+x^3+x^5+x^{14}$\\\\ \n 
(15,\\hspace{0.3cm}1,\\hspace{0.3cm}0) & $1+x+x^{15}$\\\\ \n\\hline\n\\end{tabular}\n\\caption{List of primitive Galois polynomials of degrees 1 to 15.}\n\\label{Galois}\n\\end{table} \n\n\nA Maximum-Length Sequence (MLS) $x[n]$ is a balanced sequence made from \nequally probable symbols with values +1 and -1, such that the MLS averages to zero. \nChoosing $x[n] = (-1)^{a[n]}$ with $a[n]=0$ or $1$ originating from the PRBS yields the \ndesired values $x[n]=+1$ or $-1$, with +1\/-1 equally probable.\nThe PRBS sequence $a[n]$ is produced with a shift register XOR operation \nas discussed previously and illustrated in fig.~\\ref{XOR}.\nThe MLS has many attractive features in addition to its balanced character: \nits standard deviation and peak values are both equal to 1, making its crest factor \n(peak\/standard deviation) equal to 1, the lowest value it can get~\\cite{Recipes}. \nThis is why the MLS has the noise-immunity property~\\cite{Recipes} required in communication\nelectronics. MLS are used not only in secure communications but also in the synchronization \nof digital sequences. 
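The MLS construction just described can be sketched end to end (illustrative code, not from the text, using the (9,4,0) taps of fig.~\ref{XOR}; the shift direction is fixed only up to the reciprocal-polynomial convention, which also gives a maximal sequence). The ±1 sequence is balanced up to a single symbol and its circular, unnormalised auto-correlation has the delta-like shape discussed next:

```python
MASK = (1 << 9) - 1          # 9-bit register

def mls(seed=1):
    """x[n] = (-1)**a[n] with a[n] from a Fibonacci LFSR tapped at bits 9 and 4."""
    state, out = seed, []
    for _ in range(2 ** 9 - 1):
        fb = ((state >> 8) ^ (state >> 3)) & 1   # XOR (mod 2 sum) of the tapped bits
        out.append(1 - 2 * ((state >> 8) & 1))   # bit 0 -> +1, bit 1 -> -1
        state = ((state << 1) | fb) & MASK
    return out

x = mls()
N = len(x)                                       # 2^9 - 1 = 511
assert sum(x) == -1                              # balanced up to a single -1 symbol
assert max(abs(v) for v in x) == 1               # peak value 1 (crest factor ~ 1)

R = lambda n: sum(x[i] * x[(i + n) % N] for i in range(N))
assert R(0) == N                                 # delta-like peak at zero lag
assert all(R(k) == -1 for k in range(1, N))      # flat -1 floor at every other lag
```

The flat $-1$ floor is a standard property of m-sequences (the shift-and-add property), and it is what makes the correlation peak usable for decoding and synchronization.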
\n\n\\begin{figure}[htbp]\n \\centering\n \\resizebox{60mm}{!}{\\includegraphics[angle=0,clip=]{MLS.pdf}} \n\\vspace*{-3mm} \n\\caption{Property of the MLS auto-correlation $R_{xx}[n]$, whose peaks enable decoding\nthe message, displaying the maximum length $N=2^{n_c} -1$ with $n_c$ the number of coding bits.}\n\\label{MLS}\n\\end{figure} \n\nA message $x(t)$ transmitted through a linear time-invariant medium is \nconvolved with the channel impulse response $h(t)$, resulting in an output message:\n\n\\begin{equation}\ny(t)=h(t)*x(t)=\\int_{-\\infty}^{\\infty}h(t-t') x(t') dt'\n\\label{reception}\n\\end{equation}\n\n\nThe message is decoded with a correlation operation\nexploiting the $x[n]$ auto-correlation given by: \n \n\\bea\nR_{xx}[n] &=& \\frac{1}{N-1} \\sum_{i=0}^{N-2}{ x[i] x[n+i] } \\nonumber \\\\\n & =& \\frac{1}{N-1} \\sum_{i=0}^{N-2}{ (-1)^{(a[i] \\oplus a[n+i])} } \n\\eea\n\nwith $N=2^{n_c} -1$, where $n_c$ is the number of coding bits or MLS order. $N$ is the period or the\nlength of the MLS.\n\nAs an example, the auto-correlation $R_{xx}[n]$ of order $n_c=9$ shown in fig.~\\ref{Auto} displays the \n$\\delta$ function-like behaviour required for message decoding or synchronization (shown in fig.~\\ref{MLS}).\n\n\\begin{figure}[htbp]\n \\centering\n \\resizebox{60mm}{!}{\\includegraphics[angle=0,clip=]{auto9.pdf}} \n\\vspace*{-3mm} \n\\caption{Auto-correlation $R_{xx}[n]$ of an order 9 MLS displaying the peak value $2^9-1=511$ \nover the period interval $[0,511]$.}\n\\label{Auto}\n\\end{figure} \n\nAnother application of the MLS is the determination of the impulse response $h(t)$ of any communication channel: \na PRBS signal $x(t)$, whose auto-correlation is a delta function, is sent through the channel and used\nto identify $h(t)$ at the receiver (see eq.~\\ref{reception}) since:\n\n\\begin{equation}\nh(t)=\\int_{-\\infty}^{\\infty}h(t-t') \\delta(t') dt'\n\\label{identification}\n\\end{equation}\n\n\nThe impulse response 
determined with the MLS is known to be immune to distortion. This is why, \nalthough many other methods~\\cite{Stan} exist to measure it with varying success, \nthe MLS is still preferred when distortion is an issue. \n\n\n\\subsection{Quantum communication (QC)}\nQuantum Mechanics provides several important ingredients for information communication\nnot present in its classical counterpart~\\cite{Ekert}.\nFirstly, the information itself sent across a communication channel can be either \nclassical or quantum. The same applies to the channel, which might be classical or quantum.\nInformation transmission is measured with input-output correlations performed \nacross the channel, which can also be classical or quantum, the latter being the signature of entangled states. \nCopying a bit in classical communication is a trivial voltage replication operation,\nwhereas in QC the no-cloning theorem~\\cite{Scarani} forbids copying quantum information\nwithout leaving a trace. Information can be encrypted with classical keys (as in PRBS) or\nquantum keys. Random numbers generated through quantum means\n(QRNG) are superior to PRBS, despite the many interesting properties of the latter.\n\nQuantum networks across which quantum information is carried are also different from their\nclassical counterparts, and finally classical noise as well as quantum noise should be\nproperly described in order to evaluate information error rates.\n\nWe describe every element of quantum communication below.\n\n\\subsubsection{Quantum unit of information: the qubit}\n\nThe discrete~\\cite{continuous} unit of quantum information in 2D Hilbert space is the qubit, \nthe two-state quantum counterpart of the classical bit (see fig.~\\ref{bloch}). 
\n\nIt is represented by a \ntwo-component wavefunction (or spinor~\\cite{spinor}) $\\ket{\\psi(\\theta,\\phi)}$.\n\nComputationally, a qubit can be represented with \n128 classical bits, considering that it is made of two complex numbers, themselves \nequivalent to four 32-bit (single precision) floating-point numbers.\n\nIn the case of photons, the quantum states $\\ket{0}$ and $\\ket{1}$ are equivalent \nto orthogonal polarization states (see fig.~\\ref{bloch}).\n \nThe photon is the logical choice as the basic information carrier in quantum\ncommunications proceeding between the nodes that make up quantum networks. \nInformation can be encoded in photon polarization, orbital momentum, spatial mode \nor time, and any manipulation targeting processing\nor information transfer can be performed with optical operations, \nsuch as using birefringent waveplates to encode polarization...\n\nOn the other hand, atoms are the natural choice for quantum memories since some of \ntheir electronic states can retain quantum information for a very long time.\n\nQuantum networks convey quantum information with nodes that allow for its \nreversible exchange. The latter may be done with two coupled single-atom nodes \nthat communicate via coherent exchange of single photons. In comparison, classical\nfiber-optic networks use pulses containing typically 10$^7$ photons each.\n\nIn order to prevent change in the information or even its loss, it is necessary \nto have tight control over all quantum network components. 
Considering the \nsmallest memory for quantum information as a single atom, with single photons \nas message carriers, efficient information transfer between an atom and a \nphoton requires strong interaction between the two components, not achievable \nwith atoms in free space but only in special optical cavities.\n\n\\begin{figure}[htbp]\n \\centering\n \\resizebox{60mm}{!}{\\includegraphics[angle=0,clip=]{bloch.pdf}} \n\\vspace*{-3mm} \n\\caption{Bloch sphere (Poincar\\'e sphere for photons) representing the possible \nvalues of a quantum information unit in 2D Hilbert space, or qubit, shown as the\nquantum wavefunction $\\ket{\\psi(\\theta,\\phi)}$. \nThe classical bits (0,1) are the poles of the sphere. A qubit is\nany two-component wavefunction given by $(\\alpha \\ket{0} + \\beta \\ket{1})$ \nwith complex coefficients $\\alpha,\\beta$. A pure state exists over the Bloch sphere with\n$|\\alpha|^2 +|\\beta|^2 =1$ whereas a mixed state lies inside the sphere with\n$|\\alpha|^2 +|\\beta|^2 <1$. A classical\nanalog~\\cite{continuous} state lies anywhere on the vertical axis linking the poles.}\n\\label{bloch}\n\\end{figure} \n\nA low-loss cavity, made with a set of strongly reflective mirrors, \nalters the distribution of modes with which the atom interacts, modifying \nthe density of vacuum fluctuations that it experiences at a given frequency\nand thereby enhancing or reducing atomic radiative properties. As a consequence, \nspontaneous emission from the atomic excited state, a \nmajor source of decoherence, can be inhibited in a cavity. \n\nA low-loss optical cavity possesses a high quality factor ($Q > 10^3$), allowing\na photon entering the cavity to be reflected back and forth between the mirrors several \nthousand times, strongly enhancing its coupling with the atom and leading\nto its absorption by the atom in a highly efficient, coherent fashion. 
\n\nOn the other hand, photon emission by an atom inside a cavity is highly directional\nand can be sent to other network nodes in a precisely controlled fashion. \n\n\nControlling qubit states means that an operator is required to allow switching from one qubit state to another. \nA rotation matrix $\\hat{R}_z(\\theta,\\phi)$ represents such an operator in the $\\ket{0},\\ket{1}$\nbasis:\n\n\\begin{equation}\n \\hat{R}_z(\\theta,\\phi)= \\left [\n \\begin{array}{cc}\n \\cos\\frac{\\theta}{2}&-ie^{i\\phi}\\sin\\frac{\\theta}{2}\\\\\n -ie^{-i\\phi}\\sin\\frac{\\theta}{2}&\\cos\\frac{\\theta}{2}\n \\end{array}\\right ] \\label{rotation matrix}\n\\end{equation} \n\nApplying $ \\hat{R}_z(\\theta,\\phi)$ on the state $\\ket{0}$ allows us \nto produce an arbitrary state $(\\theta \\ne 0,\\phi \\ne 0)$ on the Bloch sphere: \n$\\ket{\\psi(\\theta,\\phi)}= \\hat{R}_z(\\theta,\\phi) \\ket{0}= \n\\cos\\frac{\\theta}{2}\\ket{0} -ie^{-i\\phi}\\sin\\frac{\\theta}{2}\\ket{1} \\equiv (\\alpha \\ket{0} + \\beta \\ket{1})$. \nThis applies only for pure states that lie on the\nBloch sphere surface ($|\\alpha|^2 +|\\beta|^2 =1$). In the case of mixed states \n($|\\alpha|^2 +|\\beta|^2 <1$) we need additional operators that alter also the \nwavefunction modulus. Experimentally, photon polarization can be rotated\nwith a half-wavelength plate (called also a Hadamard gate), moreover it can be\nseparated into individual components with a polarizing beam-splitter \n(see ref.~\\cite{Obrien}).\n\n\\subsubsection{Entanglement}\n\nQuantum networks possess special features that are not found in their classical counterparts. 
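Before moving on, the rotation operator of eq.~\ref{rotation matrix} can be checked numerically (an illustrative sketch with arbitrary angle values): it is unitary, and applied to $\ket{0}$ it produces the stated pure state on the Bloch sphere.

```python
import numpy as np

def R_z(theta, phi):
    """Rotation matrix of eq. (rotation matrix) in the |0>, |1> basis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * np.exp(1j * phi) * s],
                     [-1j * np.exp(-1j * phi) * s, c]])

theta, phi = 1.1, 0.7                        # arbitrary test angles
U = R_z(theta, phi)
assert np.allclose(U @ U.conj().T, np.eye(2))          # unitarity
psi = U @ np.array([1.0, 0.0])                         # apply to |0>
expected = np.array([np.cos(theta / 2),
                     -1j * np.exp(-1j * phi) * np.sin(theta / 2)])
assert np.allclose(psi, expected)                      # cos(t/2)|0> - i e^{-i phi} sin(t/2)|1>
assert np.isclose(np.linalg.norm(psi), 1.0)            # pure state: |alpha|^2 + |beta|^2 = 1
```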
\nThis is due to the intrinsic nature of the information processed: \nwhile a classical bit is either 0 or 1 (see fig.~\\ref{bloch}), a qubit \n(quantum bit wavefunction) can take both values at the same time owing to the coherent \nsuperposition inherent to the linearity of Quantum Mechanics.\n \nQuantum Mechanics embodies the notion of entanglement, \ndetected through the violation of Bell inequalities~\\cite{Bell}, which\nbrings a paradigm shift into information processing.\n\nGiven two qubits $\\ket{Q_1}=~\\frac{1}{\\sqrt{2}}(c_0 \\ket{0} + c_1 \\ket{1})$ \nand $\\ket{Q_2}~=~\\frac{1}{\\sqrt{2}}(d_0 \\ket{0} + d_1 \\ket{1})$, \nit is possible to build a state $\\ket{Q_1,Q_2}=\\ket{Q_1}\\otimes \\ket{Q_2}$ such that:\n\\bea\n\\ket{Q_1,Q_2}=\\frac{1}{\\sqrt{4}}(c_0 \\ket{0} + c_1 \\ket{1})\\otimes (d_0 \\ket{0} + d_1 \\ket{1}) \\nonumber \\\\\n =\\frac{1}{\\sqrt{4}}(c_0 d_0 \\ket{0,0}+c_0 d_1 \\ket{0,1}+c_1 d_0 \\ket{1,0}+c_1 d_1 \\ket{1,1}) \\nonumber \\\\\n\\eea\n\nQuantum Mechanics, however, allows for building other states, such as:\n$\\ket{Q}=\\frac{1}{\\sqrt{2}}(c_0 d_1 \\ket{0,1}+c_1 d_0 \\ket{1,0})$,\nwhich are not decomposable into products of constituent states. \n\nThese states are called entangled and can be mapped onto the polarization of single photons, \nwhich can be transferred through an optical fiber between two nodes consisting respectively \nof atoms in state $\\ket{A}$ and state $\\ket{B}$. \n\nQuantum mechanical entanglement must be achieved between the two nodes in order to\nhave successful QC, maintained over the coherence time, preserving the\nintegrity of the quantum information transfer.\n\nIn order to achieve entanglement between two remote network nodes, the polarization of the \nsingle photon emitted by the atom in state $\\ket{A}$ is entangled with the atomic quantum state. \n\nOnce the photon gets absorbed, the entanglement is transferred onto the atom in state $\\ket{B}$\nand a reversible exchange of quantum information is performed between the two nodes. 
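The non-decomposability claim can be tested numerically: reshaping a two-qubit state vector into a $2\times2$ coefficient matrix, a product state has matrix (Schmidt) rank 1 while an entangled state has rank 2 (an illustrative sketch; the amplitudes are arbitrary):

```python
import numpy as np

def schmidt_rank(state):
    """Rank of the 2x2 coefficient matrix of a two-qubit state vector."""
    return np.linalg.matrix_rank(state.reshape(2, 2), tol=1e-12)

q1 = np.array([0.6, 0.8])                    # c0|0> + c1|1>
q2 = np.array([1.0, 1.0]) / np.sqrt(2)       # d0|0> + d1|1>
product = np.kron(q1, q2)                    # |Q1,Q2> = |Q1> (x) |Q2>
entangled = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)   # (|0,1> + |1,0>)/sqrt(2)

assert schmidt_rank(product) == 1            # decomposable: a tensor product
assert schmidt_rank(entangled) == 2          # not decomposable: entangled
```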
\n\n\nExperimental production of entanglement can be achieved between two particles \n(bipartite) or between several particles (multipartite), and for each number of particles \n(two, three, four ...) and each particle type (atoms, electrons, photons...), \nseveral experimental procedures readily exist. It is not limited to microscopic\nparticles, since it can be induced by a light pulse between two macroscopic \nobjects~\\cite{Julsgaard}, each consisting of a gas containing \nabout 10$^{12}$ Cesium atoms.\n\nEntanglement can occur when particles interact and are kept in contact, or \nwhen they emerge from a common ancestor as in the EPR~\\cite{EPR} case, where \na spinless particle decays into two particles carrying opposite spins... \nAnother example is the case of a photon interacting with a non-linear crystal: \nit can be destroyed and replaced with a lower-energy entangled photon pair \n(the process is called SPDC~\\cite{Kwiat}, or Spontaneous Parametric Down-Conversion).\n\nHeralded~\\cite{Monroe} entanglement may occur between non-interacting remote particles\n(Yb ions held in two ion traps, 1 meter apart) not possessing a common ancestor; \nhowever, the entanglement probability $p_E$ is very low ($p_E \\approx 10^{-9}$) since\nentanglement results from the interaction of decay photons emitted by each ion after \ntheir excitation by picosecond laser pulses. Thus $p_E$ needs to be increased substantially \nin order to make it applicable to mass QC. \n\n\n\n\\subsubsection{Quantum Random Number Generation}\nClassical Random Number Generators (RNG) are based on Uniform\nRNG, and the standard statistical quality tests target \nthe uniformity~\\cite{Recipes} of the numbers generated.\nQuantum Mechanics introduces a predictability test to further improve the\nquality of RNG.\n\nThis means that even if the RN is perfectly uniformly distributed, \nit may contain hidden deterministic information and is therefore \nprone to be predictable. 
For instance, PN and PRBS generate uniformly \ndistributed numbers, but since they are produced with a deterministic \nalgorithm, an eavesdropper might, by drawing values and performing\nstatistical analysis~\\cite{Recipes}, be able to make an educated guess \nand access the cipher password, key...\n\nThus, statistical uniformity tests are necessary but not sufficient to guarantee that any given \nRNG is not prone to attack and guessing by an intruder. Quantum RNG (QRNG) offers \"true\nRN\" generation that is very difficult to predict. Using a special program called a \n\"randomness extractor\"~\\cite{Colbeck}, one might eliminate all bit strings originating from \nan implicit deterministic algorithm and keep only truly random bit strings. For this reason,\nthe method is also called amplification of weak randomness~\\cite{Colbeck}.\n\nThe randomness extraction procedure exploits the entropy hierarchy (see Appendix D), which\nattributes a number of bits depending on the entropy estimate used. \nThe R\\'enyi min-entropy is computationally very efficient,\nand a string of perfectly random bits has unit min-entropy per bit, as derived in Appendix D.\n\nStarting from $l$ input bits $X_i$ of low entropy per bit ($s < 1$), the extractor \ncomputes a number $k < l $ of higher-entropy ($s' \\approx 1$) output bits $Y_j$ with\na linear transformation via multiplication by a matrix $m$:\n\\begin{equation}\nY_j = \\sum^l_{i=1}m_{ji}X_i, \\hspace{1cm} j=1...k\n\\label{extractor}\n\\end{equation}\n\n$m$ is built from $l\\times k$ random bits that can be generated with Galois\npolynomials, and all arithmetic operations are done modulo 2 with AND and XOR logic.\n\nThis \"whitening\" procedure can be viewed as the quantum counterpart of the Maximum Entropy\nMethod that is widely used in Image Processing for deblurring images~\\cite{Recipes}.\n\nAs a direct application of this concept, Sanguinetti {\\it et al.}~\\cite{Sanguinetti} used \nsmartphone cameras to produce Quantum Random Numbers. 
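Eq.~\ref{extractor} is just a mod 2 matrix-vector product built from AND and XOR operations. A minimal sketch (the extractor matrix is drawn here from Python's PRNG purely for illustration; the text suggests generating it from Galois polynomials):

```python
import random

def extract(x_bits, k, seed=42):
    """Mod 2 linear extractor: Y_j = XOR_i (m_ji AND X_i), k outputs from l inputs."""
    rng = random.Random(seed)       # illustrative stand-in source for the matrix bits
    l = len(x_bits)
    m = [[rng.getrandbits(1) for _ in range(l)] for _ in range(k)]
    y = []
    for row in m:
        acc = 0
        for m_ji, x_i in zip(row, x_bits):
            acc ^= m_ji & x_i       # all arithmetic modulo 2: AND then XOR-accumulate
        y.append(acc)
    return y

src = random.Random(7)              # stand-in for a low-entropy raw source
x = [src.getrandbits(1) for _ in range(2000)]
y = extract(x, k=400)               # l = 2000 -> k = 400, the sizes quoted in the text
assert len(y) == 400 and set(y) <= {0, 1}
```

Because the map is linear over GF(2), the all-zero input maps to the all-zero output, which makes the construction easy to unit-test.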
\nAfter uniform illumination of the camera image sensor \nby a LED and estimation of the number of photons generated per pixel, a randomness \nextractor algorithm such as the above (eq.~\\ref{extractor}) is used to \ncompute truly random numbers.\n\nFor $X_i$ input bits with low entropy per bit ($s < 1$), the probability that \nthe output $Y_j$ deviates from a perfectly random bit string (with high entropy\nper bit $s' \\approx 1$) is bounded~\\cite{Cover} by:\n\\begin{equation}\n\\epsilon=2^{-(l s - k s')\/2},\n\\end{equation}\n\nPicking a CCD image sensor with 16 bits per pixel (detection capability) \nand a photon flux producing $2\\times 10^4$ electrons per \npixel gives $R_\\infty=8.469$ bits\/pixel (from eq.~\\ref{Renyi}), \nyielding a min-entropy per bit $s=0.529$ (in comparison, the Shannon Entropy \nis 9.191 bits\/pixel, or 0.574 per bit). Selecting input $l=2000$, \noutput $k=400$ and $s'=1$, we get $\\epsilon =2.57 \\times 10^{-197}$.\n\nAs a result, an eavesdropper would have to generate \nan extremely large~\\cite{Cover} amount of random numbers (about $1.97 \\times 10^{99}$)\nbefore noticing any departure from a perfectly random sequence, indicating\nthe superior performance of QRNG with respect to any classical RNG.\n\n\n\n\\subsubsection{Quantum Keys}\nClassical cryptography is based on two types of keys that are used to\nencode and decode messages: secret or symmetric keys, and public or asymmetric keys.\nSymmetric keys are the same for encoding and decoding messages, whereas in \npublic cryptography systems one needs a public key and a private key.\nIn the PGP (Pretty Good Privacy) secure mailing system over the Internet,\nthe sender encodes the message with the receiver's public key and the receiver\ndecodes the message with his private key. \nIn quantum cryptography, the simplest example of secret key sharing between\nsender and receiver (Alice and Bob) in QKD is the BB84~\\cite{Scarani} protocol. 
\nAlice and Bob communicate through two channels: one quantum to send\npolarized single photons and one classical to send ordinary messages.\nAlice selects two bases in 2D Hilbert space consisting each of two\northogonal states: $\\bigoplus$ basis with $(0,\\pi\/2)$\nlinearly polarized photons, \nand $\\bigotimes$ basis with $(\\pi\/4, -\\pi\/4)$ linearly polarized photons.\n\nFour symbols: $\\ket{\\rightarrow}, \\ket{\\uparrow},\\ket{\\nearrow},\\ket{\\searrow}$\nrepresenting polarized single photons are used to transmit quantum data with\n$\\ket{\\nearrow}=\\frac{1}{\\sqrt{2}}(\\ket{\\rightarrow}+ \\ket{\\uparrow})$\nand $\\ket{\\searrow}=\\frac{1}{\\sqrt{2}}(\\ket{\\rightarrow}- \\ket{\\uparrow})$.\n\nIn the $(basis,data)$ representation, the symbols are given by \n$\\ket{\\rightarrow}=(\\bigoplus,0)$, $\\ket{\\uparrow}=(\\bigoplus,1)$ in the $\\bigoplus$\nbasis whereas $\\ket{\\searrow}=(\\bigotimes,0)$, $\\ket{\\nearrow}=(\\bigotimes,1)$ in the\n$\\bigotimes$ basis.\n\nA message transmitted by Alice to Bob over the Quantum channel is a stream of \nsymbols selected randomly among the four described above. \n\nBob performs polarization measurements over the received symbols selecting \nrandomly bases $\\bigoplus$ or $\\bigotimes$.\n\nAfterwards Bob and Alice exchange via the classical channel their mutual\nchoice of bases without revealing the measurement results.\n\nIn the ideal case (no transmission errors, no eavesdropping) \nAlice and Bob should discard results pertaining to \nmeasurements done in different bases (or when Bob failed to detect \nany photon). 
This process is called \"key sifting\", after which the raw key is determined.\n\nAfter key sifting, another process called key distillation~\\cite{Scarani} must be performed.\nThis process entails three steps~\\cite{Scarani}: error correction, privacy amplification and\nauthentication, in order to reveal classical or quantum transmission errors,\ndetect eavesdropping (with the no-cloning theorem~\\cite{Scarani}) and act against it. \n\nIgnoring, for simplicity, key distillation, the raw key size is typically \nabout one quarter of the data sent since both Alice and Bob are selecting \ntheir bases at random (the total probability is roughly \n$\\frac{1}{2} \\times \\frac{1}{2}=\\frac{1}{4}$).\n\n\nA random number generator (RNG), or rather\na random bit generator, can be used to select the $\\bigoplus$ or $\\bigotimes$ bases. \nUsing PRBS or, even better, QRNG to select measurement bases, we infer that,\nby comparison with the classical FHSS crypting method, Quantum Mechanics \nprovides extra flexibility through basis selection. \nSuch an option is simply not available in classical communication.\n\n\n\nOn the negative side, there are several problems that may come up with the \nBB84 scheme. One major obstacle is that,\npresently, it is difficult to produce single photons on a large scale. \nOne approximate method for doing this is to \nuse attenuated laser pulses containing several photons, which might be intercepted\nin the quantum channel by an eavesdropper with a PNS (Photon Number Splitting) attack. \n\nQuantum Communications can be made more secure when QKD is implemented with\nentanglement~\\cite{Scarani}, providing a secure way to distribute secret keys between remote \nusers such that when an eavesdropper is detected, the transmission is halted and the data discarded.\n\n\nThe BBM92~\\cite{Scarani} scheme is an entanglement-based version of the BB84 protocol. 
\nPolarization entangled photon pairs (called EPR pairs or Bell states) are sequentially \ngenerated, with one photon polarization measured by Alice and the other \nmeasured by Bob. EPR pairs are produced from SPDC~\\cite{EPR} by using \na birefringent phase shifter or by slightly rotating the non-linear crystal \nitself, since the state produced by SPDC is:\n\n\\begin{equation}\n\\ket{\\psi}=\\frac{1}{\\sqrt{2}}(\\ket{\\rightarrow \\uparrow} + e^{i \\varphi} \\ket{\\uparrow \\rightarrow})\n\\end{equation}\n\nThus it suffices to modify $\\varphi$ to 0 or $\\pi$, or to place a quarter wave-plate \ngiving a $90^\\circ$ shift in one photon path, to generate all \nBell states~\\cite{Kwiat}.\nThese states are the polarization entangled~\\cite{Scarani} photons:\n\n\\begin{equation}\n\\ket{\\psi^\\pm}=\\frac{1}{\\sqrt{2}}(\\ket{\\rightarrow \\uparrow} \\pm \\ket{\\uparrow \\rightarrow}),\n\\ket{\\phi^\\pm}=\\frac{1}{\\sqrt{2}}(\\ket{\\rightarrow \\rightarrow} \\pm \\ket{\\uparrow \\uparrow})\n\\end{equation}\n\nThe set forms a complete orthonormal basis in 4D Hilbert space for all polarization states\nof a two-photon system.\n\nAlice and Bob choose randomly one of the two bases $\\bigoplus$ or $\\bigotimes$\nto perform the photon polarization measurement.\n\nAfterwards, Alice and Bob communicate over the classical channel \nwhich basis they used for each photon successfully received by Bob.\n\nThe raw key is obtained by retaining the results obtained when the bases used are the same.\nNeither RNG nor QRNG are used in this case since randomness is inherent \nto the EPR pair polarization measurement~\\cite{Scarani}.\nMoreover, no Bell inequality tests are needed since all measurements \nmust be perfectly correlated or anti-correlated.\n\nFor instance, in the $\\ket{\\psi^+}$ state, if one photon is measured to be in the \n$\\ket{\\rightarrow}$ state, the other must be in the $\\ket{\\uparrow}$ state, since \nthe probabilities of measuring ${\\rightarrow \\rightarrow}$ or ${\\uparrow 
\\uparrow}$ are given by \n$|\\bra{\\rightarrow \\rightarrow}\\ket{\\psi^+}|^2=|\\bra{\\uparrow \\uparrow}\\ket{\\psi^+}|^2=0$, \nwhereas the probabilities of measuring ${\\rightarrow \\uparrow}$ and ${\\uparrow \\rightarrow}$ are \n$|\\bra{\\rightarrow \\uparrow}\\ket{\\psi^+}|^2=|\\bra{\\uparrow \\rightarrow}\\ket{\\psi^+}|^2=\\frac{1}{2}$.\nThis is termed perfect anti-correlation.\n\nWhen the polarization measurements are performed in the $\\bigotimes$ basis, we instead get\nperfect correlation. That means that \nthe probabilities of measuring ${\\nearrow \\searrow}$ or ${\\searrow \\nearrow}$ are \n$|\\bra{\\nearrow \\searrow}\\ket{\\psi^+}|^2=|\\bra{\\searrow \\nearrow}\\ket{\\psi^+}|^2=0$,\nwhereas the probabilities of measuring ${\\nearrow \\nearrow}$ or ${\\searrow \\searrow}$ are given by \n$|\\bra{\\nearrow \\nearrow}\\ket{\\psi^+}|^2=|\\bra{\\searrow \\searrow}\\ket{\\psi^+}|^2=\\frac{1}{2}$. \n\nNote that if we consider instead the $\\ket{\\psi^-}$ state, we get perfect \nanti-correlation in both bases $\\bigoplus$ and $\\bigotimes$.\n\n\n\\subsubsection{Quantum Networks}\n\nIn classical communications, the channel transfer function, i.e. the Fourier transform\nof the impulse response $h(t)$, depends on\nfrequency and distance. The channel bandwidth and signal attenuation are functions\nof distance. When a pulse (representing a communication symbol made of several\nbits, depending on the modulation method used) is sent through an optical fiber, \nit undergoes broadening leading to inter-symbol interference, attenuation\nleading to signal loss, and alteration due to noise. \nThus one must evaluate the largest distance\nthat can be covered, at the end of which a repeater is placed in order to filter\nout noise and restore the pulse shape to its original form.\n\nIn QKD, Alice and Bob should be able to determine efficiently their shared secret key \nas a function of the distance $L$ separating them. 
Since the secure key is determined after\nsifting and distillation, the secure key rate is expressed in bps (bits per symbol), given\nthat Alice sends symbols to Bob to sift and distill, with the remaining bits making up the secret key.\n\nThe simplest phenomenological way to estimate the secure key rate versus distance $K(L)$ is to consider \na point-to-point scenario with $K(L) \\propto [A(L)]^n$, where $A(L)= 10^{-\\alpha_0 L\/10}$\nis the signal attenuation versus distance, $\\alpha_0$ is the attenuation \ncoefficient per fiber length and $n=1,2...$. $\\alpha_0$ depends strongly on the wavelength $\\lambda$\nused to transmit information through the fiber. For the standard Telecom wavelength~\\cite{Carlson}\n$\\lambda=1.55 \\mu$m, $\\alpha_0$=0.2 dB\/km.\n\nThe optimal distance~\\cite{Scarani} $L_{opt}$ is determined by the maximum of the objective\nfunction $L K(L)$. Taking the derivative and solving, we get \n${L_{opt}=\\frac{10}{n \\alpha_0 \\ln(10)}}$.\n\nThis yields $L_{opt}=$21.7 km for $n=1$, $L_{opt}=$10.86 km for $n=2$ and \n$L_{opt}=$5.43 km for $n=4$. \n\nErrors produced by noise, interference and damping are represented by the $BER$\n(Bit Error Rate), the ratio of wrong bits over the total number of transmitted bits.\n$BER$ versus distance is as important an indicator of communication quality as bit rate\nversus distance is of communication speed. \n\nIn the quantum case, the $QBER$ (Quantum $BER$) $Q_e(L)$ versus distance \nis the quantity of interest. Regarding the BB84 protocol case, a simple model~\\cite{GYS} \ndelivers the expression:\n\n\\begin{equation}\nQ_e(L)=\\frac{P_e}{ A(L) \\mu \\eta_{Bob} +2P_e}\n\\end{equation}\n\nwhere $P_e$ is the probability of error per clock cycle (measured to be\n8.5 $\\times 10^{-7}$), $\\mu=0.1$ is the average photon flux used by Alice\nto transmit symbols and $\\eta_{Bob}=0.045$ is Bob's apparatus detection efficiency. 
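The optimal-distance formula and the QBER model above can be evaluated directly (a sketch using only the parameters quoted in the text):

```python
import math

alpha0 = 0.2                                   # dB/km at lambda = 1.55 um
A = lambda L: 10 ** (-alpha0 * L / 10)         # fiber attenuation A(L)

def L_opt(n):
    # maximum of the objective function L * K(L) with K(L) ~ A(L)^n
    return 10 / (n * alpha0 * math.log(10))

assert round(L_opt(1), 1) == 21.7
assert round(L_opt(2), 2) == 10.86
assert round(L_opt(4), 2) == 5.43

# QBER model: Q_e(L) = P_e / (A(L) mu eta_Bob + 2 P_e)
Pe, mu, eta_Bob = 8.5e-7, 0.1, 0.045
Qe = lambda L: Pe / (A(L) * mu * eta_Bob + 2 * Pe)
# the QBER grows with distance and saturates at 1/2 when the signal vanishes
assert Qe(0) < Qe(50) < Qe(100) < 0.5
```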
\nThe results are displayed versus distance $L$ in fig.\\ref{QBER}.\n\n\\begin{figure}[htbp]\n \\centering\n \\resizebox{80mm}{!}{\\includegraphics[angle=0,clip=]{QBER.pdf}} \n\\vspace*{-3mm} \n\\caption{(Color on-line) Classical $BER$ and quantum $QBER$ versus distance\n$L$ along an optical fiber using the BB84 protocol considering they start from\nthe same value at $L=0$.}\n\\label{QBER}\n\\end{figure}\n\n\nFig.~\\ref{QBER} shows that the $QBER$ increases faster and takes larger values \nthan the optical fiber classical $BER$. For many digital lightwave systems\nusing ON-OFF modulation~\\cite{Carlson} (1 for light pulse, 0 for no pulse), the classical $BER$ \nis typically about $10^{-9}$ and may reach values in the $[10^{-16}-10^{-15}]$ range.\n\n\nMoving on to estimate the secure key generation rate in bits per symbol (bps) emitted \nby Alice, a simple model for the BB84 protocol~\\cite{GLLP} gives:\n\n\\begin{equation}\nK(L)=G_\\mu \\{ -h_2(Q_e(L)) + \\Omega [1-h_2(e_1)] \\}\n\\end{equation}\n\nwhere $G_\\mu$ is the gain for an average photon flux $\\mu$. \n$\\Omega$ is the fraction of events detected by Bob and produced\nby single-photon signals emitted by Alice. $e_1$ is the corresponding $QBER$ and\n$h_2$ is the binary Shannon entropy~\\cite{Carlson} given by \n$h_2(x)=-x\\log_2(x)-(1-x)\\log_2(1-x)$. 
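The binary entropy $h_2$ and the rate formula above can be sketched as follows; note that the numerical values chosen below for $G_\mu$, $\Omega$ and $e_1$ are illustrative placeholders, not the fitted parameters of the cited references:

```python
import math

def h2(x):
    """Binary Shannon entropy; h2(0) = h2(1) = 0 by convention."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def K(G_mu, Q, Omega, e1):
    """GLLP-style secure key rate in bps: G_mu * (-h2(Q) + Omega*(1 - h2(e1)))."""
    return G_mu * (-h2(Q) + Omega * (1 - h2(e1)))

print(h2(0.5))                                          # -> 1.0
# Illustrative (assumed) numbers: a low QBER gives a positive key rate
print(K(G_mu=1e-3, Q=0.03, Omega=0.9, e1=0.04) > 0)     # -> True
```

The rate vanishes once $h_2(Q_e(L))$ outweighs the single-photon contribution, which is why $K(L)$ drops sharply at large distances in fig.~\ref{Rate}.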
Using the same parameters as in Ref.~\\cite{GYS} and \nbounds for $\\Omega$ and $e_1$ estimated in Ref.~\\cite{GLLP}, we are able to plot \nthe secure key rate versus distance for several\nvalues of the detector error rate $e_D$, as displayed in fig.~\\ref{Rate}.\n\n\\begin{figure}[htbp]\n \\centering\n \\resizebox{80mm}{!}{\\includegraphics[angle=0,clip=]{Rate.pdf}} \n\\vspace*{-3mm} \n\\caption{(Color on-line) Key rate $K(L)$ in bps versus distance $L$ using the same parameters \nas in Ref.~\\cite{GLLP} for several detector error rates $e_D$=0.01, 0.1, 0.2 and 0.4.}\n\\label{Rate}\n\\end{figure} \n\n\n\nFig.~\\ref{Rate} shows that the key rate is small; given that security increases \nwith key length, a major improvement over this simple approach \nis required in order to increase the bps rate substantially.\n\nRecently, a joint team from Cambridge Science Park and the University of Cambridge~\\cite{Comandar} \nsucceeded in substantially increasing the secure key rate using detectors operating at room temperature.\nThe secure key rates obtained were between 1.79 Mbit\/s and 1.2 kbit\/s for \nfiber lengths between 40 km and 100 km, respectively.\n\n\n \n\nRegarding network-building developments, the first elementary quantum network based on interfaces \nbetween single atoms and photons located at two network nodes installed in two distant \nlaboratories connected by an optical fiber link was built in 2012 by a team of \nscientists~\\cite{Ritter} at the Garching MPQ.\n\nUsing the above procedures, Ritter {\\it et al.}~\\cite{Ritter} were able to generate \nentanglement between two remote nodes in two different laboratories separated by a \ndistance of 21 meters and linked by an optical fiber. 
\nThey were able to maintain entanglement for about 100 microseconds, while entanglement \ngeneration itself took about one microsecond.\n\nLater, a team from the Technical University of Vienna~\\cite{Vienna} succeeded in coupling Cesium \natoms to an optical fiber and storing quantum information over a period of time that is\nlong enough to sustain entanglement over distances (hundreds of kilometers) that are large enough\nto achieve reliable long-distance communication.\n\nThe Vienna team extended the coherence time to several milliseconds, \nand given that the speed of light in an optical fiber is about 200 kilometers per \nmillisecond, a substantial separation increase is henceforth achievable, potentially\nreaching several hundred kilometers between nodes over which entanglement and \ncoherence are maintained, paving the way to long-distance QC.\n\n\\subsubsection{Quantum noise}\n\n\nAt low temperature, at very high frequency ($hf > k_B T$), at the mesoscopic scale, or when considering\nsingle-carrier or quantum-dot devices, quantum noise becomes larger than thermal noise, \nimplying a full reconsideration of traditional electronics, which has long been\ndescribed by white noise (thermal noise with no relaxation time), shot noise based on \na single relaxation time (such as generation-recombination noise in semiconductors),\nand pink noise ($1\/f$) originating from a distribution of relaxation times.\n \nRecently, entanglement has been shown to appear spontaneously in photon-assisted \nelectrical noise occurring in quantum conductors consisting of an ac-biased tunnel \njunction cooled at low temperature~\\cite{Sherbrooke}. 
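As a back-of-envelope check of the quantum-noise condition $hf > k_B T$ (using standard CODATA constants; the 18 mK temperature and 7--7.5 GHz frequencies are those of the Sherbrooke experiment described next):

```python
# Crossover frequency below which thermal noise dominates: f = k_B * T / h.
h = 6.62607015e-34    # Planck constant, J s (CODATA)
kB = 1.380649e-23     # Boltzmann constant, J/K (CODATA)

T = 0.018             # 18 mK, the Sherbrooke junction temperature
f_crossover = kB * T / h
print(f_crossover / 1e9)   # -> ~0.375 (GHz)

# The measurement frequencies f1 = 7 GHz, f2 = 7.5 GHz sit well above crossover,
# so the junction noise is deep in the quantum regime:
for f in (7e9, 7.5e9):
    print(h * f > kB * T)  # -> True
```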
\n\nThe experiments were performed in Sherbrooke~\\cite{Sherbrooke} at 18 mK~\\cite{field} on \na Al\/Al$_2$O$_3$\/Al tunnel junction with resistance of 70 $\\Omega$, \nthe signal being emitted by the junction analyzed at two frequencies $f_1$=7 GHz and $f_2$=7.5 GHz.\n\nThe total voltage applied on the junction is given by $V_{dc}+V_{ac} \\cos 2 \\pi f_0 t$\nwith frequency $f_0=f_1+f_2$ chosen to produce optimal junction response as explained below.\n\nFirstly, junction noise becomes photon-assisted because ac-biasing injects photons in the junction.\n\nSecondly, statistical correlations between currents at $f_1$ and $f_2$ as a function of dc voltage\nshowed that photons generated in pairs in the junction are entangled since \ntheir correlations violate Bell inequalities~\\cite{Bell} as discussed below.\nDefining \"position\" $X_1,X_2$ and \"momentum\" operators $P_1, P_2$ from frequency dependent\ncurrent operators $I(\\pm f_{1}), I(\\pm f_{2})$ as:\n\n\\begin{equation}\nX_{1,2}=\\frac{I(f_{1,2})+I(-f_{1,2})}{\\sqrt{2}}, \\, P_{1,2}=\\frac{I(f_{1,2})-I(-f_{1,2})}{i\\sqrt{2}}\n\\end{equation}\n\nwe use the QFDT (see section~\\ref{QFDT}) to evaluate the various quantum correlations versus\ndc voltage $V_{dc}$ applied to the Al\/Al$_2$O$_3$\/Al tunnel junction for a fixed ac \nvoltage $V_{ac}$= 37 $\\mu$V in fig.~\\ref{correlations}. \nViolation of Bell inequalities displayed by $\\langle X_1X_2\\rangle$ and $\\langle P_1P_2\\rangle$ \nfor non-zero $V_{ac}$ indicate entanglement in contrast with the other correlators that\ndo not display any variation with $V_{dc}$. 
$\\langle X_1X_2\\rangle_0$ and $\\langle P_1P_2\\rangle_0$ that \nare evaluated for $V_{ac}=0$ do not show violation\nof Bell inequalities indicating that it is $V_{ac}$ that induces quantum correlations thus\nyielding a simple electrical control parameter for entanglement.\n\n\n\\begin{figure}[htbp]\n \\centering\n \\resizebox{60mm}{!}{\\includegraphics[angle=0,clip=]{correlations.pdf}} \n\\vspace*{-3mm} \n\\caption{(Color on-line) Correlations in Kelvin units between currents at frequencies \n$f_1$ and $f_2$ versus $V_{dc}$ voltage applied to the Al\/Al$_2$O$_3$\/Al tunnel junction for a fixed \nac voltage $V_{ac}$= 37 $\\mu$V. Violation of Bell inequalities is displayed only \nby $\\langle X_1X_2\\rangle$ and $\\langle P_1P_2\\rangle$ quantum correlations for non-zero \napplied ac voltage whereas all other correlators are null including $\\langle X_1X_2\\rangle_0$ \nand $\\langle P_1P_2\\rangle_0$ that are evaluated when the ac voltage is zero.}\n\\label{correlations}\n\\end{figure} \n \n\n\\section{Discussions and conclusions}\nIn this work, the main unifying thread for the description of fluctuations, noise\nand noise-based communication is the ubiquitous presence of harmonic oscillators \nrepresented mostly by photons.\n\nWhile secure classical noise-based communication uses spread-spectrum sequences,\nsecure QC based on QKD implemented with entanglement\nties communicating parties in a way such that any attempt by some\neavesdropper to intercept or interfere in the communication process is\nimmediately sensed and treated appropriately.\n\nEntanglement may be done between quantum objects such as atoms, electrons, photons etc...\nhowever the preferred information carrier is the photon and the entanglement \nthat can be based on polarization, momentum, spatial mode or time can be sustained over \nvery large distances as demonstrated by the Vienna experiment.\n\nHeralded entanglement not necessitating a common ancestor has even been applied by the\nsame 
Garching~\\cite{Ritter2} (MPQ) group to transfer a polarization qubit from a photon to \na single atom with 39\\% efficiency and perform the reverse process, that is from the atom \nto a given photon with an efficiency of 69\\%, proving once again that a long-distance\nQC network based on entangled photons is a serious contender for secure communication.\n\nOn the other hand, the Sherbrooke experiment shows that a major component \nof noise-based QC is built within quantum noise since entanglement is produced \nin quantum conductors by a simple electrical (ac voltage) control.\n\nEven if presently such entanglement occurs at very low temperature (18 mK), \nthe result is still important since that particular type of entanglement could be \nexploited after appropriate conditioning with quantum cryptography techniques in order\nto secure information transfer and communication. \n\n\nPresently several secure QC schemes not based on entanglement exist, moreover\nsome other protocols not relying on key generation and distribution have \nalso been developed.\n\nFor instance QSDC (Quantum Secure Direct Communication) is a branch of QC \nin which the message is sent directly between remote users without \ngenerating a key to encrypt it.\n\nPracticality and robustness of schemes used in QC for securing transmission\nof information will finally decide which of the different methods and \nprotocols will be ultimately adopted as reliable for secure mass communication. \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nNew models of infinitely divisible random matrices have emerged in recent\nyears from both applications and theory. On the one hand, they have been\nimportant in multivariate financial L\\'{e}vy modelling where stochastic\nvolatility models have been proposed using L\\'{e}vy and Ornstein-Uhlenbeck\nmatrix valued processes; see \\cite{BNSe07}, \\cite{BNSe09}, \\cite{BNS11} and\n\\cite{PiSe09a}. 
A key role in these models is played by the positive-definite\nmatrix processes and more general matrix covariation processes.\n\nOn the other hand, in the context of free probability, Bercovici and Pata\n\\cite{BP} introduced a bijection $\\Lambda$ from the set of classical\ninfinitely divisible distributions to the set of free infinitely divisible\ndistributions. This bijection was explained in terms of random matrix\nensembles by Benaych-Georges \\cite{BG} and Cabanal-Duvillard \\cite{CD},\nproviding in a more palpable way the bijection $\\Lambda$ and producing a new\nkind of infinitely divisible random matrix ensembles. Moreover, the results in\n\\cite{BG} and \\cite{CD} constitute a generalization of Wigner's result for the\nGaussian Unitary Ensemble and give an alternative simple infinitely divisible\nrandom matrix model for the Marchenko-Pastur distribution, for which the\nWishart and other empirical covariance matrix ensembles are not infinitely divisible.\n\nMore specifically, it is shown in \\cite{BG} and \\cite{CD} that for any\none-dimensional infinitely divisible distribution $\\mu$ there is an ensemble\nof Hermitian random matrices $(M_{d})_{d\\geq1}$, whose empirical spectral\ndistribution converges weakly almost surely to $\\Lambda(\\mu)$ as $d$ goes to\ninfinity. Moreover, for each $d\\geq1$, $M_{d}$ has a unitary invariant matrix\ndistribution which is also infinitely divisible in the matrix sense. From now\non we call these models BGCD matrix ensembles. We consider additional facts of\nBGCD models in Section 3.\n\nA problem of further interest is to understand the matrix L\\'{e}vy processes\n$\\left \\{ M_{d}(t)\\right \\} _{t\\geq0}$ associated to the BGCD matrix\nensembles. It was pointed out in \\cite{DRA}, \\cite{PAS} that the L\\'{e}vy\nmeasures of these models are concentrated on rank one matrices. 
This means\nthat the random matrix $M_{d}$ is a realization, at time one, of a matrix\nvalued L\\'{e}vy process $\\left \\{ M_{d}(t)\\right \\} _{t\\geq0}$ with rank one\njumps $\\Delta M_{d}(t)=M_{d}(t)-M_{d}(t-).$\n\nThe purpose of this paper is to study the structure of a $d\\times d$ Hermitian\nL\\'{e}vy process $\\left \\{ L_{d}(t)\\right \\} _{t\\geq0}$ with rank one jumps.\nIt is shown in Section 4 that if $L_{d}$ is a $d\\times d$ complex matrix\nsubordinator, it is the quadratic variation of an $\\mathbb{C}^{d}$-valued\nL\\'{e}vy process $X_{d}$, being the converse and extension of a known result\nin dimension one, see \\cite[Example 8.5]{CT}. The process $X_{d}$ is\nconstructed via its L\\'{e}vy-It\\^{o} decomposition. In\\ Section 5 we consider\nnew realizations in terms of covariation of $\\mathbb{C}^{d}$-valued L\\'{e}vy\nprocess for matrix compound Poisson process as well as sample path\napproximations for L\\'{e}vy processes associated to general BGCD ensembles. A\nnew insight on Marchenko-Pastur's type results for empirical covariance matrix\nensembles was recently given in \\cite{BGCD} by considering compound Poisson\nmodels (then infinitely divisible). 
In this direction our results show the\nrole of covariation of $d$-dimensional L\\'{e}vy processes as an alternative to\nempirical covariance processes.\n\nFor convenience of the reader, and since the material and notation in the\nliterature is disperse and incomplete, we include Section 2 with a review on\npreliminaries on complex matrix semimartingales and matrix valued L\\'{e}vy\nprocesses that are used later on in this paper.\n\n\\section{Preliminaries on matrix semimartingales and matrix L\\'{e}vy \\newline\nprocesses}\n\nLet $\\mathbb{M}_{d\\times q}=\\mathbb{M}_{d\\times q}\\left( \\mathbb{C}\\right) $\ndenote the linear space of $d\\times q$ matrices with complex (respectively\nreal) entries with scalar product $\\left \\langle A,B\\right \\rangle\n=~\\mathrm{tr}\\left( AB^{\\ast}\\right) $ and the Frobenius norm $\\left \\Vert\nA\\right \\Vert =\\left[ \\mathrm{tr}\\left( AA^{\\ast}\\right) \\right] ^{1\/2}$\nwhere $\\mathrm{tr}$ denotes the (non normalized) trace. If $q=d,$ we write\n$\\mathbb{M}_{d}=\\mathbb{M}_{d\\times d}$. The set of Hermitian random matrices\nin $\\mathbb{M}_{d}$ is denoted by $\\mathbb{H}_{d}$. Likewise, let\n$\\mathbb{U}_{d\\times q}=\\mathbb{U}_{d\\times q}\\left( \\mathbb{C}\\right)\n=\\left \\{ U\\in \\mathbb{M}_{d\\times q}:U^{\\ast}U=\\mathrm{I}_{q}\\right \\} .$ If\n$q=d,$ $\\mathbb{U}_{d}=\\mathbb{U}_{d\\times d}$.\n\nWe denote by $\\mathbb{H}_{d(1)}$ the set of matrices in $\\mathbb{H}_{d}$ of\nrank one and by $\\mathbb{H}_{d}^{+}$ ($\\overline{\\mathbb{H}}_{d}^{+}$) the set\nof positive (nonnegative) definite matrices in $\\mathbb{H}_{d}$. Likewise\n$\\mathbb{H}_{d(1)}^{+}=\\mathbb{H}_{d(1)}\\cap \\overline{\\mathbb{H}}_{d}^{+}$ is\nthe closed cone of $d\\times d$ nonnegative definite matrices of rank one. 
Let\n$\\mathbb{S}(\\mathbb{H}_{d(1)})$ denote the unit sphere of $\\mathbb{H}_{d(1)}$.\n\n\\begin{remark}\n\\label{Decomp}(a) Every $V\\in \\mathbb{H}_{d(1)}^{+}$ can be written as\n$V=xx^{\\ast}$ where $x\\in \\mathbb{C}^{d}$. One can see that $x$ is unique if we\nrestrict $x$ to the set $C_{+}^{d}=\\{x=\\left( x_{1},x_{2},\\ldots\n,x_{d}\\right) : x_{1}\\geq\n0,\\ x_{j}\\in \\mathbb{C},\\ j=2,\\ldots,d\\}$.\n\n(b) Every $V\\in \\mathbb{H}_{d\\left( 1\\right) }$ can be written as $V=\\lambda\nuu^{\\ast}$ where $\\lambda$ is the eigenvalue of $V$ and $u$ is a unit vector\nin $\\mathbb{C}^{d}$. In this representation the $d\\times d$ matrix $uu^{\\ast}$\nis unique.\n\\end{remark}\n\n\\paragraph{Covariation of complex matrix semimartingales}\n\nAn $\\mathbb{M}_{d\\times q}$-valued process $X=\\left \\{ (x_{ij})(t)\\right \\}\n_{t\\geq0}$ is a matrix semimartingale if $x_{ij}(t)$ is a complex\nsemimartingale for each $i=1,\\ldots,d$, $j=1,\\ldots,q$. Let $X=\\left \\{ (x_{ij})(t)\\right \\} _{t\\geq0}$ $\\in \\mathbb{M}_{d\\times q}$ and $Y=\\left \\{\n(y_{ij})(t)\\right \\} _{t\\geq0}\\in \\mathbb{M}_{q\\times r}$ be semimartingales.\nSimilar to the case of matrices with real entries in \\cite{BNSe07}, we define\nthe matrix covariation of $X$ and $Y$ as the $\\mathbb{M}_{d\\times r}$-valued\nprocess $\\left[ X,Y\\right] :=\\left \\{ \\left[ X,Y\\right] (t):t\\geq\n0\\right \\} $ with entries\n\\begin{equation}\n\\left[ X,Y\\right] _{ij}(t)=\\sum \\limits_{k=1}^{q}\\left[ x_{ik},y_{kj}\\right] (t)\\text{,} \\label{DefCov}\n\\end{equation}\nwhere $\\left[ x_{ik},y_{kj}\\right] (t)$ is the covariation of the\n$\\mathbb{C}$-valued semimartingales $\\left \\{ x_{ik}(t)\\right \\} _{t\\geq0}$\nand $\\left \\{ y_{kj}(t)\\right \\} _{t\\geq0}$; see \\cite[pp 83]{Pr04}. 
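The rank-one decomposition $V=xx^{\ast}$ of Remark (a) above can be illustrated numerically; the helper below is an assumed sketch (not from the paper) that recovers the unique $x \in C_{+}^{d}$ by taking the top eigenpair of $V$ and fixing the phase of the first coordinate:

```python
import numpy as np

def rank_one_factor(V):
    """Recover the unique x in C_+^d with V = x x^* from a rank-one PSD Hermitian V."""
    w, U = np.linalg.eigh(V)            # Hermitian eigendecomposition, ascending
    x = np.sqrt(w[-1]) * U[:, -1]       # top eigenpair: V = w[-1] * u u^*
    phase = x[0] / abs(x[0]) if abs(x[0]) > 0 else 1.0
    return x / phase                    # rotate so that x[0] >= 0, i.e. x in C_+^d

rng = np.random.default_rng(1)
x0 = rng.normal(size=3) + 1j * rng.normal(size=3)
x0 = x0 * (abs(x0[0]) / x0[0])          # put the reference vector in C_+^d
V = np.outer(x0, x0.conj())

x = rank_one_factor(V)
print(np.allclose(np.outer(x, x.conj()), V))   # -> True
print(np.allclose(x, x0))                      # -> True: the C_+^d representative
```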
One has\nthe decomposition into a continuous part and a pure jump part as follows\n\\begin{equation}\n\\left[ X,Y\\right] (t)=\\left[ X^{c},Y^{c}\\right] (t)+\\sum_{s\\leq t}\\left(\n\\Delta X(s)\\right) \\left( \\Delta Y(s)\\right) \\text{,} \\label{ForCov}\n\\end{equation}\nwhere $\\left[ X^{c},Y^{c}\\right] _{ij}(t):=\\sum \\nolimits_{k=1}^{q}\\left[\nx_{ik}^{c},y_{kj}^{c}\\right] (t).$ We recall that for any semimartingale $x$,\nthe process $x^{c}$ is the $a.s.$ unique continuous local martingale $m$ such\nthat $\\left[ x-m\\right] $ is purely discontinuous.\n\nWe will use the facts that $\\left[ X\\right] =\\left[ X,X^{\\ast}\\right] $ is\na nonnegative definite $d\\times d$ matrix, that $\\left[ X,Y\\right] ^{\\top}=\\left[ Y^{\\top},X^{\\top}\\right] $ and that for any nonrandom matrices\n$A\\in \\mathbb{M}_{m\\times d},C\\in \\mathbb{M}_{r\\times n}$ and semimartingales\n$X\\in \\mathbb{M}_{d\\times q},Y\\in \\mathbb{M}_{q\\times r}$\n\\begin{equation}\n\\left[ AX,YC\\right] =A\\left[ X,Y\\right] C\\text{.} \\label{CovBil}\n\\end{equation}\n\n\nThe natural example of a continuous semimartingale is the standard complex\n$d\\times q$ matrix Brownian motion $B=\\left \\{ B(t)\\right \\} _{t\\geq\n0}=\\left \\{ b_{jl}(t)\\right \\} _{t\\geq0}$ consisting of independent\n$\\mathbb{C}$-valued Brownian motions $b_{jl}(t)=\\operatorname{Re}(b_{jl}(t))+\\mathrm{i}\\operatorname{Im}(b_{jl}(t))$ where $\\operatorname{Re}(b_{jl}(t)),\\operatorname{Im}(b_{jl}(t))$ are independent one-dimensional\nBrownian motions with common variance $t\/2$. 
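The decomposition just given can be checked by simulation: for the standard complex matrix Brownian motion defined above the jump sum vanishes, and since each entry satisfies $\mathbb{E}|b_{jl}(t)|^{2}=t$, summing increment products over a fine grid should return approximately $qt\,\mathrm{I}_{d}$. A Monte-Carlo sketch under an assumed Euler discretization:

```python
import numpy as np

rng = np.random.default_rng(0)
d, q, t, N = 3, 2, 1.0, 40000
dt = t / N

# Increments of a standard complex d x q matrix Brownian motion: real and
# imaginary parts are independent Gaussians with variance dt/2 each.
dB = (rng.normal(0.0, np.sqrt(dt / 2), size=(N, d, q))
      + 1j * rng.normal(0.0, np.sqrt(dt / 2), size=(N, d, q)))

# [B, B*](t) is approximated by the sum over the grid of (Delta B)(Delta B)^*
QV = np.einsum('nij,nkj->ik', dB, dB.conj())

print(np.allclose(QV, q * t * np.eye(d), atol=0.1))   # -> True
```

With $d=3$, $q=2$ and $t=1$ the estimate concentrates around $2\,\mathrm{I}_{3}$, matching the quadratic variation formula stated next in the text.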
Then we have $\\left[ B,B^{\\ast}\\right] ^{ij}(t)=\\sum \\nolimits_{k=1}^{q}\\left[ b_{ik},\\overline{b}_{jk}\\right] (t)=qt\\delta_{ij}$ and hence the matrix quadratic variation of\n$B$ is given by the $d\\times d$ matrix process\n\\begin{equation}\n\\left[ B,B^{\\ast}\\right] (t)=qt\\mathrm{I}_{d}\\text{.} \\label{CovCB}\n\\end{equation}\nThe case $q=1$ corresponds to the $\\mathbb{C}^{d}$-valued standard Brownian\nmotion $B$.$\\ $We observe this corresponds to $\\left[ B,B^{\\ast}\\right]\n_{t}=t\\mathrm{I}_{d}$ instead of the common $2t\\mathrm{I}_{d}$ used in the literature.\n\nOther examples of complex matrix semimartingales are L\\'{e}vy processes\nconsidered next.\n\n\\paragraph{Complex matrix L\\'{e}vy processes}\n\nAn infinitely divisible random matrix $M$ in $\\mathbb{M}_{d\\times q}$ is\ncharacterized by the L\\'{e}vy-Khintchine representation of its Fourier\ntransform $\\mathbb{E}\\mathrm{e}^{\\mathrm{itr}(\\Theta^{\\ast}M)}\\allowbreak\n\\ =\\ \\allowbreak \\exp(\\psi(\\Theta))$ with Laplace exponent\n\\begin{equation}\n\\psi(\\Theta)={}\\mathrm{itr}(\\Theta^{\\ast}\\Psi \\text{ }){}-{}\\frac{1}{2}\\mathrm{tr}\\left( \\Theta^{\\ast}\\mathcal{A}\\Theta^{\\ast}\\right) {}+{}\\int_{\\mathbb{M}_{d\\times q}}\\left( \\mathrm{e}^{\\mathrm{itr}(\\Theta^{\\ast}\\xi)}{}-1{}-\\mathrm{i}\\frac{\\mathrm{tr}(\\Theta^{\\ast}\\xi)}{1+\\left \\Vert\n\\xi \\right \\Vert ^{2}}{}\\right) \\nu(\\mathrm{d}\\xi),\\ \\Theta \\in \\mathbb{M}_{d\\times q}, \\label{LKRgen}\n\\end{equation}\nwhere $\\mathcal{A}:\\mathbb{M}_{q\\times d}\\rightarrow \\mathbb{M}_{d\\times q}$ is\na positive symmetric linear operator $($i.e. 
$\\mathrm{tr}\\left( \\Phi^{\\ast\n}\\mathcal{A}\\Phi^{\\ast}\\right) \\geq0$ for $\\Phi \\in \\mathbb{M}_{d\\times q}$ and\n$\\mathrm{tr}\\left( \\Theta_{2}^{\\ast}\\mathcal{A}\\Theta_{1}^{\\ast}\\right)\n=\\mathrm{tr}\\left( \\Theta_{1}^{\\ast}\\mathcal{A}\\Theta_{2}^{\\ast}\\right) $\nfor $\\Theta_{1},\\Theta_{2}\\in \\mathbb{M}_{d\\times q})$, $\\nu$ is a measure on\n$\\mathbb{M}_{d\\times q}$ (the L\\'{e}vy measure) satisfying $\\nu(\\{0\\})=0$ and\n$\\int_{\\mathbb{M}_{d\\times q}}(1\\wedge \\left \\Vert x\\right \\Vert ^{2\n)\\nu(\\mathrm{d}x)<\\infty$, and $\\Psi \\in \\mathbb{M}_{d\\times q}$. The triplet\n$(\\mathcal{A},\\nu,\\Psi)$ uniquely determines the distribution of $M$.\n\n\\begin{remark}\n\\label{ObsGaPart}The notation $\\mathcal{A}\\Theta^{\\ast}$ means the linear\noperator $\\mathcal{A}$ from $\\mathbb{M}_{q\\times d}$ to $\\mathbb{M}_{d\\times\nq}$ acting on $\\Theta^{\\ast}\\in \\mathbb{M}_{q\\times d}$. Some interesting\nexamples of $\\mathcal{A}$ and its corresponding matrix Gaussian distributions are:\n\n(a) $\\mathcal{A}\\Theta^{\\ast}=\\Theta$. This corresponds to a Gaussian matrix\ndistribution invariant under left and right unitary transformations in\n$\\mathbb{U}_{d}$ and $\\mathbb{U}_{q}$, respectively.\n\n(b) $\\mathcal{A}\\Theta^{\\ast}=\\Sigma_{1}\\Theta \\Sigma_{2}$ for $\\Sigma_{1}\\in$\n$\\overline{\\mathbb{H}}_{d}^{+}$ and $\\Sigma_{2}\\in \\overline{\\mathbb{H}\n_{q}^{+}$. In this case the corresponding matrix Gaussian distribution is\ndenoted by $\\mathrm{N}_{d\\times q}(0,\\Sigma_{1}\\otimes \\Sigma_{2})$ \\ and\n$\\Sigma_{1}\\otimes \\Sigma_{2}$ is called a Kronecker covariance. 
It holds that\nif $N$ has the distribution $\\mathrm{N}_{d\\times q}(0,\\mathrm{I}_{d}{\n\\otimes \\mathrm{I}_{q})$, then $\\Sigma_{1}^{1\/2}N\\Sigma_{2}^{1\/2}$ has\ndistribution $\\mathrm{N}_{d\\times q}(0,\\Sigma_{1}\\otimes \\Sigma_{2})$.\n\n(c) When $q=d$, $\\mathcal{A}\\Theta^{\\ast}=$ $\\mathrm{tr}(\\Theta)\\mathrm{I\n_{d}$ is the covariance operator of the Gaussian random matrix $g\\mathrm{I\n_{d}$ where $g$ is a one-dimensional random variable with a standard Gaussian distribution.\n\\end{remark}\n\nLet $\\mathbb{S}_{d\\times q}$ be the unit sphere of $\\mathbb{M}_{d\\times q}$\nand let $\\mathbb{M}_{d\\times q}^{0}=\\mathbb{M}_{d\\times q}\\backslash \\{0\\}$. If\n$\\nu$ is a L\\'{e}vy measure on $\\mathbb{M}_{d\\times q}$, then there are a\nmeasure $\\lambda$ on $\\mathbb{S}_{d\\times q}$ with $\\lambda(\\mathbb{S\n_{d\\times q})\\geq0$ and a measure $\\nu_{\\xi}$ for each $\\xi \\in \\mathbb{S\n_{d\\times q}$ with $\\nu_{\\xi}((0,\\infty))>0$ such that\n\\[\n\\nu(E)=\\int_{\\mathbb{S}_{d\\times q}}\\lambda(\\mathrm{d}\\xi)\\int_{(0,\\infty\n)}1_{E}(u\\xi)\\nu_{\\xi}(\\mathrm{d}u),\\qquad E\\in \\mathcal{B}(\\mathbb{M}_{d\\times\nq}^{0}).\n\\]\nWe call $(\\lambda,\\nu_{\\xi})$ a polar decomposition of $\\nu$. 
When $d=q=1$,\n$\\nu$ is a L\\'{e}vy measure on $\\mathbb{R}$ and $\\lambda$ is a measure in the\nunit sphere $\\mathbb{S}_{1\\times1}=\\left \\{ -1,1\\right \\} $ of $\\mathbb{R}$.\n\nAny $\\mathbb{M}_{d\\times q}$-valued L\\'{e}vy process $L=\\left \\{ L(t)\\right \\}\n_{t\\geq0}$ with triplet $(\\mathcal{A},\\nu,\\Psi)$ is a semimartingale with the\nL\\'{e}vy-It\\^{o} decomposition\n\\begin{equation}\nL(t)=t\\Psi+B_{\\mathcal{A}}(t)+\\int_{[0,t]}\\int_{\\left \\Vert V\\right \\Vert \\leq\n1}V\\widetilde{J}_{L}(\\mathrm{d}s\\mathrm{,d}V)+\\int_{[0,t]}\\int_{\\left \\Vert\nV\\right \\Vert >1}VJ_{L}(\\mathrm{d}s,\\mathrm{d}V)\\text{, }t\\geq0, \\label{LID\n\\end{equation}\nwhere:\n\n(a) $\\left \\{ B_{\\mathcal{A}}(t)\\right \\} _{t\\geq0}$ is a $\\mathbb{M}_{d\\times\nq}$-valued Brownian motion with covariance $\\mathcal{A}$, i.e. it is a\nL\\'{e}vy process with continuous sample paths (a.s.) and each $B_{\\mathcal{A\n}(t)$ is centered Gaussian with\n\\[\n\\mathbb{E}\\left \\{ \\mathrm{tr(}\\Theta_{1}^{\\ast}B_{\\mathcal{A}}(t){\n)\\mathrm{tr}\\left( \\Theta_{2}^{\\ast}B_{\\mathcal{A}}(s){}\\right) {}\\right \\}\n=\\min(s,t)\\mathrm{tr}\\left( \\Theta_{1}^{\\ast}\\mathcal{A}\\Theta_{2}^{\\ast\n}\\right) {}\\text{for each }\\Theta_{1},\\Theta_{2}\\in \\mathbb{M}_{d\\times q},\n\\]\n\n\n(b) $J_{L}(\\cdot,\\cdot)$ is the Poisson random measure of jumps on\n$[0,\\infty)\\times \\mathbb{M}_{d\\times q}^{0}$. 
That is, $J_{L}(t,E)=\\# \\{(0\\leq\ns\\leq t:\\allowbreak \\Delta L_{s}\\in E\\},$ $E$ $\\in \\mathbb{M}_{d\\times q}^{0},$\nwith intensity measure $Leb\\otimes \\nu$, and independent of $\\left \\{\nB_{\\mathcal{A}}(t)\\right \\} _{t\\geq0}$,\n\n(c) $\\widetilde{J}_{L}$ is the compensator measure of $J_{L}$, i.e.\n\\[\n\\widetilde{J}_{L}(\\mathrm{d}t,\\mathrm{d}V)=J_{L}(\\mathrm{d}t,\\mathrm{d\nV)-\\mathrm{d}t\\nu(\\mathrm{d}V);\n\\]\nsee for example \\cite{Ap07} for the most general case of L\\'{e}vy processes\nwith values in infinite dimensional Banach spaces.\n\nAn $\\mathbb{M}_{d\\times q}$-valued L\\'{e}vy process $L=\\left \\{ L(t)\\right \\}\n_{t\\geq0}$ has bounded variation if and only if its L\\'{e}vy-It\\^{o}\ndecomposition takes the form\n\\begin{equation}\nL(t)=t\\Psi_{0}+\\int_{[0,t]}\\int_{\\mathbb{M}_{d\\times q}^{0}}VJ_{L\n(\\mathrm{d}s,\\mathrm{d}V)=t\\Psi_{0}+\\sum_{s\\leq t}\\Delta L(s)\\text{, }t\\geq0,\n\\label{LIFV\n\\end{equation}\nwhere $\\Psi_{0}=$ $\\Psi-\\int_{\\left \\Vert V\\right \\Vert \\leq1}V\\nu\n(\\mathrm{d}V).$\n\nThe matrix quadratic variation (\\ref{ForCov}) of $L$ is given by the\n$\\overline{\\mathbb{H}}_{d}^{+}$-valued process\n\\begin{equation}\n\\lbrack L](t)=\\left[ B_{\\mathcal{A}},B_{\\mathcal{A}}^{\\ast}\\right]\n(t)+\\int_{[0,t]}\\int_{\\mathbb{M}_{d\\times q}^{0}}VV^{\\ast}J_{L}(\\mathrm{d\ns,\\mathrm{d}V)=\\left[ B_{\\mathcal{A}},B_{\\mathcal{A}}^{\\ast}\\right]\n(t)\\mathcal{+}\\sum_{s\\leq t}\\Delta L(s)\\Delta L(s)^{\\ast}. \\label{QVLP\n\\end{equation}\n\n\nIn\\ Section 3 we prove a partial converse of the last result in the case\n$q=1.$\n\n\\begin{remark}\n\\label{ObsGaPartqv} On the lines of Remark \\ref{ObsGaPart} we have the\nfollowing observations for the quadratic variation of the continuous part in\n(\\ref{QVLP}):\n\n(a) When $\\mathcal{A}\\Theta^{\\ast}=\\Theta,$ $\\left[ B_{\\mathcal{A\n},B_{\\mathcal{A}}^{\\ast}\\right] (t)=qt\\mathrm{I}_{d}$. 
This follows from\n(\\ref{CovCB}) since $B_{\\mathcal{A}}(t)$ is a standard complex $d\\times q$\nmatrix Brownian motion.\n\n(b) When $\\mathcal{A}\\Theta^{\\ast}=\\Sigma_{1}\\Theta \\Sigma_{2}$ for $\\Sigma\n_{1}\\in$ $\\overline{\\mathbb{H}}_{d}^{+}$ and $\\Sigma_{2}\\in \\overline\n{\\mathbb{H}}_{q}^{+}$, we have $B_{\\mathcal{A}}(t)=\\Sigma_{1}^{1\/2\nB(t)\\Sigma_{2}^{1\/2}$ where $B=\\left \\{ B(t)\\right \\} _{t\\geq0}$ is a standard\ncomplex $d\\times q$ matrix Brownian motion. Then, using (\\ref{CovBil}) we\nhave\n\\[\n\\left[ B_{\\mathcal{A}},B_{\\mathcal{A}}^{\\ast}\\right] (t)=\\left[ \\Sigma\n_{1}^{1\/2}B\\Sigma_{2}^{1\/2},\\Sigma_{2}^{1\/2}B^{\\ast}\\Sigma_{1}^{1\/2}\\right]\n(t)=\\Sigma_{1}^{1\/2}\\left[ B\\Sigma_{2}^{1\/2},\\Sigma_{2}^{1\/2}B^{\\ast}\\right]\n(t)\\Sigma_{1}^{1\/2}=t\\mathrm{tr}(\\Sigma_{2})\\Sigma_{1},\n\\]\nwhere we have also used the easily checked fact $\\left[ B\\Sigma_{2\n^{1\/2},\\Sigma_{2}^{1\/2}B^{\\ast}\\right] (t)=t\\mathrm{tr}(\\Sigma_{2})I_{d}$.\n\n(c) When $q=d$ and $\\mathcal{A}\\Theta^{\\ast}=$ $\\mathrm{tr}(\\Theta\n)\\mathrm{I}_{d}$, we have $\\left[ B_{\\mathcal{A}},B_{\\mathcal{A}}^{\\ast\n}\\right] (t)=t\\mathrm{I}_{d}$ since $B_{\\mathcal{A}}(t)=b(t)\\mathrm{I}_{d}$\nwhere $b=\\left \\{ b(t)\\right \\} _{t\\geq0}$ is a one-dimensional Brownian motion.\n\\end{remark}\n\nThe extension of the notion of a real subordinator to the matrix case relies\non cones. A cone $K$ is a nonempty, closed, convex subset of $\\mathbb{M\n_{d\\times q}$ such that if $A\\in K$ and $\\alpha \\geq0$ imply $\\alpha A\\in K$. A\ncone $K$ determines a partial order in $\\mathbb{M}_{d\\times q}$ by defining\n$V_{1}\\leq_{K}V_{2}$ for $V_{1},V_{2}\\in \\mathbb{M}_{d\\times q}$ whenever\n$V_{2}-V_{1}\\in K$. 
A $\\mathbb{M}_{d\\times q}$-valued L\\'{e}vy process\n$L=\\left \\{ L(t)\\right \\} _{t\\geq0}$ is $K$- increasing if $L(t_{1})\\leq\n_{K}L(t_{2})$ for every $t_{1}0,$\n$\\left \\{ M_{d}(t)\\right \\} _{t\\geq0}$ is the $d\\times d$ matrix compound\nPoisson process $M_{d}(t)=\\sum_{k=1}^{N(t)}u_{k}^{d}u_{k}^{d\\ast}$ where\n$\\left \\{ u_{k}^{d}\\right \\} _{k\\geq1}$ is a sequence of independent uniformly\ndistributed random vectors on the unit sphere of $\\mathbb{C}^{d}$ independent\nof the Poisson process $\\left \\{ N(t)\\right \\} _{t\\geq0}$, and $\\Lambda(\\mu)$\nis the Marchenko-Pastur distribution of parameter $\\lambda>0$; see\n\\cite[Remark 3.2]{BG}. We observe that in this case $\\left \\{ M_{d\n(t)\\right \\} _{t\\geq0}$ is a matrix covariation (quadratic) process rather\nthan a covariance matrix process as in the Wishart or other empirical\ncovariance processes.\n\nProposition \\ref{polar} below collects computations in \\cite{BG}, \\cite{CD}\nand \\cite{DRA} to summarize the L\\'{e}vy triplet of a general BGCD matrix\nensemble in an explicit manner. Let $\\nu|_{(0,\\infty)}$ and $\\nu\n|_{(-\\infty,0)}$ denote the corresponding restrictions to $\\left(\n0,+\\infty \\right) $ and $\\left( -\\infty,0\\right) $ for any L\\'{e}vy measure\n$\\nu$, respectively.\n\n\\begin{proposition}\n\\label{polar}Let $\\mu \\ $be an infinitely divisible distribution in\n$\\mathbb{R}$ with L\\'{e}vy triplet $(a^{2}$,$\\mathcal{\\psi},\\nu)$ and let\n$(M_{d})_{d\\geq1}$ be a BGCD matrix ensemble for $\\Lambda(\\mu)$. Then, for\neach $d\\geq1$ $M_{d}$ has the L\\'{e}vy-Khintchine representation\n(\\ref{LKRgen}) with L\\'{e}vy triplet $(\\mathcal{A}_{d},\\Psi_{d},\\nu_{d})$ where\n\na) $\\Psi_{d}=\\mathcal{\\psi}\\mathrm{I}_{d}$\n\nb)\n\\begin{equation}\n\\mathcal{A}_{d}\\Theta=a^{2}\\frac{1}{d+1}(\\Theta+\\mathrm{tr}(\\Theta\n)\\mathrm{I}_{d}),\\quad \\Theta \\in \\mathbb{H}_{d}. 
\\label{GPBGCD\n\\end{equation}\n\n\nc)\n\\begin{equation}\n\\nu_{d}\\left( E\\right) =d\\int_{\\mathbb{S}(\\mathbb{H}_{d(1)})}\\int\n_{0}^{\\infty}1_{E}\\left( rV\\right) \\nu_{V}\\left( \\mathrm{d}r\\right)\n\\Pi \\left( \\mathrm{d}V\\right) \\text{,\\quad}E\\in \\mathcal{B}\\left(\n\\mathbb{H}_{d}\\backslash \\left \\{ 0\\right \\} \\right) \\text{,} \\label{PDBGCD\n\\end{equation}\nwhere $\\nu_{V}=\\nu|_{(0,\\infty)}$ or $\\nu|_{(-\\infty,0)}$ according to\n$V\\geq0$ or\\ $V\\leq0$ and $\\Pi$ is a measure on $\\mathbb{S}(\\mathbb{H\n_{d(1)})$ such that\n\\begin{equation}\n\\Pi \\left( D\\right) =\\int \\limits_{\\mathbb{S}(\\mathbb{H}_{d(1)})\\cap\n\\overline{\\mathbb{H}}_{d}^{+}}\\int \\limits_{\\left \\{ -1,1\\right \\} \n1_{D}\\left( tV\\right) \\lambda \\left( \\mathrm{d}t\\right) \\omega_{d}\\left(\n\\mathrm{d}V\\right) \\text{,\\quad}D\\in \\mathcal{B}\\left( \\mathbb{S\n(\\mathbb{H}_{d(1)})\\right) \\text{,} \\label{pi\n\\end{equation}\nwhere $\\lambda$ is the spherical measure of $\\nu$ and $\\omega_{d}$ is the\nprobability measure on $\\mathbb{S}(\\mathbb{H}_{d(1)})\\cap \\overline{\\mathbb{H\n}_{d}^{+}$ induced by the transformation $u\\rightarrow V=uu^{\\ast}$, where $u$\nis a uniformly distributed column random vector in the unit sphere of\n$\\mathbb{C}^{d}$.\n\\end{proposition}\n\n\\begin{proof}\n(a) It follows from the first term in the L\\'{e}vy exponent of $M_{d}$ in page\n$635$ of \\cite{CD}, where the notation $\\Lambda_{d}(\\mu)$ is used for the\ndistribution of $M_{d}$. For (b), the form of the covariance operator\n$\\mathcal{A}_{d}$ was implicitly computed in the first example in Section II.C\nof \\cite{CD}. 
Finally, the polar decomposition of the L\\'{e}vy measure\n(\\ref{PDBGCD}) was found in \\cite{DRA}.\n\\end{proof}\n\nThe L\\'{e}vy-It\\^{o} decomposition of the L\\'{e}vy process associated to the\nBGCD model $M_{d}$ is given by\n\\begin{equation}\nM_{d}(t)=\\mathcal{\\psi}td\\mathrm{I}_{d}+B_{\\mathcal{A}_{d}}(t)+\\int\n_{[0,t]}\\int_{\\left \\{ \\left \\Vert V\\right \\Vert \\leq1\\right \\} \\cap\n\\mathbb{H}_{d(1)}}V\\widetilde{J}_{d}(\\mathrm{d}s\\mathrm{,d}V)+\\int_{[0,t]\n\\int_{\\left \\{ \\left \\Vert V\\right \\Vert >1\\right \\} \\cap \\mathbb{H}_{d(1)\n}VJ_{d}(\\mathrm{d}s,\\mathrm{d}V)\\text{,} \\label{LIDbgcd\n\\end{equation}\nwhere $t\\geq0$, $\\mathcal{A}_{d}\\Theta=a^{2}\\frac{1}{d+1}(\\Theta\n+\\mathrm{tr}(\\Theta)\\mathrm{I}_{d})$, $J_{d}(t,E)=\\# \\left \\{ 0\\leq s\\leq\nt:\\Delta M_{d}(s)\\in E\\right \\} =J_{d}(t,E\\cap \\mathbb{H}_{d(1)})$ for any\nmeasurable $E$ $\\in \\mathbb{H}_{d}\\backslash \\{0\\}$. Its quadratic variation is\nobtained by (\\ref{QVLP}) as the matrix subordinato\n\\begin{subequations}\n\\begin{equation}\n\\left[ M_{d}\\right] (t)=a^{2}t\\mathrm{I}_{d}+\\int_{[0,t]}\\int_{\\mathbb{H\n_{d(1)}\\backslash \\{0\\}}VV^{\\ast}J_{d}(\\mathrm{d}s,\\mathrm{d}V)=a^{2\nt\\mathrm{I}_{d}+\\sum_{s\\leq t}\\Delta M_{d}(s)\\left( \\Delta M_{d}(s)\\right)\n^{\\ast}.\\nonumber\n\\end{equation}\n\n\\end{subequations}\n\\begin{remark}\nIt is possible to obtain BGCD models of symmetric random matrices rather than\nHermitian. Indeed, slight changes in the proof of \\cite[Theorem 3.1]{BG} give\nfor each $d\\geq1$, a $d\\times d$ real symmetric random matrix $M_{d}$ with\northogonal invariant infinitely divisible matrix distribution. 
The asymptotic spectral distribution of the corresponding Hermitian and symmetric ensembles is the same, just as the semicircle distribution is the asymptotic spectral distribution for both the Gaussian Unitary Ensemble and the Gaussian Orthogonal Ensemble.
\end{remark}

\section{Bounded variation case}

It is well known that the quadratic variation of a one-dimensional L\'{e}vy process is a subordinator; see \cite[Example 8.5]{CT}. The following result gives a converse and a generalization to matrix subordinators with rank one jumps. The one-dimensional case is given in \cite[Lemma 6.5]{Se10}.

\begin{theorem}
\label{Sub} Let $L_{d}=\left\{ L_{d}(t):t\geq0\right\}$ be a L\'{e}vy process in $\overline{\mathbb{H}}_{d}^{+}$ whose jumps are of rank one almost surely. Then there exists a L\'{e}vy process $X=\left\{ X(t):t\geq0\right\}$ in $\mathbb{C}^{d}$ such that $L_{d}(t)=\left[ X\right] (t)$.
\end{theorem}

\begin{proof}
We construct $X$ through its L\'{e}vy-It\^{o} decomposition. Using (\ref{LIFV}), for each $d\geq1$, $L_{d}$ is an $\overline{\mathbb{H}}_{d}^{+}$-process of bounded variation with L\'{e}vy-It\^{o} decomposition
\[
L_{d}(t)=\Psi_{0}t+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{+}\backslash\{0\}}VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V)\text{, }t\geq0\text{,}
\]
where $\Psi_{0}\in\mathbb{H}_{d}^{+}$ and $J_{L_{d}}$ is the Poisson random measure of $L_{d}$. Let $\mathrm{Leb}\otimes\nu_{L_{d}}$ denote the intensity measure of $J_{L_{d}}$.

Consider the cone $C_{+}^{d}=\left\{ x=\left( x_{1},x_{2},\ldots,x_{d}\right) :x_{1}\geq0,\ x_{j}\in\mathbb{C},\ j=2,\ldots,d\right\}$ and let $\varphi_{+}:\mathbb{R}_{+}\times\mathbb{H}_{d(1)}^{+}\rightarrow\mathbb{R}_{+}\times C_{+}^{d}$ be defined as $\varphi_{+}\left( t,V\right) =(t,x)$, where $V=xx^{\ast}$ and $x\in C_{+}^{d}$.
Let $\overline{\varphi}_{+}:\mathbb{H}_{d(1)}^{+}\rightarrow C_{+}^{d}$ be defined by $\overline{\varphi}_{+}\left( V\right) =x$ for $V=xx^{\ast}$ and $x\in C_{+}^{d}$. By Remark \ref{Decomp} (a) the functions $\varphi_{+}$ and $\overline{\varphi}_{+}$ are well defined.

Let us define $J(\mathrm{d}s,\mathrm{d}x)=\left( J_{L_{d}}\circ\varphi_{+}^{-1}\right) \left( \mathrm{d}s,\mathrm{d}x\right)$, the random measure induced by the transformation $\varphi_{+}$, which is a Poisson random measure on $\mathbb{R}_{+}\times C_{+}^{d}$. Observe that $\mathbb{E}\left[ J(t,F)\right] =\mathbb{E}\left[ J_{L_{d}}\circ\varphi_{+}^{-1}\left( \left\{ t\right\} \times F\right) \right] =t\nu_{L_{d}}\left( \overline{\varphi}_{+}^{-1}\left( F\right) \right) =t\left( \nu_{L_{d}}\circ\overline{\varphi}_{+}^{-1}\right) \left( F\right)$ for $F\in\mathcal{B}\left( C_{+}^{d}\backslash\left\{ 0\right\} \right)$.
Let us denote $\nu=\nu_{L_{d}}\circ\overline{\varphi}_{+}^{-1}$, which is a L\'{e}vy measure on $C_{+}^{d}$ since
\[
\int_{C_{+}^{d}\backslash\left\{ 0\right\} }\left( 1\wedge\left\vert x\right\vert ^{2}\right) \nu(\mathrm{d}x)=\int_{C_{+}^{d}\backslash\left\{ 0\right\} }\left( 1\wedge\left\vert x\right\vert ^{2}\right) \nu_{L_{d}}\circ\overline{\varphi}_{+}^{-1}(\mathrm{d}x)
\]
\[
=\int_{C_{+}^{d}\backslash\left\{ 0\right\} }\left( 1\wedge\mathrm{tr}\left( xx^{\ast}\right) \right) \nu_{L_{d}}\circ\overline{\varphi}_{+}^{-1}(\mathrm{d}x)=\int_{\mathbb{H}_{d(1)}^{+}\backslash\left\{ 0\right\} }\left( 1\wedge\mathrm{tr}\left( V\right) \right) \left( \nu_{L_{d}}\circ\overline{\varphi}_{+}^{-1}\right) \circ f^{-1}(\mathrm{d}V)
\]
\[
=\int_{\mathbb{H}_{d(1)}^{+}\backslash\left\{ 0\right\} }\left( 1\wedge\mathrm{tr}\left( V\right) \right) \nu_{L_{d}}(\mathrm{d}V)<\infty\text{,}
\]
where $\left( \nu_{L_{d}}\circ\overline{\varphi}_{+}^{-1}\right) \circ f^{-1}=\nu_{L_{d}}$ with $f\left( x\right) =xx^{\ast}$, and we have used that $\mathrm{tr}\left( V\right) \leq\alpha\left\Vert V\right\Vert$ for some constant $\alpha>0$. Thus $\mathrm{Leb}\otimes\nu$ is the intensity measure of the Poisson random measure $J$.

Let us take the L\'{e}vy process in $\mathbb{C}^{d}$
\begin{equation}
X(t)=\left\vert \Psi_{0}\right\vert ^{1/2}B_{I}(t)+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap\{\left\vert x\right\vert \leq1\}}x\widetilde{J}(\mathrm{d}s,\mathrm{d}x)+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap\{\left\vert x\right\vert >1\}}xJ(\mathrm{d}s,\mathrm{d}x)\text{, }t\geq0\text{,} \label{LIX}
\end{equation}
where $B_{I}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion with quadratic variation $tI_{d}$ (i.e.
(\ref{CovCB}) with $q=1$). Thus the quadratic variation of $X$ is given by
\[
\left[ X\right] (t)=\left[ \left\vert \Psi_{0}\right\vert ^{1/2}B_{I},B_{I}^{\ast}\left\vert \Psi_{0}\right\vert ^{1/2}\right] (t)+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\backslash\{0\}}xx^{\ast}J(\mathrm{d}s,\mathrm{d}x)
\]
\[
=\Psi_{0}t+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\backslash\{0\}}xx^{\ast}J_{L_{d}}\circ\varphi_{+}^{-1}(\mathrm{d}s,\mathrm{d}x)=\Psi_{0}t+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{+}\backslash\left\{ 0\right\} }VJ_{L_{d}}\circ\varphi_{+}^{-1}\circ h^{-1}(\mathrm{d}s,\mathrm{d}V)
\]
\[
=\Psi_{0}t+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{+}\backslash\left\{ 0\right\} }VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V)=L_{d}(t),
\]
where $J_{L_{d}}\circ\varphi_{+}^{-1}\circ h^{-1}=J_{L_{d}}$ with $h\left( t,x\right) =\left( t,xx^{\ast}\right)$.
\end{proof}

\smallskip

For the general bounded variation case we have the following Wiener-Hopf type decomposition.

\begin{theorem}
\label{Bdv} Let $L_{d}=\left\{ L_{d}(t):t\geq0\right\}$ be a L\'{e}vy process in $\mathbb{H}_{d}$ of bounded variation whose jumps are of rank one almost surely. Then there exist L\'{e}vy processes $X=\left\{ X(t):t\geq0\right\}$ and $Y=\left\{ Y(t):t\geq0\right\}$ in $\mathbb{C}^{d}$ such that
\begin{equation}
L_{d}(t)=\left[ X\right] (t)-\left[ Y\right] (t).
\label{WHTD}
\end{equation}
Moreover, $\left\{ \left[ X\right] (t):t\geq0\right\}$ and $\left\{ \left[ Y\right] (t):t\geq0\right\}$ are independent processes.
\end{theorem}

\begin{proof}
For each $d\geq1$, $L_{d}$ is an $\mathbb{H}_{d}$-process of bounded variation with L\'{e}vy-It\^{o} decomposition
\begin{equation}
L_{d}(t)=\Psi t+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}\backslash\{0\}}VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V)\text{, }t\geq0\text{,} \label{LIDBV}
\end{equation}
where $\Psi\in\mathbb{H}_{d}$ and $J_{L_{d}}$ is the Poisson random measure of $L_{d}$. Let $\mathrm{Leb}\otimes\nu_{L_{d}}$ denote the intensity measure of $J_{L_{d}}$.

First we prove that $L_{d}=L_{d}^{1}-L_{d}^{2}$, where $L_{d}^{1}$ and $L_{d}^{2}$ are the L\'{e}vy processes in $\overline{\mathbb{H}}_{d}^{+}$ given by (\ref{Quad1decomp}) and (\ref{Quad2decomp}).

Every $V\in\mathbb{H}_{d\left( 1\right) }$ can be written as $V=\lambda uu^{\ast}$, where $\lambda$ is the nonzero eigenvalue of $V$ and $u$ is a unit vector in $\mathbb{C}^{d}$. Let us define $\left\vert V\right\vert =\left\vert \lambda\right\vert uu^{\ast}$, $V^{+}=\lambda^{+}uu^{\ast}$ and $V^{-}=\lambda^{-}uu^{\ast}$, where $\lambda^{+}=\max(\lambda,0)$ and $\lambda^{-}=\max(-\lambda,0)$.

Let $\varphi_{+}:\mathbb{R}_{+}\times\mathbb{H}_{d(1)}\rightarrow\mathbb{R}_{+}\times\mathbb{H}_{d(1)}^{+}$ and $\varphi_{-}:\mathbb{R}_{+}\times\mathbb{H}_{d(1)}\rightarrow\mathbb{R}_{+}\times\mathbb{H}_{d(1)}^{+}$ be defined as $\varphi_{+}\left( t,V\right) =(t,V^{+})$ and $\varphi_{-}\left( t,V\right) =(t,V^{-})$, respectively.
Let $\overline{\varphi}_{+}:\mathbb{H}_{d(1)}\rightarrow\mathbb{H}_{d(1)}^{+}$ and $\overline{\varphi}_{-}:\mathbb{H}_{d(1)}\rightarrow\mathbb{H}_{d(1)}^{+}$ be defined as $\overline{\varphi}_{+}(V)=V^{+}$ and $\overline{\varphi}_{-}(V)=V^{-}$, respectively. By Remark \ref{Decomp} (b) the functions $\varphi_{+}$, $\overline{\varphi}_{+}$, $\varphi_{-}$ and $\overline{\varphi}_{-}$ are well defined, and hence $V=\overline{\varphi}_{+}(V)-\overline{\varphi}_{-}(V)$.

Let us define $J^{+}(\mathrm{d}s,\mathrm{d}x)=\left( J_{L_{d}}\circ\varphi_{+}^{-1}\right) \left( \mathrm{d}s,\mathrm{d}x\right)$ and $J^{-}(\mathrm{d}s,\mathrm{d}x)=\left( J_{L_{d}}\circ\varphi_{-}^{-1}\right) \left( \mathrm{d}s,\mathrm{d}x\right)$, the random measures induced by the transformations $\varphi_{+}$ and $\varphi_{-}$, respectively, which are both Poisson random measures on $\mathbb{R}_{+}\times\mathbb{H}_{d(1)}^{+}$. Observe that $\mathbb{E}\left[ J^{+}(t,F)\right] =\mathbb{E}[J_{L_{d}}\circ\varphi_{+}^{-1}(\left\{ t\right\} \times F)]=t\nu_{L_{d}}\left( \overline{\varphi}_{+}^{-1}\left( F\right) \right) =t\left( \nu_{L_{d}}\circ\overline{\varphi}_{+}^{-1}\right) \left( F\right)$ for $F\in\mathcal{B}\left( \mathbb{H}_{d(1)}^{+}\backslash\left\{ 0\right\} \right)$, and similarly $\mathbb{E}\left[ J^{-}(t,F)\right] =t\left( \nu_{L_{d}}\circ\overline{\varphi}_{-}^{-1}\right) \left( F\right)$. Let us denote $\nu_{L_{d}}^{+}=\nu_{L_{d}}\circ\overline{\varphi}_{+}^{-1}$ and $\nu_{L_{d}}^{-}=\nu_{L_{d}}\circ\overline{\varphi}_{-}^{-1}$.
Note that $\nu_{L_{d}}^{+}$ is a L\'{e}vy measure on $\mathbb{H}_{d(1)}^{+}$ since
\begin{align*}
\infty & >\int_{\mathbb{H}_{d(1)}\backslash\left\{ 0\right\} }\left( 1\wedge\left\Vert V\right\Vert \right) \nu_{L_{d}}(\mathrm{d}V)\geq\int_{\mathbb{H}_{d(1)}\backslash\left\{ 0\right\} }\left( 1\wedge\left\Vert \overline{\varphi}_{+}(V)\right\Vert \right) \nu_{L_{d}}(\mathrm{d}V)\\
& =\int_{\mathbb{H}_{d(1)}^{+}\backslash\left\{ 0\right\} }\left( 1\wedge\left\Vert W\right\Vert \right) \nu_{L_{d}}^{+}(\mathrm{d}W)\text{.}
\end{align*}
Hence $\mathrm{Leb}\otimes\nu_{L_{d}}^{+}$ is the intensity measure of $J^{+}$. Similarly, one can see that $\mathrm{Leb}\otimes\nu_{L_{d}}^{-}$ is the intensity measure of $J^{-}$.

There exist $\Psi^{+}$ and $\Psi^{-}$ in $\mathbb{H}_{d}^{+}$ such that $\Psi=\Psi^{+}-\Psi^{-}$. Let us take the L\'{e}vy processes $X$ and $Y$ in $\mathbb{C}^{d}$
\[
X(t)=\left\vert \Psi^{+}\right\vert ^{1/2}B_{I}(t)+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap\{\left\vert x\right\vert \leq1\}}x\widetilde{J}^{+}(\mathrm{d}s,\mathrm{d}x)+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap\{\left\vert x\right\vert >1\}}xJ^{+}(\mathrm{d}s,\mathrm{d}x)\text{, }t\geq0\text{,}
\]
\[
Y(t)=\left\vert \Psi^{-}\right\vert ^{1/2}B_{I}(t)+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap\{\left\vert x\right\vert \leq1\}}x\widetilde{J}^{-}(\mathrm{d}s,\mathrm{d}x)+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap\{\left\vert x\right\vert >1\}}xJ^{-}(\mathrm{d}s,\mathrm{d}x)\text{, }t\geq0\text{,}
\]
where $B_{I}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion with quadratic variation $tI_{d}$.

Observe that
\begin{equation}
\left[ X\right] (t)=\Psi^{+}t+\int\nolimits_{\left[ 0,t\right] }
\int_{\mathbb{C}^{d}\backslash\{0\}}xx^{\ast}J^{+}(\mathrm{d}s,\mathrm{d}x)=\Psi^{+}t+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{+}\backslash\{0\}}VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V) \label{Quad1decomp}
\end{equation}
and
\begin{align}
\left[ Y\right] (t) & =\Psi^{-}t+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\backslash\{0\}}xx^{\ast}J^{-}(\mathrm{d}s,\mathrm{d}x)=\Psi^{-}t-\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\backslash\{0\}}\left( -xx^{\ast}\right) J_{L_{d}}(\mathrm{d}s,\mathrm{d}x)\nonumber\\
& =\Psi^{-}t-\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{-}\backslash\{0\}}VJ_{L_{d}}(\mathrm{d}s,\mathrm{d}V), \label{Quad2decomp}
\end{align}
where $\mathbb{H}_{d(1)}^{-}$ denotes the cone of negative (nonpositive) definite matrices of rank one in $\mathbb{H}_{d}$. The first assertion now follows from (\ref{LIDBV}). Finally, since $J_{L_{d}}$ is a Poisson random measure and $\mathbb{H}_{d(1)}^{+}\backslash\{0\}$ and $\mathbb{H}_{d(1)}^{-}\backslash\{0\}$ are disjoint sets, from the last expressions in (\ref{Quad1decomp}) and (\ref{Quad2decomp}) we have that $\left[ X\right]$ and $\left[ Y\right]$ are independent processes, although $X$ and $Y$ are not.
\end{proof}

Next we consider the matrix L\'{e}vy processes associated to the BGCD matrix ensembles $(M_{d})_{d\geq1}$.
We have the following two consequences of the former results.

\begin{corollary}
\label{corBG} Let $M_{d}=\left\{ M_{d}(t):t\geq0\right\}$ be the matrix L\'{e}vy process associated to the BGCD random matrix ensembles.

a) Let $\mu$ be the infinitely divisible distribution with triplet $\left( 0,\psi,\nu\right)$ associated to $M_{d}$ such that
\[
\int_{\left\vert x\right\vert \leq1}\left( 1\wedge x\right) \nu(\mathrm{d}x)<\infty,\quad\nu((-\infty,0])=0\text{ \ and \ }\psi_{0}:=\psi-\int_{x\leq1}x\nu(\mathrm{d}x)\geq0.
\]
Let us consider the L\'{e}vy-It\^{o} decomposition of $M_{d}(t)$ in $\overline{\mathbb{H}}_{d}^{+}$,
\[
M_{d}(t)=\psi_{0}tdI_{d}+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{H}_{d(1)}^{+}\backslash\{0\}}VJ_{M_{d}}(\mathrm{d}s,\mathrm{d}V).
\]
Then there exists a L\'{e}vy process $X=\left\{ X(t):t\geq0\right\}$ in $\mathbb{C}^{d}$ such that $M_{d}(t)=\left[ X\right] (t)$, where
\[
X(t)=\left\vert \psi_{0}\right\vert ^{1/2}B_{I}(t)+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap\{\left\vert x\right\vert \leq1\}}x\widetilde{J}(\mathrm{d}s,\mathrm{d}x)+\int\nolimits_{\left[ 0,t\right] }\int_{\mathbb{C}^{d}\cap\{\left\vert x\right\vert >1\}}xJ(\mathrm{d}s,\mathrm{d}x)\text{, }t\geq0\text{,}
\]
$B_{I}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion with quadratic variation $tI_{d}$, and the Poisson random measure $J$ is given by $J=J_{M_{d}}\circ\varphi_{+}^{-1}$.

b) If $M_{d}$ has bounded variation, then there exist L\'{e}vy processes $X=\left\{ X(t):t\geq0\right\}$ and $Y=\left\{ Y(t):t\geq0\right\}$ in $\mathbb{C}^{d}$ such that $M_{d}(t)=\left[ X\right] (t)-\left[ Y\right] (t)$, where $\left\{ \left[ X\right] (t):t\geq0\right\}$ and $\left\{ \left[ Y\right] (t):t\geq0\right\}$
are independent.
\end{corollary}

\section{Covariation matrix processes approximation}

We now consider the approximation of general BGCD ensembles by BGCD matrix compound Poisson processes which are covariations of $\mathbb{C}^{d}$-valued L\'{e}vy processes.

The following result gives realizations of BGCD ensembles of compound Poisson type as the covariation of two $\mathbb{C}^{d}$-valued L\'{e}vy processes. Its proof is straightforward.

\begin{proposition}
\label{BGCDCPP} Let $\mu$ be a compound Poisson distribution on $\mathbb{R}$ with L\'{e}vy measure $\nu$ and drift $\psi\in\mathbb{R}$, and let $(M_{d})_{d\geq1}$ be the BGCD matrix ensemble for $\Lambda(\mu)$. For each $d\geq1$, assume that

i) $(\beta_{j})_{j\geq1}$ is a sequence of i.i.d. random variables with distribution $\nu/\nu\left( \mathbb{R}\right)$.

ii) $(u_{j})_{j\geq1}$ is a sequence of i.i.d. random vectors with uniform distribution on the unit sphere of $\mathbb{C}^{d}$.

iii) $\left\{ N(t)\right\} _{t\geq0}$ is a Poisson process with parameter one.

Assume that $(\beta_{j})_{j\geq1}$, $(u_{j})_{j\geq1}$ and $\left\{ N(t)\right\} _{t\geq0}$ are independent. Then

a) $M_{d}$ has the same distribution as $M_{d}(1)$, where
\begin{equation}
M_{d}(t)=\psi tI_{d}+\sum_{j=1}^{N(t)}\beta_{j}u_{j}u_{j}^{\ast},\quad t\geq0.
\label{BGCP1}
\end{equation}

b) $M_{d}(\cdot)=[X_{d},Y_{d}](\cdot)$, where $X_{d}=\left\{ X_{d}(t)\right\} _{t\geq0}$ and $Y_{d}=\left\{ Y_{d}(t)\right\} _{t\geq0}$ are the $\mathbb{C}^{d}$-valued L\'{e}vy processes
\begin{equation}
X_{d}(t)=\sqrt{\left\vert \psi\right\vert }B(t)+\sum_{j=1}^{N(t)}\sqrt{\left\vert \beta_{j}\right\vert }u_{j},\quad t\geq0, \label{BGCP2}
\end{equation}
\begin{equation}
Y_{d}(t)=\mathrm{sign}\left( \psi\right) \sqrt{\left\vert \psi\right\vert }B(t)+\sum_{j=1}^{N(t)}\mathrm{sign}\left( \beta_{j}\right) \sqrt{\left\vert \beta_{j}\right\vert }u_{j},\quad t\geq0, \label{BGCP3}
\end{equation}
and $B=\left\{ B(t)\right\} _{t\geq0}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion independent of $(\beta_{j})_{j\geq1}$, $(u_{j})_{j\geq1}$ and $\left\{ N(t)\right\} _{t\geq0}$.
\end{proposition}

For the general case we have the following sample path approximation by covariation processes for L\'{e}vy processes generated by the BGCD matrix ensembles.

\begin{theorem}
\label{General} Let $\mu$ be an infinitely divisible distribution on $\mathbb{R}$ with triplet $(a^{2},\psi,\nu)$ and let $(M_{d})_{d\geq1}$ be the corresponding BGCD matrix ensemble for $\Lambda(\mu)$. Let $d\geq1$ be fixed and assume that for $n\geq1$

i) $(\beta_{j}^{n})_{j\geq1}$ is a sequence of i.i.d. random variables with distribution $\mu^{\ast\frac{1}{n}}$.

ii) $(u_{j}^{n})_{j\geq1}$ is a sequence of i.i.d.
random vectors with uniform distribution on the unit sphere of $\mathbb{C}^{d}$.

iii) $N^{n}=\left\{ N^{n}(t)\right\} _{t\geq0}$ is a Poisson process with parameter $n$.

iv) $B^{n}=\left\{ B^{n}(t)\right\} _{t\geq0}$ is a $\mathbb{C}^{d}$-valued standard Brownian motion.

v) $(\beta_{j}^{n})_{j\geq1}$, $(u_{j}^{n})_{j\geq1}$, $N^{n}$ and $B^{n}$ are independent.

Let
\begin{equation}
X_{d}^{n}(t)=\sqrt{\left\vert \psi\right\vert }B^{n}(t)+\sum_{j=1}^{N^{n}(t)}\sqrt{\left\vert \beta_{j}^{n}\right\vert }u_{j}^{n},\quad t\geq0, \label{Gen1}
\end{equation}
\begin{equation}
Y_{d}^{n}(t)=\mathrm{sign}\left( \psi\right) \sqrt{\left\vert \psi\right\vert }B^{n}(t)+\sum_{j=1}^{N^{n}(t)}\mathrm{sign}\left( \beta_{j}^{n}\right) \sqrt{\left\vert \beta_{j}^{n}\right\vert }u_{j}^{n},\quad t\geq0. \label{Gen2}
\end{equation}
Then for each $d\geq1$ there exist $\mathbb{M}_{d}$-valued processes $\widetilde{M}_{d}^{n}=\left\{ \widetilde{M}_{d}^{n}(t)\right\} _{t\geq0}$ such that $\widetilde{M}_{d}^{n}\overset{\mathcal{L}}{=}[X_{d}^{n},Y_{d}^{n}]$ and
\[
\sup_{0<t\leq T}\left\Vert \widetilde{M}_{d}^{n}(t)-M_{d}(t)\right\Vert \longrightarrow0\text{ a.s. as }n\rightarrow\infty\text{, for every }T>0.
\]
\end{theorem}

\begin{proof}
Let $\nu^{n}=n\mu^{\ast\frac{1}{n}}$, let $\lambda^{n}$ denote its spherical measure and let $\psi^{n}=n\int_{\left\vert x\right\vert \leq1}x\mu^{\ast\frac{1}{n}}(\mathrm{d}x)$. It is well known that, for every continuous bounded function $f$ vanishing in a neighborhood of zero,
\begin{equation}
\int_{\mathbb{R}}f\left( r\right) \nu^{n}\left( dr\right) \longrightarrow\int_{\mathbb{R}}f\left( r\right) \nu\left( dr\right) \text{ as }n\rightarrow\infty, \label{lev}
\end{equation}
that for each $\varepsilon>0$
\begin{equation}
\int_{\left\vert r\right\vert \leq\varepsilon}r^{2}\nu^{n}\left( dr\right) \longrightarrow a^{2}\text{ as }n\rightarrow\infty, \label{gau}
\end{equation}
and that $\psi^{n}\rightarrow\psi$.

A proof similar to that of Proposition \ref{BGCDCPP} gives
\[
M_{d}^{n}(t):=\left[ X_{d}^{n},Y_{d}^{n}\right] (t)=\psi t\mathrm{I}_{d}+\sum_{j=1}^{N^{n}(t)}\beta_{j}^{n}u_{j}^{n}u_{j}^{n\ast},
\]
which is a matrix-valued compound Poisson process with triplet $\left( \mathcal{A}_{d}^{n},\psi_{d}^{n},\nu_{d}^{n}\right)$ given by $\mathcal{A}_{d}^{n}=0$, $\psi_{d}^{n}=\psi\mathrm{I}_{d}$ and
\begin{equation}
\nu_{d}^{n}\left( E\right) =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})}\int_{0}^{\infty}1_{E}\left( rV\right)
\nu_{V}^{n}\left( \mathrm{d}r\right) \Pi\left( \mathrm{d}V\right) ,\quad E\in\mathcal{B}\left( \mathbb{H}_{d}\backslash\left\{ 0\right\} \right) \text{,} \label{nudn}
\end{equation}
where $\nu_{V}^{n}=\nu^{n}|_{(0,\infty)}$ or $\nu^{n}|_{(-\infty,0)}$ according to $V\geq0$ or $V\leq0$, and $\Pi$ is the measure on $\mathbb{S}(\mathbb{H}_{d(1)})$ in (\ref{pi}).

We will prove that $M_{d}^{n}\overset{\mathcal{L}}{\longrightarrow}M_{d}$ by showing that the triplet $\left( \mathcal{A}_{d}^{n},\psi_{d}^{n},\nu_{d}^{n}\right)$ converges to the triplet $\left( \mathcal{A}_{d},\psi_{d},\nu_{d}\right)$ of the BGCD matrix ensemble in Proposition \ref{polar}, in the sense of Proposition \ref{convternas}.

First, we observe that $\psi_{d}^{n}=\psi\mathrm{I}_{d}$ for each $n$.

Next, let $f:\mathbb{H}_{d(1)}\longrightarrow\mathbb{R}$ be a continuous bounded function vanishing in a neighborhood of zero. Using the polar decomposition (\ref{PDBGCD}) for $\nu_{d}^{n}$ we have
\begin{align}
\int_{\mathbb{H}_{d(1)}}f\left( \xi\right) \nu_{d}^{n}\left( d\xi\right) & =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})}\int_{0}^{\infty}f\left( rV\right) \nu_{V}^{n}\left( dr\right) \Pi\left( dV\right) \nonumber\\
& =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap\overline{\mathbb{H}}_{d}^{+}}\int_{\left\{ -1,1\right\} }\int_{0}^{\infty}f\left( trV\right) \nu_{V}^{n}\left( dr\right) \lambda^{n}\left( dt\right) \omega_{d}\left( dV\right) .
\label{intfb}
\end{align}
For $V\in\mathbb{S}(\mathbb{H}_{d(1)})\cap\overline{\mathbb{H}}_{d}^{+}$ fixed,
\begin{align*}
\int_{\left\{ -1,1\right\} }\int_{0}^{\infty}f\left( trV\right) \nu_{V}^{n}\left( dr\right) \lambda^{n}\left( dt\right) & =\lambda^{n}\left( \left\{ 1\right\} \right) \int_{0}^{\infty}f\left( rV\right) \nu^{n}\left( dr\right) \\
& \quad+\lambda^{n}\left( \left\{ -1\right\} \right) \int_{-\infty}^{0}f\left( rV\right) \nu^{n}\left( dr\right) \text{.}
\end{align*}
As a function of $r$, $f\left( rV\right)$ is a real-valued continuous bounded function vanishing in a neighborhood of zero; hence, using (\ref{lev}),
\[
\lambda^{n}\left( \left\{ 1\right\} \right) \int_{0}^{\infty}f\left( rV\right) \nu^{n}\left( dr\right) \longrightarrow\lambda\left( \left\{ 1\right\} \right) \int_{0}^{\infty}f\left( rV\right) \nu\left( dr\right)
\]
and
\[
\lambda^{n}\left( \left\{ -1\right\} \right) \int_{-\infty}^{0}f\left( rV\right) \nu^{n}\left( dr\right) \longrightarrow\lambda\left( \left\{ -1\right\} \right) \int_{-\infty}^{0}f\left( rV\right) \nu\left( dr\right) .
\]
Then from (\ref{intfb})
\begin{align*}
\int_{\mathbb{H}_{d(1)}}f\left( \xi\right) \nu_{d}^{n}\left( d\xi\right) & \longrightarrow d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap\overline{\mathbb{H}}_{d}^{+}}\int_{\left\{ -1,1\right\} }\int_{0}^{\infty}f\left( trV\right) \nu_{V}\left( dr\right) \lambda\left( dt\right) \omega_{d}\left( dV\right) \\
& =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})}\int_{0}^{\infty}f\left( rV\right) \nu_{V}\left( dr\right) \Pi\left( dV\right) =\int_{\mathbb{H}_{d(1)}}f\left( \xi\right) \nu_{d}\left( d\xi\right) .
\end{align*}

Next, we verify the convergence of the Gaussian part.

Let us define, for each $\varepsilon>0$ and $n\geq1$, the
operator $\mathcal{A}^{n,\varepsilon}:\mathbb{H}_{d}\longrightarrow\mathbb{H}_{d}$ by
\[
\mathrm{tr}\left( \Theta\mathcal{A}^{n,\varepsilon}\Theta\right) =\int_{\left\Vert \xi\right\Vert \leq\varepsilon}\left\vert \mathrm{tr}\left( \Theta\xi\right) \right\vert ^{2}\nu_{d}^{n}\left( d\xi\right) .
\]
From (\ref{nudn}) we get
\begin{align*}
& \int_{\left\Vert \xi\right\Vert \leq\varepsilon}\left\vert \mathrm{tr}\left( \Theta\xi\right) \right\vert ^{2}\nu_{d}^{n}\left( d\xi\right) =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})}\int_{0}^{\infty}\mathbf{1}_{\left\{ \left\Vert rV\right\Vert \leq\varepsilon\right\} }\left( rV\right) \left\vert \mathrm{tr}\left( r\Theta V\right) \right\vert ^{2}\nu_{V}^{n}\left( dr\right) \Pi\left( dV\right) \\
& =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap\overline{\mathbb{H}}_{d}^{+}}\int_{\left\{ -1,1\right\} }\int_{0}^{\infty}\mathbf{1}_{\left\{ r\leq\varepsilon\right\} }\left( rtV\right) r^{2}\left\vert \mathrm{tr}\left( \Theta V\right) \right\vert ^{2}\nu_{V}^{n}\left( dr\right) \lambda\left( dt\right) \omega_{d}\left( dV\right) \\
& =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap\overline{\mathbb{H}}_{d}^{+}}\int_{\mathbb{R}}\mathbf{1}_{\left\{ r\leq\varepsilon\right\} }\left( rV\right) r^{2}\left\vert \mathrm{tr}\left( \Theta V\right) \right\vert ^{2}\nu^{n}\left( dr\right) \omega_{d}\left( dV\right) \\
& =d\int_{\mathbb{S}(\mathbb{H}_{d(1)})\cap\overline{\mathbb{H}}_{d}^{+}}\left\vert \mathrm{tr}\left( \Theta V\right) \right\vert ^{2}\int_{\left\vert r\right\vert \leq\varepsilon}r^{2}\nu^{n}\left( dr\right) \omega_{d}\left( dV\right) .
\end{align*}
Then, using (\ref{gau}),
\[
\int_{\left\Vert \xi\right\Vert \leq\varepsilon}\left\vert \mathrm{tr}\left( \Theta
\xi\right) \right\vert ^{2}\nu_{d}^{n}\left( d\xi\right) \longrightarrow da^{2}E_{u}\left\vert \mathrm{tr}\left( \Theta uu^{\ast}\right) \right\vert ^{2}\text{,}
\]
where $u$ is a uniformly distributed column random vector in the unit sphere of $\mathbb{C}^{d}$. Finally,
\begin{equation}
da^{2}E_{u}\left\vert \mathrm{tr}\left( \Theta uu^{\ast}\right) \right\vert ^{2}=\frac{a^{2}}{d+1}\left( \mathrm{tr}\left( \Theta^{2}\right) +\left( \mathrm{tr}\left( \Theta\right) \right) ^{2}\right) =\mathrm{tr}\left( \Theta^{\ast}\mathcal{A}_{d}\Theta^{\ast}\right) , \label{covgau}
\end{equation}
where $\mathcal{A}_{d}$ is as in (\ref{GPBGCD}) and the first equality in (\ref{covgau}) follows from page $637$ of \cite{CD}. Thus $M_{d}^{n}\overset{\mathcal{L}}{\longrightarrow}M_{d}$, and the conclusion follows from Proposition \ref{convproc}.
\end{proof}

\section{Final remarks}

\begin{enumerate}
\item For the present work we do not have a specific financial application in mind. However, infinitely divisible nonnegative definite matrix processes with rank one jumps, as characterized in Theorem \ref{Sub}, might be useful in the study of multivariate high-frequency data using realized covariation, where matrix covariation processes appear; see for example \cite{BNSh04}. Moreover, it seems interesting to explore the construction of financially oriented matrix L\'{e}vy based models, as in \cite{BNSe09}, for the specific case of rank one jump matrix processes of bounded variation.

\item In the direction of free probability, it is well known that the so-called Hermitian Brownian motion matrix ensemble $\left\{ B_{d}(t):t\geq0\right\}$, $d\geq1$, is a realization of the free Brownian motion. It is an open question whether the matrix L\'{e}vy processes from BGCD models $\left\{ M_{d}(t):t\geq0\right\}$, $d\geq1$, are realizations of free L\'{e}vy processes.
A first step in this direction would be to prove that the increments of a BGCD ensemble become freely independent. A second step, more related to our work, would be to gain insight into the implications of the rank one condition on the matrix L\'{e}vy BGCD process in Corollary \ref{corBG} as a realization of a positive free L\'{e}vy process. These two problems are the subject of current research of one of the coauthors.

\item In \cite{BG07} a new Bercovici-Pata bijection for a certain free convolution $\boxplus_{c}$ is established, together with a $d\times d^{\prime}$ random matrix model for this bijection which is very close to the one given by the BGCD random matrix model. It can be seen that the L\'{e}vy measures of these rectangular BGCD random matrices are supported on the subset of $d\times d^{\prime}$ complex matrices of rank one, in a similar way as done in \cite{DRA} for the BGCD case. It would be of interest to have the analogues of the bounded variation results of Section 4 for the L\'{e}vy processes associated to these rectangular BGCD random matrices, considering an appropriate notion of nonnegative definiteness for rectangular matrices.
\end{enumerate}

\noindent\textbf{Acknowledgement}. \emph{This work was done while Victor P\'{e}rez-Abreu was visiting Universidad Aut\'{o}noma de Sinaloa in January and May of 2012. The authors thank two referees for their very careful and detailed reading of a previous version of the manuscript and for their comments, which improved Theorems \ref{Sub} and \ref{Bdv} and the presentation of the present version.}
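Part (b) of Proposition \ref{BGCDCPP} is a path-by-path identity, so it can be checked numerically on a single sample. The following sketch (illustrative Python, not part of the paper; the variable names are ours, and the Brownian contribution $[\sqrt{|\psi|}B,\mathrm{sign}(\psi)\sqrt{|\psi|}B^{\ast}](t)=\psi tI_{d}$ is replaced by its deterministic value, while the jump sizes are drawn from an arbitrary real law, since the identity does not depend on the choice of $\nu$) verifies that the jumps of $[X_{d},Y_{d}]$ reproduce the rank one terms $\beta_{j}u_{j}u_{j}^{\ast}$ of (\ref{BGCP1}).

```python
import numpy as np

rng = np.random.default_rng(1)
d, t, psi = 3, 1.0, 0.7

# One sample of the compound Poisson data: N(t) jumps of a rate-one Poisson
# process, real jump sizes beta_j, directions u_j uniform on the unit sphere of C^d.
N = rng.poisson(t)
beta = rng.standard_normal(N)
u = rng.standard_normal((N, d)) + 1j * rng.standard_normal((N, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)

# M_d(t) = psi*t*I_d + sum_j beta_j u_j u_j^*, as in equation (BGCP1).
M = psi * t * np.eye(d) + sum(
    (b * np.outer(v, v.conj()) for b, v in zip(beta, u)),
    np.zeros((d, d), dtype=complex),
)

# Covariation [X_d, Y_d](t): the Brownian parts contribute
# sign(psi)*|psi|*t*I_d = psi*t*I_d, and each common jump contributes
# (Delta X)(Delta Y)^* = (sqrt|b| u)(sign(b) sqrt|b| u)^* = b u u^*.
cov = psi * t * np.eye(d) + sum(
    (
        np.outer(np.sqrt(abs(b)) * v, (np.sign(b) * np.sqrt(abs(b)) * v).conj())
        for b, v in zip(beta, u)
    ),
    np.zeros((d, d), dtype=complex),
)

assert np.allclose(M, cov)  # [X_d, Y_d](t) coincides with M_d(t) path by path
```

The agreement is exact (up to floating point), since both sides are finite sums of the same rank one matrices plus the common drift term.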