\section{Introduction}

The last few years were marked by an outburst of research devoted to
the problem of reconstructing quantum states of various physical
systems (see, e.g., Ref.~\cite{BM99} for an extensive list of the
literature on the subject). The problem, stated already in the
fifties by Fano \cite{Fano57} and Pauli \cite{Pauli58}, is to
determine the density matrix $\rho$ from information obtained by a
set of measurements performed on an ensemble of identically prepared
systems. Significant theoretical and experimental progress has been
achieved during the last decade in the reconstruction of quantum
states of the light field \cite{Leon:book}, and numerous works have
been devoted to reconstruction methods for other physical systems.
Most recently, a general theory of quantum-state reconstruction for
physical systems with Lie-group symmetries was developed \cite{BM99}.

In the present work we consider state-reconstruction methods
for some quantum systems possessing SU(2) symmetry. The principal
procedure for the reconstruction of spin states was recently
presented by Agarwal \cite{Agar98}. A similar approach was also
proposed by Dodonov and Man'ko \cite{DoMa97}, while the basic idea
underlying this method goes back to the pioneering work
by Royer \cite{Royer}. In brief, one applies a phase-space
displacement [specifically, a rotation in the SU(2) case] to
the initial quantum state and then measures the probability to
find the displaced system in a specific state (the so-called
``quantum ruler'' state).
Repeating this procedure with identically
prepared systems for many phase-space points [many rotation angles
in the SU(2) case], one determines a function on the phase space
(the so-called operational phase-space probability distribution
\cite{Wod84,BKK95,Ban98}).
In particular, by measuring the population of the ground state,
one obtains the so-called $Q$ function. The information contained
in the operational phase-space probability distribution is
sufficient to completely reconstruct the unknown density matrix of
the initial quantum state. A general group-theoretical description
of this method and some examples, including SU(2), are presented
in Ref.~\cite{BM99}.

The aim of the present paper is to study how the general
state-reconstruction procedure outlined above can be implemented
in practice for a number of specific physical systems with SU(2)
symmetry. Three systems are considered: a collection of two-level
atoms, a two-mode quantized radiation field with a fixed total
number of photons, and a single laser-cooled ion in a
two-dimensional harmonic trap with a fixed total number of
vibrational quanta. We show that a simple rearrangement of
conventional spectroscopic and interferometric schemes enables one
to measure unknown quantum states of these systems.

\section{Reconstruction of quantum states for systems with SU(2)
symmetry}

We start with some basic properties of SU(2), which is the dynamical
symmetry group for the angular momentum or spin and for many other
systems (e.g., a collection of two-level atoms, the Stokes
operators describing the polarization of the quantized light field,
two light modes with a fixed total photon number, etc.).
The simple Lie algebra su(2) is spanned by the three operators
$\{J_{x},J_{y},J_{z}\}$,
\begin{equation}
\label{su2alg}
[J_{p},J_{r}] = i \epsilon_{p r t} J_{t} .
\end{equation}
The Casimir operator is a constant times the unit operator,
${\mathbf{J}}^2 = j(j+1) I$, for any unitary irreducible
representation of the SU(2) group; the representations are therefore
labeled by the single index $j$, which takes the values
$j = 0,1/2,1,3/2,\ldots$. The representation Hilbert space
${\cal H}_{j}$ is spanned by the complete orthonormal basis
$\{ |j,\mu\rangle \}$ (where $\mu=j,j-1,\ldots,-j$):
\[
{\mathbf{J}}^2 |j,\mu\rangle = j(j+1) |j,\mu\rangle , \hspace{8mm}
J_z |j,\mu\rangle = \mu |j,\mu\rangle .
\]
In the following we assume that the state $|\psi\rangle$ of the
system belongs to ${\cal H}_{j}$ (or, for mixed states, that the
density matrix $\rho$ is an operator on ${\cal H}_{j}$).
Group elements can be
parametrized using the Euler angles $\alpha,\beta,\gamma$:
\begin{equation}
\label{eq:gEuler}
g=g(\alpha,\beta,\gamma) = e^{ i \alpha J_{z} }
e^{ i \beta J_{y} } e^{ i \gamma J_{z} } .
\end{equation}

We will employ two very useful concepts: the phase space (which
is the group coset space of maximum symmetry) and the coherent
states (each point of the phase space corresponds to a coherent
state).
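As a purely illustrative numerical aside (not part of the original derivation; the helper name \texttt{spin\_matrices}, the basis ordering $\mu = j, j-1, \ldots, -j$, and the choice $\hbar = 1$ are our own conventions), the generators above can be realized as matrices on ${\cal H}_j$ and checked against Eq.~(\ref{su2alg}) and the Casimir relation:

```python
import numpy as np

def spin_matrices(j):
    """J_x, J_y, J_z on H_j in the basis |j,mu>, mu = j, j-1, ..., -j (hbar = 1)."""
    mu = np.arange(j, -j - 1, -1)
    jz = np.diag(mu)
    # superdiagonal of J_+: <j,mu+1|J_+|j,mu> = sqrt(j(j+1) - mu(mu+1))
    jp = np.diag(np.sqrt(j * (j + 1) - mu[1:] * (mu[1:] + 1)), 1)
    jx = (jp + jp.T) / 2
    jy = (jp - jp.T) / (2 * 1j)
    return jx, jy, jz

jx, jy, jz = spin_matrices(3 / 2)
# [J_x, J_y] = i J_z, and J^2 = j(j+1) I with j = 3/2
assert np.allclose(jx @ jy - jy @ jx, 1j * jz)
assert np.allclose(jx @ jx + jy @ jy + jz @ jz, (3 / 2) * (5 / 2) * np.eye(4))
```

The same construction works for any $j$, integer or half-integer; only the matrix dimension $2j+1$ changes.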
For SU(2), the phase space is the unit sphere
${\Bbb{S}}^2 = {\rm SU}(2) / {\rm U}(1)$, and each coherent
state is characterized by a unit vector \cite{ACGT72,Per}
\begin{equation}
{\bf n} = (\sin\theta \cos\phi, \sin\theta \sin\phi,
\cos\theta) .
\end{equation}
Specifically, the coherent states $|j;{\bf n}\rangle$ are given by
the action of the group element
\begin{equation}
\label{eq:oSU2}
g({\bf n}) = e^{- i \phi J_{z} } e^{- i \theta J_{y} }
\end{equation}
on the highest-weight state $|j,j\rangle$:
\begin{eqnarray}
|j;{\bf n}\rangle = g({\bf n}) |j,j\rangle
& = & \sum_{\mu=-j}^{j} {2j \choose j+\mu}^{1/2}
\cos^{j+\mu}(\theta/2) \nonumber \\
& & \times \sin^{j-\mu}(\theta/2) e^{- i \mu \phi} |j,\mu\rangle .
\label{jcohstates}
\end{eqnarray}
An important property of the coherent states is the resolution of
the identity:
\begin{equation}
\frac{2j+1}{4\pi} \int_{{\Bbb{S}}^2} d {\bf n}\, |j;{\bf n}\rangle
\langle j;{\bf n}| = I ,
\end{equation}
where $d {\bf n} = \sin\theta\, d \theta\, d \phi$.

A possible procedure for quantum-state reconstruction is as
follows \cite{BM99,Agar98,DoMa97}. First, the system, whose initial
state is described by the density matrix $\rho$, is displaced in
the phase space:
\begin{equation}
\label{eq:displ}
\rho \rightarrow \rho({\bf n}) = g^{-1}({\bf n}) \rho g({\bf n}) ,
\hspace{8mm} {\bf n} \in {\Bbb{S}}^2 .
\end{equation}
Then one measures the probability to find the displaced system in
one of the states $|j,\mu\rangle$ (e.g., in the highest state
$|j,j\rangle$).
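The expansion (\ref{jcohstates}) can be verified numerically by applying the rotation $g({\bf n})$ to $|j,j\rangle$ directly. The sketch below is our own illustration (helper names and the descending basis ordering are our conventions; the matrix exponential of a Hermitian generator is taken via eigendecomposition):

```python
import numpy as np
from math import comb, cos, sin

def spin_matrices(j):
    """J_x, J_y, J_z in the basis |j,mu>, mu = j, ..., -j (hbar = 1)."""
    mu = np.arange(j, -j - 1, -1)
    jp = np.diag(np.sqrt(j * (j + 1) - mu[1:] * (mu[1:] + 1)), 1)
    return (jp + jp.T) / 2, (jp - jp.T) / (2 * 1j), np.diag(mu)

def expm_herm(h, t):
    """exp(i t H) for Hermitian H, via eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return (v * np.exp(1j * t * w)) @ v.conj().T

j = 2  # integer spin for simplicity
jx, jy, jz = spin_matrices(j)
theta, phi = 0.9, 1.7

# g(n) = exp(-i phi J_z) exp(-i theta J_y) acting on |j,j> = (1,0,...,0)^T
g = expm_herm(jz, -phi) @ expm_herm(jy, -theta)
rotated = g[:, 0]

# explicit expansion of |j;n> over |j,mu>, mu = j, ..., -j
explicit = np.array([comb(2 * j, j + m) ** 0.5
                     * cos(theta / 2) ** (j + m) * sin(theta / 2) ** (j - m)
                     * np.exp(-1j * m * phi)
                     for m in range(j, -j - 1, -1)])
assert np.allclose(rotated, explicit)
```

The two vectors agree including phases, which fixes the sign conventions used in the rest of the paper.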
This probability
\begin{equation}
\label{eq:pmu-def}
p_{\mu} ({\bf n}) = \langle j,\mu| \rho({\bf n}) |j,\mu\rangle
\end{equation}
(which is sometimes called the operational phase-space probability
distribution) can be formally considered as the expectation value
\begin{equation}
p_{\mu} ({\bf n}) = {\rm Tr}\, [\rho \Gamma_{\mu} ({\bf n})]
\end{equation}
of the so-called displaced projector
\begin{equation}
\Gamma_{\mu} ({\bf n}) = g({\bf n}) |j,\mu\rangle
\langle j,\mu| g^{-1}({\bf n}) .
\end{equation}
Repeating this procedure (with a large number of identically
prepared systems) for a large number of phase-space points ${\bf n}$,
one can determine the function $p_{\mu} ({\bf n})$.

Knowledge of the function $p_{\mu} ({\bf n})$ is sufficient for the
reconstruction of the initial density matrix $\rho$.
We can use the following expansion for the density matrix (such an
expansion exists for any operator on ${\cal H}_j$):
\begin{equation}
\rho = \sum_{l=0}^{2j} \sum_{m=-l}^{l} {\cal R}_{l m} D_{l m} ,
\hspace{6mm} {\cal R}_{l m} = {\rm Tr}\, (\rho D^{\dagger}_{l m}) .
\end{equation}
Here, $D_{l m}$ are the so-called tensor operators (also known in
the context of angular momentum as the Fano multipole operators
\cite{Fano53}),
\begin{equation}
D_{l m} = \sqrt{\frac{2l+1}{2j+1}} \sum_{k,q=-j}^{j}
\langle j,k;l,m|j,q \rangle |j,q \rangle \langle j,k| ,
\end{equation}
where $\langle j_{1},m_{1};j_{2},m_{2}|j,m \rangle$
are the Clebsch-Gordan coefficients.
Now, one can reconstruct the density matrix by using the relation
\cite{BM99,Agar98}
\begin{equation}
{\cal R}_{l m} = \frac{ \sqrt{(2j+1)/4\pi} }{
\langle j,\mu;l,0|j,\mu \rangle } \int_{{\Bbb{S}}^2} d {\bf n}\,
p_{\mu} ({\bf n}) Y_{l m}^{\ast}({\bf n}) ,
\end{equation}
where $Y_{l m}({\bf n})$ are the spherical harmonics.
Other ways to deduce the density matrix from the measured
probabilities $p_{\mu} ({\bf n})$ were also proposed
\cite{AmWe99}.

Let us also consider the useful concept of phase-space
quasiprobability distributions (QPDs). In the SU(2) case, one can
introduce an $s$-parametrized family of QPDs
\cite{BM99,Agar81,VaGB89},
\begin{equation}
P({\bf n};s) = \sum_{l=0}^{2j} \sum_{m=-l}^{l}
\frac{ \sqrt{4\pi/(2j+1)} }{ \langle j,j;l,0|j,j \rangle^{s} }
{\cal R}_{l m} Y_{l m}({\bf n}) .
\end{equation}
For $s=0$, we have the SU(2) equivalent of the Wigner function,
\begin{equation}
W({\bf n}) = \sqrt{ \frac{4\pi}{2j+1} }
\sum_{l=0}^{2j} \sum_{m=-l}^{l} {\cal R}_{l m} Y_{l m}({\bf n}) .
\end{equation}
For $s=1$, we obtain the SU(2) equivalent of the Glauber-Sudarshan
function (also known as Berezin's contravariant symbol),
$P({\bf n})$, whose defining property is
\begin{equation}
\rho = \frac{2j+1}{4\pi} \int_{{\Bbb{S}}^2} d {\bf n}\,
P({\bf n}) |j;{\bf n}\rangle \langle j;{\bf n}| .
\end{equation}
The function which is probably the most important for the
reconstruction problem is the SU(2) equivalent of the Husimi
function (also known as Berezin's covariant symbol),
\begin{equation}
Q({\bf n}) = \langle j;{\bf n}| \rho |j;{\bf n}\rangle ,
\end{equation}
obtained for $s=-1$.
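That sampled values of $Q({\bf n})$ indeed determine $\rho$ can be checked numerically. The sketch below is ours: instead of the tensor-operator integral it solves the same linear relation $p_j({\bf n}) = {\rm Tr}\,[\rho\, \Gamma_j({\bf n})]$ by least squares, recovering a random spin-1 density matrix from its $Q$ function sampled at random points on the sphere:

```python
import numpy as np

def spin_matrices(j):
    mu = np.arange(j, -j - 1, -1)
    jp = np.diag(np.sqrt(j * (j + 1) - mu[1:] * (mu[1:] + 1)), 1)
    return (jp + jp.T) / 2, (jp - jp.T) / (2 * 1j), np.diag(mu)

def expm_herm(h, t):
    w, v = np.linalg.eigh(h)
    return (v * np.exp(1j * t * w)) @ v.conj().T

rng = np.random.default_rng(1)
j = 1
jx, jy, jz = spin_matrices(j)
d = 2 * j + 1

# a random density matrix (Hermitian, positive, unit trace)
a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = a @ a.conj().T
rho /= np.trace(rho)

# sample Q(n) = <j;n| rho |j;n> = Tr[rho Gamma_j(n)] at random directions
thetas = np.arccos(rng.uniform(-1, 1, 40))
phis = rng.uniform(0, 2 * np.pi, 40)
rows, q = [], []
for th, ph in zip(thetas, phis):
    g = expm_herm(jz, -ph) @ expm_herm(jy, -th)
    coh = g[:, 0]                        # |j;n> = g(n)|j,j>
    gamma = np.outer(coh, coh.conj())    # displaced projector Gamma_j(n)
    rows.append(gamma.T.flatten())       # Tr(rho Gamma) = vec(Gamma^T) . vec(rho)
    q.append(np.real(coh.conj() @ rho @ coh))

sol = np.linalg.lstsq(np.array(rows), np.array(q, complex), rcond=None)[0]
rho_rec = sol.reshape(d, d)
assert np.allclose(rho_rec, rho)
```

For spin $j$ the $Q$ function contains harmonics only up to $l = 2j$, so of order $(2j+1)^2$ well-spread sample points already make the inversion well posed; the integral formula of the previous paragraphs is the continuous-sampling limit of the same idea.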
As is seen from Eq.~(\ref{eq:pmu-def}),
the function $Q({\bf n})$ gives the probability to find the
displaced system in the highest spin state $|j,j\rangle$,
\begin{equation}
\label{eq:Q-p}
Q({\bf n}) = p_j ({\bf n}) .
\end{equation}
Also, one can see that the probability $p_{-j}(\theta,\phi)$
to find the displaced system in the lowest spin state
$|j,-j\rangle$ is equal to $Q(\theta+\pi,\phi)$.
More generally, any one of the QPDs can be reconstructed using
the relation \cite{BM99}
\begin{eqnarray}
&& P({\bf n};s) = \frac{2j+1}{4\pi} \int_{{\Bbb{S}}^2} d {\bf n}'\,
K_{\mu,s}^{-}({\bf n}, {\bf n}')\, p_{\mu} ({\bf n}') , \\
&& K_{\mu,s}^{-}({\bf n}, {\bf n}') = \sum_{l=0}^{2j}
\frac{2l+1}{2j+1} \frac{ \langle j,j;l,0|j,j \rangle^{-s} }{
\langle j,\mu;l,0|j,\mu \rangle } P_l ( {\bf n} \cdot {\bf n}' ) ,
\end{eqnarray}
where $P_l (x)$ are the Legendre polynomials.
For $s=-1$ and $\mu=j$ we recover the relation (\ref{eq:Q-p}).

\section{General description of experimental schemes}

\subsection{Spectroscopy and interferometry}

Quantum transformations which constitute the basic operations
in spectroscopic and interferometric measurements can be
conveniently described as rotations in an abstract three-dimensional
space.
In this description, the system is characterized by the
vector ${\mathbf{J}} = (J_x , J_y , J_z)^T$, where the three
operators $J_x$, $J_y$, and $J_z$ satisfy the su(2) algebra
(\ref{su2alg}).

A spectroscopic or interferometric process is usually described
in the Heisenberg picture as a unitary transformation
\begin{equation}
{\mathbf{J}}_{\mathrm{out}} =
U(\vartheta_1,\vartheta_2,\varphi) {\mathbf{J}}
U^{\dagger}(\vartheta_1,\vartheta_2,\varphi)
= {\mathsf{U}}(\vartheta_1,\vartheta_2,\varphi) {\mathbf{J}} ,
\end{equation}
where ${\mathsf{U}}(\vartheta_1,\vartheta_2,\varphi)$ is a $3 \times 3$
transformation (rotation) matrix, and $\vartheta_1$, $\vartheta_2$,
$\varphi$ are transformation parameters (rotation angles).
A standard transformation consists of three steps:
\begin{enumerate}
\item[(i)] rotation around the $\hat{\mathbf{y}}$ axis by $\vartheta_1$,
with the transformation matrix ${\mathsf{R}}_y (\vartheta_1)$,
\item[(ii)] rotation around the $\hat{\mathbf{z}}$ axis by $\varphi$,
with the transformation matrix ${\mathsf{R}}_z (\varphi)$,
\item[(iii)] rotation around the $\hat{\mathbf{y}}$ axis by
$\vartheta_2$, with the transformation matrix
${\mathsf{R}}_y (\vartheta_2)$.
\end{enumerate}
The overall transformation performed on ${\mathbf{J}}$ is
\begin{equation}
{\mathsf{U}}(\vartheta_1,\vartheta_2,\varphi) = {\mathsf{R}}_y (\vartheta_2)
{\mathsf{R}}_z (\varphi) {\mathsf{R}}_y (\vartheta_1) .
\end{equation}
This transformation is slightly more general than those routinely
made in spectroscopy and interferometry.
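The three-step composition can be written out explicitly. The sketch below (our own helper names; standard right-handed $3 \times 3$ rotation matrices) also checks the special case $\vartheta_2 = -\vartheta_1 = \pi/2$, for which the composed transformation collapses to a single rotation about the $\hat{\mathbf{x}}$ axis:

```python
import numpy as np

def Ry(a):
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a), np.cos(a), 0],
                     [0, 0, 1]])

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a), np.cos(a)]])

def U(t1, t2, phi):
    # overall transformation R_y(t2) R_z(phi) R_y(t1)
    return Ry(t2) @ Rz(phi) @ Ry(t1)

phi = 0.6
# conjugating the z rotation by y rotations of +/- pi/2 turns it into an x rotation
assert np.allclose(U(-np.pi / 2, np.pi / 2, phi), Rx(phi))
```

This is just the statement that conjugating a rotation by another rotation rotates its axis; here the $\hat{\mathbf{z}}$ axis is carried onto $\hat{\mathbf{x}}$.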
The usual choice is
$\vartheta_2 = -\vartheta_1 = \pm \pi/2$, so that
${\mathsf{U}} = {\mathsf{R}}_x (\pm \varphi)$, respectively,
while $\varphi$ is the parameter to be estimated in the experiment.
In the Schr\"{o}dinger picture, the density matrix of the system
transforms as
\begin{equation}
\label{eq:Srot}
\rho_{\mathrm{out}} = U^{\dagger}(\vartheta_1,\vartheta_2,\varphi)
\rho U(\vartheta_1,\vartheta_2,\varphi) ,
\end{equation}
where the transformation operator is
\begin{equation}
\label{eq:Uoperator}
U(\vartheta_1,\vartheta_2,\varphi) = e^{ i \vartheta_1 J_y }
e^{ i \varphi J_z } e^{ i \vartheta_2 J_y } .
\end{equation}

Now, the aim is to measure the value of $\varphi$, which is
proportional to the transition frequency in a spectroscopic
experiment or to the optical path difference between the two arms of
an interferometer.
The information on $\varphi$ is inferred from the measurement of
the observable $J_z$ at the output. The quantum uncertainty
in the estimation of $\varphi$ is
\begin{equation}
\label{eq:uncert}
\Delta \varphi = \frac{\Delta J_{z \mathrm{out}} }{| \partial
\langle J_{z \mathrm{out}} \rangle/ \partial \varphi |} ,
\end{equation}
where the expectation values are taken over the initial quantum
state of the system. This state is assumed to be known, so one
can estimate the value of $\varphi$ and the corresponding
uncertainty.
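As a concrete illustration of Eq.~(\ref{eq:uncert}) (our own numerical sketch, with the operator conventions of Eq.~(\ref{eq:Uoperator}) and $\hbar = 1$), one can evaluate $\Delta\varphi$ for the coherent input state $|j,j\rangle$ and $\vartheta_2 = -\vartheta_1 = \pi/2$; the result is the standard quantum limit $\Delta\varphi = 1/\sqrt{2j} = 1/\sqrt{N}$, independent of $\varphi$:

```python
import numpy as np

def spin_matrices(j):
    mu = np.arange(j, -j - 1, -1)
    jp = np.diag(np.sqrt(j * (j + 1) - mu[1:] * (mu[1:] + 1)), 1)
    return (jp + jp.T) / 2, (jp - jp.T) / (2 * 1j), np.diag(mu)

def expm_herm(h, t):
    w, v = np.linalg.eigh(h)
    return (v * np.exp(1j * t * w)) @ v.conj().T

j = 2  # N = 2j = 4 elementary spins
jx, jy, jz = spin_matrices(j)
psi = np.zeros(2 * j + 1, complex); psi[0] = 1.0   # input state |j,j>

def jz_out_moments(phi, t1=-np.pi / 2, t2=np.pi / 2):
    """Mean and spread of J_z after U = exp(i t1 Jy) exp(i phi Jz) exp(i t2 Jy)."""
    u = expm_herm(jy, t1) @ expm_herm(jz, phi) @ expm_herm(jy, t2)
    chi = u.conj().T @ psi                 # rho_out = U^dagger rho U
    m1 = np.real(chi.conj() @ jz @ chi)
    m2 = np.real(chi.conj() @ jz @ (jz @ chi))
    return m1, np.sqrt(m2 - m1 ** 2)

phi, h = 0.7, 1e-6
m1p, _ = jz_out_moments(phi + h)
m1m, _ = jz_out_moments(phi - h)
_, spread = jz_out_moments(phi)
dphi = spread / abs((m1p - m1m) / (2 * h))   # Eq. (eq:uncert), derivative by finite difference
assert np.isclose(dphi, 1 / np.sqrt(2 * j), atol=1e-5)
```

Squeezed input states can beat this $1/\sqrt{N}$ scaling, which is precisely why knowing (and being able to verify) the input state matters.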
\subsection{Reconstruction of the initial state}

In this paper we consider how to use a spectroscopic or
interferometric arrangement for the inverse purpose, i.e., for the
measurement of an unknown initial quantum state by means of a large
number of transformations with known parameters.

As discussed in Sec.~II, the first part of the reconstruction
procedure is the phase-space displacement of Eq.~(\ref{eq:displ}).
With the phase space being the sphere, this displacement is just
a rotation produced by the operator $g({\bf n})$ of
Eq.~(\ref{eq:oSU2}). Now, compare this rotation with the one
made during a spectroscopic or interferometric experiment, as
given by Eqs.~(\ref{eq:Srot}) and (\ref{eq:Uoperator}).
One can immediately conclude that the phase-space displacement
needed for the SU(2) state-reconstruction procedure can be neatly
implemented by means of spectroscopic and interferometric
techniques. One only needs to omit the first rotation (i.e., take
$\vartheta_1 = 0$) and identify the two spherical angles as
\begin{equation}
\theta = -\vartheta_2 , \hspace{8mm}
\phi = -\varphi .
\end{equation}
After the rotation $g({\bf n})$ is made, one should
measure the probability $p_{\mu}({\bf n})$ to find the displaced
system in the state $|j,\mu\rangle$. Perhaps the most convenient
choice is to measure the population of the lowest state
$|j,-j\rangle$, which is usually the ground state of the system
(e.g., this state corresponds to the case where all the atoms are
unexcited; in the atomic case such a measurement can be made by
monitoring the resonance fluorescence on an auxiliary dipole
transition).
This procedure should be repeated for many phase-space points
${\bf n}$ with a large number of identically prepared systems,
thereby determining the function $p_{\mu}({\bf n})$ (e.g., for
$\mu = j$ or $\mu = -j$).
According to the formalism presented
in Sec.~II, this information is sufficient to reconstruct the
initial quantum state.

\section{Collections of two-level atoms}

In Ramsey spectroscopy \cite{Ramsey} one deals with a collection
of $N$ two-level systems (usually atoms or ions) interacting with
classical light fields. One can equivalently describe this physical
situation as the interaction of $N$ spin-$\frac{1}{2}$ particles
with classical magnetic fields. Denoting by ${\mathbf{S}}_i$ the
spin of the $i$th particle, one can use the collective spin operators
\begin{equation}
{\mathbf{J}} = \sum_{i=1}^{N} {\mathbf{S}}_i .
\end{equation}
The orthonormal basis $\{ |j,\mu\rangle \}$ consists of the symmetric
Dicke states \cite{Dicke54}:
\begin{equation}
|j,\mu\rangle = {N \choose p}^{-1/2} \sum \prod_{k=1}^{p}
|+\rangle_{l_k} \prod_{l \neq l_k} |-\rangle_{l} ,
\end{equation}
where $|+\rangle_l$ and $|-\rangle_l$ are the upper and lower states,
respectively, of the $l$th atom, and the summation is over all
possible permutations of the $N$ atoms. If only symmetric states
are considered, then the ``cooperative number'' $j$ is equal to
$N/2$ and $p = \mu+j$ is just the number of excited atoms.
Usually the atoms (ions) are far enough apart that their wave
functions do not overlap, so the direct dipole-dipole coupling and
other direct interactions between the atoms may be neglected.

In the spin formulation (see, e.g., Ref.~\cite{Wine94} for a very good
description), a magnetic moment $\bbox{\mu} = \mu_0 {\mathbf{S}}$
is associated with each particle. If a uniform external magnetic
field ${\mathbf{B}}_0 = B_0 \hat{\mathbf{z}}$
is applied, the Hamiltonian for each particle is given by
\begin{equation}
H_0 = - \bbox{\mu} \cdot {\mathbf{B}}_0 = \hbar \omega_0 S_z ,
\end{equation}
where $\hbar \omega_0 = -\mu_0 B_0$ is the separation in energy
between the two levels.
The corresponding Heisenberg equation for
the collective spin operator is
\begin{equation}
\partial {\mathbf{J}}/ \partial t = \bbox{\omega}_0 \times
{\mathbf{J}} ,
\end{equation}
where $\bbox{\omega}_0 = \omega_0 \hat{\mathbf{z}}$. Then one
applies the so-called clock radiation, which is a classical field
of the form
\begin{equation}
{\mathbf{B}}_{\perp} = B_{\perp} \left( \hat{\mathbf{y}} \cos \omega t
- \hat{\mathbf{x}} \sin \omega t \right) ,
\end{equation}
where $\omega \simeq \omega_0$ and we assume $\omega_0 > 0$.
In the reference frame that rotates at frequency $\omega$, the
collective spin ${\mathbf{J}}$ interacts with the effective field
\begin{equation}
{\mathbf{B}} = B_r \hat{\mathbf{z}} + B_{\perp} \hat{\mathbf{y}} ,
\end{equation}
where $B_r = B_0 (\omega_0 -\omega)/\omega_0$. In the rotating
frame, the Hamiltonian is
$H = - \mu_0 {\mathbf{J}} \cdot {\mathbf{B}}$, and the Heisenberg
equation for $\mathbf{J}$ is
\begin{equation}
\label{eq:Jrot}
\partial {\mathbf{J}}/ \partial t = \bbox{\omega}' \times
{\mathbf{J}} ,
\end{equation}
where $\bbox{\omega}' = (\omega_0 - \omega) \hat{\mathbf{z}}
+ \omega_{\perp} \hat{\mathbf{y}}$ and
$\omega_{\perp} = -\mu_0 B_{\perp}/\hbar$.

The Ramsey method breaks the evolution time of the system into three
parts. In the first part, $B_{\perp}$ is nonzero and constant with
value $B_{1}$ during the time interval $0 \leq t \leq t_{\vartheta}$.
During this period (the first Ramsey pulse),
${\mathbf{B}} = B_r \hat{\mathbf{z}} + B_{1} \hat{\mathbf{y}}
\simeq B_{1} \hat{\mathbf{y}}$, where we made the assumption
$|B_{1}| \gg |B_{r}|$, i.e., $|\omega_{1}| \gg |\omega_0 - \omega|$,
with $\omega_{1} = -\mu_0 B_{1}/\hbar$.
Therefore, in the rotating
frame of Eq.~(\ref{eq:Jrot}), $\mathbf{J}$ rotates around the
$\hat{\mathbf{y}}$ axis by the angle
$\vartheta_1 = \omega_{1} t_{\vartheta}$.
During the second period, of duration $T$
(usually $T \gg t_{\vartheta}$), $B_{\perp}$ is zero, so
${\mathbf{B}} = B_r \hat{\mathbf{z}}$, and $\mathbf{J}$ rotates around
the $\hat{\mathbf{z}}$ axis by the angle
$\varphi = (\omega_0 - \omega) T$. The third period is exactly like
the first one but with the field $B_{\perp} = B_{2}$ and the
corresponding angular frequency $\omega_{2} = -\mu_0 B_{2}/\hbar$;
this gives a rotation around the $\hat{\mathbf{y}}$ axis by the
angle $\vartheta_2 = \omega_{2} t_{\vartheta}$. These three Ramsey
pulses provide the rotations we described in Sec.~III~A (usually,
$\vartheta_1 = \vartheta_2 = \pi/2$).

The aim of spectroscopic experiments is to measure the transition
frequency $\omega_0$ (which is equivalent to measuring
$\varphi$, as $\omega$ and $T$ are determined by the experimenter).
Usually, one measures the number of atoms in the upper state
$|+\rangle$,
\begin{equation}
N_{+ {\rm out}} = J_{z {\rm out}} + N/2 ,
\end{equation}
and thus obtains information about the angle $\varphi$ or,
equivalently, about the frequency $\omega_0$. Of course, in order to
infer this information one should know the initial quantum state of
the system. The measurement sensitivity, as seen from
Eq.~(\ref{eq:uncert}), also depends on the initial quantum state.

In the state-reconstruction procedure, the first Ramsey pulse should
be omitted, while the second and third pulses produce the desired
phase-space displacement $g^{\dagger}({\bf n})$.
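The truncated Ramsey sequence can be simulated directly. In the sketch below (our own illustration; helper names and $\hbar = 1$ are our conventions), the probe state is itself an SU(2) coherent state, and the displaced-population data reproduce two facts that are easy to check analytically: the $\mu = j$ population equals 1 when the displacement matches the state's own center, and more generally it equals the coherent-state overlap $|\langle j;{\bf n}|j;{\bf n}_0\rangle|^2 = [(1+{\bf n}\cdot{\bf n}_0)/2]^{2j}$:

```python
import numpy as np

def spin_matrices(j):
    mu = np.arange(j, -j - 1, -1)
    jp = np.diag(np.sqrt(j * (j + 1) - mu[1:] * (mu[1:] + 1)), 1)
    return (jp + jp.T) / 2, (jp - jp.T) / (2 * 1j), np.diag(mu)

def expm_herm(h, t):
    w, v = np.linalg.eigh(h)
    return (v * np.exp(1j * t * w)) @ v.conj().T

N = 4; j = N / 2
jx, jy, jz = spin_matrices(j)

def displaced_populations(rho, theta, phi):
    """Populations of |j,mu> (mu = j..-j) after the displacement g(n),
    i.e. after Ramsey pulses with vartheta_1 = 0, theta = -vartheta_2, phi = -varphi."""
    g = expm_herm(jz, -phi) @ expm_herm(jy, -theta)
    rho_n = g.conj().T @ rho @ g
    return np.real(np.diag(rho_n))

# probe state: all collective spin along n0
theta0, phi0 = 1.1, 0.4
g0 = expm_herm(jz, -phi0) @ expm_herm(jy, -theta0)
psi0 = g0[:, 0]
rho = np.outer(psi0, psi0.conj())

assert np.isclose(displaced_populations(rho, theta0, phi0)[0], 1.0)

th, ph = 2.0, -0.7
n = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
n0 = np.array([np.sin(theta0) * np.cos(phi0), np.sin(theta0) * np.sin(phi0),
               np.cos(theta0)])
assert np.isclose(displaced_populations(rho, th, ph)[0],
                  ((1 + n @ n0) / 2) ** (2 * j))
```

In an actual experiment these populations would be estimated from repeated runs, and the angles $(\theta, \phi)$ scanned by varying $t_\vartheta$ and $T$.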
After the displacement
is completed, one should measure the probability to find the system in
one of the states $|j,\mu\rangle$; for example, one can measure the
population of the ground state $|j,-j\rangle$ or of the most excited
state $|j,j\rangle$.
This measurement can be made by driving a dipole transition to an
auxiliary atomic level and then observing the resonance fluorescence.
The phase space is scanned by repeating the measurement with many
identically prepared systems for various durations of the
Ramsey pulses, $T$ and $t_{\vartheta}$. Of course, the apparatus
should first be calibrated by measuring the transition frequency
$\omega_0$.

\section{Two-mode light fields}

The basic device employed in a passive optical interferometer is
the beam splitter (a partially transparent mirror). A Mach-Zehnder
interferometer consists of two beam splitters and operates
as follows. Two light modes (with boson annihilation operators
$a_1$ and $a_2$) are mixed by the first beam splitter,
accumulate phase shifts $\varphi_1$ and $\varphi_2$, respectively,
and are then mixed once again by the second beam splitter.
Photons in the output modes are counted by two photodetectors.
In fact, a Michelson interferometer works in the same way, but
due to its geometric layout the two beam splitters may coincide.

Each beam splitter has two input and two output ports.
Let ${\bf a} = (a_1 , a_2)^T$ and ${\bf b} = (b_1 , b_2)^T$ be the
column vectors of the boson operators of the input and output modes,
respectively. Then, in the Heisenberg picture, the action of
the beam splitter is described by the transformation
\begin{equation}
{\bf b} = {\sf B} {\bf a} ,
\end{equation}
where ${\sf B}$ is a $2 \times 2$ matrix. For a lossless beam
splitter, ${\sf B}$ must be unitary, thereby assuring energy
(photon-number) conservation.
A possible form of ${\sf B}$ is
\begin{equation}
\label{eq:BSmatrix}
{\sf B}(\vartheta) = \left( \begin{array}{cc}
\cos (\vartheta/2) & -\sin (\vartheta/2) \\
\sin (\vartheta/2) & \cos (\vartheta/2) \end{array} \right) ,
\end{equation}
with $T = \cos^2 (\vartheta/2)$ and $R = \sin^2 (\vartheta/2)$
being the transmittance and reflectivity, respectively.
When the two light modes accumulate phase shifts $\varphi_1$ and
$\varphi_2$, respectively, the corresponding transformation is
\begin{equation}
\label{eq:PSmatrix}
{\bf b} = {\sf P} {\bf a} , \hspace{8mm}
{\sf P} = \left( \begin{array}{cc}
e^{ i \varphi_1 } & 0 \\ 0 & e^{ i \varphi_2 } \end{array} \right) .
\end{equation}

The group-theoretic description of the interferometric process
\cite{YMK86} is based on the Schwinger realization of the su(2)
algebra:
\begin{eqnarray}
\label{eq:Schwinger}
& & J_x =
( a_1^{\dagger} a_2 + a_2^{\dagger} a_1 )/2 , \nonumber \\
& & J_y =
- i ( a_1^{\dagger} a_2 - a_2^{\dagger} a_1 )/2 , \\
& & J_z =
( a_1^{\dagger} a_1 - a_2^{\dagger} a_2 )/2 . \nonumber
\end{eqnarray}
The actions of the interferometer elements (mixing by the beam
splitters and phase shifts) can be represented as rotations of the
column vector ${\bf J} = ( J_x , J_y , J_z )^T$.
The beam-splitter transformation of Eq.~(\ref{eq:BSmatrix}) is
represented by the rotation ${\sf R}_y (\vartheta)$ around the
$\hat{\mathbf{y}}$ axis by the angle $\vartheta$, and the phase shift
of Eq.~(\ref{eq:PSmatrix}) is represented by the rotation
${\sf R}_z (\varphi)$ around the $\hat{\mathbf{z}}$ axis by the angle
$\varphi = \varphi_2 - \varphi_1$.
Now, if the transmittances of the two beam splitters are
$T_1 = \cos^2 (\vartheta_1 /2)$ and $T_2 = \cos^2 (\vartheta_2 /2)$,
respectively, then the interferometer action is given by the
three rotations described in Sec.~III~A.
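Restricted to the subspace with a fixed total photon number $N$, the Schwinger operators (\ref{eq:Schwinger}) reproduce exactly the spin-$j$ matrices with $j = N/2$. A minimal sketch (our own basis ordering, $|n\rangle_1 |N-n\rangle_2$ with $n = N, \ldots, 0$, so that $\mu = n - N/2$ runs from $j$ down to $-j$):

```python
import numpy as np

def spin_matrices(j):
    mu = np.arange(j, -j - 1, -1)
    jp = np.diag(np.sqrt(j * (j + 1) - mu[1:] * (mu[1:] + 1)), 1)
    return (jp + jp.T) / 2, (jp - jp.T) / (2 * 1j), np.diag(mu)

N = 3                      # fixed total photon number, j = N/2
n1 = np.arange(N, -1, -1)  # photons in mode 1, ordered n = N, ..., 0

jz = np.diag((n1 - (N - n1)) / 2)
# J_+ = a1^dag a2 : |n, N-n> -> sqrt((n+1)(N-n)) |n+1, N-n-1>
jp = np.diag(np.sqrt((n1[1:] + 1) * (N - n1[1:])), 1)
jx = (jp + jp.T) / 2
jy = (jp - jp.T) / (2 * 1j)

sx, sy, sz = spin_matrices(N / 2)
assert np.allclose(jx, sx) and np.allclose(jy, sy) and np.allclose(jz, sz)
```

This identification is what lets a lossless two-port interferometer be described as an SU(2) rotation on ${\cal H}_j$: the beam splitter generates the $J_y$ rotation and the differential phase shift the $J_z$ rotation.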
(Usually, one uses 50-50
beam splitters, so $\vartheta_1 = -\vartheta_2 = \pi/2$.)

Interferometers are constructed to measure the relative phase
shift $\varphi$, which is proportional to the optical path difference
between the two arms. Usually, one measures the difference between
the photocurrents due to the two output light beams. This quantity is
proportional to the photon-number difference at the output,
$q_{\mathrm{out}} = 2 J_{z \mathrm{out}}$. If the input state of
light is known, then the measurement of $q_{\mathrm{out}}$ can
be used to infer the phase shift $\varphi$ and to estimate the
measurement error due to the quantum fluctuations of the light
field.

A simple calculation gives ${\bf J}^2 = (N/2)(1+N/2)$, where
$N = a_1^{\dagger} a_1 + a_2^{\dagger} a_2$ is the total number
of photons in the two modes. If $N$ has a fixed value for the input
state of the two-mode light field, then this state belongs to the
Hilbert space ${\cal H}_j$ of a specific SU(2) representation with
$j = N/2$. Because $N$ is an SU(2) invariant, this state remains
in ${\cal H}_j$ during the interferometric process.
Such input states of the two-mode light field can be reconstructed
using a rearrangement of the interferometric scheme, according to the
general procedure described in Secs.~II and III~B.

The phase-space displacement $g^{\dagger}({\bf n})$ needed for the
state-reconstruction procedure can be implemented by using an
interferometer without the first beam splitter. Then one should
measure the probability $p_{\mu}({\bf n})$ to find the output light
in one of the states $|j,\mu\rangle$. Note that these states are
given by
\begin{equation}
\label{eq:staterel}
|j,\mu\rangle = |j+\mu \rangle_1 \otimes |j-\mu \rangle_2
\end{equation}
in terms of the Fock states of the two light modes.
Thus, $\mu$ is just one half of the photon-number difference
measured at the output.
Averaging over many measurements, one
obtains the probabilities $p_{\mu}({\bf n})$.
For example, $p_{j}({\bf n})$ is the probability that all photons
exit in the first output beam while the number of photons in the
second output beam is zero.
The measurement should be repeated with identically prepared input
light beams for many phase-space displacements. This means that one
needs a well-calibrated apparatus which can be tuned to various
values of the relative phase shift $\varphi$. These phase shifts
can be conveniently produced by moving a mirror with a precise
electro-mechanical system. Various values of the angle
$\vartheta_2$ can be realized by using a collection of partially
transparent mirrors with different reflectivities for the second
beam splitter. An alternative possibility is to use the dependence
of the reflectivity on the angle of incidence for light polarized
in the plane of incidence.

In general, the state reconstruction for two-mode light fields is
a tedious task, because the corresponding Hilbert space is very
large \cite{KWV95,RMAL96,OWV97,Richter97,PTKJ97}.
Obviously, this task can be greatly simplified for the subclass of
two-mode states with a fixed total number of photons, by means of
the reconstruction method presented here.
However, this method is
in principle suitable for other two-mode states as well.
In general, the whole Hilbert space of the two-mode system can
be decomposed as
\begin{equation}
\label{eq:decomp}
{\cal H} = \bigoplus_{j} {\cal H}_j .
\end{equation}
The method of inverted interferometry enables one to reconstruct
the part of the density matrix corresponding to each irreducible
subspace ${\cal H}_j$.
One case for which our method is applicable is the subclass of
states whose density matrices are block-diagonal with respect to the
decomposition (\ref{eq:decomp}).
This means that the corresponding operator can be written as
\begin{equation}
\rho = \sum_{j} \rho_j ,
\end{equation}
where $\rho_j$ is an operator on ${\cal H}_j$. Each component
$\rho_j$ evolves independently during the phase-space displacement;
hence the state of the whole system can be measured by
reconstructing all the invariant components $\rho_j$.
The other case for which our method works is the subclass of pure
states,
\begin{equation}
|\psi\rangle = \sum_{j} |\psi_j\rangle , \hspace{8mm}
|\psi_j\rangle = \sum_{\mu = -j}^{j} c_{j\mu} |j,\mu\rangle .
\end{equation}
Then the density matrix can be written as
\begin{equation}
\label{eq:pure-decomp}
\rho = \sum_{j} | \psi_j \rangle \langle \psi_j | +
\sum_{j \neq j'} |\psi_j \rangle \langle \psi_{j'} | .
\end{equation}
The populations of the states $|j,\mu\rangle$ are unaffected by
the second term in (\ref{eq:pure-decomp}), and one can reconstruct
all the invariant components
$\rho_j = | \psi_j \rangle \langle \psi_j |$.
This gives information about the state $|\psi\rangle$ of the whole
system, except for the relative phases between different
$| \psi_j \rangle$.
From the technical point of view,
each measurement of the photon-number difference $2 \mu$, needed
to determine the probabilities $p_{\mu}({\bf n})$, should be
accompanied by a measurement of the photon-number sum $N = 2j$,
in order to determine to which invariant subspace ${\cal H}_j$
the detected value of $\mu$ corresponds.
Consequently, one needs to make many more measurements in order
to accumulate enough data for each value of $j$.
A technical problem is that the quantum efficiencies of realistic
photodetectors are always less than unity.
While this problem is not too serious for the measurement of the
photon-number difference (as long as both detectors have the
same efficiency), it puts a serious limitation on the accuracy
of the measurement of the total number of photons.

\section{Two-dimensional vibrations of a trapped ion}

As was recently demonstrated by Wineland \emph{et al.} \cite{Wine98},
a single laser-cooled ion in a harmonic trap can be used to simulate
various interactions governing many well-known optical processes.
In particular, one can simulate the transformations produced by the
elements of a Mach-Zehnder optical interferometer.

Consider a single ion confined in a two-dimensional harmonic trap,
with angular frequencies of oscillation $\Omega_1$ and $\Omega_2$
in two orthogonal directions. Two internal states
of the ion, $|+\rangle$ and $|-\rangle$, are separated in energy
by $\hbar \omega_0$. The internal and motional degrees of freedom
can be coupled by applying classical laser beams, with
electric fields of the form
\[ {\bf E}({\bf x},t) = {\bf E}_0 \cos ({\bf k} \cdot {\bf x}
- \omega t + \Phi) . \]
For example, one can apply two laser beams to produce stimulated
Raman transitions.
We denote by $\\omega = \\omega_1 - \\omega_2$,\n${\\bf k} = {\\bf k}_1 - {\\bf k}_2$, and $\\Phi = \\Phi_1 - \\Phi_2$\nthe differences between the angular frequencies, the wave \nvectors, and the phases, respectively, of the two applied fields.\nThen, in the rotating-wave approximation, the interaction \nHamiltonian reads\n\\begin{equation}\nH_I = \\hbar \\kappa \\exp[ i ({\\bf k} \\cdot {\\bf x} - \\delta t + \n\\Phi) ] + {\\rm H.c.} ,\n\\end{equation}\nwhere $\\delta = \\omega-\\omega_0$ is the frequency detuning,\n${\\bf x}$ is the ion's position relative to its equilibrium, and\n$\\kappa$ is the coupling constant (the Rabi frequency). Each of \nthe two modes of the ion's motion can be modelled by a quantum \nharmonic oscillator:\n\\begin{equation}\nx_r = x_{0 r} (a_r + a_r^{\\dagger}), \\hspace{6mm}\nx_{0 r} = \\sqrt{ \\hbar\/(2 M \\Omega_r) } ,\n\\end{equation}\nwhere $r=1,2$ and $M$ is the ion's mass. Also, let \n$\\eta_r = k_r x_{0 r}$ ($r=1,2$) be the Lamb-Dicke parameters for the \ntwo oscillatory modes. \nIt is convenient to use the interaction picture for the ion's motion: \n\\begin{eqnarray}\n\\tilde{H}_I & = & \\exp( i H_0 t\/\\hbar) H_I \\exp(- i H_0 t\/\\hbar) \n\\nonumber \\\\\n& = & \\hbar \\kappa e^{ i (\\Phi - \\delta t)} \\prod_{r=1,2}\n\\exp[ i \\eta_r (\\tilde{a}_r + \\tilde{a}_r^{\\dagger}) ] + {\\rm H.c.}, \n\\label{eq:Ham2}\n\\end{eqnarray}\nwhere $H_0$ is the free Hamiltonian for the ion's motion,\n\\begin{equation}\nH_0 = \\hbar \\Omega_1 \\left( a_1^{\\dagger} a_1 + \\mbox{$\\frac{1}{2}$}\n\\right) + \\hbar \\Omega_2 \\left( a_2^{\\dagger} a_2 + \n\\mbox{$\\frac{1}{2}$} \\right) ,\n\\end{equation}\nand $\\tilde{a}_r = a_r \\exp(- i \\Omega_r t)$, $r=1,2$.\n\nIf the coupling constant $\\kappa$ is small enough and $\\Omega_1$ and \n$\\Omega_2$ are incommensurate, one can resonantly excite only one \nspectral component of the possible transitions. 
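The interaction-picture relation $\tilde{a}_r = a_r \exp(- i \Omega_r t)$ used in Eq.~(\ref{eq:Ham2}) can be checked directly on a truncated oscillator. The sketch below (cutoff dimension and the value of $t$ are arbitrary illustration choices; units with $\hbar = \Omega_r = 1$) conjugates the annihilation operator with $\exp(i H_0 t)$:

```python
import numpy as np

# Truncated harmonic oscillator; the 6-level cutoff and the time t are
# arbitrary illustration choices (units with hbar = Omega_r = 1).
d = 6
t = 0.73
a = np.diag(np.sqrt(np.arange(1, d)), k=1)    # annihilation operator
n = np.arange(d)
U = np.diag(np.exp(1j * (n + 0.5) * t))       # exp(i H_0 t), H_0 diagonal

a_tilde = U @ a @ U.conj().T

# In the interaction picture a_r only picks up the phase e^{-i Omega_r t}.
assert np.allclose(a_tilde, a * np.exp(-1j * t))
```

Because $a$ is purely subdiagonal and $H_0$ is diagonal, the relation holds exactly even on the truncated space.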
For a particular \nresonance condition $\\delta = \\Omega_2 - \\Omega_1$ (and in the \nLamb-Dicke limit of small $\\eta_1$ and $\\eta_2$), the product in \nEq.~(\\ref{eq:Ham2}) will be dominated by the single term\n$( i \\eta_1 a_1 )( i \\eta_2 a_2^{\\dagger} )$. Therefore, one\nobtains\n\\begin{equation}\n\\label{eq:H-bs}\n\\tilde{H}_I \\approx -\\hbar \\kappa \\eta_1 \\eta_2 \\left( e^{ i \\Phi} \na_1 a_2^{\\dagger} + e^{- i \\Phi} a_1^{\\dagger} a_2 \\right).\n\\end{equation}\nReturning to the Schr\\\"{o}dinger picture, the total evolution\noperator reads:\n\\begin{eqnarray}\nU(t) & = & \\exp(- i H_0 t\/\\hbar) \\exp(- i \\tilde{H}_I t\/\\hbar)\n\\nonumber \\\\\n& = & \\exp[- i (\\Omega_1 + \\Omega_2)(N+1) t\/2 ]\n\\exp[ i (\\Omega_2 - \\Omega_1) J_z t ] \\nonumber \\\\\n& & \\times \\exp( 2 i \\kappa \\eta_1 \\eta_2 J_{\\Phi} t ) .\n\\label{eq:evolution}\n\\end{eqnarray}\nHere, $N = a_1^{\\dagger} a_1 + a_2^{\\dagger} a_2$ is the total number\nof vibrational quanta in the two modes, \n$J_{\\Phi} = J_x \\cos\\Phi + J_y \\sin\\Phi$, and we used the Schwinger \nrealization (\\ref{eq:Schwinger}) for the SU(2) generators. \n\nNow, let us consider only such motional states of the ion for which\n$N$ has a fixed value, i.e., which belong to the irreducible Hilbert \nspace ${\\cal H}_j$ (with $j = N\/2$). For these states,\nthe first exponent in (\\ref{eq:evolution}) will just produce an\nunimportant phase factor and can be omitted. Clearly, the evolution \noperator (\\ref{eq:evolution}) can be used to simulate the action of \nan optical interferometer, with two vibrational modes of a trapped \nion employed instead of two light beams. In order to simulate the\naction of a beam splitter, one should apply the interaction\n(\\ref{eq:H-bs}) during time $t_{\\theta}$ and ensure that\n$|2 \\kappa \\eta_1 \\eta_2| \\gg |\\Omega_2 - \\Omega_1|$, so the effect\nof the free evolution can be neglected. 
Then, for $\Phi = \pi\/2$,\nthe evolution operator reads\n\begin{equation}\n\label{eq:Utheta}\nU_y (\theta) = \exp( i \theta J_y ) , \hspace{8mm}\n\theta = 2 \kappa \eta_1 \eta_2 t_{\theta} .\n\end{equation}\nA relative phase shift between the two modes can be produced\njust by using the free evolution, i.e., with no external laser \nfields applied. Letting the system evolve freely during time $T$,\none obtains\n\begin{equation}\n\label{eq:Uphi}\nU_z (\phi) = \exp( i \phi J_z ) , \hspace{8mm}\n\phi = (\Omega_2 - \Omega_1) T .\n\end{equation}\n\nIt is obvious that by successively applying the transformations\n(\ref{eq:Uphi}) and (\ref{eq:Utheta}) one produces the \nphase-space displacement $g^{\dagger}({\bf n})$, employed in the \nstate-reconstruction procedure. The whole phase space can be \nscanned by repeating the procedure with identically prepared\nsystems for various durations $T$ and $t_{\theta}$. Each \nphase-space displacement should be followed by the measurement of \nthe probability $p_{\mu}({\bf n})$ to find the system in one of\nthe states $|j,\mu\rangle$. For example, $p_{j}({\bf n})$ is\nthe probability that the first oscillatory mode is excited to the\n$N$th level ($N = 2j$) while the second mode is in the ground \nstate. Such a measurement can be made with the method used\nrecently by the NIST group \cite{Leibfr} to reconstruct the \none-dimensional motional state of a trapped ion. \nThe principle of this method is as follows. One of the \noscillatory modes is coupled to the internal transition \n$|+\rangle \leftrightarrow |-\rangle$. This is done by applying\na single classical laser field, so that single-photon transitions are\nexcited. This results in an interaction of the Jaynes-Cummings\ntype \cite{JC63} between the oscillatory mode and the internal \ntransition. 
Then the population $P_{-}(t)$ of the lower internal \nstate $|-\\rangle$ is measured for various values of the\ninteraction time $t$ (as we already mentioned, this measurement \ncan be made by monitoring the resonant fluorescence produced in \nan auxiliary dipole transition). If $|-\\rangle$ is the internal \nstate at $t=0$, then the signal averaged over many measurements \nis \n\\[\nP_{-}(t) = \\frac{1}{2} \\left[ 1 + \n\\sum_{n=0}^{\\infty} P_{n} \\cos(2 \\Omega_{n,n+1} t)\ne^{-\\gamma_{n} t} \\right] ,\n\\]\nwhere $\\Omega_{n,n+1}$ are the Rabi frequencies and $\\gamma_{n}$ \nare the experimentally determined decay constants. This\nrelation allows one to determine the populations $P_{n}$ \nof the motional eigenstates $|n\\rangle$. By virtue of \nEq.~(\\ref{eq:staterel}), this gives the populations $p_{\\mu}$ of \nthe SU(2) states $|j,\\mu\\rangle$ (with $\\mu = n-j$ for the first\nmode and $\\mu = j-n$ for the second mode). For example, $p_{-j}$\nand $p_{j}$ are given by $P_0$ for the first and second modes,\nrespectively.\n\n\\section{Conclusions}\n\nIn this paper we presented practical methods for the reconstruction \nof quantum states for a number of physical systems with SU(2) \nsymmetry. All these methods employ the same basic idea---the \nmeasurement of displaced projectors---which in principle is \napplicable to any system possessing a Lie-group symmetry. Practical \nrealizations, of course, vary for different physical systems. \nIn our approach, we exploited the fact that transformations applied\nin conventional spectroscopic and interferometric schemes are, from\nthe mathematical point of view, just rotations. In the context of\nthe SU(2) group, these rotations constitute phase-space \ndisplacements needed to implement a part of the reconstruction\nprocedure. 
Therefore, the spectroscopic and interferometric\nmeasurements can be easily rearranged in order to enable one to \ndetermine unknown quantum states for an ensemble of identically \nprepared systems. As the spectroscopic and interferometric\nmeasurements are known for their high accuracy, we hope that the\ncorresponding rearrangements will allow accurate reconstructions \nof unknown quantum states.\n\n\\acknowledgements\n\nThis work was supported by the Fund for Promotion of Research \nat the Technion and by the Technion VPR Fund.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}}\n\\newcommand{\\subsect}[1]{\\subsection{#1}}\n\\newcommand{\\subsubsect}[1]{\\subsubsection{#1}}\n\\renewcommand{\\theequation}{\\arabic{section}.\\arabic{equation}}\n\n\\newtheorem{proposition}{Proposition}\n\n\\def\\begin{equation}{\\begin{equation}}\n\\def\\end{equation}{\\end{equation}}\n\\def\\begin{eqnarray}{\\begin{eqnarray}}\n\\def\\end{eqnarray}{\\end{eqnarray}}\n\n\n\n\n \n\n\\def\\1{\\'{\\i}} \n\\def\\R{{\\rm I\\kern-.2em R}} \n\n \n \n\n\n \n\n\\begin{document}\n \n \n\\thispagestyle{empty}\n\n\n \n\n \n\n\n\\ \n\\vspace{3cm}\n\n \n\\begin{center} {\\LARGE{\\bf{Quantum Heisenberg--Weyl Algebras}}} \n\\end{center}\n \n \n\n\\bigskip\\bigskip\n\n\n\n\n \n\\begin{center} Angel Ballesteros$^\\dagger$, Francisco J.\nHerranz$^\\dagger$ and Preeti Parashar$^\\ddagger$\n\\end{center}\n\n\\begin{center} {\\it { $^\\dagger$ Departamento de F\\1sica, Universidad\nde Burgos} \\\\ Pza. Misael Ba\\~nuelos, \nE-09001-Burgos, Spain}\n\\end{center}\n\n\\begin{center} {\\it { $^\\ddagger$ SISSA, Via Beirut 2-4, \n34014 Trieste, Italy}}\n\\end{center}\n\n\n\\bigskip\\bigskip\\bigskip\n\n\n\n\n\n\n\\begin{abstract} \nAll Lie bialgebra structures on the Heisenberg--Weyl\nalgebra $[A_+,A_-]=M$ are classified and explicitly quantized. 
The\ncomplete list of quantum Heisenberg--Weyl algebras so obtained includes\nnew multiparameter deformations, most\nof them being of the non-coboundary type. \n\\end{abstract} \n\n\\newpage \n\n\n\n\\setcounter{equation}{0}\n\n\\renewcommand{\\theequation}{\\arabic{equation}}\n\n\n\n\n\n\n\n A Hopf algebra deformation of a universal enveloping algebra $U g$\ndefines in a unique way a Lie bialgebra structure $(g,\\delta)$ on $g$\n \\cite{PC}. The cocommutator $\\delta$ provides the first order terms\nin the deformation of the coproduct, and can be seen as the natural\ntool to classify quantum algebras. Moreover, this well known statement\nsuggests the relevance of the inverse problem, i.e., to find a method\nto construct, given an arbitrary Lie bialgebra, a Hopf algebra\nquantization of it. \n\nThis question has been addressed recently in\n\\cite{osc}, where a very general construction of a deformed\ncoassociative coproduct linked to a given Lie bialgebra $(g,\\delta)$\nhas been presented. Such Lie bialgebra quantization formalism,\ninspired by the paper\n\\cite{LM} (see also \\cite{Lyak}), has been shown to be universal for\nthe oscillator algebra: multiparametric coproducts corresponding to\nall coboundary oscillator Lie bialgebra structures\ncan be obtained in that way (for the oscillator algebra non-coboundary\nstructures do not exist\n\\cite{oscGos}). To complete the structure of quantum algebras,\ndeformed commutation rules can be found by imposing the homomorphism\ncondition for the coproduct (counit and antipode can be also easily\nderived).\n\nIn this letter we show that all Heisenberg--Weyl Lie bialgebras can\nbe completely quantized by making use of this formalism. This result\nenhances the advantages of such an approach in order to obtain a full\nchart of Hopf algebra deformations of physically relevant algebras. \n\nFirstly, we find the most general form of all families\nof Heisenberg--Weyl Lie bialgebras. 
It is remarkable that, in contrast to\nthe oscillator case, now there exists only one coboundary bialgebra\namong them. Afterwards, it is shown how all these Lie bialgebras can\nbe classified and ``exponentiated\" to get the quantum coproducts by\nmeans of the formalism introduced in\n\\cite{osc}. We also find all deformed commutation rules, thus\nobtaining a complete list of quantum deformations of this algebra,\nwhose properties are briefly commented. This exhaustive description is\nfully complementary with respect to the quantum group results already\nknown either from a Poisson--Lie construction\n\\cite{Kuper} or from an\n$R$-matrix approach \\cite{HLR}. \n\nLet us fix the notation. The Heisenberg--Weyl Lie algebra $h_3$ is\ngenerated by $A_+$, $A_-$ and $M$ with Lie brackets\n\\begin{equation}\n [A_-,A_+]=M,\\qquad \n[M,\\cdot\\,]=0 .\n\\label{aa}\n\\end{equation}\nA $3\\times 3$ real matrix representation $D$ of (\\ref{aa}) is given by:\n\\begin{equation} \n D(A_+)=\\left(\\begin{array}{ccc}\n 0 &0 & 0 \\\\ 0 & 0 & 1 \\\\ 0 & 0 & 0 \n\\end{array}\\right),\\quad\n D(A_-)=\\left(\\begin{array}{ccc}\n 0 &1 & 0 \\\\ 0 & 0 & 0 \\\\ 0 & 0 & 0 \n\\end{array}\\right),\\quad \nD(M)=\\left(\\begin{array}{ccc}\n 0 &0 & 1 \\\\ 0 & 0 & 0 \\\\ 0 & 0 & 0 \n\\end{array}\\right).\n\\label{ab}\n\\end{equation} \nThe expression for a generic element of the Heisenberg--Weyl group\n$H_3$ coming from this representation is: \n\\begin{equation} \n D(T)= \\exp\\{m D(M)\\}\\exp\\{a_- D(A_-)\\}\n\\exp\\{a_+ D(A_+)\\} =\\left(\\begin{array}{ccc}\n 1 &a_- & m + a_- a_+ \\\\ 0 & 1 & a_+ \\\\ 0 & 0 & 1 \n\\end{array}\\right) ,\n\\label{ac}\n\\end{equation} \nand the group law for the coordinates $m$, $a_-$ and $a_+$ is\nobtained by means of matrix multiplication $D({T}'')=D({T}')\\cdot\nD({T})$:\n\\begin{equation} \n m''=m+m' \n- a_- a'_+ ,\\quad \n a''_+=a'_+ + a_+ ,\\quad \n a''_-=a'_- + a_- .\n\\label{ad}\n\\end{equation} \n \n\n\n\n\n\n\n\nHeisenberg--Weyl Lie bialgebras 
$(h_3,\\delta)$ will be defined by the\ncocommutator\n$\\delta:h_3\\to h_3\\otimes h_3$ such that\n\n\\noindent \ni) $\\delta$ is a 1--cocycle, i.e.,\n\\begin{equation}\n\\delta([X,Y])=[\\delta(X),\\, 1\\otimes Y+ Y\\otimes 1] + \n[1\\otimes X+ X\\otimes 1,\\, \\delta(Y)], \\quad \\forall \\,X,Y\\in\nh_3. \\label{ba}\n\\end{equation}\n\n\\noindent \nii) The dual map $\\delta^\\ast:h_3^\\ast\\otimes h_3^\\ast \\to\nh_3^\\ast$ is a Lie bracket on $h_3^\\ast$.\n \n\nFrom ii), we consider an arbitrary skewsymmetric cocommutator:\n\\begin{eqnarray}\n&&\\delta(A_-)=a_1\\, A_-\\wedge A_+ + \na_2\\, A_-\\wedge M + a_3\\, A_+ \\wedge\nM,\\cr \n&&\\delta(A_+)=b_1\\, A_-\\wedge A_+ + b_2\\, A_-\\wedge M + b_3\\, A_+\n\\wedge M,\\cr \n&&\\delta(M)=c_1\\, A_-\\wedge A_+ + c_2\\, A_-\\wedge M + c_3\\, A_+\n\\wedge M,\n\\label{bc}\n\\end{eqnarray}\nwhere $a_i,b_i,c_i$ ($i=1,2,3$) are real parameters.\nIf we impose on (\\ref{bc}) the cocycle condition (\\ref{ba}) we obtain:\n\\begin{equation}\nc_1=0,\\qquad c_2=b_1,\\qquad c_3=-a_1 .\n\\label{bd}\n\\end{equation}\nSince the dual $h^\\ast_3$ with generators $\\{m , a_-,a_+\\}$ must be\na Lie algebra, the Jacobi identity on the bracket $\\delta^\\ast$ gives\nrise to two additional conditions:\n\\begin{equation}\na_1(b_3 - a_2) - 2 b_1 a_3 = 0,\\qquad \nb_1(a_2 - b_3) - 2 a_1 b_2 = 0.\n\\label{bf}\n\\end{equation}\nHence, the most general Heisenberg--Weyl bialgebra has commutation\nrelations (\\ref{aa}) and cocommutators\n\\begin{eqnarray}\n&&\\delta(A_-)=a_1\\, A_-\\wedge A_+ \n+ a_2\\, A_-\\wedge M + a_3\\, A_+ \\wedge\nM,\\cr \n&&\\delta(A_+)=b_1\\, A_-\\wedge A_+ + b_2\\, A_-\\wedge M + b_3\\, A_+\n\\wedge M,\\cr \n&&\\delta(M)= b_1\\, A_-\\wedge M - a_1\\, A_+\n\\wedge M,\n\\label{bg}\n\\end{eqnarray}\nwith the six parameters $a_i$, $b_i$ verifying (\\ref{bf}).\n\nIt is also known that the dual Lie bracket\n$\\delta^\\ast$ gives the linear part of the (unique) Poisson--Lie\nstructure on the group linked to\n$\\delta$ \\cite{Dr}. 
Therefore, starting from the\nclassification of Poisson--Lie Heisenberg groups given in \\cite{Kuper}\nand taking into account the change of local coordinates on the\nHeisenberg group\n\\begin{equation}\nx_1=a_-,\\qquad x_2=a_+,\\qquad x_3=m + a_-\\,a_+,\n\\label{bhb}\n\\end{equation}\nit is straightforward to prove that the full Poisson--Lie\nbracket associated to $\\delta$ reads\n\\begin{eqnarray}\n&&\\{a_-,a_+\\}=a_1\\, a_- + b_1\\, a_+,\\cr\n&&\\{a_-,m\\}=\na_2\\, a_- + b_2\\, a_+ + b_1\\, m - \\frac {a_1}{2}\\, a_-^2,\\cr\n&&\\{a_+,m\\}=a_3\\, a_- + b_3 \\,a_+ - a_1\\, m + \\frac {b_1}{2}\\, a_+^2 .\n\\label{bh}\n\\end{eqnarray}\nIn other words, if (\\ref{ad}) is read as a coproduct on Fun($H_3$), it\nis easy to check that the group law turns out to be a Poisson algebra\nhomomorphism with respect to (\\ref{bh}).\n\nFinally, let us find out for which\nvalues of the parameters we have coboundary Lie bialgebras. So, we\ninvestigate the most general skewsymmetric element $r$ of $h_3\\otimes\nh_3$ such that \n\\begin{equation}\n\\delta(X):=[1\\otimes X + X \\otimes 1,\\, r],\\quad \nX\\in h_3, \\label{bi}\n\\end{equation}\ndefines a Lie\nbialgebra. This is equivalent to imposing the Schouten bracket\n$[[r,r]]$ to be a solution of the modified classical Yang--Baxter\nequation (YBE)\n\\begin{equation}\n[X\\otimes 1\\otimes 1 + 1\\otimes X\\otimes 1 +\n1\\otimes 1\\otimes X,[[r,r]]\\, ]=0, \\quad X\\in h_3.\n\\label{bj}\n\\end{equation}\nExplicitly, we consider three real-valued coefficients\n$\\xi,\\beta_+,\\beta_-$ and write:\n\\begin{equation} \n r= \\xi\\, A_+\\wedge A_- + \\beta_+ \\, A_+\\wedge M\n + \\beta_- A_- \\wedge M.\n\\label{bl}\n\\end{equation}\nThe Schouten bracket of this element is given by\n \\begin{equation} \n [[r,r]]= - \\xi^2 \\,M\\wedge A_+\\wedge A_-.\n\\label{bm}\n\\end{equation} \nThis bracket is found to fulfill automatically the modified classical\nYBE (\\ref{bj}). 
Therefore, (\\ref{bl}) is always a classical $r$-matrix.\nThe cocommutator (\\ref{bi}) derived from it reads\n\\begin{equation}\n\\delta(A_+)= - \\xi\\, A_+\\wedge M,\\qquad \n\\delta(A_-)= - \\xi\\, A_-\\wedge M,\\qquad \n\\delta(M)= 0 .\n\\label{bn}\n\\end{equation}\nThus, we conclude that there exists only one non-trivial coboundary\nHeisenberg--Weyl Lie bialgebra which is characterized by\n\\begin{equation}\na_1=a_3=b_1=b_2=0,\\qquad a_2=b_3=- \\xi.\n\\label{bo}\n\\end{equation}\nThe case $\\xi=0$\ngives rise to a solution of the classical YBE, but now the \ncocommutator vanishes.\n\n \n\n\nLet us go back to the four-parameter family of bialgebras given by\n(\\ref{bf}) and (\\ref{bg}). It is easy to check that equations (\\ref{bf})\nhave three disjoint types of solutions:\n\n\\noindent Type I$_+$: $a_1\\neq 0$, $b_2=-\\,a_3\\,b_1^2\/a_1^2$,\n$b_3=a_2+2\\,b_1\\,a_3\/a_1$ and\n$a_2,a_3,b_1$ arbitrary. \n\n\\noindent Type I$_-$: $a_1=0$, $b_1\\neq 0$, $a_3=0$, $a_2=b_3$ and\n$b_2,b_3$ arbitrary. \n\n\\noindent Type II: $a_1=0$, $b_1=0$ and $a_2,a_3,b_2,b_3$ arbitrary.\n\nSo, we have three (multiparametric) families of Lie bialgebras. To\nquantize them, we have to check that, within each family \\cite{osc}:\n\n\\noindent a) There exists some set $\\{H_i\\}$ of commuting\ngenerators of $g$ such that $\\delta(H_i)=0$ (these will be the\nprimitive generators after quantization).\n\n\\noindent b) For the remaining generators $X_j$, their\ncocommutator $\\delta(X_j)$ must only contain terms of the form\n$X\\wedge H$ (neither $X_l\\wedge X_m$ nor\n$H_n\\wedge H_p$ contributions are allowed).\n\nFinally, we have to take into account that two Lie bialgebra\nstructures of a Lie algebra $g$ are equivalent if there exists an\nautomorphism of $g$ that transforms one into the other. 
As we shall\nsee, some automorphisms of the Heisenberg algebra will help us to get\nbialgebras fulfilling conditions a) and b).\n\n\n\noindent $\bullet$ {\bf Type I$_+$:} This is a\nfamily of Lie bialgebras which has, for general values of the\nparameters, no primitive generator\n$\delta(H)=0$. However, if we define\n\begin{equation}\nA_+':=A_+ - \frac{b_1}{a_1}\,A_- + \left(\frac{b_1\,a_3}{a_1^2} \n+ \frac{a_2}{a_1} \right)\,M,\qquad a_1\neq 0,\n\label{qa}\n\end{equation}\nit is immediate to check that, in this new basis, the Type I$_+$\nbialgebras have the following cocommutator:\n\begin{eqnarray}\n&& \delta(A_-)=- a_1\, A_+'\wedge A_- + a_3\, A_+' \wedge\nM,\cr\n&& \delta(A_+')=0,\cr\n&& \delta(M)= - a_1\, A_+' \wedge M .\n\label{cb}\n\end{eqnarray}\nThe automorphism (\ref{qa}) has shown the parameters $b_1$ \nand $a_2$ to be superfluous. \n\nThe coproduct that quantizes the resultant biparametric family\n(\ref{cb}) can now be obtained: first, we see that this family of\nbialgebras verifies conditions a) and b) with\n$A_+'$ being the primitive generator (from now on, we shall write\n$A_+$ instead of $A_+'$). Following \cite{osc} we write the\nnon-vanishing cocommutators in (\ref{cb}) in the matrix form:\n\begin{equation}\n \delta\left(\begin{array}{c}\nA_- \\ M \n\end{array}\right)=\n\left(\begin{array}{cc}\n-a_1 A_+ & a_3 A_+ \\ 0 & -a_1 A_+\n\end{array}\right)\dot\wedge \left(\begin{array}{c}\nA_- \\ M \n\end{array}\right). 
\n\\label{cc}\n\\end{equation}\nIn this way, the coproduct for non-primitive generators will be\nformally given by:\n\\begin{equation} \n \\Delta\\left(\\begin{array}{c}\nA_- \\\\ M \n\\end{array}\\right)=\n\\left(\\begin{array}{cc}\n1 &0 \\\\ 0 & 1 \n\\end{array}\\right)\n\\dot\\otimes \\left(\\begin{array}{c}\nA_- \\\\ M \n\\end{array}\\right) +\n\\sigma\\left( \\exp\\left\\{\\left(\\begin{array}{cc}\n a_1 A_+ & -a_3 A_+ \\\\ 0 & a_1 A_+\n\\end{array}\\right)\\right\\}\\dot\\otimes \\left(\\begin{array}{c}\nA_- \\\\ M \n\\end{array}\\right)\\right) .\n\\label{cd}\n\\end{equation}\nBy computing explicitly the exponential, we find that\n\\begin{eqnarray}\n&&\\Delta(A_+)=1\\otimes A_+ + A_+ \\otimes 1,\\qquad \n \\Delta(M)=1\\otimes M +M \\otimes e^{a_1A_+},\\cr\n&&\\Delta(A_-)=1\\otimes A_- + A_- \\otimes e^{a_1A_+} -\na_3 M \\otimes A_+\\, e^{a_1A_+}. \n\\label{ed}\n\\end{eqnarray}\nThe next step is the search for deformed commutation rules compatible\nwith (\\ref{ed}). They turn out to be\n\\begin{equation} \n [A_-,A_+]=M,\\qquad \n[A_-,M]=\\frac {a_1}2 M^2 ,\\qquad \n[A_+,M]=0 .\n\\label{eg}\n\\end{equation} \nFinally, counit and antipode are deduced\n\\begin{equation}\n\\epsilon(X)=0,\\qquad X\\in\\{A_-,A_+,M\\},\n\\label{ee}\n\\end{equation}\n\\begin{eqnarray}\n&&\\gamma(A_+)=-A_+,\\qquad \\gamma(M)=-M \\,e^{-a_1A_+},\\cr \n&&\\gamma(A_-)=-A_- \\, e^{-a_1A_+} - a_3 M\\, A_+ \\, e^{-a_1A_+} , \n\\label{ef}\n\\end{eqnarray}\nand the Hopf algebra $U_{a_1,a_3}(h_3)$ that quantizes\nthe family of (non-coboundary) Heisenberg--Weyl bialgebras (\\ref{cb})\nis obtained.\n\nIt is remarkable that in this quantum deformation the parameter $a_3$\nis not involved in the deformed commutation rules. Recall that\n(\\ref{eg}) was firstly obtained in \\cite{HLR} starting from a quantum\nHeisenberg group and by\napplying a duality method (coproduct (\\ref{ed}) could not be\nfound). 
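The matrix exponential in (\ref{cd}) can also be cross-checked numerically: since $A_+$ is primitive, all entries of the matrix commute and $A_+$ may be treated as a scalar $x$. The matrix is upper triangular with equal diagonal entries, so its exponential is $e^{a_1 x}$ times the identity plus the nilpotent part, which reproduces the coefficients appearing in the coproduct (\ref{ed}). A quick check (parameter values chosen ad hoc; a plain Taylor-series exponential is used to keep the sketch self-contained):

```python
import numpy as np

def expm(X, terms=40):
    """Matrix exponential via a plain Taylor series (fine for small matrices)."""
    out, term = np.eye(len(X)), np.eye(len(X))
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

# Deformation parameters and the scalar standing in for A_+ are ad hoc values.
a1, a3, x = 0.4, -1.1, 0.9

E = expm(np.array([[a1 * x, -a3 * x],
                   [0.0,    a1 * x]]))

# Upper triangular with equal diagonal: exp = e^{a1 x} (I + nilpotent part).
expected = np.exp(a1 * x) * np.array([[1.0, -a3 * x],
                                      [0.0, 1.0]])
assert np.allclose(E, expected)
```

The off-diagonal entry $-a_3 x\, e^{a_1 x}$ is exactly the coefficient of the $M \otimes A_+\, e^{a_1 A_+}$ term in $\Delta(A_-)$.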
\n\nOn the other hand, a\nphysically suggestive observation comes from the fact that the\ngenerator\n$M$ is neither central nor primitive (recall the role that the\nnon-primitive mass generator of quantum extended Galilei algebra\nplays in one-dimensional magnon systems \\cite{magn}). The central\nelement\n$\\cal C$ is now\n\\begin{equation}\n{\\cal C}=M\\,e^{-a_1\\,A_+\/2}.\n\\label{eeg}\n\\end{equation}\nThis element labels the following differential realization of\n$U_{a_1,a_3}(h_3)$:\n\\begin{equation}\nA_+=x,\\qquad A_-=\\lambda\\,e^{a_1\\,x\/2}\\,\\partial_x,\\qquad\nM=\\lambda\\,e^{a_1\\,x\/2},\n\\label{eeh}\n\\end{equation}\nwhere $\\lambda$ is the eigenvalue of $\\cal C$. Note also that, by\nintroducing $\\cal C$ as a new generator instead of $M$, relations\n(\\ref{eg}) turn into \n\\begin{equation} \n [A_-,A_+]={\\cal C}\\,e^{a_1\\,A_+\/2},\\qquad \n[A_-,{\\cal C}]=0 ,\\qquad \n[A_+,{\\cal C}]=0 .\n\\label{eei}\n\\end{equation} \n\nFinally note that, if ${\\tilde A}$, ${\\tilde A}_+$ and ${\\tilde A}_-$\nare the generators of the non-standard quantum deformation $U_z\nsl(2,\\R)$\n\\cite{sl}, the quantum Heisenberg algebra $U_{a_1,0}(h_3)$ can be\nobtained as the contraction $\\varepsilon \\to 0$ defined by\n\\begin{equation}\nM=-\\varepsilon\\,{\\tilde A},\\qquad A_+={\\tilde A}_+, \n\\qquad A_-=\\varepsilon\\,{\\tilde A}_-,\\qquad a_1=2\\,z.\n\\label{eej}\n\\end{equation}\nAs it could be expected from the non-coboundary character of\n$U_{a_1,0}(h_3)$, the universal $R$-matrix of $U_z sl(2,\\R)$ diverges\nunder (\\ref{eej}).\n \n\n\\noindent $\\bullet$ {\\bf Type I$_-$:} After specializing the\ncorresponding parameters we find a three-parameter cocommutator also\nwith no primitive generators. 
However, the definition of $A_-'$ by\nmeans of the automorphism\n\begin{equation}\nA_-':=A_- - \frac{b_3}{b_1}\,M ,\qquad b_1\neq 0,\n\label{qb}\n\end{equation}\nimplies that this family of Lie bialgebras is given by\n\begin{eqnarray}\n&&\delta(A_-')= 0 ,\cr \n&&\delta(A_+)=b_1\, A_-'\wedge A_+ + b_2\, A_-'\wedge M ,\cr \n&&\delta(M)= b_1\, A_-'\wedge M .\n\label{bgh}\n\end{eqnarray}\nIn particular, the parameter $b_3$ has been reabsorbed, and\n(\ref{bgh}) can be quantized. Moreover, these Type I$_-$ structures are\nessentially the same as the Type I$_+$ ones (\ref{cb}), but with the\nroles of $A_-$ and\n$A_+$ reversed. Once again, another Heisenberg algebra\nautomorphism given by\n\begin{equation}\nA_+\to A_-,\qquad A_-\to A_+,\qquad M\to -M,\n\label{ca}\n\end{equation}\nwould make both types of bialgebras explicitly equivalent. Therefore,\nwe omit the explicit quantization leading to the algebra\n$U_{b_1,b_2}(h_3)$. \n\n\n\noindent $\bullet$ {\bf Type II:} If $a_1$ and $b_1$ vanish,\nthe cocommutator (\ref{bg}) reads:\n\begin{eqnarray}\n&& \delta(A_-)= a_2\, A_-\wedge M + a_3\, A_+ \wedge M,\cr\n&& \delta(A_+)= b_2\, A_-\wedge M + b_3\, A_+ \wedge M,\cr \n&& \delta(M)=0 .\n\label{eh}\n\end{eqnarray}\nIn this case, $M$ is the primitive generator and no extra\nmanipulation is needed in order to quantize this family of bialgebras.\nWe write (\ref{eh}) in matrix form:\n\begin{equation}\n \delta\left(\begin{array}{c}\nA_- \\ A_+ \n\end{array}\right)=\n\left(\begin{array}{cc}\n-a_2 M & -a_3 M \\ -b_2 M & -b_3 M\n\end{array}\right)\dot\wedge \left(\begin{array}{c}\nA_- \\ A_+ \n\end{array}\right), \n\label{ei}\n\end{equation}\nhence, the corresponding coproduct is given by:\n\begin{equation} \n \Delta\left(\begin{array}{c}\nA_- \\ A_+\n\end{array}\right)=\n\left(\begin{array}{cc}\n1 &0 \\ 0 & 1 \n\end{array}\right)\n\dot\otimes \left(\begin{array}{c}\nA_- \\ A_+ \n\end{array}\right) 
+\n\sigma\left( \exp\left\{\left(\begin{array}{cc}\na_2 M & a_3 M \\ b_2 M & b_3 M\n\end{array}\right)\right\}\dot\otimes \left(\begin{array}{c}\nA_- \\ A_+ \n\end{array}\right)\right) .\n\label{ej}\n\end{equation}\n\nAlthough the four parameters describing this quantum algebra are\narbitrary, in order to derive the commutation rules compatible with\n(\ref{ej}) it will suffice to write\n\begin{equation}\nE:=\exp\left\{\left(\begin{array}{cc}\na_2 M & a_3 M \\ b_2 M & b_3 M\n\end{array}\right)\right\}=\left(\begin{array}{cc}\nE_{11}(M) & E_{12}(M) \\ E_{21}(M) & E_{22}(M)\n\end{array}\right).\n\label{ejb}\n\end{equation}\nIn this way, the explicit quantum coproduct will be\n\begin{eqnarray} \n&& \Delta(M)=1\otimes M +M \otimes 1,\nonumber\\\n&& \Delta(A_-)=1\otimes A_- + A_- \otimes E_{11}(M) + A_+ \otimes\nE_{12}(M),\nonumber\\\n&& \Delta(A_+)=1\otimes A_+ + A_+ \otimes E_{22}(M) + A_- \otimes\nE_{21}(M).\n\label{ejc}\n\end{eqnarray}\nNow, by taking into account the following property\n\begin{equation}\nE_{11}(M)\,E_{22}(M)-E_{12}(M)\,E_{21}(M)=e^{(a_2+b_3)\,M}\n\end{equation}\nit is straightforward to prove that the four-parameter coproduct\n(\ref{ejc}) is an algebra homomorphism with respect to the deformed\ncommutation rules\n\begin{equation} \n [A_-,A_+]=\frac { e^{(a_2+b_3)\,M}-1}{a_2+b_3} ,\qquad \n[A_-,M]=0 ,\qquad \n[A_+,M]=0 .\n\label{en}\n\end{equation}\nDue to the preservation of $M$ as a central element, counit and antipode\nare easily deduced. These operations complete the construction of the\nmultiparametric quantum algebra\n$U_{a_2,a_3,b_2,b_3}(h_3)$. These Type II quantizations were studied\nin\n\cite{Lyak} with no reference to Lie bialgebra structures.\n\nThe well-known coboundary quantization is a particular subcase\nwith $a_3=b_2=0$ and $a_2=b_3=-\xi$. A universal $R$-matrix (which is\nnot a solution of the quantum YBE) for it was obtained\nin \cite{BCH}. 
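The determinant property used above is just the identity $\det(e^X) = e^{\,{\rm tr}\, X}$, with ${\rm tr}\, X = (a_2+b_3)\,M$ for the matrix in (\ref{ejb}). A quick numerical check (the Type II parameters and the eigenvalue $m$ of $M$ are arbitrary ad hoc values; a plain Taylor-series exponential keeps the sketch self-contained):

```python
import numpy as np

def expm(X, terms=40):
    """Matrix exponential via a plain Taylor series (fine for small matrices)."""
    out, term = np.eye(len(X)), np.eye(len(X))
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

# Type II parameters and an eigenvalue m of the central generator M: ad hoc.
a2, a3, b2, b3, m = 0.3, 0.7, -0.5, 0.2, 1.1

E = expm(np.array([[a2 * m, a3 * m],
                   [b2 * m, b3 * m]]))

# det(exp X) = exp(tr X) gives E11*E22 - E12*E21 = e^{(a2+b3) m}.
assert np.isclose(np.linalg.det(E), np.exp((a2 + b3) * m))
```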
\n\n\n\n\n\\bigskip\n\n\\noindent {\\bf Acknowledgements}\n\n\\medskip\n\nA.B. and F.J.H. are\npartially supported by DGICYT (Project PB94-1115) from the\nMinisterio de Educaci\\'on y Ciencia de Espa\\~na. \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\section{Introduction}\n\\label{sec:introduction}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\linewidth]{images\/teaser.png}\n \\caption{These chairs were generated by generative models fine-tuned by our method to produce chairs that will be comfortable for a given input body shape (shown sitting in each chair). Our method is general and can be applied to any latent-variable shape generative model. Here we show results from ShapeAssembly~\\cite{13} (red), IM-Net~\\cite{7} (green), and SP-GAN~\\cite{6} (blue).\n }\n \\label{fig:teaser}\n\\end{figure}\n\nWhy are 3D objects shaped the way they are? For many everyday objects, the answer is ``to allow people to interact with them'': tables support objects placed on top of them; doorknobs facilitate gripping and twisting; etc.\nTechnologies designed to assist in the creation of 3D objects should be aware of how people will interact with those objects.\n\nGenerative models of 3D shapes, particularly deep generative models, have received considerable recent attention in vision and graphics.\nSuch models can produce visually-plausible output geometries for a wide variety of object categories by mimicking patterns in a large training data.\nThis mimicry has limits, though: a visually-plausible object may fail to be functionally usable when a person tries to interact with it.\nConsider the experience of many children making paper airplanes for the first time---many designs which look more or less like airplanes nevertheless do not fly!\n\nWe argue that the field of 3D shape generative modeling should move toward models that are \\emph{explicitly} trained to produce outputs that are \\emph{functional} when interacted with.\nIn 
this paper, we take a first step in this direction: we train generative models of 3D chairs that accommodate different body shapes and sitting postures.\nChairs are a good starting point for this investigation: they are ubiquitous in the real world, virtual 3D models of them are widely available, and the activity they afford (sitting) is relatively simple (compared to other activities that require dexterous manipulation).\n\nWe experiment with two types of body-aware chair generative models, each of which enforces a desirable property of its output chairs.\nThe first takes a body shape and produces chairs which will be comfortable for a person with that body shape; the second takes a sitting pose and produces chairs which accommodate that sitting pose.\nTo train these models, we define a ``sitting pose matching'' metric and a novel ``sitting comfort'' metric.\nCalculating these metrics requires solving an optimization problem to sit the body into the chair, which is too computationally expensive to be used as a loss function for training our generative models.\nInstead, we train neural networks to approximate the behavior of the pose matching and comfort metrics.\n\nWe apply our approach to training three types of chair generative models: a part-based generator, a point cloud generator, and an implicit surface generator.\nIn all cases, experiments show that our generative models successfully adapt their output chair shapes to the input body specification while still producing visually plausible results (see Figure~\\ref{fig:teaser}).\nIn summary, our contributions are:\n\\begin{packed_itemize}\n \\item A novel optimization procedure for chair sitting poses, as well as a physically-based metric of sitting comfort.\n \\item Neural network proxy models which accurately approximate computationally-expensive functions that require optimization of bodies into sitting postures.\n \\item A general approach for making any latent variable generative shape model 
body-aware.\n\\end{packed_itemize}\n\\section{Related Work}\n\\label{sec:related_work}\n\n\\parahead{Deep 3D Shape Generative Models}\nRecent years have seen rapid advancement in deep neural networks for data-driven 3D shape generation.\nModels have been proposed that synthesize shapes represented as volumetric occupancy grids~\\cite{1,2,3}, point clouds~\\cite{4,5,6}, implicit surfaces~\\cite{7,8,9}, and others.\nThere are also structure-aware models which generate assemblies of primitives or other part geometries~\\cite{10,11,12,13,14,15}.\nWe build upon three of these prior generative models: one generates objects as cuboid assemblies~\\cite{13}, one as point clouds~\\cite{6}, and one as implicit surfaces~\\cite{7}.\nWe use the same methodology to make each type of generative model body-aware.\n\nOur work is also related to recent work that fine-tunes an implicit shape generative model such that its outputs are physically connected and stable~\\cite{Mezghanni_2021_CVPR}.\nWe extend this idea to human body awareness.\nThis prior system trains a neural network to serve as a stability loss function, instead of a more expensive physical simulation; we similarly train neural networks to serve as loss functions, instead of running expensive optimization.\nOur approach to fine-tuning generative models is also based on a learned warping of the latent space.\nOur warping function must be body shape- and\/or pose-conditional, though, which complicates the problem.\n\n\\parahead{3D Shape Functionality Analysis}\nThere is a considerable body of prior work on analyzing the \\emph{functionality} of 3D objects, where ``functionality'' can be defined as the geometry of an object plus its interaction with other entities in a specific context~\\cite{16}.\nPrior work considering people as the `other entities' is related to our work.\nSceneGrok learns a model which takes a 3D indoor scene as input and outputs a probability distribution over what human activities can be performed 
where~\\cite{17}.\nFollow-up work generated `interaction snapshots': localized arrangements of objects plus a person posed so as to be using those objects~\\cite{18}.\nIn this work, scenes are generated by retrieving existing objects from a database; we seek to synthesize the geometry of objects themselves.\nPose2Shape takes an object's geometry as input and predicts a pose which a person might assume while interacting with that object~\\cite{19}.\nWe focus on the inverse problem: taking a body shape or pose as input and producing an object which accommodates that body.\nFinally, we develop a sitting comfort metric based on distributions of pressure exerted on the body.\nA conceptually-related measure of sitting comfort has also been explored in prior work which analyzes human sitting behavior in videos~\\cite{20}.\n\n\\parahead{Scene-Aware Body Generation}\nThere has been recent interest in the problem of generating human poses that fit a particular 3D scene.\nPOSA~\\cite{Hassan:CVPR:2021} uses a body-centric representation to place a posed body at static locations within 3D scenes, while SAMP~\\cite{hassan2021stochastic} synthesizes plausible dynamic motions within 3D scenes.\nOur goal is to solve the inverse problem: given human body poses, synthesize plausible 3D shapes.\n\\section{Overview}\n\\label{sec:overview}\n\nOur objective is to train a 3D chair generative model that alters its output to accommodate an input human body shape or pose.\nWhen given a body shape, it should produce output chairs in which a person with that body shape could comfortably sit.\nWhen given a body pose, it should produce output chairs in which a person sitting would naturally assume that pose (or one similar to it).\nWe tackle the problem of creating such a generative model in four stages (see Figure~\\ref{fig:overview}):\n\n\\parahead{Optimizing for sitting poses}\nTo train a generative model to produce chair shapes that accommodate specific bodies, we need a way to assess how well a given 
chair shape accommodates a given body. \nA prerequisite for this is the ability to simulate how a given body will sit in a given chair.\nThus, the first stage of our system is a physically-based optimization procedure which simulates a body settling into a chair.\nSection~\\ref{sec:pose} describes this procedure in more detail.\n\n\\parahead{Defining functionality metrics}\nGiven optimized sitting poses, we next define metrics which assess how well a chair meets our functional goals: being comfortable for a given body shape or supporting a given sitting pose.\nOur sitting comfort metric is based on approximating the distribution of pressures exerted by the chair on the body.\nSection~\\ref{sec:obj} describes these metrics in more detail.\n\n\\parahead{Training loss proxy networks}\nGiven a body and a generated chair, evaluating the functionality metrics from the previous stage requires first optimizing for the sitting pose of the body in that chair.\nThis optimization makes our metrics prohibitively expensive for use as loss functions for training generative models.\nInstead, we train efficient neural networks to approximate the behavior of these metrics.\nSee Section~\\ref{sec:proxy} for more details.\n\n\\parahead{Training generative models}\nFinally, we use the neural loss proxies to train body-aware 3D shape generative models.\nWe take a generative model pre-trained on a dataset of chair shapes, and adapt it to be body-aware by learning a body-conditional nonlinear warping of the pre-trained latent space.\nSection~\\ref{sec:generative} describes this approach in more detail.\n\\section{Sitting Pose Optimization}\n\\label{sec:pose}\n\nOur goal is to define a procedure which takes a chair $\\ensuremath{\\mathcal{C}}$ and a human body $\\ensuremath{\\mathcal{B}}$ and outputs a pose that the body would assume when sitting in the chair.\nPeople can sit in the same chair in different ways and can shift continuously between different sitting postures.\nIn our work, we 
make the simplifying assumption that each body takes on a single, `relaxed' sitting pose in a given chair.\nWe define this pose as the solution to an energy minimization problem.\nThe energy function comprises terms that model physical constraints (e.g. gravity, collision), anatomical constraints (e.g. joint angle limits), and characteristics of typical `relaxed' sitting poses (e.g. bilateral symmetry, large body\/chair contact area).\nThis formulation is related to that of prior work in estimating sitting poses from images~\\cite{21}.\n\n\\parahead{Human body model}\nIn our experiments, we use the SMPL human body model, a realistic 3D model of the human body based on skinning and blend shapes learned from thousands of 3D body scans~\\cite{22}.\nIn SMPL, a body shape is specified by 16 parameters; a pose is specified by a global rigid translation, global rigid rotation, and one axis-angle rotation for each of 21 skeletal bones (a total of 69 parameters).\nThese pose parameters are our variables of optimization.\n\n\\parahead{Initialization}\nWe initialize the optimization by placing the body in a neutral sitting position (selected from the AMASS dataset~\\cite{26}) suspended over the chair.\nWe set the pose's initial global rigid translation to $(0,\\ y_{\\text{max}} + 0.2,\\ 0.5 (z_{\\text{min}} + z_{\\text{max}}))$, where $y_{\\text{max}}$, $z_{\\text{min}}$, and $z_{\\text{max}}$ are parameters of the chair's bounding box (in a y-up coordinate system).\n\n\\parahead{Gravitational energy}\nFor the body to come to rest in the chair, it must minimize its gravitational potential energy.\nTo simplify the problem of modeling this energy, we represent the body as a connected assembly of rigid bodies, with mass concentrated at each joint $\\ensuremath{\\mathbf{j}}$ in the body's skeleton.\nWe define the mass of each joint via a Monte Carlo point sampling of the body's interior volume: $m(\\ensuremath{\\mathbf{j}})$ is the fraction of sampled points which are 
closest to $\\ensuremath{\\mathbf{j}}$; in the energy below, this fraction is multiplied by a constant overall body mass $M$ (we use $M=1$ in our experiments).\nThe gravitational energy term is then\n\\begin{equation*}\n    E_{\\text{grav}}(\\ensuremath{\\mathcal{B}}, \\ensuremath{\\mathcal{C}}) = \\sum_{\\mathbf{j} \\in \\ensuremath{\\mathbf{J}}(\\ensuremath{\\mathcal{B}})} \\mathbf{g} \\cdot (j_y - y_{\\text{min}}(\\ensuremath{\\mathcal{C}})) \\cdot m(\\mathbf{j}) \\cdot M\n\\end{equation*}\nwhere $\\ensuremath{\\mathbf{J}}(\\ensuremath{\\mathcal{B}})$ is the set of all body joints, $j_y$ is the height of joint $\\mathbf{j}$, $\\mathbf{g}$ is the gravity vector, and $y_{\\text{min}}(\\ensuremath{\\mathcal{C}})$ is the bottom of the chair's bounding box, i.e. the ground.\n\n\\parahead{Penetration energies}\nWe model contact of the body with the chair by minimizing a body\/chair penetration energy:\n\\begin{equation*}\n    E_{\\text{pen}}(\\ensuremath{\\mathcal{B}},\\ensuremath{\\mathcal{C}}) = \\sum_{\\mathbf{v} \\in \\mathbf{V}(\\ensuremath{\\mathcal{B}})} |\\text{min}(d(\\mathbf{v}, \\ensuremath{\\mathcal{C}}),0)|\n\\end{equation*}\nwhere $\\mathbf{V}(\\ensuremath{\\mathcal{B}})$ are the vertices of the body mesh and $d(\\mathbf{x}, \\ensuremath{\\mathcal{C}})$ is the signed distance of a point $\\mathbf{x}$ to the chair $\\ensuremath{\\mathcal{C}}$.\nWe also include a term $E_{\\text{self}}$ to penalize body self-penetrations, using an approach based on detecting colliding triangles using a bounding volume hierarchy and then evaluating local signed distance functions for each pair of colliding faces~\\cite{21,24}.\n\n\\begin{figure*}[t!]\n    \\centering\n    \\setlength{\\tabcolsep}{1pt}\n    \\begin{tabular}{cccccc}\n    Initial pose & No $E_{\\text{sym}}$ & No $E_{\\text{feas}}$ or $E_{\\text{spine}}$ & No $E_{\\text{zgrav}}$ & No $E_{\\text{sit}}$ & All terms\n    \\\\\n    \\includegraphics[width=0.16\\linewidth]{images\/pose\/shapeAssembly\/init.png}&\n    \\includegraphics[width=0.16\\linewidth]{images\/pose\/shapeAssembly\/symmetry.png} &\n
\\includegraphics[width=0.16\\linewidth]{images\/pose\/shapeAssembly\/valid.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/pose\/shapeAssembly\/zgravity.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/pose\/shapeAssembly\/sitting.png}&\n \\includegraphics[width=0.16\\linewidth]{images\/pose\/shapeAssembly\/all.png}\n \\end{tabular}\n \\vspace{-1em}\n \\caption{Demonstrating the effect of removing different energy terms from our sitting pose optimization objective.}\n \\label{fig:sitting_opt_ablation}\n\\end{figure*}\n\n\\parahead{Pose feasibility energies}\nUnconstrained optimization of SMPL joint angles can produce physically infeasible poses, as most joints in the human body have a limited range of motion.\nThus, we add an energy term which penalizes infeasible poses.\nWe adapt PosePrior, a binary classifier for determining whether a joint angle for a given bone in the SMPL skeleton is feasible~\\cite{25}, by converting its outputs into a differentiable energy function.\nFor each bone in the body ($\\ensuremath{\\mathbf{b}} \\in \\ensuremath{\\mathbf{B}}(\\ensuremath{\\mathcal{B}})$), we uniformly sample its space of rotations $(\\phi_\\ensuremath{\\mathbf{b}}, \\theta_\\ensuremath{\\mathbf{b}})$ and evaluate the classifier at each point, producing a binary validity grid $V_\\ensuremath{\\mathbf{b}}$.\nWe then use a distance transform to produce a discrete field of signed distances to the boundary of the valid region $D_\\ensuremath{\\mathbf{b}}$.\nThe pose feasibility energy is then\n\\begin{equation*}\n E_{\\text{feas}}(\\ensuremath{\\mathcal{B}}) = \\sum_{\\ensuremath{\\mathbf{b}} \\in \\ensuremath{\\mathbf{B}}(\\ensuremath{\\mathcal{B}})} \\text{max} (\\text{bilerp}(D_\\ensuremath{\\mathbf{b}}, \\phi_\\ensuremath{\\mathbf{b}}, \\theta_\\ensuremath{\\mathbf{b}}) , 0)\n\\end{equation*}\n\nAs PosePrior does not include classifiers for spine joints, we use an additional energy term to penalize spine bending that deviates from that of the 
initial sitting pose:\n\\begin{equation*}\n    E_{\\text{spine}}(\\ensuremath{\\mathcal{B}}) = \\sum_{\\ensuremath{\\mathbf{b}} \\in \\ensuremath{\\mathbf{B}}^{\\text{spine}}(\\ensuremath{\\mathcal{B}})} || \\ensuremath{\\mathbf{r}}_\\ensuremath{\\mathbf{b}} - \\ensuremath{\\mathbf{r}}^0_\\ensuremath{\\mathbf{b}} ||_1\n\\end{equation*}\nwhere $\\ensuremath{\\mathbf{r}}_\\ensuremath{\\mathbf{b}}$ is the axis-angle rotation vector for bone $\\ensuremath{\\mathbf{b}}$ and $\\ensuremath{\\mathbf{r}}^0_\\ensuremath{\\mathbf{b}}$ is its value in the initial sitting pose.\n\nFigure~\\ref{fig:sitting_opt_ablation} illustrates the effect of removing these terms from the pose optimization.\n\n\\parahead{Sitting energies}\nOptimizing the energy terms defined thus far will result in the body settling under gravity into a physically-feasible pose, but this pose may not resemble a sitting posture (e.g. the body may slide off the chair and onto the ground).\nThus, we introduce terms which encourage the body to settle into a sitting pose.\n\nThe first term helps the body settle into a pose in which its back makes contact with the back of the chair (if one exists).\nTo achieve this goal, we introduce a gravitational energy term that acts along the z dimension (i.e. 
the front-facing direction of the chair), pulling the body toward the backmost extent of the chair:\n\\begin{equation*}\n    E_{\\text{zgrav}}(\\ensuremath{\\mathcal{B}}, \\ensuremath{\\mathcal{C}}) = \\sum_{\\ensuremath{\\mathbf{j}} \\in \\ensuremath{\\mathbf{J}}(\\ensuremath{\\mathcal{B}})} \\mathbf{g}_z \\cdot (j_z - z_{\\text{min}}(\\ensuremath{\\mathcal{C}})) \\cdot m(\\mathbf{j}) \\cdot M\n\\end{equation*}\nwhere $\\mathbf{g}_z$ is the gravity vector $\\mathbf{g}$ rotated to be parallel to the z-axis.\nFigure~\\ref{fig:sitting_opt_ablation} shows the effect of omitting this term.\n\nIn addition, we include an energy term that encourages maximizing the contact area between the chair and the body's back and glutes.\nIf we let $\\mathbf{V}^{\\text{sit}}(\\ensuremath{\\mathcal{B}})$ denote the set of body mesh vertices located on the back or glutes regions, then:\n\\begin{equation*}\n    E_{\\text{sit}}(\\ensuremath{\\mathcal{B}},\\ensuremath{\\mathcal{C}}) = \\frac{1}{|\\mathbf{V}^{\\text{sit}}(\\ensuremath{\\mathcal{B}})|} \\sum_{\\mathbf{v} \\in \\mathbf{V}^{\\text{sit}}(\\ensuremath{\\mathcal{B}})} \\max(\\text{d}(\\mathbf{v}, \\ensuremath{\\mathcal{C}}) - \\tau, 0)\n\\end{equation*}\nwhere $\\tau$ is the distance threshold at which a vertex is considered in contact with the chair (we use $\\tau = 0.001$).\nFigure~\\ref{fig:sitting_opt_ablation} illustrates the effect of removing this energy term.\n\n\\parahead{Bilateral symmetry energy}\nAs most human bodies are bilaterally symmetric, people typically assume bilaterally symmetric relaxed sitting poses.\nThus, our final energy term encourages the optimization to find such symmetric poses.\nLet $\\ensuremath{\\mathbf{B}}^{\\text{sym}}(\\ensuremath{\\mathcal{B}})$ be the set of all symmetric pairs of bones in the body, $\\mathbf{R}_{yz}$ be the reflection matrix about the $yz$ plane, $\\mathbf{P}_{yz}$ be the projection matrix onto the $yz$ plane, and $\\mathbf{P}_{x}$ be the projection matrix onto the x axis. 
Then:\n\\begin{align*}\n    E_{\\text{sym}}(\\ensuremath{\\mathcal{B}}) = \\sum_{(\\ensuremath{\\mathbf{b}}_i,\\ensuremath{\\mathbf{b}}_j) \\in \\ensuremath{\\mathbf{B}}^{\\text{sym}}(\\ensuremath{\\mathcal{B}})} ||\\ensuremath{\\mathbf{r}}_{\\ensuremath{\\mathbf{b}}_i} - \\mathbf{R}_{yz} \\ensuremath{\\mathbf{r}}_{\\ensuremath{\\mathbf{b}}_j}||_1\\\\\n    + \\sum_{\\ensuremath{\\mathbf{b}} \\in \\ensuremath{\\mathbf{B}}(\\ensuremath{\\mathcal{B}}) \\setminus \\ensuremath{\\mathbf{B}}^{\\text{sym}}(\\ensuremath{\\mathcal{B}})} || \\mathbf{P}_{yz} \\ensuremath{\\mathbf{r}}_\\ensuremath{\\mathbf{b}} ||_1\n    + ||\\mathbf{P}_{yz} \\ensuremath{\\mathbf{r}}_{\\text{glob}}||_1 + ||\\mathbf{P}_{x} \\mathbf{t}_{\\text{glob}}||_1\n\\end{align*}\nThe first term encourages each pair of bones with symmetric twins to have the same rotation, up to reflectional symmetry.\nThe second term encourages all other bones not to rotate out of the sagittal ($yz$) plane.\nThe third term encourages the body's global rotation to keep it within the $yz$ plane, and the fourth term encourages the body's global translation to do the same.\n\n\\parahead{Minimizing the total energy}\nThe total energy that we optimize is a weighted sum of all the energy terms defined above, which we minimize using the Adam optimizer~\\cite{Adam}:\n\\begin{align*}\n    &\\alpha_{\\text{grav}} E_{\\text{grav}} + \\alpha_{\\text{pen}} E_{\\text{pen}} + \\alpha_{\\text{self}} E_{\\text{self}} + \\alpha_{\\text{feas}} E_{\\text{feas}} + \\\\\n    &\\alpha_{\\text{spine}} E_{\\text{spine}} + \\alpha_{\\text{zgrav}} E_{\\text{zgrav}} + \\alpha_{\\text{sit}} E_{\\text{sit}} + \\alpha_{\\text{sym}} E_{\\text{sym}} \n\\end{align*}\nWe determine the weights of these terms empirically.\nIn our experiments: $\\alpha_{\\text{grav}} = 19.6,\\ \\alpha_{\\text{self}} = 10,\\ \\alpha_{\\text{feas}} = 0.1,\\ \\alpha_{\\text{spine}} = 1,\\ \\alpha_{\\text{zgrav}} = 9.8,\\ \\alpha_{\\text{sym}} = 2.5$.\n$\\alpha_{\\text{pen}}$ and $\\alpha_{\\text{sit}}$ vary by the shape 
representation used for chairs due to differences in the calculation of point-to-chair signed distances $d(\\cdot, \\ensuremath{\\mathcal{C}})$.\nFor ShapeAssembly chairs, which use analytical point-to-cuboid signed distances, $\\alpha_{\\text{pen}} = 1$ and $\\alpha_{\\text{sit}} = 17$.\nFor IM-Net chairs, which use occupancy probability rather than a true signed distance, $\\alpha_{\\text{pen}} = 1.3$ and $\\alpha_{\\text{sit}} = 30$.\nFor SP-GAN chairs, which we convert to meshes using moving least squares~\\cite{DeepMLS} and then to a discrete signed distance field via a distance transform, $\\alpha_{\\text{pen}} = 1.3 \\cdot 10^{-2}$ and $\\alpha_{\\text{sit}} = 10^{-2}$.\n\n\\parahead{Running time}\nOptimizing one sitting pose using this procedure takes upwards of a minute, on average, using our PyTorch implementation on an Intel i7-7800X machine with 32GB RAM and an NVIDIA GeForce RTX 2080Ti GPU.\nGPU parallelism can accelerate the process (e.g. about 30 minutes for a batch of 200 chairs).\n\\section{Chair Functionality Metrics}\n\\label{sec:obj}\n\nNow that we have defined a procedure for sitting a body on a chair, we can ask how well these sitting postures meet the functional goals we are interested in training generative models to satisfy: being comfortable for a given body shape or supporting a given sitting pose.\nIn this section, we define the metrics we use to quantify how well a given body $\\ensuremath{\\mathcal{B}}$'s sitting pose satisfies these goals for a given chair $\\ensuremath{\\mathcal{C}}$.\n\n\\subsection{Comfort Loss}\n\\label{sec:comfort_metric}\n\nWe quantify the comfort of a body sitting in a chair by measuring the pressure the chair exerts on the body.\nTo compute these pressures, we need the equilibrium contact forces between the chair and the body; this requires solving for all forces acting upon the body when it is in static equilibrium.\n\n\\parahead{Solving for equilibrium forces}\nGiven a SMPL body mesh (simplified to 200 faces 
for computational efficiency), we discretize its interior volume into a tetrahedral mesh using fTetWild~\\cite{fTetWild}.\nSince our optimization has already settled the body into the chair, we can assume that very little deformation of the body is required to achieve static equilibrium.\nThus, for simplicity, we treat the interior volume mesh as a connected assembly of rigid, constant-density tetrahedra.\nWe then compute equilibrium forces at each tet vertex by solving a system of linear equations with the following constraints:\n\\begin{packed_itemize}\n    \\item For each tetrahedron, the sum of the forces on its adjacent vertices plus the gravitational force acting upon it should be zero.\n    \\item For each tetrahedron, the net torque of the forces on its vertices, taken about its center of mass, should be zero.\n    \\item The sum of forces for all vertices in contact with the chair plus the gravitational force acting on the entire body should be zero.\n    \\item The net torque of the forces on vertices in contact with the chair, taken about the body's center of mass, should be zero.\n\\vspace{-\\topsep}\n\\end{packed_itemize}\nThe first two sets of constraints enforce a static equilibrium; the second two sets of constraints ensure that the equilibrium found is consistent with physical contact between the body and the chair.\nIn general, this system is over-determined (i.e. some small non-rigid deformation is necessary for equilibrium), so we solve it in a least-squares sense.\n\n\\parahead{Computing pressure}\nGiven these forces, we compute the total pressure on the body as the sum of pressures on each triangular face $f$ of the body mesh in contact with the chair (i.e. 
the sum of normal forces divided by area):\n\\begin{equation*}\n \\mathcal{L}_\\text{comfort}(\\ensuremath{\\mathcal{C}},\\ensuremath{\\mathcal{B}}) = \\sum_{f \\in \\ensuremath{\\mathcal{B}} \\cap \\ensuremath{\\mathcal{C}}} \\frac{\\mathbf{F_v}(f) \\cdot \\mathbf{\\hat{n}}(f)}{A(f)}\n\\end{equation*}\nwhere $\\mathbf{F_v}(f)$ is the sum of face $f$'s vertex forces, $\\mathbf{\\hat{n}}(f)$ is $f$'s surface normal, and $A(f)$ is $f$'s area.\nFigure~\\ref{fig:contact_viz} visualizes how the average pressures increase as $\\mathcal{L}_\\text{comfort}$ increases.\n\n\\begin{figure}[t!]\n \\centering\n \\setlength{\\tabcolsep}{1pt}\n \\begin{tabular}{cccc}\n \\includegraphics[width=0.24\\linewidth]{images\/objective\/comf_x_chairs\/0.png} &\n \\includegraphics[width=0.24\\linewidth]{images\/objective\/comf_x_chairs\/1.png} &\n \\includegraphics[width=0.24\\linewidth]{images\/objective\/comf_x_chairs\/2.png} &\n \\includegraphics[width=0.24\\linewidth]{images\/objective\/comf_x_chairs\/3.png}\n \\end{tabular}\n \\vspace{-1em}\n \\caption{Visualizing the relationship between average pressure and comfort loss $\\mathcal{L}_\\text{comfort}$.\n We compute $\\mathcal{L}_\\text{comfort}$ for a large set of chairs paired with random body shapes and split the chairs into four bins based on these losses (left-to-right).\n In each image, we color each vertex by its average pressure across all bodies in that bin.\n }\n \\label{fig:contact_viz}\n\\end{figure}\n\n\\subsection{Pose Matching Loss}\n\nTo measure how similar a sitting pose $S$ is to a desired target pose $T$, we simply use the sum of Euclidean distances between corresponding joints in the two poses:\n\\begin{equation*}\n \\mathcal{L}_\\text{pose}(\\ensuremath{\\mathcal{B}}_S, \\ensuremath{\\mathcal{B}}_T) = \\sum_{i=1}^{|\\ensuremath{\\mathbf{J}}(\\ensuremath{\\mathcal{B}}_S)|} || \\ensuremath{\\mathbf{J}}(\\ensuremath{\\mathcal{B}}_S)_i - \\ensuremath{\\mathbf{J}}(\\ensuremath{\\mathcal{B}}_T)_i 
||_2\n\\end{equation*}\nTo make this metric invariant to global rigid translation, we remove the body root joint from the computation.\nWe also remove the hands and feet joints, as our sitting optimization has no energy terms governing their position.\n\\section{Loss Proxy Networks}\n\\label{sec:proxy}\n\nEvaluating the functionality metrics defined above requires running expensive optimization to find a body's sitting pose in a chair.\nIt is not feasible to run this optimization on every iteration of training a chair generative model.\nInstead, we train neural networks which approximate the behavior of the comfort and target pose loss functions.\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=\\linewidth]{images\/proxies.pdf}\n \\vspace{-2em}\n \\caption{Loss proxy network architectures. We use neural networks to approximate the two functionality losses we have introduced. Left: the target pose proxy network takes a merged point cloud of a chair and a target pose and passes it through a PointNet++ backbone (three set abstraction layers and a linear layer). 
Right: the comfort loss proxy network takes an 18-dimensional SMPL body shape vector which conditions the same PointNet++ backbone through Feature-wise Linear Modulation (FiLM).}\n    \\label{fig:networks}\n\\end{figure*}\n\n\\subsection{Network Architectures}\n\nFigure~\\ref{fig:networks} shows the architectures of these proxy networks.\nBoth are based on PointNet++~\\cite{28}: a point-sampled chair is passed through three PointNet++ set abstraction layers, after which the features are flattened and fed to a linear layer to produce the output loss value.\nThe networks differ in how they incorporate body conditioning information.\nThe pose loss network takes an additional point cloud depicting a body in the desired pose; the network should predict how close this pose will be to the pose the body would take when seated in the chair.\nThis point cloud is merged with the chair point cloud; each point is given an extra one-hot dimension indicating whether it is a chair or body point.\nThe comfort loss network takes as input only a body shape, specified as an 18-dimensional SMPL body shape feature vector $\\mathbf{f}_\\text{shape}$, the concatenation of the 16 body parameters as well as a one-hot encoding of the sex.\nThe network is conditioned on this vector via Feature-wise Linear Modulation (FiLM)~\\cite{29}, i.e. 
an MLP takes $\\mathbf{f}_\\text{shape}$ as input and outputs point-wise scale and shift parameters for each set abstraction layer.\n\n\\subsection{Training}\n\nGiven a dataset of chair shapes, we use the optimization procedure from Section~\\ref{sec:pose} to optimize sitting poses for 100 different body shapes per chair, where body shape vectors $\\mathbf{f}_\\text{shape}$ are randomly sampled from a multivariate normal distribution fit to the AMASS dataset~\\cite{26}.\nTo produce training data for the comfort loss proxy network, we evaluate the comfort loss on all of these optimized (chair, body shape, pose) tuples, resulting in $100N$ (chair, body shape, comfort loss) training examples for a chair dataset of size $N$.\nTo produce $100N$ training examples for the target pose loss proxy network, for each chair, we evaluate the target pose loss on 100 different poses.\nOne of these is the optimized pose for the chair itself (which incurs zero target pose loss); the other 99 are randomly sampled from the set of optimized poses for other chairs.\nFinally, this data is split 95\\%\/5\\% into train\/test.\nBoth networks are trained to minimize the absolute difference between their predicted loss and the ground truth.\nTo stabilize training, the ground-truth loss values are whitened (normalized to zero mean and unit variance).\nSee the supplement for details.\n\\section{Body-aware Generative Models}\n\\label{sec:generative}\n\n\\begin{figure}[t!]\n    \\centering\n    \\includegraphics[width=\\linewidth]{images\/generative.pdf}\n    \\vspace{-1.5em}\n    \\caption{Generative model architecture diagram.\n    We use a mapping network $F$ to warp the latent space of a pre-trained shape generative model, conditioned on either a body shape or pose.\n    Bell curve icon by Davo Sime from the Noun Project.\n    }\n    \\label{fig:gen_condition}\n\\end{figure}\n\nFinally, we use the proxy networks to train body-conditional generative models by fine-tuning a generative model pre-trained on a chair 
dataset.\nRather than fine-tune the generator (which could result in shapes that satisfy the proxies but are un-chair-like), we learn a \\emph{mapping network} $F$ to transform the generator's input latent code $\\mathbf{z}$ to a latent code $F(\\mathbf{z})$.\n$F$ induces a body-conditional warping of the latent space, pushing latent codes towards regions of the space which better accommodate the input body.\nThis approach is based on prior work that fine-tunes 3D shape generative models to produce physically connected and stable outputs~\\cite{Mezghanni_2021_CVPR}.\n\nFigure~\\ref{fig:gen_condition} illustrates this approach.\nGiven an input body shape or pose, we project it to a conditioning vector $\\mathbf{c}$, which is then fed to a FiLM network.\nThe FiLM net produces element-wise scales and shifts for each layer of the mapping network $F$, which takes a latent code $\\mathbf{z}$ as input and produces a transformed latent code $F(\\mathbf{z})$ as output.\nWe train two variants of $F$: one that takes body shapes as input (18-dimensional SMPL shape vector), and one that takes target poses (69-dimensional SMPL pose vectors).\n$F(\\mathbf{z})$ is then fed to the fixed generator network to produce an output shape, which is then point-sampled and fed to the appropriate loss proxy network to produce a training loss.\nWe regularize the network with an additional loss that discourages $F(\\mathbf{z})$ from drifting too far from $\\mathbf{z}$.\n\nWe use this procedure to train shape- and pose-conditioned variants for three generative shape models: ShapeAssembly, a generative model that writes programs which declare and connect cuboids~\\cite{13}; SP-GAN, a point cloud generator~\\cite{6}; and IM-Net, an implicit field generator~\\cite{7}.\nMore training details can be found in the supplement.\n\\section{Results \\& Evaluation}\n\\label{sec:results}\n\nIn this section, we evaluate the performance of each component of our system: chair functionality metrics, loss proxy networks, and 
body-conditional generative models.\n\n\\parahead{Comfort metric perceptual validation}\nTo evaluate how well the comfort metric proposed in Section~\\ref{sec:comfort_metric} agrees with human judgments of comfort, we conduct a two-alternative forced choice perceptual study.\nIn each comparison, participants were shown two images of a human body sitting in a chair and asked which sitting pose looks more comfortable.\nWe recruited 20 participants (members of our research lab), each of whom performed the same set of 50 such comparisons.\nThe relative values of our comfort metric agree with the participants for $70\\%$ of these comparisons (95\\% confidence interval: $(55\\%, 86\\%)$), indicating that our metric generally captures human perception of sitting comfort.\n\n\\parahead{Accuracy of loss proxy networks}\nWe evaluate how well our two loss proxy networks learn to approximate the true loss functions by checking whether they order chairs by loss in the same way.\nSpecifically, we create a large set of pairs of chairs (not seen during training) and record which of the pair has a lower true loss value. 
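This pairwise comparison amounts to a ranking-accuracy computation over all chair pairs; as a minimal illustrative sketch (with hypothetical loss values, not the actual evaluation code used in our experiments), it can be written as:

```python
import itertools

def pairwise_ranking_accuracy(true_losses, predicted_losses):
    # Fraction of chair pairs for which the proxy network orders the two
    # chairs (by predicted loss) the same way the true metric does.
    pairs = list(itertools.combinations(range(len(true_losses)), 2))
    agree = sum(
        (true_losses[i] < true_losses[j]) == (predicted_losses[i] < predicted_losses[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# Hypothetical true and proxy-predicted comfort losses for four chairs:
true_vals = [0.9, 0.4, 1.3, 0.7]
pred_vals = [1.0, 0.5, 1.1, 0.8]  # same relative ordering as the true losses
print(pairwise_ranking_accuracy(true_vals, pred_vals))  # 1.0
```

This agreement rate is the quantity reported in Table~\ref{tab:proxy_acc}.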
\nWe then check how often the chair for which the proxy predicts the lower loss is the same chair that has the lower true loss.\nTable~\\ref{tab:proxy_acc} reports these accuracies.\nThe comfort loss proxy is harder to learn than the pose proxy, as comfort is the more subtle of the two objectives.\nAs we will show, this level of accuracy is sufficient for effectively fine-tuning our generative models.\n\n\\begin{table}[t!]\n    \\centering\n    \\footnotesize\n    \\renewcommand{\\arraystretch}{0.85}\n    \\begin{tabular}{lcc}\n    \\toprule\n    \\textbf{Data type} & \\textbf{Comfort Proxy} & \\textbf{Pose Proxy}\n    \\\\\n    \\midrule\n    Cuboids (ShapeAssembly) & 64\\% & 89\\% \n    \\\\\n    Implicit (IM-Net) & 62\\% & 80\\%\n    \\\\\n    Point cloud (SP-GAN) & 73\\% & 88\\%\n    \\\\\n    \\bottomrule\n    \\end{tabular}\n    \\vspace{-1em}\n    \\caption{\n    Accuracy of our loss proxy networks when used to predict which of two chairs should have the lower loss.}\n    \\label{tab:proxy_acc}\n\\end{table}\n\n\n\\parahead{Generative model evaluation}\nWe evaluate our body-conditional generative models using the following metrics:\n\\begin{packed_itemize}\n    \\item \\emph{Comfort loss (Comfort)}: the mean comfort loss value for a chair generated given an input body shape (in kilopascals; lower is better).\n    \\item \\emph{Pose loss (Pose)}: the mean pose matching loss value for a chair generated given an input target pose (in centimeters; lower is better).\n    \\item \\emph{Fr\\'echet Distance (FD)}: the Fr\\'echet Distance~\\cite{FrechetInceptionDistance} (evaluated in the feature space of a pre-trained PointNet classifier~\\cite{qi2017pointnet}) between a set of generated chairs and a set of chairs from a held-out test set (lower is better).\n\\vspace{-\\topsep}\n\\end{packed_itemize}\nWe compare to the original generative model pre-trained on the PartNet dataset as well as an ``oracle'' method in which multiple (10) samples are drawn from the original generative model and then evaluated using our comfort loss or pose matching loss; 
whichever achieves the best loss is returned as the output of this method.\nThe oracle requires running the expensive sitting pose optimization for each chair, so it is not practical for most applications.\nIt does serve as an informative upper bound on performance, however.\n\nTable~\\ref{tab:gen_quant} shows the results of this experiment.\nThe body-conditioned variants achieve better pose and comfort losses than the original model (though not quite as good as the expensive-to-evaluate oracle).\nRespecting the functional losses comes at the cost of a small distribution shift (as measured by FD), which the oracle also incurs.\nFigure~\\ref{fig:comfort_qual} qualitatively compares some outputs of our shape-conditioned generative model with those of the original generative model when given the same latent code; Figure~\\ref{fig:target_qual} shows analogous results for our pose-conditioned generative model.\nThe fine-tuned generative models adjust chair geometry to better accommodate the input body: tilting chair backs forward or back, lowering or raising seat slope, etc.\nOur regularization, which penalizes the warped latent code for drifting too far from the original latent code, does prevent some large-scale geometric changes from happening (e.g. 
adding footrests to the chair in Figure~\\ref{fig:target_qual} bottom, second from left).\n\n\\begin{table}[t!]\n \\centering\n \\footnotesize\n \\setlength{\\tabcolsep}{3pt}\n \\renewcommand{\\arraystretch}{0.85}\n \\begin{tabular}{llccc}\n \\toprule\n \\textbf{Model type} & \\textbf{Model variant} &\n \\textbf{Comfort$\\downarrow$} & \\textbf{Pose$\\downarrow$} &\n \\textbf{FD$\\downarrow$}\n \\\\\n \\midrule\n \\multirow{5}{*}{ShapeAssembly} &\n Original & 11.2 & 44.5 & 33.0 \\\\ \n & Shape-conditioned & 7.71 & -- & 41.8\\\\\n & Pose-conditioned & -- & 30.3 & 42.8\\\\\n & \\emph{Oracle} & \\emph{6.52} & \\emph{21.2} & \\emph{39.3}\n \\\\\n \\midrule\n \\multirow{5}{*}{IM-Net} &\n Original & 14.1 & 40.35 & 82.6 \\\\\n & Shape-conditioned & 10.7 & -- & 86.0\\\\\n & Pose-conditioned & -- & 31.6 & 92.0\\\\\n & \\emph{Oracle} & \\emph{9.53} & \\emph{17.25} & \\emph{86.8}\n \\\\\n \\midrule\n \\multirow{5}{*}{SP-GAN} &\n Original & 10.23 & 109.85 & 25.4 \\\\ \n & Shape-conditioned & 8.77 & -- & 67.1\\\\\n & Pose-conditioned & -- & 61.87 & 60.5\\\\\n & \\emph{Oracle} & \\emph{4.74} & \\emph{31.04} & \\emph{25.1} \n \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-1em}\n \\caption{Evaluating how well different generative models respect functional losses (Comfort, Pose) while staying within the distribution of chairs (FD).\n }\n \\label{tab:gen_quant}\n\\end{table}\n\n\\begin{figure*}[t!]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{1pt}\n \\renewcommand{\\arraystretch}{0.5}\n \\begin{tabular}{c cc cc cc}\n & \\multicolumn{2}{c}{ShapeAssembly} & \\multicolumn{2}{c}{IM-Net} & \\multicolumn{2}{c}{SP-GAN}\n \\\\\n \\raisebox{0.8em}{\\rotatebox{90}{Unconditioned}} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/0uncond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/1uncond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/2uncond.png} &\n 
\\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/3uncond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/4uncond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/5uncond.png} \\\\\n & $\\mathcal{L}_\\text{comfort} = 11.0$ &\n $\\mathcal{L}_\\text{comfort} = 11.8$ &\n $\\mathcal{L}_\\text{comfort} = 13.5$ &\n $\\mathcal{L}_\\text{comfort} = 14.3$ &\n $\\mathcal{L}_\\text{comfort} = 11.2$&\n $\\mathcal{L}_\\text{comfort} = 10.9$\n \\\\\n \\raisebox{1.3em}{\\rotatebox{90}{Conditioned}} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/0cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/1cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/2cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/3cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/4cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure7\/5cond.png}\\\\\n & $\\mathcal{L}_\\text{comfort} = 7.93$ &\n $\\mathcal{L}_\\text{comfort} = 7.07$ &\n $\\mathcal{L}_\\text{comfort} = 11.3$ &\n $\\mathcal{L}_\\text{comfort} = 10.5$ &\n $\\mathcal{L}_\\text{comfort} = 9.90$ &\n $\\mathcal{L}_\\text{comfort} = 8.06$\n \\end{tabular}\n \\caption{Comparing outputs of our shape-conditioned generative models with their corresponding unconditioned generative models, given the same latent code.}\n \\label{fig:comfort_qual}\n\\end{figure*}\n\n\\begin{figure*}[t!]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{1pt}\n \\renewcommand{\\arraystretch}{0.5}\n \\begin{tabular}{c cc cc cc}\n & \\multicolumn{2}{c}{ShapeAssembly} & \\multicolumn{2}{c}{IM-Net} & \\multicolumn{2}{c}{SP-GAN}\n \\\\\n \\raisebox{0.8em}{\\rotatebox{90}{Unconditioned}} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/0uncond.png} &\n 
\\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/1uncond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/2uncond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/3uncond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/4uncond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/5uncond.png}\\\\\n & $\\mathcal{L}_\\text{pose} = 17.2$ &\n $\\mathcal{L}_\\text{pose} = 49.2$ &\n $\\mathcal{L}_\\text{pose} = 16.8$ &\n $\\mathcal{L}_\\text{pose} = 45.6$ &\n $\\mathcal{L}_\\text{pose} = 31.5$ &\n $\\mathcal{L}_\\text{pose} = 84.0$\n \\\\\n \\raisebox{1.3em}{\\rotatebox{90}{Conditioned}} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/0cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/1cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/2cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/3cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/4cond.png} &\n \\includegraphics[width=0.16\\linewidth]{images\/results\/figure8\/5cond.png}\\\\\n & $\\mathcal{L}_\\text{pose} = 8.40$ &\n $\\mathcal{L}_\\text{pose} = 15.5$ &\n $\\mathcal{L}_\\text{pose} = 8.25$ &\n $\\mathcal{L}_\\text{pose} = 24.1$ &\n $\\mathcal{L}_\\text{pose} = 14.2$ &\n $\\mathcal{L}_\\text{pose} = 69.9$\n \\end{tabular}\n \\caption{Comparing outputs of our pose-conditioned generative models with their corresponding unconditioned generative models, given the same latent code. 
The target pose is shown in gold; the pose the body actually assumes when sitting in the chair is shown in gray.}\n    \\label{fig:target_qual}\n\\end{figure*}\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe presented a new technique for adapting generative models of 3D chairs into \\emph{body-aware} generative models which accommodate input body shapes or sitting postures.\nWe described an optimization for finding a body's sitting pose in a chair, defined a metric for assessing the comfort of such sitting poses, and trained neural networks to approximate this metric and a related pose-matching metric.\nFinally, we introduced a general scheme for adding body conditioning to any latent variable shape generative model and trained these models to minimize network proxy losses.\nThe generality of this scheme allows application to three base generative models: ShapeAssembly, IM-Net, and SP-GAN.\n\n\\parahead{Limitations \\& Future Work}\nWhile our method is currently limited to chairs, we look forward to learning to synthesize other common objects that people interact with (e.g.~other furniture, tools).\nOur sitting pose optimization procedure and comfort loss metric use limited physics and could benefit from soft body simulation to produce more accurate body pressure estimates.\nWe could also remove the assumption that the sitting pose is passive, incorporating active human agents that seek to accomplish goals while sitting (e.g.~eating, working at a desk, or conversing with a friend).\nWe could replace the current sitting pose optimization procedure with one that factors in the active effort required for a person to sit.\nPursuing this latter direction would facilitate design of chairs for people with limited strength, mobility, or muscle control.\nThere is no reason to limit the model to working on the range of body shapes expressible in the SMPL model: it would be valuable to support the bodies of children, or people missing one or more limbs.\n\n\\section{Supplemental 
Materials}\n\\subsection{Loss Proxy Training}\nWe found that the ground-truth comfort loss and pose matching loss distributions were too widely dispersed for a neural network to regress directly. Taking the log of the comfort loss and the log of the pose matching loss plus one made the distributions better resemble (non-unit) normal distributions. We then whitened these values to have a mean of zero and a standard deviation of one.\n\\subsection{Generative Model Training}\nWhen training our conditioning networks to produce latent vectors for our generative models, we needed to regularize the values produced by the network to keep them from straying too far beyond the unit normal distribution and creating shapes that no longer function as chairs.\n\\subsubsection{Shape Assembly}\nWhen training ShapeAssembly we found that leaving the model to train without a regularizer leads to the generation of erroneous shape programs, as only a subset of programs output by the decoder are executable. We implemented a $\\mathbf{z}$-distance loss with weight 0.001 to discourage the network from producing latent vectors far from the unconditioned vector. We train the ShapeAssembly conditioning network for 50 epochs, with 1000 iterations per epoch. We use a learning rate of 0.0001 with no learning rate scheduling. \n\\subsubsection{IM-NET}\nFor IM-NET, training without regularization yields chairs with artefacts or signed distance function outputs that do not contain the level set at which we extract our output chair meshes through the marching cubes algorithm. Similarly, we implemented a $\\mathbf{z}$-distance loss with weight 0.001 to prevent such errors. For SP-GAN, we likewise found that training without any regularization term results in deformed shapes that do not resemble chairs. Thus, we implemented a $\\mathbf{z}$-distance loss with coefficient $0.1$. 
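The two ingredients described above — the log/whitening transform for the proxy regression targets and the $\mathbf{z}$-distance regularizer — can be sketched as follows (a sketch under our assumptions: variable names and the squared-L2 form of the $\mathbf{z}$-distance are illustrative, not taken from the paper):

```python
import numpy as np

def proxy_targets(comfort, pose, mu, sd):
    # Regression targets for the loss proxies: log(comfort) and log(pose + 1),
    # whitened to zero mean / unit std using dataset statistics (mu, sd).
    t = np.stack([np.log(comfort), np.log(pose + 1.0)], axis=-1)
    return (t - mu) / sd

def finetune_loss(proxy_loss, z_cond, z_orig, z_weight=0.001):
    # Fine-tuning objective: proxy loss plus a z-distance penalty keeping the
    # conditioned latent near the unconditioned one (squared L2 assumed here).
    z_dist = np.mean(np.sum((z_cond - z_orig) ** 2, axis=-1))
    return proxy_loss + z_weight * z_dist
```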
We train the IM-NET conditioning network for 50 epochs, with 1000 iterations per epoch. We use a learning rate of 0.0001 with no learning rate scheduling. \n\\subsubsection{SP-GAN}\nThe SP-GAN decoding process is slightly different, as the model decodes a chair from $N_p$ $\\mathbf{z}$ vectors, where $N_p$ is the number of points in the output point cloud. This gives the output space significantly more degrees of freedom, so we can only train the model for a few iterations before the output shapes start to deform. Thus, we train SP-GAN with a learning rate of 0.0001 and no learning rate scheduling for 100 epochs of only 10 iterations each, which is sufficient to see results. Due to the limited amount of training, the various training setups for SP-GAN's pose models do not result in large $\\mathbf{z}$ distance deviations. However, we still found that adding in the $\\mathbf{z}$ distance loss with coefficient $0.1$ allowed our model to produce lower loss values from our pose and comfort loss proxies. ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nBreast cancer is one of the most prevalent cancers worldwide, with 1\/8th of the women population affected by the disease \\cite{Siegel2019}. Regular breast cancer screening plays an important role in early breast cancer detection, and mammography remains the most used screening modality around the world. Its main purpose is the discovery of the signs of cancer development (e.g. clusters of microcalcifications, spiked masses, distortions, etc.) leading to further examinations such as ultrasound, MRI, or biopsy. Given the importance of breast cancer for public health, the area is one of the most active in medical imaging research \\cite{Carneiro2017, Hamidinekoo2018a}.\n\nMammography analysis by its nature presents two main tasks: classification and detection. The difficulty of the tasks is emphasized by the complexity of the visualized features. 
That is, the image presents a projection of several layers of soft tissues. Moreover, malignant findings vary in size, with the smallest being below $0.5mm$ \\cite{Mercado2014}, making the detection task even harder. \n\nDeep learning techniques are widely used for different medical-imaging-related tasks, and mammography is amongst the areas of interest \\cite{Carneiro2017, Hamidinekoo2018a}. However, the size and the complexity of the imaging make the application of state-of-the-art deep learning methods less straightforward, in both training and test phases.\n\nIn the present work, we focused on the supervised segmentation task. We aim at maximizing the input resolution of the images to increase the detection capabilities while having explicit hardware limitations (i.e. one mass-market GPU).\n\n\\section{Related work}\n\\label{sec:intro}\n\nSeveral lightweight implementations of U-Net-type networks have been proposed \\cite{Chen2018,Qi2019}. Sun et al. proposed a U-Net for mammographies \\cite{Sun2018} that involves downscaling images to 256x256, where it is difficult to detect certain suspicious regions (i.e. calcifications clusters). Oktay et al. \\cite{Oktay2018} and Sun et al. \\cite{Sun2018} propose structural modifications to a U-Net with attention layers, hence more complexity, while De Moor et al. propose to use a patch-wise trained U-Net on the full-sized mammographies \\cite{DeMoor2018}. While this allows the processing of high-resolution images, the training process loses spatial and topological information of the full image. Pandey et al. introduced a network similar to ours, but the authors experiment on low-resolution imaging (256x256) \\cite{Pandey2018}. In most cases, the proposed lightweight architectures have been applied to solve tasks where the traditional U-Net is applicable and yields reasonable results. 
In contrast, we attempt to solve a task where the regular U-Net does not fit.\n\n\\section{Methods}\n\\label{sec:format}\n\nFocusing on the segmentation task, we based our work on the U-Net, the state-of-the-art deep learning architecture for segmentation \\cite{Ronneberger2015} (see the 2-level-depth U-Net illustration in fig. \\ref{fig:unet}), to which we bring several modifications. \n\n\\begin{figure}[thb]\n\\centering\n\\includegraphics[width=6cm]{src\/unet-illustration}\n\n\\caption{Simplified illustration of the U-Net architecture with a depth of 2 levels. For details, see \\cite{Ronneberger2015}}\n\\label{fig:unet}\n\\end{figure}\n\nThe initial U-Net processes images of 572x572 pixels and has a 4-level depth with $\\approx 10M$ parameters. The U-Net architecture being fully convolutional by design, it can process images of different sizes; in such cases, the limits are imposed by the hardware. \n\nAs shown by \\cite{DeMoor2018}, the training operations may be performed patch-wise on GPU hardware, and the testing is done image-wise (on CPU and RAM). While this allows running the training on images of smaller size, it loses spatial and topological information of the full image.\n\nUnlike \\cite{DeMoor2018}, we aimed at training on full mammography images. To achieve our goal, we built a lightweight U-Net with several modifications compared to the initial U-Net. \n\nTo handle images of higher resolution, we increased the depth of the network from four to seven levels. That yielded a network with 600M parameters. To cope with such complexity, we decreased the number of initial-level convolution filters from 64, as in the original paper, to 16, which limited the number of parameters to 37M. To simplify the network even further, we replaced traditional convolutional layers with separable convolutions; that brought the number of parameters down to 4.6M. 
Finally, to mitigate the precision loss due to the limited number of parameters, we added short residual connections \\cite{Chen2018} at each level of the U-Net, leading to the final 5.3M-parameter network. \n\nWe trained our network on full mammographies resized to 1536x1536. Such resolution yielded images of $\\approx 0.15mm$ pixel spacing, which is acceptable with regard to the size of the malignant features \\cite{Mercado2014}. Therefore, given the resolution of the input images, unlike most of the state-of-the-art approaches, our proposed network is trained to segment both masses and calcifications. \n\n\\section{Results}\n\nWe achieve $DICE = 0.58$ on the INbreast \\cite{Moreira2012a} database, which is comparable to the state-of-the-art performances (0.62, \\cite{Sun2018}), considering our higher scale and combination of masses and calcifications.\n\n\n\n\\begin{figure}[thb]\n\\centering\n\\includegraphics[width=8cm]{src\/unet_seg_output-23_20200103235519-crop.png}\n\n\\caption{Segmentation results of the proposed network. \\textbf{First row}: input images; \\textbf{Second row}: thresholded output; \\textbf{Third row}: ground truth (masses and microcalcifications)}\n\\label{fig:res}\n\\end{figure}\n\n\\section{Discussion}\n\\label{sec:pagestyle}\n\nIn the present work, we focused on the mammography imaging segmentation task, looking for a compromise between the solution precision and complexity. We approached the problem with a fully supervised deep learning method based on the state-of-the-art U-Net architecture.\n\nTo achieve our goal, we designed a network with a limited number of parameters so it can fit one mass-market GPU. 
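The parameter saving from the separable convolutions described in Methods can be illustrated by a quick weight count (an illustrative calculation of our own, ignoring bias terms; the 128-to-256-channel layer is a hypothetical mid-level example):

```python
def conv_params(c_in, c_out, k=3):
    # standard 2D convolution: k*k weights per (input, output) channel pair
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k=3):
    # depthwise k*k filter per input channel, then a 1x1 pointwise mixing layer
    return k * k * c_in + c_in * c_out

std = conv_params(128, 256)            # 294912 weights
sep = separable_conv_params(128, 256)  # 33920 weights
ratio = std / sep                      # roughly 8.7x fewer parameters
```

The ~8x reduction per layer is consistent with the drop from 37M to 4.6M parameters reported above.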
Thanks to its lightness, our network can run fast in the test environment, which makes it production-ready.\n\nThe results obtained with our mammography segmentation deep learning network may be used in the screening process in two ways:\ni) guiding the radiologists during diagnosis and ii) guiding the radiographers in deciding on additional examinations before the images are shown to the radiologist.\n\nFinally, our approach may be used for other types of imaging as well, where both high resolution and low complexity are equally important. More precise architecture adjustments may be necessary for any particular task.\n\n\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec1}\n\nWith the rapid development of nanotechnology, the transport of single electrons can now be studied in electronic devices \\cite{Lu,Bylander,Fuji}, leading to renewed interest in the statistical distribution of electrons and the energy they carry. Full-counting statistics is a new methodology to characterize the full probability distribution of electron and energy transport by calculating the corresponding generating function \\cite{Belzig,Bagrets,Pilgram,Gogolin,Scho,Saito,Flindt,Urban,T1}. Experimentally, the real-time counting of electrons has been carried out in a quantum dot (QD), which directly measures the distribution function of current fluctuations \\cite{Bylander,Gust,Flindt2}. The measurement of higher order \\textit{cumulants} up to the 15th has also been reported in quantum point contact systems.\\cite{Flindt2} Such a stochastic process shows universal fluctuations of noise and higher order moments, which is a common feature in mesoscopic systems. Full-counting statistics offers a superior way to study the noise and all higher order correlations. 
Furthermore, quantum entanglement and quantum information have also been related to quantum transport in terms of full-counting statistics\\cite{Klich,Song,Les}.\n\nOne of the important issues in the non-equilibrium transport process is the energy transport, which gives information on how energy is dissipated and correlated in working electronic devices. Recently, the energy transport through one-dimensional systems, such as trapped ion chains, has been measured experimentally \\cite{Ramm}. The energy dissipation and fluctuation can be characterized by the energy current, which can be investigated theoretically by the Landauer-B\\\"{u}ttiker type of formalism in dc transport for non-interacting electrons \\cite{But,sivan,kearney}. The energy current $I^E_\\alpha$ is also related to the heat current $I^h_\\alpha$ by $I^h_\\alpha=I^E_\\alpha-\\mu I_\\alpha$, where $\\mu$ is the chemical potential and $I_\\alpha$ is the particle current. It is known that the Joule heating $J$ due to the leads is related to the heat current by $\\sum_\\alpha I^h_\\alpha=J$. The heat current driven by external bias and temperature gradient plays a central role in studying the efficiency of nano-scaled heat engines\\cite{engine1}. Recently, the ac heat current, its shot noise, and relaxation resistance have been investigated in mesoscopic systems\\cite{sanchez1,sanchez2,mosk1,mosk2,jchen}. Using the non-equilibrium Green's function, transient heat current has been studied in mesoscopic systems\\cite{Michelini}. Moreover, a first-principles calculation for transient heat current through molecular devices has also been carried out\\cite{Yu}. It would be interesting to further study the FCS of energy transport in the ac regime.\n\nWe note that FCS of phonon energy transport has been studied extensively. An exact formula for the cumulant generating function of heat transfer has been derived in harmonic networks to study non-equilibrium fluctuations \\cite{SD1,SD2}. 
Moreover, energy fluctuations in a driven quantum resonator have also been studied by full-counting statistics \\cite{C1,C2}. Using the phonon non-equilibrium Green's function, the generating function has been obtained for phonon transport in the transient regime as well as steady states\\cite{W1,W2,W3}. Various cumulants of thermal current and entropy production have been studied numerically\\cite{W1,W3,W4}. So far, most investigations of FCS of energy transfer have focused on phonon transport. However, less attention has been paid to FCS of electron energy transfer\\cite{foot5}. It is the purpose of this paper to address this issue.\n\nIn this paper, we develop a Keldysh non-equilibrium Green's function (NEGF) theory to study FCS of transferred energy in the transient regime. Based on the two-measurement scheme, we derive the expression of the generating function for FCS in terms of non-equilibrium Green's functions. This allows us to calculate the $n$th cumulant $C_n$ of transferred energy of mesoscopic systems in the transient regime as well as in the long-time limit. We then apply our formalism to investigate the energy transport in the transient regime for both single and double QD systems. To study the finite bandwidth effect on the cumulants of transferred energy, an exactly solvable model with Lorentzian linewidth is used so that the non-equilibrium Green's function can be obtained exactly. As expected, the cumulants of transferred energy show linear characteristics in time in the long-time limit for both systems. For the single QD, the transient energy current exhibits damped oscillatory behavior. The oscillation frequency is found to be independent of the bandwidth of the leads, and the decay of the oscillation amplitude is proportional to the lifetime of the resonant state of the QD, which decreases as the bandwidth increases. At short times, the maximum amplitude $M_n$ of the normalized $n$th cumulant $C_n(t)\/C_1(t)$ shows a universal behavior for the single QD system. 
Specifically, we find $M_{2k} = a_1 e^{\\kappa k}$ and $M_{2k+1} = a_2 e^{\\kappa k}$ for different system parameters, where $a_1$ and $a_2$ are non-universal constants. The universal slope $\\kappa$ is found to be close to 3. For the double QD system, we find that the transient energy current shows damped Rabi oscillations with a frequency approximately proportional to the interdot coupling constant $v$ between the two QDs. A threshold of the interdot coupling, $v_c$, is found below which the transient energy current increases with increasing $v$, while for $v>v_c$ the transient current is suppressed significantly. These interesting results can be understood analytically.\n\nThis paper is organized as follows. In Sec.~\\ref{sec2}, the formalism of the generating function for studying full-counting statistics of transferred energy in the transient regime is first presented. In Sec.~\\ref{sec3}, we apply the formalism obtained to both single and double QD systems and show numerical results of various cumulants, transient energy current and the corresponding higher order cumulants. Finally, the discussion and conclusion are given in Sec.~\\ref{sec4}.\n\n\\section{Theoretical Formalism}\\label{sec2}\nTo study full-counting statistics of energy transport, we need to obtain the probability distribution $P(\\Delta \\epsilon, t)$ of the transferred energy carried by electrons $\\Delta \\epsilon = \\epsilon_t - \\epsilon_0$ between an initial time $t_0$ (for simplicity, we set $t_0=0$) and a later time $t$, which can be calculated from a two-time quantum measurement. Let $\\epsilon$ denote the eigenvalue of the Hamiltonian $H_L$ of the left lead, where we measure the energy flow. Taking a measurement at time $t$ gives $\\epsilon_t$, which is a stochastic variable. 
The generating function $Z(\\lambda, t)$ with the counting field $\\lambda$ can be obtained by the Fourier transformation of the probability distribution as \\cite{Scho},\n\\begin{equation}\\label{Z}\n  Z(\\lambda,t) \\equiv \\langle e^{i\\lambda \\Delta \\epsilon}\\rangle = \\sum_{\\Delta \\epsilon}P(\\Delta \\epsilon, t) e^{i\\lambda \\Delta \\epsilon}.\n\\end{equation}\nThe $j$th cumulant of transferred energy $\\langle \\langle(\\Delta \\epsilon)^j\\rangle\\rangle$ is defined by,\n\\begin{equation} \\label{jth}\n  \\langle\\langle (\\Delta \\epsilon)^j \\rangle\\rangle = \\frac{\\partial^j \\ln Z(\\lambda)}{\\partial (i\\lambda)^j} \\bigg{|} _{\\lambda=0}.\n\\end{equation}\n\nWe now derive the generating function using NEGF theory for a general QD system coupled with two semi-infinite leads in the transient regime. For this purpose, we assume that the couplings between the QD and leads are turned on at $t=0$. The Hamiltonian of the whole system can be written as,\n\\begin{equation}\\label{ham}\n  H = \\sum_{k\\alpha} \\epsilon_{k\\alpha} c^\\dag_{k\\alpha} c_{k\\alpha} + \\sum_n \\epsilon_n d_n^\\dag d_n + \\sum_{k\\alpha n} \\Big( t_{k\\alpha n}c^\\dag_{k\\alpha}d_n + \\mathrm{h.c.} \\Big),\n\\end{equation}\nwhere $c^\\dag (c)$ and $d^\\dag (d)$ are the creation (annihilation) operators of leads and QD, respectively. $\\epsilon_n$ is the energy level for the QD and $\\epsilon_{k\\alpha}$ are the energy levels of lead $\\alpha$ ($\\alpha = L, R$). $t_{k\\alpha n}$ is the coupling constant between lead $\\alpha$ and the QD.\n\nTo investigate the energy current through the left lead where the measurement is made, we focus on the energy operator of the left lead\n\\begin{equation}\\label{hamL}\nH_L = \\sum_{k} \\epsilon_{kL} c^\\dag_{kL} c_{kL}.\n\\end{equation}\nSince we study the behaviour in the transient regime, we assume that the bias is applied to the leads at $t=-\\infty$ while the leads and QD are disconnected. 
All the couplings are switched on at $t=0$, giving rise to a transient energy current. The switching of the coupling between the QD and the leads can be done by a quantum point contact that is controlled by a gate voltage. Since the system is disconnected before $t=0$, the initial density matrix of the whole system at time $0$ is the direct product of those of the subsystems, expressed as $\\rho(0)=\\rho_L\\otimes\\rho_D\\otimes\\rho_R$. Similar to the cases of phonon and electron charge transport\\cite{W2,T2}, the generating function of transferred energy can be expressed as,\n\\begin{equation}\\label{gf1}\nZ(\\lambda,t) = \\mathrm{Tr}\\left[ \\rho(0) e^{i\\lambda H_{L}(0)} e^{-i\\lambda H^h_{L}(t)} \\right].\n\\end{equation}\nHere, $H^h_{L}(t)$ denotes the energy operator of the lead $L$ in the Heisenberg picture, which is related to the energy operator in the Schr\\\"{o}dinger picture $H_{L}(0)$ (Eq.~(\\ref{hamL})) by\n\\begin{equation}\nH^h_{L}(t)=U^\\dag(t,0)H_{L}(0) U(t,0),\n\\end{equation}\nwhere $U(t,0)$ is the evolution operator.\n\nIn terms of the modified Hamiltonian $H_\\gamma$ given in Eq.~(\\ref{mH}), the generating function can be rewritten as,\n\\begin{equation}\\label{gf2}\nZ(\\lambda,t) =\\mathrm{Tr}\\left\\{ \\rho(0) U^\\dag_{\\lambda\/2} (t,0) U_{-\\lambda\/2} (t,0) \\right\\},\n\\end{equation}\nwhere the modified evolution operator is,\n\\begin{equation}\\label{U}\n  U_\\gamma(t,0) = \\mathcal{T} \\exp\\left[ -\\frac{i}{\\hbar}\\int_{0}^{t} H_\\gamma(t') dt'\\right],\n\\end{equation}\nwith\n\\begin{eqnarray}\\label{mH}\n  H_\\gamma\n  &=& \\sum_{k} \\Big[ \\epsilon_{kL} c^\\dag_{kL}(t_\\gamma) c_{kL}(t_\\gamma) + \\epsilon_{kR} c^\\dag_{kR} c_{kR} \\Big] \\nonumber\\\\\n  && +\\sum_n \\epsilon_n d^\\dag_n d_n + \\sum_{kn}\\Big[ \\Big( t_{kL n} c^\\dag_{kL}(t_\\gamma) d_n \\nonumber\\\\\n  && + t_{kR n} c^\\dag_{kR} d_n \\Big) + \\mathrm{h.c.} \\Big]. 
\\label{H_m}\n\\end{eqnarray}\nwith $t_\\gamma=\\hbar\\gamma$ and $\\gamma=\\lambda\/2$.\nIn deriving Eq.~(\\ref{H_m}), we have used the following relation,\n\\begin{equation}\\label{ck}\n  e^{i\\gamma H_L} c_{kL}(0) e^{-i\\gamma H_L} =\\sum_n \\frac{\\hbar^n \\gamma^n}{n!} [\\partial^n_t c_{kL}(t)]_{t=0}= c_{kL}(t_\\gamma).\n\\end{equation}\n\n\n\nUsing Grassmann algebra, the generating function becomes\\cite{T2},\n\\begin{equation}\\label{gfgra}\n  Z(\\lambda,t) = \\int D[\\bar{\\phi}\\phi]e^{iS[\\bar{\\phi}\\phi]},\n\\end{equation}\nwhere $D[\\bar{\\phi}\\phi] = \\Pi_{x\\sigma} d\\bar{\\phi}^\\sigma_x d\\phi^\\sigma_x$ with $x \\in k\\alpha, n$ and the action $S[\\bar{\\phi}\\phi]$ is given by,\n\\begin{eqnarray}\\label{action}\nS[\\bar{\\phi}\\phi] &=& \\int_{0}^{t} d\\tau \\sum_{k\\sigma} \\sigma \\Big[ \\bar{\\phi}^\\sigma_{kL}(\\hbar\\gamma_\\sigma)(i\\partial_\\tau - \\epsilon_{kL})\\phi^\\sigma_{kL}(\\hbar\\gamma_\\sigma) \\nonumber\\\\\n&& + \\bar{\\phi}^\\sigma_{kR}(i\\partial_\\tau - \\epsilon_{kR})\\phi^\\sigma_{kR} \\Big] + \\sum_{n \\sigma} \\sigma \\bar{\\phi}^\\sigma_n (i\\partial_\\tau - \\epsilon_{n})\\phi^\\sigma_{n} \\nonumber\\\\\n&& - \\sum_{kn\\sigma} \\sigma \\Big[ t_{kL n} \\bar{\\phi}^\\sigma_{kL}(\\hbar\\gamma_\\sigma) \\phi^\\sigma_n + t_{kR n} \\bar{\\phi}^\\sigma_{kR} \\phi^\\sigma_n + \\mathrm{c.c.} \\Big], \\nonumber\\\\\n\\end{eqnarray}\nwhere $\\phi$ and $\\bar{\\phi}$ are independent Grassmann variables \\cite{gras} and $\\sigma = +,-$ denotes the upper and lower branches of the Keldysh contour, respectively.\n\n\nAfter the Keldysh rotation\\cite{Kamenev}, we rewrite the action in Eq.~(\\ref{action}) in a matrix form,\n\\begin{equation}\\label{ac-ma}\n  S[\\bar{\\Psi}\\Psi] = \\int_{0}^{t} d\\tau \\int_{0}^{t} d\\tau' \\bar{\\Psi}^T(\\tau) M(\\tau,\\tau') \\Psi(\\tau'),\n\\end{equation}\nwith $\\bar{\\Psi}^T(\\tau) = [\\bar{\\psi}_{kL}^T(\\tau_\\gamma),\\bar{\\psi}_n^T(\\tau),\\bar{\\psi}_{kR}^T(\\tau)]$ 
and\n\\begin{equation}\\label{Mmatrix}\n  M = \\left(\n        \\begin{array}{ccc}\n          g^{-1}_{kk'L}(\\tau_\\gamma,\\tau'_\\gamma) & -t_{kL n'}\\delta & 0 \\\\\n          -t^*_{k'L n}\\delta & g^{-1}_{nn'}(\\tau,\\tau') & -t^*_{k'R n}\\delta \\\\\n          0 & -t_{kR n'}\\delta & g^{-1}_{kk'R}(\\tau,\\tau') \\\\\n        \\end{array}\n      \\right),\n\\end{equation}\nwhere $\\delta$ is a unit matrix in Keldysh time space.\n\nPerforming the Gaussian functional integration over the Grassmann fields, the generating function can be expressed by the Keldysh non-equilibrium Green's function as \\cite{T2},\n\\begin{equation}\\label{gf}\n  Z(\\lambda, t) = \\mathrm{det} (G\\widetilde{G}^{-1}),\n\\end{equation}\nwith\n\\begin{eqnarray}\nG^{-1} &=& g^{-1} - \\Sigma_L - \\Sigma_R, \\label{G1} \\\\\n\\widetilde{G}^{-1} &=& g^{-1} - \\widetilde{\\Sigma}_L - \\Sigma_R. \\label{G2}\n\\end{eqnarray}\n\nHere, $g = g_{nn'}(\\tau,\\tau')$ is the Green's function of the isolated QD in Keldysh space. $\\Sigma_R = \\sum_{kk'}t^*_{k'R n} g_{kk'R} (\\tau,\\tau') t_{kR n'}$ is the self-energy of the right lead in the Keldysh space in the time domain. $\\widetilde{\\Sigma}_L = \\sum_{kk'}t^*_{k'L n} g_{kk'L} (\\tau_\\gamma,\\tau'_\\gamma) t_{kL n'}$ is the self-energy with the counting field, meaning that the two-time measurement is done in the left lead. 
The counting-field self-energy $\\widetilde\\Sigma_L$ is given by,\n\\begin{eqnarray}\\label{gk}\n \\widetilde\\Sigma_L(t,t') &=& \\left(\n \\begin{array}{cc}\n \\widetilde\\Sigma_L^r(t,t') & \\widetilde\\Sigma_L^k(t,t') \\\\\n \\widetilde\\Sigma_L^{\\bar{k}}(t,t') & \\widetilde\\Sigma_L^a(t,t') \\\\\n \\end{array}\n \\right),\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\\label{tildesig2}\n \\widetilde\\Sigma^r_L&=&\\frac{1}{2} \\left( \\Sigma^r_L + \\Sigma^a_L - \\widetilde\\Sigma^{<}_L + \\widetilde\\Sigma^{>}_L\\right), \\nonumber\\\\\n \\widetilde\\Sigma^k_L&=&\\frac{1}{2} \\left( \\Sigma^k_L + \\widetilde\\Sigma^{<}_L + \\widetilde\\Sigma^{>}_L\\right), \\nonumber\\\\\n \\widetilde\\Sigma^{\\bar{k}}_L&=&\\frac{1}{2} \\left( \\Sigma^k_L - \\widetilde\\Sigma^{<}_L - \\widetilde\\Sigma^{>}_L\\right),\n\\end{eqnarray}\nwith $\\Sigma_L^k = 2\\Sigma_L^< + \\Sigma_L^r - \\Sigma_L^a$.\n\nNote that Eq.~(\\ref{gf}) has the same form as the generating function for the FCS of charge transport in Refs.~\\onlinecite{T2} and \\onlinecite{ep1}; the difference lies in the expression of $\\widetilde\\Sigma_L$. For charge transport the counting field enters as an extra phase, while for energy transport it shifts the time argument from $t$ to $t_\\gamma$. In Eq.~(\\ref{gf}) the generating function is expressed as a determinant in both time and space domains. As a result, its evaluation is very time consuming for realistic systems. To calculate $Z(\\lambda, t)$ for fixed $\\lambda$ and $t$ for a system with $N$ degrees of freedom, we need to discretize the time interval into a uniform mesh of $N_t$ points and calculate the Green's function as a function of both position and time. The computational complexity of evaluating the determinant is of order $N^3 N_t^3$ for each $t$ \\cite{foot3}.
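Once $Z(\lambda,t)$ is available on a small grid of counting fields, low-order cumulants can be extracted by numerically differentiating $\ln Z$ at $\lambda=0$. A minimal sketch, using a toy Poissonian generating function whose cumulants are all equal to $c$ (a testing assumption, not the physical $Z$ of this paper):

```python
import numpy as np

def cumulants_from_lnZ(lnZ, nmax, h=1e-2):
    """n-th cumulant C_n = (-i)^n d^n ln Z / d lambda^n at lambda = 0,
    estimated by repeated central finite differences with step h."""
    lam = h * np.arange(-nmax, nmax + 1)
    d = np.array([lnZ(x) for x in lam], dtype=complex)
    C = []
    for n in range(1, nmax + 1):
        d = (d[2:] - d[:-2]) / (2 * h)          # one central derivative per pass
        C.append(((-1j) ** n * d[len(d) // 2]).real)
    return C

# Toy Poissonian ln Z: every cumulant equals c exactly, so the finite-difference
# estimates should all return (approximately) the same number.
c = 2.5
C = cumulants_from_lnZ(lambda lam: c * (np.exp(1j * lam) - 1.0), nmax=4)
```

The stencil shrinks by two points per differentiation, so `2*nmax + 1` samples of $\ln Z$ suffice for the first `nmax` cumulants.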
For this reason, the transient calculation of FCS is limited to a single QD or double QD with $N=1$ or $2$, where the exact solution of the time-dependent non-equilibrium Green's function is available. If one wishes to study a realistic system with $N=100$ (for instance), the amount of calculation increases by six orders of magnitude. However, if we are interested in the first few cumulants, we can first find their expressions using Eqs.~(\\ref{jth}) and (\\ref{gf}) and then calculate them numerically from these expressions rather than evaluating the cumulant generating function numerically.\n\nNow we examine the limiting cases of the generating function defined in Eq.~(\\ref{gf}). First of all, we look at the energy current in the transient regime. From Eq.~(\\ref{gf}) and using the relation $\\ln \\det A = \\mathrm{Tr} \\ln A$, the cumulant generating function is given by\n\\begin{equation}\\label{cgf}\n \\ln Z(\\lambda, t) = \\mathrm{Tr} \\ln [I - G( \\widetilde{\\Sigma}_L - \\Sigma_L)].\n\\end{equation}\nAccording to Eq.~(\\ref{jth}), the transient energy current can be expressed as (see Appendix~\\ref{a1} for the derivation; we have set $\\hbar = e = 1$),\n\\begin{equation}\\label{1st}\nI^{E}_L(t) = 2\\mathrm{Re} \\int dt' \\mathrm{Tr} \\big[ G^r(t,t') \\breve{\\Sigma}^<_L(t',t) + G^<(t,t') \\breve{\\Sigma}_L^a(t',t) \\big],\n\\end{equation}\nwhere\n\\begin{equation}\\label{sigma}\n \\breve{\\Sigma}^\\chi_L(t',t) = \\sum_{k} \\epsilon_{kL} \\Sigma^\\chi_{kL}(t'-t),\n\\end{equation}\nwith $\\chi = <,a$ and $\\Sigma^\\chi_{kL}(t'-t)$ being the self-energy of the left lead in the absence of the counting field. This expression for the transient energy current agrees with that obtained directly by the Green's function method \\cite{Yu}.\n\nWe now consider the short-time behavior of the generating function.
Using the fact that $G( \\widetilde{\\Sigma}_L - \\Sigma_L)$ is of order $t^2$, we find for small $t$\n\\begin{equation}\n \\ln Z(\\lambda, t) = -\\mathrm{Tr} [G( \\widetilde{\\Sigma}_L - \\Sigma_L)],\n\\end{equation}\nwhere the trace is taken in Keldysh space. It is straightforward to show\n\\begin{equation}\\label{cgf1}\n \\ln Z(\\lambda, t) = \\mathrm{Tr} [G^<( \\widetilde{\\Sigma}^>_L - \\Sigma^>_L)+G^>( \\widetilde{\\Sigma}^<_L - \\Sigma^<_L)].\n\\end{equation}\nIn the weak-coupling limit for a single quantum dot with a single level, we can replace $G^<$ and $G^>$ by the Green's functions of the isolated quantum dot, and we find\n\\begin{eqnarray}\\label{short}\n\\ln Z(\\lambda, t)=(n_d-1) M_{L1} + n_d M_{L2},\n\\end{eqnarray}\nwhere $n_d$ is the initial occupation number of the isolated QD and\n\\begin{eqnarray}\nM_{L1}&=&\\int dE A_0(E) (e^{i \\alpha(E) \\lambda}-1) f_L(E), \\\\\nM_{L2}&=&\\int dE A_0(E) (e^{-i \\alpha(E) \\lambda}-1) (f_L(E)-1),\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\\label{a0}\nA_0(E)=\\frac{4\\Gamma_L(E)}{\\pi} \\frac{\\sin^2[(E-\\epsilon_0)t\/2]}{(E-\\epsilon_0)^2},\n\\end{eqnarray}\nwhere $\\Gamma_L(E)$ is the linewidth function of the left lead. Here, $\\epsilon_0$ is the energy level of the QD and $\\alpha(E) = 1$ or $E$ for charge transport and energy transport, respectively. In the wideband limit (WBL) and for $\\alpha=1$, Eq.~(\\ref{short}) recovers the result of Ref.~\\onlinecite{ep1}, from which a universal behavior of the $n$th cumulant of charge transport $C_n$ has been derived that was first demonstrated experimentally in Ref.~\\onlinecite{Flindt2}. Note that an important relation holds for charge transport when $n_d=0$, i.e.,\n\\begin{equation}\\label{rel}\n(-i)^n \\partial^n \\ln Z(\\lambda,t)\/\\partial \\lambda^n |_{\\lambda=0}= x(t),\n \\end{equation}\nwhich is independent of $n$ for $n>0$.
This allows one to obtain an analytic expression for $C_n$ for charge transport in the short-time limit and very weak coupling regime, leading to this universal behavior.\n\nFor energy transport with $\\alpha=E$, Eq.~(\\ref{rel}) no longer holds. Although no analytic expression is available for the $n$th cumulant of energy transport $C_n$, some asymptotic behavior can be derived. We assume $n_d=0$, i.e., initially there is no electron in the QD. From Eq.~(\\ref{short}), it is straightforward to find the expression of the $n$th order cumulant of energy transport ($\\alpha=E$) at zero temperature\n\\begin{eqnarray}\\label{cumu0}\nC_n=\\frac{\\partial^n \\ln Z}{\\partial (i\\lambda)^n} = \\int_{-\\infty}^{\\Delta_L} dE A_0(E) E^n,\n\\end{eqnarray}\nwhere $\\Delta_L$ is the bias voltage of the left lead. Now we use the WBL such that $\\Gamma(E)$ is a nonzero constant only for $|E|<W$. Assuming $\\epsilon_0>0$, we have\n\\begin{eqnarray}\\label{cumu1}\nC_{n}= \\int_{-W}^{W} du A_0(u) (u+\\epsilon_0)^{n},\n\\end{eqnarray}\nwhere $u=E-\\epsilon_0$. Since $A_0(u)$ is an even function, this integral can be evaluated in the large-$W$ limit. For even $n=2k$, the major contribution comes from $\\int du A_0(u) u^{2k}$, which gives\n\\begin{eqnarray}\\label{cumu3}\nC_{2k} \\sim a_1 W^{2k-1},\n\\end{eqnarray}\nwith $a_1=\\frac{4\\Gamma_L}{(2k-1)\\pi}$; the next-order $W^{2k-2}$ term depends on $t$.\nFor $C_{2k+1}$, the integral is dominated by $\\int du A_0(u) (2k+1) \\epsilon_0 u^{2k}$, from which we find\n\\begin{eqnarray}\\label{cumu4}\nC_{2k+1} \\sim a_2 W^{2k-1},\n\\end{eqnarray}\nwith $a_2=\\frac{4\\Gamma_L(2k+1)}{(2k-1)\\pi} \\epsilon_0$. Denoting $F_{n} = \\ln(C_{n}\/C_1)$, we have $F_{2k} = \\ln(a_1\/C_1) +(2k-1)\\ln W $ and $F_{2k+1} = \\ln(a_2\/C_1) +(2k-1)\\ln W $. This suggests that both $F_{2k}$ and $F_{2k+1}$ depend linearly on $k$ with the same slope but different intercepts. Since the slope depends only on the bandwidth $W$, it is universal.
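The short-time integral of Eq.~(\ref{cumu1}) and the large-$W$ asymptotics $C_{2k} \sim a_1 W^{2k-1}$ are easy to check numerically; a minimal numpy sketch with illustrative parameter values ($\Gamma_L=1$, $\epsilon_0=5$, a short time $t=0.1$, all assumptions for illustration):

```python
import numpy as np

# Numerical check of the short-time cumulant integral, Eq. (cumu1):
# C_n = int_{-W}^{W} du A_0(u) (u + eps0)^n with A_0(u) from Eq. (a0).
gamma_L, eps0, t, W = 1.0, 5.0, 0.1, 500.0
u, du = np.linspace(-W, W, 200001, retstep=True)
A0 = np.full_like(u, gamma_L * t**2 / np.pi)   # limiting value at u = 0
nz = u != 0
A0[nz] = (4 * gamma_L / np.pi) * np.sin(u[nz] * t / 2) ** 2 / u[nz] ** 2
# simple Riemann sum; the integrand oscillates with period 2*pi/t in u
C = {n: float(np.sum(A0 * (u + eps0) ** n) * du) for n in range(1, 8)}
```

For $n=2k$ the result should approach $a_1 W^{2k-1}$ with $a_1 = 4\Gamma_L/[(2k-1)\pi]$, up to an oscillatory correction of order $1/(Wt)$.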
The above discussion is qualitative and holds only in certain limits; a detailed numerical study of the universal behavior at short times will be presented in the next section.\n\nNow we investigate the long-time behavior of the generating function in the transient regime. When $t$ goes to infinity, the Green's function and self-energy in the time domain become invariant under time translation \\cite{T2}. Therefore, the cumulant generating function in Eq.~(\\ref{cgf}) in the long-time limit in the energy space becomes,\n\\begin{equation}\\label{cgflt}\n \\ln Z_s (\\lambda,t) = t \\int \\frac{d\\omega}{2\\pi} \\ln \\det \\big\\{ I - G(\\omega) [\\widetilde{\\Sigma}_L(\\omega) - \\Sigma_L(\\omega)]\\big\\}.\n\\end{equation}\n\nIn the next section, we will give numerical results for the FCS of transferred energy in the transient regime. We will study the FCS for two systems, single QD and double QD systems. In calculating the generating function numerically from Eq.~(\\ref{gf}), the bare Green's functions entering Eq.~(\\ref{G1}) for an occupied single QD can be expressed as,\n\\begin{eqnarray}\n g^r(\\tau_1, \\tau_2) &=& -i\\theta(\\tau_1-\\tau_2) \\exp[-i\\epsilon(\\tau_1-\\tau_2)], \\label{gr}\\\\\n g^<(\\tau_1, \\tau_2) &=& i \\exp[-i\\epsilon(\\tau_1-\\tau_2)], \\label{gl}\n\\end{eqnarray}\nwith the Heaviside step function $\\theta(\\tau_1-\\tau_2)$. For a double QD system, the Green's function with the counting field in Eq.~(\\ref{G2}) should be written as,\n\\begin{equation}\\label{Gd}\n \\widetilde{G}^{-1} = \\left(\n \\begin{array}{cc}\n g^{-1}_1 - \\widetilde{\\Sigma}_{L} & -v \\\\\n -v^* & g^{-1}_2 - \\Sigma_{R} \\\\\n \\end{array}\n \\right).\n\\end{equation}\nHere, $g^{-1}_1$ and $g^{-1}_2$ are the inverse Green's functions of the first and second isolated QD, respectively.
$v$ is the coupling constant between the two QDs.\n\nIn order to calculate the self-energies $\\Sigma_{L(R)}$ and $\\widetilde{\\Sigma}_L$ in Eqs.~(\\ref{G1}) and (\\ref{G2}), we use a Lorentzian linewidth function, so that the equilibrium energy-dependent retarded self-energy with a finite bandwidth $W$ reads,\n\\begin{equation}\\label{selfenergy}\n {\\Sigma}^r_\\alpha(\\omega) = \\frac{\\Gamma_\\alpha W}{2(\\omega + iW)},\n\\end{equation}\nwith the linewidth amplitude $\\Gamma_\\alpha$. This special model allows us to find the Green's function exactly while still going beyond the WBL. Note that the bandwidth cannot be tuned experimentally. The retarded self-energy in the time domain is then given by,\\cite{T2}\n\\begin{eqnarray}\\label{sigmar}\n\\Sigma^r_\\alpha (\\tau_1, \\tau_2) &=& -\\frac{i}{4} \\theta(\\tau_1-\\tau_2) \\Gamma_\\alpha W e^{-(i\\Delta_\\alpha + W)(\\tau_1 - \\tau_2)},\n\\end{eqnarray}\nwhere $\\Delta_\\alpha$ is the external bias applied on lead $\\alpha$. In order to calculate the lesser Green's function analytically, we focus on zero temperature. We find $\\Sigma^<_\\alpha (\\tau_1 - \\tau_2) = \\frac{i}{8} \\Gamma W$ for $\\tau_1 = \\tau_2$, and otherwise \\cite{T2}\n\\begin{eqnarray}\\label{sigmal}\n \\Sigma^<_\\alpha(\\tau_1, \\tau_2) &=& \\frac{i}{8} \\Gamma W \\bigg\\{ - \\frac{i}{\\pi} e^{-(i\\Delta_\\alpha - W)\\tau} \\mathrm{Ei}(-W\\tau) \\nonumber\\\\\n &&+ e^{-(i\\Delta_\\alpha + W) \\tau} \\Big[ 1+\\frac{i}{\\pi} \\mathrm{Ei}(W\\tau) \\Big] \\bigg\\},\n\\end{eqnarray}\nwith $\\tau = \\tau_1 - \\tau_2$ and $\\mathrm{Ei}(x) = -\\int_{-x}^{\\infty} \\frac{e^{-t}}{t} dt$. Note that the diagonal element of the lesser self-energy diverges for large $W$. It has been confirmed in Ref.~\\onlinecite{joseph} that the transient charge current in the WBL can be obtained by calculating the transient current as a function of $W$ and then taking the large-$W$ limit.
We have confirmed that the transient energy current in the WBL can be obtained similarly.\n\nBefore we end this section, we mention that the approach presented in this paper is suitable only for non-interacting problems. In the special situation where electrons couple to a single phonon mode, this type of approach can be generalized (see Ref.~\\onlinecite{Urban}).\n\n\\section{Numerical results}\\label{sec3}\nIn this section, we first apply our formalism to a single QD system which is assumed to be half occupied at $t=0$. The dependence of cumulants on the occupation will be examined later. The linewidth amplitude in Eq.~(\\ref{selfenergy}) is set to $\\Gamma_L = \\Gamma_R = \\Gamma\/2$ and the bandwidth $W$ is also set to be the same for both leads. The energy level of the QD is taken to be $5\\Gamma$ and a bias with amplitude $\\Delta_L = 10\\Gamma$ is chosen for this system. In the following numerical calculations, we set $e = \\hbar =\\Gamma = 1$ for simplicity.\n\n\\begin{figure}\n \\includegraphics[width=3.25in]{Figure1.eps}\\\\\n \\caption{(a) 1st, (b) 2nd, (c) 3rd, and (d) 4th cumulants of transferred energy with different bandwidths $W$ in the left lead for a single QD system.}\n \\label{fig1}\n\\end{figure}\n\nFigure~\\ref{fig1} shows the 1st to 4th cumulants of transferred energy counted from the initial time $t=0$ in the left lead of the system for different bandwidths $W = 10\\Gamma$, $20\\Gamma$, $50\\Gamma$, and $80\\Gamma$. The 1st and 3rd cumulants decrease immediately once the system is switched on and increase after reaching a minimum. In this region, the 1st and 3rd cumulants with smaller bandwidths have larger values until the crossover occurs at times around $0.45$ and $2.5$, as shown in Figs.~\\ref{fig1}(a) and \\ref{fig1}(c), respectively. After the crossover, the situation reverses, i.e., the 1st and 3rd cumulants with the larger bandwidth $W$ become smaller than those with smaller $W$ in the self-energy.
From Figs.~\\ref{fig1}(b) and \\ref{fig1}(d) we see that the 2nd and 4th cumulants show a sharp rise when the system is switched on and then increase almost linearly after the transient regime. Roughly speaking, the larger the bandwidth $W$, the larger the values of the 2nd and 4th cumulants.\n\n\\begin{figure}\n \\includegraphics[width=3.25in]{Figure2.eps}\\\\\n \\caption{Time derivative of the (a) 1st, (b) 2nd, (c) 3rd, and (d) 4th cumulants of transferred energy with different bandwidths $W$ in the left lead for a single QD system. Inset: transmission coefficients of the single QD system with different bandwidths.}\n \\label{fig2}\n\\end{figure}\n\nWe find that all cumulants of transferred energy increase with small oscillations as time increases and exhibit linear characteristics at long times, which agrees with the long-time limit of the cumulant generating function in Eq.~(\\ref{cgflt}). The small oscillations of the cumulants with time can be seen clearly from their derivatives with respect to time, as shown in Fig.~\\ref{fig2}. Figure~\\ref{fig2}(a) presents the time derivative of the 1st cumulant, namely, the transient energy current of the left lead for different bandwidths $W$. We see that the currents first drop, exhibiting dips with negative values, once the system is connected. After that they increase to maximum values and then decay in an oscillatory fashion. In the long-time limit, they reach the values of the dc energy current. We note that the transient energy current behaves similarly to the transient charge current obtained in Ref.~\\onlinecite{joseph}. The time derivative of the 2nd cumulant, which is related to the shot noise of the system, rises immediately, exhibiting peaks when the system is switched on, and drops down to the long-time limit very quickly with tiny oscillations.
The time derivative of the 4th cumulant shows behavior similar to that of the 2nd cumulant, except that it drops to negative values before finally approaching the positive long-time limit, while that of the 3rd cumulant shows the opposite behavior. The short-time behavior of the $n$th cumulant of energy transport $C_n$ in Fig.~\\ref{fig2} can be qualitatively understood from Eq.~(\\ref{cumu1}), where $C_n$ depends on $W$ through $\\Gamma(E) \\sim W^2\/(E^2+W^2)$. Hence at short times, a large $W$ gives a large $|C_n|$. The sign of $C_n$ can also be understood from Eq.~(\\ref{cumu0}) for $t$ approaching zero. For $n=2k$, we have from Eqs.~(\\ref{a0}) and (\\ref{cumu0})\n\\begin{eqnarray}\nC_{2k}= \\frac{\\Gamma_L t^2}{\\pi} \\int_{-\\infty}^{\\Delta_L} dE E^{2k},\n\\end{eqnarray}\nwhich is positive definite. For $n=2k+1$, since $\\int_{-\\Delta_L}^{\\Delta_L} dE E^{2k+1}=0$, we have\n\\begin{eqnarray}\nC_{2k+1}= \\frac{\\Gamma_L t^2}{\\pi} \\int_{-\\infty}^{-\\Delta_L} dE E^{2k+1},\n\\end{eqnarray}\nwhich is negative, in agreement with the results of Figs.~\\ref{fig1} and \\ref{fig2}.\n\nFrom the numerical results we find that the frequency of the oscillations is independent of the bandwidth of the leads, while the transient energy current with larger bandwidth $W$ decays faster than that with smaller $W$. This oscillatory behavior can be understood analytically.
For a QD under an upward pulse of bias, within the WBL the transient energy current can be expressed in terms of the spectral function $A(\\epsilon, t)$ as,\n\\begin{eqnarray}\\label{Iet_wbl}\nI^E_L(t) &=& -\\Gamma_L \\int \\frac{d\\epsilon}{2\\pi} \\epsilon \\Big\\{ 2f_L(\\epsilon) \\mathrm{Im} [A(\\epsilon,t)] \\nonumber\\\\\n&&+ \\sum_\\alpha \\Gamma_\\alpha f_\\alpha(\\epsilon) |A(\\epsilon,t)|^2 \\Big\\},\n\\end{eqnarray}\nwith\n\\begin{equation}\\label{AA}\n A(\\epsilon,t) = \\frac{\\epsilon-\\epsilon_0 + i\\Gamma\/2 + \\Delta_L e^{i(\\epsilon-\\epsilon_0 + \\Delta_L + i\\Gamma\/2)t}}{(\\epsilon-\\epsilon_0 + i\\Gamma\/2)(\\epsilon-\\epsilon_0 + \\Delta_L + i\\Gamma\/2)}.\n\\end{equation}\nClearly the oscillatory behavior of the transient energy current is due to the oscillatory term $\\exp[i(\\epsilon-\\epsilon_0 + \\Delta_L)t - (\\Gamma\/2)t]$ in the spectral function $A(\\epsilon, t)$. We note that the period of oscillation of the transient energy current depends only on the energy level $\\epsilon_0$ of the QD and the applied bias. In addition, the damping of this oscillation is governed by the lifetime of the resonant state of the QD; the decay rate is proportional to $\\Gamma$ in the WBL. In our numerical calculation, a finite bandwidth $W$ is used, which affects the lifetime of the resonant state. To find the influence of $W$ on the lifetime, we investigate the transmission coefficient versus energy and identify the resonant state (transmission peak) and its lifetime (inverse of the peak width). From the inset of Fig.~\\ref{fig2}(d), we find that the transmission peak, which is mediated by the resonant state of the single QD system, is broadened by increasing the bandwidth $W$ in the self-energy. This shows that the lifetime of the resonant state decreases with increasing bandwidth and explains why the transient energy current decays faster for the system with larger bandwidth [see Fig.~\\ref{fig2}(a)].
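The oscillation and damping of $A(\epsilon,t)$ in Eq.~(\ref{AA}) can be checked directly; a minimal numpy sketch using the single-QD parameters of the text ($\Gamma=1$ units, $\epsilon_0=5\Gamma$, $\Delta_L=10\Gamma$; the probe energy $\epsilon=2$ is an arbitrary illustrative choice):

```python
import numpy as np

# Spectral function A(eps, t) of Eq. (AA) for an upward bias pulse in the WBL.
def A(eps, t, eps0=5.0, delta_L=10.0, gamma=1.0):
    z1 = eps - eps0 + 1j * gamma / 2
    z2 = eps - eps0 + delta_L + 1j * gamma / 2
    return (z1 + delta_L * np.exp(1j * z2 * t)) / (z1 * z2)

# The oscillatory term carries frequency (eps - eps0 + delta_L) and decays as
# exp(-gamma*t/2), so A(eps, t) relaxes to the static value 1/z2 at long times.
eps = 2.0
A_long = A(eps, 30.0)
A_static = 1.0 / (eps - 5.0 + 10.0 + 0.5j)
```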
From the transmission coefficient shown in the inset of Fig.~\\ref{fig2}(d), it is clear that a larger $W$ corresponds to a larger dc charge or energy current, which is consistent with the dc limit of our transient energy current.\n\n\\begin{figure}\n \\includegraphics[width=3.25in]{Figure3.eps}\\\\\n \\caption{The logarithmic plot of the maximum amplitude of the normalized transient energy cumulants $C_n(t)\/C_1(t)$ versus $n$ at short times for different system parameters: (a) different $\\epsilon_0$ with $W = 50\\Gamma$ and (b) different bandwidths $W$ with $\\epsilon_0 = 5\\Gamma$.}\n \\label{figs}\n\\end{figure}\n\nFrom the discussion in Section II, we see that there may exist a universal behavior of $C_n$ at short times. Here we provide numerical evidence to show that this is indeed the case. We denote by $M_n$ the maximum amplitude of the normalized transient energy cumulant $C_n(t)\/C_1(t)$. In Fig.~\\ref{figs}, we show the logarithmic plot of $M_n$ versus $n$ for different system parameters. We see that both $\\ln(M_{2k})$ and $\\ln(M_{2k+1})$ depend linearly on $k$ with the same slope $\\kappa$ but different intercepts. Varying system parameters such as $W$ and $\\epsilon_0$ changes the intercepts while the slope remains unchanged. Hence we have $M_{2k} = a_1 e^{\\kappa k}$ and $M_{2k+1} = a_2 e^{\\kappa k}$, where $a_1$ and $a_2$ are constants. The universal slope $\\kappa$ is found to be close to 3.\n\n\\begin{figure}\n \\includegraphics[width=3.25in]{Figure4.eps}\\\\\n \\caption{(a) Transient energy current of the left lead for a single QD system with $W = 50\\Gamma$ which is initially unoccupied (black solid line), half occupied (red solid line), and fully occupied (blue solid line). The dark yellow solid line represents $I^E_{L,in}(t)$ in Eq.~(\\ref{oo}) for the fully occupied case.
(b) Transient energy current of the left lead for a double QD system with $v = 2\\Gamma$ which is initially unoccupied (black solid line), fully occupied only in $\\epsilon_1$ (red solid line) or $\\epsilon_2$ (green solid line), and fully occupied in both energy levels (blue solid line). The dark yellow and orange solid lines represent $I^E_{L,in}(t)$ for the cases where only $\\epsilon_1$ or only $\\epsilon_2$ is fully occupied, respectively.}\n \\label{figo}\n\\end{figure}\n\nIn order to study the effect of the initial occupation number of the single QD on the transient energy current, the time-dependent energy currents calculated from the time derivative of the 1st cumulant with $W = 50\\Gamma$ for different initial occupation numbers are plotted in Fig.~\\ref{figo}(a). To understand this behavior, we note that in the transient regime the lesser Green's function can be expressed as \\cite{zwm,leizhang},\n\\begin{eqnarray}\\label{lgt}\nG^<(t,t') &=& G^r(t,0)g^<(0,0)G^a(0,t') \\nonumber\\\\\n&& + \\int_0^t\\int_0^t d\\tau_1 d\\tau_2 G^r(t,\\tau_1) \\Sigma^<(\\tau_1, \\tau_2) G^a(\\tau_2,t'). \\nonumber\\\\\n\\end{eqnarray}\nSubstituting this expression into Eq.~(\\ref{1st}), we find that the transient energy current consists of two terms, $I^E_L(t) = I^E_{L, un}(t) + I^E_{L,in}(t)$, where $I^E_{L, un}(t)$ is the transient energy current for a system which is initially unoccupied, while the transient energy current due to the initial occupation is\n\\begin{equation}\\label{oo}\nI^E_{L,in}(t) = 2\\mathrm{Re} \\int dt' \\mathrm{Tr} \\big[G^r(t,0)g^<(0,0)G^a(0,t') \\breve{\\Sigma}_L^a(t',t) \\big].\n\\end{equation}\nIn Fig.~\\ref{figo}(a), we plot $I^E_{L,in}(t)$ for the single QD system which is initially fully occupied [defined as $I^E_{L0}(t)$], namely, $g^<(0,0) = i$. Therefore, the transient energy current for a single-dot system with initial occupation $\\alpha$ is $I^E_L(t) = I^E_{L, un}(t) + \\alpha I^E_{L0}(t)$.
We have checked that the three curves (black, red, and blue lines) in Fig.~\\ref{figo}(a) indeed satisfy this relation.\n\n\nAs a second example, we consider a double QD system with the Hamiltonian $\\left(\n \\begin{array}{cc}\n \\epsilon_1 & -v \\\\\n -v^* & \\epsilon_2 \\\\\n \\end{array}\n \\right)$.\nWe set the energy levels of the first and second QD to $\\epsilon_1 = 4\\Gamma$ and $\\epsilon_2 = 6\\Gamma$, which couple to the left and right lead, respectively. The bandwidth of the leads and the bias are set to $W = 10\\Gamma$ and $\\Delta_L = 10\\Gamma$, respectively. The bias is applied to the isolated leads at $t=-\\infty$ and the couplings between the leads and QDs are switched on at $t=0$. For the double QD system, we assume that the level $\\epsilon_1$ is initially occupied, with the lesser Green's function given in Eq.~(\\ref{gl}), while $\\epsilon_2$ is initially unoccupied so that its occupation number (proportional to its lesser Green's function) is 0.\n\n\\begin{figure}\n \\includegraphics[width=3.25in]{Figure5.eps}\\\\\n \\caption{(a) 1st, (b) 2nd, (c) 3rd, and (d) 4th cumulants of transferred energy with different coupling constants $v$ in the left lead for a double QD system.}\n \\label{fig3}\n\\end{figure}\n\nFigure~\\ref{fig3} presents the 1st to 4th cumulants of transferred energy counted in the left lead for different coupling constants $v = 1\\Gamma$, $2\\Gamma$, $3\\Gamma$, and $5\\Gamma$ between the two QDs. Generally speaking, as time increases, all calculated cumulants increase with oscillations, except for the 3rd cumulant with $v = 5 \\Gamma$, which starts to decrease slowly when $t$ exceeds 4.5. The oscillations of the 3rd and 4th cumulants with a large interdot coupling constant $v = 5\\Gamma$ decay much more slowly than those of the other cases.
Similar to the cumulants for the single QD system, the long-time limit of the different cumulants for the double QD system is a linear function of time, as shown in Eq.~(\\ref{cgflt}). It is also found that in the long-time limit, the $n$th cumulant with coupling constant $v = 3\\Gamma$ is the largest while that with $v = \\Gamma$ is the smallest among the coupling constants considered, for all $n$. We also note that the behavior of the cumulants of transferred energy for the double QD system resembles that of the cumulants of transferred charge reported in Ref.~\\onlinecite{T2}.\n\n\\begin{figure}\n \\includegraphics[width=3.25in]{Figure6.eps}\\\\\n \\caption{Time derivative of the (a) 1st, (b) 2nd, (c) 3rd, and (d) 4th cumulants of transferred energy with different coupling constants $v$ in the left lead for a double QD system. Inset: transmission coefficients of the double QD system with different coupling constants.}\n \\label{fig4}\n\\end{figure}\n\nThe time derivatives of the cumulants for the double QD system are presented in Fig.~\\ref{fig4}. The time derivatives of all cumulants show oscillations whose amplitudes decrease with increasing time and whose frequencies are approximately proportional to the coupling constant, especially for large coupling constants. To understand this oscillatory behavior, we note that in the resonance regime the electron oscillates between the two QDs, and for given energy levels and coupling constant the oscillation frequency is set by the Rabi frequency defined as,\n\\begin{equation}\\label{rabi}\n \\omega = \\sqrt{(\\epsilon_1 - \\epsilon_2)^2 + 4|v|^2},\n\\end{equation}\nwhich is the difference between the two eigenvalues of the Hamiltonian of the double QD system. In our case, the period of oscillation of the transient energy current is $T = \\frac{2\\pi}{\\omega} = \\frac{\\pi}{\\sqrt{1+ v^2}}$.
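That the Rabi frequency of Eq.~(\ref{rabi}) equals the eigenvalue splitting of the $2\times 2$ double-QD Hamiltonian is a one-line check; a minimal sketch with the parameters used above ($\epsilon_1=4\Gamma$, $\epsilon_2=6\Gamma$, $v=2\Gamma$ in $\Gamma=1$ units):

```python
import numpy as np

# Rabi frequency of Eq. (rabi) versus the eigenvalue splitting of the
# double-QD Hamiltonian [[eps1, -v], [-v, eps2]].
eps1, eps2, v = 4.0, 6.0, 2.0
H = np.array([[eps1, -v], [-v, eps2]])
ev = np.linalg.eigvalsh(H)                         # ascending eigenvalues
omega = np.sqrt((eps1 - eps2) ** 2 + 4 * v ** 2)   # Rabi frequency
T = 2 * np.pi / omega                              # oscillation period pi/sqrt(1+v^2) here
```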
Therefore, a higher frequency is obtained in the time derivatives of the cumulants plotted in Fig.~\\ref{fig4} for a system with a larger coupling constant. Specifically, for the transient energy current, namely, the time derivative of the 1st cumulant, it is found that for the systems with coupling constants $v = 1\\Gamma$, $2\\Gamma$, and $3\\Gamma$, a larger energy current is obtained in the long-time limit for a larger coupling constant, since the transmission peaks become higher and wider as the coupling constant increases, as shown in the inset of Fig.~\\ref{fig4}(d). However, when the coupling constant is further increased to $5 \\Gamma$, the transmission peaks start to shift out of the bias window $[0,10\\Gamma]$, which dramatically reduces the energy current in the long-time limit. For higher-order cumulants, we observe similar behaviors, which can be understood using the arguments discussed above: (1) the larger the interdot coupling, the larger the oscillation frequency; (2) in the long-time limit, the value of the cumulants increases with $v$ as long as $v<4.5\\Gamma$; (3) for $v>4.5\\Gamma$, the value of the cumulants is the smallest in the long-time limit.\n\nMoreover, the transient energy currents calculated from the time derivative of the 1st cumulant of the double QD with $v = 2\\Gamma$ for different initial occupation conditions are plotted in Fig.~\\ref{figo}(b). Similar to the case of the single QD, we also calculate the contribution to the transient current from the initial occupation via Eq.~(\\ref{oo}) for the cases where only $\\epsilon_1$ [$I^E_{L1}(t)$] or only $\\epsilon_2$ [$I^E_{L2}(t)$] is fully occupied, as shown in Fig.~\\ref{figo}(b).
Therefore, for a double QD system in which the occupation numbers of $\\epsilon_1$ and $\\epsilon_2$ are $\\alpha$ and $\\beta$, respectively, the transient current can be calculated simply as $I^E_L(t) = I^E_{L, un}(t) + \\alpha I^E_{L1}(t) + \\beta I^E_{L2}(t)$.\n\n\\section{Conclusion}\\label{sec4}\nWe have investigated the FCS of transferred energy in the transient regime. A two-time measurement scheme was used to derive the generating function of the FCS of transferred energy in the transient regime using the Keldysh non-equilibrium Green's function. Our formalism was then applied to both single and double QD systems to study the 1st to 4th order cumulants of transferred energy in the transient regime. Oscillations are observed in the transient energy current for both single and double QD systems. At short times, a universal scaling was found for the maximum amplitude of the normalized cumulants of the energy current for the single QD system. We find that the frequency of oscillation of the transient energy current is independent of the bandwidth of the self-energy for the single QD system, while for the double QD system the frequency is proportional to the coupling constant between the two QDs for large coupling constants.\n\n\\begin{acknowledgments}\nThis work was financially supported by the Research Grant Council (Grant No. HKU 705212P), the University Grant Council (Contract No. AoE\/P-04\/08) of the Government of HKSAR, and NSF-China under Grant No. 11374246.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}