diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgrfz" "b/data_all_eng_slimpj/shuffled/split2/finalzzgrfz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgrfz" @@ -0,0 +1,5 @@ +{"text":"\\section{ Introduction}\n\nThis paper continues our study of correlation functions\nin lattice integrable models \\cite{BJMST1,BJMST2,BJMST3,BJMST4,\nBJMST}.\nConsider the infinite XXZ spin chain with the Hamiltonian\n\\begin{eqnarray}\nH_{\\rm XXZ}=\\textstyle{\\frac{1}{2}}\\sum\\limits_{k=-\\infty}^{\\infty}\n\\left( \n\\sigma_{k}^1\\sigma_{k+1}^1+\n\\sigma_{k}^2\\sigma_{k+1}^2+\n\\Delta\\sigma_{k}^3\\sigma_{k+1}^3\n\\right), \n\\label{eq:XXZ}\n\\end{eqnarray}\nwhere $\\sigma^a \\, (a=1,2,3)$ are \nthe Pauli matrices and\n\\begin{eqnarray*}\n\\Delta=\\cos\\pi\\nu\n\\end{eqnarray*}\nis a real parameter. \nWe use the usual notation\n$$\nq=e^{\\pi i \\nu}\n\\,.\n$$\nIn our previous work \\cite{BJMST}, \nwe obtained an algebraic representation for\ngeneral correlation functions of the XXZ model. \nHere we generalize this result to the situation when a disorder\noperator is present.\nIn the course we find a new interesting structure behind the model. \nWe consider only the massless regime $|\\Delta|<1$, $0<\\nu<1$, \nsince it is more important for physics because of its relation to\nconformal field theory (CFT) discussed below.\nExplanation about the massive regime $\\Delta>1$ will be given\nelsewhere.\n\nLet us introduce \n$$S(k)=\\textstyle{\\frac 1 2} \\sum\\limits_{j=-\\infty}^k\\sigma ^3_j\\,.$$\nDenote by $|\\text{vac}\\rangle$ the ground state of\nthe Hamiltonian, and let $\\alpha$ be a parameter. \nWe consider the normalized vacuum expectation values:\n\\begin{align}\n\\frac{\\langle\\text{vac}|q^{2\\alpha S(0)}\n\\mathcal{O}|\\text{vac}\\rangle}{\\langle\\text{vac}|q^{2\\alpha S(0)}\n|\\text{vac}\\rangle}\\label{exp}\n\\end{align}\nwhere $\\mathcal{O}$ is a local operator.\n\nLocality of $\\mathcal{O}$ implies that the operator\n$q^{2\\alpha S(0)}\\mathcal{O}$ stabilizes: there exist integers $k, l$ such that \nfor all $j>l$ (resp. $j< k$) this operator acts on the \n$j$-th lattice site as $1$ (resp. $q^{\\alpha\\sigma ^3}$).\nIf $k$ (resp. $l$) is the maximal (resp. minimal) integer with this property,\n$l-k+1$ will be called the length of the operator $q^{2\\alpha S(0)}\\mathcal{O}$. \nThe very formulation of the problem implies that we are\ninterested only in local operators $\\mathcal{O}$ of total spin $0$\n(otherwise the correlation function vanishes). \nNevertheless, for the sake of\nconvenience we introduce the spaces $\\mathcal{W}\n_{\\alpha,s}$ of operators $q^{2\\alpha S(0)}\\mathcal{O}$ of\nspin $s$:\n$$\n\\[S\\ ,\\ q^{2\\alpha S(0)}\\mathcal{O}\\]=s\\ q^{2\\alpha S(0)}\\mathcal{O},\n\\quad S=S(\\infty)\\,.\n$$\nAlso we set\n$$\n\\mathcal{W}_{\\alpha}=\\bigoplus\\limits _{s=-\\infty}^{\\infty}\n\\mathcal{W}_{\\alpha,s}\\,.\n$$\n\n\nThe leading long distance asymptotics of the XXZ spin chain is \ndescribed by CFT with $c=1$: \nthat of free bosons $\\phi,\\bar{\\phi}$ \nwith compactification radius $\\beta=\\sqrt{1-\\nu}$. \n{}For an extensive discussion about the XXZ model \nas an irrelevant perturbation of CFT, \nwe refer the reader to \\cite{luk}. 
\nThe space $\\mathcal{W}_{\\alpha,s}$ corresponds to the space\nof descendants of the operator \n$$\ne^{\\frac i 2\\(\\alpha (\\beta ^{-1}-\\beta )(\\phi +\\bar{\\phi})+s\\beta (\\phi-\n\\bar{\\phi})\\)}\\,.\n$$\nSimilarly to the conformal case \\cite{BLZ1,BLZ2,BLZ3}, \nintroduction of the disorder parameter $\\alpha$ \nregularizes the problem, and allows to write\nmuch nicer formulae than in the case $\\alpha =0$ \n\\footnote{\nThe formulae are written initially for $|q^{\\alpha}|<1$\nand continued analytically in $\\alpha$, but $\\alpha =0$ is\none of singular points where l'H\\^opital's rule should be applied.}.\nAnother similarity is that it is very\nconvenient to consider, as an intermediate object which\ndoes not enter the final formulae, the following space:\n$$\n\\mathcal{W}_\n{[\\alpha]}=\\bigoplus\\limits _{k=-\\infty}^{\\infty}\\mathcal{W}_{\\alpha+k}\\ .\n$$\n\nIn this paper we \nshall introduce two \nanti-commuting families of operators $\\mathbf{b} (\\zeta )$ and\n$\\mathbf{c} (\\zeta)$ acting on $\\mathcal{W}_{[\\alpha]}$:\n\\begin{align}\n\\[\\mathbf{b} (\\zeta_1),\\mathbf{b} (\\zeta_2)\\]_+=\\[\\mathbf{b} (\\zeta_1),\\mathbf{c} (\\zeta_2)\\]_+=\\[\\mathbf{c} (\\zeta_1),\\mathbf{c} (\\zeta_2)\\]_+=0\\ .\\nonumber\n\\end{align}\nThe operators $\\mathbf{b} (\\zeta )$ and $\\mathbf{c} (\\zeta )$ have the following block\nstructure:\n$$\n\\mathbf{b} (\\zeta):\\ \\mathcal{W}_{\\alpha+k,s}\\to\n\\mathcal{W}_{\\alpha+k+1,s-1}, \n\\quad \\mathbf{c} (\\zeta ):\\ \\mathcal{W}_{\\alpha+k,s}\\to \n\\mathcal{W}_{\\alpha+k-1,s+1}\n\\ .\n$$\nHence the operator \n$\\mathbf{b} (\\zeta_1)\\mathbf{c} (\\zeta _2)$ acts from $\\mathcal{W}_{\\alpha,0}$ to itself.\n\n\nThe operators $\\mathbf{b} (\\zeta )$, $\\mathbf{c} (\\zeta )$ \nare formal series in $(\\zeta -1)^{-1}$.\nWhen applied to an operator $q^{2\\alpha S(0)}\\mathcal{O}$ of length $L$, \nthe singularity is a pole of order $L$, in other words, the series terminates at\n$(\\zeta -1)^{-L}$.\nThe action of $\\mathbf{b} (\\zeta)$, $\\mathbf{c} (\\zeta)$ produces\noperators of the same or smaller length. \nThe coefficients of $\\mathbf{b} (\\zeta)$, $\\mathbf{c} (\\zeta)$ \ngive rise to an action of the Grassmann algebra with $2L$\ngenerators. 
In particular \n$$\n\\mathbf{b}(\\zeta_1)\\cdots\\mathbf{b}(\\zeta_{L+1})\n\\(q^{2\\alpha S(0)}\\mathcal{O}\\)=0,\n\\quad \n\\mathbf{c}(\\zeta_1)\\cdots\\mathbf{c}(\\zeta_{L+1})\n\\(q^{2\\alpha S(0)}\\mathcal{O}\\)=0\n\\,.\n$$\n\nWe introduce also the linear functional\non $\\text{End}\\(\\mathbb{C}^2\\)$:\n\\begin{align}\n\\text{tr}^{\\alpha}(x)=\n\\frac 1 {q^{\\frac {\\alpha }2}+q^{-\\frac {\\alpha }2}}\n\\text{tr} \\(q^{-\\frac 1 2 \\alpha \\sigma ^3}x\\)\n\\label{tral}\n\\end{align}\nwith the obvious properties:\n$$\n\\text{tr}^{\\alpha}(1)=\\text{tr}^{\\alpha}(q^{\\alpha \\sigma ^3})=1\\,.\n$$\nThis gives rise to a\nlinear functional on $\\mathcal{W}_{\\alpha}$ \n$$\n\\mathbf{tr}^{\\alpha}(X)=\\cdots{\\rm tr} _1^{\\alpha}\\ {\\rm tr} _2^{\\alpha}\\ \n{\\rm tr} _3^{\\alpha}\\cdots (X)\\,.\n$$\n\nOur main result is:\n\\begin{align}\n\\frac{\\langle\\text{vac}|q^{2\\alpha S(0)}\n\\mathcal{O}|\\text{vac}\\rangle}{\\langle\\text{vac}|q^{2\\alpha S(0)}\n|\\text{vac}\\rangle}\\ =\n\\mathbf{tr}^{\\alpha}\\(e^{\\mbox{\\scriptsize\\boldmath{$\\Omega$}}}\\(q^{2\\alpha S(0)}\\mathcal{O}\\)\\)\\,,\n\\label{main}\n\\end{align}\nwhere\\footnote {In \\cite{BJMST} the operator $\\mbox{\\boldmath$\\Omega $}$ was denoted by $\\Omega ^*$.}\nthe operator $\\mbox{\\boldmath$\\Omega $}$ acts on $\\mathcal{W}_{[\\alpha ]}$:\n\\begin{align}\n\\mbox{\\boldmath$\\Omega $}=-\n{\\rm res} _{\\zeta_1=1}{\\rm res} _{\\zeta_2=1}\n\\(\\mbox{\\boldmath$\\omega $} \\(\\zeta_1\/\\zeta_2\\)\\mathbf{b} (\\zeta _1)\\mathbf{c} (\\zeta _2)\n\\frac{d\\zeta _1}{\\zeta _1}\\frac{d\\zeta _2}{\\zeta _2}\\)\\,,\\nonumber\n\\end{align}\nand $\\mbox{\\boldmath$\\omega $} (\\zeta)$ is a scalar operator on each $\\mathcal{W}_{\\alpha}$, \n\\begin{align}\n&\n\\left. \\mbox{\\boldmath$\\omega $} (\\zeta)\\right|_{\\mathcal{W}_{\\alpha}}=\\omega (\\zeta ,\\alpha)1_{\\mathcal{W}_{\\alpha }},\\label{omega}\n\\end{align}\nthe scalar being\n\\begin{align}\n&\\omega (\\zeta ,\\alpha)\\nonumber\\\\\n&=\\frac {4(q\\zeta)^{\\alpha }} {\\(1+q^{\\alpha}\\)^2}\n\\(\n\\frac {q^{-\\alpha}}{1-q^{-2}\\zeta ^2}-\\frac {q^{\\alpha}}{1-q^2\\zeta ^2}\\)+\n\\int\\limits _{-i\\infty -0}^{i\\infty -0}\\zeta ^{u+\\alpha}\n\\frac {\\sin \\frac {\\pi} 2\\(u-\\nu(u+\\alpha)\\)}{\\sin \\frac {\\pi} 2 u\n\\cos \\frac {\\pi\\nu} 2\\(u+\\alpha\\)}du\\ \\nonumber.\n\\end{align}\n{}For any local operator of length $L$, \nthe trace is effectively taken over $\\(\\mathbb{C}^2\\)^{\\otimes L}$.\n\nComments are in order about the meaning of \\eqref{main}.\nIn \\cite{JM,JM2}, \nin the setting of inhomogeneous chains, \nit was conjectured \nthat the thermodynamic limit of the ground state \naverages in the finite XXZ chain \nare certain specific solutions of the reduced qKZ \n(rqKZ) equation given by multiple integrals. \nSubsequently these integral formulas were also derived \nfrom the viewpoint of algebraic Bethe Ansatz \\cite{maillet}. \nWe take these formulas as the definition of \nthe left hand side of \\eqref{main}.\nFollowing our previous works \\cite{BJMST2,BJMST}, \nwe present here another formula for solutions of rqKZ equations. \nThe right hand side of \\eqref{main}\nis its specialization to the homogeneous case.\nWe have no doubt that these two solutions coincide\n\\footnote{It is known to be the case in the massive regime, \nsee \\cite{BJMST2}. \nWe also confirm the coincidence at the free fermion point, \nsee section \\ref{XX}. \n}. \nSince a mathematical proof is lacking at the moment,\nwe propose \\eqref{main} as conjecture. 
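\n\nAs a simple consistency check of \\eqref{main}, take $\\mathcal{O}=1$, that is, consider the operator $q^{2\\alpha S(0)}$ itself. As will be shown in Section \\ref{bc} (see \\eqref{zerobc}), the operators $\\mathbf{b} (\\zeta)$ and $\\mathbf{c} (\\zeta)$ annihilate $q^{2\\alpha S(k)}$ for any $k$, hence so does $\\mbox{\\boldmath$\\Omega $}$, and\n\\begin{align}\ne^{\\mbox{\\scriptsize\\boldmath{$\\Omega$}}}\\(q^{2\\alpha S(0)}\\)=q^{2\\alpha S(0)}\\,.\\nonumber\n\\end{align}\nSince $q^{2\\alpha S(0)}$ acts as $q^{\\alpha \\sigma ^3}$ on every site $j\\le 0$ and as $1$ on every site $j>0$, the properties of \\eqref{tral} give $\\mathbf{tr}^{\\alpha}\\(q^{2\\alpha S(0)}\\)=1$, in agreement with the left hand side of \\eqref{main}, which equals $1$ trivially.\n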
\nThe function $\\omega (\\zeta ,\\alpha)$ and $\\mathbf{tr}^{\\alpha}$ develop singularities\nat $\\alpha = \\pm 1\/\\nu$. \nIn view of this, we presume that the formula holds true throughout the range \n$|{\\rm Re}\\, \\alpha|<1\/\\nu$. \n\nIt will be shown that the operators $\\mathbf{b} (\\zeta)$, $\\mathbf{c} (\\zeta)$ commute\nwith the adjoint action of the shift operator $U$ and \nof local integrals of motion $I_p$ on $\\mathcal{W}_{[\\alpha]}$. \nSince $q^{-\\alpha S}$ commutes with $U,I_p$, \none immediately concludes that the vacuum expectation values of \n$U\\(q^{2\\alpha S(0)}\\mathcal{O}\\)U^{-1}\n -q^{2\\alpha S(0)}\\mathcal{O}$ and\n$\\[ I_p,q^{2\\alpha S(0)}\\mathcal{O}\\]$ \ngiven by (\\ref{main}) vanish, as it should be.\n\nIn our opinion the appearance of anti-commuting operators $\\mathbf{b} (\\zeta)$\nand $\\mathbf{c} (\\zeta)$ is quite remarkable. \nIn the next section we explain how these operators are\nconstructed using the $q$-oscillators. \nWe explain their relation to the \nJordan-Wigner fermions in the XX case in Section \\ref{XX}. \nIn Appendix we briefly discuss the generalization\nof our previous formulae \\cite{BJMST} to the case when\nthe disorder operator is present. \n\n{}For the sake of simplicity \nwe consider the homogeneous chain only. \nWe give brief explanations about the inhomogeneous case when needed.\nWe do not give complete proofs, but just sketch the derivation\nof the main statements. \nWe tried to make this paper as brief as possible,\nleaving the details to a separate publication. \n\n\n\\section{Operators $\\mathbf{b} (\\zeta )$ and $\\mathbf{c} (\\zeta )$}\\label{bc}\n\n{}First we prepare our notation for the $L$-operators. \nConsider the quantum affine algebra $U_q(\\widehat{\\mathfrak{sl}}_2)$. \nThe universal $R$-matrix of this algebra belongs to the tensor product \n$\\mathfrak{b}_+\\otimes \\mathfrak{b}_-$ of its two Borel subalgebras. \nBy an $L$-operator we mean its image under an algebra map\n$\\mathfrak{b}_+\\otimes \\mathfrak{b}_-\\to N_1\\otimes N_2$, \nwhere $N_1,N_2$ are some algebras. \nIn this paper we always take $N_2$ to be the \nalgebra $M=Mat(2,{\\mathbb C})$ of $2\\times 2$ matrices. \nAs for $N_1$ we make several choices:\n$U_q(\\mathfrak{sl}_2)$, $M$, \nthe $q$-oscillator algebra \n$Osc$ (see below) or $Osc\\otimes M^{\\pm}$, \nwhere $M^{\\pm}\\subset M$ are the subalgebras\nof upper and lower triangular matrices.\n{}For economy of symbols, we use the same letter $L$ \nto designate these various $L$-operators. \nWe always put indices, \nindicating to which tensor product of algebras they belong. \nWe use $j,k,\\cdots$ as labels for the lattice sites, \nand $a,b,\\cdots$ as those for the `auxiliary' two-dimensional space. \nAccordingly we write the matrix algebra as $M_j$ or $M_a$. \nCapital letters $A,B,\\cdots$ will indicate \nthe $q$-oscillator algebra $Osc$. \n{}Finally, for $Osc\\otimes M^{\\pm}$ we use pairs of \nindices such as $\\{A,a\\}$.\n\nThe first case of $L$-operators is when $N_1=U_q(\\mathfrak{sl}_2)$:\n\\begin{align}\nL_j(\\zeta)=\\begin{pmatrix}\\zeta q^{\\frac{H+1}2}-\\zeta ^{-1}q^{-\\frac{H+1}2}\n&(q-q^{-1})Fq^{\\frac{H-1}2}\\cr (q-q^{-1})q^{-\\frac{H-1}2}E & \\zeta q^{-\\frac{H-1}2}-\\zeta ^{-1}q^{\\frac{H-1}2}\n\\end{pmatrix}_j\\in U_q(\\mathfrak{sl}_2)\\otimes\nM_j.\\label{Lop}\n\\end{align}\nHere $E,F,q^{\\pm H\/2}$ are the standard generators of $U_q(\\mathfrak{sl}_2)$. \nThe suffix $j$ in the right hand side \nmeans that it is considered as a $2\\times 2$ matrix in $M_j$. 
\nThis is an exceptional case when we do not put any index for\nthe first (`auxiliary') tensor factor; we shall never use several copies \nof $U_q(\\mathfrak{sl}_2)$.\nMapping further $U_q(\\mathfrak{sl}_2)$ to $M_a$, \nwe obtain the second $L$-operator \n$$\nL_{a,j}(\\zeta)\\in M_a\\otimes M_j\\,,\n$$ \nwhich actually \ncoincides with the standard $4\\times 4$ $R$-matrix. \n\nThe next case is due originally\nto Bazhanov, Lukyanov and Zamolodchikov \\cite{BLZ3}.\nLet us consider the $q$-oscillators $a$, $a^*$ satisfying\n$$ \naa^*-q^2a^*a=1-q^2.\n$$\nIt is convenient to introduce one more element $q^D$ such that \n\\begin{align}\n&q^{D}a^*=a^*\nq^{D+1},\\qquad q^{D}a=aq^{D-1}\\,,\\nonumber\\\\\n&a^*a=1-q^{2D}, \\quad aa^*=1-q^{2D+2}\\,.\\nonumber\n\\end{align}\nDenote by $Osc$ the algebra generated by $a,a^*,q^{\\pm D}$ \nwith the above relations.\nWe consider the following two representations of $Osc$, \n\\begin{align}\nW^+=&\\bigoplus\\limits _{k=0}^\\infty \\mathbb{C}|k\\rangle,\n\\quad a^*|k-1\\rangle=|k\\rangle,\n\\quad\nD|k\\rangle=k|k\\rangle ,\\quad a|0\\rangle=0;\\nonumber\\\\\nW^-=&\\bigoplus\\limits _{k=-\\infty}^{-1} \\mathbb{C}|k\\rangle,\n\\quad a|k+1\\rangle=|k\\rangle,\n\\quad\nD|k\\rangle=k|k\\rangle ,\\quad a^*|-1\\rangle=0\n\\,.\n\\nonumber\n\\end{align}\nIn the root of unity case, if $r$ is the smallest positive \ninteger such that $q^{2r}=1$, \nwe consider the $r$-dimensional quotient of $W^{\\pm}$ \ngenerated by $|0\\rangle$ or $|-1\\rangle$.\n\n\nThe $L$-operator associated with $Osc$ is given by\n\\begin{align}\n&L^+_{A,j}(\\zeta)=i\\zeta ^{-\\frac 1 2}q^{-\\frac 1 4}\n\\begin{pmatrix}1& -\\zeta a_A^*\\\\[5pt]\n -\\zeta a_A & 1- \\zeta ^2q^{2D_A+2} \\end{pmatrix}_j\n\\begin{pmatrix}q^{D_A} &0\\\\[5pt] 0 & q^{-D_A} \\end{pmatrix}_j\n\\in \nOsc _A\\otimes M_j\\nonumber\\,.\n\\end{align}\nThis $L$-operator satisfies the crossing symmetry relation:\n$$\nL^+_{A,j}(\\zeta)^{-1}=\\frac 1 {\\zeta-\\zeta ^{-1}}\\overline{L}_{A,j}^+(\\zeta)\\,,\n$$\nwhere we have set\n$$\n\\overline{L}_{A,j}^+(\\zeta)=\\sigma ^2 _j L_{A,j}^+(\\zeta q^{-1}) ^{t _j}\\sigma^2_j\n\\,,\n$$\nand $t_j$ stands for the transposition in $M_j$.\nWe use also another $L$-operator\n$$\nL^-_{A,j}(\\zeta)=\\sigma ^1_jL^+_{A,j}(\\zeta)\\sigma ^1_j\\ .\n$$\n\nConsider the product $L ^+_{A,j}(\\zeta )L_{a,j}(\\zeta )$. \nIt is well known that this product can be \nbrought to a triangular form, giving rise \nin particular to Baxter's `$TQ$-equation' for transfer matrices. 
\nNamely, introducing\n$$G ^+_{A,a}=\n\\begin{pmatrix} q^{-D_A} &0\\\\ 0& q^{D_A} \\end{pmatrix}_a\n\\begin{pmatrix} 1& a^*_A\\\\ 0 &1 \\end{pmatrix}_a,\n\\quad G^-_{A,a}=\\sigma ^1_aG^+_{A,a}\\sigma ^1_a\\,,\n$$\none easily finds that \n\\begin{align}\n&L^+ _{\\{A,a\\},j}(\\zeta )\n=\\(G^+_{A,a}\\)^{-1} L^+ _{A,j}(\\zeta)L_{a,j}(\\zeta )G^+_{A,a}\n\\label{fusionright+}\\\\ &=\n\\begin{pmatrix}\n(\\zeta q-\\zeta ^{-1}q^{-1})\nL^+_{A,j}(\\zeta q^{-1})q^{-\\frac{\\sigma^3_j} 2} &0\\\\\n(q-q^{-1})L^+_{A,j}(\\zeta q)\\sigma _j^+q^{-2D_A+\\frac 1 2}\n& (\\zeta -\\zeta ^{-1})\nL^+_{A,j}(\\zeta q )q^{\\frac{\\sigma^3_j} 2}\n\\end{pmatrix}_a\\in\\(Osc_A\\otimes M^-_a\\)\n\\otimes M_j \\,.\n\\nonumber\n\\end{align}\n\n{}For the inverse matrix one has:\n\\begin{align}\nL ^+_{\\{A,a\\},j}(\\zeta )^{-1}&=\\frac 1{(\\zeta -\\zeta ^{-1})\n(\\zeta q -\\zeta ^{-1}q^{-1})(\\zeta q^{-1}-\\zeta ^{-1}q)}\n\\label{fusioninv}\\\\ &\\times\n\\begin{pmatrix}\n(\\zeta -\\zeta ^{-1})\nq^{\\frac{\\sigma^3_j} 2}\\ \\overline{L }^+_{A,j}(\\zeta q^{-1})& 0 \\\\\n -(q-q^{-1}) \\sigma _j^+\\ \\overline{L }^+_{A,j}(\\zeta q)q^{-2D_A+\\frac 1 2}\n& (\\zeta q^{-1}-\\zeta ^{-1}q)q^{-\\frac{\\sigma^3_j} 2}\n\\ \\overline{ L }^+_{A,j}(\\zeta q)\n\\end{pmatrix}_a\\,.\n\\nonumber\n\\end{align}\nAgain we shall use another $L$-operator:\n$$ L ^-_{\\{A,a\\},j}(\\zeta )=\\sigma ^1_a\\sigma ^1_jL ^+_{\\{A,a\\},j}(\\zeta )\n\\sigma ^1_a\\sigma ^1_j \\in\\(Osc_A\\otimes M^+_a\\)\n\\otimes M_j\n\\,.\n$$\nSome information will be needed about $R$-matrices which intertwine these \n$L$-operators.\n{}First, consider the Yang-Baxter equation:\n\\begin{align}\n&R_\n{A,B}(\\zeta_1\/\\zeta _2)L^{\\pm}_{A,j}(\\zeta _1)\nL^{\\pm}_{B,j}(\\zeta _2)=L_{B,j}^{\\pm}(\\zeta _2)L^{\\pm}_{A,j}(\\zeta _1)R_{A,B}(\\zeta_1\/\\zeta _2)\n\\,.\n\\label{YB+}\n\\end{align}\nThe $R$-matrix appearing in \\eqref{YB+} is given by \n$$\nR_{A,B}(\\zeta)=P_{A,B}h(\\zeta, u_{A,B})\\zeta ^{D_A+D_B}\\,,\n$$\nwhere $P_{A,B}$ is the permutation, \n$$\nu_{A,B}=a_A^*q^{-2D_A} a_B,\n$$\nand the function $h(\\zeta, u)$ is given by\n$$\nh(\\zeta,u)=\\sum\\limits _{n=0}^{\\infty}\\ \n\\(-uq^{-1}\\)^n\n\\prod_{j=1}^n\\frac{q^{j-1}\\zeta^{-1}-q^{-j+1}\\zeta}{q^j-q^{-j}}\n\\,.\n$$\nWhen $q$ is not a root of unity, \nthe series for $R_{A,B}(\\zeta)$ \nis well defined because\nthe action of $u_{A,B}$ on $W^{\\pm}\\otimes W^{\\pm}$ \nis locally nilpotent. \nOtherwise we replace the right hand side \nby the sum $\\sum_{n=0}^{r-1}$, \nif $r$ is the smallest positive \ninteger such that $q^{2r}=1$.\n\nSecond, consider the Yang-Baxter equation for the $L$-operators\n$L^+_{\\{A,a\\},j}$:\n\\begin{align}\n&R ^+_{\\{A,a\\},\\{B,b\\}}(\\zeta_1\/\\zeta _2)L^+_{\\{A,a\\},j}(\\zeta _1)L^+_{\\{B,b\\},j}(\\zeta _2)=\nL^+_{\\{B,b\\},j}(\\zeta _2)L^+_{\\{A,a\\},j}(\\zeta _1)\nR ^+_{\\{A,a\\},\\{B,b\\}}(\\zeta_1\/\\zeta _2)\\,.\n\\label{YB4}\n\\end{align}\nThe corresponding $R$-matrix has the form \n\\begin{align}\n&R ^+_{\\{A,a\\},\\{B,b\\}}(\\zeta )=\n\\begin{pmatrix}\n\\mathcal{R}_{1,1}(\\zeta)&0 & 0 &0\\\\\n\\mathcal{R} _{2,1}(\\zeta) &\\mathcal{R}_{2,2}(\\zeta)&0 & 0\\\\\n\\mathcal{R} _{3,1}(\\zeta) &0 &\\mathcal{R}_{3,3}(\\zeta)&0\\\\\n\\mathcal{R} _{4,1}(\\zeta) &\\mathcal{R} _{4,2}(\\zeta) &\\mathcal{R}_{4,3}(\\zeta) &\\mathcal{R} _{4,4}(\\zeta)\n\\end{pmatrix}_{a,b}\n\\,.\n\\label{R+ab}\n\\end{align}\nThe entries $\\mathcal{R} _{i,j}(\\zeta )$ can be found by a direct\ncalculation. 
In this paper we shall need only two of them:\n\\begin{align}\n&\n\\mathcal{R}_{1,1}(\\zeta)=q^{-D_A}R_{A,B}(\\zeta)q^{D_B},\n\\quad \\mathcal{R}_{4,4}(\\zeta)=-\\zeta ^2q^{D_A}R_{A,B}(\\zeta)q^{-D_B}\\,.\n\\nonumber\n\\end{align}\nUp to scalar coefficients depending on $\\zeta $, \nthese operators\ncan be guessed immediately, but the coefficient, especially\nthe sign, in $\\mathcal{R} _{4,4}(\\zeta)$ is important for us. \nAs usual we define:\n$$\nR ^-_{\\{A,a\\},\\{B,b\\}}(\\zeta)\n=\\sigma ^1_a\\sigma ^1_bR^+_{\\{A,a\\}\\{B,b\\}}(\\zeta)\n\\sigma ^1_a\\sigma ^1_b\n\\,.\n$$\n\n\nNow we have everything necessary for the definition of\nthe operators $\\mathbf{b} (\\zeta )$ and $\\mathbf{c} (\\zeta )$. \n{}For two integers $k\\le l$ we set \n$$\nM_{[k,l]}=M_{k}\\otimes\\cdots \\otimes M_{l}\\,.\n$$\nThis is the algebra of linear operators on the `quantum space'\non the interval $[k,l]$. \nOur main object is the monodromy matrix\n\\begin{align}\nT^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta )=L^{\\pm}_{\\{A,a\\},l}(\\zeta )\\cdots L^{\\pm}_{\\{A,a\\},k}(\\zeta )\n\\in Osc_A\\otimes M^{\\mp}_a\\otimes M_{[k,l]}\\,.\n\\label{monodromy0}\n\\end{align}\nDefine further an element\n$\\mathbb{T} ^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta)\\in \nOsc_A\\otimes M^{\\mp}_a\\otimes \\text{End}(M_{[k,l]})$\nby setting\n\\begin{align}\n&\\mathbb{T} ^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta)(X_{[k,l]})= \nT^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta )\\cdot \n(1_{A,a}\\otimes X_{[k,l]})\\cdot\nT^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta )^{-1}\n\\, ,\n\\label{monodromy}\n\\end{align}\nwhere $1_{A,a}=1_A\\otimes 1_a$ and $X_{[k,l]}\\in M_{[k,l]}$. \nTo illustrate the definition, we have, for \n$x_{\\{A,a\\}}\\in Osc _A\\otimes M_a^{\\mp}$ and $X_{[k,l]}\\in\nM_{[k,l]}$, an equality in \n$Osc_A\\otimes M^{\\mp}_a\\otimes M_{[k,l]}$\n\\begin{align}\n&\\(\\mathbb{T} ^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta )\\cdot x_{\\{A,a\\}}\\otimes id\\)\\(X_{[k,l]}\\)\\nonumber\\\\\n&=T^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta )\\cdot\\(1_{\\{A,a\\}}\\otimes X_{[k,l]}\\)\\cdot\nT^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta )^{-1}\\cdot \n(x_{\\{A,a\\}}\\otimes 1_{[k,l]})\\,,\\nonumber\n\\end{align}\nwhere \n$id$ is the identity operator in $\\text{End}(M_{[k,l]})$.\n\nWe define $\\mathbb{T}^{\\pm}_{A,[k,l]}(\\zeta)\\in Osc_A\\otimes \\text{End}(M_{[k,l]})$ and $\\mathbb{T}_{a,[k,l]}(\\zeta)\\in M_a\\otimes \\text{End}(M_{[k,l]})$ in a similar manner.\n\n\nIn the following we shall use only $\\mathbb{T} ^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta)^{-1}$. \nWe understand certain inconvenience in using the inverse\noperators, but it has for us a \nhistorical reason:\nonce we define the transfer-matrix as in \\cite{JM}, the order\nof multipliers is fixed everywhere.\n\nWe have the Yang-Baxter equation \n\\begin{align}\n&\\mathbb{T} ^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta_1)^{-1}\\mathbb{T} ^{\\pm}_{\\{B,b\\},[k,l]}(\\zeta_2)^{-1}\nR^{\\pm}_{\\{A,a\\},\\{B,b\\}}(\\zeta_1\/\\zeta _2)\n\\label{rightYB}\n\\\\&\\quad=\nR^{\\pm}_{\\{A,a\\},\\{B,b\\}}(\\zeta_1\/\\zeta _2)\n\\mathbb{T} ^{\\pm}_{\\{B,b\\},[k,l]}(\\zeta_2)^{-1}\n\\mathbb{T} ^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta_1)^{-1}\\,, \n\\nonumber\n\\end{align}\nwhere the identity is in $Osc_A\\otimes M^{\\mp}_a\\otimes\nOsc_B\\otimes M^{\\mp}_b\\otimes \\text{End}(M_{[k,l]})$. \n\nOf particular importance are the $Q$-operators acting on local operators. 
\nThey are defined as \n\\begin{align}\n& \\mathbf{Q} ^+_{[k,l]}(\\zeta, \\alpha )=\n{\\rm tr} ^+_A\\(q^{2\\alpha D_A}\\ \\mathbb{T}^{+}_{A,[k,l]}(\\zeta)^{-1}\\)(1-q^{2(\\alpha-\\mathbf{S})}), \n\\label{Qop}\\\\\n & \n\\mathbf{Q}^-_{[k,l]}(\\zeta, \\alpha )={\\rm tr} ^-_A\\(q^{-2\\alpha (D_A+1)}\\ \\mathbb{T} ^{-}_{A,[k,l]}(\\zeta)\n ^{-1}\\)q^{2\\mathbf{S} }\n (1-q^{2(\\alpha-\\mathbf{S})})\\,,\n\\nonumber\n\\end{align}\nwhere $\\mathbf {S} $ stands for the adjoint action of the total spin\noperator \n$$ \n\\mathbf {S} (X_{[k,l]})=\\bigl[\nS(l)-S(k-1)\\ ,\\ X_{[k,l]}\\bigr]\\, ,\n\\qquad\nX_{[k,l]}\\in M_{[k,l]}\\,.\n$$\nThe trace functionals ${\\rm tr} ^{+}_A\\bigl(q^{2\\alpha D_A}Y_A\\bigr)$ and \n${\\rm tr} ^{-}_A\\bigl(q^{-2\\alpha (D_A+1)}Y_A\\bigr)$ for $Y_A\\in Osc_A$ are defined\nas analytic \ncontinuations with respect to $\\alpha$ of traces over\n$W^+$ and $W^-$ from the region $|q^{\\alpha}|<1$. \nThe $Q$-operators \\eqref{Qop} are mutually commuting families \nof operators. They are\nso normalized that $\\mathbf{Q}^{\\pm}_{[k,l]}(0,\\alpha)=1$.\n\n\nRegarding $\\mathbb{T} ^{\\pm}_{\\{A,a\\},[k,l]}(\\zeta)^{-1}$ \nas a matrix in $M^{\\mp }_a$, let us write its entries as\n\\begin{align}\n&\\mathbb{T} ^{+}_{\\{A,a\\},[k,l]}(\\zeta)^{-1}=\\begin{pmatrix}\n\\mathbb{A} ^+_{A,[k,l]}(\\zeta)&0\\\\\n\\mathbb{C} ^+_{A,[k,l]}(\\zeta)&\\mathbb{D} ^+_{A,[k,l]}(\\zeta)\n\\end{pmatrix}_a,\\nonumber\\\\\n&\\mathbb{T} ^{-}_{\\{A,a\\},[k,l]}(\\zeta)^{-1}=\\begin{pmatrix}\n\\mathbb{A} ^-_{A,[k,l]}(\\zeta)&\\mathbb{B} ^-_{A,[k,l]}(\\zeta)\\\\\n0&\\mathbb{D} ^-_{A,[k,l]}(\\zeta)\n\\end{pmatrix}_a\\, ,\n\\nonumber\n\\end{align}\nwhere $\\mathbb{A} ^+_{A,[k,l]}(\\zeta)$, etc., are elements of\n$Osc_A\\otimes \\text{End}(M_{[k,l]})$.\nIt follows from the definition that $\\mathbb{T} ^{\\pm\\ }_{\\{A,a\\},[k,l]}(\\zeta)^{-1}$ \nhave poles of order $l-k+1$ at the points \n$\\zeta^2=1, q^{\\pm 2}$. \nHowever, looking at the formulae (\\ref{fusionright+})--\n(\\ref{fusioninv}), one realizes that \nat the pole $\\zeta ^2=1$ only $\\mathbb{C} ^+_{A,[k,l]}(\\zeta)$ and\n$\\mathbb{B} ^-_{A,[k,l]}(\\zeta)$ are singular. \nThis motivates, at least partly, the following definition:\n\\begin{align}\n&\\mathbf{c} _{[k,l]}(\\zeta,\\alpha)=q^{\\alpha -\\mathbf{S} }\\(1-q^{2(\\alpha-\\mathbf{S})}\\)\n\\text{sing}_{\\,\\zeta=1}\\left[\n\\zeta ^{\\alpha -\\mathbf{S}}{\\rm tr} ^+_A\\(q^{2\\alpha D_A}\\ \\mathbb{C} ^+_{A,[k,l]}(\\zeta)\n\\)\n\\right],\\label{defc1}\\\\\n&\\mathbf{b} _{[k,l]}(\\zeta,\\alpha)={q^{2\\mathbf{S}}} \\text{\\,sing}_{\\,\\zeta=1}\\left[\n\\zeta ^{-\\alpha +\\mathbf{S}}{\\rm tr} ^-_A\\(q^{-2\\alpha(D_A+1)}\\ \\mathbb{B} ^-_{A,[k,l]}(\\zeta)\n\\)\n\\right]\\,.\n\\label{defb1}\n\\end{align}\nHere and after, $\\text{\\,sing}_{\\zeta=1}[f(\\zeta )]$ signifies the singular part of \n$f(\\zeta)$ at $\\zeta =1$:\n\\begin{align}\n\\text{\\,sing}_{\\zeta=1}\\[f(\\zeta)\\]=\n\\frac 1 {2\\pi i}\\int \\frac { f(\\xi)} {\\zeta-\\xi}d\\xi\n\\,,\n\\label{int}\n\\end{align}\nwhere the integral is taken over a simple closed curve containing \n$\\xi=1$ inside, while \n$\\xi =\\zeta $ and other singular points of $f(\\xi)$ are outside. 
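\nTo make the meaning of \\eqref{int} explicit, suppose that $f(\\zeta)$ has a pole of order $L$ at $\\zeta =1$, i.e. $f(\\zeta)=\\sum\\limits _{n=-L}^{\\infty}a_n(\\zeta -1)^n$. Evaluating the contour integral by the residue at $\\xi =1$ (the point $\\xi =\\zeta$ lies outside the contour), one finds\n\\begin{align}\n\\text{\\,sing}_{\\zeta=1}\\[f(\\zeta)\\]=\\sum\\limits _{n=-L}^{-1}a_n(\\zeta -1)^n\\,,\\nonumber\n\\end{align}\nso $\\text{\\,sing}_{\\zeta=1}$ extracts the principal part of $f(\\zeta)$ at $\\zeta =1$. This agrees with the statement in the Introduction that $\\mathbf{b} (\\zeta)$, $\\mathbf{c} (\\zeta)$, applied to an operator of length $L$, are formal series in $(\\zeta -1)^{-1}$ terminating at $(\\zeta -1)^{-L}$.\n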
\nWe note that \n$$\n[\\mathbf{S},\\mathbf{c} _{[k,l]}(\\zeta,\\alpha)]=\\mathbf{c} _{[k,l]}(\\zeta,\\alpha),\n\\quad \n[\\mathbf{S},\\mathbf{b} _{[k,l]}(\\zeta,\\alpha)]=-\\mathbf{b} _{[k,l]}(\\zeta,\\alpha)\\,.\n$$\n\nThere are several important properties of operators $\\mathbf{c} _{[k,l]}(\\zeta,\\alpha)$ and\n$\\mathbf{b} _{[k,l]}(\\zeta,\\alpha)$ which we formulate as Lemmas.\n\\begin{lem}\\label{lem1}\nThe operators $\\mathbf{c} _{[k,l]}(\\zeta,\\alpha)$ and\n$\\mathbf{b} _{[k,l]}(\\zeta,\\alpha)$ satisfy the following anti-commutation relations:\n\\begin{align}\n&\\mathbf{c} _{[k,l]}(\\zeta_1,\\alpha-1)\\mathbf{c} _{[k,l]}(\\zeta _2,\\alpha)=-\\mathbf{c} _{[k,l]}(\\zeta _2,\\alpha-1)\\mathbf{c} _{[k,l]}(\\zeta_1,\\alpha)\n\\,,\n\\label{commcc}\\\\\n&\\mathbf{b} _{[k,l]}(\\zeta_1,\\alpha+1)\\mathbf{b} _{[k,l]}(\\zeta _2,\\alpha)=-\\mathbf{b} _{[k,l]}(\\zeta _2,\\alpha+1)\\mathbf{b} _{[k,l]}(\\zeta_1,\\alpha)\\,.\n\\label{commbb}\n\\end{align}\n\\end{lem}\n\n\n\\begin{proof}\nConsider the Yang-Baxter equations (\\ref{rightYB}) for $+$. Using the $R$-matrix (\\ref{R+ab}) one finds:\n\\begin{align}\n&\\zeta_1^{-2}\\mathbb{C}^+_{A,[k,l]}(\\zeta _1)\\mathbb{C}^+_{B,[k,l]}(\\zeta _2)\n\\label{CC}\\\\\n&+{\\zeta _2} ^{-2}q^{D_A}R_{A,B}(\\zeta_1\/\\zeta_2)q^{-D_B}\\cdot\n\\mathbb{C}^+_{B,[k,l]}(\\zeta _2)\\mathbb{C} ^+_{A,[k,l]}(\\zeta _1)\\cdot\nq^{-D_B}R_{A,B}(\\zeta_1\/\\zeta_2)^{-1}q^{D_A}=\\cdots\\nonumber\n\\end{align}\nwhere $\\cdots $ stands for a sum of \nterms which contain at least one\n$\\mathbb{A}^+_{[k,l]}(\\zeta _i)$ or $\\mathbb{D} ^+_{[k,l]}(\\zeta _i)$, \nand hence have vanishing singular parts at $\\zeta _i=1$.\nMultiplying (\\ref{CC}) by $q^{2(\\alpha-1)D_A+2\\alpha D_B}$, \ntaking the trace and the singular part, \none immediately gets (\\ref{commcc}).\nSimilarly one proves (\\ref{commbb}) using (\\ref{rightYB}) for $-$.\n\\end{proof}\n\n\\begin{lem}\\label{lem2}\nWe have the following reduction relations:\n\\begin{align}\n&\\mathbf{c} _{[k,l]}(\\zeta ,\\alpha)\\(X_{[k,l-1]}\\cdot 1_l\\)=\\mathbf{c} _{[k,l-1]}(\\zeta ,\\alpha )\\(X_{[k,l-1]}\\)\\cdot 1_l\\,,\n\\label{redc+}\\\\\n&\\mathbf{b} _{[k,l]}(\\zeta ,\\alpha)\\(X_{[k,l-1]}\\cdot 1_l\\)=\\mathbf{b} _{[k,l-1]}(\\zeta ,\\alpha )\\(X_{[k,l-1]}\\)\\cdot 1_l\\,,\n\\label{redb+}\\\\\n&\\mathbf{c} _{[k,l]}(\\zeta ,\\alpha)\\(q^{\\alpha\\sigma ^3_k}\\cdot X_{[k+1,l]}\\)=\nq^{(\\alpha-1)\\sigma ^3_k}\\cdot\n\\mathbf{c} _{[k+1,l]}(\\zeta ,\\alpha)\\(X_{[k+1,l]}\\)\n\\,,\n\\label{redc-}\\\\\n&\\mathbf{b} _{[k,l]}(\\zeta ,\\alpha)\\(q^{\\alpha\\sigma ^3_k}\\cdot X_{[k+1,l]}\\)=\nq^{(\\alpha+1)\\sigma ^3_k}\\cdot\n\\mathbf{b} _{[k+1,l]}(\\zeta ,\\alpha)\\(X_{[k+1,l]}\\)\\label{redb-}\\,.\n\\end{align}\n\\end{lem}\n\n\\begin{proof}\nThe equations (\\ref{redc+}), (\\ref{redb+}) are\ntrivial consequences of the definition. \nIn contrast, eqs. (\\ref{redc-}), (\\ref{redb-})\nare far from being obvious. \n\nConsider the first of them. By definition we have:\n\\begin{align}\n&\\frac 1 {q^{\\alpha-\\mathbf{S} }\\(1-q^{2(\\alpha-\\mathbf{S})}\\)}\n\\mathbf{c} _{[k,l]}(\\zeta ,\\alpha)\\(q^{\\alpha\\sigma ^3_k}\\cdot X_{[k+1,l]}\\)\\nonumber\n\\\\\n&=\n\\text{\\,sing}_{\\zeta=1}\\left[\n{\\rm tr} ^+_A\\(q^{2\\alpha D_A}\\ \\mathbb{C}_{A,[k,l]}^+(\\zeta)\\(q^{\\alpha\\sigma ^3_k}\\cdot X_{[k+1,l]}\\)\n\\)\\zeta^{\\alpha -s-1}\n\\right]\\,,\n\\nonumber\n\\end{align}\nwhere $s$ is the spin of $X_{[k+1,l]}$.\n\nLet us simplify the trace. 
\nWe will use the crossing symmetry\n\\begin{align}\n&\\mathcal{P} ^- _{j,\\bar j}\nL ^+_{A,j}(\\zeta q ^{-1})L ^+_{A, \\bar j}(\\zeta )=\n(\\zeta -\\zeta^{-1})\\mathcal{P} ^- _{j,\\bar j}\\,,\n\\label{cross1}\\\\\n&\\mathcal{P} ^- _{j,\\bar j}\nL ^+_{\\{A,a\\},j}(\\zeta q ^{-1})L ^+_{\\{A,a\\}, \\bar j}(\\zeta )=\n(\\zeta q -\\zeta^{-1}q^{-1})(\\zeta -\\zeta^{-1})(\\zeta q^{-1}-\\zeta^{-1}q)\\mathcal{P} ^- _{j,\\bar j}\n\\,,\n\\label{cross2}\n\\end{align}\nwhere $\\mathcal{P}^-_{j,\\bar j}$ is the anti-symmetrizer. \nIntroducing consecutively some additional two-dimensional spaces, we have \n\\begin{align}\n&{\\rm tr} ^+_A\\(q^{2\\alpha D_A}\\ \\mathbb{C}_{A,[k,l]}^+(\\zeta)\\(q^{\\alpha\\sigma ^3_k}\\cdot X_{[k+1,l]}\\)\\)\n\\label{trtrtr}\n\\\\\n&=\n{\\rm tr} _a{\\rm tr} ^+_A\\(\\sigma _a^+L_{\\{A,a\\},k}^+(\\zeta)^{-1}\\cdot\n\\mathbb{T}_{\\{A,a\\},[k+1,l]}^+(\\zeta)^{-1}\\(X_{[k+1,l]}\\)\\cdot\nq^{\\alpha\\sigma ^3_k} L_{\\{A,a\\},k}^+(\\zeta)q^{2\\alpha D_A}\\)\\nonumber\\\\&=\n\\frac {1}{(\\zeta -\\zeta ^{-1})\n(\\zeta q -\\zeta ^{-1}q^{-1})(\\zeta q^{-1}-\\zeta ^{-1}q)}\\nonumber\\\\&\\times\n{\\rm tr} _{\\bar k}{\\rm tr} _a{\\rm tr} ^+_A\\(\\sigma _a^+L_{\\{A,a\\},\\bar k}^+(\\zeta q^{-1})\n\\cdot\n2\\mathcal{P}^-_{k,\\bar k}\\cdot\n\\mathbb{T}_{\\{A,a\\},[k+1,l]}^+(\\zeta)^{-1}\n\\(X_{[k+1,l]}\\)\\cdot\nq^{\\alpha\\sigma ^3_k} \n\\cdot\nL_{\\{A,a\\},k}^+(\\zeta)q^{2\\alpha D_A}\\)\\nonumber\\,.\n\\end{align}\nNow use\n$$\nq^{\\alpha \\sigma ^3_k}L ^+_{A,k}(\\zeta)q^{2\\alpha D_A}\n=q^{2\\alpha D_A}L^+_{A,k}(\\zeta)q^{\\alpha \\sigma ^3_k}\n$$ \nand the cyclicity of the trace to simplify (\\ref{trtrtr}) further:\n\\begin{align}\n&{\\rm tr} ^+_A\\(q^{2\\alpha D_A}\\ \\mathbb{C}_{A,[k,l]}^+(\\zeta)\\(q^{\\alpha\\sigma ^3_k}\n\\cdot X_{[k+1,l]}\\)\\)\n\\label{exp1}\n\\\\\n&=\n\\frac {1}{(\\zeta -\\zeta ^{-1})\n(\\zeta q -\\zeta ^{-1}q^{-1})(\\zeta q^{-1}-\\zeta ^{-1}q)}\n\\nonumber\n\\\\\n&\\times\n{\\rm tr} _{\\bar k}{\\rm tr} _a{\\rm tr} ^+_A\\(\n\\mathbb{T}_{\\{A,a\\},[k+1,l]}^+(\\zeta)^{-1}\\(X_{[k+1,l]}\\)q^{2\\alpha D_A}\\mathcal{L}(\\zeta )\nq^{\\alpha\\sigma ^3_k} \\)\n\\,.\\nonumber\n\\end{align}\nIt is easy to see that\n\\begin{align}\n\\mathcal{L}(\\zeta )&=2\\mathcal{P}^-_{k,\\bar k}L_{\\{A,a\\},k}^+(\\zeta)\\sigma _a^+L_{\\{A,a\\},\\bar k}^+(\\zeta q^{-1})\n\\label{exp2}\\\\\n&=\n\\begin{pmatrix}\n\\zeta q-\\zeta^{-1}q^{-1} & 0\\\\\n0 &(q-q^{-1})q^{-2D_A-\\frac 1 2}\n\\end{pmatrix}_a\n2\\mathcal{P}^-_{k,\\bar k}L_{A,k}^+(\\zeta q^{-1})L_{A,\\bar k}^+(\\zeta )\n\\nonumber\\\\ \n&\\times\n\\begin{pmatrix}\nq^{-\\frac {\\sigma ^3_k} 2} \\sigma ^+_{\\bar k} & \nq^{\\frac {\\sigma ^3_{\\bar k}-\\sigma ^3_k} 2}\\\\[5pt]\n\\sigma ^+ _k \\sigma ^+_{\\bar k} & \\sigma ^+ _kq^{\\frac {\\sigma ^3_{\\bar k}} 2}\n\\end{pmatrix}_a\n\\begin{pmatrix}\n(q-q^{-1})q^{-2D_A+\\frac 1 2} & 0\\\\ 0 &\\zeta q^{-1}-\\zeta ^{-1}q\n\\end{pmatrix}_a\\,.\n\\nonumber\n\\end{align}\nwhere we used\n$$L^+_{A,j}(\\zeta q)\\sigma _j^+q^{-2D_A+\\frac 1 2}=\nq^{-2D_A-\\frac 1 2}L^+_{A,j}(\\zeta q^{-1})\\sigma _j^+\\,.$$\nIn view of \\eqref{cross1}, $\\mathcal{L}(\\zeta )$ is divisible by $\\zeta-\\zeta^{-1}$, and \nin (\\ref{exp1}) we can drop the diagonal elements of \n$\\mathbb{T}_{\\{A,a\\},[k+1,l]}^+(\\zeta)^{-1}$, \narriving immediately at (\\ref{redc-}).\n\nThe proof of (\\ref{redb-}) is similar.\n\\end{proof}\n\n\\noindent\n{\\bf Remark.} \nThe above construction carries over to inhomogeneous chains \nwhere an independent spectral parameter $\\xi_j$ is attached to each site $j$. 
\nThe operators $\\mathbf{c} _{[k,l]}(\\zeta;\\xi _k,\\cdots ,\\xi _l)$, \n$\\mathbf{b} _{[k,l]}(\\zeta;\\xi _k,\\cdots ,\\xi _l)$ \nare defined via the above construction with two modifications: \n\\begin{enumerate}\n\\item In the definition (\\ref{monodromy0}), each \n$L ^{\\pm}_{\\{A,a \\},j}\\({\\zeta}\\)$ is replaced by \n$L ^{\\pm}_{\\{A,a \\},j}\\({\\zeta}\/{\\xi_j}\\)$.\n\\item The singular part is understood as an \nintegral (\\ref{int}) around the points $\\xi_k,\\cdots,\\xi_l$. \n\\end{enumerate}\nLemma \\ref{lem1} and Lemma \\ref{lem2} remain valid. \n\\hfill{\\qed}\n\\medskip\n\nLemma \\ref{lem2} allows us to define \nuniversal operators $\\mathbf{b} (\\zeta,\\alpha)$, $\\mathbf{c} (\\zeta ,\\alpha)$: \n\\begin{definition}\\label{defbc}\n{}For any operator\n$q^{2\\alpha S(0)}\\mathcal{O}\\in \\mathcal{W}_{\\alpha}$, \nlet \n$\\(q^{2\\alpha S(0)}\\mathcal{O} \\)_{[k,l]}$ be its restriction \nto the finite interval $[k,l]$ of the lattice. \nWe define \n\\begin{align}\n&\\mathbf{b} (\\zeta,\\alpha):\\ \\mathcal{W}_{\\alpha,s}\\to \n\\mathcal{W}_{\\alpha+1,s-1}\n\\ ,\\label{actb}\\\\\n&\\mathbf{c} (\\zeta ,\\alpha ):\\ \\mathcal{W}_{\\alpha,s}\\to \n\\mathcal{W}_{\\alpha-1,s+1}\n\\label{actc}\\,,\n\\end{align}\nby setting \n\\begin{align}\n&\\mathbf{b} (\\zeta,\\alpha)\\(q^{2\\alpha S(0)}\\mathcal{O}\\)\n=\\lim _{k\\to-\\infty,l\\to \\infty}\n\\mathbf{b} _{[k,l]}(\\zeta,\\alpha)\\(\\(q^{2\\alpha S(0)}\\mathcal{O}\\)_{[k,l]}\\)\\,,\n\\label{defb}\\\\\n&\\mathbf{c} (\\zeta,\\alpha)\\(q^{2\\alpha S(0)}\\mathcal{O}\\)\n=\\lim _{k\\to-\\infty,l\\to \\infty}\n\\mathbf{c} _{[k,l]}(\\zeta,\\alpha)\\(\\(q^{2\\alpha S(0)}\\mathcal{O}\\)_{[k,l]}\\)\\,.\n\\label{defc}\n\\end{align}\n\\end{definition}\nIt follows from Lemma \\ref{lem2} that for any particular operator\n$q^{2\\alpha S(0)}\\mathcal{O}$ the expressions under the limit \nin (\\ref{defb}), (\\ref{defc}) stabilize for sufficiently \nlarge interval $[k,l]$. \nHence the limit is well-defined. \nIn particular we have, for any $k$, \n\\begin{align}\n\\mathbf{b} (\\zeta,\\alpha)(q^{2\\alpha S(k)})=0, \\quad \\mathbf{c} (\\zeta,\\alpha)(q^{2\\alpha S(k)})=0\\,. \\label{zerobc}\n\\end{align}\n\nDenoting by $\\mathbf{b}(\\zeta)$ and $\\mathbf{c} (\\zeta)$ the operators acting on \nthe direct sum $\\mathcal{W}_{[\\alpha]}$ we have \nthe anti-commutativity\n\\begin{align}\n\\[\\mathbf{b} (\\zeta_1),\\mathbf{b} (\\zeta_2)\\]_+=\\[\\mathbf{c} (\\zeta_1),\\mathbf{c} (\\zeta_2)\\]_+=0\\ .\\nonumber\n\\end{align}\n\nIn Appendix, we give a brief summary of \nthe algebraic formula for the correlation functions in the \npresence of disorder. The result is expressed \nin terms of the operator\n\\begin{align}\n\\mbox{\\boldmath$\\Omega $}&= -\\text{res}_{\\zeta_1=1}\\text{res}_{\\zeta_2=1}\n \\(\\mathbf{X}(\\zeta_1,\\zeta _2)\n \\mbox{\\boldmath$\\omega $} (\\zeta_2\/\\zeta _1)\n \\frac {d\\zeta _1}{\\zeta _1} \\frac {d\\zeta _2}{\\zeta _2}\\)\\,,\n\\label{Omega2}\n\\end{align}\nwhere $\\left.\\mathbf{X} (\\zeta_1,\\zeta _2)\\right|_{\\mathcal{W}_{\\alpha}}=\\mathbf{X} (\\zeta_1,\\zeta _2,\\alpha)$, the operator $\\mathbf{X} (\\zeta_1,\\zeta _2,\\alpha)$ is given in either of the two formulas \n(\\ref{App0}), (\\ref{App1}), $\\mbox{\\boldmath$\\omega $} (\\zeta)$ is given by (\\ref{omega}). \nThe following result allows us to express $\\mbox{\\boldmath$\\Omega $}$ in terms of \n$\\mathbf{b} (\\zeta)$, $\\mathbf{c} (\\zeta )$. \nAt the same time, \nthe existence of two equivalent representations \nguarantees the anti-commutativity between the latter. 
\n\\begin{lem}\\label{lem3}\nThe operator $\\mathbf{X}(\\zeta _1,\\zeta _2)$ can be evaluated as follows:\n\\begin{align}\n\\left.\\mathbf{X} (\\zeta _1,\\zeta _2)\\right|_{\\mathcal{W}_{\\alpha}}\n=\\mathbf{b} (\\zeta _2,\\alpha-1)\\mathbf{c} (\\zeta _1,\\alpha)=-\\mathbf{c} (\\zeta_1,\\alpha +1)\\mathbf{b} (\\zeta _2,\\alpha)\\,.\n\\label{X=bc=cb}\n\\end{align}\n\\end{lem}\n\\begin{proof}\nConsider the formula (\\ref{App0}). We have:\n\\begin{align}\n{\\rm tr} _{a,b}&\\(\n B^0_{b,a}(\\zeta_2\/\\zeta_1)\\mathbb{T}_a(\\zeta_1)^{-1}\\mathbb{T}_b(\\zeta _2)^{-1}\\)\\mathbf{Q}^+(\\zeta_1,\\alpha+1)\\mathbf{Q}^-(\\zeta_2,\\alpha +1)\n\\nonumber\\\\\n&=\n {\\rm tr} _{a,b}{\\rm tr} _A^+{\\rm tr} _B^-\\(B^0_{b,a}(\\zeta_2\/\\zeta_1) \n \\mathbb{T}_a(\\zeta_1)^{-1}\\mathbb{T}_b(\\zeta _2)^{-1}\n \\mathbb{T}^{+}_A(\\zeta _1)^{-1}\\mathbb{T}^{-}_B(\\zeta _2)^{-1}\\right. \\nonumber\n\\\\&\\left.\\times q^{2(\\alpha+1)(D_A-D_B-1)}\\)(1-q^{2(\\alpha+1-\\mathbf{S})})^2\nq^{2\\mathbf{S}}\n\\,.\n\\nonumber\n \\end{align}\nWe move $\\mathbb{T}_b(\\zeta_2)^{-1}$ through $\\mathbb{T}^+_A(\\zeta _1)^{-1}$ using\nthe Yang-Baxter equation\n$$\nL ^+_{A,b}\\({\\zeta _1}\/{\\zeta _2}\\)\\mathbb{T}_b(\\zeta_2)^{-1}\\mathbb{T}^{+}_A(\\zeta _1)^{-1}\n=\\mathbb{T}^{+}_A(\\zeta _1)^{-1}\\mathbb{T}_b(\\zeta_2)^{-1}L^+_{A,b}\\({\\zeta _1}\/{\\zeta _2}\\)\\,.\n$$\nNow $\\mathbb{T}_a(\\zeta_1)^{-1} \\mathbb{T}^{+}_A(\\zeta _1)^{-1}$ and \n$\\mathbb{T}_b(\\zeta_2)^{-1} \\mathbb{T}^{-}_B(\\zeta _2)^{-1}$ come together. \nConjugating by $G ^+_{A,a}$, $G^-_{B,b}$, \nwe can combine them into \nthe monodromy matrices \n$\\mathbb{T} ^{+}_{\\{A,a\\}}(\\zeta_1)^{-1}$, $\\mathbb{T}^{-}_{\\{B,b\\}}(\\zeta _2)^{-1}$.\nIn these monodromy matrices we drop\ndiagonal elements \nbecause they have no singularities at $\\zeta _i=1$. \nThen by a straightforward calculation we come to\n\\begin{align}\n&{\\rm tr} _{a,b}\\(\n B^0_{b,a}(\\zeta_2\/\\zeta_1)\n \\mathbb{T}_a(\\zeta_1)^{-1}\\mathbb{T}_b(\\zeta _2)^{-1}\n\\)\\mathbf{Q}^+(\\zeta_1,\\alpha+1)\\mathbf{Q}^-(\\zeta_2,\\alpha +1)\n\\label{x1}\\\\\n&\n\\simeq\n-\n {\\rm tr} _A^+{\\rm tr} _B^-\\(\\mathbb{C}^+_{A}(\\zeta _1)\\mathbb{B} ^-_B(\\zeta _2)q^{2(\\alpha+1) D_A-2\\alpha (D_B+1)-2}\\)(1-q^{2(\\alpha-\\mathbf{S}+1)})^2\nq^{2\\mathbf{S}}\n\\nonumber \n\\end{align}\nwhere $\\simeq$ means that the singular parts are identical. \nSimilarly we have:\n\\begin{align}\n&{\\rm tr} _{a,b}\\(\n B^1_{a,b}(\\zeta_1\/\\zeta_2)\\mathbb{T}_b(\\zeta _2)^{-1}\\mathbb{T}_a(\\zeta_1)^{-1}\\)\\mathbf{Q}^-(\\zeta_2,\\alpha -1)\\mathbf{Q}^+(\\zeta_1,\\alpha-1)\n\\label{x2} \\\\\n&\n\\simeq\n- {\\rm tr} _A^+{\\rm tr} _B^-\\(\\mathbb{B} ^-_B(\\zeta _2)\\mathbb{C}^+_{A}(\\zeta _1)\nq^{2\\alpha D_A-2(\\alpha-1)(D_B+1)}\\)\n(1-q^{2(\\alpha-\\mathbf{S}-1)})^2q^{2\\mathbf{S}}\n\\,.\n\\nonumber\n\\end{align}\nEq. (\\ref{X=bc=cb}) follows from (\\ref{x1}),\n(\\ref{x2}) and the definition of $\\mathbf{b} (\\zeta ,\\alpha)$, $\\mathbf{c} (\\zeta ,\\alpha)$.\n\\end{proof}\nThe main formula \\eqref{main} follows from \\eqref{Omega2},\n\\eqref{X=bc=cb} \nand \\eqref{sol}. 
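\n\nLet us spell out how \\eqref{X=bc=cb} implies the anti-commutativity of $\\mathbf{b} (\\zeta )$ and $\\mathbf{c} (\\zeta )$ stated in the Introduction. For $X\\in \\mathcal{W}_{\\alpha}$ one has, by the block structure \\eqref{actb}, \\eqref{actc},\n\\begin{align}\n\\mathbf{b} (\\zeta _2)\\mathbf{c} (\\zeta _1)\\(X\\)=\\mathbf{b} (\\zeta _2,\\alpha-1)\\mathbf{c} (\\zeta _1,\\alpha)\\(X\\),\\quad\n\\mathbf{c} (\\zeta _1)\\mathbf{b} (\\zeta _2)\\(X\\)=\\mathbf{c} (\\zeta _1,\\alpha+1)\\mathbf{b} (\\zeta _2,\\alpha)\\(X\\)\\,,\\nonumber\n\\end{align}\nso the two expressions in \\eqref{X=bc=cb} give $\\[\\mathbf{b} (\\zeta _2),\\mathbf{c} (\\zeta _1)\\]_+=0$ on $\\mathcal{W}_{[\\alpha]}$. Together with Lemma \\ref{lem1} this yields all the anti-commutation relations for $\\mathbf{b} (\\zeta )$, $\\mathbf{c} (\\zeta )$ listed in the Introduction.\n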
\n\nLet $U$ be the shift operator by one lattice unit, which acts \non local operators by adjoint:\n$$\nU\\sigma ^a_jU^{-1}=\\sigma ^a_{j+1}.\n$$\nThere is also an infinite set of local integrals of motion\nwhich commute with $U$ and among themselves.\nThe last important property of $\\mathbf{b} (\\zeta)$, $\\mathbf{c} (\\zeta)$ \nis their invariance:\n\\begin{lem}\\label{lem4}\nThe operators $\\mathbf{b} (\\zeta)$, $\\mathbf{c} (\\zeta)$ commute with the\nadjoint action of the shift operator $U$ and of the local integrals of motion.\n\\end{lem}\n\\begin{proof}\n{}For $U$ the statement of this lemma follows immediately from\nthe definition, essentially it is a consequence of Lemma \\ref{lem2}.\n\nThe local integrals of motion are of the form\n\\begin{align}\nI_p=\\sum\\limits _{j=-\\infty}^{\\infty} d_{j,p}\\,,\n\\label{plocal}\n\\end{align}\nwhere $d_{j,p}$ is an operator acting non-trivially on the sites \n$j,\\cdots, j+p$. We shall call operators of the type (\\ref{plocal}) \n$p$-local operators.\n\nLet us write the $4\\times 4$ $R$-matrix as \n$\\check{R}_{j,k}(\\xi)=P_{j,k}L_{j,k}(\\xi)$. We set \n$$ \nU_{[k,l]}(\\xi)=(q-q^{-1})^{k-l}\n\\check{R}_{l,l-1}(\\xi)\\cdots \\check{R}_{k+1,k}(\\xi)\\,.\n$$\n{}Following the remark after Lemma \\ref{lem2}, \nconsider $\\mathbf{c} _{[k,l]}$ with one inhomogeneity:\n$$\n\\mathbf{c} _{[k,l]}(\\zeta;\\xi, 1,\\cdots, 1)\\ \\text{and }\\ \\mathbf{c} _{[k,l]}(\\zeta;1,\\cdots, 1,\\xi)\\,.\n$$\nIt is clear from the definition that\n\\begin{align}\n&U_{[k,l]}(\\xi)\\cdot \n\\mathbf{c} _{[k,l]}(\\zeta;\\xi, 1,\\cdots, 1)\\(\\(q^{2\\alpha S(0)}\\mathcal{O}\\)_{[k,l]}\\)\\ \n\\cdot U_{[k,l]}(\\xi)^{-1}\n\\label{comm}\\\\\n&=\\mathbf{c} _{[k,l]}(\\zeta;1,\\cdots, 1,\\xi)\n\\(U_{[k,l]}(\\xi)\\cdot \\(q^{2\\alpha S(0)}\\mathcal{O}\\)_{[k,l]}\\cdot \nU_{[k,l]}(\\xi)^{-1}\\)\\,.\n\\nonumber \n\\end{align}\n \n \nLet $\\xi=1+\\epsilon$. Then \n$$\nU_{[k,l]}(\\xi)=\\exp\\(\\sum\\limits _{p=1}^{\\infty} \\epsilon^p I_{[k,l],p}\\)\\,.\n$$\nDue to the Campbell-Hausdorff formula, \nthe operators $I_{[k,l],p}$ are $p$-local. \n{}For finite $k,l$ these operators do not \ncommute because of some boundary terms, \nbut in the limit $k\\to-\\infty$, \n$l\\to\\infty$ they coincide with the local integrals of motion \n$I_p$ which are combined into the generating function:\n$$\nU(\\xi)=\\exp\\bigl(\\sum\\limits _{p=1}^{\\infty}\\epsilon ^p I_{p}\\bigr)\\,.\n$$\n \nIn the right hand side of (\\ref{comm}) we have the expression\n$$\nU_{[k,l]}(\\xi)\\cdot \n\\(q^{2\\alpha S(0)}\\mathcal{O}\\)_{[k,l]}\\cdot \nU_{[k,l]}(\\xi)^{-1}=\\sum\\limits \\epsilon ^p \n\\(q^{2\\alpha S(0)}\\mathcal{O}\\)^{(p)}_{[k,l]}\\,.\n$$\nHere the $p$-local operators $I_{[k,l],p}$ act by multiple adjoint.\nIt is clear that for every given degree $p$ we can \nfind a large enough interval $[k,l]$ in order that \n$$ \n\\(q^{2\\alpha S(0)}\\mathcal{O}\\)^{(p)}_{[k,l]}\n=\\(\\bigl(q^{2\\alpha S(0)}\\mathcal{O}\\bigr)^{(p)}\\)_{[k,l]}\\,,\n$$\nwhere \n$$\nU(\\xi)\\cdot q^{2\\alpha S(0)}\\mathcal{O}\\cdot U(\\xi)^{-1}=\n\\sum\\limits \\epsilon ^p \\(q^{2\\alpha S(0)}\\mathcal{O}\\)^{(p)}\\,.\n$$\nObviously\n$$\n\\text{length}\\(\\bigl(q^{2\\alpha S(0)}\\mathcal{O}\\bigr)^{(p)}\\)\\le \n\\text{length}\\(q^{2\\alpha S(0)}\\mathcal{O}\\)+2p\\,. 
\n$$ \n \nNow considering (\\ref{comm}) order by\norder in $\\epsilon$, choosing for \nevery order sufficiently large interval\n$[k,l]$ and using the inhomogeneous version of\nLemma \\ref{lem2} and the definition of $\\mathbf{c} (\\zeta)$, \nwe get:\n\\begin{align}\n&U(\\xi)\\cdot \n\\mathbf{c} (\\zeta)\\(q^{2\\alpha S(0)}\\mathcal{O}\\)\\cdot U(\\xi)^{-1}=\n\\mathbf{c} (\\zeta)\\(U(\\xi)\\cdot q^{2\\alpha S(0)}\\mathcal{O}\\cdot U(\\xi)^{-1}\\),\n\\label{comm1}\n\\end{align}\nwhich is understood as an equality of power series in $\\epsilon$.\n\\end{proof}\n\n\\section{Free fermion point}\\label{XX}\n\nConsider the point $\\nu =1\/2$, $q=i$. \nFor this coupling constant\nthe Hamiltonian turns into\n$$\nH_{XX}=\\sum\\limits _{j=-\\infty}^{\\infty}\\(\\sigma _j^+\\sigma _{j+1}^-+\n\\sigma _j^-\\sigma _{j+1}^+\\)\\,,\n$$\nand can be diagonalized by the Jordan-Wigner transformation:\n$$\n\\psi _k ^{\\pm}=\\sigma ^{\\pm}_ke^{\\mp\\pi i S(k-1)\n}\\,.\n$$\nThe space $\\mathcal{W}_{[\\alpha]}$ becomes a\ndirect sum of two components:\n$$\\mathcal{W}_{[\\alpha]}=\\mathcal{W}_{\\alpha}\\oplus \\mathcal{W}_{\\alpha +1}\\,.$$\nWe set \n$$y= e^{\\frac{\\pi i \\alpha} 2}\\, ,$$\nso that the space $\\mathcal{W}_{\\alpha}$ consists of operators of the form \n$\ny^{ 2 S(0)}\\mathcal{O}\n$.\nThere are two fermion\noperators acting in the space of states, so, there are \nfour of them\nacting on the space of operators by left and right multiplication.\nIt is convenient to introduce the following four operators:\n\\begin{align}\n&\\Psi _{k}^{\\pm}(X)=\\psi ^{\\pm}_kX-(-1)^{F(X)} X\\psi ^{\\pm}_k,\n\\label{PsiPhi}\\\\\n&\\Phi_{\\alpha,k}^{\\pm}(X)=\n\\frac 1 {1-y^{\\mp 2}}\n\\(\\psi ^{\\pm}_kX-y^{\\mp 2}(-1)^{F(X)} X\\psi ^{\\pm}_k\\)\\, .\n\\nonumber\n\\end{align}\nwhere $F(X)$ is the fermionic number of the operator $X$.\n\nWe have $\\Phi_{\\alpha+2,k}^{\\pm}=\\Phi_{\\alpha,k}^{\\pm}$. \nThese operators are natural for us because $\\Psi_{k}^{\\pm}$\nannihilate $1$ while $\\Phi_{\\alpha,k}^{\\pm}$ annihilate $y^{2 S}$ \n(recall that at plus or minus infinity \n$y^{ 2 S(0)}\\mathcal{O}$ stabilizes to $1$ or $y^{2 S}$). \nThe operators $\\Psi_{k}^{\\pm}$, $\\Phi_{\\alpha,k}^{\\pm}$ satisfy the \ncanonical anti-commutation relations:\n\\begin{align}\n&[\\Psi _{k}^{\\epsilon},\\Psi _{l}^{\\epsilon '}]_+=\n[\\Phi_{\\alpha,k}^{\\epsilon},\\Phi_{\\alpha,l}^{\\epsilon '}]_+=0,\\quad\n[\\Psi _{k}^{\\epsilon},\\Phi_{\\alpha,l}^{\\epsilon '}]_+\n=\\delta _{\\epsilon+\\epsilon ',0}\\delta _{k,l}\\,.\n\\label{comferm}\n\\end{align}\nIt is clear, however, that the operators\n$\\mathbf{b} (\\zeta,\\alpha)$, $\\mathbf{c} (\\zeta,\\alpha)$ cannot be constructed as linear \ncombinations of $\\Psi _{k}^{\\pm},\\ \\Phi_{\\alpha,k}^{\\pm}$. \nIndeed the operators\n$\\mathbf{b} (\\zeta,\\alpha)$, $\\mathbf{c} (\\zeta,\\alpha)$ are translationally invariant, in particular, they \nannihilate $y^{2 S(k)}$ for any $k$, see (\\ref{zerobc}).\nClearly this is impossible for any linear combination of \n$\\Psi _{k}^{\\pm},\\ \\Phi_{\\alpha,k}^{\\pm}$. \nOur plan in this section is as\nfollows. First, we find a compact expression for $\\mathbf{b} (\\zeta,\\alpha)$ and\n$\\mathbf{c} (\\zeta,\\alpha)$ in terms of $\\Psi _{k}^{\\pm},\\ \\Phi_{\\alpha,k}^{\\pm}$. 
\nThen we show that our formula gives the same result for the correlators \nas the one \nobtained by a straightforward calculation based on normal ordering.\n\nThe calculation of $\\mathbf{b} (\\zeta,\\alpha)$, $\\mathbf{c} (\\zeta ,\\alpha)$ at the free fermon point\nis summarized by\n\\begin{lem}\\label{lem5}\nAt the free fermion point, the operators $\\mathbf{b} (\\zeta,\\alpha)$ \nand $\\mathbf{c}(\\zeta,\\alpha)$ are given by \n\\begin{align}\n&\\mathbf{b} (\\zeta ,\\alpha)=\\frac{2i^{-\\mathbf{S}}}{1-(-1)^{\\mathbf{S}}y^{2}}\\ \n{\\rm sing}_{\\zeta=1}\n\\[\\zeta ^{-\\alpha+\\mathbf{S}}\\Psi^-(\\zeta )E^{-}(\\zeta ,\\alpha-\\mathbf{S})\n\\frac{\\zeta}{1+\\zeta^2}\n\\]\\,,\n\\label{bcff}\\\\\n&\\mathbf{c} (\\zeta ,\\alpha)=2y\\ {\\rm sing}_{\\zeta=1}\n\\[\\zeta ^{\\alpha-\\mathbf{S}}\\Psi ^+(\\zeta )\nE^{+}(\\zeta ,\\alpha-\\mathbf{S})\n\\frac{\\zeta}{1+\\zeta^2}\n\\]\\,,\n\\nonumber\n\\end{align}\nwhere\n\\begin{align}\n\\Psi^{\\pm}(\\zeta )=\\sum\\limits _ {j=-\\infty}^{\\infty}\n\\Psi^{\\pm}_j\\(\\frac {1+\\zeta^2}\n{1-\\zeta ^2}\\)^{j}\\label{fermion}\n\\end{align}\nand\n\\begin{align}\nE^{\\pm}(\\zeta,\\alpha)=\\exp\\(\\mathcal{N}\\[\\Phi_{\\alpha}^{\\pm} \n\\log \\(I-\\zeta ^2M\\)\n\\Psi ^{\\mp}-\n\\Phi_{\\alpha}^{\\mp} \n\\log \\(I+\\zeta ^2M\\)\n\\Psi ^{\\pm}\\]\\)\\,.\\label{E}\n\\end{align}\nIn the last formula we consider $\\Phi_{\\alpha,j}^{\\pm}$ \n(resp. $\\Psi^{\\pm}_j$)\nas components of a row (resp. column) vector, \n$$ \nM=(1+u)(1-u)^{-1},\\qquad \n\\(u\\Psi ^{\\pm}\\)_j=\\Psi ^{\\pm}_{j+1}\\,,\n$$\nand \n$\\log \\(1\\pm\\zeta ^2M\\)$ are understood as Taylor series in $u$.\n$\\mathcal{N}[\\cdot]$ stands for the normal ordering \nwhich applies only to operators acting at the same site. For them we set\n\\begin{align}\n\\mathcal{N}[\\Phi_{\\alpha,j}^{\\epsilon}\\Psi_j^{\\epsilon'}]\n=\n\\begin{cases}\n\\Phi_{\\alpha,j}^{\\epsilon}\\Psi ^{\\epsilon'}_j&\\quad (j>0)\\,,\\\\\n-\\Psi ^{\\epsilon'}_j\\Phi_{\\alpha,j}^{\\epsilon}&\\quad (j\\le0 )\\,.\\\\\n\\end{cases}\n\\label{norm}\n\\end{align}\n\\end{lem} \nSince the $q$-oscillators become fermions at $q=i$, \nLemma can be shown by \nmanipulations \nwith exponentials of quadratic forms in fermions. \nDetails will be given in another publication.\n\nWe remark that the exponent of (\\ref{E}) is well defined \nas an operator on $\\mathcal{W}_{\\alpha}$. Indeed by definition\nit consists of $\\mathcal{N}\\(\\Phi_{\\alpha,k}^{\\pm}\\Psi ^{\\mp}_l\\)$ \nwith $l\\ge k$. \nOn a particular operator in $\\mathcal{W}_{\\alpha}$ \nonly a finite number of these operators do not vanish.\n\\medskip\n\nIt has been said that, unlike $\\mathbf{b} (\\zeta)$, $\\mathbf{c} (\\zeta)$, \nformulae containing fermions necessarily\nbreak the translational invariance. \nWe choose the point $k=1$ as the origin \nand consider only operators of the form \n\\begin{align}\ny^{2 S(0)}\\mathcal{O}_>\n\\label{O+}\n\\end{align}\nwhere $\\mathcal{O}_> $ acts only on the interval $[1,\\infty)$. \nAny operator in $\\mathcal{W}_{\\alpha}$ can be brought to the form (\\ref{O+}) by a shift, \nso we do not really lose generality.\nIn the sequel we need the operators on a half line:\n$$\n\\mathbf{b} _>(\\zeta,\\alpha)=\\mathbf{b}_{[1,\\infty)}(\\zeta,\\alpha),\\qquad\n\\mathbf{c} _>(\\zeta,\\alpha)=\\mathbf{c}_{[1,\\infty)}(\\zeta,\\alpha)\\,. 
\n$$\nThey are defined as in (\\ref{bcff}), replacing $E^{\\pm}(\\zeta,\\alpha)$, $\\Psi ^{\\pm}(\\zeta )$ and \n$\\Phi_{\\alpha}^{\\pm}(\\zeta )$ by $E^{\\pm}_>(\\zeta,\\alpha)$, \n$\\Psi^{\\pm}_>(\\zeta )$ and \n$\\Phi_{\\alpha,>}^{\\pm}(\\zeta )$, respectively. \nThe latter are given by the same formulae (\\ref{fermion}), (\\ref{E}) \nwith non-positive components of fermions removed. \n\nIn the free fermion case the function $\\omega (\\zeta ,\\alpha)$ can be\ncalculated explicitly. \nPutting it together with \\eqref{bcff}, \nwe rewrite our main formula in the free fermion case as follows. \n\\begin{align}\n&\\frac{\\langle\\text{vac}|\\ y^{2 S(0)}\\mathcal{O}_>\n\\ |\\text{vac}\\rangle}{\\langle\\text{vac}|\\ y^{2 S(0)}\n\\ |\\text{vac}\\rangle}\n =\\mathbf{tr}_>^{\\alpha}\\(e^{\\mbox{\\scriptsize\\boldmath{$\\Omega$}}_>}\\(\\mathcal{O}_>\\)\\),\\label{mainferm}\\\\\n&\\mbox{\\boldmath$\\Omega $} _>=\n\\frac {i} {\\sin\\frac {\\pi \\alpha} 2}{\\rm res} _{\\zeta _1=1}\n{\\rm res} _{\\zeta _2=1}\\(\n\\frac {\\zeta _1^{\\alpha}\\zeta_2^{-\\alpha}-1}{\\zeta _1^2+\\zeta _2^2}\nE^{-}_>(\\zeta _2,\\alpha)E^{+}_>(\\zeta _1,\\alpha)\n\\Psi_>^-(\\zeta _2)\\Psi_>^+(\\zeta _1)\n\\frac{d\\zeta_1^2}{1+\\zeta_1^2}\n\\frac{d\\zeta_2^2}{1+\\zeta_2^2}\n\\),\\nonumber\n\\end{align}\nwhere $\\mathbf{tr}_>^{\\alpha}$ means that the trace is calculated\nover the positive half of the chain only.\n\nNow\nnotice that \n\\begin{align}\n&\\Psi ^{\\pm}_>(\\zeta)(I)=0,\n\\quad \\mathbf{tr}_>^{2(\\alpha+1)}\n\\(\\Phi_{\\alpha,>}^{\\pm}(\\zeta)\\(\\mathcal{O}_>\\)\\)=0,\\label{cran}\\\\\n&\\psi ^{\\pm}_j\\mathcal{O}_>\n=\\(\\Phi_{\\alpha,j}^{\\pm}-\\frac {y^{\\mp 2}}{1-y^{\\mp 2}}\n\\Psi ^{\\pm}_j\\)(\\mathcal{O}_>)\\,.\n\\nonumber\n\\end{align}\nSo, by changing $ \\mathbf{tr}_>^{\\alpha}$ to $ \\mathbf{tr}_>^{2(\\alpha+1)}$, \nthe operators $\\Phi_{\\alpha,>}^{\\mp}$ and $\\Psi ^{\\pm}_>$ can be considered\nas creation-annihilation operators in the space of operators.\nFor efficient application of them we need \nthe following:\n\\begin{lem}\\label{lem6}\nThe following identity holds:\n\\begin{align}\n\\mathbf{tr}_>^{\\alpha}\\(e^{\\mbox{\\scriptsize\\boldmath{$\\Omega$}} _>}\\(\\mathcal{O}_>\\)\\)=\n\\mathbf{tr}_>^{2(\\alpha+1)}\\(e^{\\widetilde{\\mbox{\\scriptsize\\boldmath{$\\Omega$}}} _>}\\(\\mathcal{O}_>\\)\\)\n\\label{newtrace}\n\\end{align}\nwhere \n\\begin{align}\n\\widetilde{\\mbox{\\boldmath$\\Omega $}} _>= \n\\frac {i} {\\sin\\frac {\\pi \\alpha} 2}{\\rm res} _{\\zeta _1=1}\n{\\rm res} _{\\zeta _2=1}\\(\n\\frac {\\zeta _1^{\\alpha}\\zeta_2^{-\\alpha}}{\\zeta _1^2+\\zeta _2^2}\n\\Psi_>^-(\\zeta _2)\\Psi_>^+(\\zeta _1)\n\\frac{d\\zeta_1^2}{1+\\zeta_1^2}\\frac{d\\zeta_2^2}{1+\\zeta_2^2}\n\\)\\,.\\nonumber\n\\end{align}\n\\end{lem}\n\nThe formulae (\\ref{cran}) and (\\ref{newtrace})\nallow an explicit calculation of correlators. 
One easily obtains:\n\\begin{align}\n\\frac{\\langle\\text{vac}|\\ y^{2 S(0)}\\psi ^+_{k_1}\\cdots \\psi ^+_{k_p}\n\\psi ^-_{l_p}\\cdots \\psi ^-_{l_1}\\ |\\text{vac}\\rangle }\n{\\langle\\text{vac}|y^{2 S(0)}\\ |\\text{vac}\\rangle }\n=\\text{det}\\(\\langle \\psi ^+_{k_i} \n\\psi ^-_{l_j} \\rangle\\)_{i,j=1,\\cdots ,p}\\label{fermcorr}\n\\end{align}\nwhere \n\\begin{align}\n\\langle \\psi ^+_{k} \n&\\psi ^-_{l} \\rangle =\\label{2point}\\\\\n&\\frac {i}{\\sin \\frac {\\pi\\alpha} 2}\n\\(-\\frac {y} 2\\delta _{k,l}\n+\\,{\\rm res} _{\\zeta_1=1}{\\rm res} _{\\zeta _2=1}\n\\frac {\\zeta _1^{\\alpha}\\zeta _2^{-\\alpha}}{\\zeta _1^2+\\zeta _2 ^2}\n\\(\\frac {1+\\zeta _1^2} {1-\\zeta _1^2} \\)^k\\(\\frac {1+\\zeta _2^2} {1-\\zeta _2^2} \\)^l\n\\frac {d\\zeta_1^2}{1+\\zeta _1 ^2}\\frac {d\\zeta_2^2}{1+\\zeta _2 ^2}\\)\\,.\n\\nonumber\n\\end{align}\n\nOn the other hand, \none can calculate the correlators (\\ref{fermcorr}) \ndirectly by normal ordering $y^{2 S(0)}$. \nThe result is the same: \n(\\ref{2point}) is the two-point function while (\\ref{fermcorr}) is\nobtained by the Wick theorem.\n\nThis calculation is unsatisfactory because\nwe had to pass through the fermions $\\Psi^{\\pm}_k$, \n$\\Phi_{\\alpha,k}^{\\pm}$.\nIt would be much better to find a basis in the space of local\noperators, on which the original\noperators $\\mathbf{b} (\\zeta ,\\alpha)$, $\\mathbf{c} (\\zeta ,\\alpha)$ act nicely. \nSuch a construction would have a chance to generalize to an arbitrary coupling constant.\n{}For the moment we cannot do that. \n\n\\section{Conclusion}\\label{sec:conclusion}\n\nThe main result of this paper can be formulated as follows.\nWe consider the space $\\mathcal{W}_{[\\alpha]}$ \nof local operators in the presence of a disorder field.\nWe have shown that the vacuum expectation values\nof operators in $\\mathcal{W}_{[\\alpha]}$ can be expressed \nin terms of two anti-commutative families of operators \n$\\mathbf{b}(\\zeta)$ and $\\mathbf{c}(\\zeta )$ acting on $\\mathcal{W}_{[\\alpha]}$ .\nAt present, we do not know \nhow to organize the space $\\mathcal{W}_{[\\alpha]}$ \nin order to describe efficiently the action of $\\mathbf{b}(\\zeta)$ and $\\mathbf{c}(\\zeta )$.\nThe operators $\\mathbf{b}(\\zeta)$ and $\\mathbf{c}(\\zeta )$ should be considered as \nannihilation operators, as both of them kill the `vacua', i.e., operators\n$q^{2\\alpha S(k)}$, for all $k$. \nWhat is missing is a construction of creation operators.\nEven in the free fermion case, we were able rather to\nmake a detour than to actually solve the problem. \n\nIn fact, the problem of constructing creation operators\ncannot be solved literally, because \n$\\mathbf{b}(\\zeta )$ and $\\mathbf{c}(\\zeta )$ have a large common kernel. \nConsider the restricted operators\n$\\mathbf{b}_{[k,l]}(\\zeta ,\\alpha)$ and $\\mathbf{c}_{[k,l]}(\\zeta ,\\alpha )$ acting \non the space of dimension $4^{l-k+1}$. \nIn the free fermion case, it can be shown that \nthe dimension of the kernel is $2^{l-k+1}$. \nNumerical experiments indicate that the dimension stays the same generically. \n\nBecause of this kernel, we cannot expect \noperators satisfying the canonical anti-commutation relations \nwith $\\mathbf{b}(\\zeta )$ and $\\mathbf{c}(\\zeta )$. \nSo the first problem is to understand the meaning of the kernel. \nObviously, the \ndifference of any two operators in the kernel has \nvanishing expectation value. \nThe origin of these operators with zero vacuum expectation \nvalues is a mystery to us. 
\nThe only operators for which this property \ncan be easily explained are the descendants generated\nby the adjoint action of local integrals of motion, \nbut for them the vacuum expectation values vanish for a different \nreason: $\\mathbf{b}(\\zeta)$ and $\\mathbf{c}(\\zeta)$ commute with the adjoint action\nof local integrals of motion, as explained in Lemma \\ref{lem4}.\n\nUnderstanding the origin of the kernel of $\\mathbf{b}(\\zeta )$ and $\\mathbf{c}(\\zeta )$, \nand the construction of creation operators, \nare the problems which we wish to solve.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
\n\n\\section{Introduction}\label{sec:intro}\nIt is well-known that in the normal linear regression model,\n$$\nY_i | X_i \\sim X_i^T\\beta + \\epsilon_i\\ , \\quad \\epsilon_i \\stackrel{{\\tiny \\mbox{i.i.d.}}}{\\sim} N(0,\\sigma^2)\\ ,\n$$\nthe mean-model parameter $\\beta$ is orthogonal to the error variance $\\sigma^2$ \\citep[e.g.][Section 3.3]{CR1987}. \nThere are two important implications of this. \nFirst, the maximum likelihood estimator (MLE) of $\\beta$ is asymptotically efficient regardless of whether $\\sigma^2$ is known or estimated simultaneously from the data. Second, the MLE of $\\beta$ is independent of the MLE of $\\sigma^2$, which is central to deriving the usual $t$-tests for inferences on $\\beta$. (Note that orthogonal parameters are only asymptotically independent in general; finite-sample independence is special to the normal distribution.) When interest lies primarily in the mean-model, the error variance is often deemed a nuisance parameter.\n\nSimilar orthogonality results hold for other generalized linear models (GLMs). Specific examples include the gamma regression model, in which the mean-model parameter is orthogonal to a nuisance shape parameter \\citep[][Section 3.2]{CR1987}, and multinomial models for polytomous data, in which the mean-model parameter is orthogonal to a nuisance vector of baseline probability masses \\citep{RG2009}. Note that the orthogonality property holds for any link function. 
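\n\nAs a small numerical check of this classical fact (the simulation below is purely illustrative and not part of the original analysis; the sample size, parameter values and the short Python script are our own choices), one can verify by Monte Carlo that the maximum likelihood estimates of $\\beta$ and $\\sigma^2$ are essentially uncorrelated across replications:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, reps = 50, 2000\nX = np.column_stack([np.ones(n), rng.normal(size=n)])\nbeta, sigma = np.array([1.0, 2.0]), 1.5\nb_hat = np.empty((reps, 2))\ns2_hat = np.empty(reps)\nfor r in range(reps):\n    y = X @ beta + sigma * rng.normal(size=n)\n    b = np.linalg.lstsq(X, y, rcond=None)[0]   # MLE of beta\n    resid = y - X @ b\n    b_hat[r] = b\n    s2_hat[r] = resid @ resid / n              # MLE of sigma^2\n# near-zero correlation reflects the block-diagonal information matrix\nprint(np.corrcoef(b_hat[:, 1], s2_hat)[0, 1])\n\\end{verbatim}\nThe reported correlation is close to zero, consistent with the exact finite-sample independence noted above.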
\n\nIn each of the above settings, the error distribution is characterized by a finite vector of nuisance parameters and orthogonality is established by showing that the Fisher information matrix is block-diagonal, with the blocks corresponding to the vector of mean-model parameters and the vector of nuisance parameters. It is usually straightforward to perform these calculations on a case-by-case basis, working with the specific family of distributions under consideration, but a general result for parametric GLMs can be found in \\citet{JK2004}.\n\nWhen we move away from specific parametric families to consider the class of all GLMs, we find that a general orthogonality property, although expected, may not be so easy to establish. This is because such a class constitutes a semiparametric model, and it is no longer feasible to compute and examine the Fisher information matrix in the presence of an infinite-dimensional parameter. \n\nIn this note, we show that the mean-model parameter is always orthogonal to the error distribution in GLMs, even when \nthe error distribution is treated as an infinite-dimensional parameter, belonging to the space of all distributions having a Laplace transform in some neighborhood of zero. This class includes, as special cases, the classical normal, Poisson, gamma and multinomial distributions, as well as many interesting and non-standard distributions, such as the generalized Poisson distribution of \\citet{WF1997} for overdispersed counts and the class of all exponential dispersion models with constant dispersion \\citep{Jorg1987}. We note in Section \\ref{sec:2} that this class of distributions is as large as possible for GLMs, so the result here is indeed the most general possible. \n\nThat a general orthogonality property should hold is alluded to in \\citet[][Section 6.2]{JK2004}. In that paper, the notion of orthogonality between a finite-dimensional and infinite-dimensional parameter being considered is that orthogonality holds, in the usual Fisher information matrix sense, for every finite-dimensional submodel. In this note, we use a slightly stronger notion of orthogonality, namely, that the score function for the finite-dimensional parameter is orthogonal to the nuisance tangent space of the infinite-dimensional parameter. Recalling that the nuisance tangent space is the closure of all finite-dimensional submodel tangent spaces, we see that the notion of orthogonality here implies the notion considered in \\citet{JK2004}.\n\nOur proof proceeds along the following lines. We first use an exponential tilt representation of GLMs, introduced in \\citet{RG2009} and expanded upon in \\citet{Huang2013}, to index any GLM by just two parameters, namely, a finite-dimensional mean-model parameter $\\beta$ and an infinite-dimensional error distribution parameter $F$. The orthogonality of the two parameters is then characterized by the orthogonality of the score function for $\\beta$ to the nuisance tangent space for $F$. As it turns out, the nuisance tangent space for $F$ is rather difficult to work with directly, but by embedding the model into a larger class of models, we find that the required calculations become particularly simple. \n\nThe exponential tilt representation is derived in Section \\ref{sec:2} and the general orthogonality property is proven in Section \\ref{sec:3}. A connection with mean-variance models is outlined in Section \\ref{sec:4}. 
A brief discussion of the theoretical and practical implications of our findings is given in Section \\ref{sec:5}, which concludes the note.\n\n\\section{An exponential tilt representation of generalized linear models}\n\\label{sec:2}\n\nRecall that a GLM \\citep{MN1989} for independent data pairs $(X_1,Y_1), \\ldots, (X_n, Y_n)$ is defined by two components. First, there is a conditional mean model for the responses,\n\\begin{equation}\nE(Y_i|X_i) = \\mu(X_i^T\\beta) \\ ,\n\\label{eq:meanmodel}\n\\end{equation}\nwhere $\\mu$ is a user-specified inverse-link function and $\\beta \\in \\mathbb{R}^q$ is a vector of unknown regression parameters. Second, the conditional distributions $F_i$ of each response $Y_i$ given covariate $X_i$ are assumed to come from some exponential family. Assuming the distributions $F_i$ have densities $dF_i$ with respect to some dominating measure, the second component can be written in the exponential tilt form\n\\begin{equation}\ndF_i(y) = \\exp\\{b(X_i; \\beta, F) + \\theta(X_i; \\beta, F) y\\} dF(y)\n\\end{equation}\nfor some reference distribution $F$, where \n\\begin{equation}\n\\label{eq:norm}\nb(X_i; \\beta, F) = -\\log \\int \\exp\\{\\theta(X_i; \\beta, F) y\\} dF(y)\n\\end{equation}\nis a normalizing function and, in order to satisfy (\\ref{eq:meanmodel}), the tilt $\\theta(X_i;\\beta,F)$ is implicitly defined as the solution to the mean constraint\n\\begin{equation}\n\\mu(X_i^T\\beta) = \\int y \\exp\\{b(X_i; \\beta, F) +\\theta(X_i;\\beta,F) y\\} dF(y) \\ .\n\\label{eq:mean}\n\\end{equation}\n\nIt is easy to see that the exponential tilt representation (\\ref{eq:meanmodel})--(\\ref{eq:mean}) encompasses all classical GLMs. For example, normal, Poisson and gamma regression models can be recovered by choosing $dF$ to be a Gaussian, Poisson or gamma kernel, respectively. The main advantage of this representation is that it naturally allows for the reference distribution $F$ to be considered as an infinite-dimensional nuisance parameter, along with the finite-dimensional parameter $\\beta$, in the model. It is this novel representation that allows us to conveniently \ncharacterize any GLM using just the two parameters $\\beta$ and $F$.\n\nAs with any GLM, the reference distribution $F$ is required to have a Laplace transform in some neighborhood of the origin so that the cumulant generating function (\\ref{eq:norm}) is well-defined. Thus, the parameter space for $F$ is the class of all distributions that have a Laplace transform in some neighborhood of the origin. Note that this class of distribution functions is as large as it can be for GLMs, because any distribution outside this class cannot be used to generate a valid model.\n\nThe exponential tilt representation (\\ref{eq:meanmodel})--(\\ref{eq:mean}) was first introduced in \\citet{RG2009}. In that paper, the representation is used to derive a useful alternative parametrization of the multinomial regression model for polytomous responses. The representation is also used in \\cite{Huang2013} to motivate a semiparametric extension of GLMs for arbitrary responses.\n\n\\section{The orthogonality of parameters}\n\\label{sec:3}\nIn parametric models, orthogonality of parameters can be characterized by the Fisher information matrix being block-diagonal. In semiparametric models, however, orthogonality between a finite-dimensional parameter $\\beta$ and an infinite-dimensional parameter $F$ cannot be characterized in this way. 
Rather, it is characterized through the score function for $\\beta$ being orthogonal to the nuisance tangent space for $F$. Intuitively speaking, the projection of the score function for $\\beta$ on to the nuisance tangent space is a measure of the loss of information about $\\beta$ due to the presence of the nuisance parameter $F$ -- this is zero if and only if the score function is orthogonal to the nuisance tangent space. Note that this general notion of orthogonality reduces to the Fisher information matrix criterion when the nuisance parameter is finite-dimensional.\n\nNow, the loglikelihood function corresponding to model (\\ref{eq:meanmodel})--(\\ref{eq:mean}) is $l(\\beta, F|X,Y) = \\log dF(Y) + b(X; \\beta, F) + \\theta(X;\\beta, F) Y$. Thus, the\nscore function for $\\beta$ is given by\n$$S_{\\beta,F}(X,Y) := \\frac{\\partial}{\\partial \\beta} l(\\beta, F) = \\frac{\\partial}{\\partial \\beta} b(X;\\beta, F) + \\frac{\\partial}{\\partial \\beta} \\theta(X;\\beta, F) Y.$$\nImplicit differentiation of the defining equations for \n$b$ and $\\theta$ leads to the identities\n\\begin{eqnarray*}\n\\frac{\\partial }{\\partial \\beta} b(X;\\beta,F) = - \\mu (X^T\\beta) \\frac{\\partial }{\\partial \\beta} \\theta(X;\\beta,F) \\quad \\mbox{ and } \\quad\n\\frac{\\partial }{\\partial \\beta} \\theta(X;\\beta,F) = \\frac{\\mu'(X^T\\beta) }{V(X;\\beta,F)} X\\ ,\n\\end{eqnarray*}\nwhere\n$V(X;\\beta,F) = E_{\\beta,F}[(Y-\\mu(X^T\\beta))^2|X]$\nis the conditional variance of $Y$ given $X$ under parameter value $(\\beta,F)$. The score function for $\\beta$ therefore reduces to\n\\begin{equation}\nS_{\\beta,F}(X,Y) = X\\frac{\\mu'(X^T\\beta)}{V(X;\\beta,F)} \\left(Y-\\mu(X^T\\beta)\\right) \\ ,\n\\label{eq:betascore}\n\\end{equation}\nwhich is of the same weighted least-squares form as for a parametric GLM. The difference here is that the variance function $V(X;\\beta,F)$ is not known because $F$ is not specified.\n\nThe orthogonality between $\\beta$ and $F$ now reduces to the score function (\\ref{eq:betascore}) being orthogonal to the nuisance tangent space for $F$. Although it is not hard to derive a score function for $F$ \\citep[e.g.][Section 3.3]{Huang2013}, it turns out to be rather difficult to compute the nuisance tangent space explicitly. This is also noted in \\citet[][Section 6.2]{JK2004}. We can work around this, however, by embedding model (\\ref{eq:meanmodel})--(\\ref{eq:mean}) into a more general class of ``semiparametric restricted moment models\" \\citep[e.g.][Section 4.5]{Tsia2006} for which the required calculations are much easier. This class is given by\n\\begin{equation}\n\\label{eq:srmm}\nY = \\mu(X,\\beta) + \\epsilon \\ ,\n\\end{equation}\nwhere the conditional distribution of $\\epsilon$ given $X$ is specified only up to the moment condition $E(\\epsilon|X) = 0$. It is clear that the semiparametric extension (\\ref{eq:meanmodel})--(\\ref{eq:mean}) is a subclass of the restricted moment model, with $\\mu(X,\\beta) = \\mu(X^T\\beta)$ and $E(\\epsilon|X) = E(Y-\\mu(X^T\\beta)|X) = 0$ by construction. 
The nuisance tangent space for $F$ in the semiparametric model (\\ref{eq:meanmodel})--(\\ref{eq:mean}) must therefore be a subspace of the nuisance tangent space for the restricted moment model.\n\nElementary calculations \\citep[see][pp.81--83]{Tsia2006} show that the nuisance tangent space for the restricted moment model is given by\n\\begin{equation*}\n\\Lambda = \\left\\{\n\\text{all $q \\times 1$ functions $a(X,Y)$ such that\n$E_{\\beta,F}[(Y-\\mu(X,\\beta))a(X,Y)|X] = 0$}\n\\right \\}\n\\end{equation*}\nand the projection operator $\\Pi_{\\beta,F}$ onto this nuisance tangent space is given by\n\\begin{equation*}\n\\Pi_{\\beta,F} s = s - \\frac{E_{\\beta,F} \\left[\\left(Y-\\mu(X,\\beta)\\right)s|X\\right]}{V(X;\\beta,F)} \\left(Y-\\mu(X,\\beta)\\right) \\ .\n\\end{equation*}\nApplying this operator to the score function (\\ref{eq:betascore}), with $\\mu(X,\\beta) = \\mu(X^T\\beta)$, gives\n\\begin{eqnarray*}\n\\Pi_{\\beta,F} S_{\\beta,F} &=& X\\frac{ \\mu'(X^T\\beta)}{V(X;\\beta,F)} \\left(Y-\\mu(X^T\\beta)\\right) \\\\\n& & - X\\frac{ \\mu'(X^T\\beta)}{V(X;\\beta,F)} E_{\\beta,F} \\left[\\frac{(Y-\\mu(X^T\\beta))^2}{V(X;\\beta,F)}\\Bigg|X \\right] (Y-\\mu(X^T\\beta)) \\\\\n&=& 0 \\ \\ \\mbox{ for all } (\\beta, F),\n\\end{eqnarray*}\nbecause $V(X;\\beta,F) = E_{\\beta,F}[(Y-\\mu(X^T\\beta))^2|X]$ by definition. \nThus, the score function (\\ref{eq:betascore}) is orthogonal to the nuisance tangent space in the restricted moment model (\\ref{eq:srmm}) and therefore necessarily orthogonal to the nuisance tangent space in the semiparametric model (\\ref{eq:meanmodel})--(\\ref{eq:mean}) also. \nWe summarize as follows:\n\n\\begin{Proposition}[Orthogonality]\n\\label{le:orth}\nThe mean-model parameter $\\beta$ and the error distribution $F$ in any generalized linear model are orthogonal.\n\\end{Proposition}\n\nNote that the nuisance tangent space for any parametric model (that is, a model in which $F$ is characterized by a finite number of parameters) is necessarily a subspace of the semiparametric nuisance tangent space. We therefore have the following corollary:\n\n\\begin{Corollary} [Orthogonality in parametric models]\nIf the error distribution is characterized by a finite vector of nuisance parameters $\\phi$, then the mean-model parameter $\\beta$ is orthogonal to $\\phi$.\n\\end{Corollary}\n\n\\section{A connection with quasilikelihood models}\n\\label{sec:4}\nA popular extension of GLMs is the class of quasilikelihood (QL) models, also known as mean-variance models \\citep[e.g.][]{Wed1974}. These models make the assumption that $E(Y|X) = \\mu(X,\\beta)$ for some mean function $\\mu$ and $\\mbox{Var}(Y|X) = v(\\mu)$ for some positive variance function $v$. Such models can be characterized by their quasi-score functions for $\\beta$,\n\\begin{equation}\n\\label{eq:quasiscore}\n\\frac{\\partial \\mu}{\\partial \\beta}\\, \\frac{Y-\\mu}{v(\\mu)} \\ .\n\\end{equation}\nIn classical QL literature, the functional forms of both $\\mu$ and $v$ are usually specified, although there is a growing literature on adaptive estimation in which the variance function is left unspecified and estimated nonparametrically from data \\citep[e.g.][]{DZ2002}. By comparing score equations (\\ref{eq:betascore}) and (\\ref{eq:quasiscore}), note that GLMs form a subset of QL models.\n\nFor models characterized by (\\ref{eq:quasiscore}), \\citet{JK2004} showed that the mean-model parameter $\\beta$ is orthogonal to the variance function $v$ whenever the latter can be characterized by a finite number of parameters. 
A general orthogonality result for arbitrary, infinite-dimensional $v$ remains elusive, however, perhaps because of the fact that not all QL score functions (\\ref{eq:quasiscore}) correspond to actual probability models. Indeed, such correspondences are atypical. Nevertheless, there is an interesting connection between QL models with unspecified variance functions and GLMs with unspecified error distributions in a certain asymptotic sense made more precise below.\n\nThe connection is based on a rather remarkable, but relatively obscure, result from \\citet{Hiejima1997}, who showed that GLMs can be considered ``dense\" in the class of QL models in the following asymptotic sense: for any mean-variance relationship, there exists an exponential family of distributions (i.e. a GLM with some error distribution $F$) whose score equations for $\\beta$ admit roots that are arbitrarily close to the roots of the corresponding QL score equation, as the sample size increases. Thus, for large enough sample sizes, any adaptive QL model with unspecified variance function $v$ can be approximated arbitrarily well by a GLM with unspecified error distribution $F$, with the latter possessing the orthogonality property \\ref{le:orth}. We conjecture that this connection may be the best possible for adaptive QL models, mainly because of the aforementioned fact that QL score functions typically do not correspond to actual probability models.\n\n\\section{Practical implications}\n\\label{sec:5}\nThe orthogonality property \\ref{le:orth} naturally suggests the idea of estimating the error distribution nonparametrically and simultaneously with the mean-model, leading to a kind of ``adaptive GLM\". Indeed, if the joint estimation procedure is based on maximum semiparametric likelihood, then the estimator for $\\beta$ is guaranteed to be asymptotically efficient and asymptotically independent of the estimated error distribution. In other words, both estimation and inferences on $\\beta$ are asymptotically unaffected by having to also estimate the error distribution. In contrast, estimation and inferences in GLMs with misspecified error distributions, or QL models with misspecified variance functions, are generally not efficient. \n\nThe idea of jointly estimating the mean and error distribution in GLMs is considered in more detail in \\citet{Huang2013}. In that paper, it is demonstrated that inferences on $\\beta$ based on profiling out the error distribution $F$ in the likelihood can be more accurate than inferences based on QL methods. Here, we focus our attention on the accuracy of the point estimates of $\\beta$. 
The results here complement those found in \\citet[][Section 6]{Huang2013}.\n\n\\begin{table}\n\\label{tab:1}\n\\centering\n\\caption{\\it Relative root mean-square errors of a semiparametric MLE ($\\hat \\beta_{SP}$) and the usual MLE ($\\hat \\beta_{MLE}$) in three simulation settings, based on 5000 replications each.\n}\n\\small{\n\\begin{tabular}{lllccccccc}\n\\hline\n & & & \\uline{\\hspace{3mm}Esti\\hspace{-2.5mm}} & \\uline{\\hspace{-2.5mm}mator\\hspace{3mm}} \\\\\nData &n\t& Parameter\t& $\\hat \\beta_{SP}$ & $\\hat \\beta_{MLE}$ \\\\\n\\hline\nExponential & 33 & Intercept \t& 0.199 & 0.194 \\\\\n& & Group effect \t\t\t \t & 0.361 & 0.354 \\\\\n& & Common slope \t\t & 0.464 & 0.455 \\\\\n& 66 & Intercept \t\t\t & 0.124 & 0.122 \\\\\n& & Group effect \t\t\t & 0.247 & 0.243 \\\\\n& & Common slope \t \t& 0.300 & 0.297 \\\\\nPoisson & 44 & Intercept & 0.275 & 0.271 \\\\\n & & Coefficient of $X_1$ & 0.610 & 0.594 \\\\\n & & Coefficient of $X_2$ & 0.205 & 0.201\\\\\n & & Coefficient of $X_3$ & 0.546& 0.533\\\\\n\\hline\n\\end{tabular}\n}\n\\end{table}\n\nIn Table 1, we compare the relative root mean-square error of a semiparametric MLE of $\\beta$ (with $F$ unknown) to that of the usual MLE (with $F$ set to the true distribution) from three sets of simulations. Recall that the relative root mean-square error of an estimator $\\hat \\beta$ is defined as the root mean-square error of $\\hat \\beta$ divided by the absolute value $|\\beta|$ of the parameter. The simulation settings are based on a leukemia survival dataset from \\citet{DH1997} and a mine injury dataset from \\citet{MMVR2010}, and are described in more detail in Sections 6.1 and 6.2 of \\citet{Huang2013}. The particular semiparametric estimation approach we use for estimating $\\beta$ \nis based on empirical likelihood and is described in more detail in Section 4 of \\citet{Huang2013}. \n\nWe see from Table 1 that the relative root mean-square errors of the two estimators are essentially the same, even for moderately small sample sizes. This supports the claim that maximum likelihood estimation of $\\beta$ is asymptotically efficient regardless of whether $F$ is known or completely unknown and estimated nonparametrically from data.\n\n\n\\section{Conclusion}\nIn this note, we have shown that orthogonality between the mean-model parameter and the error distribution holds for any GLM, parametric or nonparametric. This confirms, in greatest generality, what is well-known for the special cases of normal, gamma and multinomial regression. The result also has implications for applied statistical work, with our numerical results suggesting that little is lost by treating the error distribution nonparametrically, even in moderately sized problems. (It can also be said that little is gained by knowing the error distribution completely!) Nonparametric estimation of the error distribution can therefore safeguard against biases due to parametric model misspecification, without sacrificing much in terms of efficiency.\n\n\\section*{Acknowledgments}\nThe authors thank the Associate Editor and an anonymous referee for suggestions that improved the paper. Paul. J. Rathouz was funded by NIH grant R01 HL094786. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe resurgence of autoencoders (AE) \\cite{yann1987modeles,bourlard1988auto,hinton1994autoencoders} is an important component in the rapid development of modern deep learning \\cite{goodfellow2016deep}. 
Autoencoders have been widely adopted for modeling signals and images \\cite{poultney2007efficient, vincent2010stacked}.\nIts statistical counterpart, the variational autoencoder (VAE) \\cite{kingma2013auto}, has led to a recent wave of development in generative modeling due to its two-in-one capability, both representation and statistical learning in a single framework. Another exploding direction in generative modeling includes generative adversarial networks (GAN) \\cite{goodfellow2014generative}, but GANs focus on the generation process and are not aimed at representation learning (without an encoder at least in its vanilla version). \n\nCompared with classical dimensionality reduction methods like principal component analysis (PCA) \\cite{candes1933robust,Jolliffe2011principal} and Laplacian eigenmaps \\cite{belkin2003laplacian}, VAEs have demonstrated their unprecedented power in modeling high dimensional data of real-world complexity. However, there is still a large room to improve for VAEs to achieve a high quality reconstruction\/synthesis. Additionally, it is desirable to make the VAE representation learning more transparent, interpretable, and controllable.\n\nIn this paper, we attempt to learn a transparent representation by introducing guidance to the latent variables in a VAE. We design two strategies for our Guided-VAE, an unsupervised version (Fig.~\\ref{fig:model}.a) and a supervised version (Fig.~\\ref{fig:model}.b). The main motivation behind Guided-VAE is to encourage the latent representation to be semantically interpretable, while maintaining the integrity of the basic VAE architecture. Guided-VAE is learned in a multi-task learning fashion. The objective is achieved by taking advantage of the modeling flexibility and the large solution space of the VAE under a lightweight target. Thus the two tasks, learning a good VAE and making the latent variables controllable, become companions rather than conflicts.\n\nIn {\\bf unsupervised Guided-VAE}, in addition to the standard VAE backbone, we also explicitly force the latent variables to go through a lightweight encoder that learns a deformable PCA. As seen in Fig.~\\ref{fig:model}.a, two decoders exist, both trying to reconstruct the input data ${\\bf x}$:\nThe main decoder, denoted as $\\text{Dec}_{main}$, functions regularly as in the standard VAE \\cite{kingma2013auto}; the secondary decoder, denoted as $\\text{Dec}_{sub}$, explicitly learns a geometric deformation together with a linear subspace.\nIn {\\bf supervised Guided-VAE}, we introduce a subtask for the VAE by forcing one latent variable to be discriminative (minimizing the classification error) while making the rest of the latent variable to be adversarially discriminative (maximizing the minimal classification error). This subtask is achieved using an adversarial excitation and inhibition formulation. Similar to the unsupervised Guided-VAE, the training process is carried out in an end-to-end multi-task learning manner. 
The result is a regular generative model that keeps the original VAE properties intact, while having the specified latent variable semantically meaningful and capable of controlling\/synthesizing a specific attribute.\nWe apply Guided-VAE to the data modeling and few-shot learning problems and show favorable results on the MNIST, CelebA, CIFAR10 and Omniglot datasets.\n\nThe contributions of our work can be summarized as follows:\n{\n\\begin{itemize}\n \\setlength\\itemsep{0mm}\n \\setlength{\\itemindent}{0mm}\n\\item We propose a new generative model disentanglement learning method by introducing latent variable guidance to variational autoencoders (VAE). Both unsupervised and supervised versions of Guided-VAE have been developed. \n\\item In unsupervised Guided-VAE, we introduce deformable PCA as a subtask to guide the general VAE learning process, making the latent variables interpretable and controllable.\n\\item In supervised Guided-VAE, we use an adversarial excitation and inhibition mechanism to encourage the disentanglement, informativeness, and controllability of the latent variables.\n\\end{itemize}\n}\n\nGuided-VAE can be trained in an end-to-end fashion. It is able to keep the attractive properties of the VAE while significantly improving the controllability of the vanilla VAE. It is applicable to a range of problems for generative modeling and representation learning.\n\n\\section{Related Work}\nRelated work can be discussed along several directions.\n\nGenerative model families such as generative adversarial networks (GAN) \\cite{goodfellow2014generative,WGAN} and variational autoencoder (VAE) \\cite{kingma2013auto} have received a tremendous amount of attention lately. Although GAN produces higher quality synthesis than VAE, GAN is missing the encoder part and hence is not directly suited for representation learning.\nHere, we focus on disentanglement learning by making VAE more controllable and transparent.\n\nDisentanglement learning \\cite{mathieu2016disentangling,szabo2017challenges,hu2018disentangling,achille2018emergence,gonzalez2018image,jha2018disentangling} recently becomes a popular topic in representation learning. Adversarial training has been adopted in approaches such as \\cite{mathieu2016disentangling,szabo2017challenges}.\nVarious methods \\cite{peng2017reconstruction,kim2018disentangling,lin2019exploring} have imposed constraints\/regularizations\/supervisions to the latent variables, but these existing approaches often involve an architectural change to the VAE backbone and the additional components in these approaches are not provided as secondary decoder for guiding the main encoder.\nA closely related work is the $\\beta$-VAE \\cite{higgins2017beta} approach in which a balancing term $\\beta$ is introduced to control the capacity and the independence prior. $\\beta$-TCVAE \\cite{chen2018isolating} further extends $\\beta$-VAE by introducing a total correlation term.\n\nFrom a different angle, principal component analysis (PCA) family \\cite{candes1933robust,Jolliffe2011principal,candes2011robust} can also be viewed as representation learning. Connections between robust PCA \\cite{candes2011robust} and VAE \\cite{kingma2013auto} have been observed \\cite{dai2018connections}. Although being a widely adopted method, PCA nevertheless has limited modeling capability due to its linear subspace assumption. 
To alleviate the strong requirement for the input data being pre-aligned, RASL \\cite{peng2012rasl} deals with unaligned data by estimating a hidden transformation to each input.\nHere, we take advantage of the transparency of PCA and the modeling power of VAE by developing a sub-encoder (see Fig. \\ref{fig:model}.a), deformable PCA, that guides the VAE training process in an integrated end-to-end manner. After training, the sub-encoder can be removed by keeping the main VAE backbone only.\n\nTo achieve disentanglement learning in supervised Guided-VAE, we encourage one latent variable to directly correspond to an attribute while making the rest of the variables uncorrelated. This is analogous to the excitation-inhibition mechanism \\cite{murphy2003multiplicative, yizhar2011neocortical}\nor the explaining-away \\cite{wellman1993explaining} phenomena. Existing approaches \\cite{liu2018detach,lin2019exploring} impose supervision as a conditional model for an image translation task, whereas our supervised Guided-VAE model targets the generic generative modeling task by using an adversarial excitation and inhibition formulation. This is achieved by minimizing the discriminative loss for the desired latent variable while maximizing the minimal classification error for the rest of the variables.\nOur formulation has a connection to the domain-adversarial neural networks (DANN) \\cite{ganin2016domain}, but the two methods differ in purpose and classification formulation. Supervised Guided-VAE is also related to the adversarial autoencoder approach \\cite{makhzani2016adversarial}, but the two methods differ in the objective, formulation, network structure, and task domain. In \\cite{ilse2019diva}, the domain invariant variational autoencoders method (DIVA) differs from ours by enforcing disjoint sectors to explain certain attributes.\n\nOur model also has connections to the deeply-supervised nets (DSN) \\cite{lee2015deeply}, where intermediate supervision is added to a standard CNN classifier. There are also approaches \\cite{engel2018latent,bojanowski2018optimizing} in which latent variables constraints are added, but they have different formulations and objectives than Guided-VAE. Recent efforts in fairness disentanglement learning \\cite{creager2019flexibly,song2018learning} also bear some similarity, but there is still a large difference in formulation.\n\n\n\\section{Guided-VAE Model}\n\\label{Guided-VAE}\nIn this section, we present the main formulations of our Guided-VAE models. The unsupervised Guided-VAE version is presented first, followed by introduction of the supervised version.\n\n\\begin{figure*}[!htp]\n\\begin{center}\n\\scalebox{0.9}{\n\\begin{tabular}{cc}\n\\hspace{-2mm}\n\\includegraphics[width=0.5\\textwidth]{.\/figures\/UnGuidedVAE.png} &\n\\hspace{5mm}\n\\includegraphics[width=0.5\\textwidth]{.\/figures\/SuGuidedVAE.png}\\\\\n(a) Unsupervised Guided-VAE &\n(b) Supervised Guided-VAE\\\\\n\\hspace{-2mm}\n\\end{tabular}\n}\n\\caption{Model architecture for the proposed Guided-VAE algorithms.}\n\\label{fig:model}\n\\vspace{-3mm}\n\\end{center}\n\\end{figure*}\n\n\\subsection{VAE}\n\nFollowing the standard definition in variational autoencoder (VAE) \\cite{kingma2013auto}, a set of input data is denoted as $\\text{X}=({\\bf x}_1,...,{\\bf x}_n)$ where $n$ denotes the number of total input samples. The latent variables are denoted by vector ${\\bf z}$. 
The encoder network includes network and variational parameters $\\bm{\\phi}$ that produces variational probability model $q_{\\bm{\\phi}}({\\bf z}|{\\bf x})$. The decoder network is parameterized by $\\bm{\\theta}$ to reconstruct sample $\\tilde{{\\bf x}}=f_{\\bm{\\theta}}({\\bf z})$. The log likelihood $\\log p({\\bf x})$ estimation is achieved by maximizing the Evidence Lower BOund (ELBO) \\cite{kingma2013auto}:\n\\begin{equation}\n\\begin{aligned}\n ELBO(\\bm{\\theta}, \\bm{\\phi}; {\\bf x}) &= {\\mathbb{E}}_{q_{\\bm{\\phi}}({\\bf z}|{\\bf x})} [\\log(p_{\\bm{\\theta}}({\\bf x}|{\\bf z}))] \\\\\n &- D_{\\mathrm{KL}}(q_{\\bm{\\phi}}({\\bf z}|{\\bf x}) || p({\\bf z})).\n\\label{eq:ELBO}\n\\end{aligned}\n\\end{equation}\n\nThe first term in Eq. (\\ref{eq:ELBO}) corresponds to a reconstruction loss $\\int q_{\\bm{\\phi}}({\\bf z}|{\\bf x}) \\times ||{\\bf x}-f_{\\bm{\\theta}}({\\bf z})||^2 d{\\bf z}$ (the first term is the \\emph{negative} of reconstruction loss between input ${\\bf x}$ and reconstruction $\\tilde{{\\bf x}}$) under Gaussian parameterization of the output.\nThe second term in Eq. (\\ref{eq:ELBO}) refers to the KL divergence between the variational distribution $q_{\\bm{\\phi}}({\\bf z}|{\\bf x})$ and the prior distribution $p({\\bf z})$.\nThe training process thus tries to optimize:\n\\begin{equation}\n \\max_{\\bm{\\theta}, \\bm{\\phi}} \\left\\{\\sum_{i=1}^n ELBO(\\bm{\\theta}, \\bm{\\phi}; {\\bf x}_i)\\right\\}.\n\\label{eq:VAE}\n\\end{equation}\n\\vspace{-6.5mm}\n\\subsection{Unsupervised Guided-VAE }\nIn our unsupervised Guided-VAE, we introduce a deformable PCA as a secondary decoder to guide the VAE training. An illustration can be seen in Fig. \\ref{fig:model}.a. This secondary decoder is called $\\text{Dec}_{sub}$.\nWithout loss of generality, we let ${\\bf z}=({\\bf z}_{def}, {\\bf z}_{cont})$. ${\\bf z}_{def}$ decides a deformation\/transformation field, e.g. an affine transformation denoted as $\\tau({\\bf z}_{def})$. ${\\bf z}_{cont}$ determines the content of a sample image for transformation. The PCA model consists of $K$ basis $B=({\\bf b}_1,...,{\\bf b}_K)$. We define a deformable PCA loss as:\n\\begin{equation}\n\\begin{aligned}\n & {\\mathcal{L}}_{DPCA}(\\bm{\\phi}, B)\\\\\n &= \\sum_{i=1}^n {\\mathbb{E}}_{q_{\\bm{\\phi}}({\\bf z}_{def},{\\bf z}_{cont}|{\\bf x}_i)}\\left[ ||{\\bf x}_i- \\tau({\\bf z}_{def}) \\circ ({\\bf z}_{cont} B^T) ||^2 \\right] \\\\\n &+ \\sum_{k,j\\ne k} ({\\bf b}_{k}^T {\\bf b}_{j})^2,\n\\label{eq:DPCA}\n\\end{aligned}\n\\end{equation}\nwhere $\\circ$ defines a transformation (affine in our experiments) operator decided by $\\tau({\\bf z}_{def})$ and $\\sum_{k,j\\ne k} ({\\bf b}_{k}^T {\\bf b}_{j})^2$ is regarded as the orthogonal loss. A normalization term $\\sum_{k} ({\\bf b}_{k}^T {\\bf b}_{k}-1)^2$ can be optionally added to force the basis to be unit vectors. We follow the spirit of the PCA optimization and a general formulation for learning PCA can be found in \\cite{candes2011robust}.\n\nTo keep the simplicity of the method we learn a fixed basis $B$ and one can also adopt a probabilistic PCA model \\cite{tipping1999probabilistic}. 
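\n\nA minimal sketch of this secondary, deformable-PCA reconstruction term is given below. It is written in PyTorch purely for illustration: the tensor shapes, the use of PyTorch's affine\\_grid and grid\\_sample functions to realize $\\tau({\\bf z}_{def})$, and the assembly of the $2\\times 3$ affine matrix from ${\\bf z}_{def}$ are our assumptions rather than details fixed by the model.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef dpca_loss(x, theta_def, z_cont, B):\n    # x: (N, 1, H, W) input images\n    # theta_def: (N, 2, 3) affine matrices built from z_def\n    # z_cont: (N, K) content codes; B: (K, H*W) learnable basis\n    N, _, H, W = x.shape\n    flat = z_cont @ B                          # linear (PCA-style) image\n    img = flat.view(N, 1, H, W)\n    grid = F.affine_grid(theta_def, size=x.shape, align_corners=False)\n    warped = F.grid_sample(img, grid, align_corners=False)  # apply tau(z_def)\n    recon = ((x - warped) ** 2).flatten(1).sum(1).mean()\n    gram = B @ B.t()\n    off_diag = gram - torch.diag(torch.diagonal(gram))\n    ortho = (off_diag ** 2).sum()              # orthogonality penalty on the basis\n    return recon + ortho\n\\end{verbatim}\nDuring training this term is simply added to the (negative) ELBO, giving the overall objective below.\n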
Thus, learning unsupervised Guided-VAE becomes:\n\\begin{equation}\n\\begin{aligned}\n \\max_{\\bm{\\theta}, \\bm{\\phi}, B} \\left \\{ \\sum_{i=1}^n ELBO(\\bm{\\theta}, \\bm{\\phi}; {\\bf x}_i) - {\\mathcal{L}}_{DPCA}(\\bm{\\phi}, B) \\right \\}.\n\\label{eq:ungvae}\n\\end{aligned}\n\\end{equation}\nThe affine matrix described in our transformation follows implementation in \\cite{jaderberg2015spatial}:\n\n\\begin{equation}\n A_\\theta=\n\\left[ {\n\\begin{array}{ccc}\n\\theta_{11} & \\theta_{12} & \\theta_{13}\\\\\n\\theta_{21} & \\theta_{22} & \\theta_{23}\n\\end{array}\n}\\right]\n\\label{eqn:attention}\n\\end{equation}\nThe affine transformation includes translation, scale, rotation and shear operation. We use different latent variables to calculate different parameters in the affine matrix according to the operations we need.\n\n\\begin{figure*}[!htp]\n\\begin{center}\n\\begin{tabular}{ccccc}\n\n\\includegraphics[width=1\\textwidth]{.\/figures\/MNIST.png}\\\\\n\\hspace{1.0em} (a)VAE \\hspace{6.0em} (b) $\\beta$-VAE \\hspace{4.0em} (c) CC$\\beta$-VAE \\hspace{4.0em} (d) JointVAE \\hspace{2.0em} (e) Guided-VAE (Ours) \n\\end{tabular}\n\n\\caption{\\small{\\textbf{Latent Variables Traversal on MNIST:} Comparison of traversal results from vanilla VAE \\cite{kingma2013auto}, $\\beta$-VAE \\cite{higgins2017beta}, $\\beta$-VAE with\ncontrolled capacity increase (CC$\\beta$-VAE), JointVAE \\cite{dupont2018learning} and our Guided-VAE on the MNIST dataset. $z_{1}$ and $z_{2}$ in Guided-VAE are controlled.}}\n\\label{fig:mnist}\n\\vspace{-3mm}\n\\end{center}\n\\end{figure*}\n\n\\vspace{-1mm}\n\\subsection{Supervised Guided-VAE}\n\\vspace{-1mm}\nFor training data $\\text{X}=({\\bf x}_1,...,{\\bf x}_n)$, suppose there exists a total of $T$ attributes with ground-truth labels.\nLet ${\\bf z}=(z_t, {\\bf z}_t^{rst})$ where $z_t$ defines a scalar variable deciding the $t$-th attribute and ${\\bf z}_t^{rst}$ represents remaining latent variables. Let $y_t({\\bf x}_i)$ be the ground-truth label for the $t$-th attribute of sample ${\\bf x}_i$; $y_t({\\bf x}_i) \\in \\{-1, +1\\}$. For each attribute, we use an adversarial excitation and inhibition method with term: \n\n\\begin{equation}\n\\begin{aligned}\n& {\\mathcal{L}}_{Excitation}(\\bm{\\phi}, t) \\\\\n&= \\max_{w_t}\\left\\{ \\sum_{i=1}^n {\\mathbb{E}}_{q_{\\bm{\\phi}}(z_t|{\\bf x}_i)} [\\log p_{w_t}(y=y_t({\\bf x}_i)|z_t)]\\right\\} ,\n\\end{aligned}\n\\end{equation}\nwhere $w_t$ refers to classifier making a prediction for the $t$-th attribute using the latent variable $z_t$.\n\nThis is an excitation process since we want latent variable $z_t$ to directly correspond to the attribute label. \n\n\nNext is an inhibition term.\n\\begin{equation}\n\\begin{aligned}\n& {\\mathcal{L}}_{Inhibition} (\\bm{\\phi}, t) \\\\\n&= \\max_{C_t} \\left\\{\\sum_{i=1}^n {\\mathbb{E}}_{q_{\\bm{\\phi}}({\\bf z}_t^{rst}|{\\bf x}_i)} [\\log p_{C_t}(y=y_t({\\bf x}_i)|{\\bf z}_t^{rst})] \\right\\},\n\\label{eq:inhibition}\n\\end{aligned}\n\\end{equation}\nwhere $C_t({\\bf z}_t^{rst})$ refers to classifier making a prediction for the $t$-th attribute using the remaining latent variables ${\\bf z}_t^{rst}$.\n$\\log p_{C_t}(y=y_t({\\bf x})|{\\bf z}_t^{rst})$ is a cross-entropy term for minimizing the classification error in Eq. (\\ref{eq:inhibition}).\nThis is an inhibition process since we want the remaining variables ${\\bf z}_t^{rst}$ as independent as possible to the attribute label in Eq. 
(\\ref{eq:sugvae}) below.\n\\begin{equation}\n\\begin{aligned}\n &\\max_{\\bm{\\theta}, \\bm{\\phi}} {\\bigg\\{} \\sum_{i=1}^n ELBO(\\bm{\\theta}, \\bm{\\phi}; {\\bf x}_i) \\\\\n &+ \\sum_{t=1}^T \\left[{\\mathcal{L}}_{Excitation}(\\bm{\\phi}, t) - {\\mathcal{L}}_{Inhibition} (\\bm{\\phi},t) \\right] {\\bigg\\}}.\n\\label{eq:sugvae}\n\\end{aligned}\n\\end{equation}\n\nNotice in Eq. (\\ref{eq:sugvae}) the minus sign in front of the term ${\\mathcal{L}}_{Inhibition} (\\bm{\\phi}, t)$ for maximization which is an adversarial term to make ${\\bf z}_t^{rst}$ as uninformative to attribute $t$ as possible, by pushing the best possible classifier $C_t$ to be the least discriminative.\nThe formulation of Eq. (\\ref{eq:sugvae}) bears certain similarity to that in domain-adversarial neural networks \\cite{ganin2016domain} in which the label classification is minimized with the domain classifier being adversarially maximized. Here, however, we respectively encourage and discourage different parts of the features to make the same type of classification. \n\n\\section{Experiments}\n\\label{Experiments}\n\nIn this section, we first present qualitative and quantitative results demonstrating our proposed unsupervised Guided-VAE (Figure \\ref{fig:model}a) capable of disentangling latent embedding more favorably than previous disentangle methods \\cite{higgins2017beta, dupont2018learning, kim2018disentangling} on MNIST dataset \\cite{lecun2010mnist} and 2D shape dataset \\cite{dsprites17}. We also show that our learned latent representation improves classification performance in a representation learning setting. Next, we extend this idea to a supervised guidance approach in an adversarial excitation and inhibition fashion, where a discriminative objective for certain image properties is given (Figure \\ref{fig:model}b) on the CelebA dataset \\cite{liu2015faceattributes}. Further, we show that our method is architecture agnostic, applicable in a variety of scenarios such as image interpolation task on CIFAR 10 dataset \\cite{cifar10} and a few-shot classification task on Omniglot dataset \\cite{lake2015human}.\n\n\\subsection{Unsupervised Guided-VAE}\n\n\\subsubsection{Qualitative Evaluation}\n\nWe present qualitative results on the MNIST dataset first by traversing latent variables received affine transformation guiding signal in Figure \\ref{fig:mnist}. Here, we applied the Guided-VAE with the bottleneck size of 10 (i.e. the latent variables ${\\bf z} \\in \\mathbb{R}^{10}$). The first latent variable $z_{1}$ represents the rotation information, and the second latent variable $z_{2}$ represents the scaling information. The rest of the latent variables ${\\bf z}_{3:10}$ represent the content information. Thus, we present the latent variables as ${\\bf z} = ({\\bf z}_{def}, {\\bf z}_{cont}) = ({\\bf z}_{1:2}, {\\bf z}_{3:10})$.\n\nWe compare traversal results of all latent variables on MNIST dataset for vanilla VAE \\cite{kingma2013auto}, $\\beta$-VAE \\cite{higgins2017beta}, JointVAE \\cite{dupont2018learning} and our Guided-VAE ($\\beta$-VAE, JointVAE results are adopted from \\cite{dupont2018learning}). While $\\beta$-VAE cannot generate meaningful disentangled representations for this dataset, even with controlled capacity increased, JointVAE can disentangle class type from continuous factors. Our Guided-VAE disentangles geometry properties rotation angle at $z_{1}$ and stroke thickness at $z_{2}$ from the rest content information ${\\bf z}_{3:10}$. 
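\n\nReturning to the supervised objective in Eq. (\\ref{eq:sugvae}), a minimal PyTorch-style sketch of the excitation and inhibition terms for a single binary attribute is given below; the classifier architectures, the binary cross-entropy parameterization of $\\log p(y|\\cdot)$, and the alternating update scheme are our illustrative assumptions.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nbce = nn.BCEWithLogitsLoss()\n\ndef excitation_inhibition_losses(z, y_t, w_t, C_t, t_idx=0):\n    # z: (N, d) latent codes; y_t: (N,) float labels in {0., 1.}\n    # w_t: classifier acting on the single excited dimension z_t\n    # C_t: adversarial classifier acting on the remaining dimensions\n    z_t = z[:, t_idx:t_idx + 1]\n    z_rst = torch.cat([z[:, :t_idx], z[:, t_idx + 1:]], dim=1)\n    loss_exc = bce(w_t(z_t).squeeze(1), y_t)   # minimized by encoder and w_t\n    loss_inh = bce(C_t(z_rst).squeeze(1), y_t) # minimized by C_t only\n    return loss_exc, loss_inh\n\n# Alternating updates (illustrative):\n# 1) update C_t to minimize loss_inh (best predictor of y_t from z_rst);\n# 2) update the encoder (and w_t) to minimize\n#        -ELBO + loss_exc - loss_inh,\n#    so the encoder is rewarded when even the best C_t cannot recover y_t.\n\\end{verbatim}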
\n\nTo assess the disentangling ability of Guided-VAE against various baselines, we create a synthetic 2D shape dataset following \\cite{dsprites17, higgins2017beta} as a common way to measure the disentanglement properties of unsupervised disentangling methods. The dataset consists 737,280 images of 2D shapes (heart, oval and square) generated from four ground truth independent latent factors: $x$-position information (32 values), $y$-position information (32 values), scale (6 values) and rotation (40 values). This gives us the ability to compare the disentangling performance of different methods with given ground truth factors. We present the latent space traversal results in Figure \\ref{fig:dsprites}, where the results of $\\beta$-VAE and FactorVAE are taken from \\cite{kim2018disentangling}. Our Guided-VAE learns the four geometry factors with the first four latent variables where the latent variables ${\\bf z} \\in \\mathbb{R}^{6} = ({\\bf z}_{def}, {\\bf z}_{cont}) = ({\\bf z}_{1:4}, {\\bf z}_{5:6})$. We observe that although all models are able to capture basic geometry factors, the traversal results from Guided-VAE are more obvious with fewer factors changing except the target one. \n\n\\begin{figure}\n\\begin{center}\n\\scalebox{0.85}{\n\\begin{tabular}{c}\n\\hspace{1.2em} $\\beta$-VAE \\hspace{8.6em} FactorVAE\\\\\n\\includegraphics[width=0.52\\textwidth]{.\/figures\/2dshape_factor.jpg}\\\\\n\\hspace{4.1em} VAE \\hspace{7.2em} Guided-VAE (Ours)\\\\\n\\includegraphics[width=0.52\\textwidth]{.\/figures\/2dshape_ours.jpg}\n\n\\end{tabular}\n}\n\\caption{\\small \\textbf{Comparison of qualitative results on 2D shape.} First row: originals. Second row: reconstructions. Remaining rows: reconstructions of latent traversals across each\nlatent dimension. In our results, $z_1$ represents the $x$-position information, $z_2$ represents the $y$-position information, $z_3$ represents the scale information and $z_4$ represents the rotation information. }\n\\label{fig:dsprites}\n\\vspace{-8mm}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[!htp]\n\n\\begin{center}\n\\scalebox{0.9}{\n\\begin{tabular}{c} \n\\includegraphics[width=0.5\\textwidth]{.\/figures\/Celeba_2.png}\\\\\n\\hspace{1.0em} Gender \\hspace{8.0em} Smile\n\\end{tabular}\n}\n\\caption{\\small\\textbf{Comparison of Traversal Result learned on CelebA:} Column 1 shows traversed images from male to female. Column 2 shows traversed images from smiling to no-smiling. The first row is from \\cite{higgins2017beta} and we follow its figure generation procedure.}\n\\label{fig:Comparison_CelebA}\n\\end{center}\n\\vspace{-5mm}\n\\end{figure}\n\\vspace{-3mm}\n\\subsubsection{Quantitative Evaluation}\n\nWe perform two quantitative experiments with strong baselines for disentanglement and representation learning in Table \\ref{tab:disentangle_results} and \\ref{tab:classification-mnist-methods}. 
We observe significant improvement over existing methods in terms of {\\em disentanglement} measured by Z-Diff score \\cite{higgins2017beta}, SAP score \\cite{kumar2017variational}, Factor score \\cite{kim2018disentangling} in Table \\ref{tab:disentangle_results}, and representation {\\em transferability} based on classification error in Table \\ref{tab:classification-mnist-methods}.\n\n\\begin{table}\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{l | cccc}\n\\textbf{Model ($d_{\\bf z}=6$)} & {Z-Diff $\\uparrow$} & {SAP $\\uparrow$} & {Factor $\\uparrow$} \\\\\n\\hline\n\\textbf{\\textsc{VAE} \\cite{kingma2013auto}}\n & 78.2 & 0.1696 & 0.4074 \\\\\n\\textbf{\\textsc{$\\beta$-VAE ($\\beta$=2)\\cite{higgins2017beta}}}\n & 98.1 & 0.1772 & 0.5786 \\\\\n\\textbf{\\textsc{FactorVAE ($\\gamma$=5) \\cite{kim2018disentangling}}} \n & 92.4 & 0.1770 & 0.6134 \\\\\n\\textbf{\\textsc{FactorVAE ($\\gamma$=35) \\cite{kim2018disentangling}}} \n & 98.4 & 0.2717 & 0.7100 \\\\\n\\textbf{\\textsc{$\\beta$-TCVAE ($\\alpha$=1,$\\beta$=5,$\\gamma$=1) \\cite{chen2018isolating}}} \n & 96.8 & 0.4287 & 0.6968 \\\\\n\\hline\n\\textbf{\\textsc{Guided-VAE (Ours)}} \n & \\textbf{99.2} & 0.4320 & 0.6660 \\\\ \n\\textbf{\\textsc{Guided-$\\beta$-TCVAE (Ours)}} \n & 96.3 & \\textbf{0.4477} & \\textbf{0.7294} \\\\ \n\\end{tabular}\n}\n\\caption{\\small \\textbf{Disentanglement:} Z-Diff score, SAP score, and Factor score over unsupervised disentanglement methods on 2D Shapes dataset. [$\\uparrow$ means higher is better]}\n\\label{tab:disentangle_results}\n\\end{center}\n\\vspace{-2mm}\n\\end{table}\n\nAll models are trained in the same setting as the experiment shown in Figure \\ref{fig:dsprites}, and are assessed by three disentangle metrics shown in Table \\ref{tab:disentangle_results}. An improvement in the Z-Diff score and Factor score represents a lower variance of the inferred latent variable for fixed generative factors, whereas our increased SAP score corresponds with a tighter coupling between a single latent dimension and a generative factor. Compare to previous methods, our method is orthogonal (due to using a side objective) to most existing approaches. $\\beta$-TCVAE \\cite{chen2018isolating} improves $\\beta$-VAE \\cite{higgins2017beta} based on weighted mini-batches to stochastic training. 
Our Guided-$\\beta$-TCVAE further improves the results in all three disentangle metrics.\n\n\\begin{table}\n\\begin{center}\n\\scalebox{0.7}{\n\\begin{tabular}{l | ccc}\n\\textbf{Model } & {$d_{\\bf z} = 16$ $\\downarrow$} & {$d_{\\bf z} = 32$ $\\downarrow$} & {$d_{\\bf z} = 64$ $\\downarrow$} \\\\\n\\hline\n\\textbf{\\textsc{VAE} \\cite{kingma2013auto}}\n & 2.92\\%$\\pm$0.12 & 3.05\\%$\\pm$0.42 & 2.98\\%$\\pm$0.14\\\\\n\\textbf{\\textsc{$\\beta$-VAE($\\beta$=2)}\\cite{higgins2017beta}} \n & 4.69\\%$\\pm$0.18 & 5.26\\%$\\pm$0.22 & 5.40\\%$\\pm$0.33 \\\\\n\\textbf{\\textsc{FactorVAE($\\gamma$=5)} \\cite{kim2018disentangling}} \n & 6.07\\%$\\pm$0.05 & 6.18\\%$\\pm$0.20 & 6.35\\%$\\pm$0.48 \\\\\n\\textbf{\\textsc{$\\beta$-TCVAE ($\\alpha$=1,$\\beta$=5,$\\gamma$=1)} \\cite{chen2018isolating}} \n & 1.62\\%$\\pm$0.07 & 1.24\\%$\\pm$0.05 & 1.32\\%$\\pm$0.09 \\\\\n\\hline\n\\textbf{\\textsc{Guided-VAE (Ours)}} \n & 1.85\\%$\\pm$0.08 & 1.60\\%$\\pm$0.08 & 1.49\\%$\\pm$0.06 \\\\ \n\\textbf{\\textsc{Guided-$\\beta$-TCVAE (Ours)}} \n & \\textbf{1.47\\%$\\pm$0.12 } & \\textbf{1.10\\%$\\pm$0.03 } & \\textbf{1.31\\%$\\pm$0.06} \\\\\n\\end{tabular}\n}\n\\caption{\\small \\textbf{Representation Learning:} Classification error over unsupervised disentanglement methods on MNIST. [$\\downarrow$ means lower is better]{\\scriptsize \\textsuperscript{$\\dagger$} The 95 \\% confidence intervals from 5 trials are reported.}}\n\\label{tab:classification-mnist-methods}\n\\end{center}\n\\vspace{-5mm}\n\\end{table}\n\n\\begin{figure*}[!htp]\n\n\\begin{center}\n\\scalebox{0.9}{\n\\begin{tabular}{ccc}\n\n\\includegraphics[width=0.31\\textwidth]{.\/figures\/Bald.png} &\n\\includegraphics[width=0.31\\textwidth]{.\/figures\/Bangs.png} &\n\\includegraphics[width=0.31\\textwidth]{.\/figures\/Black_Hair.png}\\\\\n(a) Bald &\n(b) Bangs &\n(c) Black Hair\\\\\n\\vspace{+1mm}\n\\includegraphics[width=0.31\\textwidth]{.\/figures\/Mouth_Slightly_Open.png} &\n\\includegraphics[width=0.31\\textwidth]{.\/figures\/Receding_Hairlines.png} &\n\\includegraphics[width=0.31\\textwidth]{.\/figures\/Young.png}\\\\\n(d) Mouth Slightly Open &\n(e) Receding Hairlines &\n(f) Young\\\\\n\n\\end{tabular}\n\n}\n\\caption{\\small\\textbf{Latent factors learned by Guided-VAE on CelebA:} Each image shows the traversal results of Guided-VAE on a single latent variable which is controlled by the lightweight decoder using the corresponding labels as signal.}\n\\label{fig:celeba_appendix}\n\\end{center}\n\\end{figure*}\n\nWe further study representation transferability by performing classification tasks on the latent embedding of different generative models. Specifically, for each data point ($\\mathbf{x}, y$), we use the pre-trained generative models to obtain the value of latent variable ${\\bf z}$ given input image ${\\bf x}$. Here ${\\bf z}$ is a $d_{{\\bf z}}$-dim vector. We then train a linear classifier $f(\\cdot)$ on the embedding-label pairs $\\{({\\bf z}, y)\\}$ to predict the class of digits. For the Guided-VAE, we disentangle the latent variables ${\\bf z}$ into deformation variables ${\\bf z}_{def}$ and content variables ${\\bf z}_{cont}$ with same dimensions (i.e. $d_{{\\bf z}_{def}}=d_{{\\bf z}_{cont}}$). We compare the classification errors of different models with multiple choices of dimensions of the latent variables in Table \\ref{tab:classification-mnist-methods}. 
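A minimal sketch of this linear-probe protocol is given below; the random-projection `encode` stand-in and the data arrays are placeholders, and in the actual experiment the frozen encoder of each trained model (e.g. the mean of the approximate posterior) would be applied to MNIST images before fitting the linear classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_error(encode, X, y, X_test, y_test):
    """Train a linear classifier on frozen latent codes and report test error."""
    Z, Z_test = encode(X), encode(X_test)
    clf = LogisticRegression(max_iter=1000).fit(Z, y)
    return 1.0 - clf.score(Z_test, y_test)

# Toy stand-in encoder (random projection) just to make the sketch executable.
rng = np.random.default_rng(0)
proj = rng.normal(size=(784, 32))
encode = lambda X: X @ proj

X, y = rng.normal(size=(2000, 784)), rng.integers(0, 10, size=2000)
X_te, y_te = rng.normal(size=(500, 784)), rng.integers(0, 10, size=500)
print(f"probe error: {probe_error(encode, X, y, X_te, y_te):.3f}")
```

The same routine, run once per model and per choice of latent subset, produces the error rates reported in the classification tables.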
In general, VAE \\cite{kingma2013auto}, $\\beta$-VAE \\cite{higgins2017beta}, and FactorVAE \\cite{kim2018disentangling} do not benefit from an increase in the latent dimensions, and $\\beta$-TCVAE \\cite{chen2018isolating} shows evidence that its discovered representation is more useful for the classification task than existing methods. Our Guided-VAE achieves competitive results compared to $\\beta$-TCVAE, and our Guided-$\\beta$-TCVAE can further reduce the classification error to $1.1\\%$ when $d_{\\bf z} = 32$, which is $1.95\\%$ lower than the baseline VAE. \n\nMoreover, we study the effectiveness of ${\\bf z}_{def}$ and ${\\bf z}_{cont}$ in Guided-VAE separately to reveal the different properties of the latent subspace. We follow the same classification task procedures described above but use different subsets of latent variables as input features for the classifier $f(\\cdot)$. Specifically, we compare results based on the deformation variables ${\\bf z}_{def}$, the content variables ${\\bf z}_{cont}$, and the whole latent variables ${\\bf z}$ as the input feature vector. To conduct a fair comparison, we still keep the same dimensions for the deformation variables ${\\bf z}_{def}$ and the content variables ${\\bf z}_{cont}$. Table \\ref{tab:classification-mnist-disentanglement} shows that the classification errors on ${\\bf z}_{cont}$ are significantly lower than the ones on ${\\bf z}_{def}$, which indicates the success of disentanglement, as the content variables should determine the class of digits. In contrast, the deformation variables should be invariant to the class. Besides, when the dimensions of the latent variables ${\\bf z}$ are higher, the classification errors on ${\\bf z}_{def}$ increase while the ones on ${\\bf z}_{cont}$ decrease, indicating a better disentanglement between deformation and content with increased latent dimensions.\n\n\n\\begin{table}\n\\begin{center}\n\\scalebox{0.65}{\n\\begin{tabular}{l | cccccc}\n\\textbf{Model } & $d_{{\\bf z}_{def}}$ & {$d_{{\\bf z}_{cont}}$} & {$d_{{\\bf z}}$} & {${\\bf z}_{def}\\,\\, Error$ $\\uparrow$} & {${\\bf z}_{cont} \\,\\, Error$ $\\downarrow$} & {${\\bf z} \\,\\,Error$ $\\downarrow$}\\\\\n\\hline\n\\textbf{\\textsc{Guided-VAE}} \n &8 & 8 & 16 & 27.1\\% & 3.69\\% & 2.17\\% \\\\\n\\,\\,\\,\\,\\,\\, &16 &16 & 32 & 42.07\\% & 1.79\\% & 1.51\\% \\\\\n\\,\\,\\,\\,\\,\\, &32 & 32 & 64 & 62.94\\% & 1.55\\% & 1.42\\% \\\\\n\n\\end{tabular}\n}\n\\caption{\\small \\textbf{Classification on MNIST using different latent variables as features:} Classification error over Guided-VAE with different dimensions of latent variables. [$\\uparrow$ means higher is better, $\\downarrow$ means lower is better]}\n\\label{tab:classification-mnist-disentanglement}\n\\end{center}\n\\vspace{-9mm}\n\\end{table}\n\n\\vspace{-2mm}\n\\subsection{Supervised Guided-VAE}\n\n\\subsubsection{Qualitative Evaluation}\nWe first present qualitative results on the CelebA dataset \\cite{liu2015faceattributes} by traversing latent variables of attributes shown in Figure \\ref{fig:Comparison_CelebA} and Figure \\ref{fig:celeba_appendix}. In Figure \\ref{fig:Comparison_CelebA}, we compare the traversal results of Guided-VAE with $\\beta$-VAE for two labeled attributes (gender, smile) in the CelebA dataset. The bottleneck size is set to 16 ($d_{\\bf z} = 16$). We use the first two latent variables $z_1, z_2$ to represent the attribute information, and the rest ${\\bf z}_{3:16}$ to represent the content information. 
During evaluation, we choose $z_t \\in \\{z_1, z_2\\}$ while keeping the remaining latent variables ${\\bf z}_t^{rst}$ fixed. Then we obtain a set of images through traversing $t$-th attribute (e.g., smiling to non-smiling) and compare them over $\\beta$-VAE. In Figure \\ref{fig:celeba_appendix}, we present traversing results on another six attributes.\n\n$\\beta$-VAE performs decently for the controlled attribute change, but the individual ${\\bf z}$ in $\\beta$-VAE is not fully entangled or disentangled with the attribute. We observe the traversed images contain several attribute changes at the same time. Different from our Guided-VAE, $\\beta$-VAE cannot specify which latent variables to encode specific attribute information. Guided-VAE, however, is designed to allow defined latent variables to encode any specific attributes. Guided-VAE outperforms $\\beta$-VAE by only traversing the intended factors (smile, gender) without changing other factors (hair color, baldness).\n\\vspace{-5mm}\n\n\\subsubsection{Quantitative Evaluation}\n\nWe attempt to interpret whether the disentangled attribute variables can control the generated images from the supervised Guided-VAE. We pre-train an external binary classifier for $t$-th attribute on the CelebA training set and then use this classifier to test the generated images from Guided-VAE. Each test includes $10,000$ generated images randomly sampled on all latent variables except for the particular latent variable $z_t$ we decide to control. As Figure \\ref{fig:supervised-quantitative} shows, we can draw the confidence-$z$ curves of the $t$-th attribute where $z=z_t\\in [-3.0, 3.0]$ with $0.1$ as the stride length. For the gender and the smile attributes, it can be seen that the corresponding $z_t$ is able to enable ($z_t < -1$) and disable ($z_t > 1$) the attribute of the generated image, which shows the controlling ability of the $t$-th attribute by tuning the corresponding latent variable $z_t$.\n\n\\begin{figure}[!htb]\n\\begin{center}\n\\vspace{-3mm}\n\\scalebox{0.9}{\n\\begin{tabular}{c}\n\\includegraphics[width=0.35\\textwidth]{.\/figures\/conf.pdf}\n\n\\end{tabular}\n}\n\\vspace{-1mm}\n\\caption{\n\\small Experts (high-performance external classifiers for attribute classification) prediction for being negatives on the generated images. We traverse $z_1$ (gender) and $z_2$ (smile) separately to generate images for the classification test.\n}\n\\label{fig:supervised-quantitative}\n\\end{center}\n\\vspace{-7mm}\n\\end{figure}\n\n\\vspace{-5mm}\n\\subsubsection{Image Interpolation}\n\\vspace{-2mm}\n\nWe further show the disentanglement properties of using supervised Guided-VAE on the CIFAR10 dataset. ALI-VAE borrows the architecture that is defined in \\cite{ALI}, where we treat $G_z$ as the encoder and $G_x$ as the decoder. This enables us to optimize an additional reconstruction loss. Based on ALI-VAE, we implement Guided-ALI-VAE (Ours), which adds supervised guidance through excitation and inhibition shown in Figure \\ref{fig:model}. ALI-VAE and AC-GAN \\cite{acgan} serve as a baseline for this experiment. \n\nTo analyze the disentanglement of the latent space, we train each of these models on a subset of the CIFAR10 dataset \\cite{cifar10} (Automobile, Truck, Horses) where the class label corresponds to the attribute to be controlled. We use a bottleneck size of 10 for each of these models. 
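The attribute-control sweep used in the quantitative evaluation above can be sketched as follows; the stand-in `decode` and `attr_classifier` functions are placeholders for the trained Guided-VAE decoder and the pre-trained external attribute classifier, and only the per-point sample count differs from the 10,000 images used in the text.

```python
import numpy as np

def confidence_curve(decode, attr_classifier, t, d_z, n_samples=10000,
                     grid=np.arange(-3.0, 3.01, 0.1), seed=0):
    """For each value of z_t on the grid, sample all other latent coordinates
    from the prior, decode, and record the classifier's mean predicted probability."""
    rng = np.random.default_rng(seed)
    curve = []
    for v in grid:
        z = rng.normal(size=(n_samples, d_z))   # prior samples for the free coordinates
        z[:, t] = v                             # clamp the controlled coordinate
        curve.append(attr_classifier(decode(z)).mean())
    return grid, np.asarray(curve)

# Hypothetical stand-ins so the sketch runs; a real run uses the trained decoder
# and a pretrained CelebA attribute classifier instead.
decode = lambda z: z                                        # identity "decoder"
attr_classifier = lambda x: 1.0 / (1.0 + np.exp(x[:, 0]))   # sigmoid of the first coordinate

grid, curve = confidence_curve(decode, attr_classifier, t=0, d_z=16, n_samples=200)
print(round(curve[0], 2), round(curve[-1], 2))   # confidence moves from ~1 to ~0 across the sweep
```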
We follow the training procedure mentioned in \\cite{acgan} for training the AC-GAN model and the optimization parameters reported in \\cite{ALI} for ALI-VAE and our model. For our Guided-ALI-VAE model, we add supervision through inhibition and excitation on $z_{1:3}$. \n\n\\begin{table}\n\\begin{center}\n\\scalebox{0.75}{\n\\begin{tabular}{l | cc}\n\\textbf{Model } & Automobile-Horse $\\downarrow$ & Truck-Automobile $\\downarrow$ \\\\\n\\hline\n\\textbf{\\textsc{AC-GAN} \\cite{acgan}} & 88.27 & 81.13 \\\\\n\\textbf{\\textsc{ALI-VAE}} \\textsuperscript{$\\dagger$} & 91.96 & 78.92 \\\\\n\\textbf{\\textsc{Guided-ALI-VAE (Ours)}} & \\textbf{85.43} & \\textbf{72.31} \\\\\n\\end{tabular}\n}\n\\caption{\\small \\textbf{Image Interpolation: } FID score measured for a subset of CIFAR10 \\cite{cifar10} with two classes each. [$\\downarrow$ means lower is better] \\textsuperscript{$\\dagger$} ALI-VAE is a modification of the architecture defined in \\cite{ALI} }\n\n\\label{tab:cifar-fid}\n\\end{center}\n\\end{table}\nTo visualize the disentanglement in our model, we interpolate the corresponding $z$, $z_{t}$ and $z_{t}^{rst}$ of two images sampled from different classes. The interpolation here is computed as a uniformly spaced linear combination of the corresponding vectors. The results in Figure \\ref{fig:traverse-Interpolation} qualitatively show that our model is successfully able to capture complementary features in $z_{1:3}$ and $z_{1:3}^{rst}$. Interpolation in $z_{1:3}$ corresponds to changing the object type. Whereas, the interpolation in $z_{1:3}^{rst}$ corresponds to complementary features such as color and pose of the object.\n\nThe right column in Figure \\ref{fig:traverse-Interpolation} shows that our model can traverse in $z_{1:3}$ to change the object in the image from an automobile to a truck. Whereas a traversal in $z_{1:3}^{rst}$ changes other features such as background and the orientation of the automobile. We replicate the procedure on ALI-VAE and AC-GAN and show that these models are not able to consistently traverse in $z_{1:3}$ and $z_{1:3}^{rst}$ in a similar manner. Our model also produces interpolated images in higher quality as shown through the FID scores \\cite{fid} in Table \\ref{tab:cifar-fid}. \n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{c}\n\\vspace{-2mm}\n\\includegraphics[width=0.48\\textwidth]{.\/figures\/cifar10_new.png}\n\\end{tabular}\n\\caption{\\small Interpolation of images in $z$, $z_{1:3}$ and $z_{1:3}^{rst}$ for AC-GAN, ALI-VAE and Guided-ALI-VAE (Ours).\n}\n\\label{fig:traverse-Interpolation}\n\\end{center}\n\\vspace{-8mm}\n\\end{figure}\n\n\n\n\\subsection{Few-Shot Learning}\nPreviously, we have shown that Guided-VAE can perform images synthesis and interpolation and form better representation for the classification task. Similarly, we can apply our supervised method to VAE-like models in the few-shot classification. Specifically, we apply our adversarial excitation and inhibition formulation to the Neural Statistician \\cite{edwards2016towards} by adding a supervised guidance network after the statistic network. The supervised guidance signal is the label of each input. We also apply the Mixup method \\cite{zhang2017mixup} in the supervised guidance network. However, we could not reproduce exact reported results in the Neural Statistician, which is also indicated in \\cite{korshunova2018bruno}. 
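As a concrete footnote to the image-interpolation experiments above, the sketch below implements the uniformly spaced linear interpolation restricted to a chosen subset of latent coordinates ($z$, $z_{1:3}$, or the complementary $z_{1:3}^{rst}$); decoding each interpolated code, which is omitted here, yields the corresponding image rows. The placeholder latent values and bottleneck size are for illustration only.

```python
import numpy as np

def interpolate_latents(z_a, z_b, idx, n_steps=8):
    """Uniformly spaced linear interpolation on the coordinates in `idx`;
    all remaining coordinates are held at their values from z_a."""
    alphas = np.linspace(0.0, 1.0, n_steps)[:, None]
    z_path = np.repeat(z_a[None, :], n_steps, axis=0)
    z_path[:, idx] = (1 - alphas) * z_a[idx] + alphas * z_b[idx]
    return z_path

z_a, z_b = np.zeros(10), np.ones(10)               # codes of two images from different classes
sup = [0, 1, 2]                                    # supervised coordinates z_{1:3}
rest = [i for i in range(10) if i not in sup]      # complementary coordinates z_{1:3}^{rst}
print(interpolate_latents(z_a, z_b, sup).shape)    # (8, 10): sweep the object-identity subspace
print(interpolate_latents(z_a, z_b, rest).shape)   # (8, 10): sweep the complementary factors
```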
For comparison on this few-shot task, we mainly consider results from Matching Nets \\cite{vinyals2016matching} and Bruno \\cite{korshunova2018bruno}, shown in Table \\ref{tab:Omniglot}. Although it cannot outperform Matching Nets, our proposed Guided Neural Statistician reaches performance comparable to Bruno (discriminative), where a discriminative objective is fine-tuned to maximize the likelihood of correct labels.\n\n\\begin{table}[!htb]\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{l | cc|cc}\n\\textbf{Model} & \\multicolumn{2}{c|}{\\textbf{5-way}} & \\multicolumn{2}{|c}{\\textbf{20-way}} \\\\\n\\textbf{Omniglot } & \\textbf{ 1-shot} & \\textbf{5-shot} & \\textbf{1-shot} & \\textbf{ 5-shot}\\\\\n\\hline\n\\textbf{\\textsc{Pixels} \\cite{vinyals2016matching}} & 41.7\\% &63.2\\% &26.7\\% & 42.6\\%\\\\\n\\textbf{\\textsc{Baseline Classifier} \\cite{vinyals2016matching}} & 80.0\\% &95.0\\% &69.5\\% & 89.1\\%\\\\\n\\textbf{\\textsc{Matching Nets} \\cite{vinyals2016matching}} & 98.1\\% &98.9\\% &93.8\\% & 98.5\\%\\\\\n\\textbf{\\textsc{Bruno} \\cite{korshunova2018bruno}} & 86.3\\% &95.6\\% &69.2\\% & 87.7\\%\\\\\n\\textbf{\\textsc{Bruno (discriminative)} \\cite{korshunova2018bruno}} & 97.1\\% &99.4\\% &91.3\\% & 97.8\\%\\\\\n\n\\hline\n\\textbf{\\textsc{Baseline} } & 97.7\\% &99.4\\% &91.4\\% & 96.4\\%\\\\\n\\textbf{\\textsc{Ours (discriminative)}} &97.8\\% &99.4\\% &92.1\\% &96.6\\%\\\\\n\\end{tabular}\n}\n\\caption{\\small \\textbf{Few-shot classification:} Classification accuracy for a few-shot learning task on the Omniglot dataset.}\n\\label{tab:Omniglot}\n\\vspace{-5mm}\n\\end{center}\n\\end{table}\n\n\n\n\n\\vspace{-4mm}\n\\section{Ablation Study}\n\\label{others}\n\n\\subsection{Deformable PCA}\nIn Figure \\ref{fig:pca}, we visualize the sampling results from PCA and $Dec_{sub}$. By adding a deformation layer to the PCA-like layer, we show that deformable PCA produces crisper samples than vanilla PCA. \n\n\\begin{figure}[!htb]\n\\begin{center}\n\\scalebox{0.7}{\n\\begin{tabular}{c}\n\\includegraphics[width=0.52\\textwidth]{.\/figures\/pca_image.pdf}\\\\\n \n\\end{tabular}\n\\vspace{-2mm}\n}\n\n\\caption{\\small (Top) Sampling result obtained from PCA. (Bottom) Sampling result obtained from learned deformable PCA (Ours).}\n\\label{fig:pca}\n\\end{center}\n\\end{figure}\n\n\\vspace{-6mm}\n\\subsection{Guided Autoencoder}\n\\vspace{-2mm}\n\nTo further validate our concept of ``guidance'', we introduce our lightweight decoder to the standard autoencoder (AE) framework. We conduct MNIST classification tasks using the same setting as in Table \\ref{tab:classification-mnist-methods}. As Table \\ref{tab:classification-mnist-ae} shows, our lightweight decoder improves the representation learned in the autoencoder framework. A VAE-like structure is indeed not needed if the purpose is just reconstruction and representation learning; however, VAE is of great importance in building generative models. 
The modeling of the latent space of ${\\bf z}$ with e.g., Gaussian distributions is again important if a probabilistic model is needed to perform novel data synthesis (e.g., the images shown in Figure \\ref{fig:Comparison_CelebA} and Figure \\ref{fig:celeba_appendix}).\n\n\\begin{table}[h!]\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{l | ccc}\n\\textbf{Model } & {$d_{\\bf z} = 16$ $\\downarrow$} & {$d_{\\bf z} = 32$ $\\downarrow$} & {$d_{\\bf z} = 64$ $\\downarrow$} \\\\\n\\hline\n\\textbf{\\textsc{Auto-Encoder (AE)}}\n & \\textbf{1.37}\\%$\\pm$0.05 & 1.06\\%$\\pm$0.04 & 1.34\\%$\\pm$0.04 \\\\\n\\textbf{\\textsc{Guided-AE (Ours)}}\n & 1.46\\%$\\pm$0.06 & \\textbf{1.00}\\%$\\pm$0.06 & \\textbf{1.10}\\%$\\pm$0.08 \\\\\n\\end{tabular}\n}\n\\caption{\\footnotesize Classification error over AE and Guided-AE on MNIST.}\n\\label{tab:classification-mnist-ae}\n\\end{center}\n\\vspace{-7mm}\n\\end{table}\n\n\\subsection{Geometric Transformations}\n\\vspace{-2mm}\n\nWe conduct an experiment by excluding the geometry-guided part from the unsupervised Guided-VAE. In this way, the lightweight decoder is just a PCA-like decoder but not a deformable PCA. The setting of this experiment is exactly the same as described in Figure \\ref{fig:mnist}. The bottleneck size of our model is set to 10 of which the first two latent variables $z_1, z_2$ represent the rotation and scaling information separately. As a comparison, we drop off the geometric guidance so that all 10 latent variables are controlled by the PCA-like light decoder. As shown in Figure \\ref{fig:ablation} (a) (b), it can be easily seen that geometry information is hardly encoded into the first two latent variables without a geometry-guided part.\n\n\\begin{figure}[!htb]\n\\begin{center}\n\\scalebox{0.9}{\n\\begin{tabular}{cc}\n\\includegraphics[width=0.23\\textwidth]{.\/figures\/DeformablePCA_ablation.png}&\n\\includegraphics[width=0.23\\textwidth]{.\/figures\/sample_9.png}\n\\\\ \n\\includegraphics[width=0.23\\textwidth]{.\/figures\/mnist_ablation_4.png}&\n\\includegraphics[width=0.23\\textwidth]{.\/figures\/mnist_4.png}\n\\\\\n\\small{(a) Unsupervised Guided-VAE} & \\small{(b) Unsupervised Guided-VAE}\\\\\n\\small{without Geometric Guidance} & \\small{with Geometric Guidance}\\\\\n\n\\includegraphics[width=0.23\\textwidth]{.\/figures\/ablation_DEI_2.png}\n& \\hspace{-2mm}\n\\includegraphics[width=0.23\\textwidth]{.\/figures\/ablation_DEI_1.png}\n\\\\ \n\\includegraphics[width=0.23\\textwidth]{.\/figures\/celeba_ablation_smile.png}\n& \\hspace{-2mm}\n\\includegraphics[width=0.23\\textwidth]{.\/figures\/celeba_smile.png}\n\\\\ \n\\small{(c) Supervised Guided-VAE} & \\small{(d) Supervised Guided-VAE}\\\\\n\\small{without Inhibition} & \\small{with Inhibition}\\\\\n\\end{tabular}\n}\n\\vspace{1mm}\n\\caption{\\small{Ablation study on Unsupervised Guided-VAE and Supervised Guided-VAE}}\n\\label{fig:ablation}\n\\end{center}\n\\vspace{-5mm}\n\\end{figure}\n\n\\vspace{-2mm}\n\\subsection{Adversarial Excitation and Inhibition}\n\\vspace{-2mm}\nWe study the effectiveness of adversarial inhibition using the exact same setting described in the supervised Guided-VAE part. 
As shown in Figure \\ref{fig:ablation} (c) and (d), Guided-VAE without inhibition changes the smiling and sunglasses while traversing the latent variable controlling the gender information.\nThis problem is alleviated by introducing the excitation-inhibition mechanism into Guided-VAE.\n\n\\vspace{-2mm}\n\\section{Conclusion}\n\\vspace{-2mm}\nIn this paper, we have presented a new representation learning method, guided variational autoencoder (Guided-VAE), for disentanglement learning. Both unsupervised and supervised versions of Guided-VAE utilize lightweight guidance to the latent variables to achieve better controllability and transparency. Improvements in disentanglement, image traversal, and meta-learning over the competing methods are observed. Guided-VAE maintains the backbone of VAE and it can be applied to other generative modeling applications. \\\\\n\\hspace{-1mm}{\\bf Acknowledgment}. \\small{This work is funded by NSF IIS-1618477, NSF IIS-1717431, and Qualcomm Inc. ZD is supported by the Tsinghua Academic Fund for Undergraduate Overseas Studies. We thank Kwonjoon Lee, Justin Lazarow, and Jilei Hou for valuable feedbacks.}\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSystems consisting of diffusing particles or random walks interacting by means of a long-range potential are non-equilibrium systems, which describe different phenomena in physics, chemistry and biology. From a physical perspective they are used to study metastable supercooled liquids \\cite{Supercool, Dean}, \nmelting in type-II high-temperature superconductors \\cite{Nelson}, electron transport in quasi-one-dimensional conductors \\cite{Quasi1d} and carbon nanotubes \\cite{Nanotube}. From a chemical viewpoint the interest in these systems lies in the fact that some diffusion-controlled reactions processes rely on the diffusion of long-range interacting particles which react after they are closer than an effective capture distance. Some examples include radiolysis in liquids \\cite{ParkDeem}, \n electronic energy transfer reactions \\cite{Klafter} and a large variety of chemical reactions in amorphous media \\cite{RDreview}. From a biological viewpoint, the investigation of these systems is helpful in understanding \nthe dynamics of interacting populations in terms of predator-prey models \\cite{Krap-Redner, Bray} and \n membrane inclusions with curvature-mediated interactions \\cite{Reynwar1, Reynwar2}. \n\nVicious walks (VW) are a class of non-intersecting random walks, where the process is terminated upon the first encounter between walkers \\cite{Fisher}. \nThe fundamental physical quantity describing VW is the survival probability which is defined as the probability that no pair of particles has collided up to time $t$. Diffusing particles or walks that are not allowed to meet each other but otherwise remain free, we call pure VW. The behavior of pure VW is generally well-known. The survival probability for such a system has been computed in the framework of renormalization group theory in arbitrary spatial dimensions up to two-loop order \\cite{Cardy, Bhat1, Bhat2}. These approximations have been confirmed by exact results available in one dimension from the solution of the boundary problem of the Fokker-Plank equation \\cite{Krap-Redner, Bray}, using matrix model formalism \\cite{Katori} and Bethe ansatz technique \\cite{Derrida}. 
On the other hand, the effect of long-range interactions has been extensively investigated in many-body problems.\nIt has been shown that the existence of long-range disorder leads to a rich phase diagram with interesting crossover effects \\cite{Halp, Bla, Prud}. If the potential is Coulomb-like ($\\sim r^{-1-\\sigma}$) then systems in one dimension \n behave similarly to a one-dimensional version of a Wigner crystal \\cite{Wigncrist}\nfor $\\sigma<0$ and similarly to a Luttinger liquid for $\\sigma\\ge 0$ \\cite{Mor-Zab}. If the potential is logarithmic then in the long-time limit the dynamics of particles \nare described by non-intersecting paths \\cite{Hinrichsen,Katori}.\nThe generalization of VW that includes the effect of long-range interactions has not attracted much attention in the literature. To our knowledge there has been only one attempt to study long-range VW \\cite{Bhat3}. Here the authors considered the case of a long-range potential decaying as $gr^{-\\sigma-d}$, where $g$ is a coupling constant. It was shown by applying the Wilson momentum shell renormalization group that only one of the critical exponents characterizes long-range VW.\nFor a specific value of $\\sigma$ ($\\sigma=2-d$) they show that the exponent $\\gamma$, which determines the decay of the asymptotic survival probability with time, is given by the expression:\n\\beq\\label{sp_bhat}\n\\gamma= \\frac{p(p-1)}{4}u_1,\n\\end{equation}\n where $p$ is the number of VW in the system, $u_1=(\\varepsilon\/2+[(\\varepsilon\/2)^2 +g]^{1\/2})$ and $\\varepsilon=2-d$. There are limitations to the above approach. First, it is restricted to a single form of the potential ($\\sim r^{-2}$), while systems such as membrane inclusions and chemical reactions have different power-law potentials. Second, it considers identical walkers, but one would like to have results when the diffusion constants of the walkers are different. Finally, it is not convenient to compute higher-loop corrections using the Wilson formalism.\n\nIn this paper we reconsider the problem of long-range VW using methods of Callan-Symanzik renormalized field theory in\nconjunction with an expansion in $\\varepsilon=2-d$ and $\\delta=2-d-\\sigma$. We note that it is more convenient to compute logarithmic and higher-loop corrections by using this method. We derive the asymptotics of the survival and reunion probabilities for all values of the parameters $(\\sigma,d)$ for the first time. \n\n\\begin{table}\n\\vspace{0.3cm}\n\\caption{\\label{tab:table1}%\nLarge-time asymptotics of the one-loop survival probability of $p$ sets of particles with $n_j$ particles in each set, in different regions of the $\\sigma-d$ plane. We refer to Figure 3 for the specific values of $\\sigma$ and $d$ in each region.}\n\\begin{ruledtabular}\n\\begin{tabular}{ccc}\nRegion & Survival probability & \\\\\n\\hline\n I & $t^{-(d-2)\/2}+t^{-(d+\\sigma-2)\/2}$ & \\\\\n II & $t^{-\\frac{1}{2}\\sum_{ij}n_in_j\\varepsilon}$ & \\\\\n III & $t^{-\\frac{u_1}{2}\\sum_{ij}n_in_j (1+\\delta\/2\\log t)} $& \\\\\n IV & $t^{-(d-2)\/2}$ & \\\\\n V, $d=2$ & $t^{-\\frac{\\sqrt{g_0}}{2}\\sum_{ij}n_in_j (1+\\delta\/2\\log t)} $ & \\\\\n VI, $\\sigma=2-d$ & $t^{-\\frac{u_1}{2}\\sum_{ij}n_in_j}$ \\footnote{$u_1$ is defined by the formula (\\ref{sp_bhat}).} & \\\\\n \n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\nIn this paper we will show that there are several regions in the $\\sigma-d$ plane in which the critical exponent behaves differently. Our results are summarized in Table I. 
We note that results on the line $\\sigma+d=2$ have been obtained before \\cite{Bhat3}. Regions I and IV correspond to Gaussian or mean-field behavior (see Figure 3). In region II we found that the system reproduces pure VW. Logarithmic corrections in region III and at the short-range upper critical dimension $d=2$ have been obtained as series expansion in $\\delta=2-\\sigma-d$.\n\nThe remainder of this paper is organized as follows: \nSection~\\ref{sec:model} reviews the field theoretic formulation of long range VW and describes Feynman rules and dimensionalities of various quantities. In section~\\ref{sec:LR} we derive the value of all fixed points and study their stability. Section~\\ref{sec:results} presents results for the critical exponents and logarithmic corrections of various dynamical observables. Section~\\ref{sec:concl} contains our concluding remarks. In Appendix A we give the details of the computation of some integrals that appear in Section~\\ref{sec:LR}.\n\n\n\n\\section{Modelling VW with long-range interations}\n\\label{sec:model}\n\n\nAs the starting point of the description of our model we consider $p$ sets of diffusing particles or random walks with $n_i$ particles in each set $i=1\\dots p$, with a pairwise intraset interaction which includes a local or short-range part and a non-local or long-range tail. The local part determines the vicious nature of walks: if two walks belonging to the different sets are brought close to each other, both are annihilated. Walks belonging to the same set are supposed to be independent. At $t=0$ all particles start in the vicinity of the origin. We are interested in the survival and reunion probabilities of walks at time $t>0$.\n\n\nA continuum description of a system of $N$ Brownian particles $X_i$ with two-body interactions is simplified by the coarse-graining procedure in which a large number of microscopic degrees of freedom are averaged out. Their influence is simply modelled as a Gaussian noise-term in the Langevin equations. A convenient starting point for the description of the stochastic dynamics is the path-integral formalism. Then the system under consideration is modeled by the classical action \n\\beq\\label{langLR} S= \\int\\limits_0^{+\\infty}dt \\left(\\sum\\limits_{i=1}^N \\dot X_i^2\/(2D_i) + \\sum\\limits_{i0$, the short-range term naively dominates the long-range term and we expect to have the behavior of the system similar to the case of pure VW. \nWe will reserve the symbol $\\varepsilon$ ($\\varepsilon=2-d$) to denote deviations from the short-range critical dimension $d_c=2$, and $\\delta$ ($\\delta=2-d-\\sigma$) for the deviations from the long-range critical dimension $d_{c}(\\sigma)$.\nIf $\\sigma=0$ then the critical dimension of the long-range part coincides with the short-range part and we have the non-trivial correction to the asymptotic behavior due to long-range interactions.\n This boundary separates mean-field or Gaussian behavior from long-range behavior. \nFor $\\sigma<0$ the long-range term dominates the short-range term and we expect to have non-trivial corrections to the behavior of the system. 
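The bookkeeping of these regions, and the corresponding leading one-loop survival exponent from Table I, can be packaged into a small helper; the sketch below is only that bookkeeping. Reading the sum over set indices as a sum over distinct pairs is our assumption (it reproduces the exponent gamma = p(p-1)u_1/4 quoted in the introduction for p sets of single walkers), and logarithmic factors and boundary cases are handled only crudely.

```python
import numpy as np

def region(sigma, d):
    """Classify a point of the (sigma, d) plane into the regions of Figure 3,
    using eps = 2 - d and delta = 2 - d - sigma; boundary lines handled crudely."""
    eps, delta = 2.0 - d, 2.0 - d - sigma
    if d == 2:
        return "V"
    if abs(delta) < 1e-12:
        return "VI"
    if eps < 0 and delta < 0:
        return "I"
    if eps > 0 and delta < 0:
        return "II"
    if eps > 0 and delta > 0:
        return "III"
    return "IV"                     # eps < 0, delta > 0

def gamma_one_loop(sigma, d, g, n):
    """Leading one-loop decay exponent, G(t) ~ t**(-gamma), read off Table I
    (log factors omitted); n is the list of set sizes n_j, g the long-range coupling.
    The pair sum over distinct sets is our interpretation of sum_{ij} n_i n_j."""
    eps = 2.0 - d
    pair_sum = sum(n[i] * n[j] for i in range(len(n)) for j in range(i + 1, len(n)))
    reg = region(sigma, d)
    if reg == "I":
        return min(d - 2, d + sigma - 2) / 2.0    # slower-decaying of the two mean-field terms
    if reg == "IV":
        return (d - 2) / 2.0
    if reg == "II":
        return 0.5 * pair_sum * eps
    u1 = eps / 2.0 + np.sqrt((eps / 2.0) ** 2 + g)
    return 0.5 * pair_sum * u1                    # regions III, V, VI at leading order

print(region(-0.5, 1.0), round(gamma_one_loop(-0.5, 1.0, g=0.2, n=[1, 1]), 3))
```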
\n\nNow we consider diagrammatic representation elements of model (\\ref{hamil}).\nIn zero-loop approximation the vertex 4-point function takes a simpler form after Laplace-Fourier transformation: \n\\beq \\Gamma^{(2,2)}_{ij}(s,p) = V_{ij}(p_1+p_2) \\delta(\\sum\\limits_k p_k).\\end{equation}\nThe same transformation applied to the bare propagator yields:\n\\beq\\label{prop} \\Gamma^{(1,1)}_{j}(s,p) = (s+D_ip^2)^{-1} \n\\end{equation}\nWe note that there are no vertices in (\\ref{hamil}) that produce diagrams which dress the propagator, implying there is no field renormalization. As a consequence the bare propagator (\\ref{prop}) is the full propagator for the theory. Feynman rules are summarized in Figure 1. There are two vertices in the theory: one is a short-range $\\lambda$-vertex and another is a long-range momentum dependent $g$-vertex. Each external line of the vertex corresponds to a functionally independent field. The propagator is formed by contracting appropriate lines from different vertices. We recall the propagator is the correlation function of $\\phi_i$ and $\\phi^{\\dagger}_i$ fields only. \n\nPhysical observables are computed with the help of correlation functions. The probability that $p$ sets of particles with $n_i$ particles in each set start at the proximity of the origin and finish at $x_{i,\\alpha_i}$ ($i$ index enumerates different sets and $\\alpha_i$ index enumerates particles in set $i$) without intersecting each other can be obtained by generalizing eqn (\\ref{sp}). \nIn the field theoretical formulation, this probability becomes the following correlation function:\n\\begin{equation}\n\\label{sp-G}\n G(t)\n=\\int\\prod_{i=1}^p\\prod_{\\alpha_i=1}^{n_i}d^dx_{i,\\alpha_i}\n\\langle\\phi_i(t,x_{i,\\alpha_i})(\\phi^{\\dagger}_i(0,0))^{n_i}\\rangle,\n\\end{equation}\nIn the Feynman representation it is the vertex with $2N$ ($N=\\sum_j n_j$) external lines. In the first order of the perturbation theory one needs to contract these lines with corresponding lines of the vertices in Figure 1. Since there are many independent fields in the correlation function (\\ref{sp-G}) this operation can be done in many ways. It yields a combinatorial factor, $n_in_j$, in front of each diagram, which is the number of ways of constructing a loop from the $n_i$ lines of type $i$ and $n_j$ lines of type $j$ on the one hand and one line of type $i$ and one line of type $j$ on the other hand. From the next section we will see that the survival probability scales as $G(t)\\sim t^{-\\gamma}$, where $\\gamma$ is the critical exponent. If all walks are free, $\\gamma=0$. In the presence of interactions we expect $\\gamma$ to be a universal quantity that does not depend on the intensity of the short-range interaction $\\lambda_{ij}$. 
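The meaning of this exponent can also be illustrated independently of the field theory by a direct Monte Carlo estimate for pure (short-range) vicious walkers in d=1, where the survival probability is known to decay as t^{-p(p-1)/4}. The sketch below is only such an illustrative estimator (lattice walkers, nearest-neighbour steps, a run is killed at the first pairwise meeting); the starting spacing, run count, and fitting window are arbitrary choices.

```python
import numpy as np

def survival_curve(p=3, t_max=200, n_runs=20000, spacing=4, seed=1):
    """Monte Carlo estimate of G(t) for p pure vicious walkers in d=1:
    independent +/-1 steps; a configuration 'dies' at the first pairwise meeting."""
    rng = np.random.default_rng(seed)
    pos = np.tile(np.arange(p) * spacing, (n_runs, 1))     # (n_runs, p) starting points
    alive = np.ones(n_runs, dtype=bool)
    G = np.zeros(t_max)
    for t in range(t_max):
        pos += rng.choice((-1, 1), size=pos.shape)
        collided = (np.diff(np.sort(pos, axis=1), axis=1) == 0).any(axis=1)
        alive &= ~collided
        G[t] = alive.mean()
    return G

G = survival_curve()
ts = np.arange(40, 200)
slope, _ = np.polyfit(np.log(ts), np.log(G[ts]), 1)
print(f"fitted decay exponent {-slope:.2f}; exact value p(p-1)/4 = 1.50 for p = 3")
```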
It is convenient to introduce the so called truncated correlation function which is obtained from (\\ref{sp-G}) by factoring out external lines:\n\\beq\\label{tcf} \\Gamma(t) = G(t)\/(\\Gamma^{(1,1)})^{2N}\\end{equation}\n\n Another physical observable, the reunion probability, is defined as the probability that $p$ sets of particles with $n_i$ particles in each set start at the proximity of the origin and without colliding into each other finish at the proximity of some point at time $t$: \n\\beq\\label{rp}\n R(t)\n=\\int d^dx \\prod_{i=1}^p \\langle\\phi_i(t,x)^{n_i}(\\phi^{\\dagger}_i(0,0))^{n_i}\\rangle,\n\\end{equation}\nIn the Feynman representation it is depicted as the watermelon diagram with $2N$ stripes.\nWe note that if the theory is free this expression is the product of free propagators and at the large-time limit the return probability scales as $R_{\\cal O}(t) \\sim t^{-(N-1)d\/2}$.\nIf interactions are taken into account it becomes $R(t) \\sim t^{-(N-1)d\/2 -2\\gamma}$, where $\\gamma$ is survival probability exponent. The reason that it enters with the factor 2 is the following. If we cut a watermelon diagram of the reunion probability correlation function in the middle then it produces two vertex diagrams with $2N$ external lines of the survival probability correlation function. As a result the reunion probability is the product of two survival probabilities. It remains true in all orders of perturbation theory. For a rigorous proof we refer to \\cite{Bhat2}.\n\n\\section{The Renormalization of observables}\n\\label{sec:LR}\n\n\n \\begin{figure}[b]\n\\includegraphics[scale=0.25]{fs.eps}\n\\caption{\\label{fig:sigmad} The critical behavior of vicious walks with long-range interactions in the different regions of the $(\\sigma,d)$ plane. Region I and IV correspond to the mean field short-range behavior, in region II will be critical short-range behavior, region III is the long-range behavior. The lines $d=2$ and $\\sigma+d=2$ represent regions V and VI respectively.}\n\\end{figure}\n\nWhile computing correlation functions like (\\ref{sp-G}) perturbatively one faces divergent integrals when $d=d_c$.\nThe convenient scheme developed for dealing with these divergences follows Callan-Symanzik renormalization-group analysis \\cite{Zinn, Amit}. Within this scheme we start with the bare correlation function \n$G(t;\\lambda,g)$, where $\\lambda =\\{\\lambda_{ij}\\}$, and $g=\\{g_{ij}\\}$ denote the set of bare short-range and long-range coupling constants. \nIn the renormalized theory it becomes $G_{R}(t;\\lambda_R,g_R,\\mu)$. From dimensional analysis it follows that \n\\beq\\label{diman} G_{R}(t;\\lambda_R,g_R,\\mu) = G_{R}(t\\mu;\\lambda_R,g_R),\\end{equation}\nwhere $\\mu$ is the renormalization scale. The scale invariance leads to the expression \n\\beq\\label{scaleinv} G_{R}(t;\\lambda_R,g_R,\\mu) = Z(\\lambda_R, g_R, \\mu)G(t;\\lambda,g).\\end{equation}\nHere functions $Z$ are chosen in such a way that $G_{R}(t,\\lambda_R,g_R,l)$ remains finite when the cut-off is removed at each order in a series expansion of $\\lambda_R$, $g_R$, $\\varepsilon$ and $\\delta$. 
From the fact that $G(t,\\lambda,g)$ does not depend on the renormalization scale $\\mu$ we get the Callan-Symanzik equation \n\\beq\\label{cseq} \\left(\\mu\\frac{\\pd}{\\pd \\mu} + \\beta_g \\frac{\\pd}{\\pd g} + \\beta_u \\frac{\\pd}{\\pd u} - \\gamma \\right) G_{R}=0, \\end{equation} \nwhere the $\\beta$-functions are defined by \n\\beq\\label{beta} \\beta_{\\lambda} (\\lambda_R, g_R)= \\mu \\frac{\\pd}{\\pd \\mu} \\lambda_R\\qquad \\beta_{g} (\\lambda_R, g_R)=\\mu \\frac{\\pd}{\\pd \\mu} g_R \\end{equation} and the function $\\gamma$ by \\beq\\label{gamma}\\gamma(\\lambda_R, g_R) = \\mu \\frac{\\pd}{\\pd \\mu} \\ln Z.\\end{equation}\nThe renormalization group functions are understood as the expansion in double series of coupling constants $\\lambda$ and $g$ and deviations from the critical dimension $\\varepsilon$ and $\\delta$. We take $\\delta = O(\\varepsilon)$.\nThe coefficient $Z(\\lambda_R, g_R, \\mu)$ is fixed by the normalization conditions. It is more convenient to impose these conditions on the Laplace transform of the truncated correlation function (\\ref{tcf}). One sets the following condition then \n\\beq\\label{norm} \\Gamma_{R}(\\mu) =1,\\end{equation}\nwhen $s=\\mu$. \nWe note that the same multiplicative renormalization factor $Z$ yields $\\Gamma$ finite. From this fact one can infer that \n\\beq\\label{scalGamma}\\Gamma(\\mu;\\lambda,g) = Z(\\mu;\\lambda,g)^{-1}.\\end{equation}\nIf we express unrenormalized couplings in terms of renormalized ones (\\ref{scalGamma}) we will obtain the equation for finding $Z$ explicitly. \n\n\nThe equation (\\ref{cseq}) can be solved by the method of characteristics. Within this method we let couplings depend on the scale which is parametrized by $\\mu(x)=x\\mu $. Here $x$ is introduced as a parametrization variable of the RG flow and is not to be confused with position. Henceforth $x$ will refer to this parametrization variable. We introduce running couplings $\\bar \\lambda(x)$ and $\\bar g(x)$. They satisfy the equations \\beq x\\frac{d}{dx}\\bar g(x)=\\beta_g(\\bar \\lambda(x),\\bar g(x))\\quad x\\frac{d}{dx}\\bar \\lambda(x)=\\beta_{\\lambda}(\\bar \\lambda(x),\\bar g(x)).\\end{equation} The renormalized value should be defined by the initial conditions $\\bar \\lambda(1)=\\lambda_R$ and $\\bar g(1)=g_R$. the solution of the equation is then\n\\beq\\label{solRG} G_{R}(t)\n= e^{\\int\\limits_1^{\\mu t} \\gamma(\\bar \\lambda(x), \\bar g(x))dx\/x}G_{R}(\\mu^{-1};\\bar\\lambda(\\mu t),\\bar g(\\mu t),\\mu) \\end{equation}\n\nNext we calculate the first-order contribution to the renormalized vertices.\nThe $\\lambda$-vertex is renormalized by the set of diagrams that are shown in Figure 2. We notice that there are no diagrams producing the momentum dependent $g$-vertex in the theory (\\ref{hamil}). \nThis statement is the corollary of the fact that only independent fields of power one enter into the expression of the vertex and there are no higher powers of fields. Also we keep in mind that the renormalized couplings are defined by the value of the vertex function taken at zero external momenta. It produces the following expression:\n\\beq\\label{Run}\\begin{cases} \n\\lambda_{Rij} &= \\lambda_{ij} -\\frac{1}{2} (\\lambda_{ij}^2 I_1 + 2\\lambda_{ij} g_{ij}I_2 + g_{ij}^2I_3)\\\\\ng_{Rij} &= g_{ij} \\\\\n\\end{cases} \\end{equation}\n where $I_k=I_k(\\sigma;D_i,D_j)$ are one-loop integrals corresponding to the diagrams $a$, $b$, $c$ in the Figure 2 respectively. 
Using the Feynman rules we can explicitly write them down:\n \\beq\\label{int} I_k = \\int \\frac{d^dq}{(2\\pi)^d} \\frac{q^{(k-1)\\sigma}}{2s+(D_i+D_j)q^2}, \\quad k=1,2,3. \\end{equation}\n We will use dimensional regularization procedure to compute these integrals. The details of the computation are summarized in Appendix A. We note that integrals will diverge logarithmically at different values of the spatial dimension $d$. For this reason it leads to different critical behavior in different regions of the $\\sigma-d$ plane (see Figure 3). These regions correspond to four possibilities for $\\varepsilon=2-d$ and $\\delta=2-d-\\sigma$ to be positive or negative. Only if $\\delta=O(\\varepsilon)$ or, in other words, if both $\\varepsilon$ and $\\delta$ are infinitesimally small but the ratio $\\varepsilon\/\\delta$ is finite we expect non-zero fixed points of the renormalization group flow. Similar approximation have been used before \\cite{Halp} but for different models with long-range disorder. It allows us to follow the standard procedure of deriving the $\\beta$-functions which consists of two steps. \n \n First, we express unrenormalized couplings in terms of the renormalized. For the short-range coupling constant $\\lambda$ it can be done by solving the quadratic equation in (\\ref{Run}). Expanding the square root and keeping terms up to the second order we infer that \n\\beq\\label{unR}\\begin{cases} \n\\lambda_{ij} &= \\lambda_{Rij} +\\frac{1}{2} (\\lambda_{Rij}^2 \\frac{a_d}{\\varepsilon} + 2\\lambda_{Rij} g_{Rij}\\frac{b_d}{\\delta} + g_{Rij}^2\\frac{c_d}{2\\delta-\\varepsilon})\\\\\ng_{ij} &= g_{Rij} \\\\\n\\end{cases} \\end{equation}\nwhere $a_d$, $b_d$ and $c_d$coefficients have been found explicitly in Appendix A. Now we introduce dimensionless renormalized couplings \n\\beq\\label{dless} \\bar{g}_{Rij} = a_d(2s)^{-\\delta\/2} \\quad \\bar{\\lambda}_{Rij} = b_d(2s)^{-\\varepsilon\/2}.\\end{equation} An important observation is that $c_da_d=b_d^2$ which can be verified by explicit substitution (see Appendix A). Multiplying the first and second equation in (\\ref{unR}) by the factors $a_d$ and $b_d$ respectively, and using redefinitions (\\ref{dless}) we can condense all pre-factors in the right hand side of the equations into the dimensionless constants. \n\nSecond, we differentiate equations (\\ref{unR}) with respect to the scaling parameter $\\mu$. Using definitions (\\ref{beta}) and the fact that bare couplings do not depend on the scale, we derive \n\\beq\\label{betalmg}\\begin{cases} \n\\beta_{\\lambda,ij} &= -\\varepsilon \\bar\\lambda_{Rij} + (\\bar\\lambda_{Rij}+ \\bar g_{Rij})^2\\\\\n\\beta_{g,ij} &= -\\delta \\bar g_{Rij} \\\\\n\\end{cases} \\end{equation}\nwhere the right hand side is understood as the leading contribution to the $\\beta$-functions from the double expansions in \n$\\lambda,g $ and $\\varepsilon,\\delta$. \n From (\\ref{betalmg}) we see that it is convenient to introduce new coupling constants $u_{Rij}= \\bar \\lambda_{Rij} +\\bar g_{Rij}$. After this step the renormalization group equations read\n\n \\beq\\label{flow}\\begin{cases} \n\\beta_{u,ij} &= -\\varepsilon u_{Rij} + u_{Rij}^2 -g_{Rij} \\\\\n\\beta_{g,ij} &= -\\delta g_{Rij} \\\\\n\\end{cases} \\end{equation}\nWe note that in the last equations $g$ coupling constant has been redefined $\\sigma \\bar g_{Rij}\\to g_{Rij}$.\n\nFixed points are zeros of the $\\beta$-functions. If $\\delta \\neq 0$ then the last equation in (\\ref{flow}) is zero only when $g_*=0$. 
Then the first equation has two solutions $u=0$ and $u=\\varepsilon$. If $\\delta=0$ then $g$ plays the role of a parameter and the fixed points are determined by the roots of the quadratic equation\n\\beq 0=-\\varepsilon u + u^2 -g\\end{equation}\nwhich are real if $g\\ge-(\\varepsilon\/2)^2$ and we find\n\\beq u_{1,2} = \\varepsilon\/2 \\pm \\sqrt{(\\varepsilon\/2)^2 + g}.\\end{equation}\nAll fixed points are listed in the Table II. The stability of these fixed points is determined by the matrix of partial derivatives \n\\beq\\label{stab} \\beta_* = - \n \\left( \\begin{array}{cc}\n\\pd\\beta_u\/\\pd u & \\pd\\beta_u\/\\pd g \\\\\n\\pd\\beta_g\/\\pd u & \\pd\\beta_g\/\\pd g \n \\end{array} \\right)_{u=u_*, g=g_*} \\end{equation}\nEigenvalues are listed in the Table \\ref{tab:fp}. The Gaussian fixed point is stable in all directions for $\\varepsilon<0$ and $\\delta<0$ which corresponds to region I in Figure 3. In this region we find both short-range(pure VW) and long-range mean-field behavior depending on the sign of $\\sigma$. \nOn the contrary, for $\\varepsilon>0$ and $\\delta>0$ we find that the Gaussian fixed point is unstable(irrelevant) in all directions and the short-range (pure VW) fixed point is stable(relevant) only in $u$-direction. It means that long-range interactions will play a leading role. This region corresponds to region III in Figure 3. Next for $\\varepsilon>0$ and $\\delta<0$ we find that the short-range (pure VW) fixed point is stable in all directions. It means that the system is insensitive to the long-range tail. This region corresponds to region II in Figure 3. Finally for $\\varepsilon<0$ and $\\delta>0$ we find that the short-range (pure VW) fixed point is unstable in all directions and the system will be described by mean-field at long time. \n\n\n\n\n\\begin{table}[b]\n\\caption{\\label{tab:fp}\nFixed points for flow equations (\\ref{flow}) and the corresponding eigenvalues $(\\lambda_1,\\lambda_2)$ of the stability matrix (\\ref{stab}). We note that $u_1$ and $u_2$ are values of the }\n\\begin{ruledtabular}\n\\begin{tabular}{ccc}\nFixed point & $(u_*, g_*)$ & $(\\lambda_1, \\lambda_2)$ \\\\\n\\hline\n Gaussian & $(0,0)$ & $(\\varepsilon,\\delta)$ \\\\\nPure VW & $(\\varepsilon,0)$ & $(-\\varepsilon,\\delta)$ \\\\\n LR stable & $(u_{1},0)$ & $(-\\sqrt{\\varepsilon^2-4g},0)$ \\\\\n LR unstable & $(u_{2},0)$ & $(\\sqrt{\\varepsilon^2-4g},0)$\n \\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\n\\section{Calculation of critical exponents and discussion}\n\\label{sec:results}\n\nHere we describe our method of computing critical exponents. It is based on the formula (\\ref{scalGamma}) from the previous section. First, we obtain the leading divergent part of the correlation function. \nThe renormalized correlation function depends on the scale $\\mu$ but it appears in all formulas in combination with time: $\\mu t$. \nSecond, since we have found the bare coupling constant as a function of renormalized (dressed) couplings we express correlation function in terms of dressed couplings. \nFinally using the normalization condition (\\ref{norm}) and the definition (\\ref{gamma}) we differentiate $Z$ with respect to $\\mu \\pd\/\\pd\\mu$ to obtain the exponent $\\gamma$. The poles should cancel after this operation. 
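A small numerical illustration of this fixed-point structure is sketched below. It integrates the one-loop running coupling in s = log x using the sign convention for which u_1 is the large-x attractor (the convention matching the explicit solutions quoted in the next section), so the overall sign of the flow is an assumption of the sketch; it also evaluates the resulting survival exponent for p sets of single walkers, which reproduces the form quoted in the introduction.

```python
import numpy as np

def run_coupling(u0, eps, g, log_x_max=12.0, n=6000):
    """Integrate x du/dx = eps*u - u**2 + g in s = log x with simple Euler steps;
    with this sign convention the flow settles at the stable root u_1 for large x."""
    u, ds = u0, log_x_max / n
    for _ in range(n):
        u += (eps * u - u * u + g) * ds
    return u

eps, g = 1.0, 0.3                                # e.g. d = 1 and a repulsive long-range coupling
u1 = eps / 2 + np.sqrt((eps / 2) ** 2 + g)
u2 = eps / 2 - np.sqrt((eps / 2) ** 2 + g)
print(round(run_coupling(0.05, eps, g), 4), round(u1, 4))   # flow from near the Gaussian point ends at u_1
print(round(run_coupling(u2 + 1e-3, eps, g), 4))            # u_2 is a repeller: a small kick also ends at u_1

# survival exponent for p sets of single walkers: gamma = (1/2) sum_{i<j} n_i n_j * u_1,
# which for n_i = 1 reduces to p(p-1) u_1 / 4 as in the introduction.
p = 3
print("gamma =", round(0.5 * (p * (p - 1) / 2) * u1, 4))
```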
\n\n In section 2 it was explained that the truncated correlation function in the one-loop approximation is given by the formula\n\\beq\\label{oneloop} \\Gamma(t;\\lambda,g)= 1- \\sum_{i,j} n_i n_j\\left(\\lambda_{ij} I_1 +g_{ij}I_2\\right).\\end{equation}\nHere the integrals are the same as in (\\ref{int}). \n\nWe start our analysis with region I. Notice that the truncated correlation function $\\Gamma(t)$ and the survival probability $G(t)$ have similar large-time behavior. We use a large-momentum cut-off to compute the integrals $I_1$ and $I_2$ as in formula (\\ref{mfint}) in Appendix A. The renormalization of the coupling constants is trivial in this case.\nTherefore the leading contribution to the survival probability is given by \n\\beq G(t)\\sim t^{(2-d)\/2}+g_0t^{(2-d-\\sigma)\/2},\\end{equation} \nwhere $g_0$ is a non-universal coefficient and we will not need its exact value. We notice that if $\\sigma>0$ the second term will decay faster than the first term and in the long-time limit it will produce the same behavior as mean-field pure VW. On the other hand, if $\\sigma<0$\nthe first term will decay faster and long-range interactions will play the leading role. Many authors have observed similar behavior in various systems with long-range defects \\cite{Halp, Bla, Prud}. Intuitively, if the potential falls off rapidly with distance, then the system effectively behaves as one with a short-range potential, in which particles interact only when they are close to each other. \n\nRegion IV exhibits similar behavior. Now the integral $I_2$ is computed with the help of dimensional regularization (\\ref{intres}) and the integral $I_1$ remains the same. From (\\ref{diman}) one can infer that\nthe survival probability scales as \n\\beq G(t)\\sim t^{(2-d)\/2}.\\end{equation}\nShort-range behavior dominates because the running coupling constant flows towards the Gaussian fixed point in the long-time limit, which is the only stable fixed point in this region. This result is exact regardless of the number of loops one takes into account.\n\n\nIn Region II the computation is as follows. \n\\beq\\label{rtwo} \\ln Z = \\sum n_in_j\\left( \\lambda_{ij}\\frac{a_d}{\\varepsilon} + g_{ij}t^{(2-d-\\sigma)\/2}\\right),\\end{equation}\nso plugging the result from (\\ref{ad}) into (\\ref{rtwo}) we obtain at the fixed point $(\\lambda_*=\\varepsilon, g=0)$\n\\beq\\label{gtwo} \\gamma = -\\frac{1}{2}\\sum n_in_j\\varepsilon \\end{equation}\nand we reproduce the pure VW behavior. This result reflects the fact that the renormalization-group trajectories run to the stable pure VW fixed point. It is in agreement with the results obtained by Katori in \\cite{Katori} for $d=1$ and logarithmic intraset particle interactions. The irrelevance of the long-range interaction in lower dimensions is a typical phenomenon observed in various out-of-equilibrium interacting\nparticle systems. \n \n\nWe now consider regions III, V and VI. The integrals in (\\ref{oneloop}) are computed via dimensional regularization. Taking the inverse of (\\ref{oneloop}) and then the logarithm, one obtains at leading order: \n\\beq\\label{lnZ} \\log Z = \\sum n_in_j\\left(\\lambda_{ij}\\frac{a_d}{\\varepsilon} + g_{ij}\\frac{b_d}{\\delta}\\right)\\end{equation}\nwhere $a_d$ and $b_d$ are defined in Appendix A in (\\ref{ad}) and (\\ref{bd}).\nWe note that after taking the derivative the poles in (\\ref{lnZ}) will cancel in the limit of $\\delta=O(\\varepsilon)$. Also one recalls the expansion (\\ref{unR}) and the redefinitions in (\\ref{dless}). 
Using (\\ref{gamma}) we show that the expression for the function $\\gamma$ which determines the critical exponent takes the form\n\\beq \\gamma =- \\frac{1}{2}\\sum_{ij} n_in_j u_R \\end{equation}\n Evaluated at the stable fixed point $u_1=\\varepsilon\/2+ \\sqrt{(\\varepsilon\/2)^2 +g}$, it gives the following result:\n\\beq\\label{gthree} \\gamma = -\\frac{1}{2}\\sum_{ij} n_in_j u_1, \\end{equation}\nand the survival probability scales as $G(t)\\sim t^{\\gamma}$. \n\n\nWe will now find the logarithmic corrections to this scaling law. The running coupling constant can be found from the flow equation (\\ref{flow}): $\\bar{g}(x) = x^{-\\delta} g$. In the case $\\delta,\\varepsilon=0$ (the intersection of regions V and VI) the flow equation for $\\bar u(x)$ is \n\\beq\\label{flowu} x\\frac{d\\bar u(x)}{dx} = -\\bar{u}^2(x) +g \\end{equation} and the solution is\n\\beq\\label{tanh} \\bar{u}(x) = \\sqrt{g} \\tanh (\\sqrt{g} \\log x +\\phi_0)\\sim \\sqrt{g} \\tanh (\\sqrt{g} \\log x),\\end{equation}\nwhere $\\phi_0$ is the initial condition and we do not need its exact form. After plugging this expression into (\\ref{solRG}) we infer \n\\beq\\label{gammaint} \\int\\limits_1^{\\mu t} \\gamma(\\bar u, \\bar g)\\frac{dx}{x} \\sim \\log (\\cosh (\\sqrt{g} \\log \\mu t))\\end{equation}\n Thus the survival probability is\n\\beq\\label{dnulld2} G(t) \\sim \\cosh (\\sqrt{g} \\log t)^{-\\frac{1}{2}\\sum n_in_j } \\end{equation}\nIn the limit of large time $\\cosh (\\sqrt{g} \\log t)\\sim t^{\\sqrt{g}} $, implying $\\gamma =- \\frac{1}{2}\\sum_{ij} n_in_j \\sqrt{g}$, which is consistent with equation (\\ref{gthree}). \nFor a negative coupling constant $g<0$ the solution in (\\ref{tanh}) becomes\n\\beq\\label{tan} \\bar{u}(x) \\sim -\\sqrt{|g|} \\tan (\\sqrt{|g|} \\log x)\\end{equation}\nThe integral (\\ref{gammaint}) is divergent if $t>\\exp(\\pi\/2\\sqrt{|g|})$, which leads to the result that the survival probability is zero beyond this time. For smaller times one has $G(t) \\sim \\cos (\\sqrt{|g|} \\log t)^{-\\frac{1}{2}\\sum n_in_j }$. Thus, up to the one-loop approximation, this implies that if the walks are attracted to each other then all of them annihilate at some finite time. This might be a signature of faster-than-power-law decay, and we expect corrections to this behavior at higher loop order.\n\nNext we consider the case when $\\varepsilon=0$ and $\\delta\\ne 0$ but $\\delta$ remains small, i.e., region V. The flow equation for $\\bar u(x)$ is \n\\beq\\label{flowug} x\\frac{d\\bar u(x)}{dx} = -\\bar{u}^2(x) +g x^{-\\delta}\\end{equation} and the solution can be found by the method of perturbation. \nUp to first order\n\\beq \\bar u(x) = \\sqrt{g}\\tanh (\\sqrt{g} \\log x)+\\delta\\sqrt{g}\\log(x)\\tanh (\\sqrt{g} \\log x)\\end{equation}\nAfter plugging this expression into eqn (\\ref{solRG}) we infer \n\\beq \\int\\limits_1^{\\mu t} \\gamma(\\bar u, \\bar g)\\frac{dx}{x} \\sim -\\frac{1}{2}\\sum n_in_j\\left(\\log(t^{\\sqrt{g}}) +\\frac{1}{2}\\delta\\sqrt{g} \\log^2 (t)\\right) \\end{equation}\nTherefore we have the correction to the survival probability in the form\n \\beq G\\sim t^{-\\frac{1}{2}\\sum n_in_j \\sqrt{g} (1 + \\delta\/2\\log t)}\\end{equation}\n\n\n\nNow we extend our analysis to the case when $\\varepsilon>0$, corresponding to regions III and VI. The evolution of the coupling constant is \\beq x\\frac{d}{dx}\\bar u(x) = \\varepsilon\\bar u-\\bar u^2 +g x^{\\delta}\\end{equation}\nWe choose the ansatz in the form $\\bar u(x) = u_0(x)+\\delta v(x)$. 
For $\\delta=0$ (i.e. region VI) the equation for $u_0(x)$ reads \n\\beq \\label{u0ex} x\\frac{d}{dx} u_0(x) = \\varepsilon u_0- u_0^2 +g \\end{equation}\nand we reproduce the result (\\ref{gthree}). We now extend to the case where $\\varepsilon,\\delta>0$ (region III). Here we will need the exact solution to (\\ref{u0ex}) to find the corrections:\n\\beq u_0(x) = \\frac{Cx^{u_1-u_2} u_1 +u_2}{1+Cx^{u_1-u_2}}, \\end{equation} where $C =(u_R-u_2)\/(u_1-u_R)$. The logarithmic correction follows from the form of the perturbation. The equation for $v(x)$ is \n\\beq x\\frac{d}{dx}v(x) = \\varepsilon v-2u_0v -g \\log x \\end{equation}\n The solution can be found explicitly as a combination of hypergeometric functions. \n In the most interesting case, $\\varepsilon=1$ ($d=1$), the hypergeometric functions are degenerate and become linear functions. The corrections to the integral then read\n\\beq \\int\\limits_1^{\\mu t} \\gamma dx\/x\\sim \\frac{1}{2}\\delta u_1 \\log^2( t)\n+\\log( t)\\, t^{-(u_1-u_2)} \\end{equation}\nIn the limit of large time only the first term contributes to the exponent and the survival probability scales as \n\n\\beq G\\sim t^{-\\frac{1}{2}\\sum n_in_j u_1 (1 + \\delta\/2\\log t)}\\end{equation}\n\n\n\\section{Conclusion}\n\\label{sec:concl}\nIn summary, we studied long-range vicious walks using the methods of Callan-Symanzik renormalized field theory.\n Our work confirms the previously known RG fixed-point structure, including the stability regions.\n We calculated \nthe critical exponents for all values of $\\sigma$ and $d$ to first order in the $\\varepsilon$ expansion and to all orders in the $\\delta$ expansion, which had hitherto been known only for $d+\\sigma=2$. Our results indicate that, depending on the exact values of $d$ and $\\sigma$, the system can be dominated by either short-range (pure VW) or long-range behavior.\nIn addition, we calculated the leading logarithmic corrections for several dynamical observables that are typically measured in simulations.\n\nWe hope that our work stimulates further interest in long-range vicious walks. It would be interesting to see further simulation results for the critical exponents for $d>1$ and for the logarithmic corrections. Also, it would be interesting to have analytical and numerical results for other universal quantities such as scaling functions and amplitudes. \n\n\n\\section{Acknowledgments}\n AG would like to acknowledge UC Merced start-up funds and a James S. McDonnell Foundation Award for Studying Complex Systems.\n\n\n\n\n\n\n\\section*{Appendix A}\n\nThe effective four-point function (one-particle irreducible, 1PI) that appeared in (\\ref{Run}) is composed of the usual short-range vertex and the new momentum-dependent vertex. This gives rise to the integrals (\\ref{int}). The first integral ($\\mu=1$) has been evaluated in \\cite{Cardy} by using the alpha representation $1\/(q^2+s) = \\int_0^{+\\infty} d\\alpha e^{i(q^2+s)\\alpha}$, and the result is \\beq I_1=K_d (2s)^{-\\varepsilon\/2}\\Gamma(\\varepsilon\/2).\\end{equation} We notice that since there is no angular dependence one can perform $d-1$ integrations and be left with a one-dimensional integral. To compute this integral we use the formula \\cite{GR}:\n\n\\beq \\int\\limits_{0}^{+\\infty} dx \\frac{x^{\\nu-1}}{P+Qx^2} = \\frac{1}{2P} \\left(\\frac{P}{Q}\\right)^{\\nu\/2} \\Gamma\\left(\\frac{\\nu}{2}\\right)\\Gamma\\left(1-\\frac{\\nu}{2}\\right)\\end{equation}\nWe see that in our case $P=s$, $Q=(D_i+D_j)$ and $\\nu =d+(\\mu-1)\\sigma$. 
This immediately gives the result:\n\n\\begin{align} I_{\\mu} =& \\frac{K_d}{2} \\left(\\frac{1}{(D_i+D_j)}\\right)^{\\frac{d+(\\mu-1)\\sigma}{2}} s^{\\frac{d+(\\mu-1)\\sigma}{2}-1} \\times\n\\nonumber\\\\\n&\\times\\Gamma\\left(\\frac{d+(\\mu-1)\\sigma}{2}\\right)\\Gamma\\left(1-\\frac{d+(\\mu-1)\\sigma}{2}\\right)\n\\,, \\label{intres}%\n\\end{align}\nwhere $K_d = 2^{d-1}\\pi^{-d\/2}\\Gamma^{-1}(d\/2)$ is the surface area of $d$-dimensional unit sphere. \n\nIt is convenient to define \\beq\\label{ad} a_d = \\frac{K_d}{2} \\left(\\frac{2}{(D_i+D_j)}\\right)^{d\/2} (2s)^{-\\varepsilon\/2}\\end{equation}\n\\beq\\label{bd} b_d = \\frac{K_d}{2} \\left(\\frac{2}{(D_i+D_j)}\\right)^{(d+\\sigma)\/2} (2s)^{-\\delta\/2}\\end{equation}\n\\beq\\label{cd} c_d = \\frac{K_d}{2} \\left(\\frac{2}{(D_i+D_j)}\\right)^{(d+2\\sigma)\/2} (2s)^{-(2\\delta-\\varepsilon)\/2}\\end{equation}\nSo integral $I_{\\mu}$ in the limit of $\\delta=O(\\varepsilon)$ can be written as:\n\\beq I_1=\\frac{a_d}{\\varepsilon},\\quad I_2=\\frac{b_d}{\\delta},\\quad I_3=\\frac{c_d}{2\\delta-\\varepsilon}.\\end{equation}\nWe used an expansion $\\Gamma(\\varepsilon\/2) \\sim 2\/\\varepsilon$ for small $\\varepsilon$.\nAn important property of coefficients (\\ref{ad}) - (\\ref{cd}) is that \\beq c_da_d=b^2_d,\\end{equation}\nwhich can be verified by direct substitution. \n\n\n\nNow we compute mean field integrals:\n\n\\beq\\label{mfint} I_{\\mu} = \\int d^dq dt q^{d+\\sigma}\\exp(-t(D_i+D_j)q^2)\\sim t^{-(d+\\sigma-2)\/2}, \\end{equation}\nwhere we assumed that the large momentum cut-off is imposed and corresponding coupling constants have been renormalized. The non-universal coefficient is not important. \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nDistributional reinforcement learning \\cite{jaquette73markov,sobel82variance,white88mean,morimura2010nonparametric,c51} focuses on the intrinsic randomness of returns within the reinforcement learning (RL) framework. As the agent interacts with the environment, irreducible randomness seeps in through the stochasticity of these interactions, the approximations in the agent's representation, and even the inherently chaotic nature of physical interaction \\cite{yu2016more}. Distributional RL aims to model the distribution over returns, whose mean is the traditional value function, and to use these distributions to evaluate and optimize a policy.\n\nAny distributional RL algorithm is characterized by two aspects: the parameterization of the return distribution, and the distance metric or loss function being optimized. Together, these choices control assumptions about the random returns and how approximations will be traded off. Categorical DQN \\citep[C51]{c51} combines a categorical distribution and the cross-entropy loss with the Cram\\'er-minimizing projection \\cite{rowland2018analysis}. For this, it assumes returns are bounded in a known range and trades off mean-preservation at the cost of overestimating variance.\n\nC51 outperformed all previous improvements to DQN on a set of 57 Atari 2600 games in the Arcade Learning Environment \\cite{bellemare13arcade}, which we refer to as the Atari-57 benchmark. 
Subsequently, several papers have built upon this successful combination to achieve significant improvements to the state-of-the-art in Atari-57 \\cite{hessel2018rainbow,gruslys2018reactor}, and challenging continuous control tasks \\cite{barthmaron2018d4pg}.\n\nThese algorithms are restricted to assigning probabilities to an a priori fixed, discrete set of possible returns. \\citet{dabney2017qr} propose an alternate pair of choices, parameterizing the distribution by a uniform mixture of Diracs whose locations are adjusted using quantile regression. Their algorithm, QR-DQN, while restricted to a discrete set of quantiles, automatically adapts return quantiles to minimize the Wasserstein distance between the Bellman updated and current return distributions. This flexibility allows QR-DQN to significantly improve on C51's Atari-57 performance.\n\nIn this paper, we extend the approach of \\citet{dabney2017qr}, from learning a discrete set of quantiles to learning the full quantile function, a continuous map from probabilities to returns. When combined with a base distribution, such as $U([0,1])$, this forms an implicit distribution capable of approximating any distribution over returns given sufficient network capacity. Our approach, \\textit{implicit quantile networks} (IQN), is best viewed as a simple distributional generalization of the DQN algorithm \\cite{mnih15nature}, and provides several benefits over QR-DQN. \n\nFirst, the approximation error for the distribution is no longer controlled by the number of quantiles output by the network, but by the size of the network itself, and the amount of training. Second, IQN can be used with as few, or as many, samples per update as desired, providing improved data efficiency with increasing number of samples per training update. Third, the implicit representation of the return distribution allows us to expand the class of policies to more fully take advantage of the learned distribution. Specifically, by taking the base distribution to be non-uniform, we expand the class of policies to $\\epsilon$-greedy policies on arbitrary distortion risk measures \\cite{yaari1987dual,wang1996premium}.\n\nWe begin by reviewing distributional reinforcement learning, related work, and introducing the concepts surrounding risk-sensitive RL. In subsequent sections, we introduce our proposed algorithm, IQN, and present a series of experiments using the Atari-57 benchmark, investigating the robustness and performance of IQN. Despite being a simple distributional extension to DQN, and forgoing any other improvements, IQN significantly outperforms QR-DQN and nearly matches the performance of Rainbow, which combines many orthogonal advances. \nIn fact, in human-starts as well as in the hardest Atari games (where current RL agents still underperform human players) IQN improves over Rainbow.\n\n\\section{Background \/ Related Work}\n\\label{sec:background}\n\nWe consider the standard RL setting, in which the interaction of an agent and\nan environment is modeled as a Markov Decision Process \n$(\\mathcal{X}, \\mathcal{A}, R, P, \\gamma)$ \\cite{puterman94markov}, \nwhere $\\mathcal{X}$ and $\\mathcal{A}$ denote the state and action spaces, \n$R$ the (state- and action-dependent) reward function,\n$P(\\cdot | x, a)$ the transition kernel, \nand $\\gamma \\in (0, 1)$ a discount factor. 
A policy $\\pi(\\cdot | x)$ maps a state to a distribution over actions.\n\nFor an agent following policy $\\pi$, the discounted sum of future \nrewards is denoted by the random variable $Z^\\pi(x, a) = \\sum_{t=0}^\\infty \\gamma^t R(x_t, a_t)$, where $x_0 = x$, $a_0 = a$, $x_t \\sim P(\\cdot | x_{t-1}, a_{t-1})$, and $a_t \\sim \\pi(\\cdot | x_{t})$. The action-value function is defined as\n$Q^\\pi(x,a) = \\mathbb{E}\\left[ Z^\\pi(x,a)\\right]$, and can be characterized by the Bellman equation \n\\begin{equation*}\nQ^\\pi(x,a) = \\mathbb{E} \\left[ R(x,a) \\right] + \n\\gamma \\mathbb{E}_{P,\\pi} \\left[ Q^\\pi(x',a')\\right]. \n\\end{equation*}\nThe objective in RL is to find an optimal policy $\\pi^*$, which maximizes $\\mathbb{E}[Z^\\pi]$,\ni.e.~$Q^{\\pi^*}(x,a) \\geq Q^{\\pi}(x,a)$ for all $\\pi$ and all $x, a$. One approach is to find the unique fixed point $Q^* = Q^{\\pi^*}$ of the Bellman optimality operator \\cite{bellman57dynamic}: \n\\begin{equation*}\nQ(x,a) = \\mathcal{T} Q(x,a) := \\mathbb{E}\\left[ R(x,a) \\right] + \\gamma \\mathbb{E}_{P} \\max_{a'} Q(x',a').\n\\end{equation*}\nTo this end, Q-learning \\cite{watkins1989learning} iteratively improves an estimate, $Q_\\theta$, of the optimal action-value function, $Q^*$, by repeatedly applying the Bellman update:\n\\begin{equation*}\nQ_\\theta(x,a) \\leftarrow \\mathbb{E} \\left[ R(x,a) \\right] + \n\\gamma \\mathbb{E}_{P} \\left[\\max_{a'} Q_\\theta(x',a')\\right]. \n\\end{equation*}\nThe action-value function can be approximated by a parameterized function $Q_\\theta$ (e.g.~a neural network), and trained by minimizing the squared temporal difference (TD) error,\n\\begin{equation*}\n \\delta_t^2 = \\left[ r_t + \\gamma \\max_{a' \\in \\mathcal{A}} Q_\\theta(x_{t+1}, a') - Q_\\theta(x_t, a_t) \\right]^2,\n\\end{equation*}\nover samples $(x_t, a_t, r_t, x_{t+1})$ observed while following an $\\epsilon$-greedy policy over $Q_\\theta$. This policy acts greedily with respect to $Q_\\theta$ with probability $1 - \\epsilon$ and uniformly at random otherwise.\nDQN \\cite{mnih15nature} uses a convolutional neural network to parameterize $Q_\\theta$ and the Q-learning algorithm to achieve human-level play on the Atari-57 benchmark.\n\n\n\\subsection{Distributional RL}\n\nIn distributional RL, the distribution over returns (the law of $Z^\\pi$) is considered instead of the scalar value function $Q^\\pi$ that is its expectation. This change in perspective has yielded new insights into the dynamics of RL \\cite{azar2012sample}, and been a useful tool for analysis \\cite{lattimore2012pac}. Empirically, distributional RL algorithms show improved sample complexity and final performance, as well as increased robustness to hyperparameter variation \\cite{barthmaron2018d4pg}.\n\nAn analogous distributional Bellman equation of the form\n\\begin{equation*}\nZ^\\pi(x,a) \\stackrel{D}{=} R(x,a) + \\gamma Z^\\pi(X',A')\n\\end{equation*}\ncan be derived, where $A \\stackrel{D}{=} B$ denotes that \ntwo random variables $A$ and $B$ have equal probability laws, and the \nrandom variables $X'$ and $A'$ are distributed according to $P(\\cdot | x, a)$ and $\\pi(\\cdot | x')$, respectively.\n\n\\citet{morimura10parametric} defined the distributional Bellman operator explicitly in terms of conditional probabilities, parameterized by the mean and scale of a Gaussian or Laplace distribution, and minimized the Kullback-Leibler (KL) divergence between the Bellman target and the current estimated return distribution. 
However, the distributional Bellman operator is not a contraction in the KL.\n\nAs with the scalar setting, a distributional Bellman optimality operator can be defined by\n\\begin{equation*}\n\\mathcal{T} Z(x,a) \\stackrel{D}{:=} R(x,a) + \\gamma Z(X',\\argmax_{a' \\in \\mathcal{A}} \\expect Z(X', a')),\n\\end{equation*}\nwith $X'$ distributed according to $P(\\cdot | x, a)$. While the distributional Bellman operator for policy evaluation is a contraction in the $p$-Wasserstein distance \\cite{c51}, this no longer holds for the control case. Convergence to the optimal policy can still be established, but requires a more involved argument.\n\n\\citet{c51} parameterize the return distribution as a categorical distribution over a fixed set of equidistant points and minimize the KL divergence to the projected distributional Bellman target. Their algorithm, C51, outperformed previous DQN variants on the Atari-57 benchmark. Subsequently, \\citet{hessel2018rainbow} combined C51 with enhancements such as prioritized experience replay \\cite{schaul16prioritized}, $n$-step updates \\cite{sutton1988learning}, and the dueling architecture \\cite{wang2016dueling}, leading to the Rainbow agent, current state-of-the-art in Atari-57.\n\nThe categorical parameterization, using the projected KL loss, has also been used in recent work to improve the critic of a policy gradient algorithm, D4PG, achieving significantly improved robustness and state-of-the-art performance across a variety of continuous control tasks \\cite{barthmaron2018d4pg}.\n\n\\subsection{$p$-Wasserstein Metric}\n\nThe $p$-Wasserstein metric, for $p \\in [1, \\infty]$, plays a key role in recent results in distributional RL \\cite{c51,dabney2017qr}. It has also been a topic of increasing interest in generative modeling \\cite{wgan,bousquet2017optimal,tolstikhin2017wasserstein}, because unlike the KL divergence, the Wasserstein metric inherently trades off approximate solutions with likelihoods.\n\nThe $p$-Wasserstein distance is the $L_p$ metric on inverse cumulative distribution functions (c.d.f.), also known as quantile functions \\cite{muller1997integral}. For random variables $U$ and $V$ with quantile functions $F_U^{-1}$ and $F_V^{-1}$, respectively, the $p$-Wasserstein distance is given by\n\\begin{equation*}\n W_p(U, V) = \\left( \\int_0^1 |F_U^{-1}(\\omega) - F_V^{-1}(\\omega)|^p d\\omega \\right)^{1\/p}.\n\\end{equation*}\n\nThe class of optimal transport metrics express distances between distributions in terms of the minimal cost for transporting mass to make the two distributions identical. This cost is given in terms of some metric, $c\\colon \\mathcal{X} \\times \\mathcal{X} \\to \\mathbb{R}^{\\geq0}$, on the underlying space $\\mathcal{X}$. The $p$-Wasserstein metric corresponds to $c = L_p$. We are particularly interested in the Wasserstein metrics due to the predominant use of $L_p$ spaces in mean-value reinforcement learning.\n\n\\subsection{Quantile Regression for Distributional RL}\n\\citet{c51} showed that the distributional Bellman operator is a contraction in the $p$-Wasserstein metric, but as the proposed algorithm did not itself minimize the Wasserstein metric, this left a theory-practice gap for distributional RL. Recently, this gap was closed, in both directions. 
First and most relevant to this work, \\citet{dabney2017qr} proposed the use of \\textit{quantile regression} for distributional RL and showed that by choosing the quantile targets suitably the resulting projected distributional Bellman operator is a contraction in the $\\infty$-Wasserstein metric. Concurrently, \\citet{rowland2018analysis} showed the original class of categorical algorithms are a contraction in the Cram\\'er distance, the $L_2$ metric on cumulative distribution functions.\n\nBy estimating the quantile function at precisely chosen points, QR-DQN minimizes the Wasserstein distance to the distributional Bellman target \\cite{dabney2017qr}. This estimation uses \\textit{quantile regression}, which has been shown to converge to the true quantile function value when minimized using stochastic approximation \\cite{qrbook}. \n\nIn QR-DQN, the random return is approximated by a uniform mixture of $N$ Diracs,\n\\begin{equation*}\n Z_\\theta(x,a) := \\tfrac{1}{N} \\sum_{i=1}^N \\delta_{\\theta_i(x,a)},\n\\end{equation*}\nwith each $\\theta_i$ assigned a fixed quantile target, $\\hat{\\tau}_i = \\frac{\\tau_{i-1} + \\tau_{i}}{2}$ for $1 \\le i \\le N$, where $\\tau_i = i\/N$. These quantile estimates are trained using the \\citet{huber1964robust} quantile regression loss, with threshold $\\kappa$,\n\\begin{align*}\n \\rho^\\kappa_\\tau(\\delta_{ij}) &= |\\tau - \\mathbb{I}{\\{ \\delta_{ij} < 0 \\}}| \\frac{\\mathcal{L}_\\kappa(\\delta_{ij})}{\\kappa},\\ \\quad \\text{with}\\\\\n \\mathcal{L}_\\kappa(\\delta_{ij}) &= \\begin{cases}\n \\frac{1}{2} \\delta_{ij}^2,\\quad \\ &\\text{if } |\\delta_{ij}| \\le \\kappa\\\\\n \\kappa (|\\delta_{ij}| - \\frac{1}{2}\\kappa),\\quad \\ &\\text{otherwise}\n \\end{cases},\n \n\\end{align*}\non the pairwise TD-errors\n$$\\delta_{ij} = r + \\gamma \\theta_j(x', \\pi(x')) - \\theta_i(x, a).$$\n\n\nAt the time of this writing, QR-DQN achieves the best performance on Atari-57, human-normalized mean and median, of all agents that do not combine distributional RL, prioritized replay, and $n$-step updates \\cite{dabney2017qr,hessel2018rainbow,gruslys2018reactor}.\n\n\\subsection{Risk in Reinforcement Learning}\n\nDistributional RL algorithms have been theoretically justified for the Wasserstein and Cram\\'er metrics \\cite{c51,rowland2018analysis}, and learning the distribution over returns, in and of itself, empirically results in significant improvements to data efficiency, final performance, and stability \\cite{c51,dabney2017qr,gruslys2018reactor,barthmaron2018d4pg}. However, in each of these recent works the policy used was based entirely on the mean of the return distribution, just as in standard reinforcement learning. A natural question arises: can we expand the class of policies using information provided by the distribution over returns (i.e.~to the class of risk-sensitive policies)? Furthermore, when would this larger policy class be beneficial?\n\nHere, `risk' refers to the uncertainty over possible outcomes, and \\emph{risk-sensitive} policies are those which depend upon more than the mean of the outcomes. At this point, it is important to highlight the difference between \\emph{intrinsic uncertainty}, captured by the distribution over returns, and \\emph{parametric uncertainty}, the uncertainty over the value estimate typically associated with Bayesian approaches such as PSRL \\cite{osband2013more} and Kalman TD \\cite{geist2010kalman}. 
Distributional RL seeks to capture the former, which classic approaches to risk are built upon\\footnote{One exception is the recent work \\cite{moerland2017efficient} towards combining both forms of uncertainty to improve exploration.}.\n\nExpected utility theory states that if a decision policy is consistent with a particular set of four axioms regarding its choices then the decision policy behaves as though it is maximizing the expected value of some utility function $U$ \\cite{von1947theory},\n$$\\pi(x) = \\argmax_a \\expect_{Z(x, a)} [U(z)].$$\nThis is perhaps the most pervasive notion of risk-sensitivity. A policy maximizing a linear utility function is called \\emph{risk-neutral}, whereas concave or convex utility functions give rise to \\emph{risk-averse} or \\emph{risk-seeking} policies, respectively. Many previous studies on risk-sensitive RL adopt the utility function approach \\cite{howard1972risk,marcus1997risk,maddison2017particle}.\n\n\nA crucial axiom of expected utility is \\textit{independence}: given random variables $X$, $Y$ and $Z$, such that $X \\succ Y$ ($X$ preferred over $Y$), any mixture between $X$ and $Z$ is preferred to the same mixture between $Y$ and $Z$ \\cite{von1947theory}. Stated in terms of the cumulative probability functions, $\\alpha F_X + (1 - \\alpha) F_Z \\ge \\alpha F_Y + (1 - \\alpha) F_Z,\\ \\forall \\alpha \\in [0, 1]$. This axiom in particular has troubled many researchers because it is consistently violated by human behavior \\cite{tversky1992advances}. The Allais paradox is a frequently used example of a decision problem where people violate the independence axiom of expected utility theory \\cite{allais1990allais}.\n\nHowever, as \\citet{yaari1987dual} showed, this axiom can be replaced by one in terms of convex combinations of outcome values, instead of mixtures of distributions. Specifically, if as before $X \\succ Y$, then for any $\\alpha \\in [0, 1]$ and random variable $Z$, $\\alpha F_X^{-1} + (1 - \\alpha) F_Z^{-1} \\ge \\alpha F_Y^{-1} + (1 - \\alpha)F_Z^{-1}$. This leads to an alternate, dual, theory of choice than that of expected utility. Under these axioms the decision policy behaves as though it is maximizing a distorted expectation, for some continuous monotonic function $h$:\n$$\\pi(x) = \\argmax_a \\int_{-\\infty}^{\\infty} z \\frac{\\partial}{\\partial z} (h \\circ F_{Z(x, a)})(z) \\,dz.$$\n\nSuch a function $h$ is known as a \\textit{distortion risk measure}, as it distorts the cumulative probabilities of the random variable \\cite{wang1996premium}. That is,\nwe have two fundamentally equivalent approaches to risk-sensitivity.\nEither, we choose a utility function and follow the expectation of this utility. Or, we choose a reweighting of the distribution and compute expectation under this distortion measure. Indeed, \\citet{yaari1987dual} further showed that these two functions are inverses of each other. The choice between them amounts to a choice over whether the behavior should be invariant to mixing with random events or to convex combinations of outcomes.\n\nDistortion risk measures include, as special cases, cumulative probability weighting used in cumulative prospect theory \\cite{tversky1992advances}, conditional value at risk \\cite{chow2014algorithms}, and many other methods \\cite{morimura2010nonparametric}. 
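A short change of variables makes the connection to quantiles explicit. Assuming $h$ is differentiable and $F_{Z(x, a)}$ is continuous and strictly increasing, substituting $\\tau = F_{Z(x, a)}(z)$ gives
\\begin{equation*}
\\int_{-\\infty}^{\\infty} z \\frac{\\partial}{\\partial z} (h \\circ F_{Z(x, a)})(z) \\,dz
= \\int_0^1 F^{-1}_{Z(x, a)}(\\tau)\\, h'(\\tau)\\, d\\tau,
\\end{equation*}
so a distorted expectation is simply an expectation over quantiles reweighted by $h'$. For example, $h(\\tau) = \\min(\\tau\/\\eta, 1)$ weights only the lowest $\\eta$-fraction of returns and recovers conditional value at risk at level $\\eta$.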
Recently \\citet{majumdar2017should} argued for the use of distortion risk measures in robotics.\n\n\n\\section{Implicit Quantile Networks}\n\\label{sec:analysis}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.48\\textwidth]{figures\/network_arch.pdf}\n\\end{center}\n\\caption{Network architectures for DQN and recent distributional RL algorithms.}\\label{fig:network_arch}\n\\end{figure}\n\n\nWe now introduce the \\textit{implicit quantile network} (IQN), a deterministic parametric function trained to reparameterize samples from a base distribution, e.g.~$\\tau \\sim U([0,1])$, to the respective quantile values of a target distribution.\nIQN provides an effective way to learn an implicit representation of the return distribution, yielding a powerful function approximator for a new DQN-like agent.\n\nLet $F^{-1}_Z(\\tau)$ be the quantile function at $\\tau \\in [0, 1]$ for the random variable $Z$. For notational simplicity we write $Z_\\tau := F^{-1}_Z(\\tau)$, thus for $\\tau \\sim U([0,1])$ the resulting state-action return distribution sample is $Z_\\tau(x, a) \\sim Z(x, a)$.\n\nWe propose to model the state-action quantile function as a mapping from state-actions and samples from some base distribution, typically $\\tau \\sim U([0,1])$, to $Z_\\tau(x, a)$, viewed as samples from the implicitly defined return distribution.\n\nLet $\\beta\\colon [0, 1] \\to [0, 1]$ be a distortion risk measure, with identity corresponding to risk-neutrality. Then, the \\textit{distorted expectation} of $Z(x, a)$ under $\\beta$ is given by\n\\begin{equation*}\n Q_\\beta(x, a) := \\expect_{\\tau \\sim U([0,1])} \\left[ Z_{\\beta(\\tau)}(x, a) \\right].\n\\end{equation*}\nNotice that the distorted expectation is equal to the expected value of $F^{-1}_{Z(x,a)}$ weighted by $\\beta$, that is, $Q_\\beta = \\int_0^1 F^{-1}_Z(\\tau) d\\beta(\\tau)$. The immediate implication of this is that for any $\\beta$, there exists a sampling distribution for $\\tau$ such that the mean of $Z_\\tau$ is equal to the distorted expectation of $Z$ under $\\beta$, that is, any distorted expectation can be represented as a weighted sum over the quantiles \\cite{dhaene2012remarks}. Denote by $\\pi_\\beta$ the risk-sensitive greedy policy\n\\begin{equation}\\label{eqn:rs_policy}\n \\pi_\\beta(x) = \\argmax_{a \\in \\mathcal{A}} Q_\\beta(x, a).\n\\end{equation}\n\nFor two samples $\\tau, \\tau' \\sim U([0,1])$, and policy $\\pi_\\beta$, the sampled temporal difference (TD) error at step $t$ is\n\\begin{equation}\\label{eqn:sampledTD}\n \\delta^{\\tau,\\tau'}_t = r_t + \\gamma Z_{\\tau'}(x_{t+1}, \\pi_\\beta(x_{t+1})) - Z_{\\tau}(x_t, a_t).\n\\end{equation}\nThen, the IQN loss function is given by\n\\begin{equation}\\label{eqn:iqn_loss}\n \\mathcal{L}(x_t, a_t, r_t, x_{t+1}) = \\frac{1}{N'} \\sum_{i=1}^{N} \\sum_{j=1}^{N'} \\rho_{\\tau_i}^\\kappa \\left( \\delta_t^{\\tau_i, \\tau_j'} \\right),\n \n\\end{equation}\nwhere $N$ and $N'$ denote the respective number of iid samples $\\tau_i, \\tau_j' \\sim U([0,1])$ used to estimate the loss.\nA corresponding sample-based risk-sensitive policy is obtained by approximating $Q_\\beta$ in Equation~\\ref{eqn:rs_policy} by $K$ samples of $\\tilde \\tau \\sim U([0,1])$:\n\\begin{equation*}\n \\tilde \\pi_\\beta(x) = \\argmax_{a \\in \\mathcal{A}} \\frac{1}{K}\\sum_{k=1}^K Z_{\\beta(\\tilde\\tau_k)}(x, a).\n\\end{equation*}\n\nImplicit quantile networks differ from the approach of \\citet{dabney2017qr} in two ways. 
First, instead of approximating the quantile function at $n$ fixed values of $\\tau$ we approximate it with $Z_\\tau(x, a) \\approx f(\\psi(x), \\phi(\\tau))_a$ for some differentiable functions $f$, $\\psi$, and $\\phi$. If we ignore the distributional interpretation for a moment and view each $Z_\\tau(x, a)$ as a separate action-value function, this highlights that implicit quantile networks are a type of \\textit{universal value function approximator} (UVFA) \\cite{schaul2015universal}. There may be additional benefits to implicit quantile networks beyond the obvious increase in representational fidelity. As with UVFAs, we might hope that training over many different $\\tau$'s (goals in the case of the UVFA) leads to better generalization between values and improved sample complexity than attempting to train each separately.\n\nSecond, $\\tau$, $\\tau'$, and $\\tilde \\tau$ are sampled from continuous, independent distributions. Besides $U([0,1])$, we also explore risk-sensitive policies $\\pi_\\beta$, with non-linear $\\beta$. The independent sampling of each $\\tau$, $\\tau'$ results in the sample TD errors being decorrelated, and the estimated action-values go from being the true mean of a mixture of $n$ Diracs to a sample mean of the implicit distribution defined by reparameterizing the sampling distribution via the learned quantile function.\n\n\n\\subsection{Implementation}\n\\label{sec:algorithm}\nConsider the neural network structure used by the DQN agent \\cite{mnih15nature}. Let $\\psi\\colon \\mathcal{X} \\to \\mathbb{R}^d$ be the function computed by the convolutional layers and $f\\colon \\mathbb{R}^d \\to \\mathbb{R}^{|\\mathcal{A}|}$ the subsequent fully-connected layers mapping $\\psi(x)$ to the estimated action-values, such that $Q(x, a) \\approx f(\\psi(x))_a$. For our network we use the same functions $\\psi$ and $f$ as in DQN, but include an additional function $\\phi\\colon [0, 1] \\to \\mathbb{R}^d$ computing an embedding for the sample point $\\tau$. We combine these to form the approximation $Z_\\tau(x, a) \\approx f(\\psi(x) \\odot \\phi(\\tau))_a$, where $\\odot$ denotes the element-wise (Hadamard) product.\n\nAs the network for $f$ is not particularly deep, we use the multiplicative form, $\\psi \\odot \\phi$, to force interaction between the convolutional features and the sample embedding. Alternative functional forms, e.g.~concatenation or a `residual' function $\\psi \\odot (1 + \\phi)$, are conceivable, and $\\phi(\\tau)$ can be parameterized in different ways. To investigate these, we compared performance across a number of architectural variants on six Atari 2600 games (\\textsc{Asterix, Assault, Breakout, Ms.Pacman, QBert, Space Invaders}).\nFull results are given in the Appendix. Despite minor variation in performance, we found the general approach to be robust to the various choices. 
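For concreteness, the three merging schemes just mentioned can be sketched as follows; the function names and the NumPy rendering are ours, and the two vectors simply stand in for the $d$-dimensional feature maps.
\\begin{verbatim}
# Schematic merge variants for combining state features psi(x) with the
# sample embedding phi(tau); illustrative only, not the released code.
import numpy as np

def merge_hadamard(psi_x, phi_tau):
    return psi_x * phi_tau                    # psi * phi, the form used here

def merge_residual(psi_x, phi_tau):
    return psi_x * (1.0 + phi_tau)            # psi * (1 + phi)

def merge_concat(psi_x, phi_tau):
    return np.concatenate([psi_x, phi_tau])   # doubles the input size of f
\\end{verbatim}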
Based upon the results we used the following function in our later experiments, for embedding dimension $n = 64$:\n\\begin{equation}\\label{eqn:iqn_architecture}\n \\phi_j(\\tau) := \\operatorname{ReLU}(\\sum_{i=0}^{n-1} \\cos(\\pi i \\tau)w_{ij} + b_j).\n\\end{equation}\n\nAfter settling on a network architecture, we study the effect of \nthe number of samples, $N$ and $N'$, used in the estimate terms of Equation~\\ref{eqn:iqn_loss}.\n\nWe hypothesized that $N$, the number of samples of $\\tau \\sim U([0,1])$, would affect the sample complexity of IQN, with larger values leading to faster learning, and that with $N = 1$ one would potentially approach the performance of DQN. This would support the hypothesis that the improved performance of many distributional RL algorithms rests on their effect as auxiliary loss functions, which would vanish in the case of $N = 1$.\nFurthermore, we believed that $N'$, the number of samples of $\\tau' \\sim U([0,1])$, would affect the variance of the gradient estimates much like a mini-batch size hyperparameter. Our prediction was that $N'$ would have the greatest effect on variance of the long-term performance of the agent.\n\nWe used the same set of six games as before, with our chosen architecture, and varied $N, N' \\in \\{1, 8, 32, 64\\}$. In Figure~\\ref{fig:num_atoms} we report the average human-normalized scores on the six games for each configuration. Figure~\\ref{fig:num_atoms} (left) shows the average performance over the first ten million frames, while (right) shows the average performance over the last ten million (from 190M to 200M). \n\nAs expected, we found that $N$ has a dramatic effect on early performance, shown by the continual improvement in score as the value increases.\nAdditionally, we observed that $N'$ affected performance very differently than expected: it had a strong effect on early performance, but minimal impact on long-term performance past $N' = 8$. \n\nOverall, while using more samples for both distributions is generally favorable, $N = N' = 8$ appears to be sufficient to achieve the majority of improvements offered by IQN for long-term performance, with variation past this point largely insignificant. To our surprise we found that even for $N = N' = 1$, which is comparable to DQN in the number of loss components, the longer term performance is still quite strong ($\\approx3\\times$ DQN).\n\nIn an informal evaluation, we did not find IQN to be sensitive to $K$, the number of samples used for the policy, and have fixed it at $K = 32$ for all experiments.\n\n\n\\section{Risk-Sensitive Reinforcement Learning}\n\\label{sec:risky_rl}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.5\\textwidth]{figures\/n_atoms.pdf}\n\\end{center}\n\\caption{Effect of varying $N$ and $N'$, the number of samples used in the loss function in Equation~\\ref{eqn:iqn_loss}. Figures show human-normalized agent performance, averaged over six Atari games, averaged over first 10M frames of training (left) and last 10M frames of training (right). 
Corresponding values for baselines: DQN ($32, 253$) and QR-DQN ($144, 1243$).}\\label{fig:num_atoms}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures\/atari_risky_iqn.pdf}\n\\end{center}\n\\caption{Effects of various changes to the sampling distribution, that is various cumulative probability weightings.}\\label{fig:risk_atari}\n\\end{figure*}\n\nIn this section, we explore the effects of varying the distortion risk measure, $\\beta$, away from identity. This only affects the policy, $\\pi_\\beta$, used both in Equation~\\ref{eqn:sampledTD} and for acting in the environment. As we have argued, evaluating under different distortion risk measures is equivalent to changing the sampling distribution for $\\tau$, allowing us to achieve various forms of risk-sensitive policies. We focus on a handful of sampling distributions and their corresponding distortion measures. The first one is the cumulative probability weighting parameterization proposed in cumulative prospect theory \\cite{tversky1992advances,gonzalez1999shape}:\n\\begin{equation*}\n \\operatorname{CPW}(\\eta, \\tau) = \\frac{\\tau^{\\eta}}{(\\tau^{\\eta} + (1 - \\tau)^\\eta)^{\\frac{1}{\\eta}}}.\n\\end{equation*}\nIn particular, we use the parameter value $\\eta = 0.71$ found by \\citet{wu1996curvature} to most closely match human subjects. This choice is interesting as, unlike the others we consider, it is neither globally convex nor concave. For small values of $\\tau$ it is locally concave and for larger values of $\\tau$ it becomes locally convex. Recall that concavity corresponds to risk-averse and convexity to risk-seeking policies.\n\nSecond, we consider the distortion risk measure proposed by \\citet{wang2000class}, where $\\Phi$ and $\\Phi^{-1}$ are taken to be the standard Normal cumulative distribution function and its inverse:\n\\begin{equation*}\n \\operatorname{Wang}(\\eta, \\tau) = \\Phi(\\Phi^{-1}(\\tau) + \\eta).\n\\end{equation*}\nFor $\\eta < 0$, this produces risk-averse policies and we include it due to its simple interpretation and ability to switch between risk-averse and risk-seeking distortions.\n\nThird, we consider a simple power formula for risk-averse ($\\eta < 0$) or risk-seeking ($\\eta > 0$) policies:\n\\begin{equation*}\n \\operatorname{Pow}(\\eta, \\tau) = \\begin{cases}\n \\tau^{\\frac{1}{1 + |\\eta|}},\\quad \\ &\\text{if } \\eta \\ge 0\\\\\n 1 - (1 - \\tau)^{\\frac{1}{1 + |\\eta|}},\\quad \\ &\\text{otherwise}\n \\end{cases}.\n\\end{equation*}\n\nFinally, we consider conditional value-at-risk (CVaR):\n\\begin{equation*}\n \\operatorname{CVaR}(\\eta, \\tau) = \\eta \\tau.\n\\end{equation*}\nCVaR has been widely studied in and out of reinforcement learning \\cite{chow2014algorithms}. Its implementation as a modification to the sampling distribution of $\\tau$ is particularly simple, as it changes $\\tau \\sim U([0,1])$ to $\\tau \\sim U([0,\\eta])$. Another interesting sampling distribution, not included in our experiments, is denoted $\\operatorname{Norm}(\\eta)$ and corresponds to $\\tau$ sampled by averaging $\\eta$ samples from $U([0,1])$.\n\nIn Figure~\\ref{fig:risk_atari} (right) we give an example of a distribution (Neutral) and how each of these distortion measures affects the implied distribution due to changing the sampling distribution of $\\tau$. 
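For reference, each of these mappings can be written directly as a function of $\\tau$, and the corresponding distorted expectation can be estimated exactly as in $\\tilde \\pi_\\beta$, by averaging $Z_{\\beta(\\tilde\\tau_k)}$ over uniform samples. The snippet below is an illustrative sketch (the function names are ours, and SciPy's standard Normal routines stand in for $\\Phi$ and $\\Phi^{-1}$), not the agent's implementation:
\\begin{verbatim}
# Sketch of the distortion risk measures and a sample-based distorted
# expectation; illustrative only.
import numpy as np
from scipy.stats import norm

def cpw(eta, tau):
    return tau**eta / (tau**eta + (1.0 - tau)**eta)**(1.0 / eta)

def wang(eta, tau):
    return norm.cdf(norm.ppf(tau) + eta)

def power(eta, tau):
    if eta >= 0:
        return tau**(1.0 / (1.0 + abs(eta)))
    return 1.0 - (1.0 - tau)**(1.0 / (1.0 + abs(eta)))

def cvar(eta, tau):
    return eta * tau

def distorted_expectation(quantile_fn, beta, eta, K=32):
    """Monte Carlo estimate of E_tau[ Z_{beta(tau)} ] with K uniform samples."""
    tau = np.random.uniform(size=K)
    return np.mean(quantile_fn(beta(eta, tau)))

# Example: risk-averse CVaR(0.25) value of a standard Normal return distribution.
print(distorted_expectation(norm.ppf, cvar, 0.25))
\\end{verbatim}
Changing $\\beta$ in this way changes only how the learned quantile function is sampled, not the function itself.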
$\\operatorname{Norm}(3)$ and $\\operatorname{CPW}(.71)$ reduce the impact of the tails of the distribution, while $\\operatorname{Wang}$ and $\\operatorname{CVaR}$ heavily shift the distribution mass towards the tails, creating a risk-averse or risk-seeking preference. Additionally, while CVaR entirely ignores all values corresponding to $\\tau > \\eta$, $\\operatorname{Wang}$ gives these non-zero, but vanishingly small, probability.\n\nBy using these sampling distributions we can induce various risk-sensitive policies in IQN. We evaluate these on the same set of six Atari 2600 games previously used. Our algorithm simply changes the policy to maximize the distorted expectations instead of the usual sample mean. Figure~\\ref{fig:risk_atari} (left) shows our results in this experiment, with average scores reported under the usual, risk-neutral, evaluation criterion.\n\nIntuitively, we expected to see a qualitative effect from risk-sensitive training, e.g.~strengthened exploration from a risk-seeking objective. Although we did see qualitative differences, these did not always match our expectations.\nFor two of the games, \\textsc{Asterix} and \\textsc{Assault}, there is a very significant advantage to the risk-averse policies. Although $\\operatorname{CPW}$ tends to perform almost identically to the standard risk-neutral policy, and the risk-seeking $\\operatorname{Wang}(1.5)$ performs as well or worse than risk-neutral, we find that both risk-averse policies improve performance over standard IQN. However, we also observe that the more risk-averse of the two, $\\operatorname{CVaR}(0.1)$, suffers some loss in performance on two other games (\\textsc{QBert} and \\textsc{Space Invaders}). \n\nAdditionally, we note that the risk-seeking policy significantly underperforms the risk-neutral policy on three of the six games. It remains an open question as to exactly why we see improved performance for risk-averse policies. There are many possible explanations for this phenomenon, e.g.~that risk-aversion encodes a heuristic to stay alive longer, which in many games is correlated with increased rewards.\n\n\\section{Full Atari-57 Results}\n\\label{sec:atari57}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{figures\/atari57_full.pdf}\n\\end{center}\n\\caption{Human-normalized mean (left) and median (right) scores on Atari-57 for IQN and various other algorithms. Random seeds shown as traces, with IQN averaged over 5, QR-DQN over 3, and Rainbow over 2 random seeds.}\\label{fig:atari57}\n\\end{figure*}\n\nFinally, we evaluate IQN on the full Atari-57 benchmark, comparing with the state-of-the-art performance of Rainbow, a distributional RL agent that combines several advances in deep RL \\cite{hessel2018rainbow}, the closely related algorithm QR-DQN \\cite{dabney2017qr}, prioritized experience replay DQN \\cite{schaul16prioritized}, and the original DQN agent \\cite{mnih15nature}. Note that in this section we use the risk-neutral variant of the IQN, that is, the policy of the IQN agent is the regular $\\epsilon$-greedy policy with respect to the mean of the state-action return distribution.\n\nIt is important to remember that Rainbow builds upon the distributional RL algorithm C51 \\cite{c51}, but also includes prioritized experience replay \\cite{schaul16prioritized}, Double DQN \\cite{vanhasselt16deep}, Dueling Network architecture \\cite{wang2016dueling}, Noisy Networks \\cite{fortunato2017noisy}, and multi-step updates \\cite{sutton1988learning}. 
In particular, besides the distributional update, $n$-step updates and prioritized experience replay were found to have significant impact on the performance of Rainbow. Our other competitive baseline is QR-DQN, which is currently state-of-the-art for agents that do not combine distributional updates, $n$-step updates, and prioritized replay.\n\nThus, between QR-DQN and the much more complex Rainbow we compare to the two most closely related, and best performing, agents in published work. In particular, we would expect that IQN would benefit from the additional enhancements in Rainbow, just as Rainbow improved significantly over C51.\n\nFigure~\\ref{fig:atari57} shows the mean (left) and median (right) human-normalized scores during training over the Atari-57 benchmark. IQN dramatically improves over QR-DQN, which itself improves on many previously published results. At 100 million frames IQN has reached the same level of performance as QR-DQN at 200 million frames. Table~\\ref{fig:perc_scores} gives a comparison between the same methods in terms of their best, human-normalized, scores per game under the 30 random no-op start condition. These are averages over the given number of seeds. Additionally, using human-starts, IQN achieves $162\\%$ median human-normalized score, whereas Rainbow reaches $153\\%$ \\cite{hessel2018rainbow}, see Table~\\ref{fig:perc_scores_human}.\n\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{ l | r | r | r | c }\n\\multicolumn{1}{c}{} & \\mbox{\\textbf{Mean}} & \\mbox{\\textbf{Median}} & \\mbox{\\textbf{Human Gap}} & \\mbox{\\textbf{Seeds}} \\\\\n\\hline\n\\textsc{DQN} & 228\\% & 79\\% & 0.334 & 1 \\\\\n\\textsc{Prior.} & 434\\% & 124\\% & 0.178 & 1 \\\\\n\\textsc{C51} & 701\\% & 178\\% & 0.152 & 1 \\\\\n\\textsc{Rainbow} & \\textbf{\\textcolor{blue}{1189\\%}} & \\textbf{\\textcolor{blue}{230\\%}} & 0.144 & 2 \\\\\n\\textsc{QR-DQN} & 864\\% & 193\\% & 0.165 & 3 \\\\\n\\hline\n\\textsc{IQN} & 1019\\% & 218\\% & \\textbf{\\textcolor{blue}{0.141}} & 5 \\\\\n\\end{tabular}\n\\end{center}\n\\caption{Mean and median of scores across 57 Atari 2600 games, measured as percentages of human baseline \\cite{nair15massively}. Scores are averages over number of seeds.}\n\\label{fig:perc_scores}\n\\end{table}\n\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{cccccc}\n\\multicolumn{6}{c}{\\textbf{Human-starts (median)}} \\\\\n\\hline\n\\textsc{DQN} & \\textsc{Prior.} & \\textsc{A3C} & \\textsc{C51} & \\textsc{Rainbow} & \\textsc{IQN} \\\\\n\\hline\n68\\% & 128\\% & 116\\% & 125\\% & 153\\% & \\textbf{\\textcolor{blue}{162\\%}}\n\\end{tabular}\n\\end{center}\n\\caption{Median human-normalized scores for human-starts.}\n\\label{fig:perc_scores_human}\n\\end{table}\n\nFinally, we took a closer look at the games in which each algorithm continues to underperform humans, and computed, on average, how far below human-level they perform\\footnote{Details of how this is computed can be found in the Appendix.}. We refer to this value as the \\textit{human-gap}\\footnote{Thanks to Joseph Modayil for proposing this metric.} metric and give results in Table~\\ref{fig:perc_scores}. Interestingly, C51 outperforms QR-DQN in this metric, and IQN outperforms all others. This shows that the remaining gap between Rainbow and IQN is entirely from games on which both algorithms are already super-human. 
The games where the most progress in RL is needed happen to be the games where IQN shows the greatest improvement over QR-DQN and Rainbow.\n\n\\section{Discussion and Conclusions}\n\\label{sec:discussion}\n\nWe have proposed a generalization of recent work based around using quantile regression to learn the distribution over returns of the current policy. Our generalization leads to a simple change to the DQN agent to enable distributional RL, the natural integration of risk-sensitive policies, and significantly improved performance over existing methods. The IQN algorithm provides, for the first time, a fully integrated distributional RL agent without prior assumptions on the parameterization of the return distribution.\n\nIQN can be trained with as little as a single sample from each state-action value distribution, or as many as computational limits allow to improve the algorithm's data efficiency. Furthermore, IQN allows us to expand the class of control policies to a large class of risk-sensitive policies connected to distortion risk measures. Finally, we show substantial gains on the Atari-57 benchmark over QR-DQN, and even halving the distance between QR-DQN and Rainbow.\n\nDespite the significant empirical successes in this paper there are many areas in need of additional theoretical analysis. We highlight a few particularly relevant open questions we were unable to address in the present work. First, sample-based convergence results have been recently shown for a class of categorical distributional RL algorithms \\cite{rowland2018analysis}. Could existing sample-based RL convergence results be extended to the QR-based algorithms?\n\nSecond, can the contraction mapping results for a fixed grid of quantiles given by \\citet{dabney2017qr} be extended to the more general class of approximate quantile functions studied in this work? Finally, and particularly salient to our experiments with distortion risk measures, theoretical guarantees for risk-sensitive RL have been building over recent years, but have been largely limited to special cases and restricted classes of risk-sensitive policies. Can the convergence of the distribution of returns under the Bellman operator be leveraged to show convergence to a fixed-point in distorted expectations? In particular, can the control results of \\citet{c51} be expanded to cover some class of risk-sensitive policies?\n\nThere remain many intriguing directions for future research into distributional RL, even on purely empirical fronts. \\citet{hessel2018rainbow} recently showed that distributional RL agents can be significantly improved, when combined with other techniques. Creating a Rainbow-IQN agent could yield even greater improvements on Atari-57. We also recall the surprisingly rich return distributions found by \\citet{barthmaron2018d4pg}, and hypothesize that the continuous control setting may be a particularly fruitful area for the application of distributional RL in general, and IQN in particular.\n\n\n\\clearpage\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}