diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjnwi" "b/data_all_eng_slimpj/shuffled/split2/finalzzjnwi" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjnwi" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:Intro}\n\nExtensions of the Standard Model (SM) with additional gauge symmetries are of particular interest in view of current direct and indirect\nsearches for physics beyond the SM. A feature of such theories is the existence of new gauge bosons.\nThese could provide clear deviations from the SM predictions and have been studied as possible early discoveries\nat the LHC.\n\nConstructions with a single additional $U(1)$ factor correspond to an extra neutral gauge boson $Z'$ \\cite{Langacker:ZpReview, SVZ:Zp, SSVZ:Zp,ABFKS:Zp} and have received a lot of attention, since they are naturally present in a large variety of models, such as Grand Unified Theories\n(GUTs), higher-dimensional models, little or composite Higgs models, superstring constructions.\nHence, the predicted $Z'$ mass spans a wide range of values, from the high GUT scale down to\nthe TeV scale, having different phenomenological consequences. On the other hand,\nit is difficult to determine model independent bounds on the $Z'$ mass, since its relation to\nobservables depends strongly on the mixing of the $Z'$ to the SM $Z$, the strength of the new gauge-coupling constant, the charges of the SM fermions and Higgs doublets, the presence of additional exotic\nfermions necessary to cancel the anomalies, etc..\n\nStill, there exist bounds on the $Z'$ mass and its mixing with the SM $Z$ from precision electroweak data, direct and\nindirect searches at the Tevatron, and interference effects at LEP2 in specific $Z'$ models\n(see~\\cite{Langacker:ZpReview} and references therein). Assuming an electroweak strength for the new gauge coupling constant and family-universal SM fermion charges, most of the canonical $Z'$ models have predictions in agreement with the experimental\nconstraints, for $Z'$ masses not too much below $1$ TeV. However,\nconsidering leptophobic $Z'$ models or models with family non-universal SM fermion charges, the $Z'$\nmass is allowed to be even smaller.\n\nBeyond the simple $U(1)$ group, extended gauge symmetries have also attracted a lot of attention, especially to explain the observed pattern of fermion masses and mixings (some of the earliest papers on this subject are~\\cite{Mohapatra:GaugedSym,BZ:GaugedSym, Ong:GaugeSym, WZ:GaugeSym, Chakrabarti:GaugeSym, MY:GaugeSym, DKW:GaugeSym, CPM:GaugeSym, BC:GaugeSym, Berezhiani:GaugeSym, BK:GaugeSym}). A common feature of these models is the appearance of dangerous flavour-changing-neutral-current (FCNC) contributions, mediated by the new flavour gauge bosons. Hence, a high new physics (NP) scale, usually larger than $1000$ TeV, must be adopted to provide the necessary suppressions. This prevents any NP direct observation at the present and future colliders.\n\nAn alternative approach to provide an explanation of the pattern of fermion masses is considering global flavour symmetries; in particular discrete symmetries have been widely studied in the last years (for a review see \\cite{AF:ReviewDiscreteSyms}). 
It is interesting to notice that the FCNC contributions are usually less constraining in these models~\\cite{IKOT:LFVA41, FHLM:LFVinA4, HIST:LFVA42, IKOST:LFVDelta54, FHM:Vacuum, FHLM:LFVinSUSYA4, HMP:LFVinSUSYA4SS, FP:RareDecaysA4, MRZ:FVTp, ABMP:Constraining1, ABMP:Constraining2} compared to gauge flavour models, opening the possibility for a lower NP scale and correspondingly for new particles beyond SM that can be detected in present and future colliders.\n\nRecently, a series of papers \\cite{GRV:SU3gauged, Feldmann:SU5gauged, GMS:SU3gaugedLR, AFM:EffectiveGaugeSym} appeared suggesting the possibility of explaining fermion masses and mixings with a NP scale of a few TeV, but with extra gauge symmetries. In these models, the gauge flavour symmetry is the product of non-Abelian $U(3)$ factors. For each gauge-group\ngenerator there is a new gauge boson, with no color or electric charge. In addition, new exotic fermions have to be introduced\nto cancel the anomalies of the new gauge sector.\n\nThe analyses in \\cite{AFM:EffectiveGaugeSym,GRV:SU3gauged, Feldmann:SU5gauged, GMS:SU3gaugedLR} and most papers to date studied almost exclusively the impact of flavour changing heavy neutral gauge bosons on $\\Delta F=2$ transitions and rare $B$ and $K$ decays, leaving their impact on the radiative decay $\\ov{B}\\to X_s\\gamma$ aside. As the latter provides generally a very strong constraint on extensions of the SM, it is important to investigate whether the presence of neutral gauge bosons with flavour violating couplings and new exotic heavy fermions would put the models in question into difficulties and whether generally the role of such new gauge bosons and new fermions in the $\\ov{B}\\to X_s\\gamma$ decay is significant. \n\nHaving this in mind, we point out in this paper two new contributions to the $\\ov{B}\\to X_s\\gamma$ decay, that to our knowledge\nhave not been considered in the literature so far. The first one originates from one-loop diagrams mediated by \n{\\it heavy neutral} gauge bosons and {\\it exotic} quarks with electric charge $-1\/3$, when {\\it flavour-violating left-handed and right-handed}\ncouplings of these gauge bosons to SM and exotic quarks are present. Then, this new physics contribution to the Wilson\ncoefficient of the dominant operator is enhanced by $m'_F\/m_b$ with $m_b$ being the mass of the $b$-quark and $m'_F$ the mass of the exotic fermion, in general significantly heavier than SM ones. \n\nAnalogous enhancement is known from the study of left-right symmetric models \\cite{CCFKM:BdecaysRLsym,AI:BSGdecayLRsym,CM:BSGdecayLRsym}, but the absence of new heavy fermions and the smallness of $W_L-W_R$ mixing makes this contribution, mediated dominantly by {\\it charged} SM gauge bosons with right-handed couplings, sufficiently small to agree with data. \n\nWithin the littlest Higgs model with T-parity, similar contributions to the $\\ov{B}\\to X_s\\gamma$ decay, mediated by heavy neutral gauge bosons and heavy mirror fermions, have already been considered \\cite{BBPTU:LittlestHiggsT}. However, in this context only left-handed currents are present and in addition the diagrams in question are strongly suppressed by the mixing angles in the T-odd sector and by the GIM mechanism, so that these contributions turn out to be very small.\n\nThe second new contribution stems solely from the heavy neutral gauge bosons, independently of the presence of exotic fermions in the theory. 
These gauge bosons are integrated out at the scale of the order of their masses, in general much higher than $\\mu_b\\approx m_b$. The resulting {\\it neutral current-current} dimension-six operators affect the $\\ov{B}\\to X_s\\gamma$ rate through QCD mixing with the magnetic dipole operator responsible for the $\\ov{B}\\to X_s\\gamma$ decay. This contribution is present when \nthe new neutral gauge bosons have direct couplings to the $b$ and $s$ quarks.\n\nAs we shall demonstrate explicitly, these new contributions imply constraints on the parameters of models,\ncontaining neutral flavour gauge bosons with couplings described above and with masses within the reach of the LHC. We shall also identify the conditions under which the first of these contributions turns out to be negligible, such as very small mixings among SM and exotic quarks, naturally following from a fermion see-saw mechanism.\n\nWe point out that similar contributions could be considered in $f_j\\to f_i \\gamma$, like $\\mu\\to e\\gamma$\nand $t\\to c\\gamma$, where $f$ is any SM fermion, mediated by exotic fermions with the same electric charge as $f$. The same comments apply to flavour conserving observables like $(g-2)_\\mu$\nand EDMs, in which dipole operators play the dominant role. Moreover, our QCD analysis of the mixing among neutral current-current and QCD-penguin operators is also applicable to other processes, such as non-leptonic two-body $B$ decays, and other observables, like $\\epsilon'\/\\epsilon$.\n\nWe describe the general framework and fix the notation that will be used throughout the rest of the paper in Sect.~\\ref{sec:Context}. In Sect.~\\ref{sec:analytic} we present the analytical results for the exotic-fermion contribution\nto the $\\ov{B}\\to X_s\\gamma$ decay in the context of only one neutral flavour gauge boson and\none exotic quark with electric charge $-1\/3$. Subsequently, we generalise our results to an arbitrary number of\nsuch gauge bosons and quarks. \n\nIn Sect.~\\ref{sec:QCD} we discuss the second new contribution to the $\\ov{B}\\to X_s\\gamma$ decay. We extend\nthe SM QCD renormalisation group (RG) analysis by taking into account all the new neutral current-current operators,\ngenerated through the exchange of a single neutral gauge boson $A_H$. To this end we calculate the relevant anomalous dimension matrices at the leading order (LO).\n\nIn Sect.~\\ref{sec:pheno} we present our model-independent results in a general form, that can be easily applied to all models with flavour-violating neutral gauge bosons, like $Z'$-models or models with non-Abelian gauge flavour symmetries. We identify three major classes of such models; for each we cast our results into simple formulae, that depend only on the masses and couplings of the new particles. Finally, we illustrate how these formulae can be translated into constraints on the parameter space of representative toy-models. In Sect.~\\ref{sec:concl} we summarise and conclude.\n\n\n\n\\section{Notation and fermion see-saw mechanism}\n\\label{sec:Context}\n\nWe focus on models in which the new physics contributions to\n$\\ov{B}\\to X_s\\gamma$ discussed in the previous section are present. In what follows, we describe a general framework and we fix the notation.\n\nWe consider the general case in which a product of $k\\geq1$ gauge flavour groups,\n\\mbox{$\\cG_f=\\cG_1\\times\\ldots \\times\\cG_k$}, is added to the SM gauge group. 
Depending\non the dimension $d_i$ of each symmetry factor, $d_1+\\ldots+d_k$ new flavour\ngauge bosons enrich the particle spectrum. In general, the SM fermions $\\psi$ transform\nunder $\\cG_f$ and consequently anomalies from gauge or mixed gauge-gravitational triangles\ncan arise. In this case, it is possible to introduce a new set of exotic fermions $\\Psi$,\nsuitably transforming under the SM and gauge flavour groups to obtain an anomaly-free theory. \n\nOn the other hand, the presence of these new exotic fermions is a quite general feature of\nflavour models once renormalisable ultraviolet completions are considered. This holds not\nonly for gauge flavour symmetries, which we discuss in this paper, but also for global\nflavour symmetries. Indeed, in these models the new exotic fermions and their mixing with\nthe SM ones account for the mass spectrum of the SM fermions (see \\cite{BGPZ:HeavyFermions} and references therein).\nThis typically happens through a fermion see-saw mechanism where the small masses of\nthe SM fermions are explained by the large masses of their corresponding exotic fermions.\nWithout loss of generality, we consider only the case of $SU(2)_L$-singlet exotic\nfermions. Hence the see-saw Lagrangian can be schematically written as\n\\begin{equation}\n {\\cal L}_{\\text{see-saw}}= \\ov{\\Psi}_L\\, M\\, \\Psi_R + \\ov{\\psi}_L\\, Y^D_1\\, \\Psi_R\\, \\phi + \\ov{\\Psi}_L\\, M^D_2\\, \\psi_R+\\text{h.c.}\\,,\n\\label{SeesawLag}\n\\end{equation}\nwhere $\\psi_{L,R}$ ($\\Psi_{R,L}$) comprise all left- and right-handed SM\n(exotic) fermion fields; their dimensions also define the dimensions\nof the matrices $M$, $Y^D_1$ and $M^D_2$. In the same expression,\n$\\phi$ represents the SM Higgs field.\n\nThis see-saw Lagrangian can be made invariant under $\\cG_f$ if both SM and exotic\nfermions belong to the same representation of $\\cG_f$ and $M$, $Y^D_1$, and $M^D_2$ are proportional\nto the identity matrix in flavour space. This case, however, is phenomenologically not interesting,\nbecause then the flavour symmetry is exactly preserved and it is not possible to reproduce\nthe observed pattern of SM fermion masses and mixings. \n\nOn the other hand, a more realistic situation arises when we impose the invariance under\n$\\cG_f$ at high scales, by introducing additional degrees of freedom, and then break the flavour symmetries at lower scales to provide the correct SM fermion masses and mixings. In the see-saw Lagrangian, the invariance under $\\cG_f$ and its breaking mechanism can be achieved in\nvarious ways: we can promote $M$, $Y_1^D$, and $M_2^D$, or only some of them, either to spurion fields \n\\`a la MFV \\cite{DGIS:MFV,CGIW:MLFV,DP:MLFV,AIMMN:VMLFV}, or to dynamical scalar fields with non-vanishing\nvacuum expectation values (VEVs) \\cite{FN}, or to fermion condensates \\cite{CG:MFV}. In what follows, we\nsimply treat the terms with $M$, $Y_1^D$, and $M_2^D$, or only some of them, as flavour violating\nterms originating from some non-specified dynamics at the high scale \\cite{AGMR:ScalarPotentialMFV}.\n\nIn the simplest phenomenologically interesting scenario, which we discuss here, the fields $\\psi_{L}$ ($\\psi_{R}$) and $\\Psi_{R}$ ($\\Psi_{L}$) transform\nunder the same representation of a single flavour group $\\cG_f$. The see-saw Lagrangian is the\nsame as in Eq.~(\\ref{SeesawLag}), with the only flavour symmetry breaking parameter being\n$M$, while $Y^D_1$ and $M^D_2$ are family universal parameters. 
In general, the square matrix $M$ has\n non-diagonal hierarchical entries in order to provide the\ncorrect description of the SM fermion masses and mixings. Before finding\nthe mass eigenvalues of the SM and exotic fermions, it is useful\nto diagonalise $M$. In general, $M$ is diagonalised by a bi-unitary transformation\n\\begin{equation}\n\\hat{M}=V^\\dag_L\\, M\\, V_R\\,,\n\\end{equation}\nwhere $V_{L,R}$ are unitary matrices and $\\hat{M}$ is a diagonal matrix. After the absorption of $V_L$ ($V_R$) in $\\Psi_L$ ($\\Psi_R$) and $\\psi_R$ ($\\psi_L$) by a field redefinition and the breaking of the electroweak symmetry, the resulting mass matrix is as follows:\n\\begin{equation}\n\\left(\\ov{\\psi}_L,\\, \\ov{\\Psi}_L\\right)\n\\left(\n \\begin{array}{cc}\n 0\t\t\t\t\t& Y_1^D\\mean{\\phi} \\times\\mathbb{1}\\\\[2mm]\n\t M_2^D\\times\\mathbb{1} \t\t& \\hat{M}\\\\\n \\end{array}\n\\right)\n\\left(\n \\begin{array}{c}\n \\psi_R\\\\\n\t \\Psi_R \\\\\n \\end{array}\n\\right)\\,.\n\\end{equation}\n\nReabsorbing possible unphysical phases, we can now define the mass eigenstates $f_{L,R}$ and $F_{L,R}$ by performing orthogonal rotations\non the flavour eigenstates $\\psi_{L,R}$ and $\\Psi_{L,R}$. For each generation we may write\n\\begin{equation}\n\\left(\n \\begin{array}{c}\n f_{L,R} \\\\\n F_{L,R} \\\\\n \\end{array}\n\\right)=\n\\left(\n \\begin{array}{cr}\n \\cos\\theta_{L,R} & -\\sin\\theta_{L,R} \\\\\n \\sin\\theta_{L,R} & \\cos\\theta_{L,R} \\\\\n \\end{array}\n\\right)\\left(\n \\begin{array}{c}\n \\psi_{L,R} \\\\\n \\Psi_{L,R} \\\\\n \\end{array}\n\\right)\\,.\n\\end{equation} \nDenoting with $m_{f_k}$ the mass of the light fermion $f_k$ and with $m'_{F_k}$ that of\nthe heavy exotic fermion $F_k$ and using for brevity the notation $M_1^D\\equiv Y_1^D \\mean{\\phi}$,\nwhat follows is a direct inverse proportionality between $m_{f_k}$ and $m'_{F_k}$, as in the usual see-saw mechanism:\n\\begin{equation}\nm_{f_k}m'_{F_k}=M_1^D\\,M_2^D\\,.\n\\end{equation}\nWe can express such masses in terms of the flavour symmetry breaking terms:\n\\begin{equation}\nm_{f_k}=\\dfrac{\\sin\\theta_{R_k}\\,\\sin\\theta_{L_k}}{\\cos^2\\theta_{R_k}-\\sin^2\\theta_{L_k}}\\hat{M}_k\\,,\\qquad\\qquad\nm'_{F_k}=\\dfrac{\\cos\\theta_{R_k}\\,\\cos\\theta_{L_k}}{\\cos^2\\theta_{R_k}-\\sin^2\\theta_{L_k}}\\hat{M}_k\\,,\n\\end{equation}\nwhere a straightforward calculation gives\n\\begin{equation}\n\\sin\\theta_{L_k}=\\sqrt{\\dfrac{m_{f_k}}{M_2^D}\\dfrac{M_1^D\\,m'_{F_k}-M_2^D m_{f_k}}{m^{\\prime\\,2}_{F_k}-m_{f_k}^2}}\\,,\\qquad\\quad\n\\sin\\theta_{R_k}=\\sqrt{\\dfrac{m_{f_k}}{M_1^D}\\dfrac{M_2^D\\,m'_{F_k}-M_1^D m_{f_k}}{m^{\\prime\\,2}_{F_k}-m_{f_k}^2}}\\,.\n\\label{FormulaSin1}\n\\end{equation}\n\nThese results are exact and valid for all the fermion generations. However, taking\nthe limit in which $m'_{F_k}\\gg m_{f_k}$ and $M_1^D\\approx M_2^D\\equiv M^D$,\nwe find simple formulae that transparently expose the behaviour of the previous expressions.\nIn this limit we find\n\\begin{equation}\nm_{f_k}\\approx \\dfrac{(M^D)^2}{\\hat{M}_k}\\,,\\qquad\\qquad\nm'_{F_k}\\approx \\hat{M}_k\\,,\\qquad\\qquad\n\\sin\\theta_{L_k}\\approx \\sin\\theta_{R_k} \\approx\\sqrt{\\dfrac{m_{f_k}}{m'_{F_k}}}\\,,\n\\label{FormulaSin2}\n\\end{equation}\nas in the usual see-saw scheme in the limit $\\hat{M}_k\\gg M_{1,2}^D$. 
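As a simple illustration of these relations, the following minimal numerical sketch (with purely illustrative input values, not tied to any specific model) diagonalises the per-generation mass matrix and compares the exact mixing angles of Eq.~(\\ref{FormulaSin1}) with the see-saw approximation of Eq.~(\\ref{FormulaSin2}):
\\begin{verbatim}
# Minimal check of Eqs. (FormulaSin1) and (FormulaSin2) for one generation.
# All input values are illustrative only and do not refer to a specific model.
import numpy as np

M1, M2 = 100.0, 100.0   # M_1^D = Y_1^D <phi> and M_2^D (illustrative, GeV)
Mhat = 2.0e4            # flavour-breaking parameter M-hat_k (illustrative, GeV)

# Per-generation mass matrix in the (psi, Psi) basis
Mmat = np.array([[0.0, M1],
                 [M2,  Mhat]])

# Physical masses are the singular values: m'_F (heavy) and m_f (light)
mF, mf = np.linalg.svd(Mmat, compute_uv=False)

# Exact mixing angles, Eq. (FormulaSin1)
sinL = np.sqrt(mf / M2 * (M1 * mF - M2 * mf) / (mF**2 - mf**2))
sinR = np.sqrt(mf / M1 * (M2 * mF - M1 * mf) / (mF**2 - mf**2))

print(mf, M1 * M2 / Mhat)            # m_f   vs  (M^D)^2 / M-hat_k
print(mF, Mhat)                      # m'_F  vs  M-hat_k
print(sinL, sinR, np.sqrt(mf / mF))  # both angles vs sqrt(m_f / m'_F)
\\end{verbatim}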
Notice that these\nsimplified relations are valid for all the fermions, apart from the top quark, for which the condition $\\hat{M}_3\\gg M^D_{1,2}$ is not satisfied and large corrections to Eq.~(\\ref{FormulaSin2}) are expected.\n\nWhen the flavour symmetry is broken, the flavour gauge bosons develop masses proportional to the flavour breaking parameters. It is not possible to enter into details without specifying a particular model, and in the following we simply assume that the neutral gauge boson masses are controlled by the parameters $\\hat{M}_k$, as for the exotic fermions.\n\nThe part of the Lagrangian containing the kinetic terms and the usual couplings among\nfermions and gauge bosons is given by \n\\begin{equation}\n{\\cal L}_{kin}=i\\,\\left(\\ov{\\psi}\\, D\\hspace{-2.5mm}\\big\\/\\, \\psi +\\ov{\\Psi}\\, D\\hspace{-2.5mm}\\big\\/\\,\\Psi\\right)\\,.\n\\label{StandardKineticTerms}\n\\end{equation}\nSince we are only interested in discussing the couplings between fermions and the new gauge bosons, in writing the relevant covariant derivative we do not consider the coupling terms with the SM gauge bosons but only the following term\n\\begin{equation}\nD^\\mu\\supset i\\, \\dfrac{g_H}{2} \\sum_{a}A_{H_a}^\\mu\\, T_{\\cal R}^a\\,.\n\\end{equation}\nIn this expression, $g_H$ is the flavour-gauge coupling, $T_{\\cal R}^a$ are the generators of the flavour\ngroup $\\cG_f$ in the representation ${\\cal R}$, and $A_{H_a}$ are the corresponding\nflavour gauge bosons. The resulting Feynman rules for a single gauge boson $A_H$ in the fermion-mass eigenbasis, ${\\cal F}\\equiv\\{F,\\,f\\}$, are given by the following relations:\n\\begin{equation}\n\\includegraphics{.\/vertex_general-crop.pdf}\n\\label{vertexGeneral}\n\\end{equation}\nwith\n\\begin{equation}\n\\label{Couplingff}\nA_H\\,\\ov{f}_i\\, f_j:\\begin{cases}\nC_L=\\cos\\theta_{L_i}\\left(V_R^\\dag T_{\\cal R}^a V_R\\right)_{ij}\\cos\\theta_{L_j}+\\sin\\theta_{L_i}\\left(V_L^\\dag T_{\\cal R}^a V_L\\right)_{ij}\\sin\\theta_{L_j}\\\\\nC_R=\\cos\\theta_{R_i}\\left(V_L^\\dag T_{\\cal R}^a V_L\\right)_{ij}\\cos\\theta_{R_j}+\\sin\\theta_{R_i}\\left(V_R^\\dag T_{\\cal R}^a V_R\\right)_{ij}\\sin\\theta_{R_j}\n\\end{cases}\n\\end{equation}\n\\begin{equation}\n\\label{CouplingFF}\nA_H\\,\\ov{F}_i\\, F_j:\\begin{cases}\nC_L=\\sin\\theta_{L_i}\\left(V_R^\\dag T_{\\cal R}^a V_R\\right)_{ij}\\sin\\theta_{L_j}+\\cos\\theta_{L_i}\\left(V_L^\\dag T_{\\cal R}^a V_L\\right)_{ij}\\cos\\theta_{L_j}\\\\\nC_R=\\sin\\theta_{R_i}\\left(V_L^\\dag T_{\\cal R}^a V_L\\right)_{ij}\\sin\\theta_{R_j}+\\cos\\theta_{R_i}\\left(V_R^\\dag T_{\\cal R}^a V_R\\right)_{ij}\\cos\\theta_{R_j}\n\\end{cases}\n\\end{equation}\n\\begin{equation}\n\\label{CouplingfF}\nA_H\\,\\ov{f}_i\\, F_j:\\begin{cases}\nC_L=\\cos\\theta_{L_i}\\left(V_R^\\dag T_{\\cal R}^a V_R\\right)_{ij}\\sin\\theta_{L_j}-\\sin\\theta_{L_i}\\left(V_L^\\dag T_{\\cal R}^a V_L\\right)_{ij}\\cos\\theta_{L_j}\\\\\nC_R=\\cos\\theta_{R_i}\\left(V_L^\\dag T_{\\cal R}^a V_L\\right)_{ij}\\sin\\theta_{R_j}-\\sin\\theta_{R_i}\\left(V_R^\\dag T_{\\cal R}^a V_R\\right)_{ij}\\cos\\theta_{R_j}\n\\end{cases}\n\\end{equation}\n\\begin{equation}\n\\label{CouplingFf}\nA_H\\,\\ov{F}_i\\, f_j:\\begin{cases}\nC_L=\\sin\\theta_{L_i}\\left(V_R^\\dag T_{\\cal R}^a V_R\\right)_{ij}\\cos\\theta_{L_j}-\\cos\\theta_{L_i}\\left(V_L^\\dag T_{\\cal R}^a V_L\\right)_{ij}\\sin\\theta_{L_j}\\\\\nC_R=\\sin\\theta_{R_i}\\left(V_L^\\dag T_{\\cal R}^a V_L\\right)_{ij}\\cos\\theta_{R_j}-\\cos\\theta_{R_i}\\left(V_R^\\dag T_{\\cal R}^a V_R\\right)_{ij}\\sin\\theta_{R_j}\n\\end{cases}\n\\end{equation}\n\nA few comments are in order:\n\\begin{description}\n\\item 1)\\quad These results are valid for both Abelian and non-Abelian flavour gauge\n symmetries. In particular, barring accidental cancellations,\n the couplings described in Eqs.~(\\ref{CouplingfF}) and (\\ref{CouplingFf}) are always non-vanishing. The only exception is the class of $Z'$ models in which the SM fermions have universal charges. In these cases the flavour gauge symmetry $\\cG_f$ is a simple $U(1)$ and charge universality ensures that $T_{\\cal R}^a\\propto\\mathbb{1}$. Hence, the couplings in Eqs.~(\\ref{CouplingfF}) and (\\ref{CouplingFf}) identically vanish and the couplings in Eqs.~(\\ref{Couplingff}) and (\\ref{CouplingFF}) turn out to be flavour conserving.\n\n\\item 2)\\quad When the flavour symmetry is a product of different groups, \\mbox{$\\cG_f=\\cG_1\\times\\ldots \\times \\cG_k$}, and fermions transform non-trivially under different factors of $\\cG_f$, Eqs.~(\\ref{Couplingff})--(\\ref{CouplingFf})\n change. In particular, if $\\psi_L$ ($\\Psi_L$) and $\\psi_R$ ($\\Psi_R$) transform under\n two distinct groups, then only one term on the right-hand side of the previous\n equations is present (for example in Eq.~(\\ref{Couplingff}), $C_L$ could contain either the term \\mbox{$\\cos\\theta_{L_i}\\left(V_R^\\dag T_{\\cal R}^a V_R\\right)_{ij}\\cos\\theta_{L_j}$} or \\mbox{$\\sin\\theta_{L_i}\\left(V_L^\\dag T_{\\cal R}^a V_L\\right)_{ij}\\sin\\theta_{L_j}$}, but not both terms simultaneously). As a result, even for the simple case $\\cG_f=U(1)^2$ with\n universal fermion charges, the couplings described in Eqs.~(\\ref{CouplingfF}) and\n (\\ref{CouplingFf}) are non-vanishing. Notice that in this particular case all the couplings in Eqs.~(\\ref{Couplingff})--(\\ref{CouplingFf}) turn out to be flavour conserving.\n \n\\item 3)\\quad When the SM fermion masses originate from the see-saw mechanism with heavy exotic fermions, $\\sin\\theta_{L_k}$ and $\\sin\\theta_{R_k}$ are small (cf. Eqs.~(\\ref{FormulaSin1}) and (\\ref{FormulaSin2})). Hence, the couplings described in Eqs.~(\\ref{CouplingfF}) and (\\ref{CouplingFf}) are all suppressed. In the particular case of $m'_{F_k}\\gg m_{f_k}$, this suppression is roughly given by $\\sqrt{m_{f_k}\/m'_{F_k}}$.\n\n\\item 4)\\quad When a theory is anomaly free without the necessity of introducing extra exotic fermions, only the expressions in Eq.~(\\ref{Couplingff}) apply. Still, the couplings can be flavour violating. This is the case of $U(1)$ models in which either the charge assignment of the SM fermions is such that no anomalies arise or the Green-Schwarz mechanism \\cite{GS:GSmechanism} is implemented into the theory to compensate for the anomalies. 
In the specific case of $Z'$ models with universal fermion $U(1)$-charges, the couplings are flavour conserving, as already discussed in 1).\n\\end{description}\n\nAs we will show in Sects.~\\ref{sec:analytic} and \\ref{sec:QCD}, in virtually any gauge model in which SM fermion masses are explained through a see-saw mechanism with heavy exotic fermions, both new contributions we introduced in Sect.~\\ref{sec:Intro} are present and have potentially interesting effects. Their impact depends on the details of the specific model, according to the above discussion.\n\nOn the other hand, for completeness we will also consider models with gauge symmetries beyond those of the SM that provide either an explanation of the fermion mass patterns through a mechanism different from the see-saw, or no explanation at all. In these cases, heavy exotic fermions might not be present in the particle spectrum, as in 4), and the corresponding contributions are absent. However, when present, the couplings listed in Eqs.~(\\ref{Couplingff})--(\\ref{CouplingFf}) have in general a completely different structure. In particular, there might be no suppression of the exotic-fermion contributions by $\\sin\\theta_{L_k}$ and $\\sin\\theta_{R_k}$.\n\nThe presence of exotic fermions can modify the couplings\nof the SM $W$ and $Z$ bosons to fermions with respect to their SM values. As a consequence, non-unitary quark and lepton mixing matrices and\nflavour violating $Z$ couplings\ntypically arise; their precise form is specific to the considered gauge\nflavour model. Moreover, these modifications affect only the Wilson coefficients and do not generate further operators. For this reason we shall not perform\nsuch a study in our model-independent analysis. A study in a model-dependent\ncontext considering both SM and heavy gauge bosons contributing to the\n$\\ov{B}\\to X_s\\gamma$ branching ratio is in progress \\cite{BCMS:progress}.\n\n\n\n\n\\section{New contributions at the high scale}\n\\label{sec:analytic}\n\\mathversion{bold}\n\\subsection{Effective Hamiltonian for $b\\to s\\gamma$}\n\\mathversion{normal}\n\\label{subsec:ContributionGeneralStructure}\n\nAdopting the overall normalisation of the SM effective Hamiltonian and considering \nas a first step only the dipole operators and the contribution of new\nphysics to their Wilson coefficients, we write the effective Hamiltonian\nrelevant for $b\\to s\\gamma$ at the high matching scale $\\mu_H\\gg M_W$ as: \n\\begin{equation}\n\\label{Heff_at_mu}\n{\\cal H}_{\\rm eff}^{(b\\to s\\gamma)} = - \\dfrac{4 G_{\\rm F}}{\\sqrt{2}} V_{ts}^* V_{tb} \\Big[\\Delta C_{7\\gamma}(\\mu_H) Q_{7\\gamma} + \\Delta C_{8G}(\\mu_H) Q_{8G} +\\Delta C'_{7\\gamma}(\\mu_H) Q'_{7\\gamma} +\\Delta C'_{8G}(\\mu_H) Q'_{8G} \\Big]\\,.\n\\end{equation}\nAt this scale, the $W$ bosons are still dynamical and the usual SM contribution\nto the Wilson coefficients is absent. We will include it after integrating out the $W$\nboson at the lower scale $\\mu_W\\approx \\cO(M_W)$. 
In our conventions the dipole operators are given by:\n\\begin{equation}\n\\label{O6B}\n\\begin{aligned}\nQ_{7\\gamma} &= \\dfrac{e}{16\\pi^2}\\, m_b\\, \\bar{s}_\\alpha\\, \\sigma^{\\mu\\nu}\\, P_R\\, b_\\alpha\\, F_{\\mu\\nu}\\,,\\\\[2mm] \nQ_{8G} &= \\dfrac{g_s}{16\\pi^2}\\, m_b\\, \\bar{s}_\\alpha\\, \\sigma^{\\mu\\nu}\\, P_R\\, T^a_{\\alpha\\beta}\\, b_\\beta\\, G^a_{\\mu\\nu} \n\\end{aligned}\n\\end{equation}\nand the primed operators are obtained from them after replacing the right-handed projector,\n$P_{R}$, with the left-handed one, $P_L$. In the SM the contributions of the primed\noperators are suppressed by $m_s\/m_b$ relative to those coming from $Q_{7\\gamma}$ and\n$Q_{8G}$. We decompose the Wilson coefficients at the scale $\\mu_W$\nas the sum of the SM contribution and the new one from the exchange of neutral\nflavour gauge bosons, after evolving the new physics contribution using the\nRG running of the operators $Q^{(\\prime)}_{7\\gamma}$ and\n$Q^{(\\prime)}_{8G}$, namely\n\\begin{equation}\nC_i(\\mu_W)=C_i^{SM}(\\mu_W)+\\Delta C_i(\\mu_W)\n\\end{equation}\nand similarly for the primed coefficients. The SM Wilson coefficients\nat LO are \\cite{IL:Loops}\n\\begin{align}\n\\label{c7}\nC^{SM}_{7\\gamma} (\\mu_W) &= \\frac{3 x_t^3-2 x_t^2}{4(x_t-1)^4}\\ln x_t - \\frac{8 x_t^3 + 5 x_t^2 - 7 x_t}{24(x_t-1)^3}\\,,\\\\[2mm]\n\\label{c8}\nC^{SM}_{8G}(\\mu_W) &= \\frac{-3 x_t^2}{4(x_t-1)^4}\\ln x_t - \\frac{x_t^3 - 5 x_t^2 - 2 x_t}{8(x_t-1)^3}\\,,\n\\end{align}\nwhere $x_t\\equiv m_t^2\/M_W^2$. The corresponding primed coefficients are also given by\nthe previous expressions, but with an extra suppression factor of\n$m_s\/m_b$.\n\nThe new physics contributions at $\\mu_W$, $\\Delta C_i(\\mu_W)$, follow from the RG evolution of the Wilson coefficients at $\\mu_H$, $\\Delta C_i(\\mu_H)$, down to $\\mu_W$; Their relations are discussed in Sect.~\\ref{sec:QCD}. We can further decompose the new physics contributions at the high scale as the sum of two pieces, one coming from the exchanges of heavy exotic quarks and the second from the exchanges of light SM down-quarks: respectively,\n\\begin{equation}\n\\begin{aligned}\n\\Delta C_{7\\gamma}(\\mu_H)&=\\Delta C^\\text{heavy}_{7\\gamma}(\\mu_H) + \\Delta C^\\text{light}_{7\\gamma}(\\mu_H)\\,,\\\\[2mm]\n\\Delta C_{8G}(\\mu_H) &=\\Delta C^\\text{heavy}_{8G}(\\mu_H) + \\Delta C^\\text{light}_{8G}(\\mu_H)\\,.\n\\end{aligned}\n\\label{eq:wilsonatmhTot}\n\\end{equation}\nIn the following we analyse these two contributions separately.\n\n\n\n\n\n\\subsection{Contributions from heavy exotic quark exchanges}\n\nWe begin with the new contributions to $b\\to s \\gamma$ arising from the\nexchange of heavy neutral flavour gauge bosons and heavy exotic quarks. In what follows,\nwe restrict the discussion to the contribution from a single virtual gauge\nboson $A_H$ and a single virtual heavy quark of electric charge $-1\/3$ and mass\n$m'_F$. The results will then be generalised to an arbitrary number of such particles. 
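Before doing so, it is convenient to record a minimal numerical sketch of the LO SM matching conditions of Eqs.~(\\ref{c7}) and (\\ref{c8}), which will serve as reference values for the new contributions discussed below (the input masses are indicative only):
\\begin{verbatim}
# LO SM matching conditions, Eqs. (c7) and (c8); input masses indicative only
import math

def C7_SM(x):
    return ((3*x**3 - 2*x**2) / (4*(x - 1)**4) * math.log(x)
            - (8*x**3 + 5*x**2 - 7*x) / (24*(x - 1)**3))

def C8_SM(x):
    return (-3*x**2 / (4*(x - 1)**4) * math.log(x)
            - (x**3 - 5*x**2 - 2*x) / (8*(x - 1)**3))

mt, MW = 165.0, 80.4         # m_t(m_t) and M_W in GeV (indicative)
xt = (mt / MW)**2
print(C7_SM(xt), C8_SM(xt))  # approximately -0.19 and -0.10
\\end{verbatim}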
\nFrom the general Feynman rules listed in Eqs.~(\\ref{CouplingfF}) and (\\ref{CouplingFf}),\nwe read off the flavour changing couplings of $A_H$ to the bottom and strange quarks and\nthe exotic quark $F$ relevant for $b\\to s \\gamma$:\n\\begin{equation}\n\\includegraphics{.\/vertex_heavy-crop.pdf}\n\\label{vertex}\n\\end{equation}\nThe only diagram \\footnote{The corrections on the external legs, necessary for a canonical\nkinetic term, are understood to be contained in finite off-diagonal field renormalisations.}\nwith a virtual $F$ and $A_H$ exchange contributing to the $b\\to s\\gamma$ transition in the\nunitary gauge is shown in Fig.~\\ref{fig:bsgamma}. In the same figure, we also show the\nanalogous diagram contributing to $b\\to s\\, g$. The latter will also contribute to the \n$\\ov{B}\\to X_s\\gamma$ rate via QCD mixing of $Q_{8G}$ in $Q_{7\\gamma}$. In both diagrams,\nthere is no triple gauge boson vertex as the heavy gauge boson $A_H$ considered does\nnot carry electric or colour charge.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics{.\/bsgamma-crop}\n\\caption{\\it Magnetic penguin diagram contributing to $b\\to s\\gamma$ and \n$b\\to s g$ with internal $A_H$ and $F$ exchanges.\n\\label{fig:bsgamma}}\n\\end{figure}\n\nWe further decompose the Wilson coefficients $\\Delta C^\\text{heavy}_i$ at the high\nscale as the sum of the SM-like LL contribution and a new LR one, where $L$ ($R$) stands for the $P_L$ ($P_R$) projector in the vertex of Eq.~(\\ref{vertex}) involving the $s$($b$)-quark:\n\\begin{equation}\n\\begin{aligned}\n\\Delta C^\\text{heavy}_{7\\gamma}(\\mu_H)&=\\Delta^{LL} C^\\text{heavy}_{7\\gamma}(\\mu_H) +\\Delta^{LR}C^\\text{heavy}_{7\\gamma}(\\mu_H)\\,,\\\\[2mm]\n\\Delta C^\\text{heavy}_{8G}(\\mu_H) &=\\Delta^{LL} C^\\text{heavy}_{8G}(\\mu_H) +\\Delta^{LR}C^\\text{heavy}_{8G}(\\mu_H)\\,.\n\\end{aligned}\n\\label{eq:wilsonatmh}\n\\end{equation}\nThe corresponding primed coefficients to Eq.~(\\ref{eq:wilsonatmh}) are obtained by interchanging $L\\leftrightarrow R$ and including the suppression factor $m_s\/m_b$ in the primed Wilson coefficients.\n\nFollowing \\cite{BBPTU:LittlestHiggsT}, the SM-like contributions LL and RR are\nobtained by noting that their topological structure is the same as the SM {\\it gluon}\nmagnetic penguin. Consequently, the known SM results for the Wilson coefficient of the\ngluon dipole operator, $C^{SM}_{8G}$, can be used to obtain both LL and RR contributions\nafter a suitable redefinition of couplings and masses. Notice that in the neutral\nexchange there is generally no analogue of a GIM mechanism, therefore mass independent terms\nin the SM Wilson coefficients must be kept. We extract them from \\cite{BMU:BSG1,BMU:BSG2}.\nOur results are as follows:\n\\begin{equation}\n\\begin{aligned}\n\\Delta^{LL}C^\\text{heavy}_{7\\gamma}(\\mu_H) &=-\\dfrac{1}{6}\\,\\dfrac{g_H^2}{g_2^2}\\,\\dfrac{M_W^2}{M_{A_H}^2}\\,\\dfrac{C_L^{sF*}\\,C_L^{bF}}{V_{ts}^*\\,V_{tb}}\\,\\left(C_{8G}^{SM}(x)+\\dfrac{1}{3}\\right),\\\\[2mm]\n\\Delta^{LL}C^\\text{heavy}_{8 G}(\\mu_H) &= - 3 \\Delta^{LL}C^\\text{heavy}_{7\\gamma}(\\mu_H)\\,,\n\\end{aligned}\n\\label{LLnew}\n\\end{equation}\nwith $x=m_F^2\/M_{A_H}^2$. The $RR$ contributions are obtained by the\ninterchange $L\\leftrightarrow R$. $C_{8G}^{SM}(x)$ is the SM function\non the right-hand side of Eq.~(\\ref{c8}).\\\\\n\nWe now consider the LR and RL contributions, which have no SM equivalent. 
To obtain the\nWilson coefficients, we adapt to our case known calculations of $b\\to sg$ in the context\nof $b\\to s\\gamma$ in the left-right symmetric models \\cite{CM:BSGdecayLRsym,BMU:BSG1},\nwhere, however, the process is mediated by charged gauge bosons. In this way,\nwe obtain the following LR Wilson coefficients:\n\\begin{equation}\n\\begin{aligned}\n\\Delta^{LR}C^\\text{heavy}_{7\\gamma}(\\mu_H)&=-\\dfrac{1}{6}\\,\\dfrac{g_H^2}{g_2^2}\\,\\dfrac{M_W^2}{ M_{A_H}^2}\\,\\dfrac{m'_F}{m_b}\\,\\dfrac{C_L^{sF*}\\,C_R^{bF}}{V_{ts}^*\\,V_{tb}}\\,C^{LR}_{8G}(x)\\,,\\\\[2mm]\n\\Delta^{LR}C^\\text{heavy}_{8G}(\\mu_H)&= -3\\Delta^{LR}C^\\text{heavy}_{7\\gamma}(\\mu_H)\\,,\n\\end{aligned}\n\\label{LRnew}\n\\end{equation}\nwith \n\\begin{equation}\nC^{LR}_{8G}(x)=\\dfrac{-3x}{2(1-x)^3}\\ln{x}+ \\dfrac{3 x (x-3)}{4(x-1)^2} -1\\,.\n\\label{CLR8Gcoeff}\n\\end{equation}\nThe analogous RL contributions are obtained by interchanging $L\\leftrightarrow R$. \nIn this case $C^{RL}_{8G}(x)=C^{LR}_{8G}(x)$.\n\nThe following properties should be noted:\n\\begin{description} \n\\item 1)\\quad the LR (RL) contribution in Eq.~(\\ref{LRnew}) dominates over the LL (RR) one of Eq.~(\\ref{LLnew}) in a large part of the parameter space, due to the factor $m'_F\/m_b$; \n\\item 2)\\quad the factor $m'_F\/m_b$ is replaced by $m_t\/m_b$ in the usual left-right symmetric models, therefore in these models the enhancement is less pronounced; \n\\item 3)\\quad $C^{LR}_{8G}(x)$ is a non-vanishing monotonic function of $x$ and takes values in the range $[-1,\\,-1\/4]$ for $x$ from $0$ to $\\infty$; \n\\item 4)\\quad in the decoupling limit these contributions turn out to vanish. Indeed when large masses for the exotic fermions $m'_{F_k}$ are considered, the masses of the neutral gauge bosons $M_{A_H}$ are also large, as we have already commented in Sect.~\\ref{sec:Context}. Therefore the LL (RR) contributions approach zero with $1\/M_{A_H}^2$ and the LR (RL) ones are strongly suppressed by $m'_F\/M_{A_H}^2$.\n\\end{description}\n\n\n\n\n\\subsection{Contributions from light SM quark exchanges}\n\nWe now discuss $\\Delta C^{\\text{light}}_{7\\gamma}$ and $\\Delta C^{\\text{light}}_{8G}$ from\nEq.~\\eqref{eq:wilsonatmhTot}, namely the contributions arising from the exchange of light down-type quarks in \nloop diagrams analogous to the one in Fig~\\ref{fig:bsgamma}, where the flavour changing couplings of $A_H$ to the SM quarks read\n\\begin{equation}\n\\includegraphics{.\/vertex_light-crop.pdf}\n\\label{vertexLight}\n\\end{equation}\narising from the general Feynman rules listed in Eq.~(\\ref{Couplingff}). In the SM, the light-quark contributions cancel each other at scales well above their masses due to the GIM mechanism. 
This is in contrast to the present case, where such a mechanism is absent.\n\nFollowing the procedure of the previous section, we decompose the Wilson coefficients at the high scale as the sum of a $LL$ part and a $LR$ part:\n\\begin{equation}\n\\begin{aligned}\n\\Delta C^\\text{light}_{7\\gamma}(\\mu_H)&=\\Delta^{LL} C^\\text{light}_{7\\gamma}(\\mu_H) +\\Delta^{LR}C^\\text{light}_{7\\gamma}(\\mu_H)\\,,\\\\[2mm]\n\\Delta C^\\text{light}_{8G}(\\mu_H) &=\\Delta^{LL} C^\\text{light}_{8G}(\\mu_H) +\\Delta^{LR}C^\\text{light}_{8G}(\\mu_H)\\,.\n\\end{aligned}\n\\label{eq:wilsonatmhLight}\n\\end{equation}\nIn the framework of effective field theories the light degrees of freedom are treated as massless at the high matching-scale $\\mu_H$.\nTherefore, we only need to account for the first term in the expansion of the\nlight masses and only mass-independent terms in\nEq.~(\\ref{LLnew}) and (\\ref{LRnew}) contribute. For the LL SM-like contributions we find\n\\begin{equation}\n\\begin{aligned}\n \\Delta^{LL}C^\\text{light}_{7\\gamma}(\\mu_H) &=-\\dfrac{1}{6}\\,\\dfrac{g_H^2}{g_2^2}\\,\n \t\t\t\t\t\t\\dfrac{M_W^2}{M_{A_H}^2}\\,\\,\\,\n\t\t\t\t\t\t\\sum_{f}\\dfrac{C_L^{sf*}\\,C_L^{bf}}{V_{ts}^*\\,V_{tb}}\\,\n\t\t\t\t\t\t\\left(\\dfrac{1}{3}\\right),\\\\[2mm]\n \\Delta^{LL}C^\\text{light}_{8 G}(\\mu_H) &= - 3 \\Delta^{LL}C^\\text{light}_{7\\gamma}(\\mu_H)\\,,\n\\end{aligned}\n\\label{eq:LLnewLight}\n\\end{equation}\nand for the LR contribution we have from Eqs.~(\\ref{LRnew}) and (\\ref{CLR8Gcoeff})\n\\begin{equation}\n\\begin{aligned}\n\\Delta^{LR}C^\\text{light}_{7\\gamma}(\\mu_H)&=-\\dfrac{1}{6}\\,\n\t\t\t\t\t\t\\dfrac{g_H^2}{g_2^2}\\,\n\t\t\t\t\t\t\\dfrac{M_W^2}{ M_{A_H}^2}\\,\\,\\,\n\t\t\t\t\t\t\\sum_f\\dfrac{m_f}{m_b}\n\t\t\t\t\t\t\\dfrac{C_L^{sf*}\\,C_R^{bf}}{V_{ts}^*\\,V_{tb}}\\,\n\t\t\t\t\t\t\\left( -1\\right)\\,,\\\\[2mm]\n\\Delta^{LR}C^\\text{light}_{8G}(\\mu_H)&= -3\\Delta^{LR}C^\\text{light}_{7\\gamma}(\\mu_H)\\,,\n\\end{aligned}\n\\label{eq:LRnewLight}\n\\end{equation}\nwhere $f$ stands for the SM down-type quarks. Analogously, we obtain also the results for the primed Wilson coefficients. The\nLL and RR SM-like contributions have already been considered in \\cite{LP:ZpNonUniversal}, while\nthe LR and RL are new. Notice that the latter come with a factor\nof $m_f\/m_b$ and hence, the effects from $d$- and $s$-quarks are suppressed with\nrespect to the $b$-quark contribution.\n\n\n\n\\subsection{Generalisations and the GIM-like mechanism}\n\nIt is easy to extend the previous results to the case of an arbitrary number of neutral\nflavour gauge bosons, $A_{H_i}$, and heavy fermions, $F_k$, by performing the\nfollowing substitution in Eqs.~(\\ref{LLnew}) and (\\ref{LRnew}):\n\\begin{equation}\n\\begin{aligned}\n&\\dfrac{g^2_H}{ M_{A_H}^2}\\,C_{L,R}^{sF*}\\,C_{L,R}^{bF}\\,\\left(C_{8G}^{SM}(x)+\\dfrac{1}{3}\\right)\n\\longrightarrow \n\\sum_{i,k}\\dfrac{g^2_{H_i}}{M_{A_{H_i}}^2}\\,C_{L,R}^{ski*}\\,C_{L,R}^{bki}\\,\\left(C_{8G}^{SM}(x_{ki})+\\dfrac{1}{3}\\right)\\,,\\\\[2mm]\n&g^2_H\\,\\dfrac{{m'_F}}{ M_{A_H}^2}\\,C_{L,R}^{sF*}\\,C_{R,L}^{bF}\\,C^{LR}_{8G}(x)\n\\longrightarrow\n\\sum_{i,k}g^2_{H_i}\\,\\dfrac{{m'_{F_k}}}{M_{A_{H_i}}^2}\\,C_{L,R}^{ski*}\\,C_{R,L}^{bki}\\,C^{LR}_{8G}(x_{ki})\\,,\n\\end{aligned}\n\\label{SumRule}\n\\end{equation}\nwhere $x_{ki}\\equiv {m^{\\prime\\,2}}_{F_k}\/M^2_{A_{H_i}}$ and $C_{L,R}^{(s,b)ki}$ are the\ncouplings among the light quarks $(s,b)$ with $A_{H_i}$ and $F_k$. 
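As an aside, the following minimal sketch (all couplings, masses and the CKM normalisation are illustrative placeholders, taken real for simplicity) evaluates the generalised LR contribution of Eqs.~(\\ref{LRnew}), (\\ref{CLR8Gcoeff}) and (\\ref{SumRule}), making explicit both the $m'_{F_k}\/m_b$ enhancement and its suppression by small see-saw mixings:
\\begin{verbatim}
# Sketch of the generalised LR heavy-exotic contribution to Delta C_7gamma
# at mu_H, Eqs. (LRnew), (CLR8Gcoeff) and (SumRule). Illustrative inputs only.
import math

def C8_LR(x):
    # Eq. (CLR8Gcoeff): monotonic, between -1 (x -> 0) and -1/4 (x -> infinity)
    return -3*x / (2*(1 - x)**3) * math.log(x) + 3*x*(x - 3) / (4*(x - 1)**2) - 1

def dC7_LR_heavy(pairs, mb=4.2, MW=80.4, g2=0.65, Vts_Vtb=0.04):
    # pairs: one tuple (g_H, M_AH, mF_prime, CL_sF, CR_bF) per (A_Hi, F_k)
    # combination entering the sum of Eq. (SumRule); couplings taken real
    total = 0.0
    for gH, MAH, mFp, CLsF, CRbF in pairs:
        x = (mFp / MAH)**2
        total += gH**2 * mFp / MAH**2 * CLsF * CRbF * C8_LR(x)
    return -MW**2 / (6 * g2**2 * mb * Vts_Vtb) * total

# one A_H of 3 TeV and one exotic quark of 1 TeV with see-saw-suppressed mixings
print(dC7_LR_heavy([(0.5, 3000.0, 1000.0, 1.0e-3, 1.0e-3)]))
\\end{verbatim}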
In\nthis substitution we take $A_{H_i}$ to be the mass eigenstates of the neutral gauge\nbosons. Similarly, we may also generalise Eq.~\\eqref{eq:LLnewLight} and\n\\eqref{eq:LRnewLight}.\n\nNotice that in the case that the masses of the heavy particles span a wide range of values, the generalisation above is only a first approximation: a more precise result follows from a detailed analysis using the results of Sect. \\ref{sec:QCD} considering several threshold scales. This further study goes beyond the scope of the present paper and in the following we only consider a single high matching scale.\n\nThe factor $m'_{F_k}\/M^2_{A_{H_i}}$ can represent a dangerous enhancement of the LR (RL) contribution in Eq.~(\\ref{SumRule}), which is unlikely to vanish, barring accidental cancellations. We identify a series of conditions under which this contribution exactly vanishes:\n\\begin{description}\n\\item 1)\\quad $g_{H_i}=g_{H}$, i.e. there is only one gauge symmetry or all the new gauge coupling constants have the same strength;\n\\item 2)\\quad $m'_{F_k}=m'_{F}$, $M_{A_{H_i}}=M_{A_H}$, i.e. all the exotic fermions and the neutral gauge bosons are degenerate in mass, as in the case of unbroken flavour symmetries; \n\\item 3)\\quad $\\sum_{i}C_{L,R}^{ski*}\\,C_{L,R}^{bki}=\\sum_{i}C_{L,R}^{ski*}\\,C_{R,L}^{bki}=0$, i.e. the couplings are unitary matrices satisfying $C_L=C_R$.\n\\end{description}\nSimilar conditions apply also to the contributions of SM down-type quarks. \n\nThese conditions assure the exact cancellation of the NP contributions. However, similarly to the GIM mechanism, if they are only partially satisfied, the terms in Eq.~(\\ref{SumRule}) do not cancel each other completely and the NP contributions could still be dangerous.\n\nFurthermore, we remark that condition 2) would correspond to degenerate SM fermions and thus these conditions cannot be satisfied in models which successfully explain the SM fermion mass pattern through the see-saw mechanism illustrated in Sect.~\\ref{sec:Context}. In these models, however, the mixings between SM and exotic fermions are small, cf. Eqs.~\\eqref{CouplingfF} and \\eqref{CouplingFf}. This provides a sufficiently strong suppression to safely neglect these contributions. We further discuss this point in Sect.~\\ref{sec:3classes}.\n\n\n\n\n\\section{QCD corrections}\n\\label{sec:QCD}\n\\subsection{General structure}\n\\label{subsec:QCDgeneralstructure}\n\nIn order to complete the analysis of the $b\\to s\\gamma$ decay we have to include QCD\ncorrections, which play a very important role within the SM, enhancing\nthe rate by a factor of $2-3$~\\cite{MABCC:BSGalpha2}. It originates dominantly from the mixing of {\\it charged} current-current operators into\nthe dipole operators and to a smaller extent from the mixing with QCD-penguin operators.\n\nWhen the flavour gauge bosons $A_{H_i}$ and the exotic fermions $F_k$ are integrated\nout at the high scale $\\mu_H$, in addition to the dipole operators discussed at length\nin the previous section, also {\\it neutral} current-current operators corresponding to\na tree-level $A_{H_i}$ exchange are generated together with ordinary QCD-\nand QED-penguin operators.\n\nThe contributions of QCD- and QED-penguin operators, arising from diagrams\nwith $A_{H_i}$ and $F_k$ exchanges, are much less important than within the SM. Indeed,\nin the SM context, QCD-penguins do not have to compete with tree-level FCNC \ndiagrams, as the latter are absent due to the GIM mechanism. 
However, \nin the present case there are neutral current-current operators originating from\nthe tree-level $A_{H_i}$ exchanges that are not suppressed by $\\alpha_s(\\mu_H)$ and\nloop effects, contrary to QCD-penguins. Thus, for all practical purposes the contributions of\nQCD-penguins from $A_{H_i}$ and $F_k$ exchanges at the high scale $\\mu_H$ can be neglected. Even less important\nare effects from QED-penguins. Note that this is in contrast to dipole operators\nwhich cannot be generated at tree-level, but nevertheless mediate the $b\\to s\\gamma$ decay.\n\nIn what follows we will extend the SM RG analysis by considering\nthe QCD effects of neutral current-current operators at scales $\\mu_b\\leq \\mu\\leq\\mu_H$\ngenerated by diagrams with the exchange of a single neutral gauge boson $A_H$. The\nextension to the case of an arbitrary number of such gauge bosons is straightforward. This\nanalysis does take into account the mixing under QCD renormalisation of these neutral current-current operators 1) into dipole operators, 2) among themselves, 3) into QCD-penguin operators.\nThe latter mixing generates contributions to QCD-penguin operators at scales lower than $\\mu_H$, even if the initial conditions at $\\mu_H$ of the Wilson coefficients of these operators are neglected. Even if this effect is numerically\nnegligible, we include it for completeness as the mixing of QCD-penguin operators and\ndipole operators is taken into account in the SM part.\n\nBefore going into details, let us note that the neutral current-current operators do\nnot mix with the charged current-current operators as their flavour structures\ndiffer from each other. On the other hand, similarly to charged current-current\noperators, neutral current-current operators have an impact on dipole operators\nand QCD-penguins through RG effects without being themselves affected\nby the presence of these two types of operators.\n\nWe would like to emphasise that the QCD analysis presented below is also relevant for other processes, such as non-leptonic two-body $B$ decays, and other observables, like $\\epsilon'\/\\epsilon$.\n\nDenoting symbolically charged current-current, QCD-penguin,\ndipole, and neutral current-current operators and the corresponding primed operators respectively by\n\\begin{equation}\nQ^{cc}\\,,\\qquad\\quad\nQ_P\\,,\\qquad\\quad\nQ_D\\,,\\qquad\\quad\nQ^{nn}\\,,\\qquad\\quad\nQ'_P\\,,\\qquad\\quad\nQ'_D\\,,\\qquad\\quad\nQ^{nn\\,\\prime}\\,,\n\\end{equation}\nthe structure of the one-loop anomalous dimension matrix looks as follows:\n\\begin{equation}\n\\begin{tabular}{l|c|c|c|c|c|c|c|l}\n\\multicolumn{1}{r}{}& \\multicolumn{1}{c}{$Q^{cc}$} & \\multicolumn{1}{c}{$Q_P$} & \\multicolumn{1}{c}{$Q_D $}\t& \\multicolumn{1}{c}{$Q^{nn}$}\n & \\multicolumn{1}{c}{} \t& \\multicolumn{1}{c}{}\t\t& \\multicolumn{1}{c}{} \\\\\n\\cline{2-5}\n$Q^{cc}$ \t& $X_1$ \t& $X_2$\t\t& $X_3$ \t& 0 \t&\\multicolumn{1}{c}{}\t&\\multicolumn{1}{c}{}\t&\\multicolumn{1}{c}{}\\\\\n\\cline{2-5}\n$Q_P$ \t\t& 0 \t\t& $X_4$ \t& $X_5$ \t& 0 \t&\\multicolumn{1}{c}{}\t&\\multicolumn{1}{c}{}\t&\\multicolumn{1}{c}{}\\\\\n\\cline{2-5}\n$Q_D$ \t\t& 0 \t\t& 0 \t\t& $X_6$ \t& 0 \t&\\multicolumn{1}{c}{}\t&\\multicolumn{1}{c}{}\t&\\multicolumn{1}{c}{}\\\\\n\\cline{2-5}\n$Q^{nn}$ \t& 0 \t\t& $Y_1$ \t& $Y_2$ \t& $Y_3$ &\\multicolumn{1}{c}{}\t&\\multicolumn{1}{c}{}\t&\\multicolumn{1}{c}{}\\\\\n\\cline{2-8}\n\\multicolumn{1}{c}{} \t \t&\\multicolumn{1}{c}{}&\\multicolumn{1}{c}{}&\\multicolumn{1}{c}{}&\t& $X_4$ & $X_5$ & 0 \t& 
\\multicolumn{1}{l}{$Q_P'$}\\\\\n\\cline{6-8}\n\\multicolumn{1}{c}{} \t \t&\\multicolumn{1}{c}{}&\\multicolumn{1}{c}{}&\\multicolumn{1}{c}{}&\t& 0 & $X_6$ & 0 \t& \\multicolumn{1}{l}{$Q_D'$}\\\\\n\\cline{6-8}\n\\multicolumn{1}{c}{} \t\t&\\multicolumn{1}{c}{}&\\multicolumn{1}{c}{}&\\multicolumn{1}{c}{}&\t& $Y_1$\t& $Y_2$ & $Y_3$ & \\multicolumn{1}{l}{$Q^{nn\\,\\prime}$}\\\\\n\\cline{6-8}\n\\multicolumn{1}{r}{}\t\t&\\multicolumn{1}{r}{}&\\multicolumn{1}{r}{}&\\multicolumn{1}{r}{}&\\multicolumn{1}{r}{}& \\multicolumn{1}{c}{$Q_P'$} & \\multicolumn{1}{c}{$Q_D' $} & \\multicolumn{1}{c}{$Q^{nn\\,\\prime}$}\n\\end{tabular}\n\\label{AnomalousDimension}\n\\end{equation}\nThe non-vanishing entries denoted by $X_i$ are known from the SM analysis \\cite{CFRS:BSG,BBL:WeakNLO}. The new\nresults in the present paper are the entries denoted by $Y_i$, the initial\nconditions for $Q_D$, given already in Sect. \\ref{sec:analytic}, and the\ncorresponding conditions for $Q^{nn}$, given at the end of this section. \n\nThe RG analysis of the NP contributions can be performed\nindependent from the SM one: that is \n\\begin{equation}\n\\begin{aligned}\n&C_i^{SM}(\\mu_b)=U_{ij}(\\mu_b,\\, \\mu_W)\\,C_j^{SM}(\\mu_W)\\,,\\\\[2mm]\n&C_i^{NP}(\\mu_b)=W_{ij}(\\mu_b,\\,\\mu_H)\\, C_j^{NP}(\\mu_H)\\,,\\\\[2mm]\n&C_i^\\text{total}(\\mu_b)=C_i^{SM}(\\mu_b)+C_i^{NP}(\\mu_b)\\,,\n\\end{aligned}\n\\end{equation}\nwhere $U_{ij}$ and $W_{ij}$ are the elements of the RG evolution\nmatrix. The Wilson coefficients for the primed operators at the high scale are\nin general different from the expression above, but the evolution matrices $U$ and $W$ do not change.\nThe general structure of these matrices in terms of the anomalous dimension\nmatrices and the QCD-$\\beta$-functions is well known \\cite{Buras:LH-lectures}\nand will not be repeated here. Therefore, we present in the next sections our\nresults for the matrices $Y_i$ and the initial conditions for $Q^{nn}$ and\n$Q^{nn\\,\\prime}$ operators.\n\n\n\n\n\\subsection{Operator basis for neutral current-current operators}\n\nThere are $48$ neutral current-current operators generated by the exchange\nof $A_H$, all containing the neutral currents $(\\ov{s}\\,\\gamma_\\mu\\,P_{L,R}\\,b)$\nand the flavour conserving currents $(\\ov{f}\\,\\gamma_\\mu\\,P_{L,R}\\,f)$ with\n$f=u,c,t,d,s,b$. 
At this scale also the exotic fermions have been already integrated out at $\\mu_H$.\n\nThe general notation for the $48$ $Q^{nn}$ and $Q^{nn\\,\\prime}$ operators in\nquestion will be \n\\begin{equation}\nQ_{1,2}^f(A,B)\\,,\\qquad \\qquad \\text{for} \\quad A,B=\\{L,R\\}\\,.\n\\label{QnnGeneral}\n\\end{equation}\nFor instance\n\\begin{equation}\n\\begin{aligned}\n&Q_1^u(L,R)=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,b_\\beta\\right)\\left(\\ov{u}_\\beta\\,\\gamma^\\mu\\,P_R\\,u_\\alpha\\right)\\,,\\\\[2mm]\n&Q_2^u(L,R)=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,b_\\alpha)(\\ov{u}_\\beta\\,\\gamma^\\mu\\,P_R\\,u_\\beta\\right)\\,,\n\\end{aligned}\n\\end{equation}\nwhere $\\alpha,\\beta=1,2,3$ are colour indices.\n\nMoreover there are 8 additional neutral current-current operators\nthat have to be included, as they have a different structure, when colour\nand flavour structures are considered simultaneously:\n\\begin{equation}\n\\begin{aligned}\n&\\hat{Q}_1^d(A,B)=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_A\\,d_\\beta\\right)\\left(\\ov{d}_\\beta\\,\\gamma^\\mu\\,P_B\\,b_\\alpha\\right)\\,,\\\\[2mm]\n&\\hat{Q}_2^d(A,B)=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_A\\,d_\\alpha)(\\ov{d}_\\beta\\,\\gamma^\\mu\\,P_B\\,b_\\beta\\right)\\,.\n\\end{aligned}\n\\label{QnnhatGeneral}\n\\end{equation}\nIn this classification, we refer to $Q^{nn}$ [$Q^{nn\\,\\prime}$] as those operators\n$Q_{1,2}(A,B)$ with $A=L$ [$A=R$] and $B=L,R$ [$B=R,L$].\n\nWe point out that the operator basis in Eqs.~(\\ref{QnnGeneral}) and\n(\\ref{QnnhatGeneral}) can be reduced using Fierz transformations. Moreover\nsome relations can be found between the usual charged current-current\nand QCD-penguin operators\n\\begin{equation}\n\\begin{aligned}\n&Q_1=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,c_\\beta\\right)\\left(\\ov{c}_\\beta\\,\\gamma^\\mu\\,P_L\\,b_\\alpha\\right)\\,,\\\\[2mm]\n&Q_2=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,c_\\alpha\\right)\\left(\\ov{c}_\\beta\\,\\gamma^\\mu\\,P_L\\,b_\\beta\\right)\\,,\\\\[2mm]\n&Q_3=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,b_\\alpha\\right)\\sum_{q=u,d,s,c,b}\\left(\\ov{q}_\\beta\\,\\gamma^\\mu\\,P_L\\,q_\\beta\\right)\\,,\\\\[2mm]\n&Q_4=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,b_\\beta\\right)\\sum_{q=u,d,s,c,b}\\left(\\ov{q}_\\beta\\,\\gamma^\\mu\\,P_L\\,q_\\alpha\\right)\\,,\\\\[2mm]\n&Q_5=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,b_\\alpha\\right)\\sum_{q=u,d,s,c,b}\\left(\\ov{q}_\\beta\\,\\gamma^\\mu\\,P_R\\,q_\\beta\\right)\\,,\\\\[2mm]\n&Q_6=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,b_\\beta\\right)\\sum_{q=u,d,s,c,b}\\left(\\ov{q}_\\beta\\,\\gamma^\\mu\\,P_R\\,q_\\alpha\\right)\\,.\n\\end{aligned}\n\\end{equation}\n\nYet, similarly to \\cite{BJL:AnatomyEpsilon,BBH:AnatomyEpsilon} we have decided\nto work with all operators and not use such relations when performing the RG analysis in order to keep the anomalous dimensions in a\ntransparent form. As discussed in \\cite{BJL:AnatomyEpsilon,BBH:AnatomyEpsilon}\none can work with linearly dependent operators in the process of RG\nanalysis without any problems and use such\nrelations only at the end of the evolution if necessary. However, in the case\nof $C_{7\\gamma}(\\mu_b)$ at LO this is not required. 
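As an explicit example of such a Fierz relation (for anticommuting fields and two left-handed currents), one has
\\begin{equation}
\\hat{Q}_1^d(L,L)=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,d_\\beta\\right)\\left(\\ov{d}_\\beta\\,\\gamma^\\mu\\,P_L\\,b_\\alpha\\right)
=\\left(\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_L\\,b_\\alpha\\right)\\left(\\ov{d}_\\beta\\,\\gamma^\\mu\\,P_L\\,d_\\beta\\right)=Q_2^d(L,L)\\,,
\\end{equation}
and analogously $\\hat{Q}_2^d(L,L)=Q_1^d(L,L)$, while no such simple rearrangement holds for the $(L,R)$ structures, which Fierz into scalar densities. This illustrates the linear dependences mentioned above, which we do not exploit in the LO RG analysis.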
\n\nIn this context we should warn the reader that at NLO level a use of Fierz\nrelations in the RG analysis could lead to wrong results within\nthe NDR scheme unless Fierz evanescent operators are introduced \n\\cite{JP:QCDinDeltaS,BMU:2loopQCDGamma}.\n\n\n\\subsection{Anomalous dimension matrices}\n\nWe define the LO anomalous dimension matrix through \n\\begin{equation}\n\\hat{\\gamma}(\\alpha_s)=\\dfrac{\\alpha_s}{4\\pi}\\hat{\\gamma}^{(0)}\\,,\n\\end{equation}\nand give below the results for $\\hat{\\gamma}^{(0)}$ (dropping the index $(0)$ to simplify the notation). \n\nThe inspection of the one-loop diagrams contributing to the anomalous\ndimension matrices shows that only 16 operators have to be considered\nin order to find the full matrix. These are\n\\begin{equation}\n\\label{ListOperators}\n\\begin{array}{llll}\nQ_{1,2}^u(L,L)\\,,&\\qquad Q_{1,2}^d(L,L)\\,,&\\qquad Q_{1,2}^s(L,L)\\,,&\\qquad \\hat{Q}_{1,2}^d(L,L)\\,,\\\\[2mm]\nQ_{1,2}^u(L,R)\\,,&\\qquad Q_{1,2}^d(L,R)\\,,&\\qquad Q_{1,2}^s(L,R)\\,,&\\qquad \\hat{Q}_{1,2}^d(L,R)\\,.\n\\end{array}\n\\end{equation}\n\nThe anomalous dimensions for $u$ replaced by $c$ and $t$ are equal to\nthe one of $Q_{1,2}^u$. The ones of $Q_{1,2}^b$ are equal to the ones\nof $Q_{1,2}^s$. The anomalous dimensions of the remaining 28 operators,\nnamely the primed operators obtained by $L\\leftrightarrow R$, are\nthe same as those of the corresponding unprimed operators \n(see Eq.~(\\ref{AnomalousDimension})).\n\nThe mixing between the $Q^{nn}$ operators and $Q_D$, that is the matrix\n$Y_2$, can be extracted from \\cite{CFMRS:BSG,CFRS:BSG} by inspecting the mixing\nbetween QCD-penguin and dipole operators. For the {\\it transposed}\nmatrices we get\n\n\\begin{equation}\n\\hat\\gamma_D^T(L,L)=\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|}\n\\hline\n&&&&&&&&\\\\[-4mm]\n& $Q_1^u$ & $Q_2^u$ & $Q_1^d$ & $Q_2^d$ & $Q_1^s$ & $Q_2^s$ & $\\hat{Q}_1^d$ & $\\hat{Q}_2^d$ \\\\[2mm]\n\\hline\n\\hline\n&&&&&&&&\\\\[-4mm]\n$Q_{7\\gamma}$ & $\\dfrac{416}{81}$ & $0$ & $-\\dfrac{232}{81}$ & 0 & $-\\dfrac{232}{81}$ & $-\\dfrac{232}{81}$ & 0 & $-\\dfrac{232}{81}$ \\\\[2mm]\n\\hline\n&&&&&&&&\\\\[-4mm]\n$Q_{8G}$ & $\\dfrac{70}{27}$ & $3$ & $\\dfrac{70}{27}$ & $3$ & $\\dfrac{151}{27}$ & $\\dfrac{151}{27}$ & $3$ & $\\dfrac{70}{27}$\\\\[2mm]\n\\hline\n\\end{tabular}\n\\end{equation}\n\\begin{equation}\n\\hat\\gamma_D^T(L,R)=\n\\begin{tabular}{|c||c|c|c|c|c|c|c|c|}\n\\hline\n&&&&&&&&\\\\[-4mm]\n& $Q_1^u$ & $Q_2^u$ & $Q_1^d$ & $Q_2^d$ & $Q_1^s$ & $Q_2^s$ & $\\hat{Q}_1^d$ & $\\hat{Q}_2^d$ \\\\[2mm]\n\\hline\n\\hline\n&&&&&&&&\\\\[-4mm]\n$Q_{7\\gamma}$ & $-\\dfrac{448}{81}$ & $0$ & $\\dfrac{200}{81}$ & $0$ & $\\dfrac{200}{81}$ & $\\dfrac{16}{9}$ & $-\\dfrac{80}{3}$ & $-\\dfrac{32}{9}$ \\\\[2mm]\n\\hline\n&&&&&&&&\\\\[-4mm]\n$Q_{8G}$ & $-\\dfrac{119}{27}$ & $-3$ & $-\\dfrac{119}{27}$ & $-3$ & $-\\dfrac{173}{27}$ & $-\\dfrac{16}{3}$ & $-4$ & $\\dfrac{8}{3}$ \\\\[2mm]\n\\hline\n\\end{tabular}\n\\end{equation}\n\nThe mixing between neutral current-current operators, that is the\nmatrix $Y_3$, is universally given by two $2\\times 2$ matrices:\n\\begin{equation}\n\\hat\\gamma^{nn}(L,L)=\n\\begin{tabular}{|c||c|c|}\n\\hline\n&&\\\\[-4mm]\n& $Q_1$ & $Q_2$ \\\\[2mm]\n\\hline\n\\hline\n&&\\\\[-4mm]\n$Q_1$ & $-2$ & $6$ \\\\[2mm]\n\\hline\n&&\\\\[-4mm]\n$Q_2$ & $6$ & $-2$ \\\\[2mm]\n\\hline\n\\end{tabular}\\qquad\\qquad\n\\hat\\gamma^{nn}(L,R)=\n\\begin{tabular}{|c||c|c|}\n\\hline\n&&\\\\[-4mm]\n& $Q_1$ & $Q_2$ \\\\[2mm]\n\\hline\n\\hline\n&&\\\\[-4mm]\n$Q_1$ & $-16$ & $0$ 
\\\\[2mm]\n\\hline\n&&\\\\[-4mm]\n$Q_2$ & $-6$ & $2$ \\\\[2mm]\n\\hline\n\\end{tabular}\n\\end{equation}\nOnly operators with the same flavour-conserving structure $(\\ov{f}\\,\\gamma_\\mu\\,P_{L,R}\\,f)$ mix in this way.\n\nFinally, in the case of $Y_1$ there is a universal mixing of $Q^{nn}$\noperators into QCD-penguin operators:\n\\begin{equation}\n\\hat\\gamma_P=\n\\begin{tabular}{|c||c|c|c|c|}\n\\hline\n&&&&\\\\[-4mm]\n& $Q_3$ & $Q_4$ & $Q_5$ & $Q_6$ \\\\[2mm]\n\\hline\n\\hline\n&&&&\\\\[-4mm]\n$Q^{nn}$ & $-\\dfrac{2}{9}$ & $\\dfrac{2}{3}$ & $-\\dfrac{2}{9}$ & $\\dfrac{2}{3}$ \\\\[2mm]\n\\hline\n\\end{tabular}\n\\end{equation}\nfor\n\\begin{equation}\nQ^{nn}=\\left\\{Q_1^{u,c,t,d,s,b}(L,L),\\, Q_2^{s,b}(L,L),\\, \\hat{Q}_2^d(L,L),\\, Q_1^{u,c,t,d,s,b}(L,R)\\right\\}\n\\end{equation}\nwith no mixing for the remaining operators in Eq.~(\\ref{ListOperators}).\n\n\n\n\n\\subsection{Initial conditions}\n\nThe initial conditions at $\\mu_H$ for the dipole operators have already been presented in Sect.~\\ref{sec:analytic}: see Eqs.~(\\ref{c7}), (\\ref{c8}), (\\ref{LLnew}), (\\ref{LRnew}), (\\ref{eq:LLnewLight}) and (\\ref{eq:LRnewLight}). Here we give the corresponding initial conditions for the neutral current-current operators.\n\nThe initial conditions are determined by integrating out all the\nheavy degrees of freedom at $\\mu_H$ and matching the result to the effective theory. At LO this matching is trivial. In the following we give the result\nfor the simplified case in which only one neutral flavour gauge boson $A_H$\ncontributes to the initial conditions. The generalisation to several\ngauge bosons is obvious.\n\nDenoting the vertex $\\ov{f}_i\\,A_H\\,f_j$ as in Eqs.~(\\ref{vertexGeneral}) and (\\ref{Couplingff}), the general expression for the Wilson\ncoefficient of the neutral current-current operator\n\\begin{equation}\nQ^f_2(A,B)\\equiv (\\ov{s}_\\alpha\\,\\gamma_\\mu\\,P_A\\,b_\\alpha)(\\ov{f}_\\beta\\,\\gamma^\\mu\\,P_B\\,f_\\beta)\\,,\n\\end{equation}\nin the normalisation of Eq.~(\\ref{Heff_at_mu}), is given by\n\\begin{equation}\n\\Delta^{AB}C^f_2(\\mu_H)=-\\dfrac{1}{2}\\dfrac{g_H^2}{g_2^2}\\dfrac{M_W^2}{M_{A_H}^2}\\dfrac{C_A^{sb*}\\,C_B^{ff}}{V_{ts}^*\\,V_{tb}}\\,,\n\\label{InitialConditionQnn}\n\\end{equation}\nwhile all the coefficients $\\Delta^{AB} C_1$ are zero. Similarly, the Wilson coefficient for the $\\hat{Q}^d_{2}(A,B)$ operator is \n\\begin{equation}\n\\Delta^{AB}\\hat{C}_2^d(\\mu_H)=-\\dfrac{1}{2}\\dfrac{g_H^2}{g_2^2}\\dfrac{M_W^2}{M_{A_H}^2}\\dfrac{C_A^{sd*}\\,C_B^{bd}}{V_{ts}^*\\,V_{tb}}\\,,\n\\label{InitialConditionQnnHat}\n\\end{equation}\nwhile $\\Delta^{AB}\\hat{C}_1^d(\\mu_H)$ is again zero.\n\n\\section{Phenomenological analysis}\n\\label{sec:pheno}\n\\mathversion{bold}\n\\subsection{The $\\ov{B}\\to X_s \\gamma$ branching ratio}\n\\mathversion{normal}\n\\label{sec:Br}\n\nThe SM prediction for the $\\ov{B}\\to X_s\\gamma$ branching ratio at\nNNLO \\cite{MABCC:BSGalpha2,GM:BSGdefinition,MS:BSGdefinition} reads,\n\\begin{equation}\nBr(\\ov{B}\\to X_s \\gamma)=(3.15\\pm0.23)\\times 10^{-4}\\,,\n\\label{BRSMNNLO}\n\\end{equation}\nand has been calculated for a photon-energy cut-off $E_\\gamma>1.6$ GeV in the $\\ov{B}$-meson rest frame. This is to be compared\nwith the current experimental value \\cite{PDG2010},\n\\begin{equation}\nBr(\\ov{B}\\to X_s \\gamma)=(3.55\\pm 0.24 \\pm 0.09)\\times 10^{-4}\\,,\n\\label{BRexp}\n\\end{equation}\nfor the same energy cut-off $E_\\gamma$. 
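For a rough orientation, adding the errors of Eqs.~(\\ref{BRSMNNLO}) and (\\ref{BRexp}) in quadrature and ignoring correlations, the SM prediction lies somewhat more than one standard deviation below the measured value:
\\begin{verbatim}
# Rough comparison of Eqs. (BRSMNNLO) and (BRexp); errors added in quadrature
import math
br_sm,  err_sm  = 3.15e-4, 0.23e-4
br_exp, err_exp = 3.55e-4, math.hypot(0.24e-4, 0.09e-4)
print((br_exp - br_sm) / math.hypot(err_sm, err_exp))   # roughly 1.2
\\end{verbatim}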
In the presence of NP the expression\nfor the branching ratio is given as follows:\n\\begin{equation}\nBr(\\ov{B}\\to X_s \\gamma) = R \\left(|C_{7\\gamma}(\\mu_b)|^2+|C'_{7\\gamma}(\\mu_b)|^2+N(E_\\gamma)\\right)\\,,\n\\label{BRtotal}\n\\end{equation}\nwhere $R=2.47\\times10^{-3}$ and $N(E_\\gamma)=(3.6\\pm0.6)\\times10^{-3}$. The parameter $R$ is simply an overall factor, whose determination is discussed in \\cite{GM:BSGdefinition,MS:BSGdefinition}, while $N(E_\\gamma)$ is a non-perturbative contribution. Calculating the NP contributions in the LO approximation, but including the\nSM contribution at the NNLO level we find for $\\mu_b=2.5$ GeV\n\\begin{equation}\nC_{7\\gamma}(\\mu_b)=C^{SM}_{7\\gamma}(\\mu_b)+\\Delta C_{7\\gamma}(\\mu_b)\n\\label{eq:C7efftotal}\n\\end{equation}\nwhere the central value of $C^{SM}_{7\\gamma}(\\mu_b)$ at the NNLO, corresponding to the value in Eq.~(\\ref{BRSMNNLO}), is\ngiven by \\cite{MABCC:BSGalpha2,GM:BSGdefinition,MS:BSGdefinition}\n\\begin{equation}\nC^{SM}_{7\\gamma}(\\mu_b)=-0.3523\n\\label{eq:C7effSM}\n\\end{equation}\nand the NP one, calculated by us at the LO, by\n\\begin{equation} \n\\begin{split}\n\\Delta C_{7\\gamma}(\\mu_b)=&\\quad\\kappa_7~\\Delta C_{7\\gamma}(\\mu_H) +\\kappa_8~\\Delta C_{8G}(\\mu_H)+\\\\[2mm]\n\t\t\t &+\\!\\!\\!\\!\\!\\sum_{\\substack{A=L,R\\\\f=u,c,t,d,s,b}}\\!\\!\\!\\!\\! \\kappa^{f}_{LA}~\\Delta ^{LA} C_2^f(\\mu_H)\n\t\t\t +\\!\\!\\!\\sum_{A=L,R}\\!\\!\\!\\!\\hat{\\kappa}^{d}_{LA}~\\Delta ^{LA} \\hat{C}_2^d(\\mu_H).\n\\end{split}\n\\label{eq:DeltaC7eff}\n\\end{equation}\nHere $\\kappa$'s are the NP magic numbers listed in Tab.~\\ref{tab:c7magicnumbers}, calculated taking $\\alpha_s(M_Z=91.1876\\,\\text{GeV})=0.118$. The primed coefficient $C'_{7\\gamma}(\\mu_b)$ can be obtained from\n(\\ref{eq:C7efftotal})--(\\ref{eq:DeltaC7eff}), by interchanging $L\\leftrightarrow R$\nand taking the initial conditions of the primed Wilson coefficients. In particular, the NP magic numbers listed in Tab.~\\ref{tab:c7magicnumbers} are also valid\nfor the primed case, as QCD is blind to the fermion chirality\\footnote{In\nthe computation of $C'_{7\\gamma}(\\mu_b)$ we are neglecting the SM contributions\n$C_{7\\gamma}^{SM\\,\\prime}(\\mu_b)$ and $C_{8G}^{SM\\,\\prime}(\\mu_b)$, and the NP\ncontributions $\\Delta C'_{7\\gamma}(\\mu_b)$ and $\\Delta C'_{8G}(\\mu_b)$, since all of\nthem are suppressed by $m_s\/m_b$ with respect the unprimed ones. 
Our numerical\nanalyses confirm that this approximation is consistent with errors below the percent\nlevel.}.\n\n\\begin{table}[h!]\n\\begin{center}\n\\begin{tabular}{|c||r|r|r|r||r|}\n \\hline\n &&&&&\\\\[-4mm]\n $\\mu_H$\t \t & 200 GeV \t& 1 TeV\t\t& 5 TeV\t\t& 10 TeV & $M_Z$\\\\[1mm]\n \\hline\\hline\n $\\kappa_7$\t\t & 0.524\t& 0.457\t\t& 0.408\t\t& 0.390\t &\t 0.566 \\\\\t\t\t\n $\\kappa_8$\t\t & 0.118\t& 0.125\t\t& 0.129\t\t& 0.130\t & 0.111\t\\\\[1mm]\t\t\n \\hline\n &&&&&\\\\[-4mm]\n $\\kappa_{LL}^{u,c}$\t & 0.039\t& 0.057\t\t& 0.076\t\t& 0.084\t & 0.030\t\\\\\t\t\t\n $\\kappa_{LL}^{t}$\t &-0.002\t&-0.003\t\t&-0.002\t\t&-0.001\t & --\t\\\\\t\t\t\n $\\kappa_{LL}^{d}$\t &-0.040\t&-0.057\t\t&-0.072\t\t&-0.079\t & -0.032\t\\\\\t\t\t\n $\\kappa_{LL}^{s,b}$\t & 0.087\t& 0.090\t\t& 0.090\t\t& 0.090\t & 0.084\t\\\\\t\t\t\n $\\hat{\\kappa}_{LL}^{d}$& 0.128\t& 0.147\t\t& 0.163\t\t& 0.168\t & 0.116\t\\\\[1mm]\n \\hline \n &&&&&\\\\[-4mm] \t\n $\\kappa_{LR}^{u,c}$\t & 0.085\t& 0.128\t \t& 0.173\t\t& 0.193\t & 0.065\t\\\\\t\t\t\n $\\kappa_{LR}^{t}$\t & 0.004\t& 0.012\t \t& 0.023\t\t& 0.028\t & --\t\\\\\t\t\t\n $\\kappa_{LR}^{d}$\t &-0.015\t&-0.025\t\t&-0.036\t\t&-0.041\t & -0.011\t\\\\\t\t\t\n $\\kappa_{LR}^{s,b}$\t &-0.078\t&-0.092\t \t&-0.106\t\t&-0.111\t & -0.070\t\\\\\t\t\t\n $\\hat{\\kappa}_{LR}^{d}$& 0.473\t& 0.665\t\t& 0.865\t\t& 0.953\t & 0.383\t\\\\[1mm]\t\n \\hline \t\n\\end{tabular}\n\\caption{The NP magic numbers for $\\Delta C_{7\\gamma}(\\mu_b)$ defined\nin Eq.~(\\ref{eq:DeltaC7eff}). For completeness, we include in the last column the case of a flavour-violating $Z$. The analogous table for \n$\\Delta C_{8G}(\\mu_b)$ is provided in Appendix \\ref{AppA}.}\n\\label{tab:c7magicnumbers}\n\\end{center}\n\\end{table}\n\nThe SM contribution containing NLO and NNLO QCD corrections exhibits a negligible\n$\\mu_b$-dependence. This is not the case for the NP contribution at LO.\nHowever, we have checked numerically that when the NP contribution enhances the SM value\nby $20\\%$, the $\\mu_b$-dependence in the total branching ratio amounts\nto a $3\\%$ uncertainty for $\\mu_b\\in[2.5,\\,5]$ GeV. For smaller deviations from the SM prediction \nthe uncertainty further decreases. This renders this uncertainty sufficiently small\nfor our purposes.\\\\\n\nBefore concluding a few observations are in order:\n\\begin{description}\n\\item 1)\\quad Similarly to the SM, the magic numbers $\\kappa_7$ and $\\kappa_8$ suppress the initial values $\\Delta C_{7\\gamma}(\\mu_H)$ and $\\Delta C_{8G}(\\mu_H)$. This is due to the QCD RG evolution running down to $\\mu_b$. Furthermore, the suppression of $\\Delta C_{7\\gamma}(\\mu_H)$ increases with $\\mu_H$.\n \n\\item 2)\\quad As in the SM, provided $\\Delta^{AB}C_2^f(\\mu_H)$ and $\\Delta^{AB}\\hat{C}_2^d(\\mu_H)$\n are sufficiently larger than $\\Delta C_{7\\gamma}(\\mu_H)$, the additive QCD corrections stemming from the mixing of the neutral current-current operators\n into the dipole operators are dominant. Furthermore the QCD factors $\\kappa_i$ increase in\n most cases with $\\mu_H$.\n The most prominent is the coefficient $\\hat{\\kappa}_{LR}^d$ which\n could even be of $\\cO(1)$, but also $\\kappa_{LR}^{u,c}$, $\\hat{\\kappa}^d_{LL}$ and\n $\\kappa^{s,b}_{LL,LR}$ are sizable. 
However, values of the corresponding initial conditions could compensate these coefficients, as in the case of small couplings among SM fermions.\n \n\\item 3)\\quad The SM and NP primed dipole Wilson coefficients, $ C'_i(\\mu_W)$ and $\\Delta C'_i(\\mu_H)$, are suppressed by $m_s\/m_b$ and turn out to be numerically negligible. On the other hand, this suppression is absent in the contributions of the primed neutral current-current operators $Q^{nn\\prime}$ and therefore they should be considered in the determination of the branching ratio.\n\\end{description}\n\n\n\n\n\\subsection{Three classes of models}\n\\label{sec:3classes}\n\nThe results for $\\kappa_i$ shown in Tab.~\\ref{tab:c7magicnumbers} are\nmodel-independent and hold for all models in which the neutral gauge\nbosons have flavour violating couplings to fermions as\ndiscussed in Sect.~\\ref{sec:Context}.\nOn the other hand, the initial conditions of the Wilson coefficients are\nmodel-dependent. However, in spite of the large variety of models it is possible to distinguish three main classes, even if\nhybrid situations are also possible:\n\\begin{description}\n \\item 1)\\quad Models without exotic fermions. This is the case\n of models that do not aim at explaining the SM fermion masses and mixings\n or models that do provide such an explanation but without exotic fermions. For instance, $Z'$ constructions in which the theory is \n anomaly free without extending the fermion spectrum\n of the SM fall into this class. This is possible when the Green--Schwarz mechanism is implemented in the theory or when the\n generator of the additional $U(1)$ is\n a linear combination of the hypercharge $Y$ and $B-L$, where $B$ is the\n baryon number and $L$ is the lepton number.\n\n For all models of this class, the new diagrams with virtual\n exotic fermions do not contribute. On the other hand, the contributions from the exchange\n of the light down-type quarks, the neutral current-current operators\n and the corresponding QCD evolution are still present. As illustrated\n in our numerical analysis of Sect.~\\ref{sec:ToyModel}, the light-quark\n contributions turn out to be negligible in the whole parameter space.\n\n\n\\item 2)\\quad Models in which the SM fermion-mass patterns are governed\n by exotic fermions through a see-saw mechanism as illustrated in\n Sect.~\\ref{sec:Context}. In this case, the couplings of\n the neutral gauge bosons to SM and exotic fermions, described by\n Eqs.~(\\ref{Couplingff})--(\\ref{CouplingFf}), are suppressed by\n $\\sin\\theta_{L_k,R_k}$ given in Eqs.~(\\ref{FormulaSin1}) and (\\ref{FormulaSin2}).\n In the specific limit of $m'_{F_k}\\gg m_{f_k}$ and $M_1^D\\approx M_2^D\\equiv M^D$\n the suppression in $\\sin\\theta_{L_k,R_k}$ is approximately\n of order $\\cO\\left(\\sqrt{m_{f_k}\/m'_{F_k}}\\right)$ .\n\n In order to illustrate the size of the contributions from the new diagrams\n of Sect.~\\ref{sec:analytic}, we redefine the couplings $C_{L,R}^{ski}$\n and $C_{L,R}^{bki}$ to exhibit the dependence on the suppressing factor. 
Without\n loss of generality, we can approximately write\n \\begin{equation}\n C_{L,R}^{ski}\\simeq \\sqrt{\\dfrac{m_s}{m'_{F_k}}} \\tilde{C}_{L,R}^{ski}\\,,\\qquad\\qquad\n C_{L,R}^{bki}\\simeq \\sqrt{\\dfrac{m_b}{m'_{F_k}}} \\tilde{C}_{L,R}^{bki}\\,.\n \\end{equation}\n As a result, for the case of arbitrary $A_{H_i}$ and $F_k$, the expressions\n on the right-hand side of Eqs.~(\\ref{LLnew}) and (\\ref{LRnew}) are given by\n \\begin{equation}\n \\begin{aligned}\n \\Delta^{LL}C^{\\text{heavy}}_{7\\gamma}(\\mu_H) &\\simeq-\\dfrac{1}{6}\\,\\sum_{i,k}\\dfrac{g_{H_i}^2}{g_2^2}\\,\\dfrac{M_W^2}{M_{A_{H_i}}^2}\\,\\dfrac{\\sqrt{m_s\\,m_b}}{m'_{F_k}}\\,\\dfrac{\\tilde{C}_L^{ski*}\\,\\tilde{C}_L^{bki}}{V_{ts}^*\\,V_{tb}}\\,\\left(C_{8G}^{SM}(x_{ki})+\\dfrac{1}{3}\\right),\\\\[2mm]\n \\Delta^{LR}C^{\\text{heavy}}_{7\\gamma}(\\mu_H)&\\simeq-\\dfrac{1}{6}\\,\\sum_{i,k}\\dfrac{g_{H_i}^2}{g_2^2}\\,\\dfrac{M_W^2}{ M_{A_{H_i}}^2}\\,\\sqrt{\\dfrac{m_s}{m_b}}\\,\\dfrac{\\tilde{C}_L^{ski*}\\,\\tilde{C}_R^{bki}}{V_{ts}^*\\,V_{tb}}\\,C^{LR}_{8G}(x_{ki})\\,,\n \\end{aligned}\n \\label{LRnewnew}\n \\end{equation}\n and $\\Delta^{(LL,LR)}C^{\\text{heavy}}_{8G}(\\mu_H)=-3\\Delta^{(LL,LR)}C^{\\text{heavy}}_{7\\gamma}(\\mu_H)$.\n Similar expressions can be written for the primed contributions. Notice\n that the $m'_{F_k}\/m_b$ enhancement is completely removed from the LR (RL)\n Wilson coefficient and is replaced by the suppressing factor $\\sqrt{m_s\/m_b}$.\n The only dependence on $m'_{F_k}$ is in the loop factor $C^{LR}_{8G}(x_{ik})$.\n On the other hand, an extra inverse power of $m'_{F_k}$ is now present in the LL (RR) contribution, which further suppresses this term. \n\n\nAs we shall explicitly demonstrate in Sect.~\\ref{sec:ToyModel}\n these contributions turn out\n to be negligible with respect to those from the neutral current-current operators.\n The same holds for the light-quark contributions. Hence, when\n dealing with this class of models, we simply neglect the contributions of\n exotic fermions and therefore, as in 1), only neutral current-current operators\n and their mixing with dipole operators are relevant.\n\n\\item 3)\\quad Models with exotic fermions, in which the definition of their couplings to gauge bosons and SM fermions is independent of\n the mechanism of the SM fermion mass generation. For example, this is the case of models in which\n these couplings do not arise from the standard kinetic terms of\n Eq.~(\\ref{StandardKineticTerms}). As a result, $\\sin\\theta_{L_k,R_k}$ can in general be much larger than in\n the previous classes of models.\n\n In this case, the expressions in Eqs.~(\\ref{LLnew}) and (\\ref{LRnew}) do not suffer\n from the additional suppressions of Eq.~(\\ref{LRnewnew}) and in particular the LR (RL)\n contribution is strongly enhanced by the factor $m'_{F_k}\/m_b$. 
For the\n models of this class, the contribution of exotic fermions is the dominant one\n and all the other contributions can be safely neglected.\n\\end{description}\n\nWe summarise the relevant features of the models in these three classes\nin Tab.~\\ref{tab:Categories}.\n\n\\begin{table}[h!]\n\\begin{center}\n\\begin{tabular}{|l||c|c|c|c|}\n \\hline\n &&&\\\\[-4mm]\n Classes of models\t\t\t\t\t\t& $Q_D^\\text{heavy}$ \t& $Q_D^\\text{light}$ \t& $Q^{nn}$, $Q^{nn\\prime}$ \\\\[1mm]\n \\hline\\hline\n &&&\\\\[-4mm]\t\n 1) without exotic fermions \t\t\t\t& Absent\t\t\t\t& Negligible\t\t& Dominant \\\\\n 2) with exotic fermions and see-saw\t\t\t& Negligible\t\t\t& Negligible\t\t& Dominant \\\\\n 3) with exotic fermions but without see-saw \t& Dominant\t\t\t& Negligible\t\t& Negligible \\\\[1mm]\n \\hline \t\n\\end{tabular}\n\\caption{\\it Summary of the different classes of models and the\ncorresponding NP contributions to $b\\to s\\gamma$: contributions from\nthe exchange of heavy exotic quarks $Q_D^\\text{heavy}$, from the exchange\nof SM down-type quarks $Q_D^\\text{light}$, and from neutral current-current\noperators $Q^{nn}$ and $Q^{nn\\prime}$.\n}\n\\label{tab:Categories}\n\\end{center}\n\\end{table}\n\n\n\n\\subsection{Model-independent constraints}\n\nIt has been pointed out in \\cite{AGM:SignC7gamma,CMW:SignC7gamma,HW:SignC7gamma,\nGOS:SignC7gamma,GHM:SignC7gamma,AABGS:SO10SusyGutFCNC} that considering the\nmeasurements of $\\ov{B}\\to X_s \\ell^+\\ell^-$ \\cite{BaBar:Bsll,Belle:Bsll} the\nsign of $C_{7\\gamma}(\\mu_b)$ in the presence of NP is likely to be\nthe same as in the SM.\nThis provides a first model-independent constraint for NP contributions: since\n$C^{SM}_{7\\gamma}(\\mu_b)$ is negative, $\\Delta C_{7\\gamma}(\\mu_b)$ should\nalso be negative in order to soften the tension between the central values\nof the SM prediction and the experimental determination of \n$Br(\\overline{B}\\rightarrow X_s\\gamma)$. This holds not only for contributions from unprimed operators, but also when considering the primed\nones due to the relative suppression of $C'_{7\\gamma}(\\mu_b)$ with respect to \n$C_{7\\gamma}(\\mu_b)$, as explicitly confirmed by our numerical analysis.\n\nThe sign of the NP contributions is determined by the product of the initial conditions and the QCD magic numbers in\nTab.~\\ref{tab:c7magicnumbers}. The signs of the $\\kappa_i$ factors are fixed solely by the QCD running, while the sign of the initial conditions \n$\\Delta C_i(\\mu_H)$ depends on the couplings of the new neutral gauge bosons to the fermions.\n\nIn particular, for a model in the first two classes, this constraint translates into the requirement\nthat the second line on the right-hand side of Eq.~(\\ref{eq:DeltaC7eff})\nshould be negative, namely that:\n\\begin{equation}\n\\Delta C_{7\\gamma}(\\mu_b)\\simeq\\sum_{\\substack{A=L,R\\\\f=u,c,t,d,s,b}}\\kappa^{f}_{LA}~\\Delta ^{LA} C_2^f(\\mu_H)+\n\\sum_{A=L,R}\\!\\!\\!\\!\\hat{\\kappa}^{d}_{LA}~\\Delta ^{LA} \\hat{C}_2^d(\\mu_H)<0\\,,\n\\label{Condition1_12}\n\\end{equation}\n where only the couplings listed in Eq.~(\\ref{Couplingff}) are involved. 
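As an illustration of how this condition can be checked in practice, the following Python sketch evaluates the second line of Eq.~(\ref{eq:DeltaC7eff}) with the magic numbers of Tab.~\ref{tab:c7magicnumbers} at $\mu_H=1$ TeV and the initial conditions of Eqs.~(\ref{InitialConditionQnn}) and (\ref{InitialConditionQnnHat}). The numerical values of the couplings, as well as $g_H=g_2$ and $M_{A_H}=1$ TeV, are illustrative assumptions in the spirit of the toy models of the next subsection, not predictions of a specific model.

\begin{verbatim}
import numpy as np

# kappa^f_{LA} magic numbers at mu_H = 1 TeV, read off Tab. 1
# (flavours listed together in the table share the same value).
kappa_LL = {"u": 0.057, "c": 0.057, "t": -0.003,
            "d": -0.057, "s": 0.090, "b": 0.090}
kappa_LR = {"u": 0.128, "c": 0.128, "t": 0.012,
            "d": -0.025, "s": -0.092, "b": -0.092}
kappa_hat = {"LL": 0.147, "LR": 0.665}

# Illustrative toy-model inputs (assumptions, not fixed by the text):
# a single gauge boson with g_H = g_2 and M_AH = 1 TeV.
MW, M_AH = 80.4, 1000.0                  # GeV
Vts, Vtb = -0.04047, 0.999146
C_sb = {"L": 0.01, "R": 0.01}            # flavour-violating s-b couplings
C_sd = {"L": 0.01, "R": 0.01}            # flavour-violating s-d couplings
C_bd = {"L": 0.01, "R": 0.01}            # flavour-violating b-d couplings
C_ff = {"L": 0.05, "R": 0.05}            # flavour-conserving couplings

pref = -0.5 * (MW / M_AH) ** 2 / (np.conj(Vts) * Vtb)

def C2(A, B):        # Eq. (InitialConditionQnn), with g_H = g_2
    return pref * np.conj(C_sb[A]) * C_ff[B]

def C2_hat(A, B):    # Eq. (InitialConditionQnnHat), with g_H = g_2
    return pref * np.conj(C_sd[A]) * C_bd[B]

# Second line of Eq. (eq:DeltaC7eff): neutral current-current contribution.
dC7 = sum(kappa_LL[f] * C2("L", "L") + kappa_LR[f] * C2("L", "R")
          for f in kappa_LL)
dC7 += kappa_hat["LL"] * C2_hat("L", "L") + kappa_hat["LR"] * C2_hat("L", "R")

print(f"Delta C7(mu_b) = {dC7.real:+.2e}",
      "-> sign constraint satisfied" if dC7.real < 0 else
      "-> sign constraint violated")
\end{verbatim}

With all couplings chosen positive, the sketch returns a positive $\Delta C_{7\gamma}(\mu_b)$ and hence flags a violation of the sign constraint, in line with the observation below that this constraint favours opposite signs for the flavour-violating couplings.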
\n \n On the other\n hand, when a model belongs to the third class, the first line on the right-hand\n side of Eq.~(\\ref{eq:DeltaC7eff}) must be negative,\n \\begin{equation}\n \\Delta C_{7\\gamma}(\\mu_b)\\simeq\\kappa_7~\\Delta C_{7\\gamma}(\\mu_H) +\\kappa_8~\\Delta C_{8G}(\\mu_H)<0\\,.\n \\label{Condition1_3}\n \\end{equation} \n This puts a constraint on the couplings of Eqs.~(\\ref{CouplingfF}) and (\\ref{CouplingFf}).\n\n Once the coupling constants are chosen such that the NP contributions have\n the same sign as the SM one, it is possible to further constrain the parameter space\n by requiring that the predicted branching ratio should not exceed the experimental\n bound. From Eq.~(\\ref{BRtotal}) we\n find the constraint\n \\begin{equation}\n -\\Delta C_{7\\gamma}(\\mu_b)+1.4 \\left(\\left|\\Delta C_{7\\gamma}(\\mu_b)|^2+|\\Delta C'_{7\\gamma}(\\mu_b)\\right|^2\\right)\\lesssim4.2 (6.1) \\times 10^{-2},\n \\label{Condition2}\n \\end{equation}\n corresponding to the $1\\,\\sigma$ ($2\\,\\sigma$) departure from the experimental value, Eq.~\\eqref{BRexp}.\n \n It is straightforward to apply these constraints to all models belonging\n to one of the three classes discussed above. It is not\n possible to obtain more insight without specifying a particular model (an analysis will follow in \\cite{BCMS:progress}).\n However, in the next section, we provide a simplified representative for each class of models\n and show how the constraints discussed so far apply to each of them.\n\n\n\\subsection{Toy-model examples}\n\\label{sec:ToyModel}\n\nFor each class discussed above, we consider a toy-model in order to justify\nthe approximations made in the previous sections and to exemplify the application\nof the model-independent constraints.\n\n\\begin{description}\n \\item {\\it Classes 1) and 2):}\\quad Here, the relevant initial conditions are those\n presented in Eqs.~(\\ref{InitialConditionQnn}) and (\\ref{InitialConditionQnnHat}).\n The coupling constants entering these expressions are $C_{L,R}^{sb}$, $C_{L,R}^{ff}$,\n $C_{L,R}^{sd}$ and $C_{L,R}^{bd}$, with $f=u,c,t,d,s,b$. To simplify the\n analysis, we assume that all flavour-violating couplings with a\n strange flavour are equal to $C_{FV}^{s}$, the two bottom couplings $C_{L,R}^{bd}$ equal to $C_{FV}^{b}$ and all the flavour conserving ones\n equal to $C_{FC}$. Furthermore, we take\n $V_{ts}=-0.04047$, $V_{tb}=0.999146$ \\cite{CKMfitter:2005}, assume that $g_H=g_2$, and consider only one heavy neutral gauge boson. In this way\n we have defined a toy model with only four free parameters: three coupling constants and the mass of the neutral gauge boson.\n \n In Fig.~\\ref{fig:C7comparison1} on the left we show the breakdown of \n $C_{7\\gamma}(\\mu_b)$ in its different contributions\n as a function of the coupling constants $C_{FV}\\equiv C_{FV}^{s}= C_{FV}^{b}=C_{FC}$, for $M_{A_H}=1$ TeV. For completeness, we also show the exotic-quark contributions for $m'_F=10$ TeV. As expected from the discussion\n in Sec.~\\ref{sec:3classes}, all the NP contributions apart from the neutral \n current-current are negligible, justifying our approximations in the previous section.\n We remark that $C^\\prime_{7\\gamma}(\\mu_b)$ almost coincides with the neutral current-current contribution (red line).\n In Fig.~\\ref{fig:C7comparison1} on the right we show the value of the coupling constants $C_{FV}$, as\n a function of $M_{A_H}$, for which the bound in Eq.~(\\ref{Condition2}) is saturated at the \n $1 \\sigma$ and $2\\sigma$ level. 
As we can see, for small values of $M_{A_H}$ the\n couplings are constrained to small values.\n \n \\begin{figure}[h!]\n\\centering\n\\includegraphics[]{.\/constraintWRcoup}\n\\hspace*{-10pt}\n\\includegraphics[]{.\/constraintWRscale}\n\\caption{\\it On the left, the different contributions to $C_{7\\gamma}(\\mu_b)$\n(solid line) are plotted as functions of $C_{FV}$: in blue the SM contribution, in red\nthe neutral current-current contribution, in green the overlapping contributions from exotic and SM down-type quarks. The shadowed region is excluded\nimposing the bound in Eq.~(\\ref{Condition2}) at $1\\sigma$ (lighter) and $2\\sigma$ (darker). On the right, we show the value of\n$C_{FV}$ as a function of $M_{A_H}$ for which the bound in Eq.~(\\ref{Condition2})\nis saturated. Again the shadowed region represents the excluded values.\n\\label{fig:C7comparison1}}\n\\end{figure}\n\nIn Fig.~\\ref{fig:Constraint12} we separately present the implementation of the two model-independent\nconstraints, for $M_{A_H}=1$ TeV. On the left, the constraint on the sign of $\\Delta C_{7\\gamma}(\\mu_b)$ \nmostly reduces the parameter space to cases in which the coupling constants \n$C_{FV}^{s}$ and $C_{FV}^{b}$\nhave opposite signs. In Fig.~\\ref{fig:Constraint12}\non the right, the bound from the branching ratio also applies, further constraining the parameter space.\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[]{.\/constraintWRsign}\n\\hspace*{-10pt}\n\\includegraphics[]{.\/constraintWRbr}\n\\caption{\\it On the left (right), the points satisfying $\\Delta C_{7\\gamma}(\\mu_b)<0$\n(Eq.~(\\ref{Condition2})) in the plane $C_{FV}^{s}$ vs. $C_{FV}^{b}(\\equiv C_{FC})$.\nThe shadowed regions now represent the values for which the bounds are passed: lighter (darker) intensity refers to $-4.2 (6.1)\\times10^{-4}$. \n\\label{fig:Constraint12}}\n\\end{figure}\n\n\\item {\\it Class 3):}\\quad We consider here the case with only one heavy neutral\n gauge boson and one exotic fermion. The relevant initial conditions are those in\n Eqs.~(\\ref{LLnew}) and (\\ref{LRnew}). The coupling constants which enter these\n expressions are $C_{L,R}^{sF}$ and $C_{L,R}^{bF}$. Fixing $g_H=g_2$, $M_{A_H}=1$ TeV\n and $m'_F=10$ TeV, and identifying the coupling constants $C_{FV}\\equiv C_{L,R}^{sF}=C_{L,R}^{bF}$, we illustrate in Fig.~\\ref{fig:C7comparison2} on the left that the dominant NP contributions stems solely from exotic quarks. For a complete comparison, we also plot the contributions from light quarks and neutral current-current operators, adopting the same conventions as for Fig.~\\ref{fig:C7comparison1}.\n\n In Fig.~\\ref{fig:C7comparison2} on the right we show the value for the coupling\n constants $C_{FV}$ as function of $M_{A_H}$ for which the $1\\sigma$ and $2\\sigma$ bounds in Eq.~(\\ref{Condition2})\n are saturated. As we can see the constraint on the couplings is very strong and only\n a small part of the shown parameter space survives even for large values of $M_{A_H}$.\n \n \\begin{figure}[h!]\n\\centering\n\\includegraphics[]{.\/constraintNRcoup}\n\\hspace*{-10pt}\n\\includegraphics[]{.\/constraintNRscale}\n\\caption{\\it A similar description to Fig.~\\ref{fig:C7comparison1} applies. 
Here, however, the green (purple) line refers to the exotic-quark (SM-quark) contributions.\n\\label{fig:C7comparison2}}\n\\end{figure}\n\n In Fig.~\\ref{fig:Constraint3} on the left (right) we show how the couplings are\n constrained by the expression in Eq.~(\\ref{Condition1_3}) (Eq.~(\\ref{Condition2})):\n a negative value for $\\Delta C_{7\\gamma}(\\mu_b)$ is only recovered when both\n $C_L^{sF}$ and $C_{L,R}^{bF}$ have the same sign, while the lower bound\n in Eq.~(\\ref{Condition2}) provides a very strong constraint on the parameters of the model.\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[]{.\/constraintNRsign}\n\\hspace*{-10pt}\n\\includegraphics[]{.\/constraintNRbr}\n\\caption{\\it On the left (right), the points satisfying $\\Delta C_{7\\gamma}(\\mu_b)<0$\n(Eq.~(\\ref{Condition2})) in the plane $C_L^{sF}$ vs. $C_R^{bF}$, for $C_L^{bF}\\in[-1,1]$. The shadowed regions are defined in Fig.~\\ref{fig:Constraint12}. \n\\label{fig:Constraint3}}\n\\end{figure}\n\\end{description}\n\n\\section{Conclusions}\n\\label{sec:concl}\n\nExtensions of the SM in which the gauge group is enlarged by additional symmetries are attractive, because they provide an explanation of the flavour puzzle and predictions testable at colliders. In these models heavy exotic fermions are usually introduced in order to cancel possible anomalies and to justify the observed SM fermion spectrum through the see-saw mechanism. On the other hand, the presence of new heavy neutral gauge bosons and exotic fermions in principle translates into NP contributions to FCNC processes. \n\nIn this paper, we have pointed out two new contributions to the $\\ov{B}\\to X_s\\gamma$ decay that arise in such models. The first is generated through one-loop diagrams mediated by neutral gauge bosons and exotic down-type quarks. The relevance of this contribution depends on the strength of the left-handed and right-handed flavour violating couplings of the neutral gauge bosons to the SM and the exotic quarks. Analogous effects can be present in other processes like $\\mu\\to e\\gamma$ and $t\\to c\\gamma$ as well as flavour conserving observables like $(g-2)_\\mu$ and EDMs, in which dipole operators play the dominant role.\n\nThe second contribution is due to the presence of neutral current-current operators, mediated by neutral gauge bosons, and arises only if the neutral gauge bosons have flavour violating couplings to the SM quarks. Through the QCD mixing with the magnetic dipole operator $Q_{7\\gamma}$, these new neutral current-current operators contribute to $\\ov{B}\\to X_s\\gamma$. To our knowledge these QCD effects have been calculated here for the first time. Furthermore, our QCD analysis of the mixing among neutral current-current, QCD penguin and gluonic dipole operators could also be relevant for other processes, such as non-leptonic two-body $B$ decays, and other observables, like $\\epsilon'\/\\epsilon$.\n\nBeside these new contributions, we have also considered the contributions arising from one-loop diagrams with the exchange of neutral gauge bosons and SM down-type quarks, that have been only partially analysed in the literature.\n\nWe have studied the impact of all these contributions in a model-independent approach and summarised the resulting constraints in Eqs.~(\\ref{Condition1_12})--(\\ref{Condition2}). 
We have presented these expressions in such a manner that in order to test a specific model, it is sufficient to specify only the values of the couplings of the neutral gauge bosons to SM and exotic quarks and their masses. In particular, it is not necessary to repeat the QCD analysis.\n\nA detailed application of this study on a concrete NP scenario is in progress. Here, without entering into details of a particular model, we have described three representative classes of models and discussed the relevance of these contributions. For models in the first class, the SM spectrum is enriched only by the neutral gauge bosons, but no exotic quarks are present. In this case the contributions from the exotic quarks are absent, but those from the neutral current-current operators turn out to have a potentially observable effect on the branching ratio of $\\ov{B}\\to X_s\\gamma$. The value of the masses of the neutral gauge bosons and the strength of their flavour violating couplings to SM fermions determine the relevance of this effect.\n\nThe second class accounts for models in which the SM fermion masses $m_f$ are explained through the see-saw mechanism with heavy exotic fermions of masses $m'_F$. In this case, the couplings of neutral gauge bosons to SM and exotic fermions are suppressed by terms $m_f\/m'_F$. Therefore, the contributions to $Br(\\ov{B}\\to X_s\\gamma)$ from the exchange of exotic quarks turn out to be negligible with respect to those from the QCD mixing of the neutral current-current and magnetic dipole operators.\n\nThe models in the third class are characterised by the presence of heavy exotic fermions, which either provide the SM fermion masses through a different mechanism than the see-saw or do not participate at all in the explanation of the SM flavour spectrum. In this case, no suppression occurs in the couplings of neutral gauge bosons to SM and exotic fermions and the contributions to $Br(\\ov{B}\\to X_s\\gamma)$ from exotic quarks are enhanced by terms $m'_F\/m_b$, where $m_b$ is the mass of the bottom quark. These contributions dominate over all the others.\n\nFor all the models in the three classes, the contributions from SM down-type quarks turn out to be subdominant in the whole parameter space.\n\nOur analysis shows once more how FCNC processes can put constraints on beyond-SM constructions even before the discovery of new particles in high-energy processes.\n\n\n\n\\section*{Acknowledgments}\nWe thank Joachim Brod and Robert Ziegler for useful comments on the preliminary version of the paper and Mikolaj Misiak, Gerhard Buchalla, and Paride Paradisi for interesting discussions. This work was supported by ERC Advanced Grant ``FLAVOUR'' (267104).\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA typical machine learning task involves the minimization of an empirical loss function evaluated over instances within a specific training dataset. When the representation model is built on deep neural networks, millions of parameters are to be optimized; it is a common practice that one invokes a gradient-based optimization method to solve for the minimum of interest \\cite{lecun2015deep}. On the other hand, the datasets for training have been expanding to a very large scale, where the algorithms equipped with stochastic gradients are de facto dominating the practical use of gradient-based optimizers. 
Therefore, stochastic gradient optimizers have been of significant interest to the machine learning research community in the era of deep learning, facilitating various applications of deep learning to real-world problems. 

Originating from stochastic approximation \cite{robbins1951stochastic,kushner2003stochastic}, various algorithms have been proposed and credited for the triumph of deep learning in a broad range of applications, including but not limited to finance \cite{kim2020can,zhang2018benchmarking}, gaming AI \cite{silver2016mastering,peng2017multiagent,yang2020overview}, autonomous driving \cite{grigorescu2020survey,zhou2020smarts} and protein structure discovery \cite{jumper2021highly}. The momentum methods were conceived along with the advent of back-propagation \cite{rumelhart1986learning}; they have been shown to have a fundamental physical interpretation \cite{qian1999momentum} and to be very important for fast convergence \cite{sutskever2013importance}. Adaptive methods, e.g. AdaGrad \cite{duchi2011adaptive}, RMSProp \cite{tieleman2012lecture} and Adam \cite{kingma2014adam}, have demonstrated very fast convergence and robust performance, and now dominate practical applications. Albeit greatly successful, a more systematic understanding of the training dynamics induced by gradient-based methods is still needed when one trains deep neural networks for real-world tasks.

Investigations devoted to this topic typically presume that stochastic gradient methods can be viewed as discretizations of continuous-time dynamical systems, and study the dynamical properties of the continuous-time counterpart. The specification of the dynamical system then lies in identifying the gradient noise. A classical assumption regarding the gradient noise is that it converges asymptotically towards Gaussian random variables due to the classical Central Limit Theorem (CLT)~\cite{feller:1971,chung2001course}; behind this assumption, the variance of stochastic gradients is thought to be finite and the mini-batch size large enough for a good Gaussian approximation. Such an assumption underlies typical variational analyses of SGD and its variants~\cite{mandt2017stochastic}. Based on the Gaussian noise assumption, some works approximate the dynamics of SGD by Langevin dynamics and prove its escaping efficiency from local minima~\cite{li2017stochastic,he2019control,zhou2019toward,hu2019diffusion,luo2018thermostat,luo2020replica}. These works assume that the noise covariance of SGD is constant or upper bounded by some constant.

Recently, the structure of the gradient noise has attracted more attention from the community. Many works study SGD from the perspective of the anisotropic nature of its noise~\cite{zhu2019anisotropic,wu2020noisy,xie2020diffusion}. Empirical evidence~\cite{simsekli2019tail} has also been reported that the gradient variance may be infinite, and it is the infinite variance that violates the conditions of the classical CLT with the Gaussian approximation. They introduced the generalized CLT which showed that the law of the sum of these i.i.d.
variables with infinite variance still converges, but to a different family of heavy-tailed distributions.

We attribute this heavy-tail phenomenon to an insufficient batch size for the noise (\emph{i.e.} a sum of a limited number of \emph{i.i.d.} random variables) to converge to a Gaussian, rather than to the hypothesis that the noise is heavy-tailed by nature \cite{simsekli2019tail,simsekli2020fractional}. 
We take the investigation through a different lens by evaluating the Gaussianity of the noise in terms of the percentage of Gaussian elements among the learnable weights of each layer during the training procedure with different batch sizes: the gradient noise still exhibits good Gaussianity when the batch size increases. That is, the larger the batch size we train with, the higher the Gaussianity of the gradient noise we observe. Notably, similar results appear in~\cite{panigrahi2019non}, where Gaussian behavior of the gradient noise distribution is observed at large batch sizes (over 500). To gain a better understanding and a more quantitative measurement, we invoke the Berry-Esseen theorem, which characterizes the rate of convergence of the CLT~\cite{chung2001course}, and then establish a link between the rate of convergence and the magnitude of the square root of the kurtosis of the noise. With such evidence of Gaussian noise, we analyze the dynamics of SGD and unify the discretization scheme w.r.t. the general Langevin equations: vanilla SGD corresponds to the high-friction limit of the continuous-time dynamics. The learning rate essentially corresponds to the effective temperature of the associated physical system. We provide a simple local approximation of the distribution of the learning parameters when the dynamics enter the vicinity of minima.

The main contribution of this paper is twofold.
1. We demonstrate that the noise within stochastic gradients possesses finite variance during the training procedure of deep neural networks, and therefore the noise is asymptotically Gaussian, as is generally conceived, due to the classical Central Limit Theorem. We establish a link between the rate of convergence and the kurtosis of the noise, which can be readily computed.
2. We analyze the dynamics of stochastic gradient descent and the distribution of parameters, clarifying that the distribution deviates from the Gibbs distribution in the traditional settings.
We provide a local approximation to the distribution when the parameters are close to optimal, which resembles an isotropic quadratic potential.

\section{Analysis of Gaussian Behavior}

\label{analysis}
In this section, we show that the gradient noise does not admit infinite variance during neural network training, via an analysis of the forward and backward propagation under reasonable assumptions.
We further explain the non-Gaussianity phenomenon from the perspective of the Berry-Esseen theorem.

\subsection{Preliminaries}
The optimization problems for neural networks usually involve minimizing an empirical loss evaluated on training data $\left\{x_{s}, y_s\right\}_{s=1}^{S}$ over the network parameter $\theta \in \mathbb{R}^{d}$.
To be more specific, we look for an optimal $\theta^\star$ such that
\begin{align}
 \theta^{\star}=\underset{\theta \in \mathbb{R}^{d}}{\arg \min }\left\{\mathscr{L}(\theta) \triangleq \frac{1}{|\mathscr{S}|} \sum_{x \in \mathscr{S}} \ell(x;\theta)\right\},
\end{align}

where $\mathscr{L}: \mathbb{R}^{d} \rightarrow \mathbb{R}$ denotes the empirical loss function in $\theta$ and each $\ell(x;\theta)$ denotes the loss over one sample. 
Stochastic gradient descent (SGD) is one of the most popular approaches for attacking this problem in practice. 
It performs the parameter update $\theta_{t+1}=\theta_{t}-\alpha \nabla\mathscr{L}(\theta_t)$, using the stochastic gradient $\nabla\mathscr{L}(\theta)$ estimated from the mini-batch $\mathscr{S} = \{x_s\}$ and the learning rate $\alpha$, where
\begin{align}
\nabla\mathscr{L}(\theta) = \frac[1pt]{1}{|\mathscr{S}|} \sum_{x \in \mathscr{S}}\frac{\partial\ell(x;\theta)}{\partial{\theta}}.
\end{align}

The stochastic gradient noise, due to the fact that the gradients can be evaluated on different mini-batches, can be interpreted from additive and multiplicative viewpoints~\cite{wu2019multiplicative}. 
In this paper, we adopt the additive formulation and use $\epsilon$ to represent the additive noise, 
\begin{align}
\epsilon \triangleq \nabla\mathscr{L}(\theta) - \nabla\mathbb{E}\ell(x;\theta).
\end{align}

While $\theta$ assembles all the parameters of the feed-forward neural network, we use $W^{(k)}$ and $b^{(k)}$ to denote the weight matrix and bias vector of the $k$-th layer. The feed-forward procedure is
$u^{(k)} = W^{(k)}a^{(k-1)} + b^{(k)}$, where $a^{(k-1)}$ denotes the input of the $k$-th layer (the post-activation of the ($k$-1)-th layer), $a^{(k)}=f_k\big(u^{(k)}\big)$ denotes its post-activation, $u^{(k)}$ denotes the pre-activation of the $k$-th layer and $f_k$ denotes the activation function applied entrywise to the pre-activation. 
The updating rule for an entry $W_{ij}^{(k)}$ (at the $i$-th row and the $j$-th column) of the weight matrix $W^{(k)}$ is 
\begin{align}
 W_{ij}^{(k)} = W_{ij}^{(k)} - \alpha \frac{\partial\mathscr{L}}{\partial W_{ij}^{(k)}}.
\end{align}


\subsection{Central Limit Theorem}
The distinction between the requirements of the CLT and the generalized CLT is whether the variances of the summed variables are finite.
In this section, we analyze the variance of the gradient noise during neural network training and thus show that the CLT still holds under mild assumptions. First, we use mathematical induction to show that the variances of the pre-/post-activations do not become infinite through forward propagation. Next, we analytically show that the partial derivative of the loss function w.r.t.
the output of the neural network and the derivatives of loss function w.r.t. weights will not admit infinite variance.\n\nNote that different steps in the proof have different minimum requirements on the data distribution for the conclusions to hold true.\nThe strongest assumption on the data in our discussion is that the data distribution has finite fourth order moments.\nWhile this is automatically true for empirical distributions, which have been adopted in the literature ~\\cite{jacot2018neural}, it is also true for discrete or continuous distributions with bounded support, e.g., image data with range $[0, 255]$. Preceding the full proof, we show the following lemma for Lipschitz functions which will be used for the variance analysis of activation functions.\\\\\n\n\\noindent\\textbf{Zero mean for SGD noise}\\\\\nDeterministic, full-batch gradient:\n\\begin{align}\n\\phi(\\theta) = \\nabla\\mathbb{E}\\ell(x;\\theta) = \\int_x \\frac{\\partial\\ell(x;\\theta)}{\\partial{\\theta}} \\diff{\\mathscr{D}(x)}.\n\\end{align}\nStochastic, mini-batch gradient:\n\\begin{align}\n\\nabla\\mathscr{L}(\\theta) = \\frac[1pt]{1}{|\\mathscr{S}|} \\sum_{x \\in \\mathscr{S}}\\frac{\\partial\\ell(x;\\theta)}{\\partial{\\theta}} \\approx \\int_x \\frac{\\partial\\ell(x;\\theta)}{\\partial{\\theta}} \\diff{\\mathscr{D}(x)},\n\\end{align}\n\\begin{align}\n\\mathbb{E}[\\nabla\\mathscr{L}(\\theta)] &=\n\\mathbb{E}\\bigg[\\frac[1pt]{1}{|\\mathscr{S}|} \\sum_{x \\in \\mathscr{S}} \\frac{\\partial\\ell(x;\\theta)}{\\partial{\\theta}}\\bigg] \\nonumber\\\\\n&=\\frac[1pt]{1}{|\\mathscr{S}|} \\sum_{x \\in \\mathscr{S}} \\nabla\\mathbb{E}\\ell(x;\\theta) = \\phi(\\theta).\n\\end{align}\n\nSo we can obtain the expectation of stochastic gradient noise\n$\\mathbb{E}[\\epsilon] = \\mathbb{E}[\\nabla\\mathscr{L} - \\mathbb{E}[\\nabla\\mathscr{L}(\\theta)]] = \\mathbb{E}[\\nabla\\mathscr{L} - \\phi] = 0$.\\\\\n\n\n\\noindent\\textbf{Finite variance for SGD noise} \n\\begin{lemma}\\label{lm:lip}\nAssume that a random vector $x\\in\\mathbb{R}^d$ has finite variance on each entry. $f:\\mathbb{R}\\rightarrow\\mathbb{R}$ is a Lipschitz function with some constant $c>0$ and entry-wisely applied on $x$.\nThen the random vector $f(x)\\in\\mathbb{R}^d$ has finite variance on each entry.\n\\end{lemma}\nThe feed forward process could be regarded as recurrent applications of activations and linear transformations.\nWith the finite fourth moments assumption, we know the second order moments of the input data $a^{(0)} :=x$ are finite.\nIf the input to the $k$-th layer, i.e., $a^{(k-1)}$, has finite variance, we show that $a^{(k)}$ has finite variance.\nDenote the mean and co-variance of $a^{(k-1)}$ respectively by $\\bar{a}^{(k-1)}$ and $\\Sigma^{(k-1)}$.\nRecall $u^{(k)}=W^{(k)}a^{(k-1)}+b^{(k)}$, the mean and covariance of $u^{(k)}$ would be\n\\begin{align}\n\\mathbb{E}[u^{(k)}] &= W^{(k)}\\bar{a}^{(k-1)}+b^{(k)},\\\\ \\mathbf{cov}[u^{(k)}] &= W^{(k)}\\Sigma (W^{(k)})^\\top\n\\end{align}\nClearly, the variance of $u^{(k)}$, as part of the covariance $\\mathbf{cov}[u^{(k)}]$, is finite.\nWe apply non-linearity $f_k(\\cdot)$ to $u^{(k)}$ element-wisely and get the activation $a^{(k)}$, i.e. $a^{(k)}=f_k(u^{(k)})$.\nFor commonly adopted activations, i.e., Sigmoid, ReLU, LeakyRelu, Tanh, SoftPlus, etc, they are all Lipschitz functions.\nWith Lemma \\ref{lm:lip}, the variance of $a^{(k)}$ is also finite.\nApplying this logic recursively, we would know for the whole neural network, variances of all pre-\/post-activations are finite. 
The variance of the final output $a^{(K)}$ is finite as well.\n\nTo use induction for back propagation, we show the variance of the derivative of loss function w.r.t. $a^{(K)}$ is finite.\nHere we investigate two variants of loss functions, i.e., mean square error loss function for regression problems and cross-entropy loss function for classification problems.\nThe mean square error loss is defined as $\\mathscr{L}_{MSE} = ||y-a^{(K)}||^2$.\nSince $\\partial \\mathscr{L}_{MSE}\/\\partial a^{(k)}=-2(y-a^{(k)})$, it is obvious that it has finite variance with our assumption on data and above induction.\nTo complete the discussion, we show the results for cross-entropy loss in the following lemma.\n\\begin{lemma}\nFor cross entropy loss function $\\mathscr{L}_{CE} = -\\sum_{j=1}^M y_j\\log\\big[\\exp{a^{(K)}_j}\/\\sum_{k}\\exp{a^{(K)}_k}\\big]$, the gradient w.r.t. the network output $a^{(K)}$, i.e., $\\partial \\mathscr{L}_{CE}\/\\partial a^{(K)}$, as a function on random variables $a^{(K)}$ and label $y$, has finite variance.\n\\end{lemma}\nThe derivatives of the loss function w.r.t. each entry of the weight matrix admit the following forms by the chain rule,\n\\begin{align}\\label{eq:grad_w}\n\\frac[1pt]{\\partial\\mathscr{L}}{\\partial{W^{(k)}_{ij}}} = \\frac[1pt]{\\partial\\mathscr{L}}{\\partial{u^{(k)}_{i}}}\\frac[1pt]{\\partial{u^{(k)}_{i}}}{\\partial{W^{(k)}_{ij}}}.\n\\end{align}\nNote that the partial gradient $\\partial\\mathscr{L}\/\\partial W_{ij}^{(k)}$ consists of both global information, i.e., $\\partial\\mathscr{L}\/\\partial u^{(k)}_i$, and local information, i.e., $\\partial u^{(k)}_i\/\\partial W_{ij}^{(k)}$. For the global information, there holds the following recurrent relationship\n\\begin{align}\\label{eq:grad_recurrent}\n\\frac[1pt]{\\partial\\mathscr{L}}{\\partial{u^{(k-1)}}} = \\bigg( \\big[W^{(k)}\\big]^\\top \\frac[1pt]{\\partial\\mathscr{L}}{\\partial{u^{(k)}}} \\bigg) \\odot f'_{k-1}\\big(u^{(k-1)}\\big),\n\\end{align}\nwhere $\\odot$ denotes the entrywise Hadamard product. Since $\\partial\\mathscr{L}\/\\partial a^{(K)}$ has finite variance and Lipschitz continuous activation functions have bounded (sub-)gradients, $\\partial\\mathscr{L}\/\\partial u^{(K)}$ also has finite variance.\nFrom Equation~\\ref{eq:grad_recurrent} we know $\\partial\\mathscr{L}\/\\partial u^{(k-1)}$ has finite variance.\nBy mathematical induction we conclude that $\\partial\\mathscr{L}\/\\partial u^{(k)}$ has finite variance for all $k$. For the local information, \n\\begin{align}\n\\label{eq:u_tau}\n\\frac[1pt]{\\partial u_{\\tau}^{(k)}}{\\partial W^{(k)}_{ij}}=\n\\begin{cases}\n\\,a_{j}^{(k-1)}& \\text{$\\tau = i$}\\\\\n\\,0& \\text{$\\tau \\neq i$}\n\\end{cases}\n\\end{align}\n$\\partial u^{(k)}\/\\partial W_{ij}^{(k)}$ is a vector with the $i$-th entry being $a_j^{(k-1)}$ and other entries being $0$'s. \nWith the finiteness assumption on fourth order moments, we can employ Cauchy\u2013Schwarz inequality to bound the variance of $\\partial\\mathscr{L}\/\\partial W^{(k)}_{ij}$, c.f. the supplemental material. \nSince gradient noise is the centered version of the partial gradient $\\partial\\mathscr{L}\/\\partial W^{(k)}_{ij}$\n, it will also have finite variance.\n\n\\subsection{CLT convergence rate}\n\nIn the previous subsection, we showed that the variance of gradient noise is finite under reasonable assumptions on the data and support that CLT will hold.\nThen the theorem about convergence rate of CLT, i.e. 
Berry\u2013Esseen theorem \\cite{feller:1971,chung2001course} applies.\n\\begin{theorem}[Berry\u2013Esseen theorem]\nThere exists a universal constant $A_0$, s.t. for any i.i.d. random variables $\\{\\chi_s\\}_{s=1}^S$ with $\\mathbb{E}[\\chi_s]=0$, $\\mathbf{var}[\\chi_s]=\\sigma^2$ and $\\beta := \\mathbb{E}[|\\chi_s|^3]<\\infty$, the following inequality holds\n\\begin{align}\\label{eq:betheorem}\n \\sup_\\chi |F_{|\\mathscr{S}|}(\\chi)-\\Phi(\\chi)| \\le \\frac{\\beta}{\\sigma^3}\\frac{A_0}{\\sqrt{|\\mathscr{S}|}},\n\\end{align}\nwhere $\\Phi$ is the c.d.f. of standard Gaussian distribution and $F_{|\\mathscr{S}|}$ is the c.d.f. of random variable $\\sum_{s=1}^S\\chi_s\/\\sqrt{|\\mathscr{S}|}\\sigma$.\n\\end{theorem}\n\n\n\n\n\n\nMore specifically, assume that neural network is a deterministic function, $x$ is an input datum, $\\ell(x;\\theta)$ is a loss function per datum, $u^{(k)}(x)$ is a pre-activation at $k$-th layer. By Equation~\\ref{eq:grad_recurrent}, we can have\n\\begin{align}\n\\frac[1pt]{\\partial\\mathscr{L}}{\\partial{u^{(k)}}}(x) = \\bigg( \\big[W_{ij}^{(k+1)}\\big]^\\top \\frac[1pt]{\\partial\\mathscr{L}}{\\partial{u^{(k+1)}}}(x) \\bigg) \\odot f'_{k}\\big(u^{(k)}(x)\\big),\n\\end{align}\nwhich is a function of $x$. The mini-batch gradient ($x_s \\in \\mathscr{S}$)\n\\begin{align}\n\\nabla\\mathscr{L}(\\theta) = \\frac[1pt]{1}{|\\mathscr{S}|} \\sum_{s}g_s = \\frac[1pt]{1}{|\\mathscr{S}|} \\sum_{s}\\frac[1pt]{\\partial\\ell(x_s)}{\\partial{W_{ij}^{(k)}}}.\n\\end{align}\nRecall Equation~\\eqref{eq:grad_w} \\& \\eqref{eq:u_tau}, we have \n\\begin{align}\ng_s = \\frac[1pt]{\\partial\\ell(x_s)}{\\partial{W_{ij}^{(k)}}} = \\sum_{\\tau}\\frac[1pt]{\\partial\\ell(x_s)}{\\partial{u^{(k)}_{\\tau}}}\\frac[1pt]{\\partial{u^{(k)}_{\\tau}}}{\\partial{W^{(k)}_{ij}}} = \\frac[1pt]{\\partial\\ell(x_s)}{\\partial{u^{(k)}_{i}}}a_{j}^{(k-1)}\n\\end{align}\nFor each data point, the gradient noise $\\triangle g = g - \\mathbb{E}[g]$ is zero mean. From Inequality Equation~\\eqref{eq:betheorem}, we know that the convergence of CLT is bounded by three quantities. Since $A_0$ is the universal constant and $|\\mathscr{S}|$ is the batch size, we need to estimate the standardized third absolute moment $\\beta\/\\sigma^3$. We define\n\\begin{align}\n\\beta &= \\mathbb{E}[|\\triangle g|^3],\\\\\n\\sigma^2= \\mathbf{var}[\\triangle g] &= \\mathbf{var}[g] = \\mathbf{var}\\bigg[\\frac[1pt]{\\partial\\ell(x_s)}{\\partial{u^{(k)}_{i}}}a_{j}^{(k-1)}\\bigg].\n\\end{align}\nSince the third absolute moment is hard to compute, with $\\sigma^2= \\mathbb{E}[(\\triangle g)^2]$, we provide a ready upper bound using Cauchy-Schwarz Inequality,\n\\begin{align}\n\\frac[1pt]{\\beta}{\\sigma^3} &= \n\\frac[1pt]{\\mathbb{E}[|\\triangle g(\\triangle g)^2|]}{\\sigma^3}\\leq\n\\frac[1pt]{\\sqrt{\\mathbb{E}[(\\triangle g)^2]}\\sqrt{\\mathbb{E}[(\\triangle g)^4]}}{\\sigma^3}\\nonumber\\\\&=\n\\frac[1pt]{\\sigma \\sqrt{\\mathbb{E}[(\\triangle g)^4]}}{\\sigma^3}=\n\\sqrt{\\mathbb{E}\\bigg[\\bigg(\\frac[1pt]{\\triangle g}{\\sigma}\\bigg)^4\\bigg]}\n\\end{align}\nWe denote standard score $z = \\triangle g\/ \\sigma$ which is a random variable with zero mean and unit variance. 
By the interpretation of kurtosis from~\cite{moors1986meaning},
\begin{align}
\mathbb{E}\left[z^{4}\right]&=\mathbf{var}\left[z^{2}\right]+\left[\mathbb{E}\left[z^{2}\right]\right]^{2}\nonumber\\
&=\mathbf{var}\left[z^{2}\right]+[\mathbf{var}[z]]^{2}=\mathbf{var}\left[z^{2}\right]+1
\end{align}
Finally we have the upper bound
\begin{align}
\frac[1pt]{\beta}{\sigma^3} \leq \sqrt{\mathbf{var}\left[z^{2}\right]+1}
\end{align}

Standing on this ground, we assert that the non-Gaussianity of gradient noise observed in some neural network training is due to the fact that the asymptotic regime has not been reached with those settings.
In the experimental section, we also support our statements with numerical experiments.

\section{Dynamics of SGD}

Given that the gradient noise is asymptotically Gaussian, the dynamics of gradient descent resembles the motion of a particle within a potential field. The topic can be related to the classical Kramers Escape Problem~\cite{kramers1940brownian}.
In the procedure of gradient descent optimizers, the parameter vector $\theta \in \mathbb{R}^d$ is updated according to the gradient of the loss, which resembles the evolution of the position of a particle within an external position-dependent potential $\phi(\theta) = \mathbb{E}\ell(\theta)$. With the gradient being noisy, the latter is typically described by the Langevin equation, and the stochastic gradient noise plays the role of the random kicks acting on the particle
\begin{align}
\label{eq:langevin}
m\frac[1pt]{\diff^2{\theta}}{\diff{t}^2} + \gamma\frac[1pt]{\diff{\theta}}{\diff{t}} = -\nabla\phi(\theta) + \eta(\theta, t)
\end{align}
where $\phi(\theta) = \mathbb{E}\ell(\theta)$ is the underlying potential, whose gradient is estimated by the average of gradients evaluated within a mini-batch, and $\eta(\theta, t) \in \mathbb{R}^d$ is a $\theta$-dependent $\delta$-correlated random fluctuation, $\mathbb{E}[\eta(\theta, t)\eta(\theta, t')^\top] = \mathsf{D}(\theta)\delta(t - t')$, with $\mathsf{D}(\theta)$ denoting the corresponding $\theta$-dependent diffusion matrix. Discretization w.r.t. time translates the random kicks into
\begin{align}
\eta(\theta, t) \to \epsilon_n(\theta) \sim \mathscr{N}\big(0, \mathsf{C}(\theta)\big),
\end{align}
where we can express the covariance of the SGN $\epsilon$ as $\mathsf{C}(\theta) = \mathsf{D}(\theta)/\Delta{t}$. The discretization of the Langevin equation \eqref{eq:langevin} w.r.t.
time reads\n{\\small\n\\begin{align}\n\\label{eq:discret_langevin}\nm\\frac{\\Delta^2\\theta_{n}}{\\Delta{t}^2} + \\gamma\\frac{\\Delta\\theta_{n}}{\\Delta{t}} = -\\nabla\\mathbb{E}\\ell(\\theta) + \\epsilon_{n-1}(\\theta) = -\\nabla\\mathscr{L}(\\theta_{n-1}).\n\\end{align}\n}\nLet us define $v_n = \\Delta\\theta_n = \\theta_n - \\theta_{n-1} = \\theta(n\\Delta{t}) - \\theta((n-1)\\Delta{t})$, substituting $v_n$ into Equation~\\eqref{eq:discret_langevin} we can obtain\n\\begin{align}\nm\\frac{v_n - v_{n-1}}{\\Delta{t}^2} + \\gamma\\frac[1pt]{v_n}{\\Delta{t}} = -\\nabla\\mathscr{L}(\\theta_{n-1})\n\\end{align}\nBy re-arranging terms, \n\\begin{align}\nv_n = \\underset{\\rho}{\\underbrace{\\frac{1}{1 + \\gamma\\Delta{t} \/ m}}} v_{n-1} - \\underset{\\alpha}{\\underbrace{\\frac{\\Delta{t}}{\\gamma + m\/\\Delta{t}}}} \\nabla\\mathscr{L}(\\theta_{n-1}).\n\\end{align}\nDefining momentum term as $\\rho$ and learning rate as $\\alpha$,\nwe finally have\n\\begin{align}\n\\label{eq:sgd}\n\\begin{cases}\n\\, v_n = \\rho v_{n-1} - \\alpha \\nabla\\mathscr{L}(\\theta_{n-1})\\\\\n\\, \\theta_n = \\theta_{n-1} + v_n\n\\end{cases}\n\\end{align}\nwhich resembles the formulation of the classical momentum method~\\cite{sutskever2013importance} or method of ``heavy-sphere''~\\cite{polyak1964some}. For high-friction limit, i.e. $\\gamma \\gg m\/\\Delta{t}$, $\\rho \\to 0^+$, $\\alpha \\to \\Delta{t} \/ \\gamma$, Equation~\\eqref{eq:sgd} can be simplified as the update rule for vanilla SGD\n\\begin{align}\n\\theta_n = \\theta_{n-1} - \\alpha \\nabla\\mathscr{L}(\\theta_{n-1}).\n\\end{align}\nThis can be obtained alternatively by leveraging the \\emph{overdamped} Langevin equation, where we omit the second derivative in Equation~\\eqref{eq:langevin} as in high-friction limit the inertial force is negligible comparing to the friction.\n\n\nNow we focus on the Langevin equation which is the analogue of momentum method in continuous time. Associated with the Langevin equation~\\eqref{eq:langevin}, the equation describing the evolution of the distribution $\\pi(\\theta, \\varv, t)$ is the well-known \\emph{Kramers equation} with coordinates $\\theta$ and velocity $\\varv = \\diff{\\theta} \/ \\diff{t} \\in \\mathbb{R}^d$. We write the equation in the form of \\emph{continuity equation} \\cite{risken1996fokker,van1992stochastic} as follows, \n\\begin{align}\n\\label{eq:kramers}\n\\frac[1pt]{\\partial{\\pi}}{\\partial{t}} -\\,\\mathbf{div}\\,\\mathcal{J} = 0\n\\end{align}\nwhere we suppose the unit mass $m = 1$ and unity temperature, $\\mathcal{J}$ is the probability current\n\\begin{align}\n\\label{eq:j}\n\\mathcal{J} =\n\\begin{Bmatrix}\n\\begin{bmatrix} \n-\\varv\\\\[\\jot]\n\\gamma\\varv + \\nabla\\phi(\\theta)\n\\end{bmatrix} + \\frac[1pt]{1}{2}\n\\begin{bmatrix}\n0\\\\\n\\mathsf{D}(\\theta) \\cdot \\nabla_{\\varv}\n\\end{bmatrix}\n\\end{Bmatrix}\\pi(\\theta, \\varv, t)\n\\end{align}\nHowever, one may not always find such a solution for the general Fokker-Planck equation~\\cite{van1992stochastic}. For the steady-state distribution to exists, some conditions have to be imposed on the drift vector as well as the diffusion matrix; it can be shown that under mild conditions the existence of the steady-state distribution for \\eqref{eq:kramers} can be found~\\cite{soize1994fokker}.\n\n\\begin{proposition}\n\\label{p:detailed_balance}\nThe dynamics \\eqref{eq:kramers} in general does not satisfy detailed balance. 
The steady-state distribution associated with \eqref{eq:kramers}, if it exists, deviates from the classical Gibbs distribution in thermal equilibrium.
\end{proposition}

The deviation has been discovered recently in training neural networks~\cite{kunin2021rethinking}; similar phenomena are observed in physics, often referred to as \emph{broken detailed balance}~\cite{battle2016broken,gnesotto2018broken}.

\begin{assumption}
\label{asm:c2}
The potential $\phi \in \mathcal{C}^2$ is second-order differentiable in the vicinity of the minimum $\theta_*$.
\end{assumption}
\begin{align}
\label{eq:hessian}
\mathsf{C}(\theta) = \mathbf{cov}[\nabla\mathscr{L}(\theta)] &= \nabla^2\phi(\theta) - \nabla\phi(\theta)\nabla^\top\phi(\theta) \notag\\
& \approx \nabla^2\phi(\theta_*) = \mathsf{H}(\theta_*) = \mathsf{H}_*
\end{align}
The first approximation is due to the fact that the gradient noise variance dominates the gradient mean near critical points~\cite{zhu2019anisotropic,xie2020diffusion}. When evaluated at the true parameter $\theta_*$, there is an exact equivalence between the Hessian $\mathsf{H}$ and the Fisher information matrix $\mathsf{F}=\nabla^2\phi(\theta_*)$ at $\theta_*$, referring to Chapter 8 of~\cite{pawitan2001all}.
According to~\cite{zhang2019algorithmic,xie2020diffusion}, we can approximate the potential in the vicinity of $\theta_*$ by its second-order Taylor series as
\begin{align}
\phi(\theta) \approx \phi(\theta_*) &+ \nabla^\top\phi(\theta_*) (\theta - \theta_*) \nonumber\\&+ \frac[1pt]{1}{2} (\theta - \theta_*)^\top \mathsf{H}_* (\theta - \theta_*)
\end{align}

Note that the diffusion matrix $\mathsf{D}(\theta) = \Delta{t}\mathsf{C}(\theta)$ is essentially small~\cite{xie2020diffusion,meng2020dynamic}. With $\Delta{t} \ll 1$, one may approximate the diffusion by a constant matrix with no dependence on $\theta$ \cite{luo2017thermostat,luo2020replica}.

\begin{proposition}
\label{p:ss}
The steady-state distribution can be approximated when $\theta$ is in the vicinity of the local minimum $\theta_*$, given that Assumption \ref{asm:c2} applies (Proof in Appendix),
{\small
\begin{align}
\label{eq:steady-state}
\pi^{\mathrm{ss}} \propto \exp\Big[ -\frac[1pt]{\gamma}{\Delta{t}} (\theta - \theta_*)^\top(\theta - \theta_*) \Big] \exp\Big[ -\frac[1pt]{\gamma}{\Delta{t}}\varv^\top \mathsf{H}_*^{-1}\varv \Big].
\end{align}
}
\end{proposition}
In the scenario where we assume a constant covariance in the small-noise regime, the marginal distribution w.r.t. $\theta$ has the form of a Boltzmann distribution with an effective potential. If $\Delta{t} \ll 1$, Equation~\eqref{eq:steady-state} indicates that the probability peaks at the minima~\cite{hanggi1990reaction,matkowsky1977exit}. With the covariance as a variable, the distribution exists but may not have a closed form; we believe, however, that Proposition~\ref{p:ss} still provides a good approximation.

\section{Experimental Results}

In this experiment, we aim to verify the Gaussianity of stochastic gradient noise and to investigate which factors affect the Gaussian behavior. We check the Gaussianity of stochastic gradient noise on different neural network architectures and conduct experiments on different models, namely AlexNet~\cite{krizhevsky2012imagenet} and Residual Neural Network (ResNet)~\cite{he2016deep}, for classification problems.
During the training, we test the Gaussianity of gradient noise for each layer. The data used for both networks is the CIFAR-10\\footnote{https:\/\/www.cs.toronto.edu\/~kriz\/cifar.html} dataset, which consists of 60,000 colour images ($32 \\times 32$ pixels) in 10 classes. There are 50,000 training images and 10,000 test images, with 6,000 images per class. \n\nIn statistics, Gaussianity testing is a well-studied topic. The Shapiro-Wilk $W$-test~\\cite{shapiro1965analysis} was found to be the most powerful test in a comparison study~\\cite{razali2011power}. It assumes the null hypothesis that the given set of examples comes from a Gaussian distribution. A small p-value provides strong evidence to reject the null hypothesis. The Shapiro-Wilk test was later extended to handle samples of size up to $2000$~\\cite{royston1982extension} (from the original limit of 50). We use a revised algorithm called \\textbf{AS R94} for fast calculation of the Shapiro-Wilk test \\cite{royston1995remark} in our experiment. More specifically, in each layer we sample points from all the batches of stochastic gradient noise and test whether they follow a Gaussian distribution; we then calculate the percentage of Gaussian weight dimensions to verify the Gaussianity assumption of gradient noise. During the experiment, we find some extreme cases where the gradient noise samples in one batch follow point-mass behavior due to the ReLU activation function. We refer to these distributions as the Dirac delta function~\\cite{dirac1981principles}, which is the limit (in the sense of distributions) of a sequence of zero-centered Gaussian distributions. In the experimental results, the Shapiro-Wilk test also recognizes point-mass data as Gaussian behavior.\nIt is worth noting that the image data have bounded support, and thus the gradient noise will not have infinite tails. As a result, it is not necessary to measure the tail index to examine whether the components rejected by the Gaussianity test have infinite variance -- the variances are finite by nature. See the supplementary materials for more discussion of the tail index.\n\n\\subsection{Gaussian behavior of AlexNet}\n\\label{Alexnet}\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[Conv-layers in AlexNet]{\n\\label{fig:epoch_alexnet_1}\n\\includegraphics[width=0.475\\columnwidth,height=3.3cm]{Plots\/gaussian_epoch1_300.png}}\n\\subfigure[FC-layers in AlexNet]{\n\\label{fig:epoch_alexnet_2}\n\\includegraphics[width=0.475\\columnwidth,height=3.3cm]{Plots\/gaussian_epoch2_300.png}}\n\\vskip -5pt%\n\n\\subfigure[Conv-layers in AlexNet-BN]{\n\\label{fig:epoch_alexnet_bn_1}\n\\includegraphics[width=0.475\\columnwidth,height=3.3cm]{Plots\/gaussian_epoch_bn1.png}}\n\\subfigure[FC-layers in AlexNet-BN]{\n\\label{fig:epoch_alexnet_bn_2}\n\\includegraphics[width=0.475\\columnwidth,height=3.3cm]{Plots\/gaussian_epoch_bn2.png}}\n\\vskip -5pt%\n\\caption{Gaussianity of each layer during training epochs.}\n\\vskip -10pt%\n\\label{fig:epoch_alexnet}\n\\end{figure}\n\nAlexNet~\\cite{krizhevsky2012imagenet} is one of the most successful convolutional neural architectures. In this experiment, we investigate the behavior of its stochastic gradients during training. Since the original AlexNet is tailored for ImageNet classification, modifications are necessary before applying it to datasets like Cifar-10 \\cite{DBLP:conf\/iclr\/ZhangBHRV17}. We adopt some modifications to the architecture such that our network works with images of a different format. 
The modified AlexNet for Cifar-10 contains 8 learnable layers: the first 5 layers are 2$d$ convolutional layers (Conv-layers), which correspond to the feature extractor, while the other 3 layers are fully-connected layers (FC-layers), comprising the classifier for prediction. We remove the \\emph{local response} normalization in the feature extractor, the \\emph{dropout} operations in the classifier, and the average-pooling layer between these two modules. We use non-overlapped max-pooling instead of the overlapped counterpart. The first convolutional layer is rescaled for adaptation to the images of Cifar-10: kernel $5\\times 5$, stride $2$, padding $2$. After this refinement of AlexNet for Cifar-10, we further compose a variant with batch normalization (BN), where we insert a batch-norm layer before each ReLU activation, after the signal is transformed by either a convolutional or a fully-connected layer. \n\nIn order to understand how the Gaussian behavior of gradient noise changes over the training epochs, we test the classic AlexNet and AlexNet with batch-norm layers, and show the Gaussianity results of the convolutional layers in Figures~\\ref{fig:epoch_alexnet_1} \\& \\ref{fig:epoch_alexnet_bn_1} and those of the fully-connected layers in Figures~\\ref{fig:epoch_alexnet_2} \\& \\ref{fig:epoch_alexnet_bn_2}. During the total of 300 training epochs, we set the learning rate to 0.1 before epoch 150, to 0.01 between epochs 150 and 225, and to 0.001 from epoch 225 to epoch 300. The mini-batch size is $n=128$, the momentum is 0.9 and the weight decay is $5\\times10^{-4}$. From Figure~\\ref{fig:epoch_alexnet}, we can see that in the classic AlexNet the Gaussianity of the 3-$rd$ and 4-$th$ convolutional layers has a decreasing trend during training, whereas batch normalization helps each convolutional layer retain strong Gaussian behavior.\n\n\nTo explore further properties of Gaussianity in the classic AlexNet, we conduct two experimental cases with two different batch numbers: 200 batches and 600 batches. In each case, we test several different batch sizes $n$ to see how the batch size affects the Gaussianity of gradient noise. Figures~\\ref{fig:200alexnet} and \\ref{fig:600alexnet} show the same trend of Gaussianity of stochastic gradient noise: the percentage of Gaussianity increases steadily with larger batch sizes. More details are given in the Appendix (Tables~\\ref{alexnet_table_200} and \\ref{alexnet_table_600}). We show the five convolutional layers' feature weight sizes and the final layer's classifier weight size in the last column of the table. Each experiment is repeated five times to obtain more accurate results, which is also reflected by the very small standard deviation values in the table. \n\n\nBased on the experimental results, we find that there is a huge rise on the 4-$th$ convolutional layer: its Gaussianity goes from around zero percent with batch size $n=64$ up to $56.29\\%$ (on 600 batches) and $73.73\\%$ (on 200 batches) with batch size $n=2048$. This indicates that the characteristics of gradient noise can change from non-Gaussian to Gaussian behavior if a sufficiently large batch size is used. The gradient noise of the first convolutional layer shows the best Gaussian behavior across all the batch size settings. When the batch size is $n=2048$, the Gaussianity of the whole neural network is above $50\\%$ (on 600 batches) and $60\\%$ (on 200 batches). These Gaussian behavior results satisfy our expectations and are consistent with the initial Gaussianity assumption. 
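\n\nThe Gaussianity percentages reported above (and throughout this section) are obtained with the layer-wise Shapiro-Wilk procedure described at the beginning of this section. As a purely illustrative aid, the following minimal Python sketch shows one way to compute this percentage for a single layer; it assumes SciPy's implementation of the Shapiro-Wilk test and takes as input a matrix of gradient-noise samples with one column per weight dimension (the collection of these samples during training is omitted, and all names are placeholders).\n\\begin{verbatim}\n# Minimal, illustrative sketch of the layer-wise Gaussianity test.\n# noise: numpy array of shape (num_samples, num_weights) holding\n#        gradient-noise samples of one layer.\nimport numpy as np\nfrom scipy.stats import shapiro\n\ndef layer_gaussianity(noise, alpha=0.05):\n    flags = []\n    for j in range(noise.shape[1]):\n        column = noise[:, j]\n        if np.allclose(column, column[0]):\n            # point-mass (Dirac delta) case, counted as Gaussian\n            flags.append(True)\n            continue\n        _, p_value = shapiro(column)\n        # failing to reject the null hypothesis counts as Gaussian\n        flags.append(p_value > alpha)\n    return 100.0 * np.mean(flags)   # percentage of Gaussian dimensions\n\\end{verbatim}\n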
\n\n\nFrom Figure~\\ref{fig:200alexnet} we know that the 1-$st$ layer's Gaussianity percentage of AlexNet (mini-batch 128 and 200 batches) is 94.12\\%. The Shapiro-Wilk test thus tells us that about six percent of the gradient noise does not exhibit Gaussian behavior. Now we utilize tail index estimation~\\footnote{https:\/\/github.com\/josemiotto\/pylevy} and fit the remaining non-Gaussian gradient noise to a Levy alpha-stable distribution. In the Appendix, we show the tail index performance in Figure~\\ref{fig:tail_index_total} and an example of such gradient noise in Figure~\\ref{fig:tail_index_non}. Figure~\\ref{fig:tail_index_first} shows an example of the gradient noise of the 4-$th$ layer in AlexNet compared across different mini-batch settings.\n\n\\begin{figure}[!]\n\\centering\n\\subfigure[AlexNet (200 batches)]{\n\\label{fig:200alexnet}\n\\includegraphics[width=0.475\\columnwidth,height=3.1cm]{Plots\/200alexnetplot.png}}\n\\subfigure[AlexNet (600 batches)]{\n\\label{fig:600alexnet}\n\\includegraphics[width=0.475\\columnwidth,height=3.1cm]{Plots\/600alexnetplot.png}}\n\\vskip -5pt%\n\\caption{Gaussianity of each layer in AlexNet.}\n\\vskip -10pt%\n\\end{figure}\n\n\\subsection{Gaussian behavior of ResNet}\n\\label{Resnet}\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[ResNet (200 batches)]{\n\\label{fig:200resnet}\n\\includegraphics[width=0.475\\columnwidth,height=3.1cm]{Plots\/200resnetplot.png}}\n\\subfigure[ResNet (600 batches)]{\n\\label{fig:600resnet}\n\\includegraphics[width=0.475\\columnwidth,height=3.1cm]{Plots\/600resnetplot.png}}\n\\vskip -5pt%\n\\caption{Gaussianity of each layer in ResNet-56.}\n\\vskip -10pt%\n\\end{figure}\n\nA Residual Neural Network (ResNet)~\\cite{he2016deep} is also a widely-used architecture; its structure utilizes skip connections, or shortcuts, to jump over some layers. For the implementation, we choose the official ResNet-20 and ResNet-56 implementations from Pytorch~\\footnote{https:\/\/github.com\/pytorch\/vision}. The structure of ResNet is more complicated: it contains a first convolutional layer, three block layers, and a final linear layer. Each block layer contains 6 convolutional layers with batch normalization for ResNet-20, or 18 convolutional layers with batch normalization for ResNet-56. \n\nWe train ResNet-20 on the Cifar-10 dataset for 300 epochs; the Gaussianity of each block layer's first and sixth convolutional layers over the training epochs is shown in the Appendix (Figure~\\ref{fig:epoch_resnet}). ResNet has a very steady Gaussianity of gradient noise; it does not change with the training epochs. Furthermore, we conduct two experimental cases with ResNet-56 for two different batch numbers: 200 batches and 600 batches. Figures~\\ref{fig:200resnet} and \\ref{fig:600resnet} show that ResNet-56 has a very high percentage of Gaussianity of gradient noise in each layer, which satisfies our Gaussianity assumption very well. Even if the batch size is as small as $64$, the layers still show strong Gaussian behavior with above 85 percent Gaussianity. Besides, we observe the same trend as for AlexNet: the percentage of Gaussianity becomes higher with larger batch sizes. The four light gray dashed lines separate the three block layers on the plots. We also show these three block layers in Tables~\\ref{resnet_table_200} and \\ref{resnet_table_600} in the Appendix. In each block, for layout reasons, we only show layer 1, layer 9, and layer 18 in the table. 
We show the first convolutional layer's feature weight dimension sizes, the block layers' convolutional feature weight dimension sizes, and the final layer's classifier weight dimension size in the last column of the table. We repeat each experiment five times; the results are quite accurate, with very small standard deviation errors. \n\n\\subsection{Analysis of Bound}\n\nAccording to Figures~\\ref{fig:200alexnet} and \\ref{fig:600alexnet}, we find that AlexNet's 4-$th$ and 5-$th$ convolutional layers show less Gaussian behavior in their stochastic gradient noise compared with the other layers. The rate of convergence may be the cause of this phenomenon. We have introduced the Berry-Esseen theorem, which states that the convergence rate of the CLT is bounded in terms of the standardized third absolute moment $\\beta\/\\sigma^3$ and the summation size (batch size) $|\\mathscr{S}|$. We keep the batch size $|\\mathscr{S}|$ constant, and numerically evaluate the standardized third absolute moments $\\beta\/\\sigma^3$ to measure the bound on the convergence of the CLT. We show in Appendix Figure~\\ref{fig:alexnet_distribution} the empirical distribution of the standardized third absolute moments of all weight dimensions within each layer of AlexNet. Compared with Figures~\\ref{fig:200alexnet} and \\ref{fig:600alexnet}, it is noticeable that the 4-$th$ and 5-$th$ convolutional layers have longer right tails and generally larger values of the standardized third absolute moment, which means their bound is much larger than that of the other layers, so that slow convergence to Gaussian occurs. \n\nLooking in more detail at the 4-$th$ convolutional layer from Figure~\\ref{fig:alexnet_distribution}, we also estimate, in Appendix Figure~\\ref{fig:linevalues4}, the quantile values of the third-absolute-moment distribution corresponding to each batch size (from $n=64$ to $n=2048$ in Figure~\\ref{fig:200alexnet}). For each batch size in this test, we calculate a quantile value w.r.t. the percentage of Gaussian dimensions. The results are shown as vertical spines in Appendix Figure~\\ref{fig:linevalues4}; the area on the left-hand side corresponds roughly to those dimensions that converge towards Gaussian, and the spines shift right as the batch size grows larger, indicating that more dimensions exhibit Gaussian behavior.\n\nFigure~\\ref{fig:600resnet} shows that the percentage of Gaussianity in ResNet-56 Block-3 drops from $94.44\\%$ to $91.58\\%$ when the batch size is $n=256$. We select convolutional layer 1 in Block-3 ($94.44\\%$), layer 2 ($94.29\\%$), layer 11 ($90.82\\%$), layer 12 ($90.93\\%$), layer 15 ($93.40\\%$) and layer 16 ($92.60\\%$) to show the empirical distribution of the standardized third absolute moments of the weights in Appendix Figure~\\ref{fig:resnet_distribution}; similar results to AlexNet are observed. A layer with a higher Gaussianity of gradient noise often has a smaller value of the bound. The CLT converges faster to Gaussian if the distribution of the standardized third absolute moments of the weights is concentrated close to zero. Based on the comparison of Figure~\\ref{fig:alexnet_distribution} and Figure~\\ref{fig:resnet_distribution} in the Appendix, we also notice that the overall distribution for ResNet is concentrated close to zero, covering only a very small range of around 0.01 to 0.02. This implies that ResNet has strong Gaussian behavior of its stochastic gradient noise. The overall distribution for AlexNet is more spread out, and the range reaches 0.2 in some cases. 
This explains why AlexNet has significant non-Gaussian behavior in some cases when the batch size is not large enough.\n\n\n\n\n\\section{Conclusion}\n\nIn this paper, we investigated the characteristics of gradient noise during the training of deep neural networks using stochastic optimization methods. We showed that the gradient noise of stochastic methods exhibits finite variance during training.\nThe noise is therefore asymptotically Gaussian, rather than following the recently proposed $\\alpha$-stable framework~\\cite{simsekli2019tail}. To validate our claim that the non-Gaussianity is caused by the non-convergence of the CLT, we conducted an empirical analysis of the percentage of Gaussian elements in each trainable layer and quantitatively established a connection between the Gaussianity and the standardized third absolute moment exploited in the Berry-Esseen theorem. Our experiments show that the rate of convergence is deeply connected with the architecture of the neural network: for each layer of AlexNet, the distributions of the third absolute moments calculated over all weight dimensions generally have a much wider spectrum, leading to much worse Gaussianity in certain layers. For ResNet, the layer-wise moment distributions have a narrower spectrum and smaller values, which is consistent with the observation that the Gaussianity percentages of the layers in ResNet are generally beyond $85\\%$. When the batch size increases, the percentage of Gaussianity improves for both AlexNet and ResNet. Based on the evidence of Gaussian noise, we analyzed the dynamics of SGD and unified the discretization scheme with respect to the general Langevin equation: vanilla SGD corresponds to the high-friction limit of the continuous-time dynamics. We then provided a simple local approximation of the distribution of the learning parameters when the dynamics enter the vicinity of a minimum. \n\n\n\n\\clearpage\n\n\n\n\\bigskip\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\subsection{Problem Setting}\n\\label{subsec:ps}\nGiven a sequence of triplet constraints $S=\\lbrace(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})\\rbrace_{t=1}^{T}$ that arrive sequentially, where $\\lbrace\\bm{x_t},\\bm{x_t^{+}},\\bm{x_t^{-}}\\rbrace \\in \\mathcal{R}^d$, and $\\bm{x_t}$ (anchor) is similar to $\\bm{x_t^{+}}$ (positive) but is dissimilar to $\\bm{x_t^{-}}$ (negative), the goal of online adaptive metric learning is to learn a model $\\bm{F}: \\mathcal{R}^d \\mapsto \\mathcal{R}^{d'}$ such that $||\\bm{F}(\\bm{x_t})-\\bm{F}(\\bm{x_t^{+}})||_2\\ll||\\bm{F}(\\bm{x_t})-\\bm{F}(\\bm{x_t^{-}})||_2$. \n\nThe overall aim of the task is to learn a metric model with adaptive complexity while maintaining a high constraint utilization rate. Here, the complexity of $\\bm{F}$ needs to be adaptive so that its hypothesis space is automatically enlarged or shrunk as necessary.\n\nIn practical applications, the input triplet stream can be generated as follows. First, some seed triplets can be constructed from user clicks. Then, more triplets are formed by taking transitive closures\\footnote{If $(\\bm{x_1}, \\bm{x_2})$ and $(\\bm{x_1}, \\bm{x_3})$ are two similar pairs, then $(\\bm{x_2}, \\bm{x_3})$ is a similar pair. 
If $(\\bm{x_1}, \\bm{x_2})$ is a similar pair and $(\\bm{x_1}, \\bm{x_3})$ is a dissimilar pair, then $(\\bm{x_2}, \\bm{x_3})$ is a dissimilar pair. If $(\\bm{x_1}, \\bm{x_2})$ is a similar pair and $(\\bm{x_2}, \\bm{x_3})$ is a dissimilar pair, then $(\\bm{x_1}, \\bm{x_3})$ is a dissimilar pair.} over these seed triplets. The generated triplets are inserted into the stream in chronological order of creation.\nIn this paper, we assume the existence of such triplet stream and focus on addressing the challenges of online metric learning in such scenarios.\n\n\\subsection{Overview}\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=0.8\\textwidth]{fig1.jpg}\n\\caption{Overview of the OAHU{}. The chromatic dashed arrows represent update contributions and directions. Each $L_i \\in \\lbrace L_1, L_2, \\ldots \\rbrace$ represents a linear transformation layer followed by a ReLU activation. $\\lbrace E_0, E_1, \\dots \\rbrace$ are the embedding layers connected to corresponding input or hidden layers. Note that $E_0$ here represents a linear metric model, i.e., a linear transformation from the input feature space to the embedding space.}\n\\vspace{-2mm}\n\\label{fig:overview}\n\\end{figure*}\n\nTo learn a metric model with adaptive complexity, we need to address the following question: \\textit{when} and \\textit{how} to change the ``complexity'' of metric model in an \\textit{online} setting? In this section, we will discuss the details of the proposed framework OAHU{} that addresses the question in a data-driven manner.\n\nFigure~\\ref{fig:overview} illustrates the structure of our OAHU{} framework. Inspired by recent works~\\cite{SrinivasB16,HuangSLSW16}, we use an over-complete network and automatically adapt its effective depth to learn a metric function with an appropriate complexity based on input constraints. Consider a neural network with $L$ hidden layers:\nwe connect an independent embedding layer to the network input layer and each of these hidden layers. Every embedding layer represents a space where similar instances are closer to each other and vice-versa. Therefore, unlike conventional online metric learning solutions that usually learns a linear model, OAHU{} is an ensemble of models with varying complexities that share low-level knowledge with each other. For convenience, let $E_l \\in \\lbrace E_0, E_1, E_2, \\ldots, E_L\\rbrace$ denote the $l^{th}$ metric model in OAHU{}, i.e., the network branch starting from input layer to the $l^{th}$ metric embedding layer, as illustrated in Figure~\\ref{fig:overview}. The simplest model in OAHU{} is $E_0$, which represents a linear transformation from the input feature space to the metric embedding space. A weight $\\alpha^{(l)}\\in [0,1]$ is assigned to $E_l$, measuring its importance in OAHU{}.\n\nFor a triplet constraint $(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})$ that arrives at time $t$, its metric embedding $f^{(l)}(\\bm{x_t^*})$ generated by $E_l$ is\n\n\\small\n\\begin{equation}\n\\label{eq:metic_embedding}\n f^{(l)}(\\bm{x_t^*}) = h^{(l)}\\Theta^{(l)}\n\\end{equation}\n\\normalsize\nwhere $h^{(l)} = \\sigma(W^{(l)}h^{(l-1)}),$ with $l\\ge 1, l\\in \\mathbb{N}$, and $h^{(0)} = \\bm{x_t^*}$.\n\nHere $\\bm{x_t^*}$ denotes any anchor ($\\bm{x_t}$), positive ($\\bm{x_t^{+}}$) or negative ($\\bm{x_t^{-}}$) instance based on the definition in Section~\\ref{subsec:ps}, and $h^{(l)}$ represents the activation of $l^{\\text{th}}$ hidden layer. 
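\n\nFor intuition, a minimal PyTorch-style sketch of this structure is given below. It is purely illustrative (layer widths, names and the number of hidden layers are placeholders, and training code is omitted), and it already applies the unit-sphere projection of each embedding that we introduce next; the returned list contains the embedding $f^{(l)}(\\bm{x})$ of every metric model $E_l$ as in Equation~\\eqref{eq:metic_embedding}.\n\\begin{verbatim}\n# Minimal, illustrative sketch of the ensemble of metric models:\n# an over-complete network whose input layer and every hidden layer\n# feed an independent embedding head E_0, ..., E_L.\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass OAHUSketch(nn.Module):\n    def __init__(self, d_in, d_hidden, d_emb, L):\n        super().__init__()\n        dims = [d_in] + [d_hidden] * L\n        self.hidden = nn.ModuleList(\n            [nn.Linear(dims[l], dims[l + 1]) for l in range(L)])\n        self.heads = nn.ModuleList(\n            [nn.Linear(dims[l], d_emb) for l in range(L + 1)])\n\n    def forward(self, x):\n        embeddings, h = [], x\n        for l, head in enumerate(self.heads):\n            # metric embedding of E_l, projected onto the unit sphere\n            embeddings.append(F.normalize(head(h), dim=-1))\n            if l < len(self.hidden):\n                h = torch.relu(self.hidden[l](h))\n        return embeddings\n\\end{verbatim}\n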
\nNote that we also explicitly limit the learned metric embedding $f^{(l)}(\\bm{x_t^*})$ to reside on a unit sphere, i.e., $||f^{(l)}(\\bm{x_t^*})||_2=1$, to reduce the potential model search space and accelerate the training of OAHU{}. \n\nDuring the training phase, for every arriving triplet $(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})$, OAHU{} first retrieves the metric embedding $f^{(l)}(\\bm{x_t^*})$ from $l^{th}$ metric model according to Eq.~\\ref{eq:metic_embedding}, and then produces a local loss $\\mathcal{L}^{(l)}$ for $E_l$ by evaluating the similarity and dissimilarity errors based on $f^{(l)}(\\bm{x_t^*})$. Thus, the overall loss introduced by this triplet is given by\n\\small\n\\begin{equation}\n \\mathcal{L}_{overall}(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})=\\sum\\limits_{l=0}^L \\alpha^{(l)} \\cdot \\mathcal{L}^{(l)}(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})\n\\end{equation}\n\\normalsize\n\nSince the new architecture of OAHU{} introduces two sets of new parameters, $\\Theta^{(l)}$ (parameters of $f^{(l)}$) and $\\alpha^{(l)}$, we need to learn $\\Theta^{(l)}$, $\\alpha^{(l)}$ and $W^{(l)}$ during the online learning phase. Therefore, the final optimization problem to solve in OAHU{} at time $t$ is given by\n\\small\n\\begin{equation}\n\\begin{aligned}\n& \\underset{\\Theta^{(l)},W^{(l)}, \\alpha^{(l)}}{\\text{minimize}}\n& & \\mathcal{L}_{overall} \\\\\n& \\text{subject to}\n& & ||f^{(l)}(\\bm{x_t^*})||_2 = 1, \\forall l = 0, \\ldots, L.\n\\end{aligned}\n\\end{equation}\n\\normalsize\n\nTo solve the above optimization problem, we propose a novel Adaptive-Bound Triplet Loss (ABTL) in Section~\\ref{subsec:abel} to estimate $\\mathcal{L}^{(l)}$ \nand introduce a novel Adaptive Hedge Update (AHU) method in Section~\\ref{subsec:ahu} for online updating $\\Theta^{(l)}$, $W^{(l)}$ and $\\alpha^{(l)}$.\n\n\\subsection{Adaptive-Bound Triplet Loss}\n\\label{subsec:abel}\n\n\\subsubsection{Limitations of Existing Pairwise\/Triplet Loss} \nExisting online metric learning solutions typically optimize a pairwise or triplet loss in order to learn a similarity metric. A widely-accepted pairwise loss~\\cite{Shalev-ShwartzSN04} and a common triplet loss~\\cite{LiGWZHS18} are shown in Equation~\\ref{eq:pairloss} and ~\\ref{eq:tripletloss} respectively. \n\\small\n\\begin{equation}\n\\label{eq:pairloss}\n L(x_t, x_t') = \\max\\lbrace0, y_{t}(d^2(x_{t}, x_{t}')-b)+1\\rbrace\n\\end{equation}\n\\begin{equation}\n\\label{eq:tripletloss}\n L(x_t, x_t^+, x_t^-) = \\max\\lbrace0, b+d(x_t, x_t^+)-d(x_t, x_t^-)\\rbrace\n\\end{equation}\n\\normalsize\nHere $y_t\\in\\lbrace+1, -1\\rbrace$ denotes whether $x_t$ is similar ($+1$) or dissimilar ($-1$) to $x_t'$ and $b \\in \\mathcal{R}$ is a user-specified fixed margin. Compared to the triplet loss that simultaneously evaluates both similarity and dissimilarity relations, the pairwise loss can only focus on one of the relations at a time, which leads to a poor metric quality. On the other hand, without a proper margin, \nthe loose restriction of triplet loss can result in many failure cases. \nHere, a failure case is a triplet that produces a zero loss value, where dissimilar instances are closer than similar instances.\nAs the example shown in Figure~\\ref{fig:failuremode}, for an input triplet $(x_t, x_t^+, x_t^-)$, although $x_t^+$ is \ncloser to $x_t^-$ and further from $x_t$, models optimizing triplet loss would incorrectly ignore this constraint due to its zero loss value. 
In addition, selecting an appropriate margin is highly data-dependent, and requires extensive domain knowledge. Furthermore, as the model improves performance over time, a larger margin is required to avoid failure cases. \n\nNext, we describe an adaptive-bound triplet loss that attempts to solve these drawbacks.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.5\\columnwidth]{fig2.jpg}\n\\caption{A failure case of triplet loss.}\n\\vspace{-4mm}\n\\label{fig:failuremode}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.7\\columnwidth]{fig3.jpg}\n\\caption{Schematic illustration of ABTL.}\n\\vspace{-4mm}\n\\label{fig:abel}\n\\end{figure}\n\n\\subsubsection{Adaptive-Bound Triplet Loss (ABTL)} \nWe define that instances of the same classes are mutually attractive to each other, while instances of different classes are mutually repulsive to each other. Therefore, for any input triplet constraint $(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})$, there is an attractive loss $\\mathcal{L}_{attr}\\in[0,1]$ between $\\bm{x_t}$ and $\\bm{x_t^{+}}$, and a repulsive loss $\\mathcal{L}_{rep}\\in[0,1]$ between $\\bm{x_t}$ and $\\bm{x_t^{-}}$. In the $l^{th}$ metric model $E_l \\in \\lbrace E_0, \\ldots, E_L \\rbrace$, we focus on a distance measure $D^{(l)}(\\bm{x_i}, \\bm{x_j})$ defined as follows:\n\\small\n\\begin{equation}\n\\label{eq:dist}\n\\begin{split}\nD^{(l)}(\\bm{x_i}, \\bm{x_j}) =& ||f^{(l)}(\\bm{x_i})-f^{(l)}(\\bm{x_j})||_2, \\quad \\forall l=0,1,2,\\ldots,L\\\\\n\\end{split}\n\\end{equation}\n\\normalsize\nwhere $f^{(l)}(\\bm{x})$ denotes the embedding generated by $E_l$. Since $||f^{(l)}(\\bm{x})||_2$ is restricted to be $1$, we have $D^{(l)}(\\bm{x_i}, \\bm{x_j})\\in [0,2]$.\nIn each iteration, let $D^{(l)}_{orig}(\\bm{x_1},\\bm{x_2})$ denote the distance between $\\bm{x_1}$ and $\\bm{x_2}$ before applying the update to $f^{(l)}$ (i.e., updating $\\Theta^{(l)}$ and $\\lbrace W^{(j)}\\rbrace_{j=0}^l$) and $D^{(l)}_{update}(\\bm{x_1},\\bm{x_2})$ denote the distance after applying the update. \n\nFigure~\\ref{fig:abel} illustrates the main idea of ABTL. The objective is to have the distance $D^{(l)}_{update}(\\bm{x_t}, \\bm{x_t^{+}})$ of two similar instances $\\bm{x_t}$ and $\\bm{x_t^{+}}$ to be less than or equal to a similarity threshold $d^{(l)}_{sim}(\\bm{x_t}, \\bm{x_t^{+}})$ so that the attractive loss $\\mathcal{L}^{(l)}_{attr}(\\bm{x_t}, \\bm{x_t^{+}})$ drops to zero; on the other hand, for two dissimilar instances $\\bm{x_t}$ and $\\bm{x_t^{-}}$, we desire their distance $D^{(l)}_{update}(\\bm{x_t}, \\bm{x_t^{-}})$ to be greater than or equal to a dissimilarity threshold $d^{(l)}_{dis}(\\bm{x_t}, \\bm{x_t^{-}})$, thereby reducing the repulsive loss $\\mathcal{L}^{(l)}_{rep}(\\bm{x_t}, \\bm{x_t^{-}})$ to zero. Equation~\\ref{eq:constraint} presents these constraints mathematically.\n\\small\n\\begin{equation}\n\\label{eq:constraint}\n \\left\\{\\begin{matrix}\n D^{(l)}_{update}(\\bm{x_t}, \\bm{x_t^{+}}) \\le d^{(l)}_{sim}(\\bm{x_t}, \\bm{x_t^{+}})\\\\ \n \\\\\n D^{(l)}_{update}(\\bm{x_t}, \\bm{x_t^{-}}) \\ge d^{(l)}_{dis}(\\bm{x_t}, \\bm{x_t^-})\n \\end{matrix}\\right.\n\\end{equation}\n\\normalsize\n\nUnfortunately, directly finding an appropriate $d^{(l)}_{sim}(\\bm{x_t}, \\bm{x_t^{+}})$ or $d^{(l)}_{dis}(\\bm{x_t}, \\bm{x_t^-})$ is difficult, since it varies with different input constraints. 
Therefore, we introduce a user-specified hyper-parameter $\\tau>0$ to explicitly restrict the range of both thresholds.\nLet $\\mathcal{T}^{(l)}_{sim}$ denote the upper bound of $d^{(l)}_{sim}(\\bm{x_t}, \\bm{x_t^{+}})$, and $\\mathcal{T}^{(l)}_{dis}$ denote the lower bound of $d^{(l)}_{dis}(\\bm{x_t}, \\bm{x_t^-})$, \n\n\\small\n\\begin{equation}\n \\left\\{\\begin{matrix}\n \\mathcal{T}^{(l)}_{sim} = \\tau \\ge d^{(l)}_{sim}(\\bm{x_t}, \\bm{x_t^{+}})\\\\\n \\\\\n \\mathcal{T}^{(l)}_{dis} = 2-\\tau \\leq d^{(l)}_{dis}(\\bm{x_t}, \\bm{x_t^-})\n \\end{matrix}\\right.\n\\end{equation}\n\\normalsize\n\nWithout loss of generality, we propose\n\n\\small\n\\begin{equation}\n \\left\\{\\begin{matrix}\n d^{(l)}_{sim}(\\bm{x_t}, \\bm{x_t^{+}}) = a_1{\\rm e}^{D^{(l)}_{orig}(\\bm{x_t}, \\bm{x_t^{+}})}+b_1\\\\ \n \\\\\n d^{(l)}_{dis}(\\bm{x_t}, \\bm{x_t^-}) = -a_2{\\rm e}^{-D^{(l)}_{orig}(\\bm{x_t}, \\bm{x_t^{-}})}+b_2\n \\end{matrix}\\right.\n\\end{equation}\n\\normalsize\nwhere $a_1$, $b_1$, $a_2$ and $b_2$ are variables need to be determined.\nNote that $d^{(l)}_{sim}(\\bm{x_t}, \\bm{x_t^{+}})$ and $d^{(l)}_{dis}(\\bm{x_t}, \\bm{x_t^-})$ are monotonically increasing and decreasing, respectively. Therefore, $d^{(l)}_{sim}$ preserves the information of original relative similarity, i.e., $d^{(l)}_{sim}(\\bm{x_1}, \\bm{x_2}) < d^{(l)}_{sim}(\\bm{x_1}, \\bm{x_3})$ if $D^{(l)}_{orig}(\\bm{x_1}, \\bm{x_2}) < D^{(l)}_{orig}(\\bm{x_1}, \\bm{x_3})$. This property is critical to many real-world applications that require a fine-grained similarity comparison, such as image retrieval.\n\nThe values of $a_1$, $b_1$, $a_2$ and $b_2$ are then determined by the following boundary conditions:\n\\small\n\\begin{equation}\n\\left\\{\\begin{matrix}\nd^{(l)}_{sim}|_{D^{(l)}_{orig}(\\bm{x_t},\\bm{x_t^{+}})=2}\\leq\\mathcal{T}^{(l)}_{sim}=\\tau &\nd^{(l)}_{sim}|_{D^{(l)}_{orig}(\\bm{x_t},\\bm{x_t^{+}})=0}=0\\\\\nd^{(l)}_{dis}|_{D^{(l)}_{orig}(\\bm{x_t},\\bm{x_t^-})=0}\\ge \\mathcal{T}^{(l)}_{dis}=2-\\tau &\nd^{(l)}_{dis}|_{D^{(l)}_{orig}(\\bm{x_t},\\bm{x_t^-})=2}=2\n\\end{matrix}\\right.\n\\end{equation}\n\\normalsize\n\nThus we have\n\\small\n\\begin{equation}\n \\left\\{\\begin{matrix}\n d^{(l)}_{sim}\\big(\\bm{x_t}, \\bm{x_t^{+}}\\big)=\\frac{\\tau}{{\\rm e}^2-1}\\big ({\\rm e}^{D^{(l)}_{orig}(\\bm{x_t}, \\bm{x_t^{+}})}-1\\big ) \\\\\n \\\\\n d^{(l)}_{dis}\\big(\\bm{x_t}, \\bm{x_t^{-}}\\big)= -\\frac{2-(2-\\tau)}{1-{\\rm e}^{-2}}\\big({\\rm e}^{-D^{(l)}_{orig}(\\bm{x_t}, \\bm{x_t^{-}})}-1\\big) +(2-\\tau)\n\\end{matrix}\\right.\n\\end{equation}\n\\normalsize\n\nProvided with $d^{(l)}_{sim}$ and $d^{(l)}_{dis}$, the attractive loss $\\mathcal{L}_{attr}$ and repulsive loss $\\mathcal{L}_{rep}$ are determined by\n\n\\small\n\\begin{equation}\n\\label{eq:attr_rep}\n \\left\\{\\begin{matrix}\n \\mathcal{L}_{attr}^{(l)}(\\bm{x_t}, \\bm{x_t^{+}}) = \\max \\Big \\lbrace 0,\\frac{1}{2-c_1}D^{(l)}_{orig}(\\bm{x_t}, \\bm{x_t^{+}})-\\frac{c_1}{2-c_1}\\Big \\rbrace\\\\ \n \\\\\n \\mathcal{L}_{rep}^{(l)}(\\bm{x_t}, \\bm{x_t^{-}}) = \\max \\Big \\lbrace 0, \\frac{-1}{c_2}D^{(l)}_{orig}(\\bm{x_t}, \\bm{x_t^-})+1 \\Big \\rbrace\n \\end{matrix}\\right.\n\\end{equation}\n\\normalsize\nwhere $c_1=d^{(l)}_{sim}(\\bm{x_t}, \\bm{x_t^{+}})$ and $c_2=d^{(l)}_{dis}(\\bm{x_t}, \\bm{x_t^{-}})$.\n\nTherefore, the local loss $\\mathcal{L}^{(l)}(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^-})$, which measures the similarity and dissimilarity errors of the input triplet $(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^-})$ in $E_l$, is the average of 
$\\mathcal{L}^{(l)}_{attr}$ and $\\mathcal{L}^{(l)}_{rep}$:\n\\small\n\\begin{equation}\n\\label{eq:local_loss}\n\\begin{split}\n \\mathcal{L}^{(l)}(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^-})&=\\frac{1}{2}\\Big(\\mathcal{L}_{attr}^{(l)}(\\bm{x_t}, \\bm{x_t^{+}})+\\mathcal{L}_{rep}^{(l)}(\\bm{x_t}, \\bm{x_t^{-}})\\Big)\\\\\n\\end{split}\n\\end{equation}\n\\normalsize\n\nNext, we answer an important question: What is the best value of $\\tau$? Here, we demonstrate that the optimal range of $\\tau$ is $(0, \\frac{2}{3})$. The best value of $\\tau$ is thus empirically selected within this range via cross-validation.\n\n\n\n\\begin{theorem}\n\\label{theo:1}\nWith $\\tau \\in (0, \\frac{2}{3}) $, by optimizing the proposed adaptive-bound triplet loss, different classes are separated in the metric embedding space.\n\\end{theorem}\n\\begin{proof}\nLet $D(c_1,c_2)$ denotes the minimal distance between classes $c_1$ and $c_2$, i.e., the distance between two closest instances from $c_1$ and $c_2$ respectively. Consider an arbitrary quadrupole $(x_1, x_2, x_3, x_4) \\in \\mathcal{Q}$ where $\\lbrace x_1, x_2\\rbrace \\in c_1$, $\\lbrace x_3,x_4 \\rbrace \\in c_2$, and $\\mathcal{Q}$ is the set of all possible quadrupoles generated from class $c_1$ and $c_2$. Suppose $(x_2,x_3)$ is the closest dissimilar pair among all possible dissimilar pairs that can be extracted from $(x_1, x_2, x_3, x_4)$ \\footnote{it can always be achieved by re-arranging symbols of $x_1$,$x_2$,$x_3$ and $x_4$}. We first prove that the lower bound of $D(c_1,c_2)$ is given by $\\min\\limits_{(x_1, x_2, x_3, x_4)\\in \\mathcal{Q}} D^{(l)}(x_1, x_4)-D^{(l)}(x_1, x_2)-D^{(l)}(x_3, x_4)$.\nDue to the triangle inequality property of p-norm distance, we have \n\\small\n\\begin{equation}\n\\begin{split}\n D^{(l)}(x_1, x_4) &\\leq D^{(l)}(x_1, x_2) + D^{(l)}(x_2, x_4)\\\\\n & \\leq D^{(l)}(x_1, x_2) + D^{(l)}(x_2, x_3) + D^{(l)}(x_3, x_4)\n\\end{split}\n\\end{equation}\n\\normalsize\nTherefore,\n\\small\n\\begin{equation}\n\\begin{split}\n D(c_1, c_2) &= \\min_{(x_1, x_2, x_3, x_4)\\in \\mathcal{Q}} D^{(l)}(x_2, x_3) \\\\\n &\\ge \\min_{(x_1, x_2, x_3, x_4)\\in \\mathcal{Q}} D^{(l)}(x_1, x_4)-D^{(l)}(x_1, x_2)-D^{(l)}(x_3, x_4)\n\\end{split}\n\\end{equation}\n\\normalsize\n\nBy optimizing the adaptive-bound triplet loss, the following constraints are satisfied \n\n\\small\n\\begin{equation}\n \\left\\{\\begin{matrix}\n D^{(l)}(x_1, x_2) \\le d^{(l)}_{sim}(x_1, x_2) \\le \\mathcal{T}^{(l)}_{sim}\\\\\n \\\\\n D^{(l)}(x_3, x_4) \\le d^{(l)}_{sim}(x_3, x_4) \\le \\mathcal{T}^{(l)}_{sim}\\\\\n \\\\\n D^{(l)}(x_1, x_4) \\ge d^{(l)}_{dis}(x_1, x_4) \\ge \\mathcal{T}^{(l)}_{dis}\n \\end{matrix}\\right.\n\\end{equation}\n\\normalsize\n\nThus\n\\small\n\\begin{equation}\n\\begin{split}\n D(c_1, c_2) \n &\\ge \\min_{(x_1, x_2, x_3, x_4)\\in \\mathcal{Q}} D^{(l)}(x_1, x_4)-D^{(l)}(x_1, x_2)-D^{(l)}(x_3, x_4)\\\\\n &\\ge \\mathcal{T}^{(l)}_{dis} - 2\\mathcal{T}^{(l)}_{sim}\\\\\n &=2-\\tau-2\\tau\\\\\n &=2-3\\tau\n\\end{split}\n\\end{equation}\n\\normalsize\nif $\\tau \\in (0, \\frac{2}{3})$, we have $3\\tau < 2$. Therefore,\n\\small\n\\begin{equation}\n\\label{eq:final_proof}\n D(c_1, c_2) \\ge 2-3\\tau > 0\n\\end{equation}\n\\normalsize\nEquation~\\ref{eq:final_proof} indicates that the minimal distance between class $c_1$ and $c_2$ is always positive so that these two classes are separated. 
\n\n\\end{proof}\n\n\\subsubsection{Discussion}\nCompared to pairwise and triplet loss, the proposed ABTL has several advantages:\n\\begin{itemize}\n \\item Similar to triplet loss, ABTL simultaneously evaluates both similarity and dissimilarity errors to obtain a high-quality metric.\n \\item The similarity threshold $d^{(l)}_{sim}(\\bm{x_t}, \\bm{x_t^{+}})$ is always less than or equal to $D^{(l)}_{orig}(\\bm{x_t},\\bm{x_t^{+}})$ and the dissimilarity threshold $d^{(l)}_{dis}(\\bm{x_t}, \\bm{x_t^{-}})$ is always greater than or equal to $D^{(l)}_{orig}(\\bm{x_t},\\bm{x_t^{-}})$. Thus, every input constraint contributes to model update, which eliminates the failure cases of triplet loss and leads to a higher constraint utilization rate.\n \\item The optimal range of $\\tau$ is provided with a theoretical proof.\n\\end{itemize}\n\n\\subsection{Adaptive Hedge Update (AHU)}\n\\label{subsec:ahu}\nWith adaptive-bound triplet loss, the overall loss in OAHU{} becomes\n\\small\n\\begin{equation}\n\\begin{split}\n \\mathcal{L}_{overall}(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})&=\\sum\\limits_{l=0}^L \\alpha^{(l)}\\cdot\\mathcal{L}^{(l)}(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})\\\\\n &=\\sum\\limits_{l=0}^L \\frac{\\alpha^{(l)}}{2}\\Big(\\mathcal{L}^{(l)}_{attr}(\\bm{x_t}, \\bm{x_t^{+}})+\\mathcal{L}^{(l)}_{rep}(\\bm{x_t}, \\bm{x_t^{-}})\\Big)\n\\end{split}\n\\end{equation}\n\\normalsize\n\nWe propose to learn $\\alpha^{(l)}$ using the Hedge Algorithm~\\cite{FreundS97}. Initially, all weights are uniformly distributed, i.e., $\\alpha^{(l)}=\\frac{1}{L+1}, l=0,1,2,\\ldots, L$. At every iteration, for each metric model $E_l$, OAHU{} transforms the input triplet constraint into $E_l$'s metric embedding space, where the constraint similarity and dissimilarity errors are evaluated. Thus $\\alpha^{(l)}$ is updated based on the local loss suffered by $E_l$ as:\n\\small\n\\begin{equation}\n\\label{eq:alpha_update}\n \\alpha^{(l)}_{t+1} \\leftarrow\n \\begin{cases}\n \\alpha^{(l)}_t\\beta^{\\mathcal{L}^{(l)}} & \\beta^{\\min\\limits_{l}\\mathcal{L}^{(l)}}\\log\\mathcal{L}^{(l)} > \\beta-1\\\\\n \\alpha^{(l)}_t[1-(1-\\beta)\\mathcal{L}^{(l)}]& otherwise\n \\end{cases}\n\\end{equation}\n\\normalsize\nwhere $\\beta \\in (0,1)$ is the discount factor. We choose to update $\\alpha^{(l)}$ based on Equation~\\ref{eq:alpha_update}, to maximize the update of $\\alpha^{(l)}$ at each step. In other words, if a metric model produces a low\/high $\\mathcal{L}^{(l)}$ at time $t$, its associated weight $\\alpha^{(l)}$ will gain as much increment\/decrement as possible. Therefore, in OAHU{}, metric models with good performance are highlighted while models with poor performance are eliminated. \nMoreover, following Sahoo et.al~\\cite{SahooPLH18}, to avoid the model bias issue~\\cite{ChenGS15,GulcehreMVB16}, i.e., OAHU{} may unfairly prefer shallow models due to their faster convergence rate and excessively reduces weights of deeper models, we introduce a smooth factor $s$ used to set a minimal weight for each metric model. Thus, after updating weights according to Equation~\\ref{eq:alpha_update}, the weights are set as: $\\alpha^{(l)}_{t+1}=\\max (\\alpha^{(l)}_{t+1}, \\frac{s}{L+1})$. \nFinally, at the end of every round, the weights $\\alpha$ are normalized such that $\\sum\\limits_{l=0}^L \\alpha^{(l)}_{t+1}=1$.\n\nLearning the parameters $\\Theta^{(l)}$ and $W^{(l)}$ is tricky but can be achieved via gradient descent. 
By applying Online Gradient Descent (OGD) algorithm, the update rules for $\\Theta^{(l)}$ and $W^{(l)}$ are given by:\n\\small\n\\begin{equation}\n\\label{eq:theta_update}\n\\begin{split}\n \\Theta^{(l)}_{t+1} &\\leftarrow \\Theta^{(l)}_{t} - \\eta \\nabla_{\\Theta^{(l)}_t} \\mathcal{L}_{overall}(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})\\\\\n &= \\Theta^{(l)}_{t} - \\eta \\frac{\\alpha^{(l)}}{2} \\Big( \\nabla_{\\Theta^{(l)}_t}\\mathcal{L}^{(l)}_{attr}(\\bm{x_t}, \\bm{x_t^{+}})+\\nabla_{\\Theta^{(l)}_t} \\mathcal{L}^{(l)}_{rep}(\\bm{x_t}, \\bm{x_t^{-}})\\Big)\n\\end{split}\n\\end{equation}\n\\begin{equation}\n\\label{eq:w_update}\n \\begin{split}\n W^{(l)}_{t+1} &\\leftarrow W^{(l)}_{t} - \\eta \\nabla_{W^{(l)}_t} \\mathcal{L}_{overall}(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^{-}})\\\\\n &= W^{(l)}_{t} - \\eta \\sum\\limits_{j=l}^L \\frac{\\alpha^{(j)}}{2} \\Big( \\nabla_{W^{(l)}_t}\\mathcal{L}^{(j)}_{attr}(\\bm{x_t}, \\bm{x_t^{+}})+\\nabla_{W^{(l)}_t} \\mathcal{L}^{(j)}_{rep}(\\bm{x_t}, \\bm{x_t^{-}})\\Big)\n\\end{split}\n\\end{equation}\n\\normalsize\n\nUpdating $\\Theta^{(l)}$ (Equation~\\ref{eq:theta_update}) is simple since $\\Theta^{(l)}$ is specific to $E_l$, and is not shared across different metric models. In contrast, because deeper models share low-level knowledge with shallower models in OAHU{}, all models deeper than $l$ layers contribute to the update of $W^{(l)}$, as expressed in Equation~\\ref{eq:w_update}. Algorithm~\\ref{alg:OAHU{}} outlines the training process of OAHU{}.\n\n\\subsection{Regret Bound}\nOverall, hedge algorithm enjoys a regret bound of $R_T\\leq \\sqrt{TlnN}$~\\cite{FreundS97} where $N$ is the number of experts and $T$ is the number of trials. In our case, $N$ is the number of metric models (i.e., $L+1$) and $T$ is the number of input triplet constraints. The average worst-case convergence rate of OAHU{} is then given by $O(\\sqrt{ln(N)\/T})$. 
Therefore, OAHU{} is guaranteed to converge after learning from sufficiently many triplets.\n\n\n\\begin{algorithm}[t]\n\\caption{\\textbf{OAHU{}: Online Metric Learning with Adaptive Hedge Update }}\n\\label{alg:OAHU{}}\n\\begin{algorithmic}[1]\n\\REQUIRE Discount Factor $\\beta\\in(0,1)$; Smooth Factor $s$; Control Parameter $\\tau \\in (0, \\frac{2}{3})$; Learning rate $\\eta$; A randomly initialized ANN with $L$ hidden layers that is parameterized by $\\Theta^{(l)}$,$W^{(l)}$ and $\\alpha^{(l)}$.\n\\ENSURE $\\alpha^{(l)}$, $\\Theta^{(l)}$ and $W^{(l)}$\\\\\n\\STATE Initialize $\\alpha^{(l)}=\\frac{1}{L+1}, \\forall l=0,1,\\ldots, L$ \n\\FOR{$t=1,2,\\ldots,T$}\n\\STATE Receive a triplet constraint $(\\bm{x_t}, \\bm{x_t^{+}}, \\bm{x_t^-})$\\\\\n\\STATE Transform and retrieve the metric embedding $f^{(l)}(\\bm{x_t})$, $f^{(l)}(\\bm{x_t^{+}})$ and $f^{(l)}(\\bm{x_t^-})$ from each model $E_l$.\\\\\n\\STATE Evaluate $\\mathcal{L}^{(l)}_{attr}(\\bm{x_t}, \\bm{x_t^{+}})$ and $\\mathcal{L}^{(l)}_{rep}(\\bm{x_t}, \\bm{x_t^{-}})$ (Eq.~\\ref{eq:attr_rep}), $\\forall l=0,1,\\ldots,L$.\n\\STATE Compute $L^{(l)}(\\bm{x_t}, \\bm{x_t^{+}},\\bm{x_t^{-}}), \\forall l=0, 1, \\ldots, L$ as per Eq.~\\ref{eq:local_loss}.\n\\STATE Update $\\Theta^{(l)}_{t+1},\\forall l=0,1,2,\\ldots,L$ as per Eq.~\\ref{eq:theta_update}.\\\\\n\\STATE Update $W^{(l)}_{t+1},\\forall l=0,1,2,\\ldots,L$ as per Eq.~\\ref{eq:w_update}.\\\\\n\\STATE Update $\\alpha^{(l)}_{t+1},\\forall l=0,1,2,\\ldots,L$ as per Eq.~\\ref{eq:alpha_update}.\\\\\n\\STATE $\\alpha^{(l)}_{t+1}=\\max (\\alpha^{(l)}_{t+1}, \\frac{s}{L+1}), \\forall l=0,1,2,\\ldots,L$.\\\\\n\\STATE Normalize $\\alpha^{(l)}_{t+1}$, i.e., $\\alpha^{(l)}_{t+1}=\\frac{\\alpha^{(l)}_{t+1}}{\\sum\\limits_{j=0}^L\\alpha^{(j)}_{t+1}}, \\forall l=0,1,2,\\ldots,L$.\n\\ENDFOR\n\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Application of OAHU{}}\n\\label{subsec:deployment}\n\nIn this part, we discuss the deployment of OAHU{} in different practical applications. Although OAHU{} can be applied in many applications, we mainly focus on the three most common tasks: classification, similarity comparison, and analogue retrieval. Note that the following discussion describes only one possible way of deployment for each task, and other deployment methods can exist.\n\n\\subsubsection{Classification}\n\\label{subsubsec:classification}\nGiven a training dataset $\\mathcal{D}=\\lbrace (\\bm{x_i}, y_i)\\rbrace_{i=1}^n$ where $\\bm{x_i}\\in \\mathcal{R}^d$ is an training instance and $y_i\\in \\lbrace1,2,\\ldots, c \\rbrace$ is its associated label, we determine the label $y\\in \\lbrace1,2,\\ldots,c\\rbrace$ of a test instance $\\bm{x}\\in \\mathcal{R}^d$ as follows:\n\\begin{itemize}\n \\item For every metric model $E_l$ in OAHU{}, we find $k$ nearest neighbors of $\\bm{x}$ measured by Eq.~\\ref{eq:dist}, each of which is a candidate similar to $\\bm{x}$. Hence, $k(L+1)$ candidates in total are found in OAHU{}. Let $\\mathcal{N} =\\lbrace \\lbrace(\\bm{x_j^{(l)}}, y^{(l)}_j)\\rbrace_{j=1}^k \\rbrace_{l=0}^L=\\lbrace \\mathcal{N}_l \\rbrace_{l=0}^L$ denotes these candidates where $\\bm{x_j^{(l)}}$ is the $j^{th}$ neighbor of $\\bm{x}$ found in $E_l$ and $y^{(l)}_j$ is its associated label. 
The similarity score between $\\bm{x}$ and $\\bm{x_j^{(l)}}$ is given by \n \\small\n \\begin{equation}\n S(\\bm{x}, \\bm{x_j^{(l)}})={\\rm e}^{-\\frac{D^{(l)}(\\bm{x_j^{(l)}}, \\bm{x})-d_{min}}{d_{max}-d_{min}}}\\cdot \\alpha^{(l)}\n \\end{equation}\n \\normalsize\n where\n $d_{min}=\\min\\limits_{ \\mathcal{N}_l}D^{(l)}(\\bm{x_j^{(l)}}, \\bm{x})$ and\n $d_{max}=\\max\\limits_{ \\mathcal{N}_l}D^{(l)}(\\bm{x_j^{(l)}}, \\bm{x})$.\n \\item Then, the class association score of each class $c_i$ is calculated by:\n \\small\n \\begin{equation}\n S(c_i) = \\sum\\limits_{y_j^{(l)}=c_i} S(\\bm{x}, \\bm{x_j^{(l)}})\n \\end{equation}\n \\normalsize\n \\item The final predicted label $y$ of $\\bm{x}$ in OAHU{} is determined as:\n \\small\n \\begin{equation}\n y = \\argmax\\limits_{c_i} S(c_i)\n \\end{equation}\n \\normalsize\n\\end{itemize}\n\\subsubsection{Similarity Comparison}\nThe task of similarity comparison is to determine whether a given pair $(\\bm{x_1}, \\bm{x_2})$ is similar or not. To do this, a user-specified threshold $\\mathcal{T}\\in (0,1)$ is needed. The steps below are followed to perform the similarity comparison in OAHU{}.\n\\begin{itemize}\n \\item First, for every metric model $E_l$, we compute the distance $D^{(l)}(\\bm{x_1}, \\bm{x_2})$ between $\\bm{x_1}$ and $\\bm{x_2}$. Since $D^{(l)}(\\bm{x_1}, \\bm{x_2}) \\in [0,2]$, we divide $D^{(l)}(\\bm{x_1}, \\bm{x_2})$ by $2$ for normalization.\n \\item The similarity probability of $(\\bm{x_1}, \\bm{x_2})$ is then determined by\n \\small\n \\begin{equation}\n \\label{eq:sim_com}\n P(\\bm{x_{1}},\\bm{x_{2}}) = \\sum\\limits_{l=0}^L \\alpha^{(l)}\\cdot p_{l}\n \\end{equation}\n \\normalsize\n where $p_{l}=1$ if $D^{(l)}(\\bm{x_1}, \\bm{x_2})\/2<\\mathcal{T}$; Otherwise, $p_l=0$.\n \\item $(\\bm{x_1},\\bm{x_2})$ is similar if $ P(\\bm{x_1}, \\bm{x_2})\\ge 0.5$ and is dissimilar if $ P(\\bm{x_1}, \\bm{x_2})< 0.5$.\n\\end{itemize}\n\\subsubsection{Analogue Retrieval}\nGiven a query item $\\bm{x}$, the task of analogue retrieval is to find $k$ items that are most similar to $\\bm{x}$ in a database $\\mathcal{D}$. Note that analogue retrieval is a simplified ``classification'' task where we only find similar items and ignore the labeling issue.\n\\begin{itemize}\n\\item We first retrieve the $k(L+1)$ candidates that are similar to $\\bm{x}$ following the same process in classification task. Let $\\mathcal{N} =\\lbrace \\lbrace \\bm{x_j^{(l)}}\\rbrace_{j=1}^k \\rbrace_{l=0}^L=\\lbrace \\mathcal{N}_l \\rbrace_{l=0}^L$ denote these candidates. The similarity score between $\\bm{x_j^{(l)}}$ and $\\bm{x}$ is:\n\\small\n\\begin{equation}\n\\label{eq:retrieval}\n Sim(\\bm{x_j^{(l)}}, \\bm{x}) = {\\rm e}^{-\\frac{D^{(l)}(\\bm{x_j^{(l)}}, \\bm{x})-d_{min}}{d_{max}-d_{min}}}\\cdot \\alpha^{(l)}\n\\end{equation}\n\\normalsize\nwhere $d_{min}=\\min\\limits_{ \\mathcal{N}_l}D^{(l)}(\\bm{x_j^{(l)}}, \\bm{x})$ and $d_{max}=\\max\\limits_{ \\mathcal{N}_l}D^{(l)}(\\bm{x_j^{(l)}}, \\bm{x})$.\n\\item Then we sort these items in descending order based on their similarity scores and keep only the top $k$ distinct items as the final retrieval result.\n\\end{itemize}\n\n\\subsection{Time and Space Complexity}\nOverall, the execution overhead of OAHU{} mainly arises from the training of the ANN-based metric model, especially from the gradient computation. Assume that the maximum time complexity for calculating the gradient of one layer is a constant $C$. 
The overall time complexity of OAHU{} is simply $O(nL^2C)$, where $n$ is the number of input constraints and $L$ is the number of hidden layers. Therefore, with a pre-specified $L$ and ANN architecture, OAHU{} is an efficient online metric learning algorithm that runs in linear time. Let $d$, $S_{hidden}$ and $S_{emb}$ denote the dimensionality of the input data, the number of units in each hidden layer and the metric embedding size, respectively. The space complexity of OAHU{} is given by $O\\big(d\\cdot S_{emb} + L\\cdot S_{hidden}(d+S_{emb})+\\frac{L(L-1)}{2}S_{hidden}^2\\big)$.\n\n\\subsection{Extendability}\nAs discussed in Section~\\ref{subsec:deployment}, OAHU{} is a fundamental module that can perform classification, similarity comparison, or analogue retrieval tasks independently. Therefore, it can be easily plugged into any existing deep model (e.g., a CNN or RNN) by replacing the classifier, i.e., the fully-connected layers, with OAHU{}. The parameters of the feature-extraction layers (e.g., a convolutional layer) can be fine-tuned by optimizing $\\mathcal{L}_{overall}$ directly. However, in this paper, to prove the validity of the proposed approach, we only focus on OAHU{} itself and leave the deep model extension for future work.\n\n\n\n\\subsection{Baselines}\n To examine the quality of the metrics learned in OAHU{}, we compare it with several state-of-the-art online metric learning algorithms. (1) LEGO~\\cite{JainKDG08} (pairwise constraint): an online metric learning algorithm that learns a Mahalanobis metric based on LogDet regularization and gradient descent. (2) RDML~\\cite{JinWZ09} (pairwise constraint): an online algorithm for regularized distance metric learning that learns a Mahalanobis metric with a regret bound. (3) OASIS~\\cite{ChechikSSB10} (triplet constraint): an online algorithm for scalable image similarity learning that learns a bilinear similarity measure over sparse representations. Finally, (4) OPML~\\cite{LiGWZHS18} (triplet constraint): a one-pass closed-form solution for online metric learning that learns a Mahalanobis metric with a low computational cost. \n\n\\subsection{Implementation}\nWe have implemented OAHU{} using Python 3.6.2 and the Pytorch 0.4.0 library. All baseline methods were based on code released by the corresponding authors, except LEGO and RDML. Due to the unavailability of fully functional code for LEGO and RDML, we use our own implementations based on the authors' descriptions~\\cite{JainKDG08,JinWZ09}. The hyper-parameter settings of these approaches, including OAHU{}, vary across tasks, and will be discussed later for each specific task.\n\n\n\\subsection{Image Classification}\n\\subsubsection{Dataset and Classifier} \nTo demonstrate the validity of OAHU{} on complex practical applications, we use three publicly available real-world benchmark image datasets, including \\textbf{FASHION-MNIST}\\footnote{https:\/\/github.com\/zalandoresearch\/fashion-mnist}~\\cite{online}, \\textbf{EMNIST}\\footnote{https:\/\/www.nist.gov\/itl\/iad\/image-group\/emnist-dataset}~\\cite{CohenATS17} and \\textbf{CIFAR-10}\\footnote{https:\/\/www.cs.toronto.edu\/~kriz\/cifar.html}~\\cite{Krizhevsky09learningmultiple} for evaluation. To be consistent with the other image datasets, we convert the images of CIFAR-10 into grayscale through the OpenCV API~\\cite{opencv_library}, resulting in 1024 features. 
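\n\nAs an implementation note, this conversion is a single OpenCV call per image followed by flattening; a minimal, illustrative sketch (assuming each image is loaded as a $32\\times32\\times3$ array in RGB order) is given below.\n\\begin{verbatim}\n# Minimal, illustrative sketch of the CIFAR-10 preprocessing step.\nimport cv2\nimport numpy as np\n\ndef to_feature_vector(img_rgb):\n    # img_rgb: numpy uint8 array of shape (32, 32, 3), RGB order\n    gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)   # (32, 32)\n    return gray.astype(np.float32).flatten()           # 1024 features\n\\end{verbatim}\n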
\nThe \\textit{k-NN} classifier is employed here, since it is widely used for classification tasks with only one parameter, and is also consistent with the deployment of OAHU{} discussed in Section~\\ref{subsubsec:classification}. \nDetails of these datasets are listed in ``Image Classification'' part of Table~\\ref{tab:dataset}.\n\n\\begin{table}[t]\n\\centering\n\\resizebox{0.8\\columnwidth}{!}{%\n\\begin{tabular}{|c|c|r|r|r|}\n\\hline\n\\textbf{Task} & \\textbf{Dataset} & \\multicolumn{1}{c|}{\\textbf{\\# features}} & \\multicolumn{1}{c|}{\\textbf{\\# classes}} & \\multicolumn{1}{c|}{\\textbf{\\# instances}} \\\\ \\hline\n\\multirow{3}{*}{\\textbf{Image Classification}} & FASHION-MNIST & 784 & 10 & 70,000 \\\\ \\cline{2-5} \n & EMNIST & 784 & 47 & 131,600 \\\\ \\cline{2-5} \n & CIFAR-10 & 1024 & 10 & 60,000 \\\\ \\hline\n\\textbf{Face Verification} & LFW & 73 & 5749 & 13,233 \\\\ \\hline\n\\multirow{2}{*}{\\textbf{Image Retrieval}} & CARS-196 & 4096 & 196 & 16,185 \\\\ \\cline{2-5} \n & CIFAR-100 & 4096 & 100 & 60,000 \\\\ \\hline\n\\end{tabular}%\n}\n\\caption{Description of Datasets}\n\\vspace{-4mm}\n\\label{tab:dataset}\n\\end{table}\n\n\\begin{table*}[t]\n\\centering\n\\resizebox{0.8\\textwidth}{!}{%\n\\begin{tabular}{|c|l|l|l|l|l|l|l|l|l|c|}\n\\hline\n\\multirow{2}{*}{\\textbf{Methods}} & \\multicolumn{3}{c|}{\\textbf{FASHION-MNIST}} & \\multicolumn{3}{c|}{\\textbf{EMNIST}} & \\multicolumn{3}{c|}{\\textbf{CIFAR-10}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{win\/tie\/loss}}} \\\\ \\cline{2-10}\n & \\multicolumn{1}{c|}{\\textit{Error Rate}} & \\multicolumn{1}{c|}{$F_1$} & \\multicolumn{1}{c|}{$\\mathcal{U}$} & \\multicolumn{1}{c|}{\\textit{Error Rate}} & \\multicolumn{1}{c|}{$F_1$} & \\multicolumn{1}{c|}{$\\mathcal{U}$} & \\multicolumn{1}{c|}{\\textit{Error Rate}} & \\multicolumn{1}{c|}{$F_1$} & \\multicolumn{1}{c|}{$\\mathcal{U}$} & \\multicolumn{1}{c|}{} \\\\ \\hline\n\\textbf{LEGO} & 0.35$\\pm$0.01 $\\bullet$ & 0.64$\\pm$0.00 & 0.58$\\pm$0.00 & 0.84$\\pm$0.00 $\\bullet$ & 0.14$\\pm$0.02 & 0.66$\\pm$0.01 & 0.88$\\pm$0.00 $\\bullet$ & 0.05$\\pm$0.02 & 0.59$\\pm$0.00 & \\textbf{3\/0\/0} \\\\ \\hline\n\\textbf{RDML} & 0.22$\\pm$0.01 $\\bullet$ & 0.78$\\pm$0.00 & 0.47$\\pm$0.01 & 0.64$\\pm$0.03 $\\bullet$ & 0.32$\\pm$0.05 & 0.50$\\pm$0.00 & 0.76$\\pm$0.01 $\\bullet$ & 0.24$\\pm$0.01 & 0.50$\\pm$0.01 & \\textbf{3\/0\/0} \\\\ \\hline\n\\textbf{OASIS} & 0.21$\\pm$0.02 $\\bullet$ & 0.79$\\pm$0.01 & 0.14$\\pm$0.00 & 0.41$\\pm$0.01 $\\bullet$ & 0.59$\\pm$0.00 & 0.24$\\pm$0.00 & 0.79$\\pm$0.02 $\\bullet$ & 0.20$\\pm$0.03 & 0.47$\\pm$0.00 & \\textbf{3\/0\/0} \\\\ \\hline\n\\textbf{OPML} & 0.23$\\pm$0.00 $\\bullet$ & 0.78$\\pm$0.00 & 0.30$\\pm$0.01 & 0.43$\\pm$0.01 $\\bullet$ & 0.51$\\pm$0.02 & 0.45$\\pm$0.00 & 0.83$\\pm$0.01 $\\bullet$ & 0.16$\\pm$0.00 & 0.48$\\pm$0.04 & \\textbf{3\/0\/0} \\\\ \\hline\n\\textbf{OAHU{}} & \\textbf{0.18$\\pm$0.00} & \\textbf{0.81$\\pm$0.00} &\n\\textbf{1.00$\\pm$0.00} &\n\\textbf{0.35$\\pm$0.01} & \\textbf{0.64$\\pm$0.01} &\n\\textbf{1.00$\\pm$0.00} & \n\\textbf{0.68$\\pm$0.00} & \\textbf{0.31$\\pm$0.00} &\n\\textbf{1.00$\\pm$0.00} &\n - \\\\ \\hline\n\\end{tabular}%\n}\n\\caption{Comparison of classification performance on competing methods over benchmark image datasets. $\\bullet\/\\circ$ indicates \\textbf{OAHU{}} performs statistically better\/worse (0.05 significance level) than the respective method according to the $p$-values. The statistics of win\/tie\/loss is also included. 
Both mean and standard deviation of error rates, constraint utilization $\\mathcal{U}$ and macro $F_1$ scores are reported. $0.00$ denotes a value less than $0.005$.}\n\\vspace{-6mm}\n\\label{tab:classification}\n\\end{table*}\n\n\\begin{figure*}[t]\n\\centering\n\\subfloat[FASHION-MNIST\\label{fig:fashionmnist_triplet}]{\\includegraphics[width =0.3 \\textwidth]{fig4a.jpg}}\\hfill\n\\subfloat[CIFAR-10\\label{fig:cifar10_triplet}]{\\includegraphics[width = 0.3\\textwidth]{fig4b.jpg}}\n\\hfill\n\\subfloat[\\label{fig:weight_triplet}]{\\includegraphics[width = 0.3\\textwidth]{fig4c.jpg}}\n\\caption{(a) \/ (b) Error rates of competing methods with increasing number of constraints on FASHION-MNIST \/ CIFAR-10 dataset. (c) The evolvement of metric weight distribution in OAHU{} with increasing number of constraints on FASHION-MNIST dataset. }\n\\vspace{-4mm}\n\\label{fig:error_rate_vary_constraints}\n\\end{figure*}\n\n\\subsubsection{Experiment Setup and Constraint Generation}\n\\label{subsubsec: escg}\nTo perform the experiment, we first randomly shuffle each dataset and then divide it into development and test sets with the split ratio of $1:1$. Hyper-parameters of these baseline approaches were set based on values reported by the authors and fine-tuned via 10-fold cross-validation on the development set. Because OPML applies a one-pass triplet construction process to generate the constraints, we simply provide the training data in the development set as the input for OPML.\nFor other approaches including OAHU{}, we first randomly sample $5,000$ seeds (i.e., pairwise or triplet constraints) from the training data and then construct $5,000$ more constraints by taking transitive closure over these seeds.\nNote that the same number of constraints were adopted by the authors of LEGO~\\cite{JainKDG08} and OPML~\\cite{LiGWZHS18} when comparing their approaches with baselines. Thus our comparison is fair. \nIn OAHU{}, we set $\\beta=0.99$, $L=5$, $S_{hidden}=100$, $s=0.1$, $\\eta=0.3$ and $S_{emb}=50$ as default. The value of $\\tau$ was found by 10-fold cross-validation on the development set.\nTo reduce any bias that may result from random partition, all the classification results are averaged over $10$ individual runs with independent random shuffle processes.\n\n\\subsubsection{Evaluation Metric}\nSimilar to most of the baselines, e.g., LEGO~\\cite{JainKDG08} and OPML~\\cite{LiGWZHS18}, we adopt the error rate and macro $F_1$ score\\footnote{http:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.f1\\_score.html} as the evaluation criterion. Moreover, the $p$-values of student's t-test were calculated to check statistical significance. In addition, the statistics of win\/tie\/loss are reported according to the obtained p-values. The constraint utilization $\\mathcal{U}$, i.e., the fraction of input constraints that actually contribute to model update, is also reported for each method.\n\n\\subsubsection{Analysis}\nTable~\\ref{tab:classification} compares the classification performance of OAHU{} with baseline methods on benchmark datasets. We observe that LEGO and RDML perform poorly on complex image datasets, e.g., EMNIST ($47$ classes), since they learn a Mahalanobis metric from pairwise constraints, and are biased to either the similarity or dissimilarity relation when updating model parameters. 
OASIS performs better than OPML because the bilinear metric eliminates the positive and symmetric requirement of the covariance matrix, i.e., $A$ in $d(\\bm{x_1}, \\bm{x_2})=\\bm{x_1}^TA\\bm{x_2}$ which results in a larger hypothesis space~\\cite{ChechikSSB10}. However, OAHU{} outperforms all the baseline approaches by\nproviding not only significantly lower error rate, but also higher $F_1$ score, indicating the best overall classification performance among all competing methods.\n\n\nTo illustrate the effect of different numbers of constraints on the learning of a metric, we vary the input constraint numbers\\footnote{It is done by first generating $10,000$ constraints as discussed in Section~\\ref{subsubsec: escg} and then uniformly sample the desired number of constraints from them.} for all competing methods except OPML. Here we do not compare with OPML because it internally determines the number of triplets to use via a one-pass triplet construction process. Therefore, explicitly varying the input triplet number for OPML is impossible. Figure~\\ref{fig:fashionmnist_triplet} and Figure~\\ref{fig:cifar10_triplet} show the comparison of error rates on FASHION-MNIST and CIFAR-10 dataset respectively. \nThe large variance of error rates observed in both OASIS and RDML (i.e., error bars in the figures) indicates the strong correlation of their classification performance with the quality of input constraints.\nAll baseline approaches present an unstable, and sometimes, even worse performance with an increasing number of constraints. This is because these baselines are biased to those \"hard\" constraints leading to positive loss values and incorrectly ignore the emerging failure cases (see Figure~\\ref{fig:failuremode}). In contrast, OAHU{} provides the best and most stable classification performance with minimal variance, indicating its robustness to the quality of input constraints and better capability of capturing the semantic similarity and dissimilarity relations in data. \n\nOAHU{} is better mainly because it not only learns from every input constraint by optimizing the adaptive-bound triplet loss (see constraint utilization $\\mathcal{U}$ in Table~\\ref{tab:classification}), but also automatically adapts the model complexity to achieve better performance (see Figure~\\ref{fig:weight_triplet}).\n\n\n\\subsection{Face Verification}\n\\subsubsection{Dataset and Experiment Setup}\nFor face verification, we evaluate our method on Labeled Faces in the Wild Database (LFW)~\\cite{LFWTechUpdate}. \nThis dataset has two views: View 1 is used for development purpose (contains a training and a test set), and View 2 is taken as a benchmark for comparison (i.e., a 10-fold cross validation set containing pairwise images). \nThe goal of face verification in LFW is to determine whether a pair of face images belongs to the same person. \nSince OASIS, OPML, and OAHU{} are triplet-based methods, following the setting in OPML~\\cite{LiGWZHS18}, we adopt the image unrestricted configuration to conduct experiments for a fair comparison. \nIn the image unrestricted configuration, the label information (i.e. actual names) in training data can be used and as many pairs or triplets as one desires can be formulated. We use View 1 for hyper-parameter tuning and evaluate the performance of all competing algorithms on each fold (i.e., 300 intra-person and 300 inter-person pairs) in View 2. 
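As an illustration of what the unrestricted configuration permits, the following is a minimal sketch of identity-based triplet sampling; the function name, the uniform sampling scheme and the resampling of invalid negatives are our own illustrative choices and are not taken from any of the compared methods.
\begin{verbatim}
import random
from collections import defaultdict

def sample_triplets(names, num_triplets, seed=0):
    # Illustrative sampling only; the scheme and names are not taken
    # from the compared methods.
    # names: one person identifier per face image; only identities
    # with at least two images can supply an (anchor, positive) pair.
    rng = random.Random(seed)
    by_name = defaultdict(list)
    for idx, name in enumerate(names):
        by_name[name].append(idx)
    multi = [n for n, idxs in by_name.items() if len(idxs) >= 2]
    triplets = []
    while len(triplets) < num_triplets:
        name = rng.choice(multi)
        anchor, positive = rng.sample(by_name[name], 2)
        negative = rng.randrange(len(names))
        if names[negative] == name:
            continue  # resample: negative must come from another identity
        triplets.append((anchor, positive, negative))
    return triplets
\end{verbatim}
In the experiments reported here, such triplets would be drawn only from the training identities, with any constraint touching a validation or test pair discarded as described below.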
Note that the default parameter values in OAHU{} and baselines were set using the same values as discussed in Section~\\ref{subsubsec: escg}. Attribute features of LFW are used in the experiment so that the result can be easily reproduced and the comparison is fair. Attribute features\\footnote{http:\/\/www.cs.columbia.edu\/CAVE\/databases\/pubfig\/download\/lfw\\_attributes.txt} ($73$ dimension) provided by Kumar et al.~\\cite{KumarBBN09} are ``high-level'' features describing nameable attributes such as race, age etc., of a face image. Details of LFW are described in ``Face Verification'' part in Table~\\ref{tab:dataset}.\n\n\\subsubsection{Constraint Generation and Evaluation Metric}\nWe follow the same process described in Section~\\ref{subsubsec: escg} to generate constraints for training purpose. However, we add an additional step here: if a generated constraint contains any pair of instances in the validation set (i.e., all pairs in View 1) or the test set (i.e., all folds in View 2), we simply discard it and re-sample a new constraint. This step is repeated until the new constraint does not contain any pair in both sets. In the experiment, $10,000$ pair or triplet constraints in total are generated for training. \n\nNote, due to the one-pass triplet generation strategy of OPML, we let itself determine the training triplets.\nFor each test image pair, we first compute the distance between them measured by the learned metric, and then normalize the distance into $[0,1]$. The widely accepted Receiver Operating Characteristic (ROC) curve and Area under Curve (AUC) are adopted as the evaluation metric. \n\n\\subsubsection{Analysis}\n\n\\begin{figure}[t]\n\\centering\n\\subfloat[\\label{fig:roc}]{\\includegraphics[width =0.5 \\columnwidth]{fig5a.jpg}}\\hfill\n\\subfloat[\\label{fig:roc_weight}]{\\includegraphics[width = 0.5\\columnwidth]{fig5b.jpg}}\n\\caption{(a) ROC Curves of OAHU{} and contrastive methods on LFW dataset. AUC value of each method is given in bracket. (b) Metric weight distribution of OAHU{}.}\n\\vspace{-4mm}\n\\label{fig:roc_combine}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centering\n\\subfloat[Failure Cases]{\\includegraphics[width = 0.49\\columnwidth]{fig6a.jpg}}\\hfill\n\\subfloat[Successful Cases]{\\includegraphics[width = 0.49\\columnwidth]{fig6b.jpg}}\n\\caption{Face verification examples in the test set (View 2) ($\\mathcal{T}=0.55$). For images belonging to same person, $P(\\bm{x_1}, \\bm{x_2})$ should be close to $1$ and for images belonging to different people, $P(\\bm{x_1}, \\bm{x_2})$ should drop to $0$.}\n\\vspace{-4mm}\n\\label{fig:fv_cases}\n\\end{figure}\n\nThe ROC curves of all competing methods are given in Figure~\\ref{fig:roc} with corresponding AUC values calculated. It can be observed that the proposed approach OAHU{} outperforms all baseline approaches by providing significantly higher AUC value. As demonstrated in Figure~\\ref{fig:roc_weight}, OAHU{} is better mainly because \nit automatically adapts the model complexity to incorporate non-linear models, which help to separate instances that are difficult to distinguish in simple linear models. Figure~\\ref{fig:fv_cases} presents some example comparisons in the test set for both successful and failure cases. Here, the comparison probability $P(\\bm{x_1}, \\bm{x_2})$ computed based on Eq.~\\ref{eq:sim_com} is expected to approach $1$ if both images are from the same person and should be close to $0$ if they belong to different people. 
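Since Eq.~\ref{eq:sim_com} is not reproduced in this section, the snippet below uses a simple min--max normalized distance as a placeholder comparison probability; the function name, the normalization and the use of the threshold $\mathcal{T}$ for the accept\slash reject decision are assumptions of this sketch rather than the exact procedure of any method.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def score_fold(distances, labels, threshold=0.55):
    # Illustrative only.  distances: learned-metric distances for the
    # 600 pairs of one View-2 fold; labels: 1 intra-person, 0 inter-person.
    d = np.asarray(distances, dtype=float)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)  # normalize to [0, 1]
    prob_same = 1.0 - d        # placeholder for Eq. (sim_com)
    auc = roc_auc_score(labels, prob_same)
    accept = prob_same >= threshold
    accuracy = float(np.mean(accept == np.asarray(labels, dtype=bool)))
    return auc, accuracy
\end{verbatim}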
Apparently, although OAHU{} performs much better than baselines, it may still fail in some cases where severe concept drift, e.g. aging and shave, are present.\n\n\n\n\\subsection{Image Retrieval}\n\\subsubsection{Dataset and Experiment Setup}\nTo demonstrate the performance of OAHU{} on image retrieval task, we show experiments on two benchmark datasets, which are CARS-196\\footnote{https:\/\/ai.stanford.edu\/~jkrause\/cars\/car\\_dataset.html}~\\cite{KrauseStarkDengFei-Fei} and CIFAR-100\\footnote{https:\/\/www.cs.toronto.edu\/~kriz\/cifar.html}~\\cite{Krizhevsky09learningmultiple}. \nThe CARS-196 dataset is divided into $8,144$ training images and $8,401$ test images, where each class has been split roughly in a 50-50 ratio. \nSimilarly, we randomly split CIFAR-100 into $30,000$ training images and $30,000$ test images by equally dividing images of each class. \nFor each image in CARS-196 and CIFAR-100, its deep feature, i.e., the output of the last convolutional layer, is extracted from a VGG-19 model pretrained on the ImageNet~\\cite{ILSVRC15} dataset. \nDetails of these 2 datasets are listed in the ``Image Retrieval'' part of Table~\\ref{tab:dataset}. \nWe form the development set of each dataset by including all its training images.\nFor each dataset, hyper-parameters of baseline approaches were set based on values reported by the authors, and fine-tuned via 10-fold cross-validation on the development set. In OAHU{}, we set $\\beta=0.99$, $L=5$, $S_{hidden}=100$, $s=0.1$, $\\eta=0.3$ and $S_{emb}=50$ as default. The value of $\\tau$ in OAHU{} was found via 10-fold cross-validation on the development set. \n\n\\subsubsection{Constraint Generation and Evaluation Metric}\nWith the training images in the development set, we generate the training constraints in the same way as Section~\\ref{subsubsec: escg}. However, in this experiment, we sample $50,000$ constraints ($25,000$ seeds) for each dataset due to the vast amount of classes included in them. The Recall@K metric~\\cite{JegouDS11} is applied for performance evaluation. Each test image (query) first retrieves K most similar images from the test set and receives a \nscore of 1 if an image of the same class is retrieved among these K images and 0 otherwise. Recall@K averages this score over all the images.\n\n\\subsubsection{Analysis}\n\n\\begin{figure}[t]\n\\centering\n\\subfloat[\\label{fig:recall_cars}]{\\includegraphics[width =0.5 \\columnwidth]{fig7a.jpg}}\\hfill\n\\subfloat[\\label{fig:weight_cars}]{\\includegraphics[width = 0.5\\columnwidth]{fig7b.jpg}}\n\\caption{(a) Recall@K score on the test split of CARS-196. (b) Metric weight distribution of OAHU{} on CARS-196.}\n\\vspace{-4mm}\n\\label{fig:img_retr_cars}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centering\n\\subfloat[\\label{fig:recall_cifar100}]{\\includegraphics[width =0.5 \\columnwidth]{fig8a.jpg}}\\hfill\n\\subfloat[\\label{fig:weight_cifar100}]{\\includegraphics[width = 0.5\\columnwidth]{fig8b.jpg}}\n\\caption{(a) Recall@K score on the test split of CIFAR-100. (b) Metric weight distribution of OAHU{} on CIFAR-100.}\n\\vspace{-4mm}\n\\label{fig:img_retr_cifar100}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centering\n\\subfloat[CARS-196]{\\includegraphics[width = 0.49\\columnwidth]{fig9a.jpg}}\\hfill\n\\subfloat[CIFAR-100]{\\includegraphics[width = 0.49\\columnwidth]{fig9b.jpg}}\n\\caption{Examples of successful and failure queries on the CARS-196 and CIFAR-100 datasets using our embedding. 
Images in the first column are query images and the rest are three most similar images. Best viewed on a monitor zoomed in.}\n\\vspace{-4mm}\n\\label{fig:retrieval_cases}\n\\end{figure}\n\nThe Recall@K scores on the test set of CARS-196 and CIFAR-100 are shown in Figure~\\ref{fig:recall_cars} and Figure~\\ref{fig:recall_cifar100}, respectively. We observe that OAHU{} outperforms all baseline approaches by providing significantly higher Recall@K scores. In addition, the Recall@K score of OAHU{} grows more rapidly than that provided by most baselines. Figure~\\ref{fig:weight_cars} and Figure~\\ref{fig:weight_cifar100} illustrate the metric weight distribution of OAHU{} on CARS-196 and CIFAR-100 respectively. Compared to the baselines that are only linear models, OAHU{} takes full advantage of non-linear metrics to improve the model expressiveness. Figure~\\ref{fig:retrieval_cases} shows some examples of successful and failure queries on both datasets.\nDespite the huge change in the viewpoint, configuration, and illumination, our method can successfully retrieve examples from the same class and most retrieval failures come from subtle visual difference among images of different classes.\n\n\\subsection{Sensitivity of Parameters}\n\n\nThe three main parameters in OAHU{} is the control parameter $\\tau$. \nWe vary $\\tau$ from 0.1 to 0.9 and $S_{emb}$ from 10 to 100 to study their effect on the classification performance. \nWe observe that both classification accuracy (1-error rate) and $F_1$ score significantly drop if $\\tau$ is greater than 0.6 but remain unchanged with various $\\tau$ values if $\\tau\\leq 0.6$. This observation verifies Theorem~\\ref{theo:1} which states that classes are separated if $\\tau \\in (0,\\frac{2}{3})$. On the other hand, the classification accuracy of OAHU{} peaks at embedding size $S_{emb}=50$ where all semantic information are maintained. If we further increase the embedding size, noisy information is going to be introduced which degrades the classification performance. Therefore, we suggest to choose $\\tau=0.1$ and $S_{emb}=50$ as default.\n\n\n\\section{Introduction}\n\\label{Sec:introduction}\n\\input{introduction.tex}\n\n\\section{Related Work}\n\\label{Sec:background}\n\\input{background.tex}\n\n\\section{OAHU}\n\\label{Sec:approach}\n\\input{approach.tex}\n\n\\section{Empirical Evaluation}\n\\label{Sec:evaluation} \n\\input{experiment.tex}\n\n\\section{Limitations and Conclusions}\n\\input{conclusion}\n\\label{Sec:conclusion}\n\n\\section{ACKNOWLEDGMENTS}\nThis material is based upon work supported by NSF award numbers: DMS-1737978, DGE 17236021. OAC-1828467; ARO award number: W911-NF-18-1-0249; IBM faculty award (Research); and NSA awards.\n\n\\bibliographystyle{ACM-Reference-Format}\n\\balance\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn this paper, we propose to study totally geodesic submanifolds\ninside locally symmetric spaces. Let us start by fixing some\nnotation: $(X^n,Y^m)$ will always refer to a pair of compact locally\nsymmetric spaces of noncompact type, with $Y^m\\subset X^n$ a\ntotally geodesic submanifold. The spaces $X^n,Y^m$ will be locally\nmodelled on $G\/K$, $G^\\prime\/K^\\prime$ respectively, where $G,G^\\prime$ are a\npair of semisimple Lie groups, and $K,K^\\prime$ are a pair of maximal\ncompact subgroups in the respective $G,G^\\prime$. 
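A standard example to keep in mind throughout (spelled out here only for orientation, and taken up again in \fullref{Corollary1.4}): if $X^n$ is a closed real hyperbolic manifold, then it is locally modelled on
$$\mathbb H^{n}=SO_0(n,1)/SO(n),$$
so that $G=SO_0(n,1)$ (the identity component) and $K=SO(n)$, while a compact totally geodesic $Y^m\subset X^n$ is locally modelled on $\mathbb H^{m}=SO_0(m,1)/SO(m)$, with $G^\prime=SO_0(m,1)$ and $K^\prime=SO(m)$.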
Note that, since\n$Y^m\\subset X^n$ is totally geodesic, one can view $G^\\prime$ as a\nsubgroup of $G$, and hence one can take $K^\\prime = K\\cap G^\\prime$.\nWe will denote by $X_u=G_u\/K$, $Y_u=G_u^\\prime\/K^\\prime$ the nonnegatively\ncurved dual symmetric spaces to the nonpositively curved spaces $G\/K$,\n$G^\\prime\/K^\\prime$.\n\nNote that for a pair $(X^n,Y^m)$, the submanifold $Y^m$ is always\n{\\it homotopically\\\/} nontrivial. Indeed, the inclusion induces a\nmonomorphism on the level of fundamental groups. A more subtle\nquestion is whether the submanifold $Y^m$ is {\\it homologically\\\/}\nnontrivial, ie whether $[Y^m]\\neq 0 \\in H_m(X^n ;\\mathbb R)$ (or in\n$H_m(X^n ;\\mathbb Z)$). Our first result provides a criterion for\ndetecting when a totally geodesic submanifold $Y^m$ is homologically\nnontrivial (over $\\mathbb R$) in $X^n$.\n\n\n\\begin{Thm}\\label{Theorem1.1}\nLet $Y^m\\hookrightarrow X^n$ be a compact totally geodesic\nsubmanifold of noncompact type inside a compact locally symmetric\nspace of noncompact type, and denote by $\\rho$ the map on cohomology\n$H^m(X_u;\\mathbb R)\\rightarrow H^m(Y_u;\\mathbb R)\\simeq \\mathbb R$ induced by the\nembedding $Y_u\\hookrightarrow X_u$. Then we have the following:\n\\begin{itemize}\n\\item If $[Y^m]=0 \\in H_m(X^n; \\mathbb R)$ then the map $\\rho$ is identically zero.\n\\item If $\\rho$ is identically zero, and $m\\leq m(\\mathfrak g)$, where $m(\\mathfrak g)$ is the Matsushima\nconstant corresponding to the Lie algebra $\\mathfrak g$ of the Lie group $G$, then\nwe have that $[Y^m]=0 \\in H_m(X^n; \\mathbb R)$.\n\\end{itemize}\n\\end{Thm}\n\nOur proof of this first result is an adaptation of an argument of\nMatsushima \\cite{Mat} and relies on the existence of certain\ncompatible maps (the Matsushima maps) from the real cohomology of\nthe pair of nonnegatively curved duals $(X_u,Y_u)$ to the real\ncohomology of the nonpositively curved pair $(X^n,Y^m)$. It is\nreasonable to ask whether this map can be realized {\\it\ngeometrically}. Our second result, extending work of Okun\n\\cite{O1}, shows that this can sometimes be achieved {\\it\nrationally}:\n\n\\begin{Thm}\\label{Theorem1.2}\nAssume that $Y^m\\hookrightarrow X^n$ is a totally geodesic embedding\nof compact, locally symmetric spaces of noncompact type.\nFurthermore, assume that the map $G_u^\\prime\\hookrightarrow G_u$ induced\nby the inclusion $Y\\hookrightarrow X$ is a $\\pi_i$--isomorphism,\nfor $i< m$, and a surjection on $\\pi_m$. Then there exists a finite\ncover $\\bar X$ of $X^n$, and a connected lift $\\bar Y \\subset \\bar\nX$ of $Y^m$, with the property that there exists a {\\it tangential\nmap} of pairs $(\\bar X, \\bar Y) \\rightarrow (X_u, Y_u)$. 
If in\naddition we have $\\rk(G_u)=\\rk(K)$ and $\\rk(G_u^\\prime)=\\rk(K^\\prime)$, then the\nrespective tangential maps induce the Matsushima maps on cohomology.\n\\end{Thm}\n\nSince the tangent bundle of the submanifold $Y^m$ Whitney sum with\nthe normal bundle of $Y^m$ in $X^n$ yields the restriction of the\ntangent bundle of $X^n$ to the submanifold $Y^m$, this gives the\nimmediate:\n\n\\begin{Cor}\\label{Corollary1.3}\nUnder the hypotheses of \\fullref{Theorem1.2}, we have that the\npullback of the normal bundle of $Y_u$ in $X_u$ is stably equivalent\nto the normal bundle of $\\bar Y^m$ in $\\bar X^n$.\n\\end{Cor}\n\nIn the previous corollary, we note that if $2m+1\\leq n$, then these\ntwo bundles are in fact isomorphic (see for instance Husemoller \\cite[Chapter 8,\nTheorem 1.5]{H}).\n\nAn example where the hypotheses of the \\fullref{Theorem1.2} are satisfied\narises in the situation where $Y^m$, $X^n$ are real hyperbolic\nmanifolds. Specializing \\fullref{Corollary1.3} to this situation, we\nobtain:\n\n\\begin{Cor}\\label{Corollary1.4}\nLet $Y^m\\hookrightarrow X^n$ be a totally geodesic embedding, where\n$X^n$, $Y^m$ are compact hyperbolic manifolds, and assume that\n$2m+1\\leq n$. Then there exists a finite cover $\\bar X$ of $X^n$,\nand a connected lift $\\bar Y$ of $Y^m$, with the property that the\nnormal bundle of $\\bar Y$ in $\\bar X$ is trivial.\n\\end{Cor}\n\nWhile the hypotheses of \\fullref{Theorem1.2} are fairly technical, we\npoint out that there exist several examples of inclusions\n$Y^m\\hookrightarrow X^n$ satisfying the hypotheses of the theorem.\nThe proof of \\fullref{Corollary1.4}, as well as a discussion of some\nfurther examples will be included at the end of Section 4. Finally,\nwe will conclude the paper with various remarks and open questions\nin Section 5.\n\n\\vspace{-4pt}\n\\paragraph{Acknowledgements} \nThis research was partially conducted during the period when B\\,Schmidt was employed by the Clay Mathematics Institute as a Liftoff Fellow. The research of J\\,-F\\,Lafont was partly supported by the National Science Foundation under grant DMS - 0606002. The authors would like to thank the anonymous referee\nfor pointing out a simplification in our original proof of \\fullref{Proposition5.2}.\n\\vspace{-4pt}\n\n\n\n\n\\section{Background}\n\\vspace{-4pt}\n\nIn this section, we provide some discussion of the statements of our theorems. We also\nintroduce some of the ingredients that will be used in the proofs of our results.\n\\vspace{-4pt}\n\n\n\\subsection{Dual symmetric spaces}\n\\vspace{-4pt}\n\nLet us start by recalling the definition of dual symmetric spaces:\n\\vspace{-4pt}\n\n\\begin{Def}\nGiven a symmetric space $G\/K$ of noncompact type, we define the\n{\\it dual symmetric space\\\/} in the following manner. Let $G_\\mathbb C$\ndenote the complexification of the semisimple Lie group $G$, and\nlet $G_u$ denote the maximal compact subgroup in $G_\\mathbb C$. Since $K$\nis compact, under the natural inclusions $K\\subset G\\subset G_\\mathbb C$,\nwe can assume that $K\\subset G_u$ (up to conjugation). The symmetric\nspace dual to $G\/K$ is defined to be the symmetric space $G_u\/K$.\nBy abuse of language, if $X=\\Gamma \\backslash G\/K$ is a locally\nsymmetric space modelled on the symmetric space $G\/K$, we will say\nthat $X$ and $G_u\/K$ are dual spaces.\n\\end{Def}\n\\vspace{-4pt}\n\nNow assume that $Y^m\\hookrightarrow X^n$ is a totally geodesic\nsubmanifold, where both $Y^m$, $X^n$ are locally symmetric spaces of\nnoncompact type. 
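Two standard instances of the duality just defined, recorded here for orientation (both reappear in Section 4): for $G=SO_0(n,1)$ and $K=SO(n)$, so that $G/K=\mathbb H^{n}$, one has $G_\mathbb C=SO(n+1,\mathbb C)$ and $G_u=SO(n+1)$, and the dual space is $SO(n+1)/SO(n)=S^{n}$; for $G=\mathit{SU}(n,1)$ and $K=S(U(n)\times U(1))$, so that $G/K=\mathbb C\mathbb H^{n}$, one has $G_\mathbb C=\mathit{SL}(n+1,\mathbb C)$ and $G_u=\mathit{SU}(n+1)$, and the dual space is $\mathit{SU}(n+1)/S(U(n)\times U(1))=\mathbb C P^{n}$.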
Fixing a lift of $Y$, we have a totally geodesic\nembedding of the universal covers:\n$$G^\\prime\/K^\\prime =\\tilde Y\\hookrightarrow \\tilde X=G\/K$$\nCorresponding to this totally geodesic embedding, we get a natural commutative diagram:\n$$\\disablesubscriptcorrection\\xysavmatrix{G^\\prime \\ar[r] & G\\\\\nK^\\prime \\ar[r] \\ar[u] & K \\ar[u]}$$\nwhich, after passing to the complexification, and descending to the maximal compacts,\nyields a commutative diagram:\n$$\\disablesubscriptcorrection\\xysavmatrix{G^\\prime_u \\ar[r] & G_u\\\\\nK^\\prime \\ar[r] \\ar[u] & K \\ar[u]}$$\nIn particular, corresponding to the totally geodesic embedding $Y\\hookrightarrow X$,\nwe see that there is a totally geodesic embedding of the dual symmetric spaces $G_u^\\prime\/K^\\prime\n\\hookrightarrow G\/K$.\n\n\n\n\\subsection{Classifying spaces}\n\nFor $G$ a continuous group let $EG$ denote a contractible space\nwhich supports a free $G$--action. The quotient space, denoted $BG$,\nis called a {\\it classifying space\\\/} for principal $G$--bundles. This\nterminology is justified by the fact that, for any topological space\n$X$, there is a bijective correspondence between (1) isomorphism\nclasses of principal $G$--bundles over $X$, and (2) homotopy classes\nof maps from $X$ to $BG$. Note that the spaces $EG$ are only\ndefined up to $G$--equivariant homotopies, and likewise the spaces\n$BG$ are only defined up to homotopy. Milnor \\cite{Mil} gave a\nspecific construction, for a Lie group $G$, of a space homotopy\nequivalent to $BG$. The basic fact we will require concerning\nclassifying spaces is the following:\n\n\\begin{Thm} If $H$ is a closed subgroup of the Lie group $G$, then there exists a natural map\n$BH\\rightarrow BG$ between the models constructed by Milnor; furthermore this map is a fiber\nbundle with fiber the homogenous space $G\/H$.\n\\end{Thm}\n\n\\subsection{Okun's construction}\n\nOkun established \\cite[Theorem 5.1]{O1} the following nice result:\n\n\\begin{Thm}\\label{Theorem2.2} Let $X=\\Gamma \\backslash G\/K$ and $X_u=G_u\/K$ be dual symmetric spaces.\nThen there exists a finite sheeted cover $\\bar X$ of $X$ (ie a subgroup $\\bar \\Gamma$ of\nfinite index in $\\Gamma$, $\\bar X=\\bar \\Gamma \\backslash G\/K$), and a tangential map $k\\co\\bar X\n\\rightarrow X_u$.\n\\end{Thm}\n\nThis was subsequently used by Okun to exhibit exotic smooth\nstructures on certain compact locally symmetric spaces of\nnoncompact type \\cite{O2}, and by Aravinda--Farrell in their\nconstruction of exotic smooth structures on certain quaternionic\nhyperbolic manifolds supporting metrics of strict negative curvature\n\\cite{AF}. More recently, this was used by Lafont--Roy \\cite{LaR} to\ngive an elementary proof of the Hirzebruch proportionality principle\nfor Pontrjagin numbers, as well as (non)vanishing results for\nPontrjagin numbers of the Gromov--Thurston examples of manifolds with\nnegative sectional curvature.\n\nSince it will be relevant to our proof of the main theorem, we\nbriefly recall the construction of the finite cover that appears in\nOkun's argument for \\fullref{Theorem2.2}. 
Starting from the canonical\nprinciple fiber bundle\n$$\\Gamma \\backslash G\\rightarrow \\Gamma \\backslash G\/K=X$$\nwith structure group $K$ over the base space $X$, we can extend the\nstructure group to the group $G$, yielding the flat principle\nbundle:\n$$\\Gamma \\backslash G\\times_K G=G\/K \\times_\\Gamma G \\longrightarrow \\Gamma \\backslash G\/K=X$$\nFurther extending the structure group to $G_\\mathbb C$ yields a flat\nbundle with a complex linear algebraic structure group. A result of\nDeligne and Sullivan \\cite{DS} implies that there is a finite cover\n$\\bar X$ of $X$ where the pullback bundle is trivial; since $G_u$ is\nthe maximal compact in $G_\\mathbb C$, the bundle obtained by extending the\nstructure group from $K$ to $G_u$ is trivial as well. In terms of\nthe classifying spaces, this yields the commutative diagram:\n$$\\disablesubscriptcorrection\\xysavmatrix{ & G_u\/K \\ar[d]\\\\\n\\bar X \\ar[r] \\ar@{.>}[ur] \\ar[dr]_{\\simeq 0}& BK \\ar[d]\\\\\n & BG_u}$$\nUpon homotoping the bottom diagonal map to a point, one obtains that\nthe image of the horizontal map lies in the fiber above a point,\nie inside $G_u\/K$, yielding the dotted diagonal map in the above\ndiagram. Okun then proceeds to show that the map to the fiber is\nthe desired tangential map (since the pair of maps to $BK$ classify\nthe respective canonical $K$--bundles on $\\bar X$ and $G_u\/K$, and\nthe canonical $K$--bundles determine the respective tangent bundles).\n\n\n\\subsection{Matsushima's map}\n\nMatsushima \\cite{Mat} constructed a map on cohomology\n$j^*\\co H^*(G_u\/K; \\mathbb R)\\rightarrow H^*(X; \\mathbb R)$ whenever $X$ is a\ncompact locally symmetric space modelled on $G\/K$. We will require the\nfollowing fact concerning the Matsushima map:\n\n\\begin{Thm}[Matsushima \\cite{Mat}] The map $j^*$ is always injective.\nFurthermore, there exists a constant $m(\\mathfrak g)$ (called the Matsushima\nconstant) depending solely on the Lie algebra $\\mathfrak g$ of the Lie group\n$G$, with the property that the Matushima map $j^*$ is a surjection\nin cohomology up to the dimension $m(\\mathfrak g)$.\n\\end{Thm}\n\nThe specific value of the Matsushima constant for\nthe locally symmetric spaces that are K\\\"ahler can be found in \\cite{Mat}.\nWe also point out the following result of Okun \\cite[Theorem\n6.4]{O1}:\n\n\\begin{Thm}\\label{Theorem2.4} Let $X=\\Gamma \\backslash G\/K$ be a compact locally symmetric space,\nand $\\bar X$, $t\\co \\bar X \\rightarrow G_u\/K$ the finite cover and\ntangential map constructed in \\fullref{Theorem2.2}. If the groups $G_u$ and\n$K$ have equal rank, then the induced map $t^*$ on cohomology\ncoincides with Matsushima's map $j^*$.\n\\end{Thm}\n\n\n\n\n\n\\section{Detecting homologically essential submanifolds}\n\nIn this section, we provide a proof of \\fullref{Theorem1.1}, which gives a criterion for establishing\nwhen a totally geodesic submanifold $Y\\subset X$ in a locally symmetric space of noncompact\ntype, is homologically nontrivial.\n\n\\begin{proof}[Proof of \\fullref{Theorem1.1}]\nIn order to establish the theorem, we make use of differential\nforms. If a group $H$ acts on a smooth manifold $M$, we let $\\Omega\n^H(M)$ denote the complex of $H$--invariant differential forms on\n$M$. Let $X=\\Gamma \\backslash G\/K$, $Y=\\Lambda \\backslash G^\\prime\/K^\\prime$ be the pair of\ncompact locally symmetric spaces, and $X_u=G_u\/K$, $Y_u=G_u^\\prime\/K^\\prime$\nbe the corresponding dual spaces. 
We now consider the following\nfour complexes of differential forms: (1) $\\Omega ^{G}(G\/K)$, (2) $\\Omega\n^{G^\\prime}(G^\\prime\/K^\\prime)$, (3) $\\Omega ^\\Gamma(G\/K)$ and (4) $\\Omega\n^\\Lambda(G^\\prime\/K^\\prime)$.\n\nWe now observe that the cohomology of the first two complexes can be identified with the\ncohomology of\n$X_u$, $Y_u$ respectively. Indeed, we have the sequence of natural identifications:\n$$\\Omega ^{G}(G\/K)=H^*(\\mathfrak g,\\mathfrak t)=H^*(\\mathfrak g_u, \\mathfrak t)=\\Omega ^{G_u}(G_u\/K)$$\nThe first and third equalities come from the identification of the complex of harmonic\nforms with the relative Lie algebra cohomology. The second equality comes via the dual Cartan\ndecompositions: $\\mathfrak g=\\mathfrak t \\oplus \\mathfrak p$ and $\\mathfrak g_u=\\mathfrak t \\oplus i\\mathfrak p$. Since $X_u=G_u\/K$ is a compact closed\nmanifold, and $\\Omega ^{G_u}(G_u\/K)$ is the\ncomplex of harmonic forms on $X_u$, Hodge theory tells us that the cohomology of the\ncomplex $\\Omega ^{G_u}(G_u\/K)$ is just the cohomology of $X_u$.\nThe corresponding analysis holds for $\\Omega ^{G^\\prime}(G^\\prime\/K^\\prime)$.\n\nNext we note that the cohomology of the last two complexes can be\nidentified with the cohomology of $X$, $Y$ respectively. This just\ncomes from the fact that the projection $G\/K\\rightarrow \\Gamma \\backslash\nG\/K=X$ induces the isomorphism of complexes $\\Omega ^\\Gamma(G\/K)=\\Omega(X)$,\nand similarly for $Y$.\n\nNow observe that the four complexes fit into a commutative diagram\nof chain complexes:\n$$\\disablesubscriptcorrection\\xysavmatrix{\\Omega ^\\Lambda(G^\\prime\/K^\\prime) & \\Omega ^\\Gamma(G\/K)\\ar[l]_{~\\phi}\\\\\n\\Omega ^{G^\\prime}(G^\\prime\/K^\\prime) \\ar[u]^{j_Y} & \\Omega ^{G}(G\/K)\\ar[l]_{~\\psi}\n\\ar[u]_{j_X}}$$ Let us briefly comment on the maps in the diagram.\nThe vertical maps are obtained from the fact that $\\Gamma\\leq G$, so\nthat any $G$--invariant form can be viewed as a $\\Gamma$--invariant form,\nand similarly for $\\Lambda \\leq G^\\prime$.\n\nFor the horizontal maps, one observes that $G^\\prime\/K^\\prime\n\\hookrightarrow G\/K$ is an embedding, hence any form on $G\/K$\nrestricts to a form on $G^\\prime\/K^\\prime$. We also have the inclusion\n$\\Lambda\\leq \\Gamma$, and hence the restriction of a $\\Gamma$--invariant form on\n$G\/K$ yields a $\\Lambda$--invariant form on $G^\\prime\/K^\\prime$. This is the\nhorizontal map in the top row. One obtains the horizontal map in\nthe bottom row similarly.\n\nNow passing to the homology of the chain complexes, and using the\nidentifications discussed above, we obtain a commutative diagram in\ndimension $m=\\dim(Y)=\\dim(G_u^\\prime\/K^\\prime)$:\n$$\\disablesubscriptcorrection\\xysavmatrix{\\mathbb R \\simeq H^m(Y; \\mathbb R) & & H^m(X;\\mathbb R) \\ar[ll]_{\\phi^*}\\\\\n\\mathbb R \\simeq H^m(G_u^\\prime\/K^\\prime;\\mathbb R) \\ar[u]^{j_Y^*} & & H^m(G_u\/K;\\mathbb R)\n\\ar[ll]_{\\psi^*} \\ar[u]_{j_X^*}}$$ Note that the two vertical maps defined\nhere are precisely the Matsushima maps for the respective locally\nsymmetric spaces. Since Matsushima's map is always injective, and\nthe cohomology of $H^m(G_u^\\prime\/K^\\prime;\\mathbb R)$ and $H^m(Y; \\mathbb R)$ are both\none-dimensional, we obtain that $j_Y^*$ is an isomorphism. Likewise\n$j_X^*$ is always injective, and if $m\\leq m(\\mathfrak g)$ then $j_X^*$ is also\nsurjective (and hence $j_X^*$ is an isomorphism as well). 
This implies the \nfollowing two facts:\n\\begin{itemize}\n\\item If $\\phi^*$ is identically zero, then $\\psi^*$ is identically zero.\n\\item If furthermore $m\\leq m(\\mathfrak g)$, then both vertical maps are isomorphisms,\nand we have that $\\psi^*$ is identically zero if and only if $\\phi^*$ is identically zero.\n\\end{itemize}\nNow observe that\nboth of the horizontal maps coincide with the maps induced on\ncohomology by the respective inclusions $Y\\hookrightarrow X$ and\n$G_u^\\prime\/K^\\prime \\hookrightarrow G_u\/K$; indeed the maps are obtained by\nrestricting the forms defined on the ambient manifold to the\nappropriate submanifold. In particular, the map $\\psi^*$ coincides with\nthe map $\\rho$ that appears in the statement of our theorem. On the other hand, from \nthe Kronecker pairing, the map $\\phi^*$ is nonzero precisely when\n$[Y^m]\\neq 0\\in H_m(X;\\mathbb R)$. Combining these observations with the\ntwo facts in the previous paragraph completes the proof of \\fullref{Theorem1.1}.\n\\end{proof}\n\n\\begin{Rmk}\n(1)\\qua The Matsushima map is only defined on the real cohomology\n(since it passes through differential forms), and as a result, {\\it\ncannot} be used to obtain any information on torsion elements in\n$H^k(X^n;\\mathbb Z)$.\n\n(2)\\qua We remark that the proof of \\fullref{Theorem1.1} applies equally well to \nlower-dimen\\-sional cohomology (using the fact that Matsushima's map is injective\nin all dimensions), and gives the following lower-dimensional criterion. Assume\nthat\nthe map $H^k(X^n_u; \\mathbb R) \\rightarrow H^k(Y^m_u;\\mathbb R)$ has image containing a\nnonzero class $\\alpha$, and let $i(\\alpha)\\in H^k(Y^m; \\mathbb R)$ be the nonzero image class\nunder the Matsushima map. Then the homology class $\\beta\\in H_k(Y^m;\\mathbb R)$ dual\n(under the Kronecker pairing) to $i(\\alpha)$ has nonzero image in $H_k(X^n;\\mathbb R)$ under\nthe map induced by the inclusion $Y^m\\hookrightarrow X^n$.\n\\end{Rmk}\n\n\\section{Pairs of tangential maps}\nIn this section, we proceed to give a proof of \\fullref{Theorem1.2},\nestablishing the existence of pairs of tangential maps from the pair\n$(\\bar X,\\bar Y)$ to the pair $(X_u,Y_u)$.\n\n\\begin{proof}[Proof of \\fullref{Theorem1.2}]\nWe start out by applying\n\\fullref{Theorem2.2}, which gives us a finite cover $\\bar X$ of $X$ with the\nproperty that the natural composite map $\\bar X\\rightarrow\nBK\\rightarrow BG_u$ is homotopic to a point. Note that this map\nclassifies the principle $G_u$ bundle over $\\bar X$.\n\nNow let $\\bar Y\\hookrightarrow \\bar X$ be a connected lift of the\ntotally geodesic subspace $Y\\hookrightarrow X$. Observe that, by\nnaturality, we have a commutative diagram:\n$$\\disablesubscriptcorrection\\xysavmatrix@R=10pt{ & G_u^\\prime\/K^\\prime \\ar[rr] \\ar[dd] & & G_u\/K \\ar[dd]\\\\\n\\bar Y \\ar[rr] \\ar[dr] \\ar[ddr] & & \\bar X \\ar[dr] \\ar[ddr] & \\\\\n& BK^\\prime \\ar[rr] \\ar[d] & & BK \\ar[d] \\\\\nG_u\/G_u^\\prime \\ar[r] & BG_u^\\prime \\ar[rr] & & BG_u}$$ \nBy Okun's result,\nthe composite map $\\bar X\\rightarrow BG_u$ is homotopic to a point\nvia a homotopy $H\\co\\bar X \\times I\\rightarrow BG_u$. 
We would like\nto establish the existence of a homotopy $F\\co\\bar Y\\times I\n\\rightarrow BG_u^\\prime$ with the property that the following diagram\ncommutes:\n$$\\disablesubscriptcorrection\\xysavmatrix{\\bar Y\\times I \\ar[r]^{i\\times \\mathrm{Id}} \\ar[d]_F & \\bar X\\times I \\ar[d]^H\\\\\nBG_u^\\prime \\ar[r] & BG_u}$$\nIndeed, if we had the existence of such a compatible pair of\nhomotopies, then one can easily complete the argument: since each of\nthe vertical columns in the diagram are fiber bundles, we see that\nafter applying the pair of compatible homotopies, the images of\n$(\\bar X,\\bar Y)$ lies in the pair of fibers $(G_u\/K, G_u^\\prime\/K^\\prime)$.\nThis yields a pair of compatible lifts, yielding a commutative\ndiagram of the form:\n$$\\disablesubscriptcorrection\\xysavmatrix{ & G_u^\\prime\/K^\\prime \\ar[rr] \\ar[dd] & & G_u\/K \\ar[dd]\\\\\n\\bar Y \\ar[ru] \\ar[rr] \\ar[dr] & & \\bar X \\ar[dr] \\ar[ru] & \\\\\n & BK^\\prime \\ar[rr] & & BK } $$\nSince the pair of maps to $BK^\\prime$ (respectively $BK$) classify the\ncanonical $K^\\prime$--bundle structures on $\\bar Y$, $G_u^\\prime\/K^\\prime$\n(respectively the canonical $K$--bundle structure on $\\bar X$,\n$G_u\/K$), and since these bundles canonically determine the tangent\nbundles of these spaces \\cite[Lemma 2.3]{O1}, commutativity of the\ndiagram immediately\ngives us tangentiality of the maps $\\bar Y \\rightarrow G_u^\\prime\/K^\\prime$\n(respectively, of the map $\\bar X\\rightarrow G_u\/K$).\n\n\nIn order to show the existence of the compatible homotopy $F\\co\\bar\nY\\times I \\rightarrow BG_u^\\prime$, we start by observing that the\nbottom row of the commutative diagram is in fact a fibration\n$$ G_u\/G_u^\\prime \\rightarrow BG_u^\\prime \\rightarrow BG_u.$$\nSince $\\bar Y$ is embedded in $\\bar X$, we see that the homotopy $H$\ninduces by restriction a homotopy $H\\co\\bar Y \\times I \\rightarrow\nBG_u$. Since the bottom row is a fibration, we may lift this\nhomotopy to a homotopy $\\tilde H\\co \\bar Y\\times I\\rightarrow\nBG_u^\\prime$, with the property that $\\tilde H_0$ coincides with the map\n$\\bar Y \\rightarrow BG_u^\\prime$ which classifies the canonical\nprinciple $G_u^\\prime$ bundle over $\\bar Y$. Unfortunately, we do not\nknow, a priori, that the time one map $\\tilde H_1$ maps $\\bar Y$ to\na point in $BG_u^\\prime$. Indeed, we merely know that $\\tilde H_1(\\bar\nY)$ lies in the preimage of a point in $BG_u$, ie in the fiber\n$G_u\/G_u^\\prime$. Our next goal is to establish that the map $\\tilde\nH_1\\co \\bar Y \\rightarrow G_u\/G_u^\\prime$ is nullhomotopic. If this were\nthe case, we could concatenate the homotopy $\\tilde H$ taking $\\bar\nY$ into the fiber $G_u\/G_u^\\prime$ with a homotopy contracting $\\tilde\nH_1\\co\\bar Y \\rightarrow G_u\/G_u^\\prime$ to a point {\\it within the\nfiber}. This would yield the desired homotopy $F$.\n\\medskip\n\nIn order to establish that $\\tilde H_1\\co \\bar Y \\rightarrow\nG_u\/G_u^\\prime$ is nullhomotopic, we merely note that we have the\nfibration\n$$G_u^\\prime\\rightarrow G_u\\rightarrow G_u\/G_u^\\prime.$$\nFrom the corresponding long exact sequence in homotopy groups, and\nusing the fact that the inclusion $G_u^\\prime\\hookrightarrow G_u$\ninduces a $\\pi_i$--isomorphism for $i< m$ and a surjection on\n$\\pi_m$, we immediately obtain that $\\pi_i(G_u\/G_u^\\prime)\\cong 0$ for\n$i\\leq m$. 
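For the reader's convenience, the relevant portion of that long exact sequence is
$$\pi_{i}(G_u^\prime)\rightarrow \pi_{i}(G_u)\rightarrow \pi_{i}(G_u/G_u^\prime)\rightarrow \pi_{i-1}(G_u^\prime)\rightarrow \pi_{i-1}(G_u).$$
For $i\leq m$ the first map is surjective (an isomorphism when $i<m$), so by exactness the map $\pi_{i}(G_u)\rightarrow \pi_{i}(G_u/G_u^\prime)$ is zero and the connecting map $\pi_{i}(G_u/G_u^\prime)\rightarrow \pi_{i-1}(G_u^\prime)$ is injective; since $\pi_{i-1}(G_u^\prime)\rightarrow \pi_{i-1}(G_u)$ is injective (indeed an isomorphism, as $i-1<m$), exactness also forces the image of the connecting map to vanish, giving $\pi_{i}(G_u/G_u^\prime)\cong 0$.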
Since the dimension of the manifold $\\bar Y$ is $m$, we\ncan now conclude that the map $\\tilde H_1$ is nullhomotopic.\nIndeed, taking a cellular decomposition of $\\bar Y$ with a single\n$0$--cell, one can recursively contract the image of the $i$--skeleton\nto the image of the $0$--cell: the obstruction to doing so lies in\n$\\pi_i(G_u\/G_u^\\prime)$, which we know vanishes. This yields that\n$\\tilde H_1$ is nullhomotopic, which by our earlier discussion,\nimplies the existence of a tangential map of pairs $(\\bar X,\\bar Y)\n\\rightarrow (X_u,Y_u)$. Finally, to conclude we merely point out the\nOkun has shown (see \\fullref{Theorem2.4}) that in the case where the rank of\n$G_u$ equals the rank of $K$, the tangential map he constructed\ninduces the Matsushima map on cohomology. Our construction\nrestricts to Okun's construction on both $X$ and $Y$, and from the\nhypothesis on the ranks, so we conclude that the tangential map of\npairs induces the Matsushima map on the cohomology of each of the\ntwo spaces. This concludes the proof of \\fullref{Theorem1.2}.\n\\end{proof}\n\\medskip\n\n\\begin{Rmk}\nWe observe that the argument given above, for the case of a pair $(X^n, Y^m)$,\ncan readily be adapted to deal with any descending chain of totally geodesic\nsubmanifolds. More precisely, assume that we have a series of totally\ngeodesic embeddings $X^n=Y_k \\supset \\cdots \\supset Y_2\\supset Y_1$, with\nthe property that each $Y_j$ is a closed locally symmetric space of noncompact type.\nFurther assume that, if $(Y_j)_u = (G_j)_u\/K_j$ denotes the compact duals,\nthe maps $(G_j)_u \\hookrightarrow (G_{j+1})_u$\ninduced by the inclusions $Y_j\\hookrightarrow Y_{j+1}$ are $\\pi_i$ isomorphisms\nfor $i< \\dim(Y_j)$ and a surjection on $\\pi_i$ ($i=\\dim(Y_j)$). Then there exists\na finite cover $\\bar X^n=\\bar Y_k$ of $X^n$, and connected lifts $\\bar Y_j$ of $Y_j$,\nhaving the property that:\n\\begin{itemize}\n\\item we have containments $\\bar Y_j\\subset \\bar Y_{j+1}$, and\n\\item there exists a map $(\\bar Y_k, \\ldots, \\bar Y_1)\\rightarrow \\big((Y_k)_u,\n\\ldots ,(Y_1)_u\\big)$ which restricts to a tangential map from each $\\bar Y_j$ to the\ncorresponding $(Y_j)_u$.\n\\end{itemize}\nThis is shown by induction on the length of a descending chain. We leave the details\nto the interested reader.\n\\end{Rmk}\n\nWe now proceed to show \\fullref{Corollary1.4}, that is to say, that in the\ncase where $X^n$ is real hyperbolic, and $Y^m\\hookrightarrow X^n$ is\ntotally geodesic, there exists a finite cover $\\bar X$ of $X^n$ and\na connected lift $\\bar Y$ of $Y^m$, with the property that the\nnormal bundle of $\\bar Y$ in $\\bar X$ is trivial.\n\n\\begin{proof}[Proof of \\fullref{Corollary1.4}] We first observe that, provided one could verify the\nhypotheses of \\fullref{Theorem1.2} for the pair $(X^n,Y^m)$, the corollary\nwould immediately follow. Note that in this case, the dual spaces\n$X_u$ and $Y_u$ are spheres of dimension $n$ and $m$ respectively.\nThis implies that the totally geodesic embedding $Y_u\\hookrightarrow\nX_u$ is in fact a totally geodesic embedding $S^m\\hookrightarrow\nS^n$, forcing the normal bundle to $Y_u$ in $X_u$ to be trivial. But\nnow \\fullref{Corollary1.3} to the \\fullref{Theorem1.2} immediately yields \\fullref{Corollary1.4}.\n\nSo we are left with establishing the hypotheses of \\fullref{Theorem1.2} for\nthe pair $(X^n,Y^m)$. 
We observe that in this situation we have\nthe groups $G_u \\cong SO(n+1)$, and $G_u^\\prime \\cong SO(m+1)$.\nFurthermore, there is essentially a unique totally geodesic\nembedding $S^m\\hookrightarrow S^n$, hence we may assume that the\nembedding $G_u^\\prime\\hookrightarrow G_u$ is the canonical one. But now\nwe have the classical facts that (1) the embeddings\n$SO(m+1)\\hookrightarrow SO(n+1)$ induce isomorphisms on $\\pi_i$ for\n$i< m$ and (2) that the embedding induces a surjection\n$\\pi_m(SO(m+1))\\rightarrow \\pi_m(SO(n+1))$. Indeed, this is\nprecisely the range of dimensions where the homotopy groups\nstabilize \\cite{Mil2}. This completes the verification of the\nhypotheses, and hence the proof of \\fullref{Corollary1.4}.\n\\end{proof}\n\nWe now proceed to give an example of an inclusion\n$Y^m\\hookrightarrow X^n$ satisfying the hypotheses of our theorem.\nOur spaces will be modelled on complex hyperbolic spaces, namely we\nhave:\n\\begin{align*}Y^{2m}&= \\Lambda\\backslash \\mathbb C\\mathbb H ^m = \\Lambda\\backslash \\mathit{SU}(m,1) \/ S(U(m)\\times U(1))\\\\\nX^{2n}& = \\Gamma\\backslash \\mathbb C\\mathbb H ^n = \\Gamma\\backslash \\mathit{SU}(n,1) \/ S(U(n)\\times U(1))\\end{align*}\nTo construct such pairs, one starts with the standard inclusion of $\\mathit{SU}(m,1)\\hookrightarrow \\mathit{SU}(n,1)$,\nwhich induces a totally geodesic embedding $\\mathbb C\\mathbb H ^m\\hookrightarrow \\mathbb C\\mathbb H ^n$. One can\nnow construct explicitly (by arguments similar to those in Gromov and Piatetski-Shapiro \\cite{GP}) an arithmetic uniform lattice\n$\\Lambda \\leq \\mathit{SU}(m,1)$ having an extension\nto an arithmetic uniform lattice $\\Gamma \\leq \\mathit{SU}(n,1)$. Quotienting out by these lattices gives the\ndesired pair.\n\nLet us now consider these examples in view of our \\fullref{Theorem1.2}. First of all, we have that the\nrespective complexifications are $G^\\prime_\\mathbb C=\\mathit{SL}(m+1, \\mathbb C)$ and $G_\\mathbb C=\\mathit{SL}(n+1,\\mathbb C)$, with\nthe natural embedding\n$$G^\\prime_\\mathbb C =\\mathit{SL}(m+1,\\mathbb C) \\hookrightarrow \\mathit{SL}(n+1,\\mathbb C)=G_\\mathbb C.$$\nLooking at the respective maximal compacts, we see that $G^\\prime_u=\\mathit{SU}(m+1)$, $G_u=\\mathit{SU}(n+1)$, and the inclusion is\nagain the natural embeddings\n$$G^\\prime_u = \\mathit{SU}(m+1) \\hookrightarrow \\mathit{SU}(n+1) = G_u.$$\nHence the homotopy condition in our theorem boils down to asking\nwhether the natural embedding $\\mathit{SU}(m+1)\\hookrightarrow \\mathit{SU}(n+1)$\ninduces isomorphisms on the homotopy groups $\\pi_i$, where $i\\leq\n\\dim(Y^{2m})=2m$. But it is a classical fact that the natural\nembedding induces isomorphisms in all dimensions $i<2(m+1)=2m+2$,\nsince this falls within the stable range for the homotopy groups\n(and indeed, one could use complex Bott periodicity to compute the\nexact value of these homotopy groups \\cite{Mil2}). Finally, we observe\nthat in this context, the dual spaces are complex projective spaces, and the\nembedding of dual spaces is the standard embedding $\\mathbb C`P^m\\hookrightarrow \\mathbb C`P^n$.\nIt is well known that for the standard embedding, we have that the induced map on\ncohomology $H^*(\\mathbb C`P^n) \\rightarrow H^*(\\mathbb C`P^m)$ is surjective on cohomology.\nOur \\fullref{Theorem1.1} now tells us that $Y^{2m}\\hookrightarrow X^{2n}$ is homologically\nnontrivial. 
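For completeness, the surjectivity invoked above can be read off from the ring structure: with real coefficients the restriction map takes the form
$$H^{*}(\mathbb C P^{n};\mathbb R)\cong \mathbb R[\alpha]/(\alpha^{n+1})\longrightarrow \mathbb R[\alpha]/(\alpha^{m+1})\cong H^{*}(\mathbb C P^{m};\mathbb R),$$
where $\alpha$ denotes the hyperplane class in degree two; the standard embedding restricts $\alpha$ to $\alpha$, and since every class on the right-hand side is a multiple of a power of $\alpha$, the restriction map is surjective.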
Furthermore, we note that for these manifolds, $\\rk(G_u)=\\rk(K)$ and\n$\\rk(G_u^\\prime)=\\rk(K^\\prime)$, and hence \\fullref{Theorem1.2} tells us that\nthe cohomological map from the proof of \\fullref{Theorem1.1}\ncan be (rationally) realized via a tangential map of pairs.\n\n\\section{Concluding remarks}\n\nWe conclude this paper with a few comments and questions. First of all, in\nview of our \\fullref{Theorem1.1}, it is reasonable to ask for the converse:\n\n\\medskip\n{\\bf Question}\\qua Given an element $\\alpha \\in H_m(X^n\n;\\mathbb R)$, is there an $m$--dimensional totally geodesic submanifold\n$Y^m$ with $[Y^m]=\\alpha$?\n\\medskip\n\nA cautionary example for the previous question is provided by the following:\n\n\\begin{Prop}\nLet $X$ be a compact hyperbolic $3$--manifold that fibers over $S^1$, with\nfiber a surface $F$ of genus $\\geq 2$. Then the homology class\nrepresented by $[F]\\in H_2(X; \\mathbb Z)$ cannot be represented by a\ntotally geodesic submanifold.\n\\end{Prop}\n\n\\begin{proof}\nAssume that there were such a totally geodesic submanifold $Y\n\\subset X$, and observe that since $Y$ is totally geodesic, we have\nan embedding $\\pi_1(Y)\\hookrightarrow \\pi_1(X)$. Furthermore, since\n$X$ fibers over $S^1$ with fiber $F$, we also have a short exact\nsequence: \n$$ 0\\rightarrow \\pi_1(F)\\rightarrow \\pi_1(X)\\rightarrow \\mathbb Z \\rightarrow 0 $$\nOur goal is to show that $\\pi_1(Y)\\subset \\pi_1(F)$. Indeed, if we\ncould establish this containment, one could then argue as follows:\nsince $Y$ is a compact surface, covering space theory implies that\n$\\pi_1(Y)\\subset \\pi_1(F)$ is a finite index subgroup. Now pick a\npoint $x$ in the universal cover $\\tilde X\\cong \\mathbb H^3$, and consider\nthe subset $\\Lambda_Y \\subset \\partial ^\\infty \\mathbb H^3=S^2$ obtained\nby taking the closure of the $\\pi_1(Y)$--orbit of $x$. Since $Y$ is\nassumed to be totally geodesic, the subset $\\Lambda\\subset S^2$ is a\ntamely embedded $S^1$ (identified with the boundary at infinity of a\nsuitably chosen totally geodesic lift $\\tilde Y \\cong \\mathbb H^2$ of\n$Y$). On the other hand, since $\\pi_1(Y)$ has finite index in\n$\\pi_1(F)$, we have that $\\Lambda_Y$ must coincide with $\\Lambda_F$,\nthe closure of the $\\pi_1(F)$--orbit of $x$. But the latter, by a\nwell-known result of Cannon--Thurston is known to be the entire\nboundary at infinity (see for instance Mitra \\cite{Mit}).\n\nSo we are left with establishing that $\\pi_1(Y)\\subset \\pi_1(F)$. In\norder to see this, let us consider the cohomology class $\\alpha_F\\in\nH^1(X; \\mathbb Z)$ which is Poincar\\'e dual to the class $[F]\\in H_2(X;\n\\mathbb Z)$. Now recall that the evaluation of the cohomology class\n$\\alpha_F$ on an element $[\\gamma]\\in H_1(X;\\mathbb Z)$ can be interpreted\ngeometrically as the intersection number of the representing curve\n$\\gamma$ with the surface $F$. Furthermore, we have that the group\n$H_1(X;\\mathbb Z)$ is generated by the image of $H_1(F;\\mathbb Z)$, under the\ninclusion $F\\hookrightarrow X$, along with an element $[\\eta]\\in\nH_1(X; \\mathbb Z)$ mapping to $[S^1]\\in H_1(S^1;\\mathbb Z)$. Here $\\eta$ is\nchosen to be a closed loop in $M$ with the property that $\\eta$ maps\nhomeomorphically to the base $S^1$ (preserving orientations) under\nthe projection map. 
This gives us the following two facts:\n\\begin{itemize}\n\\item The class $\\alpha_F$ evaluates to $1$ on the element $[\\eta]$, since\n$F\\cap \\eta$ is a single transverse point.\n\\item The class $\\alpha_F$ evaluates to zero on the image of $H_1(F;\\mathbb Z)$ in\nthe group $H_1(X;\\mathbb Z)$. This follows from the fact that\nthe surface $F$ has trivial normal bundle in $X$, allowing any curve\nin $F$ representing (the image of) an element in $H_1(F; \\mathbb Z)$ to be homotoped to a\ncurve disjoint from $F$.\n\\end{itemize}\nFurthermore, since we are assuming that $[Y]=[F]\\in H_2(X;\\mathbb Z)$, we know that\nwe have an identification of Poincar\\'e duals $\\alpha_Y=\\alpha_F$.\n\nNow let us assume that $\\pi_1(Y)$ is {\\it not\\\/} contained in $\\pi_1(F)$, and\nobserve that this implies that there exists a closed loop $\\gamma\\subset Y$ having\nthe property that under the composition $Y\\hookrightarrow X\\rightarrow S^1$,\nthe class $[\\gamma]\\in H_1(Y;\\mathbb Z)$ maps to $k\\cdot [S^1]\\in H_1(S^1; \\mathbb Z)$ (and\n$k\\neq 0$). We now proceed\nto compute, in two different ways, the evaluation of the cohomology classes $\\alpha_Y\n=\\alpha _F$ on a suitable multiple of the homology class $[\\gamma]$.\n\nFirstly, from the comments above, we can write $[\\gamma]$ as the sum of\n$k\\cdot [\\eta]$, along with an element $\\beta$, where $\\beta$ lies in the image\nof $H_1(F; \\mathbb Z)$. By linearity of\nthe Kronecker pairing, along with the two facts from the previous paragraph, we\nobtain:\n$$\\alpha _F([\\gamma])=\\alpha_F(\\beta) + k\\alpha_F([\\eta]) = k \\neq 0$$\nSecondly, observe that $Y$ is assumed to be embedded in $X$, and\nrepresents the nonzero homology class $[Y]=[F]\\in H_2(X;Z)$. This\nimplies that $Y$ must be orientable, and hence has trivial normal\nbundle in $X$. In particular, the curve $\\gamma \\subset Y$ can be\nhomotoped (within $X$) to have image disjoint from $Y$. Since the\ninteger $\\alpha _Y([\\gamma])$ can be computed geometrically as the\nintersection number of the curve $\\gamma$ with $Y$, we conclude that\n$\\alpha_Y([\\gamma])=0$.\n\nCombining the two observations above, we see that\n$0=\\alpha_Y([\\gamma])= \\alpha_F([\\gamma])\\neq 0$, giving us the\ndesired contradiction. This completes the proof of the proposition.\n\\end{proof}\n\nWe observe that Thom \\cite{T} has shown that in dimensions $0\\leq\nk\\leq 6$ and $n-2\\leq k\\leq n$, every integral homology class can be\nrepresented by an immersed submanifold. In general however, there\ncan exist homology classes which are {\\it not\\\/} representable by\nsubmanifolds (see for instance Bohr, Hanke and\nKotschick \\cite{BHK}). The question above asks for a more stringent\ncondition, namely that the immersed submanifold in question be\ntotally geodesic. We believe that the weaker question is also of\nsome interest, namely:\n\n\\medskip\n{\\bf Question}\\qua Find an example $X^n$ of a compact locally\nsymmetric space of noncompact type, and a homology class in some\n$H_k(X^n; \\mathbb Z)$ which {\\it cannot\\\/} be represented by an immersed\nsubmanifold.\n\\medskip\n\nNow our original motivation for looking at totally geodesic submanifolds\ninside locally symmetric spaces was the desire to exhibit lower-dimensional\nbounded cohomology classes. 
In \\cite{LaS}, the authors showed that for\nthe fundamental group $\\Gamma$ of\na compact locally symmetric space of noncompact type $X^n$, the comparison\nmap from bounded cohomology to ordinary cohomology:\n$$H^*_b(\\Gamma)\\rightarrow H^*(\\Gamma)$$\nis {\\it surjective\\\/} in dimension $n$. The proof actually passed\nthrough the dual formulation, and showed that the $L^1$\n(pseudo)-norm of the fundamental class $[X^n]\\in H_n(X^n; \\mathbb R)$ is\nnonzero. Now given a totally geodesic embedding $Y\\hookrightarrow\nX$ of the type considered in this paper, it is tempting to guess\nthat the homology class $[Y]$ also has nonzero $L^1$ (pseudo)-norm.\nOf course, this naive guess fails, since one can find examples where\n$[Y]=0\\in H_m(X^n;\\mathbb R)$. The problem is that despite the fact that\nthe {\\it intrinsic\\\/} $L^1$ (pseudo)-norm of $[Y]$ is nonzero, the\n{\\it extrinsic\\\/} $L^1$ (pseudo)-norm of $[Y]$ is zero. In other\nwords, one can represent the fundamental class of $Y$ {\\it more\nefficiently} by using simplices that actually {\\it do not lie in\n$Y$} (despite the fact that $Y$ is totally geodesic). The authors were unable to answer the following:\n\n\\medskip\n{\\bf Question}\\qua Assume that $Y$ and $X$ are compact locally\nsymmetric spaces of noncompact type, that $Y\\subset X$ is a totally\ngeodesic embedding, and that $Y$ is orientable with $[Y]\\neq 0 \\in\nH_m(X^n; \\mathbb R)$. Does it follow that the dual cohomology class\n$\\beta\\in H^m(X^n; \\mathbb R)$ (via the Kronecker pairing) has a bounded\nrepresentative?\n\\medskip\n\nNow one situation in which nonvanishing of the $L^1$ (pseudo)-norm would\nbe preserved is the case where $Y\\hookrightarrow X$ is actually a retract of $X$.\nHence one can ask the following:\n\n\\medskip\n{\\bf Question}\\qua If $Y\\subset X$ is a compact totally\ngeodesic proper submanifold inside a locally symmetric space of\nnoncompact type, when is $Y$ a retract of $X$?\n\n\\begin{Rmk}\n(1)\\qua In the case where $X$ is a (nonproduct) higher rank locally\nsymmetric space of noncompact type, one cannot find a proper\ntotally geodesic submanifold $Y\\subset X$ which is a retract of $X$.\nIndeed, if there were such a submanifold, then the morphism\n$\\rho\\co\\pi_1(X)\\rightarrow \\pi_1(Y)$ induced by the retraction would have\nto be surjective. By Margulis' normal subgroup theorem \\cite{Mar},\nthis implies that either (1) $\\ker(\\rho)$ is finite, or (2)\nthe image $\\pi_1(Y)$ is finite. Since $Y$ is locally symmetric\nof noncompact type, $\\pi_1(Y)$ cannot be finite, and hence we must\nhave finite $\\ker(\\rho)$. But $\\ker(\\rho)$ is a subgroup of the torsion-free\ngroup $\\pi_1(X)$, hence must be trivial. This forces $\\pi_1(X)\\cong \\pi_1(Y)$,\nwhich contradicts the fact that the cohomological dimension of $\\pi_1(X)$\nis $\\dim (X)$, while the cohomological dimension of $\\pi_1(Y)$ is $\\dim (Y)<\n\\dim (X)$. This implies that no such morphism exists, and hence no such\nsubmanifold exists. The authors thank C\\,Leininger for pointing out this simple\nargument.\n\n(2)\\qua In the case where $X$ has rank one, the question is more delicate. 
Some\nexamples of such retracts can be found in a paper by Long and Reid \\cite{LoR}.\nWe remark that in this case, the application to bounded cohomology is not too\ninteresting, as Mineyev \\cite{Min} has already shown that the comparison map\nin this situation is surjective in all dimensions $\\geq 2$.\n\\end{Rmk}\n\nFinally, we conclude this paper by pointing out that the Okun maps, \nwhile easy to define, are geometrically\nvery complicated. We illustrate this with a brief comment on the \n{\\it singularities\\\/} of the tangential maps\nbetween locally symmetric spaces of noncompact type and their\nnonnegatively curved dual spaces. More precisely, for a smooth map\n$f\\co X\\rightarrow X_u$, consider the subset $\\Sing(f)\\subset\nX_u$ consisting of points $p\\in X_u$ having the property that there\nexists a point $q\\in X$ satisfying $f(q)=p$, and $\\ker(df(q))\\neq 0$\n(where $df\\co T_qX\\rightarrow T_pX_u$ is the differential of $f$ at the\npoint $q$). We can now ask how complicated the set $\\Sing(h)\\subset X_u$ \ngets for $h$ a smooth map within the homotopy class of the Okun maps.\n\n\\begin{Prop}\\label{Proposition5.2}\nLet $X$ be a closed, locally symmetric space of noncompact type, \n$X_u$ the nonnegatively curved dual space, $t\\co\\bar\nX\\rightarrow \\smash{X_u}$ the Okun map from a suitable finite\ncover $\\bar X$ of $X$, and $h$ an arbitrary smooth map in the\nhomotopy class of $t$. Consider an arbitrary embedded compact\nsubmanifold $N^k\\subset X$, having the property that \\textnormal{(1)} $[N^k]\\neq 0\\in\nH^k(X_u; \\mathbb Z)$, and \\textnormal{(2)} $N^k$ is simply connected.\nThen we have that $\\Sing(h)\\cap N^k\\neq\n\\emptyset$.\n\\end{Prop}\n\n\\begin{proof}\nLet $h$ be a smooth map homotopic to $t$, and assume that\n$\\Sing(h)\\cap N^k=\\emptyset$. This implies that $dh$ has full\nrank at every preimage point of $N^k\\subset X_u$. Choose a\nconnected component $S\\subset \\bar X$ of the set $h^{-1}(N^k)$,\nand observe that the restriction of $h$ to $S$ provides a local\ndiffeomorphism (and hence a covering map) to $N^k$. \nSince $N^k$ is simply connected, $S$ is diffeomorphic to\n$N^k$, and $h$ restricts to a diffeomorphism from $S\\subset \\bar X$\nto $N^k\\subset X_u$.\n\nNext we observe that the homology class represented by $[S]\\in\nH_k(\\bar X;\\mathbb Z)$ is nonzero, since this class has image, under $h$,\nthe homology class $[N_k]\\neq 0\\in H_k(X_u; \\mathbb Z)$. But \nobserve that $S\\subset \\bar\nX$, being simply connected, supports a cellular decomposition with a \nsingle $0$--cell and {\\it no $1$--cells\\\/}. In particular,\nthe submanifold $S$ is homotopic to a point in the aspherical space $\\bar X$, \nsince one can recursively contract all the cells of dimension $\\geq 2$ down to \nthe $0$--cell. This forces $[S]=0\\in H_k(\\bar X;\\mathbb Z)$, giving us the desired \ncontradiction.\n\\end{proof}\n\nWe point out a simple example illustrating this last proposition. If $X^{2n}$ is \ncomplex hyperbolic, then $X_u=\\mathbb C`P^n$, and one can take for $N^2=\\mathbb C`P^1$\nthe canonically embedded complex projective line. Topologically, $N^2$ is \ndiffeomorphic to $S^2$, hence is simply connected, while homologically we\nhave that $[N^2]$ is the generator for the cohomology group $H^2(\\mathbb C`P^n; \n\\mathbb Z) \\cong \\mathbb Z$. \\fullref{Proposition5.2} now applies, and gives us that\n{\\it any\\\/} smooth map $h$ in the homotopy class of the Okun map must have \nsingular set intersecting the canonical $\\mathbb C`P^1$. 
A similar example appears\nin the case of quaternionic hyperbolic manifolds, with the singular sets being\nforced to intersect the canonical $\\mathbb{HP}^1$ (diffeomorphic to $S^4$) \ninside the dual space $\\mathbb{HP}^n$.\n\n\\bibliographystyle{gtart}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgements}\nWe thank the action editor and anonymous reviewers for their\nconstructive feedback. The authors also thank Nikita Moghe, Seraphina\nGoldfarb-Tarrant, Ondrej Bohdal and Heather Lent for their insightful\ncomments on earlier versions of this paper. We gratefully acknowledge\nthe support of the UK Engineering and Physical Sciences Research\nCouncil (grants EP\/L016427\/1 (Sherborne) and EP\/W002876\/1 (Lapata))\nand the European Research Council (award 681760, Lapata).\n\n\n\\section{Methodology}\n\\label{sec:methodology}\n\nWe combine two meta-learning techniques for\ncross-lingual semantic parsing. The first is the {Reptile} algorithm\noutlined in Section~\\ref{sec:rw}. {Reptile} optimizes for dense likelihood\nregions within the parameters (i.e., an \\emph{optimal manifold}) by\npromoting inter-batch generalization \\citep{DBLP:journals\/corr\/abs-1803-02999-REPTILE}.\nStandard Reptile iteratively optimizes the manifold for an improved\ninitialization across objectives. Rapid fine-tuning yields the final task-specific model. The second technique is the first-order approximation of DG-MAML\n\\citep{li-hospedales-metadg,wang-etal-2021-meta-DGMAML}. This\nsingle-stage process optimizes for domain generalization by simulating\n``source'' and ``target'' batches from different domains to\nexplicitly optimize for \\emph{cross-batch} generalization. Our\nalgorithm, \\mbox{\\sc XG-Reptile}, combines these paradigms to optimize\na target loss with the overall learning ``direction'' derived as\nthe \\textit{optimal manifold} learned via {Reptile}. This trains \nan accurate parser demonstrating sample-efficient cross-lingual transfer\nwithin an efficient \\emph{single-stage} learning process.\n\n\n\\subsection{The \\mbox{\\sc XG-Reptile} Algorithm}\n\nEach learning episode of \\mbox{\\sc XG-Reptile} comprises\ntwo component steps: \\mbox{\\emph{intra-task}} learning and\n\\emph{inter-language} generalization to jointly learn parsing and\ncross-lingual transfer. Alternating these processes trains a\ncompetitive parser from multiple languages with low computational\noverhead beyond existing gradient-descent training. Our approach\ncombines the typical two stages of meta-learning to produce a single\nmodel without a fine-tuning requirement.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{imgs\/xgr6.pdf}\n \\vspace{-2.5em}\n \\caption{\\small One iteration of \\mbox{\\sc XG-Reptile}. (1) Run\n $K$ iterations of gradient descent over $K$ support batches to\n learn $\\phi_{K}$, (2) compute $\\nabla_{macro}$, the difference\n between $\\phi_{K}$ and $\\phi_{1}$, (3) find the loss on the\n target batch using $\\phi_{K}$ and (4) compute the final gradient\n update from $\\nabla_{macro}$ and the target loss.}\n \\label{fig:xgr_diagram}\n \\vspace{-1em}\n\\end{figure}\n\n\\paragraph{Task Learning Step}\n\nWe first sample from the high-resource language (i.e.,\n$\\mathcal{S}_{\\rm EN}$) $K$ ``support'' batches of examples,\n$\\mathcal{B}^{S}=\\lbrace\\left(Q_{\\rm\n EN},~P,~\\mathcal{D}\\right)\\rbrace$. 
For each of $K$ batches: we\ncompute predictions, compute losses, calculate gradients and adjust\nparameters using some optimizer (see illustration in\nFigure~\\ref{fig:xgr_diagram}). After $K$ successive optimization steps\nthe initial weights in this episode,~$\\phi_{1}$, have been optimized\nto~$\\phi_{K}$. The difference between final and initial weights is\ncalculated as:\n\n\\vspace{-\\baselineskip}\n\\begin{align}\n \\nabla_{macro} = \\phi_{K} - \\phi_{1} \\label{eq:reptile_step}\n\\end{align}\n\nThis ``macro-gradient'' step is equivalent to a {Reptile} step\n\\citep{DBLP:journals\/corr\/abs-1803-02999-REPTILE}, representing\nlearning an optimal \\textit{manifold} as an approximation\nof overall learning trajectory.\n\n\\paragraph{Cross-Lingual Step}\n\nThe second step samples one ``target'' batch,\n$\\mathcal{B}^{T}=\\left(Q_{l},~P,~\\mathcal{D}\\right)$, from a\nsampled target language (i.e., $\\mathcal{S}_{\n l}\\subset\\mathcal{S}_{\\rm L}$). We compute the cross-entropy\nloss and gradients from the prediction of the model at $\\phi_{K}$ on\n$\\mathcal{B}^{T}$:\n\n\\vspace{-\\baselineskip}\n\\begin{align}\n \\mathcal{L}_{T} = \\textrm{Loss}\\left(p_{\\phi_{K}}\\left(Q_{l},~\\mathcal{D}\\right),~P\\right) \\label{eq:cross_lingual_step}\n\\end{align}\n\nWe evaluate the parser at $\\phi_{K}$ on a target\nlanguage we desire to generalize to. We show below that the\ngradient of $\\mathcal{L}_{T}$ comprises the loss at $\\phi_{K}$ and\nadditional terms maximizing the inner product between the high-likelihood\nmanifold and the target loss. The total gradient encourages\nintra-task and cross-lingual learning (see Figure~\\ref{fig:xgr_diagram}).\n\n\n\\begin{algorithm}[!t]\n\\caption{{\\sc XG-Reptile}\\label{xgr-algo}}\n{\\small\n\\begin{algorithmic}[1]\n \\REQUIRE{Support data, $\\mathcal{S}_{\\rm EN}$, target data, $\\mathcal{S}_{L}$}\n \\REQUIRE{Inner learning rate, $\\alpha$, outer learning rate, $\\beta$}\n \\STATE{Initialise $\\theta_{1}$, the vector of initial parameters}\n \\FOR{$t \\leftarrow 1$~\\textbf{to}~$T$}\n \\STATE{Copy $\\phi_{1} \\leftarrow \\theta_{t-1}$}\n \\STATE{Sample $K$ support batches $\\lbrace\\mathcal{B}^{S}\\rbrace_{k=1}^{K}$ from $\\mathcal{S}_{\\rm EN}$}\n \\STATE{Sample target language $l$ from $L$ languages}\n \\STATE{Sample target batch~$\\mathcal{B}^{T}$ from $\\mathcal{S}_{l}$}\n \\FOR{$k \\leftarrow 1$~{\\textbf{to}}~$K$ [Inner Loop]}\n \\STATE{$\\mathcal{L}^{S}_{k} \\leftarrow {\\rm Forward}\\left(\\mathcal{B}^{S}_{k},~\\phi_{k-1}\\right)$}\n \\STATE{$\\phi_{k}\\leftarrow{\\rm Adam}\\left(\\phi_{k-1},~\\nabla\\mathcal{L}^{S}_{k},~\\alpha\\right)$\\label{algo:line_grad}} \n \\ENDFOR\n \\STATE{Macro grad: ~$\\nabla_{macro}\\leftarrow\\phi_{K}-\\phi_{1}$}\n \\STATE{Target Step: ~$\\mathcal{L}_{T}\\leftarrow {\\rm Forward}\\left(\\mathcal{B}^{T},~\\phi_{K}\\right)$}\n \\STATE{Total gradient: $\\nabla_{\\Sigma}=\\nabla_{macro} + \\nabla_{\\phi_K}\\mathcal{L}_{T}$}\n \\STATE{Update $\\theta_{t} \\leftarrow {\\rm SGD}\\left(\\theta_{t-1},~\\nabla_{\\Sigma},~\\beta\\right)$}\n \\ENDFOR\n\\end{algorithmic}\n}\n\\end{algorithm}\n\nAlgorithm \\ref{xgr-algo} outlines the \\mbox{\\sc XG-Reptile} process (loss\ncalculation and batch processing are simplified for brevity). We repeat\nthis process over $T$ episodes to train model $p_{\\theta}$ to\nconvergence. If we optimized for target data to align with individual\nsupport batches (i.e., $K=1$) then we may observe batch-level noise in cross-lingual\ngeneralization. 
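To make one training episode concrete, the following PyTorch-style sketch mirrors Algorithm~\\ref{xgr-algo}. It is an illustrative sketch rather than the released implementation: the model, loss function, and batch variables are hypothetical placeholders, plain gradient steps stand in for the paper's optimizer pairing, and we follow the standard Reptile convention of treating the negated parameter difference as a gradient.

\\begin{verbatim}
import torch

def xg_reptile_episode(model, support_batches, target_batch,
                       compute_loss, inner_lr=1e-4, outer_lr=1e-3):
    """One episode: K inner steps on support batches (English),
    then one cross-lingual step on a target-language batch."""
    params = list(model.parameters())
    # Remember the episode's starting point phi_1 (= current theta).
    phi_1 = [p.detach().clone() for p in params]

    # Inner loop: K task-learning steps over the support batches.
    for batch in support_batches:
        loss = compute_loss(model, batch)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g          # phi_k <- phi_{k-1} - alpha * grad

    # Macro gradient: parameter difference treated as a gradient
    # (Reptile convention; this is the negation of phi_K - phi_1).
    macro_grad = [p1 - pK.detach() for p1, pK in zip(phi_1, params)]

    # Cross-lingual step: target-language loss evaluated at phi_K.
    target_loss = compute_loss(model, target_batch)
    target_grad = torch.autograd.grad(target_loss, params)

    # Outer update from theta (= phi_1) with the combined gradient.
    with torch.no_grad():
        for p, p1, g_m, g_t in zip(params, phi_1, macro_grad, target_grad):
            p.copy_(p1 - outer_lr * (g_m + g_t))
    return float(target_loss)
\\end{verbatim}

The $K$ support batches and the single target batch per episode would be drawn as in the sampling steps of Algorithm~\\ref{xgr-algo}; optimizer state, gradient clipping, and other engineering details are omitted here.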
Our intuition is that aligning the target gradient with an\napproximation of the task manifold, i.e., $\\nabla_{macro}$, will overcome such\nbatch-level noise and align new languages to a more mutually beneficial direction during\ntraining. We observe this behavior empirically in Section~\\ref{sec:results}.\n\nWe efficiently generalize to low-resource languages by\nexploiting the asymmetric data requirements between steps: one\nbatch of the target language is required for $K$ batches of the source\nlanguage. For example, if $K=10$ then this $\\frac{1}{K}$\nproportionality requires 10\\% of target-language data relative to\nsupport. We demonstrate in Section~\\ref{sec:results} that we\ncan use a quantity smaller than $\\frac{1}{K}$ per target language to \nfurther increase sample efficiency. \n\\vspace{-0.25em}\n\n\\paragraph{Gradient Analysis}\n\nFollowing \\citet{DBLP:journals\/corr\/abs-1803-02999-REPTILE}, we\nexpress the gradient of a single step of the inner loop\n(Line~\\ref{algo:line_grad}), $g_{k}=\\nabla\\mathcal{L}^{S}_{k}$, as:\n\n\\vspace{-\\baselineskip}\n\\begin{align}\ng_{k} &= \\bar{g}_k + \\bar{H}_{k}\\left( \\phi_{k}-\\phi_{1}\\right) +\nO\\left(\\alpha^2 \\right) \\label{eq:gi_gradient_taylor_2} \n\\end{align}\nThis Taylor series expansion expresses~$g_k$ in terms\nof~$\\bar{g}_{k}$, the gradient at the initial point~$\\phi_{1}$;\n$\\bar{H}_{k}$, the Hessian of the loss at the initial point;\nthe step difference between position~$\\phi_{k}$ and the initial\nposition; and higher-order terms with marginal influence,\n$O\\left(\\alpha^{2}\\right)$.\n\nSubstituting the first-order approximation of each inner gradient\n(Equation~\\eqref{eq:initial_pt_trick}) and rewriting the difference as a sum of\ngradient steps (Equation~\\eqref{eq:diff_trick}), we arrive at\nthe expression for $g_{k}$ shown in Equation~\\eqref{eq:gi_gradient_taylor_3}, \nexpressing the gradient as an initial component, $\\bar{g}_k$,\nand the product of the Hessian at~$k$ with all prior gradient steps. We\nrefer to \\citet{DBLP:journals\/corr\/abs-1803-02999-REPTILE} for further\nvalidation that the gradient of this product maximizes the\ncross-batch expectation -- therefore promoting intra-task\ngeneralization and learning the optimal manifold. The final gradient (Equation~\\eqref{eq:reptile_gradient_taylor})\nis the accumulation over the $g_{k}$ steps and, up to the $-\\alpha$ scaling of the\ninner steps, is equivalent to Equation~\\eqref{eq:reptile_step}. $\\nabla_{macro}$ comprises both the gradients of the $K$ steps and\nadditional terms maximizing the inner product of inter-batch gradients.\n\n\\vspace{-\\baselineskip}\n\\begin{align}\n{\\rm Use}~~& g_{j}=\\bar{g}_{j} + O\\left(\\alpha\\right) \\label{eq:initial_pt_trick} \\\\ \n &~\\phi_{k}-\\phi_{1}=-\\alpha\\sum_{j=1}^{k-1}g_{j} \\label{eq:diff_trick}\\\\\ng_{k} &= \\bar{g}_k - \\alpha\\bar{H}_{k}\\sum_{j=1}^{k-1}\\bar{g}_{j} + O\\left(\\alpha^2 \\right) \\label{eq:gi_gradient_taylor_3} \\\\\n&\\nabla_{macro} = \\sum_{k=1}^{K}g_{k} \\label{eq:reptile_gradient_taylor} \n\\end{align}\n\nWe can similarly express the gradient of the target batch as Equation\n\\eqref{eq:target_gradient_taylor}, where the term\n$\\bar{H}_{T}\\nabla_{macro}$ is the cross-lingual generalization\nproduct, analogous to the intra-task generalization seen above.\n\\begin{align}\ng_{T} &= \\bar{g}_{T} -\\alpha\\bar{H}_{T}\\nabla_{macro} + O\\left(\\alpha^2 \\right) \\label{eq:target_gradient_taylor} \n\\end{align}\n\nEquation~\\eqref{eq:sigma_gradient_taylor} shows an example final\ngradient when $K=2$. 
Within the parentheses are\nthe intra-task and cross-lingual gradient products as components\npromoting fast learning across multiple axes of generalization.\n\n\\vspace{-\\baselineskip}\n\\begin{equation}\n  \\begin{split}\n\\nabla_{\\Sigma} &= g_{1} + g_{2} + g_{T} \\\\ \n &= \\bar{g}_1 +\\bar{g}_2 + \\bar{g}_T \\\\ & -\\alpha\\left(\\bar{H}_{2}\\bar{g}_{1}+\\bar{H}_{T}\\lbrack\\bar{g}_1 +\\bar{g}_2\\rbrack\\right) + O\\left(\\alpha^2 \\right) \\label{eq:sigma_gradient_taylor} \\\\\n\\end{split}\n\\end{equation}\n\nThe key hyperparameter in \\mbox{\\sc XG-Reptile} is the number of\ninner-loop steps~$K$, which represents a trade-off between the quality of the\nmanifold approximation and the frequency of target steps. At small $K$, the \nmanifold approximation may be poor, leading to sub-optimal learning. At\nlarge $K$, an improved manifold approximation incurs fewer\ntarget batch steps per epoch, leading to weakened cross-lingual\ntransfer. In practice, $K$ is set empirically, and Section~\\ref{sec:results}\nidentifies an optimal region for our task.\n\n\\mbox{\\sc XG-Reptile} can be viewed as generalizing two existing\nalgorithms. Without the $\\mathcal{L}_{T}$ loss, our approach is\nequivalent to {Reptile} and lacks cross-lingual alignment. If $K=1$, then \\mbox{\\sc XG-Reptile} is equivalent to\n\\mbox{\\sc DG-FMAML} \\citep{wang-etal-2021-meta-DGMAML} but lacks\nintra-task generalization across support batches. Our unification of\nthese algorithms represents the best of both approaches and\noutperforms both techniques within semantic parsing. Another perspective is that \\mbox{\\sc XG-Reptile} learns\na \\textit{regularized optimal manifold}, with immediate cross-lingual capability,\nas opposed to standard {Reptile}, which requires fine-tuning to transfer across tasks.\nWe identify how this contrast in approaches influences cross-lingual transfer in Section \\ref{sec:results}.\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe propose \\mbox{\\sc XG-Reptile}, a meta-learning\nalgorithm for few-shot cross-lingual generalization in semantic\nparsing. \\mbox{\\sc XG-Reptile} makes better use of fewer samples\nto learn an economical multilingual semantic parser at minimal cost\nand with improved sample efficiency. Compared to adjacent training algorithms\nand zero-shot approaches, we obtain more accurate and\nconsistent logical forms across languages similar and dissimilar to\nEnglish. Results on ATIS show clear benefit across many languages, and\nresults on Spider demonstrate that \\mbox{\\sc XG-Reptile} is effective\nin a challenging cross-lingual and cross-database scenario. We focus\nour study on semantic parsing; however, this algorithm could be\nbeneficial in other low-resource cross-lingual tasks. In future work\nwe plan to examine how to better align entities in low-resource\nlanguages to further improve parsing accuracy.\n\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\nA semantic parser maps natural language (NL) utterances to logical\nforms (LF) or executable programs in some machine-readable language\n(e.g.,~SQL). 
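For instance, a simplified ATIS-style pair (illustrative only, not a gold annotation from the dataset) maps a flight question to an executable SQL program:

\\begin{verbatim}
Q: "show me flights from Denver to Boston"
P: SELECT DISTINCT flight.flight_id
   FROM flight
   WHERE flight.from_airport = 'DEN'
     AND flight.to_airport = 'BOS';
\\end{verbatim}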
Recent improvement in the capability of semantic parsers has\nfocused on domain transfer within English\n\\citep{su-yan-2017-xdomain-paraphrasing,suhr-etal-2020-exploring},\ncompositional generalization\n\\citep{yin-neubig-2017-syntactic-gcd-code-generation,\n herzig-berant-2021-span,scholak-etal-2021-picard}, and more recently\ncross-lingual methods\n\\citep{multilingsp-and-code-switching-Duong2017,neural-hybrid-trees-susanto2017,polyglotapi-Richardson2018}.\n\nWithin cross-lingual semantic parsing, there has been an effort to\nbootstrap parsers with minimal data to avoid the cost and labor\nrequired to support new languages. Recent proposals include using\nmachine translation to approximate training data for supervised\nlearning\n\\citep{moradshahi-etal-2020-localizing,sherborne-etal-2020-bootstrapping,nicosia-etal-2021-translate-fill-zero-shot-semparse}\nand zero-shot models, which engineer cross-lingual similarity with\nauxiliary losses\n\\citep{van-der-goot-etal-2021-masked,yang-etal-2021-frustratingly-simple-DRS-parsing,sherborne-lapata-2022-zero}. These\nshortcuts bypass costly data annotation but present limitations such\nas ``translationese'' artifacts from machine translation\n\\citep{koppel-ordan-2011-translationese} or undesirable domain\nshift \\citep{sherborne-lapata-2022-zero}. However, annotating a minimally\nsized data sample can potentially overcome these limitations while incurring\nsignificantly reduced costs compared to full dataset translation \\citep{garrette-baldridge-2013-learning}.\n\nWe argue that a few-shot approach is more realistic for an engineer\nmotivated to support additional languages for a database -- as one can\nrapidly retrieve a high-quality sample of translations and\ncombine these with existing supported languages (i.e., English).\nBeyond semantic parsing, cross-lingual few-shot approaches have also\nsucceeded at leveraging a small number of annotations within a variety\nof tasks \\citep[\\emph{inter alia}]{zhao-etal-2021-closer} including\nnatural language inference, paraphrase identification,\npart-of-speech-tagging, and named-entity recognition. Recently, the application\nof meta-learning to domain generalization has further demonstrated capability\nfor models to adapt to new domains with small samples\n\\citep{gu-etal-2018-meta-nmt,li-hospedales-metadg,wang-etal-fewshot-survey}.\n\nIn this work, we synthesize these directions into a meta-learning\nalgorithm for cross-lingual semantic parsing. Our approach \nexplicitly optimizes cross-lingual generalization using fewer training samples per new language without performance degradation. We also require minimal computational overhead beyond standard gradient-descent training and no external dependencies beyond\nin-task data and a pre-trained encoder. Our algorithm,\n\\textbf{Cross}-Lingual \\textbf{G}eneralization \\textbf{Reptile}\n(\\mbox{\\sc XG-Reptile}) unifies two-stage meta-learning into a single\nprocess and outperforms prior and constituent methods on all\nlanguages, given identical data constraints. The\nproposed algorithm is still model-agnostic and applicable to more tasks \nrequiring sample-efficient cross-lingual transfer.\n\nOur innovation is the combination of both \\mbox{intra-task} and\n\\mbox{inter-language} steps to jointly learn the parsing task and optimal\ncross-lingual transfer. Specifically, we interleave learning \nthe overall task from a high-resource language and learning cross-lingual\ntransfer from a minimal sample of a lower-resource language. 
Results\non ATIS \\citep{hemphill-etal-1990-atis} in six languages (English,\nFrench, Portuguese, Spanish, German, Chinese) and Spider\n\\citep{yu-etal-2018-spider} in two languages (English, Chinese)\ndemonstrate our proposal works in both single- and cross-domain\nenvironments. Our contributions are as follows: \n\n\\vspace{-0.5em}\n\\begin{itemize}\n\\item We introduce \\mbox{\\sc XG-Reptile}, a\n first-order meta-learning algorithm for cross-lingual generalization.\n \\mbox{\\sc XG-Reptile} approximates an \\textit{optimal manifold} using support\n languages with \\textit{cross-lingual regularization} using target\n languages to train for explicit cross-lingual similarity.\n \n\\item We showcase sample-efficient cross-lingual transfer within \n two challenging semantic parsing datasets across multiple languages.\n Our approach yields more accurate parsing in a few-shot scenario\n and demands 10$\\times$ fewer samples than prior methods. \n \n\\item We establish a cross-domain and cross-lingual \n parser obtaining promising results for both Spider in English \n \\citep{yu-etal-2018-spider} and CSpider in Chinese \\citep{Min2019-CSPIDER}.\n\\end{itemize}\n\n\n\n\n\\section{Problem Definition}\n\\label{sec:problem}\n\n\n\\paragraph{Semantic Parsing}\n\nWe wish to learn a parameterized parsing function, $p_{\\theta}$, which\nmaps from a natural language utterance and a relational database\ncontext to an executable program expressed in a logical form (LF) language:\n\\begin{align}\nP = p_{\\theta}\\left(Q,~D\\right) \\label{eq:parser_fn}\n\\end{align}\n\nAs formalized in Equation~\\eqref{eq:parser_fn}, we learn\nparameters,~$\\theta$, using paired data\n$\\left(Q,~P,~\\mathcal{D}\\right)$ where~$P$ is the logical form\nequivalent of natural language question~$Q$. In this work, our LFs are\nall executable SQL queries and therefore grounded in a\ndatabase~$\\mathcal{D}$. A single-domain dataset references only\none~$\\mathcal{D}$ database for all $\\left(Q,~P\\right)$, whereas a\nmulti-domain dataset demands reasoning about unseen databases to\ngeneralize to new queries. This is expressed as a `zero-shot'\nproblem if the databases at test time, $\\mathcal{D}_{\\rm test}$, were\nunseen during training. This challenge demands a parser\ncapable of \\textit{domain generalization} beyond observed\ndatabases. This is in addition to the \\textit{structured prediction}\nchallenge of semantic parsing.\n\n\\paragraph{Cross-Lingual Generalization}\n\nPrototypical semantic parsing datasets express the question, ${Q}$, in\nEnglish only. As discussed in Section~\\ref{sec:intro}, our parser\nshould be capable of mapping from \\emph{additional} languages to\nwell-formed, executable programs. However, prohibitive expense limits\nus from reproducing a monolingual model for each additional language\nand previous work demonstrates accuracy improvement by training\n\\textit{multilingual} models\n\\citep{multilingual-sp-hierch-tree-Jie2014}. In addition to the\nchallenges of structured prediction and domain generalization, we\njointly consider \\textit{cross-lingual generalization}. Training\nprimarily relies on existing English data (i.e., ${Q}_{\\rm EN}$\nsamples) and we show that our meta-learning algorithm in\nSection~\\ref{sec:methodology} leverages a small sample of training data in new\nlanguages for accurate parsing. 
We express this sample, $\\mathcal{S}_{l}$, for some language, $l$,\nas: \n\\begin{align}\n\t\\mathcal{S}_{l}&=\\left({Q}_{l},~{P},~\\mathcal{D}\\right)_{i=0}^{N_{l}} \\label{eq:sample_l}\n\\end{align} \nwhere $N_{l}$ is the sample size from~$l$, assumed to be smaller than the\noriginal English dataset (i.e.,~\\mbox{${N}_{l} \\ll N_{\\rm\n EN}$}). Where available, we extend this paradigm to develop models\nfor $L$ different languages simultaneously in a multilingual setup by\ncombining samples as:\n\\begin{align}\n\t\\mathcal{S}_{\\rm L} &= \\lbrace\\mathcal{S}_{ l_1},\\mathcal{S}_{ l_2},\\ldots,\\mathcal{S}_{l_N}\\rbrace \\label{eq:multilingual_s_outer}\n\\end{align}\nWe can express cross-lingual generalization as:\n\\begin{align}\n\t p_{\\theta}\\left(P~|~Q_{l},~\\mathcal{D}\\right) \\rightarrow p_{\\theta}\\left(P~|~Q_{\\rm EN},~\\mathcal{D}\\right) \\label{eq:xlingual_distributions_approach}\n\\end{align}\nwhere~$p_{\\theta}\\left(P~|~Q_{\\rm EN},~\\mathcal{D}\\right)$ is the\npredicted distribution over all possible output SQL sequences\nconditioned on an English question, $Q_{\\rm EN}$, and a\ndatabase~$\\mathcal{D}$. Our goal is for the prediction from a new language, $Q_{l}$, to converge\ntowards this existing distribution using the same parameters $\\theta$, constrained to fewer\nsamples in~$l$ than English.\n\n\nWe aim to maximize the accuracy of predicting programs on\nunseen test data from each non-English language~$l$. The key challenge\nis learning a performant distribution over each new language with\nminimal available samples. This includes learning to incorporate\neach $l$ into the parsing task and modeling the language-specific\nsurface form of questions. Our setup is akin to few-shot learning; however, the number of examples needed for satisfactory performance is\nan empirical question. We are searching for both minimal sample\nsizes and maximal sampling efficiency. We discuss our sampling\nstrategy in Section~\\ref{sec:sampling_for_xgen} with results at\nmultiple sizes of~$\\mathcal{S}_{L}$ in Section~\\ref{sec:results}.\n\n\n\\section{Results and Analysis}\n\\label{sec:results}\n\nWe contrast \\mbox{\\sc XG-Reptile} to baselines for ATIS in\nTable~\\ref{tab:xg_reptile_atis_1} and present further analysis within\nFigure~\\ref{fig:k_exp_1_2_3}. Results for the multi-domain Spider are\nshown in Table~\\ref{tab:xgr_spider_1}. Our findings\nsupport our hypothesis that \\mbox{\\sc XG-Reptile} is a superior\nalgorithm for jointly training a semantic parser and encouraging\ncross-lingual generalization with improved sample efficiency. Given the same\ndata, \\mbox{\\sc XG-Reptile} produces more mutually beneficial\nparameters for both model requirements with only modifications to the\ntraining loop.\n\n\\vspace{-\\baselineskip}\n\\paragraph{Comparison across Generalization Strategies}\n\nWe compare \\mbox{\\sc XG-Reptile} to established learning algorithms in\nTable \\ref{tab:xg_reptile_atis_1}. Across baselines, we find that\nsingle-stage training, i.e., \\emph{Train-EN$\\cup$All} or\nmachine-translation based models, perform below two-stage\napproaches. The strongest competitor is the\n\\emph{Reptile-EN$\\rightarrow$FT-All} model, highlighting the\neffectiveness of Reptile for single-task generalization\n\\citep{kedia-etal-2021-beyond-reptile}. However, \\mbox{\\sc XG-Reptile}\nperforms above all baselines across sample rates. Practically, 1\\%,\n5\\%, 10\\% correspond to 45, 225, and 450 example pairs, respectively. 
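These sizes follow from the subtractive sampling scheme described in Section~\\ref{sec:sampling_for_xgen}: the sampled fraction supplies target-language pairs and every remaining example supplies English support pairs. A toy sketch of such a split (the data structure and field names are hypothetical, not taken from a released codebase):

\\begin{verbatim}
import random

def subtractive_split(parallel_data, target_lang, fraction, seed=0):
    """parallel_data[i] maps a language code to an aligned (Q, P, D) record."""
    rng = random.Random(seed)
    indices = list(range(len(parallel_data)))
    rng.shuffle(indices)
    # e.g. a 1% fraction of the 4,473 ATIS training pairs is ~45 examples.
    n_target = round(fraction * len(parallel_data))
    target = [parallel_data[i][target_lang] for i in indices[:n_target]]
    support = [parallel_data[i]["en"] for i in indices[n_target:]]
    return support, target
\\end{verbatim}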
We\nidentify significant improvements ($p<0.01$; relative to the closest model using an independent t-test) in cross-lingual transfer through jointly learning to parse and multi-language generalization while maintaining single-stage training efficiency.\n\nCompared to the upper bounds, \\mbox{\\sc XG-Reptile} performs above\n\\emph{Monolingual Training} at $\\ge 1\\%$ sampling, which further\nsupports the prior benefit of multilingual modeling\n\\citep{arch-for-neural-multisp-Susanto2017}.\n\\emph{Multilingual Training} is only marginally stronger than \\mbox{\\sc\n XG-Reptile} at 1\\% and 5\\% sampling despite requiring many more examples. \\mbox{\\sc\n XG-Reptile}@10\\% improves on this model by an average~$+$1.3\\%. Considering that our upper bound uses $10\\times$ the data of\n\\mbox{\\sc XG-Reptile}@10\\%, this accuracy gain highlights the benefit\nof explicit cross-lingual generalization. This is\nconsistent at higher sample sizes (see Figure\n\\ref{fig:k_exp_1_2_3}(c) for German).\n\nAt the smallest sample size, \\mbox{\\sc XG-Reptile}@1\\%, demonstrates a\n$+$12.4\\% and $+$13.2\\% improvement relative to \\emph{Translate-Train} and\n\\emph{Translate-Test}. Machine translation is often viable for cross-lingual transfer \\citep{conneau2018xnli}. However, we find that mistranslation of named entities\nincurs an exaggerated parsing penalty -- leading to inaccurate logical forms\n\\citep{sherborne-etal-2020-bootstrapping}. This suggests that\nsample quality has an exaggerated influence on semantic parsing\nperformance. When training \\mbox{\\sc XG-Reptile} with MT\ndata, we also observe a lower Target-language average of 66.9\\%. \nThis contrast further supports\nthe importance of sample quality in our context.\n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{imgs\/k_exp_subfig_v3_9_9_22.png}\n \\caption{Ablation Experiments on ATIS (a) accuracy against inner loop size $K$ across languages, (b) accuracy against $K$ for German when varying batch size, and (c) accuracy against dataset sample size relative to support dataset from 1\\% to 50\\% for German. For~(b), the $K=1$ case is equivalent to DG-FMAML \\citep{wang-etal-2021-meta-DGMAML}.}\n \\label{fig:k_exp_1_2_3}\n \\vspace{-1em}\n\\end{figure*}\n\n \\mbox{\\sc XG-Reptile} improves cross-lingual generalization across all languages at\n equivalent and lower sample sizes. At 1\\%, it\n improves by an average $+$15.7\\% over the closest model,\n \\emph{Reptile-EN$\\rightarrow$FT-All}. Similarly, at 5\\%, we find\n $+$9.8\\% gain, and at 10\\%, we find $+$8.9\\% relative to the closest\n competitor. Contrasting across sample sizes --- our best approach is\n @10\\%, however, this is $+$3.5\\%~above @1\\%, suggesting that smaller\n samples could be sufficient if 10\\% sampling is unattainable. This\n relative stability is an improvement compared to the 17.7\\%, 11.2\\% or\n 10.3\\% difference between @1\\% and @10\\% for other models. This\n implies that \\mbox{\\sc XG-Reptile} better utilizes smaller samples\n than adjacent methods.\n\nAcross languages at 1\\%, \\mbox{\\sc XG-Reptile} improves primarily for\nlanguages dissimilar to English \\citep{ahmad-etal-2019-difficulties}\nto better minimize the cross-lingual transfer gap. For Chinese\n(ZH), we see that \\mbox{\\sc XG-Reptile}@1\\% is $+$26.4\\% above the\nclosest baseline. This contrasts with the smallest gain, $+$8.5\\%~for\nGerman, with greater similarity to English. 
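The significance claims above refer to an independent-samples t-test over per-run accuracies; a minimal sketch of such a comparison, with entirely made-up numbers purely to illustrate the procedure, is:

\\begin{verbatim}
from scipy import stats

# Hypothetical per-seed accuracies for two systems on one target language.
xg_reptile = [72.1, 71.4, 72.8]
closest_baseline = [63.0, 64.2, 62.5]

t_stat, p_value = stats.ttest_ind(xg_reptile, closest_baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # report significance at p < 0.01
\\end{verbatim}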
Our improvement also\nyields less variability across target languages --- the\nstandard deviation across languages for \\mbox{\\sc XG-Reptile}@1\\% is\n1.1, compared to 2.8 for \\emph{Train-EN$\\cup$All} or 7.4 for \n\\emph{Reptile-EN$\\rightarrow$FT-All}.\n\nWe can also compare to {\\sc ZX-Parse}, the method of\n\\citet{sherborne-lapata-2022-zero} which engineers cross-lingual latent\nalignment for zero-shot semantic parsing without data in target\nlanguages. With 45 samples per target language, \\mbox{\\sc\n XG-Reptile}@1\\% improves by an average of $+$4.9\\%. \\mbox{\\sc\n XG-Reptile} is more beneficial for distant languages --\ncross-lingual transfer penalty between English and Chinese is $-$12.3\\%\nfor {\\sc ZX-Parse} compared to $-$5.7\\% in our case. While these\nsystems are not truly comparable, given different data requirements,\nthis contrast is practically useful for comparison between zero- and\nfew-shot localization.\n\n\n\\paragraph{Influence of~$K$ on Performance}\nIn Figure~\\ref{fig:k_exp_1_2_3}(a) we study how variation in the key\nhyperparameter $K$, the size of the inner-loop in\nAlgorithm~\\ref{xgr-algo} or the number of batches used to approximate\nthe \\emph{optimal task manifold} influences model performance across\nlanguages (single run at 5\\% sampling). When \\mbox{$K=1$}, the model learns\ngeneralization from batch-wise similarity which is equivalent to\n\\emph{DG-FMAML} \\citep{wang-etal-2021-meta-DGMAML}. We empirically\nfind that increasing~$K$ beyond one benefits performance by\nencouraging cross-lingual generalization with the \\emph{task} over a\nsingle \\emph{batch}, and it is, therefore, beneficial to align an\nout-of-domain example with the overall \\textit{direction} of training.\nHowever, as theorized in Section~\\ref{sec:methodology}, increasing $K$\nalso decreases the frequency of the outer step within an epoch leading\nto poor cross-lingual transfer at high~$K$. This trade-off yields an\noptimal operating regime for this hyper-parameter. We use $K=10$ in\nour experiments as the center of this region. Given this setting\nof~$K$, the target sample size must be 10\\% of the support sample size\nfor training in a single epoch. However,\nTable~\\ref{tab:xg_reptile_atis_1} identifies \\mbox{\\sc XG-Reptile} as\nthe most capable algorithm for ``over-sampling'' smaller target\nsamples for resource-constrained generalization.\n\n\\paragraph{Influence of Batch size on performance}\nWe consider two further case studies to analyze \\mbox{\\sc\n XG-Reptile} performance. For clarity, we focus on German; however,\nthese trends are consistent across all target languages.\nFigure~\\ref{fig:k_exp_1_2_3}(b) examines if the effects of\ncross-lingual transfer within \\mbox{\\sc XG-Reptile} are sensitive to\nbatch size during training (single run at 5\\% sampling). A dependence between $K$ and batch size\ncould imply that the desired inter-task and cross-lingual\ngeneralization outlined in Equation~\\eqref{eq:sigma_gradient_taylor}\nis an unrealistic, edge-case phenomenon. This is not the\ncase, and a trend of optimal $K$ setting is consistent across many\nbatch sizes. This suggests that $K$ is an independent hyper-parameter\nrequiring tuning alongside existing experimental settings.\n\n\\paragraph{Performance across Larger Sample Sizes} We consider a wider\nrange of target data sample sizes between 1\\% to 50\\% in\nFigure~\\ref{fig:k_exp_1_2_3}(c). We observe that baseline approaches\nconverge to between 69.3\\% and 73.9\\% at 50\\% target sample\nsize. 
Surprisingly, the improvement of \\mbox{\\sc XG-Reptile} is\nretained at higher sample sizes with an accuracy of 76.5\\%. The benefit of \\mbox{\\sc\n XG-Reptile} is still greatest at low sample sizes with $+$5.4\\%\nimprovement at 1\\%; however, we maintain a $+$2.6\\% gain over the\nclosest system at 50\\%. While low sampling is the most\neconomical, the consistent benefit of \\mbox{\\sc XG-Reptile}\nsuggests a promising strategy for other cross-lingual tasks.\n\n\n\\begin{table}[t]\n\\small\n\\centering\n\\begin{tabular}{@{}llcc|cc@{}}\n\\toprule\n\\multicolumn{2}{l}{} & \\multicolumn{2}{c}{EN} & \\multicolumn{2}{c}{ZH} \\\\ \\midrule\n\\multicolumn{2}{l}{} & Dev & Test & Dev & Test \\\\ \\midrule\n\\multicolumn{6}{l}{\\textit{Monolingual}} \\\\ \\midrule\n\\multicolumn{2}{l}{\\makecell[l]{DG-MAML }} & \\textbf{68.9} & \\textbf{65.2} & \\textbf{50.4} & \\textbf{46.9} \\\\\n\\multicolumn{2}{l}{\\makecell[l]{DG-FMAML }} & 56.8 & --- & 32.5 & --- \\\\\n\\multicolumn{2}{l}{\\mbox{\\sc XG-Reptile}} & 63.5 & --- & 48.9 & --- \\\\ \\midrule\n\\multicolumn{6}{l}{\\textit{Multilingual}} \\\\ \\midrule\n\\mbox{\\sc XG-Reptile} \n & @1\\% & 56.8 & 56.5 & 47.0 & 45.6 \\\\\n & @5\\% & \\textbf{59.6} & 58.1 & 47.3 & 45.6 \\\\\n & @10\\% & 59.2 & \\textbf{59.7} & \\textbf{48.0} & \\textbf{46.0} \\\\\n \\bottomrule\n\\end{tabular}\n\\caption{Exact set match accuracy for RAT-SQL trained on Spider (English) and CSpider (Chinese) comparing \\mbox{\\sc XG-Reptile} to DG-MAML and DG-FMAML \\citep{wang-etal-2021-meta-DGMAML}. We experiment with sampling between 1\\% to 10\\% of Chinese examples relative to English. Monolingual and multilingual best results are bolded.\n}\n\\label{tab:xgr_spider_1}\n\\vspace{-1.5em}\n\\end{table}\n\n\\paragraph{Learning Spider and CSpider}\n\nOur results on Spider and CSpider are shown in\nTable~\\ref{tab:xgr_spider_1}. We compare \\mbox{\\sc XG-Reptile}\nto monolingual approaches from \\citet{wang-etal-2021-meta-DGMAML}\nand discuss cross-lingual results when sampling between 1\\% to 10\\% of CSpider target\nduring training.\n\nIn the \\emph{monolingual setting}, \\mbox{\\sc XG-Reptile} shows\nsignificant improvement ($p<0.01$; using an independent samples t-test) compared to DG-FMAML with $+$6.7\\% for English and $+$16.4\\% for Chinese dev accuracy. This further supports\nour claim that generalizing with a \\textit{task manifold} is superior\nto batch-level generalization. \n\nOur results are closer to DG-MAML \\citep{wang-etal-2021-meta-DGMAML}, a higher-order meta-learning method requiring computational resources and training times exceeding $4\\times$ the requirements for {\\sc XG-Reptile}. \\mbox{\\sc XG-Reptile} yields accuracies $-5.4\\%$ and $-1.5\\%$ below DG-MAML for English and Chinese, where DG-FMAML performs much lower at $-12.1\\%$ (EN) and $-17.9\\%$ (ZH). \nOur results suggest that \\mbox{\\sc XG-Reptile} is a superior first-order meta-learning\nalgorithm rivaling prior work with greater computational demands.\\footnote{We compare against DG-MAML as the\nbest \\emph{public} available model on the \\href{https:\/\/taolusi.github.io\/CSpider-explorer\/}{CSpider leaderboard}\nat the time of writing.}\n\nIn the multilingual setting, we observe that\n\\mbox{\\sc XG-Reptile} performs competitively using as little as 1\\% of\nChinese examples. 
While training sampling 1\\% and 5\\% perform similarly -- the best model sees 10\\% of CSpider samples during\ntraining to yield accuracy only $-$0.9\\%~(test) below the monolingual DG-MAML model.\nWhile performance does not match monolingual models, the multilingual\napproach has additional utility in serving more users. As a zero-shot\nsetup, predicting SQL from CSpider inputs through the model trained\nfor English, yields 7.9\\%~validation accuracy, underscoring that\ncross-lingual transfer for this dataset is \\mbox{non-trivial}.\n\nVarying the target sample size demonstrates more variable effects for\nSpider compared to ATIS. Notably, increasing the sample size yields\npoorer English performance beyond the optimal \\mbox{\\sc\n XG-Reptile}@5\\% setting for English. This may be a consequence of\nthe cross-database challenge in Spider --- information shared across\nlanguages may be less beneficial than for single-domain ATIS. The\nleast performant model for both languages is \\mbox{\\sc\n XG-Reptile}@1\\%. Low performance here for Chinese can be\nexpected, but the performance for English is surprising. We suggest that\nthis result is a consequence of ``over-sampling'' of the target\ndata disrupting the overall training process. That is, for 1\\%\nsampling and optimal $K=4$, the target data is ``over-used''\n$25\\times$ for each epoch of support data. We further observe\ndiminishing benefits for English with additional Chinese\nsamples. While we trained a competitive parser with minimal\nChinese data, this effect could be a consequence of how\nRAT-SQL cannot exploit certain English-oriented learning features (e.g., lexical similarity scores). Future\nwork could explore cross-lingual strategies to unify entity modeling for\nimproved feature sharing.\n\n\\paragraph{Visualizing the Manifold}\n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{imgs\/pca_grid_metrics3.png}\n \\caption{PCA Visualizations of sentence-averaged encodings for English (EN), French (FR) and Chinese (ZH) from the ATIS test set (@1\\% sampling from Table \\ref{tab:xg_reptile_atis_1}). We identify the regularized weight manifold which improves cross-lingual transfer using \\mbox{\\sc XG-Reptile}. We also improve in two similarity metrics averaged across languages.}\n \\label{fig:pca}\n \\vspace{-1em}\n\\end{figure*}\n \nAnalysis of \\mbox{\\sc XG-Reptile} in Section \\ref{sec:methodology} relies on a theoretical basis that first-order meta-learning creates a dense high-likelihood sub-region in the parameters (i.e. \\textit{optimal manifold}). \nUnder these conditions, representations of new domains should cluster within the manifold to allow for rapid adaptation with minimal samples. This contrasts with methods without meta-learning, which provide no guarantees of representation density. However, metrics in Table \\ref{tab:xg_reptile_atis_1} and \\ref{tab:xgr_spider_1} do not directly explain if this expected effect arises. To this end, we visualize ATIS test set encoder outputs using PCA \\citep{doi:10.1137\/090771806} in Figure \\ref{fig:pca}. We contrast English (support) and French and Chinese as the most and least similar target languages. Using PCA allows for direct interpretation of low-dimensional distances across approaches. 
Cross-lingual similarity is a proxy for manifold alignment -- as our goal is accurate cross-lingual transfer from closely aligned representations from source and target languages \\citep{xia-etal-2021-metaxl, sherborne-lapata-2022-zero}.\n\nAnalyzing Figure \\ref{fig:pca}, we observe meta-learning methods (\\emph{Reptile-EN$\\rightarrow$FT-All},~\\mbox{\\sc XG-Reptile}) to fit target languages closer to the support (English, yellow circle). In contrast, methods not utilizing meta-learning (\\emph{Train-EN$\\cup$All},~\\emph{Train-EN$\\rightarrow$FT-All}) appear less ordered and with weaker representation overlap. Encodings from \\mbox{\\sc XG-Reptile} are less separable across languages and densely clustered, suggesting the regularized manifold hypothesized in Section \\ref{sec:methodology} ultimately yields improved cross-lingual transfer. Visualizing encodings from English in the \\emph{Reptile-EN} model \\textit{before} fine-tuning produces a similar cluster (not shown), however, required fine-tuning results in ``spreading'' leading to less cross-lingual similarity. \n\nWe also quantitatively examine the average encoding change in Figure \\ref{fig:pca} using Cosine similarity and Hausdorff distance \\citep{patra-etal-2019-bilingual} between English and each target language. Cosine similarity is measured pair-wise across parallel inputs in each language to gauge similarity from representations with equivalent SQL outputs. As a measure of mutual proximity between sets, Hausdorff distance denotes a worst-case distance between languages to measure more general ``closeness''. Under both metrics, \\mbox{\\sc XG-Reptile} yields the best performance with the most substantial pair-wise similarity and Hausdorff similarity. These indicators for cross-lingual similarity further support the observation that our expected behavior is legitimately occurring during training. \n\nOur findings better explain \\textit{why} our \\mbox{\\sc XG-Reptile} performs above other training algorithms. Specifically, our results suggest that \\mbox{\\sc XG-Reptile} learns a \\textit{regularized manifold} which produces stronger cross-lingual similarity and improved parsing compared to Reptile \\textit{fine-tuning a manifold}. This contrast will inform future work for cross-lingual meta-learning where \\mbox{\\sc XG-Reptile} can be applied. 
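The similarity measurements described above can be reproduced in outline as follows. This is a sketch under the assumption that sentence-averaged encoder outputs are available as row-aligned NumPy arrays; the function names are ours, not from a released codebase.

\\begin{verbatim}
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.decomposition import PCA

def pairwise_cosine(en_enc, tgt_enc):
    """Mean cosine similarity over parallel (same-SQL) utterance pairs."""
    en = en_enc / np.linalg.norm(en_enc, axis=1, keepdims=True)
    tgt = tgt_enc / np.linalg.norm(tgt_enc, axis=1, keepdims=True)
    return float(np.mean(np.sum(en * tgt, axis=1)))

def hausdorff_distance(en_enc, tgt_enc):
    """Symmetric Hausdorff distance between the two encoding sets."""
    return max(directed_hausdorff(en_enc, tgt_enc)[0],
               directed_hausdorff(tgt_enc, en_enc)[0])

def pca_2d(*encoding_sets):
    """2-D projection of the joint encoding space, as in the figure."""
    return PCA(n_components=2).fit_transform(np.vstack(encoding_sets))
\\end{verbatim}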
\n\n\\paragraph{Error Analysis}\n\n\\begin{figure}[t]\n\\centering\n\\begin{tabulary}{\\columnwidth}{@{}LL@{}}\n\\toprule\nEN & {\\footnotesize Show me all flights from San Jose to Phoenix} \\\\ \\midrule\nFR & {\\footnotesize Me montrer tous les vols de San Jos\\`e \\'a Phoenix } \\\\ \\midrule\n$\\times$ & {\\footnotesize SELECT DISTINCT flight\\_1.flight\\_id FROM flight flight\\_1, airport\\_service airport\\_service\\_1, city city\\_1, airport\\_service airport\\_service\\_2, city city\\_2 WHERE flight\\_1.from\\_airport = airport\\_service\\_1.airport\\_code AND airport\\_service\\_1.city\\_code = city\\_1.city\\_code AND city\\_1.city\\_name = \\colorbox{red}{'SAN FRANCISCO'} AND flight\\_1.to\\_airport = airport\\_service\\_2.airport\\_code AND airport\\_service\\_2.city\\_code = city\\_2.city\\_code AND city\\_2.city\\_name = \\colorbox{red}{'PHILADELPHIA'};} \\\\ \\midrule\n\\checked & {\\footnotesize SELECT DISTINCT flight\\_1.flight\\_id FROM flight flight\\_1, airport\\_service airport\\_service\\_1, city city\\_1, airport\\_service airport\\_service\\_2, city city\\_2 WHERE flight\\_1.from\\_airport = airport\\_service\\_1.airport\\_code AND airport\\_service\\_1.city\\_code = city\\_1.city\\_code AND city\\_1.city\\_name = \\colorbox{ForestGreen}{'SAN JOSE'} AND flight\\_1.to\\_airport = airport\\_service\\_2.airport\\_code AND airport\\_service\\_2.city\\_code = city\\_2.city\\_code AND city\\_2.city\\_name = \\colorbox{ForestGreen}{'PHOENIX'};} \\\\ \\bottomrule\n\\end{tabulary}\n\\caption{Contrast between SQL from a French input from ATIS for \\emph{Train-EN$\\cup$All} and \\mbox{\\sc XG-Reptile}. The entities ``San Jos\\'{e}'' and ``Phoenix'' are not observed in the 1\\% sample of French data but are mentioned in the English support data. The \\emph{Train-EN$\\cup$All} approach fails to connect attributes seen in English when generating SQL from French inputs ($\\times$). Training with \\mbox{\\sc XG-Reptile} better leverages support data to generate accurate SQL from other languages (\\checked).\n}\n\\label{tab:entity-error}\n\\vspace{-1em}\n\\end{figure}\n\n\nWe can also examine \\textit{where} the improved cross-lingual transfer influences parsing performance. Similar to Figure \\ref{fig:pca}, we consider the results of models using 1\\% sampling as the worst-case performance and examine where \\mbox{\\sc XG-Reptile} improves on other methods on the test set (448 examples) over five languages.\n\nAccurate semantic parsing requires sophisticated entity handling to translate mentioned proper nouns from utterance to logical form. In our few-shot sampling scenario, \\textit{most} entities will appear in the English support data (e.g. ``Denver'' or ``American Airlines''), and \\textit{some} will be mentioned within the target language sample (e.g. ``Mine\\'apolis'' or ``Nueva York'' in Spanish). These samples cannot include all possible entities -- effective cross-lingual learning must ``connect'' these entities from the support data to the target language -- such that these names can be parsed when predicting SQL from the target language. As shown in Figure \\ref{tab:entity-error}, the failure to recognize entities from support data, for inference on target languages, is a critical failing of all models besides \\mbox{\\sc XG-Reptile}. \n\nThe improvement in cross-lingual similarity using \\mbox{\\sc XG-Reptile} expresses a specific improvement in entity recognition. 
Compared to the worst performing model, \\emph{Train-EN$\\cup$All}, 55\\% of improvement accounts for handling entities absent from the 1\\% target sample but present in the 99\\% English support data. While \\mbox{\\sc XG-Reptile} can generate accurate SQL, other models are limited in expressivity to fall back on using seen entities from the 1\\% sample. This notably accounts for 60\\% of improvement in parsing Chinese, with minimal orthographic overlap to English, indicating that \\mbox{\\sc XG-Reptile} better leverages support data without reliance on token similarity. In 48\\% of improved parses, entity mishandling is the \\textit{sole error} -- highlighting how limiting poor cross-lingual transfer is for our task. \n\nOur model also improves handling of novel \\textit{modifiers} (e.g. ``on a weekday'', ``round-trip'') absent from target language samples. Modifiers are often realized as additional sub-queries and filtering logic in SQL outputs. Comparing \\mbox{\\sc XG-Reptile} to \\emph{Train-EN$\\cup$All}, 33\\% of improvement is related to modifier handling. Less capable systems fall back on modifiers observed from the target sample or ignore them entirely to generate inaccurate SQL. \n\nWhile \\mbox{\\sc XG-Reptile} better links parsing knowledge from English to target languages -- the problem is not solved. Outstanding errors in all languages primarily relate to query complexity, and the cross-lingual transfer gap is not closed. Furthermore, our error analysis suggests a future direction for optimal sample selection to minimize the error from interpreting unseen phenomena.\n\n\n\\section{Experimental Design}\n\\label{sec:exp_design}\n\nWe evaluate \\mbox{\\sc XG-Reptile} against several comparison systems\nacross multiple languages. Where possible, we re-implement existing\nmodels and use identical data splits to isolate the contribution of\nour training algorithm.\n\n\\subsection{Data}\n\nWe report results on two semantic parsing datasets. First on\n\\textbf{ATIS} \\citep{hemphill-etal-1990-atis}, using the multilingual\nversion from \\citet{sherborne-lapata-2022-zero} pairing utterances in six\nlanguages (English, French, Portuguese, Spanish, German, Chinese) to\nSQL queries. ATIS is split into 4,473 training pairs with 493 and 448\nexamples for validation and testing, respectively. \nWe report performance as execution accuracy to test if\npredicted SQL queries can retrieve accurate database results.\n\nWe also evaluate on \\textbf{Spider} \\citep{yu-etal-2018-spider}, combining\nEnglish and Chinese \\citep[CSpider]{Min2019-CSPIDER} versions as a\ncross-lingual task. The latter translates all questions to Chinese but\nretains the English database. Spider is significantly more\nchallenging; it contains~10,181 questions and 5,693 unique SQL queries\nfor 200 multi-table databases over 138 domains. We use the same split\nas \\citet{wang-etal-2021-meta-DGMAML} to measure generalization to\nunseen databases\/table-schema during testing. This split uses 8,659\nexamples from 146 databases for training and 1,034 examples from 20\ndatabases for validation. The test set contains 2,147 examples from 40\nheld-out databases and is held privately by the authors. To our knowledge, we report the first multilingual approach for Spider by training one model for English and Chinese. Our\nchallenge is now multi-dimensional, requiring cross-lingual and\ncross-domain generalization. 
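For ATIS, the execution-accuracy metric mentioned above amounts to comparing the result sets returned by the gold and predicted SQL; a minimal sketch, assuming an SQLite copy of the database (an assumption for exposition, not the paper's tooling), is given below. Spider instead uses exact set match, as described next.

\\begin{verbatim}
import sqlite3
from collections import Counter

def execution_match(pred_sql, gold_sql, db_path):
    """True if both queries execute and return the same multiset of rows."""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False              # unexecutable predictions count as incorrect
    finally:
        conn.close()
    return Counter(pred_rows) == Counter(gold_rows)

def execution_accuracy(pairs, db_path):
    """pairs: iterable of (predicted_sql, gold_sql) tuples."""
    pairs = list(pairs)
    return sum(execution_match(p, g, db_path) for p, g in pairs) / len(pairs)
\\end{verbatim}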
Following \\citet{yu-etal-2018-spider}, we\nreport exact set match accuracy for evaluation.\n\n\\subsection{Sampling for Generalization}\n\\label{sec:sampling_for_xgen}\n\nTraining for cross-lingual generalization often uses parallel\nsamples across languages. We illustrate this in\nEquation~\\eqref{eq:sample1} where~$y_1$ is the equivalent\noutput for inputs, $x_{1}$, in each language:\n\\begin{align}\n {\\rm EN:}&\\left( x_1, y_1 \\right)~{\\rm DE:}\\left( x_1, y_1\n \\right)~{\\rm ZH:}\\left( x_1, y_1 \\right) \\label{eq:sample1}\n\\end{align}\nHowever, high sample overlap risks trivializing the task because\nmodels are not learning from new pairs, but instead matching only new\n\\textit{inputs} to known outputs. A preferable evaluation will test\ncomposition of novel outputs from unseen inputs:\n\\begin{align}\n {\\rm EN:}&\\left( x_1, y_1 \\right)~{\\rm DE:}\\left( x_2, y_2 \\right)~{\\rm ZH:}\\left( x_2, y_2 \\right) \\label{eq:sample2}\n\\end{align}\nEquation~\\eqref{eq:sample2} samples exclusive, disjoint datasets for\nEnglish and target languages during training. In other words, this\nprocess is \\textit{subtractive} e.g.,~a 5\\%~sample of German (or Chinese) target\ndata leaves 95\\% of data as the English support. This is similar to\n\\mbox{K-fold} cross-validation used to evaluate across many data\nsplits. We sample data for our experiments with\nEquation~\\eqref{eq:sample2}. It is also possible to use\nEquation~\\eqref{eq:sample3}, where target samples are also disjoint,\nbut we find this setup results in too few English examples for\neffective\nlearning.\n\\begin{align}\n {\\rm EN:}&\\left( x_1, y_1 \\right)~{\\rm DE:}\\left( x_2, y_2 \\right)~{\\rm ZH:}\\left( x_3, y_3 \\right) \\label{eq:sample3}\n\\end{align}\n\n\\subsection{Semantic Parsing Models}\n\nWe use a Transformer encoder-decoder model\nsimilar to \\citet{sherborne-lapata-2022-zero} for our ATIS experiments. We use the same mBART50\nencoder \\citep{tang2020multilingua-mbart50} and train a Transformer\ndecoder from scratch to generate SQL.\n\nFor Spider, we use the RAT-SQL model \\citep{wang-etal-2020-rat} which\nhas formed the basis of many performant submissions to the\n\\href{https:\/\/yale-lily.github.io\/spider}{Spider leaderboard}. RAT-SQL\ncan successfully reason about unseen databases and table schema using\na novel schema-linking approach within the encoder. We use the version from\n\\citet{wang-etal-2021-meta-DGMAML} with mBERT\n\\citep{devlin-etal-2019-BERT} input embeddings for a unified model between\nEnglish and Chinese inputs. Notably, RAT-SQL can be over-reliant on lexical similarity features between\ninput questions and tables \\cite{wang-etal-2020-rat}. This raises the\nchallenge of generalizing to Chinese where such overlap is null.\nFor fair comparison, we implement identical models as prior work on\neach dataset and only evaluate the change in training algorithm. \nThis is why we use an mBART50 encoder component for ATIS experiments\nand different mBERT input embeddings for Spider experiments.\n\n\\subsection{Comparison Systems}\n\nWe compare our algorithm against several strong baselines and\nadjacent training methods including:\n\\begin{itemize}\n\\item[\\textbf{Monolingual Training}] A monolingual Transformer\nis trained on gold-standard professionally translated data for\n each new language. 
This is a monolingual upper bound without\n few-shot constraints.\\vspace{-0.5em}\n\\item[\\textbf{Multilingual Training}] A multilingual Transformer \n is trained on the union of all data from the ``Monolingual Training''\n method. This ideal upper bound uses all data in all languages\n without few-shot constraints.\\vspace{-0.5em}\n\\item[\\textbf{Translate-Test}] A monolingual Transformer is trained\n on source English data ($\\mathcal{S}_{\\rm EN}$). Machine\n translation is used to translate test data from additional languages into\n English. Logical forms are predicted from translated data using the English\n model.\\vspace{-0.5em}\n\\item[\\textbf{Translate-Train}] Machine translation is used to translate\n English training data into each target language. A monolingual\n Transformer is trained on translated training data and logical forms\nare predicted using this model. \\vspace{-0.5em}\n\\item[\\textbf{Train-EN$\\cup$All}] A Transformer is trained on English\n data and samples from \\emph{all} target languages together in a single stage i.e.,~$\\mathcal{S}_{\\rm EN}\\cup\\mathcal{S}_{L}$. This is superior to\n training without English (e.g., on $\\mathcal{S}_{L}$ only), we\n contrast to this approach for more competitive\n comparison. \\vspace{-0.5em}\n\\item[\\textbf{TrainEN$\\rightarrow$FT-All}] We first train on English\n support data, $\\mathcal{S}_{\\rm EN}$, and then fine-tune on target\n samples, $\\mathcal{S}_{L}$. \\vspace{-0.5em}\n\\item[\\textbf{Reptile-EN$\\rightarrow$FT-All}] Initial training uses {Reptile}\n \\citep{DBLP:journals\/corr\/abs-1803-02999-REPTILE} on English support\n data,~$\\mathcal{S}_{\\rm EN}$, followed by fine-tuning on target\n samples, $\\mathcal{S}_{L}$. This is a typical usage of\n {Reptile} for training a low-resource multi-domain parser\n \\citep{chen-etal-2020-low-resource-domain-adaptation-task-oriented-parsing}.\n\\end{itemize}\n\nWe also compare to DG-FMAML \\citep{wang-etal-2021-meta-DGMAML} as a\nspecial case of \\mbox{\\sc XG-Reptile} when~\\mbox{$K=1$}. Additionally,\nwe omit pairwise versions of \\mbox{\\sc XG-Reptile} (e.g., separate models\ngeneralizing from English to individual languages). \nThese approaches demand more computation and demonstrated no significant\nimprovement over a multi-language approach. All Machine Translation uses Google Translate\n\\citep{gtranslate}.\n\n\n\\subsection{Training Configuration}\n\n\\begin{table}[t]\n\\centering\n\\small\n\\begin{tabular}{@{}lcc@{}}\n\\toprule\n & ATIS & Spider \\\\ \\midrule\nBatch Size & 10 & 16 \\\\\nInner Optimizer & \\multicolumn{2}{c}{SGD} \\\\\nInner LR & \\multicolumn{2}{c}{$1\\times 10^{-4}$} \\\\\nOuter Optimizer & \\multicolumn{2}{c}{Adam { \\citep{DBLP:journals\/corr\/KingmaB14}}} \\\\\nOuter LR & $1\\times 10^{-3}$ & $5\\times 10^{-4}$ \\\\\nOptimum $K$ & 10 & 3 \\\\\nMax Train Steps & \\multicolumn{2}{c}{20,000} \\\\\nTraining Time & 12 hours & 2.5 days \\\\ \\bottomrule\n\\end{tabular}%\n\\caption{Experimental Hyperparameters for \\mbox{\\sc XG-Reptile} on ATIS and Spider set primarily by replicating prior work.}\n\\label{tab:parameter_table}\n\\vspace{-1em}\n\\end{table}\n\nExperiments focus on the expansion from English to additional\nlanguages, where we use English as the ``support'' language and\nadditional languages as ``target''. Key hyperparameters are outlined\nin Table \\ref{tab:parameter_table}. 
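For reference, the ATIS settings in Table~\\ref{tab:parameter_table} could be captured in a configuration dictionary such as the following; the key names are illustrative rather than taken from a released codebase (the Spider column differs in batch size 16, outer learning rate $5\\times 10^{-4}$, and $K=3$).

\\begin{verbatim}
XG_REPTILE_ATIS_CONFIG = {
    "batch_size": 10,
    "inner_optimizer": "SGD",   # inner learning rate alpha
    "inner_lr": 1e-4,
    "outer_optimizer": "Adam",  # outer learning rate beta
    "outer_lr": 1e-3,
    "inner_steps_K": 10,
    "max_train_steps": 20_000,
}
\\end{verbatim}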
We train each model using the\ngiven optimizers with early stopping where model selection is through\nminimal validation loss for combined support and target\nlanguages. Input utterances are tokenized using SentencePiece\n\\citep{kudo-richardson-2018-sentencepiece} and Stanza\n\\citep{qi-etal-2020-stanza} for ATIS and Spider, respectively. All\nexperiments are implemented in Pytorch on a single V100 GPU. We report\nkey results for ATIS averaged over three seeds and five random data splits. For Spider, we submit the best singular model from five random splits to the leaderboard.\n\n\n\n\\section{Related Work}\n\\label{sec:rw}\n\n\\paragraph{Meta-Learning for Generalization}\n\nMeta-Learning\\footnote{We refer the interested reader to\n\\citet{wang-etal-fewshot-survey},\n\\citet{hospedales-meta-learning-survey}, and\n\\citet{Wang2021GeneralizingTU-survey} for more extensive surveys on\nmeta-learning.} has recently emerged as a promising technique for\ngeneralization, delivering high performance on unseen domains by\n\\emph{learning to learn}, i.e., improving learning over multiple\nepisodes\n\\citep{hospedales-meta-learning-survey,Wang2021GeneralizingTU-survey}. A popular approach is Model-Agnostic Meta-Learning\n\\citep[MAML]{pmlr-v70-finn17a}, wherein the goal is to train a model\non a variety of learning tasks, such that it can solve \\emph{new}\ntasks using a small number of training samples. In effect, MAML\nfacilitates task-specific fine-tuning using few samples in\na two-stage process. MAML requires computing higher-order gradients\n(i.e.,~``gradient through a gradient'') which can often be\nprohibitively expensive for complex models. This limitation has motivated\n\\textit{first-order} approaches to MAML which offer similar\nperformance with improved computational efficiency.\n\nIn this vein, the {Reptile} algorithm\n\\citep{DBLP:journals\/corr\/abs-1803-02999-REPTILE} transforms the\nhigher-order gradient approach into $K$ successive\nfirst-order steps. \n{Reptile}-based training learns an \\textit{optimal manifold} across tasks (i.e., a high-density parameter sub-region biased for strong cross-task likelihood), then\nsimilarly followed by rapid fine-tuning. By learning an optimal \ninitialization, meta-learning proves useful for low-resource\nadaptation by minimizing the data required for out-of-domain tuning on new tasks. \\citet{kedia-etal-2021-beyond-reptile} also demonstrate the utility of Reptile to improve \\emph{single-task} performance. We build on this to examine single-task cross-lingual transfer using the \\textit{optimal manifold} learned with Reptile.\n\n\\paragraph{Meta-Learning for Semantic Parsing}\nA variety of NLP applications have adopted meta-learning in zero- and\nfew-shot learning scenarios as a method of explicitly training for\ngeneralization \\citep{lee-etal-2021-meta-tutorial,\n hedderich-etal-2021-survey-metanlp}. Within semantic parsing, there\nhas been increasing interest in \\textit{cross-database generalization}, motivated by datasets such as Spider\n\\citep{yu-etal-2018-spider} requiring navigation of unseen\ndatabases\n\\citep{sp-over-many-kbs-Herzig2017, suhr-etal-2020-exploring}.\n\nApproaches to generalization have included simulating source and\ntarget domains \\citep{givoli-reichart-2019-zero-shot-semantic-parsing}\nand synthesizing new training data based on unseen databases\n\\citep{zhong-etal-2020-grounded-adaptation-gazp,\n xu-etal-2020-autoqa}. 
Meta-learning has demonstrated fast adaptation\nto new data within a monolingual low-resource setting\n\\citep{huang-etal-2018-natural-structured-query-meta-learning-parsing,guo-etal-2019-coupling-context-meta-learning-parsing,DBLP:journals\/corr\/abs-1905-11499,Sun_Tang_Duan_Gong_Feng_Qin_Jiang_2020-meta-learning-parsing}. Similarly,\n\\citet{chen-etal-2020-low-resource-domain-adaptation-task-oriented-parsing}\nutilize Reptile to improve generalization of a model, trained\non source domains, to fine-tune on new domains. Our\nwork builds on \\citet{wang-etal-2021-meta-DGMAML}, who explicitly\npromote monolingual cross-domain generalization by ``meta-generalizing'' across\ndisjoint domain-specific batches during training.\n\n\\paragraph{Cross-lingual Semantic Parsing}\n\nA surge of interest in cross-lingual NLU has seen the creation of many\nbenchmarks across a breadth of languages\n\\citep{conneau2018xnli,hu2020xtreme,liang-etal-2020-xglue}, thereby\nmotivating significant exploration of cross-lingual transfer\n\\citep[\\emph{inter alia}]{nooralahzadeh-etal-2020-zero-xmaml,xia-etal-2021-metaxl,\n xu-etal-2021-soft-layer-selection-meta-simpler-xmaml,zhao-etal-2021-closer}.\nPrevious approaches to cross-lingual semantic parsing assume\nparallel multilingual training data\n\\citep{multilingual-sp-hierch-tree-Jie2014} and exploit multi-language\ninputs for training without resource constraints\n\\citep{arch-for-neural-multisp-Susanto2017,\n neural-hybrid-trees-susanto2017}.\n\nThere has been recent interest in evaluating if machine translation is\nan economic proxy for creating training data in new languages\n\\citep{sherborne-etal-2020-bootstrapping,moradshahi-etal-2020-localizing}.\nZero-shot approaches to cross-lingual parsing have also been explored using\nauxiliary training objectives\n\\cite{yang-etal-2021-frustratingly-simple-DRS-parsing,sherborne-lapata-2022-zero}.\nCross-lingual learning has also been gaining traction in the adjacent\nfield of spoken-language understanding (SLU).\nFor datasets such as MultiATIS \\citep{Upadhyay2018-multiatis},\nMultiATIS++ \\citep{xu-etal-2020-end-multiatis}, and MTOP\n\\citep{li-etal-2021-mtop}, zero-shot cross-lingual transfer has been\nstudied through specialized decoding methods\n\\citep{zhu-etal-2020-dont-parse-insert}, machine translation\n\\citep{nicosia-etal-2021-translate-fill-zero-shot-semparse}, and\nauxiliary objectives \\citep{van-der-goot-etal-2021-masked}.\n\nCross-lingual semantic parsing has mostly remained orthogonal to the\ncross-database generalization challenges raised by datasets such as\nSpider \\citep{yu-etal-2018-spider}. While we primarily present\nfindings for multilingual ATIS into SQL \n\\citep{hemphill-etal-1990-atis}, we also train a parser on both Spider\nand its Chinese version \\citep{Min2019-CSPIDER}. To the best of our\nknowledge, we are the first to explore a multilingual approach to this\ncross-database benchmark. We use {Reptile} to learn the overall task\nand leverage domain generalization techniques\n\\citep{li-hospedales-metadg, wang-etal-2021-meta-DGMAML} for\nsample-efficient cross-lingual transfer.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}