{"text":"\\section{Introduction.}\n\nFully self-consistent N-body simulations, where each galaxy is represented by a large number of particles, are a useful, albeit expensive, tool for studying the evolution of galaxy groups and clusters. However, for simulations of large clusters of galaxies, like the Coma cluster, the necessary computing time is prohibitive. 
As a substitute people have considered explicit simulations, in which each galaxy is represented by a single point and the physics of the interactions is modelled by explicit prescriptions for merging conditions. In particular, a variety of recipes have been explored for the conditions two galaxies must fulfill in order to merge. In general, these merging conditions are based on self-consistent simulations of two-galaxy collisions, and do not include the tidal forces between the galaxies or collisions involving more than two galaxies. It is thus not a priori certain that they will perform well in simulations of group or cluster evolution. In some cases (Merritt 1983; Richstone and Malumuth 1983; Mamon 1987), the authors also introduce other effects like dynamical friction and tidal forces from the background. The main advantage of this type of approach is that it is inexpensive in computing time and therefore allows one to explore a wide parameter space. In any case, a considerable fraction of the results on the dynamics of galaxy groups is based on the explicit approach. We may cite works by Jones and Efstathiou (1979), Roos and Norman (1979), Aarseth and Fall (1980), Cooper and Miller (1981), Roos (1981), Roos and Aarseth (1982), Merritt (1983), Richstone and Malumuth (1983), Malumuth and Richstone (1984), Saarinen and Valtonen (1985), Mamon (1987), Navarro et al. (1987) and Schindler and B\\\"ohringer (1993).\n\nNot many self-consistent simulations of groups with more than 10 galaxies can be found in the literature. We can cite the articles by Carnevalli et al. (1981), Ishizawa et al. (1983), Ishizawa (1986), Rhee and Roos (1990), Barnes (1992), Funato et al. (1993) and Bode et al. (1994). 
The first works of this kind used Aarseth's (1971) N-body code and a limited number of points, typically $10-20$, to represent each galaxy, and only recently has it become possible to use of the order of 1000 particles per galaxy.\n\nOur aim is to compare the two approaches to see whether, and under what conditions, one can use explicit simulations and have confidence in the results. For this purpose, we have evolved a set of initial conditions in two different ways. One way is to use an N-body code where the physics is included explicitly; the other is to use self-consistent simulations and a treecode (Barnes and Hut 1986; Hernquist 1987 for a vectorised version), representing each galaxy either by $100$ or by $900$ points. In section 2 we describe our initial conditions and the different merging criteria used so far in the literature. In section 3 we compare the results of fully self-consistent numerical simulations to those of explicit simulations made with the various merging criteria, both without (section 3.1) and with dynamical friction (section 3.2). This comparison led us to propose a new merging criterion (section 3.3), whose performance we also compare with the fully self-consistent simulations. In this section we consider only groups with no common all-encompassing dark matter halo. Simulations including such a halo are presented in section 4, where again we compare the results of self-consistent and explicit simulations. We summarise and discuss our results in section 5. \n\n\\section{Initial conditions and merging criteria} \n\nWe have considered five different initial conditions, labeled A, B, C, D and H, each for a system of 50 galaxies. In simulations A, B, D and H the radial distances from the galaxy centers to the center of the group were picked at random between 0 and $R_{out}$. For simulation C the central part of the sphere contained no galaxy, i.e. 
the radial distances were picked between $0.5R_{out}$ and $R_{out}$. For simulations A to D all the mass is in the individual galaxies, while in simulation H we included a common live halo, centered on the center of the group and containing half of the total mass. The halo density distribution is a Plummer one with a core radius equal to half $R_{out}$. Run A starts in free-fall, and we will often refer to it as the collapsing group. The velocity dispersions in the remaining runs were chosen to be independent of radius, Gaussian, isotropic, and such that the system of galaxies starts off in virial equilibrium. Simulation D is similar to B but more compact, as the radius of the sphere containing all the galaxies is half that of run B. The particles in a given galaxy were initially taken to follow a Plummer distribution of core radius equal to 0.2 and of unit mass. When evolved in isolation, an individual galaxy first shows a low amplitude relaxation in the very first few time steps, because the simulations use a softening while the analytical Plummer sphere does not. After that, and for a time equal to that during which the group simulations were run, the galaxies do not evolve any further. Thus, during that time, for the representations with 900 particles per galaxy, the radii containing 25\\%, 50\\% and 75\\% of the mass of the galaxy vary only by a couple of percent. 
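For reference, the model galaxies above are Plummer spheres of core radius 0.2 and unit mass. A minimal sketch (ours, not code from the paper) of the standard inversion of the Plummer cumulative mass profile, giving the radius that encloses a given mass fraction:

```python
import math

def plummer_radius(mass_fraction, core_radius):
    # Invert the Plummer cumulative mass profile
    # M(<r)/M = r^3 / (r^2 + a^2)^(3/2) to get the radius enclosing a
    # given mass fraction (standard result, not from the paper).
    return core_radius / math.sqrt(mass_fraction ** (-2.0 / 3.0) - 1.0)

# Half-mass radius of the core-radius 0.2, unit-mass model galaxies.
r_half = plummer_radius(0.5, 0.2)
```

For a Plummer sphere this half-mass radius is about 1.3 core radii, i.e. roughly 0.26 in the paper's units.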
For the representation with 100 particles the radii containing 25\\% and 50\\% of the mass vary by 4-5\\%, and only the radius containing 75\\% of the mass varies significantly, particularly in the later phases of the evolution.\n\nMore information on the initial conditions for the simulations is summarised in Table~1. {\\sl Column}~1 contains the name of the simulation, {\\sl Column}~2 gives $R_{out}$, the radius of the sphere containing the group at the start of the simulation, {\\sl Column}~3 shows the initial mean separation $<d>$ between the galaxies and {\\sl Column}~4 gives the ratio of the initial velocity dispersion of the galaxies considered as point masses, $\\sigma_{cl}$, to the velocity dispersion of the particles within a single galaxy, $\\sigma_{gal}$. {\\sl Column}~5 contains the crossing time defined as\n\\begin{equation}\nt_{cr}=\\left( {\\frac{2R_h^3}{GM}}\\right) ^{1\/2}, \\end{equation}\n\n\\noindent where $R_h$ represents the half mass radius. Finally, {\\sl Column}~6 contains the ratio between $t_{tot}$, the total duration of the simulation, and $t_{cr}$. Throughout this paper our units are such that the gravitational constant $G~=~1$. \n\n\\begin{table}\n\\begin{center}\n\\caption{Initial conditions of the simulations} \\vskip 0.25cm\n\\begin{tabular}{llllll}\n\\hline\nRun & $R_{out}$ & $<d>$ & $\\sigma _{cl}\/\\sigma _{gal}$ & $t_{cr}$ & $t_{tot}\/t_{cr}$ \\\\ \\hline\nA & 30 & 8.0 & 0.0 & 15.3 & 2.0 \\\\\nB & 20 & 6.8 & 1.4 & 4.5 & 6.7 \\\\\nC & 20 & 10.3 & 1.0 & 11.6 & 2.6 \\\\\nD & 10 & 3.4 & 1.9 & 1.6 & 18.7 \\\\\nH & 20 & 7.4 & 2.7 & 8.9 & 3.4\\\\\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nThe self-consistent simulations were run using the vectorised version (Hernquist 1988) of the Barnes-Hut tree algorithm (Barnes and Hut 1986), with a softening of 0.05 and an opening angle $\\theta=0.7$. 
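As an illustration of Eq. (1) and of Column 5 of Table 1, the crossing time can be evaluated as follows; this sketch, with its function name and the unit convention $G=1$, is our own:

```python
def crossing_time(r_half, total_mass, G=1.0):
    # Eq. (1): t_cr = (2 R_h^3 / (G M))**0.5, with G = 1 in the paper's units.
    return (2.0 * r_half ** 3 / (G * total_mass)) ** 0.5
```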
In explicit simulations each galaxy is represented by a single point, with which are associated a mass, an internal energy and a core radius. These parameters may change during the evolution of the system due to the different interactions suffered by the point-galaxies, and we used the recipes of Aarseth and Fall (1980) to follow their time evolution. The explicit simulations are of course much faster than the self-consistent ones. A complete self-consistent simulation with $900$ points per galaxy took $521663$ seconds on a Cray YMP 2L computer. The self-consistent simulation with $100$ particles per galaxy took only $5\\%$ of this time, and the explicit simulation only $0.2\\%$. \n\nIn order to compare the results of the different kinds of simulations we consider the time evolution of the following global parameters of the groups:\n\n\\begin{enumerate}\n\\item Number of galaxies: $N_{gal}$\n\\item Half mass radius: $R_h$, where $M(R_h)=1\/2\\,M_{tot}$\n\\item Three dimensional velocity dispersion:\n\\end{enumerate}\n\n$$\\sigma_v^2=\\sum_{i=1}^{N_{gal}}\\frac{m_i\\mid {\\bf v_i}-<{\\bf v}> \\mid^2}{M_{tot}-m_i(t=0)},\\,\\,\n{\\rm where}\\,\\,<{\\bf v}>=\\sum_{i=1}^{N_{gal}}\\frac{m_i {\\bf v_i}} {M_{tot}}.$$\n\n\\noindent Here all quantities are evaluated at each timestep, except for $m_i(t=0)$, the mass of each individual galaxy at the start of the simulations, which is taken to be $m_i(t=0)=1$.\n\nIn our explicit simulations we consider, in a first stage, only merging between galaxies. In a second set of simulations we also include the effect of dynamical friction. In this way we can check the importance of both effects. Merging between galaxies is usually described in the literature using an explicit condition involving the separation and relative velocities of the pair of galaxies. 
If this condition is fulfilled, the two galaxies are merged into a single one at this timestep, taking into account the conservation of mass, energy and momentum (Aarseth \\& Fall 1980). If this condition is not fulfilled, both galaxies survive and continue their motion. \n\nWe found in the literature various criteria which have been used to decide whether two galaxies will merge, and we used each of them in turn in our explicit simulations. The condition of Roos and Norman (1979, hereafter condition RN) is: \\begin{equation}\nv(r_p)\\leq 3.1\\sigma (1-0.3\\frac{r_p}{R_g}) \\left(\\frac{1+m_2\/m_1}{2}\\right)^{1\/4}\n\\end{equation}\n\n\\noindent where $m_2 \\leq m_1$ and $r_p\/R_g < 1$. $r_p$ is the minimum separation between the galaxies, $v(r_p)$ is their relative velocity at $r_p$, and $R_g$ is the larger of their radii. This criterion was obtained empirically from collisions between galaxies described by fewer than $100$ particles. \n\nAarseth and Fall (1980, hereafter condition AF) used the criterion: \\begin{equation}\n{\\left[ \\frac{r_p}{{2.6(\\epsilon _1+\\epsilon _2)}}\\right] }^2+ {\\left[\\frac{v(r_p)}{{1.16v_e(r_p)}}\\right] }^2\\leq 1,\n \\end{equation}\n\n\\noindent which is a simple fit to the results of the simulations of van Albada and van Gorkom (1977), White (1978) and Roos and Norman (1979). 
The core radius of galaxy {\\it i} is $\\epsilon _i$, while $v_e(r_p)$ is the escape velocity of the system composed of the two galaxies before merging at pericenter:\n\\begin{equation}\nv_e^2(r_p) = 2 G (m_1 + m_2)(r_p^2 + \\epsilon_1^2 + \\epsilon_2^2)^{-1\/2}.\n\\end{equation}\n\nFarouki and Shapiro (1982, hereafter condition FS) obtained a similar condition for the merging of two rotating galaxies with massive halos and spins aligned with the orbital angular momentum: \\begin{equation}\n{\\left[ \\frac{r_p}{{5.5(\\epsilon _1+\\epsilon _2)}}\\right] }^2+ {\\left[ \\frac{v(r_p)}{{1.1v_e(r_p)}}\\right] }^2\\leq 1. \\end{equation}\n\nThis condition predicts more mergings than the criterion of Aarseth and Fall (1980) for two reasons: it allows merging in more distant collisions, and the aligned spins favour merging. This criterion is not directly applicable to our case, where we use initially nonrotating Plummer spheres, but we include it for the sake of completeness.\n\nFinally, Richstone and Malumuth (1983, hereafter condition RM) use a different criterion:\n\\begin{equation}\nr_pv(r_p)\\leq \\left[ 8\/3\\,G^2\\,(m_1\\,+\\,m_2)^2<r^2> \\right] ^{1\/4},\n\\end{equation}\n\n\\noindent which is a generalisation of a criterion proposed by Tremaine (1980) to the case of different masses. The value $<r^2>$ is the mean quadratic radius of a galaxy. For the case of a Plummer sphere $<r^2>=\\epsilon ^2\/2$, and this is the value we have used in our simulations.\n\nTo save computer time we do not need to apply the adopted merging criterion to all galaxy pairs at all times. Following Navarro {\\it et al.} (1987), we check whether the condition is fulfilled only if the separation between two galaxies is smaller than $3(r_{h_1}+r_{h_2})$, where $r_{h_i}$ is the half mass radius of galaxy {\\it i}. 
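As a sketch (ours, not the authors' code) of how conditions RN, AF, FS and RM, together with the pre-filter of Navarro et al. (1987), might be evaluated for a galaxy pair; all function and argument names are our own:

```python
import math

def v_escape(r_p, m1, m2, eps1, eps2, G=1.0):
    # Eq. (4): softened escape velocity of the two-galaxy system at pericenter.
    return math.sqrt(2.0 * G * (m1 + m2) / math.sqrt(r_p**2 + eps1**2 + eps2**2))

def merge_RN(r_p, v_p, R_g, m1, m2, sigma):
    # Eq. (2), Roos & Norman (1979); assumes m2 <= m1 and r_p/R_g < 1.
    return v_p <= 3.1 * sigma * (1.0 - 0.3 * r_p / R_g) * ((1.0 + m2 / m1) / 2.0) ** 0.25

def merge_AF(r_p, v_p, m1, m2, eps1, eps2):
    # Eq. (3), Aarseth & Fall (1980): ellipse in the (r_p, v) plane.
    ve = v_escape(r_p, m1, m2, eps1, eps2)
    return (r_p / (2.6 * (eps1 + eps2))) ** 2 + (v_p / (1.16 * ve)) ** 2 <= 1.0

def merge_FS(r_p, v_p, m1, m2, eps1, eps2):
    # Eq. (5), Farouki & Shapiro (1982), for aligned spins.
    ve = v_escape(r_p, m1, m2, eps1, eps2)
    return (r_p / (5.5 * (eps1 + eps2))) ** 2 + (v_p / (1.1 * ve)) ** 2 <= 1.0

def merge_RM(r_p, v_p, m1, m2, r2_mean, G=1.0):
    # Eq. (6), Richstone & Malumuth (1983); r2_mean is the mean quadratic
    # radius <r^2> (= eps^2 / 2 for a Plummer sphere).
    return r_p * v_p <= (8.0 / 3.0 * G**2 * (m1 + m2) ** 2 * r2_mean) ** 0.25

def worth_checking(separation, rh1, rh2):
    # Pre-filter of Navarro et al. (1987): only test pairs closer than
    # three times the sum of their half mass radii.
    return separation < 3.0 * (rh1 + rh2)
```

Note that FS accepts pairs that AF rejects at the same pericentric separation, consistent with the larger radial coefficient (5.5 vs. 2.6).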
This separation is sufficiently large so that merging events are not missed, while considerably speeding up the computations. \n\nAs the simulations evolve a central giant ``galaxy\" is formed as a result of the mergings and\/or tidal stripping of the galaxies in the group. Dynamical friction between this and the remaining individual galaxies influences the evolution and we have therefore included this effect in the explicit simulations, using the well known Chandrasekhar (1943) formula for the deceleration: \\begin{equation}\n{\\bf a_v} = - \\frac{4\\pi G^2\\, m_{gal} \\ln \\Lambda \\rho({\\bf r})} {v^3} F(v){\\bf v}\n\\end{equation}\nwhere\n\\begin{equation}\nF(v) = erf(X) - \\frac{2X}{\\sqrt{\\pi}}e^{-X^2} \\end{equation}\nand $erf(X)$ is the error function, $X=v\/(\\sqrt{2}\\sigma)$, $\\sigma$ is the velocity dispersion of the objects in the background, and $m_{gal}$ is the mass of the galaxy travelling at speed ${\\bf v}$; $\\rho({\\bf r})$ is the density of the central galaxy, considered as a Plummer sphere, at the position of the secondary galaxy, ${\\bf r}$ being the relative separation of their centers, and $\\Lambda = b_{max} \/ b_{min}$, where $b_{max}$ and $b_{min}$ are the maximum and minimum impact parameters of encounters contributing to the drag. When we include a common halo we apply Eq. (7) twice, once for the central giant ``galaxy\" and once for the halo, adding the two accelerations. \n\nThe self-consistent simulations were analyzed as follows. First, we need to define the central giant ``galaxy\", which we will refer to in this paragraph simply as the central object. In order to do so, we analyze at each timestep separately each subsystem composed of the particles that were bound at $t=0$ in a single galaxy. Using the positions and velocities of these particles we discard from the subsystem all particles with positive energy relative to it and consider them as part of the central object. 
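A minimal sketch (ours) of the Chandrasekhar deceleration given by the two equations above; the names are our own, and the background density and velocity dispersion are taken as given:

```python
import math

def chandrasekhar_drag(v_vec, m_gal, rho, sigma, ln_lambda, G=1.0):
    # Eq. (7): a_v = -4 pi G^2 m_gal ln(Lambda) rho(r) F(v) v / v^3, with
    # Eq. (8): F(v) = erf(X) - (2X/sqrt(pi)) exp(-X^2), X = v / (sqrt(2) sigma).
    v = math.sqrt(sum(c * c for c in v_vec))
    X = v / (math.sqrt(2.0) * sigma)
    F = math.erf(X) - (2.0 * X / math.sqrt(math.pi)) * math.exp(-X * X)
    coeff = -4.0 * math.pi * G**2 * m_gal * ln_lambda * rho * F / v**3
    return [coeff * c for c in v_vec]
```

Since $F(v)>0$ for $v>0$, the returned acceleration is antiparallel to the velocity, i.e. a pure drag.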
The particles that still form a bound subsystem will define the state of the galaxy at this timestep. If after this process a galaxy contains fewer than $10\\%$ of the particles it had at $t=0$, we discard this subsystem as a galaxy and we add all its particles to the central object, thus considering that the initial galaxy has been definitively disrupted. For each of the remaining galaxies we use the $35\\%$ most bound particles to define its position and velocity. Finally, we also consider possible mergings between the remaining galaxies, as well as between these galaxies and the central object. Two galaxies are merged into a single one if the following conditions are satisfied:\n\\begin{eqnarray*}\n\\Delta r & < & a(r_{c1}+r_{c2}) \\\\\n\\Delta v & < & b(\\sigma_1 + \\sigma_2)\n\\end{eqnarray*}\n\\noindent where $r_{ci}$ is the radius of the sphere containing the $35\\%$ most bound particles and $\\sigma_i$ its velocity dispersion. The constants $a=1.4$ and $b=0.6$ were selected in order to have smooth central objects. The parameters of this object were calculated with just the $10\\%$ most bound particles and not with the $35\\%$ as for the rest of the galaxies. This ensures that we do not count a merger between the central galaxy and another galaxy while they still form two separate objects. We finally used the positions and velocities of the remaining galaxies to define the global parameters of the system.\n\n\\section{Simulations without common dark matter halo} \n\n\\subsection{Evolution without dynamical friction.} \n\nIn Fig.~1 we show the evolution of the number of galaxies $N_{gal}$ as a function of time for all the simulations without distributed dark matter. 
In the first column we compare the self-consistent simulations with $900$ and $100$ particles per galaxy with the explicit simulations obtained with the AF and RM conditions. In the second column, the evolution of the number of galaxies in the self-consistent simulations is compared with the explicit simulations using the FS and RN conditions. \n\nWe note that the explicit simulations perform rather unequally. The results depend on the type of initial conditions and on the merging condition used to describe the interactions. Globally we can say that the AF and RM conditions seem to follow the time evolution of $N_{gal}$ much better than the FS and RN conditions. In the first stages of the evolution, for the collapsing group (Run A), the less tightly bound virialised group (Run B) and Run C, which is a virialised group with no central mass concentration, both the AF and RM conditions describe the time evolution of the number of galaxies rather well. This is not true, however, for Run D (a tightly bound and virialised group), for which the AF condition overestimates the number of mergings from the start, while the RM condition does the opposite. As the evolution proceeds the discrepancies between the self-consistent simulations and the explicit simulations become more evident. For all initial conditions the FS and RN conditions overestimate the number of mergings from the start. The sole exception is the explicit simulation with the FS condition in the case of Run C, where the agreement with the self-consistent simulation is quite good. \n\nFor the time evolution of the half-mass radius, $R_{h}$, we find similar results. This can be seen in Fig.~2, where the panels refer to the same initial conditions as in Fig.~1. In general, the explicit simulations controlled by the AF and RM conditions show a better global behaviour than the simulations governed by the FS and RN conditions. 
This is due to the high number of mergings predicted by the latter conditions. In the case of Run A, all explicit simulations follow the collapse phase well. When most of the mass is accumulated in the central area, the number of encounters is relatively large and there are strong interactions with the giant central galaxy. At this point, the self-consistent and explicit simulations diverge. The AF and RM conditions allow some galaxies to avoid merging with the giant galaxy on the first passage, and the system experiences an expansion which is not seen in the self-consistent simulations. On the other hand, the FS and RN conditions predict a much higher rate of mergers than the self-consistent simulations and we are left too early with only a single giant galaxy. In the case of Run~B, the AF and RM conditions describe very well the state of the system during the first part of the simulations. As the simulation evolves, however, some galaxies reach the central parts, where they suffer a hyperbolic encounter with the central mass concentration of the giant galaxy instead of merging with it, as is the case in the self-consistent simulations, because the merging criteria strongly disfavour merging in high speed collisions. This makes the system expand, an effect which is not seen in the self-consistent simulations. This does not happen for Run C, where there is no such central mass concentration and the explicit and self-consistent simulations follow the same evolution, except for minor differences and a strong deviation in the case of condition RN. Run D is the most difficult case for the explicit simulations. In this situation galaxies move at higher speeds than in Run B or Run C. Surprisingly, in the case of the self-consistent simulations, this does not make merging with the central object more difficult, as one might naively expect. 
However, the RM condition predicts more hyperbolic encounters than the self-consistent simulations, giving strong oscillations of the half mass radius. On the other hand, the AF condition seems to describe the situation quite well. The number of galaxies predicted by the RN and FS conditions is well below that predicted by the self-consistent simulations, again due to the high number of mergers predicted by these conditions.\n\nFinally, in Fig.~3 we show similar comparisons, now for the three dimensional velocity dispersion. The larger number of mergers predicted by the FS and RN conditions nearly always gives lower velocity dispersions than the self-consistent simulations, as well as strong oscillations due to small number statistics. On the other hand, the AF and RM conditions give a better general description of the evolution of the three dimensional velocity dispersion. This is especially true for Run A, where all the motion is nearly radial and only small discrepancies appear at the end of the simulations. For the case of Run B and the RM condition, the hyperbolic encounters, which lead to a larger half mass radius of the system, also give higher velocity dispersions, because some galaxies which merge in the self-consistent simulations can escape in the explicit ones. The AF condition describes this time evolution much better. The velocity dispersion of Run C is well described by both conditions until shortly before the end of the simulation, when both conditions predict higher velocity dispersions than the self-consistent simulations. In the case of Run D the RM condition again has some difficulty in describing the behaviour of the self-consistent simulations. This is also due to the high number of large deflections of the secondary galaxies. The AF condition follows the evolution of the three dimensional velocity dispersion well in this situation. 
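For concreteness, the three dimensional velocity dispersion compared in Fig.~3 is the one defined in Sect.~2. A sketch (ours, not the authors' code) that takes the surviving point-galaxy masses and velocities as plain lists, with $m_i(t=0)=1$:

```python
def velocity_dispersion(masses, velocities, m0=1.0):
    # Sect. 2 definition: sigma_v^2 = sum_i m_i |v_i - <v>|^2 / (M_tot - m_i(t=0)),
    # with <v> the mass-weighted mean velocity; m_i(t=0) = m0 = 1 for all i,
    # so the per-galaxy denominator is the constant M_tot - m0.
    M_tot = sum(masses)
    vbar = [sum(m * v[k] for m, v in zip(masses, velocities)) / M_tot
            for k in range(3)]
    s2 = sum(m * sum((v[k] - vbar[k]) ** 2 for k in range(3))
             for m, v in zip(masses, velocities)) / (M_tot - m0)
    return s2 ** 0.5
```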
\n\nWe would like to note at this point that the self-consistent simulations with $100$ particles per galaxy and with $900$ particles per galaxy do not show major differences. The number of galaxies as a function of time does not change appreciably between these two simulations, and this holds for all the initial conditions, i.e. for both virialised and collapsing groups. In this sense our results differ from those of van Kampen (1995), who found that the small virialised clumps associated with the galaxies and formed during the simulations do not survive the passage through the central part of the cluster. This could be due to the somewhat lower number of particles per galaxy, since the typical galaxies in van Kampen's simulations are composed of 10-50 points (van Kampen 1995).\n\nSimilarly good agreement between the 900 and 100 points per galaxy simulations is found for the velocity dispersion. Somewhat bigger differences, in particular for run B, can be seen for the half-mass radius, but even these are not excessive.\n\n\\subsection{Simulations with dynamical friction.} \n\nFigure~4 compares the time evolution of the number of galaxies in the self-consistent simulations and in the explicit simulations when the effect of dynamical friction is included. Since this slows down the galaxies and thus favours merging, the number of galaxies, $N_g$, will diminish faster. This is clearly seen in all the panels of Fig.~4. As can be seen from the left hand panels, this worsens the predictions of the RN and FS conditions. The right hand panels show that the agreement is now better for the RM condition, and worse for the AF one. For the case of Run A there is a systematic deviation between the AF condition and the self-consistent simulations. On the other hand, the RM condition, which had predicted a low number of mergings in the absence of dynamical friction, is in this case in much better agreement with the self-consistent case. 
The same can be said about Run B, while in Run C the effect of dynamical friction is not noticeable. This is not surprising, as we take into account only the effect of dynamical friction with the most massive galaxy which, in this case, is practically nonexistent. For the most difficult case, Run D, the AF condition falls below the results of the self-consistent simulations while the RM condition gives good agreement. \n\nThe evolution of the half mass radius is also affected by the inclusion of dynamical friction, as is shown in Fig.~5, where we plot the evolution of $R_h$ as a function of time. For the explicit simulations with the RN and FS conditions dynamical friction does not alter the strong disagreement with the self-consistent simulations. This happens because the explicit simulations with these conditions allow too many mergings and we are left with a single supergiant galaxy at the center of the system, which contains a large fraction of the mass, and some small satellites. On the other hand, there is now a much better agreement between the explicit simulations made with the AF and RM conditions and the self-consistent cases. For Run A neither condition shows a secondary bounce of the system. The dynamical friction acts as a braking mechanism that favours merging between the secondary galaxies and the central one, and fewer satellites survive in this situation. In Run B, the hyperbolic encounters of the satellite galaxies with the central giant are not present and there is no later expansion of the system as in the explicit simulations without dynamical friction. The explicit simulations with both the AF and RM conditions predict too small a half mass radius. For Run C, as there is no giant galaxy, dynamical friction is unimportant and all the simulations again show the same general behaviour. In Run D the galaxies move faster because the system is more tightly bound. 
The explicit simulations with the RM condition and no dynamical friction were not capable of describing the evolution of the self-consistent simulations. The inclusion of dynamical friction gives a much better agreement between these two simulations. On the other hand, the explicit simulations with the AF condition seem to be systematically below the predictions of the self-consistent simulations. \n\nAs can be seen in Fig.~6, the three dimensional velocity dispersion shows marked differences between the explicit simulations and the self-consistent ones. As was the case in the absence of dynamical friction, the explicit simulations with the RN and FS conditions do not track the self-consistent results well. The effect of dynamical friction is barely noticeable in this case, except for some tendency towards lower velocity dispersions. As the RN and FS conditions predict many mergings, we are left with a giant galaxy in the center and a small number of satellites orbiting around it. The dispersions are then low, but they are more subject to fluctuations and have stronger oscillations. Including dynamical friction in the explicit simulations with the AF and RM conditions does not substantially improve their results, as can be seen by comparing Fig.~6 with Fig.~3. For runs A and C the situation is further improved and the explicit simulations follow the self-consistent ones very well. Bigger differences between the explicit simulations with and without dynamical friction are found for the virialised groups (Runs B and D). The values predicted by the AF condition are now always near the values obtained with the self-consistent simulations. However, this is not the case for the RM parametrization. For Run B, there are marked differences between these explicit simulations and the last phase of the self-consistent simulations. 
For the case of Run D the RM condition gives a systematically higher velocity dispersion than the self-consistent simulations.\n\n\\subsection{A new merging criterion}\n\nAs we have seen, none of the merging criteria proposed so far in the literature is capable of describing the time evolution of the global properties of groups of galaxies in the variety of situations considered in this paper. We can say that, in general, the AF and RM conditions perform better than the FS and RN ones, but even they fail to describe the evolution of some of the groups. This has motivated our search for a more adequate merging criterion. \n\nWe searched for a formula of a form similar to the one proposed by Aarseth and Fall (1980), namely:\n\\begin{equation}\n{\\left[ \\frac{(m_1+m_2)r_p}{{a(m_1\\epsilon _1+m_2\\epsilon _2)}} \\right]}^2+{\\left[\\frac{v(r_p)}{{bv_e(r_p)}}\\right] }^2\\leq 1. \n\\end{equation}\nFor the part concerning the velocities, we keep the same expression as in the Aarseth and Fall formula, which performs quite well for the time evolution of the three dimensional velocity dispersions. For the part concerning the cores of the galaxies and the separation at pericenter we use a mass weighted expression, with the aim of taking into account possible differences in collisions between galaxies of different masses, as in the expression due to Richstone and Malumuth (1983). The constants $a$ and $b$ are free parameters and will be determined using the self-consistent simulations as a reference. This expression can be viewed as the equation of the points within an ellipse centered at the origin in the plane defined by $(m_1+m_2)r_p\/ (m_1\\epsilon_1+ m_2\\epsilon _2)$ and $v(r_p)\/v_e(r_p)$. Then $a$ and $b$ are the semi-axes of this ellipse. 
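A sketch (ours, not the authors' code) of the mass-weighted ellipse criterion above, with $a$ and $b$ left as free parameters; the default values 2.5 and 1.18 are the ones fitted against the self-consistent runs in Eq. (10):

```python
import math

def merge_new(r_p, v_p, m1, m2, eps1, eps2, a=2.5, b=1.18, G=1.0):
    # Eq. (9): the pair merges if
    # [(m1+m2) r_p / (a (m1 eps1 + m2 eps2))]^2 + [v(r_p) / (b v_e(r_p))]^2 <= 1,
    # with v_e(r_p) the softened escape velocity of Eq. (4).
    v_e = math.sqrt(2.0 * G * (m1 + m2) / math.sqrt(r_p**2 + eps1**2 + eps2**2))
    x = (m1 + m2) * r_p / (a * (m1 * eps1 + m2 * eps2))
    y = v_p / (b * v_e)
    return x * x + y * y <= 1.0
```

For equal masses and equal core radii the radial term reduces to $r_p/(a\epsilon)$, so the criterion then differs from condition AF only through the values of $a$ and $b$.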
Increasing the value of $a$ means increasing the axis of the ellipse corresponding to the relative separation at pericenter, and thus allowing mergings in more distant collisions. On the other hand, if we increase the value of $b$ we allow merging in faster collisions. With this in mind, we fitted the values of $a$ and $b$ to the self-consistent simulations, using as the basis for our exploration the values used by Aarseth and Fall (1980). After some trials and comparisons with the self-consistent simulations we obtained the following merging criterion: \\begin{equation}\n{\\left[ \\frac{(m_1+m_2)r_p}{{2.5(m_1\\epsilon _1+m_2\\epsilon _2)}} \\right] }^2+\n{\\left[\\frac{v(r_p)}{{1.18v_e(r_p)}}\\right] }^2\\leq 1. \\end{equation}\nThe effect of this new criterion is shown in Figs.~7, 8 and 9, where we compare the time evolution of the global parameters of the self-consistent simulations with that of the explicit simulations using the AF and RM criteria and our new one. The dynamical friction with the most massive galaxy is also included in these cases. \n\nIn Fig.~7 we show the time evolution of the number of galaxies $N_g$. In the first column, we repeat the comparison between the self-consistent simulations and the explicit simulations with the AF and RM criteria and dynamical friction. In the second column, we have the comparison between the self-consistent simulations and the explicit simulations with dynamical friction and our new merging criterion. As can be seen, while the explicit simulations with the RM criterion mimic the self-consistent simulations quite well, this is not true for the AF condition. On the other hand, our new criterion follows the evolution of the number of galaxies given by the self-consistent simulations quite well for all initial conditions.\n\nIn Fig.~8 we show the time evolution of the half mass radius. 
For the \ncase of Run A both AF and RM conditions follow quite well the \nself-consistent simulations until the point of maximum collapse. After \nthis point, the half mass radius given by these explicit simulations \nfalls below the\nself-consistent case. Our new condition, however, follows the \nself-consistent simulations with $900$ particles very well. For the case \nof Run B, the AF and RM conditions end below the self-consistent case. \nOur new criterion performs better, following the self-consistent \nsimulations, but with some oscillations. For runs C and D we can say that \nall three criteria give similar results. \n\nFigure~9, which gives the time evolution of the three dimensional velocity \ndispersion, is the most interesting one. We have seen that the AF and RM \nconditions give good results for the case of the collapsing group (Run A) \nand this is true also for our new criterion. However, the AF and RM \nexplicit simulations do not work well for the case of a virialised group \n(Run B). The AF condition ends with a higher velocity dispersion and the \nRM with a smaller velocity dispersion compared to the self-consistent \ncase; on the other hand, our new criterion performs much better than \neither. This is especially true for the most difficult case, Run D, the \nvirialised and tightly bound group. In this case our new criterion \nperforms much better than the AF and RM criteria.\n\n\section{Simulations with a dark matter halo encompassing the whole \ngroup}\n\nSeveral observations suggest that clusters and groups of galaxies \nmay contain much matter not bound to the galaxies. This led us to run a\nself-consistent simulation (Run H), where part of the mass of the system \nis distributed in a background. In the corresponding explicit\nsimulations the background is included as a rigid Plummer potential with \nthe same parameters as the live background in the initial conditions of \nthe self-consistent\nsimulation. 
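For reference, a rigid Plummer background of total mass $M_b$ and scale length $r_b$ corresponds to the standard analytic potential

```latex
\begin{equation*}
\Phi(r) \;=\; -\,\frac{G M_b}{\sqrt{r^{2}+r_b^{2}}},
\end{equation*}
```

so the force the background exerts on each galaxy can be evaluated analytically, at negligible cost compared to a live halo of particles.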
The explicit simulations include dynamical friction with the \nmost massive galaxy and with the Plummer halo. \n\nThe evolution of the group leads to a system where the central part of \nthe galaxy distribution has contracted, while the outer one has expanded. \nThis results in an increase of the half-mass radius and a lowering of the \nvelocity dispersion, as shown in Fig.~10. The upper panels give the time \nevolution of the number of galaxies in the system $N_g$, the middle ones \nthat of the half mass radius $R_h$ and the lower ones that of the three \ndimensional velocity dispersion. In the left panels the self-consistent \nsimulations are compared to\nthe explicit simulations with the AF and RM conditions and in the right \npanels with simulations using our new criterion. As we can see, the \nnumber of\ngalaxies diminishes more slowly in simulations including a common halo than in \nthe case of virialised\nsimulations with no distributed dark matter. The AF and RM conditions \nunderestimate the real number of mergers, and so, though to a lesser \nextent, does our new criterion.\nFor the time evolution of the half mass radius there are strong \ndiscrepancies between the self-consistent simulations and the explicit \nsimulations using any of the merging criteria, including the new \ncriterion proposed in the previous section. \n\nThe three dimensional velocity dispersion of the galaxies is well \ndescribed by the explicit simulations using any of the merging criteria. \nThis global parameter systematically decreases during the simulation as \nthe galaxies that move faster near the center disappear and form the \ngiant central object. The slope of this evolution flattens off toward the \nend of the simulations. This behaviour is not well followed by the \nexplicit simulations using the AF or RM criterion. 
On the other hand, \nFig.~10 shows that our new merging criterion is able to reproduce these \nminor details better.\n\n\section{Summary.}\n\nIn this paper we compared self-consistent simulations of galaxy groups \nwith simulations where the physics of the interactions is modelled by \nmerger rules. We used two sets of self-consistent simulations, one in \nwhich the galaxies were modelled with 900 points and the other with 100 \npoints. Insofar as the global dynamical\nparameters are concerned, the evolution of galaxy groups is similar in \nthose two cases. This shows that simulations with a relatively low number \nof particles can be used to follow the evolution of global dynamical \nproperties of groups or clusters. However, from the work of van Kampen \n(1995)\nit can be inferred that using fewer than 100 points per galaxy can be \ndangerous.\n\nAs far as the explicit simulations are concerned, we show that the \nconditions used in the literature to\nsimulate the merging between galaxies are of unequal quality. Of these \nconditions, in the case\nwhere there is neither dynamical friction nor tidal forces, the best are \nthose of Tremaine (1980), modified for the case of different masses by \nRichstone and Malumuth (1983), and the one by Aarseth and Fall (1980). \nWhen we include dynamical friction effects the AF condition predicts too \nmany mergers but still maintains good predictions for the rest of the \nglobal parameters. The condition proposed by Richstone and Malumuth \n(1983) does better as far as the number of galaxies and $R_h$ are \nconcerned, but considerably worse for the velocity dispersion. \n\nAs none of these criteria seems to be a good guide for the time evolution \nof the groups as compared with the self-consistent simulations, we have \nfitted a new\ncriterion to the results of self-consistent simulations. 
This new \ncriterion is:\n\begin{equation}\n{\left[ \frac{(m_1+m_2)r_p}{{2.5(m_1\epsilon _1+m_2\epsilon _2)}} \right] \n}^2+\n{\left[\frac{v(r_p)}{{1.18v_e(r_p)}}\right] }^2\leq 1,\n \end{equation}\nand is inspired by the expressions given by Aarseth and Fall (1980) and \nRichstone and Malumuth (1983). This new criterion mimics relatively well \nthe time evolution of the global parameters of the groups in as wide a\nvariety of situations as those presented by our simulations A to D. \nHowever, it performed less well in case H, which has a common halo; this \ncan be explained by the different nature of the simulations and implies \nthat even this new criterion has only a limited range of \napplicability.\n\nOur comparisons show that some of the older results on the dynamics \nof groups and clusters of galaxies should be viewed with caution. For \ninstance, Roos (1981) studied the evolution of expanding systems of \ngalaxies to simulate the evolving universe. As he used the RN criterion \nin his simulation, the predicted merger rate can be too high. In the same \nway, when Roos and Aarseth (1982) used this criterion to study the \nevolution of the luminosity function of a cluster of galaxies, their \nfinal luminosity functions can be artificially peaked towards\nhigh luminosities. Similarly, Valtonen et al. (1985), Saarinen and \nValtonen (1985) and Perea et al. (1990) use explicit simulations to \ncriticize the virial mass obtained for galaxy clusters. We have, however, \nseen that this kind of simulation is biased toward higher velocity \ndispersions. Finally, the explicit simulations on compact groups by Mamon \n(1987) using a diffuse intergalactic background may also be biased.\n\nThus we can conclude that there is no ideal substitute for fully \nself-consistent N-body simulations. 
However, in cases when one needs to \nlook only at global quantities describing the system and is not \ninterested in fine structure and details, a first exploration of \nparameter space can be done using explicit simulations and the criterion \nproposed in this paper. This performs particularly well in cases where \nthe group has no common halo.\n\n{\bf Acknowledgements.}\nWe thank Albert Bosma and Kevin Prendergast for reading and improving the \nmanuscript, and our referee, Joshua Barnes, for his useful \nsuggestions and criticism which improved the quality of this paper. We \nalso thank L. Hernquist for making available to us his vectorised version \nof the treecode.\nSome of the simulations discussed in this paper were made at the C98 of \nthe IDRIS (Institut du d\'eveloppement et des ressources en informatique \nscientifique, Orsay, France). \n\n\noindent\n{\Large \bf References.}\n\n\noindent\nAarseth, S.J. 1971 ApSS 14,20\\\nAarseth, S.J., Fall, S.M. 1980 ApJ 236,43\\ Barnes, J. 1992 in {\sl \nMorphological and physical classification of galaxies} G. Longo et \nal.\linebreak \hspace*{0.5cm} (eds.) Kluwer Academic Publishers, \np277-292\\ Barnes, J., Hut, P. 1986 {\sl Nature} 324,446\\ Bode, P.W., \nBerrington, R.C., Cohn, H.N., Lugger, Ph. M. 1994 ApJ 433,479\\\nCarnevalli, P., Cavaliere, A., Santangelo, P. 1981 ApJ 249,449\\ \nChandrasekhar, S. 1943 ApJ 97,255\\\nCooper, R.G., Miller, R.H. 1982 ApJ 254,16\\ Farouki, S.M., Shapiro, \nS.L. 1982 ApJ 259,103\\ Funato, Y., Makino, J., Ebisuzaki, T. 1993 PASJ \n45,289\\ Hernquist, L. 1987 ApJS 64,715\\\nIshizawa, T. 1986 ApSS 119,221\\\nIshizawa, T., Matsumoto, R., Tajima, T., Kageyama, H., Sakai, H. 1983 \nPASJ 35,61\\\nJones, B.J.T., Efstathiou, G. 1979 MNRAS 189,27\\ Malumuth, E.M., \nRichstone, D.O. 1984 ApJ 276,413\\ Mamon, G.A. 1987 ApJ 321,622\\\nMerritt, D. 1983 ApJ 264,24\\\nNavarro, J.F., Mosconi, M.B., Lambas, D.G. 
1987 MNRAS 228,501\\ Perea, \nJ., del Olmo, A., Moles, M. 1990 A\&A 237,328\\ Rhee, G., Roos, N. 1990 \nMNRAS 243,629\\ Richstone, D.O., Malumuth, E.M. 1983 ApJ 268,30\\ Roos, \nN. 1981 A\&A 95,349\\\nRoos, N., Aarseth, S.J. 1982 A\&A 114,41\\ Roos, N., Norman, C.A. 1979 \nA\&A 76,75\\ Saarinen, S., Valtonen, M.J. 1985 A\&A 153,130\\ Schindler, \nS., B\"ohringer, H. 1993 A\&A 269,83\\ Tremaine, S.D. 1980 {\sl The \nStructure and Evolution of Normal Galaxies.} ed. S.M. Fall and D. \n\linebreak \hspace*{0.5cm} Lynden-Bell. Cambridge University Press\\ van \nAlbada, T.S., van Gorkom, J.H. 1977 A\&A 54,121\\ van Kampen, E. 1995 \nMNRAS 273,295\\\nValtonen, M.J., Innanen, K.A., Huang, T.-Y., Saarinen, S. 1985 A\&A \n143,182\\ White, S.D.M. 1978 MNRAS 184,185\\\n\n\newpage\n\n\noindent\n{\Large \bf Figure Captions.}\n\n\noindent {\bf Fig.~1} Comparison of the time evolution of the number of \ngalaxies in the self-consistent simulations with $100$ particles per \ngalaxy (thin line) and $900$ particles per galaxy (thick line) with the \nexplicit simulations for the same initial conditions and without \ndynamical friction. In the left panels we use the AF and RM merging \nconditions and in the right panels we use the FS and RN ones. The initial \nconditions of each simulation are described in Table~1.\n\n\noindent {\bf Fig.~2} Comparison of the time evolution of the half mass \nradius of the system in self-consistent simulations and in explicit \nsimulations without dynamical friction for the same initial conditions. \nThe symbols are as in Fig.~1. \n\n\noindent {\bf Fig.~3} Time evolution of the three dimensional velocity \ndispersion of the galaxies consi\-dered as point masses for the \nself-consistent and the explicit simulations. 
The symbols are as in \nFig.~1.\n\n\\noindent {\\bf Fig.~4} Time evolution of the number of galaxies $(N_g)$ \nin the self-consistent simulations compared with the evolution of this \nnumber in the explicit simulations with dynamical friction included. The \nthick lines correspond to the self-consistent simulations with $900$ \nparticles per galaxy and the thin lines to the simulations with $100$ \npoints per galaxy. In the first column, we show the comparison with the \nexplicit simulations using the AF criterion and using the RM criterion. \nIn the second column, we show the same comparisons with the explicit \nsimulations using the FS condition and using the RN condition. \n\n\\noindent {\\bf Fig.~5} Same as for Fig.~4 but for the time evolution of \nthe half mass radius of the system.\n\n\\noindent {\\bf Fig.~6} Same as for Fig.~5 but for the time evolution of \nthe three dimensional velocity dispersion. \n\n\\noindent {\\bf Fig.~7} Comparison of the explicit simulations using the \nAF and the RM criteria with the explicit simulations using the new \ncriterion. The performance of each criterion is compared with the \nself-consistent simulations. In the first column, we show the time \nevolution of the number of galaxies in the self-consistent simulations \nwith $900$ particles per galaxy (thick lines) and with $100$ particles \nper galaxy (thin lines) compared with the explicit\nsimulations using the AF criterion and using the RM criterion. In the \nsecond column, we compare the time evolution of $N_g$ for the \nself-consistent simulations with the results of the explicit simulations \nusing the new criterion. In all cases we include dynamical friction.\n\n\\noindent {\\bf Fig.~8} Same as Fig.~7 but for the time evolution of the \nhalf mass radius of the system.\n\n\\noindent {\\bf Fig.~9} Same as Fig.~7 but for the time evolution of the \nthree dimensional velocity dispersion. 
\n\n\\noindent {\\bf Fig.~10} Time evolution of the global parameters of the \nsimulations with distributed background. In both columns we show the \nevolution of $N_g$, $R_h$ and $\\sigma (3D)$ for the self-consistent \nsimulations with $450$ particles per galaxy (thick lines) and for the \nself-consistent simulations with $100$ particles per galaxy (thin lines). \nIn the left panel these are compared with the explicit simulations with \nthe AF condition and with the RM condition. In the right panel the \nself-consistent simulations are compared with the explicit simulations \nwith our new criterion. \n\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nNowadays, advances in digital music production technology enabled the musicians to explore a greater range of sonic possibilities to work with.\nParticularly, the development of the Digital Audio Workstation (DAW) and virtual instruments greatly expanded the space of the musical creativity~\\cite{walzer2017independent}.\nAs there are a large number of virtual instruments with high timbral diversity and the quality of music is highly dependent on the timbre of the instruments, selecting appropriate musical instruments plays a crucial role in digital music production. 
\n\nTypical ways to retrieve suitable musical instruments from a large library of instruments are listening to the audio samples of the instruments one by one or referring to the text descriptions of the instruments, if available.\nHowever, listening to the audio samples is time-consuming and inefficient, and the descriptions are often unavailable or insufficient to express the subtle nuance of the timbre of the musical instruments~\cite{knees2015giantsteps}.\n\n\n\n\begin{figure}[tb]\n\setlength{\belowcaptionskip}{-7pt}\n\begin{minipage}[b]{1.0\linewidth}\n  \centering\n  \centerline{\includegraphics[width=8.5cm]{Figure1.png}}\n\end{minipage}\n\caption{Comparison between musical instrument recognition and retrieval task.}\n\label{fig:task}\n\end{figure}\n\n\nWe call this task of retrieving specific desired instruments from a library of musical instruments \textit{Musical Instrument Retrieval}.\nSince musicians often refer to existing music to describe the sound they want, we propose to use reference music as a query for musical instrument retrieval.\nIn this task, given a mixture audio query, the model has to retrieve the instruments that most closely resemble the instruments used in the mixture audio query.\nIn our experiment, for quantitative evaluation, the instruments used for the mixture audio query were always included in the library.\nWe evaluated whether the model retrieved the exact instruments used in the mixture audio query in terms of F1 score and mean Average Precision (mAP). 
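As a concrete sketch of this per-query evaluation (a hypothetical illustration, not code from the paper), the F1 score compares the set of retrieved instrument identifiers with the set of instruments actually used in the mixture:

```python
def retrieval_f1(retrieved, used):
    # Set-based F1 for one query: `retrieved` is the model's instrument
    # list, `used` the instruments that actually appear in the mixture.
    retrieved, used = set(retrieved), set(used)
    tp = len(retrieved & used)  # correctly retrieved instruments
    if tp == 0:
        return 0.0
    precision = tp / len(retrieved)
    recall = tp / len(used)
    return 2.0 * precision * recall / (precision + recall)
```

A corpus-level score is then obtained by averaging over queries (macro or weighted by class frequency, as reported later).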
\n\nMusical instrument recognition is a closely related task that has been actively studied in the field of music information retrieval~\cite{han2016deep,avramidis2021deep,kratimenos2021augmentation,li2015automatic,lostanlen2016deep,cheuk2022jointist}.\nHowever, existing methods of musical instrument recognition disregard the unique character of each instrument and only predict the coarse category of the instrument, so they cannot be directly used for the fine-grained musical instrument retrieval task.\nA comparison between the two tasks is illustrated in Fig.~\ref{fig:task}.\n\n\begin{figure*}\n    \centering\n    \begin{tabular}{cc}\n    \begin{subfigure}{\columnwidth}\n        \centering\n        \caption{Training Single-Instrument Encoder.}\n        \label{fig:single_enc}\n        \vspace*{0.25cm}\n        \includegraphics[width=\columnwidth]{Figure2a.png}\n    \end{subfigure}\n    &\n    \\\n    \hfill\n    \vspace*{0.1cm}\n    \begin{subfigure}{\columnwidth}\n        \centering\n        \caption{Training Multi-Instrument Encoder.}\n        \label{fig:multi_enc}\n        \vspace*{0.25cm}\n        \includegraphics[width=\columnwidth]{Figure2b.png}\n    \end{subfigure}\n    &\n    \multirow[t]{2}[0]{*}[0.7cm]{\n    \begin{subfigure}{\columnwidth}\n        \centering\n        \caption{Retrieving similar instruments from the library using proposed method.}\n        \label{fig:single_enc}\n        \vspace*{0.25cm}\n        \includegraphics[width=\columnwidth]{Figure2c.png}\n    \end{subfigure}\n    }\n    \end{tabular}\n    \vspace*{0.1cm}\n    \caption{The overall process of the suggested method. (a) Single-Instrument Encoder is trained to classify which instrument played the input audio. We take the penultimate layer's activation of the trained network as instrument embedding. (b) Multi-Instrument Encoder extracts multiple instrument embeddings from the mixture audio. The Single-Instrument Encoder provides the set of target embeddings. (c) At inference time, we first extract the instrument embeddings of each instrument in the instrument library only once. 
Then we extract the multiple embeddings from the mixture audio query and retrieve the most similar instruments from the instrument library.}\n    \label{fig:model}\n\end{figure*}\n\n\n\nOur proposed method employs the Single-Instrument Encoder and the Multi-Instrument Encoder.\nThe Single-Instrument Encoder extracts the instrument embedding from single-track audio.\nUsing the embeddings extracted by the Single-Instrument Encoder as the target, the Multi-Instrument Encoder is trained to extract multiple instrument embeddings from the mixture audio.\nSince we estimate the set of embeddings, which is permutation-invariant, we use the permutation invariant training (PIT)~\cite{yu2017permutation} scheme for Multi-Instrument Encoder training.\n\n\nTraining and evaluating a general instrument encoder requires a dataset consisting of a large number of different instruments.\nAt the same time, the dataset should contain ensembles of different instruments to enable the model to extract embeddings robustly to instrument combinations and performance.\nTo meet these conditions, we propose a new dataset called \textit{Nlakh} (pronounced as en-l\u00e4k), which is a combination of the NSynth dataset~\cite{nsynth2017} and the Lakh dataset~\cite{raffel2016learning,lakh}.\nBased on the implementation of~\cite{Martel_NSynth-MIDI-Renderer_2019}, we rendered the MIDI files in the Lakh dataset by using note samples in the NSynth dataset.\n\nExperimental results show that the Single-Instrument Encoder successfully maps the different audio samples of the same instruments into close embeddings. \nResults also show that the Multi-Instrument Encoder is able to separate the mixture audio at the embedding level and retrieve desired instruments from the library successfully.\n\n\n\section{Related Works}\n\label{sec:relatedworks}\n\nStudies on retrieving musical instruments or extracting instrument embeddings are still in their early stages. 
\nRecently,~\\cite{shi2022use} has trained and evaluated a model for extracting instrument embedding from a music signal by adopting the framework of speaker verification task, but the model was limited to extracting an embedding from single-sourced audio.\nMusical instrument retrieval methods with audio query have also been studied recently, but mostly focusing on retrieving drum samples. \n~\\cite{mehrabi2018similarity} adopts deep convolutional auto-encoders to retrieve drum samples by using vocal imitations as the query.\nFurthermore, ~\\cite{kim2020drum} conducts deep metric learning to extract the drum embeddings from a mixture audio as the query.\nIn this paper, we expand this approach for retrieving multi-pitched instruments. \n\n\n\\section{Method}\n\\label{sec:pagestyle}\n\n\nThe proposed model consists of the Single-Instrument Encoder and the Multi-Instrument Encoder.\nThe Single-Instrument Encoder extracts an instrument embedding from a single-track audio of the instrument.\nUsing the instrument embeddings computed by the Single-Instrument Encoder as a set of target embeddings, the Multi-Instrument Encoder is trained to estimate the multiple instrument embeddings.\nAs we estimate the set of embeddings, which is permutation-invariant, PIT scheme~\\cite{yu2017permutation} was used for training.\nThe overall framework of the proposed model is depicted in Fig.~\\ref{fig:model}.\n\n\\subsection{Single-Instrument Encoder}\n\\label{ssec:subhead_f}\n\nIn order to extract an instrument embedding from single-track audio with the Single-Instrument Encoder, we trained a network performing classification to match the audio samples with their instrument labels.\nWe used the network's penultimate layer's activation as the instrument embedding, which is a 1024-dimensional vector.\nFor an instrument $i_k$, the Single-Instrument Encoder $f$ extracts the embedding of the instrument $i_k$ as $f(x_{i_k})$, where $x_{i_k}$ is the single-track audio of the instrument 
$i_k$.\n\n\\subsection{Multi-Instrument Encoder}\n\\label{ssec:subhead}\n\n\nThe Multi-Instrument Encoder $g$ aims to estimate the embeddings of a set of instruments $I=\\{{i_1, i_2,...,i_N}\\}$ given a mixture audio $m=\\sum_{i\\in I} x_{i}$. \nThe target embeddings are the outputs of the Single-Instrument Encoder.\nWe designed the Multi-Instrument Encoder to output $M$ possible embeddings, where $M$ was set as the maximum number of instruments in a mixture audio in the training set.\n\nThe Multi-Instrument Encoder $g$ is trained to minimize the cosine embedding loss between the optimal permutation of the set of output embeddings $G=\\{ g(m)_{1,:}, g(m)_{2,:}, ..., g(m)_{M,:}\\}$ and the set of target embeddings $F=\\{ f(x_1), f(x_2), ..., f(x_N)\\}$.\nTo compensate for the difference in the number of embeddings and the indeterminacy of the instrument order, we used the idea of permutation invariant training to compute the loss function~\\cite{yu2017permutation}.\nThe minimized loss function is described as follows:\n\\begin{align*}\n \\mathcal{L} &= \\min\\limits_{\\pi} \\sum \\limits_{n=1}^N \\big( 1 - \\cos\\theta_{\\pi(n),n} \\big) \\\\\n \\cos \\theta_{\\pi(n),n} &= \\frac{g(m)_{{\\pi(n)},:} \\cdot f(x_n)}{||g(m)_{{\\pi(n)},:}|| \\cdot ||f(x_n)||}\n\\end{align*}\nwhere $\\pi:\\{1,2,\\dots,N\\} \\mapsto \\{1,2,\\dots,M\\}$ is an injective function.\n\nTo minimize the computational cost of finding the optimal permutation, we applied the optimal permutation invariant training method that utilizes the Hungarian algorithm~\\cite{dovrat2021many, kuhn1955hungarian}.\n\n\\subsection{Inference}\n\\label{section:inference}\nTo use the trained encoders for retrieval task, for each instrument $l_k$ in instrument library $L = \\{ l_1, l_2, l_3, ..., l_K\\}$, we extract the instrument embedding $f(x_{l_k})$ to construct the embedding library $E = \\{f(x_{l_1}), f(x_{l_2}), ..., f(x_{l_K}) \\} $ using the trained Single-Instrument Encoder.\nGiven the mixture audio query 
$m$, we extract output embeddings $\\{g(m)_{1,:}, ..., g(m)_{M,:}\\}$ using the trained Multi-Instrument Encoder.\nThen we calculate the cosine similarity $\\cos \\phi_{j,k}$ as follows.\n\\begin{align*}\n \\cos \\phi_{j,k} &= \\frac{g(m)_{j,:} \\cdot f(x_{l_k})}{||g(m)_{j,:}|| \\cdot ||f(x_{l_k})||}\n\\end{align*}\n\nFor each output embedding $g(m)_{j,:}$, we pick the instrument $l_k$ whose cosine similarity $\\cos \\phi_{j,k}$ is the largest among other instruments in $L$.\nTherefore, the set of retrieved instruments $R$ given mixture audio query $m$ can be formulated as follows.\n\\begin{align*}\n R &= \\{ l_{k'} | k' \\in \\{\\operatorname*{argmax}_k \\cos \\phi_{j,k}\\}_{j=1}^{M} \\}\n\\end{align*}\nNote that more than two output embeddings may be assigned to the same instrument.\nTherefore, the size of a set $R$ may be smaller than $M$.\n\n\n\\begin{figure}[tb]\n \\centering\n \\begin{subfigure}[b]{\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{Figure3a.png}\n \\caption{}\n \\label{fig:data1}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{Figure3b.png}\n \\caption{}\n \\label{fig:data2}\n \\end{subfigure}\n \\caption{The process of rendering a sample of (a) Nlakh-single and (b) Nlakh-multi }\n \\label{fig:data}\n\\end{figure}\n\n\\begin{table}[tb]\n\\centering\n\\hfill\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{lccc}\n\\Xhline{2\\arrayrulewidth}\n\\textbf{Dataset} & \\begin{tabular}[x]{@{}c@{}} Size \\\\ (Hours) \\end{tabular} & \\begin{tabular}[x]{@{}c@{}} Number of \\\\ Instruments \\\\ (Categories) \\end{tabular} & \\begin{tabular}[x]{@{}c@{}} Stem \\\\ Availability\\end{tabular} \\\\ \\hline \nNlakh-single (ours) & 1,397 & 1,006 & $\\checkmark$ \\\\\nNlakh-multi (ours) & 153 & 1,006 & $\\checkmark$ \\\\ \\hline\nSlakh~\\cite{manilow2019cutting} & 145 & 158 & $\\checkmark$ \\\\\nMUSDB18~\\cite{rafii2017musdb18} & 10 & (5) & $\\checkmark$ 
\\\\\nMedleyDB~\\cite{bittner2014medleydb} & 7 & (80) & $\\checkmark$ \\\\\nOpenMIC~\\cite{humphrey2018openmic} & 56 & (20) & - \\\\\nIRMAS~\\cite{bosch2012comparison} & 6 & (11) & - \\\\\n\\Xhline{2\\arrayrulewidth}\n\\end{tabular}\n}\n\\caption{Comparison with other datasets.}\n\\label{tab:dataset}\n\\end{table}\n\n\\section{The Nlakh Dataset}\n\\label{ssec:subhead}\nTo train and evaluate the proposed model, the dataset should have a large number of different instruments. \nAlso, the dataset should contain the ensembles of different instruments to enable the model to extract instrument embeddings robustly to instrument combinations and performance. \nHowever, no existing dataset fully met these requirements. Therefore, we propose a new dataset called \\textit{Nlakh} that combines the NSynth dataset, which provides a large number of instruments, and the Lakh dataset, which provides multi-track MIDI data.\n\nNlakh consists of \\textit{Nlakh-single} that contains single-track audio and \\textit{Nlakh-multi} that contains mixture audio with separate tracks (stem) of each instrument.\nTo make Nlakh-single, we first separated each MIDI track of the Lakh dataset and categorized the tracks by their instrument family (bass, organ, guitar, etc.) according to the MIDI program number.\nThen for each instrument of NSynth, we randomly selected a five-second-long excerpt from MIDI tracks in the corresponding instrument family.\nFor example, if the selected instrument's family is the guitar, only the MIDI files in the guitar category are used for rendering.\nWe rendered 1,000 samples for each instrument. 
\nIn total, there are 1,006,000 samples in Nlakh-single.\nNlakh-single is split into train\/valid set following the instrument split of NSynth (953\/53).\n\n\n\nTo make Nlakh-multi, we first found a five-second-long multi-track MIDI section containing at least two valid tracks in which at least three notes are played.\nAs in Nlakh-single, we randomly selected instruments for rendering the multi-track MIDI excerpt within the corresponding instrument family.\nNlakh-multi has 100,000 samples for the training set and 10,000 samples for the validation set.\nThe overall process of making the dataset is illustrated in Fig.~\ref{fig:data}.\n\nAmong other multi-track music datasets that contain audio data, to the best of our knowledge, Nlakh has the largest number of instruments and the largest amount of data at the same time (Table~\ref{tab:dataset}).\nIn addition to the rendered audio dataset, we also provide a codebase to generate our dataset, so one can use it to render more samples.\n\n\n\n\begin{table*}[t]\n\centering\n\parbox{11cm}{\caption{Performance of the Multi-Instrument Encoder. Small\/Large indicates the size of the model. 
Nlakh\/Random indicates which dataset is used for training.}\n\\label{tab:g_eval}\n}\n\\begin{tabular}{@{\\extracolsep{4pt}}lcccccc@{}}\n\\Xhline{2\\arrayrulewidth}\n\\multirow{3}{*}{\\textbf{Model}} & \\multicolumn{2}{c}{\\textbf{Family}} & \\multicolumn{4}{c}{\\textbf{Instrument}} \\\\\n\\cline{2-3} \\cline{4-7}\n& \\multicolumn{2}{c}{\\textbf{F1}} & \\multicolumn{2}{c}{\\textbf{F1}} & \\multicolumn{2}{c}{\\textbf{mAP}} \\\\ \\cline{2-3} \\cline{4-5} \\cline{6-7}\n& macro & weighted & macro & weighted & macro & weighted \\\\ \\hline\nChance & 0.343 & 0.437 & 0.065 & 0.077 & - & -\\\\ \\hline\nSmall-Nlakh & 0.626 & 0.723 & 0.482 & 0.524 & 0.553 & 0.597 \\\\\nLarge-Nlakh & 0.640 & 0.728 & 0.533 & 0.578 & 0.635 & 0.666 \\\\ \nSmall-Random & 0.691 & 0.697 & 0.528 & 0.543 & 0.598 & 0.615 \\\\ \nLarge-Random & \\textbf{0.814} & \\textbf{0.817} & \\textbf{0.694} & \\textbf{0.712} & \\textbf{0.752} & \\textbf{0.760} \\\\ \n\n\\Xhline{2\\arrayrulewidth}\n\\end{tabular}\n\\end{table*}\n\n\\section{Experiments}\n\n\n\\subsection{Single-Instrument Encoder}\n\nWe used the convolutional neural network architecture that was used in~\\cite{han2016deep} for the instrument recognition task as the backbone network of the Single-Instrument Encoder, using mel-spectrogram of the audio as the input.\nWe used Adam optimizer with a learning rate of 0.001, and set batch size as 32.\n\nTo evaluate the Single-Instrument Encoder, we adopted the method proposed by ~\\cite{shi2022use}, which used automatic speaker verification evaluation methodologies for evaluating the instrument embeddings.\nWe first extract the embeddings of five different samples of the target instrument by using the trained Single-Instrument Encoder. 
\nThe average of those embeddings is used as enrollment embedding.\nWe also make a comparison set that contains 20 embeddings from the target instrument and 20 embeddings from the other instruments.\nThen we compare each embedding in the comparison set with the enrollment embedding in terms of cosine similarity.\nVerifying whether the embeddings in the comparison set correspond to the enrollment embedding or not, we compute the false reject rate and false accept rate for each instrument.\nWe computed the average value of equal error rate (EER), which describes the point where the false reject rate and false accept rate are equal.\n\nThe average EER of our model on Nlakh-single was 0.026 while the previous work's EER on the NSynth dataset was 0.031.\nNote that the samples of the NSynth dataset contain only a single note, while the samples of Nlakh-single contain multiple notes.\nWe also visualized the instrument embeddings of training set and validation set using t-distributed stochastic neighbor embedding (t-SNE)~\\cite{van2008visualizing} in Figure~\\ref{fig:tsne_single}.\nThe results show that the Single-Instrument Encoder could cluster the instrument embeddings robustly even for the unseen instruments in the validation set.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[b]{0.49\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{Figure4a.png}\n \\caption{}\n \\label{fig:data1}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.49\\columnwidth}\n \\centering\n \\includegraphics[width=\\columnwidth]{Figure4b.png}\n \\caption{}\n \\label{fig:data2}\n \\end{subfigure}\n \\caption{The t-SNE results of Single-Instrument Encoder on Nlakh-single (a) training and (b) validation dataset.}\n \\label{fig:tsne_single}\n\\end{figure}\n\n\n\\subsection{Multi-Instrument Encoder}\n\nThe Multi-Instrument Encoder is trained to extract the embedding of each instrument in the mixture audio.\nIn this experiment, the Multi-Instrument Encoder extracts 
nine embeddings, which is the maximum number of instruments composing a single mixture in Nlakh-multi. \nFor the Multi-Instrument Encoder, we tried two different network architectures.\nThe first is the same architecture~\\cite{han2016deep} as the Single-Instrument Encoder.\nWe also tried a larger convolutional neural network~\\cite{liu2022convnet}, since the task of the Multi-Instrument Encoder is more difficult than that of the Single-Instrument Encoder.\nWe used the Adam optimizer with a learning rate of 0.001 and a batch size of 128 in all cases.\n\nDuring the experiments, we noticed an imbalance in the instrument distribution of Nlakh-multi, which may harm the performance of the trained network.\nTo address this issue, we also trained the network with randomly-mixed audio.\nWe randomly selected a number of musical instruments between two and nine, and then randomly picked audio samples of the selected instruments from Nlakh-single.\nThose samples were used to produce the randomly-mixed audio.\nRather than rendering a finite set of samples for the randomly-mixed dataset, we mixed the audio on-the-fly during training.\n\nGiven a mixture audio query, we retrieved the instruments as described in Section~\\ref{section:inference} and computed the F1 score.\nWe also calculated the F1 score with the instrument family as the basis of the evaluation.\nThe instrument family is a coarse categorization of instruments, predefined in the NSynth dataset.\nTo calculate the mean Average Precision (mAP), we used the highest cosine similarity between the output embeddings and each embedding in the embedding library as the similarity score.\n\nTable \\ref{tab:g_eval} shows the evaluation results of the Multi-Instrument Encoder. 
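As an illustrative sketch of this retrieval step (the code below is ours, not from the paper: the function names, the threshold value, and the use of a single reference embedding per library instrument are simplifying assumptions), each instrument in the embedding library is scored by its highest cosine similarity to any of the encoder's output embeddings, and instruments scoring above a threshold are retrieved:

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two 1-D embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def retrieve(query_embs, library, threshold=0.5):
    """Score every library instrument by its best match against any of the
    query's output embeddings, then threshold to decide which are present.

    query_embs : (n_slots, d) array of Multi-Instrument Encoder outputs
    library    : dict mapping instrument id -> (d,) reference embedding
    threshold  : similarity cut-off (an assumption, not from the paper)
    """
    scores = {inst: max(cosine(q, emb) for q in query_embs)
              for inst, emb in library.items()}
    retrieved = [inst for inst, s in scores.items() if s >= threshold]
    return scores, retrieved
```

The per-instrument maximum similarity is exactly the quantity that serves as the similarity score when computing mAP.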
We had three main observations from the evaluation.\nFirst, every trained network performed significantly better than the chance level in all measurements.\nSecond, the network trained with randomly-mixed audio showed less overfitting than the network trained with Nlakh-multi.\nThird, the network using the larger convolutional neural network showed better performance.\nThe larger convolutional neural network presumably learns more general representations and can therefore better handle the extraction of embeddings from an input audio mixture.\n\n\\section{Conclusion}\nIn this work, we proposed a novel method for musical instrument retrieval that employs the Single-Instrument Encoder and the Multi-Instrument Encoder to extract the instrument embeddings.\nTo train and evaluate the proposed model, we introduced the Nlakh dataset, which contains single-track audio and mixture audio from a large number of different musical instruments.\nThe evaluation results showed that the Single-Instrument Encoder was able to learn the mapping from the audio signal of unseen instruments to the instrument embedding space, and that the Multi-Instrument Encoder was able to extract multiple embeddings from the mixture audio and retrieve the desired instruments successfully.\nIn the future, we plan to improve the robustness of our method by augmenting our dataset with various audio effects and by expanding the instrument classes.\n\n\n\n\n\n\n\n\\vfill\\pagebreak\n\\bibliographystyle{IEEEbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} Extremal K\\\"ahler metrics were first introduced and studied by E.~Calabi in\n\\cite{cal,cal-2}. Let $X$ denote a connected compact complex manifold of complex dimension $n$. 
A K\\\"ahler metric $g$ on $X$, with\nK\\\"ahler form $\\omega_g$, is {\\it extremal} if it is a\ncritical point of the functional $g \\mapsto \\int _X s _g ^2 \\, \\frac{\\omega\n_g ^n}{n!}$, where $g$ runs over the set of all K\\\"ahler metrics on $X$\nwithin a fixed K\\\"ahler class $\\Omega = [\\omega]$, and $s_g$ denotes the\nscalar curvature of $g$. As shown in \\cite{cal}, $g$ is extremal if and\nonly if the symplectic gradient $K := {\\rm grad} _{\\omega} s _g = J \\, {\\rm\ngrad} _g s _g$ of $s _g$ is a Killing vector field (i.e. $\\mathcal{L} _K g\n= 0$) or, equivalently, a (real) holomorphic vector field. Extremal K\\\"ahler metrics include K\\\"ahler\nmetrics of constant scalar curvature --- CSC K\\\"ahler metrics for short ---\nand in particular K\\\"ahler--Einstein metrics. Clearly, if the identity component ${\\rm Aut} _0 (X)$ of the automorphism group of $X$ is\nreduced to $\\{1\\}$, i.e. if $X$ has no non-trivial holomorphic vector fields, any extremal K\\\"ahler metric is CSC, whereas a CSC K\\\"ahler metric is K\\\"ahler--Einstein if and only if $\\Omega$ is a multiple of the (real) first Chern class $c _1 (X)$. \n\nIt is natural to think of an extremal K\\\"ahler metric $g$ in $\\Omega$ as a {\\it canonical} representative of the K\\\"ahler metrics in the K\\\"ahler class $\\Omega$. One would then expect that the extremal K\\\"ahler metrics in $\\Omega$ reflect most of the holomorphic invariants of the pair $(X, \\Omega)$. In this vein, the goal of this note is to discuss how the following natural splitting problem fits in with some recent progress in the field.\n\n\\begin{prob}Let $X_i$ be compact projective manifolds polarized by ample holomorphic line bundles $L_i$ and $X= \\prod_{i=1}^r X_i$ their product endowed with the polarisation $L= \\bigotimes_{i=1}^r L_i$, where $L_i$ is seen as a holomorphic line bundle over $X$ via the natural pull-back. 
Is any extremal K\\\"ahler metric $g$ in the K\\\"ahler class $\\Omega = 2\\pi c_1(L)$ on $X$ the Riemannian product of extremal K\\\"ahler metrics $g_i$ in the K\\\"ahler classes $\\Omega_i=2\\pi c_1(L_i)$ on the factors $X_i$?\\end{prob}\n\n\nSeveral remarks are in order.\n\n\\smallskip\nFirst of all, it is well-known (see e.g. \\cite[Thm.~2.1]{yau}) that the answer is positive if we suppose that $g$ is a K\\\"ahler--Einstein metric on $X$. This follows from a standard Bochner argument (see e.g. \\cite{gauduchon-0,kob}) applied to the holomorphic projectors $P_j : TX = \\bigoplus_{i=1}^r TX_i\\to TX_i$, where $TX$ (resp. $TX_i$) denotes the holomorphic tangent bundle of $X$ (resp. $X_i$). This is the case when each $X_i$ is either a Calabi--Yau manifold (i.e. $c_1(X_i)=0$) or has ample canonical line bundle $K_{X_i}$ and $L_i= K_{X_i}$, or is a Fano manifold with vanishing Futaki invariant and $L_i= K^{-1}_{X_i}$.\n\n\n\\smallskip\nSecond, it is now known that the extremal K\\\"ahler metrics in a K\\\"ahler class $\\Omega$ are all isometric under the action of the {\\it reduced automorphism group}\\footnote{$\\widetilde{\\mathrm{Aut}}_0 (X)$ is the unique connected {\\it linear\nalgebraic subgroup} of ${\\mathrm{Aut}}_0 (X)$ such that the quotient ${\\rm\nAut} _0 (X)\/\\widetilde{{\\mathrm{Aut}}}_0 (X)$ is the Albanese torus of $X$~\\cite{fujiki-0}; its Lie algebra is the space of\n(real) holomorphic vector fields whose zero-set is non-empty\n~\\cite{fujiki-0,kobayashi,Le-Sim,gauduchon-book}.} $\\widetilde{\\mathrm{Aut}}_0 (X)$~\\cite{BM,CT,Do-one,M4}. Thus, the main difficulty in proving the splitting property is to show that if the polarized projective manifold $(X,L)= \\prod_{i=1}^r(X_i,L_i)$ admits an extremal K\\\"ahler metric, then each factor $(X_i,L_i)$ also admits an extremal K\\\"ahler metric. 
It was suggested by S.-T.~Yau \\cite{yau-1} that a complete obstruction to the\nexistence of extremal K\\\"ahler metrics in the K\\\"ahler class $\\Omega = 2\\pi c _1\n(L)$ on a projective manifold $X$ polarized by an ample\nholomorphic line bundle $L$ should be expressed in terms of {\\it stability}\nof the pair $(X, L)$. The currently accepted notion of stability is the\n$K$-({\\it poly}){\\it stability} introduced by G.~Tian~\\cite{Tian2} and S.~K.~Donaldson~\\cite{Do2}. The {\\it Yau--Tian--Donaldson conjecture} can\nthen be stated as follows. {\\it A polarized projective manifold $(X, L)$\nadmits a CSC K\\\"ahler metric if and only if it is $K$-polystable.} The implication `CSC $\\Rightarrow$\n{K-polystable}' in the conjecture is now well-established, thanks to work\nby S.~K.~Donaldson \\cite{Do4}, X.~X.~Chen--G.~Tian~\\cite{CT}, J.~Stoppa\n\\cite{stoppa}, and T.~Mabuchi \\cite{mab-three,mab-three1}, but the other direction is still open. In order to\naccount for extremal K\\\"ahler metrics of non-constant scalar curvature,\nG.~Szekelyhidi introduced in \\cite{Sz, gabor} the notion of {\\it\nrelative} $K$-(poly)stability with respect to a maximal torus of the\nconnected component ${\\rm Aut}_0(X,L)$ of the automorphism group ${\\rm Aut}(X,L)$ of the pair $(X, L)$~\\footnote{Recall that ${\\rm Aut}(X,L)$ consists of the automorphisms of $X$ which come from automorphisms of $L$. It is well-known (see e.g. \\cite{kob-0,gauduchon-book}) that ${\\rm Aut}_0(X,L)= \\widetilde{{\\rm Aut}}_0(X)$.} and the\nsimilar implication `extremal $\\Rightarrow$ {relative K-polystable}' was\nobtained in~\\cite{gabor-stoppa}. While it is not hard to see that in the product case (relative) $K$-(poly)stability of $(X,L)$ implies (relative) $K$-(poly)stability of each factor $(X_i, L_i)$, examples from~\\cite{ACGT} suggest that the notion of relative $K$-(poly)stability must be further strengthened in order to establish the other direction in the Yau--Tian--Donaldson correspondence. 
\n\n\\smallskip Our third observation is that if we start with a product K\\\"ahler metric in the class $2\\pi c_1(L)$, invariant under a maximal connected compact subgroup $K$ of $\\mathrm{Aut}_0(X)= \\prod_{i=1}^r\\mathrm{Aut}_0(X_i)$, then the $K$-relative Calabi flow (a gradient flow for the $K$-relative Mabuchi energy) preserves the Riemannian product structure. On the other hand, it is expected that this flow should converge to an extremal K\\\"ahler metric when it exists (see e.g.~\\cite{D0}). Although this conjecture is very far from being solved, partial evidence for it is given in~\\cite{chen-hu,huang-zheng, tosatti-weinkove}. Note also that this approach has the advantage of applying to the more general case of a product of compact K\\\"ahler manifolds endowed with a product K\\\"ahler class.\n\n\\smallskip Thus motivated, we prove the splitting property under two additional hypotheses. \n\n\\begin{thm}\\label{main} Let $X_i$ be compact projective manifolds polarized by ample holomorphic line bundles $L_i$ and $X= \\prod_{i=1}^r X_i$ their product endowed with the polarisation $L= \\bigotimes_{i=1}^r L_i$, where $L_i$ is seen as a holomorphic line bundle over $X$ via the natural pull-back. Then, any extremal K\\\"ahler metric $g$ in the K\\\"ahler class $\\Omega = 2\\pi c_1(L)$ on $X$ is the Riemannian product of extremal K\\\"ahler metrics $g_i$ in the K\\\"ahler classes $\\Omega_i=2\\pi c_1(L_i)$ on the factors $X_i$, provided that at least one of the following hypotheses is satisfied. \n\\begin{enumerate} \n\\item[\\rm (i)] The integral Futaki invariants of $(X,L)$ introduced in \\cite{futaki-chow} all vanish.\n\\item[\\rm (ii)] For at most one factor $(X_i,L_i)$, the group ${\\mathrm{Aut}}_0(X_i,L_i)$ has a center of positive dimension. \n\\end{enumerate} \n\\end{thm}\n\nThe hypothesis in (i) automatically holds if ${\\rm Aut}_0(X,L)=\\{ {\\rm Id} \\}$. 
However, it is known that the hypothesis in (i) is a restrictive condition in the case when ${\\rm Aut}_0(X,L)$ is non-trivial (see e.g. \\cite{OSY}). Also, by the results in \\cite{futaki-chow,M2}, in the case when $2\\pi c_1(L)$ admits an extremal K\\\"ahler metric, $(X,L)$ is asymptotically Chow stable if and only if the integral Futaki invariants of $(X,L)$ introduced in \\cite{futaki-chow} all vanish. More generally, the existence of an extremal K\\\"ahler metric in $2\\pi c_1(L)$ is expected to imply that $(X,L)$ is asymptotically Chow stable with respect to a maximal torus $T \\subset {\\rm Aut}_0(X,L)$: we give a precise formulation in Conjecture 1 below and discuss it in the light of the work of T.~Mabuchi~\\cite{M1,M2,M3,mab-three, mab-three1}. We then show how the conjectured correspondence would solve (via Lemma 2 and Theorem 7) the splitting of the extremal K\\\"ahler metrics in the general polarized case.\n\n\\smallskip\nWe now outline the proof of Theorem~\\ref{main}. It uses an idea going back to G.~Tian~\\cite{tian-0} (see also \\cite{yau0}) who proved that any K\\\"ahler metric $\\omega$ in $2\\pi c_1(L)$ can be approximated by Fubini--Study metrics induced from the projective embeddings of the polarized variety $(X, L)$. More precisely, let $h$ be a hermitian metric on $L$ whose curvature is $\\omega$. The induced hermitian metric on each tensor power $L^k$ is still denoted by $h$; using $h$ and $\\omega$, we consider the $L_2$ hermitian inner product on each vector space $H^0(X,L^k)$. Fixing an orthonormal basis for each $H^0(X,L^k)$, define a sequence of embeddings $\\Phi_k : X \\hookrightarrow {\\mathbb C} P^{N_k}$ and induced K\\\"ahler metrics $\\frac{1}{k} \\Phi_k^*(\\omega_{\\rm FS})$ in $\\Omega=2\\pi c_1(L)$. Tian showed that $\\frac{1}{k} \\Phi_k^*(\\omega_{\\rm FS})$ converges to $\\omega$ in $\\cC^2$ as $k \\to \\infty$, while the $\\cC^{\\infty}$ convergence follows from subsequent work by W.~Ruan~\\cite{ruan}. 
For each $k$, let $\\{ s_0, \\cdots, s_{N_k} \\}$ be an orthonormal basis of $H^0(X,L^k)$ with respect to the $L_2$ hermitian inner product defined by $h_k=h^{\\otimes k}$ and $\\omega$; we then define the corresponding Bergman kernel $\\rho_k$ as \n$$\\rho_k = \\sum_{i=0}^{N_k} h_k(s_i,s_i).\n$$\nThe expansion of the Bergman kernel was established by D.~Catlin~\\cite{catlin} and S.~Zelditch~\\cite{zeldich}. The coefficients of the expansion were calculated by Z.~Lu~\\cite{Lu}. An important ramification of this basic idea, relevant to the problem of existence of a CSC metric in $2\\pi c_1(L)$, was given by S.~K.~Donaldson~\\cite{Do-one} who proved that when ${\\mathrm{Aut}}_0(X)=\\{ 1 \\}$, a CSC K\\\"ahler metric $\\omega$ in $2\\pi c_1(L)$ can be approximated in $\\cC^{\\infty}$ by using special projective embeddings called {\\it balanced}, a notion previously introduced and studied by H.~Luo~\\cite{L} and S.~Zhang~\\cite{Z} (see also \\cite{BLY}): a hermitian metric $h_k$ on $L^k$ is called balanced if the corresponding Bergman kernel $\\rho_k$ is a constant function on $X$, or equivalently, if the curvature $\\omega_k$ of $h_k$ satisfies $\\omega_k= \\Phi_k^*(\\omega_{\\rm FS})$. Thus, S.~K.~Donaldson's theorem states that if ${\\mathrm{Aut}}_0(X,L)=\\{ 1 \\}$ and $\\omega$ is a CSC K\\\"ahler metric in $2\\pi c_1(L)$, then for $k \\gg 1$, there exists a balanced hermitian metric $h_k$ on $L^k$ with curvature $\\omega_k$ and, moreover, $\\frac{1}{k}\\omega_k$ converges to $\\omega$ in $\\cC^{\\infty}$ as $k \\to \\infty.$ T.~Mabuchi~\\cite{M1,M2,M3} extended Donaldson's result to the case when ${\\mathrm{Aut}}_0(X,L)$ is non-trivial and $\\omega$ is an extremal K\\\"ahler metric: in this case, $\\omega$ can be approximated in $\\cC^{\\infty}$ by the normalized curvatures $\\frac{1}{k}\\omega_k$ of hermitian metrics $h_k$ on $L^k$ which are {\\it balanced relative to a torus} in ${\\rm Aut} _0 (X,L)$: this theory is reviewed in Section~\\ref{s:relative balanced}. 
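As a heuristic sketch of why balanced metrics relate to CSC metrics (stated in a hedged form: the constant $c_n$ below depends on the chosen normalizations of the volume form and of the $L_2$ inner product, and plays no role in the conclusion, since constants disappear under $dd^c\\log$), the Catlin--Zelditch expansion and the comparison between $\\omega$ and the induced Fubini--Study metrics read

```latex
\\rho_k \\;=\\; c_n\\, k^{n}\\Big(1+\\frac{s_{\\omega}}{2k}+O(k^{-2})\\Big),
\\qquad
\\frac{1}{k}\\,\\Phi_k^*(\\omega_{\\rm FS}) \\;=\\; \\omega+\\frac{1}{2k}\\,dd^c\\log\\rho_k,
```

where $s_{\\omega}$ denotes the scalar curvature of $\\omega$. In particular, if $\\rho_k$ is constant then $\\frac{1}{k}\\Phi_k^*(\\omega_{\\rm FS})=\\omega$ exactly, and constancy of $\\rho_k$ to the order $k^{n-1}$ is an approximate constancy condition on $s_{\\omega}$.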
For simplicity, we shall momentarily refer to such $h_k$'s as {\\it relative} balanced metrics on $L^k$. In the case when $(X,L)= \\prod_{i=1}^r (X_i,L_i)$, Grauert's direct image theorem for coherent sheaves implies that $H^0(X,L^k) = \\bigotimes_{i=1}^r H^0(X_i,L_i^k)$. It is then easily seen that if each $(X_i, L_i^k)$ admits a relative balanced hermitian metric, then the tensor product metric on $(X,L^k)$ is relative balanced and has curvature compatible with the product structure. Conversely, we show in Section~\\ref{s:proof} (see Theorem~\\ref{reduced}) that if $L^k$ admits some relative balanced metric then each $L_i^k$ does. We achieve this by studying in Section~\\ref{s:functional} the Kempf--Ness function ${\\mathbb D}$ introduced by H.~Luo~\\cite{L} and S.~K.~Donaldson~\\cite{D1} (it is the function denoted $D$ in \\cite{L} and $\\tilde Z$ in \\cite{D1} and is essentially the $\\log$ of the Chow norm introduced in \\cite{Z}). This observation, together with Mabuchi's approximation result alluded to above, reduces our problem to showing the uniqueness of the relative balanced metric on $L$ modulo the action of ${\\rm Aut}_0(X,L)$. This is not automatic in the setting of \\cite{M1,M2,M3} but holds under the assumptions (i) or (ii) of Theorem~\\ref{main}. We thus propose in Section~\\ref{s:relative balanced} a stronger notion of relative balanced metrics (which also appears in the recent work \\cite{M5}) and point out that for such (strongly) relative balanced metrics the uniqueness modulo ${\\rm Aut}_0(X,L)$ automatically holds (Lemma~\\ref{gabor}). 
\n\n\n\n\n\\section{Hermitian metrics balanced relative to a torus and relative Chow stability}\\label{s:relative balanced}\nIn this section we briefly review some material taken from the works of S.~K.~Donaldson~\\cite{Do-one,D1}, H.~Luo~\\cite{L}, T.~Mabuchi~\\cite{M1,M2,M3,M4,M5} and S.~Zhang~\\cite{Z} that we shall need in the sequel.\n\nLet $X$ be a compact complex projective manifold of complex dimension $n$, polarized by a very ample line bundle $L$, and let $N+1$ be the dimension of $V=H^0(X,L)$. Let $\\kappa : X \\hookrightarrow P(V^*)$ denote the Kodaira embedding with $L= \\kappa^*(\\cO(1))$. For any basis ${\\bf s}=\\{s_0, \\cdots, s_N\\}$ of $V$ we denote by \n$$\n\\Phi_{\\bf s} : X \\hookrightarrow \\mathbb{C} {P}^N \n$$\nthe composition of $\\kappa$ with the identification ${\\bf s} : P(V^*) \\cong {\\mathbb C} P^{N}$.\n\n\n\n\n\nThe reduced automorphism group $\\widetilde{\\mathrm{Aut}}_0(X)$ is the closed connected subgroup of ${\\rm Aut}_0(X)$ whose Lie algebra $\\mathfrak{h}_0$ is the ideal of holomorphic vector fields with zeros on $X$, see \\cite{fujiki-0,kobayashi,Le-Sim,gauduchon-book}. It is well-known (see e.g.~\\cite{gauduchon-book,kob-0}) that $\\widetilde{\\mathrm{Aut}}_0(X)$ coincides with the connected component ${\\mathrm{Aut}}_0(X,L)$ of the group of automorphisms of the pair $(X,L)$ and we obtain a group representation $\\rho : {\\mathrm{Aut}}_0(X,L) \\to {\\rm PGL}(V)$. One can think of ${\\mathrm{Aut}}_0(X,L)$ as the connected group generated by restrictions to $\\kappa (X)$ of elements of ${\\rm PGL}(V)$ which preserve $\\kappa(X) \\subset P(V^*)$; replacing $L$ by the tensor power $L^{N+1}$, we can further lift the action of $\\widetilde{\\mathrm{Aut}}_0(X)={\\rm Aut}_0(X,L)$ on $X$ to an action on the bundle $L$ (see e.g. 
\\cite{kob-0}), and find a group representation\n\\begin{equation}\\label{representation}\n\\rho : \\mathrm{Aut}_0(X,L) \\rightarrow {\\rm SL}(V).\n\\end{equation}\nIn conclusion, by replacing $L$ with a sufficiently big tensor power if necessary, we can assume that the reduced automorphism group $\\widetilde{\\mathrm{Aut}}_0(X) = {\\mathrm{Aut}}_0(X,L)$ of $X$ lifts to act on $L$, and identify the action of $\\widetilde{\\mathrm{Aut}}_0(X) = {\\mathrm{Aut}}_0(X,L)$ on $X$ with the induced action on $\\kappa(X)$ of the connected subgroup ${\\rm SL}_0(V, X)$ of elements of ${\\rm SL} (V)$ which preserve $\\kappa(X) \\subset P(V^*)$; furthermore, we shall also assume $N>n$.\n\nFrom now on, we shall fix a real torus $T \\subset \\widetilde{\\mathrm{Aut}}_0(X,J)$ and consider hermitian metrics $h$ on $L$ which are $T$-invariant and whose curvature $\\omega$ defines a $T$-invariant K\\\"ahler form in $2\\pi c_1(L)$. Note also that, by the Calabi theorem~\\cite{cal}, if the K\\\"ahler class $\\Omega= 2\\pi c_1(L)$ admits an extremal K\\\"ahler metric, it will also admit one which is $T$-invariant. Thus, following \\cite{M1}, we are now in a position to introduce the notion of a ($T$-invariant) hermitian metric $h$ on $L$ which is balanced relative to $T$. Denote by $T^{c}$ the complexified action of $T$ and consider the lifted linear $T^{c}$-action on $V$ via $\\rho$. 
Then, for every character $\\chi \\in \\mathrm{Hom}(T^{c}, \\mathbb{C}^*)$, we set\n$$\nV(\\chi) := \\{ s \\in V; \\rho(t)\\cdot s = \\chi(t)\\ s \\ \\mbox{ for all } t \\in T^{c} \\},\n$$\nand obtain the splitting with respect to the mutually distinct characters $\\chi_1, \\ldots, \\chi_{\\nu} \\in \\mathrm{Hom}(T^{c}, \\mathbb{C}^*)$ \n\\begin{equation}\\label{e:split}\nV = \\bigoplus_{k=1}^{\\nu} V(\\chi_k),\n\\end{equation}\nwith $\\prod_{k=1}^{\\nu} \\chi_k^{n_k}=1$ where $n_k= {\\rm dim}_{{\\mathbb C}}(V(\\chi_k))$.\n\n\n\n\n\\begin{defn} Let $m(\\cdot, \\cdot)$ be a hermitian inner product on $V$.\nWe say that $\\{s_0, s_1, \\ldots, s_N \\}$ is an $admissible\\ normal\\ basis$ of $(V,m)$ if it is compatible with the decomposition \\eqref{e:split} and provides a normal basis of $m$ on each factor $V(\\chi_k)$, i.e. if there exist positive real constants $b_k$ ($k=1, \\ldots, \\nu)$, with $\\sum_{k=1}^\\nu n_k b_k = N+1$, and a sub-basis $\\{s_{k,i}, \\ k=1, \\ldots, \\nu, \\ i=1, \\ldots, n_k \\}$ for $V(\\chi_k)$, such that\n\\begin{enumerate}\n\\item[$\\bullet$] $m(s_{k,i}, s_{l,j})=0$ if $l \\neq k$ or $i\\neq j$;\n\\item[$\\bullet$] $m(s_{k,i}, s_{k,i}) = b_k.$\n\\end{enumerate}\nThe vector $b := (b_1, \\ldots, b_\\nu)$ is called the $index$ of the admissible normal basis $\\{s_0, \\ldots, s_N\\}$ for $(V,m)$; in the case when the index is $b=(1, \\ldots, 1)$ we shall call the basis an {\\it admissible orthonormal} basis of $(V,m)$.\n\\end{defn}\nNote that a hermitian inner product $m( \\cdot, \\cdot)$ admits an admissible normal basis if and only if $V(\\chi_k) \\perp^m V(\\chi_l)$ for $k\\neq l$, which in turn is equivalent to $m$ being $T$-invariant. 
For any $T$-invariant hermitian metric $h$ on $L$ whose curvature is a K\\\"ahler form in $2\\pi c_1(L)$, we have $V(\\chi_k) \\perp V(\\chi_l)$ for $k \\neq l$ with respect to the induced $L^2$ hermitian inner product $m=\\langle \\cdot, \\cdot \\rangle_h$ on $V$, defined by \n$$\n\\langle s_1, s_2 \\rangle_h = \\int_X h(s_1, s_2) \\omega^n,\n$$ \nfor any two holomorphic sections $s_1, s_2 \\in H^0(X, L)$. We then define the smooth function $$E_{h, b} := \\sum_{i=0}^N h(s_i,s_i),$$ which is clearly independent of the choice of an admissible normal basis of index $b$ on $(V, m)$.\n\n\\begin{defn}\\label{d:balanced}\nA $T$-invariant hermitian metric $h$ on $L$ whose curvature $\\omega$ is a K\\\"ahler form on $X$ is called {\\it balanced relative to $T$ of index $b$} if the function $E_{h, b}$ is constant for any admissible normal basis of index $b$. The curvature $\\omega$ of $h$ is called a balanced K\\\"ahler metric of index $b$ relative to $T$.\n\\end{defn}\n\n\nThe definition above has the following useful interpretation in terms of the K\\\"ahler geometry of $X$. Consider the space ${\\mathcal B}^T(V)$ of bases of $V=H^0(X,L)$, which are {\\it compatible} with the splitting \\eqref{e:split}, i.e. which are admissible normal bases for some $T$-invariant hermitian inner product $m$. If ${\\bf s}= \\{ s_0, \\cdots, s_N\\}$ is an element of ${\\mathcal B}^T(V)$ and $h$ is any $T$-invariant hermitian metric on $L$, we put \n\\begin{equation}\\label{hs}\nh_{\\bf s} = \\frac{h}{\\sum_{i=0}^N h(s_i, s_i)},\n\\end{equation}\nwhich is manifestly independent of the auxiliary hermitian metric $h$ on $L$. \n\nAny basis ${\\bf s}=\\{s_0, \\cdots, s_N\\}$ in ${\\mathcal B}^T(V)$ determines a $T$-invariant hermitian inner product $m_{\\bf s}$ on $V$ (and $V^*$) such that $\\bf s$ (resp. the dual basis ${\\bf s}^*$) is admissible and orthonormal. 
The identification ${\\bf s}^* : P(V^*) \\cong {\\mathbb C} P^{N}$ determines a Fubini--Study metric $\\omega_{\\rm FS, {\\bf s}}$ on $P(V^*)$, representing $2\\pi c_1(\\cO(1))$; we denote by $\\omega_{X,{\\bf s}}= \\kappa^* (\\omega_{\\rm FS, {\\bf s}})$ the induced K\\\"ahler form on $X$ via the Kodaira embedding $\\kappa$. Note that $\\omega_{X,{\\bf s}}$ is the curvature of the hermitian metric $h_{\\bf s}$ on $L$ defined by \\eqref{hs} and if $\\omega$ is the curvature of $h$, it is easily seen that \n\\begin{equation}\\label{potential}\n\\omega_{X,{\\bf s}} = \\omega + \\frac{1}{2} dd^c \\log (\\sum_{i=0}^N h(s_i,s_i)).\n\\end{equation}\nOne therefore obtains\n\\begin{lemma}\\label{characterization}\nA $T$-invariant hermitian metric $h$ on $L$ is balanced relative to $T$ of index $b$ if and only if, with respect to any admissible orthonormal basis ${\\bf s}$ of the hermitian inner product $m_{h,b}$ on $V$ (defined by rescaling $\\langle \\cdot, \\cdot \\rangle_h$ on each space $V(\\chi_k)$ by a factor $1\/b_k^2$), we have $h_{\\bf s}= \\lambda h$ for some positive constant $\\lambda$. \n\\end{lemma}\n\n\n\n\n\n\n\\smallskip\nIn order to give further motivation for the above notions, we now briefly recall the (finite dimensional) momentum map interpretation given by S.~K.~Donaldson~\\cite{Do-one,D1}, and subsequently studied in \\cite{PS,W}. 
\n\n\n\nOn the space ${\\mathcal B}^T(V)$ the following groups act naturally:\n\\begin{enumerate}\n\\item[$\\bullet$] ${\\mathbb C}^{\\times}$ by scalar multiplications;\n\\item[$\\bullet$] $\\rho(Z_{{\\rm Aut}_0(X,L)}(T))$ where $Z_{{\\rm Aut}_0(X,L)}(T)$ is the connected component of the identity of the centralizer of $T$ in ${\\rm Aut}_0(X,L)$;\n\\item[$\\bullet$] $G_T={\\rm S}(\\prod_{k=1}^{\\nu} {\\rm U}(n_k))$, which is also a connected component of the centralizer of $\\rho(T)$ in ${\\rm SU}(N+1)$.\n\\end{enumerate}\nAs the actions of ${\\mathbb C}^{\\times}$ and $\\rho(Z_{{\\rm Aut}_0(X,L)}(T))$ commute with the action of $G_T$, we can consider the quotient space ${\\mathcal Z}^T(V) = {\\mathcal B}^T(V) \/ \\big({\\mathbb C}^{\\times} \\times \\rho(Z_{{\\rm Aut}_0(X,L)}(T))\\big)$ on which $G_T$ acts with stabilizer (of every point) $G_T \\cap \\rho(Z_{{\\rm Aut}_0(X,L)}(T))$; in our setting $\\rho(T) \\subset G_T \\cap \\rho(Z_{{\\rm Aut}_0(X,L)}(T))$. Following \\cite{Do-one,PS}, there is a K\\\"ahler structure on ${\\mathcal Z}^T(V)$, whose definition uses the fact that any point ${\\bf s}= \\{s_{k,i}, \\ k=1,\\ldots, \\nu, \\ i=1, \\ldots, n_k\\}$ of ${\\mathcal B}^T(V)$ defines an embedding $\\Phi_{\\bf s} : X \\hookrightarrow {\\mathbb C}P^N$. With respect to this K\\\"ahler structure $G_T$ acts isometrically with momentum map given (up to a non-zero multiplicative constant) by\n$$\\mu_{G_{T}}({\\bf s}) = i \\ \\Big( \\bigoplus_{k=1}^{\\nu} ( \\langle s_{k,i}, s_{k,j} \\rangle_{h_{\\bf s}}) \\Big)_0, $$\nwhere $( \\cdot )_0$ denotes the traceless part of the matrix (the Lie algebra ${\\rm su}(N+1)$ being identified with its dual using the positive definite Killing form), and with complexification $G_T^c= {\\rm S}(\\prod_{k=1}^{\\nu}{\\rm GL}(n_k, {\\mathbb C}))$. 
It follows that ${\\bf s}$ is a zero of the momentum map $\\mu_{G_T}$ if and only if $h_{\\bf s}$ is a balanced metric of index $b=(1, \\ldots, 1)$ relative to $T$; such a metric is also balanced with respect to the trivial torus $T=\\{ {\\rm Id} \\}$ (of index $1$). This is the classical notion of balanced embedding studied in \\cite{L,Z}. It follows from these works that the existence of a balanced basis ${\\bf s}$ is equivalent to the Chow polystability of the variety $(X,L)$, which we briefly recall: Let $d$ be the degree of the image $\\kappa (X) \\subset P(V^*)$ under the Kodaira embedding. Any element $h=(h_0, \\cdots, h_{n})$ of $P(V) \\times \\cdots \\times P(V)$ ($(n+1)$-times) is seen as $(n+1)$ hyperplanes in $P(V^*)$, and\n$$\\{ h \\in P(V) \\times \\cdots \\times P(V) : h_0 \\cap \\cdots \\cap h_n \\cap \\kappa(X) \\neq \\emptyset\\}$$\nbecomes a divisor in $P(V) \\times \\cdots \\times P(V)$ defined by a polynomial $\\hat X \\in W=\\big({\\rm Sym}^d(V^*)\\big)^{\\otimes (n+1)}$, called a {\\it Chow line} and determined up to a non-zero scale; the corresponding element $[\\hat X] \\in P(W)$ is the {\\it Chow point} associated to $(X,L)$.\n\\begin{defn}\\label{chow} The polarized variety $(X,L)$ is called {\\it Chow polystable} if the orbit of $\\hat X$ in $W=\\big({\\rm Sym}^d(V^*)\\big)^{\\otimes (n+1)}$ under the natural action of ${\\rm SL}(V)$ is closed. 
$(X,L)$ is called {\\it asymptotically Chow polystable} if $(X,L^k)$ is Chow polystable for any $k\\gg1$.\n\\end{defn}\nThe result of H.~Luo~\\cite{L} and S.~Zhang~\\cite{Z} (see also \\cite[Theorem~A]{M1}) then states\n\\begin{thm}\\label{luo-zhang} A compact polarized projective complex manifold $(X,L)$ is Chow polystable if and only if $L$ admits a balanced hermitian metric $h$ {\\rm (}of index $1$ relative to $T=\\{ {\\rm Id} \\}${\\rm )}.\n\\end{thm}\n\nThe relevance of balanced metrics to our work comes from the following central result in the theory, proved by S.~K.~Donaldson~\\cite{Do-one} in the case when ${\\mathrm{Aut}}_0(X,L)$ is trivial, and extended by T.~Mabuchi~\\cite{M2} to the general case.\n\\begin{thm}\\label{do-mabuchi} Let $(X,L)$ be an asymptotically Chow polystable compact polarized projective manifold and $\\omega \\in 2\\pi c_1(L)$ a K\\\"ahler metric of constant scalar curvature~\\footnote{A.~Futaki~\\cite{futaki-chow} showed that the extremal K\\\"ahler metrics in the K\\\"ahler class of an asymptotically Chow stable polarization must be CSC.}. Then, there exist sequences of integers $m_k \\to \\infty$ and hermitian metrics $h_k$ on $L^{m_k}$, with curvatures $\\omega_k$, which are balanced {\\rm (}relative to $T=\\{ {\\rm Id} \\}$ of index $b=1${\\rm )}, and such that $\\frac{1}{m_k}\\omega_k$ converges in $\\cC^{\\infty}$ to $\\omega$ as $k \\to \\infty$. \\end{thm}\nNote that when ${\\rm Aut}_0(X,L)$ is trivial, S.~K.~Donaldson also shows in \\cite{Do-one} that the existence of a CSC K\\\"ahler metric in $2\\pi c_1(L)$ implies that $(X,L)$ is asymptotically Chow polystable, while it is known that the latter condition is restrictive in the case when ${\\rm Aut}_0(X,L)$ is non-trivial (see e.g. \\cite{OSY}). 
\n\n\\smallskip\nOne therefore needs to further relax the condition on balanced metrics in order to find similar approximations of extremal K\\\"ahler metrics on asymptotically Chow unstable varieties, and this is where the choice of indices $b$ will come into play. Using the momentum map picture described above, a natural approach developed in \\cite{gabor, Sz} would be, instead of zeroes of $\\mu_{G_T}$, to study the critical points of the squared norm $||\\mu_{G_T} ||^2$ (with respect to the positive definite Killing inner product of $\\mathfrak{su}(N+1)$). It follows from the momentum map picture that a basis ${\\bf s} \\in {\\mathcal B}^T(V)$ is a critical point of $||\\mu_{G_T}||^2$ if and only if $\\mu_{G_T}({\\bf s})$ is a matrix which belongs to the Lie algebra of the stabilizer of the projection of ${\\bf s}$ to ${\\mathcal Z}^{T}$ for the action of $G_T$. In order to simplify the discussion, and with the application in mind, let us assume that $T$ is a maximal torus in ${\\rm Aut}_0(X,L)$. This implies that $\\rho(Z_{{\\rm Aut}_0(X,L)}(T)) \\cap G_T = \\rho(T)$, i.e. the stabilizer of any point of ${\\mathcal Z}^T(V)$ is $\\rho(T)$. Therefore, a basis ${\\bf s}$ is a critical point for $||\\mu_{G_T}||^2$ if and only if $\\mu_{G_T}({\\bf s})$ is a diagonal matrix $$ i \\ {\\rm diag}(a_1, \\ldots, a_1, a_2, \\ldots, a_2, \\cdots, a_{\\nu}, \\ldots, a_{\\nu})$$ which belongs to the Lie algebra of $\\rho(T)$. In other words, ${\\bf s}$ defines a critical point of $||\\mu_{G_T}||^2$ on ${\\mathcal Z}^{T}$ if and only if the induced hermitian metric $h_{\\bf s}$ on $L$ is balanced relative to $T$ of index $b=(b_1, \\cdots, b_{\\nu})$ with \n\\begin{equation}\\label{constraint}\nb_k= \\frac{1 + \\log |\\chi_k(t)|}{1 + \\sum_{\\ell=1}^{\\nu} \\frac{n_{\\ell} \\log |\\chi_{\\ell}(t)|}{N+1}}, \\ k=1, \\ldots, \\nu\n\\end{equation} \nfor some $t \\in T^c$. 
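As a consistency check, the indices \\eqref{constraint} satisfy the normalization $\\sum_{k=1}^{\\nu} n_k b_k = N+1$ required of an admissible normal basis: writing $c_k=\\log |\\chi_k(t)|$ and using $\\sum_{k=1}^{\\nu} n_k = N+1$,

```latex
\\sum_{k=1}^{\\nu} n_k b_k
\\;=\\; \\frac{\\sum_{k=1}^{\\nu} n_k (1+c_k)}{1+\\frac{1}{N+1}\\sum_{\\ell=1}^{\\nu} n_{\\ell}\\, c_{\\ell}}
\\;=\\; \\frac{(N+1)+\\sum_{k=1}^{\\nu} n_k c_k}{\\bigl((N+1)+\\sum_{\\ell=1}^{\\nu} n_{\\ell}\\, c_{\\ell}\\bigr)\/(N+1)}
\\;=\\; N+1 .
```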
\nThe corresponding interpretation in terms of Chow stability has been worked out by T.~Mabuchi~\\cite{M5} and is expressed by the closedness in $W$ of the orbit of the Chow line $\\hat X$ under the natural action of the group\n$$G_{T^{\\perp}} ^c (V)= \\{ {\\rm diag} (A_1, \\cdots, A_{\\nu}) \\in \\prod_{k=1}^{\\nu} {\\rm GL}(V({\\chi_k})) : \\prod_{k=1}^{\\nu} {\\rm det} (A_k)^{1+\\log |\\chi_k(t)|}=1 \\ \\forall t \\in T^c \\}.$$\n\\begin{defn}\\label{stability} We call $(X,L)$ {\\it Chow polystable relative to $T$} if the Chow line $\\hat X$ associated to $(X,L)$ has a closed orbit with respect to $G_{T^{\\perp}}^c(V)$; $(X,L)$ is called {\\it asymptotically Chow polystable relative to $T$} if $(X, L^k)$ is Chow polystable relative to $T$ for all $k\\gg 1$. \n\\end{defn}\n\\noindent We then have (cf. \\cite[Theorem A]{M1} and \\cite[Theorem C]{M5}) \n\\begin{thm}\\label{chow-stability}\nA polarized compact projective complex manifold $(X,L)$ is Chow polystable relative to $T$ if and only if $L$ admits a hermitian metric balanced relative to $T$ for some index $b$ satisfying \\eqref{constraint}. \n\\end{thm}\n\nA generalization of the Kempf--Ness theorem (see \\cite[Theorem 3.5]{gabor} or \\cite[Theorem 1.3.4]{Sz}) in the momentum map setup provides us with the following useful result.\n\\begin{lemma}\\label{gabor} Let $(X,L)$ be a compact polarized projective complex manifold and $T$ a maximal torus in ${\\rm Aut}_0(X,L)$. 
Then $L$ admits a hermitian metric balanced relative to $T$ of some index $b$ satisfying \\eqref{constraint} if and only if the orbit of some {\\rm (}and hence each{\\rm )} point of ${\\mathcal B}^T(V)$ under the action of the group $$G^c_{T^\\perp}=\\{ {\\rm diag} (A_1, \\cdots, A_{\\nu}) \\in \\prod_{k=1}^{\\nu} {\\rm GL}(n_k, {\\mathbb C}) : \\prod_{k=1}^{\\nu} {\\rm det} (A_k)^{1+\\log |\\chi_k(t)|}=1 \\ \\forall t \\in T^c \\}$$ contains a basis ${\\bf s}$ such that $h_{\\bf s}$ is balanced with respect to $T$ of index satisfying \\eqref{constraint}. Furthermore, any two balanced hermitian metrics relative to $T$ with indices satisfying \\eqref{constraint} are homothetic under the action of ${\\rm Aut}_0(X,L)$. \\end{lemma}\nIt is not difficult to give a direct proof of Lemma~\\ref{gabor}, once one knows the relevant identities to use. The uniqueness part follows from the fact that the $T^c$ action generates balanced metrics relative to $T$ (of some index $b$ satisfying \\eqref{constraint}) and the corresponding admissible bases of index $b$ (see Lemma~\\ref{characterization}) exhaust the $G^c_{T^\\perp}$ orbits of ${\\mathcal Z}^T(V)$; one can then apply Proposition~\\ref{convex} in Section~\\ref{s:functional}. In particular, the index $b$ in Lemma~\\ref{gabor} is uniquely determined.\n\n\n\\smallskip\nIn view of the discussion above, the following provides a natural scope for a generalization of Theorem~\\ref{do-mabuchi}.\n\\begin{conj}\\label{relative-stability} Let $(X,L)$ be a compact polarized projective manifold and $\\omega \\in 2\\pi c_1(L)$ an extremal K\\\"ahler metric which, without loss of generality, is invariant under a maximal torus $T\\subset {\\rm Aut}_0(X,L)$. 
Then $(X,L)$ is asymptotically Chow polystable relative to $T$, and there exists a sequence of integers $m_k \\to \\infty$ and $T$-invariant hermitian metrics $h_k$ on $L^{m_k}$ with curvatures $\\omega_k$, which are balanced relative to $T$ of indices $b_k$ satisfying \\eqref{constraint}, such that the corresponding relative balanced K\\\"ahler metrics $\\frac{1}{m_k}\\omega_k$ on $X$ converge in $\\cC^{\\infty}$ to $\\omega$ as $k \\to \\infty$.\n\\end{conj}\n\n\n\n\\smallskip\nIn a series of works \\cite{M1,M2,M3}, T.~Mabuchi has established a weaker version of Conjecture~\\ref{relative-stability}. The main idea is to consider, instead of the group $G_T$, the smaller group $G = \\prod_{k=1}^{\\nu} {\\rm SU}(n_k)$ which acts on ${\\mathcal Z}^T(V)$ with momentum map\n$$\\mu_{G}({\\bf s})= i \\ \\bigoplus_{k=1}^{\\nu} \\Big( \\langle s_{k,i}, s_{k,j} \\rangle_{h_{\\bf s}}\\Big)_0, $$\nso that the zeroes of $\\mu_G$ correspond to bases ${\\bf s}$ in ${\\mathcal B}^T(V)$ for which the hermitian metric $h_{\\bf s}$ on $L$ is balanced relative to $T$ of some index $b$ (not necessarily satisfying \\eqref{constraint}). The corresponding notion of Chow stability is then\n\\begin{defn}\\label{weak-stability} $(X,L)$ is {\\it weakly Chow polystable relative to $T$} if the Chow line $\\hat X$ associated to $(X,L)$ has a closed orbit under the action of $G^c(V)= \\prod_{k=1}^{\\nu} {\\rm SL}(V(\\chi_k))$. We call $(X,L)$ {\\it asymptotically weakly Chow polystable relative to $T$} if $(X,L^k)$ is weakly Chow polystable relative to $T$ for all $k\\gg 1$. \n\\end{defn}\n\nThe following result is extracted from \\cite{M1,M2,M3}.\n\\begin{thm}\\label{mabuchi} Let $(X,L)$ be a compact polarized projective manifold and $\\omega \\in 2\\pi c_1(L)$ an extremal K\\\"ahler metric which, without loss of generality, is invariant under a maximal compact connected subgroup $K\\subset {\\mathrm{Aut}}_0(X,L)$. 
Let $T\\subset K$ be any torus in the connected component of the identity of the centre of $K$. Then, $(X,L)$ is asymptotically weakly Chow polystable relative to $T$ and there exists a sequence of integers $m_k \\to \\infty$ and $T$-invariant hermitian metrics $h_k$ on $L^{m_k}$ with curvatures $\\omega_k$, which are balanced relative to $T$, such that the metrics $\\frac{1}{m_k}\\omega_k$ on $X$ converge in $\\cC^{\\infty}$ to $\\omega$ as $k \\to \\infty$. \n\\end{thm}\nThe above statement is implicitly established in \\cite{M3} in the course of the proof that the existence of an extremal K\\\"ahler metric in $2\\pi c_1(L)$ implies that $(X,L)$ is asymptotically weakly Chow polystable relative to $T$; the choice of $T$ is specified by \\cite[Theorem~I]{M3}. More precisely, \\cite[Theorem~B]{M1} shows that any $T$-invariant extremal K\\\"ahler metric in $2\\pi c_1(L)$ can be approximated by a sequence of {\\it almost critical} metrics; then, combining S.~K.~Donaldson's idea in \\cite{Do-one} and D.~Phong--J.~Sturm's estimates in \\cite{PS}, a perturbation technique is elaborated in \\cite{M2} and applied in \\cite{M3} in order to perturb the almost critical metrics to balanced metrics relative to $T$, in such a way that their curvatures converge to $\\omega$.\n\nThe obstacle to completing the proof of the splitting property (Theorem~\\ref{main} in the introduction) in full generality by means of Theorem~\\ref{mabuchi} lies in the lack of an analogue of Lemma~\\ref{gabor}, which would guarantee that any two balanced metrics relative to $T$ on $L$ are homothetic. We show that this is true under the hypotheses (i) and (ii) of Theorem~\\ref{main}.\n\n\\section{The Kempf--Ness function ${\\mathbb D}$}\\label{s:functional}\nIn this section we are going to apply the well-known `Kempf--Ness' principle related to the problem of studying zeroes of momentum maps. 
For simplicity, we discuss the existence of hermitian metrics on $L$ which are balanced relative to a fixed torus $T\\subset {\\rm Aut}_0(X,L)$ of some index, but the discussion and all of the results can be easily adapted to the case of indices satisfying \\eqref{constraint} simply by changing the group $G$ to $G_{T^{\\perp}}$. We have seen in Section~\\ref{s:relative balanced} that the problem of finding a basis ${\\bf s}\\in {\\mathcal B}^T(V)$ for which $h_{\\bf s}$ is balanced with respect to $T$ is equivalent to finding zeroes of the momentum map $\\mu_G$ in a given orbit $G^c \\cdot [{\\bf s}_0 ]\\subset {\\mathcal Z}^T(V)$. As $\\mu_G$ is $G$-equivariant, this becomes a problem on the symmetric space $G^c\/G$. On that space we are going to consider a function $F_{{\\bf s}_0} : G^c\/G \\to {\\mathbb R}$, called {\\it Kempf--Ness} function, whose behaviour determines whether or not there exists a zero of $\\mu_G$ on $G^c \\cdot {\\bf s}_0$. This function is geodesically convex and its derivative is essentially $\\mu_G$; hence $\\mu_G$ admits a zero on $G^c \\cdot {\\bf s}_0$ if and only if $F_{{\\bf s}_0}$ attains a minimum on $G^c\/G$.\n\n\n \n\nOn the space ${\\mathcal H}$ of all hermitian inner products $m$ on $V$ such that $V(\\chi_k) \\perp^{m} V(\\chi_l), \\ l \\neq k$ (equivalently, which admit admissible normal bases of some index) the group $G^c(V)$ acts with stabilizer $G(V,m)=G^c(V) \\cap {\\rm U}(V,m)$; thus, for each $m_0 \\in {\\mathcal H}$, by introducing an admissible orthonormal basis ${\\bf s}_0$, we can identify the corresponding orbit ${\\mathcal M}_{m_0}= G^c(V) \\cdot m_0$ with the symmetric space $G^c\/G$ (which is known to be reducible of non-positive sectional curvature). 
The underlying riemannian metric is explicitly given by (see e.g.~\\cite{helgason})\n\\begin{equation}\\label{metric}\n(M_1, M_2)_m = {\\rm Tr}(M_1 \\cdot m^{-1} \\cdot M_2 \\cdot m^{-1}),\n\\end{equation}\nwhere the hermitian inner product $m$ is identified with a positive-definite hermitian endomorphism of $V$ via $m_0$, and $M_1,M_2 \\in T_m({\\mathcal M}_{m_0})$ with hermitian skew-symmetric endomorphisms of $(V, m_0)$. Another well-known fact (see e.g. \\cite{helgason}) is that geodesics correspond to $1$-parameter subgroups of $G^c(V)$, so the geodesic $m(t)$ joining two points $m_1, m_2 \\in {\\mathcal M}_{m_0}$ is generated by the family of admissible normal bases ${\\bf s}(t)= \\{e^{t\\gamma_0}s_0, \\cdots, e^{t\\gamma_N}s_N\\}$, where ${\\bf s}=\\{s_0, \\cdots, s_N\\}$ is an admissible orthonormal basis for $m_1$ which diagonalizes $m_2$ with $m_2(s_i,s_i)= e^{-2\\gamma_i}$ (and $\\sum_{i=1}^{n_k} \\gamma_{k,i}=0$), and $m(t)$ is the unique hermitian inner product for which ${\\bf s}(t)$ is an admissible orthonormal basis.\n\nDenote by $\\mathcal{K}_\\omega$ the set of all K\\\"ahler metrics in the K\\\"ahler class $[\\omega]$, i.e.\n$$\n\\mathcal{K}_\\omega = \\{ \\omega_\\varphi \\ | \\ \\omega_\\varphi = \\omega + dd^c \\varphi > 0, \\varphi \\in C^\\infty (X) \\}.\n$$\nWe can define a map $\\mathcal{FS} : \\mathcal{H} \\to \\mathcal{K}_\\omega$ as follows: for any $m \\in {\\mathcal H}$ let ${\\bf s}=\\{s_0, \\cdots, s_N\\}$ be an admissible orthonormal basis of $V$ with respect to $m$ and $\\omega_{{\\rm FS},{\\bf s}}$ the Fubini--Study metric it defines on $P(V^*)$. Consider the pull-back $\\omega_{X,{\\bf s}}= \\kappa^* (\\omega_{\\rm FS, {\\bf s}})$ under the Kodaira embedding (satisfying \\eqref{potential}), which is the curvature of the hermitian metric $h_{\\bf s}$ on $L$, given by \\eqref{hs}. 
Put\n\\begin{eqnarray}\n\\label{w}\n\\mathcal{FS}(m) : = \\omega_{X, {\\bf s}}, \\ \\ h_{m}:= h_{\\bf s}, \n\\end{eqnarray}\nnoting that for a fixed $m$ the right hand sides of \\eqref{hs} and \\eqref{potential} are independent of the choice of admissible orthonormal basis ${\\bf s}$. \n\n\nMany authors have considered (see e.g. \\cite{do,gauduchon-book}) the functional ${\\mathbb I} : \\mathcal{K}_{\\omega} \\to {\\mathbb R}$, defined up to an additive constant by requiring that its derivative $\\delta {\\mathbb I}$ is given by\n$$\n(\\delta {\\mathbb I}) (\\dot{\\varphi}) = \\int_X \\dot{\\varphi} \\ \\omega_\\varphi^n,\n$$\nwhere $\\dot{\\varphi} \\in T_{\\omega_{\\varphi}} (\\mathcal{K}_{\\omega}) = C^{\\infty}(X)$. Following H.~Luo~\\cite{L} and S.~K.~Donaldson~\\cite{D1}, we introduce ${\\mathbb D} : \\mathcal{H} \\to {\\mathbb R}$ by \n\\begin{equation} \\label{d:d}\n{\\mathbb D} (m) : = - {\\mathbb I} (\\mathcal{FS} (m)).\n\\end{equation}\nThe restriction of ${\\mathbb D}$ to the $G^c(V)$ orbit ${\\mathcal M}_{m_0}$ defines a function on $G^c(V)\/G(V,m_0)$ which, by introducing an admissible orthonormal basis ${\\bf s}_0$ of $m_0$, will be the Kempf--Ness function $F_{{\\bf s}_0} : G^c\/G \\to {\\mathbb R}$ referred to earlier.\n\n\n\\smallskip\nThe results in this section are essentially proved in \\cite{D1} and \\cite{L}. The way we treat the reduced automorphism group is inspired by X.~X.~Chen's work \\cite{C1}. \n\nWe start by characterizing the critical points of ${\\mathbb D}$.\n\n\\begin{prop} \\label{cha}\nA hermitian inner product $m$ is a critical point of ${\\mathbb D} : G^c(V)\\cdot m_0 \\to {\\mathbb R}$ if and only if the induced hermitian metric $h_m$ defined by \\eqref{hs} and \\eqref{w} is balanced {\\rm (}of some index $b${\\rm )} with respect to $T$, i.e. if and only if any admissible orthonormal basis ${\\bf s}$ of $m$ is a zero of the momentum map $\\mu_G$. 
Likewise, $m$ is a critical point of ${\\mathbb D} : G^c_{T^{\\perp}}(V)\\cdot m_0 \\to {\\mathbb R}$ if and only if $h_m$ is balanced of index satisfying \\eqref{constraint}.\n\\end{prop}\n\n\\begin{proof}\nFor ${\\mathcal M}_{m_0} = G^c(V) \\cdot m_0$, we will prove that $m$ is a critical point of ${\\mathbb D}: {\\mathcal M}_{m_0} \\to {\\mathbb R}$ if and only if there exist real numbers $(b_1, \\ldots, b_{\\nu})$ such that for some (and hence any) admissible orthonormal basis ${\\bf s}= \\{ s_{k,i}, 1 \\leq k \\leq \\nu, 1 \\leq i \\leq n_k \\}$ of $m$\n\\begin{equation}\\label{conditions}\n\\begin{split}\n\\int_X h_{m} (s_{k,i}, s_{k,i}) \\ {\\mathcal FS}(m)^n & = b_k, \\ i=1, \\ldots, n_k, \\ k=1, \\ldots, \\nu \\\\\n\\int_X h_{m} (s_{k,i}, s_{l,j}) \\ {\\mathcal FS}(m)^n & = 0, \\ {\\rm if} \\ k\\neq l \\ {\\rm or} \\ i \\neq j.\n\\end{split}\n\\end{equation}\nUsing the hermitian metric $h_m$ in the definition \\eqref{hs}, we see that the conditions \\eqref{conditions} are equivalent to $h_m$ being balanced relative to $T$ of index $b=(b_1, \\ldots, b_{\\nu})$. The case of a $G^c_{T^{\\perp}}(V)$ orbit will follow with obvious modifications of the arguments below. \n\n\n\n\n\\smallskip\n($\\Rightarrow$) Let ${\\bf s}= \\{s_{k,i}, 1 \\leq k \\leq \\nu, 1 \\leq i \\leq n_k\\}$ be an admissible orthonormal basis of $m$. For any choice of real numbers $\\gamma_{k,i}$ with \n$$\n\\sum_{i=1}^{n_k} \\gamma_{k,i} = 0,\n$$\nthe basis ${\\bf s}_t=\\{ e^{\\gamma_{k,i}t} s_{k,i} \\}$ defines a hermitian inner product $m(t)$ on $V$ (such that ${\\bf s}_t$ is an admissible orthonormal basis for $m(t)$) and, as we have noticed, $m(t)$ is a geodesic. Put ${\\mathbb D}(t)={\\mathbb D}(m(t))$. 
Using \\eqref{potential}, \\eqref{w} and \\eqref{d:d}, we obtain for the derivative ${\\mathbb D}'(t)$\n\\begin{equation}\\label{D'}\n\\begin{split}\n{\\mathbb D}'(t) &= \\int_X \\frac{\\sum_{j=0}^N 2\\gamma_j e^{2t\\gamma_j}|s_j|_h^2}{\\sum_{j=0}^N e^{2t\\gamma_j}|s_j|_h^2} \\Big({\\mathcal FS}(m(t))\\Big)^n \\\\\n&= \\int_X Q(t) \\Big( {\\mathcal FS}(m(t))\\Big)^n,\n\\end{split}\n\\end{equation}\nwith \n\\begin{equation}\\label{Q}\nQ(t) = \\frac{\\sum_{j=0}^N 2\\gamma_j e^{2t\\gamma_j}|s_j|_h^2}{\\sum_{j=0}^N e^{2t\\gamma_j}|s_j|_h^2}.\n\\end{equation} \nThen, the fact that $m$ is a critical point of ${\\mathbb D}$ implies \n\\begin{equation}\\label{calcul}\n0 ={\\mathbb D}'(0) = 2 \\sum_{k=1}^{\\nu}\\sum_{i=1}^{n_k} \\gamma_{k,i} \\int_X h_{m} (s_{k,i}, s_{k,i}) \\ {\\mathcal FS}(m)^n,\n\\end{equation}\nwhere we have used the fact that \\eqref{Q} is independent of the choice of a hermitian metric $h$ on $L$. For the latter equality to hold for any choice of real numbers $\\gamma_{k,i}$ as above, $\\int_X h_{m} (s_{k,i}, s_{k,i}) \\ {\\mathcal FS}(m)^n$ must be \nindependent of $i$; since ${\\mathcal FS}(m)$ and $h_{m}$ are $T$ invariant, $\\int_X h_{m}(s_{k,i}, s_{l,j}) \\ {\\mathcal FS}(m)^n = 0$ for $k \\neq l$. Elementary linear algebra shows that if $\\int_X h_{m} (s_{k,i}, s_{k,i}) \\ {\\mathcal FS}(m)^n$ is independent of $i$ for {\\it any} choice of an admissible orthonormal basis ${\\bf s}$ for $m$, then one must have $\\int_X h_{m} (s_{k,i}, s_{k,j}) \\ {\\mathcal FS}(m)^n = 0$ for $ i \\neq j$.\n\n\n($\\Leftarrow$) The conditions \\eqref{conditions} mean that some admissible orthonormal basis ${\\bf s}$ of $m$ is an admissible normal basis (of index ${b}=(b_1, \\ldots, b_{\\nu})$) for the induced $L_2$ hermitian inner product $\\langle \\cdot , \\cdot \\rangle_{h_{m}}$; this is clearly independent of the choice of a particular admissible orthonormal basis of $m$. 
It is therefore enough to pick one admissible orthonormal basis ${\\bf s}$, and show that if \\eqref{conditions} is satisfied, then $m$ must be a critical point of ${\\mathbb D}$, or equivalently, ${\\mathbb D}'(0)=0$ along any geodesic $m(t)$ issued at $m$. The computation \\eqref{calcul} shows this. \\end{proof}\n\n\n\\begin{rem}\\label{chow-norm} If we consider ${\\mathbb D}$ as a function on $G^c(V)\/G(V,m_0)$ (or $G^c_{T^{\\perp}}(V)\/G_{T^{\\perp}}(V, m_0)$), the computation \\eqref{D'} (compared to a similar result in \\cite{Z}) shows that ${\\mathbb D}$ coincides, up to a positive scale and an additive constant, with the function $\\log || \\cdot ||_{{\\rm CH},m_0}$ defined on $G^c(V)\/G(V,m_0)$ (resp. $G^c_{T^{\\perp}}(V)\/G_{T^{\\perp}}(V, m_0)$), where $|| \\cdot ||_{{\\rm CH},m_0}$ is the $U(V,m_0)$-invariant {\\it Chow norm} on the space $W= {\\rm Sym}^d(V^*)^{\\otimes (n+1)},$ introduced in \\cite{Z}. This, together with Proposition~\\ref{cha}, explains Theorems~\\ref{luo-zhang} and \\ref{chow-stability}, once one proves (as in \\cite{Z} and \\cite{M1}) that $\\log || \\cdot ||_{{\\rm CH},m_0}$ has a critical point on the $G^c(V)$ (resp. $G^c_{T^{\\perp}}(V)$) orbit of $\\hat X$ if and only if the orbit is closed (i.e. $(X,L)$ is (weakly) relative Chow stable). Note that the latter condition is independent of the choice of $m_0$, showing that the existence of critical points of ${\\mathbb D}$ is independent of the choice of a $G^c(V)$ orbit in ${\\mathcal H}$.\n\\end{rem}\n\n\nThe next results hold for ${\\mathcal M}_{m_0}$ being either the $G^c(V)$ or the $G^c_{T^{\\perp}}(V)$ orbit of $m_0$ in ${\\mathcal H}$.\n\n\\begin{prop}\\label{convex} \n${\\mathbb D}$ is convex along geodesics in $\\mathcal{M}_{m_0}$. 
Furthermore, for any two critical points $m_1, m_2$ {\\rm (}if they exist{\\rm )}, the geodesic $m(t)$ joining $m_1$ and $m_2$ defines a family of balanced hermitian metrics $h_{m(t)}$ relative to $T$ on $L$, which are isometric under the action of ${\\rm Aut}_0(X,L)$ and, therefore, have the same index $b$.\n\\end{prop}\n\\begin{proof} By \\eqref{D'}, the second derivative of ${\\mathbb D}(t)$ is\n\\begin{eqnarray*}\n{\\mathbb D}''(t) &=& \\int_X \\Big(\\frac{\\partial Q}{\\partial t} + n |dQ|^2_{{\\mathcal FS}(m(t))} \\Big) \\mathcal{FS}(m(t))^n.\n\\end{eqnarray*}\nTo show the convexity of ${\\mathbb D}(t)$ along geodesics, we adopt an argument of T.~Mabuchi~\\cite{M1} by constructing the map\n\\begin{eqnarray*}\n\\eta : [0,1] \\times [0, 2\\pi) \\times X &\\rightarrow& \\mathbb{C}P^N,\\\\\n(t, \\theta, x) &\\mapsto& [e^{(t+\\sqrt{-1}\\theta)\\gamma_0} s_0(x), \\ldots, e^{(t+\\sqrt{-1}\\theta)\\gamma_N} s_N(x)].\n\\end{eqnarray*}\nLetting $z=t+\\sqrt{-1}\\theta$, we have\n\\begin{eqnarray*}\n0 & \\leq & \\int_{\\eta([0,1] \\times [0, 2\\pi) \\times X)} \\omega_{{\\rm FS},{\\bf s}}^{n+1}\\\\\n&=&\\int_{[0,1] \\times [0, 2\\pi) \\times X} (\\eta^*\\omega_{{\\rm FS}, {\\bf s}})^{n+1}\\\\\n&=&\\int_{[0,1] \\times [0, 2\\pi) \\times X} ( \\frac{\\sqrt{-1}}{2\\pi} \\partial_X \\bar{\\partial}_X \\log (\\sum_{j=0}^N e^{2t\\gamma_j} |s_j|^2 ) + \\frac{\\sqrt{-1}}{4\\pi} \\partial_X Q \\wedge d \\bar{z}\\\\\n& & + \\frac{\\sqrt{-1}}{4\\pi} d z \\wedge \\bar{\\partial}_X Q + \\frac{\\sqrt{-1}}{8\\pi} \\frac{\\partial Q}{\\partial t} dz \\wedge d\\bar{z})^{n+1} \\\\\n&=& 2\\pi \\int_0^1 \\int_X \\frac{n+1}{4\\pi} \\frac{\\partial Q}{\\partial t} + \\frac{(n+1)n}{4\\pi} |d Q|^2_{{\\mathcal FS}(m(t))} (\\Phi^*_t \\omega_{{\\rm FS}, {\\bf s}(t)})^n dt \\\\\n&=& \\frac{n+1}{2} \\int_0^1 {\\mathbb D}^{''}(t) dt\n\\end{eqnarray*}\nHence ${\\mathbb D}(t)$ is convex. It follows that any critical point of ${\\mathbb D}$ is a global minimizer. 
Suppose now $m_1, m_2$ are two minimizers of ${\\mathbb D}$. Joining $m_1$ and $m_2$ with a geodesic $m(t)$ as in the proof of \\eqref{D'}, we have ${\\mathbb D}(0) = {\\mathbb D}(1)$ and ${\\mathbb D}'(0) = 0 = {\\mathbb D}'(1)$, so that ${\\mathbb D}''(t) = 0, 0 \\leq t \\leq 1$. It follows from the calculation above that\n$$\n\\int_{\\eta([0,1] \\times [0, 2\\pi) \\times X)} \\omega_{{\\rm FS}, {\\bf s}}^{n+1} = 0.\n$$\nWe conclude that for any point $p \\in \\eta([0,1] \\times [0, 2\\pi) \\times X) \\subset \\mathbb{C}P^N$, the complex dimension of any neighbourhood of $p$ in $\\eta([0,1] \\times [0, 2\\pi) \\times X)$ is $n$. Hence the image $\\Phi_{{\\bf s}(t)}(X)$ is fixed, showing that the one-parameter group ${\\rm diag}(e^{t\\gamma_0}, \\ldots, e^{t\\gamma_N})$ in ${\\rm SL}(N+1, {\\mathbb C})$ induces a one-parameter group in $\\widetilde{\\mathrm{Aut}}_0(X) = {\\mathrm{Aut}}(X,L)$. \\end{proof}\n\n\nFor any $m \\in {\\mathcal M}_{m_0}$, we introduce the group\n$$\\mathrm{Aut}_m (X, {\\mathcal M}_{m_0}, {\\mathbb D}) = \\{ g \\in \\widetilde{\\mathrm{Aut}}_0(X) \\ | \\ \\rho(g) ({\\mathcal M}_{m_0}) = {\\mathcal M}_{m_0}, \\ {\\mathbb D} (\\rho (g) \\cdot m) = {\\mathbb D}(m) \\},$$\nwhere $\\rho$ is the representation \\eqref{representation}. Clearly, ${\\mathrm{Aut}}_m(X, {\\mathcal M}_{m_0}, {\\mathbb D})$ is a closed subgroup of ${\\mathrm{Aut}}(X,L)= \\widetilde{\\mathrm{Aut}}_0(X)$ while $\\rho({\\mathrm{Aut}}_m(X, {\\mathcal M}_{m_0}, {\\mathbb D}))$ is a closed subgroup of ${\\rm SL}(V)$.\n\n\\begin{lemma} For any two points $m_1, m_2 \\in {\\mathcal M}_{m_0}$, \n$ \\mathrm{Aut}_{{m_1}}(X, {\\mathcal M}_{m_0}, {\\mathbb D}) = \\mathrm{Aut}_{m_2}(X, {\\mathcal M}_{m_0}, {\\mathbb D}).$\n\\end{lemma}\n\n\\begin{proof}\nLet $m(t), 0 \\leq t \\leq 1$ be the geodesic connecting $m_1$ and $m_{2}$ and ${\\bf s}$ an admissible orthonormal basis with respect to $m_1$. 
For any $g \\in \\mathrm{Aut}_{{m_1}}(X, {\\mathcal M}_{m_0}, {\\mathbb D})$, $\\rho(g) \\cdot m(t)$ is the geodesic connecting $\\rho(g) \\cdot m_1$ and $\\rho(g) \\cdot m_{2}$. Using the integral formula \\eqref{D'}, and noting that \\eqref{Q} is independent of the choice of a hermitian metric $h$, we have $\\frac{d}{dt}{\\mathbb D}(m(t)) = \\frac{d}{dt} {\\mathbb D}(\\rho(g) \\cdot m(t))$, and therefore ${\\mathbb D}(m_{2}) = {\\mathbb D}(\\rho(g) \\cdot m_{2})$. Hence $g \\in \\mathrm{Aut}_{m_{2}}(X, {\\mathcal M}_{m_0}, {\\mathbb D})$. Similarly, $\\mathrm{Aut}_{m_{2}} (X, {\\mathcal M}_{m_0}, {\\mathbb D})\\subset \\mathrm{Aut}_{m_1}(X, {\\mathcal M}_{m_0}, {\\mathbb D})$. \\end{proof}\nIn view of the above lemma, we adopt \n\n\\begin{defn} ${\\mathrm{Aut}}_X({\\mathcal M}_{m_0}, {\\mathbb D})$ is the closed subgroup of $\\widetilde{\\mathrm{Aut}}_0(X)$ of elements which preserve ${\\mathcal M}_{m_0}$ and ${\\mathbb D}$.\n\\end{defn}\n\n\\begin{rem} By definition, $\\rho({\\rm Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})) \\subset \\rho(Z_{{\\mathrm{Aut}}_0(X,L)}(T))\\cap G_{T^{\\perp}}^c$. If $T$ is a maximal torus in ${\\mathrm{Aut}}_0(X,L)$ and $X$ admits an extremal K\\\"ahler metric, a result by E.~Calabi~\\cite{cal} implies that $Z_{{\\mathrm{Aut}}_0(X,L)}(T)= T^c$. We conclude that in this case ${\\rm Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})$ is trivial. \n\nFormula \\eqref{D'} shows that any element of $\\rho(Z_{{\\mathrm{Aut}}_0(X,L)}(T))\\cap G_T^c$ (resp. $\\rho(Z_{{\\mathrm{Aut}}_0(X,L)}(T))\\cap G_{T^{\\perp}}^c$) sends a critical point of ${\\mathbb D}$ to another critical point. It then follows from Proposition~\\ref{convex} that when ${\\mathbb D}$ attains its minimum on ${\\mathcal M}_{m_0}$ (i.e. 
$(X,L)$ is (weakly) relative Chow stable, see Theorem~\\ref{chow-stability} and Remark~\\ref{chow-norm}), ${\\rm Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})$ is the subgroup of $Z_{{\\mathrm{Aut}}_0(X,L)}(T)$ of elements whose lifts by $\\rho$ belong to $G^c_T$ (resp. $G^c_{T^{\\perp}}$). \n\\end{rem}\n\\begin{lemma}\\label{l:2} Suppose that ${\\mathbb D}$ has a minimum on ${\\mathcal M}_{m_0}$. Then, the set of all minimizers is an orbit of the induced action of $\\rho(\\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D}))$ and\nfor any $m \\in \\mathcal{M}_{m_0}$ there exists a minimizer $m_{\\mathrm{min}}$ of ${\\mathbb D}$ such that \n$$\nd(m,m_{\\mathrm{min}}) = \\min_{ g \\in \\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})} d(m, \\rho(g) \\cdot m_{\\mathrm{min}}),\n$$\nwhere $d$ is the distance function defined on $\\mathcal{M}_{m_0}$ with respect to the metric \\eqref{metric}. Furthermore, if $m(t), 0 \\leq t \\leq 1$ is the geodesic connecting $m$ and $m_{\\mathrm{min}}$, then \n$$\nd(m(t),m_{\\mathrm{min}}) = \\min_{g \\in \\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})} d(m(t), \\rho(g) \\cdot m_{\\mathrm{min}}).\n$$\n\\end{lemma}\n\n\\begin{proof}\nThe first part follows from Proposition~\\ref{convex}. For the second claim, suppose $g_k \\in \\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})$ is a sequence such that $$\\lim_{k\\to \\infty} d(m, \\rho(g_k)\\cdot m_{\\rm min}) = \\inf_{g \\in \\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})} d(m, \\rho(g) \\cdot m_{\\rm min}).$$ Let us denote $m_k= \\rho(g_k) \\cdot m_{\\rm min}$ and choose an admissible orthonormal basis ${\\bf s}^k$ of $m$ which diagonalizes $m_k$. As $G(V)$ (resp. $G_{T^{\\perp}}(V)$) is compact, we can assume that ${\\bf s}^k$ converges to an admissible orthonormal basis ${\\bf s}$ of $m$. On the other hand, as in the proof of Proposition~\\ref{convex}, we can express the geodesic between $m$ and $m_k$ by using a one-parameter subgroup of $G^c(V)$ (resp. 
$G^c_{T^{\\perp}}(V)$) generated by ${\\rm diag}(e^{\\gamma^k_0}, \\cdots , e^{\\gamma^k_N})$ and compute\n$d^2(m,m_k)= \\sum_{i=0}^N |\\gamma_i^k|^2,$\nso that, taking a subsequence, ${\\rm diag}(e^{\\gamma^k_0}, \\cdots , e^{\\gamma^k_N})$ converges to a diagonal matrix ${\\rm diag}(e^{\\gamma_0}, \\cdots , e^{\\gamma_N})$; it defines an element $m_{\\infty} \\in {\\mathcal M}_{m_0}$ such that $m_{\\infty}(s_i,s_j)=0$ for $i\\neq j$ and $m_{\\infty}(s_i,s_i)= e^{-2\\gamma_i}m(s_i,s_i)$. The last conclusion follows easily from the triangle inequality.\\end{proof}\nThe next result establishes the properness of ${\\mathbb D}$, provided it has critical points on ${\\mathcal M}_{m_0}$. A similar result was originally established by H.~Luo \\cite{L} in the case when $\\widetilde{\\mathrm{Aut}}_0(X)$ is trivial. \n\\begin{prop}\\label{proper} Suppose $m_{\\min}$ is a minimizer of ${\\mathbb D}$ on $\\mathcal{M}_{m_0}$. For every $C > 0$, there exists $C_1$ such that for any $m \\in \\mathcal{M}_{m_0}$ with the property\n$$\n{\\mathbb D}(m) < {\\mathbb D}(m_{\\min}) + C,\n$$\nthere exists $g \\in \\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})$ such that\n$$\nd(m, \\rho(g) \\cdot m_{\\min}) < C_1.\n$$\n\\end{prop}\n\n\\begin{proof}\nSuppose for contradiction that there is a constant $C > 0$ and a sequence $m_i \\in \\mathcal{M}_{m_0}$ such that \n\\begin{equation}\\label{contradiction}\n{\\mathbb D}(m_i) < {\\mathbb D}(m_{\\min}) + C\n\\end{equation}\nand\n\\begin{equation}\\label{contradiction1}\nd(m_i, \\rho(g) \\cdot m_{\\min}) > i\n\\end{equation}\nfor any $g \\in \\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})$. 
By Lemma~\\ref{l:2}, for any $i$ there exists $g_i \\in \\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})$ such that\n$$\nd(m_i, \\rho(g_i) \\cdot m_{\\min}) = \\min_{g \\in \\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})} d(m_i, \\rho(g) \\cdot m_{\\min})> i.\n$$\nLet $m_i(t), 0 \\leq t \\leq d_i=d(m_i, m_{\\min})$ be the normal geodesic connecting $m_{\\min}$ and $m_i$. Then, using \\eqref{contradiction}, \\eqref{contradiction1} and Proposition~\\ref{convex}, we get\n\\begin{eqnarray*}\nC & > & {\\mathbb D}(m_i) - {\\mathbb D}(m_{\\min}) \\\\\n& = & \\int_0^{d_i} {\\mathbb D}'(m_i(t)) dt\\\\\n& \\geq & \\int_0^i {\\mathbb D}'(m_i(t)) dt\\\\\n& \\geq & i \\int_0^1 {\\mathbb D}'(m_i(t)) dt\\\\\n&=& i({\\mathbb D}(m_i(1)) - {\\mathbb D}(m_{\\min})),\n\\end{eqnarray*}\nwhere the two inequalities use $d_i > i$ and the fact that ${\\mathbb D}'(m_i(t))$ is non-negative and non-decreasing in $t$, by the convexity of ${\\mathbb D}$ along geodesics. Letting ${\\tilde m}_i = m_i(1)$, we have \n$$\n{\\mathbb D}({\\tilde m}_i) < {\\mathbb D}(m_{\\min}) + \\frac{C}{i}\n$$\nwhile, by Lemma~\\ref{l:2}, \n$$\n1 = d({\\tilde m}_i, m_{\\min}) = \\min_{g \\in \\mathrm{Aut}_X({\\mathcal M}_{m_0}, {\\mathbb D})} d({\\tilde m}_i, \\rho(g) \\cdot m_{\\min}).\n$$\nTaking a subsequence of ${\\tilde m}_i$ converging to a minimizer $m_\\infty$ of ${\\mathbb D}$, we obtain a contradiction (see Lemma~\\ref{l:2}). \\end{proof}\n\n\\section{Proof of Theorem~\\ref{main}}\\label{s:proof}\nIt is enough to consider the case when the polarized projective manifold $(X,L)$ is the product of two factors $(X_1,L_1)$ and $(X_2,L_2)$. Denote the dimensions of $X, X_1, X_2$ by $n, n_1, n_2$ respectively. Letting $p_i : X \\to X_i$ be the canonical projections, we have $L = p_1^*(L_1) \\otimes p_2^*(L_2)$. 
\n\nThe holomorphic splitting of the tangent bundle $TX = TX_1 \\oplus TX_2$ induces a product structure $\\widetilde{\\mathrm{Aut}}_0(X) = \\widetilde{\\mathrm{Aut}}_0(X_1)\\times \\widetilde{\\mathrm{Aut}}_0(X_2)$, so we can fix a maximal torus $T \\subset \\widetilde{\\mathrm{Aut}}_0(X)$ of the form $T= T_1 \\times T_2$, where $T_i \\subset \\widetilde{\\mathrm{Aut}}_0(X_i)$ are maximal tori. Taking a common tensor power of the $L_i$'s if necessary, we will suppose that $(X,L)$ and $(X_i,L_i)$ all satisfy the assumptions made in Section~\\ref{s:relative balanced}. Grauert's direct image theorem for coherent sheaves implies that $V = V_1 \\otimes V_2$ where $V=H^0(X,L)$ and $V_i= H^0(X_i, L_i)$. Notice that if $V_i$ splits under $T_i$ as\n$$\nV_i = \\bigoplus_{k=1}^{\\nu_i} V_i(\\chi^i_k),\n$$\nthen\n$$\nV = \\bigoplus_{j,k} V_1 (\\chi^1_j) \\otimes V_2(\\chi^2_k)\n$$\ngives the decomposition \\eqref{e:split} for $V$, the characters of $T=T_1\\times T_2$ being the products $\\chi^1_j \\otimes \\chi^2_k$. \n\n\n\nLet $m^i_{0}$ be $T_i$-invariant hermitian inner products on $V_i$. Simplifying the notation in Section~\\ref{s:functional}, we let\n$\\mathcal{M}_i$ be the $G_i^c$ (resp. $(G_i)^c_{T_i^{\\perp}}$) orbit of $m^i_{0}$. The tensor product (of hermitian inner products and bases) defines a natural map $\\mathcal{M}_1 \\times \\mathcal{M}_2 \\to {\\mathcal M}$ where ${\\mathcal M}$ is the $G^c$ (resp. $G^c_{T^{\\perp}}$) orbit of $m_0=m^1_{0}\\otimes m^2_{0}$. We define the subspace ${\\mathcal M}_{\\rm prod}$ of decomposable elements of $\\mathcal{M}$:\n$$\n\\mathcal{M}_{\\rm prod} = \\{m =m^1 \\otimes m^2 \\ | \\ m^1 \\in \\mathcal{M}_1, m^2 \\in \\mathcal{M}_2 \\}.\n$$\n\n\\begin{lemma} \\label{subspace} ${\\mathcal M}_{\\rm prod}$ is a closed totally geodesic submanifold of $\\mathcal{M}$ which is stable under the action of $\\rho(\\widetilde{\\mathrm{Aut}}_0(X)) \\cap G^c$. 
Furthermore, for each $m=m^1\\otimes m^2 \\in {\\mathcal M}_{\\rm prod}$ the induced metric ${\\mathcal FS}(m)= {\\mathcal FS}(m^1) + {\\mathcal FS}(m^2)$ on $X=X_1\\times X_2$ is a product metric.\n\\end{lemma}\n\n\\begin{proof} As $\\widetilde{\\mathrm{Aut}}_0(X)= \\widetilde{\\mathrm{Aut}}_0(X_1) \\times \\widetilde{\\mathrm{Aut}}_0(X_2)$ and we have assumed (by taking a tensor power of $L_i$) that each $\\widetilde{\\mathrm{Aut}}_0(X_i)$ acts on $L_i$, it follows that $\\rho(\\widetilde{\\mathrm{Aut}}_0(X)) \\cap G^c$ preserves ${\\mathcal M}_{\\rm prod}$.\n\nFrom the description of the geodesics of ${\\mathcal M}$ (resp. ${\\mathcal M}_i$) in terms of 1-parameter subgroups of $G^c$ (resp. $G_i^c$) used in the proof of Proposition~\\ref{convex}, it follows that if $m^i(t)$ is a geodesic of ${\\mathcal M}_i$ ($i=1,2$), then $m(t)=m^1(t)\\otimes m^2(t)$ is a geodesic of ${\\mathcal M}$ which belongs to ${\\mathcal M}_{\\rm prod}$. \n\nThus, in order to establish the first part of Lemma~\\ref{subspace}, we only need to show that ${\\mathcal M}_{\\rm prod}$ is a closed \nsubset of ${\\mathcal M}$. Consider a sequence $m_k = m^1_k \\otimes m^2_k \\in {\\mathcal M}_{\\rm prod}$ with $m^i_k \\in \\mathcal{M}_i$. The expression of the geodesic joining $m^i_{0}$ and $m_k^i$ in terms of a 1-parameter subgroup ${\\rm diag}(e^{t\\gamma^i_0}, \\cdots, e^{t\\gamma^i_{N_i}})$ of $G_i^c$ (see the previous section) allows us to compute the distance functions $d$ and $d_i$:\n\\begin{equation*}\n\\begin{split}\nd_i(m^i_{0}, m^i_k) ^2& = \\sum_{j=0}^{N_i} (\\gamma^i_j)^2, \\\\\nd(m_0, m_k)^2 & = \\sum_{r=0}^{N_1} \\sum_{j=0}^{N_2} (\\gamma^1_r + \\gamma^2_j)^2= (N_2+1) \\, d_1(m^1_{0},m^1_k)^2 + (N_1+1) \\, d_2(m^2_{0},m^2_k)^2,\n\\end{split}\n\\end{equation*}\nwhere we have used that $\\gamma^i_j$ satisfy $\\sum_{j=0}^{N_i} \\gamma_j^i=0$ for $i=1,2$. 
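\nIndeed, expanding the square, the cross term vanishes precisely because of this constraint:\n$$\\sum_{r=0}^{N_1} \\sum_{j=0}^{N_2} \\gamma^1_r \\, \\gamma^2_j = \\Big(\\sum_{r=0}^{N_1} \\gamma^1_r\\Big) \\Big(\\sum_{j=0}^{N_2} \\gamma^2_j\\Big) = 0.$$\n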
This completes the first part of the Lemma.\n\nThe final claim is a direct consequence of \\eqref{potential} and the fact that if we have chosen $h=h_1\\otimes h_2$ where $h_i$ is a $T_i$-invariant hermitian metric on $L_i$, then the curvature is $\\omega= \\omega_1 + \\omega_2$. \\end{proof}\n\n\\begin{prop}\\label{split}\nFor any critical point $m$ of ${\\mathbb D}$ on ${\\mathcal M}$, the induced K\\\"ahler metric on $X=X_1 \\times X_2$ is compatible with the product structure. \n\\end{prop}\n\\begin{proof} Any critical point of ${\\mathbb D}$ on ${\\mathcal M}$ must necessarily be a minimizer by Proposition~\\ref{convex}. Let $m_{\\min} $ be such a minimizer. We pick a sequence $m_k \\in {\\mathcal M}_{\\rm prod}$ such that\n$$\n\\lim_{k \\rightarrow \\infty} \\mathbb{D} (m_k) = \\inf_{m \\in {\\mathcal M}_{\\rm prod}} \\mathbb{D} (m).\n$$\nSince the functional ${\\mathbb D}$ defined on $\\mathcal{M}$ is proper in the sense of Proposition~\\ref{proper}, there exist $g_k \\in \\mathrm{Aut}_X({\\mathcal M}, {\\mathbb D})$ such that \n$$\nd(\\rho(g_k)^{-1} \\cdot m_{\\min}, m_k) = d(m_{\\min}, \\rho(g_k) \\cdot m_k) < C_1\n$$\nfor all $k$. Putting ${\\tilde m}_k = \\rho(g_k) \\cdot m_k$, we know by Lemma~\\ref{subspace} that $ {\\tilde m}_k \\in {\\mathcal M}_{\\rm prod}$. Taking a convergent subsequence of ${\\tilde m}_k$ and using the closedness of ${\\mathcal M}_{\\rm prod}$ (see Lemma~\\ref{subspace}), there exists $m \\in {\\mathcal M}_{\\rm prod}$ such that\n$$\n{\\mathbb D}(m) = \\min_{\\bar{m} \\in {\\mathcal M}_{\\rm prod}} {\\mathbb D}(\\bar{m}).\n$$\n\nLet $m=m^1\\otimes m^2$ be a minimizer of ${\\mathbb D}$ on ${\\mathcal M}_{\\rm prod}$. We claim that $m^i$ is a critical point of the corresponding functional ${\\mathbb D}_i$ on $\\mathcal{M}_i$. Without loss of generality, we only check this for $m^1$. 
Suppose $m^1(t)$ is a geodesic starting from $m^1(0)=m^1$ in $\\mathcal{M}_1$, expressed in terms of a 1-parameter subgroup ${\\rm diag}(e^{t\\gamma_0^1}, \\cdots, e^{t \\gamma_{N_1}^1})$ of $G_1^c$ (resp. $(G_1)^c_{T_1^{\\perp}}$): there exists an admissible orthonormal basis ${\\bf s}^1 =\\{ s^1_i, 0 \\leq i \\leq N_1\\} $ of $m^1$ such that ${\\bf s}^1(t)=\\{e^{t\\gamma_0^1}s^1_0, \\ldots, e^{t\\gamma_{N_1}^1}s^1_{N_1} \\}$ is an admissible orthonormal basis for $m^1(t)$. Let ${\\bf s}^2 =\\{ s^2_j, 0 \\leq j \\leq N_2\\} $ be an admissible orthonormal basis for $m^2$. Then $m(t) = m^1(t) \\otimes m^2$ is a geodesic in $\\mathcal{M}$ starting from $m$ and ${\\bf s}^1(t) \\otimes {\\bf s}^2$ is an admissible orthonormal basis for $m(t)$. Since $m(t) \\in {\\mathcal M}_{\\rm prod}$ and $m$ is a minimizer of $\\mathbb{D}$ on ${\\mathcal M}_{\\rm prod}$, we have\n\\begin{eqnarray*}\n0 &=& \\mathbb{D}'(0)\\\\\n&=& \\int_X \\frac{\\sum_{i=0}^{N_1} \\sum_{j=0}^{N_2} 2 \\gamma_i^1 |s^1_i|^2_{h_1} |s^2_j|^2_{h_2}}{\\sum_{i=0}^{N_1} \\sum_{j=0}^{N_2} |s^1_i|^2_{h_1} |s^2_j|^2_{h_2}} \\mathcal{FS}(m)^{n_1+n_2}\\\\\n&=& \\Big( \\int_{X_1} \\frac{\\sum_{i=0}^{N_1} 2 \\gamma_i^1 |s^1_i|^2_{h_1} }{\\sum_{i=0}^{N_1} |s^1_i|^2_{h_1}} \\mathcal{FS}(m^1)^{n_1}\\Big) \\times \\Big( \\int_{X_2} \\frac{\\sum_{j=0}^{N_2} |s^2_j|^2_{h_2}}{ \\sum_{j=0}^{N_2} |s^2_j|^2_{h_2}} \\mathcal{FS}(m^2)^{n_2}\\Big)\\\\\n&=& C \\int_{X_1} \\frac{\\sum_{i=0}^{N_1} 2 \\gamma_i^1 |s^1_i|^2_{h_1} }{\\sum_{i=0}^{N_1} |s^1_i|^2_{h_1}} \\mathcal{FS}(m^1)^{n_1}= C \\ \\mathbb{D}_1'(0),\n\\end{eqnarray*}\nwhere $C$ is a strictly positive constant. We conclude that $m^1$ is a critical point of $\\mathbb{D}_1$ on $\\mathcal{M}_1$ by using Proposition~\\ref{cha}. Conversely, Proposition~\\ref{cha} also shows that $m$ is a critical point of ${\\mathbb D}$ on $\\mathcal{M}$. 
Now, by Proposition~\\ref{convex}, the K\\\"ahler metrics induced on $X$ by the critical points of ${\\mathbb D}$ are isometric under the action of $\\widetilde{\\mathrm{Aut}}_0(X)= \\widetilde{\\mathrm{Aut}}_0(X_1) \\times \\widetilde{\\mathrm{Aut}}_0(X_2)$ so, in particular, they are isometric to the product K\\\"ahler metric induced by $m=m^1\\otimes m^2$ (see Lemma~\\ref{subspace}), which completes the proof.\\end{proof}\n\nAs the existence of critical points of ${\\mathbb D}$ is independent of the choice of orbits (see Lemma~\\ref{gabor} and Remark~\\ref{chow-norm}), we obtain as an immediate corollary of Proposition~\\ref{split}\n\\begin{thm}\\label{reduced} Suppose $X$ admits a balanced K\\\"ahler metric relative to $T$ in $2\\pi c_1(L)$. Then there exists a balanced K\\\"ahler metric relative to $T$ in $2\\pi c_1(L)$ compatible with the product structure $X=X_1\\times X_2$.\n\\end{thm}\n\n\\noindent{\\it Proof of Theorem~\\ref{main}.} Combining Theorem~\\ref{reduced} with Theorem~\\ref{do-mabuchi} and Propositions 1 and 2 yields the proof of Theorem~\\ref{main}(i). In order to prove Theorem~\\ref{main}(ii), we use Theorem~\\ref{mabuchi} with $T$ being the connected component of the centre of $\\widetilde{{\\rm Aut}}_0(X)$, so that, by the assumption, for one of the factors, $(X_1,L_1)$ say, $T_1=\\{ {\\rm Id} \\}$. It is not hard to see that in this case {\\it each} $G^c$ orbit of admissible hermitian inner products on $V=V_1\\otimes V_2$ contains products $m=m^1\\otimes m^2$. (The latter is not true in general.) We can then apply Proposition~\\ref{split}. 
$\\Box$\n\n\\begin{rem} The above arguments and the uniqueness established in Lemma 2 would imply the splitting property should Conjecture \\ref{relative-stability} be true.\n\\end{rem}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\subsection{The concept of tipping cascades}\n\nHuman--induced impacts on the Earth system increasingly endanger the integrity of the Earth's climate system and some of its most vulnerable components and processes, the so--called tipping elements \\cite{lenton2008tipping}. Lately, it has been argued that the risk of potential tipping events or even cascading transitions up to a global cascade is rising under ongoing anthropogenic global warming \\cite{steffen2018trajectories,lenton2019climate}. While this is the case, there is considerable debate about the nature of tipping cascades within the scientific community itself and cascading tipping dynamics have been described rather roughly in the recent literature \\cite{steffen2018trajectories,lenton2019climate,lenton2020tipping,lenton2013origin,hughes2013multiscale,rocha2015regime,rocha2018cascading,barnosky2012approaching,brook2013does}. \n\nThe term cascade is used in various fields for a certain class of dynamics possibly exhibited by interacting (sub--)systems. It generally describes the sequential occurrence of similar events (event A is followed by event B which is followed by event C etc.). This sequence of events does not necessarily have to be causal, as opposed to a domino effect, in which event A directly causes event B. The notion of a domino effect is sometimes used synonymously with the term cascade. Examples of cascades comprise cascading failures leading to the collapse of power grids as critical physical infrastructure networks \\cite{watts2002simple,buldyrev2010catastrophic,gao2011robustness,gao2012networks,hu2011percolation}. 
Such a cascade may occur as an initial failure increases the likelihood of subsequent failures \\cite{watts2002simple}. In contrast, an initial failure may directly lead to the failure of dependent nodes \\cite{buldyrev2010catastrophic}. \n\nAlong these lines, cascading tipping events or regime shifts are increasingly discussed following the rising awareness of a highly interconnected world in the Anthropocene \\cite{helbing2013globally}. Tipping elements possibly undergoing a transition into a qualitatively different state after the crossing of some critical threshold were identified e.g. in ecology and climate system science \\cite{lenton2008tipping,scheffer2003catastrophic,scheffer2001catastrophic} and comprise, among others, shallow lakes transitioning from a clear to a turbid state \\cite{scheffer1989alternative,scheffer1993alternative}, coral reefs \\cite{hughes1994catastrophes}, the Atlantic Meridional Overturning Circulation \\cite{rahmstorf2005thermohaline,stommel1961thermohaline} and the continental ice sheets on Greenland \\cite{robinson2012multistability} and Antarctica \\cite{garbe2020hysteresis}. \n\nIn the climate system, multiple interactions between large--scale tipping elements have been identified \\cite{kriegler2009imprecise,caesar2018observed,rahmstorf2015exceptional,swingedouw2008antarctic,parsons2019influence,duque2019tipping}. For example, the Atlantic Meridional Overturning Circulation may slow down due to increasing meltwater flux originating from the Greenland Ice Sheet \\cite{caesar2018observed,rahmstorf2015exceptional}. Potential drying over the Amazon rainforest basin leading to loss of rainforest resilience may be influenced by the Atlantic Meridional Overturning Circulation \\cite{parsons2019influence} on the one hand and the El--Ni\u00f1o Southern Oscillation on the other hand \\cite{duque2019tipping}. 
Rocha et al.~\\cite{rocha2018cascading} identified potential links between ecological systems with alternative states such as the interaction of eutrophication and hypoxia or coupled shifts in coral reefs and mangrove systems. \n\nTipping interactions do not only exist across different large--scale systems, but span various spatial scales as exemplified by spatially extended (and heterogeneous) ecosystems \\cite{lenton2020tipping,rocha2018cascading}. On a local scale, confined ecosystems such as a shallow lake, in fact, consist of discrete units connected through dispersion or other exchange processes with each unit potentially exhibiting alternative stable states \\cite{van2005implications,dakos2010spatial,van2015resilience}. Regionally, regime shifts may propagate from one ecosystem entity to the other transmitted, among others, via small streams and rivers \\cite{hilt2011abrupt,scheffer2004ecology,van2017regime}, moisture recycling \\cite{lenton2020tipping,wunderling2020network,zemp2014importance,zemp2017self} or biotic exchange through e.g. larvae \\cite{brook2013does,van2015resilience,scheffer2012anticipating,lundberg2003mobile}. \n\nMotivated by these and further suggested tipping element interactions, cascading effects arising as potential dynamics have been discussed \\cite{steffen2018trajectories,lenton2019climate,lenton2020tipping,lenton2013origin,hughes2013multiscale,rocha2015regime,rocha2018cascading} as a possible mechanism for creating a potential planetary--scale tipping point (of the biosphere) \\cite{lenton2013origin,hughes2013multiscale,barnosky2012approaching,brook2013does}. Lenton et al.~\\cite{lenton2019climate} stated that we may approach a global cascade of tipping points via the progressive activation of tipping point clusters \\cite{schellnhuber2016right} through the increase of global mean temperature. This could potentially lead to undesirable hothouse climate trajectories \\cite{steffen2018trajectories}. 
However, it remains unclear whether and how cascade--like dynamics within the Earth system are promoted by the direction and strength of the existing feedbacks \\cite{lenton2020tipping,lenton2013origin,kriegler2009imprecise,wunderling2021modelling}. \n\nRecently, first conceptual steps \\cite{brummitt2015coupled,abraham1991computational} have been undertaken to determine whether the network of Earth system tipping elements is capable of producing global tipping cascades \\cite{wunderling2020interacting,gaucherel2017potential}. Using still conceptual, but process--based models, Dekker et al.~\\cite{dekker2018cascading} demonstrated a possible sequence of tipping events in a coupled system of the Atlantic Meridional Overturning Circulation and El--Ni\u00f1o Southern Oscillation. Social costs of future climate damages caused by carbon emissions originating from domino effects of interacting tipping elements were studied using an integrated assessment model \\cite{lemoine2016economics,cai2016risk}. Earlier, the propagation of critical transitions in lake chains as an ecological example was analyzed, coupling established models of shallow lakes by a unidirectional stream or via diffusion processes \\cite{van2005implications,hilt2011abrupt}. The effect of spatial heterogeneity and connectivity of bistable patches on the overall ecosystem response was further studied by the application of simple models for eutrophication and grazing of a (logistically growing) resource \\cite{van2005implications,dakos2010spatial}. 
In addition, examples beyond the biogeophysical Earth system possibly giving rise to the propagation of critical transitions were proposed, such as coupled subsystems in the fields of economics and finance \\cite{lenton2020tipping,brummitt2015coupled}.\n\n\\subsection{Descriptions of tipping cascades vary across the literature}\nHowever, tipping cascades or, more generally, patterns of multiple tipping dynamics discussed as arising from the interaction of tipping elements are often described only loosely, suffering a fate similar to that of the ancestral 'tipping point' concept \\cite{van2016you}. We encountered important differences across the descriptions of tipping cascades in the recent literature. These differences are in particular related to whether causality is a necessary ingredient for a cascade or not. For example, the pattern where tipping of one system causes the tipping of another system is described as domino dynamics or tipping cascade by Lenton et al.~\\cite{lenton2020tipping}. The propagation of regime shifts by an initial critical transition causing a following one is underpinned by generalized tipping element interactions and termed a cascade by Brummitt et al.~\\cite{brummitt2015coupled}. By comparison, the term cascading tipping is used for a sequence of abrupt transitions in Dekker et al.~\\cite{dekker2018cascading} that may not necessarily be causal. This notion of cascading tipping is applied, as an example, to the Atlantic Meridional Overturning Circulation and El--Ni\u00f1o Southern Oscillation as climatic tipping elements \\cite{dekker2018cascading}. Furthermore, and not restricted to causal events, an effect of one regime shift on the occurrence of another regime shift is suggested as cascading in Rocha et al.~\\cite{rocha2018cascading} and confirmed to connect ecological regime shifts such as fisheries collapse and transitions of kelp, mangrove and seagrass ecosystems. 
\n\nHere we systematically identify and characterize patterns of multiple tipping dynamics such as a domino cascade, a two phase cascade and a joint cascade, which arise in a previously studied system of idealized interacting tipping elements \\cite{brummitt2015coupled,abraham1991computational} (section~\\ref{sec:res}). In particular, these patterns of multiple tipping dynamics differ in how the critical transition propagates from one tipping element to another. The domino cascade, the two phase cascade and the joint cascade are related to the varying descriptions of tipping cascades in the literature and examples of multiple tipping events with comparable characteristics in the Earth system are given. Furthermore, we address the potential for intervention and anticipation by common early warning indicators based on critical slowing down (see Supplementary Material for details). Implications of the distinct patterns of multiple tipping for the resilience of the Earth system, limitations of studying idealized interacting tipping elements and necessary future research are discussed (section~\\ref{ref:disc}). \n \n\n\\section{Patterns of multiple tipping in a model of idealized interacting tipping elements}\n\\label{sec:res}\nIn the following, we present distinct patterns of multiple tipping dynamics, which emerge from the linear bidirectional coupling of two idealized tipping elements (figure~\\ref{fig:fig_1}, \\cite{brummitt2015coupled,abraham1991computational}). Each tipping element depends on its control parameter (or driver), the variation of which may induce a critical transition from a normal to an alternative state with the crossing of a critical control parameter threshold. We consider homogeneous tipping elements, i.e. both tipping elements undergo a critical transition at the same control parameter threshold and on the same intrinsic tipping time scales. 
A linear coupling term captures the interaction of the tipping elements following Wunderling et al.~\\cite{wunderling2020interacting}, where the state of one tipping element is added to the control parameter of another, coupled tipping element. We refer to Wunderling et al.~\\cite{wunderling2020interacting} and Klose et al.~\\cite{klose2020emergence} for a detailed description of the model of idealized interacting tipping elements. \n\nThe patterns of multiple tipping dynamics described below and illustrated in figure~\\ref{fig:fig_2} originate from different pathways through the control parameter space of both tipping elements: The control parameter~$c_2$ of subsystem~$X_2$ as \\textit{following} tipping element is kept constant at distinct levels (figure~\\ref{fig:fig_2}, going from top to bottom). The control parameter~$c_1$ of subsystem~$X_1$ as \\textit{evolving} tipping element is increased (figure~\\ref{fig:fig_2}, going from left to right) sufficiently slowly such that this subsystem can follow its (moving) equilibrium. In other words, by a separation of the intrinsic system time scale and the time scale of the forcing, the system can be regarded as a fast--slow system \\cite{kuehn2011mathematical}, where the change in the forcing of the system is slow compared to the intrinsic system time scale. We observe the following three qualitatively different dynamic patterns of multiple tipping: \n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[scale = 0.4]{figure1a_1b_1c.pdf}\n\t\\caption{(a) \\& (b): Bifurcation diagram of the idealized tipping elements~(TE)~$X_1$ (a) and $X_2$ (b). The respective differential equation is of the form $\\frac{\\rmd x_1}{\\rmd t} =-x_1^3+x_1+c_1+\\frac{1}{2}d_{21}(x_2+1)$ for subsystem~$X_1$ and $\\frac{\\rmd x_2}{\\rmd t}=-x_2^3+x_2+c_2+\\frac{1}{2}d_{12}(x_1+1)$ for subsystem~$X_2$. 
Note that for determining the bifurcation diagram of the idealized tipping elements~$X_1$ and $X_2$ the coupling term is not taken into account, i.e. the uncoupled case with $d_{21}= 0$ and $d_{12}= 0$ is shown here. Below the critical threshold~$c_{i_{\\rm{crit}}}$, $i = 1,2$, there exist two stable fixed points. As soon as the control parameter~$c_i$ transgresses its critical value~$c_{i_{\\rm{crit}}}$, a fold--bifurcation occurs and the system tips from the lower (normal) state~$x_{i^-}^*$ to the upper (alternative) state~$x_{i^+}^*$. (c) Sketch of the potential landscape of the two subsystems in case they do not interact shown as a ball--and--cup diagram.}\n\t\\label{fig:fig_1} \n\\end{figure}\n\n\\subsection{Two phase cascade (figure~\\ref{fig:fig_2}(a))}\n\nAn increase of the control parameter~$c_1$ across its threshold and the resulting critical transition of subsystem~$X_1$ is not sufficient to directly trigger a critical transition in subsystem~$X_2$. The system converges intermediately to a stable fixed point (as seen in the phase space portraits) and only a further increase of the control parameter~$c_1$ can initiate the critical transition in subsystem~$X_2$ by the loss of the intermediately occupied stable fixed point. Thus, by limiting the further increase in the control parameter~$c_1$ after the first tipping event of subsystem~$X_1$, a full two phase cascade can be mitigated. We can identify the two phase cascade with the cascade described and simulated in Dekker et al.~\\cite{dekker2018cascading}. Within the climate system, a stepwise change in the oxygen isotopic ratio at the Eocene--Oligocene transition may be interpreted as a two phase cascade of the Atlantic Meridional Overturning Circulation as the evolving tipping element and the Antarctic Ice Sheet as the following tipping element in response to a slowly decreasing atmospheric carbon dioxide concentration \\cite{dekker2018cascading,tigchelaar2011new}. 
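The coupled equations given in the caption of figure~1 can be explored directly by numerical integration. The following minimal sketch (the coupling strength, ramp speed and the specific levels of $c_2$ are illustrative assumptions, not values taken from this paper) slowly ramps the driver $c_1$ while holding $c_2$ fixed: for a moderately elevated $c_2$, the tipping of $X_1$ is sufficient to trigger $X_2$, whereas for $c_2 = 0$ only $X_1$ tips.

```python
import numpy as np

# Fold bifurcation of the uncoupled element dx/dt = -x^3 + x + c
C_CRIT = 2.0 / (3.0 * np.sqrt(3.0))  # ~ 0.385

def simulate(c2, d=0.2, dt=0.01, t_max=4000.0, c1_max=0.8):
    """Euler integration of the two linearly coupled tipping elements of
    figure 1; c1 is ramped slowly from 0 to c1_max (fast--slow setting),
    while c2 is held constant. Returns the final states (x1, x2)."""
    x1 = x2 = -1.0                     # both subsystems start in the normal state
    n = int(t_max / dt)
    for i in range(n):
        c1 = c1_max * i / n            # slow linear ramp of the driver of X1
        dx1 = -x1**3 + x1 + c1 + 0.5 * d * (x2 + 1.0)
        dx2 = -x2**3 + x2 + c2 + 0.5 * d * (x1 + 1.0)
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2

# Elevated but still subcritical c2: the tipping of X1 triggers X2 as well.
x1, x2 = simulate(c2=0.3)
# c2 = 0: X1 tips once c1 crosses its threshold, but X2 remains in its normal state.
y1, y2 = simulate(c2=0.0)
```

In the first run the effective driver of $X_2$, namely $c_2 + \frac{1}{2}d_{12}(x_1+1)$, is pushed across the critical value once $x_1$ settles on its upper branch; in the second run it stays below it.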
\n\nAn increasingly slower recovery from perturbations and thus an increase in common statistical indicators such as autocorrelation and variance are observed for subsystem~$X_1$ on the approach of the two phase cascade in a \\textit{pre--tipping time span} before the critical transition of subsystem~$X_1$ (Supplementary Material, figure~S1--S3). In contrast, for subsystem~$X_2$, an increasingly slower recovery from perturbations as well as increasing autocorrelation and variance can not be detected in the pre--tipping time span prior to the critical transition of subsystem~$X_1$ (Supplementary Material, figure~S1--S3). However, given the intermediate convergence to a stable fixed point after the critical transition of subsystem~$X_1$ and prior to the critical transition of subsystem~$X_2$, an \\textit{intermediate time span} offers the possibility to indicate the upcoming critical transition of subsystem~$X_2$ in the two phase cascade. A step--like change to a relatively higher level of the statistical indicators for subsystem~$X_2$ compared to the respective level in the pre--tipping time span is observed (Supplementary Material, figure~S2--S3, compare also \\cite{dekker2018cascading}), indicating an increased vulnerability of subsystem~$X_2$ to a critical transition. The height of the step--like change in the statistical indicators varies with the magnitude of the constant control parameter~$c_2$ as a consequence of an increasingly slower recovery from perturbations in the intermediate time span with increasing magnitude of the constant control parameter~$c_2$. This observation corresponds to the rotation of the eigenvectors and the change in the eigenvalue magnitude of the system of interacting tipping elements, which determine the magnitude and direction of the recovery to perturbations and hence critical slowing down prior to a bifurcation--induced critical transition (\\cite{boerlijst2013catastrophic,dakos2018identifying}, Supplementary Material). 
However, there is no threshold, i.e. no height of the step--like change above which a following tipping necessarily occurs; the height rather is a continuous and relative quantity. In other words, a step--like change of the statistical indicators (though comparably smaller) may also be present after the critical transition of subsystem~$X_1$ even if a critical transition of subsystem~$X_2$ does not follow. Thus, using this height of the step--like change to clearly indicate an upcoming following transition may be difficult in practice.\n\n\\subsection{Domino cascade (figure~\\ref{fig:fig_2}(b))}\n\nFor a slightly elevated level of the constant control parameter~$c_2$, the increase of the control parameter~$c_1$ across its threshold and the corresponding critical transition of subsystem~$X_1$ towards its alternative state is sufficient to trigger a critical transition of subsystem~$X_2$. Note that, in contrast to the two phase cascade, no further increase of the control parameter~$c_1$ is necessary to observe the domino cascade, but the tipping of one subsystem (the evolving tipping element) directly causes and initiates the tipping of another (the following tipping element). This corresponds to the description of a tipping cascade given in Lenton et al.~\\cite{lenton2020tipping} and Brummitt et al.~\\cite{brummitt2015coupled} and the general notion of a domino effect including causality \\cite{hornby2015dict}. A notable feature is the expected path of the system in the phase space. Even though the intermediately occupied stable fixed point involved in the two phase cascade is absent, it still influences the dynamics (see phase space, figure~\\ref{fig:fig_2}(b)) as a 'ghost' (e.g. \\cite{strogatz1989predicted,sardanyes2006ghosts,sardanyes2009ghosts,duarte2012chaos}). 
As demonstrated recently in a conceptual model, domino cascades may propagate through tipping elements in the Earth system, such as the large ice sheets on Greenland and West Antarctica and the Atlantic Meridional Overturning Circulation \\cite{wunderling2020interacting, wunderling2020basin}. \n\nA domino cascade may not be preceded clearly by the increase of the common early warning indicators and relying on these indicators may lead to an unexpected following critical transition of the following tipping element. An increasingly slower recovery from perturbations and thus increasing autocorrelation and variance as common statistical indicators are observed for subsystem~$X_1$ on the approach of the domino cascade in the pre--tipping time span (Supplementary Material, figure~S1--S3). The statistical indicators for subsystem~$X_2$ remain constant though on a relatively higher level than for the two phase cascade in the pre--tipping time span (Supplementary Material, figure~S1--S3). However, no clear intermediate time span prior to the critical transition of subsystem~$X_2$ exists allowing for an additional detection of early warning signals as for the two phase cascade. \n\n\\subsection{Joint cascade (figure~\\ref{fig:fig_2}(c))}\n\nSubsystem~$X_1$ and subsystem~$X_2$ may tip jointly with a possible trajectory evolving close to the phase space diagonal for an increase of the control parameter~$c_1$ across its threshold as opposed to the other two multiple tipping patterns. Such a joint cascade is observed with a strongly elevated level of the constant control parameter~$c_2$. The critical transitions of the respective subsystems cannot be clearly distinguished with regard to their order of tipping. 
Though the case of a joint cascade has not been treated explicitly in the recent literature on interacting tipping elements, a similar behaviour may be observed in spatially extended bistable ecosystems subject to regime shifts \\cite{van2005implications,dakos2010spatial}.\n\nFor both subsystems, a slower recovery from perturbations is expected prior to their joint tipping (Supplementary Material, figure~S1--S2). For subsystem~$X_1$, autocorrelation and variance increase on the approach of the joint cascade with increasing control parameter~$c_1$. Subsystem~$X_2$ exhibits a relatively high constant level of these statistical indicators prior to the joint cascade corresponding to the level of the constant control parameter~$c_2$ and indicating the vulnerability of this subsystem to critical transitions (Supplementary Material, figure~S3). \n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[scale = 0.5]{figure2a_2b_2c.pdf}\n\t\\caption{Three different types of tipping cascades depicted as three different situations. From left to right, the critical parameter~$c_1$ of the evolving tipping element~(TE)~$X_1$ is driven closer to and over its tipping point (compare to figure~\\ref{fig:fig_1}). From top to bottom, the critical parameter~$c_2$ of the following tipping element~(TE)~$X_2$ is also driven closer to, but not across, its tipping point. In this setting, three different patterns of multiple tipping or cascades can occur. (a) Two phase cascade: The first subsystem~$X_1$ tips and is then shifted closer towards subsystem~$X_2$ by an increase of the control parameter~$c_1$. Then subsystem~$X_2$ tips as well. (b) Domino cascade: The subsystems~$X_1$ and $X_2$ are closer together than in the two phase cascade such that a tipping of subsystem~$X_1$ (middle panel) is sufficient to trigger a critical transition in subsystem~$X_2$. 
(c) Joint cascade: The two subsystems are very close to each other such that the beginning of a tipping event in subsystem~$X_1$ immediately causes the tipping of the second subsystem~$X_2$ and the tipping events cannot be distinguished. The respective stable fixed point attractors and phase diagrams are shown below the domino sketches. Orange dots represent stable fixed points, while unstable fixed points are given by red dots. The background colour indicates the normalized speed $v = \\sqrt{\\dot{x}_{1}^2+\\dot{x}_{2}^2}\/v_{max}$ going from close to zero (purple) to fast (yellow--green).\n}\n\t\\label{fig:fig_2} \n\\end{figure}\n\n\\section{Discussion}\n\\label{ref:disc}\nStudying a system of idealized interacting tipping elements \\cite{brummitt2015coupled,abraham1991computational}, qualitatively different dynamic patterns of multiple tipping were identified and characterized as a two phase cascade, a domino cascade and a joint cascade. \n\nThe various patterns of multiple tipping originating from two idealized interacting tipping elements are related to different, though simplified and specific pathways through the control parameter space. In the end, the control parameter evolution, i.e., the evolution of the forcing, determines which pattern of multiple tipping (a domino cascade, a two phase cascade or a joint cascade) is observed. However, other factors, such as the strength and the sign of the coupling, are decisive for the emergence of tipping cascades as well. Moreover, in more complex systems, control parameters cannot be treated separately for each tipping element and drivers may be shared \\cite{rocha2018cascading}.\n\nThe different observed patterns of multiple tipping may have implications for the mitigation of tipping by controlling the respective drivers. 
A limitation of the forcing can prevent the two phase cascade since a critical transition of the evolving tipping element is not sufficient for the spread of a tipping event to a following subsystem. Instead, the critical transition needs to be followed by a further evolution of the respective subsystem's state before a following critical transition is initiated. However, in a domino cascade an initial critical transition of the evolving tipping element is sufficient to trigger a slightly delayed but inevitable following critical transition of another tipping element.\n\nIn addition, the potential success of anticipating the emergence of tipping cascades through early warning indicators based on critical slowing down \\cite{wissel1984universal,scheffer2009critical,lenton2011early} was assessed and demonstrated to differ across the patterns of multiple tipping (see Supplementary Material). Using insights of Boerlijst et al.~\\cite{boerlijst2013catastrophic} and Dakos~\\cite{dakos2018identifying} on critical slowing down in multi--component systems in relation to the eigenvector orientation, it is shown how critical slowing down and common statistical indicators for the anticipation of critical transitions are related to the rotation of eigenvectors and the change in the eigenvalues' magnitude. Thereby, the analysis of statistical properties of the two phase cascade in Dekker et al.~\\cite{dekker2018cascading} is expanded. We find that these common statistical indicators based on critical slowing down may fail for upcoming domino cascades in a system of idealized interacting tipping elements. While increasing autocorrelation and variance are observed for the evolving tipping element on the approach of the domino cascade, constant levels of these statistical indicators were determined for the following tipping element. 
In the case of a two phase cascade or a joint cascade, the critical slowing down based indicators express some degree of vulnerability (or resilience) in the system of interacting tipping elements. However, their application may be infeasible in practice. In particular, for the two phase cascade, the critical transition of the evolving tipping element is preceded by increasing autocorrelation and variance of the respective subsystem, while a step--like change towards a relatively higher level of the statistical indicators in the intermediate time span is found for the following tipping element. In the joint cascade, a raised but constant level of autocorrelation and variance for the following tipping element is accompanied by an increase of the statistical indicators for the evolving tipping element. With the slower recovery from perturbations of both tipping elements, correlations between the subsystems' time series may emerge, comparable to those exploited by spatial early warning signals \\cite{dakos2010spatial,dakos2011slowing,donangelo2010early,guttal2009spatial,kefi2014early}. \n\nAs these very specific and simplified scenarios of control parameter evolution demonstrate that an increase of autocorrelation and variance prior to multiple tipping events cannot necessarily be expected, these common early warning indicators should not be relied on as the only way of anticipating cascading critical transitions in systems of interacting tipping elements. 
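The behaviour of these statistical indicators can be made concrete on a single noisy tipping element. The sketch below (the noise level, sampling interval and parameter values are illustrative assumptions, not taken from this paper) holds the driver either far from or close to the fold at $c \approx 0.385$ and estimates variance and lag-1 autocorrelation of the stationary fluctuations; both rise towards the threshold, reflecting critical slowing down.

```python
import numpy as np

def indicators(c, sigma=0.03, dt=0.01, n=200_000, seed=0):
    """Euler--Maruyama simulation of one stochastic tipping element
    dx = (-x^3 + x + c) dt + sigma dW held at a fixed driver c.
    Returns variance and lag-1 autocorrelation of the subsampled
    stationary time series on the lower (normal) branch."""
    rng = np.random.default_rng(seed)
    x = -1.0
    xs = np.empty(n)
    for i in range(n):
        x += dt * (-x**3 + x + c) + sigma * np.sqrt(dt) * rng.standard_normal()
        xs[i] = x
    s = xs[n // 2 :: 50]               # discard transient, then subsample
    return s.var(), np.corrcoef(s[:-1], s[1:])[0, 1]

var_far, ac_far = indicators(c=0.0)    # far from the fold at c ~ 0.385
var_near, ac_near = indicators(c=0.36) # close to the fold
# Both variance and lag-1 autocorrelation increase towards the fold.
```

The increase follows from the shrinking linear restoring rate near the fold: the stationary variance scales with its inverse and the autocorrelation over a sampling interval decays more slowly.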
Additionally, taking into account the often referenced limitations, false alarms and false positives in the application of critical slowing down based indicators to individual tipping elements and the anticipation of upcoming critical transitions \\cite{boettiger2013early,dakos2015resilience,ditlevsen2010tipping}, it seems necessary to combine process--based modelling, accompanied by monitoring of the system under investigation, with data--driven techniques \\cite{dakos2015resilience,ditlevsen2010tipping,dakos2012methods} to detect upcoming multiple transitions and, in particular, the domino cascade. \n\nNote that the presented discussion is restricted to bifurcation--induced tipping, where relatively weak noise and a sufficiently slow change of the tipping element driver are applied. Hence, our examination of tipping cascades excludes early tipping \\cite{lohmann2021abrupt} and flickering \\cite{dakos2013flickering} due to noise as well as rate--induced effects, which will further influence the presented patterns of multiple tipping, their characteristics such as the intermediate time span of the two phase cascade and hence the potential for anticipation and mitigation. In a related stochastic system, similar patterns were demonstrated as fast and slow domino effects \\cite{ashwin2017fast}. The patterns of multiple tipping are expected to change in response to a fast change of the tipping element driver with respect to the intrinsic response time scales, which cannot be ruled out given the current unprecedented anthropogenic forcing of the biogeophysical Earth system \\cite{joos2008rates,zeebe2015anthropogenic}. 
In addition, rate--induced transitions may occur \\cite{ashwin2012tipping,wieczorek2011excitability}, as suggested by modelling studies for the Atlantic Meridional Overturning Circulation \\cite{alkhayuon2019basin,stocker1997influence,lohmann2021risk}, predator--prey systems \\cite{o2019tipping,scheffer2008pulse,siteur2016ecosystems} and the release of soil carbon in the form of the compost--bomb instability \\cite{wieczorek2011excitability,luke2011soil}, and may further complicate the early warning of cascading tipping \\cite{lohmann2021abrupt,ritchie2016early}. Heterogeneity across the response of tipping elements to the same control parameter level \\cite{brook2013does,scheffer2012anticipating} and in the intrinsic time scales of tipping \\cite{wunderling2020interacting,ritchie2021overshooting,hughes2013living} was neglected. \n\nFinally, it is assumed that the long--term behaviour of many real--world systems in terms of the system's state, such as the overturning strength of the Atlantic Meridional Overturning Circulation \\cite{stommel1961thermohaline,cessi1994simple}, the ice volume of the Greenland Ice Sheet \\cite{levermann2016simple} and the algae density in shallow lakes \\cite{scheffer1989alternative,scheffer1993alternative}, can be qualitatively captured by the studied idealized tipping elements featuring a fold bifurcation as tipping mechanism. However, biogeophysical and biogeochemical processes that are involved in the behaviour of these real--world systems and included in some more complex climate models may either give rise to further types of cascading tipping or may dampen the overall potential for tipping behaviour \\cite{wunderling2020interacting}. \n\n\\section{Conclusion}\n\nQualitatively different patterns of multiple tipping dynamics in interacting nonlinear subsystems of the climate and ecosystems have been identified in this work. 
These multiple tipping patterns may emerge as illustrated in a system of idealized interacting tipping elements and include the cases of joint cascades, domino cascades and two phase cascades. As described in Lenton et al.~\\cite{lenton2020tipping} and Brummitt et al.~\\cite{brummitt2015coupled}, and corresponding to the general notion of a domino effect \\cite{hornby2015dict}, tipping of one subsystem causes or triggers the tipping of another subsystem in a domino cascade. In addition, we find a two phase cascade corresponding to the tipping pattern presented in Dekker et al.~\\cite{dekker2018cascading}. While we reveal that it may be possible to find critical slowing down based early warning indicators for the two phase cascade, such indicators can fail in the case of a domino cascade. \n\nHowever, our results are limited by the conceptual nature of the system investigated here. In particular, in more complex and process--detailed models of tipping elements the respective nonlinear properties might be smeared out, and the presented characteristics of the emerging multiple tipping patterns might be altered by processes such as strong noise, interactions with other system components or further biogeophysical processes that are not modelled here.\n\nSince cascading tipping dynamics have been described rather roughly in the recent literature and the presented patterns of multiple tipping dynamics differ in their potential for mitigation and anticipation, we suggest being more precise in future discussions of the potential dynamics arising from the interaction of tipping elements and, in particular, of tipping cascades. \nIn the future, a quantitative assessment of interacting tipping elements with an ongoing improvement of their representation in complex (climate) models, e.g. 
by including interactive evolving ice sheets into Earth system models \\cite{kreuzer2021coupling}, as well as the additional use of paleoclimate data \\cite{thomas2020tipping}, may help to reduce uncertainties on the preconditions for the emergence of tipping cascades and on possible early warning indicators based on process understanding. Ultimately, these insights may contribute to reflections on the boundaries of the safe operating space for humanity, and to a better understanding of Earth system resilience with respect to anthropogenic perturbations more generally. \n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \nWe consider lossy compression of a binary symmetric source\n(BSS) using a low-density generator-matrix (LDGM) code as shown in\nFigure~\\ref{fig:ldgmtanner}. More precisely, let $S \\in \\GF^m$ represent\nthe binary source of length $m$. We have $S=\\{S_1,S_2,\\dots,S_m\\}$,\nwhere the $\\{S_i\\}_{i=1}^{m}$ are iid random variables with\n$\\prob\\{S_i=1\\}=\\frac12$, $i\\in [m]$. Let $\\mathcal{S}$ denote the set\nof all source words.\n\\begin{figure}[htp]\n\\begin{center}\n\\input{ps\/ldgmtanner}\n\\end{center}\n\\caption{\\label{fig:ldgmtanner}\nThe Tanner graph corresponding to a simple LDGM code used for\nlossy compression of a BSS. We have $m=7$, $R=\\frac47$, \nand $L(x)=x^3$.\n}\n\\end{figure}\n\nGiven a source word $s \\in {\\mathcal S}$, we compress it by mapping it to\none of the $2^{m R}$ index words $w \\in {\\mathcal W} = \\GF^{m R}$, where\n$R$ is the {\\em rate}, $R \\in [0, 1]$. \nWe denote this encoding map by\n$\\encoder: s \\mapsto w$ (the map can be random). The reconstruction is\ndone via an LDGM code determined by a sparse binary $m R \\times m$ generator\nmatrix $G$. Let $\\hat{s}$ denote the reconstructed word associated to\n$w$. We have $\\hat{s} = w G$. We denote this decoding map by $\\decoder:\nw \\mapsto \\hat{s}$. 
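As a concrete illustration of the reconstruction map $\hat{s} = w G$ over $\GF$, the following sketch multiplies an index word by a small sparse generator matrix modulo $2$. The $4 \times 7$ matrix (matching $m=7$, $R=\frac47$ and row weight $3$, i.e.\ $L(x)=x^3$ as in Figure~\ref{fig:ldgmtanner}) is a made-up example, not the actual graph of the figure.

```python
# Illustrative sketch: the LDGM reconstruction map s_hat = w G over GF(2)
# for m = 7, R = 4/7, so the index word w has m R = 4 generators.  The
# sparse 4 x 7 generator matrix G is a made-up example with row weight 3,
# matching the generator degree distribution L(x) = x^3.
G = [
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [0, 0, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 1, 1],
]

def reconstruct(w, G):
    """Return s_hat = w G with all arithmetic modulo 2."""
    m = len(G[0])
    return [sum(w[j] * G[j][i] for j in range(len(w))) % 2 for i in range(m)]

w = [1, 0, 1, 1]
s_hat = reconstruct(w, G)
print(s_hat)  # each bit of s_hat is the parity of the generators connected to it
```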
Let $\\code$ denote the code, $\\code=\\{\\hat{s}^{(1)}, \\dots,\n\\hat{s}^{(2^{m R})}\\}$, $\\hat{s}^{(i)} \\in \\GF^m$. The codewords are not necessarily distinct.\n\nWe call the components of the index word $w=\\{w_1, \\dots, w_{m R}\\}$\nthe {\\em generators} and the associated nodes in the factor graph\nrepresenting the LDGM code the {\\em generator nodes}. \nWe assume that these generator nodes have a normalized degree distribution \n$L(x)=\\sum_{i} L_i x^i$. This means that $L_i$ represents the fraction \n(out of $m R$) of generator nodes of degree $i$.\n\nWe are interested in the trade-off between rate and distortion that is achievable \nin this setting. Let $\\distortion(\\cdot, \\cdot)$ denote the Hamming distortion function,\n$\\distortion: \\GF^m \\times \\GF^m \\rightarrow \\naturals$.\nThe average distortion is then given by\n\\begin{align*}\n\\frac1{m} \\expectation[\\distortion(S, \\decoder(\\encoder(S)))].\n\\end{align*}\nWe are interested in the minimum of this average distortion, where the minimum\nis taken over all LDGM codes of a given rate, generator degree distribution $L(x)$, and length,\nas well as over all encoding functions.\n\n\\section{Review}\nGiven the success of sparse graph codes applied to the channel coding problem, it\nis not surprising that there is also interest in the use\nof sparse graph codes for the source coding problem.\nMartinian and Yedidia \\cite{MaYe03} were probably the first to \nwork on lossy compression using sparse graph codes. \nThey considered a memoryless ternary source with erasures and demonstrated a duality result between\ncompression of this source and the transmission problem over\na binary erasure channel (both using iterative encoding\/decoding).\nMezard, Zecchina, and Ciliberti \\cite{CiMe05} considered the lossy compression\nof the BSS using LDGM codes with a Poisson distribution on the generators.\nThey derived the one-step replica symmetry-breaking (1RSB) solution \nand the average\nrate-distortion function. 
According to this analysis, this ensemble approaches\nthe Shannon rate-distortion curve exponentially fast in the average degree.\nThey observed that the iterative interpretation associated with the 1RSB analysis\ngives rise to an algorithm, which they called {\\em survey propagation}. In\n\\cite{CiMeZe05} the same authors implement an encoder that utilizes a Tanner graph with random non-linear\nfunctions at the check nodes and a {\\em survey propagation} based\ndecimation algorithm for data compression of the BSS. \nIn \\cite{WaM05}, Wainwright and Maneva also considered the lossy compression of a\nBSS using an LDGM code with a given degree distribution. They showed how survey\npropagation can be interpreted as a belief propagation algorithm \n(as did Braunstein and Zecchina \\cite{BZ03})\non an enlarged set of assignments and demonstrated\nthat the survey propagation algorithm is a practical and efficient encoding\nscheme.\nRecently, Filler and Fridrich \\cite{FiFr07} demonstrated experimentally that\neven standard belief propagation based decimation algorithms using optimized\ndegree distributions for LDGM codes and a proper initialization of the messages\ncan achieve a rate-distortion trade-off very close to the Shannon bound. \nMartinian and Wainwright \\cite{MaWa06,MaW06a,MaW06b} constructed {\\em compound LDPC\nand LDGM code ensembles} and gave rigorous {\\em upper bounds} on their distortion\nperformance. A standard LDGM code ensemble is a special case of their\nconstruction, hence they also provide {\\em upper bounds} on the rate-distortion\nfunction of LDGM ensembles. By using the first and second moment method they proved\nthat a code chosen randomly from the {\\em compound ensemble} under optimal encoding and decoding achieves the Shannon\nrate-distortion curve with high probability. 
Finally, they pointed out that such constructions are\nuseful also in a more general context (e.g., the Wyner-Ziv or the Gelfand-Pinsker\nproblem).\nDimakis et al.\\ \\cite{DiWaRa07} were the first authors to provide\nrigorous {\\em lower bounds} on the rate-distortion function of LDGM code\nensembles. \n\\begin{theorem}[Dimakis, Wainwright, Ramchandran \\cite{DiWaRa07}]\n\\label{the:dwrbound} Let $\\code$ be a binary code of blocklength $m$ and rate\n$R$ chosen uniformly at random from an ensemble of left Poisson LDGM codes with check-node degree\n${\\mathtt r}$. Suppose that we perform MAP decoding. With high probability the \nrate-distortion pair $(R, D)$ achieved by $\\code$ fulfills\n\\begin{align*} \nR & \\geq \\frac{1-h(D)}{1-e^{-\\frac{(1-D){\\mathtt r}}{R}}} > 1-h(D).\n\\end{align*}\n\\end{theorem}\n\n\\subsection{Outline}\nIn the spirit of Gallager's information theoretic bound for LDPC codes,\nwe are interested in deriving lower bounds on the rate-distortion function \nwhich are valid for {\\em any} LDGM code with a given generator node degree distribution $L(x)$.\nOur approach is very simple.\nPick a parameter $D$, $D \\in [0, \\frac12]$ (think of this parameter as\nthe distortion). Consider the set of ``covered'' sequences \\begin{align}\n\\label{equ:cofd} {\\mathcal C}(D) & = \\bigcup_{\\hat{s} \\in \\code} {\\mathcal\nB}(\\hat{s}, D m), \\end{align} where ${\\mathcal B}(x, i)$, $x \\in \\GF^m$,\n$i \\in [m]$, is the Hamming ball of radius $i$ centered at $x$. In words,\n${\\mathcal C}(D)$ represents the set of all those source sequences that\nare within Hamming distance at most $D m$ from at least one codeword.\n\nRecall that for any $s \\in {\\cal S}$, ${\\encoder}(s) \\in {\\mathcal W}$\nrepresents the index word and that ${\\decoder}({\\encoder}(s))$ denotes the\nreconstructed word. 
We have\n\\begin{align*}\n\\distortion(s, {\\decoder}({\\encoder}(s))) & \\geq \n\\begin{cases}\n0, & s \\in {\\mathcal C}(D), \\\\\nDm, & s \\in \\GF^m \\setminus {\\mathcal C}(D).\n\\end{cases}\n\\end{align*}\nTherefore,\n\\begin{align} \n&\\frac1{m} \\expectation[\\distortion(S, {\\decoder}({\\encoder}(S)))] \\nonumber \\\\\n& = \\frac1{m}\\sum_{s \\in \\GF^m} 2^{-m} \\distortion(s, {\\decoder}({\\encoder}(s)))\n \\geq \\frac{2^{-m}}{m} \\sum_{s \\in \\GF^m \\setminus {\\mathcal C}(D)} \\distortion(s, {\\decoder}({\\encoder}(s))) \\nonumber \\\\\n& \\geq 2^{-m} D |\\GF^m \\setminus {\\mathcal C}(D)| \\geq D \\bigl(1-2^{-m} |{\\mathcal C}(D)| \\bigr).\n\\label{equ:averagedistortion}\n\\end{align}\nIf the codewords are well spread out then we know from Shannon's random coding\nargument that for a choice $D=h^{-1}(1-R)$, $|{\\mathcal C}(D)| \\approx 2^m$, \\cite{CoT91}.\nBut the codewords of an LDGM code are clustered since changing a\nsingle generator symbol only changes a constant number of symbols in\nthe codeword. There is therefore substantial overlap of the balls.\nWe will show that there\nexists a $D$ which is strictly larger than the distortion corresponding\nto Shannon's rate-distortion bound so that $|{\\mathcal C}(D)|$ is exponentially\nsmall compared to $2^m$ regardless of the specific code. From\n(\\ref{equ:averagedistortion}) this implies that the distortion is at\nleast $D$.\n\nTo derive the required upper bound on $|{\\mathcal C}(D)|$ we use two\ndifferent techniques. In Section~\\ref{sec:boundviacounting} we use a\nsimple combinatorial argument. 
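The inequality (\ref{equ:averagedistortion}) can be verified by brute force on a toy example. The sketch below is illustrative only: it uses a made-up $5 \times 10$ generator matrix with disjoint rows of weight $2$ (so $L(x)=x^2$ and $R=\frac12$), chosen so that the optimal average distortion can be checked by hand, and tests the covering bound for every radius.

```python
# Brute-force check of the covering bound
#   E[distortion]/m >= D * (1 - 2^{-m} |C(D)|)
# on a toy LDGM code.  The 5 x 10 generator matrix (disjoint rows of
# weight 2, i.e. L(x) = x^2) is a made-up example, chosen for transparency.
from itertools import product

m, mR = 10, 5
rows = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]   # support of each row of G

def codeword(w):
    s = [0] * m
    for wj, sup in zip(w, rows):
        if wj:
            for i in sup:
                s[i] ^= 1
    return tuple(s)

code = {codeword(w) for w in product((0, 1), repeat=mR)}

# Optimal (minimum-distance) encoding of every source word.
mind = {}
for s in product((0, 1), repeat=m):
    mind[s] = min(sum(a != b for a, b in zip(s, c)) for c in code)

avg_distortion = sum(mind.values()) / (len(mind) * m)

for d in range(m // 2 + 1):
    covered = sum(1 for v in mind.values() if v <= d)   # |C(D)| for D = d/m
    assert avg_distortion >= (d / m) * (1 - covered / 2 ** m)
print(avg_distortion)
```

For this toy code the optimal average distortion is exactly $0.25$ (each of the five bit pairs is unequal with probability $\frac12$), far above the Shannon distortion $h^{-1}(\frac12) \approx 0.11$ at rate $\frac12$, and the covering bound holds for every radius, as it must.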
In Section~\\ref{sec:boundviatestchannel},\non the other hand, we employ a probabilistic argument based on the\n``test channel'' which is typically used to show the achievability of\nthe Shannon rate-distortion function.\n\nAlthough both bounds prove that the rate-distortion function is\nstrictly bounded away from the Shannon rate-distortion function for\nthe whole range of rates and any LDGM code, we conjecture that a\nstronger bound is valid. We pose our conjecture as an open problem in\nSection~\\ref{sec:openquestions}.\n\n\\section{Bound Via Counting}\\label{sec:boundviacounting}\n\\begin{theorem}[Bound Via Counting]\\label{the:boundviacounting}\nLet $\\code$ be an LDGM code with blocklength $m$ and with generator\nnode degree distribution $L(x)$ and define $L'=L'(1)$. Let\n\\begin{align*}\nf(x) = \\prod_{i=0}^{d} (1+x^i)^{L_i}, \\;\\;\na(x) = \\sum_{i=0}^{d} i L_i \\frac{x^i}{1+x^i}, \\\\\n\\hat{R}(x) = \\frac{1-h(\\frac{x}{1+x})}{1-\\log \\frac{f(x)}{x^{a(x)}}}, \\;\\;\n\\hat{D}(x) = \\frac{x}{1+x} - a(x) \\hat{R}(x),\n\\end{align*}\nwhere $d$ denotes the maximum generator degree.\nFor $R \\in [\\frac{1}{L'}, 1]$ let $x(R)$ be the unique positive solution of $\\hat{R}(x)=R$.\nDefine the curve $D(R)$ as\n\\begin{align*}\n& \\begin{cases}\n\\frac12 \\Bigl(1-R L' \\bigl(1-2\\bigl( \\frac{x(\\frac{1}{L'})}{1+x(\\frac{1}{L'})} -\n\\frac{a(x(\\frac{1}{L'}))}{L'}\\bigr)\\bigr)\\Bigr), &\n R \\in [0, \\frac{1}{L'}], \\\\\n\\hat{D}(x(R)), & R \\in [\\frac{1}{L'}, 1].\n\\end{cases}\n\\end{align*}\nThen, for any blocklength $m$, the achievable distortion of an LDGM code of rate $R$ and generator degree distribution $L(x)$\nis lower bounded by $D(R)$.\n\\end{theorem}\nDiscussion: \n(i) As stated above, if we are considering a single code of rate $R$ then\nthe lower bound on the distortion is $D(R)$. 
If, on the other hand, we are considering\na family of codes, all with the same generator degree distribution $L(x)$ but with\ndifferent rates $R$, then it is more convenient to plot the lower bound in a parametric\nform. First plot the curve $(\\hat{D}(x), \\hat{R}(x))$ for $x \\in [0, 1]$. Then connect\nthe point $(D=\\frac12, R=0)$ to the point on the $(\\hat{D}(x), \\hat{R}(x))$ curve\nwith $\\hat{R}(x)=\\frac{1}{L'}$ by a straight line. The resulting upper envelope\ngives the stated lower bound for the whole range. This construction is shown in \nFigure~\\ref{fig:rdconstruction}.\n\\begin{figure}[htp]\n\\begin{center}\n\\input{ps\/rdconstruction}\n\\end{center}\n\\caption{\\label{fig:rdconstruction}\nConstruction of the bound for codes with $L(x)=x^2$ so that $L'=2$ (all generator\nnodes have degree $2$). The solid gray curve corresponds to the Shannon rate-distortion curve.\nThe black curve just above, which is partially solid and partially dotted, corresponds\nto the curve $(\\hat{D}(x), \\hat{R}(x))$ for $x \\in [0, 1]$. It starts at the point $(0, 1)$\n(which corresponds to $x=0$) and ends at $(\\frac{L'-1}{2 L'}=\\frac14, \\frac{1}{(L')^2}=\\frac14)$ \nwhich corresponds to $x=1$. The straight line goes from the point $(\\hat{D}(x(\\frac{1}{L'})), \\frac{1}{L'})$ to the point $(\\frac12, 0)$. Any achievable $(R, D)$ pair must lie in the lightly shaded region.\nThis region is strictly bounded away from the Shannon rate-distortion function over the whole range.\n}\n\\end{figure}\n(ii) \nAlthough this is difficult to glean from the expressions, we will\nsee in the proof that for any bounded generator degree distribution\n$L(x)$ the performance is strictly bounded away from the Shannon\nrate-distortion function. 
From a practical perspective, however, the gap\nto the rate-distortion bound decreases quickly in the degree.\n\n\\begin{example}[Generator-Regular LDGM Codes]\n\\label{exa:rdgeneratorregular}\nConsider codes with generator degree equal to ${\\mathtt l}$ and\nan arbitrary degree distribution on the check nodes.\nIn this case we have $f(x)=1+x^{\\mathtt l}$ and \n$a(x) = \\frac{{\\mathtt l} x^{\\mathtt l}}{1+x^{\\mathtt l}}$.\nFigure~\\ref{fig:rdgeneratorregular} compares the lower bound to\nthe rate-distortion curve for ${\\mathtt l}=1$, $2$, and $3$. For each case the\nachievable region is strictly bounded away from the Shannon rate-distortion curve.\n\\begin{figure}[htp]\n\\begin{center}\n\\input{ps\/rdgeneratorregular}\n\\end{center}\n\\caption{\\label{fig:rdgeneratorregular}\nBounds for $L(x)=x^{\\mathtt l}$ for ${\\mathtt l}=1$, $2$, and $3$.\nFor ${\\mathtt l}=2$ the $3$ gray dots correspond to the special\ncases $R=\\frac23$, $R=\\frac12$, and $R=\\frac25$ respectively.\nThe corresponding lower bounds on the distortion are\n$D(\\frac23) \\geq 0.0616 > 0.0614905$ (rate-distortion bound), \n$D(\\frac12) \\geq 0.115 > 0.11$ (rate-distortion bound), and $D(\\frac25) \\geq 0.1924 > 0.1461$ (rate-distortion bound).\n}\n\\end{figure}\n\\end{example}\n\n\\begin{example}[$({\\mathtt l}, {\\mathtt r})$-Regular LDGM Codes]\nIn this case we have $R={\\mathtt l}\/{\\mathtt r}$ and $L(x)=x^{{\\mathtt l}}$.\nThe same bound as in Example~\\ref{exa:rdgeneratorregular} applies.\nThe three special cases $({\\mathtt l}=2, {\\mathtt r}=3)$, $({\\mathtt l}=2, {\\mathtt r}=4)$, and $({\\mathtt l}=2, {\\mathtt r}=5)$,\nwhich correspond to $R=\\frac23$, $R=\\frac12$, and $R=\\frac25$ respectively,\nare marked in Figure~\\ref{fig:rdgeneratorregular} as gray dots.\n\\end{example}\n\n\\begin{example}[${\\mathtt r}$-Regular LDGM Codes of Rate $R$]\nAssume that all check nodes have degree ${\\mathtt r}$ and that the \nconnections are chosen uniformly at random with repetitions.\nFor large 
blocklengths this implies that the degree distribution\non the variable nodes converges to a Poisson distribution, i.e., we have\nin the limit\n\\begin{align*}\nL(x) & = \\sum_{i=0}^{\\infty} L_i x^i = e^{\\frac{{\\mathtt r}}{R} (x-1)}.\n\\end{align*}\nLet us evaluate our bound for this generator degree distribution.\nNote that since the average degree of the {\\em check} nodes is fixed we have a different\ngenerator degree distribution $L(x)$ for each rate $R$.\nFigure~\\ref{fig:rdsourceregular} compares the resulting bound with the Shannon rate-distortion function\nas well as the bound of Theorem~\\ref{the:dwrbound}. The new bound\nis slightly tighter. But more importantly, it applies to {\\em any} LDGM code. \n\\begin{figure}[htp]\n\\begin{center}\n\\input{ps\/rdsourceregular}\n\\end{center}\n\\caption{\\label{fig:rdsourceregular}\nLower bound on achievable $(R, D)$ pairs for ${\\mathtt r}$-regular LDGM codes with a\nPoisson generator degree distribution and ${\\mathtt r}=2, 4$. The dashed curve corresponds to \nthe bound of Theorem~\\ref{the:dwrbound} and the solid black curve represents the bound\nof Theorem~\\ref{the:boundviacounting}.\nThe gray curve is the Shannon rate-distortion tradeoff. \n}\n\\end{figure}\n\n\\end{example}\n\n{\\em Proof of Theorem~\\ref{the:boundviacounting}.}\nFrom the statement of Theorem~\\ref{the:boundviacounting} one sees that the\nbound consists of a portion of the curve $(\\hat{D}(x), \\hat{R}(x))$\nand a straight-line portion. The straight-line portion is easily\nexplained. Assume that all generator nodes have degree ${\\mathtt l}$ (for the\ngeneral case replace all mentions of ${\\mathtt l}$ by the average degree\n$L'$). Then the maximum number of check nodes that can depend on the\nchoice of generator nodes is $m R {\\mathtt l}$. Therefore, if the rate $R$ is\nlower than $\\frac{1}{{\\mathtt l}}$ then at least a fraction $(1-R {\\mathtt l})$ of the\ncheck nodes cannot be connected to any generator node. 
For those nodes\nthe average distortion is $\\frac12$, whereas for the fraction $R {\\mathtt l}$\nof the check nodes which are (potentially) connected to at least one\ngenerator node the best achievable distortion is the same for any $0 \\leq\nR \\leq \\frac{1}{{\\mathtt l}}$. It suffices therefore to restrict our attention\nto rates in the range $[\\frac{1}{L'}, 1]$ and to prove that their $(R,\nD)$ pairs are lower bounded by the curve $(\\hat{D}(x), \\hat{R}(x))$.\n\nAs a second simplification note that although the bound is valid for\nall blocklengths $m$ we only need to prove it for the limit of infinite\nblocklengths. To see this, consider a particular code of blocklength\n$m$. Take $k$ identical copies of this code and consider these $k$\ncopies as one code of blocklength $k m$. Clearly, this large code has\nthe same rate $R$, the same generator degree distribution $L(x)$,\nand the same distortion $D$ as each component code. By letting\n$k$ tend to infinity we can construct an arbitrarily large code of\nthe same characteristics and apply the bound to this limit. Since our bound\nbelow is valid for {\\em any} sequence of codes whose blocklength tends to\ninfinity the claim follows.\n\nPick $w \\in \\naturals$ so that $D m + w \\leq \\frac{m}{2}$. 
Then\n\\begin{align*}\n|{\\mathcal C}(D)| \n& = |\\bigcup_{\\hat{s} \\in \\code} {\\mathcal B}(\\hat{s}, D m)| \\\\\n& \\stackrel{\\text{(i)}}{\\leq} \\frac{1}{A_m(w)} \\sum_{\\hat{s} \\in \\code} |{\\mathcal B}(\\hat{s}, Dm +w)| \\\\\n& \\stackrel{\\text{(ii)}}{\\leq} 2^{-mR \\log \\frac{f(x_{\\omega})}{x_{\\omega}^{\\omega}}+o_m(1)} 2^{m R} 2^{m h(D+w\/m)} \\\\\n& \\stackrel{\\text{(iii)}}{=} 2^{m (-R \\log \\frac{f(x_{\\omega})}{x_{\\omega}^{a(x_{\\omega})}}+ \nR+ h(D+a(x_{\\omega}) R) + o_m(1))}.\n\\end{align*}\n\nTo see (i) note that a ``big'' sphere ${\\mathcal B}(\\hat{s}, Dm +w)$, where $\\hat{s} \\in \\code$, contains \nall ``small'' spheres of the form ${\\mathcal B}(\\hat{s}', D m)$, where $\\hat{s}' \\in \\code$ so that\n$\\distortion(\\hat{s}, \\hat{s}') \\leq w$.\nLet $A_m(w)$ be the number of codewords of Hamming weight at most $w$. \nThen, by symmetry, each small sphere\n${\\mathcal B}(\\hat{s}', Dm)$ is in exactly $A_m(w)$ big spheres ${\\mathcal B}(\\hat{s}, Dm +w)$.\nIt follows that every point in $\\bigcup_{\\hat{s} \\in \\code} {\\mathcal B}(\\hat{s}, D m)$ is counted at least\n$A_m(w)$ times in the expression $\\sum_{\\hat{s} \\in \\code} |{\\mathcal B}(\\hat{s}, Dm +w)|$.\n\nConsider now step (ii). We need a lower bound on $A_m(w)$. Assume at first that\nall generator nodes have degree ${\\mathtt l}$. Assume that exactly $g$ generator nodes\nare set to $1$ and that all other nodes are set to $0$. There are $\\binom{mR}{g}$\nways of doing this. Now note that for each such constellation the weight\nof the resulting codeword is at most $w=g {\\mathtt l}$. It follows that in the generator regular case\nwe have\n\\begin{align} \\label{equ:anofwone}\nA_m(w) \\geq \\sum_{g=0}^{w\/{\\mathtt l}} \\binom{mR}{g}. 
\n\\end{align}\nWe can rewrite (\\ref{equ:anofwone}) in the form\n\\begin{align} \\label{equ:anofwtwo}\nA_m(w) & \\geq \\sum_{i=0}^{w} \\text{coef}\\{(1+x^{\\mathtt l})^{mR}, x^i\\}, \n\\end{align}\nwhere $\\text{coef}\\{ (1+x^{\\mathtt l})^{mR}, x^i\\}$ indicates the coefficient of the polynomial\n$(1+x^{\\mathtt l})^{mR}$ in front of the monomial $x^i$. \nThe expression (\\ref{equ:anofwtwo}) stays valid also for irregular generator degree distributions $L(x)$\nif we replace $(1+x^{\\mathtt l})^{mR}$ with $f(x)^{mR}$, where $f(x)=\\prod_i(1+x^i)^{L_i}$ as defined in\nthe statement of the theorem. This of course requires that $m R$ is chosen in such a way\nthat $m R L_i \\in \\naturals$ for all $i$. \n\nDefine $N_m(w) = \\sum_{i=0}^{w} \\text{coef}\\{f(x)^{mR}, x^i\\}$, so that\n(\\ref{equ:anofwtwo}) can be restated as $A_m(w) \\geq N_m(w)$. Step (ii) now\nfollows by using the asymptotic expansion of $N_m(w)$ stated as Theorem~1\nin \\cite{MB04}, where we define $\\omega=w\/(mR)$ and where $x_{\\omega}$ is the\nunique positive solution to $a(x)=\\omega$.\n\nFinally, to see (iii) we replace $w$ by $mR a(x_{\\omega})$ and thus\nwe get the claim.\nSince this bound is valid for any $w \\in \\naturals$ such that $D m + w \\leq \\frac{m}{2}$ we get the bound\n\\begin{align*}\n\\lim_{m \\rightarrow \\infty} \\frac{1}{m} \\log |{\\mathcal C}(D)| \\leq g(D, R),\n\\end{align*}\nwhere\n\\begin{align*}\ng(D, R) & = \\inf_{\\stackrel{x \\geq 0}{D+a(x)R \\leq \\frac12}} -R \\log \\frac{f(x)}{x^{a(x)}}+ R+ h(D+a(x) R).\n\\end{align*}\n\n\nNow note that as long as $g(D, R)<1$,\n$|{\\mathcal C}(D)|$ is exponentially small compared to $2^m$.\nTherefore, looking back at (\\ref{equ:averagedistortion}) we see that in this case\nthe average distortion converges to at least $D$ in the limit $m \\rightarrow \\infty$.\nWe get the tightest bound by looking for the condition for equality, i.e., by looking\nat the equation $g(D, R)=1$. 
\nIf we take the derivative with respect to $x$ and set it to $0$ then we get the condition\n\\begin{align*}\n\\frac{x}{1+x} = D+R a(x).\n\\end{align*}\nRecall that $D + a(x) R \\leq \\frac12$, so that this translates to $x \\leq 1$. Replace $D+a(x) R$ in the entropy term by $\\frac{x}{1+x}$,\nset the resulting expression for $g$ equal to $1$, and solve for $R$.\nThis gives $R$ as a function of $x$ and so we also get $D$ as a function\nof $x$. We have\n\\begin{align*}\nR(x) = \\frac{1-h(\\frac{x}{1+x})}{1-\\log \\frac{f(x)}{x^{a(x)}}}, \\,\\,\nD(x) = \\frac{x}{1+x} - a(x) R(x).\n\\end{align*}\nA check shows that $x=0$ corresponds to $(D, R)=(0, 1)$ and that $x=1$ corresponds to \n$(D, R)=(\\frac{L'-1}{2 L'}, \\frac{1}{(L')^2})$. Further, $R$ and $D$ are monotone functions of $x$.\nRecall that we are only interested in the bound for $R \\in [\\frac{1}{L'}, 1]$. We get the corresponding\ncurve by letting $x$ take values in $[0, x(\\frac{1}{L'})]$. For smaller values of the rate\nwe get the aforementioned straight-line bound.\n\nLooking at the above expression for $g(D, R)$ one can see why this\nbound is strictly better than the rate-distortion curve for $D \\in (0,\n\\frac12)$. Assume at first that the generator degree distribution is\nregular. Let the degree be ${\\mathtt l}$. In this case a quick check shows\nthat $-R \\log \\frac{f(x)}{x^{a(x)}}$ is equal to $-R h(\\frac{a(x)}{{\\mathtt l}})$. Since\n$a(0)=0$ we get the rate-distortion bound if we set $x=0$.\nThe claim follows by observing that $a(x)$ is a continuous strictly\nincreasing function and that $h(x)$ has an infinite derivative at $x=0$\nwhile $h(D+a(x)R)$ has a finite derivative at $x=0$. It follows that\nthere exists a sufficiently small $x$ so that $R h(\\frac{a(x)}{{\\mathtt l}})$ is strictly\nlarger than $h(D+a(x)R)-h(D)$ and so that $D+a(x) R \\leq \\frac12$. Hence, $g(D,\nR)$ is strictly decreasing as a function of $x$ at $x=0$. 
This bounds\nthe achievable distortion strictly away from the rate-distortion bound.\nThe same argument applies to an irregular generator degree distribution;\nthe simplest way to see this is to replace ${\\mathtt l}$ by the maximum degree\nof $L(x)$.\n\n\n\\section{Bound Via Test Channel}\\label{sec:boundviatestchannel}\nInstead of using a combinatorial approach to bound $|{\\mathcal C}(D)|$\none can also use a probabilistic argument using the ``test channel''\nshown in Figure~\\ref{fig:testchannel}.\n\\begin{figure}[htp]\n\\begin{center}\n\\input{ps\/testchannel}\n\\end{center}\n\\caption{\\label{fig:testchannel} \nThe generator words $W$ are chosen uniformly at random from ${\\mathcal W}$.\nThis generates a codeword $\\hat{S}$ uniformly at random. Each component of\n$\\hat{S}$ is then sent over a binary symmetric channel with transition probability $D'$.\n}\n\\end{figure}\n\nFor the cases we have checked \nthe resulting bound is numerically identical to the bound of\nTheorem~\\ref{the:boundviacounting} (excluding the straight-line portion).\nWe restrict our exposition to the regular case. 
\nThe generalization to the irregular case is straightforward.\n\\begin{theorem}[Bound Via Test Channel]\\label{the:boundviatestchannel}\nLet $\\code$ be an LDGM code with blocklength $m$, \ngenerator degree distribution $L(x)=x^{{\\mathtt l}}$, and rate $R$.\nThen for any pair $(R, D)$, where $D$ is the average distortion, we have\n\\begin{align*}\nR & \\geq \\sup_{D \\leq D' \\leq \\frac12} \\frac{1-h(D)-\\text{KL}(D \\| D')}{1-\\log_2\\Bigl(1+\\frac{(D')^{\\mathtt l}}{(1-D')^{\\mathtt l}} \\Bigr)} \\\\\n& \\geq \\frac{1-h(D)}{1-\\log_2\\Bigl(1+\\frac{D^{\\mathtt l}}{(1-D)^{\\mathtt l}} \\Bigr)} > 1-h(D),\n\\end{align*}\nwhere $\\text{KL}(D \\| D')=D \\log_2(D\/D')+(1-D) \\log_2((1-D)\/(1-D'))$.\n\\end{theorem}\n{\\em Proof.} \nThe same remark as in the proof of Theorem~\\ref{the:boundviacounting} applies: although\nthe bound is valid for any blocklength it suffices to prove it for the limit of\nblocklengths tending to infinity. Also, for simplicity we have not stated the bound\nin its strengthened form which includes a straight-line portion. But the same technique\nthat was applied in the proof of Theorem~\\ref{the:boundviacounting} applies also to the present case.\n\nAs remarked earlier, the idea of the proof is based on bounding\n$|{\\mathcal C}(D)|$ by using the ``test channel.'' More precisely,\nchoose $W$ uniformly at random from the set of\nall binary sequences of length $m R$. Subsequently compute\n$\\hat{S}$ via $\\hat{S} = W G$, where $G$ is the generator matrix\nof the LDGM code. Finally, let $S=\\hat{S}+Z$, where $Z$ has iid components with\n$\\prob\\{Z_i=1\\}=D'$.\n\nConsider the set of sequences $s \\in {\\mathcal C}(D)$.\nFor each such $s$ we know that there exists an $\\hat{s} \\in \\code$ so\nthat $\\distortion(s, \\hat{s}) \\leq Dm$. 
\nWe have\n\\begin{align*}\n& \\prob\\{S=s \\mid s \\in {\\mathcal C}(D) \\} \\\\\n& = \\sum_{\\hat{s}' \\in \\code} \\prob\\{S=s, \\hat{S}=\\hat{s}' \\mid s \\in {\\mathcal C}(D) \\} \\\\ \n& = \\sum_{w=0}^{m} \\sum_{\\hat{s}' \\in \\code: \\distortion(\\hat{s}', \\hat{s})=w} \\prob\\{S=s, \\hat{S}=\\hat{s}' \\mid s \\in {\\mathcal C}(D) \\} \\\\ \n& = \\sum_{w=0}^{m} A_{m}(w) \\prob\\{S=s, \\hat{S}=\\hat{s}' \\mid s \\in {\\mathcal C}(D), \\distortion(\\hat{s}', \\hat{s})=w \\} \\\\ \n& = \\sum_{w=0}^{m} A_{m}(w) 2^{-m R} \\Bigl(\\frac{D'}{1-D'}\\Bigr)^{\\distortion(s, \\hat{s}')} (1-D')^m\n\\end{align*}\n\\begin{align*}\n& \\geq \\sum_{w=0}^{m} A_{m}(w) 2^{-m R} \\Bigl(\\frac{D'}{1-D'}\\Bigr)^{\\distortion(s, \\hat{s})+\\distortion(\\hat{s}, \\hat{s}')} (1-D')^m \\\\\n& \\stackrel{ \\distortion(\\hat{s}', \\hat{s})=w}{=} \\sum_{w=0}^{m} A_{m}(w) 2^{-m R} \\Bigl(\\frac{D'}{1-D'}\\Bigr)^{\\distortion(s, \\hat{s})+w} (1-D')^m \\\\\n& \\stackrel{\\distortion(s, \\hat{s}) \\leq D m}{\\geq} \\sum_{w=0}^{m} A_{m}(w) 2^{-m R} \\Bigl(\\frac{D'}{1-D'}\\Bigr)^{D m+w} (1-D')^m \\\\\n& = 2^{-m R -mh(D)-m \\text{KL}(D \\| D')} \\sum_{w=0}^{m} A_m(w) \\Bigl(\\frac{D'}{1-D'}\\Bigr)^{w},\n\\end{align*}\nwhere $A_m(w)$ denotes the number of codewords in $\\code$ of Hamming weight $w$. 
Due to\nthe linearity of the code this is also the number of codewords in $\\code$ at Hamming\ndistance $w$ from $\\hat{s}$. In particular, the partial sum $\\sum_{i=0}^{w-1} A_m(i)$ counts the\ncodewords of weight at most $w-1$ and is therefore lower bounded as in (\\ref{equ:anofwtwo}).\nUsing summation by\nparts and setting $c=D'\/(1-D')<1$, we have\n\\begin{align*}\n& \\sum_{w=0}^{m} A_m(w) c^w \\\\\n& = c^{m+1} 2^{mR}+ \\sum_{w=0}^{m}\\Bigl(\\sum_{i=0}^{w-1} A_m(i) \\Bigr) (c^w-c^{w+1}) \\\\\n& \\stackrel{(\\ref{equ:anofwtwo})}{\\geq} c^{m+1} 2^{mR}+ \\sum_{w=0}^{m}\\Bigl(\\sum_{i=0}^{\\lfloor(w-1)\/{\\mathtt l} \\rfloor} \n\\binom{mR}{i} \\Bigr) (c^w-c^{w+1}) \\\\\n& = \\sum_{w=0}^{\\lfloor m\/{\\mathtt l} \\rfloor} \\binom{mR}{w} c^{{\\mathtt l} w} + c^{m+1} \\Bigl(2^{mR} -\n\\sum_{i=0}^{\\lfloor m\/{\\mathtt l} \\rfloor} \\binom{mR}{i} \\Bigr) \\\\\n& \\geq \\sum_{w=0}^{\\lfloor m\/{\\mathtt l} \\rfloor} \\binom{mR}{w} c^{{\\mathtt l} w} \\geq \\frac1m (1+c^{\\mathtt l})^{m R}.\n\\end{align*}\nThe last step is valid as long as $\\frac{R c^{\\mathtt l}}{1+c^{\\mathtt l}} < \\frac{1}{{\\mathtt l}}$. In\nthis case the maximum term (which appears at $\\frac{R c^{\\mathtt l}}{1+c^{\\mathtt l}} m$) is\nincluded in the sum (which goes to $m\/{\\mathtt l}$) and is thus greater than or equal to\nthe average of all the terms, which is $\\frac1m (1+c^{\\mathtt l})^{m R}$. This\ncondition is trivially fulfilled for $R {\\mathtt l} < 1$. Assume for a moment that it\nis also fulfilled for $R {\\mathtt l} \\geq 1$ and the optimum choice of $D'$. 
It then\nfollows that\n\\begin{align*}\n\\prob\\{S=s \\mid s \\in {\\mathcal C}(D) \\} & \n\\geq \\frac1m 2^{-m(R+h(D)+\\text{KL}(D \\| D')-R \\log_2(1+c^{\\mathtt l}))}.\n\\end{align*}\nSince \n\\begin{align*}\n1 & = \\sum_{s \\in \\GF^m} \\prob\\{S=s\\} \n\\geq \\sum_{s \\in {\\mathcal C}(D)} \\prob\\{S=s\\} \\\\\n& \\geq |{\\mathcal C}(D)| \\frac1m 2^{-m(R+h(D)+\\text{KL}(D \\| D')-R \\log_2(1+c^{\\mathtt l}))},\n\\end{align*}\nwe have\n$|{\\mathcal C}(D)| \\leq m 2^{m(R+h(D)+\\text{KL}(D \\| D')-R \\log_2(1+c^{\\mathtt l}))}$.\nProceeding as in (\\ref{equ:averagedistortion}), we have\n\\begin{align*}\n& \\expectation[\\distortion(S, {\\decoder}({\\encoder}(S)))] \n\\geq D \\bigl(1-2^{-m} |{\\mathcal C}(D)| \\bigr) \\\\\n& \\geq D \\bigl(1-m 2^{m(R+h(D)+\\text{KL}(D \\| D')-R \\log_2(1+c^{\\mathtt l})-1)} \\bigr).\n\\end{align*}\nWe conclude that if for some $D \\leq D' \\leq \\frac12$, \n$R+h(D)+\\text{KL}(D \\| D')-R \\log_2(1+\\frac{(D')^{\\mathtt l}}{(1-D')^{\\mathtt l}})-1<0$\nthen the distortion is at least $D$. All this is still conditioned on\n$\\frac{R {\\mathtt l} c^{\\mathtt l}}{1+c^{\\mathtt l}} < 1$ for the optimum choice of $D'$.\nFor $R {\\mathtt l} <1$ we already checked this. 
So assume that $R {\\mathtt l} \\geq 1$.\nThe above condition can then equivalently be written\nas $D' < \\frac{1}{1+(R {\\mathtt l} -1)^{\\frac{1}{{\\mathtt l}}}}$.\nOn the other hand, taking the derivative of our final expression for\nthe rate-distortion bound with respect to $D'$, we find that the condition\nfor the maximum is\n$D' = \\frac{1}{1+(1+\\frac{R {\\mathtt l}}{D'-D})^{\\frac{1}{{\\mathtt l}}}} < \\frac{1}{1+(R {\\mathtt l} -1)^{\\frac{1}{{\\mathtt l}}}}$.\nWe see therefore that our assumption $\\frac{R {\\mathtt l} c^{\\mathtt l}}{1+c^{\\mathtt l}} < 1$ is also correct\nin the case $R {\\mathtt l} \\geq 1$.\n\nNumerical experiments show that, for the regular case, the present bound yields\nresults identical to those obtained by plotting the curve corresponding to $g(D, R)=1$, where\n$g(D, R)$ was defined in the proof of Theorem~\\ref{the:boundviacounting}.\nThis can be interpreted as follows. Choose $D'$ equal to the optimal radius of the Hamming\nball in the proof of Theorem~\\ref{the:boundviacounting}. Then the points $\\hat{s}'$ that\ncontribute most to the probability of $S=s$ must be those at distance\n$m(D'-D)$ from $\\hat{s}$.\n\n\\section{Discussion and Open Questions}\\label{sec:openquestions}\nIn the preceding sections we gave two bounds. Both of them are\nbased on the idea of counting the number of points that are ``covered''\nby spheres centered around the codewords of an LDGM code. In the\nfirst case we derived a bound by double counting this number. In the second\ncase we derived a bound by looking at a probabilistic model using the test channel.\n\nAn interesting open question is to determine the exact relationship of\nthe test channel model to the rate-distortion problem.\nMore precisely, it is tempting to conjecture that a pair $(R, D)$ is achievable\nonly if $H(S)=m$ in this test channel model. 
This would require showing\nthat only elements of the typical set of ${\\mathcal S}$ under the test channel model\nare covered, i.e., have codewords within distance $D$. For the test channel model it is\nvery easy to determine a criterion in the spirit of Gallager's original bound.\nWe have\n\\begin{align*}\nH(S) \n& = H(W)+H(S \\mid W)- H(W \\mid S) \\\\\n& = m R + m h(D) - \\sum_{g=1}^{m R} H(W_g \\mid S, W_{1}, \\dots, W_{g-1}) \\\\\n& \\stackrel{\\text{(i)}}{\\leq} m R + m h(D) - \\sum_{g=1}^{m R} H(W_g \\mid S, W_{\\sim g}) \\\\\n& \\stackrel{\\text{(ii)}}{=} m R + m h(D) - \\sum_{g=1}^{m R} H(W_g \\mid S_g, W_{\\sim g}),\n\\end{align*}\nwhere $S_g$ denotes the subset of the components of $S$ which\nare connected to the generator $g$.\nStep (i) follows since conditioning decreases entropy. Step (ii) follows since,\ngiven $(S_g, W_{\\sim g})$, $W_g$ does not depend on $S_{\\sim g}$. The term\n$H(W_g \\mid S_g, W_{\\sim g})$ represents the EXIT function of a repetition\ncode when transmitting over a BSC($D$) channel.\nIf one could show that $H(S)=m$ is a necessary condition for\nachieving an average distortion of $D$, then a quick calculation shows that\nthe resulting bound would read\n\\begin{align*}\nR & \\geq \n\\frac{1 - h(D)}{1-\\sum_{i=0}^{{\\mathtt l}} \\binom{{\\mathtt l}}{i} (1-D)^i \nD^{{\\mathtt l}-i} \\log_2\\Bigl(1+\\bigl(\\frac{D}{1-D}\\bigr)^{2 i -{\\mathtt l}}\\Bigr)}. 
\n\\end{align*}\nThis ``bound'' is similar in spirit to the original bound given by Gallager, except\nthat in Gallager's original bound for LDPC codes we have a term corresponding to the\nentropy of single-parity check codes, whereas here we have terms that correspond\nto the entropy of repetition codes; this would be quite fitting given the duality of the problems.\n\n\\section*{Acknowledgment} We gratefully acknowledge the support by the\nSwiss National Science Foundation under grant number 200020-113412.\n\n\\bibliographystyle{IEEEtran}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalznmr b/data_all_eng_slimpj/shuffled/split2/finalznmr new file mode 100644 index 0000000000000000000000000000000000000000..646b58e843e4fe2b9ab5cab25fd9310df90682ba --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalznmr @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe Tijdeman-Zagier conjecture \\cites{elkies2007abc, waldschmidt2004open, crandall2006prime}, or more formally the generalized Fermat equation \\cite{bennett2015generalized} states that given\n\\begin{equation}\nA^X+B^Y=C^Z \\label{eqn:1}\n\\end{equation}\nwhere $A$, $B$, $C$, $X$, $Y$, $Z$ are positive integers, and exponents $X$, $Y$, $Z\\geq3$, then bases $A$, $B$, and $C$ have a common factor. Some researchers refer to this conjecture as Beal's conjecture \\cite{beal1997generalization}. There are considerable theoretical foundational advances in and around this topic \\cites{nitaj1995conjecture, darmon1995equations, vega2020complexity}. Many exhaustive searches within limited domains have produced indications this conjecture may be correct \\cites{beal1997generalization, norvig2010beal, durango}. 
More formal attempts to explicitly prove or counter-prove the conjecture abound in the literature but are often unable to entirely generalize due to difficulty in overcoming floating point arithmetic limits, circular logic, unsubstantiated claims, incomplete steps, or reliance on conjectures \\cites{di2013proof, norvig2010beal, dahmen2013perfect, de2016solutions}. Many partial proofs are published in which limited domains, conditional upper bounds of the exponents, specific configurations of bases or exponents, or additional, relaxed, or modified constraints are applied in which the conjecture holds \\cites{beukers1998, beukers2020generalized, siksek2012partial, poonen1998some, merel1997winding, bennett2006equation, anni2016modular, billerey2018some, kraus1998equation, poonen2007twists, siksek2014generalised, miyazaki2015upper}. \n\nSome researchers demonstrate or prove limited coprimality of the exponents \\cite{ghosh2011proof}, properties of perfect powers and relationships to Pillai's conjecture \\cite{waldschmidt2009perfect}, impossibility of solutions for specific bases \\cite{mihailescu2004primary}, influence of the parity of the exponents \\cite{joseph2018another}, characterizations of related Diophantine equations \\cite{nathanson2016diophantine}, the relationship between the smallest base and the common factor \\cite{townsend2010search}, and countless other insights. \n\n\nTo formally establish a rigorous and complete proof, we need to consider two complementary conditions: 1) when gcd$(A,B,C)=1$ there is no integer solution to \\BealsEq, and 2) if there is an integer solution, then gcd$(A,B,C)>1$. The approach we take is linked to the properties of slopes. An integer solution that satisfies the conjecture also marks a point $(A,B,C)$ that subtends a line through the origin on a 3-dimensional Cartesian graph. Since its coordinates are integers, this point is a lattice point, and thus the line has a rational slope in all 3 planes. 
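As a concrete illustration of the conjecture, consider the well-known example $3^3+6^3=3^5$ (that is, $27+216=243$): all three exponents are at least 3, and the bases share the common factor gcd$(3,6,3)=3>1$, consistent with the conjecture. 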
Among other properties, it will be shown that if gcd$(A,B,C)=1$ with integer exponents, then one or more of the slopes of the subtended line is irrational, and hence the line cannot pass through any non-trivial lattice points. Conversely, it will be shown that if there exists a solution that satisfies the conjecture, the subtended line must pass through a non-trivial lattice point and must have rational slopes.\n\n\\section{Details of the Proof}\n\n\nEstablishing the proof requires that we first identify, substantiate, and then prove several preliminary properties:\n\\begin{itemize}\n\\item \\textbf{Slopes of the Terms}: determine slopes of lines subtended by the origin\n and the lattice point $(A,B,C)$ that satisfy the terms of the conjecture\n (\\cref{Thm:2.1_Irrational_Slope_No_Lattice} on\n pages \\pageref{Section:Slopes_Start} to \\pageref{Section:Slopes_End}).\n\\item \\textbf{Coprimality of the Bases}: determine implications of 3-way coprimality on\n pairwise coprimality and implications of pairwise coprimality on 3-way\n coprimality\n (\\cref{Thm:2.2_Coprime,Thm:2.3_Coprime,Thm:2.4_Coprime}\n on pages \\pageref{Section:Comprimality_Start} to\n \\pageref{Section:Comprimality_End}).\n\\item \\textbf{Restrictions of the Exponents}: determine limits of the exponents related\n to coprimality of the bases and bounds of the conjecture\n (\\cref{Thm:2.5_X_cannot_be_mult_of_Z} on pages \\pageref{Section:Exponents_Start}\n to \\pageref{Section:Exponents_End}).\n\\item \\textbf{Reparameterization of the Terms}: determine equivalent functional forms\n of the terms and associated properties as related to coprimality of the terms\n (\\cref{Thm:2.6_Initial_Expansion_of_Differences,Thm:2.7_Indeterminate_Limit,Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta,Thm:2.10_No_Solution_Alpha_Beta_Irrational,Thm:2.11_Coprime_Alpha_Beta_Irrational,Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime,Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate} on\n pages 
\\pageref{Section:Reparameterize_Start} to\n \\pageref{Section:Reparameterize_End}).\n\\item \\textbf{Impossibility of the Terms}: determine the relationship between\n term coprimality and slope irrationality, and between\n slope irrationality and solution impossibility\n (\\cref{Thm:2.14_Main_Proof_Coprime_No_Solutions} on\n pages \\pageref{Section:Impossibility_Start} to \\pageref{Section:Impossibility_End}).\n\\item \\textbf{Requirement for Possibility of the Terms}: determine characteristics of\n gcd$(A,B,C)$ required for there to exist a solution given the\n properties of slopes and coprimality\n (\\cref{Thm:2.15_Main_Proof_Solutions_Then_Not_Coprime} on pages\n \\pageref{Section:Possibility_Start} to \\pageref{Section:Possibility_End}).\n\\end{itemize}\n\nBefore articulating each of the underlying formal proofs, we establish two specific definitions to ensure consistency of interpretation.\n\\begin{enumerate}\n\\item \\textbf{Reduced Form}: We define the bases of $A^X$, $B^Y$, and $C^Z$ to be in reduced\n form, meaning that rather than let the bases be perfect powers, we\n define exponents $X$, $Y$, and $Z$ such that the corresponding bases\n are not perfect powers. For example, $8^5$ can be reduced to\n $2^{15}$ and thus the base and exponent would be 2 and 15,\n respectively, not 8 and 5, respectively. Hence throughout this\n document, we assume all bases are reduced accordingly given that the\n reduced and non-reduced forms of bases raised to their corresponding\n exponents are equal.\n\\item $\\bm{C^Z=f(A,B,X,Y)}$: Without loss of generality, when establishing the impossibility\n of integer solutions, unless stated otherwise, we assume to start\n with integer values for $A$, $B$, $X$, and $Y$ and then determine\n the impossibility of integers for $C$ or $Z$. 
Given the commutative\n property of the equation, we hereafter base the determination of\n integrality of $C$ and $Z$ as a function of definite integrality of\n $A$, $B$, $X$, and $Y$, as doing so for any other combination of\n variables is a trivial generalization.\n\\end{enumerate}\n\nThe sections that follow establish the above objectives.\n\n\n\n\n\n\\Line\n\\subsection{Slopes of the Terms}\n\\label{Section:Slopes_Start}\nAlthough the exponents in \\BealsEq\\, suggest a volumetric relationship between cubes and hypercubes, and given that the exponents cannot all be the same as it would violate Fermat's last theorem \\cites{wiles1995modular, taylor1995ring, shanks2001solved}, the expectation of apparent geometric interpretability is low. However, every set of values that satisfies the conjecture corresponds to a point on a Cartesian grid and subtends a line segment with the origin, which in turn means properties of the slopes of these line segments are directly related to the conjecture. Properties of these slopes form a crucial basis for the subsequent main proofs.\n\n\n\\begin{figure}\n\\large{$C^Z\\, vs\\, B^Y\\, vs\\, A^X$}\\\\\n\\includegraphics[width=.5\\textwidth]{CZBYAX_Scatter.jpg}\n\\caption{Plot of $(A^X,B^Y,C^Z)$ given positive $A$, $B$, $C$, $X$, $Y$, and $Z$, where \\BealsEq, $A^X+B^Y\\leq10^{28}$, $X\\geq4$, $Y,Z\\geq3$.}\n\\label{Fig:CZBYAZScatter}\n\\end{figure}\n\nIn the Cartesian plot of $A^X\\times B^Y\\times C^Z$, each point $(A^X, B^Y, C^Z)$ corresponds to a specific integer solution satisfying the conjecture, found by exhaustive search within a limited domain. See \\cref{Fig:CZBYAZScatter}. There exists a unique line segment between each point and the origin. The line segment subtends a line segment in each of the three planes, and a set of corresponding angles in those planes made with the axes. 
See \\cref{Fig:3DScatter,Fig:ScatterPlotWithAngles}.\n\n\\begin{figure}\n\\includegraphics[width=.33\\textwidth]{3D_Scatter_Plot.jpg}\n\\caption{Line segment connecting the origin and point $(A^X, B^Y, C^Z)$ where \\BealsEq\\, satisfying the conjecture from \\cref{Fig:CZBYAZScatter}.}\n\\label{Fig:3DScatter}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=.66\\textwidth]{3D_Scatter_Plot_with_Angles.jpg}\n\\caption{Angles between the axes and the line segment subtended by the origin and point $(A^X, B^Y, C^Z)$ from \\cref{Fig:CZBYAZScatter}.}\n\\label{Fig:ScatterPlotWithAngles}\n\\end{figure}\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.1_Irrational_Slope_No_Lattice}\nIf the slope $m$ of line $y=mx$ is irrational, then the line does not go through any non-trivial lattice points.\n\\end{theorem}\n\n\\begin{proof}\nSuppose there exists a line $y=mx$, with irrational slope $m$, that goes through a non-trivial lattice point $(X,Y)$. Since a non-trivial lattice point on this line cannot have $X=0$, the slope can be calculated from the two known lattice points through which the line passes, $(0,0)$ and $(X,Y)$. Hence the slope is $\\displaystyle{m=\\frac{Y-0}{X-0}}$. However, $\\displaystyle{m=\\frac{Y}{X}}$ is then a ratio of integers, contradicting the irrationality of $m$. Hence a line through the origin with an irrational slope cannot pass through a non-trivial lattice point.\n\\end{proof}\n\nSince values that satisfy the conjecture are integer, by definition they correspond to a lattice point, and thus the line segment between that lattice point and the origin has a rational slope per the contrapositive of \\cref{Thm:2.1_Irrational_Slope_No_Lattice}. We next establish the properties of term coprimality and thereafter the relationship coprimality has to slope irrationality. 
Thereafter we establish several other preliminary proofs relating to reparameterizations and non-standard binomial expansions before returning to the connection between term coprimality, slope irrationality, and the conjecture proof.\n\\label{Section:Slopes_End}\n\n\n\n\n\n\n\\Line\n\\subsection{Coprimality of the Bases}\n\\label{Section:Comprimality_Start}\nAccording to the conjecture, solutions only exist when gcd$(A,B,C)>1$. Hence testing for impossibility when gcd$(A,B,C)=1$ requires that we establish the relationship between 3-way coprimality and the more stringent pairwise coprimality.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.2_Coprime}\n\\Conjecture, if gcd$(A,B)=1$, then gcd$(A^X,C^Z )=$ gcd$(B^Y,C^Z )=1$.\\\\\n\\{If $A$ and $B$ are coprime, then $C^Z$ is pairwise coprime to $A^X$ and $B^Y$.\\}\n\\end{theorem}\n\n\\begin{proof}\nSuppose $A$ and $B$ are coprime. Then $A^X$ and $B^Y$ are coprime, and we can define these terms based on their respective prime factors, namely\n\\begin{subequations}\n\\begin{gather}\nA^X = \\PList \\label{eqn:2a} \\\\\nB^Y = \\QList \\label{eqn:2b}\n\\end{gather}\n\\end{subequations}\nwhere $p_i$, $q_j$ are prime, and $p_i\\neq q_j$, for all $i,j$. Based on \\cref{eqn:1}, we can express $C^Z$ as\n\\begin{equation}\nC^Z=\\PList + \\QList \\label{eqn:3}\n\\end{equation}\n\nWe now take any prime factor of $A^X$ or $B^Y$ and divide both sides of \\cref{eqn:3} by that prime factor. Without loss of generality, suppose we choose $p_i$. Thus\n\\begin{subequations}\n\\begin{align}\n\\frac{C^Z}{p_i} &= \\frac{\\PList + \\QList}{p_i} \\label{eqn:4a} \\\\\n\\frac{C^Z}{p_i} &= \\frac{\\PList}{p_i} + \\frac{\\QList}{p_i} \\label{eqn:4b}\n\\end{align}\n\\end{subequations}\nThe term $\\displaystyle\\frac{\\PList}{p_i}$ is an integer since by definition $p_i |\\PList$. However, the term $\\displaystyle\\frac{\\QList}{p_i}$ cannot be simplified since $p_i \\nmid \\QList$ and thus $p_i \\nmid (\\PList+\\QList)$. 
Hence by extension $p_i \\nmid C^Z$ and $A^X$ must thus be coprime to $C^Z$. Applying the same logic with $q_j$, $B^Y$ must also be coprime to $C^Z$. Therefore if $A$ and $B$ are coprime, then $C^Z$ must be pairwise coprime to both $A^X$ and $B^Y$.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.3_Coprime}\n\\Conjecture, if gcd$(A,C)=1$ or gcd$(B,C)=1$ then gcd$(A^X,C^Z)=$ gcd$(B^Y,C^Z)=1$.\\\\\n\\{If $A$ or $B$ is coprime to $C$, then $C^Z$ is pairwise coprime to $A^X$ and $B^Y$.\\}\n\\end{theorem}\n\n\\begin{proof}\nWithout loss of generality, suppose $A$ and $C$ are coprime. Thus $A^X$ and $C^Z$ are coprime. We can define $C^Z$ based on its prime factors, namely\n\\begin{equation}\nC^Z=\\RList \\label{eqn:5}\n\\end{equation}\nwhere $r_k$ are primes. Based on \\cref{eqn:2a,eqn:5}, we can define $B^Y$ based on the difference between $C^Z$ and $A^X$, namely\n\\begin{equation}\nB^Y=\\RList - \\PList \\label{eqn:6}\n\\end{equation}\nWe now take any prime factor of $C^Z$ and divide both sides of \\cref{eqn:6} by that prime factor. Without loss of generality, suppose we choose $r_k$. Thus\n\\begin{subequations}\n\\begin{align}\n\\frac{B^Y}{r_k} &= \\frac{\\RList - \\PList}{r_k} \\label{eqn:7a} \\\\\n\\frac{B^Y}{r_k} &= \\frac{\\RList}{r_k} - \\frac{\\PList}{r_k} \\label{eqn:7b}\n\\end{align}\n\\end{subequations}\nThe term $\\displaystyle\\frac{\\RList}{r_k}$ is an integer since by definition $r_k |\\RList$. However, the term $\\displaystyle\\frac{\\PList}{r_k}$ cannot be simplified since $r_k \\nmid \\PList$ and thus $r_k \\nmid (\\RList - \\PList)$. Hence by extension $r_k \\nmid B^Y$ and $C^Z$ must thus be coprime to $B^Y$. Applying the same logic with $p_i$, $C^Z$ must also be coprime to $A^X$. 
Therefore if either $A$ or $B$ is coprime to $C$, then $C^Z$ must be pairwise coprime to both $A^X$ and $B^Y$.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.4_Coprime}\n\\Conjecture, if gcd$(A,B,C) = 1$ then gcd $(A^X,B^Y)=$ gcd $(A^X,C^Z)=$ gcd $(B^Y,C^Z)=1$.\\\\\n\\{If $A$, $B$ and $C$ are 3-way coprime, then they are all pairwise coprime.\\}\n\\end{theorem}\n\n\\begin{proof} We consider two scenarios when gcd$(A,B,C)=1$, namely: gcd$(A,B)>1$ and gcd$(A,C)>1$ (the latter of which generalizes to gcd$(B,C)>1$).\n\n\\bigskip\n\n\\textbf{Scenario 1 of 2:} Suppose gcd$(A,B,C)=1$ while gcd$(A,B)>1$. Therefore $A$ and $B$ have a common factor. Thus we can express $A^X$ and $B^Y$ relative to their common factor, namely\n\\begin{subequations}\n\\begin{gather}\nA^X = k\\cdot\\PList \\label{eqn:8a} \\\\\nB^Y = k\\cdot\\QList \\label{eqn:8b}\n\\end{gather}\n\\end{subequations}\nwhere integer $k$ is the common factor, and $p_i$, $q_j$ are prime, and $p_i\\neq q_j$, for all $i,j$. Based on \\cref{eqn:1}, we can express $C^Z$ as\n\\begin{subequations}\n\\begin{gather}\nC^Z=k\\cdot\\PList + k\\cdot\\QList \\label{eqn:9a} \\\\\nC^Z=k(\\PList + \\QList) \\label{eqn:9b}\n\\end{gather}\n\\end{subequations}\nPer \\cref{eqn:9b}, $k$ is a factor of $C^Z$, just as it is a factor of $A^X$ and $B^Y$, thus gcd$(A,B,C)\\neq1$, hence a contradiction. Thus when gcd$(A,B,C)=1$ we know gcd$(A,B)\\ngtr1$ and thus $k$ must be 1.\n\n\\bigskip\n\n\\textbf{Scenario 2 of 2:} Suppose gcd$(A,B,C)=1$ while gcd$(A,C)>1$. Therefore $A$ and $C$ have a common factor. Thus we can express $A^X$ and $C^Z$ relative to their common factor, namely\n\\begin{subequations}\n\\begin{gather}\nA^X = k\\cdot\\PList \\label{eqn:10a} \\\\\nC^Z = k\\cdot\\RList \\label{eqn:10b}\n\\end{gather}\n\\end{subequations}\nwhere integer $k$ is the common factor, and $p_i$, $r_k$ are prime, and $p_i\\neq r_k$, for all $i,k$. 
Based on \\cref{eqn:1}, we can express $B^Y$ as\n\\begin{subequations}\n\\begin{gather}\nB^Y=k\\cdot\\RList - k\\cdot\\PList \\label{eqn:11a} \\\\\nB^Y=k(\\RList - \\PList) \\label{eqn:11b}\n\\end{gather}\n\\end{subequations}\nPer \\cref{eqn:11b}, $k$ is a factor of $B^Y$, just as it is a factor of $A^X$ and $C^Z$, thus gcd$(A,B,C)\\neq1$, hence a contradiction. Thus when gcd$(A,B,C)=1$ we know gcd$(A,C)\\ngtr1$ and thus $k$ must be 1. By extension and generalization, when gcd$(A,B,C)=1$ we know gcd$(B,C)\\ngtr1$.\n\\end{proof}\n\nBased on \\cref{Thm:2.2_Coprime,Thm:2.3_Coprime,Thm:2.4_Coprime}, if any pair of the terms $A$, $B$, and $C$ has no common factor, then all pairs of terms are coprime. Hence either all three terms share a common factor or they are all pairwise coprime. We thus formally conclude that if gcd$(A, B, C)=1$, then gcd$(A^X,B^Y)=$ gcd$(A^X,C^Z)=$ gcd$(B^Y,C^Z)=1$, and if gcd$(A,B)=1$ or gcd$(A,C)=1$ or gcd$(B,C)=1$, then gcd$(A,B,C)=1$.\n\\label{Section:Comprimality_End}\n\n\n\\bigskip\n\n\n\\Line\n\\subsection{Restrictions of the Exponents}\n\\label{Section:Exponents_Start}\nTrivial restrictions of the exponents are defined by the conjecture, namely that they are integers greater than 2. However, other restrictions apply; for example, per Fermat's last theorem, the exponents cannot all be equal while greater than 2. More subtle restrictions also apply which will be required for the main proofs.\n\n\\begin{theorem}\n\\label{Thm:2.5_X_cannot_be_mult_of_Z}\n\\Conjecture, exponents $X$ and $Y$ cannot simultaneously be integer multiples of exponent $Z$.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $X$ and $Y$ are each integer multiples of $Z$. Thus $X=jZ$ and $Y=kZ$ for positive integers $j$ and $k$. Therefore we can restate \\cref{eqn:1} as\n\\begin{equation}\n(A^j)^Z + (B^k)^Z= C^Z \\label{eqn:12}\n\\end{equation}\nPer \\cref{eqn:12}, we have 3 terms which are each raised to exponent $Z\\ge3$. 
According to Fermat's last theorem \\cites{wiles1995modular, taylor1995ring, shanks2001solved}, no integer solution exists when the terms share a common exponent greater than 2. Therefore $X$ and $Y$ cannot simultaneously be integer multiples of $Z$.\n\\end{proof}\n\n\nA trivially equivalent variation of \\cref{Thm:2.5_X_cannot_be_mult_of_Z} is that $Z$ cannot simultaneously be a unit fraction of $X$ and $Y$. Given \\cref{Thm:2.5_X_cannot_be_mult_of_Z}, there are only two possibilities:\n\\begin{enumerate}\n\\item Neither $X$ nor $Y$ is a multiple of $Z$.\n\\item Only one of either $X$ or $Y$ is a multiple of $Z$.\n\\end{enumerate}\nAs such, since at least one of the two exponents $X$ and $Y$ is not a multiple of $Z$, we can arbitrarily choose $A^X$ to be the term whose exponent is not an integer multiple of exponent $Z$. Hence the following definition is used hereafter:\n\n\n\\begin{definition}\n\\label{Dfn:2.1_X_cannot_be_mult_of_Z}\n$X$ is not an integer multiple of $Z$.\n\\end{definition}\n\nSince, per \\cref{Thm:2.5_X_cannot_be_mult_of_Z}, at most one of $X$ and $Y$ can be a multiple of $Z$, and given that one can arbitrarily swap $A^X$ and $B^Y$, the arbitrary fixing hereafter of $A^X$ as the term whose exponent is not a multiple of $Z$ does not interfere with any of the characteristics or implications of the solution. Hence we hereafter define $A^X$ and $B^Y$ such that \\cref{Dfn:2.1_X_cannot_be_mult_of_Z} is maintained.\n\n\n\\label{Section:Exponents_End}\n\n\\bigskip\n\n\n\n\\Line\n\\subsection{Reparameterization of the Terms}\n\\label{Section:Reparameterize_Start}\nIn exploring ways to leverage the binomial expansion and other equivalences, some researchers \\cites{beauchamp2018,beauchamp2019,edwards2005platonic} explored reparameterizing one or more of the terms of \\BealsEq\\, so as to compare different sets of expansions. 
We broaden this idea to establish various irrationality conditions as related to coprimality of the terms, establish properties of the non-unique characteristics of key terms in the expansions, and showcase an exhaustive view to be leveraged when validating the conjecture.\n\nThe binomial expansion applied to the difference of perfect powers with different exponents is critical to mathematical research in general and to several proofs specifically later in this document. One feature of the binomial expansion in our application is the circumstance under which the upper limit of the sum is indeterminate \\cites{beauchamp2018,beauchamp2019} to be introduced in the following two theorems.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.6_Initial_Expansion_of_Differences}\nIf $p,q\\neq 0$ and $v,w$ are real, then $\\displaystyle{p^v-q^w=(p+q)(p^{v-1}-q^{w-1}) - pq(p^{v-2} - q^{w-2})}$.\\\\\n\\{Expanding the difference of two powers\\}\n\\end{theorem}\n\n\\begin{proof}\nGiven non-zero $p$ and $q$, and real $v$ and $w$, suppose we can expand the difference $p^v-q^w$ as\n\\begin{equation}\np^v-q^w=(p+q)(p^{v-1}-q^{w-1}) - pq(p^{v-2} - q^{w-2}) \\label{eqn:13}\n\\end{equation}\nDistributing $(p+q)$ on the right side of \\cref{eqn:13} into $(p^{v-1}-q^{w-1})$ gives us $\\left[p^v-pq^{w-1} + p^{v-1}q-q^w\\right]$ and distributing $-pq$ into $(p^{v-2} - q^{w-2})$ gives us $\\left[-p^{v-1}q+pq^{w-1}\\right]$. Thus simplifying \\cref{eqn:13} gives us\n\\begin{subequations}\n\\begin{align}\np^v-q^w &= \\left[p^v-pq^{w-1} + p^{v-1}q-q^w\\right] +\\left[-p^{v-1}q+pq^{w-1}\\right] \\label{eqn:14a} \\\\\np^v-q^w &= p^v+\\left[pq^{w-1}-pq^{w-1}\\right] + \\left[p^{v-1}q -p^{v-1}q\\right] -q^w\\label{eqn:14b} \\\\\np^v-q^w &= p^v -q^w\\label{eqn:14c}\n\\end{align}\n\\end{subequations}\n\\end{proof}\nThus the difference of powers can indeed be expanded per the above functional form accordingly. 
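As a quick numerical check of \\cref{eqn:13}, take $p=2$, $q=3$, $v=5$, $w=4$: the left side is $2^5-3^4=32-81=-49$, while the right side is $(2+3)(2^{4}-3^{3})-(2)(3)(2^{3}-3^{2})=5(16-27)-6(8-9)=-55+6=-49$. 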
We also observe \\cref{eqn:13} can be expressed in more compact notation, namely\n\\begin{equation}\np^v-q^w=\\sum \\limits_{i=0}^1 (p+q)^{1-i}(-pq)^i(p^{v-1-i}-q^{w-1-i}) \\label{eqn:15}\n\\end{equation}\nWe further observe in \\cref{eqn:13} of \\cref{Thm:2.6_Initial_Expansion_of_Differences} that this expansion of the difference of two powers yields two other terms which are themselves differences of powers, namely $(p^{v-1}-q^{w-1})$ and $(p^{v-2} - q^{w-2})$. Each of these differences could likewise be expanded with the same functional form of \\cref{Thm:2.6_Initial_Expansion_of_Differences}. Recursively expanding the resulting terms of differences of powers leads to a more general form of \\cref{eqn:15}.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.7_Indeterminate_Limit}\nIf $p,q\\neq 0$ and integer $n\\geq0$, then $\\displaystyle{p^v-q^w =\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i})}$.\\\\\n\\{General form of the expansion of the difference of two powers\\}\n\\end{theorem}\n\n\\begin{proof}\nSuppose $p,q\\neq 0$ and integer $n\\geq0$, and suppose\n\\begin{equation}\np^v-q^w =\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i})\n\\label{eqn:16}\n\\end{equation}\nConsider $n=0$. The right side of \\cref{eqn:16} reduces to $p^v-q^w$, thus \\cref{eqn:16} holds when $n=0$. Consider $n=1$. The right side of \\cref{eqn:16} becomes\n\\begin{subequations}\n\\begin{align}\n\\begin{split}\np^v-q^w &=\\binom{1}{0} (p+q)^{1-0}(-pq)^0(p^{v-1-0}-q^{w-1-0})\n +\\binom{1}{1} (p+q)^{1-1}(-pq)^1(p^{v-1-1}-q^{w-1-1})\n \\label{eqn:17a}\n\\end{split}\\\\\n\\begin{split}\np^v-q^w &= \\biggl[ (p+q)(p^{v-1}-q^{w-1}) \\biggr] + \\biggl[(-pq)(p^{v-2}-q^{w-2})\\biggr] \\\\\n &= p^v-q^w\n \\label{eqn:17b}\n\\end{split}\n\\end{align}\n\\end{subequations}\nThe right side of \\cref{eqn:17b} also reduces to $p^v-q^w$. 
Hence \\cref{eqn:16} holds for $n=0$ and $n=1$.\n\n\nIn generalizing, enumerating the terms of \\cref{eqn:16} gives us\n\\begin{equation}\n\\begin{split}\np^v-q^w &=\n \\binom{n}{0} (p+q)^n (p^{v-n} -q^{w-n }) \\\\\n&+\\binom{n}{1} (p+q)^{n-1}(-pq) (p^{v-n-1} -q^{w-n-1}) \\\\\n&+\\binom{n}{2} (p+q)^{n-2}(-pq)^2(p^{v-n-2} -q^{w-n-2}) \\\\\n&+ \\ \\cdots \\\\\n&+\\binom{n}{n-2}(p+q)^2(-pq)^{n-2}(p^{v-2n+2}-q^{w-2n+2}) \\\\\n&+\\binom{n}{n-1}(p+q) (-pq)^{n-1}(p^{v-2n+1}-q^{w-2n+1}) \\\\\n&+\\binom{n}{n} (-pq)^n (p^{v-2n} - q^{w-2n}) \\\\\n\\end{split}\n\\label{eqn:18}\n\\end{equation}\nExpanding each of the $n+1$ differences of powers $(p^{v-n-i}-q^{w-n-i})$ of \\cref{eqn:18} per \\cref{Thm:2.6_Initial_Expansion_of_Differences} gives us\n\n\\begin{equation}\n\\begin{split}\np^v-q^w &=\n \\binom{n}{0} (p+q)^n \\left[(p+q)(p^{v-n-1}-q^{w-n-1}) - pq(p^{v-n-2} - q^{w-n-2}) \\right] \\\\\n&+\\binom{n}{1} (p+q)^{n-1}(-pq) [(p+q)(p^{v-n-2}-q^{w-n-2}) - pq(p^{v-n-3} - q^{w-n-3}) ] \\\\\n&+\\binom{n}{2} (p+q)^{n-2}(-pq)^2[(p+q)(p^{v-n-3}-q^{w-n-3}) - pq(p^{v-n-4} - q^{w-n-4}) ] \\\\\n&+ \\ \\cdots \\\\\n&+\\binom{n}{n-2}(p+q)^2(-pq)^{n-2}[(p+q)(p^{v-2n+1}-q^{w-2n+1}) - pq(p^{v-2n} - q^{w-2n}) ] \\\\\n&+\\binom{n}{n-1}(p+q) (-pq)^{n-1}[(p+q)(p^{v-2n}-q^{w-2n}) - pq(p^{v-2n-1} - q^{w-2n-1}) ] \\\\\n&+\\binom{n}{n} (-pq)^n [(p+q)(p^{v-2n-1}-q^{w-2n-1}) - pq(p^{v-2n-2} - q^{w-2n-2}) ] \\\\\n\\end{split}\n\\label{eqn:19}\n\\end{equation}\nDistributing each of the $\\displaystyle{\\binom{n}{i}(p+q)^{n-i}}(-pq)^i$ terms of \\cref{eqn:19} into the corresponding bracketed terms then gives us\n\\begin{equation}\n\\begin{split}\np^v-q^w &=\n \\binom{n}{0} (p+q)^{n+1} (p^{v-n-1}-q^{w-n-1}) + \\binom{n}{0}(p+q)^n(-pq)(p^{v-n-2} - q^{w-n-2}) \\\\\n&+\\binom{n}{1} (p+q)^n(-pq)(p^{v-n-2}-q^{w-n-2}) + \\binom{n}{1}(p+q)^{n-1}(-pq)^2(p^{v-n-3} - q^{w-n-3}) \\\\\n&+\\binom{n}{2} (p+q)^{n-1}(-pq)^2(p^{v-n-3}-q^{w-n-3}) + \\binom{n}{2}(p+q)^{n-2} (-pq)^3(p^{v-n-4} - q^{w-n-4}) \\\\\n&+ \\ \\cdots 
\\\n&+\\binom{n}{\\!n-2\\!}(p+q)^3(-pq)^{n-2}(p^{v-2n+1}\\!-\\!q^{w-2n+1}) + \\binom{n}{\\!n-2\\!}(p+q)^2 (-pq)^{n-1}(p^{v-2n} \\!-\\! q^{w-2n}) \\\\\n&+\\binom{n}{\\!n-1\\!}(p+q)^2(-pq)^{n-1}(p^{v-2n}\\!-\\!q^{w-2n}) + \\binom{n}{\\!n-1\\!}(p+q) (-pq)^n(p^{v-2n-1} \\!-\\! q^{w-2n-1}) \\\\\n&+\\binom{n}{n} (p+q) (-pq)^n (p^{v-2n-1}-q^{w-2n-1}) + \\binom{n}{n} (-pq)^{n+1}(p^{v-2n-2} - q^{w-2n-2}) \\\\\n\\end{split}\n\\label{eqn:20}\n\\end{equation}\nwhich can be simplified to\n\\begin{equation}\n\\begin{split}\np^v-q^w &=\n \\binom{n}{0} (p+q)^{n+1} (p^{v-n-1}-q^{w-n-1}) \\\\\n&+\\left[\\binom{n}{1}+\\binom{n}{0}\\right](p+q)^n(-pq)(p^{v-n-2}-q^{w-n-2}) \\\\\n&+\\left[\\binom{n}{2}+\\binom{n}{1}\\right](p+q)^{n-1}(-pq)^2(p^{v-n-3}-q^{w-n-3})\\\\\n&+\\left[\\binom{n}{3}+\\binom{n}{2}\\right](p+q)^{n-2}(-pq)^3(p^{v-n-4}-q^{w-n-4})\\\\\n&+ \\ \\cdots \\\\\n&+\\left[\\binom{n}{n-2}+\\binom{n}{n-3}\\right](p+q)^3(-pq)^{n-2}(p^{v-2n+1}-q^{w-2n+1}) \\\\\n&+\\left[\\binom{n}{n-1}+\\binom{n}{n-2}\\right](p+q)^2(-pq)^{n-1}(p^{v-2n}-q^{w-2n}) \\\\\n&+\\left[\\binom{n}{n}+\\binom{n}{n-1}\\right] (p+q)(-pq)^n (p^{v-2n-1}-q^{w-2n-1})\\\\\n&+ \\binom{n}{n} (-pq)^{n+1}(p^{v-2n-2} - q^{w-2n-2}) \\\\\n\\end{split}\n\\label{eqn:21}\n\\end{equation}\nPascal's identity states that $\\displaystyle{\\binom{m+1}{k} = \\binom{m}{k}+\\binom{m}{k-1}}$ for integer $k\\geq1$ and integer $m\\geq0$ \\cite{macmillan2011proofs,fjelstad1991extending}. 
Given this identity, each sum of the pairs of binomial coefficients in the brackets of \\cref{eqn:21} simplifies to:\n\\begin{subequations}\n\\begin{gather}\n\\begin{split}\np^v-q^w &=\n \\binom{n+1}{0} (p+q)^{n+1} (p^{v-n-1} -q^{w-n-1}) \\\\\n&+\\binom{n+1}{1} (p+q)^n (-pq) (p^{v-n-2} -q^{w-n-2}) \\\\\n&+\\binom{n+1}{2} (p+q)^{n-1}(-pq)^2(p^{v-n-3} -q^{w-n-3}) \\\\\n&+\\binom{n+1}{3} (p+q)^{n-2}(-pq)^3(p^{v-n-4} -q^{w-n-4}) \\\\\n&+ \\ \\cdots \\\\\n&+\\binom{n+1}{n-2} (p+q)^3(-pq)^{n-2}(p^{v-2n+1}-q^{w-2n+1}) \\\\\n&+\\binom{n+1}{n-1} (p+q)^2(-pq)^{n-1}(p^{v-2n} -q^{w-2n}) \\\\\n&+\\binom{n+1}{n} (p+q) (-pq)^n (p^{v-2n-1}-q^{w-2n-1}) \\\\\n&+ \\binom{n+1}{n+1} (-pq)^{n+1}(p^{v-2n-2}-q^{w-2n-2}) \\\\\n\\end{split} \\label{eqn:22a} \\\\\np^v-q^w =\\sum \\limits_{i=0}^{n+1} \\binom{n+1}{i} (p+q)^{n-i+1}(-pq)^i(p^{v-n-i-1}-q^{w-n-i-1})\n\\label{eqn:22b}\n\\end{gather}\n\\end{subequations}\nwhere for the first and last terms we use $\\binom{n+1}{0}=\\binom{n}{0}$ and $\\binom{n+1}{n+1}=\\binom{n}{n}$. The right sides of \\cref{eqn:16} and \\cref{eqn:22b} both equal $\\displaystyle{p^v-q^w}$, thus they are equal to each other. Hence\n\\begin{equation}\n\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i}) = \\sum \\limits_{i=0}^{n+1} \\binom{n+1}{i} (p+q)^{n-i+1}(-pq)^i(p^{v-n-i-1}-q^{w-n-i-1})\n\\label{eqn:23}\n\\end{equation}\nTherefore, by induction: $\\displaystyle{p^v-q^w =\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i})}$ holds for $n=0$ and $n=1$, and per \\cref{eqn:22b,eqn:23} it holds for $n+1$ whenever it holds for $n$. Hence this relation holds for all integers $n\\geq0$.\n\\end{proof}\n\nWe observe an important property, per \\cref{eqn:16,eqn:22b,eqn:23}, that $n$ is indeterminate since $\\displaystyle{p^v-q^w =\\sum \\limits_{i=0}^n \\binom{n}{i} (p+q)^{n-i}(-pq)^i(p^{v-n-i}-q^{w-n-i})}$ holds for every non-negative integer value of $n$. 
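This indeterminacy of $n$ is easy to check numerically. A minimal sketch in Python (the values of p, q, v, w are illustrative only, chosen so that every exponent stays non-negative):

```python
from math import comb

def expansion(p, q, v, w, n):
    """The n-indexed binomial expansion of p^v - q^w from eqn. (16)."""
    return sum(comb(n, i) * (p + q)**(n - i) * (-p*q)**i
               * (p**(v - n - i) - q**(w - n - i))
               for i in range(n + 1))

# Every admissible n yields the same value, p^v - q^w.
p, q, v, w = 3, 2, 8, 7
print({expansion(p, q, v, w, n) for n in range(4)})  # → {6433}, i.e. 3^8 - 2^7
```

Each choice of n merely regroups the same quantity, which is the content of the indeterminacy property.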
Hence any non-negative integer value of $n$ can be selected and the resulting expansion still applies, leading to different expansions that sum to identical outcomes.\n\n\n\\bigskip\n\n\nOther preliminary properties required for the proof of the conjecture include the fact that each of the perfect powers $A^X$, $B^Y$ and $C^Z$ can be expressed as a linear combination of an additive and multiplicative form of the bases of the other two terms. This property will reveal a variety of equivalences across various domains. We need to first establish a few basic principles.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.8_Functional_Form}\n\\Conjecture, if there exists an integer solution, then there exist non-zero positive rationals $\\alpha$ and $\\beta$ such that $A^X=[(C+B)\\alpha-CB\\beta]^X$.\n\\end{theorem}\n\n\\begin{proof}\nGiven \\cref{eqn:1}, by definition $A^X=C^Z-B^Y$. If there exist integer solutions that satisfy the conjecture, then $A$ and $\\sqrt[X]{C^Z-B^Y}$ must be integers. Suppose there exist non-zero rationals $\\alpha$ and $\\beta$ such that $A=(C+B)\\alpha-CB\\beta$; then these two expressions for $A$ are identical, namely\n\\begin{equation}\n\\sqrt[X]{C^Z-B^Y} = (C+B)\\alpha-CB\\beta \\label{eqn:24}\n\\end{equation}\nSolving for $\\alpha$ when given any rational $\\beta>0$ gives us $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$, which is positive since the numerator and denominator are each positive. Further, if $\\beta$ is rational, then $\\alpha$ must also be rational since $\\displaystyle{\\sqrt[X]{C^Z-B^Y}}$, $C$, and $B$ are integers.\n\nSolving instead for $\\beta$ when given any sufficiently large positive rational $\\alpha$ gives us $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$, which is positive since the numerator and denominator are both negative. 
Further, if $\\alpha$ is rational, then $\\beta$ must also be rational since $\\displaystyle{\\sqrt[X]{C^Z-B^Y}}$, $C$, and $B$ are integers.\n\nHence there exist non-zero positive rationals $\\alpha$ and $\\beta$ such that $A^X=C^Z-B^Y$ and $A^X=[(C+B)\\alpha-CB\\beta]^X$ when the terms of the conjecture are satisfied.\n\\end{proof}\n\n\n\\bigskip\n\n\nWithout loss of generality, \\cref{Thm:2.8_Functional_Form} also yields an analogous functional form for $B^Y$: there exist other non-zero positive rationals $\\alpha$ and $\\beta$ such that $B^Y=[(C+A)\\alpha-CA\\beta]^Y$.\n\nSuppose we arbitrarily let $\\displaystyle{\\alpha=\\sqrt[X]{|C^{Z-X}-B^{Y-X}|}}$; then per \\cref{Thm:2.8_Functional_Form}, we can solve for $\\beta$ from $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$, which gives us $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$. Likewise, if we arbitrarily let $\\displaystyle{\\beta=\\sqrt[X]{|C^{Z-2X}-B^{Y-2X}|}}$, then per \\cref{Thm:2.8_Functional_Form}, we can solve for $\\alpha$ from $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$, which gives us $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$. In either case, this yields a set $\\{\\alpha,\\beta\\}$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$, which shows that these definitions of $\\alpha$ and $\\beta$ maintain $A^X$ as a perfect power of an integer based on $B$ and $C$, namely $A^X=[(C+B)\\alpha-CB\\beta]^X$, while satisfying $A^X=C^Z-B^Y$. 
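This derivation can be exercised numerically with the known non-coprime identity 3^3 + 6^3 = 3^5 (so A=3, B=6, C=3, X=Y=3, Z=5); the choice of beta below is illustrative only:

```python
from fractions import Fraction

# Known integer solution with a common factor: 3^3 + 6^3 = 3^5.
A, B, C, X, Y, Z = 3, 6, 3, 3, 3, 5
assert C**Z - B**Y == A**X

# Pick any positive rational beta, then derive alpha so that
# A = (C+B)*alpha - C*B*beta, as in the construction above.
beta = Fraction(1)
alpha = (A + C*B*beta) / (C + B)
print(alpha, (C + B)*alpha - C*B*beta)  # → 7/3 3
```

Any other rational beta > 0 produces a matching rational alpha the same way.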
Further, based on the indeterminacy of the upper bound in the binomial expansion of the difference of two perfect powers from \\cref{Thm:2.7_Indeterminate_Limit}, we can also find values of $\\alpha$ and $\\beta$ that are explicitly functions of $C$ and $B$.\n\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.9_Real_Alpha_Beta}\n\\Conjecture, values $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$, $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$, and real $M$ satisfy $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$.\n\\end{theorem}\n\n\\begin{proof} Given \\cref{eqn:1}, we know that integer $A^X=C^Z-B^Y$ can be expanded with the general binomial expansion from \\cref{Thm:2.7_Indeterminate_Limit} for the difference of perfect powers, namely\n\\begin{equation}\nA^X=C^Z-B^Y=\\sum \\limits_{i=0}^n \\binom{n}{i} (C+B)^{n-i}(-CB)^i(C^{Z-n-i}-B^{Y-n-i})\n\\label{eqn:25}\n\\end{equation}\nPer \\cref{Thm:2.7_Indeterminate_Limit}, since the upper limit $n$ in \\cref{eqn:25} is indeterminate, we can replace $n$ with any non-negative integer, such as $X$, while preserving the equality, hence\n\\begin{equation}\nA^X=C^Z-B^Y=\\underbrace{\\sum \\limits_{i=0}^X \\binom{X}{i} (C+B)^{X-i}(-CB)^i}_{Common\\,to\\,Equation\\,(\\ref{eqn:27b})\\,below}(C^{Z-X-i}-B^{Y-X-i})\n\\label{eqn:26}\n\\end{equation}\nFurthermore, from \\cref{Thm:2.8_Functional_Form} we know that $A=(C+B)\\alpha-CB\\beta$ for non-zero real $\\alpha$ and $\\beta$. 
Raising $A=(C+B)\\alpha-CB\\beta$ to $X$ and then expanding gives us\n\\begin{subequations}\n\\begin{align}\nA^X=\\bigl((C+B)\\alpha-CB\\beta\\bigr)^X&=\\sum \\limits_{i=0}^X \\binom{X}{i}\n \\bigl((C+B)\\alpha\\bigr)^{X-i}\n (-CB\\beta)^i\n\\label{eqn:27a} \\\\\nA^X=\\bigl((C+B)\\alpha-CB\\beta\\bigr)^X&=\\underbrace{ \\sum \\limits_{i=0}^X \\binom{X}{i}\n (C+B)^{X-i}(-CB)^i}_{Common\\,to\\,Equation\\,(\\ref{eqn:26})\\,above}\n \\alpha^{X-i}\\beta^i\n\\label{eqn:27b}\n\\end{align}\n\\end{subequations}\n\\cref{eqn:26,eqn:27b} have an identical number of terms, share the identical binomial coefficient, and share $(C+B)^{X-i}(-CB)^i$ for each value of $i$. See the expansion in \\cref{tab:TableSideBySideExpansion}.\n\n\\begin{table}[h!]\n\\begin{center}\n\\caption{Term-by-term comparison of the binomial expansions of \\cref{eqn:26,eqn:27b}.}\n\\label{tab:TableSideBySideExpansion}\n\\begin{tabular}{c|c c c}\n\\textbf{Term} & \\textbf{Terms Common to} & \\multicolumn{2}{c}{\\textbf{\\quad Terms Unique to Equations}} \\\\\n\\textbf{\\bm{$i$}} & \\textbf{\\cref{eqn:26,eqn:27b}} & \\textbf{(\\ref{eqn:26})} & \\textbf{(\\ref{eqn:27b})} \\\\\n\\hline\n0 & $\\displaystyle{\\binom{X}{0}(C+B)^X}$ & $C^{Z-X}-B^{Y-X}$ & $\\alpha^{X}$\\\\\n1 & $\\displaystyle{\\binom{X}{1}(C+B)^{X-1}(-CB)}$ & $C^{Z-X-1}-B^{Y-X-1}$ & $\\alpha^{X-1}\\beta^1$\\\\\n2 & $\\displaystyle{\\binom{X}{2}(C+B)^{X-2}(-CB)^2}$ & $C^{Z-X-2}-B^{Y-X-2}$ & $\\alpha^{X-2}\\beta^2$\\\\\n\\vdots & \\vdots & \\vdots & \\vdots\\\\\n$X-1$ & $\\displaystyle{\\binom{X}{X-1}(C+B)(-CB)^{X-1}}$ & $C^{Z-2X+1}-B^{Y-2X+1}$ & $\\alpha^1\\beta^{X-1}$\\\\\n$X$ & $\\displaystyle{\\binom{X}{X}(-CB)^X}$ & $C^{Z-2X}-B^{Y-2X}$ & $\\beta^X$\\\\\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nPer \\cref{eqn:26,eqn:27b,Thm:2.8_Functional_Form}, $A^X$ equals both $C^Z-B^Y$ and $[(C+B)\\alpha-CB\\beta]^X$, thus the finite sums of \\cref{eqn:26,eqn:27b} are equal. 
Each of the terms in the corresponding expansions has common components and unique components. Thus there exists a term-wise map between the unique components of the two sets of expansions.\n\nWhen $i=0$, we observe in \\cref{tab:TableSideBySideExpansion} that $\\alpha^{X}$ maps to $C^{Z-X}-B^{Y-X}$. Hence if $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ or more generically $\\displaystyle{\\alpha=\\sqrt[X]{|C^{Z-X}-B^{Y-X}|}}$, then per \\cref{Thm:2.8_Functional_Form}, the corresponding $\\beta$ is $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$.\n\nWhen $i=X$, we observe in \\cref{tab:TableSideBySideExpansion} that $\\beta^X$ maps to $C^{Z-2X}-B^{Y-2X}$. Hence if $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ or more generically $\\displaystyle{\\beta=\\sqrt[X]{|C^{Z-2X}-B^{Y-2X}|}}$, then per \\cref{Thm:2.8_Functional_Form}, the corresponding $\\alpha$ is $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$.\n\nUsing the map from $\\alpha^{X}$ to $C^{Z-X}-B^{Y-X}$ based on $i=0$ and the map from $\\beta^X$ to $C^{Z-2X}-B^{Y-2X}$ based on $i=X$, we can align the other terms in the expansion of \\cref{eqn:26,eqn:27b}. When $i=1$, we have the terms $C^{Z-X-1}-B^{Y-X-1}$ and $\\alpha^{X-1}\\beta$ (see \\cref{tab:TableSideBySideExpansion}). Substituting $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ we have\n\\begin{subequations}\n\\begin{gather}\nC^{Z-X-1}-B^{Y-X-1} = \\alpha^{X-1}\\beta \\label{eqn:28a} \\\\\nC^{Z-X-1}-B^{Y-X-1} = (C^{Z-X}-B^{Y-X})^\\frac{X-1}{X}(C^{Z-2X}-B^{Y-2X})^{\\frac{1}{X}} \\label{eqn:28b}\n\\end{gather}\n\\end{subequations}\n\\cref{eqn:28b} holds only in trivial cases. Hence $\\alpha$ and $\\beta$ cannot simultaneously equal $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$. 
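The failure of the i=1 comparison outside trivial cases can also be seen numerically; a sketch with the illustrative values B=6, C=3, X=Y=3, Z=5 (floats, since some exponents are negative):

```python
# Left and right sides of the i=1 term comparison: they disagree for
# generic values, so alpha and beta cannot both take the unscaled root forms.
B, C, X, Y, Z = 6, 3, 3, 3, 5
lhs = C**(Z - X - 1) - B**(Y - X - 1)               # C^(Z-X-1) - B^(Y-X-1)
rhs = ((C**(Z - X) - B**(Y - X)) ** ((X - 1) / X)
       * (C**(Z - 2*X) - B**(Y - 2*X)) ** (1 / X))  # alpha^(X-1) * beta
print(lhs, rhs)  # two clearly different values
```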
As such, there are only three possibilities regarding $\\alpha$ and $\\beta$ that ensure $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$:\n\\begin{enumerate}\n\\item $\\alpha$ is arbitrarily defined and thus $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$.\n\\item $\\beta$ is arbitrarily defined and thus $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$.\n\\item $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ where one or both are scaled.\n\\end{enumerate}\nIn the first two cases, given $\\alpha$ or $\\beta$ with the other derived therefrom per \\cref{Thm:2.8_Functional_Form}, the pair will satisfy $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$. In the third case, since $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ do not simultaneously satisfy $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$, scaling $\\alpha$ and $\\beta$ by $M$ such that $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$ will ensure equality, where\n\\begin{subequations}\n\\begin{gather}\nC^Z-B^Y = [(C+B)M\\alpha-CBM\\beta]^X \\label{eqn:29a}\\\\\n\\sqrt[X]{C^Z-B^Y} = M[(C+B)\\alpha-CB\\beta] \\label{eqn:29b}\\\\\nM= \\frac{\\sqrt[X]{C^Z-B^Y}}{(C+B)\\alpha-CB\\beta} \\label{eqn:29c}\n\\end{gather}\n\\end{subequations}\nSince every set $\\{\\alpha,\\beta\\}$ is unique, per \\cref{eqn:29c} there exists a unique $M$ that satisfies $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$ when $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ hold simultaneously.\n\nGiven that $C^Z-B^Y$ and $[(C+B)M\\alpha-CBM\\beta]^X$ are identical, their binomial expansions are structurally identical, and their sums are identical, indeed $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ together ensure that the equality of \\cref{eqn:26,eqn:27b} holds for $M$ as defined in 
\\cref{eqn:29c}.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\noindent\\textbf{Characteristics of $\\bm{M}$, $\\bm{\\alpha}$ and $\\bm{\\beta}$ from \\cref{Thm:2.9_Real_Alpha_Beta}}\n\nThe important feature is not the scalar $M$ itself but the characteristics of $\\alpha$ and $\\beta$ as defined above. Based on \\BealsEq, we know $A=\\sqrt[X]{C^Z-B^Y}$. The structural similarity between $C^Z-B^Y$ in the formula for $A$ and the expressions $C^{Z-X}-B^{Y-X}$ and $C^{Z-2X}-B^{Y-2X}$ in the formulas for $\\alpha$ and $\\beta$, respectively, is critical. This structural similarity will be explored and exploited later in this document.\n\nWe note $\\alpha$ and $\\beta$ could be defined differently from the above and still maintain the equality $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$, without scalar $M$. However, if one defines $\\alpha$ or $\\beta$ differently from the above, then one of these terms must be $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$ or $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$ in order to satisfy $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$. Since any arbitrary $\\alpha$ corresponds to a unique $\\beta$, there are an infinite number of sets $\\{\\alpha,\\beta\\}$ that satisfy this equation. In some conditions, there are no rational pairs among the infinite number of sets $\\{\\alpha,\\beta\\}$, such as if $\\sqrt[X]{C^Z-B^Y}$ is irrational. But per the above, there is only one set $\\{\\alpha,\\beta\\}$ with $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$, once scalar $M$ is applied accordingly. 
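The scaled construction can likewise be checked numerically; a sketch with illustrative non-coprime values (floats, since Z-2X and Y-2X may be negative):

```python
# Unscaled root forms of alpha and beta, then the scalar M of eqn. (29c).
B, C, X, Y, Z = 6, 3, 3, 3, 5
A = (C**Z - B**Y) ** (1 / X)                # X-th root of C^Z - B^Y

alpha = (C**(Z - X) - B**(Y - X)) ** (1 / X)
beta = (C**(Z - 2*X) - B**(Y - 2*X)) ** (1 / X)
M = A / ((C + B)*alpha - C*B*beta)          # eqn. (29c)

lhs = C**Z - B**Y
rhs = ((C + B)*M*alpha - C*B*M*beta) ** X
print(abs(lhs - rhs) < 1e-9)  # → True: scaling by M restores equality
```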
We note all possible sets of $\\{\\alpha,\\beta\\}$ map to $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ for scalar $M$ since all sets must satisfy $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.10_No_Solution_Alpha_Beta_Irrational}\n\\Conjecture, if there exists set $\\{A,B,C,X,Y,Z\\}$ that does not satisfy these conditions, then $\\alpha$ or $\\beta$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ will be irrational.\n\\end{theorem}\n\\begin{proof}\nFrom \\cref{Thm:2.9_Real_Alpha_Beta}, we know for any and every possible value of $\\alpha$, that $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$. Without loss of generality, suppose $B$, $C$, $X$, $Y$, and $Z$ are integer but there is no integer value of $A$ that satisfies the conjecture, then $\\displaystyle{A=\\sqrt[X]{C^Z-B^Y}}$ must be irrational. Hence $\\beta$ is irrational given the irrational term $\\displaystyle{\\sqrt[X]{C^Z-B^Y}}$ in the numerator of $\\beta$.\n\nWe also know from \\cref{Thm:2.9_Real_Alpha_Beta} given any and every possible value of $\\beta$ that $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$. Here too without loss of generality, if $B$, $C$, $X$, $Y$, and $Z$ are integer but there is no integer value of $A$ that satisfies the conjecture, then $\\displaystyle{A=\\sqrt[X]{C^Z-B^Y}}$ must be irrational. 
Hence $\\alpha$ is irrational given the irrational term $\\displaystyle{\\sqrt[X]{C^Z-B^Y}}$ in the numerator of $\\alpha$.\n\nHence when set $\\{A,B,C,X,Y,Z\\}$ does not satisfy the conjecture, then the corresponding $\\alpha$ or $\\beta$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ is irrational.\n\\end{proof}\n\n\nWe note that the exclusion of scalar $M$ from $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$ in \\cref{Thm:2.10_No_Solution_Alpha_Beta_Irrational} (letting $M=1$) does not change the outcome in that if $C^Z-B^Y$ is not a perfect power, then scaled or unscaled $\\alpha$ and $\\beta$ cannot change the irrationality of the root of $C^Z-B^Y$. Since $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$, $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$, and scalar $M$ always together satisfy $C^Z-B^Y=[(C+B)M\\alpha-CBM\\beta]^X$, then we can study the properties of $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ as related to key characteristics of the conjecture. To do so, we need to establish the implication of coprimality and irrationality as it relates to $\\alpha$ and $\\beta$.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.11_Coprime_Alpha_Beta_Irrational}\n\\Conjecture, if gcd$(A,B,C)=1$, then both $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are irrational.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ is rational, $A,B,C,X,Y,$ and $Z$ are integers that satisfy the conjecture, and gcd$(A,B,C)=1$. 
Then we can express $\\alpha$ as\n\\begin{subequations}\n\\begin{gather}\n\\alpha^X=C^{Z-X}-B^{Y-X}\n\\label{eqn:30a} \\\\\n\\alpha^X=\\frac{C^Z}{C^X}-\\frac{B^Y}{B^X}\n\\label{eqn:30b} \\\\\n\\alpha^X=\\frac{B^XC^Z-C^XB^Y}{B^XC^X}\n\\label{eqn:30c}\n\\end{gather}\n\\end{subequations}\nSince $\\alpha$ is rational, then $\\alpha$ can be expressed as the ratio of integers $p$ and $q$ such that\n\\begin{equation}\n\\frac{p^X}{q^X}=\\frac{B^XC^Z-C^XB^Y}{B^XC^X}\\label{eqn:31}\n\\end{equation}\nwhere $p^X=B^XC^Z-C^XB^Y$ and $q^X=B^XC^X$. We note that the denominator of \\cref{eqn:31} is a perfect power of $X$ and thus we know that the numerator must also be a perfect power of $X$ for the $X^{th}$ root of their ratio to be rational. Hence the ratio of the perfect power of the numerator and the perfect power of the denominator, even after simplifying, must thus be rational.\n\nRegardless of the parity of $B$ or $C$, per \\cref{eqn:31}, $p^X$ must be even as it is the difference of two odd numbers or the difference of two even numbers. Furthermore, since $p^X$ by definition is a perfect power of $X$ and given that it is even, then $p^X$ must be defined as $2^{iX}(f_1f_2 \\cdots f_{n_f})^{jX}$ for positive integers $i$, $j$, $f_1$, $f_2$, ..., $f_{n_f}$ where $2^{iX}$ is the perfect power of the even component of $p^X$ and $(f_1f_2\\cdots f_{n_f})^{jX}$ is the perfect power of the remaining $n_f$ prime factors of $p^X$ where $f_1$, $f_2$, ..., $f_{n_f}$ are the remaining prime factors of $p^X$. 
Hence\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}(f_1f_2\\cdots f_{n_f})^{jX}}{BC} = \\frac{B^XC^Z-C^XB^Y}{BC}\\label{eqn:32}\n\\end{equation}\n$B$ and $C$ can also be expressed as a function of their prime factors, thus\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}f_1^{jX}f_2^{jX}\\cdots f_{n_f}^{jX}}{b_1b_2\\cdots b_{n_b} c_1c_2\\cdots c_{n_c}}\n\\label{eqn:33}\n\\end{equation}\nwhere $b_1, b_2, \\dots,b_{n_b}$ and $c_1, c_2, \\dots,c_{n_c}$ are prime factors of $B$ and $C$ respectively. Based on the right side of \\cref{eqn:32}, the entire denominator $BC$ is fully subsumed by the numerator, and thus every one of the prime factors in the denominator equals one of the prime factors in the numerator. Thus after dividing, one or more of the exponents in the numerator reduces thereby canceling the entire denominator accordingly. For illustration, suppose $b_1=b_2=2$, $b_{n_b}=f_1$, $c_1=c_2=f_2$ and $c_{n_c}=f_{n_f}$. As such, \\cref{eqn:33} simplifies to\n\\begin{equation}\np^X=2^{iX-2}f_1^{jX-1}f_2^{jX-2}\\cdots f_{n_f}^{jX-1}\n\\label{eqn:34}\n\\end{equation}\nwhich has terms with exponents that are not multiples of $X$ and are thus not perfect powers of $X$. Therefore the $X^{th}$ root is irrational which contradicts the assumption that $\\displaystyle{\\alpha=\\frac{p}{q}}$ is rational. We note that all factors in the denominator of \\cref{eqn:33} cannot each be perfect powers of $X$ since the bases $B$ and $C$ are defined to be reduced. More generally, beyond the illustration, after simplifying one or more terms, \\cref{eqn:34} will have an exponent that is not a multiple of $X$ and thus is irrational when taking the root accordingly.\n\nSuppose $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ is rational, $A,B,C,X,Y,$ and $Z$ are integers, and gcd$(A,B,C)=1$. 
Then we can express $\\beta$ as\n\\begin{subequations}\n\\begin{gather}\n\\beta^X=C^{Z-2X}-B^{Y-2X}\n\\label{eqn:35a} \\\\\n\\beta^X=\\frac{C^Z}{C^{2X}}-\\frac{B^Y}{B^{2X}}\n\\label{eqn:35b} \\\\\n\\beta^X=\\frac{B^{2X}C^Z-C^{2X}B^Y}{B^{2X}C^{2X}}\n\\label{eqn:35c}\n\\end{gather}\n\\end{subequations}\nSince $\\beta$ is rational, then $\\beta$ can be expressed as the ratio of integers $p$ and $q$ such that\n\\begin{equation}\n\\frac{p^X}{q^X}=\\frac{B^{2X}C^Z-C^{2X}B^Y}{B^{2X}C^{2X}}\\label{eqn:36}\n\\end{equation}\nwhere $p^X=B^{2X}C^Z-C^{2X}B^Y$ and $q^X=B^{2X}C^{2X}$. We note that the denominator of \\cref{eqn:36} is a perfect power of $X$ and thus we know that the numerator must also be a perfect power of $X$ for the $X^{th}$ root of their ratio to be rational. Hence the ratio of the perfect power of the numerator and the perfect power of the denominator, even after simplifying, must thus be rational.\n\nRegardless of the parity of $B$ or $C$, per \\cref{eqn:36}, $p^X$ must be even as it is the difference of two odd numbers or the difference of two even numbers. Furthermore, since $p^X$ by definition is a perfect power of $X$ and given that it is even, then $p^X$ must be defined as $2^{iX}(f_1f_2 \\cdots f_{n_f})^{jX}$ for positive integers $i$, $j$, $f_1$, $f_2$, ..., $f_{n_f}$ where $2^{iX}$ is the perfect power of the even component of $p^X$ and $(f_1f_2\\cdots f_{n_f})^{jX}$ is the perfect power of the remaining $n_f$ prime factors of $p^X$ where $f_1$, $f_2$, ..., $f_{n_f}$ are the remaining prime factors of $p^X$. 
Hence\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}(f_1f_2\\cdots f_{n_f})^{jX}}{B^2C^2} = \\frac{B^{2X}C^Z-C^{2X}B^Y}{B^2C^2}\\label{eqn:37}\n\\end{equation}\n$B^2$ and $C^2$ can also be expressed as a function of their prime factors, thus\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}f_1^{jX}f_2^{jX}\\cdots f_{n_f}^{jX}}{b_1^2b_2^2\\cdots b_{n_b}^2 c_1^2c_2^2\\cdots c_{n_c}^2}\n\\label{eqn:38}\n\\end{equation}\nwhere $b_1, b_2, \\dots,b_{n_b}$ and $c_1, c_2, \\dots,c_{n_c}$ are prime factors of $B$ and $C$ respectively. Based on the right side of \\cref{eqn:37}, the entire denominator $B^2C^2$ is fully subsumed by the numerator, and thus every one of the prime factors in the denominator equals one of the prime factors in the numerator. Thus after dividing, one or more of the exponents in the numerator reduces thereby canceling the entire denominator accordingly. For illustration, suppose $b_1=b_2=2$, $b_{n_b}=f_1$, $c_1=c_2=f_2$ and $c_{n_c}=f_{n_f}$. As such, \\cref{eqn:38} simplifies to\n\\begin{equation}\np^X=2^{iX-4}f_1^{jX-2}f_2^{jX-4}\\cdots f_{n_f}^{jX-2}\n\\label{eqn:39}\n\\end{equation}\nwhich has terms with exponents that are not multiples of $X$ and are thus not perfect powers of $X$. Therefore the $X^{th}$ root is irrational which contradicts the assumption that $\\displaystyle{\\beta=\\frac{p}{q}}$ is rational. We note that all factors in the denominator of \\cref{eqn:38} cannot each be perfect powers of $X$ since the bases $B$ and $C$ are defined to be reduced. 
More generally, beyond the illustration, after simplifying one or more terms, \\cref{eqn:39} will have an exponent that is not a multiple of $X$, and thus the corresponding root is irrational.\n\n\nSince a rational number is the ratio of two integers in reduced form, and gcd$(B,C)=1$, both $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are irrational.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational} establishes that if gcd$(A,B,C)=1$ then $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are irrational. We now establish the reverse: if $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are rational, then gcd$(A,B,C)>1$.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime}\n\\Conjecture, if $\\displaystyle{\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ or $\\displaystyle{\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ is rational, then gcd$(A,B,C)>1$.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ is rational and $A,B,C,X,Y,$ and $Z$ are integers that satisfy the conjecture. Then we can express $\\alpha$ as\n\\begin{subequations}\n\\begin{gather}\n\\alpha^X=C^{Z-X}-B^{Y-X}\n\\label{eqn:40a} \\\\\n\\alpha^X=\\frac{C^Z}{C^X}-\\frac{B^Y}{B^X}\n\\label{eqn:40b} \\\\\n\\alpha^X=\\frac{B^XC^Z-C^XB^Y}{B^XC^X}\n\\label{eqn:40c}\n\\end{gather}\n\\end{subequations}\nSince $\\alpha$ is rational, then $\\alpha$ can be expressed as the ratio of integers $p$ and $q$ such that\n\\begin{equation}\n\\frac{p^X}{q^X}=\\frac{B^XC^Z-C^XB^Y}{B^XC^X}\\label{eqn:41}\n\\end{equation}\nwhere $p^X=B^XC^Z-C^XB^Y$ and $q^X=B^XC^X$. 
We note that the denominator of \\cref{eqn:41} is a perfect power of $X$ and thus we know that the numerator must also be a perfect power of $X$ for the $X^{th}$ root of their ratio to be rational. Hence the ratio of the perfect power of the numerator and the perfect power of the denominator, even after simplifying, must thus be rational.\n\nSuppose gcd$(A,B,C)=k$ where integer $k\\geq 2$. Thus $A=ak$, $B=bk$, and $C=ck$ for pairwise coprime integers $a$, $b$, and $c$. We can express \\cref{eqn:41} with the common term, namely\n\\begin{subequations}\n\\begin{gather}\n\\frac{p^X}{q^X}=\\frac{(kb)^X(kc)^Z-(kc)^X(kb)^Y}{(kb)^X(kc)^X} \\label{eqn:42a} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{X+Z}b^Xc^Z-k^{X+Y}c^Xb^Y}{k^{2X}b^Xc^X} \\label{eqn:42b} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{Z-X}b^Xc^Z-k^{Y-X}c^Xb^Y}{b^Xc^X} \\label{eqn:42c} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{\\min(Z-X,Y-X)}[k^{Z-X-\\min(Z-X,Y-X)}b^Xc^Z-k^{Y-X-\\min(Z-X,Y-X)}c^Xb^Y]}{b^Xc^X} \\label{eqn:42d}\n\\end{gather}\n\\end{subequations}\n\nRegardless of the parity of $b$, $c$, or $k$ per \\cref{eqn:42c,eqn:42d}, $p^X$ must be even as it is the difference of two odd numbers or the difference of two even numbers. Furthermore, since $p^X$ by definition is a perfect power of $X$ and given that it is even, then $p^X$ must be defined as $2^{iX}k^{\\min(Z-X,Y-X)}(f_1f_2 \\cdots f_{n_f})^{jX}$ for positive integers $i$, $j$, $f_1$, $f_2$, ..., $f_{n_f}$ where $2^{iX}$ is the perfect power of the even component of $p^X$, $k^{\\min(Z-X,Y-X)}$ is the common factor based on gcd$(A,B,C)$, and $(f_1f_2\\cdots f_{n_f})^{jX}$ is the perfect power of the remaining $n_f$ prime factors of $p^X$ where $f_1$, $f_2$, ..., $f_{n_f}$ are the remaining prime factors of $p^X$. 
Hence\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}k^{\\min(Z-X,Y-X)}(f_1f_2\\cdots f_{n_f})^{jX}}{bc}\n=\\frac{k^{Z-X}b^Xc^Z-k^{Y-X}c^Xb^Y}{bc} \\label{eqn:43}\n\\end{equation}\nBoth $b$ and $c$ can also be expressed as a function of their prime factors, thus\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}k^{\\min(Z-X,Y-X)}f_1^{jX}f_2^{jX}\\cdots f_{n_f}^{jX}}{b_1b_2\\cdots b_{n_b} c_1c_2\\cdots c_{n_c}}\n\\label{eqn:44}\n\\end{equation}\nwhere $b_1, b_2, \\dots,b_{n_b}$ and $c_1, c_2, \\dots,c_{n_c}$ are prime factors of $b$ and $c$ respectively. Based on the right side of \\cref{eqn:43}, the entire denominator $bc$ is fully subsumed by the numerator, and thus every one of the prime factors in the denominator equals one of the prime factors in the numerator. Thus after dividing, one or more of the exponents in the numerator reduces thereby canceling the entire denominator accordingly. For illustration, suppose $b_1=b_2=2$, $b_{n_b}=f_1$, $c_1=c_2=f_2$ and $c_{n_c}=f_{n_f}$. As such, \\cref{eqn:44} simplifies to\n\\begin{equation}\np^X=2^{iX-2}k^{\\min(Z-X,Y-X)}f_1^{jX-1}f_2^{jX-2}\\cdots f_{n_f}^{jX-1}\n\\label{eqn:45}\n\\end{equation}\nwhich has terms with exponents that are not multiples of $X$ and are thus not perfect powers of $X$. If $k=1$, then the $X^{th}$ root is irrational, which contradicts the assumption that $\\displaystyle{\\alpha=\\frac{p}{q}}$ is rational. However, if $k>1$ and is composed of exactly those factors whose exponents are not individually multiples of $X$, then the resulting expression can be a perfect power of $X$. Hence when $k=1$, then per \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ is irrational. However if $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ is rational, then $k\\neq1$ and thus gcd$(A,B,C)\\neq1$.\n\n\\bigskip\n\nSuppose $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ is rational and $A,B,C,X,Y,$ and $Z$ are integers that satisfy the conjecture. 
Then we can express $\\beta$ as\n\\begin{subequations}\n\\begin{gather}\n\\beta^X=C^{Z-2X}-B^{Y-2X}\n\\label{eqn:46a} \\\\\n\\beta^X=\\frac{C^Z}{C^{2X}}-\\frac{B^Y}{B^{2X}}\n\\label{eqn:46b} \\\\\n\\beta^X=\\frac{B^{2X}C^Z-C^{2X}B^Y}{B^{2X}C^{2X}}\n\\label{eqn:46c}\n\\end{gather}\n\\end{subequations}\nSince $\\beta$ is rational, then $\\beta$ can be expressed as the ratio of integers $p$ and $q$ such that\n\\begin{equation}\n\\frac{p^X}{q^X}=\\frac{B^{2X}C^Z-C^{2X}B^Y}{B^{2X}C^{2X}}\\label{eqn:47}\n\\end{equation}\nwhere $p^X=B^{2X}C^Z-C^{2X}B^Y$ and $q^X=B^{2X}C^{2X}$. We note that the denominator of \\cref{eqn:47} is a perfect power of $X$ and thus we know that the numerator must also be a perfect power of $X$ for the $X^{th}$ root of their ratio to be rational. Hence the ratio of the perfect power of the numerator and the perfect power of the denominator, even after simplifying, must thus be rational.\n\nSuppose gcd$(A,B,C)=k$ where integer $k\\geq 2$. Thus $A=ak$, $B=bk$, and $C=ck$ for pairwise coprime integers $a$, $b$, and $c$. We can express \\cref{eqn:47} with the common term, namely\n\\begin{subequations}\n\\begin{gather}\n\\frac{p^X}{q^X}=\\frac{(kb)^{2X}(kc)^Z-(kc)^{2X}(kb)^Y}{(kb)^{2X}(kc)^{2X}}\\label{eqn:48a} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{2X+Z}b^{2X}c^Z-k^{2X+Y}c^{2X}b^Y}{k^{4X}b^{2X}c^{2X}} \\label{eqn:48b} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{Z-2X}b^{2X}c^Z-k^{Y-2X}c^{2X}b^Y}{b^{2X}c^{2X}} \\label{eqn:48c} \\\\\n\\frac{p^X}{q^X}=\\frac{k^{\\min(Z-2X,Y-2X)}[k^{Z-2X-\\min(Z-2X,Y-2X)}b^{2X}c^Z-k^{Y-2X-\\min(Z-2X,Y-2X)}c^{2X}b^Y]}{b^{2X}c^{2X}} \\label{eqn:48d}\n\\end{gather}\n\\end{subequations}\n\nRegardless of the parity of $b$, $c$, or $k$ per \\cref{eqn:48c,eqn:48d}, $p^X$ must be even as it is the difference of two odd numbers or the difference of two even numbers. 
Furthermore, since $p^X$ by definition is a perfect power of $X$ and given that it is even, then $p^X$ must be defined as $2^{iX}k^{\\min(Z-2X,Y-2X)}(f_1f_2 \\cdots f_{n_f})^{jX}$ for positive integers $i$, $j$, $f_1$, $f_2$, ..., $f_{n_f}$ where $2^{iX}$ is the perfect power of the even component of $p^X$, $k^{\\min(Z-2X,Y-2X)}$ is the common factor based on gcd$(A,B,C)$, and $(f_1f_2\\cdots f_{n_f})^{jX}$ is the perfect power of the remaining $n_f$ prime factors of $p^X$ where $f_1$, $f_2$, ..., $f_{n_f}$ are the remaining prime factors of $p^X$. Hence\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}k^{\\min(Z-2X,Y-2X)}(f_1f_2\\cdots f_{n_f})^{jX}}{b^2c^2}\n=\\frac{k^{Z-2X}b^{2X}c^Z-k^{Y-2X}c^{2X}b^Y}{b^2c^2} \\label{eqn:49}\n\\end{equation}\nBoth $b$ and $c$ can also be expressed as a function of their prime factors, thus\n\\begin{equation}\n\\frac{p^X}{q}=\\frac{2^{iX}k^{\\min(Z-2X,Y-2X)}f_1^{jX}f_2^{jX}\\cdots f_{n_f}^{jX}}{b_1^2b_2^2\\cdots b_{n_b}^2 c_1^2c_2^2\\cdots c_{n_c}^2}\n\\label{eqn:50}\n\\end{equation}\nwhere $b_1, b_2, \\dots,b_{n_b}$ and $c_1, c_2, \\dots,c_{n_c}$ are prime factors of $b$ and $c$ respectively. Based on the right side of \\cref{eqn:49}, the entire denominator $b^2c^2$ is fully subsumed by the numerator, and thus every one of the prime factors in the denominator equals one of the prime factors in the numerator. Thus after dividing, one or more of the exponents in the numerator reduces thereby canceling the entire denominator accordingly. For illustration, suppose $b_1=b_2=2$, $b_{n_b}=f_1$, $c_1=c_2=f_2$ and $c_{n_c}=f_{n_f}$. As such, \\cref{eqn:50} simplifies to\n\\begin{equation}\np^X=2^{iX-4}k^{\\min(Z-2X,Y-2X)}f_1^{jX-2}f_2^{jX-4}\\cdots f_{n_f}^{jX-2}\n\\label{eqn:51}\n\\end{equation}\nwhich has terms with exponents that are not multiples of $X$ and are thus not perfect powers of $X$. If $k=1$, then the $X^{th}$ root is irrational, which contradicts the assumption that $\\displaystyle{\\beta=\\frac{p}{q}}$ is rational. 
However, if $k>1$ such that $k$ comprises the factors that are not individually perfect powers of $X$, then the resulting expression is a perfect power of $X$. Hence when $k=1$, per \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ is irrational. However, if $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ is rational, then $k\\neq1$ and thus gcd$(A,B,C)\\neq1$.\n\nThus if either or both of $\\displaystyle{\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are rational, then gcd$(A,B,C)>1$.\n\\end{proof}\n\n\n\\bigskip\n\n\nThe values of $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ have critical properties related to coprimality and to other relationships with integer solutions that satisfy the conjecture. We know from \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational} that if gcd$(A,B,C)=1$, then $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are both irrational. Even though all feasible values of $\\alpha$ and $\\beta$ map to this pair given \\cref{Thm:2.9_Real_Alpha_Beta}, we can still consider other feasible values of $\\alpha$ and $\\beta$ as related to coprimality.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate}\n\\Conjecture, if gcd$(A,B,C)=1$, then any value of $\\alpha$ or $\\beta$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ must be irrational or indeterminate.\n\\end{theorem}\n\n\\begin{proof}\nPer \\cref{Thm:2.8_Functional_Form}, $\\sqrt[X]{C^Z-B^Y} =(C+B)\\alpha-CB\\beta$. If we suppose $\\sqrt[X]{C^Z-B^Y}$ is irrational, then $(C+B)\\alpha-CB\\beta$ is irrational. With $C$ and $B$ integers, if $\\alpha$ and $\\beta$ were rational, then $(C+B)\\alpha-CB\\beta$ would be composed solely of rational terms and thus must be rational, which contradicts the assumption of irrationality.
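The rational-closure step used here (sums and products of rationals remain rational) can be spot-checked with exact arithmetic. This is an illustration only; the values of $B$, $C$, $\alpha$, $\beta$ below are arbitrary placeholders:

```python
from fractions import Fraction

# Placeholder values: B, C integers; alpha, beta arbitrary rationals.
B, C = Fraction(7), Fraction(13)
alpha, beta = Fraction(3, 4), Fraction(5, 6)

# (C+B)*alpha - C*B*beta is built from sums and products of rationals,
# so the result is itself an exact rational number.
value = (C + B) * alpha - C * B * beta
assert isinstance(value, Fraction)
assert value == Fraction(-365, 6)
```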
Hence $\\alpha$ or $\\beta$ must be irrational. Further, per \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\alpha$ and $\\beta$ are irrational if $\\sqrt[X]{C^Z-B^Y}$ is irrational and gcd$(A,B,C)=1$. Thus here too $\\alpha$ or $\\beta$ must be irrational.\n\nIf we suppose instead $\\sqrt[X]{C^Z-B^Y}$ is rational, then $(C+B)\\alpha-CB\\beta$ is rational.\nGiven gcd$(A,B,C)=1$, per \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational} we know both $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are irrational.\n\nSuppose instead $\\alpha$ is to be defined as any real other than $\\displaystyle{\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$. As such, per \\cref{Thm:2.8_Functional_Form}, $\\beta$ is derived by $\\beta=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB}}$. Hence substituting into \\cref{eqn:24} gives us\n\\begin{subequations}\n\\begin{align}\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\alpha-CB\\beta \\label{eqn:52a} \\\\\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\alpha-CB\\frac{\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha}{-CB} \\label{eqn:52b}\\\\\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\alpha+\\sqrt[X]{C^Z-B^Y}-(C+B)\\alpha \\label{eqn:52c} \\\\\n\\sqrt[X]{C^Z-B^Y} &=\\sqrt[X]{C^Z-B^Y} \\label{eqn:52d}\n\\end{align}\n\\end{subequations}\nRegardless of the value selected for $\\alpha$, both $\\alpha$ and $\\beta$ fall out and thus both are indeterminate when gcd$(A,B,C)=1$.\n\nSuppose instead $\\beta$ is to be defined as any real other than $\\displaystyle{\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$. As such, per \\cref{Thm:2.8_Functional_Form}, $\\alpha$ is derived by $\\alpha=\\displaystyle{\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}}$. 
Hence substituting into \\cref{eqn:24} gives us\n\\begin{subequations}\n\\begin{align}\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\alpha-CB\\beta \\label{eqn:53a} \\\\\n\\sqrt[X]{C^Z-B^Y} &=(C+B)\\frac{\\sqrt[X]{C^Z-B^Y}+CB\\beta}{C+B}-CB\\beta \\label{eqn:53b}\\\\\n\\sqrt[X]{C^Z-B^Y} &=\\sqrt[X]{C^Z-B^Y}+CB\\beta-CB\\beta \\label{eqn:53c} \\\\\n\\sqrt[X]{C^Z-B^Y} &=\\sqrt[X]{C^Z-B^Y} \\label{eqn:53d}\n\\end{align}\n\\end{subequations}\nRegardless of the value selected for $\\beta$, both $\\alpha$ and $\\beta$ fall out and thus both are indeterminate when gcd$(A,B,C)=1$.\n\nHence if gcd$(A,B,C)=1$, then both of the terms $\\displaystyle{\\alpha=\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\beta=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ must be irrational, while any other value of $\\alpha$ or $\\beta$ that satisfies $C^Z-B^Y=[(C+B)\\alpha-CB\\beta]^X$ must be irrational or indeterminate.\n\\end{proof}\n\\label{Section:Reparameterize_End}\n\n\n\n\\bigskip\n\n\n\\Line\n\\subsection{Impossibility of the Terms}\n\\label{Section:Impossibility_Start}\nHaving established that pairwise coprimality is a definite byproduct when gcd$(A,B,C)=1$ (\\cref{Thm:2.2_Coprime,Thm:2.3_Coprime,Thm:2.4_Coprime}), that there is a unique reparameterization of the bases of \\BealsEq\\, whose rationality is tied to the coprimality of the terms (\\cref{Thm:2.9_Real_Alpha_Beta,Thm:2.10_No_Solution_Alpha_Beta_Irrational,Thm:2.11_Coprime_Alpha_Beta_Irrational,Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime}), and that a line through the origin with an irrational slope does not go through any non-trivial lattice points (\\cref{Thm:2.1_Irrational_Slope_No_Lattice}), we now delve into proving the conjecture under two mutually exclusive conditions:\n\\begin{enumerate}\n\\item geometric implications when gcd$(A,B,C)=1$\n\\item geometric implications when gcd$(A,B,C)>1$\n\\end{enumerate}\nThese steps lead to a critical contradiction which demonstrates the impossibility of the existence of counter-examples due to
fundamental features of the conjecture.\n\nAs implied by Catalan's conjecture and proven by Mih\u0103ilescu \\cite{mihailescu2004primary}, no integer solutions exist when $A$, $B$, or $C$ equals 1, regardless of coprimality. Hence we consider the situation in which $A,B,C\\geq2$. Given the configuration of the conjecture, $A$, $B$, $C$, $X$, $Y$, and $Z$ are positive integers, and thus $A^X$, $B^Y$, and $C^Z$ are also integers. A set of values that satisfies the conjecture can be plotted on a Cartesian coordinate grid with axes $A^X$, $B^Y$, and $C^Z$. See \\cref{Fig:3DScatter}. Based on \\cref{eqn:1}, the line passing through the origin and the point $(A^X,B^Y,A^X+B^Y)$ can be expressed based on the angles in relation to the axes (see \\cref{Fig:ScatterPlotWithAngles}), namely\n\\begin{subequations}\n\\begin{gather}\n\\theta_{_{C^ZB^Y}} = \\tan^{-1}\\frac{A^X+B^Y}{B^Y} \\label{eqn:54a} \\\\\n\\theta_{_{C^ZA^X}} = \\tan^{-1}\\frac{A^X+B^Y}{A^X} \\label{eqn:54b} \\\\\n\\theta_{_{B^YA^X}} = \\tan^{-1}\\frac{B^Y}{A^X} \\label{eqn:54c}\n\\end{gather}\n\\end{subequations}\nwhere $\\theta_{_{C^ZB^Y}}$ is the angle subtended between the $B^Y$ axis and the line through the origin and the given point in the $C^Z \\times B^Y$ plane, $\\theta_{_{C^ZA^X}}$ is the angle subtended between the $A^X$ axis and the line through the origin and the given point in the $C^Z \\times A^X$ plane, and $\\theta_{_{B^YA^X}}$ is the angle subtended between the $B^Y$ axis and the line through the origin and the given point in the $B^Y \\times A^X$ plane.\n\nThe line subtended in each plane by the origin and the given point $(A^X,B^Y,A^X+B^Y)$ has slopes that by definition are identical to the arguments of the corresponding inverse tangent functions in \\crefrange{eqn:54a}{eqn:54c}. In each case, the numerator and denominator are integers, and thus the corresponding ratios (and therefore slopes) are rational.
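The rationality of the three slopes in Eqns (54a)-(54c) can be checked numerically. As sample data we use the known identity $3^3+6^3=3^5$ (so $A^X=27$, $B^Y=216$, $C^Z=243$); the variable names below are illustrative only:

```python
from fractions import Fraction

# Sample data from the known identity 3**3 + 6**3 == 3**5.
AX, BY = 3**3, 6**3
CZ = AX + BY

m_CZBY = Fraction(CZ, BY)   # slope whose arctangent is theta_{C^Z B^Y}
m_CZAX = Fraction(CZ, AX)   # slope whose arctangent is theta_{C^Z A^X}
m_BYAX = Fraction(BY, AX)   # slope whose arctangent is theta_{B^Y A^X}

# All three slopes are ratios of integers, hence rational, and the line
# through the origin hits a lattice point at every integer multiple:
for t in range(1, 5):
    point = (t * AX, t * BY, t * CZ)
    assert all(isinstance(coord, int) for coord in point)
```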
Since this line passes through the origin and has a rational slope in all three planes, we conclude that the infinitely long line passes through infinitely many lattice points, namely at integer multiples of the slopes.\n\n\\begin{figure}\n\\includegraphics[width=.55\\textwidth]{3D_Scatter_Map.jpg}\n\\caption{Map of a corresponding point between two different plots based on \\BealsEq\\, satisfying the associated constraints and bounds.}\n\\label{Fig:3DScatterMap}\n\\end{figure}\n\nBased on the conjecture, the given lattice point $(A^X,B^Y,A^X+B^Y)$ relative to axes $A^X$, $B^Y$, and $C^Z$ corresponds to a lattice point $(A,B,\\sqrt[Z]{A^X+B^Y})$ in a scatter plot based on axes $A$, $B$, and $C$. See \\cref{Fig:3DScatterMap}. The conjecture states there is no integer solution to \\BealsEq\\, that simultaneously satisfies all the conditions if gcd$(A,B,C)=1$. Thus from a geometric perspective, this means that if gcd$(A,B,C)=1$, then the line in the scatter plot based on axes $A$, $B$, and $C$ in \\cref{Fig:3DScatterMap} could never go through a non-trivial lattice point since $A$, $B$, and $C$ could not all be integers simultaneously. Conversely, if the corresponding line in the scatter plot based on axes $A$, $B$, and $C$ in \\cref{Fig:3DScatterMap} does go through a non-trivial lattice point, then based on the conjecture we know gcd$(A,B,C)>1$.
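As a concrete illustration of the mapping in the figure, the known Beal-type identity $3^3+6^3=3^5$ has a common factor of 3, consistent with the conjecture's requirement that any integer solution satisfy gcd$(A,B,C)>1$. The check below uses only that identity:

```python
from math import gcd

# Known Beal-type identity with a shared factor: 3**3 + 6**3 == 3**5.
A, X, B, Y, C, Z = 3, 3, 6, 3, 3, 5
assert A**X + B**Y == C**Z

# The lattice point in (A^X, B^Y, C^Z) space maps back to the lattice
# point (A, B, C), and the bases share a common factor greater than 1,
# consistent with the conjecture:
assert round((A**X + B**Y) ** (1 / Z)) == C
assert gcd(gcd(A, B), C) == 3
```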
Hence to validate the conjecture, we need to test the relationship between \\BealsEq, the lattice points in both graphs, and the slopes subtended by the origin and these lattice points.\n\n\\begin{figure}\n\\includegraphics[width=.66\\textwidth]{3D_Scatter_Plot_with_Angles_2.jpg}\n\\caption{Angles between the axes and the line segment subtended by the origin and a single point $(A,B,C)$ from \\cref{Fig:CZBYAZScatter}.}\n\\label{Fig:ScatterPlotWithAngles2}\n\\end{figure}\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.14_Main_Proof_Coprime_No_Solutions}\n\\Conjecture, if gcd$(A,B,C) = 1$, then there is no integer solution.\n\\end{theorem}\n\n\\begin{proof}\nThe line through the origin and point $(A,B,\\sqrt[Z]{A^X+B^Y})$ can be expressed based on the angles in relation to the axes. See \\cref{Fig:ScatterPlotWithAngles2}. These angles are\n\\begin{subequations}\n\\begin{gather}\n\\theta_{_{CB}} = \\tan^{-1}\\frac{\\sqrt[Z]{A^X+B^Y}}{B} \\label{eqn:55a} \\\\\n\\theta_{_{CA}} = \\tan^{-1}\\frac{\\sqrt[Z]{A^X+B^Y}}{A} \\label{eqn:55b} \\\\\n\\theta_{_{BA}} = \\tan^{-1}\\frac{B}{A} \\label{eqn:55c}\n\\end{gather}\n\\end{subequations}\nwhere $\\theta_{_{CB}}$ is the angle subtended between the $B$ axis and the line through the origin and the given point in the $C\\times B$ plane, $\\theta_{_{CA}}$ is the angle subtended between the $A$ axis and the line through the origin and the given point in the $C\\times A$ plane, and $\\theta_{_{BA}}$ is the angle subtended between the $B$ axis and the line through the origin and the given point in the $B\\times A$ plane. See \\cref{Fig:ScatterPlotWithAngles2}.\n\nThe line that corresponds to $\\theta_{_{CB}}$ in \\cref{eqn:55a} has slope $\\displaystyle{m=\\frac{\\sqrt[Z]{A^X+B^Y}}{B}}$ in the $C\\times B$ plane, and the line that corresponds to $\\theta_{_{CA}}$ in \\cref{eqn:55b} has slope $\\displaystyle{m=\\frac{\\sqrt[Z]{A^X+B^Y}}{A}}$ in the $C\\times A$ plane.
These two slopes differ from the slope of the line that corresponds to $\\theta_{_{BA}}$ in \\cref{eqn:55c}, namely $\\displaystyle{m=\\frac{B}{A}}$ in the $B\\times A$ plane, since this latter slope is merely the ratio of two integers whereas the numerators of the first two are $\\sqrt[Z]{A^X+B^Y}$, which may not be rational.\n\nBuilding from \\cref{eqn:55a,eqn:55b}, let $\\MCB$ and $\\MCA$\nbe the slopes of the lines through the origin and the given point in the $C \\times B$ and $C \\times A$ planes, respectively. Thus we have\n\\begin{subequations}\n\\begin{align}\n\\MCB &= \\frac{\\sqrt[Z]{A^X+B^Y}}{B} &\n\\MCA &= \\frac{\\sqrt[Z]{A^X+B^Y}}{A}\n\\label{eqn:56a} \\\\\n\\MCB &= \\sqrt[Z]{\\frac{A^X+B^Y}{B^Z}} &\n\\MCA &= \\sqrt[Z]{\\frac{A^X+B^Y}{A^Z}}\n\\label{eqn:56b} \\\\\n\\MCB &= \\left(\\frac{A^X}{B^Z} +B^{Y-Z}\\right) ^{\\frac{1}{Z}} &\n\\MCA &= \\left(A^{X-Z} + \\frac{B^Y}{A^Z} \\right) ^{\\frac{1}{Z}}\n\\label{eqn:56c}\n\\end{align}\n\\end{subequations}\nPer \\cref{Thm:2.1_Irrational_Slope_No_Lattice}, if a line through the origin has an irrational slope, then that line does not pass through any non-trivial lattice points. Relative to the conjecture, a line in 3 dimensions that passes through the origin also passes through a point whose coordinates are the integer bases which satisfy the terms of the conjecture. Since the solution that satisfies the conjecture is integer and must be a lattice point, the corresponding lines must have rational slopes. Hence if there exist integer solutions, then slopes $\\MCB$ and $\\MCA$ are rational. If the slopes are irrational, then there is no integer solution. We must now consider two mutually exclusive scenarios in relation to slopes $\\MCB$ and $\\MCA$:\n\n\\bigskip\n\n\\textbf{Scenario 1 of 2: $\\bm{\\displaystyle{A^X \\neq B^Y}}$.} Suppose $A^X\\neq B^Y$. Dividing both terms by $B^Z$ gives us $\\displaystyle{\\frac{A^X}{B^Z} \\neq B^{Y-Z}}$.
Likewise, dividing both terms of $A^X \\neq B^Y$ by $A^Z$ gives us $\\displaystyle{A^{X-Z} \\neq \\frac{B^Y}{A^Z}}$. Using this relationship, $\\MCB$ and $\\MCA$ from \\cref{eqn:56c} become\n\\begin{subequations}\n\\begin{align}\n\\MCB &= \\left(\\frac{A^X}{B^Z}\\right)^{\\frac{1}{Z}} \\left(1+ \\frac{B^{Y-Z}}\n {\\left(\\frac{A^X}{B^Z}\\right)}\\right)^{\\frac{1}{Z}} &\n\\MCA &= (A^{X-Z})^{\\frac{1}{Z}} \\left(1+ \\frac{\\left(\\frac{B^Y}{A^Z}\\right)}\n {A^{X-Z}} \\right)^{\\frac{1}{Z}}\n\\label{eqn:57a} \\\\\n\\MCB &= \\frac{A^{\\frac{X}{Z}}}{B} \\left(1+ \\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}} &\n\\MCA &= ({A^{\\frac{X}{Z}-1}}) \\left(1+ \\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}\n\\label{eqn:57b}\n\\end{align}\n\\end{subequations}\n\nBoth $\\MCB$ and $\\MCA$ must be rational for there to exist an integer solution that satisfies the conjecture. Based on \\cref{eqn:57b}, there are two cases that can ensure both $\\MCB$ and $\\MCA$ are rational:\n\\begin{enumerate}\n\\item The term $\\displaystyle{\\left(1+\\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$ from\n \\cref{eqn:57b} is rational; therefore both terms\n $\\displaystyle{\\frac{A^{\\frac{X}{Z}}}{B}}$ and $A^{\\frac{X}{Z}-1}$ must be rational so that\n their respective products are rational.\n\\item The term $\\displaystyle{\\left(1+\\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$ from\n \\cref{eqn:57b} is irrational; therefore both terms\n $\\displaystyle{\\frac{A^{\\frac{X}{Z}}}{B}}$ and $A^{\\frac{X}{Z}-1}$ are irrational.\n However, the irrationality of these two terms is canceled by the irrationality\n of $\\displaystyle{\\left(1+\\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$\n so that their respective products are rational.\n\\end{enumerate}\n\n\\bigskip\n\n\\noindent\\textbf{Case 1 of 2: the terms of $\\bm{\\MCB }$ and $\\bm{\\MCA }$ are rational}\n\nStarting with the first case, assume $\\displaystyle{\\left(1+ \\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$ in \\cref{eqn:57b} is rational and thus
$\\displaystyle{\\frac{A^{\\frac{X}{Z}}}{B}}$ and $A^{\\frac{X}{Z}-1}$ are rational. Since $A$ is reduced (not a perfect power), gcd$(A,B)=1$, and $A$ is raised to the exponents $\\displaystyle{\\frac{X}{Z}}$ and $\\displaystyle{\\frac{X}{Z}-1}$, respectively, these terms are rational only if $X$ is an integer multiple of $Z$. However, per \\cref{Dfn:2.1_X_cannot_be_mult_of_Z} on page \\pageref{Dfn:2.1_X_cannot_be_mult_of_Z}\\, $X$ is not an integer multiple of $Z$, therefore the requirements to ensure $\\MCB$ and $\\MCA$ are rational cannot be met.\n\n\\bigskip\n\\noindent\\textbf{Case 2 of 2: the terms of $\\bm{\\MCB }$ and $\\bm{\\MCA }$ are irrational}\n\nConsidering the second case, assume $\\displaystyle{\\left(1+ \\frac{B^Y}{A^X}\\right)^{\\frac{1}{Z}}}$ in \\cref{eqn:57b} is irrational, and thus $\\displaystyle{\\frac{A^{\\frac{X}{Z}}}{B}}$ and $A^{\\frac{X}{Z}-1}$ must also be irrational so that the irrationality cancels under multiplication. If slopes $\\MCB$ and $\\MCA$ in \\cref{eqn:57b} are to be rational despite these irrational factors, then the expressions in \\cref{eqn:57a,eqn:57b} must evaluate to rational values. We can re-express \\cref{eqn:56a} equivalently as\n\\begin{equation}\n\\MCB = \\frac{C}{\\sqrt[Y]{C^Z-A^X}} \\qquad\\qquad\\quad\n\\MCA = \\frac{C}{\\sqrt[X]{C^Z-B^Y}}\n\\label{eqn:58}\n\\end{equation}\nin which the denominators must be rational. Per \\cref{Thm:2.8_Functional_Form}, the denominators in \\cref{eqn:58} can be reparameterized and thus \\cref{eqn:58} becomes\n\\begin{equation}\n \\MCB = \\frac{C}{(C+A)M_{_1}\\alphaCA -CAM_{_1}\\betaCA } \\qquad\\qquad\n \\MCA = \\frac{C}{(C+B)M_{_2}\\alphaCB -CBM_{_2}\\betaCB }\n \\label{eqn:59}\n\\end{equation}\nwhere $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ are positive rational numbers and $M_{_1}$ and $M_{_2}$ are positive scalars.
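The re-expression of the slopes in Eqn (58) can be spot-checked numerically. As sample data we again use the known identity $3^3+6^3=3^5$, so that $B=\sqrt[Y]{C^Z-A^X}$ and $A=\sqrt[X]{C^Z-B^Y}$ both come out as integers; this is an illustration only:

```python
from math import isclose

# Sample data from the known identity 3**3 + 6**3 == 3**5.
A, X, B, Y, C, Z = 3, 3, 6, 3, 3, 5

# B is the Y-th root of C^Z - A^X = 216, and A is the X-th root of
# C^Z - B^Y = 27, so Eqn (58) reproduces m_CB = C/B and m_CA = C/A.
assert round((C**Z - A**X) ** (1 / Y)) == B
assert round((C**Z - B**Y) ** (1 / X)) == A
assert isclose(C / B, C / (C**Z - A**X) ** (1 / Y))
assert isclose(C / A, C / (C**Z - B**Y) ** (1 / X))
```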
Per \\cref{Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta}, $\\alphaCA$, $\\alphaCB$, $\\betaCA$ and $\\betaCB$ can be defined as\n\\begin{subequations}\n\\begin{align}\n\\alphaCA &=\\sqrt[Y]{C^{Z-Y}-A^{X-Y}} &\n\\alphaCB &=\\sqrt[X]{C^{Z-X}-B^{Y-X}}\n\\label{eqn:60a} \\\\\n\\betaCA &=\\sqrt[Y]{C^{Z-2Y}-A^{X-2Y}} &\n\\betaCB &=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}\n\\label{eqn:60b}\n\\end{align}\n\\end{subequations}\nPer \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ as defined in \\cref{eqn:60a,eqn:60b} are irrational when gcd$(A,B,C)=1$. Thus $\\MCB$ and $\\MCA$ in \\cref{eqn:59} are irrational.\n\nTherefore, as a consequence of both cases 1 and 2, slopes $\\MCB$ and $\\MCA$ must be irrational when $A^X \\neq B^Y$ and gcd$(A,B,C)=1$.\n\n\\bigskip\n\nAs a side note, an alternate way to define $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ is to select any real value for $\\alphaCA$ and $\\alphaCB$ and then derive $\\betaCA$ and $\\betaCB$, or to select any real value for $\\betaCA$ and $\\betaCB$ and then derive $\\alphaCA$ and $\\alphaCB$, per \\cref{Thm:2.9_Real_Alpha_Beta}. However, per \\cref{Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate}, when gcd$(A,B,C)=1$, the derived values of $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ are either irrational or indeterminate. Thus $\\MCB$ and $\\MCA$ in \\cref{eqn:59} are irrational or indeterminate. If the slopes $\\MCB$ and $\\MCA$ were rational, then $\\alpha$ and $\\beta$ would both be determinate, thus a contradiction. Hence here too the requirements to ensure $\\MCB$ and $\\MCA$ are rational cannot be met when gcd$(A,B,C)=1$.\n\n\n\n\\bigskip\n\n\\textbf{Scenario 2 of 2: $\\bm{\\displaystyle{A^X=B^Y}}$.} Suppose $A^X=B^Y$. Given the bases are reduced, we conclude that $A=B$.
Under the hypothesis of this theorem and the pairwise coprimality it implies, gcd$(A,B)=1$ and thus $A\\neq B$; hence this scenario is impossible.\n\n\\bigskip\n\nSince the scenarios are mutually exclusive and exhaustive given gcd$(A,B)=1$, and since in each scenario the slopes $\\MCB$ and $\\MCA$ are irrational, the line that goes through the origin and through the presumed integer solution must have irrational slopes. However, as already proven in \\cref{Thm:2.1_Irrational_Slope_No_Lattice}, lines through the origin with irrational slopes cannot go through any non-trivial lattice points. \\Conjecture, if gcd$(A,B,C) = 1$, then slopes $\\MCB$ and $\\MCA$ are irrational and thus there is no integer solution.\n\\end{proof}\n\nWe know that each grid point $(A,B,C)$ subtends a line through the origin and that point, whereby that point is supposed to be an integer solution that satisfies the conjecture. We also know that each grid point corresponds to a set of slopes. Further, we know from \\cref{Thm:2.1_Irrational_Slope_No_Lattice} that a line through the origin with an irrational slope does not pass through any non-trivial lattice points. Since both $\\MCB$ and $\\MCA$ are irrational when gcd$(A,B,C)=1$, their corresponding lines fail to go through any non-trivial lattice points, and thus these slopes mean there are no corresponding integer solutions for $A$, $B$, and $C$. Hence there is no integer solution satisfying the conjecture when gcd$(A,B,C)=1$.\n\\label{Section:Impossibility_End}\n\n\n\\bigskip\n\n\n\\Line\n\\subsection{Requirement for Possibility of the Terms}\n\\label{Section:Possibility_Start}\nWe have established that the irrationality of the slopes of a line through the origin and the lattice point $(A,B,C)$ when gcd$(A,B,C)=1$ translates to no integer solutions to the conjecture and to the non-existence of a non-trivial lattice point.
We now consider the reverse: that the existence of an integer solution satisfying the conjecture requires gcd$(A,B,C)>1$, which translates to the existence of a non-trivial lattice point through which the line passes.\n\n\n\\bigskip\n\n\n\\begin{theorem}\n\\label{Thm:2.15_Main_Proof_Solutions_Then_Not_Coprime}\n\\Conjecture, if there are integer solutions, then gcd$(A,B,C)>1$.\n\\end{theorem}\n\n\\begin{proof}\nGiven the configuration of the conjecture, $A$, $B$, $C$, $X$, $Y$, and $Z$ are positive integers, and thus $A^X$, $B^Y$, and $C^Z$ are also integers. A set of values that satisfies the conjecture corresponds to a point on a scatter plot with axes $A$, $B$, and $C$. See \\cref{Fig:3DScatterMap}. Based on \\cref{eqn:1}, the line passing through the origin and the point $(A,B,\\sqrt[Z]{A^X+B^Y})$ can be expressed based on its slopes in the three planes (see \\cref{eqn:55a,eqn:55b,eqn:56a} and \\cref{Fig:ScatterPlotWithAngles2}), namely\n\\begin{subequations}\n\\begin{align}\n\\MCB &= \\frac{\\sqrt[Z]{A^X+B^Y}}{B} =\\frac{C}{B} &\n\\MCA &= \\frac{\\sqrt[Z]{A^X+B^Y}}{A} =\\frac{C}{A} \\label{eqn:61a} \\\\\n\\MCB &= \\sqrt[Z]{\\frac{A^X+B^Y}{B^Z}} &\n\\MCA &= \\sqrt[Z]{\\frac{A^X+B^Y}{A^Z}} \\label{eqn:61b}\n\\end{align}\n\\end{subequations}\nwhere $\\MCB$ and $\\MCA$ are the slopes of the lines through the origin and the given point in the $C \\times B$ and $C \\times A$ planes, respectively. We can likewise define $\\MBA$ as the slope of the line through the origin and the given point in the $B\\times A$ plane based on \\cref{eqn:55c}, namely\n\\begin{equation}\n \\MBA = \\frac{B}{A} \\label{eqn:62}\n\\end{equation}\nWe know from \\cref{Thm:2.14_Main_Proof_Coprime_No_Solutions} that $\\MCB$ and $\\MCA$ cannot be rational if gcd$(A,B,C)=1$. Suppose gcd$(A,B,C)=k$ where $k\\geq 2$.
Thus with integer $k$ common to $A$, $B$, and $C$, we can express \\cref{eqn:1} with the common term, namely\n\\begin{equation}\na^Xk^X + b^Yk^Y = c^Zk^Z \\label{eqn:63}\n\\end{equation}\nwhere $a$, $b$, and $c$ are positive coprime integer factors of $A$, $B$, and $C$ respectively, and where $A=ak$, $B=bk$, $C=ck$, and where $A^X=a^Xk^X$, $B^Y=b^Yk^Y$, and $C^Z=c^Zk^Z$. We can thus express the slopes from \\cref{eqn:61b,eqn:62} with the common term, namely\n\\begin{subequations}\n\\begin{gather}\n\\MCB = \\sqrt[Z]{\\frac{a^Xk^X+b^Yk^Y}{b^Zk^Z}} \\qquad\\qquad\n\\MCA = \\sqrt[Z]{\\frac{a^Xk^X+b^Yk^Y}{a^Zk^Z}} \\label{eqn:64a}\\\\\n\\MBA = \\frac{bk}{ak} = \\frac{b}{a} \\label{eqn:64b}\n\\end{gather}\n\\end{subequations}\nWe observe that the common term $k$ cancels from $\\MBA$ in \\cref{eqn:64b} and thus $\\MBA$ is rational regardless of the common term. We can simplify \\cref{eqn:64a} as\n\\begin{equation}\n\\MCB = \\left(\\frac{(ak)^X}{(bk)^Z} +(bk)^{Y-Z}\\right) ^{\\frac{1}{Z}} \\qquad \\MCA = \\left((ak)^{X-Z} + \\frac{(bk)^Y}{(ak)^Z} \\right) ^{\\frac{1}{Z}} \\label{eqn:65}\n\\end{equation}\n\nBefore applying Newton's generalized binomial expansion to slopes $\\MCB$ and $\\MCA$ in \\cref{eqn:65}, we must first consider two mutually exclusive scenarios:\n\n\\bigskip\n\n\\textbf{Scenario 1 of 2: $\\bm{\\displaystyle{A^X \\neq B^Y}}$.} Suppose $A^X\\neq B^Y$. As such\n$\\displaystyle{\\frac{A^X}{B^Z}\\neq B^{Y-Z}}$ and $\\displaystyle{A^{X-Z} \\neq \\frac{B^Y}{A^Z}}$, and thus $\\displaystyle{\\frac{(ak)^X}{(bk)^Z}\\neq (bk)^{Y-Z}}$ and $\\displaystyle{(ak)^{X-Z} \\neq \\frac{(bk)^Y}{(ak)^Z}}$.
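The common-factor bookkeeping in Eqns (63) and (64b) can be checked with the known identity $3^3+6^3=3^5$, for which $k=3$, $(a,b,c)=(1,2,1)$; this is a numeric illustration only:

```python
from fractions import Fraction
from math import gcd

# Known identity 3**3 + 6**3 == 3**5 written with the common factor k
# pulled out, as in Eqn (63): A = a*k, B = b*k, C = c*k.
A, X, B, Y, C, Z = 3, 3, 6, 3, 3, 5
k = gcd(gcd(A, B), C)               # k = 3
a, b, c = A // k, B // k, C // k    # (1, 2, 1)

assert a**X * k**X + b**Y * k**Y == c**Z * k**Z   # Eqn (63)
# Eqn (64b): the common factor cancels from M_BA, leaving b/a.
assert Fraction(B, A) == Fraction(b, a) == Fraction(2, 1)
```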
Therefore we can re-express $\\MCB$ and $\\MCA$ in \\cref{eqn:65} as\n\\begin{subequations}\n\\begin{align}\n\\MCB &= \\left[\\frac{(ak)^X}{(bk)^Z}\\right]^{\\frac{1}{Z}}\n \\left[1+\\frac{(bk)^{Y-Z}} {\\left(\\frac{(ak)^X}{(bk)^Z}\\right)}\\right]\n ^{\\frac{1}{Z}} &\n\\MCA &= \\left[(ak)^{X-Z}\\right]^{\\frac{1}{Z}} \\left[1+\n \\frac{ \\left( \\frac{(bk)^Y}{(ak)^Z} \\right)} {(ak)^{X-Z}} \\right] ^{\\frac{1}{Z}}\n\\label{eqn:66a} \\\\\n\\MCB &= \\frac{(ak)^{\\frac{X}{Z}}}{bk} \\left(1 +\\frac{(bk)^Y}{(ak)^X}\\right)\n ^{\\frac{1}{Z}} &\n\\MCA &= {(ak)^{\\frac{X}{Z}-1}} \\left(1 +\\frac{(bk)^Y}{(ak)^X}\\right) ^{\\frac{1}{Z}}\n\\label{eqn:66b}\n\\end{align}\n\\end{subequations}\n\n\\bigskip\n\nGiven there are integer solutions that satisfy the conjecture, $\\MCB$ and $\\MCA$ must be rational. Based on \\cref{eqn:66b}, there are two cases that can ensure both $\\MCB$ and $\\MCA$ are rational:\n\\begin{enumerate}\n\\item The term $\\displaystyle{\\left(1+\\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$ from\n \\cref{eqn:66b} is rational; therefore both terms\n $\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ must be\n rational so that their respective products are rational.\n\\item The term $\\displaystyle{\\left(1+\\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$ from\n \\cref{eqn:66b} is irrational; therefore both terms\n $\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ are irrational.\n However, the irrationality of these two terms is canceled by the irrationality\n of $\\displaystyle{\\left(1+\\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$\n so that their respective products are rational.\n\\end{enumerate}\n\n\\bigskip\n\n\\noindent\\textbf{Case 1 of 2: the terms of $\\bm{\\MCB }$ and $\\bm{\\MCA }$ are rational}\n\nStarting with the first case, assume $\\displaystyle{\\left(1+ \\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$ in \\cref{eqn:66b} is rational and thus
$\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ are rational. Applying the binomial expansion to \\cref{eqn:66b} gives us\n\n\\begin{equation}\n\\MCB = \\frac{(ak)^{\\frac{X}{Z}}}{bk} \\sum \\limits_{i=0}^{\\infty}\n \\binom{\\frac{1}{Z}}{i} \\frac{(bk)^{Yi}}{(ak)^{Xi}} \\qquad\n\\MCA = {(ak)^{\\frac{X}{Z}-1}} \\sum \\limits_{i=0}^{\\infty}\n \\binom{\\frac{1}{Z}}{i} \\frac{(bk)^{Yi}}{(ak)^{Xi}}\n\\label{eqn:67}\n\\end{equation}\nThe binomial coefficient $\\displaystyle{\\binom{\\frac{1}{Z}}{i}}$ and ratio $\\displaystyle{\\frac{(bk)^{Yi}}{(ak)^{Xi}}}$ in both formulas in \\cref{eqn:67} are rational for all $i$, as are their products, regardless of the value of $k$. Hence we consider the terms $\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ in \\cref{eqn:67}. Since the integer $A=ak$ is reduced (not a perfect power), there are only three possibilities that could ensure these terms are rational:\n\\begin{enumerate}\n\\item $a$ is a perfect power of $Z$, or\n\\item $X$ is an integer multiple of $Z$, or\n\\item $k > 1$ such that $ak$ is a perfect power of $Z$.\n\\end{enumerate}\n\nSuppose $a$ is a perfect power of $Z$. Since $A=ak$ cannot be a perfect power of $Z$ given that $A$ is reduced, $k$ must be greater than 1 so that when factored from composite $A$, the remaining factors of $A$ are a perfect power of $Z$. Hence $k>1$ and thus gcd$(A,B,C)>1$.\n\nSuppose instead $X$ is an integer multiple of $Z$, hence exponent $X=iZ$ for some positive integer $i$. However, per \\cref{Dfn:2.1_X_cannot_be_mult_of_Z} on page \\pageref{Dfn:2.1_X_cannot_be_mult_of_Z}, $X$ is not an integer multiple of $Z$. Thus if $k=1$, the term $\\displaystyle{(ak)^{\\frac{X}{Z}}=A^{\\frac{X}{Z}}=a^{\\frac{X}{Z}}}$ is not a perfect power and thus $\\MCB$ and $\\MCA$ are irrational, contradicting their given rationality.
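Newton's generalized binomial series used in Eqn (67) can be verified numerically for a sample exponent and ratio. The helper below computes the generalized binomial coefficient directly; note that convergence of the series requires the ratio $(bk)^Y/(ak)^X$ to be below 1, so the sample value of $x$ is chosen accordingly (an illustration only):

```python
from math import isclose

def gen_binom(r, i):
    """Generalized binomial coefficient C(r, i) for real r, integer i >= 0."""
    out = 1.0
    for n in range(i):
        out *= (r - n) / (n + 1)
    return out

# Partial sums of (1 + x)^(1/Z) = sum_i C(1/Z, i) * x^i, as in Eqn (67),
# with x standing in for the ratio (bk)^Y / (ak)^X, here assumed < 1.
Z, x = 3, 0.2
s = sum(gen_binom(1 / Z, i) * x**i for i in range(60))
assert isclose(s, (1 + x) ** (1 / Z), rel_tol=1e-12)
```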
If instead $k$ is a power of $a$, say $k=a^j$ for some positive integer $j$, then $\\displaystyle{(ak)^{\\frac{X}{Z}}=a^{\\frac{(j+1)X}{Z}}}$, for which $(j+1)X$ could be a multiple of $Z$. Thus $\\MCB$ and $\\MCA$ are rational only for some values $k>1$ and thus gcd$(A,B,C)>1$.\n\n\n\\bigskip\n\n\\noindent\\textbf{Case 2 of 2: the terms of $\\bm{\\MCB }$ and $\\bm{\\MCA }$ are irrational}\n\nConsidering the second case, assume $\\displaystyle{\\left(1+ \\frac{(bk)^Y}{(ak)^X}\\right)^{\\frac{1}{Z}}}$ in \\cref{eqn:66b} is irrational, and thus $\\displaystyle{\\frac{(ak)^{\\frac{X}{Z}}}{bk}}$ and $(ak)^{\\frac{X}{Z}-1}$ must also be irrational so that the irrationality cancels under multiplication. If slopes $\\MCB$ and $\\MCA$ in \\cref{eqn:66b} are to be rational despite these irrational factors, then both slopes in \\cref{eqn:64a} must likewise be rational.\n\nWe can re-express slopes $\\MCB$ and $\\MCA$ from \\cref{eqn:61a} as\n\\begin{equation}\n\\MCB = \\frac{C}{\\sqrt[Y]{C^Z-A^X}} \\qquad\\qquad\\quad\n\\MCA = \\frac{C}{\\sqrt[X]{C^Z-B^Y}}\n\\label{eqn:68}\n\\end{equation}\nin which the denominators must both be rational. Per \\cref{Thm:2.8_Functional_Form}, the denominators in \\cref{eqn:68} can be reparameterized and thus \\cref{eqn:68} becomes\n\\begin{equation}\n\\MCB = \\frac{C}{(C+A)M_{_1}\\alphaCA -CAM_{_1}\\betaCA } \\qquad\\qquad\n\\MCA = \\frac{C}{(C+B)M_{_2}\\alphaCB -CBM_{_2}\\betaCB }\n\\label{eqn:69}\n\\end{equation}\nwhere $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ are positive rational numbers and $M_{_1}$ and $M_{_2}$ are positive scalars.
Per \\cref{Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta}, $\\alphaCA$, $\\alphaCB$, $\\betaCA$ and $\\betaCB$ are defined as\n\\begin{subequations}\n\\begin{align}\n\\alphaCA &=\\sqrt[Y]{C^{Z-Y}-A^{X-Y}} &\n\\alphaCB &=\\sqrt[X]{C^{Z-X}-B^{Y-X}}\n\\label{eqn:70a} \\\\\n\\betaCA &=\\sqrt[Y]{C^{Z-2Y}-A^{X-2Y}} &\n\\betaCB &=\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}\n\\label{eqn:70b}\n\\end{align}\n\\end{subequations}\nPer \\cref{Thm:2.11_Coprime_Alpha_Beta_Irrational}, $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ as defined in \\cref{eqn:70a,eqn:70b} are irrational when gcd$(A,B,C)=k=1$. However, since $\\MCB$ and $\\MCA$ in \\cref{eqn:69} are rational, we need to consider the common factor in the bases.\n\nPer \\cref{Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime}, $\\displaystyle{\\sqrt[Y]{C^{Z-Y}-A^{X-Y}}}$ and $\\displaystyle{\\sqrt[Y]{C^{Z-2Y}-A^{X-2Y}}}$ are rational only if gcd$(A,B,C)>1$. By extension, $\\displaystyle{\\sqrt[X]{C^{Z-X}-B^{Y-X}}}$ and $\\displaystyle{\\sqrt[X]{C^{Z-2X}-B^{Y-2X}}}$ are also rational only if gcd$(A,B,C)>1$.\n\nSince slopes $\\MCB$ and $\\MCA$ in \\cref{eqn:69} are rational, their denominators are rational, and thus $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ must be rational. Per the definitions of $\\alphaCA$, $\\alphaCB$, $\\betaCA$, and $\\betaCB$ in \\cref{eqn:70a,eqn:70b} and given \\cref{Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime}, in which these terms can only be rational when gcd$(A,B,C)>1$, we conclude $k$ must be greater than 1 and thus gcd$(A,B,C)>1$.\n\n\\bigskip\n\nTherefore, as a consequence of both cases 1 and 2, for slopes $\\MCB$ and $\\MCA$ to be rational when $A^X\\neq B^Y$ requires gcd$(A,B,C)>1$, and thus $A$, $B$, and $C$ must share a common factor greater than 1.\n\n\\bigskip\n\n\\textbf{Scenario 2 of 2: $\\bm{\\displaystyle{A^X=B^Y}}$.} If $A^X=B^Y$, then gcd$(A,B)=k>1$.
Hence by definition, $k$ is a factor of $A^X+B^Y$ and of $C^Z$, and thus $A$, $B$, and $C$ must share a common factor greater than 1.\n\n\n\\bigskip\n\n\\Conjecture, when there exist integer solutions, both scenarios show that the common factor $k$ must be greater than 1 to ensure slopes $\\MCB$ and $\\MCA$ are rational. We know that each grid point $(A,B,C)$ subtends a line through the origin and that point, whereby that point is supposed to be an integer solution that satisfies the conjecture. We also know that each grid point corresponds to a set of slopes. Further, we know from \\cref{Thm:2.1_Irrational_Slope_No_Lattice} that a line through the origin with an irrational slope does not pass through any non-trivial lattice points. Since both $\\MCB$ and $\\MCA$ are rational only for some common factor $k>1$, gcd$(A,B,C)=k>1$ is required. We know $\\MBA$ is always rational, but $\\MCB$ and $\\MCA$ can be rational only when gcd$(A,B,C)>1$ and only for certain common factors; in that case the lines go through non-trivial lattice points, and thus these slopes mean there can be integer solutions for $A$, $B$, and $C$. Hence there can be integer solutions satisfying the conjecture only when gcd$(A,B,C)>1$.\n\\end{proof}\n\\label{Section:Possibility_End}\n\n\n\\bigskip\n\n\n\\section{Conclusion}\nEvery set of values that satisfies the Tijdeman-Zagier conjecture corresponds to a lattice point on a multi-dimensional Cartesian grid. Together with the origin, this point defines a line in multi-dimensional space. This line requires a rational slope in order for it to pass through a non-trivial lattice point. Hence the core of the various proofs contained herein centers on the irrationality of the slope based on the coprimality of the terms.
Several key steps were required to establish this relationship and then support the proof.\n\n\\cref{Thm:2.2_Coprime,Thm:2.3_Coprime,Thm:2.4_Coprime} establish that within the relation \\BealsEq\\, if any pair of the terms $A$, $B$, and $C$ is coprime, then all 3 terms must be coprime, and if all 3 terms are coprime, then each pair of terms must likewise be coprime. Likewise, \\cref{Thm:2.5_X_cannot_be_mult_of_Z} establishes a similarly restrictive relationship between the exponents, namely that exponents $X$ and $Y$ cannot be integer multiples or unit fractions of exponent $Z$ and that $Z$ cannot be an integer multiple of $X$ or $Y$.\n\n\\cref{Thm:2.6_Initial_Expansion_of_Differences,Thm:2.7_Indeterminate_Limit} establish that the difference of powers can be factored and expanded based on an arbitrary and indeterminate upper limit.\n\n\\cref{Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta} establish that $A^X$ can be parameterized as a linear combination of $C+B$ and $CB$, with two parameters. \\cref{Thm:2.10_No_Solution_Alpha_Beta_Irrational,Thm:2.11_Coprime_Alpha_Beta_Irrational,Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate} establish that when these parameters are irrational there can be no integer solution satisfying the conjecture, and that if gcd$(A,B,C)=1$, then these parameters must be irrational. \\cref{Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime} establishes that if\nthese two parameters are rational, then gcd$(A,B,C)>1$.\n\nThe relationships between coprimality of terms and irrationality of the parameters\n(\\cref{Thm:2.8_Functional_Form,Thm:2.9_Real_Alpha_Beta,Thm:2.10_No_Solution_Alpha_Beta_Irrational,Thm:2.11_Coprime_Alpha_Beta_Irrational,Thm:2.12_Rational_Alpha_Beta_Rational_Then_Not_Coprime,Thm:2.13_Coprime_Any_Alpha_Beta_Irrational_Indeterminate}) are critical to the slopes that are core to the remaining theorems. 
It is shown that the slopes are functions of these parameters and thus the irrationality properties of the parameters translate to irrationality conditions for the slopes.\n\n\\cref{Thm:2.1_Irrational_Slope_No_Lattice} establishes that a line with an irrational slope that passes through the origin will not pass through any non-trivial lattice points. This simple, subtle theorem is critical to the proof since the link between irrationality of slope and non-integer solutions is key to relating the outcomes to coprimality of terms. The logic of the proof is that integer solutions which satisfy the conjecture can be expressed only with a set of rational slopes and thus tests of the slope rationality are equivalent to tests of the integrality of the solution.\n\n\\cref{Thm:2.14_Main_Proof_Coprime_No_Solutions} establishes that when gcd$(A,B,C)=1$, the slopes are irrational. Thus if the slopes are irrational, then the line that is equivalent to the integer solution does not pass through non-trivial lattice points, hence there is no integer solution. \\cref{Thm:2.15_Main_Proof_Solutions_Then_Not_Coprime} establishes the reverse, namely that the slopes of the corresponding lines can only be rational when gcd$(A,B,C)>1$, and that integer solutions satisfying the conjecture fall on the lines with rational slopes.\n\nAny proof of the Tijdeman-Zagier conjecture requires four conditions be satisfied:\n\\begin{itemize}\n\\item $A$, $B$, $C$, $X$, $Y$, and $Z$ are positive integers.\n\\item $X,Y,Z\\geq3$\n\\item \\BealsEq\n\\item gcd$(A,B,C)=1$\n\\end{itemize}\nSince the set of values that satisfy the conjecture is directly a function of rationality of slopes, we have demonstrated the explicit linkage between the coprimality aspect of the conjecture, the integer requirement of the framework, and properties of slopes of lines through the origin. Via contradiction these theorems prove the four conditions cannot be simultaneously met. 
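The four conditions can also be probed computationally. The following brute-force search (a hedged illustration with an arbitrary bound; it does not replace the proof) enumerates small solutions of $A^X+B^Y=C^Z$ with all exponents at least 3 and confirms that every solution found, such as $3^3+6^3=3^5$, has gcd$(A,B,C)>1$:

```python
from math import gcd

N = 10**6  # arbitrary search bound on the values A^X, B^Y, C^Z

# value -> list of (base, exponent) pairs with exponent >= 3
powers = {}
base = 2
while base ** 3 <= N:
    value, exp = base ** 3, 3
    while value <= N:
        powers.setdefault(value, []).append((base, exp))
        value *= base
        exp += 1
    base += 1

# collect all (A, X, B, Y, C, Z, gcd) with A^X + B^Y = C^Z within the bound
solutions = []
vals = sorted(powers)
for ax in vals:
    for by in vals:
        cz = ax + by
        if cz in powers:
            for (a, x) in powers[ax]:
                for (b, y) in powers[by]:
                    for (c, z) in powers[cz]:
                        solutions.append((a, x, b, y, c, z, gcd(gcd(a, b), c)))

assert solutions                              # hits such as 3^3 + 6^3 = 3^5 exist
assert all(g > 1 for *_, g in solutions)      # every hit shares a common factor
```

Within this bound no coprime solution appears, consistent with the much larger published computer searches cited above.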
Given the full exhaustiveness and mutual exclusivity of the theorems, the totality of the conjecture is thus proven.\n\n\n\\bigskip\n\n\n\n\n\\section*{Acknowledgment}\nThe authors acknowledge and thank emeritus Professor Harry Hauser for guidance and support in shaping, wordsmithing, and expounding the theorems, proofs, and underlying flow of the document, and for the tremendous array of useful suggestions throughout.\n\n\n\n\n\\section*{References}\n\n\\begin{biblist}\n\n\\bib{anni2016modular}{article}{,\n   title={Modular elliptic curves over real abelian fields and the\n   generalized Fermat equation $x^{2l}+ y^{2m}= z^p$},\n   author={Anni, Samuele},\n   author={Siksek, Samir},\n   journal={Algebra \\& Number Theory},\n   volume={10},\n   number={6},\n   pages={1147--1172},\n   year={2016},\n   publisher={Mathematical Sciences Publishers},\n   doi={https:\/\/doi.org\/10.2140\/ant.2016.10.1147}}\n\n\\bib{beauchamp2018}{article}{,\n   title={A Proof for Beal's Conjecture},\n   author={Beauchamp, Julian TP},\n   journal={viXra},\n   note={www.vixra.org\/abs\/1808.0567},\n   date={2018-9-05} }\n\n\\bib{beauchamp2019}{article}{,\n   title={A Concise Proof for Beal's Conjecture},\n   author={Beauchamp, Julian TP},\n   journal={viXra},\n   note={www.vixra.org\/abs\/1906.0199},\n   date={2019-6-13} }\n\n\\bib{bennett2006equation}{article}{,\n   title = {The equation $x^{2n}+y^{2n}=z^5$},\n   author = {Bennett, Michael A},\n   journal = {Journal de Th{\\'e}orie des Nombres de Bordeaux},\n   volume = {18},\n   number = {2},\n   pages = {315--321},\n   year = {2006},\n   doi={https:\/\/doi.org\/10.5802\/jtnb.546} }\n\n\\bib{bennett2015generalized}{article}{,\n   title={Generalized Fermat equations: a miscellany},\n   author={Bennett, Michael A},\n   author={Chen, Imin},\n   author={Dahmen, Sander R},\n   author={Yazdani, Soroosh},\n   journal={International Journal of Number Theory},\n   volume={11},\n   number={01},\n   pages={1--28},\n   year={2015},\n   publisher={World Scientific},\n   doi={https:\/\/doi.org\/10.1142\/S179304211530001X} 
}\n\n\\bib{beukers1998}{article}{,\n   title={The Diophantine equation $Ax^p+By^q=Cz^r$},\n   author={Beukers, Frits},\n   journal={Duke Mathematical Journal},\n   month={01},\n   year={1998},\n   volume={91},\n   number={1},\n   pages={61--88},\n   publisher={Duke University Press},\n   doi={https:\/\/doi.org\/10.1215\/S0012-7094-98-09105-0} }\n\n\\bib{beukers2020generalized}{article}{,\n   title={The generalized Fermat equation},\n   author={Beukers, Frits},\n   note={https:\/\/dspace.library.uu.nl\/handle\/1874\/26637},\n   date={2006-01-20} }\n\n\\bib{billerey2018some}{article}{,\n   title={Some extensions of the modular method and Fermat equations of signature $(13, 13, n)$},\n   author={Billerey, Nicolas},\n   author={Chen, Imin},\n   author={Dembele, Lassina},\n   author={Dieulefait, Luis},\n   author={Freitas, Nuno},\n   journal={arXiv preprint arXiv:1802.04330},\n   year={2018} }\n\n\\bib{crandall2006prime}{book}{,\n   title={Prime numbers: a computational perspective},\n   author={Crandall, Richard},\n   author={Pomerance, Carl B},\n   volume={182},\n   year={2006},\n   publisher={Springer Science \\& Business Media} }\n\n\\bib{dahmen2013perfect}{article}{,\n   title={Perfect powers expressible as sums of two fifth or seventh powers},\n   author={Dahmen, Sander R},\n   author={Siksek, Samir},\n   journal={arXiv preprint arXiv:1309.4030},\n   year={2013} }\n\n\\bib{darmon1995equations}{article}{,\n   title={On the equations $z^m=F(x, y)$ and $Ax^p+By^q=Cz^r$},\n   author={Darmon, Henri},\n   author={Granville, Andrew},\n   journal={Bulletin of the London Mathematical Society},\n   volume={27},\n   number={6},\n   pages={513--543},\n   year={1995},\n   publisher={Wiley Online Library},\n   doi={https:\/\/doi.org\/10.1112\/blms\/27.6.513} }\n\n\\bib{de2016solutions}{article}{,\n   title={Solutions to Beal's Conjecture, Fermat's last theorem and Riemann Hypothesis},\n   author={{d}e Alwis, A.C. 
Wimal Lalith},\n journal={Advances in Pure Mathematics},\n volume={6},\n number={10},\n pages={638--646},\n year={2016},\n publisher={Scientific Research Publishing},\n doi={https:\/\/doi.org\/10.4236\/apm.2016.610053} }\n\n\\bib{di2013proof}{article}{,\n title={Proof for the Beal conjecture and a new proof for Fermat's last theorem},\n author={Di Gregorio, Leandro Torres},\n journal={Pure and Applied Mathematics Journal},\n volume={2},\n number={5},\n pages={149--155},\n year={2013},\n doi={https:\/\/doi.org\/10.11648\/j.pamj.20130205.11} }\n\n\\bib{durango}{webpage}{,\n title={The Search for a Counterexample to Beal's Conjecture},\n author={Durango, Bill},\n year={2002},\n url={http:\/\/www.durangobill.com\/BealsConjecture.html},\n note={Computer search results} }\n\n\\bib{edwards2005platonic}{book}{,\n title={Platonic solids and solutions to $x^2+y^3=dZ^r$},\n author={Edwards, Edward Jonathan},\n year={2005},\n publisher={Utrecht University} }\n\n\\bib{elkies2007abc}{article}{\n title={The ABC's of number theory},\n author={Elkies, Noam},\n journal={The Harvard College Mathematics Review},\n year={2007},\n publisher={Harvard University},\n note={http:\/\/nrs.harvard.edu\/urn-3:HUL.InstRepos:2793857} }\n\n\\bib{fjelstad1991extending}{article}{,\n title={Extending Pascal's triangle},\n author={Fjelstad, P},\n journal={Computers \\& Mathematics with Applications},\n volume={21},\n number={9},\n pages={1--4},\n year={1991},\n publisher={Elsevier},\n doi={https:\/\/doi.org\/10.1016\/0898-1221(91)90119-O} }\n\n\\bib{gaal2013sum}{article}{,\n title={The sum of two S-units being a perfect power in global function fields},\n author={Ga{\\'a}l, Istv{\\'a}n},\n author={Pohst, Michael},\n journal={Mathematica Slovaca},\n volume={63},\n number={1},\n pages={69--76},\n year={2013},\n publisher={Springer},\n doi={https:\/\/doi.org\/10.2478\/s12175-012-0083-0} }\n\n\\bib{ghosh2011proof}{book}{,\n title={The Proof of the Beal's Conjecture},\n author={Ghosh, Byomkes Chandra},\n 
publisher={2006 Hawaii International Conference on Statistics, Mathematics and Related Fields,\n   Honolulu, Hawaii, USA},\n   year={2011},\n   note={https:\/\/ssrn.com\/abstract=1967491} }\n\n\\bib{joseph2018another}{article}{,\n   title={Another Proof of Beal's Conjecture},\n   author={Joseph, James E},\n   author={Nayar, Bhamini P},\n   journal={Journal of Advances in Mathematics},\n   volume={14},\n   number={2},\n   pages={7878--7879},\n   year={2018},\n   doi={https:\/\/doi.org\/10.24297\/jam.v14i2.7587} }\n\n\\bib{koshy2002elementary}{book}{,\n   title={Elementary number theory with applications},\n   author={Koshy, Thomas},\n   year={2002},\n   pages={545},\n   publisher={Academic press},\n   note={https:\/\/doi.org\/10.1017\/S0025557200173255} }\n\n\\bib{kraus1998equation}{article}{,\n   title={Sur l'{\\'e}quation $a^3+b^3=c^p$},\n   author={Kraus, Alain},\n   journal={Experimental Mathematics},\n   volume={7},\n   number={1},\n   pages={1--13},\n   year={1998},\n   publisher={Taylor \\& Francis},\n   doi={https:\/\/doi.org\/10.1080\/10586458.1998.10504355} }\n\n\\bib{macmillan2011proofs}{article}{,\n   title={Proofs of power sum and binomial coefficient congruences via Pascal's identity},\n   author={MacMillan, Kieren},\n   author={Sondow, Jonathan},\n   journal={The American Mathematical Monthly},\n   volume={118},\n   number={6},\n   pages={549--551},\n   year={2011},\n   publisher={Taylor \\& Francis},\n   doi={https:\/\/doi.org\/10.4169\/amer.math.monthly.118.06.549} }\n\n\\bib{beal1997generalization}{article}{\n   title={A generalization of Fermat's last theorem: the Beal conjecture and prize problem},\n   author={Mauldin, R. 
Daniel},\n   journal={Notices of the AMS},\n   volume={44},\n   number={11},\n   year={1997},\n   note={https:\/\/www.ams.org\/notices\/199711\/beal.pdf} }\n\n\\bib{merel1997winding}{article}{,\n   title={Winding quotients and some variants of Fermat's Last Theorem},\n   author={Merel, Loic},\n   author={Darmon, Henri},\n   journal={Journal f{\\"u}r die reine und angewandte Mathematik},\n   volume={1997},\n   number={490},\n   pages={81--100},\n   year={1997},\n   publisher={De Gruyter},\n   doi={https:\/\/doi.org\/10.1515\/crll.1997.490.81} }\n\n\\bib{metsankyla2004catalan}{article}{,\n   title={Catalan's conjecture: another old Diophantine problem solved},\n   author={Mets{\\"a}nkyl{\\"a}, Tauno},\n   journal={Bulletin of the American Mathematical Society},\n   volume={41},\n   number={1},\n   pages={43--57},\n   year={2004},\n   doi={https:\/\/doi.org\/10.1090\/S0273-0979-03-00993-5} }\n\n\\bib{mihailescu2004primary}{article}{,\n   title={Primary cyclotomic units and a proof of Catalan's conjecture},\n   author={Mihailescu, Preda},\n   journal={Journal f{\\"u}r die reine und angewandte Mathematik},\n   volume={572},\n   pages={167--196},\n   year={2004},\n   doi={https:\/\/doi.org\/10.1515\/crll.2004.048} }\n\n\\bib{miyazaki2015upper}{article}{,\n   title={Upper bounds for solutions of an exponential Diophantine equation},\n   author={Miyazaki, Takafumi},\n   journal={Rocky Mountain Journal of Mathematics},\n   volume={45},\n   number={1},\n   pages={303--344},\n   year={2015},\n   publisher={Rocky Mountain Mathematics Consortium},\n   doi={https:\/\/doi.org\/10.1216\/RMJ-2015-45-1-303} }\n\n\\bib{nathanson2016diophantine}{article}{,\n   title={On a diophantine equation of MJ Karama},\n   author={Nathanson, Melvyn B},\n   journal={arXiv preprint arXiv:1612.03768},\n   year={2016} }\n\n\\bib{nitaj1995conjecture}{article}{,\n   title={On a Conjecture of Erd{\\"o}s on 3-Powerful Numbers},\n   author={Nitaj, Abderrahmane},\n   journal={Bulletin of the London Mathematical Society},\n   volume={27},\n   number={4},\n   pages={317--318},\n   year={1995},\n   
publisher={Wiley Online Library},\n doi={https:\/\/doi.org\/10.1112\/blms\/27.4.317} }\n\n\\bib{norvig2010beal}{article}{,\n title={Beal's conjecture: A search for counterexamples},\n author={Norvig, Peter},\n journal={norvig.com, Retrieved 2014-03},\n volume={6},\n year={2010},\n note={http:\/\/norvig.com\/beal.html} }\n\n\\bib{poonen1998some}{article}{,\n title={Some diophantine equations of the form $x^n+y^n=z^m$},\n author={Poonen, Bjorn},\n journal={Acta Arithmetica},\n volume={86},\n number={3},\n pages={193--205},\n year={1998},\n doi={https:\/\/doi.org\/10.4064\/aa-86-3-193-205} }\n\n\\bib{poonen2007twists}{article}{,\n title={Twists of $X(7)$ and primitive solutions to $x^2+y^3=z^7$},\n author={Poonen, Bjorn},\n author={Schaefer, Edward F},\n author={Stoll, Michael},\n journal={Duke Mathematical Journal},\n volume={137},\n number={1},\n pages={103--158},\n year={2007},\n publisher={Duke University Press},\n doi={https:\/\/doi.org\/10.1215\/S0012-7094-07-13714-1} }\n\n\\bib{siksek2012partial}{article}{,\n title={Partial descent on hyperelliptic curves and the generalized Fermat equation\n $x^3+y^4+z^5=0$},\n author={Siksek, Samir},\n author={Stoll, Michael},\n journal={Bulletin of the London Mathematical Society},\n volume={44},\n number={1},\n pages={151--166},\n year={2012},\n publisher={Wiley Online Library},\n doi={https:\/\/doi.org\/10.1112\/blms\/bdr086} }\n\n\\bib{siksek2014generalised}{article}{,\n title={The generalised Fermat equation $x^2+ y^3=z^{15}$},\n author={Siksek, Samir},\n author={Stoll, Michael},\n journal={Archiv der Mathematik},\n volume={102},\n number={5},\n pages={411--421},\n year={2014},\n publisher={Springer},\n doi={https:\/\/doi.org\/10.1007\/s0001} }\n\n\\bib{shanks2001solved}{book}{,\n title={Solved and unsolved problems in number theory},\n author={Shanks, Daniel},\n volume={297},\n year={2001},\n publisher={American Mathematical Soc.} }\n\n\\bib{stillwell2012numbers}{book}{,\n title={Numbers and geometry},\n 
author={Stillwell, John},\n   year={2012},\n   pages={133},\n   publisher={Springer Science \\& Business Media} }\n\n\\bib{taylor1995ring}{article}{,\n   title={Ring-theoretic properties of certain Hecke algebras},\n   author={Taylor, Richard},\n   author={Wiles, Andrew},\n   journal={Annals of Mathematics},\n   pages={553--572},\n   year={1995},\n   publisher={JSTOR},\n   doi={https:\/\/doi.org\/10.2307\/2118560},\n   url={https:\/\/www.jstor.org\/stable\/2118560} }\n\n\\bib{townsend2010search}{article}{,\n   title={The Search for Maximal Values of min$(A,B,C)$\/gcd$(A,B,C)$ for $A^x+B^y=C^z$},\n   author={Townsend, Arthur R},\n   journal={arXiv preprint arXiv:1004.0430},\n   year={2010} }\n\n\\bib{vega2020complexity}{article}{,\n   title={The Complexity of Mathematics},\n   author={Vega, Frank},\n   journal={Preprints},\n   year={2020},\n   publisher={MDPI AG},\n   doi={https:\/\/doi.org\/10.20944\/preprints202002.0379.v4} }\n\n\\bib{waldschmidt2004open}{article}{,\n   title={Open diophantine problems},\n   author={Waldschmidt, Michel},\n   journal={Moscow Mathematical Journal},\n   volume={4},\n   number={1},\n   pages={245--305},\n   year={2004},\n   publisher={Independent University of Moscow--MCCME},\n   doi={https:\/\/doi.org\/10.17323\/1609-4514-2004-4-1-245-305},\n   note={https:\/\/arxiv.org\/pdf\/math\/0312440.pdf} }\n\n\\bib{waldschmidt2009perfect}{article}{,\n   title={Perfect Powers: Pillai's works and their developments},\n   author={Waldschmidt, Michel},\n   journal={arXiv preprint arXiv:0908.4031},\n   year={2009} }\n\n\\bib{wiles1995modular}{article}{,\n   title={Modular elliptic curves and Fermat's last theorem},\n   author={Wiles, Andrew},\n   journal={Annals of mathematics},\n   volume={141},\n   number={3},\n   pages={443--551},\n   year={1995},\n   publisher={JSTOR},\n   doi={https:\/\/doi.org\/10.2307\/2118559} 
}\n\n\\end{biblist}\n\n\\end{document}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMagneto-transport of two-dimensional electrons is an interesting but yet complicated topic in condensed matter physics. Its various behaviors, such as the Shubnikov-de Haas oscillation~\\cite{Physics Kinetics}, quantum Hall conductance~\\cite{Physics Kinetics}, linear magnetoresistance~\\cite{PhysRevB.58.2788}, etc., contain a wealth of information about the underlying systems. However, one of the simplest questions in this field, i.e., how the electron transports through disordered materials under a magnetic field in the classical regime, has not been fully understood yet. \n\nIn the classical ($\\omega_c\\tau \\lesssim 1$) regime, the electron transport can be generally described by the Boltzmann equation~\\cite{Boltzmann}. However, it has been pointed out that the Boltzmann equation has to be revised to incorporate the non-Markovian effect (also called memory effect \\cite{Phys. Rev. Lett. 75 197, J. Stat. Phys. 87 5\/6, PhysRevLett.89.266804, PRB2003, PRB2008, PRB2005}) resulting from either repeatedly scattering on the same impurity, or repeatedly passing through a region without scattering (the latter one is also called Corridor effect~\\cite{Phys. Rev. Lett. 75 197, J. Stat. Phys. 87 5\/6, PhysRevLett.89.266804, PRB2003}). In addition to the memory effect, there is an equally important issue that needs to be addressed, i.e. how the magnetic field affects a single electron-impurity scattering event. This problem has a fundamental difficulty in defining scattering parameters as the incoming and outgoing asymptotic trajectories are bent by the magnetic field. \n\nIn this work, we introduce a general recipe based on an abstraction of the actual impurity scattering process to define scattering parameters for the single elastic impurity scattering. 
It yields the conventional scattering parameters in the absence of the magnetic field. More importantly, it can introduce an appropriate set of scattering parameters in the presence of magnetic field to calculate the differential cross section. Specifically, the real scattering process can be abstracted into a sudden switch between the initial asymptotic and final asymptotic trajectory. \nIn this classical picture, we can conveniently describe the skew scattering~\\cite{Physica.24.1958} and coordinate jump~\\cite{PhysRevB.2.4559}, which will eventually modify the Boltzmann equation.\nWe then apply this recipe to the two-dimensional\nLorentz model ~\\cite{PhysRevA.25.533} where free electrons are subject to in-plane electric field and out-of-plane magnetic field, and scattered by randomly distributed hard-disk impurities. \n\nWe show the following results. 1) The magnetoresistivity is a negative parabolic function of magnetic field. Our result, together with the one from the previous theory of corridor effect \\cite{PRB2003} yields a more accurate magnetoresistivity, closer to the numerical result \\cite{PhysRevLett.89.266804}. 2) The obtained Hall coefficient becomes magnetic field-dependent, deviating from the Drude theory. For experiments, this deviation needs to be taken into account when converting the measured Hall coefficients to real electron densities. 3) The longitudinal relaxation time obtained in our theory depends on magnetic field which deviates from the Drude theory. \n\nThis paper is organized in the following way. In Section II, we present\nthe general recipe to define scattering parameters for the impurity scattering, and use it to discuss the skew scattering and coordinate jump under magnetic field. The conventional Boltzmann equation is thus modified by these two mechanisms in the linear response regime~\\cite{LRT}. 
In Section III, we solve the modified Boltzmann equation for the two-dimensional Lorentz model and derive the\nanomalous Hall resistivity and negative magnetoresistivity. In Section IV we compare our result with relevant\nsimulations and experiments. Finally, we introduce a phenomenological method to include skew scattering into the Drude model. \n\n\n\\section{Classical theory of impurity scattering and electron transport under magnetic field}\n\nIn this section, we will formulate a classical theory of impurity scattering and electron transport in a two-dimensional plane under an external perpendicular magnetic field. Our theory only considers a single scattering event and ignores the well-studied non-Markovian and localization effects. One possible application of our theory is the electron transport in randomly distributed two-dimensional anti-dots under magnetic field. The anti-dots are geometrical holes punched into a two-dimensional electron gas (2DEG) on semiconductor GaAs \\cite{APL.70.2309, Mirlin2001, Weiss1995, PhysRevLett.77.147}. \n\nOur theory requires $a\\ll l$, which is the necessary condition to avoid repeated scattering at the same impurity. Summing up all the above requirements, the pre-condition of our theory is $a\\ll l<R$, where $a$ is the impurity radius, $l$ the mean free path, and $R$ the cyclotron radius. We restrict the scattering to occur at $t=0$, and extend the real incoming and outgoing trajectories into imaginary trajectories defined for $t<0$ and $t>0$, respectively. We call those imaginary trajectories the initial and final asymptote, respectively. \nWe define this method as the abstraction of the impurity scattering process, as it only keeps the essence of the scattering process, i.e. the transition from the initial asymptote to the final asymptote, and abstracts the detail of the transition as a sudden switch. \n\nThere is a degree of freedom in the above procedure. Note that even though we have restricted the scattering to occur at $t=0$, this point itself is not well defined. In other words, we have the freedom to define this artificial point. 
For a central scattering potential, we can fix this issue by requiring that at $t=0$ the electron reaches the point in the initial asymptote closest to the scatterer. We call this point the starting point (represented by the red dot in Fig.~\\ref{fig_event_line}). If the scattering potential respects the rotational symmetry, the starting points of different initial asymptotes form a straight line called the event line, which marks the occurrence of the scattering event, as illustrated in Fig.~\\ref{fig_event_line}. It turns out that the event line is orthogonal to the initial asymptotes and passes through the center of the scatterer. \n\nWith the help of the abstraction of the impurity scattering process, we define the scattering parameters as follows. We define the distance between the starting point and the scattering center to be the impact parameter, the momentum at $t=0_-$ and $t=0_+$ to be the incoming and outgoing momentum, respectively, and the angle between the incoming and outgoing momentum to be the scattering angle. Those scattering parameters reduce to the conventional ones in the absence of the magnetic field, as shown in Fig.~\\ref{fig_event_line}. We further define the point in the final asymptote at $t=0_+$ to be the ending point (represented by the blue dot in Fig.~\\ref{fig_event_line}). 
This definition of scattering parameters is clearly independent of the scattering details and works for any type of initial and final asymptotes.\n\nUsing the above concepts, the abstraction of the scattering process can be concisely stated as follows: the electron moves along the initial asymptote to the starting point, gets scattered to the ending point, and finally moves away from the scatterer along the final asymptote.\n\n\n\\begin{figure}[b]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.39}{\\includegraphics*{traditional_hard_ball_scattering.pdf}}\n\\caption{The illustration of the conventional hard ball scattering with no magnetic field. The red and blue empty dots are the starting point and ending point, respectively. The green line is the event line passing through the starting point and the impurity center. The initial asymptote and final asymptote are marked by dashed red and blue lines, with the incoming momentum $\\mathbf{k}$, the outgoing momentum $\\mathbf{k'}$, the scattering angle $\\theta$, and the impact parameter $b$. The coordinate jump can be divided into two directions, the transverse jump and the longitudinal jump. }\n\\label{fig_traditional hard ball}\n\\end{figure}\n\n\n\\begin{figure}[b]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.4}{\\includegraphics*{pig_picture_new.pdf}}\n\\caption{The illustration of electron scattering on a hard disk impurity with cyclotron orbit under magnetic field. The impurity radius is $a$. The cyclotron radius is $R$. \nThe red and blue solid lines are the real trajectories of the incoming and outgoing electron, respectively. The red and blue completed circles form the initial asymptote and final asymptote. The red and blue empty dots are the starting point and ending point, respectively. 
The incoming momentum\n$\\mathbf{k}$ and the outgoing momentum $\\mathbf{k}^{\\prime}$ are along the tangential direction at the starting point and ending point. The angle of scattering $\\theta$ is the angle between $\\mathbf{k}$ and $\\mathbf{k'}$. }\n\\label{fig_pig_picture}\n\\end{figure}\n\n\n\\subsection{Application to hard disk potential}\n\nWe first apply the abstraction of the scattering process to a hard disk potential in the absence of magnetic field. By applying it to this fully known case, we aim at a consistency check of the correctness of our theory. \nConsider an electron incident on a hard disk potential with a straight-line trajectory (Fig.~\\ref{fig_traditional hard ball}). The real trajectory (solid lines) changes its direction after the electron hits the scatterer. However, the initial and final asymptote (dashed lines) can be elongated along the real trajectory and pass through the scatterer. The event line, which marks the occurrence of the scattering event, passes through the center of the scatterer and the starting point (red empty dot) on the initial asymptote. The incoming momentum $\\mathbf{k}$ and outgoing momentum $\\mathbf{k'}$ are defined at the starting point (red empty dot) and ending point (blue empty dot) on the initial and final asymptote, respectively. \n\nIn contrast, in the presence of magnetic field, the trajectory is bent, and we use the abstraction of the scattering process discussed in the previous subsections to define scattering parameters, as shown in Fig.~\\ref{fig_pig_picture}. The incoming momentum $\\mathbf{k}$ and outgoing momentum $\\mathbf{k'}$ cannot be defined straightforwardly, because the directions of the initial\/final asymptotes change over time. As shown in Fig.~\\ref{fig_pig_picture}, the red and blue dashed lines are the asymptotic trajectories which complete the circular trajectory. 
\nThe incoming $\\mathbf{k}$ and outgoing $\\mathbf{k'}$ are defined along the tangential direction to the initial asymptote and final asymptote at the starting point and ending point, respectively (see Fig.~\\ref{fig_pig_picture}). The $\\mathbf{k}$ and $\\mathbf{k'}$ are rotated by the same angle per unit time. \n\nIn Appendix \\ref{APP-E}, we demonstrate how the abstraction method can be applied to a soft potential under magnetic field. \n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.38}{\\includegraphics*{cross_section_tick_pi.pdf}}\n\\caption{The plots of the differential cross sections of the two processes $\\mathbf{k} \\rightarrow \\mathbf{k'}$ and $\\mathbf{k'} \\rightarrow \\mathbf{k}$, respectively, in the unit of impurity radius $a$. The ratio $\\frac{a}{R}=0.16$. The $\\Omega_{\\mathbf{kk}^{\\prime}}$ does not overlap with $\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}$, which leads to skew scattering. }\n\\label{fig_cross section curve}\n\\end{figure}\n\n\n\\subsection{Skew scattering under magnetic field}\nIn this section, we discuss the skew scattering in the classical picture. As shown in the previous literature, the antisymmetric part of the probability of scattering $W_{ \\mathbf{k} \\mathbf{k}^\\prime}$ leads to the skew scattering~\\cite{Physica.24.1958}. $W_{ \\mathbf{k} \\mathbf{k}^\\prime}$ (the probability of the $\\mathbf{k} \\rightarrow \\mathbf{k'}$ scattering process) is related to the differential cross section as $W_{ \\mathbf{k}\\mathbf{k}^\\prime}=n_i v_{ \\mathbf{k}} \\Omega_{ \\mathbf{k}\\mathbf{k}^\\prime}$, where $n_i$ is the impurity concentration and $v_{ \\mathbf{k}}$ is the electron velocity. For hard-disk potentials, the scattering is elastic, i.e. $|v_{ \\mathbf{k}}|=|v_{ \\mathbf{k}^\\prime}|$. 
Therefore, a nontrivial antisymmetric part of $W_{ \\mathbf{k} \\mathbf{k}^\\prime}$ only comes from that $\\Omega_{ \\mathbf{k}\\mathbf{k}^\\prime} \\neq \\Omega_{ \\mathbf{k}^\\prime\\mathbf{k}}$. \n\nUsing the scattering parameters shown in Fig.~\\ref{fig_pig_picture}, the differential cross section is easily calculated by $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}=\\left\\vert \\frac{db}{d\\theta}\\right\\vert $. \nHere we use the fact that $b$ is only a function of $\\theta$ and $k=|\\mathbf{k}|$ due to the rotational symmetry and the elastic nature of scattering. For two-dimensional Lorentz model, the relation between $b$ and $\\theta$ and $k$ (with $R=\\hbar k\/(eB)$) is given by (derived in Appendix \\ref{APP-A})\n\\begin{equation}\nb(\\theta,k)=-R+\\sqrt{a^{2}+R^{2}+2aR\\cos\\frac{\\theta}{2}},\n\\label{b}\n\\end{equation}\nTherefore, the differential cross section reads as\n\\begin{equation}\n\\label{eq_kkprime}\n\\Omega_{\\mathbf{kk}^{\\prime}}=%\n\\frac{a\\sin\\frac{\\theta}{2}}%\n{2\\sqrt{1+2\\frac{a}{R}\\cos\\frac{\\theta}{2}+\\left( \\frac{a}{R}\\right) ^{2}%\n}}.\n\\end{equation}\n\nOn the other hand, \nthe differential cross section of the inverse process $\\mathbf{k}^{\\prime}\\rightarrow \\mathbf{k}$ is labeled by $\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}$, and can be calculated as follows: $\\Omega_{\\mathbf{k}^\\prime\\mathbf{k}}=\\left\\vert \\frac{db}{d\\theta}\\right\\vert_{\\theta\\rightarrow 2\\pi-\\theta} $. Therefore, its expression reads as\n\\begin{equation}\n\\label{eq_kprimek}\n\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}=\\frac{a\\sin\\frac{\\theta}{2}}%\n{2\\sqrt{1-2\\frac{a}{R}\\cos\\frac{\\theta}{2}+\\left( \\frac{a}{R}\\right) ^{2}%\n}}.\n\\end{equation}\n\nWe plot $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}$ and $\\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}$ in Fig.~\\ref{fig_cross section curve}. 
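The two expressions above can also be checked numerically (a standalone sketch; the values of $a$ and $R$ are arbitrary, chosen to match the ratio $a/R=0.16$ of the figure): $|db/d\theta|$ computed from the formula for $b(\theta,k)$ reproduces $\Omega_{\mathbf{kk}^{\prime}}$, and $\Omega_{\mathbf{k}^{\prime}\mathbf{k}}(\theta)=\Omega_{\mathbf{kk}^{\prime}}(2\pi-\theta)$, so the two cross sections differ at any finite $a/R$:

```python
import math

a, R = 1.0, 6.25  # impurity radius and cyclotron radius; a/R = 0.16

def b(theta):
    # impact parameter b(theta, k) at fixed k, with R = hbar*k/(e*B)
    return -R + math.sqrt(a**2 + R**2 + 2*a*R*math.cos(theta/2))

def omega_kkp(theta):
    # differential cross section of the k -> k' process
    r = a/R
    return a*math.sin(theta/2)/(2*math.sqrt(1 + 2*r*math.cos(theta/2) + r**2))

def omega_kpk(theta):
    # differential cross section of the inverse process k' -> k
    r = a/R
    return a*math.sin(theta/2)/(2*math.sqrt(1 - 2*r*math.cos(theta/2) + r**2))

h = 1e-6
for theta in (0.5, 1.5, 2.5, 4.0, 5.5):
    # |db/dtheta| reproduces omega_kkp
    deriv = abs((b(theta + h) - b(theta - h))/(2*h))
    assert abs(deriv - omega_kkp(theta)) < 1e-6
    # reciprocity: omega_kpk(theta) equals omega_kkp(2*pi - theta)
    assert abs(omega_kpk(theta) - omega_kkp(2*math.pi - theta)) < 1e-12

# skew scattering: the two cross sections differ at finite a/R
assert abs(omega_kkp(1.0) - omega_kpk(1.0)) > 1e-3
```

As $R\rightarrow\infty$ both expressions reduce to the field-free hard-disk result $(a/2)\sin(\theta/2)$, so the asymmetry is a pure magnetic-field effect.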
It shows that $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}\\neq \\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}$, leading to a nontrivial skew scattering contribution to the electron transport in the two-dimensional Lorentz model. In Eq. \\ref{tau-perp} in Sections III B and IV C, we will see that $\\frac{1}{\\tau^{\\perp}} \\neq 0$ ($\\frac{1}{\\tau^{\\perp}}$ is the reciprocal of the transverse relaxation time), which is the signature of skew scattering, only when $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}\\neq \\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}$. We further comment that the above inequivalence originates from the finite magnetic field, i.e. only in the limit $\\mathbf{B}\\rightarrow 0$, $R\\rightarrow \\infty$ and hence $\\Omega_{\\mathbf{k}\\mathbf{k}^\\prime}- \\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}\\rightarrow 0$. Therefore, a finite magnetic field, which breaks the time-reversal symmetry, is essential to the skew scattering mechanism. \n\n\\subsection{Coordinate jump under magnetic field}\n\nIn this section, we discuss the coordinate jump~\\cite{PhysRevB.2.4559, PhysRevB.72.045346, PhysRevB.73.075318}, labeled by $\\delta\\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}$ (the coordinate jump of the $\\mathbf{k} \\rightarrow \\mathbf{k'}$ process). In our recipe of describing the impurity scattering, it can be conveniently defined as the difference between the starting point $\\mathbf{r}_s$ and the ending point $\\mathbf{r}_e$: $\\delta\\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}=\\mathbf{r}_e-\\mathbf{r}_s$. It can be further divided into the longitudinal jump and the transverse jump, which are parallel and orthogonal to the incoming momentum $\\mathbf{k}$, respectively (Fig.~\\ref{fig_traditional hard ball}). \n\nAs the incoming momentum is along the $x$-axis, the longitudinal jump is $\\delta\\mathbf{x}_{\\mathbf{k}^\\prime \\mathbf{k}}$, and the transverse jump is $\\delta\\mathbf{y}_{\\mathbf{k}^\\prime \\mathbf{k}}$. 
Similar to the differential cross section, the coordinate jump is also a function of $\\theta$ and $k$, and can be calculated as follows based on the two-dimensional Lorentz model (derived in Appendix \\ref{APP-B})\n\\begin{align} \\label{eq_longj}\n\\delta\\mathbf{x}_{\\mathbf{k}^\\prime \\mathbf{k}}&=R\\left[ \\sin\\theta\n-\\frac{\\sin\\theta+2\\frac{a}{R}\\sin\\left( \\frac{\\theta}{2}\\right) }%\n{\\sqrt{1+\\frac{2a}{R}\\cos(\\frac{\\theta}{2})+\\frac{a^{2}}{R^{2}}}}\\right]\\mathbf{\\hat{x}}\\,,\\\\\n\\label{eq_tranj}\\delta\\mathbf{y}_{\\mathbf{k}^\\prime \\mathbf{k}}&=2R\\sin^{2}\\left(\n\\frac{\\theta}{2}\\right) \\left[ 1-\\frac{1}{\\sqrt{1+\\frac{2a}{R}\\cos\n(\\frac{\\theta}{2})+\\frac{a^{2}}{R^{2}}}}\\right] \\mathbf{\\hat{y}}\\,.\n\\end{align}\n\nGenerally, the coordinate jump has two contributions to the electron transport. First, it may induce a net jump velocity $\\mathbf{v}_{cj}$ that modifies the electronic drift velocity:\n\\begin{equation}\n\\mathbf{v}_{cj}=\\sum_{\\mathbf{k}^\\prime} W_{ \\mathbf{k} \\mathbf{k}^\\prime} \\delta \\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}=\\int_{0}^{2\\pi}d\\theta n_{i}v\\Omega_{\\mathbf{k}\\mathbf{k}^{\\prime}}\\delta\\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}, \n\\label{vcj}\n\\end{equation}\nwith $v=\\hbar k\/m$. \nSecond, it leads to an electrostatic potential difference $e\\mathbf{E}\\cdot \\delta \\mathbf{r}_{\\mathbf{k}^\\prime \\mathbf{k}}$ and thus affects the electronic equilibrium distribution function. \n\nFinally, we comment that as $\\mathbf{B}\\rightarrow 0$ the transverse jump does not have a net jump velocity, as the system respects a mirror symmetry with the mirror passing through the scatterer, parallel to $\\mathbf{k}$, and normal to the material plane. On the other hand, the longitudinal jump is not restricted by any symmetry and hence the net jump velocity is nonzero. 
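A numerical illustration of these two statements (a plain-Python sketch using a midpoint rule, in illustrative units $n_i=v=a=1$, with a very large $R$ standing in for $\mathbf{B}\rightarrow 0$; the $-\frac{3\pi}{4}n_i a^2 v$ limit of the longitudinal jump velocity is the zero-field value quoted later in the text):

```python
import math

def omega_fwd(theta, a, R):
    # Differential cross section of Eq. (eq_kkprime).
    r = a/R
    return a*math.sin(theta/2.0)/(2.0*math.sqrt(1.0 + 2.0*r*math.cos(theta/2.0) + r**2))

def jump(theta, a, R):
    # Longitudinal and transverse coordinate jumps, Eqs. (eq_longj) and (eq_tranj).
    s = math.sqrt(1.0 + 2.0*(a/R)*math.cos(theta/2.0) + (a/R)**2)
    dx = R*(math.sin(theta) - (math.sin(theta) + 2.0*(a/R)*math.sin(theta/2.0))/s)
    dy = 2.0*R*math.sin(theta/2.0)**2*(1.0 - 1.0/s)
    return dx, dy

def v_cj(a, R, n_i=1.0, v=1.0, n=100000):
    # Midpoint-rule evaluation of the net jump velocity, Eq. (vcj).
    vx = vy = 0.0
    dth = 2.0*math.pi/n
    for i in range(n):
        th = (i + 0.5)*dth
        w = n_i*v*omega_fwd(th, a, R)*dth
        dx, dy = jump(th, a, R)
        vx += w*dx
        vy += w*dy
    return vx, vy

# B -> 0 (huge R): the transverse jump velocity vanishes by mirror symmetry,
# while the longitudinal one tends to the finite value -(3*pi/4)*n_i*a^2*v.
vx, vy = v_cj(a=1.0, R=1e5)
assert abs(vx + 3.0*math.pi/4.0) < 1e-3
assert abs(vy) < 1e-3
```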
Both statements can be easily verified for the two-dimensional Lorentz model using Eqs.~\\ref{eq_kkprime}, \\ref{eq_longj}, \\ref{eq_tranj} and \\ref{vcj}. \n\n\n\\subsection{The nature of the anisotropic scattering}\n\nAt first glance, the assignment of the $t=0$ scattering events to the event line instead of the circular boundary of the scatterer is counterintuitive and artificial. However, it has a deeper physical basis. \n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.19}{\\includegraphics*{b_not_zero2.pdf}}\n\\caption{The illustration of the `$\\pi$ event' (when the scattering angle is $\\pi$) in the absence and presence of a magnetic field. }\n\\label{fig_pi_not_0}\n\\end{figure}\n\n\nThe advantage of using the event line defined in our theory instead of the colliding boundary is that the cross-sectional area (which overlaps with the event line) is the projection of the boundary. The incoming scattering events are uniformly distributed on the event line with momentum perpendicular to the event line, but not uniformly distributed on the boundary. Therefore, the number of electrons being scattered is proportional to the cross-sectional area on the event line. This makes it convenient to count the number of scattering events and the scattering cross section. \n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.35}{\\includegraphics*{cross_section_theta.pdf}}\n\\caption{The differential cross section of the $\\mathbf{k} \\rightarrow \\mathbf{k}^{\\prime}$ process in units of the impurity radius $a$. The vertical black line marks the $\\pi$ event. The red shaded area is the cross-sectional area within scattering angle $[0,\\pi]$. The green shaded area is the cross-sectional area within scattering angle $[\\pi,2\\pi]$. The $\\pi$ event unevenly divides the cross-sectional area, with the red shaded area smaller than the green shaded area. 
}\n\\label{fig_cross_theta}\n\\end{figure}\n\nIn order to understand the nature of the anisotropic scattering, we define the `$\\pi$ event' as the scattering event with scattering angle $\\theta=\\pi$. When there is no magnetic field, the `$\\pi$ event' evenly divides the cross-sectional area along the event line (Fig. \\ref{fig_pi_not_0}) and there is no skew scattering. When a magnetic field is present, the $\\pi$ event unevenly divides the cross-sectional area on the event line (Fig. \\ref{fig_pi_not_0}), resulting in the uneven division of the number of electrons being scattered up (with the scattering angle within $[0, \\pi]$) and scattered down (with the scattering angle within $[\\pi, 2\\pi]$). This is shown in Fig. \\ref{fig_cross_theta}, where the red shaded area (corresponding to the cross-sectional area being scattered up) is smaller than the green shaded area (corresponding to the cross-sectional area being scattered down). \n\nWe provide a second way to understand the anisotropic scattering in Appendix \\ref{APP-F}. \n\n\n\\subsection{Modified Boltzmann equation}\n\nThe Boltzmann equation can be generalized to include the skew scattering and the coordinate jump, reading as ($e>0$)%\n\\begin{widetext}\n\\begin{equation}\n\\left( -e\\right) \\left( \\mathbf{E+v}\\times\\mathbf{B}\\right) \\cdot\n\\frac{\\partial f_{\\mathbf{k}}}{\\hbar\\partial\\mathbf{k}}=-n_{i}v\\int_{0}^{2\\pi\n}d\\theta \\left [ \\Omega_{\\mathbf{kk}^{\\prime}} f\\left( \\epsilon\n,\\mathbf{k}\\right) -\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}} f\\left( \\epsilon,\\mathbf{k}^{\\prime}\\right)\n+\\Omega_{\\mathbf{k}^\\prime \\mathbf{k}}\\partial_{\\epsilon}f^{0}e\\mathbf{E}\\cdot\\delta\\mathbf{r}_{\\mathbf{k}^{\\prime\n}\\mathbf{k}}\\right ] ,\n\\label{Boltzmann}\n\\end{equation}\n\\end{widetext}\nwhere $f^0$ is the equilibrium distribution function. 
We emphasize that in the above equation, $|\\mathbf{k}^\\prime|=|\\mathbf{k}|$ because the scattering is elastic.\n\nTo solve the equation to linear order in the electric field, we assume that\n\\begin{equation}\nf\\left( \\epsilon,\\mathbf{k}\\right) =f^{0}\\left( \\epsilon\\right)\n+g^{\\rm r}\\left( \\epsilon,\\mathbf{k}\\right) +g^{\\rm cj}\\left( \\epsilon\n,\\mathbf{k}\\right) ,\n\\label{assume}\n\\end{equation}\nwhere $g^{\\rm cj}\\left( \\epsilon,\\mathbf{k}\\right)$ is the part of the non-equilibrium distribution function purely due to the coordinate jump (also called the anomalous distribution function), and $g^{\\rm r}$ is the non-equilibrium distribution function in the absence of the coordinate jump (also called the normal distribution function).\nCombining Eq.~\\ref{Boltzmann} and Eq.~\\ref{assume}, keeping the terms of linear order in the electric and magnetic field, and ignoring the coupling between skew scattering and coordinate jump, the Boltzmann equation is decomposed into two equations:%\n\\begin{widetext}\n\\begin{equation}\n\\left( -e\\right) \\mathbf{E}\\cdot\\frac{\\partial f^{0}}{\\hbar\\partial\n\\mathbf{k}}+\\left( -e\\right) \\left( \\mathbf{v}\\times\\mathbf{B}\\right)\n\\cdot\\frac{\\partial g^{\\rm r}_{\\mathbf{k}}}{\\hbar\\partial\\mathbf{k}}=-\\int_{0}^{2\\pi\n}d\\theta n_{i}v \\left[ \\Omega_{\\mathbf{kk}^{\\prime}} g^{\\rm r}_{\\mathbf{k}%\n}-\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}} g^{\\rm r}_{\\mathbf{k}^\\prime}\\right] ,\\label{SBE-n}%\n\\end{equation}%\n\\begin{equation}\n\\left( -e\\right) \\mathbf{E}\\cdot\\left( \\int_{0}^{2\\pi}d\\theta n_{i}%\nv\\Omega_{\\mathbf{k}^{\\prime} \\mathbf{k}}\\delta\\mathbf{r}_{\\mathbf{k}^{\\prime}\\mathbf{k}}\\right) \\partial_{\\epsilon}f^{0}-\\left( -e\\right) \\left(\n\\mathbf{v}\\times\\mathbf{B}\\right) \\cdot\\frac{\\partial g_{\\mathbf{k}}^{\\rm cj}%\n}{\\hbar\\partial\\mathbf{k}}=\\int_{0}^{2\\pi}d\\theta n_{i}v \\left[ \\Omega_{\\mathbf{kk}%\n^{\\prime}} g_{\\mathbf{k}}^{\\rm cj}-\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}} g_{\\mathbf{k}^\\prime}^{\\rm cj}\\right].\n\\label{SBE-a}%\n\\end{equation}\n\\end{widetext}\n\nWith all the above ingredients,\nthe electrical current density is given by%\n\\begin{equation}\n\\mathbf{j}=\\left( -e\\right) \\int \\frac{d\\mathbf{k}}{4\\pi^2}\\left[ g^{\\rm r}+g^{\\rm cj}\\right] \\left[\n\\mathbf{v}+\\mathbf{v}^{cj}\\right] .\n\\end{equation}\n\n\n\\section{Solutions of the Boltzmann equation}\n\n\\subsection{Zero magnetic field case}\n\nIn this case, only the longitudinal coordinate jump along the $\\mathbf{k}$-direction exists: $\\mathbf{v}^{cj}\\equiv\\int_{0}^{2\\pi}d\\theta n_{i}v\\Omega\n\\left( \\theta\\right) \\delta\\mathbf{r}_{\\mathbf{k}^{\\prime}\\mathbf{k}}=-\\mathbf{v}\\frac{3\\pi n_{i}a^{2}}{4}$, which is opposite in direction to\n$\\mathbf{v}$. \n\nThe Boltzmann equation is solved as%\n\\begin{equation}\ng^{\\rm r}_{\\mathbf{k}}=\\left( -\\partial_{\\epsilon}f^{0}\\right) \\left( -e\\right)\n\\mathbf{E\\cdot v}\\tau^{0}\\left( \\epsilon\\right) ,\n\\end{equation}\n\\begin{equation}\ng_{\\mathbf{k}}^{\\rm cj}=\\left( \\partial_{\\epsilon}f^{0}\\right) \\left(\n-e\\right) \\mathbf{E\\cdot v}^{cj}\\tau^{0}\\left( \\epsilon\\right) ,\n\\end{equation}\nwhere $\\frac{1}{\\tau^{0}\\left( \\epsilon\\right) }=n_{i}v\\frac{8a}{3}%\n$. 
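The quoted rate can be cross-checked numerically: at $B=0$ the cross section reduces to $\Omega(\theta)=\frac{a}{2}\sin\frac{\theta}{2}$, and weighting the collision integral with the standard transport factor $(1-\cos\theta)$ (our assumption about how $\tau^{0}$ is defined) reproduces $n_i v\,8a/3$:

```python
import math

# Midpoint-rule transport integral 1/tau0 = n_i*v * Int d(theta) Omega(theta)*(1 - cos(theta))
# with the B = 0 cross section Omega(theta) = (a/2)*sin(theta/2); units n_i = v = a = 1.
a, n_i, v, n = 1.0, 1.0, 1.0, 100000
dth = 2.0*math.pi/n
rate = sum(n_i*v*(a/2.0)*math.sin(0.5*(i + 0.5)*dth)*(1.0 - math.cos((i + 0.5)*dth))*dth
           for i in range(n))
assert abs(rate - n_i*v*8.0*a/3.0) < 1e-6  # matches 1/tau0 = n_i*v*8a/3
```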
The electric current density is therefore\n$j_{x}\\equiv\\left( \\sigma^{0}+\\sigma^{\\rm cj1}+\\sigma^{\\rm cj2}+\\sigma^{\\rm cj1,cj2}\\right)\nE_{x}$ with%\n\\begin{align}\n\\sigma^{0}=\\left( -e\\right) \\sum_{k}\\frac{g^{\\rm r}_{\\mathbf{k}}}{E_{x}}v_{x}%\n=\\frac{ne^{2}\\tau^{0}\\left( \\epsilon_{F}\\right) }{m},\n\\end{align}\n\\begin{align}\n\\sigma^{\\rm cj1} & =\\left( -e\\right) \\sum_{k}\\frac{g_{\\mathbf{k}}^{\\rm cj}}{E_{x}%\n}v_{x}=\\frac{3n_{i}\\pi a^{2}}{4}\\frac{ne^{2}\\tau^{0}\\left(\n\\epsilon_{F}\\right) }{m},\\\\\n\\sigma^{\\rm cj2} & =\\left( -e\\right) \\sum_{k}\\frac{g^{\\rm r}_{\\mathbf{k}}}{E_{x}}%\nv_{x}^{\\rm cj}=-\\sigma^{\\rm cj1},\n\\label{cancel}\n\\end{align}\nand%\n\\begin{align}\n\\sigma^{\\rm cj1, \\rm cj2}=\\left( -e\\right) \\sum_{k}\\frac{g_{\\mathbf{k}}^{\\rm cj}}{E_x} v_{x}^{\\rm cj}%\n=-\\frac{ne^{2}\\tau^{0}\\left( \\epsilon_{F}\\right) }{m}\\left(\n\\frac{3n_{i}\\pi a^{2}}{4}\\right) ^{2},\n\\end{align}\nwhere the carrier density $n=\\frac{m \\epsilon_{F}}{\\pi\\hbar^{2}}$ with $\\epsilon_{F}$ the Fermi energy.\n\nHere, $\\sigma^{0}$ is the conventional zero-field conductivity in the Drude theory. $\\sigma^{\\rm cj1}$ is the conductivity induced by the anomalous distribution from the coordinate jump. $\\sigma^{\\rm cj2}$ is the conductivity induced by the velocity correction from the coordinate jump. It cancels $\\sigma^{\\rm cj1}$. $\\sigma^{\\rm cj1,\\rm cj2}$ is the conductivity with both the distribution and velocity being corrected by the coordinate jump. \nTherefore, the total electrical conductivity is\n\\begin{equation}\n\\sigma=\\sigma^{0}+\\sigma^{\\rm cj1, \\rm cj2}=\\frac{ne^{2}\\tau^{0}\\left(\n\\epsilon_{F}\\right) }{m}\\left[ 1-\\left( \\frac{3}{4}n_{i}\\pi a^{2}\\right)\n^{2}\\right] .\n\\end{equation}\n\n\nThere is a correction to the electron density, because the electrons are only present in the\nfree area excluding the area occupied by impurities. 
The electron density is\n$n=\\frac{N}{A-A_i}=\\frac{n_D}{1-\\frac{A_i}{A}}$,\nwhere $A$ and $A_{i}$ represent the total 2D area and\nthe area occupied by the hard disk impurities, respectively, $\\frac{A_i}{A}=\\pi n_i a^2$, and\n$n_D=\\frac{N}{A}$ is the electron density without the correction excluding the area taken by the impurities. \nThus, the Fermi momentum is $k_F=\\sqrt{2\\pi n}=\\frac{k_{F}^{D}}{\\sqrt{1-\\frac{A_i}{A}}}$, where $k_{F}^{D}=\\sqrt{2\\pi n_{D}}$. \n\nTherefore, the measured electrical conductivity is also corrected:\n\\begin{equation}\n\\begin{aligned}\n\\sigma^{M}&=\\sigma\\frac{A-A_{i}}{A}\\\\\n &=\\frac{n_{D}e^{2}\\tau^{D} }{m}\\left[ 1-\\left( \\frac{3}{4}\\pi n_{i}a^{2}\\right)\n^{2}\\right] \\sqrt{1-\\pi n_{i}a^{2}},\n\\label{corrected cond}\n\\end{aligned}\n\\end{equation}\nwith the Drude transport relaxation rate $1\/\\tau^{D}=n_{i}v_{F}^{D}\\frac\n{8a}{3}$ a constant. \nThe conductivity in our theory, $\\sigma^{M}$, is lower than the Drude conductivity $\\sigma^{D}=\\frac{n_{D}e^{2}\\tau^{D} }{m}$ by a factor of $\\left[ 1-\\left( \\frac{3}{4}\\pi n_{i}a^{2}\\right)^{2}\\right] \\sqrt{1-\\pi n_{i}a^{2}}$, as can be seen from Eq.~\\ref{corrected cond}, which decreases as a function of the dimensionless quantity $n_{i}a^{2}$. The deviation of the diffusion coefficient from the Drude model in a previous computer simulation of the Lorentz model with overlapping hard-sphere impurities \\cite{PhysRevA.25.533} is similar to that in our theory. \n\n\\subsection{Low magnetic field case: Hall coefficient and magnetoresistivity}\n\nIn this section, we evaluate the conductivity under a weak magnetic field. We first discuss the contribution from the skew scattering. According to the previous discussion, we need to solve the distribution function using Eq.~\\ref{SBE-n}. 
\n\nWe substitute the ansatz $g_{\\mathbf{k}}^{\\rm r}=\\left( -\\partial_{\\epsilon}f^{0}\\right) \\left( -e\\right)\n\\left[ \\mathbf{E\\cdot v}\\tau^{L}\\left( \\epsilon\\right) +\\left(\n\\mathbf{\\mathbf{\\hat{z}}\\times E}\\right) \\cdot\\mathbf{v}\\tau^{T}\\left(\n\\epsilon\\right) \\right]$ into Eq.~\\ref{SBE-n} and obtain%\n\\begin{align}\n\\tau^{L}\\left( \\epsilon\\right) & =\\frac{\\tau^{\\parallel}\\left(\n\\epsilon\\right) }{1+\\left[ \\omega_{c}\\tau^{\\parallel}\\left( \\epsilon\n\\right) +\\frac{\\tau^{\\parallel}\\left( \\epsilon\\right) }{\\tau^{\\perp}\\left(\n\\epsilon\\right) }\\right] ^{2}},\\nonumber\\\\\n\\tau^{T}\\left( \\epsilon\\right) & =\\left[ \\omega_{c}\\tau^{\\parallel\n}\\left( \\epsilon\\right) +\\frac{\\tau^{\\parallel}\\left( \\epsilon\\right)}{\\tau^{\\perp}\\left( \\epsilon\\right) }\\right] \\tau^{L}\\left(\\epsilon\\right),\n\\end{align}\nwhere we define%\n\\begin{widetext}\n\\begin{align}\n\\frac{1}{\\tau^{\\parallel}\\left( \\epsilon\\right) } & =\\int_{0}^{2\\pi\n}d\\theta n_{i}v [ \\Omega^{A}\\left( 1+\\cos\\left(\n\\theta \\right) \\right)+\\Omega^{S}\\left( 1-\\cos\\left(\\theta \\right) \\right) ] =\\frac{8}{3}n_{i}va\\left[ 1-\\frac{1}{5}\\left( \\frac{a}{R}\\right) ^{2}+O\\left( \\left( \\frac{a}{R}\\right) ^{4}\\right) \\right],\n\\label{tau-para}\n\\end{align}\n\\begin{align}\n\\frac{1}{\\tau^{\\perp}\\left( \\epsilon\\right) } & =\\int_{0}^{2\\pi}d\\theta\nn_{i}v [ \\Omega^{S}-\\Omega^{A} ] \\sin\\left( \\theta \\right) =-\\frac{\\pi}{4}n_{i}va\\frac{a}{R}\\left[ 1+O\\left( \\left( \\frac{a}{R}\\right) ^{2}\\right)\n\\right]. 
\n\\label{tau-perp}\n\\end{align}\n\\end{widetext}\n\nHere $\\Omega^{A}=\\frac{1}{2}\\left(\\Omega_{\\mathbf{kk}^{\\prime}}-\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}\\right) $ is the antisymmetric part of the differential cross section, and $\\Omega^{S}=\\frac{1}{2}\\left(\\Omega_{\\mathbf{k}^{\\prime}\\mathbf{k}}+\\Omega_{\\mathbf{kk}^{\\prime}}\\right) $ is the symmetric part. $\\tau^{\\perp}$ is purely due to the skew scattering, i.e. to $\\Omega_\\mathbf{kk'} \\neq \\Omega_\\mathbf{k'k}$. In our theory, $\\Omega_\\mathbf{kk'} \\neq \\Omega_\\mathbf{k'k}$ only when $B \\neq 0$. \n\nGenerally, we prove that $\\tau^{\\parallel}$ is purely contributed by $\\Omega^{S}$ by showing $\\int_{0}^{2\\pi}d\\theta \\Omega^{A} (1+\\cos ( \\theta ))=0$, and $\\tau^{\\perp}$ is purely contributed by $\\Omega^{A}$ by showing $\\int_{0}^{2\\pi}d\\theta \\Omega^{S} \\sin\\left( \\theta \\right)=0$ (see Appendix \\ref{APP-D}). \nAs a result, $\\tau^{\\parallel}$ is not enough to characterize the collision process as long as the scattering probability contains an antisymmetric part, in which case $\\tau^{\\perp}$ naturally emerges. \n\nIn our example, $\\frac{1}{\\tau^{\\perp}}$ is always negative (as shown in Eq. \\ref{tau-perp} and Fig. \\ref{fig_tau_perp}). Besides, as Eq. \\ref{tau-perp} shows, $\\frac{1}{\\tau^{\\perp}} \\neq 0$ as long as $\\frac{a}{R}$ is finite. Moreover, the ratio of $\\tau^{\\parallel}$ to $\\tau^{\\perp}$ is proportional to $a\/R$. Since we are considering the weak magnetic field scenario with a large $R$ ($R\\gg a$), $|\\tau^{\\perp}|$ will be much larger than $\\tau^{\\parallel}$. Taking the data from the second row of Table I as an example, where $\\beta=0.6$ and $\\frac{a}{R}=\\frac{ 2\\beta c}{\\pi} =0.06$, the ratio of $\\tau^{\\parallel}$ to $\\tau^{\\perp}$ is then around $-0.017$. 
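The vanishing integrals of Appendix D and the leading-order expansions of Eqs.~(tau-para) and (tau-perp) can be verified numerically from the closed-form cross sections (plain Python, midpoint rule, illustrative units $n_i=v=a=1$ and $a/R=0.05$). We check only the magnitude of the leading term of $1/\tau^{\perp}$, since its overall sign depends on the orientation convention for the scattering angle:

```python
import math

def omega(theta, a, R, sign):
    # sign=+1 gives Omega_{kk'} (Eq. eq_kkprime); sign=-1 gives Omega_{k'k} (Eq. eq_kprimek).
    r = a/R
    return a*math.sin(theta/2.0)/(2.0*math.sqrt(1.0 + sign*2.0*r*math.cos(theta/2.0) + r**2))

a, R, n_i, v, n = 1.0, 20.0, 1.0, 1.0, 200000
dth = 2.0*math.pi/n
I_par = I_perp = I_A = I_S = 0.0
for i in range(n):
    th = (i + 0.5)*dth
    O_f, O_i = omega(th, a, R, +1.0), omega(th, a, R, -1.0)
    O_S, O_A = 0.5*(O_f + O_i), 0.5*(O_f - O_i)
    I_par += n_i*v*(O_A*(1.0 + math.cos(th)) + O_S*(1.0 - math.cos(th)))*dth
    I_perp += n_i*v*(O_S - O_A)*math.sin(th)*dth
    I_A += O_A*(1.0 + math.cos(th))*dth   # should vanish (Appendix D)
    I_S += O_S*math.sin(th)*dth           # should vanish (Appendix D)

r = a/R
assert abs(I_A) < 1e-9 and abs(I_S) < 1e-9
assert abs(I_par - (8.0/3.0)*n_i*v*a*(1.0 - r*r/5.0)) < 1e-3   # Eq. (tau-para)
assert abs(abs(I_perp) - (math.pi/4.0)*n_i*v*a*r) < 1e-3       # |leading term| of Eq. (tau-perp)
```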
\n\n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.36}{\\includegraphics*{tau_perp.pdf}}\n\\caption{The plot of the reciprocal of transverse relaxation time $\\frac{1}{\\tau^{\\perp}}$ in the unit of $n_{i} a v$, where $a$ is the impurity radius, $n_{i}$ is the impurity density, and $v$ is the electron velocity. The $\\frac{1}{\\tau^{\\perp}}$ is always negative as long as $a0,\n\\end{equation}\nrespectively. The correction due to the effective area of free space excluding the area of all the impurities is of higher order and thus neglected.\n\nThe magnetoresistivity $\\frac{\\delta\\rho_{\\parallel}\\left(\nB\\right) }{\\rho_{\\parallel}\\left( 0\\right) }\\simeq-\\frac{64}{15}\\left(\nn_{i}a^{2}\\omega_{c}\\tau^{0}\\right) ^{2}$ is negative, and\nis composed of three contributions: 1) the\ncontribution from the Hall angle, more specifically, from the anomalous distribution function to the Hall transport $\\left( C_{a}^{\\perp\n}\\rightarrow\\tau^{T,cj}\\rightarrow\\tan\\theta_{H}\\right) $;\n2) the magnetic-field-induced correction to the longitudinal transport\nrelaxation time $\\left( \\left( \\tau^{\\parallel}-\\tau^{0}\\right)\n\\rightarrow\\sigma_{xx}\\right) $;\n3) the contribution of\nanomalous distribution function to the longitudinal transport $\\left(\nC_{a}^{\\perp}\\rightarrow\\tau^{L,cj}\\rightarrow\\sigma_{xx}\\right) $.\n\nThe leading order correction of the Hall angle $-\\frac{\\pi}{4}n_{i}a^{2}\\omega_{c}\\tau^{0}$\nstems from the magnetic-field-induced skew\nscattering. 
This result is comparable to that corrected by the classical memory\neffect \\cite{PRB2008} in the limit $n_{i}a^{2}\\ll\\omega_{c}\\tau_{D}%\n\\ll1$: $\\delta R_{H}^{cm}\/R_{H}^{B}=-\\frac{32}{9\\pi}n_{i}a^{2}$, where $R_{H}^{B}$ is the Hall coefficient in the conventional Boltzmann theory, $R_{H}^{B}=-\\frac{1}{n_D e}=-\\frac{1}{ne(1-\\pi n_i a^2)}$, and $\\delta R_{H}^{cm}$ \nis the difference between the Hall coefficient corrected by the classical memory effect and the conventional Hall coefficient. \n\nIn experiments, to obtain the real electron density $n$ from the measured Hall coefficient, the correction to $R_{H}$ has to be included. \nThe Hall coefficient is $R_{H}=-1\/(n^{\\prime}e)$, where\n$n^{\\prime}$ is the effective electron density $n^{\\prime}\\approx \\frac{n}{\\left(\n1-\\frac{c}{4}+\\frac{128 c^{2}}{45\\pi^{2}}\\right)}$ and $c=\\pi a^{2}n_{i}$. We use the \nvalue of $c=0.15$ here as an example (this value is also used in the discussion) and find that $n^{\\prime}\\approx \\frac{n}{0.97} $, which is equivalent to a $3\\%$ error. This error grows as the impurity density increases. \n\nWe note that in a previous work \\cite{PhysRevLett.112.166601}, it was already recognized that there may be corrections to the Hall coefficient. However, their result is due to the magnetic-field-affected Bloch-electron drifting motion, and is proportional to $\\frac{1}{(\\tau^0)^2}$ (or equivalently, $(n_i a^2)^2$). In comparison, our correction here has different origins (magnetic-field-affected electron-impurity scattering), as well as different scaling behavior (i.e. 
proportional to $n_i a^2$).\n\n\n\\section{Discussion}\n\n\\subsection{Magnetoresistivity in comparison with simulation at low magnetic field}\n\nIn this section, we compare our theoretical results with\nthe analytical and numerical results for the pure 2D Lorentz model previously\nobtained in the literature \\cite{PRB2003, PhysRevLett.89.266804}.\nAt low magnetic field $\\omega_{c}\\tau^{0}<1$, the theory in \\cite{PRB2003, PhysRevLett.89.266804} predicted a negative magnetoresistivity due to the influence of the magnetic field on the corridor effect (enhancing the backscattering from the first impurity to the second impurity and back to the first impurity) and on multiple scatterings. \n\n\\begin{table*}[t]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n$\\beta$ & $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right) ^{an}$ &\n$\\frac{\\delta\\rho_{\\parallel}^{\\prime}}{c\\rho_{0}}$ & $\\left( \\frac{\\delta\n\\rho_{\\parallel}^{Cor}}{c\\rho_{0}}\\right) ^{th}$ & $\\frac{\\delta\\rho\n_{\\parallel}^{\\prime}}{c\\rho_{0}}+\\left( \\frac{\\delta\\rho_{\\parallel}^{Cor}%\n}{c\\rho_{0}}\\right) ^{th}$ & $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}%\n}\\right) ^{an}+\\frac{\\delta\\rho_{\\parallel}^{\\prime}}{c\\rho_{0}}+\\left(\n\\frac{\\delta\\rho_{\\parallel}^{Cor}}{c\\rho_{0}}\\right) ^{th}$ & $\\left(\n\\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right) ^{si}$\\\\\n\\hline\n$\\beta\/c=3,\\beta=0.45$ & -0.0074 & -0.026 & -0.0605 & -0.0865 & -0.094 & -0.1\\\\\n\\hline\n$\\beta\/c=4,\\beta=0.6$ & -0.013 & -0.046 & -0.07 & -0.116 & -0.13 & -0.14\\\\\n\\hline\n$\\beta\/c=5,\\beta=0.75$ & -0.021 & -0.072 & -0.076 & -0.148 & -0.17 & -0.19\\\\\n\\hline\n$\\beta\/c=5.5,\\beta=0.825$ & -0.025 & -0.087 & -0.0796 & -0.167 & -0.19 & -0.24\\\\\n\\hline\n$\\beta\/c=6,\\beta=0.9$ & -0.03 & -0.103 & -0.086 & -0.189 & -0.22 & -0.28\\\\\\hline\n\\end{tabular}\\\\\n\\caption{\\label{table_corridor}The comparison between the summation of analytical results 
from \\cite{PRB2003} and our theory, and the numerical results from \\cite{PhysRevLett.89.266804}. The second column $\\left( \\frac{\\delta\\rho_{\\parallel}}%\n{c \\rho_{0}}\\right) ^{an}$ is the magnetoresistivity calculated by our formula. The third column $\\frac{\\delta\\rho_{\\parallel}^{\\prime}}{c\\rho\n_{0}}$ is the quadratic contribution due to the\ninfluence of the magnetic field on returns after multiple scatterings in \\cite{PRB2003}. The fourth column $\\left( \\frac{\\delta\\rho_{\\parallel}^{Cor}}{c\\rho_{0}}\\right)\n^{th}$ gives the analytical values of the magnetoresistivity influenced by the corridor effect \nin \\cite{PRB2003}. The fifth column is the summation of all the analytical results from \\cite{PRB2003}. The sixth column includes our results, in addition to the previous analytical results in the fifth column. The seventh column $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right)\n^{si}$ is the simulation result of the magnetoresistivity in \\cite{PhysRevLett.89.266804}. }\n\\end{table*}\n\nIn Table I, $c=\\pi n_{i}a^{2}=0.15$, $\\beta=\\omega_{c}\\tau=\\frac{4}{3}\\omega\n_{c}\\tau^{0}$, where $\\tau=\\left( 2 v n_{i}a\\right) ^{-1}$ is the\nsingle-particle scattering time. The second column $\\left( \\frac{\\delta\\rho_{\\parallel}}%\n{c \\rho_{0}}\\right) ^{an}=-\\frac{12}{5\\pi^{2}} c \\beta^{2}$ is the magnetoresistivity up to the order\n$O\\left( \\left( n_{i}a^{2}\\omega_{c}\\tau^{0}\\right) ^{2}\\right)\n$ calculated by our formula. The third column $\\frac{\\delta\\rho_{\\parallel}^{\\prime}}{c\\rho\n_{0}}=-\\frac{0.4}{\\pi}\\beta^{2}$ is the quadratic contribution due to\nthe influence of the magnetic field on multiple scatterings in \\cite{PRB2003}. The fourth column $\\left( \\frac{\\delta\\rho_{\\parallel}^{Cor}}{c\\rho_{0}}\\right)\n^{th}$ gives the analytical values of the magnetoresistivity influenced by the corridor effect \nin \\cite{PRB2003}. The fifth column is the summation of all the analytical results from \\cite{PRB2003}. 
The sixth column includes our results in addition to the previous analytical results in the fifth column. The seventh column $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right)\n^{si}$ is the simulation result of the magnetoresistivity in \\cite{PhysRevLett.89.266804}. \n\n\\begin{figure}[h]\n\\setlength{\\abovecaptionskip}{0pt}\n\\setlength{\\belowcaptionskip}{0pt}\n\\scalebox{0.32}{\\includegraphics*{compare_numerical_results_finer.pdf}}\n\\caption{The plots of the numerical magnetoresistivity from [5], the magnetoresistivity from the correlation effect [6], and the inclusion of our magnetoresistivity into the analytical correlation effect [6], respectively. }\n\\label{fig_compare}\n\\end{figure}\n\nAs can be seen from Table I, the\ninclusion of $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right) ^{an}%\n$ (the magnetoresistivity calculated in our theory) yields a more accurate magnetoresistivity, closer to the numerical result, especially under relatively small magnetic fields\n($\\beta=0.45$ and $\\beta=0.6$). This is also reflected in Fig. \\ref{fig_compare}. Under relatively larger magnetic\nfields, the deviation of the analytical values from the simulation values\nincreases. The reason is as follows. The validity of our theory demands $R>l\\gg0.6\/\\sqrt{n_{i}}a$ where $l$ is the mean free path \\cite{PRB2001}, such that the percolation transition cannot occur. \nThis requirement yields $\\beta<1\\ll0.83\\left( n_{i}a^{2}\\right) ^{-1\/2}$. Using the value\n$\\pi n_{i}a^{2}=0.15$ in Table I, we obtain the following restriction on $\\beta$: $\\beta \\ll 3.8$. Therefore, the value of $\\left( \\frac{\\delta\\rho_{\\parallel}}{c\\rho_{0}}\\right) ^{an}%\n$ at large $\\beta$ may not be accurate. Also,\nthe validity of the corridor-effect-influenced magnetoresistivity theory \\cite{PRB2003} holds well under similar\nrestrictions. 
Thereby the difference between the analytical (sixth column) and simulation (seventh column) results increases with larger $\\beta$. \n\nWe choose the value of $c=\\pi n_{i}a^{2}=0.15$ in Table I because we want to compare with the results from the literature \\cite{PRB2003, PhysRevLett.89.266804}, where the largest value of $c$ is $0.15$; besides, the larger the value of $c$, the more significant the negative magnetoresistivity effect in our theory. The reason can be found from the expression $\\left( \\delta\\rho_{\\parallel}\\right) ^{an}\/\\delta\\rho_{\\parallel}^{\\prime}=6n_{i}a^{2}$. \nIn Refs.~\\cite{PRB2003, PhysRevLett.89.266804}, the dominance of the corridor effect decreases when the magnetic field increases. The suppression of the corridor effect makes our result prominent; therefore, we need the magnetic field to be as large as possible inside the weak field regime. \n\n\n\\subsection{Magnetoresistivity in comparison with experimental results}\n\nIn this subsection we discuss the possible relevance of our result to\nexperiments. Our theory is based on the Boltzmann framework, neglecting the\nmemory effect associated with successive scattering events. In real 2D electron systems with strong scatterers, the correlation between successive collisions may be\nbroken by disorder and the applied electric field. Therefore, we may try to fit some experiments with our results alone,\ndisregarding the correlation effect. \n\nThe negative parabolic magnetoresistivity has been observed in a corrugated 2DEG in GaAs wells\n\\cite{PRB2004}. Although the authors explained their observation by the corridor-effect-related magnetoresistivity, the fitting value for $a\/l=2n_{i}a^{2}$ is beyond\nits valid range (here $n_{i}a^{2}=2.6$, but the range of $n_{i}a^{2}$ is supposed to be $[0,1]$). Thus, the magnetoresistivity theory in terms of the corridor effect in the 2D Lorentz model may\nnot provide a suitable description for the experiments with low magnetic field. 
If we fit the\nexperimental parabolic negative magnetoresistivity at low magnetic field to our formula, we\nget a reasonable value $n_{i}a^{2}=0.12$. Negative parabolic magnetoresistivity was also\nobserved in a 2DEG in a GaN heterostructure \\cite{PRB2005}, and explained by a\ntwo-component disorder model \\cite{Mirlin2001}. We can also fit the\nparabolic negative magnetoresistivity by choosing $n_{i}a^{2}=0.042$. However, neither our theory nor the classical magnetoresistivity theory based on memory effects \\cite{PRB2005}\ncan explain the observed large negative linear magnetoresistivity at larger magnetic fields\n\\cite{PRB2004, PRB2005}. This experimental regime is still beyond existing theories.\n\nWe comment that in some cases, the electron motion is quasi-two-dimensional, and the vertical motion is not negligible. One example is shown in Ref. \\cite{PhysRevLett.77.147}, in which an in-plane magnetic field is applied and periodically distributed large-scale impurities are prepared. This is, however, beyond the scope of our theory. \n\n\n\\subsection{Phenomenological inclusion of skew scattering into the Drude model}\n\nIn this subsection we\ndemonstrate that the skew scattering, signified by $\\frac{1}{\\tau^{\\perp}}$, can be phenomenologically included in the Drude framework using a tensor $\\tensor{\\frac{1}{\\tau}}$. \n\nIn the traditional Drude theory, the scattering rate $\\frac{1}{\\tau}$ is\ntreated as a scalar. 
The equation of motion is%\n\\begin{equation}\nm\\mathbf{\\dot{v}}=-e(\\mathbf{E}+\\mathbf{v\\times \\mathbf{B}})-\\frac{m \\mathbf{v}}{\\tau}.\n\\end{equation}\n\nIn the presence of an out-of-plane magnetic field, due to the rotational symmetry in the two-dimensional plane, \n$\\frac{1}{\\tau}$ becomes a tensor \\cite{Physica.24.1958, PhysRevB.72.045346, Hua2015} with\nnonzero, antisymmetric off-diagonal elements: \n\\begin{equation}\n\\tensor{\\frac{1}{\\tau}}=\\left(\n\\begin{array}\n[c]{cc}%\n\\frac{1}{\\tau^{\\parallel}} & \\frac{1}{\\tau^{\\perp}}\\\\\n-\\frac{1}{\\tau^{\\perp}} & \\frac{1}{\\tau^{\\parallel}}%\n\\end{array}\n\\right) .\n\\end{equation}\n\nThe modified equation of motion is\n\\begin{equation}\nm\\mathbf{\\dot{v}}=-e\\left(\n\\begin{array}\n[c]{c}%\nE_{x}\\\\\nE_{y}%\n\\end{array}\n\\right) -e\\left(\n\\begin{array}\n[c]{c}%\nv_{y}B_{z}\\\\\n-v_{x}B_{z}%\n\\end{array}\n\\right) -m\\left(\n\\begin{array}\n[c]{cc}%\n\\frac{1}{\\tau^{\\parallel}} & \\frac{1}{\\tau^{\\perp}}\\\\\n-\\frac{1}{\\tau^{\\perp}} & \\frac{1}{\\tau^{\\parallel}}%\n\\end{array}\n\\right) \\left(\n\\begin{array}\n[c]{c}%\nv_{x}\\\\\nv_{y}%\n\\end{array}\n\\right),\n\\end{equation}\nwith the conductivity%\n\\begin{equation}\n\\sigma_{xx}=\\frac{\\frac{ne^{2}\\tau^{\\parallel}}{m}}{1+(\\frac{eB}{m}+\\frac\n{1}{\\tau^{\\perp}})^{2}\\tau^{\\parallel2}},\\text{ \\ }\\sigma_{xy}=-\\frac\n{ne^{2}(eB+\\frac{m}{\\tau^{\\perp}})\\frac{\\tau^{\\parallel2}}{m^{2}}}%\n{1+(\\frac{eB}{m}+\\frac{1}{\\tau^{\\perp}})^{2}\\tau^{\\parallel2}},%\n\\end{equation}\nwhich gives the same result as that of the Boltzmann theory, Eq.~\\ref{con-skew}, when considering only the skew scattering part. In the Drude model, $m \\mathbf{v}\/ \\tau$ is a resistive force. The physical meaning of the anisotropic resistive force is that the direction of the force is no longer the same as that of the velocity. 
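The quoted $\sigma_{xx}$ and $\sigma_{xy}$ follow from the steady state ($\dot{\mathbf{v}}=0$) of the modified equation of motion. A small numerical check with illustrative (dimensionless) parameter values:

```python
# Steady state of the modified Drude equation: 0 = -e(E + v x B) - m*(1/tau)*v.
# The coefficient matrix of v is A = [[m/tau_par, eB + m/tau_perp],
#                                     [-(eB + m/tau_perp), m/tau_par]];
# its explicit inverse gives v = A^{-1} (-e E), and j = -n e v.
n_e, e, m, B = 1.0, 1.0, 1.0, 0.3
tau_par, tau_perp = 2.0, -50.0     # 1/tau_perp < 0, as in our example
a11 = m/tau_par
a12 = e*B + m/tau_perp
det = a11*a11 + a12*a12
sigma_xx = n_e*e*e*a11/det         # j_x response to E = (E, 0)
sigma_xy = -n_e*e*e*a12/det        # j_x response to E = (0, E)

# Compare with the closed forms quoted in the text.
denom = 1.0 + (e*B/m + 1.0/tau_perp)**2*tau_par**2
sxx_ref = (n_e*e*e*tau_par/m)/denom
sxy_ref = -(n_e*e*e*(e*B + m/tau_perp)*tau_par**2/m**2)/denom
assert abs(sigma_xx - sxx_ref) < 1e-12
assert abs(sigma_xy - sxy_ref) < 1e-12
```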
This anisotropic force in the Drude theory, on the other hand, is equivalent to the anisotropic scattering in the Boltzmann theory. The difference between the Boltzmann theory and the Drude phenomenological theory is that the Drude theory cannot give specific expressions for the longitudinal and transverse relaxation times. \n\nConverting the conductivity into resistivity, we get \n\\begin{equation}\n\\rho_{xx} =\\frac{m}{e^{2} \\tau^{\\parallel} n}, \n\\end{equation}\n\\begin{equation}\n\\rho_{xy} =-\\left( \\frac{B}{en}+\\frac{m}{e^{2} \\tau^{\\perp} n} \\right).\n\\end{equation}\nBased on our theory, from Eq. \\ref{tau-para}, we see that the magnetic field dependence of $\\tau^{\\parallel}$ contributes to the negative magnetoresistance, while $\\frac{1}{\\tau^{\\perp}}$ contributes to the anomalous Hall effect. \n\n\n\\section{Conclusion}\n\nIn summary, we have formulated a classical theory for the magnetotransport\nin the 2D Lorentz model. This theory takes into account the effects of the\nmagnetic field on the electron-impurity scattering\nusing a recipe that abstracts the real scattering process within the classical\nBoltzmann framework. We find a correction to the Hall\nresistivity of the conventional Boltzmann-Drude theory and a negative magnetoresistivity that is a parabolic function of the magnetic field. The origin of these results has been analyzed. We have also discussed the relevance of our theory to recent simulation and experimental works. Our theory dominates in a dilute impurity system where the correlation effect is negligible. \n\n\\vskip0.8cm\nWe acknowledge useful discussions with Liuyang Sun, Liang Dong, Nikolai A. Sinitsyn, Qi Chen, Liang Du, Fengcheng Wu and Huaiming Guo. Q.N. is supported by DOE (DE-FG03-02ER45958, \nDivision of Materials Science and Engineering) in the formulation of our theory. J.F., C.X. and Y.G. are supported by NSF (EFMA-1641101) and Welch Foundation (F-1255). 
\n\n\n\\vskip0.8cm\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Related Work} \\label{section:relatedwork}\n\n\\subsection{Color Mapping}\nContinuous color mapping (also heat mapping) refers to the association of a color to a scalar value over a domain and can be considered the most popular visualization technique for two-dimensional scalar data. \n\nThere are many heuristic rules for designing a good colormap, which have been applied mostly intuitively for hundreds of years~\\cite{Silva::2017::UseColorInVis, silva2011using, zhou2016survey}.\nThe most important ones are order, high discriminative power, uniformity, and smoothness~\\cite{Bujack:2018:TVCG}. \n\nWhile some colormaps have been designed to serve as default colormaps for many data sets and can perform reasonably well in terms of rule compliance~\\cite{moreland2009diverging}, many colormaps are purposely-designed according to application-specific requirements such as the shape of the data, the audience, the display, or the visualization goal~\\cite{sloan1979color, bergman1995rule, rheingans2000task, borland2011collaboration}. \nThe number of possible colormap configurations and the body of related work on this topic are huge~\\cite{ware1988color, bergman1995rule, rogowitz1996not, rheingans2000task, tominski2008task}.
\n\nAn effort has been made to measure the quality of colormaps with respect to these rules quantitatively~\\cite{tajima1983uniform, robertson1986generation, levkowitz1992design, moreland2009diverging, Bujack:2018:TVCG} or experimentally~\\cite{ware1988color, rogowitz1999trajectories, kalvin2000building, ware2017uniformity}, in order to develop theories and algorithms that can help automate the generation, evaluation, and improvement of colormaps.\nAlthough such theories and algorithms are usually general enough to be application-independent, the design of colormaps in many practical applications can only be effective if one includes some application-specific semantics in the design, such as key values, critical ranges, non-linearity, probability distribution of certain values or certain spatial relationships among values, and so on.\nSupporting such application-specific design effort is the goal of this work.\n\n\\subsection{Colormap Test Data}\nSo far, there is no test suite for colormaps.\nHowever, the literature has provided various examples where some data sets were used for comparing color maps and demonstrating color mapping algorithms.\n\nSloan and Brown \\cite{sloan1979color} suggest treating a colormap as a path through a color space and stress that the optimal choice of a colormap depends on the task, human perception, and display. 
They showcase their findings with $x$-ray and satellite images.\nWainer and Francolini \\cite{wainer1980empirical} point out the importance of order in a visual variable using statistical information on maps.\nPizer \\cite{pizer1981intensity, pizer1982concepts} stresses the importance of uniformity in a colormap, for which the curve of just noticeable differences (JNDs) is constant and which follows a natural order.\nThe uniformity can be achieved by monotonically increasing brightness or each of the RGB components, such that the order of their intensities does not change throughout the colormap.\nTajima \\cite{tajima1983uniform} uses colormaps with regular color differences in a perceptually uniform colorspace to achieve perceptually uniform color mapping of satellite images.\nLevkowitz and Herman \\cite{levkowitz1992design} suggest an algorithm for creating colormaps that produces maximal color differences while satisfying monotonicity in RGB, hue, saturation, and brightness. They test them with medical data from CT scans.\nBernard et al. \\cite{bernard2015survey} suggest definitions of colormap properties, build relations to mathematical criteria for their assessment, and map them to different tasks independently of data properties in the context of bivariate colormaps.
\nThey test the criteria on analytical data that has different shapes (e.g., different gradients and spherical surfaces).\n\nPizer states that the qualitative task is more important in color mapping applications, because quantitative tasks can be better performed using contours or by explicitly displaying the value when the user hovers over a data point with the mouse.\nHe uses medical images, including CT scans and digital subtraction fluorography, as example data sets.\nWare \\cite{ware1988color} also distinguishes qualitative and quantitative tasks.\nHe agrees that the qualitative characteristics are more important and explicitly mentions tables as a suitable means for the visualization of quantitative values.\nOn the one hand, he finds that monotonic change in luminance is important to see the overall form (qualitative) of his analytic test data consisting of linear gradients, ridges, convexity, concavity, saddles, discontinuities, and cusps.\nOn the other hand, his experiments show that when a colormap consists of only one completely monotonic path in a single perceptual channel, the quantitative task is error-prone if one tries to read the exact data values based on the visualization.\n\nRogowitz, et al.~\\cite{rogowitz1992task, rogowitz1994using, bergman1995rule, rogowitz1996not, rogowitz1998data, kalvin2000building, rogowitz2001blair} distinguish different tasks (isomorphic, segmentation, and highlighting), data types, (nominal, ordinal, interval, and ratio), and spatial frequency (low, high), recommending colormap properties for each combination.\nThey perform experiments on the visual perception of contrast in colormaps using Gaussian or Gabor targets of varying strength superimposed on linear gradients of common colormaps~\\cite{rogowitz1999trajectories, kalvin2000building}.\nRogowitz et al. 
use a huge variety of data through their extensive experiments, for example, text~\\cite{rogowitz1992task}, MRI scans of the human head \\cite{rogowitz1998data}, weather data showing clouds or ozone distribution \\cite{rogowitz1998data}, vector field data from a simulation of the earth's magnetic field or jet flows \\cite{rogowitz1998data, bergman1995rule}, measurements from remote sensing \\cite{rogowitz1996not}, cartographic height data \\cite{rogowitz1998data}, analytic data covering a broad spectrum of frequencies such as planar wave patterns \\cite{rogowitz1996not}, linear gradients distorted by a Gaussian or Gabor target of increasing magnitude \\cite{rogowitz1999trajectories, kalvin2000building}, the luminance of a human face photograph \\cite{rogowitz2001blair}, and so on.\nTheir work demonstrates the diversity in the application field of color mapping and how important it is for a colormap to encode application-specific semantics.\nZhang and Montag \\cite{zhang2006perceptual} evaluate the quality of colormaps designed in a uniform color space with a user study using a CAT scan and scientific measurements such as remote sensing and topographic height data.\nGresh \\cite{gresh2010self} measures the JND between colors in a colormap, using cartographic height data.\nWare et al.~\\cite{ware2017uniformity} generate stimuli for experiments on colormap uniformity by superimposing vertical strips of Gabor filters of different spatial extent over popular colormaps with magnitudes ranging from nonexistence on the top to very strong contrast on the bottom.\nThe users' task is to pick the location where they could first perceive the distortion.\n\nLight and Bartlein \\cite{light2004end} warn against using the rainbow colormap, showing that it is highly confusing for color vision impaired users using the example of temperature data covering North and South America.\nBorland \\cite{borland2007rainbow} also criticizes the rainbow colormap for
its lack of order.\nHe compares different colormaps based on analytic test data that features a spectrum of changing frequencies, different surface shapes, and gradients.\nKindlmann et al. \\cite{kindlmann2002face} suggest a method to evaluate users' perception of luminance using a photograph of a human face.\nSchulze-Wollgast et al. \\cite{schulze2005enhancing} focus on the task of comparing data using statistical information on maps.\nTominski et al. \\cite{tominski2008task} also stress that the characteristics of the data, tasks, goals, user, and output device need to be taken into account.\nThey introduce their task-color-cube, which gives recommendations for the different cases.\nThey use cartographic data to demonstrate their findings.\nWang \\cite{Wang:2008} chooses color for illustrative visualizations using medical data and measurements of transmission electron microscopy (TEM), analytic jumps, and mixing of rectangles.\nZeileis et al. \\cite{zeileis2009escaping} provide code to generate color palettes in the cylindrical coordinates of CIEUV and showcase results using geyser eruption data of Old Faithful and cartographic data. \nMoreland \\cite{moreland2009diverging} presents an algorithm that generates diverging colormaps that have a long path through CIELAB without sudden non-smooth bends.\nHis red-blue diverging colormap is the current default in ParaView \\cite{Ahrens:2005:ParaView}.\nHe tests different colormaps with data representing a spectrum of frequencies and gradients partly distorted by noise.\nHe also stresses the importance of testing on 3D surfaces where shading and color mapping compete, e.g., the density on the surface of objects in flow simulation data or on 3D renderings of cartographic height data. 
\nBorland \\cite{borland2011collaboration} collaborates with an application scientist working on urban airflow.\nThey suggest combining existing colormaps to design domain-specific ones, and in case of doubt stick with the black-body radiation map. %\nThey sacrifice traditional rules (e.g., order) to satisfy the needs (huge discriminative power) of the application.\nEisemann et al. \\cite{eisemann2011data} separate the adaption of the histogram of the data from the color mapping task, introducing an interactive pre-colormapping transformation for statistical information on maps.\nThompson et al. \\cite{thompson2013provably} suggest applying special colors outside the usual gradient of the colormap to dominantly-occurring values, which are ``prominent'' values occurring with high frequency.\nTheir test data includes the analytic Mandelbrot fractal and flow simulation results, which are partly provided as examples in ParaView. \nBrewer \\cite{brewer1994color,Brewer:2004:designing} provides an online tool to choose carefully designed discrete colormaps.\nThis is perhaps the most widely used tool for discrete colormaps.\nMittelst\\\"adt et al. \\cite{mittelstaedt2014methods,mittelstaedt2015colorcat} present a tool that helps to find a suitable colormap for different task combinations.\nThey showcase their findings with analytical data, like gradients and jumps, and real-world maps.\nSamsel et al. 
\\cite{Samsel:2015:CHI, samsel2017envir} provide intuitive colormaps designed by an artist to visualize ocean simulations and scientific measurements in the environmental sciences.\nFang et al.~\\cite{Fang:2017:TVCG} present an optimization tool for categorical colormaps, and use the tool to improve the colormap of the London underground map and that for seismological data visualization.\nNardini et al.~\\cite{nardini2019making} provides an online tool, the \\texttt{CCC-Tool}, for creating, editing, and analyzing continuous colormaps, demonstrating its uses with captured hurricane data, simulated ocean temperature data, and results of simulating ancient water formation.\n\n\\begin{table}[t]\n\\vspace{0mm}\n\\caption{The most popular test data for colormap testing in the visualization literature.\\label{t:related}}\n\\centering\n\\begin{tabular}{@{\\hspace{4mm}}r@{\\hspace{4mm}}l@{\\hspace{4mm}}}\n\\hline\nanalytic data & \n\\cite{ware1988color, rogowitz1996not, rogowitz1999trajectories, kalvin2000building, borland2007rainbow, Wang:2008, moreland2009diverging, thompson2013provably, mittelstaedt2014methods}\\\\\n& \\cite{mittelstaedt2015colorcat, bernard2015survey, ware2017uniformity}\\\\\nstatistics and maps& \\cite{sloan1979color, brewer1994color, Brewer:2004:designing, schulze2005enhancing, tominski2008task, zeileis2009escaping, eisemann2011data, mittelstaedt2014methods, Fang:2017:TVCG} \\\\\nmedical imaging & \\cite{sloan1979color, pizer1981intensity, pizer1982concepts, levkowitz1992design, rogowitz1998data, zhang2006perceptual, Wang:2008} \\\\\nscientific measurements & \\cite{rogowitz1996not, rogowitz1998data, light2004end, zhang2006perceptual, moreland2009diverging, gresh2010self, zeileis2009escaping, samsel2017envir, Fang:2017:TVCG, nardini2019making} \\\\\nscientific simulations & \\cite{bergman1995rule, rogowitz1998data, moreland2009diverging, thompson2013provably, borland2011collaboration, Samsel:2015:CHI, samsel2017envir, nardini2019making} 
\\\\\nphotographs & \\cite{sloan1979color, tajima1983uniform,rogowitz2001blair, kindlmann2002face} \\\\\n\\hline\n\\end{tabular}\n\\vspace{-4mm}\n\\end{table}\n\nAll in all, we found that the most popular way of evaluating the quality of colormaps in the literature is the use of specifically designed analytic data like gradients, ridges, different surface shapes, fractals, jumps, or different frequencies, because these synthetic data sets help to identify specific properties of the colormaps.\nThe second most common use is cartographic maps, which reflects the historical use of color mapping.\nFurthermore, it is also common to use data in typical applications of scientific visualization as test data, e.g., fluid simulations (wind, ocean, turbulence), scientific measurements (weather, clouds, chemical concentration, temperature, elevation data), and medical imaging (x-ray, CT scan, digital subtraction fluorography, transmission electron microscopy).\nA summary can be found in \\autoref{t:related}.\nWe have carefully designed our colormap test suite according to these findings, not only providing an extensive selection of expressive analytic data, but also containing real-world data from different scientific applications.\n\n\n\n\n\n\n\n\\section{Motivation} \\label{section:motivationAndDesign}\nIn several fields of computer science, the use of established test suites for evaluating techniques is standard or commonplace.\nThe motivation for this paper is to introduce such a test suite to scientific visualization. 
\nSo far, user testimonies and empirical studies have been the dominant means of evaluation in the literature.\nWith this work, we would like to initiate the development of an open resource that the community can use to conduct extensive and rigorous tests on various colormap designs.\nWe also anticipate that the community will contribute new tests to this resource continuously, including but not limited to tests for colormaps used in vector- or tensor-field visualization.\nSuch a test suite can also provide user-centered evaluation methods with stimuli and case studies, while stimulating new hypotheses to be investigated using perceptual and cognitive experiments. \n\nThe development of testing functions in this paper deals with the common features that pose challenges in scalar analysis, such as jumps, local extrema, ridge or valley lines, different distributions of scalar values, different gradients, different signal frequencies, different levels of noise, and so on.\nThis scope should be extended progressively in future work with more complex or specialized cases.\n\nThe main design goal of our test suite is to provide a set of intuitive functions, each of which deals with one particular challenge at a time.\nThey should be easy to interpret and to customize by experts as well as non-expert users.\nThis deliberate simplicity in design can be exploited in future work to facilitate automatic production of test reports or automatic optimization of colormaps with respect to a selection of tests.\n\nAt present, this initial development should provide a set of test functions simulating a variety of planar scalar fields with different characteristic features.\nIt should enable the users to observe the effects when different continuous colormaps are applied to scalar fields that have characteristic features similar to those found in an application.\nIn many situations, the users may anticipate certain features in data sets that are yet to arrive, and
would like to ensure that the color mapping can reveal such features effectively when the data arrives.\nFinding and defining a suitable testing function is usually easier than manually creating a data set.\nIn particular, unlike a synthetic data set, a test function is normally resolution-independent and is accompanied by some parameters for customization.\n\nIn addition, the test suite should provide users with data sets that come from real-world applications, possibly with some modification wherever appropriate. Such an application-focused collection can be compiled from the most popular data for colormap testing in the visualization literature.\nSince both the collection of test functions and that of real-world data sets are extensible, the field of visualization may soon see a powerful test suite for evaluating colormap design.\nThis would desirably bring the field in line with other computer science disciplines.\n\\section{Test Suite} \\label{section:testingFunctions}\nThe first design goal of our test functions is to allow intuitive interpretation of colormap properties by users.\nThis requires each test function to have an easily-understandable behavior, and to have a clear mathematical description that can be reproduced consistently across different implementations.\nThe second design goal is to build the collection of test functions on the existing analytic examples in the literature surveyed in \\autoref{section:relatedwork} to ensure that the existing experiments can be repeated and compared with new experiments.\nThe third design goal is to help users to find the test suite and to conduct tests easily.\nHence we integrate the test suite with the \\texttt{CCC-Tool}, allowing users to conduct tests immediately after creating or editing a colormap specification.\n\nAs mentioned in \\autoref{section:introduction}, our test suite has three parts: local tests, global tests, and a set of application data sets.\nThe local tests are mostly based on analytic examples in the literature
and are defined with considerations from calculus to cover most local properties of scalar functions.\nThe global tests feature analytic properties of scalar functions that are not local, such as signal to noise ratio, global topological properties, different levels of variation, etc.\nFinally, the application-specific data sets reflect the well-documented fact that colormaps should also be evaluated using real-world data sets.\n\nThe mathematical notions in this section use a few common parameters. The user-defined parameters, $r, R$, set the test function range, with $r, R \\in \\PazoBB{R} \\land r \\neq R$. $R$ and $r$ determine the minimum $m$ and maximum $M$ of the test function with $m < M \\in \\PazoBB{R}$. With $b \\in \\PazoBB{N}$ the user can select an exponent that describes the polynomial order.\nFor functions with enumerated cases, the user can select a specific option $T$.\n \n \n \n \n \\begin{figure}[t]\n \t\\centering\n \t\\includegraphics[width=0.99\\linewidth]{pic_stepsExample2.png}\n \t\\caption{ \\label{fig:stepExample}\n \t \\textbf{Left:} The table shows the structure of the neighborhood with four elements ($A=\\{a_0,a_1,a_2,a_3\\}$). The odd indexed columns (yellow) always include the same value, increasing from the first to the last column. The even indexed columns (orange) contain the whole set of test values in increasing order. 
\n \t \\textbf{Right:} Neighborhood variation test with $A=\\{0.0,0.25,0.75,1.0\\}$ for the colormap displayed below including a three-dimensional version encoding the values through height.\n \t }\n \n \\end{figure}\n \n \n \n \n \n \\begin{figure}[t]\n \t\\centering\n \t\\includegraphics[width=0.8\\linewidth]{pic_gradientImage_short_NewColored.png}\n \t\\caption{ \\label{fig:gradientExample} Three gradient tests, with $r=0$, $R=1.0$.\n \t\\textbf{Top Row}: Color mapping visualizations of the \\texttt{Gradient Variation} function for the types $linear$, $convex$, and $concave$ with $T_x=T_y$ and $b=1$ for the first type and $b=2$ for the other types.\n \t\\textbf{Bottom Row}: 3D height-map visualizations of the three gradient tests.\n \t}\n \n \\end{figure}\n \n \n \n \n \n \\begin{figure}[t]\n \t\\centering\n \t\\includegraphics[width=0.8\\linewidth]{pic_minMaxSaddleImage2.png}\n \t\\caption{ \\label{fig:minMaxSaddleExample} This figure shows a 2D color mapping representation (top) and a 3D height-map (bottom) of the 2d scalar fields created with the test function yielding a minimum with $o=1$ and $p=1$ (left), a maximum with $o=-1$ and $p=-1$ (middle), and a saddle with $o=-1$ and $p=1$ (right).}\n \n \\end{figure}\n \n \n \n \n \n \\begin{figure}[t]\n \t\\centering\n \t\\includegraphics[width=0.8\\linewidth]{pic_ridgeImage_short_NewColored.png}\n \t\\caption{ \\label{fig:ridgeExample} Three ridge\/valley-line tests (columns), with $r=0$, $R=1.0$. 
The ridge\/valley-line is always centrally at $x=0$.\n \t\\textbf{Top Row}: Color mapping with $T_x=T_y=linear$, $T_x=T_y=concave$, and $T_x=T_y=convex$, and $b=2$ in the latter two cases.\n \t\\textbf{Bottom Row}: 3D height-map versions of the same tests.\n \t}\n \n \\end{figure}\n \n \n \n\\input{4_1_0_Local_Attributes.tex}\n\\input{4_2_0_Global_Attributes.tex}\n\\input{4_3_0_RealWorldData.tex}\n \n \n \n \n \n \n\n\n\\subsection{Local Tests}\nOur basic design principle behind the local tests is classical calculus.\nLocal means that these test functions help to check the appearance of local properties of a scalar function after mapping it to color with the selected color map.\nThe main idea is to use typical local approximations like low order Taylor series expansions to create the test functions.\nWe use step functions to show the effect of discontinuities, and provide functions with different gradients, various local extrema, saddles, ridges, and valley lines.\nThis corresponds to ideas in the literature, as shown in \\autoref{section:relatedwork}, e.g., works by Mittelst{\\\"a}dt~\\cite{mittelstaedt2014methods,mittelstaedt2015colorcat} or Ware~\\cite{ware1988color}.\nWe also use elements of Fourier calculus by providing functions to test the effect of different frequencies.\nThe final test looks at the colormap's potential to visually reveal small differences within the data range, which might be an important colormap design goal. 
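The local test fields described above can be sampled on a pixel grid before applying a colormap. The sketch below (our own naming and normalization, not the paper's implementation; which exponent yields the convex versus the concave profile is our reading of the figures) generates a gradient-style test field from the common parameters $r$, $R$, $b$, and a shape type $T$:

```python
# Sketch of a gradient-variation local test field on an n x n grid.
# T selects the profile shape, b the polynomial order, [r, R] the value range
# (hypothetical helper; parameter names follow the paper's r, R, b, T).
def gradient_field(n=256, T="linear", b=2, r=0.0, R=1.0):
    def g(y):                          # y in [0, 1] -> value in [r, R]
        if T == "linear":
            t = y
        elif T == "concave":
            t = y ** b                 # slow start, fast finish
        elif T == "convex":
            t = 1.0 - (1.0 - y) ** b   # fast start, slow finish
        else:
            raise ValueError(T)
        return r + (R - r) * t
    return [[g(j / (n - 1)) for j in range(n)] for _i in range(n)]

field = gradient_field(n=4, T="convex", b=2)
# Each row runs monotonically from r to R; pushing the values through a
# colormap turns this into a test image for the map's perceived uniformity.
```

Because the field is generated from a closed-form function, it stays resolution-independent: the same `T`, `b`, `r`, `R` reproduce the identical test at any grid size.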
%\n \n \n\n\n \\input{4_1_1_Testing_Functions_NeighbourhoodVariations.tex}\n \\input{4_1_2_Testing_Functions_GradientVariations.tex}\n \\input{4_1_3_Testing_Functions_MinMaxSaddleVariations.tex}\n \\input{4_1_4_Testing_Functions_Ridge_and_Valley_Lines.tex}\n \\input{4_1_5_Testing_Functions_FrequencyVariations.tex}\n \\input{4_1_6_Testing_Functions_TresholdVariations.tex}\n \n\\subsubsection{Step Functions} \\label{subsubsection:neighbourhoodVariations}\nSome popular test images in the literature use steps between adjacent pixels~\\cite{mittelstaedt2014methods, mittelstaedt2015colorcat, Wang:2008}.\nIn terms of calculus, this means using a function with discontinuities.\nIdeally, the function should have different step heights starting from different levels.\nFor this purpose, we define a set $A =\\{a_0, \\ldots , a_{n-1}\\}$ of increasing test values $a_i < a_{i+1}$. For $R>r$, we get an increasing function, and for $R<r$ a decreasing one. The local extremum test yields a minimum with $o>0 \\land p>0$, a maximum with $o<0 \\land p<0$, and a saddle with $o>0 \\land p<0 \\lor o<0 \\land p>0$.\nThe starting value of the structure is given by $m \\in \\PazoBB{R}$.\n\\autoref{fig:minMaxSaddleExample} shows an example of this test function with visualizations of minima, maxima, and saddle points.\n\n\n\n\n \n\\subsubsection{Local Topology: Ridge and Valley Lines} \\label{subsubsection:ridgeValleyLines}\n\nBesides local extrema and saddle points, ridge and valley lines are further relevant topological shape descriptors.\nAgain, this has been noted by Ware~\\cite{ware1988color} with respect to color mapping.\nAlso, the relevance of ridges and valley lines is well established in feature-based flow visualization~\\cite{Heine:2016}.\nTo test the suitability of colormaps for scalar fields that include such lines, we use a function $f_{RV}:[-1,1]\\times[0,1] \\rightarrow \\PazoBB{R}$.\nThe location of the ridge\/valley-line is always at $x=0$ as a vertical line.\nIts shape is determined by the function $g$ that we introduced in \\autoref{subsubsection:gradientVariations}, so it may be linear, convex, or
concave according to the exponent $b \\in \\PazoBB{N}$ and the shape descriptor $T_y$.\nFor the slope in $x$-direction, we basically use the absolute value with exponent $b$, i.e., $|x|^b$ on the interval $[-1,1]$.\nThis creates a concave shape.\nFor convex shapes, we use the similar function $1-(1-|x|)^b$.\nBoth functions are adjusted to interpolate between $r$ at $-1$ and $1$ and $g(y)$ at $0$.\nThis is quite similar to the definition of $g$.\nWe introduce the type parameter $T_x$ and set it to \"convex\" or \"concave\" and arrive at the definition:\n\n\\begin{equation}\n\\label{equ:ridgeValley}\nf_{RV}(x,y)=\n \\begin{cases}\n (r-g(y)) |x|^b + g(y) & \\textbf{if } T_x = concave \\\\\n (r-g(y)) (1-(1-|x|)^b) + g(y) & \\textbf{if } T_x = convex\n \\end{cases}\n\\end{equation}\n\nAs in the gradient variation case, $b=1$ leads to the same linear function in both cases, which we also denote as \"linear\".\nFor $R>r$, we get a ridge-line. For $R<r$, we get a valley-line.\n\n\\begin{multline}\nf(x,y)=\n \\begin{cases}\n (f_m(y)-t) (1-(1-|x|)^b) + t & \\textbf{if } (T=steep \\land x \\leq 0) \\\\\n (f_M(y)-t) (1-(1-|x|)^b) + t & \\textbf{if } (T=steep \\land x > 0)\n \\end{cases}\n\\end{multline}\n\n\\noindent In \\autoref{fig:tresholdExample}, a plot shows examples of all three types.\n\n\n\n \n\n\\subsection{Global Tests}\n\nIn contrast to the local tests, the global tests look at more global properties of scalar functions and how well the colormap presents them.\nFirst, we look at global topological properties.\nWe use functions showing Perlin noise to create multiple local minima and maxima on different height levels and different spatial structures.\nDetails can be found in \\autoref{subsubsection:globalTopologicalStructures}.\nSecond, it is a challenge for any colormap to deal with a large span of the overall values while small, but relevant, local value variations are also present.\nAny nearly linear colormap will completely overlook these so-called little bit variations.\nAs test functions, we use linear functions with
varying gradient and height as background with small grooves to include little bit variations.\nThe definition is given in \\autoref{subsubsection:littleBitVariations}.\nThird, real-world data, especially images created by measurements, contain noise of various types and intensity, i.e., signal-to-noise ratio.\nWe use functions from the local test suite and add uniform or Gaussian distributed noise of different signal-to-noise ratios.\nWe describe the details in \\autoref{subsubsection:signalNoiseVariations}.\nFinally, we add a collection of test functions from other computer science disciplines to allow for tests using these functions.\n\n\\input{4_2_1_Global_Topological_Structures.tex}\n\\input{4_2_2_Testing_Functions_LittleBitVariations.tex}\n\\input{4_2_3_Testing_Functions_SignalNoiseVariations.tex}\n\\input{4_2_4_Testing_Functions_Collection.tex}\n\\subsubsection{Global Topological Structures} \\label{subsubsection:globalTopologicalStructures}\n\nAs noted above, other authors indicated the relevance of critical points for testing colormaps before.\nIn contrast to the local topology in \\autoref{subsubsection:minMaxSaddle}, we use a larger number of critical points in the following test.\nFor the creation of global topological structures, we take the 2D version of the improved noise algorithm introduced by Perlin \\cite{Perlin_1985, Perlin_2002}, which is often used for the creation of procedural textures or for terrain generation in computer games.\nThe idea of this test function is to use some other test function from \\autoref{subsubsection:gradientVariations} to \\autoref{subsubsection:littleBitVariations} as a background and combine this field with noise according to Perlin's work.\nThese distorted gradients and shapes are in analogy with colormap testing functions specifically used to determine the discriminative power of subregions of colormaps~\\cite{rogowitz1999trajectories, kalvin2000building, ware2017evaluating, ware2017uniformity, 
moreland2009diverging}.\n \nTo create the critical points, we use the noise function $f_{Noise}(x,y) \\in [-n,n]$, $n>0 \\land n\\leq1$, and distinguish four options (see \\autoref{fig:globalTopologyExample}).\nFor the options $min$-, $max$-, and $range-scaled$, the selected test function $f_{test}$ affects the result. The influence of the first two options depends on how close the local value $f_{test}(x,y)$ is to $m$ or $M$, respectively.\nThis procedure creates noise that is focused on small or high values.\nFor the $range-scaled$ option, the adjustment of the local value is limited by the test-function range from $m$ to $M$.\nFurthermore, an optional clipping method for these three options prevents values outside $[m,M]$.\nFourth, we offer $replacement$ as a final option, where users can set a custom noise-range $N=[n_m,n_M]$ with $f_{Noise}(x,y) \\in N$.\nWith this option, the entries of the test function are replaced by the noise value.\n\n\\begin{equation}\n\\label{equ:noiseOptions}\nf_{test}(x,y)=\n \\begin{cases}\n f_{test}(x,y)+f_{Noise}(x,y)*\\frac{f_{test}(x,y)-m}{M-m} & \\textbf{if } max-scaled \\\\\n f_{test}(x,y)+f_{Noise}(x,y)*\\frac{M-f_{test}(x,y)}{M-m} & \\textbf{if } min-scaled \\\\\n f_{test}(x,y)+f_{Noise}(x,y)*(M-m) & \\textbf{if } range-scaled \\\\\n f_{Noise}(x,y) & \\textbf{if } replacement\n \\end{cases}\n\\end{equation}\n\n\n\n\\subsubsection{Little Bit Variation} \\label{subsubsection:littleBitVariations}\n\nThe teaser \\autoref{fig:camelExample} demonstrates that standard colormaps may easily lead to overlooked small value variations.\nFor such cases, i.e., if small variations in the scalar field (within a small sub-range of the full data range) carry valuable information for interpretation, we define a test on the potential of a given colormap to visually resolve small perturbations.\nThis is similar to distorted gradients, which appear quite frequently in the literature~\\cite{rogowitz1999trajectories, kalvin2000building,
ware2017evaluating, ware2017uniformity, moreland2009diverging}.\nThe \\texttt{Little Bit Variation} test\n \n\\begin{equation}\n f_{LB}:[0,2n+1]\\times[0,1] \\rightarrow \\PazoBB{R}\n\\end{equation}\n\n\\noindent uses a background function and adds a function $f_{G}$ producing $n$ small grooves. \nThe background function in this test is a linear gradient along the $y$-direction, which is defined by a user-specified value range $[m,M]$.\nAlong the $x$-direction, this function is modified by a function $f_{G}$ creating $2n+1$ alternate stripes of unchanged background and grooves, so we use\n\n\\begin{equation}\n \\label{equ:littleBit}\n f_{LB}(x,y)=m+(M-m)y-f_{G}(x)\n\\end{equation}\n\n\\noindent The function $f_G$ produces sine-shaped grooves for odd $\\left\\lfloor x \\right\\rfloor$ and no changes for even $\\left\\lfloor x \\right\\rfloor$.\nAs $x$ runs from $0$ to $2n+1$, this creates exactly $n$ grooves.\n\n\\begin{equation}\n\\label{equ:littleBitGroove}\n f_{G}(x) =\n \\begin{cases}\n 0, & \\textbf{ if } \\lfloor x \\rfloor \\mod 2 =0\\\\\n - f_{A}(x) \\sin( \\pi (x - \\lfloor x \\rfloor)), & \\textbf{otherwise}\\\\\n \\end{cases}\n\\end{equation}\n\n\\noindent As can be seen, the sine wave's amplitude is changed by a function $f_{A}(x)$, creating a test of different small value changes (groove depths).\nThe function $f_{A}(x)$ determines the depth of each groove by linear interpolation between a user-defined minimum $g_m$ and maximum $g_M$.\nIn \\autoref{fig:littleBitExample}, you can see an example of the \\texttt{Little Bit Variations} test.
\n\n\\begin{equation}\n\\label{equ:littleBitAmplitude}\n f_{A}(x) = g_{m} + \\frac{\\lfloor x \\rfloor - 1}{2n-2} (g_{M}-g_{m})\n\\end{equation}\n \n\n\n\n\\subsubsection{Signal-Noise Variation} \\label{subsubsection:signalNoiseVariations}\n\nIn signal and data processing, noise plays an important role.\nIt also affects the results of scientific visualizations.\nLike the global topology test (see~\\autoref{subsubsection:globalTopologicalStructures}), our tool offers the option to add noise to each test function (\\autoref{subsubsection:gradientVariations} - \\autoref{subsubsection:littleBitVariations}).\nThe tool uses the standard random algorithm from JavaScript, which produces pseudo-random numbers in the range $[0,1]$ with uniform distribution.\nFor the noise behavior, we offer the same options as in \\autoref{subsubsection:globalTopologicalStructures}.\nIndependent of the selected option, the fraction of noisy pixels can be set.\nThis fraction describes how many randomly selected field values are affected by noise.\nIf the noise proportion is set to 100\\%, the full test-function is affected by noise.\nFor more flexibility, we also offer a conversion from a uniform distribution to a normal or a beta distribution.\nThe conversion from uniform to the normal distribution is done with the Box-Muller transform~\\cite{BoxMull58}.\nWith the normal distribution, the noise will be more focused on weaker changes around zero for the $min\/max-scaled$ and $range-scaled$ options.\nFor the $replacement$ option, the normal distribution causes a focus on values around the median of the defined range of noise values.\nThe conversion from uniform to a beta-like distribution (with $\\alpha,\\beta=0.5$) is done with the equation $beta_{Random}=\\sin(r\\frac{\\pi}{2})^2$, with $r$ being the result of the standard random generator.\nAdding noise using a beta distribution with the $min\/max-scaled$ or $range-scaled$ options will favor values near the maximal change 
parameters $m$ and $-m$.\nFor the $replacement$ option, values near the minimum and maximum of the defined noise value range will be preferred.\nWe modified this conversion so that this preference applies to only one side, i.e., to $m$ or $-m$ in the first case, or to the maximum or minimum in the other case.\nThe modification mirrors the random values at the median onto its left or right side.\nThis allows us to create a symmetric beta-like distribution as well as left-oriented and right-oriented beta distributions.\n\\autoref{fig:noiseCollection} shows the different distribution options.\n\n \n \n \n \n \n \n \n\n\\subsubsection{Function Collection} \\label{subsubsection:testcollection}\n\nMany domains of computer science use test functions for the evaluation of algorithms.\nThere are several widespread well-known functions like the \\texttt{Mandelbrot Set} or \\texttt{Marschner Lobb} and also functions like the \\texttt{Six-Hump Camel Function} from the teaser, which are better known in optimization than in computer science \\cite{Mandelbrot::1980, testfunctions::MarschnerLobb::94, testfunctions::Jamil::2013}.\nSuch functions and their different attributes could also enrich evaluation in scientific visualization.\nTherefore, we included a collection of such functions from the literature in our testing environment.\nThese functions complement our own test functions and provide further challenges for colormaps.\nOver time, we want to extend this collection with more such functions of interest.\nTo allow users to test their colormaps without changes, the values of these functions can be scaled to the range of the colormap or a user-defined range.\n\\autoref{fig:collectionExample} shows some examples of functions used for optimization.\nObviously, they also have relevant properties for the evaluation of color mapping.\nFor example, the \\texttt{Bukin Function} includes many small local minima 
along a valley line~\\cite{testfunctions::Jamil::2013}.\n\n\n\n\\subsection{Application-Specific Tests} \\label{subsection:realWorldData}\nIn the two previous sections, we described several analytic test functions concerning specific challenges encountered in color mapping.\nAdditionally, we introduced a collection of already existing test functions from other computer science domains.\nNevertheless, we think that the involvement of real-world data is indispensable for the completion of this test suite.\nReal-world data originates from many different sources, is generated with various measurement techniques or simulation algorithms, and includes a myriad of attribute variations.\nMost importantly, such data could potentially present several of the challenges described in the two previous sections at the same time. \nSuch tests cannot be completely replaced by our theory-based test functions. \nTherefore, we decided to include a set of application test data from different domains to cover a wide spectrum of realistic challenges.\n\nWithin one specific scientific domain, there is often a similarity between typical data sets; e.g., in medicine, data from MRI (Magnetic Resonance Imaging) or CT (Computed Tomography) is frequently used. 
\nSuch data sets have similar attributes, and similar requirements have to be fulfilled by colormaps.\nBy covering typical data sets of different scientific disciplines, we hope to eventually offer enough real-world test cases so that most users will find a case with some similarity to their data.\nLike the test function collection from \\autoref{subsubsection:testcollection}, this collection of real-world data will be extended over time.\nIn the current version, the tool offers medical-, flow-, and photograph-specific real-world data.\n \n\n \n\n \n\\section{Test Evaluation} \\label{section:testevaluation}\n\nThere are usually good reasons to select specific colormaps or to design colormaps in a specific way.\nDepending on the envisaged purpose of the colormap, a user decides on the number of keys; the hue, saturation, and value of each key; the gradients in the mapping between the data range and the colormap; and so on.\nFurthermore, \\emph{de facto} standards and cognitive motivation may also influence the user's choice. 
\nTherefore, meaningful automated evaluation of continuous colormaps without knowledge of their intended use is rarely feasible.\nConsequently, a general colormap score computed from automatic tests and benchmarks might not be informative.\n\nInstead, we propose to derive information based on the aforementioned test functions that can be analyzed and rated by users themselves.\nA user first chooses a test function from \\autoref{section:testingFunctions}.\nFor each grid point of the generated test field, we calculate the value differences to the neighboring grid points.\nDepending on the location within the field, the number of neighbors varies between three and eight.\nWe normalize these value differences with the minimum and maximum value differences found and save them into a \\texttt{Value Difference Field}.\nWe repeat this process for the colors.\nHere, we use a color difference norm (Lab, DIN99, DE94, or CIEDE2000) and save the normalized values into the \\texttt{Color Difference Field}.\n\nBy subtracting these two fields from one another, we get a \\texttt{Subtraction Field}.\nThis field represents the local uniformity of the color mapping; when the local gradients found in the data are accordingly represented in the color-mapped field, the difference between the normalized data field and the normalized color-mapped field is zero for all pixels\/locations. 
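The construction of the two difference fields and the \texttt{Subtraction Field} can be sketched as follows. This Python sketch is a simplified illustration (the function names are ours, the per-pixel reduction defaults to the maximum, and a plain absolute difference stands in for the color difference norms; the actual tool is implemented in JavaScript):

```python
import numpy as np

def difference_field(field, dist=lambda a, b: abs(a - b), reduce=max):
    # For every grid point, collect differences to its 3-8 neighbors,
    # reduce them (max/average/median), then normalize to [0, 1].
    h, w = field.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            diffs = [dist(field[i, j], field[ii, jj])
                     for ii in range(max(0, i - 1), min(h, i + 2))
                     for jj in range(max(0, j - 1), min(w, j + 2))
                     if (ii, jj) != (i, j)]
            out[i, j] = reduce(diffs)
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo) if hi > lo else out

def subtraction_field(values, mapped):
    # Normalized value differences minus normalized color differences
    # (here both computed on scalar fields for brevity).
    return difference_field(values) - difference_field(mapped)
```

For a linear color mapping of a linear test field, both normalized fields coincide and the resulting field is zero everywhere, matching the description above.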
\nIn the case of a non-linear color mapping, in contrast, the \\texttt{Subtraction Field} will particularly highlight areas with strong non-linear mapping, which the user might have designed intentionally in order to increase the number of discriminable colors for a part of the data range.\nThe user can study the \\texttt{Color Difference Field} as well as the \\texttt{Subtraction Field} to analyze the color mapping of the test function.\n\nEach of the three fields has three to eight values per pixel.\nFor the color mapping (\\autoref{fig:evaluation_Screenshot}), the user can select the maximum, average, or median.\nIn addition, there are options to select a method for the calculation of the color difference.\nThe tool offers the Euclidean distance for the Lab and DIN99 spaces or the use of the DE94 or CIEDE2000 metrics in the Lab space. \n\nTo compare the \\texttt{Color Difference Field} visualizations of different colormaps, we cannot use the normalization by minimum and maximum: the colors of such mappings would relate to different color difference values and would not be comparable.\nTherefore, we implemented two alternative options using fixed values for the minimum and maximum of the normalization to create comparable results.\nThe \\texttt{Black-White} normalization uses the greatest possible color difference between black and white as maximum and zero as minimum.\nThe \\texttt{Custom} normalization uses a user-entered maximum, which is a necessity if the black-white difference is too large compared with the occurring color differences of the \\texttt{Color Difference Fields}.\nIn \\autoref{fig:application_Threshold}, we used the \\texttt{Custom} normalization to get a comparable visualization for a colormap with a discontinuous transition point.\n\n\\section{Application Case} \\label{section:application}\n\nIn this section, we show how the test suite could be utilized to evaluate the suitability of colormaps with respect to a given application problem.\nFor this example, we 
chose a data set from a simulation with a high-resolution global atmosphere model.\nThe data we use is one timestep of the temperature at a height of 2m simulated with the icosahedral ICON model at a global resolution of 5km~\\cite{Stevens:2020}.\nWe remapped the data from the unstructured model grid to a regular grid with $4000 \\times 2000$ grid points for easier use with different tools. \n\nOn the global scale, the 2m-temperature is typically characterized by a wide range of values between less than $-80^{\\circ}${C} and more than $50^{\\circ}${C}.\nFor the selected time step, the simulated 2m-temperature varies between about $-63^{\\circ}${C} and $52^{\\circ}${C}.\nRegionally, however, small temperature variations of the order of $0.1^{\\circ}${C} might be critical for the analysis, e.g., in the neighborhood of the freezing point at $0^{\\circ}${C}.\n\nPanel \\textbf{1a} of \\autoref{fig:application_Threshold} shows a visualization of the data using a spherical projection with a focus on the South Pole.\nIn contrast to mountainous regions, where the horizontal $2m$-temperature gradient is generally high, the gradient in flat areas such as oceanic regions is much smaller.\nHere, the color differences are too small to depict local temperature variations, as, for example, in regions with values close to $0^{\\circ}${C}, as shown in the close-up in the lower right corner of the image.\n\nTo test a given colormap for its discriminative power in the data range around the freezing point, we applied a threshold test with the options $Flat-Surrounding$, $m=-63$, $M=53$, and $t=0$.\nWe start with a locally uniform cool-warm colormap (\\textbf{1a} of \\autoref{fig:application_Threshold}).\nThe related test function visualization \\textbf{1b} demonstrates that it is impossible to differentiate between negative and positive values if the values are close to $0^{\\circ}${C}.\nThe \\texttt{Subtraction Field} method (\\textbf{1c}) of the test evaluation part 
(\\autoref{section:testevaluation}) yields a nearly white image, which reflects that the colormap uniformly represents the gradients produced by the test function.\nTo highlight the freezing point in the mapping, we introduce a non-linearity in the colormap at $0^{\\circ}${C}.\nWe use the twin key option of the CCC-Tool colormap specification (CMS)~\\cite{nardini2019making}, which separates the color key at $0^{\\circ}${C} into a left and right color key to create the discontinuous transition.\nTo improve the visual difference between both sides, we slightly lower the lightness value and increase the saturation of the left color to achieve light blue.\nWe kept white as the right-hand part of the color key.\nPanels \\textbf{2a} and \\textbf{2b} of \\autoref{fig:application_Threshold} illustrate that the introduced discontinuity in the colormap clearly separates the areas with negative and positive temperature values.\nIn comparison to \\textbf{1c}, the \\texttt{Subtraction Field} in \\textbf{2c} marks the spatial position of the discontinuous transition at $0^{\\circ}${C} with a vertical red line.\nThe corresponding visualization of the temperature field with the modified colormap is shown in panel \\textbf{2a}.\n\nIf we visualize the global 2m-temperature field using a linear colormap and look at the tropics or the mid-latitudes, we find that regional variations are also not very well resolved.\nUsing the same colormap, \\autoref{fig:application_LittleBit}~\\textbf{1a} shows a different view of our planet than \\autoref{fig:application_Threshold}~\\textbf{1a}.\nThe resolving power of the linear colormap is equally distributed over the full data range.\nHowever, when we analyze the global temperature distribution, we find that more than half of the data range is used for the temperature variations far below $0^{\\circ}${C}, mostly in Antarctica, although this information is less important for most users of such a data set.\nWith respect to vegetation and agriculture, we may 
want to put more focus on regions with temperatures mostly above $0^{\\circ}${C}.\n\nTherefore, we extended the path of the colormap through the color space to get more distinguishable colors for the positive data range. \nWe used a \\texttt{Little Bit} test to monitor improvements during this process.\nPanel~\\textbf{1a} of \\autoref{fig:application_LittleBit} shows a visualization using the colormap with the discontinuous transition introduced above.\nThe corresponding \\texttt{Little Bit} test is shown in panel~\\textbf{1b}.\nFor the evaluation, we used the \\texttt{Color Difference Field} (\\autoref{section:testevaluation}).\nPanel~\\textbf{1c} shows how the small grooves in the linear gradient of the \\texttt{Little Bit} test function (that are hardly noticeable in \\textbf{1b}) become clearly visible in the color difference field.\nThe regularly spaced perturbations in the field increase in magnitude from left to right, which is represented by a stripe pattern in panel~\\textbf{1c} with contrast increasing accordingly.\nThe vertically constant color of the stripe pattern is a direct consequence of a linear colormap.\n\nHowever, as we wanted to increase the discriminative power in the upper part of the colormap, we inserted additional color keys.\nFirst, we moved the blue part of the colormap representing negative values slightly away from cyan.\nThe freed color space was utilized to represent the lower positive temperatures.\nA gradient from white to cyan covers $0^{\\circ}${C}-$10^{\\circ}${C} and is followed by a gradient from cyan to green representing the moderate temperature range of $10^{\\circ}${C}-$20^{\\circ}${C}.\nA subsequent gradient from yellow through beige to light brown shows values between $20^{\\circ}${C} and $40^{\\circ}${C}. 
\nA further transition to dark red finally shows the higher temperature range of up to $53^{\\circ}${C}.\n\nOur colormap semantics were designed to roughly differentiate between five temperature zones: very cold (blue to light blue), moderately cool (white to cyan), moderately warm (cyan to green), warm (green to yellow to beige), and hot (red).\nTo accommodate red-green colorblind viewers, we used a lower, non-overlapping lightness range for the red gradient compared to the green gradient.\nThe respective color gradients were separately optimized for local uniformity.\nThe panels~\\textbf{2a} and \\textbf{2b} of \\autoref{fig:application_LittleBit} show the visualizations of the temperature data and the test function with the modified colormap.\nNote that we used the \\texttt{Little Bit} test function only for the upper part of the colormap that corresponds to temperature values between $m=5^{\\circ}${C} and $M=53^{\\circ}${C}.\nAs a result of our modifications of the colormap, it is now possible to see much more detail in the inhabited part of our planet and to distinguish between the different temperature zones.\nCompared to~\\textbf{1c}, the \\texttt{Color Difference Field}~\\textbf{2c} shows an increase in the color difference at the expense of the local uniformity of the positive data range. \n \n\n \n\n \n\\section{Conclusion} \\label{section:conclusion}\nIn this paper, we have introduced the approach of using test functions as a standard evaluation method, and we have presented a test suite for continuous colormaps.\nAs in other fields of computer science, such test functions can be used alongside user-centered evaluation (e.g., user testimonies and empirical studies).\nIn comparison with user-centered evaluation, there is no need to recruit participants, design questionnaires or stimuli, organize payment, arrange experiment time and environment, and provide apparatus. 
Evaluating colormaps using the test suite can be conducted quickly and easily.\nThe designer can test many candidate colormaps against many test functions and data sets, which is usually not feasible with user-centered evaluation. The same tests can be repeated with consistent control and comparability.\n\nFor the test suite, we first focused on the specific challenges of scalar fields.\n\\autoref{subsubsection:neighbourhoodVariations}-\\ref{subsubsection:littleBitVariations} describe the test functions we chose to address these challenges.\nTo help users with a less mathematical background, we tried to develop intuitive functions that are simple and easy to interpret.\nThe test suite currently includes step functions, different gradients, minima, maxima, saddle points, ridge and valley lines, global topology, thresholds, different frequencies, and a test for very small value changes.\nAlthough these test functions cannot cover all possible challenges, we have laid down a solid foundation that can be extended continually.\nWe have also included the option to add noise to extend the possibilities of the basic test functions. \n\nBesides our newly designed functions, we have presented in \\autoref{subsubsection:testcollection} a collection of functions used for evaluation in other computer science fields.\nWe think they will prove to be useful for the evaluation of colormaps as well.\nFurthermore, we have included an initial selection of real-world data sets from different application areas.\nAs described in \\autoref{subsection:realWorldData}, tests against real-world data are important in practice. \nEach real-world data set in our test suite presents an individual challenge, typically combining several of the challenges encountered in scalar field analysis. 
\nHere, our intention is to provide broad coverage so that users are less dependent on external data.\n\nOur test suite has been integrated into the open-access CCC-Tool.\nIn \\autoref{section:testevaluation}, we describe means to evaluate the results of the test functions visually and numerically, which we have also implemented in our online tool. \nAn example of using the test suite to evaluate and enhance a user-designed colormap concerning a specific application problem is finally presented and discussed in \\autoref{section:application}.\n\nFrom a long-term perspective, we plan to continue the extension of our collection.\nOne option for real-world data would be an open-source database with a web interface and a link to our tool.\nIn order to establish the test suite as a standard evaluation method, we would like to work on automatic test reports, which can perform automatic analysis of a colormap with a set of tests chosen by the user.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\\label{sec:introduction}\nCounterfactual explanations have generated immense interest in several high-stakes applications, e.g., lending, credit decisions, hiring, etc.~\\cite{verma2020counterfactual, Karimi_arXiv_2020,wachter2017counterfactual}. Broadly speaking, the goal of counterfactual explanations is to guide an applicant on how they can change the outcome of a model by providing suggestions for improvement. Given a specific input value (e.g., a data point that is declined by a model), counterfactual explanations attempt to find another input value for which the model would provide a different outcome (essentially get accepted). Such an input value that changes the model outcome is often referred to as a \\emph{counterfactual}.\n\n\nSeveral existing works focus on finding counterfactuals that are as ``close'' to the original data point as possible with respect to various distance metrics, e.g., $L_1$ cost or $L_2$ cost. 
This cost is believed to represent the ``effort'' that an applicant might need to make to get accepted by the model. Thus, the ``closest'' counterfactuals essentially represent the counterfactuals attainable with minimum effort. \n\n\n\n\nHowever, the closest counterfactuals may not always be the most preferred ones. For instance, if the model changes even slightly, e.g., due to retraining, the counterfactual may no longer remain valid. In Table~\\ref{robustness_demo}, we present a scenario where we retrain an XGBoost model~\\cite{chen2015xgboost} with the same hyperparameters on the same dataset, leaving out just one data point. We demonstrate that a large fraction of the ``closest'' counterfactuals generated using the state-of-the-art techniques for tree-based models no longer remain valid. This motivates our primary question: \n\\begin{center}\n\\emph{How do we generate counterfactuals for tree-based ensembles that are not only close but also robust to changes in the model?}\n\\end{center}\n\n\\begin{table}\n\\caption{Validity of Counterfactuals Generated Using State-Of-The-Art Techniques (with $L_1$ cost minimization) for XGBoost Models on German Credit Dataset~\\cite{UCI}: Models were retrained after dropping only a single data point. 
A large fraction of the counterfactuals for the previous model no longer remain valid for the new models obtained after retraining.}\n\\label{robustness_demo}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcccc}\n\\toprule\nMethod & FT & FOCUS & FACE & NN \\\\\n\\midrule\nValidity & $72.9\\%$ \n & $72.8\\%$ \n & $84.4\\%$ \n & $92.5\\%$\\\\ \n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\end{table}\n\n\n\n\nTowards addressing this question, in this work, we make the following contributions:\n\n\\begin{itemize}\n\\item \\textbf{Quantification of Counterfactual Stability:} We propose a novel metric that we call -- \\emph{Counterfactual Stability} -- that quantifies how robust a counterfactual is going to be to possible changes in the model. In order to arrive at this metric, we identify the desirable theoretical properties of counterfactuals in tree-based ensembles that can make them more stable, i.e., less likely to be invalidated due to possible model changes under retraining. Tree-based ensembles pose additional challenges in robust counterfactual generation because they do not conform to standard assumptions, e.g., they are not smooth and continuous, have a non-differentiable objective function, and can change a lot in the parameter space under retraining on similar data. Our proposed quantification is of the form $R_{\\Phi}(x,M)$ where $x \\in \\mathbb{R}^d$ is an input (not necessarily in the dataset or data manifold), $M(\\cdot):\\mathbb{R}^d \\to [0,1]$ is the original model, and $\\Phi$ denotes some hyperparameters for this metric. 
While counterfactuals on the data manifold have been found to be more robust than simply ``closest'' or ``sparsest'' counterfactuals (see \\cite{pawelczyk2020counterfactual}), being on the data manifold may not be sufficient for robustness, thus calling for our metric.\n\n\\item \\textbf{Conservative Counterfactuals With Theoretical Robustness Guarantee:}\nWe introduce the concept of \\emph{Conservative Counterfactuals}, which are essentially counterfactuals (points with the desired outcome) lying in the dataset that also have high counterfactual stability $R_{\\Phi}(x,M)$. Given an input $x \\in \\mathbb{R}^d$, a conservative counterfactual is essentially its nearest neighbor in the dataset on the other side of the decision boundary that also passes the counterfactual stability test, i.e., $R_{\\Phi}(x,M) \\geq \\tau$ for some threshold $\\tau$. We provide a theoretical guarantee (see Theorem~\\ref{thm:guarantee}) that bounds the probability of invalidation of the conservative counterfactual under model changes.\n\n\n\\item \\textbf{An Algorithm for Robust Counterfactual Explanations (RobX):} We propose \\emph{RobX}, which generates robust counterfactuals for tree-based ensembles leveraging our metric of counterfactual stability. Our proposed strategy is a post-processing one, i.e., it can be applied after generating counterfactuals using any of the existing methods for tree-based ensembles (which we also refer to as the base method), e.g., Feature Tweaking (FT)~\\cite{tolomei2017interpretable}, FOCUS~\\cite{lucic2019focus}, Nearest Neighbor (NN)~\\cite{albini2021counterfactual}, FACE~\\cite{poyiadzi2020face}, etc. 
Our strategy iteratively refines the counterfactual generated by the base method and moves it towards the conservative counterfactual, until a ``stable'' counterfactual is found (i.e., one that passes our counterfactual stability test $R_{\\Phi}(x,M) \\geq \\tau$).\n\n\\item \\textbf{Experimental Demonstration:} Our experiments on real-world datasets, namely German Credit~\\cite{UCI} and HELOC~\\cite{Fico_web_2018}, demonstrate that the counterfactuals generated using RobX significantly improve the robustness over SOTA techniques (nearly 100\\% validity after actual model changes). Furthermore, our counterfactuals also lie in the dense regions of the data manifold, thereby being realistic in terms of the Local Outlier Factor (see Definition~\\ref{defn:lof}), a metric popularly used to quantify likeness to the data manifold. \n\\end{itemize}\n\n\n\n\n\n\n\n\\begin{rem}[Drastic Model Changes]\nOne might ask why counterfactuals should necessarily remain valid after changes to the model. Shouldn't they instead vary with the model to reflect those changes? E.g., economic changes might cause drastic changes in lending models (possibly due to major data distribution shifts). In such scenarios, one might in fact prefer counterfactuals for the old and new models to be different. Indeed, we agree that counterfactuals are not required to remain valid for very drastic changes to the model (see Figure~\\ref{fig:robustness_not_needed}; also see an impossibility result in Theorem~\\ref{thm:impossibility1}). However, this work focuses on small changes to the model, e.g., retraining on some data drawn from the same distribution, or minor changes to the hyperparameters, keeping the underlying data mostly similar. 
Such small changes to the model are in fact quite common in several applications and occur frequently in practice~\\cite{upadhyay2021towards,Hancox-Li_fat_2020,black2021consistent,barocas2020hidden}.\n\\end{rem}\n\n\n\\textbf{Related Works:} Counterfactual explanations have received significant attention in recent years (see \\cite{verma2020counterfactual,Karimi_arXiv_2020,wachter2017counterfactual,multiobjective,konig2021causal,albini2021counterfactual,kanamori2020dace,poyiadzi2020face,lucic2019focus,pawelczyk2020counterfactual,ley2022global,spooner2021counterfactual,sharma2019certifai} as well as the references therein). In \\cite{pawelczyk2020counterfactual,kanamori2020dace,poyiadzi2020face}, the authors argue that counterfactuals that lie on the data manifold are likely to be more robust than the closest counterfactuals, but the focus is more on generating counterfactuals that specifically lie on the data manifold (which may not always be sufficient for robustness). Despite researchers arguing that robustness is an important desideratum of local explanation methods~\\cite{Hancox-Li_fat_2020}, the problem of generating robust counterfactuals has been less explored, with the notable exceptions of some recent works \\cite{upadhyay2021towards,rawal2020can,black2021consistent}. In \\cite{upadhyay2021towards,black2021consistent}, the authors propose algorithms that aim to find the \\emph{closest} counterfactuals that are also robust (with demonstration on linear models and neural networks). In \\cite{rawal2020can}, the focus is on analytical trade-offs between validity and cost. We also refer to \\cite{Mishra_arXiv_2021} for a survey on the robustness of both feature-based attributions and counterfactuals. \n\nIn this work, our focus is on generating robust counterfactuals for tree-based ensembles. 
Tree-based ensembles pose additional challenges in robust counterfactual generation because they do not conform to standard assumptions for linear models and neural networks, e.g., they have a non-smooth, non-differentiable objective function. Furthermore, our performance metrics include both distance ($L_1$ or $L_2$ cost) and likeness to the data manifold (LOF). \n\nWe note that \\cite{alvarez2018robustness} proposes an alternate perspective on robustness in explanations, called $L$-stability, which is built on the idea that similar individuals should receive similar explanations. Instead, our focus is on explanations remaining valid after some changes to the model.\n\n\\begin{figure}\n\\centering\n\\includegraphics[height=4.2cm]{robustness_not_needed}\n\\includegraphics[height=4.2cm]{robustness_needed}\n\\caption{Scenarios distinguishing drastic and small model changes: (Left) Drastic model changes due to major distribution shifts; one may not want robustness of counterfactuals here. (Right) Small model changes due to retraining on very similar data or minor hyperparameter changes that occur frequently in practice. Robustness of counterfactuals is highly desirable here.}\n\\label{fig:robustness_not_needed}\n\\end{figure}\n\n\\section{Problem Setup}\nLet $\\mathcal{X} \\subseteq \\mathbb{R}^d$ denote the input space and let $\\mathcal{S}=\\{x_i\\}_{i=1}^N \\subset \\mathcal{X}$ be a dataset consisting of $N$ independent and identically distributed points generated from a density $q$ over $\\mathcal{X}$.\nWe also let $M(\\cdot):\\mathbb{R}^d \\to [0,1]$ denote the original machine learning model (a tree-based ensemble, e.g., an XGBoost model) that takes an input value and produces an output probability lying between $0$ and $1$. 
The final decision is denoted by: $$D(x)=\\begin{cases} 1 \\text{ if $M(x)>0.5$,}\\\\\n0 \\text{ otherwise.}\\end{cases}$$\n\nSimilarly, we denote a changed model by $M_{new}(\\cdot):\\mathbb{R}^d \\to [0,1]$, and the decision of the changed model by: $$D_{new}(x)=\\begin{cases} 1 \\text{ if $M_{new}(x)>0.5$,}\\\\\n0 \\text{ otherwise.}\\end{cases}$$\n\n\nIn this work, we are mainly interested in tree-based ensembles~\\cite{chen2015xgboost}. A tree-based ensemble model is defined as follows:\n$M(x)=\\sum_{t=1}^T m^{(t)}(x)$ where each $m^{(t)}(x)$ is an independent tree with $L$ leaves, having weights $\\{w_1,\\ldots,w_L\\} \\in \\mathbb{R}$. A tree $m^{(t)}(x)$ maps a data point $x \\in \\mathbb{R}^d$ to one of the leaf indices (based on the tree structure), and produces an output $w_l \\in \\{w_1,\\ldots,w_L\\}$. One may use a \\texttt{sigmoid} function~\\cite{chen2015xgboost} for the final output to lie in $[0,1]$.\n\n\n\\subsection{Background on Counterfactuals}\n\nHere, we provide a brief background on counterfactuals.\n\n\\begin{defn}[Closest Counterfactual $\\mathcal{C}_{p}(x,M)$]\nGiven $x\\in \\mathbb{R}^d$ such that $M(x)\\leq 0.5$, its closest counterfactual (in terms of $L_p$-norm) with respect to the model $M(\\cdot)$ is defined as a point $x'\\in \\mathbb{R}^d$ that minimizes the $l_p$ norm $||x-x'||_p$ such that $M(x')>0.5$. \n\\begin{equation}\n\\mathcal{C}_{p}(x,M)=\\arg \\min_{x'\\in \\mathbb{R}^d} ||x-x'||_p \n\\text{ such that } M(x')>0.5. \\nonumber\n\\end{equation} \n\\end{defn}\n\nFor tree-based ensembles, some existing approaches to find the closest counterfactuals include~\\cite{tolomei2017interpretable,lucic2019focus}. When $p=1$, these counterfactuals are also referred to as ``sparse'' counterfactuals in existing literature~\\cite{pawelczyk2020counterfactual} because they attempt to find counterfactuals that can be attained by changing as few features as possible (enforcing a sparsity constraint). 
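For illustration, when the search space is restricted to a finite candidate set, the closest counterfactual $\mathcal{C}_{p}(x,M)$ can be found by brute force. The following Python sketch is an illustrative assumption on our part (the function names and the toy model are ours; exact minimization over $\mathbb{R}^d$ for tree ensembles requires dedicated methods such as those cited above):

```python
import numpy as np

def closest_counterfactual(x, candidates, model, p=1):
    # Brute-force C_p(x, M) over a finite candidate set: among candidates
    # with M(x') > 0.5, return the one minimizing ||x - x'||_p.
    x = np.asarray(x, dtype=float)
    accepted = [np.asarray(c, dtype=float)
                for c in candidates if model(c) > 0.5]
    if not accepted:
        return None  # no counterfactual exists in the candidate set
    return min(accepted, key=lambda c: np.linalg.norm(x - c, ord=p))
```

Restricting the candidate set to the dataset $\mathcal{S}$ turns this into the nearest-neighbor-style, data-support counterfactual discussed next.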
\n\nClosest counterfactuals have often been criticized in existing literature~\\cite{poyiadzi2020face,pawelczyk2020counterfactual,kanamori2020dace} as being too far from the data manifold, and thus unrealistic and anomalous. This has led to several approaches for generating ``data-support'' counterfactuals that lie on the data manifold, e.g., \\cite{kanamori2020dace,albini2021counterfactual,poyiadzi2020face}. Here, we choose one such definition of data-support counterfactual, which is essentially the nearest neighbor with respect to the dataset $\\mathcal{S}$ that also gets accepted by the model~\\cite{albini2021counterfactual}.\n\n\\begin{defn}[Closest Data-Support Counterfactual $\\mathcal{C}_{p,\\mathcal{S}}(x,M)$]\n\\label{defn:data-support-CF}\nGiven $x\\in \\mathbb{R}^d$ such that $M(x)\\leq 0.5$, its closest data-support counterfactual $\\mathcal{C}_{p,\\mathcal{S}}(x,M)$ with respect to the model $M(\\cdot)$ and dataset $\\mathcal{S}$ is defined as a point $x'\\in \\mathcal{S}$ that minimizes the $L_p$ norm $||x-x'||_p$ such that $M(x')>0.5$.\n\\begin{equation}\n\\mathcal{C}_{p,\\mathcal{S}}(x,M)=\\arg \\min_{x'\\in \\mathcal{S}} ||x-x'||_p\n\\text{ such that }{M(x')>0.5}. \\nonumber\n\\end{equation} \n\\end{defn}\n\n\\begin{rem}[Metrics to Quantify Likeness to Data Manifold] In practice, instead of finding counterfactuals that lie exactly on the dataset, one may use alternate metrics that quantify how alike or anomalous a point is with respect to the dataset. One popular metric to quantify anomalousness that is also used in existing literature~\\cite{pawelczyk2020counterfactual,kanamori2020dace} on counterfactual explanations is the Local Outlier Factor (see Definition~\\ref{defn:lof}; also see \\cite{breunig2000lof}).\n\\end{rem}\n\n\\begin{defn}[Local Outlier Factor (LOF)]\nFor $x \\in \\mathcal{S}$, let $N_k(x)$ be its $k$-nearest neighbors (k-NN) in $\\mathcal{S}$. 
The $k$-reachability distance $rd_k$ of $x$ with respect to $x'$\nis defined by $rd_k(x, x')= \\max\\{\\Delta(x, x'), d_k(x')\\}$, where $d_k(x')$\nis the distance $\\Delta$ between $x'$ and its $k$-th nearest instance\nin $\\mathcal{S}$. The $k$-local reachability density of $x$ is defined by\n$lrd_k(x) = |N_k(x)| (\n\\sum_{x' \\in N_k(x)} rd_k(x, x'))^{-1}.$ Then, the\n$k$-LOF of $x$ on $\\mathcal{S}$ is defined as follows:\n$$q_k(x | \\mathcal{S}) = \\frac{1}{|N_k(x)|}\n\\sum_{x' \\in N_k(x)}\n\\frac{lrd_k(x')}{lrd_k(x)}\n.$$ Here, $\\Delta(x, x')$ is the distance between two $d$-dimensional feature vectors.\n\\label{defn:lof}\n\\end{defn}\n\nIn this work, we use an existing implementation of LOF from \\texttt{scikit}~\\cite{scikit-lof} that predicts $-1$ if the point is anomalous, and $+1$ for inliers. So, in this work, a high average LOF essentially suggests that the points lie on the data manifold, and are more realistic, i.e., \\emph{higher is better}.\n\nNext, we introduce our goals.\n\\subsection{Goals}\n\nGiven a data point $x \\in \\mathcal{X}$ such that $M(x)\\leq 0.5$, our goal is to find a counterfactual $x'$ with $M(x')>0.5$ that meets our requirements:\n\\begin{itemize}[leftmargin=*, itemsep=0pt, topsep=0pt]\n\\item Close in terms of $L_p$ cost: The point $x'$ is close to $x$, i.e., $||x-x'||_p$ is as low as possible.\n\\item Robust: The point $x'$ remains valid after changes to the model, i.e., $M_{new}(x')>0.5$.\n\\item Realistic: The point $x'$ is as similar to the data manifold as possible, e.g., has a high LOF (higher is better).\n\\end{itemize}\n\n\\begin{rem}[Bookkeeping Past Counterfactuals] One possible solution for ensuring the robustness of counterfactuals under model changes could be to keep a record of past counterfactuals. 
Then, even if there are small changes to the model that can render those counterfactuals invalid, one might still want to accept them because they have been recommended in the past: Output $D(x)$ if $x$ is a past counterfactual, or $D_{new}(x)$ otherwise. However, this approach would require significant storage overhead. Furthermore, there would also be fairness concerns if two data points that are extremely close to each other receive different decisions, e.g., one is accepted because it was a past counterfactual even though the new model rejects it, while the other point is rejected. \n\\end{rem}\n\n\n\\section{Main Results}\n\\label{sec:main}\nIn this section, we first identify the desirable properties of counterfactuals in tree-based ensembles that make them more stable, i.e., less likely to be invalidated by small changes to the model. These properties then lead us to propose a novel metric -- that we call \\emph{Counterfactual Stability} -- that quantifies the robustness of a counterfactual with respect to possible changes to a model. This metric enables us to arrive at an algorithm for generating robust counterfactuals that can be applied on top of any base method.\n\n\\subsection{Desirable properties of counterfactuals in tree-based ensembles that make them more stable}\n\n\nIn this work, we are interested in finding counterfactuals that are robust to small changes to the model (recall Figure~\\ref{fig:robustness_not_needed}), e.g., retraining on some data from the same distribution, or minor changes to the hyperparameters. We note that if the model changes drastically, it might not make sense to expect that counterfactuals will remain valid, as demonstrated in the following impossibility result. 
\n\\begin{thm}[Impossibility Under Drastic Model Changes]\nGiven a tree-based ensemble model $M(\\cdot):\\mathbb{R}^d \\to [0,1]$, there always exists another tree-based ensemble model $M_{new}(\\cdot):\\mathbb{R}^d \\to [0,1]$ such that all counterfactuals to $M$ with respect to a dataset $\\mathcal{S}$ no longer remain valid.\n\\label{thm:impossibility1}\n\\end{thm}\n\nThus, we first need to make some \\emph{reasonable} assumptions on how the model changes during retraining, or rather, on what kind of model changes we are most interested in.\n\n\nIn this work, we instead arrive at the following desirable properties of counterfactuals for tree-based ensembles that can make them more stable, i.e., less likely to be invalidated. Our first property is based on the fact that the output of a model $M(x) \\in [0,1]$ is expected to be higher if the model has more confidence in that prediction. \n\n\\begin{propty}\nFor any $x \\in \\mathbb{R}^d$, a higher value of $M(x)$ makes it less likely to be invalidated due to model changes.\n\\label{propty:high_confidence}\n\\end{propty}\n\nHowever, having a high $M(x)$ may not be the only property needed to ensure robustness, particularly in tree-based ensembles. This is because \\emph{tree-based models do not have a smooth and continuous output function}. For instance, there may exist points $x \\in \\mathbb{R}^d$ with a very high output value $M(x)$ while several points in their neighborhood have a low output value (not smooth). This issue is illustrated in Figure~\\ref{fig:property2}. There may be points with high $M(x)$ that are quite close to the decision boundary, and thus more vulnerable to being invalidated with model changes. 
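This non-smoothness is easy to reproduce in a toy example (a hypothetical one-dimensional dataset of our own construction, not from the paper): a tree can output full confidence at a point even though its output collapses immediately outside a narrow region.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# 1-D data where only a narrow sliver around x = 0.5 is labeled positive.
X = np.linspace(0, 1, 201).reshape(-1, 1)
y = ((X[:, 0] > 0.48) & (X[:, 0] < 0.52)).astype(int)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
M = lambda v: tree.predict_proba(np.array([[v]]))[0, 1]

# Piecewise-constant output: fully confident at 0.5, yet zero just outside
# the sliver -- a high M(x) alone says nothing about nearby points.
print(M(0.50), M(0.44), M(0.56))  # 1.0 0.0 0.0
```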
\n\nAs a safeguard against such a possibility, we introduce our next desirable property.\n\n\\begin{propty}\nAn $x \\in \\mathbb{R}^d$ is less likely to be invalidated due to model changes if several points close to $x$ (denoted by $x'$) have a high value of $M(x')$.\n\\label{propty:high_confidence_mean}\n\\end{propty}\n\nWe also note that a counterfactual may be more likely to be invalidated if it lies in a highly variable region of the model output function $M(x)$. This is because the confidence of the model predictions in that region might be less reliable. This issue is illustrated in Figure~\\ref{fig:property3}. One way to capture the variability of a model output is to examine its derivative. However, because \\emph{tree-based ensembles are not differentiable}, we instead examine the standard deviation of the model output around $x$ as a representative of its variability. \n\n\\begin{propty}\nAn $x \\in \\mathbb{R}^d$ is less likely to be invalidated due to model changes if the model output values around $x$ have low variability (standard deviation).\n\\label{propty:variability}\n\\end{propty}\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}[b]{0.48\\textwidth}\n\\centering\n{\\centering \\includegraphics[height=4.1cm]{property2}}\n\\caption{Counterfactual is close to the boundary. 
\\label{fig:property2}}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.48\\textwidth}\n\\centering\n{\\centering \\includegraphics[height=4.1cm]{property3}}\n\\caption{Counterfactual lies in a highly variable region.}\\label{fig:property3}\n\\end{subfigure}\n\\caption{Motivation for desirable properties.}\n\\end{figure}\n\n\n\\subsection{Proposed Quantification of Robustness to Possible Model Changes:\\\\ Counterfactual Stability}\n\nOur properties lead us to introduce a novel metric -- that we call counterfactual stability -- that attempts to quantify the robustness of a counterfactual $x \\in \\mathbb{R}^d$ to possible changes in the model (irrespective of whether $x$ is in the data manifold). \n\n\n\\begin{defn}[Counterfactual Stability] The stability of a counterfactual $x\\in \\mathbb{R}^d$ is defined as follows: \\begin{align}&R_{K,\\sigma^2}(x,M)=\\frac{1}{K}\\sum_{x' \\in N_x}M(x') - \\sqrt{\\frac{1}{K}\\sum_{x' \\in N_x}\\left(M(x') - \\frac{1}{K}\\sum_{x' \\in N_x}M(x')\\right)^2} \n\\end{align}where $N_x$ is a set of $K$ points in $\\mathbb{R}^d$ drawn from the distribution $\\mathcal{N}(x,\\sigma^2\\mathrm{I}_{d})$, with $\\mathrm{I}_{d}$ being the identity matrix.\n\\label{defn:stability}\n\\end{defn}\n\nThis metric of counterfactual stability is aligned with our desirable properties. Given a point $x\\in \\mathbb{R}^d$, it generates a set of $K$ points centered around $x$. The first term $\\frac{1}{K}\\sum_{x' \\in N_x}M(x')$ is expected to be high if the model output value $M(x)$ is high for $x$ (Property~\\ref{propty:high_confidence}) as well as for several points close to $x$ (Property~\\ref{propty:high_confidence_mean}). However, we note that the mean value of $M(x)$ around a point $x \\in \\mathbb{R}^d$ may not always capture the variability in that region. For instance, a combination of very high and very low values can also produce a reasonable mean value. 
Thus, we also incorporate a second term, i.e., the standard deviation $\\sqrt{\\frac{1}{K}\\sum_{x' \\in N_x}\\left(M(x') - \\frac{1}{K}\\sum_{x' \\in N_x}M(x')\\right)^2} $ which captures the variability of the model output values in a region around $x$ (recall Property~\\ref{propty:variability}). \n\nWe also note that the variability term (standard deviation) in Definition~\\ref{defn:stability} is useful only in conjunction with the first term (mean). This is because even points on the other side of the decision boundary (i.e., $M(x')< 0.5$) can have high or low variance. We include the histogram of $M(x)$, $\\frac{1}{K}\\sum_{x' \\in N_x}M(x')$, and $R_{K,\\sigma^2}(x,M)$ in the Appendix for further insights.\n\nNext, we discuss how our proposed metric can be used to test if a counterfactual is \\emph{stable}.\n\n\\begin{defn}[Counterfactual Stability Test] \\label{defn:stability_test} A counterfactual $x\\in \\mathbb{R}^d$ satisfies the counterfactual stability test if: \n\\begin{equation}R_{K,\\sigma^2}(x,M)\\geq \\tau.\n\\end{equation}\n\\end{defn}\n\n\n\n\\begin{rem}[Discussion on Data Manifold] \n\\label{rem:data_manifold}Our definition of counterfactual stability holds for all points $x\\in \\mathbb{R}^d$ and is not necessarily restricted to points that lie on the data manifold, e.g., $x \\in \\mathcal{S}$. This is because there might be points or regions outside the data manifold that could also be robust to model changes. For example, consider a loan applicant who is exceptionally good at literally everything. Such an applicant might not lie on the data manifold, but it is expected that most models would accept such a data point even after retraining. We note, however, that recent work~\\cite{pawelczyk2020counterfactual} demonstrates that data-support counterfactuals are more robust than sparse counterfactuals, an aspect that we discuss further in Section~\\ref{subsec:robustness_guarantee}, which also motivates our definition of conservative counterfactual. 
\n\\end{rem}\n\n\n\\subsection{Concept of Conservative Counterfactuals}\n\\label{subsec:conservative_counterfactuals}\nHere, we introduce the concept of \\emph{Conservative Counterfactuals}, which allows us to use our counterfactual stability test to generate stable counterfactuals from the dataset.\n\n\n\\begin{defn}[Conservative Counterfactual $\\mathcal{C}^{(\\tau)}_{p,\\mathcal{S}}(x,M)$] Given a data point $x \\in \\mathcal{S}$ such that $M(x)\\leq 0.5$, a conservative counterfactual $\\mathcal{C}^{(\\tau)}_{p,\\mathcal{S}}(x,M)$ is defined as a data point $x' \\in \\mathcal{S}$ such that $M(x')>0.5$ and $R_{K,\\sigma^2}(x',M)\\geq\\tau$, that also minimizes the $L_p$ norm $||x-x'||_p$, i.e.,\n\\begin{align}\n&\\mathcal{C}^{(\\tau)}_{p,\\mathcal{S}}(x,M)=\\arg \\min_{x'\\in \\mathcal{S}} ||x-x'||_p \\nonumber \\\\\n&\\text{such that } M(x')>0.5 \\text{ and } R_{K,\\sigma^2}(x',M)\\geq \\tau.\n\\end{align} \n\\end{defn}\n\n\\begin{rem}[Existence] A higher $\\tau$ leads to better robustness. However, a conservative counterfactual may or may not exist depending on how high the threshold $\\tau$ is. When $\\tau$ is very low, the conservative counterfactuals become the closest data-support counterfactuals.\n\\end{rem}\n\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[height=4cm]{gaussian_1.png}\n \\includegraphics[height=4cm]{gaussian_2.png}\n \\includegraphics[height=4cm]{gaussian_3.png}\n \\caption{Thought experiment to understand how a conservative counterfactual is more robust than typical closest counterfactuals or closest-data-support counterfactuals: The data $\\mathcal{S}$ is drawn from the following distribution: $p(x|y=1)\\sim \\mathcal{N}(\\mu,\\Sigma)$ and $p(x|y=0)\\sim \\mathcal{N}(-\\mu,\\Sigma)$. The first model denotes the original model $M(x)$ while the next two models denote possible models obtained after retraining on the same data with nearly identical accuracy (performance) on the given dataset. 
Given a rejected applicant, we have $A$, $B$, and $C$ as three possible counterfactuals. Counterfactual $A$ is the closest counterfactual: it may not lie on the data manifold. Counterfactual $B$ is the closest-data-support counterfactual, i.e., the nearest neighbor on the other side of the decision boundary. The second figure demonstrates that data-support counterfactuals are more robust than closest (or sparse) counterfactuals. However, lying on the data manifold is not always enough for robustness (third figure), e.g., $B$ happens to be quite close to the boundary. Here, $C$ is the conservative counterfactual that not only lies on the data manifold but also lies well within the decision boundary.}\n \\label{fig:gaussian}\n\\end{figure*}\n\n\n\n\\subsection{Theoretical Robustness Guarantee of Conservative Counterfactuals}\n\\label{subsec:robustness_guarantee}\nHere, we derive theoretical guarantees on the robustness of conservative counterfactuals (see Theorem~\\ref{thm:guarantee} for our main result). Before stating our result, we introduce two assumptions on the randomness of the new model $M_{new}$.\n\n\\begin{assm}[Goodness of Metric] For any data point $x \\in \\mathbb{R}^d$, let $M_{new}(x)$ be a random variable taking different values due to model changes. \nWe assume that the expected value $E[M_{new}(x)]>R_{K,\\sigma^2}(x,M)$.\n\\label{assm:1}\n\\end{assm}\n\n\n\\begin{assm}[Goodness of Data Manifold]\nThe standard deviation of $M_{new}(x)$ is $V_x$, which depends on $x$. When $x \\in \\mathcal{S}$, we have $V_x \\leq V$ for a small constant $V$. \\label{assm:2}\n\\end{assm}\n\nThe rationale for Assumption~\\ref{assm:2} is built on evidence from recent work~\\cite{pawelczyk2020counterfactual} demonstrating that data-support counterfactuals are more robust than closest or sparsest counterfactuals. 
When a model is retrained on the same or similar data, its decisions are less likely to change for points that lie on the data manifold as compared to points that may not lie on the data manifold (illustrated in Figure~\\ref{fig:gaussian}).\n\n\n\n\\begin{rem}[One-Way Implication]\nWhile we assume that the new model outputs for points in the dataset $\\mathcal{S}$ have low standard deviation, we do not necessarily assume that points outside the dataset $\\mathcal{S}$ would always have high standard deviation. This is because there can potentially be regions outside the data manifold that also have low $V_x$, and are also robust to model changes (recall Remark~\\ref{rem:data_manifold}).\n\\end{rem}\n\nOne popular assumption in existing literature to quantify small model changes is to assume that the model changes are bounded in the parameter space, i.e., $|\\text{Parameters}(M)-\\text{Parameters}(M_{new})|\\leq \\Delta$ where $\\text{Parameters}(M)$ denotes the parameters of the model $M$, e.g., the weights of a neural network. However, this might not be a good assumption for tree-based ensembles. This is because tree-based ensembles can often change a lot in the parameter space while actually causing very little difference with respect to the actual decisions on the dataset $\\mathcal{S}$ (see Figure~\\ref{fig:gaussian}). \n\nClosely connected to model change is the idea of Rashomon models~\\cite{pawelczyk2020counterfactual,marx2020predictive}, which suggests that there can be models that are very different from each other but have very similar performance on the same data, e.g., $\\sum_{x \\in \\mathcal{S}} |D(x)-D_{new}(x)|\\leq \\Delta$. 
Thus, Assumption~\\ref{assm:2} might be better suited for tree-based ensembles than boundedness in the parameter space.\n\nNow, we provide our main result: a robustness guarantee on conservative counterfactuals based on these assumptions.\n\n\\begin{thm}[Robustness Guarantee for Conservative Counterfactuals] Suppose Assumptions~\\ref{assm:1} and \\ref{assm:2} hold, and $\\tau>0.5$. Then, for any conservative counterfactual $x' \\in \\mathcal{C}^{(\\tau)}_{p,\\mathcal{S}}(x,M)$, the following holds:\n\\begin{equation}\n \\Pr(M_{new}(x')< 0.5) \\leq \\frac{V^2}{V^2 + (\\tau-0.5)^2}.\n\\end{equation}\n\\label{thm:guarantee}\n\\end{thm}\n\nThe result essentially says that the probability of invalidation by the new model ($\\Pr(M_{new}(x')< 0.5)$) is strictly upper-bounded for conservative counterfactuals. A smaller variability $V$ makes this bound smaller. \n\nThe conservative counterfactuals (henceforth denoted by CCF) already serve as good candidates for robust counterfactuals. They are also expected to be realistic with a high LOF because they lie in the dataset $\\mathcal{S}$. However, because they only search for counterfactuals in the dataset $\\mathcal{S}$, they may not always be optimal in terms of the distance between the original data point and its counterfactual (not so close). This leads us to propose a novel algorithm that leverages conservative counterfactuals (CCF) and the counterfactual stability test to find robust counterfactuals that meet all our requirements (close, robust, realistic). \n\\subsection{Proposed Algorithm to Generate Robust Counterfactuals in Practice: RobX}\n\nIn this section, we discuss our proposed algorithm -- that we call RobX -- that generates robust counterfactuals that meet our requirements (see Algorithm~\\ref{alg:example}). 
\n\nOur proposed algorithm RobX can be applied on top of any preferred base method of counterfactual generation, irrespective of whether the counterfactual lies in the dataset $\\mathcal{S}$. RobX checks if the generated counterfactual satisfies the counterfactual \\emph{stability test} (recall Definition~\\ref{defn:stability_test}): if the test is not satisfied, the algorithm iteratively refines the obtained counterfactual and keeps moving it towards the conservative counterfactual until a \\emph{stable} counterfactual is found that satisfies the test. \n\nOne might wonder if moving a counterfactual towards the conservative counterfactual can cause it to pass through undesired regions of the model output where $M(x)<0.5$, thus making it more vulnerable to invalidation. We note that, while this concern is reasonable, the counterfactual \\emph{stability test} at each step ensures that such points are not selected. We further address this concern as follows: (i) consider a diverse set of conservative counterfactuals (e.g., first $c$ nearest neighbors that satisfy the stability test where $c>1$); (ii) iteratively move towards each one of them until a \\emph{stable} counterfactual is found for all $c$ cases; (iii) pick the best of these $c$ \\emph{stable} counterfactuals, e.g., one with the lowest $L_1$ or $L_2$ cost as desired. 
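The two core ingredients of this loop -- the stability metric $R_{K,\sigma^2}$ and the interpolation towards a conservative counterfactual -- can be sketched as follows (our own minimal illustration; the step-function model, toy points, and parameter values are assumptions, not the paper's implementation):

```python
import numpy as np

def stability(x, M, K=1000, sigma=0.1):
    """R_{K,sigma^2}(x, M): mean minus standard deviation of the model
    output over K Gaussian perturbations of x (counterfactual stability)."""
    rng = np.random.default_rng(0)  # fixed seed keeps the sketch deterministic
    N_x = x + sigma * rng.standard_normal((K, x.shape[0]))
    out = M(N_x)
    return out.mean() - out.std()

def robx_refine(x_cf, x_ccf, M, tau, alpha=0.1, max_iter=200):
    """Move a base counterfactual x_cf towards a conservative counterfactual
    x_ccf until the stability test R >= tau passes (core loop of RobX)."""
    for _ in range(max_iter):
        if stability(x_cf, M) >= tau:
            return x_cf
        x_cf = alpha * x_ccf + (1 - alpha) * x_cf  # interpolation step
    return x_ccf  # fall back to the conservative counterfactual itself

# Toy "ensemble": accept exactly when the feature sum exceeds 1 (step function).
M = lambda X: (X.sum(axis=1) > 1.0).astype(float)

x_cf = np.array([0.51, 0.51])   # base counterfactual, barely accepted
x_ccf = np.array([0.9, 0.9])    # conservative counterfactual, deep inside

x_robust = robx_refine(x_cf, x_ccf, M, tau=0.9)
print(stability(x_robust, M) >= 0.9)  # True
```

Here the test rejects the barely-accepted base counterfactual (its Gaussian neighborhood straddles the boundary) and the loop settles on a point well inside the accepted region.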
\n\nWe also observe that this approach of moving a counterfactual towards a conservative counterfactual improves its LOF, making it more realistic.\n\n\\begin{algorithm}[t]\n \\caption{RobX: Generating Robust Counterfactual Explanations for Tree-Based Ensembles}\n \\label{alg:example}\n\\begin{algorithmic}\n \\STATE {\\bfseries Input:} Model $M(\\cdot)$, Dataset $\\mathcal{S}$, Datapoint $x$ such that $M(x) \\leq 0.5$, Algorithm parameters $(p, K, \\sigma^2, \\tau, \\alpha, c)$\n \\STATE{Step 1: Generate counterfactual $x'$ for $x$ using any existing technique for tree-based ensembles}\n \\STATE{Step 2: Perform counterfactual stability test on $x'$: Check if $R_{K,\\sigma^2}(x',M)\\geq \\tau$ where $N_{x'}$ is a set of $K$ points drawn from the distribution $\\mathcal{N}(x',\\sigma^2\\mathrm{I}_{d})$.}\n \\IF{counterfactual stability test is satisfied:}\n \\STATE{Output $x'$ and exit}\n \\ELSE\n \\STATE{Generate $c$ conservative counterfactuals $\\{x_1,\\ldots,x_c\\}$, which are the $c$ nearest neighbors of $x'$ in the dataset $\\mathcal{S}$ that pass the stability test: $R_{K,\\sigma^2}(x_i,M)\\geq \\tau$}\n \\STATE{Initialize placeholders for $c$ counterfactuals $\\{x'_1,\\ldots,x'_c\\}$ with each $x'_i=x'$}\n \\FOR {$i=1 \\text{ to } c$}\n \\REPEAT \n \\STATE{Update: $x'_i=\\alpha x_i + (1-\\alpha)x'_i$}\n \\STATE{Perform counterfactual stability test on $x'_i$:\\\\ \\hspace{1cm} $R_{K,\\sigma^2}(x'_i,M)\\geq \\tau$}\n \\UNTIL counterfactual stability test on $x'_i$ is satisfied\n \\ENDFOR\n \\ENDIF\n \\STATE{Output $x^*=\\arg \\min_{x'_i \\in \\{x'_1,x'_2,\\ldots,x_c'\\}} ||x-x'_i||_p$ and exit}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Experiments}\n\nHere, we present our experimental results on benchmark datasets, namely, German Credit~\\cite{UCI} and HELOC~\\cite{Fico_web_2018}. \n\nFor simplicity, we normalize the features to lie in $[0,1]$. We consider XGBoost models after selecting hyperparameters from a grid search (details in Appendix). 
For each of these datasets, we set aside 30\\% of the dataset for testing, and use the remaining 70\\% for training (in different configurations as discussed here). \n\nWe consider the following types of model change scenarios:\n\\begin{itemize}[leftmargin=*, topsep=0pt, itemsep=0pt]\n \\item Minor changes: (i) Train a model on the training dataset and retrain new models after dropping very few data points ($1$ for German Credit, $10$ for HELOC), keeping hyperparameters constant. (ii) Train a model on the training dataset and retrain new models, changing one hyperparameter, e.g., \\texttt{max\\_depth} or \\texttt{n\\_estimators}. The results for this configuration are in the Appendix.\n \\item Moderate changes: Train a model on half of the training dataset and retrain new models on the other half, keeping hyperparameters mostly constant, varying either \\texttt{max\\_depth} or \\texttt{n\\_estimators}. The results for this configuration are in Table~\\ref{table:performance}.\n\\end{itemize}\n\n\n\n\\begin{table}[t]\n\\caption{Performance on HELOC and German Credit datasets.}\n\\label{table:performance}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{c|p{0.9cm}p{0.9cm}p{0.9cm}|p{0.9cm}p{0.9cm}p{0.9cm}}\n\\toprule\n\\textbf{HELOC}& \\multicolumn{3}{c|}{$L_1$ Based} & \\multicolumn{3}{c}{$L_2$ Based}\\\\\n\\midrule\nMethod & Cost & Val. & LOF & Cost & Val. 
& LOF\\\\\n\\midrule\nCCF & 1.89 & 100\\% & 0.81 & 0.65& 99.9\\%& 0.75\\\\\n\\midrule\nFT & 0.19 & 18.7\\%& 0.40 & 0.16 & 15.6\\%& 0.48\\\\\n+RobX & 1.55 & 100\\% & 0.92 & 0.55 & 99.9\\% & 0.84 \\\\\n\\midrule\nFOCUS & 0.21 & 29.5\\% & 0.36 & 0.17 & 33.0\\% & 0.63\\\\\n+RobX & 1.52& 100\\% & 0.91 & 0.61& 99.8\\% & 0.72\\\\\n\\midrule\nFACE & 2.86 & 89.4\\%& 0.68 & 1.19 & 97.3\\% & 0.50 \\\\\n+RobX & 2.30 & 100\\% & 0.78 & 0.95& 100\\% & 0.65\\\\\n\\midrule\nNN & 0.96 & 35.1\\% & 0.81 & 0.34 & 39.0\\%& 0.69 \\\\\n+RobX & 1.61 & 100\\% & 0.93 & 0.56 & 100\\% & 0.85\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{c|p{0.9cm}p{0.9cm}p{0.9cm}|p{0.9cm}p{0.9cm}p{0.9cm}}\n\\toprule\n\\textbf{German}& \\multicolumn{3}{c|}{$L_1$ Based} & \\multicolumn{3}{c}{$L_2$ Based}\\\\\n\\midrule\nMethod & Cost & Val. & LOF & Cost & Val. & LOF\\\\\n\\midrule\nCCF & 2.92 & 100\\% & 0.85 & 1.21 & 100\\% & 0.94\\\\\n\\midrule\nFT & 0.13 & 55.7\\% & 0.93 & 0.11 & 59.2\\% & 0.94 \\\\\n+RobX & 2.17 & 92.6\\% & 1.0 & 0.95 & 91.1\\% & 0.94 \\\\\n\\midrule\nFOCUS & 0.37 & 65.7\\% & 0.93 & 0.24 & 65.3\\% & 0.93\\\\\n+RobX & 2.18 & 96.5\\% & 1.0 & 1.05 & 100\\% & 1.0\\\\\n\\midrule\nFACE & 2.65 & 84.5\\% & 0.57 & 1.30 & 87.6\\% & 0.76 \\\\\n+RobX & 2.29 & 97.1\\% & 1.0 & 1.05 & 96.1\\% & 0.94\\\\\n\\midrule\nNN & 0.76 & 65.9\\% & 1.0 & 0.48 & 60.7\\% & 1.0\\\\\n+RobX & 2.21 & 97.7\\% & 1.0 & 0.97 & 91.7\\% & 0.93 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\nFor each case, we first generate counterfactuals for the original model using the following base methods:\n\\begin{itemize}[leftmargin=*, topsep=0pt, itemsep=0pt]\n \\item Feature Tweaking (FT)~\\cite{tolomei2017interpretable} is a popular counterfactual generation technique for tree-based ensembles that finds ``closest'' counterfactuals ($L_1$ or $L_2$ cost), not necessarily on the data manifold. 
The algorithm searches for all possible paths (tweaks) in each tree that can change the final outcome of the model.\n \\item FOCUS~\\cite{lucic2019focus} is another popular technique that approximates the tree-based models with \\texttt{sigmoid} functions, and finds closest counterfactuals (not necessarily on the data manifold) by solving an optimization problem.\n \\item FACE~\\cite{poyiadzi2020face} attempts to find counterfactuals that are not only close ($L_1$ or $L_2$ cost), but also (i) lie on the data manifold; and (ii) are connected to the original data point via a path on a connectivity graph on the dataset $\\mathcal{S}$. Such a graph is generated from the given dataset $\\mathcal{S}$ by connecting every pair of points that are reasonably close to each other, so that one can be ``attained'' from the other. \n \\item Nearest Neighbor (NN)~\\cite{albini2021counterfactual} attempts to find counterfactuals that are essentially the nearest neighbors ($L_1$ or $L_2$ cost) to the original data points with respect to the dataset $\\mathcal{S}$ that lie on the other side of the decision boundary (recall Definition~\\ref{defn:data-support-CF}).\n\\end{itemize}\n\nWe compare these base methods with: (i) our proposed Conservative Counterfactuals (CCF) approach; and (ii) our proposed RobX applied on top of these base methods.\n\n\\begin{rem}\nWe note that there are several techniques for generating counterfactual explanations (see \\cite{verma2020counterfactual} for a survey); however, only some of them apply to tree-based models. Several techniques are also broadly similar to each other in spirit. We believe our choice of these four base methods to be quite a diverse representation of the existing approaches, namely, search-based closest counterfactual (FT), optimization-based closest counterfactual (FOCUS), graph-based data-support counterfactual (FACE), and closest-data-support counterfactual (NN). 
We note that another alternative perspective is a causal approach~\\cite{konig2021causal}, which often requires knowledge of the causal structure and is outside the scope of this work.\n\\end{rem}\n\nOur empirical performance metrics of interest are:\n\\begin{itemize}[leftmargin=*, topsep=0pt, itemsep=0pt]\n \\item \\textbf{Cost ($L_1$ or $L_2$):} Average distance ($L_1$ or $L_2$) between the original point and its counterfactual.\n \\item \\textbf{Validity (\\%):} Percentage of counterfactuals that still remain counterfactuals under the new model $M_{new}$.\n \\item \\textbf{LOF}: See Definition~\\ref{defn:lof}; implemented using \\cite{scikit-lof} (+1 for inliers, -1 otherwise). A higher average is better.\n\\end{itemize}\n\n\n\n\\textbf{Hyperparameters:} For the choice of $K$ and $\\sigma$, we refer to some guidelines in the adversarial machine learning literature. Our metric of stability is loosely inspired by certifiable robustness in the adversarial machine learning literature~\\cite{cohen2019certified,raghunathan2018certified}, which uses the metric $\\frac{1}{K}\\sum_{x' \\in N_x}I(M(x')>0.5)$. Here $I(.)$ is the indicator function. Our metric for counterfactual stability (in Definition~\\ref{defn:stability}) has some key differences: (i) no indicator function; and (ii) we leverage the standard deviation along with the mean. Because the feature values are normalized, a fixed choice of $K=1000$ and $\\sigma=0.1$ is used for all our experiments. \n\nThe choice of the threshold $\\tau$, however, is quite critical, and depends on the dataset. As we increase $\\tau$ for conservative counterfactuals, the validity improves but the $L_1$\/$L_2$ cost also increases, until the validity almost saturates. If we increase $\\tau$ beyond that, no more conservative counterfactuals are found. 
In practice, one can examine the histogram of $R_{K,\\sigma^2}(x,M)$ for $x \\in \\mathcal{S}$, and choose an appropriate quantile for that dataset as $\\tau$ so that a reasonable fraction of points in $\\mathcal{S}$ qualify to be conservative counterfactuals. However, \\emph{the same quantile may not suffice for $\\tau$ across different datasets.} One could also perform the following steps: (i) choose a small validation set; (ii) keep increasing $\\tau$ from $0.5$ for CCF and plot the validity and $L_1$\/$L_2$ cost; (iii) select a $\\tau$ beyond which validity does not improve much and the $L_1$\/$L_2$ cost is acceptable. \n\n\nNext, we include the experimental results for moderate changes to the model in Table~\\ref{table:performance} for both HELOC and German Credit datasets. Additional results are provided in the Appendix.\n\n\\begin{table}\n\\caption{Performance of FOCUS on the model with a higher threshold, i.e., $M(x)> \\gamma$, on HELOC and German Credit datasets.}\n\\label{table:threshold_focus}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{c|p{0.9cm}p{0.9cm}p{0.9cm}|p{0.9cm}p{0.9cm}p{0.9cm}}\n\\toprule\n\\textbf{HELOC}& \\multicolumn{3}{c|}{$L_1$ Based} & \\multicolumn{3}{c}{$L_2$ Based}\\\\\n\\midrule\nMethod & Cost & Val. & LOF & Cost & Val. 
& LOF\\\\\n\\midrule\n$\\gamma{=}0.5$ & 0.21 & 29.5\\% & 0.36 & 0.17 & 33.0\\% & 0.63 \\\\\n+RobX & 1.52 & 100\\% & 0.91 & 0.55 & 99.9\\% & 0.84\\\\\n\\midrule\n$\\gamma{=}0.7$ & 0.59& 92.2\\% & -0.01& 0.34 & 98.8\\% & 0.30\\\\\n +RobX & 1.38 & 99.9\\% & 0.70 & 0.60 & 99.8\\% & 0.70 \\\\\n\\midrule\n$\\gamma{=}0.75$ & 1.13 & 98.9\\% & -0.32& 0.44 & 99.9\\% & 0.09\\\\\n +RobX & 1.44 & 100\\% & 0.06& 0.55 & 99.9\\% & 0.51\\\\\n\\midrule\n$\\gamma{=}0.8$ & 2.11 & 100\\% & -0.70& 0.60 & 100\\% & -0.20\\\\\n+RobX & 2.14 & 100\\% & -0.66 & 0.62 & 100\\% & -0.08\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{c|p{0.9cm}p{0.9cm}p{0.9cm}|p{0.9cm}p{0.9cm}p{0.9cm}}\n\\toprule\n\\textbf{German} & \\multicolumn{3}{c|}{$L_1$ Based} & \\multicolumn{3}{c}{$L_2$ Based}\\\\\n\\midrule\nMethod & Cost & Val. & LOF & Cost & Val. & LOF\\\\\n\\midrule\n$\\gamma{=}0.5$ & 0.37 & 65.7\\% & 0.93 & 0.24 & 65.3\\% & 0.93 \\\\\n +RobX & 2.18 & 96.5\\% & 1.0 & 1.05 & 100\\% & 1.0 \\\\\n\\midrule\n$\\gamma{=}0.8$ & 0.74& 83.9\\% & 0.87 & 0.41& 92.1\\% & 0.75\\\\\n +RobX & 2.20 & 98.0\\% & 1.0 & 1.05 & 100\\% & 1.0\\\\\n\\midrule\n$\\gamma{=}0.99$ & 2.57 & 97.7\\% & -0.45 & 1.05 & 98.5\\% & -0.33\\\\\n +RobX & 2.85 & 100\\% & 0.03& 1.19 & 100\\% & -0.27\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\n\n\n\n\n\\textbf{Observations:} The average cost ($L_1$ or $L_2$ cost) between the original data point and the counterfactual increases only slightly for base methods such as FT, FOCUS, and NN (which find counterfactuals by explicitly minimizing this cost); however our counterfactuals are significantly more robust (in terms of validity) and realistic (in terms of LOF). 
Interestingly, for FACE (which finds counterfactuals on the data manifold that are connected via a path), our strategy is able to improve both robustness (validity) and cost ($L_1$ or $L_2$ cost), with comparable LOF.\n\nAnother competing approach that we consider in this work (that has not been considered before) is to find counterfactuals using base methods but setting a higher threshold value for the model, i.e., $M(x) > \\gamma$ where $\\gamma$ is greater than $0.5$. Interestingly, we observe that this simple modification can also sometimes generate counterfactuals that are significantly more robust; however, this approach has several disadvantages: (i) It generates counterfactuals that are quite unrealistic, and thus have very poor LOF. (ii) The algorithm takes significantly longer to find counterfactuals as the threshold $\\gamma$ is increased, and sometimes even returns a \\texttt{nan} value because no counterfactual is found, e.g., if $\\gamma=0.9$ and the model output $M(x)$ rarely takes such a high value (because the output of tree-based ensembles takes discrete values). Because of these disadvantages, we believe this technique may not be preferable as a standalone approach; however, it can be used as an alternate base method over which our technique might be applied when cost ($L_1$ or $L_2$) is a higher priority than LOF (see Table~\\ref{table:threshold_focus}). \n\n\n\n\n\n\n\n\n\\textbf{Discussion and Future Work:} This work addresses the problem of finding robust counterfactuals for tree-based ensembles. It provides a novel metric to compute the stability of a counterfactual that can be representative of its robustness to possible model changes, as well as a novel algorithm to find robust counterfactuals. 
Though not exactly comparable, our cost and validity are in the same ballpark as those observed for these datasets in existing works~\\cite{upadhyay2021towards,black2021consistent}, which focus on robust counterfactuals for linear models or neural networks (differentiable models). Our future work would include: (i) extending to causal approaches~\\cite{konig2021causal}; and (ii) accounting for immutability or differences among features, e.g., some features being more variable than others. \n\n\\paragraph{Disclaimer}\nThis paper was prepared for informational purposes by\nthe Artificial Intelligence Research group of JPMorgan Chase \\& Co. and its affiliates (``JP Morgan''),\nand is not a product of the Research Department of JP Morgan.\nJP Morgan makes no representation and warranty whatsoever and disclaims all liability,\nfor the completeness, accuracy or reliability of the information contained herein.\nThis document is not intended as investment research or investment advice, or a recommendation,\noffer or solicitation for the purchase or sale of any security, financial instrument, financial product or service,\nor to be used in any way for evaluating the merits of participating in any transaction,\nand shall not constitute a solicitation under any jurisdiction or to any person,\nif such solicitation under such jurisdiction or to such person would be unlawful.\n\n\n\\section*{Acknowledgements} We thank Emanuele Albini and Dan Ley for useful discussions.\n\n\n\\small{\n\n\\section{Proof of Theorem 1}\n\nThe proof follows by demonstrating that another tree-based ensemble model exists that does not accept the generated set of counterfactuals. Let us denote this set as $\\mathcal{CF}$.\n\nThere are various ways to construct such a model. A simple way could be to choose an identical tree structure but with the predictions flipped, i.e., $M_{new}(x)=1-M(x)$. 
This can be designed by altering the weights of the leaves.\n\n\nFor an $x \\in \\mathcal{CF}$ with $M(x)>0.5$, we would have $M_{new}(x) < 0.5$.\n\n\n\n\\section{Proof of Theorem 2}\n\nThe proof follows from Cantelli's inequality. The inequality states that, for $\\lambda >0$,\n$$\\Pr(Z-\\mathbb{E}[Z]\\leq -\\lambda )\\leq \\frac{V_Z^{2}}{V_Z^{2}+\\lambda^{2}},$$\nwhere $Z$ is a real-valued random variable,\n$\\Pr$ is the probability measure,\n$\\mathbb{E}[Z]$ is the expected value of $Z$, and\n$V_Z^{2}$ is the variance of $Z$.\n\nHere, let $Z=M_{new}(x')$ be a random variable that takes different values for different models $M_{new}$. Let $$\\lambda=\\mathbb{E}[Z]-0.5 \\geq R(x',M,K,\\sigma^2)-0.5> \\tau-0.5 > 0.$$\n\nThen, we have:\n\\begin{align}\n \\Pr(Z\\leq 0.5 ) \n &= \\Pr(Z -\\mathbb{E}[Z] \\leq 0.5-\\mathbb{E}[Z] ) \\nonumber \\\\\n & = \\Pr(Z -\\mathbb{E}[Z] \\leq -\\lambda ) \\text{ where } \\lambda=\\mathbb{E}[Z]-0.5 \\nonumber \\\\\n & \\overset{(a)}{\\leq} \\frac{V_Z^2}{V_Z^2+\\lambda^2} \\overset{(b)}{\\leq} \\frac{V_Z^2}{V_Z^2+(\\tau-0.5)^2} \\overset{(c)}{\\leq} \\frac{V^2}{V^2+(\\tau-0.5)^2}.\n\\end{align}\nHere, (a) holds from Cantelli's inequality, (b) holds because $\\lambda> \\tau-0.5$ (from the conditions of the theorem), and (c) holds because the variance of $Z$ is bounded by $V^2$ from Assumption 2. \n\n\n\\section{Additional Details and Experiments}\n\nHere, we include further details related to our experiments, as well as some additional results.\n\n\\subsection{Datasets}\n\n\\begin{itemize}[topsep=0pt, itemsep=0pt, leftmargin=*]\n \\item HELOC~\\cite{Fico_web_2018}: This dataset has $10K$ data points, each with $23$ finance-related features. We drop the features \\texttt{MSinceMostRecentDelq}, \\texttt{MSinceMostRecentInqexcl7days}, and \\texttt{NetFractionInstallBurden}. We also drop the data points with missing values. 
Our pre-processed dataset has $n=8291$ data points with $d=20$ features each.\n \\item German Credit~\\cite{UCI}: This dataset has $1000$ data points, each with $20$ features. The features are a mix of numerical and categorical features. We only use the following $10$ features: \\texttt{existingchecking}, \\texttt{credithistory}, \\texttt{creditamount}, \\texttt{savings}, \\texttt{employmentsince}, \\texttt{otherdebtors}, \\texttt{property}, \\texttt{housing}, \\texttt{existingcredits}, and \\texttt{job}. Among these features, we convert the categorical features into appropriate numeric values. E.g., \\texttt{existingchecking} originally has four categorical values: A11 if ... < 0 DM, A12 if 0 <= ... < 200 DM, A13 if ... >= 200 DM, and A14 if no checking account. We convert them into numerical values as follows: $0$ for A14, $1$ for A11, $2$ for A12, and $3$ for A13. Our pre-processed dataset has $n=1000$ data points with $d=10$ features each.\n\\end{itemize}\n\nAll features are normalized to lie in $[0,1]$.\n\n\n\n\n\\subsection{Experimental Results Under Minor Changes to the Model}\n\nFor each of the datasets, we perform a $30\/70$ test-train split. We train an XGBoost model after tuning the hyperparameters using the \\texttt{hyperopt} package. \n\nThe observed accuracies for the two datasets are: 74\\% (HELOC) and 73\\% (German Credit).\n\nFor RobX, we choose $K=1000$, $\\sigma=0.1$, and $\\tau$ is chosen based on the histogram of $R_{K,\\sigma^2}(x,M)$ for each dataset. For HELOC, $\\tau=0.65$, and for German Credit, $\\tau=0.93$.\n\nWe first present our experimental results after minor changes to the model.\\\\\n\n(i) Train a model ($M(x)$) on the training dataset and retrain new models ($M_{new}(x)$) after dropping a small percentage of data points ($1\\%$ for German Credit, $10\\%$ for HELOC), keeping hyperparameters fairly constant. 
We experiment with $20$ different new models and report the average values in Tables~\\ref{table:heloc_minor} and \\ref{table:german_minor}.\n\n\n\n\\begin{table}[!htbp]\n\\caption{Performance on HELOC dataset minimizing for $L_1$ and $L_2$ cost.}\n\\label{table:heloc_minor}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\nMethod & $L_1$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 1.83 & 100\\% & 0.89\\\\\n\\midrule\nFT & 0.19 & 71.1\\%& 0.25 \\\\\nFT +RobX & 1.51 & 100\\% & 0.96\\\\\n\\midrule\nFOCUS & 0.22 & 77.8\\% & 0.26 \\\\\nFOCUS +RobX & 1.49 & 100\\% & 0.94 \\\\\n\\midrule\nFACE & 2.95 & 98.9\\%& 0.70 \\\\\nFACE +RobX & 2.26 & 100\\% & 0.91\\\\\n\\midrule\nNN & 1.01 & 85.3\\% & 0.75 \\\\\nNN +RobX & 1.56 & 100\\% & 0.96 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcccr}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 0.62 & 100\\% & 0.83\\\\\n\\midrule\nFT & 0.16 & 68.3\\% & 0.40 \\\\\nFT +RobX & 0.54 & 100\\% & 0.93 \\\\\n\\midrule\nFOCUS & 0.16 & 57.1\\% & 0.57 \\\\\nFOCUS +RobX & 0.59 & 100\\% & 0.86\\\\\n\\midrule\nFACE & 1.20 & 99.6\\% & 0.58 \\\\\nFACE +RobX & 0.89 & 100\\% & 0.74 \\\\\n\\midrule\nNN & 0.35 & 84.0\\%& 0.75 \\\\\nNN +RobX & 0.55 & 100\\%& 0.95 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\\begin{table}[!htbp]\n\\caption{Performance on German Credit dataset minimizing for $L_1$ and $L_2$ cost.}\n\\label{table:german_minor}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\nMethod & $L_1$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 3.05 & 100\\% & 1.0\\\\\n\\midrule\nFT & 0.08 & 72.9\\%& 0.65 \\\\\nFT +RobX & 2.70 & 99.7\\% & 1.0\\\\\n\\midrule\nFOCUS & 0.12 & 72.8\\% & 0.71 \\\\\nFOCUS +RobX & 2.71 & 100\\% & 1.0\\\\\n\\midrule\nFACE & 2.67 & 92.5\\% & 0.94 \\\\\nFACE +RobX & 2.70 & 99.1\\% & 1.0\\\\\n\\midrule\nNN & 0.80 & 
84.4\\%& 0.94 \\\\\nNN +RobX & 2.71 & 100\\% & 1.0\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcccr}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 1.42 & 100\\%& 1.0\\\\\n\\midrule\nFT & 0.08 & 68.7\\%& 0.65 \\\\\nFT +RobX & 1.27 & 100\\% & 1.0 \\\\\n\\midrule\nFOCUS & 0.11 & 70.1\\% & 0.82 \\\\\nFOCUS +RobX & 1.32 & 100\\% & 1.0\\\\\n\\midrule\nFACE & 1.25 & 95.0\\% & 0.77 \\\\\nFACE +RobX & 1.28 & 100\\% & 1.0\\\\\n\\midrule\nNN & 0.49 & 79.1\\%& 0.88 \\\\\nNN +RobX & 1.30 & 100\\% & 1.0\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n(ii) Train a model $M(x)$ on the training dataset and retrain new models, changing one hyperparameter, e.g., \\texttt{max\\_depth} or \\texttt{n\\_estimators}. We experiment with 20 different new models and report the average values in Tables~\\ref{table:heloc_minor2} and \\ref{table:german_minor2}. \n\n\n\n\n\\begin{table}[H]\n\\caption{Performance on HELOC dataset minimizing for $L_1$ and $L_2$ cost.}\n\\label{table:heloc_minor2}\n\\vskip 0.1in\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\nMethod & $L_1$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 1.83 & 98.9\\% & 0.89\\\\\n\\midrule\nFT & 0.19 &49.9\\% & 0.25 \\\\\nFT +RobX &1.51 & 99.4\\% & 0.96\\\\\n\\midrule\nFOCUS & 0.22 & 37.1\\% & 0.26 \\\\\nFOCUS +RobX &1.49 & 99.2\\% & 0.94 \\\\\n\\midrule\nFACE & 2.67 &89.7\\% & 0.94 \\\\\nFACE +RobX & 2.70 & 99.7\\% & 1.0\\\\\n\\midrule\nNN & 1.01 & 71.3\\%& 0.75 \\\\\nNN +RobX & 1.56 &99.6\\% & 0.96\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcccr}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 0.62 & 98.7\\% & 0.82\\\\\n\\midrule\nFT &0.16 & 49.4\\%& 0.40 \\\\\nFT +RobX &0.54 & 99.3\\% & 0.92 \\\\\n\\midrule\nFOCUS & 0.16 & 50.3\\% & 0.57 \\\\\nFOCUS +RobX & 0.59 & 
99.9\\% & 0.86\\\\\n\\midrule\nFACE & 1.20 & 89.5\\% & 0.59 \\\\\nFACE +RobX & 0.89 & 99.9\\% & 0.75\\\\\n\\midrule\nNN & 0.35 & 71.0\\% & 0.74 \\\\\nNN +RobX & 0.55 & 99.6\\% & 0.95\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\\begin{table}[H]\n\\caption{Performance on German Credit dataset minimizing for $L_1$ and $L_2$ cost.}\n\\label{table:german_minor2}\n\\vskip 0.1in\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\nMethod & $L_1$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 3.05 & 99.9\\% & 1.0\\\\\n\\midrule\nFT & 0.08 & 56.4\\%& 0.65 \\\\\nFT +RobX & 2.70 & 99.9\\% & 1.0 \\\\\n\\midrule\nFOCUS & 0.12 & 53.7\\% & 0.71 \\\\\nFOCUS +RobX & 2.71 & 99.7\\% & 1.0\\\\\n\\midrule\nFACE & 2.62 & 88.8\\%& 0.82 \\\\\nFACE +RobX & 2.72 & 99.7\\% & 1.0\\\\\n\\midrule\nNN & 0.80 & 84.4\\% & 0.94 \\\\\nNN +RobX & 2.71& 99.7\\%& 1.0 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcccr}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 1.36 & 97.4\\% & 1.0 \\\\\n\\midrule\nFT & 0.08 & 53.4\\% & 0.65 \\\\\nFT +RobX & 1.17 & 98.6\\% & 1.0 \\\\\n\\midrule\nFOCUS & 0.11 & 53.2\\% & 0.82 \\\\\nFOCUS +RobX &1.2 & 100\\% &1.0 \\\\\n\\midrule\nFACE & 1.25 & 88.7\\%& 0.77 \\\\\nFACE +RobX & 1.18 & 98.4\\% & 1.0\\\\\n\\midrule\nNN & 0.49 & 79.0\\% & 0.88 \\\\\nNN +RobX & 1.18& 99.0\\% & 0.94\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\n\n\n\n\n\\subsection{Histograms}\nHere, we include the following histograms for the HELOC dataset for further insights (see Figure~\\ref{fig:ablation}):\n\\begin{enumerate}[itemsep=0pt, topsep=0pt, leftmargin=*]\n\\item Model outputs alone, i.e., $M(x)$.\n\\item Mean of the model outputs in a neighborhood, i.e., $\\frac{1}{K}\\sum_{x' \\in N_x}M(x')$.\n\\item Our robustness metric which is the mean of the model 
outputs in a neighborhood minus their standard deviation, i.e., $R_{K,\\sigma^2}(x,M)=\\frac{1}{K}\\sum_{x' \\in N_x}M(x')- \\sqrt{\\frac{1}{K}\\sum_{x' \\in N_x}\\left(M(x') - \\frac{1}{K}\\sum_{x' \\in N_x}M(x')\\right)^2}$.\n\\end{enumerate}\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[height=3cm]{heloc_complete_histogram.png}\n\\caption{Histograms to visualize the proposed robustness metric. \\label{fig:ablation}}\n\\end{figure}\n\n\\subsection{Experiments Under Major Changes to the Model}\n\nThese experimental results have already been included in the main paper in Section 4. Here we include some additional details. For each of the datasets, we first perform a $30\/70$ test-train split, and set the test data aside.\n\nOn the training data, we again perform a $50\/50$ split. We train the original model $M(x)$ on one of these splits, and the new model $M_{new}(x)$ on the other. For $M(x)$, we train an XGBoost model after tuning the hyperparameters using the \\texttt{hyperopt} package. For $M_{new}(x)$, we keep the hyperparameters mostly constant, varying either \\texttt{n\\_estimators} or \\texttt{max\\_depth}.\n\nThe observed accuracies for the two datasets are: 73\\% (HELOC) and 71\\% (German Credit).\n\nFor RobX, we choose $K=1000$, $\\sigma=0.1$, and $\\tau$ is chosen based on the histogram of $R_{K,\\sigma^2}(x,M)$ for each dataset. For HELOC, $\\tau=0.65$, and for German Credit, $\\tau=0.93$.\n\\subsection{Additional Experimental Results}\n\nIn our experiments so far, we normalize the features in the dataset to lie between $0$ and $1$ as is done in existing works (using \\texttt{MinMaxScaler}). Here, we also include some additional experimental results for using \\texttt{StandardScaler} instead of \\texttt{MinMaxScaler}. 
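The two feature scalings compared in this ablation can be sketched in plain NumPy (the function names below are illustrative, not from the paper's code): min-max scaling maps each feature to $[0,1]$, while standardization maps it to zero mean and unit variance.

```python
import numpy as np

def minmax_scale(X):
    # Map each feature (column) to [0, 1]; constant columns are mapped to 0.
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def standard_scale(X):
    # Map each feature to zero mean and unit standard deviation.
    return (X - X.mean(axis=0)) / X.std(axis=0)
```

Since standardized features are no longer confined to a bounded cube, the $L_1$/$L_2$ costs reported under this scaling are on a different numeric scale than in the earlier tables.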
These are results for \\textbf{moderate} changes to the model.\n\n\\begin{table}[!htbp]\n\\caption{Performance on HELOC dataset minimizing for $L_2$ cost.}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 3.94& 100.0 \\%& 0.96\\\\\n\\midrule\nFT & 1.05 & 17.0\\%& 0.57 \\\\\nFT +RobX & 2.93 & 94.4\\% & 0.95 \\\\\n\\midrule\nFOCUS & 1.17 & 30.1\\% & 0.69 \\\\\nFOCUS +RobX & 2.94& 97.4\\% & 0.96\\\\\n\\midrule\nFACE & 5.83 & 90.1\\% & 0.74 \\\\\nFACE +RobX & 4.67& 100.0\\% & 0.94 \\\\\n\\midrule\nNN & 2.71 & 37.6\\%& 0.87 \\\\\nNN +RobX & 3.14 & 98.2\\% & 0.98 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\n\\begin{table}\n\\caption{Performance on HELOC dataset with $L_2$ cost: FOCUS is applied on the model with a higher threshold, i.e., $M(x) > \\gamma$}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nFOCUS ($\\gamma=$0.5) & 1.17 & 30.1\\% & 0.69 \\\\\nFOCUS ($\\gamma=$0.5) +RobX & 2.94 & 97.4\\% & 0.96 \\\\\n\\midrule\nFOCUS ($\\gamma=$0.6) & 1.67& 68.4\\% & 0.64\\\\\nFOCUS ($\\gamma=$0.6) +RobX & 3.65& 100\\% & 0.96\\\\\n\\midrule\nFOCUS ($\\gamma=$0.7) & 2.28& 97.4\\% & 0.59\\\\\nFOCUS ($\\gamma=$0.7) +RobX & 3.64& 100\\% & 0.97\\\\\n\\midrule\nFOCUS ($\\gamma=$0.8) & 3.68& 100\\% & 0.55\\\\\nFOCUS ($\\gamma=$0.8) +RobX & 3.69& 100\\% & 0.55\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\n\\begin{table}\n\\caption{Performance on German Credit dataset for $L_2$ cost.}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccc}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nCCF & 3.18 & 98.2\\% & 0.69 \\\\\n\\midrule\nFT & 0.56 & 60.0\\% & 0.88 \\\\\nFT +RobX & 2.35 & 97.0\\% & 0.88 \\\\\n\\midrule\nFOCUS & 0.77 & 67.2\\% & 0.87 
\\\\\nFOCUS +RobX & 2.83 & 97.0\\% & 0.75 \\\\\n\\midrule\nFACE & 4.74 & 94.3\\% & 0.81 \\\\\nFACE +RobX & 3.38 & 96.9\\% & 0.87\\\\\n\\midrule\nNN & 1.99 & 76.7 \\%& 0.75 \\\\\nNN +RobX & 2.50 & 96.9\\% & 0.81 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\\begin{table}\n\\caption{Performance on German Credit dataset with $L_2$ cost: FOCUS is applied on the model with a higher threshold, i.e., $M(x) > \\gamma$.}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lcccr}\n\\toprule\nMethod & $L_2$ Cost & Validity & LOF \\\\\n\\midrule\nFOCUS ($\\gamma=$0.5) & 0.77 & 67.2\\% & 0.87 \\\\\nFOCUS ($\\gamma=$0.5) +RobX & 2.83 & 97.0\\% & 0.75 \\\\\n\\midrule\nFOCUS ($\\gamma=$0.6) & 0.84& 77.6\\% & 0.81\\\\\nFOCUS ($\\gamma=$0.6) +RobX &2.83 & 97.0 \\% & 0.75 \\\\\n\\midrule\nFOCUS ($\\gamma=$0.7) & 0.89 & 82.0\\% & 0.81\\\\\nFOCUS ($\\gamma=$0.7) +RobX & 2.83& 97.0\\% & 0.75\\\\\n\\midrule\nFOCUS ($\\gamma=$0.8) & 1.27& 88.6\\% & 0.69\\\\\nFOCUS ($\\gamma=$0.8) +RobX & 2.79& 100\\% & 0.75\\\\\n\\midrule\nFOCUS ($\\gamma=$0.9) & 1.62& 87.7\\% & 0.51\\\\\nFOCUS ($\\gamma=$0.9) +RobX & 2.69& 100\\% & 0.81\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\noindent \nTo every prime $p$ we associate a set $E(p)$ of positive allowed exponents. Thus $E(p)$ is a subset\nof $\\mathbb N$. We consider the set $S$ of integers consisting of 1 and\nall integers $n$ of the form $n=\\prod_i p_i^{e_i}$ with $e_i\\in E(p_i)$. Note that this\nset is {\\it multiplicative}, i.e., if $m$ and $n$ are coprime integers then $mn$ is in $S$\niff both $m$ and $n$ are in $S$. It is easy to see that in this way we obtain all multiplicative\nsets of natural numbers. 
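The stability test applied by RobX in the tables above, $R_{K,\sigma^2}(x,M)$ (the neighborhood mean of the model output minus its standard deviation, compared against the threshold $\tau$), can be sketched as follows; `model_predict` is a stand-in for the trained ensemble's probability output and is not from the paper's code:

```python
import numpy as np

def stability(x, model_predict, K=1000, sigma=0.1, seed=None):
    # R_{K,sigma^2}(x, M): mean minus standard deviation of the model
    # output over K Gaussian perturbations of x drawn from N(x, sigma^2 I).
    rng = np.random.default_rng(seed)
    neighbors = x + sigma * rng.standard_normal((K, x.shape[0]))
    outputs = np.asarray(model_predict(neighbors))
    return outputs.mean() - outputs.std()

# A candidate x' is kept as a counterfactual only when
# stability(x', M) > tau, with tau read off the histogram of the
# metric over the dataset.
```

A constant model scores its constant output exactly, while any variability in the neighborhood lowers the score below the neighborhood mean.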
As an example, let us consider the case where $E(p)$ consists of the\npositive even integers if $p\\equiv 3({\\rm mod~}4)$ and $E(p)=\\mathbb N$ for the other primes.\nThe set $S_B$ obtained in this way can be described in another way. By the well-known result\nthat every positive integer can be written as a sum of two squares iff every prime divisor $p$ of\n$n$ of the form $p\\equiv 3({\\rm mod~}4)$ occurs to an even exponent, we see that $S_B$\nis the set of positive integers that can be written as a sum of two integer squares.\\\\\n\\indent In this note\nwe are interested in the counting function associated to $S$, $S(x)$, which counts the\nnumber of $n\\le x$ that are in $S$. \nBy $\\pi_S(x)$ we denote the number of primes $p\\le x$\nthat are in $S$. We will only consider $S$ with the property that $\\pi_S(x)$ can be\nwell-approximated by $\\delta \\pi(x)$ with $\\delta>0$ real and $\\pi(x)$ the prime counting\nfunction (thus $\\pi(x)=\\sum_{p\\le x}1$). Recall that the Prime Number Theorem states\nthat asymptotically $\\pi(x)\\sim x\/\\log x$. Gauss as a teenager conjectured that\nthe logarithmic integral, Li$(x)$, defined as $\\int_2^x{dt\/\\log t}$ gives a much better\napproximation to $\\pi(x)$. Indeed, it is now known that, for any $r>0$ we have \n$\\pi(x)={\\rm Li}(x)+O(x\\log^{-r}x)$. On the other hand, the result that\n$\\pi(x)={x\/\\log x}+O(x\\log^{-r}x)$, is false for $r>2$. In this note two types\nof approximation of $\\pi_S(x)$ by $\\delta \\pi(x)$ play an important role. 
We say\n$S$ satisfies Condition A if, asymptotically,\n\\begin{equation}\n\\pi_S(x)\\sim \\delta {x\\over \\log x}.\n\\end{equation}\nWe say that $S$ satisfies Condition B if\nthere are some fixed positive numbers $\\delta$ and $\\rho$ such that asymptotically\n\\begin{equation}\n\\label{conditionB}\n\\pi_S(x)=\\delta{\\rm Li}(x)+O\\Big({x\\over \\log^{2+\\rho}x}\\Big).\n\\end{equation}\nThe following result is a special case of a result\nof Wirsing \\cite{Wirsing}, with a reformulation following Finch et al.~\\cite[p. 2732]{FMS}. As usual\n$\\Gamma$ will denote the gamma function. By $\\chi_S$ we denote the characteristic function of $S$, that\nis we put $\\chi_S(n)=1$ if $n$ is in $S$ and zero otherwise.\n\\begin{Thm}\n\\label{een}\nLet $S$ be a multiplicative set satisfying Condition A. Then\n$$S(x)\\sim C_0(S) x \\log^{\\delta-1}x,$$\nwhere \n$$C_0(S)={1\\over \\Gamma(\\delta)}\\lim_{P\\rightarrow \\infty}\\prod_{p0$ this theorem states that\n$$\\pi(x;d,a):=\\sum_{p\\le x,~p\\equiv a({\\rm mod~}d)}1={{\\rm Li}(x)\\over \\varphi(d)}+\nO\\Big({x\\over \\log^{r}x}\\Big).$$\nTheorem \\ref{een} thus gives that, asymptotically,\n$S_B(x)\\sim C_0(S_B)x\/\\sqrt{\\log x}$, a result derived in 1908 by Edmund Landau.\nRamanujan, in his first letter to Hardy (1913), wrote in our notation that\n\\begin{equation}\n\\label{kleemie}\nS_B(x)=C_0(S_B)\\int_2^x{dt\\over \\sqrt{\\log t}}+\\theta(x),\n\\end{equation}\nwith $\\theta(x)$ very small. In reply to Hardy's question what `very small' is in this context\nRamanujan wrote back $O(\\sqrt{x\/\\log x})$. \n(For a more detailed account and further references see Moree and Cazaran \\cite{MC}.)\nNote that by partial integration Ramanujan's claim, if true, implies\nthe result of Landau. 
This leads us to the following definition.\n\\begin{Def}\nLet $S$ be a multiplicative set such that $\\pi_S(x)\\sim \\delta x\/\\log x$ for some $\\delta>0$.\nIf $$|S(x)- C_0(S) x \\log^{\\delta-1}x|< \n|S(x)- C_0(S) \\int_2^x \\log^{\\delta-1}t\\,dt|,$$\nfor every $x$ sufficiently large, we say that the Landau approximation is better than the\nRamanujan approximation. If the reverse inequality holds for every $x$ sufficiently large, we say\nthat the Ramanujan approximation is better than the Landau approximation.\n\\end{Def}\nWe denote the formal Dirichlet series $\\sum_{n=1,~n\\in S}^{\\infty}n^{-s}$ associated\nto $S$ by $L_S(s)$. For Re$(s)>1$ it converges. If\n\\begin{equation}\n\\label{EK}\n\\gamma_S:=\\lim_{s\\rightarrow 1+0}\\Big({L_{S}'(s)\\over L_{S}(s)}+{\\delta\\over s-1}\\Big)\n\\end{equation}\nexists, we say that $S$ has {\\it Euler-Kronecker constant} $\\gamma_S$. \nIn case $S$ consists of all positive integers we have\n$L_S(s)=\\zeta(s)$ and it is well known that \n\\begin{equation}\n\\label{gammo}\n\\lim_{s\\rightarrow 1+0}\\Big({\\zeta'(s)\\over \\zeta(s)}+{1\\over s-1}\\Big)=\\gamma.\n\\end{equation}\nIf the multiplicative set $S$ satisfies\nCondition B, then it can be shown that $\\gamma_S$ exists. Indeed, we have the following result.\n\\begin{Thm}\n\\label{vier1} {\\rm \\cite{eerstev}.}\nIf the multiplicative set $S$ satisfies Condition B, then\n$$S(x)=C_0(S)x\\log^{\\delta-1}x\\Big(1+(1+o(1)){C_1(S)\\over \\log x}\\Big),\\qquad {\\rm as}\\quad x\\to\\infty,$$\nwhere $C_1(S)=(1-\\delta)(1-\\gamma_S)$.\n\\end{Thm}\n\\begin{Cor}\nSuppose that $S$ is multiplicative and satisfies Condition B. If $\\gamma_S<1\/2$, then the Ramanujan \napproximation\nis asymptotically better than the Landau one. 
If $\\gamma_S>1\/2$ it is the other\nway around.\n\\end{Cor}\nThe corollary follows on noting that by partial integration we have\n\\begin{equation}\n\\label{part1}\n\\int_2^x \\log^{\\delta-1}t\\,dt=x\\log^{\\delta-1}x\\Big(1+{1-\\delta\\over \\log x}+O\\Big({1\\over \\log^2 x}\\Big)\\Big).\n\\end{equation}\nOn\ncomparing (\\ref{part1}) with Theorem \\ref{vier1} we see that Ramanujan's claim (\\ref{kleemie}), if\ntrue, implies $\\gamma_{S_B}=0$.\\\\\n\\indent A special, but common, case is where the primes in the set $S$ are, with finitely many exceptions, precisely\nthose in a finite union of arithmetic progressions, that is, there exists a modulus $d$ and integers\n$a_1,\\ldots,a_s$ such that for all sufficiently large primes $p$ we have $p\\in S$ iff\n$p\\equiv a_i({\\rm mod~}d)$ for some $1\\le i\\le s$. \n(Indeed, all examples we consider in this paper belong to this special case.)\nUnder this assumption it can be shown, see Serre \\cite{Serre}, that $S(x)$ has an asymptotic\nexpansion in the sense of Poincar\\'e, that is, for every integer $m\\ge 1$ we have\n\\begin{equation}\n\\label{starrie}\nS(x)=C_0(S)x\\log^{\\delta-1}x\\Big(1+{C_1(S)\\over \\log x}+{C_2(S)\\over \\log^2 x}+\\ldots+\n{C_m(S)\\over \\log^m x}+O({1\\over \\log^{m+1}x})\\Big),\n\\end{equation}\nwhere the implicit error term may depend on both $m$ and $S$. In particular \n$S_B(x)$ has an expansion of the form (\\ref{starrie}) (see, e.g., \nHardy \\cite[p. 63]{Hardy} for a proof). \n\n\\section{On the numerical evaluation of $\\gamma_S$}\nWe discuss various ways of numerically approximating $\\gamma_S$. A few of these approaches involve\na generalization of the von Mangoldt function $\\Lambda(n)$ (for more details see Section 2.2 of\n\\cite{MC}).\\\\\n\\indent We define $\\Lambda_S(n)$ implicitly by\n\\begin{equation}\n\\label{loggie}\n-{L_S'(s)\\over L_S(s)}=\\sum_{n=1}^{\\infty}{\\Lambda_S(n)\\over n^s}.\n\\end{equation}\nAs an example let us compute $\\Lambda_S(n)$ in case $S=\\mathbb N$. 
Since\n$$L_{\\mathbb N}(s)=\\zeta(s)=\\prod_p\\Big(1-{1\\over p^s}\\Big)^{-1},$$\nwe obtain \n$\\log \\zeta(s)=-\\sum_p \\log(1-p^{-s})$ and hence\n$$-{L_S'(s)\\over L_S(s)}=-{\\zeta'(s)\\over \\zeta(s)}=\\sum_p {\\log p\\over p^s-1}.$$\nWe infer that $\\Lambda_S(n)=\\Lambda(n)$, the von Mangoldt function. Recall that\n$$\\Lambda(n)=\n\\begin{cases}\n\\log p & {\\rm if~}n=p^e;\\\\\n0 & {\\rm otherwise}.\n\\end{cases}\n$$\nIn case $S$ is a multiplicative semigroup generated by $q_1,q_2,\\ldots$, we have \n$$L_S(s)=\\prod_i\\Big(1-{1\\over {q_i}^s}\\Big)^{-1},$$\nand we find\n$$\\Lambda_S(n)=\n\\begin{cases}\n\\log q_i & {\\rm if~}n=q_i^e;\\\\\n0 & {\\rm otherwise}.\n\\end{cases}\n$$\nNote that $S_B$ is a multiplicative semigroup. It is generated by $2$, the primes $p\\equiv 1({\\rm mod~}4)$\nand the squares of the primes $p\\equiv 3({\\rm mod~}4)$.\\\\\n\\indent For a more general multiplicative set $\\Lambda_S(n)$ can become more difficult in nature as we\nwill now argue.\nWe claim that (\\ref{loggie}) gives rise to the identity\n\\begin{equation}\n\\label{idi1}\n\\chi_S(n)\\log n =\\sum_{d|n}\\chi_S({n\\over d})\\Lambda_S(d).\n\\end{equation}\nIn the case $S=\\mathbb N$, e.g., we obtain $\\log n=\\sum_{d|n}\\Lambda(d)$.\nIn order to derive (\\ref{idi1}) we use the observation that if $F(s)=\\sum f(n)n^{-s}$,\n$G(s)=\\sum g(n)n^{-s}$ and $F(s)G(s)=H(s)=\\sum h(n)n^{-s}$ are formal Dirichlet series, then\n$h$ is the Dirichlet convolution of $f$ and $g$, that is $h(n)=(f*g)(n)=\\sum_{d|n}f(d)g(n\/d)$.\nBy an argument similar to the one that led us to the von Mangoldt function, one sees that\n$\\Lambda_S(n)=0$ in case $n$ is not a prime power. 
Thus we can rewrite (\\ref{idi1}) as \n\\begin{equation}\n\\label{idi2}\n\\chi_S(n)\\log n =\\sum_{p^j|n}\\chi_S({n\\over p^j})\\Lambda_S(p^j).\n\\end{equation}\nBy induction one then finds that $\\Lambda_S(p^e)=c_S(p^e)\\log p$, where $c_S(p)=\\chi_S(p)$ and\n$c_S(p^e)$ is defined recursively for $e>1$ by\n$$c_S(p^e)=e\\chi_S(p^e)-\\sum_{j=1}^{e-1}c_S(p^j)\\chi_S(p^{e-j}).$$\nAlso a more closed expression for $\\Lambda_S(n)$ can be given (\\cite[Proposition 13]{MC}), namely \n$$\\Lambda_S(n)=e\\log p \\sum_{m=1}^e {(-1)^{m-1}\\over m} \\sum_{k_1+k_2+\\ldots+k_m=e}\\chi_S(p^{k_1})\\chi_S(p^{k_2})\n\\cdots \\chi_S(p^{k_m}),$$\nif $n=p^e$ for some $e\\ge 1$ and $\\Lambda_S(n)=0$ otherwise, or alternatively $\\Lambda_S(n)=We\\log p$, where\n$$W=\\sum_{l_1+2l_2+\\ldots+el_e=e}{(-1)^{l_1+\\ldots+l_e-1}\\over l_1+l_2+\\ldots+l_e}\n\\Big({l_1+l_2+\\ldots+l_e\\over l_1!l_2!\\cdots l_e!}\\Big)\n\\chi_S(p)^{l_1}\\chi_S(p^2)^{l_2}\\cdots \\chi_S(p^e)^{l_e},$$\nif $n=p^e$ and $\\Lambda_S(n)=0$ otherwise, where the $k_i$ run through the natural numbers and the $l_j$ through\nthe non-negative integers.\\\\\n\\indent Now that we can compute $\\Lambda_S(n)$ we are ready for some formulae expressing $\\gamma_S$ in\nterms of this function.\n\n\\begin{Thm}\n\\label{vier}\nSuppose that $S$ is a multiplicative set satisfying Condition B. 
Then\n$$\\sum_{n\\le x}{\\Lambda_S(n)\\over n}=\\delta \\log x - \\gamma_S+O({1\\over \\log^{\\rho}x}).$$\nMoreover, we have\n$$\\gamma_{S}=-\\delta \\gamma + \\sum_{n=1}^{\\infty}{\\delta-\\Lambda_S(n)\\over n}.$$\nIn case $S$ is furthermore a semigroup generated by $q_1,q_2,\\ldots$, one has \n$$\\gamma_S=\\lim_{x\\rightarrow \\infty}\\Big(\\delta \\log x -\\sum_{q_i\\le x}{\\log q_i\\over q_i-1}\\Big).$$\n\\end{Thm}\nThe second formula given in\nTheorem \\ref{vier} easily follows from the first on invoking the classical definition of $\\gamma$:\n$$\\gamma=\\lim_{x\\rightarrow \\infty}\\Big(\\sum_{n\\le x}{1\\over n}-\\log x\\Big).$$\nTheorem \\ref{vier} is quite suitable for getting an approximate value of $\\gamma_{S}$. The formulae given\nthere, however, do not allow\none to compute $\\gamma_{S}$ with a prescribed numerical precision. For doing that another approach is needed, \nthe idea of which is to relate the generating series $L_{S}(s)$ to $\\zeta(s)$ and then take\nthe logarithmic derivative. We illustrate this in Section \\ref{SEK}\nby showing how $\\gamma_{S_D}$ (defined \nin that section) can be computed with high numerical precision.\n\n\n\\section{Non-divisibility of multiplicative arithmetic functions}\nGiven a multiplicative arithmetic function $f$ taking only integer values, it is an almost immediate observation\nthat, with $q$ a prime, the set $S_{f;q}:=\\{n:q\\nmid f(n)\\}$ is multiplicative. \n\n\\subsection{Non-divisibility of Ramanujan's $\\tau$}\nIn his so-called `unpublished' manuscript on the partition and tau functions \\cite{BO}, Ramanujan considers\nthe counting function of $S_{\\tau;q}$, where $q\\in \\{3,5,7,23,691\\}$ and $\\tau$ is the Ramanujan\n$\\tau$-function. 
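As a numerical illustration of the semigroup formula in the theorem above: for $S_B$, whose generators were listed earlier ($2$, the primes $p\equiv 1 \pmod 4$, and the squares of the primes $p\equiv 3 \pmod 4$, with $\delta=1/2$), one can evaluate the expression truncated at some $x$. The convergence in $x$ is only logarithmic, so this is a rough sketch rather than a method with guaranteed precision:

```python
import math

def primes_upto(n):
    # Sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def gamma_SB_truncated(x):
    # Truncation at x of: delta*log(x) - sum over generators q <= x
    # of log(q)/(q - 1), for the semigroup S_B with delta = 1/2.
    total = math.log(2)  # generator q = 2 contributes log 2 / (2 - 1)
    for p in primes_upto(x):
        if p % 4 == 1:
            total += math.log(p) / (p - 1)
        elif p % 4 == 3 and p * p <= x:
            total += math.log(p * p) / (p * p - 1)
    return 0.5 * math.log(x) - total
```

The truncated value drifts only slowly with $x$, which is exactly why the text recommends relating $L_S(s)$ to $\zeta(s)$ when a prescribed precision is required.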
\nRamanujan's $\\tau$-function is defined by the coefficients of the power series in $q$:\n$$\\Delta:=q\\prod_{m=1}^{\\infty}(1-q^m)^{24}=\\sum_{n=1}^{\\infty}\\tau(n)q^n.$$\nAfter setting $q=e^{2\\pi i z}$, the function $\\Delta(z)$ is the unique normalized cusp form of\nweight 12 for the full modular group SL$_2(\\mathbb Z)$.\nIt turns out that $\\tau$ is a multiplicative function\nand hence the set $S_{\\tau;q}$ is multiplicative. Given any such $S_{\\tau;q}$, Ramanujan denotes\n$\\chi_{S_{\\tau;q}}(n)$ by $t_n$. He then typically writes: ``It is easy to prove by quite elementary\nmethods that $\\sum_{k=1}^n t_k=o(n)$. It can be shown by transcendental methods that\n\\begin{equation}\n\\label{simpelonia}\n\\sum_{k=1}^n t_k\\sim {Cn\\over \\log^{\\delta_q} n};\n\\end{equation}\nand\n\\begin{equation}\n\\label{kleemie2}\n\\sum_{k=1}^n t_k=C\\int_2^n{dx\\over \\log ^{\\delta_q} x}+O\\Big({n\\over \\log^r n}\\Big),\n\\end{equation}\nwhere $r$ is any positive number''. Ramanujan claims that $\\delta_3=\\delta_7=\\delta_{23}=1\/2$, \n$\\delta_5=1\/4$ and $\\delta_{691}=1\/690$. \nExcept for $q=5$ and $q=691$ Ramanujan also writes down an Euler\nproduct for $C$. These are correct, except for a minor omission he made in case $q=23$.\n\\begin{Thm} {\\rm (\\cite{M}).} For $q\\in \\{3,5,7,23,691\\}$ we have\n$\\gamma_{S_{\\tau;q}}\\ne 0$ and thus Ramanujan's claim {\\rm (\\ref{kleemie2})} is false for $r>2$.\n\\end{Thm}\nThe reader might wonder why this specific small set of $q$. The answer is that in these cases Ramanujan\nestablished easy congruences such as \n$$\\tau(n)\\equiv \\sum_{d|n}d^{11}({\\rm mod~}691)$$ that allow one to \neasily describe the non-divisibility of $\\tau(n)$ for these $q$. Serre, see \\cite{SwD}, has shown\nthat for every odd prime $q$ a formula of type (\\ref{simpelonia}) exists, although no simple congruences\nas above exist. 
This result requires quite sophisticated tools, e.g., the theory of $l$-adic representations.\nThe question that\narises is whether $\gamma_{S_{\tau;q}}$ exists for every odd prime $q$ and, if so, to compute it with enough\nnumerical precision to determine whether it is zero or not and to be able to tell whether the\nLandau or the Ramanujan approximation is better. \n\n\subsection{Non-divisibility of Euler's totient function $\varphi$}\nSpearman and Williams \cite{SW} determined the\nasymptotic behaviour of $S_{\varphi;q}(x)$. Here invariants from the cyclotomic field $\mathbb Q(\zeta_q)$ come\ninto play. The mathematical connection with cyclotomic fields is not very direct in \cite{SW}. However, this\nconnection can be made, and in this way the results of Spearman and Williams can be rederived in a rather straightforward way, see \cite{FLM, eerstev}. Recall that the Extended Riemann Hypothesis (ERH) says that the\nRiemann Hypothesis holds true for every Dirichlet L-series $L(s,\chi)$.\n\begin{Thm}{\rm (\cite{FLM}).} \label{eflm}\nFor $q\le 67$ we have $1\/2>\gamma_{S_{\varphi;q}}>0$. For $q>67$ we have \n$\gamma_{S_{\varphi;q}}>1\/2$.\nFurthermore we have $\gamma_{S_{\varphi;q}}=\gamma+O(\log^2q\/\sqrt{q})$, \nunconditionally with an effective constant, $\gamma_{S_{\varphi;q}}=\gamma+O(q^{\epsilon-1})$, unconditionally\nwith an ineffective constant and $\gamma_{S_{\varphi;q}}=\gamma+O((\log q)(\log\log q)\/q)$ if ERH holds true.\n\end{Thm}\nThe explicit inequalities in this result were \nfirst proved by the author \cite{eerstev}, who established them assuming ERH. 
Note that the\nresult shows that Landau wins over Ramanujan for every prime $q\ge 71$.\\\n\indent Given a number field $K$, the Euler-Kronecker constant ${\mathcal EK}_K$ of $K$ is\ndefined as $${\mathcal EK}_K=\lim_{s \downarrow 1}\Big({\zeta'_K(s)\over \zeta_K(s)}+{1\over s-1}\Big),$$\nwhere $\zeta_K(s)$ denotes the Dedekind zeta-function of $K$. Given a prime $p\ne q$, let $f_p$ be the smallest\npositive integer such that $p^{f_p}\equiv 1({\rm mod~}q)$. Put\n$$S(q)=\sum_{p\ne q,f_p\ge 2} \n{\log p\over p^{f_p}-1}.$$\nWe have \n\begin{equation}\n\label{EK01}\n\gamma_{S_{\varphi;q}}= \gamma-{(3-q)\log q\over (q-1)^2(q+1)} -S(q) -\n{\mathcal{EK}_{{\mathbb Q}(\zeta_q)}\over q-1}.\n\end{equation}\n(This is a consequence of Theorem \ref{vier1} and Proposition 2 of Ford et al. \cite{FLM}.)\\\n\indent The Euler-Kronecker constants ${\mathcal EK}_K$ and in particular \n$\mathcal{EK}_{{\mathbb Q}(\zeta_q)}$ have been well-studied, see e.g. Ford et al.~\cite{FLM}, Ihara \cite{I} or \nKumar Murty \cite{KM} for results and references.\n\n\n\section{Some Euler-Kronecker constants related to binary quadratic forms}\n\label{SEK}\nHardy \cite[p. 9, p. 63]{Hardy} was under the misapprehension that for $S_B$ Landau's approximation is better. 
However, he relied\non a computation by his student Geraldine Stanley \cite{Stanley} that turned out to be incorrect.\nShanks proved that\n\begin{equation}\n\label{geraldine}\n\gamma_{S_B}={\gamma\over 2}+{1\over 2}{L'\over L}(1,\chi_{-4})-{\log 2\over 2}\n-\sum_{p\equiv 3({\rm mod~}4)}{\log p\over p^2-1}.\n\end{equation}\nVarious mathematicians independently discovered the result that\n$${L'\over L}(1,\chi_{-4})=\log\Big(M(1,\sqrt{2})^2e^{\gamma}\/2\Big),$$\nwhere $M(1,\sqrt{2})$ denotes the limiting value of Lagrange's AGM algorithm\n$a_{n+1}=(a_n+b_n)\/2$, $b_{n+1}=\sqrt{a_n b_n}$ with starting values $a_0=1$ and $b_0=\sqrt{2}$.\nGauss showed (in his diary) that\n$${1\over M(1,\sqrt{2})}={2\over \pi}\int_0^1 {dx\over \sqrt{1-x^4}}.$$\nThe total arclength of the lemniscate $r^2=\cos(2\theta)$ is given by $2L$, where\n$L=\pi\/M(1,\sqrt{2})$ is the so-called lemniscate constant.\\\n\indent Shanks used these formulae to show that \n$\gamma_{S_B}=-0.1638973186345\ldots \ne 0$, thus establishing the\nfalsity of Ramanujan's claim (\ref{kleemie}). \nSince $\gamma_{S_B}<1\/2$, it follows by Corollary 1 that actually the Ramanujan approximation is better.\n\nA natural question is to determine the primitive binary quadratic forms $f(X,Y)=aX^2+bXY+cY^2$\nof negative discriminant for which the integers represented form a multiplicative set. \nThis does not seem to be known. However, in the more restrictive case where we require the multiplicative\nset to be also a semigroup the answer is known, see Earnest and Fitzgerald \cite{earnest}.\n\begin{Thm}\nThe value set of a positive definite integral binary quadratic form forms a semigroup if and only if it is in the \nprincipal class, i.e. represents 1, or has order 3 (under Gauss composition).\n\end{Thm}\nIn the former case, the set of represented integers is just the set of norms from the order \n${\mathfrak O}_D$, which is multiplicative. 
In the latter case, the smallest examples are the forms of \ndiscriminant $-23$, for which the class group is cyclic of order 3: the primes $p$ are partitioned into those of the form $X^2 - XY + 6Y^2$ and those of the form $2X^2 \pm XY + 3Y^2$.\n\nAlthough the integers represented by $f(X,Y)$ do not in general form a multiplicative set, the associated set\n$I_f$ of integers represented by $f$ always satisfies the same type of asymptotic, namely we have\n$$I_f(x)\sim C_f{x\over \sqrt{\log x}}.$$ \nThis result is due to Paul Bernays \cite{Bernays}, famous for his work in logic, who wrote his PhD thesis under Landau. Since his work\nwas not published in a mathematical journal it was forgotten and later partially rediscovered by mathematicians\nsuch as James and Pall. For a brief description of the proof approach of Bernays see Brink et al. \cite{Brink}. \n\nWe would like to point out that in general the estimate\n$$I_f(x)=C_f{x\over \sqrt{\log x}}\Big(1+(1+o(1)){C'_f\over \log x}\Big)$$ \ndoes not hold. For example, it fails for $f(X,Y)=X^2+14Y^2$, see Shanks and Schmid \cite{SS}.\n\nBernays did not compute $C_f$; this was only done much later and required the combined effort of various\nmathematicians. The author and Osburn \cite{mos} combined these results to show that of all the two dimensional lattices\nof covolume 1, the hexagonal lattice has the fewest distances. Earlier Conway and Sloane \cite{CS} had identified the\nlattices with fewest distances in dimensions 3 to 8, also relying on the work of many other mathematicians. \n\nIn the special case where $f=X^2+nY^2$, a remark in a paper of Shanks seemed to suggest that he thought $C_f$\nwould be maximal in case $n=2$. However, the maximum does not occur for $n=2$, see Brink et al. \cite{Brink}.\n\nIn estimating $I_f(x)$, the first step is to count $B_D(x)$. 
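For the prototypical case $f=X^2+Y^2$ the set behind such counts is easy to experiment with: by Fermat's two-squares theorem, $n$ is a sum of two squares precisely when every prime $p\equiv 3({\rm mod~}4)$ divides $n$ to an even power. A brute-force Python cross-check of this classical fact (illustrative only, not from the paper):

```python
import math

def is_sum_of_two_squares(n: int) -> bool:
    """Direct search for a representation n = a^2 + b^2 with a, b >= 0."""
    a = 0
    while a * a <= n:
        b = math.isqrt(n - a * a)
        if a * a + b * b == n:
            return True
        a += 1
    return False

def fermat_criterion(n: int) -> bool:
    """True iff every prime p = 3 (mod 4) divides n to an even power."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            if p % 4 == 3 and e % 2 == 1:
                return False
        p += 1
    return not (n > 1 and n % 4 == 3)  # any leftover factor is prime

print(all(is_sum_of_two_squares(n) == fermat_criterion(n) for n in range(1, 2001)))  # → True
```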
\nGiven a discriminant $D\le -3$ we let $B_D(x)$ count the number of integers $n\le x$ that are coprime to\n$D$ and can be represented by some primitive quadratic integral form of discriminant $D$. The integers so\nrepresented are known, see e.g. James \cite{James}, to form a multiplicative semigroup, $S_D$, generated by the \nprimes $p$ with $({D\over p})=1$ and the squares \nof the primes $q$ with $({D\over q})=-1$. James \cite{James} showed that we have\n$$B_D(x)=C(S_D){x\over \sqrt{\log x}}+O({x\over \log x}).$$\nAn easier proof, following closely the ideas employed by Rieger \cite{Rieger}, was given by Williams \cite{Williams}.\nThe set of primes in $S_D$ has density $\delta=1\/2$. By the law of quadratic reciprocity the set of primes\n$p$ satisfying $({D\over p})=1$ is, with finitely many exceptions, precisely a union of arithmetic progressions.\nIt thus follows that Condition B is satisfied and, moreover, that for every integer $m\ge 1$, \nwe have an expansion of the form\n$$B_D(x)=C(S_D){x\over \sqrt{\log x}}\Big(1+{b_1\over \log x}+{b_2\over \log^2 x}+\n\cdots +O({1\over \log^m x})\Big).$$\nBy Theorem \ref{vier1} and Theorem \ref{vier} we infer that $b_1=(1-\gamma_{S_D})\/2$, with\n$$\gamma_{S_D}=\lim_{x\rightarrow \infty}\Big({\log x\over 2}-\sum_{p\le x,~({D\over p})=1}{\log p\over p-1}\Big)\n-\sum_{({D\over p})=-1}{2\log p\over p^2-1}.$$\nAs remarked earlier, in order to compute $\gamma_{S_D}$ with some numerical\nprecision the above formula is not suitable and another route has to be taken.\n\begin{Prop} \n\label{expressie} {\rm (\cite{James}.)}\nWe have, for Re$(s)>1$,\n$$L_{S_D}(s)^2=\zeta(s)L(s,\chi_D)\prod_{({D\over p})=-1}(1-p^{-2s})^{-1}\prod_{p|D}(1-p^{-s}).$$\n\end{Prop}\n\noindent {\it Proof}. 
On noting that\n$$L_{S_D}(s)=\\prod_{({D\\over p})=1}(1-p^{-s})^{-1}\\prod_{({D\\over p})=-1}(1-p^{-2s})^{-1},$$\nand\n$$L(s,\\chi_D)=\\prod_{({D\\over p})=1}(1-p^{-s})^{-1}\\prod_{({D\\over p})=-1}(1+p^{-s})^{-1},$$\nthe proof follows on comparing Euler factors on both sides. \\qed\n\\begin{Prop} \n\\label{2gamma}\nWe have\n$$2\\gamma_{S_D}=\\gamma+{L'\\over L}(1,\\chi_D)-\\sum_{({D\\over p})=-1}{2\\log p\\over p^2-1}+\n\\sum_{p|D}{\\log p\\over p-1}.$$\n\\end{Prop}\n\\noindent {\\it Proof}. Follows on logarithmically differentiating the expression for $L_{S_D}(s)^2$ given\nin Proposition \\ref{expressie}, invoking (\\ref{gammo}) and recalling that $L(1,\\chi_D)\\ne 0$. \\qed\\\\\n\nThe latter result together with $b_1=(1-\\gamma_{S_D})\/2$ leads to a formula first proved by Heupel \\cite{Heupel}\nin a different way. \n\nThe first sum appearing in Proposition \\ref{2gamma} can be evaluated with high numerical precision by using\nthe identity\n\\begin{equation}\n\\label{idie}\n\\sum_{({D\\over p})=-1}{2\\log p\\over p^2-1}=\\sum_{k=1}^{\\infty}\\Big({L'\\over L}(2^k,\\chi_{D})-{\\zeta'\\over \\zeta}(2^k)-\\sum_{p|D}{\\log p\\over p^{2^k}-1}\\Big).\n\\end{equation}\n\nThis identity in case $D=-3$ was established in \\cite[p. 436]{M2}. The proof given there is easily \ngeneralized. An alternative proof follows on combining Proposition \\ref{55} with Proposition \\ref{56}.\n\\begin{Prop}\n\\label{55}\nWe have\n$$\\sum_p {({D\\over p})\\log p\\over p-1}=-{L'\\over L}(1,\\chi_{D})+\n\\sum_{k=1}^{\\infty}\\Big(-{L'\\over L}(2^k,\\chi_{D})+{\\zeta'\\over \\zeta}(2^k)+\\sum_{p|D}{\\log p\\over p^{2^k}-1}\\Big).$$\n\\end{Prop}\n{\\it Proof}. This is Lemma 12 in Cilleruelo \\cite{C}. \\qed\n\\begin{Prop}\n\\label{56}\nWe have\n$$-\\sum_p {({D\\over p})\\log p\\over p-1}={L'\\over L}(1,\\chi_{D})+\\sum_{({D\\over p})=-1}{2\\log p\\over p^2-1}.$$\n\\end{Prop}\n{\\it Proof}. 
Put $G_D(s)=\prod_p(1-p^{-s})^{(D\/p)}$.\nWe have $${1\over G_D(s)}=L(s,\chi_{D})\prod_{({D\over p})=-1}(1-p^{-2s}).$$ The result then follows\non logarithmic differentiation of both sides of\nthe identity and the fact that $L(1,\chi_{D})\ne 0$. \qed\\\n\nThe terms in (\ref{idie}) can be calculated with MAGMA with high precision and\nthe series involved converge very fast. Cilleruelo \cite{C} claims\nthat \n$$\sum_{k=1}^{\infty} {L'\over L}(2^k,\chi_D)=\sum_{k=1}^6 {L'\over L}(2^k,\chi_D)+{\rm Error},~|{\rm Error}|\le 10^{-40}.$$\n\nWe will now rederive Shanks' result (\ref{geraldine}). Since there is only one primitive quadratic form\nof discriminant -4, we see that $S_{-4}$ is precisely the set of odd integers that can be written as a sum\nof two squares. If $m$ is an odd integer that can be written as a sum of two squares, then so can\n$2^em$ with $e\ge 0$ arbitrary.\nIt follows that $L_{S_B}(s)=(1-2^{-s})^{-1}L_{S_{-4}}(s)$ and hence $\gamma_{S_B}=\gamma_{S_{-4}}-\log 2$. On invoking\nProposition \ref{2gamma} one then finds the identity (\ref{geraldine}).\n\n\section{Integers composed only of primes in a prescribed arithmetic progression}\nConsider an arithmetic progression having infinitely many primes in it, that is, consider\nthe progression $a,a+d,a+2d,\ldots$ with $a$ and $d$ coprime.\nLet $S'_{d;a}$ be the multiplicative set of integers composed only\nof primes $p\equiv a({\rm mod~}d)$. Here we will only consider the simple case where\n$a=1$ and $d=q$ is a prime number. 
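Such sets are easy to enumerate by trial division; a small Python sketch listing the members of $S'_{3;1}$ up to 100 (the generating primes are $7,13,19,31,\ldots$; illustrative only):

```python
def members(limit: int, q: int, a: int) -> list:
    """Integers n <= limit all of whose prime factors are = a (mod q)."""
    def smallest_prime_factor(n: int) -> int:
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    out = []
    for n in range(1, limit + 1):
        m, ok = n, True
        while m > 1 and ok:
            p = smallest_prime_factor(m)
            if p % q != a % q:
                ok = False
            while m % p == 0:
                m //= p
        if ok:
            out.append(n)   # note that 1, the empty product, is included
    return out

print(members(100, 3, 1))  # → [1, 7, 13, 19, 31, 37, 43, 49, 61, 67, 73, 79, 91, 97]
```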
This problem is very closely related to that in Section 3.2.\nOne has \n$L_{S_{\varphi;q}}(s)=(1+q^{-s})\prod_{p\not\equiv 1({\rm mod~}q)\atop p\ne q}(1-p^{-s})^{-1}$.\nSince $L_{S'_{q;1}}(s)=\prod_{p\equiv 1({\rm mod~}q)}(1-p^{-s})^{-1}$, we then infer that\n$$L_{S_{\varphi;q}}(s)L_{S'_{q;1}}(s)=\zeta(s)(1-q^{-2s})$$\nand hence\n\begin{eqnarray}\n\label{bloep}\n\gamma_{S'_{q;1}} & = &\gamma-\gamma_{S_{\varphi;q}}+{2\log q\over q^2-1}\\\n& = & {\log q\over (q-1)^2} +S(q) +{\mathcal{EK}_{{\mathbb Q}(\zeta_q)}\over q-1},\nonumber\n\end{eqnarray}\nwhere the latter equality follows by identity (\ref{EK01}).\nBy Theorem \ref{eflm}, (\ref{bloep}) and the Table in Ford et al. \cite{FLM}, we then arrive after\nsome easy analysis at the following\nresult.\n\begin{Thm}\n\label{eflm2}\nFor $q\le 7$ we have $\gamma_{S'_{q;1}}>0.5247$. For $q>7$ we have \n$\gamma_{S'_{q;1}}<0.2862$.\nFurthermore we have $\gamma_{S'_{q;1}}=O(\log^2q\/\sqrt{q})$, \nunconditionally with an effective constant, $\gamma_{S'_{q;1}}=O(q^{\epsilon-1})$, unconditionally\nwith an ineffective constant and $\gamma_{S'_{q;1}}=O((\log q)(\log\log q)\/q)$ if ERH holds true.\n\end{Thm}\n \n\n\n\section{Multiplicative set races}\nGiven two multiplicative sets $S_1$ and $S_2$, one can wonder whether for every $x\ge 0$ we have\n$S_1(x)\ge S_2(x)$. We give an example showing that this question is not as far-fetched as one might\nthink at first sight. Schmutz Schaller \cite[p. 201]{PSS}, motivated by\nconsiderations from hyperbolic geometry, conjectured that the hexagonal lattice is better\nthan the square lattice, by which he means that $S_B(x)\ge S_H(x)$ for every $x$, where $S_H$ is the set of squared distances\noccurring in the hexagonal lattice, that is, the integers represented by \nthe quadratic form $X^2+XY+Y^2$. 
It is well-known that\nthe numbers represented by this form are the integers generated by the primes $p\equiv 1({\rm mod~}3)$, 3 and\nthe numbers $p^2$ with $p\equiv 2({\rm mod~}3)$. Thus $S_H$ is a multiplicative set.\n\n\noindent {\tt Exercise 1}. A positive integer $n$ is called a hypotenuse number if it occurs as the hypotenuse of a Pythagorean triangle, that is, if $n^2=u^2+v^2$ with $u>v>0$ integers. The set $S_{NH}$ of non-hypotenuse numbers\nforms a multiplicative set that is generated by 2 and all the primes $p\equiv 3({\rm mod~}4)$. \nShow that $L_{NH}(s)=L_{S_B}(s)\/L(s,\chi_{-4})$ and hence\n$$2\gamma_{NH}=2\gamma_{S_B}-2{L'\over L}(1,\chi_{-4})=\gamma-\log 2 + \sum_{p>2}{({-1\over p})\log p\over p-1}.$$\n{\tt Remark}. Put $f(X)=X^2+1$. Cilleruelo \cite{C} showed that, as $n$ tends to infinity,\n$$\log {\rm l.c.m.} (f(1),\ldots,f(n))=n\log n +Jn+o(n),$$\nwith\n$$J=\gamma-1-{\log 2\over 2}-\sum_{p>2}{({-1\over p})\log p\over p-1}=-0.0662756342\ldots$$\nWe have $J=2\gamma-1-{3\over 2}\log 2-2\gamma_{NH}$.\\\n\indent Recently the error term $o(n)$ has been improved by \nRu\'e et al. \cite{madrid} to \n$$O_{\epsilon}\big({n\over \log^{4\/9-\epsilon}n}\big),$$ with\n$\epsilon>0$.\\\n\vfil\eject\n\noindent {\tt Exercise 2}. Let $S'_D$ be the semigroup generated by the primes $p$ with $({D\over p})=-1$. 
It is easy\nto see that $L_{S'_D}(s)^2=L_{S_D}(s)^2L(s,\chi_D)^{-2}$ and hence, by Proposition \ref{2gamma}, we obtain\n\begin{eqnarray*}\n2\gamma_{S'_D}&=& 2\gamma_{S_D}-2{L'\over L}(1,\chi_D)\\\n&=& \gamma -{L'\over L}(1,\chi_D)-\sum_{({D\over p})=-1}{2\log p\over p^2-1}+\sum_{p|D}{\log p\over p-1}\\\n&=&\gamma+\sum_p{({D\over p})\log p\over p-1}+\sum_{p|D}{\log p\over p-1}.\n\end{eqnarray*}\n\n\centerline{{\bf Table:} Overview of Euler-Kronecker constants discussed in this paper}\n$~~$\\\n\begin{center}\n\begin{tabular}{|c|c|c|c|}\hline\nset & $\gamma_{\rm set} $ & winner & reference \\ \hline \hline\n$n=a^2+b^2$ & $-0.1638\ldots$ & Ramanujan & \cite{Sh} \\ \hline\nnon-hypotenuse & $-0.4095\ldots$ & Ramanujan & \cite{Sh2} \\ \hline\n$3\nmid \tau$ & $+0.5349\ldots$ & Landau & \cite{M} \\ \hline\n$5\nmid \tau$ & $+0.3995\ldots$ & Ramanujan & \cite{M} \\ \hline\n$7\nmid \tau$ & $+0.2316\ldots$ & Ramanujan & \cite{M} \\ \hline\n$23\nmid \tau$ & $+0.2166\ldots$ & Ramanujan & \cite{M} \\ \hline\n$691\nmid \tau$ & $+0.5717\ldots$ & Landau & \cite{M} \\ \hline\n$q\nmid \varphi$, $q\le 67$ & $<0.4977$ & Ramanujan & \cite{FLM} \\ \hline\n$q\nmid \varphi$, $q\ge 71$ & $>0.5023$ & Landau & \cite{FLM} \\ \hline\n$S'_{q;1}$, $q\le 7$ & $>0.5247$ & Landau & Theorem \ref{eflm2} \\ \hline\n$S'_{q;1}$, $q>7$ & $<0.2862$ & Ramanujan & Theorem \ref{eflm2} \\ \hline\n\end{tabular}\n\end{center}\n$~~$\\\n$~~$\\\n\noindent {\tt Acknowledgement}. 
I would like to thank Andrew Earnest and John Voight for helpful \ninformation concerning \nquadratic forms having a value set that is multiplicative, and Ana Zumalac\'arregui for sending me \cite{madrid}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaksw b/data_all_eng_slimpj/shuffled/split2/finalzzaksw new file mode 100644 index 0000000000000000000000000000000000000000..26ec5fd9b84babeb5f0825ade817c07c11da86ed --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaksw @@ -0,0 +1,5 @@ +{"text":"\section{INTRODUCTION AND MOTIVATION} \nReactivity estimates are an important facet of nuclear criticality safety. Currently, reactivity cannot be directly measured and is instead inferred from the prompt neutron decay constant, $\alpha$. The value of $\alpha$ is estimated using Rossi-alpha measurements that are predicated on measuring the time difference between neutron detections~\cite{uhrig}. Rossi-alpha measurements are traditionally conducted with $^3$He detectors, which use polyethylene to moderate neutrons to improve the detection efficiency. Because neutrons take time to moderate in the polyethylene, timing properties change (the decay of prompt neutrons and the neutron slowing-down time are convolved) and information can be lost. Organic scintillation detectors can detect neutrons directly and are fast compared to $^3$He systems. In this work, a fast plutonium assembly is simultaneously measured by $^3$He detectors and organic scintillators, and the detection systems are compared.\n\n\section{THE ROSSI-ALPHA METHOD}\nIn a Rossi-alpha measurement, neutron detection times are recorded, the time differences between detections are calculated (see Fig.~\ref{fig:RA}), and a Rossi-alpha histogram of the time differences is constructed~\cite{uhrig,Feynman44_1,hansen}. The histogram is fit and $\alpha$ is obtained from the fit parameters. 
Traditionally, the histogram is fit with a single-exponential-plus-constant model \\cite{ornbro}. Recent work has developed a double-exponential-plus-constant-model~\\cite{mikwa_2exp} that is more suitable than the single exponential model for measurements of reflected assemblies. The two-exponential model estimates a second parameter: $\\ell_{ctd}$, which describes the mean time a neutron spends in the reflector prior to detection. Sample Rossi-alpha histograms from a $^3$He detector measuring plutonium are shown in Fig.~\\ref{fig:sample_RA}.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=.45\\linewidth]{Rossi_alpha}\n\t\\caption{Time difference calculation for Rossi-alpha measurements.}\n\t\\label{fig:RA} \n\\end{figure}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=.87\\linewidth]{sample_fits}\n\t\\caption{Rossi-alpha plot with one- and two-exp fits on linear (a) and semilog (b) scales.}\n\t\\label{fig:sample_RA}\n\\end{figure}\n\\section{ASSEMBLY SPECIFICATIONS AND EXPERIMENTAL SETUP}\n\\subsection{Assembly Specifications}\nIn this work, the bottom layer of the Comet critical assembly -- lead-moderated, copper-reflected plutonium (93 wt\\% $^{239}$Pu) -- was measured. A 3D rendering of the assembly is shown in Fig.~\\ref{fig:3D}; the layout of the bottom layer of copper or plutonium boxes is shown in Fig.~\\ref{fig:bottom_layer}, and a sample plutonium box is shown in Fig.~\\ref{fig:box} ~\\cite{joetta_PHYSOR}. The total mass of plutonium was approximately 15 kg. 
\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=.45\\linewidth]{3D_rendering}\n\t\\caption{3D rendering of the Comet critical assembly~\\cite{joetta_PHYSOR}.}\n\t\\label{fig:3D}\n\\end{figure}\n\\begin{figure}[H]\n\t\\centering\n\t\\begin{minipage}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.7\\linewidth]{bottom_layer_layout}\n\t\t\\caption{Bottom layer box layout~\\cite{joetta_PHYSOR}.}\n\t\t\\label{fig:bottom_layer}\n\t\\end{minipage}%\n\t\\begin{minipage}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.6\\linewidth]{sample_box}\n\t\t\\caption{Photo of a plutonium box \\cite{joetta_PHYSOR}.}\n\t\t\\label{fig:box}\n\t\\end{minipage}\n\\end{figure}\n\\subsection{Simulation of the Assembly}\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=.45\\linewidth]{sample_TiBTaF}\n\t\\caption{Sample plot of the time-binned surface tally (F1) used to estimate the Rossi-alpha.}\n\t\\label{fig:TiBTaF}\n\\end{figure}\nTo estimate the prompt neutron decay constant, $\\alpha$, the measurement was simulated using MCNP6\\textregistered. The KCODE option estimated $k_\\text{eff} \\approx 0.624$. To determine $\\alpha$, surface (F1) and point-detector (F5) tallies were time-binned, and the tails (linear on a semilog plot) were fit. A sample time-bin tail-fit plot is shown in Fig.~\\ref{fig:TiBTaF} and $\\alpha = 52.3\\pm2.5$ ns. The uncertainty comes from the fit uncertainty. \n\\subsection{Experimental Setup and Detection System Details}\nIn the measurement of the assembly, two organic scintillator arrays (OSCARs) and one Neutron Multiplicity $^3$He Array Detector (NoMAD) were used. An OSCAR comprises 12 5.08 cm $\\times$ 5.08 cm diameter \\textit{trans}-stilbene organic scintillators coupled to photomultiplier tubes~\\cite{stilbene,stilbene2}. The NoMAD is similar to the MC-15 detection system \\cite{mc15_manual}, comprising 15 $^3$He detectors embedded in a polyethylene matrix. 
The systems were placed 50 cm from the edge of the assembly; a schematic is shown in Fig.~\ref{fig:schematics} and a photo of the systems side-by-side is shown in Fig.~\ref{fig:photo}. For this work, only 21 of the 24 OSCAR detectors were operational. Based on neutron detection rates, the NoMAD (in the given configuration) is 3.34 times more efficient than the OSCARs. \n\begin{figure}[H]\n\t\centering\n\t\begin{minipage}{.5\textwidth}\n\t\t\centering\n\t\t\includegraphics[width=.9\linewidth]{detector_layout}\n\t\t\caption{Schematic of detector placement.}\n\t\t\label{fig:schematics}\n\t\end{minipage}%\n\t\begin{minipage}{.5\textwidth}\n\t\t\centering\n\t\t\includegraphics[width=.9\linewidth]{detection_systems}\n\t\t\caption{Photo of detection systems.}\n\t\t\label{fig:photo}\n\t\end{minipage}\n\end{figure}\n\section{DATA ANALYSIS}\n\subsection{Data Analysis for the $^3$He-based NoMAD System}\label{sec:DA_NoMAD}\nThe output from measurement and preliminary data analysis is a list of detection times. The Rossi-alpha histogram is created using type-I binning (illustrated in Fig.~\ref{fig:RA})~\cite{hansen}. In theory, Rossi-alpha histograms peak at a time difference of 0 s; however, measurement considerations such as dead time and time of flight cause the peak to occur later. Suppose the maximum occurs in bin $b$. To mitigate these measurement considerations at short time differences, the first $2b$ bins are discarded. For some comparison purposes in this work, the histograms are integral-normalized and constant-subtracted by taking the mean of the last points in the tail. 
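The type-I binning described above is straightforward to implement; a minimal Python sketch on made-up detection times (it omits the discarding of the first $2b$ bins and the normalization steps, which would follow):

```python
def rossi_alpha_histogram(times, window, bin_width):
    """Type-I binning: for each detection, histogram the time differences to
    ALL later detections that fall within the reset window."""
    times = sorted(times)
    hist = [0] * int(window / bin_width)
    for i, t0 in enumerate(times):
        for t1 in times[i + 1:]:
            dt = t1 - t0
            if dt >= window:
                break            # later detections are even further away
            hist[int(dt / bin_width)] += 1
    return hist

# Synthetic detections at t = 0, 1, 2 and an isolated one at t = 10 (arbitrary units):
print(rossi_alpha_histogram([0.0, 1.0, 2.0, 10.0], window=3.0, bin_width=1.0))  # → [0, 2, 1]
```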
A sample resultant Rossi-alpha histogram for the NoMAD is shown in Fig.~\ref{fig:NoMAD_RA}.\n\begin{figure}[H]\n\t\centering\n\t\includegraphics[width=.5\linewidth]{He3_fits}\n\t\caption{Rossi-alpha histogram with fits from the NoMAD system.}\n\t\label{fig:NoMAD_RA}\n\end{figure}\n\subsection{Data Analysis for the Organic Scintillator-based OSCAR System}\nThe output from measurement and preliminary data analysis is a list of detection times, total pulse integrals, and tail integrals. Because organic scintillators are sensitive to both neutrons and photons, pulse shape discrimination (PSD) is used to discriminate the pulses. The PSD is done for each detector and is both time and energy dependent; a sample PSD plot is shown in Fig.~\ref{fig:PSD}. The PSD analysis results in three sets of data: neutron pulses, photon pulses, and pulses to discard (due to, for example, pulse pileup). Currently, gamma-ray Rossi-alpha is not considered; however, the photon pulses are still needed to correct for timing offsets. The OSCAR system is sensitive to time of flight and offsets due to electronics. To correct for offsets, the photon-photon coincidence peak (shown in Fig.~\ref{fig:PP_offset}; it arises from prompt fission photons) is created for all detectors relative to one detector. If the peak is not centered about zero, all times in the neutron and photon pulse lists are subsequently shifted by a constant. Once the list of neutron detection times is corrected, the Rossi-alpha analysis is the same as that for the NoMAD system (see Section~\ref{sec:DA_NoMAD}); the resultant Rossi-alpha plot is shown in Fig.~\ref{fig:OSCAR_RA}. 
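Charge-integration PSD of the kind described above reduces to comparing each pulse's tail integral to its total integral; a toy Python sketch (the fixed threshold of 0.2 and the pulses are invented for illustration; the real classification boundary is detector-, time-, and energy-dependent as noted above):

```python
def classify_pulse(total: float, tail: float, threshold: float = 0.2) -> str:
    """Label a pulse by its tail-to-total ratio: neutron pulses carry a larger
    fraction of their light in the slow (tail) component than photon pulses."""
    if total <= 0.0:
        return "discard"         # e.g. pile-up or a failed baseline fit
    return "neutron" if tail / total > threshold else "photon"

pulses = [(100.0, 30.0), (80.0, 8.0), (0.0, 0.0)]  # (total, tail) integrals
print([classify_pulse(total, tail) for total, tail in pulses])  # → ['neutron', 'photon', 'discard']
```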
\n\begin{figure}[H]\n\t\centering\n\t\begin{minipage}{.5\textwidth}\n\t\t\centering\n\t\t\includegraphics[width=.9\linewidth]{PSD}\n\t\t\caption{Sample PSD plot.}\n\t\t\label{fig:PSD}\n\t\end{minipage}%\n\t\begin{minipage}{.5\textwidth}\n\t\t\centering\n\t\t\includegraphics[width=.9\linewidth]{PP_offset}\n\t\t\caption{Sample photon-photon coincidence plot.}\n\t\t\label{fig:PP_offset}\n\t\end{minipage}\n\end{figure}\n\begin{figure}[H]\n\t\centering\n\t\includegraphics[width=.5\linewidth]{OSCAR_RA}\n\t\caption{Rossi-alpha histogram from the OSCAR system.}\n\t\label{fig:OSCAR_RA}\n\end{figure}\n\section{RESULTS AND DISCUSSION}\nUnnormalized, non-constant-subtracted Rossi-alpha histograms generated from two hours of data for each detection system are shown in Fig.~\ref{fig:accidentals}. The OSCAR has fewer accidentals than the NoMAD: the constant value of the tail for the NoMAD is 95\% of the maximum value, whereas the constant value of the tail for the OSCAR is only 0.7\% of the maximum value. In some cases, the high proportion of accidentals in the NoMAD data may obscure the second exponential. Obscuring the second exponential would reduce the fit model to a single exponential fit; however, since the parameters of interest are a linear combination of the two exponentials, $\alpha$ and $\ell_{ctd}$ cannot then be determined.\n\nFit metrics plotted as a function of measurement time (and bin width for the NoMAD) are shown in Fig.~\ref{fig:fit_metrics}. The root mean square error (RMSE) is normalized by the asymptotic values of the respective data series such that the y-axis is a measure of convergence. It takes the OSCAR less than 30 minutes to be within 50\% of its asymptotic value, while it takes the NoMAD approximately 120 minutes (note that RMSE is fairly independent of the bin width, as expected). 
It takes the OSCAR less than 20 minutes to achieve an $R^2$ value greater than 0.90, whereas the NoMAD with 2 $\mu$s bins requires approximately 70 minutes. The NoMAD's $R^2$ convergence could be improved by increasing the bin widths; however, 2 $\mu$s bin widths are already large compared to the time-decay constant (52.3 $\pm$ 2.5 ns) the NoMAD is trying to observe. Reducing the bin widths to increase sensitivity to the physical phenomenon the system is trying to measure results in increases in the time it takes the NoMAD to achieve $R^2 > 0.90$; bin widths of 1 $\mu$s require 140 minutes and bin widths of 500 ns require 280 minutes (the relationship is approximately linear). \n\nFrom simulation, the ``true\" value of $\alpha$ for the assembly is taken to be 52.3 $\pm$ 2.5 ns. Fitting the OSCAR data with the two-exponential model, $\alpha$ is estimated to be 47.4 $\pm$ 2.0 ns. The error is 9.37\% and, qualitatively, the values are similar since the $1.09\sigma$-confidence intervals overlap. The NoMAD estimate of $\alpha$ is $\approx 37$ $\mu$s. The NoMAD has a known slowing down time of 35-40 $\mu$s and, because $\alpha\ll 35$ $\mu$s, the NoMAD is likely only sensitive to the neutron moderation time. 
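A tail fit of the kind used on the simulated tallies (linear on a semilog plot, after constant subtraction) amounts to ordinary least squares on $\log y$ versus $t$; a Python sketch on noiseless synthetic data (the amplitude and time grid are invented; 52.3 ns is the simulated value quoted above):

```python
import math

def fit_decay_constant(ts, ys):
    """Least-squares slope of log(y) vs t; the time constant is -1/slope."""
    logs = [math.log(y) for y in ys]
    n = len(ts)
    tbar = sum(ts) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs))
             / sum((t - tbar) ** 2 for t in ts))
    return -1.0 / slope

alpha_true = 52.3                                  # ns, as in the simulation above
ts = [10.0 * k for k in range(1, 21)]              # time bins along the tail
ys = [5000.0 * math.exp(-t / alpha_true) for t in ts]
print(fit_decay_constant(ts, ys))  # recovers 52.3 since the data are noiseless
```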
\n\begin{figure}[H]\n\t\centering\n\t\includegraphics[width=\linewidth]{accidentals}\n\t\caption{Unnormalized, non-constant-subtracted Rossi-alpha histograms.}\n\t\label{fig:accidentals}\n\end{figure}\n\begin{figure}[H]\n\t\centering\n\t\includegraphics[width=\linewidth]{convergence}\n\t\caption{Fit metrics as a function of measurement time for the NoMAD at different bin widths and the OSCAR.}\n\t\label{fig:fit_metrics}\n\end{figure}\n\n\section{CONCLUSIONS AND FUTURE WORK}\nIn this work, the organic scintillator array (OSCAR), comprising 21 total operational \textit{trans}-stilbene detectors, and the Neutron Multiplicity $^3$He Array Detector (NoMAD), comprising 15 $^3$He tubes embedded in a polyethylene matrix, simultaneously measured 15 kg of plutonium (93 wt\% $^{239}$Pu) moderated by lead and reflected by copper with $k_\text{eff} = 0.624$ and $\alpha = $ 52.3$\pm$2.5 ns. It was found that the OSCAR converged on its estimate of $\alpha$ faster than the NoMAD, which translates to reduced procedural and operational costs in practical implementation. The convergence needs to be investigated further for assemblies where $\alpha$ is much larger ($\alpha$ on the order of tens to hundreds of $\mu$s). Because neutrons are moderated in the polyethylene matrix of the NoMAD (and moderation is not inherent to the OSCAR), the OSCAR is an inherently faster detection system. The entire Rossi-alpha histogram (reset time) is less than 100 ns for the OSCAR (1 ns bins), whereas 100 ns is the clock tick length for the NoMAD. Therefore, for fast assemblies ($\alpha$ on the order of ones to hundreds of ns), it is more suitable to use the OSCAR, which estimated the true $\alpha$ within 1.09 standard deviations and with an error of 9.37\% (on the order of the uncertainty in nuclear data). Larger accidental contributions are more likely to wash out time information; the NoMAD has a large accidental contribution and the OSCAR has a negligible accidental contribution. 
Future work involves determining when each system is more suitable for a given measurement. Furthermore, gamma-ray and mixed-particle Rossi-alpha will be investigated with the organic scintillators. Work will also be done with more measurements to validate organic scintillator-based Rossi-alpha measurements.\n\n\section*{ACKNOWLEDGEMENTS}\n\nThis work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1256260, by the Consortium for Verification Technology under Department of Energy National Nuclear Security Administration award number DENA0002534, and by the DOE Nuclear Criticality Safety Program, funded and managed by the National Nuclear Security Administration for the Department of Energy. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or other funding organizations.\n\n\n\n\setlength{\baselineskip}{12pt}\n\bibliographystyle{physor}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction} \label{intro}\nCataclysmic variables (CVs) are interacting binary systems in which a low-mass star---usually a red dwarf---overfills its Roche lobe and transfers mass onto a white dwarf (WD). \citet{warner} and \citet{hellier} offer excellent overviews of these intriguing systems. In a subset of CVs known as polars, the exceptionally strong magnetic field ($\sim$ tens of MG) of the WD synchronizes the WD's spin period with the orbital period of the binary (see \citet{cropper} for a comprehensive review of polars specifically). The accretion stream from the secondary star follows a ballistic trajectory toward the WD until the magnetic pressure matches the stream's ram pressure. 
When this occurs, a threading region forms in which the accretion stream couples onto the WD's magnetic field lines, and the captured material is then channeled onto one or more accretion regions near the WD's magnetic poles. The impact of the stream creates a shock in which the plasma is heated to X-ray-emitting temperatures, so polars can be significantly brighter in X-ray wavelengths than ordinary non-magnetic CVs. In addition to X-rays, the accretion region produces polarized cyclotron emission in the optical and in the infrared, the detection of which is a defining characteristic of polars.\n\nEclipses of the WD have provided great insight into polars. Because a polar has no accretion disk, an eclipsing polar will generally exhibit a two-step eclipse: a very sharp eclipse of the compact ($\\sim$ white dwarf radius) cyclotron-emitting region, followed by a much more gradual eclipse of the extended accretion stream (see, {\\it e.g.}, \\citet{harrop-allin} for an eclipse-mapping study of HU Aqr). When the accretion rate is high, the WD photosphere makes only a modest contribution to the overall optical flux, overshadowed by the two accretion-powered components mentioned above. \n\nEclipsing polars also make it possible to determine the orientation of the magnetic axis with respect to the secondary. In HU Aqr, the orientation of the dominant magnetic pole leads the line of centers of the binary by about 45$^{\\circ}$ \\citep{harrop-allin}, while in DP Leo, another eclipsing polar, the equilibrium orientation leads the line of centers by 7$^{\\circ} \\pm 3^{\\circ}$ but with a long-term oscillation with an amplitude of $\\sim25^{\\circ}$ \\citep{beuermann}.\n\nIn at least four polars,\\footnote{In addition to the subject of this study (V1432 Aql), three other polars are incontrovertibly asynchronous: BY Cam, V1500 Cyg, and CD Ind. 
At the time of writing, there are at least two candidate systems: V4633 Sgr \\citep{lipkin} and CP Pup \\citep{bianchini}.} the WD's spin period differs from the orbital period by as much as several percent. In these asynchronous polars, the WD's magnetic field is gradually synchronizing the spin period with the orbital period on timescales of centuries. For example, \\citet{ss91} detected a derivative in the WD spin period in V1500 Cyg and estimated that the system would approach resynchronization about 150 years after the publication of their study. \n\nBecause the prototype asynchronous polar, V1500 Cyg, was almost certainly desynchronized during its 1975 nova eruption, the canonical view is that these systems are byproducts of nova eruptions which break the synchronous rotation by causing the primary to lose mass and to interact with the secondary \\citep{ssl}. However, \\citet{warner02} combined the fraction of asynchronous systems among all known polars with their estimated synchronization timescales and estimated an unexpectedly short nova recurrence time of a few thousand years for polars---far more rapid than the expected recurrence time of $\\sim1 \\times 10^5$ years. Every aspect of Warner's deduction ought to be explored, including the possibility of an additional channel for desynchronizing polars, selection effects that might alter the fraction of asynchronous polars, and methods of calculating the synchronization time scale. \n\nInterestingly, in each of the four confirmed asynchronous polars, the threading process is inefficient in comparison to fully synchronous systems. In synchronous systems, the accretion stream is fully captured not long after it leaves the L1 point, well before it can travel around the WD \\citep[e.g.][]{schwope97}. In none of the asynchronous systems is this efficient threading seen. 
For example, Doppler tomography by \\citet{schwope} of V1432 Aql showed an azimuthally extended accretion curtain, a finding which is possible only if the accretion stream can travel significantly around the WD. X-ray observations of V1432 Aql also indicate that the accretion stream travels most of the way around the WD before it is fully threaded onto the magnetic field lines \\citep{mukai}. Likewise, in the other three systems, there is mounting evidence that the accretion flow can significantly extend around the WD. In CD Ind, the accretion stream appears to thread onto the same magnetic field line throughout the beat cycle, requiring that the stream be able to travel around the WD \\citep{ramsay}. With regard to V1500 Cyg, \\citet{ss91} argued that the smooth sinusoidal variation of the polarization curve was consistent with the infalling stream forming a thin accretion ring around the WD. More recently, \\citet{litvinchova} detected evidence that this accretion ring is fragmented, periodically reducing the irradiation of the donor star by the hot WD. In the remaining system, BY Cam, Doppler tomograms show that the accretion curtain extends over $\\sim180^\\circ$ in azimuth around the WD, requiring a similar extent of the accretion stream \\citep{schwarz}. Although a sample size of four is small, it is remarkable that in each of the confirmed asynchronous polars, the threading process is so inefficient that the accretion stream can travel much of the way around the WD. \n\n\\section{V1432 Aql}\n\nV1432 Aql (= RX J1940.1-1025) is the only known eclipsing, asynchronous polar and was identified as such by \\citet{patterson} and \\citet{friedrich}. There are two stable periodicities in optical photometry of V1432 Aql. The first (12116 seconds) is the orbital period, which is easily measured from the timings of the eclipses of the WD by the secondary. 
Initially, the nature of the eclipses was unclear; \\citet{patterson} argued that the secondary was the occulting body, but \\citet{watson} contended that a dense portion of the accretion stream was the culprit. Much of the confusion was attributable to the presence of residual emission lines and X-rays throughout the eclipses, as well as the variable eclipse depth. Since X-rays in polars originate on or just above the WD's surface, the apparent X-ray signal throughout the eclipse was inconsistent with occultations by the donor star. Additionally, there was considerable scatter in the eclipse timings, and the system's eclipse light curves did not show the rapid ingresses and egresses characteristic of synchronous polars \\citep{watson}. However, \\citet{mukai} resolved the dispute with high-quality X-ray observations which showed that the donor actually eclipses the WD and that the residual X-ray flux previously attributed to V1432 Aql was actually contamination from a nearby Seyfert galaxy.\n\nThe second periodicity ($\\sim12150$ seconds) is the spin modulation of the WD. In optical photometry, this periodicity manifests itself in several ways. In particular, at $\\phi_{sp} = 0.0$, the WD is occulted by material accreting onto one of the magnetic poles, producing a broad ``spin minimum'' \\citep{friedrich}. Analyses of the spin minima have revealed several fascinating insights into V1432 Aql. For example, \\citet{gs97} undertook an O$-$C study of the timing residuals of the spin minima and managed to detect a decrease in the WD spin period, indicating that the system is resynchronizing itself. They also measured a cyclical variation in the timings of the spin minima, caused by (1) a longitudinal offset between the magnetic pole and its corresponding accretion region on the WD's surface and (2) the accretion stream threading onto different magnetic field lines throughout the spin-orbit beat period ($P^{-1}_{beat} = |P^{-1}_{orb}-P^{-1}_{sp}|$). 
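The beat period implied by the two periodicities above can be evaluated directly; a back-of-the-envelope sketch using the rounded period values quoted in the text:

```python
# Beat period from the spin and orbital periods quoted in the text:
# 1/P_beat = |1/P_orb - 1/P_sp|.
P_orb = 12116.0     # s, orbital (eclipse) period
P_sp = 12150.0      # s, approximate spin period
P_beat = 1.0 / abs(1.0 / P_orb - 1.0 / P_sp)
P_beat_days = P_beat / 86400.0          # roughly 50 days
```

The resulting $\sim$50-day beat period sets the timescale of the beat-phase-dependent phenomena discussed in this paper.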
Using these timings and a dipole accretion model, the authors managed to constrain the combined effect of the threading radius and the colatitude of the magnetic axis on the WD, but they could not constrain these parameters individually. \\citet{staubert03} applied the methodology of \\citet{gs97} to a larger dataset and refined the results of the earlier paper.\n\nA critical concept which emerges from the literature is the beat period between the spin and orbital periods. The beat period is simply the amount of time that it takes for the WD (and its magnetic field) to rotate once as seen from the perspective of the donor star. As \\citet{gs97} first demonstrated, the accretion stream will interact with different magnetic field lines as the system progresses through its beat period, a foundational principle which informs our analysis throughout this paper.\n\nV1432 Aql is especially suitable for long-term study because its long-term brightness has remained constant not only in our own observations but also in data from the American Association of Variable Star Observers\\footnote{www.aavso.org} dating back to 2002. Similarly, the Catalina Sky Survey \\citep{drake} does not show any low states in the system since coverage of V1432 Aql began in 2005. While many polars alternate unpredictably between bright and faint states due to changes in the mass-transfer rate, V1432 Aql has not been observed to do so.\n\nWe supplement these previous studies by reporting the detection of stable periodicities in both the residual eclipse flux and the O$-$C timing residuals of the eclipses. 
These phenomena occur at the beat period, and we use a model to show that our observations are consistent with a threading radius whose position with respect to the WD varies throughout the beat cycle.\n\nIn response to this study's observational findings, one of us (DB) followed up by analyzing a different set of observations obtained by the Center for Backyard Astrophysics\\footnote{http:\/\/cbastro.org\/} over a much longer timespan. His group's analysis provides confirmation of the residual-flux and timing variations described in this paper while also reporting additional beat-cycle-related phenomena \\citep{boyd}. \n\n\\section{Observations}\n\n\\begin{figure}\n\n\t\\includegraphics[width=0.45\\textwidth]{sample-eclipses-models}\n\t\n\\caption{Two representative eclipses of V1432 Aql. The data represented in black were obtained at $\\phi_{beat} = 0.89$, and the data in gray at $\\phi_{beat} = 0.54$. The solid lines are the best-fit polynomials for each dataset. The polynomials satisfactorily model the asymmetric eclipses while smoothing noisy, possibly spurious features in the light curves.}\n\\label{sample-eclipses}\n\\end{figure}\n\nAs part of a twenty-eight-month effort to study V1432 Aql's behavior at different beat phases, six of us (CL, RM, RC, KCM, TC, and DS) obtained unfiltered, time-resolved photometry using the University of Notre Dame's 28-cm Schmidt-Cassegrain telescope and SBIG ST-8XME CCD camera between July 2012 and July 2014. The exposure time was 30~seconds for each individual image, with an overhead time of 8~seconds per image. A total of 76 light curves, consisting of over 17,500 individual measurements, were obtained with this instrument. These observations constitute the bulk of our dataset, and their uniformity avoids the introduction of errors caused by combining unfiltered observations from different telescope-CCD combinations. 
Because of their homogeneity, we use these data for all three parts of our analysis: studying the eclipse O$-$C variations, measuring the mid-eclipse magnitude, and constructing phase plots of the system at different beat phases.\n\nWe also obtained a number of light curves with other telescopes, but since these instruments have different spectral responses, we only used this supplemental data to explore eclipse O$-$C variations. CL obtained four unfiltered time series in July 2014 using the University of Notre Dame's 80-cm Sarah L. Krizmanich Telescope and two more with Wesleyan University's 60-cm Perkin Telescope in September 2014. The data obtained with the Krizmanich and Perkin Telescopes have much higher time resolution (exposure times between 5 and 7 seconds, each with a $\\sim$3-second readout time, for a total cadence of 10 seconds or less), facilitating the study of the rapid variability during the eclipses. In addition, MC, JU, DB, and LM respectively used a 40-cm Schmidt-Cassegrain and QSI-516 CCD camera with a Johnson $V$ filter, a 23-cm Schmidt-Cassegrain and QSI-583ws CCD camera, a 25-cm Newtonian with an unfiltered SXV-H9 CCD camera, and a 28-cm Schmidt-Cassegrain equipped with an STT-1603 CCD camera. With the exception of LM, who used 45-second exposures, each of them used an exposure time of 60 seconds.\n\nTo compensate for light-travel delays caused by Earth's orbital motion, the timestamp for each observation was corrected to the BJD (TDB) standard \\citep{eastman}.\n\nWith unfiltered photometry of a CV, it is possible to infer the approximate $V$-band magnitude of the CV by selecting a same-color comparison star and using its $V$ magnitude when calculating the magnitude of the CV. 
Since polars tend to be quite blue, we relied upon AAVSO field photometry to select two relatively blue comparison stars;\\footnote{These stars are labeled 117 and 120 in AAVSO chart 13643GMF, and they have $B-V$ colors of 0.20 and 0.43, respectively, according to the APASS photometric survey \\citep{APASS}.} we utilized these comparison stars for all photometry used in the analyses of mid-eclipse magnitude and the spin modulation at different beat phases.\n\nOne of the most obvious phenomena in the photometry is the highly variable magnitude of the system at mid-eclipse, which ranges from $V\\sim$ 16.0 to $V\\sim$ 17.5. Different eclipses also displayed strikingly different morphologies, and in Figure~\\ref{sample-eclipses}, we plot two eclipses which are representative of this variation. Such behavior is plainly at odds with normal eclipsing polars, which almost invariably have very abrupt ingresses and egresses since most of the flux originates in a small---and thus rapidly eclipsed---area on the WD \\citep[e.g.][]{harrop-allin}. V1432 Aql's gradual ingresses and egresses indicate that its flux originates in an extended region, and in this regard, its eclipses bear a superficial resemblance to those of CVs with accretion disks.\n\nWe measured both the time of minimum eclipse flux and the magnitude at mid-eclipse by fitting a fifth-order polynomial to each eclipse (see Table~\\ref{eclipse_timings}). Figure~\\ref{sample-eclipses} demonstrates the adequacy of the fit by plotting two eclipse light curves, each fitted with a fifth-order polynomial. Since the system's eclipses are frequently asymmetric, the time of minimum flux is not necessarily the midpoint between ingress and egress. Indeed, several eclipses were W-shaped, with two distinct minima. For these eclipses, we report the time of the deepest minimum. 
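The minimum-timing procedure described above can be sketched as follows; the eclipse profile here is synthetic and mildly asymmetric, purely for illustration:

```python
import numpy as np

# Synthetic, mildly asymmetric eclipse light curve (relative flux vs. time in days).
t = np.linspace(-0.01, 0.01, 80)
flux = 1.0 - 0.8 * np.exp(-((t - 0.001) / 0.004) ** 2) * (1.0 - 0.3 * t / 0.01)

# Fit a fifth-order polynomial, as in the text, and take the time of minimum
# flux from the real roots of the fit's derivative inside the fitted window.
coeffs = np.polyfit(t, flux, 5)
stationary = np.roots(np.polyder(coeffs))
candidates = [r.real for r in stationary
              if abs(r.imag) < 1e-10 and t.min() < r.real < t.max()]
t_min = min(candidates, key=lambda r: np.polyval(coeffs, r))
```

Taking the deepest stationary point of the fit, rather than the midpoint of ingress and egress, mirrors the handling of asymmetric and W-shaped eclipses described above.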
One particularly remarkable eclipse, observed on JD 2456843 and discussed in Section~\\ref{application_of_model}, had two minima of equal depth, so we report both times.\n\nAdditionally, we detected a number of spin minima. Since previous studies of the spin minima \\citep[e.g.][]{gs97} have measured the timing of each spin minimum by locating its vertical axis of symmetry, we fit a second-order polynomial to each spin minimum on the assumption that the minimum of this parabola will roughly approximate the vertical axis of symmetry. While a higher-order polynomial would do a better job of modeling the often-asymmetric spin minima, using the second-order polynomial increases the compatibility of our timings with those presented in other works.\n\nWe list in Table~\\ref{spin_timings} the timings of all clearly-detected spin minima. A number of spin minima were ill-defined or had multiple minima of comparable depth, and in those instances, we did not report a timing because it was impossible to objectively identify the middle of the spin minimum.\n\n\n\\begin{table}\n\t\\centering\n\t\\begin{minipage}{\\textwidth}\n\t\\caption{Observed Times of Minimum Eclipse Flux \\label{eclipse_timings}}\n\n\n\n\t\\begin{tabular}{cccccc}\n\t\\hline\n\tBJD\\footnote{$2456000+$} & $\\phi_{beat}$ &$\\phi_{sp}$& BJD& $\\phi_{beat}$\n\t &
$\\phi_{sp}$\\\\\n\t\\hline\n\n117.75353(52)&0.67&0.46&531.58818(26)&0.44&0.71\\\\\n121.67928(60)&0.73&0.39&534.67318(47)&0.49&0.66\\\\\n129.67477(51)&0.87&0.28&538.59720(41)&0.55&0.58\\\\\n129.81416(50)&0.87&0.27&539.57897(30)&0.57&0.56\\\\\n131.49702(40)&0.90&0.24&539.71935(49)&0.57&0.56\\\\\n132.47891(47)&0.91&0.23&540.70141(35)&0.58&0.55\\\\\n133.46253(73)&0.93&0.22&545.60953(54)&0.66&0.47\\\\\n134.44280(55)&0.94&0.20&546.59071(53)&0.68&0.45\\\\\n138.51071(58)&0.01&0.14&548.69482(44)&0.71&0.42\\\\\n145.80112(28)&0.13&0.01&549.67629(38)&0.73&0.40\\\\\n162.77047(40)&0.41&0.73&558.65164(39)&0.88&0.26\\\\\n175.67045(38)&0.62&0.51&559.63391(37)&0.89&0.24\\\\\n180.57845(40)&0.70&0.43&560.61564(49)&0.91&0.23\\\\\n180.71866(44)&0.70&0.43&562.58006(36)&0.94&0.21\\\\\n181.70019(43)&0.72&0.41&565.66515(61)&0.99&0.16\\\\\n182.68111(44)&0.74&0.39&566.64823(74)&0.01&0.15\\\\\n194.60329(44)&0.93&0.21&567.62862(61)&0.02&0.12\\\\\n428.79371(28)&0.76&0.37&573.65832(71)&0.12&0.02\\\\\n431.87824(79)&0.81&0.31&574.63842(48)&0.14&1.00\\\\\n447.86614(30)&0.07&0.06&575.61915(43)&0.15&0.97\\\\\n451.79366(39)&0.14&0.00&576.60209(40)&0.17&0.97\\\\\n460.76848(41)&0.28&0.85&577.58372(37)&0.18&0.95\\\\\n462.73257(35)&0.32&0.83&579.54661(33)&0.22&0.92\\\\\n463.71538(66)&0.33&0.82&580.52958(50)&0.23&0.91\\\\\n477.73569(51)&0.56&0.57&593.57155(31)&0.44&0.70\\\\\n484.74744(37)&0.68&0.45&594.55383(40)&0.46&0.69\\\\\n484.74771(71)&0.68&0.46&600.58074(45)&0.56&0.57\\\\\n484.88740(65)&0.68&0.45&787.79453(54)&0.58&0.54\\\\\n485.72961(41)&0.69&0.44&799.85494(46)&0.78&0.35\\\\\n485.72974(50)&0.69&0.44&801.81835(58)&0.81&0.32\\\\\n486.71075(55)&0.71&0.42&813.73962(81)&0.00&0.14\\\\\n486.85131(39)&0.71&0.42&814.72096(43)&0.02&0.12\\\\\n486.85137(40)&0.71&0.42&815.70217(53)&0.03&0.10\\\\\n487.69245(51)&0.72&0.41&815.8433(12)&0.04&0.10\\\\\n487.69267(51)&0.72&0.41&822.85472(77)&0.15&0.99\\\\\n488.81394(44)&0.74&0.39&842.76944(26)&0.47&0.68\\\\\n490.77791(51)&0.77&0.36&842.76957(49)&0.47&0.68\\\\\n503.67970
(72)&0.98&0.15&843.74858(55)&0.48&0.65\\\\\n506.62498(99)&0.03&0.10&843.75227(55)&0.48&0.67\\\\\n506.76450(79)&0.03&0.10&843.7524(12)&0.48&0.67\\\\\n508.72870(47)&0.07&0.07&847.67489(53)&0.55&0.58\\\\\n510.69232(55)&0.10&0.04&847.8138(15)&0.55&0.57\\\\\n515.60006(51)&0.18&0.96&848.65653(31)&0.56&0.56\\\\\n528.64310(38)&0.39&0.76&849.77851(33)&0.58&0.55\\\\\n529.62302(21)&0.40&0.73&903.77121(45)&0.45&0.70\\\\\n529.76522(39)&0.41&0.74&904.61311(18)&0.46&0.69\\\\\n530.60621(49)&0.42&0.72&905.59470(33)&0.48&0.67\\\\\n\n\n\n\t\\hline\n\t\\end{tabular}\n\n\n \n\t\\end{minipage}\n\\end{table}\n\n\n\\section{Analysis}\n\n\\subsection{Orbital, Spin and Beat Ephemerides}\\label{ephem}\n\n\t\\begin{figure}\n\n\t\\includegraphics[width=0.45\\textwidth]{minima-O-C}\n\t\\caption{O$-$C timing residuals for the spin minima as a function of $\\phi_{beat}$. The black dataset represents the new timings which we report in Table~\\ref{spin_timings}, while the gray datapoints are from previously published studies as described in the text. The data are repeated for clarity. Our lack of timings from $0.0 < \\phi_{beat} < 0.5$ is a consequence of the weakness of the spin minima during this half of the beat cycle.}\n\t\\label{minima-timings}\n\t\n\t\\end{figure}\n\nWe used $\\chi^{2}$ minimization to determine the best-fit ephemerides for the spin and orbital periods using our data in conjunction with the published optical eclipse and spin-minima timings in \\citet{patterson}, \\citet{gs97}, \\citet{staubert03}, and \\citet{mukai}. Some of the timings from these studies lacked uncertainties; for those observations, we adopted the average uncertainty of all measurements which did have error estimates. Furthermore, both \\citet{abb06} and \\citet{b12} have made their photometry of V1432 Aql available electronically, and while their time resolution was too low for inclusion in our eclipse analysis, it was adequate for measuring the spin minima. 
In the interest of uniformity of analysis, we measured the spin minima in the \\citet{abb06} and \\citet{b12} datasets ourselves instead of using their published timings.\\footnote{The original preprint of this paper used the timings reported in \\citet{b12} without reanalyzing their photometry. Using our timing measurements of their spin minima resulted in significantly lower values of $\\chi^{2}_{red}$ for our spin ephemerides in Section~\\ref{spin_ephemerides}.}\n\n\\subsubsection{Orbital Ephemeris}\n\nThe best-fit linear eclipse ephemeris is \\begin{equation} T_{ecl}[HJD] = T_{0, ecl} + P_{orb}E_{ecl},\\end{equation} with $T_{0, ecl} = 2454289.51352 \\pm 0.00004$ and $P_{orb} = 0.1402347644 \\pm 0.0000000018$ d. Even though our timestamps use the BJD standard, we report our epochs using the slightly less accurate HJD standard because the previously published data use HJD. We find no evidence of a period derivative in the orbital ephemeris, but both \\citet{b12} and \\citet{boyd} have reported quadratic orbital ephemerides. The latter paper had a larger dataset than the one used in this study, so our non-detection of an orbital period derivative does not necessarily contradict those claims.\n\n\\subsubsection{Spin Ephemeris}\\label{spin_ephemerides}\n\nThe spin ephemeris of \\citet{b12} fits our data very well, and we offer only a modestly refined cubic spin ephemeris of \\begin{equation} T_{min, sp}[HJD] = T_{0, sp} + P_{sp,0}E_{sp} + \\frac{\\dot{P}}{2}E^{2}_{sp} + \\frac{\\ddot{P}}{6}E^{3}_{sp},\\end{equation} where $T_{min}$ is the midpoint of the spin minimum, $T_{0, sp} = 2449638.3278 (\\pm 0.001), P_{sp,0} = 0.14062835 (\\pm 0.00000022)$ d, $\\dot{P}\/2 = -8.10 (\\pm 0.10) \\times 10^{-10}$ d cycle$^{-2}$, and $\\ddot{P}\/6 = -8.5 (\\pm 1.4) \\times 10^{-16}$ d cycle$^{-3}$. The uncertainties on these parameters were determined by bootstrapping the data. 
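The bootstrap just mentioned can be sketched for the linear eclipse ephemeris. The timings and their scatter below are synthetic, invented for illustration; only the period value is taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
P_true, T0_true = 0.1402347644, 0.0          # d; period from the text, epoch arbitrary
E = np.arange(0, 500, 7)                     # cycle counts of the (synthetic) eclipses
times = T0_true + P_true * E + rng.normal(0.0, 5e-4, E.size)  # ~40 s scatter

# Bootstrap: resample (E, time) pairs with replacement, refit the linear
# ephemeris each time, and take the spread of the fitted slopes (periods)
# as the period uncertainty.
periods = []
for _ in range(1000):
    idx = rng.integers(0, E.size, E.size)
    slope, intercept = np.polyfit(E[idx], times[idx], 1)
    periods.append(slope)
P_err = float(np.std(periods))
```

The standard deviation over the resampled fits plays the role of the quoted parameter uncertainties; the same resample-and-refit loop applies to the nonlinear spin ephemerides.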
We do not have enough observations to meaningfully search for higher-order period derivatives like those reported by \\citet{boyd}, but these values are within the error bounds of those reported by \\citet{b12}. \n\nWhile a polynomial fit accurately models the existing data, $P_{sp}$ will likely approach $P_{orb}$ asymptotically over the synchronization timescale (P. Garnavich, private communication). If this is correct, then $\\dot{P}$ is probably proportional to the difference between $P_{sp}$ and $P_{orb}$ so that \\begin{equation}\\dot{P} \\equiv \\frac{dP_{sp}}{dE_{sp}} = k(P_{sp} - P_{orb}).\\label{pdot_exp}\\end{equation} Integrating the solution to this differential equation yields an ephemeris of \\begin{equation} T_{min, sp} = \\frac{P_{sp, 0} - P_{orb}}{k}(e^{kE_{sp}} - 1) + P_{orb}E_{sp} + T_{0, sp}, \\end{equation} where $P_{orb}$ is the measured value and the three free parameters are $k = -4.205 (\\pm0.008) \\times 10^{-6}$ cycles$^{-1}$, $P_{sp, 0} = 0.14062863 (\\pm0.00000008)$ d, and $T_{0} = 2449638.3277 (\\pm0.0010)$. \n\nAlthough $\\chi^{2}_{red} = 2.9$ for both the cubic ephemeris and the exponential ephemeris, both of these ephemerides neglect the cyclical shifts in the location of the accretion spot first reported by \\citet{gs97}. To illustrate the effect of these variations on the quality of our fit, Figure~\\ref{minima-timings} plots the residuals from the cubic ephemeris as a function of beat phase. Because this particular variation is not an actual change in the spin period, we did not attempt to incorporate it into our ephemerides. Unless a spin ephemeris were to take into account these variations and their $\\sim$1000-second peak-to-peak amplitude, it would be difficult to achieve a significantly lower $\\chi^{2}_{red}$.\n\nWith this caveat in mind, the comparable values of $\\chi^{2}_{red}$ for each ephemeris lead us to conclude that they model the data as well as could be expected. 
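As a consistency check of the exponential ephemeris (our own arithmetic, not from the paper's analysis code), differencing consecutive predicted minima recovers an instantaneous spin period that decays exponentially from $P_{sp,0}$ toward $P_{orb}$:

```python
import math

# Fitted parameters of the exponential spin ephemeris quoted in the text
# (the epoch T0 is suppressed, since only differences matter here).
P_orb = 0.1402347644        # d
P_sp0 = 0.14062863          # d
k = -4.205e-6               # per spin cycle

def t_min(E):
    """Predicted time (d, relative to T0) of the E-th spin minimum."""
    return (P_sp0 - P_orb) / k * (math.exp(k * E) - 1.0) + P_orb * E

def p_sp(E):
    """Instantaneous spin period implied by the ephemeris: dT_min/dE."""
    return P_orb + (P_sp0 - P_orb) * math.exp(k * E)
```

Because $k < 0$, the excess $P_{sp} - P_{orb}$ decays toward zero but never changes sign, which is the sense in which this ephemeris encodes asymptotic resynchronization.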
Though we use the cubic ephemeris for the sake of simplicity when calculating the beat phase, the exponential spin ephemeris is at least grounded in a physical theory of the resynchronization process. Moreover, in principle, the only parameter which should need to be updated in the future is the constant $k$. By contrast, a polynomial ephemeris could require an ungainly number of terms in order to attain a satisfactory fit.\n\n\\subsubsection{Beat Ephemeris}\n\nBecause there are several non-trivial steps in calculating the system's beat phase, the beat ephemeris is too unwieldy to list here. Nevertheless, to facilitate future studies, we have written a Python script which calculates the system's beat phase at a user-specified Heliocentric Julian Date using the procedure outlined in Appendix~\\ref{beatphase}. Additionally, it calculates future dates at which the system will reach a user-specified beat phase. This script is available for download as supplemental online material and may also be obtained via e-mail from CL.\n\n\\subsubsection{Synchronization Timescale}\n\nAs defined by \\citet{ss91}, a first-order approximation of an asynchronous polar's synchronization timescale is given by \\begin{equation} \\tau_{s} = \\frac{P_{orb} - P_{sp}}{\\dot{P}}.\\label{timescale-formula} \\end{equation} If one assumes rather unrealistically that $\\dot{P}$ will remain constant until resynchronization, this formula provides a very rough estimate of when resynchronization will occur. If Equation~\\ref{pdot_exp} is substituted for $\\dot{P}$ in Equation~\\ref{timescale-formula}, this equation simplifies to $\\tau_{s} = -k^{-1}$. Since $k$ is essentially a decay rate, this formula yields the amount of time necessary for the initial value (in this context, the asynchronism at $T_0$, given by $P_{spin,0} - P_{orb}$) to be reduced by a factor of $e^{-1}$. 
Because $-k^{-1} = 237700$ spin cycles, $\\tau_{s} = 71.5\\pm0.4$ years with respect to August 2014, so in the year $\\sim$2086, the predicted spin period would be $\\sim$12128.8 seconds, fully 12.5 seconds longer than $P_{orb}$. While this estimate of $\\tau_{s}$ is obviously not an estimate of when resynchronization will actually occur, it is slightly less than the values in \\citet{gs97} and \\citet{abb06} and considerably less than that in \\citet{staubert03}.\n\nIt is unclear how long an exponential spin ephemeris might remain valid, but if ours were to hold true indefinitely, it predicts that $P_{sp}$ will approach $P_{orb}$ to within one second in the year $\\sim2320$ and to within 0.1 seconds in $\\sim2750$. These are not synchronization timescales as defined by \\citet{ss91}, but in the case of an exponential ephemeris, they provide a more realistic manner of extrapolating when the system might approach resynchronization. The inferred $\\sim$300-year timespan necessary just to attain $P_{sp} - P_{orb} < 1$ second is longer than the $\\sim$100-year timescales in \\citet{gs97} and \\citet{abb06}, but it is within the error bounds of the $\\sim$200-year synchronization timescale announced in \\citet{staubert03}. An important disclaimer with these synchronization timescales is that the orbital period may be decreasing, as claimed by both \\citet{b12} and \\citet{boyd}. Since V1432 Aql's WD is spinning up, a decreasing orbital period would presumably lengthen the resynchronization timescale.\n\nIf asynchronous polars do resynchronize asymptotically, it would suggest that a number of supposedly synchronous polars are very slightly asynchronous, with beat periods of months, years, or even decades. Unless they were closely observed for extended periods of time, these polars might be misclassified as being synchronous, so the true fraction of polars which are asynchronous might actually be higher than is currently believed. 
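The timescale numbers above follow directly from the exponential-ephemeris parameters; a quick check of the arithmetic (ours, not the paper's code):

```python
import math

# Exponential spin-ephemeris parameters quoted earlier in the text.
P_orb = 0.1402347644        # d (about 12,116.3 s)
P_sp0 = 0.14062863          # d (about 12,150.3 s)
k = -4.205e-6               # per spin cycle

efold_cycles = -1.0 / k                        # ~237,700 spin cycles
efold_years = efold_cycles * P_sp0 / 365.25    # ~91.5 yr measured from the 1994 epoch

# After one e-folding, the excess of the spin period over the orbital period
# has shrunk by a factor of e:
excess0_s = (P_sp0 - P_orb) * 86400.0          # ~34 s at the epoch T0
excess_efold_s = excess0_s / math.e            # ~12.5 s
```

Measured from $T_0$ (late 1994), one e-folding is $\sim$91.5 yr, i.e., roughly 71.5 yr remaining as of August 2014, consistent with the value of $\tau_{s}$ and the 12.5-second residual excess quoted above.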
If correct, this result would be relevant in any examination of the problem of the unreasonably short nova-recurrence time in polars \\citep{warner02}. On one hand, a greater proportion of polars which are asynchronous would imply an even faster recurrence time, but on the other hand, an asymptotic approach to synchronism would also prolong the resynchronization process---and thus, the recurrence time. We leave it to a future work to more fully explicate these matters, but clearly, it will be important to independently confirm our exponential ephemeris, to resolve the possibility of an orbital-period derivative in V1432 Aql, and to determine if the other asynchronous systems also show evidence of asymptotic resynchronization.\n\n\\subsection{Variations in Eclipse O$-$C} \\label{O-C}\n\n\\begin{figure*}\n\n\t\\begin{subfigure}{\n\t\\includegraphics[width=0.5\\textwidth]{eclipse-timings-power}\n\t\\includegraphics[width=0.5\\textwidth]{eclipse-timings-waveform}}\n\t\\end{subfigure}\n\t\n\\caption{From left to right: the power spectrum of the timing residuals of the combined dataset described in section~\\ref{O-C}, and the waveform of the combined dataset when phased at the beat period. Black data points represent our data as listed in Table~\\ref{eclipse_timings}, while gray data points indicate previously published data as described in Section~\\ref{O-C}.}\n\\label{timing}\n\\end{figure*}\n\n\\subsubsection{Periodicity}\n\nIn a conference abstract, \\citet{gs99} first reported the discovery of a 200-second O$-$C shift in V1432 Aql's eclipse timings. We followed up on this periodicity by performing an O$-$C analysis on all eclipse timings listed in Table~\\ref{eclipse_timings}. 
We calculated both the O$-$C timing residual and the beat cycle count ($C_{beat}$; see Appendix~\\ref{beatphase}) for each eclipse and then used the analysis-of-variance (ANOVA) technique \\citep{anova} to generate several periodograms, with $C_{beat}$ serving as the abscissa.\n\nThe first periodogram used all of the eclipse timings reported in Table~\\ref{eclipse_timings}, and it showed a moderately strong signal at 1.00$\\pm$0.02 cycles per beat period, with the folded eclipse timings exhibiting a sawtooth waveform. We then recalculated the power spectrum after adding previously published optical eclipse timings by \\citet{patterson} and \\citet{watson} to the dataset. \n\n\\begin{table}\n\t\\centering\n\n\t\\begin{minipage}{\\textwidth}\n\t\\caption{Observed Times of Spin Minima\\label{spin_timings}}\n\t\\begin{tabular}{ccc|ccc}\n\t\\hline\n\tBJD\\footnote{2456000+} & $\\phi_{beat}$ & $\\phi_{orb}$ & BJD & $\\phi_{beat}$ & $\\phi_{orb}$\\\\\n\t\\hline\n\n117.8317(18)&0.64&0.55&486.7913(24)&0.69&0.57\\\\\n119.6574(22)&0.67&0.57&486.7925(18)&0.69&0.58\\\\\n121.6249(21)&0.70&0.60&487.7752(21)&0.70&0.58\\\\\n129.7710(26)&0.84&0.69&488.7581(29)&0.72&0.59\\\\\n131.4553(26)&0.87&0.70&490.7272(15)&0.75&0.63\\\\\n132.4406(31)&0.88&0.73&528.6812(16)&0.37&0.28\\\\\n162.6717(14)&0.38&0.3&534.7212(18)&0.47&0.35\\\\\n175.6023(18)&0.59&0.51&539.6384(12)&0.55&0.41\\\\\n180.6599(21)&0.68&0.57&540.6261(45)&0.57&0.46\\\\\n181.6459(22)&0.69&0.61&546.6710(19)&0.66&0.56\\\\\n182.6293(23)&0.71&0.62&549.6188(16)&0.71&0.58\\\\\n194.5668(21)&0.90&0.74&558.6081(23)&0.86&0.69\\\\\n431.8292(22)&0.79&0.64&560.5729(23)&0.89&0.70\\\\\n484.6855(25)&0.65&0.55&593.6136(18)&0.43&0.31\\\\\n484.8263(22)&0.66&0.55&594.5979(16)&0.44&0.32\\\\\n485.6698(17)&0.67&0.57&607.5310(19)&0.65&0.55\\\\\n485.8085(21)&0.67&0.56&&&\\\\\n\n\n\t\\hline\n\t\\end{tabular}\n\n\\end{minipage}\n\n\\end{table} \n\nThe combined dataset consists of 133 measurements spanning a total of 138 beat cycles. 
The strongest signal is at the beat period (1.001$\\pm$0.002 cycles per beat period), and its waveform consists of an abrupt 240-second shift in the timing variations near $\\phi_{beat}\\sim0.5$, which is when the residual eclipse flux is strongest (see Section~\\ref{flux-periodicity}). Both the periodogram and waveform are shown in Figure~\\ref{timing}. Between $\\sim0.5 < \\phi_{beat} < \\sim 0.85$, the eclipses occur $\\sim$120 seconds early, but after $\\phi_{beat} \\sim 0.85$, the eclipses begin occurring later, and by $\\phi_{beat} \\sim 1.0$, the eclipses are occurring $\\sim$120 seconds late. Although the 240-second O$-$C jump at $\\phi_{beat}\\sim0.5$ is the most obvious feature in the O$-$C plot, there is a 120-second jump towards earlier eclipses at $\\phi_{beat}\\sim0.0$. Considering the gradual eclipse ingresses and egresses, the WD must be surrounded by an extended emission region, so these eclipse timings track the centroid of emission rather than the actual position of the WD.\n\n\\subsubsection{Description of Model} \\label{description_of_model}\n\nGiven the asynchronous nature of the system and the ability of the stream to travel most of the way around the WD \\citep{mukai}, we hypothesize that cyclical changes in the location of the threading region are responsible for the O$-$C variation. In an asynchronous system, the position of the threading region can vary because the WD rotates with respect to the accretion stream, causing the amount of magnetic pressure at a given point along the stream to vary during the beat period. Threading occurs when the magnetic pressure ($\\propto r^{-6}$) balances the stream's ram pressure ($\\propto v^{2}$). For a magnetic dipole, the magnetic flux density $B$ has a radial dependence of $\\propto r^{-3}$, but with an additional dependence on the magnetic latitude; the magnetic pressure will be even greater by a factor of 4 near a magnetic pole as opposed to the magnetic equator. 
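The factor of 4 quoted above follows from the standard dipole field magnitude, $B \propto r^{-3}\sqrt{1 + 3\sin^{2}\lambda}$, where $\lambda$ is the magnetic latitude; since magnetic pressure scales as $B^{2}$, the pole-to-equator pressure ratio at fixed radius is exactly 4. A one-line check:

```python
import math

# Relative dipole field magnitude at magnetic latitude lam (radians), at fixed radius.
def b_rel(lam):
    return math.sqrt(1.0 + 3.0 * math.sin(lam) ** 2)

# Magnetic pressure scales as B^2: pole vs. equator at the same radius.
pressure_ratio = (b_rel(math.pi / 2.0) / b_rel(0.0)) ** 2
```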
An additional consideration is that the stream's diameter is large enough that the magnetic pressure varies appreciably across the stream's cross section \\citep{mukai88}.\n\nKM modeled this scenario using a program which predicts times of eclipse ingresses and egresses of a point given its $x, y$, and $z$ coordinates within the corotating frame of the binary. The physical parameters used in the program are $P_{orb} = 3.365664$ h (measured), $M_{WD}$ = 0.88M$_{\\odot}$, $M_{donor}$ = 0.31M$_{\\odot}$, $R_{donor}$ = $2.47 \\times 10^{10}$ cm, $i = 76.8^{\\circ}$, and binary separation $a = 8.4 \\times 10^{10}$ cm \\citep{mukai}. The code treats the donor star as a sphere for simplicity, but since we do not attempt to comprehensively model the system in this paper, the errors introduced by this approximation should be minimal. For instance, as a result of this approximation, we had to decrease $i$ by 0.9$^{\\circ}$ compared to the value from \\citet{mukai} in order to reproduce the observed eclipse length.\n\nWe first calculated the ballistic trajectory of the accretion stream and arbitrarily selected four candidate threading regions along the stream (P1, P2, P3, and P4) under the assumption that the stream will follow its ballistic trajectory until captured by the magnetic field \\citep{mukai88}. The eclipse-prediction program then returned the phases of ingress and egress for each of the four points given their $x$ and $y$ coordinates within the corotating frame of the binary. We selected these four points arbitrarily in order to demonstrate the effects that a changing threading region would have on eclipse O$-$C timings; we do not claim that threading necessarily occurs at these positions or that this process is confined to a discrete point in the $x,y$ plane. 
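The ballistic portion of the trajectory can be sketched independently by integrating the restricted three-body equations of motion in the corotating frame. The code below is our own illustration, not the eclipse-prediction program described above: the masses are those quoted in the text, while the release conditions and step size are arbitrary numerical choices (units with the binary separation and orbital angular velocity set to 1).

```python
import math

# Restricted three-body sketch of the stream's ballistic trajectory in
# the corotating frame (binary separation = orbital angular velocity = 1).
# Masses are from the text; the release offset, kick, and step size are
# illustrative numerical choices.
MU = 0.31 / (0.88 + 0.31)        # donor's mass fraction
WD_X, DONOR_X = -MU, 1.0 - MU    # x positions of the WD and donor

def accel(x, y, vx, vy):
    """Corotating-frame acceleration: gravity + centrifugal + Coriolis."""
    r1 = math.hypot(x - WD_X, y)
    r2 = math.hypot(x - DONOR_X, y)
    ax = x + 2.0 * vy - (1.0 - MU) * (x - WD_X) / r1**3 - MU * (x - DONOR_X) / r2**3
    ay = y - 2.0 * vx - (1.0 - MU) * y / r1**3 - MU * y / r2**3
    return ax, ay

def l1_x():
    """Bisect along the line of centres for the point of zero x-force."""
    lo, hi = WD_X + 0.05, DONOR_X - 0.05
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        fx, _ = accel(mid, 0.0, 0.0, 0.0)
        if fx > 0.0:          # net pull toward the donor: L1 is WD-ward
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def _deriv(s):
    x, y, vx, vy = s
    ax, ay = accel(x, y, vx, vy)
    return (vx, vy, ax, ay)

def stream_path(n_steps=4000, h=5e-4):
    """RK4-integrate a test particle released just inside L1."""
    s = (l1_x() - 1e-3, 0.0, -1e-3, 0.0)   # small kick toward the WD
    path = [(s[0], s[1])]
    for _ in range(n_steps):
        k1 = _deriv(s)
        k2 = _deriv(tuple(s[i] + 0.5 * h * k1[i] for i in range(4)))
        k3 = _deriv(tuple(s[i] + 0.5 * h * k2[i] for i in range(4)))
        k4 = _deriv(tuple(s[i] + h * k3[i] for i in range(4)))
        s = tuple(s[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
                  for i in range(4))
        path.append((s[0], s[1]))
    return path
```

The minimum WD distance along the returned path marks the stream's closest approach, the quantity indicated by a cross in our schematic figure.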
Figure~\ref{model} shows a schematic diagram of this model.\n\n\begin{figure}\n\n\t\includegraphics[width=0.5\textwidth]{model}\n\t\caption{A schematic diagram of the system as used in our model, viewed from above in the binary rest frame. The WD is at rest at the origin, and the black curved line is the accretion stream trajectory, which originates at the L1 point near the right edge of the diagram. P1, P2, P3, and P4 are illustrative threading regions, and the cross indicates the location of the stream's closest approach to the WD. Since $P_{sp} > P_{orb}$, the WD rotates clockwise in this figure.}\n\t\label{model}\n\t\n\t\end{figure}\n\n\begin{figure*}\n\n\t\includegraphics[width=1\textwidth]{diagram}\n\t\n\caption{A sketch indicating the general positions of the accretion spots at different beat phases as seen from the donor star. The black crosses represent accretion spots visible from the donor, and the vertical line is the WD's spin axis. Section~\ref{orientation} explains how we inferred the positions of the two magnetic poles.}\n\label{diagram}\n\end{figure*}\n\nOnce threading occurs, the captured material will follow the WD's magnetic field lines until it accretes onto the WD. To simulate the magnetically channeled portion of the stream, we assumed that captured material travels in a straight line in the $x,y$ plane from the threading region to the WD while curving in the $z$ direction, where $z$ is the elevation above or below the $x,y$ plane. This is another simplification, since the magnetic portion of the stream might be curved in the $x,y$ plane, but the approximation is presumably reasonable. Since the magnetic field lines will lift the captured material out of the orbital plane, we calculated the $x,y$ coordinates of the midpoint between each threading region and the WD and computed its ingress and egress phases at several different values of $z$.
We reiterate that this is not a comprehensive model, but as we explain shortly, it is sufficiently robust to offer an explanation for the observed O$-$C variations.\n\n\\subsubsection{Orientation of the Poles} \\label{orientation}\n\nBefore this model is applied to the observations, it is helpful to determine the orientations of the poles at different points in the beat cycle. We assume that there are two magnetic poles which are roughly opposite each other on the WD \\citep{mukai}. Since $i \\neq 90^{\\circ}$, one hemisphere of the WD is preferentially tilted toward Earth, and we refer to the magnetic pole in that hemisphere as the upper pole. The lower pole is the magnetic pole in the hemisphere which is less favorably viewed from Earth. In isolation, our observations do not unambiguously distinguish between these two poles, but since the midpoint of the spin minimum ({\\it i.e.}, $\\phi_{sp} = 0.0$) corresponds with the transit of the accretion region across the meridian of the WD \\citep[e.g.][]{staubert03}, we can estimate when the poles face the donor star. When $\\phi_{beat} \\sim 0.15$, the spin minimum coincides with the orbital eclipse, so one of the poles is approximately oriented towards the secondary at that beat phase. At $\\phi_{beat} \\sim 0.65$, the spin minimum occurs at an orbital phase of $\\sim$0.5, indicating that this pole is roughly facing the P3 region at that beat phase. But the question remains: Is this the upper pole, or the lower one?\n\n\\citet{mukai} relied upon X-ray observations of eclipse ingresses and egresses to differentiate between the upper and lower poles (see their Figure~15 and the accompanying text). While the accretion spots have not been identified in optical photometry, they are the system's dominant X-ray sources, so they produce steep, rapid X-ray eclipses \\citep{mukai}. 
The authors took advantage of the fact that since $P_{sp} > P_{orb}$, the accretion spots will increasingly lag behind the orbital motion of the donor star with each subsequent orbit. Consequently, the orientation of the accretion regions with respect to the donor star will continuously change across the beat cycle. When viewed throughout the beat period at the phase of eclipse, the accretion spots appear to slowly move across the face of the WD, thereby causing detectable changes in the times of X-ray ingress and egress. \n\nCritically, at some point during the beat cycle, each accretion region will have rotated out of view at the phase of eclipse, resulting in a jump in either the ingress or egress timings, depending on which pole has disappeared. The \\citet{mukai} model predicts that when the upper pole is aimed in the general direction of P4, the X-ray egresses will undergo a shift to later phases as the upper polecap rotates behind the left limb of the WD as seen at egress (see their Figure~15). Likewise, the disappearance of the lower pole behind the left limb at the phase of ingress results in a shift toward later phases in the ingress timings. Based on data in Table~5 of \\citet{mukai}, the egress jump occurs near $\\phi_{beat}\\sim0.9$, so at that beat phase, the upper pole should be pointed toward the P3-P4 region. The egress jump is more distinct than the ingress jump, so we base our identification of the poles on the egress jump only.\n\nOur identification of the upper and lower poles is an inference and should not be viewed as a definitive claim. For our method to be reliable, it would be necessary for the accretion geometry to repeat itself almost perfectly in both 1998 (when \\citet{mukai} observed) and the 28-month span from 2012-2014 when we observed V1432 Aql. 
Even though the accretion geometry does seem to repeat itself on a timescale of two decades (see, {\it e.g.}, Section~\ref{spin}), this may not always be the case, as is evidenced by an apparent discontinuity in the timings of the spin minima in 2002 \citep{boyd}. If the accretion rate during our observations was different from what it was in 1998, there would be changes in the location and size of the X-ray-emitting accretion regions \citep{mukai88}. Moreover, \citet{mukai} cautioned that their model was a simplification because the accretion geometry was poorly constrained. For example, they noted that their model did not account for the offset between the accretion region and the corresponding magnetic pole.\n\nIf the upper pole is aimed towards P3-P4 near $\phi_{beat}\sim0.9$, then the upper pole would face the donor at $\phi_{beat} \sim 0.65$ since the WD appears to rotate clockwise as seen from the donor. Thus, the lower pole is likely pointed in the general direction of the donor star near $\phi_{beat} \sim 0.15$. We provide a sketch of the system in Figure~\ref{diagram}, which shows the inferred positions of the polecaps throughout the beat cycle. \n\n\n\subsubsection{Application of Model}\label{application_of_model}\n\n\n\begin{figure}\n\n\t\n\t\includegraphics[width=0.45\textwidth]{krizmanich-eclipse1}\n\t\includegraphics[width=0.45\textwidth]{krizmanich-eclipse2}\n\t\n\caption{Two eclipses observed on consecutive nights with the 80-cm Krizmanich Telescope. Note the different vertical scale for the two panels. The vertical dashed lines indicate the expected phases of the WD's ingress and egress. On the first night (Panel A), the eclipse is very deep and begins with the WD's disappearance, but on the second night (Panel B), the eclipse starts before the occultation of the WD. 
These light curves are consistent with the appearance of a new threading region near P3-P4 in our model, indicating that this process requires less than 24 hours to take place.}\n\label{shift}\n\end{figure}\n\nEven though the four P points were arbitrarily selected, the results of the eclipse-timing program provide testable predictions concerning the O$-$C variations. In our model, the emission from the accretion curtain and the threading region results in a moving centroid which is responsible for an O$-$C shift with a half-amplitude of about 120 seconds (see Fig.~\ref{O-C}). When the centroid of emission is in the $+y$ region in Figure~\ref{model}, the O$-$C is positive, and when it is in the $-y$ half of the plot, the O$-$C is negative. According to calculations using the model, eclipses of point sources at P1, P2, P3, and P4 would result in O$-$C values of 289 seconds, 204 seconds, 0 seconds, and $-$533 seconds, respectively. As for the midpoints between each of those four points and the WD, the O$-$C values would be 122 seconds, 103 seconds, 0 seconds, and $-$289 seconds for the P1, P2, P3, and P4 midpoints, respectively. The O$-$C values for the midpoints have a negligible dependence on the height above the orbital plane (provided that the secondary can still eclipse that point). Since the actual O$-$C variation does not exceed $\pm$120 seconds, it is clear that the actual O$-$C timings are inconsistent with a centroid near P1, P2, or P4. However, centroids near the midpoints for P1, P2, and P3 would be consistent with the observed O$-$C timings.\n\nIt makes sense that the centroid of the emission region would have a less dramatic O$-$C value than the candidate threading points. Because we expect that the magnetically-channeled part of the stream travels from the threading region to the WD, the light from this accretion curtain would shift the projected centroid of emission towards the WD. 
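The sign convention noted above (positive O$-$C for emission in the $+y$ half-plane) admits a simple geometric estimate. The sketch below is our own illustration, not the eclipse-prediction program used in this paper: it approximates the timing offset of a point source as the fraction of an orbit needed for the donor's shadow to sweep from the WD to the point, using the angle of the point as seen from the donor.

```python
import math

# Geometric estimate of the eclipse-timing offset (O-C) of a point
# source at (x, y) in the corotating frame: WD at the origin, donor
# at (a, 0).  This is an illustrative approximation, not the
# eclipse-prediction program described in the text.
P_ORB = 3.365664 * 3600.0   # orbital period [s] (measured)
A_SEP = 8.4e10              # binary separation [cm] (from the text)

def oc_offset(x, y, a=A_SEP, p_orb=P_ORB):
    """Approximate O-C [s] of a point at (x, y) relative to the WD.

    The donor's shadow sweeps around the binary once per orbit, so a
    point displaced by angle beta (as seen from the donor) from the
    donor-WD line is eclipsed roughly beta/(2*pi) of an orbit offset
    from the WD itself.  The sign convention (positive O-C for y > 0)
    is chosen to match the figure; the true sign depends on the sense
    of the orbital motion.
    """
    beta = math.atan2(y, a - x)     # angle away from the donor-WD line
    return beta / (2.0 * math.pi) * p_orb
```

For a point displaced by a tenth of the binary separation in $y$, this estimate yields an offset of order 200 seconds, comparable to the model values quoted above.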
In addition, since the threading region likely subtends a wide azimuthal range, the ability of the projected centroid to deviate dramatically from the WD's position would be limited. With these considerations in mind, the consistency of the theoretical O$-$C values for the P1, P2, and P3 midpoints with the observed O$-$C variations indicates that our model offers a plausible explanation of the O$-$C timings.\n\nThe sudden jump to early eclipses near $\\phi_{beat} \\sim 0.5$ occurs when the inferred orientation of the lower pole is toward the general direction of P3-P4. We surmise that the increased magnetic pressure on that part of the stream is able to balance the decreasing ram pressure, resulting in a luminous threading region. Since the P3-P4 vicinity is in the $-y$ half of Figure~\\ref{model}, an emission region there would result in an early ingress. In all likelihood, the centroid of that threading region does not approach P4 or its midpoint because the theoretical O$-$C values do not agree with the observed values. However, a centroid closer to P3 would result in a less-early eclipse which would be more consistent with the observations.\n\nAs the WD slowly rotates clockwise in Figure~\\ref{model}, the corresponding changes in the magnetic pressure along the stream's ballistic trajectory would move the position of the threading region within the binary rest frame, and the eclipses would gradually shift to later phases. Half a beat cycle after the $\\phi_{beat} \\sim 0.5$ jump in O$-$C timings, the lower pole would be oriented in the general direction of P2 and the upper pole towards P4. As the upper pole's magnetic pressure increases on the stream in the P3-P4 vicinity, a new threading region would form there, producing the O$-$C jump observed near $\\phi_{beat} \\sim 0.0$. 
In short, our model predicts the two distinct O$-$C jumps and explains why they are from late eclipses to earlier eclipses.\n\nOur observations provide circumstantial evidence of the brief, simultaneous presence of two separate emission regions as the system undergoes its O$-$C jump near $\\phi_{beat} \\sim 0.5$ during one beat cycle in July 2014. On JD 2456842, less than one day before the O$-$C jump, the time of minimum eclipse flux had an O$-$C of $\\sim$140 seconds, but on the very next night, there were two distinct minima within the same eclipse. Separated by a prominent increase in brightness, one minimum had an O$-$C of $-80$ seconds, while the other had an O$-$C of 240 seconds, consistent with the presence of discrete emission regions in the $-y$ and $+y$ halves of the plot in Figure~\\ref{model}. Moreover, assuming a WD eclipse duration of 700 seconds \\citep{mukai} centered upon orbital phase 0.0, the optical eclipse on the first night commenced when the donor occulted the WD, implying a lack of emission in the $-y$ region. However, the egress of that eclipse continued well after the reappearance of the WD, as one would expect if there were considerable emission in the $+y$ area. Indeed, a centroid of emission near the P1 midpoint would account for the observed O$-$C value. On the ensuing night, by contrast, the eclipse began before the disappearance of the WD, and ended almost exactly when the WD reappeared. The implication of these two light curves is that within a 24-hour span between $\\phi_{beat} \\sim 0.47-0.48$, the locations of the emission regions changed dramatically. Figure~\\ref{shift} shows these light curves and indicates in both of them the times of anticipated WD ingress and egress. 
Further observations are necessary to determine whether this behavior recurs during each beat cycle.\n\n\begin{figure*}\n\n\t\includegraphics[width=0.5\textwidth]{flux-power}\n\t\includegraphics[width=0.5\textwidth]{flux-waveform}\n\t\n\caption{The power spectrum of the residual flux and a phase plot showing the waveform of the signal at the beat period. Spanning 11.8 beat cycles, these plots use only the observations made with the 28-cm Notre Dame telescope. The double-wave sinusoid in the phase plot is meant to assist with visualizing the data and does not represent an actual theoretical model of the system. }\n\label{eclipses}\n\end{figure*}\n\n\subsubsection{Implications of Findings}\n\nOur hypothesis that the threading radius is variable has ramifications for previous works. In particular, \citet{gs97} and \citet{staubert03} used the timing residuals of the spin minima to track the accretion spot as it traced an ellipse around one of the magnetic poles. One of their assumptions was that the threading radius is constant, but this is inconsistent with the conclusions we infer from our observations and model of the system. A variable threading radius would change the size and shape of the path of the accretion spot \citep{mukai88}---and therefore the waveform of the spin minima timings used in those studies to constrain the accretion geometry.\n\nAdditionally, the agreement between the model and our observations provides compelling evidence which substantiates previous claims (see Section~\ref{intro}) that the accretion stream in V1432 Aql is able to travel around the WD, as is also observed in the other asynchronous polars. The inefficient threading in asynchronous systems could be indicative of a relatively weak magnetic field or a high mass-transfer rate. 
For example, \\citet{schwarz} found that if the accretion rate in the asynchronous polar BY Cam were 10-20 times higher than normal accretion rates in polars, the stream could punch deeply enough into the WD's magnetosphere to reproduce the observed azimuthal extent of the accretion curtain. Although it is at least conceivable that the asynchronism itself causes the inefficient threading, it is not immediately apparent why this would be so when $P_{sp}$ and $P_{orb}$ are so close to each other.\n\nRegarding the possibility of a high mass-transfer rate, previous works \\citep[e.g.,][]{kps88} have proposed that irradiation by a nova can temporarily induce an elevated mass-transfer rate which persists for many decades after the eruption has ended. In line with this theory, \\citet{bklyn} proposed that CVs with consistently elevated mass-transfer rates---specifically, nova-like and ER UMa systems---exist fleetingly while the donor star cools after having been extensively irradiated by a nova. If all asynchronous polars are recent novae, as is commonly believed, this theory would predict that the same nova which desynchronizes the system also triggers a sustained, heightened mass-transfer rate as a result of irradiation. The increased ram pressure of the accretion stream would enable it to penetrate deeply into the WD's magnetosphere, thereby offering a plausible explanation as to why all four confirmed asynchronous polars show strong observational evidence of inefficient threading. However, this would not resolve the problem of the short nova-recurrence time in polars \\citep[][ discussed in Section~\\ref{intro}]{warner02}.\n\n\\subsection{Variations in the Residual Eclipse Flux}\n\n\\subsubsection{Periodicity} \\label{flux-periodicity}\n\nThe WD is invisible during eclipse, leaving two possible causes for the variation in residual eclipse flux: the donor star and the accretion stream. 
The magnetic field lines of the WD can carry captured material above the orbital plane of the system, so depending on projection effects, some of the accretion flow could remain visible throughout the WD's eclipse. Therefore, as the accretion flow threads onto different magnetic field lines throughout the beat period, the resulting variations in the accretion flow's trajectory could cause the residual eclipse flux to vary as a function of $\\phi_{beat}$. \n\nAfter we calculated the beat cycle count ($C_{beat}$) for each eclipse observation, we generated a power spectrum using the ANOVA method with $C_{beat}$ as the abscissa and the minimum magnitude as the ordinate. For this particular periodogram, we used only the 71 eclipses observed with the 28-cm Notre Dame telescope due to the difficulty of combining unfiltered data obtained with different equipment. The strongest signal in the resulting power spectrum has a frequency of $0.998 \\pm0.012$ cycles per beat period. Figure~\\ref{eclipses} shows both the periodogram and the corresponding phase plot, with two unequal maxima per beat cycle.\n\nWhile a double-wave sinusoid provides an excellent overall fit to the residual-flux variations, the observed mid-eclipse magnitude deviated strongly from the double sinusoid near $\\phi_{beat} \\sim 0.47$ in at least three beat cycles.\\footnote{While there are sporadic departures from the double-sinusoid, none is as dramatic as the behavior near $\\phi_{beat} \\sim 0.47$ or shows evidence of persistence across multiple beat cycles.} Two eclipses observed on consecutive nights in high-cadence photometry with the 80-cm Krizmanich telescope provide the best example of this variation. On JD 2456842, the system plummeted to $V\\sim17.8$ during an eclipse ($\\phi_{beat} = 0.469$) near the expected time of maximum residual flux. 
But just 24 hours later, the mid-eclipse magnitude had surged to $V\\sim16.2$ ($\\phi_{beat} = 0.485$), which was the approximate brightness predicted by the double-sinusoid fit. Furthermore, the eclipse light curve from the second night exhibited intricate structure which had not been present during the previous night's eclipse. These light curves were shown in Figure~\\ref{shift}. Comparably deep eclipses near $\\phi_{beat}\\sim0.47$ were observed during two additional beat cycles (one in 2013 and another in 2014), so there is at least some evidence that the residual flux might be consistently lower near this beat phase. Unfortunately, gaps in our data coverage make it impossible to ascertain whether the mid-eclipse magnitude always fluctuates near $\\phi_{beat} \\sim 0.47$, so confirmation of this enigmatic variation is necessary.\n\n\\subsubsection{Application of Model}\n\nWe propose that the overall variation in mid-eclipse flux is the signature of an accretion curtain whose vertical extent varies as a function of the threading radius. When the threading region is farther from the WD, the stream can couple onto magnetic field lines which achieve such a high altitude above the orbital plane that the donor star cannot fully eclipse them. By contrast, when the threading region is closer to the WD, the corresponding magnetic field lines are more compact, producing a smaller accretion curtain which the donor occults more fully. The schematic diagram in Figure~\\ref{flux-diagram} offers a visualization of this scenario.\n\nWhile it is conceivable that the residual flux variation is caused by material within the orbital plane, the available evidence disfavors this possibility. In particular, \\citet{ss01} saw no diminution in the strength of high-excitation UV emission lines during an eclipse with considerable residual flux at $\\phi_{beat} = 0.58$. If these emission lines originated within the orbital plane, they would have faded during the eclipse. 
Furthermore, if the source of the residual flux were in the orbital plane, the eclipse width would likely correlate with the mid-eclipse magnitude. The eclipses with high levels of residual flux would be long, while the deeper eclipses would be short. We do not see this pattern in our data, and Figure~12 in \\citet{boyd} does not show such a correlation, either.\n\n\\begin{figure}\n\n\t\\centerline{\\includegraphics[width=0.45\\textwidth]{flux-sketch-1}}\n\t\\par\n\t\\centerline{\\includegraphics[width=0.45\\textwidth]{flux-sketch-2}}\t\n\n\\caption{Two schematic diagrams providing a simplified illustration of our explanation for the residual flux variations at mid-eclipse. In both panels, the captured material travels in both directions along an illustrative magnetic field line. The secondary is the gray sphere eclipsing the WD, and the threading point is shown as a large $+$. The inclination of the magnetic axis with respect to the rotational axis was arbitrarily chosen as 30$^{\\circ}$. The portion of the magnetic stream which travels upward and which is visible at mideclipse is highlighted. The threading point in Panel A is near P4, and its threading radius is 3.6 times larger than that of the threading point in Panel B, when the threading point is near the stream's closest approach to the WD.}\n\n\\label{flux-diagram}\n\\end{figure}\n\nOur model from Section~\\ref{description_of_model} predicts that the threading radius will vary by a factor of $\\sim3.6$ between P4 and the stream's point of closest approach to the WD. (We reiterate that since these points are meant to be illustrative, this is not necessarily the actual variation in the threading radius.) The upshot is that at P3, threading would take place significantly deeper in the WD's magnetosphere than it would at P4. 
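The link between threading radius and curtain height follows from standard dipole field-line geometry: a field line crossing the magnetic equatorial plane at radius $r_t$ satisfies $r = r_t\cos^{2}\lambda$ in magnetic latitude $\lambda$. The sketch below, which assumes for simplicity that the field line lies in the meridional plane containing both the magnetic and spin axes, and which adopts the same arbitrary 30$^{\circ}$ tilt used in our schematic figure, finds the maximum height such a line reaches above the orbital plane.

```python
import math

# Maximum height above the orbital plane reached by a dipole field line
# that crosses the magnetic equatorial plane at radius r_t.  The field
# line is assumed to lie in the meridional plane containing both the
# magnetic and spin axes -- an illustrative simplification.
TILT = math.radians(30.0)   # magnetic-axis tilt (arbitrary, as in the figure)

def max_curtain_height(r_t, tilt=TILT, n=2000):
    """Scan magnetic latitude along the field line r = r_t cos^2(lambda)."""
    z_max = 0.0
    for i in range(n + 1):
        lam = -0.5 * math.pi + math.pi * i / n   # magnetic latitude
        r = r_t * math.cos(lam) ** 2
        x_m = r * math.cos(lam)    # meridional-plane coordinates
        z_m = r * math.sin(lam)    # in the magnetic frame
        # rotate by the tilt to get height above the orbital plane
        z = z_m * math.cos(tilt) + x_m * math.sin(tilt)
        z_max = max(z_max, abs(z))
    return z_max
```

Because the geometry is self-similar, the maximum height scales linearly with $r_t$: a threading radius 3.6 times larger implies a curtain reaching 3.6 times higher, which is the essence of the residual-flux argument above.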
Moreover, since the predicted threading radius would be largest near an O$-$C jump, this hypothesis predicts that the amount of residual flux would be greatest near those jumps and lowest between them, as is observed in a comparison of Figures~\\ref{timing}~and~\\ref{eclipses}. In the case of a magnetic stream originating from a threading region between P2-P4, the midpoint of the stream would be visible if it achieves a minimum altitude of $z \\sim 0.08a$ above the orbital plane, where $a$ is the binary separation. At P4, this is only one-quarter the predicted threading radius, but at P2 and P3, this is three-quarters of the predicted threading radius. \n\nThis hypothesis also explains why some spectra of V1432 Aql during mid-eclipse show intense emission lines \\citep[e.g.][]{watson, ss01}, while others show only weak emission \\citep[e.g.][]{patterson}. For each of these previously published spectroscopic observations, we calculated $\\phi_{beat}$ and found that the ones showing strong emission lines were obtained when the predicted residual flux was near one of its maxima in Figure~\\ref{eclipses}. By contrast, the spectra containing weak emission were obtained when the expected residual flux was approaching one of its minima. If our hypothesis is correct, then the variation in the emission lines is simply the result of the changing visibility of the accretion curtain during eclipse. \\citet{watson} suggested a somewhat related scenario to account for the presence of emission lines throughout the eclipse, but they disfavored this possibility largely because of the apparent residual flux at X-ray wavelengths. (As mentioned previously, \\citet{mukai} later demonstrated that the residual X-ray flux was contamination from a nearby galaxy.)\n\nAn excellent way to test our theory would be to obtain Doppler tomograms near the times of maximum and minimum residual eclipse flux. 
\\citet{schwarz} showed that this technique is capable of revealing the azimuthal extent of the accretion curtain in BY Cam, and it would likely prove to be equally effective with V1432 Aql.\n\nWe do not have enough data to consider why the residual flux can vary by as much as $\\sim$1.5 mag in one day near the expected time of maximum residual flux. Knowing whether the residual flux is always low near $\\phi_{beat} = 0.47$ would be a necessary first step in this analysis.\n\n\\subsection{The Dependence of the Spin Modulation on Beat Phase}\\label{spin}\n\nAs the WD slowly spins with respect to the secondary, the accretion stream will couple to different magnetic field lines, meaning that the spin modulation will gradually change throughout the beat cycle. To explore this variation, we constructed non-overlapping, binned phase plots of the spin modulation in ten equal segments of the beat cycle ({\\it e.g.}, between $0.00 < \\phi_{beat} < 0.10$). As with the residual-eclipse-flux measurements, we used only the data obtained with the Notre Dame 28-cm telescope in order to avoid errors stemming from the different unfiltered spectral responses of multiple telescope-CCD combinations. In an effort to prevent eclipse observations from contaminating the spin modulation, we excluded all observations obtained between orbital phases 0.94 and 1.06. We then calculated the beat phase for all remaining observations and used only those observations which fell into the desired segment of the beat cycle. We used a bin width of 0.01$P_{sp}$, and we did not calculate bins if they consisted of fewer than five individual observations.\n\nFigure~\\ref{spin-waveform} shows these ten phase plots, and several features are particularly striking. For example, the spin minimum near spin phase 0.0 is highly variable. Conspicuous between $0.5 < \\phi_{beat} < 1.0$, it becomes feeble and ill-defined for most of the other half of the beat cycle. 
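The construction of these binned phase plots can be sketched in a few lines. The phase and magnitude arrays below are placeholders for the Notre Dame photometry; the cuts (orbital phases 0.94-1.06 excluded, bins of 0.01 spin cycles, at least five points per bin) follow the procedure described above.

```python
# Sketch of the binned spin-phase-plot construction described above.
# The phase arrays stand in for phases computed from the ephemerides;
# the magnitudes stand in for the 28-cm Notre Dame photometry.
def binned_spin_curve(phi_orb, phi_sp, phi_beat, mags,
                      beat_lo, beat_hi, bin_width=0.01, min_pts=5):
    """Mean magnitude vs. spin phase for one slice of the beat cycle."""
    n_bins = int(round(1.0 / bin_width))
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for po, ps, pb, m in zip(phi_orb, phi_sp, phi_beat, mags):
        if po > 0.94 or po < 0.06:          # exclude eclipse observations
            continue
        if not (beat_lo <= pb < beat_hi):   # keep the requested beat slice
            continue
        b = int(ps / bin_width) % n_bins
        sums[b] += m
        counts[b] += 1
    # bins with fewer than min_pts observations are left empty (None)
    return [s / c if c >= min_pts else None
            for s, c in zip(sums, counts)]
```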
Sometimes, the spin minimum is quite smooth and symmetric, as it is between $0.7 < \\phi_{beat} < 0.8$, but it is highly asymmetric in other parts of the beat cycle, such as $0.5 < \\phi_{beat} < 0.6$. Additionally, there is a striking difference between the phase plots immediately before and after the O$-$C jump near $\\phi_{beat} \\sim 0.5$, as one would expect if the O$-$C jump marks a drastic change in the accretion geometry.\n\nThere is also a stable photometric maximum near spin phase $\\sim0.6$ which is visible for most of the beat cycle, though its strength is quite variable. We refer to this feature as the primary spin maximum, but it is not as prominent as the spin minimum. Its behavior is unremarkable.\n\n\\begin{figure*}\n\t\\centering\n \t\\begin{tabular}{cc}\n \\includegraphics[width=.5\\textwidth]{spin-phase-05.eps} &\n \\includegraphics[width=.5\\textwidth]{spin-phase-55.eps} \\\\\n \\includegraphics[width=.5\\textwidth]{spin-phase-15.eps} &\n \\includegraphics[width=.5\\textwidth]{spin-phase-65.eps} \\\\\n \\includegraphics[width=.5\\textwidth]{spin-phase-25.eps} &\n \\includegraphics[width=.5\\textwidth]{spin-phase-75.eps} \\\\\n \\includegraphics[width=.5\\textwidth]{spin-phase-35.eps} &\n \\includegraphics[width=.5\\textwidth]{spin-phase-85.eps} \\\\\n \\includegraphics[width=.5\\textwidth]{spin-phase-45.eps} &\n \\includegraphics[width=.5\\textwidth]{spin-phase-95.eps} \\\\\n \\end{tabular}\n \\caption{Binned phase plots of the spin modulation at different beat phases, with each bin representing 0.01 spin cycles. Gaps in the light curves are due to eclipses. The second spin maximum ($\\phi_{sp}\\sim0.4$) is strongest in panel C.}\n\\label{spin-waveform}\n\\end{figure*}\n\nInterestingly, there is another, much stronger photometric maximum at $\\phi_{sp} \\sim 0.4$ which is visible only between $0.0 < \\phi_{beat} < 0.5$. Since this feature shares the WD's spin period, we refer to it as the second spin maximum. 
The second spin maximum can be exceptionally prominent in photometry, attaining a peak brightness of $V \sim 14.1$ in several of our light curves---which is the brightest that we have observed V1432 Aql to be. When visible, the second spin maximum precedes the primary spin maximum by $\sim 0.2$ phase units. It begins to emerge near $\phi_{beat} \sim 0.0$ and gradually strengthens until it peaks between $0.2 < \phi_{beat} < 0.3$. It then weakens considerably as $\phi_{beat}$ approaches 0.5, and after the O$-$C jump near $\phi_{beat} \sim 0.5$, the second spin maximum is replaced by a dip in the light curve.\n\nAlthough the second spin maximum consistently appears between $0.0 < \phi_{beat} < 0.5$, it vanished in a matter of hours on JD 2456842 ($\phi_{beat} \sim 0.47$), only to reappear the next night. On the first night, our observations covered two spin cycles, and while the second spin maximum was obvious in the first cycle, it had disappeared by the second. Just 24 hours later, it was again visible in two successive spin cycles. This unexpected behavior coincides with the approximate beat phase at which we would expect the dominant threading region to shift to the P3-P4 region in our model. Nevertheless, our lack of observations near this beat phase precludes a more rigorous examination of this particular variation.\n\nThe second spin maximum is very apparent in some previously published light curves of V1432 Aql from as far back as two decades ago. For example, \citet{watson} presented light curves of V1432 Aql obtained in 1993 which showcase the gradual growth of the second spin maximum (see Panels B-G of their Figure~2). Using our method of determining the beat phase, we extrapolate a beat phase of 0.96 for the light curve shown in their Panel B and a beat phase of 0.12 for the light curve in their Panel G. 
The increasing strength of the second spin maximum in their light curves agrees with the behavior that we observed at those beat phases (see our Figure~\\ref{spin-waveform}). Likewise, Figure~1 in \\citet{patterson} shows the second spin maximum at the expected beat phases. These considerations suggest that the second spin maximum is a stable, recurring feature in optical photometry of V1432 Aql.\n\nThe overall predictability of the second spin maximum does not answer the more fundamental question of what causes it. One possibility is that it is the result of an elevated accretion rate on one pole for half of the beat cycle. The apparent gap between the two spin maxima, therefore, might simply be the consequence of an absorption dip superimposed on the photometric maximum or a cyclotron beaming effect, splitting the spin maximum into two.\n\nA more interesting scenario is that the second spin maximum could be the optical counterpart to the possible third polecap detected by \\citet{rana} in X-ray and polarimetric data. In that study, \\citet{rana} detected three distinct maxima in X-ray light curves as well as negative circular polarization at spin phase 0.45, which is the approximate spin phase of the second spin maximum in optical photometry. They also measured positive circular polarization at spin phases 0.1 and 0.7, which correspond with the spin minimum and the primary spin maximum, respectively. Quite fortuitously, the authors obtained their polarimetric observations within several days of the photometric detection of the second spin maximum by \\citet{patterson}. Thus, it is reasonable to conclude that the circular polarization feature near spin phase 0.45 is related to the second spin maximum, consistent with a third accreting polecap. \n\nThe conclusions of \\citet{rana}, coupled with our identification of a second spin maximum, suggest that V1432 Aql might have at least three accreting polecaps---and therefore, a complex magnetic field. 
However, the available evidence is inconclusive, and follow-up polarimetry across the beat cycle could clarify the ambiguity concerning the WD's magnetic field structure.\n\n\\section{Conclusion}\n\nWe have presented the results of a two-year photometric study of V1432 Aql's beat cycle. We have confirmed and analyzed the eclipse O$-$C variations first reported by \\citet{gs99}, and we found that the residual mid-eclipse flux is modulated at the system's beat period. We interpret these variations as evidence that the threading region's location within the binary rest frame varies appreciably as a function of beat phase. Doppler tomography of the system at different beat phases could reveal any changes in the azimuthal extent of the accretion curtain, thereby providing a direct observational test of our model of the system.\n\nOur observations provide circumstantial evidence that the mid-eclipse magnitude undergoes high-amplitude variations on a timescale of less than a day near $\\phi_{beat} \\sim0.47$, deviating strongly from the expected brightness at that beat phase. In the most remarkable example of this variation, the mid-eclipse magnitude varied by $\\sim$1.5 mag in two eclipses observed just 24 hours apart. Whereas the first eclipse was deep and smooth, the second eclipse was shallow and W-shaped, with two distinct minima. Similar variations in residual flux were observed in two other beat cycles, providing at least some evidence that this behavior might be recurrent. Still, additional photometric observations are necessary to confirm the $\\phi_{beat}\\sim0.47$ fluctuations in mid-eclipse magnitude. Amateur astronomers are ideally suited to undertake such an investigation, especially when one considers that our residual-flux analysis utilized a small telescope and commercially available CCD camera. 
Moreover, observers with larger telescopes could also obtain relatively high-cadence photometry to study whether double-minima eclipses consistently appear near this beat phase.\n\nIn addition, we report a second photometric spin maximum which appears for only about half of the beat cycle. This phenomenon might be evidence of a complex magnetic field, but a careful polarimetric study of the beat cycle would be necessary to investigate this possibility in additional detail.\n\nWe also offer updated ephemerides of the orbital and spin periods (see Sec.~\\ref{ephem}), as well as a Python script which calculates V1432 Aql's beat phase at a given time and which also predicts when the system will reach a user-specified beat phase. An exponential spin ephemeris models the data as well as a polynomial ephemeris and is consistent with an asymptotic approach of the spin period toward the orbital period. According to the exponential ephemeris, the rate of change of the spin period is proportional to the level of asynchronism in the system; consequently, if the exponential ephemeris were to remain valid indefinitely, the resynchronization process in V1432 Aql would take considerably longer than previous estimates.\n\nFinally, while a comprehensive theoretical model of V1432 Aql is beyond the scope of this paper, such an analysis could refine our description of the system and shed additional light on V1432 Aql's unusual threading mechanisms.\n\n\n\\section*{Acknowledgments}\n\nWe thank Peter Garnavich and Joe Patterson for their helpful comments, as well as the anonymous referee, whose suggestions greatly improved the paper.\n\nThis study made use of observations in the AAVSO International Database, which consists of variable star observations contributed by a worldwide network of observers.\n\nThe Sarah L. Krizmanich Telescope was generously donated to the University of Notre Dame in memory of its namesake by the Krizmanich family. 
This is the first publication to make use of data obtained with this instrument.\n\nDB, MC, and JU participate in the Center for Backyard Astrophysics collaboration, which utilizes a global team of professional and amateur astronomers to study cataclysmic variable stars.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStatistical models with short range interactions on two-dimensional (2D) regular lattices exhibit no spontaneous symmetry \nbreaking at finite temperature, if the symmetry in local degrees of freedom is continuous~\\cite{Mermin_Wagner}. \nThe classical ferromagnetic XY model \nis a typical example, which has $O(2)$ symmetry, where the thermal average of the magnetization is zero at finite temperature. \nThe introduction of a discrete nature to the local degrees of freedom then induces an order-disorder transition at low temperature,\nwhere the universality class is dependent on the type of discretization. \nThe $q$-state clock model, which has $Z_q^{~}$ symmetry, is a well-known discrete analogue of the XY model. \nFor the case of $q \\le 4$, the clock model exhibits a second-order phase transition described by the unitary minimal series of conformal field theory (CFT). \nIf $q > 4 $, the clock model has an intermediate critical phase between the high-temperature disordered phase and low-temperature ordered phase~\\cite{Elitzur, Nomura, Ortiz, Kumano}, where transitions to the critical phase are of Berezinskii-Kosterlitz-Thouless (BKT) type~\\cite{B1, B2, KT}. \nAs $q$ increases, the low-temperature ordered phase shrinks, and the $O(2)$ symmetry is finally recovered \nin the limit $q \\rightarrow \\infty$.\n\nDiscretization of the classical Heisenberg model, which has $O(3)$ symmetry, \nis not straightforward, in the sense that there is no established route for \ntaking the continuous-symmetry limit. 
A possible manner of discretization is to introduce the polyhedral anisotropies, such as tetrahedral, cubic, \noctahedral, icosahedral, and dodecahedral ones, which correspond to the discrete subgroups of the $O(3)$ symmetry group. \nLet us consider the discrete vector-spin models, where on each lattice site there is a unit \nvector spin that can point to vertices of a polyhedron. The tetrahedron model \ncan be mapped to the four-state Potts model~\\cite{wu}. For the octahedron model, the presence of a weak first-order phase transition was \nsuggested by Patrascioiu and Seiler~\\cite{Patrascioiu}, and was afterward confirmed numerically~\\cite{Krcmar}. The cube model can be mapped to \nthree decoupled Ising models. \nPatrascioiu {\\it et al} reported second-order transitions for the icosahedron \nand dodecahedron models, which have 12 and 20 local degrees of freedom, respectively~\\cite{Patrascioiu, Patrascioiu2, Patrascioiu3}. \nFor the icosahedron model, the estimated transition temperature is $1 \/ T_{\\rm c}^{~} = 1.802\\pm0.001$ and its critical indices are $\\nu \\sim 1.7$ and $\\gamma \\sim 3.0$, which are inconsistent with the minimal series of CFT. \nBy contrast, Surungan {\\it et al} gave another estimate, $\\nu \\simeq 1.31$, for the same transition temperature~\\cite{Surungan}.\nHowever, the system sizes of the Monte Carlo simulations in previous works may be too small to settle the universality class of the icosahedron model.\nFinally, a possibility of an intermediate phase is suggested for the dodecahedron model in Refs.~
[\\onlinecite{Patrascioiu2}] and [\\onlinecite{Patrascioiu3}], whereas a single second-order transition is suggested in Ref.~[\\onlinecite{Surungan}].\n\n\nIn this article, we focus on the critical behavior of the icosahedron model.\nWe calculate the magnetization, effective correlation length, and entanglement entropy in the bulk limit by means of the corner-transfer-matrix renormalization group (CTMRG) method~\\cite{ctmrg1, ctmrg2}, which is based on Baxter's corner-transfer matrix (CTM) scheme~\\cite{Baxter1, Baxter2, Baxter3}. \nAn advantage of the CTMRG method is that we can treat sufficiently large system sizes to obtain the conventional bulk physical quantities. \nActually, the system size of the CTM in this work is up to $10^4 \\times 10^4$ sites, which can be viewed as a bulk limit in comparison with the (effective) correlation length of the system.\nHowever, CTMRG results are strongly dependent on $m$, the number of states kept for the block-spin variables, near the transition point. \nNevertheless, this $m$-dependence of CTMRG results provides a powerful tool for scaling analysis with respect to $m$~\\cite{fes1, tagliacozzo, pollmann, pivru}, the formulation of which is similar to the conventional finite-size scaling analysis~\\cite{Fisher, Barber}. \nThe $m$-scaling analysis actually extracts the presence of the second-order phase transition with the critical exponents $\\nu = 1.62\\pm0.02$ and $\\beta = 0.12\\pm0.01$.\nAnother interesting point of the CTMRG approach is that the classical analogue of the entanglement entropy~\\cite{entent} can be straightforwardly calculated through a reduced density matrix constructed from CTMs.\nThe $m$-dependence analysis of the entanglement entropy also yields the central charge $c = 1.90\\pm0.02$, which cannot be explained by the minimal series of CFT.\n\nThis article is organized as follows. In the next section, we introduce the icosahedron model, and briefly explain its tensor-network representation and the CTMRG method. 
\nWe first show the temperature dependence of the magnetization to capture the nature of the phase transition. \nIn Section~III, we apply the finite-$m$ scaling to the effective correlation length, magnetization, and the entanglement entropy. \nTransition temperature, critical exponents, and the central charge are estimated in detail.\nThe results are summarized in the last section.\n\n\n\\section{Icosahedron model}\n\n\\begin{figure}\n\\includegraphics[width=8.5cm]{Fig_1.eps}\n\\caption{\n(a) Numbering of the vertices of the icosahedron. \n(b) Local Boltzmann weight in Eq.~(2) defined for a `black' plaquette, \nand its tensor representation. \n}\n\\label{Fig_1}\n\\end{figure}\n\nLet us consider the icosahedron model, which is a discrete analog of the classical Heisenberg model.\nOn each site of the square lattice, there is a vector spin ${\\bm v}^{(p)}_{~}\\!$ of unit length, which points to one of the vertices of the icosahedron, shown in Fig.~1 (a), where $p$ is the index of vertices running from 1 to 12.\nFigure 1 (b) shows four vector spins ${\\bm v}^{(p)}_{~}\\!$, ${\\bm v}^{(q)}_{~}\\!$, ${\\bm v}^{(r)}_{~}\\!$, \nand ${\\bm v}^{(s)}_{~}\\!$, around a `black' plaquette, where we have introduced the \nchess-board pattern on the lattice. We have omitted the lattice index of these\nspins, since they can be formally distinguished by $p$, $q$, $r$, and $s$, which represent the direction of the spins. \nNeighboring spins have Heisenberg-like interaction, which is represented by the inner product between them. \nThus, the local energy around the plaquette in Fig.~1 (b) is written as\n\\begin{eqnarray}\nh_{pqrs}^{~} = - J && \\left( \n{\\bm v}^{(p)}_{~} \\! \\cdot {\\bm v}^{(q)}_{~} + \n{\\bm v}^{(q)}_{~} \\! \\cdot {\\bm v}^{(r)}_{~} \\right. \\nonumber\\\\\n&& + \\left.\n{\\bm v}^{(r)}_{~} \\! \\cdot {\\bm v}^{(s)}_{~} + \n{\\bm v}^{(s)}_{~} \\! 
\\cdot {\\bm v}^{(p)}_{~}\n\\right) \\, .\n\\label{Eq_1}\n\\end{eqnarray}\nIn the following, we assume that the coupling constant is spatially uniform and ferromagnetic, $J > 0$. \n\n\\begin{figure}\n\\includegraphics[width=8cm]{Fig_2.eps}\n\\caption{Icosahedron model on the diagonal lattice, where $W$ on each `black' plaquette represents the local Boltzmann weight\nof Eq.~(2). The partition function can be represented by a tensor network on the square lattice. \nThe dashed lines show the division of the system into the quadrants corresponding to CTMs.}\n\\label{Fig_2}\n\\end{figure}\n\nWe represent the partition function of the system in the form of a vertex model, which can be \nregarded as a two-dimensional tensor network. For each `black' plaquette on the chess-board pattern introduced to the square \nlattice, we assign the local Boltzmann weight\n\\begin{equation} \nW_{pqrs}^{~} = \\exp\\biggl[ -\\frac{h_{pqrs}^{~}}{T} \\biggr] \\, ,\n\\label{Eq_2}\n\\end{equation}\nwhere $T$ denotes the temperature in the unit of the Boltzmann constant. \nNote that the vertex weight $W_{pqrs}^{~}$ is invariant under cyclic rotations of the indices. \nThroughout this article we choose $J$ as the unit of energy. As shown in Fig.~1 (b), the weight $W_{pqrs}^{~}$ is \nnaturally interpreted as a four-leg tensor, and thus the partition function can be represented as a \ncontraction among tensors, as schematically drawn on the right-side panel of Fig.~2. \n\nIn Baxter's CTM formulation, the whole lattice is divided into four quadrants~\\cite{Baxter1, Baxter2, Baxter3}, as shown in Fig.~2. The partition function of a square-shaped\nfinite-size lattice is expressed by a trace of the fourth power of CTMs\n\\begin{equation}\nZ = {\\rm Tr} \\, C^4_{~} \\, ,\n\\label{Eq_3}\n\\end{equation}\nwhere $C$ denotes the CTM.\nNote that each matrix element of $C$ corresponds to the partition function of the quadrant where the spin configurations along the row and column edges are specified. 
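As a concrete illustration, the vertex weight of Eq.~(2) can be assembled numerically. The sketch below is not the code used in this work; the vertex ordering, the value of $T$, and all names are illustrative.

```python
import numpy as np

# Sketch (not the paper's code): build the 12 icosahedron vertices and
# the four-leg Boltzmann-weight tensor W_{pqrs} of Eq. (2).
phi = (1 + np.sqrt(5)) / 2  # golden ratio

# Vertices: cyclic permutations of (0, +/-1, +/-phi), normalized to unit length.
verts = []
for a in (1.0, -1.0):
    for b in (phi, -phi):
        verts += [(0.0, a, b), (a, b, 0.0), (b, 0.0, a)]
V = np.array(verts) / np.sqrt(1 + phi**2)  # shape (12, 3)

def vertex_weight(T, J=1.0):
    """W[p,q,r,s] = exp(-h_{pqrs}/T) with h_{pqrs} of Eq. (1)."""
    dot = V @ V.T  # pairwise inner products v^(p) . v^(q)
    h = -J * (dot[:, :, None, None]        # v_p . v_q
              + dot[None, :, :, None]      # v_q . v_r
              + dot[None, None, :, :]      # v_r . v_s
              + dot.T[:, None, None, :])   # v_s . v_p
    return np.exp(-h / T)

W = vertex_weight(T=0.555)  # temperature near the reported transition
```

The cyclic-rotation invariance of $W_{pqrs}^{~}$ noted above holds because the four inner products are permuted cyclically together with the indices.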
\nWe numerically obtain $Z$ by means of the CTMRG method~\\cite{ctmrg1, ctmrg2}, \nwhere the area of the CTM is increased iteratively by repeating the system-size extension and renormalization group (RG) transformation. \nThen, the matrix dimension of $C$ is truncated with cutoff dimension $m$, and under an appropriate normalization, $C$ converges to its bulk limit after a sufficient number of iterations, even if we assume a fixed boundary condition. \nAll the numerical data shown in this article are obtained after such convergence. \nThe numerical precision of CTMRG results is controlled by the cutoff $m$ for the singular value spectrum $\\{\\lambda_i\\}$ of CTMs with a truncation error $\\epsilon(m) = 1-\\sum_{i=1}^m \\lambda_i^4$. \nThe universal distribution of the spectrum \\cite{OHA, cftdistribution} suggests that the asymptotic behavior of $\\epsilon(m)$ could be model independent.\n\n\\begin{figure}\n\\includegraphics[width=7.5cm]{Fig_3.eps}\n\\caption{(Color online) Temperature dependence of magnetization $M$ for several \n $m$. The inset: magnified view in the region $0.54 \\leq T \\leq 0.59$. }\n\\label{Fig_3}\n\\end{figure}\nIn practical computations, we assume the fixed boundary condition, where all the \nspins are pointing in the direction ${\\bm v}^{(1)}_{~}\\!$ on the boundary of the system.\nWe define an order parameter as the magnetization $M$ at the center of the system\n\\begin{equation}\nM = \\frac{1}{Z} \\, \\sum^{12}_{s = 1} \\, \\left( {\\bm v}^{(1)}_{~} \\! \\cdot {\\bm v}^{(s)}_{~} \\, {\\rm Tr}'_{~} \n\\bigl[ C^4_{~} \\bigr] \\right) \\, ,\n\\label{Eq_4}\n\\end{equation}\nwhere ${\\bm v}^{(s)}_{~}\\!$ is the vector spin at the center, and \n${\\rm Tr}'_{~}\\!$ represents the partial trace except for ${\\bm v}^{(s)}_{~}\\!$.\nFigure 3 shows the temperature dependence of the magnetization $M$ calculated with\n$m = 100$, $200$, $300$, $400$, and $500$. 
\nThe magnetization is well converged with respect to $m$ for $T < 0.55$ or $T > 0.57$, and the result supports the emergence of the ordered phase in the low-temperature\nregion, as reported by Patrascioiu {\\it et al}~\\cite{Patrascioiu, Patrascioiu2, Patrascioiu3}.\nAs shown in the inset, however, the curve of $M$ has the shoulder structure exhibiting the strong $m$ dependence in the region $0.55 4$ where the intermediate critical region emerges.\nUsing the Bayesian fitting, then, we obtain $\\beta = 0.1293(27)$ for $m=100 \\sim 500$ and $\\beta = 0.1234(33)$ for $m=200 \\sim 500$. \nTaking into account the discrepancy, we adopt $\\beta = 0.12\\pm0.01$. \nWe think, however, that this value should be refined by further extensive calculations.\n\n\n\nIn order to obtain additional information for the scaling universality, we calculate the classical analogue of the entanglement entropy. \nThe concept of entanglement can be introduced to two-dimensional statistical models through the quantum-classical correspondence~\\cite{fradkin, Trotter, Suzuki1, Suzuki2}. \nThen, an essential point is that the fourth power of the CTM, which appears in Eqs.~(3) and (4), can be interpreted as a density matrix of the corresponding one-dimensional quantum system~\\cite{HU2014}. \nFrom the normalized density matrix\n\\begin{equation}\n\\rho = \\frac{C^4_{~}}{Z} \\, ,\n\\label{Eq_9}\n\\end{equation}\nwe obtain the classical analogue of the entanglement entropy, in the form of the von Neumann entropy~\\cite{vnent1, vnent2}\n\\begin{equation}\nS_{\\rm E}^{~} = - {\\rm Tr} \\, \\rho \\ln \\, \\rho \\, .\n\\label{Eq_10}\n\\end{equation}\n\nIn the context of CTMRG, the following relation\n\\begin{equation}\nS_{\\rm E}^{~}( m, t ) \\sim \\frac{c}{6} \\, \\ln \\, \\xi( m, t ) + const.~,\n\\label{Eq_11}\n\\end{equation}\nis satisfied around the criticality~\\cite{Vidal, Calabrese}, where $c$ is the central charge. 
\nTaking the exponential of both sides of this equation, and substituting Eq.~(7), we obtain\n\\begin{eqnarray}\ne^{S_{\\rm E}^{~}}_{~} \\sim a \\Bigl[ \\xi( m, t ) \\Bigr]^{c\/6}_{~} \n&=& \\, a \\Bigl[ m^{\\kappa}_{~} \\, g\\bigl( m^{\\kappa \/ \\nu}_{~} \\, t \\bigr) \\Bigr]^{c\/6}_{~} \\nonumber \\\\\n&=& \\, m^{c \\kappa \/ 6}_{~} \\, {\\tilde g}\\bigl( m^{\\kappa \/ \\nu}_{~} \\, t \\bigr) \\, ,\n\\end{eqnarray}\nwhere $a$ is a non-universal constant, and ${\\tilde g} \\equiv ag^{c\/6}$. \nThus the critical exponent for $e^{S_{\\rm E}}_{~}$ is identified as $c \\nu \/ 6$. \n\nUsing $T_{\\rm c}^{~}$, $\\kappa$ and $\\nu$ previously obtained by the finite-$m$ scaling for $\\xi( m, t )$, we can estimate the central charge $c$. \nFigure 5 (c) shows the scaling plot of Eq.~(12) for the data of $m = 100, 200, 300, 400$, and $500$.\nThe central charge is estimated as $c = 1.894(12)$. \nIf we exclude the case $m = 100$ for the scaling analysis, we obtain $c = 1.900(15)$. \nConsidering the discrepancy between the above values of $c$, we adopt $c = 1.90\\pm0.02$.\n\nHere, it should be noted that this value is consistent with the relation\n\\begin{equation}\n\\kappa = \\frac{6}{ c\\bigl( \\sqrt{12 \/ c} \\, + 1 \\bigr) }\\, ,\n\\label{pollmann}\n\\end{equation}\nwhich is derived from the MPS description of a one-dimensional critical quantum system~\\cite{pollmann}.\nSubstituting $c = 1.90$ and $\\kappa = 0.89$ into Eq. (\\ref{pollmann}), we actually have $6 \/ \\{ c( \\sqrt{12 \/ c}+1) \\} - \\kappa = 0.009$, which provides a complementary check of the finite-$m$ scaling in CTMRG.\n\n\\section{Summary and discussion}\n\nWe have investigated the phase transition and its critical properties of the icosahedron model on a square lattice, where the local vector spin has twelve degrees of freedom. \nWe have calculated the magnetization, the effective correlation length, and the classical analogue of the entanglement entropy by means of the CTMRG method. 
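As an aside, the numerical consistency check of Eq.~(\ref{pollmann}) quoted above takes only a few lines to reproduce (a standalone sketch using the fitted values $c = 1.90$ and $\kappa = 0.89$ from the text):

```python
import math

# Check kappa = 6 / [ c ( sqrt(12/c) + 1 ) ] against the fitted values
# c = 1.90 and kappa = 0.89 quoted in the text.
c, kappa = 1.90, 0.89
kappa_pred = 6.0 / (c * (math.sqrt(12.0 / c) + 1.0))
residual = kappa_pred - kappa
print(f"{residual:.3f}")  # 0.009, as stated in the text
```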
\nThe CTMRG results are strongly dependent on $m$, which is the cutoff dimension of the CTMs, near the critical point.\nWe have then performed the finite-$m$ scaling analysis and found that all the numerical data can be well fitted with the scaling functions including the shoulder structures.\nWe have thus confirmed that the icosahedron model exhibits a second-order phase transition at $T_{\\rm c}=0.5550\\pm0.0001$, below which the icosahedral symmetry is broken to a five-fold axial symmetry.\nAlso, the scaling exponents are estimated as $\\nu = 1.62\\pm0.02$, $\\kappa = 0.89\\pm0.02$, and $\\beta=0.12\\pm0.01$. \nFrom the relation between the entanglement entropy and the effective correlation length, moreover, we have extracted the central charge as $c = 1.90\\pm0.02$, which cannot be described by the minimal series of CFT.\nClarifying the mechanism of such non-trivial critical behavior in the icosahedron model is an important future issue. \n\n\nOur original motivation stems from the systematic analysis of the continuous-symmetry limit toward the $O( 3 )$ Heisenberg spin.\nIn this sense, the next target is the dodecahedron model having twenty local degrees of freedom, which requires massively parallelized computations of CTMRG. \nIn addition, it is an interesting problem to introduce the XY-like uniaxial anisotropy to the icosahedron and dodecahedron models;\na crossover of universality between the icosahedron\/dodecahedron model and the clock models can be expected, where the shoulder structures of the scaling functions may play an essential role.\n\n\n\n\n\\section{Acknowledgment}\n\nThis research was partially supported by Grants-in-Aid for Scientific Research under Grant No. 25800221, 26400387, 17H02931, and 17K14359 from JSPS and by VEGA 2\/0130\/15 and APVV-16-0186. \nIt was also supported by MEXT as ``Challenging Research on Post-K computer'' (Challenge of Basic Science: Exploring the Extremes through Multi-Physics Multi-Scale Simulations). 
\nThe numerical computations were performed on the K computer provided by the RIKEN Advanced Institute for Computational Science through the HPCI System Research project (Project ID:hp160262).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{introduction}\n\nIn light nuclei, the cluster aspect is one of the essential features, as well as the shell-model aspect.\nOwing to the coexistence of these two natures, namely, cluster and shell-model features, various structures \nappear in stable and unstable nuclei.\n\n$^{12}$C is one of the typical examples where the cluster and shell-model aspects coexist.\nThe ground state of $^{12}$C is known to have mainly a shell-model feature of \nthe $p_{3\/2}$ subshell closed configuration, whereas\nthe well-developed 3$\\alpha$-cluster structures appear in excited states.\nIn the theoretical works on the 3$\\alpha$-cluster structures,\\cite{Horiuchi_OCM_74,Uegaki_12C_77,Kamimura_12C_77,Descouvemont_12C_87,En'yo_12C_98,Tohsaki_12C_01,Funaki_12C_03,Neff_12C_04,En'yo_12C_07,Kurokawa_12C_07} \\ \nvarious configurations of the 3$\\alpha$-cluster structures were suggested in the excited states\nabove the 3$\\alpha$ threshold energy, for example, \nthe $\\alpha$ condensation of weakly interacting \nthree $\\alpha$ clusters in the $0^{+}_2$ state and \nthe equilateral-triangular structure of three $\\alpha$ clusters in the $3^{-}_{1}$ state.\nMoreover, a linear-chainlike (or an obtuse-angle-triangular) structure \nof three $\\alpha$ clusters in the $0^{+}_{3}$ state was suggested.\n\nCluster structures have also been found in light neutron-rich nuclei such as Be isotopes.\nIn $^{10}$Be, the low-lying states are understood in a molecular\n2$\\alpha+2n$ picture,\\cite{vonOertzen_ClusterRev_06,Itagaki_10Be_00} \\ where \ntwo $\\alpha$ cores are formed and two excess neutrons occupy molecular orbitals around the $2\\alpha$.\nIn terms of a simple shell model, $^{10}$Be is an $N=6$ nucleus, and 
therefore, the $p_{3\/2}$ subshell closure \neffect is also important, as well as the $2\\alpha+2n$ cluster feature at least in the ground state. \nThis means that the cluster-shell competition is essential in unstable nuclei \nas well as stable nuclei, as argued in Ref.~\\citen{Itagaki_ClusterShellCompetition_04}.\n\nFor theoretical investigations of such nuclei, it is necessary\nto describe the coexistence of shell and cluster features systematically. \nHowever, many theoretical frameworks still have deficiencies \nin describing both the shell-model and cluster structures.\nIn fact, in the case of $^{12}$C, shell models can be used to describe low-lying shell-model states but they\nusually fail to describe high-lying 3$\\alpha$-cluster states. \nOn the other hand, conventional cluster models are suitable for studying the 3$\\alpha$-cluster states,\nbut it is not easy to reproduce well the detailed properties of \nlow-lying shell-model states because $\\alpha$ cluster breaking is not incorporated\nin the cluster models.\n\nA method of antisymmetrized molecular dynamics (AMD)\\cite{En'yo_PTP_95,En'yo_AMD_95} \\ is one of the frameworks useful for overcoming this\nproblem. 
It was applied to $^{12}$C and \nsucceeded in describing the shell and cluster features owing to the flexibility of its \nwave functions.\\cite{En'yo_12C_98,En'yo_12C_07} \\ \nMoreover, in the study of fermionic molecular dynamics (FMD), in which model wave functions \nare similar to those of AMD, the coexistence of shell and cluster features in $^{12}$C was \ndescribed successfully.\\cite{Neff_12C_04} \\ \n\nThe AMD method has also been applied to various stable and unstable nuclei, \nand it has been proved to be one of the powerful approaches for describing various structures\nsuch as cluster structures and shell-model structures.\\cite{En'yo_AMD_95,En'yo_AMD_03,En'yo_sup_01} \\ \nThere are some versions of the AMD, for example, the variation after parity and total-angular-momentum projections (VAPs),\\cite{En'yo_12C_98} \\ \nthe variation with the constraint on the quadrupole deformation \n$\\beta$ ($\\beta$ constraint AMD),\\cite{Dote_Beta-Constraint_97,Kimura_uptoMg_01,En'yo_AMD_03} \\ or the constraint on \nthe cluster distances ($d$-constraint AMD).\\cite{Taniguchi_D-Constraint_04} \\ \nIn principle, a basis AMD wave function is given by a Slater determinant of Gaussian wave packets, and\nexcited states are described by a superposition of Slater determinants.\nIn practical calculations of excited states of light nuclei, it is important to efficiently prepare \nvarious cluster configurations \nincluding 2-body and 3-body clusterings as basis wave functions in the AMD framework.\nMoreover, in the study of unstable nuclei, more flexible model wave functions such as \n2-body or 3-body cluster structures with surrounding valence nucleons will be required to describe \npossible exotic cluster structures in excited states.\n\nTo study a variety of cluster structures and the coexistence of cluster and shell features\nin light unstable nuclei,\nwe propose an extended method of constraint AMD\nto describe various cluster and shell structures. 
That is the\ntwo-dimensional constraint with respect to the quadrupole deformation parameters, $\\beta$ and $\\gamma$,\nwhich is expected to be efficient for preparing basis wave functions with various cluster configurations.\nWe call this method $\\beta$-$\\gamma$ constraint AMD.\nWe expect shell-model structures to appear in the small $\\beta$ region, whereas\ndeveloped 2-body or 3-body cluster structures can be obtained for large $\\beta$.\nIn the large $\\beta$ region, various configurations of cluster structures may appear\ndepending on $\\beta$ and $\\gamma$. \n\nThe $\\beta$-$\\gamma$ constraint AMD may also be useful\nin the study of triaxial deformations.\nOn the other hand, in the Hartree-Fock-Bogolyubov (HFB) calculations, \nthe $\\beta$-$\\gamma$ constraint was adopted, for example, \nin Ref.~\\citen{Girod_triaxial_83}, and \nthe superposition of $\\beta$-$\\gamma$ constraint wave functions\nhas been performed recently by Bender and Heenen.\\cite{Bender_24Mg_08}\nIt was found that the triaxiality is important to reproduce the experimental \ndata of $^{24}$Mg in the HFB calculations. \nHowever, works on triaxial calculations with the superposition are limited, and\nit is still a challenging problem.\nMoreover, such mean-field approaches are not necessarily \nsuitable for describing cluster structures.\nFor the study of cluster features, it is important to apply \nthe $\\beta$-$\\gamma$ constraint to a framework that can describe cluster structures.\n\nIn this paper, we apply the $\\beta$-$\\gamma$ constraint AMD\nto the $N=6$ isotones $^{10}$Be, $^{12}$C, $^{9}$Li, and $^{11}$B\nto check the applicability of this method.\nWe analyze the results and confirm that various structures appear as functions of \nthe deformation parameters, $\\beta$ and $\\gamma$, in the present framework. \nIn particular, we focus on the coexistence of shell and cluster features. 
\nFor $^{10}$Be and $^{12}$C, we also calculate the energy spectra of excited states\nby the superposition of the obtained basis wave functions and compare the results with \nthe experimental data.\nWe show that the $\\beta$-$\\gamma$ constraint AMD is useful for reproducing the energy spectra.\nA role of the $\\gamma$ degree of freedom is also discussed.\n\nThe content of this paper is as follows. \nIn \\S \\ref{framework}, we explain the framework of the $\\beta$-$\\gamma$ constraint AMD.\nThe calculated results are shown in \\S \\ref{results}.\nIn \\S \\ref{discussions}, we discuss the effect of the triaxial deformation parameter $\\gamma$.\nFinally, in \\S \\ref{summary}, a summary and an outlook are given.\n\n\n\\section{Framework of $\\beta$-$\\gamma$ constraint AMD}\\label{framework}\n\nWe adopt a method of AMD with constraint. \nThe frameworks of AMD and constraint AMD are described in detail, for example, in \nRefs.~\\citen{En'yo_AMD_95,En'yo_sup_01,En'yo_AMD_03}.\nIn this paper, we propose a two-dimensional constraint with respect to quadrupole deformation\nparameters.\n\n\\subsection{Wave function of AMD}\n\nIn the method of AMD, \na basis wave function of an $A$-nucleon system $|\\Phi \\rangle$ \nis described by a Slater determinant of single-particle wave functions $|\\varphi_{i} \\rangle$ as\n\\begin{equation}\n|\\Phi \\rangle = \\frac{1}{\\sqrt{A!}} \\det \\left\\{ |\\varphi_{1} \\rangle, \\cdots ,|\\varphi_{A} \\rangle \\right\\}.\n\\end{equation}\nThe $i$-th single-particle wave function $|\\varphi_{i} \\rangle$ consists of \nthe spatial part $|\\phi_{i} \\rangle$, spin part $|\\chi_{i} \\rangle$, and isospin part $|\\tau_{i} \\rangle$ as\n\\begin{equation}\n\t|\\varphi_{i} \\rangle = |\\phi_{i} \\rangle |\\chi_{i} \\rangle |\\tau_{i} \\rangle.\n\\end{equation}\nThe spatial part $|\\phi_{i} \\rangle$ is given by a Gaussian wave packet\nwhose center is located at $\\bm{Z}_{i}\/\\sqrt{\\nu}$ as\n\\begin{equation}\n\t\\langle \\bm{r} | \\phi_{i} 
\\rangle = \\left( \\frac{2\\nu}{\\pi} \\right)^{\\frac{3}{4}}\n\t\t\\exp \\left[ - \\nu \\left( \\bm{r} - \\frac{\\bm{Z}_{i}}{\\sqrt{\\nu}} \\right)^{2} \n\t\t+ \\frac{1}{2} \\bm{Z}_{i}^{2}\\right] \n\t\\label{single_particle_spatial}, \n\\end{equation}\nwhere $\\nu$ is the width parameter and is taken to be a common value for all the\nsingle-particle Gaussian wave functions in the present work.\nThe spin orientation is given by the parameter $\\bm{\\xi}_{i}$, while\nthe isospin part $|\\tau_{i} \\rangle$ is fixed to be up (proton) or down (neutron), \n\\begin{align}\n\t|\\chi_{i} \\rangle &= \\xi_{i\\uparrow} |\\uparrow \\ \\rangle + \\xi_{i\\downarrow} |\\downarrow \\ \\rangle,\\\\\n\t|\\tau_{i} \\rangle &= |p \\rangle \\ or \\ |n \\rangle.\n\\end{align}\nIn a basis wave function $|\\Phi \\rangle$, $\\{ X \\} \\equiv \\{ \\bm{Z} , \\bm{\\xi} \\} = \\{ \\bm{Z}_{1} , \\bm{\\xi}_{1} , \\bm{Z}_{2} , \\bm{\\xi}_{2} , \n\\cdots , \\bm{Z}_{A} , \\bm{\\xi}_{A} \\}$ are complex variational parameters and they \nare determined by the energy optimization using the frictional cooling method.\\cite{En'yo_sup_01,En'yo_AMD_03} \\ \nAs the variational wave function, we employ the parity-projected wave function\n\\begin{equation}\n\t|\\Phi ^{\\pm} \\rangle = P^{\\pm} |\\Phi \\rangle = \\frac{1 \\pm P}{2} |\\Phi \\rangle.\n\\end{equation}\nHere, $P$ is the parity transformation operator. 
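As a side remark, the role of the $\exp[\bm{Z}_{i}^{2}\/2]$ factor in Eq.~(3) is that the single-particle overlap takes the simple coherent-state form $\langle \phi_{i} | \phi_{j} \rangle = \exp( \bm{Z}_{i}^{*} \cdot \bm{Z}_{j} )$ (spin factors aside). This can be checked numerically in one dimension; the values of $\nu$ and $Z$ below are illustrative only.

```python
import numpy as np

# 1D check (illustrative nu and Z; not the paper's parameters):
# phi_Z(x) = (2 nu/pi)^(1/4) exp[ -nu (x - Z/sqrt(nu))^2 + Z^2/2 ]
# has the overlap <phi_i|phi_j> = exp(conj(Z_i) * Z_j).
nu = 0.16

def phi(x, Z):
    return (2.0 * nu / np.pi) ** 0.25 * np.exp(
        -nu * (x - Z / np.sqrt(nu)) ** 2 + Z ** 2 / 2.0)

x = np.linspace(-40.0, 40.0, 20001)
dx = x[1] - x[0]
Zi, Zj = 0.3 + 0.2j, -0.5 + 0.1j
overlap = np.sum(np.conj(phi(x, Zi)) * phi(x, Zj)) * dx  # Riemann sum
print(np.allclose(overlap, np.exp(np.conj(Zi) * Zj)))  # True
```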
\nWe perform the variation for the parity-projected energy\n$\\langle \\Phi ^{\\pm}| H |\\Phi ^{\\pm} \\rangle \/ \\langle \\Phi ^{\\pm}|\\Phi ^{\\pm} \\rangle$,\nwhere $H$ is the Hamiltonian.\nAfter the variation, we project the obtained wave function onto the \ntotal-angular-momentum eigenstate.\nThat is, the parity projection is performed before the variation, and\nthe total-angular-momentum projection is carried out after the variation.\n\n\\subsection{$\\beta$-$\\gamma$ constraint}\n\nTo describe various cluster and shell-model structures that may appear \nin the ground and excited states of light nuclei,\nwe constrain the quadrupole deformation parameters, $\\beta$ and $\\gamma$, and perform the \nenergy variation with the constraints on the $\\beta$-$\\gamma$ plane.\n\nThe deformation parameters, $\\beta$ and $\\gamma$, are defined as\n\\begin{align}\n\t&\\beta \\cos \\gamma \\equiv \\frac{\\sqrt{5\\pi}}{3} \n\t\t\\frac{2\\langle z^{2} \\rangle -\\langle x^{2} \\rangle -\\langle y^{2} \\rangle }{R^{2}}, \\\\\n\t&\\beta \\sin \\gamma \\equiv \\sqrt{\\frac{5\\pi}{3}} \n\t\t\\frac{\\langle x^{2} \\rangle -\\langle y^{2} \\rangle }{R^{2}} \\label{definition_beta_gamma}, \\\\\n\t&R^{2} \\equiv \\frac{5}{3} \\left( \\langle x^{2} \\rangle + \\langle y^{2} \\rangle \n\t\t+ \\langle z^{2} \\rangle \\right).\n\\end{align}\nHere, $\\langle O \\rangle$ represents the expectation value of the operator $O$ for an intrinsic wave function $| \\Phi \\rangle$.\n$x$, $y$, and $z$ are the inertia principal axes that are chosen as\n$\\langle y^{2} \\rangle \\le \\langle x^{2} \\rangle \\le \\langle z^{2} \\rangle $ and\n$\\langle xy \\rangle = \\langle yz \\rangle = \\langle zx \\rangle =0$.\nTo satisfy the latter condition, we also impose \nthe constraints $\\langle xy \\rangle\/R^{2} = \\langle yz \\rangle\/R^{2} = \\langle zx \\rangle\/R^{2} =0$. 
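The definitions above translate directly into code. The following sketch (function name and inputs are hypothetical) computes $\beta$ and $\gamma$ from nucleon coordinates that are assumed to be given already in the inertia-principal-axis frame:

```python
import numpy as np

# Hypothetical sketch of the beta-gamma definitions: coordinates are
# assumed to be expressed in the principal-axis frame (<xy>=<yz>=<zx>=0).
def beta_gamma(coords):
    """coords: (A, 3) array of nucleon positions; returns (beta, gamma_deg)."""
    x2, y2, z2 = np.mean(coords ** 2, axis=0)
    # order the axes so that <y^2> <= <x^2> <= <z^2>
    y2, x2, z2 = np.sort([x2, y2, z2])
    R2 = (5.0 / 3.0) * (x2 + y2 + z2)
    bcos = (np.sqrt(5.0 * np.pi) / 3.0) * (2.0 * z2 - x2 - y2) / R2
    bsin = np.sqrt(5.0 * np.pi / 3.0) * (x2 - y2) / R2
    beta = np.hypot(bcos, bsin)
    gamma_deg = np.degrees(np.arctan2(bsin, bcos))
    return beta, gamma_deg
```

With this axis ordering, $\gamma$ falls in the sector $0^{\circ} \le \gamma \le 60^{\circ}$, with $\gamma = 0^{\circ}$ for prolate and $\gamma = 60^{\circ}$ for oblate shapes.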
\nTo obtain the energy minimum state under the constraint condition,\nwe add the constraint potential $V_{\\text{const}}$ to the total energy of the system\nin the energy variation. The constraint potential $V_{\\text{const}}$ is given as\n\\begin{align} \n\tV_{\\text{const}} \\equiv &\\eta_{1} \n\t\\left[ (\\beta \\cos \\gamma - \\beta_{0} \\cos \\gamma_{0})^{2} + (\\beta \\sin \\gamma - \\beta_{0} \\sin \\gamma_{0})^{2} \\right] \\notag \\\\\n\t+ &\\eta_{2} \\left[ \\left( \\frac{\\langle xy \\rangle}{R^{2}} \\right)^{2} \n\t\t+ \\left( \\frac{\\langle yz \\rangle}{R^{2}} \\right)^{2} \n\t\t+ \\left( \\frac{\\langle zx \\rangle}{R^{2}} \\right)^{2} \\right].\n\t\\label{constraint_energy}\n\\end{align}\nHere, $\\eta_{1}$ and $\\eta_{2}$ take sufficiently large values.\nAfter the variation with the constraint, we obtain the optimized wave functions\n$|\\Phi^{\\pm}(\\beta_{0}, \\gamma_{0}) \\rangle$\nfor each set of parameters, $(\\beta, \\gamma) = (\\beta_{0}, \\gamma_{0})$.\n\nIn the calculations of energy levels, \nwe superpose the total-angular-momentum projected \nwave functions $P^{J}_{MK} |\\Phi^{\\pm}(\\beta, \\gamma) \\rangle$. 
\nThus, the final wave function for the $J^\\pm_n$ state is given by\na linear combination of the basis wave functions as \n\\begin{equation}\n\t|\\Phi ^{J\\pm}_{n} \\rangle = \\sum_{K} \\sum_{i} f_{n}(\\beta_{i}, \\gamma_{i}, K) P^{J}_{MK} |\\Phi^{\\pm}(\\beta_{i}, \\gamma_{i}) \\rangle.\n\t\\label{dispersed_GCM}\n\\end{equation}\nThe coefficients $f_{n}(\\beta_{i}, \\gamma_{i}, K)$ are determined using the Hill-Wheeler equation\n\\begin{equation}\n\t\\delta \\left( \\langle \\Phi ^{J\\pm}_{n} | H | \\Phi ^{J\\pm}_{n} \\rangle - \n\tE_{n} \\langle \\Phi ^{J\\pm}_{n} | \\Phi ^{J\\pm}_{n} \\rangle\\right) = 0.\n\t\\label{Hill-Wheeler}\n\\end{equation}\nThis amounts to a superposition of multiple configurations described by \nparity- and total-angular-momentum-projected AMD wave functions.\nIn the limit of a sufficiently dense set of basis wave functions \non the $\\beta$-$\\gamma$ plane, it corresponds to the\ngenerator coordinate method (GCM) with the two-dimensional generator coordinates \nof the quadrupole deformation parameters, $\\beta$ and $\\gamma$.\n\n\\subsection{Hamiltonian and parameters}\n\nThe Hamiltonian $H$ consists of the kinetic term \nand effective two-body interactions as\n\\begin{equation}\n\tH = \\sum_{i} t_{i} - T_{\\text{G}} + \\sum_{i<j} v_{ij}.\n\\end{equation}\n\nWhen $m_1^2 > m_2^2$ this system has a discrete global $Z_2\\times Z_2$ symmetry\n$\\Phi_{1} \\to \\pm \\Phi_{1}$ and $\\Phi_{2} \\to \\pm \\Phi_{2}$. The\nstring solutions break it down to $Z_2$. The resulting kinks\ninterpolating between the two string solutions, called beads\n\\cite{Hindmarsh:1985xc}, can be interpreted as 't Hooft-Polyakov\nmonopoles with their flux confined to two tubes. When $m_1^2 =\nm_2^2$, the global symmetry is enlarged by the transformation $\\Phi_1\n\\to \\Phi_2$ to $D_4$, the square symmetry group, which is broken to\n$Z_2$ by strings. The resulting kinks are labelled by a $Z_4$\ntopological charge.
A pair of these kinks has the same charge as a\nmonopole on a string, hence the name semipole.\n\nFinally, when $m_1^2 = m_2^2$ and $\\kappa=\\lambda$, there is a global O(2) symmetry \n\\begin{equation}\n\\label{e:U1Sym}\n\\Phi \\to e^{i\\alpha} \\Phi \\quad \\text{and} \\quad \\Phi \\to \\Phi^*,\n\\end{equation}\nwhere $\\Phi = \\Phi_1 + i \\Phi_2$. The phase of the complexified\nadjoint scalar $\\theta$, defined by $\\tan \\theta = |\\Phi_2|\/|\\Phi_1|$,\nchanges smoothly along the string. In this case the string supports\npersistent supercurrents, proportional to the gradient of the phase\nalong the string.\n\nIn order to achieve greater dynamic range, it is common practice in\ncosmic string simulations to scale the couplings and mass parameters\nwith factors $a^{1-s}$, where $a$ is the cosmological scale factor and\n$0 \\le s \\le 1$. This is done in such a way as to keep the scalar\nexpectation value fixed. As a result, the physical string width grows\nfor $s < 1$, but the string tension depends only on the ratio of the\nscalar self-coupling to the square of the gauge coupling, and so stays\nconstant. The dynamics of a string network at $s=0$ are very similar\nto those at $s=1$~\\cite{Daverio:2015nva}.\n\nBy contrast, the monopole mass $M_\\text{m}$ is inversely proportional to\nits radius, and so $M_\\text{m}$ and the dynamical quantity $d_\\text{BV}$ both grow\nthroughout simulations with $s < 1$. It is therefore not clear how\nthe necklaces should behave in this case: the growing mass might lead\none to expect that the monopole RMS velocity should decrease, and the\nmonopole density increase. We will see however that necklaces behave\nsimilarly with $s=0$ as they do with $s=1$.\n\n\n\n\\section{Lattice implementation}\n\\label{s:LatImp}\n\n\\subsection{Discretisation and initial conditions}\n\nWe simulate the system by choosing the temporal gauge $A_0 = 0$ and then\ndiscretising on a comoving 3D spatial lattice.
The\nHamiltonian of this model in the cosmological background takes the\nform\n\\begin{multline}\n\\label{e:ModHam}\nH(t) = \\frac{1}{2g^2a^{2(s-1)}} \\sum_{x,i,a} \\epsilon_i^a(x,t)^2 + \\frac{1}{2} a^2 \\sum_{x; \\; n,a} \\; \\pi_n^a(x,t)^2 \\\\\n + \\frac{4}{g^2a^{2(s-1)}} \\sum_{x; \\; i<j} \\left[ \\cdots \\right]\n\\end{multline}\n\nFor $r > 1$ the majority of the energy in the network is due to the\nmonopoles.\n\nNote that in the degenerate cases $m_2^2\/m_1^2 = 1$ with $\\kappa=1$, the\npoints where $\\Phi_1$ vanishes, recorded by our monopole search\nalgorithm, are not special: there is no local maximum in the energy\ndensity. However, they can be used as convenient markers of the phase\n$\\theta$, defined after Eq.~(\\ref{e:U1Sym}).\n\n\n\n\\subsection{Monopole and string velocities}\n\nWe use the positions of the strings and monopoles to compute the\nstring root-mean-square (RMS) velocity $\\bar v$, and the monopole RMS\nvelocity $\\bar{v}_\\text{m}$.\n\nUsing the projection methods discussed in\nAppendix~\\ref{app:projectors}, we record a list of the lattice cells\nthat contain magnetic charge every few timesteps. We then take these\nlists for two timesteps and form a distance matrix for every pair of\nmonopoles in the system. If the time interval $\\delta t$ is much\nsmaller than $\\xi_\\mathrm{m}$, we can assume that pairing each\nmonopole at the later timestep with the closest one at the earlier\ntimestep captures the same monopole at two different times. On the\nother hand, the time interval between measurements has to be large\nenough that lattice-scale discretisation ambiguities do not induce\nnoise~\\cite{Hindmarsh:2014rka}. We will therefore compare results for\nseveral different $\\delta t$.\n\nThere are a number of standard algorithms to find the choice of\npairings in a distance matrix that minimises the total distance.
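For instance, a minimal sketch of such a pairing (minimum-image distances stand in for the explicit halo copy; names and values here are illustrative):

```python
import numpy as np

def rms_velocity(pos_early, pos_late, dt, box, vmax=1.0):
    # Greedy pairing: repeatedly take the globally smallest entry of
    # the distance matrix; periodic boundaries are handled with the
    # minimum-image convention rather than an explicit halo region.
    sep = pos_late[:, None, :] - pos_early[None, :, :]
    sep -= box * np.round(sep / box)               # minimum image
    dist = np.linalg.norm(sep, axis=-1)
    order = np.dstack(np.unravel_index(np.argsort(dist, axis=None), dist.shape))[0]
    used_late, used_early, speeds = set(), set(), []
    for i, j in order:                             # smallest distances first
        if i not in used_late and j not in used_early:
            used_late.add(i); used_early.add(j)
            speeds.append(dist[i, j] / dt)
    v = np.array(speeds)
    v = v[v < vmax]                                # drop spurious superluminal pairs
    return np.sqrt(np.mean(v ** 2))

# two monopoles moving at speeds 0.3 and 0.4 in a periodic box
box, dt = 64.0, 10.0
early = np.array([[1.0, 2.0, 3.0], [60.0, 50.0, 40.0]])
late = (early + np.array([[3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])) % box
vbar_m = rms_velocity(early, late, dt, box)   # sqrt((0.3**2 + 0.4**2)/2)
```

Sorting the flattened distance matrix once gives the same pairing as repeatedly extracting the global minimum, at O(N^2 log N) cost.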
We\nused a simple `greedy' algorithm that found the smallest entry in the\nentire distance matrix, then removed that monopole pair, repeating\nuntil all monopoles at the later time were paired up. This algorithm\nhas the advantage of being easy to code; on the other hand, it scales\nas the square of the number of monopoles.\n\nThe system has periodic boundary conditions, and so a `halo' region is\nincluded from the other side of the lattice to ensure that all\npossible subluminal monopole separations will be found. Once we have\ndetermined all the pairings, we remove spurious superluminal pairings\n(typically $\\lesssim 1\\%$ of measurements) and use the results to\ndetermine $\\bar{v}_\\text{m}$. We considered $\\delta t = 5$, $10$ and $15$ and\nfound convergence in the resulting curves. We used $\\delta t= 15$ for\nour results. The difference from $\\delta t=10$ can be considered as a\nsystematic uncertainty, but in practice it is comparable to or smaller\nthan the random error.\n\nFor the string velocities, a very similar approach was adopted, using\nthe positions of the plaquettes threaded by string. As many\nplaquettes can be threaded by the strings in the system, the above\npairing and distance-finding algorithms were parallelised. Even so,\ndetermining the string velocity for a few hundred thousand plaquettes\nbetween a pair of timesteps took about five minutes on 120 processors.\nFor this reason, string velocities are not computed at early times,\nwhen the number of plaquettes is too large. The corresponding\nmonopole measurement takes about a second, and can be performed\nthroughout the simulations.\n\n\n\\section{Results}\n\nWe run over several different parameter choices for both $s=1$ and\n$s=0$. \n\nThe parameters cover both the degenerate ($m_1^2 = m_2^2$) and\nnon-degenerate cases, and allow us to explore the three possible\nglobal symmetries of the string solutions, namely $\\mathrm{O}(2)$,\n$D_4$, and $Z_2\\times Z_2$.
In the degenerate case three\ncross-couplings $\\kappa$ are considered: the special case $\\kappa =\n2\\lambda$ having $\\mathrm{O}(2)$ symmetry, and both $\\kappa >\n2\\lambda$ and $\\kappa < 2\\lambda$. For the non-degenerate case,\nhaving $Z_2 \\times Z_2$ symmetry, we explore various ratios of $m_1^2$\nto $m_2^2$.\n\nTwo different expansion rate parameters $\\nu = 0.5, 1$ were chosen,\nwhere $\\nu$ is defined in Eq.~(\\ref{e:ExpRatPar}). The choice $\\nu =\n1$ represents a radiation-dominated universe. While $\\nu = 0.5$ does\nnot correspond to any realistic cosmology, it is useful to explore the\nimpact of different expansion rates. Simulating in a matter dominated\nbackground ($\\nu=2$) does not give enough dynamic range for reliable\nresults.\n\n\nAll runs are carried out with $m_1^2 = 0.25$ ($s=1$) and $m_1^2 = 0.1$\n($s=0$). The parameter choices are listed in Tables \\ref{tab:s1runs}\nand \\ref{tab:runs}. The scale factor is normalised so that $a=1$ at\nthe end of the simulation.\n\n\n\\begin{table}[t]\n\t\\begin{center}\n\t\t\\begin{tabular}{lllll|lll|lll}\n\t\t\t$m_1^2$ & $m_2^2$ & $g$ & $\\lambda$ & $\\kappa$ & $M_\\text{m}$ & $\\mu$ & $d_\\text{BV}$ & $\\nu$ & $t_{0,\\text{H}}$ & $t_\\text{cg}$ \\\\\n\t\t\t\\hline\n\t\t\t0.25 & 0.25 & 1 & 0.5 & 2 & 11 & 1.6 & 7 & 1 & 30 & 230 \\\\\n\t\t\t0.25 & 0.25 & 1 & 0.5 & 1 & 11 & 1.6 & 7 & 1 & 30 & 230 \\\\\n\t\t\t\\hline\n\t\t\t0.25 & 0.1 & 1 & 0.5 & 1 & 11 & 0.63 & 17.5 & 0.5 & 42.5 & 242.5 \\\\\n\t\t\t0.25 & 0.1 & 1 & 0.5 & 1 & 11 & 0.63 & 17.5 & 1 & 42.5 & 242.5 \\\\\n\t\t\t\\hline\n\t\t\t0.25 & 0.05 & 1 & 0.5 & 1 & 11 & 0.31 & 35 & 0.5 & 60 & 260 \\\\\n\t\t\t0.25 & 0.05 & 1 & 0.5 & 1 & 11 & 0.31 & 35 & 1 & 60 & 260 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{\\label{tab:s1runs} List of parameters for $s=1$\n (physical) runs, with dimensionful parameters given in units\n of the lattice spacing $a$. 
Potential parameters\n (\\ref{e:ScaPot}) are shown along with the isolated monopole\n mass $M_\\text{m}$ and the isolated string tension $\\mu$ computed\n using\n Eqs.~(\\ref{eq:monopolemass})~and~(\\ref{eq:stringtension}).\n The length scale $d_\\text{BV}$ as computed using\n Eq.~(\\ref{eq:dbvdefn}) is also shown. Finally, we quote the\n expansion rate parameter $\\nu = d\\ln a\/d \\ln t$, the time at\n which we change to Hubble damping during our simulations,\n $t_{0,\\text{H}}$, and the time at which core growth ends and\n strings and monopoles reach their true physical width\n $t_\\text{cg}$. All these simulations have lattice size 720 and\n duration 720. }\n\\end{table}\n\n\n\\begin{table}[t]\n\t\\begin{center}\n\t\t\\begin{tabular}{lllll|lll|l}\n\t\t\t$m_1^2$ & $m_2^2$ & $g$ & $\\lambda$ & $\\kappa$ & $M_\\text{m}$ & $\\mu$ & $d_\\text{BV}$ & $t_{0,\\text{H}}$ \\\\\n\t\t\t\\hline\n\t\t\t0.1 & 0.1 & 1 & 0.5 & 2 & 6.96 & 0.628 & 11.1 & 30 \\\\\n\t\t\t0.1 & 0.1 & 1 & 0.5 & 1 & 6.96 & 0.628 & 11.1 & 30 \\\\\n\t\t\t0.1 & 0.1 & 1 & 0.5 & 0.5 & 6.96 & 0.628 & 11.1 & 30 \\\\\n\t\t\t\\hline\n\t\t\t0.1 & 0.04 & 1 & 0.5 & 1 & 6.96 & 0.251 & 27.7 & 67.1 \\\\\n\t\t\t0.1 & 0.02 & 1 & 0.5 & 1 & 6.96 & 0.126 & 55.4 & 94.9 \\\\\n\t\t\t0.1 & 0.01 & 1 & 0.5 & 1 & 6.96 & 0.0628 & 111 & 134 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{\\label{tab:runs} List of simulation parameters for\n runs with $s=0$, as for Table \\ref{tab:s1runs}. The\n expansion rate parameter is $\\nu=1$ (radiation era) for all\n simulations. At $s=0$ the physical size of the monopole and\n string cores grows in proportion to the scale factor. 
All\n these simulations have lattice size 720 and duration 720.}\n\\end{table}\n\nThe units are defined such that the lattice spacing $\\Delta x$ is 1.\nAll simulations are carried out on a $720^3$ lattice, with timestep\n$\\Delta t = 0.25$ after the initial heavy damping period ends at\n$t_{0,\\text{H}}$, for a total time $720$, or one light-crossing time\nof the box. In principle, correlations can start to be established\nafter half a light-crossing time. However, the only massless\nexcitations are waves on the string, and the strings are much longer\nthan the box size even at the end of the simulations. The network\nlength scale does not show any evidence for finite-size effects,\nalthough it is possible that the slight increase in $d$ for semipoles\nand supercurrents at $t \\gtrsim 360$ in Fig.~\\ref{f:n_both} is a sign\nof the limited simulation volume.\n\n\n\nEach set of parameter choices is run for 3 different realisations of\nthe initial conditions, and our results are statistical averages over\nthese runs.\n\nWe investigate the monopole density with the two different measures\nintroduced in Section \\ref{s:Mea}, the monopole-to-string density\nratio $r$ and the number of monopoles per unit comoving length of\nstring $n$.\n\n\\subsection{Network length scale}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[clip=true,width=0.5\\textwidth]{xi_s1-all.pdf}\n\\includegraphics[clip=true,width=0.5\\textwidth]{xin-all.pdf}\n\\end{center}\n\\caption{\\label{fig:xin} Plot of the network length scale $\\xi_\\text{n}$,\n defined in Eq.~(\\ref{e:xinDef}), with core growth parameter $s=1$\n (top) and $s=0$ (bottom). Fits to linear growth are also shown,\n within the range indicated by the vertical dashed lines. 
The\n gradients of the fit are given in Tables \\ref{tab:fits_s1} and\n \\ref{tab:fits_s0}.}\n\\end{figure}\n\n\nIn Fig.~\\ref{fig:xin} we plot the comoving necklace network length\nscale $\\xi_\\text{n}$, defined in Eq.~(\\ref{e:xinDef}), for $s=1$ (top) and\n$s=0$ (bottom).\n\nAll cases show linear growth with time, which means that the network\nis scaling. We perform fits in the range $360 < t < 480$, which, while\nin excess of half the light-crossing time for the system, allows time\nfor the scaling behaviour to develop. There are small differences in\nthe slope between simulations with different mass ratios, although\nthere is not enough dynamic range to ensure that they are not\ninherited from differences in the initial conditions. There is also\nevidence that at the lower expansion rate $\\nu = 1\/2$ the slope is lower,\ni.e. that the average necklace density is higher.\n\n\n\n\\begin{table}[h!]\n\\begin{center}\n\n\\begin{tabular}{lll|l|l}\n$m_1^2$ & $m_2^2$ & $\\kappa$ & $\\nu$ & $\\xi_\\mathrm{n}$ gradient \\\\\n\\hline\n0.25 & 0.25 & 2 & 1 & $0.171 \\pm 0.002$ \\\\\n0.25 & 0.25 & 1 & 1 & $0.168 \\pm 0.004$ \\\\\n\\hline\n0.25 & 0.1 & 1 & 0.5 & $0.154 \\pm 0.001$ \\\\\n0.25 & 0.1 & 1 & 1 & $0.171 \\pm 0.002$ \\\\\n\\hline\n0.25 & 0.05 & 1 & 0.5 & $0.158 \\pm 0.002$ \\\\\n0.25 & 0.05 & 1 & 1 & $0.165 \\pm 0.004$ \\\\\n\\hline\n\\end{tabular} \n \n\\end{center}\n\\caption{\\label{tab:fits_s1} Gradients for the network comoving length\n scale $\\xi_\\mathrm{n}$, from the fits shown in the graphs of $\\xi_n$\n against conformal time $t$ for $s=1$ in Fig.~\\ref{fig:xin} (top).
}\n\n\\bigskip\n\n\\begin{tabular}{lll|l}\n$m_1^2$ & $m_2^2$ & $\\kappa$ & $\\xi_\\mathrm{n}$ gradient \\\\\n\\hline\n0.1 & 0.1 & 2 & $0.154 \\pm 0.005$ \\\\\n0.1 & 0.1 & 1 & $0.150 \\pm 0.003$ \\\\\n0.1 & 0.1 & 0.5 & $0.163 \\pm 0.008$ \\\\\n\\hline\n0.1 & 0.04 & 1 & $0.141 \\pm 0.004$ \\\\\n0.1 & 0.02 & 1 & $0.143 \\pm 0.001$ \\\\\n0.1 & 0.01 & 1 & $0.126 \\pm 0.001$ \\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{tab:fits_s0} Gradients for the network comoving length\n scale $\\xi_\\mathrm{n}$, from the fits shown in the graphs of $\\xi_n$\n against conformal time $t$ for $s=0$ in Fig.~\\ref{fig:xin}\n (bottom). }\n\n\\end{table}\n\n\n\n\\subsection{Monopole density}\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[clip=true,width=0.5\\textwidth]{r_s1-all.pdf}\n \\end{center}\n \\caption{\\label{f:r_s1} The ratio of monopole to string energy\n density (\\ref{e:rDef}) in simulations with $s=1$. The legend\n gives the expansion rate parameter $\\nu = d \\log a\/d \\log t$, the\n mass ratio of the fields $m_2\/m_1$, and in the degenerate case the\n value of the cross-coupling $\\kappa$, which is otherwise $\\kappa=1$.\n The mass parameter $m_1^2 = 0.25$. }\n\\end{figure}\n\n\nIn Fig.~\\ref{f:r_s1} we plot the ratio of monopole to string energy\ndensity $r$, defined in (\\ref{e:rDef}), against time in units of\n$m_1^{-1}$, for all parameters given in Table \\ref{tab:s1runs}. Note\nthat $m_1^{-1}$ is approximately the monopole size.\n\n\nWe see that $r$ decreases after the formation of the string network,\nwith what appears to be a power law after the core growth period has\nfinished.\n\nThe significance of the power law is clearer if we plot the comoving\nlinear monopole density on the string $n$, again in units of\n$m_1^{-1}$ (Fig.~\\ref{f:n_both}). 
We can see from the figure that,\nwith the possible exception of the mass-degenerate cases ($m_2^2\/m_1^2\n= 1$) at $s=1$, $n$ appears to tend to a constant at large time.\nHence the comoving separation of the monopoles remains of the same order\nof magnitude as its value at the formation of the strings.\n\nThere is some evidence for a slow increase in $n$ for the degenerate\ncases $m_2^2\/m_1^2 = 1$ at $s=1$, which may be due to semipole\nannihilations being less probable than monopole-antimonopole\nannihilations -- some pairings of semipoles cannot\nannihilate~\\cite{Hindmarsh:2016lhy}. However, the increase occurs\nafter half a light-crossing time for the simulation box, so this may\nbe a finite volume effect.\n\nWe illustrate the ability of semipoles to avoid annihilation in\nFig.~\\ref{f:semipole_end}, which depicts two strings winding around\nthe periodic lattice when the total length of string and the semipole\nnumber have stabilised. One can see that on one of the strings, the\nsemipole density is much higher, and examination of multiple snapshots\nprior to this one shows that semipoles have repelled each other.\nHowever, the high semipole density may be an artefact of the periodic\nboundary conditions, which have prevented the strings from shrinking\nin length any further. Without this shrinking, semipoles are not\nforced together, so there is less likelihood of overcoming the\nrepulsion and annihilating.\n\nIn the degenerate cases $m_2^2\/m_1^2 = 1$ with $\\kappa=1$, we recall that\nthe recorded monopole positions are just places where the phase of the\ncomplexified scalar has the value $\\theta=\\pm\\pi\/2$.
The fact that the\ncomoving distance between these points remains approximately constant\nindicates that the comoving RMS current is constant, and so the\nphysical RMS current decreases in inverse proportion to the scale\nfactor.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[clip=true,width=0.5\\textwidth]{n_s1-all.pdf}\n\\includegraphics[clip=true,width=0.5\\textwidth]{n-all.pdf}\n\\end{center}\n\\caption{\\label{f:n_both} The number of monopoles per comoving string\n length in simulations with $s=1$ (top) and $s=0$ (bottom). The\n legend gives the expansion rate parameter $\\nu = d \\log a\/d \\log t$,\n the mass ratio of the fields $m_2\/m_1$, and in the degenerate case\n the value of the cross-coupling $\\kappa$, which is otherwise $\\kappa=1$.\n The mass parameter $m_1^2 = 0.25$ ($s=1$) and $m_1^2 = 0.1$\n ($s=0$).}\n\\end{figure}\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=0.3\\textwidth]{{visit-kappa2-late-nocredit}.jpeg}\n \\end{center}\n\\caption{\\label{f:semipole_end} A small $360^3$ box at $t = 1080$,\n simulated at $m_2^2\/m_1^2 = 1$ and $\\kappa = 2$. The high density\n of semipoles on one of the strings shows that semipoles can avoid\n annihilation in some cases. }\n\\end{figure}\n\n\nIn the $s=0$ case, the increased dynamic range means we can attempt a\nmeaningful fit to investigate the relaxation to the constant $n$\nevolution. 
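Such a relaxation fit can be sketched as follows (synthetic data loosely modelled on the measured curves; the exponential form and the simple grid search over the asymptote are illustrative, not the production fitting code):

```python
import numpy as np

# Fit n(t) = n_inf + A*exp(-B*m1*t) by scanning candidate asymptotes
# n_inf and linearly regressing log(n - n_inf) against t for each.
m1 = np.sqrt(0.1)
t = np.linspace(100.0, 700.0, 60)
n = 0.023 + 0.060 * np.exp(-0.0104 * m1 * t)   # synthetic 'data'

best = None
for n_inf in np.linspace(0.0, 0.9 * n.min(), 400):
    y = np.log(n - n_inf)                      # positive by construction
    slope, intercept = np.polyfit(t, y, 1)
    resid = np.sum((slope * t + intercept - y) ** 2)
    if best is None or resid < best[0]:
        best = (resid, n_inf, np.exp(intercept), -slope / m1)

_, n_inf_fit, A_fit, B_fit = best              # recovers the input parameters
```

With real data one would also propagate measurement errors and check stability against the fit window, but the grid-plus-regression sketch already recovers the input parameters to well under a percent.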
In Fig.~\\ref{fig:nfit}, we show a graph of $n - n_\\infty$,\nwhere the asymptotic value of the linear monopole density $n_\\infty$\nis taken from a fit to the functional form\n\\begin{equation}\n\\label{e:nFit}\nn = n_\\infty + A\\exp(-B m_1 t).\n\\end{equation}\nFits are shown with dashed lines, and fit parameters are given in\nTable \\ref{t:nFitPar}.\n\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{ll|lll}\n$m_1^2$ & $m_2^2$ & $\\frac{n_\\infty}{m_1}$ & $A$ & $B$ \\\\\n\\hline\n0.1 & 0.04 & 0.036 & 0.031 & 0.0072 \\\\\n0.1 & 0.02 & 0.023 & 0.060 & 0.0104 \\\\\n0.1 & 0.01 & 0.025 & 0.075 & 0.0134 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\label{t:nFitPar} Parameters for the fit of the linear\n monopole density data in Fig.~\\ref{fig:nfit} to the function\n (\\ref{e:nFit}). All simulations are radiation era, with $s=0$. }\n\\end{table}\n\nThe fits confirm the visual impression that the linear monopole\ndensity is asymptoting to a constant non-zero value, and also support\nthe exponential ansatz for the relaxation.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[clip=true,width=0.5\\textwidth]{nfit.pdf}\n\\end{center}\n\\caption{\\label{fig:nfit} The difference of the linear monopole\n density $n$ from its asymptotic value $n_\\infty$. The parameter\n $n_\\infty$ is extracted from a fit of $n$ to a constant plus an\n exponential decay [see Eq.~(\\ref{e:nFit})]; the fits are shown as\n dashed lines. Both $n$ and the time are scaled by $m_1$ to make\n dimensionless quantities. Only those values of $m_2\/m_1$ where a\n reliable fit is possible are shown; for other values, the change in\n $n$ is too small.
}\n\\end{figure}\n\n\n\n\\subsection{Monopole velocities}\n\n\\begin{figure*}[t!]\n\t\\begin{center}\n\t\t\\includegraphics[clip=true,width=0.45\\textwidth]{monopole-velocities-s1-scaled-dbv.pdf}\n\t\t\\includegraphics[clip=true,width=0.45\\textwidth]{string-velocities-s1-scaled-dbv.pdf}\n\t\t\\includegraphics[clip=true,width=0.45\\textwidth]{monopolesvel-all_dbv.pdf}\n\t\t\\includegraphics[clip=true,width=0.45\\textwidth]{stringsvel-all_dbv.pdf}\n\t\\end{center}\n\t\\caption{\\label{fig:rmsvels1} Plot of $\\bar v$ and $\\bar{v}_\\text{m}$,\n the root mean square string and monopole\/semipole\n velocities, for $s=1$ (top) and $s=0$ (bottom). The time\n axis is scaled by $d_\\text{BV}$, defined in Eq.~(\\ref{eq:dbvdefn}).\n }\n\\end{figure*}\n\nFig.~\\ref{fig:rmsvels1} shows the RMS velocities of the strings,\nmonopoles and semipoles for different masses, cross-couplings $\\kappa$,\nand expansion rate parameters $\\nu$. The RMS velocities all appear to\nasymptote at the same rate $d_\\text{BV}^{-1}$ to a constant value.\n\nWe see that the RMS string velocities are all around $0.5$. When the\nfield mass parameters $m_1$ and $m_2$ are different, the RMS monopole\nvelocities are also all about 0.5, independent of the mass ratio and\nthe expansion rate. If the mass parameters are the same, the RMS\nmonopole velocity at about $0.63$ is a little higher than the RMS\nstring velocity. RMS velocities are consistent between $s=1$ and\n$s=0$, with the exception of the semipoles at $s=0$, which appear to\nmove a little slower ($\\bar{v}_\\text{m} \\simeq 0.6$) than at $s=1$ ($\\bar{v}_\\text{m}\n\\simeq 0.68$).\n\n\nThe higher velocities of the semipoles should make collisions more\nfrequent than those between monopoles and antimonopoles. 
However, as\nobserved in the Introduction, semipole collisions need not result in\nannihilation, and so the higher velocities do not necessarily result\nin a lower monopole density.\n\nWe interpret the difference $\\bar v_\\text{rel}^2 = \\bar{v}_\\text{m}^2 - \\bar v^2$ as the mean\nsquare relative velocity of the monopoles and semipoles along the\nstring. One can estimate that, for semipoles, $\\bar v_\\text{rel} \\simeq 0.3$,\nwhile there is little evidence for relative motion of monopoles.\n\n\n\\section{Conclusions}\n\nWe have carried out simulations of non-Abelian cosmic strings, formed\nby the symmetry-breaking scheme SU(2)$\\to Z_2$ by two adjoint scalar\nfields. This theory has classical solutions which can be interpreted\nas 't Hooft-Polyakov monopoles or semipoles~\\cite{Hindmarsh:2016lhy}\nthreaded by non-Abelian strings. We observe the formation of cosmic\nnecklaces, consisting of networks of strings and monopoles or\nsemipoles.\n\nOur simulations were carried out in a cosmological background\ncorresponding to a radiation dominated era, and also one with half the\nexpansion rate of a radiation-dominated universe, testing the effect\nof the expansion rate. We performed simulations both with the true\nexpanding universe equations of motion, and allowing the cores of the\ntopological defects to grow with the expansion of the universe. Core\ngrowth has been shown not to significantly affect the dynamics of\nstrings \\cite{Bevis:2006mj,Bevis:2010gj,Daverio:2015nva}, but its\neffect on the dynamics of necklaces is important to check.\n\nIn all cases, our numerical results are consistent with the evolution\ntowards a scaling network of necklaces, with both the density of\nstrings and the density of monopoles proportional to $t^{-2}$. We\nobtain scaling with or without core growth, giving confidence that\nscaling is a robust feature of a necklace network. 
A necklace network\nshould therefore contribute a constant fraction to the energy density\nof the universe.\n\n\nWe observe that the number of monopoles per unit comoving length of\nstring $n$ changes little from its value at the formation of the\nstring network: monopole annihilation on the string is therefore not\nas efficient as envisaged in Ref.~\\cite{BlancoPillado:2007zr}, and the\naverage comoving separation of monopoles along the string $d = 1\/n$\nremains approximately constant. The monopole-to-string density ratio\n$r$ therefore decreases in inverse proportion to the scale factor, and\ndoes not increase as proposed in Ref.~\\cite{Berezinsky:1997td}. The\nRMS monopole velocity is close to the RMS string velocity, implying\nthat the monopoles have no significant motion along the string. In\nparticular, the suggestion that the monopole RMS velocity should be\n50\\% larger than the string RMS velocity \\cite{BlancoPillado:2007zr},\ndue to the extra degree of freedom of motion, is not supported.\n\n\nThe number per unit comoving length of semipoles is also approximately\nconstant in the simulations with core growth, but grows slightly in\nthe simulations using the true equations of motion. We do not have\nlarge enough dynamic range to establish whether this is a finite\nvolume effect. The semipole RMS velocity is higher than the string\nRMS velocity, indicating some relative motion of the semipoles along\nthe string. Annihilation is still inefficient despite the relative\nmotion, indicating that repulsion between semipoles is an important\nfactor in the dynamics.\n\nIn the special case where the strings carry a supercurrent, the\ncomoving distance $d$ between points where the $\\Phi_1$ field vanishes\nalso stays approximately constant. The supercurrent along the string\ncan be estimated as $j \\sim 1\/ad$, where $a$ is the scale factor, and\nshould therefore decrease.
This suggests that current is lost from\nshrinking loops of string, which would tend to prevent the formation\nof cosmologically disastrous stable string loops\n\\cite{Ostriker:1986xc,Copeland:1987th,Davis:1988ij}.\n\nWe are restricted to simulating necklace configurations with $r \\sim\n1$, so we are not able to fully test the robustness of the\nconstant comoving $d$ scaling regime. Nonetheless, we find it\ninteresting to explore the consequences, as it was not anticipated in\nprevious dynamical modelling, which envisaged that $d$ would either\nshrink to the string width~\\cite{Berezinsky:1997td}, or grow with the\nhorizon size \\cite{BlancoPillado:2007zr}. The absence of a\nsignificant relative velocity between monopoles and strings indicates\nthat monopoles are dragged around by the strings, independent of the\nratio of the energy scales. The average string separation is of order\nthe conformal time $t$, which means that loops of string shrink and\nannihilate on that timescale. We infer that the main monopole\nannihilation channel is through collisions on shrinking loops of\nstring.\n\nAs argued in \\cite{Hindmarsh:2016lhy}, semipoles and monopoles are\ngeneric on strings in GUT models. It is interesting to consider their\nobservational implications. As usual with strings, one must\nextrapolate the results of numerical simulations to a much larger\nratio of the horizon size to the string width, and it is possible that\nsubtle effects change the scaling of the network. It is clear in our\nsimulations that, just as with Abelian Higgs strings, our SU(2)\nstrings lose energy efficiently into Higgs and gauge radiation.\nHowever, the process that causes the strings to emit radiation of\nmassive Higgs and gauge fields is not well understood, and it may not\nbe efficient over the huge range of scales between today's horizon\nsize and the width of a GUT string.
In this case, a necklace would end\nup behaving like ideal Nambu-Goto strings connecting massive\nparticles, as assumed in \\cite{Berezinsky:1997td} and\n\\cite{BlancoPillado:2007zr}.\n\nIn the case where field radiation is efficient, there is little\ndifference between a network of GUT strings with monopoles or\nsemipoles and an Abelian Higgs string network. The network length\nscale grows in proportion to the horizon, and its energy density\nremains a constant fraction of the total. The energy is lost to\nmassive particles, which (if coupled to the Standard Model) will show\nup in the diffuse $\\gamma$-ray background. Current observations from\nFermi-LAT indicate that the mass per unit length in Planck units\n$G\\mu$ is bounded above by $3 \\times 10^{-11} f^{-1}_\\text{SM}$, where\n$f_\\text{SM}$ is the fraction of the strings energy ending up in\n$\\gamma$-rays~\\cite{Mota:2014uka}. This fraction is likely to be close to\nunity in a GUT theory, and so such strings are essentially ruled out,\nas observed some time ago \\cite{Vincent:1997cx}. However, strings in a\nhidden sector are subject only to constraints from the Cosmic\nMicrowave\nBackground~\\cite{Moss:2014cra,Charnock:2016nzm,Lizarraga:2016onn},\nwhich are $G\\mu \\lesssim 10^{-7}$.\n\nIn the case where the string dynamics eventually changes over to\nNambu-Goto, the difference between a necklace network and an ordinary\ncosmic string network is more dramatic with our new picture that the\ncomoving distance between monopoles remains approximately constant\nfrom the time the strings formed. For GUT scale strings forming along\nwith the monopoles, this is bounded above by the horizon distance at\nthe GUT temperature, or a few metres today. Even if the scale of the\nU(1) symmetry-breaking is as low as a TeV, this distance is\nO($10^{12}$) m today, a factor $10^{-14}$ smaller than the horizon\nsize. 
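The orders of magnitude quoted here can be checked with rough radiation-era numerology (a sketch only: the O(1) prefactor, g_* = 100 and the present horizon size are assumptions):

```python
import numpy as np

# Comoving monopole separation today, taken as the horizon size at a
# TeV-scale transition redshifted to the present.  All prefactors are
# schematic order-of-magnitude choices.
GeV_inv_m = 1.97e-16      # hbar*c in GeV*m
M_Pl = 1.22e19            # Planck mass, GeV
T0 = 2.35e-13             # photon temperature today, GeV
g_star = 100.0            # relativistic degrees of freedom (assumption)

T = 1.0e3                 # symmetry-breaking scale ~ 1 TeV, in GeV
t_hor = 0.3 * M_Pl / (np.sqrt(g_star) * T**2)   # cosmic time in GeV^-1
d_today = t_hor * GeV_inv_m * (T / T0)          # stretched by redshift, in m
ratio = d_today / 1.3e26                        # vs. horizon size today (m)
```

Within the O(1) factors this reproduces the quoted separation of order 10^12 m and its ~10^-14 suppression relative to the present horizon.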
When horizon-size string loops are chopped off the long string\nnetwork, they will therefore have a large number of monopoles on them.\nNumerical investigations indicate \\cite{Siemens:2000ty} that such\nstring loops do not have periodic non-self-intersecting solutions. We\ncan therefore expect them to quickly chop themselves up into smaller\nand smaller loops, some of which will be free of monopoles and find\nstable periodic non-self-intersecting trajectories. In this case, the\ntypical loop size for a GUT scale string would be a few metres rather\nthan the horizon size. Hence, the tight bounds on the Nambu-Goto\nstring tension from msec pulsar timing obtained by the European Pulsar\nTiming Array \\cite{Lentati:2015qwp} and NANOGrav\n\\cite{Arzoumanian:2015liz} would be avoided, as the gravitational\nwaves would be at frequencies inaccessible to direct observation.\n\n\n\\begin{acknowledgments}\nWe acknowledge fruitful discussions with Jarkko J\\\"arvel\\\"a during the\ninitial stages of this project. Our simulations made use of the COSMOS\nConsortium supercomputer (within the DiRAC Facility jointly funded by\nSTFC and the Large Facilities Capital Fund of BIS). DJW was supported\nby the People Programme (Marie Sk{\\l}odowska-Curie actions) of the\nEuropean Union Seventh Framework Programme (FP7\/2007-2013) under grant\nagreement number PIEF-GA-2013-629425. 
MH acknowledges support from\nthe Science and Technology Facilities Council (grant number\nST\/L000504\/1).\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcdmq b/data_all_eng_slimpj/shuffled/split2/finalzzcdmq new file mode 100644 index 0000000000000000000000000000000000000000..f749ad4b352b545f68a0734c6398f4a376666ac6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcdmq @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nRecently nanostructure technology has made it possible to create\nquasi one-dimensional electronic structures, the so-called quantum\nwires \\cite{field_1d,scott-thomas_1d,kastner_coulomb,goni_gas1d,%\ncalleja_gas1d,tarucha_wire_1d}.\nExperimentally situations have been reached where the width of such\na wire is of the order of the Fermi wavelength of the conduction electrons,\nwhich makes it a good realization of a one-dimensional electron gas\n\\cite{calleja_gas1d}.\n\nIn such a system one expects the electron-electron interactions to play an\nimportant role. In particular, at variance with what happens in other\none-dimensional conductors, the long-range Coulomb interaction in a quantum\nwire is not screened since the wire contains only one channel of\nelectrons. 
One can therefore expect very different physical properties\nthan those of a Luttinger liquid with short-range interactions.\nIt has been proposed that due to these long-range interactions, the electrons\nin a quantum wire will form a Wigner crystal \\cite{schulz_wigner_1d}.\nThe formation of a Wigner crystal can be described as a modulation of the\ncharge density $\\rho(x) \\sim \\rho_0 \\cos(Qx+2\\sqrt{2}\\Phi)$ where\n$\\rho_0$\nis the uniform amplitude of the charge density, $Q= 4 k_F$ its wave\nvector\nand $\\Phi$ describes the location and the motion of the Wigner crystal.\nThe existence of such a Wigner crystal should have observable consequences on\nthe transport properties of the system. Indeed, in the presence of impurities\nthe Wigner crystal will be pinned: the phase $\\Phi(x)$ adjusts to the impurity\npotential on a scale given by $L_0$ called the pinning length.\nThis process of pinning is analogous to what happens in charge density\nwaves in the presence of impurities \\cite{lee_rice_cdw,fukuyama_pinning}.\nSince, in the presence of long-range Coulomb interactions,\nthe most divergent fluctuation is now a $4 k_F$ density\nfluctuation \\cite{schulz_wigner_1d}, the transport properties are\ndominated by $4 k_F$ scattering on impurities, and not the\nusual $2 k_F$ scattering, as was assumed previously\n\\cite{ogata_wires_2kf,fukuyama_wires_2kf}. Due to the $4 k_F$\nscattering, one can also expect different\ntransport properties than those of a Luttinger liquid with short-range\ninteractions\n\\cite{apel_impurity_1d,suzumura_scha,giamarchi_loc_lettre,giamarchi_loc}\nwhere $2 k_F$ fluctuations are the dominant one.\n\nNon linear $I-V$ curves have been observed experimentally\nwhich could be interpreted as the result of pinning\n\\cite{kastner_coulomb}. 
Up to now only short wires have been made,\nin which only a few impurities are present and dominate the transport.\nEven in that case, theoretical work has mainly focussed on systems\nwith short-range interactions\n\\cite{kane_qwires_tunnel_lettre,kane_qwires_tunnel,glazman_single_impurity}.\nFor long wires, it is important to consider the case of a\nuniform disorder, e.g. a thermodynamic number of impurities, as well as\nthe long-range Coulomb forces.\n\nIn this\npaper we study the effects of disorder on the transport\nproperties of a quantum wire.\nAlthough the problem is very close to that\nof a charge density wave pinned by impurities, there are important\ndifferences that are worth investigating. Due to the long-range\nnature of the forces one can expect modifications of the pinning\nlength and frequency dependence of the conductivity. In addition quantum\nfluctuations have to be taken into account; in the case of\nshort-range forces they are known to drastically affect the transport properties\ncompared to a classical situation \\cite{suzumura_scha,giamarchi_loc}.\n\nThe plan of the paper is as follows. The model is derived in\nsection~\\ref{model}. Effects of the pinning and the pinning length are\nstudied in section~\\ref{static}. The frequency dependence of the\nconductivity is computed in section~\\ref{conductivity} and the\ntemperature dependence of the conductivity (and conductance)\nis discussed in section~\\ref{temperature}.\nDiscussion of\nthe comparison with experiments and conclusions can be found in\nsection~\\ref{conclusion}. 
Some technical details about the treatment\nof quantum fluctuations can be found in the appendices.\n\n\\section{Model} \\label{model}\n\nWe consider a gas of electrons confined in a channel of\nlength $L$, with a width\n$d\\ll L$ and a thickness $e\\ll d\\ll L$.\nWe will assume that both $d$ and $e$ are small enough for\nthe system to be regarded as one-dimensional, meaning that\nonly one band is filled in the energy-spectrum of the electrons.\nSuch a situation will be realized when $d$ and $e$ become comparable to\nthe Fermi wavelength.\nIn the following we will therefore keep only the degrees of freedom\nalong the wire.\nSince we are interested only in low energy excitations\nwe can linearize the spectrum around the Fermi points\nand take for the free part of the Hamiltonian:\n\\begin{equation} \\label{free}\nH_0 = v_F \\sum _k (k-k_F) a^{\\dag }_{+,k}a_{+,k} + (-k-k_F)\na^{\\dag }_{-,k}a_{-,k}\n\\end{equation}\nwhere $v_F$ is the Fermi velocity and\n$a^{\\dag }_{+,k}$ ($a^{\\dag }_{-,k}$) is the creation\noperator of an electron on the right(left)-going branch\nwith wave-vector $k$. In addition we assume that the electrons\ninteract through the Coulomb interaction\n\\begin{equation}\\label{interact}\nH_c = \\frac1{2}\\int _0 ^L \\int _0 ^L dx dx' V(x-x') \\rho (x) \\rho (x')\n = \\frac1{2L} \\sum _k V_k \\rho _k \\rho _{-k}\n\\end{equation}\nIn a strictly one-dimensional theory, a $\\frac1{r}$ Coulomb potential\nhas no Fourier transform because of the divergence for\n$r\\to 0$. 
In the real system such a divergence does not exist\nowing to the finite width d of the wire.\nWe will use for $V(r)$ the\nfollowing approximate form \\cite{gold_1dplasmon} which cuts the\nsingularity at $r \\approx d$, and gives the correct asymptotic behavior\nat large $r$\n\\begin{equation}\nV(r) = \\frac{e^2}{\\sqrt{r^2+d^2}}\n\\end{equation}\nthe Fourier transform of which is\n\\begin{equation}\nV(q)=\\int_{-L\/2}^{L\/2} dr V(r)e^{iqr} \\approx 2e^2 K_0(qd)\n\\end{equation}\nwhere $K_0$ is a Bessel function, and one has assumed the wire to be\nlong enough $L\\to \\infty$.\nIn the following we shall frequently use the asymptotic expression\n\\begin{equation}\nK_0(qd) \\approx -\\ln (qd) \\qquad \\text{when} \\qquad qd \\ll 1\n\\end{equation}\n\nThe model (\\ref{free}) plus (\\ref{interact}) has been studied by\nSchulz \\cite{schulz_wigner_1d} who showed that\nthe system is dominated by $4k_F$ charge\ndensity wave fluctuations, which decay as\n\\begin{equation}\n\\langle \\rho_{4k_f}(x)\\rho_{4k_f}(0)\\rangle \\sim e^{-\\ln^{1\/2}(x)}\n\\end{equation}\nThe presence of such a $4k_F$ charge fluctuation can be\nviewed as the formation of a Wigner crystal.\nIn order to describe the pinning of such a Wigner crystal\nwe add to the hamiltonian (\\ref{free}) and (\\ref{interact})\nthe contribution due to impurities.\nWe assume that impurities are located in the wire at random sites\n$X_j$, and that each impurity acts on the electrons with a\npotential $V_{imp}$.\nWe will assume in the following that the potential\ndue to the impurities is short-ranged, and will replace it by a delta\nfunction.\n\\begin{equation}\nV_{imp}(x-X_j) = V_0\\delta (x-X_j)\n\\end{equation}\nThe part of the Hamiltonian stemming from a particular configuration of\nthe impurities is then\n\\begin{equation} \\label{imp}\nH_{imp} = \\sum_j \\int_0^L V_0\\delta (x-X_j) \\rho (x) = \\sum_j V_0\\rho (X_j)\n\\end{equation}\n\nIn order to treat the problem we use the representation of fermion\noperators 
in terms of boson operators\n\\cite{solyom_revue_1d,emery_revue_1d}.\nOne introduces the phase field\n\\begin{equation}\n\\Phi (x) = - {i\\pi \\over L} \\sum_{k\\ne 0} {1 \\over k}e^{-ikx}(\\rho _{+,k}+\n\\rho _{-,k})\n\\end{equation}\nwhere $\\rho _{+,k}$($\\rho _{-,k}$) are the charge density operators for\nright(left)-moving electrons,\nand $\\Pi$, the momentum density conjugate to $\\Phi$.\nThe boson form for (\\ref{free}) plus (\\ref{interact})\nis \\cite{schulz_wigner_1d}\n\\begin{equation} \\label{starting}\nH_0+H_c = {u \\over 2\\pi }\\int_0^L dx \\lbrack K(\\pi \\Pi)^2\n + {1 \\over K} (\\partial _x \\Phi)^2 \\rbrack\n + {1 \\over \\pi^2} \\int _0^L \\int _0^L dxdx' V(x-x')\n (\\partial _x \\Phi(x)) (\\partial _{x'}\\Phi(x'))\n\\end{equation}\n$K$ is a number containing the backscattering effects due to the\nFourier components of the interaction close to $2 k_F$ and $u$ is the\nrenormalized Fermi velocity due to the same interactions\n\\cite{solyom_revue_1d,emery_revue_1d,schulz_wigner_1d}.\nWe have taken $\\hbar =1$ in (\\ref{starting}).\nThe long-range nature of the Coulomb interaction manifests itself in\nthe last term of (\\ref{starting}). As we shall make precise in the following,\nboth $K$ and the Coulomb potential $V$ control the strength of quantum\neffects.\n\nSince for (\\ref{starting}) the most divergent fluctuation\ncorresponds to a $4 k_F$ charge modulation \\cite{schulz_wigner_1d},\nwe will consider only\nthe coupling of the impurities with this mode and ignore the $2 k_F$\npart of the charge density. 
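As a numerical aside, the Fourier-transform pair used above for the regularized potential, $V(r)=e^2/\sqrt{r^2+d^2}$ with $V(q)\approx 2e^2K_0(qd)$, and the small-argument behavior $K_0(qd)\approx-\ln(qd)$, can be checked by elementary quadrature. This is a sketch in dimensionless units with $e^2=1$; the cutoffs and step sizes are ad hoc choices:

```python
import math

def K0(x):
    # Modified Bessel function via the integral representation
    # K0(x) = int_0^inf exp(-x cosh t) dt (trapezoid rule).
    n, tmax = 200000, 30.0
    h = tmax / n
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)))
    for i in range(1, n):
        s += math.exp(-x * math.cosh(i * h))
    return h * s

def V_fourier(q, d):
    # Truncated Fourier integral 2 int_0^R cos(qr)/sqrt(r^2+d^2) dr, e^2 = 1.
    n, R = 200000, 400.0
    h = R / n
    s = 0.5 * (1.0 / d + math.cos(q * R) / math.sqrt(R * R + d * d))
    for i in range(1, n):
        r = i * h
        s += math.cos(q * r) / math.sqrt(r * r + d * d)
    return 2.0 * h * s

q, d = 1.0, 0.1
vq, bessel = V_fourier(q, d), 2 * K0(q * d)
print(round(vq, 1), round(bessel, 1))                 # both close to 4.9
print(round(K0(0.01), 2), round(-math.log(0.01), 2))  # -ln(qd) is the leading term
```

The direct integral and $2K_0(qd)$ agree to a fraction of a percent, and at small argument $K_0$ differs from $-\ln(qd)$ only by the constant $\ln 2-\gamma$, which the asymptotic expression drops.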
The range of validity of such an\napproximation will be discussed in the following.\nUsing the boson representation of the density\n\\cite{solyom_revue_1d,emery_revue_1d} and impurity Hamiltonian\n(\\ref{imp}), the total Hamiltonian becomes\n\\begin{eqnarray}\nH & = & {u \\over 2\\pi }\\int _0^L dx \\lbrack K(\\pi \\Pi)^2\n + {1 \\over K} (\\partial _x \\Phi )^2 \\rbrack\n + \\sum _j V_0\\rho _0 \\cos(4 k_F X_j + 2\\sqrt{2}\\Phi (X_j))\n \\nonumber \\\\\n & & + \\frac1{\\pi ^2} \\int _0^L \\int _0^L dxdx'\nV(x-x') (\\partial _x\\Phi(x)) (\\partial _{x}\\Phi(x')) \\label{total}\n\\end{eqnarray}\nwhere $\\rho_0$ is the average density of electrons.\nThe hamiltonian (\\ref{total}) has similarities with the phase\nHamiltonian of a pinned charge density wave\n\\cite{fukuyama_pinning}.\nSimilarly to the CDW case one can expect the\nphase to distort to take advantage of the impurity potential, leading to\nthe pinning of the Wigner crystal.\nAs for standard CDW, one has to distinguish\nbetween strong and weak pinning on the impurities\n\\cite{fukuyama_pinning}. In the first case\nthe phase adjusts itself on each impurity site. This corresponds to a\nstrong impurity potential or dilute impurities. In the weak pinning\ncase, the impurity potential is too weak or the impurities too close\nfor the phase to be able to adjust on each impurity site, due to the\ncost in elastic energy. Although the problem has similarities with the\nCDW problem, there are two important a priori physical differences that\nhave to be taken into account:\ncompared to the CDW case, one has\nto take into account the long-range Coulomb interaction. One can\nexpect such an interaction to make the Wigner crystal more rigid than a CDW\nand therefore more difficult to pin. In addition, for the Wigner\ncrystal, one cannot neglect the quantum term\n($\\Pi^2$) as is usually done for the CDW problem owing to the large\neffective mass of the CDW. 
In the absence of long-range interactions such\na term is known to give important quantum corrections\n\\cite{suzumura_scha,giamarchi_loc} on both the pinning length and the\nconductivity.\n\nIn the following sections we will examine both cases of strong and weak\npinning.\n\n\\section{Calculation of the pinning length}\n\\label{static}\nLet us first compute the pinning\nlength $L_0$ over which the phase $\\Phi (x)$ in the ground state varies\nin order to take advantage of the impurity potential.\nIf the impurities are dilute enough, or the impurity potential strong\nenough,\nthe phase $\\Phi (x)$ adjusts on each impurity site such that\n$\\cos(4 k_F X_j + 2\\sqrt{2}\\Phi (X_j))=-1$. This is the so-called strong\npinning regime \\cite{fukuyama_pinning} where\nthe pinning length is the distance between impurities\n$L_0 = n_i^{-1}$. If the impurities are dense enough, or their potential\nweak enough then the cost of elastic and Coulomb energy in distorting the\nphase has to be balanced with the gain in potential energy. One is in\nthe weak pinning regime where the pinning length can be much larger than\nthe distance between impurities.\nIn this regime, we calculate $L_0$ using\nFukuyama and Lee's method developed for the CDW\n\\cite{fukuyama_pinning,lee_coulomb_cdw}.\nThis method neglects the quantum fluctuations of the phase, and the\neffect of such fluctuations will be discussed at the end of this\nsection.\nOne assumes that the phase $\\Phi$ varies on a scale $L_0$. One can\ntherefore divide the wire in segments of size $L_0$ where the phase is\nroughly constant and takes the optimal value to gain the maximum pinning\nenergy. $L_0$ is determined by\noptimizing the total gain in energy, equal to the gain in potential\nenergy minus the cost in elastic and Coulomb energy. 
If one assumes\nthat the phase varies by an amount of order $2\\pi$ over a length\n$L_0$, the cost in elastic energy per unit length is\n\\begin{equation}\\label{eps-el}\n{\\cal E}_{el} = {u \\over 2\\pi K} {1 \\over \\alpha L_0^2}\n\\end{equation}\nwhere $\\alpha$ is a number of order unity depending on the precise\nvariation of the phase. Since the impurity potential varies randomly\nin segments of length $L_0$, the gain per unit length due to pinning is\n\\cite{fukuyama_pinning}\n\\begin{equation} \\label{eps-imp}\n{\\cal E}_{\\rm imp}(L_0) = - V_0\\rho _0 ({n_i \\over L_0})^{1 \\over 2}\n\\end{equation}\nIn our case we also have to consider the cost in Coulomb energy.\n\\begin{equation}\n{\\cal E}_{coul} = {1 \\over L} {1 \\over \\pi ^2} \\int _0^L dx \\int _0^L dx'\nV(x-x') \\langle \\partial _x \\Phi \\rangle_{av} \\langle \\partial _{x'} \\Phi\n\\rangle_{av}\n\\end{equation}\nwhere the subscript {\\it av} indicates that the quantity is averaged over all\nimpurity configurations. 
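The $\sqrt{n_i/L_0}$ dependence in ${\cal E}_{\rm imp}$ comes from a random-walk argument: a segment of length $L_0$ contains $N=n_iL_0$ impurities whose phases $4k_FX_j$ are effectively random, and the best a single overall phase shift can gain is of order $\sqrt{N}$ times the single-impurity energy, i.e. $\propto\sqrt{n_i/L_0}$ per unit length. A minimal Monte Carlo illustration of the $\sqrt N$ scaling (generic random phases, nothing wire-specific):

```python
import math
import random

random.seed(0)

def best_gain(N):
    # Max over phi of sum_j cos(theta_j - phi) for N random phases theta_j,
    # which equals |sum_j exp(i theta_j)| (a 2D random walk of N unit steps).
    re = im = 0.0
    for _ in range(N):
        t = random.uniform(0.0, 2.0 * math.pi)
        re += math.cos(t)
        im += math.sin(t)
    return math.hypot(re, im)

for N in (100, 400, 1600):
    avg = sum(best_gain(N) for _ in range(100)) / 100
    print(N, round(avg / math.sqrt(N), 2))  # ratio roughly constant, ~0.9
```

The near-constant ratio shows the optimal pinning gain grows as $\sqrt N$ rather than $N$, which is exactly what (\ref{eps-imp}) encodes.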
Since one assumes\nthat the phase varies by an amount of order $2\\pi$ over a length\n$L_0$, the phases for electrons separated by more than\n$L_0$ are uncorrelated, so that the interactions between such pairs of\nelectrons do not contribute to the energy.\nThe calculation can thus be reduced to the evaluation of the energy for\na segment of length $L_0$\n\\begin{equation} \\label{eps-coul}\n{\\cal E}_{coul} \\approx {1 \\over \\pi ^2}{1 \\over L_0}\nL_0 \\int _{-L_0}^{L_0} du V(u) {\\langle \\Phi ^2(x) \\rangle _{av} \\over L_0^2}\n= {2e^2 \\over \\pi ^2\\alpha L_0^2} \\ln {L_0 \\over d}\n\\end{equation}\nwhere $\\alpha$ is the constant introduced in (\\ref{eps-el}).\n\nThe minimization of the total energy provides a self-consistent expression\nfor $L_0$:\n\\begin{equation} \\label{implicit}\nL_0 = ({8e^2 \\over \\alpha \\pi ^2 V_0\\rho _0n_i^{1 \\over 2}})^{2 \\over 3}\n\\ln ^{2 \\over 3} ({CL_0 \\over d})\n\\end{equation}\nwhere $C$ is a constant of order one\n\\begin{equation} \\label{constant}\nC = e^{({\\pi u \\over 4Ke^2} - {1 \\over 2})}\n\\end{equation}\nTaking typical values $u=3\\times 10^7\\,cm.s^{-1}$ so that\n$\\hbar u=3.15\\times 10^{-20}$ e.s.u. and $K=0.5$,\none gets\n$C \\approx 0.75$. 
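Restoring $\hbar$, the exponent in (\ref{constant}) reads $\pi\hbar u/(4Ke^2)$ in Gaussian units, and the quoted value of $C$ can be checked in two lines (the electron charge in e.s.u. is our own input, not given in the text):

```python
import math

# Check C = exp(pi*hbar*u/(4*K*e^2) - 1/2) for the quoted typical values.
hbar_u = 3.15e-20        # hbar*u [erg.cm], from u = 3e7 cm/s
e2 = (4.803e-10) ** 2    # e^2 [erg.cm], electron charge in e.s.u. (assumed input)
K = 0.5

C = math.exp(math.pi * hbar_u / (4 * K * e2) - 0.5)
print(round(C, 2))  # 0.75
```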
For these typical values of the parameters\nthe contribution of the elastic (short-range) part of the Hamiltonian to\nthat result is negligible compared to that of the Coulomb term.\nIn the following, since we expect $\\frac{L_0}{d} \\gg 1$ we approximate\n$\\ln \\frac{CL_0}{d} \\sim \\ln \\frac{L_0}{d}$.\n\nNeglecting $\\log(\\log)$ corrections, one can solve (\\ref{implicit})\nto get\n\\begin{equation} \\label{length}\nL_0 = \\Biggl(\\frac{8e^2}{\\alpha \\pi ^2 V_0\\rho_0\n n_i^{\\frac1{2}}}\\Biggr)^{\\frac2{3}}\n\\ln ^{\\frac2{3}} \\Biggl( \\frac 1{d} \\Bigl({8e^2 \\over \\alpha\n\\pi ^2V_0\\rho _0n_i^{1\/2}}\\Bigr)^{\\frac2{3}}\\Biggr)\n\\end{equation}\nCompared to the pinning length of a CDW\n\\cite{fukuyama_pinning},\n$L_0 \\approx (\\frac {v_F}{\\alpha \\pi V_0\\rho_0n_i^{1\/2}})^{2\/3}$, the\npinning length (\\ref{length}) is enhanced by a logarithmic factor.\nThis is due to the Coulomb interaction which enhances the rigidity of\nthe system and makes it more difficult to pin than a classical\nCDW.\n\nThe expression (\\ref{length}) has been derived for the weak pinning case\nwhere $L_0 \\gg n_i^{-1}$. The crossover to the strong pinning regime\noccurs when the\nphase can adjust itself on each impurity\nsite and $L_0 = n_i^{-1}$.\nOne can introduce a dimensionless quantity $\\epsilon_0$\ncharacterizing the two regimes\n\\begin{equation} \\label{epsilono}\n\\epsilon_0 = \\frac{\\alpha \\pi^2 V_0\\rho _0}{8 n_i e^2}\n\\ln ^{-1} \\Biggl( \\frac 1{d} \\Bigl({8e^2 \\over \\alpha\n\\pi ^2V_0\\rho _0n_i^{1\/2}}\\Bigr)^{\\frac2{3}}\\Biggr)\n\\end{equation}\nThe weak pinning regime corresponds to $\\epsilon_0 \\ll 1$, and strong pinning\nstarts at $\\epsilon_0 \\simeq 1$. Compared to a CDW where\n$\\epsilon_0 = \\frac{V_0\\rho_0}{n_iv_F}$, the domain of weak pinning is\nlarger due to the Coulomb interaction. 
This is again a consequence of\nthe enhanced rigidity of the system that makes it more difficult to pin.\nTo study the conductivity it is also convenient to introduce\n\\begin{equation} \\label{epsilon}\n\\epsilon = \\frac{V_0 \\rho _0}{n_ie^2}\n\\end{equation}\nIndeed we have evaluated, using typical values $d=10^{-8}$m and\n$L_0 \\simeq 10^{-6}$m, (estimated for typical wires in\nsection~\\ref{conclusion}),\nthat $\\epsilon_0 \\simeq 2 \\epsilon$ so that $\\epsilon$ can also be used\nas criterion to distinguish the two regimes of pinning.\n\nExpressions (\\ref{length}) and (\\ref{epsilono}) do not take into account\nthe effects of quantum fluctuations. In the absence of Coulomb\ninteractions, the quantum fluctuations drastically increase the\npinning length compared to the classical case\n\\cite{suzumura_scha,giamarchi_loc} giving a pinning length (for a $4\nk_F$ dominant scattering)\n\\begin{equation}\nL_0 \\sim (1\/V_0)^{2\/(3-4K)}\n\\end{equation}\nTo compute the effect of the quantum fluctuations in the presence of the\nCoulomb interaction we use the self-consistent harmonic approximation\n\\cite{suzumura_scha} for the cosine term in (\\ref{starting})\n\\begin{equation} \\label{scha}\n\\cos (Q x+2\\sqrt{2}(\\Phi_{cl}+\\hat{\\Phi}))=e^{-4\\langle \\hat{\\Phi}^2(x)\n\\rangle}\n\\cos(Qx+2\\sqrt{2}\\Phi_{cl}) (1-4(\\hat{\\Phi}^2(x)-\\langle \\hat{\\Phi}^2(x)\n\\rangle))\n\\end{equation}\nwhere $\\Phi =\\Phi_{cl}+\\hat{\\Phi}$ and $\\hat{\\Phi}$ represents\nthe quantum fluctuations around the classical solution $\\Phi_{cl}$.\nThe average $\\langle\\hat{\\Phi} \\rangle$ has to be done self\nconsistently. 
Such a calculation is performed in appendix~\\ref{quant}\nand one obtains for the pinning length\n\\begin{equation}\nL_0 =({8e^2 \\over \\alpha \\pi ^2 V_0\\rho _0\\gamma n_i^{1 \\over 2}})^{2 \\over 3}\n\\ln ^{2 \\over 3} ({CL_0 \\over d})\n\\end{equation}\nwhere $\\gamma = e^{-4\\langle\\hat{\\Phi}^2\\rangle} \\approx e^{-\\frac{8\\tilde{K}}\n{\\sqrt{3}}\\ln^{1\/2}V_0}$ and\n$\\tilde{K} = \\frac{\\sqrt{\\pi uK}}{2\\sqrt{2}e}$ instead of\n(\\ref{length}). The quantum fluctuations can thus\nbe taken into account by replacing $V_0$ by the effective impurity\npotential $V_0 \\gamma$. There is an increase of the pinning length\ndue to the quantum fluctuations which can be considerable since\n$\\gamma \\ll 1$. Opposite to what happens for the case of short\nrange interactions, there is no correction in the exponent for the\npinning length. This can be traced back to the fact that the correlation\nfunctions decay much more slowly ($e^{-\\ln^{1\/2}(r)}$ instead of a\npower law), therefore the system is much more ordered and\nthe fluctuations around the ground state are much less important. As a\nconsequence even if one is dealing with a system of electrons, and not a\nclassical CDW, the\nCoulomb interactions push the system to the classical limit\nwhere quantum fluctuations can be neglected except for the redefinition\nof the impurity potential $V_0 \\to V_0 \\gamma$. Note that this effect\ncan be very important quantitatively, since $L_0$ is very large for\ndilute impurities. Such a fluctuation effect also contributes to make\nthe system more likely to be in the weak pinning regime.\n\nWe will in the following make the assumption that all quantum\nfluctuation effects have been absorbed in the proper redefinition of the\npinning length. Such an approximation will be valid as long as one is\ndealing with properties at low enough frequencies. 
At high frequencies\nthe effect of quantum fluctuations will again be important and will be\nexamined in section~\\ref{LargeFreq}.\n\n\\section{Calculation of the conductivity}\n\\label{conductivity}\n\nIn order to study the transport properties, one makes an expansion\naround the static solution $\\Phi_0(x)$ studied in section (\\ref{static})\nthat minimizes the total energy \\cite{fukuyama_pinning}, assuming\nthat the deviations $\\Psi(x,t)$ are small\n\\begin{equation}\n\\Psi (x,t) = \\Phi (x,t) -\\Phi _0(x)\n\\end{equation}\nOne can expand the Hamiltonian in $\\Psi(x,t)$ to quadratic order\n\\begin{eqnarray} \\label{expansion}\n{\\cal H} _{\\Psi } & = & {u \\over 2\\pi } \\int _0^L dx K(\\pi \\Pi )^2 + {1\n\\over K}(\\partial _x\\Psi )^2\n+ {1 \\over \\pi ^2} \\int _0^L \\int _0^L dx dx' V(x-x') \\partial _x\\Psi\n\\partial _{x'}\\Psi \\nonumber \\\\\n & & - 4\\sum _j V_0\\rho _0 \\cos(4k_F X_j +2\\sqrt{2}\\Phi _0(X_j))\n(\\Psi (X_j))^2\n\\end{eqnarray}\nThis expansion is valid in the classical case. We assume that for the\nquantum problem all quantum corrections are absorbed in the proper\nredefinition of the pinning length $L_0$, as explained in\nsection~\\ref{static} and appendix~\\ref{quant}.\nSuch corrections do not affect\nthe frequency dependence of the conductivity. 
From\nKubo formula and the representation of the current in terms\nof the field $\\Psi$, the conductivity takes the form\n\\begin{equation}\n\\sigma (\\omega ) = 2i\\omega ({e \\over \\pi} )^2 {\\cal D}\n(0,0;i\\omega _n) \\rfloor _{i\\omega _n \\rightarrow \\omega + i0^+}\n\\end{equation}\nwhere ${\\cal D}(q,q';i\\omega _n)$ is the Green's function of the field\n$\\Phi$\n\\begin{equation}\n{\\cal D}(q,q';i\\omega _n) = \\int _0^{\\beta} d\\tau e^{i\\omega\n_n\\tau} \\langle T_{\\tau }\\Psi _q(\\tau )\\Psi _{-q'}(0)\\rangle\n\\end{equation}\nwith $\\Psi _q(\\tau )=e^{H\\tau }\\Psi _qe^{-H\\tau }$, $\\beta = T^{-1} (k_B=1)$\nand $\\omega _n=2\\pi nT$, where\n$T$ is the temperature, and $T_ {\\tau }$ is the time-ordering operator.\nOur problem is then reduced to the evaluation of this Green function. From\n(\\ref{expansion}) one gets the Dyson equation\n\\begin{equation} \\label{dysondep}\n{\\cal D}(q,q';i\\omega _n) = {\\cal D}_0(q,i\\omega _n) \\lbrack \\delta _{q,q'}\n+ 8V_0\\rho _0 \\sum _{q''} S(q''-q){\\cal D}(q'',q';i\\omega _n) \\rbrack\n\\end{equation}\nwhere\n\\begin{equation}\nS(q)=\\frac1{L} \\sum _j e^{iqX_j} \\cos(QX_j+2\\sqrt{2}\\Phi _0(X_j))\n\\end{equation}\nAfter averaging over all impurity configurations (\\ref{dysondep})\nbecomes\n\\begin{equation}\n\\langle {\\cal D}(q,q';i\\omega _n)\\rangle_{av} =\n\\delta _{q,q'}{\\cal D}(q,i\\omega _n)\n= {1 \\over {\\cal D}_0(q,i\\omega _n)^{-1}-\\Sigma (q,i\\omega _n)}\n\\end{equation}\nwhere the self-energy term $\\Sigma $ contains all connected contributions to\n${\\cal D}$, and ${\\cal D}_0$ is the free Green Function\n\\begin{equation}\n{\\cal D}_0(q,i\\omega _n)=\n{\\pi uK \\over \\omega _n^2 + q^2u^2(1+{2KV(q) \\over \\pi u})}\n\\end{equation}\nIn a similar fashion than for CDW we will compute the self-energy, using\na self-consistent Born approximation \\cite{fukuyama_pinning}, for the\ntwo limiting cases of strong and weak pinning.\n\n\\subsection{Weak pinning case $(\\epsilon \\ll 
1)$}\n\\label{weak}\n\nIn that case, as for standard CDW \\cite{fukuyama_pinning},\nthe self-energy can be expanded to second-order in perturbation,\n$\\Sigma \\approx \\Sigma _1 + \\Sigma _2$.\nIndeed we easily verify that in the weak pinning case\n$\\Sigma _1 \\sim\\Sigma _2 \\sim n_i^2(V_0\\rho_0\/n_i)^{4\/3}$,\nwhereas for $n \\ge 1$, $\\Sigma _{2n+1}=0$ and\n$\\Sigma _{2n} \\sim n_i^2(\\frac{V_0\\rho_0}{n_i})^{\\frac{2+2n}{3}}$. Since\n$\\frac{V_0\\rho_0}{n_i} \\sim \\epsilon e^2 \\ll 1$, self-energy terms of higher\norder than $\\Sigma_2$ are negligible. $\\Sigma_1$ is\neasily computed as\n\\begin{equation} \\label{sigma1}\n\\Sigma _1 = 8V_0\\rho_0\\langle S(0)\\rangle_{av} =\n-8V_0\\rho _0 ({n_i \\over L_0})^{1 \\over 2}\n\\end{equation}\nsince again one can divide the wire into $L\/L_0$ segments of length $L_0$,\nand use, as for equation (\\ref{eps-imp}), the random-walk argument of\nreference \\onlinecite{fukuyama_pinning} which gives\n\\begin{equation} \\label{cos}\n\\frac1{L}\\langle\\sum_j \\cos(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\\rangle_{av}\n\\approx \\sqrt{\\frac{n_i}{L_0}}\n\\end{equation}\n\n$\\Sigma _2$ is given by\n\\begin{equation}\n\\Sigma _2 = (8V_0\\rho _0)^2 \\sum _{q''} {\\cal D} _0(q'',i\\omega\n_n)\\langle S(q''-q)S(q-q'') \\rangle_{av}\n\\end{equation}\nIf one assumes that there is\nno interference between scattering on different impurities (single site\napproximation), then\nthe exponentials in $\\langle S(q''-q)S(q-q'')\\rangle_{av}$\ncancel and we find\n\\begin{eqnarray} \\label{sigma2}\n\\Sigma_2 & = & ({8V_0\\rho _0 \\over L})^2 \\sum _{q''} {\\cal D}\n_0(q'',i\\omega _n)\n\\langle \\sum _j \\cos^2(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\\rangle_{av} \\\\\n& =& 64{n_i \\over 2}(V_0\\rho _0)^2 {1 \\over L} \\sum _{q''} {\\cal D}\n_0(q'',i\\omega _n) \\nonumber\n\\end{eqnarray}\nThe approximation\n$\\frac1{L}\\langle \\sum _j \\cos^2(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\\rangle_{av}\n\\approx \\frac1{2}n_i$ is valid in the weak pinning case only. 
A more general\nresult is\n\\begin{eqnarray}\n\\frac1{L}\\langle \\sum _j \\cos^2(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\\rangle_{av}\n& = & \\frac1{L}\\langle \\sum_j \\frac1{2}(1+\\cos 2(QX_j + 2\\sqrt{2}\\Phi _0(X_j))\n\\rangle_{av} \\nonumber \\\\\n& \\approx & \\frac1{2}(n_i + \\sqrt{\\frac{n_i}{L_0}}) \\label{SquCos}\n\\end{eqnarray}\nbut in the weak pinning\ncase it can be simplified using $n_iL_0 \\gg 1$.\n\n$\\Sigma _2$ given by (\\ref{sigma2}) diverges as\n$\\frac1{|\\omega_n|}\\ln \\frac1{|\\omega_n|}$ when $|\\omega_n| \\to 0$, so\none has to compute $\\Sigma$ self-consistently, and\nreplace ${\\cal D}_0$ by ${\\cal D}$ in the calculation of $\\Sigma_2$.\n(\\ref{sigma2}) is replaced by\n\\begin{equation} \\label{prime}\n\\Sigma _2' = 32n_i(V_0\\rho _0)^2 {1 \\over L} \\sum _{q''}{\\cal\nD}(q'',i\\omega _n)\n= 32n_i(V_0\\rho _0)^2 {1 \\over L} \\sum _{q''}\\lbrack {\\cal\nD}_0^{-1}(q'',i\\omega _n) - \\Sigma (q'',i\\omega _n) \\rbrack ^{-1}\n\\end{equation}\ngiving the self-consistent equation for $\\Sigma$\n\\begin{equation} \\label{exacte}\n\\Sigma = \\Sigma_1 + \\Sigma _2'\n = -8V_0\\rho_0 \\sqrt{\\frac{n_i}{L_0}}\n + 16 n_i(V_0\\rho_0)^2(\\pi uK)\\frac 2{\\pi}\n \\int _0^{\\infty} \\frac{dq}{-\\omega^2-\\pi uK\\Sigma +\n q^2u^2(1+\\alpha_c K_0(qd))}\n\\end{equation}\nwhere we have done the analytic continuation $i\\omega_n \\to \\omega\n+i0^+$ and we have noted $\\alpha_c = \\frac{4Ke^2}{\\pi u}$.\nIt is convenient to rescale (\\ref{exacte}) by the pinning frequency\n$\\omega^*$ defined by\n\\begin{equation} \\label{PinFreq}\n\\omega^{*3}\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\n =16n_iu(V_0\\rho_0\\pi K)^2 \\frac 1{\\sqrt{\\alpha_c}}\n={8\\pi ^{5\/2}(uK)^{3\/2} \\over e} n_i(V_0\\rho_0)^2\n\\end{equation}\nwhere $\\tilde{u}=u\\sqrt{\\alpha_c}$.\nIn terms of $L_0$, (\\ref{PinFreq}) can be rewritten as\n\\begin{equation}\n\\omega^* \\ln^{1\/6}\\frac{\\tilde{u}}{\\omega^*d}=\n 4\\alpha^{-2\/3}\\tilde{u}L_0^{-1}\\ln^{2\/3}\\Bigl({L_0 \\over 
d}\\Bigr)\n\\end{equation}\nNeglecting $\\log(\\log)$ factors, and in the limit $L_0 \\gg d$, which allows us\nto discard the constants in the logarithm\n($\\ln\\frac{\\tilde{u}}{\\omega^*d} \\approx \\ln \\frac {L_0}{d}$),\nwe obtain\n\\begin{equation} \\label{pinningob}\n\\omega^*\\approx 4\\alpha^{-2\/3}\\tilde{u}L_0^{-1}\\ln^{1\/2}\\Bigl({L_0 \\over d}\n\\Bigr)\n\\end{equation}\nLeaving aside a factor $4\\alpha^{-2\/3}$, $\\omega^*$ given by\n(\\ref{pinningob}) is\nthe characteristic frequency of a segment of the wire of length $L_0$.\nIndeed if we model the wire as a collection of independent oscillators of\ntypical length $L_0$ and use the dispersion law $\\omega \\sim\nq\\ln^{1\/2}q$ of\nthe Wigner crystal \\cite{schulz_wigner_1d}, those oscillators have\nthe frequency $\\omega_0=\\tilde{u}L_0^{-1}\\ln^{1\/2}\\Bigl({L_0 \\over d}\\Bigr)$.\nNumerically we find $4\\alpha^{-2\/3} \\approx 1$ so that actually\n$\\omega^* \\approx \\omega_0$.\nIntroducing the rescaled quantities\n$y=\\frac{\\omega}{\\omega^* }$ and $G={\\pi uK\\Sigma \\over \\omega^{*2}}$,\nwe rewrite (\\ref{exacte}) as\n\\begin{equation} \\label{rescaled}\nG = G_1 + G'_2\n\\end{equation}\nwith $G_1 = -\\alpha^{\\frac 1{3}}$\nand\n\\begin{equation} \\label{integrale}\nG'_2=\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\\sqrt{\\alpha_c}\\frac 2{\\pi}\n\\int_0^{\\infty} \\frac{dt}{-y^2-G+t^2(1+\\alpha_cK_0(\\frac{\\omega^*d}{u}t))}\n\\end{equation}\nThe rescaled conductivity is:\n\\begin{equation} \\label{conduct}\n\\omega^* \\Re e \\sigma (y) = - \\frac{2uKe^2}{\\pi} y\\Im m ({1 \\over -y^2-G})\n\\end{equation}\n\nThe full solution of (\\ref{rescaled}) has to be obtained numerically,\nbut it is possible to obtain analytically the asymptotic expressions at\nsmall and large frequencies. 
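Before turning to the asymptotics, the oscillator picture above can be given rough numbers using the typical values quoted in the paper ($u=3\times10^7\,$cm/s, $K=0.5$, $d\sim10^{-8}\,$m, $L_0\simeq10^{-6}\,$m); the electron charge in e.s.u. is our own input:

```python
import math

# Rough estimate of omega_0 = (u_tilde/L0) * ln^{1/2}(L0/d), u_tilde = u*sqrt(alpha_c).
hbar_u = 3.15e-20        # hbar*u [erg.cm] (u = 3e7 cm/s)
e2 = (4.803e-10) ** 2    # e^2 [erg.cm], electron charge in e.s.u. (assumed input)
K = 0.5
u = 3.0e5                # u in SI units [m/s]
d, L0 = 1.0e-8, 1.0e-6   # wire width and pinning length [m]

alpha_c = 4 * K * e2 / (math.pi * hbar_u)     # dimensionless
u_tilde = u * math.sqrt(alpha_c)
omega_0 = u_tilde / L0 * math.sqrt(math.log(L0 / d))
print(f"alpha_c ~ {alpha_c:.1f}, omega_0 ~ {omega_0:.1e} Hz")  # ~ 1e12 Hz
```

The result, $\omega_0\sim10^{12}\,$Hz, is consistent with the value of $\omega^*$ used later in the paper.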
To evaluate the integral $G'_2$, one notices\nthat there is a frequency $\\omega_{cr}$ above which the Coulomb term\nwill be negligible compared to the kinetic (short-range) term.\n$\\omega_{cr}$ defines a crossover length $\\xi_{cr} \\sim u\/\\omega_{cr}$\nwhich is roughly given by\n\\begin{equation}\n\\xi_{cr} \\sim d e^{1\/\\alpha_c} = d e^{\\frac{\\pi u}{4 K e^2}}\n\\end{equation}\nUsing a numerical estimate for $\\alpha_c$ and the values of the Bessel\nfunction one gets $\\alpha_c K_0(x) \\sim 1$ for $x\\sim 1.5$, giving a\ncrossover frequency $\\omega_{cr}\n\\sim 1.5\\frac {\\tilde{u}}{d} \\sim 10^{14} Hz$. Such a frequency\nis two orders of magnitude larger than the pinning frequency\n$\\omega^*$.\nFor frequencies above\n$\\omega_{cr}$ the system is dominated by short-range interactions:\nin that case the dominant fluctuations are always the $2k_F$ charge\nfluctuations and not the $4k_F$ ones, and therefore the model\n(\\ref{total}) is not applicable. One has to take into account\nthe pinning on a $2k_F$ fluctuation as done in references\n\\onlinecite{suzumura_scha,giamarchi_loc}.\nNote that it\nmakes sense to use a one-dimensional model to describe the behavior\nabove $\\omega_{cr}$ only if $\\xi_{cr} \\gg d$. This can occur for example\nif the short-range interactions are strong enough so that $K$ is very\nsmall. With the numerical values of $u$ that seem relevant for\nexperimental quantum wires and assuming that $K$ is not too small,\n$K\\sim 0.5$, one gets $\\xi_{cr} \\sim d$. 
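The numbers just quoted follow from the same typical parameters ($u=3\times10^7\,$cm/s, $K=0.5$, $d=10^{-8}\,$m); this is a sketch, with the electron charge in e.s.u. as our own input:

```python
import math

# Crossover length xi_cr ~ d*exp(1/alpha_c) and frequency omega_cr ~ 1.5*u_tilde/d.
hbar_u = 3.15e-20              # hbar*u [erg.cm]
e2 = (4.803e-10) ** 2          # e^2 [erg.cm], electron charge in e.s.u. (assumed)
K, u, d = 0.5, 3.0e5, 1.0e-8   # K, u [m/s], d [m]

alpha_c = 4 * K * e2 / (math.pi * hbar_u)
xi_cr = d * math.exp(1.0 / alpha_c)            # crossover length
omega_cr = 1.5 * u * math.sqrt(alpha_c) / d    # crossover frequency
print(f"xi_cr/d ~ {xi_cr / d:.1f}")            # ~ 1.2, i.e. xi_cr ~ d
print(f"omega_cr ~ {omega_cr:.0e} Hz")         # ~ 1e14 Hz
```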
Therefore one can assert that in\nall the range of frequencies for which the problem can be considered\nas one-dimensional, Coulomb interactions will dominate.\nConsequently the result of the integration (\\ref{integrale}) is,\nwhen $\\omega^*\\sqrt{-y^2-G} \\ll \\omega_{cr}$\n\\begin{equation} \\label{small}\nG'_2 =\\frac 1{\\sqrt{-y^2-G}}\\ln^{-1\/2}\\frac{\\tilde{u}}{d\\omega^*\\sqrt{-y^2-G}}\n\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\n\\end{equation}\nLet us focus on small frequencies $\\omega \\ll \\omega^*$.\nWe will show that in that limit\n$\\omega^*\\sqrt{-y^2-G}\\sim \\omega^* \\ll \\omega_{cr}$, so that we use\n(\\ref{small}) and replace in (\\ref{rescaled})\n\\begin{equation}\nG = G_1 + {1 \\over \\sqrt{-y^2-G}} \\ln ^{-1\/2}\n{\\tilde{u} \\over \\omega^* d\\sqrt{-y^2-G}}\n\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\n\\qquad \\text{when} \\qquad y \\ll 1\n\\end{equation}\n$G$ tends to a limit $G_0$ verifying\n\\begin{equation} \\label{Gzero}\nG_0 = G_1 + (-G_0)^{-1\/2}\\ln ^{-1\/2}\\Bigl(\\frac{\\tilde{u}}{d\\omega^*}\n(-G_0)^{-1\/2}\\Bigr)\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\n\\end{equation}\nEquation (\\ref{Gzero}) has different classes of solutions depending on the\nvalue of $\\alpha$ (zero, one or two roots),\nbut the only physically relevant situation\nis the case of a single solution\n(for a discussion, see reference \\onlinecite{fukuyama_pinning}). The\ncorresponding value of\n$G_0$ is\n\\begin{equation} \\label{value}\nG_0 \\approx -2^{-2\/3}\n\\end{equation}\nExpanding (\\ref{Gzero}) in terms of $y$ and of $G-G_0$ around\nthat solution, we find\n\\begin{equation} \\label{Gexpansion}\nG-G_0 = \\pm i\\frac 2{\\sqrt{3}}(-G_0)^{1\/2}y\n\\qquad \\text{for} \\qquad y\\ll 1\n\\end{equation}\nWe assumed in deriving (\\ref{value}) and (\\ref{Gexpansion}) that\n$\\ln^{-1}(\\tilde{u}\/d\\omega^* )$ is small compared to $1$. This can be\nverified numerically for the parameters we are taking. 
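The smallness of $\ln^{-1}(\tilde{u}/d\omega^*)$ can indeed be verified in one line for the parameters in question, taking $\tilde{u}\approx6.5\times10^5\,$m/s (our estimate of $u\sqrt{\alpha_c}$):

```python
import math

# Smallness of 1/ln(u_tilde/(d*omega*)) for omega* ~ 1e12 Hz, d ~ 1e-8 m.
u_tilde = 6.5e5                # m/s, u*sqrt(alpha_c) (assumed estimate)
d, omega_star = 1.0e-8, 1.0e12

x = u_tilde / (d * omega_star)       # dimensionless, ~65
print(round(1.0 / math.log(x), 2))   # ~ 0.2, small compared to 1
```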
Using\n$\\omega^* \\sim 1\\times 10^{12}Hz$ (as estimated in section\n\\ref{conclusion}) and taking $d \\sim 10^{-8}m$ one obtains\n$\\ln^{-1}(\\frac {\\tilde{u}}{d\\omega^*}) \\sim 0.2$.\nReplacing in (\\ref{conduct}), we find for the conductivity\n\\begin{equation} \\label{result}\n\\omega^* \\Re e \\sigma (y) = {uKe^2 \\over \\pi } \\frac 8{\\sqrt{3}}y^2\n\\qquad (y \\to 0)\n\\end{equation}\n\nOne can now check that the hypothesis\n$\\omega^*\\sqrt{-y^2-G}\\sim \\omega^* \\ll \\omega_{cr}$\nis indeed verified.\n$\\sqrt{-y^2-G}$ is well defined for $y \\ll 1$\nsince $-G_0$ is positive, and $\\sqrt{-y^2-G} \\sim 1$ since\n$\\sqrt{-G_0}$ is of the order of $1$.\n\nWe have plotted in figure~\\ref{conducti} the full frequency behavior\nof the conductivity, together with the analytic estimate at small\nfrequencies.\nThe small $\\omega$ behavior as well as the general shape of the\nconductivity is very similar to that of a classical charge density\nwave: the small $\\omega$ conductivity behaves as $\\omega^2$, and there\nis a maximum at the pinning frequency $\\omega^*$ followed by a decrease\nas $1\/\\omega^4$. As shown in appendix~\\ref{quant}, the quantum\nfluctuations do not change the frequency dependence for frequencies\nlower than\n$\\omega^*$. The large frequency behavior will be analyzed in detail in\nsection~\\ref{LargeFreq}.\n\nThe low frequency conductivity obtained in our approximation is to be\ncontrasted with the previous result of Efros and\nShklovskii \\cite{efros_coulomb_gap}, who found\nthat the low frequency conductivity of a one-dimensional electron gas in\nthe presence of Coulomb interactions should behave as $\\omega$.\nTheir result was derived in a very different physical limit, where the\nlocalization length is much smaller than the interparticle distance,\nwhereas the implicit assumption in deriving the model (\\ref{total}) is that the\nlocalization length is much larger than the interparticle distance\n$k_F^{-1}$.
In the limit that was considered in\n\\onlinecite{efros_coulomb_gap} the phase\n$\\phi$ would consist of a series of kinks of width $l$ (the localization\nlength) located at random positions (with an average spacing\n$k_F^{-1} \\gg l$).\nThe low-energy excitations that are taken into account in\n\\onlinecite{shklovskii_conductivity_coulomb} would\ncorrespond to soliton-like\nexcitations for the phase\n$\\phi$, where the phase jumps by $2\\pi$ between two distant kinks.\nIn the physical limit we are considering, $k_F^{-1} \\ll L_0$, the phase\n$\\phi$ has no kink-like structure but rather smooth distortions between\nrandom values at a scale of order $L_0$. To get the dynamics, the\napproximation we are using only retains the small ``phonon''-like\ndisplacements of the phase $\\phi$ relative to the equilibrium position,\nand no ``soliton''-like excitations are taken into account.\nIn the absence of Coulomb interactions the phonon-like excitations\nalone, when treated exactly in the classical limit $K\\to 0$,\nare known \\cite{vinokur_cdw_exact} to give the\ncorrect frequency dependence of the conductivity\n$\\omega^2\\ln^2(1\/\\omega)$ (the self-consistent Born approximation only\ngets the $\\omega^2$ and misses the log correction).\nWhen Coulomb\ninteractions are included and one is in the limit where the localization\nlength is much larger than the interparticle distance, it is not clear\nwhether soliton-like excitations similar to those considered by Efros\nand Shklovskii have to be taken into\naccount. From the solution of a uniform sine-Gordon equation, one\ncould naively say that solitons are only important when the quantum\neffects are large ($K \\sim 1$).
In the classical limit $K \\to 0$, the\nphonon modes have a much lower energy than the soliton excitations, and\nthe physical behavior of the system should be dominated by such modes.\nWe would therefore argue that the conductivity is given correctly by\nour result (up to possible log corrections) and behaves\nas $\\omega^2$, and not\n$\\omega$, at least if the system is classical enough ($K$ small) thanks\nto the {\\bf short-range} part of the interaction. If our assumption is\ncorrect, the crossover towards the Efros and Shklovskii result when the\ndisorder becomes stronger would be very interesting to study.\n\n\\subsection{Strong pinning case $\\epsilon > 1$}\n\nLet us now look at the opposite limiting case of strong pinning.\nIn that case one cannot expand the self-energy $\\Sigma$; all the\nsingle-site contributions \\cite{fukuyama_pinning} have to be summed.\nThe result of that summation is\n\\begin{equation}\n\\Sigma = (-8V_0\\rho_0n_i) {1 \\over 1+8V_0\\rho_0A}\n\\end{equation}\nwhere $A$ is defined by\n\\begin{equation}\\label{self-consist}\nA=\\frac1{L} \\sum_{q} {\\cal D}(q,i\\omega_n)\n\\end{equation}\nHere we rescale the conductivity by the characteristic frequency\n\\begin{equation}\n\\omega_0= n_i \\tilde{u}\\ln ^{1\/2}({1 \\over d n_i})\n\\end{equation}\ncorresponding to a pinning length $L_0 \\sim n_i^{-1}$.
It is thus the analog\nof $\\omega^*$, up to a factor $4\\alpha^{-2\/3} \\approx 1$.\nWe use as rescaled parameters\n$\\overline{y}={\\omega \\over \\omega_0}$\nand $\\overline{G}={\\pi uK\\Sigma \\over \\omega_0^2}$, in terms of which\nthe expression of the conductivity is similar to\n(\\ref{conduct}), with $y$, $G$ and $\\omega^*$ replaced respectively by\n$\\overline{y}$, $\\overline{G}$, and $\\omega_0$.\nThe solution is quite similar to what was done for the CDW\n\\cite{fukuyama_pinning}, so that we give only the main results.\n\nThe exact equation for the rescaled self-energy $\\overline{G}$ is\n\\begin{equation} \\label{self}\n\\overline{G}=-\\lbrack \\frac1{2\\pi^2}\\ln\\frac{\\tilde{u}}{\\omega_0d}\n\\frac1{\\epsilon}+ \\sqrt{\\alpha_c}\\ln^{1\/2}\\frac{\\tilde{u}}{\\omega_0d}\n\\frac1{\\pi} \\int_0^{\\infty}\n\\frac{dt}{-(-y^2-\\overline{G})+t^2(1+\\alpha_cK_0(\\frac{\\omega_0d}{u}t))}\n\\rbrack ^{-1}\n\\end{equation}\nwhere $\\epsilon$, the strength of the pinning, was defined in (\\ref{epsilon}).\nThe numerical solution of this equation gives the conductivity plotted\nin Fig.~\\ref{conductiStrong}, for different values of $\\epsilon$.\nThere is a gap below a frequency $\\omega_{\\text{lim}} < \\omega_0$\nclose to the pinning frequency, tending to it as $\\epsilon$\nincreases. In the extremely strong pinning limit $\\epsilon \\gg 1$ one\ncan obtain the conductivity analytically.
The equation for the self\nenergy (\\ref{self}), after replacing the integral by its analytical\napproximation, which can be taken from (\\ref{small}) since\n$\\omega_0 \\ll \\omega_{cr}$, is:\n\\begin{equation}\n \\overline{G}= -2\\Bigl(\\ln^{-1\/2}\\frac{\\tilde{u}}{\\omega_0d}\\Bigr)\n \\sqrt{-y^2-\\overline{G}}\n \\ln^{1\/2}\\frac{\\tilde{u}}{\\omega_0d\\sqrt{-y^2-\\overline{G}}}\n\\end{equation}\nwhere the integral is given by (\\ref{weak})\nsince $\\omega_0 \\ll \\omega_{cr}$.\nIn this limit $\\omega_{\\text{lim}}=\\omega_0$\nand the conductivity is given near the threshold by\n\\begin{equation}\n\\omega_0 {\\cal R}e\\sigma(\\overline{y}) \\approx \\frac{4\\sqrt{2}uKe^2}{\\pi}\n\\sqrt{\\overline{y}-1}\n\\end{equation}\nThis gap below $\\omega_{\\text{lim}}$ is not physical and is an\nartifact of considering only the mean distance $n_i^{-1}$\nbetween impurities.\nIn the real system there is a finite probability of finding neighboring\nimpurities farther apart than $n_i^{-1}$. Such configurations\nwill give contributions at frequencies smaller than\n$\\omega_{\\text{lim}}$.\nAn estimation of those contributions can be done in a similar way as\nfor a CDW \\cite{gorkov_cdw_strong,fukuyama_pinning}.\nThe probability of finding two neighboring impurities at a distance $l$\nis $n_ie^{-n_i l}$.
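As an aside, the square-root onset quoted above can be cross-checked by dropping the slowly varying inner logarithm (setting the ratio of the two log factors to one), which reduces the self-energy equation to $\overline{G} = -2\sqrt{-\overline{y}^2-\overline{G}}$, a simple quadratic equation. A minimal numerical sketch (the branch choice below is for illustration only):

```python
import math

# Log-free reduction of the strong-pinning self-energy equation:
# setting the ratio of the two slowly varying log factors to one gives
#   G = -2*sqrt(-y**2 - G)  =>  G**2 + 4*G + 4*y**2 = 0.
def G_root(y):
    disc = 1.0 - y ** 2
    if disc >= 0.0:
        return -2.0 + 2.0 * math.sqrt(disc)       # real root below threshold
    return complex(-2.0, 2.0 * math.sqrt(-disc))  # complex root above it

# Below ybar = 1 the roots are real, so Re(sigma) = 0: this is the
# gap, with omega_lim = omega_0 in this extreme limit.
assert G_root(0.5).imag == 0.0

# Just above threshold Im(G) = 2*sqrt(y**2 - 1) ~ 2*sqrt(2)*sqrt(y - 1),
# reproducing the sqrt(ybar - 1) onset of the conductivity.
eps = 1e-4
onset = G_root(1.0 + eps).imag
assert abs(onset / (2.0 * math.sqrt(2.0) * math.sqrt(eps)) - 1.0) < 1e-2
print("sqrt(ybar - 1) onset verified")
```

Both roots of the quadratic are real for $\overline{y}<1$, so there is no dissipation below the threshold, and the imaginary part just above it grows as $2\sqrt{2}\sqrt{\overline{y}-1}$, in agreement with the prefactor structure of the near-threshold conductivity.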
In the strong pinning case, where we model our pinned\nCDW by a collection of independent oscillators with frequencies\n${u\\pi \\over l}\\ln^{1\/2}(l\/d)$,\nthe conductivity for $\\omega < \\omega_{\\text{lim}}$ will then be\nproportional to the sum of the contributions over all possible $l$\n\\begin{eqnarray}\n{\\cal R}e\\sigma(\\omega) & \\sim & \\int_0^{\\infty} dl \\; n_i e^{-n_i l}\n\\delta(\\omega -{u\\pi \\over l}\\ln^{1\/2}{l \\over \\pi d})\\nonumber \\\\\n& \\sim & \\omega ^{-2}\\ln^{1\/2}\\frac1{\\omega d}\ne^{-\\pi n_i \\frac u{\\omega}\\ln^{1\/2}\\frac u{\\omega d}}\n\\end{eqnarray}\nCompared to a CDW, the conductivity in the pseudo-gap is\nlowered in the presence of Coulomb interactions. This can again be\nrelated to the fact that the long-range forces make the Wigner crystal\nmore rigid.\n\n\\subsection{Large frequency conductivity}\n\\label{LargeFreq}\n\nWe focus now on large frequencies $\\omega \\gg \\omega^*$ (respectively\n$\\omega_0$),\nwhere we expect the physics to be determined over segments of typical size\n$l_{\\omega} \\sim \\frac{\\tilde{u}}{\\omega} \\ll L_0$ (respectively $n_i^{-1}$),\nso that intuitively the behavior of the conductivity should be\nindependent of whether we are in the strong or weak pinning regime.\nIndeed, at high $\\omega$ the conductivity can always be computed\nusing the approximation $\\Sigma \\approx \\Sigma _1 + \\Sigma _2$,\nwhatever the pinning regime, since the self-energy terms $\\Sigma_n$ are\nof order $(\\frac1{\\omega})^{n-1}$.\nHowever, we recall that we made drastic assumptions on the phase $\\Phi$,\ndepending on the pinning regime. To be consistent, they should\ngive similar results at high frequencies.\nLet us first start from the weak pinning regime: at $\\omega \\ll \\omega^*$ we\nsupposed the physics to be determined on domains of length $L_0$ on which\nthe phase $\\Phi$ is roughly constant. If we now increase $\\omega$ above\n$\\omega^*$ we simply replace $L_0$ by $l_{\\omega}$ in the evaluation\nof $\\Sigma_1$ and $\\Sigma_2$.
More precisely, (\\ref{cos}) and (\\ref{SquCos})\nare replaced by:\n\\begin{eqnarray}\n\\frac1{L}\\langle \\sum_j \\cos(QX_j +2\\sqrt{2}\\Phi_0(X_j)) \\rangle_{av} &=&\n\\sqrt{\\frac{n_i}{l_{\\omega}}} \\\\\n\\frac1{L}\\langle \\sum_j \\cos^2(QX_j+2\\sqrt{2}\\Phi_0(X_j))\\rangle_{av}\n&=&\\frac1{2}n_i(1+\\frac 1{\\sqrt{n_il_{\\omega}}})\n\\end{eqnarray}\nThis is of course valid as long as $l_{\\omega} \\gg n_i^{-1}$; for shorter\n$l_{\\omega}$ those averages saturate at the values:\n\\begin{eqnarray} \\label{satur}\n\\frac1{L}\\langle \\sum_j\\cos(QX_j+2\\sqrt{2}\\Phi_0(X_j)) \\rangle_{av} &=& n_i \\\\\n\\label{satur2}\n\\frac1{L}\\langle \\sum_j\\cos^2(QX_j+2\\sqrt{2}\\Phi_0(X_j))\\rangle_{av} &=& n_i\n\\end{eqnarray}\nStarting from the strong pinning case and keeping the picture of the phase\nbeing adjusted on each impurity site, we find expressions identical to\n(\\ref{satur}) and (\\ref{satur2}), regardless of the frequency.\n\nIn the end, using the results of section \\ref{weak}, we compute the\nconductivity to be\n\\begin{equation} \\label{highcon}\n\\omega^* \\Re e \\sigma (y) = \\frac{c_{\\Phi}4uKe^2}{\\pi}\ny^{-4}\\ln^{-1\/2}\\frac{\\tilde{u}}{d\\omega^*y}\\ln^{1\/2}\\frac{\\tilde{u}}\n{d\\omega^*}\n\\end{equation}\nwhen $\\omega^*,\\omega_0 \\ll \\omega \\ll \\omega_{cr}$, and where $c_{\\Phi}$\nis a numerical coefficient between $\\frac1{2}$ and $1$. More\nprecisely\n\\begin{eqnarray} \\label{cphi}\nc_{\\Phi} &=& \\frac1{2}(1+\\frac1{\\sqrt{n_il_{\\omega}}}) \\qquad\n\\text{for} \\qquad l_{\\omega} \\ge n_i^{-1} \\\\\n c_\\Phi &=& 1 \\qquad \\text{for} \\qquad l_{\\omega} \\le n_i^{-1}\n\\nonumber\n\\end{eqnarray}\nwhich sums up both the weak and strong pinning results.\n\nThe result (\\ref{highcon}) does not take into account the effect of\nquantum fluctuations.
Such effects are expected to become important for\nfrequencies larger than the pinning frequency.\nFor short-range interactions, using renormalization group techniques\n\\cite{giamarchi_loc,giamarchi_umklapp_1d}, one can show that if it is\npossible to\nneglect the renormalization of the interactions by disorder (for example\nfor very weak disorder) the conductivity becomes\n(for a $4 k_F$ pinning)\n$\\sigma(\\omega) \\sim \\omega^{4K-4}$ instead of $\\omega^{-4}$ due to\nquantum effects, and would be $\\sigma(\\omega) \\sim \\omega^{K-3}$\nfor $2 k_F$ scattering.\nAlthough one can derive these results\nand get the conductivity at high frequency for long-range interactions,\nusing the memory function formalism \\cite{gotze_fonction_memoire}\nin a way similar to\n\\onlinecite{giamarchi_umklapp_1d,giamarchi_attract_1d},\nwe will show here how to use the SCHA to get the high frequency\nconductivity.\nA naive way to take the frequency into account in the SCHA\nis to divide the system into segments of length $\\frac u{\\omega}$, and\nlook at the system on the scale of such a segment. Using this method it is\npossible to rederive the results for the short-range interactions\nand tackle the case of long-range interactions in which we\nare interested. Such a calculation is performed\nin appendix~\\ref{HighFreq}. Instead of (\\ref{highcon}), one gets\n\\begin{equation} \\label{highconq}\n\\omega^* \\Re e \\sigma (y) = \\frac{c_{\\Phi} 4uKe^2}{\\pi}\ny^{-4}\\ln^{-1\/2}\\frac{\\tilde{u}}{d\\omega^*y}\n\\ln^{1\/2}\\frac{\\tilde{u}}{d\\omega^*}\ne^{-8\\sqrt{2}\n\\tilde{K} \\ln^{\\frac 1{2}}\\frac{\\tilde{u}}{(\\omega^* y d)}}\n\\end{equation}\n From (\\ref{highconq}) one sees that,\nas far as exponents are concerned,\nthe conductivity still decays as $1\/\\omega^4$.
This would correspond to\na nearly classical ($K \\sim 0$) system with short-range interactions.\nNote that in this limit the $4 k_F$ scattering is indeed dominant over\nthe $2 k_F$ one, since the latter would only give a conductivity decaying\nas $1\/\\omega^3$ for $K\\to 0$ (in the above power laws the frequency is\nnormalized by the bandwidth so that $\\omega \\ll 1$).\n\n\\section{Temperature dependence of the conductivity}\n\\label{temperature}\n\nOne can use arguments similar to the ones introduced in\nsection~\\ref{conductivity} to obtain the temperature dependence of the\nconductivity. Instead of having a cutoff length imposed by the frequency\n$\\omega \\sim u\/l_\\omega$, or more precisely as in (\\ref{pinningob})\nwhen Coulomb interactions dominate, one can introduce a thermal length\n$\\xi_T$ such that $T \\sim u\/\\xi_T$, which will act as a similar cutoff.\nInstead of rederiving all the expressions as a function of the\ntemperature, it is simpler to use the following relations for the\nconductivity\n\\cite{gotze_fonction_memoire,giamarchi_umklapp_1d,giamarchi_attract_1d}\n\\begin{eqnarray} \\label{functional}\n\\sigma(\\omega,T=0) \\sim M(\\omega,T=0)\/\\omega^2 \\\\\n\\rho(\\omega=0,T) \\sim M(\\omega=0,T) \\nonumber\n\\end{eqnarray}\nwhere $M$ is the so-called memory function. The functional form of $M$\ndepends only on the lowest cutoff in the problem. Therefore\n$M(\\omega,T=0)$ and $M(\\omega=0,T)$ have identical forms provided one\nreplaces $\\omega$ by $T$. From (\\ref{functional}), one sees that it is\npossible to obtain the temperature dependence of the resistivity by\nmultiplying\nthe frequency form obtained in section~\\ref{conductivity} by $\\omega^2$,\nand then\nsubstituting $\\omega$ by $T$. Such a procedure will be valid as long as\none can have a perturbation expansion in the scattering potential, so\nthat (\\ref{functional}) is valid\n\\cite{giamarchi_umklapp_1d,giamarchi_attract_1d}.
This will be the case\nas long as the thermal length $\\xi_T$ is smaller than the pinning length\n$L_0$. Let us examine the various regimes.\n\n\\subsection{$\\xi_T \\ll \\xi_{cr}$}\n\nAs discussed in\nsection~\\ref{weak}, for quantum wires with unscreened long-range Coulomb\ninteractions, a one-dimensional model is probably not\napplicable in such a regime. However, this regime can be relevant\neither if the long-range interactions are screened or if the\nshort-range interactions are strong enough ($K$ small) so that\n$\\xi_{cr} \\gg d$.\nIn that case, as discussed in section~\\ref{LargeFreq}, the short-range\ninteractions dominate. One is back to the situation of $2 k_F$\nscattering in a Luttinger liquid, for which the temperature dependence of\nthe conductivity was computed in references\n\\cite{giamarchi_loc_lettre,giamarchi_loc}. Let us briefly recall the\nresults (for a complete discussion see\n\\onlinecite{giamarchi_loc_lettre,giamarchi_loc}): for repulsive\ninteractions the conductivity is roughly given by\n\\begin{equation} \\label{rough}\n\\sigma(T) \\sim T^{\\frac52 - K(T) - \\frac32 K_\\sigma(T)}\n\\end{equation}\nwhere $K(T)$ and $K_\\sigma(T)$ are the renormalized Luttinger liquid\nparameters for charge and spin at the length scale $\\xi_T$. These\nparameters are\nrenormalized by the disorder and decrease when the temperature is\nlowered. Such a decrease of the exponents is a signature of the tendency\nof the system to localize \\cite{giamarchi_loc}. As a\nresult, the conductivity has no simple power law form since the exponents\nthemselves depend on the temperature.
If the disorder is weak enough so\nthat one can neglect the renormalization of the exponents, one gets the\napproximate expression for the conductivity\n\\cite{apel_impurity_1d,giamarchi_loc} (see\nalso appendix~\\ref{HighFreq} for a rederivation of this result using\nthe SCHA)\n\\begin{equation} \\label{appcond}\n\\sigma(T) \\sim T^{1 - K}\n\\end{equation}\nsince (in the absence of renormalization by disorder) $K_\\sigma = 1$\ndue to spin symmetry. The expression (\\ref{appcond}) coincides with the\none obtained subsequently for the conductance of a single impurity\n\\cite{kane_qwires_tunnel_lettre,kane_qwires_tunnel}. For a single\nimpurity there\nis no renormalization of the exponents \\cite{kane_qwires_tunnel} and the\nconductance is given by\n\\begin{equation} \\label{kane}\nG_0 \\sim T^{1-K}\n\\end{equation}\nat all temperatures. If one assumes that there are $N_i$ impurities in\na wire of length $L$ and that the impurities act as {\\bf independent}\nscatterers, then the conductivity would be, if $G$ is the\nconductance of the wire,\n\\begin{equation} \\label{gsig}\n\\sigma(T) = L G = \\frac{L}{N_i} G_0 = \\frac1{n_i} G_0\n\\end{equation}\nand one recovers (\\ref{appcond}) (the impurity density $n_i$ is\nincluded in the disorder in (\\ref{appcond})). When many impurities are\npresent,\nthe assumption that their contributions can be added independently is of\ncourse incorrect. The collective effects of many impurities lead to the\nrenormalization of the Luttinger liquid parameters (and in particular\nto localization) and to the formula (\\ref{rough}) for the conductivity\ninstead of (\\ref{appcond}).
Using (\\ref{highconq}) one gets\n\\begin{equation} \\label{intert}\n\\rho(T) \\sim \\frac1{T^2}\n\\ln^{-1\/2}\\frac{\\tilde{u}}{d T} e^{-8\\sqrt{2}\n\\tilde{K} \\ln^{\\frac 1{2}}\\frac{\\tilde{u}}{(T d)}}\n\\end{equation}\nProvided the wire is long enough, (\\ref{intert}) also gives the\ntemperature dependence of the conductance of the wire.\nIn this regime the $2 k_F$ scattering would give $\\rho_{2 k_F}(T)\n\\sim 1\/T$ and is subdominant. Due to the long-range interactions, the\nrenormalization of the exponents of the conductivity that took place for\nshort-range interactions \\cite{giamarchi_loc} does not occur. Such\na change of exponent with temperature is replaced by sub-leading\ncorrections. This is due to the fact that the correlation functions\ndecay much more slowly than a power-law.\n\n\\subsection{$L_0 \\ll \\xi_T$}\n\nThis is the asymptotic regime for which the system is pinned and no\nexpansion like (\\ref{functional}) is available. In this regime the\ntemperature dependence is much less clear. In analogy with the\ncollective pinning of vortex lattices\n\\cite{feigelman_collective,nattermann_pinning}, one could\nexpect a glassy-type nonlinear $I-V$ characteristic of the form\n\\begin{equation} \\label{nonlin}\nI \\sim e^{-\\beta (1\/E)^\\mu}\n\\end{equation}\nSuch an $I-V$ characteristic would correspond to diverging barriers\nbetween metastable states as the voltage goes to zero. (\\ref{nonlin})\nimplies that the linear conductivity vanishes at a finite\ntemperature. Since this could be viewed as a phase transition (with the\nlinear conductivity as an order parameter), it is forbidden in a\nstrictly one-dimensional system. In fact, in\na purely one-dimensional system (in principle for $d<2$\n\\cite{nattermann_pinning}), the barriers\nshould remain finite. In that case one gets a finite linear\nconductivity, going to zero when $T\\to 0$.
A possible form is\n\\begin{equation}\n\\sigma(T) \\sim e^{-E_B\/T}\n\\end{equation}\nwhere $E_B \\sim 1\/L_0$ is a typical energy scale for the barriers.\nHowever, no\ndefinite theoretical method exists to settle the issue, and an\nexperimental determination of the low temperature conductivity would\nprove extremely interesting.\n\n\\section{Discussion and conclusions}\n\\label{conclusion}\n\nWe have looked in this paper at the conductivity of a one-dimensional\nelectron gas in the presence of both disorder and long-range Coulomb\ninteractions. Due to long-range interactions, the electron gas forms a\nWigner crystal which will be pinned by impurities. As a result,\ncontrary to what happens in a Luttinger liquid, the dominant\nscattering corresponds to $4 k_F$ scattering on the impurities and not\n$2 k_F$ scattering. Such a pinned Wigner crystal\nis close to classical charge density waves, but important a\npriori differences lie in the presence of long-range interactions and\nnon-negligible quantum fluctuations.\n\nWe have computed the\npinning length above which the (quasi) long-range crystalline order is\ndestroyed by the disorder. Compared to the standard CDW case, the pinning\nlength is increased both by the Coulomb interactions, which make the\nsystem more rigid and therefore more difficult to pin, and by the\nquantum fluctuations, which make the pinning less effective. These effects\nmake the system more likely to be in the weak pinning regime.\nWe have also computed\nthe frequency dependence of the conductivity of such a\nsystem. At low frequencies, the conductivity varies as\n$\\omega^2$ if the pinning on impurities is weak. This is to be\ncontrasted with the result of\nEfros and Shklovskii \\cite{shklovskii_conductivity_coulomb},\n$\\sigma\\sim \\omega$. We believe that this difference is due to the fact that\ntheir result was derived in a different physical limit, namely when the\npinning length is much shorter than the interparticle distance.
However,\nsince the method we use is approximate, it could also be the consequence\nof having neglected soliton-like excitations of the phase field.\nWe do expect, however, such excitations to play little role, at least when\nthe short-range repulsion is strong enough ($K$ small). More\ntheoretical, and especially more experimental, investigations would prove\nextremely interesting to settle this important issue.\nFor the case of\nstrong pinning there is a pseudo-gap in the optical conductivity up to\nthe pinning frequency. In the pseudo-gap\nthe conductivity behaves as\n$\\frac1{\\omega^2}\\ln^{1\/2}\\frac1{\\omega}e^{-1\/\\omega\\ln^{1\/2}\\frac1{\\omega}}$.\nAbove the pinning\nfrequency, for both regimes,\nthe conductivity decreases as $1\/\\omega^4\\ln^{-1\/2}(\\omega_{cr}\/\\omega)\ne^{-\\text{Cste}\\ln^{1\/2}(\\omega_{cr}\/\\omega)}$\nup to the crossover frequency $\\omega_{cr}$, above which\nthe long-range Coulomb interactions become unimportant.\nFor the parameters we took here, $\\omega_{cr}$ is also the limit of\napplicability of a one-dimensional description, since $\\xi_{cr} \\sim d$, the\nwidth of the wire. However, if the short-range interactions are strong\nenough ($K$ small), so that $\\xi_{cr} \\gg d$, then above $\\omega_{cr}$\na one-dimensional description will still be valid.\nOne is back to the situation of $2 k_F$ scattering in a Luttinger\nliquid, which was studied in detail in \\onlinecite{giamarchi_loc}.\nThe conductivity then behaves as\n$(1\/\\omega)^{\\mu(\\omega)}$, where $\\mu(\\omega)$ is a non-universal\nexponent depending on the short-range part of the interactions and, due\nto the renormalization of Luttinger liquid parameters by disorder, also\ndependent on the frequency \\cite{giamarchi_loc}. If one can neglect such\na renormalization of the exponents (e.g. 
for very weak disorder) then\n$\\mu = 3 - K$.\n\nThe temperature dependence can be obtained by similar methods.\nOne can\ndefine a thermal length $\\xi_T\\sim u\/T$.\nWhen $\\xi_T < L_0$, the\nfrequency and temperature dependence of the conductivity are simply\nrelated by $\\rho(\\omega=0,T) \\sim T^2 \\sigma(\\omega\\to T, T=0)$, giving\n$\\rho(T) \\sim 1\/T^2\\ln^{-1\/2}(1\/T) e^{-\\text{Cste}\\ln^{1\/2}(1\/T)}$.\nAbove the pinning length, frequency and temperature\ncan no longer be treated as equivalent cutoffs, and the conductivity is\nmuch more difficult to compute.\nOne can expect an exponentially vanishing linear conductivity, provided\nthat the barriers between metastable states remain finite. If that is not\nthe case, one should get a non-linear characteristic of the form $I \\sim\n\\text{exp}[-\\beta (1\/E)^\\mu]$, where $\\beta =1\/T$, and $\\mu$ is an\nexponent. Again, an experimental determination of $\\sigma(T)$\nwould prove extremely useful. Note that\nalthough we considered here the conductivity, most of the results can be\napplied to the conductance of a finite wire, provided that the size of\nthe wire $L$ is larger than the thermal length $\\xi_T$.\n\nWe know that under the application of strong enough electric fields,\na classical CDW can be depinned \\cite{littlewood_sliding_cdw,lee_depinning}.\nSimilarly, we expect for a\nWigner crystal the existence of a threshold electric field $E_{th}$\nabove which a finite static conductivity appears.\nWe can make a crude estimation of this threshold field,\nbased on the simple assertion that the electrical energy at threshold must\nbe of the order of the pinning energy\n$\\omega^* \\sim \\frac{\\tilde{u}}{L_0}\\ln^{1\/2}\\frac{L_0}{d}$.\nThis energy\ncan be written as $eU$, where $U$ is the electrical potential corresponding\nto a segment of the wire of length $L_0$, that is to say $U=E_{th}L_0$.\nThus the threshold field is estimated as\n\\begin{equation}\n\\label{threshold}\nE_{th} \\sim 
\\frac{\\tilde{u}}{eL_0^2}\\ln^{1\/2}\\frac{L_0}{d}\n\\end{equation}\n From this we can extract an estimation of the pinning frequency $\\omega^*$.\nIndeed, experimental values of such threshold fields can be found in the\nliterature~\\cite{kastner_coulomb}. The latter reference gives a value of\nthe threshold field, for a wire of length of about $10\\mu m$, of\n$E_{th}=5\\times 10^2~V.m^{-1}$. Thus (\\ref{threshold}) gives\n$L_0 \\sim 1.4 \\mu m$, which seems quite reasonable compared to the\nlength of the wire, and gives for the pinning frequency the estimation\n$\\omega^* \\sim 1\\times 10^{12}Hz$ (since the wires\nof reference \\onlinecite{kastner_coulomb} typically contain two or three\nimpurities, one is probably here in a strong pinning regime).\nData reported here are just meant as typical values. The system\nstudied in \\onlinecite{kastner_coulomb} is at the limit of\napplicability of our study at low temperatures,\nsince the wire is so short that it contains only a few impurities. However,\nregardless of the pinning mechanism and number of impurities,\nour theory should correctly give the\ntemperature dependence of the conductivity (or conductance) at\ntemperatures such that $\\xi_T < n_i^{-1}$, since in this regime the\nimpurities act as independent scatterers. To study the\nlow-temperature\/low-frequency properties, longer wires would be needed.\n\nThe above estimates, although very crude, show that\ntypical frequencies or\ntemperatures for such systems are in the range of experimentally realizable\nvalues, which gives hope for more experimental evidence for the\nexistence of such pinned Wigner crystals. In particular, measurements\nof the temperature dependence of the conductivity\/conductance would\nprove decisive. At low temperature they would provide evidence for a\npinned Wigner crystal, and at higher temperature they would test the\nscattering on impurities in the presence of long-range interactions\n($\\sim 1\/T^2$ behavior).
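The chain of estimates above can be reproduced numerically. The sketch below assumes $\hbar\omega^* = eE_{th}L_0$ as the threshold condition and uses assumed values $\tilde{u}\sim 1.4\times10^6\,$m/s and $d\sim 10^{-8}\,$m (consistent with, but not explicitly given by, the text); with the measured $E_{th}=5\times10^2\,$V/m it returns a pinning length at the micron scale and $\omega^*$ of order $10^{12}\,$Hz:

```python
import math

# Crude reproduction of the threshold-field estimate.  Assumed inputs:
# hbar*omega* = e*E_th*L0 at threshold, with u_tilde ~ 1.4e6 m/s and
# d ~ 1e-8 m (these two values are assumptions, not quoted in the text).
hbar = 1.05e-34      # J s
e = 1.6e-19          # C
u_tilde = 1.4e6      # m/s (assumed)
d = 1e-8             # m   (assumed)
E_meas = 5e2         # V/m, measured threshold field quoted in the text

def field(L0):
    """Threshold field E_th ~ hbar*u_tilde*ln^{1/2}(L0/d) / (e*L0**2)."""
    return hbar * u_tilde * math.sqrt(math.log(L0 / d)) / (e * L0 ** 2)

# field(L0) decreases with L0: solve field(L0) = E_meas by bisection.
lo, hi = 1e-7, 1e-4
for _ in range(200):
    mid = math.sqrt(lo * hi)
    if field(mid) > E_meas:
        lo = mid
    else:
        hi = mid
L0 = math.sqrt(lo * hi)

omega_star = u_tilde / L0 * math.sqrt(math.log(L0 / d))
print(f"L0 ~ {L0:.1e} m, omega* ~ {omega_star:.1e} Hz")
```

Both quantities come out at the orders of magnitude quoted in the text (microns and $10^{12}\,$Hz); the residual factor-of-two spread simply reflects the crudeness of the input assumptions.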
A possible crossover between a Luttinger liquid\n(dominated by short-range interactions) and the\nWigner crystal (dominated by long-range interactions) could also in\nprinciple be seen in the temperature dependence of the conductivity.\nFrequency-dependent conductivity measurements are probably much more\ndifficult to carry out, but would also be of high importance. At low\nfrequency they could serve as tests both of the nature of the pinning\nmechanism and of the effects of long-range Coulomb interactions on the\nfrequency dependence of the conductivity. For these purposes, quantum\nwires would constitute a much cleaner system than the standard CDW\ncompounds.\n\n\\acknowledgments\n\nWe would like to thank H.J. Schulz for many stimulating discussions. One\nof us (T.G.) would like to thank H. Fukuyama, M. Ogata,\nB. Spivak and V. Vinokur for interesting discussions,\nand the Aspen Center for Physics where part of this work was completed.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nIn a myriad of robotic systems, trajectory generation plays a very important role since trajectories govern the robot actions at both joint and task space levels. One popular trajectory generation approach for robots is imitation learning \\citep{Ijspeert,Calinon2007}, where the trajectory of interest is learned from human demonstrations. Typically, the learned trajectories can be successfully reproduced or generalized by the robot under conditions that are similar to those in which the demonstrations took place. However, \nin practice, robots may also encounter unseen situations, such as obstacles or human intervention, which can be considered as new task constraints, requiring the robot to adapt its trajectory in order to perform adequately.
\n\nIn the context of imitation learning, several algorithms such as dynamic movement primitives (DMP) \\citep{Ijspeert} and probabilistic movement primitives (ProMP) \\citep{Paraschos} have been developed to generate desired trajectories in various scenarios. However, due to an explicit description of the trajectory dynamics, DMP introduces many open parameters in addition to basis functions and their weighting coefficients. The same problem arises in ProMP, which fits trajectories using basis functions that are manually defined. Moreover, DMP and ProMP were formulated towards the learning of time-driven trajectories (i.e., trajectories explicitly dependent on time), where learning with high-dimensional inputs is not addressed.\n\nIn order to alleviate the modeling of trajectories via specific functions and meanwhile facilitate the learning of trajectories driven by high-dimensional inputs, the Gaussian mixture model (GMM) \\citep{Calinon2007} has been employed to model the joint distribution of input variables and demonstrated motions. Usually, GMM is complemented with Gaussian mixture regression (GMR) \\citep{Cohn} to retrieve a desired trajectory. Despite the improvements with respect to other techniques, adapting learned skills with GMM\/GMR is not straightforward. Indeed, it is difficult to re-optimize GMM to fulfill new requirements (e.g., via-points), since this usually requires re-estimating the model parameters (i.e., mixture coefficients, means and covariance matrices), which actually lie in a high-dimensional space.\n\nAn alternative solution to refine trajectories for satisfying new task constraints is reinforcement learning (RL). For instance, a variant of policy improvement with path integrals \\citep{Buchli} was employed to optimize the movement pattern of DMP \\citep{Stulp}. Also, natural actor-critic \\citep{Peters2005} was used to optimize the centers of GMM components \\citep{Guenter}.
However, the time-consuming search for the optimal policy might make the application of RL approaches to on-line refinements (such as those required after perturbations) impractical. In contrast to the RL treatment, ProMP formulates the modulation of trajectories as a Gaussian conditioning problem, and therefore derives an analytical solution to adapt trajectories towards new via-points or targets.\nIt is worth pointing out that DMP can adapt trajectories towards different goals; however, via-point constraints are not addressed therein.\n\nBesides the generation of adaptive trajectories, another desired property in imitation learning is extrapolation.\nOften, human demonstrations are provided for a limited set of task instances, but the robot is expected to apply the learned movement patterns in a wider range of circumstances.\nIn this context, DMP is capable of generating trajectories starting from arbitrary locations and converging to a goal. This is achieved through a formulation based on a spring-damper system whose equilibrium corresponds to the target of the robot motion. In contrast, ProMP and GMM model the distribution of demonstrated trajectories in absolute frames rather than relative frames, which limits their extrapolation capabilities. As an extension of GMM, a task-parameterized formulation is studied in \\cite{Calinon2016}, which in essence models local (or relative) trajectories and corresponding local patterns, thereby endowing GMM with better extrapolation performance.\n\n\nWhile the aforementioned algorithms have achieved reliable performance, we aim for a solution that addresses the most crucial limitations of those approaches.
In particular, we propose an algorithm that:\n\\begin{enumerate}[label=(\\roman*)]\n\\item preserves the probabilistic properties exhibited in multiple demonstrations, \n\\item deals with adaptation and superposition of trajectories,\n\\item can be generalized for extrapolations, \n\\item learns human demonstrations associated with high-dimensional inputs while alleviating the need to explicitly define basis functions. \n\\end{enumerate}\t\n\nThe main contribution of this paper is the development of a novel \\emph{kernelized movement primitive} (KMP), which allows us to address the above-listed problems using a single framework. Specifically, KMP provides a non-parametric solution for imitation learning and hence alleviates the explicit representation of trajectories using basis functions, resulting in fewer open parameters and a simpler implementation. More importantly, in light of the kernel treatment, KMP has the ability to model demonstrations associated with high-dimensional inputs, which is usually viewed as a non-trivial problem due to the curse of dimensionality. \n\nIn addition, this paper extends KMP from a task-parameterized perspective and formulates \\emph{local}-KMP, improving the extrapolation capabilities to different task situations described by a set of local coordinate frames. \nFinally, as a special case, we consider the application of KMP to the learning of time-driven trajectories, which inherits all the advantages of KMP while being suitable for time-scale modulation. For the sake of a clear comparison, we list the most relevant features of the state-of-the-art methods as well as our approach in Table~\\ref{table:comp:table}. 
Note that we consider the modulation of robot trajectories to pass through desired via-points and end-points as the adaptation capability.\n\n\n\\begin{table}[bt]\n\t\\caption {Comparison Among the State-of-the-Art and KMP}\n\t\\centering\n\t\\scalebox{0.9}{\t\n\t\t\\begin{tabular}{lcccc}\n\t\t\t\\toprule %\n\t\t\t&DMP & ProMP & GMM & Our Approach\\\\ \\toprule %\n\t\t\t$Probabilistic$ \n\t\t\t& -\n\t\t\t& \\checkmark \n\t\t\t& \\checkmark \n\t\t\t& \\checkmark \n\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\t$Via\\!\\!-\\!\\!point$\n\t\t\t& -\n\t\t\t& \\checkmark \n\t\t\t& -\n\t\t\t& \\checkmark\n\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\t$End\\!\\!-\\!\\!point$\n\t\t\t& \\checkmark\n\t\t\t& \\checkmark \n\t\t\t& -\n\t\t\t& \\checkmark\n\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\t$Extrapolation$ \n\t\t\t& \\checkmark\n\t\t\t& -\n\t\t\t& -\n\t\t\t& \\checkmark\n\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\t$High$-$dim \\; Inputs$\n\t\t\t& - \n\t\t\t& -\n\t\t\t& \\checkmark\n\t\t\t& \\checkmark\n\t\t\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}}\n\t\t\\label{table:comp:table}\n\t\\end{table}\n\t\n \nThe structure of this paper is arranged as follows. \nWe formulate imitation learning from an information-theory perspective and\npropose KMP in Section~\\ref{sec:kmp}. Subsequently, we extend KMP to deal with trajectory modulation and superposition in Section \\ref{subsec:kmp:modulation} and Section~\\ref{subsec:super:position}, respectively. \nMoreover, we introduce the concept of learning local trajectories into KMP in Section \\ref{subsec:local_frame}, augmenting its extrapolation capabilities in task space. In Section~\\ref{sec:time_kmp}, we discuss a special application of KMP to time-driven trajectories. 
We test the performance of KMP on trajectory modulation, superposition and extrapolation in Section \\ref{sec:evaluations}, where\nseveral scenarios are considered, ranging from learning handwritten letters to real robotic experiments.\nAfter that, we review related work in Section~\\ref{sec:relative:work}. A discussion is provided in Section \\ref{sec:discuss}, where we elaborate on the potential of our approach and the similarities between KMP and ProMP, as well as open challenges. Finally, we close with conclusions in Section~\\ref{sec:conclusion}. \n\n\n\n\n\\section{Kernelized Representation of Movement Trajectory}\n\\label{sec:kmp}\nLearning from multiple demonstrations allows for encoding trajectory distributions and extracting important or consistent features of the task. In this section, we first describe the probabilistic modeling of human demonstrations (Section~\\ref{subsec:ref:traj}), and, subsequently, we exploit the resulting trajectory distribution to derive KMP (Section~\\ref{subsec:kmp}). \n\n\\subsection{Learning from Human Demonstrations}\n\\label{subsec:ref:traj}\nFormally, let us denote the set of demonstrated training data by $\\{\\{ \\vec{s}_{n,h},{\\vec{\\xi}}_{n,h}\\}_{n=1}^{N}\\}_{h=1}^{H}$, where $\\vec{s}_{n,h} \\in \\mathbb{R}^{\\mathcal{I}}$ is the input and ${\\vec{\\xi}}_{n,h} \\in \\mathbb{R}^{\\mathcal{O}}$ denotes the output. Here, $\\mathcal{I}$, $\\mathcal{O}$, $H$ and $N$ respectively represent the dimensionality of the input space, the dimensionality of the output space, the number of demonstrations, and the trajectory length. Note that a probabilistic encoding of the demonstrations allows the input $\\vec{s}$ and output ${\\vec{\\xi}}$ to represent different types of variables. For instance, by considering $\\vec{s}$ as the position of the robot and ${\\vec{\\xi}}$ as its velocity, the representation becomes an autonomous system. 
Alternatively, if $\\vec{s}$ and ${\\vec{\\xi}}$ respectively represent time and position, the resulting encoding corresponds to a time-driven trajectory. \n\nIn order to capture the probabilistic distribution of demonstrations, a number of algorithms can be employed, such as GMM \\citep{Calinon2007}, hidden Markov models \\citep{Leonel13}, and even a single Gaussian distribution \\citep{Englert,Osa}, which differ in the type of information that is extracted from the demonstrations. As an example, let us exploit GMM as the model used to encode the training data. More specifically, GMM is employed to estimate the joint probability distribution $\\mathcal{P}(\\vec{s},\\vec{\\xi})$ from demonstrations, i.e., \n\\begin{equation}\n\\left[\\begin{matrix}\n\\vec{s}\\\\\\vec{\\xi}\n\\end{matrix}\\right] \\sim \\sum_{c=1}^{C} \\pi_c \\mathcal{N}(\\vec{\\mu}_c,\\vec{\\Sigma}_c),\n\\label{equ:gmm}\n\\end{equation}\nwhere $\\pi_c$, $\\vec{\\mu}_c$ and $\\vec{\\Sigma}_c$ respectively represent the prior probability, mean and covariance of the $c$-th Gaussian component, while $C$ denotes the number of Gaussian components.\n\nFurthermore, a probabilistic \\emph{reference trajectory} $\\{\\hat{\\vec{\\xi}}_{n}\\}_{n=1}^{N}$ can be retrieved via GMR \\citep{Cohn,Calinon2016}, where each point ${\\hat{\\vec{\\xi}}}_{n}$ associated with $\\vec{s}_n$ is described by a conditional probability distribution with mean $\\hat{\\vec{\\mu}}_n$ and covariance $\\hat{\\vec{\\Sigma}}_n$, i.e., ${\\hat{\\vec{\\xi}}}_{n}|\\vec{s}_n\\sim \\mathcal{N}(\\hat{\\vec{\\mu}}_{n},\\hat{\\vec{\\Sigma}}_{n})$ (see Appendix~\\ref{app:gmr} for details). \nThis probabilistic reference trajectory encapsulates the variability in the demonstrations as well as the correlations among outputs. 
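As a concrete illustration of this step, the GMR conditioning that produces each reference point $(\hat{\vec{\mu}}_n, \hat{\vec{\Sigma}}_n)$ can be sketched in a few lines of NumPy. This is a minimal sketch under our own naming conventions (the functions `gaussian_pdf` and `gmr` are not from the paper); it assumes each component mean stacks the input block first and the output block second:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a multivariate normal N(mu, sigma) evaluated at x."""
    d = len(mu)
    diff = x - mu
    expo = -0.5 * diff @ np.linalg.solve(sigma, diff)
    norm = np.sqrt((2.0 * np.pi) ** d * np.linalg.det(sigma))
    return np.exp(expo) / norm

def gmr(s, priors, means, covs, dim_in):
    """Condition a GMM over the joint (s, xi) on an input s, returning the
    mean and covariance of xi | s (Gaussian mixture regression)."""
    C = len(priors)
    # Responsibilities h_c(s), proportional to pi_c * N(s | mu_s_c, Sigma_ss_c).
    h = np.array([priors[c] * gaussian_pdf(s, means[c][:dim_in],
                                           covs[c][:dim_in, :dim_in])
                  for c in range(C)])
    h /= h.sum()
    dim_out = means[0].shape[0] - dim_in
    mu_hat = np.zeros(dim_out)
    second_moment = np.zeros((dim_out, dim_out))
    for c in range(C):
        mu_s, mu_o = means[c][:dim_in], means[c][dim_in:]
        S_ss = covs[c][:dim_in, :dim_in]
        S_os = covs[c][dim_in:, :dim_in]
        S_so = covs[c][:dim_in, dim_in:]
        S_oo = covs[c][dim_in:, dim_in:]
        gain = S_os @ np.linalg.inv(S_ss)
        mu_c = mu_o + gain @ (s - mu_s)   # component conditional mean
        S_c = S_oo - gain @ S_so          # component conditional covariance
        mu_hat += h[c] * mu_c
        second_moment += h[c] * (S_c + np.outer(mu_c, mu_c))
    # Law of total variance across the mixture components.
    sigma_hat = second_moment - np.outer(mu_hat, mu_hat)
    return mu_hat, sigma_hat
```

For a single component ($C=1$) this reduces to standard Gaussian conditioning; e.g., with joint mean $[0,\,1]^{\mathsf{T}}$ and covariance $\left[\begin{smallmatrix}1 & 0.5\\ 0.5 & 2\end{smallmatrix}\right]$, conditioning on $s=1$ gives mean $1.5$ and variance $1.75$.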
\nWe take advantage of the probabilistic reference trajectory to derive KMP.\n\n\n\n\\subsection{Kernelized Movement Primitive (KMP)}\n\\label{subsec:kmp}\n\nWe start the derivation of KMP by considering a \\emph{parametric trajectory} \n\\begin{equation}\n\\vec{\\xi}(\\vec{s})\n= \\vec{\\Theta}(\\vec{s})^{\\mathsf{T}} \\vec{w}\n\\label{equ:linear:form}\n\\end{equation}\nwith the matrix $\\vec{\\Theta}(\\vec{s})\\in \\mathbb{R}^{B\\mathcal{O} \\times \\mathcal{O}}$ defined as follows\n\\begin{equation}\n\\vec{\\Theta}(\\vec{s})=\\left[\\begin{matrix} \n\\vec{\\varphi}(\\vec{s}) & \\vec{0} & \\cdots &\\vec{0} \\\\\n\\vec{0} & \\vec{\\varphi}(\\vec{s}) & \\cdots &\\vec{0} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\vec{0} & \\vec{0} & \\cdots & \\vec{\\varphi}(\\vec{s})\\\\\n\\end{matrix}\\right] ,\n\\label{equ:basis:function}\n\\end{equation}\nand the weight vector $\\vec{w} \\in \\mathbb{R}^{B\\mathcal{O}}$, where $\\vec{\\varphi}(\\vec{s})\\in \\mathbb{R}^{B}$ denotes $B$-dimensional basis functions\\footnote{The treatment of fitting trajectories by using basis functions has also been studied in DMP \\citep{Ijspeert} and ProMP \\citep{Paraschos}.}.\nFurthermore, we assume that the weight vector $\\vec{w}$ is normally distributed, i.e., ${\\vec{w}\\sim \\mathcal{N}(\\vec{\\mu}_w,\\vec{\\Sigma}_w)}$, where the mean $\\vec{\\mu}_w$ and the covariance $\\vec{\\Sigma}_w$ are \\emph{unknown}. 
Therefore, the parametric trajectory satisfies \n\\begin{equation}\n\\vec{\\xi}(\\vec{s}) \\sim \\mathcal{N} \\left(\\vec{\\Theta}(\\vec{s})^{\\mathsf{T}} \\vec{\\mu}_w, \\vec{\\Theta}(\\vec{s})^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}) \\right).\n\\label{equ:distribution:para:traj}\n\\end{equation} \nNote that our goal is to imitate the probabilistic reference trajectory $\\{\\hat{\\vec{\\xi}}_{n}\\}_{n=1}^{N}$; thus, we aim to match the parametric trajectory distribution formulated by (\\ref{equ:distribution:para:traj}) with the reference trajectory distribution. In order to address this problem, we propose to minimize the \\emph{Kullback-Leibler} divergence (KL-divergence) \\citep{Kullback,Rasmussen} between both trajectory distributions (Section~\\ref{subsubsec:kl}). Subsequently, we derive optimal solutions for both $\\vec{\\mu}_w$ and $\\vec{\\Sigma}_w$, and formulate KMP by using the kernel trick in Sections~\\ref{subsubsec:optimal:mean:kmp} and \\ref{subsubsec:optimal:var:kmp}, respectively.\n\n\n\\subsubsection{Imitation Learning Based on Information Theory:}\n\\label{subsubsec:kl}\n\nSince the well-known KL-divergence measures the discrepancy between two probability distributions, we here exploit it to optimize the parametric trajectory distribution so that it matches the reference trajectory distribution.\nFrom the perspective of information transmission, the minimization of the KL-divergence guarantees minimal information loss in the process of imitation learning. \n\nFormally, we consider the minimization of the objective function\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}(\\vec{\\mu}_w,\\!\\vec{\\Sigma}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N}\\!\\! 
D_{KL} \\biggl ( \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)\n|| \\mathcal{P}_{\\mathbf{r}} (\\vec{\\xi}|\\vec{s}_n) \\biggr),\n\\end{aligned}\n\\label{equ:kl:cost:ini:temp}\n\\end{equation}\nwhere \n$\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)$ represents the probability distribution of the parametric trajectory (\\ref{equ:distribution:para:traj}) given the input $\\vec{s}_n$, i.e.,\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)\\!= \\!\\mathcal{N} \\left(\\vec{\\xi}|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\\! \\vec{\\mu}_w, \\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n) \\right),\n\\label{equ:def:prob:para}\n\\end{equation}\n$\\mathcal{P}_{\\mathbf{r}}(\\vec{\\xi}|\\vec{s}_n)$ corresponds to the probability distribution of the reference trajectory associated with $\\vec{s}_n$ (as described in Section~\\ref{subsec:ref:traj}), namely\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{r}}(\\vec{\\xi}|\\vec{s}_n)=\\mathcal{N} (\\vec{\\xi}|\\hat{\\vec{\\mu}}_n,\\hat{\\vec{\\Sigma}}_n).\n\\label{equ:def:prob:ref}\n\\end{equation}\n$D_{KL}(\\cdot||\\cdot)$ denotes the KL-divergence between the probability distributions $\\mathcal{P}_{\\mathbf{p}}$ and $\\mathcal{P}_{\\mathbf{r}}$, which is defined by \n\\begin{equation}\n\\begin{aligned}\nD_{KL}(\\mathcal{P}_{\\mathbf{p}}&(\\vec{\\xi}|\\vec{s}_n)||\\mathcal{P}_{\\mathbf{r}}(\\vec{\\xi}|\\vec{s}_n))\\\\\n&=\\int \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n) \\log \\frac{\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)}{\\mathcal{P}_{\\mathbf{r}}(\\vec{\\xi}|\\vec{s}_n)} d\\vec{\\xi}.\n\\end{aligned}\n\\label{equ:kl:def}\n\\end{equation}\nBy using the properties of KL-divergence between two Gaussian distributions, we rewrite (\\ref{equ:kl:cost:ini:temp}) as \t\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}(\\vec{\\mu}_w,\\! \\vec{\\Sigma}_w)\\! \\! =\\sum_{n=1}^{N}\\! 
\\frac{1}{2} \\biggl ( \n\\log|\\hat{\\vec{\\Sigma}}_n|\n\\!-\\!\\log|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n-\\mathcal{O} \n+\\mathrm{Tr}(\\hat{\\vec{\\Sigma}}_n^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n) )\\\\\n+(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\! \\vec{\\mu}_w \\!\\!-\\!\\! \\hat{\\vec{\\mu}}_{n})^{\\mathsf{T}} \\! \\hat{\\vec{\\Sigma}}_n^{-1}\\! (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\! \\vec{\\mu}_w \\!\\!-\\!\\! \\hat{\\vec{\\mu}}_{n})\n\\biggr),\n\\end{aligned}\n\\label{equ:kl:cost:ini}\n\\end{equation}\nwhere $| \\, \\cdot \\, |$ and $\\mathrm{Tr}(\\cdot)$ denote the determinant and trace of a matrix, respectively. \n\nAfter removing the coefficient `$\\frac{1}{2}$' and the constant terms $\\log|\\hat{\\vec{\\Sigma}}_n|$ and $\\mathcal{O}$, the objective function (\\ref{equ:kl:cost:ini}) can be further decomposed into a \\emph{mean minimization subproblem} and a \\emph{covariance minimization subproblem}. The former is defined by minimizing\n\\begin{equation}\n{J}_{ini}(\\vec{\\mu}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N} (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w \\!- \\!\\hat{\\vec{\\mu}}_{n})^{\\mathsf{T}} \\hat{\\vec{\\Sigma}}_n^{-1} (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w \\!- \\! 
\\hat{\\vec{\\mu}}_{n})\n\\label{equ:kl:mean:cost}\n\\end{equation}\nand the latter is written as the minimization of\n\\begin{equation}\n\\begin{aligned}\n{J}_{ini}(\\vec{\\Sigma}_w)=\\sum_{n=1}^{N} \\Big(&\n-\\log|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n&+\\mathrm{Tr}(\\hat{\\vec{\\Sigma}}_n^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)) \\Big)\n\\end{aligned}.\n\\label{equ:kl:var:cost}\n\\end{equation}\nIn the following two sections,\nwe separately solve the mean and covariance subproblems, resulting in the KMP formulation.\n\n\n\n\\subsubsection{Mean Prediction of KMP:}\n\\label{subsubsec:optimal:mean:kmp}\n\nIn contrast to kernel ridge regression (KRR) \\citep{Saunders, Murphy}, we introduce a penalty term $||\\vec{\\mu}_w||^2$ into the mean minimization subproblem (\\ref{equ:kl:mean:cost}) so as to circumvent the over-fitting problem. Thus, the new mean minimization subproblem can be rewritten as\n\\begin{equation}\n\\begin{aligned}\n\t{J}(\\vec{\\mu}_w)\\!\\!=\\!\\!\\!\\sum_{n=1}^{N}\\! (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\\!\\vec{\\mu}_w \\!\\!-\\!\\! \\hat{\\vec{\\mu}}_{n})^{\\mathsf{T}} \\hat{\\vec{\\Sigma}}_n^{-1} (\\vec{\\Theta}&(\\vec{s}_n)^{\\mathsf{T}} \\!\\vec{\\mu}_w \\!\\!-\\!\\! \\hat{\\vec{\\mu}}_{n})\\\\\n\t&+\\lambda \\vec{\\mu}_w^{\\mathsf{T}}\\vec{\\mu}_w,\n\\end{aligned}\n\t\\label{equ:kl:mean:cost:penalty}\n\\end{equation}\nwhere $\\lambda>0$.\n\nThe cost function (\\ref{equ:kl:mean:cost:penalty}) resembles a weighted least squares formulation, except for the penalty term $\\lambda \\vec{\\mu}_w^{\\mathsf{T}}\\vec{\\mu}_w$. Also, it is similar to the common quadratic cost function minimized in KRR, where $\\hat{\\vec{\\Sigma}}_n^{-1}=\\vec{I}_{\\mathcal{O}}$. 
However, the variability of the demonstrations encapsulated in $\\hat{\\vec{\\Sigma}}_n$ is introduced in (\\ref{equ:kl:mean:cost:penalty}) as an importance measure associated with each trajectory datapoint, which can be understood as relaxing or reinforcing the optimization for a particular datapoint. In other words, this covariance-weighted cost function permits large deviations from the reference trajectory points with high covariances, while demanding close tracking when the associated covariance is low. \n\n\n\nBy taking advantage of the dual transformation of KRR, \nthe optimal solution ${\\vec{\\mu}}_w^{*}$ of (\\ref{equ:kl:mean:cost:penalty}) can be derived as (see \\cite{Murphy,Kober2011} for details)\n\\begin{equation}\n{\\vec{\\mu}}_w^{*}=\\vec{\\Phi} ( \\vec{\\Phi}^{\\mathsf{T}} \\vec{\\Phi} +\\lambda \\vec{\\Sigma} )^{-1} {\\vec{\\mu}},\n\\label{equ:kmp:muw}\n\\end{equation}\nwhere \n\\begin{equation}\n\\begin{aligned}\n\\vec{\\Phi}&=[\n\\vec{\\Theta}(\\vec{s}_1) \\ \\vec{\\Theta}(\\vec{s}_2) \\ \\cdots \\ \\vec{\\Theta}(\\vec{s}_N)\n],\\\\\n\\vec{\\Sigma}&=\\mathrm{blockdiag}(\\hat{\\vec{\\Sigma}}_1, \\ \\hat{\\vec{\\Sigma}}_2, \\ \\ldots, \\ \\hat{\\vec{\\Sigma}}_N), \\quad \\\\\n{\\vec{\\mu}}&=[\n\\hat{\\vec{\\mu}}_1^{\\mathsf{T}} \\ \\hat{\\vec{\\mu}}_2^{\\mathsf{T}} \\ \\cdots \\ \\hat{\\vec{\\mu}}_N^{\\mathsf{T}}\n]^{\\mathsf{T}}.\n\\end{aligned}\n\\label{equ:notations:define}\n\\end{equation}\nSubsequently, \nfor a query $\\vec{s}^{*}$ (i.e., new input), its corresponding output (expected value) is computed as\n\\begin{equation}\n\\mathbb{E}(\\vec{\\xi}(\\vec{s}^{*})) \n\\!\\!=\\!\\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}}{\\vec{\\mu}}_w^{*}\\!\\!=\\!\\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}}\\vec{\\Phi} ( \\vec{\\Phi}^{\\mathsf{T}} \\vec{\\Phi} \\!+\\!\\lambda \\vec{\\Sigma} )^{-1} {\\vec{\\mu}}.\n\t\\label{equ:kmp:mean:temp}\n\\end{equation}\nIn order to facilitate the application of (\\ref{equ:kmp:mean:temp}) (particularly for high-dimensional 
$\\vec{s}$), we propose to kernelize (\\ref{equ:kmp:mean:temp}) so as to avoid the explicit definition of basis functions. Let us define the inner product for $\\vec{\\varphi}(\\vec{s}_i)$ and $\\vec{\\varphi}(\\vec{s}_j)$ as\n\\begin{equation}\n\\vec{\\varphi}(\\vec{s}_i)^{\\mathsf{T}} \\vec{\\varphi}(\\vec{s}_j)=k(\\vec{s}_i,\\vec{s}_j),\n\\label{equ:single:basis:product}\n\\end{equation} \nwhere $k(\\cdot,\\cdot)$ is a kernel function. Then, based on (\\ref{equ:basis:function}) and (\\ref{equ:single:basis:product}), we have \n\\begin{equation}\n\\vec{\\Theta}({\\vec{s}_i})^{\\mathsf{T}}\\vec{\\Theta}({\\vec{s}_j})\n=\\left[\\begin{matrix} \nk(\\vec{s}_i, \\vec{s}_j) & \\vec{0} & \\cdots &\\vec{0} \\\\\n\\vec{0} & k(\\vec{s}_i, \\vec{s}_j) & \\cdots &\\vec{0} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\vec{0} & \\vec{0} & \\cdots & k(\\vec{s}_i, \\vec{s}_j)\\\\\n\\end{matrix}\\right],\n\\label{equ:basis:product}\n\\end{equation}\nwhich can be further rewritten as \na kernel matrix\n\\begin{equation}\n\t\\vec{k}(\\vec{s}_i, \\ \\vec{s}_j)= \\vec{\\Theta}({\\vec{s}_i})^{\\mathsf{T}}\\vec{\\Theta}({\\vec{s}_j})=\n\t k(\\vec{s}_i, \\vec{s}_j)\\vec{I}_{\\mathcal{O}},\n\t\\label{equ:kernel:matrix}\n\\end{equation} \nwhere\n$\\vec{I}_{\\mathcal{O}}$ is the ${\\mathcal{O}}$-dimensional identity matrix.\nAlso, let us denote the matrix $\\vec{{K}}$ as\n\\begin{equation}\n\\vec{K}\n=\\left[\\begin{matrix} \n\\vec{k}(\\vec{s}_1, \\vec{s}_1) & \\vec{k}(\\vec{s}_1, \\vec{s}_2) & \\cdots &\\vec{k}(\\vec{s}_1, \\vec{s}_N) \\\\\n\\vec{k}(\\vec{s}_2, \\vec{s}_1) & \\vec{k}(\\vec{s}_2, \\vec{s}_2) & \\cdots &\\vec{k}(\\vec{s}_2, \\vec{s}_N) \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\vec{k}(\\vec{s}_N, \\vec{s}_1) & \\vec{k}(\\vec{s}_N, \\vec{s}_2) & \\cdots &\\vec{k}(\\vec{s}_N, \\vec{s}_N) \\\\\n\\end{matrix}\\right],\n\\label{equ:K:matrix}\n\\end{equation}\nand write the matrix $\\vec{k}^{*}$ as\n\\begin{equation}\n\\vec{k}^{*}=[\\vec{k}(\\vec{s}^{*}, 
\\vec{s}_{1}) \\; \\vec{k}(\\vec{s}^{*}, \\vec{s}_{2}) \\; \\cdots \\; \\vec{k}(\\vec{s}^{*}, \\vec{s}_{N})],\n\\label{equ:k:star}\n\\end{equation}\nthen the prediction in (\\ref{equ:kmp:mean:temp}) becomes\n\\begin{equation}\n\\mathbb{E}(\\vec{\\xi}(\\vec{s}^{*}))\n=\\ \\vec{{k}}^{*} (\\vec{{K}}+\\lambda \\vec{\\Sigma})^{-1} {\\vec{\\mu}}.\n\t\\label{equ:kmp:mean}\n\\end{equation}\n\nNote that a similar result was derived in the context of reinforcement learning \\citep{Kober2011} (called cost-regularized kernel regression, CrKR). \nIn contrast to the mean prediction of KMP, CrKR models target components separately without considering their correlations, i.e., a diagonal weighting matrix ${\\vec{R}_n=r_n \\vec{I}_{\\mathcal{O}}}$ \nis used instead of the full covariance matrix $\\hat{\\vec{\\Sigma}}_n^{-1}$ from (\\ref{equ:kl:mean:cost:penalty}). Furthermore, for the case in which $\\hat{\\vec{\\Sigma}}_n=\\vec{I}_{\\mathcal{O}}$, the prediction in (\\ref{equ:kmp:mean}) is identical to the mean of Gaussian process regression (GPR) \\citep{Rasmussen}.\n\n\nIt is worth pointing out that the initial mean minimization subproblem (\\ref{equ:kl:mean:cost}) is essentially equivalent to the problem of maximizing the posterior\n$\\prod_{n=1}^{N} \\mathcal{P}(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w|\\hat{\\vec{\\mu}}_n,\\hat{\\vec{\\Sigma}}_n)$ (see Appendix~\\ref{app:mean:dual} for the proof). Thus, the optimal solution ${\\vec{\\mu}}_w^{*}$ can be viewed as the best estimate given the observed reference trajectory distribution.\n\n\n\n\\subsubsection{Covariance Prediction of KMP:}\n\\label{subsubsec:optimal:var:kmp}\n\nSimilar to the treatment in (\\ref{equ:kl:mean:cost:penalty}), we propose to add a penalty term into the covariance minimization subproblem (\\ref{equ:kl:var:cost}) in order to bound the covariance $\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)$. 
On the basis of the properties of the \\emph{Rayleigh quotient},\nthe penalty term could be defined as the largest eigenvalue of $\\vec{\\Sigma}_w$. For ease of derivation, we impose a relaxed penalty term $\\mathrm{Tr}(\\vec{\\Sigma}_w)$, which is larger than the largest eigenvalue of $\\vec{\\Sigma}_w$ since $\\vec{\\Sigma}_w$ is positive definite. \nTherefore, the new covariance minimization subproblem becomes\n\\begin{equation}\n\\begin{aligned}\n{J}(\\vec{\\Sigma}_w)&=\\sum_{n=1}^{N} \\Big(-\\log |\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n&+\\mathrm{Tr}(\\hat{\\vec{\\Sigma}}_n^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)) \\Big)+\\lambda \\mathrm{Tr}(\\vec{\\Sigma}_w)\n\\end{aligned}.\n\\label{equ:kl:var:cost:penalty}\n\\end{equation}\n\nBy computing the derivative of (\\ref{equ:kl:var:cost:penalty}) with respect to $\\vec{\\Sigma}_w$\nand setting it to 0, we have\\footnote{The following results on matrix derivatives \\citep{Petersen} are used: $\\frac{\\partial |\\vec{A}\\vec{X}\\vec{B}|}\n\t{\\partial \\vec{X}}=|\\vec{A}\\vec{X}\\vec{B}|(\\vec{X}^{\\mathsf{T}})^{-1}$ and ${\\frac{\\partial}{\\partial \\vec{X}} \\mathrm{Tr}(\\vec{A}\\vec{X}\\vec{B})=\\vec{A}^{\\mathsf{T}} \\vec{B}^{\\mathsf{T}}}$.} \n\\begin{equation}\n\\begin{aligned}\n\\sum_{n=1}^{N} \\Big(\n-\\vec{\\Sigma}_w^{-1}+ \\vec{\\Theta}(\\vec{s}_n) \\hat{\\vec{\\Sigma}}_n^{-1} \\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\n\\Big) + \\lambda \\vec{I}=0\n\\end{aligned}.\n\\label{equ:kl:var:cost:derivative}\n\\end{equation}\nFurthermore, we can rewrite (\\ref{equ:kl:var:cost:derivative}) in a compact form by using $\\vec{\\Phi}$ and $\\vec{\\Sigma}$ from (\\ref{equ:notations:define}) and derive the optimal solution $\\vec{\\Sigma}_w^{*}$ as follows\n\\begin{equation}\n\\vec{\\Sigma}_w^{*}=N(\\vec{\\Phi}\\vec{\\Sigma}^{-1}\\vec{\\Phi}^{\\mathsf{T}}+\\lambda \\vec{I})^{-1}.\n\\end{equation}\nThis solution 
resembles the covariance of weighted least squares estimation, except for the factor `$N$' and the regularization term $\\lambda \\vec{I}$. \n\nAccording to the \\emph{Woodbury identity}\\footnote{$(\\vec{A}+\\vec{C}\\vec{B}\\vec{C}^{\\mathsf{T}})^{-1}=\\vec{A}^{-1}\\!-\\!\\vec{A}^{-1}\\vec{C}(\\vec{B}^{-1}+\\vec{C}^{\\mathsf{T}}\\vec{A}^{-1}\\vec{C})^{-1}\\vec{C}^{\\mathsf{T}}\\vec{A}^{-1}$.},\nwe can determine the covariance of $\\vec{\\xi}(\\vec{s}^{*})$ for a query $\\vec{s}^{*}$ as\n\\begin{equation}\n\\begin{aligned}\n\\mathbb{D}(\\vec{\\xi}(\\vec{s}^{*}))&= \\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}}\\vec{\\Sigma}_w^{*} \\vec{\\Theta}(\\vec{s}^{*})\\\\\n&=N \\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}} (\\vec{\\Phi}\\vec{\\Sigma}^{-1}\\vec{\\Phi}^{\\mathsf{T}}+\\lambda \\vec{I})^{-1} \\vec{\\Theta}(\\vec{s}^{*})\\\\\n&=\\frac{N}{\\lambda} \\vec{\\Theta}(\\vec{s}^{*})^{\\mathsf{T}} \n\\left(\\vec{I} \\!- \\! \\vec{\\Phi}(\\vec{\\Phi}^{\\mathsf{T}}\\vec{\\Phi}\\!+\\!\\lambda\\vec{\\Sigma})^{-1}\\vec{\\Phi}^{\\mathsf{T}} \\right)\n\\vec{\\Theta}(\\vec{s}^{*}).\n\\end{aligned}\n\\label{equ:kmp:var:temp}\n\\end{equation}\nRecall that we have defined the kernel matrix in (\\ref{equ:kernel:matrix})-(\\ref{equ:K:matrix}), and hence the covariance of $\\vec{\\xi}(\\vec{s}^{*})$ becomes\n\\begin{equation}\n\\mathbb{D}(\\vec{\\xi}(\\vec{s}^{*}))=\\frac{N}{\\lambda} \\left(\\vec{k}(\\vec{s}^{*}, \\vec{s}^{*}) -\\vec{k}^{*}(\\vec{K}+\\lambda \\vec{\\Sigma})^{-1} \\vec{k}^{*\\mathsf{T}}\\right).\n\\label{equ:kmp:var}\n\\end{equation}\nIn addition to the factor `$\\frac{N}{\\lambda}$', the covariance formula in (\\ref{equ:kmp:var}) differs from the covariances defined in GPR and CrKR in two essential aspects. 
\nFirst, the variability $\\vec{\\Sigma}$ extracted from demonstrations (as defined in (\\ref{equ:notations:define})) is used in the term $(\\vec{K}+\\lambda \\vec{\\Sigma})^{-1}$, while the identity matrix and a diagonal weighting matrix are used in GPR and CrKR, respectively. Second, in contrast to the diagonal covariances predicted by GPR and CrKR, KMP predicts a full covariance matrix, which captures the correlations between output components. \nFor convenience in the following discussion, we refer to\n${\\vec{D}=\\{\\vec{s}_n,\\hat{\\vec{\\mu}}_{n},\\hat{\\vec{\\Sigma}}_{n}\\}_{n=1}^{N}}$ as the \\emph{reference database}. The prediction of both the mean and covariance using KMP is summarized in Algorithm~\\ref{algorithm:kmp}.\n\n\n\n\\begin{algorithm}[t]\n\t\\caption{\\emph{Kernelized Movement Primitive}}\n\t\\begin{algorithmic}[1]\n\\State{\\textbf{{Initialization}}}\n\\Statex{- Define the kernel $k(\\cdot,\\cdot)$ and set the factor $\\lambda$.} \n\\State{\\textbf{{Learning from demonstrations}}} (see Section~\\ref{subsec:ref:traj})\n\\Statex{- Collect demonstrations $\\{ \\{ \\vec{s}_{n,h},\\vec{\\xi}_{n,h}\\}_{n=1}^{N} \\}_{h=1}^{H}$}.\n\\Statex{- Extract the reference database $\\{\\vec{s}_n,\\hat{\\vec{\\mu}}_{n},\\hat{\\vec{\\Sigma}}_{n}\\}_{n=1}^{N}$.}\n\\State{\\textbf{{Prediction using KMP}}} (see Section~\\ref{subsec:kmp})\n\\Statex{- {\\emph{Input}}: query $\\vec{s}^{*}$.}\n\\Statex{- Calculate $\\vec{\\Sigma}$, $\\vec{\\mu}$, $\\vec{K}$ and $\\vec{k}^{*}$ using (\\ref{equ:notations:define}), (\\ref{equ:K:matrix}) and (\\ref{equ:k:star}).}\n\\Statex{- {\\emph{Output}}: $\\mathbb{E}(\\vec{\\xi}(\\vec{s}^{*}))\n=\\ \\vec{{k}}^{*} (\\vec{{K}}+\\lambda \\vec{\\Sigma})^{-1} {\\vec{\\mu}}$ \\quad\\,and\n\\Statex{$\\mathbb{D}(\\vec{\\xi}(\\vec{s}^{*}))=\\frac{N}{\\lambda} \\left(\\vec{k}(\\vec{s}^{*}, \\vec{s}^{*}) -\\vec{k}^{*}(\\vec{K}+\\lambda \\vec{\\Sigma})^{-1} \\vec{k}^{*\\mathsf{T}}\\right)$}. 
} \n\t\\end{algorithmic}\n\t\\label{algorithm:kmp}\n\\end{algorithm}\n\n\\section{Extensions of Kernelized Movement Primitive} \n\\label{sec:tp-kmp}\nAs previously explained, human demonstrations can be used to retrieve a distribution of trajectories that the robot exploits to carry out a specific task. However, in dynamic and unstructured environments the robot also needs to adapt its motions when required. \nFor example, if an obstacle suddenly occupies an area that intersects the robot motion path, the robot is required to modulate its movement trajectory so that collisions are avoided. A similar modulation is necessary (e.g., in pick-and-place and reaching tasks) when the target changes its location during task execution. The trajectory modulation problem will be addressed in Section~\\ref{subsec:kmp:modulation} by exploiting the proposed KMP formulation. \n\n\nBesides the modulation of a single trajectory, another challenging problem arises when the robot is given a set of candidate trajectories to follow, which represent feasible solutions for the task. Each of them may be assigned a different priority (extracted, for example, from the task constraint). These candidate trajectories can be exploited to compute a mixed trajectory so as to balance all the feasible solutions according to their priorities. We cope with the superposition problem in Section~\\ref{subsec:super:position} by using KMP.\n\n\nFinally, human demonstrations are often provided in a relatively convenient task space. However, the robot might be expected to apply the learned skill to a broader domain. 
In order to address this problem, we extend KMP \nby using local coordinate systems and affine transformations as in \\cite{Calinon2016}, which allows KMP to exhibit better extrapolation capabilities (Section \\ref{subsec:local_frame}).\n\n\\subsection{Trajectory Modulation Using KMP}\n\\label{subsec:kmp:modulation}\n\nWe here consider trajectory modulation in terms of adapting trajectories to pass through new via-points\/end-points.\nFormally, let us define $M$ new desired points as $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\xi}}_m\\}_{m=1}^{M}$ associated with conditional probability distributions $\\bar{\\vec{\\xi}}_m | \\bar{\\vec{s}}_{m} \\sim \\mathcal{N} ( \\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m} )$.\nThese conditional distributions can be designed based on new task requirements. For instance, if there are new via-points that the robot needs to pass through with high precision, small covariances $\\bar{\\vec{\\Sigma}}_{m}$ are assigned. Conversely, for via-points that allow for large tracking errors, high covariances can be set.\n\nIn order to consider both new desired points and the reference trajectory distribution simultaneously, we reformulate the original objective function defined in (\\ref{equ:kl:cost:ini:temp}) as\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}^{U}(\\vec{\\mu}_w,&\\vec{\\Sigma}_w)\\!\\!=\\!\\!\n\\sum_{n=1}^{N} \\!D_{KL}\\! \\biggl (\\! \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_n)\n|| \\mathcal{P}_{\\mathbf{r}} (\\vec{\\xi}|\\vec{s}_n) \\!\\biggr)\\\\ &+\\sum_{m=1}^{M}D_{KL} \\biggl ( \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\bar{\\vec{s}}_m) || \\mathcal{P}_{\\mathbf{d}}(\\vec{\\xi}|\\bar{\\vec{s}}_m) \\biggr)\n\\end{aligned}\n\\label{equ:kl:cost:ini:modulate}\n\\end{equation}\nwith\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\bar{\\vec{s}}_m)\\!\\!=\\!\\! \\mathcal{N}\\!\\! \\left(\\vec{\\xi}|\\vec{\\Theta}(\\bar{\\vec{s}}_m)^{\\mathsf{T}}\\! 
\\vec{\\mu}_w, \\!\\vec{\\Theta}(\\bar{\\vec{s}}_m)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\bar{\\vec{s}}_m) \\right)\n\\label{equ:def:prob:para:des}\n\\end{equation}\nand\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{d}}(\\vec{\\xi}|\\bar{\\vec{s}}_m)=\\mathcal{N} (\\vec{\\xi}|\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m).\n\\label{equ:def:prob:ref:des}\n\\end{equation}\nLet ${\\bar{\\vec{D}}=\\{\\bar{\\vec{s}}_{m}, \\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m}\\}_{m=1}^{M}}$ denote the \\emph{desired database}. We can concatenate the reference database $\\vec{D}$ with the desired database $\\bar{\\vec{D}}$ and generate an \\emph{extended reference database} $\\{\\vec{s}_{i}^{U},\\vec{\\mu}_{i}^{U},\\vec{\\Sigma}_{i}^{U}\\}_{i=1}^{N+M}$, \nwhich is defined as follows\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n\\vec{s}_{i}^{U}&\\!=\\!\\vec{s}_{i}, \\quad\\,\\, \\vec{\\mu}_{i}^{U}\\!=\\!\\hat{\\vec{\\mu}}_{i}, \\;\\;\\;\\,\\, \\vec{\\Sigma}_{i}^{U}\\!=\\!\\hat{\\vec{\\Sigma}}_{i}, \\;\\;\\,\\,\\,\\,\\, \\mathrm{if} \\;\\;\\; 1 \\leq i \\leq N\\\\\n\\vec{s}_{i}^{U}&\\!=\\!\\bar{\\vec{s}}_{i-N}, \\vec{\\mu}_{i}^{U}\\!=\\!\\bar{\\vec{\\mu}}_{i-N}, \\!\\vec{\\Sigma}_{i}^{U}\\!=\\!\\bar{\\vec{\\Sigma}}_{i-N}, \\mathrm{if} \\;N \\!< i\\!\\leq\\! N\\!\\!+\\!\\!M\\\\\n\\end{aligned}\\right. \\!,\n\\label{equ:combine:ref:desired}\n\\end{equation} \nThen, the objective function (\\ref{equ:kl:cost:ini:modulate}) can be written as follows\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}^{U}(\\vec{\\mu}_w,\\!\\vec{\\Sigma}_w)\\!\\!=\\!\\!\\!\\sum_{i=1}^{M+N}\\!\\!\\! D_{KL} \\biggl (\\! \\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_{i}^{U}) || \\mathcal{P}_{\\mathbf{u}}(\\vec{\\xi}|\\vec{s}_{i}^{U}) \\!\\biggr),\n\\end{aligned}\n\\label{equ:kl:cost:ini:modulate:update:ref}\n\\end{equation}\nwith\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{p}}(\\vec{\\xi}|\\vec{s}_{i}^{U})\\!\\!=\\!\\! \\mathcal{N}\\!\\! 
\\left(\\vec{\\xi}|\\vec{\\Theta}(\\vec{s}_{i}^{U})^{\\mathsf{T}}\\! \\vec{\\mu}_w, \\!\\vec{\\Theta}(\\vec{s}_{i}^{U})^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_{i}^{U}) \\right)\n\\label{equ:def:prob:para:extend}\n\\end{equation}\nand\n\\begin{equation}\n\\mathcal{P}_{\\mathbf{u}}(\\vec{\\xi}|\\vec{s}_{i}^{U})=\\mathcal{N} (\\vec{\\xi}|\\vec{\\mu}_i^{U},\\vec{\\Sigma}_i^{U}).\n\\label{equ:def:prob:ref:extend}\n\\end{equation}\nNote that (\\ref{equ:kl:cost:ini:modulate:update:ref}) has the same form as (\\ref{equ:kl:cost:ini:temp}). Hence, for the problem of enforcing trajectories to pass through desired via-points\/end-points, we can first concatenate the original reference database with the desired database through (\\ref{equ:combine:ref:desired}) and, subsequently, apply Algorithm~\\ref{algorithm:kmp} to the extended reference database to predict the mean and covariance for new queries $\\vec{s}^{*}$.\n\nIt is worth pointing out that there might exist conflicts between the desired database and the original reference database. \nIn order to illustrate this issue clearly, let us consider an extreme case: if there exists \na new input $\\bar{\\vec{s}}_m=\\vec{s}_n$, but $\\bar{\\vec{\\mu}}_m$ is distant from $\\hat{\\vec{\\mu}}_n$ while $\\bar{\\vec{\\Sigma}}_m$ and $\\hat{\\vec{\\Sigma}}_n$ are nearly the same, then the optimal solution of (\\ref{equ:kl:cost:ini:modulate:update:ref}) corresponding to the query $\\vec{s}_n$ can only be a trade-off between $\\bar{\\vec{\\mu}}_m$ and $\\hat{\\vec{\\mu}}_n$.\nIn the context of trajectory modulation using via-points\/end-points, it is natural to give new desired points the highest preference. Thus, we propose to update the reference database from the perspective of reducing the above-mentioned conflicts while maintaining most of the datapoints in the reference database. 
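A minimal Python sketch of such a conflict-aware update (an illustration, not the paper's implementation; the database is assumed stored as a list of `(s, mu, Sigma)` tuples and the distance $d$ is taken as the 2-norm):

```python
import numpy as np

def update_reference_database(database, s_bar, mu_bar, sigma_bar, zeta):
    """Insert a desired point into the reference database, replacing the
    nearest existing datapoint when its input lies within distance zeta."""
    # Find the reference datapoint whose input is closest to the new input.
    dists = [np.linalg.norm(s_bar - s) for s, _, _ in database]
    r = int(np.argmin(dists))
    if dists[r] < zeta:
        # Conflict: the new desired point overrides the nearby reference point.
        database[r] = (s_bar, mu_bar, sigma_bar)
    else:
        # No conflict: simply extend the database.
        database.append((s_bar, mu_bar, sigma_bar))
    return database
```

The threshold `zeta` trades off how aggressively nearby reference points are discarded in favor of the new desired points.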
\nThe update procedure is carried out as follows.\nFor each datapoint $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m\\}$ in the desired database, \nwe first compare its input $\\bar{\\vec{s}}_{m}$ with the inputs $\\{\\vec{s}_{n}\\}_{n=1}^{N}$ of the reference database so as to find the nearest datapoint $\\{\\vec{s}_{r},\\hat{\\vec{\\mu}}_r,\\hat{\\vec{\\Sigma}}_r\\}$ that\nsatisfies ${d(\\bar{\\vec{s}}_m,\\vec{s}_r) \\leq d(\\bar{\\vec{s}}_m,\\vec{s}_n), \\forall n \\in \\{1,2,\\ldots,N\\}}$, where $d(\\cdot)$ could be an arbitrary distance measure such as the 2-norm. \nIf the nearest distance $d(\\bar{\\vec{s}}_m,\\vec{s}_r)$ is smaller than a predefined threshold $\\zeta>0$, we replace $\\{\\vec{s}_{r},\\hat{\\vec{\\mu}}_r,\\hat{\\vec{\\Sigma}}_r\\}$ with $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m\\}$; otherwise, we insert $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m\\}$ into the reference database. More specifically, given a new desired point $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\xi}}_m\\}$ described by ${\\bar{\\vec{\\xi}}_m | \\bar{\\vec{s}}_{m} \\sim \\mathcal{N} ( \\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m} )}$, we update the reference database \naccording to\n\\begin{equation}\n\t\\left\\{\n\t\\begin{aligned}\n\t\t&\\!\\!\\vec{{D}} \\!\\leftarrow\\! \\{\\! \\vec{D}\/\\{\\vec{s}_{r},\\hat{\\vec{\\mu}}_{r},\\hat{\\vec{\\Sigma}}_{r}\\}\\! \\} \\!\\cup\\! \\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m}\\!\\}, \n\\;\\mathrm{if} \\, d(\\bar{\\vec{s}}_{m},\\vec{s}_{r}) \\!<\\! 
\\zeta,\\\\ \n\t\t&\\!\\!\\vec{{D}} \\!\\leftarrow \\vec{D} \\cup \\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_{m},\\bar{\\vec{\\Sigma}}_{m}\\}, \n\t\t\\hspace{0.85in} \\mathrm{otherwise},\n\t\\end{aligned}\n\t\\right.\n\t\\label{equ:kmp:update}\n\\end{equation}\nwhere $r=\\arg\\!\\min_{n} d(\\bar{\\vec{s}}_{m},\\vec{s}_{n}), n\\in\\{1,2,\\ldots,N\\}$ and the symbols `$\/$' and `$\\cup$' represent set exclusion and union operations, respectively.\n\n\n\\subsection{Trajectory Superposition Using KMP}\n\\label{subsec:super:position}\nIn addition to the modulation operations on a single trajectory, we extend KMP to mix multiple trajectories that represent different feasible solutions for a task, with different priorities. Formally, we are given a set of $L$ reference trajectory distributions with associated inputs and priorities, denoted as $\\{ \\{\\vec{s}_{n},\\hat{\\vec{\\xi}}_{n,l},\\gamma_{n,l}\\}_{n=1}^{N}\\}_{l=1}^{L}$, where ${\\hat{\\vec{\\xi}}_{n,l}|\\vec{s}_n \\sim \\mathcal{N}(\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l})}$ and $\\gamma_{n,l} \\in (0,1)$ is the priority assigned to the point $\\{\\vec{s}_{n},\\hat{\\vec{\\xi}}_{n,l}\\}$, satisfying $\\sum_{l=1}^{L}\\gamma_{n,l}=1$.\n\nSince each priority indicates the importance of one datapoint in a reference trajectory, we use the priorities to weight the information loss as follows\n\\begin{equation}\n\\begin{aligned}\nJ_{ini}^{S}(\\vec{\\mu}_w,\\!\\vec{\\Sigma}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N}\\!\\sum_{l=1}^{L}\\!\\!\\gamma_{n,l} D_{KL} \\biggl (\\!\\! 
\\mathcal{P}_{\\mathbf{p}}(&\\vec{\\xi}|\\vec{s}_n)\n|| \\mathcal{P}^{l}_{\\mathbf{s}}(\\vec{\\xi}|\\vec{s}_n) \\!\\!\\biggr),\n\\end{aligned}\n\\label{equ:kl:cost:ini:mix}\n\\end{equation}\nwhere $\\mathcal{P}^{l}_{\\mathbf{s}}$ is defined as\n\\begin{equation}\n\\mathcal{P}^{l}_{\\mathbf{s}}(\\vec{\\xi}|\\vec{s}_n)=\\mathcal{N} (\\vec{\\xi}|\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l}),\n\\label{equ:def:prob:subref}\n\\end{equation}\nrepresenting the distribution of the $l$-th reference trajectory given the input $\\vec{s}_n$.\n\nSimilar to the decomposition in (\\ref{equ:kl:cost:ini})--(\\ref{equ:kl:var:cost}), the objective function (\\ref{equ:kl:cost:ini:mix}) can be decomposed into a \\emph{weighted mean minimization subproblem} and a \\emph{weighted covariance minimization subproblem}. The former is written as\n\\begin{equation}\n\\begin{aligned}\n{J}_{ini}^{S}(\\vec{\\mu}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N}\\! \\sum_{l=1}^{L} \\gamma_{n,l}(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} &\\vec{\\mu}_w -\\hat{\\vec{\\mu}}_{n,l})^{\\mathsf{T}} \\hat{\\vec{\\Sigma}}_{n,l}^{-1} \\\\\n&(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w \\!-\\! \\hat{\\vec{\\mu}}_{n,l})\n\\end{aligned}\n\\label{equ:kl:mean:cost:mix}\n\\end{equation}\nand the latter is\n\\begin{equation}\n\\begin{aligned}\n{J}_{ini}^{S}(\\vec{\\Sigma}_w)\\!=\\!\\sum_{n=1}^{N} &\\sum_{l=1}^{L}\n\\gamma_{n,l}\\Big(\\!-\\!\\log|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n&+\\mathrm{Tr}(\\hat{\\vec{\\Sigma}}_{n,l}^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)) \\Big)\n\\end{aligned}.\n\\label{equ:kl:var:cost:mix}\n\\end{equation}\n \nIt can be proved that the weighted mean subproblem can be solved by minimizing (see Appendix~\\ref{app:compose:mean})\n\\begin{equation}\n\\tilde{J}_{ini}^{S}(\\vec{\\mu}_w)\\!\\!=\\!\\!\\sum_{n=1}^{N} (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\!\\vec{\\mu}_w \\!-\\! 
\\vec{\\mu}_{n}^{S})^{\\mathsf{T}} {\\vec{\\Sigma}_n^{S}}^{-1} (\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\!\\vec{\\mu}_w \\!-\\! \\vec{\\mu}_{n}^{S})\n\\label{equ:kl:mean:cost:mix:prod}\n\\end{equation}\nand the weighted covariance subproblem is equivalent to the problem of minimizing (see Appendix~\\ref{app:compose:var})\n\\begin{equation}\n\\begin{aligned}\n\\tilde{J}_{ini}^{S}(\\vec{\\Sigma}_w)\\!=\\!\\sum_{n=1}^{N} \\Big(&\n\\!\\!-\\log |\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)|\\\\\n&+\\mathrm{Tr}({\\vec{\\Sigma}_n^{S}}^{-1}\n\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n)) \\Big)\n\\end{aligned},\n\\label{equ:kl:var:cost:mix:prod}\n\\end{equation}\nwhere\n\\begin{equation}\n{\\vec{\\Sigma}_n^{S}}^{-1}=\\sum_{l=1}^{L} \\left( \\hat{\\vec{\\Sigma}}_{n,l}\/\\gamma_{n,l} \\right)^{-1} \\quad \\mathrm{and}\n\\label{equ:prod:var}\n\\end{equation}\n\\begin{equation}\n\\vec{\\mu}_{n}^{S}={\\vec{\\Sigma}_n^{S}} \\sum_{l=1}^{L} \\left( \\hat{\\vec{\\Sigma}}_{n,l}\/\\gamma_{n,l} \\right)^{-1} \\hat{\\vec{\\mu}}_{n,l}.\n\\label{equ:prod:mean}\n\\end{equation} \nObserve that (\\ref{equ:kl:mean:cost:mix:prod}) and (\\ref{equ:kl:var:cost:mix:prod}) have the same form as the subproblems defined in (\\ref{equ:kl:mean:cost}) and (\\ref{equ:kl:var:cost}), respectively. 
Note that the definitions in (\\ref{equ:prod:var}) and (\\ref{equ:prod:mean}) essentially correspond to the product of $L$ Gaussian distributions $\\mathcal{N}(\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l}\/\\gamma_{n,l})$ with ${l=1,2,\\ldots,L}$, given by\n\\begin{equation}\n\\mathcal{N}(\\vec{\\mu}_{n}^{S},{\\vec{\\Sigma}_n^{S}}) \\propto \\prod_{l=1}^{L} \\mathcal{N}(\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l}\/\\gamma_{n,l}).\n\\label{equ:product:mix}\n\\end{equation}\nThus, for the problem of trajectory superposition, we first determine a \\emph{mixed reference database}\n$\\{\\vec{s}_n,\\vec{\\mu}_n^{S},\\vec{\\Sigma}_n^{S}\\}_{n=1}^{N}$ through (\\ref{equ:product:mix}), \nthen we employ Algorithm~\\ref{algorithm:kmp} to predict the corresponding mixed trajectory points for arbitrary queries. \nNote that the weighted mean minimization subproblem (\\ref{equ:kl:mean:cost:mix}) can be interpreted as the maximization of the weighted posterior \n$\\prod_{n=1}^{N} \\prod_{l=1}^{L} \\mathcal{P} \\left( \\vec{\\Theta}(\\vec{s}_{n})^{\\mathsf{T}} \\vec{\\mu}_w|\\hat{\\vec{\\mu}}_{n,l},\\hat{\\vec{\\Sigma}}_{n,l} \\right)^{\\gamma_{n,l}}.$\nIn comparison with the trajectory mixture in ProMP \\citep{Paraschos}, \nwe here consider an optimization problem with an unknown $\\vec{\\mu}_w$ rather than the direct product of a set of known probabilities.\n\n\n\\subsection{Local Movement Learning Using KMP}\n\\label{subsec:local_frame}\n\n\n\n\\begin{algorithm}[t]\n\t\\caption{\\emph{Local Kernelized Movement Primitives with Via-points\/End-points}}\n\t\\begin{algorithmic}[1]\n\t\\State{\\textbf{{Initialization}}}\n\t\t\\Statex{- Define $k(\\cdot,\\cdot)$ and set $\\lambda$.} \n\t\t\\Statex{- Determine $P$ local frames $\\{\\vec{A}^{(p)},\\vec{b}^{(p)}\\}_{p=1}^{P}$.}\n\t\\State{\\textbf{{Learning from local demonstrations}}}\n\t\t\\Statex{- Collect demonstrations $\\{ \\{ \\vec{s}_{n,h},\\vec{\\xi}_{n,h}\\}_{n=1}^{N} \\}_{h=1}^{H}$ in 
$\\{O\\}$.}\n\t\t\\Statex{- Project demonstrations into local frames via (\\ref{equ:linear:transform}).}\n\t\t\\Statex{- Extract local reference databases $\\!\\{\\vec{s}_n^{(p)}\\!,\\!\\hat{\\vec{\\mu}}_n^{(p)}\\!,\\!\\hat{\\vec{\\Sigma}}_n^{(p)}\\}_{n=1}^{N}\\!$.}\n\t\\State{\\textbf{{Update local reference databases}}}\n\t\t\\Statex{- Project via-points\/end-points into local frames via (\\ref{equ:linear:transform}).} \n\t\t\\Statex{- Update local reference databases via (\\ref{equ:kmp:update}).}\n\t\t\\Statex{- Update $\\vec{K}^{(p)},\\vec{\\mu}^{(p)},\\vec{\\Sigma}^{(p)}$ in each frame $\\{p\\}$.}\n\t\\State{\\textbf{{Prediction using local-KMPs}}} \n\t\t\\Statex{- {\\emph{Input}}: query $\\vec{s}^{*}$.}\n\t\t\\Statex{- Update $P$ local frames based on new task requirements.} \n\t\t\\Statex{- Project $\\vec{s}^{*}$ into local frames via (\\ref{equ:linear:transform}), yielding $\\{\\!\\vec{s}\\!^{*\\!(p)}\\!\\}\\!_{p=1}^{P}$.}\n\t\t\\Statex{- Predict the local trajectory point associated with $\\vec{s}^{*(p)}$ in each frame $\\{p\\}$ using KMP.} \n\t\t\\Statex{- {\\emph{Output}}: Compute $\\widetilde{\\vec{\\xi}}(\\vec{s}^{*})$ in the frame $\\{O\\}$ using (\\ref{equ:local:product}).}\n\t\\end{algorithmic}\n\t\\label{algorithm:local-kmp}\n\\end{algorithm}\n\nSo far we have considered trajectories that are represented with respect to the same global frame (coordinate system).\nIn order to enhance the extrapolation capability of KMP in task space, human demonstrations can be encoded in local frames\\footnote{Also referred to as task parameters in \\cite{Calinon2016}.} so as to extract local movement patterns, which can then be applied to a wider range of task instances. \nUsually, the definition of local frames depends on the task at hand. \nFor example, in a transportation task where the robot moves an object from a starting position (that may vary) to different target locations, two local frames can be defined respectively at the starting and ending positions. 
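As an illustrative sketch of this frame projection (hypothetical helper names; a local frame is given by a rotation matrix A and a translation vector b), moving a point into frame {p} and back is a plain affine transformation:

```python
import numpy as np

def to_local_frame(xi, A, b):
    """Project a base-frame point xi into the local frame {p}
    defined by rotation A and translation b: xi_p = A^{-1} (xi - b)."""
    return np.linalg.solve(A, xi - b)

def to_base_frame(xi_p, A, b):
    """Inverse mapping, from the local frame {p} back to the base frame {O}."""
    return A @ xi_p + b
```

For instance, with A a 90-degree rotation and b a translation, `to_base_frame(to_local_frame(xi, A, b), A, b)` recovers `xi` exactly.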
\n\n\n \n \nFormally, let us define $P$ local frames as $\\{\\vec{A}^{(p)},\\vec{b}^{(p)}\\}_{p=1}^{P}$,\nwhere $\\vec{A}^{(p)}$ and $\\vec{b}^{(p)}$ respectively represent the rotation matrix and the translation vector of frame $\\{p\\}$ with respect to the base frame $\\{O\\}$. Demonstrations are projected into each frame $\\{p\\}$, resulting in new trajectory points $\\{ \\{\\vec{s}_{n,h}^{(p)},\\vec{\\xi}_{n,h}^{(p)}\\}_{n=1}^{N} \\}_{h=1}^{H}$ for each local frame, where \n\\begin{equation}\n\t\\left[ \\begin{matrix}\n\t\t\\vec{s}_{n,h}^{(p)} \\\\ \\vec{\\xi}_{n,h}^{(p)} \n\t\\end{matrix} \\right]=\n\t\\left[\\begin{matrix}\n\t\\vec{A}_{s}^{(p)} &\\vec{0}\\\\\n\t\\vec{0} & \\vec{A}_{\\xi}^{(p)}\t\n\t\\end{matrix}\\right]^{-1} \n\t\\left( \n\t\\left[ \\begin{matrix}\n\t\t\\vec{s}_{n,h} \\\\ \\vec{\\xi}_{n,h}\n\t\\end{matrix} \\right]-\n\t\\left[\\begin{matrix}\n\t\\vec{b}_{s}^{(p)} \\\\\n\t\\vec{b}_{\\xi}^{(p)}\t\n\t\\end{matrix}\\right] \n\t\\right) ,\n\t\\label{equ:linear:transform}\n\\end{equation}\nwith $\\vec{A}_{s}^{(p)}\\!=\\vec{A}_{\\xi}^{(p)}\\!=\\vec{A}^{(p)}$ and $\\vec{b}_{s}^{(p)}\\!=\\vec{b}_{\\xi}^{(p)}\\!=\\vec{b}^{(p)}$\\footnote{Note that, if the input $\\vec{s}$ becomes time, then $\\vec{A}_{s}^{(p)}=\\!1$ and $\\vec{b}_{s}^{(p)}=\\!0$.}.\nSubsequently, by following the procedure in Section~\\ref{subsec:ref:traj}, for each local frame $\\{p\\}$ we can generate \na \\emph{local reference database} ${\\vec{D}^{(p)}=\\{\\vec{s}_n^{(p)},\\hat{\\vec{\\mu}}^{(p)}_{n},\\hat{\\vec{\\Sigma}}^{(p)}_{n}\\}_{n=1}^{N}}$.\n\n\nWe refer to the learning of KMPs in local frames as \\emph{local}-KMPs. For the sake of simplicity, we only discuss trajectory modulation with via-points\/end-points. 
The operation of trajectory superposition can be treated in a similar manner.\nGiven a set of desired points in the robot base frame $\\{O\\}$ described by the desired database $\\{\\bar{\\vec{s}}_{m},\\bar{\\vec{\\mu}}_m,\\bar{\\vec{\\Sigma}}_m\\}_{m=1}^{M}$, we project the desired database into local frames using (\\ref{equ:linear:transform}), leading to the set of transformed \\emph{local desired databases} $\\bar{\\vec{D}}^{(p)}=\\{\\bar{\\vec{s}}_{m}^{(p)},\\bar{\\vec{\\mu}}_m^{(p)},\\bar{\\vec{\\Sigma}}_m^{(p)}\\}_{m=1}^{M}$ with $p \\in \\{1,2,\\dots,P\\}$. Then, we carry out the update procedure described by (\\ref{equ:kmp:update}) in each frame $\\{p\\}$ and obtain a new local reference database $\\vec{D}^{(p)}$. \n\n\nFor a new input $\\vec{s}^{*}$ in the base frame $\\{O\\}$, we first project it into local frames using the input transformation in (\\ref{equ:linear:transform}), yielding local inputs $\\{\\vec{s}^{*(p)}\\}_{p=1}^{P}$. Note that, during the prediction phase, local frames might be updated depending on new task requirements and the corresponding task parameters $\\vec{A}^{(p)}$ and $\\vec{b}^{(p)}$ might vary accordingly. Then, in each frame $\\{p\\}$ we can predict a local trajectory point $\\widetilde{\\vec{\\xi}}^{(p)}\\!\\!( \\vec{s}^{*(p)})\\sim \\mathcal{N}( {\\vec{\\mu}}^{*(p)} , {\\vec{\\Sigma}}^{*(p)} )$ with updated mean ${\\vec{\\mu}}^{*(p)}$ and covariance ${\\vec{\\Sigma}}^{*(p)}$ by using (\\ref{equ:kmp:mean}) and (\\ref{equ:kmp:var}). \nFurthermore, new local trajectory points from all local frames can be simultaneously transformed into the robot base frame using an inverse formulation of (\\ref{equ:linear:transform}). 
Thus, for the query $\\vec{s}^{*}$ in the base frame $\\{O\\}$, its corresponding trajectory point $\\widetilde{\\vec{\\xi}}(\\vec{s}^{*})$ in $\\{O\\}$ can be determined by maximizing the product of linearly transformed Gaussian distributions\n\\begin{equation}\n\\widetilde{\\vec{\\xi}}(\\vec{s}^{*})\\!=\\!\\argmax_{\\vec{\\xi}}\\!\\prod_{p=1}^{P}\\! \\mathcal{N}\\! \\biggl(\\! \\vec{\\xi} | \\underbrace{\\vec{A}_{\\xi}^{(p)} \\vec{{\\mu}}^{*(p)} \\!\\!+\\! \\vec{b}_{\\xi}^{(p)}}_{\\widetilde{\\vec{\\mu}}_p}, \\underbrace{\\vec{A}_{\\xi}^{(p)} {\\vec{\\Sigma}}^{*(p)} \\!\\!{\\vec{A}_{\\xi}^{(p)}}^{\\mathsf{T}}}_{\\widetilde{\\vec{\\Sigma}}_p} \\biggr)\\!, \n\\label{equ:local:to:global}\n\\end{equation}\nwhose optimal solution is\n\\begin{equation}\n\\widetilde{\\vec{\\xi}}(\\vec{s}^{*})=\\biggl(\\sum_{p=1}^{P} \\widetilde{\\vec{\\Sigma}}_p^{-1} \\biggr)^{-1} \\sum_{p=1}^{P} \\widetilde{\\vec{\\Sigma}}_p^{-1} \\widetilde{\\vec{\\mu}}_p.\n\\label{equ:local:product}\n\\end{equation}\nThe described procedure is summarized in Algorithm \\ref{algorithm:local-kmp}. \nNote that the solution (\\ref{equ:local:product}) actually corresponds to the expectation part of the product of Gaussian distributions in (\\ref{equ:local:to:global}).\n\n\n\n\\begin{figure*}[bt] \\centering\n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{demos_G.png}} \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{demos_G_gmm.png}} \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{demos_G_gmr.png}} \n\t\\caption{Demonstrations of handwritten letter `G' and the estimation of the reference trajectory through GMM\/GMR. (\\emph{a}) shows the trajectories of `G', where `$\\ast$' and `+' denote the starting and ending points of the demonstrations, respectively. \n\t\t(\\emph{b}) depicts the estimated GMM with the ellipses representing Gaussian components. 
(\\emph{c}) displays the retrieval of the reference trajectory distribution, where the red solid curve and shaded area, respectively, correspond to the mean and standard deviation of the reference trajectory.} \n\t\\label{fig:g:demos} \n\\end{figure*}\n\n\n\\section{Time-driven Kernelized Movement Primitives}\n\\label{sec:time_kmp}\n\nIn many robotic tasks, such as biped locomotion \\citep{Nakanishi} and striking movements \\citep{Huang2016}, time plays a critical role when generating movement trajectories for a robot. We here consider a special case of KMP by taking time $t$ as the input $\\vec{s}$, which is aimed at learning time-driven trajectories. \n\n\n\\subsection{A Special Treatment of Time-Driven KMP}\n\\label{subsec:time:tmp}\nSimilarly to ProMP, we formulate a parametric trajectory comprising positions and velocities as\n\\begin{equation}\n\\left[ \\begin{matrix}\n\\vec{\\xi}(t) \\\\ \\dot{\\vec{\\xi}}(t) \n\\end{matrix} \\right] = \\vec{\\Theta}(t)^{\\mathsf{T}} \\vec{w},\n\\label{equ:linear:form:time}\n\\end{equation}\nwhere the matrix $\\vec{\\Theta}(t)\\in \\mathbb{R}^{B\\mathcal{O} \\times 2\\mathcal{O}}$ is\n\\begin{equation}\n\\vec{\\Theta}(t)\\!\\!=\\!\\!\\left[\\begin{matrix} \n\\vec{\\varphi}(t) \\!& \\vec{0} \\!& \\cdots \\!&\\vec{0} \\!& \\dot{\\vec{\\varphi}}(t) \\!& \\vec{0} \\!& \\cdots \\!&\\vec{0} \\\\\n\\vec{0} \\!& \\vec{\\varphi}(t) \\!& \\cdots \\!&\\vec{0} \\!&\\vec{0} \\!& \\dot{\\vec{\\varphi}}(t) \\!& \\cdots &\\vec{0}\\\\\n\\vdots \\!& \\vdots \\!& \\ddots \\!& \\vdots \\!&\\vdots \\!& \\vdots \\!& \\ddots \\!& \\vdots\\\\\n\\vec{0} \\!& \\vec{0} \\!& \\cdots \\!& \\vec{\\varphi}(t) \\!&\\vec{0} \\!& \\vec{0} \\!& \\cdots \\!& \\dot{\\vec{\\varphi}}(t)\\\\\n\\end{matrix}\\right] \\!.\n\\label{equ:basis:function:time}\n\\end{equation}\nNote that we have included the first-order derivative of the parametric trajectory $\\vec{\\xi}(t)$ in (\\ref{equ:linear:form:time}), which allows us to encode the observed dynamics of the motion. 
Consequently, we include the derivative of basis functions as shown in (\\ref{equ:basis:function:time}).\n\n\nIn order to encapsulate the variability in demonstrations, we here model the joint probability $\\mathcal{P}(t,\\vec{\\xi},\\dot{\\vec{\\xi}})$ using GMM, similarly to Section~\\ref{subsec:ref:traj}. The probabilistic reference trajectory associated with time input $t_n$ can then be extracted by GMR as the conditional probability \n${\\mathcal{P}(\\hat{\\vec{\\xi}}_n,\\hat{\\dot{\\vec{\\xi}}}_n|t_n)\n\\sim \\mathcal{N}(\\hat{\\vec{\\mu}}_{n},\\hat{\\vec{\\Sigma}}_{n})}$. \nFinally, we can derive the time-driven KMP by following the derivations presented in Section~\\ref{subsec:kmp}. \n\nIt is noted that, when we calculate the kernel matrix as previously defined in (\\ref{equ:single:basis:product})--(\\ref{equ:kernel:matrix}), we here encounter four types of products $\\vec{\\varphi}(t_i)^{\\mathsf{T}} \\vec{\\varphi}(t_j)$, $\\vec{\\varphi}(t_i)^{\\mathsf{T}} \\dot{\\vec{\\varphi}}(t_j)$, $\\dot{\\vec{\\varphi}}(t_i)^{\\mathsf{T}} \\vec{\\varphi}(t_j)$ and $\\dot{\\vec{\\varphi}}(t_i)^{\\mathsf{T}} \\dot{\\vec{\\varphi}}(t_j)$. \nHence, we propose to approximate $\\vec{\\dot{\\varphi}}(t)$ as\n$\\vec{\\dot{\\varphi}}(t) \\approx \\frac{\\vec{\\varphi}(t+\\delta)-\\vec{\\varphi}(t)}{\\delta}$ by using the finite difference method, where $\\delta>0$ is an extremely small constant. Thus, based on the definition $\\vec{\\varphi}(t_i)^{\\mathsf{T}} \\vec{\\varphi}(t_j)=k(t_i,t_j)$, we can determine the kernel matrix as\n\\begin{equation}\n\\vec{k}(t_i,t_j)\\!=\\! \\vec{\\Theta}({t_i})^{\\mathsf{T}}\\vec{\\Theta}({t_j})\\!\\!=\\!\\!\n\\left[ \\begin{matrix} k_{tt}(i,j)\\vec{I}_{\\mathcal{O}} \\!&\\! k_{td}(i,j)\\vec{I}_{\\mathcal{O}}\\\\\nk_{dt}(i,j)\\vec{I}_{\\mathcal{O}} \\!&\\! 
k_{dd}(i,j)\\vec{I}_{\\mathcal{O}} \\\\\n\\end{matrix} \\right],\n\\label{equ:kernel:matrix:time}\n\\end{equation} \nwhere\n\\begin{equation}\n\\begin{aligned}\nk_{tt}(i,j)\\!\\!&=\\!\\!k(t_i,t_j),\\\\\nk_{td}(i,j)\\!\\!&=\\!\\!\\frac{k(t_i,t_j\\!+\\!\\delta)\\!\\!-\\!\\!k(t_i,t_j)}{\\delta},\\\\\nk_{dt}(i,j)\\!\\!&=\\!\\!\\frac{k(t_i\\!+\\!\\delta,t_j)\\!\\!-\\!\\!k(t_i,t_j)}{\\delta}, \\\\\nk_{dd}(i,j)\\!\\!&=\\!\\!\\frac{k(t_i\\!+\\!\\delta, \\!t_j\\!+\\!\\delta)\\!\\! -\\!\\!k(t_i\\!+\\!\\delta, \\!t_j)\\!\\! -\\!\\!k(t_i,\\!t_j\\!+\\!\\delta)\\!\\! +\\!\\!k(t_i,\\!t_j)}{{\\delta}^{2}}. \n\\end{aligned}\n\\end{equation}\nNote that, alternatively, we could model the output variable $\\vec{\\xi}(t)$ and its derivative $\\dot{\\vec{\\xi}}(t)$ in (\\ref{equ:linear:form:time}) using \n${\\vec{\\Theta}(t)=blockdiag(\n\\vec{\\varphi}(t),\\vec{\\varphi}(t),\\cdots,\\vec{\\varphi}(t))}$.\nIn other words, the derivative of basis functions is not used. However, this treatment requires a higher dimensional $\\vec{\\Theta}(t)$ (i.e., $2B\\mathcal{O}\\times 2\\mathcal{O}$) and thus leads to a higher dimensional $\\vec{w}\\in\\mathbb{R}^{2B\\mathcal{O}}$. In contrast, if both basis functions and their derivatives (as defined in (\\ref{equ:basis:function:time})) are employed, we can obtain a compact representation which essentially corresponds to a lower dimensional $\\vec{w}\\in\\mathbb{R}^{B\\mathcal{O}}$.\n\nWhile the derivation presented in this section applies to the time-driven case, it cannot be easily generalized to the case of high-dimensional $\\vec{s}$. Unlike a straightforward approximation of $\\vec{\\dot{\\varphi}}(t)$ by using the finite difference method, for the high-dimensional input $\\vec{s}$ it is a non-trivial problem to estimate $\\vec{\\dot{\\varphi}}(s)=\\frac{\\partial \\vec{\\varphi}(s)}{\\partial s}\\frac{\\partial s}{\\partial t}$\nunless we have an additional model which can reflect the dynamics between time $t$ and the input $\\vec{s}$. 
Due to the difficulty of estimating $\\vec{\\dot{\\varphi}}(s)$, an alternative way to encode $[\\vec{\\xi}^{\\mathsf{T}}(\\vec{s}) \\ \\dot{\\vec{\\xi}}^{\\mathsf{T}}(\\vec{s})]^{\\mathsf{T}}$ with high-dimensional input $\\vec{s}$ is to use (\\ref{equ:linear:form}) with an extended matrix $\\vec{\\Theta}(\\vec{s}) \\in \\mathbb{R}^{2B\\mathcal{O} \\times 2\\mathcal{O}}$, i.e., ${\\vec{\\Theta}(\\vec{s})=blockdiag(\n\t\\vec{\\varphi}(\\vec{s}),\\vec{\\varphi}(\\vec{s}),\\cdots,\\vec{\\varphi}(\\vec{s}))}$. \n\n\n\n\\subsection{Time-scale Modulation of Time-driven KMP}\n\\label{subsec:time:modulation}\nIn the context of time-driven trajectories, new tasks may demand speeding up or slowing down the robot movement, and hence trajectory modulation in time-scale is required. Let us denote the movement duration of demonstrations and the time length of the corresponding reference trajectory as $t_N$. To generate adapted trajectories with new durations $t_D$, we define a monotonic function ${\\tau: [0,t_D] \\mapsto [0,t_N]}$, which is a transformation of time.\nThis straightforward solution implies that for any query $t^{*}\\in[0,t_D]$, we use $\\tau(t^{*})$ as the input for the prediction through KMP, and thus trajectories can be made faster or slower (see also \\cite{Ijspeert, Paraschos} for modulations in time-scale, where the time modulation is called a phase transformation). \n\n\n\n\n\n\n\\section{Evaluations of the Approach}\n\\label{sec:evaluations}\nIn this section, several examples are used to evaluate KMP. We first consider the adaptation of trajectories with via-points\/end-points as well as the mixture of multiple trajectories (Section~\\ref{subsec:traj:modulate}), where comparisons with ProMP are shown. Then, we evaluate the extrapolation capabilities of local-KMPs (Section~\\ref{subsec:extra:evaluate}). Subsequently, we validate the approach in two different scenarios using real robots. 
First, we study a novel application of robot motion adaptation by adding via-points according to sensed forces at the end-effector of the robot (Section~\\ref{subsec:force:ada}). Second, we focus on a human-robot collaboration scenario, namely, the 3rd-hand task, where a 6-dimensional input is considered in the learning and adaptation problems (Section~\\ref{subsec:3rd:hand}). \n\n\n\n\n\\begin{figure*} \\centering \n\t\\subfigure[Trajectory modulations with one start-point and one via-point.]{\n\t\t\\includegraphics[width=0.80\\textwidth,bb=0.0cm 9.2cm 29cm 24cm,clip]{modulation_viaOne.png}\n\t\t\\put (-412.75,161) {KMP}\n\t\t\\put (-416,55) {ProMP}}\n\t\\subfigure[Trajectory modulation with one via-point and one end-point.]{ \n\t\t\\includegraphics[width=0.80\\textwidth,bb=0.0cm 9.2cm 29cm 24cm,clip]{modulation_viaTwo.png}\n\t\t\\put (-412.75,161) {KMP}\n\t\t\\put (-416,55) {ProMP}}\n\t\\subfigure[Superposition of two probabilistic reference trajectories.]{ \n\t\t\\includegraphics[width=0.80\\textwidth,bb=0.0cm 9.2cm 29cm 24cm,clip]{modulation_mix.png}\n\t\t\\put (-412.75,161) {KMP}\n\t\t\\put (-416,55) {ProMP}}\n\t\\caption{Different cases of trajectory modulation using KMP and ProMP. \\emph{(a)--(b)} show trajectories (red and green curves) that are adapted to go through different desired points (depicted by circles). The gray dashed curves represent the mean of probabilistic reference trajectories for KMP and ProMP, while \n\tthe shaded areas depict the standard deviation.\n\t\\emph{(c)} shows the superposition of various reference trajectories, where the dashed red and green curves correspond to the adapted trajectories in \\emph{(a)} and \\emph{(b)}, respectively. 
\n\tThe resulting trajectory is displayed as a solid pink curve.} \n\t\\label{fig:viapoint:compare} \n\\end{figure*} \n\n\\begin{figure*} \\centering \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.31\\textwidth]{extra_demos.png}} \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.31\\textwidth]{extra_projectDemos_f1.png}} \n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.31\\textwidth]{extra_projectDemos_f2.png}}\n\t\\caption{Demonstrations of the transportation task as well as GMM modeling of local trajectories. (\\emph{a}) shows the demonstrated trajectories (plotted by purple curves), where gray curves correspond to the projection of demonstrated trajectories into the $x$--$y$ plane. `$\\ast$' and `+' denote the starting and ending points of trajectories, respectively. (\\emph{b})-(\\emph{c}) depict GMM modeling of local trajectories, where local trajectories are obtained by projecting demonstrations into two local frames, respectively. } \n\t\\label{fig:project:gmm} \n\\end{figure*}\n\n\n\n\n\\subsection{Trajectory Modulation\/Superposition}\n\\label{subsec:traj:modulate}\n\nWe first evaluate our approach using five trajectories of the handwritten letter `G'\\footnote{These trajectories are obtained from \\cite{Calinon2017}.}, \nas shown in Figure~\\ref{fig:g:demos}(\\emph{a}).\nThese demonstrations are encoded by GMM with input $t$ and output $\\vec{\\xi}(t)$ being the 2-D position $[x(t)\\, y(t)]^{\\mathsf{T}}$. Subsequently, a probabilistic reference trajectory is retrieved through GMR, as depicted in Figure~\\ref{fig:g:demos}(\\emph{b})--(\\emph{c}), where the position values from the reference trajectory are shown. This probabilistic reference trajectory along with the input is used to initialize KMP as described in Section~\\ref{subsec:time:tmp}, which uses a Gaussian kernel ${k(t_i,t_j)=\\exp(-\\ell (t_i-t_j)^{2})}$ with hyperparameter $\\ell>0$. The relevant parameters for KMP are set as $\\ell=2$ and $\\lambda=1$. 
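As a minimal sketch of the finite-difference kernel blocks used by the time-driven formulation (scalar output for brevity; helper names are hypothetical, with the Gaussian kernel and ell = 2 as above):

```python
import numpy as np

def k(ti, tj, ell=2.0):
    """Gaussian kernel k(t_i, t_j) = exp(-ell * (t_i - t_j)^2)."""
    return np.exp(-ell * (ti - tj) ** 2)

def kernel_block(ti, tj, ell=2.0, delta=1e-4):
    """2x2 block [k_tt, k_td; k_dt, k_dd] built from finite differences
    of the base kernel, approximating its first and cross derivatives."""
    ktt = k(ti, tj, ell)
    ktd = (k(ti, tj + delta, ell) - ktt) / delta
    kdt = (k(ti + delta, tj, ell) - ktt) / delta
    kdd = (k(ti + delta, tj + delta, ell) - k(ti + delta, tj, ell)
           - k(ti, tj + delta, ell) + ktt) / delta ** 2
    return np.array([[ktt, ktd], [kdt, kdd]])
```

The off-diagonal blocks approximate the partial derivatives of the kernel with respect to each time argument, so the block couples position and velocity dimensions without ever forming explicit basis functions.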
\n\nFor comparison purposes, ProMP is evaluated as well, where 21 empirically chosen Gaussian basis functions are used. For each demonstration, we employ the regularized least squares method to estimate the weights $\\vec{w}\\in \\mathbb{R}^{42}$ of the corresponding basis functions. Subsequently, the probability distribution $\\mathcal{P}(\\vec{\\mu}_w,\\vec{\\Sigma}_w)$ that is computed through maximum likelihood estimation \\citep{Paraschos2015} is used to initialize ProMP. Since the number of demonstrations is significantly lower than the dimension of $\\vec{w}$, a diagonal regularization term is added to $\\vec{\\Sigma}_w$ in order to avoid singular estimates. \n \nFigure~\\ref{fig:viapoint:compare} displays different trajectory modulation cases using KMP and ProMP. We test not only cases in which new requirements arise in the form of via-points and start-points\/end-points, but also the scenario of mixing different reference trajectories\\footnote{We here only consider position requirements, but velocity constraints can also be directly incorporated in desired points.}. It can be observed from Figure~\\ref{fig:viapoint:compare}(\\emph{a})--(\\emph{b}) that both KMP and ProMP successfully generate trajectories that fulfill the new requirements. For the case of trajectory superposition in Figure~\\ref{fig:viapoint:compare}(\\emph{c}), we consider the adapted trajectories in Figure~\\ref{fig:viapoint:compare}(\\emph{a}) and (\\emph{b}) as candidate reference trajectories and assign them the priorities ${\\gamma_{t,1}=\\exp(-t)}$ and $\\gamma_{t,2}=1-\\exp(-t)$, respectively. Note that $\\gamma_{t,1}$ and $\\gamma_{t,2}$ correspond to monotonically decreasing and increasing functions, respectively. As depicted in Figure~\\ref{fig:viapoint:compare}(\\emph{c}), the mixed trajectory (solid pink curve) indeed switches from the first to the second reference trajectory. 
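The priority-weighted product of Gaussians behind this superposition step can be sketched as follows (an illustration with hypothetical names; each covariance is divided by its priority before the product is taken):

```python
import numpy as np

def weighted_gaussian_product(mus, sigmas, gammas):
    """Fuse L Gaussians N(mu_l, Sigma_l scaled by the inverse priority
    gamma_l) into a single Gaussian, returning (mean, covariance)."""
    # (Sigma_l / gamma_l)^{-1} = gamma_l * Sigma_l^{-1}
    precisions = [gamma * np.linalg.inv(S) for S, gamma in zip(sigmas, gammas)]
    Sigma = np.linalg.inv(sum(precisions))
    mu = Sigma @ sum(P @ m for P, m in zip(precisions, mus))
    return mu, Sigma
```

With equal priorities, the result reduces to the ordinary product of Gaussians; as one priority dominates, the fused mean approaches the corresponding reference mean.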
\n\nAlthough KMP and ProMP perform similarly, the key difference between them lies in the determination of basis functions. In contrast to ProMP, which requires explicit basis functions, KMP is a non-parametric method that does not depend on explicit basis functions. This difference proves crucial for tasks where the robot actions are driven by a high-dimensional input. We will show this effect in the 3rd-hand task, which is associated with a 6-D input, where the implementation of ProMP becomes difficult since a large number of basis functions need to be defined.\n\n\n\n\n\n\\subsection{Extrapolation with Local-KMPs}\n\\label{subsec:extra:evaluate}\n\nWe evaluate the extrapolation capabilities of local-KMPs in an application with a new set of desired points (start-, via- and end-points) lying far away from the area covered by the original demonstrations, in contrast to the experiment reported in Section~\\ref{subsec:traj:modulate}. Note that ProMP does not consider any task-parameterization, and therefore its extrapolation capability is limited (see \\cite{Havoutis} for a discussion). Thus, we only evaluate our approach here.\n\nWe study a collaborative object transportation task, where the robot assists a human to carry an object from a starting point to an ending location. Five demonstrations\nin the robot base frame are used for the training of local-KMPs (see Figure~\\ref{fig:project:gmm}(\\emph{a})). We consider time $t$ as the input, and the 3-D Cartesian position $[x(t)\\, y(t)\\, z(t)]^{\\mathsf{T}}$ of the robot end-effector as the output $\\vec{\\xi}(t)$.\nFor the implementation of local-KMPs, we define two frames located at the initial and the final locations of the transportation trajectories (as depicted in Figure~\\ref{fig:project:gmm}(\\emph{b})--(\\emph{c})), \nsimilarly to \\cite{Leonel15}, \nwhich are then used to extract the local motion patterns. 
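The per-query fusion of local predictions back into the base frame, via the product of linearly transformed Gaussians, can be sketched as follows (hypothetical names; each frame contributes a local mean and covariance plus its rotation and translation):

```python
import numpy as np

def fuse_local_predictions(mus, sigmas, As, bs):
    """Transform each local Gaussian prediction into the base frame {O} and
    fuse them: xi = (sum_p S_p^-1)^-1 * sum_p S_p^-1 m_p."""
    # Affine transformation of each local Gaussian into the base frame.
    mus_O = [A @ m + b for A, m, b in zip(As, mus, bs)]
    sigmas_O = [A @ S @ A.T for A, S in zip(As, sigmas)]
    # Precision-weighted mean, i.e., the mode of the Gaussian product.
    precisions = [np.linalg.inv(S) for S in sigmas_O]
    return np.linalg.inv(sum(precisions)) @ sum(
        P @ m for P, m in zip(precisions, mus_O))
```

Frames that predict with small covariance (high confidence) dominate the fused trajectory point, which is what lets each frame govern the portion of the motion it captured well.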
\n\nWe consider two extrapolation tests, where the starting and ending locations are different from the demonstrated ones. \nIn the first test, we study the transportation from ${\\vec{p}_s\\!=\\![-0.2 \\; 0.2\\;0.2]^{\\mathsf{T}}}$ to ${\\vec{p}_e\\!=\\![-0.15\\; 0.8\\; 0.1]^{\\mathsf{T}}}$. In the second test, we evaluate the extrapolation with ${\\vec{p}_s\\!=\\![0.2\\; -\\!0.3 \\; 0.1]^{\\mathsf{T}}}$ and ${\\vec{p}_e\\!=\\![0.25\\; 0.5\\; 0.05]^{\\mathsf{T}}}$. Note that all locations are described with respect to the robot base frame. In addition to the desired starting and ending locations in the transportation task, we also introduce additional position constraints which require the robot to pass through two via-points (plotted as circles in Figure~\\ref{fig:extra:compare}).\nThe extrapolation of local-KMPs for these new situations is achieved according to Algorithm \\ref{algorithm:local-kmp}, where the Gaussian kernel is used. For each test, the local frames are set as ${\\vec{A}^{(1)}=\\vec{A}^{(2)}=\\vec{I}_3}$, $\\vec{b}^{(1)}=\\vec{p}_s$ and $\\vec{b}^{(2)}=\\vec{p}_e$. The related KMP parameters are $\\ell=0.5$ and $\\lambda=10$.\nFigure~\\ref{fig:extra:compare} shows that local-KMPs successfully extrapolate to new frame locations and lead the robot to go through various new desired points while maintaining the shape of the demonstrated trajectories. \n\nNote that the environment might change drastically between demonstration and final execution, so the capability of modulating the demonstrated trajectories to go through new points is important in many applications. 
In this sense, local-KMPs prove superior to other local-frame approaches such as those exploited in \\cite{Leonel15, Calinon2016}, which do not consider trajectory modulation.\n\n\n \n\n\n\n\n\\begin{figure} \\centering \t\n\t\\includegraphics[width=0.72\\columnwidth]{extra_viaPoints.png}\n\t\\caption{Extrapolation evaluations of local-KMPs for new starting and ending locations in the transportation task. The purple curve represents the mean of the original probabilistic reference trajectory for KMP, while the red and yellow trajectories show the extrapolation cases. Circles represent desired points describing additional task requirements. Squares denote desired starting and ending locations of the transportation task. Gray curves depict the projection of trajectories into the $x$--$y$ plane.} \n\t\\label{fig:extra:compare} \n\\end{figure}\n\n\n\n\n\\subsection{Force-based Trajectory Adaptation}\n\\label{subsec:force:ada}\nThrough kinesthetic teaching, humans are able to provide the robot with initial feasible trajectories. However, this procedure does not account for unpredicted situations. For instance, when the robot is moving towards a target, undesired circumstances such as obstacles occupying the robot workspace might appear, which requires the robot to avoid possible collisions. \nSince humans react reliably to dynamic environments, we propose to use human supervision to adapt the robot trajectory when the environment changes. In particular, we use a force sensor installed at the end-effector of the robot in order to measure corrective forces exerted by the human. \n\nWe treat the force-based adaptation problem under the KMP framework by defining new via-points as a function of the sensed forces.\nWhenever the robot is about to collide with an obstacle, the user interacts physically with the end-effector and applies a corrective force. 
This force is used to determine a desired via-point which the robot needs to pass through in order to avoid the obstacle.\nBy updating the reference database with this via-point through (\\ref{equ:kmp:update}), KMP can generate an adapted trajectory that fulfills the via-point constraint.\n\n\nFor the human interaction at time $t$, given the robot Cartesian position $\\vec{p}_t$ and the sensed force $\\vec{F}_t$, the first desired datapoint is defined as:\n${\\bar{t}_1=t+\\delta_t}$ and ${\\bar{\\vec{p}}_{1}=\\vec{p}_{t}+\\vec{K}_f \\vec{F}_t}$, where $\\delta_t>0$ controls the regulation time and $\\vec{K}_f>0$ determines the adaptation proportion for the robot trajectory. In order to avoid undesired trajectory modulations caused by the force sensor noise, we introduce a force threshold $F_{thre}$ and add the new force-based via-point to the reference trajectory only when $||\\vec{F}_t||>F_{thre}$.\nSince the adapted trajectory might be far away from the previously planned trajectory due to the new via-point, we add $\\vec{p}_t$ as a second desired point so as to ensure a smooth trajectory for the robot. Accordingly, for each interaction, we define the second desired point as $\\bar{t}_2=t$ and $\\bar{\\vec{p}}_{2}=\\vec{p}_t$.\n\n\n\n\n\\begin{figure} \\centering\n\t\\includegraphics[width=0.92\\columnwidth]{forceHumanDemo.jpg}\n\t\\caption{Kinesthetic teaching of the reaching task on the KUKA robot, where demonstrations comprising time and end-effector Cartesian position are collected. 
The green arrow shows the motion direction of the robot.} \n\t\\label{fig:force:humanDemo} \n\\end{figure} \n\n\\begin{figure} \\centering \n\t\\includegraphics[width=0.90\\columnwidth]{force_gmm.png}\n\t\\caption{GMM modeling of demonstrations for the force-based adaptation task, where the green curves represent demonstrated trajectories and ellipses depict Gaussian components.} \n\t\\label{fig:force:demos} \n\\end{figure} \n\n\n\\begin{figure*} \\centering\n\t\\includegraphics[width=2.03\\columnwidth]{forceAdaRobot.jpg}\n\t\\caption{Snapshots of the force-based trajectory adaptations, where the force exerted by the human is used to determine the via-points for the robot, which ensures collision avoidance. (a) and (f) correspond to the initial and final states of the robot, where circles depict the initial and final positions, respectively. Figures (b)--(e) show human interactions with the green arrows depicting the directions of corrective force. }\n\t\\label{fig:force:robot} \n\\end{figure*} \n\n\n\\begin{figure*} \\centering \n\t\\includegraphics[width=1.9\\columnwidth,bb=4.0cm 0cm 45cm 21.5cm,clip]{force_adaTraj.png}\n\t\\caption{\\emph{Top row}: the desired trajectory (generated by KMP) and the real robot trajectory, where `$\\ast$' represents the force-based desired points and `+' corresponds to the initial and final locations for the robot. For comparison, we also provide the desired trajectory predicted by KMP without obstacles (i.e., without human intervention). The shaded areas show the regulation durations for various human interventions. \\emph{Bottom row}: the force measured at the end-effector of the KUKA robot.} \n\t\\label{fig:force:adaTraj} \n\\end{figure*}\n\n\n\n\n\nIn order to evaluate the adaptation capability of KMP, we consider a reaching task where unpredicted obstacles will intersect the robot movement path. 
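The force-to-via-point rule described above can be sketched as follows (a minimal illustration; the parameter values mirror those reported for this experiment, and the function name is ours):

```python
import numpy as np

def force_via_points(t, p_t, F_t, K_f=0.006, delta_t=1.0, F_thre=10.0):
    # Map a sensed corrective force into desired points for KMP.
    # Forces below the threshold are treated as sensor noise and ignored.
    if np.linalg.norm(F_t) <= F_thre:
        return []
    p_bar1 = p_t + K_f * F_t          # shifted via-point, reached delta_t later
    # The current position is kept as a second desired point so that the
    # adapted trajectory stays smooth around the interaction instant.
    return [(t + delta_t, p_bar1), (t, p_t)]
```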
First, we collect six demonstrations (as depicted in Figure~\\ref{fig:force:humanDemo}) comprising time input $t$ and output $\\vec{\\xi}(t)$ being the 3-D Cartesian position ${[x(t)\\, y(t)\\, z(t)]^{\\mathsf{T}}}$. Note that no obstacles are present during the training phase. The collected data is fitted using GMM (plotted in Figure~\\ref{fig:force:demos}) so as to retrieve a reference database, which is subsequently used to initialize KMP. \nThen, during the evaluation phase,\ntwo obstacles whose locations intersect the robot path are placed on the table, as shown in Figure~\\ref{fig:force:robot}.\nIn addition to the via-points that will be added through physical interaction,\nwe add the initial and target locations for the robot as desired points beforehand, where the initial location corresponds to the robot position before it starts moving.\nThe relevant parameters are $\\vec{K}_f\\!\\!=\\!\\!0.006\\vec{I}_{3}$, $\\delta_t=1s$, $F_{thre}=10N$, $\\ell=0.15$ and ${\\lambda=0.3}$. \n\n\nThe trajectory that is generated by KMP according to various desired points\nas well as the real robot trajectory are depicted in Figure~\\ref{fig:force:adaTraj}.\nWe can observe that for each obstacle the robot trajectory is adapted twice. In the first two adaptations (around $8s$ and $11s$), the corrective force is dominant along the $z$ direction, while in the last two adaptations (around $17s$ and $20s$), the force has a larger component along the $x$ and $y$ directions. In all cases, KMP successfully adapts the end-effector trajectory according to the measured forces. \n\nNote that, even without human interaction, the proposed scheme can also help the robot replan its trajectory when it touches the obstacles, where the collision force takes the role of the human correction and guides the robot to move away from the obstacles. 
\nThus, with KMP the robot is capable of autonomously adapting its trajectory through low-impact collisions, where the tolerated force can be regulated using $F_{thre}$.\nThe supplementary material includes a video of experiments using both the human corrective force and the obstacle collision force.\n\n\n\n\n\n\n\n\n\n\\begin{figure*} \\centering\n\t\\includegraphics[width=2.03\\columnwidth]{3rdHandDemoRobot.jpg}\n\t\\caption{The 3rd-hand task in the soldering environment with the Barrett WAM robot. \\emph{(a)} shows the initial states of the user hands and the robot end-effector (the 3rd hand in this experiment). \\textcircled{1}--\\textcircled{4} separately correspond to the circuit board (held by the robot), magnifying glass, soldering iron and solder. \\emph{(b)} corresponds to the handover of the circuit board. \\emph{(c)} shows the robot grasping the magnifying glass. \n\t\t\\emph{(d)} depicts the final scenario of the soldering task using both of the user hands and the robot end-effector. Red, blue and green arrows depict the movement directions of the user left hand, right hand and the robot end-effector, respectively. }\n\t\\label{fig:3rHand:demos:robot} \n\\end{figure*} \n\n\n\n\n\n\n\n\\subsection{3rd-hand Task}\n\\label{subsec:3rd:hand}\n\n\n\nSo far, the reported experiments have shown the performance of KMP in learning various time-driven trajectories. We now consider a different task which requires a 6-D input, in particular a robot-assisted soldering scenario.\nAs shown in Figure~\\ref{fig:3rHand:demos:robot}, the task proceeds as follows: \\emph{(1)} the robot needs to hand over a circuit board to the user at the \\emph{handover location} $\\vec{p}^{h}$ (Figure~\\ref{fig:3rHand:demos:robot}\\emph{(b)}), where the left hand of the user is used. \\emph{(2)} the user moves his left hand to place the circuit board at the \\emph{soldering location} $\\vec{p}^{s}$ and simultaneously moves his right hand towards the soldering iron and then grasps it. 
Meanwhile, the robot is required to move towards the magnifying glass and grasp it at the \\emph{magnifying glass location} $\\vec{p}^{g}$ (Figure~\\ref{fig:3rHand:demos:robot}\\emph{(c)}). \\emph{(3)} the user moves his right hand to the soldering location so as to repair the circuit board. Meanwhile, the robot, holding the magnifying glass, moves towards the soldering place in order to allow the user to take a better look at the small components of the board (Figure~\\ref{fig:3rHand:demos:robot}\\emph{(d)}).\n\n\n\n\n\n\n\n\nLet us denote by $\\vec{p}^{\\mathcal{H}_l}$, $\\vec{p}^{\\mathcal{H}_r}$ and $\\vec{p}^{\\mathcal{R}}$ the positions of the user left hand, right hand and robot end-effector (i.e., the ``third hand''), respectively. \nSince the robot is required to react properly to the user hand positions, we formulate the 3rd-hand task as the prediction of the robot end-effector position from the user hand positions. In other words, in the prediction problem we consider ${\\vec{s}=\\{\\vec{p}^{\\mathcal{H}_l}, \\vec{p}^{\\mathcal{H}_r}\\}}$ as the input (6-D) and $\\vec{\\xi}(\\vec{s})=\\vec{p}^{\\mathcal{R}}$ as the output (3-D).\n\nFollowing the procedure illustrated in Figure~\\ref{fig:3rHand:demos:robot}, we collect five demonstrations comprising $\\{\\vec{p}^{\\mathcal{H}_l}\\!,\\vec{p}^{\\mathcal{H}_r}\\!,\\vec{p}^{\\mathcal{R}}\\!\\}$ for training KMP, as shown in Figure~\\ref{fig:3rdHand:task:demo}. \nNote that the teacher is only involved in the training phase.\nWe fit the collected data using GMM, and subsequently extract a probabilistic reference trajectory using GMR,\nwhere the input for the probabilistic reference trajectory is sampled from the marginal probability distribution $\\mathcal{P}(\\vec{s})$, since in this scenario the exact input is unknown (unlike time $t$ in previous experiments). The Gaussian kernel \nis also employed in KMP, whose hyperparameters are set to $\\ell=0.5$ and $\\lambda=2$. 
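For intuition on how the Gaussian kernel enters the prediction, the sketch below shows a kernelized mean prediction in kernel-ridge form, simplified to a scalar output with per-point reference variances; the full KMP solution handles vector outputs and full covariance matrices, so this form is our simplification:

```python
import numpy as np

def gaussian_kernel(S1, S2, ell=0.5):
    # k(s, s') = exp(-||s - s'||^2 / ell); inputs are (N x I) arrays.
    d2 = ((S1[:, None, :] - S2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / ell)

def kernel_mean(S_ref, mu_ref, var_ref, s_query, lam=2.0, ell=0.5):
    # mu(s*) = k* (K + lam * Sigma)^{-1} mu_ref, with Sigma the diagonal
    # of reference variances; higher-variance reference points are
    # trusted less, which is how the demonstration covariance enters.
    K = gaussian_kernel(S_ref, S_ref, ell)
    alpha = np.linalg.solve(K + lam * np.diag(var_ref), mu_ref)
    k_star = gaussian_kernel(np.atleast_2d(s_query), S_ref, ell)
    return (k_star @ alpha).item()
```

Because only a kernel between inputs is needed, the same code applies unchanged to the 6-D input of this task or to the time input of the previous experiments.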
\n\n\n\\begin{figure} \\centering\n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{3rdHand_demos_circles.png}}\t\t\t\n\t\\caption{\n\t\tDemonstrations for the 3rd-hand task,\n\t\twhere the red and blue curves respectively correspond to the user left and right hands, while the green curves represent the demonstrated trajectories for the robot. The `$\\ast$' and `+' mark the starting and ending points of various trajectories, respectively.} \n\t\\label{fig:3rdHand:task:demo} \n\\end{figure}\n\n\nTwo experiments are carried out to evaluate KMP in this scenario. \nFirst, we employ the learned reference database without adaptation so as to verify the reproduction ability of KMP, as shown in Figure~\\ref{fig:3rdHand:eva} (\\emph{top row}). \nThe user left- and right-hand trajectories as well as the real robot trajectory, depicted in Figure~\\ref{fig:3rdHand:eva} (\\emph{top row}), are plotted in Figure~\\ref{fig:3rdHand:task:eva} (dotted curves), where the desired trajectory for the robot end-effector is generated by KMP. We can observe that KMP maintains the shape of the demonstrated trajectories for the robot while accomplishing the soldering task. Second, we evaluate the adaptation capability of KMP by adjusting the handover location \n$\\vec{p}^{h}$, the magnifying glass location $\\vec{p}^{g}$ as well as the soldering location $\\vec{p}^{s}$, as illustrated in Figure~\\ref{fig:3rdHand:eva} (\\emph{bottom row}). \nNote that these new locations are unseen in the demonstrations; we thus treat them as new via-point\/end-point constraints within the KMP framework. 
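One simple realization of inserting such new desired points into the reference database is to overwrite the reference point whose input is closest to the desired input (a hedged sketch of the rule we rely on; the exact criterion is the update equation referenced in the text, and the function name is ours):

```python
import numpy as np

def update_reference(S_ref, mu_ref, var_ref, s_bar, mu_bar, var_bar):
    # Overwrite the reference point whose input is nearest to the desired
    # input s_bar with the desired mean and a small variance, so that the
    # kernelized prediction is pulled through the new desired point.
    i = int(np.argmin(np.linalg.norm(S_ref - s_bar, axis=1)))
    mu_new, var_new = mu_ref.copy(), var_ref.copy()
    mu_new[i] = mu_bar
    var_new[i] = var_bar
    return mu_new, var_new
```

Assigning the desired point a small variance is what makes the constraint close to a hard one in the covariance-weighted prediction.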
\n\nTaking the handover as an example, we can define a via-point (together with its associated input) as\n$\\{\\bar{\\vec{p}}^{\\mathcal{H}_l}_{1},\\bar{\\vec{p}}^{\\mathcal{H}_r}_{1},\\bar{\\vec{p}}^{\\mathcal{R}}_{1}\\}$, where\n$\\bar{\\vec{p}}^{\\mathcal{H}_l}_{1}=\\vec{p}^{h}$, $\\bar{\\vec{p}}^{\\mathcal{H}_r}_{1}=\\vec{p}^{\\mathcal{H}_r}_{ini}$ and $\\bar{\\vec{p}}^{\\mathcal{R}}_{1}=\\vec{p}^{h}$, which implies that the robot should reach the new handover location $\\vec{p}^{h}$ when the user left hand arrives at $\\vec{p}^{h}$ and the user right hand stays at its initial position $\\vec{p}^{\\mathcal{H}_r}_{ini}$.\nSimilarly, we can define additional via- and end-points to ensure that the robot grasps the magnifying glass at a new location $\\vec{p}^{g}$ and assists the user at a new location $\\vec{p}^{s}$. Thus, two via-points and one end-point are used to update the original reference database according to (\\ref{equ:kmp:update}) so as to address the three adaptation situations.\nFigure~\\ref{fig:3rdHand:task:eva} shows the adaptations of the robot trajectory (green solid curve) in accordance with the user hand trajectories (red and blue solid curves).\nIt can be seen that the robot trajectory is indeed modulated towards the new handover, magnifying glass and soldering locations, showing the capability of KMP to adapt trajectories associated with high-dimensional inputs. \n\n\nIt is worth pointing out that the entire soldering task is accomplished by a single KMP without any trajectory segmentation for different subtasks, thus allowing for a straightforward learning of several sequential subtasks. Moreover, KMP makes the adaptation of learned skills associated with high-dimensional inputs feasible. Also, KMP is driven by the user hand positions, which allows for slower\/faster hand movements since the prediction of KMP does not depend on time, hence alleviating the typical problem of time-alignment in human-robot collaborations. 
For details on the 3rd-hand experiments, please refer to the video in the supplementary material.\n\n\n\n\n\\begin{figure*} \\centering\n\t\\includegraphics[width=2.03\\columnwidth]{3rdHandEvaRobot.jpg}\t\n\t\\caption{Snapshots of reproduction and adaptation using KMP. \\emph{Top row} shows the reproduction case using the learned reference database without adaptation. \\emph{Bottom row} displays the adaptation case using the new reference database which is updated using three new desired points: new handover, magnifying glass and soldering locations depicted as dashed circles (notice the difference with respect to the top row).} \n\t\\label{fig:3rdHand:eva} \n\\end{figure*} \n\n\n\\begin{figure} \\centering\n\t\\subfigure[]{ \n\t\t\\includegraphics[width=0.32\\textwidth]{3rdHand_evaluate.png}}\t\t\t\n\t\\caption{\n\t\tThe reproduction (dotted curves) and adaptation (solid curves) capabilities of KMP in the 3rd-hand task, where the user left-hand and right-hand trajectories (red and blue curves) are used to retrieve the robot end-effector trajectory (green curves).} \n\t\\label{fig:3rdHand:task:eva} \n\\end{figure}\n\n\n\n\\section{Related Work}\n\\label{sec:relative:work}\n\nIn light of its reliable temporal and spatial generalization, DMP \\citep{Ijspeert} has achieved remarkable success in a vast range of applications.\nIn addition, many variants of DMP have been developed for specific circumstances, such as stylistic DMP \\citep{Matsubara}, task-parameterized DMP \\citep{Pervez} and combined DMP \\citep{Pastor}. However, due to the spring-damper dynamics, DMP converges to the target position with zero velocity, which prevents its application to cases with velocity requirements (e.g., striking\/batting movements). Besides, DMP does not provide a straightforward way to incorporate desired via-points. 
\n\nBy exploiting the properties of Gaussian distributions, ProMP \\citep{Paraschos} allows for trajectory adaptations with via-points and end-points simultaneously. The similarities between DMP and ProMP lie in the fact that both methods need the explicit definition of basis functions and are aimed at learning time-driven trajectories. As a consequence, when we encounter trajectories with high-dimensional inputs (e.g., human hand position and posture in human-robot collaboration scenarios), the selection of basis functions in DMP and ProMP becomes difficult and thus impractical. \n\nIn contrast to DMP and ProMP, GMM\/GMR-based learning algorithms \\citep{Muhlig,Calinon2007} have been proven effective in encoding demonstrations with high-dimensional inputs. However, the large number of variables arising in GMM makes the re-optimization of GMM expensive, which therefore prevents its use in unstructured environments where robot adaptation capabilities are imperative. \n\nKMP provides several advantages compared to the aforementioned works. Unlike GMM\/GMR, KMP is capable of adapting trajectories towards various via-points\/end-points without the optimization of high-dimensional hyperparameters. Unlike DMP and ProMP, KMP alleviates the need for explicit basis functions due to its kernel treatment, and thus can be easily implemented for problems with high-dimensional inputs and outputs. \n\nIt is noted that the training of DMP needs only a single demonstration, while ProMP, GMM and KMP require a set of trajectories. In contrast to the learning of a single demonstration, the exploitation of multiple demonstrations makes the extraction of the probabilistic properties of human skills possible. 
In this context, demonstrations have been exploited using the covariance-weighted strategy, as in trajectory-GMM \\citep{Calinon2016}, linear quadratic regulators (LQR) \\citep{Leonel15},\nthe movement similarity criterion \\citep{Muhlig} and demonstration-guided trajectory optimization \\citep{Osa}. Note that the mean minimization subproblem as formulated in (\\ref{equ:kl:mean:cost}) also uses the covariance to weigh the cost, sharing the same spirit as the aforementioned results. \n\n\n \nSimilarly to our approach, information theory has also been exploited in different robot learning techniques. As an effective way to measure the distance between two probability distributions, KL-divergence was exploited in policy search \\citep{Peters, Kahn}, trajectory optimization \\citep{Levine} and imitation learning \\citep{Englert}.\nIn \\cite{Englert}, KL-divergence was used to measure the difference between the distributions of demonstrations and the predicted robot trajectories (obtained from a control policy and a Gaussian process forward model), and subsequently the probabilistic inference for learning control \\citep{Deisenroth} was employed to iteratively minimize the KL-divergence so as to find the optimal policy parameters. It is noted that this KL-divergence formulation makes the derivation of an analytical solution intractable. In this article, we formulate the trajectory matching problem as (\\ref{equ:kl:cost:ini:temp}), which allows us to separate the mean and covariance subproblems and derive closed-form solutions for them separately.\n\n\n\n\n\n\\section{Discussion} \n\\label{sec:discuss}\nWhile both KMP and ProMP \\citep{Paraschos} learn the probabilistic properties of demonstrations, we here discuss their similarities and possible shortcomings in detail. 
\nFor the KMP, \nimitation learning is formulated as an optimization problem (Section \\ref{subsubsec:kl}), where the optimal distribution $\\mathcal{N}({\\vec{\\mu}}_w^{*},{\\vec{\\Sigma}}_w^{*})$ of $\\vec{w}$ is derived by minimizing the information-loss between the parametric trajectory and the demonstrations. Specifically, the mean minimization subproblem (\\ref{equ:kl:mean:cost}) can be viewed as the problem of maximizing the posterior $\\prod_{n=1}^{N} \\mathcal{P}(\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}} \\vec{\\mu}_w|\\hat{\\vec{\\mu}}_n,\\hat{\\vec{\\Sigma}}_n)$. \nIn contrast, ProMP formulates the problem of imitation learning as an estimation of the probability distribution of the movement pattern $\\vec{w}$ (i.e., $\\vec{w} \\sim \\mathcal{N}(\\vec{\\mu}_w,\\vec{\\Sigma}_w)$), which is essentially equivalent to the maximization of the likelihood $\\prod_{h=1}^{H} \\prod_{n=1}^{N} \\mathcal{P}({\\vec{\\xi}}_{n,h}|\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\\vec{\\mu}_w,\\vec{\\Theta}(\\vec{s}_n)^{\\mathsf{T}}\\vec{\\Sigma}_w \\vec{\\Theta}(\\vec{s}_n))$. \nTo solve this maximization problem, regularized least squares is first used for each demonstration so as to estimate its corresponding movement pattern vector \\citep{Paraschos2015}, where basis functions are used to fit these demonstrations. Subsequently, using the movement patterns extracted from the demonstrations, the distribution $\\mathcal{P}(\\vec{w})$ is determined via maximum likelihood estimation.\n\nAn immediate problem in ProMP is the estimation of $\\mathcal{P}(\\vec{w})$.\nIf the dimension of $\\vec{w}$ (i.e., $B\\mathcal{O}$) is too high compared to the number of demonstrations $H$, a singular covariance $\\vec{\\Sigma}_w$ might appear. For this reason, learning movements with ProMP typically requires a high number of demonstrations. 
In contrast, KMP needs a probabilistic reference trajectory, which is derived from the joint probability distribution of $\\{\\vec{s},\\vec{\\xi}\\}$ that is typically characterized by a lower dimensionality (i.e., $\\mathcal{I}+\\mathcal{O}$). \nAnother problem with ProMP arises for demonstrations with a high-dimensional input $\\vec{s}$, where the number of basis functions often increases exponentially, i.e., the typical curse of dimensionality (see also the discussion on the disadvantages of fixed basis functions in \\cite{Bishop}). In contrast, KMP is combined with a kernel function, alleviating the need for basis functions, while inheriting all the potential and expressiveness of kernel-based methods.\n\n\nThere are several possible extensions for KMP. First, similarly to most regression algorithms, the computational complexity of KMP increases with the size of the training data (i.e., the reference database in our case). One possible solution could be the use of partial training data so as to build a sparse model \\citep{Bishop}.\nSecond, even though we have shown the capability of KMP for trajectory adaptation, the choice of desired points is rather empirical. For more complicated situations where we have little (or no) prior information, the search for optimal desired points could be useful. To address this problem, RL algorithms could be employed to find appropriate new via-points that fulfill the relevant task requirements, which can be encapsulated by cost functions. Third, since KMP predicts the mean and covariance of the trajectory simultaneously, it may be exploited in approaches that combine optimal control and probabilistic learning methods \\citep{Medina}. 
For example, the mean and covariance can be respectively used as the desired trajectory and the weighted matrix for tracking errors in LQR \\citep{Calinon2016}.\nFinally, besides the frequently used Gaussian kernel, the exploitation of various kernels \\citep{Hofmann} could be promising in the future research.\n\n\\section{Conclusions} \n\\label{sec:conclusion}\nWe have proposed a novel formulation of robot movement primitives that incorporates a kernel-based treatment into the process of minimizing the information-loss in imitation learning. Our approach KMP is capable of preserving the probabilistic properties of human demonstrations, adapting trajectories to different unseen situations described by new temporal or spatial requirements and mixing different trajectories. The proposed method was extended to deal with local frames, which provides the robot with reliable extrapolation capabilities. Since KMP is essentially a kernel-based non-parametric approach, it overcomes several limitations of state-of-the-art methods, being able to model complex and high dimensional trajectories. Through extensive evaluations in simulations and real robotic systems, we showed that KMP performs well in a wide range of applications such as time-driven movements and human-robot collaboration scenarios.\n\n\\section*{Acknowledgments}\nWe thank Fares J. Abu-Dakka, Luka Peternel and Martijn J. A. Zeestraten for their help on real robot experiments. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this work, we consider a static and undirected network of $K$ agents connected over some graph where each agent $k$ owns a private cost function $J_k: \\real^{M} \\rightarrow \\real$. 
Through only local interactions (i.e., with agents communicating only with their immediate neighbors), each agent is interested in finding a solution to the following problem: \n\\begin{align}\nw^\\star \\in \\argmin_{w\\in \\mathbb{R}^M} \\quad\n \\frac{1}{K}\\sum_{k=1}^K J_k(w) + R(w) \\label{decentralized1} \n\\end{align}\n where $R:\\real^{M} \\rightarrow \\real \\cup \\{+ \\infty \\}$ is a convex function (not necessarily differentiable). We adopt the following assumption throughout this work.\n\\begin{assumption} \\label{assump:cost}\n{\\rm ({\\bf Cost function}): We assume that a solution exists to problem \\eqref{decentralized1} and each cost function $J_k(w)$ is first-order differentiable and $\\nu$-strongly-convex:\n\\eq{\n(w^o-w^\\bullet)\\tran \\big(\\grad J_k(w^o)-\\grad J_k(w^\\bullet)\\big) &\\geq \\nu \\|w^o-w^\\bullet\\|^2 \\label{stron-convexity} \n} \nwith $\\delta$-Lipschitz continuous gradients:\n\\eq{\n\\|\\grad J_k(w^o)-\\grad J_k(w^\\bullet)\\| &\\leq \\delta \\|w^o-w^\\bullet\\| \\label{lipschitz}\n}\n\\noindent for any $w^o$ and $w^\\bullet$. Constants $\\nu$ and $\\delta$ are strictly positive and satisfy $\\nu\\leq \\delta$. \nWe also assume $R(w)$ to be a proper\\footnote{The function $f(\\cdot)$ is proper if $-\\infty < f(x)$ for every $x$ and $f(x)<+\\infty$ for at least one $x$.} closed convex function.}\n\\end{assumption}\n\\noindent {\\bf Notation}: We write $A \\geq B$ ($A > B$) if $A-B$ is positive semi-definite (positive definite). The $N \\times N$ identity matrix is denoted by $I_N$. We let $\\one_{N}$ be a vector of size $N$ with all entries equal to one. The Kronecker product is denoted by $\\otimes$. We let ${\\rm col}\\{x_n\\}_{n=1}^N$ denote a column vector (matrix) that stacks the vectors (matrices) $x_n$ of appropriate dimensions on top of each other. 
The subdifferential $\\partial f(x)$ of a function $f:\\real^{M} \\rightarrow \\real$ at some $x \\in \\real^{M}$ is the set of all subgradients $\n\\partial f(x) = \\{g \\ | \\ g\\tran(y-x)\\leq f(y)-f(x), \\forall \\ y \\in \\real^{M}\\} $.\nThe proximal operator with parameter $\\mu>0$ of a function $f:\\real^{M} \\rightarrow \\real$ is\n\\eq{\n{\\rm \\bf prox}_{\\mu f}(x) = \\argmin_z \\ f(z)+{1 \\over 2 \\mu} \\|z-x\\|^2 \\label{def_proximal}\n}\n\\section{Unified Decentralized Algorithm (UDA)} \\label{sec:ATC:smooth}\nIn this section, we present the {\\em unified decentralized algorithm} (UDA) that covers various state-of-the-art algorithms as special cases. To this end, we will first focus on the smooth case ($R(w)=0$), which will then be extended to handle the non-smooth component $R(w)$ in the following section.\n\\subsection{General Primal-Dual Framework}\n For algorithm derivation and motivation purposes, we will rewrite problem \\eqref{decentralized1} in an equivalent manner. To do that, we let $w_k \\in \\real^M$ denote a local copy of $w$ available at agent $k$ and introduce the network quantities: \n \\eq{\n \\sw \\define {\\rm col}\\{w_1,\\cdots,w_K\\} \\in \\real^{KM}, \\quad \\cJ(\\sw) &\\define \\frac{1}{K} \\sum_{k=1}^K J_k(w_k)\n } \n Further, we introduce two general symmetric matrices $\\cB \\in \\real^{MK \\times MK}$ and $\\cC \\in \\real^{MK \\times MK}$ that satisfy the following conditions: \n \\begin{subnumcases}{\\label{consensus-condition-both}} \n\t \\cB \\sw=0 \\iff w_1=\\cdots=w_K \\label{consensus-condition-B} \\\\\n\\cC=0 \\quad {\\rm or} \\quad \t\\cC \\sw =0 \\iff \\cB \\sw=0 \\label{consensus-condition-C}\t \n\t\\end{subnumcases} \n For algorithm derivation, the matrices $\\{\\cB,\\cC\\}$ can be any general consensus matrices \\cite{loizou2016new}. Later, we will see how to choose these matrices to recover different decentralized implementations -- see Section \\ref{sec:specific_ins}. 
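As a quick numerical illustration of condition \\eqref{consensus-condition-B}, consider the choice $\\cB^2=0.5(I-\\cA)$ (used later for exact diffusion) on a small ring network; the sketch below builds $\\cB$ as the symmetric PSD matrix square root and checks its null space (the 4-agent graph and weights are our own toy example, with $M=1$):

```python
import numpy as np

def psd_sqrt(M):
    # Symmetric PSD matrix square root via eigendecomposition.
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

# A 4-agent ring with a symmetric, doubly stochastic, primitive A.
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
B = psd_sqrt(0.5 * (np.eye(4) - A))   # one admissible choice: B^2 = (I - A)/2
```

For this connected and primitive $A$, $\\cB \\sw=0$ holds exactly when all entries of $\\sw$ agree, which is the consensus requirement in \\eqref{consensus-condition-B}.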
With these quantities, it is easy to see that problem \\eqref{decentralized1} with $R(w)=0$ is equivalent to the following problem:\n\\begin{align}\n \\underset{\\ssw\\in \\mathbb{R}^{KM}}{\\text{minimize }}& \\quad\n \\cJ(\\sw)+\\frac{1}{2 \\mu}\\| \\sw\\|_{\\cC}^2 , \\quad {\\rm s.t.} \\ \\cB \\sw=0\\label{decentralized2} \n\\end{align}\nwhere $\\mu >0$ and the matrix $\\cC \\in \\real^{MK \\times MK}$ is a positive semi-definite consensus penalty matrix satisfying \\eqref{consensus-condition-C}. To solve problem \\eqref{decentralized2}, we consider the saddle-point formulation:\n\\eq{\n \\min_{\\ssw} \\max_{\\ssy} \\quad \\cL(\\sw,\\sy) \\define \\cJ(\\sw) + \\frac{1}{ \\mu} \\sy\\tran \\cB\\sw + \\frac{1}{2 \\mu}\\| \\sw\\|_{\\cC}^2\n\\label{saddle_point}\n}\nwhere $\\sy \\in \\real^{MK}$ is the dual variable. To solve \\eqref{saddle_point}, we propose the following algorithm: let $\\sy_{-1}=0$ and $\\sw_{-1}$ take any arbitrary value. Repeat for $i=0,1,\\cdots$\n\n\\begin{subnumcases}{\\label{alg_ATC_framework}}\n\\ssz_i = (I-\\cC) \\sw_{i-1}-\\mu \\grad \\cJ(\\sw_{i-1}) - \\cB \\sy_{i-1} \\label{z_ATC_DIG} &\\textbf{(primal-descent)} \\\\\n\\sy_i = \\sy_{i-1}+ \\cB \\ssz_i \\label{dual_ATC_DIG} &\\textbf{(dual-ascent)} \\\\\n\\sw_i = \\bar{\\cA} \\ssz_{i} \\label{primal_ATC_DIG} &\\textbf{(Combine)} \n \\end{subnumcases}\n where $\\bar{\\cA}=\\bar{A} \\otimes I_M$ and $\\bar{A}$ is a symmetric and doubly-stochastic combination matrix. In the above UDA algorithm, step \\eqref{z_ATC_DIG} is a gradient descent followed by a gradient ascent step in \\eqref{dual_ATC_DIG}, both applied to the saddle-point problem \\eqref{saddle_point} with step-size $\\mu$. The last step \\eqref{primal_ATC_DIG} is a combination step that enforces further agreement. Next we show that by proper choices of $\\bar{\\cA}$, $\\cB$, and $\\cC$ we can recover many state of the art algorithms. 
To do that, we need to introduce the combination matrix associated with the network.\n\\subsection{Network Combination Matrix} \\label{sec:combina:matrix}\n We thus introduce the combination matrices\n \\eq{\n A=[a_{sk}] \\in \\real^{K \\times K}, \\quad \\cA= A \\otimes I_M \\label{combination-cal-A}\n} \n where the entry $a_{sk}=0$ if there is no edge connecting agents $k$ and $s$. The matrix $A$ is assumed to be a symmetric and doubly stochastic matrix (different from $\\bar{A}$). We further assume $A$ to be primitive, i.e., there exists an integer $j>0$ such that all entries of $A^j$ are positive. \n Under these conditions it holds that $(I_{MK}-\\cA) \\sw=0$ if and only if $w_k=w_s$ for all $k,s$ --- see \\cite{shi2015extra,yuan2019exactdiffI}. \n\\subsection{Specific Instances} \\label{sec:specific_ins}\nWe start by rewriting recursion \\eqref{alg_ATC_framework} in an equivalent manner by eliminating the dual variable $\\sy_i$. Thus, from \\eqref{z_ATC_DIG} it holds that\n \\eq{\n\\ssz_i-\\ssz_{i-1} &= (I-\\cC) (\\sw_{i-1}-\\sw_{i-2})- \\cB (\\sy_{i-1}-\\sy_{i-2}) -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\nonumber \\\\\n &\\overset{\\eqref{dual_ATC_DIG}}{=} (I-\\cC) (\\sw_{i-1}-\\sw_{i-2})- \\cB^2 \\ssz_{i-1} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\nonumber\n}\nRearranging the previous equation, we get:\n \\eq{\n\\ssz_i &= (I-\\cB^2) \\ssz_{i-1} + (I-\\cC) (\\sw_{i-1}-\\sw_{i-2}) -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \n\\label{eq:sub_atc}\n}\n Utilizing this property, we will now choose specific matrices $\\{\\bar{\\cA},\\cB,\\cC\\}$ and show that we can recover many state-of-the-art algorithms (see Table \\ref{table}): \n\\subsubsection{\\bf Exact diffusion \\cite{yuan2019exactdiffI}}\n If we choose $\\bar{\\cA}=0.5 (I+\\cA)$, $\\cC=0$ and $\\cB^2=0.5 (I- \\cA)$ in \\eqref{eq:sub_atc}, we get: \n \\eq{\n\\ssz_i &= \n\\bar{\\cA}\n \\ssz_{i-1} + \\sw_{i-1}-\\sw_{i-2} -\\mu 
\\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \n}\n Multiplying the previous equation by $\\bar{\\cA}$ and noting from \\eqref{primal_ATC_DIG} that $\\sw_i= \\bar{\\cA} \\ssz_{i}$, we get:\n \\eq{\n\\sw_i=\\bar{\\cA} \\bigg( 2 \\sw_{i-1}\n - \\sw_{i-2} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\bigg) \\label{exact-diffusion}\n}\n The above recursion is the exact diffusion recursion first proposed in \\cite{yuan2019exactdiffI}. We also note that if we choose $\\cC=0$, $\\cB^2=c (I- \\cA)$ ($c \\in \\real$), and $\\bar{\\cA}=I-\\cB^2$ then we recover the smooth case of the NIDS algorithm from \\cite{li2017nids}. As highlighted in \\cite{li2017nids}, NIDS is identical to exact diffusion for the smooth case when $c=0.5$.\n\\subsubsection{\\bf Aug-DGM \\cite{xu2015augmented}}\n Let $\\cC=0$, $\\bar{\\cA}=\\cA^2$, and $\\cB=I-\\cA$. Substituting into \\eqref{eq:sub_atc}:\n \\eq{\n\\ssz_i &= (2\\cA-\\cA^2) \\ssz_{i-1} + \\sw_{i-1}-\\sw_{i-2} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \n}\nBy multiplying the previous equation by $\\bar{\\cA}=\\cA^2$ and noting from \\eqref{primal_ATC_DIG} that $\\sw_i= \\cA^2 \\ssz_{i}$, we get the recursion:\n\\eq{\n\\sw_i=\\cA \\bigg( 2 \\sw_{i-1}\n - \\cA \\sw_{i-2} -\\mu \\cA \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\bigg) \\label{atc_DGM_eliminate} \n}\nThe above recursion is equivalent to the Aug-DGM \\cite{xu2015augmented} (also known as ATC-DIGing \\cite{nedic2017geometrically}) algorithm:\n\\begin{subequations} \\label{atc_DGM}\n\\eq{ \n\\sw_i&=\\cA(\\sw_{i-1}-\\mu \\ssx_{i-1}) \\label{atc-dgm1} \\\\\n\\ssx_{i}&=\\cA \\big(\\ssx_{i-1}+ \\grad \\cJ(\\sw_i)- \\grad \\cJ(\\sw_{i-1}) \\big) \\label{atc-dgm2}\n}\n\\end{subequations}\nBy eliminating the gradient tracking variable $\\ssx_{i}$, we can rewrite the previous recursion as \\eqref{atc_DGM_eliminate} -- see Appendix \\ref{supp_equiva_representation}.\n\\subsubsection{\\bf ATC tracking method 
\\cite{di2016next,scutari2019distributed}}\nLet $\\cC=I-\\cA$ and $\\cB=I-\\cA$. Substituting into \\eqref{eq:sub_atc}:\n \\eq{\n\\ssz_i &=(2\\cA -\\cA^2) \\ssz_{i-1} \n+ \\cA \\sw_{i-1} - \\cA \\sw_{i-2} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \n}\nBy multiplying the previous equation by $\\bar{\\cA}=\\cA$ and noting from \\eqref{primal_ATC_DIG} that $\\sw_i= \\cA \\ssz_{i}$, we get the recursion:\n\\eq{\n\\sw_i=\\cA \\bigg( 2 \\sw_{i-1}\n - \\cA \\sw_{i-2} -\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw_{i-2})\\big) \\bigg) \\label{next_eliminate}\n}\nThe above recursion is equivalent to the following variant of the ATC tracking method \\cite{di2016next,scutari2019distributed}:\n\\begin{subequations} \\label{next}\n\\eq{\n\\sw_i&=\\cA(\\sw_{i-1}-\\mu \\ssx_{i-1}) \\label{next1} \\\\\n\\ssx_{i}&=\\cA \\ssx_{i-1}+ \\grad \\cJ(\\sw_i)- \\grad \\cJ(\\sw_{i-1}) \\label{next2}\n}\n\\end{subequations}\nBy eliminating the gradient tracking variable $\\ssx_i$, we can show that the previous recursion is exactly \\eqref{next_eliminate} -- see Appendix \\ref{supp_equiva_representation}. \n\\subsubsection{\\bf NON-ATC Algorithms ($\\bar{\\cA}=I$)}\nWe note that DIGing \\cite{qu2017harnessing,nedic2017achieving}, EXTRA \\cite{shi2015extra}, and the decentralized linearized alternating direction method of multipliers (DLM) \\cite{ling2015dlm} can also be represented by \\eqref{alg_ATC_framework} with $\\bar{\\cA}=I$ and proper choices of $\\cB^2$ and $\\cC$ -- see Table \\ref{table}. Since $\\bar{\\cA}=I$, these algorithms are not of the ATC form. Please see Appendix \\ref{supp_non_atc} for the details and analysis of the non-ATC case.\n\\begin{table}[t] \n\\caption{Listing of some state-of-the-art first-order algorithms that can be recovered by specific choices of $\\bar{\\cA}$, $\\cB$, and $\\cC$ in \\eqref{alg_ATC_framework}. 
The matrix $\\cA$ is a typical symmetric and doubly stochastic network combination matrix introduced in \\eqref{combination-cal-A}. The matrix $\\cL$ is chosen such that the $k$-th block of $\\cL \\sw_i$ is equal to $\\sum_{s \\in \\cN_k} (w_{k,i}-w_{s,i})$ and $c>0$ is a step-size parameter. }\n\\centering\n\\large \n\\begin{tabular}{|c|c|c|c|}\n\\thickhline\n\\rowcolor[HTML]{C0C0C0} \n{\\bf ATC algorithms} & $\\bar{\\cA}$ & $\\cB^2$ & $\\cC$ \\\\ \\thickhline\n\\cellcolor[HTML]{EFEFEF} Aug-DGM\/ATC-DIGing \\cite{xu2015augmented,nedic2017geometrically} & $\\cA^2$ & $(I-\\cA)^2$ & $0$ \\\\ \\hline\n \\cellcolor[HTML]{EFEFEF} ATC tracking \\cite{di2016next,scutari2019distributed} & $\\cA$ & $(I-\\cA)^2$ & $I-\\cA$ \\\\ \\hline\n\\cellcolor[HTML]{EFEFEF} Exact diffusion \\cite{yuan2019exactdiffI} & $0.5(I+\\cA)$ & $0.5(I-\\cA)$ & 0 \\\\ \\hline\n \\cellcolor[HTML]{EFEFEF} NIDS \\cite{li2017nids} & $I-c(I-\\cA)$ & $c(I-\\cA)$ & 0 \\\\ \\thickhline\n \\rowcolor[HTML]{C0C0C0} \n{\\bf NON-ATC algorithms} & $\\bar{\\cA}$ & $\\cB^2$ & $\\cC$ \\\\ \\thickhline\n \\cellcolor[HTML]{EFEFEF} DIGing \\cite{qu2017harnessing,nedic2017achieving} & $I$ & $(I-\\cA)^2$ & $I-\\cA^2$ \\\\ \\hline \n\\cellcolor[HTML]{EFEFEF} EXTRA \\cite{shi2015extra} & $I$ & $0.5(I-\\cA)$ & $0.5(I-\\cA)$ \\\\ \\hline \n\\cellcolor[HTML]{EFEFEF} DLM \\cite{ling2015dlm} & $I$ & $c \\mu \\cL$ & $c \\mu \\cL$ \\\\ \\thickhline\n\\end{tabular}\n \\label{table}\n\\end{table}\n\\begin{remark} [\\sc Communication cost] \\label{remak:sharing-variable}{\\rm\nNote that exact diffusion \\eqref{exact-diffusion} requires one round of communication or combination per iteration. This means that each agent sends an $M$-dimensional vector to its neighbors per iteration. On the other hand, the gradient tracking method \\eqref{next} requires two rounds of combination\/communication per iteration for the vectors $\\sw_{i-1}-\\mu \\ssx_{i-1}$ and $\\ssx_{i-1}$, which means each agent sends a $2M$-dimensional vector to its neighbors. 
Similarly, the Aug-DGM (ATC-DIGing) method \\eqref{atc_DGM} also requires two rounds of combination per iteration for the vectors $\\sw_{i-1}-\\mu \\ssx_{i-1}$ and $\\ssx_{i-1}+ \\grad \\cJ(\\sw_i)- \\grad \\cJ(\\sw_{i-1})$; moreover, it requires communicating these two variables sequentially (at different communication steps). \\qd\n}\n\\end{remark}\n\n\t\\section{Proximal Unified Decentralized Algorithm (PUDA)}\n In this section, we extend UDA \\eqref{alg_ATC_framework} to handle the non-differentiable component $R(w)$ to get a proximal unified decentralized algorithm (PUDA). Let us introduce the network quantity\n\\eq{\n \\cR(\\sw) &\\define {1 \\over K} \\sum_{k=1}^K R(w_k) \n} \nWith this definition, we propose the following recursion: let $\\sy_{-1}=0$ and $\\sw_{-1}$ take any arbitrary value. Repeat for $i=0,1,\\ldots$\n\\begin{subnumcases}{ \\label{alg_prox_ATC_framework}} \n\\ssz_i = (I-\\cC) \\sw_{i-1}-\\mu \\grad \\cJ(\\sw_{i-1}) - \\cB \\sy_{i-1} \\label{z_prox_ATC_DIG} \\\\\n\\sy_i = \\sy_{i-1}+ \\cB \\ssz_i \\label{dual_prox_ATC_DIG} \\\\\n\\sw_i = {\\rm \\bf prox}_{\\mu \\cR}\\big(\\bar{\\cA} \\ssz_{i} \\big) \\label{primal_prox_ATC_DIG} \n \\end{subnumcases}\n We refer the reader to Appendix \\ref{supp_equiva_represent_prox} for specific instances of PUDA \\eqref{alg_prox_ATC_framework} and how to implement them in a decentralized manner. In the following, we will show that $\\sw_i$ in the above recursion converges to $\\one_K \\otimes w^\\star$ where $w^\\star$ is the desired solution of \\eqref{decentralized1}. 
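As an illustration of recursion \eqref{alg_prox_ATC_framework}, consider a toy sketch under assumed choices (not the paper's experimental setup): scalar quadratic local costs $J_k(w)=\tfrac{1}{2}(w-b_k)^2$, a shared $\ell_1$ regularizer $R(w)=\rho|w|$ whose proximal step is entrywise soft-thresholding (with threshold $\mu\rho/K$, since $\cR(\sw)=\tfrac{1}{K}\sum_k R(w_k)$ is block-separable), and the exact-diffusion matrix choices:

```python
import numpy as np

def soft(x, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy costs J_k(w) = 0.5*(w - b_k)^2 with shared regularizer R(w) = rho*|w|
# (illustrative assumptions); exact-diffusion choices Abar = 0.5(I+A),
# B^2 = 0.5(I-A), C = 0 on a complete graph.
K, rho, mu = 3, 0.5, 1.0
b = np.array([1.0, 2.0, 3.0])

A = np.full((K, K), 1.0 / K)
I = np.eye(K)
Abar = 0.5 * (I + A)
lam, V = np.linalg.eigh(0.5 * (I - A))
B = V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.T

w, y = np.zeros(K), np.zeros(K)
for _ in range(300):
    z = w - mu * (w - b) / K - B @ y       # primal-descent (C = 0)
    y = y + B @ z                          # dual-ascent
    w = soft(Abar @ z, mu * rho / K)       # proximal combine step

# Global solution of min (1/K) sum_k 0.5*(w-b_k)^2 + rho*|w| is soft(mean(b), rho)
assert np.allclose(w, soft(b.mean(), rho), atol=1e-8)
```

In this toy run the iterates of all agents converge to the common minimizer, consistent with the claim that $\sw_i \to \one_K \otimes w^\star$.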
We first prove the existence and optimality of the fixed points of recursion \\eqref{alg_prox_ATC_framework}.\n\\begin{lemma}[\\sc Optimality Point] \\label{lemma:existence_fixed_optimality}{\\rm Under Assumption \\ref{assump:cost} and condition \\eqref{consensus-condition-both}, a fixed point $(\\sw^\\star, \\sy^\\star, \\ssz^\\star)$ exists for recursions \\eqref{z_prox_ATC_DIG}--\\eqref{primal_prox_ATC_DIG}, i.e., it holds that\n\t\\begin{subnumcases}{}\n\t\\hspace{.5mm} \\ssz^\\star =\\sw^\\star-\\mu \\grad \\cJ(\\sw^\\star)- \\cB \\sy^\\star \\label{p-d_ed-star} \\\\\n\t\\hspace{2.8mm} 0 = \\cB \\ssz^\\star \\label{d-a_ed-star} \\\\\n\t\\sw^\\star = {\\rm \\bf prox}_{\\mu \\cR}(\\bar{\\cA} \\ssz^\\star) \\label{prox_step_ed-star}\n\t\\end{subnumcases}\nMoreover, $\\sw^\\star$ and $\\ssz^\\star$ are unique with $\\sw^\\star=\\one_K \\otimes w^\\star$ where $w^\\star$ is the solution of problem \\eqref{decentralized1}.\n\t}\n\\end{lemma}\n\\begin{proof} See Appendix \\ref{supp_lemma_fixed}. \n\\end{proof} \n \\section{Linear Convergence}\nNote that there exists a particular fixed point $(\\sw^\\star, \\sy_b^\\star, \\sz^\\star)$ where $\\sy_b^\\star$ is a unique vector that belongs to the range space of $\\cB$ -- see \\cite[Remark 2]{alghunaim2019linearly}. In the following we will show that the iterates $(\\sw_i, \\sy_i, \\sz_i)$ converge linearly to this particular fixed point $(\\sw^\\star, \\sy_b^\\star, \\sz^\\star)$. To this end, we introduce the error quantities:\n\\begin{align}\n\t\\tsw_i\\define \\sw_i-\\sw^\\star, \\quad \\tsy_i \\define \\sy_i - \\sy^\\star_b, \\quad \\tsz_i \\define \\ssz_i-\\ssz^\\star\n\\end{align}\nNote that from condition \\eqref{consensus-condition-both} we have $\\cC\\sw^\\star=0$. 
Therefore, from \\eqref{z_prox_ATC_DIG}--\\eqref{primal_prox_ATC_DIG} and \\eqref{p-d_ed-star}--\\eqref{prox_step_ed-star} we can reach the following error recursions:\n\\begin{subnumcases}{}\n\\tsz_i=(I-\\cC)\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star) \\big) - \\cB \\tsy_{i-1} \\label{error_primal_ed} \\\\\n\\tsy_i = \\tsy_{i-1}+ \\cB \\tsz_i \\label{error_dual_ed} \\\\\n\\tsw_i = {\\rm \\bf prox}_{\\mu \\cR}\\big(\\bar{\\cA} \\ssz_i\\big)-{\\rm \\bf prox}_{\\mu \\cR}(\\bar{\\cA} \\ssz^\\star) \\label{error_prox_ed}\n\\end{subnumcases}\nFor our convergence result, we need the following technical conditions.\n\\begin{assumption}[\\sc Consensus matrices] \\label{assump_combination}\n{\\rm It is assumed that both condition \\eqref{consensus-condition-both} and the following condition hold:\n\\eq{\n\\bar{\\cA}^2 \\leq I-\\cB^2 \\ {\\rm and} \\ 0 \\leq \\cC < 2I \\label{eq:asump_penalty} }\n\\qd\n}\n\\end{assumption} \n\\begin{remark}[\\sc Convergence conditions]{\\rm\n\\label{remark:conv_conditions}\nNote that the above conditions are satisfied for exact diffusion \\cite{yuan2019exactdiffII} and NIDS \\cite{li2017nids}. \nFor the ATC tracking methods \\eqref{atc_DGM} and \\eqref{next}, the conditions translate to the requirement that the eigenvalues of $A$ lie in $[0,1]$, rather than the typical $(-1,1]$.\n Although this condition is not necessary, it can be easily satisfied by redefining $A \\leftarrow 0.5 (I+A)$. We impose it here to unify the analysis of these methods through a short proof. Note that most works that analyze decentralized methods under more relaxed conditions on the network topology impose restrictive step-size conditions that depend on the network and are of order $O(\\nu^{\\theta_1}\/ \\delta^{\\theta_2})$ where $0 < \\theta_1 \\leq 1$ and $\\theta_2>1$ -- see \\cite{nedic2017geometrically,qu2017harnessing,pu2018push,\njakovetic2019unification}. 
On the other hand, we require step sizes of order $O(1 \/ \\delta)$. Moreover, we will show that any algorithm that fits into our setup with $\\cC=0$ can use a step-size as large as the centralized proximal gradient descent -- see discussion after Theorem \\ref{theorem_lin_convergence}. \\qd\n}\n\\end{remark}\nNote that $\\cB^2$ and $\\cC$ are symmetric; thus, their singular values are equal to their eigenvalues. Moreover, since the square of a symmetric matrix is positive semi-definite, Assumption \\ref{assump_combination} implies $0 < \\underline{\\sigma}(\\cB^2) \\leq 1$ and $\\sigma_{\\max}(\\cC)<2$.\n\n\\begin{theorem}[\\sc Linear Convergence]\\label{theorem_lin_convergence}\n{\\rm\tUnder Assumptions \\ref{assump:cost}--\\ref{assump_combination}, if $\\sy_0=0$ and the step-size satisfies \\eq{\n\\mu < {2-\\sigma_{\\max}(\\cC) \\over \\delta},\n}\n it holds that\n\\eq{\n\t\\|\\tsw_i\\|^2+ \\|\\tsy_i\\|^2 \n\t&\\leq \\gamma \\big(\\|\\tsw_{i-1}\\|^2+ \\|\\tsy_{i-1}\\|^2 \\big)\n}\nwhere $\\gamma= \\max \\big\\{ 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\sigma_{\\max}(\\cC)-\\mu \\delta ) ,1 - \\underline{\\sigma}(\\cB^2) \\big\\}<1$.}\n\\end{theorem}\n\\begin{proof} Squaring both sides of \\eqref{error_primal_ed} and \\eqref{error_dual_ed} we get\n\\eq{\n\t\\|\\tsz_i\\|^2&= \\|(I-\\cC)\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star) \\big)\\|^2 + \\| \\cB \\tsy_{i-1}\\|^2 \\nonumber \\\\\n\t& \\ -2 \\tsy_{i-1}\\tran \\cB \\left((I-\\cC)\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star) \\big)\\right) \n\t\\label{er_sq_primal_ed}\n}\nand\n\\eq{\n\t\\|\\tsy_i\\|^2 =\\|\\tsy_{i-1}+ \\cB \\tsz_i \\|^2 &= \\|\\tsy_{i-1}\\|^2+ \\| \\cB \\tsz_i \\|^2 + 2 \\tsy_{i-1} \\tran \\cB \\tsz_i \\nonumber \\\\\n\t&\\overset{\\eqref{error_primal_ed}}{=} \\|\\tsy_{i-1}\\|^2+ \\| \\tsz_i \\|^2_{\\cB^2} - 2 \\|\\cB \\tsy_{i-1}\\|^2 \\nonumber \\\\ \n\t& \\quad +2 \\tsy_{i-1}\\tran \\cB \\left((I-\\cC)\\tsw_{i-1}-\\mu \\big(\\grad 
\\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star) \\big)\\right) \\label{er_sq_dual_ed}\n}\nAdding equation \\eqref{er_sq_dual_ed} to \\eqref{er_sq_primal_ed} and rearranging, we get \n\\eq{\n\\|\\tsz_i\\|^2_{\\cQ} \\hspace{-0.6mm}+\\hspace{-0.6mm} \\|\\tsy_i\\|^2 \\hspace{-0.6mm}=\\hspace{-0.6mm} \\|(I-\\cC)\\tsw_{i-1} \\hspace{-0.6mm}- \\hspace{-0.6mm} \\mu \\big(\\grad \\cJ(\\sw_{i-1})\\hspace{-0.6mm}-\\hspace{-0.6mm}\\grad \\cJ(\\sw^\\star) \\big)\\|^2 \\hspace{-0.6mm}+\\hspace{-0.6mm} \\|\\tsy_{i-1}\\|^2 \\hspace{-0.6mm}-\\hspace{-0.6mm} \\|\\cB \\tsy_{i-1}\\|^2 \\label{err_sum_ed}\n}\nwhere $\\cQ = I - \\cB^2$ is positive semi-definite from \\eqref{eq:asump_penalty}. Since $\\sy_0 = 0$ and $\\sy_i = \\sy_{i-1} + \\cB \\ssz_i$, we know $\\sy_i\\in \\mbox{range}(\\cB)$ for any $i$. Thus, both $\\sy_i$ and $\\sy_b^\\star$ lie in the range space of $\\cB$, and it holds that $\n\\|\\cB \\tsy_{i-1}\\|^2 \\geq \n\\underline{\\sigma}(\\cB^2) \\|\\tsy_{i-1}\\|^2 $. Therefore, we can bound \\eqref{err_sum_ed} by\n\\eq{\n\t\\|\\tsz_i\\|^2_{\\cQ}+ \\|\\tsy_i\\|^2 \n\t& \\le\\|\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big)\\|^2 \\hspace{-0.5mm}+\\hspace{-0.5mm} (1- \\underline{\\sigma}(\\cB^2))\\|\\tsy_{i-1}\\|^2 \\label{err_sum1_ed}\n}\nAlso, since $\\cJ(\\sw)+{1 \\over 2 \\mu}\\|\\sw\\|^2_{\\cC}$ is $\\delta_{\\mu}=\\delta+{1 \\over \\mu} \\sigma_{\\max}(\\cC)$-smooth, it holds that \\cite[Theorem 2.1.5]{nesterov2013introductory}:\n\\eq{\n\\| \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big)\\|^2 \\leq \\delta_{\\mu} \\tsw_{i-1}\\tran \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big)\n}\nUsing this bound, it can be easily verified that:\n\\eq{\n&\\|\\tsw_{i-1}-\\mu \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big)\\|^2 \\nnb\n&\\leq \\|\\tsw_{i-1}\\|^2 - \\mu (2-\\mu \\delta_{\\mu} ) 
\\tsw_{i-1}\\tran \\big(\\grad \\cJ(\\sw_{i-1})-\\grad \\cJ(\\sw^\\star)+{1 \\over \\mu}\\cC \\tsw_{i-1} \\big) \\nnb\n&\\leq \\big(1- \\mu \\nu (2- \\mu\\delta_{\\mu} )\\big) \\|\\tsw_{i-1}\\|^2=\\big(1- \\mu \\nu (2-\\sigma_{\\max}(\\cC)-\\mu \\delta )\\big) \\|\\tsw_{i-1}\\|^2\n} \nwhere in the last step we used the fact that $2-\\mu\\delta_\\mu > 0$, which follows from the condition $\\mu<(2-\\sigma_{\\max}(\\cC))\/\\delta$, and the fact that $\\cJ(\\sw)+{1 \\over 2 \\mu}\\|\\sw\\|^2_{\\cC}$ is $\\nu$-strongly convex. Thus, we can substitute the previous inequality in \\eqref{err_sum1_ed} and get\n\\eq{\n\t\\|\\tsz_i\\|^2_{\\cQ} \\hspace{-0.5mm}+\\hspace{-0.5mm} \\|\\tsy_i\\|^2 \n\t& \\le \\big(\\hspace{-0.5mm} 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\sigma_{\\max}(\\cC)-\\mu \\delta ) \\hspace{-0.5mm}\\big)\\hspace{-0.5mm}\\|\\tsw_{i-1}\\|^2 \\hspace{-0.5mm}+ (1- \\underline{\\sigma}(\\cB^2))\\|\\tsy_{i-1}\\|^2 \\label{err_sum1_ed-2_ed}\n} \nFrom \\eqref{error_prox_ed} and the nonexpansive property of the proximal operator, we have\n\\eq{\n\t\\|\\tsw_i\\|^2 &= \\|{\\rm \\bf prox}_{\\mu \\cR}\\big(\\bar{\\cA} \\ssz_i \\big)-{\\rm \\bf prox}_{\\mu \\cR}(\\bar{\\cA} \\ssz^\\star) \\|^2 \\leq \\|\\bar{\\cA} \\tsz_i\\|^2 \\leq \\| \\tsz_i\\|^2_{\\cQ} \\label{prox_bound_last}\n}\nwhere the last step holds because of condition \\eqref{eq:asump_penalty} so that $\\|\\bar{\\cA} \\tsz_i\\|^2=\\| \\tsz_i\\|^2_{\\bar{\\cA}^2} \\leq \\| \\tsz_i\\|^2_{\\cQ}$. Substituting \\eqref{prox_bound_last} into \\eqref{err_sum1_ed-2_ed} we reach our result. Finally we note that:\n\\eq{\n\\big(\\hspace{-0.5mm} 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\sigma_{\\max}(\\cC)-\\mu \\delta ) \\hspace{-0.5mm}\\big) < 1 \\iff \\mu < {2-\\sigma_{\\max}(\\cC) \\over \\delta}\n}\n\\end{proof}\n\\noindent An interesting choice of $\\bar{\\cA}$, $\\cB$, and $\\cC$ is the class with $\\cC=0$. 
For $\\cC=0$, which is the case for exact diffusion \\eqref{exact-diffusion} and Aug-DGM (ATC-DIGing) \\eqref{atc_DGM}, the step-size bound in Theorem \\ref{theorem_lin_convergence} becomes $\\mu<{2 \\over \\delta}$, which is independent of the network and as large as the centralized proximal gradient descent. Moreover, for $\\cC=0$ the convergence rate becomes $\\gamma= \\max \\{ 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\mu \\delta ) ,1 - \\underline{\\sigma}(\\cB^2)\\}<1$, which separates the network effect from the cost function. If we further choose $\\bar{\\cA}=\\cA^j$ and $\\cB^2=I-\\cA^j$ for integer $j\\geq 1$, then we have $1 - \\underline{\\sigma}(\\cB^2)=\\lambda_2(\\cA^j) \\rightarrow 0$ as $j \\rightarrow \\infty$ where $\\lambda_2(\\cA^j)$ is the second largest eigenvalue of $\\cA^j$. Thus, the convergence rate $\\gamma= 1 \\hspace{-0.5mm}-\\hspace{-0.5mm} \\mu \\nu (2-\\mu \\delta )$ can match the rate of centralized algorithms for large $j$. A similar conclusion holds for NIDS \\cite{li2017nids} in the smooth case, which is subsumed in our framework. \n\\begin{remark}[\\sc Network Effect]{\\rm\n\\label{remark:numberofag}\n The convergence rate depends on the network graph through the terms $\\underline{\\sigma}(\\cB^2)$ and $\\sigma_{\\max}(\\cC)$. Given a certain graph, it will depend on the number of agents {\\em indirectly} as we now explain. Suppose we choose $\\cB^2=I-\\cA$ and $\\cC=0$, where $\\cA$ is constructed as in Section \\ref{sec:combina:matrix} and satisfies Assumption \\ref{assump_combination}. Then, we have that $1 - \\underline{\\sigma}(\\cB^2)=\\lambda_2(\\cA)$ where $\\lambda_2(\\cA)$ denotes the second largest eigenvalue of $\\cA$. For a cyclic network it holds that $\\lambda_2(\\cA)=1-\\cO(1\/K^2)$. For a grid network we have $\\lambda_2(\\cA)=1-\\cO(1\/K)$. For a fully connected network, we can choose $\\cA= {1 \\over K} \\one \\one\\tran$ so that $\\lambda_2(\\cA)=0$. 
In this case, we can also choose $\\bar{\\cA}= {1 \\over K} \\one \\one\\tran$ and the primal update in \\eqref{alg_prox_ATC_framework} reduces to each agent updating its vector via a proximal gradient descent step on the objective function given in problem \\eqref{decentralized1}. \\qd\n}\n\\end{remark}\n\\section{Simulations on real data} \\label{sec-simulation} \nIn this section, we test the performance of three different instances of the proposed method \\eqref{alg_prox_ATC_framework} against some state-of-the-art algorithms. We consider the following sparse logistic regression problem:\n\\eq{\n\t\\min_{w\\in \\real^M} \\frac{1}{K}\\sum_{k=1}^K J_k(w) + \\rho \\|w\\|_1\t\\quad \\mbox{where}\\quad J_k(w) = \\frac{1}{L}\\sum_{\\ell=1}^{L}\\ln(1+\\exp(-y_{k,\\ell} x_{k,\\ell}\\tran w)) + \\frac{\\lambda}{2}\\|w\\|^2 \\nonumber\n}\nwhere $\\{x_{k,\\ell}, y_{k,\\ell}\\}_{\\ell=1}^L$ are local data kept by agent $k$ and $L$ is the size of the local dataset. We consider three real datasets: Covtype.binary, MNIST, and CIFAR-10. The last two datasets have been transformed into binary classification problems by considering data with two labels: the digit classes `2' and `4' for MNIST, and the cat and dog classes for CIFAR-10. In Covtype.binary we use 50,000 samples as training data and each sample has dimension 54. In MNIST we use 10,000 samples as training data and each sample has dimension 784. In CIFAR-10 we use 10,000 training samples and each sample has dimension 3072. All features have been preprocessed and normalized to unit norm with sklearn's normalizer\\footnote{\\url{https:\/\/scikit-learn.org}}.\n\\begin{figure*}[h!]\n\t\\centering\n\t\\includegraphics[scale=0.35]{network.jpg}\n\t\\caption{The network topology used in the simulation.}\n\t\\label{fig-network}\n\\end{figure*} \n\n\nFor the network, we generated a randomly connected network with $K=20$ agents, which is shown in Fig. \\ref{fig-network}. 
The associated combination matrix $A$ is generated according to the Metropolis rule \\cite{sayed2014nowbook}. For all simulations, we assign data evenly to each agent. We set $\\lambda=10^{-4}$ and $\\rho=2\\times10^{-3}$ for Covtype, $\\lambda=10^{-2}$ and $\\rho=5\\times10^{-4}$ for CIFAR-10, and $\\lambda=10^{-4}$ and $\\rho=2\\times10^{-3}$ for MNIST. The simulation results are shown in Figure \\ref{fig-lr}. The decentralized implementations of Prox-ED, Prox-ATC I, and Prox-ATC II are given in Appendix \\ref{supp_equiva_represent_prox}. For each algorithm, we tune the step-sizes manually to achieve the best possible convergence rate. We note that the relative performance of the algorithms differs across datasets, and Prox-ED performs best in our simulation setup. The $x$-axis in these plots is in terms of rounds of communication. Note that Prox-ATC I and Prox-ATC II require two rounds of communication per iteration compared to only one round for all other algorithms -- see Remark \\ref{remak:sharing-variable}.\n\\begin{figure*}[t!]\n\t\\centering\n\t\\includegraphics[scale=0.35]{covtype_plot.pdf}\n\t\\includegraphics[scale=0.35]{cifar10_plot.pdf}\n\t\\includegraphics[scale=0.35]{mnist_plot.pdf}\n\t\\caption{ \\footnotesize Simulation results. The $y$-axis indicates the relative squared error $\\sum_{k=1}^{K}\\|w_{k,i} - w^\\star\\|^2\/\\|w^\\star\\|^2$. Prox-ED refers to \\eqref{alg_prox_ATC_framework} with $\\bar{\\cA}=0.5 (I+\\cA)$, $\\cB^2=0.5 (I- \\cA)$, and $\\cC=0$. Prox-ATC I refers to \\eqref{alg_prox_ATC_framework} with $\\bar{\\cA}=\\cA^2$, $\\cB=I-\\cA$, and $\\cC=0$. Prox-ATC II refers to \\eqref{alg_prox_ATC_framework} with $\\bar{\\cA}=\\cA$, $\\cB=I-\\cA$, and $\\cC=I-\\cA$. 
The remaining curves correspond to DL-ADMM \\cite{chang2015multi}, PG-EXTRA \\cite{shi2015proximal}, and NIDS \\cite{li2017nids}.\n}\n\t\\label{fig-lr}\n\\end{figure*} \n\n\\section{Separate non-smooth terms: sublinear rate} \\label{sec:sublinearbound}\n In this section, we will show that if each agent owns a different local non-smooth term, then {\\em exact} global linear convergence cannot be attained in the worst case (for all problem\ninstances) although it can still be possible for some special cases. Consider the more general problem with agent-specific regularizers:\n\\eq{\n\\label{eq:separarate-regularizer}\n\\min_{w\\in \\real^M}\\ \\frac{1}{K}\\sum_{k=1}^{K}\\big(J_k(w)+R_k(w)\\big),\n}\nwhere $J_k(w)$ is a strongly convex smooth function and $R_k(w)$ is non-smooth convex with closed-form proximal mappings (each $J_k(w)$ and $R_k(w)$ are further assumed to be closed and proper functions). Although many algorithms (centralized and decentralized) exist that solve \\eqref{eq:separarate-regularizer}, none have been shown to achieve linear convergence in the presence of general non-smooth proximal terms $R_k(w)$. In the following, by tailoring the results from \\cite{woodworth2016tight}, we show that this is not possible when having access to the proximal mapping of each individual non-smooth term $R_k(w)$ separately.\n\\subsection{Sublinear Lower Bound} \\label{sec-sublinear-bound}\nLet $\\mathcal{H}$ be a deterministic algorithm that queries\n\\[\n\\{J_k(\\cdot), R_k(\\cdot), \\nabla J_k(\\cdot), {\\rm \\bf prox}_{\\mu_{i,k} R_k}(\\cdot)\n\\,|\\,\n\\mu_{i,k}>0,\\,\nk=1,\\dots,K\n\\}\n\\]\nonce for each iteration $i=0,1,\\dots$.\nTo clarify, the scalar parameter $\\mu_{i,k}>0$ can differ for $i=0,1,\\dots$ and $k=1,\\dots,K$ or they can be constants (e.g.\\ $\\mu_{i,k}=\\mu >0$).\nNote that $\\mathcal{H}$ has the option to combine the queried values in any possible combination (e.g., it may use only certain information from certain communications). 
Thus, $\\mathcal{H}$ includes decentralized algorithms in which communication is restricted to edges on a graph.\n\n\nConsider the specific instance of \\eqref{eq:separarate-regularizer}\n\\eq{\n\\min_{w\\in \\real^M} \\ F_\\nu(w)=\\frac{\\nu}{2}\\|w\\|^2+\\frac{1}{K}\\sum_{k=1}^{K} R_k(w)\n\\label{cost_F_nu}}\nwhere $\\nu>0$ and $J_k(w)= \\frac{\\nu}{2K}\\|w\\|^2$. Assume $R_k(w)<\\infty$ if and only if $\\|w\\|\\le B$ and $|R_k(w_1)-R_k(w_2)|\\le G\\|w_1-w_2\\|$ for all $w_1,w_2$ (where $B$ and $G$ are some positive constants) such that \n$\\|w_1\\|\\le B$ and $\\|w_2\\|\\le B$. To prove that linear convergence is not possible, we will reduce our setup to $\\min_{w\\in \\real^M}\\, F_0(w)$, which has a known lower bound \\cite{woodworth2016tight}.\nLet $\\mathcal{H}_o$ be a deterministic algorithm that queries\n\\[\n\\{ R_k(\\cdot), {\\rm \\bf prox}_{\\mu_{i,k} R_k(\\cdot)}(\\cdot)\n\\,|\\,\n\\mu_{i,k}>0,\\,\nk=1,\\dots,K\n\\}\n\\]\nonce for each iteration $i=0,1,\\dots$ and communicates through a fully connected network.\nThe following result is a special case of the more general result \\cite[Theorem~1]{woodworth2016tight}.\n\\begin{theorem} \n\\label{thm:woodworth_lower_bnd}\nLet $00$ as otherwise it can be used to efficiently solve $\\min_{w}\\, F_0(w)$ and contradict Theorem~\\ref{thm:woodworth_lower_bnd}.\n\n\\begin{theorem}\n\\label{thm:main_lower_bound}\nLet $0<\\nu$, $00$ and all $i \\geq i_o$.} has been established in \\cite{latafat2017new} when the functions $\\{J_k(\\cdot),R_k(\\cdot)\\}$ are piecewise linear quadratic.\nThis result does not contradict our result as the linear rate and the number of iterations needed to observe the linear rate are \\emph{dependent} on the problem dimension.\nOur linear convergence result of Theorem \\ref{theorem_lin_convergence} is dimension independent as it holds for any dimension $M$.\n} \\qd\n\\end{remark}\n\\subsection{Numerical Counterexample}\nIn this section, we numerically show that linear convergence to the 
exact solution $w^\\star$ is not possible in general. We consider an instance of \\eqref{eq:separarate-regularizer} with $K = 2$, where $M$ is a very large even number, and with quadratic smooth terms $J_k(w)=(\\eta\/2) \\|w\\|^2$ for some $\\eta >0$. We let the non-smooth terms be\n\\begin{subequations}\\label{Rk}\n\t\\eq{\n\t\tR_1(w)&=|\\sqrt{2}w(1)-1| \\hspace{-0.5mm}+\\hspace{-0.5mm} |w(2)-w(3)| \\hspace{-0.5mm}+\\hspace{-0.5mm} |w(4)-w(5)| \\hspace{-0.5mm}+\\hspace{-0.5mm} \\cdots \\hspace{-0.5mm}+\\hspace{-0.5mm}|w(M\\hspace{-0.5mm}-\\hspace{-0.5mm}2)\\hspace{-0.5mm}-\\hspace{-0.5mm}w(M\\hspace{-0.5mm}-\\hspace{-0.5mm}1)| \\\\\n\t\tR_2(w)&=|w(1)-w(2)|+|w(3)-w(4)|+\\cdots\n\t\t+|w(M-1)-w(M)| \n\t}\n\\end{subequations}\n Both ${\\bf prox}_{R_1}$ and ${\\bf prox}_{R_2}$ have closed forms --- see Appendix \\ref{app_counter_example_proximal} for details.\n The above construction is related to the one in \\cite{arjevani2015communication}, which was used to derive lower bounds for a different class of algorithms as explained in the introduction.\n\nIn the numerical experiment, we test the performance of two well-known decentralized proximal methods, PG-EXTRA \\cite{shi2015proximal} and DL-ADMM \\cite{chang2015multi,aybat2018distributed}. Note that the update structure in \\eqref{alg_prox_ATC_framework} is designed to handle only the common non-smooth term case, which is why we do not test it in this numerical counterexample. We set $M=2000$ and $\\eta = 1$. The step-sizes for both PG-EXTRA and DL-ADMM are set to $0.005$. The combination matrix is set as $A = \\frac{1}{2}\\mathds{1}_2 \\mathds{1}_2\\tran$. The numerical results in the left plot of Fig. \\ref{fig-counter-example} show that both PG-EXTRA and DL-ADMM converge sublinearly to the solution. In particular, we see that the error curves exhibit sublinear convergence after around $10^3$ iterations. The right plot in Fig. 
\\ref{fig-counter-example} shows the squared error, where both the $x$-axis and the $y$-axis are in logarithmic scale. In this scale, a straight line indicates a sublinear rate, which is clearly visible after around $10^3$ iterations. \nNo global linear convergence is observed in the simulation for sufficiently large dimension $M$ and for algorithms that do not depend on $M$, which is consistent with our discussion in Remark \\ref{remark:lowerbound_dimension}. \n\\begin{figure*}[h!]\n\\centering\n\t\\includegraphics[scale=0.55]{counter_example_semilogy.pdf}\n\t\\includegraphics[scale=0.55]{counter_example_loglog.pdf}\n\t\\caption{ \\footnotesize Numerical counterexample simulations. Both $y$-axis and $x$-axis are in logarithmic scales in the right plot. PG-EXTRA \\cite{shi2015proximal} and DL-ADMM \\cite{chang2015multi,aybat2018distributed} converge sublinearly to the solution\n of the proposed numerical counterexample.}\n\t\\label{fig-counter-example}\n\\end{figure*} \n\\section{Concluding Remarks}\nIn this work, we proposed a proximal primal-dual algorithmic framework, which subsumes many existing algorithms in the smooth case, and established its linear convergence under strongly-convex objectives. Our analysis provides wider step-size conditions than many existing works, which gives insightful indications on the performance of each algorithm. That said, this step-size bound comes at the expense of stronger assumptions on the combination matrices -- see Remark \\ref{remark:conv_conditions}. It is therefore of interest to study the interrelation between the step-sizes and combination matrices for linear convergence. Regarding the discussion below Theorem \\ref{theorem_lin_convergence}, a useful future direction is to study how to optimally choose $\\bar{\\cA}$, $\\cB$, and $\\cC$ as a function of $\\cA$ to get the best possible convergence rate while balancing the communication cost per iteration. 
\n\n\n\n \n \n\\medskip\n{\\small\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Supergiant Fast X--ray Transients before {\\it Swift}}\n\nThe Galactic plane monitoring performed with the INTEGRAL satellite led to\nthe discovery of several new sources (Bird et al., 2007). \nSome of them displayed \nsporadic, recurrent, bright and short flares, with a typical duration of a few hours and reaching\na peak luminosity of 10$^{36}$--10$^{37}$~erg~s$^{-1}$\n(Sguera et al., 2005, 2006; Negueruela et al. 2006).\nRefining the INTEGRAL positions at arcsec level with \nX--ray follow-up observations allowed the \nassociation with OB supergiant companions\n(e.g. Halpern et al.\\ 2004; Pellizza et al.\\ 2006; Masetti et al.\\ 2006;\nNegueruela et al.\\ 2006b; Nespoli et al.\\ 2008).\n\nOther important properties are the spectral similarity with \naccreting pulsars (hard power law spectra with a high energy cut-off around 15--30~keV) \nand the large dynamic range, from a peak luminosity\nof 10$^{36}$--10$^{37}$~erg~s$^{-1}$, down to a quiescent emission of 10$^{32}$~erg~s$^{-1}$.\nThe two main characterizing properties (the transient X--ray emission and\nthe association with supergiant companions) \nindicate that these transients form a new class of High Mass X--ray Binaries, \nlater called\nSupergiant Fast X--ray Transients (SFXTs; e.g. Negueruela et al. 
2006).\n\nThe similarities of the SFXTs with the properties of accreting pulsars suggest \nthat the majority of these transients are indeed HMXBs hosting a neutron star,\nalthough X--ray pulsations have been discovered in only three SFXTs: \nIGR~J11215--5952 ($P_{\\rm spin}$$\\sim$186.8 \\,s, Swank et al.\\ 2007); \nAX~J1841.0--0536 ($P_{\\rm spin}$$\\sim$4.7\\,s, Bamba et al.\\ 2001)\nand IGR~J18483--0311 ($P_{\\rm spin}$$\\sim$21 \\,s, Sguera et al.\\ 2007).\n\nEight SFXTs are confirmed \n(IGR~J08408--4503, IGR~J11215--5952, IGR~J16479--4514, XTE~J1739--302, IGR~J17544--2619,\nSAX~J1818.6--1703, AX~J1841.0-0536 and IGR~J18483--0311),\nwith $\\sim$15 more candidates which\nshowed short transient flaring activity,\nbut with no confirmed association with an OB supergiant companion.\n\nThe main mechanisms proposed to explain the short and bright flaring activity\nfrom SFXTs deal with the properties of the accretion from the \nsupergiant wind (see Sidoli 2008 for a review), either \nrelated to the wind structure (in't Zand 2005; Walter \\& Zurita Heras, 2007;\nNegueruela et al. 2008; Sidoli et al. 2007) or to gated mechanisms which allow accretion onto\nthe neutron star surface only when the centrifugal or the magnetic barriers are open, depending\non the values of the neutron star spin and surface magnetic field (e.g. Bozzo et al. 2008 and references therein).\n\nThe properties of the SFXT outbursts, although sporadic and short, \nhave been studied more in depth than the quiescent state.\nThe observations performed outside the bright outbursts have indeed been few and short (a few ks long), \nand caught these sources either in low-level flaring activity (IGR~J17544--2619, Gonzalez-Riestra et al. 2004) \nor in quiescence (with a very soft spectrum, likely thermal, with an \nX--ray luminosity of $\\sim$10$^{32}$~erg~s$^{-1}$). 
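For orientation, the luminosities quoted throughout assume isotropic emission, $L = 4\pi d^2 F$. A quick sketch of the conversion (the $8\times10^{-10}$~erg~cm$^{-2}$~s$^{-1}$ flux and 3.6~kpc distance used in the check are the IGR~J17544--2619 values quoted later in the text; the five-decade dynamic range follows from the peak and quiescent luminosities above):

```python
import math

KPC_CM = 3.086e21  # centimetres per kiloparsec

def luminosity(flux_cgs, dist_kpc):
    """Isotropic luminosity L = 4 pi d^2 F, in erg/s."""
    d = dist_kpc * KPC_CM
    return 4.0 * math.pi * d ** 2 * flux_cgs

L_flare = luminosity(8e-10, 3.6)   # ~1.2e36 erg/s, consistent with the flare values
dyn_range = 1e37 / 1e32            # peak vs. quiescence: five orders of magnitude
```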
\nNote that this latter quiescent state has been observed\n{\\em only} in a couple of SFXTs, IGR~J17544--2619 (in't Zand 2005) and IGR J08408--4503\n(Leyder et al.\\ 2007).\n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics*[angle=270,scale=0.5]{inte_igr16479_lc.ps}\n\\includegraphics*[angle=270,scale=0.5]{inte_igr17544_lc.ps}\\\\\n\\includegraphics*[angle=270,scale=0.5]{inte_xte1739_lc.ps}\n\\includegraphics*[angle=270,scale=0.5]{inte_igr18410_lc.ps}\\\\\n\\end{center}\n\\caption{\\scriptsize Light curves of the 4 SFXTs monitored with \\emph{Swift}\/XRT (0.2--10 keV), \nfrom 2007 October to 2008 September 10. \nThe upward pointing arrow in the IGR~J17544--2619 light curve \nmarks an outburst which triggered the BAT Monitor \non MJD 54412 (2007-11-08) but could not be observed with XRT because the source was \nSun-constrained. The downward-pointing arrows are 3-$\\sigma$ upper limits. \nThe gap in the observations between about December 2007 and January 2008\nis because the sources were Sun-constrained.\n}\n\\label{lsfig:4lc}\n\\end{figure}\n\n\n\n\\section{{\\it Swift} monitoring of Supergiant Fast X--ray Transients}\n\nBefore the {\\it Swift} campaign (which has been in progress since October 2007) \nno long-term monitoring of SFXTs had ever been performed to study \nthe state in which these transients spend most of their life. \nNevertheless, it has been assumed by several authors, without observational evidence, that \nSFXTs spend most of the time in quiescence, when they are not in bright outburst.\n\nThe first observations with {\\it Swift} of a member of this new class of sources \nwere performed during the 2007 February outburst of IGR~J11215--5952 (Romano et al. 2007).\nThis outburst could be completely monitored thanks to its predictability, because IGR~J11215--5952 was\nthe first SFXT where periodically recurrent outbursts were discovered (Sidoli et al. 
2006).\nThese observations are one of the most complete sets of observations of an SFXT in outburst,\nand clearly demonstrate, for the first time, that the short (a few hours long) \nflares observed with INTEGRAL (or RXTE, in a few sources) are actually \npart of a much longer outburst event lasting a few days, \nimplying that the accretion phase lasts longer than previously thought \n(Romano et al. 2007; Sidoli et al. 2007).\n\nThe success of this campaign led us to propose with {\\it Swift} the first wide-band, long-term \nand deep monitoring campaign of a sample of four SFXTs,\nwith the main aim of\n(1)-studying the long-term properties of these transients, (2)-performing \ntruly simultaneous spectroscopy \n(0.3--150 keV) during outbursts, (3)-studying the outburst recurrence and durations (see \nalso Romano et al. 2008b, these proceedings).\nThe 4 targets are: XTE~J1739--302, IGR~J17544--2619, IGR~J16479--4514\nand AX~J1841.0--0536\/IGR~J18410--0535.\nThe {\\it Swift} campaign consists of 2--3 observations\/week\/source (each observation lasts 1--2~ks; \nsee Romano et al. 2008b, these proceedings, for the campaign strategy).\nFig.~\\ref{lsfig:4lc} shows the four {\\it Swift}\/XRT light curves (0.2--10~keV) \naccumulated in the period October 2007--September 2008.\n\nHere we report on the entire {\\it Swift} monitoring campaign, updated to 2008 September 10.\nIn particular, we focus on the out-of-outburst behaviour \n(Sidoli et al. 2008a, hereafter Paper~I) \nand on the bright flares observed \nfrom two SFXTs of the sample, XTE~J1739--302 and IGR~J17544--2619 \n(Sidoli et al. 2008b, hereafter Paper~III; Sidoli et al. in preparation). \nAnother outburst caught during this campaign\nfrom IGR~J16479--4514 was published by Romano et al. (2008a, Paper~II).\n\nPreliminary results from the last outbursts from XTE~J1739--302 (triggered on 2008 August 13, Romano et al. 2008c) \nand from IGR~J17544--2619 (triggered on 2008 September 4, Romano et al. 
2008d) \nare also discussed here for the first time. \nA complete analysis will be presented in Sidoli et al. (in preparation).\n\n\n\n\\subsection{SFXTs: the long-term X-ray emission outside the bright outbursts}\n\nThe SFXT light curves of Fig.~\\ref{lsfig:4lc} show clear \nevidence for highly variable source fluxes even outside the bright outbursts (which were caught\nin three of the four sources we are monitoring).\nThe light curve variability is on timescales of days,\nweeks and months, with a dynamic range (outside bright outbursts)\nof more than one order of magnitude in all four SFXTs.\nThese sources spend most of the time in \nfrequent low-level flaring activity\nwith an average 2--10 keV luminosity of about 10$^{33}$--10$^{34}$~erg~s$^{-1}$ (see Paper~I).\n\nThe average spectra of this out-of-outburst emission are hard (although not as hard as during the\nbright flares) and can be fitted with an absorbed power law\nwith a photon index in the range 1--2. The absorbing column density is typically higher than\nthe Galactic value, which can be derived from the optical extinction toward the optical counterparts.\n\nThe out-of-outburst emission in IGR~J16479--4514 and in AX~J1841.0--0536 appears to be modulated\nwith a periodicity in the range 22--25~days, although a full timing analysis\nwill be performed at the end of the campaign.\nThe spectral properties, together with the high dynamic range in the flux \nvariability when the sources are {\\em not} in outburst, demonstrate \nthat SFXTs still accrete matter even outside their bright outbursts,\nand that the quiescent state (characterized by a very soft spectrum and by a low level of emission\nat about 10$^{32}$~erg~s$^{-1}$) is not the typical long-term state in SFXTs. 
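The bright-flare spectra discussed in the next subsection are fitted with power laws with a high energy cut-off of the form $e^{(E_{\rm cut}-E)/E_{\rm fold}}$. A minimal sketch of that spectral shape (absorption omitted for brevity; the parameter values in the usage note are the 2008 March 31 IGR~J17544--2619 best-fit values quoted below):

```python
import math

def cutoff_powerlaw(E, gamma, e_cut, e_fold):
    """Photon spectrum proportional to E^-gamma below e_cut (keV),
    exponentially suppressed as exp((e_cut - E)/e_fold) above it."""
    shape = E ** (-gamma)
    if E > e_cut:
        shape *= math.exp((e_cut - E) / e_fold)
    return shape
```

With $\Gamma=0.75$, $E_{\rm cut}=18$~keV and $E_{\rm fold}=4$~keV, the spectrum is a pure power law up to 18~keV and falls by an extra factor of $e$ for every 4~keV beyond it.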
\n\n\n \n\\subsection{SFXTs: bright flares from IGR~J17544--2619 and XTE~J1739--302}\n\nTypically, the SFXT long-term light curves show a number of bright outbursts, reaching peak luminosities \nof a few 10$^{36}$~erg~s$^{-1}$, assuming the distances determined by Rahoui et al. (2008).\nThe only source which did not undergo bright flares is AX~J1841.0--0536\/IGR~J18410--0535,\nwhich showed a flux variability of more than two orders of magnitude.\n\nDuring the {\\it Swift} campaign, three and two outbursts were caught respectively from \nIGR~J17544--2619 (the first of them triggered BAT, but could not be observed with {\\it Swift}\/XRT \nbecause of Sun-constraints) and from XTE~J1739--302, at the following\ndates: on 2007 November 8, 2008 March 31 and 2008 September 4 from IGR~J17544--2619, and\non 2008 April 8 and 2008 August 13 from XTE~J1739--302.\nThus, bright flares in these two prototypical SFXTs occur on a timescale of $\\sim$4--5 months\n(the three outbursts from IGR~J17544--2619 were spaced by $\\sim$144 and 157 days, respectively, while\nthe two outbursts from XTE~J1739--302 were spaced by 127~days).\n\nThe bright flare from IGR~J17544--2619 (on 2008 March 31; Paper~III) \ncould be observed simultaneously with XRT (0.2--10~keV) and BAT (15--150~keV).\nA fit with a power law with a high energy cut-off ($e^{(E_{\\rm cut}-E)\/E_{\\rm fold}}$) \nresulted in the following parameters:\n$N_{\\rm H}$=(1.1$\\pm{0.2}$)$\\times 10^{22}$~cm$^{-2}$, $\\Gamma$=0.75$\\pm{0.11}$, \ncut-off energy $E_{\\rm cut}$=18$\\pm{2}$~keV\nand e-folding energy $E_{\\rm fold}$=4$\\pm{2}$~keV, reaching a \nluminosity of 5$\\times$10$^{37}$~erg~s$^{-1}$ (0.5--100~keV at 3.6~kpc).\nNote that the out-of-outburst emission observed with XRT below 10 keV\nis softer and more absorbed than the emission during this flare.\n\nThe other flare observed from IGR~J17544--2619 on 2008 September 4 was even brighter (Romano et al. 
2008d), \nand was preceded by intense activity for a few days as observed with INTEGRAL \nduring the Galactic bulge monitoring programme (Kuulkers et al. 2008; Romano et al. 2008d).\nThe XRT light curve exceeded 20~counts~s$^{-1}$. \nThis peak emission\ncould be fitted with an absorbed power law with a photon index of 1.3$\\pm{0.2}$ and \nan absorbing column density of 1.8$^{+0.4}_{-0.3}$$\\times10^{22}$~cm$^{-2}$.\nThe average flux in the 2--10~keV range was 8$\\times$10$^{-10}$~erg~cm$^{-2}$~s$^{-1}$.\nThe fainter X--ray emission during the flare (2$\\times$10$^{-10}$~erg~cm$^{-2}$~s$^{-1}$) \ndisplayed a similar \nabsorbing column density of 1.4$^{+0.7}_{-0.5}$$\\times10^{22}$~cm$^{-2}$ and a photon index \n$\\Gamma$=0.8 $^{+0.4}_{-0.3}$.\nA more detailed analysis of the properties of this outburst will be performed in a\nforthcoming paper (Sidoli et al. in preparation).\n\nThe first outburst from XTE~J1739--302 was caught on 2008 April 8 (Paper~III) and was composed \nof two bright flares separated by about 6000~s.\nThe X--ray emission was significantly more absorbed than in IGR~J17544--2619:\nthe broad band (XRT+BAT) spectrum could be well described by an\nabsorbed power law with a high energy cut-off, \nwith the following parameters: $N_{\\rm H}$=1.3$\\times$$10^{23}$~cm$^{-2}$, \n$\\Gamma$=1.4$^{+0.5} _{-1.0}$,\ncut-off energy $E_{\\rm cut}$=6$ ^{+7} _{-6}$~keV\nand e-folding energy $E_{\\rm fold}$=16 $ ^{+12} _{-8}$~keV.\nThe derived X--ray luminosity is 3$\\times$10$^{37}$~erg~s$^{-1}$ (0.5--100~keV).\n\nA new outburst was caught from XTE~J1739--302 on 2008 August 13 (Romano et al. 2008c).\nA preliminary spectral analysis of the average broad band spectrum of this bright flare resulted\nin the following parameters, adopting an absorbed power law with a high energy cut-off:\nabsorbing column density $N_{\\rm H}$=(4.0$\\pm{0.3}$)$\\times$$10^{23}$~cm$^{-2}$, \n$\\Gamma$=0.7$\\pm{0.1}$,\n$E_{\\rm cut}$=4.6$\\pm{0.3}$~keV\nand $E_{\\rm fold}$=9 $ ^{+2} _{-1}$~keV. 
\nThe X--ray luminosities during the flare were 2$\\times$10$^{36}$~erg~s$^{-1}$ (0.5--10~keV) and\n5$\\times$10$^{36}$~erg~s$^{-1}$ (0.5--100~keV). \nFig.~\\ref{lsfig:contxte} shows the comparison of the spectroscopy in the\nsoft energy range (XRT data) of the out-of-outburst emission with the results\nfrom the two flares from XTE~J1739--302.\nA time-resolved spectral \nanalysis during the flare will be reported in a forthcoming paper (Sidoli et al., in preparation).\n\n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics*[angle=270,scale=0.45]{fig2.ps}\n\\end{center}\n\\caption{\\scriptsize Comparison of the spectral parameters (absorbed single power law model)\nderived for XTE~J1739--302 during the two bright flares discussed here, \nand the total spectrum of the out-of-outburst emission reported in Paper~I.\n68\\%, 90\\% and 99\\% confidence level contours are shown.\n}\n\\label{lsfig:contxte}\n\\end{figure}\n\nA comparison of the SFXT light curves (the four SFXTs constantly monitored with {\\it Swift},\ntogether with two other sources, IGR~J11215--5952 and IGR~J08408--4503) \nduring their outbursts is reported in\nFig.~\\ref{lsfig:duration}.\nThis plot clearly demonstrates that the outbursts from \nall these transients last much longer than simply a few hours as previously thought.\nFig.~\\ref{lsfig:duration} shows about 8 days of monitoring for each target, and \nit is clear that the first SFXT where a days-long outburst event was observed\n(IGR~J11215--5952, Romano et al. 2007) is not a peculiar case among SFXTs, but a similar behaviour\nhas been observed in the other SFXTs monitored by {\\it Swift} during the last year \n(except AX~J1841.0--0536, where no outbursts have yet been observed).\n\n\n\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics*[angle=0,scale=0.7]{fig3.ps}\n\\end{center}\n\\caption{\\scriptsize Light curves of the outbursts of SFXTs followed by {\\it Swift}\/XRT\nreferred to their\nrespective triggers. 
We show the 2005 outburst of IGR~J16479$-$4514 (Paper~I), \nwhich is more complete than the one observed in 2008 (Paper~II).\nThe IGR~J11215$-$5952 light curve has an arbitrary start time, since\nthe source\ndid not trigger the BAT (the observations were obtained as a ToO; Romano et al. 2007).\nThe third and the last panels report the two flares from XTE~J1739--302 observed\non 2008 April 8 and on 2008 August 13, respectively.\nThe fourth panel shows the outburst from IGR~J17544--2619 which occurred on 2008 March 31 (Paper~III).\nThe fifth panel shows the multiple flaring activity of another SFXT,\nnot part of this campaign, IGR~J08408--4503, which occurred on 2008 July 5 (Romano et al., 2008e).\nNote that where no data are plotted, no data were collected. Vertical dashed lines \nmark time intervals equal to 1 day.\n}\n\\label{lsfig:duration}\n\\end{figure}\n\n\n\n\n\\section{Conclusions}\n\nThe results of the monitoring campaign we have been performing in the last year \nwith {\\it Swift} of a sample of 4 SFXTs can be summarized as follows:\n\n\\begin{itemize}\n\n\\item the long-term behaviour of the SFXTs outside their outbursts is a low-level accretion phase at a\nluminosity of 10$^{33}$--10$^{34}$~erg~s$^{-1}$, with a dynamic range of 1 up to, sometimes, 2 orders of magnitude in flux;\n\n\\item the broad band X--ray emission during the bright flares can be described well with models \ncommonly adopted for the emission from accreting X--ray pulsars;\n\n\\item the SFXT spectra during flares show high energy cut-offs \ncompatible with a neutron star magnetic field of about 10$^{12}$~G, although no cyclotron lines have been detected yet;\n\n\\item the durations of the outbursts from different SFXTs observed with {\\it Swift} are longer than a few hours.\n\n\\end{itemize}\n\n\n\\acknowledgments\nWe thank the {\\it Swift} team duty scientists and science planners P.J.\\ Brown, M.\\ Chester,\nE.A.\\ Hoversten, S.\\ Hunsberger, C.\\ Pagani, J.\\ Racusin, and 
M.C.\\ Stroh\nfor their dedication and willingness to accommodate our sudden requests\nin response to outbursts during this long monitoring effort.\nWe also thank the remainder of the {\\it Swift} XRT and BAT teams,\nJ.A.\\ Nousek and S.\\ Barthelmy in particular, for their invaluable help and support with\nthe planning and execution of the observing strategy.\nThis work was supported in Italy by contracts ASI I\/023\/05\/0, I\/088\/06\/0, and I\/008\/07\/0, and \nat PSU by NASA contract NAS5-00136.\nH.A.K. was supported by the {\\it Swift} project.\nP.R.\\ thanks INAF-IASF Milano and L.S.\\ INAF-IASF Palermo,\nfor their kind hospitality.\nItalian researchers acknowledge the support of Nature (455, 835-836) and thank\nthe Editors for increasing the international awareness of the current\ncritical situation of the Italian Research.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe hierarchy of quark and charged lepton masses and the small quark mixing angles\nhave been among the most puzzling aspects left unresolved by the Standard Model.\nThe recent discovery of neutrino masses and mixings has provided further clues\nin the search for the new physics Beyond the Standard Model which must be\nresponsible for the pattern of fermion masses and mixing angles.\nOne promising approach to understanding the fermion spectrum is\nthe idea of family symmetry, and in particular the idea of a\n$U(1)$ family symmetry as originally proposed by Froggatt and Nielsen \\cite{Froggatt:1978nt}.\nSuch an approach was given considerable impetus by the observation\nthat in many string constructions additional $U(1)$ symmetries\nare ubiquitous, and furthermore such a gauged broken $U(1)$\ncould provide a phenomenologically viable candidate\nfamily symmetry by virtue of the Green-Schwartz anomaly cancellation\nmechanism \\cite{Green:1984sg}, which provides a string solution to the no-go theorem\nthat anomaly freedom requires such symmetries to be 
family independent \\cite{Weinberg:anomalies}.\nAs a result of this a considerable literature has developed in recent\nyears based on string-inspired $U(1)$ family symmetries\n\\cite{Chankowski:2005qp,Babuetal}.\n\nMany non-abelian family symmetries have also been considered, \nfor example based on $SU(3)$ family symmetry \\cite{King:2001uz},\nand textures and analyses of fermion masses have also been carried out\nwithout using any family\nsymmetry. At the present time some very successful approaches exist, along with\nothers that may, with modification, also be effective. Family symmetries can\nbe abelian or non-abelian, they can require symmetric Yukawa matrices or\nnot, they can be imposed with or without an associated grand unified theory,\nand so on. Criteria that could be used to choose among possible approaches\ninclude not only describing the quark masses and mixings, and the charged\nlepton masses, but also neutrino masses and mixings, supersymmetry soft\nbreaking effects (since particularly the trilinear couplings are affected by\nthe Yukawa couplings), how many parameters are used to describe the data,\nwhether some results such as the Cabibbo angle are generic or fitted, and\nmore. One of our main goals here is to look at the various possibilities\nsystematically and see if some seem to be favoured by how well they do on a\nset of criteria such as those listed above. Presumably family\nsymmetries originate in string theories, and are different for different\nstring constructions that lead to a description of nature, so identifying a\nunique family symmetry (or a subset of possible ones) could point strongly\ntoward a class of string theories and away from other classes. 
At the\npresent time this approach is not very powerful, though it gives some\ninteresting insights, but better analyses and additional data may improve it.\n\n\nIn this paper we shall consider $U(1)$ family symmetries and\nunification as a viable framework for quark and lepton masses and\nmixing angles in the light of neutrino mass and mixing data\n\\cite{King:2003jb}, using\nsequential right-handed neutrino dominance \\cite{King:1998jw} as a guide\nto constructing hierarchical neutrino mass models with bi-large\nmixing. As has been pointed out earlier \\cite{Ross:2000fn}, models which\nsatisfy the Gatto-Sartori-Tonin relation (GST\n\\cite{Gatto:1968ss}){\\footnote{$V_{us}=|\\sqrt{\\frac{m_d}{m_s}}-e^{i\\Phi_1}\\sqrt{\\frac{m_u}{m_c}}|$}}\nrequire the presence of both positive and negative Abelian charges.\nAs we will discuss, the sequential dominance conditions also require\nthe presence of both positive and negative Abelian charges, and hence\nat least two flavon fields of equal and opposite charges. These models,\nhowever, result in complicated $U(1)$ charges; non-GST models, on the other hand,\nhave a simpler charge structure and may be possible to\nrealize in a more general context. 
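As a quick illustration of the GST relation in the footnote: with the commonly quoted rough mass ratios $m_d/m_s \approx 1/20$ and $m_u/m_c \approx 1/300$ (illustrative assumptions of ours, not values fitted in this paper), $V_{us}$ ranges over roughly $0.17$--$0.28$ as the phase $\Phi_1$ varies, comfortably bracketing the measured Cabibbo angle of $\approx 0.22$:

```python
import cmath, math

MD_MS = 1.0 / 20.0   # rough m_d/m_s (illustrative assumption)
MU_MC = 1.0 / 300.0  # rough m_u/m_c (illustrative assumption)

def vus(phi):
    """GST relation: V_us = |sqrt(md/ms) - e^{i phi} sqrt(mu/mc)|."""
    return abs(math.sqrt(MD_MS) - cmath.exp(1j * phi) * math.sqrt(MU_MC))

v_min, v_max = vus(0.0), vus(math.pi)  # extremes over the phase
```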
In this work we also consider non-GST cases.\n\nWe shall consider $U(1)$ family symmetry combined with unified gauge groups\nbased on $SU(5)$ and $SO(10)$, assuming a Georgi-Jarlskog relation,\nand also consider non-unified models without such a relation.\nWe will present new classes of solutions\nto the anomaly cancellation conditions and perform phenomenological fits,\nand we will compare the different classes of $U(1)$\nto each other and to non-Abelian family symmetry models based on\n$SU(3)$ \\cite{King:2001uz}, \nby performing specific phenomenological fits to the\nundetermined coefficients of the operators.\nFinally we will consider the implications of such\nan approach for flavour-changing processes in the framework\nof supersymmetry, leaving a detailed analysis for future work.\n\nThe layout of the paper is as follows. In Section \\ref{sec:anomconst} we consider the\ngeneral conditions for Green-Schwartz anomaly cancellation, and move on to describe\nthe classes of solutions, classified by whether they are consistent with $SU(5)$, $SO(10)$, Pati-Salam\nunification of representations, generalized non-unified relations, or not at all consistent\nwith unification. Having found these solutions, we move on in section \\ref{sec:new-paramaterisation}\nto re-parametrize in terms of differences in $U(1)_F$ charges. In section \\ref{sec:su5q} we consider\nthe constraints on the Yukawa textures from requiring acceptable quark mixings and quark and lepton\nmasses. Then in section \\ref{sec:neuts} we consider the constraints from requiring acceptable neutrino masses\nand mixings in single right-handed neutrino dominance (SRHND) models, which are a class of see-saw\nmodels. In section \\ref{sec:su5-solut-satisfy-GST} we construct solutions which are consistent with\n$SU(5)$ unification, the Gatto-Sartori-Tonin (GST) relation \\cite{Gatto:1968ss}, and correct fermion masses and mixings. 
\nIn section \\ref{sec:su5-solutions-not-GST} we construct solutions which are consistent with $SU(5)$ unification\nand correct fermion masses and mixing angles but which are not consistent with the GST relation. In section \\ref{sec:non-su5-cases}\nwe construct solutions which are not consistent with $SU(5)$ unification. In section \\ref{sec:fitsmasses}, we take\nsome of the solutions constructed in section \\ref{sec:su5-solut-satisfy-GST} and section \\ref{sec:su5-solutions-not-GST}\nand fit the arbitrary $O(1)$ parameters to reproduce the observed fermion masses and mixing angles as closely as possible.\nThen in section \\ref{sec:susyconst} we briefly consider whether flavour-changing processes will be dangerously large\nin these models, presenting two specific scenarios: a non-minimal SUGRA possibility and a string-inspired mSUGRA-like scenario which is expected to be (or be close to) the best-case scenario for flavour-changing processes and for which we check explicitly $\\mu\\rightarrow e\\gamma$. Finally, we conclude in\nsection \\ref{sec:conclusions}.\n\n\n\n\\section{Anomaly Constraints on $U(1)$ Family symmetries\\label{sec:anomconst}}\n\n\n\\subsection{Green-Schwartz anomaly cancellation}\n\\label{sec:green-schw-anom}\n\nConsider an arbitrary $U(1)$ symmetry which extends the Standard \nModel gauge group. If\nwe were to insist that it does not contribute to mixed anomalies with \nthe Standard Model,\nwe would find that the generator of the $U(1)$ would be a linear \ncombination of weak hypercharge\nand $B-L$ \\cite{Weinberg:anomalies}. This clearly is not useful for \nfamily symmetries, so we need to use a more sophisticated\nway of removing the anomalies, Green-Schwartz anomaly cancellation \\cite{Green:1984sg}. 
In \nthis case, we can cancel the mixed\n$U(1) - SU(3) - SU(3)$, $U(1) - SU(2) - SU(2)$ and \n$U(1) - U(1)_Y - U(1)_Y$ anomalies, $A_3$, $A_2$, and $A_1$,\nif they appear in the ratio:\n\\begin{equation}\n  \\label{eq:aratio}\n  A_3 : A_2 : A_1: A_{U(1)}:A_G = k_3 : k_2 : k_1: 3 k_{U(1)}:24,\n\\end{equation}\nwhere we have included the relations to the anomaly of the anomalous flavour group, $A_{U(1)}$, and the gravitational anomaly; $k_i$ are the Kac-Moody levels of the gauge groups, defined by the GUT-scale relation:\n\\begin{equation}\n  \\label{eq:g2ratio}\n  g_3^2 k_3 = g_2^2 k_2 = g_1^2 k_1\n\\end{equation}\nIf we work with a GUT that has the canonical GUT normalization, we \nfind:\n\\begin{equation}\n  \\label{eq:arelation}\n  A_3 = A_2 = \\frac{3}{5} A_1\n\\end{equation}\nBut we still require that the $U(1) - U(1) - U(1)_Y$ \nanomaly, $A_1^\\prime$, vanishes.\nNow, the anomalies are given by:\n\\begin{equation}\n  \\label{eq:4}\n  A_i = \\frac{1}{2}\\mathrm{Tr}\\left[ \\left\\{ T^{(i)}_a , \nT^{(i)}_b\\right\\} T^\\prime \\right].\n\\end{equation}\nWe then use the fact that $\\left\\{T_a, T_b\\right\\} = \\delta_{ab} \n\\mathbf{1}$ for $SU(N)$ and\n$\\left\\{ Y, Y\\right\\} = 2Y^2$ for $U(1)_Y$ to obtain:\n\\begin{eqnarray}\n  \\label{eq:A3}\n  A_3 &=& \\frac{1}{2} \\left[ \\sum_{i = 1}^3 ( 2 q_i + u_i + d_i ) \n\\right] \\\\\n  \\label{eq:A2}\n  A_2 &=& \\frac{1}{2} \\left[ \\sum_{i = 1}^3 ( 3 q_i + l_i) + h_u + h_d \n\\right] \\\\\n  \\frac{3}{5} A_1 &=& \\frac{1}{2}\n  \\left[\n    \\sum_{i = 1}^3 ( \\frac{q_i}{5} + \\frac{8 u_i}{5} + \\frac{2}{5} d_i \n+ \\frac{3 l_i}{5}\n    + \\frac{6 e_i}{5} ) + \\frac{3}{5}( h_u + h_d )\n  \\right]\\\\\n  \\label{eq:A1p}\n  A_1^\\prime &=& \\sum_{i=1}^3 ( -q_i^2 + 2 u_i^2 - d_i^2 + l_i^2 - \ne_i^2 ) + ( h_d^2 - h_u^2 ) = 0\n\\end{eqnarray}\n\\begin{table}[ht]\n  \\centering\n  \\begin{tabular}{|c|cccccccc|}\n    \\hline\n    Field & $Q_i$ & $\\overline{U}_i$ & $\\overline{D}_i$ & $L_i$ & \n$\\overline{E}_i$ & $\\overline{N}_i$ & $H_u$ & $H_d$ 
\\\\\n    \\hline\n    Charge & $q_i$ & $u_i$ & $d_i$ & $l_i$ & $e_i$ & $n_i$ & $h_u$ & \n$h_d$ \\\\\n    \\hline\n  \\end{tabular}\n  \\caption{Fields and family charges}\n  \\label{tab:charges}\n\\end{table}\nSince, in the mixed anomalies of the $U(1)$ group with the SM gauge group that cancel via \nthe Green-Schwartz mechanism, each charge\nonly ever appears within a sum, we parameterize the sums as follows \\cite{Jain:1994hd}:\n\\begin{eqnarray}\n  \\label{eq:sumqi}\n  \\sum_{i=1}^3 q_i \\!&=&\\! x + u,\\quad \\sum_{i=1}^3 u_i \\ =\\ x + 2u, \\\\\n  \\label{eq:sumdi}\n  \\sum_{i=1}^3 d_i \\!&=&\\! y + v,\\quad \\sum_{i=1}^3 l_i \\ =\\ y, \\\\\n  \\label{eq:sumei}\n  \\sum_{i=1}^3 e_i \\!&=&\\! x, \\\\\n  \\label{eq:hd}\n  h_u \\!&=&\\! -z,\\quad h_d \\ =\\ z + ( u + v ).\n\\end{eqnarray}\nSubstituting \\eq{eq:sumqi}-\\eq{eq:hd} into \\eq{eq:A3}-\\eq{eq:A1p}\nwe find that they satisfy\nEq.~(\\ref{eq:arelation}):\n\\begin{equation}\n  \\label{eq:anomalies}\n  A_3 = A_2 = \\frac{3}{5} A_1 = \\frac{1}{2} \\left[ 3x + 4u + y + \nv\\right],\n\\end{equation}\nwhich shows that the parameterization is consistent. However, we need to find those solutions which also satisfy $A'_{1}=0$.\nWe will see how we can achieve this for different cases. Since the proposal of the GS anomaly cancellation mechanism it has been known that the easiest solution, \n$u=v=0$, leads to an $SU(5)$ or Pati-Salam group realization of mass matrices. Another possible solution is to have $u = -v \\ne 0$. Both these forms\n admit a SUSY $\\mu$ term in the tree level\n superpotential at the gravitational scale. However, given the form of \\eq{eq:sumqi}-\\eq{eq:hd}, one can try to use the flavour symmetry in order \nto forbid this term, allowing it just in the K\\\"ahler potential and thus invoking the Giudice-Masiero \\cite{Giudice:1988yz} mechanism in order to generate a\n $\\mu$ term of the desired phenomenological order. Therefore, apart from the cases $u+v=0$, we examine plausible cases for $u \\ne -v \\ne 0$. 
Of course \nin the cases $u=v=0, u=-v\\ne 0$ one can use another symmetry to forbid the $\\mu$ term in the superpotential; however, it is appealing if the flavour\n symmetry forbids the $\\mu$ term at high scales.\n\n\\subsection{Anomaly free $A_1^\\prime$ with $u = v = 0$ solutions\\label{sec:yukawa-textures-uv-zero}}\nIn this case the parameterization simplifies and in fact we can\ndecompose the $U(1)$ charges into flavour independent and flavour dependent parts\n\\begin{equation}\n\\label{eq:FIaFDch}\nf_i = \\frac{1}{3}f + f_i^\\prime.\n\\end{equation}\nThe first term is flavour independent because it just depends on the total sum of the individual charges, and the $f_i^\\prime$ are flavour dependent charges. We can always find $x$ and $y$ which satisfy\n\\begin{equation}\n  \\label{eq:sumfip}\n\\sum_{i=1}^3 f_i^\\prime = 0.\n\\end{equation}\nIn this way $A'_{1}$ can be expressed as flavour independent plus flavour dependent terms\n\\begin{eqnarray}\nA'_{1}=A'_{1FI}+ A'_{1FD}.\n\\end{eqnarray}\nFollowing this, with the unfortunate notation that we have a new $u$, \ncompletely unrelated to the $u$ that we have already set to zero, we then have:\n\\begin{eqnarray}\n  \\label{eq:A1pFIFD}\n  A_1^\\prime &=& A'_{1FI}+ A'_{1FD}\\nonumber\\\\\n  &=& \\frac{1}{3} \\left[ - q^2 + 2 u^2 - d^2 + l^2 - e^2 \n\\right] \n  + \\sum_{i = 1}^3 ( -q_i^{\\prime\\; 2} + 2 u_i^{\\prime\\; 2} - \nd_i^{\\prime\\;2} + l_i^{\\prime\\;2} - e_i^{\\prime\\;2} )\n\\end{eqnarray}\nNow it is clear that the terms in the\nsquare bracket in \\eq{eq:A1pFIFD} are family \nindependent. It turns out that the square bracket term is\nautomatically\nzero in this case, since from Eqs.\\ref{eq:sumqi}-\\ref{eq:sumei},\nwe have: $q = u = e = x$ and $l = d = y$. 
Then \nwe have to make the family dependent part (the second term in \\eq{eq:A1pFIFD}) vanish.\n\n\\subsubsection{$SU(5)$ and $SO(10)$ type cases}\nOne way to make the family dependent part vanish, $A'_{1FD}=0$, \nis to set $l_i = d_i$ and $q_i = u_i = e_i$ \n\\footnote{The reason that the charges are unprimed\nhere is that if it is true for the primed charges, it is also true for \nthe unprimed charges}. This condition would be automatic in \n$SU(5)$, but in general such a condition on the charges\ndoes not necessarily imply a field theory $SU(5)$ GUT to actually be\npresent, although it may be.\n\nSince the generic Yukawa structure is of the form:\n\\begin{equation}\n  \\label{eq:upsymcaspar}\n  Y^f \\approx \n  \\left[\n    \\begin{array}{ccc}\n      \\epsilon^{|f_1 + q_1+h_f| } & \n      \\epsilon^{|f_2 + q_1+h_f|} &\n      \\epsilon^{|f_3 + q_1+h_f| } \\\\\n      \\epsilon^{|f_1 + q_2+h_f| } &\n      \\epsilon^{|f_2 + q_2+h_f| } &\n      \\epsilon^{|f_3 + q_2+h_f| } \\\\\n      \\epsilon^{|f_1 + q_3+h_f|} &\n      \\epsilon^{|f_2 + q_3+h_f| } &\n      \\epsilon^{|f_3 + q_3+h_f| }\n    \\end{array}\n  \\right],\n\\end{equation}\nit is clear that the $SU(5)$ relations $d_i = l_i$, $q_i = u_i \n= e_i$ lead to Yukawa textures of the form:\n\\begin{eqnarray}\n  \\label{eq:Yusu5}\n  Y^u &\\approx&\n  \\left[\n    \\begin{array}{ccc}\n      \\epsilon^{|2 e_1 -2e_3|} &\n      \\epsilon^{|e_1 + e_2-2e_3|} &\n      \\epsilon^{|e_1 - e_3|} \\\\\n      \\epsilon^{|e_1 + e_2 - 2e_3|} &\n      \\epsilon^{|2 e_2- 2 e_3|} &\n      \\epsilon^{|e_2 - e_3|} \\\\\n      \\epsilon^{|e_1 - e_3 |} &\n      \\epsilon^{|e_2 - e_3 |} &\n      \\epsilon^{| 0 |}\n    \\end{array}\n  \\right], \\\\\n\\label{eq:Ydsu5}\n  Y^d &\\approx&\n  \\left[\n    \\begin{array}{ccc}\n      \\epsilon^{|l_1 + e_1+h_d| } &\n      \\epsilon^{|l_2 + e_1+h_d| } &\n      \\epsilon^{|l_3 + e_1+h_d| } \\\\\n      \\epsilon^{|l_1 + e_2+h_d| } &\n      \\epsilon^{|l_2 + e_2+h_d| } &\n      \\epsilon^{|l_3 + e_2+h_d| } \\\\\n      \\epsilon^{|l_1 + e_3+h_d| } &\n      \\epsilon^{|l_2 + e_3+h_d| } &\n      \\epsilon^{|l_3 + e_3+h_d| }\n    \\end{array}\n  \\right], \\\\\n 
Y^e &\\approx& Y^{d\\ T}.\n\\end{eqnarray}\nNote that the up matrix is approximately symmetric,\ndue to the assumed $SU(5)$ relation of charges.\nThe reason why the textures above are approximate is that \neach entry in each matrix contains an undetermined order unity\nflavour dependent coefficient, generically denoted as $a^f_{ij}=O(1)$.\nWe shall continue to suppress such coefficients\nin order to make the discussion less cumbersome, \nbut will return to this question when \nwe discuss the numerical fits later in the paper.\nWe have also assumed that the up and down Yukawa matrices are\ndescribed by a single expansion parameter $\\epsilon$.\nThe possibility of having two different expansion parameters,\none for the up sector and one for the down sector, \nwill also be discussed later in the paper.\nIn order to\nhave an acceptable top quark mass, we have required that $h_u+2e_3 = 0$, in which \ncase the smallness of the bottom quark mass can be due\nto $h_d+e_3+l_3 \\ne 0$, and we are free to have a small $\\tan\\beta$, since large $\\tan\\beta$ is \nnot needed to explain the ratio $\\frac{m_t}{m_b}$ on its own.\n\nAlso note that, as expected from the $SU(5)$ relation of charges, the\ndown and electron textures are the approximate transposes of each\nother, $Y^d \\approx (Y^e)^T$. Such a relation implies bad mass\nrelations between the down-type quarks and charged leptons, but\nthese may be remedied by Clebsch factors, such as a Georgi-Jarlskog\nfactor of 3 in the (2,2) position of the charged lepton Yukawa\nmatrix. \n\n\nIf we were to look at the case $x = y$, then we would have a solution\nsuggestive of unified $SO(10)$ GUT symmetry, for which $l_i = q_i =\nu_i = d_i = e_i$. The same comments above also apply here, namely that \nsuch a condition on the charges, though consistent with \nan $SO(10)$ GUT, does not necessarily imply a field theory realization of it. 
\nIn the $SO(10)$ case $x=y$, the matrices \\eq{eq:Yusu5}-\\eq{eq:Ydsu5} would all become equal to \nthe same symmetric texture of Eq.~\\ref{eq:Yusu5}.\n\n\\subsubsection{Pati-Salam type cases}\nIn this case we apply the Pati-Salam constraints on the charges,\n\\begin{eqnarray}\n\\label{eq:pscharg}\nq_i=l_i\\equiv q^L_i,\\quad u_i=d_i=e_i=n_i\\equiv q^R_i,\n\\end{eqnarray}\nand we can immediately see that for this choice of charges \nboth the\nflavour-independent and flavour-dependent parts in \\eq{eq:A1pFIFD} vanish. We have also\nincluded the right-handed neutrino charges, which do not enter into\nthe anomaly cancellation conditions,\\ \\eq{eq:A3}-\\eq{eq:A1p}, but which, with\na Pati-Salam group, should obey the relation of \\eq{eq:pscharg}. Thus\nin this case all the mass matrices have the form\n\\begin{eqnarray}\n Y^{f} &=&\n \\left(\n \\begin{array}{ccc}\n \\epsilon^{|l_1 + e_1+h_{f}| } &\n \\epsilon^{|l_1 + e_2+h_{f}| } &\n \\epsilon^{|l_1 + e_3+h_{f}|} \\\\\n \\epsilon^{|l_2 + e_1+h_{f}|} &\n \\epsilon^{|l_2 + e_2+h_{f}|} &\n \\epsilon^{|l_2 + e_3+h_{f}| } \\\\\n \\epsilon^{|l_3 + e_1+h_{f}| } &\n \\epsilon^{|l_3 + e_2+h_{f}| } &\n \\epsilon^{|l_3 + e_3+h_{f}| }\n \\end{array}\n\\right)\n\\label{PStexture}\n\\end{eqnarray}\nfor $h_{f}=h_u,\\ h_d$. In this case we always need to satisfy $x=y$, in contrast with the generic $SU(5)$ case, where $x=y$ is not required. We can then express one of the charges in terms of the other two and the parameter $x=y$,\n\\begin{eqnarray}\ne_1=x-(e_2+e_3),\\quad l_1=x-(l_2+l_3),\\quad \\Rightarrow\ne_1+e_2+e_3=l_1+l_2+l_3.\n\\label{PScondn}\n\\end{eqnarray}\nWe have already noted that the Pati-Salam constraints \non the charges imply that the anomaly $A_1'$\nautomatically vanishes. It is also a remarkable fact that \nthe constraints in Eq.\\ref{PScondn} do not in practice lead to \nany physical constraints on the form of the Yukawa texture\nin Eq.\\ref{PStexture}. 
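To see concretely why Eq.\\ref{PScondn} carries no physical information, consider the following sketch (the sample charges are hypothetical): a flavour-independent shift of the charges leaves every exponent $|l_i + e_j + h_f|$ of the texture in Eq.\\ref{PStexture} unchanged, while it can be chosen to enforce the equality of the charge sums.

```python
# A flavour-independent shift e_i -> e_i + D, l_i -> l_i - D leaves all
# Pati-Salam texture exponents |l_i + e_j + h_f| invariant, yet can always
# be chosen so that sum(e_i) == sum(l_i), as required by anomaly freedom.
def exponents(l, e, h):
    return [[abs(li + ej + h) for ej in e] for li in l]

l, e, h = [4, 2, 0], [1, -1, 0], 0   # hypothetical charges: sums 6 and 0
D = (sum(l) - sum(e)) // 6           # integer shift by construction here
e_new = [ei + D for ei in e]
l_new = [li - D for li in l]

assert exponents(l, e, h) == exponents(l_new, e_new, h)
assert sum(e_new) == sum(l_new)
```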
In practice, assuming only that $u+v=0$,\none can start with any set of charges $l_i$, $e_i$ which \nlead to any desired Yukawa texture, where the charges do not\nsatisfy the anomaly free constraint in Eq.\\ref{PScondn}.\nThen from any set of non-anomaly-free charges one can construct\na set of anomaly-free charges which do satisfy Eq.\\ref{PScondn},\nbut do not change the form of the Yukawa matrix in Eq.\\ref{PStexture},\nby simply making an equal and opposite flavour-independent shift \non the charges as follows \\cite{King:2000ge}:\n$e_i\\rightarrow e_i +\\Delta$, $l_i\\rightarrow l_i -\\Delta$.\nIn this paper we shall not consider the Pati-Salam approach in detail.\n\n\\subsection{Solutions with anomaly free $A_1^\\prime$ with $u + v = 0 \\\n \\ (u,v \\ne 0)$ \\label{sec:u-=-v}}\nIn this case, we can repeat the analysis of the previous subsection, \nbut with the general constraints. Note however,\nthat since $u+v = 0$, $h_u = -z$ and $h_d = +z$.\n\nThen we are left with the result that \n\\begin{eqnarray}\n \\label{eq:12}\n A_1^\\prime = \\frac{1}{3} \\left[\n 6 u^2 + 6 x u + 2 y u\n \\right] -\n \\sum_{i=1}^3 \\left( q_i^{\\prime\\;2} - 2 u_i^{\\prime\\;2} + \nd_i^{\\prime\\;2} - l_i^{\\prime\\;2} + e_i^{\\prime\\;2} \\right).\n\\end{eqnarray}\nNote that the family independent part will vanish if \n\\begin{equation}\n\\label{eq:13}\nu = -v = -\\left( x + \\frac{y}{3} \\right).\n\\end{equation}\n\nHaving done this, we may substitute Eq.~(\\ref{eq:13}) into \nEqs.~(\\ref{eq:sumqi}-~\\ref{eq:hd}) Then we find that:\n\\begin{eqnarray}\n \\label{eq:14}\n \\sum_{i=1}^3 q_i &=& -\\frac{y}{3},\\quad \\quad \\\n \\sum_{i=1}^3 u_i \\ =\\ - ( x + \\frac{2y}{3} ), \\nonumber\\\\\n \\sum_{i=1}^3 d_i &=& x + \\frac{4y}{3},\\quad\n \\sum_{i=1}^3 l_i \\ \\ =\\ y,\\\\\n \\sum_{i=1}^3 e_i &=& x.\n\\end{eqnarray}\n\n\\subsubsection{Yukawa textures for a sample solution \\label{sec:yukawa-textures-uv-nonzero}}\nAt this point, we note that there will be a large number of 
solutions. \nHowever, one class of solutions that will easily be satisfied\nwill be:\n\\begin{equation}\n \\label{eq:19}\n q_i = -\\frac{l_i}{3} \\;,\\; u_i = - ( \\frac{2 l_i}{3} + e_i ) \\; , \\; \nd_i = \\frac{4 l_i}{3} + e_i.\n\\end{equation}\nThe same equation will hold for the primed charges:\n\\begin{equation}\n \\label{eq:20}\n q_i^\\prime = -\\frac{l_i^\\prime}{3} \\;,\\; u_i^\\prime = - ( \\frac{2 \nl_i^\\prime}{3} + e_i^\\prime ) \\; , \\; \n d_i^\\prime = \\frac{4 l_i^\\prime}{3} + e_i^\\prime.\n\\end{equation}\n\nWe can now put Eq.~(\\ref{eq:20}) into the anomaly, Eq.~(\\ref{eq:12}). \nIn this case we find that:\n\\begin{eqnarray}\n \\nonumber\n A_1^\\prime &=& \\frac{1}{3} \\left[ x^2 ( 6 - 6 ) + \\frac{2}{3} y^2( 1 \n- 1 ) + xy ( 4- 2 - 2) \\right] \\\\\n && - \\sum_{i=1}^3 \\left( l_i^{\\prime\\;2} \\frac{1}{9} ( -1 + 8 - 16 + \n9 ) + e_i^{\\prime\\;2}( 2 - 1 -1 ) \\right)\n = 0.\n \\label{eq:21}\\end{eqnarray}\nSo we see that for this particular relation of leptonic and quark \ncharges, we are automatically anomaly-free.\n\nAgain, we see that, just as for the $u = v = 0$ case, we can specify \neverything by the leptonic charges $l_i$ and $e_i$.\nHowever, in this case we will get three different textures. 
\nSpecifically, we will get:\n\\begin{eqnarray}\n \\label{eq:yu-umvn0}\n Y^u &\\approx& \n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|l_1 + e_1 + h_u|} & \n \\epsilon^{|\\frac{1}{3}(l_2 + 2l_1)+e_1+ h_u|} &\n \\epsilon^{|\\frac{1}{3}(l_3 + 2l_1)+e_1+ h_u|} \\\\\n \\epsilon^{|\\frac{1}{3}(l_1+ 2l_2)+e_2 + h_u|} &\n \\epsilon^{|l_2+e_2 + h_u|} &\n \\epsilon^{|\\frac{1}{3}(l_3+2l_2)+e_2 + h_u|} \\\\\n \\epsilon^{|\\frac{1}{3}(l_1+2l_3)+e_3 + h_u|} &\n \\epsilon^{|\\frac{1}{3}(l_2+2l_3)+e_3 + h_u|} &\n \\epsilon^{|l_3 + e_3 + h_u|}\n \\end{array} \n \\right] \\\\\n \\label{eq:yd-umvn0}\n Y^d &\\approx& \n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|l_1+e_1 - h_u|} &\n \\epsilon^{|\\frac{1}{3}(-l_1+4l_2)+e_2 - h_u|} &\n \\epsilon^{|\\frac{1}{3}(-l_1+4l_3)+e_3 - h_u|} \\\\\n \\epsilon^{|\\frac{1}{3}(-l_2+4l_1)+e_1 - h_u|} &\n \\epsilon^{|l_2+e_2 - h_u|} &\n \\epsilon^{|\\frac{1}{3}(-l_2+4l_3)+e_3 - h_u|} \\\\\n \\epsilon^{|\\frac{1}{3}(-l_3+4l_1)+e_1 - h_u|} &\n \\epsilon^{|\\frac{1}{3}(-l_3+4l_2)+e_2 - h_u|} &\n \\epsilon^{|l_3+e_3 - h_u|}\n \\end{array} \n \\right] \\\\\n\\label{eq:ye-umvn0}\n Y^e &\\approx& \n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|l_1+e_1 - h_u|} & \n \\epsilon^{|l_1+e_2 - h_u|} &\n \\epsilon^{|l_1+e_3 - h_u|} \\\\\n \\epsilon^{|l_2+e_1 - h_u|} &\n \\epsilon^{|l_2+e_2 - h_u|} &\n \\epsilon^{|l_2+e_3 - h_u|} \\\\\n \\epsilon^{|l_3+e_1 - h_u|} &\n \\epsilon^{|l_3+e_2 - h_u|} &\n \\epsilon^{|l_3+e_3 - h_u|}\n \\end{array} \n \\right]\n\\end{eqnarray}\nWe note that this is a rather predictive scheme: \nthe anomaly conditions constrain the diagonal elements of\nthe down and electron Yukawa matrices to be of the same order. 
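The automatic vanishing of $A_1^\\prime$ exhibited in \\eq{eq:21} can also be spot-checked numerically; the sketch below (random charges, ours, not from the paper) verifies that the relations of \\eq{eq:20} cancel the family-dependent sum term by term.

```python
# Check Eq. (21): with q' = -l'/3, u' = -(2l'/3 + e'), d' = 4l'/3 + e',
# the family-dependent part of A_1' cancels for arbitrary l'_i, e'_i.
import random

for _ in range(5):
    lp = [random.uniform(-3, 3) for _ in range(3)]
    ep = [random.uniform(-3, 3) for _ in range(3)]
    qp = [-l / 3 for l in lp]
    up = [-(2 * l / 3 + e) for l, e in zip(lp, ep)]
    dp = [4 * l / 3 + e for l, e in zip(lp, ep)]
    A1p_FD = -sum(q * q - 2 * u * u + d * d - l * l + e * e
                  for q, u, d, l, e in zip(qp, up, dp, lp, ep))
    assert abs(A1p_FD) < 1e-9
```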
Also, we require (at the very least)\n$l_3 + e_3 +h_u= 0$ to get a correct top quark mass.\n\n\n\\subsection{Anomaly free $A_1^\\prime$ with $u + v \\neq 0$ solutions\\label{sec:yukawa-textures-upv-notzero}}\n\nIn this case we cannot decompose the expression of $A_1^\\prime$ into flavour independent and flavour dependent parts, but we can use, for example, the relation $\\left(\\sum f_i\\right)^2=\\sum f_i^2+2(f_1(f_2+f_3)+f_2f_3)$ to write\n\\begin{eqnarray}\nA_1^\\prime=-2(4u^2+u(v+3x+z)+v(z-y))-\\!2\\!\\!\\!\\!\\!\\!\\!\\sum_{f=u,d,l,e,q}\\!\\!\\!\\!\\!\\! g_f (f_1(f_2+f_3)+f_2f_3),\n\\end{eqnarray}\nwhere $g_f=1,-2,1,-1,1$ respectively for $f=q,u,d,l,e$. However, it is difficult to proceed from here to find an ansatz which cancels the $A_1^\\prime$ anomaly. Instead we can generalize the kind of relations which in the limit $u=v=0$ would give the $SU(5)$ cases or the Pati-Salam cases.\n\\subsubsection{An extended $SU(5)$ case} \n\\label{sec:genrzsu5like}\nHere we consider a non-GUT case, obtained by generalizing the $SU(5)$ relation between the charges. In the $SU(5)$ case,\nwe had $q_i = u_i = e_i$ and $d_i = l_i$. Suppose that instead we have the linear relations\n\\begin{eqnarray}\n\\label{eq:chargrelgensu5like}\nq_i=u_i+\\alpha=e_i+\\gamma,\\quad d_i=l_i+\\beta.\n\\end{eqnarray}\nFrom the parameterization of Eqs.~(\\ref{eq:A3}-\\ref{eq:A1p}), we see that \nin the limit $u=v=0$ we recover the $SU(5)$ case. 
Consistency with the cancellation of anomalies then requires\n\\begin{eqnarray}\nq_i=u_i-\\frac{u}{3}=e_i+\\frac{u}{3},\\quad d_i=l_i+\\frac{v}{3}.\n\\label{eq:1}\n\\end{eqnarray}\nIn the expression of the $A_1^\\prime$ anomaly, as given in \\eq{eq:A1p}, the sums of squared charges cancel and we can write it just in terms of sums of charges, which we have parameterized in terms of $u,v,x,y$,\n\\begin{eqnarray}\nA_1^\\prime=-10 \\frac{u^2}{3}-\\frac{2}{3}v^2+2u(x+v)+2y\\frac{v}{3}-2z(u+v)=0.\n\\end{eqnarray}\nThus we need to satisfy this equation in order to have anomaly free solutions. Requiring an $O(1)$ top coupling, we have\n\\begin{eqnarray}\n\\label{eq:charggensu5like}\nh_u&=&-z=-2e_3-u,\\nonumber\\\\\nh_d&=&2u+v+2e_3,\\nonumber\\\\\n{\\mathcal{C}}(Y^u_{ij})&=&|e_i+e_j-2e_3|,\\nonumber\\\\\n{\\mathcal{C}}(Y^d_{ij})&=&|e_i+l_j+2e_3+\\frac{7u}{3}+\\frac{4v}{3}|,\\nonumber\\\\\n{\\mathcal{C}}(Y^e_{ij})&=&|l_i+e_j+2e_3+2u+v|,\n\\end{eqnarray}\nwhere ${\\mathcal{C}}(Y^u_{ij})$ denotes the power of $\\epsilon$ for the $(i,j)$ element of the corresponding Yukawa matrix. Note that although we did not begin with an {\\it a priori} condition of having $Y^u$ symmetric, the requirement of the $O(1)$ top coupling cancels the parameter $u$ in all the entries of $Y^u$ and so we end up with a symmetric matrix. \n\\subsubsection{An extended Pati-Salam case}\n\\label{sec:pati-salam-like-case}\nFollowing the extended $SU(5)$ case, we look for solutions which in the\n$u=v=0$ limit reproduce the Pati-Salam case, so we should have the\nrelations\n\\begin{eqnarray}\n\\label{eq:PSgenrel}\nq_i=l_i+\\alpha,\\quad u_i=d_i+\\beta.\n\\end{eqnarray}\nAlso $e_i$ and $n_i$ need to be related to $u_i$ by a constant, as in \\eq{eq:PSgenrel}. 
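The cancellation of $u$ in the entries of $Y^u$ noted above is easy to verify explicitly; in the sketch below the sample $e_i$ charges are hypothetical, chosen only for illustration.

```python
# With q_i = e_i + u/3, u_i = e_i + 2u/3 and h_u = -2*e_3 - u, the parameter
# u drops out of each exponent |q_i + u_j + h_u|, leaving the symmetric
# texture |e_i + e_j - 2*e_3|.
import random

e = [4.0, 1.0, 0.0]                  # hypothetical charges
for _ in range(5):
    u = random.uniform(-3, 3)
    q = [ei + u / 3 for ei in e]
    uc = [ei + 2 * u / 3 for ei in e]
    hu = -2 * e[2] - u
    C = [[abs(q[i] + uc[j] + hu) for j in range(3)] for i in range(3)]
    expect = [[abs(e[i] + e[j] - 2 * e[2]) for j in range(3)] for i in range(3)]
    assert all(abs(C[i][j] - expect[i][j]) < 1e-12
               for i in range(3) for j in range(3))
    assert all(abs(C[i][j] - C[j][i]) < 1e-12
               for i in range(3) for j in range(3))
```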
In this case, in order to satisfy the Green-Schwarz anomaly conditions, we need\n\\begin{eqnarray}\nq_i=l_i+\\frac{u+(x-y)}{3},\\quad u_i=e_i+\\frac{2u}{3},\\quad d_i=e_i+\\frac{v+(y-x)}{3}.\n\\label{eq:8}\n\\end{eqnarray}\nThus the expression for the $A_1^\\prime$ anomaly is\n\\begin{eqnarray}\nA_1^\\prime&=&-\\frac{2}{9}\\left[8u^2+4v^2+u(9v+11x-2y)+2(x-y)^2 -v(2x+y)\\right]\\nonumber\\\\\n&&-2z(u+v),\n\\end{eqnarray}\nand finally, requiring an $O(1)$ top Yukawa coupling, we have\n\\begin{eqnarray}\nh_u&=&-z=-(l_3+e_3+u+\\frac{x-y}{3}),\\nonumber\\\\\nh_d&=&l_3+e_3+2u+v+\\frac{x-y}{3},\\nonumber\\\\\n{\\mathcal{C}}(Y^u_{ij})&=&|l_i-l_3+e_j-e_3|,\\nonumber\\\\\n{\\mathcal{C}}(Y^d_{ij})&=&|l_i+e_j+l_3+e_3+\\frac{7u+4v+(x-y)}{3}|,\\nonumber\\\\\n{\\mathcal{C}}(Y^e_{ij})&=&|l_i+e_j+l_3+e_3+2u+v+\\frac{x-y}{3}|.\n\\end{eqnarray}\n\n\n\\section{A useful phenomenological parameterization}\n\\label{sec:new-paramaterisation}\nSo far we have discussed the anomaly cancellation conditions in $U(1)$ family\nsymmetry models, and some of the possible solutions to these\nconditions, including some new solutions not previously\ndiscussed in the literature. 
It turns out, however, that the \nanomaly free charges themselves do not provide the most convenient\nparameters for discussing the phenomenological constraints on the\nYukawa matrices arising from the quark and lepton spectrum.\nIt is more convenient to introduce a \nnew parameterization for the Yukawa matrices as follows:\n\\begin{equation}\n \\label{eq:6}\n Y^f \\approx \\left(\n \\begin{array}{ccc}\n \\epsilon^{|s'_f + r'_f + k_f|} & \\epsilon^{|s'_f + r_f + k_f|} & \\epsilon^{|s'_f + k_f|} \\\\\n \\epsilon^{|s_f + r'_f + k_f|} & \\epsilon^{|s_f + r_f + k_f|} & \\epsilon^{|s_f + k_f|} \\\\\n \\epsilon^{| r'_f + k_f|} & \\epsilon^{| r_f + k_f|} & \\epsilon^{| k_f|}\n \\end{array}\n \\right)\n\\end{equation}\nwhere $f=u,d,e,\\nu$, and we have introduced the\nparameters $r_f, r'_f, s_f, s'_f, k_f$ which are defined \nin terms of the charges in Table 1 as:\n\\begin{eqnarray}\n \\nonumber\n r_f = f_2 - f_3 & r'_f = f_1 - f_3 & k_u = q_3 + u_3 + h_u \\\\\n \\nonumber\n s_{u,d} = q_2 - q_3 & s'_{u,d} = q_1 - q_3 & k_d = q_3 + d_3 + h_d \\\\\n \\nonumber\n s_{e,\\nu} = l_2 - l_3 & s'_{e,\\nu} = l_1 - l_3 & k_e = l_3 + e_3 + h_d \\\\\n \\label{eq:gyukpar}\n & & k_\\nu = l_3 + n_3 + h_u \n\\end{eqnarray}\nIn order to get an acceptable top quark mass, we require that $k_u = 0$. \nNote that the parametrization above is \ncompletely general: no information is lost relative to the form of\nEq.~(\\ref{eq:upsymcaspar}), and thus far we have not imposed any\nconstraints on the charges arising from either anomaly cancellation\nor from GUTs. 
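As a minimal illustration (the helper names are ours, not from the paper), the map from charges to the parameters of Eq.~(\\ref{eq:gyukpar}), and from these to the texture exponents of Eq.~(\\ref{eq:6}), can be coded directly:

```python
# Map charges to (r, r', s, s', k) and build the matrix of epsilon exponents.
def yukawa_params(f, q, h):
    """f: singlet charges [f1, f2, f3]; q: doublet charges; h: Higgs charge."""
    r, rp = f[1] - f[2], f[0] - f[2]
    s, sp = q[1] - q[2], q[0] - q[2]
    k = q[2] + f[2] + h
    return r, rp, s, sp, k

def exponents(r, rp, s, sp, k):
    row, col = [sp, s, 0], [rp, r, 0]    # rows carry s-type, columns r-type
    return [[abs(row[i] + col[j] + k) for j in range(3)] for i in range(3)]

r, rp, s, sp, k = yukawa_params([3, 2, 0], [4, 1, 0], 0)  # sample charges
M = exponents(r, rp, s, sp, k)
assert M[2][2] == abs(k)             # (3,3) entry is |k_f|
assert M[0][0] == abs(sp + rp + k)   # (1,1) entry is |s' + r' + k|
```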
We now consider the simplifications \nwhich arise in the new parametrization \nwhen the charges are constrained by considerations of anomaly cancellation and \nGUTs, as discussed in the previous section.\n\n\\subsubsection*{Simplification in $SU(5)$ type case}\n\nConsider the case where the family charges are consistent with the representations in an $SU(5)$ GUT, $d_i = l_i$, and $q_i = u_i = e_i$:\n\\begin{eqnarray}\n \\nonumber\n k_e = k_d\\ & s_{u,d} = r_{u,e} & s'_{u,d} = r'_{u,e} \\\\\n \\label{eq:11}\n s_{e,\\nu} = r_d & s'_{e,\\nu} = r'_d \n\\end{eqnarray}\nIn this case, all of the parameters can be expressed purely\nin terms of the lepton charges:\n\\begin{eqnarray}\n \\nonumber\n s_{u,d}=r_{u,e} = e_2 - e_3 & s'_{u,d} = r'_{u,e} = e_1 - e_3 \\\\\n s_{e, \\nu} = r_d = l_2 - l_3 & s'_{e,\\nu} = r'_{d} = l_1 - l_3\n\\label{eq:24} \n\\end{eqnarray}\nNote that this leads directly to the fact that $Y^e \\approx (Y^d)^T$. The equality is broken by the arbitrary $O(1)$ coefficients.\nAs discussed, the $SU(5)$ charge conditions are sufficient to\nguarantee anomaly cancellation for the case $u=v=0$.\n\n\\subsubsection*{Simplification in the extended $SU(5)$ case}\n\nIn the case $u+v\\neq 0$, anomalies can again be cancelled by assuming\nthe charge conditions in Eq.~(\\ref{eq:chargrelgensu5like}).\nIf we take Eq.~(\\ref{eq:chargrelgensu5like}), we can again simplify Eq.~(\\ref{eq:gyukpar}). 
In this case we find:\n\\begin{eqnarray}\n \\nonumber\n s_{u,d} = r_{u,e} & s'_{u,d} = r'_{u,e} \\\\\n \\label{eq:16}\n s_{e,\\nu} = r_d & s'_{e,\\nu} = r'_d\n\\end{eqnarray}\n\nIn this case we have that the texture of $Y^e$ can be attained from $Y^d$ by replacing $k_d$ with $k_e$ and then\ntransposing.\n\n\\subsubsection*{Simplification in the Pati-Salam case}\n\nIn the case of having charge relations consistent with a Pati-Salam theory,\n$q_i = l_i$ and $u_i = d_i = e_i = n_i$, we can simplify:\n\\begin{eqnarray}\n \\nonumber\n k_e = k_d & s_{u,d} = s_{e,\\nu} & s'_{u,d} = s'_{e,\\nu} \\\\\n \\label{eq:15}\n k_u = k_\\nu & r_u = r_d = r_e = r_\\nu & r'_u = r'_d = r'_e = r'_\\nu \n\\end{eqnarray}\n\n\n\n\\section{Quark masses and mixings in $SU(5)$ \\label{sec:su5q}}\nIn this section we shall provide some constraints on the\nphenomenological parameters introduced in the last section,\narising from the quark masses and mixings,\nassuming the simplification in the $SU(5)$ type case mentioned above.\nIn $SU(5)$ Eqs.~(\\ref{eq:6}),(\\ref{eq:11}) imply the quark Yukawa\nmatrices are explicitly of the form:\n\\begin{eqnarray}\n\\label{eq:su5matparam}\nY^u\\approx \n\\left(\n\\begin{array}{ccc}\n\\varepsilon^{|2s'|}&\\varepsilon^{|s'+s|}&\\varepsilon^{|s'|}\\\\\n\\varepsilon^{|s'+s|}&\\varepsilon^{|2s|}&\\varepsilon^{|s|}\\\\\n\\varepsilon^{|s'|}&\\varepsilon^{|s|}&1\n\\end{array}\n\\right),\\ \\ \\ \\ \nY^d\\approx \n\\left(\n\\begin{array}{ccc}\n\\varepsilon^{|s'+r'_{d}+k_d|}&\\varepsilon^{|s'+r_{d}+k_d|}&\\varepsilon^{|s'+k_d|}\\\\\n\\varepsilon^{|s+r'_{d}+k_d|}&\\varepsilon^{|s+r_{d}+k_d|}&\\varepsilon^{|s+k_d|}\\\\\n\\varepsilon^{|r'_{d}+k_d|}&\\varepsilon^{|r_{d}+k_d|}&\\varepsilon^{|k_d|}\n\\end{array}\n\\right).\n\\end{eqnarray}\nwhere we have written $s=s_{u,d}=r_{u,e}$, $s' = s'_{u,d}=r'_{u,e}$.\n\\footnote{Note that the extended $SU(5)$ anomaly free solutions examined\nin section \\ref{sec:genrzsu5like} leave the parameters \n$s,s',r_d,r'_d,k_d$ invariant, as 
is clear by comparing\nEqs.\\ref{eq:11} and \\ref{eq:16}.\nHence the results in this section for the quark\nsector apply not only to the $SU(5)$ type case\nbut also to the extended $SU(5)$ anomaly free cases.}\nNote that we are assuming a single expansion parameter $\\varepsilon$, \nand are suppressing $O(1)$ coefficients. Clebsch factors are also not\nconsidered, and only leading order operators are discussed.\n\nIn order to determine the possible solutions for $s,\\ s',\\ r_d, \\\nr'_d$ and $k_d$ which successfully reproduce quark \nmasses and mixings, one can numerically diagonalize the Yukawa matrices and\nobtain the CKM matrix. However, in order to understand the behaviour\nof this structure it is quite useful to use the technique of\ndiagonalization by blocks in the $(2,3)$, $(1,3)$ and $(1,2)$ sectors\n\\footnote{This only works if there is an appropriate hierarchy among the elements.}. The results are presented in the next subsections.\n\n\\subsection{Quark Masses}\n\n\\noindent Barring accidental cancellations, the down quark Yukawa\nmatrix $Y^d$ may be diagonalized, leading to the following \neigenvalues:\n\\begin{eqnarray}\n\\label{eq:yukeigen}\ny_1\\!\\!\\!\\!&\\approx&\\!\\!\\! a_{11}\\varepsilon^{|s'+r'+k|}-\n\\frac{(a_{31}\\varepsilon^{|r'+k|}+a_{23}a_{21}\\varepsilon^{|s+k|+|s+r'+k|-|k|}e^{2i(\\beta^L_2-\\beta^L_1)})}{c^{R}_{23}(\\varepsilon^{|k|}+a^{2}_{32}\\varepsilon^{2|r+k|-|k|}e^{-2i(\\beta^R_2-\\beta^R_1)} )}\\times\\nonumber\\\\ \n&&\\times (a_{13}\\varepsilon^{|s'+k|}\\!+a_{23}a_{12}\\varepsilon^{|r+k|+|s'+r+k|-|k|}e^{-2i(\\beta^R_2-\\beta^R_1)})+\\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\! 
\\frac{-(a_{12}\\varepsilon^{|s'+r+k|}\\!-\\!a_{32}a_{13}\\varepsilon^{|r+k|+|s'+k|-|k|})(a_{21}\\varepsilon^{|s+r'+k|}\\!-\\!a_{23}a_{31}\\varepsilon^{|s+k|+|r'+k|-|k|} )}{(a_{22}\\varepsilon^{|s+r+k|}-a_{23}a_{32}\\varepsilon^{|s+k|+|r+k|-|k|})e^{-i(\\beta^L_3-\\beta^R_3)}},\\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\ny_2\\!\\!\\!\\!&\\approx&\\!\\!\\!c^R_{23}\\left(a_{22} \\varepsilon^{|s+r+k|} -a_{23}a_{32}\\varepsilon^{|r+k|+|s+k|-|k|}\\right)e^{2i(\\beta^L_2-\\beta^R_2)},\\nonumber\\\\\ny_3\\!\\!\\!\\!&\\approx&\\!\\!\\!c^{R}_{23}\\left(\\varepsilon^{|k|} +a^{2}_{32}\\varepsilon^{2|r+k|-|k|}e^{2i(\\beta^R_1-\\beta^R_2)} \\right)e^{i(\\beta^L_1-\\beta^R_1)},\n\\end{eqnarray}\nwhere we have suppressed the index $d$ in order to simplify the\nnotation and re-scaled all the (complex) coefficients by $1\/a_{33}$,\nso that instead of having $a_{33}$ we have 1. \nNote that the down quark masses are given by:\n$m^d_i=y^d_i v_d\/\\sqrt{2}$.\nAnalogous results also apply to the up quark sector,\nwith the replacements\n$r\\rightarrow s$, $r'\\rightarrow s'$, $k\\rightarrow 0$. \nThe phases $\\beta^L_i$\ncorrespond to the diagonalization matrices of the Yukawa matrices,\nwhose notation is given in Appendix (\\ref{ap:diagmat}).\n \nIt is important to remark that in the case of positive charges all the\nelements of the first row of the Yukawa matrix contribute at the same\norder, $s'+r'+k$, to the lightest eigenvalue, so in\nthese cases it is not possible to have the Gatto-Sartori-Tonin (GST)\nrelation. However, when $s$ and $s'$ (and analogously\n$r$ and $r'$) have different signs, as in the example of\n\\eq{eq:textibross}, we can have a cancellation in powers of $\\varepsilon$ in\nthe contribution to $y_1$ coming from the diagonalization in the\n$(1,2)$ sector, which is the third term in the expression for $y_1$ in\n\\eq{eq:yukeigen}. 
On the other hand we can have an enhancement in the\npower of $\\varepsilon$ of the contributions from the $(1,1)$ entry and the\nrotation in the $(1,3)$ sector, which correspond to the first and\nsecond term of $y_1$, respectively, in \\eq{eq:yukeigen}. This, together\nwith the condition ${\\mathcal{C}}(Y_{21})={\\mathcal{C}}(Y_{12})$, constitutes\nthe requirements to achieve the GST relation. We will present examples\nsatisfying and not satisfying the GST relation.\n\n\nWe remark here that the constraint from the bottom mass is\n\\begin{eqnarray}\n\\label{eq:tanb}\nm_b \\tan\\beta=\\varepsilon^{|k_d|} m_t,\\qquad k_d=q_3+d_3+h_d,\n\\end{eqnarray}\nsince $m_t=O(\\langle H_u\\rangle)$ and $\\tan\\beta=\\langle H_u\\rangle\/\\langle H_d\\rangle$. Thus, in terms of charges, for $u=v=0$ we have $h_u=-(q_3+u_3)$ and $h_d=q_3+u_3$, so that $k_d=2q_3+d_3+u_3$.\n\n\\subsection{Quark Mixings}\n\nWe can also obtain the mixing angles in this approximation and compare\nthem to the required experimental values (see Appendix \\ref{ap:compinf}).\nThe mixing angles in the down sector, again dropping flavour indices,\nare as follows:\n\\begin{eqnarray}\nt^L_{23}&=&e^{i(\\beta^L_2-\\beta^L_1)}a_{23}\\varepsilon^{|s+k|-|k|}+a_{23}a_{22}\\varepsilon^{|s+r+k|+|s+k|-2|k|}e^{i\\xi_L}\\nonumber\\\\\nt^R_{23}&=&e^{i(\\beta^R_2-\\beta^R_1)}a_{32}\\varepsilon^{|r+k|-|k|}+a_{23}a_{22}\\varepsilon^{|s+r+k|+|s+k|-2|k|}e^{i\\xi_R}\\nonumber\n\\end{eqnarray}\n\\begin{eqnarray}\nt^L_{13}&=&\\frac{a_{13}\\varepsilon^{|s'+k|}+a_{32}a_{12}\\varepsilon^{|r+k|+|s'+r+k|-|k|}e^{-i2(\\beta^R_2-\\beta^R_1)}}{\\left(\\varepsilon^{|k|}+a^2_{32}\\varepsilon^{2|r+k|-|k|}e^{2i(\\beta^R_1-\\beta^R_2)}\n\\right) e^{i\\beta^L_1}}\\nonumber\\\\\nt^R_{13}&=&\\frac{a_{31}\\varepsilon^{|r'+k|}+a_{23}a_{21}\\varepsilon^{|s+k|+|s+r'+k|-|k|}e^{2i(\\beta^L_2-\\beta^L_1)}}{\\left(\\varepsilon^{|k|}+a^2_{32}\\varepsilon^{2|r+k|-|k|}e^{2i(\\beta^R_1-\\beta^R_2)}\n\\right) 
e^{-i\\beta^R_1}}\\sqrt{1+|a^2_{32}|\\varepsilon^{2|r+k|-2|k|}}\\nonumber\\\\\nt^L_{12}&=&\\frac{\\left(a_{12}\\varepsilon^{|s'+r+k|}-a_{32}a_{13}\\varepsilon^{|r+k|+|s'+k|-|k|}\\right)e^{-i(\\beta^R_3+\\beta^L_2)}}{\\left(\na_{22}\\varepsilon^{|s+r+k|}-a_{23}a_{32}\\varepsilon^{|s+k|+|r+k|-|k|} \\right)}\\nonumber\\\\\nt^R_{12}&=&\\frac{\\left(a_{21}\\varepsilon^{|s+r'+k|}-a_{23}a_{31}\\varepsilon^{|s+k|+|r'+k|-|k|}\\right)e^{i(\\beta^L_3+\\beta^R_2)}}{\\left(\na_{22}\\varepsilon^{|s+r+k|}-a_{23}a_{32}\\varepsilon^{|s+k|+|r+k|-|k|} \\right)}\\nonumber\\\\\n\\xi_L\\!\\!\\!&\\!=\\!&\\!\\!\\!-(\\beta^L_2-\\beta^L_1)-2(\\beta^R_2-\\beta^R_1),\\\n\\xi_R=-(\\beta^R_2-\\beta^R_1)-2(\\beta^L_2-\\beta^L_1)\\label{eq:mixsgeral}.\n\\end{eqnarray}\nAnalogous results also apply to the up quark sector, with the\nreplacements $r_d\\rightarrow s$, $r_d'\\rightarrow s'$, $k_d\\rightarrow 0$.\nNote that in the case of positive $s,s',r,r'$ and $k$, the angles\n$t^L_{12}$ and $t^L_{23}$, of the left sector do not depend on\n$r_d,r'_d$, \nso they are equal, at first approximation, for the up and down\nsectors. Having the tangent of the angles expressed in terms of the\nYukawa elements we can see directly their contributions to the CKM\nelements ($V_{\\mathrm{CKM}}=L^uL^{d\\dagger}$ in the notation of\nAppendix (\\ref{ap:diagmat}))\n\\begin{eqnarray}\n\\frac{|V_{ub}|}{|V_{cb}|}&=&\\frac{|s^{u}_{12}s^Q_{23}-s^Q_{13}e^{i(\\Phi_1-\\Phi_2)}|}{|s^Q_{23}|}\\approx 0.09\\sim (\\lambda^2,\\lambda) \\nonumber\\\\\n\\frac{|V_{td}|}{|V_{ts}|}&=&\\frac{|s^{d}_{12}s^Q_{23}-s^Q_{13}e^{i(\\Phi_2)}|}{|s^Q_{23}|}\\sim \\lambda\\nonumber\\\\\n|V_{us}|&=&|s^d_{12}-s^u_{12}e^{i\\Phi_1}|=\\lambda \\approx 0.224\\nonumber\\\\\n{\\rm{Im}}\\{J\\}&=&s^Q_{23}(s^Q_{23}s^d_{12}s^u_{12}\\sin(\\Phi_1)-s^Q_{13}(s^d_{12}\\sin(\\Phi_2))- s^u_{12}\\sin(\\Phi_2-\\Phi_1)),\n\\label{eq:Vsasyu1}\n\\end{eqnarray}\nwith $s^Q_{ij}=|s^d_{ij}-e^{i\\Phi_{X_{ij}}}s^u_{ij}|$. 
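The GST relation discussed in the previous subsection can be illustrated in a $2\\times 2$ toy texture (all numbers assumed, ours): a symmetric matrix with a vanishing $(1,1)$ entry gives $\\theta_{12}\\approx\\sqrt{m_1\/m_2}$.

```python
# GST in miniature: Y = [[0, a], [a, b]] with a << b gives a 1-2 mixing
# angle theta ~ sqrt(y1/y2), where y1, y2 are the eigenvalue magnitudes.
import math

a, b = 0.02, 0.5                      # hypothetical entries
tr, det = b, -a * a                   # trace and determinant of Y
y2 = (tr + math.sqrt(tr * tr - 4 * det)) / 2
y1 = abs(det) / y2
theta = 0.5 * math.atan2(2 * a, b)    # exact mixing angle of a symmetric 2x2
assert abs(theta - math.sqrt(y1 / y2)) / theta < 0.01
```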
The phases $\\Phi_1$, $\\Phi_2$ and $\\Phi_{X_{ij}}$ depend on the contributions that the mixing angles receive from the different elements of the Yukawa matrix, and have different expressions in terms of the phases of the Yukawa matrix in different cases. For example, when the elements $(1,2)$ and $(1,3)$ are of the same order and the right handed mixing angle in the $(2,3)$ sector is large, the\n$\\Phi_2$ phase will be\n\\begin{eqnarray}\n\\label{eq:phi2}\n\\Phi_2={\\rm{Arg}}\\left[\\frac{Y^d_{12}+Y^d_{13}t^R_{23}}{Y^d_{33}+Y^d_{23}t^R_{23}} \\right].\n\\end{eqnarray}\nAs we can see from the expressions in \\eq{eq:Vsasyu1} involving $\\Phi_1$, this phase can be associated with the $U$ sector. When all the diagonalization angles in this sector are small, this phase takes the form\n\\begin{eqnarray}\n\\label{eq:phi1}\n\\Phi_1=\\phi^u_{12}-\\phi^u_{22},\n\\end{eqnarray}\nwhere $\\phi^u_{12}$ and $\\phi^u_{22}$ are the phases of the $Y^u_{12}$ and $Y^u_{22}$ elements.\nFinally, the phases $\\Phi_{X_{ij}}$, which appear in $s^Q_{ij}$, can be associated either with the $U$ or with the $D$ sector.\n\\begin{table}[ht] \\centering%\n\\begin{center}\n\\begin{tabular}{|l l |c||l l| c|}\n\\hline\n\\!$U(1)$ relations\\!\\!& \\!Constraint &\\!Reason\\!\\!& $U(1)$ relations\\!\\!& \\!Constraint &\\!Reason\\!\\\\\n\\hline\n$\\varepsilon^{|s+k_d|-|k_d|}$ & $\\!\\sim \\lambda^2$ & $s^Q_{23}$ & $\\varepsilon^{|3q_3+d_3|}$&$\\!\\sim (1,\\lambda^3)$&$m_b$\\\\\n$\\varepsilon^{|s'+k_d|-|k_d|}$ & $\\!\\gtrsim \\lambda^3$ & $s^Q_{13}$ & $\\varepsilon^{|s+r_d+k_d|-|k_d|}$ & $\\!\\sim (\\lambda^2,\\ \\lambda^3) $ & $\\frac{m_s}{m_b}$\\\\\n$\\varepsilon^{|s'+r_d+k_d|-|s+r_d+k_d|}\\!$ & $\\!\\sim\\lambda$ & $s^{Q}_{12}$ & $\\varepsilon^{|s'+r'_d+k_d|-|k_d|}$ & $\\!\\sim (\\lambda^4,\\ \\lambda^5)$ & $\\frac{m_d}{m_s}$\\\\\n$$ & $$ & $$ & $\\varepsilon^{|2s+k_d|-|k_d|}$ & $\\!\\sim \\lambda^4 $ & $\\frac{m_c}{m_t}$\\\\\n$$ & $$ & $$ & $\\varepsilon^{|2s'+k_d|-|k_d|}$ & $\\! 
\\geq \\lambda^6$ & $\\frac{m_u}{m_c}$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{{\\small Constraints on the parameters $s,\\ s',\\ r_d, \\ r'_d$\nand $k_d$ from quark mixing angles and mass ratios. For the mixing\nangles we need to satisfy the conditions for up or down sector,\nwhere the analogous conditions for the up sector are obtained by\nmaking the replacements \n$r_d\\rightarrow s$, $r_d'\\rightarrow s'$, $k_d\\rightarrow 0$. They\ndo not need to be satisfied for both as long as for the sector in\nwhich they are not satisfied they do not give a bigger contribution\nthan the indicated power.} }\n\\label{table:phen1}\n\\end{table}\n\nWith the requirements of Table (\\ref{table:phen1}) and the values of quark masses in Appendix (\\ref{ap:compinf}), we can identify the viable solutions in the quark sector. \nOne solution which has been widely explored is the up-down symmetric case for which we have $x=y$ thus, $f_i=q_i=u_i=e_i=d_i=l_i$. In this case $h_u=-2e_3=-h_d$ so $k_u=0$, $k_d=k_l=4e_3$, but in this case we need two expansion parameters $\\varepsilon_u$ and $\\varepsilon_d$ to reproduce appropriate mass ratios and mixings, thus we have\n\\begin{eqnarray}\nY^f=\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|2s'+k_f|}_f&\\varepsilon^{|s+s'+k_f|}_f&\\varepsilon^{|s'+k_f|}_f\\\\\n\\varepsilon^{|s+s'+k_f|}_f&\\varepsilon^{|2s+k_f|}_f&\\varepsilon^{|s+k_f|}_f\\\\\n\\varepsilon^{|s'+k_f|}_f&\\varepsilon^{|s+k_f|}_f&\\varepsilon^{|k_f|}\n\\end{array}\n\\right].\n\\end{eqnarray} \nWe can think of fixing $s+s'$, and then check for which choice of $s$ we have appropriate phenomenological solutions. 
For example, if we take $s+s'=\\pm 3$ \nand $e_3=0$ ($k_f=0$, $\\forall f$) we have\n\\begin{eqnarray}\n\\label{eq:textibross}\nY^f=\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|6-2f_2|}_f&\\varepsilon^{|3|}_f&\\varepsilon^{|3-f_2|}_f\\\\\n\\varepsilon^{|3|}_f&\\varepsilon^{|2f_2|}_f&\\varepsilon^{|f_2|}_f\\\\\n\\varepsilon^{|3-f_2|}_f&\\varepsilon^{|f_2|}_f&1\n\\end{array}\n\\right].\n\\end{eqnarray} \nThe viable phenomenological fit for the case of quarks is for $f_2=-1$ and $f_1=4$ or $f_2=1$ and $f_1=-4$\n\\cite{Ibanez:1994ig}. These two solutions correspond to $x=y=3$ and $x=y=-3$,\nrespectively. \n\n\\section{Neutrino masses and mixings in SRHND\\label{sec:neuts}}\nIn this section we apply the requirement of obtaining acceptable\nneutrino masses and mixings, using a class of seesaw models in which \n$l_2 =l_3$. These form a subset of the class of seesaw models with single\nright-handed neutrino dominance (SRHND), also called sequential dominance\n\\cite{King:1998jw}. \nThis additional constraint $l_2 =l_3$ will henceforth be \napplied in obtaining phenomenological solutions in the\nlepton sector. 
\n\nApart from the obvious benefit of considering the neutrino sector, it\nwill turn out that the neutrino sector will constrain the absolute \nvalues of the charges\nunder the $U(1)$ family symmetry, (not the charge differences,) due to\nthe Majorana nature of neutrinos.\nThis is due to the relations between the\ncharges imposed by the relevant GUT constraints, or the extended GUT\nconstraints, eq.~(\\ref{eq:1}) for the extended $SU(5)$ solution of\nsection \\ref{sec:genrzsu5like} and eq.~(\\ref{eq:8}) for \nthe extended Pati-Salam solution of section \\ref{sec:pati-salam-like-case}.\nFor example the additional constraint $l_2 =l_3$ implies immediately\n\\begin{equation}\nr_d = s_{e, \\nu} = l_2 - l_3=0,\n\\label{l2l3}\n\\end{equation}\nin the $SU(5)$ type cases from Eq.\\ref{eq:24}.\n\nHere we would like to study the cases for which large mixing angles in\nthe atmospheric sector and the neutrino sector can be explained\nnaturally in terms of the parameters of the $U(1)$ class of symmetries\nthat we have constructed in the previous sections, under the framework\nof the type I see-saw mechanism together with the scenario of the\nsingle right handed neutrino dominance (SRHND). We refer the reader\nfor a review of this scenario to \\cite{King:1998jw}. Here we\nmake a brief summary of the results and apply them to the present\ncases. In the type I see-saw the mass matrix of the low energy\nneutrinos is given by $m_{LL}\\approx v^2_u Y^{\\nu} M^{-1}_R Y^{\\nu\nT}$, where $Y^{\\nu}$ is the Dirac matrix for neutrinos and $M_R$ is\nthe Majorana matrix for right-handed neutrinos. 
If we have three right-handed\nneutrinos, with masses $M_1$, $M_2$ and $M_3$, then in terms of $U(1)$ charges we have:\n\\begin{eqnarray}\n\\label{eq:Ynu}\nY^\\nu &=& \\left[\n \\begin{array}{ccc}\n \\epsilon^{|l_1+n_1+h_u|} & \\epsilon^{|l_1+n_2+h_u|} & \\epsilon^{|l_1+n_3+h_u|} \\\\\n \\epsilon^{|l_2+n_1+h_u|} & \\epsilon^{|l_2+n_2+h_u|} & \\epsilon^{|l_2+n_3+h_u|} \\\\\n \\epsilon^{|l_3+n_1+h_u|} & \\epsilon^{|l_3+n_2+h_u|} & \\epsilon^{|l_3+n_3+h_u|} \n \\end{array}\n\\right]\\\\\n\\label{eq:MR}\nM_{RR}&=& \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|2n_1+\\sigma|}&\\varepsilon^{|n_1+n_2+\\sigma|}&\\varepsilon^{|n_1+n_3+\\sigma|}\\\\\n\\varepsilon^{|n_1+n_2+\\sigma|}&\\varepsilon^{|2n_2+\\sigma|}&\\varepsilon^{|n_2+n_3+\\sigma|}\\\\\n\\varepsilon^{|n_1+n_3+\\sigma|}&\\varepsilon^{|n_2+n_3+\\sigma|}&\\varepsilon^{|2n_3+\\sigma|}\\\\\n\\end{array}\n\\right]\\langle\\Sigma\\rangle\n\\end{eqnarray}\nwhere the charges $n_i$ are the $U(1)$ charges of the right-handed neutrinos $\\nu_{Ri}$, and $\\sigma$ is the $U(1)$ charge of the field $\\Sigma$ giving Majorana masses to the right-handed neutrinos. These charges are not constrained by the anomaly cancellation conditions \\eq{eq:sumqi}-\\eq{eq:hd} of Section (\\ref{sec:anomconst}), at least in the $SU(5)$ case, which gives some freedom to find appropriate solutions with two large mixing angles and one small mixing angle for neutrinos. 
We expect $\\left<\\Sigma\\right>$ to be of order the scale at which the $U(1)$ symmetry is broken, for example at $M_P=M_{\\rm{Planck}}$, or some other fundamental scale, such as the Grand Unification scale, $M_G$, for the solutions with an underlying GUT theory.\n\nHere we restrict ourselves to the cases in which \\eq{eq:MR} can be considered as diagonal, $M_R\\approx \\rm{diag}\\{M_1,M_2,M_3 \\}$, for which we need in the $(2,3)$ block\n\\begin{eqnarray}\n\\label{eq:srhnd1}\n|n_3+n_2+\\sigma|&>& \\min\\{ |2n_3+\\sigma|,|2n_2+\\sigma|\\},\\nonumber\\\\\n2|n_3+n_2+\\sigma|&\\geq& |2n_3+\\sigma| +|2n_2+\\sigma|.\n\\end{eqnarray}\nThe conditions in the $(1,2)$ block are analogous to those in the $(2,3)$ block, and we also need\n\\begin{eqnarray}\n\\label{eq:srhnd2}\n|n_1+n_3+\\sigma|>\\max\\{|2n_2+\\sigma|, |2n_3+\\sigma|\\}.\n\\end{eqnarray}\nNow, there are two cases that we can consider here, which correspond to selecting which of the right-handed neutrinos will dominate, $M_1$ or $M_3$. For the latter case the SRHND conditions are\n\\begin{eqnarray}\n\\frac{|Y^\\nu_{i3}Y^\\nu_{j3}|}{|M_3|} \\gg \\frac{|Y^\\nu_{i2}Y^\\nu_{j2}|}{|M_2|}\\gg\n\\frac{|Y^{\\nu}_{i1} Y^{\\nu}_{j1}|}{|M_1|}; \\quad i,j=1,2,3. 
\n\\end{eqnarray}\nFor the case in which $M_1$ dominates we just have to interchange the indices $1$ and $3$ in the neutrino Yukawa terms.\n\nFor the case in which $M_3$ dominates, to first order approximation, we have the following expressions for the neutrino mixings \\cite{King:1998jw},\n\\begin{eqnarray}\nt^\\nu_{23}&=&\\frac{Y_{23}^{\\nu}}{Y^{\\nu}_{33}},\\label{eq:tan23}\\\\\nt^\\nu_{13} &=&\\frac{Y^\\nu_{13}} {\\sqrt{Y^{\\nu 2}_{33}+Y^{\\nu 2}_{23}}}+\n\\frac{M_3}{M_2}\\frac{Y^\\nu_{12}(s_{23}Y^{\\nu}_{22}+c_{23}Y^\\nu_{32})}{\\sqrt{Y^{\\nu 2}_{33}+Y^{\\nu 2}_{23}} },\\label{eq:tan13}\\\\\nt^\\nu_{12}&=&\\frac{Y^{\\nu}_{12}(Y^{\\nu 2}_{33}+Y^{\\nu 2}_{23})-Y^{\\nu}_{13}(Y^{\\nu}_{33}Y^{\\nu}_{32}-Y^{\\nu}_{22}Y^{\\nu}_{23} ) }\n{(Y^{\\nu}_{22}Y^{\\nu}_{33}-Y^{\\nu}_{32}Y^{\\nu}_{23}) \\sqrt{Y^{\\nu 2}_{33} +Y^{\\nu 2}_{23} +Y^{\\nu 2}_{13} } }\\approx\n\\frac{Y^\\nu_{12}}{c_{23}Y^\\nu_{22}-s_{23}Y^\\nu_{32}}.\\label{eq:tan12}\n\\end{eqnarray}\nIn terms of the Abelian charges the Yukawa elements are\n\\begin{eqnarray}\nY^\\nu_{ij}=\\varepsilon^{|l_i+n_j+h_u|}\\equiv \\varepsilon^{| l'_i+n_j|}, \n\\quad l'_i\\equiv l_i+h_u=l_i-2e_3,\n\\end{eqnarray}\nwhere we have defined primed lepton doublet charges\nwhich absorb the Higgs charge, as shown.\nWe can work here in terms of the primed charges; once they are fixed,\nwe can determine the original (unprimed) Abelian charges. The approximation in \\eq{eq:tan12}\ncorresponds to the case in which we have enough suppression of the\nsecond term in the expression for $t^\\nu_{12}$. In \\eq{eq:tan13} the\nsecond term can sometimes be neglected, depending on the ratio\n$M_3\/M_2$. 
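As an immediate example, \\eq{eq:tan23} shows why the constraint $l_2=l_3$ (and hence $l'_2=l'_3$) of Eq.~(\\ref{l2l3}) is welcome: in terms of the charges,\n\\begin{eqnarray}\nt^\\nu_{23}=\\frac{Y^\\nu_{23}}{Y^\\nu_{33}}\\sim\\varepsilon^{|l'_2+n_3|-|l'_3+n_3|}=\\varepsilon^0=O(1),\n\\end{eqnarray}\nso the atmospheric mixing angle is naturally large, up to the undetermined $O(1)$ coefficients multiplying each Yukawa entry.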
The heaviest low energy neutrino masses are given by\n\\begin{eqnarray}\nm_{\\nu_3}=\\frac{a^{\\nu 2}_3 \\varepsilon^{2|l'_2+n_3|}v^2 }{M_3},\\quad m_{\\nu_2}=\\frac{ a^{\\nu 2}_2 \\varepsilon^{2|l'_2+n_2|} v^2 }{M_2},\n\\end{eqnarray}\nwhere we have written $a^{\\nu 2}_3 \\varepsilon^{2|l'_2+n_3|}= Y^{\\nu 2}_{33}+ Y^{\\nu 2}_{23}$ and $a^{\\nu 2}_2 \\varepsilon^{2|l'_2+n_2|}=\\left( c_{23} Y^{\\nu}_{22} -s_{23}Y^{\\nu}_{32} \\right)^2$. Thus the ratio of the solar to atmospheric neutrino masses can be written as\n\\begin{eqnarray}\n\\label{ratmn}\n\\frac{m_{\\nu_2}}{m_{\\nu_3}}\\approx \\frac{M_3}{M_2} \\frac{c^2_{23}}{c^2_{12}}\\frac{\\left( Y^{\\nu}_{22} -Y^{\\nu}_{32} t^\\nu_{23}\\right)^2}{Y^{\\nu 2}_{33} + Y^{\\nu 2}_{23}}\\sim \\varepsilon^{p_2-p_3},\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\label{eq:p3}\np_{k}=2|l'_2+n_k|-|2n_k+\\sigma|,\\ {\\mathrm{for}}\\ k=2,\\ 3.\n\\end{eqnarray}\n\nNote that $p_k$ is then defined such that \n\\begin{equation}\n \\label{eq:30}\n m_{\\nu_k} \\approx \\frac{v^2}{\\left<\\Sigma\\right>} \\varepsilon^{p_k}.\n\\end{equation}\n\n\n\\section{$SU(5)$ solutions satisfying the GST relation}\n\\label{sec:su5-solut-satisfy-GST}\nIn this section we shall continue to focus on the \ncase of $SU(5)$, where the quark Yukawa matrices take\nthe form of Eq.~(\\ref{eq:su5matparam}),\nand where, motivated by large atmospheric neutrino mixing,\nwe shall assume $r_d=0$ from Eq.~(\\ref{l2l3}).\nThe purpose of this section is to show how the GST\nrelation can emerge from $SU(5)$, by imposing additional\nconstraints on the parameters.\n\\footnote{Note that results in Section \\ref{sec:su5-solut-satisfy-GST} and in \nSection \\ref{sec:su5-solutions-not-GST} apply to both $SU(5)$ type and extended $SU(5)$ \nmodels, as discussed above.}\n\n\n\n\\subsection{The quark sector}\nWe have already seen that the GST relation\ncan be achieved in the u sector, mainly by allowing the\nparameters $s$ and $s'$ to have different signs. 
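As a reminder, the GST relation referred to throughout is the successful prediction for the Cabibbo angle,\n\\begin{eqnarray}\n|V_{us}|\\simeq \\left|\\sqrt{\\frac{m_d}{m_s}}-e^{i\\phi}\\sqrt{\\frac{m_u}{m_c}}\\right|\\approx \\sqrt{\\frac{m_d}{m_s}}\\approx \\lambda,\n\\end{eqnarray}\nwhich requires the $(1,1)$ elements of the quark Yukawa matrices to be sufficiently suppressed, so that the lightest eigenvalues are dominated by the off-diagonal $(1,2)$ blocks.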
\nIn the down sector to satisfy GST we additionally require:\n\\begin{eqnarray}\n|k_d+r'_d+s|&=&|k_d+s'|\\nonumber\\\\\n|k_d+r'_d+s'|-|k_d|&>&|k_d+r'_d+s|+|k_d+s'|-|k_d+s|\\nonumber\\\\\n|r'_d+k_d|&>&|k_d|.\n\\end{eqnarray}\nThe first of these equations ensures the equality of the order of the\nelements $(1,2)$ and $(2,1)$ of the $Y^d$ matrix. The second equation\nensures that the element $(1,1)$ is suppressed enough with respect to\nthe contribution from the diagonalization of the $(1,2)$ block. This\nsecond condition is usually satisfied whenever\n$|k_d+r'_d+s'|>|k_d+r'_d+s|$. Finally the third condition\nensures a small right-handed mixing for d-quarks and a small\nleft-handed mixing for charged leptons. Now in order to satisfy the\nrelations\n\\begin{eqnarray}\ns^u_{12}=\\sqrt{\\frac{m_u}{m_c}}\\approx \\lambda^2,\\quad s^d_{12}=\\sqrt{\\frac{m_d}{m_s}}\\approx \\lambda,\n\\end{eqnarray}\nwe need a structure of matrices, in terms of just one expansion parameter $\\varepsilon=O(\\lambda)$, such as\n\\begin{eqnarray}\nY^u= \\left[\n\\begin{array}{ccc}\n... &\\varepsilon^6&...\\\\\n\\varepsilon^6& \\varepsilon^4 & \\varepsilon^2\\\\\n...&\\varepsilon^2 &1\n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n... &\\varepsilon^5& \\varepsilon^5\\\\\n\\varepsilon^5& \\varepsilon^4 & \\varepsilon^4\\\\\n...&\\varepsilon^2 &\\varepsilon^2\n\\end{array}\n\\right],\n\\end{eqnarray}\nfor which we have\n\\begin{eqnarray}\ns^u_{12}\\approx \\varepsilon^2,\\quad s^d_{12}\\approx \\varepsilon,\\quad s^d_{23}\\approx \\varepsilon^2,\\quad s^d_{13}\\approx \\varepsilon^3,\\nonumber\\\\\n\\frac{m_c}{m_t}\\approx \\varepsilon^4,\\quad \\frac{m_s}{m_b}\\approx \\varepsilon^2, \\quad \\frac{m_b}{m_t}\\approx \\varepsilon^2, ~~~~~\n\\end{eqnarray}\nin agreement with observed values for quark masses and mixings for $\\varepsilon=\\lambda$.\n\nNow we can proceed as in the example of \\eq{eq:textibross} where \n$s'+s$ is fixed to be $\\pm 3$. 
In this case we see that we can have plausible solutions in the up sector by allowing half integer solutions\n\\begin{eqnarray}\n\\label{eq:solsspgst}\n|s'+s|=13\/2,\\ 6,\\ 11\/2.\n\\end{eqnarray}\nWe will refer to these solutions as Solution 1, 2 and 3 respectively.\nNote that only the charge differences are constrained here, the actual charges are not.\n\n\\noindent {\\bf Solution 1}, $|s+s'|=13\/2$,\n\\begin{eqnarray}\n\\label{eq:tex1gst}\nY^u= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{35\/2}&\\varepsilon^{13\/2}&\\varepsilon^{35\/4}\\\\\n\\varepsilon^{13\/2}& \\varepsilon^{9\/2} & \\varepsilon^{9\/4}\\\\\n\\varepsilon^{35\/4}&\\varepsilon^{9\/4} &1\n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{69\/4}&\\varepsilon^{25\/4}& \\varepsilon^{25\/4}\\\\\n\\varepsilon^{25\/4}& \\varepsilon^{19\/4} & \\varepsilon^{19\/4}\\\\\n\\varepsilon^{17\/2}&\\varepsilon^{5\/2} &\\varepsilon^{5\/2}\n\\end{array}\n\\right],\n\\end{eqnarray}\nfor\n\\begin{eqnarray}\n&& r'_d=l_1-l_3=11,\\ s=-\\frac{9}{4},\\ s'=\\frac{35}{4},\\ k_d=-\\frac{5}{2},\n\\quad {\\rm or}\\nonumber\n\\\\ && \nr'_d=l_1-l_3=-11, \\ s=\\frac{9}{4},\\ s'=-\\frac{35}{4},\\ k_d=\\frac{5}{2}.\n\\label{eq:sol1gst}\n\\end{eqnarray}\n{\\bf Solution 2}, $|s'+s|=6$,\n\\begin{eqnarray}\n\\label{eq:tex2gst}\nY^u= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{16}&\\varepsilon^6&\\varepsilon^8\\\\\n\\varepsilon^6& \\varepsilon^4 & \\varepsilon^2\\\\\n\\varepsilon^8&\\varepsilon^2 &1\n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{31\/2}&\\varepsilon^{11\/2}& \\varepsilon^{11\/2}\\\\\n\\varepsilon^{11\/2}& \\varepsilon^{9\/2} & \\varepsilon^{9\/2}\\\\\n\\varepsilon^{15\/2}&\\varepsilon^{5\/2} &\\varepsilon^{5\/2}\n\\end{array}\n\\right],\n\\end{eqnarray}\nfor\n\\begin{eqnarray}\n&&r'_d=l_1-l_3=10, \\ s=-2,\\ s'=8,\\ k_d=-\\frac{5}{2},\\quad {\\rm or}\\nonumber\\\\\n&&r'_d=l_1-l_3=-10, \\ s=2,\\ s'=-8,\\ 
k_d=\\frac{5}{2}.\\label{eq:sol2gst}\n\\end{eqnarray}\n{\\bf Solution 3}, $|s+s'|=11\/2$,\n\\begin{eqnarray}\n\\label{tex3gst}\nY^u= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{29\/2}&\\varepsilon^{11\/2}&\\varepsilon^{29\/4}\\\\\n\\varepsilon^{11\/2}& \\varepsilon^{7\/2} & \\varepsilon^{7\/4}\\\\\n\\varepsilon^{29\/4}&\\varepsilon^{7\/4} &1\n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n\\varepsilon^{31\/2}&\\varepsilon^{21\/4}& \\varepsilon^{21\/4}\\\\\n\\varepsilon^{21\/4}& \\varepsilon^{15\/4} & \\varepsilon^{15\/4}\\\\\n\\varepsilon^{33\/4}&\\varepsilon^{2} &\\varepsilon^{2}\n\\end{array}\n\\right],\n\\end{eqnarray}\nfor\n\\begin{eqnarray}\n&&r'_d=l_1-l_3=41\/4, \\ s=-\\frac{7}{4},\\ s'=\\frac{29}{4},\\ k_d=-2,\n\\quad{\\rm or}\\nonumber\n\\\\ &&r'_d=l_1-l_3=-41\/4, \\ s=\\frac{7}{4},\\ s'=-\\frac{29}{4},\\ k_d=2. \n\\label{eq:sol3gst}\n\\end{eqnarray}\nAll the previous solutions \\eqs{eq:sol1gst}-\\eqs{eq:sol3gst} lead to\nsmall $\\tan\\beta$ ($O(1)$), due to the choice of $k_d$. To find\nsolutions such that $\\tan\\beta$ is $O(10)$ is more difficult, due to\nthe requirements in the up sector, but we have found the following\nsolution\n\\begin{eqnarray}\n&&r'_d=l_1-l_3=\\frac{19}{2}, \\ s=-2,\\ s'=\\frac{15}{2},\\ k_d=-\\frac{3}{2},\n \\quad {\\rm or}\\nonumber\n\\\\ \n&&\nr'_d=l_1-l_3=\\frac{-19}{2}, \\ s=2, \\ s'=-\\frac{15}{2},\\ k_d=\\frac{3}{2}\n\\label{eq:sol4gst}.\n\\end{eqnarray}\n\\label{sec:neutgst}\n\n\\subsection{The neutrino sector}\n\\label{sec:neutrino-sector-gst}\n\nNow we construct solutions for the lepton sector constrained by the\nrequirements from the quark sector in the previous subsection, \nwhere we assumed $r_d=l_2-l_3=0$, and determined\nthe charge differences $r'_d = l_1-l_2$ that agree with the GST\nrelation. 
Indeed it is convenient to label the solutions\nin the previous subsection by the value of $r'_d = l_1-l_2$.\nHere we find the charges $n_i$, $l_i$ and $\\sigma$ which satisfy\nthe conditions arising from the neutrino sector,\nEqs.~(\\ref{eq:tan23}-\\ref{eq:tan12}).\\footnote{The\ncondition $l_2 =l_3$ is a requirement of the class of see-saw models\nthat we are looking for, single right-handed neutrino dominance\n(SRHND). Note that here we can also have $l'_2=-l'_3$ which then\nforces $n_3=0$ for $l'_2 \\ne 0$, in which case the solutions will be\neven more restricted.} In order to satisfy \\eq{eq:tan12}, the most\nnatural solution to achieve $t^\\nu_{12}$ large is to have\n\\begin{eqnarray}\n|l'_1+n_2|=|l'_2+n_2|.\n\\end{eqnarray}\nThe simplest solution is to assume that $n_2 = 0$. \nSince $l'_1$ and $l'_2$ are related through $r'_d = l_1 - l_3 = l'_1 - l'_2$, the solutions to this equation are:\n\\begin{eqnarray}\nr'_d&=&0\\\\\nl'_1&=&\\frac{r'_d}{2}=-l'_2.\\label{eq:soll1l2}\n\\end{eqnarray}\n\nSince none of the solutions found in the previous subsection had\n$r'_d = 0$, we have to work with the second solution in\nEq.~(\\ref{eq:soll1l2}). However, we do not need to solve\nEq.~(\\ref{eq:tan12}) exactly, so we are going to perturb away from it,\nby keeping $n_2\\neq 0$, but we expect it to be small in comparison\nwith $l'_1 = -l'_2$. Then we write:\n\\begin{eqnarray}\n\\label{eq:needp12}\np_{12}=|l'_1+n_2|-|l'_2+n_2|.\n\\end{eqnarray}\nSo $t^\\nu_{12}$ is $O(\\varepsilon^{p_{12}})$. The solution \\eq{eq:soll1l2} implies that $l'_1$ and $l'_2$ should have opposite sign, so we choose the case $l'_1>0$\n(the other case is similar). 
Since $r'_d$ is large for all three GST solutions, and $n_2$ should be small in order to satisfy Eq.~(\\ref{eq:needp12}),\nwe can see that $|l'_2 + n_2| = -(l'_2 + n_2)$, and $|l'_1 + n_2| =\nl'_1 + n_2$ for all the solutions from the previous subsection.\nPutting these relations into Eq.~(\\ref{eq:needp12}) we get $p_{12}=(l'_1+n_2)+(l'_2+n_2)=2n_2$, that is,\n\\begin{eqnarray}\n\\label{eq:condl1l2}\nn_2=\\frac{p_{12}}{2}.\n\\end{eqnarray}\nSo when we choose $p_{12}$, $n_2$ is determined. Now for the $t^\\nu_{13}$ mixing, which should be at most $O(\\lambda)$, from \\eq{eq:tan13} we need\n\\begin{eqnarray}\n\\label{eq:n3pos}\n|l'_1+n_3|>|l'_2+n_3|\\Rightarrow n_3>0,\n\\end{eqnarray}\nhence let us define $p_{13}$ by\n\\begin{eqnarray}\n\\label{eq:needp13}\np_{13}=|l'_1+n_3|-|l'_2+n_3|.\n\\end{eqnarray}\nWe assume that the first term in Eq.~(\\ref{eq:tan13}) dominates. Then $t^\\nu_{13}\\approx \\varepsilon^{p_{13}}\/\\sqrt{2}$.\\footnote{We have checked that this is indeed true for the solutions that we find for $n_2,n_3$ later in this section.}\nBy applying the same logic that led to Eq.~(\\ref{eq:condl1l2}), we obtain\n\\begin{eqnarray}\n\\label{eq:n3zeta}\nn_3=\\frac{p_{13}}{2}.\n\\end{eqnarray}\nSo fixing $p_{13} \\geq 1$ we fix $n_3$. Now we need to impose the conditions under which we can have an appropriate value of \\eq{ratmn}. \nFirst note that in order to achieve $m_{\\nu_3}=O(10^{-2})\\ {\\rm eV}$:\n\\begin{eqnarray}\n{\\rm for}\\ \\langle\\Sigma\\rangle=M_P,\\quad \\frac{v^2}{\\langle\\Sigma\\rangle}\\approx 6 \\times 10^{-6} \\ {\\rm eV}\\quad {\\rm we}\\ {\\rm need}\\ \\varepsilon^{p_3}\\sim 10^4, \\nonumber\\\\\n{\\rm for}\\ \\langle\\Sigma\\rangle=M_G,\\quad \\frac{v^2}{\\langle\\Sigma\\rangle}\\approx 6 \\times 10^{-3} \\ {\\rm eV}\\quad {\\rm we}\\ {\\rm need}\\ \\varepsilon^{p_3}\\sim 10,\n\\end{eqnarray}\nwhere $p_3$ has been defined in \\eq{eq:p3}. In terms of powers of $\\lambda$, we have $\\lambda^{-4}-\\lambda^{-7}=O(10^3)-O(10^4)$ for $\\langle\\Sigma\\rangle=M_P$ and $\\lambda^{-1}, \\lambda^{-2}=O(10)$\nfor $\\langle\\Sigma\\rangle=M_G$. 
This corresponds to the following requirements:\n\\begin{eqnarray}\n\\label{eq:consonsig}\n{\\rm for}\\ \\langle\\Sigma\\rangle=M_P,\\quad p_3=(-4,-7),\\\\\n{\\rm for}\\ \\langle\\Sigma\\rangle=M_G,\\quad p_3=(-1,-2).\n\\end{eqnarray}\nWe can conclude, from Eq.~(\\ref{eq:srhnd1}), that since $n_3 > 0$ and $n_2$ is small, $\\sigma$ must be positive. Then we can write the power $p_2-p_3$ ($m_{\\nu_2}\/m_{\\nu_3}\\sim\\varepsilon^{p_2-p_3}$) as follows:\n\\begin{equation}\n \\label{eq:2}\n p_2-p_3 = -2(l'_2 + n_2)- (2n_2 + \\sigma) +(2n_3 + \\sigma) \\mp 2(l'_2 + n_3).\n\\end{equation}\nThe uncertainty in the final sign comes from whether $|l'_2| > |n_3|$. If this is the case then we get:\n\\begin{eqnarray}\n\\label{eq:difn3n2}\np_3-p_2=4(n_2-n_3).\n\\end{eqnarray}\nOtherwise we end up with\n\\begin{equation}\np_2-p_3= -4(l'_2+n_2).\n\\end{equation}\nThe second form is of no use to us, since we know that $-l_2'$ is large for the models we are considering, and since $n_2$ is small\nwe cannot get an acceptable mass ratio of $m_{\\nu_2}$ to $m_{\\nu_3}$. For the first form, Eq.~(\\ref{eq:difn3n2}), we need $n_2 \\ne 0$,\nbecause substituting \\eq{eq:n3zeta} into \\eq{eq:difn3n2} we have $p_2-p_3=2p_{13}-4n_2$, and since we need $p_{13}\\geq 1$, for $n_2=0$ we would have $p_2-p_3\\geq 2$, which suppresses $m_{\\nu_2}\/m_{\\nu_3}$ too much. 
So, independently of $r'_d$, we have the following solutions\n\\begin{eqnarray}\np_{12}=\\frac{1}{4},\\ p_{13}=1, \\ p_2-p_3=\\frac{3}{2} & \\Rightarrow & \\ n_2=\\frac{1}{8},\\ n_3=\\frac{1}{2};\\nonumber \\\\ \np_{12}=\\frac{1}{2},\\ p_{13}=1, \\ p_2-p_3=1 & \\Rightarrow & \\ n_2=\\frac{1}{4},\\ n_3=\\frac{1}{2}.\\label{eq:solsn3n2}\n\\end{eqnarray}\nWe can write the approximate expressions of mixings and masses in terms of the above results and the coefficients $a^\\nu_{ij}$ of $O(1)$,\n\\begin{eqnarray}\n\\label{eq:mixmasresn}\nt^\\nu_{23}&=&\\frac{a^{\\nu}_{23}}{a^{\\nu}_{33}},\\quad\nt^\\nu_{13}\\ \\ = \\ \\ \\frac {a^\\nu_{13}\\varepsilon^{|2n_3|}}{\\sqrt{a^{\\nu \\ 2}_{33} +a^{\\nu \\ 2}_{23} }},\\quad\nt^\\nu_{12}\\ \\ = \\ \\ \\frac{a^\\nu_{12}\\varepsilon^{|2n_2|}}{(c_{23}a^\\nu_{22}-s_{23}a^\\nu_{32})},\\nonumber\\\\\n\\frac{m_{\\nu_2}}{m_{\\nu_3}}&=& \\frac{c^{\\nu\\ 2}_{23}}{c^{\\nu\\ 2}_{12}}\\frac{(a^{\\nu}_{22}-a^{\\nu}_{32}t_{23})^2}{(a^{\\nu \\ 2}_{33} +a^{\\nu \\ 2}_{23})}\\varepsilon^{|4(n_3-n_2)|},\\quad\nm_{\\nu_3}\\ \\ = \\ \\ \\frac{v^2}{\\langle \\Sigma \\rangle}(a^{\\nu \\ 2}_{33} +a^{\\nu \\ 2}_{23})\\varepsilon^{|p_3|}.\n\\end{eqnarray}\nAs we have seen above, the charges $\\sigma$ are constrained by the differences $r'_d$, the requirements of \nEq.~(\\ref{eq:2}) and the solutions to \\eq{eq:solsn3n2}, which have the same value for $n_3$, so for these two sets of solutions we have the same value for $\\sigma$. 
We write down these solutions for $<\\Sigma>=M_P$ in Table (\\ref{tbl:solGSTMP}) and for $<\\Sigma>=M_G$ in Table (\\ref{tbl:solGSTMG}).\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|l|c|cc|ccc|}\n \\hline\nSol.& $r'_d$ & $n_2$ & $n_3$ & $p_3$ & $\\sigma$ & $M_3$ [GeV] \\\\\n \\hline\n {\\bf 1}&11&$\\frac{1}{8}$&$\\frac{1}{2}$ & (-4,-7)&(14,16)&$O(10^{10}),O(10^8)$ \\\\\n {\\bf 1}&11&$\\frac{1}{4}$&$\\frac{1}{2}$& (-4,-7)&(14,16)&$O(10^{10}),O(10^8)$ \\\\\n {\\bf 2}&10&$\\frac{1}{8}$&$\\frac{1}{2}$&(-4,-7)&(13,14)&$O(10^{11}),O(10^9)$ \\\\\n {\\bf 2}&10&$\\frac{1}{4}$&$\\frac{1}{2}$&(-4,-7)&(13,14)&$O(10^{11}),O(10^9)$ \\\\\n {\\bf 3}&$\\frac{41}{4}$&$\\frac{1}{8}$&$\\frac{1}{2}$&(-4,-7)&$(\\frac{53}{4},\\frac{57}{4})$&$O(10^{10})$,$O(10^8)$ \\\\\n {\\bf 3}&$\\frac{41}{4}$&$\\frac{1}{4}$&$\\frac{1}{2}$&(-4,-7)&$(\\frac{53}{4},\\frac{57}{4})$&$O(10^{10})$,$O(10^8)$ \\\\\n \\hline\n \\end{tabular}\n \\caption{$\\Sigma$ at $M_P$ for the solutions satisfying the GST relation.}\n \\label{tbl:solGSTMP}\n\\end{table}\n %\n %\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|l|c|cc|ccc|}\n \\hline\n Sol.&$r'_d$ & $n_2$ & $n_3$ & $p_3$ & $\\sigma$ & $M_3$ [GeV] \\\\\n \\hline\n {\\bf 1}&11& $\\frac{1}{8}$&$\\frac{1}{2}$&(-1,-2)&$(10,11)$&$O(10^8)$ \\\\\n {\\bf 1}&11& $\\frac{1}{4}$&$\\frac{1}{2}$&(-1,-2)&$(10,11)$&$O(10^8)$ \\\\\n {\\bf 2}&10& $\\frac{1}{8}$&$\\frac{1}{2}$&(-1,-2)&$(10,11)$&$O(10^9), O(10^{10})$ \\\\\n {\\bf 2}&10& $\\frac{1}{4}$&$\\frac{1}{2}$&(-1,-2)&$(10,11)$&$O(10^9), O(10^{10})$ \\\\\n {\\bf 3}&$\\frac{41}{4}$& $\\frac{1}{8}$&$\\frac{1}{2}$&(-1,-2)&$(\\frac{37}{4},\\frac{41}{4})$&$O(10^9), O(10^8)$ \\\\\n {\\bf 3}&$\\frac{41}{4}$& $\\frac{1}{4}$&$\\frac{1}{2}$&(-1,-2)&$(\\frac{37}{4},\\frac{41}{4})$&$O(10^9), O(10^8)$ \\\\\n \\hline\n \\end{tabular}\n \\caption{$\\Sigma$ at $M_G$ for the solutions satisfying the GST relation.}\n \\label{tbl:solGSTMG}\n\\end{table}\n %\n\n The solutions presented here satisfy the conditions of the single 
right-handed neutrino dominance, Eq.~(\\ref{eq:srhnd1}), \nwhich relate the second and third families. For the first and second families we need similar conditions, which are safely satisfied whenever\n$2n_1>2n_2>-\\sigma$ for $(2n_i+\\sigma)$ positive. Thus $n_1$ is not completely determined, but we can choose it to be a negative number between $-\\sigma\/2$ and $0$.\n\nNow that we have determined the conditions that the charges $l'_i$ and $n_i$ need to satisfy in order to produce \nSRHND solutions, we can determine the $e_i$ and $l_i$ charges, in agreement with the anomaly cancellation conditions, Eqs.~(\\ref{eq:A3}-\\ref{eq:A1p}), and these in turn determine the matrices $Y^e$, $Y^u$ and $Y^d$.\nIn Section \\ref{sec:su5q} we presented the conditions that the fermion mass matrices $Y^u$, $Y^d$, \n$Y^e$ and $Y^{\\nu}$ need to satisfy in order to give acceptable predictions for mass ratios and mixings, but \nwithout specifying the charges. \nThe charges are then determined from $r'_d$ and $k_d$. We start by rewriting $k_d$ using the $SU(5)$ charge relations,\nand the fact that $l'_i \\equiv l_i + h_u$:\n\\begin{equation}\n \\label{eq:25}\n k_d = q_3 + d_3 + h_d = e_3 + l_3 - h_u = e_3 + (l'_3 - h_u ) - h_u. \n\\end{equation}\nThen we use the fact that $k_u = 0 = 2e_3 + h_u$, and we can solve for $e_3$ in terms of $k_d$ and $r'_d$ (using Eq.~(\\ref{eq:soll1l2})):\n\\begin{equation}\n \\label{eq:26}\n e_3 = \\frac{2 k_d + r'_d}{10}.\n\\end{equation}\nOnce we have $e_3$ and $l'_3$, we can get $l_3$ since $h_u = - 2e_3$. 
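As a check of this chain, consider Solution 1, which has $k_d=-5\/2$ and $r'_d=11$: Eq.~(\\ref{eq:26}) gives $e_3=(2k_d+r'_d)\/10=3\/5$, so that $h_u=-2e_3=-6\/5$, and with $l'_3=-r'_d\/2=-11\/2$ we obtain $l_3=l'_3-h_u=-43\/10$ and $l_1=l_3+r'_d=67\/10$, in agreement with the corresponding entries of Table \\ref{tbl:sol-leptch-GST}.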
From there, we can calculate the other charges from\n$s,s',r,r'$ using Eq.~(\\ref{eq:11}) and Eq.~(\\ref{eq:24}).\n\nThe charges calculated in this way are laid out in Table \\ref{tbl:sol-leptch-GST}.\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|l|c|c|cc|ccccc|c|}\n \\hline\n Sol.& $r'_d$ & $k_d$ & $n_2$ & $n_3$ &$e_1$ & $e_2$ & $e_3$ & $l_1$ & $l_3$ & Fit \\\\\n \\hline \n {\\bf 1}&11& $\\frac{-5}{2}$ & $\\frac{1}{8}$&$\\frac{1}{2}$&$\\frac{187}{20}$ & $\\frac{-33}{20}$ & $\\frac{3}{5}$ & $\\frac{67}{10}$ & $\\frac{-43}{10}$ & - \\\\\n {\\bf 1}&11& $\\frac{-5}{2}$ & $\\frac{1}{4}$&$\\frac{1}{2}$&$\\frac{187}{20}$ & $\\frac{-33}{20}$ & $\\frac{3}{5}$ & $\\frac{67}{10}$ & $\\frac{-43}{10}$ & - \\\\\n {\\bf 2}&10& $\\frac{-5}{2}$ & $\\frac{1}{8}$&$\\frac{1}{2}$& $8$ & $-2$ & $0$ & $\\frac{15}{2}$ & $\\frac{-5}{2}$ & - \\\\ \n {\\bf 2}&10& $\\frac{-5}{2}$ & $\\frac{1}{4}$&$\\frac{1}{2}$&$8$ & $-2$ & $0$ & $\\frac{15}{2}$ & $\\frac{-5}{2}$ & 1 \\\\ \n {\\bf 3}&$\\frac{41}{4}$ & $-2$ &$\\frac{1}{8}$&$\\frac{1}{2}$ & $\\frac{63}{8}$ & $\\frac{9}{8}$ & $\\frac{25}{8}$ & $\\frac{51}{8}$ & $\\frac{-31}{8}$ & - \\\\\n {\\bf 3}&$\\frac{41}{4}$ & $-2$ &$\\frac{1}{4}$&$\\frac{1}{2}$ & $\\frac{63}{8}$ & $\\frac{9}{8}$ & $\\frac{25}{8}$ & $\\frac{51}{8}$ & $\\frac{-31}{8}$ & - \\\\\n\\hline\n \\end{tabular}\n \\caption{Charged lepton charges for the $SU(5)$ type solutions with $u=v=0$ satisfying the GST relation. The fits are discussed in section \\ref{sec:fitsmasses}}\n \\label{tbl:sol-leptch-GST}\n\\end{table}\n\n\n\\subsection{Solutions for the extended $SU(5)$ case with $u+v \\ne 0$}\n\\label{sec:solutions-su5-like}\n\n\nFor this class of solutions, it is clear from Eq.~(\\ref{eq:chargrelgensu5like}) and Eq.~(\\ref{eq:gyukpar}) that the quark sector results will\nbe unchanged. 
This happens since $s,s',r,r'$ are blind to whether the family charges are related by the $SU(5)$ relation, or the extended $SU(5)$ relation.\n$k_u$ must always be zero,\nand the parameterization happens to leave $k_d$ unchanged. $k_e$, however, does change, as discussed in section~\\ref{sec:genrzsu5like},\nso we need to find $k_e$ in order to know the structure of the electron Yukawa matrix.\n\n\n\n\nIt is helpful to rewrite $k_e$ and $k_d$, from the form in Eq.~(\\ref{eq:8}) by using Eqs.~(\\ref{eq:hd},~\\ref{eq:1}) and $k_u = 0$:\n\\begin{eqnarray}\nk_d&=&l_3+3e_3+u+\\frac{4}{3}m,\\nonumber\\\\\n\\label{eq:kd_ke_general}\nk_e&=&l_3+3e_3+u+m,\n\\end{eqnarray}\nwhere we have written $u+v=m$. As we will discuss in Section \\ref{sbsec:susycppr}, $m$ can be determined such that the effects of the breaking of $U(1)$ in the $\\mu$ term are of order $\\leq m_{3\/2}$. On the other hand, we need to keep the observed low energy relation $m_b=O(m_{\\tau})$, so $m$ either has to remain small or be negative in order to achieve $|k_d|=O(|k_e|)$.\nIn the present case the $Y^d$ matrix has exactly the same form as in \\eqs{eq:charggensu5like} and $Y^e$ has the form\n\\begin{eqnarray}\nY^{e}=\n\\left(\n\\begin{array}{ccc}\n\\varepsilon^{|s'+r'_{d}+k_e|}&\\varepsilon^{|s+r'_{d}+k_e|}&\\varepsilon^{|r'_d+k_e|}\\\\\n\\varepsilon^{|s'+r_{d}+k_e|}&\\varepsilon^{|s+r_{d}+k_e|}&\\varepsilon^{|r_d+k_e|}\\\\\n\\varepsilon^{|s'+k_e|}&\\varepsilon^{|s+k_e|}&\\varepsilon^{|k_e|}\n\\end{array}\n\\right).\n\\label{eq:chleptmatuvn0}\n\\end{eqnarray}\n\nWith $l_2=l_3$, which is just the condition $r_d=l_2-l_3=0$ determining the solutions for the charges $e_i$ and $l_i$, the discussion \nfollows exactly as in Section \\ref{sec:neutrino-sector-gst}, because there we referred only to the parameters \n$k_d$, $r$, $r'$, $s$ and $s'$, without specifying their relations with the charges cancelling the anomalies.\n \n\n\n\nIn this case, the analysis that leads to Eq.~(\\ref{eq:26}) must be
repeated, but accounting for the\nfact that instead of the $SU(5)$ relation between the charges, we must instead use the extended\n$SU(5)$ relation between the charges. In this case, we find that:\n\\begin{equation}\n \\label{eq:27}\n k_d = 3e_3 + l_3 + u+\\frac{4}{3}(u + v) - 2 h_u = 5 e_3 + l'_3 + \\frac{10}{3} u + \\frac{4}{3} v\n\\end{equation}\nwhere we have used that $l'_i=l_i-2e_3-u$. $l'_i$ is defined in such a way that $l_i+n_j+h_u=l'_i+n_j$. Using again the fact that $l'_3 = l'_2 = -\\frac{r'_d }{2}$, we find that:\n\\begin{equation}\n \\label{eq:28}\n e_3 = \\frac{1}{10}(2 k_d + r'_d - \\frac{20}{3} u - \\frac{8}{3} v )\n\\end{equation}\n\nUsing these results, and the values of $s,s',r_f, r'_f$, we can find the charges \nin Table~\\ref{tbl:sol-leptch-GSTuvne0}.\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|l|c|c|cc|c|ccccccc|c|}\n \\hline\n Sol. & $r'_d$ & $k_d$ & $u$ & $v$ & $k_e$ &$e_1$ & $e_2$ & $e_3$ & $l_1$ & $l_3$ & $n_2$ & $n_3$ & Fit \\\\\n \\hline\n {\\bf 1} & 11 & $\\frac{-5}{2}$ & $-\\frac{13}{2}$ & $7$ & $\\frac{8}{3}$ &$\\frac{709}{60}$ & $\\frac{49}{60}$ \n & $\\frac{46}{15}$ & $\\frac{77}{15}$ & $\\frac{-88}{15}$ &\n $\\frac{1}{8}$ & $\\frac{1}{2}$ & - \\\\\n {\\bf 1} & 11 & $\\frac{-5}{2}$ & $\\frac{-13}{2}$ & $7$ & $\\frac{8}{3}$ &$\\frac{709}{60}$ & $\\frac{49}{60}$ \n & $\\frac{46}{15}$ & $\\frac{77}{15}$ & $\\frac{-88}{15}$ & \n $\\frac{1}{4}$ & $\\frac{1}{2}$ & - \\\\\n {\\bf 2} &10 & $\\frac{-5}{2}$ & $-13$ & $\\frac{27}{2}$ & $\\frac{8}{3}$ & $\\frac{407}{30}$ \n & $\\frac{107}{30}$ & $\\frac{167}{30}$ & $\\frac{47}{15}$ & $\\frac{-103}{15}$& \n $\\frac{1}{8}$& $\\frac{1}{2}$ &2 \\\\ \n {\\bf 2} &10 & $\\frac{-5}{2}$ & $-13$ & $\\frac{27}{2}$ & $\\frac{8}{3}$ & $\\frac{407}{30}$ \n & $\\frac{107}{30}$ & $\\frac{167}{30}$ & $\\frac{47}{15}$ & $\\frac{-103}{15}$& \n $\\frac{1}{4}$ & $\\frac{1}{2}$ &- \\\\ \n {\\bf 3} &$\\frac{41}{4}$ & $-2$ & $\\frac{-10}{3}$ & $\\frac{23}{6}$ &$\\frac{7}{3}$ & $\\frac{363}{40}$ \n & 
$\\frac{3}{40}$ & $\\frac{73}{40}$ & $\\frac{653}{120}$ & $\\frac{-577}{120}$& \n $\\frac{1}{8}$&$\\frac{1}{2}$ &3 \\\\\n {\\bf 3} &$\\frac{41}{4}$ & $-2$ & $\\frac{-10}{3}$ & $\\frac{23}{6}$ &$\\frac{7}{3}$ & $\\frac{363}{40}$ \n & $\\frac{3}{40}$ & $\\frac{73}{40}$ & $\\frac{653}{120}$ & $\\frac{-577}{120}$& \n $\\frac{1}{4}$&$\\frac{1}{2}$& - \\\\\n \\hline\n \\end{tabular}\n \\caption{Charged lepton charges for the extended $SU(5)$\n solutions with $m=u+v=1\/2$ satisfying the GST relation. The fits are discussed in section \\ref{sec:fitsmasses}}\n \\label{tbl:sol-leptch-GSTuvne0}\n\\end{table}\n\n\n\n\n\\section{$SU(5)$ solutions not satisfying the GST relation}\n\\label{sec:su5-solutions-not-GST}\n\\subsection{The quark sector}\n\\label{sec:quark-sector-not-GST}\nAs we can see the GST relation puts a constraint on the opposite signs\nof $s$ and $s'$ and on the difference of $r'_d=l_1-l_3$. If we do not\nimpose these requirements, allowing all the numbers $s,\\ s^\\prime,\\ r,\n\\ r^\\prime$ and $k_d$ to have the same sign, positive or negative, we\ncan factorize the $k_d$ factor out of the $Y^d$ matrix and so can\nwrite the down matrix in the form\n\\begin{eqnarray}\nY^d=\n\\varepsilon^{|k_d|}\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|s'+l_1-l_3|}&\\varepsilon^{|s'|}&\\varepsilon^{|s'|}\\\\\n\\varepsilon^{|s+l_1-l_3|}&\\varepsilon^{|s|}&\\varepsilon^{|s|}\\\\\n\\varepsilon^{|l_1-l_3|}&1&1\n\\end{array}\n\\right].\n\\end{eqnarray}\nIn this case we do not have the restriction $|s+l_1-l_3|=|s'|$ so the parameter $l_1-l_2$ is not fixed\n by these conditions. In these cases $k_d$ is not constrained so it can acquire a value in the range \n$\\sim (0,3)$ for different values of $\\tan\\beta$. 
In these cases of all positive or all negative charges, \nthe choices which reproduce quark masses and mixings are\n\\begin{eqnarray}\n\\label{eq:solsspnongst}\n|s|=2,\\ |s'|=3 \\quad {\\rm{or}} \\quad |s|=2,\\ |s'|=4.\n\\end{eqnarray}\nFor $|s|=2, \\ |s'|=3$ we have\n\\begin{eqnarray}\nY^d=\\varepsilon^{|k_d|}\n\\label{eq:ydinl1l3A}\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|3+l_1-l_3|}& \\varepsilon^{|3|}&\\varepsilon^{|3|}\\\\\n\\varepsilon^{|2+l_1-l_3|}& \\varepsilon^{|2|}& \\varepsilon^{|2|}\\\\ \n\\varepsilon^{|l_1-l_3|}&1&1\n\\end{array}\n\\right].\n\\end{eqnarray}\nFor $|s|=2, \\ |s'|=4$ we have\n\\begin{eqnarray}\n\\label{eq:ydinl1l3B}\nY^d=\\varepsilon^{|k_d|}\n \\left[\n\\begin{array}{ccc}\n\\varepsilon^{|4+l_1-l_3|}& \\varepsilon^{|4|}& \\varepsilon^{|4|}\\\\\n\\varepsilon^{|2+l_1-l_3|}& \\varepsilon^{|2|}& \\varepsilon^{|2|}\\\\ \n\\varepsilon^{|l_1-l_3|}&1&1\n\\end{array}\n\\right].\n\\end{eqnarray}\nFrom \\eq{eq:ydinl1l3A} and \\eq{eq:ydinl1l3B} we can check whether certain differences of leptonic charges can yield a suitable quark phenomenology. From \\eq{eq:yukeigen} we can see that in the case of having all charges $l_i$ and $e_i$ either positive or negative, all the terms contributing to the first eigenvalue of $Y^d$, $y_1$, will have the same power, as we mentioned earlier. So the difference $r'_d$ here is constrained to reproduce an appropriate ratio\n$m_d\/m_s$. Let us take here for definiteness the case of positive charges (the negative charge case is completely analogous). Thus for $s=2$, $s'=3$, we have\n\\begin{eqnarray}\n\\frac{m_d}{m_s}\\sim \\frac{\\varepsilon^{3+r'_d}}{\\varepsilon^2}\\sim (\\lambda^2, \\lambda^{5\/2}),\n\\end{eqnarray}\nso in this case we have $r'_d=1,\\ 3\/2$. 
\nFor the case $s=2$, $s'=4$, we have\n\\begin{eqnarray}\n\\frac{m_d}{m_s}\\sim \\frac{\\varepsilon^{4+r'_d}}{\\varepsilon^2}\\sim (\\lambda^2, \\lambda^{5\/2}).\n\\end{eqnarray}\nWe do not want $r'_d=0$, as it would give a somewhat large contribution from the $(3,1)$ element of the $Y^d$ matrix to the eigenvalues. So for this case $r'_d\\approx 1\/2$. \nIn this case we have the following matrices for \\eq{eq:ydinl1l3A} \n\\begin{eqnarray}\n\\label{eq:sol1-2nongst}\nY^d\\!=\\!\n \\left[\\begin{array}{ccc}\n\\epsilon^{4}&\\epsilon^{3}&\\epsilon^{3}\\\\\n\\epsilon^{3}&\\epsilon^{2}&\\epsilon^{2}\\\\\n\\epsilon^{1}&1&1\n\\end{array}\n\\right]\\varepsilon^{k_d},\\\nY^d\\!=\\!\n \\left[\\begin{array}{ccc}\n\\epsilon^{9\/2}&\\epsilon^{4}&\\epsilon^{4}\\\\\n\\epsilon^{7\/2}&\\epsilon^{2}&\\epsilon^{2}\\\\\n\\epsilon^{3\/2}&1&1\n\\end{array}\n\\right]\\varepsilon^{k_d},\n\\end{eqnarray}\nrespectively for $r'_d=1,\\ 3\/2$.\nFor \\eq{eq:ydinl1l3B} we have \n\\begin{eqnarray}\n\\label{eq:sol3nongst}\nY^d\\!=\\!\n \\left[\\begin{array}{ccc}\n\\epsilon^{9\/2}&\\epsilon^{4}&\\epsilon^{4}\\\\\n\\epsilon^{5\/2}&\\epsilon^{2}&\\epsilon^{2}\\\\\n\\epsilon^{1\/2}&1&1\n\\end{array}\n\\right]\\epsilon^{k_d},\n\\end{eqnarray}\nfor $r'_d=1\/2$. These solutions work for $k_d\\in (0,3)$, depending on the value of $\\tan\\beta$; these matrices yield acceptable phenomenology in both the charged lepton and the d quark sectors.\n\n\\subsection{The neutrino sector}\n\\label{sec:neutrino-sector-non-GST}\n\nAs we have seen in Section \\ref{sec:quark-sector-not-GST}, in these cases $r'_d$ is constrained to be \n$r'_d=1,\\ 3\/2$ for $(s,s')=(2,3)$ and $r'_d\\approx 1\/2$ for $(s,s')=(2,4)$, but let us leave it \nunspecified for the moment. We consider here the case in which all the parameters related to $l_i$ and $e_i$ are \npositive. 
In this case we require all the neutrino charges $n_i$ to be negative but $\\sigma$ to be positive.\nWe proceed as in Section (\\ref{sec:neutgst}), in order to identify the charges $l'_i$, $n_i$ and $\\sigma$.\nIn principle we need $\\varepsilon^{|l'_1+n_2|}=\\varepsilon^{|l'_2+n_2|}$, but now we require $l'_1,l'_2\\geq 0$, so the appropriate solution to this would be\n\\begin{eqnarray}\nl'_1=r'_d,\\ l'_2=0,\\quad n_2=\\frac{-r'_d}{2}.\n\\end{eqnarray}\nHowever, in this case, as in the case of Section (\\ref{sec:neutgst}), we would only be able to produce $m_{\\nu_2}\/m_{\\nu_3}\\sim \\varepsilon^2$. So we work with a solution of the form \\eq{eq:needp12}. For this case we then have \n\\begin{eqnarray}\n\\label{eq:soln2Ngst}\nl'_1=r'_d,\\ l'_2=0,\\ n_2=\\frac{p_{12}-r'_d}{2}.\n\\end{eqnarray}\nNote that in this case the charges $l_i$ are positive because $l_2=k_d-3e_3$ and\nhere $e_3=k_d$. For $t^\\nu_{13}$ we also make use of the parameterization of \\eq{eq:needp13}. Assuming that $|r'_d | > |n_3|$,\n\\begin{eqnarray}\n\\label{eq:soln3Ngst}\nn_3=\\frac{p_{13}-r'_d}{2}.\n\\end{eqnarray}\nIn order to achieve an appropriate ratio for $m_{\\nu_2}\/m_{\\nu_3}$ we now need the conditions \n$2n_3+\\sigma>0$, $2n_2+\\sigma>0$, $l'_2+n_2<0$, $l'_2+n_3<0$; the equality can be satisfied for one of the last two inequalities, but not for both. For this case we also have\n$p_2-p_3=4(n_3-n_2)$, and using \\eq{eq:soln2Ngst} and \\eq{eq:soln3Ngst} we obtain\n$p_2-p_3=2(p_{13}-p_{12})$. 
We can also choose the parameters $p_{12}$, $p_{13}$ and $p_2-p_3$ as in\n\\eq{eq:solsn3n2} but now $n_3$ and $n_2$ are given by \\eq{eq:soln2Ngst} and \\eq{eq:soln3Ngst}.\nThus we have\n\\begin{eqnarray}\n\\nonumber\n&& p_{12}=\\frac{1}{4},\\ p_{13}=1, \\ p_2-p_3=\\frac{3}{2}\\\\\n&&\n\\rightarrow \\ n_2=\\frac{1}{8}-\\frac{r'_d}{2}<0 ,\\ n_3=\\frac{1}{2}-\\frac{r'_d}{2}<0 \\Rightarrow r'_d\\geq 1, \\label{eq:n3n2rdNgstA}\\\\ \n\\nonumber\n&&\np_{12}=\\frac{1}{2},\\ p_{13}=1, \\ p_2-p_3=1\\\\\n&&\n\\rightarrow \\ n_2=\\frac{1}{4}-\\frac{r'_d}{2}<0,\\ n_3=\\frac{1}{2}-\\frac{r'_d}{2}<0 \\Rightarrow r'_d\\geq 1. \\label{eq:n3n2rdNgstB}\n\\end{eqnarray}\nIn Section (\\ref{sec:su5q}) we determined the approximate values for $r'_d$. For $(s,s')=(2,3)$ we can have $r'_d=1,3\/2$ while for $(s,s')=(2,4)$ we have \n$r'_d\\approx 1\/2$, which however is not in agreement with the conditions of \\eq{eq:n3n2rdNgstA} and \\eq{eq:n3n2rdNgstB}. The approximate expressions of mixings and masses in terms of the above results and the coefficients $a^\\nu_{ij}$ of $O(1)$ are as in \\eq{eq:mixmasresn}, except for $t^\\nu_{13}$ and $t^\\nu_{12}$ which now read\n\\begin{eqnarray}\n\\label{eq:mixangngst}\nt^\\nu_{13}\\ \\ = \\ \\ \\frac {a^\\nu_{13}\\varepsilon^{|r'_d+n_3|-|n_3|}}{\\sqrt{a^{\\nu \\ 2}_{33} +a^{\\nu \\ 2}_{23} }},\\quad\nt^\\nu_{12}\\ \\ = \\ \\ \\frac{a^\\nu_{12}\\varepsilon^{(|r'_d+n_2|-|n_2|)}}{(c_{23}a^\\nu_{22}-s_{23}a^\\nu_{32})}.\\nonumber\\\\\n\\end{eqnarray}\nWe have listed the possible solutions of Table (\\ref{tbl:solNGSTMP}) for \\eq{eq:n3n2rdNgstA} at $\\langle \\Sigma\\rangle=M_P$ and in Table (\\ref{tbl:solNGSTMG}) for $\\langle \\Sigma\\rangle=M_G$.\n\\begin{table}[htbp]\n \\centering\n \\begin{tabular}{|cc|cccc|}\n \\hline\n$r'_d$ & $n_2$ &$n_3$& $p_3$ & $\\sigma$ & $M_3 [GeV]$ \\\\\n \\hline\n$1$ & $\\frac{-3}{8}$ & $0$ & $(-\\frac{9}{2},-\\frac{5}{2})$ & $(\\frac{9}{2},5)$ & $O(10^{15})$ \\\\\n$\\frac{3}{2}$ & $\\frac{-5}{8}$ & $\\frac{-1}{4}$ & 
$(-\\frac{17}{4},-\\frac{19}{4})$ & $(5,\\frac{11}{2})$ & $O(10^{15})$\\\\\n$1$ & $\\frac{-1}{4}$&$0$& $(-6,-7)$&$(6,7)$&$O(10^{15})$, $O(10^{14})$\\\\\n$\\frac{3}{2}$ & $\\frac{-1}{2}$&$\\frac{-1}{4}$& $(-6,-7)$&$(\\frac{27}{4},\\frac{31}{4})$ &$O(10^{14})$, $O(10^{15})$\\\\\n\\hline\n\\end{tabular}\n \\caption{$\\Sigma$ at $M_P$ for the solutions not satisfying the GST relation.}\n \\label{tbl:solNGSTMP}\n\\end{table}\n %\n\\begin{table}[htbp]\n \\centering\n \\begin{tabular}{|cc|cccc|}\n \\hline\n$r'_d$ & $n_2$&$n_3$ & $p_3$ & $\\sigma$ & $M_3$ [GeV] \\\\\n \\hline\n$1$ & $\\frac{-3}{8}$&$0$ & $(0,-\\frac{1}{2})$ & $(0,\\frac{1}{2})$ & $O(10^{15})$ \\\\\n$\\frac{3}{2}$ & $\\frac{-5}{8}$&$\\frac{-1}{4}$& $(-\\frac{1}{4},-\\frac{3}{4})$ &$(1,\\frac{3}{2})$ &$O(10^{15})$\\\\\n$1$ & $\\frac{-1}{4}$&$0$& $(-1,-2)$&$(1,2)$&$O(10^{18})$\\\\\n$\\frac{3}{2}$ & $\\frac{-1}{2}$&$\\frac{-1}{4}$& $(-1,-2)$&$(\\frac{7}{4},\\frac{11}{4})$&$O(10^{18})$,$O(10^{17})$\\\\\n \\hline\n \\end{tabular}\n \\caption{$\\Sigma$ at $M_G$ for the solutions not satisfying the GST relation.}\n \\label{tbl:solNGSTMG}\n\\end{table}\n %\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|cccccc|c|}\n \\hline\n$r'_d$ & $k_d$ &$e_1$ & $e_2$ & $e_3$ & $l_1$ & $l_3$ & Fit\\\\\n\\hline\n1&2& $\\frac{17}{5}$ & $\\frac{12}{5}$ & $\\frac{2}{5}$ & $\\frac{9}{5}$ & $\\frac{4}{5}$ & 4 \\\\\n$\\frac{3}{2}$& $2$ & $\\frac{17}{5}$ & $\\frac{12}{5}$ & $\\frac{2}{5}$ & $\\frac{7}{10}$ & $\\frac{4}{5}$ & 5\\\\\n1& $3$ & $\\frac{18}{5}$ & $\\frac{13}{5}$ & $\\frac{3}{5}$ & $\\frac{7}{10}$ & $\\frac{-3}{10}$ & -\\\\\n$\\frac{3}{2}$& $3$ & $\\frac{18}{5}$ & $\\frac{13}{5}$ & $\\frac{3}{5}$ & $\\frac{6}{5}$ & $\\frac{-3}{10}$ & -\\\\\n \\hline\n \\end{tabular}\n \\caption{Charged lepton $U(1)_X$ charges for the solutions $u=v=0$ not satisfying the GST \nrelation. 
The fits are discussed in Section \\ref{sec:fitsmasses}.}\n \\label{tbl:sol-lepchar-NGST}\n\\end{table}\n\n\n\\subsection{Solutions for the extended $SU(5)$ case with $u+v \\ne 0$}\n\\label{sec:solutions-su5-like-non-GST}\n\nWe do not present any charges for this class of solutions, but here is how the charges would be calculated.\nIn this case, the analysis is carried out in the same way as in Section \\ref{sec:solutions-su5-like}. The only subtlety\nis that the relation linking $l'_2$ to $r'_d$ is different. Instead, we have, from Eq.~(\\ref{eq:soln2Ngst}), that $l'_2 = l'_3 = 0$.\nPutting this result into Eq.~(\\ref{eq:27}), we obtain:\n\\begin{equation}\n \\label{eq:29}\n e_3 = \\frac{1}{10}(2 k_d + \\frac{4}{3} u - \\frac{8}{3} v ).\n\\end{equation}\n\nFrom $e_3$ and $l'_3$, the other charges may be calculated using the known values of $s,s',r_d,r'_d,u \\, \\mathrm{and}\\, v$,\nvia the extended $SU(5)$ charge relations, Eq.~(\\ref{eq:1}), and the simplified parametrization, Eq.~(\\ref{eq:16}).\n\n\n\\section{The non-$SU(5)$ Cases}\n\\label{sec:non-su5-cases}\n\n\\subsection{Solutions for $u=v=0$, in the Pati-Salam case\\label{sec:patisalamq}}\nWith $l_2=l_3$, in this case we have $s=q_2-q_3=l_2-l_3=0$, and then the charges of the matrices are\n\\begin{eqnarray}\nC(Y^{u, \\nu})&=&\n \\left[\n\\begin{array}{ccc}\n l_1-l_3+e_1-e_3 & l_1-l_3+e_2-e_3 & l_1-l_3 \\\\\n e_1-e_3 & e_2-e_3 & 0\\\\\n e_1-e_3 & e_2-e_3 & 0\n\\end{array}\n\\right]\\nonumber\\\\\nC(Y^{d, l})&=& \n \\left[\n \\begin{array}{ccc}\n l_1 + l_3 + e_1 + e_3 & l_1 + l_3 + e_2 + e_3 & l_1 + l_3 + 2e_3 \\\\\n 2l_3 + e_1 + e_3 & 2l_3 + e_2 + e_3 & 2l_3 + 2e_3 \\\\\n 2l_3 + e_1 + e_3 & 2l_3 + e_2 + e_3 & 2l_3 + 2e_3\n \\end{array}\n\\right]\n\\end{eqnarray}\nIn this case the $U(1)_X$ symmetry does not give an appropriate description of fermion masses and mixings; however, it can be combined with non-renormalizable operators of the Pati-Salam group, \\cite{King:2000ge}, in order to give a good description 
of the fermion phenomenology.\n\\subsection{Solutions for $u+v=0$}\nOne trivial example of the non-$SU(5)$ cases was given in Section (\\ref{sec:yukawa-textures-uv-nonzero}) for the solution $u+v=0$. \nWe proceed as in Section (\\ref{sec:su5q}) in order to analyze the appropriate phenomenology. We are interested \nin the cases $l_2=l_3$; this, together with the condition of an $O(1)$ top Yukawa coupling, gives us the following matrices of charges, \nwhich are derived with the appropriate substitutions in \\eq{eq:yu-umvn0}-\\eq{eq:ye-umvn0},\n\\begin{eqnarray}\nC(Y^d)&=&\n \\left[\n\\begin{array}{ccc}\n l_1 + e_1 &\n \\frac{4(l_3-l_1)}{3} + e_1 - e_3 &\n e_2 - e_3 \\\\\n \\frac{l_3-l_1}{3} + e_2 - e_3 &\n e_2 - e_3 &\n e_2 - e_3 \\\\\n \\frac{l_3-l_1}{3} &\n 0 &\n 0\n\\end{array}\n\\right],\\nonumber\\\\\nC(Y^u)&=&\n \\left[\n\\begin{array}{ccc}\n l_1 + e_1 &\n \\frac{2(l_3-l_1)}{3} + e_3 - e_1 &\n \\frac{2(l_3-l_1)}{3} + e_3 - e_1 \\\\\n \\frac{l_3 - l_1}{3} + e_3 - e_3 &\n e_3 - e_2 &\n e_3 - e_2 \\\\\n \\frac{l_3 - l_1}{3} &\n 0 & 0\n\\end{array}\n\\right],\\nonumber\\\\\nC(Y^e)&=&\n \\left[\n\\begin{array}{ccc}\n l_1+e_1 & l_1 + e_2 & l_1 + e_3 \\\\\n e_2-e_3 & e_2 - e_3 & 0 \\\\\n e_2-e_3 & e_2 - e_3 & 0\n\\end{array}\n\\right].\\nonumber\\\\\n\\end{eqnarray}\nDue to the form of the charges in the up and down quark matrices, first of all we would need two expansion parameters: $\\epsilon^u$ and $\\epsilon^d$. But with this structure alone it is not possible to account simultaneously for appropriate mass ratios of the second to third family of quarks and for an appropriate $V_{cb}$ mixing. So in this case, with a $U(1)$ alone, it is not possible to explain fermion masses and mixings in the context of single right-handed neutrino dominance, SRHND. 
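The magnitude-only power counting used throughout, $|Y_{ij}|\sim \varepsilon^{|X_i+X_j|}$, can be checked numerically; the sketch below uses purely hypothetical charges (not one of the solutions above) simply to exhibit why $l_2=l_3$ leaves the second and third families degenerate at the level of the $U(1)_X$ powers alone.

```python
# Sketch of the magnitude-only Froggatt-Nielsen power counting,
# |Y_ij| ~ eps^{|l_i + e_j|}.  The charges here are hypothetical
# placeholders chosen only to illustrate the l_2 = l_3 degeneracy.

def texture(left, right, eps):
    """Order-of-magnitude texture |Y_ij| ~ eps^{|l_i + e_j|}."""
    return [[eps ** abs(li + ej) for ej in right] for li in left]

eps = 0.2                 # generic expansion parameter
l = [1.0, 0.0, 0.0]       # l_2 = l_3, as in the Pati-Salam case
e = [3.0, 2.0, 0.0]       # hypothetical right-handed charges

Y = texture(l, e, eps)
# With l_2 = l_3 the second and third rows coincide, so the U(1)_X
# powers alone cannot split the second and third families:
print(Y[1] == Y[2])       # -> True
```

The same helper reproduces the qualitative shape of any of the charge matrices above for a given set of charges; the $O(1)$ coefficients multiplying each power are of course not fixed by the symmetry.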
\n \n\n\n\\section{Numerical fits of masses and mixings}\n\\label{sec:fitsmasses}\n\\subsection{Fitted examples}\nIn this section we present numerical fits to some of the examples detailed in Sections \n(\\ref{sec:su5-solut-satisfy-GST},\\ref{sec:su5-solutions-not-GST}) and we compare the results \nwith a fit of a generic $SU(3)$-like case \\cite{King:2001uz}. \nThe simplest way to construct the lepton Yukawa matrices from the charges is to first calculate $h_{u,d}$. We extract $h_d$ from $k_e$, $l_3$ and $e_3$ via $k_e = l_3 + e_3 + h_d$. In general, we can use Eq.~(\\ref{eq:kd_ke_general}) to obtain:\n\\begin{equation}\n \\label{eq:22}\n h_u + h_d = m = 3(k_d - k_e).\n\\end{equation}\nThis is then enough to construct the lepton Yukawas from the appropriate line of the table of lepton and Yukawa family charges (Table \\ref{tbl:sol-leptch-GST} or \\ref{tbl:sol-leptch-GSTuvne0}). Below we specify the examples that we have chosen to fit.\n\\subsubsection*{Fit 1: $SU(5)$ type solution ($u=v=0$): example satisfying the GST relation}\n\\label{sec:fit-1}\n\nThis takes GST solution 2 (Eq.~(\\ref{eq:tex2gst})) in the $SU(5)$ type case, with $u=v=0$. The charges $l_i$, $e_i$, and $n_{2,3}$ are read off from the\nfourth line of Table \\ref{tbl:sol-leptch-GST}. In principle, the value of $\\sigma$ would be read off from either Table \\ref{tbl:solGSTMG} (for neutrino\nmasses generated at the GUT scale) or Table \\ref{tbl:solGSTMP} (for neutrino masses generated at the Planck scale). However, these tables allow for\na range of $\\sigma$; for this fit, we take $\\sigma = 21\/2$ for GUT scale neutrino mass generation, and $\\sigma = 29\/2$ for Planck scale neutrino mass\ngeneration.\n\nThen, up to $\\sigma$ and $n_1: -\\sigma\/2 \\le n_1 \\le 0$, the Yukawa and Majorana matrices are:\n\n\\begin{eqnarray}\n \\label{eq:5}\n Y^u \\! &=&\\! 
\n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{16} & a^u_{12} \\epsilon^{6} & a^u_{13} \\epsilon^{8} \\\\\n a^u_{21} \\epsilon^{6} & a^u_{22} \\epsilon^{4} & a^u_{23} \\epsilon^{2} \\\\\n a^u_{31} \\epsilon^{8} & a^u_{32} \\epsilon^{2} & a^u_{33}\n \\end{array}\n \\right]\\!,\n\\quad \\quad \\quad\n Y^d \\ = \\ \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{31\/2} & a^d_{12} \\epsilon^{11\/2} & a^d_{13} \\epsilon^{11\/2} \\\\\n a^d_{21} \\epsilon^{11\/2} & a^d_{22} \\epsilon^{9\/2} & a^d_{23} \\epsilon^{9\/2} \\\\\n a^d_{31} \\epsilon^{15\/2} & a^d_{32} \\epsilon^{5\/2} & a^d_{33} \\epsilon^{5\/2}\n \\end{array}\n \\right]\n \\nonumber \\\\\n\\label{eq:7}\n Y^e \\! &=&\\! \n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{31\/2} & a^e_{12} \\epsilon^{11\/2} & a^e_{13} \\epsilon^{15\/2} \\\\\n a^e_{21} \\epsilon^{11\/2} & a^e_{22} \\epsilon^{9\/2} & a^e_{23} \\epsilon^{5\/2} \\\\\n a^e_{31} \\epsilon^{11\/2} & a^e_{32} \\epsilon^{9\/2} & a^e_{33} \\epsilon^{5\/2}\n \\end{array}\n \\right]\\!,\n \\\n Y^\\nu = \n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1 + 5\/2|} & a^\\nu_{12} \\epsilon^{11\/4} & a^\\nu_{13} \\epsilon^{12\/4} \\\\\n a^\\nu_{21} \\epsilon^{|n_1 - 15\/2|} & a^\\nu_{22} \\epsilon^{29\/4} & a^\\nu_{23} \\epsilon^{28\/4} \\\\\n a^\\nu_{31} \\epsilon^{|n_1 - 15\/2|} & a^\\nu_{32} \\epsilon^{29\/4} & a^\\nu_{33} \\epsilon^{28\/4}\n \\end{array}\n \\right]\n \\nonumber \\\\\n \\label{eq:9}\n M_{RR}\\! &=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1+\\sigma|} & \\epsilon^{|1\/4 + n_1 + \\sigma|} & \\epsilon^{|1\/2+ n_1 + \\sigma|} \\\\\n . & a^N_{22} \\epsilon^{|1\/2 + \\sigma| } & \\epsilon^{|3\/4 + \\sigma|} \\\\\n . & . & \\epsilon^{|1 + \\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>\n\\end{eqnarray}\n\n\n\\subsubsection*{Fit 2: Extended $SU(5)$ solution ($u+v \\ne 0$) satisfying the GST relation}\n\\label{sec:fit-2}\n\nThis takes GST solution 2, (Eq. 
(\\ref{eq:tex2gst})), in the extended $SU(5)$ case with $u+v\\ne0$. The charges $l_i$, $e_i$ and $n_{2,3}$ are read\noff from the third line of Table \\ref{tbl:sol-leptch-GSTuvne0}. The values of $\\sigma$ taken are $\\sigma = 19\/2$, $\\sigma = 29\/2$ for GUT scale\nand Planck scale neutrino mass generation respectively. Again, $n_1$ is taken to lie in the region $-\\sigma\/2 \\le n_1 \\le 0$.\n\\footnote{The difference between Fit 1 and Fit 2 is that the charges (Tables (\\ref{tbl:sol-leptch-GST}) and \n(\\ref{tbl:sol-leptch-GSTuvne0}) respectively) are determined in a different way and hence the value of the \neffective expansion parameter $\\varepsilon$ is different.} \n\\begin{eqnarray}\n \\label{eq:10}\n Y^u \\! &=&\\! \n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{16} & a^u_{12} \\epsilon^{6} & a^u_{13} \\epsilon^{8} \\\\\n a^u_{21} \\epsilon^{6} & a^u_{22} \\epsilon^{4} & a^u_{23} \\epsilon^{2} \\\\\n a^u_{31} \\epsilon^{8} & a^u_{32} \\epsilon^{2} & a^u_{33}\n \\end{array}\n \\right]\\!,\n \\quad \\quad \\quad\n Y^d \\ = \\ \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{31\/2} & a^d_{12} \\epsilon^{11\/2} & a^d_{13} \\epsilon^{11\/2} \\\\\n a^d_{21} \\epsilon^{11\/2} & a^d_{22} \\epsilon^{9\/2} & a^d_{23} \\epsilon^{9\/2} \\\\\n a^d_{31} \\epsilon^{15\/2} & a^d_{32} \\epsilon^{5\/2} & a^d_{33} \\epsilon^{5\/2}\n \\end{array}\n \\right]\n \\nonumber \\\\\n\\label{eq:3}\n Y^e &=& \n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{46\/3} & a^e_{12} \\epsilon^{16\/3} & a^e_{13} \\epsilon^{22\/3} \\\\\n a^e_{21} \\epsilon^{16\/3} & a^e_{22} \\epsilon^{14\/3} & a^e_{23} \\epsilon^{8\/3} \\\\\n a^e_{31} \\epsilon^{16\/3} & a^e_{32} \\epsilon^{14\/3} & a^e_{33} \\epsilon^{8\/3}\n \\end{array}\n \\right]\\!,\n \\\n Y^\\nu =\n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1 +5|} & a^\\nu_{12} \\epsilon^{\\frac{41}{8}} & a^\\nu_{13} \\epsilon^{\\frac{11}{2}} \\\\\n a^\\nu_{21} \\epsilon^{|n_1 - 5|} & a^\\nu_{22} 
\\epsilon^{\\frac{39}{8}} & a^\\nu_{23} \\epsilon^{\\frac{9}{2}} \\\\\n a^\\nu_{31} \\epsilon^{|n_1 -5|} & a^\\nu_{32} \\epsilon^{\\frac{39}{8}} & a^\\nu_{33} \\epsilon^{\\frac{9}{2}}\n \\end{array}\n \\right]\n \\nonumber\\\\\n M_{RR} \\!&=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1+\\sigma|} & \\epsilon^{|1\/8 + n_1 + \\sigma|} & \\epsilon^{|1\/2 + n_1+\\sigma|} \\\\\n . & a^N_{22} \\epsilon^{|1\/4+\\sigma|} & \\epsilon^{|5\/8+\\sigma|} \\\\\n . & . & \\epsilon^{|1+\\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>\n\\end{eqnarray}\n\n\n\\subsubsection*{Fit 3: Extended $SU(5)$ solution ($u+v\\neq 0$), satisfying the GST relation}\n\\label{sec:fit-3}\n\nThis takes GST solution 3, ( Eq. (\\ref{eq:sol3gst})), in the extended $SU(5)$ case with $u+v\\ne0$. The charges $l_i$, $e_i$ and $n_{2,3}$ are read off from\nthe fifth line of table \\ref{tbl:sol-leptch-GSTuvne0}. The values of $\\sigma$ taken are $\\sigma = 39\/4$, $\\sigma = 55\/4$ for GUT and Planck scale neutrino\nmass generation respectively. $n_1$ lies in the region $-\\sigma\/2 \\le n_1 \\le 0$.\n\\begin{eqnarray}\n\\label{eq:18}\nY^u \\! &=&\\! 
\n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{38\/4} & a^u_{12} \\epsilon^{22\/4} & a^u_{13} \\epsilon^{29\/4}\\\\\n a^u_{21} \\epsilon^{22\/4} & a^u_{22} \\epsilon^{14\/4} & a^u_{23} \\epsilon^{7\/4}\\\\\n a^u_{31} \\epsilon^{29\/4} & a^u_{32} \\epsilon^{7\/4} & a^u_{33}\n \\end{array}\n \\right]\\!,\n \\quad \\quad \\quad\n Y^d = \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{62\/4} & a^d_{12} \\epsilon^{21\/4} & a^d_{13} \\epsilon^{21\/4}\\\\\n a^d_{21} \\epsilon^{21\/4} & a^d_{22} \\epsilon^{15\/4} & a^d_{23} \\epsilon^{15\/4}\\\\\n a^d_{31} \\epsilon^{33\/4} & a^d_{32} \\epsilon^{8\/4} & a^d_{33} \\epsilon^{8\/4}\n \\end{array}\n \\right]\n \\nonumber\\\\\n Y^e &=& \n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{46\/3} & a^e_{12} \\epsilon^{19\/3} & a^e_{13} \\epsilon^{97\/12}\\\\\n a^e_{21} \\epsilon^{61\/12} & a^e_{22} \\epsilon^{47\/12} & a^e_{23} \\epsilon^{13\/6}\\\\\n a^e_{31} \\epsilon^{61\/12} & a^e_{32} \\epsilon^{47\/12} & a^e_{33} \\epsilon^{13\/6}\n \\end{array}\n \\right]\\!,\n \\\n Y^\\nu =\n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1+\\frac{41}{8}|} & a^\\nu_{12} \\epsilon^{\\frac{21}{4}} & a^\\nu_{13} \\epsilon^{\\frac{45}{8}}\\\\\n a^\\nu_{21} \\epsilon^{|n_1-\\frac{41}{8}|} & a^\\nu_{22} \\epsilon^{5} & a^\\nu_{23} \\epsilon^{\\frac{37}{8}}\\\\\n a^\\nu_{31} \\epsilon^{|n_1-\\frac{41}{8}|} & a^\\nu_{32} \\epsilon^{5} & a^\\nu_{33} \\epsilon^{\\frac{37}{8}}\n \\end{array}\n \\right]\n \\nonumber \\\\\n \\label{eq:17}\n M_{RR} \\!&=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1+\\sigma|} & \\epsilon^{|1\/8 + n_1 + \\sigma|} & \\epsilon^{|1\/2 + n_1+\\sigma|} \\\\\n . & a^N_{22}\\epsilon^{|1\/4+\\sigma|} & \\epsilon^{|5\/8+\\sigma|} \\\\\n . & . 
& \\epsilon^{|1+\\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>\n\\end{eqnarray}\n\\subsubsection*{Fit 4: $SU(5)$ ($u=v=0$) solution not satisfying the GST relation}\nHere we present a solution not satisfying the GST relation of the form of \\eq{eq:ydinl1l3A} for $l_1-l_3=1$, which corresponds to the set of charges of the first line of Table (\\ref{tbl:sol-lepchar-NGST}). We also fix here the expansion parameter $\\varepsilon=0.19$, using the FI term. The high-energy Yukawa and Majorana matrices are:\n\\begin{eqnarray}\nY^u \\! &=&\\! \n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{6} & a^u_{12} \\epsilon^{5} & a^u_{13} \\epsilon^{3} \\\\\n a^u_{21} \\epsilon^{5} & a^u_{22} \\epsilon^{4} & a^u_{23} \\epsilon^{2} \\\\\n a^u_{31} \\epsilon^{3} & a^u_{32} \\epsilon^{2} & a^u_{33}\n \\end{array}\n \\right]\\!,\n \\quad \\quad \\quad\n Y^d \\! = \\! \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{4} & a^d_{12} \\epsilon^{3} & a^d_{13} \\epsilon^{3} \\\\\n a^d_{21} \\epsilon^{3} & a^d_{22} \\epsilon^{2} & a^d_{23} \\epsilon^{2} \\\\\n a^d_{31} \\epsilon & a^d_{32} & a^d_{33}\n \\end{array}\n \\right]\\epsilon^{|k_d|}\\nonumber\\\\\n Y^e\\!&=&\\!\n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{4} & a^e_{12} \\epsilon^{3} & a^e_{13} \\epsilon \\\\\n a^e_{21} \\epsilon^{3} & a^e_{22} \\epsilon^{2} & a^e_{23} \\\\\n a^e_{31} \\epsilon^{3} & a^e_{32}\\epsilon^{2} & a^e_{33}\n \\end{array}\n \\right]\\epsilon^{|k_d|},\n\\quad \n Y^\\nu \\!=\\! \n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1 + 1|} & a^\\nu_{12} \\epsilon^{5\/8} & a^\\nu_{13} \\epsilon \\\\\n a^\\nu_{21} \\epsilon^{|n_1-3\/8|} & a^\\nu_{22} \\epsilon^{3\/8} & a^\\nu_{23} \\\\\n a^\\nu_{31} \\epsilon^{|n_1|} & a^\\nu_{32} \\epsilon^{3\/8} & a^\\nu_{33} \n \\end{array}\n \\right],\n \\nonumber \\\\\n M_{RR}\\! &=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1 + \\sigma|} & \\epsilon^{|-3\/8 + n_1 + \\sigma|} & \\epsilon^{|n_1+\\sigma|} \\\\\n . 
& a^N_{22}\\epsilon^{|-3\/4 + \\sigma|} & \\epsilon^{|-3\/8 + \\sigma|} \\\\\n . & . &\\epsilon^{|\\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>.\n\\label{eq:31}\n\\end{eqnarray}\n\\subsubsection*{Fit 5: $SU(5)$ ($u=v=0$) solution not satisfying the GST relation}\nHere we present another solution not satisfying the GST relation of the form of \\eq{eq:ydinl1l3A} for $l_1-l_3=3\/2$, which corresponds to the set of charges of the second line of Table (\\ref{tbl:sol-lepchar-NGST}). We also fix here the expansion parameter $\\varepsilon=0.185$, using the FI term. The high-energy Yukawa and Majorana matrices are:\n\\begin{eqnarray}\nY^u \\! &=&\\! \n \\left[\n \\begin{array}{ccc}\n a^u_{11} \\epsilon^{6} & a^u_{12} \\epsilon^{5} & a^u_{13} \\epsilon^{3} \\\\\n a^u_{21} \\epsilon^{5} & a^u_{22} \\epsilon^{4} & a^u_{23} \\epsilon^{2} \\\\\n a^u_{31} \\epsilon^{3} & a^u_{32} \\epsilon^{2} & a^u_{33}\n \\end{array}\n \\right]\\!,\n \\quad \\quad \\quad\n Y^d \\! = \\! \n \\left[\n \\begin{array}{ccc}\n a^d_{11} \\epsilon^{9\/2} & a^d_{12} \\epsilon^{3} & a^d_{13} \\epsilon^{3} \\\\\n a^d_{21} \\epsilon^{7\/2} & a^d_{22} \\epsilon^{2} & a^d_{23} \\epsilon^{2} \\\\\n a^d_{31} \\epsilon^{3\/2} & a^d_{32} & a^d_{33}\n \\end{array}\n \\right]\\epsilon^{|k_d|}\\nonumber\\\\\n Y^e\\!&=&\\!\n \\left[\n \\begin{array}{ccc}\n a^e_{11} \\epsilon^{9\/2} & a^e_{12} \\epsilon^{7\/2} & a^e_{13} \\epsilon^{3\/2} \\\\\n a^e_{21} \\epsilon^{3} & a^e_{22} \\epsilon^{2} & a^e_{23} \\\\\n a^e_{31} \\epsilon^{3} & a^e_{32}\\epsilon^{2} & a^e_{33}\n \\end{array}\n \\right]\\epsilon^{|k_d|},\n\\quad \n Y^\\nu \\!=\\! \n \\left[\n \\begin{array}{ccc}\n a^\\nu_{11} \\epsilon^{|n_1 + 1|} & a^\\nu_{12} \\epsilon^{5\/8} & a^\\nu_{13} \\epsilon \\\\\n a^\\nu_{21} \\epsilon^{|n_1-3\/8|} & a^\\nu_{22} \\epsilon^{3\/8} & a^\\nu_{23} \\\\\n a^\\nu_{31} \\epsilon^{|n_1|} & a^\\nu_{32} \\epsilon^{3\/8} & a^\\nu_{33} \n \\end{array}\n \\right],\n \\nonumber \\\\\n M_{RR}\\! 
&=&\\!\n \\left[\n \\begin{array}{ccc}\n \\epsilon^{|2n_1 + \\sigma|} & \\epsilon^{|-5\/8 + n_1+\\sigma|} & \\epsilon^{|-1\/4+n_1+\\sigma|} \\\\\n . & a^N_{22}\\epsilon^{|-5\/4+\\sigma|} & \\epsilon^{|-7\/8+\\sigma|} \\\\\n . & . &\\epsilon^{|-1\/2+\\sigma|}\n \\end{array}\n \\right] \\left<\\Sigma\\right>.\n\\label{eq:32}\n\\end{eqnarray}\n\\subsection{Details of the fitting method}\n\\label{sec:deta-fitt-meth}\nOne of the purposes of these fits is to compare which solution fits the data best while constraining the arbitrary coefficients to remain at $O(1)$.\nWe therefore choose a minimization routine to find these $O(1)$ coefficients and compare the numerical values for the different solutions.\n In the quark sector we use eight experimental inputs in order to determine the parameters (coefficients or phases):\n\\begin{eqnarray}\n\\label{eq:fitparquarks}\nV_{ub}\/V_{cb}, \\quad V_{td}\/V_{ts}, \\quad V_{us},\\quad {\\rm{Im}}\\{J\\},\\quad m_u\/m_c,\\quad m_c\/m_t,\\quad\nm_d\/m_s,\\quad m_s\/m_b.\n\\end{eqnarray}\nWe explain in the Appendix (\\ref{ap:compinf}) how this fit is performed; the important point is that we can only fit eight parameters and the rest need to be fixed. The minimization algorithm has been optimized to fit the solutions satisfying the GST relation because the number of parameters is close to eight. We also fit examples of the non-GST solutions, but since there are more free parameters in these cases (mainly phases) it is impractical to make a fit by fixing so many free parameters. So we present particular examples in these cases which do not necessarily correspond to the best $\\chi^2$.\n\nIn the lepton sector we perform two fits, one for the coefficients of the charged lepton mass matrix and the other for the coefficients of the neutrino mass matrix. We do not perform a combined fit for the coefficients of $Y^\\nu$ and $Y^e$ because the uncertainties in these sectors are quite different. 
While the uncertainties in the masses of the charged leptons are very small, the uncertainties in lepton mixings and quantities related to neutrino masses are still large, such that we cannot determine the parameters involved to a very good accuracy.\n\nThe quantities used for the fit of the coefficients of the charged lepton mass matrix are\n\\begin{eqnarray}\n\\frac{m_e}{m_\\mu},\\quad \\frac{m_\\mu}{m_\\tau},\n\\end{eqnarray}\nsuch that we can just determine two parameters, $a^e_{12}$ and $a^e_{22}$, but for the cases presented here this is enough.\nIn order to do the fit for the coefficients of the neutrino mass matrix we use the observables\n\\begin{eqnarray}\n\\label{eq:fitparneuts}\nt^l_{23},\\quad t^l_{13},\\quad t^l_{12},\\quad \\frac{|m_{\\rm{sol}}|}{|m_{\\rm{atm}}|},\\quad m_{\\nu_3}\n\\end{eqnarray}\nwhere we relate $t^l_{23}$ to the atmospheric mixing, $t^l_{12}$ to the solar mixing and $t^l_{13}$ to the reactor mixing. In this case we are going to be able to fit just five parameters. For this reason, and because the uncertainties in the above observables are significantly larger than the uncertainties in the quark sector, the fits of the coefficients of the neutrino mass matrix have large errors and they may leave room for other solutions once the experimental uncertainties improve. Since we only have an upper bound for the reactor angle, $t^l_{13}$, we fit the solutions in the neighborhood of this upper bound.\n\\subsection{Results of the fits}\n\\subsubsection{Fit 1: $SU(5)$ ($u=v=0$) example satisfying the GST relation}\n\nThis is an $SU(5)$ type solution, and hence $u=v=0$, which satisfies the GST relation. The textures are as laid out in Eq.~(\\ref{eq:9}). \n\n\\subsubsection*{Quark sector}\nWe can use the expressions \\eq{eq:yukeigen} and \\eq{eq:mixsgeral} adapted to the solution of \\eq{eq:sol2gst} in order to fit the Yukawa coefficients, \nalong with the appropriate phases entering into the expressions of mixings. 
The expansion parameter $\\varepsilon$ is determined with the Fayet-Iliopoulos term \nand the appropriate charges cancelling the anomalies, for this case its value is $\\varepsilon=0.183$. The parameters that we \nfit are the real parameters\n\\begin{eqnarray}\n\\label{eq:fitpargst}\na^u_{12},\\quad a^u_{23},\\quad a^d_{22},\\quad a^d_{12},\\quad a^d_{13},\\quad a^d_{23},\\quad a^d_{32},\\quad \\cos(\\Phi_2),\n\\end{eqnarray}\nwhich enter in the expressions of mixings and masses, \\eq{eq:yukeigen}-\\eq{eq:Vsasyu1}. Note that in these expressions the coefficients $a^f_{ij}$ can be complex but for the fit we choose them real and write down explicitly the phases. We are free to choose the parameters to fit. However we need to check which are the most relevant parameters to test the symmetry. Thus we follow this as a guideline to choose the parameters to fit and leave other parameters fixed. Due to the form of \\eq{eq:sol2gst} the mixing angles in the $(2,3)$ sector of both matrices contribute at the same order in the $V_{\\rm{CKM}}$ matrix mixing, $s^Q_{23}=|a^d_{23}-a^u_{23}e^{i\\Phi_{X_{23}}}|\\varepsilon^2$, so we have decided to put a phase here. In the $s^u_{12}$ diagonalization angle and the second eigenvalue of $Y^u$ the combination $a^u_{22}e^{i\\Phi_3}-a^{u\\ 2}_{23}$ appears, so we have chosen as well to include a phase difference there. 
The fixed parameters are then\n\\begin{eqnarray}\n\\label{eq:fixpargst}\na^u_{22},\\quad \\Phi_1, \\quad \\Phi_3,\\quad \\Phi_{X_{23}},\n\\end{eqnarray}\nwhere $\\Phi_1$ has the form of \\eq{eq:phi1} and the phases $\\Phi_3$ and $\\Phi_{X_{23}}$ can be written as {\\footnote{In terms of the $\\beta_i$ phases appearing in the diagonalization matrices, \\eq{eq:pardimatL}, we have $\\Phi_1=-\\beta^{u \\ L}_3$, $\\Phi_2=-\\beta^{d\\ L}_3$ and $\\Phi_{X_{23}}=(\\beta^{d\\ L}_2-\\beta^{d\\ L}_1)-(\\beta^{u\\ L}_2-\\beta^{u\\ L}_1)$.}}\n\\begin{eqnarray}\n\\label{eq:phi3}\n\\Phi_3=\\phi^u_{22}-2\\phi^u_{23},\\quad \\Phi_{X_{23}}=(\\phi^d_{33}-\\phi^d_{23})-(\\phi^u_{33}-\\phi^u_{23}) .\n\\end{eqnarray}\nThe results of the fit in the quark sector appear in the second column of Table (\\ref{tabl:sol2gst}). \n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|l||l|l|}\n\\hline\n\\multicolumn{4}{|c|}{{Quark Fitted Parameters}}\\\\ \\hline\n\\multicolumn{2}{|c|}{{GST sol. 2}}&\n\\multicolumn{1}{|c|}{{GST sol. 2, $u,v\\neq 0$}}&\n\\multicolumn{1}{|c|}{{GST sol. 
3, $u,v\\neq 0$}}\n\\\\ \\hline\nParameter & BFP Value & BFP Value & BFP Value \\\\\n$a^{u}_{12}$& $2.74\\pm 0.61$ & $1.04\\pm 0.19$ & $2.74\\pm 0.71$\\\\\n$a^u_{23}$& $1.68\\pm 0.17 $ & $1.34\\pm 0.13$ & $1.41\\pm 0.18$ \\\\\n$a^d_{22}$& $1.08\\pm 0.18$ & $1.05\\pm 0.11$ & $0.70\\pm 0.23$ \\\\\n$a^d_{12}$& $0.93\\pm 0.15$ & $0.55\\pm 0.20$ & $0.74\\pm 0.13$ \\\\\n$a^d_{13}$& $0.29\\pm 0.21$ & $0.30\\pm 0.14$ & $0.74\\pm 0.17$\\\\\n$a^d_{23}$& $0.79\\pm 0.10$ & $0.70\\pm 0.13$& $0.66\\pm 0.35$\\\\\n$a^d_{32}$& $0.48\\pm 0.17$ & $1.28\\pm 0.32$ & $1.28\\pm 0.58$\\\\\n$\\cos(\\Phi_2)$& $0.454\\pm 0.041$ & $0.456\\pm 0.041$ & $0.547\\pm 0.424$\\\\ \\hline\n\\multicolumn{4}{|c|}{{Quark Fixed Parameters}} \\\\ \\hline\n$\\varepsilon$ & $0.183$ & $0.217$ & $0.154$ \\\\\n$a^{u}_{22}$& $1$& $1$ & $1.4$ \\\\\n$\\cos(\\Phi_3)$ & $0.8$ & $0.83$ & $0.8$\\\\\n$\\cos(\\Phi_{X_{23}})$& $1$ & $1$ & $1$\\\\\n$\\Phi_1$ & $\\pi\/2$ & $\\pi\/2$ & $\\pi\/2$ \\\\ \\hline\n\\multicolumn{3}{|c|}{{$\\chi^2$}} & \\\\ \\hline\n$\\chi^2$ & $1.47$ & $2.41$ & $4.32$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Quark fitted parameters for the examples of Section \\ref{sec:su5-solut-satisfy-GST}). The second column corresponds to \nthe Solution 2 in the $SU(5)$ ($u=v=0$) case, the third column to the Solution 2 in the $u \\ne -v \\neq 0$ case. 
\nThe fourth column presents the fit to the Solution 3 in the $u + v \\neq 0$ case.}}\n\\label{tabl:sol2gst}\n\\end{table}\nGiven these results, we can infer that the structure of the Yukawa matrices has the following form\n\\begin{eqnarray}\nY^u= \\left[\n\\begin{array}{ccc}\n* &y_{12}e^{i\\Phi_1}&y_{13}\\\\\ny_{12}e^{i\\Phi_1}&y_{22}e^{i\\Phi_3}&y_{23}\\\\\ny_{13} &y_{23}&1 \n\\end{array}\n\\right],\\quad\nY^d= \\left[\n\\begin{array}{ccc}\n*& y_{12}e^{i\\Phi_2} & y_{13}e^{i\\Phi_2}\\\\\n\\left[y_{21}e^{i\\Phi^R_2}\\right] & y_{22} & y_{23}\\\\\n*&y_{32}&1\n\\end{array}\n\\right],\n\\end{eqnarray}\nwhere $y_{ij}$ denote real elements and we have associated the phases $\\Phi_i$ to particular elements of the matrices. Note that we need three phases to determine the amount of CP violation experimentally required because in all the fits we found $\\Phi_{X_{23}}=0$. If this phase were not zero, it could have been associated with the $Y^d_{23}$ element. The entries marked with $*$ cannot be determined because they are not restricted by masses and mixings, due to the structure of the Yukawa matrices. The value of $y_{21}e^{i\\Phi^R_2}$ is determined indirectly because we need to satisfy the GST relation, so $t^R_{12}=t^L_{12}$ for both the up and down quark sectors.\n\\subsubsection*{Lepton sector}\nWe have fixed the coefficients of $Y^d$ in the quark sector and now we can use the results for the charged lepton matrix $Y^e$. The masses of the charged leptons are obtained through the $SU(5)$ relations, ensuring the correct value of the charged lepton masses, once the masses of the d-quarks are in agreement with experimental information. Thus in this case we perform a fit just for the coefficients of the neutrino mass matrix, $Y^\\nu$, using the ratio of neutrino mass differences (solar to atmospheric), the mass of the heaviest neutrino and the lepton mixings, which have contributions from both the charged leptons and the neutrinos. 
Here the relevant parameter that we need from the quark sector is $a^d_{32}$ because the tangent of the angle diagonalizing $Y^e$ on the left is related to this parameter: $t^e_{23}=a^e_{23}\\propto a^d_{32}$. Since this is an $O(1)$ mixing we have to take it into account for the results of the $U_{MNS}$ mixings, thus we have\n\\begin{eqnarray}\n\\label{eq:t23lept}\nt^l_{23}=\\frac{|c^e_{23}s^\\nu_{23}e^{-i\\phi_{X_{23}}}-s^e_{23}c^{\\nu}_{23} |}{|s^\\nu_{23}s^e_{23}+c^{\\nu}_{23}c^e_{23}e^{i\\phi_{X_{23}}}| },\n\\end{eqnarray}\nwhere we use the expression \\eq{eq:tan23} to determine $s^\\nu_{23}$ and $c^{\\nu}_{23}$, and the approximation $t^e_{23}=a^d_{32}$; $\\phi_{X_{23}}$ is a phase relating $e$ and $\\nu$ mixings in the $(2,3)$ sector \\cite{NuMngsPhases}. We denote the $U_{MNS}$ angles by the superscript $l$ and by $e$ and $\\nu$ the charged lepton and neutrino mixings respectively. The mixings $t^l_{13}$ and $t^\\nu_{12}$ are essentially given by the neutrino mixings, \\eqs{eq:mixmasresn}, so we fit these mixings according to \\eq{eq:tan13} and \\eq{eq:tan12} respectively. We note from Table (\\ref{tabl:sol2gstneut}) that in the lepton sector we need two phases, $\\phi_{X_{23}}$ and $\\phi^{\\nu}$. The phase $\\phi_{X_{23}}$ can be associated to the charged lepton sector and we can put it in the $Y^e_{23}$ entry. The second phase, $\\phi^{\\nu}$ can be assigned to $Y^{\\nu}_{22}$.\nWe fit the mass ratio and the heaviest neutrino state using their expressions appearing in \\eqs{eq:mixmasresn}. The results for this fit appear in the second column of Table (\\ref{tabl:sol2gstneut}).\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{7}{|c|}{{Neutrino Fitted Parameters}}\\\\ \\hline\n\\multicolumn{3}{|l|}{{$\\!\\!\\!$Parameter$\\!\\!\\!$ ~~ GST sol. 2}}&\n\\multicolumn{2}{|c|}{{GST sol. 2, $u,v\\neq 0$}}&\n\\multicolumn{2}{|c|}{{GST sol. 
3, $u,v\\neq 0$}}\n\\\\ \\hline\n & $M_P$ & $M_G$ & $M_P$ & $M_G$ & $M_P$ & $M_G$ \\\\\n & $\\!\\!$ BFP value & $\\!\\!$ BFP value & $\\!\\!$ BFP value & $\\!\\!$ BFP value & $\\!\\!$ BFP value & $\\!\\!\\!$ BFP value \\\\\n$a^\\nu_{23}$& $\\!\\!\\!\\!\\!0.75\\pm 0.79\\!\\!\\!\\!$ & $\\!\\!\\!\\!0.67\\pm 0.61\\!\\!\\!$ & $\\!\\!0.21\\pm 0.25\\!\\!\\!$ & $\\!\\!\\!\\!0.85\\pm 0.27\\!\\!\\!\\!$ & $\\!\\!\\!0.30\\pm 0.18\\!\\!\\!$ & $\\!\\!\\!\\!0.40\\pm 0.15\\!\\!\\!\\!$ \\\\\n$a^\\nu_{13}$& $\\!\\!\\!\\!\\!1.41\\pm 1.32\\!\\!\\!\\!$ & $\\!\\!\\!\\!\\!1.36\\pm 1.10\\!\\!\\!$ & $\\!\\!0.97\\pm 0.47\\!\\!\\!$ & $\\!\\!\\!\\!1.25\\pm 0.63\\!\\!\\!\\!$ & $\\!\\!\\!1.02\\pm 0.50\\!\\!\\!$ & $\\!\\!\\!\\!1.45\\pm 0.70\\!\\!\\!\\!$ \\\\\n$a^\\nu_{12}$& $\\!\\!\\!\\!\\!2.23\\pm 0.92\\!\\!\\!\\!$ & $\\!\\!\\!\\!\\!2.10\\pm 0.81\\!\\!\\!$ & $\\!\\!1.25\\pm 0.29\\!\\!\\!$ & $\\!\\!\\!\\!2.08\\pm 0.69\\!\\!\\!\\!$ & $\\!\\!\\!1.35\\pm 0.34\\!\\!\\!$ & $\\!\\!\\!\\!1.97\\pm 0.45\\!\\!\\!\\!$ \\\\\n$a^\\nu_{22}$& $\\!\\!\\!\\!\\!1.84\\pm 1.37\\!\\!\\!\\!$ & $\\!\\!\\!\\!\\!1.96\\pm 1.92\\!\\!\\!$ & $\\!\\!1.23\\pm 1.41\\!\\!\\!$ & $\\!\\!\\!\\!1.98\\pm 0.79\\!\\!\\!\\!$ & $\\!\\!\\!1.48\\pm 1.31\\!\\!\\!$ & $\\!\\!\\!\\!2.26\\pm 1.6\\!\\!\\!\\!$ \\\\\n$a^\\nu_{32}$& $\\!\\!\\!\\!1.47\\pm 1.93\\!\\!\\!\\!$ & $\\!\\!\\!\\!\\!0.98\\pm 0.91\\!\\!\\!$ & $\\!\\!0.65\\pm 0.70\\!\\!\\!$ & $\\!\\!\\!\\!1.53 \\pm 0.75\\!\\!\\!\\!$ & $\\!\\!\\!0.53 \\pm 0.78\\!\\!\\!$ & $\\!\\!\\!\\!0.56 \\pm 0.98\\!\\!\\!\\!$ \\\\\n\\hline\n\\multicolumn{7}{|c|}{{Neutrino Fixed Parameters}}\\\\ \\hline\n$\\varepsilon$ & $0.183$ & & $0.217$ & & $0.154$ & \\\\\n$a^e_{23}$ & $a^d_{32}=0.48$ & & $-1.6$ & & $1.2$ &\\\\\n$a^\\nu_{33}$ & $1$ & & $1$ & & $0.7$ & $1$\\\\\n$\\sigma$ & $29\/2$ & $21\/2$ & $29\/2$ & $19\/2$ & $55\/4$ & $39\/4$ \\\\ \n$\\!\\!\\!c(\\phi_{\\!X_{23}})\\!\\!\\!$ & $0.29$ & & $0.29$ & & $1$ & $0.5$\\\\\n$\\!\\!\\!c(\\!\\phi^{\\nu}\\!)\\!\\!$ & $-1$ & & $-0.5$ & & $0.86$ & $1$\\\\ 
\\hline\n$\\!\\!\\!\\!(n_2,n_3)\\!\\!\\!$ & \\multicolumn{2}{|c|}{{$(1\/4,1\/2)$}}&\n\\multicolumn{4}{|c|}{{$(1\/8,1\/2)$}}\\\\\n\\hline\n\\multicolumn{7}{|c|}{{$\\chi^2$}} \\\\ \\hline\n$\\chi^2$ & $0.44$ & $0.12$ & $1.67$ & $0.49$ & $2.16$& $0.72$\\\\ \n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Neutrino fitted parameters for the examples of Section \\ref{sec:su5-solut-satisfy-GST}. The second column corresponds to the \nSolution 2 in the $SU(5)$ ($u=v=0$) case, the third column to the Solution 2 in the $u + v \\neq 0$ case. The fourth column \npresents the fit to the Solution 3 in the $u + v \\ne 0$ case. Here $c(y)$ is the cosine of the respective parameter.}}\n\\label{tabl:sol2gstneut}\n\\end{table}\n\n\\subsubsection{Fit 2 and Fit 3: Extended $SU(5)$ solutions with $u + v \\neq 0$ satisfying the GST relation}\nThese are both extended $SU(5)$ solutions, with $u+v \\ne 0$, satisfying the GST relation.\nFit 2 corresponds to the textures laid out in Eq.~(\\ref{eq:3}), and Fit 3 corresponds to the textures laid out\nin Eq.~(\\ref{eq:17}).\n\n\\subsubsection*{Quark sector}\n\nThis section is completely analogous to the previous one, the only difference is in the value of $\\varepsilon$. We present here two examples. The first example corresponds to the first solution of \\eq{eq:sol2gst}, which we called Solution 2, and corresponds to $\\varepsilon=0.217$ according to the charges of the third row of \\eq{tbl:sol-leptch-GSTuvne0}. The second example corresponds to the first solution of \\eq{eq:sol3gst}, which has been called Solution 3 and corresponds to $\\varepsilon=0.154$, according to the charges of the fourth row of \\eq{tbl:sol-leptch-GSTuvne0}. The fitted and fixed parameters are also those of the previous example, \\eq{eq:fitpargst} and \\eq{eq:fixpargst} respectively. 
The results for the quark fitting are presented in the third and fourth column of Table (\\ref{tabl:sol2gst}), respectively, so we can compare directly with the previous case.\n\\subsubsection*{Lepton sector}\nThis case is different from Section \\ref{sec:fit-1} because now we do not have the $SU(5)$ relations. Instead the parameter \n$k_e$ is different from $k_d$, as explained in Section (\\ref{sec:su5q}), and hence $Y^e\\neq (Y^d)^T$. In this case we perform two fits, one for the coefficients of the charged lepton mass matrix, $Y^e$, and another for the coefficients of the neutrino mass matrix, $Y^\\nu$.\n\nFor Solution 2, taking into account the value of the charges, the second row of Table (\\ref{tbl:sol-leptch-GSTuvne0}), and that $m=u+v=1\/2$, we have $k_e=-8\/3$. Since we need $m_b\\approx m_{\\tau}$, where these masses are given by \n\\begin{eqnarray}\n\\label{eq:mbmtau}\nm_b&=&m_t\\varepsilon^{|k_d|},\\quad k_d=l_3+3e_3+u+4(u+v)\/3\\nonumber\\\\\nm_\\tau&=&m_t\\varepsilon^{|k_e|},\\quad k_e=l_3+3e_3+u+(u+v),\n\\end{eqnarray}\nwe expect the sum $(u+v)$ to remain small.\n\nNow the coefficients $a^e_{23}$ and $a^d_{32}$ are not related, but we can fix $a^e_{23}$ in the neutrino sector such that it is in agreement with the results from neutrino oscillations. We have performed a fit using the experimental information on the parameters of \\eq{eq:fitparneuts}. Here we have also used the expression \\eq{eq:t23lept} in order to fit the atmospheric angle, the expressions \\eq{eq:tan13} and \\eq{eq:tan12} to fit $t^l_{13}$ and $t^l_{12}$ (reactor and solar angle respectively), and the mass ratio and the heaviest neutrino state using their expressions appearing in \\eqs{eq:mixmasresn}. The results for this fit appear in the third column of Table (\\ref{tabl:sol2gstneut}). 
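As a quick numerical illustration (not part of the fit itself) of why a small $(u+v)$ keeps $m_b\approx m_\tau$, one can evaluate the ratio $m_b/m_\tau=\varepsilon^{|k_d|-|k_e|}$ with the values quoted in the text and in Table (\ref{tabl:tanbetares}) for this solution, namely $\varepsilon=0.217$, $|k_d|=5/2$ and $k_e=-8/3$:

```python
import math

# Illustrative values taken from the text: eps = 0.217, |k_d| = 5/2, k_e = -8/3.
eps = 0.217
k_d = 5.0 / 2.0
k_e = abs(-8.0 / 3.0)

# m_b / m_tau = eps^|k_d| / eps^|k_e| = eps^(|k_d| - |k_e|)
ratio = eps ** (k_d - k_e)
print(ratio)  # ~1.3: an order-one number, so m_b ~ m_tau even though k_d != k_e
```

The exponent difference is only $-1/6$, so the ratio stays of order one; a large $(u+v)$ would spoil this.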
\n\nOnce the parameter $a^e_{23}$ has been fixed, we fit the parameters of the charged lepton mass matrix, of the form \\eq{eq:chleptmatuvn0}, and the other parameters as in the first solution of \\eq{eq:sol2gst}. In this case the relevant parameters are $a^e_{12}$ and $a^e_{22}$. However, if we just fit the expressions\n\\begin{eqnarray}\n\\frac{m_e}{m_{\\mu}}&=&\\frac{|a^e_{12}|^2}{|(a^e_{22}-a^e_{23}a^e_{32})|^2}\\varepsilon^{4\/3}=s^{e\\ 2}_{12},\\nonumber \\\\\n\\frac{m_\\mu}{m_\\tau}&=&(a^e_{22}-a^e_{23}a^e_{32})\\varepsilon^2,\n\\end{eqnarray}\nthe coefficients $a^e_{12}$ and $a^e_{22}$ are not quite $O(1)$, so we have to make use of a coefficient $c$ such that $(a^e_{22}-a^e_{23}a^e_{32})\\rightarrow \\ (a^e_{22}-a^e_{23}a^e_{32})\/c $, e.g. $c=3$, in order to have acceptable values for the charged lepton masses. This fit is presented in the second column of Table (\\ref{tabl:sol2gscluvn0}). In this case the extra coefficient needed for the fit is not really justified in the context of just a single $U(1)$ symmetry.\n \nFor Solution 3, we have $m=1\/2$, $k_e=-13\/6$, according to the charges of the third row of Table (\\ref{tbl:sol-leptch-GSTuvne0}). The fit of the coefficients of the neutrino mass matrix is completely analogous to that for Solution 2, and the results appear in the fourth column of Table (\\ref{tabl:sol2gstneut}). The relevant parameters for the charged lepton sector are\n\\begin{eqnarray}\n\\frac{m_e}{m_{\\mu}}&=&\\frac{|a^e_{12}|^2}{|(a^e_{22}-a^e_{23}a^e_{32})|^2}\\varepsilon^{29\/6}=s^{e\\ 2}_{12},\\nonumber \\\\\n\\frac{m_\\mu}{m_\\tau}&=&(a^e_{22}-a^e_{23}a^e_{32})\\varepsilon^{7\/4}.\n\\end{eqnarray}\nFor this case {\\it there is no need} to invoke another coefficient as for Solution 2. $O(1)$ coefficients in this case can account for the masses and mixings in the leptonic sector. 
Once the coefficient $a^e_{23}$ is fitted in the charged lepton sector, we need to use this parameter as a fixed parameter in the fit for the neutrino sector, but in this case the fit is not as good as for the previous solution. The results are presented in the third column of Table (\\ref{tabl:sol2gscluvn0}).\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|l|l|}\n\\hline\n\\multicolumn{3}{|c|}{{Charged lepton Fitted Parameters}}\\\\ \\hline\n\\multicolumn{2}{|c|}{{GST sol. 2, $u,v\\neq 0$}}&\n\\multicolumn{1}{|c|}{{GST sol. 3, $u,v\\neq 0$}}\n\\\\ \\hline\nParameter & BFP Value & BFP Value \\\\\n$a^e_{12}$& $0.56\\pm 0.006$ & $2.88 \\pm 0.032$ \\\\\n$a^e_{22}$& $0.92\\pm 0.013$ & $1.87 \\pm 0.013$\\\\\n\\hline\n\\multicolumn{3}{|c|}{{Charged lepton Fixed Parameters}}\\\\ \\hline\n$\\varepsilon$ & $0.217$ & $0.154$\\\\\n$a^e_{23}$ & $-1.6$ & $1.2$ \\\\ \n$a^e_{32}$ & $1.8$ & $1.2$ \\\\ \\hline\n\\multicolumn{3}{|c|}{{$\\chi^2$}} \\\\ \\hline\n$\\chi^2$ & $0.05$ & $1.2\\times 10^{-5}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Charged lepton fitted parameters for the examples of Section \\ref{sec:su5-solut-satisfy-GST}. The second column corresponds to Solution 2 in the $u + v \\neq 0$ case. The third column presents the fit to Solution 3 in the $u + v \\ne 0$ case.}}\n\\label{tabl:sol2gscluvn0}\n\\end{table}\n\\subsubsection{Fit 4: $SU(5)$ type ($u=v=0$) solution not satisfying the GST relation}\n\nThis is an $SU(5)$-type solution, hence $u=v=0$, which doesn't satisfy the GST relation. The charges are as laid out\nin Eq.~(\\ref{eq:31}). 
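All the best-fit-point (BFP) values quoted in these tables result from minimizing a $\chi^2$ built from the deviations of the predicted observables from their measured values, in units of the experimental errors. A minimal one-parameter sketch of this kind of minimization (the observable, error and model below are toy numbers chosen purely for illustration, not the actual inputs of these fits):

```python
# Toy chi^2 minimization in the spirit of the fits in the text:
# a single O(1) coefficient `a` predicting one observable, obs = a * eps**2.
# All numbers here are illustrative, not the paper's actual inputs.
eps = 0.22                 # expansion parameter (illustrative)
obs, sigma = 0.05, 0.005   # "measured" value and error (illustrative)

def chi2(a):
    pred = a * eps ** 2
    return ((pred - obs) / sigma) ** 2

# Simple grid scan over a in [0, 3); the paper's fits use a proper
# multi-parameter minimization, but the principle is the same.
grid = [i / 10000.0 for i in range(0, 30000)]
best_a = min(grid, key=chi2)
print(best_a, chi2(best_a))  # best_a close to obs / eps**2, an O(1) coefficient
```

The point of such fits is precisely that the preferred coefficient comes out of order one once the expansion parameter carries the hierarchy.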
\n\n\\subsubsection*{Quark sector}\nHere we also use the expressions \\eq{eq:yukeigen} and \\eq{eq:mixsgeral} adapted to the solution \\eq{eq:ydinl1l3A} for $r'_d=l_1-l_3=1$ and check the fit with an exact numerical solution, which agrees with the fit to \\eq{eq:yukeigen} and \\eq{eq:mixsgeral} within a $5\\%$ error.\nSince the fit can just fit eight parameters, in this case it is not possible to select out ``the best fit'', according to the criteria that we have used for the previous fits, so we present the following solution for the coefficients of the up and down Yukawa matrices:\n\\begin{eqnarray}\n\\label{eq:ngstsol1}\na^u&=&\n \\left[\n\\begin{array}{ccc}\n0.42 & 0.58 e^{-i\\pi\/2} & 0.51\\\\\n0.58 e^{-i\\pi\/2} & 0.9 e^{-i\\pi} &0.43e^{-i\\pi\/2} \\\\\n0.51 &0.43e^{-i\\pi\/2} & 1 \\\\\n\\end{array}\n\\right],\\nonumber\\\\\na^d&=&\n \\left[\n\\begin{array}{ccc}\ne^{-i0.5} & 0.8 & 0.29 e^{i 0.48}\\\\\n1.63 e^{-i1.49} & 0.86 e^{-i1.2} &0.55e^{-i0.7} \\\\\ne^{-i0.79} &0.4e^{-i0.5} & e^{-i3.05} \\\\\n\\end{array}\n\\right].\n\\end{eqnarray}\nFor this fit we have $\\chi^2=2.31$.\n\\subsubsection*{Lepton sector}\nIn the lepton sector, once we have done the fit to the quark masses, the $SU(5)$ relations produce acceptable values for the charged lepton masses, what we need to care about are the mixings for the neutrino sector. According to the expressions for the mixings in the $(1,2)$ and $(1,3)$ neutrino sector, \\eq{eq:mixangngst}, now $t^\\nu_{13}=a^\\nu_{13}\\varepsilon\/\\sqrt{a^{\\nu \\ 2}_{33}+a^{\\nu\\ 2}_{23}}$ and \n $t^\\nu_{12}=a^\\nu_{12}\\varepsilon^{1\/4}\/(c^{\\nu}_{23}a^{\\nu}_{22}-s^{\\nu}_{23}a^{\\nu}_{32}))$, for $(n_2,n_3)=(-3\/8,0)$. 
On the other hand, the mixings in the charged lepton sector go as $t^e_{12}=|a^d_{21}+3a^d_{23}a^d_{31}\/a^d_{33}|\\varepsilon\/3|a^d_{22}+3a^d_{32}a^d_{23}|$ and $t^e_{13}=a^d_{31}\\varepsilon\/|a^d_{33}+|a^{d}_{32}|^2|$, so here these contributions are important to the $U_{MNS}$ $s^l_{12}$ and $s^l_{13}$ mixings, identified respectively with the solar and reactor mixings; for example, for $s^l_{13}$ we have\n\\begin{eqnarray}\ns^l_{13}&=&|c^{e}_{12}c^{e}_{13}s^\\nu_{13}-c^\\nu_{13}(e^{i(\\beta^{e}_1-\\beta^\\nu_1)}c^\\nu_{23}(c^{e}_{12}c^{e}_{23}s^{e}_{13}+e^{i\\beta^{e}_3}s^{e}_{12}s^{e}_{23})\\nonumber \\\\\n& &-e^{i(\\beta^{e}_2-\\beta^{\\nu}_2)}s^{\\nu}_{23}(e^{i\\beta^{e}_3}s^{e}_{12}c^{e}_{13}-c^{e}_{12}s^{e}_{13})s^{e}_{23})|.\n\\end{eqnarray}\nThe mixing $s^l_{23}$ is driven by the neutrino mixing $s^\\nu_{23}$:\n\\begin{eqnarray}\ns^l_{23}c^l_{13}\\approx |e^{i(\\beta^{e}_2-\\beta^\\nu_2)}s^\\nu_{23}c^{e}_{12}c^{e}_{23}-e^{i(\\beta^{e}_1-\\beta^\\nu_1)}s^{e}_{23}c^\\nu_{13}c^\\nu_{23}|.\n\\end{eqnarray}\nDespite all the contributions to the mixings $s^l_{13}$ and $s^l_{12}$, we can reproduce the observed masses and mixings in the neutrino sector with $O(1)$ coefficients and without any phase in this sector; we just use the phases of the right-handed quark matrix, which are given by\n\\begin{eqnarray}\n\\beta^{e}_1&=&{\\rm{ArcTan}}\\left[\\frac{\\sin(\\phi^d_{33})}{\\cos(\\phi^d_{33})+|a^d_{32}|^2}\\right]-\\phi^d_{31}\\nonumber\\\\\n\\beta^{e}_2&=&(\\phi^d_{32}-\\phi^d_{33})+\\beta^{dR}_1\\nonumber\\\\\n\\beta^{e}_3&=&(\\phi^d_{22}-\\phi^d_{21})-\\beta^{dR}_2,\n\\end{eqnarray}\nand are specified in \\eq{eq:ngstsol1}.\nThe results of this fit are given in the second column of Table (\\ref{tabl:sol2nongstneut}).\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|l|l|}\n\\hline\n\\multicolumn{3}{|c|}{{Neutrino Fitted Parameters}}\\\\ \\hline\n\\multicolumn{2}{|c|}{{Non GST sol. 1}}&\n\\multicolumn{1}{|c|}{{Non GST sol. 
2}}\\\\\n\\hline\nParameter & BFP Value & BFP Value\\\\\n$a^\\nu_{23}$& $1.6\\pm 0.8$ & $2\\pm 0.9$\\\\\n$a^\\nu_{13}$& $1.4\\pm 0.7$ & $0.9\\pm 0.3$ \\\\\n$a^\\nu_{12}$& $1\\pm 0.6$ & $1.6\\pm 0.3$ \\\\\n$a^\\nu_{22}$& $0.67\\pm 0.27$ & $0.5 \\pm 0.4$ \\\\\n\\hline\n\\multicolumn{3}{|c|}{{Neutrino Fixed Parameters}}\\\\ \\hline\n$\\varepsilon$ & $0.19$ & $0.185$\\\\\n$a^e_{23}$ & $-3a^d_{32}=-1.2$ & $-3a^d_{32}=-1.25$\\\\\n$a^\\nu_{33}$ & $1$ & $1$ \\\\\n$a^N_{22}$ & $2$ & $2$\\\\\n$\\sigma$ & $(4.5,0.5)$ & $(5,1)$ \\\\ \n$(n_2,n_3)$ &$(-3\/8,0)$ & $(-5\/8,-1\/4)$\\\\\n\\hline\n\\multicolumn{3}{|c|}{{$\\chi^2$}} \\\\ \\hline\n$\\chi^2$ & $(5.09,4.77)$ & $(4.78,3.79)$\\\\ \n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Neutrino fitted parameters for two of the non GST examples of Section \\ref{sec:su5-solutions-not-GST}. The second and third columns correspond respectively to solutions 1 and 2 in the non GST $SU(5)$ ($u=v=0$) cases; for the first we have used $r'_d=1$ and for the second $r'_d=3\/2$. In the first case we have fitted $t^\\nu_{13}$ to saturate its current upper limit, while in the second case we have allowed it to be smaller. 
 \nThe first entry for $\\sigma$ corresponds to the fit using $M_P$ and the second entry using $M_G$; analogously for $\\chi^2$.}}\n\\label{tabl:sol2nongstneut}\n\\end{table}\n\\subsubsection{Fit 5: $SU(5)$ type ($u=v=0$) solution not satisfying the GST relation}\n\nThis is an $SU(5)$-type solution, hence $u=v=0$, which doesn't satisfy the GST relation.\nThe textures are as laid out in Eq.~(\\ref{eq:32}).\n\n\n\\subsubsection*{Quark sector}\nHere we present the following solution for the case $r'_d=l_1-l_3=\\frac{3}{2}$; in this case the coefficients of the up and down Yukawa matrices are:\n\\begin{eqnarray}\n\\label{eq:ngstsol1_5}\na^u&=&\n \\left[\n\\begin{array}{ccc}\n0.5& 0.6 e^{-i\\pi\/2} & 0.5\\\\\n0.6 e^{-i\\pi\/2} & e^{-i\\pi} &0.43e^{-i\\pi\/2} \\\\\n0.5 &0.43e^{-i\\pi\/2} & 1 \\\\\n\\end{array}\n\\right],\\nonumber\\\\\na^d&=&\n \\left[\n\\begin{array}{ccc}\n1& 0.72 & 0.29 e^{i 0.49}\\\\\n1.82 e^{-i2.28} & 0.76 e^{-i1.12} &0.55e^{-i0.71} \\\\\ne^{-i1.57} &0.4e^{-i0.41} & e^{-i2.951} \\\\\n\\end{array}\n\\right].\n\\end{eqnarray}\nFor this fit we have $\\chi^2=2.10$.\n\\subsubsection*{Lepton sector}\nThe analysis of this fit is completely analogous to that of Fit 4; the results of the fitting procedure are presented in the third column of Table \\ref{tabl:sol2nongstneut}.\n\\subsubsection{Top and bottom masses and $\\mathbf{\\tan\\beta}$}\nFor these cases $\\tan\\beta$ and $a^d_{33}$ are predictions, once the coefficient $a^u_{33}$ is fixed through the value of $m_t$, $m_t=Y^u_{33}v\/\\sqrt{2}$. The values of $a^u_{33}$, $a^d_{33}$ and $\\tan\\beta$ for the cases presented in this section are given in Table (\\ref{tabl:tanbetares}). 
We can see that for a natural value of $a^u_{33}=1$, we have acceptable values for $\\tan\\beta$ (which should be $>2$) and $a^d_{33}$ in any of the cases presented.\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|r|l|l|l|}\n\\hline\n\\multicolumn{4}{|c|}{{$\\tan\\beta$, $a^u_{33}$ and $a^d_{33}$}}\\\\ \\hline\n\\multicolumn{1}{|r|}{Parameter} &\n\\multicolumn{1}{|l|}{{GST sol. 2, $u\\!=v\\! =\\! 0$}}&\n\\multicolumn{1}{|l|}{{GST sol. 2, $u,v\\neq 0$}}&\n\\multicolumn{1}{|l|}{{GST sol. 3, $u,v\\neq 0$}}\n\\\\ \\hline\n$a^u_{33}$ & $(1,1.34)$ & $(1,1.3)$ & $(1,1.3)$\\\\\n$a^d_{33}$ & $(5.33^{-1.13}_{+2.81},2.40^{-0.12}_{+0.13})$ & $(3.49^{-0.73}_{+1.84},1.62^{-0.09}_{+0.08})$ & $(3.23^{-0.68}_{+1.70}, 1.62^{-0.09}_{+0.10})$ \\\\\n$\\tan\\beta$ & $(3.00^{-0.66}_{+4.82}, 1.00^{-0.06}_{+0.06})$ & $(3.00^{-0.66}_{+1.61}, 1.07^{-0.07}_{+0.07})$ & $(3.00^{-0.45}_{+1.61}, 1.07^{-0.07}_{+0.07})$ \\\\\n\\hline\n($\\epsilon$, $|k_d|$) & ($0.183$, $5\/2$) & ($0.217$, $5\/2$) & ($0.154$, $2$)\\\\ \\hline\n\\multicolumn{1}{|r|}{Parameter} &\n\\multicolumn{1}{|l|}{{$\\!\\!$Non GST sol. 1, $u\\!=v\\! =\\!0\\!\\!\\!$}}&\n\\multicolumn{2}{|l|}{$\\!\\!$Non GST sol. 2, $u\\! = v\\! =\\!0\\! $}\\\\ \\hline\n$a^u_{33}$ & $(1,1.2)$ & $(1,1.2)$ & \\\\\n$a^d_{33}$ & $(2.12^{-0.44}_{+0.98},1.1^{-0.03}_{+0.08})$ & $(2.11^{-0.45}_{+0.87},1.3^{-0.06}_{+0.09})$ & \\\\\n$\\tan\\beta$ & $(3^{-0.66}_{+1.61},1.3^{-0.1}_{+0.1})$ & $(3^{-0.78}_{+1.32},1.2^{-0.2}_{+0.2})$ & \\\\\n\\hline\n($\\epsilon$, $|k_d|$) & ($0.19$, $2$) & ($0.185$, $2$) & \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\footnotesize{Value of $a^d_{33}$ and $\\tan\\beta$ for the different models presented, once $a^u_{33}$ is fixed using $m_t$.}}\n\\label{tabl:tanbetares}\n\\end{table}\n\\subsection{Comparison to the $SU(3)$ case}\nIn this section we present the comparison to a generic $SU(3)$ case\n\\cite{King:2001uz}. 
\nWhat we fit are the $O(1)$ coefficients of Yukawa matrices of the form\n\\begin{eqnarray}\nY^f= \\left[\n\\begin{array}{ccc}\n\\varepsilon^8_f &\\varepsilon^3_f& \\varepsilon^3_f\\\\\n\\varepsilon^3_f &\\varepsilon^2_f& \\varepsilon^2_f\\\\\n\\varepsilon^3_f &\\varepsilon^2_f& 1\n\\end{array}\n\\right],\n\\end{eqnarray}\nwhere we allow two different expansion parameters $\\varepsilon_u$ and $\\varepsilon_d$ and complex phases to reproduce the CP-violating phase. It is enough to consider one different phase in each of the $Y^u$ and $Y^d$ matrices. Here we put the phases on $Y^d_{13}$ and $Y^u_{12}$ \\cite{Ross:2004qn}, but we have the freedom to use other choices. Here we have also used the same minimization method as for the $U(1)$ cases. The results of these fits are consistent with previous determinations of these parameters, \\cite{Roberts:2001zy, Ross:2004qn}, taking into account the change induced by the different value used here for the parameter $m_c\/m_s=15.5\\pm 3.7$ and the different methods used for the determination of coefficients{\\footnote{In \\cite{Roberts:2001zy, Ross:2004qn} $m_c\/m_s=9.5\\pm 1.7$.}}. 
Because of the minimization procedure, the fits presented here have the lowest possible $\\chi^2$.\n\\begin{table}[t]\n\\begin{center}\n\\begin{tabular}{|r|l||l|}\n\\hline\n\\multicolumn{3}{|c|}{{ Quark fitted Parameters, SU(3)-like case}}\\\\ \\hline\nParameter & BFP Value $\\pm \\sigma$ & BFP Value $\\pm \\sigma$ \\\\\n$a^{\\prime u}_{22}$& $1.11\\pm 0.55$ & $1.11\\pm 0.07$ \\\\\n$a^d_{12}$& $0.66\\pm 0.32$ & $2.45\\pm 0.20 $ \\\\\n$a^d_{13}$& $0.10\\pm 0.12$ & $0.91\\pm 0.15$ \\\\\n$a^d_{22}$& $0.74\\pm 0.10 $ & $1.77\\pm 0.09$ \\\\\n$a^d_{23}$& $0.45\\pm 0.29 $ & $1.18\\pm 0.12$ \\\\\n$\\epsilon^u$& $0.05\\pm 0.007$ & $0.05\\pm 0.007$ \\\\\n$\\epsilon^d$& $0.25\\pm 0.03$ & $0.16 \\pm 0.02$ \\\\\n$\\cos(\\Phi_2)$& $0.516\\pm 0.1$ & $0.450\\pm 0.045$ \\\\ \\hline\n\\multicolumn{3}{|c|}{{ Quark Fixed Parameters, SU(3)-like case}}\\\\ \\hline\n$\\Phi_1^*$ & $-1.25 \\approx -0.8\\pi\/2$ & $1.120 \\approx 0.7\\pi\/2$ \\\\ \\hline\n\\multicolumn{3}{|c|}{{$\\chi^2$}} \\\\ \\hline\n$\\chi^2$ & $0.972$ & $0.974$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\small{Fitted and fixed parameters for the $SU(3)$-like case.}}\n\\label{tabl:paru1su3}\n\\end{table}\nWe have not included a fit in the neutrino sector for the $SU(3)$ case because in the $SU(3)$-like cases the neutrino sector requires more assumptions than in the analogous $U(1)$ cases.\n\nAnother important difference between the $SU(3)$ and the $U(1)$ cases presented here is that in the first one there are \ntwo expansion parameters, $\\varepsilon^u$ and $\\varepsilon^d$, which have been fitted, while in the $U(1)$ cases there is only one expansion \nparameter, which can be fixed by relating the $U(1)$ symmetry to the cancellation of anomalies and the Fayet-Iliopoulos term. 
\nThis has allowed more $O(1)$ coefficients to be fitted.\n\nBy comparing Tables (\\ref{tabl:sol2gst}) and (\\ref{tabl:paru1su3}) we can see that, according to the minimization \nprocedure and the criteria of $O(1)$ coefficients, the second case of the $SU(3)$ solution fits the data better. \nHowever, the $U(1)$ solutions also fit well and, taking into account the fact that for the neutrino sector we \nhave just added the SRHND conditions, the fits in both of the $U(1)$ cases presented are good. We can therefore consider \nthat $U(1)$ symmetries are still an appealing description of the fermion masses and mixings observed. Note that although \nSolution 3 in the $u + v \\neq 0$ case does not fit the data as well as Solution 2 (in either case, $u=v=0$ or not) in \nthe quark sector, it does reproduce masses and mixings in the charged lepton sector. We have for this case $Y^e\\neq (Y^d)^T$, \nbut we have $m_b\\approx m_{\\tau}$ without introducing ad hoc $O(1)$ coefficients in order to reproduce the appropriate mixings.\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|l|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{ Comparison}}\\\\ \\hline\n & $U(1)$ (GST)& $U(1)$ (Non-GST)& $SU(3)$-like \\\\\n\\# of expansion pars. &1& 1&2\\\\\n\\# of free pars. (quark sector) &12& $>$18 &10\\\\\nGST relation &yes & no & yes\\\\\nprediction for $\\tan\\beta$ & small& small &no\\\\\nlepton sector &o.k.&o.k.&o.k.\\\\\nsimple flavour charges &no &yes & yes\\\\\n \\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\small{Some criteria of comparison. Here the number of free parameters corresponds to the number of coefficients, phases and expansion parameters that need to be adjusted or determined in the fits.}}\n\\label{tabl:compar}\n\\end{table}\n\nGiven the results of these fits we need further criteria in order to compare models based on anomalous $U(1)$ symmetries and on non-Abelian symmetries, such as $SU(3)$. 
These other criteria may be found in the predictions that the models presented here can give in the\nsupersymmetric sector.\n\n\\section{Flavour issues in SUSY flavour symmetry models}\n\\label{sec:susyconst}\n\nSince the flavour symmetry is expected to be broken at a high energy scale,\nnon-supersymmetric models will have a hierarchy problem, since the cutoff\nof the theory must at least be of the order of the flavour symmetry breaking scale.\nSupersymmetric models with soft\nbreaking parameters around the TeV scale do not have this problem. For\nthis reason flavour symmetries are almost exclusively considered in the context\nof one of the minimal supersymmetric models, or one of the popular SUSY unified\ntheories. The soft Lagrangian parameters are strongly constrained by the\nsupersymmetric flavour problem and the supersymmetric CP problem. \n\nThe supersymmetric flavour problem requires the soft scalar mass squared matrices\nto be diagonal to a good approximation at high energy scales, since the off-diagonal\nelements contribute to one-loop flavour violating decays such as the highly \nconstrained $\\mu\\rightarrow e \\gamma$ in the lepton sector and $b\\rightarrow s\\gamma$ in\nthe quark sector. It also requires that the trilinear couplings are well aligned with\nthe corresponding Yukawa matrix, since off-diagonal elements in the trilinears in\nthe mass eigenstate basis also contribute to highly constrained decays. \nThe supersymmetric CP problem is related to the phases of the parameters in\nthe soft Lagrangian. The general requirement is that these phases need to be\nsmall for the majority of soft breaking parameters.\n\nThe reason that these problems are relevant in the context of family symmetries\nis that in general, the existence of the family symmetry and the fields that break\nit can give dangerous contributions to the soft Lagrangian parameters. 
It would\nbe remiss to look at these models but not check whether CP violation or flavour\nviolation is likely to rule them out. The starting point for investigating these\nproblems is to consider the hidden sector part of the theory, which leads to the\nsize and phases of the vevs of the fields which break the $U(1)$ symmetry, $\\theta$\nand $\\overline\\theta$.\n\n\\subsection{The flavon sector}\n\\label{sec:flavons}\n\nWe start by considering the values of the expansion parameters $\\epsilon$ and $\\overline{\\epsilon}$.\nThey are defined by:\n\\begin{equation}\n \\label{eq:epsilon}\n \\epsilon \\equiv \\frac{\\left<\\theta\\right>}{M}, \\;\\;\\; \\overline{\\epsilon} \\equiv \\frac{\\left<\\overline{\\theta}\\right>}{M},\n\\end{equation}\nwhere $\\theta$ and $\\overline{\\theta}$ are scalars which break the $U(1)_F$ symmetry, and have charges of $1,-1$ respectively\nunder the symmetry. We wish to arrange that $\\epsilon = \\overline{\\epsilon}$, which entails arranging that the\npotential is minimized by $<\\theta> = <\\overline{\\theta}>$. This would be simple if the $U(1)$ were non-anomalous, and thus\nmissing a Fayet-Iliopoulos term. We set the $\\theta$ sector of the superpotential to be:\n\\begin{equation}\n \\label{eq:33}\n W_\\theta = S(\\theta \\overline{\\theta} - M_\\theta^2)\n\\end{equation}\nWe introduce a new field, $X$, which has charge $q_X$ under $U(1)$. $q_X$ will be unspecified, but some number such that\nwhen $\\left<X\\right> \\ne 0$, it doesn't contribute to the fermion mass operators (or, at the very least, it doesn't contribute at\nleading order). Then, if we give $\\theta$ and $\\overline{\\theta}$ the same soft mass\n\\footnote{This requirement may seem somewhat strong, but we also wish to minimize flavour violation coming from the D-term\nassociated with $U(1)$, which is proportional to $m_\\theta^2 - m_{\\overline\\theta}^2$, and will provide a non-universal\ncontribution to the scalar masses. 
This contribution will lead to off-diagonal elements in the SCKM basis which can easily\nbe dangerously large with regard to flavour violation.\n}\n, and require that $X$ doesn't get\na soft mass, we end up with a hidden sector potential:\n\\begin{equation}\n \\label{eq:34}\n V = | \\theta \\overline{\\theta} - M_\\theta^2 |^2 + \\frac{g^2}{2} \\left( |\\theta|^2 - |\\overline{\\theta}|^2 + q_X |X|^2 + \\xi^2\\right)^2 \n + m^2(\\theta^2 + {\\overline{\\theta}^2}).\n\\end{equation}\nIf we minimize this potential with respect to $\\theta, \\overline{\\theta}$ and $X$, we end up with the following constraints:\n\\begin{eqnarray}\n \\label{eq:35}\n \\frac{\\partial V}{\\partial \\theta} &= 0 =& 2 \\overline\\theta ( \\theta \\overline\\theta - M_\\theta^2) + {g^2}\\theta\n ( |\\theta|^2 - |\\overline\\theta|^2 + q_X |X|^2 + \\xi^2 ) + 2 m^2 \\theta\\\\\n \\label{eq:36}\n \\frac{\\partial V}{\\partial \\overline\\theta} &= 0 = &\n 2 \\theta ( \\theta \\overline\\theta - M_\\theta^2 ) + {g^2} \\overline \\theta ( |\\theta|^2 - |\\overline\\theta|^2 + q_X |X|^2 + \\xi^2)\n + 2m^2 \\overline\\theta \\\\\n \\label{eq:37}\n \\frac{\\partial V}{\\partial X} & = 0 = & {g^2} X ( |\\theta|^2 - |\\overline\\theta|^2 + q_X |X|^2 + \\xi^2 ) \n\\end{eqnarray}\n\nSince $X$ doesn't have a mass term, it would be massless unless $\\left<X\\right> \\ne 0$. Therefore, of the two solutions of Eq.~(\\ref{eq:37}),\nwe have to take $X \\ne 0$. 
From this, we see that:\n\\begin{equation}\n \\label{eq:38}\n |\\theta|^2 - |\\overline\\theta|^2 + q_X |X|^2 + \\xi^2 = 0\n\\end{equation}\nSubstituting Eq.~(\\ref{eq:38}) into Eq.~(\\ref{eq:35}) and Eq.~(\\ref{eq:36}), and multiplying by $\\theta$ and $\\overline\\theta$ respectively,\nwe find:\n\\begin{eqnarray}\n \\label{eq:39}\n 0 &=& \\theta\\overline\\theta(\\theta\\overline\\theta - M^2_\\theta) + m^2 \\theta^2 \\\\\n 0 &=& \\theta\\overline\\theta(\\theta\\overline\\theta - M^2_\\theta) + m^2 {\\overline\\theta}^2\n\\end{eqnarray}\nFrom this, we can deduce that either $\\theta = \\overline\\theta = 0$ or $|\\theta| = |\\overline\\theta| = M_\\theta$. The potential is minimized\nby the second solution if $m^2 < 2 M_\\theta^2$. As we expect $M_\\theta$ to be a GUT scale mass, and $m$ to be a TeV scale soft mass term, we\nfind that, as desired, we will have:\n\\begin{eqnarray}\n \\label{eq:40}\n <\\theta> = <\\overline\\theta> \\Rightarrow \\epsilon = \\overline\\epsilon\n\\end{eqnarray}\n\nThis allows us to consider Yukawa textures without having to keep track of whether the overall charge for each\nterm is positive or negative. \n\n\n\\subsubsection{Getting $\\epsilon$ from the Fayet-Iliopoulos term}\n\\label{sec:epsilon-from-FI}\n\nThe GST requirement leads to needing flavon fields with opposite charges under $U(1)_F$. Were this not the case, we would have an\nelegant way of generating $<\\theta>$. 
Consider a simple case where $\\theta$ doesn't have a superpotential mass term, but does\nhave a soft mass:\n\\begin{equation}\n \\label{eq:41}\n V = \\frac{g^2}{2} ( -|\\theta|^2 + \\xi^2 )^2 + m^2_\\theta \\theta^2\n\\end{equation}\nThen, without the need for an explicit mass term in the superpotential, we would find that minimizing the potential with respect\nto $\\theta$ would lead to:\n\\begin{equation}\n \\label{eq:42}\n <\\theta> = \\xi \\sqrt{1 - \\frac{m_\\theta^2}{g^2 \\xi^2}} \\approx \\xi,\n\\end{equation}\nwhere the final approximation is due to the fact that we expect $g^2 \\xi^2$ to be much larger than $m^2_\\theta$. \nSo we have managed to set $<\\theta>$ from $\\xi$, which can be predicted from string theory. This allows one to\npredict the flavon vev, rather than having to put it in by hand. \n\nThis provides a motivation for trying to set up the case where $<\\theta>$ and $<\\overline\\theta>$ could both be\nset by the FI term. However, it doesn't seem possible to make this work without adding in either an extra symmetry,\nor extra matter. Even then, trying to arrange things so that $<\\theta> = <\\overline\\theta> = z \\xi$, with $z$ some\nreal number, is difficult. \n\n\\subsection{Yukawa Operators}\n\nSince the net $U(1)$ charge can be either positive or negative and we have $\\epsilon = \\overline\\epsilon$, the effective superpotential has the following form:\n\\begin{eqnarray} \n \\nonumber\n W = \\sum_{f=u,d;\\;ij} & Q^i f^{c\\;j} H_f & a^f_{ij} \\epsilon^{|q_i + f_j + h_f|} \\\\\n \\label{eq:effectsup}\n + \\sum_{f=e,n;\\;ij} & L^i f^{c\\;j} H_f & a^f_{ij} \\epsilon^{|l_i + f_j + h_f|}. \n\\end{eqnarray}\nWe cannot say anything in particular about the K\\\"ahler potential. We can assume that the phases responsible for CP violation only appear in the flavour sector.\nThen observable CP violating phases will be put into the Yukawa couplings indirectly from the effective superpotential of Eq.~(\\ref{eq:effectsup}). 
In general \nwe can consider an effective K\\\"ahler potential of the form:\n\\begin{eqnarray}\nK=K_o(t_\\alpha)-\\ln(S+\\bar{S}+\\delta_{GS})+ \\sum_i f_i (t_\\alpha) \\theta_i \\bar{\\theta_i}+...+\\sum_{ij} K^{\\Phi}_{ij}\\Phi^i\\bar{\\Phi}^j\n\\end{eqnarray}\nwhere $K_o$ is the K\\\"ahler potential of the moduli fields, $t_\\alpha=T_\\alpha+\\bar{T}_\\alpha$, $S$ is the dilaton, \nand $f_i(t_\\alpha)$ are possible functions of these moduli fields, e.g. $f(t)=\\Pi^p_{\\alpha=1}t_{\\alpha}^{n(\\alpha)_{ij^*}}$. But we cannot specify \nthe form of the K\\\"ahler metric. \nIt may be that the K\\\"ahler metric is canonical, in which case $K^{\\Phi}_{ij^*}=\\delta_{ij^*}$. Such a form has a good chance of leading to\nacceptable phenomenology, since the scalar mass matrices will be proportional to the identity at the appropriate high energy scale. When\nrotating the scalar mass matrices to the super-CKM (SCKM) basis at the high energy scale, the transformation will leave the mass matrices invariant.\nFlavour violation tends to be proportional to off-diagonal elements in the scalar mass matrices in the SCKM basis, so any flavour violation will\nbe due to RG effects, and will therefore be suppressed. 
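The invariance claim can be made concrete with a small numerical sketch (a $2\times 2$ toy example with illustrative numbers, instead of the full $3\times 3$ flavour space): a universal scalar mass matrix $m^2\,\mathbb{1}$ is unchanged by an SCKM-like unitary rotation, while a diagonal but non-universal one picks up off-diagonal entries.

```python
import cmath, math

# Toy 2x2 unitary "SCKM" rotation: U = [[c, s e^{i phi}], [-s e^{-i phi}, c]].
# theta and phi are arbitrary illustrative angles.
theta, phi = 0.4, 0.7
c, s = math.cos(theta), math.sin(theta)
U = [[c, s * cmath.exp(1j * phi)],
     [-s * cmath.exp(-1j * phi), c]]

def sckm(M):
    """Return U^dagger M U for a 2x2 matrix M."""
    Ud = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
    MU = [[sum(M[i][k] * U[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    return [[sum(Ud[i][k] * MU[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

m2 = 1.0
universal = [[m2, 0], [0, m2]]       # canonical Kahler metric: m^2 times identity
nonuniversal = [[1.0, 0], [0, 2.5]]  # diagonal but non-universal

off_universal = abs(sckm(universal)[0][1])        # stays (numerically) zero
off_nonuniversal = abs(sckm(nonuniversal)[0][1])  # nonzero: induced flavour violation
print(off_universal, off_nonuniversal)
```

The off-diagonal entry in the non-universal case is proportional to the mass-squared splitting, which is exactly why the diagonal-but-non-universal Kähler metric is the phenomenologically delicate case.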
On the other hand, the K\\\"ahler metric could have off-diagonal structure, in which case\nthe risk of flavour violating effects would be high, and the case where the K\\\"ahler metric is diagonal but non-universal is potentially very interesting since flavour changing effects are induced in general by the SCKM rotation.\n\n\\subsection{The SUSY CP problem}\n\\label{sbsec:susycppr}\n\n\\subsubsection{The $\\mu$ problem}\n\nIn order to avoid the $\\mu$ problem, a symmetry or other mechanism to protect $\\mu$ from unwanted contributions needs to be introduced.\nThe $\\mu$ parameter can have contributions from the superpotential, (expected to be at the Planck scale) and from the K\\\"ahler potential,\nvia the Giudice-Masiero mechanism \\cite{Giudice:1988yz} or other mechanisms \\cite{Casas:1992mk,Kim:1994eu}, $\\mu = \\mu_W + \\mu_K$. The charges of the fields $H_u$ and $H_d$ under the flavour symmetry \ncan be chosen in such a way that $\\mu_W(M_P)$ is forbidden in the superpotential. Then another field, $S$ can be introduced, so that the term \n$\\lambda S H_uH_d$ is allowed in the K\\\"ahler potential, which generates an effective $\\mu = O(m_{3\/2})$. \nNote that in the cases that we have found for $u+v\\neq 0$ there is no $\\mu_W$ at $M_P$. In general for a theory containing two flavon fields with opposite charges, once the flavour symmetry \nis broken below the Planck scale, the contributions to the $\\mu$ term are:\n\\begin{eqnarray}\n\\label{eq:mubreaku3s}\n\\epsilon^{|u+v|} H_u H_d \\mu_W + \\epsilon^{|u+v|} H_u H_d \\mu_K\n\\end{eqnarray}\nThus, even if the $\\mu$ term is missing from the superpotential at renormalizable level, it will be generated by non-renormalizable\noperators once the family symmetry is broken. However, it will appear suppressed by a factor of $\\epsilon^{|u+v|}$. 
If $|u+v|$ is small, there are two possibilities for obtaining sufficient\nsuppression: either $|u+v|$ must be large or $\\epsilon$ must be small.\nObviously, since the same factor $\\epsilon^{|u+v|}$ appears suppressing both superpotential and K\\\"ahler potential $\\mu$ contributions,\nthere is no extra constraint from considering the second term in Eq.~(\\ref{eq:mubreaku3s}).\n\nHowever, $|u+v|$ is related to the anomaly cancellation conditions considered in Section \\ref{sec:anomconst}. There are two possibilities\nfor dealing with small $|u+v|$. The first is to have small expansion parameters, $\\epsilon$; however, if $\\epsilon$ becomes too small, it makes\npredicting the fermion mass hierarchy very difficult. The second is to accept a contribution to $\\mu$ that is larger than order O($m_{3\/2}$);\nhowever, phenomenologically, the total $\\mu$ should not be much bigger than $O(m_{3\/2})$. It is, however, possible to apply a new discrete\nsymmetry to disallow the superpotential $\\mu$ term, so that no flavon corrections can ever generate it. \n\n\\subsubsection{Electric dipole moment constraints}\nThe electric dipole moments (EDMs) constrain the form of the trilinear couplings, $(Y^A_{f})_{ij}$. The trilinear couplings are\ndefined through $(Y^A_{f})_{ij}H_{f}Q_i f^c_j$. Here we need to ensure that the phases found \nin the trilinear terms do not give a large contribution to the CP violating phases. In the context of flavour symmetries it is usually postulated that the only phases \nappearing in the theory are in the Yukawa couplings, and any other phase will enter as a consequence of a dependence on the Yukawa couplings. \nThen, to check whether the model gives contributions below the bounds, one needs to compare the diagonal elements of the Yukawa couplings \nwith the diagonal elements of the trilinear couplings, in the SCKM basis. 
The trilinear terms in general can be written as:\n %\n \\begin{eqnarray}\n \\label{eq:trilinears}\n \\mathbf{(Y^A_{f})}_{ij}=Y^{f}_{ij}F^a\\partial_a\\left(\\tilde{K}+\\ln(K^f_f K^i_i K^j_j) \\right)\n+F^a\\partial_a {Y}^{f}_{ij}\n \\end{eqnarray}\n %\nWe can always write the first term in a ``factorisable'' form \\cite{Kobayashi:2000br}, such that if the Yukawa couplings, \n\\eq{eq:effectsup}, are the only source of CP violation then the first term does not give any contribution at the leading order.\nFor the second term, which involves the derivative with respect to the flavon fields, if the flavon field is the only field with $F^\\theta\\neq 0$ then the \ndiagonal trilinear couplings in the SCKM basis are real at leading order in the flavon fields \\cite{Ross:2002mr}. \nThus there is no $O(1)$ contribution to the CP phases from this sector. \n\nOne can check this simply by writing the last term of \\eq{eq:trilinears} in the SCKM basis: \n\\begin{eqnarray}\n \\nonumber\n (F^a\\partial_a({\\hat Y}^f))^{\\rm{SCKM}}_{ij} &=& F^a(V^\\dagger_L)_{ik}(\\partial_a V_L)_{kj}(Y_{\\rm{Diag}})_{jj}+\\\\\n \\label{eq:45}\n &&F^a(\\partial_aY_{\\rm{Diag}})_{ij}+F^a(Y_{\\rm{Diag}})_{ii}\n (\\partial_a V_R)_{ir}(V^\\dagger_R)_{rj}\n\\end{eqnarray}\nwhere $V_L$ and $V_R$ diagonalize the Yukawa matrix: $Y_{\\rm{Diag}}=V^\\dagger_L Y V_R$. The leading term of Eq.~(\\ref{eq:45})\nis the second one, and it is at most of order $\\theta$. 
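The SCKM rotation used here can be sketched numerically with a singular value decomposition; the matrices below are random placeholders, not the Yukawa textures of this paper:

```python
import numpy as np

# Numerical sketch of the SCKM rotation: the Yukawa matrix is diagonalized
# as Y_diag = V_L^dagger Y V_R, and the trilinears are rotated with the same
# unitaries. Y and A are hypothetical example matrices.
rng = np.random.default_rng(0)
Y = 0.1 * rng.normal(size=(3, 3))              # hypothetical Yukawa matrix
A = 1.5 * Y + 0.01 * rng.normal(size=(3, 3))   # trilinears ~ a*Y + small corrections

U, s, Vh = np.linalg.svd(Y)                    # Y = U @ diag(s) @ Vh
V_L, V_R = U, Vh.conj().T

Y_sckm = V_L.conj().T @ Y @ V_R                # equals diag(s) by construction
A_sckm = V_L.conj().T @ A @ V_R                # off-diagonals drive flavour violation

print("diagonal trilinears in the SCKM basis:", np.diag(A_sckm))
```

The part of $A$ aligned with $Y$ is diagonalized by the same rotation, so only the misaligned corrections survive in the off-diagonal SCKM entries.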
\nIf another field has a non-zero F-term, $F^X\\neq 0$, then all the quantities appearing in \\eq{eq:trilinears} can be written as an expansion \nin $X$ and $\\theta\/M=\\varepsilon$:\n\\begin{equation}\n (Y_{\\rm{Diag}})_{ii}=(a_{ii}+b_{ii}X)\\varepsilon^{p_{ii}}\\label{eq:46}.\n\\end{equation}\nWe are assuming that only the matter sector in \n\\eq{eq:effectsup} has phases leading to CP violation, so the term $b_{ii}X\\varepsilon^{p_{ii}}$ is real and hence so is:\n\\begin{equation}\n F^X(\\partial_X Y_{\\rm{Diag}})_{ii}=F^X b_{ii}\\varepsilon^{p_{ii}}\\label{eq:47}\n\\end{equation}\n\\subsection{SUSY flavour problem}\nIn addition to the F term contribution to the soft masses we have to add the D term contributions\n %\n \\begin{eqnarray}\n(M^2)_{ij}=(M^2)_{F\\ ij}+(M^2)_{D\\ ij}.\n \\end{eqnarray}\n %\nIf the K\\\"ahler metric is diagonal in the basis where the symmetry is broken, both contributions are diagonal and proportional to the K\\\"ahler metric. \nFor example, consider universal SUGRA: $(M^2)_{F\\ ij}=K_{ij}m^2_o$. However, even if we assume that the first term is indeed proportional to \nthe K\\\"ahler metric, the D-term will not in general be proportional to the K\\\"ahler metric:\n\n\\begin{equation}\n\\label{eq:48}\n(M^2)_{D\\ ij}= \\sum_N g_N X_{N\\ \\theta_a} K_{ij^*}(\\theta_a)m^2_{D},\\;\\;\\; m^2_{D}=O(m^2_{3\/2})\n\\end{equation}\nThe main problem for FC processes in these kinds of theories is the contribution to the trilinear couplings from the anomalous D-term \ncontribution to the soft masses \\cite{Chung:2003fi}. 
For the last issue there is no real solution so far, but one can ameliorate the problem by making all the\nscalars heavier, which is simply a mass suppression.\n\n\nIn order to study all the possible consequences of models with the superpotential structure of \\eq{eq:effectsup}, \nwe can parameterize the K\\\"ahler metric according to the different contributions it may have, \nassuming a broken underlying symmetry with at least two flavon fields with opposite charges. \nOnce this is done we can then study their consequences. As mentioned earlier, this analysis is beyond the scope of this paper, so\nwe just mention how extreme and dangerous situations may arise, and we leave the analysis for future work \\cite{inpreparation}. Some authors have studied possible consequences of flavour models for FC effects, but very specific assumptions need to be made due to the many unknown supersymmetric parameters \\cite{Babuetal, Ciuchini:2003rg,Masina:2003wt}.\n\nThe strictest bound on flavour changing processes comes from the decay $\\mu\\rightarrow\\ e \\ \\gamma$ \\cite{Hisano:1995cp}-\\cite{Masina:2002mv}, and given the fact that we \nhave a large mixing angle in the left handed sector of the charged lepton matrices, it is crucial to determine under which conditions we can \nproduce a suppressed effect. Also the constraints given by the process $B\\rightarrow\\ \\Phi \\ K_S$ may select out some of the possibilities presented.\n\\subsubsection{Non minimal sugra and diagonal K\\\"ahler metric}\nConsider, for example, the case in which the K\\\"ahler metric is diagonal at the scale at which the flavour symmetry is broken. For this case, we also\nwant the soft scalar mass matrices diagonal but not proportional to the unit matrix, due to possible different D term contributions. 
Since the general case is difficult to handle, we consider the case where $M^2_{\\tilde f \\ 1}-M^2_{\\tilde f \\ 2}$ is small and $M^2_{\\tilde f \\ 1}- M^2_{\\tilde f \\ 3}>0$.\nIn order to estimate the flavour changing processes we need to take into account the effects from renormalization group equations (RGEs) and then, at the electroweak scale, make the transformation to the basis where the fermions are diagonal. Here we consider the case of leptons, since we are interested in determining\n$\\delta^{l}_{ij}$ and in particular $\\delta^{l}_{12}$, which is the most constrained parameter due to $B(\\mu\\rightarrow e\\ \\gamma)$.\n\nWe estimate the contributions from the renormalization $\\beta$ functions in this case, such that at the scale where the dominant right handed neutrino is decoupled we can write the soft masses as\n\\begin{eqnarray}\n\\label{eq:massren}\nM^2_{\\tilde L\\ ij}(M_{Y})\\approx M^2_{\\tilde L \\ ij}(M_X)-\\frac{1}{16\\pi^2} \\ln\\left(\\frac{M_X}{M_Y} \\right)(\\beta^{(1)}_{M^2_{\\tilde L\\ ij}})\n\\end{eqnarray}\nfor $M_X=M_{\\rm{G}}$ or $M_{\\rm{P}}$, the GUT or Planck scales respectively, and for \n$M_Y=M_{RR\\ 3}$ in this case, considering just one-loop corrections. The $\\beta$ functions of $M^2_{\\tilde L\\ ij}$, from $M_X$ to $M_{RR\\ 3}$, receive the contributions from the MSSM particles plus the contribution from right-handed neutrinos.\nAt $M_3$ we then run from that scale to the electroweak symmetry breaking scale with the appropriate $\\beta$ function and matter content. 
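The leading-log approximation of \eq{eq:massren} can be sketched numerically as follows; all mass inputs and neutrino couplings below are hypothetical illustration values, not the fitted parameters of this paper:

```python
import numpy as np

# Leading-log sketch of the one-loop running of the slepton mass matrix
# from M_X down to M_Y, with the neutrino-induced beta function
# approximated as beta_ij ~ 2 * Ynu_3i * Ynu_3j * (mL3^2 + mnu3^2 + mHu^2).
def run_leading_log(M2, beta, M_X, M_Y):
    """One-step leading-log approximation to the RGE solution."""
    return M2 - np.log(M_X / M_Y) / (16 * np.pi**2) * beta

M2_L = np.diag([520.0**2, 530.0**2, 500.0**2])   # diagonal at M_X (GeV^2, assumed)
Ynu3 = np.array([1e-3, 0.5, 0.7])                # dominant RH-neutrino couplings (assumed)
m2_sum = 500.0**2 + 500.0**2 + 510.0**2          # mL3^2 + mnu3^2 + mHu^2 (assumed)
beta = 2 * np.outer(Ynu3, Ynu3) * m2_sum

M2_L_low = run_leading_log(M2_L, beta, 2e16, 1e14)  # GUT scale down to M_RR3
print("induced off-diagonal (1,2) element:", M2_L_low[0, 1], "GeV^2")
```

The sketch shows how a matrix that is diagonal at $M_X$ acquires off-diagonal entries proportional to $Y^{\nu*}_{3i}Y^{\nu}_{3j}$ through the running.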
In the case of the SRHND scenario and the form of the Yukawa matrices that we have considered in Section \\ref{sec:fitsmasses}, we can make the following approximations for the $\\beta$ functions{\\footnote{For the MSSM see for example \\cite{Martin:1993zk}; when including right handed neutrinos, see for example \\cite{Hisano:1995cp}.}}:\n\\begin{eqnarray}\n\\label{eq:betasMSSM}\n\\left(\\beta^{(1)}_{ M^2_{\\tilde L\\ ii} }\\right)^{MSSM}\\!\\!\\!\\!\\!\\!\\!&\\!\\approx\\!&\\!\\!\n2\\left[(m^2_{\\tilde L\\ ii} +m^2_{\\tilde H_d})\\left(|Y_{2i}|^2+|Y_{3i}|^2\\right) + m^2_{\\tilde e_2}(1+a^2)\\left(|Y_{2i}|^2+r^2_{\\tilde e_{23}}|Y_{3i}|^2\\right)\\right]\\nonumber\\\\\n&& -6g^2_2|m_2|^2-\\frac{6}{5}g^2_1|m_1|^2-\\frac{3}{5}g^2_1 S\\nonumber\\\\\n\\left(\\beta^{(1)}_{ M^2_{\\tilde L\\ ij} }\\right)^{MSSM}\\!\\!\\!\\!\\!\\!\\!&\\!\\approx\\!&\\!\\! (2m^2_{\\tilde H_d}+m^2_{\\tilde L\\ i} +m^2_{\\tilde L\\ j})\n\\left( Y^{e *}_{2i}Y^{e*}_{2j} + Y^{e *}_{3i}Y^{e*}_{3j}\\right)+\\nonumber\\\\\n&&+ 2m^2_{\\tilde e_2}(1+a^2)\\left(Y^{e *}_{2i}Y^{e*}_{2j} + r^2_{\\tilde e_{23}} Y^{e *}_{3i}Y^{e*}_{3j} \\right)\n\\end{eqnarray}\nwhere we have assumed that the trilinear terms can be written as \n$A^f_{ij}=aY^f_{ij}M^2_{\\tilde e}$, and $M^2_{\\tilde e}$ is not necessarily diagonal. The parameter $S$, defined as $S=m^2_{\\tilde H_u}-m^2_{\\tilde H_d}+\\rm{Tr}\\left[M^2_{\\tilde Q} -M^2_{\\tilde L}-2 M^2_{\\tilde u}+ M^2_{\\tilde d}+ M^2_{\\tilde e} \\right]$, does not generate big contributions as long as the masses involved remain somewhat degenerate. 
The $\\beta$ functions generated by the dominant right-handed neutrino can be approximated by\n\\begin{eqnarray}\n\\label{eq:betasMR3}\n\\left(\\beta^{(1)}_{ M^2_{\\tilde L\\ ij} }\\right)^{\\nu_{M_3}}\\!\\!\\!\\!\\!&\\!\\approx\\!&\\!\\!2Y^{\\nu *}_{3i}Y^{\\nu}_{3j}\\left[m^2_{\\tilde L 3} + m^2_{\\tilde \\nu 3}(1+b^2)+m^2_{\\tilde H_u} \\right]\n\\end{eqnarray}\nFrom $M_X=M_3$ to $M_Y=M_{\\rm{S}}$ (the supersymmetry breaking scale), we consider $\\left(\\beta ^{(1)}_{ M^2_{\\tilde L\\ ij }}\\right)^{MSSM}$. For this estimation we ignore the effect from $M_{\\rm{S}}$ down to the electroweak scale. At this scale we then transform the renormalized $M^2_{\\tilde L}$ to the basis where the charged leptons are diagonal. Since there is a large mixing angle $(s^{e_L}_{23})$ in the left sector of $Y^e$, we are interested here only in estimating $(M^2_{\\tilde L})_{LL}$. We can use the parameterization of Appendix A in order to make this transformation, i.e.\n\\begin{eqnarray}\n \\label{eq:49}\n Y^f_{\\rm{diag}}=V^{f\\dagger}_{L} Y^f V^f_{R},\\quad\n (M^2_{\\tilde L})'_{LL}=V^{f\\dagger}_{L} M^2_{\\tilde L} V^f_{L},\n\\end{eqnarray}\nfor $V^f_{L,R}$ as parameterized in \\eq{eq:pardimatL}, with the $\\beta$ phases as follows\n\\begin{eqnarray}\n\\{\\beta^{e_L}_1,\\beta^{e_L}_2,\\beta^{e_L}_3\\}=\\{\\phi^{e}_{X_{23}},0,0\\},\\quad \\phi^{e}_{X_{23}}=\\beta^{e_L}_1-\\beta^{e_L}_2.\n\\end{eqnarray}\nUsing these approximations, we obtain the following results\n\\begin{eqnarray}\n(M^2_{\\tilde L})^{\\prime}_{12}&=&\ns^{e_L}_{12}(c^{e_L}_{23} {m^2_{\\tilde L\\ 22 }} - {m^2_{\\tilde L\\ 11 }}) + \\nonumber\\\\\n&+&(c^{e_L}_{12})^2e^{-i\\beta_{3L}}\\left(c^{e_L}_{23} e^{-i\\beta_{2L}} {m^2_{\\tilde L\\ 12 }}-2t_{12}c^{e_L}_{23}s^{e_L}_{23} e^{i\\beta_{3L}} \\rm{Re}\\{ {m^2_{\\tilde L\\ 23}} e^{-i\\chi}\\}\\right.\\nonumber\\\\\n&&-\\left. 
s^{e_L}_{23}e^{-i\\beta_{1L}} {m^2_{\\tilde L\\ 13 }} \\right),\\nonumber\\\\\n(m^2_{\\tilde L})^{\\prime}_{13}&=&\nc^{e_L}_{23} s^{e_L}_{23}s^{e_L}_{12}e^{i\\beta_{3L}}( {m^2_{\\tilde L\\ 22 }} - {m^2_{\\tilde L\\ 33 }} ) +\\nonumber\\\\\n&+& c^{e_L}_{12}c^{e_L}_{23} \\left(\\left( \ne^{-i\\chi} c^{e_L}_{23}t_{12} {m^2_{\\tilde L\\ 23 }}\n-e^{i\\chi} t_{12}t_{23}s^{e_L}_{23} \\beta^*_{m^2_{\\tilde L\\ 23 }}\n\\right)\\right.+\\nonumber\\\\\n&+&\\left.t_{23}e^{i\\chi} {m^2_{\\tilde L\\ 12 }}+ {m^2_{\\tilde L\\ 13 }} \\right)\n\\nonumber,\\\\\n(m^2_{\\tilde L})^{\\prime}_{23}&=&\nc^{e_L}_{23}s^{e_L}_{23}e^{i\\beta_{3L}}\\left({m^2_{\\tilde L\\ 22 }}- {m^2_{\\tilde L\\ 33 }} \\right)\n \\nonumber\\\\\n&&+e^{i\\beta_{3L}}c^{e_L}_{12}\\left((c^{e_L}_{23})^2e^{-i\\chi} {m^2_{\\tilde L\\ 23 }} - (s^{e_L}_{23})^2e^{i\\chi} \\beta^*_{m^2_{\\tilde L\\ 23 }} \\right) ,\\nonumber\\\\\n(m^2_{\\tilde L})^{\\prime}_{11}&=&\n(s^{e_L}_{12})^2\\left((c^{e_L}_{23})^2 m^2_{\\tilde L\\ 22 }+ (s^{e_L}_{23})^2 m^2_{\\tilde L\\ 33 } \\right) \n\\nonumber\\\\\n(m^2_{\\tilde L})^{\\prime}_{22}&=&\n(c^{e_L}_{12})^2\\left((c^{e_L}_{23})^2 m^2_{\\tilde L\\ 22 }+(s^{e_L}_{23})^2 m^2_{\\tilde L\\ 33 }\\right)-(c^{e_L}_{12})^2c^{e_L}_{23}s^{e_L}_{23} 2\\rm{Re}\\{ {m^2_{\\tilde L\\ 23}} e^{-i\\chi}\\}\\nonumber\\\\ \n(m^2_{\\tilde L})^{\\prime}_{33}&=&\n(c^{e_L}_{13})^2\\left((s^{e_L}_{23})^2 m^2_{\\tilde L\\ 22 }+ (c^{e_L}_{23})^2 m^2_{\\tilde L\\ 33 } \\right)+\nc^{e_L}_{23}s^{e_L}_{23}\\left( 2\\rm{Re}\\{ {m^2_{\\tilde L\\ 23}} e^{-i\\chi}\\} \\right)\\nonumber\\\\\n \\end{eqnarray}\nwhere the soft masses $m^2_{\\tilde L {ij}}$ are the soft masses at $M_{\\rm{S}}$, renormalized from $M_X=M_{\\rm{G}}, M_{\\rm{P}}$ down to $M_3$ with the appropriate contributions from the dominant right handed neutrino, \\eq{eq:massren} and \\eq{eq:betasMSSM}-\\eq{eq:betasMR3}, and then from $M_3$ to $M_S$ with the appropriate $\\beta^{(MSSM)}$ functions. 
Thus we began with a diagonal matrix $M^2_{\\tilde L}$ at $M_X$; then the RGE effects up to the scale where $M_3$ is decoupled generate a non-diagonal matrix, which receives more RGE contributions from $M_3$ to $M_S$. At the electroweak scale we transformed to the basis where the charged leptons are diagonal.\nThe mixing angles in this sector can be approximated as\n\\begin{eqnarray}\ns^{e_L}_{12}=|(a^e_{12}-t_{32}a^e_{13})|\/|(a^e_{22}-a^e_{32}a^e_{23})|\\epsilon^{p^e_{12}},\\quad\ns^{e_L}_{13}=a^e_{13}\/a^e_{33}\\epsilon^{p^e_{13}},\\quad \ns^{e_L}_{23}=a^e_{23}\/a^e_{33}\n\\end{eqnarray}\nThe powers $p^e_{ij}$ for the different solutions presented now correspond to \n$p^e_{12}=2\/3,14\/3$, $p^e_{13}=29\/12,71\/12$ for Fits 2 and 3 respectively.\nSo in this case we see that we need a large suppression of the element $(m^2_{\\tilde l \\ L})^{\\prime}_{12}$ in order to be in agreement with the \nobserved bound on $\\mu\\rightarrow\\ e \\gamma$. In the present example the suppression is related to a bound on $( m^2_{\\tilde L1}\\! -\\!m^2_{\\tilde L2})$ and a relatively large set of soft masses. The results of these estimations are presented in Table \\ref{tbl:nonsugex}.\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{Estimation of $\\delta_{ij}$ for Fit 3 of Section \\ref{sec:fitsmasses}.}\\\\\n\\hline\n Parameter & Ex. I & Ex. II & Ex. 
III\\\\\n\\hline\n$m_{\\tilde L 1}[\\rm{GeV}]$ & 520 & 520 &520\\\\\n$m_{\\tilde L 2}[\\rm{GeV}]$ & 530 & 530 & 570\\\\\n$m_{\\tilde L 3}[\\rm{GeV}]$ & 500 & 500 & 230\\\\\n$m_{\\tilde e 1}[\\rm{GeV}]$ & 520 & 520 & 520\\\\\n$m_{\\tilde e 2}[\\rm{GeV}]$ & 530 & 530 & 550\\\\\n$m_{\\tilde e 3}[\\rm{GeV}]$ & 500 & 300& 300\\\\\n$M_1[\\rm{GeV}]$ & 500 & 500 & 500\\\\\n$M_2[\\rm{GeV}]$ & 2$M_1$ & 700 &700\\\\\n$M_{H_d}[\\rm{GeV}]$ & 510 & 510 &510\\\\\n$M_{H_u}[\\rm{GeV}]$ & 510 & 510&510\\\\\n$M_{\\rm{S}}$ & 1000 & 1000&1000\\\\\n\\hline\n$\\overline{m}_{\\tilde l}$ &514 &486&456\\\\\n\\hline\n$x=m^2_{\\tilde \\gamma}\/m^2_{\\tilde l}$&\\multicolumn{3}{|c|}{$0.3$}\\\\ \n$|(\\delta^l_{LL})^E_{12}|$ &$4.3\\times 10^{-3}$ & \n$5.6\\times 10^{-3}$ &$1.4\\times 10^{-3}$\\\\\n$|(\\delta^l_{LL})^B_{12}|$ &\\multicolumn{2}{|c|}{$O(10^{-1})$}& $O(10^{-2})$\\\\ \n$|(\\delta^l_{LL})^E_{13}|$ &$1.7\\times 10^{-3}$ & $1.8\\times 10^{-3}$&$1.9\\times 10^{-2}$\\\\\n$|(\\delta^l_{LL})^E_{23}|$ &$5.7\\times 10^{-2}$ & $6.4\\times 10^{-2}$&$6.3\\times 10^{-1}$\\\\\n$|(\\delta^l_{LL})^B_{23}|$ &\\multicolumn{2}{|c|}{$O(10^{-1})$}& $O(10^{-1})$\\\\ \n\\hline\n \\end{tabular}\n \\caption{Estimation of $|\\delta^l_{ij}|^E$ in Fit 3 for the non-minimal sugra example, and its comparison to the observed bounds $|\\delta^l_{ij}|^B$ \\cite{Hisano:1995cp}-\\cite{Masina:2002mv}.}\n \\label{tbl:nonsugex}\n\\end{table}\nAs we can see from the results of Table \\ref{tbl:nonsugex}, the estimation of $|(\\delta^l_{LL})^E_{ij}|$ is less dependent on the relation among the original soft mass terms $m^2_{\\tilde L i}$ than on the value taken for the average slepton mass, which indeed needs to be large.\nHere we note that this is just an estimation of the conditions that $B(\\mu\\rightarrow e\\ \\gamma)$ imposes on the soft masses, but without fully checking whether or not appropriate masses for all the MSSM parameters can be obtained. 
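The kind of mass-insertion estimate reported in the table can be reproduced schematically; the off-diagonal entry below is an assumed number of the right order of magnitude, not an output of the RGE analysis:

```python
# Back-of-the-envelope sketch of the mass-insertion parameter
# delta_12 = |(M^2_L)'_12| / mbar^2, to be compared with the
# mu -> e gamma bound. Inputs are illustrative only.
def delta_ij(m2_offdiag, m_avg):
    """Mass-insertion parameter from an off-diagonal soft-mass entry."""
    return abs(m2_offdiag) / m_avg**2

m_avg = 514.0    # average slepton mass in GeV (Ex. I of the table)
m2_12 = 1.1e3    # hypothetical renormalized (1,2) entry in GeV^2
d12 = delta_ij(m2_12, m_avg)
print(f"delta_12 ~ {d12:.1e}")
```

With these assumed inputs the estimate lands at the few-times-$10^{-3}$ level, the same ballpark as the $|(\delta^l_{LL})^E_{12}|$ entries of the table, which illustrates why a large average slepton mass helps satisfy the bound.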
In the following we consider a numerical investigation in the minimal sugra case.\n\\subsubsection{Numerical Investigation of $B(\\mu\\rightarrow e\\ \\gamma)$ in minimal sugra}\n\\label{sec:numer-invest-fits}\nThe presence of right-handed neutrino fields leads to RG lepton flavour violation. Since the masses of the right handed neutrinos are so light for the GST solutions, Fits 1-3, we attempted a numerical analysis for all of the fits of Section \\ref{sec:fitsmasses} using the same modified version of SOFTSUSY \\cite{Allanach:2001kg} as used in \\cite{King:2003kf}. \n\nIn order to get a good handle, we have embedded the flavour model fits into a string-inspired mSUGRA type scenario, with no D-term contribution to the scalar masses. This scenario was chosen because it is expected to be the embedding with the lowest flavour violation. In this scenario,\n$A_0, m^2_0, M_{1\/2}$ are all related to a gravitino mass $m_{3\/2}$.\n\n As $n_1$ was only constrained to be between $-\\sigma\/2$ and $0$, we allow it to vary within this\nrange. We define the model at the GUT scale as:\n\\begin{equation}\n \\label{eq:23}\n m^2_0 = \\frac{1}{4} m_{3\/2}^2 \n \\;\\;,\\;\\;\n A_0 = \\sqrt{\\frac{3}{4}} m_{3\/2}\\;\\;\\;\n M_{1\/2} = \\sqrt{\\frac{3}{4}} m_{3\/2}.\n\\end{equation}\nThis setup of the soft parameters corresponds to benchmark point A in \\cite{King:2003kf}.\nThe results are as follows. For Fit 1 the code being used cannot generate any low energy data, so we do not find any safe $B(\\mu\\rightarrow e \\gamma)$ region using the conditions presented above.\nFit 2 has $\\mathrm{BR}({\\mu\\rightarrow e\\gamma}) \\leq 10^{-30}$, far below the experimental sensitivity; thus this fit is plausible within the context of the minimal sugra conditions that have been specified. \nThe smallness of the branching ratio for Fit 2 comes about because, with no RG running, in mSUGRA this rate would be exactly zero. 
The RG flavour violation will come from terms proportional to ${Y^\\nu}^\\dag Y^\\nu$, whose elements are tiny (the largest is $O(10^{-14})$).\n\nFit 3 generates a tachyonic selectron for the full $(m_{3\/2}, n_1)$ range. This is not to say that this fit will always have a tachyonic selectron in other, less trivial embeddings. \nFits 4 and 5 produce regions below and above the experimental limits on $B(\\mu\\rightarrow e\\gamma)$; the graphs for these fits appear in Figures~\\ref{fig:br_meg_fit_4a}--\\ref{fig:br_meg_fit_5b}.\n\\begin{figure}[htbp]\n \\centering\n \\input{fit4a.tex}\n \\caption{$\\mathrm{BR}(\\mu\\rightarrow e\\gamma)$ for fit 4, with $\\left<\\Sigma\\right> = O(M_G)$. The solid points are below the experimental\nlimit of $1.1 \\cdot 10^{-11}$, and the hollow points are above.}\n \\label{fig:br_meg_fit_4a}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\input{fit4b.tex}\n \\caption{$\\mathrm{BR}(\\mu\\rightarrow e\\gamma)$ for fit 4, with $\\left<\\Sigma\\right> = O(M_{Pl})$. The solid points are below the experimental\nlimit of $1.1 \\cdot 10^{-11}$, and the hollow points are above.}\n \\label{fig:br_meg_fit_4b}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\input{fit5a.tex}\n \\caption{$\\mathrm{BR}(\\mu\\rightarrow e\\gamma)$ for fit 5, with $\\left<\\Sigma\\right> = O(M_G)$. The solid points are below the experimental\nlimit of $1.1 \\cdot 10^{-11}$, and the hollow points are above.}\n \\label{fig:br_meg_fit_5a}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\input{fit5b.tex}\n \\caption{$\\mathrm{BR}(\\mu\\rightarrow e\\gamma)$ for fit 5, with $\\left<\\Sigma\\right> = O(M_{Pl})$. 
The solid points are below the experimental\nlimit of $1.1 \\cdot 10^{-11}$, and the hollow points are above}\n \\label{fig:br_meg_fit_5b}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nIn summary, we began our analysis\nby reviewing the Green-Schwartz (GS) conditions for anomaly\ncancellation for theories based on a \n$U(1)$ family symmetry. We then used these conditions\nto fix the charges of all the quark, lepton\nand Higgs fields and studied possibilities where the Higgs mass \n$\\mu$ term is either present or absent in the original\nsuperpotential. The solutions which we constructed do not necessarily require\nan underlying Grand Unified Theory (GUT) but may be\nconsistent with unification because of the GS conditions. \nRegardless of the presence of an explicit unified gauge group,\nthe explicit solutions can produce matrices of the form that are\nidentical to those that would be expected in \nan $SU(5)$ case or Pati-Salam unified theory, for example.\n\nThe flavour structure of the resulting Yukawa matrices is \ncontrolled by the charges of the quarks and leptons under the \n$U(1)$ family symmetry gauge group.\nWe have determined these charges which are consistent with\nanomaly cancellation, and studied cases\nwhich can reproduce quark Yukawa matrices satisfying\nthe Gatto-Sartori-Tonin (GST) relation, as well as other cases\nwhich do not satisfy the GST relation. \nWe find the GST relation to be an\nappealing description of the value of the element $V_{us}$,\nand the GST relation provides a useful criterion \nfor classifying flavour models. \nIn our view, having the Cabibbo angle emerging automatically\nfrom a flavour model should have a similar status to gauge\ncoupling unification in a high scale model. 
\nHaving classified the solutions in terms of the GST condition,\nwe then further classify\nthe solutions according to which of them can produce the\nobserved mixings in the lepton sector, and those that are consistent\nwith a sub-class of solutions based on the SRHND or sequential dominance\nscenario with the further condition that the charges of the lepton\ndoublets for the second and third family are equal, $l_2=l_3$. \nWe find that the GST solutions combined with SRHND result in \nhighly fractional charges. \nOn the other hand, non-GST solutions with SRHND result in simpler\ncharges, and we have therefore studied both sorts of examples.\n\nWe have presented three numerical examples of solutions satisfying the\nGST relation and two examples of non-GST solutions in order to compare\nhow well these solutions fit the experimental information while\nmaintaining $O(1)$ coefficients. For the GST solutions, one of these\nexamples corresponds to a model that can be thought of as coming from an\nunderlying $SU(5)$ and for which a $\\mu$ term is allowed in the\nsuperpotential. It is well known that in this case, given the relation\n$Y^e=Y^{d \\ T}$, there should be a different Clebsch-Gordan coefficient\nin the charged lepton $(2,3)$ sector and in the $(2,3)$\nd-quark sector in order to produce appropriate mixings in the\ncontext of the $U(1)$ flavour symmetry and the GUT theory. Two other\nGST examples are presented for which the $\\mu$ term is not allowed and\nwhich are not consistent with an underlying $SU(5)$, or other GUT theory. 
In\nthese cases $Y^e\\neq (Y^d)^T$, but it is possible to maintain the\nrelation $m_\\tau\\approx m_b$, and in one of them the $O(1)$\ncoefficients of the underlying $U(1)$ theory alone can account for the\nappropriate mixings in the charged lepton and d-quark sector.\nThe non-GST cases also give a good description of masses and mixings,\nalthough in this case we need to rely on further coefficients,\npossibly Clebsch-Gordan coefficients from an underlying GUT, in order\nto achieve a good phenomenological description.\n \nFor the above examples we have provided detailed numerical fits \nof the $O(1)$ coefficients required to reproduce the observed\nmasses and mixings in both quark and lepton sectors.\nThe purpose of performing such fits\nis to compare how well the different models can fit the data, \nand to try to determine quantitatively the best possible model\ncorresponding to the best possible fit. \nAlthough in the cases just mentioned the solutions which fit the\ndata best are the solutions consistent with an underlying $SU(5)$ theory, the\nother two fits are quite plausible and represent interesting\npossibilities which cannot be excluded. \nSince all the models constructed\nhave good agreement with the fermion masses and mixings, we clearly \nneed further criteria in order to discriminate between the different\nclasses of $U(1)$ family symmetry models. \n\nOne may ask the more general question of whether family symmetries based on\nabelian or non-abelian gauge groups are generically preferred.\nIn order to address this question, we have extended the fit to include\na generic symmetric form of quark and lepton mass matrices that can be\nunderstood in the context of a theory based on $SU(3)$ family\nsymmetry. 
We have found that overall the generic $SU(3)$ family\nsymmetry produces Yukawa matrices which tend to fit the data better, \nalthough the effect is not decisive, and \none cannot draw a strong conclusion based solely on fits to\nfermion masses and mixings (or the way they can be reproduced). \nWe have therefore enumerated \nsome other possible criteria that are important in order to\nfurther discriminate among different flavour theories. \nIncluding the effects from the supersymmetric sector provides \nan additional way to discriminate among different\ntheories based on their different predictions for \nsoft masses and the resulting flavour changing processes and CP violation. \nWe have presented two frameworks in which\nthese processes can be studied in the context of flavour theories. The\nfirst is a non-minimal sugra scenario where family symmetries\nmay render the K\\\"ahler metric diagonal at the flavour symmetry breaking\nscale, with off-diagonal elements arising only due to RG contributions and\nthe non-degeneracy of soft masses. The second framework is a minimal sugra\nscenario for which a numerical exploration of $\\mu\\rightarrow e\\ \\gamma$\nwas performed. The results of this analysis\nshow marked differences between the different models presented. Of the\nGST cases only one survives the test of $B(\\mu\\rightarrow e\\ \\gamma)$,\nwhile for all of the non-GST cases presented there exist regions\ncompatible with the $B(\\mu\\rightarrow e\\ \\gamma)$ experimental limit.\n\nIn conclusion, \nat the present time, phenomenological analyses provide some guidance\nabout what family symmetry approaches may be valid, but do not yet allow\none to draw any firm conclusion. More specific assumptions or data in\nthe supersymmetric sector are needed in order to further discriminate \nbetween classes of models based on different family symmetry, \nunification or GST criteria.\n\n\n\n\\section*{Acknowledgments}\nL. V-S. 
would like to thank the School of Physics and Astronomy at the University of Southampton for its hospitality during a visit last year. S.K. would like to thank the MCTP for its hospitality during August 2004 when this work was under development. The work of G. K. and L. V-S. is supported by the U.S. Department of Energy.\n\n\n\\newpage\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcinv b/data_all_eng_slimpj/shuffled/split2/finalzzcinv new file mode 100644 index 0000000000000000000000000000000000000000..011d8a17e7e66fe839b829fc07e632cfd24b99da --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcinv @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{intro}\nThe spatio-temporal traffic origin-destination demand is a critical component of dynamic system modeling for transport operation and management. For decades, dynamic traffic demand has been modeled deterministically, as it is translated into deterministic link\/path flow and travel cost. Recent studies on transportation network uncertainty and reliability indicate that the variation of traffic demand also has equally large economic and environmental regional impacts \\citep{mahmassani2014incorporating}. However, traffic demand and flow variation from time to time ({\\em e.g.}, morning versus afternoon, today versus yesterday) cannot be captured in those deterministic models. In addition, the multi-day large-scale data cannot be characterized by deterministic traffic models. It is essential to consider the flow variation and understand its causes for system-wide decision making in the real world. 
\nTherefore, modeling and estimating the stochasticity of traffic demand, namely its spatial-temporal correlation\/variation, is a real need for public agencies and decision-makers.\n\n\nIn view of this, this paper addresses a fundamental problem: estimating the probabilistic dynamic origin-destination demand (PDOD) on general road networks. The reasons for estimating the PDOD instead of the deterministic dynamic OD demand (DDOD) are four-fold: 1) PDOD enables the modeling of system variation \\citep{han2018stochastic}, and hence the corresponding traffic model is more reliable; 2) later we will show that there is a theoretical bias when using the deterministic dynamic OD demand estimation (DDODE) framework with stochastic traffic flow; 3) the probabilistic dynamic OD estimation (PDODE) framework makes full use of multi-day traffic data, and the confidence level of the estimated PDOD can be quantified. In particular, the confidence in estimation accuracy increases as the amount of data increases in the PDODE framework; 4) the estimated PDOD helps public agencies operate and manage stochastic complex road networks more robustly \\citep{jin2019behavior}.\n\n\nBefore focusing on the PDODE problem, we first review the large body of literature on DDODE problems; then their extensions to PDODE problems are discussed.\nThe DDODE problem was originally proposed and solved through a generalized least squares (GLS) formulation by assuming that the networks are not congested and travelers' behaviors ({\\em e.g.} route choice, departure time choice) are exogenous \\citep{cascetta1993dynamic}. On congested networks, travelers' behaviors need to be considered endogenously. A bi-level formulation is then proposed on top of the GLS formulation, in which the upper-level problem solves the GLS formulation with fixed travelers' behaviors and the lower-level problem updates the travelers' behaviors \\citep{tavana2001internally}. 
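In the uncongested case the GLS formulation admits a closed-form solution via its normal equations; the following sketch uses a toy network whose assignment matrix, counts, and prior are invented for illustration:

```python
import numpy as np

# Minimal sketch of the GLS OD-estimation objective
#   min_x (c - A x)^T W (c - A x) + (x - x0)^T V (x - x0)
# solved in closed form. A, c, x0, W, V are toy values.
A = np.array([[1.0, 0.0, 1.0],    # contributions of 3 OD pairs
              [0.0, 1.0, 1.0]])   # to 2 observed links
c = np.array([120.0, 150.0])      # observed link counts
x0 = np.array([60.0, 80.0, 55.0]) # prior OD demand
W = np.eye(2)                     # weight on count residuals
V = 0.1 * np.eye(3)               # weight on prior deviations

# Normal equations of the quadratic objective
H = A.T @ W @ A + V
x_hat = np.linalg.solve(H, A.T @ W @ c + V @ x0)
print("estimated OD demand:", x_hat)
```

The prior term regularizes what would otherwise be an under-determined system (more OD pairs than observed links), which is the same role it plays in the cited GLS literature.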
For more details on the bi-level formulation, readers are referred to \\citet{nguyen1977estimating, leblanc1982selection, fisk1989trip, yang1992estimation, florian1995coordinate, jha2004development,nie2008variational}. The DDODE problem can also be solved with real-time data feeds from and for ATIS\/ATMS applications, and state-space models are usually adopted to estimate the OD demand on a rolling basis \\citep{bierlaire2004efficient, zhou2007structural, ashok2000alternative}. Another interesting trend is that emerging data sources are becoming available to estimate OD demand directly, which include automatic vehicle identification data \\citep{cao2021day}, mobile phone data \\citep{bachir2019inferring}, Bluetooth data \\citep{cipriani2021traffic}, GPS trajectories \\citep{ros2022practical}, and satellite images \\citep{kaack2019truck}. Unlike for static networks \\citep{wu2018hierarchical, waller2021rapidex}, a universal framework that can integrate multi-source data is still lacking for dynamic networks.\n\nSolution algorithms for the DDODE problem can be categorized into two types: 1) meta-heuristic methods; 2) gradient-based methods.\nThough meta-heuristic methods might be able to search for the global optimum, most studies only handle small networks with low-dimensional OD demand \\citep{patil2022methods}. In contrast, gradient-based methods can be applied to large-scale networks without requiring excessive computational resources.\nThe performance of gradient-based methods depends on how accurately the gradient of the GLS formulation can be evaluated. \\citet{balakrishna2008time, cipriani2011gradient} adopt the simultaneous perturbation stochastic approximation (SPSA) framework to approximate the gradients. \\citet{lee2009new, vaze2009calibration, ben2012dynamic, lu2015enhanced, tympakianaki2015c, antoniou2015w, oh2019demand, qurashi2019pc} further enhance the SPSA-based methods. \\citet{lu2013dynamic} discuss evaluating the gradients of dynamic OD demand on congested networks. 
\\citet{flotterod2011bayesian, yu2021bayesian} derive the gradient of OD demand in a Bayesian inference framework. \\citet{osorio2019dynamic, osorio2019high, patwary2021metamodel, dantsuji2022novel} develop a meta-model to approximate the gradients of dynamic OD demand through linear models. Recently, \\citet{wu2018hierarchical, ma2019estimating} propose a novel approach to evaluate the gradient of OD demand efficiently through computational graphs. \n\nA few studies have explored the possibility of estimating PDOD, and this problem turns out to be much more challenging than the DDODE problem. As far as we know, all existing studies related to probabilistic OD demand focus on static networks. For example, a statistical inference framework with a Markov Chain Monte Carlo (MCMC) algorithm is proposed to estimate the probabilistic OD demand \\citep{hazelton2008statistical}. The GLS formulation is also extended to consider the variance\/covariance matrices in order to estimate the probabilistic OD demand \\citep{shao2014estimation, shao2015estimation}. \\citet{ma2018statistical} estimate the probabilistic OD demand under statistical traffic equilibrium using Maximum Likelihood Estimators (MLE). Recently, \\citet{yang2019estimating} adopt the Generalized Method of Moments (GMM) to estimate the parameters of the probability distributions of OD demand. 
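The SPSA framework mentioned above approximates the gradient with only two loss evaluations per iteration, regardless of the dimension of the OD demand vector. Below is a minimal Python sketch on a purely hypothetical least-squares instance (the toy assignment matrix, observed link flows, step sizes, and iteration count are all illustrative, not taken from the cited studies):

```python
import numpy as np

def spsa_gradient(loss, q, c=0.1, rng=None):
    """One SPSA gradient estimate: perturb all coordinates simultaneously
    with a random +/-1 (Rademacher) vector and difference two loss values."""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=q.shape)
    g = (loss(q + c * delta) - loss(q - c * delta)) / (2.0 * c)
    return g * delta  # 1/delta_i equals delta_i for +/-1 entries

# Hypothetical toy instance: 2 OD entries mapped to 2 link flows.
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])          # illustrative assignment matrix (links x OD)
x_obs = np.array([3.0, 4.0])        # illustrative observed link flows
loss = lambda q: float(np.sum((A @ q - x_obs) ** 2))

rng = np.random.default_rng(0)
q = np.array([1.0, 1.0])
for _ in range(300):                # plain gradient descent with SPSA gradients
    q = q - 0.05 * spsa_gradient(loss, q, rng=rng)
```

For a quadratic loss the SPSA estimate is unbiased, so the iterates approach the least-squares solution without ever forming an analytical gradient; this is what makes SPSA attractive when the loss is only available through a traffic simulator.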
\n\nEstimating the probabilistic dynamic OD demand (PDOD) is challenging, and the reasons are three-fold: 1) the PDODE problem requires modeling the dynamic traffic networks in the probabilistic space, hence a number of existing models need to be adapted or re-formulated \\citep{shao2006reliability, nakayama2014consistent, watling2015stochastic, ma2017variance}; 2) estimating the probabilistic OD demand is an under-determined problem, and the problem dimension of PDODE is much higher than that of DDODE \\citep{shao2015estimation,ma2018statistical,yang2019estimating}; 3) solving the PDODE problem is more computationally intensive than solving the DDODE problem, hence new approaches need to be developed to improve the efficiency of the solution algorithm \\citep{flotterod2017search, ma2018estimating, shen2019spatial}. \n\n\n\nIn both PDODE and DDODE formulations, travelers' behaviors are modeled through dynamic traffic assignment (DTA) models. Two major types of DTA models are Dynamic User Equilibrium (DUE) models and Dynamic System Optimal (DSO) models. The DUE models search for the user-optimal traffic conditions such that all travelers in the same OD pair have the same utilities \\citep{mahmassani1984dynamic, nie2010solving}; DSO models solve for the system optimum, in which the total disutility is minimized \\citep{shen2007path, qian2012system, ma2014continuous}. Most DTA models rely on Dynamic Network Loading (DNL) models on general networks \\citep{ma2008polymorphic}, and the DNL models simulate all the vehicle trajectories and spatio-temporal traffic conditions given the origin-destination (OD) demand and fixed travelers' behaviors. \n\n\nOne noteworthy observation is that many studies have shown great potential in improving the solution efficiency by casting network modeling formulations into computational graphs \\citep{wu2018hierarchical, ma2019estimating, sun2019analyzing, zhang2021network, kim2021computational}. 
The advantage of using computational graphs for the PDODE problem lies in that the computational graph shares similarities with deep neural networks from many perspectives. Hence a number of state-of-the-art techniques, which were originally developed to improve the efficiency of training neural networks, can be directly used for solving the PDODE problem. These techniques include, but are not limited to, adaptive gradient-based methods, dropout \\citep{srivastava2014dropout}, GPU acceleration, and multi-processing \\citep{zinkevich2009slow}. Some of these techniques have been examined in our previous study, and the experimental results demonstrate great potential on large-scale networks \\citep{ma2019estimating}. Additionally, multi-source data can be seamlessly integrated into the computational graph to estimate the OD demand.\n\nThe success of computational graphs motivates the development of end-to-end frameworks, and this paper inherits this idea to estimate the mean and standard deviation of the PDOD simultaneously. Ideally, the computational graph involves the variables that will be estimated (decision variables, {\\em e.g.}, mean and standard deviation of the PDOD), intermediate variables ({\\em e.g.}, path\/link flow distributions), and observed data ({\\em e.g.}, observed traffic volumes), and it finally computes the objective function as a scalar. To connect these variables, all the standard neural network operations can be employed. The chain rule and back-propagation can be adopted to update all the variables on the computational graph. This process is also known as differentiable programming \\citep{jax2018github}. As some of the variables in the computational graph carry physical meaning, we view the computational graph as a powerful tool to integrate data-driven approaches and domain-oriented knowledge. An overview of a computational graph is presented in Figure~\\ref{fig:cg}. 
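To make the chain rule and back-propagation idea concrete, the following sketch hand-codes one forward and one backward pass through a linear OD-to-path-to-link mapping on a tiny hypothetical instance (the matrices and observed flows are invented for illustration); a differentiable-programming library would generate the backward pass automatically:

```python
import numpy as np

# Hypothetical tiny instance: 2 OD entries, 3 paths, 2 links.
p = np.array([[0.6, 0.0],
              [0.4, 0.0],
              [0.0, 1.0]])          # route choice matrix (paths x OD entries)
rho = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])   # assignment ratios (links x paths)
x_obs = np.array([5.0, 7.0])        # illustrative observed link flows

def forward(q):
    """Forward pass on the computational graph: OD -> path -> link -> loss."""
    f = p @ q
    x = rho @ f
    return 0.5 * np.sum((x - x_obs) ** 2), x

def backward(q):
    """Backward pass: propagate dL/dX back to dL/dq with the chain rule."""
    _, x = forward(q)
    dL_dx = x - x_obs       # gradient of the 0.5 * squared-l2 loss
    dL_df = rho.T @ dL_dx   # through X = rho F
    dL_dq = p.T @ dL_df     # through F = p Q
    return dL_dq
```

A finite-difference check confirms the backward pass reproduces the true gradient, which is exactly the mechanism that lets gradient-based methods update the decision variables on the graph.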
\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.85\\linewidth]{cg}\n\t\\caption{\\footnotesize{An illustrative figure for computational graphs.}}\n\t\\label{fig:cg}\n\\end{figure}\n\n\nIn this paper, we develop a data-driven framework that solves the probabilistic dynamic OD demand estimation (PDODE) problem using multi-day traffic data on general networks. The proposed framework rigorously formulates the PDODE problem on computational graphs, and different statistical distances ({\\em e.g.}, $\\ell_p$-norm, Wasserstein distance, KL divergence, Bhattacharyya distance) are used and compared for the objective function. \nThe closest studies to this paper are those of \\citet{wu2018hierarchical, ma2019estimating}, which construct the computational graphs for static and dynamic OD demand estimation, respectively. This paper extends the usage of computational graphs to solve PDODE problems. The main contributions of this paper are summarized as follows:\n\\begin{enumerate}[label=\\arabic*)]\n\t\\item We illustrate the potential bias in the DDODE framework when dynamic OD demands are stochastic.\n\t\\item We rigorously formulate the probabilistic dynamic OD estimation (PDODE) problem, and different statistical distances are compared for the objective function. It is found that the $\\ell_1$ and $\\ell_2$ norms have advantages in estimating the mean and standard deviation of the PDOD, respectively, and the 2-Wasserstein distance achieves a balanced accuracy in estimating both mean and standard deviation. \n\t\\item The PDODE formulation is vectorized and cast into a computational graph, and a reparameterization trick is developed for the first time to estimate the mean and standard deviation of the PDOD simultaneously using adaptive gradient-based methods.\n\t\\item We examine the proposed PDODE framework on a large-scale network to demonstrate its effectiveness and computational efficiency.\n\\end{enumerate}\n\n\nThe remainder of this paper is organized as follows. 
Section~\\ref{sec:example} illustrates the necessity of the PDODE, and section~\\ref{sec:model} presents the proposed model formulation and casts the formulation into computational graphs. Section~\\ref{sec:solution} proposes a novel solution algorithm with a reparameterization trick. Numerical experiments on both small and large networks are conducted in section~\\ref{sec:experiment}. Finally, conclusions and future research are summarized in section \\ref{sec:con}.\n\n\n\\section{An Illustrative Example}\n\\label{sec:example}\nTo illustrate the necessity of considering the demand variation when the traffic flow is stochastic, we show that the DDODE framework can under-estimate the DDOD (mean of the PDOD) when traffic flow is stochastic. Consider a simple two-link network with a bottleneck, as shown in Figure~\\ref{fig:bottle}. The capacity of the bottleneck is 2,000 vehicles\/hour, and the incoming flow follows a Gaussian distribution with a mean of 2,000 vehicles\/hour. Due to the limited bottleneck capacity, we can compute the probability density functions (PDFs) of the queue accumulation rate and the flow rate on the downstream link, as shown in Figure~\\ref{fig:bottle}.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.85\\linewidth]{example.png}\n\t\\caption{\\footnotesize{A simple network with a bottleneck.}}\n\t\\label{fig:bottle}\n\\end{figure}\n\nSuppose link 2 is installed with a loop detector; we aim to estimate the OD demand from Origin to Destination. If the DDODE method is used, the variation in the traffic flow observation on link 2 is ignored and the mean traffic flow is used; since the downstream flow can never exceed the bottleneck capacity, this mean is strictly below 2,000 vehicles\/hour. Therefore, the estimated OD demand will be less than 2,000 vehicles\/hour. One can see the demand is under-estimated, and the bias is due to ignoring the flow variation. In contrast, the flow variation is considered in our proposed model. 
By matching the PDF of the observed traffic flow, the distribution of the OD demand can be estimated in an unbiased manner if the model specifications of the OD demand are correct. Overall, considering flow variation could improve the estimation accuracy of traffic demand, which motivates the development of the PDODE framework.\n\n\n\\section{Model}\n\\label{sec:model}\nIn this section, we first present the model assumptions. Then important components of the probabilistic traffic dynamics on general networks, which include the PDOD, route choice models, and network loading models, are discussed. The PDODE problem is formulated and cast into a vectorized representation using random vectors. Lastly, we propose different statistical distances as the objective function.\n\n\\subsection{Assumptions}\nLet $Q_{rs}^h$ represent the dynamic traffic demand (number of travelers departing to travel) from origin $r$ to destination $s$ in time interval $h$, where $r \\in R, s\\in S$ and $h \\in H$. $R$ is the set of origins, $S$ is the set of destinations, and $H$ is the set of all time intervals. \n\\begin{assumption}\n\t\\label{as:mvn}\n\tThe probabilistic dynamic OD demand (PDOD) follows a multivariate Gaussian distribution with a diagonal covariance matrix. For simplicity, we also assume that $Q_{rs}^h$ is bounded such that $Q_{rs}^h \\geq 0$. Readers are referred to \\citet{nakayama2016effect} for more discussions about the assumption of bounded Gaussian distributions of OD demand.\n\\end{assumption}\n\n\\begin{assumption}\n\tThe dynamic traffic flows, including OD demand, path flow, and link flow, are infinitesimal. 
Therefore, the variation of travelers' choices is not considered \\citep{ma2017variance}.\n\\end{assumption}\n\n\\subsection{Modeling the probabilistic network dynamics}\nWe present the different components and their relationships on a probabilistic and dynamic network.\n\\subsubsection{Probabilistic dynamic OD demand} \nThe dynamic OD demand $Q_{rs}^h$ is a univariate random variable, and it can be decomposed into two parts, as shown in Equation~\\ref{eq:od}.\n\\begin{eqnarray}\n\t\\label{eq:od}\n\tQ_{rs}^h = q_{rs}^h + \\varepsilon_{rs}^h\n\\end{eqnarray}\nwhere $q_{rs}^h$ is the mean OD demand for OD pair $rs$ in time interval $h$, which is a deterministic scalar, while $\\varepsilon_{rs}^h$ represents the randomness of the OD demand. Based on Assumption~\\ref{as:mvn}, $\\varepsilon_{rs}^h$ follows a zero-mean Gaussian distribution, as presented in Equation~\\ref{eq:random}.\n\n\\begin{eqnarray}\n\t\\label{eq:random}\n\t\\varepsilon_{rs}^h \\sim \\mathcal{N}\\left(0, \\left(\\sigma_{rs}^h\\right)^2 \\right)\n\\end{eqnarray}\nwhere $\\sigma_{rs}^h$ is the standard deviation of $Q_{rs}^h$, and $\\mathcal{N}(\\cdot, \\cdot)$ represents the Gaussian distribution.\n\n\\subsubsection{Travelers' Route Choice}\nTo model the travelers' route choice behaviors, we define the time-dependent route choice portion $p_{rs}^{kh}$ such that it distributes the OD demand $Q_{rs}^{h}$ to the path flow $F_{rs}^{kh}$ by Equation~\\ref{eq:ODpath}.\n\\begin{eqnarray}\n\t\\label{eq:ODpath}\n\tF_{rs}^{kh} = p_{rs}^{kh} Q_{rs}^h \n\\end{eqnarray}\nwhere $F_{rs}^{kh}$ is the path flow (number of travelers departing to travel along a path) for the $k$th path in OD pair $rs$ in time interval $h$. 
The route choice portion $p_{rs}^{kh}$ can be determined through a generalized route choice model, as presented in Equation~\\ref{eq:gen_choice}.\n\\begin{eqnarray}\n\t\\label{eq:gen_choice}\n\tp_{rs}^{kh} = \\Psi_{rs}^{kh}\\left( \\D\\left(\\{C_{rs}^{kh}\\}_{rskh}\\right), \\D\\left(\\{T_{a}^h\\}_{ah}\\right)\\right)\n\\end{eqnarray}\nwhere $\\Psi_{rs}^{kh}$ is the generalized route choice model and the operator $\\D(\\cdot)$ extracts all the parameters in a certain distribution. For example, if $Y\\sim \\N(\\mu, \\sigma^2)$, then $\\D(Y) = (\\mu, \\sigma)$. $T_{a}^{h}$ represents the link travel time for link $a$ in time interval $h$, and $C_{rs}^{kh}$ represents the path travel time for the $k$th path in OD pair $rs$ departing in time interval $h$. Equation~\\ref{eq:gen_choice} indicates that the route choice portions are based on the distributions of link travel time and path travel time. In this paper we use travel time as the disutility function, but any form of disutility can be used as long as it can be simulated by the network loading model $\\Lambda$ introduced below. The generalized travel time can include road tolls, left-turn penalties, delays at intersections, travelers' preferences, and so on.\n\n\n\n\\subsubsection{Dynamic network loading} For a dynamic network, the network conditions ({\\em i.e.}, path travel time, link travel time, and delays) are governed by the link\/node flow dynamics, which can be modeled through dynamic network loading (DNL) models \\citep{ma2008polymorphic}. Let $\\Lambda(\\cdot)$ represent the DNL model, as presented in Equation~\\ref{eq:dnl}.\n\\begin{eqnarray}\n\t\\label{eq:dnl}\n\t\\left\\{T_{a}^{h}, C_{rs}^{kh}, {\\rho}_{rs}^{ka}(h, h') \\right\\} _{r,s,k,a,h, h'} = \\Lambda(\\{F_{rs}^{kh}\\}_{r,s,k,h})\n\\end{eqnarray}\nwhere $\\rho_{rs}^{ka}(h, h')$ is the dynamic assignment ratio (DAR), which represents the portion of the $k$th path flow departing within time interval $h$ for OD pair $rs$ that arrives at link $a$ within time interval $h'$ \\citep{ma2018estimating}. 
Link $a$ is chosen from the link set $A$, and path $k$ is chosen from the set of all paths for OD pair $rs$, as represented by $a \\in A, k \\in K_{rs}$. We remark that $T_{a}^{h}, C_{rs}^{kh}, \\rho_{rs}^{ka}(h, h')$ are random variables as they are the function outputs of the random variable $F_{rs}^{kh}$. \n\nThe DNL model $\\Lambda$ depicts the network dynamics through traffic flow theory \\citep{zhang2013modelling,jin2012link}. Essentially, many existing traffic simulation packages, including but not limited to MATSIM \\citep{balmer2009matsim}, Polaris \\citep{stolte2002polaris}, BEAM \\citep{sheppard2017modeling}, DynaMIT \\citep{ben1998dynamit}, DYNASMART \\citep{mahmassani1992dynamic}, DTALite \\citep{zhou2014dtalite} and MAC-POSTS \\citep{qian2016dynamic, CARTRUCK}, can be used as the function $\\Lambda$. In this paper, MAC-POSTS is used as $\\Lambda$.\n\nFurthermore, the link flow can be modeled by Equation~\\ref{eq:link}.\n\\begin{eqnarray}\n\t\\label{eq:link}\n\tX_a^{h'} = \\sum_{rs \\in K_q} \\sum_{k \\in K_{rs}} \\sum_{h \\in H}{\\rho}_{rs}^{ka}(h, h') F_{rs}^{kh}\n\\end{eqnarray}\nwhere $X_a^{h'}$ represents the flow of link $a$ that arrives in time interval $h'$, and $K_q$ is the set of all OD pairs. \n\n\\subsubsection{Statistical equilibrium}\nThe route choice portion $p_{rs}^{kh}$ is a deterministic variable rather than a random variable, because we assume that travelers' behaviors follow the statistical equilibrium originally defined in \\citet{ma2017variance}, as presented in Definition~\\ref{def:equi}.\n\\begin{definition}\n\t\\label{def:equi}\n\tA road network is under statistical equilibrium, if all travelers practice the following behavior: on each day, each traveler from origin $r$ to destination $s$ independently chooses route $k$ for departure time interval $h$ with a deterministic probability $p_{rs}^{kh}$. 
For a sufficient number of days, this choice behavior yields a stabilized distribution of travel costs with parameters $\\D\\left(\\{C_{rs}^{kh}\\}_{rskh}\\right), \\D\\left(\\{T_{a}^h\\}_{ah}\\right)$. This stabilized distribution, in turn, results in the deterministic probabilities $p_{rs}^{kh} = \\Psi_{rs}^{kh}\\left( \\D\\left(\\{C_{rs}^{kh}\\}_{rskh}\\right), \\D\\left(\\{T_{a}^h\\}_{ah}\\right)\\right)$, where $\\Psi_{rs}^{kh}(\\cdot)$ is a general route choice function. Mathematically, we say the network is under statistical equilibrium when the random variables $(\\{Q_{rs}^h \\}_{rsh},\\{F_{rs}^{kh}\\}_{rskh}, \\{X_a^h\\}_{ah}, \\{C_{rs}^{kh}\\}_{rskh}, \\{T_a^h\\}_{ah})$ are consistent with Equations \\ref{eq:ODpath}, \\ref{eq:gen_choice} and \\ref{eq:dnl}.\n\\end{definition}\n\nThe statistical equilibrium indicates that the multi-day traffic conditions are independent and identically distributed, which differs from the assumptions in day-to-day traffic models. Readers are referred to \\citet{ma2017variance} for more details. The assumption of statistical equilibrium allows us to estimate the distribution of link\/path\/OD flow from the observed data, and the empirical covariance matrix of link\/path\/OD flow can approximate the corresponding true covariance matrix when a large amount of data is available.\n\n\\subsubsection{Vectorization} \nTo simplify notations, all the related variables are vectorized. We set $N = |H|$, $K=|K_q|$, and denote the total number of paths by $\\Pi = \\sum_{rs} |K_{rs}|$. The vectorized variables are presented in Table~\\ref{tab:mcvec}.\n\\begin{table*}[h]\n\t\\begin{center}\n\t\t\\caption{Variable vectorization table (R.V.: random variable).}\n\t\t\\label{tab:mcvec}\n\t\t\\begin{tabular}{p{3cm}cccccp{4.5cm}}\n\t\t\t\\hline\n\t\t\tVariable & R.V. 
&Scalar & Vector& Dimension & Type & Description\\\\\n\t\t\t\\hline\\hline \\rule{0pt}{3ex}\n\t\t\tMean OD flow & No &$q_{rs}^h$ & $\\vec{q}$ &$\\mathbb{R}^{NK}$ & Dense & $q_{rs}^h$ is placed at entry $(h-1)K + rs$\\\\\n\t\t\t\\hline \\rule{0pt}{3ex}\n\t\t\tStandard deviation of OD flow & No &$\\sigma_{rs}^h$ & $\\boldsymbol \\sigma$ &$\\mathbb{R}^{NK}$ & Dense & $\\sigma_{rs}^h$ is placed at entry $(h-1)K + rs$\\\\\n\t\t\t\\hline \\rule{0pt}{3ex}\n\t\t\tRandomness of OD flow & Yes &$\\varepsilon_{rs}^h$ & $\\boldsymbol \\varepsilon$ &$\\mathbb{R}^{NK}$ & Dense & $\\varepsilon_{rs}^h$ is placed at entry $(h-1)K + rs$\\\\\n\t\t\t\\hline \\rule{0pt}{3ex}\n\t\t\tOD flow & Yes&$Q_{rs}^h$ & $\\vec{Q}$ &$\\mathbb{R}^{NK}$ & Dense & $Q_{rs}^h$ is placed at entry $(h-1)K + rs$\\\\\n\t\t\t\\hline \\rule{0pt}{3ex}\n\t\t\tPath flow & Yes &$F_{rs}^{kh}$ &$\\mathbf{F}$ & $\\mathbb{R}^{N\\Pi}$ & Dense & $F_{rs}^{kh}$ is placed at entry $(h-1)\\Pi + k$\\\\\n\t\t\t\\hline \\rule{0pt}{3ex}\n\t\t\tLink flow & Yes &$X_{a}^h$ & $\\mathbf{X}$ &$\\mathbb{R}^{N|A|}$ & Dense & $X_{a}^h$ is placed at entry $(h-1)|A| + a$\\\\\n\t\t\t\\hline \\rule{0pt}{3ex}\n\t\t\tLink travel time & Yes&$T_{a}^h$ & $\\mathbf{T}$ &$\\mathbb{R}^{N|A|}$ & Dense & $T_{a}^h$ is placed at entry $(h-1)|A| + a$\\\\\n\t\t\t\\hline \\rule{0pt}{3ex}\n\t\t\tPath travel time & Yes&$C_{rs}^{kh}$ & $\\mathbf{C}$ &$\\mathbb{R}^{N\\Pi}$ & Dense & $C_{rs}^{kh}$ is placed at entry $(h-1)\\Pi + k$\\\\\n\t\t\t\\hline \\rule{0pt}{3ex}\n\t\t\tDAR matrix & Yes&$\\rho_{rs}^{ka}(h, h')$ &$\\boldsymbol \\rho$ & $\\mathbb{R}^{N|A| \\times N\\Pi}$ & Sparse & $\\rho_{rs}^{ka}(h, h')$ is placed at entry $[(h'-1)|A| + a, (h-1)\\Pi + k]$\\\\\n\t\t\t\\hline \\rule{0pt}{3ex}\n\t\t\tRoute choice matrix & No&$p_{rs}^{kh}$ &$\\mathbf{p}$ & $\\mathbb{R}^{N\\Pi \\times NK}$ & Sparse & $p_{rs}^{kh}$ is placed at entry $[(h-1)\\Pi + k, (h-1)K + rs]$\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table*}\n\nUsing the notations 
presented in Table~\\ref{tab:mcvec}, we can rewrite Equations~\\ref{eq:od}, \\ref{eq:random}, \\ref{eq:ODpath}, \\ref{eq:gen_choice}, \\ref{eq:dnl}, and \\ref{eq:link} as Equation~\\ref{eq:vec}.\n\\begin{equation}\n\t\\label{eq:vec}\n\t\\begin{array}{llllll}\n\t\t\\vspace{5pt}\n\t\t\\vec{Q} &=& \\vec{q} + \\boldsymbol \\varepsilon\\\\\n\t\t{\\boldsymbol \\varepsilon} &\\sim& \\N\\left(\\vec{0}, {\\boldsymbol \\sigma}^2\\right)\\\\\n\t\t\\vec{F} &=& \\vec{p}\\vec{Q}\\\\\n\t\t\\vec{p} &= & \\Psi \\left( \\D \\left(\\vec{C} \\right), \\D \\left(\\vec{T}\\right) \\right)& \\\\\n\t\t\\left \\{ \\vec{C}, \\vec{T}, {\\boldsymbol\\rho} \\right\\} &= & \\Lambda(\\vec{F}) & \\\\\n\t\t\\vec{X} &=&{\\boldsymbol \\rho} \\vec{F}\n\t\\end{array}\n\\end{equation}\nwhere ${\\boldsymbol \\sigma}^2$ denotes the element-wise square of the vector ${\\boldsymbol \\sigma}$, which forms the diagonal of the covariance matrix. In the rest of this paper, we will use the vectorized notations for simplicity.\n\n\\subsection{Formulating the PDODE problem}\nThe PDODE problem is formulated in this section. In particular, different objective functions are discussed.\n\\subsubsection{Objective function} \n\\label{sec:obj}\nTo formulate the PDODE problem, we first define the objective function in the optimization problem. The DDODE problem minimizes the gap between the estimated (reproduced) and the observed traffic conditions. The gap is usually measured through the $\\ell_2$-norm, which is commonly used to measure the distance between two deterministic variables. However, the PDODE problem is formulated in the probabilistic space, and we need to measure the distance between the distributions of the observed traffic conditions and the estimated (reproduced) traffic conditions. 
\nTo this end, we define a generalized form to measure the distance between the observed and estimated distributions of traffic conditions, as presented by Equation~\\ref{eq:ere}.\n\n\\begin{eqnarray}\n\t\\label{eq:ere}\n\t\\mathcal{L}_0 = \\mathcal{M} \\left(\\tilde{\\mathbf{X}}, \\mathbf{X}(\\mathbf{Q})\\right) \n\\end{eqnarray}\nwhere $\\mathcal{M}$ measures the statistical distance between two distributions, which is defined in Definition~\\ref{def:stat}. \n\n\n\\begin{definition}\n\\label{def:stat}\nThe statistical distance $\\mathcal{M}(\\mathbf{X}_1, \\mathbf{X}_2)$ is defined as the distance between two random vectors ({\\em i.e.}, two probabilistic distributions) $\\mathbf{X}_1$ and $\\mathbf{X}_2$, and it should satisfy two properties: 1) $\\mathcal{M}(\\mathbf{X}_1, \\mathbf{X}_2) \\geq 0, \\forall~\\mathbf{X}_1, \\mathbf{X}_2 $; 2) $\\mathcal{M}(\\mathbf{X}_1, \\mathbf{X}_2) = 0 \\iff \\mathbf{X}_1 = \\mathbf{X}_2$. \n\\end{definition}\n\nThe statistical distance may not be symmetric with respect to $\\mathbf{X}_1$ and $\\mathbf{X}_2$, and hence it may not be viewed as a metric. Various statistical distances can be used for $\\mathcal{M}$, and we review the existing literature to list some commonly used distances that have a closed form for Gaussian distributions. 
We further simplify the notation $\\mathbf{X}(\\mathbf{Q})$ to $\\mathbf{X}$, and we assume $\\tilde{\\mathbf{X}} \\sim \\mathcal{N}\\left( \\tilde{\\mathbf{x}}, \\Sigma_{\\tilde{\\mathbf{X}}} \\right)$ and $\\mathbf{X} \\sim \\mathcal{N}\\left( \\mathbf{x}, \\Sigma_{\\mathbf{X}} \\right)$; then different statistical distances can be computed as follows.\n\\begin{itemize}\n\t\\item $\\ell_p$-norm on distribution parameters: this metric directly compares the $\\ell_p$-norms of the differences between the mean vectors and between the covariance matrices, which can be written as:\n\t$$\\|\\tilde{\\mathbf{x}} - \\mathbf{x}\\|_p + \\| \\Sigma_{\\tilde{\\mathbf{X}}} - \\Sigma_{\\mathbf{X}}\\|_p $$\n\t\\item Wasserstein distance: the 2-Wasserstein distance has a closed form for Gaussian distributions, and $\\mathcal{M}\\left(\\tilde{\\mathbf{X}}, \\mathbf{X}\\right)$ can be written as:\n\t$$\\|\\tilde{\\mathbf{x}} - \\mathbf{x}\\|_2^2 + \\text{Tr}\\left( \\Sigma_{\\tilde{\\mathbf{X}}} + \\Sigma_{\\mathbf{X}} -2 \\left(\\Sigma_{\\tilde{\\mathbf{X}}}^{1\/2} \\Sigma_{\\mathbf{X}} \\Sigma_{\\tilde{\\mathbf{X}}}^{1\/2} \\right)^{1\/2} \\right)$$\n\n\n\n\t\\item Kullback\u2013Leibler (KL) divergence: also known as relative entropy. KL divergence is not symmetric, and we choose the forward KL divergence to avoid taking the inverse of $\\Sigma_{\\mathbf{X}}$, which can be written as \n\t$$\\frac{1}{2} \\left[ \\log \\frac{|\\Sigma_{\\tilde{\\mathbf{X}}}|}{|\\Sigma_{\\mathbf{X}}|} + (\\tilde{\\mathbf{x}} - \\mathbf{x})^T\\Sigma_{\\tilde{\\mathbf{X}}}^{-1} (\\tilde{\\mathbf{x}} - \\mathbf{x}) + \\text{Tr}\\left(\\Sigma_{\\tilde{\\mathbf{X}}}^{-1} \\Sigma_{\\mathbf{X}}\\right) - N|A| \\right]. 
$$\n\tWe note that the KL divergence can be further extended to the Jensen\u2013Shannon (JS) divergence, but it requires taking the inverse of $\\Sigma_{\\mathbf{X}}$, so we do not consider it in this study.\n\t\\item Bhattacharyya distance: we set $\\Sigma = \\frac{\\Sigma_{\\tilde{\\mathbf{X}}} + \\Sigma_{\\mathbf{X}}}{2}$, then $\\mathcal{M}\\left(\\tilde{\\mathbf{X}}, \\mathbf{X}\\right)$ can be written as:\n\t$$\\frac{1}{8}(\\tilde{\\mathbf{x}} - \\mathbf{x})^T\\Sigma^{-1} (\\tilde{\\mathbf{x}} - \\mathbf{x}) + \\frac{1}{2}\\ln \\frac{|\\Sigma|}{\\sqrt{|\\Sigma_{\\tilde{\\mathbf{X}}}| |\\Sigma_{\\mathbf{X}}|}}$$\n\\end{itemize}\n\nAll the above statistical distances satisfy Definition~\\ref{def:stat}, and they are continuous with respect to the distribution parameters. More importantly, all the statistical distances are differentiable, as each of the operations used is differentiable, and auto-differentiation techniques can be used to derive the overall gradient of the statistical distances with respect to the distribution parameters \\citep{speelpenning1980compiling}. 
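As a quick numerical illustration, in the special case where both covariance matrices are diagonal, the 2-Wasserstein distance and the forward KL divergence above reduce to element-wise expressions (diagonal matrices commute, so the trace and determinant terms factor over the diagonal entries). The sketch below uses hypothetical mean and standard deviation vectors, and includes the additive constant that makes the KL divergence vanish for identical distributions:

```python
import numpy as np

def wasserstein2_sq(mu1, sig1, mu2, sig2):
    """Squared 2-Wasserstein distance between Gaussians with diagonal
    covariances: the trace term collapses to sum((sig1 - sig2)^2)."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((sig1 - sig2) ** 2)

def kl_forward(mu, sig, mu_obs, sig_obs):
    """KL( N(mu, diag(sig^2)) || N(mu_obs, diag(sig_obs^2)) );
    only the observed covariance is inverted."""
    r = (sig / sig_obs) ** 2
    return 0.5 * np.sum(-np.log(r) + (mu_obs - mu) ** 2 / sig_obs ** 2 + r - 1.0)

# Hypothetical estimated and observed link-flow distribution parameters.
mu_est, sig_est = np.array([10.0, 6.0]), np.array([2.0, 1.0])
mu_obs, sig_obs = np.array([11.0, 5.5]), np.array([1.5, 1.2])
d_w2 = wasserstein2_sq(mu_est, sig_est, mu_obs, sig_obs)
d_kl = kl_forward(mu_est, sig_est, mu_obs, sig_obs)
```

Both quantities are non-negative and vanish exactly when the two distributions coincide, matching the two properties required of a statistical distance above; the Wasserstein distance is additionally symmetric, while the KL divergence is not.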
Theoretically, all the above distances can be used as the objective function in PDODE, although we will show in the numerical experiments that their performances can be drastically different.\n\n\n\\subsubsection{PDODE formulation}\n\nTo simulate the availability of multi-day traffic data, we assume that traffic count data over $I$ days are collected, and $\\tilde{\\mathbf{X}}^{(i)}$ is the $i$th observed link flow data, $i= 1, 2, \\cdots, I$, where each $\\tilde{\\mathbf{X}}^{(i)}$ independently and identically follows the distribution of $\\tilde{\\mathbf{X}}$.\nBecause the actual distributions of $\\tilde{\\mathbf{X}}$ and $\\mathbf{X}$ are unknown, we use a Monte Carlo approximation of $\\mathcal{L}_0$, as presented in Equation~\\ref{eq:ereap}.\n\\begin{eqnarray}\n\t\\label{eq:ereap}\n\t\\mathcal{L} &=& \\mathbb{E}_{\\left({\\boldsymbol \\alpha}, {\\boldsymbol \\beta}\\right) \\sim \n\t\t{\\tilde{\\mathbf{X}}}^{\\bigotimes I} \\bigotimes {\\mathbf{X}}^{\\bigotimes L}} \\mathcal{M} \\left(\\boldsymbol \\alpha, \\boldsymbol \\beta\\right) \\nonumber\\\\\n\t&=&\\frac{1}{IL}\\sum_{i=1}^I \\sum_{l=1}^{L} \\mathcal{M} \\left(\\tilde{\\mathbf{X}}^{(i)}, \\mathbf{X}^{(l)}\\right) \\label{eq:L}\n\\end{eqnarray}\nwhere $I, L$ are the numbers of samples from the distributions of $\\tilde{\\mathbf{X}}$ and $\\mathbf{X}$, respectively, and $\\tilde{\\mathbf{X}}^{(i)}, \\mathbf{X}^{(l)}$ are the samples drawn from the distributions of $\\tilde{\\mathbf{X}}$ and $\\mathbf{X}$, respectively.\nBy the law of large numbers (LLN), $\\mathcal{L}$ converges to $\\mathcal{L}_0$ when $I,L \\to \\infty$.\n\n\nCombining the constraints in Equation~\\ref{eq:vec} and the objective function in Equation~\\ref{eq:L}, we are now ready to formulate the PDODE problem as Formulation~\\ref{eq:pdode1}.\n\n\\begin{equation}\n\t\\label{eq:pdode1}\n\t\\begin{array}{rrcllll}\n\t\t\\vspace{5pt}\n\t\t\\displaystyle \\min_{\\vec{q}, {\\boldsymbol \\sigma}} & \\multicolumn{4}{l}{\\displaystyle \\frac{1}{IL}\\sum_{i=1}^I \\sum_{l=1}^{L} \\mathcal{M} 
\\left(\\tilde{\\mathbf{X}}^{(i)}, \\mathbf{X}^{(l)}\\right)} &\\\\\n\t\t\\textrm{s.t.} & \\left \\{ \\vec{C}^{(l)}, \\vec{T}^{(l)}, {\\boldsymbol \\rho}^{(l)} \\ \\right\\} &= & \\Lambda(\\vec{F}^{(l)}) & \\forall l&\\\\\n\t\t~ & \\vec{p}^{(l)} &= & \\Psi \\left( \\D(\\vec{C}^{(l)}), \\D(\\vec{T}^{(l)}) \\right)& \\forall l &\\\\\n\t\t~ & \\mathbf{Q}^{(l)} &\\sim& \\mathcal{N}\\left(\\vec{q}, {\\boldsymbol \\sigma}^2\\right)&\\forall l&\\\\\n\t\t~ & \\vec{F}^{(l)} & = & \\vec{p}^{(l)}\\vec{Q}^{(l)} & \\forall l&\\\\\n\t\t~ & \\mathbf{X}^{(l)} & = & {\\boldsymbol \\rho}^{(l)} \\vec{F}^{(l)} &\\forall l &\n\t\\end{array}\n\\end{equation}\nwhere $\\vec{C}^{(l)}, \\vec{T}^{(l)}, \\mathbf{X}^{(l)}, \\vec{F}^{(l)}, \\mathbf{Q}^{(l)}, {\\boldsymbol \\rho}^{(l)}, \\vec{p}^{(l)}$ are the sample distributions of $\\vec{C}, \\vec{T}, \\mathbf{X}, \\vec{F}, \\mathbf{Q}, {\\boldsymbol \\rho}, \\vec{p}$, respectively. Formulation~\\ref{eq:pdode1} searches for the optimal mean and standard deviation of the dynamic OD demand to minimize the statistical distance between the observed and estimated link flow distributions such that the DNL and travelers' behavior models are satisfied. We note that Formulation~\\ref{eq:pdode1} can be extended to include the traffic speed, travel time, and historical OD demand data \\citep{ma2019estimating}. 
It is straightforward to show that Formulation~\\ref{eq:pdode1} is always feasible, as long as the sampled PDOD is feasible to the traffic simulator, as presented in Proposition~\\ref{prop:fea}.\n\n\n\\begin{proposition}[Feasibility]\n\\label{prop:fea}\nThere exists a feasible solution $(\\vec{q}, {\\boldsymbol \\sigma})$ to Formulation~\\ref{eq:pdode1} if the non-negative support of the distribution $\\mathcal{N}\\left(\\vec{q}, {\\boldsymbol \\sigma}^2\\right)$ is feasible to the traffic simulator $\\Lambda$.\n\\end{proposition}\n\n\nTo compute $\\mathcal{M} \\left(\\tilde{\\mathbf{X}}^{(i)}, \\mathbf{X}^{(l)}\\right)$, we first characterize the distribution of $\\mathbf{X}^{(l)}$ by propagating $\\vec{Q}^{(l)}$ through the network models, {\\em i.e.}, $\\vec{F}^{(l)} = \\vec{p}^{(l)} \\vec{Q}^{(l)}$ and $\\mathbf{X}^{(l)} = {\\boldsymbol \\rho}^{(l)} \\vec{F}^{(l)}$, where ${\\boldsymbol \\rho}^{(l)}$ is obtained from the DNL model $\\Lambda$.\nHence the computation of $\\mathcal{M} \\left(\\tilde{\\mathbf{X}}^{(i)}, \\mathbf{X}^{(l)}\\right)$ is based on the distribution of $\\vec{Q}^{(l)}$, as the distribution of $\\mathbf{X}^{(l)}$ is obtained from $\\mathbf{Q}^{(l)}$. Additionally, the sample $\\mathbf{Q}^{(l)}$ is generated from $\\mathbf{Q}^{(l)} \\sim \\mathcal{N}\\left(\\vec{q}, {\\boldsymbol \\sigma}^2\\right)$. \n\n\n\nFormulation~\\ref{eq:pdode1} is challenging to solve because the derivatives of the loss function with respect to $\\vec{q}$ and ${\\boldsymbol \\sigma}$ are difficult to obtain. The reason is that $\\mathbf{Q}^{(l)}$ is sampled from the Gaussian distribution, and it is difficult to compute $\\frac{\\partial \\mathbf{Q}^{(l)}}{\\partial \\vec{q}}$ and $\\frac{\\partial \\mathbf{Q}^{(l)}}{\\partial {\\boldsymbol \\sigma}}$ directly. Without closed-form gradients, most existing studies adopt a two-step approach to estimate the PDOD: the first step estimates the OD demand mean, and the second step estimates the standard deviation. The two steps are conducted iteratively until convergence \\citep{ma2018statistical, yang2019estimating}. 
\nIn this paper, we propose a novel solution that estimates the mean and standard deviation simultaneously by casting the PDODE problem into computational graphs. Details will be discussed in the following section.\n\n\n\n\\section{Solution Algorithm}\n\\label{sec:solution}\nIn this section, a reparameterization trick is developed to enable the simultaneous estimation of the mean and standard deviation of the dynamic OD demand. The PDODE formulation in Equation~\\ref{eq:pdode1} is then cast into a computational graph. We then summarize the step-by-step solution framework for PDODE. Finally, the underdetermination issue of the PDODE problem is discussed.\n\n\\subsection{A key reparameterization trick}\n\nTo solve Formulation~\\ref{eq:pdode1}, our objective is to directly evaluate the derivatives of the loss with respect to both the mean and standard deviation of the OD demand, {\\em i.e.}, $\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{q}}$ and $\\frac{\\partial \\mathcal{L}}{\\partial {\\boldsymbol \\sigma}}$; then gradient descent methods can be used to search for the optimal solution.\n\nWe leave the computation of $\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{q}}$ and $\\frac{\\partial \\mathcal{L}}{\\partial {\\boldsymbol\\sigma}}$ to the next section, while this section addresses a key issue: evaluating $\\frac{\\partial \\vec{Q}^{(l)}}{\\partial \\mathbf{q}}$ and $\\frac{\\partial \\vec{Q}^{(l)}}{\\partial {\\boldsymbol\\sigma}}$. The idea is simple and straightforward. Instead of directly sampling $\\vec{Q}^{(l)}$ from $\\mathcal{N}\\left(\\vec{q}, {\\boldsymbol\\sigma}^2\\right)$, we conduct the following steps to generate $\\vec{Q}^{(l)}$: 1) sample ${\\boldsymbol\\nu}^{(l)} \\in \\mathbb{R}^{NK}$ from $\\mathcal{N}\\left(\\vec{0}, \\mathbf{1}\\right)$; 2) obtain $\\vec{Q}^{(l)}$ by $\\vec{Q}^{(l)} = \\vec{q} + {\\boldsymbol\\sigma} \\circ {\\boldsymbol\\nu}^{(l)}$, where $\\circ$ represents the element-wise product. 
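The two sampling steps above can be sketched in a few lines of Python; the dimensions, demand values, and the illustrative quadratic loss are hypothetical, and in the full framework the loss would instead be the statistical distance evaluated after dynamic network loading:

```python
import numpy as np

rng = np.random.default_rng(42)
q = np.array([10.0, 8.0, 6.0, 4.0])      # mean of the PDOD (hypothetical)
sigma = np.array([1.0, 0.5, 2.0, 1.5])   # standard deviation of the PDOD

# Step 1: sample noise from the standard Gaussian, independently of (q, sigma).
nu = rng.standard_normal(q.shape)
# Step 2: deterministic transform, so Q is differentiable in q and sigma.
Q = q + sigma * nu

# For an illustrative loss L = 0.5 * ||Q - x_obs||^2, the chain rule with
# dQ/dq = 1 and dQ/dsigma = nu (element-wise) gives both gradients at once.
x_obs = np.array([9.0, 8.5, 7.0, 3.0])   # hypothetical observation
dL_dQ = Q - x_obs
grad_q = dL_dQ * 1.0
grad_sigma = dL_dQ * nu
```

Because the randomness is isolated in the noise vector, the sampled demand remains a differentiable function of the decision variables, which is exactly what allows adaptive gradient-based methods to update the mean and standard deviation simultaneously.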
\n\nThrough the above reparameterization trick, we can compute the derivatives $\\frac{\\partial \\vec{Q}^{(l)}}{\\partial \\mathbf{q}}$ and $\\frac{\\partial \\vec{Q}^{(l)}}{\\partial {\\boldsymbol\\sigma}}$ by Equation~\\ref{eq:odd}.\n\\begin{equation}\n\t\\label{eq:odd}\n\t\\begin{array}{llllll}\n\t\t\\frac{\\partial \\vec{Q}^{(l)}}{\\partial \\mathbf{q}} &=& \\vec{1}_{NK}\\\\\n\t\t\\frac{\\partial \\vec{Q}^{(l)}}{\\partial {\\boldsymbol\\sigma}} &=& {\\boldsymbol\\nu}^{(l)}\n\t\\end{array}\n\\end{equation}\nwhere $\\vec{1}_{NK} \\in \\mathbb{R}^{NK}$ is a vector of ones. This reparameterization trick was originally used in training the variational autoencoder (VAE) \\citep{kingma2013auto}, and we adapt it to solve the PDODE problem. \n\n\\subsection{Reformulating PDODE through computational graphs}\nWith the reparameterization trick discussed in the previous section, we can reformulate the PDODE problem as Equation~\\ref{eq:pdode2}.\n\\begin{equation}\n\t\\label{eq:pdode2}\n\t\\begin{array}{rrcllll}\n\t\t\\vspace{5pt}\n\t\t\\displaystyle \\min_{\\vec{q}, {\\boldsymbol \\sigma}} & \\multicolumn{4}{l}{\\displaystyle \\frac{1}{IL}\\sum_{i=1}^I \\sum_{l=1}^{L} \\mathcal{M} \\left(\\tilde{\\mathbf{X}}^{(i)}, \\mathbf{X}^{(l)}\\right)} &\\\\\n\t\t\\textrm{s.t.} & \\left \\{ \\vec{C}^{(l)}, \\vec{T}^{(l)}, {\\boldsymbol \\rho}^{(l)} \\ \\right\\} &= & \\Lambda(\\vec{F}^{(l)}) & \\forall l&\\\\\n\t\t~ & \\vec{p}^{(l)} &= & \\Psi \\left( \\D(\\vec{C}^{(l)}), \\D(\\vec{T}^{(l)}) \\right)& \\forall l &\\\\\n\t\t~ & {\\boldsymbol\\nu}^{(l)} &\\sim& \\mathcal{N}\\left(\\vec{0}, \\mathbf{1}\\right)& \\forall l&\\\\\n\t\t~ & \\mathbf{Q}^{(l)} & = & \\vec{q} + {\\boldsymbol\\sigma}\\circ{\\boldsymbol\\nu}^{(l)}& \\forall l&\\\\\n\t\t~ & \\vec{F}^{(l)} & = & \\vec{p}^{(l)}\\vec{Q}^{(l)} & \\forall l&\\\\\n\t\t~ & \\mathbf{X}^{(l)} & = & {\\boldsymbol \\rho}^{(l)} \\vec{F}^{(l)} &\\forall l &\n\t\\end{array}\n\\end{equation}\n\nExtending the forward-backward algorithm 
proposed by \\citet{ma2019estimating}, we can solve Formulation~\\ref{eq:pdode2}. The forward-backward algorithm consists of two major components: 1) the forward iteration computes the objective function of Formulation~\\ref{eq:pdode2}; 2) the backward iteration evaluates the gradients of the objective function with respect to the mean and standard deviation of the dynamic OD demand ($\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{q}}, \\frac{\\partial \\mathcal{L}}{\\partial {\\boldsymbol\\sigma}}$). \n\n\\begin{figure*}[h]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{diagram}\n\t\\caption{The computational graph for PDODE.}\n\t\\label{fig:fb}\n\\end{figure*}\n\n{\\bf Forward iteration.} In the forward iteration, we compute the objective function based on the sample distribution of observation $\\tilde{\\mathbf{X}}^{(i)}$ in a decomposed manner, as presented in Equation~\\ref{eq:forward}.\n\\begin{equation}\n\t\\begin{array}{lllllll}\n\t\t\\label{eq:forward}\n\t\t{\\boldsymbol\\nu}^{(l)} &\\sim& \\mathcal{N}\\left(\\vec{0}, \\mathbf{1}\\right)&\\forall l\\\\\n\t\t\\mathbf{Q}^{(l)} & = & \\vec{q} + {\\boldsymbol\\sigma} \\circ {\\boldsymbol \\nu}^{(l)}&\\forall l\\\\\n\t\t\\vec{F}^{(l)} &=& \\vec{p}^{(l)} \\vec{Q}^{(l)}&\\forall l\\\\\n\t\t\\mathbf{X}^{(l)} &=& {\\boldsymbol \\rho}^{(l)} \\vec{F}^{(l)}&\\forall l\\\\\n\t\t\\mathcal{L} &=& \\frac{1}{L}\\sum_{l=1}^{L} \\mathcal{M} \\left(\\tilde{\\mathbf{X}}^{(i)}, \\mathbf{X}^{(l)}\\right)\\\\\n\t\\end{array}\n\\end{equation}\n\n\n{\\bf Backward iteration.} The backward iteration evaluates the gradients of the mean and standard deviation of the PDOD through the back-propagation (BP) method, as presented in Equation~\\ref{eq:backward}.\n\\begin{equation}\n\t\\begin{array}{llllllll}\n\t\t\\label{eq:backward}\n\t\t\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{X}^{(l)}} &=& \\frac{\\partial \\mathcal{M} \\left(\\tilde{\\mathbf{X}}^{(i)}, 
\\mathbf{X}^{(l)}\\right)}{\\partial \\mathbf{X}^{(l)}} & \\forall l\\\\\n\t\t\\vspace{5pt}\n\t\t\\frac{\\partial \\mathcal{L}}{\\partial \\vec{F}^{(l)}} &=& {{\\boldsymbol \\rho}^{(l)}}^T \\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{X}^{(l)}}& \\forall l \\\\\n\t\t\\vspace{5pt}\n\t\t\\frac{\\partial \\mathcal{L}}{\\partial \\vec{Q}^{(l)}} &=& {\\vec{p}^{(l)}}^T \\frac{\\partial \\mathcal{L}}{\\partial \\vec{F}^{(l)}}& \\forall l\\\\\n\t\t\\vspace{5pt}\n\t\t\\frac{\\partial \\mathcal{L}}{\\partial \\vec{q}} &= &\\frac{\\partial \\mathcal{L}}{\\partial \\vec{Q}^{(l)}}& \\forall l\\\\\n\t\t\\vspace{5pt}\n\t\t\\frac{\\partial \\mathcal{L}}{\\partial {\\boldsymbol\\sigma}} &= & {\\boldsymbol\\nu}^{(l)}\\circ \\frac{\\partial \\mathcal{L}}{\\partial \\vec{Q}^{(l)}} & \\forall l\n\t\\end{array}\n\\end{equation}\n\n\nThe forward-backward algorithm is presented in Figure~\\ref{fig:fb}. The forward iteration is conducted along the solid lines, during which the temporary matrices ${\\boldsymbol\\nu}, \\mathbf{p}, {\\boldsymbol\\rho}$ are also prepared (through the dot-dashed links). The objective $\\mathcal{L}$ is computed at the end of the forward iteration; the backward iteration is then conducted along the dashed lines, and both the OD demand mean and standard deviation are updated simultaneously.\n\n\nIn one iteration of the forward-backward algorithm, we first run the forward iteration to compute the objective function, and then the backward iteration is performed to evaluate the gradients of the objective function with respect to $\\mathbf{q}, {\\boldsymbol\\sigma}$.\nWith the forward-backward algorithm computing the gradient of the objective function, we can solve the PDODE formulation in Equation~\\ref{eq:pdode2} through gradient-based methods. For example, the projected gradient descent method can be used to iteratively update the OD demand. This paper adopts Adagrad, a gradient-based method using adaptive step sizes \\cite{duchi2011adaptive}. 
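A minimal NumPy sketch of one forward-backward pass follows. The route-choice portions $\vec{p}$, the DAR matrix ${\boldsymbol\rho}$, and the squared-error distance are illustrative stand-ins for the quantities produced by the simulator; the analytic gradients from Equation~\ref{eq:backward} are checked against a finite difference, followed by one Adagrad-style projected update with an assumed step size.

```python
import numpy as np

rng = np.random.default_rng(7)
n_od, n_path, n_link = 3, 4, 5

# Hypothetical fixed route-choice portions p and DAR matrix rho; in the
# full framework both are produced by the traffic simulator each epoch.
p = rng.uniform(0, 1, (n_path, n_od))
rho = rng.uniform(0, 1, (n_link, n_path))
x_obs = rng.uniform(50, 150, n_link)

q = np.array([40.0, 60.0, 80.0])
sigma = np.array([4.0, 6.0, 8.0])
nu = rng.standard_normal(n_od)

# Forward iteration (single sample l, squared-error loss as the distance).
Q = q + sigma * nu
F = p @ Q
X = rho @ F
L = 0.5 * np.sum((X - x_obs) ** 2)

# Backward iteration, following the chain rule structure of eq:backward.
dL_dX = X - x_obs
dL_dF = rho.T @ dL_dX
dL_dQ = p.T @ dL_dF
dL_dq = dL_dQ
dL_dsigma = nu * dL_dQ

# Finite-difference check of the analytic gradient on the first OD pair.
eps = 1e-6
def fwd(qv):
    return 0.5 * np.sum((rho @ (p @ (qv + sigma * nu)) - x_obs) ** 2)
num = (fwd(q + np.array([eps, 0.0, 0.0])) - fwd(q - np.array([eps, 0.0, 0.0]))) / (2 * eps)
print(dL_dq[0], num)  # the two values agree

# One Adagrad-style projected update (illustrative step size 0.1).
G = dL_dq ** 2  # accumulated squared gradients
q_new = np.maximum(q - 0.1 * dL_dq / (np.sqrt(G) + 1e-8), 0.0)
```

The projection (`np.maximum(..., 0.0)`) keeps the demand mean non-negative, mirroring the projected gradient step described above.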
As for the stopping criterion, Proposition~\\ref{prop:stop} indicates that the following two conditions are equivalent: 1) in the forward iteration, the distributions of path cost, link cost, path flow, and OD demand do not change; 2) in the backward iteration, $\\frac{\\partial \\mathcal{L}}{\\partial \\vec{q}} = 0$ and $\\frac{\\partial \\mathcal{L}}{\\partial {\\boldsymbol\\sigma}} = 0$. \n\n\\begin{proposition}[Stopping criterion]\n\t\\label{prop:stop}\n\tThe PDODE formulation is solved when the forward and backward iterations converge, namely the distributions of path cost, link cost, path flow, and OD demand do not change, and $\\frac{\\partial \\mathcal{L}}{\\partial \\vec{q}} = 0$ and $\\frac{\\partial \\mathcal{L}}{\\partial {\\boldsymbol\\sigma}} = 0$.\n\\end{proposition}\n\nSince $\\mathcal{L} \\to \\mathcal{L}_0$ when $I,L\\to \\infty$, we claim the PDODE problem is solved when $\\frac{\\partial \\mathcal{L}}{\\partial \\vec{q}}$ and $\\frac{\\partial \\mathcal{L}}{\\partial {\\boldsymbol\\sigma}}$ are close to zero given a large $I$ and $L$.\n\n\n\n\\subsection{Solution Framework}\nThe overall solution algorithm for PDODE is presented in Table~\\ref{tab:sol}.\n\\begin{table}[h]\n\t\\begin{tabular}{p{2.2cm}p{13.6cm}}\n\t\t\\textbf{Algorithm}& \\textbf{[\\textit{PDODE-FRAMEWORK}]} \\\\[3ex]\\hline\n\t\t\\textit{Step 0} & \\textit{Initialization.} Initialize the mean and standard deviation of dynamic OD demand $\\vec{q}, {\\boldsymbol\\sigma}$. 
\\\\[3ex]\\hline\n\t\t\\textit{Step 1} & \\textit{Data preparation.} Randomly select a batch of observed data to form the sample distribution of $\\tilde{\\mathbf{X}}^{(i)}$.\\\\[3ex]\\hline\n\t\t\\textit{Step 2} & \\textit{Forward iteration.} Iterate over $l=1,\\cdots,L$: for each $l$, sample ${\\boldsymbol\\nu}^{(l)}$, solve the DNL models and travelers' behavior model, and compute the objective function $\\mathcal{L}$ based on Equation~\\ref{eq:forward} with $\\vec{q}, {\\boldsymbol\\sigma}$. \\\\[3ex]\\hline\n\t\t\\textit{Step 3} & \\textit{Backward iteration.} Compute the gradient of the mean and standard deviation of dynamic OD demand using the backward iteration presented in Equation~\\ref{eq:backward}. \\\\[3ex]\\hline\n\t\t\\textit{Step 4} & \\textit{Update PDOD.} Update the mean and standard deviation of dynamic OD ($\\vec{q}, {\\boldsymbol\\sigma}$) with the gradient-based projection method.\\\\[3ex]\\hline\n\t\t\\textit{Step 5} & \\textit{Batch convergence check.} Continue when the changes of OD demand mean and standard deviation are within tolerance. Otherwise, go to Step 2.\\\\[3ex]\\hline\n\t\t\\textit{Step 6} & \\textit{Convergence check.} Iterate over $i=1,\\cdots, I$. Stop when the changes of OD demand mean and standard deviation are within tolerance across different $i$. Otherwise, go to Step 1.\\\\[3ex]\\hline\n\t\\end{tabular}\n\t\\caption{The PDODE solution framework.}\n\t\\label{tab:sol}\n\\end{table}\n\nIn practical applications, Step 3 and Step 4 can be conducted using stochastic gradient projection methods to enhance the algorithm efficiency.\nAdditionally, Step 3 and Step 4 can be conducted using auto-differentiation and deep learning packages, such as PyTorch, TensorFlow, and JAX, and both steps can be run efficiently on multi-core CPUs and GPUs. \n\n\n\n\\subsection{Underdetermination and evaluation criteria}\nIn this section, we discuss the underdetermination issue for the PDODE problem. 
It is well known that both static OD estimation and dynamic OD estimation problems are underdetermined \\citep{yang1995heuristic,ma2018statistical}. We claim that PDODE is also underdetermined because its problem dimension is much higher than that of its deterministic version: in the case of PDODE, not only the OD demand mean but also the standard deviation needs to be estimated.\nTherefore, estimating the exact PDOD accurately with limited observed data is challenging in most practical applications. Instead, since the objective of PDODE is to better reproduce the observed traffic conditions, we can evaluate PDODE methods based on whether they reproduce the network conditions accurately, measuring how well the traffic conditions are reproduced on the observed links and on all links, respectively. Using this concept, we categorize the PDODE evaluation criteria into three levels as follows:\n\\begin{enumerate}[label=\\roman*)]\n\t\\item Observed Links (\\texttt{OL}): The traffic conditions simulated from the estimated PDOD on the observed links are accurate. \n\t\\item All Links (\\texttt{AL}): The traffic conditions simulated from the estimated PDOD on all the links are accurate.\n\t\\item Dynamic OD demand (\\texttt{OD}): The estimated PDOD is accurate.\n\\end{enumerate}\n\n\\begin{figure*}[h]\n\t\\centering\n\t\\includegraphics[width=0.75\\linewidth]{ec}\n\t\\caption{An overview of the evaluation criteria in PDODE.}\n\t\\label{fig:ec}\n\\end{figure*}\n\nThe three evaluation criteria are summarized in Figure~\\ref{fig:ec}. One can see that the objective of Formulation~\\ref{eq:pdode2} is actually \\texttt{OL}, and we include a series of constraints in order to achieve \\texttt{AL}. Specifically, the flow conservation and route choice model help to achieve \\texttt{AL}. As for \\texttt{OD}, there is no guarantee for large-scale networks. 
Many recent studies report similar observations \\citep{osorio2019high, ma2019estimating, wollenstein2022joint}.\n\nOverall, a PDODE framework that only satisfies \\texttt{OL} tends to overfit the observed data. We claim that a PDODE framework that satisfies \\texttt{AL} is sufficient for most needs in traffic operation and management, as the ultimate goal of PDODE is to understand the dynamic traffic conditions on the networks. To achieve \\texttt{OD}, a high-quality prior PDOD matrix is necessary to reduce the search space \\citep{ma2018estimating}.\n\nFrom the perspective of the underdetermination issue of PDODE, \\texttt{OL} is always determined as it only focuses on the observed links. On general networks, \\texttt{OD} is an underdetermined problem as the cardinality of a network is much smaller than the dimension of OD demand. Whether \\texttt{AL} is determined depends on the network topology and data availability, and hence it is promising to make full use of the proposed computational graphs to achieve \\texttt{AL}, as the computational graphs have an advantage in multi-source data integration and fast computation. This further motivates the necessity to formulate the PDODE problem using computational graphs.\n\n\n\\section{Numerical experiments}\n\\label{sec:experiment}\nIn this section, we first examine the proposed PDODE framework on a small network. Different statistical distances are compared and the optimal one is selected. We further compare PDODE with the DDODE method, and the parameter sensitivity is discussed. In addition, the effectiveness and scalability of the PDODE framework are demonstrated on a real-world large-scale network: the SR-41 corridor. 
All the experiments in this section are conducted on a desktop with an Intel Core i7-6700K CPU at 4.00GHz $\\times$ 8, 2133 MHz 2 $\\times$ 16GB RAM, and a 500GB SSD.\n\n\\subsection{A small network}\n\n\\subsubsection{Settings}\n\\label{sec:setting}\nWe first work on a small network with 13 nodes, 27 links, and 3 OD pairs, as presented in Figure~\\ref{fig:31net}. There are in total 29 paths for the 3 OD pairs ($1 \\to9$, $5 \\to9$, and $10 \\to9$). Links connecting nodes $1, 5, 9, 10$ are OD connectors, and the rest of the links are standard roads with two lanes. Triangular fundamental diagrams (FD) are used for the standard links, in which the length of each road segment is $0.5$ mile, the flow capacity is 2,000 vehicles\\/hour, and the holding capacity is $200$ vehicles\\/mile. The free flow speed is uniformly sampled from $20$ to $45$ miles\\/hour.\n\n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.75\\linewidth]{network33}\n\t\\caption{\\footnotesize{An overview of the small network.}}\n\t\\label{fig:31net}\n\\end{figure}\n\n\nTo evaluate the performance of the proposed method, we generate the mean and standard deviation of PDOD using a triangular pattern, as shown in Figure~\\ref{fig:dod}. The PDOD is high enough to generate congestion. The observed flow is obtained by solving the statistical traffic equilibrium, and then Gaussian noise is added to the observation. The performance of the PDODE estimation formulation is assessed by comparing the estimated flow\nwith the ``true'' flow (including observed link flow, all link flow, and OD demand) \\citep{antoniou2015towards}. We set the study period to 10 time intervals and each time interval lasts 100 seconds.\nA Logit model with a dispersion factor of $0.1$ is applied to the mean route travel time for modeling the travelers' behaviors. 
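The Logit route-choice step can be sketched as follows; the travel-time values are hypothetical, and the convention that utility equals the negative travel time scaled by the dispersion factor is an assumption for illustration.

```python
import numpy as np

def logit_route_choice(mean_path_tt, theta=0.1):
    """Logit path-choice portions from mean path travel times.

    Assumed convention: utility = -theta * travel time, so lower-cost
    paths receive higher choice probabilities. Subtracting the maximum
    utility keeps the exponentials numerically stable.
    """
    u = -theta * np.asarray(mean_path_tt, dtype=float)
    e = np.exp(u - u.max())
    return e / e.sum()

# Hypothetical mean travel times for the paths of one OD pair.
p = logit_route_choice([12.0, 15.0, 20.0], theta=0.1)
print(p)  # the fastest path gets the largest share
```

These portions form the entries of $\vec{p}^{(l)}$ that map OD demand to path flow in the forward iteration.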
\n\nSupposing 100 days' data are collected, dynamic OD demand is generated from the ``true'' distribution on each day, and the demand is loaded on the network with the route choice model and DNL model. We randomly select 12 links to be observed, and random Gaussian noise $\\mathcal{N}(0, 5)$ is further added to the observed link flow. Our task is to estimate the mean and standard deviation of PDOD using the observed 100 days' data.\n\nWe run the proposed PDODE framework presented in Table~\\ref{tab:sol} with projected stochastic gradient descent, and the solution algorithm is Adagrad \\citep{duchi2011adaptive}. We use the loss function $\\mathcal{L}$ to measure the efficiency of the proposed method, as presented in Equation~\\ref{eq:ereap}. \nNote that we loop over all 100 days' data once in each epoch.\n\n\\subsubsection{Comparing different statistical distances}\n\n\\begin{table*}[h]\n\t\\centering\n\t\\begin{tabular}{|l|cc|cc|cc|}\n\t\t\\hline\n\t\t\\multirow{2}{*}{\\backslashbox{$\\mathcal{M}$}{Accuracy}} & \\multicolumn{2}{c|}{\\texttt{OL}} & \\multicolumn{2}{c|}{\\texttt{AL}} & \\multicolumn{2}{c|}{\\texttt{OD}} \\\\\n\t\t\\cline {2-7}\n\t\t~ & Mean & Std & Mean & Std & Mean & Std \\\\\n\t\t\\hline\\hline\n\t\t$\\ell_1$-norm & {\\bf 0.968}& 0.792& {\\bf 0.997}& 0.806& {\\bf 0.996}& 0.804\\\\\n\t\t$\\ell_2$-norm & 0.955 & {\\bf 0.880}& 0.994& {\\bf 0.897} & 0.985& {\\bf 0.892} \\\\\n\t\t2-Wasserstein distance & {\\em 0.961} & {\\em 0.843} & {\\em 0.996} & {\\em 0.861} & {\\em 0.991}& {\\em 0.860} \\\\\n\t\tKL divergence & -0.575 & 0.027 & 0.508 & 0.062 & -0.592 & 0.027 \\\\\n\t\tBhattacharyya distance& -0.726 & -0.004 & 0.460 & 0.029 & -0.748 & -0.005 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Performance of different statistical distances in terms of R-squared score.}\n\t\\label{tab:compare}\n\\end{table*}\n\nWe first compare the different statistical distances discussed in section~\\ref{sec:obj}. 
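For univariate Gaussian marginals, several of the distances compared in Table~\ref{tab:compare} admit simple closed forms, which is convenient for per-entry comparisons of two demand or flow distributions; the snippet below is an illustrative sketch (the framework itself compares sample distributions).

```python
import numpy as np

def w2_gaussian(mu1, s1, mu2, s2):
    """Squared 2-Wasserstein distance between two univariate Gaussians:
    (mu1 - mu2)^2 + (s1 - s2)^2."""
    return (mu1 - mu2) ** 2 + (s1 - s2) ** 2

def kl_gaussian(mu1, s1, mu2, s2):
    """KL(N1 || N2) for univariate Gaussians. Its gradient grows rapidly
    as s2 shrinks, one plausible cause of the instability observed for
    KL-based objectives."""
    return np.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

print(w2_gaussian(10.0, 2.0, 10.0, 2.0))  # 0.0 for identical distributions
print(w2_gaussian(10.0, 2.0, 12.0, 3.0))  # 2^2 + 1^2 = 5.0
```

The closed form makes explicit why the 2-Wasserstein distance penalizes mean and standard deviation errors symmetrically, consistent with its balanced performance reported below.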
Under the same settings as in section~\\ref{sec:setting}, different statistical distances are compared as the objective function in Formulation~\\ref{eq:pdode2}, and the estimation results are presented in Table~\\ref{tab:compare} in terms of R-squared score. We use the \\texttt{r2\\_score} function in sklearn to compute the R-squared score, and the score can be negative (because the model can be arbitrarily worse)\\footnote{\\url{https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.r2\\_score.html}}.\n\n\nOne can see that neither KL divergence nor Bhattacharyya distance yields a proper estimation of PDOD, which may be due to the complicated formulations of their objective functions: gradient explosion and vanishing with respect to the objective function significantly affect the estimation accuracy. The other three statistical distances achieve satisfactory accuracy. Using the $\\ell_1$-norm and $\\ell_2$-norm achieves the best estimation of the PDOD mean and standard deviation, respectively. Both objectives perform stably, which is probably attributable to the simple forms of their gradients. This finding is also consistent with the existing literature \\citep{shao2014estimation,shao2015estimation}. The 2-Wasserstein distance achieves a balanced performance in terms of estimating both mean and standard deviation, which might be because the 2-Wasserstein distance compares the probability density functions, instead of directly comparing the parameters of the two distributions. For the rest of the experiments, we choose the 2-Wasserstein distance as the objective function. \n\n\n\\subsubsection{Basic estimation results}\n\nWe present the detailed estimation results using the 2-Wasserstein distance as the objective function. The estimated and ``true'' PDOD are compared in Figure~\\ref{fig:dod}. One can see that the proposed PDODE framework accurately estimates the mean and standard deviation of the PDOD. 
Particularly, both surging and decreasing trends are quickly captured.\n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{odflow}\n\t\\caption{Comparison between the ``true'' and estimated OD demand (first row: mean; second row: standard deviation; first column: $1\\to9$; second column: $5\\to9$; third column: $10\\to9$; unit: vehicle\/100seconds).}\n\t\\label{fig:dod}\n\\end{figure}\n\n\n\n\\subsubsection{Comparing with the deterministic DODE}\n\n\nTo demonstrate the advantages of the PDODE framework, we also run the standard DDODE framework proposed by \\citet{ma2019estimating} using the same setting and data presented in section~\\ref{sec:setting}. Because the DDODE framework does not estimate the standard deviation, we only evaluate the estimation accuracy of the mean. The comparison is conducted by plotting the estimated OD demand mean, observed link flow, and all link flow against the ``true'' flow for both the PDODE and DDODE frameworks, as presented in Figure~\\ref{fig:31comp}. The algorithm performs well when the scattered points are close to the $y=x$ line.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{flow}\n\t\\caption{Comparison between the ``true'' and estimated flow in terms of \\texttt{OL}, \\texttt{AL}, and \\texttt{OD} (first row: the proposed PDODE framework; second row: the standard DDODE framework; unit:vehicle\/100seconds).}\n\t\\label{fig:31comp}\n\\end{figure}\n\nIt can be seen from Figure~\\ref{fig:31comp} that the PDODE framework better reproduces the ``true'' traffic flow. Firstly, DDODE fits the observed link flow better as it directly optimizes the gap between the observed and estimated link flow. However, the DDODE framework tends to overfit the noisy data because it does not model the variance of the flow explicitly. PDODE provides a better estimation for the unobserved links and OD demand through comprehensive modeling of the flow distribution. 
To summarize, DDODE achieves a higher accuracy on observed links (\\texttt{OL}), while PDODE outperforms DDODE in terms of \\texttt{AL} and \\texttt{OD}. \n\nTo quantify the error, we compute the R-squared scores between the ``true'' and estimated flow for both PDODE and DDODE, as presented in Table~\\ref{tab:sr}.\n\n\\begin{table}[h]\n\t\\centering\n\t\\begin{tabular}{|c|ccc|}\n\t\t\\hline\n\t\t\\backslashbox{Formulation}{Accuracy} & \\texttt{OL} & \\texttt{AL} & \\texttt{OD} \\\\\n\t\t\\hline\\hline\n\t\tPDODE & 0.961 & {\\bf 0.996} & {\\bf 0.991} \\\\\n\t\tDDODE & {\\bf 0.963} & 0.979 & 0.857 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{R-squared scores between the ``true'' and estimated flow for PDODE and DDODE.}\n\t\\label{tab:sr}\n\\end{table}\n\nThe R-squared scores between the ``true'' and estimated OD demand and all-link flow for PDODE are higher than those for DDODE, while the differences between the R-squared scores for observed link flow are relatively small. This further explains the overfitting issue of DDODE on the observed links, and the above experiments verify the illustrative example presented in section~\\ref{sec:example}.\n\n\n\n\\subsubsection{Sensitivity analysis.} \n\nWe also conduct sensitivity analysis regarding the proposed PDODE framework.\n\n{\\bf Impact of travel time.}\nIf the travel time of each link on the network is also observed, the proposed PDODE framework can be extended to incorporate the data. To be specific, we use the travel time information to calibrate the DAR matrix using the approach presented in \\citet{ma2018estimating}.\nIt is expected that the estimation accuracy can be further improved. The comparison of estimation accuracy is presented in Table~\\ref{tab:compare2}. One can see that the inclusion of travel time data is beneficial to all the estimates (\\texttt{OL}, \\texttt{AL}, \\texttt{OD}). 
Particularly, the estimation accuracy of the standard deviation can be improved significantly, by over 5\\%.\n\n\\begin{table*}[h]\n\t\\centering\n\t\\begin{tabular}{|c|cc|cc|cc|}\n\t\t\\hline\n\t\t\\multirow{2}{*}{\\backslashbox{$\\mathcal{M}$}{Accuracy}} & \\multicolumn{2}{c|}{\\texttt{OL}} & \\multicolumn{2}{c|}{\\texttt{AL}} & \\multicolumn{2}{c|}{\\texttt{OD}} \\\\\n\t\t\\cline {2-7}\n\t\t~ & Mean & Std & Mean & Std & Mean & Std \\\\\n\t\t\\hline\\hline\n\t\tPDODE & 0.961 & 0.843 & 0.996 & 0.861 & 0.991& 0.860 \\\\\n\t\tPDODE + travel time & 0.997& 0.925& 0.997& 0.921& 0.998& 0.908 \\\\\n\t\t\\hline\n\t\tImprovement & +3.746\\% & +9.727\\% & +0.100\\% & +9.969\\% & +0.706\\% & +5.581\\%\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Effects of considering travel time data.}\n\t\\label{tab:compare2}\n\\end{table*}\n\n{\\bf Adaptive gradient descent methods.}\nWe compare different adaptive gradient descent methods, and the convergence curves are shown in Figure~\\ref{fig:conv}. Note that conventional gradient descent (GD) and stochastic gradient descent (SGD) cannot converge and their losses do not decrease, so their curves are not shown in the figure. One can see that Adagrad converges quickly within 20 epochs, and the whole 200 epochs take less than 10 minutes. Both Adam and AdaDelta also converge after 60 epochs, while their curves are not as stable as Adagrad's. This is the reason we choose Adagrad in the proposed framework.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.85\\linewidth]{converge}\n\t\\caption{Convergence curves of different adaptive gradient descent methods.}\n\t\\label{fig:conv}\n\\end{figure}\n\nSensitivity analyses regarding the learning rates, number of data samples, number of CPU cores, and noise level have also been conducted, and the results are similar to those in the previous study. 
For details, readers are referred to \\citet{ma2019estimating}.\n\n\\subsection{A large-scale network: SR-41 corridor}\nIn this section, we apply the proposed PDODE framework to a large-scale network. The SR-41 corridor is located in the City of Fresno, California. It consists of one major freeway and two parallel arterial roads. These roads are connected with local streets, as presented in Figure~\\ref{fig:srnet}. The network contains 1,441 nodes, 2,413 links and 7,110 OD pairs \\citep{liu2006streamlined, zhang2008developing}. We consider 6 time intervals and each time interval lasts 15 minutes. The ``true'' dynamic OD mean is generated from $\\texttt{Unif}(0,5)$ and the standard deviation is generated from $\\texttt{Unif}(0,1)$. It is assumed that $500$ links are observed. The statistical traffic equilibrium is solved with the Logit model, and we generate $10$ days' data under the equilibrium. We run the proposed PDODE framework, and $\\mathcal{L}$ with the 2-Wasserstein distance is used to measure the efficiency of the algorithm. No historical OD demand is used, and the initial PDOD is randomly generated for the proposed solution algorithm. The convergence curve is presented in Figure~\\ref{fig:convsr}. \n\n\\begin{figure}[h]\n\t\\centering\n\t\\begin{subfigure}[b]{0.475\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{network_sr41}\n\t\t\\caption{\\footnotesize{Overview of the SR41 corridor.}}\n\t\t\\label{fig:srnet}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.475\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{sr41conv}\n\t\t\\caption{\\footnotesize{Convergence curve of the proposed PDODE framework.}}\n\t\t\\label{fig:convsr}\n\t\\end{subfigure}\n\t\\caption{Network overview and the algorithm convergence curve for the SR41 corridor.}\n\t\\label{fig:srover}\n\\end{figure}\n\nOne can see that the proposed framework performs well and the objective function converges quickly. 
Each epoch takes 37 minutes and hence the algorithm takes 3,700 minutes ($\\sim$ 62 hours) to finish 100 epochs.\n\nAs discussed in previous sections, we do not compare the OD demand, as the estimation of OD demand is underdetermined, and it is challenging to fully recover the exact dynamic OD demand on such a large-scale network without historical OD demand, analogous to DDODE or static ODE in the literature. Instead, we focus on \\texttt{OL} and \\texttt{AL} to assess the performance of the proposed PDODE framework.\n\n\nWe plot the ``true'' and estimated mean of link flow on the observed links and on all links in Figure~\\ref{fig:srcomp}, respectively. \nOne can see that PDODE reproduces the flow on observed links accurately, while the accuracy on all links is relatively lower. This observation differs from that on the small network, which implies extra difficulties and potential challenges of estimating dynamic network flow on large-scale networks, in extremely high dimensions. Quantitatively, the R-squared score is 0.949 on the observed links and 0.851 on all links. Both R-squared scores are satisfactory, and the R-squared score for \\texttt{AL} is higher than that estimated by the DDODE framework ($0.823$). Hence we conclude that the proposed PDODE framework performs well on the large-scale network.\n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{sr41compare}\n\t\\caption{Comparison between the ``true'' and estimated mean of link flow in the PDODE framework (unit:vehicle\/15minutes).}\n\t\\label{fig:srcomp}\n\\end{figure}\n\n\n\n\\section{Conclusions}\n\\label{sec:con}\nThis paper presents a data-driven framework for the probabilistic dynamic origin-destination demand estimation (PDODE) problem. The PDODE problem is rigorously formulated on general networks. \nDifferent statistical distances ({\\em e.g.}, $\\ell_p$-norm, Wasserstein distance, KL divergence, Bhattacharyya distance) are tested as the objective function. 
All the variables involved in the PDODE formulation are vectorized, and the proposed framework is cast into a computational graph.\nBoth the mean and standard deviation of the PDOD can be simultaneously estimated through a novel reparameterization trick. The underdetermination issues of the PDODE problem are also discussed, and three different evaluation criteria (\\texttt{OL}, \\texttt{AL}, \\texttt{OD}) are presented.\n\n\nThe proposed PDODE framework is examined on a small network as well as a real-world\nlarge-scale network. The loss function decreases quickly on both networks and the time consumption is satisfactory. The $\\ell_1$ and $\\ell_2$ norms have advantages in estimating the mean and standard deviation of dynamic OD demand, respectively, and the 2-Wasserstein distance achieves a balanced accuracy in estimating both mean and standard deviation. We also compared the DDODE framework with the proposed PDODE framework. The experimental results show that the DDODE framework tends to overfit on \\texttt{OL}, and PDODE achieves better estimation on \\texttt{AL} and \\texttt{OD}.\n\nIn the near future, we will extend the existing PDODE formulation to estimate the spatio-temporal covariance of the dynamic OD demand. The covariance (correlation) of dynamic OD demand can further help public agencies to better understand the intercorrelation of network dynamics and further improve the effectiveness of the operation\/management strategies. Low-rank or sparsity regularization for the covariance matrix of the PDOD might be necessary. The choice of statistical distances can be better justified through theoretical derivations. The computational graph also has great potential in incorporating multi-source data \\citep{ma2019estimating}, and it is interesting to explore the possibility of estimating the PDOD using emerging data sources, such as vehicle trajectory \\citep{ma2019measuring} and automatic vehicle identification (AVI) data \\citep{cao2021day}. 
The under-determination issue remains a critical challenge for the OD estimation problem (including both DDODE and PDODE), and this study demonstrates the possibility of mitigating the overfitting issue by considering the standard deviation. We believe this sheds light on overcoming the under-determination issues in general OD estimation problems. \n\n\\section*{Supplementary Materials}\nThe proposed PDODE framework is implemented with PyTorch and open-sourced on Github (\\url{https:\/\/github.com\/Lemma1\/Probabilistic-OD-Estimation}).\n\n\n\\ACKNOWLEDGMENT{%\nThe work described in this paper was supported by U.S. National Science Foundation CMMI-1751448. The first author was supported by the National Natural Science Foundation of China (No. 52102385) and a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. PolyU\/25209221). The contents of this paper reflect the views of the authors, who are responsible for the facts and the accuracy of the information presented herein. \n\n\n\n\n\n\n\\clearpage\n\n\\bibliographystyle{informs2014trsc}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRandom object generation is a broad topic, since the word ``object'' has many connotations in mathematics and applied probability. For example, ``object'' could refer to a matrix or a polynomial. Indeed, observed data are random objects; for instance, a vector of observables in a regression context satisfies transparently the idea of a probabilistic \"object\" \\citep{Leemis:2006}. Of late, a class of random object models is growing in popularity, namely Latent Factor (or Feature) Models, abbreviated LFM. The theory and use of these models lie at the intersection of probability theory, Bayesian inference, and simulation methods, particularly Markov chain Monte Carlo (MCMC). 
Deferring the formal description of LFMs to later sections, we first give a heuristic account of certain key ideas central to the paper.\n\nLatent variables are unobserved, or not directly measurable. Parenting skill, speech impediments, socio-economic status, and quality of life are some examples. Latent variables can also correspond to a ``true'' variable observed with error; examples include iron intake measured by a food frequency questionnaire, self-reported weight, and lung capacity measured by forced expiratory volume in one second. In Bayesian hierarchical modeling, latent variables are often used to represent unobserved properties or hidden causes of the data being modeled \\citep{Bishop:1998}. Often, these variables have a natural interpretation in terms of certain underlying but unobserved features of the data; for example, thematic topics in a document or motifs in an image. The simplest such models, which we will refer to as Latent Variable Models (LVMs), typically use a finite number of latent variables, with each datum related to a single latent variable \\citep{Bishop:1998,McLaughlan:2000}. This class of models includes finite mixture models, where a datum is associated with a single latent mixture component, and Hidden Markov Models (HMMs), where each point in a time series is associated with a single latent state \\citep{Baum:Petrie:1996}. All data associated with a given latent parameter are assumed to be independently and identically simulated according to a distribution parametrized by that latent parameter.\n\nGreater flexibility can be obtained by allowing multiple latent features for each datum. This allows different aspects of a datum to be shared with different subsets of the dataset. For example, two articles may share the theme ``science'', but the second article may also exhibit the theme ``finance''. Similarly, a picture of a dog in front of a tree has aspects in common with both pictures of trees and pictures of dogs. 
Models that allow multiple features are typically referred to as Latent Factor Models (LFMs). Examples of LFMs include Bayesian Principal Component Analysis, where data are represented using a weighted superposition of latent factors, and Latent Dirichlet Allocation, where data are represented using a mixture of latent factors; see \\citet{Roweis:Ghahramani:1999} for a review of both LVMs and LFMs.\n\nIn the majority of LVMs and LFMs, the number of latent variables is finite and pre-specified. The appropriate cardinality is often hard to determine \\emph{a priori} and, in many cases, we do not expect our training set to contain exemplars of all possible latent variables. These difficulties have led to the increasing popularity of LVMs and LFMs where the number of latent variables associated with each datum or object is potentially unbounded; see \\citep{Antoniak:1974,Teh:Jordan:Beal:Blei:2006, Griffiths:Ghahramani:2005,Titsias:2007,Broderick:Mackay:Paisley:Jordan:2015}. These latter probabilistic models with an infinite number of parameters are referred to as nonparametric latent variable models (npLVMs) and nonparametric latent factor models (npLFMs). They generally provide richer inferences than their finite-dimensional counterparts, since deeper relationships between the unobserved variables and the observed data can be obtained by relaxing finite distributional assumptions about the probability generating mechanism.\n\nIn many applications, data are assumed \\emph{exchangeable}, in that no information is conveyed by the order in which data are observed. Even though exchangeability is a weaker (hence preferable) assumption than independent and identically distributed data, observed data are often time-stamped emissions from some evolving process. That is, the ordering (or dependency) is crucial to understanding the entire random data-generating mechanism. Two types of dependent data typically arise in practice. 
It is convenient to use terminology from the biomedical literature to distinguish the two. \\emph{Longitudinal dependency} refers to situations where one records multiple entries from the same random process over a period of time. In AIDS research, a biomarker such as a CD4 lymphocyte cell count is observed intermittently for a patient, and its relation to time of death is of interest. In a different context, the ordering of frames in a video sequence or the ordering of windowed audio spectra in a piece of music within a time interval is crucial to our understanding of the entire video or musical piece. \n\n\\emph{Epidemiological dependency} corresponds to situations where our data generating mechanism involves multiple random processes, but where we typically observe each single process at only one covariate value; that is, single records from multiple entities constitute the observed data. For instance, in an annual survey on diabetic indicators one might interview a different group of people each year; the observations correspond to different random processes (i.e., different individuals), but still capture global trends. Or consider articles published in a newspaper: while the articles published today are distinct from those published yesterday, there are likely to be general trends in the themes covered over time. \n\nMost research using npLFMs has focused on the exchangeable setting, with non-dependent nonparametric LFMs being deployed in a number of application areas \\citep{Wood:Griffiths:Ghahramani:2006,Ruiz:Valera:Blanco:PerezCruz:2014,Meeds:Ghahramani:Neal:Roweis:2007}. A number of papers have developed npLFMs for epidemiological dependence \\citep{Foti:Futoma:Rockmore:Williamson:2013,Ren:Wang:Carin:Dunson:2011,Zhou:Yang:Sapiro:Dunson:Carin:2011,Rao:Teh:2009}. In these settings we are often able to make use of conjugacy to develop reasonably efficient stochastic simulation schemes. 
In addition, several nonparametric priors for LFMs have been proposed for longitudinally dependent data~\\citep{Williamson:Orbanz:Ghahramani:2010,Gershman:Frazier:Blei:2015}, but unfortunately these papers, by virtue of their modeling approaches, require computationally complex inference protocols. Furthermore, these existing epidemiologically and longitudinally dependent methods are often invariant under time reversal; this is a poor choice for modeling temporal dynamics, where the direction of causality means that the dynamics are not invariant under time reversal. \n\n\nIn this paper, we introduce a new class of npLFMs that is suitable for time-dependent (or longitudinally dependent) data. From a modeling perspective, the focus is on npLFMs rather than npLVMs, since the separability assumptions underlying LVMs are overly restrictive for most real data. Specifically, we follow the tradition of generative or simulation-based npLFMs. A Bayesian approach is natural in this framework since the form of npLFMs needed to better model temporal dependency involves the use of probability distributions on function spaces; the latter idea is commonly referred to as Bayesian nonparametric inference \\citep{Walker:Damien:Laud:Smith:1999}.\n\nThe primary aims of this research are the following. First, to develop a class of npLFMs with practically useful attributes to generate random objects in a variety of applications. These attributes include an unbounded number of latent factors, the capturing of temporal dynamics in the data, and the tracking of \\emph{persistent} factors over time. The significance of this class of models is best described with a simple, yet meaningful, example. Consider a flautist playing a musical piece. At very short time intervals, if the flautist is playing a B$\\flat$ at time $t$, it is likely that the note would still be playing at time $t+1$. Arguably, this is a continuation of a single note instance that begins at time $t$ and persists to time $t+1$ (or beyond). 
Unlike current approaches, our proposed time-dynamic model captures this (persistent latent factor) dependency in the musical notes from time $t$ to $t+1$ (or beyond). The second goal of this research is to develop a general Markov chain Monte Carlo algorithm to enable full Bayesian implementation of the new npLFM family. Finally, applications of time-dependent npLFMs are shown via simulated and real data analysis. \n\nIn Section~\\ref{sec:bg}, finite and nonparametric LFMs are described. Section~\\ref{sec:ibp} discusses the Indian Buffet Process, which forms the kernel for the new class of npLFMs introduced in Section~\\ref{sec:model_long}. Section~\\ref{sec:inf} details the inference methods used to implement the models in Section~\\ref{sec:model_long}, followed by synthetic and real data illustrations in Section~\\ref{sec:results}. A brief discussion in Section~\\ref{sec:conc} concludes the paper.\n\n\n\\section{Latent Factor Models}\\label{sec:bg}\nA Latent Variable Model (LVM) posits that the variation within a dataset of size $N$ could be described using some set of $K$ features, with each observation associated with a single parameter. As an example, consider a mixture of $K$ Gaussian distributions where each datum belongs to one of the $K$ mixture components parametrized by different means and variances. These parameters, along with the cluster allocations, comprise the latent variables. In alternative settings, the number of features may be infinite; however, since each data point is associated with a single feature, the number of features required to describe the dataset will always be upper bounded by $N$.\n\n\nWhile mixture models are widely used for representing the latent structure of a dataset, there are many practical applications where the observed data exhibit multiple underlying features. For example, in image modeling we may have two pictures, one of a dog beside a tree, and one of a dog beside a bicycle. 
If we assign both images to a single cluster, we ignore the difference between tree and bicycle. If we assign them to different clusters, we ignore the commonality of the dogs. In these situations, LVMs should allow each datum to be associated with multiple latent variables.\n\nIf each datum can be subdivided into a collection of discrete observations, one approach is to use an admixture model, such as latent Dirichlet allocation \\citep{Blei:Ng:Jordan:2003} or a hierarchical Dirichlet process \\citep{Teh:Jordan:Beal:Blei:2006}. Such approaches model the constituent observations of a data point using a mixture model, allowing a data point to express multiple features. For example, if a datum is a text document, the constituent observations might be words, each of which can be associated with a separate latent variable.\n\nIf it is not natural to split a data point into constituent parts---for example, if a data point is described in terms of a single vector---then we can construct models that directly associate each data point with multiple latent variables. This extension of LVMs is typically referred to as Latent Feature Models or Latent Factor Models (LFMs). For clarity, throughout this paper, LVM refers exclusively to models where each datum is associated with a single latent parameter, and LFM refers to models where each datum is associated with multiple latent parameters.\n\nA classic example of an LFM is Factor Analysis \\citep{Cattell:1952}, wherein one assumes $K$ $D$-dimensional latent features (or factors) $f_k$ which are typically represented as a $K\\times D$ matrix $F$. Each datum, $x_n$, is associated with a vector of weights, $\\lambda_n$, known as the factor loading, which determines the degree to which the datum exhibits each factor. 
Letting $X = (x_n)_{n=1}^N$ be the $N\\times D$ data matrix and $L = (\\lambda_n)_{n=1}^N$ be the $N\\times K$ factor loading matrix, we can write $ X = L F + \\mathbf{e}$, \nwhere $\\mathbf{e}$ is a matrix of random noise terms. Factor Analysis can be cast in a Bayesian framework by placing appropriate priors on the factors, loadings and noise terms \\citep{Press:Shigemasu:1989}. Such analysis is used in many contexts; as examples: microarray data \\citep{Hochreiter:Clevert:Obermayer:2006}, dietary patterns \\citep{Venkaiah:Brahmam:Vijayaraghavan:2011}, and psychological test responses \\citep{Tait:1986}. Independent Component Analysis \\citep[ICA,][]{Hyvarinen:Karhunen:Oja:2001} is a related model with independent non-Gaussian factors; ICA is commonly used in blind source separation of audio data.\n\nA serious disadvantage of LFMs such as Factor Analysis and ICA is that they assume a fixed, finite number of latent factors. In many settings, such an assumption is hard to justify. Even with a fixed, finite dataset, picking an appropriate number of factors \\emph{a priori} requires expensive cross-validation. In an online setting, where the dataset is constantly growing, it may be unreasonable to consider any finite upper bound. As illustrations, the number of topics that may appear in a newspaper, or the number of image features that may appear in an online image database, could grow unboundedly over time. One way of obviating this difficulty is to allow an infinite number of latent features \\emph{a priori}, and to ensure that every datum exhibits only a finite number of features, wherein popular features tend to get reused. 
Such a construction would allow the number of exhibited features to grow in an unbounded manner as sample size grows, while still borrowing (statistical) strength from repeated features.\n\nThe transition from finite to infinite dimensional latent factors implies that the probability distributions on these factors in the generative process would now be elements in some function space; i.e., we enter the realm of Bayesian nonparametric inference. There is a vast literature on Bayesian nonparametric models; the classic references are \\citet{Ferguson:1973} and \\citet{Lo:1984}. Since the Indian buffet process is central to this paper, it is discussed in the following subsection. \n\n\n\n\\subsection{The Indian Buffet Process (IBP)}\\label{sec:ibp}\n\n\nA new class of nonparametric distributions of particular relevance to LFMs was developed by \\citet{Griffiths:Ghahramani:2005} who labeled their stochastic process prior the Indian Buffet Process (IBP). This prior adopts a Bayesian nonparametric inference approach to the generative process of an LFM where the goal of unsupervised learning is to discover the latent variables responsible for generating the observed properties of a set of objects.\n\nThe IBP provides a mechanism for selecting overlapping sets of features. This mechanism can be broken down into two components: a global random sequence of feature probabilities that assigns probabilities to infinitely many features, and a local random process that selects a finite subset of these features for each datum. \n\nThe global sequence of feature probabilities is distributed according to a stochastic process known as the beta process \\citep{Hjort:1990,Thibaux:Jordan:2007}. 
Loosely speaking, the beta process is a random measure, $B = \\sum_{k=1}^\\infty \\mu_k \\delta_{\\theta_k}$, that assigns finite mass to a countably infinite number of locations; these atomic masses $\\mu_k$ are independent, and are distributed according to the infinitesimal limit of a beta distribution. The locations, $\\theta_k$, of the atoms parametrize an infinite sequence of latent features.\n\nThe subset selection mechanism is a stochastic process known as the Bernoulli process \\citep{Thibaux:Jordan:2007}. This process samples a random measure $\\zeta_n = \\sum_{k=1}^\\infty z_{nk} \\delta_{\\theta_k}$, where each $z_{nk}\\in \\{0,1\\}$ indicates the presence or absence of the $k$th feature $\\theta_k$ in the latent representation of the $n$th datum, and is sampled independently as $z_{nk}\\sim \\mbox{Bernoulli}(\\mu_k)$. We can use these random measures $\\zeta_n$ to construct a binary feature allocation matrix $Z$ by ordering the features according to their popularity and aligning the corresponding ordered vector of indicators. This matrix will have a finite but unbounded number of columns with at least one non-zero entry; the re-ordering allows us to store the non-zero portion of the matrix in memory. It is often convenient to work directly with this random, binary matrix, and doing so offers certain insights into the properties of the IBP. This representation depicts the IBP as a (stochastic process) prior probability distribution over equivalence classes of binary matrices with a specified number of rows and a random, unbounded number of non-zero columns that grows in expectation as the amount of data increases. \n\nConsider a mathematical representation of the above discussion. Let $Z$ denote a random, binary matrix with $N$ rows and infinitely many columns, $K$ of which contain at least one non-zero entry. 
Then, following \\citet{Griffiths:Ghahramani:2005}, the IBP prior distribution for $Z$ is given by\n\\begin{equation}\np(Z) = \\frac{\\alpha^K}{\\prod_{h=1}^{2^N-1}K_h!}\\exp\\{-\\alpha H_N\\}\\prod_{k=1}^K\\frac{(N-m_k)!(m_k-1)!}{N!}\n\\label{eqn:ibpjoint}\n\\end{equation}\nwhere $m_k = \\sum_{n=1}^N z_{nk}$ is the number of times we have seen feature $k$; $K_h$ is the number of columns whose binary pattern encodes the integer $h$; $H_N$ is the $N$th harmonic number, and $\\alpha>0$ is the parameter of the process. Succinctly, we write $Z \\sim \\mbox{IBP}(\\alpha)$; that is, $Z$ has an IBP distribution with parameter $\\alpha$.\n\nWhat is the meaning of $\\alpha$? Perhaps the most intuitive way to answer this question is to recast $p(Z)$ in Equation~\\ref{eqn:ibpjoint} through the metaphor of an Indian buffet restaurant serving an infinite number of dishes. Customers (observations) sequentially enter this restaurant, and each selects a subset of the dishes (features). The first customer takes $\\mbox{Poisson}(\\alpha)$ dishes. The $n$th customer selects each previously sampled dish with probability $m_k\/n$, where $m_k$ is the number of customers who have previously selected that dish -- i.e., she chooses dishes in proportion to their popularity. In addition, she samples a $\\mbox{Poisson}(\\alpha\/n)$ number of previously untried dishes. This process continues until all $N$ customers have visited the buffet. Now, represent the outcome of this buffet process in a binary matrix $Z$ where the rows of the matrix are customers and the columns are the dishes. The element $z_{nk}$ is 1 if observation $n$ possesses feature $k$. 
Then, after some algebra, it follows that the probability distribution over the random, binary matrix $Z$ (up to a reordering of the columns) induced by this buffet process is invariant to the order in which customers arrived at the buffet, and is the expression given in Equation~\\ref{eqn:ibpjoint}.\n\nThe meaning of $\\alpha$ is now clear. The smaller the $\\alpha$, the lower the number of features with $\\sum_n z_{nk} > 0$, and the lower the average number of features per data point, with the number of features per data point distributed (marginally) as $\\mbox{Poisson}(\\alpha)$. Thus, when the IBP is used in the generative process of an LFM, the total number of features exhibited by $N$ data points will be finite but random, and this number will grow in expectation with the number of data points. This subset selection procedure behaves in a ``rich-get-richer'' manner---if a dish had been selected by previous customers, it would likely be selected by new arrivals to the buffet. More generally, if a feature appears frequently in previously observed data points, it is likely to appear again in subsequent observations.\n\nWe could use the IBP as the basis for an LFM by specifying a prior on the latent factors (henceforward denoted by a $K \\times D$ matrix $A$), as well as a likelihood model for generating observations, as shown in the following examples. 
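The sequential buffet scheme just described translates directly into a short simulation. The following sketch generates the binary matrix $Z$; the function name and interface are ours, not from the paper:

```python
import numpy as np

def sample_ibp(alpha, N, seed=None):
    """Simulate the Indian buffet metaphor for N customers.

    Returns a binary matrix Z whose (n, k) entry is 1 if customer n
    sampled dish k.  Illustrative sketch; names are not from the paper.
    """
    rng = np.random.default_rng(seed)
    counts = []   # counts[k] = m_k, number of customers who took dish k
    rows = []
    for n in range(1, N + 1):
        # previously sampled dishes are chosen with probability m_k / n
        row = [1 if rng.random() < m_k / n else 0 for m_k in counts]
        # plus a Poisson(alpha / n) number of previously untried dishes
        new = rng.poisson(alpha / n)
        row += [1] * new
        counts = [m + z for m, z in zip(counts, row)] + [1] * new
        rows.append(row)
    Z = np.zeros((N, len(counts)), dtype=int)
    for n, row in enumerate(rows):
        Z[n, :len(row)] = row
    return Z
```

Marginally, each customer samples a $\mbox{Poisson}(\alpha)$ number of dishes, and the expected total number of distinct dishes after $N$ customers is $\alpha H_N$, consistent with the $\exp\{-\alpha H_N\}$ normalization in Equation~\ref{eqn:ibpjoint}.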
If the data are real-valued vectors, an appropriate choice for the likelihood model could be a weighted superposition of Gaussian features:\n\\begin{equation}\n\\begin{aligned}\nZ = (Z_n)_{n=1}^N \\sim& \\mbox{IBP}(\\alpha) & &\\\\\ny_{nk} \\sim& f &&\\\\\nA_k \\sim& \\mbox{Normal}(0, \\sigma_A^2I), &k&=1,2,\\dots \\\\\nX_n \\sim& \\mbox{Normal}((Z_n\\circ Y_n)A, \\sigma_X^2I), &n&=1,\\dots, N.\\label{eqn:lingauss}\n\\end{aligned}\n\\end{equation}\n\nHere, $Y$ is the $N\\times \\infty$ matrix with elements $y_{nk}$; $Z_n$ and $Y_n$ are the $n$th rows of $Z$ and $Y$, respectively; $A$ is the $\\infty \\times D$ matrix with rows $A_k$; $\\circ$ is the Hadamard product; and $f$ is a distribution over the weights for a given feature instance. Note that, while we are working with infinite-dimensional matrices, the number of non-zero columns of $Z$ is finite almost surely, so we only need to represent finitely many columns of $Y$ and rows of $A$. If $f=\\delta_{1}$, we have a model where features are either included or not in a data point, and where a feature is the same each time it appears; this straightforward model was proposed by \\citet{Griffiths:Ghahramani:2005}, but is somewhat inflexible for real-life modeling scenarios.\n\nLetting $f=\\mbox{Normal}(\\mu_f, \\sigma_f^2)$ gives Gaussian weights, yielding a nonparametric variant of Factor Analysis \\citep{Knowles:Ghahramani:2007,Teh:Gorur:Ghahramani:2007}. This approach is useful in modeling psychometric test data, or analyzing marketing survey data. Letting $f=\\mbox{Laplace}(\\mu_f, b_f)$ results in a heavier-tailed distribution over feature weights, yielding a nonparametric version of Independent Component Analysis \\citep{Knowles:Ghahramani:2007}. 
This allows one to perform blind source separation where the number of sources is unknown, making it a potentially useful tool in signal processing applications.\n\nOften, one encounters binary-valued data: for example, an indicator vector corresponding to disease symptoms (where a 1 indicates the patient exhibits that symptom), or purchasing patterns (where a 1 indicates that a consumer has purchased that product). In these cases, a weighted superposition model is not directly applicable, but it may be reasonable to believe there are multiple latent causes influencing whether an element is turned on or not. One option in such cases is to use the IBP with a likelihood model \\citep{Wood:Griffiths:Ghahramani:2006} where observations are generated according to:\n\\begin{equation*}\n\\begin{split}\nZ = (Z_n)_{n=1}^N \\sim& \\mbox{IBP}(\\alpha)\\\\\ny_{dk} \\sim& \\mbox{Bernoulli}(p)\\\\\nP(x_{nd}=1|Z,Y) =& 1-(1-\\lambda)^{Z_nY_d^T}(1-\\epsilon),\n\\end{split}\n\\end{equation*}\nand where $Y$ is the $D\\times \\infty$ matrix with elements $y_{dk}$; $Z_i$ and $Y_i$ are the $i$th rows of $Z$ and $Y$, respectively. \n\nThe above illustrations exemplify the value of IBP priors in LFMs. While these illustrations cover a vast range of applied problems, there are limitations. Notable among them is that the above LFMs do not encapsulate time dynamics. The aim of this paper is to develop a new family of IBP-based LFMs that obviates this crucial shortcoming. Additionally, unlike the aforementioned models, the new class also allows one to capture the repeated occurrence of a feature through time; i.e., the \\emph{persistence} of latent factors. (Recall from the Introduction the example of a flautist's musical note persisting in successive time intervals.) \n\n\n\\subsection{Temporal Dynamics in npLFMs}\\label{sec:dynamic_bg}\n\nThe IBP, like its finite-dimensional analogues, assumes that the data are exchangeable. In practice, this could be a restrictive assumption. 
In many applications, the data exhibit either longitudinal (time) dependence or epidemiological dependence. Since epidemiological dependency is not the focus of this paper, we do not consider it further; some important references for this type of dependency include \\citet{Ren:Wang:Carin:Dunson:2011}, \\citet{Foti:Futoma:Rockmore:Williamson:2013}, and \\citet{Zhou:Yang:Sapiro:Dunson:Carin:2011}.\n\nLongitudinal dependence considers the case where each datum corresponds to an instantiation of a single evolving entity at different points in time. For example, data might correspond to timepoints in an audio recording, or measurements from a single patient over time. Mathematically, this means we would like to capture continuity of latent features. This setting has been considered less frequently in the literature. The Dependent Indian Buffet Process \\citep[DIBP,][]{Williamson:Orbanz:Ghahramani:2010} captures longitudinal dependence by modeling the occurrence of a given feature with a transformed Gaussian process. This allows for a great deal of flexibility in the form of dependence, but comes at high computational cost: inference in each Gaussian process scales cubically with the number of time steps, and we must use a separate Gaussian process for each feature. \n\nAnother model for longitudinal dependence is the Distance-Dependent Indian Buffet Process (DDIBP) of \\citet{Gershman:Frazier:Blei:2015}. In this model, features are allocated using a variant of the IBP metaphor, wherein each pair of data points is associated with some distance measure. The probability of two data points sharing a feature depends on the distance between them. With an appropriate choice of distance measure, this model could prove useful for time-dependent data.\n\nAn alternative approach is provided by IBP-based hidden Markov models. 
For example, the Markov IBP \\citep{VanGael:Teh:Ghahramani:2009} extends the IBP such that rows are indexed by time and the presence or absence of a feature at time $t$ depends only on which features were present at time $t-1$. This model is extended further in the Infinite Factorial Unbounded State Hidden Markov Model \\citep[IFUHMM,][]{Valera:Ruiz:Perez:2016} and the Infinite Factorial Dynamical Model \\citep[IFDM,][]{Valera:Ruiz:Lennart:PerezCruz2015}. These related models combine two hidden Markov models: one controls which features are present, and one controls the expression of each feature. The feature presence\/absence is modeled using a Markov IBP. At different time points, a single feature can have multiple expressions. During a contiguous time period where feature $k$ is present, it moves between these expressions using Markovian dynamics. While this increases model flexibility, it comes at the cost of interpretability. Unlike the DIBP, the DDIBP, and the model proposed in this paper, the IFUHMM and IFDM do not impose any similarity requirements on the expressions of a given feature and can therefore use a single feature to capture two very different effects, provided they never occur simultaneously. \n\n\nWhile not a dynamic latent factor model, another dynamic model based on npLFMs is the beta process autoregressive hidden Markov model \\citep[BP-AR-HMM,][]{Fox:Sudderth:Jordan:Willsky:2009,Fox:Hughes:Sudderth:Jordan:2014}. In this model, an IBP is used to couple multiple time series in a vector autoregressive model. The IBP is used to control the weights assigned to the lagged components; these weights are stationary over time.\n\nIn addition to the longitudinally dependent variants of npLFMs mentioned here, there also exist a large number of temporally dependent npLVMs. 
In particular, dependent Dirichlet processes \\citep[e.g.][]{Maceachern:2000,Caron:Davy:Doucet:2017,Lin:Grimson:Fisher:2010,Griffin:2011} extend the Dirichlet process to allow temporal dynamics, allowing for time-dependent clustering models. Hidden Markov models based on the hierarchical Dirichlet process \\citep{Fox:Sudderth:Jordan:Willsky:2008,Fox:Sudderth:Jordan:Willsky:2011,Zhang:Guletkin:Paislet:2016} allow the latent variable associated with an observation to evolve in a Markovian manner. We do not discuss these methods in depth here, since they assume a single latent variable at each time point.\n\n\n\n\nThe model we propose in Section~\\ref{sec:model_long} falls into this class of longitudinally dependent LFMs. Unlike the DIBP and the DDIBP, our model explicitly models feature persistence. Unlike all the models described above, our model allows multiple instances of a feature to appear at once. This is appropriate in many contexts; for instance, in music analysis, where each note has an explicit duration and two musicians could play the same note simultaneously. 
Importantly, the proposed nonparametric LFM leaves the underlying IBP mechanism intact, leading to more straightforward inference procedures when compared to the DIBP and DDIBP.\n\n\n\n\n\n\\section{A New Family of npLFMs for Time-Dependent Data}\\label{sec:model_long}\n\nExisting temporally dependent versions of the IBP~\\citep{Williamson:Orbanz:Ghahramani:2010,Gershman:Frazier:Blei:2015} rely on explicitly or implicitly varying the underlying latent feature probabilities---a difficult task---and inference tends to be computationally complex.\n\nOur proposed method obviates these limitations. In a nutshell, unlike existing dependent npLFMs, we build our model on top of a single IBP, as described in Section~\\ref{sec:ibp}. Temporal dependence is encapsulated via a \\textit{likelihood model}. The value of our approach can best be understood via some simple examples. Consider audio data. A common approach is to model audio data as superpositions of multiple sources; for example, individual speakers or different instruments. The IBP has previously been used in these types of applications \\citep{Knowles:Ghahramani:2007,DoshiVelez:2009}. However, these approaches ignore the \\emph{temporal dynamics} present in most audio data. Recall the flautist example: at very short time intervals, if a flautist is playing a B$\\flat$ at time $t$, it is likely that the note is still playing at time $t+k$, $k=1,2,\\dots$. Our proposed model captures this dependency in the musical notes. In Section~\\ref{sec:results}, using real data, we show the benefit of incorporating this dynamic, temporal \\emph{feature persistence} and contrast it with a static IBP, the DIBP, and the DDIBP. \n\nAs noted in the Abstract, another illustration is the modeling of sensor outputs over time. Sensors record responses to a variety of external events: for example, in a building we may have sensors recording temperature, humidity, power consumption and noise levels. 
These are all altered by events happening in the building---the presence of individuals; the turning on and off of electrical devices; and so on. Latent factors influencing the sensor output are typically present for a contiguous period of time before disappearing; besides, multiple factors could be present at a time. Thus, for instance, our model should capture the effect on power consumption due to an air conditioning unit being turned on from 9am to 5pm, which could be subject to latent disturbances during that time interval such as voltage fluctuations. \n\nConsider a third illustration involving medical signals such as EEG or ECG data. Here, we could identify latent factors causing unexpected patterns in the data, as well as infer the duration of their influence. As in previous examples, we expect such factors to contribute for a contiguous period of time: for instance, a release of stress hormones would affect all time periods until the levels decrease below a threshold. Note that the temporal variation in all three illustrations above cannot be accurately captured with existing dependent factor models, where the probability of a factor varies smoothly over time but the actual presence or absence of that feature is sampled independently given the appropriate probabilities. This approach would lead to noisy data where a feature randomly oscillates between on and off.\n\nUnder the linear Gaussian likelihood model described in Equation~\\ref{eqn:lingauss}, conditioned on the latent factors $A_k$, the $n$th datum is characterized entirely by the $n$th row of the IBP-distributed matrix $Z$, thereby ensuring that the data, like the rows of $Z$, are exchangeable. In the following, the key point of departure from the npLFMs described earlier is this: we now let the $n$th datum depend not only on the $n$th row of $Z$, but also on the $n-1$ preceding rows, thus breaking the exchangeability of the $X_n$ data sequence. 
This is the mathematical equivalent of dependency in the data, which we now formalize.\n\nAssociate each non-zero element $z_{nk}$ of $Z$ with a geometrically-distributed ``lifetime'', namely $\\ell_{nk} \\sim \\mbox{Geometric}(\\rho_k)$. An instance of the $k$th latent factor is then incorporated from the $n$th to the $(n+\\ell_{nk}-1)$th datum. The $n$th datum is therefore associated with a set $\\mathcal{Y}_n$ of feature indices, $\\{(i,j): z_{ij}=1, i+\\ell_{ij}>n\\}$, which captures the time series structure of the data. We use the term ``feature'' to refer to a factor, and the term ``feature instance'' to refer to a specific realization of that factor. For example, if each factor corresponds to a single note in an audio recording, the global representation of the note $C$ would be a feature, and the specific instance of note $C$ that starts at time $n$ and lasts for a geometrically distributed time would be a feature instance. If we assume a shared lifetime parameter, $\\rho_k=\\rho$ for all features, then the number of features at any time point is given, in expectation, by a geometric series $E[|\\mathcal{Y}_n|] = \\sum_{i=0}^{n-1}\\alpha \\rho^i\\rightarrow \\frac{\\alpha}{1-\\rho}$ as $n\\rightarrow \\infty$, i.e., as we forget the start of the process. More generally, we allow $\\rho_k$ to differ between features, and place a $\\mbox{Beta}(a_\\rho, b_\\rho)$ prior on each $\\rho_k$. By a judicious choice of the hyper-parameters, this prior could be easily tailored to encapsulate vague prior knowledge or contextual knowledge. (As an added bonus, it leads to simpler stochastic simulation methods, which will be discussed later on.)\n\nThis geometric lifetime is the source of dependency in our new class of IBP-based npLFMs. It captures the idea of feature \\textit{persistence}: a feature instance ``turned on'' at time $t$ appears in a geometrically distributed number of future time steps. 
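This persistence mechanism is easy to simulate. The sketch below is our own stdlib-only illustration (not the authors' code): it introduces new feature instances at a Poisson($\\alpha$) rate per time step, assigns each a geometric lifetime with continuation probability $\\rho$, matching the convention in the geometric series above, and checks that the expected number of active instances approaches $\\alpha\/(1-\\rho)$.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for sampling a Poisson random variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def lifetime(rho, rng):
    """Geometric lifetime with continuation probability rho: P(l) = (1-rho) * rho**(l-1)."""
    l = 1
    while rng.random() < rho:
        l += 1
    return l

def mean_active_instances(alpha, rho, T, n_reps, seed=0):
    """Monte Carlo estimate of E[|Y_T|], the number of feature instances
    still active at the final time step."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_reps):
        for t in range(T):
            for _ in range(poisson(alpha, rng)):
                # an instance born at step t covers steps t, ..., t + l - 1
                if t + lifetime(rho, rng) - 1 >= T - 1:
                    total += 1
    return total / n_reps
```

With $\\alpha=2$ and $\\rho=0.5$, the limiting value is $\\alpha\/(1-\\rho)=4$, and the estimate approaches it once $T$ is large enough to forget the start of the process.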
Since any feature instance that contributes to $x_n$ also contributes to $x_{n+1}$ with probability $\\rho_k$, we expect $x_n$ to share $\\frac{\\alpha\\rho}{1-\\rho}$ feature instances with $x_{n-1}$, and to introduce $\\alpha$ new feature instances. Of these new feature instances, we expect $\\alpha\/n$ to be versions of previously unseen features.\n\nNote that this construction allows a specific datum to exhibit multiple instances of a given latent factor. For example, if $\\mathcal{Y}_n=\\{(n,1), (n,3),(n-1,1)\\}$, then the $n$th datum will exhibit two copies of the first feature and one copy of the third feature. In many settings, this is a reasonable assumption: two trees appearing in a movie frame, or two instruments playing the same note at the same time.\n\nThe construction of dependency detailed above could now be combined with a variety of likelihood functions (or models) appropriate for different data sources or applications. We could also replace the geometric lifetime with other choices, for example using semi-Markovian models as in \\citet{johnson2013bayesian}. Armed with this kernel of geometric dependency and likelihood functions, we now illustrate the broad scope of the proposed family of time-dependent npLFMs via two generalizations. 
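The multiple-instance bookkeeping in the example above is simply a multiset count over the feature index (second coordinate) of $\\mathcal{Y}_n$; a minimal sketch (our own illustration):

```python
from collections import Counter

def instance_counts(Y_n):
    """Count how many instances of each feature are active in datum n,
    given the set of (start time, feature index) pairs Y_n."""
    return Counter(k for _, k in Y_n)
```

For $\\mathcal{Y}_n=\\{(n,1),(n,3),(n-1,1)\\}$ this returns two instances of feature 1 and one instance of feature 3, as in the example.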
Later, we demonstrate these using real or synthetic data.\n\nAdapting the linear Gaussian IBP LFM used by \\citet{Griffiths:Ghahramani:2005} to our dynamic time-dependent model, where each datum is given by a linear superposition of Gaussian features, results in:\n\\begin{equation}\n\\begin{aligned}\nZ \\sim& \\mbox{IBP}(\\alpha) && \\\\\nA_k \\sim& \\mbox{Normal}(0,\\sigma_A^2) && \\\\\n\\ell_{nk}\\sim& \\mbox{Geometric}(\\rho_k), &k&=1,2,\\dots\\\\\n\\mu_n =& \\textstyle\\sum_{i=1}^n \\sum_{k=1}^\\infty z_{ik}I(i+\\ell_{ik}>n)A_k & &\\\\\nX_n \\sim& \\mbox{Normal}(\\mu_n, \\sigma_X^2), &n&=1,\\dots, N,\\label{eqn:dynamic_general}\n\\end{aligned}\n\\end{equation}\nwhere $I()$ is the indicator function.\n\nConsider a second generalization where one wishes to model variations in the appearance of a feature. Here, we can customize each feature instance using a feature weight $b_{nk}$ distributed according to some distribution $f$ so that: \n\\begin{equation}\n\\begin{aligned}\nZ \\sim& \\mbox{IBP}(\\alpha) &&\\\\\nA_k \\sim& \\mbox{Normal}(0,\\sigma_A^2) &&\\\\\n\\ell_{nk}\\sim& \\mbox{Geometric}(\\rho_k), &k&=1,2,\\dots\\\\\nb_{nk}\\sim& f &&\\\\\n\\mu_n =& \\textstyle \\sum_{i=1}^n \\sum_{k=1}^\\infty z_{ik}b_{ik}I(i+\\ell_{ik}>n)A_k&&\\\\\nX_n \\sim& \\mbox{Normal}(\\mu_n, \\sigma_X^2), &n&=1,\\dots, N.\n\\end{aligned}\n\\label{eqn:dynamic_amplitude}\n\\end{equation}\n\nFor example, in modeling audio data, a note or chord might be played at different volumes throughout a piece. In this case, it is appropriate to incorporate a per-factor-instance gamma-distributed weight, $b_{nk}\\sim \\mbox{Gamma}(\\alpha_B,\\beta_B)$. \n\nThe new family of time-dependent models above could be used in many applications, provided they are computationally feasible. 
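The generative process in Equation~\\ref{eqn:dynamic_general} can be simulated forwards directly. The sketch below is our own stdlib-only illustration: it draws $Z$ via the sequential (Indian buffet) construction of the IBP, samples lifetimes with success probability $\\rho$ (one of the two common geometric parameterizations; an assumption on our part), and superposes all active features to form each $\\mu_n$.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for a Poisson variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def geometric(rho, rng):
    """P(l) = rho * (1 - rho)**(l - 1), l = 1, 2, ..."""
    l = 1
    while rng.random() >= rho:
        l += 1
    return l

def sample_ibp(alpha, N, rng):
    """Sequential IBP construction: customer n takes dish k with probability
    m_k / n, then samples Poisson(alpha / n) new dishes."""
    counts, rows = [], []
    for n in range(1, N + 1):
        row = [int(rng.random() < m / n) for m in counts]
        counts = [m + z for m, z in zip(counts, row)]
        new = poisson(alpha / n, rng)
        row += [1] * new
        counts += [1] * new
        rows.append(row)
    K = len(counts)
    return [r + [0] * (K - len(r)) for r in rows]  # pad rows to a full matrix

def sample_dynamic_lfm(alpha, rho, sigma_A, sigma_X, N, D, seed=0):
    """Forward-simulate the dynamic linear Gaussian npLFM."""
    rng = random.Random(seed)
    Z = sample_ibp(alpha, N, rng)
    K = len(Z[0]) if Z else 0
    A = [[rng.gauss(0.0, sigma_A) for _ in range(D)] for _ in range(K)]
    L = [[geometric(rho, rng) if z else 0 for z in row] for row in Z]
    X = []
    for n in range(N):
        mu = [0.0] * D
        for i in range(n + 1):
            for k in range(K):
                if Z[i][k] and i + L[i][k] > n:   # instance (i, k) still active
                    mu = [m + a for m, a in zip(mu, A[k])]
        X.append([m + rng.gauss(0.0, sigma_X) for m in mu])
    return Z, X
```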
In the following, we develop stochastic simulation methods to achieve this goal.\n\n\n\n\n\\section{Inference Methods for npLFMs}\\label{sec:inf}\nA number of inference methods have been proposed for the IBP, including Gibbs samplers \\citep{Griffiths:Ghahramani:2005,Teh:Gorur:Ghahramani:2007}, variational inference algorithms \\citep{Doshi:Miller:VanGael:Teh:2009}, and sequential Monte Carlo samplers \\citep{Wood:Griffiths:2006}. In this work, we focus on Markov chain Monte Carlo (MCMC) approaches (like the Gibbs sampler) since, under certain conditions, they are guaranteed to asymptotically converge to the true posterior distributions of the random parameters. Additionally, having tested various simulation methods for the dynamic models introduced in this paper, we found that the MCMC approach is easier to implement and has good mixing properties.\n\nWhen working with nonparametric models, we are faced with a choice. One, perform inference on the full nonparametric model by assuming infinitely many features \\emph{a priori} and inferring the appropriate number of features required to model the data. Two, work with a large, $K$-dimensional model that converges (in a weak-limit sense) to the true posterior distributions as $K$ tends to infinity. The former approach will asymptotically sample from the true posterior distributions, but the latter approximation approach is often preferred in practice due to lower computational costs. We describe algorithms for both approaches.\n\t\n\t\n\t\n\t\\subsection{An MCMC Algorithm for the Dynamic npLFM}\\label{sec:MCMCbasic}\n\tConsider the weighted model in Equation~\\ref{eqn:dynamic_amplitude}, where the feature instance weights $b_{nk}$ are distributed according to some arbitrary distribution $f(b)$ defined on the positive reals. Define $B$ as the matrix with elements $b_{nk}$. 
Inference for the uniform-weighted model in Equation~\\ref{eqn:dynamic_general} is easily recovered by setting $b_{nk}=1$ for all $n,k$.\n\t\n\tOur algorithms adapt existing fully nonparametric \\citep{Griffiths:Ghahramani:2005,DoshiVelez:Ghahramani:2009} and weak-limit MCMC algorithms \\citep{zhou2009non} for the IBP. One key difference is that we must sample not only whether feature $k$ is instantiated in observation $n$, but also the number of observations for which this particular feature remains active. We obtain inferences for the IBP-distributed matrix $Z$ and the lifetimes $\\ell_{nk}$ using a Metropolis-Hastings algorithm described below.\n\t\n\t\n\n\t\n\t\\paragraph{Sampling $Z$ and the $\\ell_{nk}$ in the Full Nonparametric Model:}\n\t\n\tWe jointly sample the feature instance matrix $Z$ and the corresponding lifetimes $\\ell$ using a slice sampler \\citep{Neal:2003}. Let $\\Lambda$ be the matrix whose elements are given by $\\lambda_{nk}:=z_{nk}\\ell_{nk}$. To sample a new value for $\\lambda_{nk}$ where $\\sum_{i\\neq n}\\lambda_{ik}>0$, we first sample an auxiliary slice variable $u\\sim \\mbox{Uniform}(0,Q^*(\\lambda_{nk}))$, where $Q^*(\\lambda_{nk}) = p(\\lambda_{nk}|\\rho_k, m_k^{-n})p(X|\\lambda_{nk}, A,B, \\sigma_X^2)$. Here, the likelihood term $p(X|\\lambda_{nk}, A, B, \\sigma_X^2)$ depends on the choice of likelihood, and\n\t\n\t\\begin{equation}\n\tp(\\lambda_{nk}|\\rho_k, m_k^{-n}) = \\begin{cases} \\frac{N-m_k^{-n}}{N} & \\mbox{if }\\lambda_{nk}=0\\\\\n\t\\frac{m_k^{-n}}{N} \\rho_k(1-\\rho_k)^{\\lambda_{nk}-1} & \\mbox{otherwise.}\n\t\\end{cases}\\label{eqn:plambda_np}\n\t\\end{equation}\n\t\n\tWe then define a bracket centered on the current value of $\\lambda_{nk}$, and sample $\\lambda_{nk}^*$ uniformly from this bracket. We accept $\\lambda_{nk}^*$ if $Q(\\lambda_{nk}^*) = p(\\lambda_{nk}^*|\\rho_k, m_k^{-n})p(X|\\lambda_{nk}^*, A,B, \\sigma_X^2) > u$. 
If we do not accept $\\lambda^*_{nk}$, we shrink our bracket so that it excludes $\\lambda_{nk}^*$ but includes $\\lambda_{nk}$, and repeat this procedure until we either accept a new value, or our bracket contains only the previous value.\n\t\n\tFor the $n$th row of $Z$, we can sample the number of singleton features --- i.e., features where $z_{nk}=1$ but $\\sum_{i\\neq n}z_{ik}=0$ --- using a Metropolis-Hastings step. We sample the number $K^*$ of singletons in our proposal from a $\\mbox{Poisson}(\\alpha\/N)$ distribution, and sample corresponding values of $b^*_{nk}\\sim f(b)$. We also sample corresponding lifetime probabilities $\\rho_k^*\\sim \\mbox{Beta}(a_\\rho, b_\\rho)$ and lifetimes $\\ell_{nk}^* \\sim \\mbox{Geometric}(\\rho_k^*)$ for the proposed singleton features. We then accept the new $\\Lambda$ and $B$ with probability\n\t$$\\min\\left(1, \\frac{Q(X|\\Lambda^*, A,B^*, \\sigma_X)}{Q(X|\\Lambda, A, B, \\sigma_X)}\\right),$$\n\twhere $ Q $ denotes the unnormalized posterior defined above.\n\t\n\t\\paragraph{Sampling $Z$ and the $\\ell_{nk}$ using a Weak-Limit Approximation:}\n\tInference in the weak-limit setting is more straightforward since we do not have to worry about adding and deleting new features. We modify the slice sampler for the full nonparametric model, replacing the definition of $p(\\lambda_{nk}|\\rho_k, m_{k}^{-n})$ in Equation~\\ref{eqn:plambda_np} by\n\t\\begin{equation}\n\tp(\\lambda_{nk}|\\rho_k, m_k^{-n}) = \\begin{cases} \\frac{N-m_k^{-n}}{N+\\frac{\\alpha}{K}} & \\mbox{if }\\lambda_{nk}=0\\\\\n\t\\frac{m_k^{-n}+\\frac{\\alpha}{K}}{N+\\frac{\\alpha}{K}} \\rho_k(1-\\rho_k)^{\\lambda_{nk}-1} & \\mbox{otherwise,}\n\t\\end{cases}\\label{eqn:plambda_wl}\n\t\\end{equation}\n\tand by slice sampling $\\lambda_{nk}$ even if $\\sum_{i\\neq n}\\lambda_{ik}=0$. 
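To make the bracket-and-shrink update concrete, the sketch below is our own stdlib-only illustration: it implements the prior mass in Equation~\\ref{eqn:plambda_np} and a single integer-valued slice update against an arbitrary unnormalized target, with the bracket positioned randomly around the current value and shrunk on each rejection, in the spirit of \\citet{Neal:2003}. The flat dummy likelihood in the usage note is our simplification.

```python
import random

def p_lambda(lam, rho, m_minus, N):
    """Prior mass for lambda_nk = z_nk * l_nk: zero with probability
    (N - m_k^{-n}) / N, otherwise a geometric lifetime."""
    if lam == 0:
        return (N - m_minus) / N
    return (m_minus / N) * rho * (1.0 - rho) ** (lam - 1)

def slice_step(q, lam, width, rng):
    """One slice-sampling update for an integer variable with unnormalized
    mass q: draw a slice height, place a bracket of the given width randomly
    around lam, propose uniformly, and shrink the bracket on rejection."""
    u = rng.uniform(0.0, q(lam))
    off = rng.randint(0, width)
    lo, hi = lam - off, lam + (width - off)
    while True:
        prop = rng.randint(lo, hi)
        if q(prop) > u:
            return prop
        if prop < lam:
            lo = prop + 1
        elif prop > lam:
            hi = prop - 1
        else:
            return lam
```

With a flat likelihood the chain targets Equation~\\ref{eqn:plambda_np} itself, so the long-run fraction of zeros should match $(N-m_k^{-n})\/N$.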
In the weak-limit setting, we do not have a separate procedure for sampling singleton features.\n\t\n\t\n\t\\paragraph{Sampling $A$ and $B$:}\n\tConditioned on $Z$ and the $\\ell_{nk}$, inference for $A$ and $B$ is essentially identical to that in a model based on the static IBP, and does not depend on whether we used a weak-limit approximation for sampling $Z$. Recall that $\\mathcal{Y}_{n}$ is the set of feature indices $\\{(i,j): z_{ij}=1, i+\\ell_{ij}>n\\}$. Let $Y$ be the matrix with elements $y_{nk} = \\sum_{i: (i,k)\\in \\mathcal{Y}_{n}} b_{ik}$ --- i.e., the total weight given to the $k$th feature in the $n$th observation. Then conditioned on $Y$ and $B$, the feature matrix $A$ is normally distributed with mean\n\t$$\\mu_A = \\left(Y^TY + \\frac{\\sigma_X^2}{\\sigma_A^2}\\mathbf{I}\\right)^{-1}Y^TX$$\n\tand block-diagonal covariance, with each column of $A$ having the same covariance \n\t$$\\Sigma_A = \\sigma_X^2\\left(Y^T Y+\\frac{\\sigma_X^2}{\\sigma_A^2}\\mathbf{I}\\right)^{-1}.$$\n\t\n\tWe can use a Metropolis-Hastings proposal to sample from the conditional distribution $P(b_{nk}|X,Z,\\{\\ell_{nk}\\}, A, \\sigma_X^2)$ --- for example, sampling $b_{nk}^*\\sim f(b)$ and accepting with probability\n\t$$\\min\\left(1, \\frac{P(X|Z, \\{\\ell_{nk}\\}, A, B^*, \\sigma_X)}{P(X|Z,\\{\\ell_{nk}\\}, A,B, \\sigma_X)}\\right).$$\n\t\n\t\\paragraph{Sampling Hyperparameters:}\n\tDepending on user knowledge and the data at hand, we can either incorporate informative prior beliefs or use non-informative settings. We place inverse gamma priors on $\\sigma_X^2$ and $\\sigma_A^2$ and beta priors on each of the $\\rho_k$; then, we can easily sample from their conditional distributions due to conjugacy. 
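The conditional mean of $A$ above is a ridge-regression-style solve. The following sketch, our own stdlib-only illustration, computes $\\mu_A = (Y^TY + \\tfrac{\\sigma_X^2}{\\sigma_A^2}\\mathbf{I})^{-1}Y^TX$ by Gauss-Jordan elimination on nested lists.

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve(M, B):
    """Solve M X = B via Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    aug = [list(M[i]) + list(B[i]) for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        p = aug[c][c]
        aug[c] = [v / p for v in aug[c]]
        for r in range(n):
            if r != c:
                f = aug[r][c]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[c])]
    return [row[n:] for row in aug]

def posterior_mean_A(Y, X, sigma_X2, sigma_A2):
    """mu_A = (Y^T Y + (sigma_X^2 / sigma_A^2) I)^{-1} Y^T X."""
    Yt = transpose(Y)
    M = matmul(Yt, Y)
    for i in range(len(M)):
        M[i][i] += sigma_X2 / sigma_A2
    return solve(M, matmul(Yt, X))
```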
\n\tSimilarly, if we place a $\\mbox{Gamma}(a_\\alpha,b_\\alpha)$ prior on $\\alpha$, we can sample from its conditional distribution\n\t$$\\alpha|Z\\sim \\mbox{Gamma}\\left(K+a_\\alpha, \\frac{b_{\\alpha}}{1+b_\\alpha H_N}\\right)$$\n\twhere $H_N$ is the $N$th harmonic number. These inverse gamma and gamma prior distributions are general since, by a judicious choice of hyperparameter values, they can be tailored to encode anywhere from weak to strong prior information. \n\t\n\t\\section{Experimental Evaluation}\\label{sec:results}\n\tHere, the proposed models and stochastic simulation methods are exemplified via synthetic and real data illustrations. In the synthetic illustration, we used the full nonparametric simulation method; in the real data examples, we used the weak-limit approximation version of the MCMC algorithm to sample the nonparametric component. We do this to allow fair comparison with the DIBP and DDIBP, which both use a weak-limit approximation. We choose to compare with the IFDM over the related IFUHMM since it offers a more efficient inference algorithm, and because code was made available by the authors.\n\t\n\tThe ``gold standard'' in assessing npLFMs is to first set aside a hold-out sample. Then, using the estimated parameters, one predicts these held-out data; i.e., one compares actual versus predicted values. In this section, we do this by alternately imputing the missing values from their appropriate conditional distributions, and using the imputed values to sample the latent variables.\n\t\n\t\n\tSince the aim is to compare static npLFM models and existing dynamic (DIBP, DDIBP and IFDM) models with the temporal dynamic npLFM models developed in this paper, the mean square error (MSE) is used to contrast the performance of these approaches on the held-out samples. We choose to consider squared error over absolute error due to its emphasis on extreme values. 
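As a concrete sketch of the hyperparameter update for $\\alpha$ given in Section~\\ref{sec:inf}, the following is our own stdlib-only illustration; we assume the gamma distribution is parameterized by shape and scale, which makes the stated posterior consistent with a shape-scale $\\mbox{Gamma}(a_\\alpha,b_\\alpha)$ prior.

```python
import random

def harmonic(N):
    """H_N = 1 + 1/2 + ... + 1/N."""
    return sum(1.0 / i for i in range(1, N + 1))

def sample_alpha(K, N, a_alpha, b_alpha, rng):
    """Draw alpha | Z ~ Gamma(K + a_alpha, b_alpha / (1 + b_alpha * H_N)),
    in the shape-scale parameterization (mean = shape * scale)."""
    shape = K + a_alpha
    scale = b_alpha / (1.0 + b_alpha * harmonic(N))
    return rng.gammavariate(shape, scale)
```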
In the interest of space, we have not included plots or figures demonstrating the mixing of the MCMC sampler, though one may use typical MCMC convergence heuristics to assess convergence \\citep[e.g.,][]{Geweke:1991,Gelman:Rubin:1992,Brooks:Gelman:1998}.\n\t\n\n\t\n\t\\subsection{Synthetic Data}\n\t\n\tTo show the benefits of explicitly addressing temporal dependence, we carried out the following.\n\t\n\t\\begin{itemize}\n\t\t\\item We generated a longitudinally varying synthetic dataset from the canonical ``Cambridge bars'' features shown in Figure~\\ref{fig:synthA}. \n\t\t\\item We simulated a sequence of $N=500$ data points corresponding to $N$ time steps.\n\t\t\\item At each time step, we added a new instance of each feature with probability 0.2, then sampled an active lifetime for that feature instance from a geometric distribution with parameter 0.5.\n\t\t\\item Each datum was generated by superimposing all the active feature instances (i.e., those whose lifetimes had not expired) and adding Gaussian noise to give a $6\\times 6$ real-valued image.\n\t\t\\item We designated 10\\% of the observations as our test set. For each test set observation, we held out 30 of the 36 pixels. The remaining 6 pixels allowed us to infer the features associated with the test set observations.\n\t\\end{itemize}\n\t\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=.45\\textwidth]{figs\/synthetic_features.png}\\\\\n\t\t\\includegraphics[width=1.0\\textwidth]{figs\/synthetic_obs.png}\n\t\t\\caption{Top row: Four synthetic features used to generate data. Bottom row: Ten consecutive observations.}\n\t\t\\label{fig:synthA}\n\t\\end{figure}\n\t\n\tWe considered five models: the static IBP; the dynamic npLFM proposed in this paper; the DIBP; the DDIBP; and the IFDM. For the dynamic npLFM and the static IBP, we used our fully nonparametric sampler. For the DIBP, DDIBP and the IFDM we used code made available by the authors. 
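The data-generation steps itemized above can be sketched as follows. This is our own illustration, not the authors' code: we read ``geometric with parameter 0.5'' as a success probability, and encode each $6\\times 6$ image as a flat list of 36 pixels.

```python
import random

def geometric(rho, rng):
    """P(l) = rho * (1 - rho)**(l - 1), l = 1, 2, ..."""
    l = 1
    while rng.random() >= rho:
        l += 1
    return l

def make_synthetic(features, N=500, p_new=0.2, rho=0.5, noise=0.1, seed=0):
    """At each of N time steps, start a new instance of each feature with
    probability p_new and give it a geometric lifetime; each datum is the
    superposition of all active instances plus Gaussian noise."""
    rng = random.Random(seed)
    D = len(features[0])
    active = []  # (feature index, first step after expiry) pairs
    data = []
    for n in range(N):
        for k in range(len(features)):
            if rng.random() < p_new:
                active.append((k, n + geometric(rho, rng)))
        x = [rng.gauss(0.0, noise) for _ in range(D)]
        for k, expiry in active:
            if expiry > n:
                x = [v + f for v, f in zip(x, features[k])]
        data.append(x)
    return data
```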
The DIBP and DDIBP codes use a weak-limit sampler; we fixed $K=20$ for both the DDIBP and the DIBP, a restriction needed for the DIBP because of its much slower computational protocol. \n\t\n\tTable~\\ref{table:synthetic1} shows the MSEs obtained on the held-out data; the number of features; and the average feature persistence. All values are averages over 5 trials, taken from the appropriate posterior distributions following convergence of the MCMC chain. The average MSE is significantly lower for our dynamic model than for all the other models we compared against.\n\tNext, consider Figure~\\ref{fig:instances}, which shows the total number of times each feature contributes to a data point (i.e., the sum of that feature's lifetimes), based on a single iteration from both the dynamic and the static model. It is clear that the dynamic model reuses common features a larger number of times than the static model.\n\t\\begin{table}\n\t\t\\centering\n\t\t\\begin{tabular}{|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& MSE & Number of features & Average persistence \\\\\n\t\t\t\\hline\n\t\t\tDynamic npLFM & $0.274\\pm 0.02$ & $15.80 \\pm 0.748$ & $2.147 \\pm 0.58$\\\\\n\t\t\t\\hline\n\t\t\tStatic npLFM & $0.496 \\pm 0.04$ & $19.80 \\pm 0.400$ & $-$ \\\\\n\t\t\t\\hline\n\t\t\tDIBP & $0.459 \\pm 0.01$ & $20^*$ & $-$ \\\\\n\t\t\t\\hline\n\t\t\tDDIBP & $0.561 \\pm 0.02$ & $20^{\\dagger}$ & $-$\\\\\n\t\t\t\\hline\n\t\t\tIFDM & $0.7513 \\pm 0.003$ & $2^{\\dagger}$ & $-$ \\\\\n\t\t\t\\hline \n\t\t\\end{tabular}\n\t\t\\caption{Average MSE; number of features; and feature persistence on synthetic data under static and dynamic npLFMs. $ ^*$Note that the DIBP was restricted to $K=20$ features for computational reasons. $^{\\dagger}$All 5 trials learned the same number of features.}\n\t\t\\label{table:synthetic1}\n\t\\end{table}\n\t\n\tThere are two critical reasons for this superior performance. 
First, consider a datum with two instances of a given feature: one that has just been introduced, and one that has persisted from a previous time-point. Our dynamic model is able to use the same latent feature to model both feature instances, while the static model, the DIBP, and the DDIBP must use two separate features (or model this double-instance as a separate feature from a single-instance). This is seen in the lower average number of features required by the dynamic model (Table~\\ref{table:synthetic1}), and in the greater number of times common features are reused (Figure~\\ref{fig:instances}).\n\t\n\tIn general, if (in the limit of infinitely many observations) there is truly a finite number of latent features, it is known that non-parametric models will tend to overestimate this number \\citep{Miller:Harrison:2013}. That said, from a modeling perspective we generally wish to recover fewer redundant features, giving a parsimonious reconstruction of the data. We can see that we achieve this by comparing the number and popularity of the features recovered with our dynamic model, relative to the static model.\n\t\n\tSecond, the dynamic npLFM makes use of the ordering information and anticipates that feature instances will persist for multiple time periods; this means that the latent structure for a given test-set observation is informed not only by the six observed pixels, but also by the latent structures of the adjacent observations. We see that the average feature persistence is $2.147$ time steps, which confirms that the dynamic model makes use of the temporal dynamics inherent in the data. 
While the DIBP, DDIBP and IFDM all have mechanisms to model temporal variation, their models do not match the method used to generate the data, and cannot capture the persistence variation explicitly.\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=.65\\textwidth]{figs\/synthetic_feat_counts.png}\n\t\t\\caption{Number of times each feature contributes to a data point under static and dynamic npLFMs. Note that under the dynamic model, a feature can contribute multiple times to the same data point. Feature labels are arbitrary, so features are ordered according to their popularity.}\n\t\t\\label{fig:instances}\n\t\\end{figure}\n\t\n\t\\begin{table}\n\t\t\\centering\n\t\t\\begin{tabular}{|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& Household power consumption & Audio data & Bird call data \\\\\n\t\t\t\\hline\n\t\t\tDynamic npLFM & $0.287 \\pm 0.013$ & $0.722 \\pm 0.007$ & $ 0.561 \\pm 0.306$\\\\\n\t\t\t\\hline\n\t\t\tStatic npLFM & $1.835 \\pm 0.182$ & $1.013 \\pm 0.013$ & $ 1.026 \\pm 0.481 $ \\\\ \n\t\t\t\\hline\n\t\t\tDDIBP & $1.424 \\pm 0.069$ & $1.289 \\pm 0.224$ & $ 0.606 \\pm 0.036 $ \\\\ \n\t\t\t\\hline\n\t\t\tDIBP & $1.324 \\pm 0.106$ & $ 1.845 \\pm 0.264$ & $ 1.308 \\pm 0.744 $ \\\\\n\t\t\t\\hline\n\t\t\tIFDM & $ 0.294 \\pm 0.022$ & $ 1.906 \\pm 0.009 $ & $1.222 \\pm 0.130$ \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\caption{Average MSE obtained on the empirical datasets by the dynamic model proposed in this paper; the static IBP latent feature model; the DDIBP; the DIBP; and the IFDM.}\n\t\t\\label{table:real1}\n\t\\end{table}\n\t\n\t\n\t\\subsection{Household Power Consumption Real Data Illustration}\n\tA number of different appliances contribute to a household's overall power consumption, and each appliance will have different energy consumption and operating patterns. 
We analyzed the ``Individual household electric power consumption'' data set\\footnote{We only analyzed a subset of the data for computational reasons.} available from the UCI Machine Learning Repository\\footnote{\\texttt{http:\/\/archive.ics.uci.edu\/ml}}. This dataset records overall minute-averaged active power, overall minute-averaged reactive power, minute-averaged voltage, overall minute-averaged current intensity, and watt-hours of active energy on three sub-circuits within one house.\n\t\n\tWe examined 500 consecutive recordings. For each recording, we independently scaled each observed feature to have zero mean and unit variance, and subtracted the minimum value for each observed feature. The preprocessed data can, therefore, be seen as excess signal above a baseline, with all features existing on the same scale, justifying a shared prior variance. Based on the assumption that a given appliance's energy demands are approximately constant, we applied our dynamic npLFM with constant weights, described in Equation~\\ref{eqn:dynamic_general}.\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{figs\/dibp_hpc_shared}\n\t\t\\caption{Latent structure obtained from the household power consumption data using the dynamic npLFM. Top left: Intensity of observed feature at each observation (after pre-processing). Bottom left: Latent features found by the model. Top right: Number of instances of each latent feature, at each observation.}\n\t\t\\label{fig:hpc_dynamic}\n\t\\end{figure}\n\t\n\t\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{figs\/ibp_hpc_shared}\n\t\t\\caption{Latent structure obtained from the household power consumption data using the static IBP. Top left: Intensity of observed feature at each observation (after pre-processing). Bottom left: Latent features found by the model. 
Top right: Number of instances of each latent feature, at each observation.}\n\t\t\\label{fig:hpc_ibp}\n\t\\end{figure}\n\t\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\textwidth]{figs\/hpc_time_series}\n\t\t\\caption{Plot of observed features 2 and 6 from the household power consumption data, over time.}\n\t\t\\label{fig:time_series}\n\t\\end{figure}\n\tWe again compared against a static IBP, the DIBP, the DDIBP (with exponential similarity measure), and the IFDM. For all models, we used a weak limit sampler with a maximum of 20 features. For validation, 10\\% of the data were set aside, with a randomly selected six out of seven dimensions being held out. We expect the dynamic models to perform better than the static model, given the underlying data generating process: electricity demand is dictated by which appliances and systems are currently drawing power. Most appliances are used for contiguous stretches of time. For example, we turn a light on when we enter a room, and turn it off when we leave some time later. Further, many appliances have characteristic periods of use: a microwave is typically on for a few minutes, while a washing machine is on for around an hour. A static model cannot capture these patterns.\n\t\n\tThe held-out set average MSEs with bounds are shown in Table~\\ref{table:real1}. The DDIBP performs comparably with the static model, suggesting its form of dependence is not appropriate for this task. The DIBP performs slightly better than the static model, indicating that it can capture the feature persistence described above. However, our model significantly outperforms the other models. This can be explained by two properties of our model that are not present in the comparison methods. First, our method of modeling feature persistence is a natural fit for the data set: latent features are turned on at a rate given by the IBP, and they have an explicit duration that is independent of this rate. 
\n\t\n\tBy contrast, in the DIBP, a single Gaussian process controls both the rate at which a feature is turned on, and the amount of time for which it contributes. Second, our construction allows multiple instances of the same feature to contribute to a given time point. This means that our approach allows a single feature to model multiple similar appliances -- e.g. light bulbs -- which can be used simultaneously. The IFDM also performs favorably on this task: the problem resembles blind signal separation, in which we model the probability that, say, a dishwasher or washing machine is on at a given time point, which is precisely the setting such a model is designed for. The IBP, DIBP and DDIBP, by contrast, must use separate features for different numbers of similar appliances in use, such as light bulbs.\n\t\n\tTo visually assess the importance of allowing multiple instances, we examine the latent structures obtained from the static IBP and our dynamic npLFM. Figures~\\ref{fig:hpc_dynamic} and \\ref{fig:hpc_ibp}, respectively, show the latent structure obtained from a single sample from these models. The top left panel of each of these figures shows the levels of the observed features. We can see that observed features 2 and 6 have spikes between observations 250 and 300. These spikes can be seen more clearly in Figure~\\ref{fig:time_series}, which plots the use of observed features 2 and 6 over time. Feature 2 corresponds to the minute-averaged voltage, and feature 6 corresponds to watt-hours of active energy in the third sub-circuit, which powers an electric water heater and an air-conditioner --- both power-hungry appliances. 
The spikes are likely due to either the simultaneous use of the air-conditioner and water heater, or different levels of use of these appliances.\n\t\n\t\n\tUnder the dynamic model, the bottom left panel of Figure~\\ref{fig:hpc_dynamic} shows that latent feature 0 places mass on observed features 2 and 6. The top right panel shows that there are multiple instances of this feature in observations corresponding to the previously discussed spikes in observed features 2 and 6. The corresponding static model graph in Figure~\\ref{fig:hpc_ibp} shows that the static IBP is unable to account for this latent behavior resulting from increased usage of the third sub-circuit; hence this model must use a combination of multiple features to capture the same behavior.\n\t\n\t\n\t\\subsection{Audio Real Data Illustration}\\label{sec:audio1}\n\tIt is natural to think of a musical piece in terms of latent features, for it is made up of one or more instruments playing one or more notes simultaneously. There is clearly persistence of features, making the longitudinal model described in Section~\\ref{sec:model_long} a perfect fit. We chose to evaluate the model on a section of Strauss's ``Also sprach Zarathustra''. A midi-synthesized multi-instrumental recording of the piece was converted to a mono wave recording with an 8kHz sampling rate. We then generated a sequence of $D=128$-dimensional observations by applying a short-time Fourier transform using a $128$-point discrete Fourier transform, a $128$-point Hanning window, and a 128-point advance between frames---so, each datum corresponds to a 16ms segment with a 16ms advance between segments. We scaled the data along each frequency component to have unit variance, and subtracted the minimum value for each observed feature.\n\t\n\tTo evaluate the model, a hold-out sample of 10\\% of the data, evenly spaced throughout the piece, was set aside. All but eight randomly selected dimensions were held out. 
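The short-time Fourier transform preprocessing described above (128-point Hanning window, 128-point DFT, 128-point advance) can be sketched with a naive DFT. This is our own stdlib-only illustration, not the code used in the experiments.

```python
import math

def stft_magnitudes(signal, n=128):
    """Non-overlapping short-time Fourier transform: multiply each n-sample
    frame by a Hanning window, take an n-point DFT, and return magnitudes."""
    window = [0.5 * (1.0 - math.cos(2.0 * math.pi * i / (n - 1))) for i in range(n)]
    frames = []
    for start in range(0, len(signal) - n + 1, n):
        seg = [signal[start + i] * window[i] for i in range(n)]
        mags = []
        for k in range(n):
            re = sum(s * math.cos(2.0 * math.pi * k * i / n) for i, s in enumerate(seg))
            im = -sum(s * math.sin(2.0 * math.pi * k * i / n) for i, s in enumerate(seg))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames
```

At an 8kHz sampling rate, each 128-sample frame is exactly the 16ms segment mentioned above; a pure tone at frequency bin $k$ produces a magnitude peak at that bin.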
Again, we use the same settings as described in the earlier experiments. We obtained average MSEs, along with bounds, by averaging over 5 independent trials from the final value of the Gibbs sampler; these are reported in Table~\\ref{table:real1}. We see that, by modeling the duration of features, we perform favorably on a musical example that exhibits durational properties, unlike the other models we compared against. Recall that the dynamic model has two advantages over the static model: it explicitly models feature duration, and allows multiple instances of a given feature to contribute to a given observation. The first aspect allows the model to capture the duration of notes or chords. The second allows the model to capture dynamic ranges of a single instrument, and the effect of multiple instruments playing the same note.\n\t\n\t\\subsection{Bird Call Source Separation}\n\tNext, we consider the problem of separating different sources of audio. Since it is difficult to know the number of different audio sources \\textit{a priori}, we instead learn the number of sources non-parametrically. A dynamic Indian buffet process model is well suited to this type of problem, as we may imagine different but possibly repeating sources of audio represented as the dishes selected from an IBP. To this end, we apply our dynamic model to an audio sample of bird calls. The audio source that we consider for this problem is a two-minute-long recording of various bird calls in Kerala, India\\footnote{The recording is available at \\url{https:\/\/freesound.org\/s\/27334\/}}. We transformed the raw wave file by Cholesky whitening the data and then took a regularly spaced subsample of 2,000 observations, from which we randomly held out $10\\%$ as a test set. We then analyzed the data as described in Section~\\ref{sec:audio1}.\n\t\n\tOne could easily imagine that a bird call would have a natural duration and would reappear throughout the recording. 
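The Cholesky whitening step mentioned above can be sketched as follows; this is our own stdlib-only illustration, not the authors' preprocessing code. We subtract the empirical mean and forward-solve $L y = x$ for each observation, where $LL^T$ is the empirical covariance, so the whitened data have identity empirical covariance.

```python
def cholesky(M):
    """Lower-triangular L with L L^T = M, for symmetric positive-definite M."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (M[i][i] - s) ** 0.5
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

def whiten(X):
    """Cholesky-whiten the rows of X: the result has zero empirical mean
    and identity empirical covariance."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in X) / n
          for j in range(d)] for i in range(d)]
    L = cholesky(C)
    out = []
    for row in X:
        c = [v - m for v, m in zip(row, mean)]
        y = [0.0] * d
        for i in range(d):
            y[i] = (c[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
        out.append(y)
    return out
```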
Hence, for a recording such as this one, being able to incorporate durational effects would be important to modeling the data. Alternatively, one could again view this as a blind source separation problem, for which a model like the IFDM might be expected to perform favorably without needing to model the durational component of the features. As seen in Table~\\ref{table:real1}, we nonetheless obtain superior performance, we posit, for the reasons described above.\n\t\n\t\n\n\n\t\n\t\n\t\\section{Conclusion}\\label{sec:conc}\n\tThis paper introduces a new family of longitudinally dependent latent factor (or feature) models for time-dependent data. Unobserved latent features are often subject to temporal dynamics in data arising in a multitude of applications in industry. Static models for time-dependence exist but, as shown in this work, such approaches disregard key insights that could be gained if time dependency were modeled dynamically. Synthetic and real data illustrations exemplify the improved predictive accuracy obtained when using time-dependent, nonparametric latent feature models. The general algorithms developed here to sample from the new family could easily be adapted to model data arising in different applications where the likelihood function changes. \n\t\n\tThis paper focused on temporal dynamics for random, \\emph{fixed}, time-dependent data using nonparametric LFMs. But if data are \\emph{changing} in real time, as in moving images in a film, then the notion of temporal dependency needs a different treatment than the one developed here. We wish to investigate this type of dependence in future work. In addition to the mathematical challenges that this proposed extension presents, the computational challenges are daunting as well.
The theoretical and empirical work exhibited here shows promise and we hope to develop faster and more expressive non-parametric factor models.\n\n\t\\section*{Acknowledgments}\n\tSinead Williamson and Michael Zhang were supported by NSF grant 1447721.\n\t\n\n\n\t\n\n\t\n\n\n\t\\bibliographystyle{apalike}\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section*{Results}\n\n\n\n\\noindent{\\bf Ejection speed.} The surface tension catapult realizes maximum ejection speed when the spore and Buller's drop have nearly the same volume. The two drops that coalesce to power the surface tension catapult are made of condensed water vapor and form after secretion of hygroscopic substances by the spore.\nWhen Buller's drop coalesces with the adaxial drop, the resulting reduction of surface area provides the surface energy to accelerate the spore. Because the adaxial drop is pinned to the surface of the spore, Buller's drop accelerates towards the distal tip of the spore. Once the coalesced drops reach the tip of the spore, capillarity and contact line pinning decelerate water and its momentum is transferred to the spore. Momentum transfer causes the force that breaks the hilum and results in spore ejection away from the basidium. \nThe release of surface energy by coalescence is $\\sim\\pi\\gamma R_B^2$, where $\\gamma$ is surface tension, $R_B$ is Buller's drop radius.\nBy balancing surface energy to kinetic energy of the spore - drop complex, we obtain:\n\\begin{equation}\nv_0 = \nU \\sqrt{\\dfrac{y^2}{y^3+\\beta}}\n\\label{eq:v0}\n\\end{equation}\nwhere $v_0$ is the ejection velocity, $U= \\sqrt{3\\alpha\\gamma\/(2\\rho_B R_s)}$; $y=R_B\/R_s$, $R_B$ is Buller's drop radius and $R_s$ is the radius of a sphere with the same volume as the spore; $\\beta = \\rho_s\/\\rho_B$; $\\rho_B$ and $\\rho_s$ are densities of Buller's drop and spore respectively.
The parameter $\\alpha$ signifies that a fraction of available energy is dissipated in the process of breaking the spore apart from the hilum, the structure that holds it attached to the gill. We will consider $\\alpha=0.23$, which is the average among the values of efficiency previously measured \\cite{thesis_jessica}. Viscous dissipation during the dynamics of coalescence can be neglected because ballistospory operates in a regime of low Ohnesorge number \\cite{liu2017}. We realized that the simple energy balance discussed at length in the literature and recapitulated in equation~\\eqref{eq:v0} predicts that there is a radius of Buller's drop that maximizes $v_0$ (see Figure 2). By setting the derivative of \\eqref{eq:v0} to zero we obtain the size of Buller's drop that maximizes ejection speed:\n\\begin{equation}\ny_{\\text{max}}=(2 \\beta)^{1\/3}\n\\label{eq:ymax}\n\\end{equation}\n\\noindent and considering spores with density once or twice the density of water\\cite{hussein}, $\\beta=1$ to 2, implying that at $y_{\\text{max}}$ Buller's drop radius is comparable to the equivalent radius of the spore, $R_B\\sim 1.26 R_s$ to $1.59 R_s$ (Figure~\\ref{fig:velocity}, grey vertical mark labeled $y_{\\text{max}}$). \nNote that at $y_{\\text{max}}$ the ejection speed is controlled robustly, i.e.~it becomes insensitive to small deviations from the exact value of Buller's drop size. \nBuller's drop is generally assumed to scale with spore length \\cite{aerodynamics_ballisto} and this scaling appears to hold for at least 13 species of basidiomycetes as shown in \\cite{aerodynamics_ballisto,thesis_jessica,pringle2005,Stolze-Rybczynski2009}.
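As a numeric sanity check on equations~\eqref{eq:v0} and \eqref{eq:ymax}, the sketch below evaluates the ejection speed over a grid of $y$ and confirms that the maximizer agrees with $(2\beta)^{1/3}$. The values used for $\gamma$, $\rho_B$ and $R_s$ are illustrative assumptions, not measurements from the paper.

```python
import math

def ejection_speed(y, beta=1.2, alpha=0.23, gamma=0.072, rho_B=1000.0, R_s=5e-6):
    """v0 = U * sqrt(y^2 / (y^3 + beta)) with U = sqrt(3*alpha*gamma / (2*rho_B*R_s)).
    gamma [N/m], rho_B [kg/m^3], R_s [m] are illustrative, not fitted values."""
    U = math.sqrt(3 * alpha * gamma / (2 * rho_B * R_s))
    return U * math.sqrt(y**2 / (y**3 + beta))

beta = 1.2
y_max = (2 * beta) ** (1 / 3)                     # analytic maximizer
y_grid = [0.01 * k for k in range(1, 500)]
y_best = max(y_grid, key=lambda y: ejection_speed(y, beta))
```

For $\beta = 1.2$ the grid maximum lands at $y \approx 1.34$, matching $(2\beta)^{1/3}$, and the speed at the observed $y \approx 0.35$ is well below the peak, consistent with Figure~\ref{fig:velocity}.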
Supplementary Figure 1 shows these published data, as a function of spore equivalent radius $R_s$, pointing to $y_{\\text{data}}= R_B\/R_s \\sim 0.35 \\pm 0.11$ where we report average $\\pm$ standard deviation.\n$y_{\\text{data}}$ are represented on the horizontal axis in Figure~\\ref{fig:velocity}, suggesting these fungi do not operate at maximum ejection speed, but rather remain on the rising slope preceding the maximum. \n\\\\\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{figure1.png}\n\\caption{\\footnotesize Energy balance from eq~\\eqref{eq:v0} predicts discharge speed $v_0$ as a function of $y$ defined as the ratio of Buller's drop radius $R_B$ divided by spore equivalent radius $R_s$. Velocity peaks at $y_{\\text{max}} = (2 \\beta)^{1\/3} = 1.26$ to $1.59$ for $\\beta$ ranging from 1 to $2$, where $\\beta$ is the ratio of spore to drop density. \nThe same ejection speed is attained for two values of $y$, one on either side of the maximum. Experimental data of $y$ all lie to the left of the peak, suggesting evolution has favored smaller drops. \n} \n\\label{fig:velocity}\n\\end{figure}\n\n \n\\noindent{\\bf Maximum spore packing.} \nOnce the spore-drop complex is ejected, it is soon decelerated by air drag and its relaxation time is well approximated by the Stokes time \\cite{stokes,aerodynamics}:\n\\begin{equation}\n\\tau\n= T (y^3 + 1)^{2\/3}\n\\label{eq:tau}\n\\end{equation}\n\\noindent where we have considered the complex as an equivalent sphere with volume equal to the sum of the spore and drop volumes. Here, $T=2R_s^2 \/ (9\\nu \\bar{\\beta})$, $\\nu$ is the air kinematic viscosity, $\\bar{\\beta}$ is the density of air divided by the density of the spore-drop complex.
\nAfter discharge, spores travel horizontally a distance $x=v_0 \\tau$, with $v_0$ and $\\tau$ from equations~\\eqref{eq:v0} and \\eqref{eq:tau}, and then stop abruptly and start to sediment vertically out of the gills, following a trajectory commonly known as ``sporabola'' (represented in Figure~\\ref{fig:maxpack}A). In order to successfully escape, spores should first travel horizontally far enough to avoid sticking to the spores and basidia underneath. If $x$ is indeed dictated by this safety criterion, then the distance between two opposite gills, $d$, should be at least twice $x$, hence $d>2x$. To pack as many spores as possible and avoid inefficient empty spaces, \nthe distance between gills must be close to this minimum value:\n\\[\nd \\sim 2 v_0 \\tau\n\\]\n\\noindent Plugging in the values of $v_0$ and $\\tau$ given by equations~\\eqref{eq:v0} and \\eqref{eq:tau} we obtain:\n\\begin{equation}\n\\frac{d}{2UT} =\n\\Bigl(\\frac{y_{\\text{pack}}^2}{y_{\\text{pack}}^3 + \\beta} \\Bigr)^{1\/2} (y_{\\text{pack}}^3+1)^{2\/3} \n\\label{eq:theory}\n\\end{equation}\nFor any combination of spore density and radius as well as intergill distance, equation~\\eqref{eq:theory} predicts the optimal radius of Buller's drop normalized by spore radius, $y_{\\text{pack}}$, that achieves maximum packing. We solve Equation~\\eqref{eq:theory} numerically and show the result for $y_{\\text{pack}}$ in Figure~\\ref{fig:maxpack} for different combinations of intergill distance and spore radius, assuming $\\beta=1.2$, $\\alpha=0.23$, $\\bar{\\beta}=10^{-3}$, with $y_{\\text{pack}}$ color-coded from 0 (cyan) to 10 (dark blue). The value of $y_{\\text{max}}$ from Equation~\\eqref{eq:ymax} that maximizes ejection speed is marked in white for $\\beta=1.2$, for reference. \n\\begin{figure}[h!]\n\\includegraphics[width=0.5\\textwidth]{figure3.pdf}\n\\caption{\\footnotesize Optimal morphology of mushroom caps.
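To make the numerical solution concrete: once $d$, $U$ and $T$ are fixed, Equation~\eqref{eq:theory} has a single unknown, and for $\beta \geq 1$ its right-hand side increases monotonically with $y$, so a simple bisection suffices. A minimal sketch with our own (hypothetical) helper names:

```python
import math

def range_ratio(y, beta=1.2):
    """Right-hand side of the packing condition: sqrt(y^2/(y^3+beta)) * (y^3+1)^(2/3)."""
    return math.sqrt(y**2 / (y**3 + beta)) * (y**3 + 1) ** (2 / 3)

def y_pack(target, beta=1.2, lo=1e-9, hi=10.0):
    """Bisection for the normalized drop radius solving range_ratio(y) = target,
    where target = d / (2 U T) is set by gill spacing and spore parameters."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if range_ratio(mid, beta) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Monotonicity holds because the logarithmic derivative $1/y + 2y^2/(y^3+1) - \tfrac{3}{2}y^2/(y^3+\beta)$ is positive for $\beta \geq 1$, so the bisection recovers the unique root.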
(A) Sketch of a cross section of a mushroom, close up of gills and magnified view of adjacent gills with basidia and basidiospores. Several trajectories of individual spores (sporabolas) are represented with black arrows; trajectories traced by Buller in 1909 \\cite{buller}. Maximum packing implies that spores initially travel a distance $x = v_0 \\tau$ to reach the midpoint between two opposite gills $d =2 v_0 \\tau$ with $v_0$ and $\\tau$ given by Equations~\\eqref{eq:v0} and \\eqref{eq:tau}. (B) Prediction for normalized Buller drop radius at maximum packing, $y_{\\text{pack}}$, obtained by numerically solving Equation~\\eqref{eq:theory} with $\\beta=1.2$, $\\bar{\\beta}=0.001$ and $\\alpha=23$\\%. $y_{\\text{pack}}$ is color coded from 0 (cyan) to $10$ (dark blue), and white marks normalized Buller drop radius at maximum velocity from Equation~\\eqref{eq:ymax}. Red symbols correspond to data of intergill distance and spore equivalent radius from 8 species collected and analyzed in the present study (see Figure 4).\nThe predicted radius of Buller's drop that maximizes packing for the 8 collected species is $y_{\\text{pack}} \\sim 0.56 \\pm 0.20$, which compares well to measured values of Buller's drop size pointing to $y \\sim 0.35 \\pm 0.11$, where we report average $\\pm$ standard deviation.} \n\\label{fig:maxpack}\n\\end{figure}\n\\\\\n\n\n\\begin{figure}[h!]\n\\includegraphics[width=0.5\\textwidth]{figure4.pdf}\n\\caption{\\footnotesize Data collection. (A) Picture of wild isolate of mushroom cap. (B) Spore print obtained by deposition of the spore cap on aluminum foil overnight. (C) Confocal microscope image of a sample of spores from the spore print. (D) Segmentation of spore image to recover spore contour. (E) Concentric circle around the center of the cap where gill distance is measured and definition of azimuthal angle $\\theta$. (F) Grey scale value from image in panel E, as a function of azimuthal angle $\\theta$. 
(G) Close up image showing locations of two peaks in the grey image, marked automatically by arrows 1 and 2 (above). Gill distance is defined as the distance between peaks minus their width (see Materials and Methods).} \n\\label{fig:collection}\n\\end{figure}\n\\noindent{\\bf Data collection and data analysis.} \\\\\nTo place real species on the phase space generated by the theory, we collected data of spore and gill morphology for eight wild mushroom isolates. \nWe isolate mushroom caps (Figure~\\ref{fig:collection}A), let them sit overnight on aluminum foil, resulting in what is called a spore print (Figure~\\ref{fig:collection}B), and then isolate samples of spores from different regions of the mushroom. Spores are imaged by confocal microscopy (Figure~\\ref{fig:collection}C), and images are analyzed with standard segmentation postprocessing using ImageJ to isolate contours of spores (Figure~\\ref{fig:collection}D). Spore area $S$ is computed from these images and the equivalent radius is obtained from the area as $R_s=\\sqrt{S\/\\pi}$. To measure gill distance, we first identify the center of the cap by eye. We then draw several circles, between 6 and 10 depending on the size of the cap, around the center of the cap (Figure~\\ref{fig:collection}E). Grey values along the circles are obtained (one example in Figure~\\ref{fig:collection}F) and the profile of the grey value analyzed to define the distance $d$ between the gills as the peak to peak distance minus the width of the peaks (see close up of two peaks in Figure~\\ref{fig:collection}G, and Materials and Methods). \\\\ \nThe collected data show that spore size varies from species to species, but does not vary across a single mushroom cap, suggesting that mushrooms tend to produce spores of the same size in a single fruit body (Figure~\\ref{fig:avspores}).
The average intergill distance varies little with distance from the center of the cap, with the exception of \\emph{Russula cremicolor}, which is the only species with no secondary gills in our collection, consistent with previous models and experiments \\citep{phillips91,gills}. The intergill distance varies from about 0.25~mm to 1.5~mm (Figure~\\ref{fig:avintergill}) with no obvious correlation with the size of the mushroom cap.\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{spore_radius.pdf}\n\\caption{\\footnotesize Results of data analysis. Spore size does not vary across a single mushroom cap.} \n\\label{fig:avspores}\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{intergill.pdf}\n\\caption{\\footnotesize Results of data analysis. Average gill spacing varies little with distance from the center of the cap. The only exception is \\emph{Russula cremicolor} which has no secondary gills.}\n\\label{fig:avintergill} \n\\end{figure}\nWe use these data to compute average and standard deviation of spore radius and intergill distance across a single individual. The experimental data are superimposed on the theory for maximum spore packing. \nThe 8 species tested in this study fall in a region where, if gill morphology is optimized for maximum spore packing, then Buller's drop radius is $R_B\\sim 0.55 R_s$, consistent with previously published data pointing to $R_B\\sim 0.33 R_s$ (Figure~\\ref{fig:maxpack}B and Figure~\\ref{fig:velocity}). \n\\\\\n\n\\noindent{\\bf Conclusions.} \nGilled mushrooms have long been hypothesized to have intricate morphologies that maximize the surface-to-volume ratio and pack the maximum number of spores with a minimum amount of biomass. For this hypothesis to hold, the horizontal range that spores travel upon ejection must be finely tuned to land spores midway between two opposite gills.
Spore range is dictated by the dimension of Buller's drop and its density relative to the dimension and density of the spore. We find that real species populate a region of the phase space where the radius of Buller's drop that maximizes spore packing, $R_B \\sim 0.55 R_s$, achieves velocities smaller than the maximum ejection speed, which would require $R_B \\sim 1.3$ to $1.6 R_s$.\nThis conclusion is backed by data previously published in the literature, suggesting that Buller's drop radius does indeed scale with spore dimensions and, at $R_B \\sim 0.32 R_s$, is smaller than the value that maximizes ejection speed. \nFurther data monitoring spore, gill and Buller's drop morphologies and densities at the same time are needed to find how close species are to maximum packing.\nAll data to date are consistent with the hypothesis of maximum spore packing, suggesting that Buller's drop radius is finely tuned to control range and speed. How this fine tuning might function, in a process that is purely extracellular, in the face of fluctuations in the environmental conditions remains a fascinating question for future research.\n \n\n\n\\begin{table*}\n\\caption{\\footnotesize List of collected species, location that these specimens were collected from, number of spores imaged and analyzed, corresponding symbol used in Figure~\\ref{fig:maxpack},\\ref{fig:avspores}-\\ref{fig:avintergill}.}\n\\label{tab:species}\n\\begin{tabular}{p{0.3\\textwidth}p{0.38\\textwidth}p{0.1\\textwidth}p{0.1\\textwidth}}\n\\hline\n{\\footnotesize \\bf Collected species} & {\\footnotesize \\bf Location} & {\\footnotesize \\bf \\# spores}& {\\footnotesize \\bf symbol}\\\\\n\\hline\n{\\footnotesize \\emph{Camarophyllus borealis}}& {\\footnotesize Huron Mountain Club }& 231 & $\\pmb \\RHD$\\\\\n{\\footnotesize \\emph{Cortinarius caperatus}}& {\\footnotesize Huron Mountain Club }& 1180 & $\\pmb\\bigvarstar$\\\\\n{\\footnotesize \\emph{Amanita lavendula}}&{\\footnotesize Huron
Mountain Club }& 155 & $\\pmb\\times$\\\\\n{\\footnotesize \\emph{Armillaria mellea sp. complex}}& {\\footnotesize Huron Mountain Club }& 301 &$\\pmb\\bigstar$\\\\\n{\\footnotesize \\emph{Armillaria mellea sp. complex}}& {\\footnotesize Huron Mountain Club }& 257 & { $\\pmb\\LHD$}\\\\\n{\\footnotesize \\emph{Mycena sp.}} & {\\footnotesize UW-Madison Lakeshore Natural Preserve }& 530 & {\\Large$\\pmb\\bullet$}\\\\\n{\\footnotesize \\emph{Russula sp.}} & {\\footnotesize UW-Madison Lakeshore Natural Preserve }& 1053 & {\\Large $\\pmb\\blackdiamond$}\\\\\n{\\footnotesize \\emph{Galerina marginata\/autumnalis}} & {\\footnotesize UW-Madison Lakeshore Natural Preserve }& 1159 & {\\Large$\\pmb\\sqbullet$}\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\n{\\small\n\\section*{\\bf \\large Materials and methods.}\n\\noindent{\\bf Data collection and published data}\\quad Between the 15th and 17th of September, 2017 we collected mushrooms from lands owned by the Huron Mountain Club, in the Upper Peninsula of Michigan. On the 15th of October, 2017 we collected mushrooms from the University of Wisconsin-Madison Lakeshore Natural Preserve. We collected opportunistically, taking any mushroom that appeared in good shape, but focusing on gilled (not pored) fungi. Unfortunately we were collecting during a particularly dry period; nonetheless, we collected specimens of eight morphologically identified species, listed in Tab.~\\ref{tab:species}. \nWe integrated our data with data from the literature where spore dimensions and the radius of Buller's drop were reported~\\cite{aerodynamics_ballisto,thesis_jessica,pringle2005,Stolze-Rybczynski2009}. \\\\\n\n\\noindent{\\bf Preparing specimens for morphometrics}\\quad On the same day mushrooms were collected, caps were separated from stipes using a scalpel and left face down from 8 to 12 hours on a piece of paper covered with aluminum foil in order to create spore prints.
Spore prints are generated when spores fall from gills and settle directly underneath. They reflect the morphology of each collected specimen, and the location of stipes and patterns of gill spacing are easily visualized. Spore prints were carefully wrapped in wax paper and taken back to the Pringle laboratory at the University of Wisconsin-Madison. To image spores, three small pieces of aluminum foil, each measuring approximately 1 mm x 1 mm, were cut (i) close to each stem, (ii) equidistant between the stem and the cap edge, and (iii) near the edge of each cap. Spores were washed off the foil and suspended in a Tween 80 $0.01 \\% $ vol solution. 15 $\\mu$l of each spore suspension were then immediately spread onto a glass slide and the spores imaged. Microscope slides were sealed with nail polish in order to avoid evaporation of Tween and consequent movement of spores during the imaging. To measure distance between gills, a photograph of each cap's underside, with a ruler included in the photograph, was taken immediately after spore printing using a Canon EOS400D. After spore printing and photography, collected mushrooms were dried in a mushroom dryer and stored in the Pringle laboratory.\\\\\n\n\\noindent{\\bf Identification of species using DNA barcoding}\\quad To identify the taxa of sporocarps, we extracted DNA with a NaOH extraction method modified from Wang et al. (1993) to amplify the internal transcribed spacer. Specifically, the tissues of the sporocarps were finely ground with a pestle in $40~\\mu l$ of 0.5 M NaOH and\ncentrifuged at 13,000 rpm for 10 min.
Five microliters of supernatant was transferred to\n$495~\\mu l$ of 100 mM Tris-HCl (pH 8) and centrifuged at 13,000 rpm for another minute.\nTo amplify the internal transcribed spacer, $1~\\mu l$ of extracted DNA was mixed with $1~\\mu l$ of $10~\\mu M$ ITS1F (5'-CTT GGT CAT TTA GAG GAA GTA A-3'), $1~\\mu l$ of $10~\\mu M$ ITS4 (5'-TCC\nTCC GCT TAT TGA TAT GC-3'), $12.5~\\mu l$ of Econotaq plus green 2x master mix (Lucigen,\nWisconsin), and $9.5~\\mu l$ of nuclease-free water. The reaction mixtures were incubated at\n$95^{\\circ}$C for 5 min, followed by 30 rounds of amplification, including (1)\ndenaturation at $95^{\\circ}$C for 30 s, (2) primer annealing at $50^{\\circ}$C for 30 s and (3) elongation at $72^{\\circ}$C for 60 s. The reaction ended with 7 min of additional elongation\nat $72^{\\circ}$C and paused at $4^{\\circ}$C. Amplified internal transcribed spacers were cleaned,\nSanger-sequenced by Functional Biosciences (Wisconsin) and deposited in the GenBank\ndatabase (https:\/\/www.ncbi.nlm.nih.gov\/).\\\\\n\n\\noindent{\\bf Microscopy and image analysis for spore geometry.}\\quad Microscope images of spores were taken and recorded at the Newcomb Image Center at the University of Wisconsin-Madison. Spores were imaged either individually or in groups, depending on whether a particular microscope field of view housed one or more than one spore, using Zeiss Elyra LSM 780 and Zeiss LSM 710 confocal microscopes. Spores were not stained as all species collected proved to be autofluorescent. The laser wavelength used to excite autofluorescence was 405 nm. The average area and average radius of spores of each species were then calculated using an image analysis tool implemented in ImageJ v.~1.51. The pixel dimension in $\\mu$m was obtained from the microscope and the images converted to greyscale (8-bit or 16-bit).
Thresholding was done using ImageJ to convert the greyscale to a binary image, highlight all the spores to be counted and measure the area of each spore as shown in Figure~\\ref{fig:collection}C-D. Spores touching other spores were not measured, nor were particles smaller than $2 \\mu m^2$. Particles bigger than $2 \\mu m^2$ were identified either as spores or not-spores by eye. \\\\\n\n\\noindent{\\bf Image analysis for gill distance.}\\quad The distance between gills was measured based on the cross section of gills at various distances from the center of the cap. Images were first processed with ImageJ v1.51 and then analyzed with a custom-made Matlab R2017b script. \nWe first used ImageJ v1.51 to open each picture, set pixel length in $mm$ using the image of the ruler and convert images to greyscale (8-bit or 16-bit). The Oval Profile plugin was implemented to obtain the grey scale profile along oval traces, drawn manually around the mushroom cap center. The area of the ovals was measured to calculate the average distance from the cap center, which was used to convert the distance between gills from radians to mm. The greyscale is sampled at 3600 equally spaced points along the oval. \nThe grey scale profile obtained from ImageJ was imported into Matlab and analyzed with the function Findpeaks to first identify the center of the gills as the peaks in the greyscale image. Peaks that were closer than 0.3$^\\circ$ were discarded as noise. Visual inspection was applied to check that minor peaks did correspond to gills. Additionally, we quantified gill thickness as the width of the peak, defined as the distance where grey value drops half way below peak prominence, which is a measure of peak height. Distance between two gills is defined as the distance between their centers minus the half width of the two gills.
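The peak-based spacing measurement can be sketched as follows. This is a simplified stand-in for the ImageJ + Matlab Findpeaks pipeline (our own function name; it returns centre-to-centre angular gaps and omits the half-width correction for brevity):

```python
import numpy as np

def gill_spacing_deg(profile, min_sep_deg=0.3):
    """Angular peak-to-peak gill spacing from a grey-value profile sampled at
    equally spaced points around a circle drawn on the cap underside."""
    n = len(profile)
    step = 360.0 / n                       # degrees per sample (0.1 for 3600 points)
    # gill centres = local maxima brighter than the mean grey level
    peaks = [i for i in range(1, n - 1)
             if profile[i - 1] < profile[i] >= profile[i + 1]
             and profile[i] > profile.mean()]
    kept = []                              # discard peaks closer than min_sep_deg as noise
    for p in peaks:
        if not kept or (p - kept[-1]) * step >= min_sep_deg:
            kept.append(p)
    return [(b - a) * step for a, b in zip(kept, kept[1:])]
```

The angular gaps would then be converted to mm using the mean circle radius, as described above.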
\n\n}\n\n\n\\section*{Acknowledgements} \nThis work was supported by the Agence Nationale de la Recherche Investissements d'Avenir UCA$^{\\textrm{\\sc \\sf \\tiny JEDI}}$ \\#ANR-15-IDEX-01, by CNRS PICS ``2FORECAST'', by the Thomas Jefferson Fund, a program of FACE, and by the Global Health Institute - University of Wisconsin-Madison. We would also like to thank the Huron Mountain Club for its kind hospitality and Sarah Swanson for all her help and discussions about confocal microscopy. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction} \\label{sec:intro}\nGamma-ray bursts (GRBs) are sudden releases of gamma-rays, lasting from milliseconds to thousands of seconds. The typical energy range\nof GRBs is from tens of keV to several MeV. The soft gamma-ray\/hard X-ray emission is usually called the ``prompt emission\"\nof GRBs. According to the classification criterion based on the duration distribution \\citep{1993ApJ...413...L101}, short gamma-ray\nbursts (sGRBs), for which T$_{90}$ $<$ 2 s, likely originate from the mergers of compact binaries involving Double Neutron Star\n(DNS) and Neutron Star Black-Hole (NSBH) systems \\citep{1986ApJ...308...L43, 1989nature...340...126, 1992ApJ...395L..83N}.
The prompt\nemission properties of sGRBs are distinguished from those of long GRBs (lGRBs); for example, sGRBs are found to have negligible spectral lag\n\\citep{2000APJ...534...248, 2006MNRAS...373...729} and harder spectra than lGRBs \\citep{1993ApJ...413...L101}.\nSwift\/BAT (the Burst Alert Telescope) has detected about 120 sGRBs in the past $\\sim$14 years since it was successfully launched in November 2004, and has achieved several important breakthroughs in the study of the prompt emission and the early afterglow \\citep{2004ApJ...611...1005, 2016ApJ...829...7}.\n\nIn the internal shock model, as the jet propagates, a faster shell meets a slower one and interacts with it in the form of an\ninternal shock, which is thought to cause the prompt emission of GRBs \\citep{1994APJ...430...L93, 1997APJ...490...92, 1999PR...308...L43}. Although the light curves of the prompt emission of GRBs have very irregular and complex structures, some of them can be decomposed into individual or overlapping pulses which contain key information about internal\nenergy dissipation and radiative mechanisms \\citep{1996APJ...459...393, 2005APJ...627...324}. In general, the prompt gamma-ray emission\nof GRBs may consist of various emission episodes, including precursors, main peaks and\nextended emissions (EEs), or parts of them. For some GRBs, all three components are bright enough to be detected easily. For instance, the extraordinarily bright GRB 160625B was found to have three\nemission episodes separated by long quiescent intervals \\citep{2018Nature...2...69}.\n\n\\cite{1974ApJ...194...L19} reported a probable precursor prior to the three main impulses, with a deviation of 3.1 $\\sigma$ from\nbackground. They pointed out that the precursor was not initiated by the burst's most\nexplosive phase.
The precursor might come from the photospheric emission and have black\nbody-like spectrum \\citep [e.g.,][]{1991Nature...350...592,2000ApJ...543...L129, 2002MNRAS...336...1271,2007MNRAS...380...621}.\nFor the lGRBs, the jet launched by the central engine makes its way out of the stellar envelope of the progenitor star and releases\nthermal emission as the shock breakout precursors \\citep{2002APJ..331...197}. When the progenitor system is a NS-NS system and\nor magnetar, the similar shock breakout precursor could occur in sGRBs if a dense wind is released\nfrom the center engine \\citep{2014ApJ...788...L8}. \\cite{2007APJ...670...1247} discussed that\nthe central engine might undergo a second collapse and the precursor may be due to the initial weak jet. They reported that some of the initial\njet materials fall back onto the central engine when it manages to penetrate the stellar mantle and will be accreted by\nthe central engine. The core collapse or\nbinary merger could produce a temporarily stable intermediate object as a ``spinar\", which supports a large range of precursor energies \\citep{2008MNRAS...383...1397, 2009MNRAS...397...1695}. Different precursor models can be used to explain the diversity of the quiescent period timescales, the spectral and temporal properties and so on. Observationally, significant differences were not found not only\nbetween precursor and main peak but also for the GRBs with\/without precursor\n\\citep [e.g.,][]{2010ApJ...723...1711, 2014ApJ...789...145}. Moreover, there was no obvious or mild correlations between\n the precursor and the main peak \\citep [e.g.,][]{1995ApJ...452...145, 2005MNRAS...357...722, 2008APJ...685...L19, 2009AA...505...569,\n 2015MNRAS...448...2624}. 
By reanalysing the observed pulse light curves, one can check the previous findings or exclude some untenable theoretical models for the origin of the precursors.\n\nThe EE component, soft $\\gamma$-ray emission or\nhard X-ray afterglow occurring after the main prompt emission, is another important messenger. A postburst emission component, interpreted as hard X-ray afterglow, might be a\nfeature of BATSE bursts \\citep[e.g.,][]{2001AA...379...L39,2002APJ...567...1028,2005Science...309...1833}.\n\\cite{2002Mazets} discussed a special kind of ``short\" burst, the initial emission of which was spike-like and accompanied\nby low-intensity EE for tens of seconds. \\cite{2006Nature...444...1044} found that the temporal lag and peak luminosity of the long\nGRB 060614 were similar to those of the sGRBs.\nBy studying a large BATSE sample, \\cite{2006APJ...643...266} found that a handful of GRBs were somewhat similar to GRB 060614.\nThey showed that the extended components were always softer than\nthe initial spikes. Note that GRB 060614 is a sGRB-like long burst with the EE component \\citep{2006Nature...444...1053}, which is very similar to GRB 050709, a short burst with the EE lasting about 130s. Interestingly, both of them are found to be associated with a macro-nova and to share the same origin \\citep{2015ApJ...811...L22,2016NC...7...12898}.\nAlthough the EE tails within some GRBs have been confirmed,\ntheir physical natures are still under debate. For example, the sGRBs with EE tail may be produced by the formation and early evolution of\na highly magnetized, rapidly rotating neutron star \\citep[e.g.,][]{2008MNRAS...385..1455,2012MNRAS...419...1537B}. \\cite{2011MNRAS...417...2161} proposed\na short-duration jet powered by heating due to $\\nu\\bar{\\nu}$ annihilation and a long-lived Blandford-Znajek jet to describe\nthe initial pulses (IP) and the EE segment.
The lifetime of the accretion process, which is divided into multiple emission episodes\nby the magnetic barrier, may be prolonged by radial angular momentum transfer, and the accretion disk mass is critical for producing\nthe observed soft EE \\citep{2012Apj...760...63}. \\cite{2017MNRAS...470...4925} explained the EE tail with a process of fallback accretion onto a newborn magnetar. The temporal and spectral characteristics of the IP and the EE of the sGRBs, or\nof the sGRBs with\/without EE, were extracted and compared \\citep[e.g.,][]{2010APJ...717...411, 2011APJ...735...23, 2013MNRAS...428...1623,\n2015MNRAS...452...824, 2015ApJ...811...4, 2016ApJ...829...7, 2018MNRAS...481...4332}. \\cite{2017ApJ...846...142} argued that the EE components reflect the central engine activities instead of the external environments associated with afterglows in general. Thus, it would be very intriguing to\nanalyse whether the observational temporal properties of the EE tail are similar or dissimilar to those of the precursor\nor the main peak.\n\nFor the first time, we systematically investigate the temporal properties of the fitted sGRB pulses in the third Swift\/BAT catalog and present a joint temporal analysis of the three prompt emission components across four energy channels in one-component and two-component sGRBs.\nThe data preparation and sample selection are given in Section 2. The results are presented in Section 3, in which we pay special attention to a direct comparison with\nour recent results on BATSE sGRBs (Li et al. 2020, hereafter paper I).\nFinally, we discuss and summarize the results in Sections 4 and 5, respectively.\n\\section{SAMPLE PREPARATION} \\label{sec:DATE PREPARATION }\n\\subsection{Data and Method}\nWe construct our initial sGRB sample using the parameter T$_{90}$ from the third Swift GRBs Catalog from December 2004 to July 2019, which includes essentially all the sGRBs detected by Swift\/BAT.
The sample comprises 124 sGRBs, additionally including sGRB 090510 \\citep[e.g.,][]{2010ApJ...720...1008, 2010ApJ...723...1711,\n2010ApJ...716...1178, 2013ApJ...772...62}, sGRB 050724 \\citep[e.g.,][]{2007APJ...655...989,2007PTRSLS...365...1281}\nand GRB 060614 \\citep[e.g.,][]{2006Nature...444...1053,2006Nature...444...1044, 2007APJ...655...L25, 2013MNRAS...428...1623,\n2014ApJ...789...145, 2015ApJ...811...4, 2016ApJ...829...7}.\n\nThe mask-weighted light curve data of the sGRBs are taken from\nthe Swift website \\citep{2016ApJ...829...7}\\footnote{\\url{https:\/\/swift.gsfc.nasa.gov\/results\/batgrbcat\/}} for four energy channels, labeled Ch1 (15-25 keV), Ch2 (25-50 keV), Ch3 (50-100 keV) and Ch4 (100-350 keV). We calculate the background noise (1$\\sigma$) and define the effective sGRB signal at a level of S\/N $>$ 3. Although increasing the bin size can reduce the level of background noise fluctuations, it might alter the underlying pulse structure.\nBecause the total BAT energy band with good localization is narrower and the signals of sGRBs are relatively weaker than those of lGRBs, we fit the sGRB pulses in the different energy bands with a small bin size of 8 ms, provided the potential GRB signals can be identified significantly, except for some sGRBs with a precursor or EE, i.e. GRB 071112B and GRBs 060614, 150101B and 170817A. For these four GRBs, we fit the pulse light curves with other bin sizes of 2 ms, 16 ms, 64 ms or 1 s instead. Two caveats apply to these GRBs. First, the weak precursor or EE pulse structure must be identifiable in the corresponding energy band. Second, there must be enough detection points of the effective GRB signal to ensure a successful fit. 
In addition, for possible weak signals, we combine the adjacent energy channels into one channel in order to increase the statistical reliability.\n\nConsidering the duration and noise-level characteristics of sGRBs, the mask-weighted light curve data are extracted from 1$-$2 T$_{90}$ prior to the BAT trigger time to 2$-$3 T$_{90}$ posterior to it, to further enhance the fitting accuracy. The detailed methods used to identify the pulse numbers of a burst have been described in paper I. The pulse shapes depend on the final choice of fit. In this study, we have used the least chi-square criterion together with a residual analysis to evaluate the goodness of our fits, as done in paper I and other previous works \\citep[e.g.,][]{1996APJ...459...393,2005APJ...627...324,2003APJ...596...389,Peng2006,2011ApJ...740...104,2019ApJ...876...89}. Ideally, this high-dimensional nonlinear regression would instead be performed with unbinned maximum likelihood estimation on the photon arrival times \\citep{2002Fraley,2000McLachlan,2008McLachlan,2014Tartakovsky}. We will apply the powerful EM method to determine how many pulses a burst contains in a subsequent paper.\n\nSeveral authors have proposed relatively simple functions to describe the pulse light curves of GRBs \\citep{2002APJ...566...210, 2003APJ...596...389,\n1996APJ...459...393, 2005APJ...627...324, 2007APJ...662...1093, 2007ApJ...670...565, 2016ApJS..224...20Y}. 
Among these functions, the ``KRL'' function provides the most flexible description of individual GRB pulse profiles \\citep{2003APJ...596...389} and can be written as \\begin{equation}\\label{equation:1}\nf(t)=f_m(\\frac{t+t_0}{t_m+t_0})^r[\\frac{d}{d+r}+\\frac{r}{d+r}(\\frac{t+t_0}{t_m+t_0})^{(r+1)}]^{-(\\frac{r+d}{r+1})},\n\\end{equation}\nwhere \\emph{r} and \\emph{d} determine the rise and decay shapes of an individual pulse, $f_m$ represents the peak flux, $t_m$ is the peak time, and $t_0$ is the offset from the pulse start time to the trigger time. The five parameters of the ``KRL'' model have been given a theoretical interpretation by \\cite{2005MNRAS...363...1290}. As in paper I, this empirical function is utilized again in this study.\n\nTo investigate the prompt emission mechanisms or to classify GRBs, many temporal properties of GRB pulse light curves have been studied \\citep[e.g.,][]{1996APJ...459...393, 2005APJ...627...324, 2001AA...380...L31, 2002AA...385...377,2003APJ...596...389,2007APSS...310...19,2011ApJ...740...104, 2014ApJ...783...88,2015ApJ...815...134,2018ApJ...855...101,2019ApJ...883...70,2019-190510440}, but bimodal distributions are still preferred \\citep[e.g.,][] {1993ApJ...413...L101,2008AA...484...293,2016APSS...361...257,2017APSS...362...70,\n2018ApSS...363...223,2016MNRAS...462...3243,2018PASP...130...054202,2015AA...581... 29,2019ApJ...870... 105,2019ApJ...887... 97}.\nIn this study, the pulse properties, including the peak amplitude (f$_m$), peak time (t$_m$), full width at half maximum (FWHM), rise time (t$_r$), decay time (t$_d$) and asymmetry (t$_r$\/t$_d$), will be investigated in detail for the different kinds of Swift sGRBs. The systematic errors of the pulse measurements are estimated with error propagation, using the same methods described in paper I following \\cite{2006ChJAA...6...312}. 
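As a concrete illustration, the ``KRL'' profile of Equation (1) can be coded directly. The following is a minimal Python sketch; the function name and the parameter values used below are our own illustrative choices, not taken from any GRB-analysis package:

```python
import numpy as np

def krl_pulse(t, f_m, t_m, t_0, r, d):
    """'KRL' pulse profile: f_m is the peak flux, t_m the peak time,
    t_0 the offset from pulse start to trigger time, and r, d set the
    rise and decay shapes, respectively."""
    x = (t + t_0) / (t_m + t_0)
    core = d / (d + r) + r / (d + r) * x ** (r + 1.0)
    return f_m * x ** r * core ** (-(r + d) / (r + 1.0))

# By construction the profile peaks at t = t_m with value f_m.
t = np.linspace(0.0, 4.0, 401)
flux = krl_pulse(t, f_m=1.0, t_m=1.0, t_0=0.1, r=1.3, d=2.4)
```

One can verify analytically that the logarithmic derivative vanishes at $t = t_m$, so the maximum is exactly $f_m$ there, which makes the parameters directly interpretable when fitting.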
In particular, if the pulse-shape parameters had strong covariance, the relationships among these pulse properties might not reflect the underlying behavior. Fortunately, we calculate the covariance and correlation matrices and find no significant correlations between the different pulse parameters derived from our fits.\n\n\\subsection{Selection Criteria of Precursor and EE Candidates}\n\\cite{1995ApJ...452...145} concluded that only 3\\% of their 1000 BATSE GRBs show a precursor. \\cite{2010ApJ...723...1711} found that 8\\%-10\\% of Swift\/BAT sGRBs display precursors. Further studies showed that roughly 5\\% of Fermi\/GBM sGRBs and 18\\% of Fermi\/GBM lGRBs have precursors \\citep{2015PHD...???...???}. \\cite{2017ApJ...43...1} found precursors in less than 0.4\\% of the SPI-ACS\/INTEGRAL sGRBs. There is no obvious objective criterion to define a ``precursor''. In general, the peak flux of a precursor is smaller than that of the main event, while its flux falls below the background level before the start of the main event \\citep[e.g.,][]{2008APJ...685...L19, 2010ApJ...723...1711}. \\cite{2014ApJ...789...145} pointed out that precursors can be either triggered or non-triggered events. Considering all the above aspects, we identify a significant precursor when it fulfills the following three conditions:\n\n(1) The precursor is effective, i.e., its detection points are at least 3$\\sigma$ above the background in the whole energy range of 15-350 keV.\n\n(2) The precursor includes at least three detection points and its peak flux is smaller than that of the main peak (see also \\citealt{2008APJ...685...L19,2010ApJ...723...1711}).\n\n(3) The precursor can be detected prior or posterior to the BAT trigger time. 
Simultaneously, the duration of the quiescent period between the precursor and the main peak is well defined (i.e., the precursor flux has fallen to the background level before the main peak starts) (see also \\citealt{2008APJ...685...L19,2010ApJ...723...1711}).\n\nFigures \\ref{fig:precursor1} and \\ref{fig:precursor2} show six single-peaked sGRBs (SPs) and three double-peaked sGRBs (DPs) with precursors. In total, 25 precursor pulses across different energy bands have been successfully identified.\nIt is worth pointing out that two precursors were found in GRB 090510 \\citep{2010ApJ...723...1711}. In this work, however, only the precursor occurring at t $\\sim$ 0.5 s prior to the main peaks, which was confidently identified as a real precursor and was also detected by the Fermi\/GBM in the higher energy bands \\citep{2009Nature...462...331,2010ApJ...723...1711}, will be carefully reanalyzed. We stress that this is a very challenging detection, since most precursors are very faint and are generally detected before the main outbursts only in the lower energy channels.\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{pre060502B.pdf}{0.5\\textwidth}{(a)}\n\\fig{pre071112B.pdf}{0.5\\textwidth}{(b)}\n }\n\\gridline{\n\\fig{pre100702A.pdf}{0.495\\textwidth}{(c)}\n\\fig{pre160408A.pdf}{0.5\\textwidth}{(d)}\n }\n\\gridline{\n\\fig{pre160726A.pdf}{0.5\\textwidth}{(e)}\n\\fig{pre180402A.pdf}{0.5\\textwidth}{(f)}\n }\n\\caption{Examples of pulses in the Pre+SPs. For comparison, the pulses of the two individual channels and the combined energy channel for each sGRB are analyzed. The horizontal dotted black lines mark a 3$\\sigma$ confidence level. \\label{fig:precursor1}}\n\\end{figure*}\n \\begin{figure*}\n\\centering\n\\gridline{\n \\fig{pre081024A.pdf}{0.5\\textwidth}{(a)}\n \\fig{pre090510.pdf}{0.5\\textwidth}{(b)}\n }\n \\gridline{\n \\fig{pre100625A.pdf}{0.5\\textwidth}{(c)}\n}\n\\caption{Examples of pulses in the Pre+DPs. 
For comparison, the pulses of the two individual channels and the combined energy channel for each sGRB are analyzed. The horizontal dotted black lines mark a 3$\\sigma$ confidence level.\\label{fig:precursor2}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{ee050724.pdf}{0.5\\linewidth}{(a)}\n\\fig{ee051221A.pdf}{0.5\\textwidth}{(b)}\n }\n \\gridline{\n \\fig{ee060614.pdf}{0.5\\textwidth}{(c)}\n \\fig{ee150101B.pdf}{0.5\\textwidth}{(d)}\n }\n \\gridline{\n \\fig{ee130603B.pdf}{0.5\\textwidth}{(e)}\n \\fig{ee170817A.pdf}{0.5\\textwidth}{(f)}\n }\n\\caption{Examples of pulses in the SPs+EE and DPs+EE (GRB 130603B). For comparison, the pulses of the two individual channels and the combined energy channel for each sGRB are analyzed. The horizontal dotted black lines mark a 3$\\sigma$ confidence level. \\label{fig:EE}}\n\\end{figure*}\n\n\n\\cite{2013MNRAS...428...1623} identified 11 out of 256 BATSE GRBs with EE, unveiling a BATSE population of a new hybrid class of GRBs similar to GRB 060614. Using Bayesian Block (BB) methods, \\cite{2010APJ...717...411,2011APJ...735...23} found that $\\sim$25\\% of 51 Swift\/BAT sGRBs have an EE component. \\cite{2016ApJ...829...7} reported the fraction of sGRBs with EE to be 1.19\\% in the third Swift\/BAT catalog. Here, we extend our search to include the GRBs with EE tails reported in the literature \\citep[e.g.,][]{2006APJ...643...266,2017ApJ...846...142,Yu2020,Zhangxiaolu2020}. Following \\cite{2019ApJ...876...89}, we adopt the following three criteria:\n\n(1) The peak flux of the EE is well below that of the main peak.\n\n(2) The signal-to-noise ratios of the main peak and the EE tails should both be at least S\/N $>$ 3 above the background.\n\n(3) The EE parts must be brightest in the energy channels below 50 keV and too weak to be distinguished effectively in the higher energy channels, i.e. 50-350 keV.\n\nWe stress that we would like to include as many sGRBs with EE as possible in our EE sample. 
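The S\/N $>$ 3 detection-point selection underlying both sets of criteria (for precursors and for EE tails) can be sketched as follows. This is our own illustrative helper with synthetic numbers, not code from the BAT analysis pipeline:

```python
import numpy as np

def effective_bins(rate, bkg_mean, bkg_sigma, snr=3.0, min_points=3):
    """Flag time bins whose background-subtracted rate exceeds
    snr * bkg_sigma (the 1-sigma background noise), and report whether
    at least min_points such bins exist, i.e. an effective detection."""
    mask = (np.asarray(rate, dtype=float) - bkg_mean) > snr * bkg_sigma
    return mask, bool(mask.sum() >= min_points)

# Synthetic example: a weak emission episode above a flat background.
rate = np.array([0.1, 0.0, 3.6, 4.2, 3.9, 0.2, 0.1])
mask, detected = effective_bins(rate, bkg_mean=0.1, bkg_sigma=1.0)
```

In practice the same threshold is applied per energy channel, which is why adjacent channels are combined when a component is too weak to pass it individually.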
Although the EE tails reflect long-lasting activities with a typical timescale of $\\lesssim$ 10$^3$ s \\citep[e.g.,][]{2001AA...379...L39, 2002APJ...567...1028, 2006APJ...643...266, 2008MNRAS...385..1455,2014ApJ...789...145, 2017ApJ...846...142}, most of them cannot be well fitted by a pulse function because of their low signal-to-noise ratios, as illustrated in \\cite{2018ApJ...855...101} and our paper I. Therefore, we choose only GRB 050724 \\citep{2005Nature...438...988,2010ApJ...723...1711,2016ApJ...829...7,2017ApJ...846...142,2018MNRAS...481...4332}, GRB 051221A \\citep{2006MNRAS...372...L19, 2011ApJ...734...35, 2017ApJ...846...142}, GRB 060614 \\citep{2006Nature...444...1044,2006APJ...643...266}, GRB 130603B \\citep{2015ApJ...802..119K}, and GRB 150101B \\citep{2018NC...9...4089,2018APJL...863...L34, 2019ApJ...876...89, 2019MNRAS} as the subsample with EE. For comparison, the EE sample also includes the first gravitational-wave-associated burst, GRB 170817A, detected by the Fermi Gamma-ray Burst Monitor (GBM), whose main peak dominates over the 50-300 keV energy range while its soft tail is strongest below 50 keV \\citep{2017ApJL...848...L14,ZhangBB2018,2019ApJ...876...89,2020MNRAS...492...3622}. Finally, twelve typical EE tail pulses across different energy bands are fitted (see Figure \\ref{fig:EE}). In total, Table \\ref{tableEE} lists 67 typical sGRBs with the three components, including 10 precursor events ($\\sim$15\\%) and 17 EE events ($\\sim$25\\%). It is necessary to point out that there are no sGRBs with both the precursor and the EE components in our sample.\n\\section{RESULTS}\nIn this section, the pulse features of the main peaks are compared, not only between one-component and two-component sGRBs but also between the SPs and the DPs. In addition, the correlations of the main peaks with either the precursors or the EEs in the two-component sGRBs are inspected. 
As done in paper I, the M-loose double-peaked sGRBs (Ml-DPs) and the M-tight double-peaked sGRBs (Mt-DPs) will be investigated separately. For convenience, we hereafter name the sGRBs with precursor or EE as Pre+sGRBs or sGRBs+EE.\n\\subsection{Main Peaks}\n\\subsubsection{One-component versus Two-component sGRBs}\nIn order to investigate whether the temporal properties of the main peaks in one-component sGRBs differ from those in two-component sGRBs, comparisons are displayed in Figures \\ref{fig:trtd}$-$\\ref{fig:fmFWHM}.\nIn Figures \\ref{fig:trtd}$-$\\ref{fig:fmFWHM} (a), we find that the main peaks of the one-component SPs tend to be similar to those of the two-component SPs. In Figures \\ref{fig:trtd}$-$\\ref{fig:fmFWHM} (b), the main peaks of both the first pulses (1st) and the second pulses (2nd) of the two-component Mt-DPs tend to be similar to those of the one-component Mt-DPs. There are no two-component Ml-DPs in our sample. Note that we can see from Figures \\ref{fig:trtd}$-$\\ref{fig:fmFWHM} that there is no significant evolution with energy in either the SPs or the Mt-DPs. Thus we combine every two adjacent channels into one channel for the two-component sGRBs and compare them with the one-component sGRBs in the two individual channels. 
These comparisons suggest that the main peaks in one-component and two-component sGRBs tend to show no significant difference and are likely to share a similar physical mechanism.\n\n\\subsubsection{SPs versus DPs}\nIn order to reveal the individual or collective temporal characteristics of the main peaks of the SPs and the DPs, we compare the analysis results in Figures \\ref{fig:trtd} (a) $-$ \\ref{fig:fmFWHM} (a) with those of Figures \\ref{fig:trtd} (b) $-$ \\ref{fig:fmFWHM} (b) and \\ref{fig:trtd} (c) $-$ \\ref{fig:fmFWHM} (c).\n\nIt is found in Figure \\ref{fig:trtd} that the t$_r$ and the t$_d$ of the main peaks of all kinds of sGRBs are self-similar, with a power-law form t$_r \\sim t_d$$^\\beta$. The detailed fitting results are summarized in Table \\ref{tab:tabletrtd}. Interestingly, these results for all kinds of sGRBs are consistent with those of the BATSE sGRBs, especially for the SPs. For the two kinds of DPs, the power-law correlations are tighter or looser than those of the BATSE DPs.\n\nThe relations of the FWHM, the t$_m$ and the f$_m$ with the asymmetry appear not to be evident in Figures \\ref{fig:FWHM-asy}$-$\\ref{fig:fm-asy}. Note that the weak dependence of the t$_r$\/t$_d$ on the FWHM or the t$_m$ was found in the BATSE Ml-DPs, but not in the Swift\/BAT sGRBs. This may imply that these temporal parameters evolve with the energy bands of the detectors.\n\nFigure \\ref{fig:tmFWHM} illustrates that the t$_m$ increases with the FWHM, following a power-law behavior t$_m \\sim FWHM^\\mu$, which is similar to the results found by \\cite{2005APJ...627...324} for the long-lag, wide-pulse BATSE GRBs and to our previous results for the BATSE sGRBs. The detailed fitting results are summarized in Table \\ref{tab:tabletmFWHM}. The f$_m$ and the FWHM follow an anti-correlated power-law relation for the BATSE lGRBs \\citep{2002AA...385...377}, similar to our previous conclusion for the BATSE sGRBs. 
Figure \\ref{fig:fmFWHM} shows that the\nf$_m$ is generally anti-correlated with the FWHM with a\npower-law form of f$_m \\sim FWHM^\\nu$. The detailed fitting\nresults are summarized in Table \\ref{tab:tablefmFWHM}. It is worthy to point\nout that the power-law index of the SPs is consistent with that of the BATSE SPs.\nFor two kinds of the DPs, the power-law relations of the first pulses are tighter than those of the second pulses.\n\nIn Table \\ref{tab:tableasy}, we find that the asymmetries of the main peaks for all kinds\nof sGRBs are limited from 0.03 to 1.56 and the\nmean asymmetry is 0.79 which is almost equal\nto the value of the BATSE sGRBs and lager than the value\nof 0.65 for a sample of 100 bright BATSE sGRBs found\nby \\cite{2001AA...380...L31}. The result is well agreement\nwith the value of 0.81 obtained by \\cite{2007APSS...310...19}.\nThe really interesting result is that the mean asymmetry of the SPs is still very similar to the values\nof the 1st pulses of the two subclasses of the\nDPs, which we found in the BATSE sGRBs. A K-S test\nto the cumulative distributions of the t$_r$\/t$_d$ between\nthe SPs and the 1st pulses of the Mt-DPs\ngives a p-value of 0.50 showing they are not significantly different. Similarly, a K-S test to the distributions\nof the t$_r$\/t$_d$ between the SPs and the 1st\npulses of the Ml-DPs gives a a p-value of 0.50 showing they\nalso are not significantly different.\n\n\\subsubsection{Dependence of Pulse Width on Energy}\n\nFigures \\ref{fig:singleFandE} - Figure \\ref{fig:mlFandE} show the relations of the width\n(FWHM) dependence on the average photon energy\nwith a form as FWHM $\\sim E^\\alpha$ for the SPs and the\ntwo kinds of the DPs. 
In Figure \\ref{fig:singleFandE}, except the\nGRB 070810B, the power-law indexes of the 16 SPs are negative and the mean value is $\\alpha\\simeq$ $-$0.32 $\\pm$ 0.02\nwhich is in close proximity to the results of $-$0.4 found by \\cite{1995APJ...448...L101} and \\cite{1996APJ...459...393} for the long\nGRBs. Very excitingly, the mean value is almost equal\nto the value of $-$0.32 for the BATSE SPs (see paper I). Figure \\ref{fig:single-index} shows the distribution of these power-law indexes of the 16 SPs. The detailed\nfitting results are summarized in Table \\ref{tab:index}.\n\nFor GRB 070810B, the value of the power-law index\nis $\\alpha\\simeq$ $-$0.05 $\\pm$ 0.16 and the pulse shape evolution from\nlow to high energy channel is shown in Figure \\ref{fig:pulse evo} (1).\nThe values of t$_r\/t_d$ from the low to the high energy channel are 0.49, 0.60, 0.82\nand 0.60, respectively.\n\nIn Figures \\ref{fig:mtFandE} and \\ref{fig:mlFandE}, we find that most of the DPs possess the negative\npower-law indexes except GRB 130912A whose\nlight curve shows two overlapping peaks \\citep{2013GCN...15216...1}. The mean values of these\nnegative power-law indexes are $\\alpha\\simeq$ $-$0.38 $\\pm$ 0.85 (1st)\nand $\\alpha\\simeq$ $-$0.45 $\\pm$ 0.33 (2nd) for the Mt-DPs and\n$\\alpha\\simeq$ $-$0.22 $\\pm$ 0.21 (1st) and $\\alpha\\simeq$ $-$0.42 $\\pm$ 0.13 (2nd) for the Ml-DPs, respectively. For Mt-DP 130912A, the\nvalues of the power-law indexes are $\\alpha\\simeq$ 0.13 $\\pm$ 0.12\nand $\\alpha\\simeq$ $-$0.12 $\\pm$ 0.11 for the 1st pulse and the 2nd\npulse, respectively. The pulse shape evolution from low\nto high energy channel is shown in Figure \\ref{fig:pulse evo} (2). 
The values of t$_r$\/t$_d$ are 0.67, 0.65, 0.82 and 0.66 for the 1st pulses and 1.39, 1.38, 1.44 and 1.37 for the 2nd pulses. Clearly, there is no obvious shape evolution for either the 1st or the 2nd pulses.\n\n\\subsection{Main Peaks versus Precursors or EEs}\n\nIn Figure \\ref{fig:preandee} (a1), we find weak positive correlations in the f$_m$ between the precursors and the main peaks. For the three Pre+DPs, we only consider the first (1st) main peaks (main1), because the number of the second (2nd) main peaks is relatively limited. Power-law fits across different energy bands give $logf_{m,main1}$=$(0.85\\pm0.44)\\times logf_{m,pre}+(0.25\\pm0.18)$ with a Pearson correlation coefficient r=0.59 (15-350 keV), $logf_{m,main1}$=$(0.87\\pm0.59)\\times logf_{m,pre}+(0.07\\pm0.41)$ with a Pearson correlation coefficient r=0.55 (15-50 keV), and $logf_{m,main1}$=$(0.41\\pm0.34)\\times logf_{m,pre}+(0.08\\pm0.20)$ with a Pearson correlation coefficient r=0.56 (50-350 keV). The results are marginally in agreement with the recent conclusion drawn by \\cite{2019Zhong} for 18 sGRB candidates with precursors observed by Fermi\/GBM and Swift\/BAT.\n\nIn Figure \\ref{fig:preandee} (b1)(c1)(d1)(e1), mild correlations are found in the t$_r$\/t$_d$, the FWHM, the t$_r$ and the t$_d$ for the Pre+sGRBs, similar to the results for the ``Type I'' precursors reported by \\cite{2015PHD...???...???}. Additionally, there are in general no events in the lower-right region below the solid line in Figure \\ref{fig:preandee} (c1)(d1)(e1). The FWHMs of the precursors are found to be on average an order of magnitude smaller than those of the main peaks. 
These results all indicate that the main peak pulses tend to be wider than the precursor pulses for the Pre+sGRBs, suggesting that the main peaks tend to last longer than the precursors, in agreement with the result of \\cite{2019Zhong}.\n\nSimilarly, a positive correlation is found in the f$_m$ between the EE and the main peaks. For GRB 051221A, which has two EE pulses, we take into account only the 1st EE pulse. For Mt-DPs+EE 130603B, we also consider only the first main pulse. The power-law fit gives $logf_{m,main1}$=$(1.16\\pm0.08)\\times logf_{m,EE1}+(0.78\\pm0.10)$ with a Pearson correlation coefficient r=0.97 (see Figure \\ref{fig:preandee} (a2)). The positive power-law index is larger than that suggested by \\cite{2019ApJ...876...89} for the Fermi sGRBs with a soft tail similar to GRB 170817A. Note that the correlations in the f$_m$ between the EE and the main peaks show no dependence on energy among the different energy channels in Figure \\ref{fig:preandee} (a2); thus, all the first EE pulses across 15-350 keV are chosen to increase the statistical reliability.\n\nNo distinct correlations are found in the t$_r$\/t$_d$, the FWHM, the t$_r$ and the t$_d$, either for the Pre+sGRBs or for the sGRBs+EE (see Figure \\ref{fig:preandee} (b2), (c2), (d2), (e2)). However, there are fewer events in the upper-left region above the solid line in Figure \\ref{fig:preandee} (c2)(d2)(e2). These results indicate that the main peak pulses tend to be narrower than the EE pulses for the sGRBs+EE, which is caused by both the rise and the decay times. In particular, we compare the temporal properties with those of GRB 170817A, and no obvious differences are found.\n\nAdditionally, Figure \\ref{fig:preandee} (a1)(a2) illustrates that the f$_m$ values of the main peaks are generally larger than those of the other two components. 
On the other hand, the photon fluxes of the main peaks and of the other two components seem to be positively correlated, indicating that the luminosities of the main peaks are linked to those of the precursors and the EE components, in agreement with the results of \\cite{Zhangxiaolu2020} and \\cite{2019ApJ...876...89}. Moreover, this in turn hints that the three parts of the prompt gamma-ray emission could be produced by the same progenitor. For example, \\cite{2018Nature...2...69} recently studied the properties of an extraordinarily bright three-episode long GRB 160625B detected by Fermi. Although we have not found three-episode sGRBs in our sample, the similarities may indicate that the two weak emission episodes are likely to exist, or, in other words, to be intrinsic. The absence of the EEs or precursors might be related to the sensitivity or energy coverage of the current GRB detectors.\n\nIn particular, we check the dependence of the width on the average photon energy and the pulse evolution modes among different energy channels, not only for the precursor pulse but also for the main pulse of the Pre+SPs, in Figure \\ref{fig:pulse evo2}. The values of t$_r$\/t$_d$ of the precursor in GRB 160726A are 0.67, 0.64, 0.91 and 0.69, while those of the main pulse are 0.48, 0.84, 0.96 and 1.21, corresponding to channels from low to high, respectively. There is almost no shape evolution across the different energy bands for the precursor pulse. However, the shape evolution of the main pulse is very different.\n\n\\section{DISCUSSION}\nBesides the two regular evolution modes, ``MODE I'' and ``MODE II'', which correspond to the sGRBs with positive and negative power-law indices of the pulse width with photon energy (see paper I), we found that the indices of some sGRBs are marginally zero, including GRB 100206A, GRB 070810B, GRB 111117A (1st), GRB 101219A, GRB 130912A and GRB 160726A (main). Interestingly, there are two possible cases for these sGRBs. 
In one case, there is almost no shape evolution across the different energy bands, for example in GRB 100206A, GRB 070810B and GRB 130912A. In the other case, the pulse shape of the main peak of GRB 160726A evolves from the low to the high channel in an inverse ``MODE II'' way. This evolution mode may be new for sGRBs. Therefore, it is worthwhile to search for the same effect in events from the Fermi\/GBM and HXMT\/HE catalogues to reach more robust conclusions in the future.\n\nOn 2017 August 17, GRB 170817A was observed independently by the Fermi Gamma-ray Burst Monitor and the Anti-Coincidence Shield of the Spectrometer onboard the International Gamma-Ray Astrophysics Laboratory \\citep{2017ApJL...848...L14,2017ApJL...848...L15,2017ApJL...848...L13} $\\sim$ 1.7 s posterior to the first binary neutron star (BNS) merger event GW170817 observed by the Advanced LIGO and Virgo detectors \\citep{2017GCN21509}. The joint detection of GW170817\/GRB 170817A confirms that at least some sGRBs indeed originate from the mergers of compact binaries. Consequently, which sGRBs show characteristics similar to GRB 170817A has become a hot topic \\citep[e.g.][]{2018APJL...863...L34,2018ApJL...853...L10,2018NC...9...4089,2019ApJ...876...89,2019APJL...880...L63}. In our subsample with EE, the temporal properties of GRB 150101B and GRB 050724, including the apparent two-component signature and the EE tails, which are strongest below 50 keV and start approximately at the end of the main peak, are phenomenologically very similar in shape to those of GRB 170817A, strengthening the potential relation with GRB 170817A (see \\citealt{2018APJL...863...L34,2019ApJ...876...89} for similar conclusions).\n\n\\cite{2016MNRAS...461...3607} reported that a highly variable light curve viewed on-axis will become smooth and apparently single-pulsed when viewed off-axis. They suggested that low-luminosity GRBs are consistent with being ordinary bursts seen off-axis. 
\\cite{2019APJL...880...L63} investigated the outflow structure of GRB 170817A and found 14 sGRBs sharing relativistic structured jets similar to that of GRB 170817A. They modelled their afterglow light curves and generated the on-axis light curve for GRB 170817A, which is consistent with those of common sGRBs, as suggested by \\cite{2019AA...628...A18}. Therefore, it would be very interesting to compare the on-axis prompt emission properties directly with further observations.\n\n\n\\section{CONCLUSIONS}\nWe studied nine sGRBs with precursors in our sample. Simultaneously, five typical Swift sGRBs with EE and the Fermi GRB 170817A have been analyzed. For the first time, we presented a joint temporal analysis of the fitted pulses of the main peak and the other two components in both one-component and two-component sGRBs. Our major results are summarized as follows:\n\n1. We confirm that the main peaks in one-component and two-component sGRBs tend to show no significant difference and might be generated by a similar physical mechanism.\n\n2. We inspected the correlations among the temporal properties of the SPs and DPs and found that the results are essentially consistent with those of the CGRO\/BATSE sGRBs recently reported in our paper I. For instance, the t$_r$ and the t$_d$ follow a power-law relation for the SPs and the DPs, except for the 2nd pulses of the Mt-DPs. There are no evident correlations of the asymmetry with the FWHM, the t$_m$ or the f$_m$, marginally similar to the results for the BATSE sGRBs. In particular, the temporal properties of the SPs have been found to be quite close to those of the 1st pulses of the two sub-types of DPs, especially the asymmetry.\n\n3. The relations FWHM $\\sim E^\\alpha$ between the width of the main peak and the average photon energy have been compared with the results discovered in paper I. 
The negative mean index of $\\alpha\\sim-$0.32 for the SPs is consistent with the results for the BATSE SPs. Besides the negative and positive energy correlations, $\\alpha$ is found to be marginally zero not only in the SPs but also in the DPs. We studied the pulse shape evolution from the low to the high energy channels for the sGRBs with $\\alpha$ close to zero. It is found that there is no obvious shape evolution in one case. In the other case, the pulse shape evolves from the low to the high channels in a new way, the inverse ``MODE II'', demonstrating that there may be more pulse evolution modes across adjacent energy channels in view of the Swift\/BAT observations.\n\n4. Furthermore, we studied the correlations of the main peaks with either the precursors or the EEs. No distinct correlations have been found in the asymmetry, the FWHM, the t$_r$ and the t$_d$, either for the Pre+sGRBs or for the sGRBs+EE. We found that the widths of the main peaks tend to be narrower than those of the EE pulses and wider than those of the precursors. In particular, we verified the power-law correlations in the f$_m$ of the three components, strongly suggesting that they originate from similar central engine activities. We also compared the temporal properties of GRB 170817A with those of the other sGRBs+EE, and no obvious differences have been found.\n\nOn the basis of these studies, we hope that our results shed new light on the search for the possible connections among these three components. More observational data on the precursors and the EEs are much needed in the future to constrain the current physical models and to interpret the complex GRB light curves in the new era of satellites.\n\n\nAcknowledgements\n\nWe thank the referee for very helpful suggestions and comments.\nThis work makes use of data supplied by NASA's High Energy Astrophysics Science Archive Research Center (HEASARC). 
It was supported by the Youth Innovations and Talents Project of Shandong Provincial Colleges and Universities (Grant No. 201909118) and the Natural Science Foundations (ZR2018MA030, XKJJC201901, OP201511, 20165660 and 11104161).\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{trtd.pdf}{1\\textwidth}{}\n }\n\\caption{The t$_r$ vs. $t_d$ of the sGRB pulses in the main peaks. In each panel, (a) SPs, (b) Mt-DPs, (c) Ml-DPs and (d) comparisons of the SPs and the 1st pulses in the two kinds of DPs. The lines are the best fits, solid lines for the 1st main peaks and dotted lines for the 2nd ones, respectively.\n\\label{fig:trtd}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{FWHM-asy.pdf}{1\\textwidth}{}\n }\n\\caption{The FWHM vs. $t_r\/t_d$ of the sGRB pulses in the main peaks. The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:FWHM-asy}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{tm-asy.pdf}{1\\textwidth}{}\n}\n\\caption{The $t_m$ vs. $t_r\/t_d$ of the sGRB pulses in the main peaks. The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:tm-asy}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{fm-asy.pdf}{1\\textwidth}{}\n }\n\\caption{The $f_m$ vs. $t_r\/t_d$ of the sGRB pulses in the main peaks. The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:fm-asy}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\n\\gridline{\n\\fig{tmFWHM.pdf}{1\\textwidth}{}\n }\n\\caption{The log$t_m$ vs. logFWHM of the sGRB pulses in the main peaks. The values of $t_m$ in this figure are positive. The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:tmFWHM}}\n\\end{figure*}\n\\begin{figure*}\n\\centering\n\n\\gridline{\n\\fig{fmFWHM.pdf}{1\\textwidth}{}\n }\n\\caption{The $f_m$ vs. FWHM of the sGRB pulses in the main peaks. 
The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:fmFWHM}}\n\\end{figure*}\n\n\n\n\\begin{figure*}[ht]\n\\centering\n\\gridline{\n\\fig{single1.pdf}{0.45\\linewidth}{(a)}\n\\fig{single2.pdf}{0.45\\linewidth}{(b)}}\n\\gridline{\n\\fig{single3.pdf}{0.5\\linewidth}{(c)}}\n\\caption{The FWHM vs. average photon energy of\nthe 17 SPs. The solid line stands for the best power-law fit to the observations. \\label{fig:singleFandE}}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\centering\n\\gridline{\n\\fig{single-index.pdf}{0.5\\linewidth}{}}\n\\caption{Distribution of the negative power-law indexes in $FWHM \\sim E^\\alpha$ for the 16 SPs. The vertical red dashed line shows the mean value of $\\alpha \\sim$ $-$0.32. \\label{fig:single-index}}\n\\end{figure*}\n\\begin{figure*}[ht]\n\\gridline{\n\t\\centering\n\\fig{double_101219A1.pdf}{0.3333\\linewidth}{(a)}\n\\fig{double_120804A.pdf}{0.3333\\textwidth}{(b)}\n\\fig{double_130912A1.pdf}{0.3333\\textwidth}{(c)}\n }\n\\caption{The FWHM vs. the average photon energy of the Mt-DPs. Examples are presented for\n(a) GRB 101219A, (b) GRB 120804A and (c) GRB 130912A. The solid line stands for the best power-law fit to the observations. \\label{fig:mtFandE}}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\centering\n\\gridline{\n\\fig{double_l_111117A.pdf}{0.3333\\linewidth}{(a)}\n\\fig{double_l_180204A1.pdf}{0.3333\\textwidth}{(b)}\n }\n\\caption{The FWHM vs. average photon energy of the Ml-DPs. Examples are shown for\n(a) GRB 111117A and (b) GRB 180204A. The solid line stands for the best power-law fit to the observations. \\label{fig:mlFandE}}\n\\end{figure*}\n\n\n\\begin{figure*}[ht]\n\\centering\n\\gridline{\n\\fig{single070810Bpulse.pdf}{0.495\\linewidth}{(1) the pulse shape evolution of GRB 070810B}\n\\fig{130912Apulse.pdf}{0.5\\textwidth}{(2) the pulse shape evolution of GRB 130912A}}\n\\caption{The pulse shape evolution from the lower to the higher energy channels. 
The vertical\nblack dashed lines mark the peak time (t$_m$) of the main peaks in Ch1 (GRB 070810B) and Ch1+2 (GRB 130912A). The horizontal dotted black lines mark a 3$\\sigma$ confidence level. \\label{fig:pulse evo}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\\fig{fm-pre-m.pdf}{0.33\\textwidth}{(a1)}\n          \\fig{asy-pre-m.pdf}{0.33\\textwidth}{(b1)}\n          \\fig{FWHM-pre-m.pdf}{0.33\\textwidth}{(c1)}\n          }\n\\gridline{\\fig{fm-EE-m.pdf}{0.33\\textwidth}{(a2)}\n          \\fig{asy-EE-m.pdf}{0.33\\textwidth}{(b2)}\n          \\fig{FWHM-EE-m.pdf}{0.33\\textwidth}{(c2)}\n          }\n\\gridline{\\fig{tr-pre-m.pdf}{0.33\\textwidth}{(d1)}\n          \\fig{td-pre-m.pdf}{0.33\\textwidth}{(e1)}\n          }\n\\gridline{\\fig{tr-EE-m.pdf}{0.33\\textwidth}{(d2)}\n          \\fig{td-EE-m.pdf}{0.33\\textwidth}{(e2)}\n          }\n\\caption{Comparisons of the pulse properties of the main peaks to those of the precursors (panels a(1), b(1), c(1), d(1), e(1)) and the EE tails (panels a(2), b(2), c(2), d(2), e(2)). The solid black lines plotted\nin both panels denote where the pulse properties are equal. In panel a(1), the lines stand for the best power-law fits to the data of the first main peak and its precursor: the dotted line for Ch1+2+3+4, the dashed line for Ch1+2 and the dash-dotted line for Ch3+4. In panel a(2), the dotted line stands for the best power-law fit to the data of the main peak and its first EE pulse.\n\\label{fig:preandee}}\n\\end{figure*}\n\n\\begin{figure*}[!h]\n\\centering\n\\gridline{\n\\fig{pre160726AFandE.pdf}{0.5\\textwidth}{(1)}\n\\fig{160726Apulse.pdf}{0.5\\textwidth}{(2)}\n }\n\\caption{An example of the Pre+SPs. (1) The FWHM vs. average photon energy for GRB 160726A. (2) The pulse shape evolution of GRB 160726A from the lower to the higher energy channels. The vertical\nblack dashed lines mark the peak time (t$_m$) of the main peaks in Ch1. The horizontal dotted black lines mark a 3$\\sigma$ confidence level. 
\\label{fig:pulse evo2}}\n\\end{figure*}\n\n\\clearpage\n\\startlongtable\n\\begin{deluxetable*}{l| c|c|c|c|c|c|c}\n\\tablecaption{The sample of sGRBs with three emission components. \\label{tableEE}}\n\\tablehead{\n\\emph{GRB}& T$_{90} (s)$ & $Redshift$ & $Energy$ $band$ & $Satellite$ & $Precursor$ & $Main peak$ & $EE$ }\n\\tabletypesize{\\small}\n\\startdata\n190326A&0.076&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n181123B&0.260&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n180727A&1.056&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n180718A&0.084&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n180402A&0.180&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n180204A&1.164&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n170817A&2.048&0.009783&8-350 keV&Fermi&$\\times$&$\\surd$&$\\surd$\\\\\n170428A&0.200&0.454&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n170325A&0.332&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n160726A&0.728&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n160612A&0.248&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n160601A&0.120&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n160408A&0.320&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n151229A&1.440&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n151228A&0.276&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n150831A&0.920&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n150710A&0.152&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n150423A$^\\ddag$&0.216&1.394&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n150301A&0.484&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n150120A$^\\ddag$&1.196&0.460&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n150101B&0.012&0.1343&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n141212A&0.288&0.596&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n140930B&0.844&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n140903A&0.296&0.351&15-350 
keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n131004A$^\\ddag$&1.536&0.717&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n130912A&0.284&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n130626A&0.160&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n130603B&0.176&0.3565&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n130515A&0.296&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n120804A$^\\ddag$&0.808&1.3&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n120630A&0.596&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n120521A&0.512&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n120305A&0.100&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n111117A$^\\ddag$&0.464&2.211&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n111020A&0.384&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n110420B&0.084&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n101224A&0.244&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n101219A$^\\ddag$&0.828&0.718&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n101129A&0.384&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n100724A$^\\ddag$&1.388&1.288&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n100702A&0.512&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n100625A$^\\ddag$&0.332&0.452&15-350 keV&Swift&$\\surd$&$\\surd$&$\\surd$\\\\\n100206A&0.116&0.4068&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n100117A$^\\ddag$&0.292&0.915&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n091109B&0.272&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n090621B&0.140&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n090510$^\\ddag$&5.664&0.903&15-350 keV&Swift&$\\surd$&$\\surd$&$\\surd$\\\\\n090426$^\\ddag$&1.236&2.609&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n090417A&0.068&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n081226A&0.436&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n081101&0.180&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n081024A&1.824&&15-350 
keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n080426&1.732&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n071112B&0.304&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n070923&0.040&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n070810B&0.072&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n070809&1.280&0.2187&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n061217&0.224&0.827&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n060614&109.104&0.1254 &15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n060502B&0.144&0.287&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n051221A&1.392&0.5464&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n051105A&0.056&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n050925&0.092&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n050813&0.384&0.722&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n050724$^\\dagger$&98.684&0.257&15-350 keV&Swift&$\\surd$&$\\surd$&$\\surd$\\\\\n050509B&0.024&0.2249&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n050202&0.112&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\n\\enddata\n\\tablecomments{ The precursor (or EE) components of these sGRBs are too weak to be fitted successfully and are marked with $\\dag$ (or $\\ddag$).}\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{clcccc}\n\\tabletypesize{\\small}\n\\tablecaption{The best-fit parameters of the power-law correlation between the t$_r$ and the t$_d$.\\label{tab:tabletrtd}}\n\\tablehead{\n\\emph{GRB}& $\\emph{N}$&$\\emph{Pearson's r}$ & \\emph{$R^2$}& $\\emph{$\\beta$}$}\n\\startdata\n\\hline\nSPs&157&0.82&0.67&0.86 $\\pm$ 0.05 \\\\\n\\hline\nMt-DPs 1st&18&0.79&0.59&0.94 $\\pm$ 0.18 \\\\\nMt-DPs 2nd&18&0.24&0&0.32 $\\pm$ 0.31 \\\\\n\\hline\nMl-DPs 1st&\t7&0.52&0.12&1.31 $\\pm$ 0.96 \\\\\nMl-DPs 2nd&\t7&0.93&0.83&1.07 $\\pm$ 0.20 \\\\\n\\hline\n\\enddata\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{clcccc}\n\\tabletypesize{\\small}\n\\tablecaption{The best-fit parameters of the correlation 
of the logt$_m$ with the logFWHM. \\label{tab:tabletmFWHM}}\n\\tablehead{\n\\emph{GRB}& $\\emph{N}$&$\\emph{Pearson's r}$ & \\emph{$R^2$}& $\\emph{$\\mu$}$}\n\\startdata\n\\hline\nSPs&154*&0.78&0.60&0.93$\\pm$ 0.06\\\\\n\\hline\nMt-DPs 1st\t&15*&0.46&0.15&0.38 $\\pm$ 0.21\\\\\nMt-DPs 2nd\t&18*&0.43&0.14&0.67$\\pm$ 0.35\\\\\n\\hline\nMl-DPs 1st\t&6*&0.53&0.10&2.20 $\\pm$ 1.76\\\\\nMl-DPs 2nd\t&7*&0.95&0.89&0.33 $\\pm$ 0.05\\\\\n\\hline\n\\enddata\n\\tablecomments{* The number of sGRB pulses whose t$_m$ is positive.}\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{cccccc}\n\\tabletypesize{\\small}\n\\tablecaption{The best-fit parameters of the power-law correlation of the f$_m$ with the FWHM. \\label{tab:tablefmFWHM}}\n\\tablehead{\n\\emph{GRB}&\\emph{N}&\\emph{Pearson's r} & \\emph{$R^2$}& \\emph{$\\nu$}}\n\\startdata\nSPs&157 &-0.51 &0.25 &-0.45 $\\pm$ 0.06\\\\\n\\hline\nMt-DPs 1st&18 &-0.79 &0.61 &-1.08 $\\pm$ 0.21\\\\\nMt-DPs 2nd&18 &-0.31&0.04&-0.65 $\\pm$ 0.50\\\\\n\\hline\nMl-DPs 1st&7 &-0.67 &0.34 &-1.31 $\\pm$ 0.66\\\\\nMl-DPs 2nd&7 &-0.30 &-0.09 &-0.13 $\\pm$ 0.18 \\\\\n\\hline\n\\enddata\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{cccccc}\n\\tabletypesize{\\small}\n\\tablecaption{Asymmetric properties of the main peaks for different kinds of sGRBs.\\label{tab:tableasy}}\n\\tablehead{\n\\emph{GRB}&\\emph{N}&\\emph{mean}&\\emph{Median} & \\emph{Minimum}& \\emph{Maximum}}\n\\startdata\nSPs\t\t&157&0.73&0.65&0.03&1.56\\\\\n\\hline\nMt-DPs\t1st\t&18&0.79&0.65&0.17&1.56\\\\\nMt-DPs\t2nd\t&18&1.12&1.28&0.07&1.48\\\\\n\\hline\nMl-DPs\t1st\t&7&0.87&0.80&0.33&1.30\\\\\nMl-DPs\t2nd\t&7&1.07&1.37&0.50&1.51\\\\\n\\hline\nAll&207&0.79&0.70&0.03&1.56\\\\\n\\enddata\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{clccccc}\n\\tabletypesize{\\small}\n\\tablecaption{The best-fit parameters of the correlation between the FWHM and the average energy\nof photons in each 
channel.\\label{tab:index}}\n\\tablehead{\n\\emph{Type}&\\emph{GRB} &\\emph{Pearson's r}& \\emph{$R^2$}& \\emph{$\\chi^2_\\nu$}& \\emph{$\\alpha$}}\n\\startdata\nSPs&190326A &-0.99 &0.97 &1.2 &-0.40 $\\pm$ 0.04\\\\\nSPs&180718A &-0.94 &0.82 &0.60&-0.47 $\\pm$ 0.12\\\\\nSPs&170428A &-0.92 &0.77&5.15 &-0.34 $\\pm$ 0.10\\\\\nSPs&160601A &-0.94 &0.82 &7.27&-0.66 $\\pm$ 0.17\\\\\nSPs&150710A &-0.97 &0.92&0.29 &-0.19 $\\pm$ 0.03\\\\\nSPs& 150301A &-0.93 &0.78&6.32 &-0.21 $\\pm$ 0.06\\\\\nSPs&141212A &-0.90 &0.71&3.16 &-0.33 $\\pm$ 0.11\\\\\nSPs&140903A &-0.92 &0.76 &3.12&-0.26 $\\pm$ 0.08\\\\\nSPs& 131004A &-0.90 &0.70 &1.52&-0.21 $\\pm$ 0.06\\\\\nSPs& 120305A&-0.99 &0.98 &0.63&-0.26 $\\pm$ 0.02\\\\\nSPs& 110420B &-0.93 &0.79 &12.82&-0.80 $\\pm$ 0.23\\\\\nSPs& 100206A* &-0.90 &0.73 &0.08&-0.06 $\\pm$ 0.02\\\\\nSPs& 091109B &-0.95 &0.85 &0.91&-0.27 $\\pm$ 0.06\\\\\nSPs& 090621B &-0.98 &0.93 &0.63&-0.26 $\\pm$ 0.04\\\\\nSPs& 070923 &-0.99 &0.97&0.10 &-0.18 $\\pm$ 0.02\\\\\nSPs& 050925&-0.91 &0.75 &3.60&-0.29 $\\pm$ 0.09\\\\\n\\hline\nPre+SPs& 160726A precursor&-0.82&0.50&0.89&-0.28$\\pm$ 0.14\\\\\nPre+SPs& 160726A* main&-0.66&0.16&3.57&-0.11 $\\pm$ 0.09\\\\\n\\hline\nSPs&GRB 070810B* &0.23 &-0.42 &0.69&0.05 $\\pm$ 0.16\\\\\n\\hline\n\\hline\nMt-DPs& 120804A 1st &-0.28 &-0.38 &33.88&-0.35 $\\pm$ 0.83\\\\\nMt-DPs& 120804A 2nd &-0.86 &0.62 &64.79&-0.79 $\\pm$ 0.32\\\\\nMt-DPs& 101219A 1st &-0.85 &0.59 &31.33&-0.42 $\\pm$ 0.18\\\\\nMt-DPs&101219A* 2nd &-0.69 &0.21&0.66 &-0.11 $\\pm$ 0.08\\\\\n\\hline\nMt-DPs& 130912A* 1st &0.61 &0.05 &4.23&0.13 $\\pm$ 0.12\\\\\nMt-DPs& 130912A* 2nd &-0.60 &0.05 &1.76&-0.12 $\\pm$ 0.11\\\\\n\\hline\nMl-DPs& 180204A 1st&-0.79&0.43&10.27&-0.38 $\\pm$ 0.21\\\\\nMl-DPs& 180204A 2nd&-0.93&0.79&4.32&-0.28 $\\pm$ 0.08\\\\\nMl-DPs& 111117A* 1st&-0.83&0.52&0.07&-0.06 $\\pm$ 0.03\\\\\nMl-DPs& 111117A 2nd&-0.97&0.91&0.59&-0.57 $\\pm$ 0.10\\\\\n\\enddata\n\\tablecomments{* The power-law indexes in $FWHM \\sim E^\\alpha$ of sGRBs are marginally 
zero.}\n\\end{deluxetable*}\n\n\n\\clearpage\n\\begin{longrotatetable}\n\\begin{deluxetable*}{lclcrccCccccccccccc}\n\\tabletypesize{\\tiny}\n\\tablecaption{Analysis Results of Precursors\\label{prefitting}}\n\\tablewidth{750pt}\n\\setlength{\\tabcolsep}{0.15mm}\n\\tablehead{\n\\colhead{sGRBs}&\\colhead{f$_{mp}$}&\n\\colhead{t$_{mp}$} &\n\\colhead{t$_{rp}$} & \\colhead{t$_{dp}$} & \\colhead{(t$_r$\/t$_d$)$_p$} &\n\\colhead{FWHM$_{p}$}&\n\\colhead{f$_{mm1}$} &\n\\colhead{t$_{mm1}$} &\n\\colhead{t$_{rm1}$} & \\colhead{t$_{dm1}$} & \\colhead{(t$_r$\/t$_d$)$_{m1}$} &\n\\colhead{FWHM$_{m1}$}&\\colhead{f$_{mm2}$} &\n\\colhead{t$_{mm2}$} &\n\\colhead{t$_{rm2}$} & \\colhead{t$_{dm2}$} & \\colhead{(t$_r$\/t$_d$)$_{m2}$} &\n\\colhead{FWHM$_{m2}$}\\\\\n&\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}\n}\n\\startdata\nsingle-main-peaked Pre+sGRBs&&&&&&&&&&&&&&&&&&\\\\\n\\hline 060502B(Ch1+2+3+4)(8ms)&0.29254&-0.39565&0.01804&0.03231&0.55834&0.05035&1.10619&0.01680&0.01249&0.03320&0.37620&0.04569&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\ 060502B(Ch1+2)&0.18210&-0.39826&0.00933&0.03153&0.29591&0.04086&0.47337&0.01878&0.01421&0.03878&0.36643&0.05299&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n 060502B(Ch3+4)&0.12216&-0.40005&0.01804&0.03385&0.53294&0.05189&0.65985&0.01582&0.01095&0.02689&0.40721&0.03784&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\ 071112B(Ch1+2+3+4)(16ms)\\tablenotemark{a}&0.36533&-0.57263&0.02383&0.02295&1.03834&0.04678&0.34022&0.05956&0.06489&0.10150&0.63931&0.16639&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\ 
071112B(Ch1+2)&0.15123&-0.58015&0.02623&0.03538&0.74138&0.06161&0.16127&0.06561&0.05627&0.04372&1.28705&0.09999&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n071112B(Ch3+4)&0.21181&-0.57754&0.01532&0.02371&0.64614&0.03903&0.26077&0.05354&0.03620&0.09331&0.38795&0.12951&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n100702A(Ch1+2+3+4)(8ms)&0.46258&-0.25755&0.01681&0.02625&0.64038&0.04306&1.57141&0.08286&0.04207&0.06910&0.60883&0.11117&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n100702A(Ch2)&0.18082&-0.25757&0.03834&0.02750&1.39418&0.06584&0.56600&0.07597&0.04315&0.07271&0.59345 &0.11586&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n100702A(Ch3)&0.09887&-0.26002&0.02406&0.01716&1.40210&0.04122&0.55149&0.07321&0.02712&0.06700&0.40478&0.09412&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160408A(Ch1+2+3+4)(8ms)&0.34285&-0.92805&0.00620&0.09794&0.06330&0.10414&0.80350&0.12414&0.09374&0.19035&0.49246&0.28409&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160408A(Ch1+2)&0.24496&-0.89193 
&0.04242&0.03554&1.19358&0.07796&0.26421&0.23971&0.20388&0.15022&1.35721&0.35410&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160408A(Ch3+4)&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&0.57607&0.07839&0.05323&0.17024&0.31268&0.22347&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t160726A(Ch1+2+3+4)(8ms)&1.37414&0.01983&0.01621&0.02554&0.63469&0.04175&1.56596&0.63190&0.08339&0.09692&0.86040&0.18031&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160726A(Ch1+2)&0.45222&0.01986&0.02871&0.04433&0.64764&0.07304&0.81925&0.61585&0.06846&0.09671&0.70789&0.16517&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160726A(Ch3+4)&0.88454&0.02220&0.01645&0.01661&0.99037&0.03306&0.76960&0.66611&0.10826&0.08215&1.31783&0.19041&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t180402A(Ch1+2+3+4)(8ms)&0.42598&-0.19677&0.01679&0.02710&0.61956&0.04389&0.99640&0.19356&0.10409&0.07669&1.35728&0.18078&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n180402A(Ch1+2)&0.31619&-0.19699&0.01170&0.00754&1.55172&0.01924&0.33803&0.17332&0.13891&0.11366&1.22215&0.25257&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n180402A(Ch3+4)&0.34479&-0.20039&0.00735&0.01979&0.37140&0.02714&0.74040&0.20217&0.07322&0.05872&1.24693&0.13194&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n\\hline\ndouble-main-peaked Pre+sGRBs &&&&&&&&&&&&&&&&&&\\\\\n\\hline\n081024A(Ch1+2+3+4)(8ms)&0.32881&-1.62738 &0.02039&0.03372&0.60469&0.05411&0.20932&-0.26660&0.17324&0.1313&1.31932&0.30455&0.45532&0.07160 &0.06408&0.07796&0.82196&0.14204\\\\\n081024A(Ch1+2)&0.16799&-1.61997&0.03414&0.02491&1.37053&0.05905&0.10299&-0.27799&0.19897&0.12758&1.55957&0.32655&0.21059&0.09102 &0.10461&0.07092&1.47504&0.17553\\\\\n081024A(Ch3)&0.12958&-1.63504&0.01357&0.04425&0.30667&0.05782&0.06638&-0.24707&0.16134&0.11555&1.39628&0.27689&0.18884&0.09078&0.05191 
&0.04091&1.26888&0.09282\\\\\n090510(Ch1+2+3+4)(8ms)&0.96274&-0.52401&0.02343&0.01152&2.03385&0.03495&3.13425&0.03598&0.03423&0.04810&0.71164&0.08233&1.48482&0.29996&0.07362&0.05295&1.39037&0.12657\\\\\n090510(Ch1+2)&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&0.98523&0.03116 &0.03742&0.05867&0.63780&0.09609&0.67289&0.28228&0.06988&0.08297&0.84223&0.15285\\\\\n090510(Ch3+4)&0.79215&-0.52361&0.01878&0.01302&1.44240&0.03180&2.21937&0.03597&0.02867&0.04453&0.64384&0.07320&0.73833&0.29850&0.08080&0.05654&1.42908&0.13734\\\\\n100625A(Ch1+2+3+4)(8ms)&0.25267&-0.37602&0.01657&0.01334&1.24213&0.02991&0.91095&0.04791&0.08971&0.06296&1.42487&0.15267&1.19950 &0.21595 &0.08502&0.05728&1.48429&0.14230\\\\\n100625A(Ch1+2)&0.13500&-0.37508&0.01876&0.01315&1.42662&0.03191&0.43599&-0.02516&0.04723&0.07267&0.64992&0.11990&0.50705&0.21970 &0.11453&0.08541&1.34094&0.19994\\\\\n100625A(Ch3+4)&0.09812&-0.37598&0.02120&0.01405&1.50890&0.03525&0.61602&0.07495&0.07481&0.05539&1.35052&0.13020&0.87084&0.21245&0.03871 &0.02780&1.39245&0.06651\\\\\n\\enddata\n\\tablecomments{ We give the peak amplitude f$_{m}$, the peak time t$_{m}$, the rise time t$_{r}$, the decay time t$_{d}$, the asymmetry t$_r$\/t$_d$ and the FWHM of each component. The subscript m and p identify the main peaks and precursor components.\n}\n\\tablenotetext{a}{For this GRB, we adopt the average amplitude to estimate the peak amplitudes of the 8 ms light curve data. 
}\n\\end{deluxetable*}\n\\end{longrotatetable}\n\n\\clearpage\n\\begin{longrotatetable}\n\\begin{deluxetable*}{lccccccCccccccllrcc}\n\\tabletypesize{\\tiny}\n\\tablecaption{Analysis Results of Extended Emissions\\label{EEfitting1}}\n\\tablewidth{500pt}\n\\setlength{\\tabcolsep}{0.25mm}\n\\tablehead{\n\\colhead{sGRBs} &\n\\colhead{f$_{mm}$} &\n\\colhead{t$_{mm}$} &\n\\colhead{t$_{rm}$} & \\colhead{t$_{dm}$} & \\colhead{(t$_r$\/t$_d$)$_m$} &\n\\colhead{FWHM$_{m}$}&\n\\colhead{f$_{mE1}$} &\n\\colhead{t$_{mE1}$} &\n\\colhead{t$_{rE1}$} & \\colhead{t$_{dE1}$} & \\colhead{(t$_r$\/t$_d$)$_{E1}$} &\n\\colhead{FWHM$_{E1}$}&\\colhead{Cp$_{E2}$} &\n\\colhead{Tp$_{E2}$} &\n\\colhead{t$_{rE2}$} & \\colhead{t$_{dE2}$} & \\colhead{(t$_r$\/t$_d$)$_{E2}$} &\n\\colhead{FWHM$_{E2}$}\\\\\n&\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}\n}\n\\startdata\nsGRBs with single EE&&&&&&&&&&&&&&&&&&\\\\\n\\hline\n050724(Ch1+2+3+4)(8ms)&1.48424&0.03066&0.04292&0.14942&0.28724&0.19234&0.25967&1.06785&0.10299&0.13674&0.75318&0.23973&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t050724(Ch1+2)&0.90669&0.01477&0.03347&0.17023&0.19662&0.20370&0.21770&1.08060&0.09051&0.08264&1.09523&0.17315&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\ 
050724(Ch3+4)&0.63496&0.06784&0.05864&0.10271&0.57093&0.16135&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n060614(Ch1+2+3+4)(1s)\\tablenotemark{a}&0.93613&0.73466&2.24341&2.04035&1.09952&4.28376&0.54390&37.72733&22.07336&31.22141&0.70699&53.29477&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t060614(Ch1+2)&0.58041&0.73500&2.20074&2.04868&1.07422&4.24942&0.42033&38.73400&21.86396&31.21840&0.70035&53.08236&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t060614(Ch3+4)&0.33087&0.74033&2.32277&2.37680&0.97727&4.69957&0.12627&33.75601&22.95666&30.75025&0.74655&53.70691&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t150101B(Ch1+2+3+4)(2ms)\\tablenotemark{a}&4.10741&0.00899&0.00632&0.00292&2.16438&0.00924&0.53550&0.03902&0.03900&0.02880&1.35411&0.06780&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t150101B(Ch1+2)&2.59968&0.00701&0.00303&0.00601&0.50416&0.00904&0.39149&0.05099&0.02439&0.01809&1.34826&0.04248&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t150101B(Ch3+4)&2.14224&0.00678&0.00634&0.00439&1.44465&0.01073&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n170817A(8-350 KeV)(64ms)\\tablenotemark{a}{$^,$}\\tablenotemark{b}&57.07469&-0.06509&0.20199&0.30698&0.65799&0.50897&23.73346&1.66293&0.29296&0.23442&1.24970&0.52738&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t170817A(8-50 KeV)&24.19345&0.12665&0.32209&0.20113&1.60140&0.52322&14.38594&1.53474&0.90567&0.74143&1.22152&1.64710&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t170817A(50-350 KeV)&52.45493&-0.16450&0.05621&0.15697&0.35809&0.21318&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t\t\t\t\t\t\t\\hline\nsGRBs with double 
EE&&&&&&&&&&&&&&&&&&\\\\\n\\hline\n051221A(Ch1+2+3+4)(8ms)&5.96541&0.11585&0.06280&0.10660&0.58912&0.16940&0.55322&0.62791&0.04217&0.06762&0.62363&0.10979&0.99519&0.89204&0.03746&0.22567&0.16599&0.26313\\\\\n051221A(Ch1+2)&2.06378&0.16418&0.13124&0.11657&1.12585&0.24781&0.44105&0.58814&0.21491&0.15913&1.35053&0.37404&0.81210&0.90792&0.05647&0.21265&0.26555&0.26912\\\\\n051221A(Ch3+4)&3.54495&0.11599&0.05270&0.08532&0.61767&0.13802&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&0.2260&0.96211&0.08544&0.06866&1.24439&0.15410\\\\\n\\enddata\n\\tablecomments{We give the peak amplitude f$_{m}$, the peak time t$_{m}$, the rise time t$_{r}$, the decay time t$_{d}$, the asymmetry t$_r$\/t$_d$ and the FWHM of each component. The subscript m and E identify the main peaks and EE components.\n}\n\\tablenotetext{a}{For these GRBs, we adopt the average amplitude to estimate the peak amplitudes of the 8 ms light curve data. }\n\\tablenotetext{b} {For GRB\n170817A, the summed GBM lightcurves for sodium iodide\n(NaI) detectors 1, 2, and 5 with 64 ms resolution\nbetween 8 and 350 keV have been used. 
We estimate the backgrounds\nusing the model \\emph{$f_0$(t) = at+b} from 20 s prior to the trigger time.}\n\\end{deluxetable*}\n\\end{longrotatetable}\n\n\\begin{longrotatetable}\n\\setlength{\\tabcolsep}{6pt}\n\\begin{deluxetable*}{lccccccCccccccllrcc}\n\\tabletypesize{\\tiny}\n\\tablecaption{Analysis Results of Extended Emissions (Continued) \\label{EEfitting2}}\n\\tablewidth{500pt}\n\\setlength{\\tabcolsep}{0.25mm}\n\\tablehead{\n\\colhead{sGRBs} &\n\\colhead{f$_{mm1}$} &\n\\colhead{t$_{mm1}$} &\n\\colhead{t$_{rm1}$} & \\colhead{t$_{dm1}$} & \\colhead{(t$_r$\/t$_d$)$_{m1}$} &\n\\colhead{FWHM$_{m1}$}&\n\\colhead{f$_{mm2}$} &\n\\colhead{t$_{mm2}$} &\n\\colhead{t$_{rm2}$} & \\colhead{t$_{dm2}$} & \\colhead{(t$_r$\/t$_d$)$_{m2}$} &\n\\colhead{FWHM$_{m2}$}&\\colhead{f$_{mE}$} &\n\\colhead{t$_{mE}$} &\n\\colhead{t$_{rE}$} & \\colhead{t$_{dE}$} & \\colhead{(t$_r$\/t$_d$)$_{E}$} &\n\\colhead{FWHM$_{E}$}\\\\\n&\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}\n}\n\\startdata\nsGRBs with single EE&&&&&&&&&&&&&&&&&&\\\\\n130603B(Ch1+2+3+4)(8ms)&11.14306&0.02243&0.01184&0.02019&0.58643&0.03203&5.42614&0.07061&0.02032&0.02692&0.75483&0.04724&0.70613&0.19549&0.09033&0.06332&1.42656&0.15365\\\\\n130603B(Ch1+2)&4.86977&0.02241&0.01246&0.01949&0.63930&0.03195&2.13590&0.07964&0.02673&0.03681&0.72616&0.06354&0.49596&0.21647&0.07554&0.05542&1.36305&0.13096\\\\\n130603B(Ch3+4)&5.29256&0.02239&0.01183&0.01758&0.67292&0.02941&3.77207&0.06909&0.02843&0.02086&1.36290&0.04929&0.40396&0.11880&0.06665&0.05060&1.31719&0.11725\\\\\n\\enddata\n\\tablecomments{We give the peak amplitude f$_{m}$, the peak time t$_{m}$, the rise time t$_{r}$, the decay time t$_{d}$, the asymmetry t$_r$\/t$_d$ and the FWHM of each 
component. The subscripts m and E identify the main peak and EE components.\n}\n\\end{deluxetable*}\n\\end{longrotatetable}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\nIn developing human-computer interaction systems, Speech Emotion Recognition (SER) technology is considered an essential element to provide a proper response depending on a user's emotional state \\cite{kolakowska2014emotion}.\nMany machine learning models have been built for SER, in which the models are trained to predict an emotion among candidates such as happy, sad, angry, or neutral for a given speech~\\cite{nwe2003speech, chavhan2010speech, mao2014learning, mirsamadi2017automatic}.\nRecently, researchers have adopted multimodal approaches in SER, considering that emotions can be expressed in various ways such as facial expressions, gestures, texts, or speech~\\cite{castellano2008emotion, yoon2020attentive}.\nIn particular, the text modality has been frequently used in addition to the speech in many SER studies, because human speech inherently consists of the acoustic features and the linguistic contents that can be expressed using text~\\cite{yoon2019speech, xu2019learning}.\n\nThe major issue in SER using both the audio and text modalities is how to extract and combine the information that the audio and text each carry.\nFor example, if someone says, \\emph{``Thank you for being with me.\"} in a very calm voice, the emotional information is contained mostly in the linguistic contents while it sounds neutral based on the acoustic features.\nPrevious studies have approached this issue by designing their models to encode the audio and text independently and fuse the results using attention mechanisms, which help their models effectively capture the locally salient regions from given signals.\nIn these attention mechanisms, the separately encoded audio and text information operated as each other's query and key-value pair.\nYoon et 
al.~\\cite{yoon2019speech} used the last hidden state of a recurrent modality encoder as a query and used the other encoded modality as a key-value pair in the attention mechanism.\nIn another study, Xu et al.~\\cite{xu2019learning} designed their model to learn the alignment between the audio and text by itself from the attention mechanism.\n\nHowever, letting the model learn the complex interaction between the different modalities without any constraints can make the training more difficult.\nUsing the last hidden state of a recurrent encoder as a query as in \\cite{yoon2019speech} can lead to temporal information loss in the attention, as pointed out in \\cite{mirsamadi2017automatic}.\nBesides, learning the alignment between the audio and text signals relying on the attention mechanism as in \\cite{xu2019learning} is a challenging task unless additional prior knowledge is provided as in \\cite{raffel2017online, battenberg2019location}.\n\n\\input{ForcedAlignment.tex}\nTo overcome these limitations, we propose a novel SER model called Cross Attention Network (CAN) that effectively combines the information obtained from aligned audio and text signals.\nInspired by how humans recognize speech, we design our model to regard the audio and text as temporally aligned signals.\nIn the CAN, each audio and text input is separately encoded through its own recurrent encoder.\nThen, the hidden states obtained from each encoder are independently aggregated by applying the global attention mechanism to each modality.\nFurthermore, the attention weights extracted from each modality are directly applied to each other's hidden states in a crossed way, so that the information at the same time steps is aggregated with the same weights.\n\nIn order to make the cross attention work properly, we propose an aligned segmentation technique that divides each audio and text signal into the same number of parts in an aligned way.\nIn the aligned segmentation technique, the text signal 
is segmented into words.\nFollowing the text, the audio signal is segmented using alignment information as shown in Table~\\ref{tab:alignment}, where the start- and end-times of each word are used to determine the partitioning points in the audio signal.\nThe aligned segmentation technique enables our model to successfully combine the information from the aligned audio and text signals without having to learn the complex attention between different modalities as in the previous works.\n\nTo evaluate the performance of the proposed method, we conduct experiments on the IEMOCAP dataset.\nFirst, we compare the CAN with other state-of-the-art SER models that use the additional text modality.\nThe results show that our model outperforms the other models in both weighted and unweighted accuracy by 2.66\\% and 3.18\\% in relative terms.\nFurthermore, ablation studies are conducted to assess the actual effectiveness of components such as the aligned segmentation, the stop-gradient operator, and the additional loss.\nIn the ablation studies, we observe the independent contribution of each component to improving the model performance.\n\n\\section{Related work}\nAfter the classical machine learning models such as the hidden Markov model or the support vector machine \\cite{nwe2003speech, chavhan2010speech}, models using neural networks have been actively studied in Speech Emotion Recognition (SER).\nTo improve the model performance, researchers have proposed various methods to effectively capture the locally salient regions over the time axis of a given speech.\nBertero et al. \\cite{bertero2017first} proposed a model consisting of a convolutional neural network (CNN) that captures local information from given acoustic feature frames.\nMirsamadi et al. \\cite{mirsamadi2017automatic} used the global attention mechanism to make their model learn where to attend in order to capture the locally salient features.\nSahoo et al. 
\\cite{sahoo2019segment} proposed training a CNN-based model with audio segments of equal length cut from an utterance, which improved the model by forcing it to learn to capture the locally salient emotional features in a more elaborate manner.\n\nRecently, multimodal models that use the audio and text together for SER have attracted much attention \\cite{yoon2019speech, xu2019learning, sebastian2019fusion, liang2019cross}.\nSince the audio and text signals contain different information, a major issue has been how to design the models to effectively extract information from each modality and combine them.\nIn the previous studies, attention mechanisms were frequently used to combine the information \\cite{yoon2019speech, xu2019learning}, where the hidden states obtained separately from the audio and text signals were used as each other's query or key-value pair.\nThe attention mechanisms were expected to help their models learn to combine the information of each modality by themselves.\nHowever, none of these studies used proper constraints or prior knowledge to ease the difficulty of learning the complex interaction between the audio and text signals.\n\\section{Methodology}\n\\label{section:algorithm}\nIn this section, we propose a novel Speech Emotion Recognition (SER) model called Cross Attention Network (CAN).\nFirst, we explain the preprocessing of the text and audio data, which is necessary for the CAN to work properly.\nThe purpose of the preprocessing is to make the text and audio have the same number of time steps, such that the same time steps of the two sequential signals cover the same time span.\nThen the CAN itself is explained: it utilizes the cross attention mechanism, which enables the CAN to focus on the salient features of the aligned text and audio signals from the perspective of each modality.\n\n\\subsection{Data preprocessing}\n\\subsubsection{Text data}\nIn this study, we consider a text input as a word sequence, 
so the text input is represented as $X=\\{x_1, x_2, ..., x_L\\},~X\\in \\mathbb{R}^{L\\times V}$, where $L$ is the number of words, $V$ is the size of the vocabulary, and each $x_i$ is a one-hot vector representing the corresponding word.\nThen, $\\textbf{E}^{(T)}\\in \\mathbb{R}^{L\\times D_e}$, an embedded text input, is obtained after $X$ passes through a trainable GloVe embedding layer \\cite{pennington2014glove}, where $D_e$ is the dimension of the embedding layer.\n\n\\subsubsection{Audio data}\n\\label{subsection:audioprocessing}\nLet $Y=\\{y_1, y_2, ..., y_T\\},\\;Y\\in \\mathbb{R}^{T}$ be the 1-dimensional audio data and $D=\\{d_1, d_2, ..., d_L\\}$ be its alignment information, where $T$ is the audio length and each $d_i=(s_i, e_i)$ represents the start and the end of each word.\nTo prevent loss of the correlation information at segment boundaries, we let neighboring $d_i$s overlap by 10\\%.\n\nUsing $Y$ and $D$, we obtain the segmented audio data $\\textbf{E}^{(A)}\\in \\mathbb{R}^{L\\times (T'\\times D_f)}$; the audio $Y$ is first segmented into audio segments $Y'=\\{y_{s_1:e_1},y_{s_2:e_2},...,\\;y_{s_L:e_L}\\}$, and then each segment is converted into an MFCC feature and stacked into $\\textbf{E}^{(A)}$ with zero-padding.\nHere, $D_f$ is the number of MFCC coefficients and $T'$ is the length of the longest MFCC sequence.\n\n\n\n\n\n\\subsection{Model architecture}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.20]{encoders_grey.png}\n \\caption{Text encoder and Audio encoder. The BLSTM modules inside the dotted box share their weights. 
For better understanding, we depict the audio input as raw waveforms, but the waveforms are converted into MFCC segments in advance and used as the audio inputs.}\n \\label{fig:encoders}\n\\end{figure}\n\\subsubsection{Text encoder}\nThe embedded text data $\\textbf{E}^{(T)}$ is fed into the text encoder consisting of the bidirectional long short-term memory (BLSTM) \\cite{hochreiter1997long} as shown on the left side of Figure \\ref{fig:encoders}, which leads to the hidden states $\\textbf{H}^{(T)}\\in\\;\\mathbb{R}^{L\\times D_h}$ obtained from the equations below:\n\\begin{align}\n & \\overrightarrow{h_i}=f_{\\theta}(\\overrightarrow{h_{i-1}},~ \\textbf{E}^{(T)}_i), \\\\[10pt]\n & \\overleftarrow{h_i}=f'_{\\theta}(\\overleftarrow{h_{i+1}},~ \\textbf{E}^{(T)}_i), \\\\[10pt]\n & \\textbf{H}^{(T)}=\\{[\\overrightarrow{h_1};\\overleftarrow{h_1}],~ [\\overrightarrow{h_2};\\overleftarrow{h_2}],~...,~ [\\overrightarrow{h_L};\\overleftarrow{h_L}]\\},\n\\end{align}\nwhere $f_\\theta$, $f'_\\theta$ are the forward and backward LSTMs having $D_h$ hidden units with parameter $\\theta$.\nAdditionally, $h_i$ represents the hidden state at the $i$-th time step and $\\textbf{E}^{(T)}_i$ represents the $i$-th embedded word vector of the text data.\n\n\\subsubsection{Audio encoder}\nThe audio encoder consists of two bidirectional LSTM layers as shown on the right side of Figure \\ref{fig:encoders}.\nThe bottom LSTM layer encodes each MFCC segment $\\textbf{E}^{(A)}_i\\in \\mathbb{R}^{T'\\times D_f}$ independently and outputs a vector from each segment using average pooling.\nThe BLSTM modules inside the dotted box in Figure \\ref{fig:encoders} share their weights.\nThe upper LSTM layer encodes the audio features obtained from the bottom layer and outputs the hidden states $\\textbf{H}^{(A)}\\in \\mathbb{R}^{L\\times D_h}$, which has the same number of time steps $L$ as $\\textbf{H}^{(T)}$.\n\n\n\n\\subsubsection{Cross attention}\n\\begin{figure}[t]\n \\centering\n 
\\includegraphics[scale=0.20]{CAN_grey.png}\n \\caption{Cross Attention Network. The scissors represent the stop-gradient operator that cuts the gradient flow during backpropagation.}\n \\label{fig:can}\n\\end{figure}\nIn the cross attention, attention weights obtained from one modality are used to aggregate the other modality as shown in Figure \\ref{fig:can}, while conforming to the constraint that the audio and text are temporally aligned.\nSince the salient regions can be different depending on what modality the prediction is based on, the aggregation of the modalities happens twice based on each modality in the cross attention as follows:\n\\begin{gather}\n\t\\alpha^{(T)}_i=\\dfrac{\\text{exp}({~(\\textbf{q}^{(T)})}^\\intercal~\\textbf{H}^{(T)}_{i}~)}{\\sum_{j} \\text{exp}({~(\\textbf{q}^{(T)})}^\\intercal~\\textbf{H}^{(T)}_{j}~)},~~~(i=1,...,L),\n \\\\[10pt]\n \\alpha^{(A)}_i=\\dfrac{\\text{exp}({~(\\textbf{q}^{(A)})}^\\intercal~\\textbf{H}^{(A)}_{i}~)}{\\sum_{j} \\text{exp}({~(\\textbf{q}^{(A)})}^\\intercal~\\textbf{H}^{(A)}_{j}~)},~~~(i=1,...,L),\n\\end{gather}\n\\begin{align}\n \\textbf{c}^{(TT)}&={\\sum_{i}} \\alpha^{(T)}_i~{\\textbf{H}^{(T)}_i},\n \\\\\n \\textbf{c}^{(TA)}&={\\sum_{i}} \\mathbf{sg}(\\alpha^{(T)}_i)~{\\textbf{H}^{(A)}_i},\n \\\\\n \\textbf{c}^{(AA)}&={\\sum_{i}} \\alpha^{(A)}_i~{\\textbf{H}^{(A)}_i},\n \\\\\n \\textbf{c}^{(AT)}&={\\sum_{i}} \\mathbf{sg}(\\alpha^{(A)}_i)~{\\textbf{H}^{(T)}_i},\n\\end{align}\nwhere $\\textbf{q}^{(T)}$ and $\\textbf{q}^{(A)}$ are the global queries used to decide which parts of the aligned signals to focus on based on each modality's perspective.\nAdditionally, $\\textbf{c}^{(xy)}$s are context vectors, where the $x$ represents the modality used as a query and the $y$ represents the modality used as a key-value pair.\nTo prevent the CAN from learning attention based on the other modality, we introduce a function $\\mathbf{sg}$, the stop-gradient operator, as shown in equations (7) and (9).\nIt cuts the 
gradient flow through its argument during the backpropagation.\n\n\\subsection{Training objective}\nIn the training, the CAN makes three different predictions using the context vectors.\n\\begin{gather}\n \\hat{y}= \\text{softmax}(([{c}^{(TT)};{c}^{(TA)};{c}^{(AA)};{c}^{(AT)}])^\\intercal~\\textbf{W}+\\textbf{b}), \\\\[10pt]\n \\hat{y}^{(T)}= \\text{softmax}(({c}^{(TT)})^\\intercal~\\textbf{W}^{(T)}+\\textbf{b}^{(T)}~),\\\\[10pt]\n \\hat{y}^{(A)}= \\text{softmax}(({c}^{(AA)})^\\intercal~\\textbf{W}^{(A)}+\\textbf{b}^{(A)}~),\n\\end{gather}\nwhere the $\\textbf{W}$s and $\\textbf{b}$s are trainable weights.\n$\\hat{y}$ is made based on all the context vectors and each $\\hat{y}^{(T)}$ and $\\hat{y}^{(A)}$ is made based on a context vector that uses either the text or the audio modality.\nUsing the predictions, we calculate loss terms as follows:\n\\begin{gather}\n \\mathcal{L}_{align}= CE(\\hat{y}^{(T)},~y) + CE(\\hat{y}^{(A)},~y),\\\\[10pt]\n \\mathcal{L}_{total}= CE(\\hat{y},~y) + \\alpha \\cdot \\mathcal{L}_{align},\n\\end{gather}\nwhere CE represents the cross-entropy loss, $y$ is the true emotion label, and $\\alpha$ is a weight for the additional loss term $\\mathcal{L}_{align}$, whose optimal value is found using the validation dataset.\nThe additional loss terms in $\\mathcal{L}_{align}$ are added to help the global attention better attend to the salient features based on each modality.\n\\begin{gather}\n \\hat{y}^\\text{final}=(\\hat{y})\\cdot(\\hat{y}^{(T)})^\\alpha \\cdot(\\hat{y}^{(A)})^\\alpha\n\\end{gather}\nAfter the training, the final prediction $\\hat{y}^\\text{final}$ is calculated following equation (15).\n\n\n\n\\section{Experiments}\nIn this section, we describe the experimental setup and the results of the experiments conducted on the IEMOCAP dataset.\nFirst, we compare the CAN to other SER models for the weighted accuracy (\\textbf{WA}) and the unweighted accuracy (\\textbf{UA}), where the CAN shows the best performance.\nIn addition, we conduct several 
analyses of our model to see how each component described in Section \\ref{section:algorithm} affects the performance of the CAN.\n\n\\subsection{Dataset}\nIn the experiments, we use the Interactive Emotional Dyadic Motion Capture (IEMOCAP) \\cite{busso2008iemocap} dataset, which provides the speech and text data including the alignment information as represented in Table \\ref{tab:alignment}.\nEach utterance in the dataset is labeled with one of 10 emotion classes; we do not use the classes with too few data instances (fear, disgust, other), so the final dataset contains 7,486 utterances in total (1,103 angry, 1,040 excited, 595 happy, 1,084 sad, 1,849 frustrated, 107 surprised and 1,708 neutral).\nIn the experiments, we perform 10-fold cross-validation, and in each validation, the total dataset is split 8:1:1 into training, validation, and test sets.\n\n\\subsection{Experimental setup}\n\\label{subsection:setup}\nFor the text input, we use the sequence of words in Table \\ref{tab:alignment}, and the 300-dimensional GloVe word vectors \\cite{pennington2014glove} are used as the embedding vectors.\nIn this step, we remove the special tokens such as `$\\left< s \\right>$', `$\\left< sil \\right>$', `$\\left< \/s \\right>$', and their durations are divided equally between the neighboring words.\nFor the audio input, we use the zero-padded MFCC segments obtained as described in Section \\ref{subsection:audioprocessing}.\nIn the MFCC conversion, we use 40 MFCC coefficients, and the frames are extracted while sliding a Hamming window with a 25 ms frame size and a 10 ms hop.\nWe use bidirectional LSTMs with 128 hidden units, each followed by a dropout layer with 0.3 dropout probability.\nFor the cross attention, multi-head global attention with four heads is used to view the inputs from various perspectives so as to enrich the aggregated information \\cite{vaswani2017attention}.\nDuring the training, we use the 
validation dataset as the criterion for early stopping with a patience of 10.\nWe use a batch size of 64 and the Adam optimizer \\cite{kingma2014adam} with a learning rate of 1e-3, and the gradients are clipped with a norm value of 1.0.\nThe weight of the additional loss term $\\alpha$ is set to 0.1, which is obtained from the cross-validation.\n\n\\subsection{Results}\n\\subsubsection{Performance comparison}\n\\input{Accuracy}\nTable~\\ref{tab:accuracy} shows the performance of the CAN and the other SER models. Each `TextModel' and `AudioModel' uses a single modality by encoding it with a simple bidirectional LSTM with the global attention following \\cite{mirsamadi2017automatic}. The other two multimodal models are \\cite{yoon2019speech} and \\cite{xu2019learning} proposed in the previous studies, where the attention weights are obtained based on the interaction between the audio and the text modalities.\nIn the experiments, we re-implement all the models and obtain the accuracy values as described in Section \\ref{subsection:setup}.\nAs Table~\\ref{tab:accuracy} shows, our CAN outperforms the other models for both \\textbf{WA} and \\textbf{UA}, including the previous state-of-the-art model \\cite{yoon2019speech}.\nTo analyze the causes of the performance gain, we conduct further experiments to see the effectiveness of the components in our methodology, which are described in the next sections.\n\n\n\\input{Segmentation}\n\\subsubsection{Segmentation policy}\nIn order to demonstrate the superiority of the aligned segmentation, we compare it to the segmentation where a 1-dimensional audio signal is segmented into segments of equal length, which has been widely used in the previous studies \\cite{sahoo2019segment, mao2019deep}.\nIn the experiment, the aligned segmentation outperforms the equal segmentation for both \\textbf{WA} and \\textbf{UA}.\nThe results in Table \\ref{tab:segmentation} imply that our aligned segmentation is indeed effective in 
combining the information in the cross attention.\n\n\\subsubsection{Ablation study}\n\\input{ablation}\nTo provide supporting evidence for the components of our model, we conduct ablation studies on four variant models, removing each of the components described in Section \\ref{section:algorithm}.\nWhen we remove the stop-gradient operator and the additional loss term $\\mathcal{L}_{align}$, the accuracy of the CAN decreases, and the worst performance for the \\textbf{WA} is observed when both components are removed.\nFurthermore, we even remove the whole cross attention in the CAN, where the prediction is based only on a concatenation of the $c^{\\text{(TT)}}$ and $c^{\\text{(AA)}}$ and the stop-gradient operator and the $\\mathcal{L}_{align}$ are not used.\nIn that case, the performance decreases even more compared to the other variants.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\n\nIn this paper, we propose a Cross Attention Network (CAN) for the Speech Emotion Recognition (SER) task.\nIt uses the cross attention to combine information from the aligned audio and text signals.\nInspired by the way humans recognize speech, we align the text and audio signals so that the CAN can regard the two modalities as having the same time resolution.\nIn the experiments conducted on the IEMOCAP dataset, the proposed system outperforms the state-of-the-art systems by relative margins of 2.66\\% and 3.18\\% for the weighted and unweighted accuracy.\nTo the best of our knowledge, this is the first study that demonstrates an improvement\nusing the aligned audio and text signals in SER.\nTo apply our system to the real-world scenario where only the speech signal is available, the text and alignment information must first be obtained, since the CAN requires them to work properly.\nIn future work, we plan to extend our research by integrating the CAN with an automatic speech recognition system which outputs the text and alignment information given a speech signal.\n\\section{Acknowledgments}\nK. 
Jung is with ASRI, Seoul National University, Korea.\nThis work was supported by the Ministry of Trade, Industry \\& Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10073144).\nThis research was a result of a study on the ``HPC Support'' Project, supported by the `Ministry of Science and ICT' and NIPA.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcorp b/data_all_eng_slimpj/shuffled/split2/finalzzcorp new file mode 100644 index 0000000000000000000000000000000000000000..7f0c7254ed7f67e5bbd9871a2953b46fca8133f5 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcorp @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn recent years significant progress has been achieved in the\nunderstanding of the mathematical structure of the correlation functions\nof the XXZ model and related integrable models.\nFirst of all the ground state correlation functions were studied.\nThey are completely defined through the quantum-mechanical density\nmatrix. An explicit expression for the density matrix of a finite\nsubchain of the infinite XXZ chain in the massive regime was first\nobtained by Jimbo, Miki, Miwa and Nakayashiki \\cite{JMMN92}. They\nexpressed the elements of the density matrix in terms of multiple\nintegrals. Subsequently, extensions of their formulae to the massless\nregime and to a non-vanishing longitudinal magnetic field were obtained\nin \\cite{JiMi96,KMT99b}.\n\nThen it was realized that the multiple integrals can be factorized\n\\cite{BoKo01} and that, utilizing the so-called reduced\nKnizhnik-Zamolodchikov equation, the factorized integrals can be\nwritten in a compact exponential form \\cite{BJMST04a,BJMST04b}. The\nlatter allows one to distinguish between an algebraic part and a physical\npart. 
The physical part is defined by a small number of transcendental \nfunctions, fixed by the one-point correlators and by the two-point\nneighbour correlators, which depend on the physical parameters like\nanisotropy, temperature, length of the chain, magnetic field, boundary\nconditions, etc. The algebraic part is related to the representation\ntheory of the symmetry algebra behind the model, namely the quantum\ngroup $U_q(\\widehat{\\mathfrak{sl}_2})$ in case of the XXZ chain.\n\nIn \\cite{BJMST06b} it was observed that the formula for the correlation\nfunctions looks nicer if the XXZ chain is regularized by introducing an\nadditional parameter, the disorder field~$\\a$. With this new parameter \nit was possible to express the density matrix in terms of special\nfermionic annihilation operators $\\mathbf{b}$ and $\\mathbf{c}$ acting not on states\nof the spin chain, but on the space of (quasi-) local operators on\nthese states. The annihilation operators appeared to be responsible for\nthe algebraic part. The physical part was represented by a transcendental\nfunction $\\om$ determined by a single integral. In \\cite{BJMST08app}\nthe dual fermionic creation operators $\\mathbf{b}^*$, $\\mathbf{c}^*$ and a bosonic\ncreation operator $\\mathbf{t}^*$ were constructed. These operators together\ngenerate a special basis of the space of quasi-local operators. Since\n$\\mathbf{b}^*$, $\\mathbf{c}^*$ are Fermi operators, Wick's theorem applies and\nexpectation values of products of $\\mathbf{b}^*$, $\\mathbf{c}^*$ and $\\mathbf{t}^*$ in an\nappropriately defined vacuum state can be written as determinants, very\nmuch as in the case of free fermions.\n\nThermodynamic properties of integrable lattice models can be\nstudied within the Suzuki-Trotter formalism by considering\nan auxiliary lattice with staggering in the so-called\nTrotter direction \\cite{Suzuki85}. 
The temperature appears as a result\nof a special limit when the extension of the lattice in\nTrotter direction becomes infinite. Physical quantities are\nexpressed in an efficient way through the solution to certain\nnon-linear integral equations \\cite{Kluemper92}. A detailed discussion\nof this issue and further references can, for instance, be found in\nthe book \\cite{thebook}. \n\nIn papers \\cite{GKS04a,GKS05} the Suzuki-Trotter formalism was used\nin order to generalize the multiple integrals to finite temperature.\nThen their factorization was probed for several examples of correlation\nfunctions, first for the XXX chain \\cite{BGKS06} and later for the\nXXZ chain \\cite{BGKS07,BDGKSW08}. Also a conjecture was formulated\nstating that the above-mentioned exponential form is valid with the\nsame fermionic operators (at least as long as they act on spin-reversal\ninvariant products of local operators) as for the ground state and\ntwo functions $\\om$, $\\om'$ obtained from an $\\a$-dependent function\nin the limit $\\a\\rightarrow 0$. \n\nUnfortunately, the formulae of \\cite{BGKS07,BDGKSW08} worked only in\nthis limit. The generalization to generic $\\a$ remained obscure. One of\nthe purposes of the present work is to add to the clarification of this\npoint, starting from a proper multiple integral representation with\ndisorder parameter $\\a$. 
Here, as we had to learn \\cite{Kitanine08up},\nthe crucial point is that the `Cauchy extraction trick', invented in\n\\cite{IKMT99} and described in more detail in \\cite{KKMST09a}, can be\napplied in the finite temperature case and also in the more general\nsituation of a finite lattice with inhomogeneities in Trotter direction.\n\nImportant new insight came from a recent paper \\cite{JMS08pp} by Jimbo,\nMiwa and Smirnov, where they suggested a purely algebraic\napproach to the problem of calculating the static correlation\nfunctions of the XXZ model.\nThe key idea of \\cite{JMS08pp} is to evaluate a linear functional\nrelated to the partition function within the fermionic basis constructed\nin \\cite{BJMST08app}.\nThe authors of \\cite{JMS08pp} work with a finite lattice, inhomogeneous\nin Trotter direction. In this situation they suggest a new and\nsurprising construction of the function $\\om$ depending on a magnetic\nfield and on the disorder parameter $\\a$.\n\nIn the present paper we discuss the relation of the work by Jimbo,\nMiwa and Smirnov to the approach using non-linear integral equations\nwhich at the moment seems more appropriate e.g.\\ for taking the\nTrotter limit (which was omitted in \\cite{JMS08pp}). In particular,\nwe present an alternative description of the function $\\om$ starting\nfrom the multiple integral and using the explicit factorization\nof the density matrix for two neighbouring lattice sites. We then\ngive a direct proof that our expression, though looking rather\ndifferent than that in \\cite{JMS08pp}, in fact describes the same \nfunction.\n\nAn inhomogeneous lattice in Trotter direction is very general and\nleaves many different options for the realization of physical correlation\nfunctions. 
Here we shall concentrate on two of them, the correlation\nfunctions of the infinite XXZ chain at finite temperature and magnetic\nfield (temperature case) and the ground state correlation functions\nof a finite chain with twisted periodic boundary conditions (finite\nlength case). Both cases can be treated to a very large extent\nsimultaneously. They are only distinct in that a different distribution\nof inhomogeneity parameters is required and in that for the finite\ntemperature case the Trotter limit has to be performed. Note that\ninstead of the XXZ Hamiltonian we could consider combinations of\nconserved quantities obtained from the transfer matrix of the six-vertex\nmodel within the formalism of non-linear integral equations. For the\nbulk thermodynamic properties this issue was recently studied in\n\\cite{TrKl07}.\n\nThe paper is organized as follows. In the next section we define our\nbasic objects and recall some of their properties. In the third\nsection we show the multiple integral formula for the elements of\nthe ($\\a$-twisted) density matrix for a sub-chain of length~$m$.\nIn section four we consider the simplest case $m = 1$. The fifth\nsection is devoted to applying the factorization technique to the\ndouble integrals for $m = 2$. In section~\\ref{sec:om} we introduce\nthe function $\\om$. We discuss its properties and the relation to \nits realization by Jimbo, Miwa and Smirnov. The content of\nsection~\\ref{sec:top} is some preliminary work on the construction\nof an operator $\\mathbf{t}$, dual to the creation operator $\\mathbf{t}^*$, which\nshould appear in the construction of an exponential form for finite\ntemperature and finite disorder parameter. In the appendices we\nprovide a derivation of the multiple integral formulae, we discuss\nthe limit $\\a \\rightarrow 0$, and we compare with the results of the\npapers \\cite{BGKS07,BDGKSW08}. 
\n\n\\section{Density matrix and correlation functions}\nThe XXZ quantum spin chain is defined by the Hamiltonian\n\\begin{equation} \\label{xxzham}\n H_N (\\k) = J \\sum_{j=1}^N \\bigl( \\s_{j-1}^x \\s_j^x\n + \\s_{j-1}^y \\s_j^y + \\D (\\s_{j-1}^z \\s_j^z - 1) \\bigr) \\, ,\n\\end{equation}\nwritten here in terms of the Pauli matrices $\\s^x = e_-^+ + e_+^-$,\n$\\s^y = {\\rm i} (e_-^+ - e_+^-)$, $\\s^z = e_+^+ - e_-^-$ (where the $e^\\a_\\be$\nare the elements of the gl(2) standard basis). The two real parameters\n$J$ and $\\D$ control the ground state phase diagram of the model.\nFor simplicity of notation we shall restrict ourselves in the following\nto the critical phase $J > 0$, $|\\D| < 1$. Note, however, that the\nresults of this work can be easily extended to the off-critical\nantiferromagnetic phase $\\D > 1$. We shall also assume without further\nmentioning that the number of lattice sites $N$ is even.\n\nTo fully specify $H_N (\\k)$ we have to define the boundary conditions.\nWe shall consider twisted periodic boundary conditions when dealing\nwith the ground state of the finite chain. Then $H_N (\\k)$\ndepends on an additional parameter $\\k$ through\n\\begin{equation} \\label{twistbound}\n \\begin{pmatrix} {e_0}_+^+ & {e_0}_-^+ \\\\\n {e_0}_+^- & {e_0}_-^- \\end{pmatrix} =\n q^{- \\k \\s^z} \\begin{pmatrix} {e_N}_+^+ & {e_N}_-^+ \\\\\n {e_N}_+^- & {e_N}_-^-\n\t\t \\end{pmatrix} q^{\\k \\s^z} \\, .\n\\end{equation}\nHere $q$ is related to $\\D$ as $\\D = (q + q^{-1})\/2$. For the finite\ntemperature case we shall assume periodic boundary conditions for\nthe Hamiltonian. Nevertheless the same parameter $\\k$ will appear\nin that case as a twist parameter of the quantum transfer matrix, then\nhaving a rather different physical meaning, namely that of an external\nmagnetic field coupling to the spins via a Zeeman term. 
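For readers who want to experiment numerically, the Hamiltonian (\ref{xxzham}) is easy to realize by exact diagonalization for small $N$. The following sketch (NumPy only; the values $N = 4$, $J = 1$, $\Delta = 1/2$ are arbitrary illustrative choices, not taken from the text) builds the periodic Hamiltonian $H_N(0)$ and checks two properties used throughout: hermiticity and conservation of the total spin $S^z$:

```python
import numpy as np

# Pauli matrices and single-site identity
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    # Kronecker-embed a single-site operator at position `site` of an n-site chain
    mats = [id2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def xxz_hamiltonian(n, J, Delta):
    # H_N(0) = J * sum_j ( sx_j sx_{j+1} + sy_j sy_{j+1} + Delta*(sz_j sz_{j+1} - 1) )
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for j in range(n):
        k = (j + 1) % n  # periodic boundary conditions
        H += J * (embed(sx, j, n) @ embed(sx, k, n)
                  + embed(sy, j, n) @ embed(sy, k, n)
                  + Delta * (embed(sz, j, n) @ embed(sz, k, n) - np.eye(dim)))
    return H

N, J, Delta = 4, 1.0, 0.5
H = xxz_hamiltonian(N, J, Delta)
Sz = 0.5 * sum(embed(sz, j, N) for j in range(N))  # conserved total spin

assert np.allclose(H, H.conj().T)   # H is Hermitian
assert np.allclose(H @ Sz, Sz @ H)  # [H, S^z] = 0
```

The conservation of $S^z$ is what makes the Zeeman term in the finite temperature case compatible with the transfer matrix construction.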
We shall elaborate on this\nbelow.\n\nThe integrable structure behind the Hamiltonian (\\ref{xxzham}) is\ngenerated by the trigonometric $R$-matrix of the six-vertex model\n\\cite{Babook},\n\\begin{align} \\label{rxxz}\n R(\\la) & = \\begin{pmatrix}\n 1 & 0 & 0 & 0 \\\\\n\t\t 0 & b(\\la) & c(\\la) & 0 \\\\\n\t\t 0 & c(\\la) & b(\\la) & 0 \\\\\n\t\t 0 & 0 & 0 & 1\n\t\t\\end{pmatrix} \\, , \\\\[2ex]\n b(\\la) & = \\frac{{\\rm sh}(\\la)}{{\\rm sh}(\\la + \\h)} \\, , \\qd\n c(\\la) = \\frac{{\\rm sh}(\\h)}{{\\rm sh}(\\la + \\h)} \\, , \\label{defbc}\n\\end{align}\nacting on ${\\mathbb C}^2 \\otimes {\\mathbb C}^2$. As presented here it\nsatisfies the Yang-Baxter equation in additive form. To facilitate the\ncomparison with \\cite{BJMST08app,JMS08pp}, where the multiplicative form\nwas preferred, we set $q = {\\rm e}^\\h$ and $\\z = {\\rm e}^\\la$. Then for arbitrary\ncomplex inhomogeneity parameters $\\be_j$, $j = 1, \\dots, N$, the definition\n\\begin{equation} \\label{defmono}\n T_a (\\z) = R_{a, N} (\\la - \\be_N) \\dots R_{a, 1} (\\la - \\be_1)\n\\end{equation}\nof the monodromy matrix makes sense, where, as usual, the indices\n $1, \\dots,N$ refer to the spin chain while $a$ refers to an additional\nsite defining the so-called auxiliary space. We also set $T_a (\\z, \\k) =\nT_a (\\z) q^{\\k \\s^z_a}$ and introduce the twisted transfer matrix\n\\begin{equation}\n t(\\z, \\k) = {\\rm tr}_a \\bigl( T_a (\\z, \\k) \\bigr) \\, .\n\\end{equation}\n\nIn \\cite{JMS08pp} a six-vertex model with $N$ horizontal rows and\nan arbitrary distribution of the inhomogeneities $\\tau_j = {\\rm e}^{\\be_j}$\non these rows was considered. Here we would like to point out that two\nspecific distributions are of particular interest in physical\napplications. 
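The additive Yang-Baxter equation satisfied by (\ref{rxxz}), $R_{12}(\lambda - \mu)\, R_{13}(\lambda)\, R_{23}(\mu) = R_{23}(\mu)\, R_{13}(\lambda)\, R_{12}(\lambda - \mu)$ on ${\mathbb C}^2 \otimes {\mathbb C}^2 \otimes {\mathbb C}^2$, can be verified numerically, which is a convenient sanity check when implementing the monodromy matrix (\ref{defmono}). A minimal sketch (the sample values of $\lambda$, $\mu$, $\eta$ are arbitrary):

```python
import numpy as np

def R(lam, eta):
    # Six-vertex R-matrix of eq. (rxxz), normalized so that a(lam) = 1
    b = np.sinh(lam) / np.sinh(lam + eta)
    c = np.sinh(eta) / np.sinh(lam + eta)
    return np.array([[1., 0., 0., 0.],
                     [0., b,  c,  0.],
                     [0., c,  b,  0.],
                     [0., 0., 0., 1.]])

I2 = np.eye(2)
P = np.eye(4)[[0, 2, 1, 3]]  # permutation swapping the two C^2 factors

def R12(lam, eta): return np.kron(R(lam, eta), I2)
def R23(lam, eta): return np.kron(I2, R(lam, eta))
def R13(lam, eta):
    # conjugate R12 by the swap of factors 2 and 3
    P23 = np.kron(I2, P)
    return P23 @ R12(lam, eta) @ P23

lam, mu, eta = 0.3, 0.7, 0.11  # arbitrary sample values

# At lam = 0 the R-matrix degenerates to the permutation operator
assert np.allclose(R(0.0, eta), P)

lhs = R12(lam - mu, eta) @ R13(lam, eta) @ R23(mu, eta)
rhs = R23(mu, eta) @ R13(lam, eta) @ R12(lam - mu, eta)
assert np.allclose(lhs, rhs)  # Yang-Baxter equation holds
```

The overall normalization of $R$ drops out of the Yang-Baxter equation, so the check works for (\ref{rxxz}) exactly as written.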
Moreover, in both cases the special functions that enter\nthe representations of the transfer matrix eigenvalues and correlation\nfunctions have nice descriptions in terms of solutions of linear and\nnon-linear integral equations.\n\nThe first case relates to the ground state of the Hamiltonian\n(\\ref{xxzham}). We call it the finite length case. In this case we choose\n\\begin{equation} \\label{tdistr}\n \\be_j = \\h\/2 \\, , \\qd j = 1, \\dots, N \\, .\n\\end{equation}\nThen\n\\begin{equation} \\label{hamfromt}\n H_N (\\k) = 2 J {\\rm sh}(\\h) \\,\n \\partial_\\la \\ln \\bigl( t^{-1}(q^{\\frac{1}{2}},\\k) \\, t(\\z,\\k) \\bigr)%\n\t \\big|_{\\la = \\h\/2} \\, ,\n\\end{equation}\nwith twisted boundary conditions (\\ref{twistbound}) if we identify\n$\\D = \\ch (\\h)$. The critical regime $|\\D| < 1$ corresponds to purely\nimaginary $\\h = {\\rm i} \\g$, $\\g \\in [0, \\p)$. In this case the physical twist\nangle or flux $\\PH \\in [0,2\\p)$ is $\\PH = - \\k \\g$, whence $\\k$ should\nbe real. If we stick to the vertex model picture of \\cite{JMS08pp},\nthen $t(\\z,\\k)$ is the vertical or column-to-column transfer matrix in\nthis case.\n\nThe second case is determined by an alternating choice\n\\begin{equation} \\label{qtmdistr}\n \\be_j = \\begin{cases} \\be_{2j-1} = \\h - \\frac{\\be}{N} \\\\\n \\be_{2j} = \\frac{\\be}{N}\n \\end{cases} \\, , \\qd j = 1, \\dots, N\/2 \\, ,\n\\end{equation}\nof inhomogeneity parameters. This case will be called the finite\ntemperature case as it relates to the quantum transfer matrix, whose\nmonodromy matrix is\n\\begin{multline}\n T^{QTM}_a (\\z) = \\\\ R_{a, N} (\\la - \\be\/N)\n R_{N-1, a}^{t_1} (- \\be\/N - \\la) \\dots\n R_{a, 2} (\\la - \\be\/N)\n R_{1, a}^{t_1} (- \\be\/N - \\la) \\, .\n\\end{multline}\nHere the superscript `$t_1$' indicates transposition with respect to the\nfirst space. 
In fact, setting $Y = \\prod_{j=1}^{N\/2} \\s_{2j-1}^y$ and\nusing the crossing symmetry\n\\begin{equation}\n \\s_j^y R_{a, j} (\\la - \\h) \\s_j^y = b(\\la - \\h) R_{j, a}^{t_1} (- \\la)\n\\end{equation}\nof the $R$-matrix we find that\n\\begin{equation} \\label{ttqtm}\n T^{QTM}_a (\\z) = Y T_a (\\z) Y\n \\prod_{j=1}^{N\/2} \\frac{1}{b(\\la - \\be_{2j-1})} \\, .\n\\end{equation}\nThe quantum transfer matrix is by definition\n\\begin{equation}\n t^{QTM} (\\z,\\k) = {\\rm tr}_a \\bigl( T^{QTM}_a (\\z, \\k) \\bigr) \\, ,\n\\end{equation}\nwhere $T^{QTM}_a (\\z, \\k) = T^{QTM}_a (\\z) q^{\\k \\s^z_a}$.\n\nAgain, within the vertex model picture, $t^{QTM} (\\z,\\k)$, or $t (\\z,\\k)$\nwith the choice (\\ref{qtmdistr}) of inhomogeneity parameters, corresponds\nto the vertical transfer matrix. There is an important difference,\nthough, that has been explained on several occasions \\cite{Kluemper92,%\nGKS04a}. In the finite length case the Hamiltonian can be derived\nfrom the vertical transfer matrix. In particular, the vertical\ntransfer matrix and the Hamiltonian (\\ref{xxzham}) have the same\neigenstates. In the finite temperature case, on the other hand,\nwith a lattice which is homogeneous in horizontal direction, say, the\nHamiltonian is related to the horizontal transfer matrix with purely\nperiodic boundary conditions. It is then also periodic and will be\ndenoted $H_L (0)$, where $L$ is the horizontal extension of the lattice.\nIn this case the vertical transfer matrix eigenstates are different\nfrom those of the Hamiltonian. In particular, the eigenstate corresponding\nto the eigenvalue of largest modulus determines the state of thermodynamic\nequilibrium in the thermodynamic limit, i.e.\\ the free energy of the XXZ chain\nand all its static correlation functions \\cite{GKS04a}. Also the\nphysical interpretation of the parameter $\\k$ is rather different in\nthis case. 
It corresponds to a magnetic field coupling to the spin chain\nthrough a Zeeman term (see e.g.\\ \\cite{GKS04a}).\n\nUsing a lattice of finite extension $L$ in horizontal direction we can\nexpress the partition function of the homogeneous XXZ chain of length $L$\nas\n\\begin{equation} \\label{zustand}\n Z_L = {\\rm tr}_{1, \\dots, L} {\\rm e}^{- \\be H_L (0) + h S_{[1,L]}\/T}\n = \\lim_{N \\rightarrow \\infty} {\\rm tr}_{1, \\dots, N}\n\t \\bigl( t^{QTM} (1,h\/(2 \\h T)) \\bigr)^L \\, .\n\\end{equation}\nHere $T$ is the temperature and $h$ is a longitudinal magnetic field.\n$\\be$ must be chosen as $\\be = 2J {\\rm sh}(\\h)\/T$. Furthermore\n\\begin{equation}\n S_{[1,L]} = \\tst{\\frac{1}{2}} \\sum_{j=1}^L \\s_j^z\n\\end{equation}\nis the conserved $z$-component of the total spin. Equation (\\ref{zustand})\nbecomes efficient in the thermodynamic limit $L \\rightarrow \\infty$,\nsince then a single eigenvalue $\\La^{QTM} (1, \\k)$ of $t^{QTM} (1,\\k)$\nof largest modulus dominates the large-$L$ asymptotics of $Z_L$ in the\nTrotter limit $N \\rightarrow \\infty$. We shall refer to this eigenvalue\nas the dominant one.\n\nWe would like to remark that in our understanding the quantum transfer\nmatrix is, in general, more appropriate for studying integrable spin\nmodels on the infinite lattice than the usual transfer matrix. In\ngeneral there is no crossing symmetry, and the quantum transfer matrix\nand the usual transfer matrix are not related by a similarity\ntransformation like in (\\ref{ttqtm}). Also within the quantum transfer\nmatrix formulation the density matrix directly takes its `natural form' in\nterms of monodromy matrix elements (see below). No solution of a quantum\ninverse problem as in \\cite{KMT99b} is required. 
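The dominance of a single eigenvalue in (\ref{zustand}) is just the Perron-Frobenius mechanism: for any matrix with a unique eigenvalue of largest modulus, ${\rm tr}\, t^L$ is controlled by $\Lambda_{\rm max}^L$ as $L \to \infty$. A toy illustration (the $2 \times 2$ matrix below is purely illustrative, not the actual quantum transfer matrix):

```python
import numpy as np

# Toy "transfer matrix" with known spectrum {3, 1}; a positive matrix
# has a unique dominant eigenvalue by the Perron-Frobenius theorem.
t = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam_max = 3.0

# tr(t^L) = 3^L + 1^L, so tr(t^L) / lam_max^L -> 1 exponentially fast in L;
# equivalently, log tr(t^L) / L -> log(lam_max): the dominant eigenvalue
# alone fixes the free energy per site.
errs = [abs(np.trace(np.linalg.matrix_power(t, L)) / lam_max**L - 1.0)
        for L in (5, 15, 30)]

assert errs[0] < 1e-2
assert errs[2] < errs[1] < errs[0]  # error shrinks monotonically with L
```

The same argument applied to $t^{QTM}(1, \kappa)$ is what reduces the free energy of the infinite chain to a single dominant eigenvalue in the Trotter limit.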
In our particular case\nwe do have the crossing symmetry, and the quantum transfer matrix\nand the usual transfer matrix with staggered inhomogeneities\n(\\ref{qtmdistr}) give an equivalent description of the density matrix\n(see below). Still, the largest eigenvalue of $t(\\z,\\k)$ with the\ndistribution (\\ref{qtmdistr}) of inhomogeneities diverges in the Trotter\nlimit as can be seen from (\\ref{ttqtm}).\n\nLet us come back to the situation of arbitrarily distributed\ninhomogeneity parameters $\\be_j$. Following \\cite{JMS08pp} we shall\nassume that for a certain spectral parameter $\\z_0$ and any\n$\\k \\in {\\mathbb C}$ the transfer matrix $t(\\z_0,\\k)$ has a unique\neigenvector $|\\k\\>$ with eigenvalue $\\La (\\z_0, \\k)$ of largest modulus.\nThis is certainly true for the two special cases considered above.\nIn the finite length case $\\z_0 = q^{1\/2}$, while $\\z_0 = 1$ in the\nfinite temperature case. We fix a set of `vertical inhomogeneity\nparameters' $\\n_1, \\dots, \\n_m$ and set $\\x_j = {\\rm e}^{\\n_j}$. 
Then we\ncan define the object of our main interest, the density matrix\nwith matrix elements\n\\begin{equation} \\label{defdens}\n {D_N}^{\\e_1' \\dots \\e_m'}_{\\e_1 \\dots \\e_m}\n (\\x_1, \\dots, \\x_m|\\k, \\a) =\n\t\\frac{\\<\\k + \\a| T^{\\e_1'}_{\\e_1} (\\x_1, \\k) \\dots\n\t T^{\\e_m'}_{\\e_m} (\\x_m, \\k) |\\k\\>}\n {\\<\\k + \\a|\\prod_{j=1}^m t (\\x_j,\\k)|\\k\\>} \\, ,\n\\end{equation}\nwhich is in fact an inhomogeneous and `$\\a$-twisted' version of the\nusual density matrix.\n\nIn the finite length case (\\ref{tdistr}) with twist angle $\\PH$ the\nexpectation value in the ground state $|\\PH\\>$ of any operator\n$X_{[1,m]}$ acting non-trivially only on the first $m$ lattice\nsites is \\cite{DGHK07}\n\\begin{equation}\n \\frac{\\<\\PH|X_{[1,m]}|\\PH\\>}{\\<\\PH|\\PH\\>} =\n \\lim_{\\a \\rightarrow 0}\\, \\lim_{\\n_j \\rightarrow \\h\/2}\n\t{\\rm tr}_{1, \\dots, m} \\bigl\\{\n\t {D_N} (\\x_1, \\dots, \\x_m|- \\PH\/\\g, \\a)\\, X_{[1,m]}\n\t \\bigr\\} \\, .\n\\end{equation}\nIn the finite temperature case (\\ref{qtmdistr}) we use that the\nright hand side of (\\ref{defdens}) stays form invariant under\nthe transformation (\\ref{ttqtm}) which replaces all objects\nrelating to the ordinary transfer matrix with the corresponding\nobjects relating to the quantum transfer matrix. 
Hence, from\n\\cite{GKS05},\n\\begin{multline}\n \\<X_{[1,m]}\\>_{T, h} =\n \\lim_{L \\rightarrow \\infty}\n\t\\frac{{\\rm tr}_{1, \\dots, L} \\bigl\\{ {\\rm e}^{- \\be H_L (0) + h S_{[1,L]}\/T}%\n\t X_{[1,m]} \\bigr\\}}{Z_L} \\\\[1ex] =\n \\lim_{\\a \\rightarrow 0}\\, \\lim_{\\n_j \\rightarrow 0}\n \\lim_{N \\rightarrow \\infty}\n\t{\\rm tr}_{1, \\dots, m} \\bigl\\{\n\t {D_N} (\\x_1, \\dots, \\x_m|h\/(2\\h T), \\a)\\, X_{[1,m]}\n\t \\bigr\\} \\, .\n\\end{multline}\n\nThe density matrix (\\ref{defdens}) allows for reduction from the\nleft and from the right expressed by\n\\begin{subequations}\n\\label{redu}\n\\begin{align}\n {\\rm tr}_1 \\bigl\\{ D_N (\\x_1, \\dots, \\x_m|\\k, \\a) q^{\\a \\s_1^z} \\bigr\\} & =\n \\r (\\x_1) D_N (\\x_2, \\dots, \\x_m|\\k, \\a) \\, , \\\\[1ex]\n {\\rm tr}_m \\bigl\\{ D_N (\\x_1, \\dots, \\x_m|\\k, \\a) \\bigr\\} & =\n D_N (\\x_1, \\dots, \\x_{m-1}|\\k, \\a) \\, ,\n\\label{reductionD}\n\\end{align}\n\\end{subequations}\nwhere\n\\begin{equation} \\label{defrho}\n \\r (\\z) = \\frac{\\La (\\z, \\k + \\a)}{\\La (\\z, \\k)} \\, .\n\\end{equation}\nThe function $\\r$ plays an important role in \\cite{JMS08pp}. As we\nshall see below it is also important for the formulation of a\nmultiple integral formula for the density matrix and is the only\nnon-trivial one-point function for finite $\\a$. 
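The small-$\\a$ behaviour of $\\r$ can be made explicit; here is a short\nsketch, assuming only that the dominant eigenvalue is differentiable with\nrespect to the twist parameter,\n\\begin{equation}\n \\r (\\z) = \\frac{\\La (\\z, \\k + \\a)}{\\La (\\z, \\k)}\n = \\exp \\bigl\\{ \\a \\, \\partial_\\k \\ln \\La (\\z, \\k)\n\t + {\\cal O} (\\a^2) \\bigr\\}\n = 1 + \\a \\, \\partial_\\k \\ln \\La (\\z, \\k) + {\\cal O} (\\a^2) \\, .\n\\end{equation}\nIn particular, at $\\z = 1$ and $\\k = h\/(2 \\h T)$ the free energy per\nlattice site is $f(T, h) = - T \\ln \\La^{QTM} (1, \\k)$ in the Trotter\nlimit, so that $\\partial_\\k \\ln \\La^{QTM} = 2 \\h T \\, \\partial_h\n\\ln \\La^{QTM} = 2 \\h \\, m(T, h)$.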
In the temperature case\nwith $\\k = h\/(2\\h T)$ we have\n\\enlargethispage{3ex}\n\\begin{equation}\n \\r(1) = 1 + 2 \\h \\a \\, m(T, h) + {\\cal O} (\\a^2) \\, ,\n\\end{equation}\nwhere $m(T, h)$ is the magnetization.\n\nIn the temperature case as well as in the finite length case and\nin certain inhomogeneous generalizations of both cases the function\n$\\r$ can be expressed in terms of an integral over certain auxiliary\nfunctions (see e.g.\\ \\cite{GKS04a,DGHK07}),\n\\begin{equation} \\label{rhoint}\n \\r(\\z) = q^\\a \\exp \\biggl\\{\n \\int_{\\cal C} \\frac{{\\rm d} \\m}{2 \\p {\\rm i}} \\: {\\rm e} (\\m - \\la)\n\t\t \\ln \\biggl[ \\frac{1 + \\mathfrak{a} (\\m, \\k + \\a)}\n\t\t {1 + \\mathfrak{a} (\\m, \\k )} \\biggr] \\biggr\\}\n\t\t\t\t \\, .\n\\end{equation}\nHere ${\\rm e}(\\la)$ is the `bare energy'\n\\begin{equation}\n {\\rm e}(\\la) = \\cth(\\la) - \\cth(\\la + \\h)\n\\end{equation}\nand $\\mathfrak{a}(\\la, \\k)$ is the solution of a non-linear integral equation\nwith integration kernel\n\\begin{equation} \\label{kernel}\n K(\\la) = \\cth(\\la - \\h) - \\cth(\\la + \\h) \\, .\n\\end{equation}\nIn the finite length case this equation reads\n\\begin{multline} \\label{nliefin}\n \\ln (\\mathfrak{a} (\\la, \\k)) = \\\\ (N - 2\\k) \\h + \\sum_{j=1}^N\n\t\t \\ln \\biggl[ \\frac{{\\rm sh} (\\la - \\be_j)}\n\t\t {{\\rm sh} (\\la - \\be_j + \\h)} \\biggr]\n - \\int_{\\cal C} \\frac{{\\rm d} \\m}{2 \\p {\\rm i}}\n\t\t\t K(\\la - \\m) \\ln (1 + \\mathfrak{a} (\\m, \\k )) \\, .\n\\end{multline}\nEquations (\\ref{rhoint}) and (\\ref{nliefin}) are still valid if the\n$\\be_j$ are not precisely those of equation (\\ref{tdistr}), but are close\nto $\\h\/2$ with $\\Im \\be_j = \\g\/2$. 
The contour of integration to be used\nin (\\ref{rhoint}) and (\\ref{nliefin}) is shown in figure \\ref{fig:cont}.\n\\begin{figure}\n \\centering\n \\includegraphics{cont10.eps}\n \\caption{\\label{fig:cont} The canonical contour ${\\cal C}$ surrounds\n the real axis in a counterclockwise manner inside the\n strip $- \\frac{\\g}{2} < \\Im \\la < \\frac{\\g}{2}$.} \n\\end{figure}\nIn the temperature case the non-linear integral equation has a similar\nstructure, but the driving term is different. Suppose that for $j = 1,\n\\dots, N\/2$ the $\\be_{2j-1}$ are close to $\\h$, whereas the $\\be_{2j}$\nare close to $0$. Then\n\\begin{multline} \\label{nlietem}\n \\ln (\\mathfrak{a} (\\la, \\k)) = - 2 \\k \\h \\\\ + \\sum_{j=1}^{N\/2}\n\t \\ln \\biggl[ \\frac{{\\rm sh}(\\la - \\be_{2j})\n\t {\\rm sh}(\\la - \\be_{2j-1} + 2\\h)}\n\t\t\t {{\\rm sh} (\\la - \\be_{2j} + \\h)\n\t\t\t {\\rm sh}(\\la - \\be_{2j-1} + \\h)} \\biggr]\n - \\int_{\\cal C} \\frac{{\\rm d} \\m}{2 \\p {\\rm i}}\n\t\t\t K(\\la - \\m) \\ln (1 + \\mathfrak{a} (\\m, \\k )) \\, .\n\\end{multline}\n\nWe presented both equations (\\ref{nliefin}) and (\\ref{nlietem}) in\ninhomogeneous form, since we shall need this later, when comparing\nwith \\cite{JMS08pp}. Note, however, that the homogeneous limit is\ntrivial in both cases and that, moreover, the Trotter limit\ncan be performed in (\\ref{nlietem}). 
Then\n\\begin{equation} \\label{nliefinhom}\n \\ln (\\mathfrak{a} (\\la, \\k)) = (N - 2\\k) \\h\n + N \\ln \\biggl[ \\frac{{\\rm sh} (\\la - \\h\/2)}\n\t {{\\rm sh} (\\la + \\h\/2)} \\biggr]\n - \\int_{\\cal C} \\frac{{\\rm d} \\m}{2 \\p {\\rm i}}\n\t\t K(\\la - \\m) \\ln (1 + \\mathfrak{a} (\\m, \\k ))\n\\end{equation}\nin the finite length case and\n\\begin{equation} \\label{nlietemhom}\n \\ln (\\mathfrak{a} (\\la, \\k)) = - 2\\k \\h - \\frac{2J {\\rm sh}(\\h) {\\rm e} (\\la)}{T}\n - \\int_{\\cal C} \\frac{{\\rm d} \\m}{2 \\p {\\rm i}}\n\t\t K(\\la - \\m) \\ln (1 + \\mathfrak{a} (\\m, \\k ))\n\\end{equation}\nin the temperature case and in the Trotter limit. Equations\n(\\ref{nliefinhom}) and (\\ref{nlietemhom}) are what we call the\n$\\mathfrak{a}$-form of the non-linear integral equation. There is another\nso-called $\\mathfrak{b} \\overline{\\mathfrak{b}}$-form \\cite{Kluemper92,DGHK07} which is more\nconvenient for an accurate calculation of the numerical values\nof the functions.\n\n\\section{The multiple integral representation of the density matrix}\n\\label{sec:multint}\nIn appendix \\ref{app:dermult} we derive the following multiple integral\nrepresentation for the elements of the density matrix.\n\\begin{multline} \\label{multint}\n {D_N}^{\\e_1' \\dots \\e_m'}_{\\e_1 \\dots \\e_m}\n (\\x_1, \\dots, \\x_m|\\k, \\a) =\n\t \\biggl[ \\prod_{j=1}^p \\int_{\\cal C} {\\rm d} m(\\la_j) \\:\n\t F^+_{\\ell_j} (\\la_j) \\biggr]\n\t \\biggl[ \\prod_{j=p+1}^m \\int_{\\cal C} {\\rm d} \\overline{m}(\\la_j) \\:\n\t F^-_{\\ell_j} (\\la_j) \\biggr] \\\\[1ex]\n \\frac{\\det_{j, k = 1, \\dots, m} \\bigl[- G(\\la_j, \\n_k) \\bigr]}\n\t {\\prod_{1 \\le j < k \\le m} {\\rm sh}(\\la_j - \\la_k - \\h)\n\t\t {\\rm sh}(\\n_k - \\n_j)} \\, 
,\n\\end{multline}\nwhere we have used the notation\n\\begin{align}\n {\\rm d} m(\\la) &\n = \\frac{{\\rm d} \\la}{2 \\p {\\rm i} \\, \\r(\\z) (1 + \\mathfrak{a} (\\la, \\k))} \\, , \\qd\n {\\rm d} \\overline{m} (\\la)\n = \\mathfrak{a} (\\la, \\k) {\\rm d} m(\\la) \\, , \\\\[2ex] \\notag\n F_{\\ell_j}^\\pm (\\la) &\n = \\prod_{k=1}^{\\ell_j - 1} {\\rm sh}(\\la - \\n_k)\n\t \\prod_{k=\\ell_j + 1}^m {\\rm sh}(\\la - \\n_k \\mp \\h) \\, , \\qd\n \\ell_j = \\begin{cases}\n \\e_j^+ & j = 1, \\dots, p \\\\\n\t\t \\e_{m - j + 1}^- & j = p + 1, \\dots, m\n \\end{cases}\n\\end{align}\nwith $\\e_j^+$ the $j$th plus sign in the sequence $(\\e_j)_{j=1}^m$, $\\e_j^-$\nthe $j$th minus sign in the sequence $(\\e_j')_{j=1}^m$ and $p$ the\nnumber of plus signs in $(\\e_j)_{j=1}^m$. The function $G$ is new here.\nIt is defined as the solution of the linear integral equation\n\\begin{equation} \\label{newg}\n G(\\la, \\n) = q^{-\\a} \\cth(\\la - \\n - \\h) - \\r (\\x) \\cth (\\la - \\n) \n + \\int_{\\cal C} {\\rm d} m(\\m) K_\\a (\\la - \\m) G(\\m, \\n) \\, ,\n\\end{equation}\nwhere $\\x = {\\rm e}^\\n$, and the kernel\n\\begin{equation}\n K_\\a (\\la) = q^{- \\a} \\cth (\\la - \\h) - q^\\a \\cth (\\la + \\h)\n\\end{equation}\nis a deformed version of (\\ref{kernel}).\n\nEquation (\\ref{multint}) is a generalization to finite $\\a$ of the multiple\nintegral formulae first derived in \\cite{GKS05,DGHK07}. 
To simplify\nthe notation we shall sometimes suppress the dependence of the density\nmatrix elements on $\\k$ and $\\a$.\n\\section{The case $m = 1$}\nFor $m = 1$ there are only two non-vanishing density matrix elements.\nThey are related to the function $\\r$ by the reduction relations\n(\\ref{redu}) which imply that\n\\begin{equation}\n \\begin{pmatrix} D^+_+ (\\x) \\\\ D^-_- (\\x) \\end{pmatrix}\n = \\frac{1}{q^\\a - q^{-\\a}}\n\t \\begin{pmatrix}\n\t - q^{-\\a} & \\mspace{14.mu} 1 \\\\ q^\\a & - 1\n\t \\end{pmatrix}\n\t \\begin{pmatrix} 1 \\\\ \\r(\\x) \\end{pmatrix} \\, .\n\\end{equation}\nWhen we insert equation (\\ref{multint}) for $m = 1$ here, we do not\nobtain an independent equation, but rather an interesting identity\nfor $\\r$ (recall that $\\r$ appears in the measure),\n\\begin{equation} \\label{grhoid}\n \\r (\\x) = q^{-\\a} - (q^\\a - q^{-\\a}) \\int_{\\cal C} {\\rm d} m(\\m) G(\\m, \\n)\n \\, .\n\\end{equation}\nIt allows us to calculate the asymptotic behaviour of the\nfunction $G$,\n\\begin{equation}\n \\lim_{{\\rm Re\\,} \\la \\rightarrow \\pm \\infty} G(\\la, \\n) = 0 \\, .\n\\end{equation}\n\\section{Factorization of the density matrix for $m = 2$}\nThe factorization of the multiple integrals for the ground state\ndensity matrix was discovered in \\cite{BoKo01}. In that case the\nintegrand consists of explicit functions whose analytic properties\nwere used in the calculation. In the finite temperature case a\ndifferent factorization technique had to be invented. As was demonstrated \nin \\cite{BGKS06} the linear integral equation for the function $G$,\nappropriately used under the multiple integral, can be viewed as the\nsource of the factorization, at least for the special case of the\nisotropic chain at $\\a = 0$. For the XXZ chain outside the isotropic\npoint and without the disorder parameter $\\a$, however, that trick\ndoes not work anymore. 
Here we shall see that a finite $\\a$ allows us\nto perform the factorization of the density matrix in much the same\nway as in \\cite{BGKS06}.\n\nLet us consider $m = 2$ in (\\ref{multint}). There are six non-vanishing\nmatrix elements in this case, one for $p = 0$, four for $p = 1$ and one\nfor $p = 2$. We shall concentrate on the case $p = 1$, since the matrix\nelements for $p = 0$ or $2$ can be obtained from those for $p = 1$ by\nmeans of the reduction relations (\\ref{redu}). After substituting\n$w_j = {\\rm e}^{2 \\m_j}$ and $\\x_j = {\\rm e}^{\\n_j}$, $j = 1, 2$, the corresponding\nintegrals are all of the form\n\\begin{equation} \\label{intm2}\n {\\cal I} = \\frac{1}{\\x_2^2 - \\x_1^2}\n \\int_{\\cal C} {\\rm d} m(\\m_1) \\int_{\\cal C} {\\rm d} \\overline{m} (\\m_2)\n\t\\det \\bigl[ G(\\m_j, \\n_k) \\bigr] r(w_1, w_2) \\, ,\n\\end{equation}\nwhere\n\\begin{equation}\n r(w_1, w_2) = \\frac{p(w_1, w_2)}{w_1 - q^2 w_2} \\, , \\qd\n p(w_1, w_2) = c_0 w_1 w_2 + c_1 w_1 + c_2 w_2 + c_3 \\, .\n\\end{equation}\nThe coefficients $c_j$ are different for the four different matrix\nelements. 
They are listed in table \\ref{tab:pcoeff}.\n\\renewcommand{\\arraystretch}{1.5}\n\\begin{table}[t]\n\\begin{minipage}{\\linewidth}\n \\centering\n \\begin{tabular}{ccccc}\n \\toprule\n\t$\\begin{smallmatrix} \\e_1' & \\e_2' \\\\ \\e_1 & \\e_2\n\t \\end{smallmatrix}$\n & $c_0$ & $c_1$ & $c_2$ & $c_3$ \\\\\n \\midrule\n\t$\\begin{smallmatrix} + & - \\\\ + & - \\end{smallmatrix}$\n\t& 1 & $- \\x_1^2$ & $- q^2 \\x_2^2$ & $q^2 \\x_1^2 \\x_2^2$ \\\\\n\t$\\begin{smallmatrix} - & + \\\\ - & + \\end{smallmatrix}$\n\t& $q^2$ & $- \\x_2^2$ & $- q^2 \\x_1^2$ & $\\x_1^2 \\x_2^2$ \\\\\n\t$\\begin{smallmatrix} + & - \\\\ - & + \\end{smallmatrix}$\n\t& $q \\x_2\/\\x_1$ & $- q \\x_1 \\x_2$ & $- q \\x_1 \\x_2$\n\t& $q \\x_1^3 \\x_2$ \\\\\n\t$\\begin{smallmatrix} - & + \\\\ + & - \\end{smallmatrix}$\n\t& $q \\x_1\/\\x_2$ & $- q^{-1} \\x_1 \\x_2$ & $- q^3 \\x_1 \\x_2$\n\t& $q \\x_1 \\x_2^3$ \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{\\label{tab:pcoeff} The coefficients of the polynomial $p$.}\n\\end{minipage}\n\\end{table}\n\\renewcommand{\\arraystretch}{1.1}\n\nInserting\n\\begin{equation}\n {\\rm d} \\overline{m} (\\m) = \\frac{{\\rm d} \\m}{2 \\p {\\rm i} \\, \\r({\\rm e}^\\m)}\n - {\\rm d} m(\\m)\n\\end{equation}\ninto (\\ref{intm2}) and taking into account that $\\r ({\\rm e}^\\m)$ is\nanalytic and non-zero inside $\\cal C$ we obtain\n\\begin{multline} \\label{ij}\n {\\cal I} (\\x_2^2 - \\x_1^2) = - \\int_{\\cal C} {\\rm d} m(\\m) \\,\n \\det \\begin{pmatrix} G(\\m, \\n_1) & G(\\m, \\n_2) \\\\\n\t r(w, \\x_1^2) & r(w, \\x_2^2) \\end{pmatrix}\n\t\t\t \\\\[1ex]\n - \\int_{\\cal C} {\\rm d} m(\\m_1) \\int_{\\cal C} {\\rm d} m (\\m_2)\n\t \\det \\bigl[ G(\\m_j, \\n_k) \\bigr] r(w_1, w_2) \\, ,\n\\end{multline}\nwhere $w = {\\rm e}^{2 \\m}$. Here the first integral is already factorized.\nUnder the second integral the integration measures now appear\nsymmetrically. 
Hence, we may replace $r(w_1, w_2)$ by $(r(w_1, w_2) -\nr(w_2, w_1))\/2$.\n\nFollowing \\cite{BGKS06} we want to use the integral equation (\\ref{newg})\nunder the second integral in (\\ref{ij}). This is possible if rational\nfunctions $F(w_1, w_2)$ and $g(w)$ exist, such that\n\\begin{equation} \\label{decr}\n r(w_1, w_2) - r(w_2, w_1) = F(w_1, w_2) +\n g(w_1) K_\\a (\\m_1 - \\m_2) - g(w_2) K_\\a (\\m_2 - \\m_1) \\, ,\n\\end{equation}\nand the antisymmetric function $F(w_1, w_2)$ is a sum of factorized\nfunctions in $w_1$ and $w_2$. Then $F$ considered as a function of\n$w_1$ cannot have poles whose position depends on $w_2$. In particular,\nthe residue at $w_1 = q^2 w_2$ must vanish. Using this in (\\ref{decr})\nwith the explicit forms of $r$ and $K_\\a$ inserted we obtain a\ndifference equation for $g$,\n\\begin{equation} \\label{diffg}\n g(q^2 w) y^{-1} - g(w) y = \\frac{p(q^2 w, w)}{2 q^2 w} \\, .\n\\end{equation}\nHere $y = q^\\a$. Clearly this equation has a solution of the form\n\\begin{equation}\n g(w) = g_+ w + g_0 + \\frac{g_-}{w} \\, .\n\\end{equation}\nThe coefficients are easily obtained by substituting the latter expression\ninto (\\ref{diffg}),\n\\begin{equation}\n g_+ = \\frac{c_0 y}{2(q^2 - y^2)} \\, , \\qd\n g_- = \\frac{c_3 y}{2(1 - q^2 y^2)} \\, , \\qd\n g_0 = \\frac{(c_1 + q^{-2} c_2) y}{2(1 - y^2)} \\, .\n\\end{equation}\nSubstituting $g$ back into (\\ref{decr}) we obtain $F(w_1, w_2) = f(w_1)\n- f(w_2)$, where\n\\begin{equation}\n f(w) = (y - y^{-1})\\Bigl(g_+ w - \\frac{g_-}{w}\\Bigr) \\, .\n\\end{equation}\nConsequently\n\\begin{equation}\n r(w_1, w_2) = f(w_1) + g(w_1) K_\\a (\\m_1 - \\m_2)\n + \\text{symmetric function.}\n\\end{equation}\n\nWith this we can factorize the second integral in (\\ref{ij}) by means\nof the integral equation (\\ref{newg}),\n\\begin{multline} \\label{ifirstfac}\n \\int_{\\cal C} {\\rm d} m(\\m_1) \\int_{\\cal C} {\\rm d} m (\\m_2)\n \\det \\bigl[ G(\\m_j, \\n_k) \\bigr] r(w_1, w_2) \\\\\n = (y - y^{-1})\n\t 
\\det \\begin{pmatrix}\n\t g_+ \\ph_+ (\\n_1) - g_- \\ph_- (\\n_1) &\n\t g_+ \\ph_+ (\\n_2) - g_- \\ph_- (\\n_2) \\\\\n\t \\ph_0 (\\n_1) & \\ph_0 (\\n_2)\n\t \\end{pmatrix} \\\\[1ex] +\n \\int_{\\cal C} {\\rm d} m(\\m) \\,\n\t \\det \\begin{pmatrix}\n\t G(\\m, \\n_1) & G(\\m, \\n_2) \\\\\n\t g(w) H(\\m, \\n_1; y^{-1}) & g(w) H(\\m, \\n_2; y^{-1})\n\t \\end{pmatrix} \\, ,\n\\end{multline}\nwhere\n\\begin{subequations}\n\\begin{align}\n \\ph_j (\\n) & = \\int_{\\cal C} {\\rm d} m(\\m) \\, w^j G(\\m, \\n) \\, , \\qd\n j = +, 0, - \\, , \\\\\n H(\\m, \\n; y^{-1}) &\n = \\r(\\x) \\cth (\\m - \\n) - y^{-1} \\cth(\\m - \\n - \\h) \\, .\n\\end{align}\n\\end{subequations}\nFinally we substitute (\\ref{ifirstfac}) into (\\ref{ij}) and further\nsimplify the resulting expression using the identities\n\\begin{subequations}\n\\begin{align}\n g(w) H(\\m, \\n; y^{-1}) & = g(\\x^2) H(\\m, \\n; y)\n - \\frac{p(q^2 \\x^2, \\x^2)}{2 q^2 \\x^2} \\cth(\\m - \\n - \\h)\n\t\\notag \\\\\n\t& \\mspace{36.mu} - \\ph_0 (\\n) \\bigl(f(w) - f(\\x^2)\\bigr)\n\t + \\frac{y^{-1}}{y - y^{-1}} \\bigl(f(\\x^2) - f(q^2 \\x^2)\\bigr)\n\t \\, , \\\\[1ex]\n r(w, \\x^2) & = \\frac{p(q^2 \\x^2, \\x^2)}{2 q^2 \\x^2} \\cth(\\m - \\n - \\h)\n\t\t - \\frac{p(- q^2 \\x^2, \\x^2)}{2 q^2 \\x^2} \\, .\n\\end{align}\n\\end{subequations}\nThen\n\\begin{multline} \\label{innerfac}\n {\\cal I} =\n \\frac{g(\\x_2^2) \\Ps (\\x_2, \\x_1) - g(\\x_1^2) \\Ps (\\x_1, \\x_2)}\n {\\x_2^2 - \\x_1^2}\n + \\frac{(c_1 - q^{-2} c_2)(\\r(\\x_1) - \\r(\\x_2))}\n\t {2(\\x_2^2 - \\x_1^2)(y - y^{-1})} \\\\[1ex]\n + \\frac{(y^{-1} - \\r(\\x_1))(y - \\r(\\x_2)) f(\\x_2^2)\n\t - (y^{-1} - \\r(\\x_2))(y - \\r(\\x_1)) f(\\x_1^2)}\n\t {(\\x_2^2 - \\x_1^2)(y - y^{-1})^2} \\, ,\n\\end{multline}\nwhere\n\\begin{equation} \\label{Psi}\n \\Ps (\\x_1, \\x_2) =\n \\int_{\\cal C} {\\rm d} m(\\m) G(\\m, \\n_2)\n\t\\bigl(q^\\a \\cth(\\m - \\n_1 - \\h) - \\r(\\x_1) \\cth(\\m - \\n_1) \\bigr)\n\t\\, .\n\\end{equation}\n\nEquation (\\ref{innerfac}) 
determines the four density matrix elements\nfor $p = 1$ in factorized form. Note that the matrix elements depend\non only two transcendental functions, $\\r$ and $\\Ps$. The remaining\ntwo non-zero density matrix elements for $m = 2$ follow from\n(\\ref{innerfac}) by means of the reduction relations (\\ref{redu}),\n\\begin{subequations}\n\\begin{align}\n D^{++}_{++} (\\x_1, \\x_2) &\n = \\frac{\\r(\\x_1) - y^{-1}}{y - y^{-1}}\n\t - D^{+-}_{+-} (\\x_1, \\x_2) \\, , \\\\\n D^{--}_{--} (\\x_1, \\x_2) &\n = \\frac{y - \\r(\\x_1)}{y - y^{-1}} - D^{-+}_{-+} (\\x_1, \\x_2) \\, .\n\\end{align}\n\\end{subequations}\nWe shall give a fully explicit matrix representation of the factorized\ndensity matrix for $m = 2$ below, after we have introduced the function\n$\\om$.\n\n\\section{The function $\\om$} \\label{sec:om}\nIn the recent work \\cite{JMS08pp} it was shown that the correlation\nfunctions defined by the inhomogeneous and $\\a$-twisted density\nmatrix (\\ref{defdens}) factorize and can all be expressed in terms\nof only two transcendental functions, the function $\\r$ entering the\nreduction relations (\\ref{redu}) and another function $\\om$ which\nin \\cite{JMS08pp} was defined as the expectation value of a product\nof two creation operators and was represented by a determinant formula.\nThe approach of \\cite{JMS08pp} is slightly different from ours here\nin that the lattice used in \\cite{JMS08pp} is homogeneous in `horizontal\ndirection' (all the $\\x$s in (\\ref{defdens}) are taken to be 1 from\nthe outset). For the ground state both cases lead to the same function\n$\\om$ (see sections 5.3 and 5.4 of \\cite{BJMST08app}). 
In particular,\nin the inhomogeneous case, following sections 5.1 and 5.3 of\n\\cite{BJMST08app}, we have\\footnote{More precisely this function\nwas denoted $(\\om_0 - \\om)(\\x_1\/\\x_2, \\a)$ in \\cite{BJMST08app}.}\n\\begin{equation} \\label{defom}\n \\om (\\x_1, \\x_2)\n = - \\bigl\\langle\n\t \\mathbf{c}^*_{[1,2]} (\\x_2,\\a) \\mathbf{b}^*_{[1,2]}(\\x_1,\\a - 1) (1)\n\t \\bigr\\rangle \\, .\n\\end{equation}\n\nReplacing the vacuum expectation value by the expectation value\ncalculated with the density matrix (\\ref{defdens}) we take (\\ref{defom})\nas our definition of the function $\\om$. In our case $\\om$ depends on\ntwo twist parameters $\\k$ and $\\a$. We indicate this by writing \n$\\om (\\x_1, \\x_2| \\k, \\a)$. The construction of the operators\n$\\mathbf{b}^*_{[1,2]}$ and $\\mathbf{c}^*_{[1,2]}$ is explained in \\cite{BJMST08app}.\nFor the product needed in (\\ref{defom}) we find the explicit expression\n\\begin{align} \\label{c2b11}\n \\x^{-\\a} \\mathbf{c}^*_{[1,2]} & (\\x_2, \\a) \\mathbf{b}^*_{[1,2]} (\\x_1,\\a - 1) (1) =\n \\notag \\\\[1ex] &\n \\biggl( \\frac{q^{\\a - 1} \\x^{-1}}{q \\x - q^{-1} \\x^{-1}} -\n \\frac{q^{1 - \\a} \\x^{-1}}{q^{-1} \\x - q \\x^{-1}} +\n \\frac{q^\\a - q^{- \\a}}{2} \\biggr) \\s^z \\otimes \\s^z\n \\notag \\\\[1ex] & +\n \\frac{q^\\a - q^{- \\a}}{2}\n \\biggl( \\frac{q^{-1} \\x^{-1}}{q \\x - q^{-1} \\x^{-1}} -\n \\frac{q \\x^{-1}}{q^{-1} \\x - q \\x^{-1}} \\biggr)\n \\bigl( I_2 \\otimes \\s^z - \\s^z \\otimes I_2 \\bigr)\n \\notag \\\\[1ex] & +\n 2 \\biggl( \\frac{q^\\a}{q \\x - q^{-1} \\x^{-1}} -\n \\frac{q^{- \\a}}{q^{-1} \\x - q \\x^{-1}} \\biggr)\n \\bigl( \\s^+ \\otimes \\s^- + \\s^- \\otimes \\s^+ \\bigr)\n \\notag \\\\[1ex] & +\n (q^\\a - q^{- \\a})\n \\biggl( \\frac{1}{q \\x - q^{-1} \\x^{-1}} +\n \\frac{1}{q^{-1} \\x - q \\x^{-1}} \\biggr)\n \\bigl( \\s^+ \\otimes \\s^- - \\s^- \\otimes \\s^+ \\bigr) \\, ,\n\\end{align}\nwhere $\\x = \\x_1\/\\x_2$. 
Inserting this into (\\ref{defom}) and calculating\nthe average with the factorized two-site density matrix of the previous\nsection we obtain\n\\begin{equation} \\label{ompsi}\n \\om(\\x_1, \\x_2|\\k, \\a) = 2 \\x^\\a \\Ps(\\x_1, \\x_2) - \\D \\ps(\\x)\n + 2 \\bigl( \\r(\\x_1) - \\r(\\x_2) \\bigr) \\ps(\\x) \\, .\n\\end{equation}\nHere we adopted the notation from \\cite{BJMST08app},\n\\begin{equation} \\label{defpsi}\n \\ps(\\x) = \\frac{\\x^\\a (\\x^2 + 1)}{2(\\x^2 - 1)} \\, ,\n\\end{equation}\nand $\\D$ is the difference operator whose action on a function $f$\nis defined by $\\D f(\\x) = f(q \\x) - f(q^{-1} \\x)$.\n\nThe remaining part of this section is devoted to the exploration of\nthe properties of~$\\om$. First of all we substitute $\\om$ back into\nthe equation for the two-site density matrix, which can then be\nexpressed entirely in terms of $\\om$ and a function\n\\begin{equation}\n \\ph (\\z|\\k, \\a) = \\frac{\\ch (\\a \\h) - \\r(\\z)}{{\\rm sh}(\\a \\h)}\n\\end{equation}\nwhich is sometimes more convenient than the function $\\r$ itself.\nWe obtain\n\\begin{align}\n D_N & (\\x_1, \\x_2|\\k, \\a) = \\4 I_2 \\otimes I_2 \\notag \\\\\n & - \\frac{1}{4(q^{\\a - 1} - q^{1 - \\a})}\n \\biggl( \\frac{\\x^{1 - \\a} \\om_{12} - \\x^{\\a - 1} \\om_{21}}\n {\\x - \\x^{-1}} +\n \\frac{\\ph_1 \\ph_2 (q^\\a - q^{- \\a})}{2} \\biggr)\n \\notag \\\\ & \\qd\n \\biggl( \\frac{q - q^{-1}}{2} I_2 \\otimes \\s^z\n - \\frac{q + q^{-1}}{2} \\s^z \\otimes \\s^z\n + \\x^{-1} \\, \\s^+ \\otimes \\s^- + \\x \\, \\s^- \\otimes \\s^+\n \\biggr) \\notag \\\\[1ex]\n & - \\frac{1}{4(q^{\\a + 1} - q^{- \\a - 1})}\n \\biggl( \\frac{\\x^{- \\a - 1} \\om_{12} - \\x^{\\a + 1} \\om_{21}}\n {\\x - \\x^{-1}} +\n \\frac{\\ph_1 \\ph_2 (q^\\a - q^{- \\a})}{2} \\biggr)\n \\notag \\\\ & \\qd\n \\biggl( - \\frac{q - q^{-1}}{2} I_2 \\otimes \\s^z\n - \\frac{q + q^{-1}}{2} \\s^z \\otimes \\s^z\n + \\x \\, \\s^+ \\otimes \\s^- + \\x^{-1} \\, \\s^- 
\\otimes \\s^+\n \\biggr) \\notag \\\\[1ex]\n & - \\frac{\\x^{-\\a} \\om_{12} - \\x^\\a \\om_{21}}\n {4(\\x - \\x^{-1})(q^\\a - q^{- \\a})}\n \\bigl( (\\x + \\x^{-1}) \\s^z \\otimes \\s^z\n - (q + q^{-1})(\\s^+ \\otimes \\s^- + \\s^- \\otimes \\s^+)\n \\bigr) \\notag \\\\[1ex]\n & - \\4 \\bigl(\\ph_1 \\, \\s^z \\otimes I_2\n + \\ph_2 \\, I_2 \\otimes \\s^z \\bigr)\n - \\frac{q - q^{-1}}{4(\\x - \\x^{-1})} (\\ph_1 - \\ph_2)\n (\\s^+ \\otimes \\s^- - \\s^- \\otimes \\s^+) \\, ,\n\\end{align}\nwhere we introduced the abbreviations $\\om_{jk} = \\om(\\x_j, \\x_k| \\k, \\a)$\nand $\\ph_j = \\ph(\\x_j| \\k, \\a)$.\n\nFor the limit $\\a \\rightarrow 0$ the properties of the functions $\\ph$\nand $\\om$ with respect to negating $\\k$ and $\\a$ are important. They\nfollow from the fact that the $R$-matrix is invariant under spin reversal,\n\\begin{equation} \\label{rspinrev}\n R(\\la) = (\\s^x \\otimes \\s^x) R(\\la) (\\s^x \\otimes \\s^x) \\, .\n\\end{equation}\nIntroducing the spin reversal operator $J = \\s_1^x \\dots \\s_N^x$\nwe conclude with (\\ref{rspinrev}) that\n\\begin{equation}\n T_a (\\z, - \\k) = \\s_a^x J \\, T_a (\\z, \\k) \\, J \\s_a^x \\, .\n\\end{equation}\nIt follows that $t(\\z, -\\k) = J t(\\z, \\k) J$. 
Hence,\n\\begin{subequations}\n\\begin{align}\n J |\\k\\> & = |-\\k\\> \\, , \\\\[1ex] \\La (\\z, \\k) & = \\La (\\z, -\\k) \\, .\n \\label{evinv}\n\\end{align}\n\\end{subequations}\nThe latter two equations used in the definition (\\ref{defdens}) of\nthe $\\a$-twisted density matrix imply that\n\\begin{equation} \\label{densrevers}\n D_N (\\x_1, \\dots, \\x_m|- \\k, - \\a) = (\\s^x)^{\\otimes m} \\,\n D_N (\\x_1, \\dots, \\x_m|\\k, \\a) \\, (\\s^x)^{\\otimes m} \\, .\n\\end{equation}\n\nFrom (\\ref{defrho}), (\\ref{evinv}) we obtain the relation\n\\begin{equation}\n \\ph(\\z|- \\k, - \\a) = - \\ph(\\z| \\k, \\a) \\, .\n\\end{equation}\nEquation (\\ref{densrevers}) together with (\\ref{defom})-(\\ref{ompsi})\nand the expressions for the density matrix elements of the previous\nsection implies that\n\\begin{equation}\n \\om(\\x_1, \\x_2|\\k, \\a) = \\om(\\x_2, \\x_1|- \\k, - \\a) \\, .\n\\end{equation}\n\nOur next step is to verify that the function $\\om$ given by the formula\n(\\ref{ompsi}) satisfies a property called the `normalization condition'\nby the authors of \\cite{JMS08pp} (see equation (6.10) there). So we come\nback to the case of finite Trotter number $N$ with arbitrary inhomogeneity\nparameters $\\be_j$, $j = 1, \\dots, N$ as it is written in (\\ref{defmono}).\nWe shall also use multiplicative parameters $\\tau_j = {\\rm e}^{\\be_j}$.\n\nWe consider the normalization condition in the following form \n\\begin{multline} \\label{norm}\n \\bigl(\\om(\\z,\\xi|\\k, \\a)\n + {\\overline{D}}_{\\z}{\\overline D}_{\\xi}\\Delta_{\\z}^{-1}\n \\psi(\\z\/\\xi)\\bigr)\\bigr|_{\\z=\\tau_j} \\\\\n +\\rho(\\tau_j) \\bigl(\\om(\\z,\\xi|\\k, \\a)\n +{\\overline D}_{\\z}{\\overline D}_{\\xi}\\Delta_{\\z}^{-1}\n \\psi(\\z\/\\xi)\\bigr)\\bigr|_{\\z=q^{-1}\\tau_j} = 0 \\, ,\n\\end{multline}\n$j = 1, \\dots, N$, which can be obtained from the integral in (6.10) of\n\\cite{JMS08pp} by taking the residues and using the TQ-relation (4.2)\nof that paper. 
Also let us recall the definition \n\\begin{equation}\n {\\overline{D}}_{\\z} g(\\z) = g(q\\z)+g(q^{-1}\\z)-2\\rho(\\z)g(\\z) \\, .\n\\label{D}\n\\end{equation}\nActually, (6.10) of \\cite{JMS08pp} comprises one more equation related\nto the residue at $\\z^2 = 0$. This case needs separate treatment and will\nbe discussed below.\n\nFirst we use the following difference equation for the function \n$\\Psi$ defined by (\\ref{Psi}),\n\\begin{align} \\label{eqPsi}\n \\Psi&(\\xi_1,\\xi_2)+\\rho(\\xi_1)q^{-\\a}\\Psi(q^{-1}\\xi_1, \\x_2)=\n \\frac{G(\\nu_1,\\nu_2)}{1+\\bar\\mathfrak{a}(\\nu_1,\\kappa)}\n -\\rho(\\xi_1)q^{-\\a}\\frac{G(\\nu_1-\\eta,\\nu_2)}\n {1+\\mathfrak{a}(\\nu_1-\\eta,\\kappa)} \\notag \\\\[1ex]\n &+\\rho(\\xi_2)\\cth(\\nu_1-\\nu_2)-q^{-\\a}\\cth(\\nu_1-\\nu_2-\\eta)\n \\notag \\\\[1ex]\n &- q^{- \\a} \\bigl(\\rho(\\xi_1)\\rho(q^{-1}\\xi_1)-1\\bigr)\n \\int_{\\cal C} {\\rm d} m(\\mu) G(\\mu,\\nu_2)\\cth(\\mu-\\nu_1+\\eta) \\, ,\n\\end{align}\nwhere $\\overline{\\mathfrak{a}} = 1\/\\mathfrak{a}$ by definition.\nThis equation is the result of an analytical continuation defined for\n$\\Psi(q^{-1} \\x_1, \\x_2)$ through an appropriate deformation of the\nintegration contour in (\\ref{Psi}). Some simplifications occur in the\nlimit $\\nu_1 \\rightarrow \\be_j$ or equivalently $\\xi_1 \\rightarrow \\tau_j$,\nnamely, since $\\mathfrak{a}(\\be_j,\\kappa)=\\bar\\mathfrak{a}(\\be_j-\\eta,\\kappa)=0$ or\n$\\bar\\mathfrak{a}(\\be_j,\\kappa)=\\mathfrak{a}(\\be_j-\\eta,\\kappa)=\\infty$, the first two terms\nin the right hand side of (\\ref{eqPsi}) do not contribute. Then we have\n\\[\n \\rho(\\tau_j)\\rho(q^{-1}\\tau_j) = \n \\frac{Q^-(q^{-1}\\tau_j;\\kappa+\\a)Q^+(\\tau_j;\\kappa)}\n {Q^-(\\tau_j;\\kappa+\\a)Q^+(q^{-1}\\tau_j;\\kappa)}\\cdot\n \\frac{Q^-(\\tau_j;\\kappa+\\a)Q^+(q^{-1}\\tau_j;\\kappa)}\n {Q^-(q^{-1}\\tau_j;\\kappa+\\a)Q^+(\\tau_j;\\kappa)} = 1\n\\]\nwith the $Q$-functions $Q^\\pm$ defined in \\cite{JMS08pp}. 
This means\nthat also the last term in the right hand side of (\\ref{eqPsi}) does\nnot contribute. Hence, we obtain\n\\begin{equation} \\label{eqPsi1}\n \\Psi(\\tau_j, \\x_2) + \\rho(\\tau_j)q^{-\\a}\\Psi(q^{-1}\\tau_j, \\x_2) =\n \\rho(\\xi_2) \\cth(\\be_j-\\nu_2)-q^{-\\a} \\cth(\\be_j-\\nu_2-\\eta) \\, .\n\\end{equation}\nNote that the right hand side is, up to the sign, equal to the driving\nterm in the integral equation (\\ref{newg}) for $G$.\n\nIf we take the formula (\\ref{ompsi}) and use (\\ref{eqPsi1}) then, after\nsome algebra, we obtain \n\\begin{align} \\label{eqom}\n \\om(&\\tau_j,\\xi_2|\\kappa,\\a)\n +\\rho(\\tau_j)\\om(q^{-1}\\tau_j,\\xi_2|\\kappa,\\a) = \\notag \\\\[1ex]\n &-{\\bigl(\\Delta_{\\z}\\psi(\\z\/\\xi_2)\\bigr)}\\bigr|_{\\z=\\tau_j}-\n \\rho(\\tau_j){\\bigl(\\Delta_{\\z}\\psi(\\z\/\\xi_2)\\bigr)}\n \\bigr|_{\\z=q^{-1}\\tau_j} \\notag \\\\[1ex]\n &+2(\\rho(\\tau_j)+\\rho(\\xi_2))\\;\\psi(\\tau_j\/\\xi_2)\n -2(1+\\rho(\\tau_j)\\rho(\\xi_2))\\;\\psi(q^{-1}\\tau_j\/\\xi_2) \\, .\n\\end{align}\nNow we need to check that this equation is equivalent to (\\ref{norm}).\nTo this end we should verify the following equality \n\\begin{multline} \\label{equal}\n \\bigl( {\\overline{D}}_{\\z} {\\overline D}_{\\xi}\n \\Delta_{\\z}^{-1}\\psi(\\z\/\\xi)\\bigr)\\bigr|_{\\z=\\tau_j}\n +\\rho(\\tau_j) \\bigl({\\overline D}_{\\z}{\\overline D}_{\\xi}\n \\Delta_{\\z}^{-1}\\psi(\\z\/\\xi) \\bigr) \\bigr|_{\\z=q^{-1}\\tau_j}\n = \\\\[1ex]\n \\bigl( \\Delta_{\\z}\\psi(\\z\/\\xi_2) \\bigr) \\bigr|_{\\z=\\tau_j} +\n \\rho(\\tau_j) \\bigl( \\Delta_{\\z} \\psi(\\z\/\\xi_2) \\bigr)\n \\bigr|_{\\z=q^{-1}\\tau_j} \\\\\n -2(\\rho(\\tau_j)+\\rho(\\xi_2)) \\psi(\\tau_j\/\\xi_2)\n +2(1+\\rho(\\tau_j)\\rho(\\xi_2))\\psi(q^{-1}\\tau_j\/\\xi_2) \\, .\n\\end{multline}\nUsing the definition (\\ref{D}) we come after a little algebra to the\nfollowing expression for an arbitrary function $g(\\z)$ \n\\begin{multline} \\label{actdbar}\n {\\overline{D}}_{\\z}{\\overline 
D}_{\\xi}\\;g(\\z\/\\xi)=\n \\Delta_{\\z}^2\\;g(\\z\/\\xi)\n + 4(1-\\rho(\\z))\\;(1-\\rho(\\xi))\\;g(\\z\/\\xi) \\\\\n -2(\\rho(\\z)+\\rho(\\xi))\\;(g(q\\z\/\\xi)+g(q^{-1}\\z\/\\xi)-2g(\\z\/\\xi)) \\, .\n\\end{multline}\nNow take \n\\begin{multline}\n \\bigl({\\overline{D}}_{\\z} {\\overline D}_{\\xi}\n \\;g(\\z\/\\xi)\\bigr)\\bigr|_{\\z=\\tau_j}+\n \\rho(\\tau_j) \\bigl( {\\overline D}_{\\z} {\\overline D}_{\\xi}\n \\;g(\\z\/\\xi)\\bigr) \\bigr|_{\\z=q^{-1}\\tau_j} = \\\\[1ex]\n \\bigl(\\Delta_{\\z}^2\\;g(\\z\/\\xi)\\bigr)\\bigr|_{\\z=\\tau_j}\n +\\bigl(\\Delta_{\\z}^2 \\;g(\\z\/\\xi)\\bigr)\\bigr|_{\\z=q^{-1}\\tau_j} -\n 2(\\rho(\\tau_j)+\\rho(\\xi)) \\bigl(\\Delta_{\\z}\\;g(\\z\/\\xi)\\bigr)\n \\bigr|_{\\z=\\tau_j} \\\\[1ex] + 2(1+\\rho(\\tau_j)\\rho(\\xi))\n \\bigl(\\Delta_{\\z}\\;g(\\z\/\\xi)\\bigr) \\bigr|_{\\z=q^{-1}\\tau_j} \\, .\n\\end{multline}\nIf we substitute $g(\\z\/\\xi)=\\Delta_{\\z}^{-1}\\psi(\\z\/\\xi)$ and take\n$\\xi=\\xi_2$, then we immediately arrive at the equality (\\ref{equal}).\n\nAs was mentioned above, there is one more case to be considered,\ncorresponding to the contour $\\G_0$, i.e.\\ to the residue at $\\z^2 = 0$\nin equation (6.10) of \\cite{JMS08pp} which has to vanish. 
Its vanishing\nfollows from\n\\begin{multline}\n \\lim_{\\x_1 \\rightarrow 0} \\x^{- \\a} \\bigl( \\om (\\x_1, \\x_2) +\n \\overline{D}_{\\x_1} \\overline{D}_{\\x_2} \\D^{-1}_{\\x_1} \\ps (\\x)\n \\bigr) = \\\\\n \\frac{2 q^{- \\k}}{q^\\k + q^{- \\k}} \\biggl[\n \\r(\\x_2) - q^{- \\a} + (q^\\a - q^{- \\a})\n \\int_{\\cal C} {\\rm d} m(\\m) G(\\m, \\n_2) \\biggr] = 0 \\, .\n\\end{multline}\nHere we have used (\\ref{Psi}), (\\ref{ompsi}), (\\ref{defpsi}),\n(\\ref{actdbar}) as well as the fact that $\\lim_{\\n \\rightarrow - \\infty}\n\\r (\\x) = (q^{\\a + \\k} + q^{- \\a - \\k})\/(q^\\k + q^{- \\k})$ in the first\nequation and the identity (\\ref{grhoid}) in the second equation.\n\nThe normalization condition just shown to be satisfied by our function\n$\\om$ defined in (\\ref{defom}) is the main ingredient in our proof\nthat $\\om$ is in fact the same function as introduced in equation (7.2)\nof \\cite{JMS08pp}. Let us consider $\\om$ as a function of $\\x_1$. As\nwas shown in \\cite{JMS08pp} the function $\\r (\\x_1)$ depends only on\n$\\x_1^2$. The same is then true for $\\Psi (\\x_1, \\x_2)$ from (\\ref{Psi}).\nUsing (\\ref{ompsi}) we conclude that $\\x^{- \\a} \\om(\\x_1, \\x_2|\\k, \\a)$\nis a function of $\\x_1^2$. From its definition (\\ref{defom}) and from\n(\\ref{defdens}), (\\ref{c2b11}) we see that $\\om$ is rational in $\\x_1^2$\nof the form $P(\\x_1^2)\/Q(\\x_1^2)$, where $P$ and $Q$ are polynomials.\nClearly both of them are at most of degree $N + 2$. The zeros of $Q$ are\nthe $N$ zeros of the transfer matrix eigenvalue $\\La (\\x_1, \\k)$ plus\ntwo zeros at $q^{\\pm 2} \\x_2^2$ stemming from the two simple poles of\n$\\x^{-\\a} \\mathbf{c}^*_{[1,2]} (\\x_2, \\a) \\mathbf{b}^*_{[1,2]} (\\x_1,\\a - 1) (1)$.\nComparing now with the definition (7.2) of \\cite{JMS08pp} we see that\nthe function there has precisely the same structure. 
It is rational\nof the form $\\tilde P (\\x_1^2)\/ \\tilde Q (\\x_1^2)$ with two polynomials\n$\\tilde P$, $\\tilde Q$ at most of degree $N + 2$. $Q$ and $\\tilde Q$\nhave the same zeros. We may therefore assume that they are identical.\nIn order to show that $P$ and $\\tilde P$ also agree we have to provide\n$N + 3$ relations. $N + 1$ of them are given by the normalization\ncondition above. Another two come from the residues at the two trivial\npoles.\n\nSince they are outside the canonical contour, we have to consider\nagain the analytic continuation of the integral (\\ref{Psi}) defining\n$\\Psi$ with respect to $\\x_1$. There are four regions depending on the\n\\begin{figure}\n \\centering\n \\includegraphics{anacontpsi}\n \\caption{\\label{fig:anacontpsi}\n Four cases to be considered for the analytic continuation\n of $\\Psi (\\x_1, \\x_2)$ with respect to $\\n_1$. Here $\\cal C$\n is the canonical contour of figure~\\ref{fig:cont}.}\n\\end{figure}\nlocation of $\\n_1$ relative to the contour (see figure\n\\ref{fig:anacontpsi}). 
Using (\\ref{Psi}) we obtain\n\\begin{multline} \\label{anacontpsi}\n \\Ps (\\x_1, \\x_2) =\n \\int_{\\cal C} {\\rm d} m(\\m) G(\\m, \\n_2)\n\t\\bigl(q^\\a \\cth(\\m - \\n_1 - \\h) - \\r(\\x_1) \\cth(\\m - \\n_1) \\bigr)\n \\\\[1ex] - \\begin{cases}\n \\dst{\\frac{G(\\n_1, \\n_2)}{1 + \\mathfrak{a} (\\n_1, \\k)}}\n & \\text{case (I)} \\\\[2ex]\n 0 & \\text{case (II)} \\\\[1ex]\n \\dst{\\frac{G(\\n_1, \\n_2)}{1 + \\mathfrak{a} (\\n_1, \\k)} +\n \\frac{q^\\a G(\\n_1 + \\h, \\n_2)}\n {(1 + \\mathfrak{a} (\\n_1 + \\h, \\k)) \\r(q \\x_1)}}\n & \\text{case (III)} \\\\[2ex]\n \\dst{\\frac{G(\\n_1, \\n_2)}{1 + \\mathfrak{a} (\\n_1, \\k)}}\n & \\text{case (IV)} \\, .\n \\end{cases}\n\\end{multline}\nThen, e.g.\\ by means of the integral equation (\\ref{newg}), we find\n\\begin{subequations}\n\\label{restrivpsi}\n\\begin{align} \\label{resa}\n \\res_{\\x_1^2 = q^2 \\x_2^2} & \\Psi (\\x_1, \\x_2) = \\notag \\\\\n & - \\frac{2 \\x_2^2 q^{2 - \\a}}\n {(1 + \\mathfrak{a} (\\n_2 + \\h, \\k))(1 + \\overline{\\mathfrak{a}}(\\n_2, \\k))} =\n - \\frac{2 \\x_2^2 q^{2 - \\a} a(\\x_2 q) d(\\x_2)}\n {\\La (\\x_2 q, \\k) \\La(\\x_2, \\k)} \\, , \\\\[2ex]\n \\res_{\\x_1^2 = q^{- 2} \\x_2^2} & \\Psi (\\x_1, \\x_2) = \\notag \\\\\n \\label{resb}\n & \\frac{2 \\x_2^2 q^{\\a - 2}}\n {(1 + \\mathfrak{a} (\\n_2, \\k))(1 + \\overline{\\mathfrak{a}}(\\n_2 - \\h, \\k))} =\n \\frac{2 \\x_2^2 q^{\\a - 2} a(\\x_2) d(\\x_2 q^{- 1})}\n {\\La (\\x_2, \\k) \\La(\\x_2 q^{-1}, \\k)} \\, ,\n\\end{align}\n\\end{subequations}\nwhere $a$ and $d$ are the vacuum expectation values of the diagonal\nelements of $T(\\z)$. Since the ratios on the right hand side are\ninvariant under changing the normalization of the $R$-matrix, we can\ndirectly compare the residues obtained from (\\ref{ompsi}),\n(\\ref{restrivpsi}) with those obtained from equation (7.2) of\n\\cite{JMS08pp}. We find agreement, which completes the proof.\n\nWhat if we consider (\\ref{ompsi}) as the definition of $\\om$? 
Then,\nin addition, we have to show that there are no poles other than the\ntwo trivial ones and those at the location of the zeros of\n$\\La(\\x_1, \\k)$. But this is immediately clear from (\\ref{anacontpsi}).\nThe integral has poles only at the zeros of $\\La(\\x_1, \\k)$. In case\n(I) there is one additional pole at $\\n_1 = \\n_2 + \\h$ with residue\n(\\ref{resa}). The simple poles of $G (\\n_1, \\n_2)$ at $\\la_j + \\h$,\nwhere the $\\la_j$ are the Bethe roots (see appendix \\ref{app:dermult}),\nare canceled by the simple poles of $\\mathfrak{a} (\\n_1, \\k)$ (see equation\n(\\ref{auxes})). In cases (II) and (IV) there is nothing to show. In\ncase (III) we have one additional pole at $\\n_1 = \\n_2 - \\h$ with residue\n(\\ref{resb}). The simple poles at $\\la_j - \\h$ have vanishing\nresidues due to (\\ref{auxes}) and the fact that\n\\begin{equation}\n \\res_{\\n_1 = \\la_j - \\h} G(\\n_1, \\n_2) =\n - \\frac{q^\\a G(\\la_j, \\n_2)}{\\r (\\z_j) \\mathfrak{a}' (\\la_j)} \\, .\n\\end{equation}\n\n\\section{The exponential form -- preliminary remarks}\n\\label{sec:top}\nThe main result of \\cite{JMS08pp} is the formula (1.12). It makes\nthe calculation of arbitrary correlation functions possible, because\nthe operators $\\mathbf{t}^*,\\mathbf{b}^*,\\mathbf{c}^*$ generate a basis of the space of\nquasi-local operators \\cite{BJMS09app}. Although this formula proves the\nfactorization of the correlation functions and also allows, in principle,\nfor their direct numerical evaluation, it may sometimes be\npreferable to avoid the creation operators and to have an explicit\nformula for the correlation functions in the standard basis generated\nby the local operators ${e_j}_\\e^{\\e'}$. We believe that some form of\nthe exponential formula discussed in the previous papers\n\\cite{BJMST08app,BGKS06,BGKS07,BDGKSW08} must be valid in the case of\ntemperature, disorder and magnetic fields as well. 
Unfortunately, the\nproblem of constructing all operators that appear in this formula\nstill remains open. We hope to come back to it in a future publication. Here we\nformulate the general properties we expect for these operators and show\nby examples what they should look like for short distances.\n\nFrom now on we shall use the notation and the terminology of the paper\n\\cite{BJMST08app}. In particular we shall be dealing with the space\n${\\cal W}^{(\\a)}$ of quasi-local operators of the form $q^{2\\a S(0)}{\\cal O}$\nintroduced there. First we define a density operator $D_N^{\\ast}:\n{\\cal W}^{(\\a)} \\rightarrow {\\mathbb C}$ which generalizes the one\ndefined by formulae (33), (34) of \\cite{BGKS07}, namely, for any\nquasi-local operator ${\\cal O}$ we define\n\\begin{equation}\n D_N^\\ast ({\\cal O}) = \\< {\\cal O} \\>_{T,\\a,\\kappa}\n\\end{equation}\nin such a way that \n\\begin{equation}\n D_N^\\ast \\bigl( {e_1}^{\\e_1}_{\\e_1'} \\dots {e_m}^{\\e_m}_{\\e_m'}\n \\bigr) =\n {D_N}^{\\e_1' \\dots \\e_m'}_{\\e_1 \\dots \\e_m}(\\x_1, \\dots, \\x_m|\\k, \\a)\n\\end{equation}\nwhere ${D_N}$ is the density matrix defined in (\\ref{defdens}).\n\nWe expect that, as before,\n\\begin{equation} \\label{exponential}\n D_N^\\ast ({\\cal O}) = \\mathbf{tr}^{\\a} \\bigl\\{ \\exp(\\Omega)\n \\bigl( q^{2 \\a S(0)} \\mathcal{O} \\bigr) \\bigr\\} \\, ,\n\\end{equation}\nwhere $\\mathbf{tr}^\\a$ is the $\\a$-trace defined in \\cite{BJMST08app}\nand where the operator $\\Omega$ consists of two terms, like the one\nconstructed in \\cite{BGKS07},\n\\begin{equation} \\label{Omega}\n \\Omega = \\Omega_1 + \\Omega_2 \\, .\n\\end{equation}\nIn fact, the first term follows from \\cite{BJMST08app,JMS08pp}. 
It must\nbe of the form\n\\begin{equation} \\label{Omega1}\n \\Omega_1 = \\int\\frac{d\\z_1^2}{2\\pi i\\z_1^2}\n \\int\\frac{d\\z_2^2}{2\\pi i\\z_2^2}\n \\bigl(\\om_0(\\z_1\/\\z_2|\\a) - \\om(\\z_1,\\z_2|\\k,\\a) \\bigr)\n \\mathbf{b}(\\z_1)\\mathbf{c}(\\z_2) \\, ,\n\\end{equation}\nwhere the function $\\om_0$ was defined in \\cite{BJMST08app},\n\\begin{equation} \\label{om0}\n \\om_0(\\z|\\a) = - \\biggl( \\frac{1-q^{\\a}}{1+q^{\\a}} \\biggr)^2\n \\Delta_{\\z} \\psi(\\z) \\, .\n\\end{equation}\nThe second part on the right hand side of (\\ref{Omega}) should be of the\nform \n\\begin{equation}\n \\Omega_2 = \\int\\frac{d\\z^2}{2\\pi i\\z^2}\\log(\\rho(\\z))\\mathbf{t}(\\z) \\, ,\n\\label{Omega2}\n\\end{equation}\nwhere the operator $\\mathbf{t}$ is yet to be determined. In some sense\nit must be the conjugate of the operator $\\mathbf{t}^*$. The integration\ncontour for both $\\Om_1$ and $\\Om_2$ is taken around all the simple\npoles at $\\z_1, \\z_2, \\z = \\xi_j$ with $j=1,\\dots,m$ in anti-clockwise\ndirection. The number $m$ is the length of locality of the operator\n${\\cal O}$.\n\nLet us list some of the most important expected properties of the\noperator $\\mathbf{t}$. First, we expect that, like $\\mathbf{t}^*(\\z)$, the operator\n$\\mathbf{t}(\\z)$ is block diagonal,\n\\[\n \\mathbf{t}(\\z):\\ \\ \\mathcal{W}_{\\a,s} \\rightarrow \\mathcal{W}_{\\a,s} \\, ,\n\\]\nwhere, as was explained in \\cite{BJMST08app}, $\\mathcal{W}_{\\a,s}\n\\subset \\mathcal{W}^{(\\a)}$ is the space of quasi-local operators of\nspin $s$. We will deal below mostly with the sector $s=0$.\n\nThen we expect $\\mathbf{t}(\\z)$ to have simple poles at $\\z=\\xi_j$. 
Let us define\n\\begin{equation} \\label{tbj}\n \\mathbf{t}_j = \\res_{\\z=\\xi_j} \\mathbf{t}(\\z) \\frac{d\\z^2}{\\z^2}\n\\end{equation}\nwhile\n\\begin{equation}\n \\mathbf{t}^*_j = \\mathbf{t}^*(\\xi_j) \\, .\n\\label{tb*j}\n\\end{equation}\nIn contrast to (\\ref{tbj}) the operator $\\mathbf{t}^*_j$ is well defined only\nif it acts on the states $X_{[k,l]}$ with $l