\section{Introduction}
Geometrically branched molecules (small-molecule dendrimers
\refto{discovery}
and
dendritic polymers \refto{dendrimer_polymer_review})
are seeing increasingly sophisticated
applications \refto{dendrimer_review_2,dendrimer_review_1},
particularly as biomedical and drug-delivery agents
\refto{dendritic_polymer_medicine}.
The proliferation of free ends (possibly functionalized) and
a rich design space in the interior of these single molecules
make them ideal as engineering platforms.

This work is particularly focused on determining the collective
structure of large collections of dendritic,
or hyper-branched \refto{hyperbranched},
polymers end-grafted on a surface
\refto{brush}.
Even ordinary, linear polymers gain interesting properties when they
are crowded together onto a surface by one end.
In making an analogy between the Edwards single-chain free energy and
an electrostatic system, Semenov \refto{semenov}, and subsequently
Milner, Witten, and Cates \refto{mwc} (making an analogy to the classical
mechanics of a particle falling in an external potential), determined that
the monomer chemical potential takes on a universal profile: the
``parabolic'' potential (and density profile, when the polymer brush is
solvated).
Monodispersity of the chains is the powerful constraint producing this
behavior, as well as the unique self-consistent potential thus determined.
When the chains are grafted on a convex surface \refto{marko},
the parabolic potential profile is no longer the self-consistent solution,
as such an {\it ansatz} would require monomers to overfill space near the
grafting surface (so that the self-consistently determined distribution
of free ends would be
{\it negative} in some regions).
A considerably more complex analysis is required to determine the
self-consistent potential in the presence of regions with zero end-density
(so-called ``dead'' or ``exclusion'' zones).

However, when the polymers in the brush
are branched (star-like \refto{katya}
or dendritic \refto{galen_forrest}-\refto{galen_zook}),
the architectural proliferation
of free ends counteracts the ``overfilling'' effect, and
non-zero end-densities can be achieved even in
the extreme case of a single dendritic polymer in a good solvent.
There \refto{galen_zook}, the absence of dead zones, and the consequent
parabolic potential and monomer density, cause
a single dendritic polymer in good solvent to have a dense core of monomers
whose density decreases monotonically toward its edge.
This is in contrast to the original prediction \refto{degennes_dendrimer}
that the tips of the dendritic polymer would all extend to the
same spherical surface, producing a large, characteristic ``hollow core''
which had eluded detection in early simulations \refto{filled_core}.
The monodispersity of the hyper-branched polymer is again the culprit, as
a massive free-energy degeneracy is a consequence of the parabolic
potential (and is the cause of the isochronous behavior of the harmonic
oscillator \refto{goldstein}).

In this work, we generalize a previous theory
\refto{galen_forrest,dendrimer_copoly} as in Ref.~\refto{galen_zook} in a
``continuous branching'' model for the dendrimer brush.
Vastly different architectures (regularly branched, randomly branched,
and various power-law schemes of branching) can all be handled in the
same theory with few assumptions and fitting parameters.
The parabolic shape of the insertion potential survives the
continuous branching treatment, and is the exact analytic solution for
a remarkably large class of polymer brush architectures.

The paper is organized as follows.
First, we introduce the strong
stretching self-consistent theory for continuously branched brushes,
then we analyze the end-distributions for various architectures. Finally, we
discuss the implications for the near-universal stabilization of the
parabolic profile by branching, and make our conclusions.


\omit{
Dendritic molecules have received much attention since their
invention \refto{discovery},
partly because of their intriguing architecture, but more importantly
for their promise in producing materials of decidedly interesting
properties \refto{dendrimer_review_2,dendrimer_review_1}.
Single dendrimers can be synthesized with extremely regular
architecture, and are capable of forming thermally tunable complexes
as drug-delivery agents \refto{thermal_tune}.
Their well-characterized size and relative stiffness make them suited
to forming complex two-dimensionally packed arrays that can then be
decorated or etched to produce nanoscopically patterned surfaces
\refto{patterns}.
And, the geometric proliferation of chain tips, each capable of being
tagged with a bio-specific functionality, makes them an ideal material
for engineering smart surfaces \refto{darrel_tips}.
It should be pointed out that most naturally occurring biologically
relevant molecules are either lightly branched (the three-armed fatty acids)
or linear (phospholipids, DNA, and proteins), and it is therefore
entirely possible that hyperbranching in and of itself can grant
unusual single-molecule properties with no natural analog.

Related to these dendrimers are dendritic polymers, where the
branching points are connected by flexible spacer polymers of a controllable
molecular weight and composition.
In some respects, these dendritic polymers resemble
polymer stars \refto{daoud}, with
a general splaying of their many free arms away from a dense core.
Indeed, miktoarm stars with an unequal number of $A$ and $B$ arms have been
predicted to
drastically reshape the $AB$ diblock copolymer \nphase diagram \\refto{milner_stars},\nand hyperbranched dendritic polymers are predicted to have the same\neffect \\refto{fredrickson,dendrimer_copoly}.\nThe microphases are consistently skewed toward having the component\nwith the most branches on the exterior of a curved surface,\nwith the most pronounced effects occurring for stars with, for example, \na single $A$ type arm and many $B$ arms.\nSimilarly, when the $A$ block is a linear flexible homopolymer, and the\n$B$ block is a $G$-generation dendritic polymer, the phase diagram\nis skewed considerably toward keeping the branched block on the\nexterior of the cylindrical and spherical domains.\nFor example, it was found that for compositionally symmetric ``tadpole''\ncopolymers \\refto{tadpole}, the lamellar \nphase is stable when the branched block \nis $G1-5$, while the cylindrical phase obtains for $G6-8$, and the spherical\nphase occurs for $G9$ and more branched \\refto{dendrimer_copoly}.\n\nThe purpose of the present work is to extend earlier treatments of this system\nwhich had been limited to this most dramatic case where\nthe $A$ and $B$ blocks of the copolymer are both dendrimers of independent\ngeneration.\nThus, we investigate the strong-segregation limit for $GA-GB$ block \ncopolymers, and determine the phase boundaries between the ``classical''\nblock copolymer phases.\nIn the lamellar phase (L), each of the branched blocks stretches\naway from the flat $AB$ interface, producing a layered material as in \nFigure~1.\nThere are two cylindrical phases, one (CA) with the $A$ material forming\na cylindrical core with the $B$ phase forming a continuous matrix in \nwhich the $A$ cores are hexagonally packed.\nThe second cylindrical phase (CB) has the $B$ material confined to the cores,\nwhile the $A$ material forms the continuous matrix.\nLikewise there are two spherical phases, (SA) and (SB) in which one material \nis confined to spherical domains packed 
on a bcc lattice, while the
other material forms a continuous matrix.
Denoting by $\phi_A$ and $\phi_B$ the overall compositions of the single
dendriblock copolymers, the phase boundaries will be controlled by the
overall composition and the generations of the $A$ and $B$
blocks, $\phi_B(G_A,G_B)$.
The transitions between these phases will always follow the sequence
(SA) - (CA) - (L) - (CB) - (SB) as $\phi_B$ is increased.
The calculations we make are in the spherical/cylindrical unit cell
approximation, and rule out {\it a priori} exotic bicontinuous
phases.

The first set of calculations we employ involves the Alexander-de Gennes
approximation \refto{fredrickson,alexander,degennes}.
We assume that all of the $A$ tips and the $B$ tips reside
on the same surfaces.
In the (L) phase, this results in the $A$ tips being segregated to a single
surface extending away from the $AB$ interface, and similarly for the $B$
tips.
In the (C) phases, the outer block tips all reside on a cylindrical surface
enclosing the core cylinder, while the inner tips
are brought toward the center axis of the core,
and similarly for the (S) phases.
It should be noted that the (L) phase thus resembles back-to-back brushes
of dendritic molecules, while the (C) phase resembles the
conformation of single dendrimer-comb copolymers, where dendrimers are
grafted to a single flexible backbone chain at regular intervals
\refto{dendrimer_review_2}.
The (S) phase resembles the single-molecule conformation of an individual
dendrimer molecule.
Thus, the Alexander approximation can be seen as being similar to
the de Gennes and Hervet {\it ansatz} for a ``hollow-core''
dendrimer \refto{degennes_dendrimer}.
The ``filled-core'' picture of Muthukumar and
Lescansec \refto{filled_core}, however, seems
to be theoretically and experimentally a more sound description.
Indeed, allowing the tips of the internal and external dendrimer to occupy
the entire
lamellar/cylindrical/spherical domain turns out to be a more
realistic assumption.
In the strong-segregation limit, relaxing the confinement of the
chain free ends is achieved in the so-called ``classical limit''
\refto{semenov,mwc} of
the Edwards self-consistent field.



The paper is organized as follows. We initially describe the
Alexander-de Gennes calculation, and then the classical path approximation.
Then, the phase diagrams for the dendrimer-dendrimer
copolymer will be developed.
The results herein rest on the strong-segregation limit, the robustness
of which can be determined in a simple Random Phase Approximation
(RPA) calculation.
Finally, our conclusions will be offered.
}


\section{Model}
We use $n$ as a ``chemical index'' marking the fewest monomers required from
a given monomer to arrive at a non-grafted free end.
Thus, each free tip has a chemical index of $0$, and the unique grafted monomer
has a chemical index of $N$ (analogous to the overall
molecular weight for a linear polymer).
We denote by $f(n)$ the branching profile: the number of statistically
equivalent monomers with the chemical index $n$.
For an unbranched polymer, $f(n)=1$, while for a regular
(double tip-splitting) dendritic polymer
of generation $G$,
$f(n)$ is an exponentially decreasing step function with $f(0) = 2^G$ and discontinuities at
$n/N = 1/G, 2/G, \ldots, (G-1)/G$. This branching profile has been used to
analyze brushes of dendritic polymers \refto{galen_forrest},
copolymers of dendritic polymers \refto{dendrimer_copoly}, and brushes
of star polymers \refto{katya}.
We can model polymers, however, with continuous branching profiles, such as
\begin{equation}
\label{geometric}
f(n) = 2^{G(1-n/N)} \equiv 2^G e^{-b n/N}
\end{equation}
as in \refto{galen_zook}, where the conformation of a single dendrimer
was considered.
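As a check on this continuous-branching picture, the short sketch below (Python; we use the normalized chemical index $n/N \in [0,1]$, the illustrative value $G=5$, and one common convention for the placement of the discrete steps, none of which are taken verbatim from the text) compares the staircase profile of a regular tip-splitting dendrimer with its continuous interpolation:

```python
import math

def f_discrete(n, G):
    # Step profile of a regular tip-splitting dendrimer of generation G,
    # on the normalized chemical index n in [0, 1): it starts at 2**G at
    # the free tips (n = 0) and halves at each junction.  The placement
    # of the steps here is one common convention (an assumption).
    return 2 ** (G - math.floor(n * G))

def f_continuous(n, G):
    # Continuous interpolation, f(n) = 2**(G(1-n)) = 2**G * exp(-b n),
    # with branching index b = G ln 2.
    return 2.0 ** (G * (1.0 - n))

G = 5
steps = 10_000
dn = 1.0 / steps

# Total monomer content per chain is the area under f(n); the continuous
# model smooths the staircase but stays within a factor of order one:
mass_disc = sum(f_discrete((i + 0.5) * dn, G) * dn for i in range(steps))
mass_cont = sum(f_continuous((i + 0.5) * dn, G) * dn for i in range(steps))
print(mass_disc, mass_cont)  # ~12.4 vs ~8.94 for G = 5
```

The two profiles agree at the generation boundaries, and the total monomer contents per chain differ only by a factor of order one, which is the sense in which the continuous model loses only the self-similar fine structure.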
\nThe ``branching index'', $b$ can be chosen so that we model polymers with\n$G$ generations and a junction functionality of 3 modeling tip-splitting\ndendrimers ... but any positive value of $b$ makes physical sense.\nThis ``continuously branched'' polymer model has significant analytic advantages\nand loses only some of the (admittedly interesting) self-similar details of\nthe exact free-end density.\nWe will consider another class of branching profiles:\n\\begin{equation}\n\\label{powerlaw}\nf(n) = \\left[\\frac{1 - \\alpha n\/N}{1-\\alpha}\\right]^b\n\\end{equation}\nmodeling polymers which are branched so that the average number of \nstatistically equivalent monomers at any chemical index \ndecreases as a power law from the free tips.\nHere, $\\alpha$ is a measure of how strongly branched the polymers are, with the degree of branching diverging at $\\alpha \\rightarrow 1$.\nThese two scenarios are depicted in Figure~\\ref{figure1} (top panel). \n\nAt any rate, we consider a set of polymers characterized by a maximal \nchemical index $N$, of monomers of volume $a^3$ with the $n=N$ monomer \nirreversibly grafted to a flat surface with a grafting density of $\\sigma$\nchains per unit area.\nWe consider a melt brush thus formed so that the overall height of the \ngrafted layer is consistent with the incompressibility of the monomers:\n\\begin{equation}\n\\frac{h}{\\sigma} = a^3 \\int_0^N dn f(n).\n\\end{equation}\nIf $\\sigma$ is large enough, then the chains in the brush will be considerably\nstretched from the grafting surface. 
\nAs in Figure~\\ref{figure1}, the $n=N$ monomer is located at $z=0$, and \nthe $f(0)$ equivalent chain tips are located at $z=z_o < h$.\nThe single-chain Edwards free energy of this chain is\n\\begin{equation}\nF_{single} = \\int_0^N dn f(n) \\left[ \\frac{1}{2 a^2} \\left|\\frac{dz}{dn}\\right|^2 +\n a^3 P(z(n)) \\right]\n\\label{single}\n\\end{equation}\nUnder conditions in which $F_{single} >> 1$ (in natural energy units of\n$k_B T$), fluctuations of the chain conformations around the\nconfiguration which minimizes Eq.~\\ref{single} are negligible, and all\nstatistical averages can be calculated from the configurations minimizing \nthe single chain free energy itself.\nHere, the first term counts the Gaussian elastic energy of $f(n)$ equivalent\nchains each consisting of $dn$ monomers and stretched a distance $dz$.\nThe second term counts the energy required to evacuate a volume $a^3$ for\neach monomer in these test chains, at a cost in free energy of $P(z)a^3$.\nThus, $P(z)$ is the monomer pressure in the layer, created by the crowding and \nthe conformations of all of the other chains.\n\nThe minimization of Eq.~\\ref{single} is effected through the usual calculus\nof variations, easily recognizable if the kernel of the integral is\ninterpreted as the Lagrangian of a particle with a time-dependent mass in \na gravitational field $-P(z)$:\n\\begin{equation}\nL[z;\\frac{dz}{dn}] = f(n) \\left[ \\frac{1}{2 a^2} |\\frac{dz}{dn}|^2 +\n a^3 P(z(n)) \\right],\n\\end{equation}\nwith the momentum conjugate to $z(n)$ given by\n\\begin{equation}\np = \\frac{\\partial}{\\partial dz\/dn} L = \\frac{f(n)}{a^2} \\frac{dz}{dn},\n\\end{equation}\nand the generalized force given by \n\\begin{equation}\nf = \\frac{\\partial L}{\\partial z} = f(n) a^2 \\frac{d}{dz} P(z(n)).\n\\end{equation}\nThus, the Euler-Lagrange equation governing the minimzation of $F_{single}$ is\n\\begin{equation}\n\\frac{d}{dn} p = f \n\\end{equation}\nor\n\\begin{equation}\n\\frac{d}{dn} f(n) 
\frac{dz}{dn} = f(n) a^5 \frac{dP}{dz}.
\label{eqmot}
\end{equation}
The physical initial condition on this equation of motion is that
\begin{equation}
\left. \frac{dz}{dn}\right|_{n=0} =0,
\end{equation}
thus requiring that there be no tension on the free chain end.

The pressure field, $P$, can be calculated self-consistently from the
solutions to Eq.~\ref{eqmot}.
First, $P$ must satisfy an isochronous property.
Following the time-dependent mass analogy, a particle of mass $f(n)$ released at rest
from a position $z_o$ must hit the grafting surface $z=0$ when $n=N$.
This condition must hold at every position with a non-zero density of
free ends in the layer:
\begin{equation}
\label{sc1}
N = \int_0^N dn = \int_0^{z_o} dz \frac{1}{|dz/dn|}.
\end{equation}
Also, given such a $P$, it must be possible to create a layer completely
filled with monomers at every $z$, without overfilling space, for the melt
brush we consider here. The
common formulation of this condition is that the height density of free
tips, $d \sigma / dz$, be positive:
\begin{equation}
\label{sc2}
\Phi(z) \equiv 1 = \int_z^h dz_o \frac{d \sigma (z_o)}{dz_o} \phi(z;z_o)
\end{equation}
with the volume fraction of monomers in the vicinity of $z$ for a chain with its
$f(0)$ ends located at $z_o$:
\begin{equation}
\phi(z;z_o) = \sigma a^3 f(n) \frac{dn}{dz}.
\end{equation}
When $f(n)\equiv1$ (an ordinary polymer brush), self-consistency and the
isochronous condition are satisfied
when \refto{mwc}
\begin{equation}
P(z) = P_o (1-z^2/h^2)
\end{equation}
with $P_o = \pi^2 a \sigma^2/8$ and
\begin{equation}
\frac{d \sigma}{dz} = \sigma \frac{z/h}{\sqrt{h^2 - z^2}},
\end{equation}
remarkably compact and elegant expressions.

One approach to solving the self-consistency equations,
Eqs.~\ref{sc1}-\ref{sc2}, when $f(n) \ne 1$ is
to solve this set of integral equations for the unknown fields $P(z)$ and
$d
\sigma /dz$ simultaneously and numerically.
However, knowing that an arbitrary $f(n)$ gives the same physics as a time
dependent mass in a gravitational field, we can be confident that the
{\it parabolic} pressure profile is a fruitful {\it ansatz}.
A time-dependent mass in a harmonic gravitational potential will have the
same falling time no matter where it is released from rest.
The equation of motion under these conditions becomes:
\begin{equation}
\label{parabolic_eq}
\frac{df}{dn} \frac{dz}{dn} + f(n) \frac{d^2 z}{dn^2} = -2 f(n) a^5 P_o z/h^2
\end{equation}
with $dz/dn|_0=0$, where the pressure scale $P_o$ still has to be
determined self-consistently by enforcing $z(N)=0$ numerically.

To make further progress, we need to specify $f(n)$, but there is a
general result that can be gleaned at once: if the distribution of ends is
non-zero for all $0 < z_o < h$ (that is, $d\sigma/dz > 0$ throughout the
brush), then the parabolic
{\it ansatz} is indeed the correct self-consistent inter-monomer interaction
potential.


\section{Discussion}
Given how successful the parabolic {\it ansatz} is, it is tempting to
jump to the erroneous conclusion that the parabolic potential is the
self-consistent potential for {\em any} choice of $f(n)$.
The two choices we have made result in nicely analytic minimum-action
trajectories, $z(n;z_o)$, but the critical feature required is that
$f(n)$ be a decreasing function of chemical index.
That is, the chains are unbranched at the grafting surface, and become more
and more branched as they continue away from the grafting surface.
If the chains started highly branched at the grafting surface, with
subchains combining into loops as one left the grafting surface, so that the
chain became less and less branched toward the free ends,
then the brush would resemble a layer of Alexander-de Gennes
\refto{alexander,degennes} brushes
of loops, with the free ends evacuated from a large region of the
brush
\refto{fredrickson}. The self-consistent potential is then known
to be radically different from the parabolic potential found here, although the
real potential still satisfies an isochronous, or monodisperse, condition.

Indeed, if we start with an unbranched set of chains, $f(n)=1$,
and alter the branching a bit by
\begin{equation}
f(n) = 1 + \epsilon(n),
\end{equation}
with $\epsilon(n)$ a small function that is very strongly peaked at $n=0$,
it is easy to show that the surface end-density must be negative, and
space is overfilled, if the parabolic potential is followed.
Essentially, we need to increase the number of chain free ends located with
$z_o$ near the full brush height, $h$, above the unbranched limit. This causes
the number of monomers entrained into the layer to be slightly larger than it would
have been had $\epsilon(n)=0$, with the result that building the brush from the
outer edge to the surface while maintaining $\Phi(z)=1$ will overfill space with
monomers at the grafting surface.

So, if $f(n)$ is a {\em rapidly} decreasing function of $n$, each chain ``dumps''
almost all of its monomers right in the vicinity of the free end location,
$z_o$.
In this case, the molten layer resembles a set of large, dense microgels
tethered by a net of branched chains of much lower molecular weight.
The minimum-action chain trajectories become (essentially) particles at rest
for most of $n < N$.
At absolute magnitudes fainter than $H=5$, deep water ice absorption is never seen.

While such a correlation of absolute magnitude and presence of
water ice might be expected simply from the increased albedo
of objects with more water ice on their surfaces, Spitzer radiometry
has shown that the $H>3$ KBOs are indeed smaller than the $H<3$ KBOs
\citep{2008ssbn.book..161S}.
Somewhere between the $\sim$650 km diameter of Ixion and
the $\sim$900 km diameters
of Quaoar and Orcus, KBOs appear
to dramatically change in surface composition.

The change in surface composition is clearly not monotonic with
size. Ixion, 2002 UX25, 2002 AW197, 2004 GV9,
and 2003 AZ84 have $\sim$650 km diameters
within their uncertainties \citep{2008ssbn.book..161S}, but
only 2003 AZ84 has a large abundance of water ice on the surface.
Likewise, the smaller 2005 RM43, which would have a diameter of
$\sim$520 km if it has the same albedo as 2003 AZ84, has water
ice absorption as deep as the larger objects Quaoar and 2007 OR10,
while the similarly-sized objects Varuna and Huya have much
smaller absorption depths.

Smaller than the $\sim$520 km size of 2005 RM43, however, no (non-Haumea
family member) KBO has been found with strong water ice absorption.
Few objects in this range, however, are bright enough for
high quality spectroscopy (1996 TP66, with a diameter of $\sim$180 km
for an assumed 0.1 albedo, is the smallest KBO with a well-measured
spectrum). Nonetheless, spectroscopic evidence from
centaurs and photometric evidence from smaller KBOs (see below)
suggests that this trend continues, and that between 500 and 700 km
in diameter, KBOs transition from the typical surfaces of small KBOs
to surfaces dominated by absorption due to water ice.

Additional evidence for a surface change in this diameter
range comes from measurements
of albedos.
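The interplay between albedo, diameter, and absolute magnitude quoted above follows from the standard relation $D = (1329~{\rm km})\, p_V^{-1/2}\, 10^{-H/5}$. The sketch below (Python; the 650 km diameter and the albedo values are assumed for illustration, not taken from the Spitzer fits) shows how an albedo jump at fixed size shifts objects brightward in $H$:

```python
import math

def diameter_km(H, albedo):
    # Standard asteroid/KBO relation: D = 1329 km / sqrt(p_V) * 10**(-H/5).
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-H / 5.0)

def absolute_magnitude(D_km, albedo):
    # Inverse of the same relation.
    return 5.0 * math.log10(1329.0 / (D_km * math.sqrt(albedo)))

# Illustrative, assumed albedos (not the Spitzer-derived values): at a
# fixed 650 km diameter, doubling the albedo brightens H by
# 2.5 log10(2) ~ 0.75 mag, which is why an albedo jump appears as an
# excess of bright objects in the cumulative H distribution.
H_dark = absolute_magnitude(650.0, 0.1)
H_bright = absolute_magnitude(650.0, 0.2)
print(round(H_dark, 2), round(H_bright, 2))  # ~4.05 and ~3.30
```

The same relation makes the albedo-dependence of the diameters quoted for objects like 1996 TP66 explicit: at fixed $H$, the inferred diameter scales as $p_V^{-1/2}$.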
\nThough the size and\nalbedo data available from Spitzer photometry have large\nuncertainties, a jump in albedo at this size range \nis apparent.\nThe same effect is see in the\ncumulative absolute magnitude distribution of the large\nobjects. While this distribution follows a power law from \nabsolute magnitudes of 3 until at least 4.5, for objects \nbrighter than $H=3$, the cumulative number distribution is\nlarger than would be expected from this power law, a clear\nsign of the increased albedos of these mid-sized\nobjects \\citep{2008ssbn.book..335B}. Implications for this surface\nchange are discussed below.\n\n\\subsection{Ammonia}\nOf these mid-sized objects with strong water ice absorption\nfeatures, two have evidence for a 2.25 $\\mu$m absorption\nfeature due to ammonia. Ammonia was first detected on Charon\n\\citep{2000Sci...287..107B,2007ApJ...663.1406C,2010Icar..210..930M} and later suggested to be present also on\nOrcus \\citep{2008A&A...479L..13B}. In both cases, the presence of ammonia\nwas postulated to be due to the flow of ammonia-rich interior\nliquid water on to the surface of the object at some point in\nthe past. Detailed models of the interior structure and evolution\nof these two bodies have suggested that such a scenario \nappears reasonable \\citep{2007ApJ...663.1406C, 2010A&A...520A..40D}.\n\nAmmonia has not been detected on the other water ice-rich objects,\nbut for Quaoar a bright object where high quality\nspectra can be obtained, the 2.3 $\\mu$m region of the spectrum\nwhere ammonia has its only detectable near-infrared feature is\ninstead dominated by the strongest absorption feature due to CH$_4$.\nAmmonia has additional absorption features beyond 3 $\\mu$m which\ncould be distinguished from CH$_4$ but spectral measurement\nat these wavelengths awaits larger ground or space-based telescopes.\nThe smaller water-ice-rich objects 2003 AZ84 and 2005 RM43 have spectra with\ninsufficient signal-to-noise to detect ammonia. 
Larger telescopes will
again be required.

Based on the likely presence of ammonia on Charon and Orcus,
and the sharp increase in water ice absorption with larger
size for these objects,
we hypothesize that on these largest objects the presence
of water ice -- whether ammonia is detectable or not -- is a
remnant of past liquid flows on the surface. We predict that
for these objects ammonia will always be detected when sufficient
signal-to-noise is available. The liquid flows
need not be recent -- the purity of the water ice on the surface of
Haumea and its family shows that water ice can remain pristine through
the age of the solar system -- but the liquid flows must have
occurred after the volatiles whose irradiation would cause coloration
(see below) had all escaped. The increase in water ice absorption
with size would be a natural consequence of the larger
interior liquid reservoirs of larger objects. The sharp increase
in water ice absorption starting at diameters around $\sim$650 km
gives an important clue to the physics of liquid interiors
and surface water flows.

\section{Small objects: spectroscopic constraints}
\subsection{Kuiper belt objects}

Many objects smaller than the size where strong water ice
absorption is present are statistically
consistent with having no water ice on their surface, but the data
show a significant prevalence of positive water ice detections.
Random and known systematic errors would not produce a bias towards
water ice detection. Indeed, we regard the nearly complete lack of
objects with negative water ice fraction as an indication of the
robustness of our method.
We conclude, therefore, that even the low\nlevel of water ice fraction detected in the majority of the\nobjects is a real indication that water ice at a low level\nis common even on the objects smaller than Ixion.\n\n2002 VE95, the smallest Kuiper belt object with a very high quality \nspectrum ($\\sim$330 km diameter assuming a 0.1 albedo), shows\nclear evidence of crystalline water ice on its surface\n\\citep{2008AJ....135...55B, 2006A&A...455..725B}.\nIndeed, the 1.65 $\\mu$m\nabsorption feature of crystalline water ice can always be detected when\nthe signal-to-noise is sufficiently high. \n\nBased on their smaller sizes, it\nseems likely that the water ice on these smaller objects is\nnot caused by liquid flows on the exterior, but rather by the\nexposure of crustal water ice. An important test of this expectation\nwould be that these smaller KBOs should not show the presence\nof ammonia on their surfaces. Currently feasible spectroscopy\ncannot achieve the signal-to-noise required to make this test, however.\n\n\n\\subsection{Centaurs}\nKBOs smaller than those discussed above are too faint for high quality\nspectroscopy even with the largest telescopes in the world. To understand\nthe compositions of these objects, we have to resort to\nobservational proxies. One important\nproxy has been spectroscopic observations of Centaurs. Centaurs are\nformer KBOs which have been perturbed onto short-lived planet-crossing\norbits. Being much closer to the sun than typical KBOs, they are \nbrighter and easier to study in detail.\n\nSpectroscopically, the Centaurs appear indistinguishable\nfrom the smallest KBOs whose spectra can be measured: objects contain\neither no detectable absorption features, a small amount of absorption \ndue to water ice, or (in one case) absorption due to water ice and \nmethanol. Figure 5 includes water ice absorption depth for a large\nsample of Centaurs, compared to KBOs. 
No discernible difference\ncan be seen between the largest of the centaurs and the smallest of\nthe measured KBOs. \nThe amount of water ice absorption seen on the surface of a centaur\ndoes not appear to correlate with anything, including\nperihelion, semimajor axis, optical color, dynamical lifetime, or \nactivity. Interestingly, while much speculation has occurred about \nCentaur surface evolution as objects move closer to the sun and \nbegin heating, infrared spectroscopy shows no such evidence of any\nchange in the distribution of water ice absorption depth.\n\n\\subsection{Methanol}\nAn absorption band at $\\sim$2.27 $\\mu$m\nwas first detected on the bright\ncentaur Pholus \\citep{1993Icar..102..166D}, and \\citet{1998Icar..135..389C} present \nthe case that this band is plausibly due to the presence of\nmethanol, though they point out that the identification is\nnot unique and that other low molecular weight hydrocarbons\nor photolytic products of methanol might fit the spectrum equally well.\nPholus is one of the reddest objects in the solar system, again\nfitting our picture of optical colors of irradiated hydrocarbons well.\n\nBased on lower signal-to-noise spectra with properties\nwhich resemble the water-ice-plus-methanol spectrum of Pholus, the presence of\nmethanol was suggested on the KBOs 1996 GQ21 and 2002 VE95\n\\citep{2008ssbn.book..143B}. \nTo examine this possibility more closely, we combine the Keck spectra\nof these two objects to increase signal-to-noise and consider\nthe presence of methanol. While the signal-to-noise\nremains low, the presence of absorption features similar\nto those on Pholus is certainly plausible.\nBoth objects, like Pholus, are red.\n\nA handful of other KBOs have recently been reported to also have\nabsorption features near 2.27 $\\mu$m, but, unlike\nPholus, 1996 GQ21 and 2002 VE95, to not have absorption\ndue to water ice \\citep{2011Icar..214..297B}. 
The signal-to-noise in
the spectral region is low,
so it is difficult to determine
if the absorption features are real.
To examine the possibility that
a $\sim$2.27 $\mu$m absorption feature occurs on faint KBOs,
we first examine all of the KBOs and centaurs in the Keck sample
(which includes none of the potential methanol objects from
the VLT sample).
We find that a small number of objects have absorption features
at or near the 2.27 $\mu$m methanol absorption line, but
no single spectrum is sufficiently reliable in this region to
assert a positive detection. To increase the signal-to-noise,
we sum the spectra of all of the KBOs and centaurs in the Keck
sample,
except for those with water ice absorption at the level of
that seen on 2003 AZ84 or deeper, and those already suspected
to contain methanol-like features (1996 GQ21 and 2002 VE95). This combined spectrum shows
residual absorptions due to water, as would be inferred from
the positive detections on most objects. The only major deviation
from the water ice spectrum occurs at precisely the wavelength
of the feature seen on Pholus and suspected on 1996 GQ21 and 2002 VE95.
We conclude that
the methanol-like feature is indeed present at a low level on
KBOs, even though the feature cannot be reliably identified in
individual spectra
(Figure 6).
\begin{figure}
\includegraphics[scale=.75]{fig6.eps}
\caption{The region of the near-infrared spectrum containing
the 2.27 $\mu$m absorption feature attributed to methanol.
The feature can clearly be seen on Pholus, superimposed on a
small amount of water ice absorption. The combined
(to increase signal-to-noise) spectrum of 1996 GQ21 and 2002 VE95
shows hints of a feature at the same location.
The sum of 38\nKBO and centaur spectra, none of which individually show clear evidence\nfor the feature, clearly shows a feature at the same spot.\n}\n\end{figure}\n\n\nNo hypothesis has ever been formulated for the \nsporadic presence of methanol on KBOs or centaurs, other\nthan to point out that methanol is common in cometary comae,\nso it is expected to be present in the interior of KBOs.\nIts presence is less expected on the surface of KBOs, however,\nas the absorption features of hydrocarbon ices quickly degrade\nunder irradiation while the remnants turn red \citep{2006ApJ...644..646B}.\nVisible methanol absorption features suggest that the\nmethanol has only recently been exposed at the surface, perhaps\nas a result of a collision exposing the subsurface. \nOne prediction of this suggestion would be that the amount of\nmethanol observed would vary as different faces of the object were\nobserved. While such a test is possible in principle, in practice\nspectroscopy of these faint objects \nis sufficiently difficult that variation would be hard to\nprove. If, however, irradiation preferentially turned the regions\nwith exposed methanol red, these objects should perhaps show color\nvariation with rotation, something which is otherwise rarely \nobserved in objects of this size \citep{2008ssbn.book..129S}.\n\n\subsection{Silicates}\nIn addition to ices and their irradiation products, we should\nexpect that KBOs small enough to be\nundifferentiated expose some of their\nrock component on the surface. While silicates such as\nolivine or pyroxene\nare often included in detailed models of KBO spectra \n\citep[e.g.][]{2010Icar..208..945M,2011Icar..214..297B}, no\nspecific absorptions are easily observable: olivine, for example, has a \nbroad absorption centered at around 1 $\mu$m, where observations\nare usually poor. 
These silicates are thus included in models\nto fit the overall spectral shapes over the $\sim$0.6 - 1.2 $\mu$m\nrange.\n\nMid-infrared observations offer the possibility of positively\nidentifying silicate emission in the 10 and 20 $\mu$m region \non KBOs. To date only the centaur Asbolus has had a \nspectrum measured with sufficient signal-to-noise to even\ndetect these spectral regions, and, in this case, emissivity\npeaks around 10 and 20 $\mu$m are indeed seen, similar to\nthose observed in Trojan asteroids \citep{2006Icar..182..496E,2008ssbn.book..143B},\nand interpreted to be due to fine-grained silicates.\nFuture observations will require improved space-based \nmid-infrared facilities, but characterization of \nactual silicate composition is indeed possible.\n\nWhile olivines or pyroxenes cannot be specifically identified\nin the visible to near-infrared range,\nthere have been a few reports of shallow broad absorption\nfeatures in the visible wavelength range\nsimilar to absorptions seen on some asteroids.\nOn asteroids these are generally attributed to aqueously \naltered silicates.\nConfirmation of these features has been difficult; the absorptions\nare subtle and have frequently appeared changed or absent upon reobservation\nof the same object \citep{2004A&A...421..353F,2004A&A...416..791D, 2009A&A...508..457F, 2008A&A...487..741A, 2003AJ....125.1554L}.\nWhile these changes are usually attributed to rotational\nvariability, it is worth noting that this speculation has never\nbeen verified. Indeed, for one object, observations over half \nof a rotational period showed no signs of the visible absorption \nfeatures.\n\nWhile it is possible that these difficult-to-confirm features \nare a product of sporadic systematic error, the possibility\nof the existence of aqueously altered silicates is \nan interesting one to consider. 
The presence of\nliquid water on surfaces in the Kuiper belt might seem surprising, but\nhydrous materials seem to be present in small comets, interplanetary\ndust particles, and debris disks \\citep{2004A&A...416..791D}. \nIf these detections are indeed real, the most surprising thing\nabout them, perhaps, is that they are uncommon. \n\n\n\n\\section{Small objects: photometric constraints}\nSmall objects in the Kuiper belt are too faint to detect spectroscopically,\nbut photometric measurements can still give information -- albeit limited --\nabout the surfaces of these objects. To date, the single most robust\nconclusion based on photometry is that Kuiper belt surfaces are\ndiverse.\nLarge surveys of KBO optical\ncolors have found a wider range of\nsurface colors\nin the Kuiper belt than in any other small body population in the \nsolar system. The colors are uncorrelated with\nmost dynamical or physical properties \\citep[see review in][]{2008ssbn.book...91D}.\nA few systematic patterns\nhave been found, however, which are important clues to understanding the\nsurface compositions of these outer solar system objects.\n\n\\subsection{The bifurcated colors of centaurs} \nThe range of centaur optical colors generally covers the same wide\n range of colors\nfound in the Kuiper belt, but the centaurs are deficient in colors in\nthe middle part of the range, giving the distribution of centaur optical\ncolors\na bimodal appearance \\citep{2008ssbn.book..105T} with a neutral\nand a red clump of objects. Interestingly, this bifurcation is not\nseen only in the centaurs, but it appears to extend to all objects with\nlow perihelion distance whether the objects are dynamically unstable or\nnot. 
This result immediately suggests that the bifurcation in the \noptical colors is somehow formed through the increased heating or\nirradiation experienced by lower perihelion objects.\n\nThe H\/WTSOSS (Hubble\/WFC3 Test of Surfaces in the Outer Solar System) survey, which used the Hubble Space Telescope to extend\n optical photometry of centaurs (and other low\nperihelion objects) into the near-infrared\n\citep{hwtsoss}, found that the neutral and red clumps do not consist\nof two groups with identical surfaces, \nbut rather are best described by two groups that fall along two\nseparate \nmixing lines.\nThe neutral clump of objects consists of a mixture of\na nearly neutrally reflecting material and a slightly red material,\nwhile the red clump of objects consists of a mixture of the same\nneutrally reflecting material and a much redder material\n(Figure 7). \n\begin{figure}\n\includegraphics[scale=.65]{fig7.eps}\n\caption{\nThree-color HST photometry\nof objects with perihelia inside of 30 AU from\nthe H\/WTSOSS survey. The filters are at 0.606, 0.814, 1.39, and\n1.54 $\mu$m, corresponding roughly to $R$, $I$, $H$-continuum, \nand a filter sampling the 1.6 $\mu$m water ice absorption \nfeature. \nThe yellow point shows the\ncolors of the sun. \nThe objects appear to bifurcate into two clumps, one with near-solar optical colors\n(the ``neutral'' clump) and one with significantly redder optical colors (the ``red'' clump).\nThe neutral clump (on the left in [F606W]-[F814W]\ncolors) shows a clear spread in the other colors, while the\nred clump shows a spread in all three colors. 
The color variation in\nboth clumps can be described by a mixing line (purple lines) where\nthe neutral and red clumps both have a single \ncommon end member but are mixed with either a more neutral or more \nred component, respectively.}\n\n\n\end{figure}\n\n\nWhile three color photometry is generally incapable of identifying specific \nices or minerals, \citet{hwtsoss} find that the neutral component \ncommon to all centaurs is consistent with some of the same\nhydrated silicates suggested from the optical spectroscopy\ndiscussed above. Significantly more spectral work is required,\nhowever, to further explore this possibility.\n\n \subsection{The uniform colors of the cold classical KBOs}\nWhile most of the rest of the \nKuiper belt appears to be composed of essentially the\nsame distribution\n of neutral to red objects \citep{morbandbrown,2008ssbn.book...91D},\none dynamical region stands out for its\nhomogeneous composition. The cold classical Kuiper belt was first identified\nas a dynamically unique region of the Kuiper belt -- a difficult-to-explain \noverabundance of low inclination, dynamically cold objects beyond about\n41 AU \citep{2001AJ....121.2804B}. \nSubsequent observations revealed that these objects\nshared a common red coloring \citep{2002ApJ...566L.125T}. \nIn the H\/WTSOSS survey these objects do not fall along mixing\nlines, as the neutral and red clumps of centaurs do, but instead appear\nnearly uniform in color space \citep{hwtsoss}.\nThese cold classical KBOs are\nalso known to be unique in their lack of large bodies \n\citep{2001AJ....121.1730L}, their\nhigher abundance of satellites \citep{2008Icar..194..758N}, \nand their different size distribution \citep{2010Icar..210..944F}.\nAlthough our understanding of albedos in the Kuiper belt is\nstill poor, preliminary results suggest that the cold classical KBOs also appear to have higher albedos than\nthose of the remaining population\n\citep{2009Icar..201..284B}. 
\nAll of these properties appear to signify a population with a \ndifferent -- and perhaps unique -- formation location or history. \nUnderstanding the surface compositions of these objects will be \na challenge given their distance and small sizes. In addition, these\nobjects are dynamically stable, so we will likely never obtain samples\nof this population as centaurs or comets.\n\n\n\subsection{The diverse colors of the remainder of the Kuiper belt}\nOther than the unique color properties of the centaurs and cold classical\nKBOs, the bulk of KBOs have no discernible pattern to their colors.\nThis finding in itself is significant for understanding the causes\nof the colors of KBOs.\nThis lack of\na connection between color and any dynamical property, \nparticularly with semi-major axis or perihelion\/aphelion \ndistance, argues strongly that local heating, UV irradiation, \nand solar wind and cosmic ray\nbombardment \citep{2003EM&P...92..261C} cannot be responsible for\nthe varying colors of the Kuiper belt. Local conditions appear to\nhave no primary influence on the colors of KBOs. \nFurthermore, careful measurement of the colors of the separate \ncomponents of binary KBOs has shown a tight correlation over the full\nrange of Kuiper belt colors \citep{2009Icar..200..292B}. \nThe colors of two KBOs in orbit around\neach other are almost always nearly identical.\nThis fact immediately rules out any stochastic process, such as\ncollisions, as the cause of these Kuiper belt colors. Indeed, \ngiven the lack of correlation of color with local conditions, the\nnearly identical colors of binary KBOs argue that colors are simply primordial.\nIf binary KBOs were formed by early mutual capture in a \nquiescent disk \citep{2002Natur.420..643G}, \nthe two components would likely have formed in very similar locations. 
If,\nalternatively, binary KBOs were formed in an initial gravitational\ncollapse \citep{2010AJ....140..785N}, the objects would of necessity have formed at the \nsame location and of the same materials. \n\n\subsection{The transition from KBOs to centaurs}\nThe manner in which primordial KBO surfaces evolve to become\nthe color-bifurcated centaur population could provide important \nclues to the compositions of both surfaces. While it appears\nthat a transition from a unimodal to bimodal color distribution\nmust occur as objects move to lower perihelia, the actual\nevidence for change is, in fact, weak.\nThe KBOs and centaurs with measured colors\nhave very different ranges of sizes, \nand, as demonstrated above, large KBOs have their\nsurfaces modified by the presence of volatiles and water ice. We\nmust therefore compare only like-sized objects. In addition, the cold\nclassical KBOs, with their unique surfaces, likely never enter the\ncentaur population, so these should be excluded from the \ncomparison. Finally,\nthe KBOs, being more distant, are likely to have higher uncertainty in\ntheir color measurements. \n\n\begin{figure}\n\plotone{fig8.eps}\n\caption{\nA comparison of the colors of (non-cold classical)\nKBOs (thin line) and centaurs (thick line) in the $6P_z\/A$.}\n\end{figure}\nAs a reasonable first approximation, we treat the nucleus as a\ncollection of $N=3A$ valence quarks, each of which, on average, carries\nlongitudinal momentum $x_{0}P_{z}\/A$ with $x_{0}=1\/3$.\nIn our approach the cumulative pion production proceeds in two steps.\nFirst, a valence quark with a scaling variable $x>1$ is created.\nAfterwards\nit decays into the observed hadron with its scaling variable $x$ close\nto that of the initial cumulative quark. 
This second step is described by\nthe well-known quark fragmentation functions\n\cite{CapellaT81} and will not be\ndiscussed here.\n\nThe produced cumulative (\"active\") quark acquires\na momentum much greater than $x_{0}P_{z}\/A$ only if this\nquark has interacted by means of gluon exchanges\nwith $p$ other quarks of the flucton (\"donors\")\nand has taken some of their longitudinal momenta (see Fig.1).\nIf this active quark accumulates all\nthe longitudinal momentum of these $p$ quarks, then\n$K_z=(p+1)x_0P_z\/A$ and the donors become soft.\nIt is well-known that interactions which make the longitudinal momentum\nof one of the quarks equal to zero may be treated by perturbation theory\n\cite{Brodsky92}. This allows us to calculate the part of Fig. 1\nresponsible for the creation of a cumulative quark explicitly.\nThis was done in \cite{NPB94,YF97}, to which we refer the\nreader for all the details. As a result, we were able to explain the\nexponential fall-off of the production rate in the cumulative region.\n\nSince with the rise of $x$ the active quark has to interact with\na greater number of donors, one expects that its average transverse momentum\nalso grows with $x$. Roughly, one expects that $\langle K_{\perp}^2 \rangle$\nis proportional to the number of interactions, that is, to $x$.\nIn \cite{NPB94,YF97}\nthis point was not studied:\nwe limited ourselves to the inclusive cross-section\nintegrated over the transverse momenta,\nwhich led to some simplifications.\nThe aim of the present paper is to find\nthe pion production rate dependence on\nthe transverse momentum\nand the mean value of the latter\nas a function of $x$ in the cumulative region.\nThis dependence and also the magnitude of $\langle K_{\perp}^2 \rangle$\nhave been studied experimentally. 
The comparison of our predictions\nwith the data allows us to obtain further support for our model and\nto fix one of its two parameters (the infrared cutoff).\n\n\n\section{The $K_{\perp}$ dependence}\n\nRepeating the calculations of the diagram in Fig.1 described in\n\cite{NPB94,YF97}, but without\nrestricting ourselves to the inclusive cross-section\nintegrated over the transverse momentum,\nwe readily find that all dependence upon\nthe transverse momentum $K_{\perp}$ of the produced particle\nis concentrated in a factor:\n\begin{equation}\nJ (K_{\perp})=\n\int\n\rho_A(\underbrace{r,...,r}_{p+1}|\underbrace{\ol{r},...,\ol{r}}_{p+1})\nG(c_1,...,c_p)\n\prod_{i=1}^{p}\n\lambda(c_i-r)\lambda(c_i-\ol{r})d^2 c_i\ne^{i(\ol{r}-r)K_{\perp}}\nd^2 r d^2 \ol{r}\n\end{equation}\nHere $\rho_A$ is the (translationally invariant)\nquark density matrix of the nucleus:\n\begin{equation}\n\rho_A(r_i|\ol{r}_i)\equiv\n\int\n\psi_{\perp A}(r_i,r_m)\n\psi^*_{\perp A}(\ol{r}_i,r_m)\n\prod_{m=p+2}^{N} d^2 r_m\n\end{equation}\nwhere\n$\psi_{\perp A}$ is the transverse part of\nthe nuclear quark wave function. 
The propagation of soft donor quarks\nis described by\n\begin{equation}\n\lambda(c)=\frac{K_0(m|c|)}{2\pi}\n\label{lam}\n\end{equation}\nwhere $m$ is the constituent quark mass and $K_0$\nis the modified Bessel function (the Macdonald function).\nThe interaction with the projectile contributes a factor\n\begin{equation}\nG(c_1,...,c_p)=\n\int\n\prod_{i=1}^{p}\n\sigma_{qq}(c_i-b_i)\n\eta_H(b_1,...,b_p) d^2 b_i\n\end{equation}\nwhere $\sigma_{qq}(c)$ is the quark-quark cross-section\nat a given value of impact parameter $c$ and\n\begin{equation}\n\eta_H(b_1,...,b_p)=\n\sum_{L\geq p}\frac{L!}{(L-p)!}\n\int|\psi_{\perp H}(b_i)|^2\n\delta^{(2)}(\frac{1}{L}\sum_{i=1}^{L} b_i)\n\ d^2 b_{p+1}...d^2 b_L\n\end{equation}\nis a multiparton distribution in the projectile, expressed via\nthe transverse part of\nits partonic wave function $\psi_{\perp H}$.\nIf we integrate $J (K_{\perp})$ over $K_{\perp}$ we recover\nour earlier result (Eq. (33) in \cite{NPB94}):\n$$\n\int J (K_{\perp})\n\frac{d^2 K_{\perp}}{(2\pi)^2}=\n\rho_A(\underbrace{0,...,0}_{p+1}|\underbrace{0,...,0}_{p+1})\n\int\nG(c_1,...,c_p)\n\prod_{i=1}^{p} \lambda^2(c_i-r)\nd^2 c_i d^2 r\n$$\n\nIf one assumes factorization of the multiparton distribution\n$\eta_H(b_1,...,b_p)$ then\\\n$G(c_1,...,c_p)$ also factorizes:\n\begin{equation}\nG(c_1,...,c_p)=\n\prod_{i=1}^{p}\nG_0(c_i)\n\label{facG}\n\end{equation}\nFollowing \cite{YF97} we use the quasi-eikonal approximation\nfor $\eta_H$:\n$$\n\eta_H(b_1,...,b_p)=\n\xi^{(p-1)\/2}\nu_H^{p}\prod_{i=1}^{p}\eta_H(b_i)\n$$\nwhere $\xi$ is the quasi-eikonal diffraction factor,\n$\nu_H^{}$ is the mean number of partons in the projectile hadron\nand the single parton distribution $\eta_H(b)$\nis normalized to unity.\nIn a Gaussian approximation\nfor $\sigma(c)$ and $\eta_H(b)$ we find:\n$$\nG_0(c)=\n\xi^{\frac{1}{2}-\frac{1}{2p}}\n\frac{\nu_H\sigma_{qq}}{\pi r_{0H}^2}\ne^{-\frac{c^2}{r_{0H}^2}}\n$$\nwhere 
$\sigma_{qq}$ is the total quark-quark cross-section,\n$r_{0H}^2=r_0^2+r_H^2$, and $r_0$ and $r_H$ are the widths of\n$\sigma(c)$ and $\eta_H(b)$, respectively.\n\nWith the factorized $G(c_1,...,c_p)$ (\ref{facG}) we have\n$$\nJ (K_{\perp})=\n\int\n\rho_A(\underbrace{0,...,0}_{p+1}|\underbrace{\ol{r}-r,...,\ol{r}-r}_{p+1})\nj^p(r,\ol{r})\ne^{i(\ol{r}-r)K_{\perp}}\nd^2 r d^2 \ol{r}\n$$\nwhere\n$$\nj(r,\ol{r})=\n\int d^2 c\nG_0(c)\n\lambda(c-r)\n\lambda(c-\ol{r})\n$$\nWe have also used the translational invariance of the $\rho$-matrix.\nNote that near the real threshold we have no spectators and\n$$\n\rho_A(\underbrace{0,...,0}_{p+1}|\underbrace{\ol{r}-r,...,\ol{r}-r}_{p+1})\n=\rho_A(\underbrace{0,...,0}_{p+1}|\underbrace{0,...,0}_{p+1})\n$$\nIn any case, large $K_{\perp}$ corresponds to small $\ol{r}-r$, so\nwe factor $\rho_A$ out of the integral at the zero point.\nIn the remaining integral we pass to the variables\n$$\nB=\frac{r+\ol{r}}{2}, \hs 1 b=\ol{r}-r\n$$\nand shift the integration variable $c$, then\n\begin{equation}\nJ (K_{\perp})=\n\rho_A(\underbrace{0,...,0}_{p+1}|\underbrace{0,...,0}_{p+1})\n\int\nj^p(B,b)\ne^{ibK_{\perp}}\nd^2 b d^2 B\n\end{equation}\nwhere\n\begin{equation}\nj(B,b)=\n\int\nG_0(B+c)\n\lambda(\frac{b}{2}-c)\n\lambda(\frac{b}{2}+c)\nd^2 c\n\label{j}\n\end{equation}\n\n\n\section{The calculation of $\langle |K_{\perp}|\rangle $}\n\nNow we would like to find the width of the distribution on $K_{\perp}$\nas a function of $p$ or, equivalently, of the cumulative\nnumber $x=(p+1)\/3$.\nFrom the mathematical point of view it is simpler to calculate\nthe mean squared width of the distribution $\langle K_{\perp}^2\rangle $.\nUnfortunately, in our case this quantity is logarithmically\ndivergent at large $K_{\perp}$.\nThis divergence results from the behavior of $j(B,b)$\nat small $b$. 
This behavior is determined by the behavior of\n$ \lambda(b)=K_0(m|b|)\/(2\pi) $ (\ref{lam}),\nwhich has a logarithmic singularity\nat $|b|=0$. The smooth function $G_0(B+c)$ does not affect this behavior.\n\nFor this reason we shall instead calculate $\langle |K_{\perp}|\rangle $:\n\begin{equation}\n\langle |K_{\perp}|\rangle\n=\frac{1}{J_N}\n\int\nj^p(B,b)\n|K_{\perp}|e^{ibK_{\perp}}\nd^2 b d^2 B\n\frac{d^2 K_{\perp}}{(2\pi)^2}\n\end{equation}\nwhere $J_N$ is the same integral as in the numerator\nbut without $|K_{\perp}|$.\nWriting $|K_{\perp}|$ as $K_{\perp}^2\/|K_{\perp}|$ and\n$K_{\perp}^2$ as the Laplacian $\Delta_b$ applied to\nthe exponential, we find\n$$\n\langle |K_{\perp}|\rangle\n=-\frac{1}{J_N}\n\int\nj^p(B,b)\n\Delta_b e^{ibK_{\perp}}\nd^2 b d^2 B\n\frac{d^2 K_{\perp}}{|K_{\perp}|(2\pi)^2}\n$$\nIntegrating by parts twice\nand using the formula\n$$\n\int\n\frac{d^2 K_{\perp}}{|K_{\perp}|}e^{ibK_{\perp}} =\n\frac{2\pi}{|b|}\n$$\nwe find\n$$\n\langle |K_{\perp}|\rangle\n=-\frac{1}{2\pi J_N}\n\int\n\frac{1}{|b|}\n\Delta_b j^p(B,b)\nd^2 b d^2 B\n$$\nIntegrating by parts once more, we find\n$$\n\langle |K_{\perp}|\rangle\n=-\frac{1}{2\pi J_N}\n\int\nd^2 B\n\frac{d^2 b}{|b|^2}\n(n_b \nabla_b) j^p(B,b)\n$$\nwhere $n_b=b\/|b|$. 
This leads to our final formula\n\begin{equation}\n\langle |K_{\perp}|\rangle\n=-\frac{p}{2\pi J_N}\n\int\nd^2 B\n\frac{d^2 b}{|b|^2}\nj^{p-1}(B,b)\n(n_b \nabla_b) j(B,b)\n\label{Kperp}\n\end{equation}\nwhere $j(B,b)$ is given by (\ref{j}),\n$ \lambda(b) $ is given by (\ref{lam}) and\n$$\nJ_N=\int d^2 B j^{p}(B,b=0)\n$$\n\n\n\section{Approximations}\nTo simplify numerical calculations we make some additional\napproximations, which are not very essential but are well supported\nby the comparison with exact calculations at a few sample points.\n\nAs follows from\nthe asymptotics of $K_0(z)$ at large $z$\n$$\nK_0(z)\simeq\sqrt{\frac{\pi}{2z}}e^{-z}\n$$\nthe width of $\lambda(b)$ (\ref{lam}) is of the order $m^{-1}$.\nThe function $G_0$ is smooth in the vicinity of the origin\nand its width $r_{0H}=\sqrt{r_0^2+r_H^2}$ is substantially\nlarger than the width of $\lambda$.\nFor this reason we factor $G_0(B+c)$ out of the integral (\ref{j})\nover $c$ at the point $B$:\n\begin{equation}\nj(B,b)=\nG_0(B)\Lambda(b),\n\hs 1\n\Lambda(b) \equiv\n\int\n\lambda(c)\n\lambda(c-b)\nd^2 c=\n\frac{|b|}{4\pi m}K_1(m|b|)\n\label{Lam}\n\end{equation}\nThen we find that the integrals over $B$ and $b$ decouple\n\begin{equation}\nJ (K_{\perp})=\n\rho_A(\underbrace{0,...,0}_{p+1}|\underbrace{0,...,0}_{p+1})\n\int G^p_0(B) d^2 B\n\int \Lambda^p(b)\ne^{ibK_{\perp}}\nd^2 b\n\end{equation}\n\nIn this approximation we find that $\langle |K_{\perp}|\rangle $\ndepends only\non one parameter, the constituent quark mass $m$,\nwhich in our approach plays the role of an infrared cutoff:\n\begin{equation}\n\langle |K_{\perp}|\rangle\n=pm\n\int_0^{\infty} dz K_0(z) (zK_1(z))^{p-1}\n\label{apprKperp}\n\end{equation}\nThis allows us to relate $m$ directly to the experimental data on\nthe transverse momentum dependence.\n\n\section{Comparison with the data and discussion}\n\nThe integral in (\ref{apprKperp}) can be easily calculated\nnumerically. 
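As a numerical cross-check (a sketch of ours, not the authors' code; it assumes NumPy and SciPy are available), the integral in (\ref{apprKperp}) can be evaluated by quadrature and fitted with a power law in $p$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel (Macdonald) functions K_nu


def kperp_integral(p):
    """Evaluate int_0^infty K_0(z) (z K_1(z))^(p-1) dz numerically."""
    integrand = lambda z: kv(0, z) * (z * kv(1, z)) ** (p - 1)
    value, _ = quad(integrand, 0.0, np.inf)
    return value


# <|K_perp|>/m = p * I(p); fit a power law over p = 1, ..., 12
ps = np.arange(1, 13)
vals = np.array([p * kperp_integral(p) for p in ps])
exponent, log_coef = np.polyfit(np.log(ps), np.log(vals), 1)
print(np.exp(log_coef), exponent)  # expected to lie near the fit quoted in the text
```

For $p=1$ the integral reduces to $\int_0^\infty K_0(z)\,dz = \pi/2$, which gives a quick sanity check of the quadrature.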
For values of $p=1,...,12$ it is very well\napproximated by a power dependence (see Fig.2), so that we obtain\n\begin{equation}\n\langle |K_{\perp}|\rangle\/m\n=1.594\, p^{0.625}\n\end{equation}\nAs we observe, the rise of $\langle |K_{\perp}|\rangle$ turns out to be\neven faster than\nexpected on naive physical grounds mentioned in the Introduction\n($\sim\sqrt{p}$).\nThe resulting plots for $\langle |K_{\perp}|\rangle^2$\nas a function of the cumulative number $x=(p+1)\/3$\nat different values of the parameter $m$ are shown in Fig.3\ntogether with available experimental data from \cite{Boyarinov94}\non $\langle K_{\perp}^2\rangle$ for pion production obtained in\nexperiments\n\cite{Boyarinov94}-\cite{Boyarinov87}\nwith 10 $GeV$ protons\nand \cite{Baldin82, Baldin83} with 8.94 $GeV$ protons.\n\nNote that earlier publications of the first group\n\cite{Boyarinov92,Boyarinov87}\nreported a much stronger increase of $\langle K_{\perp}^2\rangle$\nwith $x$, up to a value of 2 $(GeV\/c)^2$ at $x=3$ for pion production.\nIn our approach such an increase would require the quark mass to be as high\nas $m \simeq 225 MeV$.\nIn a more recent publication \cite{Boyarinov94}\nthe rise of $\langle K_{\perp}^2\rangle$ is substantially weaker\n(it corresponds to $m \simeq 175 MeV$ in our approach).\nThe authors of \cite{Boyarinov94} explain this\nby newly obtained experimental data and by a cutoff $K_{\perp max}$\nintroduced in calculations of $\langle K_{\perp}^2\rangle$ in \cite{Boyarinov94}.\nThe introduction of this cutoff considerably\n(by approximately a factor of two)\ndecreases the experimental value of $\langle K_{\perp}^2\rangle$ at $x=3$.\nIn our opinion, this is a confirmation that the\ncumulative pion production rate only weakly decreases with $K_{\perp}$\nin the cumulative region, so that the\nintegral over $K_{\perp}^2$ which enters the definition of\n$\langle K_{\perp}^2\rangle$ is weakly convergent or even divergent,\nas in our 
approach.\nUndoubtedly, presentation of the experimental data in terms of\nthe mean value\n$\langle |K_{\perp}| \rangle^2$,\nrather than $\langle K_{\perp}^2\rangle$, should reduce\nthe dependence on the cutoff\n$K_{\perp max}$ and make the results more informative.\n\nOne of the ideas behind the investigations of the cumulative\nphenomena is that they may be a manifestation\nof a cold quark-gluon plasma formed when several nucleons overlap in\nnuclear matter. In \cite{NPB94} we pointed out that our model does not\ncorrespond to this picture. It implies coherent interactions of the\nactive quark with donors and, as a result, strong correlations between the\nlongitudinal and transverse motion. Predictions for the dependence of\n$\langle |K_{\perp}| \rangle$ on $x$ are also different. From the cold\nquark-gluon plasma model one expects $\langle |K_{\perp}| \rangle$ to behave\nas $x^{1\/3}$, since the Fermi momentum of the quarks inside the overlap\nvolume is proportional to the cubic root of the quark density. Our model\npredicts a much faster increase, with a power twice as large. The\nexperimental data seem to support our predictions.\n\n\section{Acknowledgments}\n\nThe authors are deeply grateful to Prof. P. Hoyer, who drew\ntheir attention to the problem.\n\nThis work is supported by the Russian Foundation for Fundamental\nResearch, Grant No. 97-02-18123.\n\n\newpage\n\n\section{Acknowledgments}\nWe would like to thank De Meng, Maryam Fazel and Mehran Mesbahi for their help.\n\n\section{Conclusions}\label{conclusion}\nWe derived a class of $N$-player weighted potential games under heterogeneous MDP dynamics, with application to multi-agent path coordination. For these games, we show equivalence between the unique Nash equilibrium and the global solution of a potential minimization problem, which we solve via gradient descent and single-player dynamic programming. 
Future work includes deriving learning-based solutions for the games and \change{integrating partially observable scenarios in which players have local observations only.} \n\n\n\n\n\section{Multi-agent path coordination}\label{sec:mapf}\n\noindent We apply our game model to a multi-agent pick up and delivery scenario \change{with stochastic package arrival times}. As shown in Figure~\ref{fig:grid_world}, $N$ players navigate a 2D \change{space}. \change{Each player's goal is} to transport packages from the pick up chutes to the drop off chutes while \change{avoiding collisions with others}. Code for the simulation is available at \url{https:\/\/github.com\/lisarah\/mdp_path_coordination}.\n\begin{figure}\n \centering\n \vspace*{0.3cm} \n \includegraphics[width=0.7\columnwidth]{Figure\/plzwork.png}\n \caption{Operation environment for multi-robot warehouse scenario.}\n \label{fig:grid_world}\n\end{figure}\n\subsection{Stationary MDP Model}\n\change{Players operate in a} two-dimensional grid world with $5$ rows and $10$ columns. In addition to capturing location, each state also indicates whether the robot is in pick up or delivery mode. The state space is given by\n\[[S] = \Big\{ (\change{v, w}, m) \ | \ 1\leq \change{v} \leq 5, \ 1\leq \change{w} \leq 10, \ m \in \change{\{1,2\}}\Big\}.\]\nAt each state, available actions are $[A] = \{u, d, r, l, s\}$, corresponding to up, down, right, left, stay. Player transition dynamics and rewards are \emph{stationary} in time.\nThe transition probability of each state $(\change{v, w}, m)$ extends the location-based transition probabilities $P^0$. \n\n\noindent\textbf{Location-based transition}. Let $u =\change{(v, w)}$ denote the location component of the state. At each location, each action either points to a feasible target $u_{targ}(a)$ or is infeasible. The set of all feasible targets from $u$ is $\mathcal{N}(u)$. 
\nWhen a target exists, players have a probability $q \in (0, 1)$ of reaching it and probability $1 - q$ of reaching other states in $\mathcal{N}(u)$. \n\begin{equation}\label{eqn:feasible_location_transitions}\nP^0_{u'ua} = \begin{cases}\nq & u' = u_{targ}(a), \\\n\frac{1 - q}{|\mathcal{N}(u)|} & u' \in \mathcal{N}(u)\setminus\{u_{targ}(a)\},\\\n0 & \text{otherwise}.\n\end{cases}\n\end{equation}\nWhen the target location is infeasible, the player transitions into a neighboring state $u' \in \mathcal{N}(u)$ at random.\n\begin{equation}\label{eqn:infeasible_location_transitions}\nP^0_{u'ua} = \begin{cases}\n\frac{1}{|\mathcal{N}(u)|} & u' \in \mathcal{N}(u),\\\n0 & \text{otherwise}.\n\end{cases}\n\end{equation}\n\textbf{Full transition dynamics}. \change{Within the same mode}, players transition between locations via dynamics $P^0$. Player modes \change{transition at pick up chutes $\mathcal{P}$ and drop off chutes $\mathcal{D}$}.\n\change{\n\begin{enumerate}\n \item When player $i$ is in mode $1$ (pick up) and about to transition into pick up chute $p^i \in \mathcal{P}$, player $i$'s mode switches to mode $2$ (drop off) with probability $r^i$. \n \[\begin{cases}\n P^i_{t(p^i, 2)sa} = r^iP^0_{tp^iua},\\\n P^i_{t(p^i, 1)sa} = (1 - r^i)P^0_{tp^iua},\n \end{cases}\ \forall s = (u, 1), \ s \in [S].\]\n \item When player $i$ is in mode $2$ (drop off) and about to transition into drop chute $d^i \in \mathcal{D}$, player $i$ transitions to mode $1$ with probability $1$.\n \[\textstyle \begin{cases}\n \textstyle P^i_{t(d^i, 1)sa} = P^0_{td^iua},\\\n \textstyle P^i_{t(d^i, 2)sa} = 0,\n \end{cases}\ \forall s = (u, 2), \ s \in [S].\]\n\end{enumerate}\n}\nHere, $r^i \in [0, 1]$ denotes the probability of package arrival when player $i$ is in \change{$p^i$}. 
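As an illustrative sketch of the location-based transitions~\eqref{eqn:feasible_location_transitions}--\eqref{eqn:infeasible_location_transitions} (helper names and 0-indexed cells are our assumptions, with $q$ taken from Table~\ref{tab:sim_params}; this is not the repository's implementation):

```python
# Sketch of the location-based transition kernel P^0 on the 5 x 10 grid.
ROWS, COLS, Q = 5, 10, 0.98  # Q: probability of reaching the feasible target
MOVES = {'u': (-1, 0), 'd': (1, 0), 'r': (0, 1), 'l': (0, -1), 's': (0, 0)}


def neighbors(u):
    """N(u): the set of feasible targets reachable from location u."""
    v, w = u
    cells = set()
    for dv, dw in MOVES.values():
        t = (v + dv, w + dw)
        if 0 <= t[0] < ROWS and 0 <= t[1] < COLS:
            cells.add(t)
    return cells


def location_transition(u, a):
    """Return {u': P^0_{u'ua}} for the feasible / infeasible cases."""
    nbrs = neighbors(u)
    target = (u[0] + MOVES[a][0], u[1] + MOVES[a][1])
    if target in nbrs:  # feasible target: reached with probability Q
        others = nbrs - {target}
        probs = {t: (1 - Q) / len(others) for t in others}
        probs[target] = Q
        return probs
    # infeasible action: move to a neighboring state uniformly at random
    return {t: 1.0 / len(nbrs) for t in nbrs}
```

Note that the stay action makes $u$ itself a member of $\mathcal{N}(u)$, so every location has at least one feasible target.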
Package arrivals at $p^i$ are modeled as an independent Poisson process with rate $\lambda_i$; over an interval $\Delta t = 1s$, the arrival probability is $r^i = 1 - \exp(-\lambda_i \Delta t)$.\n\n\subsection{Player Costs}\n\noindent For all $(t,s, a) \in \mathcal{T}\times[S]\times[A]$ and congestion distribution $y$~\eqref{eqn:congestion_distribution}, player $i$'s \change{cost is given by\n\[\textstyle\ell_{tsa}^i(y, x^i) = \epsilon x^i_{tsa} - c^i_{tsa} + \alpha_i f_{ts}(y).\]}\n\noindent The player-specific objective $c^i_{tsa}$ is defined as \n\begin{equation}\nc^i_{t(\change{v, w}, m) a} = \begin{cases}\n1 & \change{(v, w) = p^i}, \ m = \change{1}, \\\n1 & \change{(v, w) = d^i}, \ m = \change{2}, \\\n0 & \text{otherwise.}\n\end{cases}\n\end{equation}\nThe congestion function is strictly state-based and is an exponential function given by\n\begin{equation}\label{eqn:simulation_congestion_function}\nf_{t\change{(v, w, m)}}(y) = -\beta\exp\big(\beta(\sum_{m' \in \change{\{1,2\}}}\sum_{a' \in [A]}y_{t(\change{v, w}, m')a'} - 1)\big),\n\end{equation}\nwhere $\alpha_i > 0$ and $\beta > 0$. \change{As opposed to~\eqref{eqn:individual_costs}, function~\eqref{eqn:simulation_congestion_function} calculates the congestion in $(v,w, \cdot)$ using both $(v,w, 1)$'s and $(v, w, 2)$'s congestion levels. }\n\begin{figure}\n \centering\n \vspace*{0.3cm} \n \includegraphics[trim={0 0 0 0.5cm},width=0.85\columnwidth]{Figure\/polished_figures\/distribution_convergence.png}\n \caption{$\norm{\cdot}_2$ of player $i$'s state-action distribution over Algorithm~\ref{alg:frank_wolfe} iterations.}\n \label{fig:frank_wolfe}\n\end{figure}\n\subsection{Simulation Results}\n\noindent We simulate the path coordination game using parameters from Table~\ref{tab:sim_params}. 
Player $i$'s pick-up location is the $i^{th}$ element of $ \\mathcal{P}= \\{(4, \\change{w^i}) \\ | \\ \\change{w^i} \\in [8, 7, 2]\\}$, \\change{and its drop-off location is the $i^{th}$ element of $\\mathcal{D} = \\{(0, w^i) \\ | \\ w^i \\in [4,5,8]\\}$}. At $t = 0$, players are initialized \\change{at their drop-off locations. }\n\\begin{table}[h!]\n\\begin{center}\n\\begin{tabular}{|ccccccccc|}\n\\hline\n$N$ & $q$ & $\\gamma_i$ & $\\lambda_i$ & \\change{$\\alpha_i$} & $\\Delta t$ & $T$ & \\change{$\\epsilon$} & \\change{$\\beta$} \\\\\n\\hline\n3 & 0.98 & 0.99& 0.5 & \\change{\\{0.5, 1, 1.5\\}} & 1s & \\change{120s} & \\change{1e-3} & \\change{40}\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Parameters for simulation environment.}\\label{tab:sim_params}\n\\end{table}\n\n\\noindent We run Algorithm~\\ref{alg:frank_wolfe} for $100$ iterations, where line~\\ref{alg:mdp} is solved via value iteration~\\eqref{eqn:value_iteration}. \\change{The $2$-norm of $x^i$ is shown in Figure~\\ref{fig:frank_wolfe} as a function of the algorithm iterations. We see that the state-action densities stabilize in about $20$ steps.} Performance is evaluated by: 1) expected number of collisions, 2) expected package delivery time, 3) worst-case package delivery time. The results over $100$ random trials are visualized in Figures~\\ref{fig:collision_results} and~\\ref{fig:waiting_time_results}. 
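The averaging step of Algorithm~\ref{alg:frank_wolfe} is the only numerical operation beyond the per-player MDP solve. A minimal Python sketch of one iterate follows; here `best_response` is a hypothetical stand-in for the cost retrieval, MDP solve, and density retrieval steps, not the implementation used in the experiments.

```python
def frank_wolfe_step(x, best_response, k):
    """One Frank-Wolfe iterate: move the state-action density toward the
    best-response density with the diminishing step size 2/(k+1)."""
    b = best_response(x)  # stand-in for the oracle cost + MDP solve
    step = 2.0 / (k + 1)
    return [(1 - step) * xi + step * bi for xi, bi in zip(x, b)]

# Toy usage: the best response is always the vertex [1.0, 0.0],
# so the iterates settle at that vertex.
x = [0.0, 1.0]
for k in range(1, 50):
    x = frank_wolfe_step(x, lambda _: [1.0, 0.0], k)
```

With a fixed best response the iterates converge to it, which mirrors the stabilization of the state-action densities observed after a few tens of iterations.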
\n\\begin{figure}\n \\centering\n \\vspace*{0.3cm} \n \\includegraphics[trim={0 0 0 0.5cm},width=0.8\\columnwidth]{Figure\/polished_figures\/collision_time_line.png}\n \\caption{Collisions per player as a function of MDP time step $t$.}\n \\label{fig:collision_results}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{Figure\/polished_figures\/collision_stats.png}\n \\caption{Average waiting time per package, worst-case waiting time per package, and average number of collisions in $T$ for each player.}\n \\label{fig:waiting_time_results}\n\\end{figure}\n\n\\noindent\\change{\nWe compare the \\emph{jointly optimal congestion-free wait times} computed using Algorithm~\\ref{alg:frank_wolfe} to the shortest \\change{wait times} available in the absence of opponents.\n}\nEach wait time is the number of steps required to complete the drop off--pick up--drop off cycle. \\change{Based on players' pick-up and drop-off locations, their shortest \\change{wait times} in the absence of opponents are $16$, $12$, and $20$, respectively.} We note that these match well with the average wait times shown in Figure~\\ref{fig:waiting_time_results}.\n\nWe set the player impact factors to $\\{0.5, 1, 1.5\\}$ as in Table~\\ref{tab:sim_params}. From Figure~\\ref{fig:waiting_time_results}, the impact factors directly correlate with the collision rates players experience. Player $0$ impacts congestion the least and is the least sensitive to congestion. As a result, it encountered the most collisions. Player $2$ impacts congestion the most and is the most sensitive to congestion. As a result, it encountered the fewest collisions. 
The collision rate is spread out evenly over $\\mathcal{T}$ (Figure~\\ref{fig:collision_results}).\n\\section{Individual Players}\n\nTo analyze how the penalty functions affect an individual player's decision making, we develop the player's perspective through the value function formulation, make connections to Q-functions, and show that the penalties directly affect the value functions of individual players.\n\n\\subsection{Dual Formulation: Value Iteration}\nIt is well known that the dual of the linear programming MDP optimization problem is the dynamic programming formulation of the same problem. Here we show that in the non-linear, state-dependent case, the dual constraints form the dynamic programming problem that a new player joining the game at time $t$ would solve.\n\nIndividual players optimize their value functions by solving the dual of the primal MDP problem. Given the MDP game primal problem, Eqn.~\\ref{MDP game}, the dual can be written as: \n\\begin{equation}\n\\begin{aligned}\n\\max_{V_{ts}} & \\quad \\sum_s V_{0s}p_s \\\\\n\\text{s.t. } & V_{ts} \\leq \\sum_{s'} P(s'| s, a)V_{t+1,s'} + \\ell_{tsa}(y_{tsa}) \\quad \\\\\n& \\forall \\, s \\in \\mathcal{S}, a\\in \\mathcal{A}, t \\in [T-1] \\\\\n& V_{Ts} \\leq \\ell_{Tsa}(y_{Tsa}) \\quad \\forall \\, s \\in \\mathcal{S},\\,\\, a\\in \\mathcal{A}\n\\end{aligned} \\label{unconstrained dual}\n\\end{equation}\nFrom this we can derive the value iteration method for latency functions $\\ell_{tsa}(y_{tsa})$. Let the Q function be defined as follows: \n\\begin{equation}\\label{q function}\nQ_{tsa}(y_{tsa}) = \\sum_{s'} P(s'| s, a)V_{t+1,s'} + \\ell_{tsa}(y_{tsa}) \\\\\n\\end{equation}\nThe difference $Q_{tsa}(y^*_{tsa}) - V^*_{ts} = \\mu^*_{tsa} \\geq 0$ is the dual variable of the constraint $y_{tsa} \\geq 0$, and is also the inefficiency of action $a$. Any action $\\bar{a}$ whose inefficiency is zero is an optimal action at that particular state and time. 
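Once $V_{t+1}$ and the latencies at a fixed $y$ are known, the Q-function~\eqref{q function} and the inefficiencies $\mu$ are directly computable. A sketch follows, assuming a nested-list layout `P[s_next][s][a]` for the transition kernel (a layout of our own choosing):

```python
def q_and_inefficiency(P, ell, V_next):
    """Q[s][a] = ell[s][a] + sum_{s'} P[s'][s][a] * V_next[s'];
    inefficiency mu[s][a] = Q[s][a] - min_a Q[s][a], zero for optimal actions."""
    S, A = len(ell), len(ell[0])
    Q = [[ell[s][a] + sum(P[sp][s][a] * V_next[sp] for sp in range(S))
          for a in range(A)] for s in range(S)]
    V = [min(Q[s]) for s in range(S)]
    mu = [[Q[s][a] - V[s] for a in range(A)] for s in range(S)]
    return Q, V, mu

# Toy usage: two states; action 0 stays put, action 1 switches state.
P = [[[1, 0], [0, 1]],   # P[s'=0][s][a]
     [[0, 1], [1, 0]]]   # P[s'=1][s][a]
ell = [[1, 0], [0, 1]]
Q, V, mu = q_and_inefficiency(P, ell, [0, 10])
```

In the toy example the zero entries of `mu` mark the optimal action at each state, matching the complementary-slackness interpretation above.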
\n\n\nThis leads to the value iteration method, Alg.~\\ref{value iteration}. \n\\begin{algorithm}\n\\caption{Value iteration}\n\\begin{algorithmic}[h]\n\\Require \\(R\\), \\(P\\).\n\\Ensure \\(V^\\star\\), \\(\\pi^\\star\\).\n\\ForEach{\\(s\\in\\mathcal{S}\\)}\n\\State{\\(V^\\star_{Ts}= \\underset{a\\in\\mathcal{A}}{\\mbox{min}} \\,\\,\\ell_{Tsa}(y_{Tsa})\\)}\n\\State{\\(\\pi^\\star_{Ts}= \\underset{a\\in\\mathcal{A}}{\\mathop{\\rm argmin}} \\,\\,\\ell_{Tsa}(y_{Tsa})\\)}\n\\EndFor\n\\For{\\(t=T-1, T-2, \\ldots, 0\\)}\n\\ForEach{\\(s\\in\\mathcal{S}\\)}\n\\State{\\(\\displaystyle V^\\star_{ts}= \\underset{a\\in\\mathcal{A}}{\\mbox{min}}\\,\\, Q_{tsa}(y_{tsa})\\)}\n\\State{\\(\\displaystyle \\pi^\\star_{ts} = \\underset{a\\in\\mathcal{A}}{\\mathop{\\rm argmin}}\\,\\, Q_{tsa}(y_{tsa})\\)}\n\\EndFor\n\\EndFor\n\\end{algorithmic}\n\\label{value iteration}\n\\end{algorithm}\n\n\\begin{theorem}\nAssuming existing players have reached a Nash equilibrium and the rewards are state-dependent $R_{tsa}(y_{tsa})$, Algorithm~\\ref{value iteration} with optimal $y^*_{tsa}$ is the dynamic programming problem that any new player entering the game at time $t$ solves.\n\nFurthermore, the best reward that the new player can gain is given by \n\\[\\min_s V^*_{ts}.\\]\n\\end{theorem}\n\n\\begin{proof}\nThe algorithm depends on the current state-action distribution $y_{tsa}$. At Nash equilibrium the constraints become: \n\\[V_{ts} \\leq \\sum_{s'} P(s'| s, a)V_{t+1,s'} + \\ell_{tsa}(y^*_{tsa}) \\]\n\\[V_{Ts} \\leq \\ell_{Tsa}(y^*_{Tsa}) \\]\nAny player entering at time $t$ would optimize its value function according to $V_{ts}$. Since the game is non-atomic, the new player would not affect the state-dependent reward functions. 
Therefore, the new player's optimal policy can be found from Algorithm~\\ref{value iteration}.\n\n\\end{proof}\n\n\n\nSuppose we impose the constraint $y_{t\\bar{s}\\bar{a}} \\leq d_{t\\bar{s}\\bar{a}}$ for one state $\\bar{s}$ and all times $t$, and suppose it was violated during time step $\\bar{t}$. From an individual player's perspective, the exact penalty function coefficient $k_t(\\bar{s},\\bar{a})$ directly offsets the value functions at the constrained optimum $y_c^*(s,a)$. \n\nUsing the exact penalty theorem, we find the minimum coefficient $k_{\\bar{t}\\bar{s}\\bar{a}} \\geq \\tau^*_{\\bar{t}\\bar{s}\\bar{a}}$ that ensures the optimal solutions of the penalized problem and the constrained problem are the same. However, in order to ensure that the next player will also avoid the constraint, an additional ``toll'' needs to be charged. \n\n\n\\begin{theorem}\nLet $\\tau^*_{\\bar{t}\\bar{s}\\bar{a}}$ be the dual variable's optimal value corresponding to the constraint $y_{t\\bar{s}\\bar{a}} \\leq d_{t\\bar{s}\\bar{a}}$ in problem \\eqref{MDP game}, and let \n\\[k_{\\bar{t}\\bar{s}\\bar{a}} > \\max\\{\\tau^*_{\\bar{t}\\bar{s}\\bar{a}}, \\mu_{\\bar{t}\\bar{s}a_{new}} \\},\\]\n\\[a_{new} = \\text{argmin}_{a \\neq \\bar{a}} Q_{\\bar{t}\\bar{s}a}. \\]\nThen the next player will not choose to violate the constraint $y_{\\bar{t}\\bar{s}\\bar{a}} \\leq d_{\\bar{t}\\bar{s}\\bar{a}}$.\n\n\\end{theorem}\n\n\\begin{proof}\nFrom the exact penalty theorem, the solution to the penalized problem \\eqref{MDP game} and the constrained problem \\eqref{cMDP game} share the same optimal value and minimizer. 
\n\\begin{equation}\n\\begin{aligned}\n\\underset{y}{\\mbox{min.}} & \\sum\\limits_{t\\in[T]}\\sum\\limits_{s\\in\\mathcal{S}} \\sum\\limits_{a\\in\\mathcal{A}} \\int_0^{y_{tsa}}\\ell_{tsa}(x)dx\\\\\n\\mbox{s.t.} &\\sum\\limits_{a\\in\\mathcal{A}} y_{t+1, sa} = \\sum\\limits_{s'\\in\\mathcal{S}}\\sum\\limits_{a\\in\\mathcal{A}}P_{s'as}y_{ts'a}, \\quad t\\in[T-1],\\\\\n&\\sum\\limits_{a\\in\\mathcal{A}}y_{0sa}=p_s,\\\\\n& y_{t\\bar{s}\\bar{a}} \\leq d_{t\\bar{s}\\bar{a}}, \\\\\n&y_{tsa}\\geq 0,\\quad \\forall s\\in\\mathcal{S}, a\\in\\mathcal{A}, t\\in[T]\n\\end{aligned}\\label{cMDP game}\n\\end{equation}\n\nAt the optimal solution of this constrained MDP game, the dual value iteration proceeds according to Algorithm~\\ref{constrained value iteration}. Here we see that the value of $k_{t\\bar{s}\\bar{a}}$ directly offsets the Q-functions at the times when the unconstrained solution violated the state constraints.\n\n\\begin{algorithm}\n\\caption{Constrained value iteration at the constrained optimum $y_c^*$}\n\\begin{algorithmic}[h]\n\\Require \\(R\\), \\(P\\).\n\\Ensure \\(V^\\star\\), \\(\\pi^\\star\\).\n\\ForEach{\\(s\\in\\mathcal{S}\\)}\n\\State{\\(\nV^\\star_{Ts}= \\underset{a\\in\\mathcal{A}}{\\mbox{min}} \\,\\,Q_{Tsa} + k_T(s,a)\\)}\n\\State{\\(\n\\pi^\\star_{T}(s)= \\underset{a\\in\\mathcal{A}}{\\mathop{\\rm argmin}} \\,\\,Q_{Tsa} + k_T(s,a)\\)}\n\\EndFor\n\\For{\\(t=T-1, T-2, \\ldots, 0\\)}\n\\ForEach{\\(s\\in\\mathcal{S}\\)}\n\\State{\\(\\displaystyle V^\\star_{t}(s)= \\underset{a\\in\\mathcal{A}}{\\mbox{min}}\\,\\, Q_{tsa} + k_t(s,a)\\)}\n\\State{\\(\\displaystyle \\pi^\\star_{t}(s) = \\underset{a\\in\\mathcal{A}}{\\mathop{\\rm argmin}}\\,\\, Q_{tsa}+ k_t(s,a)\\)}\n\\EndFor\n\\EndFor\n\\end{algorithmic}\n\\label{constrained value iteration}\n\\end{algorithm}\n\nSuppose the violated constraint occurs at $\\bar{t}, \\bar{s}, \\bar{a}$. 
Let $\\tilde{\\mu}$ be the smallest inefficiency of the set of actions available at $\\bar{s}$, excluding the optimal action $\\bar{a}$. If $k_{\\bar{t}\\bar{s}\\bar{a}} > \\tilde{\\mu}$, then $\\bar{a}$ is no longer optimal, and so the player will not choose $\\bar{a}$ in its optimal policy. Therefore, the original constraint $y_{t\\bar{s}\\bar{a}} \\leq d_{t\\bar{s}\\bar{a}}$ cannot be violated at $\\bar{t}$ if the original solution was feasible. \n\nSince $k_{\\bar{t}\\bar{s}\\bar{a}} > \\tau^*_{\\bar{t}\\bar{s}\\bar{a}}$, we still satisfy the exact penalty theorem, and so the optimal value and minimizer of the penalized problem still correspond to those of the constrained problem. \n\\end{proof}\n\n\\begin{theorem}\nFor a new player entering the game at time $t$, their best expected reward is upper bounded by the optimal value function at time $t$, \n\\[\\max_{s} \\tilde{V}_{ts}, \\]\nwhere $\\tilde{V}_{ts}$ is the result of solving Algorithm~\\ref{constrained value iteration}. Furthermore, the player can achieve this value by starting at state $s = \\text{argmax}_s \\tilde{V}_{ts}$ and following the optimal policy $\\pi_{ts}^*$ from Algorithm~\\ref{constrained value iteration}.\n\\end{theorem}\n\\begin{proof}\nThis follows from the fact that the value function equals the expected sum of rewards over time.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\\label{sec:introduction}\nAs autonomous \\change{path} planning algorithms become widely adopted by aeronautical, robotics, and operational sectors~\\cite{yun2020multi,ota2006multi}, the standard underlying assumption that the operating environment is stationary is no longer sufficient. More likely, autonomous players \\emph{share} the operating environment with other players who may have conflicting objectives. 
While the possibility for multi-agent conflicts has pushed single-agent \\change{path} planning towards greater emphasis on \\change{robust planning} and collision avoidance, we believe that the overarching goal should be to consider other players' trajectories and achieve optimality with respect to the \\emph{multi-agent dynamics}. \n\nWe focus on the scenario where a group of heterogeneous players collectively perform path planning in response to stochastic demands. We are inspired by fleets of robo-taxis fulfilling ride demands while avoiding congestion in traffic~\\cite{vosooghi2019robo} and warehouse robots retrieving packages under dynamic arrival rates~\\cite{kumar2018development,li2020mechanism} while avoiding collisions. The common feature in these applications is that the players must plan with respect to a forecasted demand distribution rather than a deterministic demand. We assume that the desirable outcome is a competitive equilibrium. Beyond competitive settings, a competitive equilibrium can be used in cooperative settings to ensure that each player achieves identical costs and each demand is \\emph{optimally} fulfilled with respect to other demands, thus ensuring a degree of fairness.\n\nWe propose MDP congestion games as \\change{a theoretical framework for analyzing the resulting path coordination problem}. By leveraging common congestion features in multi-agent \\change{path} planning, our key contribution is \\emph{reducing} the $N$-player coupled MDP problem to a \\emph{single} potential minimization problem. As a result, we can use optimization techniques to analyze the Nash equilibrium as well as apply gradient descent methods to compute it.\n\n\n\n\\textbf{Contributions}. To address the lack of \\change{game-theoretical models for} path \\change{coordination under} MDP dynamics, we propose an MDP congestion game with finite players and heterogeneous \\change{player} costs and dynamics. 
We define Bellman equation-type conditions for the Nash equilibrium, formulate a \\emph{potential function}, and provide a necessary and sufficient condition for its existence. \\change{Under certain assumptions on the player costs}, we show equivalence between the \\change{Nash equilibrium} and the \\change{global solution of} the potential minimization problem, and provide sufficient conditions for a unique Nash equilibrium. Specifically \\change{for multi-player path coordination}, we \\change{formulate a class of cost functions that allows players to have different sensitivities to the total congestion and to find congestion-free paths that optimally achieve their individual objectives}. Finally, we provide a \\change{distributed} algorithm that converges to the Nash equilibrium and give rates of \\change{its} convergence. We demonstrate our model and algorithm on a 2D autonomous warehouse problem where robots retrieve and deliver packages \\change{with stochastic arrival times} while sharing a common navigation space. \n\n \n\n\n\n\n\n\n\n\n\n\\section{Unknown Initial Distribution}\n\n(I believe this is the most general way to constrain the policy to maintain safety in the MDP.)\n\nI think we want to solve the following problem for all $t' = 0,\\dots,T$.\n\n\\begin{align}\n\\max_{y,p_{t'}} & \\quad \\sum_{s} \\sum_{a} \\Gamma_{t'}(s'|s,a) K_{t'}(s,a)p_{t'}(s) \\\\\n\\text{s.t.} & \\quad \\sum_s \\sum_a y_0(s,a) = 1 \\notag \\\\\n& \\quad \\sum_a y_{t+1}(s',a) = \\sum_{s \\in \\mathcal{S}} \\sum_{a \\in \\mathcal{A}} \\Gamma_t(s'|s,a) y_t(s,a) \\notag \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad t = 0,\\dots,t'-1 \\notag \\\\\n& \\quad \\sum_a y_{t'}(s,a) = p_{t'}(s) \\notag \\\\\n& \\quad 0 \\leq y_t(s,a) \\leq d_t(s,a) \\qquad t=0,\\dots,t' \\notag\n\\end{align}\n\nWe want to get a bound for the maximum amount of mass in state $s^*$. 
\n\\begin{align}\n\\min & \\quad \\lambda + \\sum_t\\sum_s\\sum_a d_t(s,a)\\tau_t(s,a) \\\\\n\\text{s.t.} & \\quad \\lambda = V_0(s)+\\tau_0(s,a) \\qquad \\qquad \\qquad \\forall s \\in \\mathcal{S} \\notag \\\\\n& \\quad V_t(s') = \\sum_s \\Gamma_t(s'|s,a)\\Big(V_{t+1}(s)+\\mu_t(s,a)\\Big)+\\tau_t(s,a) \\notag \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\forall a \\in \\mathcal{A} \\notag \\\\\n& \\quad V_{t'} = \\sum_a \\Gamma_{t'}(s^*|s,a)K_{t'}(s,a) + \\tau_{t'}(s,a) \\notag\n\\end{align}\n\n\n\\section{Useless Stuff}\n\nWe now consider the case where the initial distribution is not known a priori but rather only constrained to be within some set. We assume $p_0 \\in \\mathcal{P}_0$. We first consider box capacity constraints $p_0 \\leq d_0$ and define the least conservative constant constraints on the policy that keep the set invariant. Set $\\bar{d}_0 = d_0$. Let $\\bar{d}_t$ be upper bounds on the state distribution at time $t$. We now derive bounds on the entries of $K_t(s,a)$. If the constraint at the next time step is met with equality, \n\\begin{align}\nd_{t+1}(s')= \\Gamma_t(s'|s,a)K_t(s,a)\\bar{d}_t(s)\n\\end{align}\nthen\n\\begin{align}\nK_t(s,a) = \n\\frac{d_{t+1}(s')}{\\Gamma_t(s'|s,a)\\bar{d}_t(s)}\n\\end{align}\n\\begin{align}\nD_t(s,a) = \\min \\left\\{\\min_{s'}\n\\frac{d_{t+1}(s')}{\\Gamma_t(s'|s,a)\\bar{d}_t(s)},1\n\\right\\}\n\\end{align}\n\\begin{align}\n\\bar{d}_{t+1}(s') = \\max_{y_t} & \\quad \\sum_s \\sum_a \\Gamma_t(s'|s,a)y_t(s,a) \\\\ \n\\text{s.t.}& \\quad \\sum_a y_t(s,a) \\leq \\bar{d}_t(s) \\notag \\\\\n& \\quad \\sum_{s,a} y_t(s,a) = 1 \\notag \\\\\n& \\quad y_t(s,a) \\leq \\text{diag}(y_t\\mathbf{1})D_t(s,a) \\notag\n\\end{align}\n\\section{Heterogeneous MDP Congestion Game}\\label{sec:notation}\nConsider a finite number of players $[N] = \\{1, \\ldots, N\\}$ with a \\emph{shared finite state-action space} given by $([S], [A])$ and common time interval $\\mathcal{T} = \\{0, 1, \\ldots, T\\}$. 
Each player $i$ has \\emph{individual} time-dependent transition probabilities given by $P^i\\in {\\mathbb{R}}_+^{TSSA}$, where at time $t$, $P^i_{ts'sa}$ is the transition probability from state $s$ to state $s'$ using action $a$ \\change{satisfying} the simplex constraints:\n\\begin{equation}\n \\textstyle\\sum_{s'} P^i_{t s' s a} = 1, \\ \\forall (i, t, s, a) \\in [N]\\times \\change{[T]}\\times [S] \\times [A].\n\\end{equation}\n\\textbf{State-action distribution}. \\change{At time $t$, let player $i$'s state be $s^i(t) \\in [S]$ and action taken be $a^i(t) \\in [A]$, then $x^i_{tsa} = \\mathbb{P}[s^i(t) = s, a^i(t) = a]$ is player $i$'s probability of being in state $s$ taking action $a$ at time $t$.}\n\\change{Player $i$'s state-action probability trajectory over time period $\\mathcal{T}$ is $x^i \\in{\\mathbb{R}}^{(T+1)SA}$, its \\emph{state-action distribution}.} \\change{We use $\\mathcal{X}(P^i, z^i_0)$ to denote the set of all feasible state-action distributions under transition dynamics $P^i$ and initial condition $z^i_0 \\in{\\mathbb{R}}^S_+$, where $z^i_{0s}= \\mathbb{P}[s^i(0) = s]$ is player $i$'s probability of starting in state $s$.}\n\\begin{multline}\\label{eqn:feasible_mdp_flows}\n\\textstyle\\mathcal{X}(P^i, \\change{z^i_{0}}):= \\Big\\{\\change{x^i} \\in {\\mathbb{R}}_+^{(T+1)SA} \\Bigg| \\sum_{a} \\change{x^i_{0sa} = z^i_{0s}}, \\forall s \\in [S], \\Big. \\\\ \n\\textstyle\\Big.\\sum_{s', a} P^i_{tss'a} \\change{x^i_{(t-1)s'a}} = \\sum_{a} \\change{x^i_{tsa}}, \\ \\forall (t,s) \\in \\change{[T]}\\times [S] \\Big\\}.\n\\end{multline}\nThe \\emph{joint state-action distribution} of all players is given by\n\\begin{equation}\\label{eqn:joint_state_action}\n x = (x^1, \\ldots, x^N) \\in {\\mathbb{R}}_+^{N(T+1)SA}.\n\\end{equation}\n\\change{We assume that $x$ is fully observable and may denote it as $x = (x^i, x^{-i})$ where $x^{-i} = (x^j)_{j \\in [N]\/\\{i\\}}$.}\n\n\\noindent\\textbf{\\change{Player} costs}. 
\\change{Similar to stochastic games, the player} costs are continuously differentiable \\emph{functions} of $x$: player $i$ incurs a cost $\\ell^i_{tsa}(x)$ for taking action $a$ at state $s$ and time $t$.\n\\begin{equation}\\label{eqn:player_cost_def}\n \\ell^i_{tsa}: {\\mathbb{R}}_+^{N(T+1)SA} \\mapsto {\\mathbb{R}}, \\ \\forall (i,t,s,a) \\in [N]\\times \\mathcal{T} \\times [S] \\times [A].\n\\end{equation}\n\\change{Compared to stochastic games where player costs are coupled to the opponent policies,~\\eqref{eqn:player_cost_def} is better suited to model collision events. For example, the expectation of the log-barrier function for players $i$ and $j$ at time $t$ can be modeled as $\\sum_{s, s' \\in [S]} (\\sum_a x^i_{tsa})(\\sum_a x^j_{ts'a})\\log(d_{s,s'})$, in which $d_{s,s'}$ denotes the distance between states $s, s'\\in [S]$. }\n\n\\noindent The \\emph{cost vector} of $(\\ell^1,\\ldots \\ell^N)$~\\eqref{eqn:player_cost_def} is given by $\\xi:{\\mathbb{R}}_+^{N(T+1)SA}\\mapsto {\\mathbb{R}}_+^{N(T+1)SA}$,\n\\begin{equation}\\label{eqn:cost_vector}\n \\xi(x) = [\\ell^1_{011}(x), \\ell^1_{012}(x), \\ldots, \\ell^N_{TSA}(x)] \\in {\\mathbb{R}}_+^{N(T+1)SA}.\n\\end{equation}\nWe assume that $\\xi$ has a positive definite gradient in $x$. \n\\begin{assumption}\\label{assum:positive_definite} The player cost vector $\\xi$~\\eqref{eqn:cost_vector} satisfies $\\nabla \\xi(x) \\succ 0$ for all $x$~\\eqref{eqn:joint_state_action} where $x^i \\in \\mathcal{X}(P^i, x_0^i), \\ \\forall i \\in [N]$.\n\\end{assumption}\n\\noindent For the class of player costs considered in Section~\\ref{sec:path_coordination_costs}, Assumption~\\ref{assum:positive_definite} implies that the player costs strictly increase as the number of players increases.\n\n\\noindent\\textbf{Coupled MDPs}. 
Given an initial distribution $z_0^i \\in {\\mathbb{R}}^S_+$ and fixed state-action distributions \\change{$x^{-i}$}~\\eqref{eqn:joint_state_action}, player $i$ solves the following optimization problem under MDP dynamics.\n\\begin{equation}\n\\begin{aligned}\\label{eqn:individual_player_mdp}\n \\underset{x^i}{\\mbox{min}} & \\change{\\sum_{t, s, a}\\int_0^{x^i_{tsa}}\\ell^i_{tsa}(u^i, x^{-i}) \\partial u^i_{tsa}} \\ \\mbox{s.t. } & x^i \\in \\mathcal{X}(P^i, z^i_{0}).\n\\end{aligned}\n\\end{equation}\n\\change{In~\\eqref{eqn:individual_player_mdp}, we note that each integral is taken over $u^i_{tsa}$, the $(t, s, a)^{th}$ element of $u^i$.} \nWhen $\\ell^i_{tsa}(x)$ is constant for all $(t,s,a) \\in \\mathcal{T} \\times [S]\\times [A]$, player $i$ solves a standard \\emph{linear program} MDP.\n\n\\noindent\\textbf{Dynamic programming}. At a joint state-action distribution $x$~\\eqref{eqn:joint_state_action}, player $i$'s cost-to-go in~\\eqref{eqn:individual_player_mdp} can be recursively defined via Q-value functions~\\cite{puterman2014markov} as\n\\begin{multline}\n Q^i_{Tsa}(x) := \\ell^i_{Tsa}(x), \\label{eqn:q_value} \\\\ \n Q^i_{(t-1)sa}(x) := \\ell^i_{(t-1)sa}(x) + \\textstyle\\sum_{s'} P^i_{ts'sa}\\underset{a'}{\\min}\\, Q^i_{t,s'a'}(x),\\\\\n\\forall \\ t \\in [T]\n\\end{multline}\n\\noindent\\change{\nThe optimal solution of~\\eqref{eqn:individual_player_mdp} can be stated using~\\eqref{eqn:q_value}.}\n\n\\begin{theorem}\n\\change{Under Assumption~\\ref{assum:positive_definite}, ${x}^i$~\\eqref{eqn:feasible_mdp_flows} uniquely minimizes~\\eqref{eqn:individual_player_mdp} with respect to the state-action distribution $x^{-i}$ if and only if its associated $Q^i({x}^i, x^{-i})$~\\eqref{eqn:q_value} satisfies\n\\begin{equation}\\label{eqn:individual_optimality}\n {x}^i_{tsa} > 0 \\Rightarrow \\textstyle Q^i_{tsa}({x}^i, x^{-i})= \\min_{a'} Q^i_{tsa'}({x}^i, x^{-i}),\n\\end{equation}\nfor all $(t, s, a) \\in \\mathcal{T}\\times[S]\\times[A]$.} I.e., ${x}^i$ is 
optimal for~\\eqref{eqn:individual_player_mdp} if and only if every action played with nonzero probability achieves the minimum cost-to-go~\\eqref{eqn:q_value} among available actions. \n\\end{theorem}\n\\begin{proof}\n\\change{\nLet $F({x}^i, x^{-i}) = \\sum_{t, s, a}\\int_0^{{x}^i_{tsa}}\\ell^i_{tsa}(u^i, x^{-i}) \\partial u^i_{tsa}$, then $\\partial F({x}^i, x^{-i}) \/\\partial x^i = \\ell({x}^i, x^{-i})$. We then apply Proposition A\\ref{prop:dp_to_minimizer} to~\\eqref{eqn:individual_player_mdp} and the theorem's results follow directly.}\n\\end{proof}\n\\noindent\\change{\nWhen all players jointly achieve the optimal cost-to-go~\\eqref{eqn:individual_optimality}, a stable equilibrium for unilateral optimality is achieved.\n}\n\\begin{definition}[Nash Equilibrium] \\label{def:wardrop} \nThe joint state-action distribution $\\hat{x}= [\\hat{x}^1, \\ldots, \\hat{x}^N]$~\\eqref{eqn:joint_state_action}\nis a \\emph{Nash equilibrium} if \\change{$\\big(\\hat{x}^i, Q^i(\\hat{x})\\big)$} satisfies~\\eqref{eqn:individual_optimality} for all $i \\in [N]$.\n\\end{definition}\n\\subsection{Potential optimization form}\n\\noindent We are interested in MDP congestion games that can be reduced from the coupled MDPs~\\eqref{eqn:individual_player_mdp} to a single minimization problem given by\n\\begin{equation}\n \\begin{aligned}\\label{eqn:convex_opt_eqn}\n \\min_{x^1, \\ldots, x^N} F(x), \\ \n \\text{s.t. 
} x^i \\in \\mathcal{X}(P^i, z^i_0) , \\ \\forall \\ i \\in [N],\n \n \\end{aligned}\n\\end{equation}\nwhere $F$ is the \\emph{potential function} of the corresponding game.\n\n\\begin{definition}[Potential Function]\\label{def:potential}\nWe say an MDP congestion game with {player costs} $\\{\\ell^{i}\\}_{i \\in [N]}$~\\eqref{eqn:player_cost_def} has a \\emph{potential function} $F: {\\mathbb{R}}^{N(T+1) S A}\\mapsto {\\mathbb{R}}$ if $F$ satisfies\n\\begin{equation}\\label{eqn:potential_first_order}\n \\frac{\\partial F(x)}{\\partial x^i_{tsa}} = \\ell^i_{tsa}(x), \\ \\forall \\ (i, t, s, a) \\in [N]\\times\\mathcal{T}\\times[S]\\times [A].\n \\end{equation}\n\\end{definition}\n\\noindent The following assumption on $\\{\\ell^{i}\\}_{i \\in [N]}$ is necessary and sufficient for the existence of $F$~\\cite[Eqn.2.44]{patriksson2015traffic}.\n\\begin{assumption}\\label{ass:potential_existence}\nFor all $(i, t, s, a) , (i', t', s', a') \\in [N]\\times\\mathcal{T}\\times[S]\\times [A]$, the player costs $\\{\\ell^{i}\\}_{i \\in [N]}$ satisfy\n \\begin{equation}\\label{eqn:potential_second_order}\n \n \\frac{\\partial \\ell^i_{tsa}(x)}{\\partial x^{i'}_{t's'a'}} = \\frac{\\partial \\ell^{i'}_{t's'a'}(x)}{\\partial x^i_{tsa}}.\n \\end{equation}\n\\end{assumption}\n\\begin{remark}\nAssumption~\\ref{ass:potential_existence} \\change{is equivalent to} $F$ \\change{being} conservative: $\\forall \\ x_1, x_2 \\in \\{x^i_{tsa}\\ | \\ (i, t, s, a) \\in [N] \\times \\mathcal{T} \\times [S] \\times [A]\\}$, \n\\begin{equation}\\label{eqn:potential_conservative}\n {\\partial^2 F(x)}\/{\\partial x_1\\partial x_2} = {\\partial^2 F(x)}\/{\\partial x_2\\partial x_1}.\n\\end{equation}\nIn other words, the Jacobian of $\\xi$~\\eqref{eqn:cost_vector}, ${\\partial\\xi(x)}\/{\\partial x}$, is symmetrical.\n\\end{remark}\n\\noindent Verifying the existence of $F$~\\eqref{eqn:potential_first_order} is non-trivial. 
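Since verifying~\eqref{eqn:potential_second_order} analytically can be tedious, a finite-difference test of the Jacobian of $\xi$ serves as a quick numerical screen. The sketch below uses a toy two-player cost of our own choosing (linear congestion plus a linear player-specific term, so the Jacobian is symmetric by construction); it is an illustration, not part of the model.

```python
def jacobian_is_symmetric(xi, x, eps=1e-6, tol=1e-4):
    """Finite-difference check of d(xi_j)/d(x_k) == d(xi_k)/d(x_j),
    i.e. whether the cost vector xi admits a potential function."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for k in range(n):
        xp, xm = list(x), list(x)
        xp[k] += eps
        xm[k] -= eps
        fp, fm = xi(xp), xi(xm)
        for j in range(n):
            J[j][k] = (fp[j] - fm[j]) / (2 * eps)  # central difference
    return all(abs(J[j][k] - J[k][j]) < tol for j in range(n) for k in range(n))

# Toy cost: ell^i(x) = alpha_i * y + x_i with y = alpha_1*x_1 + alpha_2*x_2,
# mimicking the congestion-plus-regularizer structure of the player costs.
alpha = (1.0, 2.0)
def xi(x):
    y = alpha[0] * x[0] + alpha[1] * x[1]
    return [alpha[0] * y + x[0], alpha[1] * y + x[1]]
```

A cost vector that fails this symmetry test (for example, one where player $1$'s cost depends on $x^2$ but not vice versa) admits no potential function.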
However, if $F$ exists, the \\change{solution} of~\\eqref{eqn:convex_opt_eqn} is the Nash equilibrium~\\cite{calderone_finite}. \n\\begin{theorem}\\label{thm:kkt_pts} \n\\change{If the player costs $\\{\\ell^{i}\\}_{i \\in [N]}$~\\eqref{eqn:player_cost_def} satisfy Assumption~\\ref{assum:positive_definite}, \n\\begin{enumerate}\n \\item the potential function (Definition~\\ref{def:potential}) exists,\n \\item $\\hat{x}$~\\eqref{eqn:joint_state_action} is the global optimal solution of~\\eqref{eqn:convex_opt_eqn} if and only if $\\hat{x}$ is a Nash equilibrium (Definition~\\ref{def:wardrop}). \n\\end{enumerate}\n}\n\\end{theorem}\n\\begin{proof}\n\\change{We prove statement $1$ by showing that Assumption~\\ref{assum:positive_definite} implies Assumption~\\ref{ass:potential_existence}: if $\\nabla \\xi(x)\\succ 0$ for all feasible joint state-action distributions $x$~\\eqref{eqn:joint_state_action}, then $\\nabla \\xi(x)$ is symmetric and satisfies~\\eqref{eqn:potential_second_order}.}\n\\change{Next, we show the forward direction of statement $2$. If $(\\hat{x}^1,\\ldots \\hat{x}^N)$ minimizes~\\eqref{eqn:convex_opt_eqn}, then for each $i\\in [N]$, $\\hat{x}^i$ minimizes~\\eqref{eqn:mdp_variant} at $\\hat{x}^{-i}$. From Proposition A\\ref{prop:dp_to_minimizer}, $\\hat{x}^i$ satisfies~\\eqref{eqn:individual_optimality} for all $i\\in [N]$, therefore $\\hat{x}$ is a Nash equilibrium. To show the reverse direction of statement $2$, if~\\eqref{eqn:individual_optimality} is satisfied for all $i \\in [N]$, then $\\hat{x}^i$ is coordinate-wise optimal for coordinate $i$ (Proposition A\\ref{prop:dp_to_minimizer}). 
Under Assumption~\\ref{assum:positive_definite},~\\eqref{eqn:convex_opt_eqn} has a strictly convex differentiable objective with separable convex constraints $\\mathcal{X}(P^i, z^i_0)$---each $x^i$ is constrained independently of $x^j$, $\\forall j \\in[N]\/\\{i\\}$, then the jointly coordinate-wise optimal $\\hat{x}$ is the global optimal solution of~\\eqref{eqn:convex_opt_eqn}~\\cite[Thm 4.1]{tseng2001convergence}. }\n\\end{proof}\n\\subsection{Path Coordination as an MDP Congestion Game}\\label{sec:path_coordination_costs}\n\\noindent We now model the path coordination problem as an MDP congestion game and demonstrate how players can achieve individual objectives while avoiding each other.\n\n\\noindent To reflect the congestion level of each state-action, we first define a \\textbf{congestion distribution} as the weighted sum of individual state-action distributions.\n\\begin{equation}\\label{eqn:congestion_distribution}\n \\textstyle y := \\sum_{i\\in[N]} \\alpha_ix^i \\in {\\mathbb{R}}^{(T+1)SA}, \\ \\alpha_i > 0, \\ \\forall i \\in [N],\n\\end{equation}\n\\noindent where $\\alpha_i$ is player $i$'s \\emph{impact factor}. If all players contribute to congestion equally, $\\alpha_i = 1 \\ \\forall i \\in [N]$. \n\n\\noindent\\textbf{Player costs}. \\change{We derive a class of player costs that satisfy Assumption~\\ref{assum:positive_definite}, incorporate congestion-based penalties, and enable players to pursue individual objectives. 
For all $(i,t,s,a) \\in [N]\\times\\mathcal{T}\\times[S]\\times[A]$, the player cost is given by }\n\\begin{equation}\\label{eqn:individual_costs}\n \\textstyle\\ell^i_{tsa}(y, x^i) = \\alpha_i f_{ts}\\big(\\sum_{a'} y_{tsa'}\\big) + \\alpha_i g_{tsa}\\big(y_{tsa}\\big) + h^i_{tsa}(x^i_{tsa}),\n\\end{equation}\nwhere \\change{$\\alpha_i$ is the same as in~\\eqref{eqn:congestion_distribution}, $f_{ts}:{\\mathbb{R}} \\mapsto{\\mathbb{R}}$ is the state-dependent congestion and takes the congestion level of $(t,s)$ as input, $g_{tsa}:{\\mathbb{R}} \\mapsto{\\mathbb{R}}$ is the state-action-dependent congestion and takes the congestion level of $(t,s,a)$ as input, and $h^i_{tsa}:{\\mathbb{R}}\\mapsto{\\mathbb{R}}$ is the player-specific objective and takes player $i$'s probability of being in $(t,s,a)$ as input. Player-specific objectives such as obstacle avoidance and target reachability can be incorporated as constant offsets in $h^i$}. \n\\begin{remark}[Effect of $\\alpha_i$]\nThe impact factor $\\alpha_i$ scales player $i$'s \\change{relative impact on the total congestion and the total congestion's impact on player $i$. When $\\alpha_i < \\alpha_j$, player $i$ impacts congestion less and cares about the congestion less than player $j$. When $\\alpha_i > \\alpha_j$, player $i$ impacts congestion more and cares about the congestion more than player $j$. 
}\n\\end{remark}\n\\noindent The potential function~\\eqref{eqn:potential_first_order} of the game with costs~\\eqref{eqn:individual_costs} is\n\\begin{equation}\n \\begin{aligned}\\label{eqn:state_state_action_potential}\n F(x)= & \n\\textstyle \\sum_{t,s} \\int_0^{\\sum_{a'}y_{tsa'}}f_{ts}(u) \\partial u + \\sum_{t,s,a} \\int_0^{y_{tsa}}g_{tsa}(u) \\partial u \\\\\n & \\textstyle \n + \\sum_{i, t, s, a} \\int_{0}^{x^i_{tsa}} h^i_{tsa} (u)\\partial u.\n\\end{aligned} \n\\end{equation}\n\\begin{remark}\n\\change{Congestion costs $f$ and $g$ must be identical for all players in order for a potential (Definition~\\ref{def:potential}) to exist.}\n\\end{remark}\n\\begin{example}[Road-sharing Vehicles]\n\\change{\nConsider a sedan (player $1$, $\\alpha_1=1$) and a trailer (player $2$, $\\alpha_2=2$) sharing a road network modeled by $[S]\\times [A]$. Player $i$ wants to reach state $s_i \\in [S]$. The player-specific objective is $h^i_{tsa}(x^i_{tsa}) = -\\mathbb{1}[s = s_i] + \\epsilon_i x^i_{tsa}$, where $\\mathbb{1}[w]$ is $1$ when $w$ is true and $0$ otherwise. The term $\\epsilon_i x^i_{tsa}$ where $\\epsilon_i > 0$ encourages player $i$ to randomize its policy over all optimal actions. Players experience state-based congestion as $f_{ts}(w) =\\exp(w)$. The player cost~\\eqref{eqn:individual_costs} is $\\ell^i_{tsa}(y, x^i) = \\alpha_i\\exp(\\sum_{a'} y_{tsa'}) + \\epsilon_ix^i_{tsa} -\\mathbb{1}[s = s_i]$. 
}\n\\end{example}\n\n\\begin{corollary}\\label{cor:unique_ne}\nPlayer costs of form~\\eqref{eqn:individual_costs} \\change{satisfy Assumption~\\ref{assum:positive_definite} if $h^i_{tsa}(\\cdot)$ is strictly increasing and $f_{ts}(\\cdot)$, $g_{tsa}(\\cdot)$ are non-decreasing} $\\forall (i,t,s,a)\\in [N]\\times\\mathcal{T}\\times[S]\\times[A]$.\n\\end{corollary}\n\\begin{proof}\n \\change{Let $I_Z$ be the identity matrix of size $Z\\times Z$, $\\mathbf 1_Z$ be a ones vector of size $Z\\times 1$, $\\vec{\\alpha} = [\\alpha_1,\\ldots,\\alpha_N] \\in {\\mathbb{R}}^{N\\times 1}$, $h(x) = [h^1(x),\\ldots,h^N(x)] \\in {\\mathbb{R}}^{N(T+1)SA}$, and $\\otimes$ be the Kronecker product. We define the matrices $M = \\vec{\\alpha}^\\top \\otimes I_{(T+1)SA}$ and $J = (I_{(T+1)S}\\otimes \\mathbf 1_A^\\top)M$, and verify that $Mx = y$, $[Jx]_{ts} = \\sum_{a'}y_{tsa'}$ $\\forall (t,s) \\in \\mathcal{T}\\times[S]$, and $\\xi(x) = J^\\top f(Jx) + M^\\top g(Mx) + h(x)$. Letting $w = Jx$, we can write $\\xi$'s Jacobian as\n$\n\\textstyle \\nabla \\xi(x) = \n J^\\top \\nabla f(w) J + \n M^\\top \\nabla g(y) M + \\nabla h(x).\n $\nUnder the corollary's assumptions,\n$\\nabla f(w)$ and $\\nabla g(y)$ are non-negative diagonal matrices and $\\nabla h(x)$ is a strictly positive diagonal matrix. Therefore}, $\\nabla \\xi(x) \\succ 0$. \n\\end{proof}\n\\begin{remark}\nCorollary~\\ref{cor:unique_ne} implies that a strictly increasing $h^i$ is crucial to ensuring a unique Nash equilibrium. \nTherefore, $h^i$ can be interpreted as a regularization term.\n\\end{remark}\n\\subsection{Frank-Wolfe Learning Dynamics}\n\\noindent We find the Nash equilibrium of MDP congestion games by leveraging single-agent dynamic programming. 
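The aggregation operators $M$ and $J$ from the proof of Corollary~\\ref{cor:unique_ne} can be sanity-checked numerically. A minimal NumPy sketch (toy dimensions and random densities are illustrative assumptions; $M$ is realized as the block row $[\\alpha_1 I, \\ldots, \\alpha_N I]$ so that $Mx = y$):

```python
import numpy as np

# Toy sizes; purely illustrative assumptions, not values from the paper.
N, T1, S, A = 3, 2, 4, 2           # players, horizon T+1, states, actions
d = T1 * S * A                     # length of one player's distribution x^i
alpha = np.array([1.0, 2.0, 0.5])  # impact factors alpha_i
rng = np.random.default_rng(0)
x = rng.random(N * d)              # stacked distributions x = (x^1, ..., x^N)

# M aggregates players into the congestion y = sum_i alpha_i x^i,
# realized as the block row [alpha_1 I, ..., alpha_N I].
M = np.kron(alpha.reshape(1, N), np.eye(d))
# J additionally sums over actions: [J x]_{ts} = sum_a y_{tsa}.
J = np.kron(np.eye(T1 * S), np.ones((1, A))) @ M

y = M @ x
y_direct = (alpha[:, None] * x.reshape(N, d)).sum(axis=0)
assert np.allclose(y, y_direct)                              # M x = y
assert np.allclose(J @ x, y.reshape(T1 * S, A).sum(axis=1))  # [J x]_{ts}
```

Both identities hold for any stacking of $x$ in which the action index varies fastest, matching the Kronecker block structure.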
\n\\begin{algorithm}[ht!]\n\\caption{Frank-Wolfe with dynamic programming}\n\\begin{algorithmic}[1]\n\\Require \\(\\{\\ell^i\\}_{i\\in[N]}\\), \\(\\{P^i\\}_{i\\in[N]}\\), \\(\\{z^i_0\\}_{i\\in[N]}\\), \\(N\\), \\([S], [A], \\mathcal{T}\\).\n\\Ensure \\(\\change{\\{\\hat{x}^{i}_{tsa}\\}_{i\\in[N], t \\in \\mathcal{T}, s\\in[S], a \\in [A]}}\\).\n\\State{\\( x^{i0}\\in\\mathcal{X}(P^i, z^i_0) \\subset {\\mathbb{R}}^{(T+1)SA}, \\quad \\forall \\ i \\in [N]. \\)}\n\n\\For{\\(k = 1, 2, \\ldots, \\)}\n\\For{\\(i = 1,\\ldots,N\\)}\n \\State{\\(\\change{C^{ik}} = \\ell^i([x^{1k},\\ldots, x^{Nk}])\\)}\\label{alg:cost_retrieval}\n\t\\State{\\(\\pi^i\\) = \\text{MDP}(\\change{\\(C^{ik}\\)}, \\(P^i\\), \\([S]\\), \\([A]\\), \\(T\\), \\change{\\(z^i_0\\)})}\\label{alg:mdp}\n\t\\State{\\(b^{ik} = \\) \\Call{RetrieveDensity}{\\(P^i\\), \\(z^i_0\\), \\(\\pi^i\\)}}\\Comment{Alg.~\\ref{alg:density}}\n\t\\State{\\(x^{i(k+1)}= (1 - \\frac{2}{k+1})x^{ik} + \\frac{2}{k+1} b^{ik}\\)}\n\\EndFor\n\\EndFor\n\\end{algorithmic}\n\\label{alg:frank_wolfe}\n\\end{algorithm}\n\n\\noindent In Algorithm~\\ref{alg:frank_wolfe}, each player can access an \\emph{oracle} \\change{that} returns the cost for a given joint state-action distribution. \n\\change{In line~\\ref{alg:mdp}, $\\pi^i \\in [A]^{(T+1)S}$ is any deterministic policy that solves the finite-time MDP with cost $C^{ik}$, transition probability $P^i$, and initial distribution $z^i_0$. 
We use value iteration to recursively find $\\pi^i$ as}\n\\change{\\begin{equation}\n\\begin{aligned}\\label{eqn:value_iteration}\n V^i_{Ts} & = \\textstyle\\min_{a}C^{ik}_{Tsa}, \\ \\pi^i_{Ts} \\in \\mathop{\\rm argmin}_{a}C^{ik}_{Tsa}, \\\\\n V^i_{(t-1)s} & \\textstyle = \\min_{a} \\big(C^{ik}_{(t-1)sa} + {\\sum_{s'}}P^i_{ts'sa}V^i_{ts'}\\big) \\ \\forall t \\in [T], \\\\\n \\pi^i_{(t-1)s} & \\textstyle \\in \\mathop{\\rm argmin}_{a} \\big(C^{ik}_{(t-1)sa} + {\\sum_{s'}}P^i_{ts'sa}V^i_{ts'}\\big) \\ \\forall t \\in [T].\n\\end{aligned}\n\\end{equation}}\n\\noindent \\change{Algorithm~\\ref{alg:frank_wolfe}} then retrieves the corresponding state-action density $b^{ik}$ via Algorithm~\\ref{alg:density} and \\change{combines it} with the current state-action density $x^{ik}$ to derive the next joint state-action density. All steps within lines 4 to 7 are parallelizable.\n\n\\begin{algorithm}[ht!]\n\\caption{Retrieving \\change{state-action distribution} from \\change{$\\pi$}}\n\\begin{algorithmic}[1]\n\\Require \\(P\\), \\(z\\), \\(\\pi\\).\n\\Ensure \\(\\{d_{tsa}\\}_{t \\in\\mathcal{T}, s\\in[S], a\\in[A]}\\)\n\\State{\\(d_{0s\\pi_{0s}} = z_s, \\ \\forall s \\in [S]\\)}\n\\For{\\(t=1, \\ldots, T\\)}\n\t\t \\State{\\(d_{ts(\\pi_{ts})} = \\textstyle\\sum_a\\sum_{s'} P_{tss'a}d_{(t-1)s'a}, \\ \\forall\\ s \\in [S]\\)}\n\\EndFor\n\\end{algorithmic}\n\\label{alg:density}\n\\end{algorithm}\n\n\\begin{theorem}\nUnder Assumption~\\ref{assum:positive_definite}, \nAlgorithm~\\ref{alg:frank_wolfe} converges to the Nash equilibrium \\change{$\\hat{x} = (\\hat{x}^1,\\ldots, \\hat{x}^N)$} as \n\\begin{equation}\\label{eqn:alg_convergence}\n \\textstyle \\frac{\\alpha}{2}\\sum_{i\\in[N]} \\norm{x^{ik} - \\change{\\hat{x}^{i}}}^2_2 \\leq \\frac{2C_F}{k+2} \n\\end{equation}\nwhere $C_F$ is the potential function $F$'s~\\eqref{eqn:potential_first_order} \\emph{curvature constant} given by\n\\[ C_F = \\underset{\\substack{x^i, s^i \\in \\mathcal{X}(P^i, z^i_0) \\\\ \\gamma \\in [0, 
1]\\\nw^i = x^i + \\gamma(s^i - x^i)}}{\\sup} \\frac{2}{\\gamma^2} \\Big(F(\\change{w}) - F(x) - \\sum_{i \\in [N]}(w^i - x^i)^\\top \\ell^i(x)\\change{\\Big)}.\\]\n\\end{theorem}\n\\begin{proof}\nAlgorithm~\\ref{alg:frank_wolfe} is a straightforward implementation of~\\cite[Alg.2]{jaggi2013revisiting}.\n \\change{From Assumption~\\ref{assum:positive_definite}, $\\nabla \\xi(\\hat{x}) \\succ 0$. Therefore}, the potential function $F$ is \\change{strongly} convex and satisfies $ \\frac{\\alpha}{2}\\sum_{i\\in[N]} \\norm{x^{ik} - \\change{\\hat{x}^{i}}}^2_2 \\leq F(x^{k}) - F(\\hat{x})$. Equation~\\eqref{eqn:alg_convergence} then follows directly from~\\cite[Thm.1]{jaggi2013revisiting}.\n\\end{proof}\n\\begin{remark}[Scalability]\nAlgorithm~\\ref{alg:frank_wolfe} has linear complexity in the number of players.\n\\end{remark}\n\\subsection{Population Perspective: Numerical Method}\\label{sec:dualFormulation}\n\nAfter the social planner has offered incentives, the population plays the Wardrop equilibrium defined by the modified rewards~\\eqref{eq:generalReward}; this equilibrium can be computed using the Frank Wolfe (FW) method~\\cite{freund2016new}, given in Algorithm~\\ref{alg:MSA}, with known optimal variables $\\{\\tau_i^{\\star}\\}$. \n\nFW is a numerical method for convex optimization problems with continuously differentiable objectives and compact feasible sets \\cite{powell1982convergence}, including routing games. One advantage of this learning paradigm is that the population does not need to know the function $r(\\cdot)$. Instead, they simply react to the realized rewards of the previous game at each iteration. It also provides an interpretation for how a Wardrop equilibrium might be asymptotically reached by agents in MDPCG in an online fashion.\n\nAssume repeated game play, where players execute a fixed strategy determined at the start of each game. At the end of each game $k$, the rewards of game $k$, based on $y^k_{tsa}$, are revealed to all players. 
FW models the population as having two sub-types: \\emph{adventurous} and \\emph{conservative}. Upon receiving reward information $\\ell_{tsa}(y^{k}_{tsa})$, the adventurous population decides to change its strategy while the conservative population does not. To determine its new strategy, the adventurous population uses value iteration on the latest reward information---i.e.~Algorithm \\ref{alg:valueIteration}---to compute a new optimal policy. Their resultant density trajectory is then computed using Algorithm \\ref{alg:policy iteration}. The step size at each iteration equals the fraction of the total population that switches strategy. \nThe stopping criterion for the FW algorithm is determined by the Wardrop equilibrium notion---that is, as the population iteratively gets closer to an optimal strategy, the marginal increase in potential decreases to zero.\n\\begin{algorithm}\n\\caption{Value Iteration Method}\n\\begin{algorithmic}[h]\n\\Require \\(r\\), \\(P\\).\n\\Ensure \\( \\{\\pi^{\\star}_{ts}\\}_{ t \\in [T], \\, s\\in \\mathcal{S}} \\)\n\\For{\\(t=T, \\ldots, 1\\)}\n\t\t\\State{\\(\\displaystyle V_{ts}= \\underset{a\\in\\mathcal{A}}{\\mbox{max}}\\, Q_{tsa}\\), \\(\\displaystyle \\pi^{\\star}_{ts} = \\underset{a\\in\\mathcal{A}}{\\mbox{argmax}}\\, Q_{tsa},\\ \\forall \\ s \\in \\mathcal{S}\\)} \\Comment{Eqn.~\\eqref{eq:qvalue}}\n\\EndFor\n\\end{algorithmic}\n\\label{alg:valueIteration}\n\\end{algorithm}\n\\begin{algorithm}\n\\caption{Retrieving density trajectory from a policy}\n\\begin{algorithmic}[h]\n\\Require \\(P\\), \\(p\\), \\(\\pi\\).\n\\Ensure \\(\\{d_{tsa}\\}_{t \\in[T], s\\in\\mathcal{S}, a\\in\\mathcal{A}}\\)\n\\State{\\(d_{tsa} = 0, \\ \\forall \\ t \\in [T], s \\in \\mathcal{S}, a \\in \\mathcal{A}\\)}\n\\State{\\(d_{1s\\pi_{1s}} = p_s, \\ \\forall s \\in \\mathcal{S}\\)}\n\\For{\\(t=2, \\ldots, T\\)}\n\t\t \\State{\\(d_{ts(\\pi_{ts})} = \\sum\\limits_{a \\in \\mathcal{A}}\\sum\\limits_{s' \\in \\mathcal{S}} 
P_{t-1,ss'a}d_{t-1,s'a}, \\ \\forall\\ s \\in \\mathcal{S} \\)}\n\\EndFor\n\\end{algorithmic}\n\\label{alg:policy iteration}\n\\end{algorithm}\n\\begin{algorithm}\n\\caption{Frank Wolfe Method with Value Iteration}\n\\begin{algorithmic}[h]\n\\Require \\(\\bar{\\ell}\\), \\(P\\), \\(p_s\\), \\(N\\), \\(\\epsilon\\).\n\\Ensure \\(\\{y^{\\star}_{tsa}\\}_{t \\in [T], s\\in\\mathcal{S}, a \\in \\mathcal{A}}\\).\n\\State{\\( y^0 = 0 \\in {\\mathbb{R}}^{T \\times|\\mathcal{S}| \\times |\\mathcal{A}|} \\)}\n\\For{\\(k = 1, 2, \\ldots, N\\)}\n \\State{\\(c^k_{tsa} = \\bar{\\ell}_{tsa}(y^{k-1})\\), \\(\\quad \\forall \\,\\, t \\in [T], s \\in \\mathcal{S}, a \\in \\mathcal{A}\\)}\n\t\\State{\\(\\pi_{ts} = \\) \\Call{ValueIteration}{\\(c^k\\), \\(P\\)}}\n\t\\Comment{Alg.~\\ref{alg:valueIteration}}\n\t\\State{\\(d^k\\) = \\Call{RetrieveDensity}{\\(P, p_s, \\pi_{ts}\\)}}\\Comment{Alg.~\\ref{alg:policy iteration}}\n\n\t\\State{\\( y^k = (1 - \\frac{2}{k+1})y^{k-1} + \\frac{2}{k+1} d^k\\)}\n\t\\State{Stop if }\n\t\\State{\\(\\quad \\sum\\limits_{t\\in[T]} \\sum\\limits_{s\\in\\mathcal{S}} \\sum\\limits_{a\\in\\mathcal{A}}\\Big(c^k_{tsa} - c^{k-1}_{tsa}\\Big)^2 \\leq \\epsilon\\)}\n\\EndFor\n\\end{algorithmic}\n\\label{alg:MSA}\n\\end{algorithm}\n\nIn contrast to implementations of FW in the routing game literature, Algorithm \\ref{alg:MSA}'s descent direction is determined by solving an MDP \\cite[Section~4.5]{puterman2014markov} as opposed to a shortest path problem from origin to destination \\cite[Sec.4.1.3]{patriksson2015traffic}.\nAlgorithm~\\ref{alg:MSA} is guaranteed to converge to a Wardrop equilibrium if the predetermined step sizes decrease to zero as a harmonic series \\cite{freund2016new}---e.g., $\\frac{2}{k+1}$.\nFW with predetermined step sizes has been shown to have sub-linear worst-case convergence in routing games \\cite{powell1982convergence}. 
\nOn the other hand, replacing fixed step sizes with optimal step sizes found by a line search method leads to a much better convergence rate.\n\n\n\\section{Problem Formulation}\\label{problemFormulation}\nConsider a finite horizon, Markov Decision Process (MDP) congestion game, where each player individually solves an MDP to maximize its expected gain while competing for global resources. We assume that all players have the same set of possible actions and stochastic transitions. This type of finite horizon, Markov Decision Process congestion game (MDPCG) is formulated as a convex optimization problem in \\cite{MDPCongestionGame}. \n\nIn this paper, we develop a dual interpretation of the role that state constraints play in MDPCG: from a social planner's perspective and from the individual players' perspective. Players will attempt to maximize their individual objectives by finding an optimal sequence of actions, while a social planner strives for game-level objectives, such as maximizing total gain or enforcing state constraints. To achieve these objectives, the planner cannot impose explicit constraints, but instead can adjust individual objectives to incentivize\/discourage certain actions. A toll\/reward method is developed for implicitly realizing global state-dependent constraints.\n\nThe rest of the paper is organized as follows: in Section \\ref{notation}, we introduce the convex optimization model of state-constrained MDPCG. Section \\ref{convergence} connects the constrained optimal solution to the Wardrop equilibrium with respect to a generalized cost function. In Section \\ref{dualFormulation}, we perform simulations of repeated game play using a fixed-step-size Frank Wolfe Method. 
The simulations show that when the cost functions are adjusted for tolls, players indeed converge to the optimal strategy that does not violate the corresponding state constraints.\n\n\n\\section{Related work}\n\\label{sec:related work}\n\nAn MDP congestion game~\\cite{calderone2017infinite} is a stochastic population game and is related to potential mean field games \\cite{lasry2007mean,gueant2011infinity} in the discrete time and state-action space \\cite{gomes2010discrete} and mean field games on graphs \\cite{gueant2015existence}. \n\\change{In this paper, we extend our previous framework from continuous populations of identical MDP decision makers~\\cite{calderone2017infinite} to a finite number of heterogeneous MDP decision makers.} In the continuous population case, MDP congestion games have been analyzed for constraint satisfaction in~\\cite{li2019tolling} and sensitivity to \\change{hyper}parameters in~\\cite{li2019sensitivity}. \n\nModel-based multi-agent \\change{path} planning is typically solved via graph-based searches~\\cite{cohen2019optimal} and mixed-integer linear programming~\\cite{chen2020scalable}. Recently, reinforcement learning has been introduced as a viable method for solving multi-agent \\change{path} planning~\\cite{semnani2020multi,yun2020multi}. In most scenarios, the \\change{path} planning problem is modeled as an MDP~\\cite{bayerlein2021multi,lo2021towards}. In particular, \\cite{lo2021towards} adopts a stochastic game model for human-robot collision avoidance, but focuses on algorithm development rather than game structure analysis. 
\n\n\\section{Reward Uncertainty}\nHere we introduce slightly different notation: \n\\[y_{ts} = \\begin{pmatrix}\ny_{tsa_1} \\\\ y_{tsa_2}\\\\\\vdots\\\\y_{tsa_m}\\\\\n\\end{pmatrix} \\in {\\mathbb{R}}^{|\\mathcal{A}|} \\]\n\\[\n\\ell_{ts}(y_{ts}) = \\begin{pmatrix}\n\\ell_{tsa_1}(y_{tsa_1}) \\\\ \\ell_{tsa_2}(y_{tsa_2})\\\\\\vdots\\\\\\ell_{tsa_m}(y_{tsa_m})\\\\\n\\end{pmatrix} \\in {\\mathbb{R}}^{|\\mathcal{A}|}\n\\]\nThe optimization problem can be rewritten in the new vector variable form: \n\\begin{equation}\n\\begin{aligned}\n\\underset{y}{\\mbox{min.}} & \\sum\\limits_{t\\in[T]}\\sum\\limits_{s\\in\\mathcal{S}} \\int_0^{y_{ts}}\\ell_{ts}(x)dx\\\\\n\\mbox{s.t.} & \\mathbf 1^T y_{t+1, s} = \\sum\\limits_{s'\\in\\mathcal{S}}P_{s\\cdot s'}^Ty_{ts'}, \\quad t\\in[T-1],\\\\\n&\\mathbf 1^Ty_{0s}=p_s,\\\\\n&y_{ts}\\geq 0,\\quad \\forall s\\in\\mathcal{S}, t\\in[T]\n\\end{aligned}\\label{mdp game vector}\n\\end{equation}\n\n\\subsection{Uncertainty Model}\nFor each individual solving the problem, the reward model may be entirely experience-based and therefore prone to uncertainty. \n\nHere, we model this uncertainty as follows: for each $(t,s)$, the reward can lie anywhere within an ellipsoid whose center is state dependent,\n\\begin{equation}\n\\begin{aligned}\n\\ell_{ts}(x) &\\in \\mathcal{E}_{ts} \\quad \\forall \\,\\, x \\in [0,1] \\\\\n& \\in \\{R_{ts}x + Q_{ts}u_{ts}\\, | \\quad \\norm{u_{ts}}_2 \\leq 1\\} \\\\\n\\end{aligned}\n\\end{equation}\nwhere $R_{ts} \\in {\\mathbb{R}}^{|\\mathcal{A}| \\times |\\mathcal{A}|}$ is a time- and state-dependent matrix. 
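The worst-case reduction that follows relies on the closed-form supremum $\\sup_{\\norm{u}_2 \\leq 1} u^\\top Q_{ts}^\\top y_{ts} = \\norm{Q_{ts}^\\top y_{ts}}_2$, attained at $u^\\star = Q_{ts}^\\top y_{ts} / \\norm{Q_{ts}^\\top y_{ts}}_2$ by Cauchy--Schwarz. A small NumPy sketch (toy sizes and random data are illustrative assumptions) verifies this identity:

```python
import numpy as np

# Toy instance of the ellipsoidal uncertainty set (sizes and data are
# illustrative assumptions, not values from the paper).
rng = np.random.default_rng(1)
m = 5                      # number of actions |A| at a fixed (t, s)
Q = rng.random((m, m))     # uncertainty shape Q_{ts}
y = rng.random(m)          # state-action density y_{ts}

# sup_{||u||_2 <= 1} u^T Q^T y is attained at u* = Q^T y / ||Q^T y||_2,
# with optimal value ||Q^T y||_2 (Cauchy--Schwarz).
v = Q.T @ y
u_star = v / np.linalg.norm(v)
assert np.isclose(u_star @ v, np.linalg.norm(v))

# No feasible u exceeds the closed-form supremum.
for _ in range(200):
    u = rng.standard_normal(m)
    u /= max(1.0, np.linalg.norm(u))  # project into the unit ball
    assert u @ v <= np.linalg.norm(v) + 1e-9
```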
\n\nConsidering the worst-case scenario, the objective can be rewritten as: \n\\begin{equation}\n\\begin{aligned}\n&\\sup\\{\\mathbf 1^T\\sum\\limits_{t\\in[T]}\\sum\\limits_{s\\in\\mathcal{S}} \\int_0^{y_{ts}}\\ell_{ts}(x)dx\\} \\\\\n= &\\sum\\limits_{t\\in[T]}\\sum\\limits_{s\\in\\mathcal{S}} \\mathbf 1^T\\int_0^{y_{ts}}R_{ts}xdx + \\sup\\{\\mathbf 1^T\\int_0^{y_{ts}}Q_{ts}u dx \\mid \\norm{u}_2 \\leq 1\\} \\\\\n\\leq &\\sum\\limits_{t\\in[T]}\\sum\\limits_{s\\in\\mathcal{S}}\\frac{1}{2}\\norm{R_{ts}{y_{ts}}}^2_2 + \\sup\\{\\norm{u_{ts}^TQ_{ts}^Ty_{ts}}_2 \\mid \\norm{u}_2 \\leq 1\\} \\\\\n\\leq &\\sum\\limits_{t\\in[T]}\\sum\\limits_{s\\in\\mathcal{S}}\\frac{1}{2}\\norm{R_{ts}{y_{ts}}}^2_2 +\\norm{Q_{ts}^Ty_{ts}}_2 \\\\\n\\end{aligned}\n\\end{equation} \nThen the optimization problem becomes: \n\\begin{equation}\n\\begin{aligned}\n\\underset{y}{\\mbox{min.}} & \\sum\\limits_{t\\in[T]}\\sum\\limits_{s\\in\\mathcal{S}}\\frac{1}{2}\\norm{R_{ts}{y_{ts}}}^2_2 +\\norm{Q_{ts}^Ty_{ts}}_2 \\\\\n\\mbox{s.t.} & \\mathbf 1^T y_{t+1, s} = \\sum\\limits_{s'\\in\\mathcal{S}}P_{s\\cdot s'}^Ty_{ts'}, \\quad t\\in[T-1],\\\\\n&\\mathbf 1^Ty_{0s}=p_s,\\\\\n&y_{ts}\\geq 0,\\quad \\forall s\\in\\mathcal{S}, t\\in[T]\n\\end{aligned}\\label{mdp game vector robust}\n\\end{equation}\n\nThe Lagrangian becomes\n\\begin{equation}\n\\begin{aligned}\nL(y_{ts}, V_{ts}, \\mu_{ts}) & = \\sum\\limits_{t\\in[T]}\\sum\\limits_{s\\in\\mathcal{S}}\\frac{1}{2}\\norm{R_{ts}y_{ts}}^2_2 +\\norm{Q_{ts}^Ty_{ts}}_2 \\\\\n& + \\sum\\limits_{t \\in [1,T]}\\sum\\limits_{s\\in\\mathcal{S}} V_{ts}\\Big(\\sum\\limits_{s' \\in \\mathcal{S}}P^T_{s\\cdot s'} y_{t-1,s'} - \\mathbf 1^Ty_{ts}\\Big) \\\\\n& + \\sum\\limits_{s \\in \\mathcal{S}}V_{0s}\\Big(p_s - \\mathbf 1^Ty_{0s}\\Big) \\\\\n& - \\sum\\limits_{s \\in \\mathcal{S}}\\sum\\limits_{t \\in[T]}\\mu_{ts}^Ty_{ts} \\\\\n\\end{aligned}\n\\end{equation}\n\nThe corresponding KKT conditions for the value iteration are: \n\n\\begin{equation}\n\\begin{aligned}\n&t = 0 
\\cdots T-1: \\\\\n&\\quad V_{ts}\\mathbf 1 = -\\mu_{ts} + R_{ts}y_{ts} + \\frac{1}{\\norm{Q_{ts}^Ty_{ts}}_2}Q_{ts}Q_{ts}^Ty_{ts}\\\\\n& \\quad\\quad + \\sum_{s'}P_{s'\\cdot s}V_{t+1,s'} \\\\\n&t = T: \\\\\n&\\quad V_{Ts}\\mathbf 1 = -\\mu_{Ts} + R_{Ts}y_{Ts} + \\frac{1}{\\norm{Q_{Ts}^Ty_{Ts}}_2}Q_{Ts}Q_{Ts}^Ty_{Ts}\\\\\n\\end{aligned} \\label{robust kkt condition}\n\\end{equation} \n\nWe can estimate the worst case of $\\frac{1}{\\norm{Q_{ts}^Ty_{ts}}_2}Q_{ts}Q_{ts}^Ty_{ts}$ by the following inequality, based on the Cauchy--Schwarz inequality: \n\\[\\frac{1}{\\norm{Q_{ts}^Ty_{ts}}_2}Q_{ts}Q_{ts}^Ty_{ts} \\geq -\\norm{Q_{ts}}_2 \\mathbf 1 \\]\n\nThe worst-case dual problem is then: \n\\begin{equation}\n\\begin{aligned}\n\\underset{y}{\\mbox{min.}} & \\sum\\limits_{t\\in[T]}\\sum\\limits_{s\\in\\mathcal{S}}\\frac{1}{2}\\norm{R_{ts}{y_{ts}}}^2_2 - y^T_{ts}R_{ts}y_{ts}+ \\sum\\limits_{s \\in \\mathcal{S}}p_sV_{0s} \\\\\n\\mbox{s.t.} & V_{ts}\\mathbf 1 \\leq R_{ts}y_{ts} - \\norm{Q_{ts}}_2 \\mathbf 1 + \\sum_{s'}P_{s'\\cdot s}V_{t+1,s'} \\\\\n&V_{Ts}\\mathbf 1 \\leq R_{Ts}y_{Ts} - \\norm{Q_{Ts}}_2 \\mathbf 1 \\\\\n\\end{aligned}\\label{mdp game dual}\n\\end{equation}\nNotice that since $\\norm{Q_{ts}}_2 = \\sigma_{\\max}(Q_{ts})$, the largest singular value of $Q_{ts}$, this is equivalent to offsetting the reward function by the worst possible variation, obtaining a robust reward function $R_{ts} - \\sigma_{\\max}(Q_{ts})$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}