\\section{The Physics and Status of Core Collapse Supernova Simulations}\\label{sec:foundation}\n\nCore collapse supernovae (CCSNe) are initiated by the collapse of the iron cores of massive stars at the ends of their lives. The collapse proceeds to ultrahigh densities, in excess of the densities of nucleons in the nucleus of an atom (``super-nuclear'' densities). The inner core becomes incompressible under these extremes, bounces, and, acting like a piston, launches a shock wave into the outer stellar core. This shock wave will ultimately propagate through the stellar layers beyond the core and disrupt the star in a CCSN explosion. However, the shock stalls in the outer core, losing energy as it plows through it, and exactly how the shock wave is revived remains the central open question in CCSN theory, although important progress is being made, particularly with two-dimensional (2D) models. (For a more complete review, the reader is referred to \\cite{Mezz05}, \\cite{Jank12}, and \\cite{KoTaSu12}.) \n\nAfter core bounce, $\\sim10^{53}$~ergs of energy in neutrinos and antineutrinos of all three flavors are released from the newly formed proto-neutron star (PNS) at the center of the explosion. \nThe typical observationally estimated CCSN explosion energy is $\\sim 10^{51}$~ergs ($\\equiv1$~Bethe), with estimates for individual supernovae ranging from 0.3--5~Bethe \\cite{Hamu03,NoToUm06,Smar09}.\nPast simulations \\cite{Wils85,BeWi85} demonstrated that energy in the form of neutrinos emerging from the PNS can be deposited behind the shock and may revive it. \nThis neutrino reheating is central to CCSN models today. 
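A back-of-envelope comparison of the two energy scales quoted above shows why the efficiency requirement on the neutrino mechanism is modest: only about one percent of the radiated neutrino energy must end up behind the shock. The sketch below is illustrative arithmetic only, not simulation output.

```python
# Back-of-envelope efficiency of the neutrino mechanism, using the
# energy scales quoted above (illustrative arithmetic only).
E_nu = 1.0e53    # erg, radiated in neutrinos of all flavors after bounce
E_expl = 1.0e51  # erg, canonical observed explosion energy (1 Bethe)

# Fraction of the radiated neutrino energy that must be deposited
# behind the stalled shock to power a canonical explosion.
efficiency = E_expl / E_nu
print(f"required net efficiency ~ {efficiency:.1%}")  # prints "required net efficiency ~ 1.0%"
```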
\nHowever, while a prodigious amount of neutrino energy emerges from the PNS, the neutrinos are weakly coupled to the material below the shock. \nThe neutrino heating is very sensitive to the distribution of neutrinos in energy (or frequency) and direction of propagation at any given spatial point behind the shock \n\\cite{BuGo93,JaMu96,MeMeBr98,MeCaBr98b,MeLiMe01,Jank01}.\nRealistic CCSN simulations require a neutrino transport method that can reproduce the angular and energy distributions of the neutrinos in the critical heating region.\n\nNormal iron-core stars do not explode when modeled in spherical symmetry \\cite[cf.,][]{LiMeTh01a,RaJa02,ThBuPi03}, thus multidimensional effects are required. \nFluid instabilities ({\\it e.g.}, convection) in the PNS may boost the luminosity of this central neutrino source and consequent neutrino heating \\cite{SmWiBa81,WiMa93,MiPoUr02,BrRaMe04,BuRaJa06}. \nNeutrino-driven convection between the PNS and the shock fundamentally alters the nature of energy flow and shock revival \\cite{HeBeHi94,BuHaFr95,JaMu96,FrWa04,BuRaJa06,BrDiMe06} relative to the spherically symmetric case, allowing simultaneous down-flows that fuel the neutrino luminosities and neutrino-heated up-flows that bring energy to the shock. \nThe standing accretion shock instability (SASI), a computationally discovered instability of the shock wave itself \\cite{BlMeDe03}, dramatically alters the shock and explosion dynamics \n\\cite{BlMeDe03,JaBuKi05,BuLiDe06,OhKoYa06,HaMuWo13}. Recent axisymmetric (2D) models \\cite{MuJaHe12,BrMeHi13} demonstrate that neutrino heating, in conjunction with neutrino-driven convection and the SASI, is able to generate explosions, although the quantitative predictions --- in particular, the explosion energies --- differ between these two groups. 
However, it is important to note that our predictions are consistent with observations \\cite{BrLeHi14} across a range of observables: explosion energy, $^{56}$Ni mass, neutron star mass, and neutron star kicks.\nDespite these differences, these advances suggest that the SASI may be the ``missing link'' that will enable the Wilson delayed-shock, neutrino-heating mechanism to operate successfully in multiple spatial dimensions, especially for more massive progenitors. \n\nThere are many other inputs to the physics of the CCSN mechanism that must also be included in simulations. The strength of these effects has been tested in many one-dimensional (1D) simulations and some multidimensional simulations.\nThe PNS in a developing CCSN is sufficiently compact to require the inclusion of general relativistic effects in gravity and neutrino propagation \\cite{BaCoKa85,LiMeTh01a,LiMeTh01b,BrDeMe01,MaDiJa06,OtDiMa07,MuJaDi10,LeMeMe12a,MuJaMa12}.\nGetting the correct radiative coupling requires inclusion of all neutrino--matter interactions (opacities) that affect the neutrino transport, heating, and cooling. Several recent studies have considered the effects of neutrino opacities, including inelastic scattering of neutrinos on electrons, nucleons, and nuclei, detailed nuclear electron capture, and nuclear medium effects on the neutrino interactions \\cite{HiMeBr03,BuJaKe03,KeRaJa03,ThBuPi03,MaJaBu05,MaLiFr06,LaMaMu08,JuLaHi10,RoReSh12,LeMeMe12b}.\nA nuclear equation of state for both nuclear matter in the PNS and the nuclei and nucleons in the surrounding matter is required. 
Several equations of state have been proposed \\cite{BeBrAp79,ElHi80,Coop85,LaSw91,WiMa93,ShToOy98b,HeSc10,ShHoTe11,StHeFi13} and their impact in CCSNe has been examined \\cite{SwLaMy94,RaBuJa02,SuYaSu05,MaJaMu09,LeHiBa10,Couc13a}.\nFinally, the nuclear composition must be evolved in the outer regions where nuclear statistical equilibrium (NSE) does not apply.\n\nThe centrifugal effects of stellar core rotation, especially for rapid rotation, can also change supernova dynamics qualitatively and quantitatively \\cite{FrWa04,BuRaJa06}. \nAn additional level of complexity is added by models with dynamically important magnetic fields, amplified by rapid rotation and the magnetorotational instability, that may play a significant role in driving, and perhaps collimating, some CCSNe \\cite{Symb84,AkWhMe03,BuDeLi07} and \\emph{collapsars} (jets generated by accretion disks about newborn black holes producing combined CCSNe\/$\\gamma$-ray bursts). \nRecent observations of shock breakout \\cite{ScJuWo08} disfavor a strongly collimated jet as the driver for explosions for ordinary supernovae \\cite{CoWhMi09} --- i.e., cases where rotation likely does not play a major role. \nMagnetic fields are expected to become important in the context of rapidly rotating progenitors, where significant rotational energy can be tapped to develop strong and organized magnetic fields (e.g., see \\cite{BuDeLi07}). State-of-the-art stellar evolution models for massive stars \\cite{wohe07} do not predict large core rotation rates. 
For non-rapidly rotating progenitors, magnetic fields are expected to serve more of a supporting role for neutrino shock reheating (e.g., see \\cite{ObJaAl14}).\n \nWhile the list of major macroscopic components in any CCSN clearly indicates this is a 3D phenomenon, 3D studies have been relatively rare and, until recently, generally have skimped, largely for practical reasons, on key physics to which prior studies (noted above) have indicated careful attention must be paid. \n3D simulations have examined aspects of the CCSN problem using a progression of approximations.\n3D, hydrodynamics-only simulations of the SASI, which isolate the accretion flow from the feedbacks of neutrino heating and convection, have identified the spiral ($m=1$) mode, with self-generated counter-rotating flows that can spin the PNS to match the $\\sim$50~ms periods of young pulsars \\cite{Blon05a,Blon05b,BlMe07} and examined the generation of magnetic fields \\cite{EnCaBu10} and turbulence \\cite{EnCaBu12} by the SASI.\nAnother often-used formulation for approximate 3D simulations is the neutrino ``lightbulb'' approximation, where a prescribed neutrino luminosity determines the heating rate, with the neutrino heating and cooling parameterized independently. \nNeutrino lightbulb simulations have been used successfully to study the development of NS kicks \\cite{NoBrBu12,WoJaMu12,WoJaMu13}, mixing in the ejecta \\cite{HaJaMu10}, and, in 2D simulations, the growth of the SASI with neutrino feedbacks \\cite{ScJaFo08}. Lightbulb simulations have also been used to examine the role of dimensionality (1D-2D-3D) in CCSNe \\cite{MuBu08,NoBuAl10,HaMaMu12,Couc13b}.\nA more sophisticated approximate neutrino transport method is the ``leakage'' scheme. Leakage schemes use the local neutrino emission rate and the opaqueness of the overlying material to estimate the cooling rate and from that the neutrino luminosity and heating rate. \nLeakage models have been used by Ott et al. 
\\cite{OtAbMo13}, including the full 3D effects of GR.\nFryer and Warren \\cite{FrWa02,FrWa04} employed a \\emph{gray} neutrino transport scheme in three dimensions. In such schemes, one evolves the spatial neutrino energy and momentum densities with a \nparameterization of the neutrino spectra. Because such schemes are neutrino angle- and energy-integrated, the dimensionality of the models is greatly reduced, which is ideal for performing a larger number of exploratory studies.\nThese 3D studies, and other recent studies \\cite[cf.][]{TaKoSu12,BuDoMu12,HaMuWo13,CoOc13}, confirm the conclusion that CCSN simulations must ultimately be performed in three spatial dimensions. \n\nThe modeling of CCSNe in three dimensions took an important step forward recently. The Max Planck (MPA) group launched the first 3D CCSN simulation with multifrequency neutrino transport, including relativistic corrections and state-of-the-art neutrino opacities, and with general relativistic gravity. Results from the first 400 ms after stellar core bounce were reported in \\cite{HaMuWo13} for a 27 \\ensuremath{M_{\\odot}}\\ progenitor. At present, the ``Oak Ridge'' group is performing a comparable simulation beginning with the 15~\\ensuremath{M_{\\odot}}\\ progenitor used in our 2D studies. We have evolved approximately the first half second after bounce (for further discussion, see Section~\\ref{sec:current3D}). 
\n\n\\section{Lessons from Spherical Symmetry}\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{fig2_color.pdf}\n\\caption{Shock trajectories in km, versus time after bounce, for models with decreasing physics \\cite{LeMeMe12}.}\n\\label{fig:shockvphysics}\n\\end{figure}\n\nRecent studies carried out in the context of general relativistic, spherically symmetric CCSN models with Boltzmann neutrino transport demonstrate that (i) a general relativistic treatment of gravity, (ii) special and general relativistic corrections to the neutrino transport, such as the gravitational redshift of neutrinos, and (iii) the use of a complete set of weak interactions and a realistic treatment of those interactions are indispensable \\cite{LeMeMe12a}. As shown in Figure \\ref{fig:shockvphysics}, moving from a fully general relativistic treatment of gravity to a Newtonian description has a significant impact on the shock trajectory. The Newtonian simulation neglects general relativity in the description of gravity {\\it per se}, as well as general relativistic transport effects such as gravitational redshift. Thus, the switch from a general relativistic description to a Newtonian description impacts more than just the treatment of gravity. In turn, if we continue to simplify the model, this time reducing the set of weak interactions included and the realism with which these weak interactions are included, we see a further significant change in the shock trajectory, with fundamentally different behavior early on after bounce. In this instance, we have neglected the impact of nucleon correlations in the computation of electron capture on nuclei (see \\cite{HiMeMe03}), energy exchange in the scattering of neutrinos on electrons, corrections due to degeneracy and nucleon recoil in the scattering of neutrinos on nucleons, and nucleon--nucleon bremsstrahlung. 
Finally, if we continue to simplify the neutrino transport by neglecting special relativistic corrections to the transport, such as the Doppler shift, we obtain yet another significant change. The spread in the shock radii at $t>$120 ms after bounce is approximately 60 km, which is $>$33\\% of the average shock radius across the four cases at those times. Moreover, the largest variation in the shock radii is obtained at $\\sim$120 ms after bounce, which is around the time when the shock radii in our one- and two-dimensional models begin to diverge (see Figure \\ref{fig:label1Dv2D}). In all four of our 2D models, the postbounce evolution is quasi-spherical until $\\sim$110 ms after bounce. Thus, the \\textsc{Agile-BOLTZTRAN}\\ code, which solves the general relativistic Boltzmann equation with a complete set of neutrino weak interactions for the neutrino transport in the context of spherically symmetric models, can be used to determine the physics requirements of more realistic two- and three-dimensional modeling. Indeed, the conclusions of our studies are corroborated by similar studies carried out in the context of 2D multi-physics models \\cite{MuJaMa12}. Taken together, these studies establish the {\\it necessary} physics that must be included in CCSN models in the future. Whether or not the current treatments of this physics in the context of two- and three-dimensional models are {\\it sufficient}, as we will discuss, remains to be determined.\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{1Dv2D.pdf}\n\\caption{Shock trajectories in km, versus time after bounce, for our 1D and 2D models \\cite{BrMeHi13}. The 1D and 2D evolution begins to diverge between 100 and 125 ms after bounce.}\n\\label{fig:label1Dv2D}\n\\end{figure}\n\n\\section{Our Code}\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{rbr.pdf}\n\\caption{A depiction of the ``ray-by-ray'' (RbR) approach. 
Each ray corresponds to a separate spherically symmetric problem. In the limit of spherical symmetry, the RbR approach is exact. Each ray solve gives what would be obtained in a spherically symmetric solve for conditions at the base of the ray, on the proto-neutron star surface. For a {\\it persistent} hot spot, such as the one depicted here at the base of ray 1, the RbR approximation would overestimate the angular variations in the neutrino heating at the points 1 and 2 above the surface. In spherical symmetry, the condition at the base of each ray is assumed to be the same over the entire portion of the surface subtended by the backward causal cone for that ray. Thus, for ray 1, the entire subtended surface would be considered hotter than it is, whereas for ray 2 the contribution from the hot spot at the base of ray 1 to the heating at point 2 above the surface would be ignored.\n\\label{fig:rbr}\n}\n\\end{figure}\n\n\\textsc{Chimera}\\ is a parallel, multi-physics code built specifically for multidimensional simulation of CCSNe.\nIt is the chimeric combination of separate codes for hydrodynamics and gravity; neutrino transport and opacities; and a nuclear EoS and reaction network, coupled by a layer that oversees data management, parallelism, I\/O, and control.\n\nThe hydrodynamics are modeled using a dimensionally-split, Lagrangian-Remap (PPMLR) scheme \\cite{CoWo84} as implemented in VH1 \\cite{HaBlLi12}.\nSelf-gravity is computed by multipole expansion \\cite{MuSt95}.\nWe include the most important effects of GR by replacing the Newtonian monopole term with a GR monopole computed from the TOV equations \\cite[][Case~A]{MaDiJa06}.\n\nNeutrino transport is computed in the ``ray-by-ray-plus'' (RbR+) approximation \\cite{BuRaJa03}, where an independent, spherically symmetric transport solve is computed for each ``ray'' (radial array of zones with the same $\\theta$, $\\phi$). 
(It is very important to note that the RbR+ approximation does not restrict the neutrinos to strictly radial propagation. In spherical symmetry, neutrinos propagate along arbitrary rays, not just radial rays, but the {\\em net} angular flux is zero, leaving only radial flux. Each RbR+ solve is a {\\em full} spherically symmetric solve (see Figure \\ref{fig:rbr}). The 3D problem is broken up into $N_{\\theta}\\times N_{\\phi}$ spherically symmetric problems, where $N_{\\theta,\\phi}$ are the number of latitudinal and longitudinal zones, respectively. RbR+ is exact (physically speaking, modulo numerical error) if the neutrino source is spherically symmetric. Thus, if the accreted material raining down on the PNS surface via the non-spherical accretion funnels (obvious in Figures~\\ref{fig:entropy} and \\ref{fig:entropy3D}) and creating hot spots spreads rapidly over the surface relative to the neutrino-heating and shock-revival time scales --- which we find it does --- then, in the absence of significant rotation, the RbR+ approximation is reasonable, at least initially. There are practical benefits to the approximation, as well, which we will discuss later.)\n\nThe transport solver for each ray is an improved and updated version of the multi-group flux-limited diffusion transport solver of Bruenn \\cite{Brue85} enhanced for GR \\cite{BrDeMe01}, with an additional geometric flux limiter to prevent the overly rapid transition to free streaming produced by the standard flux limiter. 
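The decomposition just described can be sketched schematically. In the sketch below, `solve_1d_transport` is a hypothetical stand-in for a full spherically symmetric multigroup transport solve along one radial ray; the point is only the structure of the RbR+ decomposition, in which each ray is independent of every other.

```python
# Schematic sketch of the ray-by-ray-plus (RbR+) decomposition described
# above: the 3D transport problem becomes N_theta x N_phi independent,
# spherically symmetric solves. `solve_1d_transport` is a hypothetical
# stand-in for a full 1D multigroup transport solve along one ray.

def solve_1d_transport(ray_state):
    # Placeholder: a real solver would evolve the multigroup neutrino
    # distribution along this radial ray. Here we only tag the ray as solved.
    return {"solved": True, **ray_state}

def rbr_transport_step(grid):
    """Apply an independent 1D transport solve to every (theta, phi) ray."""
    results = {}
    for (itheta, iphi), ray_state in grid.items():
        # Rays do not couple to one another -> trivially parallel over rays.
        results[(itheta, iphi)] = solve_1d_transport(ray_state)
    return results

# Toy grid: 4 latitudinal x 8 longitudinal rays (real runs use far more).
grid = {(it, ip): {"itheta": it, "iphi": ip} for it in range(4) for ip in range(8)}
out = rbr_transport_step(grid)
print(len(out))  # prints 32: one spherically symmetric solve per ray
```

The independence of the rays is also what makes RbR+ attractive in practice: the transport, usually the dominant cost, parallelizes over rays with no communication during the solve.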
All $O(v\/c)$ observer correction terms have been included.\n\n\\textsc{Chimera}\\ solves for all three flavors of neutrinos and antineutrinos with four coupled species: \\ensuremath{\\nu_{e}}, \\ensuremath{\\bar \\nu_e}, $\\ensuremath{\\nu_{\\mu\\tau}}=\\{\\ensuremath{\\nu_{\\mu}},\\ensuremath{\\nu_{\\tau}}\\}$, $\\ensuremath{\\bar \\nu_{\\mu\\tau}}=\\{\\ensuremath{\\bar \\nu_{\\mu}},\\ensuremath{\\bar \\nu_{\\tau}}\\}$, with typically 20 energy groups covering two decades in neutrino energy.\nOur standard, modernized, neutrino--matter interactions include emission, absorption, and non-isoenergetic scattering on free nucleons \\cite{RePrLa98}, with weak magnetism corrections \\cite{Horo02}; emission\/absorption (electron capture) on nuclei \\cite{LaMaSa03}; isoenergetic scattering on nuclei, including ion-ion correlations; non-isoenergetic scattering on electrons and positrons; and pair emission from $e^+e^-$-annihilation \\cite{Brue85} and nucleon-nucleon bremsstrahlung \\cite{HaRa98}.\n\\textsc{Chimera}\\ generally utilizes the $K = 220$~\\mbox{MeV}\\ incompressibility version of the Lattimer--Swesty \\cite{LaSw91} EoS for $\\rho>10^{11}\\,\\ensuremath{{\\mbox{g~cm}}^{-3}}$ and a modified version of the Cooperstein \\cite{Coop85} EoS for $\\rho<10^{11}\\,\\ensuremath{{\\mbox{g~cm}}^{-3}}$, where nuclear statistical equilibrium (NSE) applies.\nMost \\textsc{Chimera}\\ simulations have used a 14-species $\\alpha$-network ($\\alpha$, \\isotope{C}{12}-\\isotope{Zn}{60}) for the non-NSE regions \\cite{HiTh99a}. 
In addition,\n\\textsc{Chimera}\\ utilizes a 17-nuclear-species NSE calculation for the nuclear component of the EoS for $Y_{\\rm e}>26\\/56$ to provide a smooth join with the non-NSE regime.\n\nDuring evolution, the radial zones are gradually and automatically repositioned to track changes in the mean radial structure.\nTo minimize restrictions on the time step from the Courant limit, the lateral hydrodynamics for a few inner zones are ``frozen'' during collapse, and after prompt convection fades, the laterally frozen region expands to the inner 6--8~km.\nIn the ``frozen'' region the radial hydrodynamics and neutrino transport are computed in spherical symmetry.\n\nThe supernova code most closely resembling \\textsc{Chimera}\\ \nis the \\textsc{PROMETHEUS-VERTEX}\\ code developed by the Max Planck group \\cite{BuRaJa03,BuRaJa06,BuJaRa06,MuJaDi10}. This code utilizes an RbR+ approach to neutrino transport, solving the first two multifrequency angular moments of the transport equations with a variable Eddington closure that is solved at intervals using a 1D approximate Boltzmann equation.\n\n\\textsc{Chimera}\\ does not yet include magnetic fields. Studies with \\textsc{Chimera}\\ that include magnetic fields will be part of future efforts. \n\n\\section{Our Approach in Context}\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{2DApproaches.pdf}\n\\caption{An overview of the approaches used in the context of 2D CCSN modeling by various groups around the world \\cite{SuKoTa10,TaKoSu14,NaTaKu14,DoBuZh14,maja09,MuJaMa12,BrMeHi13}. \n\\label{fig:label2DApproaches}}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.0in]{3DApproaches.pdf}\n\\caption{An overview of the approaches used in the context of 3D CCSN modeling by several groups around the world \\cite{TaKoSu12,HaMuWo13,LeBrHi15}.\n\\label{fig:label3DApproaches}}\n\\end{figure}\n\nA number of 2D simulations have been performed to date with multi-frequency neutrino transport. 
These break down into two classes: those that have implemented the RbR neutrino transport approximation and those that have not --- i.e., those that have implemented 2D transport. Figure \\ref{fig:label2DApproaches} provides an overview of the approaches used by various supernova groups in producing these 2D models. It is clear that the RbR approximation has enabled the inclusion of general relativity and state-of-the-art neutrino interactions, at the expense of the added spatial dimensionality of the transport, whereas the non-RbR approach includes the second spatial dimension in the neutrino transport, but does so at the expense of realism in the treatment of gravity and the neutrino interactions with stellar matter. The reason for this is simple: In the RbR approach, transport codes that have been used in spherically symmetric studies, such as \\textsc{Agile-BOLTZTRAN}, can be deployed. These codes already include, or can more easily be extended to include, all relativistic transport corrections and full weak interaction physics. To achieve the same level of sophistication in two and three spatial dimensions is more difficult and far more computationally intensive. For example, a 3D multi-frequency approach (e.g., flux-limited diffusion or a variable Eddington tensor method) will require the sustained-petaflop performance of present-day leadership-class computing facilities. In light of the practical difficulties associated with including more physics in fully 3D simulations, the RbR approximation provides an alternative approach that can be used in the interim. The use of both approaches by the community as it moves forward will be essential, as simulations with RbR neutrino transport with approximate general relativity and full weak interaction physics must be gauged by non-RbR approaches that can test the efficacy of the RbR approach. 
Ultimately, the two approaches must merge, with 3D simulations performed with 3D (i.e., not RbR) general relativistic neutrino transport, general relativistic hydrodynamics and gravity, and a full weak interaction set. Figure \\ref{fig:label3DApproaches} gives an overview of the 3D simulations performed to date, using multi-frequency neutrino transport. It is obvious that fewer groups have attempted this, and far fewer simulations have been performed. It is also evident they have all been performed with RbR and not 3D neutrino transport.\n\n\\section{Results from our 2D Core Collapse Supernova Models}\\label{sec:current2D}\n\nWe \\cite{BrMeHi13,BrLeHi14} have performed four 2D simulations with \\textsc{Chimera}\\ beginning with the 12, 15, 20, and \n25~\\ensuremath{M_{\\odot}}\\ progenitors of Woosley and Heger \\cite{wohe07}.\nOne result of these simulations is the realization that a fully developed (and therefore final) explosion energy will require much more lengthy simulations than anticipated in the past.\nIn the explosion energy plot, Figure~\\ref{fig:energy}, the dashed lines show the growth of the ``diagnostic energy'' (the sum of the gravitational potential energy, the kinetic energy, and the internal energy in each zone --- i.e., the total energy in each zone --- for all zones having a total energy greater than zero) along with more refined estimates of the final explosion energy that account for the work required to lift the as-yet-unshocked envelope ``overburden'' (dash-dotted lines) and, in addition, the estimated energy released from recombination of free nucleons and alpha particles into heavier nuclei (solid lines). We expect these latter two measures to bracket the final kinetic energy of the fully developed explosion. 
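The ``diagnostic energy'' defined above can be sketched directly from its definition: sum the total (gravitational plus kinetic plus internal) energy of every zone whose total energy is positive. The zone energy arrays below are hypothetical stand-ins for simulation output, chosen only to exercise the definition.

```python
# Sketch of the "diagnostic energy" defined above: the sum of the total
# energy (gravitational + kinetic + internal) over all zones whose total
# energy is positive. Zone arrays are hypothetical stand-ins for real data.

def diagnostic_energy(e_grav, e_kin, e_int):
    """Sum of zone total energies over zones with positive total energy (erg)."""
    totals = (g + k + i for g, k, i in zip(e_grav, e_kin, e_int))
    return sum(e for e in totals if e > 0.0)

# Toy example: three zones; only the first two are unbound (total > 0).
e_grav = [-1.0e49, -2.0e49, -5.0e49]  # gravitational potential energy, erg
e_kin  = [ 2.0e49,  1.0e49,  1.0e49]  # kinetic energy, erg
e_int  = [ 1.0e49,  1.5e49,  1.0e49]  # internal energy, erg

print(diagnostic_energy(e_grav, e_kin, e_int))  # 2.0e49 + 0.5e49 = 2.5e+49
```

The refined estimates mentioned above then adjust this sum downward by the binding energy of the overlying, as-yet-unshocked envelope and upward by the estimated nuclear recombination energy of the ejecta.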
Using the definition of the explosion energy that includes both the energy cost to lift the overlying material and the energy gain associated with nuclear recombination, we can define $t_{\\rm explosion}$, the explosion time, which is the time at which the explosion energy becomes positive and, therefore, the explosion can be said to have been initiated. For the 12, 15, 20, and 25 M$_\\odot$ models, $t_{\\rm explosion}$ is approximately 320, 320, 500, and 620 ms after bounce, respectively. \n\nMoving now to a comparison with observations: All four models have achieved explosion energies that are in the $\\approx $0.4--1.4 Bethe range of observed Type~II supernovae (see Figure \\ref{fig:energycomparison}). Figures \\ref{fig:nickelmass} and \\ref{fig:pnsmass} compare our predictions for the mass of $^{56}$Ni produced and the final proto-neutron star (baryonic) masses produced, respectively, with observations. Note that the large systematic errors in observed progenitor masses preclude any detailed comparison between our results and observations {\\em as a function of progenitor mass}. Nonetheless, comparisons of our predicted {\\em ranges} of explosion energies, $^{56}$Ni masses, etc. with observed ranges are meaningful and demonstrate we are making progress toward developing predictive models.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{movie.jpg}\n\\caption{Evolution of the entropy (upper half) and radial velocity (lower half) at 150, 300, and 600~ms after bounce for the 12~\\ensuremath{M_{\\odot}}\\ model of Bruenn et al. \\cite{BrMeHi13}. \n\\label{fig:entropy}}\n\\end{figure}\n\nThree snapshots of the hydrodynamic motion are shown in \nFigure~\\ref{fig:entropy}, \nwhich shows the entropy (upper half) and radial velocity (lower half) for the 12 \\ensuremath{M_{\\odot}}\\ model at 150~ms, 300~ms, and 600~ms after bounce. 
\nAt 150~ms, roughly 100~ms before rapid shock expansion heralds the onset of a developing explosion, asphericity is developing as a result of vigorous neutrino-driven convection and the SASI. \nBy 300~ms large-scale, high-entropy, buoyant plumes are evident, as the explosion continues to develop. \nHowever, low-entropy down-flows still connect the unshocked regions with the PNS surface, continuing to supply accretion energy to power the neutrino luminosities driving the development of the explosion. By 600~ms, these down-flows have been cut off by the expanding ejecta, but their remnants continue to accrete onto the PNS, allowing the explosion to continue to gain in strength.\n\nThough these simulations have run further into explosion than previous simulations, the final explosion energies --- in particular, for the 20 and 25 M$_\\odot$ models --- are clearly still developing. \nThese simulations will therefore continue. Additional 2D simulations --- e.g., using different progenitor masses --- are planned.\n\n\\begin{figure}\n\\includegraphics[width=3.5in]{Expl_E_vs_t_12M_25M_Comp.pdf}\n\\caption{Diagnostic energy (\\ensuremath{E^{+}}; dashed lines) versus post-bounce time for all of our published 2D models \\cite{BrMeHi13,BrLeHi14}. Dash-dotted lines (\\ensuremath{E^{+}_{\\rm ov}}) include the binding energy of the overburden and solid lines (\\ensuremath{E^{+}_{\\rm ov, rec}}) also include the estimated energy gain from nuclear recombination.}\n\\label{fig:energy}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{Explosion_Energy_Comparisons.pdf}\n\\caption{\nObserved explosion energies for a number of CCSNe, along with predicted explosion energies from our 12, 15, 20, and 25 M$_\\odot$ progenitor models (red dots) \\cite{BrLeHi14}. The arrows indicate that our explosion energies are still increasing at the end of each run. 
The length of each arrow is a measure of the rate of change of the explosion energy at the end of the corresponding run.\n\\label{fig:energycomparison}\n}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{Nickel56_Comparisons.pdf}\n\\caption{\nObserved production of $^{56}$Ni for a number of CCSNe, along with our predictions from our 12, 15, 20, and 25 M$_\\odot$ progenitor models (red dots) \\cite{BrLeHi14}.\n\\label{fig:nickelmass}\n}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{N_Star_Mass.pdf}\n\\caption{\nTime evolution of the proto-neutron star (baryonic) mass in each of our four 2D models, beginning with 12, 15, 20, and 25 M$_\\odot$ progenitors \\cite{BrLeHi14}.\n\\label{fig:pnsmass}\n}\n\\end{figure}\n\n\\section{Preliminary Results from our 3D Core Collapse Supernova Model}\\label{sec:current3D}\n\n\\begin{figure}\n\\includegraphics[width=3.1in]{1D2D3DShockTrajectories.pdf}\n\\caption{Evolution of the shock trajectory from our 1D model and the angle-averaged shock trajectories from our 2D and 3D models, all for the 15~\\ensuremath{M_{\\odot}}\\ case \\cite{LeBrHi15}. The 1D model does not develop an explosion, whereas an explosion is obtained in both our 2D and our 3D models.\n\\label{fig:1D2D3DShockTrajectories}\n}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.15in]{3D441msYZ.pdf}\n\\caption{Snapshot of the equatorial cross section of the entropy in our ongoing 3D simulation for the 15~\\ensuremath{M_{\\odot}}\\ case at $\\sim$441 ms after bounce \\cite{LeBrHi15}. Red indicates high-entropy, expanding, rising material. Green\\/blue indicates cooler, denser material. Evident are significant (green) down flows fueling the neutrino luminosities.\n\\label{fig:entropy3D}\n}\n\\end{figure}\n\nFew 3D multiphysics models with necessary realism (as defined above) have been performed. Notable among these is the recently published model of Hanke et al. \\cite{HaMuWo13}. 
Preliminary results from the Oak Ridge group \\cite{LeBrHi15} in the context of a model similar to the Garching group's model -- i.e., with essentially the same physics and treatment of this physics -- are presented here, although we begin with the same 15 M$_\\odot$ Woosley--Heger progenitor used in our 2D models, whereas they began with the 27 M$_\\odot$ Woosley--Heger progenitor. \n\nFigure \\ref{fig:1D2D3DShockTrajectories} shows the angle-averaged shock trajectories from our one-, two-, and three-dimensional models, all run with the \\textsc{Chimera}\\ code beginning with the same 15 M$_\\odot$ Woosley--Heger progenitor and including the same (full) physics. Explosion is evident in both the 2D and the 3D cases. Explosion is not obtained in 1D. Comparing the two- and three-dimensional trajectories, we see that the development of the explosion in the 3D case is slower. In the 2D case, the shock radius changes rapidly beginning at about 200 ms after bounce. In the 3D case, the shock radius does not begin to climb dramatically until approximately 100 ms later, at $\\sim$300 ms after bounce. The 1D and 2D\/3D angle-averaged shock radii diverge at approximately 125 ms after bounce, and the 2D and 3D angle-averaged shock radii diverge later, at about 200 ms after bounce.\n\nFigure \\ref{fig:entropy3D} is a snapshot of a 2D slice of our ongoing 3D model at approximately 441 ms after bounce. Shown is the stellar core entropy. The shock wave is clearly outlined by the jump in entropy across it. Neutrino-driven convection is evident in the slice. Hotter (red) rising plumes bring neutrino-heated material up to the shock, while cooler (green) down flows replace the fluid below. Distortion of the shock away from axisymmetry and the nonaxisymmetric patterns of convection beneath the shock are also evident. 
Conclusive evidence for the $l=1$ ``sloshing'' and $m=1$ ``spiral'' modes of the SASI will require a modal analysis, although the 2D slice clearly does not rule out either mode. \n\nThis simulation utilizes 32,400 rays (solid angle elements) with 2\\ensuremath{^\\circ}\\ resolution in longitude and a resolution in latitude that varies from 8\\ensuremath{^\\circ}\\ at the pole to better than 0.7\\ensuremath{^\\circ}\\ at the equator, but is uniform in the cosine of the colatitude. \nOwing to the Courant limit, the coordinate pole in standard spherical-polar coordinates imposes a severe restriction on the time step size and therefore lengthens the total run time relative to a 2D simulation of similar resolution. \nOur constant cosine-of-colatitude grid seeks to minimize this impact without resorting to a grid that is coarse at all latitudes or implementing unevolved (frozen) regions near the pole. The simulation will consume approximately 100 million core-hours to complete. {\\em (This gives a strong indication of how the physics included in the models, even in the RbR+ approximation, drives up their computational cost.)}\nAs this 3D simulation for a 15~\\ensuremath{M_{\\odot}}\\ progenitor evolves, we will be able to examine the nature of the CCSN explosion mechanism without the assumption of axisymmetry that is inherent in the 2D models. The key questions: Will this model yield a robust explosion? And will its other predictions agree with observations? As indicated by all of our 2D models, our current 3D model will need to be run significantly longer, and detailed computations of the explosion energy and other observables will need to be completed, before we can begin to answer these questions.\n\n\\section{Conclusions and Outlook}\n\nThe most sophisticated spherically symmetric models developed to date do not exhibit core collapse supernova explosions. 
Despite the prodigious amount of gravitational binding energy tapped during stellar core collapse and radiated via neutrinos, neutrino heating of the stellar core material beneath the supernova shock wave, unaided by other physics, is unable to power such explosions. On the other hand, with the aid of neutrino-driven convection beneath the shock, and the SASI, robust explosions have been obtained in both two- and three-dimensional models, with model predictions consistent with observations of multiple quantities (explosion energy, $^{56}$Ni mass, neutron star mass, neutron star kick velocity).\n\nOne- and two-dimensional studies have identified a list of key physics needed in CCSN models. The addition of new physics (e.g., magnetic fields) will likely extend this list as it is incorporated into today's most advanced models (e.g., see \\cite{ObJaAl14}). It is also possible that the addition of new physics will render some of the physics currently included less important. However, it is unlikely that the impact of general relativity and of important neutrino physics (e.g., relativistic transport corrections such as gravitational redshift and the full physics of electron capture and neutrino scattering) will be significantly lessened by adding new physics. The quantum leap in CCSN modeling that occurred two decades ago, when axisymmetry replaced spherical symmetry, did not reduce the importance of this physics --- case in point, both Lentz et al. \\cite{LeMeMe12} and Mueller et al. \\cite{MuJaMa12} reached the same conclusions. Moreover, the development of magnetic fields will depend on the environment established by accretion and neutrino heating.\nFuture modeling --- in particular, the direction we choose to take --- should rely on the predictions of the best {\\em available} models, more so than on speculation about what physics may or may not be important. 
With this in mind, the task at hand is to build 3D models with the minimum physics set identified in the studies mentioned above. \n\nIn this brief review, we outlined the approaches used by the various supernova modeling groups around the world, focusing on two- and three-dimensional, multi-frequency models. A comparative analysis of the results of these studies can shed light on the impact of (a) Newtonian versus general relativistic gravity, hydrodynamics, and neutrino transport, and\/or (b) a reduced versus a complete set of neutrino weak interactions, the latter including detailed nuclear electron capture and neutrino energy scattering. However, results from simulations cutting across these various levels of sophistication should not be compared with the expectation that the outcomes --- in particular, whether or not robust explosions are obtained --- will be the same. For example, comparing a Newtonian and a general relativistic model, with all other physics in the models kept the same, allows us to understand the role of general relativity, but we should not expect the Newtonian and general relativistic models to agree quantitatively, or even qualitatively.\n\nHaving said this, a comparison between, for example, the results obtained by the Oak Ridge and Garching groups can be made, given the similarity of their approaches and the physics included in each of their model sets. 
In this context, it is important to note that the results of the Garching group differ between simulations performed with their \\textsc{PROMETHEUS-VERTEX}\\ code \\cite{maja09}, which uses a general relativistic monopole correction to the Newtonian self-gravitational potential, derived from the Tolman--Oppenheimer--Volkoff equation applied to the spherically averaged fluid and thermodynamic quantities in the stellar core, and with their \\textsc{COCONUT-VERTEX}\\ code \\cite{MuJaMa12}, which instead uses the conformal flatness approximation to the general relativistic gravitational field. \\textsc{PROMETHEUS-VERTEX}\\ is the code most similar to \\textsc{Chimera}. Unfortunately, to date, results from the \\textsc{PROMETHEUS-VERTEX}\\ code using the more modern Woosley--Heger progenitor set \\cite{wohe07} have not been published, so a direct comparison is not yet possible.\n\nFocusing once again on the ongoing 3D simulations cited here: Will robust neutrino-driven explosions be obtained? If the answer is no, three explanations are possible: (1) Removing current approximations in the models (e.g., the use of RbR neutrino transport) and\/or making other improvements (e.g., increasing the spatial resolution) may fundamentally alter the outcomes. (2) We are missing essential physics. (3) A combination of additional physics and improved modeling may be needed to alter the outcomes. \nWith regard to (1)--(3):\n\n(A) All of the simulations documented here were initiated from state-of-the-art (e.g., the \\citet{wohe07} series) spherically symmetric progenitor models. 
\nCouch and Ott \\cite{CoOt13} point out that multidimensional simulations of the advanced stages of stellar evolution of massive stars yield large deviations from spherical symmetry in the Si\/O layer (see \\cite{Arnett14} and the references cited therein).\nThey demonstrate that such (expected) deviations from spherical symmetry can qualitatively alter the post-stellar-core-bounce evolution, triggering an explosion in a model that otherwise fails to explode. Such a qualitative change in outcome demands better initial conditions, which can be obtained when spherically symmetric models, currently able to complete stellar evolution through silicon burning and the formation of the iron core (multidimensional models are not yet capable of this), are informed by 3D stellar evolution models of earlier burning stages.\n\n(B) Given the importance of the SASI in the explosion models developed thus far, and given that the SASI is a long-wavelength instability, how will the SASI and the turbulence it induces, or neutrino-driven convection and the turbulence it induces, interact? There is evidence, for example, that the energy in the long-wavelength modes of the SASI is sapped by the very turbulence the SASI seeds, as a result of the significant shear between counter-rotating flows induced by its $m=1$ spiral mode in three dimensions \\citep{EnCaBu12}. On the other hand, Couch and Ott \\cite{CoOt14} recently showed that turbulent ram pressure may be important in driving the shock outward, assuming some of the work otherwise required of the thermal pressure associated with neutrino heating. 
Moreover, significant deviations from spherical symmetry in the progenitor, as would be expected based on the current 3D stellar evolution models discussed above, would seed turbulence and, thus, potentially enhance the contribution of turbulence to the outward pressure driving the shock.\n\n(C) If we maintain that CCSNe are neutrino-driven, it may be logical to assume that we are missing something essential in the neutrino sector. Motivated by the experimental and observational evidence for neutrino mass, recent efforts to explore its impact on neutrino transport in stellar cores have uncovered new and increasingly complex physical scenarios \\citep{dufuqi10,chcafr12,chcafr13,VlFuCi14}. Now that the quantum kinetic equations for neutrinos in stellar cores have been derived (e.g., see \\cite{VlFuCi14}), efforts can begin in earnest to extend Boltzmann models to include the quantum mechanical coherent effects associated with neutrino mass. This is, of course, a long-term goal. It is not clear that physics associated with neutrino mass will have an impact on the explosion mechanism, but it has been demonstrated that such physics may impact terrestrial CCSN neutrino signatures significantly (e.g., see \\cite{DuFuCa07}).\n\nSince Colgate and White first proposed that CCSNe are neutrino-driven \\cite{CoWh66}, nearly five decades have passed. Ascertaining the CCSN explosion mechanism has certainly been a challenge. Each new piece of physics, each new dimension, has brought both breakthroughs and additional challenges. Nonetheless, the last decade of CCSN modeling has led to rapid progress. 
This progress --- in particular, the recent progress outlined here --- and the growing capability of available supercomputing platforms encourage us that a solution to this long-standing astrophysics problem is achievable with a continued, systematic effort in perhaps the not-too-distant future.
However, it is important to note that our predictions are consistent with observations \\cite{BrLeHi14} across a range of observables: explosion energy, $^{56}$Ni mass, neutron star mass, and neutron star kicks.\nDespite these differences, these advances suggest that the SASI may be the ``missing link'' that will enable the Wilson delayed-shock, neutrino-heating mechanism to operate successfully in multiple spatial dimensions, especially for more massive progenitors. \n\nThere are many other inputs to the physics of the CCSN mechanism that must also be included in simulations. The strength of these effects has been tested in many one-dimensional (1D) simulations and some multidimensional simulations.\nThe PNS in a developing CCSN is sufficiently compact to require the inclusion of general relativistic effects in gravity and neutrino propagation \\cite{BaCoKa85,LiMeTh01a,LiMeTh01b,BrDeMe01,MaDiJa06,OtDiMa07,MuJaDi10,LeMeMe12a,MuJaMa12}.\nGetting the correct radiative coupling requires inclusion of all neutrino--matter interactions (opacities) that affect the neutrino transport, heating, and cooling. Several recent studies have considered the effects of neutrino opacities, including inelastic scattering of neutrinos on electrons, nucleons, and nuclei, detailed nuclear electron capture, and nuclear medium effects on the neutrino interactions \\cite{HiMeBr03,BuJaKe03,KeRaJa03,ThBuPi03,MaJaBu05,MaLiFr06,LaMaMu08,JuLaHi10,RoReSh12,LeMeMe12b}.\nA nuclear equation of state for both nuclear matter in the PNS and the nuclei and nucleons in the surrounding matter is required. 
Several equations of state have been proposed \\cite{BeBrAp79,ElHi80,Coop85,LaSw91,WiMa93,ShToOy98b,HeSc10,ShHoTe11,StHeFi13} and their impact on CCSNe has been examined \\cite{SwLaMy94,RaBuJa02,SuYaSu05,MaJaMu09,LeHiBa10,Couc13a}.\nFinally, the nuclear composition must be evolved in the outer regions where nuclear statistical equilibrium (NSE) does not apply.\n\nThe centrifugal effects of stellar core rotation, especially for rapid rotation, can also change supernova dynamics qualitatively and quantitatively \\cite{FrWa04,BuRaJa06}. \nAn additional level of complexity is added by models with dynamically important magnetic fields, amplified by rapid rotation and the magnetorotational instability, that may play a significant role in driving, and perhaps collimating, some CCSNe \\cite{Symb84,AkWhMe03,BuDeLi07} and \\emph{collapsars} (jets generated by accretion disks about newborn black holes, producing combined CCSNe\/$\\gamma$-ray bursts). \nRecent observations of shock breakout \\cite{ScJuWo08} disfavor a strongly collimated jet as the driver of explosions in ordinary supernovae \\cite{CoWhMi09} --- i.e., cases where rotation likely does not play a major role. \nMagnetic fields are expected to become important in the context of rapidly rotating progenitors, where significant rotational energy can be tapped to develop strong and organized magnetic fields (e.g., see \\cite{BuDeLi07}). State-of-the-art stellar evolution models for massive stars \\cite{wohe07} do not predict large core rotation rates. 
For non-rapidly rotating progenitors, magnetic fields are expected to serve more of a supporting role for neutrino shock reheating (e.g., see \\cite{ObJaAl14}).\n\nWhile the list of major macroscopic components in any CCSN clearly indicates this is a 3D phenomenon, 3D studies have been relatively rare and, until recently, generally have skimped, largely for practical reasons, on key physics to which prior studies (noted above) have indicated careful attention must be paid. \n3D simulations have examined aspects of the CCSN problem using a progression of approximations.\n3D, hydrodynamics-only simulations of the SASI, which isolate the accretion flow from the feedbacks of neutrino heating and convection, have identified the spiral ($m=1$) mode, with self-generated counter-rotating flows that can spin the PNS to match the $\\sim$50~ms periods of young pulsars \\cite{Blon05a,Blon05b,BlMe07}, and have examined the generation of magnetic fields \\cite{EnCaBu10} and turbulence \\cite{EnCaBu12} by the SASI.\nAnother often-used formulation for approximate 3D simulations is the neutrino ``lightbulb'' approximation, where a prescribed neutrino luminosity determines the heating rate, with the neutrino heating and cooling parameterized independently. \nNeutrino lightbulb simulations have been used successfully to study the development of neutron star kicks \\cite{NoBrBu12,WoJaMu12,WoJaMu13}, mixing in the ejecta \\cite{HaJaMu10}, and, in 2D simulations, the growth of the SASI with neutrino feedbacks \\cite{ScJaFo08}. Lightbulb simulations have also been used to examine the role of dimensionality (1D, 2D, 3D) in CCSNe \\cite{MuBu08,NoBuAl10,HaMaMu12,Couc13b}.\nA more sophisticated approximate neutrino transport method is the ``leakage'' scheme. Leakage schemes use the local neutrino emission rate and the opacity of the overlying material to estimate the cooling rate and, from that, the neutrino luminosity and heating rate. \nLeakage models have been used by Ott et al. 
\\cite{OtAbMo13}, including the full 3D effects of general relativity (GR).\nFryer and Warren \\cite{FrWa02,FrWa04} employed a \\emph{gray} neutrino transport scheme in three dimensions. In such schemes, one evolves the spatial neutrino energy and momentum densities with a parameterization of the neutrino spectra. Because the scheme is angle- and energy-integrated, the dimensionality of the problem is greatly reduced, which is ideal for performing a larger number of exploratory studies.\nThese 3D studies, and other recent studies \\cite[cf.][]{TaKoSu12,BuDoMu12,HaMuWo13,CoOc13}, confirm the conclusion that CCSN simulations must ultimately be performed in three spatial dimensions. \n\nThe modeling of CCSNe in three dimensions took an important step forward recently. The Max Planck (MPA) group launched the first 3D CCSN simulation with multifrequency neutrino transport, including relativistic corrections and state-of-the-art neutrino opacities, and with general relativistic gravity. Results from the first 400 ms after stellar core bounce were reported in \\cite{HaMuWo13} for a 27 \\ensuremath{M_{\\odot}}\\ progenitor. At present, the Oak Ridge group is performing a comparable simulation beginning with the 15~\\ensuremath{M_{\\odot}}\\ progenitor used in our 2D studies. We have evolved approximately the first half second after bounce (for further discussion, see Section~\\ref{sec:current3D}). 
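The computational savings offered by these approximations can be appreciated by counting transport unknowns per time step. The tally below is illustrative only; the spatial grid and the number of propagation-angle bins are assumed for illustration and are not taken from any of the cited simulations:

```python
# Rough count of transport unknowns per neutrino species per time step.
# All grid sizes below are assumed for illustration only.
n_r, n_theta, n_phi = 512, 180, 180   # spatial zones (assumed)
n_energy = 20                          # energy groups (assumed)
n_mu, n_psi = 8, 8                     # propagation-angle bins (assumed)

spatial = n_r * n_theta * n_phi

# Full Boltzmann: distribution f(r, theta, phi; energy, two angles).
boltzmann = spatial * n_energy * n_mu * n_psi
# Multigroup scheme (e.g., flux-limited diffusion): angles integrated out.
mgfld = spatial * n_energy
# Gray transport: angle- AND energy-integrated.
gray = spatial

print(boltzmann // gray)   # n_energy * n_mu * n_psi = 1280
```

On this assumed grid, a gray scheme carries over three orders of magnitude fewer unknowns than full Boltzmann transport, which is why angle- and energy-integrated schemes suit larger exploratory surveys.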
\n\n\\section{Lessons from Spherical Symmetry}\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{fig2_color.pdf}\n\\caption{Shock trajectories in km, versus time after bounce, for models with progressively reduced physics \\cite{LeMeMe12}.}\n\\label{fig:shockvphysics}\n\\end{figure}\n\nRecent studies carried out in the context of general relativistic, spherically symmetric CCSN models with Boltzmann neutrino transport demonstrate that (i) a general relativistic treatment of gravity, (ii) special and general relativistic corrections to the neutrino transport, such as the gravitational redshift of neutrinos, and (iii) the use of a complete set of weak interactions and a realistic treatment of those interactions are indispensable \\cite{LeMeMe12a}. As shown in Figure \\ref{fig:shockvphysics}, moving to a Newtonian description of gravity from a fully general relativistic treatment has a significant impact on the shock trajectory. The Newtonian simulation neglects general relativity in the description of gravity {\\it per se}, as well as general relativistic transport effects such as gravitational redshift. Thus, the switch from a general relativistic description to a Newtonian description impacts more than just the treatment of gravity. In turn, if we continue to simplify the model, this time reducing the set of weak interactions included and the realism with which these weak interactions are treated, we see a further significant change in the shock trajectory, with fundamentally different behavior early after bounce. In this instance, we have neglected the impact of nucleon correlations in the computation of electron capture on nuclei (see \\cite{HiMeMe03}), energy exchange in the scattering of neutrinos on electrons, corrections due to degeneracy and nucleon recoil in the scattering of neutrinos on nucleons, and nucleon--nucleon bremsstrahlung. 
Finally, if we continue to simplify the neutrino transport by neglecting special relativistic corrections to the transport, such as the Doppler shift, we obtain yet another significant change. The spread in the shock radii at $t>$120 ms after bounce is approximately 60 km, which is more than 33\\% of the average of the shock radii across the four cases at these times. Moreover, the largest variation in the shock radii in our 2D models is obtained at $\\sim$120 ms after bounce, which is around the time when the shock radii in our one- and two-dimensional models begin to diverge (see Figure \\ref{fig:label1Dv2D}). In all four of our 2D models, the postbounce evolution is quasi-spherical until $\\sim$110 ms after bounce. Thus, the \\textsc{Agile-BOLTZTRAN}\\ code, which solves the general relativistic Boltzmann equation with a complete set of neutrino weak interactions in the context of spherically symmetric models, can be used to determine the physics requirements of more realistic two- and three-dimensional modeling. Indeed, the conclusions of our studies are corroborated by similar studies carried out in the context of 2D multi-physics models \\cite{MuJaMa12}. Taken together, these studies establish the {\\it necessary} physics that must be included in CCSN models in the future. Whether or not the current treatments of this physics in the context of two- and three-dimensional models are {\\it sufficient}, as we will discuss, remains to be determined.\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{1Dv2D.pdf}\n\\caption{Shock trajectories in km, versus time after bounce, for our 1D and 2D models \\cite{BrMeHi13}. The 1D and 2D evolution begins to diverge between 100 and 125 ms after bounce.}\n\\label{fig:label1Dv2D}\n\\end{figure}\n\n\\section{Our Code}\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{rbr.pdf}\n\\caption{A depiction of the ``ray-by-ray'' (RbR) approach. 
Each ray corresponds to a separate spherically symmetric problem. In the limit of spherical symmetry, the RbR approach is exact. Each ray solve gives what would be obtained in a spherically symmetric solve for conditions at the base of the ray, on the proto-neutron star surface. For a {\\it persistent} hot spot, such as the one depicted here at the base of ray 1, the RbR approximation would overestimate the angular variations in the neutrino heating at the points 1 and 2 above the surface. In spherical symmetry, the condition at the base of each ray is assumed to be the same over the entire portion of the surface subtended by the backward causal cone for that ray. Thus, for ray 1, the entire subtended surface would be considered hotter than it is, whereas for ray 2 the contribution from the hot spot at the base of ray 1 to the heating at point 2 above the surface would be ignored.\n\\label{fig:rbr}\n}\n\\end{figure}\n\n\\textsc{Chimera}\\ is a parallel, multi-physics code built specifically for multidimensional simulation of CCSNe.\nIt is the chimeric combination of separate codes for hydrodynamics and gravity; neutrino transport and opacities; and a nuclear EoS and reaction network, coupled by a layer that oversees data management, parallelism, I\/O, and control.\n\nThe hydrodynamics are modeled using a dimensionally-split, Lagrangian-Remap (PPMLR) scheme \\cite{CoWo84} as implemented in VH1 \\cite{HaBlLi12}.\nSelf-gravity is computed by multipole expansion \\cite{MuSt95}.\nWe include the most important effects of GR by replacing the Newtonian monopole term with a GR monopole computed from the TOV equations \\cite[][Case~A]{MaDiJa06}.\n\nNeutrino transport is computed in the ``ray-by-ray-plus'' (RbR+) approximation \\cite{BuRaJa03}, where an independent, spherically symmetric transport solve is computed for each ``ray'' (radial array of zones with the same $\\theta$, $\\phi$). 
(It is very important to note that the RbR+ approximation does not restrict the neutrinos to strictly radial propagation. In spherical symmetry, neutrinos propagate along arbitrary rays, not just radial rays, but the {\\em net} angular flux is zero, leaving only radial flux. Each RbR+ solve is a {\\em full} spherically symmetric solve (see Figure \\ref{fig:rbr}). The 3D problem is broken up into $N_{\\theta}\\times N_{\\phi}$ spherically symmetric problems, where $N_{\\theta,\\phi}$ are the number of latitudinal and longitudinal zones, respectively. RbR+ is exact (physically speaking, modulo numerical error) if the neutrino source is spherically symmetric. Thus, if the accreted material raining down on the PNS surface via the non-spherical accretion funnels (obvious in Figures~\\ref{fig:entropy} and \\ref{fig:entropy3D}) and creating hot spots spreads rapidly over the surface relative to the neutrino-heating and shock-revival time scales, which we find it does, then, in the absence of significant rotation, the RbR+ approximation is a reasonable one, at least initially. There are practical benefits to the approximation, as well, which we will discuss later.)\n\nThe transport solver for each ray is an improved and updated version of the multi-group flux-limited diffusion transport solver of Bruenn \\cite{Brue85}, enhanced for GR \\cite{BrDeMe01}, with an additional geometric flux limiter that prevents the overly rapid transition to free streaming exhibited by the standard flux limiter. 
All $O(v\/c)$ observer correction terms have been included.\n\n\\textsc{Chimera}\\ solves for all three flavors of neutrinos and antineutrinos with four coupled species: \\ensuremath{\\nu_{e}}, \\ensuremath{\\bar \\nu_e}, $\\ensuremath{\\nu_{\\mu\\tau}}=\\{\\ensuremath{\\nu_{\\mu}},\\ensuremath{\\nu_{\\tau}}\\}$, $\\ensuremath{\\bar \\nu_{\\mu\\tau}}=\\{\\ensuremath{\\bar \\nu_{\\mu}},\\ensuremath{\\bar \\nu_{\\tau}}\\}$, with typically 20 energy groups covering two decades in neutrino energy.\nOur standard, modernized, neutrino--matter interactions include emission, absorption, and non-isoenergetic scattering on free nucleons \\cite{RePrLa98}, with weak magnetism corrections \\cite{Horo02}; emission\/absorption (electron capture) on nuclei \\cite{LaMaSa03}; isoenergetic scattering on nuclei, including ion-ion correlations; non-isoenergetic scattering on electrons and positrons; and pair emission from $e^+e^-$-annihilation \\cite{Brue85} and nucleon-nucleon bremsstrahlung \\cite{HaRa98}.\n\\textsc{Chimera}\\ generally utilizes the $K = 220$~\\mbox{MeV}\\ incompressibility version of the Lattimer--Swesty \\cite{LaSw91} EoS for $\\rho>10^{11}\\,\\ensuremath{{\\mbox{g~cm}}^{-3}}$ and a modified version of the Cooperstein \\cite{Coop85} EoS for $\\rho<10^{11}\\,\\ensuremath{{\\mbox{g~cm}}^{-3}}$, where nuclear statistical equilibrium (NSE) applies.\nMost \\textsc{Chimera}\\ simulations have used a 14-species $\\alpha$-network ($\\alpha$, \\isotope{C}{12}-\\isotope{Zn}{60}) for the non-NSE regions \\cite{HiTh99a}. 
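As an arithmetic illustration of such an energy-group layout, 20 logarithmically spaced groups spanning two decades can be generated as below. The endpoint energies (1--100 MeV) are assumed for illustration; the text specifies only the number of groups and the span:

```python
n_groups = 20
e_min, e_max = 1.0, 100.0   # MeV; two decades (endpoints assumed)

# Logarithmic (geometric) spacing: each group boundary exceeds the
# previous one by a constant ratio of 10**(2/20) ~ 1.2589.
ratio = (e_max / e_min) ** (1.0 / n_groups)
edges = [e_min * ratio**i for i in range(n_groups + 1)]

print(round(ratio, 4))   # 1.2589
```

Geometric spacing gives fine resolution at low energies, where the neutrino spectra peak, while still reaching the high-energy tail with a fixed number of groups.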
In addition,\n\\textsc{Chimera}\\ utilizes a 17-nuclear-species NSE calculation for the nuclear component of the EoS for $Y_{\\rm e}>26\/56$ to provide a smooth join with the non-NSE regime.\n\nDuring evolution, the radial zones are gradually and automatically repositioned to track changes in the mean radial structure.\nTo minimize restrictions on the time step from the Courant limit, the lateral hydrodynamics for a few inner zones are ``frozen'' during collapse, and after prompt convection fades, the laterally frozen region expands to the inner 6--8~km.\nIn the ``frozen'' region the radial hydrodynamics and neutrino transport are computed in spherical symmetry.\n\nThe supernova code most closely resembling \\textsc{Chimera}\\ is the \\textsc{PROMETHEUS-VERTEX}\\ code developed by the Max Planck group \\cite{BuRaJa03,BuRaJa06,BuJaRa06,MuJaDi10}. This code utilizes an RbR+ approach to neutrino transport, evolving the first two multifrequency angular moments of the transport equations with a variable Eddington closure obtained at intervals from a 1D approximate Boltzmann equation.\n\n\\textsc{Chimera}\\ does not yet include magnetic fields. Studies with \\textsc{Chimera}\\ that include magnetic fields will be part of future efforts. \n\n\\section{Our Approach in Context}\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{2DApproaches.pdf}\n\\caption{An overview of the approaches used in the context of 2D CCSN modeling by various groups around the world \\cite{SuKoTa10,TaKoSu14,NaTaKu14,DoBuZh14,maja09,MuJaMa12,BrMeHi13}. \n\\label{fig:label2DApproaches}}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.0in]{3DApproaches.pdf}\n\\caption{An overview of the approaches used in the context of 3D CCSN modeling by several groups around the world \\cite{TaKoSu12,HaMuWo13,LeBrHi15}.\n\\label{fig:label3DApproaches}}\n\\end{figure}\n\nA number of 2D simulations have been performed to date with multi-frequency neutrino transport. 
These break down into two classes: those that have implemented the RbR neutrino transport approximation and those that have not --- i.e., those that have implemented 2D transport. Figure \\ref{fig:label2DApproaches} provides an overview of the approaches used by various supernova groups in producing these 2D models. It is clear that the RbR approximation has enabled the inclusion of general relativity and state-of-the-art neutrino interactions, at the expense of the added spatial dimensionality of the transport, whereas the non-RbR approach includes the second spatial dimension in the neutrino transport, but does so at the expense of realism in the treatment of gravity and the neutrino interactions with stellar matter. The reason for this is simple: In the RbR approach, transport codes that have been used in spherically symmetric studies, such as \\textsc{Agile-BOLTZTRAN}, can be deployed. These codes already include, or can more easily be extended to include, all relativistic transport corrections and full weak interaction physics. To achieve the same level of sophistication in two and three spatial dimensions is more difficult and far more computationally intensive. For example, a 3D multi-frequency approach (e.g., flux-limited diffusion or a variable Eddington tensor method) will require the sustained-petaflop performance of present-day leadership-class computing facilities. In light of the practical difficulties associated with including more physics in fully 3D simulations, the RbR approximation provides an alternative approach that can be used in the interim. The use of both approaches by the community as it moves forward will be essential, as simulations with RbR neutrino transport, approximate general relativity, and full weak interaction physics must be gauged by non-RbR approaches that can test the efficacy of the RbR approximation. 
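Schematically, the RbR decomposition that underlies this trade-off replaces one multidimensional transport solve with $N_{\theta}\times N_{\phi}$ independent, spherically symmetric solves, one per angular zone, which is what allows mature 1D transport codes to be reused and the rays to be distributed across compute nodes. A minimal sketch follows (the 1D solver below is a placeholder, not any group's actual solver):

```python
import numpy as np

def solve_ray_1d(rho_ray, temp_ray, ye_ray):
    """Stand-in for a spherically symmetric transport solve along one
    ray. Returns a dummy heating-rate profile; a real solver (e.g.,
    multigroup flux-limited diffusion) would go here."""
    return np.zeros_like(rho_ray)

def rbr_transport(rho, temp, ye):
    """rho, temp, ye: (n_r, n_theta, n_phi) hydro state.
    Each (j, k) iteration is independent of all others, so in practice
    the rays are distributed across MPI ranks rather than looped over."""
    n_r, n_theta, n_phi = rho.shape
    heating = np.empty_like(rho)
    for j in range(n_theta):
        for k in range(n_phi):
            heating[:, j, k] = solve_ray_1d(rho[:, j, k],
                                            temp[:, j, k],
                                            ye[:, j, k])
    return heating

state = np.ones((16, 4, 8))      # toy grid: 16 radial, 4x8 angular zones
q = rbr_transport(state, state, state)
print(q.shape)                   # (16, 4, 8)
```

The independence of the per-ray solves is also the source of the approximation's limitations: no lateral transport couples neighboring rays, as discussed in the RbR figure caption above.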
Ultimately, the two approaches must merge, with 3D simulations performed with 3D (i.e., not RbR) general relativistic neutrino transport, general relativistic hydrodynamics and gravity, and a full weak interaction set. Figure \\ref{fig:label3DApproaches} gives an overview of the 3D simulations performed to date, using multi-frequency neutrino transport. It is obvious that fewer groups have attempted this, and far fewer simulations have been performed. It is also evident they have all been performed with RbR and not 3D neutrino transport.\n\n\\section{Results from our 2D Core Collapse Supernova Models}\\label{sec:current2D}\n\nWe \\cite{BrMeHi13,BrLeHi14} have performed four 2D simulations with \\textsc{Chimera}\\ beginning with the 12, 15, 20, and \n25~\\ensuremath{M_{\\odot}}\\ progenitors of Woosley and Heger \\cite{wohe07}.\nOne result of these simulations is the realization that a fully developed (and therefore final) explosion energy will require much more lengthy simulations than anticipated in the past.\nIn the explosion energy plot, Figure~\\ref{fig:energy}, the dashed lines show the growth of the ``diagnostic energy'' (the sum of the gravitational potential energy, the kinetic energy, and the internal energy in each zone --- i.e., the total energy in each zone --- for all zones having a total energy greater than zero) along with more refined estimates of the final explosion energy that account for the work required to lift the as-yet-unshocked envelope ``overburden'' (dash-dotted lines) and, in addition, the estimated energy released from recombination of free nucleons and alpha particles into heavier nuclei (solid lines). We expect these latter two measures to bracket the final kinetic energy of the fully developed explosion. 
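The bookkeeping behind the ``diagnostic energy'' defined above can be sketched in a few lines. The following is a minimal illustration only (the function name and the toy zone values are hypothetical, not \textsc{Chimera} data structures): sum the total (gravitational plus kinetic plus internal) energy over all zones whose total energy is positive.

```python
# Minimal sketch of the "diagnostic energy" defined in the text: the sum of
# gravitational, kinetic, and internal energies over all zones whose total
# energy is positive.  Zone arrays here are hypothetical toy values, not
# CHIMERA output.
def diagnostic_energy(e_grav, e_kin, e_int):
    totals = [g + k + i for g, k, i in zip(e_grav, e_kin, e_int)]
    return sum(e for e in totals if e > 0.0)

# Two bound zones (negative total energy) and two unbound zones.
e_grav = [-5.0, -2.0, -1.0, -0.5]
e_kin = [1.0, 1.0, 2.0, 1.0]
e_int = [1.0, 0.5, 1.0, 1.5]
print(diagnostic_energy(e_grav, e_kin, e_int))  # -> 4.0
```

The refined measures $E^{+}_{\rm ov}$ and $E^{+}_{\rm ov, rec}$ then follow by subtracting the binding energy of the as-yet-unshocked overburden and adding the estimated recombination energy, respectively.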
Using the definition of the explosion energy that includes both the energy cost to lift the overlying material and the energy gain associated with nuclear recombination, we can define $t_{\rm explosion}$, the explosion time, which is the time at which the explosion energy becomes positive and, therefore, the explosion can be said to have been initiated. For the 12, 15, 20, and 25 M$_\odot$ models, $t_{\rm explosion}$ is approximately 320, 320, 500, and 620 ms after bounce, respectively. \n\nMoving now to a comparison with observations: All four models have achieved explosion energies that are in the $\approx$0.4--1.4 Bethe range of observed Type~II supernovae (see Figure \ref{fig:energycomparison}). Figures \ref{fig:nickelmass} and \ref{fig:pnsmass} compare our predictions for the mass of $^{56}$Ni produced and the final proto-neutron star (baryonic) masses, respectively, with observations. Note that the large systematic errors in observed progenitor masses preclude any detailed comparison between our results and observations {\em as a function of progenitor mass}. Nonetheless, comparisons of our predicted {\em ranges} of explosion energies, $^{56}$Ni masses, etc.\ with observed ranges are meaningful and demonstrate that we are making progress toward developing predictive models.\n\n\begin{figure}\n\includegraphics[width=3.25in]{movie.jpg}\n\caption{Evolution of the entropy (upper half) and radial velocity (lower half) at 150, 300, and 600~ms after bounce for the 12~\ensuremath{M_{\odot}}\ model of Bruenn et al. \cite{BrMeHi13}. \n\label{fig:entropy}}\n\end{figure}\n\nThree snapshots of hydrodynamic motion are visible in \nFigure~\ref{fig:entropy}, \nwhich shows the entropy (upper half) and radial velocity (lower half) for the 12 \ensuremath{M_{\odot}}\ model at 150~ms, 300~ms, and 600~ms after bounce. 
\nAt 150~ms, roughly 100~ms before rapid shock expansion heralds the onset of a developing explosion, asphericity is developing as a result of vigorous neutrino-driven convection and the SASI. \nBy 300~ms large-scale, high-entropy, buoyant plumes are evident, as the explosion continues to develop. \nHowever, low-entropy down-flows still connect the unshocked regions with the PNS surface, continuing to supply accretion energy to power the neutrino luminosities driving the development of the explosion. By 600~ms, these down-flows have been cut off by the expanding ejecta, but their remnants continue to accrete onto the PNS, allowing the explosion to continue to gain in strength.\n\nThough these simulations have run further into explosion than previous simulations, the final explosion energies --- in particular, for the 20 and 25 M$_\odot$ models --- are clearly still developing. \nThese simulations will therefore continue. Additional 2D simulations --- e.g., using different progenitor masses --- are planned.\n\n\begin{figure}\n\includegraphics[width=3.5in]{Expl_E_vs_t_12M_25M_Comp.pdf}\n\caption{Diagnostic energy (\ensuremath{E^{+}}; dashed lines) versus post-bounce time for all of our published 2D models \cite{BrMeHi13,BrLeHi14}. Dash-dotted lines (\ensuremath{E^{+}_{\rm ov}}) include the binding energy of the overburden and solid lines (\ensuremath{E^{+}_{\rm ov, rec}}) also include the estimated energy gain from nuclear recombination.}\n\label{fig:energy}\n\end{figure}\n\n\begin{figure}\n\includegraphics[width=3.00in]{Explosion_Energy_Comparisons.pdf}\n\caption{\nObserved explosion energies for a number of CCSNe, along with predicted explosion energies from our 12, 15, 20, and 25 M$_\odot$ progenitor models (red dots) \cite{BrLeHi14}. The arrows indicate that our explosion energies are still increasing at the end of each run. 
The length of each arrow is a measure of the rate of change of the explosion energy at the end of the corresponding run.\n\\label{fig:energycomparison}\n}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{Nickel56_Comparisons.pdf}\n\\caption{\nObserved production of $^{56}$Ni for a number of CCSNe, along with our predictions from our 12, 15, 20, and 25 M$_\\odot$ progenitor models (red dots) \\cite{BrLeHi14}.\n\\label{fig:nickelmass}\n}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.00in]{N_Star_Mass.pdf}\n\\caption{\nTime evolution of the proto-neutron star (baryonic) mass in each of our 4 2D models, beginning with 12, 15, 20, and 25 M$_\\odot$ progenitors \\cite{BrLeHi14}.\n\\label{fig:pnsmass}\n}\n\\end{figure}\n\n\\section{Preliminary Results from our 3D Core Collapse Supernova Model}\\label{sec:current3D}\n\n\\begin{figure}\n\\includegraphics[width=3.1in]{1D2D3DShockTrajectories.pdf}\n\\caption{Evolution of the shock trajectory from our 1D model and the angle-averaged shock trajectories from our 2D and 3D models, all for the 15~\\ensuremath{M_{\\odot}}\\ case \\cite{LeBrHi15}. The 1D model does not develop an explosion, whereas an explosion is obtained in both our 2D and our 3D models.\n\\label{fig:1D2D3DShockTrajectories}\n}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=3.15in]{3D441msYZ.pdf}\n\\caption{Snapshot of the equatorial cross section of the entropy in our ongoing 3D simulation for the 15~\\ensuremath{M_{\\odot}}\\ case at $\\sim$441 ms after bounce \\cite{LeBrHi15}. Red indicates high-entropy, expanding, rising material. Green\/blue indicates cooler, denser material. Evident are significant (green) down flows fueling the neutrino luminosities.\n\\label{fig:entropy3D}\n}\n\\end{figure}\n\nFew 3D multiphysics models with necessary realism (as defined above) have been performed. Notable among these is the recently published model of Hanke et al. \\cite{HaMuWo13}. 
Preliminary results from the Oak Ridge group \\cite{LeBrHi15} in the context of a model similar to the Garching group's model -- i.e., with essentially the same physics and treatment of this physics -- are presented here, although we begin with the same 15 M$_\\odot$ Woosley--Heger progenitor used in our 2D models, whereas they began with the 27 M$_\\odot$ Woosley--Heger progenitor. \n\nFigure \\ref{fig:1D2D3DShockTrajectories} shows the angle-averaged shock trajectories from our one-, two-, and three-dimensional models, all run with the \\textsc{Chimera}\\ code beginning with the same 15 M$_\\odot$ Woosley--Heger progenitor and including the same (full) physics. Explosion is evident in both the 2D and the 3D cases. Explosion is not obtained in 1D. Comparing the two- and three-dimensional trajectories, we see that the development of the explosion in the 3D case is slower. In the 2D case, the shock radius changes rapidly beginning at about 200 ms after bounce. In the 3D case, the shock radius does not begin to climb dramatically until approximately 100 ms later, at $\\sim$300 ms after bounce. The 1D and 2D\/3D angle-averaged shock radii diverge at approximately 125 ms after bounce, and the 2D and 3D angle-averaged shock radii diverge later, at about 200 ms after bounce.\n\nFigure \\ref{fig:entropy3D} is a snapshot of a 2D slice of our ongoing 3D model at approximately 441 ms after bounce. Shown is the stellar core entropy. The shock wave is clearly outlined by the jump in entropy across it. Neutrino-driven convection is evident in the slice. Hotter (red) rising plumes bring neutrino-heated material up to the shock, while cooler (green) down flows replace the fluid below. Distortion of the shock away from axisymmetry and the nonaxisymmetric patterns of convection beneath the shock are also evident. 
Conclusive evidence for $l=1$ ``sloshing'' and $m=1$ ``spiral'' modes of the SASI will require a modal analysis, although the 2D slice clearly does not rule out either mode. \n\nThis simulation utilizes 32,400 rays (solid angle elements) with 2\ensuremath{^\circ}\ resolution in longitude and a resolution in latitude that varies from 8\ensuremath{^\circ}\ at the pole to better than 0.7\ensuremath{^\circ}\ at the equator, but is uniform in the cosine of the colatitude. \nDue to the Courant limit, the coordinate pole in standard spherical-polar coordinates creates a strong restriction on the time step size and therefore lengthens the total run time compared to a similar-resolution 2D simulation. \nOur constant cosine-of-colatitude grid seeks to minimize this impact without resorting to a grid that is coarse at all latitudes or implementing unevolved (frozen) regions near the pole. The simulation will consume approximately 100 million core-hours to complete. {\em (This gives a strong indication of how the physics included in the models, even in the RbR+ approximation, significantly drives up their computational cost.)}\nAs this 3D simulation for a 15~\ensuremath{M_{\odot}}\ progenitor evolves, we will be able to examine the nature of the CCSN explosion mechanism without the assumption of axisymmetry that is inherent in the 2D models. {\em The} key question: Will this model yield a robust explosion? And will other predictions agree with observations? As indicated by all of our 2D models, our current 3D model will need to be run significantly longer, and detailed computations of the explosion energy and other observables will need to be completed before we can begin to answer these questions.\n\n\section{Conclusions and Outlook}\n\nThe most sophisticated spherically symmetric models developed to date do not exhibit core collapse supernova explosions. 
Despite the prodigious amount of gravitational binding energy tapped during stellar core collapse and radiated via neutrinos, neutrino heating of the stellar core material beneath the supernova shock wave, unaided by other physics, is unable to power such explosions. On the other hand, with the aid of neutrino-driven convection beneath the shock, and the SASI, robust explosions have been obtained in both two- and three-dimensional models, with model predictions consistent with observations of multiple quantities (explosion energy, $^{56}$Ni mass, neutron star mass, neutron star kick velocity).\n\nOne- and two-dimensional studies have identified a list of key physics needed in CCSN models. The addition of new physics (e.g., magnetic fields) will likely add to this list as the new physics is added to today's most advanced models (e.g., see \\cite{ObJaAl14}). It is also possible that the addition of new physics will render some of the physics currently included less important. However, it is unlikely that the impact of general relativity and of important neutrino physics (e.g., relativistic transport corrections such as gravitational redshift and the full physics of electron capture and neutrino scattering) will be significantly lessened by adding new physics. The quantum leap in CCSN modeling that occurred two decades ago, where axisymmetry replaced spherical symmetry, did not reduce the importance of this physics --- case in point, both Lentz et al. \\cite{LeMeMe12} and Mueller et al. \\cite{MuJaMa12} reached the same conclusions. Moreover, the development of magnetic fields will depend on the environment established by accretion and neutrino heating.\nFuture modeling --- in particular, the direction we choose to take --- should rely on the predictions of the best {\\em available} models, more so than on speculation of what physics may or may not be important. 
With this in mind, the task at hand is, therefore, to build 3D models with the minimum physics set identified in the studies mentioned above. \n\nIn this brief review, we outlined the approaches used by the various supernova modeling groups around the world, focusing on two- and three-dimensional, multi-frequency models. While a comparative analysis of the results of these studies can shed light on the impact of (a) Newtonian versus general relativistic gravity, hydrodynamics, and neutrino transport, and\/or (b) including a reduced versus a complete set of neutrino weak interactions, the latter of which would include detailed nuclear electron capture and neutrino energy scattering, results from simulations cutting across these various levels of sophistication should not be compared with the expectation that the outcomes --- in particular, whether or not robust explosions are obtained --- should be the same. For example, comparing a Newtonian and a general relativistic model, with all other physics in the models kept the same, allows us to understand the role of general relativity, but we should not expect the Newtonian and general relativistic models to agree quantitatively, or even qualitatively.\n\nHaving said this, a comparison between, for example, the results obtained by the Oak Ridge and Garching groups can be made given the similarity of their approaches and the physics included in each of their model sets. 
In this context, it is important to note that the results of the Garching group differ between simulations performed with their \textsc{PROMETHEUS-VERTEX}\ code \cite{maja09}, which uses a general relativistic monopole correction to the Newtonian self-gravitational potential, derived from the Tolman-Oppenheimer-Volkoff equation of the spherically-averaged fluid and thermodynamic quantities in the stellar core, and with their \textsc{COCONUT-VERTEX}\ code \cite{MuJaMa12}, which instead uses the conformal flatness approximation to the general relativistic gravitational field. \textsc{PROMETHEUS-VERTEX}\ is the code most similar to \textsc{Chimera}. Unfortunately, to date, results from the \textsc{PROMETHEUS-VERTEX}\ code using the more modern Woosley--Heger progenitor set \cite{wohe07} have not been published, so a direct comparison is not yet possible.\n\nFocusing once again on the ongoing 3D simulations cited here: Will robust neutrino-driven explosions be obtained? If the answer is no, three explanations are possible: (1) Removing current approximations in the models (e.g., the use of RbR neutrino transport) and\/or making other improvements (e.g., increasing the spatial resolution) may fundamentally alter the outcomes. (2) We are missing essential physics. (3) A combination of additional physics and improved modeling may be needed to alter the outcomes. \nWith regard to (1)-(3):\n\n(A) All of the simulations documented here were initiated from state-of-the-art (e.g., the \citet{wohe07} series) spherically-symmetric progenitor models. 
\nCouch and Ott \cite{CoOt13} point out that multidimensional simulations of the advanced stages of stellar evolution of massive stars yield large deviations from \nspherical symmetry in the Si\/O layer (see \cite{Arnett14} and the references cited therein).\nThey demonstrate that such (expected) deviations from spherical symmetry can qualitatively alter the \npost-stellar-core-bounce evolution, triggering an explosion in a model that otherwise fails to explode. Such a qualitative change in outcome \ndemands better initial conditions, which can be obtained when spherically symmetric models, currently able to complete stellar evolution through \nsilicon burning and the formation of the iron core (multidimensional models are not yet capable of this), are informed by 3D stellar\nevolution models of earlier burning stages.\n\n(B) Given the importance of the SASI in the explosion models developed thus far, and given that the SASI is a long-wavelength instability, how will the SASI and the turbulence it induces, or neutrino-driven convection and the turbulence it induces, interact? There is evidence, for example, that the energy in long-wavelength modes of the SASI is sapped by the very turbulence the SASI seeds, as a result of the significant shear between counterrotating flows induced by its $m=1$ spiral mode in three dimensions \citep{EnCaBu12}. On the other hand, Couch and Ott \cite{CoOt14} recently showed that turbulent ram pressure may be important in driving the shock outward, relieving some of the work from the thermal pressure associated with neutrino heating. 
Moreover, significant deviations from spherical symmetry in the progenitor, as would be expected based on the current 3D stellar evolution models discussed above, would seed turbulence and, thus, potentially enhance the contribution of turbulence to the outward pressure driving the shock.\n\n(C) If we maintain that CCSNe are neutrino-driven, it may be logical to assume that we are missing something essential in the neutrino sector. Motivated by the experimental and observational measurement of neutrino mass, recent efforts to explore its impact on neutrino transport in stellar cores have uncovered new and increasingly complex physical scenarios \\citep{dufuqi10,chcafr12,chcafr13,VlFuCi14}. Now that the quantum kinetic equations for neutrinos in stellar cores have been derived (e.g., see \\cite{VlFuCi14}), efforts can begin in earnest to extend Boltzmann models to include the quantum mechanical coherent effects associated with neutrino mass. This is, of course, a long-term goal. It is not clear that physics associated with neutrino mass will have an impact on the explosion mechanism, but it has been demonstrated that such physics may impact terrestrial CCSN neutrino signatures significantly (e.g., see \\cite{DuFuCa07}).\n\nSince Colgate and White first proposed that CCSNe are neutrino-driven \\cite{CoWh66}, nearly five decades have passed. Ascertaining the CCSN explosion mechanism has certainly been a challenge. Each new piece of physics, each new dimension, has brought both breakthroughs and additional challenges. Nonetheless, the last decade of CCSN modeling has led to rapid progress. 
This progress --- in particular, the recent progress outlined here --- and the growing capability of available supercomputing platforms, encourage us that a solution to this long-standing astrophysics problem is achievable with a continued, systematic effort in perhaps the not-too-distant future.\n\n\section{Introduction and results}\n Recently, Bauke and Mertens have proposed in \cite{BaMe} a\n new and original look at disordered spin systems.\nThis point of view consists of studying the micro-canonical\nscenario,\n contrary to the canonical formalism, which has become\n the favorite tool to treat models of statistical mechanics.\n More precisely, they analyze the statistics of spin\n configurations whose energy is very close to a given value.\n In discrete spin systems, for a given system size,\n the Hamiltonian will take on a finite number\n of random values, and generally\n (at least, if the disorder is continuous)\n a given value $E$ is attained\n with probability $0$.\n One may, however, ask:\n How close to $E$ is the best approximant\n when the system size grows and, more generally,\n what is the distribution of the energies that come closest to\n $E$?\n Finally, how are the values of the corresponding\n configurations distributed in configuration space?\n\n The original motivation for this viewpoint came from\n a reformulation of a problem in combinatorial optimization,\n the number partitioning problem\n (this is the problem of\n partitioning $N$ (random) numbers into two subsets such that\n their sums in these subsets are as close as possible)\n in terms of a spin system\n Hamiltonian \cite{BFM, M1,M2}. 
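To make the reformulation concrete, here is a hedged brute-force sketch (the helper below is ours, usable for tiny instances only, and is not taken from the cited papers): a spin configuration $\sigma \in \{-1,+1\}^N$ encodes a partition of the numbers $a_1, \ldots, a_N$ into two subsets, and the ``energy'' $|\sum_i \sigma_i a_i|$ is exactly the difference of the two subset sums.

```python
from itertools import product

# Brute-force sketch of the number-partitioning <-> spin-system reformulation:
# sigma_i = +1 or -1 assigns a_i to one of the two subsets, and the cost
# |sum_i sigma_i * a_i| is the discrepancy between the two subset sums.
def best_partition(a):
    best_cost, best_sigma = None, None
    for sigma in product((-1, 1), repeat=len(a)):
        cost = abs(sum(s * x for s, x in zip(sigma, a)))
        if best_cost is None or cost < best_cost:
            best_cost, best_sigma = cost, sigma
    return best_cost, best_sigma

cost, sigma = best_partition([4, 5, 6, 7, 8])
print(cost)  # -> 0, e.g. {7, 8} versus {4, 5, 6}
```

The exhaustive search above is exponential in $N$, which is precisely why the statistics of near-optimal configurations, rather than an explicit optimum, is the object of study.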
Mertens' conjecture stated in these\n papers has been proven to be correct in \cite{BCP}\n (see also \cite{BCMP}),\n and generalized in \cite{BK1}\n for the partitioning into $k>2$ subsets.\n\n Some time later, Bauke and Mertens generalized this conjecture\n in the following sense: let $(H_N(\sigma))_{\s \in \Sigma_N}$\n be the Hamiltonian\n of any disordered spin system with discrete spins\n ($\Sigma_N$ being the configuration space) and\n continuously distributed couplings, and let $E$ be any given number;\n then the distribution of the close-to-optimal approximants of the\n level $\sqrt{N}E$ is asymptotically\n (when the volume of the system $N$ grows to infinity)\n the same as if the energies $H_N(\s)$\n are replaced by independent Gaussian random\n variables with the same mean and variance as $H_N(\s)$\n (that is, the same as for Derrida's Random Energy spin glass Model \cite{D1},\n which is why it is called the REM conjecture).\n\n What is this distribution for independent Gaussian random variables?\n Let $X$ be a standard Gaussian random variable,\n let $\delta_N \to 0$ as $N \to \infty$, $E \in {\bf R}$,\n $b>0$.\n Then it is easy to compute that\n $$ \mathop{\hbox{\sf P}}\nolimits( X \in [E -\delta_N b, E+ \delta_N b])= (2 \delta_N b)\n \sqrt{1\/(2\pi)}e^{-E^2\/2}(1+o(1)),\ \ \ N \to \infty.$$\n Let now $(X_\s)_{\s \in \Sigma_N}$ be $|\Sigma_N|$\n independent standard Gaussian random variables.\n Since they are independent, the number of them\n that are in the interval $[E -\delta_N b, E+ \delta_N\n b]$ has a Binomial distribution with parameters\n $(2 \delta_N b)\n \sqrt{1\/(2\pi)}e^{-E^2\/2}(1+o(1))$ and $ |\Sigma_N|$. If we put\n $$\delta_N =|\Sigma_N|^{-1} \sqrt{2\pi}(1\/2)e^{E^2\/2},$$\n by a well-known theorem of elementary probability,\n this random number converges in law to the Poisson\n distribution with parameter $b$ as $N \to \infty$. 
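This binomial-to-Poisson computation is easy to check numerically. The following Monte Carlo sketch is illustrative only (the sample sizes and the seed are arbitrary choices, and a generic $n$ plays the role of $|\Sigma_N|$); with $E=0$ and $b=1$ the number of hits per trial should be approximately Poisson with mean $1$.

```python
import math
import random

# Count how many of n independent standard Gaussians fall within
# delta * b of E, with delta = n^{-1} * sqrt(2*pi) * (1/2) * exp(E^2/2)
# as in the text; the count per trial is approximately Poisson(b).
random.seed(0)
n, trials = 10000, 100   # n plays the role of |Sigma_N|; toy sizes
E, b = 0.0, 1.0
delta = math.sqrt(2.0 * math.pi) * 0.5 * math.exp(E * E / 2.0) / n
counts = [sum(1 for _ in range(n)
              if abs(random.gauss(0.0, 1.0) - E) <= delta * b)
          for _ in range(trials)]
mean = sum(counts) / trials
print(mean)  # close to b = 1
```

Averaged over the trials, the hit count concentrates near $b$, in line with the Poisson limit described above.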
More generally,\n the point process\n $$ \sum_{\sigma \in \Sigma_N} \delta_{ \{\delta_N^{-1} N^{-1\/2}\n |\sqrt{N} X_\s-\n \sqrt{N}E|\}} $$\n converges, as $N \to \infty$, to the Poisson point process\n in ${\bf R}_+$ whose intensity measure is the Lebesgue measure.\n\n So, Bauke and Mertens' conjecture states that\n for the Hamiltonian $(H_N(\s))_{\s \in \Sigma_N}$\n of any disordered spin system and\n for a suitable normalization $C(N,E)$\n the sequence of point processes\n $$ \sum_{\sigma \in \Sigma_N} \delta_{ \{C(N,E)|H_N(\s)-\n \sqrt{N}E|\}} $$\n converges, as $N \to \infty$, to the Poisson point process\n in ${\bf R}_+$ whose intensity measure is the Lebesgue measure.\n In other words, the best approximant\n to $\sqrt{N} E$ is at distance $C^{-1}(N,E)W$,\n where $W$ is an exponential random variable of mean $1$.\n More generally, the $k$th best approximant\n to $\sqrt{N} E$ is at distance $C^{-1}(N,E)(W_1+\cdots +W_k)$,\n where $W_1,\ldots, W_k$ are independent\n exponential random variables of mean $1$, $k=1,2,\ldots$\n It appears rather surprising that such a result\n holds in great generality.\n Indeed, it is well known that the correlations of the random\n variables are strong enough to modify e.g.\ the maxima of the\n Hamiltonian.\n This conjecture\n has been proven in \cite{BK2}\n for a rather large class of disordered spin systems\n including\n short range lattice spin systems as well as\n mean-field spin glasses,\n like $p$-spin Sherrington-Kirkpatrick (SK) models with\n Hamiltonian $H_N(\s)=N^{1\/2-p\/2} \sum_{1\leq i_1,\ldots, i_p\leq N}\n \s_{i_1}\cdots \s_{i_p}J_{i_1,\ldots, i_p}$\n where the $J_{i_1,\ldots, i_p}$ are\n independent standard Gaussian random variables, $p\geq 1$.\n See also \cite{BCMN1}\n for the detailed study of the case $p=1$.\n\n Two questions naturally arise.\n (i) Consider instead of $E$, $N$-dependent\n energy levels, say, $E_N={\rm const} N^\alpha$.\n How fast can we 
allow $E_N$ to grow with $N \to \infty$\n for the same behaviour\n (i.e.\ convergence to the standard Poisson point process under a\n suitable normalization) to hold?\n (ii) What type of behaviour can we expect\n once $E_N$ grows faster than this value?\n\n The first question (i) has been investigated\n for Gaussian disordered spin systems in \cite{BK2}.\n It turned out that for short range\n lattice spin systems on ${\bf Z}^d$ this\n convergence is still true up to $\alpha<1\/4$.\n For mean-field spin glasses,\n like the $p$-spin SK models with\n Hamiltonian $H_N(\s)=N^{1\/2-p\/2} \sum_{i_1,\ldots, i_p}\n \s_{i_1}\cdots \s_{i_p}J_{i_1,\ldots, i_p}$\n mentioned above,\n this conjecture holds true up to $\alpha<1\/4$\n for $p=1$ and up to $\alpha<1\/2$ for $p\geq 2$.\n It has been proven in \cite{BCMN2}\n that the conjecture fails at\n $\alpha=1\/4$ for $p=1$ and $\alpha=1\/2$\n for $p=2$.\n The paper \cite{BCMN2}\n also extends these results to non-Gaussian\n mean-field $1$-spin SK models with $\alpha>0$.\n\n The second question (ii), that is the local behaviour\n beyond the critical value of $\alpha$,\n where Bauke and Mertens' conjecture fails,\n has been investigated\n for Derrida's Generalized Random\n Energy Models (\cite{D2}) in \cite{BK3}.\n\n Finally,\n the paper \cite{BGK} introduces a new REM conjecture,\n where the range of energies involved is not reduced to a small\n window. The authors prove that for a large class of random Hamiltonians\n the point process of properly normalized energies\n restricted to a sparse enough random subset of spin\n configuration space converges to the same point process\n as for the Random Energy Model, i.e. 
Poisson point process\n with intensity measure $\pi^{-1\/2}e^{-t\sqrt{2\ln 2}}dt$.\n\n In this paper we study Bauke and Mertens' conjecture\n on the local behaviour of energies not\n for disordered spin systems but for directed\n polymers in random environment.\n These models have received considerable attention\n from the mathematical community over the past fifteen years,\n see e.g.\ \cite{CSY} for a survey of the main results\n and references therein.\n Let $(\{\omega_n\}_{n\geq 0}, P)$ be a simple\n random walk on the $d$-dimensional lattice\n ${\bf Z}^d$. More precisely,\n we let $\Omega$ be the path space\n$\Omega=\{\omega=(\omega_n)_{n\geq 0};\n \omega_n\in {\bf Z}^d, n\geq 0\}$,\n ${\cal F}$ be the cylindrical $\sigma$-field on $\Omega$\n and for all $n\geq 0$, $\omega_n: \omega \to \omega_n$\n be the projection map.\n We consider the unique probability measure $P$\n on $(\Omega, {\cal F})$ such that $\omega_1-\omega_0,\n \ldots, \omega_n-\o_{n-1}$ are independent and\n $$ P(\o_0=0)=1,\ \\n P(\o_n-\o_{n-1}=\pm \delta_j)=(2d)^{-1}, \ \\n j=1,\ldots, d,$$\n where $\delta_j=(\delta_{kj})_{k=1}^d$ is the $j$th\n vector of the canonical basis of ${\bf Z}^d$.\n We will denote by $S_N=\{\omega^N=(i,\omega_i)_{i=0}^N\}$\n ($(i,\omega_i)\in {\bf N}\times {\bf Z}^d$)\n the space of paths of length $N$.\n We define the energy of the path $\omega^N=(i,\omega_i)_{i=0}^N$\n as\n\begin{equation}\n\label{enn}\n \eta(\omega^N)=N^{-1\/2}\sum_{i=1}^N \eta(i,\o_i)\n \end{equation}\nwhere $\{\eta(n,x) : n \in {\bf N}, x\in {\bf Z}^d\}$\n is a sequence of independent identically distributed\n random variables on a probability space $(H, {\cal G}, \mathop{\hbox{\sf P}}\nolimits)$.\n We assume that they have mean zero and variance $1$.\n\nOur first theorem extends Bauke and Mertens' conjecture\n to directed polymers.\n\n\begin{theo}\n\label{th0}\n Let $\eta(n,x)$, $\{\eta(n,x) : n \in {\bf N}, x\in 
{\bf\nZ}^d\}$,\n be i.i.d. random variables with finite third moment and\n with Fourier transform\n $\phi(t)$ such that $|\phi(t)|=O(|t|^{-1})$, $|t|\to \infty$.\n Let $E_N=c \in {\bf R}$ and let\n \begin{equation}\n \label{delta0}\n \delta_N = \sqrt{\pi\/2} e^{c^2\/2}\n ((2d)^N)^{-1}.\n \end{equation}\n Then the point process\n\begin{equation}\n\label{th1e0}\n \sum_{\o^N\in S_N} \delta_{\{\delta_N^{-1}\n |\eta(\o^N)-E_N|\}}\n\end{equation}\n converges weakly as $N \uparrow \infty$ to the Poisson\n point process ${\cal P}$ on ${\bf R}_+$\n whose intensity measure is the Lebesgue measure.\n Moreover, for any $\epsilon>0$ and any $b \in {\bf R}_+$\n\begin{equation}\n\label{th1b0} \mathop{\hbox{\sf P}}\nolimits(\forall N_0\ \exists N \geq N_0,\\n \exists \o^{N,1}, \o^{N,2}\ : \\n {\rm cov}\,(\eta(\o^{N,1}), \eta(\o^{N,2}))>\epsilon\ :\n |\eta(\o^{N,1})-E_N|\leq |\eta(\o^{N,2})-E_N|\leq \delta_N\n b)=0.\n\end{equation}\n\end{theo}\n\n The decay assumption on the Fourier transform is not optimal;\n we believe that it can be weakened, but we did not try to\n optimize it. Nevertheless, some condition\n of this type is needed: the result cannot be extended\n to discrete distributions, for which the number of possible\n values the Hamiltonian takes on would be finite.\n\nThe next two theorems\n prove Bauke and Mertens' conjecture\n for directed polymers in a Gaussian environment for growing levels\n $E_N=cN^{\alpha}$.\n We are able to prove that this conjecture\n holds true for $\alpha<1\/4$ for polymers\n in dimension $d=1$\n and for $\alpha<1\/2$ in dimension\n $d\geq 2$.\n We leave this investigation\n open for non-Gaussian environments.\n\n The values $\alpha=1\/4$ for $d=1$ and\n $\alpha=1\/2$ for $d\geq 2$ are likely to be the true\n critical values. 
Note that these are the same\n as for the Gaussian SK spin-glass models\n for $p=1$ and $p=2$ respectively according to\n \cite{BCMN2}, and likely for $p\geq 3$ as well.\n\n\begin{theo}\n\label{th1}\n Let $\eta(n,x)$, $\{\eta(n,x) : n \in {\bf N}, x\in {\bf\nZ}^d\}$, be independent standard Gaussian random variables.\n Let $d=1$. Let $E_N=c N^{\alpha}$ with\n $c \in {\bf R}$, $\alpha \in [0, 1\/4[$ and\n \begin{equation}\n \label{delta}\n \delta_N = \sqrt{\pi\/2} e^{E_N^2\/2}\n (2^N)^{-1}.\n \end{equation}\n Then the point process\n\begin{equation}\n\label{th1e}\n \sum_{\o^N\in S_N} \delta_{\{\delta_N^{-1}\n |\eta(\o^N)-E_N|\}}\n\end{equation}\n converges weakly as $N \uparrow \infty$ to the Poisson\n point process ${\cal P}$ on ${\bf R}_+$\n whose intensity measure is the Lebesgue measure.\n Moreover, for any $\epsilon>0$ and any $b \in {\bf R}_+$\n\begin{equation}\n\label{th1b} \mathop{\hbox{\sf P}}\nolimits(\forall N_0\ \exists N \geq N_0,\\n \exists \o^{N,1}, \o^{N,2}\ : \\n {\rm cov}\,(\eta(\o^{N,1}), \eta(\o^{N,2}))>\epsilon\ :\n |\eta(\o^{N,1})-E_N|\leq |\eta(\o^{N,2})-E_N|\leq \delta_N\n b)=0.\n\end{equation}\n\end{theo}\n\n\begin{theo}\n\label{th2}\nLet $\eta(n,x)$, $\{\eta(n,x) : n \in {\bf N}, x\in {\bf\nZ}^d\}$ be independent standard Gaussian random variables.\n Let $d \geq 2$. 
Let $E_N=c N^{\alpha}$ with\n $c \in {\bf R}$, $\alpha \in [0, 1\/2[$ and\n\begin{equation}\n\label{delta1}\n \delta_N = \sqrt{\pi\/2} e^{E_N^2\/2}\n ((2d)^N)^{-1}.\n \end{equation}\n Then the point process\n\begin{equation}\n\label{th2e}\n \sum_{\o^N\in S_N} \delta_{\{\delta_N^{-1}\n |\eta(\o^N)-E_N|\}}\n\end{equation}\n converges weakly as $N \uparrow \infty$ to the Poisson\n point process ${\cal P}$ on ${\bf R}_+$\n whose intensity measure is the Lebesgue measure.\n Moreover, for any $\epsilon>0$ and any $b \in {\bf R}_+$\n\begin{equation}\n\label{th2b} \mathop{\hbox{\sf P}}\nolimits(\forall N_0\ \exists N \geq N_0,\\n \exists \o^{N,1}, \o^{N,2}\ : \\n {\rm cov}\,(\eta(\o^{N,1}), \eta(\o^{N,2}))>\epsilon\ :\n |\eta(\o^{N,1})-E_N|\leq |\eta(\o^{N,2})-E_N|\leq \delta_N\n b)=0.\n\end{equation}\n\end{theo}\n\n\noindent{\bf Acknowledgements.}\n The author thanks Francis Comets for introducing him\n to the area of directed polymers. 
He also thanks\n Stephan Mertens and Anton Bovier for attracting\n his attention to the local behavior of disordered spin systems\n and interesting discussions.\n\n\\section{Proofs of the theorems.}\n\n\nOur approach is based on the following sufficient condition\n of convergence to the Poisson point process.\n It has been proven in a somewhat more general form\n in \\cite{BK1}.\n\n\\begin{theo}\n\\label{tc}\n Let $V_{i,M}\\geq 0$, $i\\in {\\bf N}$, be a family of\nnon-negative random\n variables satisfying the following assumptions : for any\n $l \\in {\\bf N}$ and all sets of constants $b_j>0$,\n $j=1,\\ldots,l$\n $$ \\lim_{ M \\to \\infty} \\sum_{(i_1,\\ldots, i_l) \\in \\{1,\\ldots,\n M\\} }\\mathop{\\hbox{\\sf P}}\\nolimits(\\forall_{j=1}^{l} V_{i_j, M}0$.\n It follows that for all $N>0$\n\\begin{eqnarray}\n \\lefteqn{ |S_N^{\\otimes,l}\\setminus {\\cal R}_{N,l}^{\\eta}|\n }\\nonumber \\\\\n &\\leq & (l(l-1)\/2) 2^{N(l-2)}\n \\#\\Big\\{\\omega^{N,1},\\omega^{N,2} :\n \\#\\{m \\in[0,\\ldots,N] :\n \\omega_m^1 - \\o_m^2=0\\} \\geq N^{1\/2+\\eta}\\Big\\}\n \\nonumber \\\\\n &\\leq & 2^{Nl} C N \\exp(-h N^{2\\eta}) \\label{zgu}\n\\end{eqnarray}\n where $C>0$, $h>0$ are some constants.\n\n\\medskip\n\n\\noindent{\\it Step 2.} The second preparatory step\n is the estimation (\\ref{es1}) and (\\ref{es2})\n of the probabilities in the sum (\\ref{zet}).\n Let $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$\n be the covariance matrix of the random variables\n $\\eta(\\o^{N,i})$ for\n $i=1,\\ldots, l$.\n Then, if $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$ is non-degenerate,\n \\begin{equation}\n \\label{mia}\n \\mathop{\\hbox{\\sf P}}\\nolimits(\\forall_{i=1}^{l} : |\\eta(\\o^{N,i})-E_N|0$\n\\begin{equation}\n\\label{es2}\n \\mathop{\\hbox{\\sf P}}\\nolimits(\\forall_{i=1}^{l} : |\\eta(\\o^{N,i})-E_N|0$.\n\n\\medskip\n\n \\noindent{\\it Step 3.}\n Armed with (\\ref{zgu}), (\\ref{es1}) and (\\ref{es2}),\n we now proceed with the proof of the theorem.\n\n For given $\\alpha \\in ]0, 1\/4[$, 
let us choose\n first\n $\\eta_0 \\in ]0, 1\/4[$ such that\n\\begin{equation}\n\\label{eta_0}\n 2\\alpha-1\/2+\\eta_0<0.\n \\end{equation}\n Next, let us choose $\\eta_1>\\eta_0$\n such that\n\\begin{equation}\n\\label{keta_0}\n 2\\alpha-1\/2+\\eta_1<2\\eta_0,\n \\end{equation}\n then $\\eta_2>\\eta_1$ such that\n\\begin{equation}\n\\label{eta_1}\n 2\\alpha-1\/2+\\eta_2<2\\eta_1,\n \\end{equation}\netc. After $i-1$ steps we choose $\\eta_i >\\eta_{i-1}$ such that\n\\begin{equation}\n\\label{eta_i}\n 2\\alpha-1\/2+\\eta_i<2\\eta_{i-1}.\n \\end{equation}\n Let us take e.g.\\ $\\eta_i=(i+1)\\eta_0$.\n We stop the procedure at the\n $n = [\\alpha\/\\eta_0]$th step, that is\n\\begin{equation}\n \\label{eta_n}\n n=\\min\\{i\\geq 0 : \\alpha <\\eta_i\\}.\n\\end{equation}\n Note that $\\eta_{n-1}\\leq \\alpha<1\/4$, and then\n $\\eta_n=\\eta_{n-1}+\\eta_0<1\/2$.\n\n We will prove that the sum\n(\\ref{zet}) over ${\\cal R}_{N,l}^{\\eta_0}$\n converges to $b_1\\cdots b_l$, while those over\n ${\\cal R}_{N,l}^{\\eta_i}\\setminus {\\cal R}_{N,l}^{\\eta_{i-1}}$\n for $i=1,2,\\ldots,n$ and the one over\n$S_N^{\\otimes l} \\setminus {\\cal R}_{N,l}^{\\eta_{n}}$\n converge to zero.\n\nBy (\\ref{es1}), each term of the sum (\\ref{zet})\n over ${\\cal R}^{\\eta_0}_{N,l}\n $ equals\n$$(2\\delta_N\/\\sqrt{2\\pi})^l (b_1\\cdots b_l)\ne^{- \\|\\vec E_N\\|^2 (1+O(N^{\\eta_0-1\/2}))\/2 }(1+o(1)).\n$$\n Here $e^{\\|\\vec E_N\\|^2 \\times O(N^{\\eta_0-1\/2})}\n =1+o(1)$ by the choice (\\ref{eta_0}) of $\\eta_0$.\n Then, by the definition of $\\delta_N$\n (\\ref{delta}), each term of the sum (\\ref{zet})\n over ${\\cal R}^{\\eta_0}_{N,l}$ is\n$$ (b_1\\cdots b_l) 2^{-Nl}(1+o(1))$$\n uniformly for $(\\omega^{N,1},\\ldots, \\o^{N,l}) \\in {\\cal\n R}_{N,l}^{\\eta_0}$.\n The number of terms in this\n sum is $|{\\cal\n R}_{N,l}^{\\eta_0}|$, that is\n $2^{Nl}(1+o(1))$ by (\\ref{zgu}).\n Hence, the sum (\\ref{zet}) over\n ${\\cal R}^{\\eta_0}_{N,l}\n $ converges to $b_1\\cdots b_l$.\n\n\n Let us 
consider the sum over\n${\\cal R}_{N,l}^{\\eta_i}\\setminus {\\cal R}_{N,l}^{\\eta_{i-1}}$\n for $i=1,2,\\ldots,n$.\n Each term in this sum equals\n$$(2\\delta_N\/\\sqrt{2\\pi})^l (b_1\\cdots b_l)\ne^{- \\|\\vec E_N\\|^2 (1+O(N^{\\eta_i-1\/2}))\/2 }(1+o(1))\n$$\nuniformly for $(\\omega^{N,1},\\ldots, \\o^{N,l}) \\in {\\cal\n R}_{N,l}^{\\eta_i}$. Then, by the definition\n of $\\delta_N$ (\\ref{delta}), it is bounded by\n $2^{-Nl} C_i e^{h_i N^{2\\alpha -1\/2+\\eta_i}}$\n with some constants $C_i, h_i>0$.\n The number of terms in this sum\nis not greater than $|S_{N}^{\\otimes l} \\setminus {\\cal\nR}_{N,l}^{\\eta_{i-1}}|$\n which is bounded due to (\\ref{zgu})\n by $C N 2^{Nl}\\exp(-h N^{2\\eta_{i-1}})$.\n Then by the choice of $\\eta_i$\n (\\ref{eta_i}) this sum converges to zero\n exponentially fast.\n\n Let us now treat the sum over\n$S_N^{\\otimes l} \\setminus {\\cal R}_{N,l}^{\\eta_{n}}$.\n Let us first study the sum\nover $(\\o^{N,1},\\ldots, \\o^{N,l})$ such that\n the matrix $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$ is non-degenerate.\n By (\\ref{es2}) each term in this sum\n is bounded by\n $ 2^{-Nl}e^{c^2 l N^{2\\alpha}\/2}N^{k(l)}$\n for some $k(l)>0$.\n The number of terms in this sum is bounded by\n $ C N 2^{Nl}\\exp(-h N^{2\\eta_{n}})$ by (\\ref{zgu}). Since\n $\\alpha<\\eta_n$ by (\\ref{eta_n}),\n this sum converges to zero exponentially fast.\n\n Let us finally turn to\nthe sum over $(\\o^{N,1},\\ldots, \\o^{N,l})$ such that\n the matrix $B(\\o^{N,1},\\ldots, \\o^{N,l})$\n is degenerate of the rank $r0$.\n\n There are $r$ paths among\n$\\o^{N,1},\\ldots, \\o^{N,l}$ such that the\n corresponding $\\eta(\\o^{N,i})$ form a basis.\n Without loss of generality we may assume that these\n are $\\o^{N,1},\\ldots, \\o^{N,r}$.\n Note that for $\\o^{N,1},\\ldots, \\o^{N,r}$\n there is no\n $m \\in [0,\\ldots, N]$ for which\n $\\o^1_m,\\ldots, \\o^r_m$ are all different.\n In fact, assume that\n $\\o^1_m,\\ldots, \\o^r_m$ are all different. 
Then\n $\\eta(m, \\o^{1}_m),\\ldots, \\eta(m, \\o^{r}_m)$\n are independent identically distributed random variables\n and $\\eta(m, \\o^{r+1}_m)=\n \\mu_1 \\eta(m, \\o^{1}_m)+\\cdots + \\mu_r \\eta(m,\n \\o^{r}_m)$.\n If $\\o^{r+1}_m$ is different from all $\\o^1_m,\\ldots, \\o^r_m$,\n then $\\eta(m, \\o^{r+1}_m)$ is independent from\n all of $\\eta(m, \\o^{1}_m),\\ldots,\\eta(m,\n \\o^{r}_m)$, then the linear coefficients, being the\n covariances of $\\eta(m, \\o^{r+1}_m)$\n with $\\eta(m, \\o^{1}_m),\\ldots, \\eta(m, \\o^{r}_m)$,\n are $\\mu_1=\\cdots=\\mu_r=0$.\n So, $\\eta(\\o^{N,r+1})$\n can not be a non-trivial linear combination\n of $\\eta(\\o^{N,1}),\\ldots, \\eta(\\o^{N,r})$.\n If $\\o^{r+1}_m$ equals one of $\\o^1_m,\\ldots, \\o^r_m$,\n say $\\o^{i}_m$, then again by computing the\n covariances of $\\eta(m, \\o^{r+1}_m)$\n with $\\eta(m, \\o^{1}_m),\\ldots, \\eta(m, \\o^{r}_m)$,\n we get $\\mu_i=1$, $\\mu_j=0$\n for $j=1,\\ldots, i-1,i+1,\\ldots,r$.\n Consequently,\n $\\eta(\\o^{i}_k)=\\eta(\\o^{r+1}_k)$\n for all $k=1,\\ldots, N$, so that\n $\\o^{N,i}=\\o^{N,r+1}$. 
But this is impossible\n since the sum (\\ref{zet})\n is taken over \\underline{different\\\/} paths\n $\\o^{N,1},\\ldots, \\o^{N,l}$.\n Thus the sum is taken only over paths\n$\\o^{N,1},\\ldots, \\o^{N,r}$ where at each moment of time\n at least two of them are at the same place.\n\n The number of such sets of $r$ different\n paths is exponentially smaller than\n $2^{Nr}$ : there exists $p>0$ such that\n it does not exceed $2^{Nr}e^{-pN}$.\n(In fact, consider $r$ independent simple\n random walks on ${\\bf Z}$ that at a given moment of time\n occupy any $k0$, according to given $\\o_m^1,\\ldots, \\o_m^r$,\n let us add to $A$ $n(m)$ rows: each equation\n $\\lambda_{i_1}+\\cdots + \\lambda_{i_k}=0$ gives\n a row with $1$ at places $i_1,\\ldots, i_k$ and\n $0$ at all other places.\n Then the equation\n $\\lambda_1\\eta(\\o^{N,1})+\\cdots+\\lambda_r\\eta(\\o^{N,r})=0$\n is equivalent to $A \\vec \\lambda =\\vec 0$\n with $\\vec \\lambda=(\\lambda_1,\\ldots, \\lambda_r)$.\n Since this equation has only the trivial solution $\\vec \\lambda=0$,\n the rank of $A$ equals $r$.\n The matrix $A$ contains at most $2^r$ different rows.\n There are fewer than $(2^r)^r$ possibilities\n to choose $r$ linearly independent rows among them.\n Let $A^{r \\times r}$ be an $r \\times r$\n matrix consisting of $r$ linearly independent rows of $A$.\n The fact that $\\eta(\\omega^{N,r+1})$ is\n a linear combination\n $\\mu_1\\eta(\\o^{N,1})+\\cdots+\\mu_r\\eta(\\o^{N,r})=\\eta(\\o^{N,r+1})$\n can be written as $A^{r \\times r} \\vec \\mu =\\vec b$\n where the vector $\\vec b$ contains only $1$\n and $0$ : if a given row $t$ of the matrix\n $A^{r \\times r}$ corresponds to the $m$th step\n of the random walks and has $1$ at places\n $i_1,\\ldots,i_k$ and $0$ elsewhere, then\n we put $b_t=1$ if $\\o_m^{i_1}=\\o_m^{r+1}$\n and $b_t=0$ if $\\o_m^{i_1}\\ne \\o_m^{r+1}$.\n Thus, given\n $\\o^{N,1},\\ldots, \\o^{N,r}$,\n there is an $N$-independent number\n of possibilities to write the system $A^{r \\times 
r} \\vec \\mu =\\vec b$\n with a non-degenerate matrix $A^{r \\times r}$\n which uniquely determines the linear coefficients\n $\\mu_1,\\ldots, \\mu_r$ and consequently\n $\\eta(\\o^{N,r+1})$ and the path $\\o^{N,r+1}$\n itself through these linear coefficients.\n Hence, there are no more possibilities to\n choose $\\o^{N,r+1}$\n than the number of non-degenerate matrices\n $A^{r \\times r}$ multiplied by the number of vectors $\\vec\n b$, that is roughly not more than $2^{r^2+r}$.\n\n These observations lead to the fact that the sum (\\ref{zet})\n with the covariance matrix $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$ of the\n rank $r$ contains at most $(2^{r^2+r})^{l-r} 2^{Nr}e^{-p N}$\n different terms with some constant $p>0$.\n Then, taking into account the estimate (\\ref{pp})\n of each term with $2\\alpha<1$, we deduce that it converges to zero\n exponentially fast.\n This finishes the proof\n of (\\ref{th1e}).\n\n To show (\\ref{th1b}), we have already noticed\n that the sum of terms\n $\\mathop{\\hbox{\\sf P}}\\nolimits(\\forall_{i=1}^{2} : |\\eta(\\o^{N,i})-E_N| N^{\\beta}\\Big\\}.\n \\nonumber\n\\end{eqnarray}\n It has been shown in the proof of Theorem \\ref{th1} that\n the number\n$$\\#\\Big\\{\\omega^{N,1},\\omega^{N,2} :\n \\#\\{m \\in[0,\\ldots,N] :\n \\omega_m^1 - \\o_m^2=0\\} > N^{\\beta}\\Big\\}$$\n equals the number of paths of a simple random walk\n within the period $[0,2N]$ that visit the origin\n at least $[N^\\beta]+1$ times.\n\n Let $W_r$ be the time of the $r$th return to the origin\n of a simple random walk\n ($W_1=0$), $R_N$ be the number of returns\n to the origin in the first $N$ steps.\n Then for any integer $q$\n $$P(R_N \\leq q)=P(W_1+(W_2-W_1)+\\cdots +(W_q-W_{q-1}) \\geq N)\n \\geq \\sum_{k=1}^{q-1} P(E_k)$$\n where $E_k$ is the event that exactly $k$ of the variables\n $W_s-W_{s-1}$ are greater than or equal to $N$,\n and $q-1-k$ are less than $N$. 
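As a numerical aside (an illustration, not part of the proof), the return-time quantities just introduced can be computed exactly in the recurrent case $d=1$: the expected number of returns to the origin of a simple random walk in $N$ steps is $\sum_{k=1}^{[N\/2]}\binom{2k}{k}4^{-k}\sim\sqrt{2N\/\pi}$, which is precisely the $\sqrt{N}$ collision scale behind the cutoff $N^{1\/2+\eta}$ used in (\ref{zgu}).

```python
import math

# Expected number of returns to the origin of a 1d simple random walk
# in N steps (numerical aside, not part of the proof):
#   E[R_N] = sum_{k=1}^{[N/2]} C(2k, k) 4^{-k}  ~  sqrt(2N/pi).

N = 10_000
p = 1.0                        # p_k = C(2k, k) 4^{-k}, via p_k = p_{k-1}(2k-1)/(2k)
expected_returns = 0.0
for k in range(1, N // 2 + 1):
    p *= (2 * k - 1) / (2 * k)
    expected_returns += p

asymptotic = math.sqrt(2 * N / math.pi)
print(expected_returns, asymptotic)    # the two agree to within a few percent
```

The iterative recurrence for $p_k$ avoids the huge binomial coefficients; for $d=2$ the analogous tail $P(W_2-W_1\geq N)\sim\pi(\log N)^{-1}$ is the estimate quoted from \cite{ET} below.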
Then\n$$\\sum_{k=1}^{q-1} P(E_k)=\\sum_{k=1}^{q-1} {q-1 \\choose k}\n P(W_2-W_1 \\geq N)^k (1- P(W_2-W_1 \\geq N))^{q-1-k}$$\n $$=\n 1-(1- P(W_2-W_1 \\geq N))^{q-1}.$$\n It is shown in \\cite{ET}\n that in the case $d=2$\n $$P(W_2-W_1 \\geq N)\n =\\pi (\\log N)^{-1}(1+ O((\\log N)^{-1})), \\ \\ \\ N \\to \\infty.$$\n Then\n $$ P(R_N >q) \\leq \\Big(1-\\pi (\\log N)^{-1}(1+o(1))\\Big)^{q-1}.$$\n Consequently,\n $$ \\#\\Big\\{\\omega^{N,1},\\omega^{N,2} :\n \\#\\{m \\in[0,\\ldots,N] :\n \\omega_m^1 - \\o_m^2=0\\} > N^{\\beta}\\Big\\}\n $$\n $$=(2d)^{2N} P(R_{2N}>[N^\\beta])$$\n $$\\leq\n (2d)^{2N} \\Big(1-\\pi (\\log 2N)^{-1}(1+o(1)) \\Big)^{[N^\\beta]-1}\n \\leq (2d)^{2N} \\exp(- h (\\log 2N)^{-1} N^{\\beta}) $$\n with some constant $h>0$.\n Finally for $d=2$ and all $N>0$\n by (\\ref{zgu1})\n \\begin{eqnarray}\n |S_N^{\\otimes l}\\setminus {\\cal K}_{N,l}^{\\beta}|\n \\leq (2d)^{lN} \\exp(- h_2 (\\log 2N)^{-1} N^{\\beta}) \\label{kk}\n \\end{eqnarray}\n with some constant $h_2>0$.\n\n In the case $d\\geq 3$ the random walk is transient and\n $$P(W_2-W_1 \\geq N)\\geq P(W_2-W_1 =\\infty)=\\gamma_d>0.$$\n It follows that $\\mathop{\\hbox{\\sf P}}\\nolimits(R_N>q)\\leq (1-\\gamma_d)^{q-1}$ and\n consequently\n\\begin{eqnarray}\n |S_N^{\\otimes l}\\setminus {\\cal K}_{N,l}^{\\beta}|\n \\leq (2d)^{lN} \\exp(- h_d N^{\\beta}) \\label{kkk}\n \\end{eqnarray}\n with some constant $h_d>0$.\n\n\\medskip\n\n \\noindent{\\it Step 2.} Proceeding exactly as in the proof of Theorem\n \\ref{th1}, we obtain that uniformly for\n $(\\omega^{N,1},\\ldots, \\o^{N,l}) \\in {\\cal\n K}_{N,l}^{\\beta}$,\n \\begin{equation}\n \\label{est1}\n \\mathop{\\hbox{\\sf P}}\\nolimits(\\forall_{i=1}^{l} : |\\eta(\\o^{N,i})-E_N|0$.\n\n\\medskip\n\n \\noindent{\\it Step 3.}\n Having (\\ref{kk}), (\\ref{kkk}), (\\ref{est1}) and (\\ref{est2}),\n we are able to carry out the proof of the theorem.\n For given $\\alpha \\in ]0, 1\/2[$, let us choose\n first $\\beta_0>0$ such 
that\n\\begin{equation}\n\\label{beta_0}\n 2\\alpha-1+\\beta_0<0.\n \\end{equation}\n Next, let us choose $\\beta_1>\\beta_0$\n such that\n\\begin{equation}\n\\label{bketa_0}\n 2\\alpha-1+\\beta_1<\\beta_0,\n \\end{equation}\n then $\\beta_2>\\beta_1$ such that\n\\begin{equation}\n\\label{beta_1}\n 2\\alpha-1+\\beta_2<\\beta_1,\n \\end{equation}\netc. After $i-1$ steps we choose $\\beta_i >\\beta_{i-1}$ such that\n\\begin{equation}\n\\label{beta_i}\n 2\\alpha-1+\\beta_i<\\beta_{i-1}.\n \\end{equation}\n Let us take e.g.\\ $\\beta_i=(i+1)\\beta_0$.\n We stop the procedure at the\n $n = [2\\alpha\/\\beta_0]$th step, that is\n\\begin{equation}\n \\label{beta_n}\n n=\\min\\{i\\geq 0 : 2\\alpha <\\beta_i\\}.\n\\end{equation}\n Note that $\\beta_{n-1} \\leq 2\\alpha$, and then\n $\\beta_n=\\beta_{n-1}+\\beta_0<2\\alpha+ 1-2\\alpha=1$.\n\n We will prove that the sum\n(\\ref{zet}) over ${\\cal K}_{N,l}^{\\beta_0}$\n converges to $b_1\\cdots b_l$, while those over\n ${\\cal K}_{N,l}^{\\beta_i}\\setminus {\\cal K}_{N,l}^{\\beta_{i-1}}$\n for $i=1,2,\\ldots,n$ and the one over\n$S_N^{\\otimes l} \\setminus {\\cal K}_{N,l}^{\\beta_{n}}$\n converge to zero.\n\nBy (\\ref{est1}), each term of the sum (\\ref{zet})\n over ${\\cal K}^{\\beta_0}_{N,l}\n $ equals\n$$(2\\delta_N\/\\sqrt{2\\pi})^l (b_1\\cdots b_l)\ne^{- \\|\\vec E_N\\|^2 (1+O(N^{\\beta_0-1}))\/2 }(1+o(1)).\n$$\n Here $e^{\\|\\vec E_N\\|^2 \\times O(N^{\\beta_0-1})}\n =1+o(1)$ by the choice (\\ref{beta_0}) of $\\beta_0$.\n Then, by the definition of $\\delta_N$\n (\\ref{delta1}), each term of the sum (\\ref{zet})\n over ${\\cal K}^{\\beta_0}_{N,l}$ is\n$$ (b_1\\cdots b_l) (2d)^{-Nl}(1+o(1))$$\n uniformly for $(\\omega^{N,1},\\ldots, \\o^{N,l}) \\in {\\cal\n K}_{N,l}^{\\beta_0}$.\n The number of terms in this\n sum is $|{\\cal\n K}_{N,l}^{\\beta_0}|$, that is\n $(2d)^{Nl}(1+o(1))$ by (\\ref{kk}) and (\\ref{kkk}).\n Hence, the sum (\\ref{zet}) over\n ${\\cal K}^{\\beta_0}_{N,l}\n $ converges to $b_1\\cdots b_l$.\n\n Let us 
consider the sum over\n${\\cal K}_{N,l}^{\\beta_i}\\setminus {\\cal K}_{N,l}^{\\beta_{i-1}}$\n for $i=1,2,\\ldots,n$. By (\\ref{est1})\n each term in this sum equals\n$$(2\\delta_N\/\\sqrt{2\\pi})^l (b_1\\cdots b_l)\ne^{- \\|\\vec E_N\\|^2 (1+O(N^{\\beta_i-1}))\/2 }(1+o(1))\n$$\nuniformly for $(\\omega^{N,1},\\ldots, \\o^{N,l}) \\in {\\cal\n K}_{N,l}^{\\beta_i}$. Then, by the definition\n of $\\delta_N$ (\\ref{delta1}), it is bounded by\n the quantity $(2d)^{-Nl} C_i e^{h_i N^{2\\alpha -1+\\beta_i}}$\n with some constants $C_i, h_i>0$.\n The number of terms in this sum\nis not greater than $|S_{N}^{\\otimes l} \\setminus {\\cal\nK}_{N,l}^{\\beta_{i-1}}|$\n which is bounded\n by $(2d)^{Nl}\\exp(-h_2 N^{\\beta_{i-1}} (\\log 2 N)^{-1})$\n in the case $d=2$ due to (\\ref{kk})\n and\n by the quantity $(2d)^{Nl}\\exp(-h_d N^{\\beta_{i-1}} )$\n in the case $d\\geq 3$ due to (\\ref{kkk}).\n Then by the choice of $\\beta_i$\n (\\ref{beta_i}) this sum converges to zero\n exponentially fast.\n\n Let us now treat the sum over\n$S_N^{\\otimes l} \\setminus {\\cal K}_{N,l}^{\\beta_{n}}$.\n Let us first analyze the sum\nover $(\\o^{N,1},\\ldots, \\o^{N,l})$ such that\n the matrix $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$ is non-degenerate.\n By (\\ref{est2}) each term in this sum\n is bounded by\n $ (2d)^{-Nl}e^{c^2 l N^{2\\alpha}\/2}N^{k(l)}$\n for some $k(l)>0$.\n The number of terms in this sum is bounded by\n the quantity $ (2d)^{Nl}\\exp(-h_2 N^{\\beta_{n}} (\\log 2N)^{-1})$\n in the case $d=2$ and by\n $ (2d)^{Nl}\\exp(-h_d N^{\\beta_{n}})$\n in the case $d\\geq 3$ respectively by (\\ref{kk}) and (\\ref{kkk}). 
Since\n $2\\alpha<\\beta_n$ by (\\ref{beta_n}),\n this sum converges to zero exponentially fast.\n\nLet us finally turn to the sum over $(\\o^{N,1},\\ldots, \\o^{N,l})$\nsuch that\n the matrix $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$\n is degenerate of the rank $r0$, while exactly by\n the same arguments as in the proof of Theorem \\ref{th1},\n (they are, indeed, valid in all dimensions)\n the number of terms in this sum is\n less than $O((2d)^{Nr})e^{-p N}$\n with some constant $p>0$.\n Hence, this last sum converges to zero\n exponentially fast as $2\\alpha <1$.\n This finishes the proof of (\\ref{th2e}).\n The proof of (\\ref{th2b}) is completely\n analogous to the one of\n(\\ref{th1b}).\n\n\\medskip\n\n\n\\noindent{\\bf Proof of Theorem \\ref{th0}.}\n We again concentrate on the proof in the sum (\\ref{zet})\n with $E_N=c$.\n\n \\noindent{\\it Step 1.} First of all, we need a rather rough estimate\n of the probabilities of (\\ref{zet}).\n Let $(\\o^{N,1},\\ldots, \\o^{N,r})$\n be such that the matrix $B_N(\\o^{N,1},\\ldots, \\o^{N,r})\n $ is non-degenerate.\n We prove in this step that there exists a constant $k(r)>0$\n such that for any $N>0$ and any $(\\o^{N,1},\\ldots, \\o^{N,r})$\n with non-degenerate $B_N(\\o^{N,1},\\ldots, \\o^{N,r})$, we have:\n\\begin{equation}\n\\label{gg2} \\mathop{\\hbox{\\sf P}}\\nolimits(\\forall_{i=1}^{r} : |\\eta(\\o^{N,i})-c|0$. Furthermore\n\\begin{equation}\n\\label{bb}\n \\Big|\\prod_{k=1}^r \\frac{ e^{-i t_k (-b_k \\delta_N+c)} - e^{-i t_k (b_k\n \\delta_N +c)}}{it_k} \\Big| \\leq\n \\prod_{k=1}^r \\min \\Big( (2\\delta_N)b_k, \\\n \\frac{2}{|t_k|}\\Big) \\leq C'\\prod_{k=1}^r\n \\min \\Big((2d)^{-N}, \\frac{1}{|t_k|} \\Big)\n\\end{equation}\n with some $C'>0$. 
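The Fourier-inversion step just used can be checked independently in the scalar Gaussian case. The snippet below is an illustration only (the interval endpoints are arbitrary sample values, and the proof of course works with the $r$-dimensional version): the kernel $(e^{-itu}-e^{-itv})\/(it)$ integrated against the characteristic function $\phi(t)=e^{-t^2\/2}$ recovers $P(u\leq Z\leq v)$.

```python
import cmath
import math

# One-dimensional sanity check of the inversion formula behind Step 1:
#   P(u <= Z <= v) = (1/2pi) Int phi(t) (e^{-itu} - e^{-itv})/(it) dt
# for a standard Gaussian Z with phi(t) = exp(-t^2/2).
# Illustration only; c and a are arbitrary sample values.

c, a = 0.5, 0.7                # interval [u, v] = [c - a, c + a]
u, v = c - a, c + a

def kernel(t):
    if abs(t) < 1e-12:         # limit of (e^{-itu} - e^{-itv})/(it) at t = 0
        return v - u
    return (cmath.exp(-1j * t * u) - cmath.exp(-1j * t * v)) / (1j * t)

T, h = 40.0, 0.01              # truncation and mesh of the midpoint Riemann sum
n = int(2 * T / h)
integral = sum(math.exp(-((-T + (k + 0.5) * h) ** 2) / 2)
               * kernel(-T + (k + 0.5) * h) for k in range(n)) * h
prob_fourier = (integral / (2 * math.pi)).real

prob_direct = 0.5 * (math.erf(v / math.sqrt(2)) - math.erf(u / math.sqrt(2)))
print(prob_fourier, prob_direct)   # the two values agree closely
```

The truncation at $T=40$ is harmless here because the Gaussian characteristic function decays rapidly; the bound (\ref{bb}) plays the analogous role when $\phi$ only decays polynomially.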
Hence,\n \\begin{eqnarray}\n \\lefteqn{\\frac{1}{(2\\pi)^r}\n \\int\\limits_{{\\bf R}^r} \\Big|f^{\\o^{N,1},\\ldots, \\o^{N,r}}_N(\\vec t)\n \\prod_{k=1}^r \\frac{ e^{-i t_k (-b_k \\delta_N+c)} - e^{-i t_k (b_k\n \\delta_N +c)}}{it_k}\\Big| dt_1\\cdots d t_r }\\nonumber\\\\\n & \\leq & C_0 N^{r\/2}\n \\int\n\\prod_{k=1}^r\n \\min \\Big((2d)^{-N}, \\frac{1}{|t_k|} \\Big)\n \\min \\Big(1, \\frac{1 }{ |(A^r \\vec\n t)_k|} \\Big) d \\vec t \\label{ss}\n \\end{eqnarray}\n with some constant $C_0>0$ depending on the function $\\phi$ and on\n $b_1,\\ldots, b_r$ only.\n Since the matrix $A^r$ is non-degenerate,\n using elementary linear algebra\n one can show that for some\n constant $C_1>0$ depending on the matrix $A^{r}$ only,\n we have\n\\begin{equation}\n \\int \\prod_{k=1}^r\n \\min \\Big((2d)^{-N}, \\frac{1}{|t_k|} \\Big)\n \\min \\Big(1, \\frac{1 }{ |(A^r \\vec\n t)_k|} \\Big)d\\vec t\n \\leq C_1 \\int \\prod_{k=1}^r\n \\min \\Big((2d)^{-N}, \\frac{1}{|t_k|} \\Big)\n \\min \\Big(1, \\frac{1 }{ |\n t_k|} \\Big) d\\vec t. 
\\label{sdg}\n\\end{equation}\n The proof of (\\ref{sdg}) is given in the Appendix.\nBut the right-hand side of (\\ref{sdg}) is finite.\n This shows that the integrand in (\\ref{fou})\n is in $L^1({\\bf R}^r)$ and\n the inversion formula (\\ref{fou}) is valid.\n Moreover, the right-hand side of (\\ref{sdg}) equals $C_1 (2((2d)^{-N}\n+ (2d)^{-N} N \\ln 2d + (2d)^{-N}))^r$.\n Hence, the probabilities above are bounded by the quantity\n $C_0 N^{r\/2} C_1 2^r(2+N \\ln (2d))^r (2d)^{-Nr}$\n with $C_0$ depending on $\\phi$ and $b_1,\\ldots, b_r$ and\n $C_1$ depending on the choice of $A^r$.\nTo conclude the proof of (\\ref{gg2}), it remains to remark\n that there is an $N$-independent number of possibilities\n to construct a matrix $A^{r}$ (at most $2^{r^2}$),\nsince it contains only $0$ or $1$.\n\n\n\\medskip\n\n \\noindent{\\it Step 2.} We keep the notation ${\\cal R}_{N,l}^{\\eta}$\n from (\\ref{rr}) for $\\eta \\in ]0,1\/2[$.\n The capacity of this set for $d=1$ is estimated in (\\ref{zgu}).\n Moreover by (\\ref{kk}) for $d=2$\n $$ |S_{N}^{\\otimes l}\\setminus {\\cal R}_{N,l}^{\\eta}|=\n |S_{N}^{\\otimes l}\\setminus {\\cal K}_{N,l}^{\\eta+1\/2}|\n \\leq (2d)^{Nl} \\exp(-h_2 (\\log 2N)^{-1} N^{1\/2+\\eta})$$\n and by (\\ref{kkk}) for $d\\geq 3$\n $$ |S_{N}^{\\otimes l}\\setminus {\\cal R}_{N,l}^{\\eta}|\n = |S_{N}^{\\otimes l}\\setminus {\\cal K}_{N,l}^{\\eta+1\/2}|\n \\leq (2d)^{Nl}\\exp(-h_d N^{1\/2+\\eta}),$$\n so that, for all $d\\geq 1$ there are $h_d, C_d>0$\n such that for all $N> 0$\n\\begin{equation}\n\\label{rrs}\n |S_{N}^{\\otimes l}\\setminus {\\cal R}_{N,l}^{\\eta}|\n \\leq (2d)^{Nl}C_d N \\exp(-h_d N^{2\\eta}).\n \\end{equation}\n\n\\medskip\n\n\n\\noindent{\\it Step 3.} In this step we show that uniformly for\n$(\\o^{N,1},\n \\ldots, \\o^{N,l}) \\in\n{\\cal R}_{N,l}^{\\eta}$\n \\begin{equation}\n \\label{gg}\n\\mathop{\\hbox{\\sf P}}\\nolimits(\\forall_{i=1}^{l} : |\\eta(\\o^{N,i})-c|\\epsilon N^{1\/6}}\n \\prod_{k=1}^l \\frac{ e^{-i t_k (-b_k \\delta_N+c)} - 
e^{-i t_k (b_k\n \\delta_N +c)}}{it_k} e^{-\\vec t B_N(\\o^{N,1},\\ldots, \\o^{N,l})\n \\vec t\/2}d\\vec t.\\nonumber\\\\\n\n I_N^2& =& \\frac{1}{(2\\pi)^l} \\int\\limits_{\\|t\\|<\\epsilon N^{1\/6}}\n \\prod_{k=1}^l \\frac{ e^{-i t_k (-b_k \\delta_N+c)} - e^{-i t_k (b_k\n \\delta_N +c)}}{it_k} \\nonumber \\\\\n && \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ {} \\times \\Big(\n f^{\\o^{N,1},\\ldots, \\o^{N,l}}_N(\\vec t)-\n e^{-\\vec t B_N(\\o^{N,1},\\ldots, \\o^{N,l})\n \\vec t\/2}\\Big) d\\vec t\\label{ff}\\\\\n I_N^3& =&\\frac{1}{(2\\pi)^l} \\int\\limits_{\\epsilon N^{1\/6}<\\|t\\|<\\delta N^{1\/2}}\n \\prod_{k=1}^l \\frac{ e^{-i t_k (-b_k \\delta_N+c)} - e^{-i t_k (b_k\n \\delta_N +c)}}{it_k}\n f^{\\o^{N,1},\\ldots, \\o^{N,l}}_N(\\vec t)\n d\\vec t\\nonumber\\\\\n I_N^4& =& \\frac{1}{(2\\pi)^l}\n \\int\\limits_{ \\|t\\|>\\delta N^{1\/2}}\n \\prod_{k=1}^l \\frac{ e^{-i t_k (-b_k \\delta_N+c)} - e^{-i t_k (b_k\n \\delta_N +c)}}{it_k}\n f^{\\o^{N,1},\\ldots, \\o^{N,l}}_N(\\vec t)\n d\\vec t\\nonumber\n\\end{eqnarray}\n with $\\epsilon, \\delta>0$ chosen according to the following\n Proposition \\ref{pr1}.\n\\begin{pro}\n\\label{pr1}\n There exist constants $N_0,C,\\epsilon, \\delta, \\zeta>0$ such\nthat\n for all $(\\o^{N,1},\\ldots \\o^{N,l}) \\in\n{\\cal R}_{N,l}^{\\eta}$ and all $N \\geq N_0$\n the following estimates hold:\n\\begin{equation}\n\\label{base1} \\Big|f^{\\o^{N,1},\\ldots, \\o^{N,l}}_N(\\vec t) -\ne^{-\\vec t B_N(\\o^{N,1},\\ldots, \\o^{N,l}) \\vec t\/2} \\Big|\n \\leq \\frac{C \\|t\\|^3}{\\sqrt{N}}\n e^{ -\\vec t B_N(\\o^{N,1},\\ldots, \\o^{N,l}) \\vec t\/2},\\ \\ \\ \\\n \\hbox{for all } \\|t\\|\\leq \\epsilon N^{1\/6}.\n\\end{equation}\n\n\\begin{equation}\n\\label{base2} \\Big| f^{\\o^{N,1},\\ldots, \\o^{N,l}}_N(\\vec t)\\Big|\n\\leq e^{-\\zeta \\|t\\|^2} \\ \\ \\\n \\hbox{for all } \\|t\\|<\\delta \\sqrt{N}.\n\\end{equation}\n\\end{pro}\n\n The proof of this proposition mimics the one of the Berry-Essen\n inequality and is 
given in the Appendix.\n\n The first part of\n$I_N^1$ is just the probability that\n $l$ Gaussian random variables with\n zero mean and covariance matrix\n $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$ belong\n to the intervals\n $[-\\delta_N b_k+c, \\delta_N b_k+c]$ for $k=1,\\ldots, l$ respectively.\n This is\n \\begin{equation}\n \\label{mia0}\n \\int\\limits_{|z_j- c|\\leq\n \\delta_N b_j, \\forall_{j=1}^l }\n \\frac{e^{-(\\vec z B^{-1}(\\o^{N,1},\\ldots,\n \\o^{N,l})\\vec z)\/2}}{(2\\pi)^{l\/2}\n \\sqrt{{\\rm det} B(\\o^{N,1},\\ldots,\n \\o^{N,l})} }\\, d\\vec z\n = (2\\delta_N\/\\sqrt{2\\pi})^l (b_1\\cdots b_l) e^{-(\\vec c\nB^{-1}(\\o^{N,1},\\ldots,\n \\o^{N,l})\\vec c)\/2}(1+o(1))\n =(2\\delta_N\/\\sqrt{2\\pi})^l (b_1\\cdots b_l)\n e^{-lc^2(1+O(N^{\\eta-1\/2}))\/2}(1+o(1))\n = (2d)^{-Nl}b_1\\cdots b_l(1+o(1))\n \\end{equation}\n uniformly for\n $(\\omega^{N,1},\\ldots, \\o^{N,l}) \\in {\\cal\n R}_{N,l}^{\\eta}$, where we denoted by $\\vec c$ the vector\n$(c, \\ldots, c)$.\n Since\n \\begin{equation}\n \\label{pr}\n \\prod_{k=1}^l \\Big| \\frac{ e^{-i t_k (-b_k \\delta_N+c)} - e^{-i t_k (b_k\n \\delta_N +c)}}{it_k} \\Big|\n \\leq (2 \\delta_N b_1)\\cdots (2\\delta_N b_l)= O((2d)^{-Nl})\n \\end{equation}\nand the off-diagonal elements of the matrix $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$\n are $O(N^{\\eta-1\/2})=o(1)$ as $N \\to \\infty$,\n the second part of $I_N^1$\n is exponentially smaller than\n $(2d)^{-Nl}$ (with exponential\n term $\\exp(-h N^{1\/3})$ for some $h>0$).\n\n There is a constant $C>0$ such that\n the term $I_N^2$ is bounded by\n $C (2d)^{-Nl} N^{-1\/2}$\n for any $(\\o^{N,1},\\ldots \\o^{N,l}) \\in\n{\\cal R}_{N,l}^{\\eta}$ and all $N$ large enough.\n This follows from (\\ref{pr}),\n the estimate (\\ref{base1})\n and again the fact that\n the off-diagonal elements of the matrix $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$\n are $O(N^{\\eta-1\/2})=o(1)$ as $N \\to \\infty$.\n\n The third term $I_N^3$ is exponentially smaller than\n 
$(2d)^{-Nl}$\n by (\\ref{pr}) and the estimate (\\ref{base2}).\n\n Finally, by (\\ref{pr})\n$$ |I_N^4|\\leq (2 \\delta_N b_1)\\cdots (2\\delta_N b_l)\n \\int\\limits_{\\|t\\|>\\delta \\sqrt{N}}\n |f^{\\o^{N,1},\\ldots, \\o^{N,l}}_N(\\vec t)| d\\vec t=\n O((2d)^{-Nl})\n \\int\\limits_{\\|t\\|>\\delta \\sqrt{N}}\n |f^{\\o^{N,1},\\ldots, \\o^{N,l}}_N(\\vec t)| d\\vec t.$$\n The function\n $ f^{\\o^{N,1},\\ldots, \\o^{N,l}}_N(\\vec t)$\n is the product of $N$ generating functions (\\ref{zey}).\n Note that for any pair $\\o^{N,i}, \\o^{N,j}$\n of $(\\o^{N,1},\\ldots, \\o^{N,l})\\in {\\cal R}_{N,l}^{\\eta}$,\n there are at most $N^{\\eta+1\/2}$\n steps $n$ where\n $\\o^{N,i}_n= \\o^{N,j}_n$.\n Then there are at least $N -[l(l-1)\/2]N^{\\eta+1\/2}=a(N)$\n steps where all $l$ coordinates $\\o^{N,i}$, $i=1,\\ldots,\n l$,\n of the vector\n$(\\o^{N,1},\\ldots, \\o^{N,l}) \\in {\\cal R}_{N,l}^{\\eta}$\n are\n different.\n In this case\n$$\\mathop{\\hbox{\\sf E}}\\nolimits \\exp\\Big( i N^{-1\/2}\\sum_{k=1}^l t_k \\eta(n, \\o^{N,k}_n)\n\\Big)=\\phi(t_1 N^{-1\/2})\\cdots \\phi(t_k N^{-1\/2}).$$\n By the assumption made on $\\phi$,\n this function is aperiodic and thus $|\\phi(t)|<1$\n for $t\\ne 0$.\n Moreover, for any $\\delta>0$ there exists\n $h(\\delta)>0$ such that $|\\phi(t)|\\leq 1-h(\\delta)$\n for $|t|>\\delta\/l$.\n Then\n $$\\int\\limits_{\\|t\\|>\\delta \\sqrt{N}}\n |f^{\\o^{N,1},\\ldots, \\o^{N,l}}_N(\\vec t)| d\\vec t\n \\leq \\int\\limits_{\\|t\\| >\\delta \\sqrt{N}} |\n \\phi(t_1 N^{-1\/2})\\cdots \\phi(t_k N^{-1\/2})|^{a(N)}d\\vec t\n$$\n$$ = N^{l\/2} \\int\\limits_{\\|s\\| >\\delta }\n |\\phi(s_1 )\\cdots \\phi(s_k )|^{a(N)}d\\vec s\n \\leq N^{l\/2}(1-h(\\delta))^{a(N)-2}\n \\int\\limits_{\\|s\\| >\\delta }\n |\\phi(s_1 )\\cdots \\phi(s_k )|^2d\\vec s $$\n where $a(N)=N(1+o(1))$ and\n the last integral converges due to the assumption\n made on $\\phi(s)$.\n Hence $I_N^4$ is exponentially smaller than $(2d)^{-Nl}$.\n This finishes the proof of 
(\\ref{gg}).\n\n\\medskip\n\n \\noindent{\\it Step 4.} We are now able to prove the theorem using\n the estimates (\\ref{gg2}),(\\ref{rrs}) and (\\ref{gg}).\n By (\\ref{gg}),\n the sum (\\ref{zet}) over ${\\cal R}_{N,l}^{\\eta}$\n (with fixed $\\eta \\in ]0,1\/2[$) that contains\n by (\\ref{rrs})$(2d)^{Nl}(1+o(1))$ terms,\n converges to $b_1 \\cdots b_l$.\n The sum (\\ref{zet}) over $(\\o^{N,1},\\ldots, \\o^{N,l}) \\not \\in\n {\\cal R}_{N,l}^{\\eta}$ but with $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$\n non-degenerate, by (\\ref{rrs}) has only at most\n $(2d)^{Nl} C N \\exp(-h N^{2\\eta})$ terms,\n while each of its terms by (\\ref{gg2}) with $r=l$\n is of the order $(2d)^{-Nl}$ up to a polynomial term. Hence,\n this sum converges to zero.\n Finally,\ndue to the fact that in any\n set $(\\o^{N,1},\\ldots, \\o^{N,l})$\n taken into account in (\\ref{zet}) the paths\n are all different,\n the sum over $(\\o^{N,1},\\ldots, \\o^{N,l}) \\not \\in\n {\\cal R}_{N,l}^{\\eta}$ with\n $B_N(\\o^{N,1},\\ldots, \\o^{N,l})$ of the rank $r0$ such that\n for any $( \\o^{N,1},\\ldots, \\o^{N,l} ) \\in\n {\\cal R}_{N,l}^{\\eta}$ and any $j$\n we have: $|\\alpha_j|\\leq C_1 \\|\\vec t\\|^2 N^{-1}+ C_2\\|\\vec t\\|^3\n N^{-3\/2}$. 
Then $|\\alpha_j|<1\/2$ and $|\\alpha_j|^2 \\leq\n C_3 \\|\\vec t\\|^3\n N^{-3\/2}$ with some $C_3>0$\n for all $\\vec t$ of the absolute\n value $\\|\\vec t\\|\\leq \\delta \\sqrt{N}$\n with $\\delta>0$ small enough.\n Thus $ \\ln \\phi(N^{-1\/2}(A \\vec t)_j)\n =-\\alpha_j+\\tilde \\theta_j \\alpha_j^2\/2$\n (using the expansion $\\ln(1+z)=z +\\tilde \\theta z^2\/2$\n with some $\\tilde \\theta$\n of the absolute value $|\\tilde \\theta|<1$\n which is true for all $z$ with $|z|<1\/2$)\n for all $( \\o^{N,1},\\ldots, \\o^{N,l} ) \\in\n {\\cal R}_{N,l}^{\\eta}$ and for all $\\vec t$ with\n $\\|\\vec t\\|\\leq \\delta \\sqrt{N}$\n with some $\\tilde \\theta_j$ such that $|\\tilde \\theta_j|<1$.\n It follows that\n\\begin{equation}\n\\label{zgh}\n f_N^{\\o^{N,1},\\ldots, \\o^{N,l}}(\\vec t)\n = \\exp\\Big( -\\sum_{j=1}^{K(N,\\o)}\n \\alpha_j+ \\sum_{j=1}^{K(N,\\o)}\n \\tilde \\theta_j \\alpha_j^2\/2\\Big).\n \\end{equation}\n Since $A^*A=B_N(\\o^{N,1},\\ldots,\n \\o^{N,l})$, here $-\\sum_{j=1}^{K(N,\\o)}\n \\alpha_j =-\\vec t B_N(\\o^{N,1},\\ldots, \\o^{N,l}) \\vec t\/2\n +\\sum_{j=1}^{K(N,\\o)} p_j$ where\n $|p_j| \\leq C_2\\|\\vec t\\|^3\n N^{-3\/2}$. 
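The expansion above is the standard Berry-Esseen mechanism: after $N$ steps the characteristic function differs from its Gaussian limit by $O(\|\vec t\|^3 N^{-1\/2})$, cf. (\ref{base1}). A minimal scalar illustration (Rademacher $\pm 1$ steps, so $\phi(t)=\cos t$; the sample values of $N$ and $t$ and the constant standing in for $C$ are ad hoc, not taken from the proof):

```python
import math

# Scalar illustration of an estimate of type (\ref{base1}): for a sum of
# N Rademacher (+-1) steps normalized by sqrt(N), the characteristic
# function is cos(t/sqrt(N))^N and its distance to the Gaussian limit
# exp(-t^2/2) is O(|t|^3 / sqrt(N)).  The constant 1.0 below is an
# ad hoc (generous) stand-in for C.

N = 100
for t in (0.5, 1.0, 2.0):
    f_N = math.cos(t / math.sqrt(N)) ** N
    gauss = math.exp(-t**2 / 2)
    assert abs(f_N - gauss) <= 1.0 * t**3 / math.sqrt(N) * gauss
    print(t, abs(f_N - gauss))
```

Here $\ln\cos x = -x^2\/2 - x^4\/12 - \cdots$, so the error is in fact $O(t^4\/N)$ for this symmetric step distribution; the cubic rate is the general bound used in the proposition.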
Then\n\\begin{equation}\n\\label{zgh1}\n f_N^{\\o^{N,1},\\ldots, \\o^{N,l}}(\\vec t)=\n \\exp \\Big( -\\vec t B_N(\\o^{N,1},\\ldots, \\o^{N,l}) \\vec t\/2 \\Big)\n \\exp\\Big(\\sum_{j=1}^{K(N,\\o)}p_j+\n \\tilde \\theta_j \\alpha_j^2\/2\\Big)\n\\end{equation}\n where\n $|p_j|+ |\\tilde \\theta_j \\alpha_j^2\/2| \\leq\n (C_2+C_3\/2)\\|\\vec t\\|^3 N^{-3\/2}$ for all $j$.\n Since $K(N,\\o) \\leq l N$, we have\n \\begin{equation}\n \\label{qk}\n \\Big| \\sum_{j=1}^{K(N,\\o)}\n p_j+ \\tilde \\theta_j \\alpha_j^2\/2 \\Big|\\leq (C_2+C_3\/2)l\\|t\\|^3\n N^{-1\/2}.\n \\end{equation}\n It follows that for $\\epsilon>0$ small enough\n $| \\exp (\\sum_{j=1}^{K(N,\\o)}\n p_j+ \\tilde \\theta_j \\alpha_j^2\/2)-1|\n \\leq C_4 \\|\\vec t\\|^3 N^{-1\/2}$\n for all $\\vec t$ with $\\|\\vec t\\| \\leq \\epsilon N^{1\/6}$.\n This proves (\\ref{base1}).\n Finally\n\\begin{equation}\n\\label{zlt}\n |f_N^{\\o^{N,1},\\ldots, \\o^{N,l}}(\\vec t)| \\leq\n \\exp \\Big( -\\vec t B_N(\\o^{N,1},\\ldots, \\o^{N,l}) \\vec t\/2 \\Big)\n \\exp \\Big((C_2+C_3\/2)l\\|t\\|^3\n N^{-1\/2}\\Big).\n \\end{equation}\nTaking into account the fact that the elements of\n$B_N(\\o^{N,1},\\ldots, \\o^{N,l})$ out of the diagonal are at most\n $N^{-1\/2+\\eta}=o(1)$ as $N \\to \\infty$,\n one deduces from (\\ref{zlt})\n that for $\\delta>0$ small enough\n(\\ref{base2}) holds true with some $\\zeta>0$\n for all $N$ large enough and all $\\vec t$ with\n $\\|\\vec t\\|\\leq \\delta \\sqrt{N}$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{ Introduction}\nIn quantum world there are different kinds of correlations between multiparticle quantum states. Understanding the nature of correlations is one of the\nchallenges in the development of quantum information\nscience. Given a bipartite or multipartite state one usually tries to characterize the amount of classical correlation, quantum correlation and quantum entanglement \ncontained in the composite system. 
Different correlations can arise depending on the state preparation procedure and\nmeasurements performed on the system. These correlations can account for many counter-intuitive\nfeatures in the quantum world.\nIn particular, entanglement is a physical property that has been successfully employed to interpret several phenomena which cannot be understood\nusing the laws of classical physics \\cite{Horodecki1}. It has also been identified as the basic ingredient for different quantum communication protocols like super-dense coding\n\\cite{Wiesner}, quantum teleportation \\cite{Brassard}, quantum cryptography \\cite{Bennett}, remote-state preparation \\cite{Pati, Bennett2}\nand quantum computational tasks such as the one-way quantum computer \\cite{bri}.\n\n\nOne fundamental property of quantum correlations in multiparty quantum states is that it can be monogamous \\cite{Coffman}. To state this in a qualitative way, if a correlation \nmeasure is monogamous, then\nthis says that in a composite quantum state, if two subsystems are more correlated with each other, then they will share a less amount of correlation with the other subsystems \nwith respect to that measure of correlation. In other words, it puts a restriction on the shareability of correlation between the different parties of a composite quantum state. \nSpecifically, \nif the two subsystems are maximally quantum correlated with each other, then they cannot get correlated to any other subsystem at the same time.\nThe measures of classical correlation are never monogamous and therefore are considered to be freely shareable. But, not all measures of quantum correlation\nsatisfy monogamy \\cite{Osborne,Adesso,Hiroshima,Seevinck,Lee}. For example, the square of concurrence and the squashed entanglement satisfy the monogamy inequality \\cite{Matthias1},\nwhereas the relative entropy of entanglement, the entanglement of formation and other measures do not satisfy monogamy in general. 
Recently, it has been shown that the monogamous \ncharacter is not an intrinsic property of other quantum correlation measures. In particular, the quantum discord \cite{Zurek} for tripartite states does not obey monogamy in general \n\cite{Prabhu,gio,alex}. Interestingly, however, even though a quantum correlation measure may not satisfy monogamy, the measure raised to a suitable power can \nobey monogamy \cite{Salini}. It has been shown that the square of the concurrence, which is a monotonic function of the entanglement of formation, is monogamous. Similarly, it has been shown that \nthe square of the quantum discord also satisfies monogamy.\nThe concept of monogamy is important not only from a fundamental point of view but also in practice.\nFor example, the monogamy of quantum correlations plays a crucial role in the security of quantum cryptography \cite{Gisin1}.\n\nWhile monogamy is an important property to study for various correlation measures, there remain other desirable properties that \ncorrelation measures are expected to obey from the perspective of being physically meaningful. \nOne such property is additivity on tensor products of density matrices \cite{Shor1}. \nA correlation measure is additive if its value on a tensor product of density matrices \nequals the sum of its values on the individual density matrices forming the tensor product state. \nThe quantum mutual information is an additive measure of total correlation and the squashed entanglement is another additive measure of quantum\ncorrelation \cite{Matthias}.\nHowever, not all correlation measures have been proved to be additive \cite{Werner1}. \nThere are measures of entanglement and \ncapacities of channels that have been proved to be non-additive \cite{Werner,Hastings,Hayden1,Smith}. 
For example, the relative entropy of entanglement has been proved to be non-additive \n\cite{Shor2} and there is\nstrong indication that the bipartite distillable entanglement is also non-additive \cite{Werner1}. Also, the additivity of the entanglement of formation still remains an open question,\n and it is conjectured to obey a strong super-additivity condition \cite{Pomeransky}. Thus, the question of the additivity of the different correlation measures is\n one of the intriguing open questions in the realm of quantum information theory.\n\n\nA measure of total correlation, called the entanglement of purification, was proposed by\nTerhal {\it et al}. \cite{Terhal}. It should be emphasized that the entanglement of purification\nis not a measure of entanglement, but a measure of total correlation defined in units of pure state entanglement. This definition was motivated operationally, \ntrying to see if quantum states could be\nconstructed from EPR pairs, i.e. the Einstein-Podolsky-Rosen\npairs, with a vanishing amount of communication asymptotically. It\nis based on the entanglement-separability paradigm, \ntrying to capture the classical and quantum correlations in a unified way. \nIt was shown to satisfy the properties of a genuine measure of total correlation. \nAlso, a monogamy relation involving the entanglement of purification and the quantum advantage of dense coding was given by Horodecki {\it et al}. \cite{Horodecki}.\nHowever, the conditions for the monogamy or polygamy nature of the entanglement of purification have not been found yet. The present paper is motivated by the fact that the mutual \ninformation, \na measure of total correlation, is monogamous for all tripartite pure states \cite{gio}. Therefore, if the entanglement of purification is a measure of total correlation, can it be \nstrictly monogamous for all tripartite pure states? 
We find that the entanglement of purification of a tripartite pure state $\rho_{ABC}$ across the $A:BC$ partition never exceeds\nthe sum of the entanglements of purification of the reduced density matrices $\rho_{AB}$ and $\rho_{AC}$, and is thus mostly polygamous. This observation calls for further investigation in understanding\nthe nature of correlation captured by the entanglement of purification. First, we prove that, similar to the mutual information, the entanglement of purification does not increase \nupon discarding an ancilla. Thereafter, we explore the monogamy, polygamy and additivity properties of the entanglement of purification for pure as well as mixed tripartite\nstates. Furthermore, we find analytically the lower bound and actual value of the entanglement of purification for different classes of mixed states.\nWe also present some conditions for the monogamy of the entanglement of purification in terms of the monogamy of the entanglement of formation and other entropic inequalities.\nWe use these properties of the entanglement of purification to explore the monogamy and additivity properties of the quantum advantage of dense coding. \nThe above definition as a theory of 'all correlation' may have important applications in quantum information theory.\n\nThe paper is organized as follows. In section II, we provide the definition of the monogamy of correlations.\nIn section III, we discuss the measures of total correlation, namely the quantum mutual information \nand the entanglement of purification, mentioning specifically the monogamy \nproperties of the quantum mutual information. Here we also state the definition of the interaction information and discuss some of its properties briefly.\nThen, we move on to find the relation between the entanglement of purification and the quantum mutual information of the purified state in section IV.\nHere, we discuss what happens to the entanglement of purification upon discarding a subsystem of a composite quantum system. 
Thereafter, we\nobtain the lower bounds and exact values of the entanglement of purification for some mixed quantum states, specifically for a class of tripartite states and\nhigher dimensional bipartite states. In section V,\nwe derive the results for the monogamy and polygamy nature of the entanglement of purification for pure as well as mixed states, extending to the case of $n$ parties. \nHere, we discuss a relation between the monogamy of the entanglement of purification and that of the entanglement of formation and the quantum discord in\nthe case of tripartite pure states. Also, in this section, the monogamy conditions for the mixed states are explored with specific examples\nand cases where the states are polygamous. \nIn section VI, we find that if the entanglement of purification is not additive on tensor products of density matrices, then \nit has to be a sub-additive quantity. Next, using the results we derive in the previous sections,\nwe obtain the monogamy and super-additivity (on tensor products) properties of the quantum advantage of dense coding in section VII, \nwhere we also obtain the upper bounds for some states and identify the states with no quantum advantage of dense coding. We end with conclusions and outlook in section VIII.\n\n\n\section{Monogamy and Polygamy of correlations}\nMonogamy is a property of a multiparticle quantum state that can be studied with respect to a particular correlation measure. \nIt is an important property that tells us about the nature of the correlation at our disposal, in particular, whether it is freely shareable or not. \nClassical correlations \cite{Henderson} are always polygamous, whereas certain quantum correlation\nmeasures satisfy monogamy and some others do not \cite{Adesso,Hiroshima,Prabhu,Matthias}. 
For example, the quantum discord is not in general a monogamous quantity, even for \nsome pure tripartite states, whereas the total correlation given by the quantum mutual information is strictly monogamous for all tripartite pure states.\nTherefore, the monogamy or polygamy nature of a total correlation measure that supposedly contains some amount of quantum and classical\ncorrelation is an important question to consider. \nNow, according to the definition of monogamy, it is a property which does not allow the free sharing of correlation between the \nsubparts of a composite system. Mathematically, if a correlation measure $Q(\rho)$ satisfies \n\begin{equation}\n Q(A:BC)\geq Q(A:B)+Q(A:C) \n\end{equation}\nfor any tripartite state $\rho_{ABC}$, then the correlation measure is called monogamous, otherwise it is called polygamous. This definition can be extended to the case of \n$n$ parties as well. A correlation measure $Q$ is said to be $n$-partite monogamous if the following inequality is satisfied\n\begin{equation}\n Q(A_1:A_2\ldots A_n)\geq Q(A_1:A_2)+Q(A_1:A_3)+\ldots+Q(A_1:A_n)\nonumber\n\end{equation}\nand otherwise it is called $n$-partite polygamous.\n\n\n\section{ Measures of total correlation}\n\nWe consider a multiparticle quantum system with each subsystem defined on a finite dimensional Hilbert space ${\cal H}$. Let ${\cal L}(\cal H)$ be the \nset of all linear operators acting on ${\cal H}$ and $D({\cal H})$ be the set of all density operators $\rho$ with $\rho \ge 0$ and $\mathrm{Tr}(\rho) =1$. \nThe composite state $\rho_{ABC} \in D({\cal H}_{ABC} )$ is a general state that may contain classical and quantum correlations including entanglement. 
\nThe von-Neumann entropy of a density operator $\rho_A$ is defined as \n$S(A) = -\mathrm{Tr}(\rho_A \log_2 \rho_A)$, where $ \rho_{A}=\mathrm{Tr}_{BC}(\rho_{ABC})$.\nIn this section we discuss two important measures of total correlation in the bipartite scenario, namely, \nthe quantum mutual information and the entanglement of purification. The measures of total correlation try to capture quantitatively the total correlations, comprising the \nclassical as well as the quantum correlations, in a bipartite state $\rho_{AB} = \mathrm{Tr}_C (\rho_{ABC})$.\n\n\n\n\subsection{ Quantum Mutual Information}\n\nThe quantum mutual information is a measure of total correlation in a quantum system.\nIt is a straightforward generalization of the classical mutual information, obtained by replacing the Shannon entropy \nby the von-Neumann entropy for the respective terms in the expression for the classical mutual information. \nThus, for a bipartite quantum state $\rho_{AB}$, the quantum mutual information is defined as $ I(A:B)= S(A)-S(A|B)$, where $\nS(A|B) = S(AB)- S(B)$ is the quantum conditional entropy \cite{Cerf}. The quantum mutual information satisfies some natural properties, all of which \na total correlation measure is expected to satisfy. \nFirst, it never increases \nupon discarding quantum systems, i.e., $I(A:BC)\geq I(A:B)$. Secondly, the quantum mutual information is additive on tensor products of density matrices, i.e., \n$I(AC:BD)=I(A:B)+I(C:D)$ for $\rho_{AB}\otimes\sigma_{CD}$. \nApart from these, the monogamy properties of the mutual information have been studied in Ref.\cite{gio}. There, it was shown that a necessary and sufficient condition \nfor the monogamy of the quantum mutual information can be stated in terms of the interaction information \cite{Prabhu,gio}. 
Specifically, it can be shown that for any pure tripartite state \n$\vert\Psi\rangle_{ABC}$, we have \n\begin{align}\nI(A:B) + I(A:C) = I(A:BC), \nonumber\n\end{align}\nwhich implies that the quantum mutual information is strictly monogamous for a pure tripartite state. The necessary and sufficient criterion for the quantum mutual information to be \nmonogamous for a mixed tripartite state is that the interaction information should be positive \cite{gio}. \nIn classical information theory, the interaction information is defined as $\tilde{I}= H(AB)+H(BC)+H(AC)-H(A)-H(B)-H(C)-H(ABC)$, where $H(\cdot)$ denotes the Shannon entropy \n\cite{Thomas}.\nReplacing the Shannon entropies by the von-Neumann entropies we obtain the quantum generalization of the interaction information. The quantum interaction information is therefore\nnothing but $\tilde{I}(\rho_{ABC})= S(AB)+S(BC)+S(AC)-S(A)-S(B)-S(C)-S(ABC)$, where $S(AB)$ denotes the von-Neumann entropy of the density matrix $\rho_{AB}$. \nThe interaction information measures the effect of \nthe presence of a third party $C$ on the amount of correlations shared by the other two parties: it is the difference between the information shared between the parties \n$A$ and $B$ when $C$ is present and when $C$ is not present. The quantum interaction information can be positive as well as negative. It is invariant under the action of local unitaries \nand non-increasing under the action of unilocal measurements \cite{Prabhu}. It has been used to provide necessary and sufficient conditions for the monogamy of the quantum discord in\nRef.\cite{Prabhu}.\nThe quantum mutual information is an important measure of correlation and finds application in\na large number of settings, primarily in studying channel capacities \cite{Bennett1,Benjamin}. Also, an operational interpretation of the quantum mutual \ninformation has been given in Ref.\cite{Groissman}. 
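The monogamy equality above can be verified directly for generic states. The following numpy sketch (our own code; helper names are ours) draws a random three-qubit pure state and checks that $I(A:B)+I(A:C)=I(A:BC)$, i.e., that the quantum interaction information vanishes for pure tripartite states.

```python
import numpy as np

def ptrace(rho, dims, keep):
    """Trace out every subsystem not listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for q in sorted((q for q in range(n) if q not in keep), reverse=True):
        m = rho.ndim // 2
        rho = np.trace(rho, axis1=q, axis2=q + m)
    d = int(np.prod([dims[q] for q in keep]))
    return rho.reshape(d, d)

def S(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
dims = [2, 2, 2]

def I(X, Y):
    """Quantum mutual information between disjoint qubit groups X and Y."""
    return S(ptrace(rho, dims, X)) + S(ptrace(rho, dims, Y)) \
        - S(ptrace(rho, dims, sorted(X + Y)))

lhs = I([0], [1]) + I([0], [2])
rhs = I([0], [1, 2])
print(lhs, rhs)  # equal for any pure tripartite state
```

The equality follows because, for a pure state, $S(AB)=S(C)$ and $S(AC)=S(B)$, so both sides reduce to $2S(A)$; for mixed states the difference of the two sides is exactly the quantum interaction information.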
In that work, the quantum mutual information was interpreted as the total amount of randomness or noise needed to erase the correlations in a bipartite quantum state \ncompletely.\n\n\subsection{ Entanglement of purification}\n\n\nThe entanglement of purification is a measure of total correlation along a bipartition in a quantum state \cite{Terhal}, defined \nusing the notion of the entanglement-separability paradigm. Interestingly, in this approach the authors in \cite{Terhal} have treated both the quantum entanglement and the classical correlation in a unified framework, \nby defining a measure of total correlation, namely the entanglement of purification, in units of pure state entanglement.\nBy their definition, the entanglement of purification is expressed as the entanglement of the purified version of the mixed state as follows. Suppose we have a mixed state $\rho_{AB}$, and we purify it \nto a pure state $\vert\Psi\rangle_{ABA'B'}$. Then, the entanglement of purification is defined as\n\begin{equation}\nE_p(A:B)=\min_{A'B'} E_f(AA':BB'),\n\end{equation}\nwhere $E_p(A:B)$ denotes the entanglement of purification of the state $\rho_{AB}$ across the $A:B$ partition, and $E_f(AA':BB')$ is the entanglement of \nformation across the bipartition $ AA':BB'$ of the pure state $\vert\Psi\rangle_{ABA'B'}$, \nobtained from $\rho_{AB}$ by any standard\npurification procedure such as $\vert\Psi_s\rangle_{AA':BB'}= \sum_{i}\sqrt{\lambda_i}\vert \Psi_i\rangle_{AB}\otimes\vert 0\rangle_{A'}\vert i\rangle_{B'}$. Here, the $\lambda_i$ \nare the Schmidt coefficients and $ \vert \Psi_i\rangle $ are the corresponding Schmidt vectors in $\cal H_{AB}$. \nThe above expression can be reformulated in terms of trace-preserving completely positive (TPCP) maps, \nsince every quantum operation can be written in terms of such maps. 
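The minimization in this definition can be explored numerically by brute force. The sketch below is our own construction (not from Ref. [Terhal et al.]): it purifies a two-qubit mixed state with ancillas of restricted dimensions $d_{A'}=2$, $d_{B'}=4$ (smaller than the sufficient $d_{A'}=d_{AB}$, $d_{B'}=d_{AB}^2$) and minimizes $S(AA')$ over random ancilla unitaries, so the result is only an upper estimate of $E_p(A:B)$; it must lie between $I(A:B)/2$ and $\min[S(A),S(B)]$.

```python
import numpy as np

def ptrace(rho, dims, keep):
    """Trace out every subsystem not listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for q in sorted((q for q in range(n) if q not in keep), reverse=True):
        m = rho.ndim // 2
        rho = np.trace(rho, axis1=q, axis2=q + m)
    d = int(np.prod([dims[q] for q in keep]))
    return rho.reshape(d, d)

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def haar(d, rng):
    """Haar-random d x d unitary via QR decomposition."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# example two-qubit mixed state: half a Bell state, half |00><00|
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
rho_AB = 0.5 * np.outer(phi, phi) + 0.5 * np.diag([1.0, 0.0, 0.0, 0.0])

# standard purification |Psi_s> = sum_i sqrt(l_i) |Psi_i>_AB |0>_A' |i>_B'
lam, vec = np.linalg.eigh(rho_AB)
e0 = np.array([1.0, 0.0])
psi = sum(np.sqrt(max(lam[i], 0.0)) * np.kron(np.kron(vec[:, i], e0), np.eye(4)[i])
          for i in range(4))

dims = [2, 2, 2, 4]                 # subsystem order: A, B, A', B'
rng = np.random.default_rng(0)
best = np.inf
for t in range(300):
    U = np.eye(8) if t == 0 else haar(8, rng)     # unitary acting on A'B'
    chi = (psi.reshape(4, 8) @ U.T).reshape(32)   # (I_AB x U_{A'B'}) |Psi_s>
    r = np.outer(chi, chi.conj())
    best = min(best, entropy(ptrace(r, dims, [0, 2])))  # E_f = S(AA') for a pure state

half_I = 0.5 * (entropy(ptrace(rho_AB, [2, 2], [0]))
                + entropy(ptrace(rho_AB, [2, 2], [1])) - entropy(rho_AB))
upper = min(entropy(ptrace(rho_AB, [2, 2], [0])), entropy(ptrace(rho_AB, [2, 2], [1])))
print(half_I, best, upper)   # I(A:B)/2 <= estimate of E_p(A:B) <= min[S(A), S(B)]
```

A random search of this kind only brackets $E_p$; a faithful computation would optimize over the full ancilla dimensions stated above.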
Following Ref.\cite{Terhal}, from Eq.(2), we \nget $E_p(A:B)$ of $\rho_{AB}$ as the following minimum over unitary matrices: \n\begin{align}\nE_p(A:B)=\min_{U_{A'B'}}E_f(AA':BB'),\n\end{align}\nwhere $E_f(AA':BB')$ is the entanglement of formation across the $AA':BB'$ partition of the pure state \n${(I_{AB}\otimes U_{A'B'})(\vert\Psi_s\rangle\langle\Psi_s\vert)(I_{AB}\otimes U_{A'B'})^{\dagger}}$ obtained from $\rho_{AB}$ by a standard purification procedure and then acting \nwith unitary matrices on the ancilla part. This is nothing but the entropy $\min_{U_{A'B'}}S(\mathrm{Tr}_{AA'}((I_{AB}\otimes U_{A'B'})(\vert\Psi_s\rangle\langle\Psi_s\vert)(I_{AB}\otimes U_{A'B'})^{\dagger}))$. Now by tracing out the\n$ AA'$ part from the pure state as well as the unitary operator, one obtains the following equivalent form of the entanglement of purification in terms of a TPCP map\n\begin{eqnarray}\nE_p(A:B)=\min_{\Lambda_{B'}}S ((I_{B}\otimes\Lambda_{B'})(\mu_{BB'}(\rho_{AB})));\nonumber\\\n\Lambda_{B'}(\nu) = \mathrm{Tr}_{A'}(U_{A'B'}(\nu_{B'}\otimes\vert0\rangle\langle 0\vert_{A'})U^\dagger_{A'B'});\nonumber\\\n\mu_{BB'}(\rho_{AB})= \mathrm{Tr}_{AA'}(\vert\Psi\rangle\langle\Psi\vert),\n\end{eqnarray}\nwhere $ \Lambda_{B'}$ is a TPCP map. The above form is derived in \cite{Terhal}. Therefore, the \nminimization over unitary matrices in Eq.(3) is now represented as a minimization over all TPCP maps $\Lambda_{B'}$, since a TPCP map is equivalently represented as a unitary transformation on the larger system followed by \ntracing over the ancilla.\nIt was shown that the above optimization can be successfully performed in a Hilbert space of a limited dimension $ d_{A'}=d_{AB}$ and $d_{B'}=d_{AB}^2 $, due to the result by \nTerhal {\it et al}. 
\cite{Terhal}.\nFor pure states, the entanglement of purification is equal to the entanglement of formation, and for a mixed state $\rho_{AB}$, one has $E_p(A:B)\geq E_f(A:B)$.\nIn addition, the authors introduced the regularised entanglement of purification $E_p^\infty(A:B)$. It was shown that the asymptotic cost of preparing $n$ copies of \n$\rho_{AB}$ from singlets using only local operations and an asymptotically vanishing\namount of quantum or classical communication is equal to the regularised entanglement of purification.\nThis implies that the regularised entanglement of purification is actually the \nentanglement cost (with $LO_q$) of the quantum states $\rho$ on $\cal H_d\otimes\cal H_d$ \cite{Terhal}, i.e., $E_{LO_q}(A:B)=E_p^\infty(A:B)$.\nLater, from an operational point of view it was shown that if the entanglement of purification is additive on tensor product states, then $ E_p^\infty(A:B)$ is actually the optimal visible compression rate for mixed \nstates \cite{Hayashi}. Other operational interpretations have been explored for this quantity. In particular, the regularized entanglement of purification was shown to be equal to the\nentanglement-assisted noisy channel capacity \cite{Nilanjana}. On another note, it was shown that the regularized entanglement of purification $E_{LO_q}(A:B)$ gives the communication cost \nof simulating a channel without the presence of prior entanglement \cite{Shor}. However, the entanglement of purification is a mostly unexplored quantity, since \nit is difficult to calculate analytically owing to the optimization that needs to be done in a larger Hilbert space. But, using the monogamy property of entanglement,\nthe authors in Ref.\cite{Matthias1} have found the entanglement of purification for a class of bipartite states supported in symmetric or antisymmetric subspaces analytically to be \n$S(A)$. However, one of the unanswered questions regarding the entanglement of purification is the property of additivity. 
\nIt is still not known whether the entanglement of purification is additive on tensor product states or not. However, some progress has been made in this direction: the \nentanglement of purification has been shown to be non-additive within a certain numerical tolerance \cite{Chen}. The entanglement of purification has been related to some other \ninformation theoretic quantities as well. In particular, it has been shown that the entanglement of purification is related to the partial quantum information, through its monogamy relation \nwith the quantum advantage of dense coding \cite{Horodecki}.\n\n\n\n\section{ Entanglement of purification in terms of quantum mutual information\n: Lower bound and exact values}\n\nThe entanglement of purification can be rewritten in terms of the quantum mutual information.\nFor the pure state $\vert\Psi\rangle_{ABA'B'}$, which is the\noptimal purification of the mixed state $\rho_{AB}$ for evaluating the entanglement of purification, the quantum mutual information between parties $AA'$ and $BB'$ is given by \n$ I(AA':BB')= S(AA')+S(BB')-S(AA'BB')$. Since $\vert\Psi\rangle_{ABA'B'}$ is a pure state, we have\n\begin{equation}\n E_p(A:B) = \frac{I(AA':BB')}{2}.\nonumber\n\end{equation}\nTherefore, the entanglement of purification is actually half of the optimised quantum mutual information of the purified version of the mixed density\nmatrix. The above equations are then used to prove a better lower bound for the entanglement of purification. Before that, we prove an important property of the entanglement of \npurification, an attribute of a measure of total correlation.\n\vskip 10pt\n{\textbf{ Proposition 1}}: The entanglement of purification never increases upon discarding a quantum system, i.e.,\n\begin{equation}\n E_p(A:BC)\geq E_p(A:B).\n\end{equation}\n\n\textit{Proof}: \nIf $\rho_{ABC}$ is pure, then $E_p(A:BC)=S(A)$. Also, we know that $E_p(A:B)\leq S(A)$. 
This leads to $E_p(A:BC)\geq E_p(A:B)$.\nIn the case of mixed states $\rho_{ABC}$, we note that the set of all the pure states for calculating $E_p(A:BC)$ is a subset of the set of all pure states taken for calculating\n$E_p(A:B)$.\nThis clearly implies that $ \min[I(AA':BB')]\leq \min[I(AA':BC(BC)')]$. From here we thus conclude that $E_p(A:BC)\geq E_p(A:B)$. Thus, like\nthe quantum mutual information, the entanglement of purification also never increases upon discarding quantum systems. This is a desired property: the total\ncorrelation should not increase upon discarding a quantum system. It is easily seen that the equality condition holds when $\rho_{AB}$ is supported in the symmetric\nor antisymmetric subspace.\n\nWe now state some simple inequalities for the entanglement of purification which will later be used for deriving the monogamy and polygamy conditions for it. Let \n$\vert\Psi\rangle_{ABA'B'}$ be the optimal pure state for evaluating the entanglement of purification of $\rho_{AB}$. \nUsing the sub-additivity of conditional entropy \cite{Chuang} for a composite quantum system of four parties, i.e., \n$S(AB\vert A'B')\leq S(A\vert A')+S(B\vert B')$, we get $S(ABA'B')-S(AB)\leq S(AA')-S(A)+S(BB')-S(B)$. But we know $E_p(A:B)=S(AA')=S(BB')$ and $S(ABA'B')=0$, \nsince according to the definition of the entanglement of purification $\rho_{ABA'B'}$ is a pure state. Using this in the above inequality,\nwe get $2E_p(A:B)\geq I(A:B)$. Therefore, we have the following lower bound on $E_p(A:B)$\n\begin{equation}\n E_p(A:B)\geq \frac{I(A:B)}{2}.\n\end{equation}\nExtending this to the asymptotic limit, one easily obtains $E_p^{\infty}(A:B)=E_{LO_q}(A:B)\geq\frac{I(A:B)}{2}$, by using the fact that the quantum mutual information is additive on \ntensor products of quantum states. 
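The sub-additivity of conditional entropy invoked in this derivation can itself be spot-checked numerically. The sketch below (our own code; the four-qubit mixed state is obtained by tracing one qubit out of a random five-qubit pure state) verifies $S(AB\vert A'B')\leq S(A\vert A')+S(B\vert B')$.

```python
import numpy as np

def ptrace(rho, dims, keep):
    """Trace out every subsystem not listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for q in sorted((q for q in range(n) if q not in keep), reverse=True):
        m = rho.ndim // 2
        rho = np.trace(rho, axis1=q, axis2=q + m)
    d = int(np.prod([dims[q] for q in keep]))
    return rho.reshape(d, d)

def S(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

rng = np.random.default_rng(7)
psi = rng.normal(size=32) + 1j * rng.normal(size=32)
psi /= np.linalg.norm(psi)
rho5 = np.outer(psi, psi.conj())
rho = ptrace(rho5, [2] * 5, [0, 1, 2, 3])   # random mixed state on A, B, A', B'
dims = [2, 2, 2, 2]

def cond(X, Y):
    """Conditional entropy S(X|Y) of the four-qubit state."""
    return S(ptrace(rho, dims, sorted(X + Y))) - S(ptrace(rho, dims, Y))

lhs = cond([0, 1], [2, 3])             # S(AB|A'B') with A=0, B=1, A'=2, B'=3
rhs = cond([0], [2]) + cond([1], [3])  # S(A|A') + S(B|B')
print(lhs, rhs)
```

The inequality holds for every state (it is equivalent to two applications of strong sub-additivity), which is what makes the bound $2E_p(A:B)\geq I(A:B)$ valid for any purification.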
The lower bound Eq.(6) was known for the entanglement of purification, but only in the asymptotic limit,\nand it was obtained from an operational point of view in \cite{Terhal}. Here we obtain this bound for a single copy of $\rho_{AB}$, \nand easily extend this to the asymptotic limit as $E_{LO_q}(A:B)\geq \frac{I(A:B)}{2}$, recovering the result given in Ref.\cite{Terhal}. \nAlso, the lower bound given in \cite{Terhal} for a single copy of $\rho_{AB}$ is $E_f(A:B)$. However, we know that for some states one has \n$E_f(A:B)\leq \frac{I(A:B)}{2}$. Therefore, for these states we get a better lower bound\nfor a single copy of $\rho_{AB}$. Now, we use the equation for the entanglement of purification in terms of the \nquantum mutual information to derive a lower bound for tripartite mixed states which is different from half of the quantum mutual information.\n\vskip 10pt\n{\textbf {Proposition 2}}:\nFor any pure or mixed tripartite quantum state:\n\begin{equation}\n E_p(A:BC)\geq S(A)-\frac{1}{2}[S(A\vert B)+S(A\vert C)].\n\end{equation}\n\n{\textit {Proof}}:\nLet $\vert\Psi\rangle_{ABCA'D'}$ be the optimal pure state for evaluating the entanglement of purification of $\rho_{ABC}$. \nTherefore, we have $E_p(A:BC)= \frac{I(AA':BCD')}{2}$.\nNote that the quantum mutual information of pure states satisfies the monogamy equality condition. Therefore, $E_p(A:BC)= \frac{I(AA':B)}{2}+\frac{I(AA':CD')}{2}$. \nAgain, the mutual information is non-increasing upon discarding quantum systems, hence we have \n\begin{align}\nE_p(A:BC)\geq \frac{I(A:B)}{2}+\frac{I(A:C)}{2}.\n\end{align} \nThis implies\n$E_p(A:BC)\geq S(A)- (\frac{S(A\vert B)}{2}+\frac{S(A\vert C)}{2})$. 
In general, from the previous literature we know that $E_p(A:BC)\geq \frac{I(A:BC)}{2} $.\nHowever, for the states with $I(A:BC)\leq I(A:B)+I(A:C)$, i.e., with negative interaction information, we then have\n$E_p(A:BC)\geq \frac{I(A:B)}{2}+\frac{I(A:C)}{2}\geq\frac{I(A:BC)}{2}$. Therefore, for this class of states, the entanglement of purification is upper and lower bounded as\n$S(A)\geq E_p(A:BC)\geq S(A)- (\frac{S(A\vert B)}{2}+\frac{S(A\vert C)}{2})$. \nExtending this to the asymptotic limit we obtain $E_{LO_q}(A:BC)\geq \frac{I(A:B)}{2}+\frac{I(A:C)}{2}$, using the fact that the quantum mutual information is \nadditive on tensor products of density matrices.\nWe note that the tripartite quantum states with negative interaction information are always \npolygamous for the quantum mutual information. Therefore, for these states, the above bound is always greater than the previous bound $\frac{I(A:BC)}{2}$. This may give\na better lower bound than $\frac{I(A:B)}{2}$ or the regularised classical mutual information \cite{Terhal} for states consisting of quantum as well as classical correlations, \ndepending on the negativity of the interaction information. One may extend this to the case of $n$ parties as well, such that for an $n$-partite density matrix $\rho_{A_1A_2...A_n}$,\nwe get $E_p(A_1:A_2A_3..A_n)\geq \max[\frac{I(A_1:A_iA_j..)}{2}+\frac{I(A_1:A_kA_l..)}{2}]$ etc., where one takes all possible combinations of bipartitions between $A_1A_2...A_n$\n(keeping the node $A_1$ the same for the reduced density matrices)\nto achieve the maximum value of the lower bound. 
Therefore, the quantum states with negative interaction information across any bipartition will have either the regularised \nclassical mutual information or this quantity as a better lower bound than half of their quantum mutual information.\n\vskip 10pt\n{\textbf {Corollary}}:\nThe entanglement of purification for the class of tripartite mixed states satisfying the strong sub-additivity equality condition is given by $S(A)$.\n\vskip 10pt\n{\textit {Proof}}: From the previous paragraph we see that when $S(A\vert B)+S(A\vert C)=0$, we get $E_p(A:BC)\geq S(A)$. But again, from the upper bound of the\nentanglement of purification we have $E_p(A:BC)\leq S(A) $. Therefore, combining the above two equations, one obtains $E_p(A:BC)=S(A)$ for the states which\nsatisfy the strong sub-additivity equality condition. Also, we know that mixtures\nof the tripartite mixed states each satisfying the strong sub-additivity equality condition and satisfying an additional constraint of biorthogonality if the third party is\ntraced out, satisfy the strong sub-additivity equality, and hence their entanglement of purification is also $S(A)$. Hence the proof. \nThe structure of the states obeying the strong sub-additivity equality condition has been precisely given in Ref.\cite{Hayden}. There it was shown that every separable state can \nbe extended to a state that obeys the strong sub-additivity equality condition. Therefore, from these observations\nwe can comment that all separable states can be extended to a tripartite mixed state which has the maximum amount of total correlation, $S(A)$.\nFrom the viewpoint of the structure of the states \cite{Hayden},\nthe states satisfying the SSA equality condition have the form $ \rho_{ABC} = \bigoplus_j q_j\rho_{Ab_j^L}\otimes\rho_{b_j^RC}$,\nwith states $\rho_{Ab_j^L} $ on Hilbert space $ H_A\otimes H_{b^L_j}$ and $\rho_{b^R_jC}$ on $ H_{b^R_j}\otimes H_C$ with probability distribution $q_j$. 
\nThus, all states of this form and all extensions of this class of states\nhave the maximal amount of total correlation, given by the entanglement of purification as $S(A)$.\nNow we discuss the lower bounds and exact values with some specific examples as given below.\n\n\begin{figure}\n\includegraphics[scale=0.63]{W_clscl_delta.pdf}\n\caption{Difference between lower bounds for the state $ p\vert W\rangle\langle W\vert +(1-p)[a\vert 000\rangle\langle 000\vert +(1-a)\vert 111\rangle \langle 111\vert]$.\nThe difference between the new lower bound and the previous one is always positive in this case.}\n\end{figure}\n\n\textit {Examples of exact values}:\n\nFirst we state the value of the entanglement of purification for the following class of bipartite mixed states. \nThe entanglement of purification of the states satisfying the Araki-Lieb equality condition is $S(A)$.\nWe know $S(A)\geq E_p(A:B)\geq \frac{1}{2}I(A:B)$. But $\frac{1}{2}I(A:B)=S(A)+\frac{1}{2}[S(B)-S(A)-S(AB)]$. The states satisfying the Araki-Lieb equality\ncondition have $S(B)-S(A)=S(AB)$. Then, we have $S(A)\geq E_p(A:B)\geq S(A)$. Therefore, $ E_p(A:B)= S(A)$ for these states.\nThe structure of states satisfying the Araki-Lieb equality condition is given in Ref.\cite{Zhang}. There, it was shown that the states satisfy the Araki-Lieb equality condition\nif and only if the following conditions are satisfied. First, $\cal{H_A}$ can be factorized as $\cal {H_L}\otimes\cal{H_R}$ and secondly \n$\rho_{AB}= \rho_L\otimes\vert\Psi_{RB}\rangle\langle\Psi_{RB}\vert$, where $\vert\Psi_{RB}\rangle\in\cal{H_R}\otimes\cal{H_B}$ (note that this structure corresponds to the equality $S(A)-S(B)=S(AB)$; for the condition $S(B)-S(A)=S(AB)$ used above, the roles of $A$ and $B$ are interchanged). 
\nThe structure of such states that satisfy the Araki-Lieb equality condition is therefore of the form $\rho_{AB}= \rho_L\otimes\vert\Psi_{RB}\rangle\langle\Psi_{RB}\vert$.\nTherefore, the value of the entanglement of purification for these states is $\min[S(A),S(B)]$, the local entropy of the subsystem with the smaller entropy.\n\nFor the case of tripartite states, the entanglement of purification of states of the form \n$\rho_{ABC}= p\vert GHZ\rangle\langle GHZ\vert^{\pm}+(1-p)[b\vert 000\rangle\langle 000\vert+(1-b)\vert 111\rangle \langle 111\vert]$\nis $S(A)$ for all values of $p,a,b \in [0,1]$, where $\vert GHZ\rangle^{\pm} = \sqrt{a}\vert 000\rangle \pm \sqrt{1-a}\vert111\rangle$ is the \ngeneralized GHZ state \cite{GHZ}. This holds for $n$ parties as well, i.e., for the following state\n$\rho_{ABC}= p\vert GHZ_n\rangle\langle GHZ_n\vert^{\pm} +(1-p)[b\vert 0\rangle\langle 0\vert^{\otimes n} +(1-b)\vert 1\rangle \langle 1\vert^{\otimes n}]$\nwhere $ \vert GHZ_n\rangle^{\pm} =\sqrt{a}\vert 0\rangle^{\otimes n} \pm \sqrt{1-a}\vert1\rangle^{\otimes n}$.\nThe proof is as follows. We know that for tripartite states $E_p(A:BC)\geq \frac{1}{2}[I(A:B)+I(A:C)]$. For the state given above, $I(A:B)+I(A:C)=2I(A:B)=2[S(A)+S(B)-S(AB)]=2S(A)$.\nThe first equality follows owing to the symmetry of the state between parties $B$ and $C$. The third equality follows from the fact that the nonzero eigenvalues of the density \nmatrices $\rho_{AB}$ and $\rho_{B}$ are exactly equal. Therefore, for the given state $ S(A)\geq E_p(A:BC)\geq S(A)$. Thus, $E_p(A:BC)=S(A)$. Let us consider another \nexample: the tripartite mixed state that is a mixture of $\vert GHZ\rangle^+$ and $\vert GHZ\rangle^-$, i.e., \n$\rho_{ABC}= p\vert GHZ\rangle\langle GHZ\vert^+ +(1-p)\vert GHZ\rangle\langle GHZ\vert^-$, which also has $E_p(A:BC)= S(A)$ according to\nour previous argument. 
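The entropy bookkeeping in this proof is easy to verify numerically. The following sketch (our own code; the parameter values are arbitrary choices) builds the GHZ-diagonal mixture above and checks that $\frac{1}{2}[I(A:B)+I(A:C)]=S(A)$, which pins $E_p(A:BC)$ to $S(A)$.

```python
import numpy as np

def ptrace(rho, dims, keep):
    """Trace out every subsystem not listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for q in sorted((q for q in range(n) if q not in keep), reverse=True):
        m = rho.ndim // 2
        rho = np.trace(rho, axis1=q, axis2=q + m)
    d = int(np.prod([dims[q] for q in keep]))
    return rho.reshape(d, d)

def S(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# rho = p |GHZ+><GHZ+| + (1-p)[ b |000><000| + (1-b) |111><111| ]
p, a, b = 0.7, 0.3, 0.6
ghz = np.zeros(8); ghz[0] = np.sqrt(a); ghz[7] = np.sqrt(1 - a)
rho = p * np.outer(ghz, ghz)
rho[0, 0] += (1 - p) * b
rho[7, 7] += (1 - p) * (1 - b)
dims = [2, 2, 2]

def I(X, Y):
    return S(ptrace(rho, dims, X)) + S(ptrace(rho, dims, Y)) \
        - S(ptrace(rho, dims, sorted(X + Y)))

S_A = S(ptrace(rho, dims, [0]))
low = 0.5 * (I([0], [1]) + I([0], [2]))   # lower bound on E_p(A:BC)
print(low, S_A)                           # equal, hence E_p(A:BC) = S(A)
```

The equality holds because tracing out $C$ kills the GHZ coherences, making $\rho_{AB}$ diagonal with the same nonzero spectrum as $\rho_B$, so $I(A:B)=I(A:C)=S(A)$.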
In this mixture, the states are the generalized $\vert GHZ\rangle$ states. Similar to the above,\nthis is also true for an arbitrary mixture of $n$-partite generalized $\vert GHZ\rangle$ states.\n\n\textit {Examples of lower bounds}:\nAmong other examples, for the tripartite states of the form\n$ \rho_{ABC}=p\vert W\rangle\langle W\vert +(1-p)[a\vert 000\rangle\langle 000\vert +(1-a)\vert 111\rangle \langle 111\vert]$,\nwhere $\vert W\rangle=\frac{1}{\sqrt{3}}[\vert 100\rangle+\vert 010\rangle+\vert 001\rangle]$ is the $\vert W\rangle$ state, a\nbetter lower bound\nis provided by $\frac{1}{2}[I(A:B)+I(A:C)]\geq \frac{1}{2}I(A:BC)$, since the quantum mutual information is polygamous for these classes of states. This holds even for the regularised\nversion of the entanglement of purification, i.e., $E_{LO_q}(A:BC)\geq \frac{1}{2}[I(A:B)+I(A:C)]$, owing to the additivity of the \nquantum mutual information on tensor products of density matrices. The difference $\Delta_{LB}$ between the two lower bounds, equal to\n$\frac{1}{2}[I(A:B)+I(A:C)-I(A:BC)]$, is plotted in Fig. 1, which shows that it is always positive. Again we may consider the state\n$ \rho_{ABC}= p\vert W\rangle\langle W\vert +\frac{(1-p)}{8}I_3$, for which the difference between the lower bounds is plotted in Fig. 2.\n\n\begin{figure}\n\includegraphics[scale=0.83]{W_Id_delta1.pdf}\n\caption{Difference between lower bounds for the state $ \rho= p\vert W\rangle\langle W\vert +\frac{(1-p)}{8}I_3$. The difference between the new and the old lower bound is always positive\nhere. The difference in lower bounds is given by the amount of polygamy of the quantum mutual information.}\n\end{figure}\n\nOne can use the polygamy of the quantum mutual information to lower bound the entanglement of purification in higher dimensional bipartite states. 
If a sub-party is of higher \ndimension, and if the quantum mutual information is polygamous for the lower dimensional subparts obtained by breaking the higher dimensional subparty, then it gives a better\nlower bound for the entanglement of purification than just half of the quantum mutual information of the state $\\rho_{AB}$. \n\nSuppose for a $ 2^n$ dimensional party $B$ in $\\rho_{AB}$, we break it down into two lower dimensional subparties $B_1$ and $B_2$ \\cite{Cornello}. Then, from Eq(8) we have\n$E_p(A:B)\\geq \\frac{1}{2}[I(A:B_1)+I(A:B_2)]$.\nFor negative interaction information between $B_1$ and $B_2$, i.e., $S(AB_1)+S(AB_2)+S(B_1B_2)-S(A)-S(B_1)-S(B_2)-S(AB_1B_2)< 0$, \nthe R.H.S is greater than $\\frac{I(A:B)}{2}$ \\cite{gio}. Thus it gives a better lower bound. We may say \nthat this better lower bound arises as a result of a second order polygamy relation of quantum mutual information. \nOne can easily extend to the asymptotic limit as well, thus we obtain the lower bound \n$E_{LO_q}(A:B)\\geq \\frac{1}{2}[I(A:B_1)+I(A:B_2)]> \\frac{I(A:B)}{2}$. For these states $ E_{LO_q}(A:B)$ quantifies more correlation than $\\frac{I(A:B)}{2}$ as given in the original paper.\nFor these states, one now has to compare the quantity $\\frac{1}{2}[I(A:B_1)+I(A:B_2)]$ with the classical mutual information for obtaining a better lower bound.\nThe above equation can also be written as \n\\begin{equation}\nE_p(A:B)\\geq S(A)-\\frac{1}{2}[S(A\\vert B_1)+S(A\\vert B_2)]. \\nonumber\n\\end{equation} \nFrom this equation we can say that for the $2^n$ dimensional party $B$ \nin the bipartite state $\\rho_{AB}$, if the internal structure of $B$ is such that across any subpartition inside it, the sub-additivity equality condition is satisfied then \nthe entanglement of purification of that state is $S(A)$. 
Therefore, with the aid of the new lower bound as half of the summation of the quantum mutual information of the subparties, \nwe are able to conclude about the new exact values of entanglement of purification for these classes of the higher dimensional bipartite states.\n\n\n\\section{Monogamy and polygamy of entanglement of purification}\n\nHere we explore various conditions under which the entanglement of purification will be polygamous or monogamous for pure and mixed states.\n\n\\subsection{ Monogamy and polygamy of entanglement of purification for pure tripartite states}\n\\vskip 10pt\n{\\textbf {Theorem 1}}:\nThe entanglement of purification is polygamous for a tripartite pure state $\\rho_{ABC}$:\n\\begin{equation}\nE_p(A:B)+E_p(A:C)\\geq E_p(A: BC).\n\\end{equation}\n\\textit {Proof}:\nFrom Eq(6) we know that $E_p(A:B)\\geq \\frac{I(A:B)}{2}$. Therefore, we have\n$E_p(A:B)+E_p(A:C)\\geq \\frac{I(A:B)}{2}+\\frac{I(A:C)}{2}$. In\ncase of the tripartite pure state $\\rho_{ABC}$ the right hand side of the inequality just gives $S(A)$. 
This implies that\n\\begin{equation}\n E_p(A:B)+E_p(A:C)\\geq S(A).\\nonumber\n\\end{equation}\nSince for a pure tripartite state $\\rho_{ABC}$, $E_p(A: BC)=S(A)$, we obtain:\n\\begin{equation}\n E_p(A:B)+E_p(A:C)\\geq E_p(A:BC).\\nonumber\n\\end{equation}\nThis shows the polygamous nature of the entanglement of purification for a pure tripartite state $\\rho_{ABC}$.\nOne can directly see that the same relation holds for the regularised entanglement of purification:\n$E_{LO_{q}}(A:B)+E_{LO_{q}}(A:C)\\geq E_{LO_{q}}(A:BC)$, i.e., the regularised entanglement of purification is also a polygamous quantity.\nThis proves that the entanglement of purification for any tripartite pure state is in general a polygamous quantity.\nAn implication of this is that the sum of the asymptotic entanglement costs of preparing $\\rho_{AB}$ and $\\rho_{AC}$ will\nnot be restricted by the asymptotic cost of preparing $\\rho_{A:BC}$.\n\nThe polygamy inequality above shows that there can be states satisfying the equality condition in the inequality. To analyse the states that may satisfy the equality condition, we\nfind the following relation to the monogamy of entanglement of formation.\nGiven a pure state $\\rho_{ABC}$, if the entanglement of formation violates monogamy, then the entanglement of purification will\nviolate the monogamy equality for the same state; however, the converse is not true.\nThe proof is as follows. If the entanglement of formation $E_f(A:BC)$ violates monogamy for some pure state $\\rho_{ABC}$, then we have\n$E_f(A:BC)< E_f(A:B)+E_f(A:C)$.\nBut for a pure state $\\rho_{ABC}$, we know that $E_f(A:BC)=E_p(A:BC)$. Therefore, replacing this\nin the above equation we get $ E_p(A:BC)< E_f(A:B)+E_f(A:C)$. Also, it is known that for\nany state $\\rho_{AB}$, we have $E_f(A:B)\\leq E_p(A:B) $.\nThis implies $E_p(A:BC)< E_p(A:B)+E_p(A:C)$,\nwhich shows that the entanglement of purification also violates monogamy. Hence the proof.
We know that for pure states the monogamy of entanglement of formation is equivalent to\nthe monogamy of quantum discord \\cite{gio}. Therefore, we conclude that the polygamy of quantum discord will likewise imply the polygamy of entanglement of purification.\nIn other words, monogamy of entanglement of formation or quantum discord is a necessary condition for the tripartite state $\\rho_{ABC}$ to satisfy the\nmonogamy equality condition for entanglement of purification.\nNow let us try to compare the monogamy inequality of the entanglement of formation with that of the entanglement of purification for a\nmixed tripartite state $\\rho_{ABC}$. Before that, we define a quantity called the correlation of classical and quantum origin $E_{cq}(A:B)$ of the state $\\rho_{AB}$ as\n\\begin{align}\nE_{cq}(A:B) = E_{p}(A:B) - E_{f}(A:B).\\nonumber\n\\end{align}\nThis quantity is non-negative and vanishes for pure bipartite states. Intuitively, it may contain some classical\ncorrelation and some amount of quantum correlation beyond the entanglement captured by the entanglement of formation.\nFrom the definition it is clear that for a\ngiven mixed state $\\rho_{ABC}$, if $E_{cq}(A:B)$ and $E_{f}(A:B)$ are both monogamous (polygamous), then the entanglement of purification will be monogamous (polygamous).\nOne can also show that for three-qubit states, if the correlation of classical and quantum origin obeys monogamy and the entanglement of formation satisfies \\cite{fanch}\n \\begin{align}\nE_{f}(A:B) + E_{f}(A:C) \\le 1.18,\\nonumber\n\\end{align}\nthen the entanglement of purification will obey a weak monogamy relation as given by\n\\begin{align}\nE_{p}(A:B) + E_{p}(A:C) \\le E_{p}(A:BC) + 1.18.\n\\end{align}\n\n\\subsection{Mixed states}\n\nThe entanglement of purification $E_p(A:BC)$ of a mixed tripartite state $\\rho_{ABC}$ is $\\frac{I(AA':BC(BC)')}{2}$, where the optimal pure state of $\\rho_{ABC}$
is\n$\\vert\\Psi_{ABCA'(BC)'}\\rangle$. Similarly, the entanglement of purification $E_p(A:B)$ of $\\rho_{AB}$ is $\\frac{I(AA'':BB'')}{2}$, where the optimal pure state for $\\rho_{AB}$ is\n$\\vert\\Phi_{ABA''B''}\\rangle$, and the entanglement of purification $E_p(A:C)$ of $\\rho_{AC}$ is $\\frac{I(AA''':CC''')}{2}$, where the optimal pure state for $\\rho_{AC}$ is\n$\\vert\\xi_{ACA'''C'''}\\rangle$. Therefore, the monogamy inequality for a mixed tripartite state $\\rho_{ABC}$ is\n$I(AA':BC(BC)')\\geq I(AA'':BB'')+I(AA''':CC''')$. But owing to the difficult optimizations involved, we may not be able to check this inequality directly. Instead, we analyze\nsome specific cases of mixed states that are polygamous for entanglement of purification as follows.\n\nFirst we note that the tripartite mixed states satisfying the strong sub-additivity equality condition are polygamous for entanglement of purification. To see this,\nlet $\\vert\\Psi_{ABA'B'}\\rangle$ and $\\vert\\Psi_{ACA''C''}\\rangle$ be the optimal pure states for $\\rho_{AB}$ and $\\rho_{AC} $ respectively.\nThen $E_p(A:B)+E_p(A:C)\\geq \\frac{1}{2}[I(A:B)+I(A:C) + I(AA':B')+I(AA'':C'')]$. But $ I(A:B)+I(A:C) = 2S(A)-(S(A\\vert B)+S(A\\vert C))$,\nand if the strong sub-additivity equality condition is satisfied then we have $S(A\\vert B)+S(A\\vert C) = 0$.\nPutting these in the equation, we get $ E_p(A:B)+E_p(A:C)\\geq S(A) + \\frac{1}{2}[I(AA':B')+I(AA'':C'')] $.\nBut the last two terms on the R.H.S. are non-negative, as the quantum mutual information is always non-negative, vanishing only for product states.\nAlso, for these states we know $E_p(A:BC)= S(A)$. Thus, combining these inequalities together we obtain\n$ E_p(A:B)+E_p(A:C)\\geq E_p(A:BC)$.
Thus, the entanglement of purification is polygamous for the class of states that satisfy the strong sub-additivity equality.\nAmong other classes of states, if any one of the reduced density matrices $\\rho_{AB}$, $\\rho_{AC}$ of a mixed state $\\rho_{ABC}$ is entirely supported on\nthe symmetric or antisymmetric subspace, then the state will violate monogamy of entanglement of purification.\nThis follows from the result by Winter \\textit{et al}.\\cite{Matthias1}. The entanglement of purification of such bipartite density matrices (with the same dimension for both parties) is\n$S(A)$. But the entanglement of purification of the tripartite mixed state is also $S(A)$, and in general $E_p(A:C)\\geq 0$. Therefore, the polygamy inequality follows directly by\ncombining the above observations. Also, any tripartite extension of bipartite mixed states that satisfy the Araki-Lieb equality condition for their von-Neumann entropy\nis polygamous for entanglement of purification. We know that the states that satisfy the Araki-Lieb equality condition have $E_p(A:B)= S(A)$ [respectively $S(B)$].\nHowever, the other reduced density matrix has some nonzero correlation and therefore nonzero entanglement of purification. Thus, in this case we have\n$E_p(A:B)+E_p(A:C)\\geq S(A)=E_p(A:BC)$, making entanglement of purification a polygamous measure of total correlation. Though for pure tripartite states we could prove the general\npolygamy inequality, for mixed states it is not clear whether such a general inequality exists.\n\nNext, we discuss the relation to the polygamy of quantum mutual information.\nSuppose $E_p(A:B)+E_p(A:C)= S(B\\vert A)+S(C\\vert A)+I(A:B)+I(A:C)$. This is at least $ S(BC\\vert A)+I(A:B)+I(A:C)$, which in turn is at least\n$ E_p(A:BC)+I(A:B)+I(A:C)-I(A:BC)$. From the above inequalities one can see that if the mutual information is polygamous,\nthen the entanglement of purification is polygamous as well.
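The polygamy of the quantum mutual information invoked here can be illustrated concretely. For the GHZ-type mixtures considered earlier, one can check that $I(A:B)+I(A:C)=2S(A)$ and that the polygamy gap $I(A:B)+I(A:C)-I(A:BC)$ equals $S(ABC)\\geq 0$, so the mutual information is polygamous throughout that family. A minimal numerical sketch with NumPy (the helper functions are our own, not from the text):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def ptrace(rho, keep, dims):
    """Partial trace keeping the subsystems listed in `keep` (original labels)."""
    n = len(dims)
    t = rho.reshape(dims + dims)
    for i in reversed(range(n)):       # trace highest labels first so axes stay aligned
        if i not in keep:
            t = np.trace(t, axis1=i, axis2=i + n)
            n -= 1
    d = int(round(np.sqrt(t.size)))
    return t.reshape(d, d)

def ghz_mixture(p, a, b):
    """rho_ABC = p |GHZ_a><GHZ_a| + (1-p)(b|000><000| + (1-b)|111><111|)."""
    ghz = np.zeros(8)
    ghz[0], ghz[7] = np.sqrt(a), np.sqrt(1 - a)
    rho = p * np.outer(ghz, ghz)
    rho[0, 0] += (1 - p) * b
    rho[7, 7] += (1 - p) * (1 - b)
    return rho

dims = [2, 2, 2]
for (p, a, b) in [(0.6, 0.3, 0.8), (0.2, 0.5, 0.5), (0.9, 0.7, 0.1)]:
    rho = ghz_mixture(p, a, b)
    SA = entropy(ptrace(rho, [0], dims))
    I_AB = SA + entropy(ptrace(rho, [1], dims)) - entropy(ptrace(rho, [0, 1], dims))
    I_AC = SA + entropy(ptrace(rho, [2], dims)) - entropy(ptrace(rho, [0, 2], dims))
    I_ABC = SA + entropy(ptrace(rho, [1, 2], dims)) - entropy(rho)
    # sum of pairwise mutual informations equals 2S(A) for this family
    assert abs(I_AB + I_AC - 2 * SA) < 1e-9
    # polygamy gap of the mutual information equals S(ABC) >= 0
    assert abs((I_AB + I_AC - I_ABC) - entropy(rho)) < 1e-9
```

The first assertion reproduces the identity $I(A:B)+I(A:C)=2S(A)$ used in the exact-value argument above, and the second shows that the polygamy gap is exactly the global entropy $S(ABC)$, which is manifestly non-negative.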
Again, a sufficient condition for monogamy of $E_p$\nis $\\frac{I(A:BC)}{2}\\geq E_p(A:B)+E_p(A:C)$. This implies $\\frac{I(A:BC)}{2}\\geq \\frac{I(A:B)}{2}+\\frac{I(A:C)}{2}$, which is nothing but\n$I(A:BC)\\geq I(A:B)+I(A:C)$, i.e., the monogamy inequality for the quantum mutual information. This says that the states satisfying this particular sufficient\ncondition for monogamy of $E_p$ will also satisfy the monogamy inequality of quantum mutual information.\n\n\\subsection{Polygamy of entanglement of purification for multiparty}\n\nNow we investigate the polygamy of entanglement of purification in the case of $n$-partite density matrices. The conditions for polygamy of mixed states\ncarry over here as sufficient conditions for polygamy. In other words, the $n$-partite density matrices, pure or mixed, are polygamous if\nany one of the reduced density matrices of the subsystem satisfies the Araki-Lieb equality condition or the strong sub-additivity equality condition, or is supported on the symmetric\nor antisymmetric subspace. Now we state a simple sufficient condition for the polygamy of entanglement of purification and construct some examples.\n\\vskip 10pt\n{\\textbf{Proposition 3}}: All the $n$-partite states, pure or mixed, with $\\sum_{i=1}^n I(A:A_i)\\geq 2S(A)$ are polygamous for entanglement of purification.\n\\vskip 10pt\n\n\\textit{Proof}: We have $\\sum_{i=1}^n E_p(A:A_i)\\geq\\frac{1}{2}[\\sum_{i=1}^n I(A:A_i)]$. From this we get\n$\\sum_{i=1}^n E_p(A:A_i)\\geq S(A)+\\frac{1}{2}[\\sum_{i=1}^n I(A:A_i)-2S(A)]$. Since $E_p(A:A_1\\ldots A_n)\\leq S(A)$, the condition in the proposition is a sufficient condition for polygamy of\nentanglement of purification. A large number of states will satisfy this condition, and thus will be polygamous. However, some states will violate this condition,\nand for those states the criterion is inconclusive about the polygamous nature.
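As a numerical illustration of Proposition 3 (our own sketch; the $n$-party $\\vert W\\rangle$ state is treated analytically below), one can evaluate the margin $\\sum_i I(A:A_i)-2S(A)$ for $n$-qubit $\\vert W\\rangle$ states directly:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def ptrace(rho, keep, dims):
    """Partial trace keeping the subsystems listed in `keep` (original labels)."""
    n = len(dims)
    t = rho.reshape(dims + dims)
    for i in reversed(range(n)):       # trace highest labels first so axes stay aligned
        if i not in keep:
            t = np.trace(t, axis1=i, axis2=i + n)
            n -= 1
    d = int(round(np.sqrt(t.size)))
    return t.reshape(d, d)

def w_state(n):
    """|W> = (|10...0> + |01...0> + ... + |00...1>)/sqrt(n) as a density matrix."""
    v = np.zeros(2 ** n)
    for k in range(n):
        v[1 << k] = 1 / np.sqrt(n)
    return np.outer(v, v)

def polygamy_margin(n):
    """sum_i I(A:A_i) - 2 S(A), taking qubit 0 of the n-qubit W state as party A."""
    dims = [2] * n
    rho = w_state(n)
    SA = entropy(ptrace(rho, [0], dims))
    total = 0.0
    for i in range(1, n):
        Si = entropy(ptrace(rho, [i], dims))
        Sij = entropy(ptrace(rho, [0, i], dims))
        total += SA + Si - Sij
    return total - 2 * SA

assert abs(polygamy_margin(3)) < 1e-9   # equality at n = 3
assert polygamy_margin(4) > 0           # strict for n = 4
assert polygamy_margin(5) > 0           # and for n = 5
```

The margin vanishes at $n=3$, consistent with the saturation of the mutual-information bound for pure tripartite states, and is strictly positive for the larger $n$ tested, so the sufficient condition of Proposition 3 is met.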
\n\nUsing the above relation, we easily see that\nthe $n$-party generalized $\\vert GHZ\\rangle $ and the $n$-party $ \\vert W\\rangle$ states are polygamous with respect to the entanglement of purification. We can see the\nproofs explicitly as follows. We have the generalized $GHZ$ state as $\\vert GHZ\\rangle = \\sqrt{p}\\vert 0\\rangle^{\\otimes n} + \\sqrt{1-p}\\vert 1\\rangle^{\\otimes n}$, where $0\\leq p\\leq1$\n\\cite{GHZ}.\nBut we have obtained before that for tripartite pure states, $ E_p(A:A_1)+E_p(A:A_2)\\geq S(A)$. Thus, it holds true for the tripartite generalized $\\vert GHZ\\rangle$ states as well.\nNow for $n\\geq 3$, we see that all the reduced density matrices are exactly the same and the L.H.S. becomes $\\sum_{i=1}^n E_p(A:A_i)$.\nThis is nothing but $ E_p(A:A_1)+E_p(A:A_2)+\\sum_{i=3}^n E_p(A:A_i)$. Since each of the two-party reduced density matrices is exactly the same as the two-party reduced density matrices\nin the case of the tripartite pure state, using the above two equations we obtain $\\sum_{i=1}^n E_p(A:A_i)\\geq S(A)+\\sum_{i=3}^n E_p(A:A_i)$. The last term on the R.H.S. is\nalways non-negative. Therefore we obtain $ \\sum_{i=1}^n E_p(A:A_i)\\geq S(A)$, rendering the entanglement of purification polygamous for all $n$ in the case of the generalized\n$\\vert GHZ\\rangle$ state. This is expected, since the reduced density matrices share only classical correlations with each other.\nWe now consider $\\vert W\\rangle = \\frac{1}{\\sqrt{n}}[\\vert 10..0\\rangle + \\vert 01..0\\rangle + .. ]$, where there are $n$ terms within the brackets \\cite{Dur}.\nWe show that this state is also polygamous for all values of $n$. To see this, first we note that all the two-party reduced density matrices $\\rho_{AA_i}$ of this state\nare exactly the same due to the symmetry of the state.
Specifically, each $\\rho_{AA_i} = \\frac{1}{n}[(n-2)\\vert 00\\rangle\\langle 00\\vert+2\\vert\\Phi^+\\rangle\\langle\\Phi^+\\vert]$, where\n$\\vert\\Phi^+\\rangle=\\frac{1}{\\sqrt{2}}[\\vert 10\\rangle+\\vert 01\\rangle]$ is the Bell state. Now we calculate $\\frac{1}{2}[\\sum_{i} I(A:A_i)]=\\frac{n-1}{2}I(A:A_1)$, since\nall the two-party reduced density matrices are the same. The eigenvalues of $\\rho_A$ are $\\{\\frac{1}{n},\\frac{n-1}{n}\\}$ and those of $\\rho_{AA_1}$ are $\\{\\frac{2}{n},\\frac{n-2}{n}\\}$, so that $S(A)=S(A_1)=h(\\frac{1}{n})$ and $S(AA_1)=h(\\frac{2}{n})$, where $h(x)=-x\\log_2 x-(1-x)\\log_2(1-x)$ is the binary entropy.\nPutting these values in the expression above, we get\n$\\frac{n-1}{2}I(A:A_1)-S(A)=(n-2)h(\\frac{1}{n})-\\frac{n-1}{2}h(\\frac{2}{n})$. This value vanishes for $n=3$ and is strictly positive for all $n>3$, so the sufficient condition of Proposition 3 is met.\nThus, combining the earlier result for the tripartite pure state with the above finding, we conclude that the entanglement of purification is polygamous\nfor the $n$-party $\\vert W\\rangle$ state.\n\nAs in the case of mixed states, where we stated some conditions relating the monogamy of entanglement of purification to that of the quantum mutual information,\nwe now state a proposition connecting the polygamy of quantum mutual information to the polygamy of entanglement of purification for a pure state of $n$ parties.\n\\vskip 10pt\n{\\textbf{Proposition 4}}: All the $n$-party pure states for which the quantum mutual information is $(n-1)$-partite polygamous for at least one of the $(n-1)$-party reduced density\nmatrices of the pure state are $n$-partite polygamous for both the entanglement of purification and the quantum mutual information.\n\\vskip 10pt\n{\\textit{Proof}}: Note that for an $n$-partite pure state, we have $\\sum_{i=2}^n E_p(A_1:A_i)\\geq \\frac{1}{2}\\sum_{i=2}^n I(A_1:A_i)$. Now, let us take a reduced density\nmatrix $\\rho_{A_1A_2...A_{n-1}}$ to be polygamous for quantum mutual information, i.e., $\\sum_{i=2}^{n-1} I(A_1:A_i)\\geq I(A_1:A_2...A_{n-1})$.
Then, we have\n$I(A_1:A_n)+\\sum_{i=2}^{n-1} I(A_1:A_i)\\geq I(A_1:A_n)+I(A_1:A_2...A_{n-1})$. Since the $n$-partite quantum state we are considering is a pure state,\nthe monogamy of quantum mutual information is saturated, and the R.H.S. of this equation is nothing but $ I(A_1:A_2A_3...A_n)$. But, we know for a pure state\n$ I(A_1:A_2A_3...A_n)=2S(A_1)$. From here it then follows that $\\sum_{i=2}^n E_p(A_1:A_i)\\geq S(A_1)$ and also $\\sum_{i=2}^n I(A_1:A_i)\\geq 2S(A_1)$.\nThese two relations are just the polygamy relations for the entanglement of purification and the quantum mutual information, respectively, for an $n$-partite pure state. It is easy to see that\none could take any one of the $(n-1)$ different reduced density matrices of the $n$-partite pure state (keeping the node $A_1$ intact for each reduced density matrix)\nas the one polygamous for the quantum mutual information and eventually recover the polygamy relation for both the entanglement of purification and the quantum mutual information.\nAs a specific example of this proposition, we easily see that all the four-party pure states with negative interaction information across any pair of their bipartite reduced density\nmatrices are polygamous for entanglement of purification.\n\n\n\\section{ Sub-additivity on tensor products}\nAdditivity is a desirable property for a measure of total correlation. Quantum mutual information is an additive measure of correlation; however, entanglement of\npurification may not be an additive measure, as has been shown with strong numerical support in Ref.\\cite{Chen}. Here we prove that if it is non-additive then it has to be a\nsub-additive quantity.
We have the following theorem.\n\\vskip 5pt\n{\\textbf {Theorem 2}}:\nThe entanglement of purification is sub-additive on tensor products of density matrices, i.e., for a tensor product density matrix ${\\rho_{AB}\\otimes\\sigma_{CD}}$, the following inequality\nholds:\n\\begin{equation}\n E_p(AC: BD)\\leq E_p(A: B)+E_p(C: D),\\nonumber\n\\end{equation}\nwith equality if and only if the optimal pure state for the tensor product of density matrices is\nthe tensor product of the optimal pure states of the corresponding density matrices, up to local unitary equivalence.\n\\vskip 10pt\n{\\textit {Proof}}:\nLet us suppose $\\vert\\Psi_{ABA'B'}\\rangle$ and $\\vert\\Phi_{CDC'D'}\\rangle$ are the optimal purifications for $\\rho_{AB}$ and $\\sigma_{CD}$\nachieving the value of the entanglement of\npurification. Then $ \\vert\\Psi_{ABA'B'}\\rangle\\otimes\\vert\\Phi_{CDC'D'}\\rangle $ is a valid purification for $\\rho_{AB}\\otimes\\sigma_{CD} $,\nhowever not generally the optimal one. Now, we know\nthat $ E_p(A: B)=\\frac{I(AA': BB')}{2}$ and $ E_p(C: D)=\\frac{I(CC': DD')}{2}$.\nAdding these two quantities we get $E_p(A: B)+E_p(C: D)=\\frac{I(AA': BB')}{2}+\\frac{I(CC': DD')}{2}$.\nBut the quantum mutual information is additive on tensor products of quantum states. Therefore, $\\frac{I(AA': BB')}{2}+\\frac{I(CC': DD')}{2}=\\frac{I(AA'CC': BB'DD')}{2}$,\nwhere $I(AA'CC': BB'DD')$ is the quantum mutual information of the state $\\vert\\Psi_{ABA'B'}\\rangle\\otimes\\vert\\Phi_{CDC'D'}\\rangle$.
Thus, we have\n\\begin{align} \\nonumber E_p(A: B)+E_p(C: D) =\\frac{I(AA'CC': BB'DD')}{2}.\\nonumber\n\\end{align}\nSince $\\vert\\Psi_{ABA'B'}\\rangle\\otimes\\vert\\Phi_{CDC'D'}\\rangle$ is only one such purification of $\\rho_{AB}\\otimes\\sigma_{CD}$ \nand the optimization for $E_p(AC:BD)$ is over all possible purifications of $\\rho_{AB}\\otimes\\sigma_{CD}$ denoted by the set of pure states \n$\\{\\vert\\xi_{ABCDA''B''}\\rangle\\}$, therefore we have\n\\begin{align}\\nonumber\n \\min_{A''B''} \\frac{I(ACA'': BDB'')}{2}\n \\leq \\frac{I(ACA'C': BDB'D')}{2}, \\nonumber\n\\end{align}\nwhere $I(ACA'': BDB'')$ is the quantum mutual information of any such purification $ \\vert\\xi_{ABCDA''B''}\\rangle\\ $ and the minimum is over all such purification of \n$\\rho_{AB}\\otimes\\sigma_{CD}$ by the addition of ancilla part $A''B''$ to it. Hence we easily see that the above equation is nothing but the following inequality,\n\\begin{align}\\nonumber\nE_p(AC: BD) \\nonumber\n\\leq \\frac{I(ACA'C':BDB'D')}{2},\\nonumber\n\\end{align}\nwhich directly implies that, $E_p(AC: BD)\\leq E_p(A:B)+E_p(C:D)$ for the four partite tensor product density matrix $\\rho_{AB}\\otimes\\sigma_{CD}$.\nNow, in the following paragraph we check the equality condition. \n\nWhile checking the equality condition, we now omit the subscripts and write $\\vert\\Psi_{ABA'B'}\\rangle $ \nas $\\vert\\Psi\\rangle$, $\\vert\\Phi_{CDC'D'}\\rangle$ as $ \\vert\\Phi\\rangle$ and $\\vert\\xi_{ABCDA''B''}\\rangle$ as $\\vert\\xi\\rangle$ for simplicity.\nFirst, we check that if $ \\vert\\xi\\rangle=\\vert\\Psi\\rangle\\otimes\\vert\\Phi\\rangle$, then whether the \ndimensionality of the optimal purifying state agrees with the dimension of the Hilbert space of the ancilla part, as given in Ref.\\cite {Terhal}. 
\nWe note that if $ \\vert\\xi\\rangle=\\vert\\Psi\\rangle\\otimes\\vert\\Phi\\rangle$, then $d_{A''}(\\vert\\xi\\rangle)=d_{A'}(\\vert\\Psi\\rangle)d_{C'}(\\vert\\Phi\\rangle),\nd_{B''}(\\vert\\xi\\rangle)=d_{B'}(\\vert\\Psi\\rangle)d_{D'}(\\vert\\Phi\\rangle)$.\nAccording to the theorem given in Ref.\\cite {Terhal}, $d_{A'}(\\vert\\Psi\\rangle)=d_{AB}(\\rho_{AB})$, $d_{C'}(\\vert\\Phi\\rangle)= d_{CD}(\\sigma_{CD})$ and \n$ d_{A''}(\\vert\\xi\\rangle)=d_{ABCD}(\\rho_{AB}\\otimes\\sigma_{CD})$. Similarly by the same theorem, we have \n$d_{B'}(\\vert\\Psi\\rangle)=d_{AB}^2(\\rho_{AB})$, $d_{D'}(\\vert\\Phi\\rangle)= d_{CD}^2(\\sigma_{CD})$ and \n$ d_{B''}(\\vert\\xi\\rangle)=d_{ABCD}^2(\\rho_{AB}\\otimes\\sigma_{CD})$. Now, we verify if the above two equations are consistent with dimensions proposed in Ref.\\cite{Terhal}\nfor $\\vert\\xi\\rangle$. Putting the values of $d_{A'}$ and $d_{B'}$ in terms of $d_{AB}$, we get \n$d_{A''}(\\vert\\xi\\rangle)=d_{AB}(\\rho_{AB})d_{CD}(\\sigma_{CD})$ and $d_{B''}(\\vert\\xi\\rangle)=d_{AB}^2(\\rho_{AB})d_{CD}^2(\\sigma_{CD})$. These values can be reframed as\nthe dimensions of the tensor product of the corresponding density matrices, i.e., $d_{AB}(\\rho_{AB})d_{CD}(\\sigma_{CD})= d_{ABCD}(\\rho_{AB}\\otimes\\sigma_{CD})$. Similarly \n$d_{AB}^2(\\rho_{AB})d_{CD}^2(\\sigma_{CD})= d_{ABCD}^2(\\rho_{AB}\\otimes\\sigma_{CD})$. This holds true even when \n$\\vert\\xi\\rangle=U_{A'C'}\\otimes U_{B'D'}\\vert\\Psi\\rangle\\otimes\\vert\\Phi\\rangle$, since the unitary matrices do not map density matrices from Hilbert space of a given dimension\nto that of a different dimension. \nThis shows that the dimensions are in agreement with those given by the theorem in \nRef.\\cite{Terhal}.\n\nWe now move on to the equality condition for the mutual information. 
\nFor this purpose, let us note that if $\\vert\\xi\\rangle=U_{A'C'}\\otimes U_{B'D'}\\vert\\Psi\\rangle\\otimes\\vert\\Phi\\rangle$,\nthen owing to the additivity of quantum mutual information and its invariance under the action of local unitaries,\none has $I(ACA'':BDB'')=I(AA':BB')+I(CC':DD')$, where the mutual information terms are those of $\\vert\\xi\\rangle$, $\\vert\\Psi\\rangle$,\nand $\\vert\\Phi\\rangle$ respectively.\nThis implies that $E_p(AC:BD)=E_p(A:B)+E_p(C:D)$ for $\\rho_{AB}\\otimes\\sigma_{CD}$. This proves the `if' part of the\ntheorem above.\n\nFor the `only if' condition, we see that if $ \\vert\\xi\\rangle\\neq U_{A'C'}\\otimes U_{B'D'}\\vert\\Psi\\rangle\\otimes\\vert\\Phi\\rangle$,\nthen $I(\\vert\\xi\\rangle)\\neq I(\\vert\\Psi\\rangle\\otimes\\vert\\Phi\\rangle)$. This is because the action of a non-local unitary will\nchange the spectra of the reduced density matrices and thus will change the value of the quantum mutual information across the $ACA'C':BDB'D'$ partition. As a result,\nthe equality holds only if the optimal pure state for the tensor product of the density matrices is the tensor\nproduct of the corresponding optimal pure states, up to the local unitary equivalence $U_{A'C'}\\otimes U_{B'D'}$.\nTherefore, we see that if the entanglement of purification is non-additive, it is actually sub-additive. Thus, the above theorem rules out the\nsuper-additivity of entanglement of purification. The sub-additivity has been shown numerically in Ref.\\cite{Chen} for the Werner states.\nIt is important to note that, according to the result of Ref.\\cite{Terhal}, one is guaranteed to find the optimal pure state in the Hilbert space of the aforementioned\ndimensionality. In that case our equality condition holds for the tensor product of the optimal pure states. However, it does not rule out the existence of optimal pure states in\nHilbert spaces of other dimensions.
Thus, in addition to the optimal pure state in the Hilbert space of the dimensions given by the theorem, one may find other optimal pure states in\nHilbert spaces of higher or lower dimension. In particular, one might be able to find an optimal pure state in a Hilbert space of lower dimension.\nAs an example, for the Werner state the optimal pure state for entanglement of purification can be found in a Hilbert space of dimension $4\\times 4$, as shown numerically\nin Ref.\\cite{Terhal}.\n\nUsing the results we have obtained on entanglement of purification,\nwe identify the classes of states that are additive on tensor products for the entanglement of purification as follows. We see that\nthe bipartite states satisfying the equality condition in the Araki-Lieb inequality, the higher dimensional bipartite states satisfying the equality condition in strong sub-additivity\nwhen any party of them can be broken down into two lower dimensional subparties, and the tripartite states satisfying the strong sub-additivity equality condition\nare additive on tensor products for entanglement of purification.\nThus, for the above classes of states, the regularised entanglement of purification and the optimal visible\ncompression rate are given by the entanglement of purification. Apart from this, we are also able to draw the conclusion that\nthe entanglement of purification is additive on tensor products if and only if it is also super-additive on tensor products for all quantum states.\nHowever, whether there can be states $\\rho_{AB}\\otimes\\sigma_{CD}$ for which $ E_p(AC:BD) < E_p(A:B)_{\\rho_{AB}}+E_p(C:D)_{\\sigma_{CD}}$ is still an open question.
We note that the\nquestion of non-additivity is now reduced to only the sub-additivity condition, ruling out the possibility of\n$E_p(AC:BD) > E_p(A:B)_{\\rho_{AB}}+E_p(C:D)_{\\sigma_{CD}}$ for $\\rho_{AB}\\otimes\\sigma_{CD}$.\n\n\n\\section{ Implications on the quantum advantage of dense coding}\n\nQuantum dense coding is a quantum communication protocol in which, with the help of a quantum state shared between two distant observers and a noiseless quantum channel, one sends classical information beyond the classical capacity achievable without entanglement. The quantum advantage of dense coding is the\nincrease in the rate of classical information transmission due to shared entanglement. Mathematically, the quantum advantage of dense coding of a quantum state $\\rho_{AB}$ is\ndefined in terms of the\ncoherent information as $\\Delta(A\\rangle B) = S(B)-\\inf_{\\Lambda_A}S[(\\Lambda_A\\otimes I_B)\\rho_{AB}]=\\sup_{\\Lambda_A}I'(A\\rangle B)$,\nwhere the infimum or supremum is performed over all the maps $\\Lambda_A$ acting on the state $\\rho_{AB}$ and $ I'(A\\rangle B)=S(B)-S(AB)$ is the coherent information of $\\rho_{AB}$.\nIt was proved in Ref.\\cite{Horodecki} that the quantum advantage of dense coding is a non-negative quantity. Again,\na quantum state is said to be dense codeable if the above quantity $ \\Delta(A\\rangle B)$ is strictly positive.\nIt was shown in the paper by Horodecki \\cite{Horodecki} that it suffices to consider\nonly the extremal TPCP maps in evaluating the infimum or supremum for the above quantity, owing to the concavity of the von-Neumann entropy. It was also argued that the\nquantum advantage of dense coding may be non-additive, though this was not proved definitively.
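Since the identity channel is one admissible map $\\Lambda_A$, the coherent information $I'(A\\rangle B)=S(B)-S(AB)$ of $\\rho_{AB}$ itself lower-bounds $\\Delta(A\\rangle B)$; a positive value therefore already certifies that a state is dense codeable. A small illustrative check (our own sketch, not from Ref.\\cite{Horodecki}):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def coherent_info(rho, dA, dB):
    """I'(A>B) = S(B) - S(AB): a lower bound on Delta(A>B),
    obtained by choosing the identity channel for Lambda_A."""
    rho_B = np.trace(rho.reshape(dA, dB, dA, dB), axis1=0, axis2=2)
    return entropy(rho_B) - entropy(rho)

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell)          # maximally entangled two-qubit state
rho_mix = np.eye(4) / 4                  # product of maximally mixed qubits
rho_cc = np.diag([0.5, 0, 0, 0.5])       # classically correlated state

assert abs(coherent_info(rho_bell, 2, 2) - 1.0) < 1e-9  # I' = 1: dense codeable
assert coherent_info(rho_mix, 2, 2) < 0                 # I' = -1: no certificate
assert abs(coherent_info(rho_cc, 2, 2)) < 1e-9          # I' = 0: borderline
```

The Bell state attains $I'=1$ and is therefore dense codeable, while for the maximally mixed and classically correlated states this particular lower bound yields no advantage certificate.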
Apart from the aforementioned properties, the quantum advantage of dense coding\nwas shown to obey a monogamy relation with the entanglement of purification as $ S(B)\\geq \\Delta(A\\rangle B)+E_p(B:C)$ \\cite{Horodecki}, for any tripartite state $\\rho_{ABC}$, with equality\nfor pure tripartite states.\n\nTherefore, from the monogamy inequality and the polygamy of\nentanglement of purification for pure tripartite states as well as some of the mixed tripartite states mentioned previously, it follows that\n\\begin{equation}\n\\Delta(B\\rangle A)+\\Delta(C\\rangle A)\\leq \\Delta(BC\\rangle A),\\nonumber\n\\end{equation}\nimplying that the quantum advantage of dense coding is strictly monogamous for the tripartite pure states as well as the other tripartite mixed states mentioned previously.\nThis property is straightforwardly carried over to the asymptotic limit as well.\nThus, we have $\\Delta^{\\infty}(B\\rangle A)+\\Delta^{\\infty}(C\\rangle A)\\leq \\Delta^{\\infty}(BC\\rangle A)$ for those same sets of states. Also, it is easy to see that monogamy holds for the\nmixed states satisfying the SSA equality\ncondition, the symmetric (antisymmetric) subspace condition, the states satisfying the Araki-Lieb equality condition, and the cases of the $n$-partite pure states.\n\nIn the same way as for the entanglement of purification, we conclude that the quantum advantage of dense coding is super-additive on tensor products of density matrices, i.e.,\nfor a four-partite tensor product state $\\rho_{AB}\\otimes\\sigma_{CD}$, we have the following inequality:\n\\begin{equation}\n \\Delta(AC\\rangle BD)\\geq \\Delta(A\\rangle B) +\\Delta(C\\rangle D).\\nonumber\n\\end{equation}\nThe proof is as follows. By definition, we have $\\Delta(A\\rangle B)=\\sup_{\\Lambda_A}I'(A\\rangle B)$.
Thus, for the density matrix $\\rho_{AB}\\otimes\\sigma_{CD}$ we have\n$\\Delta(A\\rangle B)+\\Delta(C\\rangle D)=\\sup_{\\Lambda_A}I'(A\\rangle B)+\\sup_{\\Lambda_C}I'(C\\rangle D)=\n\\sup_{\\Lambda_{A}\\otimes\\Lambda_{C}}I'(AC\\rangle BD)$.\nThe second equality follows from the fact that the von-Neumann entropies are additive on tensor products of density matrices. Again for $\\rho_{AB}\\otimes\\sigma_{CD}$, by definition we have\n$\\Delta(AC\\rangle BD)=\\sup_{\\Lambda_{AC}}I'(AC\\rangle BD)$.\nHowever, the optimization for $\\rho_{AB}\\otimes\\sigma_{CD}$ is over all $\\Lambda_{AC}$, and $\\{\\Lambda_A\\otimes\\Lambda_C\\}$ is only a subset of $\\{\\Lambda_{AC}\\}$. Thus,\n$\\sup_{\\Lambda_{AC}}I'(AC\\rangle BD)\\geq \\sup_{\\Lambda_A\\otimes\\Lambda_C}I'(AC\\rangle BD)$ for the same four-partite product state $\\rho_{AB}\\otimes\\sigma_{CD}$.\nWith the last inequality we arrive at the super-additivity relation\nfor the quantum advantage of dense coding for tensor product states of the form $\\rho_{AB}\\otimes\\sigma_{CD}$,\ni.e., $\\Delta(AC\\rangle BD)\\geq \\Delta(A\\rangle B) +\\Delta(C\\rangle D)$ for $\\rho_{AB}\\otimes\\sigma_{CD}$.\n\n\nNot only super-additivity, but the monogamy inequality with entanglement of purification has other implications for the quantum advantage of dense coding as well.\nFrom the lower bound and some of the exact values of entanglement of purification,\nusing the monogamy property and the non-negativity of the quantum advantage of dense coding, we can identify some of the quantum states that have no\nquantum advantage of dense coding and also put an upper bound on it for some specific cases.\n\nLet $\\rho_{ABCD}$ be a quantum state such that the strong sub-additivity equality condition is satisfied for the reduced density matrix $\\rho_{ABC}$, i.e., $S(B\\vert A)+S(B\\vert C)=0$.\nThen, from the monogamy inequality with entanglement of purification, we get $ S(B)\\geq \\Delta(D\\rangle B)+E_p(B:AC)$. But, in this case $E_p(B:AC)=S(B)$.
Substituting this\nvalue, we have $ \\Delta(D\\rangle B)\\leq 0$. But since $ \\Delta(D\\rangle B)\\geq 0$, we must have $ \\Delta(D\\rangle B)=0$ for the states $\\rho_{BD}$, i.e., the quantum advantage of\ndense coding vanishes precisely for these states. Similarly, for any tripartite state $\\rho_{ABC}$, pure or mixed, if the state $\\rho_{BC}$ satisfies the Araki-Lieb equality condition,\nthen the quantum advantage of dense coding $\\Delta(A\\rangle B)$ of $\\rho_{AB}$ also becomes zero. Apart from the above exact values,\nthe lower bound on the entanglement of purification puts an upper bound on the quantum advantage of dense coding via the monogamy relation between the two quantities.\n\n\n\\section{ Conclusions and Outlook}\nIn this paper, we find that the monogamous nature of correlations is not unique to quantum correlations, but can also hold for the total correlations of certain quantum \nstates. Thus, monogamy is not a property of quantum correlations alone. Contrary to the monogamous nature of the mutual \ninformation for tripartite pure states, we have proved that the entanglement of purification can be polygamous for such states.\nThis shows that even though the mutual information and the entanglement of purification are both supposed to capture the total correlation, the nature of these correlations\ncan be completely opposite, at least for tripartite systems.\nIn the case of pure and mixed states, the monogamy of the entanglement of purification is related to the monogamy of the entanglement of formation. Also, we have found a\nnecessary condition for monogamy of the entanglement of purification for a special class of mixed states, in terms of the interaction information or the polygamy of the\nquantum mutual information. A new lower bound on the entanglement of purification has been given for tripartite mixed states and higher-dimensional bipartite systems. 
\nUsing the formula for the lower bound, we have been able to find the exact values\nof the entanglement of purification for some classes of states. Furthermore,\nin this paper we have also shown that if the entanglement of purification is not additive, it has to be a sub-additive quantity. \nUsing these results, we have also shown that the quantum advantage of dense coding\nis strictly monogamous for all tripartite pure states and is super-additive on tensor products. We have also identified some of the quantum states with no \nquantum advantage of dense coding.\nWe have brought these important aspects of the measure of total correlation, as well as of the quantum advantage of dense coding, to the forefront. \nThese will help us better understand the nature of the total and quantum correlations of composite quantum states. \nThis calls for more exploration and a deeper understanding of the total correlation present in a composite\nmixed state. The total correlation quantified by the mutual information can be split into quantum and classical correlations. However,\nwe still do not know whether we can express the entanglement of purification as the sum of quantum and classical correlations. In view of the\npolygamous nature of the entanglement of purification, can it be the case that the entanglement of purification contains more classical-like\ncorrelation than quantum correlation? This will be a topic of future investigation.\n\n\\section{ Acknowledgements}\nSB acknowledges discussions with Ujjwal Sen, Aditi Sen de, Andreas Winter, Debbie Leung and Atri Bhattacharya. SB acknowledges the cluster computing facility at HRI.\nSB and AKP acknowledge financial support from DAE, Govt. 
of India.\n\n\\section{Introduction} \\label{Introduction}\nEvidence-based medicine intends to optimize healthcare decision-making by using evidence from well-designed and conducted research \\citep{guyatt2002users, moher2006systematic, egger2008systematic}. It classifies evidence by its epistemological strength and recommends using evidence from randomized controlled trials (RCTs), systematic reviews, and meta-analyses when available, to inform guidelines and policies. When conducted properly, systematic reviews and meta-analyses provide the most reliable evidence for synthesizing the benefits and harms associated with various treatment options, and can provide patients, caregivers, and doctors with integrated information for healthcare decision-making \\citep{moher1999improving,bossuyt2003towards, moher2009preferred,stewart2015preferred}.\n\nAlmost all RCTs measure and report more than one outcome, and often these outcomes are correlated with each other. For example, in a cardiovascular trial, the reduction in lipids level may be correlated with risk of clinical events such as stroke and myocardial infarction. In many RCTs, there is a balance between safety and efficacy; an experimental treatment may have greater efficacy than the placebo or standard therapy, but it may also have higher risk of adverse side effects such as transient toxicity or death. In practice, clinical decision-making relies on both efficacy and safety, so these outcomes must be considered simultaneously. Multivariate meta-analysis (MMA) is one technique proposed to jointly analyze multiple outcomes. 
MMA can borrow information from the potential correlation among the outcomes to improve the estimation of the pooled effect sizes \\citep{riley2007bivariate, riley2007evaluation,jackson2011multivariate}.\n\nOn the other hand, since multiple outcomes are simultaneously considered in medical decision-making, biases in some of the outcomes can affect the overall treatment decision. Recently, empirical studies have provided convincing evidence of the existence of selective reporting. \\citet{chan2004empirical} compared the protocols of 102 trials with 122 published reports. Their investigation showed that, on average, 50\\% of efficacy outcomes and 65\\% of safety outcomes in each trial were incompletely reported; 62\\% of the 82 trials had major inconsistencies between outcomes stated in the trial protocols and those reported in publications. They also found that, compared with nonsignificant outcomes, statistically significant outcomes had higher odds of being reported for both efficacy outcomes (odds ratio = 2.4) and safety outcomes (odds ratio = 4.7). Other studies have found similar results, such as in the reporting of toxicity in seven different medical areas \\citep{hemminki1980study, chan2004outcome, hazell2006under, al2008selective, chowers2009reporting, mathieu2009comparison}, and in the reporting of safety outcomes for breast cancer treatments \\citep{vera2013bias}. In the present article's motivating study for the effects of interventions on hospital readmission and quality of life for heart failure patients, 11 studies out of 45 do not report readmission, while 30 studies do not report quality of life. 
In addition to biased inference, ORB can also invalidate results from meta-analyses. For example, in this article's case study, the significant decrease in relative risk (RR) of hospital readmission for heart failure patients in the intervention group (95\\% confidence interval [CI] 0.862--0.993) is no longer present \\textit{after} we adjust for ORB (95\\% CI 0.876--1.051). In our meta-evaluation of 748 bivariate meta-analyses from the Cochrane Database of Systematic Reviews in Section \\ref{meta-meta}, we also found that 157 reviews experienced a change in statistical significance for at least one outcome \\textit{after} correcting for ORB.\n\nUntil recently, ORB has been understudied, especially compared to the well-studied publication bias (PB) problem, defined as ``{\\it{the publication or nonpublication of research findings, depending on the nature and direction of the results}}'' \\citep{sterne2016chapter}. In the presence of PB, the published studies form a biased selection of the research in certain areas, which then leads to biased estimates \\citep{jackson2007assessing}. The ORB problem is different from the PB problem in that, although outcomes of a study have been selectively reported under ORB, the remaining outcomes are still available. For PB, on the other hand, studies are completely missing, and we do not even know the number of studies that have been conducted but not published. Thus, the strategy for addressing ORB differs from that for PB, especially when leveraging the partially observed outcomes to infer the unreported outcomes.\n\nSince part of the outcomes are available, the current strategy for MMA with missing outcomes has focused on joint modeling of multiple outcomes that ``borrow strength'' across correlated outcomes \\citep{riley2009multivariate,kirkham2012multivariate,frosi2015multivariate}. 
The idea is that the set of studies with outcomes reported can inform the correlations among multiple outcomes, which can be used to ``impute'' the missing outcomes from the reported outcomes. Unfortunately, this joint modeling strategy alone is insufficient as an approach to account for ORB because it relies on the {\\emph{missing at random}} (MAR) assumption. This assumption is often not true in RCTs, since evidence suggests that the majority of missing outcomes are \\textit{selectively} unreported \\citep{chan2004empirical, vera2013bias}. It is also unclear if joint modeling alone can lead to less biased estimates in the presence of ORB. \n\nThe evaluation of ORB has been included as a key component by the Cochrane risk of bias tool \\citep{higgins2011cochrane}, which is becoming a standard procedure in conducting a systematic review. However, the \\textit{Cochrane Handbook for Systematic Reviews of Interventions} (Chapter 8.14.2, version 5.1.0, 2011) has acknowledged that ``statistical methods to detect within-study selective reporting (i.e., outcome-reporting bias) are, as yet, not well developed'' \\citep{higgins2011chapter}. The \\textit{Journal of Clinical Epidemiology} has also stressed that ``guidance is needed for using multiple outcomes and results in systematic reviews'' \\citep{mayo2017multiple}. \n\nMotivated by the critical need for statistical models that can adjust for and evaluate the impact of ORB, we develop {\\bf{A}} {\\bf{B}}ayesian {\\bf{S}}election model for correcting {\\bf{ORB}} (abbreviated as ABSORB henceforth) in this article. Specifically, we rely on selection models where multivariate latent variables are used to model the process of selective reporting of multiple outcomes in a flexible way. We then use a Bayesian approach to conduct estimation by placing appropriate priors on the unknown parameters. 
{\\emph{From a modeling point of view}}, the distributions of the latent variables that govern the reporting processes are allowed to be correlated with not only the significance of the outcomes but also the characteristics of the study. \n{\\emph{From a statistical inference point of view}}, the Bayesian approach allows the implementation of the model straightforwardly using Markov chain Monte Carlo (MCMC) and naturally provides uncertainty quantification for the model parameters through their posterior distributions. While there have been several approaches proposed for quantifying PB in \\textit{univariate} meta-analysis \\citep{lin2018quantifying, BaiLinBolandChen2020}, we are not aware of any existing approaches to quantify the impact of \\textit{ORB} in \\textit{multivariate} meta-analyses. By taking the Hellinger distance between the bias-corrected and non-bias corrected posterior densities for model parameters, we propose a measure to quantify the impact of outcome reporting bias.\n\nThe rest of the article is structured as follows. Section~\\ref{MotivatingData} describes the motivating case study of the effects of interventions on quality of life and hospital readmission for heart failure patients. Section~\\ref{ABSORB} introduces our proposed ABSORB model and our measure for quantifying the impact of ORB using our model. Section~\\ref{meta-meta} empirically evaluates these approaches through a meta-evaluation of bivariate meta-analyses from the Cochrane Database of Systematic Reviews. Section~\\ref{Application} applies our approaches to the case study of heart failure patients. 
Section~\\ref{Discussion} concludes the article with a discussion of our findings and potential extensions for future work.\n\n\\section{A Motivating Meta-Analysis on Interventions for Heart Failure Patients} \\label{MotivatingData}\n\nFor heart failure (HF) patients, readmission (ReAd) after discharge from the hospital is not rare, which places substantial burdens on both the patients and the health system. According to Medicare, the median risk-standardized 30-day readmission rate for HF was 23.0\\% \\citep{ZiaeianFonarow2016}. Due to the high cost of HF, preventing ReAd for HF patients has received particular attention from clinicians, researchers, and policymakers. For example, the Affordable Care Act has instituted a financial penalty for excessive readmissions for hospitals that is capped at 3\\% of a hospital's total Medicare payments for 2015 and beyond \\citep{ZiaeianFonarow2016}. On the other hand, quality of life (QoL) is an outcome that attracts more attention from patients, and the factors that affect the QoL of HF patients include anxiety, depression, and physical disability. A literature review by \\citet{Celano2018} found that these adverse QoL outcomes were associated with poor function, reduced adherence to treatment, and elevated mortality in HF patients. \n\nTelemonitoring (TM) and structured telephone support (STS) are two common interventions and have been demonstrated to be effective in reducing HF-specific readmission \\citep{inglis2015structured}. A series of RCTs measuring both the all-cause ReAd and QoL provides a good opportunity to systematically evaluate the effects of interventions (TM or STS) on these two outcomes for patients with heart failure. Moreover, \\citet{Celano2018} found that QoL was significantly associated with rehospitalization rates for HF patients. 
Therefore, it is of practical interest to \\textit{jointly} model the effects of interventions on both ReAd and QoL in order to capture the inherent correlations between these two outcomes.\n\nAfter a systematic search of the scientific literature, 45 intervention studies were included in our analysis. For ReAd, we calculated the RR in order to quantify the change in risk of readmission due to the interventions compared to the usual care. Since the quantitative measure of QoL differed across studies, we calculated the standardized mean difference (SMD) in order to quantify the change of QoL between the intervention group and the group with usual care. \n\nFor multiple studies in our meta-analysis, either ReAd or QoL was missing. For each of the studies with at least one outcome reported, there were three possible scenarios: 1) the study reported both ReAd \\textit{and} QoL, 2) the study reported \\textit{only} ReAd, and 3) the study reported \\textit{only} QoL. Among the 45 studies, only 8 studies published the results for both ReAd and QoL, 33 studies published only one of the two outcomes, and four studies did not publish either ReAd or QoL. Among the 41 studies with at least one outcome reported, 34 studies published the effect size of interventions on ReAd, and 15 studies published the effect size of interventions on QoL.\n\n\n\\begin{table}[!htbp]\n\t\\centering\n\t\\caption{Number of studies in the meta-analysis of interventions for HF patients, summarized by outcomes (columns) and by missingness scenarios (rows). 
\\checkmark : reported, \\text{\\sffamily X}: missing.}\n\t\\medskip \n\t\n\t\\begin{tabular}{ccccccc}\n\t\t\\hline\n\t\t& \\multicolumn{3}{c}{Published studies} & \\multicolumn{3}{c}{Updated studies}\\\\\n\t\t\\cline{2-7}\n\t\t\\multirow{2}{*}{\\parbox{2cm}{\\centering Scenario}} & \\multicolumn{2}{c}{Outcome} & \\multirow{2}{*}{\\parbox{1.5cm}{\\centering No.\\ of studies}} &\\multicolumn{2}{c}{Outcome}&\\multirow{2}{*}{\\parbox{1.5cm}{\\centering No.\\ of studies}} \\\\\n\t\t\\cline{2-3}\\cline{5-6} \n\t\t& ReAd & QoL && ReAd & QoL \\\\\n\t\t\\hline\n\t\t1 & \\checkmark & \\checkmark & 8 &\\checkmark &\\checkmark & 11\\\\\n\t\t2 & \\checkmark & \\text{\\sffamily X} & 26 &\\checkmark &\\text{\\sffamily X} & 23 \\\\\n\t\t3 & \\text{\\sffamily X} & \\checkmark & 7 &\\text{\\sffamily X} &\\checkmark & 10 \\\\\n\t\tNo.\\ of studies & 34 & 15 & 41 & 34 & 21 & 44 \\\\\n\t\t\\hline\n\t\\end{tabular} \\label{ReAdQoLTable}\n\\end{table}\n\nWe queried the corresponding authors for the studies that did not report either QoL or ReAd, and only half of the authors we contacted replied. Specifically, we obtained QoL results for six of the 30 studies that did not publish results on QoL.\nThese new results gave us an updated sample with 11 studies that reported both ReAd and QoL and three \\textit{new} studies that \\textit{only} reported QoL. Table~\\ref{ReAdQoLTable} summarizes the outcome reporting in our initial dataset (i.e., published studies) and in our new dataset \\textit{after} obtaining six unpublished results on QoL from corresponding authors (i.e., updated studies).\n\nAs a preliminary investigation, we conducted Begg's test \\citep{Begg1994} and Egger's test \\citep{Egger1997} for PB on ReAd and QoL separately. These tests suggested strong evidence of publication bias for ReAd (Begg's test: p-value = 0.02, Egger's test: p-value = 0.01). For QoL, there was moderate evidence of publication bias (Egger's test: p-value=0.10). 
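As a concrete illustration of the Egger's regression test used above, the following is a minimal sketch on simulated data (all values hypothetical, not the case-study data): the standardized effect $y_i/s_i$ is regressed on the precision $1/s_i$, and an intercept deviating from zero indicates funnel-plot asymmetry.

```python
import math
import numpy as np

def eggers_test(y, se):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect y/se on the precision 1/se;
    an intercept that deviates from zero suggests small-study
    effects such as selective reporting.
    """
    z = y / se
    X = np.column_stack([np.ones_like(se), 1.0 / se])
    beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
    n, p = X.shape
    resid = z - X @ beta
    sigma2 = resid @ resid / (n - p)
    se_intercept = math.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    t_stat = beta[0] / se_intercept
    # two-sided p-value via a large-sample normal approximation
    p_val = math.erfc(abs(t_stat) / math.sqrt(2))
    return beta[0], p_val

# Simulated illustration: the true effect is zero, but only results with
# positive standardized effects are reported, shifting the intercept
rng = np.random.default_rng(7)
se = rng.uniform(0.05, 0.5, size=500)
y = rng.normal(0.0, se)
reported = y / se > 0
intercept, p_val = eggers_test(y[reported], se[reported])
```

On such selectively reported data the estimated intercept is pushed away from zero, which is exactly the deviation flagged for QoL above.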
However, the funnel plot for QoL made the evidence of selective reporting more pronounced. As depicted in the left panel of Figure~\\ref{funnelplots}, there was a moderate degree of asymmetry in the funnel plot for the published studies, as evidenced by a missing chunk out of the funnel on the left-hand side. In the updated studies (right panel of Figure~\\ref{funnelplots}), we found that the QoL results of \\textit{all} six previously missing studies (represented by diamonds) were statistically \\textit{nonsignificant}. This strongly suggested the existence of selective reporting of QoL. Even though the six missing studies were updated, there was still evidence of outcome reporting bias, as shown by Egger's regression \\citep{Egger1997}, i.e., the intercept was found to deviate from zero. \n\nWhile our initial investigation analyzed the ReAd and QoL outcomes separately, ReAd and QoL are likely to be correlated \\citep{Celano2018} in practice, and biased estimation in one outcome can affect estimation in the other. In addition, given the evidence that many missing outcomes in RCTs are selectively unreported rather than missing at random \\citep{chan2004empirical} (including in our case study), we were motivated to: 1) \\textit{jointly} model ReAd and QoL in such a way that \\textit{adjusts} for potential ORB, and 2) \\textit{quantify} the impact of ORB on our MMA. We detail our novel modeling approaches in Section~\\ref{ABSORB}.\n\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\includegraphics[width=.9\\linewidth]{funnel_plot_QoL_updated.png}\n\t\\caption{Contour-enhanced funnel plots for QoL in the published studies (left panel) and the updated studies (right panel). 
There is slightly less asymmetry in the funnel plot for the updated studies, suggesting the existence of selective reporting for QoL.} \\label{funnelplots}\n\\end{figure}\n\n\n\n\\section{Statistical Methods} \\label{ABSORB}\n\nBased on our motivating case study, we focus on meta-analyses where two outcomes are of interest (or \\textit{bivariate} meta-analysis). In practice, a bivariate meta-analysis of studies of diagnostic test accuracy is the most common medical application of MMA \\citep{jackson2011multivariate, reitsma2005bivariate, chu2006bivariate}. In studies for drugs and other medical treatments, clinical efficacy and safety are also typically the two outcomes of greatest interest \\citep{chan2004empirical}. However, the extension of ABSORB to meta-analyses with more than two outcomes is relatively straightforward and is discussed in Section~\\ref{Discussion}. \n\nIn a bivariate meta-analysis, our main parameter of interest is an unknown vector of two population treatment effects $\\bm{\\mu} = (\\mu_1, \\mu_2)'$. For example, the first endpoint $\\mu_1$ could be a quantitative measure of the efficacy of a treatment, while the second endpoint $\\mu_2$ is a quantitative measure for the treatment's safety. In our case study, $\\mu_1$ is the RR of readmission, and $\\mu_2$ is the SMD of the quality of life for heart failure patients. We let $\\bm{y} = ( y_{1}, y_{2})'$ denote the reported effects for $\\bm{\\mu}$.\n\n\\subsection{The ABSORB Model} \\label{ABSORBModel}\n\nAs discussed in Section~\\ref{Introduction}, a common difficulty with conducting MMA is that in practice, outcomes are frequently unreported \\citep{jackson2011multivariate}. Selective reporting of $y_1$ or $y_2$ might lead to biased estimation and misleading inference about $\\bm{\\mu}$. 
With the ABSORB model, we aim to adjust for this ORB.\n\n\\subsubsection{Model Specification and Assumptions} \\label{ModelSpecification}\n\nBuilding upon the selection model literature for correcting PB in meta-analysis \\citep{copas1999works, copas2000meta, copas2001sensitivity, BaiLinBolandChen2020}, our goal is to explicitly model the selective reporting mechanism for partially reported outcomes. We assume that for each outcome $y_j, j = 1, 2$, there is a latent variable $z_j$ which determines the likelihood of $y_j$ being reported. \n\nLet $n$ denote the number of studies in our MMA. We assume that\n\\begin{align} \\label{YgivenZ}\ny_{ij} \\mid ( z_{ij} > 0 ) = \\mu_j + \\tau_j u_{ij} + s_{ij} \\epsilon_{ij}, \\hspace{.5cm} i = 1, \\ldots, n, \\hspace{.2cm} j = 1, 2,\n\\end{align}\nwhere $y_{ij}$ is the reported outcome for the $j$th endpoint for the $i$th study, $\\mu_j$ is the mean effect for the $j$th endpoint, and $s_{ij}$ is the reported standard error for $y_{ij}$. We assume that $u_{ij}$ and $\\epsilon_{ij}$ are marginally distributed as $\\mathcal{N}(0,1)$ and that $\\textrm{corr}(u_{ij}, \\epsilon_{ij}) = 0$. The $u_{ij}$'s are random effects that capture the between-study heterogeneity for the $j$th endpoint, while $\\tau_j > 0$ quantifies the amount of between-study heterogeneity. Meanwhile, the within-study random error is captured by $\\epsilon_{ij}$. Under \\eqref{YgivenZ}, we assume that $y_{ij}$ is only reported if the associated latent variable $z_{ij}$ is greater than zero. We further assume that the $z_{ij}$'s are generated according to\n\\begin{align} \\label{latentZ}\n\tz_{ij} = \\gamma_{0j} + \\gamma_{1j} \/ s_{ij} + \\delta_{ij}\n\\end{align}\nwhere $\\delta_{ij} \\sim \\mathcal{N}(0, 1)$. In \\eqref{latentZ}, the parameter $\\gamma_{0j}$ determines the overall probability of reporting $y_{ij}$, while $\\gamma_{1j}$ determines how the likelihood of reporting depends on sample size. 
In general, $\\gamma_{1j} \\geq 0$, so that studies with larger sample sizes are\nmore likely to report their outcomes. We assume that\n\\begin{align} \\label{EpsilonDeltaCorrelation}\n\\textrm{corr} (\\epsilon_{ij}, \\delta_{ij}) = \\rho_j\n\\end{align}\nthat is, the reported outcome $y_{ij}$ and the latent variables $z_{ij}$ are correlated through $\\rho_j$. The correlation parameters $\\rho_1$ and $\\rho_2$ in \\eqref{EpsilonDeltaCorrelation} control how the probability of reporting for the first and second endpoint respectively is influenced by the effect size of the study. When ORB for both endpoints is present, then $\\rho_1 \\neq 0$ and $\\rho_2 \\neq 0$. In this case, standard meta-analyses may lead to \\textit{biased} estimation of $\\bm{\\mu}$. \n\nIn line with standard bivariate meta-analysis \\citep{jackson2011multivariate}, we further assume that there is both within-study correlation between the $\\epsilon_{ij}$'s in \\eqref{YgivenZ}, as well as between-study correlation for the two endpoints. To model the within-study correlation, we assume that\n\\begin{align} \\label{WithinStudyCorrelation}\n\t\\textrm{corr} ( \\epsilon_{i1}, \\epsilon_{i2} ) = \\rho_\\text{W}\n\\end{align}\nAlthough the assumption that the within-study correlation is a constant $\\rho_\\text{W}$ across all the studies may be strong, this approach is commonly adopted in practice for MMA \\citep{RileyThompsonAbrams2007, LinChu2018} in order to keep the model parsimonious.\n\nTo model the between-study correlation, we assume that the random effects $(u_{i1}, u_{i2})'$ for the two endpoints in \\eqref{YgivenZ} are also correlated. 
That is, we assume that\n\\begin{align} \\label{BetweenStudyCorrelation}\n\t\\textrm{corr} ( u_{i1}, u_{i2} ) = \\rho_\\text{B}\n\\end{align}\nFinally, we assume that\n\\begin{align} \\label{ZeroCorrelations}\n\t\\textrm{corr} ( \\epsilon_{i1}, \\delta_{i2}) = \\textrm{corr} ( \\epsilon_{i2}, \\delta_{i1} ) = \\textrm{corr}(\\delta_{i1}, \\delta_{i2}) = 0\n\\end{align} \nAssumption \\eqref{ZeroCorrelations} implies that $y_{i1} \\mid (z_{i1} > 0, z_{i2}) = y_{i1} \\mid (z_{i1} > 0)$ and $y_{i2} \\mid ( z_{i1}, z_{i2} > 0) = y_{i2} \\mid (z_{i2} > 0)$. In other words, $y_{i1}$ is reported only if $z_{i1} > 0$ and does not depend on the value of $z_{i2}$. Similarly, $y_{i2}$ does not depend on $z_{i1}$, and $z_{i1}$ does not depend on $z_{i2}$. We stress that the outcomes $y_{1}$ and $y_{2}$ themselves are likely to be correlated, and this is captured in our model through the within-study correlation $\\rho_\\text{W}$ \\eqref{WithinStudyCorrelation} and the between-study correlation $\\rho_\\text{B}$ \\eqref{BetweenStudyCorrelation}. However, the probability of \\textit{reporting} each individual outcome should depend only on the associated latent variable.\n\n\\subsection{Estimation in the ABSORB Model} \\label{PriorSpecification}\nThe basic ABSORB model is given in \\eqref{YgivenZ}--\\eqref{latentZ}, while additional assumptions about the correlation structure of different parameters are encoded in \\eqref{EpsilonDeltaCorrelation}--\\eqref{ZeroCorrelations}. In summary, we have a total of 12 unknown parameters $(\\mu_1, \\mu_2, \\tau_1, \\tau_2, \\gamma_{01}, \\gamma_{02}, \\gamma_{11}, \\gamma_{12}, \\rho_1, \\rho_2, \\rho_\\text{W}, \\rho_\\text{B})'$ under the ABSORB model. We propose a Bayesian approach to estimating all these parameters by placing appropriate priors on them. 
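Before turning to the priors, the data-generating process specified above can be illustrated with a small simulation. The sketch below uses hypothetical parameter values (not estimates from any real meta-analysis) and shows how a positive $\rho_j$ makes the naive average of the reported outcomes overestimate $\mu_j$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values for illustration only
mu, tau = np.array([0.3, 0.1]), np.array([0.2, 0.2])
gamma0, gamma1 = np.array([-0.5, -0.5]), np.array([0.05, 0.05])
rho_W, rho_B, rho = 0.5, 0.4, np.array([0.6, 0.6])

n = 1000
s = rng.uniform(0.1, 0.5, size=(n, 2))            # within-study standard errors

# Between-study random effects (u_i1, u_i2)' with correlation rho_B
u = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_B], [rho_B, 1.0]], size=n)

# (eps_1, eps_2, delta_1, delta_2): corr(eps_1, eps_2) = rho_W,
# corr(eps_j, delta_j) = rho_j, all other cross-correlations zero
C = np.array([[1.0,    rho_W,  rho[0], 0.0],
              [rho_W,  1.0,    0.0,    rho[1]],
              [rho[0], 0.0,    1.0,    0.0],
              [0.0,    rho[1], 0.0,    1.0]])
e = rng.multivariate_normal(np.zeros(4), C, size=n)
eps, delta = e[:, :2], e[:, 2:]

y = mu + tau * u + s * eps              # study-level outcomes
z = gamma0 + gamma1 / s + delta         # latent reporting scores
reported = z > 0                        # y_ij observed only when z_ij > 0

# With rho_j > 0, naively averaging only the reported outcomes
# overestimates the true mean effects mu_j
naive = np.array([y[reported[:, j], j].mean() for j in range(2)])
```

Repeating such simulations is a useful sanity check: whenever $\rho_j \neq 0$, the reported outcomes are a biased sample, which is what the selection model is designed to correct.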
\n\nFor the mean treatment effects $\\bm{\\mu}$, we place the vague priors,\n\\begin{align} \\label{muprior}\n\\mu_j \\sim \\mathcal{N}(0, 10^4)\n\\end{align}\nand for the heterogeneity parameters $(\\tau_1, \\tau_2)'$, we place vague half-Cauchy priors,\n\\begin{align} \\label{tauprior}\n\\tau_j \\sim \\mathcal{C}^{+} (0, 1)\n\\end{align}\nNext, we consider priors for $(\\gamma_{01}, \\gamma_{02}, \\gamma_{11}, \\gamma_{12})'$, the parameters that control the overall likelihood of reporting for the first and second endpoint respectively. To induce weakly informative priors on these parameters, we follow \\cite{BaiLinBolandChen2020} and specify the priors as,\n\\begin{align} \\label{gamma0prior}\n\\gamma_{0j} \\sim \\mathcal{U} (-2, 2)\n\\end{align}\n\\begin{align} \\label{gamma1prior}\n\\gamma_{1j} \\sim \\mathcal{U} (0, \\max_i s_{ij} )\n\\end{align}\nThe priors \\eqref{gamma0prior}--\\eqref{gamma1prior} ensure that most of the mass for each of the latent variables $z_{ij}$ lies in the interval $(-2, 3)$, leading to selection probabilities between 2.5\\% and 99.7\\%. Finally, in order to complete the prior specification, we place noninformative uniform priors on each of the correlation parameters,\n\\begin{align} \\label{rhoprior}\n\\rho_1, \\rho_2, \\rho_\\text{W}, \\rho_\\text{B} \\sim \\mathcal{U}(-1, 1).\n\\end{align}\nThe Bayesian approach is especially appealing for several reasons. First, we can implement the model straightforwardly using MCMC, thus avoiding the difficulties of maximum likelihood estimation (MLE). The main issue with the MLE in selection models is that it can face non-convergence \\citep{copas2001sensitivity}. This can arise from poor initializations, a flat plateau in the likelihood, or instability in the computation of a $12 \\times 12$ Hessian matrix during the optimization procedure \\citep{copas2001sensitivity, ning2017maximum}. 
MCMC sampling does not encounter such difficulties (provided that we run the MCMC for enough iterations), and we can monitor convergence for the posteriors $p(\\mu_1 \\mid \\bm{y}_1, \\ldots, \\bm{y}_n)$ and $p(\\mu_2 \\mid \\bm{y}_1, \\ldots, \\bm{y}_n)$ using trace plots or the effective sample size (ESS). Besides the computational advantages of the Bayesian approach, we can also obtain natural uncertainty quantification for the model parameters through their posterior distributions. These posterior densities will ultimately allow us to quantify the \\textit{impact} of outcome reporting bias, as we discuss in Section~\\ref{QuantifyingORB}.\n\n\\subsection{The ABSORB Likelihood and Implementation} \\label{LikelihoodImplementation}\n\nIn order to perform Bayesian inference under the ABSORB model \\eqref{YgivenZ}--\\eqref{ZeroCorrelations}, we need to obtain the likelihood function and then place the priors \\eqref{muprior}--\\eqref{rhoprior} on the model parameters. In this section, we describe how to derive the ABSORB likelihood for the $n$ studies in our MMA and perform posterior inference under this likelihood. \n\nBecause not all studies report both $y_{1}$ and $y_{2}$, we may not have an equal number of observations for $y_1$ and $y_2$. Consequently, we need to consider three separate cases for the reported outcomes in our meta-analysis: 1) both endpoints are reported, 2) only the first endpoint is reported, or 3) only the second endpoint is reported. Without loss of generality, suppose that the first $m_1$ studies report both endpoints, the next $m_2$ studies report only the first endpoint $y_1$, and the remaining $m_3 = n-(m_1+m_2)$ studies report only the second endpoint $y_2$. 
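The partition of the $n$ studies into the three reporting cases can be sketched as follows, coding an unreported endpoint as NaN (all values hypothetical):

```python
import numpy as np

# Hypothetical reported outcomes for n = 6 studies; NaN marks an
# endpoint that the study did not report
y = np.array([[ 0.21,  0.35],
              [ 0.10, np.nan],
              [np.nan, 0.42],
              [ 0.05,  0.18],
              [ 0.30, np.nan],
              [np.nan, 0.09]])

obs = ~np.isnan(y)
both   = np.flatnonzero(obs[:, 0] &  obs[:, 1])   # report y1 and y2
first  = np.flatnonzero(obs[:, 0] & ~obs[:, 1])   # report y1 only
second = np.flatnonzero(~obs[:, 0] & obs[:, 1])   # report y2 only
m1, m2, m3 = len(both), len(first), len(second)

# "Without loss of generality": reorder so the first m1 studies report
# both endpoints, the next m2 report only y1, and the last m3 only y2
y_ordered = y[np.concatenate([both, first, second])]
```

Each of the three index sets then contributes its own factor to the likelihood, as derived next.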
\n\nFirst note that we can reparameterize the ABSORB model \\eqref{YgivenZ}--\\eqref{ZeroCorrelations} as a hierarchical model by introducing further latent parameters $(\\theta_{i1}, \\theta_{i2})'$ for the $m_1$ studies that report both endpoints, $\\widetilde{\\theta}_{i1}$ for the $m_2$ studies that report only $y_1$, and $\\check{\\theta}_{i2}$ for the $m_3$ studies that report only $y_2$. The main reason for introducing these additional latent parameters is to ensure that the joint densities in our likelihood can be written explicitly. We denote $\\bm{\\Xi}$ as the collection of all unknown parameters, including these latent parameters.\n\nFor the $m_1$ studies that report both outcomes, we rewrite \\eqref{YgivenZ} as\n\\begin{equation} \\label{ABSORBreparam}\n\\begin{array}{rl}\ny_{i1} \\mid ( z_{i1} > 0) \\sim & \\mathcal{N} ( \\theta_{i1}, s_{i1}^2 ); \\\\\ny_{i2} \\mid (z _{i2} > 0) \\sim & \\mathcal{N} ( \\theta_{i2}, s_{i2}^2),\n\\end{array} \\hspace{.5cm} i = 1, \\ldots, m_1,\n\\end{equation}\nwhere $(\\theta_{i1}, \\theta_{i2})$ is jointly distributed as\n\\begin{align} \\label{latentTheta}\n\\begin{pmatrix} \\theta_{i1}\\\\ \\theta_{i2} \\end{pmatrix} \\sim \\mathcal{N} \\left( \\begin{pmatrix} \\mu_1 \\\\ \\mu_2 \\end{pmatrix}, \\begin{pmatrix} \\tau_1^2 & \\rho_\\text{B} \\tau_1 \\tau_2 \\\\ \\rho_\\text{B} \\tau_1 \\tau_2 & \\tau_2^2 \\end{pmatrix} \\right), \\hspace{.5cm} i = 1, \\ldots, m_1.\n\\end{align}\n For studies $i = 1, \\ldots, m_1$ that report both outcomes, the ABSORB model can be represented as\n\\begin{align} \\label{JointYZbothoutcomes}\n\\begin{pmatrix} y_{i1} \\\\ y_{i2} \\\\ z_{i1} \\\\ z_{i2} \\end{pmatrix} \\sim \\mathcal{N} \\left( \\begin{pmatrix} \\theta_{i1} \\\\ \\theta_{i2} \\\\ \\gamma_{01} + \\gamma_{11} \/ s_{i1} \\\\ \\gamma_{02} + \\gamma_{12}\/s_{i2} \\end{pmatrix}, \\begin{pmatrix} s_{i1}^2 & \\rho_\\text{W} s_{i1} s_{i2} & \\rho_1 s_{i1} & 0 \\\\ \\rho_\\text{W} s_{i1} s_{i2} & s_{i2}^2 & 0 & \\rho_2 s_{i2} \\\\ 
\\rho_1 s_{i1} & 0 & 1 & 0 \\\\ 0 & \\rho_2 s_{i2} & 0 & 1 \\end{pmatrix} \\right) \\mathbb{I}_{ [ z_{i1} > 0 \\cap z_{i2} > 0]}.\n\\end{align}\nNamely, for each of these $m_1$ studies, the joint density of $(y_{i1}, y_{i2}, z_{i1}, z_{i2})'$ is a truncated normal density; both endpoints $(y_{i1}, y_{i2})'$ are reported precisely because \\textit{both} associated latent variables $z_{i1}$ and $z_{i2}$ are greater than zero. The off-diagonal entries in the covariance matrix in \\eqref{JointYZbothoutcomes} capture the correlations between $y_{i1}$, $y_{i2}$, $z_{i1}$, and $z_{i2}$. The likelihood function for $\\bm{\\Xi}$ in these $m_1$ studies is then\n\\begin{equation} \\label{LikelihoodBothYs}\n\tL_1 (\\bm{\\Xi} ) = \\prod_{i=1}^{m_1} f( y_{i1}, y_{i2}, z_{i1}, z_{i2} \\mid \\theta_{i1}, \\theta_{i2}, \\gamma_{01}, \\gamma_{11}, \\gamma_{02}, \\gamma_{12}, \\rho_1, \\rho_2, \\rho_\\text{W} ),\n\\end{equation}\nwhere $f(y_{i1}, y_{i2}, z_{i1}, z_{i2} \\mid \\cdot )$ is the probability density function (pdf) of the truncated normal density in \\eqref{JointYZbothoutcomes}. \n\nFor the $m_2$ studies that report the first endpoint $y_{1}$ but not $y_{2}$, we can also represent the model with a truncated normal density. 
However, since we do not observe $y_{2}$ for these studies, we can only write the joint density of $(y_{i1}, z_{i1}, z_{i2})'$ for studies $i = m_1+1, \\ldots, m_1+m_2$ as follows: \n\\begin{align} \\label{jointYZfirstoutcomeonly}\n\\begin{pmatrix} y_{i1} \\\\ z_{i1} \\\\ z_{i2} \\end{pmatrix} \\sim \\mathcal{N} \\left( \\begin{pmatrix} \\widetilde{\\theta}_{i1} \\\\ \\gamma_{01} + \\gamma_{11} \/ s_{i1} \\\\ \\gamma_{02} + \\gamma_{12} \/ s_{i2} \\end{pmatrix}, \\begin{pmatrix} s_{i1}^2 & \\rho_1 s_{i1} & 0 \\\\ \\rho_1 s_{i1} & 1 & 0 \\\\ 0 & 0 & 1 \\end{pmatrix} \\right) \\mathbb{I}_{[ z_{i1} > 0 \\cap z_{i2} < 0 ]},\n\\end{align} \nwhere $\\widetilde{\\theta}_{i1}$ is marginally distributed as\n\\begin{equation} \\label{latenttheta1}\n \\widetilde{\\theta}_{i1} \\sim \\mathcal{N} (\\mu_1, \\tau_1^2), \\hspace{.5cm} i = m_1+1, \\ldots, m_1+m_2.\n\\end{equation}\nThe representation in \\eqref{jointYZfirstoutcomeonly} ensures that the first endpoint $y_1$ is only reported because the corresponding latent variable $z_1$ is greater than zero, while the second endpoint $y_2$ is \\textit{not} reported because the corresponding latent variable $z_2$ is \\textit{less} than zero. The main issue that we encounter with \\eqref{jointYZfirstoutcomeonly} is that the standard errors $s_{i2}$'s are not available for these $m_2$ studies (since $y_2$ was not reported for any of them), and our model requires these standard errors in order to parameterize the mean of $z_{i2}$. Nevertheless, we can estimate the missing $s_{i2}$'s using the approach given in Section~3.5 of \\cite{copas2014model}. Specifically, we use the relationship that $1 \/ s_{i2}^2 = k_2 n_i$, where $k_2$ is a constant and $n_i$ is the sample size of the $i$th study. 
Based on the other $n-m_2$ studies that reported $s_{i2}$ (i.e., the $m_1$ studies where both endpoints were reported and the $m_3$ studies that only reported the second endpoint $y_2$) and the corresponding sample sizes for these studies, we then estimate $k_2$ as\n\\begin{align*}\n\t\\widehat{k}_2 = \\frac{\\sum_{i \\in R_2} 1 \/ s_{i2}^2}{\\sum_{i \\in R_2} n_i },\n\t\\end{align*}\nwhere $R_2$ is the index set of the $n-m_2$ studies that have reported $s_{i2}$. The missing $s_{i2}$'s for the $m_2$ studies in \\eqref{jointYZfirstoutcomeonly} can then be estimated as $\\widehat{s}_{i2} = \\sqrt{1 \/ ( \\widehat{k}_2 n_i ) }$ \\citep{copas2014model}. Substituting the $s_{i2}$'s with their estimates $\\widehat{s}_{i2}$'s, the likelihood function for the $m_2$ studies that only report $y_1$ but not $y_2$ can be written as\n\\begin{equation} \\label{LikelihoodY1Only}\n\tL_2 (\\bm{\\Xi}) = \\prod_{i=m_1+1}^{m_1+m_2} f( y_{i1}, z_{i1}, z_{i2} \\mid \\widetilde{\\theta}_{i1}, \\gamma_{01}, \\gamma_{11}, \\gamma_{02}, \\gamma_{12}, \\rho_1) ,\n\\end{equation}\nwhere $f(y_{i1}, z_{i1}, z_{i2} \\mid \\cdot )$ is the pdf of the truncated normal density in \\eqref{jointYZfirstoutcomeonly}.\n\nFinally, for the remaining $m_3$ studies that only report the second endpoint $y_{2}$ but not $y_1$, we can similarly represent the model as follows. 
For $i = m_1 + m_2 + 1, \\ldots, n$, we have \n\\begin{align} \\label{jointYZsecondoutcomeonly}\n\\begin{pmatrix} y_{i2} \\\\ z_{i1} \\\\ z_{i2} \\end{pmatrix} \\sim \\mathcal{N} \\left( \\begin{pmatrix} \\check{\\theta}_{i2} \\\\ \\gamma_{01} + \\gamma_{11} \/ s_{i1} \\\\ \\gamma_{02} + \\gamma_{12} \/ s_{i2} \\end{pmatrix}, \\begin{pmatrix} s_{i2}^2 & 0 & \\rho_2 s_{i2} \\\\ 0 & 1 & 0 \\\\ \\rho_2 s_{i2} & 0 & 1 \\end{pmatrix} \\right) \\mathbb{I}_{[ z_{i1} < 0 \\cap z_{i2} > 0]},\n\\end{align} \nwhere $\\check{\\theta}_{i2}$ is marginally distributed as\n\\begin{equation}\\label{latenttheta2}\n\\check{\\theta}_{i2} \\sim \\mathcal{N}(\\mu_2, \\tau_2^2), \\hspace{.5cm} i = m_1+m_2+1, \\ldots, n.\n\\end{equation}\nThe truncated normal density in \\eqref{jointYZsecondoutcomeonly} ensures that the second endpoint $y_2$ is only reported because the corresponding latent variable $z_2$ is greater than zero, while $y_1$ is \\textit{not} reported because $z_1$ is \\textit{less} than zero. For these $m_3$ studies, we do not observe the standard errors $s_{i1}$'s since none of these studies reported $y_1$. As we require these $s_{i1}$'s in order to parameterize the mean of $z_{i1}$, we again follow the approach of \\cite{copas2014model} and first estimate\n\\begin{align*}\n\t\\widehat{k}_1 = \\frac{ \\sum_{i \\in R_1} 1 \/ s_{i1}^2}{\\sum_{i \\in R_1} n_i},\n\\end{align*}\nwhere $R_1$ is the index set for the $m_1+m_2$ studies that have reported $s_{i1}$. The missing $s_{i1}$'s for the $n-(m_1+m_2)$ studies in \\eqref{jointYZsecondoutcomeonly} are then estimated as $\\widehat{s}_{i1} = \\sqrt{1\/(\\widehat{k}_1 n_i)}$. 
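To make this imputation step concrete, the following sketch (with hypothetical function and variable names that are not part of our implementation) computes $\widehat{k}_j$ from the studies that reported outcome $j$ and back-solves for the missing standard errors:

```python
import numpy as np

def impute_missing_se(s_obs, n_obs, n_miss):
    """Copas-style imputation of missing standard errors.

    Uses the approximate relationship 1 / s^2 = k * n: k is estimated
    from the studies that reported the outcome (the index set R_j), and
    the missing standard errors are then back-solved from the sample
    sizes of the non-reporting studies.
    """
    s_obs, n_obs, n_miss = map(np.asarray, (s_obs, n_obs, n_miss))
    k_hat = np.sum(1.0 / s_obs**2) / np.sum(n_obs)  # \widehat{k}_j
    return np.sqrt(1.0 / (k_hat * n_miss))          # \widehat{s}_{ij}
```

For example, if $1 \/ s_{ij}^2 = 2 n_i$ held exactly in the reporting studies, a non-reporting study with $n_i = 50$ would receive $\widehat{s}_{ij} = 0.1$.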
As in \\eqref{LikelihoodY1Only}, the likelihood for these $n-(m_1+m_2)$ studies after substituting the $s_{i1}$'s with the $\\widehat{s}_{i1}$'s is\n\\begin{equation} \\label{LikelihoodY2Only}\n\tL_3 (\\bm{\\Xi} ) = \\prod_{i=m_1+m_2+1}^{n} f( y_{i2}, z_{i1}, z_{i2} \\mid \\check{\\theta}_{i2}, \\gamma_{01}, \\gamma_{11}, \\gamma_{02}, \\gamma_{12}, \\rho_2 ),\n\\end{equation}\nwhere $f(y_{i2}, z_{i1}, z_{i2} \\mid \\cdot )$ is the pdf of the truncated normal density in \\eqref{jointYZsecondoutcomeonly}. Combining \\eqref{LikelihoodBothYs}, \\eqref{LikelihoodY1Only}, and \\eqref{LikelihoodY2Only}, we see that the complete likelihood function for all $n$ studies is\n\\begin{align} \\label{ABSORBLikelihood}\nL ( \\bm{\\Xi} \\mid \\bm{y}_1, \\ldots, \\bm{y}_n ) = L_1 (\\bm{\\Xi} ) L_2 (\\bm{\\Xi}) L_3 (\\bm{\\Xi}).\n\\end{align}\nUnder \\eqref{ABSORBLikelihood}, the joint posterior distribution for $\\bm{\\Xi}$ is then\n\\begin{align} \\label{ABSORBposterior}\np ( \\bm{\\Xi} \\mid \\bm{y}_1, \\ldots, \\bm{y}_n ) \\propto L ( \\bm{\\Xi} \\mid \\bm{y}_1, \\ldots, \\bm{y}_n ) p ( \\bm{\\Xi} ),\n\\end{align}\nwhere $p (\\bm{\\Xi})$ is the product of the priors \\eqref{muprior}--\\eqref{rhoprior}, \\eqref{latentTheta}, \\eqref{latenttheta1}, and \\eqref{latenttheta2} on the model parameters. The main challenge with the ABSORB model is sampling from the truncated densities \\eqref{JointYZbothoutcomes}, \\eqref{jointYZfirstoutcomeonly}, and \\eqref{jointYZsecondoutcomeonly} in the full likelihood \\eqref{ABSORBLikelihood}. In Appendix~\\ref{Sampling}, we describe how to approximately sample from these truncated densities. With the prior for $\\bm{\\Xi}$ specified, the complete ABSORB model can then be implemented in any standard MCMC software to approximate the posterior distributions $p( \\mu_1 \\mid \\bm{y}_1, \\ldots, \\bm{y}_n )$ and $p ( \\mu_2 \\mid \\bm{y}_1, \\ldots, \\bm{y}_n )$. For our implementation, we use the \\texttt{JAGS} software. 
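Although we fit the model with MCMC, it may help intuition to see the truncated density in \eqref{JointYZbothoutcomes} evaluated directly. Because $z_{i1}$ and $z_{i2}$ have unit variances and zero correlation, the truncation normalizer factorizes as $\Phi(\gamma_{01} + \gamma_{11}\/s_{i1}) \, \Phi(\gamma_{02} + \gamma_{12}\/s_{i2})$. The sketch below (hypothetical names; not our \texttt{JAGS} code) computes the log of $f(y_{i1}, y_{i2}, z_{i1}, z_{i2} \mid \cdot)$ for a single study that reports both outcomes:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def log_f_both(y1, y2, z1, z2, th1, th2, s1, s2,
               g01, g11, g02, g12, rho1, rho2, rho_w):
    """Log of the truncated normal pdf for a study reporting both outcomes.

    The 4-d normal for (y1, y2, z1, z2) is truncated to the orthant
    {z1 > 0, z2 > 0}; since z1 and z2 are uncorrelated with unit
    variances, the normalizer is Phi(mz1) * Phi(mz2). Assumes parameter
    values that keep the covariance matrix positive definite.
    """
    if not (z1 > 0 and z2 > 0):
        return -np.inf  # outside the truncation region
    mz1 = g01 + g11 / s1
    mz2 = g02 + g12 / s2
    mean = [th1, th2, mz1, mz2]
    cov = [[s1**2,       rho_w*s1*s2, rho1*s1, 0.0],
           [rho_w*s1*s2, s2**2,       0.0,     rho2*s2],
           [rho1*s1,     0.0,         1.0,     0.0],
           [0.0,         rho2*s2,     0.0,     1.0]]
    log_num = multivariate_normal(mean, cov).logpdf([y1, y2, z1, z2])
    log_den = norm.logcdf(mz1) + norm.logcdf(mz2)
    return log_num - log_den
```

When $\rho_1 = \rho_2 = \rho_\text{W} = 0$, this log density reduces to the sum of four independent (truncated) normal log densities, which provides a convenient sanity check.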
\n\nNote that it may be the case that only one of the endpoints $y_1$ or $y_2$ in our MMA contains missing values. When there are no missing outcomes for $y_2$, the number of studies that only report $y_1$ but not $y_2$ is $m_2 = 0$, and we replace \\eqref{ABSORBLikelihood} with $L(\\bm{\\Xi} \\mid \\bm{y}_1, \\ldots, \\bm{y}_n) = L_1 (\\bm{\\Xi}) L_3(\\bm{\\Xi})$. Similarly, if there are no missing outcomes for $y_1$, then $m_3 = 0$ and we replace \\eqref{ABSORBLikelihood} with $L(\\bm{\\Xi} \\mid \\bm{y}_1, \\ldots, \\bm{y}_n ) = L_1 (\\bm{\\Xi}) L_2 (\\bm{\\Xi})$. In Appendix~\\ref{ABSORBISM}, we describe how to further extend the ABSORB model to incorporate studies that are \\textit{completely} missing due to publication bias (i.e., studies that do not report \\textit{either} $y_1$ or $y_2$). Such an extension of ABSORB to account for PB, in addition to ORB, is possible if we know the \\textit{number} of missing studies.\n\n\\subsection{Quantifying the Impact of Outcome Reporting Bias} \\label{QuantifyingORB}\n\nIn addition to correcting the bias in estimation of $\\bm{\\mu}$, it is also of practical interest to evaluate the \\textit{impact} of ORB on MMA. To the best of our knowledge, there are no existing approaches to quantify the impact of ORB, either frequentist or Bayesian. The Bayesian approach has a natural way of doing this through comparing the bias-corrected posteriors for $\\mu_1$ and\/or $\\mu_2$ under the ABSORB model against their \\textit{non}-bias corrected posteriors.\n\n\\subsubsection{Estimation for the Non-Bias Corrected Model} \\label{NonBiasCorrectedModel}\n\nWe first describe how to estimate the parameters in MMA with missing outcomes \\textit{without} accounting for ORB. The ABSORB model \\eqref{YgivenZ}--\\eqref{ZeroCorrelations} explicitly models the selective reporting mechanism through the latent variables $z_{1}$ and $z_{2}$. 
These variables control whether or not the corresponding outcomes $y_{1}$ or $y_{2}$ are reported, and thus, we obtain bias-corrected estimates of $\\bm{\\mu}$ under ABSORB. The likelihood of reporting $y_1$ and $y_2$ ultimately depends on the correlation parameters $\\rho_1$ and $\\rho_2$ in \\eqref{EpsilonDeltaCorrelation}. However, if $\\rho_1 = \\rho_2 = 0$, then $\\textrm{corr}(y_{ij}, z_{ij}) = 0$ for all $i = 1, \\ldots, n, j = 1, 2$, and the model \\eqref{YgivenZ}--\\eqref{ZeroCorrelations} reduces to\n\\begin{equation} \\label{ABSORBNoCorrelations}\n\t\\begin{array}{lll}\n\ty_{i1} & = \\mu_1 + \\tau_1 u_{i1} + s_{i1} \\epsilon_{i1}, & \\textrm{corr}(\\epsilon_{i1}, \\epsilon_{i2}) = \\rho_\\text{W}; \\\\\n\ty_{i2} & = \\mu_2 + \\tau_2 u_{i2} + s_{i2} \\epsilon_{i2}, & \\textrm{corr}(u_{i1}, u_{i2}) = \\rho_\\text{B}.\n\\end{array}\n\\end{equation}\nIn other words, when $\\rho_1 = \\rho_2 = 0$, the dependence of $y_{i1}$ and $y_{i2}$ on $z_{i1}$ and $z_{i2}$ respectively is removed in \\eqref{ABSORBNoCorrelations}, and we \\textit{only} have the unknown parameters $(\\mu_1, \\mu_2, \\tau_1, \\tau_2, \\rho_\\text{W}, \\rho_\\text{B})'$. In this case, the ABSORB model reduces to a joint model with a bivariate random effects model for the $m_1$ studies that report both $(y_1, y_2)'$ and univariate random effects models for the $m_2$ studies that report only $y_1$ and the $m_3$ studies that report only $y_2$. We call model \\eqref{ABSORBNoCorrelations} the \\textit{non}-bias corrected model because we ignore the selection process that was induced through the latent variables $z_1$ and $z_2$.\n\nSimilar to the bias-corrected ABSORB model, we introduce the latent parameters $(\\theta_{i1}, \\theta_{i2})'$ for $i = 1, \\ldots, m_1$, $\\widetilde{\\theta}_{i1}$ for $i = m_1+1, \\ldots, m_1+m_2$, and $\\check{\\theta}_{i2}$ for $i = m_1+m_2+1, \\ldots, n$, as in \\eqref{latentTheta}, \\eqref{latenttheta1}, and \\eqref{latenttheta2}. 
Let $\\bm{\\Omega}$ denote all the unknown parameters in the non-bias corrected model, including these latent parameters. Note that $\\bm{\\Omega}$ does not include the parameters $(\\rho_1, \\rho_2, \\gamma_{01}, \\gamma_{11}, \\gamma_{02}, \\gamma_{12})'$, because $\\rho_1$ and $\\rho_2$ are fixed at zero and we no longer need to condition on the latent variables $(z_1, z_2)'$ in our analysis. In the non-bias corrected model, we model the $m_1$ studies that report both outcomes as\n\\begin{align} \\label{StandardBivariateMetaAnalysis}\n\t\\begin{pmatrix} y_{i1} \\\\ y_{i2} \\end{pmatrix} \\sim \\mathcal{N} \\left( \\begin{pmatrix} \\theta_{i1} \\\\ \\theta_{i2} \\end{pmatrix}, \\begin{pmatrix} s_{i1}^2 & \\rho_\\text{W} s_{i1} s_{i2} \\\\ \\rho_\\text{W} s_{i1} s_{i2} & s_{i2}^2 \\end{pmatrix} \\right), \\hspace{.5cm} i = 1, \\ldots, m_1,\n\\end{align} \nwhere the joint distribution of $(\\theta_{i1}, \\theta_{i2})'$ is given in \\eqref{latentTheta}. The likelihood function for these $m_1$ studies in the non-bias corrected model is\n\\begin{align} \\label{NonBiasCorrectedBothOutcomes}\n\tL_1 ( \\bm{\\Omega} ) = \\prod_{i=1}^{m_1} f(y_{i1}, y_{i2} \\mid \\theta_{i1}, \\theta_{i2}, \\rho_\\text{W} ),\n\\end{align}\nwhere $f( y_{i1}, y_{i2} \\mid \\cdot)$ is the pdf of the bivariate normal density in \\eqref{StandardBivariateMetaAnalysis}. For the $m_2$ studies that only report $y_{1}$ but not $y_{2}$, the non-bias corrected model reduces to $y_{i1} \\sim \\mathcal{N} ( \\widetilde{\\theta}_{i1}, s_{i1}^2)$, where $\\widetilde{\\theta}_{i1} \\sim \\mathcal{N}(\\mu_1, \\tau_1^2)$. The corresponding likelihood function for these $m_2$ studies is\n\\begin{align} \\label{NonBiasCorrectedFirstOutcome}\n\tL_2 ( \\bm{\\Omega} ) = \\prod_{i=m_1+1}^{m_1+m_2} f(y_{i1} \\mid \\widetilde{\\theta}_{i1} ) ,\n\\end{align}\nwhere $f(y_{i1} \\mid \\widetilde{\\theta}_{i1})$ is the pdf for $\\mathcal{N}( \\widetilde{\\theta}_{i1}, s_{i1}^2)$. 
Similarly, for the $m_3$ studies that only report $y_{2}$ but not $y_{1}$, the non-bias corrected model reduces to $y_{i2} \\sim \\mathcal{N} (\\check{\\theta}_{i2}, s_{i2}^2)$, where $\\check{\\theta}_{i2} \\sim \\mathcal{N}( \\mu_2, \\tau_2^2)$. The corresponding likelihood for these $m_3$ studies is\n\\begin{align} \\label{NonBiasCorrectedSecondOutcome}\n\tL_3 (\\bm{\\Omega}) = \\prod_{i=m_1+m_2+1}^{n} f(y_{i2} \\mid \\check{\\theta}_{i2} ),\n\\end{align}\nwhere $f(y_{i2} \\mid \\check{\\theta}_{i2})$ is the pdf for $\\mathcal{N}(\\check{\\theta}_{i2}, s_{i2}^2)$. Altogether, the joint likelihood for all $n$ studies in the \\textit{non}-bias corrected model is the product of the likelihoods in \\eqref{NonBiasCorrectedBothOutcomes}--\\eqref{NonBiasCorrectedSecondOutcome}:\n\\begin{equation} \\label{NonBiasCorrectedLikelihood}\n\tL (\\bm{\\Omega} \\mid \\bm{y}_1, \\ldots, \\bm{y}_n ) = L_1(\\bm{\\Omega}) L_2 ( \\bm{\\Omega} ) L_3 (\\bm{\\Omega}).\n\\end{equation}\nFrom \\eqref{NonBiasCorrectedLikelihood}, we conduct posterior inference for $\\bm{\\Omega}$ by placing the priors \\eqref{latentTheta}, \\eqref{latenttheta1}, and \\eqref{latenttheta2} on the latent variables $(\\theta_{i1}, \\theta_{i2})'$, $\\widetilde{\\theta}_{i1}$, and $\\check{\\theta}_{i2}$ respectively, and the priors \\eqref{muprior} on $\\bm{\\mu}$, \\eqref{tauprior} on $(\\tau_1, \\tau_2)'$, and \\eqref{rhoprior} on $(\\rho_\\text{W}, \\rho_\\text{B})'$. We thus obtain the posterior for $\\bm{\\Omega}$ as\n\\begin{align} \\label{NonBiasCorrectedPosterior}\n\tp( \\bm{\\Omega} \\mid \\bm{y}_1, \\ldots, \\bm{y}_n) \\propto L ( \\bm{\\Omega} \\mid \\bm{y}_1, \\dots, \\bm{y}_n) p(\\bm{\\Omega}).\n\\end{align}\nWith the model fully specified, we can approximate the marginal posteriors $p(\\mu_1 \\mid \\bm{y}_1, \\ldots, \\bm{y}_n)$ and $p(\\mu_2 \\mid \\bm{y}_1, \\ldots, \\bm{y}_n)$ using MCMC. 
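Conditional on the latent study means, the non-bias corrected likelihood \eqref{NonBiasCorrectedLikelihood} is a simple product of bivariate and univariate normal densities. As a sketch (hypothetical data layout; in practice we sample the posterior with MCMC rather than evaluating the likelihood directly):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def loglik_nbc(both, only1, only2, rho_w):
    """Log of L1 * L2 * L3 in the non-bias corrected model, conditional
    on the latent study-specific means theta.

    both:  list of (y1, y2, th1, th2, s1, s2) for the m1 studies
    only1: list of (y1, th1, s1) for the m2 studies
    only2: list of (y2, th2, s2) for the m3 studies
    """
    ll = 0.0
    for y1, y2, th1, th2, s1, s2 in both:
        cov = [[s1**2, rho_w*s1*s2], [rho_w*s1*s2, s2**2]]
        ll += multivariate_normal([th1, th2], cov).logpdf([y1, y2])
    for y1, th1, s1 in only1:
        ll += norm.logpdf(y1, th1, s1)
    for y2, th2, s2 in only2:
        ll += norm.logpdf(y2, th2, s2)
    return ll
```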
As before, if $m_2=0$, then we replace \\eqref{NonBiasCorrectedLikelihood} with $L(\\bm{\\Omega} \\mid \\bm{y}_1, \\ldots, \\bm{y}_n) = L_1 (\\bm{\\Omega}) L_3 (\\bm{\\Omega})$, and if $m_3 = 0$, then we replace \\eqref{NonBiasCorrectedLikelihood} with $L(\\bm{\\Omega} \\mid \\bm{y}_1, \\ldots, \\bm{y}_n ) = L_1(\\bm{\\Omega}) L_2 (\\bm{\\Omega})$.\n\n\\subsubsection{The $D$ Measure for Quantifying the Impact of ORB} \\label{DMeasure}\n\nTo quantify the impact of publication bias in \\textit{univariate} meta-analysis, \\cite{BaiLinBolandChen2020} proposed the $D$ measure as a way of measuring the difference between a publication bias-corrected and \\textit{non}-bias-corrected posterior for a mean treatment effect. Here, we extend the $D$ measure to quantify the impact of ORB in MMA. \n\nLet $p_\\text{ABS} ( \\mu_j \\mid \\bm{y}_1, \\ldots, \\bm{y}_n)$ denote the posterior for $\\mu_j$, $j=1,2$, under the ABSORB model as described in Section~\\ref{LikelihoodImplementation}, and let $p_\\text{NBC} ( \\mu_j \\mid \\bm{y}_1, \\ldots, \\bm{y}_n )$ denote the posterior for $\\mu_j$ under the non-bias corrected model described in Section~\\ref{NonBiasCorrectedModel}. To quantify the impact of ORB for each individual endpoint $\\mu_j$, we propose taking the Hellinger distance between $p_\\text{ABS}(\\mu_j \\mid \\bm{y}_1, \\ldots, \\bm{y}_n)$ and $p_\\text{NBC} (\\mu_j \\mid \\bm{y}_1, \\ldots, \\bm{y}_n)$. If we are instead interested in quantifying the joint impact from ORB on both endpoints, we can take the Hellinger distance between the joint posteriors $p_\\text{ABS}(\\mu_1, \\mu_2 \\mid \\bm{y}_1, \\ldots, \\bm{y}_n)$ and $p_\\text{NBC} ( \\mu_1, \\mu_2 \\mid \\bm{y}_1, \\ldots, \\bm{y}_n)$. \n\nLet $\\bm{x}$ be either a random scalar or a random vector. 
The Hellinger distance between densities $f$ and $g$ is defined as\n\\begin{equation} \\label{Hellinger}\n\tH(f,g) = \\left[ 1 - \\displaystyle \\int \\sqrt{f (\\bm{x}) g (\\bm{x}) } \\, d \\bm{x} \\right]^{1\/2}.\n\\end{equation}\nThe Hellinger distance is an appealing way to quantify the dissimilarity between two probability densities. Unlike other measures of discrepancy between densities, such as the Kullback--Leibler divergence, the Hellinger distance is symmetric \\textit{and} always bounded between zero and one. This gives the Hellinger distance a clear interpretation. Values close to zero indicate that $f$ and $g$ are nearly identical distributions, while values close to one indicate that the majority of the probability mass in $f$ does \\textit{not} overlap with that of $g$. \n\nFor shorthand, let $p_\\text{ABS}$ and $p_\\text{NBC}$ be the posteriors for either $\\mu_1$, $\\mu_2$, or $\\bm{\\mu}$. Unfortunately, these posterior distributions are intractable and therefore need to be approximated. In the present context, we approximate the posteriors $p_\\text{ABS}$ and $p_\\text{NBC}$ using MCMC samples to obtain kernel density estimates, $\\widehat{p}_\\text{ABS}$ and $\\widehat{p}_\\text{NBC}$. We then use numerical integration to estimate the Hellinger distance \\eqref{Hellinger} between $\\widehat{p}_\\text{ABS}$ and $\\widehat{p}_\\text{NBC}$. In short, our measure for the impact of ORB is\n\\begin{equation} \\label{Dmeasure}\n\tD = H \\left( \\widehat{p}_\\text{ABS}, \\widehat{p}_\\text{NBC} \\right).\n\\end{equation}\nThe $D$ measure \\eqref{Dmeasure} quantifies the degree to which the ABSORB posterior changes from the non-bias corrected posterior. Smaller values of $D$ ($D \\approx 0$) indicate that $p_\\text{ABS}$ and $p_\\text{NBC}$ are almost identical, in which case we conclude that there is negligible impact from ORB on the MMA. Meanwhile, larger values of $D$ ($D \\approx 1$) indicate that ORB has a strong impact on the estimation of $\\bm{\\mu}$. 
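One way to carry out this approximation (a sketch with hypothetical names, not our exact implementation) is to fit a Gaussian kernel density estimate to each set of MCMC draws and approximate the integral in \eqref{Hellinger} on a common grid:

```python
import numpy as np
from scipy.stats import gaussian_kde

def d_measure(draws_abs, draws_nbc, grid_size=512):
    """Estimate D = H(p_ABS, p_NBC) from two sets of MCMC draws.

    Fits a Gaussian KDE to each posterior sample and approximates the
    Bhattacharyya coefficient (the integral inside the Hellinger
    distance) by a Riemann sum on a common, padded grid.
    """
    kde_abs = gaussian_kde(draws_abs)
    kde_nbc = gaussian_kde(draws_nbc)
    lo = min(np.min(draws_abs), np.min(draws_nbc))
    hi = max(np.max(draws_abs), np.max(draws_nbc))
    pad = 0.25 * (hi - lo)  # extend the grid beyond the sampled range
    x = np.linspace(lo - pad, hi + pad, grid_size)
    dx = x[1] - x[0]
    bc = np.sum(np.sqrt(kde_abs(x) * kde_nbc(x))) * dx
    return np.sqrt(max(0.0, 1.0 - bc))  # clip tiny rounding error
```

Identical posterior samples yield $D \approx 0$, while well-separated posteriors yield $D \approx 1$.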
When $D$ is large, the ABSORB posterior differs quite drastically from the non-bias corrected posterior. In the next section, we provide several illustrations of the $D$ measure on real systematic reviews from the Cochrane Database of Systematic Reviews.\n\n\\section{Meta-Evaluation with the Cochrane Database of Systematic Reviews}\n\\label{meta-meta}\n\nTo evaluate the performance of our model, we conducted a meta-evaluation of 748 systematic reviews from the Cochrane Database of Systematic Reviews (hereinafter referred to as the ``Cochrane Database''). We describe how we arrived at these 748 eligible reviews in Section A of the Supplementary Material. For dichotomous outcomes, we applied a log transformation to the risk ratio and odds ratio estimates. For each of the reviews in our meta-evaluation, we fit the ABSORB and non-bias corrected models of Section~\\ref{ABSORB}. For both models, we ran three separate chains of the MCMC algorithm for 50,000 iterations, discarding the first 10,000 samples of each chain as burn-in. This left us with a total of 120,000 samples from the three chains with which to approximate the posteriors and calculate the $D$ measures \\eqref{Dmeasure}. We monitored the convergence of the MCMC using the ESS; if the ESS was below 100 for $\\mu_1$ or $\\mu_2$, then we increased the number of iterations to 100,000, 200,000, etc.\\ as needed.\n\nWe present three representative meta-analyses from our meta-evaluation, which we denote as MMA1, MMA2, and MMA3. Table~\\ref{ThreeMetaAnalyses} provides the details of these meta-analyses, including the review topic, the effect measure, and descriptions of the bivariate treatment effects of interest. The results from these three meta-analyses are depicted in Figure~\\ref{meta:CochraneExamples}. 
In Figure~\\ref{meta:CochraneExamples}, we plot the bias-corrected posteriors under the ABSORB model (solid line) against their non-bias corrected posteriors (dashed line) for $\\mu_1$ and $\\mu_2$, as well as the contour plots for the bias-corrected and non-bias corrected joint posteriors of $\\bm{\\mu}$. We also report the $D$ measures for $\\mu_1$, $\\mu_2$, and $\\bm{\\mu}$. For MMA1 (panels (a)--(c)), we see that there is a negligible impact from ORB for both endpoints, and thus the $D$ measures are all close to zero. In MMA2 (panels (d)--(f)), there is a fairly strong impact from ORB for the first endpoint ($D=0.41$) and a negligible impact ($D=0.12$) for the second endpoint. In MMA3 (panels (g)--(i)), there is a very strong impact from ORB for the first endpoint ($D=0.98$) and a fairly strong impact ($D=0.49$) for the second endpoint. The bottom-left plot (panel (g)) in Figure~\\ref{meta:CochraneExamples} shows very little overlap between the bias-corrected and non-bias corrected posteriors for $\\mu_1$ in MMA3, and hence, we obtained a $D$ measure close to one. 
\n\n\\begin{table}[t!]\n \\centering\n \\caption{Three representative meta-analyses from the Cochrane Database.}\n \\begin{center}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{lllll}\n \\hline \n & Topic & Outcome & Effect measure & Analysis \\\\\n \\hline\n \\multirow{2}{*}{MMA1\\tablefootnote{Cochrane Database ID: CD000990, DOI: 10.1002\/14651858.CD000990.pub4.}} & Exercise for intermittent & $\\mu_1$ & Mean difference & Change in maximal walking distance or time \\\\\n & claudication & $\\mu_2$ & Mean difference & Ankle brachial index \\\\\n \\hline\n \\multirow{2}{*}{MMA2\\tablefootnote{Cochrane Database ID: CD000335, DOI: 10.1002\/14651858.CD000335.pub2.}} & Exercise therapy for treatment & $\\mu_1$ & Mean difference & Function measure \\\\\n & of non-specific low back pain & $\\mu_2$ & Mean difference & Pain measure \\\\\n \\hline\n \\multirow{2}{*}{MMA3\\tablefootnote{Cochrane Database ID: CD001886, DOI: 10.1002\/14651858.CD001886.pub4.}} & Anti-fibrinolytic use for minimizing & $\\mu_1$ & Risk ratio & Number of patients exposed to allogeneic blood \\\\\n & perioperative allogeneic blood transfusion & $\\mu_2$ & Mean difference & Units of allogeneic blood transfused \\\\\n \\hline\n\\end{tabular}}\n\\end{center} \\label{ThreeMetaAnalyses}\n\\end{table}\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=.52\\linewidth]{Figure1_legend.png} 
\\\\\n\\includegraphics[width=.25\\textwidth]{meta_ABSORB_mu1_plots_case2.pdf}\n\\includegraphics[width=.25\\textwidth]{meta_ABSORB_mu2_plots_case2.pdf}\n\\includegraphics[width=.25\\textwidth]{meta_ABSORB_contour_plots_case2.pdf}\n\\includegraphics[width=.25\\textwidth]{meta_ABSORB_mu1_plots_case1.pdf}\n\\includegraphics[width=.25\\textwidth]{meta_ABSORB_mu2_plots_case1.pdf}\n\\includegraphics[width=.25\\textwidth]{meta_ABSORB_contour_plots_case1.pdf}\n\\includegraphics[width=.25\\textwidth]{meta_ABSORB_mu1_plots_case3.pdf}\n\\includegraphics[width=.25\\textwidth]{meta_ABSORB_mu2_plots_case3.pdf}\n\\includegraphics[width=.25\\textwidth]{meta_ABSORB_contour_plots_case3.pdf}\n\t\\caption{Illustrations of three meta-analyses from the Cochrane Database. Panels~(a)--(c) show the results for MMA1, panels (d)--(f) show the results for MMA2, and panels (g)--(i) show the results for MMA3. In panels (g) and (i), $\\mu_1$ is plotted on the log-RR scale. }\\label{meta:CochraneExamples}\n\t\\end{figure}\n\nIn particular, for MMA2, the 95\\% posterior credible interval for $\\mu_1$ (i.e., the mean change in function measure after exercise therapy for lower back pain) shifted from $(-3.52, -0.38)$ under the non-bias corrected posterior to $(-2.53, 0.69)$ under the ABSORB bias-corrected posterior. This indicates that after adjusting for ORB, the 95\\% bias-corrected posterior interval contained zero, and the mean change in function measure after exercise therapy was \\textit{no longer} statistically significant. As a consequence of non-negligible ORB, 12.03\\% of all 748 meta-analyses in our meta-evaluation (90 reviews) had a change in statistical significance for the first outcome, and 10.56\\% (79 reviews) had a change in statistical significance for the second outcome. For 12 reviews, the statistical significance changed for \\textit{both} $\\mu_1$ and $\\mu_2$. These results demonstrate that non-negligible ORB can have a profound effect on the conclusions from MMA. 
\n\n In Appendix~\\ref{AdditionalResults}, we provide the specific quantiles of the $D$ measure from our analysis. Based on these quantiles, we determined the following guidelines for interpreting the $D$ measure:\n\\begin{itemize}\n \\item 0.00 to 0.20: probably no impact from ORB;\n \\item 0.10 to 0.40: may represent moderate impact from ORB;\n \\item 0.30 to 0.60: may represent substantial impact from ORB;\n \\item 0.50 to 1.00: may represent severe impact from ORB.\n\\end{itemize}\nOur intervals were inspired by the guidelines given for the $I^2$ statistic \\citep{higgins2002quantifying} in the Cochrane Handbook for Systematic Reviews of Interventions.\\footnote{\\url{https:\/\/handbook-5-1.cochrane.org\/chapter_9\/9_5_2_identifying_and_measuring_heterogeneity.htm}.} The $I^2$ statistic (for measuring heterogeneity in univariate meta-analyses) also lies between 0 and 1, and the Cochrane Handbook provides overlapping intervals for ``unimportant,'' ``moderate,'' ``substantial,'' and ``considerable'' heterogeneity based on $I^2$, so as to avoid setting hard cutoffs for its interpretation. \n\nIn our experience, a $D$ measure of 0.20 or higher usually suggested non-negligible ORB or the potential to qualitatively change the conclusions from meta-analyses. Meanwhile, a $D$ measure below 0.10 normally ruled out any impact from ORB (as illustrated in panels~(a)--(c) of Figure~\\ref{meta:CochraneExamples}). However, there were a few reviews where the statistical significance changed for an outcome even when $D<0.10$. This occurred when one of the CI endpoints was extremely close to zero -- in this case, the 95\\% CIs before and after bias correction were very similar to each other, but even a tiny discrepancy near zero changed the conclusion. 
Thus, the systematic reviewer should also investigate the CIs, not just the $D$ measure.\n\nOur meta-evaluation of the Cochrane Database found that 50.00\\% of MMAs had $D < 0.10$ for the first endpoint, 48.80\\% had $D < 0.10$ for the second endpoint, and 52.94\\% had $D < 0.10$ for the joint $D$ measure of $\\bm{\\mu}$. However, there were also a few reviews where ORB had a very high impact. Namely, 26 reviews had $D$ measures greater than 0.50 for the first endpoint, and 11 reviews had $D$ measures greater than 0.50 for the second endpoint. Figure~\\ref{CochraneHistograms} plots the empirical histograms for the $D$ measure from our meta-evaluation. \n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[width=.32\\textwidth]{meta_D1.pdf}\n\t\\includegraphics[width=.32\\textwidth]{meta_D2.pdf}\n\t\\includegraphics[width=.32\\textwidth]{meta_D12.pdf}\n\t\\caption{Empirical histograms of the $D$ measure for $\\mu_1$ (left panel), $\\mu_2$ (middle panel), and $\\bm{\\mu}$ (right panel) from all 748 reviews in our meta-evaluation.}\\label{CochraneHistograms}\n\\end{figure}\n\nInstead of using the provided guidelines, an alternative is to simply use the quantiles from our meta-evaluation for interpretation. The quantiles for the $D$ measures are provided in Table \\ref{Dtable1} of Appendix \\ref{AdditionalResults}. Using this table, the systematic reviewer can locate the percentile of the $D$ measure obtained from his or her dataset among the $D$ measures from the Cochrane Database and conclude that the evidence for ORB in his or her study is in, for example, the top 20\\% of all analyzed datasets. \n\nOur meta-evaluation of real systematic reviews from the Cochrane Database illustrates the potential of the ABSORB model for adjusting the estimates of effect sizes in the presence of ORB and the $D$ measure for quantifying the impact of ORB. 
In Appendix~\\ref{Simulations}, we further validate our model through simulation studies under a variety of degrees of between-study heterogeneity and missingness. To summarize briefly, our simulation studies demonstrate that when ORB is present, the ABSORB model has lower bias and better empirical coverage than alternative approaches that remove studies with missing data, that impute the missing outcomes, or that ignore the correlations between the two endpoints.\n\n\\section{Case Study Results} \\label{Application}\n\nWe now apply the ABSORB model to the meta-analysis introduced in Section~\\ref{MotivatingData} on the effects of interventions on ReAd and QoL for HF patients. As in Section~\\ref{ABSORB}, $m_1$ denotes the number of studies that reported both ReAd and QoL, $m_2$ is the number of studies that reported only ReAd, and $m_3$ is the number of studies that reported only QoL. As discussed in Section~\\ref{MotivatingData}, our sample contained 45 studies on the effects of interventions on HF patients. We initially had $n=41$ published studies that reported at least one of ReAd or QoL ($m_1 = 8$, $m_2 = 26$, $m_3 = 7$). After querying the corresponding authors, we were able to obtain six additional results for QoL and a total of $n = 44$ studies with results for at least one of ReAd or QoL ($m_1 = 11$, $m_2 = 23$, $m_3 = 10$). \n\nWith our \\textit{a priori} knowledge about which studies had QoL results after querying the corresponding authors, we proceeded to conduct a two-stage analysis. In the first stage, we applied the ABSORB model to only the $n=41$ published studies that reported at least one of the ReAd or QoL outcomes (i.e., \\textit{before} we had queried the authors). In the second stage, we performed our analysis with the $n=44$ updated studies (i.e., \\textit{after} querying the authors). 
Our purpose for conducting this two-stage analysis was to see how our results changed after we were able to partially mitigate some of the ORB for QoL by querying corresponding authors. For our analysis, we did not include the studies that failed to report either ReAd or QoL. In Appendix~\\ref{AdditionalHFResults}, we apply an augmented model (introduced in Appendix~\\ref{ABSORBISM}) to \\textit{all} 45 intervention studies in both the published and the updated data. \n\nTo quantify the impact of outcome reporting bias in our MMA, we fit the ABSORB model of Section~\\ref{ABSORBModel} and the non-bias corrected model of Section~\\ref{NonBiasCorrectedModel} and used their posterior samples to compute the $D$ measure \\eqref{Dmeasure}. For both models, we ran three MCMC chains of 100,000 iterations, discarding the first 10,000 iterations as burn-in. In Appendix~\\ref{AdditionalHFResults}, we provide trace plots for these models, which show that the three chains mixed well and that the number of iterations we used was sufficient to achieve convergence.\n\n\\begin{figure}[!htbp]\n\\centering\n\\hspace{.5cm} \\includegraphics[width=.5\\linewidth]{Table3_legend.png} \\\\\n\\includegraphics[width=.4\\textwidth]{Table3_ReAd.pdf}\n\\includegraphics[width=.4\\textwidth]{Table3_QoL.pdf}\n\\caption{Plots of the posterior means and 95\\% posterior credible intervals for our case study on interventions for HF patients under the non-bias corrected and ABSORB models. Panel~(a) plots the results for ReAd and panel~(b) plots the results for QoL.} \\label{HeartFailureResults}\n\\end{figure}\n\nFigure \\ref{HeartFailureResults} shows the posterior mean effect sizes and 95\\% posterior credible intervals for ReAd and QoL. For ReAd (panel (a) of Figure \\ref{HeartFailureResults}), there was little difference between the MMA results obtained from the published and updated datasets. 
The ABSORB model estimated a mean RR of 0.955 with a 95\\% CI of (0.876, 1.051) for the published data, and a mean RR of 0.956 with a 95\\% CI of (0.877, 1.054) for the updated data, which indicated \\textit{no} significant reduction of risk for hospital readmission for the intervention group. However, there \\textit{was} a qualitative difference in the clinical conclusions for ReAd from the \\textit{non}-bias corrected models. In addition to slightly lower mean estimates of RR for hospital readmission (0.931 for both the published and the updated data), the non-bias corrected models estimated 95\\% CIs of (0.862, 0.993) for the published data and (0.862, 0.994) for the updated data. This indicates that \\textit{without} correcting for ORB with the ABSORB model, our meta-analysis would have concluded that there was a \\textit{significant} reduction in risk of hospital readmission. \n\nAs for QoL (panel (b) of Figure \\ref{HeartFailureResults}), the ABSORB model estimated an SMD of 0.15 for QoL between intervention and control groups in the published data, which was slightly larger than the result obtained from the updated data (0.138). The 95\\% CI for the updated data (0.031, 0.232) was also narrower than the interval for the published data (0.009, 0.278). Based on these 95\\% CIs, there was a significant improvement in QoL for heart failure patients in the intervention group. Meanwhile, in the \\textit{non}-bias corrected model, the point estimates obtained from the published data and the updated data were both higher than their corresponding estimates under the ABSORB model. However, there was no change in the clinical conclusion from our ORB correction, since the non-bias corrected model also showed a significant improvement in QoL for the intervention group. 
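The significance statements above amount to checking whether each 95\% credible interval excludes its null value (RR $= 1$ for readmission, SMD $= 0$ for quality of life). A minimal check using the intervals reported above:

```python
def excludes_null(ci, null):
    """True if the interval (lo, hi) excludes the null value."""
    lo, hi = ci
    return null < lo or hi < null

# 95% CIs for the relative risk of readmission (null RR = 1), published data
absorb_read = (0.876, 1.051)       # ABSORB (bias-corrected)
uncorrected_read = (0.862, 0.993)  # non-bias corrected

assert not excludes_null(absorb_read, 1.0)   # not significant after correction
assert excludes_null(uncorrected_read, 1.0)  # significant without correction

# 95% CIs for the QoL SMD (null = 0): significant under ABSORB in both datasets
assert excludes_null((0.009, 0.278), 0.0)  # published
assert excludes_null((0.031, 0.232), 0.0)  # updated
```

The check makes the qualitative change for ReAd explicit: the uncorrected interval lies entirely below 1, while the bias-corrected interval straddles it.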
\n\n \n\n\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\hspace{.4cm} \\includegraphics[width=.55\\linewidth]{Figure1_legend.png} \\\\\n\t\\includegraphics[width=.28\\textwidth]{app_ABSORB_mu1_plots_before.pdf}\n\t\\includegraphics[width=.28\\textwidth]{app_ABSORB_mu2_plots_before.pdf}\n\t\\includegraphics[width=.28\\textwidth]{app_ABSORB_contour_plots_before.pdf} \\\\\n\t\\includegraphics[width=.28\\textwidth]{app_ABSORB_mu1_plots_after.pdf} \n\t\\includegraphics[width=.28\\textwidth]{app_ABSORB_mu2_plots_after.pdf} \n\t\\includegraphics[width=.28\\textwidth]{app_ABSORB_contour_plots_after.pdf}\n\t\\caption{Panels~(a)--(c) show the results for the meta-analysis of interventions on HF patients using the \\textit{published} data. Panels~(d)--(f) show the results using the \\textit{updated} data. ReAd is plotted on the log-RR scale in panels (a), (c), (d), and (f). }\\label{HeartFailurePlots}\n\\end{figure}\n\nIn Figure~\\ref{HeartFailurePlots}, we plot the posterior distributions of ReAd and QoL for the ABSORB model (solid line) and the non-bias corrected model (dashed line). In panels (a)-(c), we plot the results based on the published data, and in panels (d)-(f), we plot the results based on the updated data. For ReAd, we obtained $D = 0.25$ on the published data and $D=0.26$ on the updated data. These $D$ measures reflect the non-negligible impact from outcome reporting bias. In this case, the shift in the ReAd posterior towards the null side was enough to qualitatively change the conclusions from our meta-analysis.\n\nFor QoL, we obtained $D=0.25$ on the published data and $D=0.17$ on the updated data. By procuring more QoL outcomes from some missing studies, the updated data was less subject to ORB. This was consistent with the lower $D$ measure for QoL in our second stage analysis. 
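The $D$ measure itself is defined in \eqref{Dmeasure} from the bias-corrected and non-bias corrected posterior densities. Purely as an illustration of the idea (this is \textit{not} the paper's formula), one common dissimilarity between two posteriors represented by MCMC samples is the Hellinger distance between kernel density estimates; the posterior means and standard deviations below are hypothetical:

```python
import numpy as np
from scipy.stats import gaussian_kde

def hellinger(samples_a, samples_b, grid_size=512):
    """Hellinger distance between KDEs of two posterior samples.
    Lies in [0, 1]; 0 means the two densities coincide."""
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    grid = np.linspace(lo, hi, grid_size)
    dx = grid[1] - grid[0]
    pa = gaussian_kde(samples_a)(grid)
    pb = gaussian_kde(samples_b)(grid)
    pa /= pa.sum() * dx  # renormalise on the finite grid
    pb /= pb.sum() * dx
    bc = np.sqrt(pa * pb).sum() * dx  # Bhattacharyya coefficient
    return float(np.sqrt(max(0.0, 1.0 - bc)))

rng = np.random.default_rng(0)
corrected = rng.normal(-0.046, 0.045, 10_000)    # hypothetical log-RR posterior
uncorrected = rng.normal(-0.071, 0.035, 10_000)  # hypothetical shifted posterior
d = hellinger(corrected, uncorrected)
assert 0.0 < d < 1.0
```

In practice the $D$ measure should be computed from its definition in \eqref{Dmeasure}; the sketch above only conveys how a shift between the two posteriors maps to a value between 0 and 1.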
Panel (b) of Figure~\\ref{HeartFailureResults} and the middle two panels of Figure~\\ref{HeartFailurePlots} illustrate that the unadjusted and adjusted results for QoL were more similar to each other in the updated data than in the published data. In particular, the Jaccard index (the length of the intersection of two intervals divided by the length of their union) gives a measure of consistency between the non-bias corrected and bias-corrected 95\\% CIs, with a larger value indicating greater similarity. For QoL, the Jaccard index was 0.62 in the published data and 0.72 in the updated data. In many practical situations, it may not be possible for systematic reviewers to obtain an updated dataset. However, this case study shows that our method produces bias-corrected results that are more consistent with the unadjusted analyses when researchers \\textit{are} able to mitigate some of the ORB.\n\nOur findings have important implications for clinicians, policymakers, and HF patients. Reducing hospital readmission for HF patients has been the primary objective of these stakeholders \\citep{ZiaeianFonarow2016}, and this has been the rationale for employing interventions like TM and STS. However, our results suggest that these interventions may not significantly reduce the risk of readmission. On the other hand, there seems to be a significant improvement in quality of life for HF patients who receive these interventions, compared to the patients who receive usual care. Therefore, we may conclude that TM and STS are still beneficial for the quality of life of patients, but that other approaches may be needed to significantly reduce the risk of hospital readmission.\n\n\\section{Discussion} \\label{Discussion}\n\nIn this article, we have introduced a Bayesian selection model for correcting and quantifying the impact of outcome reporting bias (ABSORB) in multivariate meta-analysis. 
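The interval-consistency (Jaccard) check used in the case study above can be computed directly from the interval endpoints; the intervals in the example below are hypothetical:

```python
def jaccard(ci_a, ci_b):
    """Jaccard index of two intervals: length of their intersection
    divided by the length of their union (inclusion-exclusion)."""
    (a_lo, a_hi), (b_lo, b_hi) = ci_a, ci_b
    intersection = max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))
    union = (a_hi - a_lo) + (b_hi - b_lo) - intersection
    return intersection / union

# Hypothetical bias-corrected vs. uncorrected 95% CIs
assert jaccard((0.0, 1.0), (0.0, 1.0)) == 1.0   # identical intervals
assert jaccard((0.0, 1.0), (2.0, 3.0)) == 0.0   # disjoint intervals
assert abs(jaccard((0.0, 1.0), (0.5, 1.5)) - 1/3) < 1e-12  # partial overlap
```

This is the computation behind the 0.62 (published) and 0.72 (updated) values for QoL quoted in the case study.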
Our model not only corrects the estimates of treatment effects, but also quantifies their uncertainty due to the presence of ORB. We employed the $D$ measure \\eqref{Dmeasure} to quantify the \\textit{impact} of ORB on the results of MMA by measuring the dissimilarity between the bias-corrected and non-bias corrected posterior densities. Our approaches were empirically evaluated through a meta-evaluation of 748 real systematic reviews from the Cochrane Database. In addition, we applied the ABSORB model to a meta-analysis on the effects of interventions on quality of life and hospital readmission for heart failure patients. Our results show that the presence of ORB can lead to qualitative differences in the conclusions from MMA. In particular, the relative risk of hospital readmission for HF patients in the intervention group shifted from a significant decrease (RR: 0.931, 95\\% CI 0.862--0.993) to a statistically \\textit{nonsignificant} effect (RR: 0.955, 95\\% CI 0.876--1.051) once we adjusted for ORB. Furthermore, we found in our meta-evaluation that after correcting for ORB, 157 out of 748 bivariate meta-analyses from the Cochrane Database \\textit{also} had a change in statistical significance for at least one outcome. Our study demonstrates the importance of accounting for ORB when conducting MMA.\n\nIn this paper, we focused on bivariate meta-analysis. However, the ABSORB model can also be extended to models with more than two outcomes. Suppose that we have $p$ outcomes of interest. When $p>2$, we can model each of the outcomes $y_{ij}$, $j = 1, \\ldots, p$, exactly as we did in the bivariate case through \\eqref{YgivenZ}--\\eqref{latentZ}. 
We also model each of the correlation parameters that control the likelihood of reporting, $\\rho_j := \\textrm{corr}(\\epsilon_{ij}, \\delta_{ij})$, and the correlations between $\\epsilon_{ij}$, $\\epsilon_{ij'}$, $u_{ij}$ and $u_{ij'}$ for $j \\neq j'$ in the same way as in \\eqref{EpsilonDeltaCorrelation}--\\eqref{BetweenStudyCorrelation}. While this extension of ABSORB to $p>2$ endpoints is straightforward, the potential downside is that the number of correlation parameters to estimate can be very large if $p$ is even moderately large. Thus, it may be desirable to simplify the correlation structure when $p$ is large so that the model remains parsimonious.\n\nAnother limitation when $p > 2$ is that the ABSORB model requires consideration of $2^p-1$ scenarios to completely specify its likelihood (e.g., studies with no missing endpoints, studies with only the first endpoint missing, studies with only the first two endpoints missing, etc.). While this is feasible for small $p$, it may become cumbersome if $p$ is moderately large. In the future, we plan to explore computationally efficient ways to extend the ABSORB model to handle a larger number of endpoints. This will make our model more appealing not just for MMA, but also for network meta-analysis (NMA). NMA expands the scope of a pairwise meta-analysis by simultaneously making comparisons across trials based on a common comparator (e.g., a standard treatment) \\citep{Lumley2002}. NMA combines direct evidence and indirect evidence under the assumption of evidence consistency. Ignoring the impact of ORB in NMA can lead to bias in both direct evidence and indirect evidence. Thus, it is critical to develop new approaches to account for ORB in the NMA framework.\n\nWhile the $D$ measure \\eqref{Dmeasure} that we introduced in Section~\\ref{QuantifyingORB} is a useful statistic for summarizing the \\textit{sensitivity} of the results from MMA to ORB, there are several limitations to it. 
First, the $D$ measure does not take into account the \\textit{direction} of the bias. Second, the $D$ measure does not have a variance estimate associated with it. Thus, unlike the $I^2$ statistic \\citep{higgins2002quantifying} or other measures for quantifying PB \\citep{LinChu2018}, there is no natural way of forming $100(1-\\alpha)\\%, \\alpha \\in (0,1)$, uncertainty intervals for the $D$ measure. One possibility is to calculate the $D$ measure on many independent, slightly perturbed datasets and to use the quantiles of the subsequent empirical distribution to obtain an interval estimate for $D$. However, this approach is also limited, and it is desirable to find more straightforward ways of obtaining \\textit{interval} estimates for $D$. In the future, we hope to develop measures that not only quantify the impact of ORB, but that can also take into account both the direction of the bias and the inherent uncertainty of the measure itself.\n\n\\section*{Code}\n\nAn \\textsf{R} package for implementing the model in this paper is available at \\url{https:\/\/github.com\/raybai07\/ABSORB}.\n\n\\section*{Acknowledgments}\nWe acknowledge Dr. Brian Finkelman for his help in collecting the intervention studies for the case study in this paper. This work was initiated when the first listed author was a postdoctoral researcher at the University of Pennsylvania under the supervision of the last listed author.\n\n\\section*{Funding}\nThis research was supported in part by generous funding from the College of Arts and Sciences at the University of South Carolina (RB), National Science Foundation grant OIA-1655740 (RB), National Institutes of Health (NIH) grants 1R01LM012607, 1R01AI130460, 1R01AG073435, 1R56AG074604, 1R01LM013519, 1R56AG069880 (XL and YC), and NIH grants R01LM012982 and UL1TR002494 (HC). This work was supported partially through Patient-Centered Outcomes Research Institute (PCORI) Project Program Awards (ME-2019C3-18315 and ME-2018C3-14899). 
All statements in this report, including its findings and conclusions, are solely those of the authors and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute (PCORI), its Board of Governors or Methodology Committee.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{S1. Electronic band structures}\n\\setlength{\\parindent}{0pt}\n\\begin{figure}[h]\n \\centering\n {\\includegraphics[width=0.8\\textwidth]{SI\/Fig_S1_band.pdf}} \\\\\n \\caption{Electronic band structures of (a) \\ce{Sb2S3} and (b) \\ce{Sb2Se3}.}\n \\label{fig_band}\n\\end{figure}\n\n\\section*{S2. Fr{\\\"o}hlich polaron coupling constants}\n\\begin{table}[h]\n \\label{tab_alpha}\n \\caption{Parameters used to calculate Fr{\\\"o}hlich polaron coupling constant $\\alpha$ in this paper. The effective phonon frequency ($\\omega$) is in THz}\n\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}ccccccc}\n\\hline\n\\multirow{2}{*}{Material} & \\multirow{2}{*}{} & \\multirow{2}{*}{\\textit{\\textepsilon}$_{\\infty}$} & \\multirow{2}{*}{\\textit{\\textepsilon}$_0$} & \\multirow{2}{*}{\\textit{$\\omega$}} & \\multicolumn{2}{c}{\\textit{m}$^*$} \\\\ \\cline{6-7} \n & & & & & e & h \\\\ \\hline\n\\multirow{4}{*}{\\ce{Sb2S3}} & avg & \\multirow{4}{*}{10.26} & \\multirow{4}{*}{68.76} & \\multirow{4}{*}{3.49} & 0.40 & 0.64 \\\\\n & \\textit{x} & & & & 0.16 & 0.47 \\\\\n & \\textit{y} & & & & 0.92 & 0.65 \\\\\n & \\textit{z} & & & & 5 & 0.97 \\\\ \\hline\n\\multirow{4}{*}{\\ce{Sb2Se3}} & avg & \\multirow{4}{*}{13.52} & \\multirow{4}{*}{76.27} & \\multirow{4}{*}{2.57} & 0.35 & 0.9 \\\\\n & \\textit{x} & & & & 0.14 & 0.85 \\\\\n & \\textit{y} & & & & 0.81 & 0.55 \\\\\n & \\textit{z} & & & & 7 & 3 \\\\ \\hline\n\\end{tabular*}\n\\end{table}\n~\\\\\nThe long-range electron-longitudinal optical phonon coupling can be expressed by the dimensionless Fr{\\\"o}hlich polaron coupling constant 
$\\alpha$\\cite{frohlich1952interaction}\n\\begin{equation}\n\\alpha=\\frac{e^2}{\\hbar}(\\frac{1}{\\varepsilon_\\infty}-\\frac{1}{\\varepsilon_0})\\sqrt{\\frac{m^*}{2\\hbar\\omega}},\n\\end{equation}\nwhere \\textit{\\textepsilon}$_{\\infty}$ and \\textit{\\textepsilon}$_0$ are the high-frequency and static dielectric constants, respectively, \\textit{m}$^*$ is the effective mass and $\\omega$ is the effective phonon frequency. The effective mass and effective frequency were calculated using the AMSET package\\cite{ganose2021efficient}.\nThe isotropic $\\alpha$ was obtained using the harmonic mean of the effective masses and the arithmetic average of the dielectric constants. The anisotropic $\\alpha$ was calculated using the anisotropic (direction-dependent) effective masses, consistent with previous work\\cite{guster2021frohlich}.\n~\\\\\n\\section*{S3. Effect of grain boundary scattering}\n\n\\setlength{\\parindent}{0pt}\n\\begin{figure}[ht]\n \\centering\n {\\includegraphics[width=1.0\\textwidth]{SI\/Fig_S2_mobility_mfp.pdf}} \\\\\n \\caption{Calculated component and total mobilities with mean free path of (a) \\SI{100}{\\nanometre} and (b) \\SI{10}{\\nm} as a function of temperature.}\n \\label{fig_mfp}\n\\end{figure}\n\nThe effect of grain boundary scattering on the mobility in \\ce{Sb2X3} was evaluated by incorporating an average grain size using the AMSET package\\cite{ganose2021efficient}. The grain boundary scattering rate is set to $v_g\/L$, where $v_g$ is the group velocity and \\textit{L} is the mean free path. In this work, mean free paths of \\SI{10}{\\nm} and \\SI{100}{\\nm} were tested. The carrier concentration and defect concentration were assumed to be \\SI{e13}{\\conc} and \\SI{e17}{\\conc}, respectively. According to our results (Fig. S2), at temperatures between 100 and \\SI{500}{\\kelvin}, the total mobility is not limited by grain boundary scattering.\n\n\\section*{S4. 
Workflow of localising a polaron in \\ce{Sb2X3}}\n\nWe attempted to localise an electron or a hole in \\ce{Sb2S3} and \\ce{Sb2Se3} by the bond distortion method and electron attractor method. A 3$\\times$1$\\times$1 supercell (with dimensions of 11.40$\\times$11.20$\\times$\\SI{11.39}{\\cubic\\angstrom} and 11.85$\\times$11.55$\\times$\\SI{11.93}{\\cubic\\angstrom} for \\ce{Sb2S3} and \\ce{Sb2Se3}, respectively) was constructed. In each system, one electron per supercell was added or removed to introduce an electron or a hole.\n~\\\\\n~\\\\\nWe first applied the bond distortion method to introduce distortions around one designated atom (Sb for adding an electron and S\/Se for adding a hole) and to add small random displacements to all atoms. These steps were implemented using the ShakeNBreak package\\cite{shakenbreak_github,mosquera2021search}. Different distortions between 20\\% and 40\\% with both compression and stretching were considered and a standard deviation of 0.15 was used for the random displacements. However, after structural optimisation, all structures relaxed to perfect configurations.\n~\\\\\n~\\\\\nWe further combined the bond distortion method with the electron attractor method to test for the formation of hole polarons in \\ce{Sb2S3}. The electron attractor method refers to attracting electrons or holes to a particular atomic site by substituting a particular atom. Phosphorus has a stronger attraction to holes than sulfur as it contains fewer protons and has a lower electronegativity. Here, we used one P to replace one S in a supercell, and also introduced some local perturbations around the P atom. The number of electrons was kept the same as in the neutral substituted system, leaving one extra hole in \\ce{Sb2S3}. The structure with the substituted atom and local distortions was fully relaxed. Finally, we restored the S atom and relaxed the configuration again. 
Nevertheless, all structures went back to perfect configurations, indicating that localised polarons are unlikely to form.\n\n\\section*{S5. Parameters used to calculate mobilities in \\ce{Sb2X3}}\nThe \\textit{k}-point meshes used to calculate transport properties were tested (shown in Fig. \\ref{fig_con}) and a \\textit{k}-point mesh of 169$\\times$57$\\times$57 was used for all calculations. The carrier concentration was set to \\SI{e13}{\\conc} according to previous experimental results in \\ce{Sb2X3} \\cite{chen2017characterization,liu2016green,zhou2014solution,yuan2016rapid,li2021defect,chalapathi2020influence,black1957electrical}. The calculated effective phonon frequency is 3.49~THz for \\ce{Sb2S3} and 2.57~THz for \\ce{Sb2Se3}. The calculated deformation potentials, elastic constants and dielectric constants are shown in Table \\ref{tab_def}, \\ref{tab_ela} and \\ref{tab_die}, respectively.\n\n\\begin{figure}[ht]\n \\centering\n {\\includegraphics[width=0.8\\textwidth]{SI\/Fig_S3_mobility_convergence.pdf}} \\\\\n \\caption{The convergence of mobility in \\ce{Sb2X3} under different \\textit{k}-point meshes. 
The defect concentration is set to be \\SI{e14}{\\conc} and the temperature is set to be \\SI{300}{\\kelvin}.}\n \\label{fig_con}\n\\end{figure}\n\n\\begin{table}[ht]\n\\caption{Calculated deformation potentials (D, eV) for the upper valence and lower conduction bands of \\ce{Sb2S3} and \\ce{Sb2Se3}}\n \\label{tab_def}\n \\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}cccccc}\n\\hline\nMaterial & \\multicolumn{2}{l}{} & D$_{XX}$ & D$_{YY}$ & D$_{ZZ}$ \\\\ \\hline\n\\multirow{6}{*}{\\ce{Sb2S3}} & \\multirow{3}{*}{VBM} & D$_{XX}$ & 5.41 & 0.26 & 0.07 \\\\\n & & D$_{YY}$ & 0.26 & 0.10 & 0.02 \\\\\n & & D$_{ZZ}$ & 0.07 & 0.02 & 1.27 \\\\ \\cline{2-6}\n & \\multirow{3}{*}{CBM} & D$_{XX}$ & 5.26 & 0.42 & 0.17 \\\\\n & & D$_{YY}$ & 0.42 & 2.43 & 3.35 \\\\\n & & D$_{ZZ}$ & 0.17 & 3.35 & 2.62 \\\\ \\hline\n\\multirow{6}{*}{\\ce{Sb2Se3}} & \\multirow{3}{*}{VBM} & D$_{XX}$ & 0.53 & 0.16 & 0.05 \\\\\n & & D$_{YY}$ & 0.16 & 2.86 & 0.03 \\\\\n & & D$_{ZZ}$ & 0.05 & 0.03 & 2.47 \\\\ \\cline{2-6}\n & \\multirow{3}{*}{CBM} & D$_{XX}$ & 3.31 & 0.36 & 0.09 \\\\\n & & D$_{YY}$ & 0.36 & 0.39 & 0.29 \\\\\n & & D$_{ZZ}$ & 0.09 & 0.29 & 1.38 \\\\ \\hline\n\\end{tabular*}\n\\end{table}\n\n\\begin{table}[ht]\n \\caption{Calculated elastic constants (in GPa) of \\ce{Sb2S3} and \\ce{Sb2Se3}}\n \\label{tab_ela}\n\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c}\n\\hline\nMaterial & & C$_{XX}$ & C$_{YY}$ & C$_{ZZ}$ & C$_{XY}$ & C$_{YZ}$ & C$_{ZX}$ \\\\ \\hline\n\\multirow{6}{*}{\\ce{Sb2S3}} & C$_{XX}$ & 93.75 & 28.00 & 18.50 & 0.00 & 0.00 & 0.00 \\\\\n & C$_{YY}$ & 28.00 & 57.25 & 15.39 & 0.00 & 0.00 & 0.00 \\\\\n & C$_{ZZ}$ & 18.50 & 15.39 & 37.69 & 0.00 & 0.00 & 0.00 \\\\\n & C$_{XY}$ & 0.00 & 0.00 & 0.00 & 31.68 & 0.00 & 0.00 \\\\\n & C$_{YZ}$ & 0.00 & 0.00 & 0.00 & 0.00 & 17.11 & 0.00 \\\\\n & C$_{ZX}$ & 
0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 8.77 \\\\ \\hline\n\\multirow{6}{*}{\\ce{Sb2Se3}} & C$_{XX}$ & 77.15 & 25.63 & 17.11 & 0.00 & 0.00 & 0.00 \\\\\n & C$_{YY}$ & 25.63 & 54.15 & 17.03 & 0.00 & 0.00 & 0.00 \\\\\n & C$_{ZZ}$ & 17.11 & 17.03 & 31.75 & 0.00 & 0.00 & 0.00 \\\\\n & C$_{XY}$ & 0.00 & 0.00 & 0.00 & 23.42 & 0.00 & 0.00 \\\\\n & C$_{YZ}$ & 0.00 & 0.00 & 0.00 & 0.00 & 18.41 & 0.00 \\\\\n & C$_{ZX}$ & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 5.08 \\\\ \\hline\n\\end{tabular*}\n\\end{table}\n\n\\begin{table}[ht]\n\\caption{Calculated static (\\textit{\\textepsilon}$_0$) and high-frequency (\\textit{\\textepsilon}$_{\\infty}$) dielectric constants of \\ce{Sb2S3} and \\ce{Sb2Se3}}\n \\label{tab_die}\n\\begin{tabular*}{1\\textwidth}{@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c@{\\extracolsep{\\fill}}c}\n \\hline\n \\multirow{2}{*}{Material} & \\multicolumn{3}{c}{\\textit{\\textepsilon}$_0$} & \\multicolumn{3}{c}{\\textit{\\textepsilon}$_{\\infty}$} \\\\\n & \\textit{x} & \\textit{y} & \\textit{z} & \\textit{x} & \\textit{y} & \\textit{z} \\\\ \\hline\n Sb$_2$S$_3$ & 98.94 & 94.21 & 13.14 & 11.55 & 10.97 & 8.25 \\\\ \n Sb$_2$Se$_3$ & 85.64 & 128.18 & 15.00 & 15.11 & 14.92 & 10.53 \\\\ \\hline\n\\end{tabular*}\n\\end{table}\n\n\\clearpage\n\n\\section{Introduction}\nAntimony chalcogenides (\\ce{Sb2X3}; X=S, Se) have emerged as promising light absorbing materials due to their attractive electronic and optical properties, including ideal band gaps (\\SIrange{1.1}{1.8}{\\electronvolt}) and high optical absorption coefficients (\\textgreater \\SI{e5}{\\per\\cm})\\cite{versavel2007structural,liu2016green,messina2009antimony,lai2012preparation,chen2015optical,vadapoo2011self,vadapoo2011electronic,nasr2011electronic,savory2019complex,lei2019review,wang2022lone}. 
\nThey are binary compounds with earth-abundant, low-cost and non-toxic constituents.\nThe \\ac{PCEs} in \\ce{Sb2X3} solar cells have improved rapidly over the past decade, with record efficiencies reaching \\SI{7.50}{\\percent} and \\SI{10.12}{\\percent} for \\ce{Sb2S3} and \\ce{Sb2Se3}, respectively\\cite{choi2014highly,duan2022effi}. Nevertheless, efficiencies are still well below those seen in state-of-the-art CdTe or hybrid halide perovskite devices, which have reached above \\SI{25}{\\percent} under laboratory conditions\\cite{green2021solar}.\n\nThe underlying efficiency bottleneck is unclear. While the structural, electronic and optical properties of \\ce{Sb2X3} have been widely investigated,\nthe charge carrier dynamics, which critically affect conversion efficiencies, remain controversial. Charge carrier transport in \\ce{Sb2X3} has been reported by several studies\\cite{yang2019ultrafast,grad2021charge,zhang2021suppressing,grad2020photoexcited,chen2017characterization}, but there are several fundamental questions that remain unanswered. The first is whether the nature of carrier transport is band-like or thermally-activated hopping. \\citet{yang2019ultrafast} studied the charge carrier dynamics in \\ce{Sb2S3} and ascribed the observed \\SI{0.6}{\\electronvolt} Stokes shift to self-trapped excitons, suggesting hopping transport. \nIn contrast, \\citet{liu2022ultrafast} and \\citet{zhang2021suppressing} argued against self-trapping in \\ce{Sb2Se3} due to the saturation of fast signal decay with increasing carrier density. \nConsidering it is challenging for direct measurements to distinguish whether the photoexcited carriers are intrinsically self-trapped or trapped at defect sites\\cite{ramo2007theoretical}, a systematic theoretical study on the carrier transport in \\ce{Sb2X3} is necessary. \nThe second issue is about the resulting charge carrier mobility. 
\nMeasured mobilities in \\ce{Sb2X3} show a large variation\\cite{chen2017characterization,liu2016green,zhou2014solution,yuan2016rapid,li2021defect,chalapathi2020influence,black1957electrical}, in part due to different synthesis and characterisation methods.\nAs such, the intrinsic limits to mobility in \\ce{Sb2X3} are unclear and the scattering physics underlying transport are not yet understood.\n\nIn this work, we studied the tendency for polaron trapping and its effect on charge carrier transport in \\ce{Sb2X3} by first-principles \\ac{DFT} and Boltzmann transport calculations. The electron-lattice interaction in \\ce{Sb2X3} was explored through the Fr{\\\"o}hlich polaron coupling constant and Schultz polaron radius. Modelling of electron and hole polarons in \\ce{Sb2X3} indicates the intrinsic formation of large polarons, in contrast to recent suggestions of small polarons (i.e.~self-trapped carriers)\\cite{yang2019ultrafast,grad2021charge}. The prediction of large polaron formation is further reinforced by the results of carrier transport calculations. \nThe isotropically averaged mobilities are larger than \\SI{10}{\\mob} at room temperature and decrease with increasing temperature for both electrons and holes, further confirming band-like transport in \\ce{Sb2X3}. We find the intrinsic mobility is limited by scattering from polar optical phonons at low and moderate defect concentrations, while at high charged defect concentrations (\\textgreater \\SI{e18}{\\conc}) impurity scattering dominates. We expect our results will enable the design of \\ce{Sb2X3} devices with improved efficiencies.\n\n\\ce{Sb2X3} crystallise in the orthorhombic \\textit{Pnma} space group and are composed of strongly bonded quasi-\\ac{1D} [Sb$_4$X$_6$]$_n$ ribbons oriented along the [100] direction (Fig.~\\ref{fig_structure}). Ribbon formation is driven by the Sb lone pair with ribbons stacked together by weak interactions\\cite{wang2022lone}. 
According to our previous optimization using the HSE06 hybrid functional and D3 dispersion correction,\\cite{wang2022lone} the calculated lattice parameters are 3.80\/\\SI{3.95}{\\angstrom}, 11.20\/\\SI{11.55}{\\angstrom} and 11.39\/\\SI{11.93}{\\angstrom} for \\ce{Sb2S3}\/\\ce{Sb2Se3} along the \\textit{a}, \\textit{b} and \\textit{c} axes, respectively.\n\\ce{Sb2X3} are indirect band gap semiconductors with calculated indirect\/direct band gaps of 1.79\/\\SI{1.95}{\\electronvolt} and 1.42\/\\SI{1.48}{\\electronvolt}, respectively, which are in reasonable agreement with previous experimental \\cite{yesugade1995structural,el1998substrate,versavel2007structural,liu2016green,torane1999preparation,messina2009antimony,lai2012preparation,chen2015optical} and theoretical studies\\cite{vadapoo2011self,vadapoo2011electronic,caracas2005first,nasr2011electronic,savory2019complex}. The electronic band structures are shown in Fig.~S1 of the Supplementary Information.\nIt has been widely suggested that efficient transport can only happen along the ribbons, based on the understanding that \\ce{Sb2X3} are \\ac{1D} semiconductors\\cite{caruso2015excitons,song2017highly,guo2018tunable,yang2018adjusting,gusmao2019antimony}. However, neither the structural dimensionality nor the electronic dimensionality of \\ce{Sb2X3} is \\ac{1D}.\\cite{deringer2015vibrational,wang2022lone} \n\n\\begin{figure}[ht]\n \\centering\n {\\includegraphics[width=0.5\\textwidth]{Fig_1_structure}} \\\\\n \\caption{Ground-state crystal structure (\\textit{Pnma} space group) of \\ce{Sb2X3}. 
The conventional unit cell is represented by a rectangle.}\n \\label{fig_structure}\n\\end{figure}\n\n\n\n\\begin{table}[ht]\n \\caption{Calculated Fr{\\\"o}hlich parameter ($\\alpha$) and Schultz polaron radius (r$_f$) for electrons (e$^-$) and holes (h$^+$) in \\ce{Sb2S3} and \\ce{Sb2Se3} at T = \\SI{300}{\\kelvin}}\n \\label{tab_alpha}\n \\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}cccccc}\n\\hline\n\\multicolumn{1}{c}{\\multirow{2}{*}{Material}} & \\multicolumn{1}{c}{\\multirow{2}{*}{}} & \\multicolumn{2}{c}{$\\alpha$} & \\multicolumn{2}{c}{r$_f$ (\\AA)} \\\\ \\cline{3-6} \n\\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{e$^-$} & \\multicolumn{1}{c}{h$^+$} & \\multicolumn{1}{c}{e$^-$} & \\multicolumn{1}{c}{h$^+$} \\\\ \\hline\n\\multirow{4}{*}{\\ce{Sb2S3}} & avg & 1.6 & 2.0 & 45.5 & 40.4 \\\\\n & \\textit{x} & 1.0 & 1.8 & 57.3 & 43.7 \\\\\n & \\textit{y} & 2.4 & 2.1 & 36.9 & 40.3 \\\\\n & \\textit{z} & 5.7 & 2.5 & 23.7 & 36.4 \\\\ \\hline\n\\multirow{4}{*}{\\ce{Sb2Se3}} & avg & 1.3 & 2.1 & 40.5 & 31.9 \\\\\n & \\textit{x} & 0.8 & 2.0 & 50.9 & 32.4 \\\\\n & \\textit{y} & 2.0 & 1.6 & 32.8 & 36.1 \\\\\n & \\textit{z} & 5.8 & 3.8 & 18.8 & 23.5 \\\\ \\hline\n \\end{tabular*}\n\\end{table}\n\nCharge carriers in crystals are formally described as quasi-particles due to their interaction with the extended structure. In polar semiconductors, the charge carriers and the surrounding lattice deformation form a so-called polaron,\\cite{emin2013polarons} which determines the nature of carrier transport.\nPolarons can be classified into two types based on the strength of electron-phonon coupling. Stronger coupling leads to larger local lattice distortion which provides the driving force for small polarons to form. Thus, for a small polaron, the lattice deformation is usually confined to one unit cell, and a carrier's motion is typically incoherent with thermally activated hops which lead to low mobility ($\\ll$ \\SI{1}{\\mob}). 
By contrast, the lattice deformation in a large polaron is usually moderate and spreads over multiple unit cells, resulting in a larger mobility (\\textgreater {} \\SI{1}{\\mob}). \nIn polar crystals, the electron-phonon interaction is usually dominated by the coupling of charge carriers to the \\ac{LO} phonons, which can be described within the Fr{\\\"o}hlich model\\cite{frohlich1952interaction}. \n\nWe first evaluate the Fr{\\\"o}hlich interaction by the coupling constant $\\alpha$. The calculated $\\alpha$ (shown in Table \\ref{tab_alpha}) shows an isotropically averaged value of $\\sim$2 for both \\ce{Sb2S3} and \\ce{Sb2Se3}, which falls in the intermediate electron-phonon coupling regime (defined as 0.5 $\\lesssim \\alpha \\lesssim$ 6).\\cite{stoneham2001theory} The magnitude of $\\alpha$ along the [100] and [010] directions is quite close ($\\Delta \\alpha$ = 1.2--1.4 and 0.3--0.4 for electrons and holes, respectively), suggesting similar electron-phonon interaction strengths along these two directions. We further estimate the size of polarons in \\ce{Sb2X3} by the Schultz polaron radius (r$_f$)\\cite{schultz1959slow}. The large values of electron and hole polaron radii (which extend over multiple structural units) indicate the polarons are delocalised in both \\ce{Sb2S3} and \\ce{Sb2Se3}. The details of parameters used and the procedure for averaging $\\alpha$ can be found in Section S2 of the Supplementary Information.\n\nFor an alternative assessment, we performed direct first-principles \\ac{DFT} calculations to model charge carriers in \\ce{Sb2X3}.\nThere are two challenges for reliable polaron modelling.\nThe first is the self-interaction error\\cite{parr1989w} arising from the approximate form of the exchange-correlation functional which causes electrons to spuriously delocalise\\cite{pacchioni2008modeling,pham2020efficient}. 
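The isotropically averaged coupling constants above can be reproduced from the Fr{\"o}hlich expression using the dielectric constants, effective masses and effective phonon frequency tabulated in the Supplementary Information. A short illustrative script in SI units for \ce{Sb2S3} (the function name and unit handling are our own; the tabulated frequency is interpreted as $\omega/2\pi$ in THz):

```python
import math

# CODATA physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J s
E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
M_E = 9.1093837015e-31   # electron mass, kg

def frohlich_alpha(eps_inf, eps_0, m_eff, nu_thz):
    """Dimensionless Frohlich coupling constant,
    alpha = (e^2/hbar)(1/eps_inf - 1/eps_0) sqrt(m*/(2 hbar omega)),
    written in SI form; nu_thz is the effective phonon frequency in THz."""
    omega = 2.0 * math.pi * nu_thz * 1e12           # angular frequency, rad/s
    screening = 1.0 / eps_inf - 1.0 / eps_0         # ionic screening term
    coulomb = E**2 / (4.0 * math.pi * EPS0 * HBAR)  # e^2/hbar in SI units, m/s
    return screening * coulomb * math.sqrt(m_eff * M_E / (2.0 * HBAR * omega))

# Isotropically averaged Sb2S3 parameters from the supplementary table
alpha_e = frohlich_alpha(10.26, 68.76, 0.40, 3.49)  # electrons
alpha_h = frohlich_alpha(10.26, 68.76, 0.64, 3.49)  # holes
print(round(alpha_e, 1), round(alpha_h, 1))  # 1.6 2.0
```

The rounded values match the isotropically averaged $\alpha$ of 1.6 (electrons) and 2.0 (holes) reported for \ce{Sb2S3} above.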
The self-interaction error is typically resolved by employing a hybrid functional\\cite{finazzi2008excess,deak2011polaronic,di2006electronic} which incorporates a certain amount of exact Fock exchange or by a Hubbard correction (DFT+U)\\cite{dudarev1998electron,anisimov1997first}. \nSecondly, the formation of localised polarons is dependent on the initial geometries and wavefunctions. Different methods have been proposed to break the crystal symmetry and promote the formation of localised states. Among them, the bond distortion method and electron attractor method have proved reliable across a range of structures and chemistries\\cite{ramo2007theoretical,pham2020efficient,deskins2011distribution,deskins2009localized,shibuya2012systematic,hao2015coexistence,liu2019photocatalytic}. The former involves introducing local perturbations in a supercell in a region where the polaron is expected to localise, while the latter uses a temporarily-substituted atom to attract an electron, which is then removed and the structure re-relaxed.\nIn this work, all polaron calculations were performed using the HSE06 hybrid functional. We attempted to localise electron and hole polarons by adding or removing an electron from a \\ce{Sb2X3} supercell using both of these methods. The full computational details are provided in Section S4. No energy-lowering distortions were found in any case. The electrons and holes always preferred to delocalise rather than localise in both \\ce{Sb2S3} and \\ce{Sb2Se3}, indicating again that small polarons are unlikely to form intrinsically by self-trapping. This is also supported by recent experimental evidence that the trap states in \\ce{Sb2Se3} are saturated by moderate density photocarriers and the free carrier lifetime is sensitive to the impurity density, which together exclude the possibility of self-trapping in \\ce{Sb2Se3}\\cite{liu2022ultrafast}.\n\nWe next consider the possibility of forming self-trapped excitons. 
Firstly, the large dielectric constants ($\\sim$ 100) and small effective masses ($\\sim$ 0.1) in \\ce{Sb2X3}\\cite{wang2022lone} suggest that the Coulomb interaction is strongly screened and a large exciton radius is favoured. The small experimental exciton binding energies (\\SIrange{0.01}{0.05}{\\electronvolt} for \\ce{Sb2S3} and \\SI{0.04}{\\electronvolt} for \\ce{Sb2Se3})\\cite{caruso2015excitons,lawal2018investigation} further indicate weak electron-hole interactions in \\ce{Sb2X3}. Additionally, experimental measurements of the imaginary part of the frequency-dependent complex photoconductivity in \\ce{Sb2Se3} do not reveal any negative components\\cite{wang2019both} that can be a signal of exciton formation. \nConsequently, we conclude that self-trapped excitons in \\ce{Sb2X3} are unlikely.\n\n\\begin{figure*}[t]\n \\centering\n {\\includegraphics[width=1.0\\textwidth]{Fig_2_mobility}} \\\\\n \\caption{(a) Calculated average mobilities of electrons and holes in \\ce{Sb2S3} and \\ce{Sb2Se3} as a function of temperature with different defect concentrations. (b) Calculated total and component mobilities as a function of bulk defect concentration at \\SI{300}{\\kelvin}. ADP, acoustic deformation potential; POP, polar optical phonon; IMP, ionized impurity. $N_D$, defect concentration.}\n \\label{fig_mobility_avg}\n\\end{figure*}\n\n\\begin{figure}[t]\n \\centering\n {\\includegraphics[width=0.5\\textwidth]{Fig_3_ani_mobility}} \\\\\n \\caption{The anisotropic net carrier mobilities including all scattering mechanisms in \\ce{Sb2S3} and \\ce{Sb2Se3} as a function of temperature with a bulk defect concentration of \\SI{e14}{\\conc}.}\n \\label{fig_mobility_ani}\n\\end{figure}\n\n\\begin{table}[ht]\n \\caption{Calculated mobilities of electrons ($\\mu_e$) and holes ($\\mu_h$) in \\ce{Sb2X3} at \\SI{300}{\\kelvin} under different defect concentrations ($N_D$) and experimental values for comparison. 
The anisotropy ratio (\\textit{a}$_r$) is defined as the ratio of\nmaximum to minimum mobility}\n \\label{tab_mobility}\n \\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}ccccccc}\n \\hline\n Material & \\multicolumn{2}{c}{} & \\multicolumn{3}{c}{Calculated (\\si{\\mob})} & Experiment (\\si{\\mob}) \\\\\n\\hline\n & & & \\multicolumn{3}{c}{$N_D$ (cm$^{-3}$)} & \\\\ \n & & & 10$^{14}$ & 10$^{17}$ & 10$^{20}$ & \\\\ \\cline{4-6} \n\\multirow{8}{*}{\\ce{Sb2S3}} & \\multirow{4}{*}{$\\mu_e$} & \\textit{x} & 53.90 & 44.72 & 0.96 & \\\\\n & & \\textit{y} & 9.60 & 7.13 & 0.07 & \\\\\n & & \\textit{z} & 1.88 & 1.35 & 0.01 & \\\\\n & & avg & 21.79 & 17.73 & 0.35 & \\\\ \n & & \\textit{a}$_r$ & 28.67 & 33.13 & 96.00 & \\\\ \\cline{2-7} \n & \\multirow{4}{*}{$\\mu_h$} & \\textit{x} & 18.58 & 15.90 & 0.38 & \\\\\n & & \\textit{y} & 13.53 & 11.33 & 0.19 & \\\\\n & & \\textit{z} & 9.34 & 8.35 & 0.22 & \\\\\n & & avg & 13.82 & 11.86 & 0.26 & 6.4-12.8\\cite{liu2016green}, 32.2-54.0\\cite{chalapathi2020influence} \\\\ \n & & \\textit{a}$_r$ & 1.99 & 1.90 & 2.00 & \\\\ \\hline\n\\multirow{8}{*}{\\ce{Sb2Se3}} & \\multirow{4}{*}{$\\mu_e$} & \\textit{x} & 89.97 & 76.38 & 1.96 & \\\\\n & & \\textit{y} & 16.74 & 11.65 & 0.11 & \\\\\n & & \\textit{z} & 1.94 & 1.41 & 0.01 & \\\\\n & & avg & 36.22 & 29.81 & 0.70 & 15\\cite{black1957electrical} \\\\ \n & & \\textit{a}$_r$ & 46.38 & 54.17 & 196.00 & \\\\ \\cline{2-7}\n & \\multirow{4}{*}{$\\mu_h$} & \\textit{x} & 9.50 & 8.38 & 0.17 & 2.59\\cite{chen_characterization_2017} \\\\\n & & \\textit{y} & 16.95 & 14.63 & 0.25 & 1.17\\cite{chen_characterization_2017} \\\\\n & & \\textit{z} & 2.22 & 1.95 & 0.06 & 0.69\\cite{chen_characterization_2017} \\\\\n & & avg & 9.55 & 8.32 & 0.16 & 5.1\\cite{zhou2014solution}, 3.7-21.88\\cite{yuan2016rapid}, 45\\cite{black1957electrical} \\\\ \n & & \\textit{a}$_r$ & 7.64 & 7.50 & 4.17 & \\\\ \\hline\n\\end{tabular*}\n\\end{table}\n\n\nTo further understand the nature of transport in \\ce{Sb2X3} the 
first-principles carrier mobility\\cite{ganose2021efficient} was calculated. Both \\textit{n}-type and \\textit{p}-type doping were investigated, with calculations including scattering from \\ac{IMP}, \\ac{ADP} and \\ac{POP}.\nPiezoelectric scattering was not considered due to the centrosymmetric crystal structure.\nThe isotropically averaged mobilities are reasonably high at room temperature (T = \\SI{300}{\\kelvin}) for both electrons ($\\sim$\\SI{40}{\\mob}) and holes ($\\sim$\\SI{15}{\\mob}) at low and moderate defect concentrations ($<$\\SI{1e18}{\\conc}), indicating band-like transport (Fig.~\\ref{fig_mobility_avg}a).\nThe hole mobilities are slightly lower than the electron mobilities in both \\ce{Sb2S3} and \\ce{Sb2Se3}, suggesting that \\textit{n}-type doping could be beneficial for carrier collection in photovoltaic devices.\nThis is in contrast to experimental measurements that have indicated higher mobility for \\textit{p}-type \\ce{Sb2Se3};\\cite{black1957electrical} however, this may be related to the doping asymmetry in these materials.\nThe intrinsic mobility is limited by Fr{\\\"o}hlich-type polar optical phonon scattering, suggesting that large polarons are responsible for the transport behaviour (Fig.~\\ref{fig_mobility_avg}b).\nWe note that large deformation potentials have been suggested as the origin of self-trapping in the bismuth double perovskites\\cite{wu2021strong}.\nHowever, in \\ce{Sb2X3}, acoustic deformation potential scattering is weak (due to small deformation potentials $<$ \\SI{6}{\\electronvolt}), similar to the hybrid halide perovskites\\cite{wright2016electron,lu2017piezoelectric}, indicating that self-trapping is unlikely to occur via coupling with acoustic vibrations.\n\nThe scattering from ionized impurities increases with the defect concentration. 
\nAt concentrations around \\SI{e18}{\\conc}, \\ac{IMP} and \\ac{POP} scattering are of roughly equal strength and reduce the overall mobility by about a factor of two (Fig.~\\ref{fig_mobility_avg}b).\nAt higher defect concentrations, transport is entirely dominated by ionized impurity scattering.\nOur results indicate that careful control of defect concentrations is essential to prevent degradation of device efficiencies.\nThis agrees well with previous experimental reports that the defect density is crucial to carrier transport in \\ce{Sb2X3}, with bulk defect densities above \\SI{e15}{\\conc} leading to significant degradation in conversion efficiency \\cite{islam2020two,li2020simulation,khadir2022performance}. \nFurthermore, since most experimental mobility measurements in \\ce{Sb2X3} were obtained from thin films, where grain boundary scattering will further lower the mobility, we also tested the inclusion of mean-free-path scattering. According to our results (Fig.~S2), the mobilities in \\ce{Sb2X3} are not significantly affected by grain boundary scattering even for grain sizes down to \\SI{10}{\\nm}, much smaller than the domain sizes typically seen in experiments\\cite{rijal2021influence,maghraoui2010structural,perales2008optical,lokhande2001novel}.\nAccordingly, grain boundary scattering is unlikely to be a dominant source of scattering in \\ce{Sb2X3} thin films, in agreement with previous studies\\cite{gonzalez2022deciphering}.\n\nThe anisotropy of the mobility was also considered. 
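The crossover between the concentration-independent \ac{POP} channel and the impurity channel can be illustrated with a toy Matthiessen's-rule estimate. The mobilities below are placeholders chosen to mimic the behaviour described in the text (IMP comparable to POP near \SI{e18}{\conc}); they are not AMSET output, and the simple 1/$N_D$ scaling ignores the logarithmic Brooks-Herring screening factor.

```python
# Toy Matthiessen's-rule combination of scattering channels.
# Placeholder component mobilities (cm^2/Vs); not calculated AMSET values.
MU_POP = 30.0    # polar-optical-phonon limited, independent of N_D
MU_ADP = 300.0   # weak acoustic-deformation-potential channel

def mu_imp(n_d, mu_ref=MU_POP, n_ref=1e18):
    """Crude ionized-impurity mobility, ~1/N_D, anchored so that
    IMP matches POP at n_ref (the crossover described in the text)."""
    return mu_ref * n_ref / n_d

def mu_total(n_d):
    """Matthiessen's rule: inverse component mobilities add."""
    return 1.0 / (1.0 / MU_POP + 1.0 / MU_ADP + 1.0 / mu_imp(n_d))

low = mu_total(1e14)    # IMP negligible: POP-limited plateau
mid = mu_total(1e18)    # IMP ~ POP: total mobility roughly halved
high = mu_total(1e20)   # IMP-dominated regime
print(f"{low:.1f} -> {mid:.1f} -> {high:.1f} cm^2/Vs")
```

The plateau at low $N_D$ and the roughly twofold drop at the crossover concentration qualitatively mirror the trends in Fig.~\ref{fig_mobility_avg}.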
As shown in Table \\ref{tab_mobility} and Fig.~\\ref{fig_mobility_ani}, our calculated mobilities are in reasonable agreement with the range of measured values.\nFor electron transport, there is considerable anisotropy, with the [100] direction showing roughly 5 times the mobility of the [010] direction and over 25 times the mobility of the [001] direction in both \\ce{Sb2S3} and \\ce{Sb2Se3}.\nFor holes in \\ce{Sb2S3}, there is a high mobility in the (001) plane, where the transport is roughly isotropic and approximately twice that along the [001] direction.\nFor holes in \\ce{Sb2Se3}, the picture is slightly altered, with the highest mobility along [010], roughly 2 times that along [100] and 8 times that along [001].\nThe anisotropy in mobility follows the anisotropy in the calculated effective masses and the Fermi-surface dimensionality\\cite{wang2022lone}.\nDespite the anisotropic behaviour, even at moderate defect concentrations the electron and hole mobilities remain reasonably large ($>$\\SI{10}{\\mob}) in at least two directions.\nThe common description of \\ce{Sb2X3} as \\ac{1D} semiconductors\\cite{zhou2015thin,liang2020crystallographic} therefore oversimplifies the nature of transport.\nAccordingly, it may be possible to obtain high-mobility thin films even when the grains are not fully aligned along the direction of the quasi-1D ribbons.\n\nIn summary, we investigated the nature of charge carriers in \\ce{Sb2X3} semiconductors. \nOur results strongly suggest that self-trapping (i.e. the formation of small polarons) is unlikely to occur and that instead charge transport involves large polarons. 
\nIn particular, we found:\ni) moderate Fr{\\\"o}hlich coupling constants ($\\sim$2); ii) large Schultz polaron radii ($\\sim$\\SI{40}{\\angstrom}); \niii) the absence of electron or hole polaron formation in density functional theory calculations using the bond distortion and electron attractor methods; and iv) large carrier mobilities of \\textgreater\\,\\SI{10}{\\mob} at room temperature for both electrons and holes (in agreement with experiments).\nWe conclude that there is no theoretical evidence for small polaron formation in pristine \\ce{Sb2X3} and that self-trapping is unlikely to be the origin of the low open-circuit voltages reported in previous studies of \\ce{Sb2X3} devices\\cite{yang2019ultrafast,grad2021charge}.\nAccordingly, the low photovoltages may not be a bulk property of these materials and could be surmountable with improved fabrication and processing conditions to engineer the defect and interfacial properties of devices.\n\n\\section{Methods}\nThe Fr{\\\"o}hlich polaron properties were calculated using the open-source package \\textsc{PolaronMobility}\\cite{Frost2017}. \nThe first-principles carrier scattering rates and resulting mobilities were calculated using \\textsc{AMSET}\\cite{ganose2021efficient}. \nThe materials parameters used for these predictions are provided in Tables S1--S4.\nThe crystal structure was plotted using \\textsc{Blender}\\cite{blender} and \\textsc{Beautiful Atoms}\\cite{Beautiful_Atoms2022}.\n\n\nAll of the underlying electronic structure calculations were performed using Kohn-Sham density-functional theory\\cite{kohn1965self,dreizler1990density} as implemented in \\ac{VASP}\\cite{kresse1996efficient}. The projector augmented-wave (PAW) method\\cite{kresse1999ultrasoft} was employed with a plane-wave energy cutoff of \\SI{400}{\\electronvolt}. 
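For orientation, the core electronic-structure settings described in this section map onto a VASP INCAR along the following lines. This is a sketch rather than the exact production input; in particular, the dispersion-correction flavour (IVDW = 11, zero-damping D3) is an assumption on our part.

```
PREC     = Accurate
ENCUT    = 400       ! plane-wave cutoff (eV)
EDIFF    = 1E-6      ! electronic convergence criterion (eV)
EDIFFG   = -5E-4     ! force convergence, unit cells (-0.01 for supercells) (eV/Angstrom)
IBRION   = 2         ! conjugate-gradient ionic relaxation
LHFCALC  = .TRUE.    ! HSE06 hybrid functional
HFSCREEN = 0.2       ! HSE screening parameter (1/Angstrom)
IVDW     = 11        ! Grimme D3 dispersion correction (assumed zero-damping variant)
```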
All calculations were carried out using the Heyd-Scuseria-Ernzerhof hybrid functional (HSE06)\\cite{heyd2003hybrid,krukau2006influence} with the D3 dispersion correction\\cite{grimme2004accurate}. The atomic positions were optimised until the Hellmann-Feynman forces on each atom were below \\SI{0.0005}{\\electronvolt\\per\\angstrom} for unit cells and \\SI{0.01}{\\electronvolt\\per\\angstrom} for 3$\\times$1$\\times$1 supercells. \nThe energy convergence criterion was set to \\SI{e-6}{\\electronvolt}. $\\varGamma$-centered \\textit{k}-point meshes of 7$\\times$2$\\times$2 and 2$\\times$2$\\times$2 were used for geometry optimisation with primitive unit cells and supercells, respectively. For the uniform band structure calculations used as inputs for AMSET, a denser \\textit{k}-point mesh of 19$\\times$10$\\times$10 was used, consistent with our previous calculations of carrier effective masses\\cite{wang2022lone}. Detailed settings and convergence data are presented in Section S5.\n\n\\section*{Acknowledgements}\nX.W. thanks Jarvist M. Frost and Yuchen Fu for valuable discussions.\nWe are grateful to the UK Materials and Molecular Modelling Hub for computational resources, which is partially funded by EPSRC (EP\/P020194\/1 and EP\/T022213\/1). X.W. acknowledges Imperial College London for a President's PhD Scholarship. A.M.G. was supported by EPSRC Fellowship EP\/T033231\/1. S.R.K. acknowledges the EPSRC Centre for Doctoral Training in the Advanced Characterisation of Materials (CDT-ACM)(EP\/S023259\/1) for a PhD studentship. \n\n\\section*{Author Contributions}\nThe author contributions have been defined following the CRediT system.\nX.W.: Conceptualization, Investigation, Formal analysis, Methodology, Visualization, Writing \u2013 original draft. \nA.M.G.: Methodology, Supervision, Writing \u2013 review \\& editing. \nS.R.K.: Methodology, Writing \u2013 review \\& editing. 
\nA.W.: Conceptualization, Methodology, Supervision, Writing \u2013 review \\& editing.\n\n\\section*{Data Access Statement}\n\nThe data supporting the findings reported in this study are openly available from \\url{https:\/\/nomad-lab.eu} at [DOI:xxx].\n\n\n\\section{Introduction}\nAntimony chalcogenides (\\ce{Sb2X3}; X=S, Se) have emerged as promising light-absorbing materials due to their attractive electronic and optical properties, such as ideal band gaps and high optical absorption coefficients (\\textgreater 10$^{5}$ cm$^{-1}$)\\cite{wang2022lone}. They are simple binary compounds with earth-abundant, low-cost and non-toxic constituents\\cite{zeng2016antimony,lei2019review,dong2021boosting}. The \\ac{PCEs} of \\ce{Sb2X3} solar cells have improved rapidly over the past decade, with record efficiencies reaching 7.5\\% and 9.2\\% for \\ce{Sb2S3} and \\ce{Sb2Se3}, respectively\\cite{choi2014highly,li20199}. Nevertheless, these values remain far below those of competing technologies such as CdTe or perovskite solar cells (above 21\\% under laboratory conditions\\cite{green2021solar}) and have stagnated in recent years. \n\nThe bottleneck underlying these unsatisfactory efficiencies remains elusive. While the structural, electronic and optical properties of \\ce{Sb2X3} have been widely investigated\\cite{wang2022lone,tideswell1957crystal,kyono2002low,messina2009antimony,chen2015optical,kocc2012first}, the charge carrier dynamics, which critically affect the conversion efficiency, remain controversial. Charge carrier transport in \\ce{Sb2X3} has been examined in several studies\\cite{yang2019ultrafast,grad2021charge,zhang2021suppressing,grad2020photoexcited,chen2017characterization}, but fundamental questions remain unresolved. The first is whether carrier transport in \\ce{Sb2X3} is band-like or proceeds by thermally activated hopping. Yang et al. 
\\cite{yang2019ultrafast} studied the charge carrier dynamics in \\ce{Sb2S3} and ascribed the observed 0.6 eV Stokes shift to self-trapped excitons, which would imply hopping transport. Other studies disagree on whether self-trapping occurs in \\ce{Sb2Se3}\\cite{zhang2021suppressing,grad2020photoexcited}. Since it is experimentally challenging to distinguish whether photoexcited carriers in \\ce{Sb2X3} are intrinsically self-trapped or trapped at extrinsic defect sites, owing to unavoidable sample imperfections and the small energies involved in self-trapping\\cite{ramo2007theoretical}, a systematic theoretical study of carrier transport in \\ce{Sb2X3} is necessary but still lacking. The second issue concerns the resulting carrier mobilities. Measured mobilities in \\ce{Sb2X3} vary widely\\cite{chen2017characterization,liu2016green,zhou2014solution,yuan2016rapid,li2021defect,chalapathi2020influence,black1957electrical} owing to differing synthesis conditions and measurement methods, and the scattering mechanism that limits the mobility in \\ce{Sb2X3} remains unknown. Furthermore, the directionality of carrier transport in \\ce{Sb2X3}, which previous reports\\cite{zhou2015thin,liang2020crystallographic} consider to be 1D (i.e. efficient only along the ribbons), has not been investigated theoretically. \n\nIn this work, we studied polaron formation and its effect on charge carrier transport in \\ce{Sb2X3} using first-principles \\ac{DFT} calculations. The electron-lattice interaction in \\ce{Sb2X3} was quantified via the Fr{\\\"o}hlich coupling constant and the Schultz polaron radius. The results, together with our explicit modelling of electron and hole polarons in \\ce{Sb2X3}, indicate the intrinsic formation of large polarons instead of small polarons (i.e. self-trapped carriers). 
The interpretation of large polaron formation is further reinforced by the calculated carrier mobilities, which are larger than \\SI{10}{\\mob} at room temperature and decrease with increasing temperature for both electrons and holes, confirming band-like transport in \\ce{Sb2X3}. Furthermore, the limiting scattering mechanism was identified: the theoretically achievable mobilities in \\ce{Sb2X3} are limited by polar optical phonon scattering at low and moderate defect concentrations, while at high defect concentrations ($\\textgreater 10^{18}$ cm$^{-3}$) they are limited by \\ac{IMP} scattering.\n\nAs shown in Fig.~\\ref{fig_structure}, \\ce{Sb2X3} crystallise in an orthorhombic phase (\\textit{Pnma} space group) and are comprised of strongly bonded quasi-1D [Sb$_4$X$_6$]$_n$ ribbons along the [100] direction. These ribbons are separated by the Sb lone pairs and stacked together by weak interactions\\cite{wang2022lone}. According to our previous calculations\\cite{wang2022lone} with the HSE06 hybrid functional and D3 dispersion correction, the lattice parameters are 3.80 (3.95), 11.20 (11.55) and 11.39 (11.93) {\\AA } for \\ce{Sb2S3} (\\ce{Sb2Se3}) along the a, b and c directions, respectively.\n\\ce{Sb2X3} are indirect band gap semiconductors, with calculated indirect (direct) band gaps of 1.79 (1.95) and 1.42 (1.48) eV for \\ce{Sb2S3} and \\ce{Sb2Se3}, respectively, in reasonable agreement with previous experimental \\cite{yesugade1995structural,el1998substrate,versavel2007structural,liu2016green,torane1999preparation,messina2009antimony,lai2012preparation,chen2015optical} and theoretical studies \\cite{vadapoo2011self,vadapoo2011electronic,caracas2005first,nasr2011electronic,kocc2012first,savory2019complex}. The electronic band structures are shown in Fig.~
S1.\nIt has been widely reported experimentally that efficient transport in \\ce{Sb2X3} can only occur along the ribbons, based on the understanding that \\ce{Sb2X3} are 1D semiconductors\\cite{caruso2015excitons,song2017highly,guo2018tunable,yang2018adjusting,gusmao2019antimony}. However, our previous work demonstrated that neither the structural dimensionality nor the electronic dimensionality of \\ce{Sb2X3} is 1D. Our calculated Fermi surfaces suggest a combination of 3D (holes in \\ce{Sb2S3}) and quasi-2D transport. Moreover, the small effective masses in \\ce{Sb2X3} are also favourable for transport.\n\n\\begin{figure}[h]\n \\centering\n {\\includegraphics[width=0.5\\textwidth]{Fig_1_structure}} \\\\\n \\caption{Ground-state crystal structures (\\textit{Pnma} space group) of \\ce{Sb2X3}. The unit cells are represented by rectangles.}\n \\label{fig_structure}\n\\end{figure}\n\n\n\\begin{table}[h]\n \\caption{Calculated Fr{\\\"o}hlich coupling constant ($\\alpha$) and Schultz polaron radius (r$_f$, \\AA) in \\ce{Sb2S3} and \\ce{Sb2Se3} at \\SI{300}{\\kelvin}}\n \\label{tab_alpha}\n \\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}cccccc}\n\\hline\n\\multicolumn{1}{c}{\\multirow{2}{*}{Material}} & \\multicolumn{1}{c}{\\multirow{2}{*}{}} & \\multicolumn{2}{c}{$\\alpha$} & \\multicolumn{2}{c}{r$_f$} \\\\ \\cline{3-6} \n\\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{e} & \\multicolumn{1}{c}{h} & \\multicolumn{1}{c}{e} & \\multicolumn{1}{c}{h} \\\\ \\hline\n\\multirow{4}{*}{\\ce{Sb2S3}} & avg & 1.61 & 2.04 & 45.50 & 40.40 \\\\\n & \\textit{x} & 1.02 & 1.75 & 57.30 & 43.69 \\\\\n & \\textit{y} & 2.44 & 2.05 & 36.85 & 40.25 \\\\\n & \\textit{z} & 5.69 & 2.51 & 23.70 & 36.36 \\\\ \\hline\n\\multirow{4}{*}{\\ce{Sb2Se3}} & avg & 1.29 & 2.07 & 40.46 & 31.90 \\\\\n & \\textit{x} & 0.81 & 2.01 & 50.92 & 32.36 \\\\\n & \\textit{y} & 1.96 & 1.61 & 32.76 & 36.11 \\\\\n & \\textit{z} & 5.76 & 3.77 & 18.82 & 23.48 \\\\ \\hline\n 
\\end{tabular*}\n\\end{table}\n\nTaken together, our results give no evidence for self-trapping in \\ce{Sb2X3}. Nevertheless, they are not in contradiction with the experimental observations in \\ce{Sb2S3} that were interpreted in the framework of self-trapping by Yang et al.\\cite{yang2019ultrafast}, namely: \\textrm{i}) a Stokes shift of 0.6 eV; \\textrm{ii}) picosecond carrier trapping dynamics and a broad \\ac{PL} peak; \\textrm{iii}) a large photoexcited carrier density; and \\textrm{iv}) polarized light emission from an \\ce{Sb2S3} single crystal. These observations are consistent with, but do not necessarily signal, self-trapping, for the following reasons: \n1) A large Stokes shift is widely attributed to emission from trap states rather than band-edge states, but the trap states can originate either from self-trapped carriers\/excitons or from defects\\cite{baimuratov2019giant}.\n2) Ultrafast decay and broad \\ac{PL} emission are complex phenomena that remain under debate in the literature\\cite{baimuratov2019giant}. The decay timescale in \\ac{TA} measurements of self-trapping is typically sub-picosecond to a few picoseconds\\cite{buizza2021charge,kastl2022picoseconds,dexheimer2000femtosecond}, whereas Yang et al.\\cite{yang2019ultrafast} reported $\\sim$20 ps for a polycrystalline \\ce{Sb2S3} film and $\\sim$40 ps for an \\ce{Sb2S3} single crystal.\n3) The large photoexcited carrier density could also originate from \\ac{PIA}. This is supported by the large trap density of 2.1 $\\times$ 10$^{20}$ cm$^{-3}$ reported in \\ce{Sb2Se3}, for which the authors demonstrated the absence of self-trapping\\cite{zhang2021suppressing}.\n4) The hopping transport mechanism invoked by Yang et al.\\cite{yang2019ultrafast}, citing earlier work\\cite{roy1978electrical}, was deduced simply from the narrow bands in \\ce{Sb2S3}; more direct evidence is needed.\nWe therefore conclude that electrons and\/or holes may still be trapped in \\ce{Sb2X3} with the assistance of extrinsic defects.\n