diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfnnu" "b/data_all_eng_slimpj/shuffled/split2/finalzzfnnu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfnnu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe production of top quarks at high-energy colliders is a process of utmost importance, both in\ntesting the validity of the Standard Model~(SM) and in the quest for new physics.\nWithin the SM, the main source of top-quark events in hadronic collisions is\ntop-quark pair ($t {\\bar t}$) production.\nThe large data set delivered by the CERN LHC\nenables precise measurements of the $t {\\bar t}$ production cross section as a\nfunction of the $t {\\bar t}$ kinematics (see e.g. Refs.~\\cite{Khachatryan:2015oqa,Khachatryan:2015fwh,Aad:2015hna,Aad:2015mbv,Khachatryan:2016gxp,Aaboud:2016iot,Aaboud:2016syx,Sirunyan:2017mzl,Sirunyan:2018wem,Sirunyan:2019zvx}), which can be compared with the SM predictions.\nAt the same time these studies have a wider relevance, since $t {\\bar t}$ production is a crucial\nbackground in many new-physics searches.\n\nNext-to-leading-order (NLO) QCD corrections \nfor this production process were obtained thirty years ago\n\\cite{Nason:1987xz, Beenakker:1988bq, Beenakker:1990maa, Nason:1989zy,Mangano:1991jk}.\nBeyond the on-shell approximation of $t {\\bar t}$ production, first NLO QCD studies were carried out \nwithin the narrow-width approximation~\\cite{Melnikov:2009dn,Bernreuther:2010ny,Campbell:2012uf}.\nSuch NLO studies were later\nperformed by considering\nthe complete $W^+W^-b\\bar b$ final states, with off-shell\nleptonic~\\cite{Bevilacqua:2010qb,Denner:2010jp,Denner:2012yc,Heinrich:2013qaa} and\nsemi-leptonic~\\cite{Denner:2017kzu} decays.\nIn the leptonic channel the case of massive bottom quarks was investigated in\nRefs.~\\cite{Cascioli:2013wga,Frederix:2013gra}.\nNLO QCD results for off-shell $t {\\bar t}$ production in association with an additional jet were obtained\nin Refs.~\\cite{Bevilacqua:2015qha,Bevilacqua:2016jfk}.\nNLO electroweak~(EW) corrections for on-shell $t {\\bar t}$ production were studied in\nRefs.~\\cite{Kuhn:2006vh,Bernreuther:2006vg,Kuhn:2013zoa,Bernreuther:2010ny,Hollik:2011ps,Pagani:2016caq},\nand a merged calculation for $t {\\bar t}+0,1\\,$jets including EW corrections was presented in\nRef.~\\cite{Gutschow:2018tuk}.\nFor the leptonic decay channel the complete NLO EW corrections to the production of the\nsix-particle final state are known~\\cite{Denner:2016jyo}.\n\nThe calculation of the next-to-next-to-leading-order (NNLO) QCD \ncorrections to the $t {\\bar t}$ total cross section was completed a few years ago\n\\cite{Baernreuther:2012ws, Czakon:2012zr, Czakon:2012pz, Czakon:2013goa}.\nNNLO results for some differential distributions were presented in\nRefs.~\\cite{Czakon:2015owf,Czakon:2016ckf,Czakon:2017dip}.\nThis calculation was recently combined with NLO EW corrections~\\cite{Czakon:2017wor}.\nThe $t {\\bar t}$ charge asymmetry is known at NLO \\cite{Kuhn:1998kw} and NNLO \\cite{Czakon:2014xsa} in QCD,\nand also including NLO EW corrections \\cite{Czakon:2017lgo}.\nFirst NNLO QCD results including top-quark decays are starting to appear \\cite{Behring:2019iiv}.\n\nIn the present paper we deal with on-shell $t {\\bar t}$ production in NNLO QCD.\nThe calculation of the $t {\\bar t}$ production cross section at this perturbative order requires \ntree-level contributions with two additional\nfinal-state partons,\none-loop contributions with one 
additional parton and \npurely virtual contributions.\nThe required tree-level and one-loop scattering amplitudes\nare known. They enter the NLO calculation of \nthe associated production of a $t {\\bar t}$ pair and one jet~\\cite{Dittmaier:2007wz,Dittmaier:2008uj},\nbut in the case of NNLO $t {\\bar t}$ production they\nneed to be accurately evaluated also in the infrared-singular regions where the jet becomes unresolved.\nThe purely virtual contributions entail the square of one-loop scattering amplitudes and the two-loop\nscattering amplitudes.\nThe squared one-loop amplitudes are known~\\cite{Korner:2008bn,Anastasiou:2008vd,Kniehl:2008fd}.\nThe complete computation of the two-loop amplitudes has been carried out\nnumerically~\\cite{Czakon:2008zk,Baernreuther:2013caa}.\nPartial results for these amplitudes are available in analytic\nform~\\cite{Bonciani:2008az,Bonciani:2009nb,Bonciani:2010mn,Bonciani:2013ywa}.\nRecent progress in the computation of non-planar\ntwo-loop master integrals~\\cite{Becchetti:2019tjy,DiVita:2019lpl} indicates that\nthe analytic calculation can be completed in the near future.\n\nThe implementation of the various scattering amplitudes in a (fully differential)\nNNLO calculation is definitely a non-trivial task because of the presence of infrared~(IR) \ndivergences at intermediate stages of the calculation. \nVarious methods have been proposed and used to overcome these difficulties \nat the NNLO level (the interested reader can consult the list of references in\nRef.~\\cite{Bendavid:2018nar}).\n\nUsing the antenna subtraction method~\\cite{GehrmannDeRidder:2005cm, Abelof:2011jv},\npartial results for $t {\\bar t}$ production in the $q{\\bar q}$ partonic channel\nwere obtained by considering the complete fermionic contributions and evaluating the\nremaining contributions\nin the leading-colour approximation~\\cite{Abelof:2014fza, Abelof:2014jna,Abelof:2015lna}.\nThe complete NNLO computation of\nRefs.~\\cite{Baernreuther:2012ws, Czakon:2012zr, Czakon:2012pz, Czakon:2013goa,Czakon:2014xsa,Czakon:2015owf,Czakon:2016ckf,Czakon:2017dip}\nwas performed by using the {\\sc Stripper} method~\\cite{Czakon:2010td,Czakon:2011ve,Czakon:2014oma}.\n\n\nIn a recent paper~\\cite{Catani:2019iny} we have presented a new calculation of the inclusive\n$t {\\bar t}$ production cross section in NNLO QCD.\nThis calculation completes a previous computation\nthat was limited to the flavour off-diagonal partonic channels~\\cite{Bonciani:2015sha}.\nThe calculation uses the $q_T$ subtraction formalism~\\cite{Catani:2007vq} to handle and cancel\nIR-singular contributions\nin real and virtual corrections,\nand it is now completely integrated into the {\\sc Matrix} framework~\\cite{Grazzini:2017mhc}.\nThis allows us to perform fast and efficient computations of fiducial cross sections and\n(multi-)differential kinematical distributions for the production of on-shell top quarks.\n\nIn the present paper we extend the results of Ref.~\\cite{Catani:2019iny} in various respects.\nWe present NNLO QCD predictions for several differential distributions in the transverse momenta\nand rapidities of the top quarks, as well as in the invariant mass and the rapidity of the\n$t {\\bar t}$ system, and we discuss the results obtained by using different scale choices. 
\nWe compare these results with CMS measurements in the lepton+jets channel at the centre-of-mass energy\n\\mbox{$\\sqrt{s}=13$~TeV}~\\cite{Sirunyan:2018wem}.\nWe then consider double-differential distributions and compare our results with the corresponding\nmeasurements by CMS~\\cite{Sirunyan:2018wem}.\n\n\nThe paper is organized as follows. In Section~\\ref{sec:matrix} we illustrate the framework\nin which the calculation is performed. In Section~\\ref{sec:resu}\nwe present results for single-differential and double-differential distributions,\nand we compare them with the experimental measurements.\nFinally, in Section~\\ref{sec:summary} we present our conclusions.\nIn Appendix~\\ref{sec:validation} we present a quantitative comparison of our NNLO differential results with those available\nin the literature.\n\n\n\\section[Calculation within the {\\sc Matrix} framework]{Calculation within the M{\\normalsize ATRIX} framework}\n\\label{sec:matrix}\n\nOur fully differential NNLO computation of $t {\\bar t}$ production is carried out within\nthe \\Matrix{}~\\cite{Grazzini:2017mhc} framework.\n\\Matrix{} features a completely automated implementation of the $q_T$ subtraction formalism~\\cite{Catani:2007vq}\nto compute NNLO corrections,\nand it is thus applicable to the production of an arbitrary set of colourless final-state particles\nin hadronic collisions~\\cite{Catani:2013tia},\nas long as the two-loop virtual corrections to the corresponding leading-order~(LO) process are provided.\nWith appropriate modifications of the NNLO subtraction counterterm\nand the explicit computation of additional soft contributions (see below),\n\\Matrix{} can now also deal with the production of heavy-quark pairs.\n\nAccording to the $q_T$ subtraction method, the NNLO\ndifferential cross section $d{\\sigma}^{t{\\bar t}}_{\\rm NNLO}$ for \nthe production process $pp\\to t{\\bar t}+X$ can be written as\n\\begin{equation}\n\\label{eq:main}\nd{\\sigma}^{t{\\bar t}}_{\\rm NNLO}={\\cal H}^{t{\\bar t}}_{\\rm NNLO}\\otimes d{\\sigma}^{t{\\bar t}}_{\\rm LO}\n+\\left[ d{\\sigma}^{t{\\bar t}+\\rm{jet}}_{\\rm NLO}-\nd{\\sigma}^{t{\\bar t}, \\, CT}_{\\rm NNLO}\\right],\n\\end{equation}\nwhere $d{\\sigma}^{t{\\bar t}+\\rm{jet}}_{\\rm NLO}$ is the $t {\\bar t}$+jet cross section at NLO accuracy.\n\nThe square bracket term of Eq.~(\\ref{eq:main}) is IR finite in the limit\nin which the transverse momentum of the $t {\\bar t}$ pair, $q_T$, vanishes.\nHowever, the individual contributions\n$d{\\sigma}^{t{\\bar t}+\\rm{jet}}_{\\rm NLO}$ and\n$d{\\sigma}^{t{\\bar t}, \\, CT}_{\\rm NNLO}$ are separately divergent.\nThe contribution $d{\\sigma}^{t{\\bar t}+\\rm{jet}}_{\\rm NLO}$ can be evaluated with any available NLO method to handle and cancel IR divergences.\nThe IR subtraction counterterm $d{\\sigma}^{t{\\bar t}, \\,CT}_{\\rm NNLO}$\nis obtained from the NNLO perturbative expansion \n(see e.g.\\ Refs.~\\cite{Bozzi:2005wk,Bozzi:2007pn,Bonciani:2015sha})\nof the resummation formula\nof the logarithmically-enhanced\ncontributions to the $q_T$ distribution\nof the $t {\\bar t}$ pair~\\cite{Zhu:2012ts,Li:2013mia,Catani:2014qha}:\nthe explicit form of $d{\\sigma}^{t{\\bar t}, \\,CT}_{\\rm NNLO}$ is fully known.\n\nTo complete the NNLO calculation, the second-order functions\n${\\cal H}^{t{\\bar t}}_{\\rm NNLO}$ in Eq.~(\\ref{eq:main}) are needed.\nThese functions embody\nprocess-independent and process-dependent contributions. 
The process-independent contributions to \n${\\cal H}^{t{\\bar t}}_{\\rm NNLO}$ are analogous to those entering Higgs\nboson~\\cite{Catani:2007vq} and vector-boson~\\cite{Catani:2009sm} production,\nand they are explicitly known~\\cite{Catani:2011kr,Catani:2012qa,Catani:2013tia,Gehrmann:2012ze,Gehrmann:2014yya}.\nSince in $t {\\bar t}$ production both the $gg$ and the $q{\\bar q}$ partonic channels contribute\nat the same perturbative order,\nall these process-independent contributions are required.\nIn the flavour off-diagonal channels \nthe process-dependent contributions to ${\\cal H}^{t{\\bar t}}_{\\rm NNLO}$\ninvolve only amplitudes of the partonic\nprocesses \\mbox{$q{\\bar q} \\to t{\\bar t}$} and \\mbox{$gg\\to t{\\bar t}$} up to the one-loop level,\nand the explicit results of the NLO {\\em azimuthal-correlation} terms\nin the transverse-momentum resummation formalism~\\cite{Catani:2014qha}.\nThe computation of ${\\cal H}^{t{\\bar t}}_{\\rm NNLO}$ in the flavour diagonal $q{\\bar q}$ and $gg$\nchannels additionally requires the two-loop amplitudes\nfor \\mbox{$q{\\bar q}\\to t{\\bar t}$} and \\mbox{$gg\\to t{\\bar t}$},\nand the evaluation of new contributions of purely {\\it soft} origin.\nThe two-loop amplitudes are available in numerical form~\\cite{Baernreuther:2013caa},\nand the corresponding grids have been implemented into {\\sc Matrix} through a suitable\ninterpolation routine.\nThe computation of the additional soft contributions\nhas been completed by some of us~\\cite{inprep}\\footnote{An independent computation of\nthese soft contributions is presented in Ref.~\\cite{Angeles-Martinez:2018mqh}.},\nand it has been implemented into {\\sc Matrix} as well.\n\nThe core of the \\Matrix{} framework is the Monte Carlo program \\Munich{}\\footnote{\\Munich{} is the \nabbreviation of ``MUlti-chaNnel Integrator at Swiss~(CH) precision'' --- an automated parton-level\nNLO generator by S.~Kallweit.}, which includes a fully automated implementation of the\ndipole-subtraction method for massless~\\cite{Catani:1996jh,Catani:1996vz}\nand massive~\\cite{Catani:2002hc} partons,\nand an efficient phase-space integration.\nAll the required (spin- and colour-correlated) tree-level and one-loop (squared) amplitudes\nare obtained by using {\\sc OpenLoops~2}~\\cite{Cascioli:2011va,openloops2},\nexcept for the four-parton tree-level colour correlations that are based on an analytic implementation.\n{\\sc OpenLoops~2} relies on its new on-the-fly tensor reduction~\\cite{Buccioni:2017yxi} that guarantees\nstability all over the phase space, especially in the IR-singular regions,\nwhile scalar integrals from {\\sc Collier}~\\cite{Denner:2014gla,Denner:2016kdg} are used.\nFor the purpose of validating our results for the real--virtual corrections,\nwe have also used the independent matrix-element generator {\\sc Recola}~\\cite{Actis:2016mpe,Denner:2017wsf}, which employs tensor reduction and scalar integrals from {\\sc Collier}, and we find complete agreement.\n\nThe subtraction in the square brackets of Eq.~(\\ref{eq:main}) is not local, but the cross section \nis formally finite in the limit \\mbox{$q_T \\to 0$}. In practice, a technical cut on $q_T$\nis introduced to render $d{\\sigma}^{t {\\bar t}+\\mathrm{jet}}_{\\mathrm{(N)LO}}$ and \n$d{\\sigma}^{\\mathrm{CT}}_{\\mathrm{(N)NLO}}$ separately finite.\nTherefore, in our actual implementation, the $q_T$ subtraction method is\nvery similar to a phase-space slicing method. 
\nIt turns out that a cut, $r_{\\mathrm{cut}}$, on the \ndimensionless quantity \\mbox{$r=q_T\/m_{t{\\bar t}}$} ($m_{t{\\bar t}}$\ndenotes the invariant mass of the $t {\\bar t}$ pair) \nis more convenient from a practical point of view. \nThe absence of any residual logarithmic\ndependence on $r_{\\mathrm{cut}}$ is strong evidence of the correctness of the \ncomputation, since any mismatch between the contributions would result in a divergence \nof the cross section in the limit \\mbox{$r_{\\mathrm{cut}}\\to0$}.\nThe remaining power-suppressed contributions vanish in that limit, and they can be controlled by\nmonitoring the $r_{\\mathrm{cut}}$ dependence of the cross section.\n\n\nThe $r_\\text{cut}\\to 0$ extrapolation for the total cross section is carried out by using the approach\nintroduced in Ref.~\\cite{Grazzini:2017mhc}.\nA quadratic least $\\chi^2$ fit to the $r_\\text{cut}$ dependent results is performed and repeated\nby varying the upper bound of the $r_\\text{cut}$ interval.\nFinally, the result with the lowest $\\chi^2\/$degrees-of-freedom value is taken as the best fit,\nwhile the remaining results are used to estimate the extrapolation uncertainty.\nIn addition to this analysis at the level of the total cross section,\nwe have performed a similar bin-wise extrapolation in the computation of differential cross sections.\nWe find that the results are in good agreement with those obtained by directly\nusing a sufficiently low value of $r_\\text{cut}$ (\\mbox{$r_{\\rm cut} \\lesssim 0.15\\%$}).\n\n\n\\section{Results}\n\\label{sec:resu}\n\nTo present our quantitative results, we consider $pp$ collisions at \\mbox{$\\sqrt{s}=13$~TeV},\nand we fix the pole mass ${m_{t}}$ of the top quark to the value \\mbox{${m_{t}}=173.3$~GeV}.\nWe consider $n_F=5$ massless quark flavours, and we use the corresponding NNPDF31~\\cite{Ball:2017nwa}\nsets of parton distribution functions~(PDFs) with \\mbox{$\\as({m_Z})=0.118$}.\nIn particular, N$^n$LO (with $n = 0,1,2$) predictions are obtained by using PDFs\nat the corresponding perturbative order and the evolution of $\\as$ at \\mbox{$(n + 1)$}-loop order,\nas provided by the PDF set.\n\nQCD scale uncertainties are estimated through the customary procedure of independently varying\nthe renormalization ($\\mu_R$) and factorization ($\\mu_F$) scales \nby a factor of two\naround their common central value $\\mu_0$ with the constraint \\mbox{$0.5\\leq \\mu_F\/\\mu_R\\leq 2$},\ni.e.\\ we use the standard 7-point scale variation.\n\nSetting $\\mu_0={m_{t}}$, the total cross sections and their corresponding scale uncertainties read\n\\begin{equation}\n\\sigma_\\text{LO}^{t {\\bar t}} = 478.9(1)^{+29.6\\%}_{-21.4\\%}\\text{~pb}\\,,\\quad\n\\sigma_\\text{NLO}^{t {\\bar t}} = 726.9(1)^{+11.7\\%}_{-11.9\\%}\\text{~pb}\\,,\\quad\n\\sigma_\\text{NNLO}^{t {\\bar t}} = 794.0(8)^{+3.5\\%}_{-5.7\\%}\\text{~pb}\\,.\n\\label{eq:sigmatot}\n\\end{equation}\nWe note that the LO and NLO results are not fully consistent within the corresponding uncertainties,\nindicating that, at least at LO, scale variations cannot be trusted as perturbative uncertainties.\nSimilar features are shared by various other hard-scattering processes at hadron colliders.\nIn contrast, the NLO and NNLO predictions are consistent, suggesting that scale variations\ncan be used to estimate the size of perturbative contributions beyond NNLO. 
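For illustration, the pattern in Eq.~(\\ref{eq:sigmatot}) can be summarised through the ratios of successive orders,\n\\begin{equation}\nK_{\\rm NLO}=\\sigma_\\text{NLO}^{t {\\bar t}}\/\\sigma_\\text{LO}^{t {\\bar t}}\\simeq 1.52\\,, \\qquad\nK_{\\rm NNLO}=\\sigma_\\text{NNLO}^{t {\\bar t}}\/\\sigma_\\text{NLO}^{t {\\bar t}}\\simeq 1.09\\,,\n\\end{equation}\nwhich show that, at the central scale $\\mu_0={m_{t}}$, the radiative corrections decrease from about $+52\\%$ at NLO to about $+9\\%$ at NNLO. 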
\n\n\nThe characteristic hard-scattering scale that controls the perturbative QCD behaviour\nof the total cross section $\\sigma^{t {\\bar t}}$ is ${m_{t}}$. In our calculation of\n$\\sigma^{t {\\bar t}}$, as reported in Eq.~(\\ref{eq:sigmatot}), we have used QCD scales ($\\mu_R$ and\n$\\mu_F$) at values of the order of ${m_{t}}$. \nDifferential cross sections are \ncontrolled by corresponding characteristic hard scales, and we use QCD scales of that order\nin our computation of these observables.\nThe characteristic hard scale specifically depends on the differential cross section under consideration. \n\nHaving at our disposal a fully differential calculation we can also use {\\it dynamical}\nQCD scales. By dynamical we mean hard scales that refer to multi-differential\ncross sections\neventually integrated over the phase space to obtain the specific \ndifferential cross section under consideration. \nThe use of a dynamical scale produces practical simplifications\nsince it allows us to compute several observables (e.g., differential cross sections)\nsimultaneously, without changing the QCD scales on an observable-dependent basis.\nIn practice, we use dynamical scales that are expected to be ``effectively similar''\nto characteristic hard scales. Moreover, the study of dynamical scales\nis of interest independently of how we use them.\n\nThe default dynamical scale that we use throughout the paper is set to the \ncentral value \\mbox{$\\mu_0=H_{T}\/2$}, where $H_{T}$ is the sum of the transverse masses\nof the top and antitop quarks,\n\\begin{equation}\n H_{T}=m_{T,t}+m_{T,{\\bar t}}\\,,\n\\end{equation}\nwith\n\\begin{equation}\nm_{T,t({\\bar t})}=\\sqrt{{m^2_{t}}+p^2_{T,t({\\bar t})}}\\, ,\n\\end{equation}\nand $p_{T,t}$ and $p_{T,{\\bar t}}$ are the transverse momenta of the top and the antitop quark, respectively.\nWe present differential cross sections that are obtained by using \n\\mbox{$\\mu_0=H_{T}\/2$} and values of $\\mu_0$ of the order of the characteristic hard scale\nfor that cross section. We also show results obtained by using central scales that are\nlowered by a factor of $1\/2$. A reduced central scale, such as $H_{T}\/4$, was considered\nin the studies of Ref.~\\cite{Czakon:2016dgf} on the basis of features of\nfastest perturbative convergence of some observables,\nand it was also already suggested in Ref.~\\cite{Denner:2012yc}.\n\nWe have chosen the dynamical scale $H_{T}$ since it is expected to be parametrically\nof the same order as the characteristic hard scale of the observables that we examine\nin this paper. This {\\it a priori} expectation is based on the kinematical features of\nthese observables and on the general dynamical features of $t {\\bar t}$ production.\nIn the following paragraph we briefly comment about this. Independently of the expectation,\nthroughout the paper we comment on the actual quantitative results that we obtain by\nusing different QCD scales.\n\nOwing to dynamics, the typical size of both $p_{T,t}$ and $p_{T,{\\bar t}}$ is\nof the order of ${m_{t}}$ (see, e.g., Figs.~\\ref{fig:pt1}--\\ref{fig:pth} and \n\\ref{fig:pth_yth}). Therefore, in the case of observables that are inclusive over\n$p_{T,t}$ and $p_{T,{\\bar t}}$, such as the total cross section and the pair rapidity\ndistribution in Fig.~\\ref{fig:ytt}, $H_{T}\/2$ turns out to be of the same order as \n${m_{t}}$, which is the characteristic hard scale for these observables. 
Analogously,\nsince \\mbox{$p_{T,t} \\sim p_{T,{\\bar t}}$}, $H_{T}\/2$ turns out to be of the same order\nas the transverse masses, which are the characteristic hard scales for the differential cross sections\nin Figs.~\\ref{fig:pt1}--\\ref{fig:pth} and \\ref{fig:pth_yth}.\nThe invariant mass $m_{t{\\bar t}}$ of the $t {\\bar t}$ pair is the characteristic hard scale in the case\nof the differential cross sections in Figs.~\\ref{fig:mtt}, \\ref{fig:ytt_mtt} and \\ref{fig:mtt_pth}.\nThe invariant mass is of the same order as $H_{T}$ with the exception of the kinematical subregions\nwhere the transverse momentum $p_{T,t{\\bar t}}$\nof the pair or the rapidity separation \\mbox{$|y_t - y_{\\bar t}|$} between\nthe top and the antitop quark are large. However, these subregions are dynamically suppressed,\nand therefore they give a minor contribution to the inclusive\n(over $p_{T,t{\\bar t}}$ and \\mbox{$|y_t - y_{\\bar t}|$}) cross sections\nin Figs.~\\ref{fig:mtt}, \\ref{fig:ytt_mtt} and \\ref{fig:mtt_pth}.\n\n\nOur numerical results for differential cross sections are compared with the measurements of the\nCMS collaboration~\\cite{Sirunyan:2018wem} (the data correspond to an integrated luminosity of $35.8~{\\rm fb}^{-1}$) in the lepton+jets channel at parton level.\nThe extrapolation from particle to parton level is carried out by the CMS collaboration in the inclusive phase space,\nand therefore no kinematical cuts are applied to obtain our theoretical predictions.\nTo perform the comparison, our results are multiplied by the factor $0.292$,\nwhich corresponds to the value $0.438$~\\cite{Tanabashi:2018oca}\nof the semileptonic decay fraction of the $t {\\bar t}$ pair, multiplied by a factor of $2\/3$\nsince Ref.~\\cite{Sirunyan:2018wem} considers only the decay into electrons and muons.\n\nIn Ref.~\\cite{Sirunyan:2018wem} the CMS data for single- and double-differential distributions\nare compared to theoretical results obtained with the NLO Monte Carlo event generators\n{\\sc POWHEG}~\\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd},\ninterfaced either to {\\sc PYTHIA8}~\\cite{Sjostrand:2007gs} or to {\\sc HERWIG++}~\\cite{Bahr:2008pv},\nand {\\sc MG5\\_aMC@NLO}~\\cite{Alwall:2014hca} interfaced to {\\sc PYTHIA8}~\\cite{Sjostrand:2007gs}\n(using the {\\sc FxFx} method \\cite{Frederix:2012ps} to deal with multijet merging).\nIn addition, some of the measured parton-level single-differential distributions,\nnamely the transverse-momentum and rapidity distributions of the leptonically and hadronically\ndecaying top quark and the invariant-mass and rapidity distribution of the $t {\\bar t}$ pair,\nare also compared to the NNLO QCD+NLO EW results of Ref.~\\cite{Czakon:2017wor}.\nNone of the double-differential distributions in Ref.~\\cite{Sirunyan:2018wem} are compared to theoretical results beyond NLO QCD.\n\n\n\n\\subsection{Single-differential distributions}\n\\label{sec:single}\n\nIn this section we present LO, NLO and NNLO results for a selection of single-differential distributions\nand compare them with the CMS measurements from Ref.~\\cite{Sirunyan:2018wem}.\nAt each perturbative order the scale-uncertainty bands in the figures\nare computed as explained at the beginning of Section~\\ref{sec:resu}. 
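For definiteness, at each perturbative order the band is the envelope of the predictions obtained with \\mbox{$(\\mu_R,\\mu_F)=(a\\,\\mu_0,b\\,\\mu_0)$} and\n\\begin{equation}\n(a,b)\\in\\left\\{(1,1),\\,(2,2),\\,(\\tfrac{1}{2},\\tfrac{1}{2}),\\,(2,1),\\,(1,2),\\,(1,\\tfrac{1}{2}),\\,(\\tfrac{1}{2},1)\\right\\}\\,,\n\\end{equation}\ni.e.\\ the seven combinations compatible with the constraint \\mbox{$0.5\\leq \\mu_F\/\\mu_R\\leq 2$}. 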
\n\nWe start the presentation by considering the transverse-momentum distributions of the top and antitop quarks.\nFor each event we classify the transverse momenta\naccording to their maximum and minimum values, $p_{T,t_\\text{high}}$ and $p_{T,t_\\text{low}}$.\n\n\n\\begin{figure}[t]\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_HT_2_pT_t1.pdf}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_mT_t1_pT_t1.pdf}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_mT_t1_2_pT_t1.pdf}\n\\vspace{-4ex}\n\\caption{\\label{fig:pt1}\n Single-differential cross sections as a function of $p_{T,t_\\text{high}}$.\n CMS data~\\cite{Sirunyan:2018wem} and LO, NLO and NNLO results\n for central scales equal to $H_{T}\/2$ (left), $m_{T,t_\\text{high}}$ (central) and $m_{T,t_\\text{high}}\/2$ (right).\n}\n\\end{figure}\n\\begin{figure}[t]\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_HT_2_pT_t2.pdf}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_mT_t2_pT_t2.pdf}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_mT_t2_2_pT_t2.pdf}\n\\vspace{-4ex}\n\\caption{\\label{fig:pt2}\n Single-differential cross sections as a function of $p_{T,t_\\text{low}}$.\n CMS data~\\cite{Sirunyan:2018wem} and LO, NLO and NNLO results\n for central scales equal to $H_{T}\/2$ (left), $m_{T,t_\\text{low}}$ (central) and $m_{T,t_\\text{low}}\/2$ (right).\n}\n\\end{figure}\n\nIn Figs.~\\ref{fig:pt1} and \\ref{fig:pt2}~(left)\nwe show these distributions\\footnote{NNLO results for these distributions have been recently presented in Ref.~\\cite{Czakon:2019bcq}.}\ncomputed at our reference scale \\mbox{$\\mu_0=H_T\/2$}.\nThe characteristic hard scale of a transverse-momentum distribution is the transverse mass $m_T$.\nTherefore, in Fig.~\\ref{fig:pt1} (central and right) we also report the $p_{T,t_\\text{high}}$ distribution for\ncentral scales \\mbox{$\\mu_0=m_{T,t_\\text{high}}$} and \\mbox{$\\mu_0=m_{T,t_\\text{high}}\/2$}, respectively,\nwhile in Fig.~\\ref{fig:pt2} (central and right) we consider $p_{T,t_\\text{low}}$\nfor \\mbox{$\\mu_0=m_{T,t_\\text{low}}$} and \\mbox{$\\mu_0=m_{T,t_\\text{low}}\/2$}.\nThe $p_{T,t_\\text{high}}$ distribution is peaked at \\mbox{$p_{T,t_\\text{high}}\\sim 100$~GeV},\nwhile the $p_{T,t_\\text{low}}$ distribution is peaked at \\mbox{$p_{T,t_\\text{low}}\\sim 60$~GeV}.\n\nWe first discuss the $p_{T,t_\\text{high}}$ distribution and focus on the \\mbox{$p_{T,t_\\text{high}}\\to 0$} region.\nIf $p_{T,t_\\text{high}}$ is small, both top quarks are forced to have small transverse momenta.\nAs a consequence, this kinematical region corresponds to a small transverse momentum of\nthe top-quark pair, $p_{T,t{\\bar t}}$.\nThe small-$p_{T,t{\\bar t}}$ region exhibits Sudakov-type divergences \\cite{Zhu:2012ts,Li:2013mia,Catani:2014qha,Catani:2018mei} at fixed order in perturbation theory,\nfrom the strong unbalance of real and virtual contributions due to soft-collinear emissions.\nIn the computation of $p_{T,t_\\text{high}}$, the unphysical fixed-order behaviour of $p_{T,t{\\bar t}}$ is smeared\ndue to the integration over $p_{T,t_\\text{low}}$, and the Sudakov-type perturbative divergences disappear, by leaving (possibly large) residual effects.\nThe amount of smearing is controlled by the shape of the $p_{T,t_\\text{high}}$ distribution in the low-$p_T$ region\nat LO, which affects the unbalance between real and virtual contributions.\nThe 
steeply rising LO distribution at low $p_T$ strongly suppresses real radiation,\nand the NLO radiative corrections to $p_{T,t_\\text{high}}$ tend to be large and negative\nas $p_{T,t_\\text{high}}$ decreases (a large and positive effect occurs at NNLO, and so forth).\nAccurate theoretical predictions of the detailed shape of the $p_{T,t_\\text{high}}$ distribution at small $p_T$ require studies of all-order resummation effects of Sudakov type.\nHowever, in the case of large $p_T$ bins (as is the case in Fig.~\\ref{fig:pt1}), reliable predictions can be obtained by considering perturbation theory\nat a sufficiently high order.\n\nComparing the results in Fig.~\\ref{fig:pt1} for the scales $\\mu_0=H_T\/2$ (left) and $\\mu_0=m_{T,t_\\text{high}}$ (central)\nwe see that they are rather similar, and that the NNLO prediction agrees with the data.\nThe scale \\mbox{$\\mu_0=m_{T,t_\\text{high}}\/2$} also leads to good agreement with the data,\nbut the corresponding NNLO uncertainty band is significantly narrower,\nespecially in the intermediate region of transverse momenta,\nwhich is not observed for the corresponding band at NLO.\nThis behaviour, namely the drastic shrinking of the scale-uncertainty band from NLO to NNLO,\nmight indicate that for this choice scale variations cannot be trusted\nas an estimate of the perturbative uncertainties at NNLO.\nWe also note that in the intermediate and large $p_T$ region the result obtained by using \\mbox{$\\mu_0=m_{T,t_\\text{high}}\/2$} coincides\nwith the upper bound of the corresponding NNLO band (i.e., the point $\\mu_0=m_{T,t_\\text{high}}\/2$ corresponds to a local maximum of the NNLO cross section as a function of the scales).\n\nWe now discuss the $p_{T,t_\\text{low}}$ distribution. In the region \\mbox{$p_{T,t_\\text{low}}\\to 0$},\nfor LO kinematics\nboth the top and the antitop quark are required to have small $p_T$.\nAt NLO, real corrections open up a phase-space region where one top quark has a small $p_T$ and\nthe other one has a relatively large $p_T$,\nthereby leading to large positive radiative contributions.\nThe perturbative instability affecting the \\mbox{$p_{T,t_\\text{high}}\\to 0$} behaviour is now spread\nover the entire region of transverse momenta since, contrary to the low-$p_{T,t_\\text{high}}$ region,\nsmall values of $p_{T,t_\\text{low}}$ do not constrain $p_{T,t{\\bar t}}$ to be small.\nThe choices \\mbox{$\\mu_0=H_T\/2$} and \\mbox{$\\mu_0=m_{T,t_\\text{low}}$} lead to rather similar results.\nIn both cases, at low and large $p_{T,t_\\text{low}}$ NLO and NNLO bands overlap,\nwhereas they do not in the intermediate region where\nthe NLO band shrinks, showing that NLO perturbative uncertainties are underestimated.\nWe note that the scale \\mbox{$\\mu_0=m_{T,t_\\text{low}}\/2$} makes the perturbative convergence worse and the scale uncertainties\nlarger at both NLO and NNLO.\n\n\\begin{figure}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_HT_2_pT_th.pdf}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_mT_avt_pT_th.pdf}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_mT_avt_2_pT_th.pdf}\n\\vspace{-4ex}\n\\caption{\\label{fig:pth}\n Single-differential cross sections as a function of $p_{T,t_\\text{had}}$.\n CMS data~\\cite{Sirunyan:2018wem} and LO, NLO and NNLO results\n for central scales equal to $H_{T}\/2$ (left), $m_{T,t_{\\rm av}}$ (central) and $m_{T,t_{\\rm av}}\/2$ (right).\n}\n\\end{figure}\n\n\nWe next consider the distribution 
in the transverse momentum of the hadronically decaying top or antitop quark, $p_{T,t_\\text{had}}$.\nSince our calculation refers to stable top quarks, a prediction for the $p_{T,t_\\text{had}}$ distribution\ncan be obtained by computing the transverse-momentum spectra of the top and the antitop quark,\nand taking their average afterwards.%\n\\footnote{This is also the definition used in the data\/theory comparison performed\nby the CMS collaboration \\cite{Sirunyan:2018wem}.}\nDiscussing our theoretical predictions, we refer to this as the $p_{T,t_{\\mathrm{av}}}$ distribution.\nThe corresponding LO, NLO and NNLO results are depicted in Fig.~\\ref{fig:pth}\nfor three different scale choices.\nWe show the predictions for our default choice \\mbox{$\\mu_0=H_T\/2$} on the left.\nIn the predictions for our natural scale choices, \nthe top (antitop) $p_T$ distributions required to compute the average are evaluated\nat the corresponding transverse mass, $\\mu_0=m_{T,t({\\bar t})}$ (central) and $\\mu_0=m_{T,t({\\bar t})}\/2$ (right).\nWe denote these scale choices as $m_{T,t_{\\rm av}}$ and $m_{T,t_{\\rm av}}\/2$, respectively.\nThe $p_{T,t_\\text{had}}$ distribution has a maximum at \\mbox{$p_{T,t_\\text{had}} \\sim 80~{\\rm GeV}$}.\nThe LO and NLO scale-uncertainty bands do not overlap, except for \\mbox{$\\mu_0=m_{T,t_{\\rm av}}\/2$}.\nThis is consistent with what happens for the corresponding total cross sections.\nThe NLO and NNLO bands do overlap in the entire $p_{T,t_\\text{had}}$ range,\nsuggesting a good convergence of the perturbative expansion.\nIn Fig.~\\ref{fig:pth} we also observe that the scale choices $\\mu_0=H_T\/2$ (left) and $\\mu_0=m_{T,t_{\\rm av}}$ (central)\ngive rather similar results.\nOn the contrary, the choice \\mbox{$\\mu_0=m_{T,t_{\\rm av}}\/2$} suggests a faster convergence\nof the perturbative expansion \\cite{Czakon:2016dgf}. 
\nHowever, we also note that with this scale choice the NLO scale dependence is similar\nto what we obtain with $\\mu_0=H_T\/2$ and $\\mu_0=m_{T,t_{\\rm av}}$,\nwhereas the NNLO scale dependence is significantly smaller than at NLO,\nthereby suggesting a possible underestimation of the perturbative uncertainty at NNLO.\nWe note that \\mbox{$\\mu_0=m_{T,t_{\\rm av}}\/2$} is also the scale used\nfor the NNLO QCD+NLO EW prediction~\\cite{Czakon:2017wor}\nto which the CMS data are compared in Ref.~\\cite{Sirunyan:2018wem}.\nThe data show that the measured $p_{T,t_\\text{had}}$ distribution is slightly softer than the NNLO prediction.\nThis is noticed also by the CMS collaboration~\\cite{Sirunyan:2018wem} and in previous comparisons between NNLO results and LHC measurements \\cite{Khachatryan:2015oqa,Khachatryan:2015fwh,Aad:2015hna,Aad:2015mbv,Khachatryan:2016gxp,Aaboud:2016iot,Aaboud:2016syx,Sirunyan:2017mzl}.\nHowever Fig.~\\ref{fig:pth} shows that the NNLO result and the data are consistent within the respective uncertainties.\nOur predictions for the $p_T$ spectrum of the leptonically decaying top quark are, of course,\nidentical to those for $p_{T,t_\\text{had}}$, and they are not shown here.\nThe comparison with the data shows similar features.\n\n\n\nWe add a few comments on the perturbative behaviour of the $p_{T,t_\\text{high}}$, $p_{T,t_\\text{low}}$ and $p_{T,t_{\\mathrm{av}}}$ distributions\npresented in Figs.~\\ref{fig:pt1}, \\ref{fig:pt2} and \\ref{fig:pth}.\nThe three distributions are identical at LO,\nbut their behaviour beyond LO is clearly very different.\nAs we can see from Fig.~\\ref{fig:pth}, the shape of the $p_{T,t_{\\mathrm{av}}}$ distribution is almost unchanged\nwith respect to the LO prediction.\nThis feature is somehow expected, since the transverse-momentum spectrum of the top (antitop) quark\nat higher orders is affected by recoiling hard multijet radiation,\nwhich leads to a partly softer spectrum only at quite high $p_{T,t_{\\mathrm{av}}}$\n(beyond the $p_{T,t_\\text{had}}$ range in Fig.~\\ref{fig:pth}),\nwhere hard multijet radiation is kinematically suppressed.\n\nThe physical shape of the $p_{T,t_\\text{high}}$ and $p_{T,t_\\text{low}}$ distributions is expected\nto be different from $p_{T,t_{\\mathrm{av}}}$ at low and intermediate values of $p_T$.\nThere we roughly have \\mbox{$p_{T,t_\\text{high}} - p_{T,t_\\text{low}} \\sim p_{T,t{\\bar t}}$}.\nThe $p_{T,t{\\bar t}}$ distribution, which is confined to \\mbox{$p_{T,t{\\bar t}} = 0$} at LO,\nhas an average value of about 50~GeV (which is already achieved at NLO),\nand it is localized in the small-$p_{T,t{\\bar t}}$ region with a peak around 10~GeV~\\cite{Zhu:2012ts,Catani:2018mei}.\nWe thus expect that the physical shape of $p_{T,t_\\text{high}}$ ($p_{T,t_\\text{low}}$) is harder (softer) than its LO counterpart,\nwith shape distortions of few tens of GeV as given by the size of $p_{T,t{\\bar t}}$.\nIndeed, this is what we can observe from the comparison between the data and the LO prediction\nat small and intermediate values of $p_T$ in Figs.~\\ref{fig:pt1} and~\\ref{fig:pt2}.\nThis shape distortion has a smaller effect at high values of both $p_{T,t_\\text{high}}$ and $p_{T,t_\\text{low}}$.\n\nIn view of these physical expectations, it is not surprising that the shape\nof the $p_{T,t_\\text{high}}$ and $p_{T,t_\\text{low}}$ distributions is strongly affected by beyond-LO contributions.\nAs discussed before, their fixed-order perturbative features are a smoothened version\nof the corresponding features of 
the $p_{T,t{\\bar t}}$ distribution \\cite{Catani:2018mei},\nthe smoother behaviour being due to the smearing that is produced by the integration\nof $p_{T,t{\\bar t}}$ over the respective unobserved $p_T$.\n\n\n\\begin{figure}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_HT_2_m_ttx.pdf}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_m_ttx_2_m_ttx.pdf}\n\\includegraphics[height=0.30\\textheight]{plots\/LHC13.CMSsetup\/plotCMS13_HT_4_m_ttx.pdf}\n\\vspace{-4ex}\n\\caption{\\label{fig:mtt}\n Single-differential cross sections as a function of $m_{t{\\bar t}}$.\n CMS data~\\cite{Sirunyan:2018wem} and LO, NLO and NNLO results\n for central scales equal to $H_{T}\/2$ (left), $m_{t{\\bar t}}\/2$ (central) and $H_{T}\/4$ (right).\n}\n\\end{figure}\n\nThe invariant-mass distribution of the top-quark pair is reported in Fig.~\\ref{fig:mtt}.\nThe distribution is peaked at \\mbox{$m_{t{\\bar t}} \\sim 400~{\\rm GeV}$}.\nThe characteristic hard-scattering scale for this distribution is of the order of $m_{t{\\bar t}}$ itself.\nWe use our default scale choice\n\\mbox{$\\mu_0=H_T\/2$}~(left), and two other central values, namely \\mbox{$\\mu_0=m_{t{\\bar t}}\/2$}~(central) and \\mbox{$\\mu_0=H_T\/4$}~(right).\n\n\nWe first comment on the convergence of the perturbative series for the three scale choices.\nIn the cases \\mbox{$\\mu_0=H_T\/2$} and \\mbox{$\\mu_0=m_{t{\\bar t}}\/2$} we see that LO and NLO bands\ndo not overlap, analogously to what was previously observed for the $p_{T,t_\\text{had}}$ distribution and for the total cross section in Eq.~(\\ref{eq:sigmatot}).\nIn the case \\mbox{$\\mu_0=H_T\/2$}, the NNLO corrections enhance the NLO result\nby about $10\\%$ in the peak region,\nand their effect slightly increases with $m_{t{\\bar t}}$ up to about $15\\%$ in the highest-$m_{t{\\bar t}}$ bin.\nUsing \\mbox{$\\mu_0=m_{t{\\bar t}}\/2$}, the NNLO effect is similar in the peak region,\nbut it increases to about $20\\%$ at high $m_{t{\\bar t}}$. The NLO and NNLO bands do overlap using both\n\\mbox{$\\mu_0=H_T\/2$} and \\mbox{$\\mu_0=m_{t{\\bar t}}\/2$}.\nAs observed in Ref.~\\cite{Czakon:2016dgf}, the choice \\mbox{$\\mu_0=H_T\/4$} leads to a faster convergence\nof the perturbative series.\nHowever, in the region where $m_{t{\\bar t}} > 360~{\\rm GeV}$ we see that the size of the scale-variation band is very much reduced in going from the NLO to the NNLO result.\nThis behaviour suggests that the central scale $\\mu_0=H_T\/4$ is (accidentally) quite close to a region of (local) minimal sensitivity \\cite{Stevenson:1981vj} of the scale dependence of the NNLO result. In view of this feature, we think that the NNLO scale variation band\nwith $\\mu_0=H_T\/4$ likely underestimates the perturbative uncertainty in this $m_{t{\\bar t}}$ region and, especially, in the intermediate mass range $400~{\\rm GeV}\\ltap m_{t{\\bar t}} \\ltap 1~{\\rm TeV}$.\n\n\nWe now comment on the comparison with the data. 
The first bin, \\mbox{$300\\,{\\rm GeV} 2\\sqrt{{m^2_{t}}+p_{T,{\\rm min}}^2} \\equiv 2 m_{T,{\\rm min}}$} for LO kinematics.\nBelow this unphysical threshold the LO result vanishes,\nand the NLO and NNLO results are effectively LO and NLO predictions, respectively.\nAs a consequence, they suffer from larger theoretical uncertainties,\nwhich is reflected by the stronger scale dependence.\nAbove this threshold, the LO distribution sharply increases up to a kinematical peak\nclose to $2m_{T,{\\rm min}}$.\nOwing to this LO behaviour, soft-collinear radiation produces shape instabilities~\\cite{Catani:1997xc}\nin this $m_{t{\\bar t}}$ region at each subsequent perturbative order.\nThe qualitative behaviour of these shape instabilities is completely analogous to that observed and discussed in Ref.~\\cite{Catani:2018krb} (see Figs.~10 and 20 and related comments therein) in the case of diphoton production\nin the presence of $p_T$ cuts.\nThese perturbative instabilities are localized in a very narrow region\naround the LO threshold, and therefore their effect is smeared if a sufficiently large bin size is considered, as is the case for the differential distribution in Fig.~\\ref{fig:mtt_pth}.\n\nThe comparison with the data shows that in the first two $p_{T,t_\\text{had}}$ intervals\nthe NNLO prediction undershoots the data in the first $m_{t{\\bar t}}$ bin.\nThis discrepancy at low-$m_{t{\\bar t}}$ is not resolved in the two highest-$p_{T,t_\\text{had}}$ intervals,\nsince the larger bin size ($300\\text{ GeV} < m_{t{\\bar t}} < 430\\text{ GeV}$) renders the distribution less sensitive to the behaviour close to the physical threshold.\nThese observations are consistent with the expectations\nfrom the behaviour of the single-differential distribution in Fig.~\\ref{fig:mtt} at low $m_{t{\\bar t}}$.\nExcluding the narrower bins at low $m_{t{\\bar t}}$, the NNLO prediction in Fig.~\\ref{fig:mtt_pth} is in very good agreement with the experimental measurements.\nThe results with \\mbox{$\\mu_0=H_T\/2$}~(upper) and \\mbox{$\\mu_0=m_{t{\\bar t}}\/2$}~(lower)\nturn out to be rather similar, consistently with\nour general expectation at the beginning of Section~\\ref{sec:resu}.\nThe exception is the highest-$p_{T,t_\\text{had}}$ interval, where the scale \\mbox{$\\mu_0=m_{t{\\bar t}}\/2$} leads\nto a quite large NLO scale dependence at low $m_{t{\\bar t}}$, which is drastically reduced at NNLO.\nIn this region of low $m_{t{\\bar t}}$ and high $p_{T,t_\\text{had}}$, the invariant mass $m_{t{\\bar t}}$ is not the characteristic hard scale anymore:\nthe scale choice \\mbox{$\\mu_0=m_{t{\\bar t}}\/2$} is not expected to be optimal,\nand the scale \\mbox{$\\mu_0=H_T\/2$} turns out to be more appropriate.\n\n\\section{Summary and outlook}\n\\label{sec:summary}\n\nIn this paper we have presented a new fully differential NNLO calculation\nof top-quark pair production at hadron colliders.\nThe calculation is carried out by using the $q_T$ subtraction formalism\nto handle IR divergences from real and virtual contributions, and it is implemented in the \\Matrix{} framework.\nOur code enables fast and efficient calculations of fiducial cross sections\nand multi-differential distributions.\n\nWe have computed several single- and double-differential distributions of the top quarks,\nand we have compared our results with recent measurements performed by the CMS collaboration\nin the lepton+jets decay channel.\nWe have considered several values of the renormalization and factorization scales to compute each of the 
distributions.\nWe have used natural scales (i.e. ${m_{t}}$, $m_{t{\\bar t}}\/2$ and the relevant transverse masses $m_T$)\nof the order of the characteristic hard scale of the computed distribution,\nand we have shown that the corresponding results are similar to what is obtained \nwith the overall choice $\\mu_0 = H_T\/2$.\nWe find that both the natural scales and $\\mu_0 = H_T\/2$ lead\nto a reasonable perturbative behaviour for all the distributions that we have examined.\nThe NNLO corrections substantially reduce the uncertainties of the theoretical predictions,\nand they improve the overall agreement with the experimental measurements.\nThe largest deviation between data and the NNLO result\noccurs close to the $m_{t{\\bar t}}$ threshold in single- and double-differential distributions.\nThis discrepancy could be related to a variety of effects,\nincluding issues in the extrapolation of the data from particle to parton level,\nwhich is expected to be delicate in such threshold region.\nA lower value of the top-quark mass also has a significant impact close to the threshold.\n\nThe code that is used to perform these calculations is going to become public in a future \\Matrix{} release,\nproviding a fast and flexible tool to compute (multi-)differential distributions\nwith arbitrary cuts on the top-quark kinematical variables.\nThe inclusion of NLO EW corrections and of top-quark decays is left to future work.\n\n\\vspace*{2ex}\n\\noindent {\\bf Acknowledgements}\n\n\\noindent We are very grateful to Hayk Sargsyan for his contribution at early stages of this work.\nWe are also indebted to Federico Buccioni, Jean-Nicolas Lang, Jonas Lindert and Stefano Pozzorini for their ongoing support with {\\sc OpenLoops~2}.\nWe wish to thank Ben Kilminster and Florencia Canelli for useful discussions.\nThis work is supported in part by the Swiss National Science Foundation (SNF) under contract 200020-169041. The work of SK is supported by the ERC Starting Grant 714788 REINVENT. \n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe mechanism underlying the explosive deaths of massive stars remains poorly understood. \\citet{Colg66} first \nsuggested that neutrino energy deposition plays a central role in powering core-collapse supernovae (CCSNe). \nEver since, much of the effort in CCSN theory has focused on building increasingly sophisticated \nneutrino radiation hydrodynamical models, with the hopes of reproducing the properties of \nCCSNe, including the kinetic energies, debris morphologies, nucleosynthetic yields, and the remnant mass, \nspin, and velocity distributions. Despite this effort, the best models still fall short of accounting \nfor any of these properties, much less all of them simultaneously. Perhaps even more alarming, \nthe various groups involved in CCSN modeling often reach qualitatively different conclusions with \ncalculations that are ostensibly quite similar, vis-\\'{a}-vis whether an explosion even occurs. \n\nIn \\citet{Mull12,Mull12_sasi}, results are reported from 2D axisymmetric modeling with the \\textsc{Vertex-CoCoNuT} code, \nwhich uses a conformally-flat spacetime approximation of general relativity \\citep{Mull10}. They find \nexplosions for $8.1$-$\\,{\\rm M}_\\odot$, $11.2$-$\\,{\\rm M}_\\odot$, $15$-$\\,{\\rm M}_\\odot$, and $27$-$\\,{\\rm M}_\\odot$ progenitors, but, when \nreported, the explosion energies are $\\sim$10 times smaller than the canonical $10^{51}\\, {\\rm erg}$ energy \nof typical CCSNe. 
Similar findings were presented for a variety of other progenitors (Janka et al., Nuclear Astrophysics Workshop, Ringberg Castle, 2014). Recently, \\citet{Hank13} reported results from \na three-dimensional simulation with the \\textsc{Prometheus-Vertex} code of the same $27$-$\\,{\\rm M}_\\odot$ progenitor \nconsidered in \\citet{Mull12_sasi} and found no explosion. This negative result has been recapitulated for all\nother 3D simulations performed recently by this group \\citep{Tamb14}, despite their having seen explosions in the corresponding 2D simulations. \n\n\\citet{Suwa10} reports an explosion of a $13$-$\\,{\\rm M}_\\odot$ progenitor in a 2D simulation and \\citet{Taki12} finds explosions of an $11.2$-$\\,{\\rm M}_\\odot$ progenitor in both 2D and 3D. These models neglected the heavy lepton neutrinos, which were recently incorporated with an approximate leakage scheme \\citep{Taki13}. In all cases, the $\\nu_e$ and $\\bar{\\nu}_e$ transport was computed using the isotropic diffusion source approximation (IDSA) \\citep{Lieb09}, a crude approximation meant to enable multi-D simulations at minimal cost. While interesting, their results are difficult to interpret in the context of the viability of the neutrino mechanism, as the authors acknowledge.\n\nMeanwhile, \\citet{Brue13} report results of \n2D axisymmetric modeling with their \\textsc{Chimera} code. They consider $12$-$\\,{\\rm M}_\\odot$, $15$-$\\,{\\rm M}_\\odot$, \n$20$-$\\,{\\rm M}_\\odot$, and $25$-$\\,{\\rm M}_\\odot$ progenitors from \\citet{Woos07} and find explosions in all cases, \ncuriously at almost the same post-bounce time. They also report energies that are somewhat larger than \nthose reported in \\citet{Mull12}, but that still fall short of the $10^{51}\\, {\\rm erg}$ mark. Janka et al. (Nuclear Astrophysics Workshop, Ringberg Castle, 2014) recently reported 2D models of the same four progenitors, and found significantly different results, with, for example, the $12$-$\\,{\\rm M}_\\odot$ model not yet exploding more than $700\\,{\\rm ms}$ after bounce.\n\nImportantly, all of the studies discussed above relied on the so-called \n``ray-by-ray-plus'' approximation of neutrino transport, which replaces the real transport problem \nwith a series of independent spherically-symmetric transport solves. This is a crude approximation \nthat introduces large variations in angle and time in the neutrino fluxes and the associated neutrino energy deposition \nso crucial for the neutrino-driven mechanism. This simplification has yet to be clearly justified, \nand may be producing qualitatively incorrect results, particularly in 2D. 
\n\nThe only calculations ever performed \nwhich allow for multidimensional transport were the VULCAN\/2D results reported in \\citet{Burr06}, \\citet{Burr07}, \\citet{Ott08}, and \\citet{Bran11}, \nand none of these calculations showed a revival of the stalled shock in 2D by the delayed-neutrino mechanism.\nThe calculations of \\citet{Ott08} and \\citet{Bran11} were multi-angle as well.\nHowever, these calculations were performed without $\\mathcal{O}(v\/c)$ transport effects \\citep{Hube07}.\nWe are, therefore, motivated in this paper to perform new 2D multi-group radiation hydrodynamics calculations with\na new code with both multi-D transport (avoiding the simplifications of the ray-by-ray approach) and \nthe velocity-dependent terms to determine whether these earlier results were artifacts of the neglect \nof $\\mathcal{O}(v\/c)$ terms, and for comparison with the 2D results of other groups.\nTo accomplish this, we have developed the CASTRO radiation hydrodynamics code. CASTRO contains a \nmulti-group flux-limited neutrino transport solver, is time-dependent and multidimensional, \nand treats three neutrino species ($\\nu_e$, $\\bar{\\nu}_e$, $\\nu_x$, where the $\\nu_x$ species \nincludes the $\\mu$ and $\\tau$ neutrinos and their antiparticles), including all relevant $\\mathcal{O}(v\/c)$ terms.\nWe find that none of our new 2D calculations, employing the same progenitors as \\citet{Brue13}, explode by the delayed-neutrino mechanism.\nWith this paper, we describe our results and speculate on the reasons for the different outcomes we find.\nSince all other groups are using the ray-by-ray approach, we suggest that one reason for the different outcomes\nmay be in the handling of multi-D transport. In 2D, the axial sloshing motions, often not seen in 3D \\citep{Burr12}, may be reinforcing the errors in the ray-by-ray approach\nand leading to a qualitatively incorrect outcome. In 3D, these axial sloshing effects are often absent, and the ray-by-ray approach\nmay be less anomalous (due to the greater sphericity of the hydrodynamics), so the lack\nof explosions seen by the Garching group in 3D, when they observe explosions for the same progenitors in 2D, remains\npuzzling.\n\n\n\n\\section{Numerics and Setup}\n\nWe use the CASTRO code to carry out our CCSN simulations \\citep{Almg10,Zhan11,Zhan13}. \nCASTRO is a second-order, Eulerian, compressible, Godunov-type, radiation hydrodynamics \ncode that uses block-structured adaptive mesh refinement to simultaneously refine \nin both space and time. Simulations can be performed in 1D (spherical), 2D (cylindrical), and 3D (Cartesian). The hydrodynamic \nupdates use piecewise-parabolic reconstruction with higher-order limiters to preserve \naccuracy at smooth extrema, an approximate Riemann solver detailed in \\citet{Almg10}, and incorporate \nfull corner coupling in the directionally unsplit integration. In this work, we make \nuse of the multi-group flux-limited diffusion (MGFLD) neutrino transport solver detailed \nin \\citet{Zhan13}. 
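Schematically, the flux-limited diffusion approximation closes the transport problem by writing the flux of each neutrino energy group as\n\\begin{equation}\n\\vec{F}_{\\nu} = -\\frac{c\\lambda}{\\kappa}\\nabla E_{\\nu}\\,,\n\\end{equation}\nwhere $E_{\\nu}$ is the group energy density, $\\kappa$ is the relevant total opacity, and $\\lambda$ is a flux limiter that tends to $1\/3$ in the optically-thick diffusion limit and enforces $|\\vec{F}_{\\nu}|\\leq cE_{\\nu}$ in the free-streaming limit; the specific limiter, the $\\mathcal{O}(v\/c)$ terms, and the coupling among groups are those of \\citet{Zhan13}. 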
\n\nCurrently, multi-dimensional multi-angle\ntransport is not feasible and will require the exascale.\nOur comoving frame multi-group flux-limited diffusion (MGFLD) formulation includes $\\mathcal{O}(v\/c)$ terms that can\nresult (correctly) in significant differences in the dynamic diffusion limit where\nradiation transport is dominated by motion of the fluid \\citep{Cast04}.\nThe approach used in CASTRO\nsplits the system into three parts: 1) a part that couples the radiation\nand fluid in a hyperbolic subsystem with a piecewise parabolic\nmethod (PPM) with characteristic tracing and full corner\ncoupling \\citep{Mill02}, 2) another part that is a system of coupled\nparabolic equations that evolves radiation diffusion over all the\ngroups (along with $Y_e$ in the neutrino case) and source-sink\nterms, and 3) a final part that performs frequency-space advection.\nThe hyperbolic subsystem is solved explicitly with a high-order\nGodunov scheme as part of the hydrodynamic component of the algorithm,\nwhereas the parabolic part is solved implicitly with a first-order\nbackward Euler method. Frequency-space advection is performed using\na standard approach based on the method of lines. The frequency-space advection has its\nown CFL condition for stability and, if necessary, subcycling in time is employed in order to satisfy the\nfrequency-space CFL condition. The primary computational expense of\nthe radiation is in the solution of linear systems as part of the\niteration over energy groups. We rely on the {\\sl hypre} library\n\\citep{Falg02} for solving these systems on large parallel machines.\n\nCASTRO uses a hybrid parallelization strategy based on MPI + OpenMP\nusing the BoxLib framework \\citep{Almg10}. The basic strategy\nis to distribute grids within the AMR hierarchy to computational nodes.\nThis provides a natural coarse-grained approach to distributing the\ncomputational work. A dynamic load balancing technique is needed to\nadjust the load. Although the code supports both a heuristic knapsack\nalgorithm and a space-filling curve algorithm for load balancing, the\ndata-locality properties make the space-filling curve the method of\nchoice for problems with radiation. The main advantages of the CASTRO code\nare the efficiency due to the use of AMR, the temporal sub-cycling, and\nthe accuracy due to the coupling of radiation force into the Riemann solver.\n\nCASTRO incorporates 1) a complicated set of opacity tables for the\nvarious neutrino species that includes weak-magnetism,\nion-ion correlation, and ion form-factor corrections,\n2) various extant nuclear equations of state (including the\nShen [default] and various Lattimer and Swesty equations of state), 3) inelastic\nneutrino-electron and neutrino-nucleon scattering, and 4) a\ntemporal sub-cycling algorithm that accelerates computation\nby many factors over traditional codes. CASTRO\ntreats three neutrino species ($\\nu_e$, $\\bar{\\nu}_e$, $\\nu_x$, where \nthe $\\nu_x$ species includes the $\\mu$ and $\\tau$ neutrinos and their antiparticles). In this work, \nfor computational expediency, we neglect in the 2D simulations energy-group coupling processes such as \ninelastic scattering, and justify this choice in Section~\\ref{1D} through the comparison of model shock radius \nevolutions in 1D, with and without them. We also adopt the Shen equation of state \\citep{Shen98a,Shen98b}. \n\nWe follow the collapse, bounce, and subsequent evolution of four progenitors in 1D and 2D. 
The progenitors are nonrotating, solar metallicity models with zero-age main sequence masses of $12$-$\\,{\\rm M}_\\odot$, $15$-$\\,{\\rm M}_\\odot$, $20$-$\\,{\\rm M}_\\odot$, and $25$-$\\,{\\rm M}_\\odot$ \\citep{Woos07}. These are the same progenitors recently considered by \\citet{Brue13} and Janka et al. (Nuclear Astrophysics Workshop, Ringberg Castle, 2014). Our numerical grid has $0.5\\,{\\rm km}$ resolution in the inner $\\sim$$128\\,{\\rm km}$, $1\\,{\\rm km}$ resolution out to $\\sim$$270\\,{\\rm km}$, never worse than $4\\,{\\rm km}$ resolution anywhere beneath the shock, and extends out to $5120\\,{\\rm km}$.\n\n\\section{Results of Two-Dimensional Simulations}\n\\label{results}\n\n\\subsection{Global Structure}\n\\label{global}\nFigure~\\ref{fig:snapshots} shows representative snapshots of the radial velocity and entropy from our $12$-$\\,{\\rm M}_\\odot$ and $25$-$\\,{\\rm M}_\\odot$ models. At $200\\,{\\rm ms}$ after bounce, both models are nearly spherical and show weak convective activity with a characteristic angular scale corresponding to $\\ell\\sim10$ ($\\ell$ is spherical harmonic degree) as is clearly seen in the radial velocities. By $400\\,{\\rm ms}$ after bounce, both models have become more aspherical, indicating stronger convective activity that is at larger angular scales (smaller $\\ell$). The snapshots at $600\\,{\\rm ms}$ after bounce look qualitatively similar to those at $400\\,{\\rm ms}$ after bounce, showing large scale convective motions including relatively low entropy accretion streams and higher entropy buoyant plumes.\n\nFigure~\\ref{fig:rshock} shows the evolutions of the average shock radii for our 1D and 2D models. Not surprisingly, our 1D models do not explode, consistent with nearly all spherically symmetric CCSN calculations. In 2D, despite evolving the models to $\\gtrsim$$600\\,{\\rm ms}$ after bounce, we do not find explosions. Nevertheless, there are interesting differences between 1D and 2D models. Perhaps most importantly, the 2D stalled shock radii are consistently larger by a factor 1.3--2, as found in essentially all 1D-2D comparisons \\citep[e.g.][]{Mull12}. This is often attributed to aspherical instabilities, namely convection and the standing accretion shock instability (SASI), both increasing the neutrino heating efficiency in the gain region and providing turbulent pressure support \\citep[e.g.][]{Hera92,Burr95,Jank96}. In our models, we see convective activity develop immediately as the bounce shock moves out. This brief phase of ``prompt convection,'' seeded by perturbations introduced by our aspherical grid, drives the shock to large radii much faster than in the corresponding 1D models. By $\\sim$$20\\,{\\rm ms}$ after bounce, all four 2D models have shock radii of $\\sim$$200\\,{\\rm km}$ and these shocks stall $\\sim$$60\\,{\\rm ms}$ earlier than the 1D models, which never reach radii much beyond $\\sim$$150\\,{\\rm km}$. Between 300 and 400$\\,{\\rm ms}$, the $20$-$\\,{\\rm M}_\\odot$ and $25$-$\\,{\\rm M}_\\odot$ models show jumps in the average shock radii, both in 1D and 2D. These jumps correspond to the accretion of the Si\/O interface in these models, wherein $\\dot{M}$ drops suddenly and the shock responds by moving outward. In 1-D, the jump is only about $10\\%$ of the pre-interface shock radius, but the effect is 2--3 times larger in 2D. Apparently and interestingly, the sensitivity to the accretion of shelves increases with shock radius. 
\n\nThe shock radii are only one aspect of the global structure that determines if and when explosions commence. The gain radius, $R_g(\\theta)$, is defined as the radius above which the net rate of energy deposition in the gas, integrated over all neutrino energies and summed over species, is positive. The gain region, bounded by the gain and shock radii, contains the mass where neutrino energy deposition, occurring at a rate\n\\begin{equation}\\label{eq:heating_rate}\n(\\mathcal{H} - \\mathcal{C})_{\\rm gain} = \\sum_s \\int d\\Omega \\int_{R_g(\\theta)}^{R_s(\\theta)} r^2 dr \\int_{\\varepsilon_{s}^{\\rm min}}^{\\varepsilon_{s}^{\\rm max}} (c\\kappa_{s,\\varepsilon} E_{s,\\varepsilon} - j_{s,\\varepsilon}) d\\varepsilon\\,,\n\\end{equation}\n is thought to lead to shock revival. In Eq.~\\ref{eq:heating_rate}, $s\\in\\left\\{\\nu_e,\\bar{\\nu}_e,\\nu_x\\right\\}$, $R_s(\\theta)$ is the angle-dependent shock radius, $\\varepsilon$ is neutrino energy, $\\kappa$ is the absorption cross section, $E$ is the neutrino energy density, and $j$ is the volume emissivity. As important is the cooling region where accreted material cools and settles onto the proto-neutron star, undermining the pressure support crucial to reviving the stalled shock. We define a net cooling rate analogous to the heating rate above as\n\\begin{equation}\n(\\mathcal{H} - \\mathcal{C})_{\\rm cool} = \\sum_s \\int d\\Omega \\int_{R_{\\nu_s}(\\theta)}^{R_g(\\theta)} r^2 dr \\int_{\\varepsilon_{\\nu_s}^{\\rm min}}^{\\varepsilon_{\\nu_s}^{\\rm max}} (c\\kappa_{s,\\varepsilon} E_{s,\\varepsilon} - j_{s,\\varepsilon}) d\\varepsilon\\,,\n\\end{equation}\nwhere $R_{\\nu_s}$ is the angle-, species-, and energy-dependent neutrinosphere radius defined implicitly by\n\\begin{equation}\n\\int_{R_{\\nu_s}}^\\infty \\kappa dr = \\frac{2}{3}\\,.\n\\end{equation}\nThis definition roughly accounts for all the optically-thin cooling in the region, but does not include diffusive flux from beneath the $\\tau=2\/3$ surface. We define an effective inner cooling radius as\n\\begin{equation}\nR_{\\rm in} = \\frac{\\sum_s \\sum_\\varepsilon |w_s| R_{\\nu_s}}{\\sum_s \\sum_\\varepsilon |w_s|}\\,,\n\\end{equation}\nwhere\n\\begin{equation}\nw_s = \\int_{R_{\\nu_s}}^\\infty (c\\kappa E - j) r^2 dr\n\\end{equation}\nis an integral of the net neutrino energy deposition along a column (at fixed $\\theta$) and should be understood to be angle-, species-, and energy-dependent. This inner cooling radius is related to the neutrinosphere radii of the dominant cooling agents. The region between $R_{\\rm in}$ and $R_g$ is cooling rapidly by optically-thin neutrino emission, predominantly through the $\\nu_e$ and $\\bar{\\nu}_e$ species, and represents at least one component of the accretion luminosity. Figure~\\ref{fig:radii} shows the evolutions of the shock, gain, and inner radii for our four 2D models. After a short-lived initial transient, the gain and inner radii generally recede in all four models, reflecting the contraction of the inner proto-neutron star.\n\n\\subsection{Diagnostic Quantities}\n\\label{diagnostics}\nThe evolutions of certain diagnostic quantities have proven useful in distilling insight from the complicated multi-D dynamics of pre-explosion supernova cores. The heating efficiency, $\\epsilon_h$, is defined as the ratio of the net heating in the gain region to the sum $L_{\\nu_e}+L_{\\bar{\\nu}_e}$. 
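In terms of the net heating rate of Eq.~\\ref{eq:heating_rate}, this is simply\n\\begin{equation}\n\\epsilon_h = \\frac{(\\mathcal{H} - \\mathcal{C})_{\\rm gain}}{L_{\\nu_e}+L_{\\bar{\\nu}_e}}\\,.\n\\end{equation}\n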
We also define an analogous cooling efficiency, $\\epsilon_c\\equiv-(\\mathcal{H}-\\mathcal{C})_{\\rm cool}\/(L_{\\nu_e}+L_{\\bar{\\nu}_e})$. Both efficiencies are shown in Fig.~\\ref{fig:efficiencies}. The heating efficiencies initially rise slowly and show broad peaks around $200\\,{\\rm ms}$ after bounce. The peak values range from 6\\% to 9\\%, monotonically increasing with progenitor mass. At low optical depths, the heating efficiency is equivalent to an effective optical depth, giving a heating rate $(L_{\\nu_e}+L_{\\bar{\\nu}_e}) (1-\\exp{(-\\tau_{\\rm eff})})\\approx (L_{\\nu_e}+L_{\\bar{\\nu}_e}) \\tau_{\\rm eff}$. The basic character of the heating efficiencies can be understood by a simple estimate of $\\tau_{\\rm eff}$. First, write $\\tau_{\\rm eff}\\sim\\langle \\rho\\rangle_g \\sigma \\Delta R$, where $\\langle \\rho\\rangle_g$ is the average density in the gain region, $\\sigma$ is a characteristic neutrino absorption cross section per unit mass, and $\\Delta R=\\langle R_s\\rangle - \\langle R_g\\rangle$. The average density is easily computed from the gain mass and the shock and gain radii. We write the cross section as $\\sigma\\approx\\sigma_0 (\\varepsilon\/\\varepsilon_0)^2\\sim\\sigma_0 (R_0\/R_{\\rm in})^2$, where the 0-subscript signifies some characteristic value. This last scaling is justified by the approximately inverse relationship between root-mean-square neutrino energy and inner cooling radius. Finally, we arrive at\n\\begin{equation}\\label{eq:tau}\n\\tau_{\\rm eff}\\sim \\frac{3\\sigma_0 M_g}{4\\pi} \\frac{R_s-R_g}{R_s^3-R_g^3} \\left(\\frac{R_0}{R_{\\rm in}}\\right)^2\\,.\n\\end{equation}\nThis estimate reproduces the heating efficiencies reasonably well, accounting, for example, for the slow decline in the $12$-$\\,{\\rm M}_\\odot$ model, the nearly constant value for the $15$-$\\,{\\rm M}_\\odot$ model, and the rise in the latter half of the $20$-$\\,{\\rm M}_\\odot$ and $25$-$\\,{\\rm M}_\\odot$ models. The virtue of this estimate is that it shows how the heating efficiencies depend on the global structure of the solutions.\n\nThe cooling efficiency $\\epsilon_c$ represents the fraction of the $\\nu_e$ and $\\bar{\\nu}_e$ luminosities arising from optically thin cooling. Since some of the accreted material advects into optically thick regions before cooling appreciably, the net cooling rate $(\\mathcal{H}-\\mathcal{C})_{\\rm cool}$ entering into the definition of $\\epsilon_c$ is not equal to the total accretion luminosity, which must be $\\approx GM_{\\rm pns}\\dot{M}_{\\rm pns}\/R_{\\rm pns}$ (pns$\\equiv$proto-neutron star), at least in an integral-averaged sense. Rather, $\\epsilon_c$ represents a response function to sudden changes in $\\dot{M}$. For example, if the accretion rate drops due to the accretion of a composition interface, the immediate response in the luminosity (delayed somewhat by the advection time between the shock and cooling region) will be to drop fractionally by approximately $\\epsilon_c\\, \\Delta\\dot{M}\/\\dot{M}_0$, where $\\Delta\\dot{M}=\\dot{M}_0-\\dot{M}$ and $\\dot{M}_0$ and $\\dot{M}$ are the pre- and post-interface accretion rates, respectively. The noticeable shelves accreted in the $12$-$\\,{\\rm M}_\\odot$, $20$-$\\,{\\rm M}_\\odot$, and $25$-$\\,{\\rm M}_\\odot$ models confirm this basic picture. 
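To give a purely illustrative example, if $\\epsilon_c$ were 0.5 and $\\dot{M}$ dropped by a factor of two at an interface, the prompt fractional decrease in $L_{\\nu_e}+L_{\\bar{\\nu}_e}$ would be only $\\approx$$25\\%$, even though the accretion power itself is halved. 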
Apparently, a low cooling efficiency may make a model prone to explosion when shelves are accreted since, though $\\dot{M}$ can drop appreciably, the luminosity responds only weakly, leaving the model closer to (or perhaps beyond) the critical luminosity for explosion \\citep{Burr13}. Evidently, this effect is not sufficient to drive explosions in our models.\n\nA variety of ``explosion conditions'' have been proposed in the literature. The most popular considers the ratio of the timescale for advection through the gain region to the heating timescale. Define the advection timescale as $\\tau_{\\rm adv} = M_{\\rm gain}\/\\dot{M}$ and the heating timescale as\n\\begin{equation}\n\\tau_{\\rm heat} = \\frac{\\int_{\\rm gain} (\\rho e - \\rho e_0) dV}{(\\mathcal{H}-\\mathcal{C})_{\\rm gain}}\\,,\n\\end{equation}\nwhere $e_0$ is approximately the zero-point of the EOS, i.e. $e_0=e(\\rho,Y_e,T_{\\rm min})$ where $T_{\\rm min}$ is the lowest temperature of the tabulated EOS ($10^{-2}$ MeV). The left panel of Fig.~\\ref{fig:crit_panel} shows the evolutions of this ratio for all four models. Surprisingly, the curves all show the same basic structure, peaking around $200\\,{\\rm ms}$ after bounce before slowly declining. Superimposed on these slowly varying curves, the $20$-$\\,{\\rm M}_\\odot$ and $25$-$\\,{\\rm M}_\\odot$ models show narrow peaks around $300\\,{\\rm ms}$ after bounce, corresponding to the accretion of the Si\/O interfaces in these models when $\\dot{M}$ drops abruptly by a factor of two.\n\nAnother interesting dimensionless quantity that arises naturally in discussing critical conditions for explosion \\citep[see, e.g.,][]{Dole13} is the ratio of heating in the gain region to the accretion power $\\mathcal{H}_g R_s\/(G M \\dot{M})$, shown in the right panel of Fig.~\\ref{fig:crit_panel}.\nComparing the left and right panels of Fig.~\\ref{fig:crit_panel}, it is clear that these dimensionless ratios have very similar behaviors. Writing the heating in the gain region $\\mathcal{H}_g = h_g M_g$, where $h_g$ is the specific heating rate and $M_g$ is the mass in the gain region, and noting that the zero-point-subtracted specific internal energy is of order $G M\/R_s$, it is easy to see that $\\mathcal{H}_g R_s\/(G M \\dot{M})\\sim \\tau_{\\rm adv}\/\\tau_{\\rm heat}$, so the similarities between the left and right panels of Fig.~\\ref{fig:crit_panel} are not surprising. Similarly, the ``antesonic'' condition, proposed by \\citet{Pejc12}, is easily related to these other conditions. Apparently, $\\tau_{\\rm adv}\/\\tau_{\\rm heat} \\sim \\mathcal{H}_g R_s\/(G M \\dot{M}) \\propto c_s^2\/v_{\\rm esc}^2$.\n\nWhile all three ratios are interesting diagnostics and can clearly indicate when explosions are underway, none seem to have a well-determined critical threshold above which explosions are inevitable. In particular, the value of one is not necessarily associated with transitioning to explosion. The critical values likely depend in detail on how these quantities are defined, and may be below or above one, but reasonable definitions seem to at least give critical values of order one. Of course, since we do not obtain explosions, we are unable to determine these critical values for our models and definitions.\n\n\\section{Comparisons with One-Dimensional Results}\n\\label{1D}\n\nMultidimensional effects seem to be crucial if the neutrino mechanism succeeds in producing explosions. A variety of effects may be important. 
The post-shock turbulent flow, driven by convective and\/or SASI activity, leads to longer dwell times in the gain region on average, exposing the accreted material to more net heating \\citep{Murp08,Dole13}. The turbulent flow itself gives rise to Reynolds stresses that can help support the shock \\citep{Murp13}. Convection in the proto-neutron star can advect neutrinos closer to their neutrinosphere, shortening their escape time and boosting the diffusive luminosity from the core \\citep{Dess06,Mull14}. Even the efficiency of cooling through the accretion luminosity can vary between dimensions. We have already explored the former two effects in parametrized setups \\citep{Dole13,Murp13}, so here we focus on characterizing how the neutrino luminosities and average energies differ between 1D and 2D models.\n\nFigure~\\ref{fig:le_1d2d} shows the evolutions of the $\\nu_e$ and $\\bar{\\nu}_e$ luminosities and root-mean-square neutrino energies, as measured in the laboratory frame at $500\\,{\\rm km}$ and defined by\n\\begin{equation}\n\\varepsilon_{\\rm rms} = \\sqrt{\\frac{\\int_{\\varepsilon_{\\rm min}}^{\\varepsilon_{\\rm max}} \\varepsilon^2 F_\\varepsilon d\\varepsilon}{\\int_{\\varepsilon_{\\rm min}}^{\\varepsilon_{\\rm max}} F_\\varepsilon d\\varepsilon}}\\, ,\n\\end{equation}\nfor all four progenitor models. Some differences between 1D and 2D seem to be model-independent. For example, the rms energies are systematically higher in 1D than in 2D by $\\sim$5--10\\%, consistent with results obtained by other groups \\citep{Bura06}. The luminosities, on the other hand, do not show such a generic difference. Around $100\\,{\\rm ms}$ post-bounce, the $20$-$\\,{\\rm M}_\\odot$ and $25$-$\\,{\\rm M}_\\odot$ models show deficits of up to $\\sim$$20\\%$ in $\\nu_e$ and $\\bar{\\nu}_e$ luminosities in 2D relative to 1D. Beyond $\\sim$$200\\,{\\rm ms}$ post bounce, all the 2D models tend to have higher $\\nu_e$ and $\\bar{\\nu}_e$ luminosities by $\\sim$$5\\%$, likely an effect of Ledoux convection in the proto-neutron star, driven by the destabilizing lepton gradient \\citep{Bura06,Dess06}.\n\nThe idea of a critical luminosity for explosion at a given mass accretion rate was first discussed in \\citet{Burr93}. \\citet{Murp08} later showed, in the context of parametrized modeling, that the critical luminosity is lower in 2D than in 1D, a result confirmed by other studies \\citep{Hank13,Dole13,Couc13_2d3d}. Unfortunately, the critical curves that emerged from these studies are not easily adapted to self-consistent radiation hydrodynamic models. One reason is that the critical luminosity almost certainly depends on quantities other than $L_{\\nu_e}$ and $\\dot{M}$. For example, it may depend on the proto-neutron star mass and radius and\/or the shock radius, as suggested in the discussion of the dimensionless quantities in Section~\\ref{diagnostics}. It also depends on the structure of the flow beneath the gain layer, which can be quite different between parametrized and self-consistent models. Nevertheless, the lower critical luminosity in 2D very likely remains in self-consistent models.\n\nIn Fig.~\\ref{fig:crit}, we show the evolutions in the $L_{\\nu_e}$-$\\dot{M}$ plane for all four progenitors in both 1D and 2D. All of the models show relatively flat curves until late times, indicating that the neutrino luminosities remain roughly constant while the accretion rate drops. 
It is during this phase that a crossing of the critical luminosity curve (and a subsequent explosion) seems most likely, though this does not occur in our models. As in Fig.~\\ref{fig:le_1d2d}, the 2D models tend to show higher $\\nu_e$ luminosities at late times relative to 1D. Naively, this might suggest that 2D models are easier to explode not only because the critical curve is lower, but also because 2D models tend to have higher luminosities at a given mass accretion rate. Of course, this argument neglects other important differences between 1D and 2D, for example the higher rms neutrino energies in 1D. Rewriting Eq.~\\ref{eq:tau} for the effective optical depth (or, equivalently, the heating efficiency) with $M_g=\\dot{M} \\tau_{\\rm adv}$ and $R_0\/R_{\\rm in}=\\varepsilon\/\\varepsilon_0$, we arrive at\n\\begin{equation}\n\\frac{\\tau_{\\rm eff}^{\\rm 2D}}{\\tau_{\\rm eff}^{\\rm 1D}} \\sim \\left(\\frac{\\tau_{\\rm adv}^{\\rm 2D}}{\\tau_{\\rm adv}^{\\rm 1D}}\\right) \\left[ \\left(\\frac{R_s-R_g}{R_s^3-R_g^3}\\right)^{\\rm 2D} \\left(\\frac{R_s^3-R_g^3}{R_s-R_g}\\right)^{\\rm 1D} \\right] \\left(\\frac{\\varepsilon_{\\rm rms}^{\\rm 2D}}{\\varepsilon_{\\rm rms}^{\\rm 1D}}\\right)^2\\;.\n\\end{equation}\nThe first term on the right-hand side is the ratio of the advection timescales, which favors 2D. The second term (in square brackets) favors 1D since the gain volume is typically much larger in 2D. As we show in Fig.~\\ref{fig:le_1d2d}, the last term, given by the square of the ratio of rms neutrino energies, favors 1D. Evidently, though some effects in 2D tend toward lower heating efficiencies, these are overwhelmed by the different advection timescales, leading to higher heating efficiencies in 2D. Now consider the diagnostic given by the ratio of heating in the gain region to accretion power, which Fig.~\\ref{fig:crit_panel} shows is intimately related to the ratio of advection to heating timescales. It is easy to show that the 2D to 1D ratio of this diagnostic is\n\\begin{equation}\n\\left(\\frac{H R_s}{G M \\dot{M}}\\right)^{\\rm 2D}\\left\/\\left(\\frac{H R_s}{G M \\dot{M}}\\right)^{\\rm 1D}\\right. \\approx \\left(\\frac{L_{\\nu_e}^{\\rm 2D}}{L_{\\nu_e}^{\\rm 1D}}\\right) \\left(\\frac{\\tau_{\\rm eff}^{\\rm 2D}}{\\tau_{\\rm eff}^{\\rm 1D}}\\right) \\left(\\frac{R_s^{\\rm 2D}}{R_s^{\\rm 1D}}\\right)\\;.\n\\end{equation}\nAt least after the first 100--200$\\,{\\rm ms}$, all of these terms favor 2D, making 2D models much easier to explode than corresponding 1D models.\n\nIn the 1D and 2D models discussed thus far, we have ignored inelastic scattering processes. Figure~\\ref{fig:inelastic} shows results from two 1D simulations of the $15$-$\\,{\\rm M}_\\odot$ progenitor that differ only in whether inelastic scattering on electrons is included. The most marked difference is in the rms energy of the $\\nu_x$ species, which is lower by about $5\\,{\\rm MeV}$ when inelastic scattering is included. In principle, depositing $5\\,{\\rm MeV}$ per $\\nu_x$ neutrino represents a significant source of heating, but this energy deposition occurs mainly at large optical depths for the $\\nu_e$ and $\\bar{\\nu}_e$ species, so the net effect is substantially muted. For example, the $\\nu_e$ and $\\bar{\\nu}_e$ luminosities are higher by only 3--4\\% and the rms neutrino energies are higher by only $\\sim$$0.5\\%$ in the model with inelastic scattering. 
The $\\nu_x$ luminosity with inelastic scattering adjusts so that the total neutrino luminosity, summed over all species, is nearly identical to that in the model without inelastic scattering, with fractional differences typically less than $0.1\\%$. Thus, though inelastic scattering on electrons does lead to small changes in the luminosities and average energies of each species, the net effect on the structure as a whole is quite small. For example, the shock radii shown in Fig.~\\ref{fig:inelastic} are almost identical, differing by at most $2\\%$ and typically less than $1\\%$.\n\n\n\\section{Angular and Temporal Variations in the Neutrino Sector}\n\\label{angle}\n\n\\subsection{A Criticism of ``Ray-by-ray''}\nOne of the motivations for carrying out this study is to compare our MGFLD neutrino transport results with those obtained with the \\textit{de facto} standard ``ray-by-ray'' formulation currently used by all other groups carrying out radiation hydrodynamic simulations of core-collapse supernovae. \\citet{Ott08} carried out both MGFLD and full multi-angle transport calculations and found that, though MGFLD is only approximate in the semi-transparent and free-streaming regimes, the two methods actually agree quite well in the supernova context. No such comparison with ``ray-by-ray'' has yet been discussed \\citep[but see][for a comparison of multi-angle and ``ray-by-ray'' transport results based on time-independent snapshots of multidimensional simulations]{Sumi14}. Here, we take a small step towards this comparison, one that captures one of our main criticisms of the technique --- the unphysically high degree of angular and temporal correlation between the matter and neutrino radiation fields.\n\nAn argument often made in the core-collapse community is that the spatio-temporal variations in the flow effectively average out the error made in employing the ``ray-by-ray'' approach \\citep{Bura06a,Mezz14}. However, both \\citet{Bura06a} and \\citet{Mezz14} acknowledge that such averaging can only be approximate and that artifacts may remain in ``ray-by-ray'' calculations. For example, \\citet{Bura06a}, in an attempt to address this concern, post-processed their ``ray-by-ray'' models by recomputing the heating in the gain region using an angularly averaged radiation field. When integrated over the volume of the gain region and also time-averaged, they found good agreement between the ``ray-by-ray'' and angularly-averaged ``ray-by-ray'' heating rates \\citep[see also][]{Sumi14}. Beyond this zeroth-order comparison, however, larger differences begin to emerge. For example, \\citet{Bura06a} find that the heating of downflows and high-entropy bubbles, when analyzed separately, shows differences larger than 10\\% in the time-averaged heating rates as computed by ``ray-by-ray'' and angularly-averaged calculations. This finding strongly reinforces our concern about the artificially high degree of correlation between matter and neutrino sectors. Importantly, the most crucial question with regard to ``ray-by-ray,'' how it affects the time-dependent hydrodynamic response of the system, remains open. We do not attempt to address that here and therefore make no claim that we have proven that ``ray-by-ray'' calculations are wholly wrong. 
Instead, we aim only to show some ways in which ``ray-by-ray'' calculations fall short, and to suggest that these shortcomings may make the current crop of supernova modeling efforts in the community difficult to reconcile.\n\nTo that end, we consider a snapshot from our 2D simulation of the $12$-$\\,{\\rm M}_\\odot$ model $604\\,{\\rm ms}$ after bounce (i.e., at the end of the run). We extract the energy integrated fluxes of the electron neutrinos at $250\\,{\\rm km}$ as computed by CASTRO's MGFLD solver. With this snapshot, we do two more transport calculations, ignoring velocity- and time-dependence and scattering. The first calculation gives the full angle- and energy-dependent specific intensities at discrete latitudes at $250\\,{\\rm km}$. We then integrate over energy and take the first moment to get the fluxes. Finally, we compute the angle- and energy-dependent specific intensities under the ``ray-by-ray'' assumption, then integrate over energy and take the first moment to recover the ``ray-by-ray'' fluxes. We describe our techniques for the multi-angle and ``ray-by-ray'' calculations in Appendix~\\ref{appendix}. The simulation snapshot and the results of these transport calculations are shown in Fig.~\\ref{fig:rbr_comp}. The MGFLD and full multi-angle transport results agree quite well in the general character of the angular variation, despite the neglect of velocity- and time-dependence and of scattering in the simplified multi-angle calculation. The variation is smooth and has a small amplitude. Though the low-$\\ell$ modes have similar amplitudes compared with the other two schemes, the ``ray-by-ray'' results show much more intermediate- to high-$\\ell$ power, manifest as wild variations that close inspection reveals are highly correlated with structures in the flow. The peak-to-peak variation in the ``ray-by-ray'' result is about a factor of five larger than for MGFLD and about a factor of four larger than for multi-angle transport. While this may already be a concern, the most serious failing of the ``ray-by-ray'' approach is that these large variations are tightly correlated with hydrodynamic structures along each ray. The neutrino radiation field is produced by an integral over many sources at depth, an effect which can only be truly captured by doing full multi-angle transport and seems reasonably well approximated by MGFLD, but which a ``ray-by-ray'' approach can never reliably reproduce. The ``ray-by-ray'' flux depends only on the profile along a given radial ray, introducing large variations which should be washed away by the integral character of transport and which are highly correlated with the hydrodynamic variations of the flow. We note that our choice of measuring the fluxes at $250\\,{\\rm km}$ was motivated by the desire to include transport through the entire gain region, but choosing a radius far from the neutrinosphere tends to emphasize the differences between the schemes. However, one must measure the fluxes very close to the neutrinosphere for the methods to show comparable fluctuation, and even then the ``ray-by-ray'' scheme exaggerates the variation, though to a lesser degree than when viewed at $250\\,{\\rm km}$.\n\n\n\\subsection{Angular Variations}\n\\label{angular_variations}\nAs has been discussed previously, the angular variations of the neutrino radiation field are muted relative to variations in the matter sector \\citep{Bran11}. 
In Fig.~\\ref{fig:shock_lum_corr}, we show the normalized standard deviations of the shock radii and $\\nu_e$-fluxes (at $500\\,{\\rm km}$) as a function of time. We define the standard deviation ($\\sigma$) normalized by the mean ($\\mu$) of quantity $Q$ as \n\\begin{equation}\n\\frac{\\sigma}{\\mu} = \\frac{1}{\\langle Q\\rangle}\\left(\\frac{\\int_0^\\pi (Q(\\theta)-\\langle Q(\\theta)\\rangle)^2 \\sin\\theta d\\theta}{\\int_0^\\pi \\sin\\theta d\\theta}\\right)^{1\/2}\\;,\n\\end{equation}\nwhere the angle brackets indicate solid-angle averaging. Typically, the fractional angular variations in the shock radii are about an order-of-magnitude larger than the corresponding variations in the fluxes. We also show the cross-correlations of the shock radii and $\\nu_e$-fluxes, defined by\n\\begin{equation}\nCorr(R_s,F_{\\nu_e}) = \\frac{1}{2}\\int_0^\\pi \\frac{(R_s - \\langle R_s \\rangle)(F_{\\nu_e}-\\langle F_{\\nu_e}\\rangle)}{\\sigma_{R_s} \\sigma_{F_{\\nu_e}}} \\sin\\theta d\\theta\\;,\n\\end{equation}\nwhere $\\sigma$ is the standard deviation, computed as above, but without the normalization by the mean. All the models show a positive correlation on average, as indicated by the gray dashed lines in the figure. The $20$-$\\,{\\rm M}_\\odot$ and $25$-$\\,{\\rm M}_\\odot$ models show a period $\\sim$$100\\,{\\rm ms}$ long of consistently high correlation. Interestingly, these phases follow immediately after the accretion of the significant Si\/O interfaces in these models, where the accretion rates drop by a factor $\\approx$2. Importantly, however, though the shock and $\\nu_e$-fluxes are correlated in all models, the amplitude of the $\\nu_e$-flux variation is much smaller than one would find with a ``ray-by-ray'' calculation, so the degree to which these variations couple may be quite different than in ``ray-by-ray'' calculations \\citep{Burr13}.\n\n\\citet{Tamb14} recently reported systematic asymmetries in the net lepton-number emission in several 3D models and dubbed this feature LESA (Lepton-number Emission Self-sustained Asymmetry). To investigate whether we see a similar phenomenon, we decompose the lepton-number flux $F^n_{\\nu_e} - F^n_{\\bar{\\nu}_e}$ of our $12$-$\\,{\\rm M}_\\odot$ model ($n$ indicates number instead of energy flux) into spherical harmonics and focus on the evolution of the normalized dipolar component $a_1\/a_0$. In the model described in \\citet{Tamb14}, deviations from spherical symmetry in the lepton-number flux are intimately related to corresponding deviations in the shock structure. In the bottom panel of Fig.~\\ref{fig:lesa}, we show the evolutions of $a_1\/a_0$ for the lepton-number flux and the shock surface. For ease of comparison, we have multiplied $a_1\/a_0$ for the lepton-number flux by five. We see no sign of a strong correlation between the two quantities. Indeed, the normalized cross-correlation as a function of time lag shows a broad feature around zero offset, with a modest peak value of $\\sim$0.3 (1 indicates perfectly correlated signals) at $-21\\,{\\rm ms}$, comparable to the advection timescale. Importantly, the magnitude of the asymmetry is also about an order-of-magnitude smaller than shown in \\citet{Tamb14}. In short, we find no evidence for LESA in our models, but they are limited to 2D and more work is needed before a final judgment on the existence of LESA can be made. 
On the other hand, in our 2D simulations we do find that the dipolar component of the \\textit{sum} $F_{\\nu_e}+F_{\\bar{\\nu}_e}$ (and also the sum of number fluxes) is highly correlated with the dipolar component of the shock, as shown in the top panel of Fig.~\\ref{fig:lesa}. In this panel, $a_1\/a_0$ of $F_{\\nu_e}+F_{\\bar{\\nu}_e}$ is shown, multiplied by ten for ease of comparison with $a_1\/a_0$ of the shock.\n\n\\subsection{Temporal Variations}\n\\label{temporal}\nAn interesting diagnostic of the temporal variations in the neutrino sector is the power spectrum of the signal expected to be detected from a galactic supernova. Following \\citet{Lund10}, we compute the event rate expected for the IceCube detector at 256 observer orientations for our $25$-$\\,{\\rm M}_\\odot$ model, assumed to be at a distance of 10 kpc. We compute the power spectra of these signals and then solid-angle average them to produce the power spectrum shown in Fig.~\\ref{fig:nu_pspec}. Comparing with Fig.~5 of \\citet{Lund10}, we see remarkable agreement between our power spectrum and the ``north hemispheric average'' reported in that work. Interestingly, their power spectrum produced by averaging the spectra from individual rays, in a manner similar to what we have done, shows significantly more power, particularly at high frequencies. Their hemispheric averaging is meant to mask the inherent error in their ``ray-by-ray'' transport by mimicking the angular integral character of transport. Evidently, our MGFLD transport naturally produces power spectra that agree quite nicely with their hemisphere-averaged results. By contrast, the ``ray-by-ray'' technique clearly leads to significant overestimates of the variability in the quantity $L_{\\bar{\\nu}_e} \\varepsilon_{\\rm rms}^2$, particularly at high frequencies. This quantity bears directly on the energy deposition in the gain region, and so directly on the viability of the neutrino-driven supernova mechanism.\n\nOne interesting aspect of the power spectrum that was not addressed by \\citet{Lund10} (or the follow-on references \\citealt{Lund12} and \\citealt{Tamb13}) is its exponential shape. The dashed line in Fig.~\\ref{fig:nu_pspec} is a fit of the power spectrum above $50\\,{\\rm Hz}$ with $P(f)=P_0 \\exp (-4\\pi f\\tau)$, where $P_0$ and $\\tau$ are free parameters. The resulting fit seems to describe our results quite well, and appears to be consistent with what others have found. At least one way to produce an exponential power spectrum is with Lorentzian pulses of the form\n\\begin{equation}\nL(t) \\propto \\frac{\\gamma}{(t-t_0)^2 + \\gamma^2}\\;,\n\\end{equation}\nwhere $t_0$ is the time of the pulse and $\\gamma$ is the half-width at half-maximum. In Fig.~\\ref{fig:lorentz}, we show a fake signal, produced as a constant plus 500 Lorentzian pulses\\footnote{The number of pulses only affects the normalization, not the e-folding timescale of the exponential.} at randomly chosen times and with randomly chosen amplitudes. The power spectrum, shown below the signal, has an exponential dependence on frequency, with an e-folding timescale of $4\\pi\\gamma$. So, one interpretation of the exponential power spectrum of our modeled IceCube signal is that the signal is composed of many approximately Lorentzian pulses with a typical full-width at half-maximum timescale, extracted from our fit, of $2\\tau=2.3\\,{\\rm ms}$. This represents very rapid variability. 
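That a train of Lorentzian pulses yields an exponential power spectrum follows from a short calculation: for a single pulse of the above form, the Fourier transform is\n\\begin{equation}\n\\int_{-\\infty}^{\\infty} \\frac{\\gamma}{(t-t_0)^2 + \\gamma^2}\\, e^{-2\\pi i f t}\\, dt = \\pi\\, e^{-2\\pi i f t_0}\\, e^{-2\\pi \\gamma |f|}\\;,\n\\end{equation}\nso the power spectrum of each pulse is proportional to $e^{-4\\pi \\gamma |f|}$. For pulses at random times, the cross terms carry random phases and largely average out, leaving the same exponential envelope; identifying the fitted $\\tau$ with $\\gamma$ then gives the full-width at half-maximum $2\\tau$ quoted above. 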
A simplistic picture of how this timescale may emerge is of blobs cooling as they advect through the cooling region. In this picture, we adopt the simple scaling $L_{\\rm blob} \\varepsilon_{\\rm rms}^2\\sim T^8$ and estimate the timescale $2\\tau\\sim(8 v_r d\\ln T\/dr)^{-1}$, with $v_r$ the radial velocity. Using the velocity and temperature gradient in the cooling region shown in Fig.~\\ref{fig:rbr_comp}, our estimate yields $2\\tau\\sim2.5\\,{\\rm ms}$, in surprisingly good agreement with the measured value of $2.3\\,{\\rm ms}$.\n\n\n\n\\section{Summary and Conclusions}\n\\label{sum}\n\nUsing our new multi-group, multi-dimensional radiation hydrodynamics code CASTRO, which incorporates all terms to $\\mathcal{O}(v\/c)$ in the transport and does not make the ray-by-ray approximation employed by all other groups now modeling core-collapse supernovae, we have simulated in two spatial dimensions the dynamics of four progenitor massive star models. One goal was to determine, using a different code, whether the outcome of our previous simulations using the VULCAN\/2D methodology \\citep{Burr06,Burr07,Ott08} depended upon the absence of the $\\mathcal{O}(v\/c)$ terms in VULCAN\/2D. We have determined that the results are qualitatively the same and, as when employing VULCAN\/2D, we do not see explosions by the neutrino heating mechanism even $\\sim$600 milliseconds after bounce. Both codes perform two-dimensional transport, though using a multi-group flux-limited diffusion (MGFLD) formulation. This conclusion concerning the overall outcome of these models (i.e., the absence of neutrino-driven explosions in 2D) is in contrast with the results of \\citet{Brue13} and Janka et al. (Nuclear Astrophysics Workshop, Ringberg Castle, 2014), who also do not agree with one another, but who do obtain neutrino-driven explosions in some or all of their 2D simulations.\n\nOne is left to ponder the reasons for these remaining differences in the community of researchers engaged in detailed simulations of the core-collapse supernova phenomenon. We have demonstrated that the ray-by-ray approach does not reproduce the correct angular and temporal neutrino field variations, though no one has yet performed the head-to-head ray-by-ray versus correct transport comparisons needed to definitively clarify the impact of the ray-by-ray approximation. We speculate, however, that the combination of the ray-by-ray approach with the artificiality of the axial sloshing effects manifest in 2D simulations may be the reason the groups using ray-by-ray obtain explosions in 2D (when they do).\n\nWhile the ray-by-ray approximation is clearly suspect, there are other differences that may prove to play an important role in producing the range of findings in the community. One might suspect that differences in the neutrino interaction physics may play an important role, but our experimentation indicates that the numerous hydrodynamic, thermal, and radiative feedbacks in the core-collapse problem mute the effects of even large changes in the neutrino-matter cross sections and associated emissivities on the dynamic evolution after collapse. In 1D test calculations we have performed in which the $\\nu_{e}$$-$neutron absorption cross section was changed by a factor of two (both increased and decreased), the resulting stalled shock radii were the same to within a few percent. 
Some recent calculations suggest there may be some sensitivity to the choice of equation of state (EOS), with calculations using the Lattimer and Swesty EOS tending to explode more easily than those using the Shen EOS \\citep{Janka12,Suwa13,Couch13}. Since both the present study and the VULCAN\/2D studies used the Shen EOS and failed to explode, it may prove illuminating to repeat some of these calculations with the Lattimer and Swesty EOS. The effects of general relativity (GR) and the differing fidelity with which they are included in calculations may also contribute \\citep[e.g.][]{Mull12}, but note that GR seems not to be generally requisite for explosions, as demonstrated by the 2D $27$-$\\,{\\rm M}_\\odot$ models reported in \\citet{Mull12_sasi}, which included GR, and in \\citet{Hank13}, which used a monopolar gravity approximation with mock GR corrections, both of which nevertheless transitioned to explosion at nearly the same post-bounce time. The marked difference between 1D and 2D in the early evolution of the shock radius in our models, which we attribute to a vigorous burst of prompt convection seeded by perturbations from our aspherical grid, may also be a concern, but we would expect the memory of this defect to be lost within a few dynamical times ($<100\\,{\\rm ms}$) as the system dynamically relaxes to a quasi-steady configuration. Differences in the transport algorithms (apart from the ray-by-ray versus multi-D transport issue) could be to blame, and code-to-code comparisons are called for. This was one early motivation for embarking upon this study with CASTRO---to see whether the outcomes were different from those we obtained using VULCAN\/2D. But more inter-group comparisons, not just intra-group comparisons, are needed.\n\nThe fact that the 3D simulations of the Garching group are not exploding when they did in 2D \\citep{Hank13} should be a wake-up call to the community to determine the origins of these differences between the various simulation groups and between 2D and 3D. As we have suggested, the use of the ray-by-ray approach is dubious, and since its artificial character is more manifest in 2D we suspect that it is part of the problem. However, this does not explain the current conundrum in 3D---something else may be amiss. It could be that the progenitor models are to blame and that a new generation of such models, performed in 3D at the terminal stages of a massive star's life, is needed \\citep{Meak11}. It could be that rotation, even the modest rotation expected from the pulsar injection constraint \\citep{Emme89}, converts duds into explosions by the resultant centrifugal support and the consequent expected boost in the stalled shock radius. This is the simplest solution, and one is reminded that the exploding model of \\citet{Mare09} was rotating. Both large-scale and turbulent magnetic fields could play a role, through the associated stress, but also due to enhanced angular momentum transport from the core to the mantle \\citep[e.g.][]{Sawa14}. However, without very rapid rotation, which might be associated with the rare hypernovae \\citep{Burr07mhd} that serve as a bridge to the collapsar model of long-soft gamma-ray bursts, there would not seem to be enough extra free energy to power explosions generically. Perturbations of the progenitor cores that collapse have never been properly incorporated into supernova theory, and might be a fruitful line of investigation \\citep{Couc13}. 
Such perturbations seed the instabilities long identified with more robust dynamics and the viability of the delayed neutrino mechanism.\n\nWhatever the solution to this recalcitrant problem, advances in the numerical arts seem destined to play a central role. Approximations have been made by all groups to accommodate the limitations of the available computer resources, leaving one to wonder whether such compromises have corrupted the results. One would hope that simple, compelling reasoning, and physical insight could in the end lead to a solution. This has happened before in astrophysics. However, the complexity of the dynamics, the fact that the explosion energy is a small fraction of the available energy, and the circumstance that the central ``engine'' is shrouded in mystery by the profound opacity of the stellar envelope, and, hence, is itself (almost) inaccessible to direct observation or measurement, may mitigate against a breakthrough unaided by computation.\n\n\n\\acknowledgments\n\nThe authors acknowledge conversations and collaborations with Jeremiah Murphy, \nChristian Ott, Stan Woosley, Ann Almgren, John Bell, and Louis Howell.\nThe development of the CASTRO code was supported by the Scientific Discovery through\nAdvanced Computing (SciDAC) program of the DOE, under grant number DE-FG02-08ER41544,\nthe NSF under the subaward no. ND201387 to the Joint Institute for Nuclear Astrophysics (JINA, NSF PHY-0822648),\nand the NSF PetaApps program, under award OCI-0905046 via a subaward\nno. 44592 from Louisiana State University to Princeton University.\nThe authors employed computational resources provided by the TIGRESS\nhigh performance computer center at Princeton University, which is jointly supported by the Princeton\nInstitute for Computational Science and Engineering (PICSciE) and the Princeton University Office of\nInformation Technology; by the National Energy Research Scientific Computing Center\n(NERSC), which is supported by the Office of Science of the US Department of\nEnergy under contract DE-AC03-76SF00098; and on the Kraken supercomputer,\nhosted at NICS and provided by the National Science Foundation through\nthe TeraGrid Advanced Support Program under grant number TG-AST100001.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nNonlinear parabolic equations of the form \n\\begin{equation} \\label{eq:pde}\n\\partial u\/\\partial t = \\nabla\\cdot \\bigl(D(u,\\nabla u)\\nabla u\\bigr)\\quad\\text{on }\\Omega\\times (0,T),\n\\end{equation}\nequipped with suitable boundary and initial conditions, are frequently encountered in applications. If the diffusion constant $D(u,\\nabla u)$ vanishes for some values of $u$ and $\\nabla u$, i.e., the equation is degenerate, one obtains a quite different dynamics compared to the linear case. The two main nonlinear features are finite speed of propagation and the absence of parabolic smoothening of the solution. Concrete applications can, e.g., be found when modelling gas flow through porous media, phase transitions and population dynamics. A survey of such applications is given in \\cite[Section~1.3 and Chapter~2]{Vasquez.2007}. In order to keep the presentation as clear-cut as possible, we will mostly ignore the presence of lower-order advection and reactions terms. \n\nApproximating the solution of a partial differential equation typically results in large-scale computations, which require the usage of parallel and distributed hardware. 
One possibility to design numerical schemes that make use of such hardware is to decompose the equation's domain into a family of subdomains. The domain decomposition method then consists of an iterative procedure where, in every step, the equation is solved independently on each subdomain and the resulting solutions are thereafter communicated to the adjacent subdomains. This independence of the decomposed equations and the absence of global communication enables the parallel and distributed implementation of domain decomposition methods. For linear parabolic equations the common procedure is to first discretize the equation in time by a standard implicit integrator. Then an elliptic equation on $\\Omega$ is obtained in every time step, which is iteratively solved by a domain decomposition based discretization. We refer to the monographs~\\cite{Mathew.2008,QuarteroniValli.1999,ToselliWidlund.2005} for an in-depth treatment of this approach. Another possibility is to apply the domain decomposition method to the full space-time domain $\\Omega\\times (0,T)$, which leads to an iterative procedure over parabolic problems that can be parallelized both in space and time; see, e.g., \\cite{Gander.1999,GanderHalpern.2007,GiladiKeller.2002}. \n\nWhen considering nonlinear parabolic problems one finds that there are hardly any results concerning the analysis of domain decomposition based schemes. Two exceptions are the papers~\\cite{KimEtal.2000,Lapin.1991}, where domain decomposition schemes are analyzed for non-degenerate quasilinear parabolic equations and the degenerate two-phase Stefan problem, respectively. The lack of results in the context of degenerate equations is rather surprising from a practical point of view, as the equations' finite speed of propagation is ideal for applying domain decomposition strategies. For example, a solution that is initially zero in parts of the domain $\\Omega$ will in each time step only propagate to a small number of neighboring subdomains, which limits the computational work considerably. However, from a theoretical perspective the lack of convergence results is less surprising. The issue is that the standard domain decomposition schemes all link together the equations on the subdomains via boundary conditions. As the solutions of degenerate parabolic equations typically lack higher-order regularity, making sense of such boundary linking is, at the very least, challenging.\n\n\\begin{figure}\n \\centering \n \\includegraphics[scale=0.6]{Fig1.pdf}\n \\includegraphics[scale=0.6]{Fig2.pdf}\n\\caption{Examples of overlapping domain decompositions $\\{ \\Omega_{\\ell} \\}_{\\ell =1}^{s}$ of a domain $\\Omega\\subset \\mathbb{R}^{2}$,\nwith $s=4$ subdomains (left) and $s=2$ subdomains that are further decomposed into families of pairwise disjoint sets (right), respectively.}\n\\label{fig:dom}\n\\end{figure}\n\nIn order to remedy this, we propose to directly introduce the domain decomposition in the time integrator via an operator splitting procedure. More precisely, let~$\\{ \\Omega_{\\ell} \\}_{\\ell =1}^{s}$ be an overlapping decomposition of the spatial domain $\\Omega$, as exemplified in Figure~\\ref{fig:dom}. 
On these subdomains we introduce the partition of unity $\\{ \\chi_{\\ell} \\}_{\\ell =1}^{s}$ and the operator decomposition, or splitting, \n\\begin{equation} \\label{eq:opdecomp}\nfu= \\nabla\\cdot \\bigl(D(u,\\nabla u)\\nabla u\\bigr)= \\sum_{\\ell=1}^{s}\\nabla\\cdot \\bigl(\\chi_{\\ell} D(u,\\nabla u)\\nabla u\\bigr) = \\sum_{\\ell=1}^{s} f_{\\ell}u.\n\\end{equation}\nTwo possible (formally) first-order integrators are then the sum splitting \n\\begin{equation}\\label{schemes:sumpre}\n\\left\\{\n\\begin{aligned}\n&v_{\\ell} =u_{n}+ sh f_{\\ell} v_{\\ell}, \\quad \\ell=1,\\ldots, s,\\\\\n& u_{n+1} = \\sfrac1s\\sum_{\\ell=1}^{s} v_{\\ell},\n\\end{aligned}\n\\right.\n\\end{equation}\nwhich represents a ``quick and dirty'' scheme that is straightforward to parallelize, and the Lie splitting \n\\begin{equation}\\label{schemes:liepre}\n\\left\\{\n\\begin{aligned}\n& v_{0} =u_{n},\\\\\n& v_{\\ell} =v_{\\ell-1}+ h f_{\\ell} v_{\\ell}, \\quad \\ell=1,\\ldots, s,\\\\\n& u_{n+1} = v_{s},\n\\end{aligned}\n\\right.\n\\end{equation}\nwhich is usually more accurate but requires a further partitioning of the subdomains $\\Omega_\\ell$ in order to enable parallelization, as illustrated in Figure~\\ref{fig:dom}. In contrast to the earlier domain decomposition based schemes, where an iterative procedure is required with possibly many instances of boundary communications, one time step of either splitting scheme only needs the solution of $s$ elliptic equations together with the communication of the data related to the overlaps. Similar splitting schemes have, e.g., been considered in the papers \\cite{Arraras.2015,Hansen.2016,Mathew.1998,Vabishchevich.2013} when applied to linear, and to some extent semilinear, parabolic problems. However, there does not seem to be any analysis applicable to degenerate, or even quasilinear, parabolic equations in the literature.\n\nHence, the goal of this paper is twofold. First, we aim to derive a new energetic, or variational, framework that allows a proper interpretation of the operator decomposition~\\eqref{eq:opdecomp} for two commonly occurring families of degenerate parabolic equations. These are the $p$-Laplace type evolutions, where the prototypical example is given by $D(u,\\nabla u)=|\\nabla u|^{p-2}$, and the porous medium type equations, where $D(u,\\nabla u)=(p-1)|u|^{p-2}$ in the simplest case. For the porous medium application we will use the strategic reformulation \n\\begin{equation*} \nfu=\\Delta \\alpha (u)=\\sum_{\\ell=1}^{s}\\Delta \\bigl(\\chi_{\\ell}\\alpha (u)\\bigr)=\\sum_{\\ell=1}^{s} f_{\\ell}u\n\\end{equation*}\nof the decomposition~\\eqref{eq:opdecomp}, in order to enable an energetic interpretation. \n\nSecondly, we will strive to obtain a general convergence analysis for the domain decomposition based time integrators, including the sum and Lie splitting schemes. The main idea of the convergence analysis is to introduce the nonlinear Friedrich extensions of the operators $f$ and $f_{\\ell}$, via our new abstract energetic framework, and then to employ a Lax-type result from the nonlinear semigroup theory~\\cite{BrezisPazy.1972}.\n\n\\section{Function spaces}\\label{sec:func}\n\nThroughout the analysis $\\Omega \\subset \\mathbb{R}^d$, $d\\geq 1$, will be an open, connected and bounded set and the parameter $p\\in (1,\\infty)$ is fixed. Next, let $\\{ \\Omega_{\\ell} \\}_{\\ell =1}^{s}$ be a family of overlapping subsets of $\\Omega$ such that $\\bigcup_{\\ell =1}^s \\Omega_{\\ell} = \\Omega$ holds. 
Here, each $\\Omega_{\\ell}$ is either an open connected set, or a union of pairwise disjoint open, connected sets $\\Omega_{\\ell,k}$ such that $\\bigcup_{k=1}^{r}\\Omega_{\\ell,k} = \\Omega_{\\ell}$. On $\\{ \\Omega_{\\ell} \\}_{\\ell =1}^{s}$ we introduce the partition of unity $\\{ \\chi_{\\ell} \\}_{\\ell =1}^{s}\\subset C^{\\infty}(\\Omega)$ such that\n\\begin{align*}\n\\chi_{\\ell} (x)>0\\text{ for all }x\\in\\Omega_{\\ell},\\quad \\chi_{\\ell} (x) = 0\\text{ for all }x\\in\\Omega\\setminus\\Omega_{\\ell}\\quad \\text{and} \\quad \\sum_{\\ell =1}^{s} \\chi_{\\ell}= 1.\n\\end{align*} \nFor details on the construction of explicit domain decompositions $\\{ \\Omega_{\\ell} \\}_{\\ell =1}^{s}$ and partitions of unity $\\{ \\chi_{\\ell} \\}_{\\ell =1}^{s}$ we refer to \\cite[Section~3.2]{Arraras.2015} and \\cite[Section~4.1]{Mathew.1998}.\n\nThe related weighted Lebesgue space $L^p(\\Omega_{\\ell},\\chi_{\\ell})$ can now be defined as the set of all measurable functions $u$ on $\\Omega_{\\ell}$ such that the norm\n\\begin{align*}\n\\|u\\|^p_{L^p(\\Omega_{\\ell},\\chi_{\\ell})} = \\int_{\\Omega_{\\ell}}\\chi_{\\ell} |u|^p \\,\\mathrm{d}x\n\\end{align*}\nis finite. The space $L^p(\\Omega_{\\ell},\\chi_{\\ell})$ is a reflexive Banach space, which follows by observing that the map $G : L^p(\\Omega_{\\ell},\\chi_{\\ell}) \\to L^p(\\Omega_{\\ell}):u\\mapsto \\chi_{\\ell}^{\\nicefrac 1p} u$ is an isometric isomorphism \\cite[Chapter~1]{DrabekEtAl.1997}. We will also make frequent use of the product space $L^p(\\Omega_{\\ell},\\chi_{\\ell})^k$, equipped with the norm\n\\begin{align*}\n\\|(u_1,\\ldots,u_{k})\\|_{L^p(\\Omega_{\\ell},\\chi_{\\ell})^{k}}^{p}= \\int_{\\Omega_{\\ell}}\\chi_{\\ell} |(u_1,\\ldots,u_{k})|^p \\,\\mathrm{d}x,\n\\end{align*}\nwhich is again a reflexive Banach space~\\cite[Theorem 1.23]{AdamsFournier.2003}. \n\nNext, let $\\left(H, \\ska{\\cdot}{\\cdot}_{H}\\right)$ be a real Hilbert space and denote the space of distributions on \n$\\Omega$ by $\\mathcal{D}'(\\Omega)$. 
For a given $k\\geq 1$ we introduce the linear operator\n\\begin{align*}\n\\delta: H \\to \\mathcal{D}'(\\Omega)^k,\n\\end{align*}\nwhich is assumed to be continuous in the following fashion.\n\\begin{ass}\\label{ass:1}\nIf $\\lim_{n\\to\\infty} u_{n}=u$ in $H$ then, \nfor $j=1,\\ldots,k$,\n\\begin{equation*}\n\\lim_{n\\to\\infty} (\\delta u_n )_j(\\phi) = (\\delta u)_{j}(\\phi)\\quad \\text{in } \\mathbb{R}\\quad \\text{for all } \\phi \\in C_0^{\\infty}(\\Omega).\n\\end{equation*}\n\\end{ass}\nAs the regularity of the weights $\\chi_{\\ell}$ implies that $\\chi_{\\ell}\\phi \\in C^{\\infty}_0(\\Omega_\\ell)$ for all $\\phi \\in C^{\\infty}_0(\\Omega)$, we can define the product $\\chi_{\\ell}\\delta u$ by \n\\begin{align*}\n(\\chi_{\\ell}\\delta u)_{j}(\\phi)=(\\delta u)_{j}(\\chi_{\\ell}\\phi)\\quad \\text{for all } \\phi \\in C^{\\infty}_0(\\Omega).\n\\end{align*}\nWith this in place we can introduce our energetic spaces $V$ and $V_{\\ell}$ as subspaces of $H$ given by\n\\begin{align*}\nV &= \\Bigl\\{ u \\in H:\\text{ there exists a } v_j\\in L^p(\\Omega) \\text{ such that }\\\\\n& \\qquad \\qquad (\\delta u )_j(\\phi) = \\int_{\\Omega} v_j \\phi \\,\\mathrm{d}x\\quad \\text{for all } \\phi \\in C^{\\infty}_0(\\Omega),\\ j=1,\\dots,k \\Bigr\\}\n\\end{align*}\nand\n\\begin{align*}\nV_{\\ell} &= \\Bigl\\{ u \\in H:\\text{ there exists a } v_j\\in L^p(\\Omega_{\\ell},\\chi_{\\ell})\\text{ such that }\\\\\n& \\qquad \\qquad (\\chi_{\\ell}\\delta u )_j(\\phi) = \\int_{\\Omega_{\\ell}} v_j \\chi_{\\ell} \\phi \\,\\mathrm{d}x\\quad\\text{for all } \\phi \\in C^{\\infty}_0(\\Omega),\\ j=1,\\dots,k \\Bigr\\}, \n\\end{align*}\nrespectively. On the energetic spaces we consider the operators\n\\begin{align*}\n\\delta_{p}: V \\subseteq H \\to L^p(\\Omega)^k \\quad \\text{ and } \\quad \n\\delta_{p, \\ell}: V_{\\ell} \\subseteq H \\to L^p(\\Omega_{\\ell},\\chi_{\\ell})^k,\n\\end{align*}\nwhere $\\delta_{p}$ maps $u\\in V$ to the corresponding $L^p(\\Omega)$ functions that $\\delta u$ can be represented by, and $\\delta_{p,\\ell}$ maps $u\\in V_{\\ell}$ to the corresponding $L^p(\\Omega_{\\ell},\\chi_{\\ell})$ functions that $\\chi_{\\ell}\\delta u$ can be represented by, respectively.\n\n\\begin{lemma}\\label{lem:Vintersec}\n$V = \\bigcap_{\\ell =1}^s V_{\\ell}$.\n\\end{lemma}\n\n\\begin{proof}\nFor an arbitrary $u \\in V$ it follows, for $\\ell = 1,\\dots,s$, that \n\\begin{align*}\n(\\chi_{\\ell}\\delta u)_j (\\phi) = (\\delta u)_j (\\chi_{\\ell}\\phi) = \\int_{\\Omega} (\\delta_p u)_j \\chi_{\\ell}\\phi \\,\\mathrm{d}x\n\\end{align*}\nfor every $\\phi \\in C_0^{\\infty}(\\Omega)$ and $j = 1,\\dots,k$. As $(\\delta_p u)_j|_{\\Omega_{\\ell}} \\in L^p(\\Omega_{\\ell}) \\subseteq L^p(\\Omega_{\\ell},\\chi_{\\ell} )$, we have a representation of $(\\delta u)_j$ in $L^p(\\Omega_{\\ell},\\chi_{\\ell} )$, i.e., $u \\in V_{\\ell}$ for every $\\ell = 1,\\dots, s$. Hence, $V \\subseteq \\bigcap_{\\ell =1}^s V_{\\ell}$. \n\t\t\nNext, assume that $u\\in\\bigcap_{\\ell =1}^s V_{\\ell}$. Then we can write \n\\begin{align*}\n(\\delta u)_j (\\phi) = (\\delta u)_j \\bigl(\\sum_{\\ell =1}^{s} \\chi_{\\ell} \\phi \\bigr) \n\t = \\sum_{\\ell =1}^{s} (\\delta u)_j \\left( \\chi_{\\ell} \\phi \\right) \n\t = \\sum_{\\ell =1}^{s} \\int_{\\Omega_{\\ell} }(\\delta_{p,\\ell} u)_j \\chi_{\\ell} \\phi \\,\\mathrm{d}x\n\\end{align*}\nfor every $\\phi \\in C_0^{\\infty}(\\Omega)$ and $j =1,\\dots,k$. Let $w_{\\ell,j}$ be the zero extension of $(\\delta_{p,\\ell} u)_j$ to the whole of $\\Omega$. 
We can then define the measurable function \n$v_{j}$ on $\\Omega$ as $v_j = \\sum_{\\ell =1}^{s} \\chi_{\\ell} w_{\\ell,j}$, which satisfies\n\\begin{align*}\n(\\delta u)_j (\\phi)= \\int_{\\Omega} v_j \\phi \\,\\mathrm{d}x\\quad\\text{for all }\\phi \\in C_0^{\\infty}(\\Omega).\n\\end{align*}\nFurthermore, the $L^{p}(\\Omega)$ norm of $v_{j}$ can be bounded by\n\\begin{align*}\n\\|v_j\\|_{L^p(\\Omega)} \n\t\\leq \\sum_{\\ell=1}^{s} \\bigl(\\int_{\\Omega_{\\ell} } \\chi_{\\ell}^p \\left|(\\delta_{p,\\ell} u)_j\\right|^p \\,\\mathrm{d}x\\bigr)^{\\nicefrac{1}{p}}\n\t\\leq \\sum_{\\ell=1}^{s} \\|\\chi_{\\ell}\\|_{L^\\infty(\\Omega_{\\ell})}^{\\nicefrac{(p-1)}{p}} \\left\\|(\\delta_{p,\\ell} u)_j\\right\\|_{L^p(\\Omega_{\\ell}, \\chi_{\\ell})}.\n\\end{align*}\nThis yields that $(\\delta_p u)_j = v_j \\in L^p(\\Omega)$ for $j =1,\\dots,k$, i.e., $u\\in V$ and we thereby have the identification $V=\\bigcap_{\\ell =1}^s V_{\\ell}$. \\qed\n\\end{proof}\n\n\\begin{lemma}\nIf Assumption~\\ref{ass:1} holds, then the operators $\\delta_p$ and $\\delta_{p,\\ell}$, $\\ell=1,\\dots,s$, are linear and closed.\n\\end{lemma}\n\n\\begin{proof}\nThe linearity of the operators is clear, since $\\delta$ is a linear operator. \nLet the sequence $\\seq{u}\\subset V_{\\ell}$ satisfy\n\\begin{align*}\n\\lim_{n\\to\\infty} u_{n}=u\\quad\\text{in }H \\quad \\text{and} \\quad\n\\lim_{n\\to\\infty} \\delta_{p,\\ell} u_n=v\\quad\\text{in }L^p(\\Omega_{\\ell},\\chi_{\\ell})^k.\n\\end{align*} \nAssumption~\\ref{ass:1} then yields that \n\\begin{align*}\n(\\chi_{\\ell} \\delta u)_j (\\phi) = \\lim_{n\\to \\infty} (\\delta u_n)_j (\\chi_{\\ell}\\phi)\t\n= \\lim_{n\\to \\infty } \\int_{\\Omega_{\\ell}} (\\delta_{p,\\ell} u_n)_j \\chi_{\\ell} \\phi\t \\,\\mathrm{d}x \n= \\int_{\\Omega_{\\ell}} v_j \\chi_{\\ell} \t\\phi \\,\\mathrm{d}x\n\\end{align*}\nfor every $\\phi \\in C_0^{\\infty}(\\Omega)$ and $j=1,\\dots,k$. Hence, $(\\chi_{\\ell}\\delta u)_j$ can be represented by the \n$L^p(\\Omega_{\\ell},\\chi_{\\ell})$ function $v_j$, i.e., $\\delta_{p,\\ell} u = v$ holds and the operator $\\delta_{p, \\ell}$ is therefore closed. The closedness of $\\delta_{p}$ follows by the same line of reasoning. \n\\qed\n\\end{proof}\n\nOn the energetic spaces $V$ and $V_{\\ell}$, $\\ell =1,\\dots,s$, we define the norms\n\\begin{align*}\n\\|\\cdot\\|_{V}= \\|\\cdot\\|_H + \\|\\delta_{p}\\cdot\\|_{L^p(\\Omega)^k}\\quad\\text{and}\\quad\n\\|\\cdot\\|_{V_{\\ell}}= \\|\\cdot\\|_H + \\| \\delta_{p, \\ell}\\cdot \\|_{L^p(\\Omega_{\\ell},\\chi_{\\ell})^k},\n\\end{align*}\nrespectively. \n\n\\begin{lemma}\nIf Assumption~\\ref{ass:1} holds, then the spaces $(V, \\|\\cdot\\|_V)$ and $(V_{\\ell}, \\|\\cdot\\|_{V_{\\ell}})$, $\\ell =1,\\dots,s$, are reflexive Banach spaces.\n\\end{lemma}\n\n\\begin{proof}\nConsider the reflexive Banach space $X=H\\times L^p(\\Omega_{\\ell},\\chi_{\\ell})^k$,\nequipped with the norm $\\|(u_{1},u_{2})\\|_{X}=\\|u_{1}\\|_{H}+\\|u_{2}\\|_{L^p(\\Omega_{\\ell},\\chi_{\\ell})^k}$,\nand introduce the linear and isometric operator\n\\begin{align*}\nG:V_{\\ell}\\to X:u\\mapsto (u,\\delta_{p,\\ell}u).\n\\end{align*}\nThe graph of the closed operator $\\delta_{p,\\ell}$ coincides with the image $G(V_{\\ell})$, which makes $G(V_{\\ell})$ a closed linear subset of $X$. Here, $(G(V_{\\ell}),\\|\\cdot\\|_{X})$ is a reflexive Banach space \\cite[Theorem 1.22]{AdamsFournier.2003} and, as $G$ is isometric, it is isometrically isomorphic to $(V_{\\ell},\\|\\cdot\\|_{V_{\\ell}})$. Hence, the latter is also a reflexive Banach space. 
The same line of argumentation yields that $V$ is a reflexive Banach space.\n\\qed\n\\end{proof}\n\nHereafter, we will assume the following.\n\\begin{ass}\\label{ass:2}\nThe set $V$ is dense in $H$.\n\\end{ass}\nUnder this assumption it also holds that $V_{\\ell}$ is a dense subsets of $H$. By the construction of the energetic norms, one then obtains that the reflexive Banach spaces $(V, \\|\\cdot\\|_V)$ and $(V_{\\ell}, \\|\\cdot\\|_{V_{\\ell}})$ are densely and continuously embedded in $H$ and we have the following Gelfand triplets\n\\begin{align*}\nV \\overset{d}{\\hookrightarrow} H \\cong H^* \\overset{d}{\\hookrightarrow} V^*\n\\quad \\text{and}\\quad \nV_{\\ell} \\overset{d}{\\hookrightarrow} H \\cong H^* \\overset{d}{\\hookrightarrow} V^*_{\\ell}.\n\\end{align*}\nHere, the density of $H^*$ in $V^*$ and $V^*_{\\ell}$, respectively, follows, e.g., by~\\cite[Bemerkung I.5.14]{GGZ.1974}. For future reference, we denote the dual pairing between a Banach space $X$ and its dual $X^*$ by $\\dualX{\\cdot}{\\cdot}$, \nand the Riesz isomorphism from $H$ to $H^{*}$ by\n\\begin{align*}\n\\gamma: H \\to H^*: u\\mapsto \\ska{u}{\\cdot}_{H}.\n\\end{align*}\nHere, the Riesz isomorphism satisfies the relations\n\\begin{align*}\n\\dualV{\\gamma u}{v}=\\ska{u}{v}_{H}\\quad\\text{and}\n\\quad\\dualVi{\\gamma u}{v_{\\ell}}=\\ska{u}{v_{\\ell}}_{H}\n\\end{align*}\nfor all $u\\in H$, $v\\in V$ and $v_{\\ell}\\in V_{\\ell}$. \n\n\\begin{remark}\\label{rem:hm1}\nThroughout the derivation of the energetic framework we have assumed that the partition of unity $\\{ \\chi_{\\ell} \\}_{\\ell =1}^{s}$ consists of elements in $C^{\\infty}(\\Omega)$. This is somewhat restrictive from a numerical point of view, but this regularity is required if nothing else is known about the operator $\\delta: H \\to \\mathcal{D}'(\\Omega)^k$. Fortunately, in concrete examples; see Sections~\\ref{sec:pLap} and~\\ref{sec:por}, one commonly has that $\\delta(H)\\subseteq H^{-1}(\\Omega)^k$. If we then choose a partition of unity $\\{ \\chi_{\\ell} \\}_{\\ell =1}^{s}$ in $W^{1,\\infty}(\\Omega)$, we have the property that $\\chi_{\\ell}\\phi\\in H^{1}_{0}(\\Omega)$ for every $\\phi\\in H^{1}_{0}(\\Omega)$, and we can once more derive the above energetic setting by testing with functions $\\phi$ in $H^{1}_{0}(\\Omega)$, instead of in $C^{\\infty}_{0}(\\Omega)$.\n\\end{remark}\n\n\\section{Energetic extensions of the vector fields}\\label{sec:enform}\n\nWith the function spaces in place, we are now able to define the general energetic extensions of our vector fields.\n\\begin{ass}\\label{ass:3}\nFor a fixed $p\\in(1,\\infty)$, let $\\alpha: \\Omega \\times \\mathbb{R}^k \\to \\mathbb{R}^k$ fulfill the properties below.\n\\begin{itemize}\n\\item[$\\alpha_{1})$] The map $\\alpha: \\Omega \\times \\mathbb{R}^k \\to \\mathbb{R}^k$ fulfills the Carath\\'{e}odory condition, i.e., $z \\mapsto \\alpha(x,z)$ is continuous for a.e.\\ $x\\in \\Omega$ and $x \\mapsto \\alpha(x,z)$ is measurable for every $z\\in \\mathbb{R}^k$.\n\\item[$\\alpha_{2})$] The growth condition $|\\alpha(x,z)| \\leq c_1 |z|^{p-1} +c_2(x) $ holds for a.e.\\ $x\\in \\Omega$ and every $z\\in \\mathbb{R}^k$, where $c_1>0$ and $c_2\\in L^{\\nicefrac{p}{(p-1)}}(\\Omega)$ is nonnegative.\n\\item[$\\alpha_{3})$] The map $\\alpha$ is monotone, i.e., for every $z,\\tilde{z} \\in \\mathbb{R}^k$ and a.e.\\ $x\\in \\Omega$ the inequality $(\\alpha(x,z) - \\alpha(x,\\tilde{z}))\\cdot(z - \\tilde{z}) \\geq 0 $ holds. 
\n\\item[$\\alpha_{4})$] The map $\\alpha$ is coercive, i.e., there exists $c_3>0$ and $c_4\\in L^1(\\Omega)$ such that for every $z\\in \\mathbb{R}^k$ and a.e.\\ $x\\in \\Omega$ the condition $\\alpha(x,z) \\cdot z \\geq c_3 |z|^p - c_4(x)$ holds.\n\\end{itemize}\n\\end{ass}\nCompare with~\\cite[Section 26.3]{Zeidler.1989}.\n\nWe introduce the full energetic operator $F : V \\to V^*$ as\n\\begin{align*}\n\\dualV{Fu}{v} = \\int_{\\Omega} \\alpha(\\delta_p u ) \\cdot \\delta_p v \\,\\mathrm{d}x\\quad\\text{for }u,v\\in V.\n\\end{align*}\nThe operator $F$ is well defined, as $\\delta_p v \\in L^p(\\Omega)^k$ for $v\\in V$ and by ($\\alpha_{2}$) \nwe obtain that $\\alpha(\\delta_p v) \\in L^{\\nicefrac{p}{(p-1)}}(\\Omega)^k \\cong \\left(L^{p}(\\Omega)^k\\right)^*$.\nFurthermore, we define the decomposed energetic operators $F_{\\ell} : V_{\\ell} \\to V^*_{\\ell}$, $\\ell =1,\\dots, s$, by\n\\begin{align*}\n\\dualVi{F_{\\ell} u}{v} = \\int_{\\Omega_{\\ell}} \\chi_{\\ell} \\alpha(\\delta_{p, \\ell} u ) \\cdot \\delta_{p, \\ell}v \\,\\mathrm{d}x\\quad\\text{for all }u,v\\in V_{\\ell}.\n\\end{align*}\nThese operators are well defined, as\n\\begin{align*}\n|\\dualVi{F_{\\ell} u& }{v}| \n\\leq \\int_{\\Omega_{\\ell}} \\chi_{\\ell} (c_{1}|\\delta_{p, \\ell} u |^{p-1} +c_{2}) |\\delta_{p, \\ell}v|\\,\\mathrm{d}x\\\\\n&\\leq \\bigl(c_{1}\\bigl(\\int_{\\Omega_{\\ell}} \\chi_{\\ell}|\\delta_{p, \\ell}u|^p\\,\\mathrm{d}x\\bigr)^{\\nicefrac{(p-1)}{p}}\n +\\bigl(\\int_{\\Omega_{\\ell}} \\chi_{\\ell}c_{2}^{\\nicefrac{p}{(p-1)}}\\,\\mathrm{d}x\\bigr)^{\\nicefrac{(p-1)}{p}}\\bigr)\n\\bigl(\\int_{\\Omega_{\\ell}} \\chi_{\\ell} |\\delta_{p, \\ell}v|^p \\,\\mathrm{d}x\\bigr)^{\\nicefrac{1}{p}}\n\\end{align*}\nis finite for every $u,v\\in V_{\\ell}$, due to ($\\alpha_{2}$). This family of operators is a decomposition of $F$, as it fulfills\n\\begin{align*}\n\\dualV{Fu}{v}=\\sum_{\\ell =1}^{s} \\dualVi{F_{\\ell}u}{v}\\quad\\text{for all }u,v\\in V.\n\\end{align*}\nWe can now derive the basic properties of the energetic operators.\n\n\\begin{lemma}\\label{lem:energy}\nIf the Assumptions \\ref{ass:1}--\\ref{ass:3} hold and $h>0$ , then the operators $\\gamma + hF: V \\to V^*$ and \n$\\gamma+ hF_{\\ell}: V_{\\ell} \\to V^*_{\\ell}$, $\\ell=1,\\ldots,s$, are strictly monotone, hemicontinuous and coercive.\n\\end{lemma}\n\n\\begin{proof} \nWe will only derive the properties for $\\gamma + hF_{\\ell}$, as the same argumentation holds for $\\gamma + hF$. The strict monotonicity of the operator follows using ($\\alpha_{3}$), as\n\\begin{align*}\n\\dualVi{(\\gamma+ hF_{\\ell})u - &(\\gamma + hF_{\\ell})v}{u-v}= \\\\\n & \\ska{u-v}{u-v}_{H} + h\\int_{\\Omega_{\\ell}} \\chi_{\\ell} \n \\bigl(\\alpha(\\delta_{p, \\ell} u) - \\alpha(\\delta_{p, \\ell} v) \\bigr) \\cdot \\delta_{p, \\ell} (u-v) \\,\\mathrm{d}x> 0\n\\end{align*}\nholds for all $u,v\\in V_{\\ell}$ with $u\\neq v$.\n\nNext, we prove that $F_{\\ell}$ is hemicontinuous, i.e., $t \\mapsto \\dualVi{F_{\\ell}(u +tv)}{w}$ is continuous on $[0,1]$ for $u,v,w\\in V_{\\ell}$. 
Consider a sequence $\\seq{t}$ in $[0,1]$ with limit $t$ and introduce\n\\begin{align*}\ng(t,x)= \\chi_{\\ell}(x) \\alpha\\bigr(x,(\\delta_{p, \\ell} u +t\\delta_{p, \\ell}v) (x)\\bigr) \\cdot \\delta_{p, \\ell} w(x).\n\\end{align*}\nAs $\\lim_{n\\to \\infty } g(t_{n},x)=g(t,x)$ holds for almost every $x\\in \\Omega_{\\ell}$, due to ($\\alpha_{1}$), and\n\\begin{align*}\n|g(t,x)|\\leq \\chi_{\\ell}(x) \\bigl( c_{1} \\bigl(|\\delta_{p, \\ell} u(x)|+|\\delta_{p, \\ell} v(x)|\\bigr)^{p-1}+c_{2}(x)\\bigr)\n|\\delta_{p, \\ell} w(x)|,\n\\end{align*}\nwhere the right-hand side is an $L^1(\\Omega_{\\ell})$ element, we obtain that\n\\begin{align*}\n\\lim_{n\\to \\infty} \\dualVi{F_{\\ell}(u +t_nv)}{w}\t\n &= \\lim_{n\\to \\infty}\\int_{\\Omega_{\\ell}}\\chi_{\\ell} \\alpha(\\delta_{p, \\ell} (u +t_nv) ) \\cdot \\delta_{p, \\ell} w \\,\\mathrm{d}x\\\\\n &= \\dualVi{F_{\\ell}(u +tv)}{w},\n\\end{align*}\nby the dominated convergence theorem. This implies that $F_{\\ell}$ is hemicontinuous, \nand the same trivially holds for $\\gamma + hF_{\\ell}$.\n\t\nLast, we prove the coercivity of $\\gamma + hF_{\\ell}$. By assumption ($\\alpha_{4}$), we have\\begin{align*}\n\\dualVi{(\\gamma + hF_{\\ell}) u }{u} \n\t&= \\ska{u}{u}_{H} + h \\int_{\\Omega_{\\ell}} \\chi_{\\ell} \\alpha(\\delta_{p, \\ell} u ) \\cdot \\delta_{p, \\ell} u \\,\\mathrm{d}x \\\\\n\t&\\geq \\|u\\|_H^2 + h \\int_{\\Omega_{\\ell}} \\chi_{\\ell}(c_3|\\delta_{p, \\ell} u |^p -c_{4})\\,\\mathrm{d}x\\\\\n\t&\\geq\\|u\\|_H^2 + c_3h \\|\\delta_{p, \\ell} u\\|_{L^p(\\Omega_{\\ell},\\chi_{\\ell})^k}^p - h\\|\\chi_{\\ell}\\|_{L^\\infty(\\Omega_{\\ell})}\\|c_4\\|_{L^1(\\Omega_{\\ell})}\n\\end{align*} \nfor every $u \\in V_{\\ell}$. Hence, we have the limit\n\\begin{align*}\n\\frac{\\dualVi{(\\gamma + hF_{\\ell}) u }{u}}{\\|u\\|_{V_{\\ell}}}\\geq \n\\min(1,c_{3}h)\\frac{\\|u\\|_H^2 + \\|\\delta_{p, \\ell} u\\|_{L^p(\\Omega_{\\ell},\\chi_{\\ell})^k}^p}{\\|u\\|_H \n+ \\|\\delta_{p, \\ell} u\\|_{L^p(\\Omega_{\\ell},\\chi_{\\ell})^k} } \n- \\frac{c(\\chi_{\\ell},c_{4})}{\\|u\\|_{V_{\\ell}}}\\to \\infty,\n\\end{align*}\nas $\\|u\\|_{V_{\\ell}} \\to \\infty$, which implies the coercivity of $\\gamma + hF_{\\ell}$. \\qed\n\\end{proof}\n\n\\begin{corollary}\\label{cor:energy}\nIf the Assumptions \\ref{ass:1}--\\ref{ass:3} hold and $h>0$ , then the operators $\\gamma + hF: V \\to V^*$ and \n$\\gamma+ hF_{\\ell}: V_{\\ell} \\to V^*_{\\ell}$, $\\ell=1,\\ldots,s$, are all bijective.\n\\end{corollary}\n\n\\begin{proof}\nAs $\\gamma + hF: V \\to V^*$ and $\\gamma+ hF_{\\ell}: V_{\\ell} \\to V^*_{\\ell}$ are all, by Lemma~\\ref{lem:energy}, \nstrictly monotone, hemicontinuous and coercive, their bijectivity follows by the Browder--Minty theorem; \nsee, e.g., \\cite[Theorem 26.A]{Zeidler.1989}.\\qed\n\\end{proof}\n\n\\section{Friedrich extensions of the vector fields}\\label{sec:frform}\n\nThe energetic setting is too general for the convergence analysis that we have in mind. We therefore\nintroduce the nonlinear Friedrich extensions of our vector fields, i.e., we restrict the domains of the energetic operators such that they become (unbounded) operators on the pivot space $H$. 
More precisely, we define the Friedrich extension $f: {D}(f) \\subseteq H \\to H$ of the full vector field by\n\\begin{align*} \n{D} (f) = \\{u \\in V : F u \\in H^* \\}\\quad\\text{and}\n\\quad f u = -\\gamma^{-1} Fu\\quad\\text{for } u\\in{D} (f).\n\\end{align*}\nAnalogously, we introduce the Friedrich extensions $f_{\\ell}: {D}(f_{\\ell}) \\subseteq H \\to H$, $\\ell =1,\\dots,s$, of the decomposed vector fields by\n\\begin{align*} \n{D} (f_{\\ell}) = \\{u \\in V_{\\ell} : F_{\\ell} u \\in H^* \\}\\quad\\text{and}\n\\quad f_{\\ell} u = -\\gamma^{-1} F_{\\ell}u\\quad\\text{for } u\\in{D} (f_{\\ell}).\n\\end{align*}\n\n\\begin{lemma} \\label{lem:friedrich}\nIf the Assumptions \\ref{ass:1}--\\ref{ass:3} hold, then the operators $f:{D}(f) \\subseteq H \\to H$ and $f_{\\ell}:{D}(f_{\\ell}) \\subseteq H \\to H$, \n$\\ell =1,\\dots,s$, are all maximal dissipative.\n\\end{lemma}\n\n\\begin{proof}\nBy ($\\alpha_{3}$) of Assumption~\\ref{ass:3}, we have that\n\\begin{align*}\n\\ska{f_{\\ell}u - f_{\\ell}v}{u - v}_{H} & = -\\dualVi{F_{\\ell}u-F_{\\ell}v}{u-v}\\\\\n&=-\\int_{\\Omega_{\\ell}} \\chi_{\\ell}\\bigl(\\alpha(\\delta_{p,\\ell} u)- \\alpha(\\delta_{p,\\ell} v)\\bigr)\n\\cdot \\delta_{p,\\ell} (u - v)\\,\\mathrm{d}x \\leq 0\n\\end{align*}\nfor all $u,v\\in{D}(f_{\\ell})$, i.e., $f_{\\ell}$ is dissipative. Next, for given $h>0$ and $v\\in H$ one has, in virtue of Corollary~\\ref{cor:energy}, that \nthere exists a unique $u\\in V_{\\ell}$ such that $(\\gamma+hF_{\\ell})u=\\gamma v$, or equivalently\n\\begin{align*}\nF_{\\ell} u = -\\sfrac1h\\, \\gamma (u-v)\\in H^{*}.\n\\end{align*}\nHence, $u\\in{D}(f_{\\ell})$ and $(I-hf)u=v$ in $H$, i.e., ${R}(I-hf_{\\ell})=H$\nand $f_{\\ell}$ is therefore maximal. The same argumentation also yields that $f$ is maximal dissipative.\n\\qed\n\\end{proof}\n\nBefore we continue with our analysis we recapitulate a few properties of a general maximal dissipative \noperator $g:{D}(g)\\subseteq H\\to H$. The resolvent\n\\begin{align*}\n(I-hg)^{-1}:H\\to{D}(g)\\subseteq H\n\\end{align*}\nis well defined, for every $h>0$, and nonexpansive, i.e.,\n\\begin{align*}\n\\|(I-hg)^{-1}u-(I-hg)^{-1}v\\|_{H}\\leq\\|u-v\\|_{H}\\quad\\text{for all }u,v\\in H.\n\\end{align*}\nThe latter follows directly by the definition of dissipativity. Furthermore, the resolvent \nand the related Yosida approximation $g(I - hg)^{-1}$ satisfies the following.\n\n\\begin{lemma} \\label{lem:yosida}\nIf $g:{D}(g)\\subseteq H\\to H$ is maximal dissipative, then \n\\begin{align*}\n\\lim_{h\\to 0} (I - hg)^{-1}u=u\\quad\\text{and}\\quad\\lim_{h\\to 0} g(I - hg)^{-1}v= gv\n\\end{align*}\nin $H$ for every $u\\in \\overline{{D}(g)}$ and $v\\in{D}(g)$, respectively.\n\\end{lemma}\n\nThe proof of Lemma~\\ref{lem:yosida} can, e.g., be found in \\cite[Proposition II. 3.6]{Barbu.1976} or \n\\cite[Proposition 11.3]{Deimling.1985}. 
Next, we will relate the full vector field $f$ with its decomposition $\\sum_{\\ell =1}^{s} f_{\\ell}$.\n\n\\begin{lemma} \\label{lem:frieddom}\nIf the Assumptions \\ref{ass:1}--\\ref{ass:3} hold, then $\\bigcap_{\\ell =1}^s {D}(f_{\\ell}) \\subseteq {D}(f)$ and\n$fu = \\sum_{\\ell =1}^{s} f_{\\ell}u$ for every $u\\in\\bigcap_{\\ell =1}^s {D}(f_{\\ell})$.\n\\end{lemma}\n\n\\begin{proof}\nChoose a $u\\in\\bigcap_{\\ell =1}^s {D}(f_{\\ell})$, then $u\\in\\bigcap_{\\ell =1}^s V_{\\ell} = V$ and the sum $z=\\sum_{\\ell=1}^{s} f_{\\ell}u\\in H$ satisfies the relation\n\\begin{align*}\n\\ska{-z}{v}_{H}=\\sum_{\\ell =1}^{s} \\dualVi{F_{\\ell}u}{v}=\\dualV{Fu}{v}\n\\end{align*}\nfor all $v\\in V$. Hence, $Fu\\in H ^{*}$, which yields that $u\\in{D}(f)$ and $fu=-\\gamma^{-1} Fu =z$.\n\\qed\n\\end{proof}\n\nUnfortunately, the set ${D}(f)$ is in general not equal to $\\bigcap_{\\ell =1}^s {D}(f_{\\ell})$, as $u\\in {D}(f)$ does not necessarily imply that $F_{\\ell} u \\in H^*$ for every $\\ell =1,\\dots,s$. This issue is well known and we will encounter it when decomposing the $p$-Laplacian; compare with Section~\\ref{sec:pLap}. We will therefore assume that the mild regularity property below holds.\n\\begin{ass}\\label{ass:f1}\n$V \\subseteq {R}\\bigl( I - h f |_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})} \\bigr)\\quad$for all $h>0$.\n\\end{ass}\nUnder this assumption one has the following identification, which is sufficient for our convergence analysis.\n\n\\begin{lemma}\\label{lem:close}\nIf the Assumptions \\ref{ass:1}--\\ref{ass:f1} hold, then the closure of $f|_{\\bigcap_{\\ell =1}^s{D}(f_{\\ell})}$ is $f$, i.e., \n\\begin{align*}\n\\overline{\\graph\\bigl( f|_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})} \\bigr)} = \\graph(f).\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nBy Lemma~\\ref{lem:frieddom} and the fact that the maximal dissipative operator $f$ is closed~\\cite[Proposition II.3.4]{Barbu.1976}, \nwe obtain that\n\\begin{align*}\n\t\\overline{ \\graph\\bigl(f|_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})} \\bigr)} \n\t\\subseteq \\overline{ \\graph(f)}=\\graph(f).\n\\end{align*}\nNext, choose an arbitrary $(u,fu) \\in \\graph(f)$. Since\n\\begin{align*}\nu \\in {D}(f) \\subseteq V \\subseteq {R}\\bigl( I - h f |_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})} \\bigr),\n\\end{align*}\nwe can define $v_h \\in \\bigcap_{\\ell =1}^s{D}(f_{\\ell})$ via\n\\begin{align*}\nv_h = (I - hf)^{-1} u = \\bigl(I - h f |_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})} \\bigr)^{-1} u\n\\end{align*}\nfor every $h>0$.\nBy Lemma~\\ref{lem:yosida}, we have the limits\n\\begin{align*}\n\\lim_{h\\to 0} v_h = u\\quad \\text{ and }\\quad \\lim_{h\\to 0} f v_h = \\lim_{h\\to 0} f(I-hf)^{-1} u =fu\\quad \\text{ in } H.\n\\end{align*}\nHence, the set $\\graph\\bigl( f|_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})} \\bigr)$ is dense in \n$\\graph(f)$, i.e., its closure in $H\\times H$ is equal to $\\graph(f)$. \\qed\n\\end{proof}\n\n\\section{Abstract evolution equations and their approximations}\\label{sec:approx}\n\nWith the Friedrich formulation of our full vector field $f:{D}(f) \\subseteq H \\to H$, the parabolic equations \nall take the form of an abstract evolution equations, i.e., \n\\begin{equation}\\label{eq:evolv}\n\\dot{u}=fu,\\quad u(0)=\\eta,\n\\end{equation}\non $H$. 
Furthermore, with the decomposition $f=\\sum_{\\ell =1}^s f_{\\ell}$, the splitting schemes \n\\eqref{schemes:sumpre} and \\eqref{schemes:liepre} are given by the operators\n\\begin{align*}\nS_{h}=\\sfrac1s\\,\\sum_{\\ell=1}^{s}\\bigl(I-hsf_{\\ell}\\bigr)^{-1} : H \\to H\\quad\\text{and}\\quad\nP_{h}= \\prod_{\\ell=1}^{s}\\bigl(I-hf_{\\ell}\\bigr)^{-1}: H\\to H,\n\\end{align*}\nrespectively. Here, $S^{n}_{h}\\eta$ and $P^{n}_{h}\\eta$ are both approximations of the exact solution $u$ at time $t=nh$.\n\nAs the resolvent of a maximal dissipative operator is well defined and nonexpansive on $H$, it is a natural starting point for a solution concept. To this end, consider the operator family $\\{\\ee{tf}\\}_{t\\geq 0}$ defined by \n\\begin{equation*}\n\\ee{tf}\\eta=\\lim_{n\\to\\infty}\\bigl(I-\\sfrac tn\\,f\\bigr)^{-n}\\eta,\n\\end{equation*}\nwhere the limit is well defined in $H$ for every $\\eta\\in\\overline{{D}(f)}$ and $t\\geq 0$; see \\cite[Theorem~I]{CrandallLiggett.1971}. \nThe operator family $\\{\\ee{tf}\\}_{t\\geq 0}$ is in fact a (nonlinear) semigroup and each $\\ee{tf}:\\overline{{D}(f)}\\to\\overline{{D}(f)}$ is a nonexpansive operator on $H$. The unique mild solution of the evolution equation \\eqref{eq:evolv} is then given by the function $u:t\\mapsto \\ee{tf}\\eta$, which is continuous on bounded time intervals. An extensive exposition of the nonlinear semigroup theory can, e.g., be found in \\cite{Barbu.1976}.\n\nThere is a discrepancy between the domain of the solution operator, i.e., ${D}(\\ee{tf})=\\overline{{D}(f)}$, and \nthe fact that the operators $S_{h}$ and $P_{h}$ are not necessarily invariant over it. In order to\navoid several technicalities induced by this, we will assume the following.\n\\begin{ass}\\label{ass:f2}\nThe domain ${D}(f)$ is dense in $H$.\n\\end{ass}\nAs $f$ is the closure of $f|_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})}$, one has the inclusions\n\\begin{equation*}\n{D}(f|_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})})\\subseteq D(f)\\subseteq\\overline{{D}(f|_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})})},\n\\end{equation*}\nwhich implies that $\\overline{{D}(f)}=\\overline{{D}(f|_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})})}$.\nHence, ${D}(f|_{\\bigcap_{\\ell =1}^s {D}(f_{\\ell})})$ is also dense in $H$ when Assumption~\\ref{ass:f2} holds.\n\nWe can now formulate the following simplified version of the Lax-type convergence \nresult given in~\\cite[Corollary~4.3]{BrezisPazy.1972}.\n\n\\begin{lemma}\\label{lem:BrePaz}\nConsider an operator family $\\{G_{h}\\}_{h>0}$, where each operator $G_{h}:H\\to H$ is nonexpansive on $H$\nand the operator family is consistent, i.e., \n\\begin{align*}\n\\lim_{h\\to 0} \\sfrac 1h\\,(G_{h}-I)u= fu\\quad\\text{in } H \\quad\\text{for all }u\\in \\cap_{\\ell =1}^s {D}(f_{\\ell}). 
\n\\end{align*}\nIf the Assumptions~\\ref{ass:1}--\\ref{ass:f2} hold, then \n\\begin{align*}\n\\lim_{n\\to\\infty}\\sup_{t\\in(0,\\,T)}\\bigl\\| G^{n}_{t\/n}\\eta-\\ee{tf}\\eta\\bigr\\|_{H}=0\n\\end{align*}\nfor every $\\eta\\in H$ and $T<\\infty$.\n\\end{lemma}\n\n\\begin{theorem}\\label{thm:sum}\nIf the Assumptions~\\ref{ass:1}--\\ref{ass:f2} hold, then the sum splitting~\\eqref{schemes:sumpre} \nis convergent in $H$, uniformly on bounded time intervals, to the mild solution of the abstract evolution \nequation~\\eqref{eq:evolv}, i.e., \n\\begin{align*}\n\\lim_{n\\to\\infty}\\sup_{t\\in(0,\\,T)}\\bigl\\| S^{n}_{t\/n}\\eta-\\ee{tf}\\eta\\bigr\\|_{H}=0\n\\end{align*}\nfor every $\\eta\\in H$ and $T<\\infty$.\n\\end{theorem}\n\n\\begin{proof}\nAs each resolvent $(I-hsf_{\\ell})^{-1}$ is nonexpansive on $H$ for all values of $hs>0$, one has the bound\n\\begin{align*}\n\\|S_{h}u-S_{h}v\\|_{H}\\leq \\sfrac 1s\\, \\sum_{\\ell=1}^{s} \\|(I-hsf_{\\ell})^{-1}u-(I-hsf_{\\ell})^{-1}v\\|_{H}\\leq \\|u-v\\|_{H},\n\\end{align*}\nand $S_{h}$ is therefore nonexpansive on $H$. To validate the consistency of $\\{S_{h}\\}_{h>0}$, we first observe that\n\\begin{align*}\n\\sfrac 1h\\,\\bigl((I-hsf_{\\ell})^{-1}-I\\bigr)=sf_{\\ell}(I-hsf_{\\ell})^{-1}.\n\\end{align*}\nThe consistency can then be formulated in terms of the Yosida approximation, i.e.,\nfor every $u\\in \\cap_{\\ell =1}^s {D}(f_{\\ell})$ one has the limit\n\\begin{align*}\n\\sfrac1h\\,(S_{h}-I)u = \\sum_{\\ell=1}^{s} \\sfrac{1}{hs}\\,\\bigl((I-hsf_{\\ell})^{-1}-I\\bigr)u \n= \\sum_{\\ell=1}^{s} f_{\\ell}(I-hsf_{\\ell})^{-1}u\\to \\sum_{\\ell=1}^{s} f_{\\ell}u=fu\n\\end{align*}\nin $H$, as $h\\to 0$; compare with Lemma~\\ref{lem:yosida}. The desired convergence is then proven \nas the hypotheses of Lemma~\\ref{lem:BrePaz} hold. \\qed\n\\end{proof}\n\n\\begin{theorem}\\label{thm:lie}\nIf the Assumptions~\\ref{ass:1}--\\ref{ass:f2} hold, then the Lie splitting~\\eqref{schemes:liepre} \nis convergent in $H$, uniformly on bounded time intervals, to the mild solution of the abstract evolution \nequation~\\eqref{eq:evolv}, i.e., \n\\begin{align*}\n\\lim_{n\\to\\infty}\\sup_{t\\in(0,\\,T)}\\bigl\\| P^{n}_{t\/n}\\eta-\\ee{tf}\\eta\\bigr\\|_{H}=0\n\\end{align*}\nfor every $\\eta\\in H$ and $T<\\infty$.\n\\end{theorem}\n\n\\begin{proof}\nWe once more prove convergence by validating the hypotheses of Lemma~\\ref{lem:BrePaz}. The nonexpansivity\nof the operator $P_{h}$ on $H$ follows trivially as every resolvent $(I-hf_{\\ell})^{-1}$ has the same property. In order to validate the consistency of $\\{P_{h}\\}_{h>0}$, let $u\\in \\cap_{\\ell =1}^s {D}(f_{\\ell})$ and consider the telescopic expansion\n\\begin{equation}\\label{eq:Pconist}\n\\sfrac1h\\,(P_{h}-I)u = \\sum_{\\ell=1}^{s} \\sfrac1h\\,\\bigl((I-hf_{\\ell})^{-1}-I\\bigr)u_{\\ell,h}= \n\\sum_{\\ell=1}^{s} f_{\\ell}(I-hf_{\\ell})^{-1}u_{\\ell,h},\n\\end{equation}\nwhere $u_{1,h}=u$ and \n\\begin{equation*}\nu_{\\ell,h}=(I-hf_{\\ell-1})^{-1}\\ldots(I-hf_{1})^{-1}u\\quad \\text{for }\\ell=2,\\ldots,s.\n\\end{equation*}\nAs the arguments of the Yosida approximations in~\\eqref{eq:Pconist} are $h$ dependent, we can not directly use Lemma~\\ref{lem:yosida}. Instead, we assume for the time being that the limit \n\\begin{equation}\\label{eq:uhlimit}\n\\lim_{h\\to 0} \\sfrac1h(u-u_{\\ell,h}) =z_{\\ell},\\quad \\text{in }H,\n\\end{equation}\nexists. 
By introducing the maximal dissipative operator \n\\begin{align*}\ne_{\\ell}: {D}(f_{\\ell}) \\subseteq H \\to H : u\\mapsto f_{\\ell}u-z_{\\ell},\n\\end{align*}\nwhich satisfies $(I-hf_{\\ell})^{-1}u_{\\ell,h}=(I-he_{\\ell})^{-1}(u_{\\ell,h}+hz_{\\ell})$, we have the reformulation\n\\begin{align*}\nf_{\\ell}(I-hf_{\\ell})^{-1}u_{\\ell,h} &= \\sfrac1h\\,(I-he_{\\ell})^{-1}(u_{\\ell,h}+hz_{\\ell})-\\sfrac1h\\,(I-he_{\\ell})^{-1}u\\\\\n&\\qquad+\\sfrac1h\\,\\bigl((I-he_{\\ell})^{-1}-I\\bigr)u+\\sfrac1h\\,(u-u_{\\ell,h}).\n\\end{align*}\nBy Lemma~\\ref{lem:yosida} and the nonexpansivity of $(I-he_{\\ell})^{-1}$, one then obtains the limit \n\\begin{align*}\n\\|f_{\\ell}&(I-hf_{\\ell})^{-1}u_{\\ell,h} -f_{\\ell}u\\|_{H}\\\\\n&\\leq \\|\\sfrac1h\\,(I-he_{\\ell})^{-1}(u_{\\ell,h}+hz_{\\ell})-\\sfrac1h\\,(I-he_{\\ell})^{-1}u\\|_{H}\\\\\n&\\qquad+\\|\\sfrac1h\\,\\bigl((I-he_{\\ell})^{-1}-I\\bigr)u-e_{\\ell} u\\|_{H} + \\|\\sfrac1h\\,(u-u_{\\ell,h}) + e_{\\ell} u - f_{\\ell} u\\|_{H}\\\\\n&\\leq \\|-\\sfrac1h\\,(u-u_{\\ell,h})+z_{\\ell}\\|_{H}+\\|e_{\\ell}(I-he_{\\ell})^{-1}u-e_{\\ell}u\\|_{H}\\\\\n&\\qquad+ \\|\\sfrac1h\\,(u-u_{\\ell,h})-z_{\\ell}\\|_{H}\\to 0,\\quad\\text{as }h\\to 0.\n\\end{align*}\nHence, if \\eqref{eq:uhlimit} exists then $ \\lim_{h\\to 0}f_{\\ell}(I-hf_{\\ell})^{-1}u_{\\ell,h}=f_{\\ell}u$. Furthermore,\nif \\eqref{eq:uhlimit} exists for every $\\ell=1,\\ldots,s$, then $\\lim_{h\\to 0}1\/h\\,(P_{h}-I)u=fu$ in $H$.\n\nThe limit \\eqref{eq:uhlimit} obviously exists for $\\ell=1$. If it exists for $\\ell=k$ then it also exists for $\\ell=k+1$, as\n\\begin{align*}\n\\sfrac1h\\,(u-u_{k+1,h})&=\\sfrac1h\\,(u-u_{k,h})-\\sfrac1h\\,\\bigl((I-hf_{k})^{-1}-I\\bigr)u_{k,h}\\\\\n&=\\sfrac1h\\,(u-u_{k,h})-f_{k}(I-hf_{k})^{-1}u_{k,h}\\to z_{k}-f_{k}u\n\\end{align*}\nin $H$, as $h\\to 0$. By induction, the limit~\\eqref{eq:uhlimit} exists for every $\\ell=1,\\ldots,s$, and $\\{P_{h}\\}_{h>0}$ is therefore consistent.\\qed\n\\end{proof}\n\n\\begin{remark}\nThe results can be extended to perturbed equations $\\dot{u}=(f+g)u$, e.g., arising if a lower-order advection or reaction term is added to the diffusion process. Here, $g$ and $f+g$ are both assumed to satisfy a shifted dissipativity condition of the form\n\\begin{align*}\n\\ska{gu - gv}{u - v}_{H} \\leq M[g] \\|u-v\\|_{H}^{2}\\quad\\text{for all }u,v\\in{D}(g),\n\\end{align*}\nwith $M$ being a nonnegative constant, and the range condition ${R}(I-hg)=H$ for $h\\in(0,1\/M)$.\nThis is, e.g., satisfied when $g:H\\to H$ is Lipschitz continuous. \nMore elaborate perturbation examples are given in \\cite[Section~II.3.2]{Barbu.1976}.\nFor these perturbed evolution equations, one has convergence for the modified splitting schemes, with a single step given by\n\\begin{align*}\n\\tilde{S}_{h}= (I-hg)^{-1}S_{h}\\quad\\text{and}\\quad \\tilde{P}_{h}= (I-hg)^{-1}P_{h},\n\\end{align*}\nrespectively. 
If $g:H\\to H$ is in addition Lipschitz continuous, then convergence is also obtained for the semi-implicit schemes\n\\begin{align*}\n\\hat{S}_{h}= (I+hg)S_{h}\\quad\\text{and}\\quad \\hat{P}_{h}= (I+hg)P_{h}.\n\\end{align*}\nThe convergence of the modified schemes follow just as for the proof of Theorem~\\ref{thm:lie} together with \nthe fact that \\cite[Corollary~4.3]{BrezisPazy.1972} is valid for operators $G_{h}$ that have Lipschitz constants of the form $1+Ch$.\n\\end{remark}\n\n\\section{Parabolic equations of p-Laplace type}\\label{sec:pLap}\n\nAs a first problem class we consider the parabolic equations of $p$-Laplace type with \nhomogeneous Neumann boundary conditions, i.e., \n\\begin{equation}\\label{eq:pLap}\n\\begin{cases}\n\\partial u\/ \\partial t = \\nabla \\cdot \\alpha(\\nabla u) &\\text{in } \\Omega\\times (0,T),\\\\\n\\alpha(\\nabla u)\\cdot n =0 &\\text{on } \\partial \\Omega\\times (0,T),\\\\\nu(0) =\\eta &\\text{in } \\Omega.\n\\end{cases}\n\\end{equation}\nThe domain $\\Omega\\subset \\mathbb{R}^d$ is assumed to have a locally Lipschitz boundary $ \\partial \\Omega$, and the map $\\alpha:\\Omega\\times\\mathbb{R}^{d}\\to \\mathbb{R}^{d}$ satisfies Assumption~\\ref{ass:3} for a given $p\\geq 2$. The classical $p$-Laplacian is then given by \n\\begin{equation*}\n\\alpha(x,z)=|z|^{p-2}z.\n\\end{equation*}\nAfter multiplication with $v$ and a subsequent integration by parts, the variational form of~\\eqref{eq:pLap} and its decomposition is formally given by\n\\begin{equation}\\label{eq:pLaplvar}\n(\\partial u\/ \\partial t,v)_{L^2(\\Omega)} = - \\int_{\\Omega}\\alpha (\\nabla u)\\cdot \\nabla v \\,\\mathrm{d}x = \n- \\sum_{\\ell=1}^{s} \\int_{\\Omega_{\\ell}}\\chi_{\\ell} \\alpha (\\nabla u)\\cdot \\nabla v \\,\\mathrm{d}x.\n\\end{equation}\nHere, we have introduce a domain decomposition $\\{\\Omega_{\\ell}\\}_{\\ell =1}^s$, where $\\bigcup_{\\ell =1}^s \\Omega_{\\ell}=\\Omega$, together with a partition of unity $\\{\\chi_{\\ell}\\}_{\\ell=1}^{s}$ chosen in $W^{1,\\infty}(\\Omega)$; compare with Remark~\\ref{rem:hm1}. \n\nIn order to fit the variational form into the abstract setting of Sections~\\ref{sec:enform}, we choose the pivot space $H = L^2(\\Omega)$ and the operator $\\delta$ as the distributional gradient\n\\begin{align*}\n\\delta: L^2(\\Omega) \\to \\mathcal{D}'(\\Omega)^d: u \\mapsto \\nabla u.\n\\end{align*}\nThis choice of $\\delta$ fulfills the continuity Assumption~\\ref{ass:1}, since for a convergent sequence $\\seq{u}\\subset L^2(\\Omega)$ and an arbitrary $\\phi \\in C^{\\infty}_0(\\Omega)$ one can write\n\\begin{align*}\n\\lim_{n\\to \\infty} (D_j u_n) (\\phi) = -\\lim_{n\\to \\infty} \\int_{\\Omega} u_n D_j \\phi \\,\\mathrm{d}x = - \\int_{\\Omega} u D_j \\phi \\,\\mathrm{d}x = (D_j u) (\\phi) \n\\end{align*} \nfor every$j=1,\\dots,d$, where $D_j$ is the $j$-th partial derivative in a distributional sense.\nThe space $V$ is then\n\\begin{align*}\nV = \\bigl\\{ u\\in L^2(\\Omega): \\nabla u \\in L^p(\\Omega)^d \\bigr\\}.\n\\end{align*}\nA bootstrap argument using the Sobolev embedding theorem yields the identification $V = W^{1,p}(\\Omega)$. Since $W^{1,p}(\\Omega)$ is dense in $L^2(\\Omega)$, Assumption~\\ref{ass:2} is also fulfilled. 
\n\nWith these choices, $\\delta_{p}u$ is simply the weak gradient of $u\\inW^{1,p}(\\Omega)$ and we obtain the standard energetic form $F :V \\to V^*$ of $p$-Laplace type vector fields, i.e., \n\\begin{align*}\n\\dualV{Fu}{v} = \\int_{\\Omega}\\alpha (\\nabla u)\\cdot \\nabla v \\,\\mathrm{d}x.\n\\end{align*}\nThe domain of the corresponding Friedrich extension can be written as\n\\begin{align*}\n{D}(f) &= \\Bigl\\{u \\in W^{1,p}(\\Omega):\\text{ there exists a }z\\in L^2(\\Omega) \\text{ such that }\\\\\n& \\qquad \\qquad -\\int_{\\Omega}\\alpha (\\nabla u)\\cdot \\nabla v \\,\\mathrm{d}x=\\int_{\\Omega} zv\\,\\mathrm{d}x \\quad\\text{for all } v \\in W^{1,p}(\\Omega) \\Bigr\\}, \n\\end{align*}\nand $fu$ is given by the weak divergence of $\\alpha(\\nabla u)$. The same characterization can be made for $F_{\\ell}$ and $f_{\\ell}$, respectively. Applying Lemma~\\ref{lem:friedrich} the operators $f$ and $f_{\\ell}$, $\\ell =1,\\dots,s$, are maximal dissipative and Lemma~\\ref{lem:frieddom} yields that\n\\begin{align*}\n\\bigcap_{\\ell =1}^{s} {D}(f_{\\ell}) \\subseteq {D}(f)\\quad \\text{ and }\\quad\nfu = \\sum_{\\ell =1}^{s} f_{\\ell}u\\quad\\text{for }u\\in\\bigcap_{\\ell =1}^{s} {D}(f_{\\ell}).\n\\end{align*}\nValidation of Assumption~\\ref{ass:f2} requires further structure of the map $\\alpha$. For the classical $p$-Laplacian the related $\\alpha$ is continuously differentiable and $\\alpha(0)=0$, which implies that $C_0^{\\infty}(\\Omega)$ is a subset of ${D}(f)$. Hence, ${D}(f)$ is dense in $L^2(\\Omega)$ and Assumption~\\ref{ass:f2} is valid in this context. Finally, if Assumption~\\ref{ass:f1} holds then the convergence results from Section \\ref{sec:approx} can directly be applied. \n\n\\begin{figure}\n \\centering \n \\includegraphics[scale=0.7]{Fig3.pdf}\n \\caption{An example of a domain decomposition $\\{\\Omega_{\\ell}\\}_{\\ell =1}^3$ that fulfills \\eqref{ass:dom}.}\n \\label{fig:pLaplace}\n\\end{figure}\n\nApart from the special cases when $d=1$ or $p=2$, the domains ${D}(f)$ of $p$-Laplace type vector fields can not be expected to coincide with $\\bigcap_{\\ell =1}^{s} {D}(f_{\\ell})$. The issue is that for an element $u \\in {D}(f)$ one has\n\\begin{align*}\nf_{\\ell}u=\\nabla\\cdot \\bigl(\\chi_{\\ell} \\alpha(\\nabla u) \\bigr)= \\nabla \\chi_{\\ell} \\cdot\\alpha(\\nabla u) \n+ \\chi_{\\ell} \\nabla\\cdot \\alpha(\\nabla u),\n\\end{align*}\nwhere the function $\\alpha(\\nabla u)$ only lies in $L^{\\nicefrac{p}{(p-1)}}(\\Omega)^{d}$, with $p>2$. The term $f_{\\ell}u$ is therefore, in general, not an $L^{2}(\\Omega)$ function. In order to give a possible setting for which Assumption~\\ref{ass:f1} is valid, we assume that the domain decomposition $\\{\\Omega_{\\ell}\\}_{\\ell =1}^s$ is chosen such that\n\\begin{equation}\\label{ass:dom}\n\\text{closure}(\\bigcup_{\\ell =1}^{s-1}\\Omega_{\\ell})\\setminus\\partial\\Omega = \\emptyset.\n\\end{equation}\nThat is, the subdomain $\\Omega_{s}$ separates the boundary $\\partial\\Omega$ from the other subdomains; as illustrated in Figure~\\ref{fig:pLaplace}.\n\n\\begin{lemma} \nConsider a domain decomposition $\\{\\Omega_{\\ell}\\}_{\\ell =1}^s$ that satisfies \\eqref{ass:dom} and with subdomains $\\Omega_{\\ell}$, $\\ell=1,\\ldots,s-1$, that all have the segment property. 
If $p\\geq 2$ in addition satisfies $p>(d+1)\/2$ and the map $\\alpha$ fulfills Assumption~\\ref{ass:3}($\\alpha_{2}$) with $c_{2}\\in L^{2}(\\Omega)$, then the Friedrich extension $f$ of a $p$-Laplace type vector field and its decomposition into the operators $f_{\\ell}$ fulfill Assumption~\\ref{ass:f1}.\n\\end{lemma}\n\n\\begin{proof}\nFor an arbitrary $g \\in V=W^{1,p}(\\Omega)$ there exists a unique $u\\in {D}(f)$ such that\n$u-hfu = g$ and Assumption~\\ref{ass:f1} is then valid if $u\\in\\bigcup_{\\ell =1}^s{D}(f_{\\ell})$. To prove this, we first observe that $fu=\\nabla \\cdot \\alpha(\\nabla u) = (u-g)\/h \\inW^{1,p}(\\Omega)$ and $W^{1,p}(\\Omega) \\hookrightarrow L^r(\\Omega)$ for some $r > dp\/(p-1)$, as $p \\geq 2$ and $p > (d+1)\/2$. Hence, \\cite[Theorem 2 and Remarks pp.~829--830]{diBenedetto.1983} implies that $\\nabla u$ is locally H\\\"older continuous on $\\Omega$ and we obtain that\n\\begin{align*}\n\\alpha(\\nabla u)|_{\\Omega_{int}}\\in L^{2}(\\Omega_{int})^{d}\n\\end{align*}\nfor every open domain $\\Omega_{int}$ such that $\\overline{\\Omega}_{int}\\subset\\Omega$. \n\nAs $u\\in{D}(f)$, we have the integration by parts\n\\begin{align} \\label{eq:intbypar}\n-\\int_{\\Omega} \\alpha (\\nabla u)\\cdot \\nabla w\\,\\mathrm{d}x\n= \\int_{\\Omega} \\nabla\\cdot\\alpha (\\nabla u) w\\,\\mathrm{d}x\n\\end{align} \nfor every $w\\inW^{1,p}(\\Omega)$. Due to the extra interior regularity of $\\alpha(\\nabla u)$ we can, e.g., extend~\\eqref{eq:intbypar} to \nall $w=w_{1}+w_{2}$, where $w_{1}\\in W^{1,p}(\\Omega)$ and $w_{2}\\in H^{1}(\\Omega)$ is a.e.\\ zero on $\\Omega\\setminus\\Omega_{int}$\nfor some open subdomain $\\Omega_{int}$ that has the segment property and fulfills $\\overline{\\Omega}_{int}\\subset\\Omega$.\nThe latter implies that $w_{2}$ is the zero extension of $w_{2}|_{\\Omega_{int}}\\in H^{1}_0 (\\Omega_{int})$; see, e.g., \\cite[Theorem 5.29]{AdamsFournier.2003}. \n\nNext, let $v\\in V_{\\ell}\\subset L^{2} (\\Omega)$, for $\\ell=1,\\ldots,s$, and consider $\\chi_{\\ell} v \\in L^2(\\Omega)$. Here, \n\\begin{align*}\nD_{j}(\\chi_{\\ell} v) (\\phi) &= D_{j}( v) (\\chi_{\\ell}\\phi) + \\int_{\\Omega} (D_{j}\\chi_{\\ell}) v\\phi\\,\\mathrm{d}x= \\int_{\\Omega_{\\ell}} \\bigl(\\chi_{\\ell} (\\delta_{p,\\ell}v)_{j}+(D_{j}\\chi_{\\ell})v\\bigr)\\phi \\,\\mathrm{d}x\n\\end{align*}\nfor every $\\phi \\in C^\\infty_0 (\\Omega)$, i.e., $\\chi_{\\ell} v \\in H^1(\\Omega)$ and $\\chi_{\\ell} v = 0$ a.e.\\ on $\\Omega\\setminus \\Omega_{\\ell}$. If $\\ell2.\n\\end{align*}\nThis restriction on $p$ is made in order to assure the embedding\n\\begin{equation}\\label{eq:H1inLq}\nH_0^1(\\Omega)\\overset{d}{\\hookrightarrow} L^{\\nicefrac{p}{(p-1)}}(\\Omega),\n\\end{equation}\nwhich is central in our forthcoming analysis. The standard porous medium equation is then given by\n\\begin{align*}\n\\alpha(x,z)=|z|^{p-2}z,\\quad\\text{with }p\\geq 2,\n\\end{align*}\nand the fast diffusion equation is obtained for the same $\\alpha$, but with $10$, and Assumption~\\ref{ass:3} is then valid for $p=2$. 
\n\nAfter multiplying \\eqref{eq:pme} by $w$, where $-\\Delta w = v$ in $\\Omega$ and $w=0$ on $\\partial\\Omega$, and integrating by parts twice, the variational form of~\\eqref{eq:pme} and its decomposition is formally \n\\begin{equation}\\label{eq:pmevar}\n\\int_{\\Omega}\\frac{\\partial u}{\\partial t}\\,(-\\Delta)^{-1}v\\,\\mathrm{d}x = \n-\\int_{\\Omega}\\alpha (u) v \\,\\mathrm{d}x = \n- \\sum_{\\ell=1}^{s} \\int_{\\Omega_{\\ell}}\\chi_{\\ell} \\alpha (u) v \\,\\mathrm{d}x.\n\\end{equation}\nAbove, we have once more introduced a domain decomposition $\\{\\Omega_{\\ell}\\}_{\\ell =1}^s$ of $\\Omega$ together with a partition of unity $\\{\\chi_{\\ell}\\}_{\\ell=1}^{s}$. \n\nWith the proper interpretation, the left-hand side of \\eqref{eq:pmevar} is given by the inner product on\n$H^{-1}(\\Omega)$; compare with \\cite[Bemerkung III.1.13]{GGZ.1974}. The formal variational formulation~\\eqref{eq:pmevar} therefore leads us to choosing the pivot space $H=H^{-1}(\\Omega)$ and the operator\n\\begin{align*}\n\\delta:H^{-1}(\\Omega) \\to \\mathcal{D}'(\\Omega): u \\mapsto u.\n\\end{align*}\nThe operator $\\delta$ obviously fulfills the continuity Assumption~\\ref{ass:1}. The space $V$ is now\n\\begin{align*}\nV &= \\Bigl\\{ u \\in H^{-1}(\\Omega) :\\text{ there exists a } v\\in L^p(\\Omega) \\text{ such that }\\\\\n& \\qquad\\qquad \\dualHH{u}{\\phi} = \\int_{\\Omega} v \\phi \\,\\mathrm{d}x\\quad\\text{for all } \\phi \\in H^{1}_0(\\Omega)\\Bigr\\}\n= \\bigl(L^{\\nicefrac{p}{(p-1)}}(\\Omega)\\bigr)^*,\n\\end{align*}\nand as before $\\delta_p u = v$, where $v$ is the unique function stated in the definition of $V$. By the embedding~\\eqref{eq:H1inLq} and~\\cite[Bemerkung I.5.14]{GGZ.1974}, we obtain that \n\\begin{align*}\n\\bigl(L^{\\nicefrac{p}{(p-1)}}(\\Omega)\\bigr)^*\\overset{d}{\\hookrightarrow} H^{-1}(\\Omega),\n\\end{align*} \ni.e., Assumption~\\ref{ass:2} is fulfilled. With these choices, we have the energetic form $F :V \\to V^*$ given by\n\\begin{align*}\n\\dualV{Fu}{v} = \\int_{\\Omega}\\alpha (\\delta_{p} u)\\delta_{p} v \\,\\mathrm{d}x.\n\\end{align*}\n\nIn order to characterize the Friedrich operator $f$, we introduce the Dirichlet Laplacian $-\\Delta:H^{1}_{0}(\\Omega)\\to H^{-1}(\\Omega)$, where\n\\begin{align*}\n\\dualHH{-\\Delta u}{v} = \\int_{\\Omega} \\nabla u\\cdot\\nabla v\\,\\mathrm{d}x\\quad\\text{for all } u,v\\in H^{1}_{0}(\\Omega).\n\\end{align*}\nAs $-\\Delta$ is the Riesz isomorphism from $H^{1}_{0}(\\Omega)$ to $H^{-1}(\\Omega)$,\nthe inner product on $H^{-1}(\\Omega)$ satisfies\n\\begin{align*}\n\\ska{u}{v}_{H^{-1}(\\Omega)} \n&= \\sfrac14\\bigl(\\|u+v\\|_{H^{-1}(\\Omega)}-\\|u-v\\|_{H^{-1}(\\Omega)}\\bigr)\\\\\n&=\\sfrac14\\bigl(\\|(-\\Delta)^{-1}(u+v)\\|_{H^{1}_{0}(\\Omega)}-\\|(-\\Delta)^{-1}(u-v)\\|_{H^1_{0}(\\Omega)}\\bigr)\\\\\n&=\\ska{(-\\Delta)^{-1}u}{(-\\Delta)^{-1}v}_{H^{1}_{0}(\\Omega)}\\\\\n&=\\dualHH{u}{(-\\Delta)^{-1}v}\n\\end{align*}\nfor all $u,v\\in H^{-1}(\\Omega)$; compare with \\cite{EmmrichSiska.2012}. 
Next, for $u\\in{D}(f)$ there exists a $z\\in H^{-1}(\\Omega)$ such that\n\\begin{align*}\n-\\int_{\\Omega}\\alpha (\\delta_{p} u)\\,\\delta_{p} v \\,\\mathrm{d}x=(z,v)_{H^{-1}(\\Omega)}=\\dualHH{v}{(-\\Delta)^{-1}z}\n\\end{align*}\nfor all $v \\in \\bigl(L^{\\nicefrac{p}{(p-1)}}(\\Omega)\\bigr)^*$, or equivalently \n\\begin{align*}\n-\\int_{\\Omega}\\alpha (\\delta_{p} u)\\,w \\,\\mathrm{d}x= \\int_{\\Omega} w \\,(-\\Delta)^{-1} z\\,\\mathrm{d}x\n\\quad\\text{ for all } w \\in L^{p}(\\Omega).\n\\end{align*}\nHence, $-\\alpha (\\delta_{p} u)=(-\\Delta)^{-1}z\\in H^{1}_{0}(\\Omega)$; see, e.g., \\cite[Lemma~3.31]{AdamsFournier.2003}, and we obtain the characterization \n\\begin{align*}\n{D}(f)=\\bigl\\{u\\in \\bigl(L^{\\nicefrac{p}{(p-1)}}(\\Omega)\\bigr)^*:\\alpha (\\delta_{p} u)\\in H^{1}_{0}(\\Omega)\\bigr\\},\n\\end{align*}\nand $fu=\\Delta\\alpha (\\delta_{p} u)$ for $u\\in{D}(f)$.\n\nAnalogously to Section~\\ref{sec:pLap}, we have ${R}(\\delta)= H^{-1}(\\Omega) \\subset \\mathcal{D}'(\\Omega)$ and we can therefore allow a partition of unity $\\{\\chi_{\\ell}\\}$ in $W^{1,\\infty}(\\Omega)$. The spaces $V_{ \\ell}$, $\\ell =1,\\dots,s$, are then\n\\begin{align*}\nV_{\\ell}&= \\big\\{ u \\in H^{-1}(\\Omega): \\text{ there exists a } v \\in L^p(\\Omega_{\\ell},\\chi_{\\ell}) \\text{ such that }\\\\\n& \\qquad\\qquad \\dualHH{\\chi_{\\ell} u}{\\phi} = \\int_{\\Omega_{\\ell}} \\chi_{\\ell} v \\phi \\,\\mathrm{d}x\\quad\\text{for every } \\phi \\in H^{1}_0(\\Omega) \\big\\}.\n\\end{align*}\nAgain, we write $\\delta_{p, \\ell} u$ for the unique $L^p(\\Omega_{\\ell},\\chi_{\\ell})$ function $v$ from this definition.\n\nAfter introducing $F_{\\ell}$ and $f_{\\ell}$, as described in Sections~\\ref{sec:enform}, we have by Lemmas~\\ref{lem:friedrich} and~\\ref{lem:frieddom} that the operators $f$ and $f_{\\ell}$, $\\ell =1,\\dots,s$, are maximal dissipative and\n\\begin{align*}\nfu = \\sum_{\\ell =1}^{s} f_{\\ell}u\\quad\\text{for }u\\in\\bigcap_{\\ell =1}^{s} {D}(f_{\\ell})\\subseteq {D}(f).\n\\end{align*}\nInstead of Assumption~\\ref{ass:f1} we can prove the stronger condition \n\\begin{align*}\n\\bigcap_{\\ell =1}^s {D}(f_{\\ell}) = {D}(f).\n\\end{align*}\nTo prove the equality take an arbitrary $u\\in {D}(f)$. Since $\\alpha(\\delta_p u )\\in H_0^1(\\Omega)$, we also have\nthat $\\chi_{\\ell}\\alpha(\\delta_p u ) \\in H_0^1(\\Omega)$ for every weight function $\\chi_{\\ell} \\in W^{1,\\infty}(\\Omega)$ and\n\\begin{align*} \n-\\int_{\\Omega_{\\ell}} \\chi_{\\ell}\\alpha (\\delta_{p} u)\\, \\delta_{p,\\ell}v\\,\\mathrm{d}x &=\\dualHH{v}{-\\chi_{\\ell}\\alpha (\\delta_{p} u)}\\\\\n&= \\ska{\\Delta\\bigl(\\chi_{\\ell}\\alpha (\\delta_{p} u)\\bigr)}{v}_{H^{-1}(\\Omega)}\\quad \\text{ for all } v\\in V_{\\ell}.\n\\end{align*}\nThat is, $u$ also lies in ${D}(f_{\\ell})$ for $\\ell =1,\\dots,s$.\n\nAssumption~\\ref{ass:f2} requires some further regularity of the map $\\alpha$ and the validation that $\\alpha (\\delta_{p} u)$ vanishes on the boundary $\\partial\\Omega$. For the porous medium equation and the two-phase Stefan problem one has that $\\alpha(\\phi)\\in H^{1}_{0}(\\Omega)$ for every $\\phi\\in C_0^{\\infty}(\\Omega)$. The set of functionals of the form $v\\mapsto\\int_{\\Omega}u v\\,\\mathrm{d}x$, where $u\\in C_0^{\\infty}(\\Omega)$ and $v\\in H^{1}_{0}(\\Omega)$, is therefore a subset of ${D}(f)$. It is also a dense subset of $H^{-1}(\\Omega)$, as $C_0^{\\infty}(\\Omega)$ is dense in $L^{2}(\\Omega)$ and $L^{2}(\\Omega)^*$ is dense in $H^{-1}(\\Omega)$. 
Hence, Assumption~\\ref{ass:f2} is valid for these two prototypical examples, and the convergence results of Section~\\ref{sec:approx} hold.\n\n\\begin{remark}\nThe variational setting of porous medium type equations, with $H^{-1}(\\Omega)$ as pivot space, is by no means standard. However, it enables a clear-cut way of introducing the related Friedrich operator. The variational setting has, e.g., been proposed in \\cite[Bemerkung I.5.14]{GGZ.1974}. It has also been employed in \\cite{EmmrichSiska.2012} when proving convergence of finite element\/implicit Euler approximations for the porous medium equation, on its very weak form. Note that the standard approach to prove that $\\Delta\\alpha$ is a maximal dissipative operator on $H^{-1}(\\Omega)$ is to directly observe that it is the gradient of a convex function; see \\cite[Example 3]{Brezis.1971}.\n\\end{remark}\n\n\\begin{acknowledgements}\nPart of this study was conducted during Hansen's guest research stay at the Institut f\\\"{u}r Mathematik, TU Berlin. Hansen would like to thank Etienne Emmrich for enabling this inspiring stay. \n\\end{acknowledgements}\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\@startsection {section}{1}{\\z@}%\n {-3.5ex \\@plus -1ex \\@minus -.2ex\n {2.3ex \\@plus.2ex}%\n {\\normalfont\\large\\bfseries}}\n\\renewcommand\\subsection{\\@startsection{subsection}{2}{\\z@}%\n {-3.25ex\\@plus -1ex \\@minus -.2ex}%\n {1.5ex \\@plus .2ex}%\n {\\normalfont\\bfseries}}\n\\makeatother\n\n\\newcommand\\bl{\\color{blue}} \n\n\n\n\\begin{document}\n\n\\begin{titlepage}\n \n\\begin{center}\n{\\Large \\bf Holographic Quenches and\n\\vspace{3mm} Fermionic Spectral Functions}\n\n\\vskip 10mm\n\n{\\large N.~Callebaut$^{a,b}$, B.~Craps$^{a,c}$, F.~Galli$^d$, D.~C.~Thompson$^a$, \\\\\n\\vspace{3mm}\nJ.~Vanhoof$^a$, J.~Zaanen$^e$, Hongbao~Zhang$^a$}\n\\vskip 7mm\n\n$^a$ Theoretische Natuurkunde, Vrije Universiteit Brussel, and \\\\\n\\hspace*{0.15cm} International Solvay Institutes, \nPleinlaan 2, B-1050 Brussels, Belgium \\\\\n$^b$ Ghent University, Department of Physics and Astronomy,\\\\ Krijgslaan 281-S9, 9000 Gent, Belgium \\\\\n$^c$ Laboratoire de Physique Th\\'eorique, Ecole Normale Sup\\'erieure,\\\\ 24 rue Lhomond, F-75231 Paris Cedex 05, France \\\\\n$^d$ Instituut voor Theoretische Fysica, KU Leuven,\\\\ \nCelestijnenlaan 200D, B-3001 Leuven, Belgium.\\\\\n$^e$ Instituut-Lorentz for Theoretical Physics, Universiteit Leiden,\\\\ PO Box 9506, NL-2300 RA Leiden, The Netherlands\n\n\\vskip 10mm\n \n{\\small\\noindent {\\tt ncalleba.Callebaut@ugent.be, Ben.Craps@vub.ac.be, federico@itf.fys.kuleuven.be, Daniel.Thompson@vub.ac.be, Joris.Vanhoof@vub.ac.be, Jan@lorentz.leidenuniv.nl, Hongbao.Zhang@vub.ac.be}}\n \n\\end{center}\n\n\\vskip 10mm\n\n\\begin{center}\n{\\bf ABSTRACT}\n\\vspace{3mm}\n\\end{center}\nUsing holographic methods we investigate the behaviour of fermionic spectral functions of strongly coupled $2+1$ dimensional field theories as both temperature and chemical potential are quenched. \n\\vfill \n\\end{titlepage}\n\n\n\n\\section{Introduction}\n\nArguably two of the most fundamental problems in quantum field theory, or equivalently quantum many body physics, are non-equilibrium unitary time evolution and finite density, respectively. With the exception of integrable systems in 1+1D (see, for instance, \\cite{Calabrese:2006rx}) there exists no direct field theoretical method that can deal with far from equilibrium dynamics in strongly interacting systems with infinite degrees of freedom. 
Even at equilibrium, for strongly interacting fermionic systems at finite density, the fermion sign problem obscures the view of non-perturbative physics.\n\nThese problems, associated with understanding non-Fermi liquid physics, have been pushed to center stage\nin condensed matter physics since the discovery of the strange metals in high $T_c$ superconductors and related systems \\cite{Anderson,VLSAR,VNS}. The study of non-equilibrium physics of quantum many body systems is at present rapidly evolving in the laboratory because of the new techniques becoming available in the field of cold atoms, high energy physics (heavy ion collisions) and the ``ultrafast\" experiments on condensed matter systems. \n\nThe AdS\/CFT correspondence is unique in its capacity to deal both with finite density and non-equilibrium in a mathematically controlled way, albeit within \nthe limitations of the ``large $N$\" limit and the restrictions on the non-equilibrium physics that can be addressed easily. To illustrate this power, we present \nhere an extreme example of a time-dependent ``experiment'' involving strongly interacting fermion matter. \n\nConsider a strongly interacting \nrelativistic quantum critical state formed from fermions and other degrees of freedom in 2+1D at zero density and zero temperature. \nThis can be pictured as graphene right at the quantum phase transition to the Mott insulator \\cite{Herbut}. At large negative times its fermion\nspectral functions, which can be measured by (inverse) photoemission, will be of the ``branch cut\" form dictated by Lorentz and scale invariance: \n\\begin{equation}\n A (\\omega, k) \\sim \\text{Im} \\left[( \\sqrt{k^2 - \\omega^2})^{2\\Delta-d}\\right] \\ .\n\\end{equation}\nHere, $d$ is the spacetime dimension and throughout this work we will consider $d=3$. We are typically interested in the regime $2\\Delta - d < 0$, noticing that the scaling dimension is bounded from below by the unitarity limit for the fermionic CFT operators, $\\Delta = (d-1)\/2$. \n\nSubsequently, this system is prepared in a highly excited state by a sudden ``quench\" at a given instant, but now at some finite {\\em charge density} which is uniform in space. The main purpose of this paper is to compute time-dependent spectral properties of this system using holography. Specifically, we compute its time-dependent spectral function and show that it suggests the gradual build-up of a Fermi surface. We also highlight some caveats, commenting in particular on the relation with quantities potentially measurable using time-resolved ARPES experiments. \n\n\n\\section{Time-dependent fermion spectral functions}\n\nIn equilibrium, the spectral function of an operator is defined as ($-2$ times) the imaginary part of its corresponding retarded Green's function in frequency space. However, out of equilibrium such a definition is ambiguous; the retarded Green's function $G_R(\\vec{x}_1, t_1; \\vec{x}_2, t_2)$ may no longer depend on just the relative time interval $ t = t_2 - t_1$ but on $t_1$ and $t_2$ separately and a conventional Fourier transform to frequency space can not be taken. \nIn \\cite{Balasubramanian:2012tu}, a generalised notion of a spectral function was proposed and computed in a far-from-equilibrium AdS\/CFT context\n(see \\cite{CaronHuot:2011dr,Chesler:2011ds} and \\cite{Banerjee:2012uq,Mukhopadhyay:2012hv,Steineder:2013ana} for studies of this and related quantities far from equilibrium and closer to equilibrium, respectively). 
\nIntroducing both the relative time $t$ and the average time, $T=(t_1+t_2)\/2$, \n we consider the Fourier transform of the Green's function with respect to $t$ whilst keeping $T$ fixed, \n \\begin{equation}\\label{eq:Wignertrans}\n G(T, \\omega, k) = \\int dt~e^{i \\omega t} G(T,t, k) \\ .\n \\end{equation}\nWe assume that homogeneity is preserved such that we can Fourier transform to spatial momenta with no change. Then a notion of a time-dependent spectral function is defined by \n \\def{\\rm{ Im}}{{\\rm{ Im}}}\n \\begin{equation}\\label{eq:SpectralDef}\n A(T,\\omega, k) = - 2 \\, {\\rm{ Im}} \\, G_R(T, \\omega, k) \\ . \n \\end{equation}\nIn passing we note that this recipe for taking a Fourier transform has a rather well established analogue as the Wigner distribution \\cite{Wigner:1932eb} used in phase space approaches to Quantum Mechanics.\n\nSuppose that a system starts in equilibrium, and then undergoes an instantaneous quench at time $t_0=0$, eventually relaxing to find a new equilibrium. \nAt $T=-\\infty$ the spectral function \\eqref{eq:SpectralDef} will match the conventional equilibrium spectral function of the initial configuration and at $T=+\\infty$ it will match that of the system at its new equilibrium. \nFor intermediate values of average time, \\eqref{eq:SpectralDef} can generically display rather wild oscillations and indeed need not remain positive for positive frequencies (see, for instance, \\cite{Balasubramanian:2012tu}). This feature is similar to the ``negative probabilities'' displayed by the Wigner distribution and may be countered by coarse-graining with a Gaussian filter. \n \nOne consequence of the inherent non-locality of the definition \\eqref{eq:SpectralDef} is that for $T<0$ the spectral function is already influenced by the effects of the quench; this does not represent non-causality, it is simply because for a large enough relative time, the interval centered around $T$ will necessarily extend through the time of the quench. \nOne could choose an alternate definition removing this feature, for instance by considering the retarded Green's function as a function of final time $t_2$ and relative time $t=t_2-t_1$, and Fourier transforming with respect to the relative time $t$ at fixed final time. (Note that the retarded Green's function vanishes for $t<0$.)\n\n \n\n\\section{Holographic setup} \n\nHow can a charged quench be described in AdS\/CFT? In AdS\/CFT the field theoretical problem in $d$ dimensions is dualized in an equivalent gravitational problem living in a $d+1$ dimensional bulk, described by classical gravity in the large $N$ limit of the gauge theory. When the bulk is characterised by an Anti-de-Sitter (AdS) geometry it describes the vacuum of a conformal field theory on\nthe boundary at zero temperature and density. To study the non-equilibrium physics associated with an instantaneous quench in the zero density theory, a standard simple approach (see, for instance, \\cite{Bhattacharyya:2009uu,Balasubramanian:2010ce,Wu:2012rib,Hubeny:2007xt,AbajoArrastia:2010yt,Albash:2010mv,Liu:2013iza, Balasubramanian:2011at,Allais:2011ys,Callan:2012ip,Hubeny:2013hz}) is to inject at the AdS boundary a shell of light-like ``dust\" (or other matter that effectively behaves like dust) in AdS$_{d+1}$. This corresponds in the field theory with the sudden creation of an excited state whose time evolution is subsequently determined by the equivalent gravitational evolution in the bulk. 
A time-dependent metric describing such a process is available in closed form; it is the AdS version of a metric derived by Vaidya in the 1930's. After some time this in-falling shell of dust will form a black brane, which in turn encodes a thermal equilibrium state in the field theory: this is the holographic description of the anticipated thermalization of the field theory in the long time limit. \n\nThis in-falling shell paradigm of holographic thermalisation was recently extended to Einstein-Maxwell gravity, where one can consider the injection of a shell of dust which is {\\em electrically charged} \\cite{Galante:2012pv,Caceres:2012em}. Fig.~\\ref{fig:Penrose} illustrates the Penrose diagram for the corresponding Reissner-Nordstr\\\"om-AdS-Vaidya spacetime.\n\\begin{figure}[ht] \n\\centering \n\\includegraphics[width=0.28 \\textwidth]{penroseRN.pdf}\n\\caption{ Penrose diagram for Vaidya AdS Reissner-Nordstr{\\\"o}m (RN) spacetime. }\\label{fig:Penrose}\n\\end{figure} \nAlthough the (scale-dependent) equilibration time becomes dependent on whether the chemical potential is large or small \ncompared to temperature, the gross evolution in the bulk is similar as in the zero density case: after some time a charged Reissner-Nordstr{\\\"o}m (RN) black brane will form, describing a holographic strange metal at finite temperature.\n\nIt has been known for a while that, for sufficiently relevant scaling dimensions of the fermion operators of the zero density CFT, Fermi liquid like \nquasiparticles form at finite density \\cite{Liu:2009dm,Cubrovic:2009ye,Faulkner:2009wj}, characterised by a propagator \n \\begin{equation}\\label{fermion2pt}\nG(\\omega,k)=\\frac{1}{\\omega-v_F(k-k_F)+\\Sigma(\\omega,k)}\n\\end{equation}\nwith self energy $\\Sigma \\sim \\omega^{2\\nu_{k=k_F}}$ where $\\nu_k$ is associated with the scaling dimension in the AdS$_2\\times \\Rbar^2$\nnear horizon geometry. We deliberately choose parameters such that $2 \\nu_{k_F} > 1$, so that sharp quasiparticles are formed in the finite density \nsystem.\n\nIt is however by now well understood that, when backreaction from bulk fermions is taken into account, the RN black hole is quantum mechanically unstable to the formation of an electron star \\cite{Hartnoll:2009ns,Hartnoll:2010gu,Cubrovic:2010bf,Cubrovic:2011xm,Allais:2013lha} (see \\cite{Hartnoll:2011fn} for a review). This also corresponds to a thermodynamic instability, in the sense that for fixed temperature and chemical potential, the electron star solution has lower free energy than the RN black brane. While it would be interesting to study this effect in the present fixed-energy (as opposed to fixed-temperature) context, in the present paper we choose to work in the limit in which we treat the bulk classically (which can be thought of as ignoring perturbative and non-perturbative $1\/N$ corrections).\n \nIn our $d=3$ case, the metric and the bulk spacetime gauge fields are given in Eddington-Finkelstein coordinates by \n\\begin{equation}\n\\begin{aligned}\n&ds^2 = \\frac{1}{z^2} \\left(-f(v,z) dv^2 - 2 dv dz + d\\vec{x}^2 \\right) \\ , \\\\\n&A = g(v,z) dv \\, , \n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{aligned}\nf(v,z) &=1+ \\Theta(v)\\left( - ( 1 + Q^{2}) z^3 + Q^2 z^4 \\right) \\ , \\\\\n\\quad g(v,z) &= \\Theta(v) \\mu (1- z) \\ . 
\n\\end{aligned}\n\\end{equation}\nAt the spacetime boundary $z=0$, the bulk lightcone time $v$ coincides with the field theory time.\nAs in \\cite{Liu:2009dm} we work with dimensionless quantities obtained by rescaling such that the horizon is fixed at $z=1$. The temperature of the final black hole produced is $\\mathcal T=\\frac{1}{4\\pi} \\left(3 - Q^2 \\right)$ and the parameter $\\mu= g_F Q$ can be identified with the chemical potential of the theory where $g_F$ is the $U(1)$ coupling which we shall set to unity.\n\nTo determine fermionic spectral functions, we use the same strategy that was employed in \\cite{Balasubramanian:2012tu} for scalar spectral functions. The retarded propagator of an operator is determined by computing its expectation value following a perturbation by a delta-function source \\cite{Iqbal:2008by}. The expectation value and the source are identified with certain coefficients in a near-boundary expansion of the dual bulk field. The bulk field is obtained by solving its equation of motion subject to boundary conditions determined by the delta-function source and initial conditions that set the field to zero outside the future lightcone of the source. \n\nTo implement this procedure for a fermionic operator of conformal dimension $\\Delta$ and $U(1)$ charge $q$, we consider the Dirac equation for a bulk fermionic field of mass $m$, \n\\begin{equation}\n(\\slashed{D} - m)\\Psi \\equiv \\gamma^{\\mu}\\partial_{\\mu}\\Psi + \\frac{1}{4} \\gamma^{\\mu} \\omega_{\\mu AB}\\Gamma^{AB}\\Psi - i q \\gamma^{\\mu}A_{\\mu} \\Psi - m \\Psi = 0 \\ , \n\\end{equation} \nin which $\\gamma^\\mu$ are the curved space gamma matrices of the $AdS$ spacetime, $\\omega_{\\mu AB}$ the spin connection, $\\Gamma^A$ the constant bulk gamma matrices in the local tangent frame and $\\Gamma^{AB} = \\frac{1}{2} [\\Gamma^A,\\Gamma^B]$ the antisymmetric generators of Lorentz transformations. \n\nThe conformal dimension is related to the mass of the bulk field by $(\\Delta-3\/2)^{2}=m^{2}$ or $\\Delta_{\\pm}=3\/2\\pm m$. One should keep in mind that the unitarity bound on fermionic CFT operators is $\\Delta>(d-1)\/2=1$. This implies that in the mass range $m<-1\/2$ only the choice $\\Delta=\\Delta_{-}$ is allowed and in the range $1\/2 t_1$ the source has no support and the expectation value of the boundary operator can be easily extracted. The upper component of $G_R$ can be calculated as \n\\begin{equation} \nG_{11}(t_2, t_1) = -i \\lim_{z\\to0} z^{-1} y_{+}(v_2,z) = -i \\lim_{z\\to0} z^{-1} \\frac{\\alpha(v_2,z) +\\beta(v_2,z)}{2} \\ , \n\\end{equation} \nwith a similar definition for the lower component $G_{22}$ coming from the analogous equations for $y_{-}$ and $z_{+}$. \n\n\n\\section{Numerical solution strategy}\n\nTo solve the equations of motions \\eqref{eq:eqms} we can distinguish three cases: a) when the relative time interval $t=t_2-t_1$ occurs entirely before the quench; b) when it extends across the quench and c) when it occurs entirely after the quench. \n\nIn case a) one needs only to consider the pure AdS region of the Vaidya spacetime for which exact analytic expressions are known for the bulk-to-boundary retarded Green's function. \n\n In case b) the same analytic expressions may be used for the AdS portion of the spacetime and in particular on the shell of null dust at $v=0$. We then employ numerical methods to propagate these analytic expressions forwards from the shell at $v=0$ to fill out the remainder of the spacetime outside the horizon. 
\nIn particular, knowing $\\beta$ on a fixed $v$ slice, the first equation in \\eqref{eq:eqms} with suitable boundary conditions can be integrated to obtain the bulk profile for $\\alpha$. Given the bulk profiles for $\\alpha$ and $\\beta$ on a fixed $v$ slice the second equation in \\eqref{eq:eqms} is used to evolve $\\beta$ to a subsequent time. \nTo perform the integration in the radial, $z$, direction we employ a Chebyshev pseudospectral method, combined with an explicit $4^{th}$ order Runge-Kutta method in the $v$ direction. For numerical purposes, it is also necessary to impose a cut-off at late $v_{cut}$ in our numerical integration. To generate the plots in the next section we took $N=27$ Chebyshev points, a Runge-Kutta time step $\\delta v= 0.05$ and a cut-off in the Runge-Kutta integration $v_{cut}=60$. Details of the convergence and accuracy of these methods are provided in appendix A where we show that for the chosen values of parameters the results for the spectral function are convergent to within the resolution of plots shown.\n \nIn case c), although the relevant geometry is the AdS-RN black hole, one does not have access to analytic expressions for the retarded propagator in the mixed representation $G_R(t, k)$. We found a sensible approach in this case to first calculate $G_R(\\omega, k)$ in Fourier space as in \\cite{Liu:2009dm} and to then perform an inverse Fourier transform to find the desired Green's function in the mixed representation. \n\nThe next step to obtain our time-dependent spectral function is to perform the Wigner transformation to Fourier space as described in \\eqref{eq:Wignertrans}. Depending on the conformal dimension of the operator, $G_R(T,t, k)$ can exhibit power law divergences as $t\\rightarrow 0$, which need to be appropriately regulated. To address this problem one can take \n the ratio of $G_R(T,t, k)$ with another analytically known and appropriately regulated function ${\\cal A}(t)$ such that the ratio is finite as $t\\rightarrow 0$. If the Fourier transform of ${\\cal A}(t)$ is also known analytically, then a simple application of the convolution theorem can be used to establish $G_R(T, \\omega, k)$. In this work, Fourier transformations are implemented numerically using discrete Fourier routines. To avoid artefacts associated to the late time cut-off $v_{cut}$ of our numerical evolution, e.g.\\ Wilbraham-Gibbs or ringing phenomena, we include a Lanczos $\\sigma$ factor in our Fourier transformation \\cite{Lanczos}. \n\n\n\\section{Results}\n\nFig.~\\ref{fig:DensityPlots} shows the spectral sum (the trace of the spectral function matrix),\n\\begin{equation}\nA(T, \\omega, k)=-2\\left[ {\\rm{ Im}}(G_{11}(T, \\omega, k))+ {\\rm{ Im}}(G_{22}(T, \\omega, k))\\right] \\ , \n\\end{equation}\nin the $\\{\\omega, k\\}$ plane for a fermionic operator of conformal dimension $\\Delta = 3\/2$ (dual to a bulk field of mass $m=0$) when we quench the system to a near extremal final state: the chemical potential is quenched from $\\mu =0 $ to $\\mu = 1.7$ and the temperature is kept close to zero (recall at zero temperature $Q=\\sqrt{3} \\approx 1.732$).\nA small non-zero temperature has the advantage of stabilising the numerics, while being closer to what one would obtain in a real experimental situation. 
\n \\begin{figure}[ht]\n \\begin{center}\n\\includegraphics[width=0.32 \\textwidth]{densityplotM0Q1-7TavMinusInfty.pdf} \n \\includegraphics[width= 0.32 \\textwidth]{densityplotM0Q1-7Tav-5.pdf} \n\\includegraphics[width=0.32 \\textwidth]{densityplotM0Q1-7Tav-2-5.pdf} \\\\\n \\includegraphics[width=0.32 \\textwidth]{densityplotM0Q1-7Tav0.pdf} \n \\includegraphics[width=0.32\\textwidth]{densityplotM0Q1-7Tav1-5withMu.pdf}\n\\includegraphics[width=0.32\\textwidth]{densityplotM0Q1-7TavPlusInfty.pdf}\n \\end{center}\n\\caption{False colour density plots for spectral sum $A= -2 \\left[ {\\rm{ Im}}(G_{11})+ {\\rm{ Im}}(G_{22})\\right]$ for bulk field of mass $m=0$ when quenched to $Q=1.7$, at (from top to bottom) $T=\\{-\\infty,\n-5,-2.5, 0, 1.5,+\\infty\n\\}$. For reference we plot, in red dashed lines, lightcones centered around $\\omega =0$.} \n \\label{fig:DensityPlots}\n\\end{figure} \n\nThe top central panel of Fig.~\\ref{fig:DensityPlots} shows the result at an early average time $T=-5$ and we see that whilst the density is roughly symmetric around the light cone centered at $\\omega=0$ (indicated in the figure by the dashed red lines), a number of oscillations have already begun to influence the profile. In the top right panel, at $T=-2.5$, the period of the oscillations has roughly doubled, as can be seen more clearly in Fig.~\\ref{fig:SpectralSection}.\n\nIn the bottom left panel, at $T=0$, the oscillations have essentially disappeared, and the spectral function has developed a clear asymmetry between positive and negative frequency branches. In the bottom central panel, at $T=1.5$, it is now clear that the asymptotic behaviour of the peak of the spectral function has shifted down still further, characteristic of a system at finite density.\n\nTaking Fig.~\\ref{fig:DensityPlots} at face value, after a transient regime at short times, characterised by oscillations of the ``critical Dirac cones\", it appears that at longer times a Fermi energy and Fermi surface develop. Observing that the asymptotic ``Dirac cones'' gradually move down, it is tempting to define a notion of time-dependent effective Fermi energy, or equivalently time-dependent effective chemical potential, $\\mu_{eff}(T)$, which measures how much the asymptotic cones have moved down. (This is indicated by the green dashed lightcone in the bottom central panel of Fig.\\ \\ref{fig:DensityPlots}.)\\footnote{At a practical level we numerically calculate the location of the maximum value $\\omega_{\\star}$ of the spectral sum at a fixed large $k$ and use this to define $\\mu_{eff}(T)= k-\\omega_{\\star}$. 
However, since the peaks of the spectral functions have a finite width and a profile that depends on both $Q$ and $\\Delta$, in what follows, to make true comparisons, we always take the ratio of this quantity with the same quantity calculated in the final equilibrium state.}\n \\begin{figure}[h]\n \\begin{center} \n \\includegraphics[width= 0.45\\textwidth]{peakprofilewposM0Q1-7Tav-5.pdf} \\qquad\n \\includegraphics[width= 0.45\\textwidth]{peakprofilewposM0Q1-7Tav-2-5.pdf} \n \\end{center}\n \\caption{Detail of a section of the spectral sum along $\\omega = k-2$ at average times $T=-5$ (left) and $T = -2.5$ (right) highlighting an approximate halving of the frequency of oscillations.}\n \\label{fig:SpectralSection}\n\\end{figure} \n\nFig.~\\ref{fig:omega0Plots} \n\\begin{figure}[h]\n \\begin{center}\n \\includegraphics[width=0.5\\textwidth]{omega0plotsv2.pdf}\n \\end{center}\n\\caption{Plots of spectral sum at $\\omega = -10^{-3}$ for field of mass $m=0$ when quenched to $Q=1.7$ for $T= \\{-5,-2.5,-0.5,0,0.25, 0.5, 1.5,2.5,5, 10,15 \\}$. The higher $T$, the higher the maximal value. Also shown is a line at $k_F \\approx 0.92$, the value of the Fermi momentum established in \\cite{Liu:2009dm}. } \\label{fig:omega0Plots}\n\\end{figure} \nshows the accumulation of a peak at zero frequency for a number of average times. We see for very early times (the curves with lowest peaks in Fig.~\\ref{fig:omega0Plots}) that the spectral function attains its maximum very close to zero momentum, as would be expected at zero density, and exhibits clear oscillations including negative spectral weight. As the average time evolves one finds that these oscillations die out, the location of the maximum at zero frequency migrates to larger values of $k$ and the value at the maximum increases. The edge at larger $k$ sharpens and a clear peak develops with the location of the peak moving to where ultimately a sharp spike at $k=k_F \\approx 0.92$ \\cite{Liu:2009dm} will occur. \n\n\nIn Fig.~\\ref{fig:chempot} \n\\begin{figure}[h]\n \\begin{center}\n \\includegraphics[width=0.5\\textwidth]{effchemv2.pdf}\n \\end{center}\n\\caption{Plots of the relative effective chemical potential $\\Delta \\mu(T) = \\mu_{eff}(T)\/\\mu_{eff}(+\\infty)$ as a function of average time, $T$, for $m=0$ quenches with $Q = 0.5$ (red circles), $1$ (orange squares), $1.25$ (cyan diamonds), $1.7$ (blue triangles). } \\label{fig:chempot}\n\\end{figure} \nand Fig.~\\ref{fig:Deltachempot} \n\\begin{figure}[h]\n \\begin{center}\n \\includegraphics[width=0.5\\textwidth]{deltacomparev2.pdf}\n \\end{center}\n\\caption{Plots of the relative effective chemical potential as a function of average time for quenches with $Q =1$ for different conformal dimensions $\\Delta = 1.5$ (red circles), $\\Delta = 1.35$ (orange squares), $\\Delta = 1.25$ (cyan diamonds), $\\Delta = 1.15$ (blue triangles) and $\\Delta = 1.01$ (green inverted triangles). } \\label{fig:Deltachempot}\n\\end{figure} \nwe show how this effective chemical potential $\\mu_{eff}(T) $ evolves as a function of time for various choices of the net charge density and fermion scaling dimension. At late times, the system seems to settle in an equilibrium (quasi) Fermi liquid with a Fermi surface that obeys the Luttinger volume theorem (which states that the volume enclosed by the Fermi surface is proportional to the fermion charge density) at a finite temperature, chosen to be small compared to the chemical potential in the examples we show.
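To make the peak-extraction definition of $\\mu_{eff}(T)$ given in the footnote above concrete, a minimal Python sketch could look as follows; it assumes the spectral sum has already been evaluated on a frequency grid at one fixed, large momentum, and the parabolic refinement of the discrete maximum is an illustrative choice rather than the procedure used for the figures.
\\begin{verbatim}
import numpy as np

def effective_chemical_potential(omega, A_slice, k_fixed):
    # mu_eff(T) = k - omega_star, with omega_star the location of the
    # maximum of the spectral sum A(T, omega, k) at fixed large k.
    i = int(np.argmax(A_slice))
    omega_star = omega[i]
    if 0 < i < len(omega) - 1:          # parabolic refinement of the peak
        y0, y1, y2 = A_slice[i - 1], A_slice[i], A_slice[i + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            omega_star += 0.5 * (y0 - y2) / denom * (omega[1] - omega[0])
    return k_fixed - omega_star

def relative_mu(mu_eff_T, mu_eff_final):
    # Delta mu(T) = mu_eff(T) / mu_eff(+infinity), as plotted in the figures
    return mu_eff_T / mu_eff_final
\\end{verbatim}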
\nIn Fig.~\\ref{fig:chempot} we examine this effective chemical potential as a function of time for quenches with different values of $Q$ and find several distinct features. As might be anticipated, at $T=0$ the effective chemical potential appears to attain half its final value. For $T>0$ there is a universality in the approach to the final value of chemical potential, while for $T<0$ the onset of chemical potential gain occurs later for larger $Q$. \nFor charges $Q$ of order 1, i.e., comparable to the extremal value $\\sqrt3$, the unit of time in Fig.~\\ref{fig:chempot} is of order the inverse final chemical potential. \nFig.~\\ref{fig:chempot} shows that the effective chemical potential is built up on a time scale of the same order.\nIn Fig.~\\ref{fig:Deltachempot} we examine this effective chemical potential for different values of the fermionic operator conformal dimension (bulk mass) and find a final clear feature: the smaller $\\Delta$, the quicker chemical potential is acquired.\n\nThe effective chemical potential can be viewed as a new non-local probe of thermalization. Probes considered in previous work include equal-time two-point functions, Wilson loops and entanglement entropy \\cite{Balasubramanian:2010ce,Galante:2012pv,Caceres:2012em}, in which case the thermalization time depends on the spatial extent of the probe. If one chose probes of size set by the final chemical potential, one would find thermalization times of the same order as in Fig.~\\ref{fig:chempot}.\n\n\n\\section{Caveats and further discussion}\n\nIn interpreting our results, several caveats need to be taken into account. A first caveat, which we have already mentioned, is that our ``quasi\" Fermi liquid realised at long times is in fact a false vacuum artefact associated with the large $N$ limit. \nAs we already discussed, it is understood that the true equilibrium state is a Fermi liquid which is dual to the electron star in the bulk. The dynamical ``uncollapse'' of the black brane in the electron star requires a quantised description of the geometry: in classical gravity this cannot happen. However, as long as $N$ is not too small we expect a separation of time scales such that the further relaxation from the RN ``quasi'' Fermi liquid to the electron star Fermi liquid will take place at a much later time. We leave this problem of the dynamical formation of the electron star due to quantised geometry as a challenge for holography in general.\n\nNext, while our notions of time-dependent spectral function and of time-dependent effective Fermi energy are well-defined and reduce to the appropriate equilibrium concepts in static situations, we should ask to what extent these quantities are measurable in a lab, and exactly which physical information they carry.\n\nIn equilibrium, spectral functions can be experimentally determined using Angle-Resolved Photoemission Spectroscopy (ARPES). Actually, electrons can only be emitted from occupied states, so what ARPES measures is really the product of the spectral function and the Fermi factor. Technically, this product equals the ``lesser Green function'' in Fourier space, often denoted $G^<(\\omega, k)$ or $D^<(\\omega, k)$; see, for instance, \\cite{Bellac}. Because of the fluctuation-dissipation theorem, knowledge of the spectral function suffices to determine the lesser Green function, and vice versa (at least for nonzero temperature).\n\nAway from equilibrium, time-resolved ARPES is the tool of choice. 
As shown in \\cite{Freericks}, the lesser Green function (which now depends on two moments of time) still carries the relevant information. In quite close analogy to \\eqref{eq:Wignertrans}, it should be Fourier transformed with respect to the relative time coordinate, with the additional complication that the experimental setup introduces time windows for both moments of time. For slowly varying systems, at least some version of the fluctuation-dissipation theorem remains valid \\cite{Keller_Jarrell, Mukhopadhyay:2012hv}. Far from equilibrium, however, the retarded and lesser Green functions are not straightforwardly related (see, for instance, \\cite{CaronHuot:2011dr,Chesler:2011ds, Balasubramanian:2012tu}), making it much less clear how ARPES is related to time-dependent spectral functions. While in current versions of time-resolved ARPES the fluctuation-dissipation relation is a good approximation, this would not be the case for an idealized ARPES experiment performed on our system, which is quenched instantaneously.\n\nOne may wonder about the meaning of the oscillations and of the negative regions in our time-dependent spectral functions. Our present understanding is that these are due to simple interference between the pre-quench and post-quench periods, as was the case for the quenched harmonic oscillator discussed in \\cite{Balasubramanian:2012tu}. One piece of evidence supporting this statement is that the wavelength of the oscillations in $\\omega$ seems to be set by the time $T$ to the quench (see Fig.~\\ref{fig:SpectralSection}). This presumably makes it hard to relate this quantity to practical experiments. Given a system prepared in an out-of-equilibrium state, what one would really like to probe is how it behaves {\\em after} it has been prepared in that state. But in our setup, the retarded Green function with both moments of time after the quench behaves exactly as if the system were in equilibrium (as discussed in \\cite{Bhattacharyya:2009uu} for a thermal quench). So if one defined a time-dependent spectral function using only those ``post-quench'' data (by introducing appropriate time windows), the oscillations and negative regions would disappear. The reason for the equilibrium behaviour is that the post-quench retarded propagator is only sensitive to the bulk geometry outside the shell, which agrees with that of a charged black brane.\n\nNevertheless, our system right after the quench is far from equilibrium: as also mentioned in \\cite{Bhattacharyya:2009uu} (and worked out in detail in \\cite{Bhattacharyya:2009uu, Wu:2012rib, Balasubramanian:2010ce, Hubeny:2007xt,AbajoArrastia:2010yt,Albash:2010mv,Liu:2013iza, Balasubramanian:2011at,Allais:2011ys,Callan:2012ip,Hubeny:2013hz}), spatially nonlocal observables do not immediately take their equilibrium values. Therefore, time-resolved ARPES, which does not probe the retarded propagator but the lesser Green function, should not give equilibrium results either (at least in principle: in practice it may not be easy to have good enough time resolution to see deviations from equilibrium). From this point of view (linear response after the quench being trivial, but time-resolved ARPES not), it would be interesting to compute the lesser Green function for our system.
While it is in principle known how to do this (see, for instance, \\cite{Herzog:2002pc,Skenderis:2008dg}), we leave this computation to future work.\n\n\n\\section*{Acknowledgements}\nWe thank D.~Dudal, N.~Iqbal, V.~Ker\\\"anen and E.~Keski-Vakkuri for useful discussions, M.~Heller for helpful lectures on numerical methods, and Z.~Cao and Y.~Tian for stimulating discussions on the numerical strategy.\nB.C.\\ and J.Z.\\ thank the organizers of the 2014 Amsterdam String Workshop for hospitality during the final stages of this work. This work was supported in part by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P7\/37, by FWO-Vlaanderen through projects G020714N and G.0651.11, by the Vrije Universiteit Brussel through the Strategic Research Program ``High-Energy Physics'' and by the European Science Foundation Holograv Network. J.Z.\\ acknowledges support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. J.V.\\ is Aspirant FWO. D.T.\\ and F.G.\\ are FWO postdocs.\n\n\n\n\\begin{appendix}\n\\section{Numerical convergence}\nWe provide some details illustrating the convergence of the numerical methods used. There are three key variables that influence the numerical accuracy: {\\it i}) $N$, the number of nodes used in the pseudospectral method to integrate in the radial direction; {\\it ii}) $\\delta v$, the time-stepping used in the Runge-Kutta integration in the $v$ direction; {\\it iii}) $v_{cut}$, the final relative time used for the numerical integration. The convergence for each of these is illustrated in Figs.~\\ref{fig:convcheb}--\\ref{fig:convcut} showing the imaginary part of the Fourier transformed Green's function $G_R(\\omega, k)$ for the case of $Q=1.7$, $T=-2.5$ at a fixed $k=0.9$. These values are chosen since they highlight the most challenging parts to obtain numerically. Setting $Q=1.7$, as was used in the bulk of this paper, means that we are at low temperature and hence $G_{R}(t,k)$ is not thermally suppressed. The choice $k=0.9$ corresponds to focusing on the region of momenta space of most interest, where we might anticipate long lived excitations. With $T=-2.5$ we capture the difficult rapidly fluctuating parts of the spectral function arising from transient behaviour in the quench regime. \n\\begin{figure}[ht]\n \\begin{center}\n \\includegraphics[width=0.48 \\textwidth]{ChebconvTavm2p5.pdf}\n \\end{center} \n\\caption{Convergence as number of Chebyshev nodes is varied (we fix the Runge-Kutta step $\\delta v=0.05$ and $v_{cut}=60$). Very good convergence, at the resolution of this plot, is achieved after $N=25$ points. The most sensitive point is the absolute height of the transient negative peak, but even this is well converged at $N=30$. In the paper we used $N=27$ to balance resource use and accuracy. } \\label{fig:convcheb}\n\\end{figure} \n \\begin{figure}[ht]\n \\begin{center}\n \\includegraphics[width=0.48\\textwidth]{RKconvTavm2p5.pdf}\n \\end{center}\n\\caption{Convergence as Runge-Kutta time step $\\delta v= \\frac{1}{5}2^{-n}$ is varied (we fix $N=30$ Chebyshev nodes and $v_{cut}=60$). Very good convergence, at the resolution of this plot the different curves are virtually indistinguishable, is achieved with time stepping $\\delta v = 0.05$ as was adopted in the paper. 
}\\label{fig:convRK}\n\\end{figure} \n \\begin{figure}[ht]\n \\begin{center}\n \\includegraphics[width=0.48\\textwidth]{TcutconvTavm2p5.pdf}\n \\end{center}\n\\caption{Convergence as the final integration time $v_{cut}$ is varied (we fix $N=30$ Chebyshev nodes and the Runge-Kutta step $\\delta v=0.05$). Here the results are more sensitive but are well converged for $v_{cut} = 45$ (the green, blue and cyan lines are essentially coincident). We used $v_{cut}=60$ in the paper. }\\label{fig:convcut}\n\\end{figure} \n\\end{appendix}\n \n \n \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction}}\n\\IEEEPARstart{P}{ath} planning is a method to find an optimal route from the starting point to the target point. It has been widely used in various fields such as robotics \\cite{yu2020path,kumar2020comparison,wohlke2021hierarchies}, drone \\cite{jeauneau2018path,qie2019joint,wang2020multi,wang2020mobile,hayat2020multi,bayerlein2021multi}, military service \\cite{lee2020autonomous,hu2022autonomous}, and self-driving car \\cite{wang2019obstacle,tammvee2021human}. Recently, reinforcement learning (RL) has been mainly studied for the path planning \\cite{lee2020autonomous,wang2020mobile,yao2020path,qiao2020hierarchical,bayerlein2021multi,lin2021collision,cao2021confidence,wohlke2021hierarchies,hu2022autonomous}. To get an optimal solution, it is essential to give enough reward for an agent to reach the goal and to set up a specific environment. Several studies on learning the RL model have proposed to make an agent be robust in a complicated or an unknown environment for the path planning \\cite{li2019deep,yang2020efficient,hu2020voronoi}. However, existing studies have defined one single goal before the learning. That is, the agent's ability to search the path when completed learning can be limited. To make the agent reach a number of goals in a dynamic environment, learning a controllable agent is needed. One of the recent approaches for controlling the agent has a limitation in that the agent can only learn the behavior from trajectories that have been directly experienced \\cite{lee2022learning}. Therefore, the agent can only be under control in the area visited by the agent. In this paper, I focus on learning a fully controllable agent in the path planning using a goal-conditioned RL. Especially, I apply, to the goal-conditioned RL, a bi-directional memory editing and a sub-goals dedicated network to improve the ability to search the path of the agent. \n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.40]{fig1.pdf}\n\\caption{Examples of training and test environment in this study. The experiment was set as a simple version of the two-dimensional grid environment. The starting point and the target point were fixed at the location in the training environment. The policy of the agent is to search the path from the starting point to the target point. (a), The training environment was set for the agent to easily get an optimal route. (b), The test environment was set for the agent to get an optimal route in a difficult way. Several sub-goals including the final goal were given to the agent. }\n\\label{fig1}\n\\vskip -0.2in\n\\end{figure}\n\nIn the goal-conditioned RL, the agent learns the sub-goals, part of the trajectory of the agent \\cite{lee2020weakly,okudo2021subgoal,zhang2021world,chane2021goal,yang2022rethinking}. By learning the sub-goals, eventually, the agent can reach the final goal. 
This method showed good performance mainly in robotics. However, previous studies have focused on reaching a single, complicated goal. In addition, multi-goal RL models, in which the agent can perform many goals, have been proposed \\cite{bai2019guided,zhao2019maximum}. However, the multiple goals should also be defined before the learning. Unlike these studies, in the present study, I propose an RL framework in which the agent can perform a number of sub-goals in various scenarios. An important point here is that the sub-goals are not defined in advance. In other words, the agent that has completed the learning can reach goals that have never been visited during the learning.\n\n\nFig \\ref{fig1} shows examples of the training and test environments of this study. The training environment has been set up simply so that the agent can reach the final goal, whereas the test environments have been set up in a difficult way. The agent should go through the sub-goals and reach the target point, even if the starting point and the target point are set in reverse. Difficult missions, such as a round trip, were also given to the agent. In the test environments, an important point is that the agent has to reach goals that were never visited during the learning. To perform these goals, the agent must be fully controlled by the sub-goals that are customized by the user. \n\n\n Meanwhile, memory editing occurs in our daily lives \\cite{bernstein2009tell,schacter2011memory,fernandez2015benefits,phelps2019memory}. Memory editing helps us to get away from mental illnesses such as trauma \\cite{phelps2019memory}. We can also get precise information by editing untidy memories \\cite{fernandez2015benefits}. A recent study used the concept of memory editing in goal-conditioned RL to make the agent reach sub-goals so that the user can control the agent \\cite{lee2022learning}. However, that agent cannot move to the sub-goals in difficult environments. This is because the memory of the RL model is edited based only on the one direction of the paths actually traversed by the agent. The primary purpose of the RL is to achieve the final goal, so the agent can ignore the sub-goals if they get in the way of the final goal. That is, the agent does not need to go round and round to reach the final goal. In this study, I develop the concept of memory editing toward a fully controllable agent in the path planning. \n \nLet us assume that we walk to a destination (see Fig \\ref{fig0}). From the route that we walked, we can learn how we reached the destination. We can remember intermediate stops on the route and know that the total path consists of these intermediate stops. Further, if we recall our memories backward, we can also find the route back to the starting point. For example, in Fig \\ref{fig0}, when we want to return to the starting point, we know which action we have to perform. However, it is difficult to find an inverse action in a dynamic RL environment. Thus, an inverse module to predict the inverse action is necessary to obtain the exact knowledge to come back to the starting point. \n\nBased on the process in which we recall our memories and obtain knowledge, I utilize a bi-directional memory reminiscence and editing (bi-directional memory editing) to obtain various trajectories. Using these trajectories, the agent can learn various behaviors. Furthermore, I use a dedicated network for learning the sub-goals to improve learning efficiency.
Finally, I present a reward shaping for the shorter path of the agent. Using these techniques, the agent can achieve various sub-tasks as well as the final goal in the path planning environment. The fully controllable agent can be useful in environments where we have to consider a number of variables and assume various scenarios. The main contributions of this article are as follows:\n\n\\begin{itemize}\n\\item Using the bi-directional memory editing, we can obtain various trajectories and can train the agent to perform various tasks based on these trajectories. The agent can be fully controlled so that the agent can reach any point in the path planning environment. \\\\\n\\item I employ the sub-goals dedicated network to improve the efficiency of learning the sub-goals. By separating this network from the policy network, the agent can focus on performing the various sub-goals. \\\\\n\\item I propose the reward shaping for the shorter path of the agent. In the path planning, it is important for the agent not only to reach the final goal but also to reach it within a limited time. By applying the reward shaping in the bi-directional memory editing, the agent can reach the final goal within a shorter time. \\\\\n\\item To the best of our knowledge, this study is the first RL methodology for the path planning in which the agent is fully under control. Therefore, the agent achieves user-defined sub-goals, such as a round trip, in a difficult test environment. Moreover, the agent can move to points that have never been reached by the agent in the training. By using this methodology, we can set up and test various scenarios in the path planning. \n\\end{itemize}\n\n\nThe rest of the paper proceeds as follows. Section 2 presents the background of this study. In Section 3, learning a fully controllable agent is proposed. To do this, I introduce the bi-directional memory editing and the sub-goals dedicated network. Furthermore, I propose the reward shaping for the shorter path of the agent. Section 4 provides the results of the experiments, which confirm whether the agent can successfully perform the sub-goals in various scenarios. Section 5 concludes this paper and calls for future research.\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.6]{fig0.pdf}\n\\caption{ Illustration of the concept of the bi-directional memory editing. After we arrive at the destination, if we recall our memory, we can find out the route for returning to the starting point. Further, we can also obtain knowledge about which action to perform to reach the starting point. }\n\\label{fig0}\n\\end{figure}\n\n\n\n\n\n\\section{Background}\n\\subsection{Path planning}\n\n\nTo get an optimal solution to reach the target point from the starting point, traditionally, optimization methods have been used in the path planning. Several studies using an A* \\cite{yan2018path,chen2020improved}, a genetic algorithm \\cite{zhou2020trajectory}, and a particle swarm optimization \\cite{huang2018uav,chen2021three, wang2022improved} have been proposed. Combining two optimization methods has also been studied \\cite{jamshidi2020analysis}. \n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=\\textwidth]{fig2.png}\n\\caption{Illustration of the proposed method. The red line indicates the route visited by the agent. The blue line indicates the inverse direction of the route. 
The inverse module predicts the action when $s_t$ and $s_{t+1}$ are given, and the predicted action is used to collect the reverse trajectories. In the bi-directional memory editing, various sub-goals are generated and stored in the separated replay memory $(D_s)$. The transition samples from $D_s$ are trained by the sub-goals dedicated network.}\n\\label{fig2}\n\\end{figure*}\n\nRecently, with the development of deep learning, studies on the path planning using the RL have mainly been proposed \\cite{lei2018dynamic,lee2020autonomous,liu2021novel,wang2020mobile,yao2020path,yan2020towards,wang2020multi,qiao2020hierarchical,bayerlein2021multi,lin2021collision,cao2021confidence,wohlke2021hierarchies,hu2022autonomous}. They have supposed the specific scenario and set an environment to apply the agent in the path planning. Especially, they have applied their study to robotics \\cite{wang2020mobile,wohlke2021hierarchies,lin2021collision}, \ndrone \\cite{jeauneau2018path, qie2019joint ,yan2020towards, wang2020multi,hayat2020multi,bayerlein2021multi }, and ship \\cite{chen2019knowledge, guo2020autonomous}. Also, they have focused on the one single goal of the agent to reach the target point, avoiding obstacles. After learning is completed, the agent could reach the goal; however, the agent cannot be under control in the previous studies. That is, the agent can only perform the predefined goal. \n\nIn addition, learning user-defined sub-goals has been proposed \\cite{lee2022learning}. However, the agent was partially under control. The agent could not perform the round trip and only moved the area that was visited by the agent. In this study, I focus on learning the fully controllable agent so that the agent can perform various trips, as shown in Fig \\ref{fig1}.b. \n\n\n\n\\subsection{Goal-conditioned RL}\n\nHindsight experience replay (HER) used the sub-goals and the pseudo rewards to make the RL model converge to the final policy \\cite{andrychowicz2017hindsight}. The reason why learning the sub-goals improves the performance of the RL is that the sub-goals are located on the route to reaching the final goal. Several studies developed the HER for an exploration of the agent \\cite{nguyen2019hindsight, fang2019curriculum,lai2020hindsight }. \n\nThe goal-conditional RL models excel to reach the goal through the intermediate sub-goals and show excellent performance on the robotics problem \\cite{nasiriany2019planning,zhao2019maximum,ghosh2019learning,nair2018visual,eysenbach2019search,bai2019guided,eysenbach2020c,lee2020weakly,okudo2021subgoal,zhang2021world,chane2021goal,yang2022rethinking}. They have focused on searching for meaningful sub-goals and improving the performance of the main policy network (high-level policy network). To reach the desired policy, it is important to induce the agent to reach the landmark of the sub-goals \\cite{nair2018visual,nasiriany2019planning,bai2019guided,zhang2021world,kim2021landmark} and to improve sample efficiency \\cite{nachum2018data,eysenbach2019search,gurtler2021hierarchical}. The existing studies with learning the sub-goals are similar to this study in terms of generating the sub-goals and relabeling the rewards. \n\n\n\nHowever, in this study, the sample efficiency and searching for the valid sub-goals are not essential. By performing the bi-directional memory editing, we can get trajectories two times more than when bi-directional memory editing is not performed. Then, enough sub-goals can be collected for learning the sub-goals. 
Also, the time at which the sub-goals dedicated network, which is separated from the original policy network, is trained does not matter. Whether the sub-goals are trained after or during the learning of the policy network, if sufficient trajectories are gathered, the agent can learn various behaviors and sub-goals. The main purpose of this study is to bring the agent under control so that it can perform various tasks that are not defined in the training environment. After learning is completed, the agent can achieve various user-defined sub-goals as well as the final goal. \n\nI consider a discounted, finite-horizon, goal-conditioned Markov decision process (MDP) defined as a tuple ($\\mathcal{S}, \\mathcal{G}, \\mathcal{A}, p, R, \\gamma, H$), where $\\mathcal{S}$ is the set of states, $\\mathcal{G}$ is the set of goals, $\\mathcal{A}$ is the set of actions, $p(s_{t+1}|s_{t},a_{t})$ is a dynamics function, $R$ is the reward function, $\\gamma \\in [0,1)$ is the discount factor, and $H$ is the horizon. In the goal-conditioned RL, the agent learns to maximize the expected discounted cumulative reward $\\mathbb{E}[ \\sum_{t=1}^{H} \\gamma^{t-1} R(s_t,g,t)]$, where $g$ is the goal. The objective is to obtain a policy $\\pi(a_t|s_t,g,t)$.\n\n\n\n\n\n\\section{Proposed method}\nFig \\ref{fig2} shows a summary of the proposed method. For the fully controllable agent in the path planning, I introduce three simple techniques. First, I propose a bi-directional memory editing to generate various behaviors and sub-goals of the agent. Here, to secure reverse trajectories, an inverse module to predict actions is used. Second, to improve the efficiency of learning, I utilize the sub-goals dedicated network separated from the policy network. Finally, I present a reward shaping for the shorter path of the agent.\n\n\\subsection{Bi-directional memory editing}\nThe memory editing is performed to generate sub-goals of the agent. The sub-goals are generated from the trajectories of the agent, and additional rewards are given to the agent. In previous studies, as the agent begins to recognize the sub-goals, it can achieve the sub-goals and greedily reach the final goal.\n\nIn the path planning, it is important to allow the agent to visit a wider area such that the agent can visit various locations and learn the optimal route to reach the goal. Thus, if we can obtain more trajectories than those actually traversed by the agent, it brings a greater benefit to learning the agent's path-searching ability. In addition to a forward route, the reverse route from the goal to the starting point can be a useful ingredient for the agent to learn various behaviors and sub-goals. To do this, I employ the reverse trajectory to generate various sub-goals and to train a robust RL model by performing the bi-directional memory editing. \n\nFirst, a forward memory editing is performed, which is described in lines 21-24 of Algorithm 1. The sub-goals ($g$) are generated from the forward route, and the states of the sub-goals ($s_{t+1} \\| g$) are constructed. After that, a backward memory editing is performed, which is described in lines 25-29 of Algorithm 1. The reverse transition is $\\{(s_{t+1},a_{t+1}',r_t,s_t)\\}$ whereas the original transition is $\\{(s_t,a_t,r_t,s_{t+1})\\}$. Here, it is difficult to obtain $a_{t+1}'$. The reason is that it is not simple to find the action that leads to $s_t$ when $s_{t+1}$ is given.
I propose to use the inverse module to obtain $\\hat{a}_{t+1}$ by predicting $a_{t+1}'$ when $s_t$ and $s_{t+1}$ are given. Like the forward memory editing, the sub-goals ($g$) and the state of the sub-goals ($s_{t+1} \\| g$) are generated. Finally, the edited memories are stored in the replay memory for the sub-goals ($\\mathcal{D_s}$).\n\nUsing the bi-directional memory editing, we can obtain two routes from the one single route moved by the agent. Beyond being two times the trajectories, it means that the agent can learn various relationships between the actions and the sub-goals. In the path planning, the agent can almost only learn the route that is visited by the agent. For instance, as shown in Fig \\ref{fig1}.(a), the agent can almost learn the leftward and upward directions because of the location of the goal. However, using the bi-directional memory editing, the agent can learn all directions in various locations. Therefore, by learning a number of sub-goals and behaviors, the agent can reach the goals that were never visited in the training.\n\n\n\n\n\n\\begin{algorithm}[t]\n\\begin{algorithmic}[1]\n\\caption{Learning the sub-goals using bi-directional memory editing }\\label{alg:algorithm1}\n\n\\State Initialize policy network parameters $\\theta_{p}$\n\\State Initialize sub-goals dedicated network parameters $\\theta_{g}$\n\\State Initialize inverse module parameters $\\theta_{iv}$\n\\State Initialize replay buffer for original goal $\\mathcal{D} \\leftarrow \\emptyset$\n\\State Initialize replay buffer for sub-goals $\\mathcal{D_s} \\leftarrow \\emptyset$\n\\Procedure{Learning the sub-goals}{}\n\\For{episode = 1, M}\n\\State \\verb|\\\\| Simulation stage.\n\\For{each step}\n\\State Execute an action $s_t,a_t,e_t,s_{t+1} \\approx \\pi_{\\theta}(a_t \\mid s_t)$\n\\State Store transition $\\mathcal{E}\\leftarrow \\mathcal{E} \\cup \\{(s_t,a_t,r_t)\\}$\n\\State Optimize inverse module $\\theta_{iv}$ \n\\EndFor\n\n\n\\If{ $s_{t+1}$ is terminal}\n\\State Compute returns $R_t= \\Sigma^\\infty_{k}\\gamma^{k-t}{r}_k$ in $\\mathcal{E}$\n\\State $\\mathcal{D}\\leftarrow \\mathcal{D}\\cup \\{(s_t,a_t,R_t)\\}$\n\\State Clear episode buffer $\\mathcal{E} \\leftarrow \\emptyset$\n\\EndIf\n\\State\n\n\\State \\verb|\\\\|Bi-directional memory editing\n\\State Generate sub-goals $g$ \n\\State Generate state of sub-goals $s_{t} \\| g$\n \\algorithmiccomment{$\\|$ denotes concatenation}\n\\State Set additional rewards $r_t'$ \n\\State $\\mathcal{D_s}\\leftarrow \\mathcal{D_s}\\cup \\{s_t \\| g,a_t,r_t')\\}$\n\\State Predict the inverse action $\\hat{a_{t+1}} \\leftarrow \\theta_{iv}(s_{t+1},s_t)$ \n\\State Generate reverse transition $\\{(s_{t+1},\\hat{a}_{t+1},r_t,s_{t})\\}$\n\\State Generate sub-goals $g$ \n\\State Generate state of sub-goals $s_{t+1} \\| g$\n\\State Set additional rewards $r_t''$ \n\\State $s_t \\leftarrow s_{t+1}$,$a_t \\leftarrow \\hat{a}_{t+1}$,$r_t \\leftarrow r_t''$\n\\State $\\mathcal{D_s}\\leftarrow \\mathcal{D_s}\\cup \\{s_t \\| g,a_t,r_t')\\}$\n\n\\State\n\\State \\verb|\\\\|Learning stage.\n\\For{k= 1, N}\n\\State Sample a minibatch $\\{(s,a,R)\\}$ from $\\mathcal{D}$\n\\State \\algorithmiccomment{Optimize policy network $\\theta_{p}$ }\n\\EndFor\n\\For{k= 1, P}\n\\State Sample a minibatch $\\{(s \\| g,a,r')\\}$ from $\\mathcal{D_s}$\n\\State \\algorithmiccomment{Optimize sub-goals dedicated network $\\theta_{g}$ }\n\n\\EndFor\n\\EndFor\n\\EndProcedure\n\n\n\\end{algorithmic}\n\\label{algo1}\n\\end{algorithm}\n\n\n\n\n\n\\subsection{The sub-goals dedicated 
network}\nUsing the bi-directional memory editing, the agent can learn various behaviors and sub-goals. However, the agent at the middle of the route can be confused about where the agent has to go. Because the agent is forced to learn how to go in both directions at one point due to the bi-directionally edited memories. For example, in the Fig \\ref{fig0}, if the agent is located near the tree, the agent learns to move both upwards and leftwards. If the agent is trained using the policy network, the agent can be confused about where the agent has to go if the agent is located near the tree. Because the purpose of the policy network is only to train the agent to reach the final goal. Actually, the agent has been hovering around the middle point of the environment in the experiment. Therefore, I employ the network for the sub-goals separately from the network for the final goal. \n\nThe sub-goals dedicated network only learns the sub-goals, and the original policy network only learns the final goal. To do this, I also employ the replay memory for the sub-goals ($\\mathcal{D_s}$) in addition to replay memory for the final goal ($\\mathcal{D}$). Moreover, using the sub-goals dedicated network can improve the sample efficiency. In general, the capacity of the replay memory is limited and the replay memory is updated with the last transition of the agent. Thus, the agent mainly learns the recent transitions and gradually reaches the goal. However, as previously mentioned, using the bi-directional memory editing, we can obtain a number of sub-goals and behavior of the agent. Therefore, using the separated network and the replay memory, we can learn the agent to reach the sub-goals whenever, during or after the policy network learning.\n\nAlthough the agent of the policy network cannot reach the final policy, the agent of the sub-goals dedicated network can reach the various sub-goals as well as the final goal. The users can fully control the agent by collecting various sub-goals from the bi-directional memory editing and by learning the agent on the sub-goals dedicated network. \n\n\n\n\n\\subsection{Reward shaping for the shorter path}\nIn the path planning, one of the important factors is to make the agent reach the destination within a specific period. To improve the agent's ability, it is necessary to give enough rewards according to the steps to reach the target point. However, in this study, I focus on learning the sub-goals, and additional rewards are given to the agent to reach the sub-goals, regardless of the shortest path. Furthermore, I want to confirm that the agent can reach various sub-points in the environment. Thus, I assume that the environment of this study is a sparse reward environment so that the agent does not need to reach the target point in the shortest path. Rather, due to the exploration bonus, it is likely for the agent to delay the one's arrival. Therefore, we propose the reward shaping when the bi-directional editing is performed for the shorter path of the agent in the path planning.\nWhen the bi-directional editing is performed, the additional rewards ($r_t'$) are given with the corresponding sub-goals. 
Here, the rewards are reshaped as follows: \n\\begin{eqnarray}\nr_{tg} & = & r_{t}' + (\\mathrm{dist}_{short} - \\mathrm{dist}_{s_t}),\n\\end{eqnarray}\nwhere $\\mathrm{dist}_{short}$ indicates the number of steps of the shortest possible path from $s_t$ to the current sub-goal ($g_s$), and $\\mathrm{dist}_{s_t}$ indicates the number of steps of the path actually taken by the agent from $s_t$ to the current sub-goal ($g_s$). That is, for each sub-goal, the agent receives a penalty according to the number of steps taken to reach the sub-goal point. If the agent reaches the sub-goal via the shortest path, the agent does not receive any penalty. Otherwise, the agent receives a penalty that grows with the number of extra steps taken to reach the sub-goal point. \n\n\n\n\n\\subsection{Learning a fully controllable agent in the path planning}\nFig \\ref{fig2} shows the summary of the proposed method. The inverse module predicts the action when $s_t$ and $s_{t+1}$ are given in the bi-directional memory editing. Sub-goals are generated from the two trajectories. At this time, the reward shaping for the shorter path is performed. Then, the transitions are stored in the replay memory for the sub-goals ($\\mathcal{D_s}$). The sub-goals dedicated network is trained independently of the policy network, whereas the policy network is only trained for the final policy. After learning is completed, in various scenarios, the agent receives the sub-goals that are defined by the users and tries to achieve the sub-goals as well as the final goal. \n\nAlgorithm 1 shows the procedure of the proposed method in detail. The simulation stage is similar to that of other RL methods except for the inverse module to obtain $\\hat{a}_{t+1}$ by predicting $a_{t+1}'$ when $s_t$ and $s_{t+1}$ are given. In the bi-directional memory editing, the reverse transition $\\{(s_{t+1},\\hat{a}_{t+1},r_t,s_{t})\\}$ is obtained using the inverse module, and various sub-goals are generated and stored in the memory $(\\mathcal{D}_s)$. In the learning stage, the policy network $(\\theta_p)$ is optimized for the final goal and the sub-goals dedicated network $(\\theta_g)$ is optimized for the sub-goals. In fact, the sub-goals dedicated network can equally well be trained separately after the learning of the policy network is completed, provided that the number of edited memories is sufficient. \n\n\n\n\n\n\n\n\n\n\\section{Experiments}\n\\subsection{Experimental setting}\nIn the experiments, I wanted to confirm that the agent can be fully controlled by the sub-goals and that the agent can achieve various sub-goals that were never visited by the agent in the learning. Thus, the training environment was constructed in a simpler way, while the test environment was constructed in a difficult way. I assumed a number of scenarios to test the agent's path-searching ability. Furthermore, I wanted to show the effect of the reward shaping for the shorter path.\n\n Fig \\ref{fig1} shows the first environment. The goal of the agent was set to reach the target point in a simple two-dimensional (2D) grid environment. The reward was 0 except when the agent reached the target point (+30). The RL model was trained for a total of 10,000 episodes. In the test environment, I set a total of 26 scenarios. In each scenario, several sub-goals were given to the agent. At first, the sub-goal nearest to the current location of the agent was given to the agent. Then, if the agent reached the sub-goal, the next sub-goal nearest to the current location was given.
Also, if the agent could not reach the given sub-goal, the next sub-goal nearest to the current location was given.\n In various scenarios, I observed whether the agent reached the various sub-goals and successfully performed difficult missions such as a round trip. Moreover, I compared the proposed method with and without the reward shaping for the shorter path. In each scenario, I calculated the number of steps to reach the target point.\n \n\n\n\n Fig \\ref{fig3} shows the second environment. The environment is the 'key-door domain'. The environment has a total of 4 stages and the agent should go through the bonus point (key) to clear each stage (door). Even if the agent reaches the target point (door), if it has not passed the bonus point (key), the agent cannot advance to the next stage. The reward was set to +10 for a bonus point, -10 for a penalty point, and +100 for the goal, when the agent goes through these points, respectively. This environment was difficult because of the condition that the agent must pass the bonus point to clear each stage and because the environment was defined as a sparse reward setting. The RL model was trained for a total of 50,000 episodes. In the test environment, I set two scenarios. In each stage, two user-defined sub-goals, the bonus point, and the goal were given to the agent as sub-goals in order.\n\n\n\n \\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.35]{fig3.png}\n\\caption{The key-door domain environment. In each stage, the factors (a starting point, a bonus point, a wall, and a target point) were set differently. The agent must go through the bonus point to clear the stage. Even if the agent reaches the target point, if it did not go through the bonus point, the agent cannot advance to the next stage.}\n\\label{fig3}\n\\end{figure}\n\n\\subsection{Base architecture of the RL model}\n\nSelf-imitation learning (SIL) augments an on-policy algorithm so that it can exploit valuable past decisions stored in the replay memory \\cite{oh2018self}. In the learning stage, transitions are sampled and used to train the policy network. At this time, if the transitions of the past are not valuable compared to the current value estimate, the transitions are not exploited. That is, the SIL imitates valuable behaviors of the agent in the past. The authors combined the SIL and the on-policy RL model \\cite{mnih2016asynchronous,schulman2017proximal} and proposed the following off-policy actor-critic loss:\n\n\\begin{eqnarray}\n\\mathcal{L}&=&\\mathbb{E}_{(s,a,R)\\in \\mathcal{D}}[\\mathcal{L}_{policy} + \\beta\\mathcal{L}_{value}] , \\label{eq:sil1} \\\\\n\\mathcal{L}_{policy}&=&-\\log\\pi_{\\theta}(a|s)(R-V_\\theta(s))_+ , \\label{eq:sil2} \\\\\n\\mathcal{L}_{value}& =&\\frac{1}{2} \\parallel(R-V_\\theta(s))_+\\parallel ^2 , \\label{eq:sil3}\n\\end{eqnarray}\nwhere $(\\cdot)_+=\\max(\\cdot,0)$; $\\pi_\\theta$ and $V_{\\theta}(s)$ are the policy network and the value network, respectively, parameterized by $\\theta$. The value loss is controlled by $\\beta \\in \\mathbb{R}^+$. Owing to the $(\\cdot)_+$ operator in the loss, only the transitions in which the past return is larger than the current value estimate are used to train the policy network and the value network.\n\nThe reason why the SIL is used in this study is that the exploitation of valuable transitions is needed. In fact, in this study, an off-policy RL model is necessary to utilize the replay memory \\cite{mnih2015human,schaul2015prioritized,van2016deep}.
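For concreteness, a minimal PyTorch-style sketch of the masked objective in Eqs.~(\\ref{eq:sil1})--(\\ref{eq:sil3}) is given below; the batch handling and the detachment of the clipped advantage in the policy term are implementation choices of this sketch rather than details taken from \\cite{oh2018self}.
\\begin{verbatim}
import torch

def sil_loss(log_prob, value, returns, beta=0.01):
    # (R - V)_+ mask: only transitions whose stored return exceeds the
    # current value estimate contribute (self-imitation of good behavior).
    clipped_adv = torch.clamp(returns - value, min=0.0)
    policy_loss = -(log_prob * clipped_adv.detach()).mean()
    value_loss = 0.5 * (clipped_adv ** 2).mean()
    return policy_loss + beta * value_loss
\\end{verbatim}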
However, in the path planning, to reach the target point, various routes can be obtained. Accordingly, I utilize the off-policy actor-critic RL model to get an effect of both the on-policy and the off-policy. The final RL architecture in this study is the combination of the SIL and the actor-critic network (ASIL).\n\nIn addition, I utilized the random network distillation (RND), which is widely used as an exploration bonus method \\cite{burda2018exploration}. The RND uses two networks: a fixed and random initialized network (target network), and a predictor network trained using the output of the target network. The exploration bonus is given as a difference between the outputs of the two networks. If the agent visits a specific point continually, the exploration bonus is gradually decreased. Otherwise, if the agent visits a novel space, a large exploration bonus will be given to the agent.\n\n\n\n\n\n\n\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.45]{fig4.pdf}\n\\caption{The visualization of the path of the agent for the first environment. A color change from blue to red indicates that the agent has visited more often. \\textbf{a}, The path of the agent of the policy network without learning the sub-goals. The agent easily reached the target point. \\textbf{b}, The path of the agent of policy network with learning the sub-goals using the bi-directional memory editing. The agent was confused by the sub-goals, so that the agent could not reach the final goal. }\n\\label{fig4}\n\\end{figure}\n\n\n\\subsection{Simple 2D grid environment}\nFig \\ref{fig4} shows the visualization of the path that is moved by the agent of the policy network without (a) and with (b) learning the sub-goals. A color change from blue to red means that the agent has visited more often. When the policy network did not learn the sub-goals, the agent of the policy network easily reached the target point, and the agent mostly moved only between the starting point and the target point. That is, the agent mostly moved the left and top areas in the environment because of the location of the target point.\n\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[scale=0.45]{fig6.png}\n\\caption{\\textbf{a}, The visualization of the path of the agent of the policy network with learning the sub-goals. The agent could not learn the sub-goals so the agent had a tendency to just move to the right. \\textbf{b}, The visualization of the path of the agent of the sub-dedicated network with the forward directional memory editing. The agent was trained using a one-directional route. Therefore, the agent tried leftward, even though the given sub-goals were located on the right of the agent. }\n\\label{fig6}\n\\end{figure*}\n\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[scale=0.42]{fig5.png}\n\\caption{The visualization of the path of the agent when the sub-goals were given in the test environment according to the reward shaping. The top and the bottom of the figure indicate the route of the agent without and with the reward shaping, respectively. (a), The agent successfully went through the sub-goals and reached the final goal. (b), In a round-trip environment, the agent also passed all the sub-goals and came back to the starting point. With the reward shaping (bottom), the agent arrived at each sub-goal point and the final goal faster than without the reward shaping (top). 
}\n\\label{fig5}\n\\end{figure*}\n\nIn addition, if the policy network was trained on the sub-goals using the bi-directional memory editing, the agent of the policy network could not reach the target point, as shown in Fig \\ref{fig4}.b. At a specific point, if the agent is trained to go in various directions by the sub-goals, the agent of the policy network was confused about where it has to go. Then, the agent would fail to reach the final goal. In the experiment, the agent just moved to the right area in the environment. This case was repeated every time the network was trained. Notably, in the experiment, the agent of the policy network without learning the sub-goals always could reach the final goal. In the test environment, the agent of the policy network, which learns the sub-goals using bi-directional editing, also failed to reach the sub-goals, as shown in Fig \\ref{fig6}.a. The agent even left the environment as soon as the agent departed, as shown in Fig \\ref{fig6}.a.(left). This is because the agent of the policy network mainly moved to the right area in the environment due to the confusion. \n\n\nWhen the sub-goals dedicated network was trained using the forward directional memory editing only, the agent failed to reach the sub-goals, as shown in Fig \\ref{fig6}.b. It was observed that the agent tried to move in the left direction. The reason is that the agent of the policy network mainly moved to the leftward and the upward direction, and the agent is trained using the one-directional trajectories. \n\nFig \\ref{fig5} shows the result of learning the sub-goals in the test environments without (top) and with (bottom) the reward shaping for the shorter path. The agent was trained from the sub-goals dedicated network using the trajectories, which were collected from the agent of the policy network. In the test environment, I set the sub-goals difficultly, even though all the points including the starting point and the target point were set inversely. The agent successfully reached all the sub-goals and the target points in the experiment, as shown in Fig \\ref{fig5}.a. That is, the agent was able to be fully controlled by the sub-goals, in various scenarios. The agent reached the sub-goals that had never been visited by the agent of the policy network. Interestingly, in extremely hard environments (round trip tasks), as shown in Fig \\ref{fig5}.b, the agent departed from the starting point and went through the sub-goals, and then the agent turned halfway point and came back to the starting point. \n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[scale=0.55]{fig7.png}\n\\caption{The visualization of the path of the agent without and with the reward shaping for the shorter path in 20 scenarios. The number, located in the upper of each figure, indicates the number of steps to reach the target point. The agent of the sub-goals dedicated network arrived at the target point in all scenarios. With a visual inspection, the path of the agent with the reward shaping was shorter than without the reward shaping. In the comparison of the number of steps, it can be seen that the agent with the reward shaping reaches the target point within a short time than the agent without the reward shaping. }\n\\label{fig7}\n\\end{figure*}\n\nIn addition, with the reward shaping for the shorter path, the agent reached the target point faster than without the reward shaping, as shown in Fig \\ref{fig5}. 
With a visual inspection, we can observe that the agent with the reward shaping reached diagonally located sub-goals via a shorter path from the current location. Moreover, the difference between the number of steps to reach the target point with and without the reward shaping was significant. \n\n\n\n\n\n\n\nTo strongly confirm the performance difference between with and without the reward shaping of the proposed method, I additionally constructed 20 scenarios. Fig \\ref{fig7} shows the route of the agent with and without the reward shaping in each scenario. The number in each figure indicates the number of steps to reach the target point. In all scenarios, the agent went through the sub-goals and reached the final goal. It can be seen that the agent was able to be fully under control by the sub-goals. Moreover, except for three scenarios (scenarios 14, 15, and 16), the agent with the reward shaping reached the target point much faster in all the other scenarios. This characteristic was salient in complex environments such as scenarios 1 and 2. The average number of steps with the reward shaping was 338.25, and the average without the reward shaping was 411.45. It was confirmed that the reward shaping shortens the path: without it, the agent needed on average about 21.6\\% more steps to arrive at the destination.\n\n\n\n\n\\subsection{Key-door domain}\nThe agent of the ASIL + RND model reached the final stage within 50,000 episodes only one time out of 10 trials in the 'key-door domain' environment. It was a very difficult environment to clear. This was because of the condition to clear each stage and the sparseness of the reward. I assumed two scenarios in the test environment. Two sub-goals were imposed on the agent differently in each stage. The bonus point and the goal point were also given as sub-goals. The agent was forced to reach the sub-goals first, and after that, it was encouraged to reach the bonus point and the target point. \n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=\\textwidth]{fig8.png}\n\\caption{The visualization of the path of the agent in the key-door domain environment. \\textbf{a}, The path of the agent of the policy network in each stage, when the agent cleared all stages. The agent of the policy network cleared all stages only one time out of 10 trials in total. \\textbf{b}, The path of the agent of the policy network in each stage, when the agent did not clear all stages. \\textbf{c} $\\sim$ \\textbf{d}, The path of the agent of the sub-goals dedicated network, when the sub-goals were given. The sub-goals dedicated network was used when the agent of the policy network failed to clear all stages.}\n\\label{fig8}\n\\end{figure*}\n\n\nFig \\ref{fig8}.a shows the visualization of the path of the agent, a success case of learning the desired policy. If the policy network converges to the desired policy, the agent of the network shows a clear path to the goal of Stage 4. However, in the experiment, the agent of the policy network almost always failed to reach the final stage in the training environment within 50,000 episodes. Fig \\ref{fig8}.b shows the visualization of the path of the agent when the agent of the policy network failed to learn the desired policy. The agent could reach Stage 4, but the agent failed to clear the stage.
\subsection{Key-door domain}
In the `key-door domain' environment, the agent trained with ASIL + RND reached the final stage within 50,000 episodes only once out of 10 trials; it is a very difficult environment to clear because of the condition required to clear each stage and the sparseness of the reward. I considered two scenarios in the test environment, in which two sub-goals were imposed on the agent differently in each stage. The bonus point and the goal point were also given as sub-goals: the agent was first required to reach the sub-goals, and after that it was encouraged to reach the bonus point and the target point (a schematic sketch of this sequential goal assignment is given at the end of this subsection).

\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{fig8.png}
\caption{The visualization of the path of the agent in the key-door domain environment. \textbf{a}, The path of the agent of the policy network in each stage when the agent cleared all stages; the agent of the policy network cleared all stages only once out of 10 trials in total. \textbf{b}, The path of the agent of the policy network in each stage when the agent did not clear all stages. \textbf{c} $\sim$ \textbf{d}, The path of the agent of the sub-goals dedicated network when the sub-goals were given. The sub-goals dedicated network was used when the agent of the policy network failed to clear all stages.}
\label{fig8}
\end{figure*}

Fig \ref{fig8}.a visualizes the path of the agent in the successful case of learning the desired policy. When the policy network converges to the desired policy, the agent shows a clear path to the goal of Stage 4. In the experiment, however, the agent of the policy network almost always failed to reach the final stage in the training environment within 50,000 episodes. Fig \ref{fig8}.b visualizes the path of the agent when the policy network failed to learn the desired policy: the agent could reach Stage 4, but it failed to clear the stage.

In contrast, the agent of the sub-goals dedicated network was able to reach the goal in the final stage, going through the sub-goals and the bonus point in the two scenarios, as shown in Fig \ref{fig8}.c $\sim$ d, even though the agent of the policy network failed, as shown in Fig \ref{fig8}.b. This result means that learning the sub-goals can improve the ability of the agent to reach the final goal, just like in previous studies. Unlike the previous study, I assigned the control of the agent to the sub-goals dedicated network in order to address the path planning problem, in which the ability to move in various directions is necessary. Indeed, in the experiments, the last 300 episodes of the policy network were enough to train the sub-goals dedicated network so that the agent could be kept under control. With the proposed method, we therefore do not need to collect a large number of trajectories to train the agent for path planning.

However, the agent of the sub-goals dedicated network did not follow the shortest path, whereas the agent of the policy network followed an almost shortest path. Furthermore, as in a previous study \cite{lee2022learning}, the agent that learned the sub-goals was sometimes confused when the bonus point was near its current location. The reason is that the agent was trained to go through the bonus point first in order to clear each stage: while learning the sub-goals, the agent was required to move to the sub-goals, but it was also encouraged to move to the bonus point and the target point, and when it was near the bonus point it could obtain a larger reward even without reaching the sub-goals. The agent was thus only partially controllable in a complex environment with various factors that strongly influence its behavior. These remaining issues call for further studies.
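The sequential goal assignment used in these scenarios can be pictured with the following scheduler-style Python sketch, which feeds a goal-conditioned policy one point at a time: first the sub-goals, then the bonus point, and finally the target point. It is purely illustrative; the class and parameter names (\texttt{SubGoalScheduler}, \texttt{tolerance}) are hypothetical, and nothing in such a simple scheduler prevents the conflict discussed above when the bonus point lies close to the agent.

\begin{verbatim}
import math

class SubGoalScheduler:
    """Hypothetical test-time scheduler: feed the goal-conditioned policy
    one point at a time (sub-goals, then bonus point, then target point)."""

    def __init__(self, subgoals, bonus_point, target_point, tolerance=1.0):
        self.goals = list(subgoals) + [bonus_point, target_point]
        self.index = 0
        self.tolerance = tolerance

    def current_goal(self, position):
        # Advance to the next point once the current one has been reached.
        while (self.index < len(self.goals) - 1
               and math.dist(position, self.goals[self.index]) <= self.tolerance):
            self.index += 1
        return self.goals[self.index]

# Usage with a hypothetical goal-conditioned policy `policy(state, goal)`:
#   scheduler = SubGoalScheduler([(2, 7), (8, 3)], (5, 5), (9, 9))
#   goal = scheduler.current_goal(agent_position)
#   action = policy(state, goal)
\end{verbatim}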
\section{Conclusion}
In this paper, I propose a novel RL framework within which the agent can be kept under control in a path planning environment, so that it can reach various sub-goals. An agent that has completed the learning can carry out difficult missions such as a round trip, and it can even reach previously unknown areas. To this end, the bi-directional memory editing and the sub-goals dedicated network were introduced on top of goal-conditioned RL. Through the bi-directional memory editing, we obtain a larger variety of sub-goals and behaviors of the agent, so that the agent becomes more robust in the test environment. In addition, using the sub-goals dedicated network, the agent can perform several behaviors directed by different sub-goals at the same point. It was confirmed that the agent can be fully controlled and can achieve various sub-goals customized by the user in the test environment. Furthermore, the proposed reward shaping for the shorter path improves the path planning ability.

However, in a complex environment with many interacting factors, such as the key-door domain, the agent was confused about whether to head for the sub-goals or the bonus point. Although a fully controllable agent is useful for path planning, factors such as obstacles and a limited number of steps in the environmental setting should also be considered. Furthermore, the reward shaping cannot guarantee the optimal path. Future studies are required towards a fully controllable agent for path planning. I expect that this work will be applied and extended in a variety of domains where a fully controllable agent is needed in various scenarios.


\ifCLASSOPTIONcompsoc
  \section*{Acknowledgments}
\else
  \section*{Acknowledgment}
\fi


\ifCLASSOPTIONcaptionsoff
  \newpage
\fi


\bibliographystyle{IEEEtran}