\section{Inclusive and diffractive DIS}

The intriguing phenomenon of the frequent appearance of large rapidity gaps
in electron-proton collisions at HERA \cite{h1z} has changed our physical
picture of deep inelastic scattering (DIS) to a large extent. The large
rapidity gap events are very difficult to understand in the parton model,
where the struck quark is expected to break up the proton, leading to a
continuous flow of hadrons between the current jet and the proton remnant.

To develop a physical picture of diffractive DIS \cite{heb} it is convenient
to view the scattering process in the proton rest frame. In this frame the
virtual photon fluctuates into partonic states $q\bar{q}$, $q\bar{q}g$, \ldots, which then
scatter off the proton. From the leading twist contributions to inclusive and
diffractive structure functions one obtains the parton distribution functions
in a frame where the proton moves fast. This connection holds for diffractive
as well as non-diffractive processes.

In the following we shall review the present status of our theoretical
understanding of diffractive DIS with emphasis on the close analogy to
inclusive DIS. This is appropriate since diffractive DIS is dominated by
the leading twist contribution, which has been one of the most surprising
aspects of the large rapidity gap events. The scattering of the partonic
fluctuations of the photon off the proton will be treated in the
semiclassical approach. After a comparison of theoretical predictions
with data we shall conclude with a discussion of some open questions.

\subsection*{Inclusive DIS}

Inclusive deep inelastic scattering \cite{esw} is characterized by the
kinematic variables
\begin{equation}
Q^2=-q^2\;,\quad W^2=(q+P)^2\;, \quad x={Q^2\over Q^2 + W^2}\;,
\end{equation}
where $q$ and $P$ are the momenta of the virtual photon and the proton,
respectively. The cross section is determined by the hadronic tensor,
\begin{eqnarray}
W_{\mu\nu}(P,q) &=& {1\over 4\pi} \sum_{X}
 \langle P|J_\nu(0)|X\rangle
 \langle X|J_\mu(0)|P\rangle (2\pi)^4 \delta^{(4)}(q+P-P_X)\nonumber\\
&=&\left(-g_{\mu\nu}+{q_\mu q_\nu\over q^2}\right) F_1(x,Q^2)
 +{1\over \nu}\left(P_\mu - {\nu\over q^2}q_\mu\right)
 \left(P_\nu - {\nu\over q^2}q_\nu\right) F_2(x,Q^2)\;.
\end{eqnarray}
Here $J_\mu(x)$ is the electromagnetic current, $\nu=q\cdot P$, and
spin averaging has been implicitly assumed.

The structure functions are a sum of leading twist and of higher twist
contributions, the latter being suppressed by powers of $Q^2$,
\begin{equation}
F_i(x,Q^2) = F_i^{(LT)}(x,Q^2) + {F_i^{(HT)}(x,Q^2)\over Q^2} + \ldots\;.
\end{equation}
The leading twist term is dominant for $Q^2$ above some value
$Q_0^2$, which is not very well known and frequently chosen to be
${\cal O}$(1 GeV$^2$). However, higher twist contributions are known to
be important for hadronic energies $W^2 \le 4$~GeV$^2$ \cite{mrst2}.

The structure functions $F_i^{(LT)}(x,Q^2)$ can be expressed in terms of
process-independent parton distribution functions,
\begin{equation}
F_i^{(LT)}(x,Q^2) \rightarrow f_i(x,\mu^2) = q(x,\mu^2),\ g(x,\mu^2)\;,
\end{equation}
which depend on $x$ and on the factorization scale $\mu^2$. At small $x$, the
quark distribution is assumed to be the same for all light flavours.
The parton distribution functions $f_i(x,\mu^2)$ obey the perturbative
QCD evolution equations \cite{dglap},
\begin{equation}
\mu^2 {\partial\over \partial\mu^2} f_i(x,\mu^2) =
{\alpha_s\over 2\pi} \sum_j \int_x^1{dy\over y} P_{ij}\left({x\over y}\right)
f_j(y,\mu^2)\;,
\end{equation}
where $P_{ij}(z)$ are the Altarelli-Parisi splitting functions.
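
To make the evolution equation above concrete, the following Python sketch evolves a toy non-singlet quark distribution with the leading-order $P_{qq}$ kernel only; the initial shape, the fixed coupling and the grids are illustrative assumptions of this sketch and are not part of the analysis in the text.

\begin{verbatim}
import numpy as np

# Toy LO non-singlet DGLAP evolution with the P_qq kernel only.
# The plus-prescription is implemented by subtracting q(x) under the
# integral; the endpoint and delta(1-z) pieces are added analytically.
CF, alpha_s = 4.0 / 3.0, 0.2

x = np.logspace(-4, -0.001, 400)          # x grid
q = x**(-0.2) * (1 - x)**3                # assumed input shape at mu_0^2

def dqdt(q, x):
    """d q(x) / d ln(mu^2) for the non-singlet quark distribution."""
    qi = lambda xv: np.interp(xv, x, q)   # linear interpolation of q
    out = np.empty_like(q)
    z = np.linspace(0, 1, 2000)[1:-1]     # quadrature nodes in (0, 1)
    for i, xv in enumerate(x):
        zz = xv + (1 - xv) * z            # map nodes to (x, 1)
        integrand = (1 + zz**2) * (qi(xv / zz) / zz - qi(xv)) / (1 - zz)
        I = np.trapz(integrand, zz)
        # virtual + endpoint pieces from the plus-prescription and the
        # (3/2) CF delta(1-z) term
        I += qi(xv) * (2 * np.log(1 - xv) + xv + xv**2 / 2 + 1.5)
        out[i] = alpha_s / (2 * np.pi) * CF * I
    return out

# Euler steps in t = ln(mu^2 / mu_0^2), evolving up to t = 2
for _ in range(20):
    q += 0.1 * dqdt(q, x)
\end{verbatim}
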
The parton distributions can be directly expressed in terms of
the quark and gluon field operators. For instance, the quark distribution
is given by
\begin{eqnarray}\label{qdincl}
q(x,\mu^2) &=& {1\over 4\pi} \int dx_- e^{-ixP_+x_-/2}\sum_{X}\nonumber\\
&&\hspace{1.2cm}
\langle P|\bar{q}(0,x_-,0_{\perp})U(x_-,\infty)|X\rangle\gamma_+
\langle X|U(\infty,0)q(0,0,0_{\perp})|P\rangle \;,
\end{eqnarray}
where $U(a,b)$ is the colour matrix
\begin{equation}
U(a,b) = P \exp{\left(-{i\over 2}\int_b^a dy_-A_+(0,y_-,0_{\perp})\right)}\;.
\end{equation}
This definition can be used as a starting point for a theoretical,
non-perturbative evaluation of the quark distribution.

\subsection*{Diffractive DIS}

Diffractive DIS can be discussed in close analogy to inclusive DIS. There
are two more kinematical variables which characterize the diffractively
scattered proton: the invariant momentum transfer $t$ and the fraction
$\xi$ of lost longitudinal momentum. A complete set
of variables is
\begin{equation}
t=(P-P')^2\;,\quad \xi\equiv x_{I\!\!P} \;,\quad Q^2=-q^2\;,\quad
M^2=(q+\xi P)^2\;,\quad \beta={Q^2\over Q^2 + M^2}\;.
\end{equation}
Compared to inclusive DIS, the diffractive mass $M$ plays the role of the
total hadronic mass $W$, and $\beta$ corresponds to $x$.

The hadronic tensor for diffractive DIS,
\begin{eqnarray}
W^D_{\mu\nu}(P,P',q) &=& {1\over 4\pi} \sum_{X}
 \langle P|J_\nu(0)|X;P'\rangle
 \langle X;P'|J_\mu(0)|P\rangle
 (2\pi)^4 \delta^{(4)}(q+P-P'-P_X)\nonumber\\
&=&\left(-g_{\mu\nu}+{q_\mu q_\nu\over q^2}\right) F_1^{D(4)}(t,\xi,\beta,Q^2)
 \nonumber\\
&&\quad+{1\over \nu}\left(P_\mu - {\nu\over q^2}q_\mu\right)
\left(P_\nu - {\nu\over q^2}q_\nu\right) F_2^{D(4)}(t,\xi,\beta,Q^2)+\ldots\;,
\end{eqnarray}
defines the diffractive structure functions $F_i^{D(4)}(t,\xi,\beta,Q^2)$.
Integration over $t$, which is dominated by small $|t|$ for diffractive
scattering, yields the extensively studied structure function
\begin{equation}
F_2^{D(3)}(\xi,\beta,Q^2) = \int dt\, F_2^{D(4)}(t,\xi,\beta,Q^2)\;.
\end{equation}

The diffractive structure functions also receive contributions of leading and
higher twist,
\begin{equation}
F_i^{D(3)}(\xi,\beta,Q^2) = F_i^{D(3,LT)}(\xi,\beta,Q^2) +
{F_i^{D(3,HT)}(\xi,\beta,Q^2)\over Q^2} + \ldots\;.
\end{equation}
Again it is unclear above which value $Q^2_0$ the leading twist part
dominates. At small $x$, $W^2\simeq Q^2/x$ should be large enough, whereas
the lower bound on $M^2$ is an open question. Our phenomenological analysis
in the next section will show that the leading twist description breaks
down at $M_0^2 \simeq 4$ GeV$^2$. This again demonstrates that in diffractive
DIS $M^2$ plays a role analogous to $W^2$ in inclusive DIS.

Factorization holds for diffractive DIS just as it does for inclusive DIS \cite{c}.
The diffractive structure functions $F_i^{D(3,LT)}(\xi,\beta,Q^2)$ can be
expressed in terms of `fracture functions' \cite{tre}, or
`diffractive parton distributions' \cite{ber1},
\begin{equation}
F_i^{D(3,LT)}(\xi,\beta,Q^2) \rightarrow {df_i(\xi,\beta,\mu^2)\over d\xi} =
{dq(\xi,\beta,\mu^2)\over d\xi}\;,\; {dg(\xi,\beta,\mu^2)\over d\xi}\;,
\end{equation}
which depend on $\xi$, $\beta$ and the factorization scale $\mu^2$.
The diffractive parton distribution functions $df_i(\xi,\beta,\mu^2)/d\xi$
also obey the perturbative QCD evolution equations,
\begin{equation}\label{ddglap}
\mu^2 {\partial\over \partial\mu^2} {df_i(\xi,\beta,\mu^2)\over d\xi} =
{\alpha_s\over 2\pi} \sum_j \int_\beta^1{db\over b} P_{ij}\left({\beta\over b}\right)
{df_j(\xi,b,\mu^2)\over d\xi}\;.
\end{equation}
Note that the evolution now takes place in $\beta$ and $Q^2$; $\xi$
merely acts as a parameter. The physical reason for this is intuitively clear:
for an arbitrary DIS event the invariant hadronic mass is $W$, and the quark
which couples to the virtual photon can be radiated by a parton whose fraction
of the proton momentum varies from 1 to $x=Q^2/(Q^2+W^2)$. In a diffractive
event, the diffractive invariant mass is $M$. Hence, $W$ is
replaced by $M$, and the quark which couples to the photon can be radiated
by a parton whose fraction of the momentum $\xi P$ varies from 1 to
$\beta=Q^2/(Q^2+M^2)$. Formally, Eq.~(\ref{ddglap}) follows from the fact
that ultraviolet divergences and renormalization are the same for
inclusive and diffractive parton distribution functions \cite{ber2}. This is
apparent from a comparison of the corresponding operator definitions. The
diffractive quark distribution, for instance, is given by \cite{ber2},
\begin{eqnarray}\label{qddiff}
{dq(\xi,\beta,\mu^2)\over d\xi}
&=& {1\over 64\pi^3}\int dt \int dx_- e^{-ixP_+x_-/2}\sum_{X}\nonumber\\
&&\langle P|\bar{q}(0,x_-,\vec{0})U(x_-,\infty)|X;P'\rangle\gamma_+
\langle X;P'|U(\infty,0)q(0,0,\vec{0})|P\rangle \;.
\end{eqnarray}
Assuming `Regge factorization' for the diffractive quark and gluon distribution
functions yields the Ingelman-Schlein model of hard diffractive scattering
\cite{ing}, which can also be applied to deep inelastic scattering \cite{dl}.

The physical interpretation of the diffractive parton distributions is
analogous to that of the inclusive distributions. The function
$df(\xi,b,\mu^2)/d\xi$ is a conditional probability distribution. It describes
the probability density to find a parton $f$, carrying a fraction $\xi b$ of
the proton momentum, under the condition that the proton has lost a fraction
$\xi$ of its momentum in the scattering process.

The formal definition of diffractive parton distributions tells us very
little about their properties, although, comparing Eqs.~(\ref{qdincl}) and
(\ref{qddiff}), one may expect that diffractive DIS is a leading twist
effect. However, the important physics question concerns the relation
between the two types of distribution functions,
\begin{equation}
f_i(x,\mu^2) \longleftrightarrow {df_i(\xi,\beta,\mu^2)\over d\xi}\quad ?
\end{equation}
Both kinds of parton distributions represent non-perturbative properties
of the proton and are therefore not accessible to perturbation theory.
Still, one may hope that at small $x$, i.e. at large hadronic energies $W$,
some simple relations between inclusive and diffractive deep inelastic
scattering may exist. In the following section we shall describe a picture
of hadrons at small $x$ where this is indeed the case.
\section{Semiclassical approach}

The phenomenon of the large rapidity gap events in DIS is very difficult
to understand within the parton model. Naively, one would expect that the
struck quark will always break up the proton, which should lead to a flow
of hadrons between the current jet and the proton remnant without large
gaps in rapidity.

\begin{figure}
\begin{center}
\vspace*{-.5cm}
\parbox[b]{13.7cm}{\psfig{width=10cm,file=semi.eps}}\\
\end{center}
\refstepcounter{figure}
\label{f1}
{\bf Figure \ref{f1}:} Diffractive or non-diffractive DIS in the proton rest
frame; the proton is viewed as a superposition of colour fields with
size $1/\Lambda$.
\end{figure}

The connection between diffractive DIS and ordinary, non-diffractive DIS can
be most easily understood in the proton rest frame, which was frequently used
in the early days of DIS, almost 30 years ago. In this frame, DIS
appears as the scattering of partonic fluctuations of the photon, $q\bar{q}$, $q\bar{q}g$,
etc., off the proton. In the semiclassical approach \cite{bh1} the proton is
viewed as a superposition of colour fields of size $1/\Lambda$ in DIS at small
$x$, i.e. at high $\gamma^* p$ center-of-mass energies. The simplest partonic
fluctuation is a quark-antiquark pair (cf.~Fig.~1). Penetrating the proton,
quark and antiquark change their colour. If the $q\bar{q}$ pair leaves the proton
in a colour singlet configuration, it can fragment independently of the
proton remnant, yielding a diffractive event. A $q\bar{q}$ pair in a colour octet
state will build up a flux tube together with the proton remnant, whose
breakup will lead to an ordinary non-diffractive event.

The scattering amplitude for both types of events is determined by a single
non-perturbative quantity, $\mbox{tr}W_{x_{\perp}}(y_{\perp})$. Here $x_{\perp}$ and
$x_{\perp}+y_{\perp}$ are the transverse positions where quark and antiquark penetrate the
colour field of the proton. The function
\begin{equation}
W_{x_{\perp}}(y_{\perp})=U(x_{\perp})U^{\dagger}(x_{\perp}+y_{\perp})-1\;, \label{wuu}
\end{equation}
with
\begin{equation}
U(x_{\perp}) = P \exp{\left(-{i\over 2}\int_{-\infty}^{\infty}
 dx_-A_+(0,x_-,x_{\perp})\right)}\;,
\end{equation}
is essentially a closed Wilson loop through the corresponding section of
the proton, which measures an integral of the proton colour field strength.
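
To make Eq.~(\ref{wuu}) concrete, the following Python sketch builds the eikonal phase $U(x_{\perp})$ as a path-ordered product over longitudinal slices of a randomly chosen colour field and checks the unitarity identity $\mbox{tr}(WW^{\dagger}) = -2\,\mbox{Re}\,\mbox{tr}\,W$ used below; the Gaussian random su(3) field model and the ordering convention are illustrative assumptions of this sketch.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(n=3):
    """A random su(3)-valued field slice A_+ (traceless Hermitian)."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    a = (a + a.conj().T) / 2
    return a - np.trace(a) / n * np.eye(n)

def wilson_line(slices, dx=0.1):
    """Path-ordered product U = P exp(-(i/2) int dx_- A_+),
    with later slices multiplied on the left (one ordering convention)."""
    U = np.eye(3, dtype=complex)
    for A in slices:
        U = expm(-0.5j * A * dx) @ U
    return U

# the two trajectories at x_perp and x_perp + y_perp see field slices
U1 = wilson_line([random_hermitian() for _ in range(50)])
U2 = wilson_line([random_hermitian() for _ in range(50)])
W = U1 @ U2.conj().T - np.eye(3)

lhs = np.trace(W @ W.conj().T).real     # inclusive colour combination
rhs = -2 * np.trace(W).real             # follows from unitarity of U
assert np.isclose(lhs, rhs)             # tr(W W^dagger) = -2 Re tr W
\end{verbatim}
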
Diffractive DIS requires a colour singlet pair in the final state. Hence the
scattering amplitude is $\propto \mbox{tr}W_{x_{\perp}}(y_{\perp})$ and the diffractive
cross section takes the form,
\begin{equation}\label{dsdiff}
d\sigma^D \propto \int_{x_{\perp}} \ldots |\ldots \mbox{tr}W_{x_{\perp}}(y_{\perp})\ldots|^2\;.
\end{equation}
The inclusive cross section is obtained by summing over all colours, which
yields
\begin{eqnarray}
d\sigma^{incl} &\propto& \int_{x_{\perp}} \ldots \mbox{tr}
 \left(W_{x_{\perp}}(y_{\perp})W^\dagger_{x_{\perp}}(y_{\perp})\right)\ldots \nonumber\\
&\propto& \int_{x_{\perp}} \ldots \mbox{tr}W_{x_{\perp}}(y_{\perp})\ldots \;,\label{dsincl}
\end{eqnarray}
where the last equation follows from the unitarity of the matrix $U(x_{\perp})$.

From Eqs.~(\ref{dsdiff}) and (\ref{dsincl}) one immediately derives the
properties of Bjorken's aligned jet model \cite{bjo}. For small
quark-antiquark separations one has,
\begin{equation}
\int_{x_{\perp}} \mbox{tr}W_{x_{\perp}}(y_{\perp}) \propto y_{\perp}^2 \;.
\end{equation}
Hence, since all kinematical factors are the same for $d\sigma^D$ and
$d\sigma^{incl}$, small $q\bar{q}$ pairs are suppressed in diffractive DIS. For
large pairs of size $1/\Lambda$, the transverse momentum $l'_{\perp}$ and the
longitudinal momentum fractions $\alpha$ and $1-\alpha$ are
\begin{equation}
l'_{\perp} \sim \Lambda\;, \quad \alpha \sim {\Lambda^2\over Q^2}\;,
\quad 1-\alpha \simeq 1\;.
\end{equation}
These are the asymmetric, aligned jet configurations \cite{bjo} which
dominate diffractive DIS.

\begin{figure}
\begin{center}
\vspace*{-.5cm}
\parbox[b]{15cm}{\psfig{width=15cm,file=f2d.eps}}\\
\end{center}
\refstepcounter{figure}
\label{f2d}
{\bf Figure \ref{f2d}:} Diffractive DIS in the proton rest frame (left) and
the Breit frame (right); asymmetric quark fluctuations correspond to
diffractive quark scattering, asymmetric gluon fluctuations to diffractive
boson-gluon fusion.
\begin{center}
\vspace*{-.5cm}
\parbox[b]{13.7cm}{\psfig{width=13.7cm,file=f2.eps}}\\
\end{center}
\refstepcounter{figure}
\label{f2}
{\bf Figure \ref{f2}:} Inclusive DIS in the proton rest frame (left) and the
Breit frame (right); asymmetric fluctuations correspond to quark scattering
(a), symmetric fluctuations to boson-gluon fusion (b).
\end{figure}

\subsection*{Diffractive and inclusive parton distributions}

In the semiclassical approach the evaluation of inclusive and diffractive
structure functions is straightforward, in principle. One has to calculate
the scattering amplitudes for the production of $q\bar{q}$, $q\bar{q}g$, \ldots
configurations \cite{bhm} in an external colour field, analogous to the
production of $\mu^+\mu^-$ pairs in an external electromagnetic field
\cite{bks}, treat the interaction of the fast partons with the non-abelian
colour field in the eikonal approximation \cite{n}, and finally integrate
over all target colour fields.

The result for the leading twist part can be expressed in terms of
diffractive parton distributions \cite{h}.
For the transverse structure
function, for instance, one finds to leading order in the QCD coupling,
\begin{eqnarray}
F_T^D(\xi,\beta,Q^2) &=& 2e_q^2x \int_{\beta}^1 {db\over b} \Bigg\{\left(
\delta(1-z) + {\alpha_s\over 2\pi}\left(P_{qq}(z)\ln{Q^2\over \mu^2} +
\ldots\right) \right){dq(b,\xi,\mu^2)\over d\xi}\nonumber\\
&&\hspace{1cm} + {\alpha_s\over 2\pi}\left(P_{qg}(z)\ln{Q^2\over \mu^2}
+\ldots\right){dg(b,\xi,\mu^2)\over d\xi}\Bigg\}\;,\label{ftd}
\end{eqnarray}
where $z=\beta/b$, and $C_F$ and $T_F$ are the usual colour factors. This
expression is completely analogous to the well-known result for the
inclusive structure function $F_T(x,Q^2)$. In the diffractive case,
$\beta$ plays the role of $x$, whereas $\xi$ only acts as a parameter.
From Eq.~(\ref{ftd}) it is obvious that, as anticipated, the diffractive
parton distributions satisfy the perturbative QCD evolution equations
(\ref{ddglap}).

The diffractive quark and gluon distributions have been determined in
\cite{h}. In terms of Wilson loops in coordinate space, the quark
distribution can be expressed as follows,
\begin{eqnarray}\label{dqd}
{dq(\xi,b,\mu^2)\over d\xi}&=&{2b\over \xi^2(1-b)^3}
 \int{d^2l'_{\perp} {l'_{\perp}}^4\over(2\pi)^6 N_c}
 \int_{y_{\perp},y_{\perp}'} e^{il'_{\perp}(y_{\perp}-y_{\perp}')}\,
 {y_{\perp}\cdot y_{\perp}'\over y\, y'}\nonumber\\
&&\times\,K_1(yN)K_1(y'N)\int_{x_{\perp}}\mbox{tr}W_{x_{\perp}}(y_{\perp})
\mbox{tr}W^{\dagger}_{x_{\perp}}(y_{\perp}')\;,
\end{eqnarray}
where $N_c$ is the number of colours and $N^2={l'_{\perp}}^2{b\over 1-b}$.

It is very instructive to compare diffractive DIS in the proton rest frame
and in the Breit frame (cf.~Fig.~2). The number of partons in the final
state is, of course, the same in both frames. Note, however, that the virtual
parton connected to the proton changes its direction. It appears incoming
in the proton rest frame and outgoing in the Breit frame. Diffractive quark
and gluon distributions correspond to asymmetric $q\bar{q}$ and $q\bar{q}g$ fluctuations
with a slow antiquark and gluon, respectively.

Inclusive parton distributions can be calculated in a similar way. The
inclusive quark distribution is again given by the asymmetric $q\bar{q}$
configuration (cf.~Fig.~3), just with arbitrary colours in the final state.
A special role is played by the inclusive gluon distribution. It
is related to small symmetric $q\bar{q}$ pairs which probe the colour field of the
proton directly (cf.~Fig.~3). Contrary to all other parton distributions,
the inclusive gluon distribution \cite{bgh},
\begin{eqnarray}
xg(x,Q^2) &=& \frac{3\pi}{\alpha_s e_q^2}\,\cdot\,\frac{\partial F_T(x,Q^2)}
{\partial\ln Q^2} \label{ctrans}\\
&=& {1\over 2\pi^2 \alpha_s}\int_{x_{\perp}} \mbox{tr}\left(
\partial_{y_{\perp}}W_{x_{\perp}}(0)\partial_{y_{\perp}}W_{x_{\perp}}^{\dagger}(0)\right)
= {\cal O}\left({1\over \alpha_s}\right)\;,\label{gi}
\end{eqnarray}
is enhanced by an inverse power of $\alpha_s$ in the semiclassical approach.
This is the reason why diffractive DIS is suppressed.
Note that the gluon
distribution is directly related to the cross section for a small
$q\bar{q}$ pair with transverse size $y$ \cite{fms},
\begin{equation}
\sigma_{q\bar{q}} (y;x,Q^2) = {\pi^2\over 3} \alpha_s x g(x,Q^2) y^2 +
{\cal O}(y^4)\;.
\end{equation}

\subsection*{Integration over the target gluon fields}\label{av}

So far we have expressed diffractive and inclusive parton distributions in
terms of Wilson loops which integrate the gluon field strength in the area
between the trajectories of two fast colour charges penetrating the proton.
The integration over the gluon field configurations of the target is a
complicated operation depending on the full details of the non-perturbative
hadronic state. However, in the special case of a very large target, a
quantitative treatment becomes possible under minimal additional assumptions.
The reason is that the large size of a hadronic target, realized, e.g., in an
extremely heavy nucleus, introduces a new hard scale \cite{mv}. From the
target rest frame point of view, this means that the
typical transverse size of the partonic fluctuations of the virtual photon
remains perturbative \cite{hw}, thus justifying the omission of higher Fock
states in the semiclassical calculation.

Within this framework, it is natural to introduce the additional assumption
that the gluonic fields encountered by the partonic probe in distant regions
of the target are not correlated. Thus, one arrives at the situation
depicted in Fig.~\ref{lt}, where a colour dipole passes a large number of
regions, each one of size $\sim 1/\Lambda$, with mutually uncorrelated
colour fields $A_1$ ... $A_n$.

\begin{figure}[ht]
\begin{center}
\vspace*{-.5cm}
\parbox[b]{12cm}{\psfig{width=9cm,file=lt.eps}}\\
\end{center}
\refstepcounter{figure}
\label{lt}
{\bf Figure \ref{lt}:} Colour dipole travelling through a large
hadronic target.
\end{figure}

The crucial assumption that the fields in regions $1$ ... $n$ are
uncorrelated is implemented by writing the integral over all field
configurations as
\begin{equation}
\int_{A}=\int_{A_1}\cdots\int_{A_n}\,\,,\label{int}
\end{equation}
i.e., as a product of independent integrals. Here the appropriate weighting
provided by the target wave functional is implicit in the symbol $\int_A$.
For inclusive and diffractive parton distributions we need the
two colour contractions for products of Wilson loops,
\begin{equation}\label{essence}
\mbox{tr}\left(W_{x_{\perp}}(y_{\perp})W_{x_{\perp}}^{\dagger}(y_{\perp})\right)
\longleftrightarrow \frac{1}{N_c}\mbox{tr}W_{x_{\perp}}(y_{\perp})\mbox{tr}
W_{x_{\perp}}^{\dagger}(y_{\perp})\;.
\end{equation}
This relation, which provides the connection between inclusive and diffractive
DIS, is the essence of the semiclassical approach.

Performing the integration over the colour fields one obtains in the large
$N_c$ limit \cite{bgh},
\begin{eqnarray}
\int_{x_\perp}\int_A\mbox{tr}\left(W_{x_\perp}(y_\perp)
W^{\dagger}_{x_\perp}(y_\perp')\right)&=&\Omega N_c\left(1-e^{-ay^2}-
e^{-ay'^2}+e^{-a(y_\perp-y_\perp')^2}\right)\label{ww0}\,,
\\
\frac{1}{N_c}\int_{x_\perp}\int_A\mbox{tr}W_{x_\perp}(y_\perp)
\mbox{tr}W^{\dagger}_{x_\perp}(y_\perp')&=&\Omega N_c\left(1-
e^{-ay^2}\right)\,\left(1-e^{-ay'^2}\right)\,,\label{wwf}
\end{eqnarray}
where $\Omega = \int d^2x_{\perp}$ is the geometric size of the target and $a$ plays
the role of a saturation scale. Note that according to Eqs.~(\ref{ww0}) and
(\ref{wwf}) the diffractive structure function is not suppressed by a colour
factor relative to the inclusive structure function, in contrast to the
suggestion made in~\cite{bh}.
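
Setting $y_{\perp}=y_{\perp}'$ in Eqs.~(\ref{ww0}) and (\ref{wwf}) gives the inclusive and diffractive dipole cross sections quoted in the final section. A short numeric sketch, with purely illustrative values for $a$ and $\Omega N_c$, shows the suppression of small dipoles and the approach to the black-disc ratio $1/2$:

\begin{verbatim}
import numpy as np

# illustrative values; a is the saturation scale of Eqs. (ww0)/(wwf)
a, Omega_Nc = 1.0, 1.0

y = np.linspace(1e-3, 5.0, 6)     # dipole sizes in units of 1/sqrt(a)
sigma_D    = Omega_Nc * (1 - np.exp(-a * y**2))**2   # singlet projection
sigma_incl = 2 * Omega_Nc * (1 - np.exp(-a * y**2))

ratio = sigma_D / sigma_incl      # = (1 - exp(-a y^2)) / 2
# small dipoles: ratio ~ a y^2 / 2  -> diffraction suppressed
# large dipoles: ratio -> 1/2, the black-disc limit
for yi, r in zip(y, ratio):
    print(f"y = {yi:5.3f}   sigma_D/sigma_incl = {r:.3f}")
\end{verbatim}
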
As an example, consider the inclusive quark distribution. From
Eqs.~(\ref{dqd}), (\ref{essence}) and (\ref{ww0}) one obtains, after changing
the integration variable $\xi$ to $N^2$,
\begin{equation}\label{idiff}
x q(x,\mu^2) = \int_x^1 d\xi {dq(\xi,b=x/\xi,\mu^2)\over d\xi}
= \int_{x_{\perp}} \int_{l'_{\perp}} {d\ xq(x,\mu^2)\over d^2x_{\perp} d^2l'_{\perp}}\;,
\end{equation}
with the unintegrated quark density
\begin{eqnarray}\label{uidiff}
{d\ xq(x,\mu^2)\over d^2x_{\perp} d^2l'_{\perp}} &=&
{N_c\over 32\pi^6} \int_0^{\mu^2} dN^2 N^2
\int_{y_{\perp},y_{\perp}'} e^{il'_{\perp}(y_{\perp}-y_{\perp}')}\,
{y_{\perp}\cdot y_{\perp}'\over y\, y'}\nonumber\\
&&\times\,K_1(yN)K_1(y'N)\left(1-e^{-ay^2}-
e^{-ay'^2}+e^{-a(y_\perp-y_\perp')^2}\right)\;.
\end{eqnarray}
This result has recently been obtained for the quark density in a large
nucleus \cite{mue} by exponentiating the amplitude for a small $q\bar{q}$ pair
scattering off a single nucleon, which is described by two-gluon exchange.
The effect of the varying thickness of the nucleus has also been taken into
account in \cite{mue}, which makes the saturation scale $a$ dependent on the
impact parameter $x_{\perp}$.

A Glauber-type model with two-gluon exchange, similar in
spirit to \cite{got}, has recently been used to study the effect of parton
saturation in inclusive and diffractive DIS \cite{gw}. Although perturbative
two-gluon exchange is a higher twist effect, the contribution from the soft
region can be used as a model for inclusive and diffractive DIS
\cite{nz,bekw}.

Particularly close to the semiclassical approach is the light-cone Hamiltonian
approach to diffractive processes \cite{hks}, which is also based on
diffractive parton densities expressed in terms of expectation values of
products of Wilson lines. For a hadronic target, modelled as a colour singlet
which only couples to one flavour of heavy quarks, diffractive DIS is
dominated by two-gluon exchange. In the semiclassical approach, the proton
is described by a superposition of colour fields. The role of classical
colour fields in the case of high gluon densities was first discussed
by McLerran and Venugopalan for a large nucleus \cite{mv}.

\begin{figure}
\begin{center}
\parbox[b]{7cm}{\psfig{file=difpdf.ps,width=7cm}}\\
\end{center}
\refstepcounter{figure}
\label{figdifpdf}
{\bf Figure \ref{figdifpdf}:}
Diffractive quark and gluon distributions at the initial scale $Q_0^2$ and
after $Q^2$ evolution. From \cite{bgh}.
\end{figure}

\subsection*{Comparison with data}\label{na}

We now use the large hadronic target as a toy model for the proton. If
the proton can be viewed as an ensemble of regions with independently
fluctuating colour fields, the model might even be realistic. We have
explicitly verified that in the semiclassical approach inclusive and
diffractive parton distributions satisfy the DGLAP evolution equations
\cite{dglap}. Hence, we can use the calculated quark and gluon distributions
as non-perturbative input at some scale $Q_0^2$ and determine the
distributions at larger $Q^2$ by means of the evolution equations.

\begin{figure}
\begin{center}
\parbox[c]{6cm}{\psfig{file=difq2.ps,width=6cm} }\parbox[c]{2cm}{\hspace{2cm}}
\parbox[c]{6cm}{\psfig{file=difq2z.ps,width=6cm} }\\
\vspace{0.6cm}
\end{center}
\refstepcounter{figure}
\label{figdifq2}
{\bf Figure \ref{figdifq2}:}
Dependence of the diffractive structure function $F_2^{D(3)}$ on $\beta$
and $Q^2$, compared to data from H1 (left)~\protect\cite{diffh} and ZEUS
(right)~\protect\cite{diffz}. Open data points
correspond to $M^2\leq 4$~GeV$^2$. The charm content of the
structure function is indicated as a dashed line. From \cite{bgh}.
\end{figure}

\begin{figure}
\begin{center}
\parbox[b]{14cm}{\psfig{file=f2dh1.ps,width=14cm}}\\
\end{center}
\refstepcounter{figure}
\label{figf2dh1}
{\bf Figure \ref{figf2dh1}:}
The diffractive structure function $F_2^{D(3)}(\xi,\beta,Q^2)$ at small $\xi$
computed in the semiclassical approach, using the fitted parameters
given in the text.
H1 data taken from~\protect\cite{diffh}. The open data points correspond
to $M^2 \leq 4$~GeV$^2$ and are not included in the fit. From \cite{bgh}.
\end{figure}

For a given colour field, the semiclassical description of parton distribution
functions always predicts an energy dependence corresponding to a classical
bremsstrahlung spectrum: $q(x),g(x)\sim 1/x$. One expects that, in a more
complete treatment, a non-trivial energy dependence is induced, since the
integration over the soft target colour fields encompasses more and more modes
with increasing energy of the probe \cite{bgh}. At present we are unable to
calculate this non-perturbative energy dependence from first
principles. Instead, we choose to parametrize it in the form of a soft,
logarithmic growth of the normalization of diffractive and inclusive parton
distributions with the collision energy $\sim 1/x$, consistent with the
unitarity bound.
This introduces one further parameter, $L$, into the model,
\begin{equation}
\Omega \to \Omega \left(L - \ln x \right)^2.
\end{equation}

Including this energy dependence, one obtains the following compact
expressions for inclusive and diffractive parton distributions \cite{bgh},
\begin{eqnarray}
xq(x,Q_0^2) & = & \frac{a \Omega N_c \left( L - \ln x \right)^2}
{3 \pi^3} \left(\ln\frac{Q_0^2}{a} - 0.6424\right)\; , \label{qiinp}\\
xg(x,Q_0^2) & = & \frac{ 2 a \Omega N_c \left( L - \ln x \right)^2}
{\pi^2 \alpha_s(Q_0^2)}\; , \label{giinp}\\
\frac{dq\left(\beta,\xi,Q_0^2\right)}{d\xi} & = & \frac{a\Omega N_c
(1-\beta)
\left(L - \ln \xi\right)^2}{2\pi^3\xi^2} f_q(\beta)\; , \label{qdinp} \\
\frac{dg\left(\beta,
\xi,Q_0^2\right)}{d\xi} & = & \frac{a\Omega N_c^2 (1-\beta)^2
\left(L - \ln \xi\right)^2}{2\pi^3 \beta \xi^2} f_g(\beta)\; \label{gdinp}.
\end{eqnarray}
These expressions are only applicable in the small-$x$ region, which we define
by $x\leq \xi \leq 0.01$. The functions $f_{q,g}(\beta)$ are parameter-free
predictions. The model does not specify whether, in the diffractive case, the
energy-dependent logarithm should be a function of $x$ or of $\xi$. However,
both prescriptions differ only by terms proportional to $\ln \beta$, which can
be disregarded in comparison with $\ln x$ or $\ln \xi$ in the small-$x$ region.

\begin{figure}[t]
\begin{center}
\parbox[b]{15.5cm}{\psfig{file=f2i.ps,width=15.5cm}}\\
\end{center}
\refstepcounter{figure}
\label{figf2i}
{\bf Figure \ref{figf2i}:}
The inclusive structure function $F_2(x,Q^2)$ at small $x$
computed in the semiclassical approach, using the fitted parameters
given in the text.
Data taken from~\protect\cite{incl}. The data with $Q^2 = 1.5$~GeV$^2$
are not included in the fit. From \cite{bgh}.
\end{figure}

The above equations summarize our input distributions, which depend on
$a$, $\Omega$, $L$, and the input scale $Q_0^2$. At this order, the measured
structure function $F_2$ coincides with the transverse structure function
$F_T$. We assume all three light quark flavours to yield the same
contribution, such that the singlet quark distribution is simply six times the
quark distribution defined above, both in the inclusive and in the diffractive
case,
\begin{equation}
\Sigma (x,Q^2) = 6\, q(x,Q^2)\; , \qquad \frac{d\,\Sigma(\xi,\beta,Q^2)}{d\xi}
= 6\, \frac{dq(\xi,\beta,Q^2)}{d\xi}\; .
\end{equation}
Valence quark contributions are absent in the semiclassical approach, which
does not account for the exchange of flavour quantum numbers between the
proton and the fast moving virtual photon state. Charm quarks are treated
as massive quarks in the fixed flavour number scheme \cite{ffns} (we use
$\Lambda_{{\rm LO},n_f=3}= 144$~MeV, $\alpha_s(M_Z)=0.118$, $m_c=1.5$~GeV,
$m_b=4.5$~GeV, $\mu_c = 2 m_c$). A fit to the data yields for the model
parameters $Q_0^2 = 1.23\ {\rm GeV}^2,\ L = 8.16,\
\Omega = (712\ {\rm MeV})^{-2},\ a = (74.5\ {\rm MeV})^2$. The starting
scale $Q_0^2$ is in the region where one would expect the transition between
perturbative and non-perturbative dynamics to take place; the two
other dimensionful parameters $\Omega L^2$ and $a$ are both of the order
of typical hadronic scales.
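
As a quick arithmetic check of Eqs.~(\ref{qiinp}) and (\ref{giinp}), the sketch below evaluates the inclusive input distributions with the fitted parameters quoted above; the one-loop formula for $\alpha_s(Q_0^2)$ with the $\Lambda_{{\rm LO},n_f=3}$ value given in the text is an assumption of this illustration.

\begin{verbatim}
import numpy as np

# fitted model parameters quoted in the text (GeV units)
Q0_sq = 1.23
L     = 8.16
Omega = 1.0 / 0.712**2        # (712 MeV)^{-2}
a     = 0.0745**2             # (74.5 MeV)^2
Nc    = 3

# one-loop alpha_s with n_f = 3 and Lambda_LO = 144 MeV (from the text)
Lam_sq   = 0.144**2
alpha_s0 = 12 * np.pi / ((33 - 2 * 3) * np.log(Q0_sq / Lam_sq))

x  = 1e-3
xq = a * Omega * Nc * (L - np.log(x))**2 / (3 * np.pi**3) \
     * (np.log(Q0_sq / a) - 0.6424)
xg = 2 * a * Omega * Nc * (L - np.log(x))**2 / (np.pi**2 * alpha_s0)

# gluon-to-singlet ratio (Sigma = 6 q) is x-independent:
print(xg / (6 * xq))   # = pi / (alpha_s0 (ln(Q0^2/a) - 0.6424)) ~ 1.9
\end{verbatim}

The resulting ratio of roughly two anticipates the statement below about the relative size of the inclusive gluon and singlet quark distributions.
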
\n\nThe perturbative evolution of inclusive and diffractive structure functions \nis driven by the gluon distribution, which is considerably larger \nthan the singlet quark distribution in both cases. \nThe ratio of the inclusive singlet quark and gluon distributions can be \nread off from Eqs.~(\\ref{qiinp}) and (\\ref{giinp}). For the obtained fit \nparameters, it turns out that the inclusive gluon distribution is about twice \nas large as the singlet quark distribution.\n\nThe relative magnitude and the $\\beta$ dependence of the diffractive \ndistributions are completely independent of the model parameters. Moreover, \ntheir absolute normalization is, up to the slowly varying factor \n$1\/\\alpha_s(Q_0^2)$, closely tied to the normalization of the \ninclusive gluon distribution. \n\nFigure~\\ref{figdifpdf} displays the diffractive distributions for\nfixed $\\xi=0.003$ and different values of $Q^2$. The $\\beta$ dependences \nof the quark and the gluon distribution at $Q_0^2$ are substantially \ndifferent: the asymmetric quark distribution $\\beta d\\Sigma\/d \\xi$ is peaked \naround $\\beta \\approx 0.65 $, thus being harder than the symmetric \ndistribution $\\beta (1-\\beta)$ suggested in~\\cite{dl}. The gluon distribution\n$\\beta d g\/d \\xi$, on the other hand, approaches a constant for \n$\\beta \\to 0$ and falls off like $(1-\\beta)^2$ at large $\\beta$. \nIn spite of the $(1-\\beta)^2$ behaviour, gluons remain important \neven at large $\\beta$, simply due to the large total normalization of this \ndistribution (the $\\beta$ integral over $\\beta dg\/d\\xi$ at $Q_0^2$ is \napproximately three times the $\\beta$ integral over $\\beta d\\Sigma\/d\\xi$). \nAs a result, the quark distribution does not change with increasing $Q^2$ \nfor $\\beta\\approx 0.5$ and is only slowly decreasing for larger values \nof $\\beta$. \n\nThe dependence of the diffractive structure function on $\\beta$ and $Q^2$ is \nillustrated in Fig.~\\ref{figdifq2}, where the predictions are compared with \ndata from the H1 and ZEUS experiments~\\cite{diffh,diffz} at fixed $\\xi$.\nDisregarding the large-$\\beta$ region, the model gives a good description of \nthe $\\beta$ dependence of the diffractive structure function for all values of\n$Q^2$. It is remarkable that the qualitative features of the $\\beta$ and\n$Q^2$ dependence are also correctly described by the perturbative approach\nof \\cite{hks}. This indicates that the $\\beta$ dependence of the diffractive\nstructure function is to a large extent determined by the kinematics of the\n$q {\\bar q}$\\ and $q {\\bar q}g$\\ fluctuations and only partly sensitive to the details of the \nsoft interaction with the proton. Also the $\\xi$ dependence of the diffractive\nstructure function is rather well described for $M^2 > 4$~GeV$^2$, as\ndemonstrated in Fig.~\\ref{figf2dh1}. It has been demonstrated in \\cite{bekw}\nthat higher twist contributions can account for the data at low $M^2$\n(below 4 GeV$^2$). 
This is analogous to the breakdown of the leading twist
description of the inclusive structure functions, which occurs for similar
invariant hadronic masses, namely $W^2 \lesssim 4$~GeV$^2$ \cite{mrst2}.

Finally, the data of the H1 and ZEUS experiments on the inclusive
structure function $F_2(x,Q^2)$~\cite{incl} are also well reproduced by the
model, as demonstrated in Fig.~\ref{figf2i}.

\section{Open questions}\label{conc}

The theoretical work stimulated by the observation of the large rapidity
gap events at HERA has led to a clear understanding of diffractive DIS
as a leading twist phenomenon. Diffractive parton distribution functions
can be defined in complete analogy to inclusive parton distribution
functions. Both kinds of distribution functions obey the perturbative QCD
evolution equations. The parton distribution functions cannot be calculated
perturbatively; any relation between them reflects a non-perturbative
property of the proton. The leading twist description breaks down at
$W^2<4$ GeV$^2$ and $M^2<4$ GeV$^2$ for inclusive and diffractive DIS,
respectively.

A physical picture of diffractive DIS, and of its relation to inclusive DIS,
is most easily obtained in the proton rest frame, where all DIS processes
correspond to the scattering of partonic fluctuations of the virtual photon
off the proton. In the semiclassical approach the proton is described by a
superposition of colour fields. The qualitative properties of the $\beta$
dependence of the diffractive structure function are well reproduced by
the scattering of colour dipoles off colour fields, which are generated
either by a small colour dipole or by a large nucleus. This supports our
general ideas about diffractive DIS. It will be interesting to see whether a
more precise measurement of the $\beta$ spectrum will be quantitatively
consistent with the idea of a colour field fluctuating independently
in different sections of the proton.

In the semiclassical approach the proton colour field is assumed to be
dominated by soft modes. Hence, diffractive and non-diffractive DIS
events are kinematically very similar at small $x$, i.e. at large hadronic
energies $W$. This leads to an approximate relation between the inclusive
and diffractive structure functions \cite{bh1,b},
\begin{equation}\label{scal}
F_2^{D(3)}(\xi,\beta,Q^2) \sim {1\over \ln{Q^2}} F_2(x=\xi,Q^2)\;,
\end{equation}
which is in broad agreement with data \cite{diffz}.
For fixed momentum transfer $Q^2$ and diffractive mass $M$, both structure
functions have the same dependence on the $\gamma^* p$ center-of-mass energy
$W$. The factor $1/\ln{Q^2}$ reflects the suppression of small colour
dipoles in diffractive scattering.

The dependence of the diffractive structure functions on $\xi$ is not
affected by the perturbative QCD evolution, contrary to the $x$ dependence
of the inclusive structure functions, and is therefore a genuine
non-perturbative property of the proton. Hence, Eq.~(\ref{scal}) can only
hold as long as the effect of the perturbative evolution can be approximated
by a single $\ln{Q^2}$ factor.
The $\xi$ dependence of the diffractive structure
function then plays the role of the non-perturbative input for the inclusive
structure function at some low scale $Q_0^2$.

One expects that, due to unitarity, diffractive and inclusive structure
functions satisfy a relation similar to the one between the elastic and
total proton-proton cross sections \cite{mue1},
\begin{eqnarray}
\sigma_{el} &=& \int d^2b \left(1-S(b)\right)^2\;,\label{undi}\\
\sigma_{tot} &=& 2 \int d^2b \left(1-S(b)\right)\;,\label{unin}
\end{eqnarray}
where $S(b)$ is the S-matrix at a given impact parameter $b$. Recently, it has
been shown that this relation also holds for the diffractive and inclusive
cross sections of a $q\bar{q}$ pair off the proton, if the diffractive cross
section is defined by the colour singlet projection \cite{km}. In the
semiclassical approach,
this relation can be read off from Eqs.~(\ref{dsdiff}) and (\ref{dsincl}) or,
more explicitly, from Eqs.~(\ref{ww0}) and (\ref{wwf}). After integration over
$l'_{\perp}$ in Eqs.~(\ref{idiff}) and (\ref{uidiff}), which yields $y_{\perp}=y_{\perp}'$, one
obtains for the cross sections of a $q\bar{q}$ pair with size $y$,
\begin{eqnarray}
\sigma^D_{q\bar{q}}(y) &\propto&
\frac{1}{N_c}\int_{x_\perp}\int_A\mbox{tr}W_{x_\perp}(y_\perp)
\mbox{tr}W^{\dagger}_{x_\perp}(y_\perp) \nonumber\\
&=& \Omega N_c\left(1-e^{-ay^2}\right)^2\;,\\
\sigma^{incl}_{q\bar{q}}(y) &\propto&
\int_{x_\perp}\int_A\mbox{tr}\left(W_{x_\perp}(y_\perp)
W^{\dagger}_{x_\perp}(y_\perp)\right) \nonumber\\
&=& 2\ \Omega N_c\left(1-e^{-ay^2}\right)\;.
\end{eqnarray}
Since the dependence of the saturation parameter $a$ on the varying thickness
of the target has been neglected, the integration over the impact parameter
$x_{\perp}$ (corresponding to $b$ in (\ref{undi}), (\ref{unin})) could be
carried out, yielding the geometric size $\Omega$ as an overall factor. It
will be interesting to extend these considerations to more complicated
partonic fluctuations.

In the coming years experiments at HERA will provide detailed information
about diffractive final states, including charm and high-$p_{\perp}$ jets.
Anticipating further support of the semiclassical approach by data,
we can hope to learn a lot about the colour structure of the proton from
a comparison of inclusive and diffractive DIS.
\vspace{.4cm}

The content of this progress report is largely based on recent work with
Thomas Gehrmann and Arthur Hebecker, whom I thank for an enjoyable
collaboration.

\section*{Acknowledgements}
The authors acknowledge support from the Defense Advanced Research Projects Agency (DARPA) under the Physics of Artificial Intelligence (PAI) program (contract HR$00111890034$).
Additional computing resources were provided by the University of Notre Dame's Center for Research Computing (CRC).
\section{Sobel filter to estimate spatial gradients}\label{appendix:sobel_filter}
The Sobel filter estimates horizontal and vertical spatial gradients by applying one convolution with the following $3 \times 3$ kernels, respectively:
\[
\mathcal H =
\begin{bmatrix}
 1 & 0 & -1 \\
 2 & 0 & -2 \\
 1 & 0 & -1
\end{bmatrix}, \quad
\mathcal V =
\begin{bmatrix}
 1 & 2 & 1 \\
 0 & 0 & 0 \\
 -1 & -2 & -1
\end{bmatrix}.
\]
Intuitively, it is a smoothed finite difference method.
The convolution operation fits naturally with the CNN representation of solution fields and is highly efficient. The Sobel filter is far more efficient than using automatic differentiation to obtain spatial gradients in the FC-NN parameterization, at the cost of reduced accuracy, especially for locations close to the boundaries.

To improve the accuracy of the gradient estimate on the boundary, we use the following correction.
For a 2D image matrix $\mathbf{I}$ of size $H\times W$, Sobel kernel $\mathcal H$, and correction matrix $M_{\mathcal H}$ of size $W\times W$,
\[
M_{\mathcal H} =
\begin{bmatrix}
 4 & 0 & 0 & & & 0 \\
 -1 & 1 & 0 & & & \\
 0 & 0 & 1 & & & \\
 & & & \cdots & 0 & 0 \\
 & & & & 1 & -1 \\
 0 & & & 0 & 0 & 4 \\
\end{bmatrix},
\]
the horizontal gradient is estimated as $(\mathbf{I} \star \mathcal H) M_{\mathcal H}$, where $\star$ denotes convolution with replicate padding on the boundary. This effectively uses forward finite differences on the left boundary and backward finite differences on the right boundary. The vertical gradient estimate is corrected similarly. We found that this correction reduces the error of the learned solution severalfold. However, errors remain in the four corners, which can be further reduced by a more refined correction.
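
A minimal PyTorch sketch of the basic (uncorrected) Sobel gradient estimate described above is given below; the $1/(8h)$ normalization for grid spacing $h$ is an assumed convention of this sketch, and since \texttt{conv2d} performs cross-correlation rather than true convolution, the kernels are the $180^\circ$ rotations of $\mathcal H$ and $\mathcal V$.

\begin{verbatim}
import torch
import torch.nn.functional as F

def sobel_gradients(field: torch.Tensor, h: float):
    """Estimate (d/dx, d/dy) of a (B, 1, H, W) field.

    Replicate padding mimics the boundary treatment described above;
    the boundary-correction matrix M_H is not included in this sketch.
    """
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = torch.tensor([[-1., -2., -1.],
                       [ 0.,  0.,  0.],
                       [ 1.,  2.,  1.]]).view(1, 1, 3, 3)
    padded = F.pad(field, (1, 1, 1, 1), mode='replicate')
    gx = F.conv2d(padded, kx) / (8.0 * h)   # smoothed central difference
    gy = F.conv2d(padded, ky) / (8.0 * h)
    return gx, gy

# usage: a batch of 64 x 64 pressure fields on the unit square
u = torch.rand(4, 1, 64, 64)
gx, gy = sobel_gradients(u, h=1.0 / 63)
\end{verbatim}
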
Again the CNN can capture the flux field much faster and better than FC-NN, but in this case the pressure field begins to show severe checkerboard artifact that the largest error being larger than that of the pressure solution of FC-NN.\n\\begin{figure}[h!]\n \\centering\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/solve\/fc_mr_pred_epoch500_13_k4096.png}\n \\caption{FC-NN, iteration $500$.}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/solve\/fc_mr_pred_epoch2000_13_k4096.png}\n \\caption{FC-NN, iteration $2000$.}\n \\end{subfigure}\n \n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/solve\/conv_mr_pred_epoch50_13_k4096.png}\n \\caption{CNN, iteration $50$.}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/solve\/conv_mr_pred_epoch500_13_k4096.png}\n \\caption{CNN, iteration $500$.}\n \\end{subfigure}\n\n \\caption{Solving Darcy flow for one sample from GRF KLE4096 under mixed residual loss.}\n \\label{fig:solve_k4096}\n\\end{figure}\n\n\n\n\\section{Details on conditional Glow and supplementary results}\\label{appendix:cglow}\n\\subsection{Details on the network structure}\nIn Fig.~\\ref{fig:glow_msc_sub}, the encoder network includes a cascade of $L$ \\texttt{Dense Block}s (that maintain the feature map size) and $L-1$ \\texttt{Trans Down} layers (that typically half the feature map size, e.g. from $32\\times 32$ to $16\\times 16$). The features extracted after each dense block are treated as input features $\\{\\bm{\\xi}_l\\}_{l=1}^L$. For details of \\texttt{Dense Block}s and \\texttt{Trans Down} layers (encoding layer), please refer to Section~$2.2$ in~\\cite{zhu2018bayesian}.\n\nIn Fig.~\\ref{fig:glow_msc_sub}, the \\texttt{Squeeze} operator rearranges the features of size $C \\times H \\times W$ into $4C \\times \\frac{1}{2} H \\times \\frac{1}{2}W$ if the squeeze factor is $2$. The \\texttt{Split} operator splits out half of the features\/channels as latent variable $\\mathbf{z}_l$, which is diagonal Gaussian parameterized by the other half features $\\mathbf{h}_l$ with a $3 \\times 3$ convolution, stride 1 and zero initialization.\nIn Fig.~\\ref{fig:flow_step}, one step of flow contains an activation normalization layer (\\texttt{ActNorm}), an invertable $1\\times1$ convolution layer and an affine coupling layer. \\texttt{ActNorm} performs affine transformation of the activation with a scale and bias parameter per-channel. \nThe $1\\times 1$ convolution layer with equal number of input and output channels is a learnable permutation operation to mix the two parts of the flow features before passing them to the affine coupling layer.\n\nWe show the detailed computation of the forward and reverse paths of the affine coupling layer (Fig.~\\ref{fig:affine_coupling}) in Table~\\ref{tab:affine_coupling}. The nonlinear transform \\texttt{CouplingNN} includes $3$ dense layers, followed by a $3\\times3$ convolution layer with zero initialization, whose output channels split into two parts, i.e. 
$(\hat{\mathbf{s}}, \mathbf{t})$.
\begin{table}[]
 \caption{Forward (from $\mathbf{y}'$ to $\mathbf{z}'$) and reverse paths of the affine coupling layer, conditioned on the input features $\bm{\xi}_l$ as in Fig.~\ref{fig:affine_coupling}.}
 \label{tab:affine_coupling}
 \centering
 \begin{tabularx}{\textwidth}{ X | X }
 Forward & Reverse
 \\\hline
 \begin{tabular}[t]{@{}l@{}}
 $\mathbf{y}_1', \mathbf{y}_2' = \texttt{split}(\mathbf{y}')$\\
 $\hat{\mathbf{y}}_1 = \texttt{concat}(\mathbf{y}_1', \bm{\xi}_l)$\\
 $(\hat{\mathbf{s}}, \mathbf{t}) = \texttt{CouplingNN}(\hat{\mathbf{y}}_1)$\\
 $\mathbf{s} = \texttt{sigmoid}(\hat{\mathbf{s}} + 2)$\\
 $\mathbf{z}_2' = \mathbf{s} \odot \mathbf{y}_2' + \mathbf{t}$\\
 $\mathbf{z}_1' = \mathbf{y}_1'$\\
 $\mathbf{z}' = \texttt{concat}(\mathbf{z}_1', \mathbf{z}_2')$
 \end{tabular}
 &
 \begin{tabular}[t]{@{}l@{}}
 $\mathbf{z}_1', \mathbf{z}_2' = \texttt{split}(\mathbf{z}')$\\
 $\hat{\mathbf{z}}_1 = \texttt{concat}(\mathbf{z}_1', \bm{\xi}_l)$\\
 $(\hat{\mathbf{s}}, \mathbf{t}) = \texttt{CouplingNN}(\hat{\mathbf{z}}_1)$\\
 $\mathbf{s} = \texttt{sigmoid}(\hat{\mathbf{s}} + 2)$\\
 $\mathbf{y}_2' = (\mathbf{z}_2' - \mathbf{t}) / \mathbf{s}$\\
 $\mathbf{y}_1' = \mathbf{z}_1'$\\
 $\mathbf{y}' = \texttt{concat}(\mathbf{y}_1', \mathbf{y}_2')$
 \end{tabular}
 \end{tabularx}
\end{table}

\subsection{Results on higher input dimension}
We also trained the conditional Glow with $4096$ samples from GRF KLE$256$ on a $32\times 32$ grid. The prediction results for two test inputs are shown in Fig.~\ref{fig:cglow_kle256_grid32}.
\begin{figure}
 \centering
 \begin{subfigure}[b]{0.47\textwidth}
 \centering
 \includegraphics[width=\textwidth]{figs/cflow/kle256_grid32/prediction/pred_epoch400_408.png}
 \caption{Test realization $1$.}
 \end{subfigure}
 ~
 \begin{subfigure}[b]{0.47\textwidth}
 \centering
 \includegraphics[width=\textwidth]{figs/cflow/kle256_grid32/prediction/pred_epoch400_284.png}
 \caption{Test realization $2$.}
 \end{subfigure}

 \caption{Prediction of the multiscale conditional Glow ($\beta=200$) for two test inputs in (a) and (b), which are sampled from GRF KLE$256$ over a $32\times 32$ grid. The predictive mean ($2$nd row) and one standard deviation ($3$rd row) are obtained with 20 output samples. The first row shows three simulated output fields, and the fourth row shows the error between the simulation and the predictive mean. The relative $L_2$ error for the predicted pressure field is $0.019875$, evaluated on 512 test samples from GRF KLE256.}
 \label{fig:cglow_kle256_grid32}
\end{figure}

\section{Conclusions}
\label{sec:Conclusions}
This paper has offered a foray into physics-aware machine learning for surrogate modeling and uncertainty quantification, with emphasis on the solution of PDEs. The most significant contribution of the proposed framework, and simultaneously its biggest difference from other efforts along these lines, is that \textit{no labeled data} are needed, i.e. one does not need to solve the governing PDEs for the training inputs. This is accomplished by incorporating the governing equations appropriately into the loss/likelihood functions. We have demonstrated that convolutional encoder-decoder network-based surrogate models can achieve high predictive accuracy for high-dimensional stochastic input fields.
Furthermore, the generalization performance of the proposed physics-constrained surrogates is consistently better than that of data-driven surrogates for out-of-distribution test inputs. The probabilistic surrogate, built on the flow-based conditional generative model and trained by employing the reverse KL-divergence loss, is able to capture predictive uncertainty, as demonstrated in several uncertainty propagation and calibration tasks.

Many important unresolved tasks have been identified that will be addressed in forthcoming works. They include
(a) extension of this work to surrogate modeling for dynamical systems, (b) improving generalization on out-of-distribution input, e.g. fine-tuning the trained surrogate on test input~\cite{devlin2018bert, radford2018improving}, learned gradient updates~\cite{adler2017solving, hammernik2018learning}, meta-learning on a distribution of regression tasks~\cite{finn2017model}, etc., (c) combining physics-aware and data-driven approaches when only limited simulation data and partially known physics are available~\cite{yang2018physics}, (d) scaling the flow-based conditional generative models to higher dimensions~\cite{2018arXiv181001367G}, (e) developing more reliable probabilistic models, e.g. models able to express what they do not know~\cite{nalisnick2018deep, choi2018generative} by showing larger predictive uncertainty on out-of-distribution input, (f) exploring ways to increase the expressiveness of FC-NNs to better capture the multiscale features of PDE solutions, e.g. by evolving network architectures~\cite{stanley2002evolving}, and (g) exploring the solution landscape with the conditional generative surrogates~\cite{farrell2015deflation}.

\section{Numerical Experiments}\label{sec:numerical_experiments}

\paragraph{Model problem} Steady-state flow in random heterogeneous media is studied as the model problem throughout the experiments, as in Eqs.~(\ref{eq:darcy}),~(\ref{eq:darcy_mixed}),~(\ref{eq:BCs}). We consider the domain $\mathcal S = [0, 1] \times [0, 1]$, where the left and right boundaries are Dirichlet, with pressure values $1$ and $0$, respectively. The upper and lower boundaries are Neumann, with zero flux. The source field is zero.

\paragraph{Dataset} Only input samples are needed to train the physics-constrained surrogates (PCSs). Additional simulated output data for training data-driven surrogates (DDSs) and for evaluating surrogate performance are obtained with FEniCS~\cite{AlnaesBlechta2015a}. Here, we mainly introduce the three types of input datasets: Gaussian random field (GRF), warped GRF, and channelized field.
\begin{figure}
 \centering
 \includegraphics[width=0.9\textwidth]{figs/codec/generalization/input_distributions.png}
 \caption{Samples from $5$ test input distributions over a $64\times 64$ uniform grid, i.e. GRF KLE$512$, GRF KLE$128$, GRF KLE$2048$, warped GRF, channelized field. Log-permeability samples are shown, except for the last, channelized field, which is defined with binary values $0.01$ and $1.0$.}
 \label{fig:generalization_test_input_samples}
\end{figure}

The first input dataset is the exponential of a GRF, i.e. $K(s) = \exp(G(s))$, $G(\cdot) \sim \mathcal{GP}(0, k(\cdot, \cdot))$, where $k(s, s') = \exp(-\norm{s - s'}_2 / l)$ and $l$ is the length scale. The field realization is generated with the Karhunen-Lo\`eve expansion (KLE) using the leading $N$ terms, paired with Latin hypercube sampling. See Section 4.1 in~\cite{zhu2018bayesian} for more details. This type of dataset is called \texttt{GRF KLE$N$}. For the deterministic surrogate experiments in Section~\ref{sec:deterministic_surrogate}, the training input GRF KLE$512$ is generated with length scale $l=0.25$ and $N=512$ leading terms, discretized over a $64 \times 64$ uniform grid, which accumulates $95.04\%$ of the energy. For the probabilistic surrogate in Section~\ref{sec:prob_surrogate}, the parameters of the training input GRF KLE$100$ are $N=100$, $l=0.2$, over a $32 \times 32$ uniform grid.
The test set may have other KLE truncations, but with the same length scale in each case, i.e. $l=0.25$ for $64 \times 64$, and $l=0.2$ for $32\times 32$. The dataset for uncertainty propagation consists of 10,000 input-output data pairs unseen during training.
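
A minimal NumPy sketch of this KLE construction (direct eigendecomposition of the exponential covariance on the grid, with plain Gaussian sampling in place of Latin hypercube sampling) is given below.

\begin{verbatim}
import numpy as np

def sample_grf_kle(n_grid=32, length=0.2, n_kle=100, n_samples=4, seed=0):
    """Sample permeability fields K = exp(G) via a truncated KLE of an
    exponential-kernel GP on a uniform n_grid x n_grid grid."""
    rng = np.random.default_rng(seed)
    s = np.linspace(0, 1, n_grid)
    X, Y = np.meshgrid(s, s)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)       # (n_grid^2, 2)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = np.exp(-dist / length)                           # covariance matrix
    eigval, eigvec = np.linalg.eigh(C)                   # ascending order
    eigval = eigval[::-1][:n_kle]                        # leading N terms
    eigvec = eigvec[:, ::-1][:, :n_kle]
    xi = rng.standard_normal((n_samples, n_kle))         # KLE coefficients
    G = xi @ (np.sqrt(np.maximum(eigval, 0)) * eigvec).T
    return np.exp(G).reshape(n_samples, n_grid, n_grid)

K = sample_grf_kle()
\end{verbatim}
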
A slightly different test input is the \texttt{warped GRF}, where there are two Gaussian fields and the output of the first GRF is the input to the second. The kernel for both GRFs is the squared exponential kernel; the length scale and number of KLE terms are $2$ and $16$ for the first GRF, and $0.1$ and $128$ for the second.

The last type of input field considered is a \texttt{channelized} field. Samples are obtained by cropping $64\times 64$ patches from one large training image~\cite{laloy2018training} of size $2500\times 2500$, or $32\times 32$ patches from the resized $1250\times 1250$ image (resized with nearest-neighbour interpolation). Typical samples of the input datasets considered are shown in Fig.~\ref{fig:generalization_test_input_samples}.

We begin our experiments by solving deterministic PDEs with a spatially-varying coefficient (input) using convolutional decoder networks, and compare with FC-NNs. We then show experiments on surrogate modeling for solving random PDEs, comparing with the data-driven approach. The last part presents experiments using the conditional Glow as our probabilistic surrogate for uncertainty quantification tasks. The code and datasets for this work will become available at \url{https://github.com/cics-nd/pde-surrogate} upon publication.

\subsection{Solving Deterministic PDEs}
\label{sec:solve_det_pde}
In this section, we explore the relative merits of using CNNs and FC-NNs to parameterize the solutions of deterministic PDEs with image-like input fields, including both linear and nonlinear PDEs. Since our focus is on surrogate modeling, the results below are mostly qualitative. The network architectures and training details are described in~\ref{appendix:solve}.

\begin{figure}[h!]
 \centering
 \begin{subfigure}[b]{0.48\textwidth}
 \centering
 \includegraphics[width=\textwidth]{figs/solve/fc_mixed_residual_pred_epoch500_8_k1024.png}
 \caption{FC-NN, iteration $500$.}
 \end{subfigure}
 ~
 \begin{subfigure}[b]{0.48\textwidth}
 \centering
 \includegraphics[width=\textwidth]{figs/solve/fc_mixed_residual_pred_epoch2000_8_k1024.png}
 \caption{FC-NN, iteration $2000$.}
 \end{subfigure}

 \begin{subfigure}[b]{0.48\textwidth}
 \centering
 \includegraphics[width=\textwidth]{figs/solve/conv_mixed_residual_pred_epoch250_8_k1024.png}
 \caption{CNN, iteration $250$.}
 \end{subfigure}
 ~
 \begin{subfigure}[b]{0.48\textwidth}
 \centering
 \includegraphics[width=\textwidth]{figs/solve/conv_mixed_residual_pred_epoch500_8_k1024.png}
 \caption{CNN, iteration $500$.}
 \end{subfigure}

 \caption{Solving Darcy flow for one sample from GRF KLE1024 under the mixed residual loss.
We begin our experiments by solving deterministic PDEs with a spatially-varying coefficient (the input) using convolutional decoder networks, and compare with FC-NNs. We then present experiments on surrogate modeling for random PDEs, comparing with the data-driven approach. The last part covers experiments using the conditional Glow as our probabilistic surrogate for uncertainty quantification tasks. The code and datasets for this work will become available at \url{https://github.com/cics-nd/pde-surrogate} upon publication.


\subsection{Solving Deterministic PDEs}
\label{sec:solve_det_pde}
In this section, we explore the relative merits of using CNNs and FC-NNs to parameterize the solutions of deterministic PDEs with image-like input fields, including both linear and nonlinear PDEs. Since our focus is on surrogate modeling, the results below are mostly qualitative. The network architectures and training details are described in~\ref{appendix:solve}.

\begin{figure}[h!]
    \centering
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/solve/fc_mixed_residual_pred_epoch500_8_k1024.png}
    \caption{FC-NN, iteration $500$.}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/solve/fc_mixed_residual_pred_epoch2000_8_k1024.png}
    \caption{FC-NN, iteration $2000$.}
    \end{subfigure}

    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/solve/conv_mixed_residual_pred_epoch250_8_k1024.png}
    \caption{CNN, iteration $250$.}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/solve/conv_mixed_residual_pred_epoch500_8_k1024.png}
    \caption{CNN, iteration $500$.}
    \end{subfigure}

    \caption{Solving Darcy flow for one sample from GRF KLE1024 under the mixed residual loss. The FC-NN takes much longer to resolve the fine details of the flux fields, while the pressure field does not improve much; the CNN obtains a more accurate solution in a shorter time.}
    \label{fig:solve_k1024}
\end{figure}
\paragraph{Comparison of CNNs and FC-NNs to solve Darcy flow}
We compare the convolutional decoder networks and fully-connected networks presented in Section~\ref{sec:solve_pde} to solve the PDE system in Eq.~(\ref{eq:darcy}). The input permeability field is sampled from GRF KLE$1024$ over a $64 \times 64$ uniform grid. We optimize the CNN and the FC-NN with the mixed residual loss using the L-BFGS optimizer for $500$ and $2000$ iterations, respectively (a minimal sketch of this optimization loop is given below). The results are shown in Fig.~\ref{fig:solve_k1024}. The solution learned with the CNN at iteration $250$ is already better than the solution learned with the FC-NN at iteration $2000$, in terms of accuracy and of retaining the multiscale features of the flux fields. The same phenomenon is observed for input GRFs with other intrinsic dimensionalities. We further experiment on input sampled from the channelized field, as shown in Fig.~\ref{fig:solve_channel}. For this case, however, we observe that the FC-NN fails to converge to a small enough error, in contrast to the CNN.
\begin{figure}[h!]
    \centering
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/solve/conv_mr_pred_epoch500_4105_channel.png}
    \caption{CNN, iteration $500$.}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/solve/fc_mr_pred_epoch2000_4105_channel.png}
    \caption{FC-NN, iteration $2000$.}
    \end{subfigure}
    \caption{Solving Darcy flow for one sample of the channelized field. The same network and training setup as in Fig.~\ref{fig:solve_k1024} are used. The FC-NN parameterization fails to converge.}
    \label{fig:solve_channel}
\end{figure}

The experiments on solving deterministic PDEs show that CNNs can capture the multiscale features of the solution much more effectively than FC-NNs, as reflected by the resolved flux fields. This is mostly because of the difference in how they parameterize a field solution and in how they obtain spatial gradients. FC-NNs tend to generate images that look like light-paintings\footnote{https://distill.pub/2018/differentiable-parameterizations/\#section-xy2rgb}, rather than rugged fields. More broadly, this type of parameterization has been intensively explored under the name of compositional pattern-producing networks~\cite{stanley2007compositional}.
CNNs can represent images with multiscale features quite efficiently, as is evident in our experiments and in the rapid advances in image generation applications.
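For concreteness, the structure of the optimization loop used above for the CNN parameterization can be sketched as follows; \texttt{net}, \texttt{mixed\_residual\_loss} and \texttt{boundary\_loss} are hypothetical names for the decoder network and the loss terms of Section~\ref{sec:LossFnDarcyFlow}.
\begin{verbatim}
import torch

# Hypothetical: `net` is a convolutional decoder mapping a fixed latent
# tensor z to the fields [tau_1, tau_2, u]; `mixed_residual_loss` and
# `boundary_loss` implement the discretized losses.
z = torch.randn(1, 16, 16, 16)             # arbitrary fixed latent code
K = torch.exp(torch.randn(1, 1, 64, 64))   # one permeability realization

optimizer = torch.optim.LBFGS(net.parameters())

for iteration in range(500):
    def closure():
        optimizer.zero_grad()
        tau_u = net(z)                     # (1, 3, 64, 64) prediction
        loss = (mixed_residual_loss(tau_u, K)
                + 10.0 * boundary_loss(tau_u))
        loss.backward()
        return loss
    # L-BFGS may evaluate the closure several times per step.
    optimizer.step(closure)
\end{verbatim}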
Due to the discretization of spatial gradients with Sobel filters, the error of the learned solution is concentrated mainly on the boundaries, and the checkerboard artifact becomes more severe in the pressure field as the flux fields become more rugged, as shown in Fig.~\ref{fig:solve_k4096} in~\ref{appendix:solve}.

\paragraph{Nonlinear flow in porous media}
\begin{figure}[h]
    \centering
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/solve/nonlinear_darcy_conv_mr_pred_epoch500_8_k1024.png}
    \caption{GRF KLE$1024$.}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/solve/nonlinear_darcy_conv_mr_pred_epoch500_1236_channel.png}
    \caption{Channelized field.}
    \end{subfigure}
    \caption{Simulation (FEniCS) and learned solution (prediction) with the CNN for the nonlinear flow, for (a) GRF KLE1024 with $\alpha_1=0.1$ and $\alpha_2=0.1$, and (b) the channelized field with $\alpha_1=1.0$ and $\alpha_2=1.0$.}
    \label{fig:darcy_nonlinear_solve}
\end{figure}
Darcy's law $\bm{\tau} = - K \nabla u$ is a well-established linear constitutive relationship for flow through porous media when the Reynolds number $Re$ approaches zero. It has been shown both theoretically~\cite{mei1991effect} and experimentally~\cite{firdaouss1997nonlinear, rojas1998nonlinear} that the constitutive relation undergoes a cubic transitional regime at low $Re$, followed by a quadratic Forchheimer regime~\cite{forchheimer1901wasserbewegung} when $Re\sim O(1)$. To show that our approach also works for nonlinear PDEs, we consider the following nonlinear correction of Darcy's law:
\begin{equation}\label{eq:constitutive_nonlinear}
    - \nabla u = \frac{1}{K} \bm{\tau} + \frac{\alpha_1}{K^{\frac{1}{2}}} \bm{\tau}^2 + \alpha_2 \bm{\tau}^3,
\end{equation}
where $\alpha_1, \alpha_2$ are usually obtained by fitting to experimental data.
We use CNNs to solve this nonlinear flow with the constitutive Eq.~(\ref{eq:constitutive_nonlinear}), the continuity equation $\nabla \cdot \bm{\tau} = 0$, and the same boundary conditions as in the linear Darcy case. The reference solution is obtained with FEniCS (dual mixed formulation with a Newton solver that converges in $5 \sim 6$ iterations with relative tolerance below $10^{-6}$). We experiment on input fields from GRF KLE$1024$ and the channelized field, with $\alpha_1=0.1$ and $\alpha_2=0.1$ in the first case, and $\alpha_1=1.0$ and $\alpha_2=1.0$ in the second case.
The convolutional decoder network is the same as in the previous section, and is trained with the mixed residual loss. The results are shown in Fig.~\ref{fig:darcy_nonlinear_solve}.

For GRF KLE$1024$, the effect of the cubic constitutive relation is actually to smooth out the flux field in comparison with the linear case in Fig.~\ref{fig:solve_k1024} using the same input field. The nonlinearity of the PDE does not seem to increase the burden of CNN training, apart from a few extra forward and backpropagation operations due to the nonlinear terms in the constitutive equation. This is a negligible cost w.r.t. the computations in the decoder network itself.
However, note that solving nonlinear PDEs with a Newton solver requires $N$ iterations, thus increasing the computation $N$-fold.
For surrogate modeling, the mapping from $K$ to $u$ that the CNN learns is nonlinear even when the PDE to be solved is linear.
We expect that learning a surrogate will be easier in the nonlinear case, due to the smoother output fields. We leave further investigation of surrogate modeling and uncertainty quantification for nonlinear stochastic PDEs to future work.


\subsection{Deterministic Surrogate}\label{sec:exp_det_surrogate}

The experiments in solving deterministic PDEs lead us to choose CNNs over FC-NNs for surrogate modeling, with less training time and comparable accuracy, especially for high-dimensional input. We train both physics-constrained surrogates and data-driven surrogates, and compare their accuracy and generalizability.

\begin{figure}[h]
    \centering
    \includegraphics[width=1\textwidth]{figs/codec.png}
    \caption{Dense convolutional encoder-decoder network as the deterministic surrogate. The model's input is a realization of the random field; its output is the prediction of three fields, i.e. the pressure and the two flux fields. The model is trained with the physics-constrained loss, without target data.}
    \label{fig:codec}
\end{figure}
\paragraph{Network}
A dense convolutional encoder-decoder network~\cite{zhu2018bayesian} is used as the surrogate model, with one input channel $\mathbf{x}$ and three output channels $[\mathbf{u}, \bm{\tau}_1, \bm{\tau}_2]$, as shown in Fig.~\ref{fig:codec}.
The upsampling method in the decoding layers of the current implementation is nearest upsampling followed by convolution, different from the transposed convolution used in the data-driven case. This is essential to avoid the checkerboard artifact\footnote{https://distill.pub/2016/deconv-checkerboard/}, which is partially aggravated by the Sobel filter, on top of the natural tendency of transposed convolutions to produce it.
The resolution of the input fields is reduced by a factor of $4$ through the encoding path, from $64\times 64$ to $16 \times 16$, and then increased to the size of the output fields, $64\times 64$. The numbers of layers in the three dense blocks are $6, 8, 6$, with growth rate $16$. There are $48$ initial feature maps after the first convolution layer.

\paragraph{Training}
We train the PCS with the mixed residual loss as in Eq.~(\ref{eq:mixed_residual_impl}) using only input data, and compare it with the DDS with the same network architecture but trained with additional output data. The number of training data, the mini-batch size and the category of test distributions vary in different experiments, but all cases use $T=512$ test data and employ the Adam~\cite{kingma2014adam} optimizer paired with a one-cycle learning rate schedule\footnote{\url{https://github.com/fastai/fastai/blob/master/fastai/callbacks/one\_cycle.py}} whose maximum learning rate is $0.001$. The mini-batch size ranges from $8$ to $32$, depending on the number of training data. The weight coefficient for the boundary conditions is $\lambda = 10$.
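A minimal sketch of this training setup follows; \texttt{model}, \texttt{train\_loader}, \texttt{mixed\_residual\_loss} and \texttt{boundary\_loss} are hypothetical names, and PyTorch's \texttt{OneCycleLR} stands in for the fastai one-cycle callback used in our runs.
\begin{verbatim}
import torch

# Hypothetical: `model` is the dense encoder-decoder surrogate and
# `train_loader` yields mini-batches of input fields only (no outputs).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, epochs=300,
    steps_per_epoch=len(train_loader))

lam = 10.0  # weight on the boundary-condition loss
for epoch in range(300):
    for x in train_loader:          # (batch, 1, 64, 64) input fields
        optimizer.zero_grad()
        y_hat = model(x)            # (batch, 3, 64, 64) predictions
        loss = mixed_residual_loss(y_hat, x) + lam * boundary_loss(y_hat)
        loss.backward()
        optimizer.step()
        scheduler.step()
\end{verbatim}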
The evaluation metrics for the predictions are the relative $L_2$ error and the $R^2$ score,
\begin{equation}
    \epsilon_j = \frac{1}{T} \sum_{i=1}^T \frac{\norm{\hat{\mathbf{y}}_j^{(i)} - \mathbf{y}_j^{(i)}}_2}{\norm{\mathbf{y}_j^{(i)}}_2}, \qquad R^2_j = 1 - \frac{\sum_{i=1}^T \norm{\hat{\mathbf{y}}_j^{(i)} - \mathbf{y}_j^{(i)}}_2^2}{\sum_{i=1}^T \norm{\mathbf{y}_j^{(i)} - \bar{\mathbf{y}}_j}_2^2},
\end{equation}
where $\hat{\mathbf{y}}_j^{(i)}$ is the surrogate prediction of the $j$-th output channel/field ($j=1,2,3$ for the pressure, horizontal flux and vertical flux fields, respectively), $\mathbf{y}_j^{(i)}$ is the corresponding simulator output, $\bar{\mathbf{y}}_j=\frac{1}{T}\sum_{i=1}^T \mathbf{y}_j^{(i)}$, $T$ is the total number of test inputs, and $\norm{\cdot}_2$ is the $L_2$ norm. We mainly use the relative $L_2$ error as the evaluation metric.
The PCS is trained for $300$ epochs and the DDS for $200$ epochs, since the DDS generally converges faster than the PCS, as shown in Fig.~\ref{fig:training_curve}.

\begin{figure}[h]
    \centering
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/codec/mixed/test_nrmse_ntrain8192_run8_1.pdf}
    \caption{Test relative $L_2$ error.}
    \label{fig:test_nrmse_training_curve}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/codec/mixed/test_r2_ntrain8192_run8_1.pdf}
    \caption{Test $R^2$ score.}
    \label{fig:test_r2_training_curve}
    \end{subfigure}
    \caption{Test relative $L_2$ error and $R^2$ score during training. The solid lines show the errors for the PCSs and the dashed lines those for the DDSs. Both surrogates are trained on $8192$ samples of GRF KLE$512$ and tested on the same $512$ samples of GRF KLE$512$.}
    \label{fig:training_curve}
\end{figure}



\paragraph{Prediction}

To show that the physics-constrained approach to surrogate learning works well, we train the PCS on two datasets, i.e. GRF KLE$512$ ($8192$ samples) and channelized fields ($4096$ samples), respectively.
Prediction examples of the PCS for test GRFs and channelized fields are shown in Fig.~\ref{fig:PCS_mixed_res_pred_examples}.
\begin{figure}[h!]
    \centering
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/codec/mixed/kle512/pred_epoch300_30.png}
    \caption{GRF KLE$512$, test $1$.}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/codec/mixed/kle512/pred_epoch300_34.png}
    \caption{GRF KLE$512$, test $2$.}
    \label{fig:PCS_mixed_res_pred_examples_kle512_sample2}
    \end{subfigure}

    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/codec/mixed/channel/pred_epoch300_52.png}
    \caption{Channelized, test $1$.}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/codec/mixed/channel/pred_epoch300_23.png}
    \caption{Channelized, test $2$.}
    \end{subfigure}

    \caption{Prediction examples of the PCS under the mixed residual loss.
(a) and (b) are $2$ test results for the PCS trained with $8192$ samples of GRF KLE$512$; (c) and (d) are $2$ test results for the PCS trained with $4096$ samples of channelized fields.}
    \label{fig:PCS_mixed_res_pred_examples}
\end{figure}


We show the test relative $L_2$ error and $R^2$ score during training in Fig.~\ref{fig:training_curve}. Overall, the PCS takes longer to converge than the DDS, which is reasonable since the PCS has to solve the PDE and learn the surrogate mapping at the same time. Compared with the DDS, the accuracy of the PCS's predictions of the pressure field is similar when trained with the same number of data, but its predictions of the flux fields are worse. For the latter, the evaluation metric is dominated by the error on the boundary, which is induced by the approximation of the spatial derivatives. The predictions away from the boundary, however, are accurate, as shown in Fig.~\ref{fig:PCS_mixed_res_pred_examples}. Also, the relative $L_2$ error is more sensitive than $R^2$ when the error is small, which can be seen by comparing Figs.~\ref{fig:test_nrmse_training_curve} and~\ref{fig:test_r2_training_curve}.



\remark{The quantitative results are reported mainly for the pressure field, not the flux fields, even though we use the mixed formulation loss to train the model. Using the loss functions in Eqs.~(\ref{eq:loss_det_surr}) and~(\ref{eq:loss_det_surr_data}), we observe that the DDS focuses more on the flux fields than on the pressure field, whereas the PCS has better predictability of the pressure field, which is often desirable. For the PCS trained with the mixed formulation, we can either output the pressure and flux fields directly, or re-compute the flux fields from the predicted pressure field using the constitutive equation. Another reason for using the mixed residual loss over the primal variational loss is the better predictive accuracy for the pressure field.}

\begin{figure}[t]
    \centering
    \includegraphics[width=0.9\textwidth]{figs/codec/nrmse_pressure_grf_kle512.pdf}
    \caption{The relative $L_2$ error of the predicted pressure field for physics-constrained and data-driven surrogates trained with $512, 1024, 2048, 4096, 8192$ GRF KLE$512$ data, each with $5$ runs to obtain the error bars. The test set contains $512$ samples from GRF KLE$512$ as well. We emphasize that training the DDS requires an equal number of output data, i.e. solutions of the governing PDE. The reference for the relative $L_2$ error is simulated with FEniCS.}
    \label{fig:nrmse_pressure}
\end{figure}
\paragraph{Varying the number of training inputs} We train the PCS with different numbers of samples from GRF KLE$512$, and compare its predictive performance against the DDS in Fig.~\ref{fig:nrmse_pressure}. From the figure, the relative $L_2$ error decreases as the PCS is trained with more input data. While this is not surprising, it shows the convergence behavior of the physics-constrained learning approach. Moreover, the PCS achieves a relative $L_2$ error of the predicted pressure field similar to that of the DDS when there are enough training input samples, and an even lower one when the number of training input samples is $8192$.

The common requirement for data-driven modeling of physical systems is data efficiency, since \textit{expensive} simulated output data are needed to supervise the training. Taking~\cite{zhu2018bayesian} as an example, the number of training data is often less than $1024$. Admittedly, the comparison here is not entirely appropriate:
the DDS does not require physics, while the PCS does not require output data.
Overall, Fig.~\ref{fig:nrmse_pressure} suggests that with physical knowledge we can achieve predictive performance \textit{comparable} to the state-of-the-art DDS \textit{without} any simulation output (using only samples from the random input).


\begin{figure}[t]
    \centering
    \includegraphics[width=0.9\textwidth]{figs/codec/generalization/pressure_ntrain8192.pdf}
    \caption{Generalization to new input distributions, i.e. GRF KLE$128$, KLE$512$ (interpolation), KLE$2048$, warped GRF, and channelized fields. The surrogates are trained with $8192$ samples from GRF KLE$512$. Each test set contains $512$ samples.}
    \label{fig:generalization_nrmse_pressure_ntrain8192}
\end{figure}
\paragraph{Generalization} Apart from saving computation time, the PCS can `generalize' to \textit{any} input by directly solving the governing equations, i.e. by minimizing the loss function in Eq.~(\ref{eq:loss_det_surr}) over this particular input, which was shown to work well in Section~\ref{sec:solve_det_pde}.
Thus, generalization here evaluates how accurate the model's prediction is when we need to predict quickly, e.g. by passing the input through the surrogate, or by fine-tuning the surrogate for a few steps.

Figure~\ref{fig:nrmse_pressure} shows the surrogates' interpolation performance for test input from the same distribution as the training input, i.e. GRF KLE$512$.
Here, we further examine the surrogates' \textit{extrapolation} to out-of-distribution input. We select two other GRFs with different numbers of KLE terms; in particular, we take KLE$128$, which is smoother than KLE$512$, and KLE$2048$, which has higher variability than KLE$512$. The third test input is the warped GRF, i.e. two stacked layers of Gaussian processes. The fourth test input is the channelized field. Samples from these test distributions are shown in Fig.~\ref{fig:generalization_test_input_samples}.

We take the surrogates trained on GRF KLE$512$ as in the previous experiment, and test them on the four new input distributions. The relative $L_2$ error of the predicted pressure field is shown in Fig.~\ref{fig:generalization_nrmse_pressure_ntrain8192} for the surrogates trained with $8192$ data. The figure shows that both PCSs and DDSs generalize well to the other test GRF inputs, including the warped one, but less so when it comes to the channelized field, which is completely different from the training input.
Notably, the PCS generalizes better than the DDS when tested on the warped GRF and channelized fields, which are further away from the training input distribution than the other two GRFs. This is highlighted in Fig.~\ref{fig:channel_nrmse_pressure}. The same holds for surrogates trained on $512, 1024, 2048, 4096$ samples.
Figure~\ref{fig:generalization_nrmse_pressure_ntrain4096} shows the generalization performance when the training sample size is $4096$.
\begin{figure}
    \centering
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/codec/nrmse_pressure_channel.pdf}
    \caption{}
    \label{fig:channel_nrmse_pressure}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/codec/generalization/pressure_ntrain4096.pdf}
    \caption{}
    \label{fig:generalization_nrmse_pressure_ntrain4096}
    \end{subfigure}
    \caption{(a) The relative $L_2$ error of the predicted pressure field for PCSs and DDSs trained with $512, 1024, 2048, 4096, 8192$ GRF KLE512 data (the same surrogates as in Fig.~\ref{fig:nrmse_pressure}), each with $5$ runs. The test set contains $512$ samples from the channelized field, whose distribution is completely different from the training GRF. (b) Generalization across new test input distributions for surrogates trained with $4096$ samples from GRF KLE$512$.}
\end{figure}



\subsection{Probabilistic Surrogate}\label{sec:prob_surrogate}
This section presents the experiments using the conditional Glow model shown in Fig.~\ref{fig:glow_msc} as the probabilistic surrogate. We are interested in how the conditional Glow captures predictive uncertainty, how well it is calibrated, and how it generalizes to unseen test input. We work on a $32\times 32$ discretization instead of $64\times 64$, with the input GRF KLE$100$, because of the large model size of the current implementation.

\paragraph{Network} In our experiment, we use $L=3$ levels, each of which contains $F=6$ steps of flow. Both the dense blocks and the coupling networks \fbox{$\mathbf{s}$} and \fbox{$\mathbf{t}$} in the affine coupling layers use DenseNet~\cite{huang2017densely} as the building block. The numbers of dense layers within the dense blocks of the encoder are $3, 4, 4$ (from the input towards the latent direction). The coupling networks $\texttt{CouplingNN}$, as in Table~\ref{tab:affine_coupling}, for scaling and shift have $3$ dense layers, followed by a $3\times3$ convolution layer with zero initialization to reduce the number of output features to that of its input features. The model has $1,535,549$ parameters, including $179$ convolution layers. For the other hyperparameters of the model, please refer to our open-source code.

\paragraph{Training} The model is trained with $4096$ input samples from GRF KLE$100$ over a $32 \times 32$ grid for $400$ epochs with mini-batch size $32$. No output data are needed for training. We use the Adam optimizer with initial learning rate $0.0015$, and a one-cycle learning rate scheduler. The weight for the boundary conditions is $\lambda=50$. The inverse temperature $\beta$ is fixed a priori to selected values.
Training the model with the above settings on a single NVIDIA GeForce GTX $1080$ Ti GPU card takes about $3$ hours.

\begin{figure}
    \centering
    \begin{subfigure}[b]{0.5\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/pred_at_x/pred_epoch400_497.png}
    \caption{Predictive mean and variance.}
    \label{fig:cflow_pred_at_x}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.45\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/pred_at_x/epoch400_samples_c0_index497.png}
    \caption{Samples of predicted pressure.}
    \label{fig:cflow_pred_at_x_samples_pressure}
    \end{subfigure}

    \begin{subfigure}[b]{0.47\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/pred_at_x/epoch400_samples_c1_index497.png}
    \caption{Samples of predicted horizontal flux.}
    \label{fig:cflow_pred_at_x_samples_hor_flux}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.47\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/pred_at_x/epoch400_samples_c2_index497.png}
    \caption{Samples of predicted vertical flux.}
    \label{fig:cflow_pred_at_x_samples_ver_flux}
    \end{subfigure}
    \caption{Prediction of the multiscale conditional Glow ($\beta=150$) for a test input sampled from GRF KLE$100$ over a $32\times 32$ grid. (a) The predictive mean ($2$nd row) and one standard deviation ($3$rd row) are obtained with 20 output samples. The first row shows the three simulated output fields; the $4$-th row shows the error between the reference and the predictive mean. In (b), (c), (d), the top left corner shows the simulated output, and the remaining $15$ images are samples from the conditional predictive density $p_\bm{\theta}(\mathbf{y} | \mathbf{x})$. The relative $L_2$ error for the predicted pressure field is $0.0038$ when tested on 512 samples from GRF KLE$100$.}
    \label{fig:cflow_kle100_grid32}
\end{figure}
\paragraph{Predictive distribution} Fig.~\ref{fig:cflow_kle100_grid32} shows the prediction for a test input from GRF KLE100, where in Fig.~\ref{fig:cflow_pred_at_x} the predictive mean and variance are estimated pixel-wise with $20$ samples from the conditional density, drawn by sampling $20$ realizations of the noise $\{\bm{\epsilon}_l^{(i)}\}_{l=1, i=1}^{L, 20}$ as in Algorithm~\ref{algo:cglow}; a minimal sketch of this estimation is given below.
The test relative $L_2$ error for the pressure field (comparing the predictive mean against the simulated output) is $0.0038$, which is comparable to the relative $L_2$ error of the deterministic surrogate ($0.0035$). The predictive variances of the pressure and vertical flux fields correctly reflect the boundary conditions, being close to zero on the left-right and top-bottom boundaries, respectively. We also draw $15$ samples from the predictive distribution for each output field, shown in Figs.~\ref{fig:cflow_pred_at_x_samples_pressure},~\ref{fig:cflow_pred_at_x_samples_hor_flux},~\ref{fig:cflow_pred_at_x_samples_ver_flux}. The predictive output samples are still diverse despite the predictive mean being highly accurate.
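The pixel-wise estimation can be sketched as follows, assuming the trained model exposes a (hypothetical) \texttt{sample} method that internally draws the latent noise as in Algorithm~\ref{algo:cglow}.
\begin{verbatim}
import torch

# Hypothetical API: `cglow.sample(x, n)` returns n draws from the
# conditional density p_theta(y | x), with shape (n, 3, 32, 32).
x = test_input.unsqueeze(0)          # one (1, 1, 32, 32) test input
with torch.no_grad():
    samples = cglow.sample(x, n=20)  # 20 output realizations

pred_mean = samples.mean(dim=0)      # pixel-wise predictive mean
pred_std = samples.std(dim=0)        # pixel-wise predictive std. dev.

# Relative L2 error of the mean prediction for the pressure channel,
# against the simulated output `y_sim` (hypothetical variable).
err = (torch.norm(pred_mean[0] - y_sim[0]) /
       torch.norm(y_sim[0])).item()
\end{verbatim}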
Mode collapse is a well-known problem for conditional GANs~\cite{anonymous2019diversity-sensitive, zhu2017toward} and VAEs~\cite{anonymous2019lagging}, but it does not appear to be much of a concern for flow-based generative models, as demonstrated by the diversity of the samples.

\begin{figure}
    \centering
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/pred_mean_vs_MC}
    \caption{Estimate of output mean.}
    \label{fig:cglow_up_mean}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/pred_var_vs_MC}
    \caption{Estimate of output variance.}
    \end{subfigure}
    \caption{Uncertainty propagation with the multiscale conditional Glow, $\beta=150$. (a) The first row shows the sample mean of $10,000$ simulated outputs; the second and third rows show the sample mean and two standard deviations of the estimated mean of $10,000$ outputs predicted with the probabilistic surrogate; the fourth row shows the error between the first two rows. (b) The corresponding results for the output variance.}
    \label{fig:cflow_up}
\end{figure}
\paragraph{Uncertainty propagation}
We use the trained conditional Glow as a surrogate to quickly predict the output for $10,000$ input samples from GRF KLE100, estimate the output mean and output variance, and compare against the Monte Carlo estimates obtained with the corresponding $10,000$ simulated outputs. We generate 20 samples for each input with the trained surrogate, and estimate the mean and variance of the output with the law of total expectation and the law of total variance. By repeating this process 10 times, we obtain 10 estimates of the output mean and variance. The sample mean and variance of these 10 estimated means and variances can then be computed; they are shown in the second and third rows of Fig.~\ref{fig:cflow_up}. The statistics of the surrogate output match those of the simulation output very well, especially the output variance, which is typically underestimated when using surrogates. Note that there is only a small error (around $3 \%$ relative error) between the estimated and simulated means of the horizontal flux field, despite the noticeable difference in color in Fig.~\ref{fig:cglow_up_mean}.

\paragraph{Distribution estimate} We show in Fig.~\ref{fig:cflow_dist_est} kernel density estimates of the values of the three output fields at random locations in the domain, using the $10,000$ output samples from the simulation and those propagated with the trained conditional Glow.
\begin{figure}[h]
    \centering
    \begin{subfigure}[b]{0.95\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/dist_est/loc_2875_8875.pdf}
    \caption{Distribution estimation at location $(0.2875, 0.8875)$.}
    \end{subfigure}

    \begin{subfigure}[b]{0.95\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/dist_est/loc_4625_2875.pdf}
    \caption{Distribution estimation at location $(0.4625, 0.2875)$.}
    \end{subfigure}
    \caption{Distribution estimation with the conditional Glow, $\beta=150$. From left to right: density estimates of the pressure, horizontal flux and vertical flux at the given locations of the domain $[0, 1]^2$.
}
    \label{fig:cflow_dist_est}
\end{figure}

\begin{figure}
    \centering
    \begin{subfigure}[b]{0.7\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/entropy_nrmse_beta_pressure}
    \caption{}
    \label{fig:cglow_entropy_nrmse_vs_beta}
    \end{subfigure}

    \begin{subfigure}[b]{0.8\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/reliability_diagram_pressure}
    \caption{}
    \label{fig:cglow_reliability}
    \end{subfigure}
    \caption{(a) Conditional entropy of $p_\bm{\theta}(\mathbf{y}|\mathbf{x})$ and relative $L_2$ error of the predicted pressure field w.r.t. $\beta$. The conditional entropy is evaluated in bits per dimension.
    The surrogate is tested on $512$ input samples from GRF KLE$100$. The error bars are obtained with $3$ independent runs. (b) Reliability diagram of the predicted pressure field for the conditional Glow trained with different $\beta$, evaluated with $10,000$ input-output data pairs. The closer the diagram is to the diagonal, the better the probabilistic surrogate is calibrated.}
    \label{fig:cglow_uncertainty}
\end{figure}
\paragraph{Uncertainty calibration by tuning $\beta$}
Given the PDE and boundary conditions, the prediction of the surrogate can be evaluated directly with the loss $L(\mathbf{y}, \mathbf{x})$, without requiring the reference solution (e.g. simulation output). However, this loss cannot readily be translated into the uncertainty of the solution, e.g. upper and lower bounds of the solution at every grid point in the domain. The probabilistic surrogate trained under the reverse KL divergence can provide such an uncertainty estimate, though possibly at the expense of the accuracy of the mean prediction. The precision parameter $\beta$ controls the overall variance of the reference density, which is reflected in the conditional entropy of the model density $p_\bm{\theta}(\mathbf{y} | \mathbf{x})$ in Fig.~\ref{fig:cglow_entropy_nrmse_vs_beta}. The influence of $\beta$ on the accuracy and the entropy of the model can also be seen from the two competing terms in the reverse KL divergence. A larger $\beta$ puts more weight on the PDE loss term $L(\mathbf{y}, \mathbf{x})$ and less on the negative conditional entropy; thus the predictions become more accurate but less diverse, and, to some extent, the probabilistic surrogate becomes overconfident, as shown in Fig.~\ref{fig:cglow_reliability} for $\beta=250$. On the other hand, when $\beta$ is too small, the probabilistic surrogate is overly cautious (large uncertainty estimate) and less accurate about the solution, e.g. for $\beta=50$. From the figure, the model trained with $\beta=150$ is well-calibrated (its reliability diagram is close to the diagonal dashed line) and at the same time achieves high accuracy.

\paragraph{Generalization}
We test the generalization of the conditional Glow on input distributions different from the training input (GRF KLE$100$), including GRF KLE$256$, GRF KLE$512$, warped GRF, and channelized fields, as shown in Fig.~\ref{fig:cglow_generalization}. However, we could not observe larger uncertainty when the test input is far away from the training input. The error between the predictive mean and the simulation is in general one order of magnitude larger than the predicted uncertainty.
Thus, the current surrogate cannot express what it does not know, a capability that is highly desirable in practice.
\begin{figure}
    \centering
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/generalization/k256_pred_epoch400_110}
    \caption{GRF KLE$256$.}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/generalization/k512_pred_epoch400_110}
    \caption{GRF KLE$512$.}
    \end{subfigure}

    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/generalization/warped_grf_pred_epoch400_146}
    \caption{Warped GRF.}
    \end{subfigure}
    ~
    \begin{subfigure}[b]{0.48\textwidth}
    \centering
    \includegraphics[width=\textwidth]{figs/cflow/kle100_grid32/generalization/channel_pred_epoch400_223}
    \caption{Channelized field.}
    \end{subfigure}
    \caption{Generalization of the conditional Glow to out-of-distribution input. The model is trained on GRF KLE$100$, and tested on (a) GRF KLE$256$, (b) GRF KLE$512$, (c) warped GRF, (d) channelized field. Note that the results are cherry-picked.}
    \label{fig:cglow_generalization}
\end{figure}



\section{Introduction}

Surrogate modeling is computationally attractive for problems that require repetitive yet expensive simulations, such as deterministic design, uncertainty propagation, optimization under uncertainty or inverse modeling. Data efficiency, uncertainty quantification and generalization are the main challenges facing surrogate modeling, especially for problems with high-dimensional stochastic input, such as material properties~\cite{bilionis2013multi}, background potentials~\cite{charalampidis2018computing}, etc.

Training surrogate models is commonly posed as a supervised learning problem, which requires simulation data as the target.
Gaussian process (GP) models are widely used as emulators for physical systems~\cite{kennedy2000predicting}, with built-in uncertainty quantification. Recent advances in scaling GPs to high-dimensional input include Kronecker product decompositions that exploit the spatial structure~\cite{bilionis2013multi, wilson2015kernel, atkinson2018structured}, convolutional kernels~\cite{NIPS2017_6877} and other algorithmic and software developments~\cite{gardner2018gpytorch}. However, GPs still struggle to effectively model high-dimensional input-output maps.
Deep neural networks (DNNs) are nowadays becoming the most popular surrogate models across engineering and scientific fields. As universal function approximators, DNNs excel in settings where both the input and output are high-dimensional. Applications in flow simulations include pressure projections in solving the Navier-Stokes equations~\cite{doi:10.1002/cav.1695}, fluid flow through random heterogeneous media~\cite{zhu2018bayesian, tripathy2018deep, mo2018deep}, Reynolds-averaged Navier-Stokes simulations~\cite{ling_kurzawski_templeton_2016, thuerey2018well,Geneva2019Quantifying} and others.
Uncertainty quantification for DNNs is often studied under the re-emerging framework of Bayesian deep learning\footnote{http://bayesiandeeplearning.org/}~\cite{doi:10.1162/neco.1992.4.3.448}, mostly using variational inference to approximate the posterior of the model parameters, e.g.
variational dropout~\cite{NIPS2015_5666, gal2016dropout} and Stein variational gradient descent~\cite{NIPS2016_6338, zhu2018bayesian}, although other methods exist, e.g. ensemble methods~\cite{lakshminarayanan2017simple}.
Another perspective on high-dimensional problems is offered by latent variable models~\cite{2017arXiv171102475G}, where the latent variables encode the information bottleneck between the input and the output.


A sufficient amount of training data is usually required for surrogates to achieve accurate predictions, even under restricted settings, e.g. fixed boundary conditions. For physically-grounded domains, baking prior knowledge into the model can potentially overcome the challenges of data efficiency and generalization. The \textit{inductive bias} can be built into the network architecture, e.g. spherical convolutional neural networks (CNNs) for physical fields on unstructured grids~\cite{jiang2018spherical}, graph networks for object- and relation-centric representations of complex, dynamical systems~\cite{gn_icml18}, or learning linear embeddings of nonlinear dynamics based on Koopman operator theory~\cite{lusch2018deep}. Another approach is to \textit{embed physical laws} into the learning systems, such as approximating differential operators with convolutions~\cite{pmlr-v80-long18a}, or enforcing a hard constraint of mass conservation by learning the stream function~\cite{2018arXiv180602071K}, whose curl is guaranteed to be divergence-free.

A more general way to incorporate physical knowledge is through \textit{constraint learning}~\cite{2016arXiv160905566S}, i.e. learning the models by minimizing the violation of physical constraints and symmetries, e.g. cycle consistency in domain translation~\cite{CycleGAN2017}, or temporal coherence of consecutive frames in fluid simulation~\cite{xie2018tempogan} and video translation~\cite{wang2018vid2vid}. One typical example in computational physics is \textit{learning solutions of deterministic PDEs with neural networks} in space/time, which dates back at least to the early $1990$s, e.g.~\cite{psichogios1992hybrid, meade1994numerical, lagaris1998artificial}. The main idea is to train neural networks to approximate the solution by minimizing the violation of the governing PDEs (e.g. the residual of the PDEs) as well as of the initial and boundary conditions.
In~\cite{lagaris1998artificial}, a one-hidden-layer fully-connected neural network (FC-NN) with spatial coordinates as input is trained to minimize the residual norm evaluated on a fixed grid.
The success of deep neural networks has brought several new developments: (1) most works parameterize the solution with FC-NNs, so that the solution is analytical and mesh-free~\cite{raissi2017physics, berg2018unified}; (2) the loss function can be derived from the variational form~\cite{weinan2018deep, nabian2018deep}; (3) stochastic gradient descent is used to train the network by randomly sampling mini-batches of inputs (spatial locations and/or time instances)~\cite{sirignano2018dgm, weinan2018deep}; (4) deeper networks are used to break the curse of dimensionality~\cite{grohs2018proof}, allowing several high-dimensional PDEs to be solved with high accuracy and speed~\cite{han2018solving, beck2017machine, sirignano2018dgm, raissi2018forward}; (5) multiscale numerical solvers are enhanced by replacing the linear basis functions with ones learned by DNNs~\cite{wang2018deep, fan2018multiscale}; (6) surrogate models for PDEs are built on these ideas~\cite{CNNFluid2016, khoo2017solving, nabian2018deep}.



Our work focuses on physics-constrained surrogate modeling for stochastic PDEs with high-dimensional spatially-varying coefficients, \textit{without} simulation data.
We first show that, when solving deterministic PDEs, CNN-based parameterizations are more computationally efficient in capturing multiscale features of the solution fields than FC-NN ones.
Furthermore, we demonstrate that, in comparison with image-to-image regression approaches that employ deep NNs~\cite{zhu2018bayesian}, the proposed method achieves comparable predictive performance despite not making use of any output simulation data. In addition, it produces better predictions under extrapolative conditions, such as when out-of-distribution test input datasets are used.
Finally, a flow-based conditional generative model is proposed to capture the predictive distribution with calibrated uncertainty, without compromising the predictive accuracy.

The paper is organized as follows.
Section~\ref{sec:Definition} defines the problems of interest, including the solution of PDEs, surrogate modeling and uncertainty quantification. Section~\ref{sec:Methodology} presents the parametrization of the solutions with FC-NNs and CNNs, the physics-constrained learning of a deterministic surrogate, and the variational learning of a probabilistic surrogate. Section~\ref{sec:numerical_experiments} investigates the performance of the developed techniques with a variety of tests for various PDE systems. We conclude in Section~\ref{sec:Conclusions} with a summary of this work and extensions to address the limitations that have been identified.


\section{Methodology}
\label{sec:Methodology}
\subsection{Differentiable Parameterizations of Solutions}\label{sec:solve_pde}
We only consider parameterizations of the solution using neural networks, primarily FC-NNs and CNNs. Given one input $\mathbf{x} = [K(s_1), \cdots, K(s_{n_s})]$, most previous works~\cite{lagaris1998artificial, han2018solving, raissi2017physics, sirignano2018dgm} use FC-NNs to represent the solution as
\begin{equation}\label{eq:solve_fc_nn}
    u(s) = \hat{u}_\bm{\phi} (s),
\end{equation}
where the input to the network is the coordinate $s$, the output is the predicted solution at $s$, and $\hat{u}_\bm{\phi}$ denotes an FC-NN with parameters $\bm{\phi}$. The spatial gradients can be evaluated exactly by automatic differentiation; a minimal sketch is given below.
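For illustration (the network sizes here are ours, not prescribed by the references):
\begin{verbatim}
import torch
import torch.nn as nn

# Small FC-NN u_hat(s): R^2 -> R, mapping a coordinate to the solution.
u_net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1))

# Random collocation points in [0, 1]^2; gradients w.r.t. s are exact.
s = torch.rand(128, 2, requires_grad=True)
u = u_net(s)
grad_u = torch.autograd.grad(u.sum(), s, create_graph=True)[0]
# grad_u[i] = (du/ds_1, du/ds_2) at the i-th point; create_graph=True
# allows the higher-order derivatives needed in PDE residuals.
\end{verbatim}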
This approach yields a smooth representation of the solution that can be evaluated at any input location. Even though the outputs of this model at two different locations are correlated (as they both depend on the shared parameters $\bm{\phi}$ of the NN), FC-NNs lack the inductive biases of CNNs, e.g. translation invariance and parameter sharing.
Despite promising results in a series of canonical problems~\cite{raissi2018physics}, the trainability and predictive performance of FC-NNs deteriorate as the complexity of the underlying solution increases. This drawback is confirmed by our numerical studies presented in Section~\ref{sec:solve_det_pde}, involving solution fields with multiscale features.

An alternative parametrization of the solution is through a convolutional decoder network
\begin{equation}\label{eq:solve_cnn}
    \mathbf{y} = \hat{\mathbf{y}}_\bm{\theta} (\mathbf{z}),
\end{equation}
where $\mathbf{y} = [u(s_1), \cdots, u(s_{n_s})]$ denotes the solution on the pre-defined fixed grid $s_1, \cdots, s_{n_s}$, generated by one pass of the latent variable $\mathbf{z}$ through the decoder, similar to~\cite{2017arXiv171110925U}. Note that $\mathbf{z}$ is usually of much lower dimension than $n_s$ and is initialized arbitrarily. The spatial gradients can be approximated efficiently with a Sobel filter\footnote{\url{https://www.researchgate.net/publication/239398674_An_Isotropic_3x3_Image_Gradient_Operator}}, which amounts to a single convolution layer with a fixed kernel; see~\ref{appendix:sobel_filter} for details.
In contrast to FC-NNs, convolutional architectures can directly capture complex spatial correlations and return a multi-resolution representation of the underlying solution field; a minimal decoder sketch is given at the end of this subsection.

\begin{remark}
The dimensionality $n_s$ of the input $\mathbf{x}$ is not required to be the same as that of the output $\mathbf{y}$. Since our CNN approach involves operations between images, including pixel-wise multiplication of input and output images (see Section~\ref{sec:LossFnDarcyFlow}), we select herein the same dimensionality for both inputs and outputs. Upsampling/downsampling can always be used to accommodate different dimensionalities $n_{sx}$ and $n_{sy}$ of the input and output images, respectively.
\end{remark}

To solve a deterministic PDE for a given input, we can train the FC-NN solution as in Eq.~(\ref{eq:solve_fc_nn}) by minimizing the residual loss, where the exact derivatives are calculated with automatic differentiation~\cite{lagaris1998artificial, han2018solving, raissi2017physics, sirignano2018dgm}. For the CNN representation, we detail the loss functions and numerical derivatives in the next section.
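As an illustration of Eq.~(\ref{eq:solve_cnn}), the following minimal decoder maps an (illustrative) $16\times 16$ latent tensor to a $64\times 64$ output; three output channels are used here, as in the mixed formulation discussed below.
\begin{verbatim}
import torch
import torch.nn as nn

# Minimal convolutional decoder y_hat(z): (16, 16, 16) latent tensor
# -> (3, 64, 64) output fields. Nearest upsampling + convolution is
# used rather than transposed convolution to limit checkerboard
# artifacts. Layer sizes are illustrative only.
decoder = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(32, 3, kernel_size=3, padding=1))

z = torch.randn(1, 16, 16, 16)  # arbitrarily initialized latent variable
y = decoder(z)                  # (1, 3, 64, 64) discretized solution
\end{verbatim}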
\subsection{Physics-constrained Learning of Deterministic Surrogates without Labeled Data}
\label{sec:deterministic_surrogate}

We are particularly interested in surrogate modeling with high-dimensional input and output, i.e. $\text{dim}(\mathbf{x}), \text{dim}(\mathbf{y}) \gg 1$. Surrogate modeling extends the solution networks of the previous section by taking realizations of the stochastic input $\mathbf{x}$ as an additional input, e.g. $u(s, \mathbf{x}) = \hat{u}_\bm{\phi}(s, \mathbf{x})$ in the FC-NN case~\cite{nabian2018deep}, or $\mathbf{y} = \hat{\mathbf{y}}_\bm{\theta}(\mathbf{x})$ in the CNN case~\cite{khoo2017solving}.

Here, we adopt the \textit{image-to-image regression} approach~\cite{zhu2018bayesian} to deal with the situation, arising in practice, where the realizations of the random input field are image-like data instead of being computed from an analytical formula. More specifically, the surrogate model $\mathbf{y} = \hat{\mathbf{y}}_\bm{\theta} (\mathbf{x})$ extends the decoder network in Eq.~(\ref{eq:solve_cnn}) by prepending an encoder network that transforms the high-dimensional input $\mathbf{x}$ into the latent variable $\mathbf{z}$, i.e. $\mathbf{y} = \text{decoder} \circ \text{encoder} (\mathbf{x})$.

In contrast to existing convolutional encoder-decoder network structures~\cite{zhu2018bayesian}, the surrogate model studied here is trained \textit{without} labeled data, i.e. without computing the solution of the PDE. Instead, it is trained by learning to solve the PDE with the given boundary conditions, using the following loss function
\begin{equation}\label{eq:loss_det_surr}
    L(\bm{\theta}; \{\mathbf{x}^{(i)}\}_{i=1}^N) = \frac{1}{N} \sum_{i=1}^N \Big[V(\hat{\mathbf{y}}_{\bm{\theta}}(\mathbf{x}^{(i)}), \mathbf{x}^{(i)}) + \lambda B(\hat{\mathbf{y}}_{\bm{\theta}}(\mathbf{x}^{(i)})) \Big],
\end{equation}
where $\hat{\mathbf{y}}^{(i)} = \hat{\mathbf{y}}_{\bm{\theta}}(\mathbf{x}^{(i)})$ is the prediction of the surrogate for $\mathbf{x}^{(i)} \in \mathcal D_{\text{input}}$, $V(\hat{\mathbf{y}}^{(i)}, \mathbf{x}^{(i)})$ is the equation loss, either in the form of the \textit{residual norm}~\cite{lagaris1998artificial} or of the \textit{variational functional}~\cite{weinan2018deep} of the PDE, $B(\hat{\mathbf{y}}^{(i)})$ is the boundary loss of the prediction $\hat{\mathbf{y}}^{(i)}$, and $\lambda$ is the weight (Lagrange multiplier) that softly enforces the boundary conditions. Both $V(\hat{\mathbf{y}}^{(i)}, \mathbf{x}^{(i)})$ and $B(\hat{\mathbf{y}}^{(i)})$ may involve integration and differentiation with respect to the spatial coordinates, which are approximated with highly efficient discrete operations, detailed below for the Darcy flow problem. The surrogate trained with the loss function in Eq.~(\ref{eq:loss_det_surr}) is called the \textit{physics-constrained surrogate} (PCS).

In contrast to the physically motivated loss function advocated above, a typical data-driven surrogate employs a loss function of the form
\begin{equation}\label{eq:loss_det_surr_data}
    L_{\text{MLE}}(\bm{\theta}; \{(\mathbf{x}^{(i)}, \mathbf{y}^{(i)})\}_{i=1}^N) = \frac{1}{N} \sum_{i=1}^N \norm{\mathbf{y}^{(i)} - \hat{\mathbf{y}}_\bm{\theta}(\mathbf{x}^{(i)})}_2^2,
\end{equation}
where $\mathbf{y}^{(i)}$ is the output data for the input $\mathbf{x}^{(i)}$, which must be computed in advance. We refer to the surrogate trained with the loss function in Eq.~(\ref{eq:loss_det_surr_data}) as the \textit{data-driven surrogate} (DDS).


\subsubsection{Loss Function for Darcy Flow}
\label{sec:LossFnDarcyFlow}
There are at least four variations of the loss function for a second-order elliptic PDE problem, depending on whether the field variables refer to the primal variable (pressure) or to mixed variables (pressure and fluxes), and on whether the loss is expressed in strong or variational form.
Specifically, for the Darcy flow problem defined in Eq.~(\ref{eq:darcy}), we can consider:

\paragraph{Primal residual loss} The residual norm for the primal variable is
\begin{equation}
    V(u; K) = \int_{\mathcal S} \Big(\nabla \cdot (K \nabla u) + f \Big)^2 ds.
\end{equation}
\paragraph{Primal variational loss}
The energy functional is
\begin{equation}
    \label{eq:energy_functional}
    V(u; K) = \int_{\mathcal S} \Big( \frac{1}{2} K \nabla u\cdot \nabla u - f u \Big) ds - \int_{\Gamma_N} g u ~ds.
\end{equation}

The mixed formulation introduces an additional (vector) variable, namely the flux $\tau$, which turns Eq.~(\ref{eq:darcy}) into a system of equations
\begin{equation}
\label{eq:darcy_mixed}
    \begin{aligned}
    \tau &= - K \nabla u, \qquad \text{in } \mathcal S, \\
    \nabla \cdot \tau &= f, \qquad \text{in } \mathcal S,
    \end{aligned}
\end{equation}
with the same boundary conditions as in Eq.~(\ref{eq:BCs}). Here $\tau(s) = [\tau_1(s), \tau_2(s)]$ are the flux components along the horizontal and vertical directions, respectively.

\paragraph{Mixed variational loss} Following the Hellinger-Reissner principle~\cite{arnold1990mixed}, the mixed variational principle states that the solution $(\tau^*, u^*)$ of the Darcy flow problem is the unique critical point of the functional
\begin{equation}
\label{eq:mixed_functional}
    V(\tau, u; K) = \int_{\mathcal S} \Big(\frac{1}{2} K^{-1} \tau \cdot \tau + u \nabla \cdot \tau + f u \Big) ds - \int_{\Gamma_D} u_D \tau \cdot \mathbf{n} ds,
\end{equation}
over the space of vector fields $\tau \in \mathcal H(\text{div})$ satisfying the Neumann boundary condition, and all fields $u \in \mathcal L^2$.
It should be highlighted that the solution $(\tau^*, u^*)$ is not an extreme point of the functional in Eq.~(\ref{eq:mixed_functional}), but a \textit{saddle point}, i.e.
\begin{equation*}
    V(\tau^*, u) \leq V(\tau^*, u^*) \leq V(\tau, u^*).
\end{equation*}


\paragraph{Mixed residual loss} The residual norm for the mixed variables is
\begin{equation}\label{eq:mixed_residual}
    V(\tau, u; K) = \int_{\mathcal S} \Big[ \Big(\tau + K \nabla u \Big)^2 + \Big(\nabla \cdot \tau - f\Big)^2 \Big] ds.
\end{equation}

Both the variational and mixed formulations have the advantage of lowering the order of differentiation, which is approximated numerically in our implementation by a Sobel filter, as detailed in~\ref{appendix:sobel_filter}.
For example, employing the discretized representation $\mathbf{x}$ for $K$, with the domain $\mathcal S=[0, 1] \times [0, 1]$, the mixed residual loss is evaluated as
\begin{equation}\label{eq:mixed_residual_impl}
    V(\bm{\tau}, \mathbf{u}; \mathbf{x}) \approx \frac{1}{n_s} \Big(\norm{\bm{\tau} + \mathbf{x} \odot \nabla \mathbf{u}}^2_2 + \norm{\nabla \cdot \bm{\tau} - \mathbf{f}}^2_2 \Big),
\end{equation}
where $n_s$ is the number of points of the uniform grid, $\nabla \mathbf{u} = [\mathbf{u}_h, \mathbf{u}_v]$, with $\mathbf{u}_h, \mathbf{u}_v$ the two gradient images along the horizontal and vertical directions estimated by the Sobel filter, similarly for $\nabla \cdot \bm{\tau} = (\bm{\tau}_1)_h + (\bm{\tau}_2)_v$, and $\odot$ denotes the element-wise product. A minimal sketch of this discrete loss is given below.
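The sketch below is one possible PyTorch implementation of Eq.~(\ref{eq:mixed_residual_impl}); the Sobel normalization, the replicate padding at the boundaries and the grid spacing are illustrative choices rather than the exact ones of~\ref{appendix:sobel_filter}.
\begin{verbatim}
import torch
import torch.nn.functional as F

# 3 x 3 Sobel kernels; the 1/8 factor turns them into per-pixel
# derivative estimates, which are then divided by the grid spacing h.
sobel_h = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]) / 8.0
sobel_v = sobel_h.t()

def grad_images(field, h):
    """Horizontal/vertical derivative images of a (B, 1, H, W) field."""
    pad = F.pad(field, (1, 1, 1, 1), mode='replicate')
    gh = F.conv2d(pad, sobel_h.view(1, 1, 3, 3)) / h
    gv = F.conv2d(pad, sobel_v.view(1, 1, 3, 3)) / h
    return gh, gv

def mixed_residual_loss(tau_u, x, h=1.0 / 63):
    """Discrete mixed residual, here for zero source term f = 0."""
    tau1, tau2, u = tau_u[:, 0:1], tau_u[:, 1:2], tau_u[:, 2:3]
    u_h, u_v = grad_images(u, h)
    div = grad_images(tau1, h)[0] + grad_images(tau2, h)[1]
    darcy = (tau1 + x * u_h) ** 2 + (tau2 + x * u_v) ** 2
    return (darcy + div ** 2).mean()
\end{verbatim}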
\subsection{Probabilistic Surrogates with Reverse KL Formulation}

While a deterministic surrogate provides fast predictions for new input realizations, it does not model the predictive uncertainty, which is important in practice, especially when the surrogate is tested on inputs unseen during training.
Moreover, many PDEs in physics have multiple solutions~\cite{farrell2015deflation}, which cannot be captured with a deterministic model.
Thus, building probabilistic surrogates that can model the distribution over possible solutions given the input is of great importance.

A probabilistic surrogate models the conditional density of the predicted solution given the input, i.e. $p_\bm{\theta}(\mathbf{y} | \mathbf{x})$.
Instead of learning this conditional density with labeled data~\cite{sohn2015learning, mirza2014conditional, van2016conditional}, we distill it from a reference density $p_\beta(\mathbf{y} | \mathbf{x})$.
The reference density is a Boltzmann distribution
\begin{equation}
\label{eqn:target_conditional}
    p_{\beta}(\mathbf{y} | \mathbf{x}) = \frac{\exp{(-\beta L(\mathbf{y}, \mathbf{x}) )}}{Z_\beta(\mathbf{x})},
\end{equation}
where $L(\mathbf{y}, \mathbf{x}) = V(\mathbf{y}, \mathbf{x}) + \lambda B(\mathbf{y})$ is the loss function (Eq.~\ref{eq:loss_det_surr}) for the deterministic surrogate that penalizes the violation of the PDE and boundary conditions, and $\beta$ is an inverse temperature parameter that controls the overall variance of the reference density. This energy-based model is obtained solely from the PDE and boundary conditions, without access to labeled output data~\cite{lecun2006tutorial}. Nevertheless, this PDE-constrained model provides information similar to labeled data, allowing us to learn a probabilistic surrogate.

Since sampling from the probabilistic surrogate $p_\bm{\theta}(\mathbf{y} | \mathbf{x})$ is usually fast and evaluating the (unnormalized) reference density $p_\beta(\mathbf{y} | \mathbf{x})$ is often cheap, we choose to minimize the following reverse KL divergence:
\begin{equation}\label{eq:rev_kl}
\begin{split}
    D_{\mathrm{KL}} (p(\mathbf{x})~p_{\bm{\theta}}(\mathbf{y} | \mathbf{x}) \parallel p(\mathbf{x})~ p_{\beta}( \mathbf{y} | \mathbf{x} ))
    & = \mathbb{E}_{p(\mathbf{x})} \Big[ - \mathbb{E}_{p_{\bm\theta}(\mathbf{y} | \mathbf{x})}[\log p_{\beta} (\mathbf{y} | \mathbf{x})] + \mathbb{E}_{p_{\bm\theta}(\mathbf{y} | \mathbf{x})} [\log p_{\bm\theta}(\mathbf{y} | \mathbf{x})] \Big]\\
    & = \beta \mathbb{E}_{p(\mathbf{x})p_{\bm\theta}(\mathbf{y} | \mathbf{x})} [L (\mathbf{y}, \mathbf{x})] - \mathbb{H}_\bm{\theta} (\mathbf{y} | \mathbf{x}) + \mathbb{E}_{p(\mathbf{x})}[\log Z_{\beta}(\mathbf{x})].
\end{split}
\end{equation}
The first term is the expectation of the loss function $L(\mathbf{y},\mathbf{x})$ w.r.t. the joint density $p(\mathbf{x}) p_{\bm{\theta}}(\mathbf{y} | \mathbf{x})$, which enforces the satisfaction of the PDE and boundary conditions.
The second term is the negative conditional entropy of $p_{\bm{\theta}}(\mathbf{y} | \mathbf{x})$, which promotes the diversity of the model predictions. It also helps to stabilize the training of the flow-based conditional generative model introduced in Section~\ref{sec:cflow}.
The third term is the variational free energy, which is constant when optimizing $\bm{\theta}$. For models with intractable log-likelihood $\log p_{\bm\theta}(\mathbf{y} | \mathbf{x})$, one can derive a lower bound for the conditional entropy $\mathbb{H}_{\bm{\theta}}(\mathbf{y} | \mathbf{x})$ that helps to regularize training and avoid mode collapse, as in~\cite{yang2018adversarial}. In this work, the log-likelihood can be evaluated exactly for the model introduced in Section~\ref{sec:cflow}; a minimal sketch of the resulting training step is given below.

This idea is similar to probability density distillation~\cite{oord2017parallel} for learning generative models for real-time speech synthesis, the neural renormalization group~\cite{li2018neural} for accelerating sampling of Ising models, and Boltzmann generators~\cite{noe2018boltzmann} for efficiently sampling equilibrium states of many-body systems.
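Concretely, one optimization step on Eq.~(\ref{eq:rev_kl}) (up to the constant free-energy term) can be sketched as follows, assuming the conditional flow exposes a hypothetical \texttt{sample\_with\_log\_prob} method that returns reparameterized samples together with their exact log-likelihoods.
\begin{verbatim}
import torch

beta, lam = 150.0, 50.0   # inverse temperature and boundary weight

def reverse_kl_step(cglow, optimizer, x):
    """One step on beta * E[L(y, x)] - H_theta(y | x). The entropy
    term equals -E[log p_theta(y | x)], so we add the mean exact
    log-likelihood. `cglow.sample_with_log_prob` is a hypothetical
    API; gradients flow through the reparameterized samples y."""
    optimizer.zero_grad()
    y, log_prob = cglow.sample_with_log_prob(x)
    pde = mixed_residual_loss(y, x) + lam * boundary_loss(y)
    loss = beta * pde + log_prob.mean()
    loss.backward()
    optimizer.step()
\end{verbatim}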
$p_\bm{\theta}(\mathbf{y}|\mathbf{x}) = p_\beta(\mathbf{y}|\mathbf{x})$, the predictive uncertainty is still controlled by $\beta$. Thus we add an uncertainty calibration constraint to the optimization problem, i.e. 
\begin{equation}
\begin{aligned}
    \min_{\beta, \bm{\theta}} \quad & D_{\mathrm{KL}} (p(\mathbf{x}) p_{\bm{\theta}}(\mathbf{y} | \mathbf{x})
    \parallel p(\mathbf{x}) p_{\beta}( \mathbf{y} | \mathbf{x} )), \\
    s.t. \quad & p_{\bm{\theta}}(\mathbf{y} | \mathbf{x}) \text{ is calibrated on validation data.}
\end{aligned}
\end{equation}
Here, the predictive uncertainty is calibrated using the reliability diagram~\cite{degroot1983comparison}. 
A naive approach to selecting $\beta$ is grid search, i.e. train the probabilistic surrogate with different values of $\beta$ and select the one for which the trained surrogate is well-calibrated w.r.t. validation data, which include input-output pairs.
 

\remark{Instead of tuning $\beta$ with grid search, we can also re-calibrate the trained model \textit{post-hoc}~\cite{guo2017calibration, kuleshov2018accurate} by learning an auxiliary regression model. For a small amount of miscalibration, sampling latent variables at a different temperature (Section~6 in~\cite{kingma2018glow}) can also change the variance of the output with a slight drop in predictive accuracy.} 


\begin{remark}
Similar to our approach, Probabilistic Numerical Methods (PNMs)~\cite{hennig2015probabilistic, cockayne2016probabilistic, cockayne2017bayesian} take a statistical point of view of classical numerical methods (e.g. a finite element solver) that treat the output as a point estimate of the true solution. Given finite information (e.g. a finite number of evaluations of the PDE operator and boundary conditions) and a prior belief about the solution, PNMs output the posterior distribution of the solution. 
PNMs focus on inference of the solution for a single input, rather than the amortized inference performed by the probabilistic surrogate.
\end{remark}

\subsubsection{Conditional Flow-based Generative Models}\label{sec:cflow}
This section presents flow-based generative models~\cite{dinh2016density} as our probabilistic surrogates. This family of models offers several advantages over other generative models~\cite{2013arXiv1312.6114K, NIPS2014_5423}, such as exact inference and exact log-likelihood evaluation, which is particularly attractive for learning the conditional distribution with the reverse KL divergence as in Eq.~(\ref{eq:rev_kl}). The generative model $\mathbf{y} = \mathbf{g}_\bm{\theta}(\mathbf{z})$ consists of a sequence of \textit{invertible} layers (also called normalizing flows~\cite{rezende2015variational}) that transform a simple distribution $p(\mathbf{z})$ into a target distribution $p(\mathbf{y})$, i.e. 

$$\mathbf{y} := \mathbf{h}_0 \overset{\mathbf{g}_\bm{\theta}^1}{\longleftrightarrow} \mathbf{h}_1 \overset{\mathbf{g}_\bm{\theta}^2}{\longleftrightarrow} \mathbf{h}_2 \cdots \overset{\mathbf{g}_\bm{\theta}^L}{\longleftrightarrow} \mathbf{h}_L := \mathbf{z},$$
where $\mathbf{g}_\bm{\theta} = \mathbf{g}_\bm{\theta}^1 \circ \mathbf{g}_\bm{\theta}^2 \circ \cdots \circ \mathbf{g}_\bm{\theta}^L$.
By the change of variables formula, the log-likelihood of the model given $\mathbf{y}$ can be calculated as
\begin{align*}
    \log p_\bm{\theta}(\mathbf{y}) = \log p_\bm{\theta}(\mathbf{z}) + \sum_{l=1}^L \log |\det(d\mathbf{h}_l / d\mathbf{h}_{l-1})|,
\end{align*}
where the Jacobian term $\log |\det(d\mathbf{h}_l / d\mathbf{h}_{l-1})|$, i.e. the log of the absolute value of the Jacobian determinant, can be computed easily for each transform $(\mathbf{g}_\bm{\theta}^l)^{-1}$ for certain designs of invertible layers~\cite{rezende2015variational, dinh2016density}, similar to the Feistel cipher. Given training data of $\mathbf{y}$, the model can be optimized stably with maximum likelihood estimation.

\begin{figure}
    \centering
    \begin{subfigure}[b]{0.65\textwidth}
        \hspace{-1em}
        \includegraphics[width=1.1\textwidth]{figs/glow_msc}
        \caption{Multiscale conditioning.}
        \label{fig:glow_msc_sub}
    \end{subfigure}
    \hfill
    \begin{minipage}[b]{0.33\textwidth}
        \begin{subfigure}[b]{\textwidth}
            \centering
            \includegraphics[width=.95\textwidth]{figs/flow_step}
            \caption{One step of flow.}
            \label{fig:flow_step}
        \end{subfigure}
        \\[\baselineskip]
        \begin{subfigure}[b]{\textwidth}
            \centering
            \includegraphics[width=0.98\textwidth]{figs/affine_coupling}
            \caption{Affine coupling layer.}
            \label{fig:affine_coupling}
        \end{subfigure}
    \end{minipage}
    \caption{Multiscale conditional Glow. (a) Multiscale features extracted with the encoder network (left) are used as conditions to generate output with the Glow model (right). $\times F$ and $\times (L-2)$ denote repetition $F$ times and $L-2$ times, respectively. (b) One step of flow, i.e. the \texttt{Flow} block in (a), and (c) the affine coupling layer, following the structure of Glow (Fig.~$2$ in~\cite{kingma2018glow}) except for conditioning on the input features. The figure shows the forward path from $\{\mathbf{y}; \mathbf{x} \}$ to $\mathbf{z} = \{\mathbf{z}_2, \cdots, \mathbf{z}_L\}$. The reverse (sampling) path from $\{\mathbf{z}; \mathbf{x} \}$ to $\mathbf{y}$ is used during training, where $\mathbf{z}$ are sampled from diagonal Gaussians; see Algorithm~\ref{algo:cglow}. See~\ref{appendix:cglow} for the details of all modules in the model.}
    \label{fig:glow_msc}
\end{figure}
The recently developed generative flow model Glow~\cite{kingma2018glow} learns an invertible $1\times1$ convolution to replace the fixed permutation and synthesizes large photo-realistic images using the log-likelihood objective.
We extend Glow to condition on a high-dimensional input $\mathbf{x}$, e.g. images, as shown in Fig.~\ref{fig:glow_msc}. 
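To make the change-of-variables bookkeeping above concrete, the following is a minimal sketch of a conditional affine coupling layer and its exact log-determinant (in NumPy; the fixed random matrices standing in for the scale and shift networks, and all names, are illustrative assumptions rather than part of the actual model code):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))  # stand-in "networks"

def s_net(h, xi):   # scale map; need not be invertible
    return np.tanh((h + xi) @ W_s)

def t_net(h, xi):   # shift map; need not be invertible
    return (h + xi) @ W_t

def coupling_forward(y, xi):
    y1, y2 = np.split(y, 2)
    s, t = s_net(y1, xi), t_net(y1, xi)
    z = np.concatenate([y1, y2 * np.exp(s) + t])  # first half passes through
    logdet = np.sum(s)       # log|det J| reduces to a sum of the log-scales
    return z, logdet

def coupling_inverse(z, xi):
    z1, z2 = np.split(z, 2)
    s, t = s_net(z1, xi), t_net(z1, xi)  # recomputable, since z1 = y1
    return np.concatenate([z1, (z2 - t) * np.exp(-s)])

y, xi = rng.normal(size=16), rng.normal(size=8)
z, logdet = coupling_forward(y, xi)
assert np.allclose(coupling_inverse(z, xi), y)   # exact invertibility
\end{verbatim}
The assertion verifies exact invertibility, and the log-determinant collapses to a cheap sum, which is what makes the likelihood in the display above tractable.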
The conditional model consists of two components (Fig.~\ref{fig:glow_msc_sub}): an encoder network, which extracts multiscale features $\{\bm{\xi}_l\}_{l=1}^L$ from the input $\mathbf{x}$ through a cascade of alternating dense blocks and downsampling layers, and a Glow model (with multiscale structure), which transforms the latent variables $\mathbf{z}=\{\mathbf{z}_2, \cdots, \mathbf{z}_L\}$ distributed at different scales to the output $\mathbf{y}$ conditioned on $\{\bm{\xi}_l\}_{l=1}^L$ through skip connections (dashed lines in Fig.~\ref{fig:glow_msc_sub}, as in Unet~\cite{ronneberger2015u}) between the encoder and the Glow.

More specifically, the input features $\bm{\xi}_l$ enter the Glow model as the condition for the affine coupling layers at the same scale, as shown in Fig.~\ref{fig:flow_step}, whose input and output are denoted as $\mathbf{y}'$ and $\mathbf{z}'$ in the forward path. As shown in Fig.~\ref{fig:affine_coupling}, the input features $\bm{\xi}_l$ are concatenated \encircle{c} with half of the flow features $\mathbf{y}_1'$ before passing to the scale \fbox{$\mathbf{s}$} and shift \fbox{$\mathbf{t}$} networks, which specify arbitrary nonlinear transforms that need not be invertible. Given $\mathbf{z}' = [\mathbf{z}_1', \mathbf{z}_2']$ and $\bm{\xi}_l$, $\mathbf{y}'=[\mathbf{y}_1', \mathbf{y}_2']$ can be recovered exactly by reversing the shift and scaling operations, as detailed in Table~\ref{tab:affine_coupling}. Note that $\bm{\xi}_l$ is the condition for all $F$ steps of flow at scale $l=1, \cdots, L$, where $L$ denotes the number of scales (or levels). More details of the model, including dense blocks, transition down layers, split, squeeze, and affine coupling layers, are given in~\ref{appendix:cglow}.

In a data-driven scenario, the conditional Glow is trained by passing data $\mathbf{y}$ through the model to compute the latent $\mathbf{z}$ and maximizing the evaluated log-likelihood of the data given $\mathbf{x}$. But to train with the loss in Eq.~(\ref{eq:rev_kl}), we need to sample the output $\hat{\mathbf{y}}$ from the conditional density $p_{\bm{\theta}}(\mathbf{y} | \mathbf{x})$ given $\mathbf{x}$, which goes in the direction opposite to the data-driven case. Algorithm~\ref{algo:cglow} shows the details of training conditional Glow. 
The sampling/generation process is shown within the for-loop before computing the loss. Note that for one input sample only one output sample is used to approximate the expectation over $p_{\bm{\theta}}(\mathbf{y} | \mathbf{x})$ during training. To obtain multiple output samples for an input, e.g. to compute the predictive mean and variance during prediction, we only need to sample the noise variables $\{\bm{\epsilon}_l\}_{l=2}^L$ multiple times and pass them through the reverse path of the Glow.
The conditional log-likelihood $\log p_{\bm{\theta}}(\hat{\mathbf{y}} | \mathbf{x})$ can be evaluated exactly as follows:
\begin{equation}
    \label{eq:flow_likelihood}
    \log p_{\bm{\theta}}(\hat{\mathbf{y}} | \mathbf{x}) = \log p_{\bm{\theta}}(\mathbf{z}) + \log |\det (d\mathbf{z} / d\hat{\mathbf{y}})|,
\end{equation}
where both the latent $\mathbf{z}$ and $\log |\det (d\mathbf{z} / d\hat{\mathbf{y}})|$ depend on $\mathbf{x}$ and realizations of the noise $\{\bm{\epsilon}_l\}_{l=2}^L$. The density of the latent $p_{\bm{\theta}}(\mathbf{z})$ is usually a simple distribution, e.g. 
diagonal Gaussian, which is computed with the second (for $\mathbf{z}_L$) and third (for $\{\mathbf{z}_l\}_{l=2}^{L-1}$) terms within the bracket of the reverse KL divergence loss in Algorithm~\ref{algo:cglow}. Also $\log |\det (d\mathbf{z} / d\hat{\mathbf{y}})|$ is computed with the fourth term. Notably, the log-determinant of the Jacobian for the affine coupling layer is just $\texttt{sum}(\log |\mathbf{s}|)$, where $\mathbf{s}$ is the output of the scaling network.
Thus the conditional density $p_\bm{\theta}(\hat{\mathbf{y}} | \mathbf{x})$ can be evaluated exactly and efficiently, enabling us to directly approximate the entropy term in Eq.~(\ref{eq:rev_kl}), e.g. via Monte Carlo approximation.

\remark{
The training process does not require output data. However, validation data with input-output pairs are necessary to calibrate the predictive uncertainty of the trained model. Careful initialization of the model is important to stabilize the training process. In this work, we initialize the \texttt{ActNorm} to be the identity transform, the weight matrix of the \texttt{Invertible $1\times1$ Convolution} to be a random rotation matrix, and the \texttt{Affine Coupling} layer to be close to the identity transform ($\hat{\mathbf{s}}=\bm{0}$ and $\mathbf{t}=\bm{0}$ in Table~\ref{tab:affine_coupling}). 
We can also use data-dependent initialization to speed up the training process. More specifically, one mini-batch $\mathcal D_{\text{init}} = \{(\mathbf{x}^{(j)}, \mathbf{r}^{(j)})\}_{j=1}^{M'}$ (e.g. $M'=32$) of input-output data pairs can be passed forward from $\{\mathbf{y}; \mathbf{x}\}$ to $\mathbf{z}$ to initialize the parameters of \texttt{ActNorm} such that the post-\texttt{ActNorm} activations per channel have zero mean and unit variance given $\mathcal D_{\text{init}}$~\cite{kingma2018glow}. The reference output $\mathbf{r}$ can be the solution from standard deterministic PDE solvers or, more appropriately here, from the methods presented in Sections~\ref{sec:solve_pde} and~\ref{sec:solve_det_pde}.
}

\begin{algorithm}
\SetAlgoLined
\KwIn{Inverse temperature $\beta$, input samples $\{\mathbf{x}^{(i)}\}_{i=1}^N$. 
Mini-batch size $M$, number of steps $F$ of \texttt{Flow} in each scale, number of scales $L$.}
\KwOut{Model parameters $\bm{\theta}$}
\For{number of training iterations}{
    Sample a mini-batch of input $\{\mathbf{x}^{(i)}\}_{i=1}^M$, pass it through the encoder to compute the multiscale input features $\{\bm{\xi}_l^{(i)}\}_{l=1, i=1}^{L, M}$; 
    
    Sample the latent $\mathbf{z}^{(i)}_L = \bm{\mu}_\bm{\theta}^L(\bm{\xi}^{(i)}_L) + \bm{\sigma}_\bm{\theta}^L(\bm{\xi}^{(i)}_L) \odot \bm{\epsilon}_L^{(i)}, \bm{\epsilon}_L^{(i)} \sim \mathcal N(\bm{0}, \mathbf{I})$; 
    
    Compute flow feature $\mathbf{h}_{L-1}^{(i)} = \mathbf{g}_{\bm{\theta}}^L (\mathbf{h}_{L}^{(i)}; \bm{\xi}_L^{(i)})$; \algorithmiccomment{$\mathbf{h}_L = \mathbf{z}_L$, $\mathbf{g}_\bm{\theta}^L$ includes the reverse path of \texttt{Squeeze} and $F$ steps of \texttt{Flow}} 
    
    \For{$l=L-1:2$}{
        Sample the split latent variable at level $l$: $\mathbf{z}_l^{(i)} = \bm{\mu}_\bm{\theta}^l(\mathbf{h}^{(i)}_l) + \bm{\sigma}_\bm{\theta}^l(\mathbf{h}^{(i)}_l) \odot \bm{\epsilon}_l^{(i)}, \bm{\epsilon}_l^{(i)} \sim \mathcal N(\bm{0}, \mathbf{I}), i=1, \cdots, M$; 
        
        Compute flow feature $\mathbf{h}_{l-1}^{(i)} = \mathbf{g}_{\bm{\theta}}^l (\mathbf{h}_l^{(i)}, \mathbf{z}_l^{(i)}; \bm{\xi}_l^{(i)})$; \algorithmiccomment{$\mathbf{g}_\bm{\theta}^l$ includes the reverse path of \texttt{Squeeze}, $F$ steps of \texttt{Flow} and \texttt{Split}}
    }
    
    Compute output $\hat{\mathbf{y}}^{(i)} = \mathbf{g}_\bm{\theta}^1(\mathbf{h}_1^{(i)}; \bm{\xi}_1^{(i)})$;
    \algorithmiccomment{$\mathbf{g}_\bm{\theta}^1$ includes the reverse path of $F$ steps of \texttt{Flow}}
    
    Minimize the \textit{reverse KL} divergence in Eq.~(\ref{eq:rev_kl}) with the Adam optimizer w.r.t. $\bm{\theta}$: 
    $ \frac{1}{M} \sum_{i=1}^M \Big[ \beta L(\hat{\mathbf{y}}^{(i)}, \mathbf{x}^{(i)}) 
    + \sum_{l=2}^{L-1} \log \mathcal N(\mathbf{z}_l^{(i)} | \bm{\mu}_\bm{\theta}^l(\mathbf{h}_l^{(i)}), (\bm{\sigma}_\bm{\theta}^l(\mathbf{h}_l^{(i)}))^2) 
    + \log \mathcal N(\mathbf{z}_L^{(i)} | \bm{\mu}_\bm{\theta}^L(\bm{\xi}_L^{(i)}), (\bm{\sigma}_\bm{\theta}^L(\bm{\xi}_L^{(i)}))^2) 
    + \sum_{l=1}^L \log |\det(d\mathbf{h}_l^{(i)} / d\mathbf{h}_{l-1}^{(i)})| \Big].$

    \algorithmiccomment{$\mathbf{h}_0=\hat{\mathbf{y}}$; see Table 1 in~\cite{kingma2018glow} for the formulas to compute $\log |\det(d\mathbf{h}_l^{(i)} / d\mathbf{h}_{l-1}^{(i)})|$, i.e. the log-determinant of the Jacobian for the \texttt{ActNorm}, \texttt{Invertible $1\times 1$ Conv} and \texttt{Affine Coupling} layers.}
}
\caption{Training conditional Glow.}
\label{algo:cglow}
\end{algorithm}

\section{Problem Definition}
\label{sec:Definition}
Consider modeling a physical system governed by PDEs:
\begin{equation}
    \label{eq:PDE}
    \begin{aligned}
    \mathcal N (u(s); K(s)) &= f(s), \qquad s \in \mathcal S, \\
    \mathcal B (u(s)) &= b(s), \qquad s \in \Gamma,
    \end{aligned}
\end{equation}
where $\mathcal N$ is a general differential operator, $u(s)$ are the field variables of interest, $f(s)$ is the source field, and $K(s)$ denotes an input property field defining the system's constitutive behavior. 
$\mathcal B$ is the operator for the boundary conditions defined on the boundary $\Gamma$ of the domain $\mathcal S$.
In particular, we consider the following Darcy flow problem as a motivating example throughout this paper:
\begin{equation}
\label{eq:darcy}
    - \nabla \cdot (K(s) \nabla u(s)) = f(s), \qquad s \in \mathcal S,
\end{equation}
with boundary conditions
\begin{equation}\label{eq:BCs}
    \begin{aligned}
    u(s) &= u_D(s), \qquad s \in \Gamma_D, \\
    \nabla u(s) \cdot n &= g(s), \qquad s \in \Gamma_N,
    \end{aligned}
\end{equation}
where $n$ is the unit normal vector to the Neumann boundary $\Gamma_N$, and $\Gamma_D$ is the Dirichlet boundary. 

Of particular interest are PDEs whose field variables can be computed by appropriate minimization of a field energy functional (potential) $V(u; K)$, i.e. 
\begin{equation}
\argmin_u V(u; K). 
\end{equation}
Such potentials are common in many linear and nonlinear problems in physics and engineering and serve as the basis of the finite element method. For problems where such potentials cannot be found~\cite{FilSavSho92}, one can take $V$ to be the square of the residual norm of the PDE evaluated at different trial solutions, e.g.
\begin{equation}
    V(u; K)=R^2\left(u; K\right).
\end{equation}
In this paper, we are interested in the solution of parametric PDEs for a given set of boundary conditions. 
 

\begin{defn}[Solution of a deterministic PDE system]
\label{defn:solution_pde}
Given the potential $V(u; K)$ and the boundary conditions in Eq.~(\ref{eq:BCs}), compute the solution $u(s)$ of the PDE for a given input field $K(s)$.
\end{defn}

The input field $K(s)$ is often modeled as a random field $K(s, \omega)$ in the context of uncertainty quantification, where $\omega$ denotes a random event in the sample space $\Omega$. In practice, discretized versions of this field are employed in computations, denoted as the random vector $\mathbf{x}$, i.e. $\mathbf{x} = [K(s_1), \cdots, K(s_{n_s})]$. We note that when fine-scale fluctuations of the input field $K$ are present, the dimension $n_s$ of $\mathbf{x}$ can become very high. Let $p(\mathbf{x})$ be the associated density postulated by mathematical considerations or learned from data, e.g. 
CT scans of microstructures, measurements of permeability fields, etc.
Suppose $\mathbf{y}$ denotes a discretized version of the PDE solution, i.e.
$$\mathbf{y} = [u(s_1), \cdots, u(s_{n_s})].$$ 

Note that discretized field variables are denoted in bold, while continuous field variables are non-bold.

We are interested in developing a surrogate model that allows fast calculation of the system response $\mathbf{y}$ for any input realization $\mathbf{x} \sim p(\mathbf{x})$, and potentially for various boundary conditions.
This leads to the following definition:

\begin{defn}[Deterministic Surrogate Model]
\label{defn:det_surrogate}
 Given the potential $V(u; K)$, the boundary conditions in Eq.~(\ref{eq:BCs}), and a set of training input data $\mathcal D_{\text{input}} = \{\mathbf{x}^{(i)}\}_{i=1}^N, \mathbf{x}^{(i)} \sim p(\mathbf{x})$, learn a deterministic surrogate 
$
 \mathbf{y} = \hat{\mathbf{y}}_\bm{\theta}(\mathbf{x}),
$
for predicting the solution $\mathbf{y}$ for any input $\mathbf{x} \sim p(\mathbf{x})$, where $\bm{\theta}$ denotes the parameters of the surrogate model.
\end{defn}

Note that often the density $p(\mathbf{x})$ is not known and needs to be approximated from the given data $\{\mathbf{x}^{(i)}\}_{i=1}^N$. When the density $p(\mathbf{x})$ is given, the surrogate model can be defined without referring to the particular training data set. In this case, as part of the training process, one can select any dataset of size $N$, $\{\mathbf{x}^{(i)}\}_{i=1}^N, \mathbf{x}^{(i)} \sim p(\mathbf{x})$, including the one most informative for the surrogate task. 

We note that the aforementioned problem refers to a new type of machine learning task that falls between unsupervised learning, due to the absence of labeled data (i.e. the $\mathbf{y}^{(i)}$ corresponding to each $\mathbf{x}^{(i)}$ is {\em not} provided), and (semi-)supervised learning, because the objective involves discovering the map from the input $\mathbf{x}$ to the output $\mathbf{y}$. 
Given the finite training data employed in practice and the inadequacies of the postulated model $\hat{\mathbf{y}}_\bm{\theta}(\mathbf{x})$, it is often advantageous to obtain a distribution over the possible solutions via a probabilistic surrogate, rather than a mere point estimate of the solution.

\begin{defn}[Probabilistic Surrogate Model]
Given the potential $V(u; K)$, the boundary conditions in Eq.~(\ref{eq:BCs}), and a set of training input data $\mathcal D_{\text{input}} = \{\mathbf{x}^{(i)}\}_{i=1}^N, \mathbf{x}^{(i)} \sim p(\mathbf{x})$, a probabilistic surrogate model specifies a conditional density $p_\bm{\theta}(\mathbf{y} | \mathbf{x})$, where $\bm{\theta}$ denotes the model parameters.
\end{defn}

Finally, since the input $\mathbf{x}$ arises from an underlying probability density, one may be interested in computing the statistics of the output $\mathbf{y}$, leading to the following forward uncertainty propagation problem. 


\begin{defn}[Forward Uncertainty Propagation]
\label{defn:uncertainty_propagation} 
Given the potential $V(u; K)$, the boundary conditions in Eq.~(\ref{eq:BCs}), and a set of training input data $\mathcal D_{\text{input}} = \{\mathbf{x}^{(i)}\}_{i=1}^N, \mathbf{x}^{(i)} \sim p(\mathbf{x})$, estimate moments of the response, $\mathbb{E}[\mathbf{y}], \text{Var}[\mathbf{y}], \ldots$, or more generally any aspect of the probability density of $\mathbf{y}$.
\end{defn}

","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}\label{section-intro}

As one of the nearest radio galaxies to us \citep[$D=16$ Mpc is adopted;][]{ton91}, M87 is amongst the best-studied of its source class. It is perhaps best known for its exceptionally bright arcsecond-scale jet \citep{cur18}, well-imaged at radio through X-ray frequencies with increasingly improved sensitivity and resolution over the decades \citep[e.g.,][]{bir91,spa96,mar02,per05}. Near its central $\sim (3-6) \times 10^{9}$ solar mass supermassive black hole \citep{mac97,geb09}, the jet base has been imaged down to $\sim$0.01 pc resolution \citep[$\sim 15-30\times$ the Schwarzschild radius,][]{jun99,ly07,tev09}.

At the highest energies, M87 is regularly detected by HESS, MAGIC, and VERITAS with variable TeV emission on timescales of years and flaring in a few days \citep{aha06,alb08,acc08,tev09}. The sensitivity of these Cherenkov telescopes has also enabled the detection of another well-known nearby radio galaxy, Cen~A \citep{aha09}. Without imaging resolution comparable to that of the lower-energy studies, however, variability and spectral modeling are necessary to infer the production site of the TeV $\gamma$-rays and to deduce the source physical parameters.

At high energy $\gamma$-rays ($\sim$20 MeV -- 100 GeV), we are similarly poised for new radio galaxy discoveries with the Large Area Telescope (LAT) aboard the recently launched {\it Fermi Gamma-ray Space Telescope} \citep{atw09}. Indeed, we report here the detection of a faint, point-like $\gamma$-ray source positionally coincident with M87 using the $Fermi$-LAT. After the confirmation of the EGRET discovery of Cen~A \citep[][and in preparation]{lbas}, and the recent detection of Per~A/NGC~1275 \citep{pera}, this is the third radio galaxy successfully detected by $Fermi$. Unlike the known variable TeV source, there is so far no evidence for variability of the MeV/GeV emission in M87. An origin of the LAT emission from the unresolved parsec-scale jet (hereafter denoted as the `nucleus' or `core') observed contemporaneously with $Chandra$ and the VLBA\footnote{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} is discussed. Potential contributions from the larger-scale ($\simgt 0.1-1$ kpc) jet to the unresolved $\gamma$-ray source are also briefly considered. Section 2 contains the details of the LAT observations, including a description of the $Chandra$ and VLBA data utilized, with the discussion of these results in Section 3.
 
\section{Observations}\label{section-lat}

The $Fermi$-LAT is a pair creation telescope which covers the energy range from $\sim$20 MeV to $>$300 GeV \citep{atw09}. 
It operates primarily in an `all-sky survey' mode, scanning the entire sky approximately every three hours.
The initial LAT detection of M87 resulted from nominal processing of 6 months of all-sky survey data, as was applied to the initial 3-month dataset described in \citet{bsl}, with a test statistic \citep{mat96}, $TS \sim 60$. Including here an additional 4 months of data, the $TS$ increased to 108.5, which is equivalent to a source significance $\sim\sqrt{TS}=10.4\sigma$. The resultant 10-month dataset (Aug.\ 4, 2008 - May 31, 2009) corresponds to mission elapsed times (MET) 239557418 to 265420800. Our analysis followed standard selections of "Diffuse" class events \citep{atw09} with energies $E>$200 MeV, a zenith angle cut of $<$105\hbox{$^\circ$}, and a rocking angle cut of 43\hbox{$^\circ$}\ applied in order to avoid Earth albedo $\gamma$-rays. {\it Fermi} Science tools\footnote{http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/} version v9r10 and instrumental response functions (IRFs) version {\tt P6$\_$V3$\_$DIFFUSE} were used for the analysis.


\begin{figure}
\epsscale{1.1}
\plotone{fig1.ps}
 \caption{VLA $\lambda$=90cm radio image from \citet{owe00} with the LAT
$\gamma$-ray localization error circles indicated: $r_{95\%}= 5.2'$ and
$r_{68\%}=3.2'$ (statistical only; see \S~\ref{section-lat}). The M87
core is the faint feature near the center of the few kpc-scale
double-lobed radio structure (in white). At the adopted distance $D=16$
Mpc, 1$'$ = 4.7 kpc. 
}
\label{figure-vla}
\end{figure}


A localization analysis with {\tt GTFINDSRC} resulted in a best-fit position, RA = 187\hbox{$^\circ$}.722, Dec.\ = 12\hbox{$^\circ$}.404 (J2000.0 equinox), with a 95$\%$ confidence error radius, $r_{95\%}= 0\hbox{$^\circ$}.086 = 5.2'$ (statistical only; $r_{68\%}=3.2'$). To account for possible contamination from nearby sources, the model included all point sources detected at $>5\sigma$ in an internal LAT 9-month source list within a region of interest (ROI) of $r$=15\hbox{$^\circ$}\ centered on the $\gamma$-ray position. Galactic diffuse emission was modeled using GALPROP \citep{str04}, updated to include recent gas maps and a more accurate decomposition into Galactocentric rings (galdef ID {\tt 54$\_$59varh7S}). An additional isotropic diffuse component modeled as a power-law was included. Figure~\ref{figure-vla} shows the resultant $\gamma$-ray source localization on a VLA radio image from \citet{owe00}. The $\gamma$-ray source is positionally coincident with the known radio position of the M87 core \citep[RA = 187\hbox{$^\circ$}.706, Dec.\ = 12\hbox{$^\circ$}.391;][]{fey04}, with an offset (0\hbox{$^\circ$}.020 = 1.2$'$) that is a small fraction of the localization circle. Currently, the best estimate of the systematic uncertainty in $r_{95\%}$ is 2.4$'$ \citep{bsl}, which should be added in quadrature to the determined statistical one.

Spectral analysis was performed utilizing an unbinned likelihood fit of the $>$200 MeV data with a power-law ($dN/dE \propto E^{-\Gamma}$) implemented in the {\tt GTLIKE} tool. This resulted in $F$($>$100 MeV) = 2.45 ($\pm 0.63$) $\times$ 10$^{-8}$ ph cm$^{-2}$ s$^{-1}$\ with a photon index, $\Gamma = 2.26 \pm 0.13$; errors are statistical only. 
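As a rough consistency check of these numbers (an illustrative sketch, not part of the original analysis: it assumes a pure power law $dN/dE \propto E^{-\Gamma}$ with no spectral curvature), the extrapolation factor from the fitted $>$200 MeV band down to 100 MeV follows directly from the photon index:
\begin{verbatim}
gamma = 2.26   # fitted photon index

# For dN/dE ~ E**(-gamma), the integral flux above E scales as E**(1 - gamma),
# so F(>100 MeV) / F(>200 MeV) = (100/200)**(1 - gamma) = 2**(gamma - 1).
ratio = (100.0 / 200.0) ** (1.0 - gamma)
print(f"F(>100 MeV) / F(>200 MeV) = {ratio:.2f}")   # ~2.40
\end{verbatim}
That is, under this assumption the quoted $>$100 MeV flux is roughly $2.4\times$ the integral flux actually fitted above 200 MeV.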
The flux was extrapolated down to 100 MeV to facilitate comparison with the previous EGRET non-detection of $<$ 2.18 $\times$ 10$^{-8}$ ph cm$^{-2}$ s$^{-1}$\ (2$\sigma$) from observations spanning the 1990s \citep{rei03}. Thus, there is no apparent change in the flux (i.e., no rise) in the decade since the EGRET observations. Systematic errors of ($+0.17$/$-0.15$) $\times$ 10$^{-8}$ ph cm$^{-2}$ s$^{-1}$\ on the flux and $+0.04$/$-0.11$ on the index were derived by bracketing the energy-dependent ROI of the IRFs to values of 10$\%$, 5$\%$, and 20$\%$ above and below their nominal values at log($E$[MeV]) = 2, 2.75, and 4, respectively. The spectrum extends to just over 30 GeV, where the highest energy photon is detected within the 95$\%$ containment. The LAT spectral data points presented in Figure~\ref{figure-sedgamma} were generated by performing a subsequent likelihood analysis in five equal logarithmically spaced energy bins from $0.2-31.5$ GeV. The 1$\sigma$ bounds on the spectrum, obtained from the full $>$200 MeV unbinned likelihood fit, were extended to higher energies for comparison with previous TeV measurements (see Section~3).

Lightcurves were produced in 10-day (Figure~\ref{figure-lightcurve}) and 28-day (not shown) bins over the 10-month LAT dataset. Considering the limited statistics, it was necessary to fix the photon index to the (average) fitted value in order to usefully gauge variability in the flux. Considering only statistical errors of all the binned data points with $TS \geq1$ ($1\sigma$), a $\chi^{2}$ test against the weighted mean fluxes of the 10-day and 28-day lightcurves resulted in probabilities, $P(\chi^{2},\nu) = 22\%$ and 70$\%$, respectively, indicating plausible fits to the tested hypothesis. We conclude that there is no evidence for variability over the period of observations.


\begin{figure}
\epsscale{1.2}
\plotone{fig2.eps}
 \caption{The observed LAT spectrum (red circles) with representative
TeV measurements of M87 in a low state from the 2004 observing season
(black triangles) and during a high state in 2005 (blue squares), both
by HESS \citep{aha06}. The lines indicate 1$\sigma$ bounds on the power-law
fit to the LAT data as well as its extrapolation into TeV energies.
}
\label{figure-sedgamma}
\end{figure}


\begin{figure}
\epsscale{1.2}
\plotone{fig3.eps}
 \caption{Lightcurve in 10-day bins obtained with the fitted photon
index ($\Gamma$=2.26) fixed. The average flux is indicated with the
solid horizontal line and the dotted lines are $\pm 1 \sigma$ about the
average. Data points with $TS<1$ (i.e., 1$\sigma$) are shown as upper
limits.
}
\label{figure-lightcurve}
\end{figure}


A radial profile of the $\gamma$-ray source counts (not shown) was extracted for the total energy range ($>$200 MeV). The profile is consistent with that of a point source simulated at energies $0.2-200$ GeV using the fitted spectral parameters above, with a reduced $\chi^{2}=1.04$ for 20 degrees of freedom. The total $\sim$0\hbox{$^\circ$}.2 extent of the 10's kpc-scale radio lobes of M87 \citep[Figure~\ref{figure-vla};][]{owe00} is comparable to the LAT angular resolution, $\theta_{\rm 68} \simeq 0\hbox{$^\circ$}.8~ E_{\rm GeV}^{-0.8}$ \citep{atw09}. 
Therefore, from the presently available data, we cannot disentangle (or exclude) a possible contribution of the extended radio features to the total $\gamma$-ray flux.

To gauge the X-ray activity of M87 over the duration of the LAT observations, we analyzed five new 5 ksec $Chandra$ ACIS-S images obtained at $\sim$6-week intervals between Nov.\ 2008 and May 2009 (PI: D.~E.\ Harris). The X-ray core fluxes (0.5--7 keV) in these monitoring observations, $(1.2-1.6) \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ \citep[$\sim 0.4-0.6$ keV s$^{-1}$ in the units of Fig.~9 of][]{har09}, are at the low end of the observed range over the last $\sim$7 years. Additionally, the fractional variability is small ($\sigma$/$<$flux$> \sim 0.1$), indicating low X-ray activity in the core over the LAT observing period.

At milli-arcsecond (mas) resolution in the radio band, M87 has been monitored with the VLBA at 15 GHz since 1995 as part of the 2cm survey \citep{kel04} and MOJAVE \citep{lis09} programs\footnote{See: http://www.physics.purdue.edu/MOJAVE/}. These data were re-imaged uniformly at 0.6 mas $\times$ 1.3 mas (position angle=--11\hbox{$^\circ$}) resolution to match the additional map presented in \citet{kov07}, resulting in 23 total measurements of the unresolved core flux up to the latest observation on Jan.\ 7, 2009 (one of the $Chandra$ exposures described above was obtained on the same day). This observation is the only one overlapping with the LAT dataset, and the peak flux of 1.05 Jy/beam is consistent with the average over all the measurements ($1.11 \pm 0.16$ Jy/beam). An indication of the sensitivity of these data to flaring core emission is that the high flux state observed in the detailed 43 GHz VLBA monitoring at the time of TeV flaring in 2008 \citep[][see \S~\ref{section-discussion}]{tev09} is visible in the 15 GHz data as a single high point on May 1, 2008 (1.45 Jy/beam)\footnote{Conversely, during the previous TeV flaring period spanning March - May 2005 \citep{aha06}, no comparable flare was visible in the VLBA 15 GHz core: 1.02 Jy/beam peak on Apr.\ 21st and 0.98 Jy/beam on Nov.\ 7th. Instead, the flaring X-ray/optical/radio knot HST-1 (60 pc from the core, projected) was observed to peak in early 2005 \citep{har09}, suggesting an association with the variable TeV source \citep{che07}.\label{footnote-hst1}}. This only suggests a period of low activity in the radio core (as reflected in the X-ray data), as the single radio flux may not be representative of the entire 10-month LAT viewing period.


\section{Discussion}\label{section-discussion}

In blazars, it is commonly believed that $\gamma$-rays are produced in compact emission regions moving with relativistic bulk velocities in or near the parsec-scale core, in order to explain the observed rapid variability and to avoid catastrophic pair-production \citep[e.g.,][]{don95}. Consequently, it is natural to extend this supposition to radio galaxies \citep{chi01}, which are believed to have jets oriented at systematically larger angles to our line of sight, thus constituting the parent population of blazars. Indeed, in the case of M87, a significant months-timescale rise in the flux of the sub-pc scale radio core was discovered with the VLBA (at 43 GHz) during a period in early 2008 when few-day timescale TeV flaring was detected \citep{tev09}. 
During this period of increased activity, a $Chandra$ measurement of the sub-arcsecond scale X-ray nucleus also indicated a relatively higher flux than seen in past observations \citep{har09}, thus signaling a common origin for the flaring emissions in the M87 nucleus. Therefore, during periods of lower $\gamma$-ray activity, the radio/X-ray core can also be considered a dominant source of the unresolved higher-energy emission, and we discuss this in the context of the LAT MeV/GeV detection.

In Figure~\ref{figure-sedgamma}, the LAT spectrum of M87 is plotted along with representative integrated TeV spectra from HESS \citep{aha06}. The TeV measurements cover periods when M87 was in its historical-minimum state (in 2004) and during a high state \citep[in 2005 -- cf., Fig.~3 in][]{acc08}. Although the formal difference in the fitted photon indices of the TeV data at high and low states is not statistically significant ($\Gamma = 2.22 \pm 0.15$ and $2.62 \pm 0.35$, respectively), the LAT MeV/GeV spectrum ($\Gamma =2.3$) connects smoothly with the low-state TeV spectrum. Taken together with the X-ray and radio measurements obtained during the LAT observation (\S~\ref{section-lat}), we view this as an indication that M87 is in an overall low $\gamma$-ray activity state during the considered period. In fact, no significant TeV flaring was detected in a preliminary analysis of 18 hrs of contemporaneous VERITAS observations from Jan.\ - Apr.\ 2009 \citep{hui09}.

M87 is the faintest $\gamma$-ray radio galaxy detected so far by the LAT, with a $>$100 MeV flux ($\sim 2.5$$\times$ 10$^{-8}$ ph cm$^{-2}$ s$^{-1}$) about an order of magnitude lower than in Cen~A \citep{lbas} and Per~A \citep{pera}; the corresponding $>100$ MeV luminosity, $4.9 \times 10^{41}$ erg s$^{-1}$, is 4$\times$ greater than that of Cen~A, but $>$200$\times$ smaller than in Per A. There is no evidence of intra-year or decade-timescale MeV/GeV variability in M87 (\S~\ref{section-lat}), in contrast to the LAT fluxes observed to be $\simgt 7\times$ and $\sim 1.6\times$ larger than the previous EGRET ones in the cases of Per~A \citep{pera} and Cen~A \citep{lbas}, respectively. The $\gamma$-ray photon index of M87 in the LAT band is similar to that of Per~A ($\Gamma = 2.3$ and 2.2, respectively), while being smaller than observed in Cen~A \citep[$\Gamma=2.9$;][]{lbas}. These sources are low-power (FRI) radio galaxies, and have broad low-energy synchrotron and high-energy inverse Compton (IC) components in their spectral energy distributions (SEDs) peaking roughly in the infrared and $\gamma$-ray bands, respectively. Low-energy peaked BL Lac objects have similarly shaped SEDs, with approximately equal apparent luminosities \citep[e.g.,][]{kub98}. As FRI radio galaxies are believed to constitute the parent population of BL Lacs in unified schemes \citep{urr95}, the overall similarity of their SEDs is not surprising.

We construct a SED for M87 (Figure~\ref{figure-sed}) using the LAT $\gamma$-ray spectrum and the overlapping Jan.\ 7, 2009 $Chandra$ and VLBA measurements of the core. Also plotted are historical radio to X-ray fluxes of the core \citep[see][]{spa96,tan08} measured at the highest resolutions at the respective frequencies. The core is known to be variable, with factors of $\sim2$ changes on months timescales common in the optical and X-ray bands \citep{per03,har09}. 
To help constrain the overall SED at frequencies between the X-ray and LAT measurements, we determined integrated 3$\sigma$ upper limits in three hard X-ray bands \citep[following,][]{aje08} based on the $Swift$/BAT dataset in \citet{aje09}, including approximately one additional year of exposure (i.e., $\sim$4 years total, from Mar.\ 2005 to Jan.\ 2009).


\begin{figure}
\epsscale{1.2}
\plotone{fig4.eps}
 \caption{SED of M87 with the LAT spectrum and the Jan.\ 7, 2009 MOJAVE
VLBA 15 GHz and $Chandra$ X-ray measurements of the core indicated in
red. The non-simultaneous 2004 TeV spectrum described in
Figure~\ref{figure-sedgamma} and $Swift$/BAT hard X-ray limits
(\S~\ref{section-discussion}) of the integrated emission are shown in
light brown. Historical measurements of the core from VLA 1.5, 5, 15 GHz
\citep{bir91}, IRAM 89 GHz \citep{des96}, SMA 230 GHz \citep{tan08},
$Spitzer$ 70, 24 $\mu$m \citep{shi07}, Gemini 10.8 $\mu$m \citep{per01},
$HST$ optical/UV \citep{spa96}, and $Chandra$ 1 keV from \citet[][hidden
behind the new measurements]{mar02} are plotted as black circles. The
VLBA 15 GHz flux is systematically lower than the historical
arcsec-resolution radio to infrared measurements due to the presence of
intermediate scale emission \citep[see e.g.,][]{kov07}. The blue line
shows the one-zone SSC model fit for the core described in
\S~\ref{section-discussion}.
}
\label{figure-sed}
\end{figure}


The broad-band SED is fit with a homogeneous one-zone synchrotron self-Compton (SSC) jet model \citep{fin08} assuming an angle to the line of sight, $\theta$=10\hbox{$^\circ$}, and bulk Lorentz factor, $\Gamma_{\rm b} = 2.3$ (Doppler factor, $\delta = 3.9$), consistent with observations of apparent motions of $\simgt 0.4c$ ($\Gamma_{\rm b} > 1.1$) in the parsec-scale radio jet \citep{ly07}. A broken power-law electron energy distribution $N(\gamma)\propto\gamma^{-p}$ is assumed, and the indices, $p_{\rm 1}=1.6$ for $\gamma$ = [1, $4 \times 10^{3}$] and $p_{\rm 2}=3.6$ for $\gamma$ = [$4 \times 10^{3}$, $10^{7}$], are best guesses based on the available core measurements. The normalization at low energies is constrained by the single contemporaneous VLBA 15 GHz flux, which is measured with $\sim 10^{2}-10^{3}\times$ better resolution than the adjacent points. The source radius, $r= 1.4 \times 10^{16}$ cm = 4.5 mpc, is chosen to be consistent with the best VLBA 43 GHz map resolution \citep[$r<$7.8 mpc = 0.1 mas,][]{jun99,ly07} and is of order the size implied by the few-day timescale TeV variability \citep{tev09}. For the source size adopted, internal $\gamma-\gamma$ absorption is avoided so that the LAT spectrum extends relatively smoothly into the TeV band, consistent with the historical-minimum flux detected by HESS \citep{aha06} and the preliminary upper limit of $<1.9\%$ Crab from VERITAS observations \citep{hui09} contemporaneous with the LAT ones.

In the SSC model, the magnetic field is $B = 55$ mG, and assuming the proton energy density is 10$\times$ greater than the electron energy density, the total jet power is $P_{\rm j} \sim 7.0 \times 10^{43}$ erg s$^{-1}$. The jet power is particle dominated, with only a small contribution from the magnetic field component ($P_{\rm B} \sim 2 \times 10^{40}$ erg s$^{-1}$). 
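As an order-of-magnitude check of the quoted magnetic power (an illustrative sketch under our own convention, $P_{\rm B} \approx \pi r^2 \beta c \Gamma_{\rm b}^2 U_B$ with $U_B=B^2/8\pi$, which need not match the exact definition used in \citep{fin08}):
\begin{verbatim}
import math

B = 55e-3        # magnetic field [G]
r = 1.4e16       # emission region radius [cm]
Gamma_b = 2.3    # bulk Lorentz factor
c = 3.0e10       # speed of light [cm/s]

U_B = B**2 / (8 * math.pi)              # magnetic energy density [erg/cm^3]
beta = math.sqrt(1.0 - 1.0 / Gamma_b**2)
P_B = math.pi * r**2 * beta * c * Gamma_b**2 * U_B
print(f"P_B ~ {P_B:.1e} erg/s")         # ~1e40 erg/s
\end{verbatim}
This agrees with the quoted $P_{\rm B} \sim 2 \times 10^{40}$ erg s$^{-1}$ to within a factor of $\sim$2, the residual difference presumably reflecting the exact convention adopted in the model.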
In comparison, the total kinetic power in the jet is $\sim$few $\times 10^{44}$ erg s$^{-1}$, as determined from the energetics of the kpc-scale jet and lobes \citep{bic96}, and is consistent with the jet power available from accretion, $P_{\rm j} \simlt 10^{45}$ erg s$^{-1}$ \citep{rey96,dim03}. These power estimates are similar to those derived for BL Lacs from similarly modeling their broad-band SEDs \citep[e.g.,][]{cel08}.

Single-zone SSC emission models of the kind applied here to M87 also reproduce well the broad-band SEDs up to MeV/GeV energies of the radio galaxies Per~A \citep{pera} and Cen~A \citep{chi01}. In this context, the observed MeV/GeV $\gamma$-ray fluxes of blazars appear to be correlated with their compact radio cores \citep{lbas,kov09}, suggesting a common origin in the Doppler boosted emission in the sub-parsec scale jets. The fact that the 3 radio galaxies detected by the LAT so far have amongst the brightest ($\simgt$1 Jy) unresolved radio cores, in line with these expectations \citep{ghi05}, lends evidence for a common connection between the $\gamma$-ray and radio emitting zones in such jets.

It should be emphasized that these observations are not simultaneous and, in particular, the TeV emission is known to be variable on year timescales, so other emission components may contribute to the variable emission. Therefore, although not strictly required, more sophisticated models than the single-zone one presented can reproduce or contribute to the observed emission. In particular, the beaming requirements in the one-zone SSC modeling of the three known $\gamma$-ray radio galaxies are systematically lower than required in BL Lacs, suggesting velocity profiles in the flow \citep{chi01}. Such models \citep{geo05,tav08} have in fact been used to fit the SED of M87, in addition to models based on additional spatial structure \citep[e.g.,][]{len07}. Protons, being inevitably accelerated if they co-exist with electrons in the emission regions, probably dominate energetically and dynamically the jets of powerful AGN \citep[e.g.,][]{cel08}. Applying the synchrotron-proton blazar model \citep{muc03,rei04} to the quiescent M87 data set yields reasonable model fits that support a highly magnetized compact emission region with approximate equipartition between fields and particles and a total jet power comparable with the above estimates, where protons are accelerated up to $\sim 10^{9}$ GeV.

Outside of the pc-scale core, the well-known arcsecond-scale jet \citep[e.g.,][]{bir91,mar02,per05} is also a possible source of IC emission. As both the LAT and TeV telescopes are unable to spatially resolve emission on such small scales, the expected spectral and temporal properties of the predicted emission must be examined. On the observed scales, the dominant seed photon source is the host galaxy starlight, and such an IC/starlight model applied to one of the brightest resolved knots in the jet -- knot A, $\sim$1 kpc projected distance from the core -- results in a spectrum peaking at TeV energies \citep{sta05}, thus producing a harder MeV/GeV spectrum than observed by the LAT. Even closer to the core ($\sim$60 pc, projected), the superluminal knot HST-1 \citep[$>4c-6c$;][]{bir99,che07} is a more complex case. 
This knot is more compact than knot A, and its IC emission is expected to be further enhanced by the increased energy densities of the surrounding circumnuclear and galactic photon fields, as well as of the comoving synchrotron radiation \citep{sta06}. The radio/optical/X-ray fluxes of HST-1 have been declining since its giant flare peaked in 2005 \citep{har09}, with current X-ray fluxes comparable to its pre-flare levels in 2002. Considering the variable and compact nature of the source (with observed doubling timescales of months implying $r\simlt 22 \delta$ mpc; cf., footnote~\ref{footnote-hst1}), the predicted IC spectrum has a complex temporal and spectral behavior. In the absence of detailed contemporaneous measurements, its possible role in the production of the LAT-observed MeV/GeV emission is unclear.

Continued LAT monitoring of M87 coordinated with multi-wavelength observations can extend the current study of `quiescent' emission to possible flaring, in order to further address the physics of the radiation zone. While the extragalactic $\gamma$-ray sky is dominated by blazars \citep{har99,lbas}, this optimistically indicates an emerging population of $\gamma$-ray radio galaxies. Other examples, including the few possible associations with EGRET detections such as NGC~6251 \citep{muk02} and 3C~111 \citep{sgu05,har08}, await confirmation with the LAT, and more radio galaxies are expected to be detected at lower fluxes. This holds great promise for systematic studies of relativistic jets with a range of viewing geometries in the high energy $\gamma$-ray window opened up by the $Fermi$-LAT.


\acknowledgments

The $Fermi$ LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden.

Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy.

C.C.C. was supported by an appointment to the NASA Postdoctoral Program at Goddard Space Flight Center, administered by Oak Ridge Associated Universities through a contract with NASA. Support from NASA grants GO8-9116X and GO9-0108X (D.E.H., F.M.) and from the Foundation BLANCEFLOR Boncompagni-Ludovisi, n\'ee Bildt (F.M.) is acknowledged. This research has made use of data from the MOJAVE database that is maintained by the MOJAVE team \citep{lis09}. 
We thank F.~Owen for providing the VLA 90cm image.

","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}
Let $P=\Big(p(i,j)\Big)_{i,j \in \Omega}$ be a reversible Markov chain over a sample space $\Omega$; that is, it must satisfy the following {\it detailed balance conditions}:
$$\pi_i p(i,j)=\pi_jp(j,i) \qquad \forall i,j \in \Omega,$$
where $\pi$ is a non-trivial non-negative function over $\Omega$. If $P$ admits a unique stationary distribution $\nu$, then ${1 \over \sum\limits_{i \in \Omega} \pi_i}\pi=\nu$.

It can be shown that the reversible $P$ is a self-adjoint operator in $\ell^2(\pi)$, the space generated by the following inner product induced by $\pi$:
 $$\ip{f}{g}_{\pi}=\sum_{i \in \Omega} f(i)g(i)\pi_i$$
 If $P$ is a tridiagonal operator (i.e.\ a nearest-neighbor random walk) on $\Omega=\{0,1,2,\dots\}$, then it must have a simple spectrum, and it is diagonalizable via orthogonal polynomials, as was studied in the 1950s by Karlin and McGregor; see \cite{SKJM1959}, \cite{GS1975}, and \cite{MI2005}. There, the extended eigenfunctions $Q_j(\lambda)$ satisfying $Q_0 \equiv 1$ and $$P\begin{pmatrix}
 Q_0(\lambda)\\
 Q_1(\lambda)\\
 Q_2(\lambda)\\
 \vdots
\end{pmatrix} =
\lambda\begin{pmatrix}
 Q_0(\lambda)\\
 Q_1(\lambda)\\
 Q_2(\lambda)\\
 \vdots
\end{pmatrix}
$$
 are orthogonal polynomials with respect to a probability measure
 $\psi$. If we let $p_t(i,j)$ denote the entries of the operator
 $P^t$ that represent the $t$-step transition probabilities from state $i$ to state $j$, then
 $$p_t(i,j)=\pi_j \int_{-1}^1 \lambda^t Q_i(\lambda) Q_j(\lambda) d\psi(\lambda)~~~\forall i,j \in \Omega, $$
 where $\pi_j$ with $\pi_0=1$ is the reversibility measure of $P$.
 

 We will use the following distance to measure the deviation from the stationary distribution on a scale from zero to one. 
 \begin{mydef}
If $\mu$ and $\nu$ are two probability distributions over a sample space $\Omega$, then the {\it total variation distance} is 
$$\| \nu - \mu \|_{TV} = {1 \over 2} \sum_{x \in \Omega} |\nu(x)-\mu(x)|=\sup_{A \subset \Omega} |\nu(A)-\mu(A)|$$
\end{mydef}
 \noindent
 Let $\rho=\sum_{k=0}^{\infty} \pi_k$. Observe that $\rho < \infty$ if and only if the random walk $P$ is positive recurrent. Recall that $\nu={1 \over \rho}\pi$ is the stationary probability distribution. If, in addition to being positive recurrent, the aperiodic nearest-neighbor Markov chain originates at site $j$, then
 the total variation distance between the distribution $\mu_t=\mu_0P^t$ and $\nu$ is given by
 \begin{equation} \label{eqTV}
 \left\|\nu - \mu_t \right\|_{TV} = {1 \over 2} \sum_{n \in \Omega} \pi_n \left|\int_{(-1,1)} \lambda^t Q_j(\lambda) Q_n(\lambda) d\psi(\lambda)\right|,
 \end{equation} 
 as the measure $\psi$ contains a point mass of weight ${1 \over \rho}$ at $1$. 
 See \cite{YK2009}.
 
 The rates of convergence are quantified via mixing times, which for an infinite state space with a unique stationary distribution are defined as follows. Here the notion of a mixing time depends on the state of origination $j$ of the Markov chain. See \cite{YK2010}.
\begin{mydef}
 Suppose $P$ is a Markov chain with a stationary probability distribution $\nu$ that commences at $X_0=j$. 
Given an $\epsilon >0$, the mixing time $t_{mix}(\epsilon)$ is defined as
 $$t_{mix}(\epsilon)=\min\left\{t~:~\|\nu-\mu_t\|_{TV} \leq \epsilon \right\}$$
\end{mydef}
\vskip 0.2 in
\noindent
 In the case of a nearest-neighbor process on $\Omega=\{0,1,2,\dots\}$ commencing at $j$, the corresponding mixing time has the following simple expression in orthogonal polynomials:
$$ t_{mix}(\epsilon)=\min\left\{t~:~\sum_{n} \pi_n \left|\int_{(-1,1)} \lambda^t Q_j(\lambda) Q_n(\lambda) d\psi(\lambda)\right| \leq 2 \epsilon \right\}.$$

Investigations into the use of orthogonal polynomial techniques (see \cite{SKJM1959}, \cite{GS1975}) in the estimation of mixing times and of the distance to the stationary distribution have been carried out in \cite{YK2010} for certain classes of random walks. In this paper we consider the problem from the other direction: namely, given a large class of orthogonal polynomials, we outline how to find the corresponding random walk and estimate the rate of decay of the distance to the stationary distribution. 

More specifically, beginning with the Jacobi polynomials, whose weight function is supported on $(-1,1)$, we use Koornwinder's techniques \cite{TK1984} to attach a point mass at $1$. For the class of Jacobi type polynomials $Q_n$ thus obtained, the three-term recurrence relationship is understood \cite{HKJW1996}. The tridiagonal operator corresponding to these polynomials is not a Markov chain; however, the operator can be deformed to become one. The corresponding changes in the polynomials are easy to trace. This gives a four-parameter family of nearest-neighbor Markov chains whose distance to the stationary distribution decays in a non-geometric way. In principle, the asymptotic analysis presented in this paper can be applied to the entire four-parameter family. We outline how this proceeds for the Chebyshev-type subfamily obtained by taking $\alpha=\beta=-1/2$ in the Koornwinder class.

We would like to point out the important results of V.~B.~Uvarov \cite{VU1969} on the transformation of orthogonal polynomial systems by attaching point masses to the orthogonality measure, predating the Koornwinder results by fifteen years. The results of V.~B.~Uvarov can potentially be used to significantly extend the scope of the convergence rate problems covered in the current manuscript.

The paper is organized as follows. In Section \ref{sec:koorn} we discuss constructing positive recurrent Markov chains from the Jacobi family of orthogonal polynomials, adjusted by using Koornwinder's techniques to place a point mass at $x=1$. Next, we derive an asymptotic upper bound on the total variation distance to the stationary distribution in the case of general $\alpha>-1$ and $\beta>-1$ in Section \ref{sec:asympt}. Our main result, Theorem~\ref{main}, is presented in Section \ref{sec:chebyshev}. There, for the case of Chebyshev type polynomials corresponding to $\alpha=\beta=-1/2$, we produce both asymptotic lower and upper bounds for the total variation distance. 
Finally, in Section~\ref{comparison} we compare our main result to related results obtained by other techniques.




\section{From Orthogonal Polynomials to Random Walks via Koornwinder}\label{sec:koorn}


T.~Koornwinder \cite{TK1984} provides a method for finding the orthogonal polynomials whose weight distribution is obtained from the standard Jacobi weight functions $C_{\alpha,\beta}(1-x)^{\alpha}(1+x)^{\beta}$ by attaching weighted point masses at $-1$ and $1$. A spectral measure corresponding to a Markov chain contains a point mass at $-1$ if and only if the Markov chain is periodic. A spectral measure for an aperiodic Markov chain contains a point mass at $1$ if and only if it is positive recurrent. Thus, in order to create a class of positive recurrent aperiodic Markov chains with a Koornwinder type orthogonal polynomial diagonalization, we only need to attach a point mass at $1$ and no point mass at $-1$.
 

Let $N\geq 0$ and let $\alpha$, $\beta>-1$. For $n=0, 1, 2, \ldots$ define
\begin{equation}\label{eq:Koornwinder}
P_n^{\alpha,\beta,N}(x)=\Big(\frac{(\alpha+\beta+2)_{n-1}}{n!}\Big)A_n\Big[-N(1+x)\frac{d}{dx}+B_n\Big]P_n^{\alpha,\beta}(x),
\end{equation}
where 
$$A_n=\frac{(\alpha+1)_n}{(\beta+1)_n},$$
$$B_n=\frac{(\beta+1)_nn!}{(\alpha+1)_n(\alpha+\beta+2)_{n-1}}+\frac{n(n+\alpha+\beta+1)N}{(\alpha+1)},$$
$P_n^{\alpha,\beta}$ is the standard Jacobi polynomial of degree $n$ and order $(\alpha,\beta)$, and $(x)_n=x(x+1)\cdots(x+n-1)$. These polynomials form a system of orthogonal polynomials with respect to the probability measure $d\psi(x)={C_{\alpha,\beta}(1-x)^\alpha(1+x)^\beta dx+N\delta_1(x) \over N+1}$, where $C_{\alpha,\beta}={1 \over \mathcal{B}(\alpha+1,\beta+1)}$, $\mathcal{B}(\cdot,\cdot)$ is the beta function, and $\delta_1(x)$ denotes a unit point mass measure at $x=1$. See T.~Koornwinder \cite{TK1984}. Direct calculation shows that $P_n^{\alpha,\beta,N}(1)=\frac{(\alpha+1)_n}{n!}$, and so we normalize $Q_n(x)=n!P_n^{\alpha,\beta,N}(x)/(\alpha+1)_n$, which gives the orthogonal set of polynomials with respect to $d\psi$ satisfying $Q_n(1)=1$.

As we have mentioned earlier, the tridiagonal operator $H$ corresponding to the recurrence relation of the orthogonal polynomials need not be a Markov chain operator. Let $p_i$, $r_i$ and $q_i$ denote the coefficients in the tridiagonal recursion
$$p_i Q_{i+1}(x)+r_iQ_i(x)+q_i Q_{i-1}(x)=xQ_i(x),$$
for $i=0,1,2,\dots$, where we let $Q_{-1} \equiv 0$ as always.


Notice that because the polynomials are normalized so that $Q_i(1)=1$, it follows immediately that $p_i+r_i+q_i=1$.
However, some of the coefficients $p_i$, $r_i$, or $q_i$ may turn out to be negative, in which case the rows of the tridiagonal operator $H$ would add up to one but would not necessarily consist of all nonnegative entries. 

In the case when all the negative entries are located on the main diagonal, this may be overcome by considering the operator $\frac{1}{\lambda+1}(H+\lambda I)$. For $\lambda \geq-\inf\limits_i r_i$ this ensures that all entries in the matrix $\frac{1}{\lambda+1}(H+\lambda I)$ are nonnegative and hence can be thought of as transition probabilities. 
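To illustrate this shift numerically (a small sketch with made-up recurrence coefficients satisfying $p_i+r_i+q_i=1$, not the actual Koornwinder coefficients), one can build a truncated tridiagonal $H$ and check that $\frac{1}{\lambda+1}(H+\lambda I)$ is a stochastic matrix:
\begin{verbatim}
import numpy as np

n = 6                                   # truncation size, for illustration only
p = np.full(n, 0.55)                    # "birth" coefficients p_i
q = np.concatenate([[0.0], np.full(n - 1, 0.50)])  # "death" coefficients, q_0 = 0
r = 1.0 - p - q                         # holding terms; r_i = -0.05 < 0 for i >= 1

H = np.diag(r) + np.diag(p[:-1], 1) + np.diag(q[1:], -1)

lam = -r.min()                          # any finite lambda >= -inf_i r_i works
P_lam = (H + lam * np.eye(n)) / (1.0 + lam)

assert (P_lam >= 0).all()                        # nonnegative entries
assert np.allclose(P_lam[:-1].sum(axis=1), 1.0)  # rows sum to 1 (last row truncated)
\end{verbatim}
The effect of this deformation on the orthogonal polynomials and on the spectral measure is traced in the next section.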
More generally, if a polynomial $p(\\cdot)$ with coefficients adding up to one is found to satisfy $p(H) \\geq 0$ coordinatewise, then such $p(H)$ would be a Markov chain.\n\n\n\n\n\n\n\\section{An Asymptotic Upper Bound for Jacobi type Polynomials}\\label{sec:asympt}\n\nIn this section we derive asymptotic estimates for the distance to the\nstationary distribution when our operator given by\n$P_{\\lambda}=\\frac{1}{\\lambda+1}(H+\\lambda I)$ is a Markov chain. In\nthis case the Karlin-McGregor orthogonal polynomials for $P_{\\lambda}$\nare $Q_j\\Big((1+\\lambda)x-\\lambda\\Big)$ and the orthogonality\nprobability measure is \n${1 \\over 1+\\lambda}d\\psi\\Big((1+\\lambda)x-\\lambda\\Big)$ over \n$\\Big({\\lambda-1 \\over \\lambda+1}, 1 \\Big]$, where the $Q_j$ are the\nJacobi type polynomials introduced by Koornwinder from the previous\nsection.\n\nOf course the new operator $P_{\\lambda}$ is again tridiagonal. For the $n$-th row of $P_{\\lambda}$, let us denote the $(n-1)$-st, $n$-th, and $(n+1)$-st entries by $q_n^\\lambda$, $r_n^\\lambda$, and $p_n^\\lambda$ respectively. Here the entries of $P_{\\lambda}$ can be expressed via the entries of $H$ as follows:\n$$p_n^\\lambda=\\frac{p_n}{1+\\lambda}, \\qquad r_n^\\lambda=\\frac{r_n+\\lambda}{1+\\lambda}, \\quad \\text{ and } \\quad q_n^\\lambda=\\frac{q_n}{1+\\lambda}.$$ \nClearly we still have that\n$p_n^\\lambda+r_n^\\lambda+q_n^\\lambda=1$. \n\nWith the probabilities in hand we now compute the corresponding reversibility function $\\pi_n^\\lambda$ of $P_{\\lambda}$, which is equal to the corresponding function of $H$ defined as $\\pi_n=\\frac{p_0\\cdots p_{n-1}}{q_1\\cdots q_n}$. Here $\\pi_0^\\lambda=1=\\pi_0$ and $\\pi_n^\\lambda=\\frac{p_0^\\lambda\\cdots p_{n-1}^\\lambda}{q_1^\\lambda\\cdots q_n^\\lambda}=\\frac{p_0\\cdots p_{n-1}}{q_1\\cdots q_n}=\\pi_n$.\n\n\nChanging variables in \\eqref{eqTV} yields\n$$\\|\\nu-\\mu_t\\|_{TV}=\\frac{1}{2}\\sum_{n=0}^{\\infty}\\pi_n\\bigg|\\int_{(-1,1)}\\Big(\\frac{x}{1+\\lambda}+\\frac{\\lambda}{1+\\lambda}\\Big)^tQ_j(x)Q_n(x)\\,d\\psi(x)\\bigg|.$$\n\n\n\\begin{lem}\\label{lem:bounds}\nConsider the case when $p_n>0$ and $q_n>0$ for all $n \\geq 0$, and \\mbox{$\\infty> \\lambda \\geq-\\inf\\limits_i r_i$}.\nThen, for the Jacobi type polynomials $Q_j$ the distance to the stationary distribution satisfies the following bound\n\\begin{equation}\\label{jacobi_bound}\n\\norm{\\nu-\\mu_t}_{TV}\\leq\\frac{C_{\\alpha,\\beta,\\lambda}\\norm{Q_j}_\\infty}{(t+1)^{1+\\alpha}}\\sum_{n=0}^{t+j}\\pi_n\\norm{Q_n}_\\infty+\\frac{1}{2}\\sum_{n=j+t+1}^{\\infty}\\pi_n\n\\end{equation}\nfor a certain constant $C_{\\alpha,\\beta,\\lambda}$.\n\\end{lem}\n\n\\begin{proof}\nFor $n>j+t$, the function $\\Big(\\frac{x}{1+\\lambda}+\\frac{\\lambda}{1+\\lambda}\\Big)^tQ_j(x)$ is a polynomial of degree $j+t<n$, so orthogonality over the full support of $d\\psi$, together with the normalization $Q_i(1)=1$ and the point mass of weight $\\frac{N}{N+1}$ at $x=1$, gives\n$$\\int_{(-1,1)}\\Big(\\frac{x}{1+\\lambda}+\\frac{\\lambda}{1+\\lambda}\\Big)^tQ_j(x)Q_n(x)\\,d\\psi(x)=-\\frac{N}{N+1},$$\nwhose absolute value is bounded by one.\n It is then easy to see that \n$\\norm{\\nu-\\mu_t}_{TV}\\leq I+II+\\frac{1}{2}\\sum_{n=j+t+1}^{\\infty}\\pi_n$, where \n$$I=\\frac{1}{2}\\sum_{n=0}^{j+t}\\pi_n\\int_{(-1,0)}\\abs{\\Big(\\frac{x+\\lambda}{1+\\lambda}\\Big)^tQ_j(x)Q_n(x)}(1-x)^\\alpha(1+x)^\\beta\\,dx$$\n$$\\text{ and } \\quad II=\\frac{1}{2}\\sum_{n=0}^{j+t}\\pi_n\\int_{(0,1)}\\Big(\\frac{x+\\lambda}{1+\\lambda}\\Big)^t\\big|Q_j(x)Q_n(x)\\big|(1-x)^\\alpha(1+x)^\\beta\\,dx$$\n\nTo estimate $I$ notice that $\\abs{\\frac{x+\\lambda}{1+\\lambda}}\\leq\n\\max(\\frac{\\lambda}{1+\\lambda}, \\abs{\\frac{1-\\lambda}{1+\\lambda}})<1$ for\n$\\lambda>0$. 
Hence \\mbox{$I\\leq A_j(|t|)e^{-ct}$} for an appropriate polynomial $A_j(\\cdot)$ such that \n$$\\frac{1}{2}\\|Q_j\\|_{\\infty}\\sum_{n=0}^{j+t}\\pi_n \\|Q_n\\|_{\\infty}\\int_{(-1,0)}(1-x)^\\alpha(1+x)^\\beta\\,dx \\leq A_j(|t|),$$\nand \n$c=-\\log\\left\\{\\max(\\frac{\\lambda}{1+\\lambda}, \\abs{\\frac{1-\\lambda}{1+\\lambda}})\\right\\}$. Such a polynomial $A_j$ exists since $\\|Q_n\\|_{\\infty}$ grows polynomially in $n$ and $\\pi_n$ is bounded. See formula 22.14.1 in Abramowitz and Stegun \\cite{MAIS1972}.\n\n\nThus $I$ is clearly bounded by the right hand side of \\eqref{jacobi_bound}.\n\nFor the second term,\n$II\\leq \\frac{1}{2}\\sum_{n=0}^{j+t}\\pi_n\\norm{Q_nQ_j}_{\\infty}\\int_0^1\\Big(\\frac{x+\\lambda}{1+\\lambda}\\Big)^t(1-x)^\\alpha(1+x)^\\beta\\,dx$.\nHere we make the change of variables $s=-\\log(\\frac{x+\\lambda}{1+\\lambda})$, and for\nsimplicity let $x(s)=(1+\\lambda)e^{-s}-\\lambda$. Then the\nintegral reduces to\n$$(1+\\lambda)^{1+\\alpha}\\int_0^{\\log(\\frac{1+\\lambda}{\\lambda})}e^{-s(t+1)}\\big(1-\\lambda+(1+\\lambda)e^{-s}\\big)^\\beta\\big(1-e^{-s}\\big)^\\alpha\\,ds$$\n\nUsing the fact that $(1-e^{-s})^\\alpha=s^\\alpha\\Big(1+O(s)\\Big)$ and \n$\\big(1-\\lambda+(1+\\lambda)e^{-s}\\big)^\\beta=2^\\beta+O(s)$,\nthe above integral becomes\n$$(1+\\lambda)^{1+\\alpha}\\int_0^{\\log(\\frac{1+\\lambda}{\\lambda})}e^{-s(t+1)}\\Big(2^\\beta\ns^\\alpha+O(s^{\\alpha+1})\\Big)\\,ds,$$\nwhere the upper bounds $O(s)$ can be made specific.\nNext, applying Laplace's method (the substitution $u=s(t+1)$ turns the integral into $(t+1)^{-(1+\\alpha)}\\int_0^{(t+1)\\log(\\frac{1+\\lambda}{\\lambda})}e^{-u}u^\\alpha\\,du$, which converges to $\\Gamma(\\alpha+1)$) yields the following asymptotics \n$$\\int_0^{\\log(\\frac{1+\\lambda}{\\lambda})}e^{-s(t+1)}\ns^\\alpha\\,ds ~\\asymp~ \\frac{\\Gamma(\\alpha+1)}{(t+1)^{1+\\alpha}}$$\n\nThus one can obtain a large enough constant $\\widetilde{C}_{\\alpha,\\beta,\\lambda}$ such that\n$$II\\leq\n\\frac{\\widetilde{C}_{\\alpha,\\beta,\\lambda}\\norm{Q_j}_\\infty}{(t+1)^{1+\\alpha}}\\sum_{n=0}^{t+j}\\pi_n\\norm{Q_n}_\\infty$$ \n\\end{proof}\n\nIn order to derive effective bounds on $\\norm{\\nu-\\mu_t}_{TV}$ it is\nnecessary to gain a more detailed understanding of $\\pi_n$\nand $\\norm{Q_n}_\\infty$. When $\\min(\\alpha,\\beta)\\geq -\\frac{1}{2}$, the $\\norm{Q_n}_\\infty$ can be\nestimated using the known maximum for the Jacobi polynomials found in Lemma 4.2.1 on page 85 of \\cite{MI2005} together\nwith Koornwinder's definition of these polynomials.\n\nOne way to derive estimates for $\\pi_n$ is to use the expression for\n$\\pi_n$ in terms of $p_n$, $r_n$, and $q_n$. For Koornwinder's class of\npolynomials these expressions are derived for all $\\alpha, \\beta, M,\nN$ in \\cite{HKJW1996}. It can be verified directly that in the case\nwhen $M=0$,\n$p_0=\\frac{2(\\alpha+1)}{(1+N)(\\alpha+\\beta+2)}>0$. After taking into\naccount the normalization $Q_n(1)=1$, and correcting a\nsmall typo, it can be verified from equations (41)--(45) in \\cite{HKJW1996} that $p_n$ and\n$q_n$ are positive for $n\\geq 1$. Thus the conditions for Lemma~\\ref{lem:bounds} are satisfied for all $\\alpha, \\beta >-1$. Furthermore, from (18), (19) and (32) in \\cite{HKJW1996}\nit can be easily seen that $p_n\\to \\frac{1}{2}$ and $q_n\\to \\frac{1}{2}$\nas $n\\to\\infty$, and hence $r_n=1-p_n-q_n\\to 0$ as $n\\to \\infty$. Thus\nfor $\\lambda$ large enough the operator $P_\\lambda$ corresponds to a\nMarkov chain.\n\nAs the expressions for these quantities are\nlaborious to write down, we instead focus our attention on a specific\ncase in which the calculations are easy to follow. 
Specifically,\nwe focus on the Chebyshev polynomials.\n\n\n\n\\section{Chebyshev Polynomials: Upper and Lower Bounds}\\label{sec:chebyshev}\nBy applying Koornwinder's results to the Chebyshev polynomials of the first kind, which correspond to the case of $\\alpha=\\beta=-{1 \\over 2}$, we arrive at a family of orthogonal polynomials with respect to the measure\n$\\frac{1}{1+N}\\Big(\\frac{1}{\\pi\\sqrt{1-x^2}}dx+N\\delta_1(x)\\Big)$.\nUsing \\eqref{eq:Koornwinder} we find that here,\n$$Q_n(x):=-N(x+1)U_{n-1}(x)+(1+2nN)T_n(x),$$ \nwhere $T_n$ and $U_n$ denote the Chebyshev polynomials of the first and second kind\nrespectively. Notice that $U_n(1)=n+1$ and $T_n(1)=1$, from which it is immediate to verify that $Q_n(1)=1$.\n\nOnce again we consider the operator \n$$H=\\begin{pmatrix}r_0 & p_0 & 0 & 0 & 0 &\\cdots\\\\\nq_1 & r_1 & p_1 & 0 & 0 & \\vdots\\\\\n0 & q_2 & r_2 & p_2 & 0 & \\ddots\\\\\n0 & 0 & q_3 & r_3 & \\ddots & \\ddots\\\\\n\\vdots & \\cdots & \\ddots & \\ddots & \\ddots & \\dots\\\\\n\\end{pmatrix},$$\non $\\ell^2(\\pi)$, so that the vector $(Q_0(x), Q_1(x), Q_2(x),\n\\ldots)^T$ is an eigenvector with eigenvalue~$x$.\n\nSpecifically the numbers $p_n$, $r_n$, and $q_n$ satisfy\n$p_0Q_1(x)+r_0Q_0(x)=xQ_0(x)$ for $n=0$, and \n\\begin{equation}\\label{eigvaleq}\n p_nQ_{n+1}(x)+r_nQ_n(x)+q_nQ_{n-1}(x)=xQ_n(x)\\qquad \\text{for $n\\geq 1$.}\n\\end{equation}\n\nKiesel and Wimp \\cite{HKJW1996} give expressions\nfor $p_n$, $r_n$ and $q_n$ for $n\\geq 0$. To find the expressions\ndirectly in this case one could use \\eqref{eigvaleq} to derive three\nlinearly independent equations, and solve for $p_n$, $r_n$, and $q_n$.\n\nFor the case $n=0$ the equation immediately gives us that\n$p_0=\\frac{1}{N+1}$ and $r_0=\\frac{N}{N+1}$. Evaluating at convenient\nchoices of $x$, such as $-1, 0, 1$, does not yield linearly\nindependent equations for all $n$. One solution is to\nevaluate at $x=1$ and $x=-1$, and to differentiate \\eqref{eigvaleq} and then\nevaluate the result at $x=0$. This gives three linearly independent equations,\nand a direct calculation then shows that\n\\begin{equation}\n\\begin{gathered}\\label{eq:linearprobs}\np_n=\\frac{1}{2}\\cdot\\frac{1+(2n-1)N}{1+(2n+1)N}, \\quad\nq_n=\\frac{1}{2}\\cdot\\frac{1+(2n+1)N}{1+(2n-1)N}, \\quad \\text{and}\\\\\nr_n=\\frac{-2N^2}{(1+(2n-1)N)(1+(2n+1)N)}.\n\\end{gathered}\n\\end{equation}\nAs $r_n\\leq 0$ the operator $H$ fails to correspond to a Markov chain.\nHowever, this is the case addressed at the end of Section~\\ref{sec:koorn} of the current paper.\nThus consider $P_\\lambda=\\frac{1}{1+\\lambda}(H+\\lambda I)$. Now, $|r_n|$ is\na decreasing sequence for $n\\geq 1$, so provided that $\\lambda\\geq |r_1|=\\frac{2N^2}{(1+N)(1+3N)}$, we have \n$p_n^\\lambda, r_n^\\lambda, q_n^\\lambda\\geq 0$. Thus we can consider these coefficients $p_n^\\lambda$, $r_n^\\lambda$, and $q_n^\\lambda$ as the transition probabilities in a nearest neighbor random walk.\n\nRecall that $\\pi_n^\\lambda=\\pi_n=\\frac{p_0\\cdots p_{n-1}}{q_1\\cdots\n q_n}$. Thus for $P_\\lambda$ we can directly calculate $\\pi_n$ from\n\\eqref{eq:linearprobs}. We have that $p_0\\cdots\np_{n-1}=\\frac{1}{2^{n-1}}\\frac{1}{1+(2n-1)N}$ and similarly $q_1\\cdots q_n=\\frac{1}{2^n}\\frac{1+(2n+1)N}{1+N}$. 
Thus $\\pi_n=\\frac{2(1+N)}{(1+(2n-1)N)(1+(2n+1)N)}$ for $n\\geq 1$.\n\n\\begin{thm}\\label{main} Let $N>0$ and $\\lambda\\geq \\frac{2N^2}{(1+N)(1+3N)}$ be given.\n Consider the case of the Chebyshev-type random walks over $\\Omega=\\{0,1,2,\\dots \\}$ with probability operator\n $$P_\\lambda=\\begin{pmatrix}r_0^\\lambda & p_0^\\lambda & 0 & 0 & 0 &\\cdots\\\\\nq_1^\\lambda & r_1^\\lambda & p_1^\\lambda & 0 & 0 & \\vdots\\\\\n0 & q_2^\\lambda & r_2^\\lambda & p_2^\\lambda & 0 & \\ddots\\\\\n0 & 0 & q_3^\\lambda & r_3^\\lambda & \\ddots & \\ddots\\\\\n\\vdots & \\cdots & \\ddots & \\ddots & \\ddots & \\dots\\\\\n\\end{pmatrix},$$\n where $p_n^\\lambda=\\frac{1}{2(1+\\lambda)}\\cdot\\frac{1+(2n-1)N}{1+(2n+1)N}$, \\quad $q_n^\\lambda=\\frac{1}{2(1+\\lambda)}\\cdot\\frac{1+(2n+1)N}{1+(2n-1)N}$\n and \\quad $r_n^\\lambda=1-p_n^\\lambda-q_n^\\lambda$ for $n \\geq 1$, with $p_0^\\lambda=\\frac{1}{(1+\\lambda)(N+1)}=1-r_0^\\lambda$. \n \n Then for the random walk originating at some site $j \\in \\Omega$, there are positive constants $c$ and $C$ that depend on $j$, $N$ and $\\lambda$ such that \n $$\\frac{c}{\\sqrt{t}}\\leq \\norm{\\nu-\\mu_t}_{TV}\\leq\n C\\frac{\\log t}{\\sqrt{t}}$$\n for $t$ sufficiently large.\n \n\\end{thm}\n\\begin{proof}\n For the upper bound we simply need to estimate the sums appearing in Lemma~\\ref{lem:bounds}. Since $\\pi_n=O\\big(\\frac{1}{(n+1)^2}\\big)$, it is easy to see that the second sum $\\sum_{n=j+t+1}^{\\infty}\\pi_n$ is bounded by $C_N\/(t+j+1)$. \n The main term turns out to be the first sum.\n\n In the case of the Chebyshev type polynomials we have the bound\n \\mbox{$\\norm{Q_n}_\\infty\\leq 4Nn+1$}. Thus the first sum in Lemma~\\ref{lem:bounds} is bounded by \n $\\hat C_{\\alpha,\\beta,\\lambda,N}\\frac{j\\log(t+j+2)}{\\sqrt t}$ for an appropriate constant $\\hat C_{\\alpha,\\beta,\\lambda,N}$.\n And so, for an appropriate $C$ and large $t$,\n $$\\norm{\\nu-\\mu_t}_{TV}\\leq C\\frac{\\log t}{\\sqrt{t}}$$\n\n On the other hand, recalling that $Q_0(x)=\\pi_0=1$, we have that\n $$\\norm{\\nu-\\mu_t}_{TV}\\geq \\frac{C_{\\alpha,\\beta}}{2(N+1)}\\abs{\\int_{(-1,1)}\\Big(\\frac{x+\\lambda}{1+\\lambda}\\Big)^tQ_j(x)(1+x)^\\beta(1-x)^\\alpha\\,dx}$$\n However we have already shown that for large enough $t$, the above right-hand side is asymptotic to\n $\\frac{\\tilde C}{\\sqrt{1+t}}$.\n\\end{proof}\n\nWe finish with some concluding remarks. At first the bound\n\\mbox{$\\norm{Q_n}_\\infty\\leq 4Nn+1$} may appear somewhat imprecise, since at $x=1$ we have $Q_n(1)=1$. \nIt is tempting to suggest that the correct asymptotic for the total\nvariation norm is $C\/\\sqrt{t}$. However, on closer examination, in the\nneighborhood of $x=1$ one finds $Q_n'(x)\\approx n^3$. This $n^3$ causes the\nerrors to be at least of the order of the main term. Overall\nit seems unlikely to the authors that $C\/\\sqrt{t}$ is the correct asymptotic for the Chebyshev-type polynomials. \n\n\n\n\n\\section{Comparison to other methods}\\label{comparison}\n\nAn ergodic Markov chain $P=\\Big(p(i,j)\\Big)_{i,j \\in \\Omega}$ with stationary distribution $\\nu$ is said to be {\\it geometrically ergodic} if and only if there exists $0<\\rho<1$ such that, for each initial state $i$, $\\|p_t(i,\\cdot)-\\nu(\\cdot)\\|_{TV}=O(\\rho^t)$. When the state space $\\Omega$ is finite, of size $d$, the eigenvalues of $P$ can be ordered so that\n$$1=\\lambda_1>|\\lambda_2| \\geq \\dots \\geq |\\lambda_d|.$$\nIn that case, the Perron-Frobenius Theorem will imply geometric ergodicity with \n$$\\|p_t(i,\\cdot)-\\nu(\\cdot)\\|_{TV}=O(t^{m_2-1} |\\lambda_2|^t),$$\nwhere $m_2$ is the algebraic multiplicity of $\\lambda_2$. 
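\nAs a simple illustration of the finite-state statement (our example, not drawn from the references): for the two-state chain with transition matrix $\\begin{pmatrix} 1-p & p \\\\ q & 1-q \\end{pmatrix}$, where $0<p,q<1$, the eigenvalues are $\\lambda_1=1$ and $\\lambda_2=1-p-q$, the stationary distribution is $\\nu=\\big({q \\over p+q},{p \\over p+q}\\big)$, and a direct computation gives\n$$\\|p_t(0,\\cdot)-\\nu(\\cdot)\\|_{TV}={p \\over p+q}\\,|1-p-q|^t,$$\nso the distance to stationarity decays geometrically at the rate $|\\lambda_2|$.\n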
Here the existence of a positive {\\it spectral gap}, $1-|\\lambda_2|>0$, implies geometric ergodicity with the exponent $-\\log |\\lambda_2| \\approx 1- |\\lambda_2|$ whenever the spectral gap is small enough.\n\nWhen dealing with Markov chains over a general countably infinite state space $\\Omega$, the existence of a positive spectral gap of the operator $P$ is essentially equivalent to the chain being geometrically ergodic. For instance, the orthogonal polynomial approach in \\cite{YK2010} resulted in establishing the geometric rate $R=\\max\\left\\{r+2\\sqrt{pq},~{q \\over q+r}\\right\\}$ for the Markov chain\n$$P=\\left(\\begin{array}{ccccc}0 & 1 & 0 & 0 & \\dots \\\\q & r & p & 0 & \\dots \\\\0 & q & r & p & \\ddots \\\\0 & 0 & q & r & \\ddots \\\\\\vdots & \\vdots & \\ddots & \\ddots & \\ddots\\end{array}\\right) \\qquad q>p,~~~r>0$$ \nover $\\Omega=\\mathbb{Z}_+$, together with establishing the value of the spectral gap, $1-r>0$.\n\nAs for the Markov chain $P_\\lambda$ considered in Theorem~\\ref{main} of this paper, its spectral measure ${1 \\over 1+\\lambda}d\\psi\\Big((1+\\lambda)x-\\lambda\\Big)$ over $\\Big({\\lambda-1 \\over \\lambda+1}, 1 \\Big]$ admits {\\it no} spectral gap between the point mass at $1$ and the rest of the spectrum, implying sub-geometric ergodicity. The sub-exponential rate in the total variation norm is then estimated to be of polynomial order between $\\frac{1}{\\sqrt{t}}$ and $\\frac{\\log t}{\\sqrt{t}}$.\n\nIn the field of probability and stochastic processes, there is great\ninterest in finding methods for analyzing Markov chains over a general\nstate space that have polynomial rates of convergence to the stationary\ndistribution. In Menshikov and Popov \\cite{MMSP1995} a one-dimensional\nversion of Lamperti's problem is considered. There, a class of ergodic\nMarkov chains on a countably infinite state space with sub-exponential\nconvergence to the stationary probabilities is studied via\nprobabilistic techniques. One of their results relates to our main\nresult, Theorem~\\ref{main}. Namely, Theorem 3.1 of \\cite{MMSP1995},\nwhen applied to our case, implies for any $\\varepsilon>0$ the\nexistence of positive real constants $C_1$ and $C_2$ such that\n$$C_1t^{-{1 \\over 2}-\\varepsilon} \\leq |\\nu(0)-\\mu_t(0)| \\leq C_2t^{-{1 \\over 2}+\\varepsilon}$$\nThus for the Markov chain considered in Theorem~\\ref{main}, the\northogonal polynomials approach provides a closed form expression for\nthe difference $\\nu-\\mu_t$, and a significantly sharper estimate on the\nconvergence of $\\mu_t$ to the stationary distribution $\\nu$, for both\nthe single state distance $|\\nu(0)-\\mu_t(0)|$ and the much stronger\ntotal variation norm, $\\|\\nu-\\mu_t\\|_{TV}$.\n\n\n\n\\section{Acknowledgments}\nWe would like to thank Yuan Xu of the University of Oregon for his helpful comments that\ninitiated this work. We would also like to thank Michael Anshelevich of Texas A \\& M for the feedback he provided during the conference on orthogonal polynomials in probability theory in July of 2010. We would like to thank Andrew R. Wade of the University of Strathclyde for his helpful comments on the preprint of this paper. Finally, we would like to thank the anonymous referee for the many helpful corrections and suggestions.\n\n\n\n\\begin{bibdiv}\n \\begin{biblist}\n \\bib{MAIS1972}{book}{\n editor={M. Abramowitz},\n editor={I. A. Stegun},\n title={Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables},\n edition={Ninth}, \n publisher={Dover Publications}, \n year={1972}\n } \n \\bib{MI2005}{book}{\n author={M. Ismail},\n title={Classical and quantum orthogonal polynomials in one variable},\n series={Encyclopedia of Mathematics and its Applications}, \n number={98},\n publisher={Cambridge University Press}, \n year={2005}\n } \n \\bib{SKJM1959}{article}{\n title={Random Walks},\n author={S. Karlin},\n author={J. L. McGregor},\n journal={Illinois Journal of Math.},\n volume={3},\n date={1959},\n number={1},\n pages={417--431}\n }\n \\bib{HKJW1996}{article}{\n title={A note on Koornwinder's polynomials with weight function $(1-x)^\\alpha(1+x)^\\beta+M\\delta(x+1)+N\\delta(x-1)$},\n author={Kiesel, H.},\n author={Wimp, J.},\n journal={Numerical Algorithms},\n volume={11},\n date={1996},\n pages={229--241}\n }\n \\bib{TK1984}{article}{\n title={Orthogonal polynomials with weight function $(1-x)^{\\alpha }(1+x)^{\\beta }+M\\delta (x+1)+N\\delta (x-1)$},\n author={Koornwinder, T.},\n journal={Canad. Math. Bull.},\n volume={27},\n date={1984},\n number={2},\n pages={205--214}\n }\n \\bib{YK2009}{article}{\n title={Orthogonality and probability: beyond nearest neighbor transitions},\n author={Kovchegov, Y.},\n journal={Electron. Commun. Probab.},\n volume={14},\n year={2009},\n pages={90--103}\n } \n \\bib{YK2010}{article}{\n title={Orthogonality and probability: mixing times},\n author={Kovchegov, Y.},\n journal={Electron. Commun. Probab.},\n volume={15},\n year={2010},\n pages={59--67}\n }\n \\bib{SMRT2009}{book}{\n author={S. Meyn},\n author={R. L. Tweedie},\n title={Markov Chains and Stochastic Stability},\n edition={Second}, \n publisher={Cambridge University Press}, \n year={2009}\n } \n \\bib{MMSP1995}{article}{\n title={Exact Power Estimates For Countable Markov Chains},\n author={Menshikov, M.~V.},\n author={Popov, S.~Yu.},\n journal={Markov Processes Relat. Fields},\n volume={1},\n date={1995},\n pages={57--78}\n }\n \\bib{GS1975}{book}{\n author={G. Szeg\\\"{o}},\n title={Orthogonal Polynomials},\n edition={Fourth}, \n publisher={AMS Colloquium Publications}, \n volume={23},\n year={1975}\n } \n \\bib{VU1969}{article}{\n title={Relation between systems of polynomials orthogonal with respect to various distribution functions},\n author={Uvarov, V.~B.},\n journal={Zh. Vychisl. Mat. i Mat. Fiz. (USSR)},\n volume={9},\n number={6},\n year={1969},\n pages={1253--1262}\n }\n\\end{biblist}\n\\end{bibdiv}\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{{\\rm \\footnotesize INTRODUCTION}}\n``How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?'' Sherlock Holmes to Dr. Watson (Conan~Doyle 1890). \n\nIn the study of open clusters a similar tautology could be paraphrased as ``when you have eliminated the effects of extinction and differential reddening for cluster stars, the resulting cluster color-magnitude diagram, however unusual, must represent an accurate picture of the temperatures and luminosities of cluster stars.'' That philosophy was demonstrated to be the case for relatively young (10$^7$--10$^8$ yr old) open clusters by Turner (1996), as well as for the extremely young ($3.5 \\times 10^6$ yr old) cluster IC 1590 (Guetter \\& Turner 1997). 
The last study demonstrated how closely pre-main-sequence members of IC 1590 matched model isochrone predictions by Palla \\& Stahler (1993) converted to the Johnson \\& Morgan (1953) {\\it UBV} system. \n\nThe situation for other young clusters is more complicated, primarily because most of the original {\\it UBV} studies of the brighter members of the class (NGC 2264, NGC 6530, IC 5146, and NGC 6611) were made by Merle Walker (Walker 1956, 1957, 1959, 1961, respectively) using a detector system that was not an ideal match to the Johnson system, although that was not realized at the time. For example, later studies of NGC 6611 (Hiltner \\& Morgan 1969) and NGC 2264 (e.g., Mendoza \\& G\\'{o}mez 1980) noted differences between Walker's photometry and observations tied more closely to the {\\it UBV} system. The origin of such differences can be explained by the work of Moffat \\& Vogt (1977) and Guti\\'{e}rrez-Moreno, Moreno \\& Cort\\'{e}z (1981), who noted that systematic errors, specifically in {\\it U--B} measures, can arise from the manner in which the Balmer discontinuity in the continua of early-type stars is sampled by non-standard telescope\/filter\/detector systems, as well as by the treatment of atmospheric extinction (see Cousins \\& Caldwell 2001). The differences in the case of NGC 6611 are fairly extreme, amounting to offsets of $0.10$ in {\\it U--B} and $0.03$ in {\\it B--V} (Hiltner \\& Morgan 1969).\n\nThe purpose of the present study is to redo Walker's original study of NGC 2264 (Walker 1956) in order to generate a new reddening-free and extinction-corrected color-magnitude diagram for the cluster. The cluster has been studied many times previously, but never for the purpose of improving upon Walker's results. Recent detections of non-radial pulsation via asteroseismology in many of the pre-main-sequence members of NGC 2264 (e.g., Zwintz 2008; Kallinger, Zwintz \\& Weiss 2008; Guenther et al. 2009) make it imperative to have a clear picture of the evolutionary status and exact H-R diagram location of cluster stars.\n\n\n\n\\section{{\\rm \\footnotesize ADJUSTING EXISTING {\\it UBV} OBSERVATIONS}}\nNGC 2264 has a rich history of observation, and has been studied fruitfully using many optical photometric systems. The emphasis here is on {\\it UBV} photometry, which has certain advantages over other photometric systems for analyzing interstellar reddening; in most optical photometric systems interstellar reddening corrections are made using mean reddening laws, occasionally established incorrectly, that may not describe the reddening applicable to the cluster under study (e.g., Turner 1989, 1994, 1996). Str\\\"{o}mgren system photometry is similar enough to {\\it UBV} photometry that it can be transformed accurately to the latter for most early-type stars (Turner 1990).\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{n2264f1.eps}\n\\end{center}\n\\caption{\\small{Differences, $\\Delta V$ (upper) and $\\Delta (B$--$V)$ (lower), between the derived mean {\\it UBV} data from the literature and the Walker (1956) values, as a function of Walker's tabulated {\\it B--V} color. 
The derived trends are indicated, and an open circle denotes the red star not used in the $\\Delta (B-V)$ comparison.}}\n\\label{fig1}\n\\end{figure}\n\nThe procedure adopted here was to compare existing {\\it UBV} observations, identified as being closely linked to the Johnson system and including transformations of published Str\\\"{o}mgren photometry for cluster stars, with the original measures of Walker (1956), and to adjust the latter for systematic effects prior to forming optimized values for individual stars. {\\it UBV} observations not clearly tied closely to the Johnson system were not used, our experience being that systematic effects in such data are often non-linear, making them very difficult to treat reliably.\n\nSources of {\\it UBV} photometry for NGC 2264 that contain data for sizable samples of stars are presented by Mendoza \\& G\\'{o}mez (1980), P\\'{e}rez, Th\\'{e} \\& Westerlund (1987), and Kwon \\& Lee (1983). Observations for more restricted samples can be found in papers by Johnson \\& Morgan (1955), Hiltner (1956), Karlsson (1966), Turner (1976b), Macmillan (1977), Echevarr\\'{i}a, Roth \\& Warman (1979), Clar\\'{i}a (1985), Oja (1991), and Zwintz (2008). Suitable Str\\\"{o}mgren system photometry that can be converted to equivalent {\\it UBV} data, at least for early-type stars, is presented by Crawford, Barnes \\& Golson (1971), Strom, Strom \\& Yost (1971), Morrison (1975), Gronbech \\& Olsen (1976), Perry \\& Johnston (1982), Lindroos (1983), Olsen (1983), P\\'{e}rez et al. (1988), Mendoza, Rolland \\& Rodriguez (1990), Knude (1992), Handler (1999), Pe\\~{n}a, Peniche \\& Cervantes (2000), and Kallinger et al. (2008). The relationships derived by Turner (1990) were used to convert the latter observations to the Johnson system.\n\nA comparison of the average {\\it BV} values from such an analysis with Walker's (1956) photometry is presented in Fig.~\\ref{fig1}. The differences in $\\Delta V$ and $\\Delta(B$--$V)$ as a function of {\\it B--V} color are generally small, the best-fitting results being: $\\Delta V=+0.037(\\pm0.003) - 0.014(\\pm0.006)$({\\it B--V}) and $\\Delta$({\\it B--V}) $=-0.013(\\pm0.002) - 0.002(\\pm0.005)$({\\it B--V}). There is a slight offset in color for Walker's observations (the reddest star being omitted for reasons of possible brightness variability) and the visual magnitudes display a reasonably distinct color dependence. The comparison of {\\it U--B} colors in Fig.~\\ref{fig2} is more interesting. There is increased scatter for stars with colors near {\\it U}--$B \\simeq 0.0$ that has the distinct signature of a Balmer discontinuity effect as described by Moffat \\& Vogt (1977). The data were therefore matched to linear or quadratic functions in separate segments, with the resulting zig-zag relationship plotted in the figure adopted for further analysis. The Moffat \\& Vogt (1977) results were used as a guide for interpreting the trends in the $\\Delta(U$--$B)$ data.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{n2264f2.eps}\n\\end{center}\n\\caption{\\small{Differences $\\Delta (U$--$B)$ between the derived mean {\\it UBV} data from the literature and Walker's (1956) values, as a function of Walker's tabulated {\\it U--B} color. 
The derived Z-shaped curve represents a least squares fit to three segments of the data.}}\n\\label{fig2}\n\\end{figure}\n\n\n\\setcounter{table}{0}\n\\begin{table*}\n\\caption[]{Optimized {\\it UBV} observations for NGC 2264 stars}\n\\label{tab1}\n\\begin{center}\n\\footnotesize\n\\begin{tabular*}{0.89\\textwidth}{@{\\extracolsep{-0mm}}ccccccccccccccccc}\n\\hline \\noalign{\\smallskip}\nStar &{\\it V} &{\\it B--V} &{\\it U--B} &n & &Star &{\\it V} &{\\it B--V} &{\\it U--B} &n & &Star &{\\it V} &{\\it B--V} &{\\it U--B} &n \\\\\n\\noalign{\\smallskip} \\hline \\noalign{\\smallskip}\n1 &14.08 &0.91 &0.33 &3 & &80 &15.27 &1.52 &0.69 &1 & &159 &10.99 &0.06 &0.00 &4 \\\\\n2 &9.72 &0.24 &0.18 &18 & &81 &16.29 &1.24 &0.04 &1 & &161 &14.99 &0.66 &--0.60 &1 \\\\\n3 &14.60 &1.43 &1.12 &1 & &83 &7.95 &--0.16 &--0.83 &21 & &164 &13.32 &0.83 &0.21 &1 \\\\\n4 &13.49 &0.42 &--0.03 &1 & &84 &11.99 &0.57 &0.07 &5 & &165 &10.99 &0.14 &0.11 &5 \\\\\n5 &14.14 &0.53 &0.05 &3 & &85 &15.00 &1.08 &0.76 &1 & &168 &15.15 &1.24 &1.02 &1 \\\\\n6 &8.21 &0.36 &--0.04 &6 & &87 &10.77 &0.21 &0.16 &3 & &169 &13.42 &0.77 &0.22 &1 \\\\\n7 &7.79 &--0.12 &--0.60 &6 & &88 &9.05 &--0.12 &--0.66 &13 & &172 &10.09 &--0.06 &--0.40 &2 \\\\\n8 &13.64 &1.25 &0.86 &4 & &89 &16.36 &1.04 &--0.57 &1 & &173 &16.43 &1.27 &0.75 &1 \\\\\n9 &11.99 &2.02 &2.18 &2 & &90 &12.67 &0.19 &--0.04 &16 & &175 &16.14 &1.39 &0.45 &1 \\\\\n10 &11.26 &0.56 &0.19 &4 & &91 &12.33 &0.64 &0.00 &3 & &176 &15.80 &1.57 &0.83 &1 \\\\\n11 &16.06 &0.86 &0.03 &1 & &92 &11.66 &0.85 &0.38 &6 & &177 &9.08:: &0.74:: &0.05 &4 \\\\\n12 &15.49 &1.28 &0.15 &1 & &93 &13.21 &0.87 &0.32 &2 & &177B &11.98 &0.57 &0.00 &1 \\\\\n13 &12.05 &1.66 &1.46 &2 & &94 &10.45 &0.41 &--0.04 &2 & &178 &7.18 &--0.20 &--0.98 &13 \\\\\n14 &14.85 &0.79 &0.17 &1 & &95 &16.07 &1.10 &--0.05 &1 & &178B &10.23 &0.16 &--0.12 &2 \\\\\n15 &14.66 &1.20 &0.65 &1 & &96 &14.10 &1.09 &1.22 &1 & &179 &9.96 &0.00 &--0.18 &5 \\\\\n16 &16.12 &0.95 &0.33 &3 & &98 &11.77 &0.57 &0.00 &2 & &180 &12.92 &0.51 &--0.08 &2 \\\\\n17 &12.89 &0.37 &0.23 &4 & &99 &10.86 &0.40 &--0.03 &5 & &181 &10.06 &--0.05 &--0.30 &3 \\\\\n18 &15.23 &0.84 &0.20 &2 & &100 &10.04 &0.12 &0.09 &10 & &182 &10.33 &0.06 &0.08 &4 \\\\\n19 &15.54 &0.82 &--0.06 &1 & &103 &10.08 &0.02 &--0.10 &2 & &183 &15.24 &0.96 &0.63 &1 \\\\\n20 &10.30 &0.42 &0.16 &26 & &104 &11.40 &0.22 &0.13 &7 & &184 &14.16 &0.99 &--0.05 &1 \\\\\n21 &14.09 &0.77 &0.22 &2 & &105 &15.13 &1.06 &--0.33 &1 & &186 &15.63 &1.54 &0.92 &1 \\\\\n22 &17.08 &1.07 &0.16 &3 & &106 &13.31 &0.71 &0.38 &1 & &187 &9.25 &--0.08 &--0.33 &5 \\\\\n23 &13.75 &0.82 &0.49 &2 & &107 &8.86 &--0.08 &--0.46 &15 & &189 &11.36 &0.54 &0.01 &4 \\\\\n24 &8.57 &--0.06 &--0.31 &5 & &108 &12.05 &0.58 &0.07 &6 & &190 &12.29 &0.67 &--0.01 &1 \\\\\n25 &7.83 &0.37 &0.00 &11 & &109 &9.12 &--0.11 &--0.53 &5 & &193 &9.81 &0.23 &0.12 &6 \\\\\n26 &11.78 &0.47 &0.01 &3 & &112 &10.81 &--0.01 &--0.11 &6 & &195 &12.66 &0.52 &--0.05 &4 \\\\\n27 &12.08 &0.52 &0.21 &3 & &114 &11.55 &0.52 &0.07 &4 & &196 &11.72 &0.52 &--0.03 &3 \\\\\n28 &12.34 &0.46 &--0.08 &4 & &115 &14.44 &1.00 &0.37 &1 & &197 &16.30 &0.92 &--0.08 &1 \\\\\n29 &10.16 &0.43 &0.00 &5 & &116 &11.66 &0.54 &0.09 &5 & &202 &9.02 &0.07 &--0.59 &10 \\\\\n30 &10.78 &0.03 &--0.02 &6 & &117 &13.55 &0.69 &0.25 &1 & &203 &12.93 &0.75 &0.14 &1 \\\\\n31 &10.56 &0.35 &0.03 &5 & &118 &11.80 &0.55 &--0.04 &2 & &205 &10.64 &0.33 &--0.02 &5 \\\\\n32 &13.02 &0.76 &0.05 &1 & &121 &12.14 &0.42 &--0.27 &1 & &206 &9.50 &0.09 &--0.34 &5 \\\\\n33 &11.68 &2.63 &2.49 &5 & &125 &12.29 &0.61 
&0.02 &2 & &209 &11.37 &0.37 &0.02 &4 \\\\\n34 &10.94 &0.40 &0.05 &1 & &127 &15.77 &1.48 &0.85 &1 & &210 &13.30 &0.76 &0.09 &1 \\\\\n35 &10.34 &0.08 &0.06 &4 & &128 &11.03 &0.26 &0.10 &3 & &212 &7.51 &--0.16 &--0.75 &20 \\\\\n36 &11.01 &0.01 &--0.01 &5 & &131 &4.64 &--0.24 &--1.07 &63 & &215 &9.32 &0.07 &--0.14 &10 \\\\\n37 &8.08 &1.49 &1.66 &7 & &132 &10.22 &--0.04 &--0.29 &6 & &216 &11.72 &0.76 &0.14 &1 \\\\\n38 &10.98 &1.04 &0.64 &2 & &133 &13.85 &1.05 &0.61 &1 & &217 &13.60 &0.81 &--0.34 &2 \\\\\n39 &11.35 &0.11 &0.11 &2 & &134 &12.42 &0.82 &0.01 &3 & &220 &9.72 &0.46 &--0.06 &3 \\\\\n43 &10.57 &0.20 &0.16 &8 & &136 &15.17 &1.56 &1.05 &1 & &221 &12.15 &0.39 &--0.07 &4 \\\\\n46 &9.23 &0.21 &0.18 &14 & &137 &9.93 &--0.08 &--0.40 &5 & &222 &9.92 &0.14 &0.18 &6 \\\\\n50 &8.15 &--0.15 &--0.74 &19 & &138 &10.19 &0.06 &--0.01 &5 & &223 &10.93 &0.33 &0.03 &5 \\\\\n54 &14.27 &1.23 &1.01 &1 & &139 &13.33 &1.24 &0.81 &2 & &224 &11.50 &0.52 &0.15 &2 \\\\\n58 &15.52 &0.96 &0.60 &1 & &141 &14.72 &1.14 &0.85 &1 & &225 &13.22 &0.57 &--0.01 &2 \\\\\n60 &12.49 &1.01 &0.67 &2 & &142 &8.86 &--0.09 &--0.53 &10 & &226 &9.63 &0.14 &0.16 &7 \\\\\n62 &12.37 &0.50 &--0.18 &2 & &143 &10.63 &0.06 &--0.10 &1 & &227 &11.80 &0.52 &--0.06 &1 \\\\\n64 &15.36 &0.93 &0.39 &1 & &144 &13.85 &1.36 &0.30 &1 & &228 &11.13 &0.35 &0.01 &5 \\\\\n65 &11.76 &0.49 &--0.04 &2 & &145 &10.67 &0.04 &0.02 &6 & &229 &8.54 &1.23 &1.03 &9 \\\\\n66 &12.43 &0.71 &--0.14 &4 & &146 &14.48 &1.08 &0.75 &1 & &230 &12.36 &1.31 &0.52 &1 \\\\\n67 &10.88 &0.59 &--0.41 &5 & &147 &11.00 &0.80 &0.38 &2 & &231 &8.99 &--0.14 &--0.64 &7 \\\\\n68 &11.75 &0.68 &0.11 &7 & &148 &13.63 &0.68 &0.05 &1 & &232 &9.83 &0.00 &--0.02 &5 \\\\\n69 &8.28 &1.38 &1.54 &10 & &150 &14.13 &0.98 &1.00 &2 & &233 &9.57 &0.56 &0.05 &3 \\\\\n70 &11.17 &0.57 &0.06 &5 & &151 &12.56 &0.50 &--0.01 &2 & &234 &12.43 &0.46 &--0.05 &2 \\\\\n72 &12.36 &0.50 &--0.10 &1 & &152 &9.15 &--0.07 &--0.41 &14 & &235 &13.47 &0.51 &0.38 &3 \\\\\n73 &9.35 &0.84 &0.48 &8 & &153 &15.89 &1.22 &0.33 &1 & &236 &11.39 &0.63 &0.12 &5 \\\\\n74 &6.88 &--0.11 &--0.63 &16 & &154 &12.62 &0.79 &0.03 &2 & &237 &9.45 &1.43 &1.26 &5 \\\\\n76 &14.14 &0.75 &0.38 &1 & &155 &16.50 &1.41 &0.96 &1 & &238 &9.97 &1.25 &1.56 &3 \\\\\n77 &14.53 &1.17 &0.90 &1 & &156 &14.68 &1.23 &0.66 &1 & &239 &9.34 &1.15 &1.02 &4 \\\\\n78 &15.42 &1.21 &--0.39 &1 & &157 &10.04 &--0.03 &--0.31 &6 & & & & & & \\\\\n79 &15.90 &0.55 &--1.21 &1 & &158 &10.35 &0.34 &0.07 &6 & & & & & & \\\\\n\\noalign{\\smallskip} \\hline\n\\end{tabular*}\n\\end{center}\n$\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;$\\small{Star numbers from Walker (1956) and WEBDA, except as noted in text.}\n\\end{table*}\n\n\n\nThe power of the present approach to homogenizing existing {\\it UBV} photometry for NGC 2264 can be seen in the results. Walker's (1956) {\\it UBV} observations were adjusted with the relations plotted in Figs.~\\ref{fig1}~and~\\ref{fig2} and averaged with the other data, weighted by the number of observations for each observer, to form optimized values, summarized in Table~\\ref{tab1}. The resulting data are plotted in color-color and color-magnitude diagrams in Fig.~\\ref{fig3}, and display a variety of traits that differ in subtle ways from the original versions published by Walker (1956). 
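\nIn code, the adjustment and weighted averaging that produced Table~\\ref{tab1} amount to the following hedged sketch (ours; the function names are illustrative, and the three-segment {\\it U--B} correction of Fig.~\\ref{fig2} is abbreviated to a placeholder because its segment coefficients are not reproduced here):\n\\begin{verbatim}\n# Hedged sketch: place Walker's (1956) photometry on the Johnson\n# system with the linear fits quoted above, then form optimized\n# values as means weighted by each source's number of observations.\ndef adjust_walker(V, BV):\n    dV  = +0.037 - 0.014 * BV   # Delta V fit (Fig. 1, upper)\n    dBV = -0.013 - 0.002 * BV   # Delta (B-V) fit (Fig. 1, lower)\n    return V + dV, BV + dBV\n\ndef adjust_walker_UB(UB):\n    # Piecewise ('zig-zag') Delta (U-B) correction of Fig. 2; the\n    # fitted segment coefficients are omitted in this sketch.\n    raise NotImplementedError\n\ndef optimized_value(values, n_obs):\n    # Weighted mean over sources, weights = numbers of observations.\n    return sum(v * n for v, n in zip(values, n_obs)) \/ sum(n_obs)\n\\end{verbatim}\n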
As in Walker's study, stars exhibiting $H\\alpha$ emission (Herbig 1954) are identified.\n\nFirst, the sequence of reddened B-type stars closely fits the intrinsic relation for dwarfs with $E_{B-V}=0.075$, similar to what is seen in Stock 16 (Turner 1985, 1996) where there is no evidence for differential reddening. There is sufficient scatter, however, to suggest a better match to NGC 2422 (Turner 1996), which displays a color excess spread of $\\Delta E_{B-V}= 0.05$. Second, the colors of stars near the A0-star kink fit the reddened relation closely, much better than for Walker's (1956) data. Apparently the correction for the Balmer discontinuity effect adopted in Fig.~\\ref{fig2} offers a satisfactory solution to a serious systematic error in the original Walker photometry. Third, there are a number of FGK stars in the field with the colors of unreddened field stars, suggesting a possible source of contamination for the cluster color-magnitude diagram. There is also little evidence to indicate that heavily reddened stars are cluster members, given that the data do not display a continuous run of reddening, as in NGC 1647 (Turner 1992, 1996). It can be further noted that there are a number of objects with colors that are inconsistent with those of stars of normal reddening. Such an effect may be the result of extreme youth or contamination of the photometry by bright surrounding nebulosity. Many display $H\\alpha$ emission.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.40\\textwidth]{n2264f3.eps}\n\\end{center}\n\\caption{\\small{{\\it UBV} color-color (upper) and color-magnitude (lower) diagrams for NGC 2264 stars with photoelectric observations. The intrinsic color-color relation is the black curve, while gray curves denote that relation and the zero-age main sequence (ZAMS) reddened by $E_{B-V} = 0.075$ and {\\it V--M}$_V = 9.68$. Stars exhibiting emission at $H\\alpha$ are denoted by vertical lines.}}\n\\label{fig3}\n\\end{figure}\n\nThe nature of the interstellar reddening in the direction of NGC 2264 was studied previously by Turner (1976b), yielding a {\\it UBV} reddening slope of $E_{U-B}\/E_{B-V} = 0.77$ and a ratio of total-to-selective extinction for early-type stars in the region of $R = A_V\/E_{B-V} = 3.2 \\pm 0.2$. Those values were adopted here in an analysis of the {\\it UBV} data of Table~\\ref{tab1}, and the resulting color excesses $E_{B-V}$ for individual stars were corrected for the color dependence caused by bandwidth effects in the {\\it UBV} system (Schmidt-Kaler 1961; Fernie 1963; Buser 1978).\n\nThe field of NGC 2264 contains just two groups of OB stars. One displays only a small amount of interstellar reddening and consists of massive members of the cluster. The other consists of more heavily reddened ZAMS members of Mon OB2 lying $\\sim 800$ pc beyond NGC 2264. The reddening separation of AFGK-type stars in terms of distance is less clear cut, as discussed in \\S4.\n\nThe numbering of stars in Table~\\ref{tab1} is generally that of Walker (1956), although with two additions and a few corrections. In the study by Pe\\~{n}a et al. (2000), for example, some stars are misidentified. 
Their observation for star 14 is a second measurement for star 24, that for star 31 is for star 26, the two stars identified as 77A and 77B are measures for star 177 of Walker (1956) with the A and B notation reversed, the data for star 106 are for star 112, and the observation for star 169 is inconsistent with its measured brightness by Walker (1956) or with those of any bright stars in the cluster. It has therefore been omitted from the analysis. The observations by Echevarr\\'{i}a et al. (1979) for the multiple system surrounding S Mon (ADS 5322, star 131) include measures for a star, ADS 5322Ad, that is star 121 of Walker (1956), but with a much more reasonable brightness than that obtained by Johnson for Walker's study. It has been included in Table~\\ref{tab1} with Johnson's measures omitted. Likewise, the data from Lindroos (1983) include separate measures for a bright companion of Walker star 178, HD 47887B, so it has also been included.\n\n\\section{{\\rm \\footnotesize PARAMETERS FOR NGC 2264}}\nSince the scatter of reddened B-type stars in NGC 2264 seen in Fig.~\\ref{fig3} is larger than the uncertainties in the observed colors of at most $\\pm 0.01$, it is appropriate to analyze the data of Table~\\ref{tab1} using the variable-extinction method (see Turner 1976a). A reddening line of slope $E_{U-B}\/E_{B-V}=0.77$ (Turner 1976b) was therefore applied to the observations, with larger slopes being used for G, K, and M-type stars, and the data were dereddened to the intrinsic relation for dwarfs (see Turner 1996). Absolute magnitudes appropriate for a zero-age main-sequence (ZAMS) star were assigned to each analyzed star on the basis of its derived intrinsic color, where the ZAMS values are from Turner (1976a, 1979). The results are displayed in Fig.~\\ref{fig4}, which reveals much about the extinction in the direction of NGC 2264.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{n2264f4.eps}\n\\end{center}\n\\caption{\\small{Variable extinction diagram for early-type stars in NGC 2264, for ZAMS values of $M_V$. The sloped lines represent $R=3.2$ for a ZAMS fit to NGC 2264 (gray line) and for Mon OB2 (black line), the latter from Turner (1976b).}}\n\\label{fig4}\n\\end{figure}\n\nAn initial inference from Fig.~\\ref{fig4} is that there are unreddened stars towards NGC 2264 with intrinsic distance moduli $V_0$--$M_V$ ranging between 4 and 8.34, which indicates that the dust clouds responsible for the reddening of most stars along the line of sight must lie at distances of $\\sim 465$ pc, a result consistent with the findings of Neckel \\& Klare (1980) for surrounding fields. The fact that most cluster stars lie in regions of bright nebulosity, both emission and reflection, and yet have only small reddenings, indicates that the cluster must lie mostly in front of the gas and dust clouds that dominate the region. A few cluster stars have slightly larger reddenings that suggest they lie on the far side of the cluster and are embedded in the dust clouds.\n\nMore heavily reddened cluster stars, on the other hand, display a remarkably close coincidence ($d = 1.6$ kpc, $R = 3.2$) with the spectroscopically-derived variable-extinction results of Turner (1976b) for OB stars in Mon OB2, implying that one can readily detect background objects lying on the far side of the cluster and beyond the associated gas and dust clouds. 
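\nThe bookkeeping behind Fig.~\\ref{fig4} reduces to the relation $V-M_V=(V_0-M_V)+R\\,E_{B-V}$, as in the following short sketch (ours; the function names are assumptions):\n\\begin{verbatim}\n# Hedged sketch: variable-extinction arithmetic with R = 3.2.\ndef intrinsic_modulus(V, M_V_zams, EBV, R=3.2):\n    # Apparent modulus corrected for extinction A_V = R * E(B-V).\n    return (V - M_V_zams) - R * EBV\n\ndef distance_pc(mu0):\n    # Distance from an intrinsic modulus: d = 10**(1 + mu0\/5) pc.\n    return 10 ** (1 + mu0 \/ 5)\n\n# For the cluster ZAMS fit quoted below, distance_pc(9.45) ~ 777 pc.\n\\end{verbatim}\n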
A detailed analysis of the full data set (\\S4) also indicates that there are many other non-members of the cluster lying in the field, a few at about the same distance as the main group of cluster stars and many more foreground to the cluster but beyond the dust clouds producing most of the foreground reddening. A similar result can be inferred from the proper motion analysis of Vasilevskis, Sanders \\& Balz (1965) for bright stars in the cluster field.\n\nThe distance to NGC 2264 can be obtained from the 13 B-type ZAMS stars in Fig.~\\ref{fig4} that form a lower envelope to the main body of data for cluster stars. The resulting intrinsic distance modulus is $V_0-M_V=9.45 \\pm 0.03$ s.e., corresponding to a distance of $777 \\pm 12$ pc. An almost identical distance was obtained by Turner (1976b) from an independent analysis of Walker's original observations in conjunction with spectroscopic data for members of Mon OB1 lying near the cluster. The average reddening for stars belonging to the main body of the cluster is $E_{B-V}=0.075 \\pm0.003$ s.e., as noted earlier, with an observed dispersion from the standard deviation ($\\sigma$) of $\\Delta E_{B-V} = 0.06$, slightly larger than the reddening dispersion observed in NGC 2422, discussed above. \n\nThe identification of likely cluster members is more complicated than the case for more distant groups, primarily because the cluster, owing to its relative closeness, is spread out on the sky. The general appearance of NGC 2264 is also that of a double cluster, with no guarantee that the northern and southern components are of the same age and distance (although it does seem likely). For bright stars there are radial velocity measures and proper motion membership probabilities (Vasilevskis et al. 1965) for consideration, although neither is fully reliable, particularly in a direction roughly towards the Galactic anticenter, as is the case for NGC 2264 ($l = 203^{\\circ}$). The radial velocities for individual stars summarized in WEBDA display large scatter for several stars (the signature of spectroscopic binaries), are biased towards bright cluster stars, and have similar values both for {\\it bona fide} cluster members and nearby field stars, while the proper motions of stars across the line of sight at distances of 500--800 pc may be fairly similar. Compounding the situation is the apparent presence of a sizable population of AF-type stars in the immediate foreground to the cluster.\n\nLikely cluster members were therefore identified in the present study by considering all available data for each star, in conjunction with the results for other cluster stars analyzed. The dereddening procedure from Fig.~\\ref{fig3} is clearly not straightforward, given the potential non-physical solutions that arise for many of the faint red stars, many of which are variable or display $H\\alpha$ emission. Initially it was believed that many of the latter objects might be cluster members, and dereddening solutions were adopted for them using the mean cluster reddening. 
However, the resulting distribution of pre-main-sequence stars in the cluster color-magnitude diagram differed from that for non-variable pre-main-sequence stars without $H\\alpha$ emission, suggesting that they are either field stars or embedded objects that must be treated separately.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{n2264f5.eps}\n\\end{center}\n\\caption{\\small{Reddening and extinction corrected color-magnitude diagram for likely members of NGC 2264 from the present study. The black curve represents the ZAMS for the derived cluster distance, and gray curves at the bottom are pre-main-sequence isochrones for $\\log t = 6.5$ (upper) and 6.75 (lower). The gray curve at top is a post-main-sequence isochrone for $\\log t = 6.7$. Known and suspected spectroscopic binaries are indicated by horizontal lines, rapidly-rotating stars are circled, and stars exhibiting emission at $H\\alpha$ are denoted by vertical lines.}}\n\\label{fig5}\n\\end{figure}\n\n\nA compilation of likely members of NGC 2264 from the present study is given in Table~\\ref{tab2}, and the resulting reddening-free and extinction-free color-magnitude diagram for them is shown in Fig.~\\ref{fig5}. The close fit of the 13 stars discussed above to the ZAMS is evident here, as are a few other noteworthy features. Known or suspected spectroscopic binaries are identified, and the bias towards bright cluster stars is evident. Stars identified spectroscopically as rapid rotators, from ``n'' or ``nn'' classifications or large $V \\sin i$ values (Vogel \\& Kuhi 1981), are also noted, as are the two stars displaying emission at $H\\alpha$ (Walker 96, 133).\n\nThe identification of rapid rotators relates to the cluster main sequence, which appears to contain gaps, a feature seen in several other clusters (Mermilliod 1982) and attributed by Turner (1996) to close binary mergers and the resulting creation of rapidly-rotating merger products, which results in a color spread for cluster stars more luminous than each gap. That characteristic is evident for NGC 2264, as it is for the clusters discussed by Turner (1996). 
Most notable in the case of NGC 2264 are the B3 Vn star 74, the second most luminous likely cluster member, which lies above a main-sequence gap near $V_0\\simeq7.2$, and the group of rapidly-rotating stars lying above the main-sequence gap near $V_0\\simeq9.5$.\n\n\n\\setcounter{table}{1}\n\\begin{table}[!t]\n\\caption[]{Likely members of NGC 2264 from this study}\n\\label{tab2}\n\\begin{center}\n\\footnotesize\n\\begin{tabular*}{0.45\\textwidth}{@{\\extracolsep{-2.5mm}}ccccccccc}\n\\hline \\noalign{\\smallskip}\nStar &$(B-V)_0$ &$E_{B-V}$ &$V_0$ & &Star &$(B-V)_0$ &$E_{B-V}$ &$V_0$ \\\\\n\\noalign{\\smallskip} \\hline \\noalign{\\smallskip}\n7 &--0.19 &0.07 &7.57 & &128 &+0.22 &0.04 &10.89 \\\\\n15 &+1.13 &0.08 &14.39 & &131 &--0.31 &0.07 &4.42 \\\\\n24 &--0.10 &0.04 &8.46 & &132 &--0.10 &0.06 &10.02 \\\\\n26 &+0.43 &0.04 &11.66 & &133 &+0.80 &0.27 &13.00 \\\\\n30 &--0.04 &0.07 &10.57 & &137 &--0.12 &0.04 &9.81 \\\\\n34 &+0.33 &0.07 &10.72 & &138 &--0.03 &0.09 &9.89 \\\\\n35 &+0.00 &0.08 &10.08 & &142 &--0.17 &0.08 &8.61 \\\\\n36 &--0.02 &0.03 &10.90 & &143 &--0.06 &0.12 &10.24 \\\\\n39 &+0.03 &0.09 &11.08 & &145 &--0.01 &0.05 &10.49 \\\\\n43 &+0.12 &0.08 &10.31 & &152 &--0.13 &0.06 &8.95 \\\\\n50 &--0.22 &0.07 &7.91 & &154 &+0.72 &0.08 &12.37 \\\\\n54 &+0.99 &0.27 &13.40 & &157 &--0.11 &0.08 &9.78 \\\\\n60 &+0.89 &0.13 &12.09 & &159 &--0.03 &0.09 &10.69 \\\\\n65 &+0.43 &0.06 &11.57 & &165 &+0.01 &0.13 &10.57 \\\\\n68 &+0.53 &0.16 &11.23 & &172 &--0.13 &0.07 &9.86 \\\\\n74 &--0.20 &0.09 &6.59 & &177B &+0.53 &0.04 &11.85 \\\\\n83 &--0.25 &0.09 &7.67 & &178 &--0.29 &0.09 &6.89 \\\\\n84 &+0.45 &0.13 &11.58 & &178B &--0.09 &0.26 &9.41 \\\\\n87 &+0.13 &0.08 &10.51 & &179 &--0.08 &0.09 &9.69 \\\\\n88 &--0.20 &0.08 &8.79 & &181 &--0.10 &0.06 &9.88 \\\\\n91 &+0.52 &0.13 &11.91 & &182 &+0.04 &0.02 &10.26 \\\\\n96 &+1.02 &0.07 &13.87 & &187 &--0.10 &0.03 &9.16 \\\\\n98 &+0.51 &0.06 &11.58 & &189 &+0.48 &0.07 &11.14 \\\\\n100 &+0.00 &0.13 &9.64 & &190 &+0.60 &0.07 &12.06 \\\\\n103 &--0.05 &0.07 &9.86 & &202 &--0.23 &0.31 &8.04 \\\\\n104 &+0.17 &0.06 &11.22 & &206 &--0.15 &0.24 &8.74 \\\\\n107 &--0.15 &0.07 &8.63 & &209 &+0.35 &0.02 &11.31 \\\\\n108 &+0.46 &0.13 &11.63 & &212 &--0.22 &0.06 &7.32 \\\\\n109 &--0.16 &0.05 &8.95 & &215 &--0.08 &0.15 &8.83 \\\\\n112 &--0.05 &0.04 &10.68 & &224 &+0.35 &0.18 &10.92 \\\\\n114 &+0.41 &0.12 &11.16 & &227 &+0.45 &0.07 &11.58 \\\\\n116 &+0.42 &0.13 &11.26 & &228 &+0.35 &0.00 &11.12 \\\\\n118 &+0.49 &0.07 &11.59 & &231 &--0.19 &0.06 &8.81 \\\\\n125 &+0.55 &0.06 &12.09 & &232 &--0.02 &0.02 &9.77 \\\\\n\\noalign{\\smallskip} \\hline\n\\end{tabular*}\n\\end{center}\n\\end{table}\n\nIsochrones are also plotted in Fig.~\\ref{fig5} for $\\log t = 6.5$ and 6.75, the latter of which provides a good fit to the lower envelope of pre-main-sequence stars. An age of $\\log t \\simeq 6.7$ is also inferred for the most luminous cluster star, Walker 131 = S Mon = HD 47839 [O7 V((f)), Walborn 1972], since it appears to have evolved away from the ZAMS. As in Guetter \\& Turner (1997), the pre-main-sequence isochrones are adapted from the results of Palla \\& Stahler (1994), while the post-main-sequence isochrone in Fig.~\\ref{fig5} is from Meynet, Mermilliod \\& Maeder (1993). Likewise, much like the case for IC 1590 (Guetter \\& Turner 1997), it is not clear that the dispersion in luminosities for pre-main-sequence stars represents a spread in formation times. 
Binarity is also a possibility, particularly given the similarity in implied ages between lower envelope pre-main-sequence stars and S Mon. The implied age for members of NGC 2264 is therefore $\\sim 5.5 \\times 10^6$ yr.\n\n\\section{{\\rm \\footnotesize FIELD STARS IN NGC 2264}}\nMost previous studies of NGC 2264 have assumed that the AFGK-type stars lying above the main sequence in Walker's (1956) original cluster color-magnitude diagram are pre-main-sequence objects, with variability, $H\\alpha$ emission, and large membership probabilities (Vasilevskis et al. 1965) for the same objects supplying confirmatory evidence. Yet when most such stars are assumed to be cluster members with reddenings similar to the cluster average, the resulting sequence of assumed pre-main-sequence objects runs parallel to the ZAMS, which is inconsistent with model isochrones (e.g., Palla \\& Stahler 1993). That can be seen in Fig.~\\ref{fig6}, which presents color-color and color-magnitude diagrams for stars from Table~\\ref{tab1} that were rejected from cluster membership in Table~\\ref{tab2}. Note that most of the stars displaying $H\\alpha$ emission (Herbig 1954) fall in the plots of Fig.~\\ref{fig6}.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{n2264f6.eps}\n\\end{center}\n\\caption{\\small{{\\it UBV} color-color (upper) and color-magnitude (lower) diagrams for NGC 2264 stars with photoelectric observations. The intrinsic color-color and zero-age main sequence (ZAMS) relations are the black curves, the latter for $V_0$--$M_V = 7.55$, while gray curves denote the color-color relation and the ZAMS reddened by $E_{B-V} = 0.075$ and {\\it V--M}$_V = 9.0$. Stars exhibiting emission at $H\\alpha$ are denoted by vertical lines.}}\n\\label{fig6}\n\\end{figure}\n\nA large number of the stars fall along an unreddened ZAMS for $V_0-M_V=7.55$ ($d=324$ pc), which suggests the presence of an intermediate-age population in this direction lying $\\sim 150$ pc foreground to the dust clouds responsible for the initial interstellar extinction in NGC 2264. A smaller selection of stars fit the ZAMS reddened by $E_{B-V}=0.075$ for {\\it V--M}$_V = 9.0$, corresponding to a distance of $570$ pc, $\\sim 100$ pc beyond the foreground dust complexes but well foreground to NGC 2264. There also appear to be a few GK-type stars in Fig.~\\ref{fig6} with distance moduli similar to that of NGC 2264. They probably lie at distances comparable to that of the cluster or slightly foreground to it, similar to the case for some of the stars in Fig.~\\ref{fig4}. In this alternate interpretation of the observations, the region of NGC 2264 contains a sizable sample of stars belonging to the general field, the exceptions being the slightly reddened OB stars and many of the AF-type stars. $H\\alpha$ emission does not appear to correlate with any particular status for individual stars.\n\nThe above discussion is relevant for recent observations of small-amplitude variable stars in NGC 2264 studied by means of asteroseismology. In the study by Zwintz et al. (2009), for example, star V1 seems likely to be a foreground star, stars V3 and V4 are possible pre-main-sequence objects, and star V2 is Walker 39, a newly-arrived ZAMS member of NGC 2264. Zwintz et al. (2009) refer to it as a pre-main-sequence $\\delta$ Scuti variable; for that to be the case the star must be just on the verge of initiating core hydrogen burning. 
Such distinctions are important for establishing the pulsational characteristics of other targets for asteroseismology chosen according to their true location in the cluster color-magnitude diagram.\n\n\n\\section{{\\rm \\footnotesize DISCUSSION}}\nOne advantage of removing the effects of interstellar reddening and extinction from cluster color-magnitude diagrams is that the resulting distribution of data points must be closely linked to the true variations in effective temperature and luminosity for cluster members. In the case of NGC 2264, the color-magnitude diagram presented here in Fig.~\\ref{fig5} provides a number of important insights into the evolutionary status of cluster stars. The main sequence gaps, for example, imply an advanced dynamical state, provided they arise from close binary mergers, as argued by Turner (1996). And the close coincidence between the cluster ages inferred from the pre-main-sequence stars and from the slightly-evolved S Mon implies that the formation of cluster stars was not spread out greatly in time, the creation of cluster stars probably consuming no more than a few hundred thousand years.\n\n\n\n\\subsection*{{\\rm \\scriptsize ACKNOWLEDGEMENTS}}\n\\scriptsize{The present study has used the open cluster data compilation maintained by WEBDA, operated at the Institute for Astronomy of the University of Vienna.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}