diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzenlp" "b/data_all_eng_slimpj/shuffled/split2/finalzzenlp" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzenlp" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nThis paper is concerned with model adaptation in the context of\nadvection dominated, compressible fluid flows as they appear in, for\nexample, aerospace engineering. Compressible fluid flows can be\ndescribed by different models having different levels of complexity.\nOne example are the compressible Euler equations which are the limit\nof the Navier-Stokes-Fourier (NSF) equations when heat conduction and\nviscosity go to zero. Arguably the NSF system provides a more\naccurate description of reality since viscous effects which are\nneglected in Euler's equation play a dominant role in certain flow\nregimes like thin regions near obstacles, for example, aerofoils\nexhibiting Prandtls's boundary layers \\cite{Nic73}. However, viscous\neffects are negligible in large parts of the computational domain\nwhere convective effects dominate \\cite{BCR89,CW01,DGQ14}. Thus, it is\ndesirable to avoid the effort of handling the viscous terms in these\nparts of the domain, that is, to use the NSF system only where needed\nand simpler models, \\eg (linearised) Euler equations, on the rest of\nthe computational domain.\n \nThis insight has lead to the development, of a certain type, of\nso-called heterogeneous domain decomposition methods in which on a\ncertain part of the computational domain the NSF equations are solved\nnumerically, whereas the (linearised) Euler equations are used for far\nfield computations \\cite[e.g.]{USDM06, CW01,Xu00,BHR11}. In those\nworks the domain was decomposed a priori, \\ie before the start of the\nnumerical computation. The accuracy and efficiency of those schemes\ndepends sensitively on the placement of the domains. Thus, the user\nis required to have some physical intuition on where to put the domain\nboundary.\n\nIt has been suggested that applying a more ``adaptive'' approach using\ncutoff functions to the dissipative terms in the NSF equations might\nlead to modified model only containing second derivatives when a\ncertain threshold is exceeded \\cite{BCR89,AP93}. The disadvantage\nwith this approach is that error control for this type of model\nadaptation is not available, and the main justification for its use is\nthat the modified equations converge to the original ones when the\ncut-off parameter tends to zero. The beginnings of a more rigorous\nmodel adaptation approach for stationary linear advection-diffusion\nsystems based on optimal control techniques can be found in\n\\cite{AGQ06}. Recently a heterogeneous domain decomposition technique\nfor linear model problems based on factorisation was suggested in\n\\cite{GHM16}.\n\nOur approach to this issue differs from the previous ones, except\n\\cite{AGQ06} in that we aim at having an {\\it a posteriori} criterion\nwhich enables an automatic and adaptive choice of domains. Our\napproach is inspired by \\cite{BE03} where a model adaptive algorithm\nbased on an a posteriori modeling error estimator is presented. In\n\\cite{BE03} such an estimator was developed for hierarchies of\nelliptic models based on dual weighted residuals. 
This approach\neasily extends to parabolic problems and was extended to\nincompressible, reactive flows in \\cite{BE04}.\n\nThe situation which we consider differs from the one studied in\n\\cite{BE03} since the problems at hand are highly nonlinear and\nconvection dominated in nature. In particular, it is known that error\nestimators based on dual weighted residuals have certain theoretical\nlimitations for nonlinear systems of hyperbolic conservation laws, \\eg\nEuler's equations, since the dual problems may become ill-posed in\ncase the solution to the primal problem is discontinuous \\cite{HH02}.\n\nOur estimates are based on non-linear energy-like arguments using the\nrelative entropy framework. Our presentation will focus on\nRunge--Kutta--discontinuous--Galerkin (RKDG) discretisations which are\nan established tool for viscous as well as inviscid compressible fluid\nflows. The schemes we study use discretisations of the NSF equations\non some part of the computational domain and discretisations of\nEuler's equations on the rest of the domain. Following\n\\cite{GMP_15,DG_15}, which were concerned with a posteriori estimators\nfor discretisation errors for hyperbolic conservation laws, we define\n(explicitly computable) reconstructions of the numerical solution and\nuse the relative entropy stability framework in order to derive an\nexplicitly computable bound for the difference between the\nreconstruction and the exact solution to the NSF equations. Our error\nestimator, in fact, consists of two parts. One part is related to the\nmodelling error while the other one is related to the numerical\ndiscretisation error such that our estimator allows for simultaneous\nmodel and mesh adaptation. Model adaptation for hyperbolic\nconservation laws and relaxation systems was addressed recently in\n\\cite{MCGS12}. In this work an error estimator for scalar problems,\nbased on Kruzhkovs $\\leb{1}$ stability theory, was presented.\nAdditional relevant work is \\cite{CCGMS13} where an error indicator\nfor general systems based on Chapman-Enskog expansions was derived.\n\nWe present an abstract framework where we will consider general models\nthat include dissipative effects of the form:\n\\begin{equation}\\label{cx} \\partial_t \\vec u +\n \\sum_\\alpha \\partial_{x_\\alpha} \\vec f_\\alpha (\\vec u) =\n \\sum_\\alpha \\partial_{x_\\alpha}( \\varepsilon \\vec g_\\alpha(\\vec u , \\nabla\n \\vec u)) \\text{ on } \\rT^d \\times (0,T) \\end{equation}\nwhere $\\vec u$ is the (unknown) vector of conserved quantities,\n$\\varepsilon>0$ is a small parameter,\n$\\vec f_\\alpha \\in \\cont{2}(U,\\rR^{ n}),$ $\\alpha=1,\\dots,d$ are the\ncomponents of an advective flux and\n$\\vec g_\\alpha \\in \\cont{2}(U \\times \\rR^{d \\times n},\\rR^{ n}),$\n$\\alpha=1,\\dots,d$ are the components of a diffusive flux. 
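For orientation, a simple scalar instance of \\eqref{cx} is the viscous Burgers equation,\nobtained for $n=d=1$ by taking\n\\[ f_1(u) = \\tfrac{1}{2} u^2, \\qquad g_1(u,\\partial_x u) = \\partial_x u;\\]\nthe corresponding inviscid model \\eqref{sim} is then the classical Burgers equation.\n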
Moreover,\nthe so-called state space $U \\subset \\rR^n$ is an open set; $T>0$ is\nsome finite time and $\\rT^d$ denotes the $d$-dimensional flat torus.\nBy presenting the arguments in this form we are able to simultaneously\nanalyse the NSF system as well as various other examples.\n\nWe are interested in the case of $\\varepsilon$ being very small, so that on\nlarge parts of the computational domain the inviscid model\n\\begin{equation}\\label{sim} \\partial_t \\vec u + \\sum_\\alpha \\partial_{x_\\alpha} \\vec f_\\alpha (\\vec u) = 0 \\text{ on } \\rT^d \\times (0,T)\\end{equation}\nis a reasonable approximation of \\eqref{cx}, for which, numerical\nmethods are computationally cheaper.\n\nWe will show later how the NSF equations fit into the framework\n\\eqref{cx} and that the Euler equations have the form \\eqref{sim}.\nIndeed, we will present our analysis first for a scalar model problem,\nwhere we are able to derive fully computable a posteriori estimators,\nleading to \\eqref{sim} and \\eqref{cx}. Pairs of models \\eqref{sim} and\n\\eqref{cx} also exist in applications beyond compressible fluid flows,\nfor example traffic modelling.\n\nWe will assume throughout this work that \\eqref{cx} and \\eqref{sim}\nare endowed with an (identical) convex entropy\/entropy flux pair.\nThis assumption is true for the Euler and NSF model hierarchy, and\nexpresses that both models are compatible with the second law of\nthermodynamics. For the scalar model problem it is trivially\nsatisfied.\n\nThis entropy pair gives rise to a stability framework via the\nso-called relative entropy, which we will exploit in this work. The\nrelative entropy framework for the NSF equations has received a lot of\ninterest in recent years, \\eg \\cite{FJN12,LT13}. It enables us to\n study modelling errors for inviscid approximations of\nviscous, compressible flows in an a posteriori fashion.\nIn the case $n=1$ and general $d$, which\nis the scalar case, we are able to prove rigorous a\nposteriori estimates. All constants are explicitly computable and we\ncall such quantities \\emph{estimators}. When $n>1$ a non-computable\nconstant appears in the argument which we were not able to\ncircumvent. The resultant a posteriori bounds are called\n\\emph{indicators} in the sequel.\nBased on this a posteriori estimator\/indicator we\nconstruct a model adaptive algorithm which we have implemented for\nmodel problems.\n\nThere are many applications, \\eg aeroacoustics \\cite{USDM06}, where it\nwould be desirable to consider the pair of models consisting of NSF\nand the {\\it linearised} Euler equations for computational efficiency.\nWe are currently unable to deal with this setting since our analysis\nrelies heavily on \\eqref{cx} and \\eqref{sim} having the same entropy\nfunctional, while the linearised Euler system has a different entropy.\n\n\n\nThe outline of this paper is as follows: In \\S \\ref{sec:mees} we\npresent the general framework of designing a posteriori estimators\nbased on the study of the abstract equations (\\ref{cx})--(\\ref{sim})\nin the scalar case. In \\S \\ref{sec:meesy} we describe how this\nframework can be extended to the specific example of the NSF\/Euler\nproblem through the study of generic systems of the form (\\ref{cx})\nand (\\ref{sim}). A posteriori analysis is always based on the\nstability framework of the underlying problem. For the class of\nproblems considered here we make use of the relative entropy\nframework. 
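To put this into perspective: for a strictly convex entropy and states $\\vec u, \\vec v$ in a\ncompact, convex subset $\\mathfrak{O}$ of the state space, a Taylor expansion of $\\eta$ about $\\vec v$\nshows that the relative entropy defined in \\eqref{red} below is comparable to the squared\nEuclidean distance between the states,\n\\[ \\tfrac{1}{2} C_{\\underline{\\eta}} \\norm{\\vec u - \\vec v}^2 \\leq \\eta(\\vec u|\\vec v) \\leq \\tfrac{1}{2} C_{\\overline{\\eta}} \\norm{\\vec u - \\vec v}^2,\\]\nwith the constants from \\eqref{eq:consts}; integrated over $\\rT^d$ it therefore controls the squared\n$\\leb{2}$-distance, which is the sense in which it serves as a nonlinear energy.\n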
For dG spatial approximations of these operators the a\nposteriori argument requires a \\emph{reconstruction} approach. This\nis since the discrete solution itself is not smooth enough to be used\nin the relative entropy stability analysis. In \\S \\ref{sec:recon} we\ngive an overview of the reconstruction approach presented in\n\\cite{GMP_15,DG_15}. We also take the opportunity to extend the\noperators to 2 spatial dimensions for use in numerical experiments\npresented. Finally, we conclude the presentation with \\S \\ref{sec:num}\nwhere we summarise extensive numerical experiments aimed at testing\nthe performance of the indicator as a basis for model adaptivity.\n\n\\section{Estimating the modeling error for generic reconstructions - the scalar case}\\label{sec:mees}\nOur goal in this section is to show how the entropic structure can be used to obtain estimates for the difference between a (sufficiently regular) solution to \\eqref{cx} and\n the solution $\\vec v_h$ of a numerical scheme which is a discretisation of \\eqref{sim} on part of the (space-time) domain and \nof \\eqref{cx} everywhere else.\nWe will pay particular attention to the {\\it model adaptation error}, \\ie the part of the error which is due to approximating not \\eqref{cx} but \\eqref{sim} on part of the domain.\nIn this section we present the arguments in the scalar case, as in this case all arguments can be given in a tangible way, see Section \\ref{subs:sca}.\n\n\\subsection{Relative Entropy}\nBefore treating the scalar case we will review the classical concept of entropy\/entropy flux pair which will be used in this Section and in Section \\ref{sec:meesy}.\nWe will show how it can be employed in the relative entropy stability framework.\nWe will also show explicitly that the relative entropy in the NSF model satisfies the conditions which are necessary for our analysis.\nThroughout this exposition we will use $d$ to denote the spatial dimension of the problem and $n$ as the number of equations in the system.\n\n\\begin{definition}[Entropy pair]\nWe call a tuple $(\\eta,{\\bf q}) \\in \\cont{1}(U,\\rR) \\times C^1(U,\\rR^d)$ an entropy pair to \\eqref{cx}\n provided the following compatibility conditions are satisfied:\n\\begin{equation}\\label{cc1}\n \\D \\eta \\D \\vec f_\\alpha = \\D q_\\alpha \\quad \\text{for } \\alpha = 1,\\dots, d \\end{equation}\n and\n \\begin{equation}\\label{cc2}\n \\partial_{x_\\alpha} (\\D \\eta (\\vec y)) : \\vec g_\\alpha(\\vec y, \\nabla \\vec y) \\geq 0 \\quad \\text{ for any } \\vec y \\in \\cont{1}(\\rT^d,U) \\text{ and } \\alpha =1,\\dots,d\n \\end{equation}\nwhere $\\D$ denotes the Jacobian of functions defined on $U$ (the state space).\nWe call an entropy pair strictly convex if $\\eta$ is strictly convex.\n\\end{definition}\n\n\\begin{remark}[Entropy equality]\n Note that every solution $\\vec u \\in \\sobh{1}(\\rT^d \\times (0,T), U)$ of \\eqref{cx} satisfies the additional companion balance law\n \\[\\partial_t \\eta(\\vec u) + \\sum_\\alpha \\partial_{x_\\alpha} \\Big( q_\\alpha (\\vec u) - \\varepsilon \\vec g_\\alpha(\\vec u, \\nabla \\vec u) \\D \\eta(\\vec u)\\Big) =\n - \\varepsilon \\sum_\\alpha \\vec g_\\alpha (\\vec u, \\nabla \\vec u)\\partial_{x_\\alpha} \\D \\eta(\\vec u).\\]\n We refer to $\\sum_\\alpha \\vec g_\\alpha (\\vec u, \\nabla \\vec u)\\partial_{x_\\alpha} \\D \\eta(\\vec u)$ as entropy dissipation.\n\\end{remark}\n\n\n\\begin{Rem}[Entropic structure]\nWe restrict our attention to systems \\eqref{cx} which are endowed with at least one strictly 
convex entropy pair.\nSuch a convex entropy pair gives rise to a stability theory based on the relative entropy which we recall in Definition \\ref{def:red}.\nWe will make the additional assumption that $\\eta \\in \\cont{3}(U,\\rR).$\nWhile this last assumption is not standard in relative entropy estimates, it does not exclude any important cases.\n\\end{Rem}\n\n\\begin{remark}[Commutation property]\n Note that the existence of $q_\\alpha$ means that for each $\\alpha$ the vector field $\\vec u \\mapsto \\D \\eta(\\vec u) \\D \\vec f_\\alpha(\\vec u)$ has a potential and, thus,\n gives rise to the following commutivity property:\n \\begin{equation}\\label{eq:dfdeta}\n \\Transpose{(\\D \\vec f_\\alpha)} \\D^2 \\eta = \\D^2 \\eta \\D \\vec f_\\alpha \\quad \\text{ for } \\alpha=1,\\dots, d.\n \\end{equation}\n\\end{remark}\n\n\n\\begin{definition}[Relative entropy]\\label{def:red}\n Let \\eqref{cx} be endowed with an entropy pair $(\\eta, \\vec q).$ Then the relative entropy and relative entropy flux between the states\n $\\vec u,\\vec v \\in U$ are defined by\n \\begin{equation}\\label{red}\n \\begin{split}\n \\eta(\\vec u|\\vec v) &:= \\eta(\\vec u) - \\eta(\\vec v) - \\D \\eta(\\vec v) (\\vec u - \\vec v)\\\\\n \\vec q(\\vec u|\\vec v) &:= \\vec q(\\vec u) - \\vec q(\\vec v) - \\D \\eta(\\vec v) (\\vec f(\\vec u) - \\vec f(\\vec v)).\n \\end{split}\n \\end{equation}\n\\end{definition}\n\n\n\n\n\n\n\\begin{Hyp}[Existence of reconstruction]\\label{hyp:recon}\n In the remainder of this section we will assume existence of a reconstruction $\\widehat{\\vec v} $ of a numerical solution $\\vec v_h$ which weakly\nsolves\n\\begin{equation}\\label{interm} \\partial_t \\widehat{\\vec v} + \\div \\vec f(\\widehat{\\vec v}) = \\div \\qp{ \\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v})} + \\cR_H + \\cR_P\\end{equation}\nwith explicitly computable residuals $\\cR_H,\\cR_P$ and $\\widehat \\varepsilon: \\rT^d \\times [0,T] \\rightarrow [0,\\varepsilon]$ being a function \nwhich is a consequence of the model adaptation procedure and determines in which part of the space-time domain which model is discretised.\nWe assume that the residual can be split into a part denoted $\\cR_H \\in \\leb{2}(\\rT^d \\times (0,T), \\rR^n)$ and a part proportional \nto $\\widehat \\varepsilon,$ denoted $\\cR_P,$ which is an element of $ \\leb{2}(0,T;\\sobh{-1}(\\rT^d, \\rR^n)).$ \n\nWe also assume that an explicitly computable bound for $\\norm{\\widehat{\\vec v} - \\vec v_h}$ is available.\n\\end{Hyp}\n\n\\begin{Rem}[Reconstructions]\n We present some reconstructions in Section \\ref{sec:recon} and point out that different choices of reconstructions will lead to different behaviours of the residuals\n$\\cR_H,\\cR_P$ with respect to the mesh width $h.$ However, our main interest in the work at hand is a rigorous \nestimation of the modelling error, and not the derivation of optimal order discretisation error estimates,\nwhich is a challenging task.\n\\end{Rem}\n\n\n\n\nThroughout this paper we will make the following assumption on the exact solution.\n\\begin{Hyp}[Values in a compact set]\n We assume that we have {\\it a priori} knowledge of a pair $(T,\\mathfrak{O}),$\n where $T>0$ and $\\mathfrak{O} \\subset U$ is compact and convex, such that the exact solution $\\vec u$ of \\eqref{cx} takes values in \n $\\mathfrak{O}$ up to time $T,$ \\ie \n\\[\\vec u({\\vec x},t) \\in \\mathfrak{O} \\quad \\forall \\ ({\\vec x},t) \\in \\rT^d \\times [0,T].\\]\n\\end{Hyp}\n\n\\subsection{A posteriori 
analysis of the scalar case}\\label{subs:sca}\nIn the scalar setting, \\ie $n=1,$ we restrict ourselves to the ``complex'' model being given by \n\\begin{equation}\\label{vb}\n \\partial_t u + \\div \\qp{\\vec f(u)} = \\div (\\varepsilon \\nabla u) \n\\end{equation}\nand the simple model by \n\\begin{equation}\\label{ivb}\n \\partial_t u + \\div \\qp{\\vec f(u)} = 0.\n\\end{equation}\nTherefore, our model adaptive algorithm can be viewed as a numerical discretisation of\n\\begin{equation}\\label{intb}\n \\partial_t u + \\div \\qp{\\vec f(u)} = \\div (\\widehat \\varepsilon \\nabla u) \n\\end{equation}\nwith a space and time dependent function $\\widehat \\varepsilon$ taking values in $[0,\\varepsilon].$\nThe spatial distribution of $\\widehat \\varepsilon$ determines which model is solved on which part of the domain.\nThe reconstruction $\\widehat{v}$ of the numerical solution is a Lipschitz continuous weak solution to the perturbed problem\n\\begin{equation}\\label{vb_R}\n \\partial_t \\widehat{v} + \\div \\qp{\\vec f(\\widehat{v})} = \\div (\\widehat \\varepsilon \\nabla \\widehat{v}) + \\cR_H + \\cR_P,\n\\end{equation}\nwhere $\\widehat \\varepsilon$ is a space dependent function with values in $[0,\\varepsilon],$ $\\cR_H$ is the ``hyperbolic'' part of the discretisation residual\nwhich is in $\\leb{2}(\\rT^d \\times (0,T),\\rR^n)$, and \n$\\cR_P$ is the ``parabolic'' part of the discretisation residual.\nNote that $\\cR_P$ is not in $\\leb{2}(\\rT^d \\times (0,T),\\rR^n)$ but in $\\leb{2}(0,T;\\sobh{-1}(\\rT^d,\\rR^n))$ and that $\\Norm{\\cR_P}_{\\leb{2}(\\sobh{-1})}$ is proportional to $\\widehat \\varepsilon.$\nSee Hypothesis \\ref{hyp:recon} for our general assumption and \\eqref{eq:dres} for such a splitting in case of a specific reconstruction.\n\nIn what follows we will use the relative entropy stability framework to derive a bound for the difference between the solution $u$\nof \\eqref{vb} and the reconstruction $\\widehat{v}$ of the numerical solution.\nIn the scalar case every strictly convex $\\eta \\in \\cont{2}(U,\\rR)$ is an entropy for \\eqref{vb}, because to each such $\\eta$ a consistent entropy flux may be defined by \n\\begin{equation}\n q_\\alpha (u):= \\int^u \\eta'(v) f_\\alpha'(v) \\d v \\text{ for } \\alpha =1,\\dots, d\n\\end{equation}\nand the compatibility with the diffusive term boils down to\n\\begin{equation}\n - (\\partial_{x_\\alpha } y ) \\eta''(y) (\\partial_{x_\\alpha } y ) \\leq 0 \\quad \\text{ for all } y \\in \\cont{1}(U,\\rR) \\text{ and } \\alpha =1,\\dots,d,\n\\end{equation}\nwhich is satisfied as a consequence of the convexity of $\\eta.$\n\nWe choose \n$\\eta(u)=\\tfrac{1}{2} u^2$\nas this simplifies the subsequent calculations.\nIn particular,\n\\[ \\eta(u|v)= \\frac{1}{2} ( u- v)^2\\]\nfor all $u, v\\in U.$\n\n\\begin{remark}[Stability framework]\nNote that in the scalar case we might also use Kruzhkov's $\\leb{1}$ stability framework \\cite{Kru70} instead of the relative entropy.\nHowever, the exposition at hand is supposed to serve as a blueprint for what we are going to do for systems in Section \\ref{sec:meesy}.\n\\end{remark}\n\n\\begin{Rem}[Bounds on flux]\n Due to the regularity of $\\vec f$ and the compactness of $\\mathfrak{O}$ there exists a\n constant $0 < C_{\\overline{\\vec f}} < \\infty$ such\n that\n \\begin{equation}\n \\label{eq:consts1d}\n\\norm{ \\vec f''( u) } \n \\leq C_{\\overline{\\vec f}} \\quad \\forall u \\in \\mathfrak{O} , \n \\end{equation}\n where $\\norm{\\cdot}$ is the Euclidean norm for vectors.\n Note that 
$C_{\\overline{\\vec f}}$ can be explicitly computed from $\\mathfrak{O}$ and $\\vec f.$\n\\end{Rem}\nThe main result of this Section is then the following Theorem:\n\\begin{The}[a posteriori modelling error control]\\label{the:1}\n Let \n $u \\in \\sobh{1}(\\rT^d \\times (0,T),\\rR)$ be a solution to \\eqref{vb} and let $\\widehat{v} \\in\\sob{1}{\\infty}(\\rT^d\\times (0,T),\\rR)$ solve \\eqref{vb_R}. \n Then, for almost all $t \\in (0,T)$ the following bound for the difference between $u$ and $\\widehat{v}$ holds:\n \\begin{multline}\\label{eq:the1}\n \\Norm{ u(\\cdot,t) - \\widehat{v} (\\cdot,t) }_{\\leb{2}(\\rT^d)}^2 + \\int_0^t \\varepsilon \\norm{u(\\cdot,s) - \\widehat{v} (\\cdot,s)}_{\\sobh{1}(\\rT^d)}^2 \\d s \n \\\\ \\leq\n \\Big( \\Norm{ u(\\cdot,0) - \\widehat{v} (\\cdot,0) }_{\\leb{2}(\\rT^d)}^2 + \\cE_M + \\cE_D \n \\Big)\\\\ \n \\times \\exp\\big( (\\Norm{ \\nabla \\widehat{v}}_{\\leb{\\infty}(\\rT^d\\times (0,t))} C_{\\overline{\\vec f}} + 1) t\\big),\n \\end{multline}\n with\n \\begin{equation}\n \\begin{split}\n \\cE_M :=& \\Norm{\\qp{\\varepsilon - \\widehat \\varepsilon}^{1\/2} \\nabla \\widehat{v}}_{\\leb{2}(\\rT^d \\times (0,t))}^2,\\\\\n \\cE_D :=& \\Norm{\\cR_H}_{\\leb{2}(\\rT^d \\times (0,t))}^2 +\\frac{1}{\\varepsilon} \\Norm{\\cR_P}_{\\leb{2}( 0,t; \\sobh{-1}(\\rT^d) )}^2.\n \\end{split}\n \\end{equation}\n\n\\end{The}\n\n\\begin{remark}[Dependence of the estimator on $\\varepsilon.$]\nWe expect that the modelling residual part of the estimator in \\eqref{eq:the1}, $\\cE_M$,\nbecomes small in large parts of the computational domain in case $\\varepsilon$ is sufficiently small, even if $\\widehat \\varepsilon$ vanishes everywhere.\nThis means that \\eqref{sim} is a reasonable approximation of \\eqref{cx}.\nIt should be\nnoted that $\\Norm{\\cR_P}_{\\leb{2}( 0,t; \\sobh{-1}(\\rT^d) )} \\sim \\cO(\\varepsilon)$ (as $\\widehat \\varepsilon$ only takes the values $0$ and $\\varepsilon$),\ntherefore we expect $\\frac{1}{\\varepsilon} \\Norm{\\cR_P}_{\\leb{2}( 0,t; \\sobh{-1}(\\rT^d) )}^2 \\sim \\cO(\\varepsilon).$ \n\\end{remark}\n\n\\begin{proof}[Proof of Theorem \\ref{the:1}]\n Testing \\eqref{vb} and \\eqref{vb_R} with $(u - \\widehat{v})$ and subtracting both equations we obtain\n \\begin{multline}\\label{eq:re1}\n \\int_{\\rT^d}\\frac{1}{2} \\pdt ((u- \\widehat{v})^2) - \\nabla \\widehat{v} \\big(\\vec f(u) - \\vec f(\\widehat{v}) - \\vec f'(\\widehat{v})(u - \\widehat{v}) \\big) + \\varepsilon | \\nabla (u - \\widehat{v})|^2 \\\\\n =\\int_{\\rT^d} (\\widehat \\varepsilon - \\varepsilon) \\nabla \\widehat{v} \\cdot \\nabla (u - \\widehat{v}) + \\cR_H (u - \\widehat{v}) + \\cR_P (u - \\widehat{v}),\n \\end{multline}\nwhere we have used that $\\rT^d$ has no boundary and \n\\begin{equation}\n \\div (\\vec q(u|\\widehat{v})) - \\nabla \\widehat{v} \\big(\\vec f(u) - \\vec f(\\widehat{v}) - \\vec f'(\\widehat{v})(u - \\widehat{v}) \\big) = \\div( \\vec f(u) - \\vec f(\\widehat{v})) (u - \\widehat{v}).\n\\end{equation}\nApplying Young's inequality in \\eqref{eq:re1} we obtain\n \\begin{multline}\\label{eq:re2}\n \\d_t \\left(\\frac{1}{2} \\Norm{u- \\widehat{v}}_{\\leb{2}(\\rT^d)}^2\\right) + \\varepsilon \\norm{ (u - \\widehat{v})}_{\\sobh{1}(\\rT^d)}^2 \\\\\n \\leq \\Norm{\\qp{\\varepsilon - \\widehat \\varepsilon}^{1\/2} \\nabla \\widehat{v}}_{\\leb{2}(\\rT^d)}^2 + \\frac{\\varepsilon}{4} \\norm{u - \\widehat{v}}_{\\sobh{1}(\\rT^d)}^2 + \\Norm{\\cR_H}_{\\leb{2}(\\rT^d)}^2 + \\Norm{u - \\widehat{v}}_{\\leb{2}(\\rT^d)}^2\\\\\n +\\frac{1}{\\varepsilon} 
\\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)}^2 + \\frac{\\varepsilon}{4} \\norm{u - \\widehat{v}}_{\\sobh{1}(\\rT^d)}^2 + \\norm{ \\widehat{v}}_{\\sob{1}{\\infty}(\\rT^d)} C_{\\overline{\\vec f}} \\Norm{u - \\widehat{v}}_{\\leb{2}(\\rT^d)}^2 .\n \\end{multline}\n Several terms in \\eqref{eq:re2} cancel each other and we obtain\n \\begin{multline}\\label{eq:re3}\n \\d_t \\left(\\frac{1}{2} \\Norm{u- \\widehat{v}}_{\\leb{2}(\\rT^d)}^2\\right) +\\frac{ \\varepsilon}{2} \\norm{ (u - \\widehat{v})}_{\\sobh{1}(\\rT^d)}^2\\\\\n \\leq \\Norm{\\qp{\\varepsilon - \\widehat \\varepsilon}^{1\/2} \\nabla \\widehat{v}}_{\\leb{2}(\\rT^d)}^2 + \\Norm{\\cR_H}_{\\leb{2}(\\rT^d)}^2 \\\\\n +\\frac{1}{\\varepsilon} \\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)}^2 + (\\norm{ \\widehat{v}}_{\\sob{1}{\\infty}(\\rT^d)} C_{\\overline{\\vec f}} + 1) \\Norm{u - \\widehat{v}}_{\\leb{2}(\\rT^d)}^2 .\n \\end{multline}\n Integrating \\eqref{eq:re3} in time implies the assertion of the theorem.\n\\end{proof}\n\n\n\\section{Estimating the modeling error for generic reconstructions - the systems case}\\label{sec:meesy}\n\nIn this section we set up a more abstract framework allowing for the analysis of systems of equations. This framework will be built in such a way that it is applicable for two special cases motivated by compressible fluid dynamics, see Remarks \\ref{rem:vhins} and \\ref{rem:vhnsf}. We will make additional assumptions which are not as classical as the existence of a strictly convex entropy.\nThey essentially impose a compatibility of the dissipative flux $\\vec g$ and the parabolic part of the residual $\\cR_P$\n with the {\\it relative} entropy.\n For clarity of exposition we do not explicitly track the constants, rather denote a generic constant $C$ which may depend on $\\mathfrak{O},\\vec f, \\vec g, \\eta$ but is independent of mesh size and solution.\n \n \\begin{remark}[Guiding examples]\\label{rem:gex}\n Note that in this Remark and in Remarks \\ref{rem:vhins} and \\ref{rem:vhnsf} the notation differs from that in the rest of the paper so that the physical quantities \n under consideration are denoted as is standard in the fluid mechanics literature.\n The hypotheses we will make are such that they are satisfied for, at least, two specific cases of great practical relevance.\n The first case is the isothermal Navier-Stokes system (INS), where we replace the Navier-Stokes stress by $\\nabla \\mathbf v$ for simplicity,\n which reads:\n \\begin{equation}\\label{isoNS}\n \\begin{split}\n \\partial_t \\rho + \\div(\\rho \\mathbf v) &=0\\\\\n \\partial_t (\\rho \\mathbf v) + \\div(\\rho \\mathbf v \\otimes \\mathbf v) + \\nabla (p(\\rho)) &= \\div (\\mu \\nabla \\mathbf v) \n \\end{split}\n \\end{equation}\n where $\\rho$ denotes density, $\\mathbf v$ denotes velocity and $p=p(\\rho)$ is the pressure, given by a constitutive relation as a monotone function of density,\nand $\\mu\\geq0$ is the viscosity parameter.\n In this case the simple model are the isothermal Euler equations which are obtained from \\eqref{isoNS} by setting $\\mu=0.$\n For these models the vector of conserved quantities is $\\vec u=\\Transpose{(\\rho, \\rho \\mathbf v)}$ and the mathematical entropy is given by\n \\[ \\eta(\\rho, \\rho \\mathbf v) = W(\\rho) + \\frac{|\\rho \\mathbf v|^2 }{2\\rho},\\]\nwhere Helmholtz energy $W$ and pressure $p$ are related by the Gibbs-Duhem relation\n\\[ p'(\\rho)= \\rho W''(\\rho).\\]\\medskip\nThe second case are the Navier-Stokes-Fourier (NSF) equations for an ideal gas, where we again replace the Navier-Stokes 
stress by $\\nabla \\mathbf v:$\n \\begin{equation}\\label{NSF}\n \\begin{split}\n \\partial_t \\rho + \\div(\\rho \\mathbf v) &=0\\\\\n \\partial_t (\\rho \\mathbf v) + \\div(\\rho \\mathbf v \\otimes \\mathbf v) + \\nabla (p(\\rho,\\epsilon)) &= \\div (\\mu \\nabla \\mathbf v) \\\\\n \\partial_t e + \\div((e+ p)\\mathbf v ) &= \\div (\\mu(\\nabla \\mathbf v) \\cdot \\mathbf v + \\kappa \\nabla T),\n \\end{split}\n \\end{equation}\n where we understand $(\\nabla \\mathbf v) \\cdot \\mathbf v$ by $((\\nabla \\mathbf v) \\cdot \\mathbf v)_\\alpha =\\sum_\\beta v_\\beta \\partial_{x_\\alpha } v_\\beta.$\n The variables and parameters which appear in \\eqref{NSF} and did not appear before are the temperature $T,$ the specific inner energy $\\epsilon,$\n and the heat conductivity $\\kappa >0$.\n In an ideal gas it holds\n \\begin{equation}\n \\begin{split}\n e&= \\rho \\epsilon + \\frac{1}{2} \\rho |\\mathbf v|^2,\\\\\n p&=(\\gamma - 1) \\rho \\epsilon = \\rho R T,\n \\end{split}\n \\end{equation}\n where $R$ is the specific gas constant and $\\gamma$ is the adiabatic coefficient.\n For air it holds $R=287$ and $\\gamma=1.4.$\nIn this case the simple model are the Euler equations which are obtained from \\eqref{NSF} by setting $\\mu, \\kappa =0.$\nThe vector of conserved quantities is $\\mathbf u=\\Transpose{(\\rho, \\rho \\mathbf v,e)}$ and the (mathematical) entropy is given by \n\\[ \\eta(\\mathbf u)= - \\rho \\ln \\left( \\frac{p}{\\rho^\\gamma}\\right).\\]\nWe will impose in both cases that the state space $\\mathfrak{O}$ enforces positive densities.\n \\end{remark}\n \n\n \\begin{Hyp}[Compatibilities]\\label{Hyp:new}\n We impose that we can find a function $\\cD : U \\times \\rR^{n \\times d} \\times U \\times \\rR^{n \\times d} \\rightarrow [0,\\infty)$ \nand a constant $k>0$ such that for all $\\vec w , \\widetilde{ \\vec w} \\in \\sob{1}{\\infty}(\\rT^d)$ taking values in $\\mathfrak{O}$ the following holds\n \\begin{multline}\\label{ca1}\n\\sum_\\alpha ( \\vec g_\\alpha(\\vec w, \\nabla \\vec w) - \\vec g_\\alpha(\\widetilde{ \\vec w}, \\nabla \\widetilde{ \\vec w}) ): \\partial_{x_\\alpha} (\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}))\\\\ \\geq\n\\frac{1}{k} \\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) - k (\\norm{\\vec w}_{\\sob{1}{\\infty}}^2 + \\norm{\\widetilde {\\vec w}}_{\\sob{1}{\\infty}}^2) \\eta(\\vec w| \\widetilde {\\vec w})\n \\end{multline}\n and \n \\begin{multline}\\label{ca2}\n \\norm{ \\nabla(\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\cdot \\vec g(\\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\\\\\n \\leq\n k^2 (\\norm{\\vec w}_{\\sob{1}{\\infty}}^2 + \\norm{\\widetilde {\\vec w}}_{\\sob{1}{\\infty}}^2 +1) \\eta(\\vec w| \\widetilde {\\vec w})+ \\frac{1}{2k} \\cD(\\vec w, \\nabla \\vec w, \\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w}) \n + k^2 \\sum_\\alpha \\vec g_\\alpha (\\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w}) \\partial_{x_\\alpha} \\D \\eta(\\widetilde {\\vec w})\n \\end{multline}\n and \n \\begin{multline}\\label{rp:comp}\n \\int_{\\rT^d} \\cR_P (\\D \\eta (\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\\\\n \\leq k \\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)} \\left((\\norm{\\vec w}_{\\sob{1}{\\infty}} \n + \\norm{\\widetilde {\\vec w}}_{\\sob{1}{\\infty}} +1) \\sqrt{\\eta( \\vec w | \\widetilde {\\vec w} ) }\n + \\sqrt{\\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\\right).\n \\end{multline}\n \\end{Hyp}\n\n \\begin{remark}[Validity 
of Hypothesis \\ref{Hyp:new} for INS]\\label{rem:vhins}\n In case of the isothermal Navier-Stokes equations \\eqref{isoNS} the diffusive fluxes are given by\n \\begin{equation}\n \\label{bdd1}\n \\vec g_\\alpha = \\Transpose{(0, \\partial_{x_\\alpha} \\mathbf v)}\n \\end{equation}\n so that for understanding relations in Hypothesis \\ref{Hyp:new} it is sufficient to \n compute\n \\begin{equation}\n \\label{bdd2}\n \\frac{\\partial \\eta}{\\partial (\\rho \\mathbf v) } = \\frac{\\rho \\mathbf v}{\\rho} = \\mathbf v.\n \\end{equation}\n Thus, in this case entropy dissipation is given by\n \\[ \\mu \\sum_\\alpha \\vec g_\\alpha (\\vec w, \\nabla \\vec w) \\partial_{x_\\alpha} \\D \\eta(\\vec w)= \\mu \\norm{ \\nabla \\mathbf v}^2, \\]\n where $|\\cdot|$ denotes the Frobenius norm and $\\vec w= \\Transpose{( \\rho, \\rho \\mathbf v)} .$\n With $\\widetilde{\\vec w} = \\Transpose{(\\widetilde \\rho,\\widetilde \\rho \\widetilde\\mathbf v)}$ we define\n \\begin{equation}\n \\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w})\\\\\n := \\norm{ \\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v}^2 .\n \\end{equation}\nLet us now verify that we find a constant $k$ such that the assumptions from Hypothesis \\ref{Hyp:new} are valid.\nMaking use of the definitions (\\ref{bdd1}) and (\\ref{bdd2}) we obtain\n \\begin{multline}\\label{ca1t1}\n ( \\vec g(\\vec w, \\nabla \\vec w) - \\vec g(\\widetilde{ \\vec w}, \\nabla \\widetilde{ \\vec w}) ): \\nabla (\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}))\\\\\n= \\left( \\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v \\right) : (\\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v ) = \\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}),\n\\end{multline}\nso that \\eqref{ca1} is satisfied for any $k \\geq 1.$\nNow, again using the definitions, we find for any $k \\geq 1$\n\\begin{multline}\\label{ca2t1}\n \\norm{ \\nabla(\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\cdot \\vec g(\\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\n = \\norm{ \\nabla (\\mathbf v - \\widetilde \\mathbf v ) : \\nabla \\widetilde \\mathbf v}\\\\\n \\leq \\frac{1}{2k} \\norm{ \\nabla (\\mathbf v - \\widetilde \\mathbf v )}^2 + k \\norm{ \\nabla \\widetilde \\mathbf v}^2\n =\n \\frac{1}{2k} \\cD(\\vec w, \\nabla \\vec w, \\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w}) \n + k \\sum_\\alpha \\vec g_\\alpha (\\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w}) \\partial_{x_\\alpha} \\D \\eta(\\widetilde {\\vec w}),\n \\end{multline}\n \\ie \\eqref{ca2} is satisfied for any $k \\geq 1.$\n If a reasonable discretisation of the isothermal Navier-Stokes equations is used there is only a parabolic residual in the momentum balance equations, \\ie\n \\begin{multline}\n \\int_{\\rT^d} \\cR_P (\\D \\eta (\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\leq \\Norm{\\cR_P}_{\\sobh{-1}} \\Norm{\\mathbf v - \\widetilde \\mathbf v }_{\\sobh{1}}\\\\\n \\leq \\Norm{\\cR_P}_{\\sobh{-1}} \\qp{k \\Norm{\\vec w - \\widetilde{\\vec w} }_{\\leb{2}} + \\sqrt{\\cD(\\vec w, \\nabla \\vec w, \\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w})} }\n \\end{multline}\nwhich shows that \\eqref{rp:comp} is satisfied with a constant $k$ depending only on $\\mathfrak{O}$.\n \\end{remark}\n \n \n \\begin{remark}[Validity of Hypothesis \\ref{Hyp:new} for NSF]\\label{rem:vhnsf}\n In case of the Navier-Stokes-Fourier equations there are two parameters $\\mu,\\kappa$ scaling dissipative mechanisms.\n We will identify $\\mu$ with 
the small parameter in \\eqref{cx} and keep the ratio $\\tfrac{\\kappa}{\\mu}$, which we will treat as a constant, of order $1.$\n Then, \n $\\vec g_\\alpha = \\Transpose{(0, \\partial_{x_\\alpha} \\mathbf v, \\mathbf v \\cdot \\partial_{x_\\alpha} \\mathbf v + \\tfrac{\\kappa}{\\mu} \\partial_{x_\\alpha} T )}.$\n Therefore, it is sufficient to \n compute\n \\begin{equation}\\label{NSF1} \\frac{\\partial \\eta}{\\partial (\\rho \\mathbf v) } = \\frac{\\gamma-1}{R} \\frac{\\mathbf v}{T}; \\quad \\frac{\\partial \\eta}{\\partial e} =- \\frac{\\gamma-1}{R} \\frac{1}{T}\\end{equation}\n for understanding relations in Hypothesis \\ref{Hyp:new}. \n From equation \\eqref{NSF1} we may compute entropy dissipation:\n \\begin{equation}\n \\mu \\sum_\\alpha \\vec g_\\alpha (\\vec u, \\nabla \\vec u) \\partial_{x_\\alpha} \\D \\eta(\\vec u) \n = \\frac{\\gamma-1}{R} \\frac{\\mu}{T} \\norm{\\nabla \\mathbf v}^2 + \n \\kappa \\frac{\\gamma-1}{R} \\frac{|\\nabla T|^2}{T^2}\n \\end{equation}\nand we define\n \\begin{equation}\n \\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w})\\\\\n := \\frac{1}{\\widetilde T} \\norm{ \\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v}^2 + \\frac{\\kappa}{\\mu} \\frac{\\norm{\\nabla T - \\nabla \\widetilde T}^2}{\\widetilde T^2} .\n \\end{equation}\n Note that $\\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w})$ is not symmetric in $\\vec w$ and $\\widetilde{\\vec w}.$\nLet us now verify that we find a constant $k$ such that the assumptions from Hypothesis \\ref{Hyp:new} are valid.\nBy inserting the definitions we obtain\n \\begin{multline}\\label{ca1t2}\n ( \\vec g(\\vec w, \\nabla \\vec w) - \\vec g(\\widetilde{ \\vec w}, \\nabla \\widetilde{ \\vec w}) ): \\nabla (\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}))\\\\\n= \\frac{\\gamma-1}{R}\\Big[ \\nabla\\qp{ \\frac{\\mathbf v}{T} - \\frac{\\widetilde \\mathbf v}{T}} : \\nabla (\\mathbf v - \\widetilde \\mathbf v)\n- \\nabla\\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} ( \\nabla \\mathbf v \\cdot \\mathbf v - \\nabla \\widetilde \\mathbf v \\cdot \\widetilde \\mathbf v)\\\\\n- \\frac{\\kappa}{\\mu} \\nabla\\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla (T - \\widetilde T)\n\\Big].\n \\end{multline}\n After a lengthy but straightforward computation we arrive at\n \\begin{multline}\\label{ca1t3}\n \\frac{R}{\\gamma-1}( \\vec g(\\vec w, \\nabla \\vec w) - \\vec g(\\widetilde{ \\vec w}, \\nabla \\widetilde{ \\vec w}) ): \\nabla (\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}))\\\\\n= \\frac{1}{\\widetilde T} \\norm{\\nabla (\\mathbf v - \\widetilde \\mathbf v)}^2 - \\frac{\\nabla \\widetilde T}{\\widetilde T^2} (\\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v) (\\mathbf v - \\widetilde \\mathbf v) \n\\\\\n+ \\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla \\mathbf v : \\nabla (\\mathbf v - \\widetilde \\mathbf v) \n- \\qp{ \\frac{1}{T^2} - \\frac{1}{\\widetilde T^2}}\\nabla T \\nabla \\widetilde \\mathbf v (\\mathbf v - \\widetilde \\mathbf v) \n\\\\\n+ \\frac{\\nabla (T - \\widetilde T)}{\\widetilde T^2} \\nabla \\widetilde \\mathbf v (\\mathbf v - \\widetilde \\mathbf v) \n+ \\frac{\\kappa}{\\mu} \\frac{\\norm{\\nabla (T - \\widetilde T)}^2}{\\widetilde T^2}\n+ \\frac{\\kappa}{\\mu} \\qp{ \\frac{1}{T^2} - \\frac{1}{\\widetilde T^2}}\\nabla T \\nabla (T - \\widetilde T).\n \\end{multline}\n Note that the two summands of $\\cD$ both appear on the right hand side of \\eqref{ca1t3}.\n Applying Young's inequality to the other 
terms on the right hand side of \\eqref{ca1t3} shows that \\eqref{ca1}\n is true for some $k$ only depending on $\\mathfrak{O}.$\n \n By inserting we find\n \\begin{multline}\\label{ca2t2}\n \\frac{R}{\\gamma-1}\\norm{ \\nabla(\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\cdot \\vec g(\\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\\\\\n \\leq\n \\nabla\\qp{ \\frac{\\mathbf v}{T} - \\frac{\\widetilde \\mathbf v}{T}} \\nabla \\widetilde \\mathbf v - \\nabla\\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla \\widetilde \\mathbf v \\cdot\\widetilde \\mathbf v \n - \\frac{\\kappa}{\\mu}\\nabla\\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla \\widetilde T.\n \\end{multline}\n We may rewrite this as \n \\begin{multline}\\label{ca2t3}\n \\frac{R}{\\gamma-1}\\norm{ \\nabla(\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\cdot \\vec g(\\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\\\\\n \\leq\n \\frac{\\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v}{T} : \\nabla \\widetilde \\mathbf v - \\frac{\\nabla T}{T^2} \\nabla \\widetilde \\mathbf v (\\mathbf v - \\widetilde \\mathbf v) + \\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla \\widetilde \\mathbf v : \\nabla \\widetilde \\mathbf v\n \\\\\n + \\frac{\\kappa}{\\mu} \\frac{1}{\\widetilde T^2} \\nabla (T - \\widetilde T) \\cdot \\nabla \\widetilde T + \\frac{\\kappa}{\\mu}\\qp{ \\frac{1}{T^2} - \\frac{1}{\\widetilde T^2}} \\nabla T \\cdot \\nabla \\widetilde T.\n \\end{multline}\n Using Young's inequality we may infer from \\eqref{ca2t3} that \\eqref{ca2}\n is true for some $k$ only depending on $\\mathfrak{O}.$\n \n If a reasonable discretisation of the Navier-Stokes-Fourier equations is used there is no parabolic residual\n in the mass conservation equation, \\ie\n \\begin{multline}\n \\frac{R}{\\gamma-1} \\int_{\\rT^d} \\cR_P (\\D \\eta (\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\leq \\Norm{\\cR_P}_{\\sobh{-1}}\\qp{ \\Norm{\\frac{\\mathbf v}{T} - \\frac{\\widetilde \\mathbf v}{\\widetilde T} }_{\\sobh{1}} \n + \\Norm{\\frac{1}{T} - \\frac{1}{\\widetilde T} }_{\\sobh{1}} }\\\\\n \\leq \\Norm{\\cR_P}_{\\sobh{-1}} \\qp{C ( \\norm{\\vec w}_{\\sob{1}{\\infty}} + \\norm{\\widetilde {\\vec w}}_{\\sob{1}{\\infty}} +1) \\Norm{\\vec w - \\widetilde{\\vec w} }_{\\leb{2}} + \\sqrt{\\cD(\\vec w, \\nabla \\vec w, \\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w})} }\n \\end{multline}\nwhich shows that \\eqref{rp:comp} is satisfied with a constant $k$ depending only on $\\mathfrak{O}$.\n \\end{remark}\n\n\nNow we return to the general setting \\eqref{cx},\\eqref{sim}. 
In particular,\n$\\vec v$ denotes states in $\\mathfrak{O}$ and not fluid velocities.\n\n\n\n\n\n\n\n\n\\begin{Rem}[Bounds on flux and entropy]\n Due to the regularity of $\\vec f$ and $\\eta,$ and the compactness of $\\mathfrak{O}$ there are\n constants $0 < C_{\\overline{\\vec f}} < \\infty$ and $0< C_{\\underline{\\eta}} < C_{\\overline{\\eta}} < \\infty$ such\n that\n \\begin{equation}\n \\label{eq:consts}\n \\norm{\\Transpose{\\vec v} \\D^2 \\vec f(\\vec u) \\vec v} \n \\leq C_{\\overline{\\vec f}} \\norm{\\vec v}^2, \n \\qquad \n C_{\\underline{\\eta}} \n \\norm{\\vec v}^2\n \\leq \n \\Transpose{\\vec v} \n \\D^2 \\eta(\\vec u)\n \\vec v\n \\leq C_{\\overline{\\eta}} \\norm{\\vec v}^2\n \\Foreach \\vec v \\in \\reals^n, \\vec u \\in \\mathfrak{O},\n \\end{equation}\n and \n \\begin{equation}\\label{eq:constt}\n \\norm{ \\D^3 \\eta (\\vec u)} \\leq C_{\\overline{\\eta}} \\Foreach \\vec u \\in \\mathfrak{O},\n \\end{equation}\n where $\\norm{\\cdot}$ is the Euclidean norm for vectors in \\eqref{eq:consts} and a Frobenius norm for 3-tensors in \\eqref{eq:constt}.\n Note that $C_{\\overline{\\vec f}}$, $C_{\\underline{\\eta}}$ and $C_{\\overline{\\eta}}$ can be explicitly computed from $\\mathfrak{O}$, $\\vec f$ and $\\eta.$\n\\end{Rem}\n\n\nNow we are in position to state the main result of this section\n\\begin{The}[A posteriori modelling error control]\\label{the:2}\n Let \n $\\vec u \\in \\sob{1}{\\infty}(\\rT^d \\times (0,T),\\rR^n)$ be a weak solution to \\eqref{cx} and let $\\widehat{\\vec v} \\in\\sob{1}{\\infty}(\\rT^d\\times (0,T),\\rR^n)$ weakly solve \\eqref{interm}.\n Let \\eqref{cx} be\n endowed with a strictly convex entropy pair.\n Let Hypothesis \\ref{Hyp:new} be satisfied and let\n $\\vec u$ and $\\widehat{\\vec v}$ only take values in $\\mathfrak{O}.$\n Then, the following a posteriori error estimate holds:\n \\begin{multline}\\label{eq:the2}\n \\Norm{ \\vec u(\\cdot,t) - \\widehat{\\vec v} (\\cdot,t) }_{\\leb{2}(\\rT^d)}^2 + \\int_{\\rT^d \\times (0,t)} \\frac{\\varepsilon}{4k} \\cD (\\vec u,\\nabla \\vec u, \\widehat{\\vec v} , \\nabla \\widehat{\\vec v} ) \n \\\\ \n \\leq C \\left( \\Norm{ \\vec u(\\cdot,0) - \\widehat{\\vec v} (\\cdot,0) }_{\\leb{2}(\\rT^d)}^2 + \\cE_D + \\cE_M \\right)\\\\\n \\times \\exp\\left( C ( \\Norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\Norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 +1) t \\right)\n \\end{multline}\n with $C,k$ being constants depending on $(\\mathfrak{O}, \\vec f, \\vec g, \\eta)$ and\n\\begin{equation}\\label{def:ce}\n\\begin{split}\n \\cE_M &:= \\Norm{\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v})}_{\\leb{2}(\\rT^d \\times (0,t))}^2 +\\int_{\\rT^d \\times (0,t)} (\\varepsilon - \\widehat \\varepsilon) k^2 \\sum_\\alpha \\vec g_\\alpha (\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \\partial_{x_\\alpha} \\D \\eta(\\widehat{\\vec v}) ,\\\\\n \\cE_D &:= \\frac{k^2}{\\varepsilon} \\Norm{\\cR_P}_{\\leb{2}(0,t;\\sobh{-1}(\\rT^d))}^2 \n + \\Norm{\\cR_H}_{\\leb{2}(\\rT^d \\times (0,t))}^2 .\n \\end{split}\n\\end{equation}\n\n\\end{The}\n\n\\begin{remark}[Energy dissipation degeneracy]\n It should be noted that the estimate in Theorem \\ref{the:2} contains certain assumptions, \\ie $\\vec u\\in\\sob{1}{\\infty}$, which are not verifiable in an a posteriori fashion. 
\n In particular, we assume more regularity than can expected for solutions of systems of the form \\eqref{cx}; see \\cite{FNS11,FJN12} for existence results for systems of this type.\n However, the weak solutions defined in those references are not unique and only weak-strong uniqueness results are available, cf. \\cite{FJN12}.\n Thus, convergent a posteriori error estimators can only be expected in case the problem \\eqref{cx} has a more regular solution than \n is guaranteed analytically. Note that the corresponding term, \\ie $\\Norm{\\nabla \\vec u}_{\\leb{\\infty}}$ does not appear in the scalar case and is a consequence of the dissipation only being present in the equations for $\\vec v$ and $T$ but not in that for $\\rho$. This leads to a form of degeneracy of the energy dissipation governing the underlying system.\n\\end{remark}\n\n\\begin{remark}[Structure of the estimator]\nNote that the first factor in the estimator consists of three parts. The first part is the error in the discretisation and reconstruction of the initial data.\nThe second part $\\cE_D$ is due to the residuals caused by the discretisation error. The third part $\\cE_M$ consists of residuals caused by the model approximation error.\n\\end{remark}\n\n\\begin{remark}[Structure of modelling error residual]\nRecall that we are interested in the case of $\\varepsilon$ being (very) small. In this case the term \n\\[ \\int_{\\rT^d \\times (0,t)} (\\varepsilon - \\widehat \\varepsilon) k^2 \\sum_\\alpha \\vec g_\\alpha (\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \\partial_{x_\\alpha} \\D \\eta(\\widehat{\\vec v}) \\]\nis the dominating part in the modelling error residual $\\cE_M$ and, thus, letting $\\widehat \\varepsilon$ be $\\varepsilon$ in larger parts of the domain will usually reduce the\nmodelling error residual.\n\\end{remark}\n\n\n\\begin{proof}[Proof of Theorem \\ref{the:2}]\n We test \\eqref{cx} by $\\D \\eta(\\vec u) - \\D \\eta (\\widehat{\\vec v})$ and \\eqref{interm} by $\\D^2 \\eta(\\vec u) (\\vec u - \\widehat{\\vec v})$ and subtract both equations.\n By rearranging terms and using \\eqref{eq:dfdeta} we obtain\n \\begin{multline}\\label{remd1}\n \\int_{\\rT^d} \\partial_t \\eta(\\vec u| \\widehat{\\vec v}) + \\sum_\\alpha \\partial_{x_\\alpha} \\vec q_\\alpha (\\vec u, \\widehat{\\vec v}) + \\varepsilon( \\vec g(\\vec u, \\nabla \\vec u) - \\vec g(\\widehat{\\vec v},\\nabla \\widehat{\\vec v}) ): \\nabla (\\D \\eta(\\vec u) - \\D \\eta(\\widehat{\\vec v})) \n \\\\ = E_1 + E_2 + E_3,\n \\end{multline}\n with\n \\begin{equation}\\label{remd2}\n \\begin{split}\n E_1 &:= \\int_{\\rT^d}\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) : \\nabla (\\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}))\n -\\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) : \\nabla (\\D \\eta(\\vec u) - \\D \\eta(\\widehat{\\vec v})) ,\n \\\\\n E_2 &:=-\\int_{\\rT^d} \\sum_\\alpha \\partial_{x_\\alpha}\\widehat{\\vec v} \\D^2 \\eta(\\widehat{\\vec v}) (\\vec f_\\alpha(\\vec u) - \\vec f_\\alpha (\\widehat{\\vec v}) - \\D \\vec f_\\alpha (\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v})),\\\\\n E_3 &:= \\int_{\\rT^d} (\\cR_H + \\cR_P) \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}).\n \\end{split}\n \\end{equation}\nAs $\\rT^d$ does not have a boundary and because of \\eqref{ca1} we obtain\n \\begin{equation}\\label{remd3}\n \\int_{\\rT^d} \\partial_t \\eta(\\vec u| \\widehat{\\vec v}) + \\frac{\\varepsilon}{k} \\cD (\\vec u, \\nabla \\vec 
u,\\widehat{\\vec v},\\nabla \\widehat{\\vec v}) \n \\leq E_1 + E_2 + E_3 + \\varepsilon k (\\norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2)\\int_{\\rT^d} \\eta(\\vec u| \\widehat{\\vec v}).\n \\end{equation}\nWe are now going to derive estimates for the terms $E_1,E_2,E_3$ on the right hand side of \\eqref{remd3}.\nWe may rewrite $E_1$ as\n\\begin{equation}\\label{e1}\n E_1 = E_{11} + E_{12}\n\\end{equation}\nwith\n\\begin{multline}\\label{e11}\n \\abs{E_{11}} :=\\norm{ \\int_{\\rT^d}\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}): \\nabla (\\D^2 \\eta (\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v} ) \n - \\D \\eta(\\vec u) + \\D \\eta (\\widehat{\\vec v}) ) }\\\\\n \\leq \\Norm{\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) }_{\\leb{2}}^ 2\n + C_{\\overline{\\eta}} ( \\Norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\Norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2) \\Norm{\\vec u - \\widehat{\\vec v}}_{\\leb{2}}^2,\n\\end{multline}\nwhere we used \\eqref{eq:constt} and \n\\begin{equation}\\label{e12a}\n E_{12} := \n -\\int_{\\rT^d} (\\varepsilon - \\widehat \\varepsilon) \\vec g(\\widehat{\\vec v} , \\nabla \\widehat{\\vec v}) : \\nabla ( \\D \\eta(\\vec u) - \\D \\eta (\\widehat{\\vec v})).\n \\end{equation}\n Using \\eqref{ca2} we find \n\\begin{multline}\\label{e12} \n| E_{12}| \\leq \n \\varepsilon k^2 (\\norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 +1) \\eta(\\vec u| \\widehat{\\vec v})\\\\\n + \\frac{\\varepsilon}{2k} \\cD(\\vec u, \\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \n + (\\varepsilon - \\widehat \\varepsilon) k^2 \\sum_\\alpha \\vec g_\\alpha (\\widehat{\\vec v},\\nabla \\widehat{\\vec v}) \\partial_{x_\\alpha} \\D \\eta(\\widehat{\\vec v}).\n\\end{multline}\nConcerning $E_2$ we note\n\\begin{equation}\\label{e2}\n|E_2| \\leq C_{\\overline{\\eta}} C_{\\overline{\\vec f}} \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}} \\Norm{\\vec u - \\widehat{\\vec v}}_{\\leb{2}}^2.\n\\end{equation}\nWe decompose $E_3$ into two terms\n\\begin{equation}\\label{e3}\n E_3 = \\int_{\\rT^d} \\cR_H \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) + \\cR_P \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) =: E_{31} + E_{32}.\n\\end{equation}\nWe have\n\\begin{equation}\\label{e31}\n |E_{31} |\\leq \\Norm{\\cR_H}_{\\leb{2}}^2+ C_{\\overline{\\eta}}^2 \\Norm{\\vec u - \\widehat{\\vec v}}_{\\leb{2}}^2.\n\\end{equation}\nWe rewrite $E_{32}$ as\n\\begin{equation}\n E_{32} = \\int_{\\rT^d} \\cR_P \\left( - \\D \\eta (\\vec u ) + \\D \\eta (\\widehat{\\vec v}) + \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) \\right) + \\cR_P \\left( \\D \\eta (\\vec u ) - \\D \\eta (\\widehat{\\vec v})\\right)\n\\end{equation}\nsuch that we get the following estimate\n\\begin{multline}\\label{e32b}\n |E_{32} |\\leq \\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)} \\Norm{\\D \\eta (\\vec u ) - \\D \\eta (\\widehat{\\vec v}) - \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) }_{\\sobh{1}(\\rT^d)} \\\\\n + k \\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)} \\left((\\norm{\\vec u}_{\\sob{1}{\\infty}} + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}} +1) \\Norm{ \\vec u- \\widehat{\\vec v} }_{\\leb{2}(\\rT^d)} \n + \\sqrt{\\cD ( \\vec u, \\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) }\\right)\n\\end{multline}\ndue to \\eqref{rp:comp}. 
We have \n\\begin{equation}\\label{e32c}\n \\Norm{\\D \\eta (\\vec u ) - \\D \\eta (\\widehat{\\vec v}) - \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) }_{\\sobh{1}} \\leq C_{\\overline{\\eta}} ( \\Norm{\\vec u}_{\\sob{1}{\\infty}} + \\Norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}) \\Norm{\\vec u- \\widehat{\\vec v}}_{\\leb{2}},\n\\end{equation}\nsuch that \\eqref{e32b} becomes\n\\begin{multline}\\label{e32d}\n | E_{32}| \\leq \\frac{k^2}{\\varepsilon} \\Norm{\\cR_P}_{\\sobh{-1}}^2 + 2 \\varepsilon C_{\\overline{\\eta}}^2 ( \\Norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\Norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 + 1) \\Norm{\\vec u- \\widehat{\\vec v}}_{\\leb{2}}^2\\\\\n + \\frac{\\varepsilon}{4k} \\int_{\\rT^d}\\cD ( \\vec u, \\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}).\n\\end{multline}\nCombining \\eqref{e31} and \\eqref{e32d} we obtain \n\\begin{multline}\\label{e3f}\n | E_3 | \\leq \\frac{k^2}{\\varepsilon} \\Norm{\\cR_P}_{\\sobh{-1}}^2 + 2 C_{\\overline{\\eta}}^2 (\\varepsilon \\Norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\varepsilon \\Norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 + 2) \\Norm{\\vec u- \\widehat{\\vec v}}_{\\leb{2}}^2\\\\\n + \\Norm{\\cR_H}_{\\leb{2}}^2 + \\frac{\\varepsilon}{4k} \\int_{\\rT^d}\\cD ( \\vec u, \\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}).\n\\end{multline}\nUpon inserting \\eqref{e11}, \\eqref{e12}, \\eqref{e2} and \\eqref{e3f} into \\eqref{remd3} we obtain for $\\varepsilon <1$\n \\begin{multline}\\label{remd4}\n \\int_{\\rT^d} \\partial_t \\eta(\\vec u| \\widehat{\\vec v}) + \\frac{\\varepsilon}{4k} \\cD (\\vec u, \\nabla \\vec u,\\widehat{\\vec v},\\nabla \\widehat{\\vec v}) \\\\\n \\leq \n\\Norm{\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) }_{\\leb{2}}^ 2\n + C (\\norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 +1) \\int_{\\rT^d} \\eta(\\vec u| \\widehat{\\vec v})\\\\\n + (\\varepsilon - \\widehat \\varepsilon) k^2 \\sum_\\alpha \\vec g_\\alpha (\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \\partial_{x_\\alpha} \\D \\eta(\\widehat{\\vec v})\n + \\frac{k^2}{\\varepsilon} \\Norm{\\cR_P}_{\\sobh{-1}}^2\n + \\Norm{\\cR_H}_{\\leb{2}}^2 \n \\end{multline}\n where we have used that $\\Norm{\\vec u - \\widehat{\\vec v}}_{\\leb{2}}^2$ is bounded in terms of the relative entropy.\nUsing Gronwall's Lemma we obtain\n\\begin{multline}\n \\Norm{ \\vec u(\\cdot,t) - \\widehat{\\vec v} (\\cdot,t) }_{\\leb{2}(\\rT^d)}^2 +\n \\frac{\\varepsilon}{4k} \\int_{\\rT^d \\times (0,t)} \\cD (\\vec u,\\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \\d s \\\\\n \\leq C_{\\overline{\\eta}} \\left( \\Norm{ \\vec u(\\cdot,0) - \\widehat{\\vec v} (\\cdot,0) }_{\\leb{2}(\\rT^d)}^2 + \\cE_D + \\cE_M\\right)\\\\\n \\times \\exp\\left( C(\\norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 +1) t \\right)\n\\end{multline}\nwith $\\cE_D, \\cE_M$ defined in \\eqref{def:ce}.\n\\end{proof}\n\n\n\n\\section{Reconstructions}\\label{sec:recon}\nIn Sections \\ref{sec:mees} and \\ref{sec:meesy} we have assumed existence of reconstructions of numerical solutions whose residuals are computable, see Hypothesis \\ref{hyp:recon}. We have also assumed a certain regularity of these reconstructions. 
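Before turning to concrete constructions we indicate, for orientation only, how the modelling\npart $\\cE_M$ of the estimator in Theorem \\ref{the:1} might be used to update the model\ndistribution $\\widehat \\varepsilon$.\nThe following sketch (with hypothetical variable names, a threshold parameter {\\tt tol}, and\ncell volumes omitted) is one possible realisation and not the precise algorithm employed in\nSection \\ref{sec:num}.\n\\begin{verbatim}\nimport numpy as np\n\ndef update_model_distribution(grad_vhat_sq, eps, eps_hat, tol):\n    # cellwise modelling indicator (eps - eps_hat) * |grad v_hat|^2,\n    # cf. the modelling part E_M of the scalar estimator\n    eta_M = (eps - eps_hat) * grad_vhat_sq\n    # switch on the viscous model where the indicator is large\n    new_eps_hat = np.where(eta_M > tol, eps, 0.0)\n    return eta_M, new_eps_hat\n\n# example: eps = 1e-3, the indicator exceeds tol in two of four cells\ngrad_vhat_sq = np.array([0.1, 50.0, 0.2, 80.0])\neta_M, eps_hat = update_model_distribution(grad_vhat_sq, 1e-3,\n                                            np.zeros(4), tol=1e-2)\n\\end{verbatim}\nHere cells whose modelling indicator exceeds the threshold are assigned the viscous model\n($\\widehat \\varepsilon = \\varepsilon$), all other cells the inviscid one ($\\widehat \\varepsilon = 0$).\n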
In this Section we will describe one way to obtain such reconstructions for semi- (spatially) discrete dG schemes.\n\nIn previous works reconstructions for dG schemes have been mainly used for deriving a posteriori bounds of {\\it discretisation errors} \\cite[c.f.]{DG_15,GMP_15,GeorgoulisHallMakridakis:2014} for hyperbolic problems.\nIn these works the main idea is to compare the numerical solution $\\vec v_h$ and the exact solution $\\vec u$ not directly,\nbut to introduce an intermediate quantity, the reconstruction $\\widehat{\\vec v}$ of the numerical solution.\nThis reconstruction must have two crucial properties:\n\\begin{itemize}\n\\item Explicit a posteriori bounds for the difference $\\Norm{\\widehat{\\vec v} -\\vec v_h}_\\cX$ for some appropriate $\\cX$ need to be \n available and,\n\\item The reconstruction $\\widehat{\\vec v}$ needs to be globally smooth enough to apply the appropriate stability theory of the underlying PDE.\n\\end{itemize}\nThese two properties allow the derivation of an a posteriori bound for the difference $\\Norm{\\vec u - \\vec v_h}_\\cX.$\n\nIn the sequel we will provide a methodology for the explicit computation of $\\widehat{\\vec v}$ \\emph{only} from the numerical solution $\\vec v_h$. This means trivially that the difference $\\Norm{\\widehat{\\vec v} - \\vec v_h}_\\cX$ can be controlled explicitly.\n\nFrom Sections \\ref{sec:mees} and \\ref{sec:meesy} the stability theory we advocate is that of relative entropy and we have extended the classical approach \nsuch that not only \\emph{discretisation} but also \\emph{modelling errors} are accounted for.\nNote also that for our results from Sections \\ref{sec:mees} and \\ref{sec:meesy} to be applicable we require $\\widehat{\\vec v} \\in \\sob{1}{\\infty}(\\rT^d \\times [0,T]).$\n\nIn this Section we describe how to obtain reconstructions $\\widehat{\\vec v}$ of numerical solutions $\\vec v_h$ \nwhich are obtained by solving \\eqref{cx} on part of the space-time domain and \\eqref{sim} on the rest of the space-time domain.\nFor brevity we will focus on numerical solutions obtained by semi-(spatially)-discrete dG schemes,\nwhich are a frequently used tool for the numerical simulation of models of the forms \\eqref{cx} and \\eqref{sim} alike.\nWe will view $\\vec v_h$ as a discretisation of the ``intermediate'' problem \n\\begin{equation}\n \\label{interm-c}\n \\partial_t \\vec v + \\div \\vec f(\\vec v) = \\div \\qp{ \\widehat \\varepsilon \\vec g(\\vec v, \\nabla \\vec v)}\\end{equation}\nwhere $\\widehat \\varepsilon$ is the model adaptation function, which will be chosen as part of the numerical method.\n\n\\begin{remark}[Alternative types of reconstruction]\nIf \\eqref{cx} was a parabolic problem, this would be a quite strong argument in favour of using elliptic reconstruction, see \\cite{MN03},\nbut this would make the residuals scale with $\\frac{1}{\\varepsilon}.$ Recall that we are interested in the case of $\\varepsilon$ being small.\nAs important examples, \\eg the Navier-Stokes-Fourier equations, are not parabolic we will describe a reconstruction approach here\nwhich was developed for semi-discrete dG schemes for hyperbolic problems in one space dimension in \\cite{GMP_15}.\nAn extension to fully discrete methods can be found in \\cite{DG_15}.\n\\end{remark}\n\nNote that we state reconstructions in this paper to keep it self contained and to describe how we proceed in our numerical experiments in Section \\ref{sec:num}.\nIt is, however, beyond the scope of this work to derive optimal 
reconstructions for \\eqref{cx}, \\eqref{sim}.\nFor all of these problems the derivation of optimal reconstructions of the numerical solution is a problem in its own right.\nNote that in this framework {\\it optimality} of a reconstruction means that the error estimator, which is obtained\nbased on this reconstruction, is of the same order as the (true) error of the numerical\nscheme.\n\nWe will first outline the reconstructions for \\eqref{sim}, proposed in \\cite{GMP_15}, and investigate in which sense they lead to reconstructions of \nnumerical solutions to\n\\eqref{cx} or \\eqref{interm-c}.\nAfterwards we describe how the reconstruction approach can be extended to dG methods on Cartesian meshes in two space dimensions.\nWe choose Cartesian meshes because they lend themselves to an extension of the approach from \\cite{GMP_15}.\nWe are not able to show the optimality of $\\cR_H$ in this case, though.\nFinding suitable (optimal) reconstructions for non-linear hyperbolic systems on unstructured meshes is the topic of ongoing research.\n\n\n\\subsection{A reconstruction approach for dG approximations of hyperbolic conservation laws}\\label{subs:rec:claw}\nIn this section we recall a reconstruction approach for semi-(spatially)-discrete dG schemes for systems of hyperbolic conservation laws \\eqref{sim}\ncomplemented with initial data $\\vec u(\\cdot,0) = \\vec u_0 \\in L^\\infty(\\rT).$\nWe consider the one dimensional case. Let $\\cT$ be a set of open intervals such that\n\\begin{equation}\n \\bigcup_{S \\in \\cT} \\bar S = \\rT \\text{ (the 1d torus)}, \\text{ and } \\text{ for all } S_1,S_2 \\in \\cT \\text{ it holds } S_1=S_2 \\text{ or } S_1 \\cap S_2 = \\emptyset.\n\\end{equation}\nBy $\\cE$ we denote the set of interval boundaries.\n\nThe space of piecewise polynomial functions of degree $q \\in \\rN$ is defined by\n\\begin{equation}\n \\fes_q := \\{ \\vec w : \\rT \\rightarrow \\rR^n \\, : \\, \\vec w|_S \\in \\rP_q(S,\\rR^n) \\ \\forall \\, S \\in \\cT\\},\n\\end{equation}\nwhere $ \\rP_q(S,\\rR^n)$ denotes the space of polynomials of degree $\\leq q$ on $S$ with values in $\\rR^n.$\n\nFor defining our scheme we also need jump and average operators which require the definition of a broken Sobolev space:\n\\begin{definition}[Broken Sobolev space]\n The broken Sobolev space $\\sobh{1}(\\cT,\\rR^n)$ is defined by\n \\begin{equation}\n \\sobh{1}(\\cT,\\rR^n) := \\{ \\vec w : \\rT \\rightarrow \\rR^n \\, : \\, \\vec w|_S \\in \\sobh{1}(S,\\rR^n) \\, \\forall \\, S \\in \\cT\\}.\n\\end{equation}\n\\end{definition}\n\n\n\n\\begin{definition}[Traces, jumps and averages]\n For any $\\vec w \\in \\sobh{1}(\\cT,\\rR^n)$ we define \n \\begin{itemize}\n \\item $\\vec w^\\pm : \\cE \\rightarrow \\rR^n $ by $ \\vec w^\\pm (\\cdot):= \\lim_{ s \\searrow 0} \\vec w(\\cdot \\pm s) $ ,\n \\item $\\avg{\\vec w} : \\cE \\rightarrow \\rR^n $ by $ \\avg{\\vec w} = \\frac{\\vec w^- + \\vec w^+}{2},$\n \\item $\\jump{\\vec w} : \\cE \\rightarrow \\rR^n $ by $ \\jump{\\vec w} = \\vec w^- - \\vec w^+.$\n \\end{itemize}\n\n\\end{definition}\n\nNow we are in position to state the numerical schemes under consideration:\n\\begin{definition}[Numerical scheme for \\eqref{sim}]\n The numerical scheme is to seek $\\vec v_h \\in \\cont{1}((0,T), \\fes_q)$ such that:\n\\begin{equation}\\label{eq:sddg}\n\\begin{split}\n &\\vec v_h(0)= \\cP_q [\\vec u_0] \\\\\n &\\int_{\\cT} \\partial_t \\vec v_h \\cdot \\vec \\phi - \\vec f(\\vec v_h) \\cdot \\partial_x \\vec \\phi + \\int_{\\cE} \\vec F (\\vec v_h^-,\\vec v_h^+) 
\\jump{\\vec \\phi} =0 \\quad \\text{for all } \\vec \\phi \\in \\fes_q,\n \\end{split}\n\\end{equation}\nwhere $\\int_{\\cT}$ is an abbreviation for $\\sum_{S \\in \\cT} \\int_S$, $\\cP_q$ denotes $\\leb{2}$-orthogonal projection into $\\fes_q,$\nand $\\vec F: U \\times U \\rightarrow \\mathbb{R}^n$ is a numerical flux function.\nWe impose that the numerical flux function satisfies the following condition:\nThere exist $L>0$ and $\\vec w : U \\times U \\rightarrow U$ such that\n\\begin{equation}\n \\label{eq:repf}\n \\vec F (\\vec u, \\vec v) = \\vec f(\\vec w(\\vec u, \\vec v)) \\quad \\text{ for all } \\vec u, \\vec v \\in U,\n\\end{equation}\nand\n\\begin{equation}\n \\label{eq:condw}\n \\norm{\\vec w(\\vec u, \\vec v) - \\vec u} + \\norm{\\vec w(\\vec u, \\vec v) - \\vec v} \\leq L \\norm{\\vec u - \\vec v} \\quad \\text{ for all } \\vec u, \\vec v \\in \\mathfrak{O}.\n\\end{equation}\n\\end{definition}\n\n\n\\begin{remark}[Conditions on the flux]\nNote that conditions \\eqref{eq:repf} and \\eqref{eq:condw} imply the consistency and local Lipschitz continuity conditions usually imposed on numerical fluxes in the convergence \nanalysis of dG approximations of hyperbolic conservation laws.\nThe conditions do not make the flux monotone nor do they ensure stability of \\eqref{eq:sddg}. \nThey do, however, ensure that the right hand side of\n\\eqref{eq:sddg} is Lipschitz continuous and, therefore, \\eqref{eq:sddg} has unique solutions for small times.\nObviously, practical interest is restricted to numerical fluxes leading to reasonable stability properties of \\eqref{eq:sddg} at least as long as the exact solution \nto \\eqref{sim} is Lipschitz continuous.\nFluxes of Richtmyer and Lax-Wendroff type lead to stable numerical schemes (as long as the exact solution is smooth) and \nsatisfy a relaxed version of conditions \\eqref{eq:repf} and \\eqref{eq:condw}.\nIt was shown in \\cite{DG_15} that these relaxed conditions (see \\cite[Rem. 
3.6]{DG_15}) are sufficient for obtaining optimal a posteriori error estimates.\n\\end{remark}\n\nLet us now return to the main purpose of this section: the definition of a reconstruction operator.\nIn addition, we present a reconstruction of the numerical flux which will be used for splitting the residual into a parabolic and a hyperbolic part, in Section \\ref{subs:rec:hp}.\nBoth are based on information from the numerical scheme:\n\\begin{definition}[Reconstructions]\n For each $t \\in [0,T]$ we define the flux reconstruction $\\widehat{\\vec f}(\\cdot,t) \\in \\fes_{q+1}$ through\n \\begin{equation}\\label{eq:rf1}\n \\begin{split}\n \\int_{\\cT} \\partial_x \\widehat{\\vec f}(\\cdot,t) \\cdot \\vec \\phi &= -\\int_{\\cT} \\vec f(\\vec v_h(\\cdot,t)) \\cdot \\partial_x \\vec \\phi + \\int_{\\cE} \\vec F (\\vec v_h^-(\\cdot,t),\\vec v_h^+(\\cdot,t)) \\jump{\\vec \\phi} \\quad \\text{for all } \\vec \\phi \\in \\fes_q\\\\\n \\widehat{\\vec f}^+(\\cdot,t) &= \\vec F (\\vec v_h^-(\\cdot,t),\\vec v_h^+(\\cdot,t)) \\quad \\text{ on } \\cE.\n \\end{split}\n\\end{equation}\nFor each $t \\in [0,T]$ we define the reconstruction $\\widehat{\\vec v}(\\cdot,t) \\in \\fes_{q+1}$ through\n \\begin{equation}\\label{eq:rf2}\n \\begin{split}\n \\int_{\\rT} \\widehat{\\vec v}(\\cdot,t) \\cdot \\vec \\psi &= \\int_{\\rT} \\vec v_h (\\cdot,t) \\cdot \\vec \\psi \\quad \\text{for all } \\vec \\psi \\in \\fes_{q-1}\\\\\n \\widehat{\\vec v}^\\pm(\\cdot,t) &= \\vec w (\\vec v_h^-(\\cdot,t),\\vec v_h^+(\\cdot,t)) \\quad \\text{ on } \\cE.\n \\end{split}\n\\end{equation}\n\\end{definition}\n\n\\begin{remark}[Properties of reconstruction]\n It was shown in \\cite{GMP_15} that these reconstructions are well-defined, explicitly and locally computable and Lipschitz continuous in space.\n Due to the Lipschitz continuity of $\\vec w$ they are also Lipschitz continuous in time.\n Recall from Sections \\ref{sec:mees} and \\ref{sec:meesy} that the Lipschitz continuity of $\\widehat{\\vec v}$ in space was crucial for our arguments.\n\\end{remark}\n\nDue to the Lipschitz continuity of $\\widehat{\\vec v}$ the discretisation residual is well defined and satisfies\n\\begin{equation}\\label{eq:hypres}\n \\cR := \\partial_t \\widehat{\\vec v} + \\partial_x \\vec f(\\widehat{\\vec v}) \\in \\leb{\\infty}.\n\\end{equation}\nAt this point the reader might ask why we have defined $\\widehat{\\vec f}$ as it is not present in \\eqref{eq:hypres}\nand is not needed for computing the residual $\\cR$ either.\nWe will use $\\widehat{\\vec f}$ in Section \\ref{subs:rec:hp} to split the residual into a parabolic and a hyperbolic part.\nAs a preparation to this end let us note that upon combining \n\\eqref{eq:sddg} and \\eqref{eq:rf1} we obtain\n\\[ \\partial_t \\vec v_h + \\partial_x \\widehat{\\vec f} =0\\]\npointwise almost everywhere. 
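Before proceeding let us indicate how \\eqref{eq:rf2} can be evaluated in practice. The following is a minimal, purely illustrative Python sketch (not part of the analysis) for one scalar component on a single cell, mapped to the reference interval $[-1,1]$ and represented in the Legendre basis; the face values \\texttt{wL}, \\texttt{wR} are the values of $\\vec w(\\vec v_h^-,\\vec v_h^+)$ at the two faces of the cell and all variable names are ours.\n\\begin{verbatim}\nimport numpy as np\n\ndef reconstruct_cell(a, wL, wR):\n    # a  : Legendre coefficients of v_h on the cell (degree q)\n    # wL : prescribed value at the left face, wR : value at the right face\n    q = len(a) - 1\n    c = np.zeros(q + 2)\n    # orthogonality against polynomials of degree <= q-1 keeps the\n    # first q Legendre coefficients of v_h\n    c[:q] = a[:q]\n    # P_k(1) = 1 and P_k(-1) = (-1)**k, so the two prescribed face\n    # values determine the remaining coefficients c_q and c_{q+1}\n    rR = wR - c[:q].sum()\n    rL = wL - sum(((-1) ** k) * c[k] for k in range(q))\n    A = np.array([[1.0, 1.0], [(-1.0) ** q, (-1.0) ** (q + 1)]])\n    c[q], c[q + 1] = np.linalg.solve(A, np.array([rR, rL]))\n    return c\n\\end{verbatim}\nWe now return to the pointwise identity $\\partial_t \\vec v_h + \\partial_x \\widehat{\\vec f} =0$ derived above.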
Thus, we may split the residual as follows:\n\\[ \\cR = \\partial_t ( \\widehat{\\vec v} - \\vec v_h) + \\partial_x (\\vec f(\\widehat{\\vec v}) - \\widehat{\\vec f}).\\]\nThis splitting was used in \\cite{GMP_15} to argue that the residual is of optimal order.\n\n\n\n\\subsection{A reconstruction approach for dG approximations of hyperbolic\/parabolic problems}\\label{subs:rec:hp}\nWe will describe in this Section how the reconstruction methodology described above can be used in case of dG semi- (spatial) discretisations\nof \\eqref{interm-c} in one space dimension following the local dG methodology.\n\n\\begin{definition}[Discrete gradients]\n By $\\partial_x^-, \\partial_x^+ : \\sobh{1}(\\cT,\\rR^m) \\rightarrow \\fes_q$ we denote discrete gradient operators defined through\n\\begin{equation}\n \\int_{\\rT} \\partial_x^\\pm \\vec y \\cdot \\vec \\phi = - \\int_{\\cT} \\vec y \\cdot \\partial_x \\vec \\phi + \\int_{\\cE} \\vec y^\\pm \\jump{\\vec \\phi} \\quad \\text{ for all } \\vec y, \\vec \\phi \\in \\fes_q.\n\\end{equation}\n\\end{definition}\n\\begin{lemma}[Discrete integration by parts]\\label{lem:dibp}\nThe operators $\\partial_x^\\pm$ satisfy the following duality property:\nFor any $\\vec \\phi, \\vec \\psi \\in \\fes_q$ it holds\n\\[ \\int_{\\rT} \\vec \\phi \\partial_x^- \\vec \\psi = - \\int_{\\rT} \\vec \\psi \\partial_x^+ \\vec \\phi .\\]\n\\end{lemma}\nThe proof of Lemma \\ref{lem:dibp} can be found in \\cite{DE12}. \nRewriting \\eqref{interm-c} as \n\\begin{equation}\n \\begin{split}\n \\vec s &= \\partial_x \\vec v\\\\\n \\partial_t \\vec v + \\partial_{x} \\vec f (\\vec v) &= \\partial_{x} (\\widehat \\varepsilon \\vec g(\\vec v , \\vec s)) \n \\end{split}\n\\end{equation}\n motivates the following semi-discrete numerical scheme:\n\\begin{definition}[Numerical scheme]\n The numerical solution $(\\vec v_h, \\vec s_h ) \\in \\qb{\\cont{1}((0,T), \\fes_q)}^2$\n is given as the solution to \n \\begin{equation}\\label{eq:sddg2}\n\\begin{split}\n \\vec v_h(0)&= \\cP_q [\\vec u_0] \\\\\n \\vec s_h &= \\partial_x^- \\vec v_h\\\\\n \\int_{\\cT} \\partial_t \\vec v_h \\cdot \\vec \\phi - \\vec f(\\vec v_h) \\cdot \\partial_x \\vec \\phi + \\widehat \\varepsilon \\vec g(\\vec v_h, \\vec s_h) \\partial_x^- \\vec \\phi + \\int_{\\cE} \\vec F (\\vec v_h^-,\\vec v_h^+) \\jump{\\vec \\phi} \n &=0 \\quad \\text{for all } \\vec \\phi \\in \\fes_q.\n \\end{split}\n\\end{equation}\n\\end{definition}\n\nDefining $\\widehat{\\vec f}$ as in \\eqref{eq:rf1} allows us to rewrite \\eqref{eq:sddg2}$_3$, using Lemma \\ref{lem:dibp}, as\n\\begin{equation}\\label{nssf}\n \\partial_t \\vec v_h + \\partial_x \\widehat{\\vec f} - \\partial_x^+ \\cP_q[ \\widehat \\varepsilon \\vec g(\\vec v_h, \\partial_x^- \\vec v_h)]=0.\n\\end{equation}\n\nDue to the arguments given in Section \\ref{subs:rec:claw} the reconstruction $\\widehat{\\vec v}$ is an element of $ \\sob{1}{\\infty}(\\rT \\times (0,T), \\rR^n)$ such that the following \nresidual makes sense in $\\leb{2}(0,T;\\sobh{-1}(\\rT)):$\n\\begin{equation}\n \\cR := \\partial_t \\widehat{\\vec v} + \\partial_x \\vec f(\\widehat{\\vec v}) - \\partial_x \\qp{ \\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\partial_x \\widehat{\\vec v})}.\n\\end{equation}\nUsing \\eqref{nssf} we may rewrite the residual as\n\\begin{equation}\\label{eq:dres}\n \\cR = \\underbrace{\\partial_t (\\widehat{\\vec v} - \\vec v_h) + \\partial_x (\\vec f(\\widehat{\\vec v})- \\widehat{\\vec f})}_{=: \\cR_H} + \\underbrace{\\partial_x^+ \\cP_q[ \\widehat \\varepsilon \\vec 
g(\\vec v_h, \\partial_x^- \\vec v_h)] - \\partial_x (\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\partial_x \\widehat{\\vec v})) }_{=: \\cR_P},\n\\end{equation}\n\\ie we have a decomposition of the residual as assumed in previous Sections, see \\eqref{interm} in particular.\n\n\n\n\\subsection{Extension of the reconstruction to $2$ space dimensions}\n\\label{subs:rec:2d}\nIn this section we present an extension of the reconstruction approach described before to semi-(spatially)-discrete dG schemes for systems of hyperbolic conservation laws \\eqref{sim}\ncomplemented with initial data $\\vec v(\\cdot,0) = \\vec v_0 \\in L^\\infty(\\rT^2)$ using Cartesian meshes in two space dimensions.\nThe extension to Cartesian meshes in more than two dimensions is straightforward.\n\nWe consider a system of hyperbolic conservation laws in two space dimensions\n\\begin{equation}\\label{claw}\n \\partial_t \\vec v + \\partial_{x_1} \\vec f_1(\\vec v) + \\partial_{x_2} \\vec f_2(\\vec v) =0,\n\\end{equation}\nwhere $\\vec f_{1,2} \\in \\cont{2}(U , \\rR^n).$ \n\nWe discretise $\\rT^2$ using partitions \n\\[-1=x_0 < x_1 < \\dots < x_N =1, \\quad -1=y_0 < y_1 < \\dots < y_M =1.\\] \nWe consider a Cartesian mesh $\\cT$ such that each element satisfies $K=[x_i,x_{i+1}] \\times [y_j,y_{j+1}]$ \nfor some $(i,j)\\in \\{0,\\dots,N-1\\} \\times \\{0,\\dots,M-1\\}.$\nFor any $p,q \\in \\rN$ and $K \\in \\cT$ let\n\\[ \\rP_q \\otimes \\rP_p (K) := \\rP_q([x_i,x_{i+1}]) \\otimes \\rP_p ([y_j,y_{j+1}]).\\]\nBy $\\fes_{p,q}$ we denote the space of trial and test functions, \\ie\n\\[ \\fes_{p,q}:= \\{ \\Phi : \\rT^2 \\rightarrow \\rR^n \\, : \\, \\Phi|_K \\in (\\rP_p \\otimes \\rP_q (K))^n \\ \\forall K \\in \\cT\\}.\\]\nNote that our dG space has a tensorial structure on each element.\n\nAs before $\\cE$ denotes the set of all edges, which can be decomposed into the sets of horizontal and vertical edges $\\cE^h, \\cE^v,$ respectively.\nLet us define the following jump operators:\nFor $\\Phi \\in \\sobh{1}(\\cT,\\rR^n)$ we define\n\\begin{align*}\n \\jump{\\Phi}^h &: \\cE^v \\rightarrow \\rR^n\\, \\quad \\jump{\\Phi}^h(\\cdot) := \\lim_{s \\searrow 0} \\Phi (\\cdot - s \\vec e_1) - \\lim_{s \\searrow 0} \\Phi (\\cdot + s\\vec e_1)\\\\\n \\jump{\\Phi}^v & : \\cE^h \\rightarrow \\rR^n, \\quad \\jump{\\Phi}^v(\\cdot) := \\lim_{s \\searrow 0} \\Phi ( \\cdot - s \\vec e_2) - \\lim_{s \\searrow 0} \\Phi (\\cdot + s\\vec e_2).\n\\end{align*}\n\n\nLet $\\vec F_{1,2}$ be numerical flux functions which satisfy conditions \\eqref{eq:repf} and \\eqref{eq:condw} with functions $\\vec w_1, \\vec w_2: U \\times U \\rightarrow U,$\n\\ie\n\\[ \\vec F_i (\\vec u, \\vec v) = \\vec f_i(\\vec w_i(\\vec u, \\vec v)) \\quad \\text{ for all } \\vec u, \\vec v \\in U \\text{ and } i=1,2.\\]\nThen, we consider semi-(spatially)-discrete discontinuous Galerkin schemes given as follows:\nSearch for $\\vec v_h\\in \\cont{1}([0,\\infty) , \\fes_{q,q})$ satisfying\n\\begin{multline}\\label{sch1}\n \\int_{\\T{}} \\partial_t \\vec v_h \\Phi - \\vec f_1(\\vec v_h)\\partial_{x_1} \\Phi - \\vec f_2(\\vec v_h)\\partial_{x_2} \\Phi\\\\\n + \\int_{\\E^v} \\vec F_1(\\vec v_h^-,\\vec v_h^+) \\jump{\\Phi}^h + \\int_{\\E^h} \\vec F_2(\\vec v_h^-,\\vec v_h^+) \\jump{\\Phi}^v =0 \\quad \\forall \\Phi \\in \\fes_{q,q}.\n\\end{multline}\n\n\n\nWhile we have avoided choosing particular bases of our dG spaces, we will do so now, as we believe that it makes the presentation of our reconstruction approach more concise.\nWe choose so-called \\emph{nodal basis functions} 
consisting of Lagrange polynomials, see \\cite{HW08}, and as we use a Cartesian mesh we may use tensor-products of one-dimensional Lagrange polynomials\nto this end. \nWe associate the Lagrange polynomials with Gauss points, as in this way the nodal basis functions form an orthogonal basis of our dG space $\\fes_{q,q},$ due to\nthe exactness properties of Gauss quadrature, see \\cite[e.g.]{Hin12}.\nWe will introduce some notation now:\nLet $\\{ \\xi_0,\\dots,\\xi_{q}\\} $ denote the Gauss points on $[-1,1].$ \nFor any element $K= [x_i,x_{i+1}] \\times [y_j,y_{j+1}] \\in \\cT$ let $\\{ \\xi_0^{K,1},\\dots,\\xi_{q}^{K,1}\\} $ and $\\{ \\xi_0^{K,2},\\dots,\\xi_{q}^{K,2}\\} $ denote their image\nunder the linear bijections $[-1,1] \\rightarrow [x_i,x_{i+1}]$ and $[-1,1] \\rightarrow [y_j,y_{j+1}].$\nFor $i=1,2$ we denote by $\\mathit{l}^{K,i}_j$ the Lagrange polynomial satisfying $\\mathit{l}^{K,i}_j(\\xi_k^{K,i})=\\delta_{jk}$. \n\n\\begin{definition}[Flux reconstruction]\nLet $\\widehat{\\vec f}_1 \\in \\fes_{q+1,q} $ satisfy\n\\begin{equation}\n \\label{recon1a}\n \\int_{\\T{}} (\\partial_{x_1} \\widehat{\\vec f}_1) \\Phi = - \\int_{\\T{}} \\vec f_1(\\vec v_h)\\partial_{x_1} \\Phi \n + \\int_{\\E^v} \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)] \\jump{\\Phi}^h \\quad \\forall \\Phi \\in \\fes_{q,q},\n\\end{equation}\nwhere $\\cP_q$ denotes $\\leb{2}$-orthogonal projection in the space of piece-wise polynomials of degree $\\leq q$ on $\\cE^v,$ and \n\\begin{equation}\n \\label{recon1b} \\widehat{\\vec f}_1(x_i,\\xi^{K,2}_k)^+ := \\lim_{s \\searrow 0} \\widehat{\\vec f}_1(x_i+s,\\xi^{K,2}_k) = \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)](x_i,\\xi^{K,2}_k)\n\\end{equation}\nfor $k=0,\\dots,q$ and all $K \\in \\cT.$\nThe definition of $\\widehat{\\vec f}_2 \\in \\fes_{q,q+1} $ is analogous.\n\\end{definition}\n\n\\begin{remark}[Regularity of flux reconstruction]\n Note that in order to split the residual in two space dimensions in a way analogous to what we did in \\eqref{eq:dres} we require that for each $\\alpha=1,\\dots,d$ the components of \n the flux reconstruction\n $\\widehat{\\vec f}_\\alpha $ are Lipschitz continuous in $x_\\alpha$-direction.\n This is exactly what is needed such that $\\partial_{x_\\alpha} \\widehat{\\vec f}_\\alpha$ makes sense in $\\leb{\\infty}.$\n\\end{remark}\n\n\\begin{lemma}[Properties of flux reconstruction]\n The flux reconstructions $\\widehat{\\vec f}_1$, $\\widehat{\\vec f}_2$ are well defined; and $\\widehat{\\vec f}_1$ is Lipschitz continuous in $x_1$-direction and $\\widehat{\\vec f}_2$ is Lipschitz continuous in $x_2$-direction.\n\\end{lemma}\n\n\\begin{proof}\nWe will give the proof for $\\widehat{\\vec f}_1$. 
\nFor every $K \\in \\T{}$ the restriction $\\widehat{\\vec f}_1|_K$ is determined by \\eqref{recon1a} up to a linear combination (in each component) of\n\\[ 1 \\otimes \\mathit{l}^{K,2}_0, \\dots, 1 \\otimes \\mathit{l}^{K,2}_{q},\\]\nwhere $1$ denotes the polynomial having the value $1$ everywhere.\nPrescribing \\eqref{recon1b} obviously fixes these degrees of freedom.\nTherefore, $\\widehat{\\vec f}_1$ exists, is uniquely determined, and locally computable.\n\nFor showing that $\\widehat{\\vec f}_1$ is Lipschitz in the $x_1$-direction\nit suffices to prove that $\\widehat{\\vec f}_1$ is continuous along the 'vertical' faces.\nLet $K= [x_i,x_{i+1}] \\times [y_j,y_{j+1}] \\in \\cT$ then we define \n\\[ \\chi_K^k := 1_{[x_i,x_{i+1}]} \\otimes (l^{K,2}_k \\cdot 1_{[y_j,y_{j+1}]})\\]\nwhere for any interval $I$ we denote the characteristic function of that interval by $1_I.$\nFor any $k \\in \\{0,\\dots, q\\}$ we have on the one hand\n\\begin{equation}\\label{recon1c} \\int_{\\T{}} \\partial_{x_1} \\widehat{\\vec f}_1 \\chi_K^k\n = \\omega_k h^y_j \\int_{x_i}^{x_{i+1}} \\partial_{x_1} \\widehat{\\vec f}_1(\\cdot, \\xi^{K,2}_k) = \\omega_k h^y_j \\big(\\widehat{\\vec f}_1(x_{i+1}, \\xi^{K,2}_k)^- - \\widehat{\\vec f}_1(x_{i}, \\xi^{K,2}_k)^+\\big)\n \\end{equation}\n where $h^y_j = y_{j+1} - y_j$ and $\\omega_k$ is the Gauss quadrature weight associated to $\\xi_k$.\nOn the other hand we find, using \\eqref{recon1a},\n\\begin{equation}\\label{recon1d} \\int_{\\T{}} \\partial_{x_1} \\widehat{\\vec f}_1 \\chi_K^k \\\\\n = \\omega_k h^y_j \\left( \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)](x_{i+1},\\xi^{K,2}_k ) - \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)](x_{i},\\xi^{K,2}_k )\\right).\n \\end{equation}\nCombining \\eqref{recon1c}, \\eqref{recon1d} and \\eqref{recon1b} \nwe obtain\n \\begin{equation}\\label{recon1f} \\widehat{\\vec f}_1(x_{i+1}, \\xi^{K,2}_k)^- \n = \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)](x_{i+1},\\xi^{K,2}_k ) = \\widehat{\\vec f}_1(x_{i+1}, \\xi^{K,2}_k)^+ \\quad \\text{for } k=0,\\dots,q.\n \\end{equation}\nAs $\\widehat{\\vec f}_1(x_{i+1}, \\cdot)^\\pm|_{[y_j,y_{j+1}]}$ is a polynomial of degree $q$ and $k$ is arbitrary in equation \\eqref{recon1f} we find \n\\begin{equation}\\label{recon1g}\n \\widehat{\\vec f}_1(x_{i+1}, \\cdot)^+|_{[y_j,y_{j+1}]} = \\widehat{\\vec f}_1(x_{i+1}, \\cdot)^-|_{[y_j,y_{j+1}]}.\n\\end{equation}\nAs $i,j$ were arbitrary this implies Lipschitz continuity of $\\widehat{\\vec f}_1$ in $x_1$-direction.\n\n\\end{proof}\n\n\nFrom equations \\eqref{sch1} and \\eqref{recon1a} we obtain\nthe following pointwise equation almost everywhere:\n\\begin{equation}\\label{sch4}\n\\partial_t \\vec v_h + \\partial_{x_1} \\widehat{\\vec f}_1 + \\partial_{x_2} \\widehat{\\vec f}_2=0 .\n\\end{equation}\n\n\n\n\n\\begin{remark}[Main idea of a $2$ dimensional reconstruction]\nRecalling the arguments presented in previous Sections our main priority is to make $\\widehat{\\vec v} $ Lipschitz continuous.\n The particular reconstruction we describe is based on the following principles inspired by Section \\ref{subs:rec:claw}.\nWe wish $\\widehat{\\vec v}|_K - \\vec v_h|_K$ to be orthogonal to polynomials on $K$ of degree $q-1$ which is ensured by imposing them to coincide on the tensor product Gauss points.\nWe wish $\\widehat{\\vec f}_1 $ and $\\vec f_1(\\widehat{\\vec v})$ to be similar on vertical faces which is ensured by fixing the values of $\\widehat{\\vec v} $ on points of the form $(x_i, \\xi^{K,2}_l)$ when $K= [x_i,x_{i+1}]\\times [y_j,y_{j+1}].$\nImposing the 
conditions described above on a reconstruction in $\\fes_{q+1,q+1}$ is impossible because it does not have\nenough degrees of freedom.\nThus, we define a reconstruction $\\widehat{\\vec v} \\in \\fes_{q+2,q+2}.$ For such a function imposing the degrees of freedom described above\nleaves four degrees of freedom per cell undefined. Thus, we may prescribe values in corners.\nTo this end let us fix an averaging operator $\\bar {\\vec w}: U^4 \\rightarrow U.$\n\\end{remark}\n\n\\begin{definition}[Solution reconstruction]\nWe define (at each time) the reconstruction $\\widehat{\\vec v} \\in \\fes_{q+2,q+2}$ of $\\vec v_h \\in \\fes_{q,q}$ by prescribing for every $K= [x_i,x_{i+1}]\\times [y_j,y_{j+1}] \\in \\cT$\n\\begin{equation}\\label{urec}\n \\begin{split}\n \\widehat{\\vec v}|_K (\\xi^{K,1}_k, \\xi^{K,2}_l) &= \\vec v_h (\\xi^{K,1}_k, \\xi^{K,2}_l) \\quad \\text{ for } k,l=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K (x_i,\\xi^{K,2}_l ) &= \\vec w_1 ( \\vec v_h (x_i, \\xi^{K,2}_l)^-, \\vec v_h (x_i, \\xi^{K,2}_l)^+) \\quad \\text{ for } l=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K (x_{i+1}, \\xi^{K,2}_l) &=\\vec w_1 ( \\vec v_h (x_{i+1}, \\xi^{K,2}_l)^-, \\vec v_h (x_{i+1}, \\xi^{K,2}_l)^+) \\quad \\text{ for } l=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K ( \\xi^{K,1}_k, y_j) &=\\vec w_2 ( \\vec v_h ( \\xi^{K,1}_k, y_j)^-, \\vec v_h ( \\xi^{K,1}_k, y_j)^+) \\quad \\text{ for } k=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K ( \\xi^{K,1}_k, y_{j+1}) &=\\vec w_2 ( \\vec v_h ( \\xi^{K,1}_k, y_{j+1})^-, \\vec v_h ( \\xi^{K,1}_k, y_{j+1})^+) \\quad \\text{ for } k=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K ( x_i, y_j) &=\\bar{\\vec w} ( \\lim_{s \\searrow 0} \\vec v_h (x_i +s, y_j +s), \n \\lim_{s \\searrow 0} \\vec v_h (x_i -s, y_j +s),\\\\\n & \\quad \\qquad \\lim_{s \\searrow 0} \\vec v_h (x_i +s, y_j -s),\n \\lim_{s \\searrow 0} \\vec v_h (x_i -s, y_j -s))\n \\end{split}\n\\end{equation}\nand analogous prescriptions for the remaining three corners of $K$.\n\\end{definition}\n\n\n\n\\begin{lemma}[Properties of $\\widehat{\\vec v}$]\n The reconstruction $\\widehat{\\vec v},$ is well-defined, locally computable and Lipschitz continuous.\n Moreover, for $q \\geq 1$ the following local conservation property is satisfied:\n \\begin{equation}\\label{eq:consrec} \\int_{K} \\widehat{\\vec v} - \\vec v_h =0 \\quad \\forall \\ K \\in \\cT.\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nWe will only prove the Lipschitz continuity and the conservation property. As $\\widehat{\\vec v}$ is piecewise polynomial it is sufficient to prove continuity to show Lipschitz continuity.\nLet $K=[x_i,x_{i+1}]\\times [y_j,y_{j+1}]$ and $K'= [x_{i-1},x_{i}]\\times [y_j,y_{j+1}]$ then\n$\\widehat{\\vec v}|_K $ and $\\widehat{\\vec v}|_{K'}$ coincide on $(x_i,y_j)$, $(x_i, \\xi^{K,2}_k)_{k=0,\\dots,q}$ and $(x_i,y_{j+1})$.\nTherefore, $\\widehat{\\vec v}|_K $ and $\\widehat{\\vec v}|_{K'}$ coincide on $\\{x_i\\} \\times [y_j,y_{j+1}]$. 
\nAnalogous arguments hold for the other edges, such that $\\widehat{\\vec v}$ is indeed (Lipschitz) continuous.\n\nAs the nodal points on each element have tensor structure we can use the exactness properties of one-dimensional Gauss quadrature.\nThe conservation property \\eqref{eq:consrec} follows from the fact that one-dimensional Gauss quadrature with $q+1$ Gauss points is exact for polynomials\nof degree up to $2q+1$ which is larger or equal $q+2$ provided $q \\geq 1.$\n\\end{proof}\n\n\\begin{remark}[Reconstructions for hyperbolic\/parabolic problems in $2$ dimensions]\nIn order to obtain reconstructions and splittings of residuals into hyperbolic and parabolic parts for numerical discretisations of \\eqref{interm-c}\n the reconstructions $\\widehat{\\vec v}, \\widehat{\\vec f}_\\alpha$ described in this section can be used in the same way the reconstructions from Section \\ref{subs:rec:claw} were used in Section \\ref{subs:rec:hp}.\n In particular, $\\widehat{\\vec v}$ described above is already regular enough to serve as a reconstruction in case of a numerical scheme\n for \\eqref{interm-c}.\n The flux reconstructions $(\\widehat{\\vec f}_\\alpha)_{\\alpha=1,\\dots,d}$ can be used to obtain a splitting analogous to \\eqref{eq:dres} by making use of \\eqref{sch4}.\n\\end{remark}\n\n\n\\section{Numerical experiments}\n\n\\label{sec:num}\n\n\nIn this section we study the numerical behaviour of the error\nindicators $\\cE_M$ and $\\cE_D$ presented in the previous Sections and compare this with the ``error'', which we quantify as the difference between the numerical approximation of the adaptive model and the numerical approximation of the full model, on some test problems.\n\n\\subsection{Test 1 : The scalar case - the 1d viscous and inviscid Burgers' equation}\n\\label{sec:Burger1d}\nWe conduct an illustrative experiment using Burgers' equation. In this\ncase the ``complex'' model which we want to approximate is given by\n\\begin{equation}\n \\label{eq:viscburger}\n \\pdt u_\\varepsilon + \\pd{x}{\\qp{\\frac{u_\\varepsilon^2}{2}}} = \\varepsilon \\pd{xx} u_\\varepsilon\n\\end{equation}\nfor fixed $\\varepsilon = 0.005$ with homogeneous Dirichlet boundary data and the simple model we will use in the majority of the domain is given by \n\\begin{equation}\n \\label{eq:burger}\n \\pdt u + \\pd{x}{\\qp{\\frac{u^2}{2}}} = 0.\n\\end{equation}\n\nWe discretise the problem (\\ref{eq:burger}) using a piecewise linear dG scheme\n(\\ref{eq:sddg}) together with Richtmyer type fluxes given by\n\\begin{equation}\n \\label{eq:Richtmyer}\n \\vec F (\\vec v_h^-,\\vec v_h^+)\n =\n \\vec f\\qp{\\frac{1}{2}\\qp{\\vec v_h^- + \\vec v_h^+} - \\frac{\\tau}{h}\\qp{\\vec f(\\vec v_h^+) - \\vec f(\\vec v_h^-)}}.\n\\end{equation}\nNote that these\nfluxes satisfy the assumptions (\\ref{eq:repf})--(\\ref{eq:condw}). In this case our dG formulation is given by \\eqref{eq:sddg} under a first order IMEX temporal discretisation. We take $\\tau = 10^{-4}$ and $h = \\pi\/500$ uniformly over the space-time domain. 
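The flux \\eqref{eq:Richtmyer} is straightforward to implement; the following minimal Python sketch (for the scalar Burgers' flux and with our own variable names) is given purely for illustration and also displays the intermediate state $\\vec w(\\vec v_h^-,\\vec v_h^+)$ appearing in \\eqref{eq:repf}.\n\\begin{verbatim}\ndef f(u):                        # Burgers' flux f(u) = u^2 \/ 2\n    return 0.5 * u * u\n\ndef richtmyer_flux(vm, vp, tau, h):\n    # intermediate state w(v^-, v^+) of the Richtmyer flux\n    w = 0.5 * (vm + vp) - (tau \/ h) * (f(vp) - f(vm))\n    return f(w)                  # F(v^-, v^+) = f(w(v^-, v^+))\n\\end{verbatim}\nFor systems the same formula is used with the vector-valued flux $\\vec f$.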
In the region where the ``complex'' model (\\ref{eq:viscburger}) is implemented for the discretisation of the diffusion term we use an interior penalty discretisation with piecewise constant $\\widehat \\varepsilon$, that is\n\\begin{equation}\n \\widehat \\varepsilon =\n \\begin{cases}\n 0.005 \\text{ over cells where the a posteriori model error bound is large}\n \\\\\n 0 \\text{ otherwise.}\n \\end{cases}\n\\end{equation}\nThis means the discretisation becomes\n\\begin{equation}\\label{eq:sddg2n}\n\\begin{split}\n &\\vec v_h(0)= \\cP_q [\\vec u_0] \\\\\n &\\int_{\\cT} \\partial_t \\vec v_h \\cdot \\vec \\phi - \\vec f(\\vec v_h) \\cdot \\partial_x \\vec \\phi + \\int_{\\cE} \\vec F (\\vec v_h^-,\\vec v_h^+) \\jump{\\vec \\phi}\n +\n \\bih{\\widehat \\varepsilon \\vec v_h}{\\vec \\phi} = 0\n \\quad \\text{for all } \\vec \\phi \\in \\fes_q,\n \\end{split}\n\\end{equation}\nwhere\n\\begin{equation}\n \\bih{\\vec w_h}{\\vec\\phi}\n =\n \\int_{\\T{}} \\pd x {\\vec w_h} \\cdot \\pd x {\\vec \\phi}\n -\n \\int_\\E\n \\jump{\\vec w_h}\\cdot \\avg{\\pd x {\\vec \\phi}}\n +\n \\jump{\\vec\\phi}\\cdot \\avg{\\pd x {\\vec w_h}}\n -\n \\frac{\\sigma}{h} \\jump{\\vec w_h}\\cdot\\jump{\\vec \\phi}\n\\end{equation}\nand $\\sigma = 10$ is the penalty parameter.\n\nThe model adaptive algorithm we employ is described by the following pseudocode:\n\\subsection{{$\\Algoname{Model Adaptation}$}}\n\\label{alg:model-adapt}\n\\begin{algorithmic}\n \\Require\n $(\\tau,t_0,T,\\vec u^0,\\tol,\\tol_c,\\varepsilon)$\n \\Ensure $(\\vec u_h^n)_{\\rangefromto n1N}$, model adaptive solution\n \\State $\\widehat \\varepsilon(x,0):=0$\n \\State $t = \\tau, n=1$\n \\While{$t\\leq T$}\n \\State\n $(\\vec u_h^n) := \\Algoname{Solve one timestep of dg scheme} (\\vec u_h^{n-1},\\widehat \\varepsilon)$\n \n \\State compute $\\cE_D, \\cE_M$\n \n \\If{$\\cE_D + \\cE_M > \\tol$}\n \n \\State Mark a subset of elements, $\\{ J \\}$ where $\\cE_D + \\cE_M$ is large\n \n \\For{$K\\in \\{ J \\}$}\n \n \\State Set $\\widehat \\varepsilon(x,t)|_K := \\varepsilon$\n \n \\EndFor\n \n \\ElsIf{$\\cE_D + \\cE_M < \\tol$}\n \n \\State Do nothing\n \n \\EndIf\n\n \\For{$K\\in \\T{}$}\n \n \\If{$\\cE_M|_K < \\abs{K} \\ tol_c \\ tol\/\\varepsilon$}\n \n \\State $\\widehat \\varepsilon(x,t)|_K = 0$\n \n \\EndIf\n\n \\EndFor\n \n \\State $t := t + \\tau, n := n+1$\n \n \\EndWhile\n \n \\State return $(\\vec u_h^n)_{\\rangefromto n1N}$,\n\\end{algorithmic}\n\nFor this test the parameters are given by $\\tol = 10^{-2} \\AND \\tol_c = 10^{-3}$.\n\n\\begin{Rem}[Coupling to other adaptive procedures]\n \n The a posteriori bound given in Theorem \\ref{the:2} has a structure which allows for both model and mesh adaptivity. This means that Algorithm \\ref{alg:model-adapt} could be easily coupled with other mechanisms employing $h$-$p$ spatial adaptivity in addition to local timestep control. As can be seen from the pseudocode we use the complex model even in the case the discretisation error $\\cE_D$ is large and the modelling error $\\cE_M$ is small. This is due to the fact mesh adaptation is beyond the scope of this paper and we focus on model adaptation.\n\\end{Rem}\nInitial conditions are chosen as\n\\begin{equation}\n u(x,0) := \\sin{x}\n\\end{equation}\nover the interval $[-\\pi,\\pi]$. The results are summarised in Figures \\ref{fig:burger} and \\ref{fig:burger-err}.\n\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:burger}\n \n A numerical experiment testing model adaptivity on Burgers' equation. 
The simulation is described in \\S\\ref{sec:Burger1d}. \n Here we display the solution at various times (top) together with a representation of the model adaptation parameter $\\widehat \\varepsilon$ (bottom).\n Blue is the region $\\widehat \\varepsilon=0$, where the simplified (inviscid Burgers') problem is being computed and red is where $\\widehat \\varepsilon \\neq 0$, where the full (viscous Burgers') problem is being computed.\n We see that initially only the simplified model is computed but as time progresses the full model is solved in a region around where the steep layer forms. As this forms the domain where the complex model is solved collapses and eventually is very localised around the layer.\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-0.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.5375$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-0-5375.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.1625$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-1-1625.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.3$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-1-3.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.55$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-1-55.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=2.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-2-5.png}\n }\n \n \\hfill\n \\end{center}\n \\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:burger-err}\n \n A numerical experiment testing model adaptivity on Burgers equation. The simulation is described in \\S\\ref{sec:Burger1d}. 
Here we display the error $\\norm{u_h-u_{\\varepsilon,h}}$, that is, the difference between the approximation of the full expensive model and that of the adaptive approximation at the same times as in Figure \\ref{fig:burger} together with a representation of $\\widehat \\varepsilon$ (bottom).\n An interesting phenomenon is the propagation of dispersive waves emanating from the interface between the region where $\\widehat \\varepsilon=0$ and that of $\\widehat \\varepsilon\\neq 0$.\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-0.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.5375$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-0-5375.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.1625$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-1-1625.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.3$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-1-3.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.55$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-1-55.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=2.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-2-5.png}\n }\n \n \\hfill\n \\end{center}\n \\end{figure}\n\n\\subsection{Test 2 : The scalar case - the 2d viscous and inviscid Burgers' equation}\n\\label{sec:Burger2d}\nIn this test we examine how the adaptive procedure extends into the multi-dimensional setting again using Burgers' equation as an illustrative example. In this\ncase the ``complex'' model which we want to approximate is given by\n\\begin{equation}\n \\label{eq:viscburger2d}\n \\pdt u_\\varepsilon + \\qp{u \\one} \\cdot \\nabla u_\\varepsilon = \\varepsilon \\Delta u_\\varepsilon,\n\\end{equation}\nwhere $\\one = \\Transpose{\\qp{1,1}}$. The simple model we will use in the majority of the domain is given by \n\\begin{equation}\n \\label{eq:burger2d}\n \\pdt u + \\qp{u \\one} \\cdot \\nabla u = 0.\n\\end{equation}\nAs in Test 1 we make use of a 1st order IMEX, piecewise linear dG scheme together with Richtmyer fluxes and an IP method for the viscosity. We pick an initial condition\n\\begin{equation}\n u(\\vec x,0) = \\exp\\qp{-10\\norm{\\vec x}^2}\n\\end{equation}\nand use the parameters are $\\varepsilon = 0.01$, $h = \\sqrt{2}\/50$, $\\tau = \\sqrt{2}\/400$, $\\tol = 10^{-2}$ and $\\tol_c = 10^{-3}$ in Algorithm \\ref{alg:model-adapt}. The results are summarised in Figures \\ref{fig:burger2d} and \\ref{fig:burger2d-err}.\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:burger2d}\n \n A numerical experiment testing model adaptivity on Burgers' equation. The simulation is described in \\S\\ref{sec:Burger2d}. Here we display the solution at various times (top) together with a representation of $\\widehat \\varepsilon$ (bottom). 
Blue is the region $\\widehat \\varepsilon=0$, where the simplified (inviscid Burgers') problem is being computed and red is where $\\widehat \\varepsilon \\neq 0$, where the full (viscous Burgers') problem is being computed.\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0.0025$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-0-0025.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.25$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-0-25.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-0-5.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-1.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.25$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-1-25.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-1-5.png}\n }\n \n \\hfill\n \\end{center}\n \\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:burger2d-err}\n \n A numerical experiment testing model adaptivity on Burgers' equation. The simulation is described in \\S\\ref{sec:Burger2d}. Here we display the error $\\norm{u_h-u_{\\varepsilon,h}}$, that is, the difference between the approximation of the full expensive model and that of the adaptive approximation at the same times as in Figure \\ref{fig:burger2d} together with a representation of $\\widehat \\varepsilon$ (bottom).\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0.0025$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-0-0025.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.25$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-0-25.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-0-5.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-1.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.25$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-1-25.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-1-5.png}\n }\n \n \\end{center}\n \\end{figure}\n\n\n\\subsection{Test 3 : The Navier-Stokes-Fourier system}\n\\label{sec:NSF}\nIn this test we examine application to the scenario of flow around a NACA aerofoil. We simulate the Navier-Stokes-Fourier system which is given by seeking $\\qp{\\rho_\\mu, \\rho_\\mu\\mathbf v_\\mu, e_\\mu}$\n\\begin{equation}\n \\label{eq:NSF3}\n \\begin{split}\n \\partial_t \\rho_\\mu + \\div(\\rho_\\mu \\mathbf v_\\mu) &=0\\\\\n \\partial_t (\\rho_\\mu \\mathbf v_\\mu) + \\div(\\rho_\\mu \\mathbf v_\\mu \\otimes \\mathbf v_\\mu) + \\nabla p &= \\div (\\mu \\nabla \\mathbf v_\\mu) \\\\\n \\partial_t e_\\mu + \\div((e_\\mu+ p)\\mathbf v_\\mu ) &= \\div (\\mu(\\nabla \\mathbf v_\\mu) \\cdot \\mathbf v_\\mu + \\mu \\nabla T_\\mu),\n \\end{split}\n\\end{equation}\nwith a Reynolds number of $1000$. 
In this case our ``complex'' model is given by (\\ref{eq:NSF3}) and the approximation is given by\n\\begin{equation}\n \\label{eq:EF}\n \\begin{split}\n \\partial_t \\rho + \\div(\\rho \\mathbf v) &=0\\\\\n \\partial_t (\\rho \\mathbf v) + \\div(\\rho \\mathbf v \\otimes \\mathbf v) + \\nabla p &= 0 \\\\\n \\partial_t e + \\div((e+ p)\\mathbf v ) &= 0.\n \\end{split}\n\\end{equation}\nWe again make use of a 1st order IMEX, piecewise linear dG scheme with the Richtmyer fluxes (\\ref{eq:Richtmyer}) and an IP discretisation of both dissipative terms. The numerical domain we use is a triangulation around a NACA aerofoil, shown in Figure \\ref{fig:naca-mesh}, where the discretisation parameters are detailed. We use Algorithm \\ref{alg:model-adapt} where the parameter $\\varepsilon$ and adaptive parameter $\\widehat \\varepsilon$ have been replaced with $\\mu = \\frac{U_r D}{Re}$ with $U_r = 1$ denoting the reference velocity, $D = 1$ denoting the length of the aerofoil and $Re = 1000$ as the Reynolds number. We fix $\\tau = 0.0001$ and take $\\tol = 10^{-2}$ and $\\tol_c = 10^{-3}$. \nSome results are shown in Figure \\ref{fig:naca-vel}, Figure \\ref{fig:naca-temp} and Figure \\ref{fig:naca-pres}.\n\n\\begin{Rem}[Coupling of viscosity and heat conduction]\n In our experiment we choose $\\mu$ as both the coefficient of viscosity and the heat conduction coefficient. In practical situations these parameters may scale differently and, by splitting the adaptation estimator, the model adaptivity can be conducted independently for the viscous and the heat conduction terms. \n\\end{Rem}\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:naca-mesh}\n \n The underlying mesh of the NACA simulation. Here the minimum mesh size is around the aerofoil where $h \\approx 0.0008$ and the maximum is away from the aerofoil where $h \\approx 0.3$.\n \n }\n \\subfigure[{\n \n Global view of the mesh.\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-mesh.png}\n }\n \n \\hfill\n \\subfigure[{\n \n Zoom of the NACA aerofoil.\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-mesh-zoom.png}\n }\n\\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:naca-vel}\n \n A numerical experiment testing model adaptivity on the Navier-Stokes-Fourier system. The simulation is described in \\S\\ref{sec:NSF}. Here we display the velocity of the solution at various times (top) together with a representation of both $\\mu$ and $\\kappa$ (bottom). 
Blue is the region $\\mu=\\kappa=0$, where the simplified (full Euler system) problem is being computed and red is where $\\mu=\\kappa \\neq 0$, where the full (Navier-Stokes-Fourier system) problem is being computed.\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0.0002$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-0-02.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.01$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-1.png}\n }\n \\subfigure[{\n \n $t=0.02$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-2.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.05$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-5.png}\n }\n \\subfigure[{\n \n $t=0.24$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-24.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=5.24$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-524.png}\n }\n \\end{center}\n\\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]{\\label{fig:naca-temp}\n \n A numerical experiment showing the temperature around the aerofoil at short time scales. Notice the high temperature initially diffused around the leading edge of aerofoil. This is where the model adaptive parameter is not zero, compare with Figure 6.\n \n }\n \\subfigure[{\n \n $t=0.02$\n \n }]\n {\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-temp-002.png}\n }\n \\subfigure[{\n \n $t=0.06$\n \n }]\n {\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-temp-006.png}\n }\n \\subfigure[{\n \n $t=0.1$\n \n }]\n {\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-temp-010.png}\n }\n \\subfigure[{\n \n $t=0.15$\n \n }]\n {\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-temp-015.png}\n }\n\\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]{\\label{fig:naca-pres\n \n A plot of the pressure around the aerofoil and associated contours at $t=5.24$.\n \n }\n \\includegraphics[scale=\\figscale,width=0.7\\figwidth]{figures\/naca-pressure-524.png}\n\\end{figure}\n\n\n\n\n \\bibliographystyle{alpha}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\\IEEEPARstart{P}{erception} of obstacles' 3D information via different types of sensors is a fundamental task in the field of computer vision and robotics. This topic has been extensively studied with the development of Autonomous Driving(AD) and intelligent transportation system. Though the LiDAR sensors have the superiority of providing distance information of the obstacles, the texture and color information has been totally lost due to the sparse scanning. Therefore, False Positive (FP) detection and wrong categories classification often happen for LiDAR-based object detection frameworks. 
In particular, with the development of deep learning techniques on point-cloud-based representations, {LiDAR-based approaches} can be generally divided into point-based \\cite{shi2019pointrcnn}, voxel-based \\cite{zhou2018voxelnet, lang2019pointpillars}, and hybrid-point-voxel-based methods \\cite{shi2020pv}.\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Figs\/Head_chart.pdf}\n\t\\centering\n\t\\caption{The proposed \\textit{Multi-Sem Fusion (MSF)} is a general multi-modal fusion framework which can be employed for different 3D object detectors. a) illustrates the improvements on three different baselines. b) gives the performance of \\textit{CenterPoint} \\cite{yin2021center} with the proposed modules on the public nuScenes benchmark. c) gives the improvements on the different categories. In addition, ``w\/o'' is short for ``without''.} \n\t\\label{Fig:head_figure}\n\\end{figure}\n\nOn the contrary, the camera sensors can provide detailed texture and color information {while the distance information is totally lost during the perspective projection.}\nThe {fusion} of the two types of data is a promising way of boosting the perception performance of AD. {Generally, multi-modal-based object detection approaches} can be divided into early fusion-based \\cite{dou2019seg, vora2020pointpainting}, deep fusion-based \\cite{zhang2020maff, xu2018pointfusion, tian2020adaptive} and late fusion-based approaches \\cite{pang2020clocs}. Early fusion-based approaches aim at creating a new type of data by combining the raw data directly before sending them into the detection framework. Usually, these kinds of methods require pixel-level correspondence between the different types of sensor data. Different from the early fusion-based methods, late fusion-based approaches {fuse the detection results at the bounding box level, while} deep fusion-based methods usually extract features with different types of deep neural networks first and then fuse them at the feature level. {\\textit{PointPainting} \\cite{vora2020pointpainting} is an early fusion method which} achieves superior detection results on different benchmarks. {Specifically, the network takes point clouds together with the 2D image semantic predictions as inputs and outputs the detection results with any LiDAR-based 3D object detector.} {More importantly, it} can be incorporated into any 3D object detector, {regardless of whether it is} point-based or voxel-based.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Figs\/head.pdf}\n\t\\centering\n\t\\caption{(a) is the point cloud painted with the 2D segmentation results. The frustum within the blue line shows a misclassified area due to the blurring effect at the object's boundary; (b) is the point cloud painted by the 3D segmentation results (misclassified points are shown in red); (c) and (d) give the object detection results based on the 2D painted point cloud (with an obvious False Positive (FP) detection) and on the proposed 2D\/3D fusion framework, respectively. }\n\t\\label{Fig:2DSeg_in_3d}\n\\end{figure}\n\nHowever, the {blurring effect at objects' boundaries} inevitably occurs in image-based semantic segmentation methods. This effect becomes much worse when re-projecting the 2D semantic predictions onto the 3D point clouds. An example of this effect is shown in Fig. \\ref{Fig:2DSeg_in_3d}. Taking the big truck at the bottom left of sub-fig. 
\\ref{Fig:2DSeg_in_3d}-(a) as an example, we can find that a large frustum-shaped area of the background (e.g., points in orange color) has been misclassified as foreground due to the inaccurate projection results. In addition, the re-projection of 3D points to 2D image pixels is not exactly one-to-one because of the digital quantization and many-to-one projection issues. An interesting phenomenon is that the segmentation from the 3D point clouds (e.g., sub-fig. \\ref{Fig:2DSeg_in_3d}-(b)) performs much better at the boundaries of obstacles. However, compared to the 2D image, the category classification from the 3D point cloud often gives worse results (e.g., points in red color) because the detailed texture information has been lost in the point clouds.\n\n{PointPainting, with 2D image} semantic information, has been proven to be effective for the 3D object detection task even with some semantic mistakes. An intuitive question is whether the final detection performance can be further improved if both the 2D and 3D semantic results are fused together. Based on this idea, we propose a general multi-modal fusion framework, \\textit{Multi-Sem Fusion}, which fuses different types of sensors at the semantic level to improve the final 3D object detection performance. First of all, {we obtain the 2D\/3D semantic information through 2D\/3D parsing approaches which take the images and the raw point clouds as inputs. Each point in the point cloud then carries two types of semantic information after projecting the point cloud onto the 2D semantic images based on the intrinsic and extrinsic calibration parameters.} Since the two semantic results may conflict for a certain point, rather than concatenating the two types of information directly, an AAF strategy is proposed to fuse the different kinds of semantic information at point or voxel level adaptively, based on the learned context features in a self-attention style. Specifically, an attention score is learned for each point or voxel to balance the importance of the two different semantic results.\n\nFurthermore, {in order to detect obstacles of different sizes in an efficient way}, a DFF module is proposed to fuse the features {from multi-scale receptive fields, together with a channel attention network which gathers related channel information in the feature map}. The fused features are then passed to the following classification and regression heads to generate the final detections. From the results on the nuScenes dataset given in \\figref{Fig:head_figure} (a), we can see that the proposed modules robustly boost the detection performance on different baselines. {The results in \\figref{Fig:head_figure} (b) illustrate the contribution of each proposed module,} and from this figure we can see that the detection results are consistently improved as more modules are added. \n\\figref{Fig:head_figure} (c) shows the improvements on the different categories: all classes are improved, and ``Bicycle'' obtains the largest improvement (22.3 points) on the nuScenes dataset.\n\nThis manuscript is an extension of the previously published conference paper \\cite{xu2021fusionpainting} with a new review of the relevant state-of-the-art methods, new theoretical developments, and new experimental results. Compared to the previous conference paper, the 3D object detection accuracy has been improved by a large margin on the nuScenes 3D object detection benchmark as of the submission of this manuscript. 
Furthermore, we also evaluate the proposed framework on the KITTI 3D object detection dataset, and the experimental results on both public datasets prove the superiority of our framework. In general, the contributions of this work can be summarized as follows: \n\\begin{enumerate}\n \\item A general multi-modal fusion framework, \\textit{Multi-Sem Fusion}, is proposed to fuse different types of sensors at the semantic level to boost the 3D object detection performance.\n \\item Rather than combining the different semantic results directly, an adaptive attention-based fusion module is proposed to fuse the different kinds of semantic information at point or voxel level by learning fusion attention scores.\n \\item Furthermore, a deep features fusion module is also proposed to fuse deep features at different levels in order to account for objects of different sizes.\n \\item Finally, the proposed framework has been thoroughly evaluated on two public large-scale 3D object detection benchmarks and the experimental results show the superiority of the proposed fusion framework. The proposed method achieves SOTA results on the nuScenes dataset and outperforms other approaches by a large margin. Taking the proposed approach as the baseline, we won the fourth nuScenes detection challenge held at ICRA 2021 on the 3D Object Detection open track \\footnote{\\url{https:\/\/eval.ai\/web\/challenges\/challenge-page\/356\/leaderboard\/1012}}. \n\\end{enumerate}\n\n\n\\section{Related Work}\\label{sec:related_work}\n\nGenerally, 3D object detection methods can be categorized as LiDAR-based, image-based \\cite{ku2019monocular} and multi-modal fusion-based, which take LiDAR point clouds, images and multi-sensor data as inputs, respectively. \n\n\\subsection{{LiDAR-based 3D Object Detection}}\nThe existing LiDAR-{based} 3D object detection methods {can be generally categorized into three main groups:} projection-based~\\cite{ku2018joint,luo2018fast,yang2018pixor,liang2021rangeioudet,hu2020you}, voxel-based~\\cite{zhou2018voxelnet,yan2018second,kuang2020voxel, yin2020lidar,zhou2019iou} and point-based~\\cite{zhou2020joint,lang2019pointpillars,qi2018frustum}. \nRangeRCNN \\cite{liang2021rangeioudet} proposes a 3D range {image-based} 3D object detection framework {where} the anchors are generated on the BEV (bird's-eye-view) map. VoxelNet \\cite{zhou2018voxelnet} is the first {voxel-based} framework, which uses a VFE (voxel feature encoding) layer {to extract the point-level features for each voxel.} {To accelerate the inference speed,} SECOND \\cite{yan2018second} presents a new framework which {employs} sparse convolutions \\cite{graham20183d} to replace the heavy 3D convolution operation. {Inspired by the 2D anchor-free object detector CenterNet \\cite{duan2019centernet}, CenterPoint \\cite{yin2021center} presents an anchor-free 3D object detector on LiDAR point clouds and recently achieved SOTA (state-of-the-art) performance on the nuScenes benchmark. To further improve the efficiency,} PointPillars \\cite{lang2019pointpillars} divides the points into vertical columns (i.e., pillars) and extracts features from each pillar with the PointNet \\cite{qi2017pointnet} module. The resulting {feature map} can then be regarded as a pseudo image, and all 2D object detection pipelines can be applied to the 3D object detection task. 
Different from regular representations such as voxels or pseudo images, {PointRCNN \\cite{shi2019pointrcnn} directly extracts features from the raw points and generates the 3D proposals from the foreground points. Based on PointRCNN, PV-RCNN \\cite{shi2020pv} takes advantage of both the voxel and the point representations and learns more discriminative features by employing both 3D voxel CNN and PointNet-based networks together.}\n\n\n\\begin{figure*}[ht!]\n\t\\centering\n\t\\includegraphics[width=0.85\\textwidth]{Figs\/frame.pdf}\n\t\\centering\n\t\\caption{Overview of the proposed \\textit{Multi-Sem Fusion} framework. We first process the input 2D images and point clouds with 2D and 3D parsing approaches to obtain the semantic {information}. Then, the proposed AAF module is adopted to fuse the two types of data at the semantic level. Furthermore, a DFF module is also proposed to fuse the deep features at different spatial levels to boost the detection of objects of different sizes. Finally, the fused features are sent to the detection heads for producing the final detection results.\n}\n\t\\label{Fig:PointPainting}\n\\end{figure*}\n\n\n\\subsection{{Camera-based 3D Object Detection}}\n3D object detection can be achieved from stereo rigs \\cite{zhou2014modeling} by detecting the 2D objects in the image first and recovering the distance {with traditional geometric} stereo matching techniques \\cite{Efficient2010}. {Currently, with sufficient training data}, the depth can also be recovered from a single image \\cite{song2021mlda}. Generally, the image-based 3D object detection methods {with deep learning techniques} can be roughly divided into CAD-model-guided, depth-estimation-based and direct-regression-based approaches. MANTA (Many-Tasks) \\cite{chabot2017deep} is proposed for many-task vehicle analysis from a given image, which {can simultaneously} output vehicle detection, part localization, visibility characterization and 3D dimension estimation. {In \\cite{song2019apollocar3d}, a large-scale dataset ApolloCar3D is provided for instance-level 3D car understanding. Based on this dataset, a key-points-based framework has been designed by extracting the key points first, and then the vehicle pose is obtained using a PnP solver.} Furthermore, a {novel} work \\cite{ku2019monocular} leverages {object} proposals and shape reconstruction {with an end-to-end deep neural network.} With the rapid development of depth estimation technology from {a} single image, {leveraging} the depth information for 3D object detection has become popular. \\cite{wang2019pseudo} and \\cite{qian2020end} {compute the disparity map from a stereo image pair and} convert the depth maps to pseudo-LiDAR point clouds first, so that any point-cloud-based detector can be employed for the 3D object detection task. A similar idea has been employed in \\cite{weng2019monocular}, \\cite{ma2019accurate} and \\cite{cai2020monocular}. Instead of using CAD models or depth estimation, ``SMOKE'' \\cite{liu2020smoke} proposes to regress the 3D bounding box directly from the image and achieves promising performance on the KITTI benchmark. Based on this, \\cite{zhou2020iafa} proposes a plug-and-play module, IAFA, to aggregate useful foreground information in an instance-aware manner for further improving the detection accuracy.\n\n\n\\subsection{{Multi-sensors Fusion-based 3D Object Detection}}\n{The LiDAR can provide accurate distance information, while the texture information is lost. 
The camera sensor is just the opposite. In order to utilize the advantages of both sensors, many multi-sensor fusion-based approaches have been proposed. As mentioned in \\cite{nie2020multimodality}, the fusion approaches can be divided into model-based \\cite{muresan2020stabilization} and data-based approaches based on the way of fusing the sensor data. Generally, some model-based approaches have been used for tracking \\cite{muresan2019multi} while data-based approaches usually focus on modular environment perception tasks such as object detection \\cite{du2018general,pang2020clocs,chen2017multi}. For all fusion-based object detection approaches, also} can be generally divided into three types: early fusion \\cite{du2018general}, late fusion \\cite{pang2020clocs} and deep fusion \\cite{chen2017multi}. F-PointNet \\cite{qi2018frustum}, PointFusion \\cite{xu2018pointfusion} and \\cite{du2018general} are proposed to generate the object proposal in the 2D image first and then fuse the image features and point cloud for 3D BBox generation. \nPointPainting \\cite{vora2020pointpainting} is proposed to fuse the semantic information from the 2D image into the point cloud to boost the 3D point cloud object detectors. For achieving the parsing results, any SOTA approach can be used. AVOD \\cite{ku2018joint}, which is a typical late fusion framework, generates features that can be shared by RPN (region proposal network) and second stage refinement network.\nMulti-task fusion\\cite{he2017mask} \\cite{liang2019multi} is also an effective technology, e.g., \\cite{zhou2020joint} jointing the semantic segmentation with 3D object detection tasks to further improve the performance. Furthermore, Radar and HDMaps are also employed for improving the detection accuracy. CenterFusion \\cite{nabati2021centerfusion} focuses on the fusion of radar and camera sensors and associates the radar detections to objects on the image using a frustum-based association method and creates radar-based feature maps to complement the image features in a middle-fusion approach. In \\cite{yang2018hdnet} and \\cite{fang2021mapfusion}, the HDMaps are also taken as strong prior information\u00a0for moving object detection in AD scenarios.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Multi-Sem Fusion Framework} \\label{sec:method}\nAn overview of the proposed \\textit{Multi-Sem Fusion} framework is illustrated in \\figref{Fig:PointPainting}. To fully explore the information from different sensors, we advocate fusing them at two different levels. First, the two types of information are early fused with the \\textit{PointPainting} technique by painting the point cloud with both the 2D and 3D semantic parsing results. To handle the {inaccurate segmentation} results, an AAF module is proposed to learn an attention score for different sensors {for the following fusion purpose}. By taking the points {together with fused semantic information} as inputs, deep features can be extracted from the backbone network. By considering that different size object requires different levels features, a novel DFF module is proposed to enhance the features with different levels for exploring the global context information and the local spatial details in a deep way. As shown in \\figref{Fig:PointPainting}, the proposed framework consists of three main models as a multi-modal semantic segmentation module, an AAF module, and a DFF module. 
First of all, any off-the-shelf 2D and 3D scene parsing approaches can be employed to obtain the semantic segmentation from the RGB image and LiDAR point clouds respectively. Then the 2D and 3D semantic information are fused by the AAF module. Bypassing the points {together with fused semantic labels} into the backbone network, the DFF module is used to {further improve the results by aggregating the features within different receptive fields and a channel attention module.}\n \n\n\n\n\\begin{figure*}[ht]\n \\centering\n\t\\includegraphics[width=1\\textwidth]{Figs\/Attention.pdf}\n\t\\caption{The architecture of the proposed AAF module for 2D\/3D semantic fusion. The input points and 2D\/3D semantic results are first utilized to learn attention scores. Then, the raw points or voxel are painted with 2D\/3D semantic labels adaptive by using the learned attention scores.}\n\t\\label{Fig:attention_module}\n\\end{figure*}\n\\subsection{2D\/3D Semantic Parsing}\n\n\\textit{{2D Image Parsing.}} We can directly get rich texture and color information from 2D images, which can provide complementary information for the 3D point clouds. For acquiring 2D semantic labels, a modern semantic segmentation network is employed for generating pixel-wise segmentation results. More importantly, the proposed framework is agnostic to specific segmentation model, and any state-of-the-art segmentation approaches can be employed here (e.g., \\cite{choudhury2018segmentation,zhao2017pyramid,chen2019hybrid,shelhamer2017fully,Howard_2019_ICCV}, etc). {We employ Deeplabv3+ \\cite{choudhury2018segmentation} for generating the semantic results here.} The network takes 2D images as input and outputs pixel-wise semantic classes scores for both the foreground instances and the background. {Assuming that the obtained semantic map is $S \\in \\mathbb{R}^{w\\times h \\times m}$, where $(w, h)$ is the image size and $m$ is the number of category. By employing the intrinsic and extrinsic matrices, the 2D semantic information can be easily re-projected into the 3D point cloud. Specifically, by assuming} the intrinsic matrix is $\\mathbf{K} \\in \\mathbb{R}^{3\\times 3}$, the extrinsic matrix is $\\mathbf{M}\\in \\mathbb{R}^{3\\times 4}$ and the original 3D points clouds is $\\mathbf{P}\\in \\mathbb{R}^{N\\times 3}$, the projection of 3D point clouds from LiDAR to the camera can be obtained as Eq.\\eqref{eq.projection} shown:\n\\begin{equation} \\label{eq.projection}\n\t {\\mathbf{P}}^{'} = \\text{Proj}(\\mathbf{K}, \\mathbf{M}, \\mathbf{P}),\n\\end{equation} \nwhere $\\mathbf{P}^{'}$ is the point in {camera} coordinate and ``Proj'' represents the projection process. {By this projection, the parsing results} from the 2D image can be assigned to its corresponding 3D points which is denoted by $\\mathbf{P}_{2D}\\in \\mathbb{R}^{N\\times m}$.\n\n\n\n\n\n\\textit{{3D Point Cloud Parsing.}} As we have mentioned above, {parsing results from the point clouds can well overcome the boundary blur influence while keeping the distance information. Similar to the 2D image segmentation, any SOTA 3D parsing approach can be employed here \\cite{zhang2020polarnet,cheng20212,xu2021rpvnet}. 
We employ the Cylinder3D \\cite{cong2021input} for generating the semantic results because of its impressive performance on the AD scenario.}\n{More importantly, the ground truth point-wise semantic annotations can be generated from the 3D object bounding boxes roughly as \\cite{shi2019pointrcnn} while any extra semantic annotations are not necessary.} Specifically, for foreground instances, the points inside a 3D bounding box are directly assigned with the corresponding class label while the points outside all the 3D bounding boxes are taken as the background. From this point of view, the proposed framework can work on any 3D detection benchmarks directly without any extra point-wise semantic annotations. After obtaining the trained network, we will obtain the parsing results which is denoted by $\\mathbf{P}_{3D}\\in \\mathbb{R}^{N\\times m}$. \n\n\n\\subsection{Adaptive Attention-based 2D\/3D Semantic Fusion} \\label{subsec:shallow_fusion}\n{As mentioned in previous work PointPainting, 2D semantic segmentation network have achieved impressive performance,} however, the {blurring effect at the shape boundary} is also serious due to the limited resolution of the feature map (e.g., $\\frac{1}{4}$ of the original image size). Therefore, {the point clouds with 2D semantic segmentation} usually has misclassified regions around the objects' boundary. For example, the frustum region is illustrated in the sub-fig.~\\ref{Fig:2DSeg_in_3d} (a) behind the big truck {has been totally misclassified}. On the contrary, the {parsing results} from the 3D point clouds usually can produce a clear and accurate object boundary e.g., sub-fig.~\\ref{Fig:2DSeg_in_3d}(b). However, the disadvantages of the 3D segmentor are also obvious. One drawback is that without the color and texture information, the 3D segmentor is difficult to distinguish these categories with similar shapes from the point cloud-only. In order to boost advantages while suppressing disadvantages, an AAF module has been proposed to adaptively combine the 2D\/3D semantic segmentation results. Then the fused semantic information is sent to the following 3D object detectors backbone to further extract the enhanced feature to improve the final detection results.\n\n \\begin{figure*}[h]\n\t\\centering\n \t\\includegraphics[width=0.95\\textwidth]{Figs\/DFF.pdf}\n\t\\centering\n\t\\caption{An illustration of the proposed Deep Feature Fusion (DFF) module which includes one \\textit{fusion} module and one \\textit{channel attention} module respectively. The \\textit{fusion} module includes two branches for producing features with different field-of-view.}\n\t\\label{fig:deep_feature_fusion}\n\\end{figure*}\n\n\\textbf{AAF Module.} The detailed architecture of the proposed AAF module is illustrated in Fig.~\\ref{Fig:attention_module}. By defining the input point clouds as $\\{\\mathbf{P}_i, i=1,2,3...N\\}$, each point $\\mathbf{P}_i$ contains $(x, y, z)$ coordinates and other optional information such as intensity. For simplicity, only the coordinates $(x, y, z)$ are considered in the following context. Our goal is to find {an} efficient strategy to fuse semantic results from {the 2D images and 3D point clouds}. Here, we propose to utilize the learned attention score to {adaptively} fuse the two types of results. Specifically, we first combine point clouds coordinate attributes $(x, y, z)$ and 2D\/3D labels with a concatenation operation to obtain a fused point clouds with the size of $N\\times (2m+3)$. 
For saving memory consumption, the following fusion is executed at the voxel level rather than the point level. Specifically, each point clouds has been divided into evenly distributed voxels and each voxel contains a fixed number of points. {Here, we represent the voxels as $\\{ V_i, i=1,2,3...E\\}$, where $E$ is the total number of the voxels and each voxel $V_{i} = (\\textbf{P}_i, \\textbf{V}^{i}_{2D}, \\textbf{V}^{i}_{3D}) \\in \\mathbb{R} ^{M \\times(2m+3)}$ containing a fixed number of $M$ points with $2m+3$ features. Here, $\\textbf{P}_i$, $\\textbf{V}^{i}_{2D}$, $\\textbf{V}^{i}_{3D}]$ are point coordinates, predicted 2D and 3D semantic vectors in voxel level respectively.} We employ the sampling technique to keep the same number of points in each voxel. \nThen, local and global feature aggregation strategies are applied to estimate an attention weight for each voxel for determining the importance of the 2D and 3D semantic labels.\n\nIn order to get local features, a PointNet~\\cite{qi2017pointnet}-like module is adopted to extract the voxel-wise information inside each non-empty voxel. For the $i$-th voxel, the local feature can be represented as: \n\\begin{equation} \\label{local_feature}\n V_i = f(p_1, p_2, \u00a0\\cdots , p_M) =\n \\max_{i=1,...,M} \\{\\text{MLP}_{l}(p_{i^{'}})\\} \\in \\mathbb{R}^{C_1},\n\\end{equation}\n{where $\\{p_{i^{'}}, i^{'}=1,2,3...M\\}$ indicates the point insize each voxel}. $\\text{MLP}_{l}(\\cdot)$ and ${max}$ are the local muti-layer perception (MLP) networks and max-pooling operation. Specifically, $\\text{MLP}_{l}(\\cdot)$ consists of a linear layer, a batch normalization layer, an activation layer and outputs local features with $C_1$ channels. To achieve global feature information, we aggregate information based on the $E$ voxels. In particular, we first use a $\\text{MLP}_{g}(\\cdot)$ to map each voxel features from $C_1$ dimension to $C_2$ dimension. Then, another PointNet-like module is applied on all the voxels as:\n\\begin{equation} \\label{global_feature}\nV_{global} = f(V_1, V_2, \\cdots ,V_E) = \\max_{i=1,...,E} \\{\\text{MLP}_{g}(V_i)\\} \\in \\mathbb{R}^{C_2}.\n\\end{equation}\nFinally, the global feature vector $V_{global}$ is expanded to the same size of voxel number and then concatenate them to each local feature $V_i$ to obtain final fused local and global features $V_{gl}\\in \\mathbb{R}^{E \\times{(C_1 + C_2)}}$.\n \n\n{After getting} fused features $V_{gl}$ from the network, we need to estimate an attention score for each point in the voxel. This can be achieved by applying another MLP module $\\text{MLP}_{att}(\\cdot)$ on $V_{gl}$ \nand a Sigmod activation function $\\sigma(\\cdot)$. Afterwards, we multiply the confidence score by corresponding one-hot semantic vectors for each voxel, which is denoted as Eq. (\\ref{attention_2d}), Eq. (\\ref{attention_3d}):\n\\begin{equation} \\label{attention_2d}\n\t \\mathbf{V}^{i}_{a.S} = \\mathbf{V}^{i}_{2D} \\times \\sigma{(\\text{MLP}_{att}(V^{i}_{gl})}) ,\n\\end{equation} \n\\begin{equation}\\label{attention_3d}\n\t \\mathbf{V}^{i}_{a.T} = \\mathbf{V}^{i}_{3D} \\times (1-\\sigma{(\\text{MLP}_{att}(V^{i}_{gl}})))\n\\end{equation} \nwhere $\\mathbf{V}^{i}_{2D}$ and $\\mathbf{V}^{i}_{3D}$ are the 2D and 3D semantic segmentation voxel labels which are encoded with {one-hot} format. 
{The final semantic vector $V^{i}_{final}$ of each voxel can be obtained by concatenating or element-wise addition of $\\mathbf{V}^{i}_{a.T}$ and $\\mathbf{V}^{i}_{a.S}$.} \n\n\n\n\n\\iffalse\n\\xsqcomments{\n Taking KITTI dataset as example, for a certain point, supposed that we got 2D\/3D semantic segmentation after encoder to {one-hot} vector are as follows:\n\\begin{equation}\\label{p_2d_onehot}\n\t \\mathbf{P}_{2D} = [0,0,1,0]\n\\end{equation} \n\\begin{equation}\\label{p_3d_onehot}\n\t \\mathbf{P}_{3D} = [0,0,0,1]\n\\end{equation} \n}\n\\xsqcomments{In the equation, different dimensions denote the category of 'Car', 'Pedestrian', 'Cyclist' and 'Background', respectively. Here, each point both have 2D\/3D semantic information, and '1' present which category this point belong to. If the weight of score to trust 2D\/3D semantic information is 0.2 and 0.8 for this certain point, after Eq.(\\ref{attention_2d}), Eq(\\ref{attention_3d}), we got follow present:\n\\begin{equation} \\label{attention_2d_final}\n\t \\mathbf{P}_{a.S} = [0,0,1,0] \\otimes 0.2 = [0,0,0.2,0],\n\\end{equation} \n\\begin{equation}\\label{attention_3d_final}\n\t \\mathbf{P}_{a.T} =[0,0,0,1] \\otimes 0.8 = [0,0,0,0.8]\n\\end{equation} \n}\n\\xsqcomments{Different with previous conference, we plus them together as described in Eq.(\\ref{attention_final}).\n\\begin{equation}\\label{attention_final}\n\t \\mathbf{P}_{final} = \\mathbf{P}_{a.T} + \\mathbf{P}_{a.S} = [0,0.2,0,0.8]\n\\end{equation}\nWe can find the weight of 'background' from 3D semantic information in more than 'Pedestrian' which from 3D semantic. The results denote that 3D semantic information can greatly correct the mistakes from 2D boundary-blurring effect happened in 2D semantic segmentation. On the contrary, 2D semantic information will correct the miss-classified which happened in 3D semantic segmentation if the score of 2D is more than 3D semantic segmentation. Then we fused $\\mathbf{P}_{fianl} \\in \\mathbb{R}^{N \\times M \\times m}$ with raw point $P$ in each voxel to replace the voxels shape $\\widehat{V}_m\\in \\mathbb{R}^{N\\times M \\times (3 + 2m)}$, where $m, M$ denote semantic classes number and points number in voxel. 
Finally, $\\hat{V_m}$ contains the adaptively fused information from 2D and 3D semantic labels.}\n\\fi\n\n\n\n\n\\begin{table*}[ht]\n\\centering\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.85\\textwidth}{!}\n{\n\\begin{tabular}{r|c|lll|lll|lll}\n\\hline\n\\multicolumn{1}{r|}{\\multirow{2}{*}{$\\textbf{Methods}$}} &\\multicolumn{1}{c|}{\\multirow{2}{*}{$\\textbf{mAP}$ (Mod.)(\\%)}} &\n\\multicolumn{3}{c|}{$\\textbf{Car}$ $AP_{70}(\\%)$} & \n\\multicolumn{3}{c|}{$\\textbf{Pedestrian}$ $AP_{70}(\\%)$} & \n\\multicolumn{3}{c}{$\\textbf{Cyclist}$ $AP_{70}(\\%)$} \\\\\n\n{} & {} & {Easy} & {Mod.} & {Hard} &\n{Easy} & {Mod.} & {Hard} &\n{Easy} & {Mod.} & {Hard} \\\\ \\hline \\hline\n{SECOND } & {66.64} & \n{90.04} & {81.22} & {78.22} &\n{56.34} & {52.40} & {46.52} &\n83.94 & 66.31 & {62.37} \\\\\n{SECOND $^{\\ast}$} & {68.11} & \n{91.04} & {82.31} & {79.31} &\n {59.28} & {54.18} & {50.20} &\n85.11 & 67.52 & {63.36} \\\\\n{Improvement} & \\R{+1.47} & \n\\R{+1.00} & \\R{+1.09} & \\R{+1.09} &\n\\R{+2.94} & \\R{+1.78} & \\R{+3.68} &\n\\R{+1.17} & \\R{+1.54} & \\R{+0.99} \\\\ \\hline \n\n{Pointpillars} & {62.90} \n& {87.59} & {78.17} & {75.23} &\n {53.58} & {47.58} & {44.04} &\n{82.21} & {62.95} & {58.66} \\\\\n{PointPillars$^{\\star}$} & {65.78} \n{89.58} & {78.60} & {75.63} &\n{60.22} & {54.23} & {49.49} &\n84.83 & 64.50 & {60.17} \\\\\n\n{Improvement} & \\R{+2.88} & \n\\R{+1.99} & \\R{+0.43} & \\R{+0.4} &\n\\R{+6.64} & \\R{+6.65} & \\R{+5.45} &\n\\R{+2.62} & \\R{+1.55} & \\R{+1.51} \\\\ \\hline\n\n{PV-RCNN} & {71.82} &\n{92.23} & {83.10} & {82.42} &\n{65.68} & {59.29} & {53.99} &\n91.57 & 73.06 & {69.80} \\\\\n{PV-RCNN$^{\\ast}$} & {73.95} & \n{91.85} & {84.59} & {82.66} & \n{69.12} & {61.61} & {55.96} & \n94.90 & 75.65 & {71.03} \\\\ \n{Improvement} & \\R{+2.13} & \n\\G{-0.38} & \\R{+1.49} & \\R{+0.24} &\n\\R{+3.44} & \\R{+2.32} & \\R{+1.97} &\n\\R{+3.33} & \\R{+2.59} & \\R{+1.23} \\\\ \\hline\n\\end{tabular}\n}\n\\caption{3D object detection evaluation on KITTI ``val'' split on different baseline approaches, where $^{\\star}$ represents the boosted baseline by adding the proposed fusion modules. ``Easy'', ``Mod.'' and ``Hard'' represent the three difficult levels defined by official benchmark and $\\textbf{mAP}$ (Mod.) represents the average $\\textbf{AP}$ of ``Car'', ``Pedestrian'' and ``Cyclist'' on ``Mod.'' level. For easy understanding, we also highlight the improvements with different colors, where red represents an increase and green represents a decrease compared to the baseline method. 
This table is better to be viewed in color mode.}\n\\label{tab:kitti_3D_detection}\n\\end{table*}\n\n\\begin{table*}[!ht]\n\\centering\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.85\\textwidth}{!}\n{\n \\begin{tabular}{r|c| ccc | ccc| ccc}\n \\hline\n \\multirow{2}{*}{$\\textbf{Methods}$} &\\multirow{2}{*}{$\\textbf{mAP}$ (Mod.)(\\%)} &\n \\multicolumn{3}{c|}{$\\textbf{Car}$ $AP_{70}(\\%)$} &\n \\multicolumn{3}{c|}{$\\textbf{Pedestrian}$ $AP_{70}(\\%)$} & \n \\multicolumn{3}{c}{$\\textbf{Cyclist}$ $AP_{70}(\\%)$} \\\\\n {} & {} & {Easy} & {Mod.} & {Hard} & \n {Easy} & {Mod.} & {Hard} & \n {Easy} & {Mod.} & {Hard} \\\\ \\hline \\hline \n {SECOND} & {71.95} \n & {92.31} & {88.99} & {86.59} \n & {60.5} & {56.21} & {51.25} \n & {87.30} & {70.65} & {66.63} \\\\\n SECOND $^{\\star}$ & {73.72} \n & {94.70} & {91.62} & {88.35} \n & {63.95} & {59.66} & {55.81} \n & {91.95} & {73.28} & {67.75} \\\\\n {Improvement} & \\RED{{+2..90}} \n & \\RED{+2.39} & \\RED{+2.63} & \\RED{+1.76} \n & \\RED{+3.45} & \\RED{+3.45} & \\RED{+4.56} \n & \\RED{+4.65} & \\RED{+2.63} & \\RED{+1.12} \\\\ \\hline\n {Pointpillars } & {69.18} \n & {92.50} & {87.80} & {87.55} \n & {58.58} & {52.88} & {48.30} \n & {86.77} & {66.87} & {62.46} \\\\\n \n PointPillars $^{\\star}$ & {72.13} \n & {94.39} & {87.65} & {89.86} \n & {64.84} & {59.57} & {55.16} \n & {89.55} & {69.18} & {64.65} \\\\\n {Improvement} & \\RED{{+2.95}} \n & \\RED{+1.89} & \\GRE{-0.15} & \\RED{+2.31} \n & \\RED{+6.26} & \\RED{+6.69} & \\RED{+6.86} \n & \\RED{+2.78} & \\RED{+2.31} & \\RED{+2.19} \\\\ \\hline\n {PV-RCNN} & {76.23} \n & {94.50} & {90.62} & {88.53} \n & {68.67} & {62.49} & {58.01} \n & {92.76} & {75.59} & {71.06} \\\\\n PV-RCNN $^{\\star}$ & {78.17} \n & {94.86} & {90.87} & {88.88} \n & {71.99} & {64.71} & {59.01} \n & {96.35} & {78.93} & {74.51} \\\\\n {Improvement} & \\RED{{+1.94}} \n & \\RED{+0.36} & \\RED{+0.25} & \\RED{+0.35}\n & \\RED{+3.32} & \\RED{+2.22} & \\RED{+1.00} \n & \\RED{+3.59} & \\RED{+3.34} & \\RED{+3.45} \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Evaluation of bird's-eye view object detection on KITTI ``val'' split with different baselines. Similar to the 3D detection, the red color represents an increase and the green color represents a decrease compared to the baseline method. This table is also better to be viewed in color mode.}\n\\label{tab:KITTI_BEV_detection}\n\\end{table*}\n\n\n\n\n\\subsection{Deep Feature Fusion Module} \\label{subsec:deep_fusion}\nIn AD scenarios, we not only need to know what the object is but also where object it is, because both of them are very important for the following planning and control modules. In typical object detection frameworks, they correspond to the classification and regression branches respectively. Empirically, global context information is important to recognize the specific class attributes. On the contrary, the object's attributes (e.g., dimension, orientation, and precise location, etc) regression branch pays more attention to detailed spatial information around the ROI (region of interest) in a relatively small range. For accurate kinds of size object detection, therefore various scales receptive fields are necessary. This issue has been considered in most object detection frameworks. However, how to use various field of view fields in an efficient way is vitally important.\n\nTo handle this kind of issue, a specific DFF module has been proposed to aggregate both the high-level and low-level features with different receptive fields. 
An illustration of the DFF module is shown in \\figref{fig:deep_feature_fusion}. First, the backbone features $\\textbf{F}^{0}_B$ from the feature extractor pass the \\textit{Conv\\_block1} with several convolution layers to obtain the $\\textbf{F}^{1}_B$ as a basic feature.\nHere, \\textit{Conv\\_block1} has four Conv modules and the first $conv$ module takes $C$ channels as input and outputs 128 channels, and the following three $convs$ share the same input {channels} and output channels. For each $conv$ module in the \\figref{fig:deep_feature_fusion}, it consists one $Con2d$, a batch normalization layer, and a Rectified Linear Unit (ReLU) activation layer. For easy understanding, we have given the stride and kernel size for each $conv$ operation {at the bottom of \\figref{fig:deep_feature_fusion}. Then, the feature $\\textbf{F}^{1}_B$ will pass two branches to obtain the features with different receptive fields. For one branch, the features are down-sampled into $1\/2$ size with $Conv-block2$ first and then pass the $Conv2$ operation. Finally, the outputs are up-sampled into the feature map $F_{R}^{L}\\in \\mathbb{R}^{H \\times W \\times C' }$ with $Deconv2$. For the other branch, $F^{1}_{B}$ will pass $Covn3$ and $Conv4$ to obtain the features of $F_{R}^{S}$ consecutively. {And the shape of the output $F_{R}^{L}$ is the same as $F_{R}^{S}$. Specially, $C'$ can be C.}\nAfter adding the high-level and low-level features} element-wisely, a channel-attention (CA) module \\cite{fu2019dual} is employed to further fuse both of them. The CA module selectively emphasizes interdependent channel maps by integrating relative features among all channel maps, so the $\\textbf{F}^{F}_E$ can be fused better in a deep way. Finally, $\\textbf{F}^{G}_E$ is taken as the inputs for the following classification and regression heads. \n\n\n\n \n\n\n\n\n\n\n\n\n\\subsection{3D Object Detection Framework} \\label{subsec:3d_detector}\n\n{The proposed AAF and DFF modules are }detector independent and any off-the-shelf 3D object detectors can be directly employed as the baseline of our proposed framework. The 3D detector receives the points or voxels produced by the AAF module as inputs and can keep backbone structures unchanged to {obtain the backbone features. Then, the backbone features are boosted by passing the proposed DFF module. Finally, detection results are generated from the classification and regression heads.}\n\n\n\n\n\n\n\n\n\n\\section{Experimental Results} \\label{sec:experiments}\n\nTo verify the effectiveness of our proposed framework, we evaluate it on two large-scale 3D object detection dataset in AD scenarios as KITTI \\cite{geiger2012we} and nuScenes \\cite{caesar2020nuscenes}. Furthermore, {the proposed modules is also evaluated on different kinds of baselines} for verifying its generalizability, including SECOND \\cite{yan2018second}, PointPilars \\cite{lang2019pointpillars} and PVRCNN \\cite{shi2020pv}, etc.\n\n\\subsection{Evaluation on KITTI Dataset} \\label{subsec:eval_kitti}\n \n\\textit{KITTI} is one of the most popular benchmarks for 3D object detection in AD, which contains 7481 samples for training and 7518 samples for testing. The objects in each class are divided into three difficulty levels as ``easy'', ``moderate'', and ``hard'', according to the object size, the occlusion ratio, and the truncation level. 
Since the ground truth annotations of the test samples are not available and the access to the test server is limited, we follow the idea in~\\cite{chen2017multi} and split the training data into ``train'' and ``val'' where each set contains 3712 and 3769 samples respectively. In this dataset, both the LiDAR point clouds and the RGB images have been provided. In addition, both the intrinsic parameters and extrinsic parameters between different sensors have been given.\n\n\n\n\\begin{table*}[ht!]\n\\centering\n\\normalsize\n\\resizebox{1\\textwidth}{!}\n{\n\t\\begin{tabular}{l | c | c | c c c c c c c c c c}\n\t\\hline\n {\\multirow{2}{*}{Methods}} &\n {\\multirow{2}{*}{\\textbf{NDS} (\\%)}} &\n {\\multirow{2}{*}{\\textbf{mAP} (\\%)}} & \n \\multicolumn{10}{c}{\\textbf{AP} (\\%)} \\\\\n {} & {} & {} & {Car} & {Pedestrian} &{Bus} &{Barrier} & {T.C.} &\n {Truck} & {Trailer} & {Moto.} & {Cons.} &\n {Bicycle} \\\\ \\hline \\hline\n \n \n\tSECOND \\cite{yan2018second} &61.96 &50.85 &81.61 &77.37 & 68.53 &57.75 &56.86 &51.93 &38.19 & 40.14 &17.95 & 18.16 \\\\\n\tSECOND$^{\\ast}$ & 67.61 & 62.61 & 84.77 & 83.36 &72.41 &63.67 &74.99 &60.32 &42.89 &66.50 &23.59 &54.62 \\\\\n\tImprovement $\\uparrow$ & \\textcolor{red}{\\textbf{+5.65}} & \\textcolor{red}{\\textbf{+11.76}} & \\textcolor{red}{+3.16} & \\textcolor{red}{+5.99} & \\textcolor{red}{+3.88} & \\textcolor{red}{+5.92} & \\textcolor{red}{+18.13} & \\textcolor{red}{+8.39} & \\textcolor{red}{+4.70} & \\textcolor{red}{+26.36} & \\textcolor{red}{+5.64}& \\textcolor{red}{+36.46} \\\\\n \\hline\n\tPointPillars \\cite{lang2019pointpillars} & 57.50 &43.46 &80.67 &70.80 &62.01 &49.23 &44.00 &49.35 &34.86 &26.74 &11.64 &5.27 \\\\\n\tPointPillars$^{\\ast}$ &66.43 & 61.75 &84.79 &83.41 &70.52 &59.42 &75.43 &57.50 &42.32 &66.97 &22.43 &54.68 \\\\\n\tImprovement $\\uparrow$ & \\textcolor{red} {\\textbf{+8.93}} & \\textcolor{red}{\\textbf{+18.29}} & \n\t\\textcolor{red}{+4.12} & \\textcolor{red}{+12.61} & \\textcolor{red}{+8.51 } & \\textcolor{red}{ +10.19} & \\textcolor{red}{+31.43} & \\textcolor{red}{+8.15} & \\textcolor{red}{+7.46} & \\textcolor{red}{+40.23} & \\textcolor{red}{+10.79} & \\textcolor{red}{+49.41} \\\\\n \\hline\n\tCenterPoint \\cite{yin2021center} & 64.82 &56.53 &84.73 &83.42 &66.95 &64.56 &64.76 &54.52 &36.13 &56.81 &15.81 &37.57\\\\\n\tCenterPoint$^{\\ast}$ &69.91 &65.85 &87.00 &87.95 & 71.53 &67.14 &79.89 &61.91 & 41.22 & 73.85 &23.97 &64.06\\\\\n\tImprovement $\\uparrow$ & \\textcolor{red}{\\textbf{+5.09}} & \\textcolor{red}{\\textbf{+6.81}} & \\textcolor{red}{+2.77} & \\textcolor{red}{+4.53} & \\textcolor{red}{+4.58} & \\textcolor{red}{+2.58} & \\textcolor{red}{+15.13} & \\textcolor{red}{+7.39} & \\textcolor{red}{+5.09} & \\textcolor{red}{+17.04} & \\textcolor{red}{+8.16}&\\textcolor{red}{+26.49} \\\\\n \\hline\n\t\\end{tabular}\n}\n\\caption{\\normalfont Evaluation results on nuScenes validation dataset. ``NDS'' and ``mAP'' mean nuScenes detection score and mean Average Precision. ``T.C.'', ``Moto.'' and ``Cons.'' are short for ``traffic cone'', ``motorcycle'', and ``construction vehicle'' respectively. `` * '' denotes the improved baseline by adding the proposed fusion module. The red color represents an increase compared to the baseline. This table is also better to be viewed in color mode.}\n\\label{tab:eval_on_nuscenes_val}\n\\end{table*}\n\n\n\n\n\\noindent\\textbf{Evaluation Metrics.} {We follow the official metrics provided by the KITTI for evaluation. 
${AP}_{70}$ is used for ``Car'' category while ${AP}_{50}$ is used for ``Pedestrain'' and ``Cyclist''. }\nSpecifically, before the publication of \\cite{simonelli2019disentangling}, the KITTI official benchmark used the 11 recall positions for comparison. After that, the official benchmark changes the evaluation criterion from 11-points to 40-points because the latter one is proved to be more stable than the former \\cite{simonelli2019disentangling}. Therefore, we use the 40-points criterion for all the experiments here. In addition, similar to \\cite{vora2020pointpainting}, the average AP (mAP) of three Classes for ``Moderate'' is also taken as an indicator for evaluating the average performance on all three classes. \n\n\\noindent\\textbf{Baselines.} Three different baselines have been used for evaluation on KITTI: \n\\begin{enumerate}\n \\item \\textit{SECOND \\cite{yan2018second}} is the first to employ the sparse convolution on the voxel-based 3D object detection framework to accelerate the efficiency of LiDAR-based 3D object detection. \n \n \\item \\textit{PointPillars \\cite{lang2019pointpillars}} {is proposed to further improve the detection efficiency by dividing the point cloud into vertical pillars rather than voxels. For each pillar, \\textit{Pillar Feature Net} is applied to extract the point-level feature.}\n \n \\item\\textit{PV-RCNN \\cite{shi2020pv}} is a hybrid point-voxel-based 3D object detector, which can utilize the advantages from both the point and voxel representations.\n\\end{enumerate}\n\n\\noindent\\textbf{Implementation Details.} DeeplabV3+ \\cite{deeplabv3plus2018} and Cylinder3D \\cite{zhu2021cylindrical} are employed for 2D and 3D scene parsing respectively. More details, the DeeplabV3+ is pre-trained on Cityscape \\footnote{\\url{https:\/\/www.cityscapes-dataset.com}}, and the Cylinder3D is trained on KITTI point clouds by taking points in 3D ground trues bounding box as foreground annotation. For AAF module, $m = 4$, $C_1 = 64,C_2 = 128$, respectively. {The voxel size for PointPillars and SECOND are $0.16m \\times 0.16m \\times 4m$ and $0.05m \\times 0.05m \\times 0.1m$} respectively. In addition, both of the two baselines use the same optimization (e.g., AdamW) and learning strategies (e.g., one cycle) for all the experiments.\n{In the DFF module, we set $C = 256$ and $C' = 512$ for \\textit{SECOND} and \\textit{PV-RCNN} framework and $C = 64$ and $C' = 384$ for \\textit{PointPillars} network. The kernel and stride size are represented with $k$ and $s$ in Fig.\\ref{fig:deep_feature_fusion}, respectively.}\n\n{The proposed approach is implemented with PaddlePaddle \\cite{PaddlePaddle_2019} and all the methods are trained on NVIDIA Tesla V100 with 8 GPUs. The AdamW is taken as the optimizer and the one-cycle learning strategy is adopted for training the network. 
For \\textit{SECOND} and \\textit{PointPillars}, the batch size is 4 per GPU and the maximum learning rate is 0.003 while the bath size is 2 per GPU and the maximum learning rate is 0.001 for \\textit{PV-RCNN}.}\n\n\n\n \n\n\\noindent\\textbf{Quantitative Evaluation.} We illustrate the results on 3D detection and Bird's-eye view in \\tabref{tab:kitti_3D_detection} and \\tabref{tab:KITTI_BEV_detection} respectively.\n{From the table, we can clearly see that remarkable improvements have been achieved on the three baselines across all the categories.}\nTaking Pointpillars as an example, the proposed method has achieved 0.43\\%, 6.65\\%, 1.55\\% points improvements on ``Car'', ``Pedestrian'' and ``Cyclist'' respectively. Interestingly, compared to ``Car'', ``Pedestrian'' and ``Cyclist'' give much more improvements by the fusion painting modules. \n\n\\begin{table*}[ht!]\n \\centering\n \\Large\n \n\\resizebox{0.9\\textwidth}{!}{\n\n\\begin{tabular}{ l| c| c| c| c c c c c c c c c c}\n \\hline\n {\\multirow{2}{*}{Methods}} & \n {\\multirow{2}{*}{Modality}} & {\\multirow{2}{*}{\\textbf{NDS}(\\%)}} & \n {\\multirow{2}{*}{\\textbf{mAP}(\\%)}} & \n \\multicolumn{10}{c}{\\textbf{AP} (\\%)} \\\\ \n {} & {} & {} & {} & {Car} & {Truck} & {Bus} & {Trailer} & {Cons} & {Ped} & {Moto} & {Bicycle} & {T.C} & {Barrier} \\\\ \\hline \\hline\n PointPillars \\cite{vora2020pointpainting} & L & 55.0 & 40.1 & 76.0 & 31.0 & 32.1 & 36.6 & 11.3 & 64.0 & 34.2 & 14.0 & 45.6 & 56.4 \\\\\n 3DSSD \\cite{yang20203dssd} & L & 56.4 & 46.2 & 81.2 & 47.2 & 61.4 & 30.5 & 12.6 & 70.2 & 36.0 & 8.6 & 31.1 & 47.9 \\\\\n CBGS \\cite{zhu2019class} & L & 63.3 & 52.8 & 81.1 & 48.5 & 54.9 & 42.9 & 10.5 & 80.1 & 51.5 & 22.3 & 70.9 & 65.7 \\\\\n HotSpotNet \\cite{chen2020object} & L & 66.0 & 59.3 & 83.1 & 50.9 & 56.4 & 53.3 & {23.0} & 81.3 & {63.5} & 36.6 & 73.0 & 71.6 \\\\\n CVCNET\\cite{chen2020every} & L & 66.6 & 58.2 & 82.6 & 49.5 & 59.4 & 51.1 & 16.2 & {83.0} & 61.8 & {38.8} & 69.7 & 69.7 \\\\\n CenterPoint \\cite{yin2021center} & L & {67.3} & {60.3} & {85.2} & {53.5} & {63.6} & {56.0} & 20.0 & 54.6 & 59.5 & 30.7 & {78.4} & 71.1 \\\\\n PointPainting \\cite{vora2020pointpainting} & L \\& C & 58.1 & 46.4 & 77.9 & 35.8 & 36.2 & 37.3 & 15.8 & 73.3 & 41.5 & 24.1 & 62.4 & 60.2 \\\\\n 3DCVF \\cite{DBLP:conf\/eccv\/YooKKC20} & L \\& C & 62.3 & 52.7 & 83.0 & 45.0 & 48.8 & 49.6 & 15.9 & 74.2 & 51.2 & 30.4 & 62.9 & 65.9 \\\\ \\hline\n \n {Our proposed} & L \\& C & \\textbf{71.6} & \\textbf{68.2} & \\textbf{87.1} & \\textbf{60.8} & \\textbf{66.5} & \\textbf{61.7} & \\textbf{30.0} & \\textbf{88.3} & \\textbf{74.7} & \\textbf{53.5} & \\textbf{85.0} & \\textbf{71.8} \\\\ \n \n \\hline\n \\end{tabular}\n\n}\n\\caption{Comparison with other SOTA methods on the nuScenes 3D object detection testing benchmark. ``L'' and ``C'' in the modality column represent LiDAR and Camera sensors respectively. For easy understanding, the highest score in each column is shown in bold font. To be clear, only the results with publications are listed here.}\n\\label{tab:test_tabel}\n\\end{table*}\n\n\n\\subsection{Evaluation on the nuScenes Dataset} \\label{subsec:nuScenes}\nThe nuScenes \\cite{caesar2020nuscenes} is a recently released large-scale (with a total of 1,000 scenes) AD benchmark with different kinds of information including LiDAR point cloud, radar point, Camera images, and High Definition Maps, etc. 
For a fair comparison, the dataset has been divided into ``train'', ``val'', and ``test'' three subsets officially, which includes 700 scenes (28130 samples), 150 scenes (6019 samples), and 150 scenes (6008 samples) respectively. Objects are annotated in the LiDAR coordinate and projected into different sensors' coordinates with pre-calibrated intrinsic and extrinsic parameters. For the point clouds stream, only the keyframes (2fps) are annotated. With a 32 lines LiDAR scanner, each frame contains about {300,000} points for 360-degree viewpoint. For the object detection task, the obstacles have been categorized into 10 classes as ``car'', ``truck'', ``bicycle'' and ``pedestrian'' etc. Besides the point clouds, the corresponding RGB images are also provided for each keyframe, and for each keyframe, there are 6 cameras that can cover 360 fields of view.\n\n\n\\noindent\\textbf{Evaluation Metrics.} The evaluation metric for nuScenes is totally different from KITTI and they propose to use mean Average Precision (mAP) and nuScenes detection score (NDS) as the main metrics. Different from the original mAP defined in \\cite{everingham2010pascal}, nuScenes consider the BEV center distance with thresholds of \\{0.5, 1, 2, 4\\} meters, instead of the IoUs of bounding boxes. NDS is a weighted sum of mAP and other metric scores, such as average translation error (ATE) and average scale error (ASE). For more details about the evaluation metric please refer to \\cite{caesar2020nuscenes}.\n\n\\noindent\\textbf{Baselines.} We have integrated the proposed module on three different SOTA baselines to verify its effectiveness. Similar to the KITTI dataset, both the \\textit{SECOND} and \\textit{PointPillars} are employed and the other detector is \\textit{CenterPoint} \\cite{yin2021center}. which is the first anchor-free-based 3D object detector for LiDAR-based 3D object detection. \n\n\n\\noindent\\textbf{Implementation Details} HTCNet \\cite{chen2019hybrid} and Cylinder3D \\cite{zhu2021cylindrical} are employed here for obtaining the 2D and 3D semantic segmentation results respectively. We used the HTCNet model trained on nuImages \\footnote{\\url{https:\/\/www.nuscenes.org\/nuimages}} dataset directly for generating the semantic labels. For Cylinder3D, we train it directly on the nuScenes 3D object detection dataset while the point cloud semantic label is produced by taking the points inside each bounding box. In AAF module, $m = 11$, $C_1 = 64$ and $C_2 = 128$ respectively. \n{In DFF module, $C = 256 $ and $C' = 512$ for \\textit{SECOND} and \\textit{CenterPoint} while $C = 64$ and $C' = 384$ for \\textit{PointPillars}. The setting for kernel size $k$ and stride $s$ are given in Fig. \\ref{fig:deep_feature_fusion} and same for all three detectors.} The voxel size for PointPillars, SECOND and CenterPoint are $0.2m \\times 0.2m \\times 8 m$, $0.1m \\times 0.1m \\times0.2m$ and $0.075m \\times 0.075m \\times 0.075m$, respectively. We use AdamW \\cite{ loshchilov2019decoupled} as the optimizer with the max learning rate is 0.001. Following \\cite{caesar2020nuscenes}, 10 previous LiDAR sweeps are stacked into the keyframe to make the point clouds more dense.\n\n\n\n{All the baselines are trained on NVIDIA Tesla V100 (8 GPUs) with a batch size of 4 per GPU for 20 epochs. 
AdamW is taken as the optimizer and the one-cycle learning strategy is adopted for training the network with the maximum learning rate is 0.001.} \n\n\n\n\n\n\n\\noindent\\textbf{Quantitative Evaluation.} \n{The proposed framework has been evaluated on nuScenes benchmark for both ``val'' and ``test'' splits.} {The results of comparison with three baselines are given in Tab. \\ref{tab:eval_on_nuscenes_val}. From this table, we can see that significant improvements have been achieved on both the $\\text{mAP}$ and $\\text{NDS}$ across all the three baselines. For the \\textit{PointPillars}, the proposed module gives \\textbf{8.93} and \\textbf{18.29} points improvements on $\\text{NDS}$ and $\\text{mAP}$ respectively. For \\textit{SECOND}, the two values are \\textbf{5.65} and \\textbf{11.76} respectively. Even for the strong baseline \\textit{CenterPoint}, the proposed module can also give \\textbf{5.09} and \\textbf{9.32} points improvements.}\nIn addition, we also find that the {categories which share small sizes} such as ``Traffic Cone'', ``Moto'' and ``Bicycle'' have received more improvements compared to other categories. Taking ``Bicycle'' as an example, the $\\text{mAP}$ has been improved by {36.46\\%}, {49.41\\%} and {26.49\\%} compared to \\textit{SECOND}, \\textit{PointPillar} and \\textit{Centerpoint} respectively. {This phenomenon can be explained as these categories with small sizes are hard to be recognized in point clouds because of a few LiDAR points on them. In this case, the semantic information from the 2D\/3D parsing results is extremely helpful for improving 3D object detection performance. It should be emphasized that the result in Tab. \\ref{tab:eval_on_nuscenes_val} without any Test Time Augmentation(TTA) strategies.}\n\n\nTo compare the proposed framework with other SOTA methods, we submit our results (adding fusion modules on \\textit{CenterPoint}) on the nuScenes evaluation sever\\footnote{\\url{https:\/\/www.nuscenes.org\/object-detection\/}} for test split. The detailed results are given in Tab. \\ref{tab:test_tabel}. From this table, we can find that the proposed method achieves the best performance on both the $\\text{mAP}$ and $\\text{NDS}$ scores. Compared to the baseline \\textit{CenterPoint}, 4.3 and 7.9 points improvements have been achieved by adding our proposed fusion modules. For easy understanding, we have highlighted the best performances in bold in each column. It should be noted that the results in Tab. \\ref{tab:test_tabel} the Test Time Augmentation(TTA) strategy including multiple flip and rotation operations is applied during the inference time which is 1.69 points higher than the origin method of ours.\n\n \n \n\n\n\\subsection{Ablation Studies} \\label{subsec:ablation_study}\n\n\n{To verify the effectiveness of different modules, a series of ablation experiments have been designed to analyze the contribution of each component for the final detection results. 
Specially, three types of experiments are given as \\textit{different semantic representations} and \\textit{different fusion modules} and \\textit{effectiveness of channel attention}.\n}\n\n\\begin{table}[ht!]\n\\centering\n\\small\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.4\\textwidth}{!}\n{ \n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Representations}$ & $\\textbf{mAP }$ (\\%) & $\\textbf{NDS}$ (\\%) \\\\ \\hline\n {PointPillar \\cite{lang2019pointpillars}} & 43.46 & 57.50 \\\\ \n {Semantic ID} & 50.96 (+7.50) & 60.69 (+3.19) \\\\ \n {Onehot Vector} & 52.18 (+8.72) & 61.59 (+4.09) \\\\ \n {Semantic Score} & 53.10 (+9.64) & 62.20 (+4.70) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the nuScenes \\cite{geiger2012we} dataset for fusing the semantic results with different representations.}\n\\label{tab:ablation_dif_semantic_reps}\n\\end{table}\n\n\\noindent\\textbf{Different Semantic Representations.} First of all, we investigate the influence of the different semantic result representations on the final detection performance. Three different representations as ``Semantic ID'', ``Onehot Vector'' and ``Semantic Score'' are considered here. For ``Semantic ID'', the digital class ID is used directly and for the ``Semantic Score'', the predicted probability after the Softmax operation is used. In addition, to convert the semantic scores to a ``Onehot Vector'', we assign the class with the highest score as ``1'' and keep other classes as ``0''. Here, we just add the semantic feature to the original $(x, y, z)$ coordinates with concatenation operation and \\textit{PointPillars} is taken as the baseline on nuScenes dataset due to its efficiency of model iteration. From the results in \\tabref{tab:ablation_dif_semantic_reps}, we can easily find that the 3D semantic information can significantly boost the final object detection performance regardless of different representations. More specifically, the ``Semantic Score'' achieves the best performance among the three which gives 9.64 and 4.70 points improvements for $\\textbf{mAP}$ and $\\textbf{NDS}$ respectively. We guess that the semantic score can provide more information than the other two representations because it not only provides the class ID but also the confidence for all the classes. \n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Strategies}$ & $\\textbf{Car} (Mod.)$ & $\\textbf{Pedestrian}$ (Mod.) & $\\textbf{Cyclist}$ (Mod.) \\\\\n {PointPillar \\cite{lang2019pointpillars}} & 78.17 & 47.58 & 62.95 \\\\ \n {Semantic ID.} & 78.42(+0.25) & 48.94(+1.36) & 57.88(-5.07) \\\\ \n {Onehot Vector} & 78.55(+0.35) & 50.15(+2.57) & 54.14(-8.81) \\\\ \n {Semantic Score} & 79.08(+0.91) & 51.41(+3.93) & 61.25(-1.70) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the public KITTI \\cite{geiger2012we} dataset. The way use 3D semantic info. To be clear, only the results of the ``Moderate'' category have been given and the mAP represents the mean AP of ``Car'', ``Pedestrian'' and ``Cyclist''.}\n\\label{tab:ablation_kitti_table}\n\\end{table}\n\\fi\n\n\n\\noindent\\textbf{Different Fusion Modules.} We also execute different experiments to analyze the performances of each fusion module on both KITTI and nuScene dataset. The \\textit{SECOND} is chosen as the baseline for KITTI while all the three detectors are verified on nuScene. 
\nThe results are given in \\tabref{tab:ablation_kitti_table} and \\tabref{tab:ablation_nuScene_table} for KITTI and nuScenes respectively. To be clear, the ``3D Sem'' and ``2D Sem.'' represent the 2D and 3D parsing results. ``AAF'' represents the fusion of the 2D and 3D semantic information with the proposed adaptive attention module and the ``AAF + DFF'' represents the module with both the two fusion strategies.\n\n\\begin{table}[hb!]\n\\centering\n\\small\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | ccc|l}\n \\hline\n\\multirow{2}{*}{Strategies} & \\multicolumn{3}{c|}{AP(\\%)} & \\multirow{2}{*}{mAP(\\%)} \\\\\n & \\multicolumn{1}{l}{Car(Mod.)} & \\multicolumn{1}{l}{Ped.(Mod.)} & \\multicolumn{1}{l|}{Cyc.(Mod.)} & \\\\ \\hline\n {SECOND \\cite{yan2018second}} & 88.99 & 56.21 & 70.65 & 71.95 \\\\\n {3D Sem.} & + 0.81 & + 0.70 & + 0.63 & + 0.71 \\\\ \n {2D Sem.} & + 1.09 & + 1.35 & + 1.20 & + 1.41 \\\\ \n {AAF} & + 1.62 & + 1.78 & + 1.91 & + 1.77 \\\\ \n {AAF \\& DFF} & + 2.63 & + 3.45 & + 2.63 & + 2.90 \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Evaluation of ablation studies on the public KITTI \\cite{geiger2012we} dataset. Similar to PointPainting \\cite{vora2020pointpainting}, we provide the 2D BEV detection here. To be clear, only the results of ``Mod'' have been given and \\textbf{mAP} is the average mean of all the three categories.}\n\\label{tab:ablation_kitti_table}\n\\end{table}\n\n\n\\begin{table}[ht!]\n\\centering\n\\setlength{\\tabcolsep}{2pt}\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cc | cc |cc}\n \\hline\n \\multicolumn{1}{r|}{\\multirow{2}{*}{\\textbf{Strategies}}} & \n \\multicolumn{2}{c|}{\\textbf{SECOND}} & \n \\multicolumn{2}{c|}{\\textbf{PointPillars}} & \n \\multicolumn{2}{c}{\\textbf{CenterPoint}} \\\\\n {} & \\textbf{mAP}(\\%) & \\textbf{NDS}(\\%) & \\textbf{mAP}(\\%) & \\textbf{NDS}(\\%)& \\textbf{mAP}(\\%)& \\textbf{NDS}(\\%) \n \\\\ \\hline \n {SECOND} & 50.85 & 61.96 & 43.46 & 57.50 & 56.53 & 64.82\\\\\n {3D Sem.} & + 3.60& + 1.45 & + 8.72 & + 4.09 & + 4.59 & + 1.86 \\\\ \n {2D Sem.} & + 8.55& + 4.07 & +15.64 &+ 7.55 & + 6.28 & + 3.67 \\\\ \n {AAF} & + 11.30 & + 5.45 & +17.31 & + 8.54 & + 8.46& + 4.56 \\\\ \n {AAF \\& DFF} & + 11.76 & + 5.65 & + 18.19 & + 8.93 & + 9.32 & + 5.09\\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies for different fusion strategies on the nuScenes benchmark. The first row is the performance of the baseline method and the following values are the gains obtained by adding different modules.}\n\\label{tab:ablation_nuScene_table}\n\\end{table}\n\nIn \\tabref{tab:ablation_kitti_table}, the first row is the performance of the baseline method and the following values are the gains obtained by adding different modules. From the table, we can obviously find that all the modules give positive effects on all three categories for the final detection results. For all the categories, the proposed modules give 2.9 points improvements averagely and ``Pedestrian'' achieves the most gain which achieves 3.45 points. Furthermore, we find that deep fusion gives the most improvements compared to the other three modules in this dataset. \n\n\\tabref{tab:ablation_nuScene_table} gives the results on the nuScenes dataset. 
\n{To be clear, a boosted version of baseline is presented here than in \\cite{xu2021fusionpainting} by fixing the sync-bn issue in the source code.} \nFrom this table, we can see that both the 2D and 3D {semantic information} can significantly boost the performances while the 2D information provide more improvements than the 3D information. This phenomenon can be explained as that the 2D texture information can highly benefit the categories classification results which is very important for the final \\textbf{mAP} computation. In addition, by giving the 2D information, the recall ability can be improved especially for objects in long distance with a few scanned points. In other words, the advantage for 3D semantic information is that the point clouds can well handle the occlusion among objects which are very common in AD scenarios which is hard to deal with in 2D. After fusion, all the detectors achieve much better performances than only one semantic information (2D or 3D).\n\nFurthermore, a deep fusion module is also proposed to aggregate the backbone features to further improve the performance. From the Tab. \\tabref{tab:ablation_kitti_table} and \\tabref{tab:ablation_nuScene_table}, we find that the deep fusion module can slightly improve the results for all three baseline detectors. Interestingly, compared to the \\textit{SECOND} and \\textit{PointPillars}, \\textit{CenterPoint} gives much better performance by adding the deep fusion module. This can be explained that the large deep backbone network in \\textit{CenterPoint} gives much deeper features that are more suitable for the proposed deep feature fusion module. \n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cc | cc }\n \\hline\n \\multicolumn{1}{r|}{\\multirow{2}{*}{\\textbf{Strategies}}} & \n \\multicolumn{2}{c|}{\\textbf{SECOND}} & \n \\multicolumn{2}{c}{\\textbf{CenterPoint}} \\\\\n {} & \\textbf{NDS}(\\%) & \\textbf{mAP}(\\%) & \\textbf{NDS}(\\%) & \\textbf{mAP}(\\%) \\\\ \\hline \n {Baselines} & 50.85 & 61.96 & 59.04 & 66.60\\\\\n {Without CA.} & 52.24(\\R{+1.39})& 63.18(\\R{+0.85}) &67.45(\\R{+0.8}) &59.84(\\R{+1.08}) \\\\ \n {Baseline with DFF.} &52.96(\\R{+2.11})&63.75(\\R{+1.79}) &67.84(\\R{+1.24}) &60.20(\\R{+1.16}) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation with\/without channel attention on nuScenes dataset.}\n\\label{tab:ablation_channel_attention_nuscenes}\n\\end{table}\n\\fi\n\n \n\n\\begin{table}[ht!]\n\\centering\n\\renewcommand\\arraystretch{1.1}\n\\setlength{\\tabcolsep}{1pt}\n\\resizebox{0.5\\textwidth}{!}\n{%\n\\begin{tabular}{r|ccc|c} \n\\hline\n\\multirow{2}{*}{\\textbf{Strategies}} & \\multicolumn{3}{c|}{\\textbf{AP(\\%)}} & \\multirow{2}{*}{\\textbf{mAP(\\%)}} \\\\\n & \\multicolumn{1}{l}{Car(Mod.)} & \\multicolumn{1}{l}{Ped.(Mod.)} & \\multicolumn{1}{l|}{Cyc.(Mod.)} & \\\\ \n\\hline\nSECOND & ~88.99~ & 56.21 & 70.65 & 71.95 \\\\\nOne scale CA & ~89.68 (\\textcolor{red}{+0.69})~ & 57.77 (\\textcolor{red}{+1.56}) & ~71.39 (\\textcolor{red}{+0.74})~ & 73.70 (\\textcolor{red}{+1.75}) \\\\\nMulti-scale CA & 90.22 (\\textcolor{red}{+1.23}) & ~58.11 (\\textcolor{red}{+1.90})~ & ~71.77 (\\textcolor{red}{+1.12})~ & 74.41 (\\textcolor{red}{+2.46}) \\\\\n\\hline\n\\end{tabular}\n}\n\\caption{Ablation with\/without multi-scale channel attention (CA) on KITTI dataset. The meaning of \\textit{one-scale CA} is we just use $F_{R}^{L}$ to the \\textit{CA} module. 
And multi-scale denote that we use both fused $F_{R}^{L}$ and $F_{R}^{S}$ feature to the next module.}\n\\label{tab:ablation_channel_attention_kitti}\n\\end{table}\n\n\n\\noindent\\textbf{Effectiveness of Multi-scale CA.} {In addition, a small experiment is also designed for testing the effectiveness of multi-scale attention in the DFF module. \\textit{SECOND} is taken as the baseline and tested on the KITTI dataset. From the results given in Tab. \\ref{tab:ablation_channel_attention_kitti}, we can see that by employing the one-scale features, the \\textbf{mAP} has been improved by 1.75 points compared to the baseline. By adding the multi-scale operation, additional 0.71 points improvements can be further obtained.}\n\n\n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Strategies}$ & $\\textbf{Car} (Mod.)$ & $\\textbf{Pedestrian}$ (Mod.) & $\\textbf{Cyclist}$ (Mod.) & $\\textbf{mAP }$ (\\%) \\\\ \\hline \\hline\n {PointPillar \\cite{lang2019pointpillars}} & & & & \\\\ \n {Semantic ID.} & & & & \\\\ \n {Onehot Vector} & & & & \\\\ \n {Semantic Score} & & & & \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the public KITTI \\cite{geiger2012we} dataset. The way use 3D semantic info. To be clear, only the results of the ``Moderate'' category have been given and the mAP represents the mean AP of ``Car'', ``Pedestrian'' and ``Cyclist''.}\n\\label{tab:ablation_kitti_table}\n\\end{table}\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.35\\textwidth}{!}\n{%\n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Strategies}$ & $\\textbf{mAP }$ (\\%) & $\\textbf{NDS}$ (\\%) \\\\ \\hline \\hline\n {PointPillar \\cite{lang2019pointpillars}} & 44.46 & 57.50 \\\\ \n {Semantic ID.} & 50.96 (+7.50) & 60.69 (+3.19) \\\\ \n {Onehot Vector} & 52.18 (+8.72) & 61.59 (+4.09) \\\\ \n {Semantic Score} & 53.10 (+9.64) & 62.20 (+4.70) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the public nuScenes \\cite{geiger2012we} dataset for different semantic representations. 
}\n\\label{tab:ablation_dif_semantic_reps}\n\\end{table}\n\\fi\n\n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}{\n\\begin{tabular}{l | c c c c l l }\n\\hline\n\\textbf{Baselines} & \\textbf{3D-Sem} & \\textbf{2D-Sem} & \\textbf{ShallowFusion} & \\textbf{DeepFusion} & \\textbf{mAP(\\%)} & \\textbf{NDS(\\%)} \\\\ \\hline\n\\multirow{4}{*}{\\textbf{SECOND}} & -- & -- & -- & -- &50.85 (baseline) &61.96 (baseline) \\\\\n &\\checkmark & & & &54.45 (\\textcolor{red}{+3.60}) &63.41 (\\textcolor{red}{+1.45}) \\\\\n & & \\checkmark & & &59.40 (\\textcolor{red}{+8.55}) &66.03 (\\textcolor{red}{+4.07}) \\\\\n \n & \\checkmark & \\checkmark & \\checkmark & &62.15 (\\textcolor{red}{+11.30}) & 67.41 (\\textcolor{red}{+5.45}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &62.86 (\\textcolor{red}{+12.01}) & 68.05 (\\textcolor{red}{+6.09}) \\\\ \\hline\n\\multirow{4}{*}{\\textbf{PointPillars}} & -- & -- & -- & -- &43.46 (baseline) &57.50 (baseline) \\\\\n & \\checkmark & & & &52.18 (\\textcolor{red}{+8.72}) &61.59 (\\textcolor{red}{+4.09}) \\\\\n & & \\checkmark & & &59.10 (\\textcolor{red}{+15.64}) & 65.05 (\\textcolor{red}{+7.55}) \\\\\n \n & \\checkmark & \\checkmark & \\checkmark & & 60.77 (\\textcolor{red}{+17.31}) &66.04 (\\textcolor{red}{+8.54}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &61.58 (\\textcolor{red}{+18.12}) & 66.78 (\\textcolor{red}{+9.28}) \\\\ \\hline\n\\multirow{4}{*}{\\textbf{CenterPoint}} & -- & -- & -- & -- &59.04 (baseline) & 66.60 (baseline) \\\\\n & \\checkmark & & & & 61.12 (\\textcolor{red}{+2.08}) &67.68 (\\textcolor{red}{+1.08}) \\\\\n & & \\checkmark & & & 62.81 (\\textcolor{red}{+3.77}) & 68.49 (\\textcolor{red}{+1.89})\\\\\n \n & \\checkmark & \\checkmark & \\checkmark & &64.99 (\\textcolor{red}{+5.95}) & 69.38 (\\textcolor{red}{+2.78}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &65.85 (\\textcolor{red}{+6.81}) & 69.91 (\\textcolor{red}{+3.31}) \\\\ \\hline\n \n\\multirow{1}{*}{\\textbf{CenterPoint*}} &\\checkmark &\\checkmark &\\checkmark &\\checkmark&67.84 (\\textcolor{red}{+8.80}) & 71.64 (\\textcolor{red}{+5.04}) \n\n\\\\ \\hline\n\\end{tabular}\n}\n\\caption{An ablation study for different fusion strategies. ``Att-Mod'' is short for Adaptive Attention Module, ``3D-P'' and ``2D-P'' are short for ``3D Painting Module'' and ``2D Painting Module'', respectively. Especially, CenterPoint* means Centerpoint with Double Flip and DCN. }\n\n\\label{tab:ablation_table}\n\\end{table}\n\\fi \n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.4\\textwidth}{!}\n{%\n\\begin{tabular}{|l|c|c|}\n\\hline\n & PointPillars (ms) & SECOND (ms) \\\\ \\hline\nBaseline & 53.5 & 108.2 \\\\ \\hline\n+3D Sem & 56.5\\textcolor{red}{(+3.0)} & 108.7\\textcolor{red}{(+0.5)} \\\\ \\hline\n+2D Sem & 56.9\\textcolor{red}{(+3.4)} & 108.4\\textcolor{red}{(+0.2)} \\\\ \\hline\n+AAF+DFF & 62.5\\textcolor{red}{(+9.0)} & 113.3\\textcolor{red}{(+4.6)} \\\\ \\hline\n\\end{tabular}\n}\n\\caption{Running time of differ modules. Here, +3D Sem, +2D Sem, +AAF+DFF denote the inference time of use PointCloud segmentation, image semantic segmentataion and both of two type segmentaion and AAF \\& DFF module. 
}\n\\label{tab:differ running time}\n\\end{table}\n\\\\\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.4\\textwidth}{!}\n{%\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nType & 2D Sem & 3D Sem & Projection \\\\ \\hline\nTime(ms) & 32ms & 140ms & 3ms \\\\ \\hline\n\\end{tabular}\n}\n\\caption{The type of 2D Sem and 3D Sem denote that the time of we got 2D semantic segmentation and 3D semantic segmentation respectively. And the Projection is the time of project pointcloud on semantic image to get the score of prediction result}\n\\label{tab:seg time}\n\\end{table}\n\\fi\n\n{\\subsection{Computation Time}} \\label{subsec:comp_time}\n\\begin{table}[ht!]\n\\centering\n\\setlength{\\tabcolsep}{5pt}\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.35\\textwidth}{!}\n{ \n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Types}$ & $\\textbf{PointPillars }$ & $\\textbf{SECOND}$ \\\\ \\hline \n {Baseline} & 53.5 (ms) & 108.2 (ms) \\\\\n {+3D Sem} & +3.0 (ms) & +0.5 (ms) \\\\ \n {+2D Sem} & +3.4 (ms) & +0.2 (ms) \\\\ \n {The Proposed} & 62.5 (ms) & 113.3 (ms) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Inference time of different modules. Here, \\textit{+3D Sem} and \\textit{+2D Sem} denote the time of adding 3D\/2D semantic information to input data. \\textit{The Proposed} denotes the proposed framework with the modules including two types of semantic information and \\textit{AAF} \\& \\textit{DFF} modules.}\n\\label{tab:inference_time}\n\\end{table}\n\n\n\\begin{figure*}[ht!]\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{Figs\/det_res.pdf}\n\t\\centering\n\t\\caption{Visualization of detection results with Open3d \\cite{zhou2018open3d}, where (a) is the ground-truth, (b) is the baseline method based on only point cloud, (c), (d) and (e) are the detection results based on the 2D semantic, 3D semantic and fusion semantic information, respectively. Especially, the yellow and red dash ellipses show some false positive and false negative detection. For nuScenes dataset, the baseline detector used here is CenterPoint, and for KITTI is SECOND.}\n\t\\label{Fig:detection result}\n\\end{figure*}\n\nBesides the performance, the computation time is also very important. Here, we test the computation time of each module based on the {\\textit{PointPillars} and \\textit{SECOND} on the KITTI dataset in \\tabref{tab:inference_time}.} For a single Nvidia V100 GPU, the \\textit{PointPillars} takes 53.5 ms per frame. By adding the 2D and 3D semantic information, the inference time increases 3.0 ms and 3.4 ms respectively while the time increases about 9 ms by adding two types semantic information and the AFF \\& DFF modules. The inference time for \\textit{SECOND} is given at the right column of \\tabref{tab:inference_time}. Compared with the baseline, the 2D\/3D semantic segmentation information gives nearly no extra computational burden. We explain this phenomenon as that in \\textit{SECOND} the simple \\textit{mean} operation is employed for extracting the features for each voxel and the computation time of this operation will not change too much with the increasing of the feature dimension. For \\textit{PointPillars}, the MPL is employed for feature extracting in each pillar, therefore the computation time will increase largely with the increasing of the feature dimension.\n\n{In addition, we also record the time used for obtaining the 2D\/3D semantic results. For Deeplab V3+, the inference time is about 32 ms per frame while for Cylinder3D, it takes about 140 ms per frame. 
Furthermore, re-projecting the 2D semantic results onto the 3D point cloud takes only about 3 ms per frame, so most of the extra time is spent on the 2D\/3D semantic segmentation itself. In practice, this cost can be largely avoided by attaching an extra segmentation head to the detection backbone, i.e., using a multi-head network that performs detection and segmentation jointly; with inference acceleration tools such as the C++ library TensorRT, this adds only a few milliseconds.}\n\\\\\n\n\n\\subsection{Qualitative Detection Results}\\label{sub:Qualitative_Res}\nWe show some qualitative detection results on the nuScenes and KITTI datasets in \\figref{Fig:detection result}, in which (a) is the ground truth, (b), (c) and (d) are the detection results of the baseline (CenterPoint) without any extra information and with 2D and 3D semantic information respectively, and (e) shows the final results with all the fusion modules. From these figures, we can see that 2D painting produces some false positive detections caused by the frustum blurring effect, while the 3D semantic results give a relatively clear object boundary but a less accurate class prediction. More importantly, the proposed framework, which combines the two complementary sources of information from 2D and 3D segmentation, gives much more accurate detection results.\n\n\\subsection{Ablation Studies} \\label{subsec:ablation_study}\n\nIn this part, we design several ablation experiments to analyze the contribution of the different components to the final detection results. Specifically, we design two series of experiments, one on the KITTI dataset and one on the nuScenes dataset.\n\n\\textbf{KITTI Benchmark}: we choose the SECOND detector for the ablation study on the KITTI benchmark.\n\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Strategies}$ & $\\textbf{Car}$ (Mod.) & $\\textbf{Pedestrian}$ (Mod.) & $\\textbf{Cyclist}$ (Mod.) & $\\textbf{mAP}$ (\\%) \\\\ \\hline \\hline\n {SECOND \\cite{yan2018second}} & 88.99 & 56.21 & 70.65 & 71.95 \\\\\n {3D Sem.} & + 0.81 & + 0.70 & + 0.63 & + 0.71 \\\\\n {2D Sem.} & + 1.09 & + 1.35 & + 1.20 & + 1.41 \\\\\n {Shallow Fusion} & + 1.62 & + 1.78 & + 1.91 & + 1.77 \\\\\n {Deep Fusion} & + 2.63 & + 3.45 & + 2.63 & + 2.90 \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the public KITTI \\cite{geiger2012we} dataset. Here, we take SECOND \\cite{yan2018second} as the baseline method. Similar to PointPainting \\cite{vora2020pointpainting}, we report the detection results in the bird's-eye view. To be clear, only the results of the ``Moderate'' difficulty are given, and the mAP is the mean AP over ``Car'', ``Pedestrian'' and ``Cyclist''.}\n\\label{tab:ablation_kitti_table}\n\\end{table}\n\n\n\\textbf{nuScenes Benchmark}: three different detectors are employed for the ablation here.\n
The experimental results are given in \\tabref{tab:ablation_nuScene_table}.\n\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cc | cc | cc}\n \\hline\n \\multicolumn{1}{r|}{\\multirow{2}{*}{\\textbf{Strategies}}} &\n \\multicolumn{2}{c|}{\\textbf{SECOND}} &\n \\multicolumn{2}{c|}{\\textbf{PointPillars}} &\n \\multicolumn{2}{c}{\\textbf{CenterPoint}} \\\\\n {} & \\textbf{mAP} (\\%) & \\textbf{NDS} (\\%) & \\textbf{mAP} (\\%) & \\textbf{NDS} (\\%) & \\textbf{mAP} (\\%) & \\textbf{NDS} (\\%) \\\\ \\hline \\hline\n {Baseline} & 50.85 & 61.96 & 43.46 & 57.50 & 59.04 & 66.60 \\\\\n {3D Sem.} & + 3.60 & + 1.45 & + 8.72 & + 4.09 & + 0.84 & + 0.58 \\\\\n {2D Sem.} & + 8.55 & + 4.07 & + 15.64 & + 7.55 & + 7.29 & + 3.69 \\\\\n {ShallowFusion} & + 11.30 & + 5.45 & + 17.31 & + 8.54 & + 10.00 & + 5.86 \\\\\n {DeepFusion} & + 12.01 & + 6.09 & + 18.12 & + 9.28 & +xxxxx & + xxxxx \\\\ \\hline\n \\end{tabular}\n}\n\\caption{An ablation study of different fusion strategies on the nuScenes validation set. For each baseline detector, the rows report the improvements brought by the 3D semantic information (3D Sem.), the 2D semantic information (2D Sem.), shallow fusion and deep fusion over the corresponding baseline.}\n\\label{tab:ablation_nuScene_table}\n\\end{table}\n\nThe SECOND and PointPillars detectors are trained and evaluated on the whole dataset, and both the 3D and the 2D segmentation information clearly improve the detection accuracy, as shown in Table~\\ref{tab:ablation_table} (first and second rows). PointPillars receives the largest gains over its baseline, with +17.31 mAP and +8.54 NDS, while CenterPoint achieves +10.49 mAP and +5.29 NDS when trained on a quarter of the training split and evaluated on the whole validation set. Note that all models are trained with the same configuration (8 GPUs with a batch size of 4), so the baseline numbers may differ slightly from the official figures.\n\nTo verify the effectiveness of the different modules, a series of ablation studies is designed here. All the experiments are executed on the validation split, and the settings are kept the same as in Sec.~\\ref{subsec:dataset}. All the results are given in Tab.~\\ref{tab:ablation_table}, from which the impact of each module can be assessed.\n\nAs demonstrated in Tab.~\\ref{tab:ablation_table}, the 3D semantic segmentation information alone contributes about 9.81\\% mAP and 3.92\\% NDS on average, while the 2D semantic segmentation information contributes about 21.90\\% and 8.46\\% on average. The larger improvement from the 2D semantic information comes from its better recall, especially for distant objects where the point cloud is very sparse. The advantage of the 3D semantic information is that the point cloud handles occlusion between objects well, which is very common and hard to handle in 2D.\n
But the most improvement comes from the combination of 3D Painting, 2D Painting and Adaptive Attention Module, 26.58\\% mAP and 10.90\\% averagely from 3 detectors we used.\n\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}{\n\\begin{tabular}{l | c c c c l l }\n\\hline\n\\textbf{Baselines} & \\textbf{3D-Sem} & \\textbf{2D-Sem} & \\textbf{ShallowFusion} & \\textbf{DeepFusion} & \\textbf{mAP(\\%)} & \\textbf{NDS(\\%)} \\\\ \\hline\n\\multirow{4}{*}{\\textbf{SECOND}} & -- & -- & -- & -- &50.85 (baseline) &61.96 (baseline) \\\\\n &\\checkmark & & & &54.45 (\\textcolor{red}{+3.60}) &63.41 (\\textcolor{red}{+1.45}) \\\\\n & & \\checkmark & & &59.40 (\\textcolor{red}{+8.55}) &66.03 (\\textcolor{red}{+4.07}) \\\\\n \n & \\checkmark & \\checkmark & \\checkmark & &62.15 (\\textcolor{red}{+11.30}) & 67.41 (\\textcolor{red}{+5.45}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &62.86 (\\textcolor{red}{+12.01}) & 68.05 (\\textcolor{red}{+6.09}) \\\\ \\hline\n\\multirow{4}{*}{\\textbf{PointPilars}} & -- & -- & -- & -- &43.46 (baseline) &57.50 (baseline) \\\\\n & \\checkmark & & & &52.18 (\\textcolor{red}{+8.72}) &61.59 (\\textcolor{red}{+4.09}) \\\\\n & & \\checkmark & & &59.10 (\\textcolor{red}{+15.64}) & 65.05 (\\textcolor{red}{+7.55}) \\\\\n \n & \\checkmark & \\checkmark & \\checkmark & & 60.77 (\\textcolor{red}{+17.31}) &66.04 (\\textcolor{red}{+8.54}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &61.58 (\\textcolor{red}{+18.12}) & 66.78 (\\textcolor{red}{+9.28}) \\\\ \\hline\n\\multirow{4}{*}{\\textbf{CenterPoint}} & -- & -- & -- & -- &59.04 (baseline) & 66.60 (baseline) \\\\\n & \\checkmark & & & & 61.12 (\\textcolor{red}{+0.84}) &67.68 (\\textcolor{red}{+0.58}) \\\\\n & & \\checkmark & & & 61.12 (\\textcolor{red}{+7.29}) & 67.68 (\\textcolor{red}{+3.69})\\\\\n \n & \\checkmark & \\checkmark & \\checkmark & &64.89 (\\textcolor{red}{+10.00}) & 69.38 (\\textcolor{red}{+5.86}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &65.85 (\\textcolor{red}{+10.00}) & 69.91 (\\textcolor{red}{+5.86}) \\\\ \\hline\n \n\n\\\\ \\hline\n\\end{tabular}\n}\n\\caption{An ablation study for different fusion strategies. ``Att-Mod'' is short for Adaptive Attention Module, ``3D-P'' and ``2D-P'' are short for ``3D Painting Module'' and ``2D Painting Module'', respectively. Especially, CenterPoint* means Centerpoint with Double Flip and DCN. }\n\n\\label{tab:ablation_table}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\\subsection{Qualitative Results on KITTI and nuScenes Dataset}\\label{sub:Qualitative_Res}\n\nMore qualitative detection results are illustrated in Fig. \\ref{Fig:detection result}. Fig. \\ref{Fig:detection result} (a) shows the annotated ground truth, (b) is the detection results for CenterPoint based on raw point cloud without using any painting strategy. (c) and (d) show the detection results with 2D painted and 3D painted point clouds, respectively, while (e) is the results based on our proposed framework. \nAs shown in the figure, there are false positive detection results caused 2D painting due to frustum blurring effect, while 3D painting method produces worse foreground class segmentation compared to 2D image segmentation. 
Instead, our FusionPainting can combine two complementary information from 2D and 3D segmentation, and detect objects more accurately.\n\n\\begin{figure*}[ht!]\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{FusionPainting\/TITS\/Figs\/results_rviz.pdf}\n\t\\centering\n\t\\caption{Visualization of detection results, where (a) is the ground-truth, (b) is the detection results based on raw point cloud, (c), (d) and (e) are the detection results based on the projected points with 2D semantic information, 3D semantic information and fusion semantic information, respectively. Especially, the yellow and red dash ellipses show some of the false and missed detection, respectively. The baseline detector used here is CenterPoint \\cite{yin2020center}.}\n\t\\label{Fig:detection result}\n\\end{figure*}\n\n\n\\section{Experimental Results} \\label{sec:experiments}\n\nTo verify the effectiveness of our proposed framework, we evaluate it on two the large-scale autonomous driving 3D object detection dataset as KITTI \\cite{geiger2012we} and nuScenes \\cite{nuscenes2019}. Furthermore, we also evaluate the proposed modules on different kinds of 3D detectors for verifying their generalizability, such as SECOND \\cite{yan2018second}, PointPilars \\cite{lang2019pointpillars}, PointRCNN \\cite{shi2019pointrcnn} and PVRCNN \\cite{shi2020pv} etc.\n\n \n \n\n\n\\subsection{Evaluation on KITTI Dataset} \\label{subsec:eval_kitti}\nWe test the proposed framework on the KITTI dataset first. \n\n\\textit{KITTI \\cite{geiger2012we}} as one of the most popular benchmarks for 3D object detection in AD, it contains 7481 samples for training and 7518 samples for testing. The objects in each class are divided into three difficulty levels as ``easy'', ``moderate'', and ``hard'', according to the object size, the occlusion ratio, and the truncation level. Since the ground truth annotations of the test samples are not available and the access to the test server is limited, we follow the idea in~\\cite{chen2017multi} and split the source training data into training and validation where each set contains 3712 and 3769 samples respectively. In this dataset, both the point cloud and the RGB image have been provided. In addition, both the intrinsic camera parameters and extrinsic transformation parameters between different sensors have been well-calibrated. For the object detection task, multiple stereo frames have been provided while only a single image from the left camera at the current frame is used in our approach.\n\n\\textit{Evaluation Metrics:} we follow the metrics provided by the official KITTI benchmark for comparison here. Specifically, for ``Car'' class, we use the metric $\\textbf{AP}_\\text{Car}$\\textbf{70} for all the detection results. $\\textbf{AP}_\\text{Car}$\\textbf{70} means that only the samples in which the overlay of bounding box with ground truth is more than 70\\% are considered as positive. While the threshold for ``Pedestrian'' class is 0.5 (e.g., $\\textbf{AP}_\\text{Ped}$\\textbf{50}) due to its smaller size. Similar to \\cite{vora2020pointpainting}, the average AP (mAP) of three Classes for ``Moderate'' is also taken as an indicator for evaluating the average performance on all three classes. 
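\n\nTo make these metrics concrete, the short Python sketch below (our illustration, not the official KITTI evaluation code; the 0.5 threshold for ``Cyclist'' is an assumption that follows common practice and is not stated above) shows how the class-specific IoU thresholds decide true positives and how the ``Moderate'' mAP is obtained as the mean over the three classes.\n\\begin{verbatim}\nIOU_THRESHOLDS = {'Car': 0.7, 'Pedestrian': 0.5, 'Cyclist': 0.5}\n\ndef is_true_positive(obj_class, iou_with_gt):\n    # A detection is positive only if its overlap with a ground-truth box\n    # of the same class exceeds the class-specific threshold.\n    return iou_with_gt >= IOU_THRESHOLDS[obj_class]\n\ndef moderate_map(ap_moderate):\n    # ap_moderate: dict mapping class name to AP on the 'Moderate' level.\n    return sum(ap_moderate.values()) / len(ap_moderate)\n\n# With the SECOND baseline numbers: (88.99 + 56.21 + 70.65) / 3 = 71.95\nprint(moderate_map({'Car': 88.99, 'Pedestrian': 56.21, 'Cyclist': 70.65}))\n\\end{verbatim}\n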
\n\nFor KITTI dataset, four different types of baselines have been evaluated here which are listed as below: \n\\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]\n \\item \\textit{SECOND \\cite{yan2018second}:} is the first to employ the sparse convolution on the voxel-based 3D object detection task and the detection efficiency has been highly improved compared to previous work such as VoxelNet \\cite{zhou2018voxelnet}. \n \\item \\textit{PointPillars \\cite{lang2019pointpillars}} divides the point cloud into pillars rather than voxels, and ``Pillar Feature Net'' is applied to each pillar for extracting the point feature. Then 2d convolution network is adopted on the bird-eye-view feature map for object detection. PointPillars is a trade-off between efficiency and performance.\n \\item \\textit{PointRCNN \\cite{shi2020points}} is a pure point cloud-based two-stages 3D object detector. For extract multi-scale features, the PointNet++ \\cite{qi2017pointnet++} is taken as the backbone. First, the region proposal is generated with a binary foreground\/background classifier in the first stage and then the proposal is refined by cropped points and features in the second stage.\n \\item\\textit{PV-RCNN \\cite{shi2020pv}} is a hybrid point-voxel-based 3D object detector, which can utilize the advantages from both the point and voxel representations.\n\\end{enumerate}\n\n\\textbf{Evaluation Results on KITTI Dataset}\n\n\n\n\\begin{table*}[ht!]\n\\centering\n\n\\resizebox{0.75\\textwidth}{!}\n{\n\\begin{tabular}{r|c|lll|lll|lll}\n\\hline\n\\multicolumn{1}{r|}{\\multirow{2}{*}{\\textbf{Methods}}} &\\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{mAP} (Mod)}} &\n\\multicolumn{3}{c|}{\\textbf{Car}} &\n\\multicolumn{3}{c|}{\\textbf{Pedestrian}} & \n\\multicolumn{3}{c}{\\textbf{Cyclist}} \\\\\n\n{} & {} & {Easy} & {Mod.} & {Hard} &\n{Easy} & {Mod.} & {Hard} &\n{Easy} & {Mod.} & {Hard} \\\\ \\hline \\hline\n{Pointpillars} & {62.90} \n& {87.59} & {78.17} & {75.23} &\n {53.58} & {47.58} & {44.04} &\n{82.21} & {62.95} & {58.66} \\\\\n\n{PointPillars$^{\\star}$} & {65.78} &\n{89.58} & {78.60} & {75.63} &\n{60.22} & {54.23} & {49.49} &\n84.83 & 64.50 & {60.17} \\\\\n\n{Improvement} & \\RED{+2.88} & \n\\RED{+1.99} & \\RED{+0.43} & \\RED{+0.4} &\n\\RED{+6.64} & \\RED{+6.65} & \\RED{+5.45} &\n\\RED{+2.62} & \\RED{+1.55} & \\RED{+1.51} \\\\ \\hline\n\n\n{SECOND} & {66.64} & \n{90.04} & {81.22} & {78.22} &\n{56.34} & {52.40} & {46.52} &\n83.94 & 66.31 & {62.37} \\\\\n{SECOND$^{\\star}$} & {67.17} & \n{91.19} & {82.13} & {79.25} &\n {56.75} & {52.80} & {48.48} &\n84.46 & 66.57 & {62.51} \\\\\n{Improvement} & \\RED{+0.53} & \n\\RED{+1.15} & \\RED{+0.91} & \\RED{+1.03} &\n\\RED{+0.41} & \\RED{+0.40} & \\RED{+1.96} &\n\\RED{+0.52} & \\RED{+0.26} & \\RED{+0.14} \\\\ \\hline \n\n{PointRCNN} & {69.41} & \n{89.68} & {80.45} & {77.91} &\n{65.60} & {56.69} & {49.92} &\n91.01 & 71.10 & {66.61} \\\\\n{PointRCNN$^{\\star}$} & {70.46} & \n{89.91} & {82.44} & {78.35} &\n{66.19} & {56.78} & {50.34} &\n94.72 & 72.16 & {67.62} \\\\\n{Improvement} & \\RED{+1.05} & \n\\RED{+0.23} & \\RED{+1.99} & \\RED{+0.44} &\n\\RED{+0.59} & \\RED{+0.09} & \\RED{+0.42} &\n\\RED{+3.71} & \\RED{+1.06} & \\RED{+1.01} \\\\ \\hline\n\n\n{PV-RCNN} & {71.82} &\n{92.23} & {83.10} & {82.42} &\n{65.68} & {59.29} & {53.99} &\n91.57 & 73.06 & {69.80} \\\\\n{PV-RCNN$^{\\star}$} & {73.95} & \n{91.54} & {84.59} & {82.66} &\n{69.12} & {61.61} & {55.96} &\n92.82 & 75.65 & {71.03} \\\\\n{Improvement} & \\RED{+2.13} & \n\\GRE{-0.69} & \\RED{+1.49} & \\RED{+0.24} 
&\n\\RED{+3.44} & \\RED{+2.32} & \\RED{+1.97} &\n\\RED{+1.25} & \\RED{+2.59} & \\RED{+1.23} \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:kitti_3D_detection}\n\\caption{\\normalfont Evaluation of object 3D detection on KITTI ``val'' split for different baseline approaches. $^{\\star}$ represents the baseline by adding the proposed fusion modules. }\n\\end{table*}\n\n\n\n\n\\begin{table*}[ht!]\n\\renewcommand\\arraystretch{1.3}\n\\centering\n \\renewcommand\\arraystretch{1.1}\n\\resizebox{0.90\\textwidth}{!}\n{\n\\begin{tabular}{cllllllllll}\n\\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{Method}} & \\multicolumn{1}{c||}{mAP} & \\multicolumn{3}{c||}{Car} (AP_{70}) & \\multicolumn{3}{c||}{Pedestrain} (AP_{70}) & \\multicolumn{3}{c|}{Cyclist} (AP_{70}) \\\\ \\cline{2-11} \n\n\\multicolumn{1}{|c|}{} & \\multicolumn{1}{c||}{Mod.} & \\multicolumn{1}{c|}{Easy} & \\multicolumn{1}{c|}{Mod.} & \\multicolumn{1}{c||}{Hard} & \\multicolumn{1}{c|}{Easy} & \\multicolumn{1}{c|}{Mod.} & \\multicolumn{1}{c||}{Hard} & \\multicolumn{1}{c|}{Easy} & \\multicolumn{1}{c|}{Mod.} & \\multicolumn{1}{c|}{Hard} \\\\ \\hline \\hline\n\n\\multicolumn{1}{|c|}{Pointpillars} & \\multicolumn{1}{l||}{69.18} & \\multicolumn{1}{l|}{92.50} & \\multicolumn{1}{l|}{87.80} & \\multicolumn{1}{l||}{87.55} & \\multicolumn{1}{l|}{58.58} & \\multicolumn{1}{l|}{52.88} & \\multicolumn{1}{l||}{48.30} & \\multicolumn{1}{l|}{86.77} & \\multicolumn{1}{l|}{66.87} & \\multicolumn{1}{l|}{62.46} \\\\\n\n\\multicolumn{1}{|c|}{PointPillars(ours)} & \\multicolumn{1}{l||}{72.13} & \\multicolumn{1}{l|}{94.39} & \\multicolumn{1}{l|}{87.65} & \\multicolumn{1}{l||}{89.86} & \\multicolumn{1}{l|}{64.84} & \\multicolumn{1}{l|}{59.57} & \\multicolumn{1}{l||}{55.16} & \\multicolumn{1}{l|}{89.55} & \\multicolumn{1}{l|}{69.18} & \\multicolumn{1}{l|}{64.65} \\\\\n\\multicolumn{1}{|c|}{\\uparrow} & \\multicolumn{1}{l||}{+2.95} & \\multicolumn{1}{l|}{+1.89} & \\multicolumn{1}{l|}{-0.15} & \\multicolumn{1}{l||}{+2.31} & \\multicolumn{1}{l|}{+6.26} & \\multicolumn{1}{l|}{+6.69} & \\multicolumn{1}{l||}{+6.86} & \\multicolumn{1}{l|}{+2.78} & \\multicolumn{1}{l|}{+2.31} & \\multicolumn{1}{l|}{+2.19} \\\\ \\hline\n\n\n\\multicolumn{1}{|c|}{SECOND} & \\multicolumn{1}{l||}{71.95} & \\multicolumn{1}{l|}{92.31} & \\multicolumn{1}{l|}{88.99} & \\multicolumn{1}{l||}{86.59} & \\multicolumn{1}{l|}{60.5} & \\multicolumn{1}{l|}{56.21} & \\multicolumn{1}{l||}{51.25} & \\multicolumn{1}{l|}{87.30} & \\multicolumn{1}{l|}{70.65} & \\multicolumn{1}{l|}{66.63} \\\\\n\\multicolumn{1}{|c|}{SECOND(Ours)} & \\multicolumn{1}{l||}{73.72} & \\multicolumn{1}{l|}{94.25} & \\multicolumn{1}{l|}{90.61} & \\multicolumn{1}{l||}{88.19} & \\multicolumn{1}{l|}{62.37} & \\multicolumn{1}{l|}{57.99} & \\multicolumn{1}{l||}{54.23} & \\multicolumn{1}{l|}{89.41} & \\multicolumn{1}{l|}{72.56} & \\multicolumn{1}{l|}{67.38} \\\\\n\\multicolumn{1}{|c|}{} & \\multicolumn{1}{l||}{+1.77} & \\multicolumn{1}{l|}{+1.94} & \\multicolumn{1}{l|}{+1.62} & \\multicolumn{1}{l||}{+1.6} & \\multicolumn{1}{l|}{+1.87} & \\multicolumn{1}{l|}{+1.78} & \\multicolumn{1}{l||}{+2.98} & \\multicolumn{1}{l|}{+2.11} & \\multicolumn{1}{l|}{+1.91} & \\multicolumn{1}{l|}{+0.75} \\\\ \\hline\n\n\n\\multicolumn{1}{|c|}{PointRCNN} & \\multicolumn{1}{l||}{74.13} & \\multicolumn{1}{l|}{93.04} & \\multicolumn{1}{l|}{88.70} & \\multicolumn{1}{l||}{86.64} & \\multicolumn{1}{l|}{68.25} & \\multicolumn{1}{l|}{59.41} & \\multicolumn{1}{l||}{53.71} & \\multicolumn{1}{l|}{94.39} & \\multicolumn{1}{l|}{74.27} & \\multicolumn{1}{l|}{69.70} 
\\\\\n\\multicolumn{1}{|c|}{PointRCNN(ours)} & \\multicolumn{1}{l||}{74.92} & \\multicolumn{1}{l|}{93.33} & \\multicolumn{1}{l|}{89.13} & \\multicolumn{1}{l||}{86.92} & \\multicolumn{1}{l|}{68.63} & \\multicolumn{1}{l|}{59.92} & \\multicolumn{1}{l||}{54.08} & \\multicolumn{1}{l|}{95.69} & \\multicolumn{1}{l|}{75.70} & \\multicolumn{1}{l|}{71.09} \\\\\n\\multicolumn{1}{|c|}{} & \\multicolumn{1}{l||}{+0.79} & \\multicolumn{1}{l|}{+0.29} & \\multicolumn{1}{l|}{+0.43} & \\multicolumn{1}{l||}{0.28} & \\multicolumn{1}{l|}{+0.38} & \\multicolumn{1}{l|}{+0.51} & \\multicolumn{1}{l||}{+0.37} & \\multicolumn{1}{l|}{+1.30} & \\multicolumn{1}{l|}{+1.43} & \\multicolumn{1}{l|}{+1.39} \\\\ \\hline\n\n\\multicolumn{1}{|c|}{PV-RCNN} & \\multicolumn{1}{l||}{76.23} & \\multicolumn{1}{l|}{94.50} & \\multicolumn{1}{l|}{90.62} & \\multicolumn{1}{l||}{88.53} & \\multicolumn{1}{l|}{68.67} & \\multicolumn{1}{l|}{62.49} & \\multicolumn{1}{l||}{58.01} & \\multicolumn{1}{l|}{92.76} & \\multicolumn{1}{l|}{75.59} & \\multicolumn{1}{l|}{71.06} \\\\\n\\multicolumn{1}{|c|}{PV-RCNN(ours)} & \\multicolumn{1}{l||}{78.17} & \\multicolumn{1}{l|}{94.86} & \\multicolumn{1}{l|}{90.87} & \\multicolumn{1}{l||}{88.88} & \\multicolumn{1}{l|}{71.99} & \\multicolumn{1}{l|}{64.71} & \\multicolumn{1}{l||}{59.01} & \\multicolumn{1}{l|}{96.35} & \\multicolumn{1}{l|}{78.93} & \\multicolumn{1}{l|}{74.51} \\\\\n\\multicolumn{1}{|c|}{} & \\multicolumn{1}{l||}{+1.94} & \\multicolumn{1}{l|}{+0.36} & \\multicolumn{1}{l|}{+0.25} & \\multicolumn{1}{l||}{+0.35} & \\multicolumn{1}{l|}{+3.32} & \\multicolumn{1}{l|}{+2.22} & \\multicolumn{1}{l||}{+1.00} & \\multicolumn{1}{l|}{+3.59} & \\multicolumn{1}{l|}{+3.34} & \\multicolumn{1}{l|}{+3.45} \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:kitti BEV detection}\n\\caption{\\normalfont kitti BEV detection.}\n\\end{table*}\n\n\\textit{Performance on Baseline Methods:}\n\n\\textit{Comparison with other SOTA methods:}\n\n\\textbf{Qualitative Results on KITTI Dataset}\n\n\n\\subsection{Evaluation on the nuScenes Dataset} \\label{subsec:nuScenes}\n\\textit{Dataset:} nuScenes 3D object detection benchmark~\\cite{nuscenes2019} has been employed for evaluation here, which is a large-scale dataset with a total of 1,000 scenes. For a fair comparison, the dataset has been officially divided into train, val, and testing, which includes 700 scenes (28,130 samples), 150 scenes (6019 samples), 150 scenes (6008 samples) respectively. For each video, only the key frames (every 0.5s) are annotated with 360-degree view. With a 32 lines LiDAR used by nuScenes, each frame contains about {300,000} points. For object detection task, 10 kinds of obstacles are considered including ``car'', ``truck'', ``bicycle'' and ``pedestrian'' et al. Besides the point clouds, the corresponding RGB images are also provided for each keyframe. For each keyframe, there are 6 images that cover 360 fields-of-view.\n\n\\textit{Evaluation Metrics:} For 3D object detection, \\cite{nuscenes2019} proposes mean Average Precision (mAP) and nuScenes detection score (NDS) as the main metrics. Different from the original mAP defined in \\cite{everingham2010pascal}, nuScenes consider the BEV center distance with thresholds of \\{0.5, 1, 2, 4\\} meters, instead of the IoUs of bounding boxes. 
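\n\nAs an illustration of this matching rule, the simplified Python sketch below (ours, not the official nuScenes devkit) checks whether a predicted box matches a ground-truth object by the distance between their centers in the bird's-eye view and averages the per-threshold APs.\n\\begin{verbatim}\nimport math\n\n# nuScenes-style matching (simplified): a prediction matches a ground-truth\n# object of the same class if their BEV centers are within a distance\n# threshold; AP is averaged over the thresholds 0.5, 1, 2 and 4 meters.\nMATCH_THRESHOLDS_M = (0.5, 1.0, 2.0, 4.0)\n\ndef bev_center_distance(pred_xy, gt_xy):\n    return math.hypot(pred_xy[0] - gt_xy[0], pred_xy[1] - gt_xy[1])\n\ndef matches(pred_xy, gt_xy, threshold_m):\n    return bev_center_distance(pred_xy, gt_xy) <= threshold_m\n\ndef class_ap(ap_at_threshold):\n    # ap_at_threshold: dict mapping threshold to the AP for one class.\n    return sum(ap_at_threshold[t] for t in MATCH_THRESHOLDS_M) / len(MATCH_THRESHOLDS_M)\n\\end{verbatim}\n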
NDS is a weighted sum of mAP and other metric scores, such as average translation error (ATE) and average scale error (ASE).\n\n\\textit{Baselines:} in addition, to verify its universality,we have implemented the proposed module on three different State-of-the-Art 3D object detectors as following,\n\\begin{itemize}\n\\item \\textit{SECOND} \\cite{yan2018second}, which is the first to employ the sparse convolution on the voxel-based 3D object detection task and the detection efficiency has been highly improved compared to previous work such as VoxelNet. \n\\item \\textit{PointPillars} \\cite{lang2019pointpillars}, which divides the point cloud into pillars and ``Pillar Feature Net'' is applied to each pillar for extracting the point feature. Then 2d convolution network has been adopted on the bird-eye-view feature map for object detection. PointPillars is a trade-off between efficiency and performance. \n\\item \\textit{CenterPoint} \\cite{yin2021center}, is the first anchor-free based 3D object detector which is very suitable for small object detection. \n\\end{itemize}\n\n\\textit{Implementation Details:} HTCNet \\cite{chen2019hybrid} and Cylinder3D \\cite{zhou2020cylinder3d} are employed here as the 2D Segmentor and 3D Segmentor, respectively, due to their outstanding semantic segmentation ability. The 2D segmentor is pretrained on nuImages\\footnote{\\url{https:\/\/www.nuscenes.org\/nuimages}} dataset, and the 3D segmentor is pretrained on nuScenes detection dataset while the point cloud semantic information can be generated by extracting the points inside each bounding box of the obstacles.\n\nFor Adaptive Attention Module, $m=11$, $C_1=64,C_2=128$, respectively, \nFor each baseline, all the experiments share the same setting, the voxel size for SECOND, PointPillar and CenterPoint are $0.2m \\times 0.2m \\times 8 m$, $0.1m \\times 0.1m \\times0.2m$ and $0.075m \\times 0.075m \\times 0.075m$, respectively. We use AdamW \\cite{ loshchilov2019decoupled} with max learning rate 0.001 as the optimizer. 
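\n\nAs a minimal sketch of these settings (our illustration; the detector network is replaced by a placeholder module, and the learning-rate schedule is not specified in the text), the optimizer can be instantiated as follows.\n\\begin{verbatim}\nimport torch\n\n# Voxel sizes as listed above, in meters (x, y, z).\nVOXEL_SIZE = {\n    'SECOND':       (0.2,   0.2,   8.0),\n    'PointPillars': (0.1,   0.1,   0.2),\n    'CenterPoint':  (0.075, 0.075, 0.075),\n}\n\nmodel = torch.nn.Linear(4, 4)  # stand-in for a detector network\noptimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)  # max learning rate 0.001\n\\end{verbatim}\n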
Follow \\cite{nuscenes2019}, 10 previous LiDAR sweeps are stacked into the keyframe to make the point clouds more dense.\n\n\n\n\n\n\n\n\n\n\\begin{table*}[ht!]\n\\centering\n\\resizebox{0.95\\textwidth}{!}\n{\n\t\\begin{tabular}{l | c | c | c c c c c c c c c c}\n\t\\hline\n \\multicolumn{1}{c|}{\\multirow{2}{*}{Methods}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{NDS}(\\%)}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{mAP}(\\%)}} & \n \\multicolumn{10}{c}{\\textbf{AP} (\\%)} \\\\\n \n \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l}{Car} & \\multicolumn{1}{l}{Pedestrian} & \\multicolumn{1}{l}{Bus} & \\multicolumn{1}{l}{Barrier} & \\multicolumn{1}{l}{T.C.} &\n \\multicolumn{1}{l}{Truck} & \\multicolumn{1}{l}{Trailer} & \\multicolumn{1}{l}{Moto.} & \\multicolumn{1}{l}{Cons.} &\n \\multicolumn{1}{l}{Bicycle} \\\\ \n \\hline \\hline\n \n \n\t\n\n\tSECOND \\cite{yan2018second} &61.96 &50.85 &81.61 &77.37 & 68.53 &57.75 &56.86 &51.93 &38.19 & 40.14 &17.95 & 18.16 \\\\\n\tSECOND$^{\\ast}$ & 67.41 & 62.15 & 83.98 & 82.86 &71.13 &63.29 &74.66 &59.97 &41.53 &66.85 &22.90 &54.35 \\\\\n\tImprovement $\\uparrow$ & \\textcolor{red}{{+5.45}} & \\textcolor{red}{{+11.30}} & \\textcolor{red}{+2.37} & \\textcolor{red}{+5.49} & \\textcolor{red}{+2.60} & \\textcolor{red}{+5.54} & \\textcolor{red}{+17.80} & \\textcolor{red}{+8.04} & \\textcolor{red}{+3.34} & \\textcolor{red}{+26.71} & \\textcolor{red}{+5.05}& \\textcolor{red}{+36.19} \\\\\n \\hline\n\t\n\tPointPillars \\cite{lang2019pointpillars} & 57.50 &43.46 &80.67 &70.80 &62.01 &49.23 &44.00 &49.35 &34.86 &26.74 &11.64 &5.27 \\\\\n\tPointPillars$^{\\ast}$ &66.04 & 60.77&83.51 &82.90&69.96 &58.46&74.50&56.90&39.25&66.40&21.68&54.10 \\\\\n\tImprovement $\\uparrow$ & \\textcolor{red} {{+8.54}} & \\textcolor{red}{{+17.31}} & \n\t\\textcolor{red}{+2.84} & \\textcolor{red}{+12.10} & \\textcolor{red}{+7.95 } & \\textcolor{red}{ +9.23} & \\textcolor{red}{+30.50} & \\textcolor{red}{+7.55} & \\textcolor{red}{+4.39} & \\textcolor{red}{+39.66} & \\textcolor{red}{+10.04} & \\textcolor{red}{+48.83} \\\\\n \\hline\n\t\n\tCenterPoint \\cite{yin2021center} & 64.82 &56.53 &84.73 &83.42 &66.95 &64.56 &64.76 &54.52 &36.13 &56.81 &15.81 &37.57\\\\\n\tCenterPoint$^{\\ast}$ &70.68 &66.53 &87.04 &88.44 & 70.66 &67.26 &79.57 &62.98 & 45.09 & 74.65&25.36&64.41\\\\\n\tImprovement $\\uparrow$ & \\textcolor{red}{\\textbf{+5.86}} & \\textcolor{red}{\\textbf{+10.00}} & \\textcolor{red}{+2.31} & \\textcolor{red}{+5.02} & \\textcolor{red}{+3.71} & \\textcolor{red}{+2.70} & \\textcolor{red}{+14.81} & \\textcolor{red}{+8.46} & \\textcolor{red}{+8.96} & \\textcolor{red}{+17.84} & \\textcolor{red}{+9.55}&\\textcolor{red}{+26.84} \\\\\n \\hline\n\t\\end{tabular}\n}\n\\caption{\\normalfont Evaluation results on nuScenes validation dataset. ``NDS'' and ``mAP'' mean nuScenes detection score and mean Average Precision. ``T.C.'', ``Moto.'' and ``Cons.'' are short for ``traffic cone'', ``motorcycle'', and ``construction vehicle'' respectively. `` * '' denotes the improved baseline by adding the proposed ``FusionPainting''. 
To highlight the effectiveness of our method, the improvements of each method are shown in red.}\n\\label{tab:eval_on_nuscenes_val}\n\\end{table*}\n\n\n\\textbf{Evaluation Results on nuScenes Dataset} \\label{subsubsec:evaluation}\nWe evaluate the proposed framework on the nuScenes benchmark for both the validation and test splits.\n\n\\textit{Performance on Baseline Methods:} First of all, we integrate the proposed fusion module into three different baseline methods and achieve consistent improvements on both $\\text{mAP}$ and $\\text{NDS}$ compared to all baselines. Detailed results are given in Tab.~\\ref{tab:eval_on_nuscenes_val}. From this table, we can see that the proposed module gives more than 10 points of improvement on $\\text{mAP}$ and 5 points on $\\text{NDS}$ for all three baselines. The improvements reach 17.31 and 8.45 points on $\\text{mAP}$ and $\\text{NDS}$ respectively for PointPillars. In addition, we find that the ``Traffic Cone'', ``Moto'' and ``Bicycle'' classes receive surprisingly large improvements compared to the other classes. For the ``Bicycle'' category specifically, the AP improves by about \\textbf{36.19\\%}, \\textbf{48.83\\%} and \\textbf{26.84\\%} for SECOND, PointPillars and CenterPoint respectively. Interestingly, all these categories are small objects which are hard to detect because only a few LiDAR points fall on them; by adding the prior semantic information, the category classification becomes relatively easier. Note that all experiments here share the same settings, so the baseline figures may differ slightly from the official ones.\n\n\\textit{Comparison with other SOTA methods:} To compare the proposed framework with other SOTA methods, we submit our best results (the proposed module on top of CenterPoint \\cite{yin2021center}) to the nuScenes evaluation server\\footnote{\\url{https:\/\/www.nuscenes.org\/object-detection\/}}. The detailed results are listed in Tab.~\\ref{tab:test_tabel}. To be clear, only methods with publications are compared here due to space limitations. From this table, we can see that ``NDS'' and ``mAP'' are improved by 3.1 and 6.0 points respectively compared with the baseline method CenterPoint \\cite{yin2021center}. More importantly, our algorithm outperforms the previous multi-modal method 3DCVF \\cite{DBLP:conf\/eccv\/YooKKC20} by a large margin, \\textit{i.e.}, by 8.1 and 13.6 points in terms of the NDS and mAP metrics.\n\n\\textbf{Qualitative Results on nuScenes Dataset}\n\nMore qualitative detection results are illustrated in Fig.~\\ref{Fig:detection result}, where (a) shows the annotated ground truth and (b) shows the detection results of CenterPoint based on the raw point cloud without using any painting strategy.\n
(c) and (d) show the detection results with 2D painted and 3D painted point clouds, respectively, while (e) is the results based on our proposed framework. \nAs shown in the figure, there are false positive detection results caused 2D painting due to frustum blurring effect, while 3D painting method produces worse foreground class segmentation compared to 2D image segmentation. Instead, our FusionPainting can combine two complementary information from 2D and 3D segmentation, and detect objects more accurately.\n\n\n\n \n\\begin{table*}[ht!]\n \\centering\n \\renewcommand\\arraystretch{1.2}\n\\resizebox{0.95\\textwidth}{!}{\n\n\\begin{tabular}{ l| c| c| c| c c c c c c c c c c}\n \\hline\n \\multicolumn{1}{c|}{\\multirow{2}{*}{Methods}} & \n \\multicolumn{1}{c|}{\\multirow{2}{*}{Modality}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{NDS}(\\%)}} & \n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{mAP}(\\%)}} & \n \\multicolumn{10}{c}{\\textbf{AP} (\\%)} \\\\ \n \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{l}{Car} & \\multicolumn{1}{l}{Truck} & \\multicolumn{1}{l}{Bus} & \\multicolumn{1}{l}{Trailer} & \\multicolumn{1}{l}{Cons} & \\multicolumn{1}{l}{Ped} & \\multicolumn{1}{l}{Moto} & \\multicolumn{1}{l}{Bicycle} & \\multicolumn{1}{l}{T.C} & \\multicolumn{1}{l}{Barrier} \\\\ \\hline\n PointPillars \\cite{vora2020pointpainting} & L & 55.0 & 40.1 & 76.0 & 31.0 & 32.1 & 36.6 & 11.3 & 64.0 & 34.2 & 14.0 & 45.6 & 56.4 \\\\\n 3DSSD \\cite{yang20203dssd} & L & 56.4 & 46.2 & 81.2 & 47.2 & 61.4 & 30.5 & 12.6 & 70.2 & 36.0 & 8.6 & 31.1 & 47.9 \\\\\n \n CBGS \\cite{zhu2019class} & L & 63.3 & 52.8 & 81.1 & 48.5 & 54.9 & 42.9 & 10.5 & 80.1 & 51.5 & 22.3 & 70.9 & 65.7 \\\\\n \n HotSpotNet \\cite{chen2020object} & L & 66.0 & 59.3 & 83.1 & 50.9 & 56.4 & 53.3 & {23.0} & 81.3 & {63.5} & 36.6 & 73.0 & \\textbf{71.6} \\\\\n CVCNET\\cite{chen2020every} & L & 66.6 & 58.2 & 82.6 & 49.5 & 59.4 & 51.1 & 16.2 & {83.0} & 61.8 & {38.8} & 69.7 & 69.7 \\\\\n CenterPoint \\cite{yin2021center} & L & {67.3} & {60.3} & {85.2} & {53.5} & {63.6} & {56.0} & 20.0 & 54.6 & 59.5 & 30.7 & ne{78.4} & 71.1 \\\\\n PointPainting \\cite{vora2020pointpainting} & L \\& C & 58.1 & 46.4 & 77.9 & 35.8 & 36.2 & 37.3 & 15.8 & 73.3 & 41.5 & 24.1 & 62.4 & 60.2 \\\\\n 3DCVF \\cite{DBLP:conf\/eccv\/YooKKC20} & L \\& C & 62.3 & 52.7 & 83.0 & 45.0 & 48.8 & 49.6 & 15.9 & 74.2 & 51.2 & 30.4 & 62.9 & 65.9 \\\\ \\hline\n \n \n {FusionPainting(Ours)} & L \\& C & \\textbf{70.4} & \\textbf{66.3} & \\textbf{86.3} & \\textbf{58.5} & \\textbf{66.8} & \\textbf{59.4} & \\textbf{27.7} & \\textbf{87.5} & \\textbf{71.2} & \\textbf{51.7} & \\textbf{84.2} & {70.2} \\\\ \\hline\n \n \\end{tabular}\n}\n\\caption{Comparison with other SOTA methods on the nuScenes 3D object detection benchmark. ``L'' and ``C'' in the modality column represent lidar and camera respectively. For each column, we use ``underline'' to illustrate the best results of other methods and we give the improvements of our method compared to other best performance. 
Here, we only list all the methods results with publications.}\n\\label{tab:test_tabel}\n\\end{table*}\n\\section{Introduction}\n\n \n\n\n\n\\input{Files\/01_introduction}\n\\input{Files\/02_related_work}\n\\input{Files\/03_methods}\n\\input{Files\/04_experiments}\n\\input{Files\/05_ablation_study}\n \n\n\n\n\n\n\n\n\n\\section{Conclusion and Future Works}\n\n{In this work, we proposed an effective framework \\textit{Multi-Sem Fusion} to fuse the RGB image and LiDAR point clouds in two different levels. For the first level, the proposed AAF module aggregates the semantic information from both the 2D image and 3D point clouds segmentation results adaptively with learned weight scores. For the second level, a DFF module is proposed to further fuse the boosted feature maps with different receptive fields by a channel attention module. Thus, the features can cover objects of different sizes. More importantly, the proposed modules are detector independent which can be seamlessly employed in different frameworks. The effectiveness of the proposed framework has been evaluated on public benchmark and outperforms the state-of-the-art approaches. However, the limitation of the current framework is also obvious that both the 2D and 3D parsing results are obtained by offline approaches which prevent the application of our approach in real-time AD scenarios. An interesting research direction is to share the backbone features for both object detection and segmentation and take the segmentation as an auxiliary task.} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Discussion and Conclusion}\nWe presented a counterfactual-based attribution method to explain changes in a large-scale ad system's output metric. Using the computational structure of the system, the method provides attribution scores that are more accurate than prior methods.\n\n\n\n\\bibliographystyle{ACM-Reference-Format}\n\n\\section{Defining the attribution problem}\nFor a system's outcome metric $Y$, let $Y=y_t$ be a value that needs to be explained (e.g., an extreme value). Our goal is to explain the value by \\textit{attributing} it to a set of input variables, $\\bm{X}$. Can we rank the variables by their contribution in \\textit{causing} the outcome? \n\nFor example, consider a system that crashes whenever its load crosses 0.9 units. The system's crash metric can be described by the following structural equations, $Y=I_{Load>=0.9}; Load=0.5X_1+ 0.4X_2+0.9X_3; X_i= Bernoulli(0.5) \\forall i$. The corresponding graph for the system has the following edges: ${X_1, X_2, X_3} \\rightarrow Load; Load \\rightarrow Y$. The value of each input $X_i$ is affected by the independent error terms through the Bernoulli distribution. Suppose the initial reference value was $(X_1=0,X_2=0, X_3=0, Y=0)$ and the next observed value is $(X_1=1,X_2=1, X_3=1, Y=1)$. Given that the system crashed ($Y=1$), how do we attribute it to $X_1, X_2, X_3$? \nIntuitively, $X_3$ is a sufficient cause of the crash since changing $X_3=1$ would lead to the crash irrespective of values of other variables. However, $X_1$ and $X_2$ can be equally a reason for this particular crash since their coefficients sum to $0.9$. \nHowever, if either of $X_1$ or $X_2$ are observed to be zero, then the other one cannot explain the crash. 
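\n\nFor concreteness, these counterfactual quantities can be computed directly from the structural equations of the toy example; the short Python sketch below is purely illustrative and the variable names are ours.\n\\begin{verbatim}\n# Toy SCM from the example: Load = 0.5*X1 + 0.4*X2 + 0.9*X3,\n# and the system crashes (Y = 1) whenever Load >= 0.9.\ndef outcome(x1, x2, x3):\n    return int(0.5 * x1 + 0.4 * x2 + 0.9 * x3 >= 0.9)\n\nobserved  = {'x1': 1, 'x2': 1, 'x3': 1}   # Y = 1 (the crash)\nreference = {'x1': 0, 'x2': 0, 'x3': 0}   # Y = 0\n\ndef counterfactual(reverted):\n    # Set the chosen inputs to their reference values, keep the rest observed.\n    vals = {k: (reference[k] if k in reverted else observed[k]) for k in observed}\n    return outcome(**vals)\n\nprint(counterfactual({'x3'}))        # 1: X1 and X2 still sum to 0.9, crash persists\nprint(counterfactual({'x1', 'x2'}))  # 1: X3 alone still causes the crash\nprint(counterfactual({'x2', 'x3'}))  # 0: X1 alone cannot reach the threshold\nprint(counterfactual({'x1', 'x2', 'x3'}))  # 0: the reference outcome is recovered\n\\end{verbatim}\n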
\nThis example indicates that the attribution for any input variable depends on the equations of the data-generating process and also on the values of other variables.\n\\subsection{Attribution score for system metric change}\nWe now define the \\textit{attribution score} for explaining an observed value wrt a reference value. While system inputs can be continuous, we utilize the fact that system metrics are measured and compared over time. That is, we are often interested in attribution for a metric value compared to an reference timestamp. Reference values are typically chosen from previous values that are expected to be comparable (E.g., metric value at last hour or last week). \nBy comparing to a reference timestamp, we simplify the problem by considering only two values of a continuous variable:\nits \\textit{observed} value, and its value on the\\textit{ reference} timestamp. \n\nFormally, we express the problem of attribution of an outcome metric, $Y=y_t$ as explaining change in the metric wrt. a reference, $\\Delta Y = y_t - y'$: Why did the outcome value change from $y'$ to $y_t$? \n\n\\begin{definition}\\label{def:attr-problem}\n\\textbf{Attribution Score. }\nLet $Y=y_t$ and $Y=y'$ be the observed and reference values respectively of a system metric. Let $\\bm{V}$ be the set of input variables. Then, an \\textit{attribution score} for $X \\in \\bm{V}$ provides the contribution of $X$ in causing the change from $y'$ to $y_t$.\n\\end{definition}\n\n\n\n\n\\subsection{The need for SCM and counterfactuals}\nTo estimate the \\textit{causal} contribution, we need to model the\ndata-generating process from input variables to the outcome. This is usually done by a structural causal model (SCM) $M$, that consists of a causal graph and structural equations describing the generating functions for each variable. \n\n\n\n\n\n\\noindent \\textbf{SCM. } Formally, a structural causal model~\\cite{pearl2009book} is defined by a tuple $\\langle \\bm{V}, \\bm{U}, \\bm{F}, P(\\bm{u}) \\rangle$ where $\\bm{V}$ is the set of observed variables, $\\bm{U}$ refer to the unobserved variables, $\\bm{F}$ is a set of functions, and $P(\\bm{U})$ is a strictly positive probability measure for $\\bm{U}$. For each $V\\in \\bm{V}$, $f_V \\in \\bm{F}$ determines its data-generating process, \n$V=f_V(\\operatorname{Pa}_V, U_V)$ where $\\operatorname{Pa}_V \\subseteq \\bm{V} \\setminus \\{V\\}$ denotes \\textit{parents} of $V$ and $U_V\\subseteq \\bm{U}$. We consider a non-linear, additive noise SCM such that $\\forall V \\in \\bm{V}$, $f_V$ can be written as a additive combination of some $f^*_V(\\operatorname{Pa}_V)$ and the unobserved variables (error terms). We assume a Markovian SCM such that unobserved variables (corresponding to error terms) are mutually independent, thus the SCM corresponds to a directed acyclic graph (DAG) over $\\bm{V}$ with edges to each node from its parents. Note that a specific realization of the unobserved variables, $\\bm{U}=\\bm{u}$ determines the values of all other variables.\n\n\\noindent \\textbf{Counterfactual. }Given an SCM, values of unobserved variables $\\bm{U}=\\bm{u}$, a target variable $Y\\in \\bm{V}$ and a subset of inputs $\\bm{X} \\subseteq \\bm{V}\\setminus \\{Y\\}$, a counterfactual corresponds to the query, \\textit{``What would have been the value of $Y$ (under $\\bm{u}$), had $\\bm{X}$ been $\\bm{x}$}''. It is written as $Y_{\\bm{x}}(\\bm{u})$.\n\nUsing counterfactuals, we can formally express the attribution question in the the above example. 
Suppose the observed values are $Y=y_t$ and $X_i=x_i$ for some input $X_i$, under $\\bm{U}=\\bm{u}$. At an earlier reference timestamp with a different value of the unobserved variables, $\\bm{U}=\\bm{u}'$, the values are $Y=y'$ and $X_i=x'_i$. Starting from the observed value ($\\bm{U}=u$), the attribution for $X_i$ is characterized by the change in $Y$ after changing $X_i$ to its reference value, $Y_{x_i}(\\bm{u}) - Y_{x'_i}(\\bm{u})= y_t - Y_{x'_i}(\\bm{u})$.\nThat is, given that $Y$ is $y_t$ with $X_i=x_i$ and all other variables at their observed value, how much would $Y$ change if $X_i$ is set to $x'_i$?\nSimilarly, we can ask, $Y_{x_i, x'_1}(\\bm{u}) - Y_{x'_i, x'_1}(\\bm{u})$ ($i \\neq 1$), denoting the change in $Y$'s value upon setting $X=x_i$ when $X_1$ is set to its reference values. Thus, there can be multiple expressions to determine the counterfactual impact of $X_i$ depends on the values of other variables. \n\n\n\n\\section{Attribution using CF-Shapley value}\nTo develop an attribution score, we propose a way to average over the different possible counterfactual impacts. First, we posit desirable axioms that an attribution score should satisfy, as in~\\cite{lundberg2017unified,pmlr-v162-jung22a}.\n\\subsection{Desirable axioms for an attribution score} \\label{sec:axioms}\n\\begin{axiom*} Given two values of the metric, observed, $Y(\\bm{u})$ and reference, $Y(\\bm{u}')$, corresponding to unobserved variables, $\\bm{u}$ and $\\bm{u}'$ respectively, following properties are desirable for an attribution score $\\bm{\\phi}$ that measures the causal contribution of inputs $V \\in \\bm{V}$.\n\\begin{enumerate}\n \\item \\textbf{CF-Efficiency. } The sum of attribution scores for all $V \\in \\bm{V}$ equals the counterfactual change in output from reference to observed value, $Y(\\bm{u}) -Y_{\\bm{v}'}(\\bm{u})=Y_{\\bm{v}}(\\bm{u}') -Y(\\bm{u}')=\\sum_V \\phi_V$.\n \\item \\textbf{CF-Irrelevance}. If a variable $X$ has no effect on the counterfactual value of output under all witnesses, $Y_{x',\\bm{s}'}(\\bm{u})=Y_{\\bm{s}'}(\\bm{u}) \\forall \\bm{S} \\subseteq \\bm{V} \\setminus \\{X\\}$, then $\\phi_X=0$.\n \\item \\textbf{CF-Symmetry. } If two variables have the same effect on counterfactual value of output $Y_{\\bm{s}'}(\\bm{u})-Y_{x'_1,\\bm{s}'}(\\bm{u})=Y_{\\bm{s}'}(\\bm{u})-Y_{x'_2,\\bm{s}'}(\\bm{u}) \\forall \\bm{S} \\subseteq \\bm{V} \\setminus \\{X_1, X_2\\}$, then their attribution scores are same, $\\phi_{X_1}=\\phi_{X_2}$. \n \\item \\textbf{CF-Approximation.} For any subset of variables $\\bm{S} \\subseteq \\bm{V}$ set to their reference values $\\bm{s}'$, the sum of attribution scores approximates the counterfactual change from observed value. I.e., there exists a weight $\\omega(\\bm{S})$ s.t. the vector $\\bm{\\phi}_{\\bm{S}}$ is the solution to the weighted least squares, $\\arg \\min_{\\bm{\\phi}^*_{\\bm{S}}} \\sum_{\\bm{S} \\subseteq \\bm{V}} \\omega(\\bm{S}) ((Y(\\bm{u}) - Y_{\\bm{s}'}(\\bm{u})) -\\sum_{S \\in \\bm{S}} \\phi^*_{S})^2$. \n\\end{enumerate}\n\\end{axiom*}\nSimilar to shapley value axioms, these axioms convey intuitive properties that a \\textit{counterfactual} attribution score should satisfy. \\textit{CF-Efficiency} states the sum of attribution scores for inputs should equal the difference between the observed metric and the counterfactual metric when all inputs are set to their reference values. 
\\textit{CF-Irrelevance} states that if changing the value of an input $X$ has no effect on the output counterfactual under all values of other variables, then the Shapley value of $X$ should be zero. \\textit{CF-Symmetry }states that if changing the value of two inputs has the same effect on the counterfactual output under all values of the other variables, then both variables should have an identical attribution score. And finally, \\textit{CF-Approximation} states the difference between the observed output and the counterfactual output due to a change in any subset of variables is roughly equal to the sum of attribution scores for those variables.\n\nNote that \\textit{CF-Efficiency} does not necessarily imply that the sum of attribution scores is equal to the actual difference between the observed value and reference value. This is because the actual difference is a combination of the input variables' contribution and statistical noise (error terms). That is, $y_t - y' = Y_{\\bm{v}}(\\bm{u}) - Y_{\\bm{v}'}(\\bm{u}') = \\sum_V \\phi_V + (Y_{\\bm{v}'}(\\bm{u})- Y_{\\bm{v}'}(\\bm{u}'))$, where we used the CF-Efficiency property for a desirable attribution score $\\bm{\\phi}$. The second term corresponds to the difference in metric with the same input variables but different noise corresponding to the observed and reference timestamps.\nThis is the unavoidable noise component since we are explaining the change due to a \\textit{single} observation.\nTherefore, for any counterfactual attribution score to meaningfully explain the observed difference, it is useful to select a reference timestamp to minimize the difference over exogenous factors (e.g., using a previous value of the metric on the same day of week or same hour). Given the true structural equations and an attribution score that satisfies the axioms, if the scores do sum to the observed difference in a metric, then it implies that reference timestamp was well-selected.\n\n\\subsection{The CF-Shapley value}\nWe now define the {\\textit{CF-Shapley}}\\xspace value that satisfies all four axioms.\n\\begin{definition}\nGiven an observed output metric $Y=y_t$ and a reference value $y'$, the CF-Shapley value for contribution by input $X$ is given by, \n\\begin{equation} \\label{eq:cf-shap}\n \\phi_X = \\sum_{\\bm{S} \\subseteq \\bm{V} \\setminus \\{X\\}}\\frac{Y_{\\bm{s}'}(\\bm{u})- Y_{x',\\bm{s}'}(\\bm{u}) }{n C(n-1, |\\bm{S}|)}\n\\end{equation}\nwhere $n$ is the number of input variables $\\bm{V}$, $\\bm{S}$ is the subset of variables set to their reference values $\\bm{s}'$, and $\\bm{U}=\\bm{u}$ is the value of unobserved variables such that $Y(\\bm{u})=y_t$.\n\\end{definition}\n\\begin{proposition}\n{\\textit{CF-Shapley}}\\xspace value satisfies all four axioms, Efficiency, Irrelevance, Symmetry and Approximation.\n\\end{proposition}\n\\begin{proof}\n\\textit{Efficiency.} Following ~\\cite{pmlr-v162-jung22a,vstrumbelj2014explaining}, the {\\textit{CF-Shapley}}\\xspace value for an input $V_i$ can be written as, \n\\begin{equation}\n \\phi_{V_i}= \\frac{1}{n!} \\sum_{\\pi \\in \\Pi(n)} Y_{\\bm{w}'_{pre}(\\pi)}(\\bm{u}) - Y_{v'_i, \\bm{w}'_{pre}(\\pi)}(\\bm{u}) \n\\end{equation}\nwhere $\\Pi$ is the set of all permutations over the $n$ variables and $\\bm{W}_{pre}(\\pi)$ is the subset of variables that precede $V_i$ in the permutation $\\pi \\in \\Pi$. 
The sum is, \n\\begin{equation}\n \\begin{split}\n &\\sum_{i=1}^{n} \\phi_{V_i} = \\frac{1}{n!} \\sum_{\\pi \\in \\Pi(n)} \\sum_{i=0}^{n} Y_{\\bm{w}'_{pre}(\\pi)}(\\bm{u}) - Y_{v'_i, \\bm{w}'_{pre}(\\pi)}(\\bm{u}) \\\\\n \n &=\\frac{1}{n!} \\sum_{\\pi \\in \\Pi(n)} Y_{\\emptyset}(\\bm{u}) - Y_{\\bm{v}'}(\\bm{u}) \\\\\n &= Y(\\bm{u}) - Y_{\\bm{v}'}(\\bm{u})\n \\end{split}\n\\end{equation}\nWe can show it analogously under $\\bm{U}=\\bm{u}'$. \n\\\\\n\\textit{CF-Irrelevance.}\nIf $Y_{x', \\bm{s}'}(\\bm{u}) = Y_{\\bm{s}'}(\\bm{u}) \\forall \\bm{S} \\subseteq \\bm{V} \\setminus \\{X\\}$, then the numerator in Eqn.~\\ref{eq:cf-shap} for $\\phi_X$, will be zero and the result follows. \n\\\\\n\\textit{CF-Symmetry.}\nAssuming same effect on counterfactual value, we write the {\\textit{CF-Shapley}}\\xspace value for $V_i$ and show it is the same for $V_j$. \n\\begin{equation*}\n\\begin{split}\n &\\phi_{V_i} = \\sum_{\\bm{W}\\subseteq \\bm{V} \\setminus \\{V_i\\}}\\frac{Y_{\\bm{w}'}(\\bm{u})- Y_{v'_i,\\bm{w}'}(\\bm{u}) }{n C(n-1, |\\bm{W}|)}\\\\\n &= \\sum_{\\bm{W}\\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}}\\frac{Y_{\\bm{w}'}(u)- Y_{v'_i,\\bm{w}'}(u)}{n C(n-1, |\\bm{W}|)} + \\sum_{\\bm{Z}\\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}} \\frac{Y_{v'_j,\\bm{z}'}(u)- Y_{v'_i,v'_j,\\bm{z}'}(u)}{n C(n-1, |\\bm{Z}|+1)} \\\\ \n &= \\sum_{\\bm{W}\\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}}\\frac{Y_{\\bm{w}'}(\\bm{u})- Y_{v'_j,\\bm{w}'}(\\bm{u})}{n C(n-1, |\\bm{W}|)} + \\sum_{\\bm{Z}\\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}} \\frac{Y_{v'_i,\\bm{z}'}(\\bm{u})- Y_{v'_i,v'_j,\\bm{z}'}(\\bm{u})}{n C(n-1, |\\bm{Z}|+1)} \\\\\n &= \\sum_{\\bm{W}\\subseteq \\bm{V} \\setminus \\{V_j\\}}\\frac{Y_{\\bm{w}'}(\\bm{u})- Y_{v'_j,\\bm{w}'}(\\bm{u})}{n C(n-1, |\\bm{W}|)} = \\phi_{V_j}\n\\end{split}\n\\end{equation*}\nwhere the third equality uses $Y_{v'_i,\\bm{s}'}(\\bm{u})=Y_{v'_j,\\bm{s}'}(\\bm{u}) \\text{\\ } \\forall \\bm{S} \\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}$.\n\\\\\n\\textit{CF-Approximation.} Here we use a property~\\cite{lundberg2017unified} on value functions of standard Shapley values. There exists specific weights $\\omega(S)$ such that the Shapley value is the solution to $\\arg \\min_{\\bm{\\phi}^*_{\\bm{S}}} \\sum_{\\bm{S}\\subseteq \\bm{V}}\\omega(\\bm{w}) (\\nu(S) -\\sum_{\\bm{s} \\in \\bm{S}} \\phi^*_w)^2$ where $\\nu(\\bm{S})$ is the value function of any subset $\\bm{S} \\subseteq \\bm{V}$. The result follows by selecting $\\nu(\\bm{S})=Y(\\bm{u})-Y_{\\bm{s}'}(\\bm{u})$. \n\\end{proof}\n\n\n\\noindent \\textbf{Comparison to do-shapley. }\nUnlike {\\textit{CF-Shapley}}\\xspace, the do-shapley value~\\cite{pmlr-v162-jung22a} takes the expectation over all values of the unobserved $\\bm{u}$, $\\bm{E}_{\\bm{u}}[Y|do(\\bm{S})] - \\bm{E}_{\\bm{u}}[Y]$. Thus, it measures the \\textit{average} causal effect over values of $\\bm{u}$, whereas for attributing a single observed value, we want to know the contributions of inputs under the same $\\bm{u}$.\n\n\\subsection{Estimating {\\textit{CF-Shapley}}\\xspace values}\n\\label{sec:est-cf}\nEqn.~\\ref{eq:cf-shap} requires estimation of counterfactual output at different (hypothetical) values of input, and in turn requires both the causal graph and the structural equations of the SCM. Using knowledge on the system's metric computation, the first step is to construct its computational graph. 
Then for each node in the graph, we fit its generating function using a predictive model over its parents, which we consider as the data-generating process (fitted SCM).\n\nTo fit the SCM equations, for each node $V$, a common way is to use supervised learning to build a model $\\hat{f}_V$ estimating its value using the values of its parent nodes at the same timestamp.\nHowever, such a model will have high variance due to natural temporal variation in the node's value over time. Since including variables predictive of the outcome reduces the variance of an estimate in general~\\cite{bottou2013counterfactual}, we utilize auto-correlation in time-series data to include the previous values of the node as predictive features. Thus, the final model is expressed as, $\\forall V \\in \\bm{V}$,\n\\begin{equation}\\label{eq:catden-model}\n \\hat{v}_t = \\hat{f}(\\operatorname{Pa}_V, v_{t-1},v_{t-2} \\cdots , v_{t-r})\n\\end{equation}\nwhere $r$ is the number of auto-correlated features that we include. \nThe model can be trained using a supervised time-series prediction algorithm with auxiliary features, such as DeepAR~\\cite{salinas2020deepar}.\n\n\n\n\nWe then use the fitted SCM equations to estimate the counterfactual with the 3-step algorithm from\nPearl~\\cite{pearl2009book}, assuming additive error.\nTo compute $Y_{\\bm{s}'}(\\bm{u})$ for any subset $\\bm{S} \\subseteq \\bm{V}$, the three steps are,\n\\begin{enumerate}\n \\item \\textbf{Abduction.} Infer error of structural equations on all observed variables. For each $V \\in \\bm{V}$, $\\hat{\\epsilon}_{v,t} = v_t - \\hat{f}_V(\\operatorname{Pa}(V), v_{t-1}..v_{t-r})$ where $v_t$ is the observed value at timestamp $t$.\n \\item \\textbf{Action.} Set the value of $\\bm{S}\\leftarrow s'$, ignoring any parents of $\\bm{S}$.\n \\item \\textbf{Prediction. } Use the inferred error term and new value of $s'$ to estimate the new outcome, by proceeding step-wise for each level of the graph~\\cite{pearl2009book,dash2022evaluating} (i.e., following a topological sort of the graph), starting with $\\bm{S}$'s children and proceeding downstream until $Y$ node's value is obtained. For each $X \\in \\bm{V}$ ordered by the topological sort of the graph (after $\\bm{S}$), $x'=\\hat{f}_X(Pa'(X), \\cdots) +\\hat{\\epsilon}_{x,t}$. And finally, we will obtain, $y'=\\hat{f}_Y(Pa'(Y), \\cdots) +\\hat{\\epsilon}_{y,t}$.\n\\end{enumerate}\n Thus, the {\\textit{CF-Shapley}}\\xspace score for any input is obtained by repeatedly applying the above algorithm and aggregating the required counterfactuals in Eqn.~\\ref{eq:cf-shap}; we use a common Monte Carlo approximation to sample a fixed number ($M=1000$) of values of $\\bm{S}$~\\cite{castro2009polynomial,fatima2008shapleycompute}. \n\n\n\\section{{\\textit{CF-Shapley}}\\xspace for Ad Matching system}\n\\section{Case study on ad matching system}\nWe now apply the {\\textit{CF-Shapley}}\\xspace attribution method on data logs of a real-world ad matching system from July 6 to Dec 28, 2021. For each query, we have log data on the number of ads matched by the system. In addition, each query is marked with its category. The category {query volume}\\xspace is measured as the number of queries issued by users for each category. This allows us to calculate the ground-truth matching density on each day, category-wise and aggregate. 
Separately, to compute the category-wise \\textit{input} ad demand for a day, we fetch each ad listing available on the day and assign it to a category if any query from that category contains a word that is present in its keywords. This is the total number of ad listings that are potentially relevant to the category's queries under the exact matching algorithm (which matches the full query exactly to the full ad keyword phrase).\n\n\\subsection{Implementing {\\textit{CF-Shapley}}\\xspace: Fitting the SCM}\n\\label{sec:fit-scm}\nWe follow the method outlined in Section~\\ref{sec:est-cf}. The main task is to estimate the structural equations for category-wise ad density.\nThere are over 250 categories; fitting a separate model for each is not efficient. Besides, it may be beneficial to exploit the common patterns in the different time series. We therefore consider a deep learning-based model, DeepAR~\\cite{salinas2020deepar}, which fits a single recurrent network over multiple time series (we also tried a transformer-based model, the temporal fusion transformer (TFT)~\\cite{lim2021temporal}, but found it hard to tune to obtain comparable accuracy). As specified in Equation~\\ref{eq:catden-model}, for each category, the DeepAR model is given {ad demand}\\xspace, {query volume}\\xspace and the autoregressive values of density for the past 14 days. Note that rather than predicting over a range of days (which can be inaccurate), we fit the time-series model separately for each day using data up to its $(t-1)$-th day, to utilize the additional information available from the previous day. To implement DeepAR, we used the open-source GluonTS library.\n\nWe compare the DeepAR model to three baselines. As simple baselines that capture the weekly pattern, we consider \\textbf{1)} the category density on the same day a week before; and \\textbf{2)} the average density over the last four weeks. We also consider a 3-layer feed-forward network that uses the same features as DeepAR. Table~\\ref{tab:deepar-acc} shows the prediction error. The DeepAR model obtains the lowest error on the validation set according to all three metrics: mean absolute percentage error (MAPE), median APE, and the symmetric MAPE~\\cite{makridakis2020m4}.\nFor our results, we choose DeepAR as the estimated SCM equation and apply {\\textit{CF-Shapley}}\\xspace on data from Nov 15 to Dec 28. We chose Nov 15 to allow sufficient days of training data.\n\n\\noindent \\textbf{Choosing reference timestamp. }\nThe {\\textit{CF-Shapley}}\\xspace method requires specifying a reference day that provides the ``expected\/usual'' density value. Common ways to choose it are the last day's value or the value on the same day last week. We choose the latter due to known weekly patterns in the density metric.
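\n\nTo make the estimation procedure concrete, the simplified Python sketch below (our illustration; the fitted DeepAR equations are abstracted as generic callables and all names are placeholders) combines the abduction, action and prediction steps of Section~\\ref{sec:est-cf} with the Monte Carlo sampling over permutations used to approximate the {\\textit{CF-Shapley}}\\xspace scores of Eqn.~\\ref{eq:cf-shap}.\n\\begin{verbatim}\nimport random\n\ndef counterfactual_outcome(fitted, order, observed, reference, reverted):\n    # 'fitted' maps each non-input node to a callable that returns the\n    # prediction of its structural equation given a dict of node values;\n    # 'order' lists these nodes in topological order, ending with 'Y'.\n    # Abduction: residuals of the fitted equations on the observed day.\n    residual = {v: observed[v] - fitted[v](observed) for v in order}\n    # Action: set the chosen inputs to their reference-day values.\n    values = dict(observed)\n    values.update({x: reference[x] for x in reverted})\n    # Prediction: propagate through the graph, re-using the residuals.\n    for v in order:\n        values[v] = fitted[v](values) + residual[v]\n    return values['Y']\n\ndef cf_shapley(fitted, order, observed, reference, inputs, n_samples=1000):\n    # Monte Carlo estimate: average, over random permutations of the input\n    # list, of the change in the counterfactual outcome when each input is\n    # additionally reverted to its reference value.\n    scores = {x: 0.0 for x in inputs}\n    for _ in range(n_samples):\n        perm = random.sample(inputs, len(inputs))\n        reverted = set()\n        before = counterfactual_outcome(fitted, order, observed, reference, reverted)\n        for x in perm:\n            reverted.add(x)\n            after = counterfactual_outcome(fitted, order, observed, reference, reverted)\n            scores[x] += (before - after) / n_samples\n            before = after\n    return scores\n\\end{verbatim}\n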
\n\n\begin{table}[tb]\n \centering\n \resizebox{0.7\columnwidth}{!}{%\n \begin{tabular}{lccc}\n \toprule\n Model & Mean APE (\%) & Median APE (\%) & sMAPE\\\n \midrule\n LastWeek & 21.2 & 11.5 & 0.20\\\n Avg4Weeks & 25.1 & 10.6 & 0.17 \\\n FeedForward & 20.0 & 10.8 & 0.20\\\n DeepAR & \textbf{15.6} & \textbf{7.8} & \textbf{0.13} \\\n \bottomrule\n \end{tabular}}\n \caption{Accuracy of category-wise density prediction models.}\n \label{tab:deepar-acc}\n \vspace{-3em}\n\end{table}\n\n\begin{figure}[tb]\n \centering\n \includegraphics[width=0.7\columnwidth]{figures\/shapley_sum_validate.pdf}\n \caption{Comparison of actual percentage change in daily density with sum of estimated {\textit{CF-Shapley}}\xspace attribution scores. }\n \label{fig:shapley-valid}\n\end{figure}\n\n\n\n\n\n\subsection{Validating the CF-Efficiency axiom}\n We first check whether the obtained {\textit{CF-Shapley}}\xspace scores sum up to the observed percentage change in the daily density metric (Figure~\ref{fig:shapley-valid}). The difference between the sum of {\textit{CF-Shapley}}\xspace scores and the actual change is less than 0.10\% for all days, indicating that our choice of reference timestamp is appropriate (Sec.~\ref{sec:axioms}) and that the approximate Shapley value computation is capturing the relevant signal.\n\n\subsection{Choosing dates for evaluation}\nWhile we computed attribution scores for all days, typically one is interested in attribution for unexpected values of daily density.\n\nTo discover unusual days for attribution, we fit a standard time-series model to the aggregate daily density data.\nWe use four candidate models: \textbf{1)} the daily density on the same day last week; \textbf{2)} the mean density of the last 4 weeks; \textbf{3)} a feed-forward network; and \textbf{4)} the DeepAR model. As for the category-wise prediction, all neural network models are provided the last 14 days of daily density. Table~\ref{tab:agg-acc} shows the mean APE, median APE, and sMAPE. The feed-forward model obtains the lowest error. While DeepAR is a more expressive model than FeedForward, a potential reason for its lower accuracy is the number of training samples (there are only as many data points as days for daily density prediction, unlike category-wise prediction). \nGiven its simplicity, we use the FeedForward network for detecting outlier days. Its predictions for different days and the detected outliers can be seen in Figure~\ref{fig:ff-preds}. 
Like DeepAR, the feed-forward model is implemented as a Bayesian probabilistic model, so it outputs prediction samples rather than a point prediction.\n\n\n\begin{table}[tb]\n \centering\n \resizebox{0.7\columnwidth}{!}{%\n \begin{tabular}{lccc}\n \toprule\n Model & Mean APE (\%) & Median APE (\%) & sMAPE \\\n \midrule\n LastWeek & 4.6 & 3.1 & 0.047\\\n Last4Weeks & 4.5 & 3.3 & 0.047\\\n FeedForward & \textbf{3.0} & \textbf{2.2} & \textbf{0.031} \\\n DeepAR & 3.4 & 2.4 & 0.035\\\n \n \bottomrule\n \end{tabular}}\n \caption{Prediction error for daily density models.}\n \label{tab:agg-acc}\n \vspace{-2em}\n\end{table}\n\n\n\begin{figure}[tb]\n \centering\n \includegraphics[width=0.65\columnwidth]{figures\/ff_pred_pattern.pdf}\n \caption{Outliers detected using the FeedForward model's predictions. Shaded region represents the 95\% prediction interval.}\n \label{fig:ff-preds}\n\end{figure}\n\n\begin{figure}[tb]\n \centering\n \includegraphics[width=0.8\columnwidth]{figures\/cat_contrib_Dec4.pdf}\n \caption{Attribution scores of different categories by {ad demand}\xspace and {query volume}\xspace on December 4.}\n \label{fig:cat-contrib}\n\end{figure}\n\n\begin{table}\n \centering\n \resizebox{0.85\columnwidth}{!}{%\n \begin{tabular}{lcc}\n \toprule\n Category & AdDemandAttrib & QueryVolumeAttrib \\\n \midrule\n \textit{Sort by AdDemandAttrib} & &\\\n Internet \& Telecom & \textbf{0.0450} & 0.00850 \\\n Apparel & -0.00843 & 0.00151 \\\n Arts \& Entertainment & 0.00663 & -0.00928 \\\n Hobbies \& Leisure & 0.00646 & -0.0168 \\\n Travel \& Tourism & -0.00584 & -0.000287 \\\n \midrule\n \textit{Sort by QueryVolumeAttrib} & &\\\n Hobbies \& Leisure & 0.00646 & \textbf{-0.0168} \\\n Arts \& Entertainment & 0.00633 & -0.00928 \\\n Internet \& Telecom & 0.0450 & 0.00850 \\\ \n Law \& Government & 0.000203 & -0.00743 \\\n Health & 0.000161 & -0.00645 \\\n \bottomrule\n \end{tabular}}\n \caption{Ad demand and {query volume}\xspace attribution on Dec 4. }\n \label{tab:dec4-attr}\n \vspace{-3em}\n\end{table}\n\n\nDays where the daily density goes beyond the 95\% prediction interval are chosen for attribution. A visual inspection shows two clusters, Thanksgiving\/Black Friday and Christmas, which are expected due to their significance in the US. We also find an extreme value on Dec. 4. In all three cases, the daily density increases. Intuitively, one may have expected the opposite for holidays: density would decrease since people are expected to spend less time online. \n\n\subsection{Qualitative analysis}\nWe now use {\textit{CF-Shapley}}\xspace to explain these observed changes. \n\n\noindent \textbf{December 4. }\nFigure~\ref{fig:cat-contrib} shows the attribution by different categories, aggregated up to obtain 22 high-level categories. The \textit{Internet \& Telecom} (IT) category has the biggest positive attribution score while the \textit{Hobbies \& Leisure} (HL) category has the biggest negative attribution score. That is, the HL category pulled the daily density down on that day. \n\nTo understand why, we look at the attribution scores separately for {ad demand}\xspace and {query volume}\xspace for each category in Table~\ref{tab:dec4-attr}. The attribution score reflects the percentage change in daily density compared to last week, due to {ad demand}\xspace or {query volume}\xspace of a category. 
The only categories to have an attribution score greater than 1\% are \textit{IT} and \textit{HL}, agreeing with the category-wise analysis. Specifically, the change in {ad demand}\xspace due to \textit{IT} leads to a 4.5\% increase in daily density. The {query volume}\xspace change in \textit{HL}, on the other hand, leads to a 1.7\% decrease in daily density. Considering all categories together, {ad demand}\xspace change leads to a 6.5\% increase in daily density and {query volume}\xspace change leads to a 5.6\% decrease. The net result is a 1\% improvement over the last week. While an increase of 1\% in daily density may look small, note that the value last week was already inflated due to it being a Black Friday week. This is why we detect outliers using the expected time-series pattern rather than simply the difference from last week. On such days, one may also consider an alternative baseline, e.g., two weeks before.\n\nAre the attributions meaningful? In the absence of ground-truth, we dive deeper into the query logs to check for evidence. We do find a significant increase in queries for the \textit{HL} category. In fact, more than 70\% of the increase in {query volume}\xspace for \textit{HL} is due to cheetah-related queries. On manual inspection, we find that December 4 is \textit{International Cheetah Day}. Cheetah-related queries also contribute to 86\% of the {ad demand}\xspace increase for the \textit{HL} category. \nGiven that the category density of \textit{HL} is much lower than the daily density, this increase in {query volume}\xspace causes a \textit{decrease} in daily density, leading to the negative attribution score. Due to the increase in {ad demand}\xspace (perhaps in anticipation of Cheetah Day), the \textit{HL} category also contributes a 0.6\% increase in daily density (see Table~\ref{tab:dec4-attr}). On the other hand, the \textit{IT} category's main contribution is from an increase in ad demand. Logs show a substantial (14\%) increase in ads compared to last week for the category on Dec 4, which explains its high attribution score for {ad demand}\xspace.\nThis increase is sustained across queries, possibly indicating a shift for the first Saturday after the holiday weekend. \n\n\noindent \textbf{Nov 25 and 26 (Thanksgiving).} On the Thanksgiving holiday (Nov 25), we may have expected density to drop since many people in the US are expected to spend more time with their family and less time online. At the same time, online shopping on Black Friday (Nov 26) may increase density. Instead, we find that the density increases significantly on \textit{both} days (see Figure~\ref{fig:ff-preds}). Specifically, compared to last week, daily density on Nov 26 increased by 18.3\%, out of which 13.5\% is contributed by {query volume}\xspace change and 4.8\% by {ad demand}\xspace. How can we explain this result? Using the {\textit{CF-Shapley}}\xspace method, for {query volume}\xspace change, we find that the categories \textit{Health}, \textit{Law and Government} and \textit{Business \& Industrial} are the top-ranked categories. Each contributes more than 2\% of the density increase, leading to a cumulative 7\% increase. From the logs, we see that {query volume}\xspace for these categories decreased as people spent less time on work- or health-related queries. Since these categories tend to have low density, the daily density increased as a result. 
On the {ad demand}\xspace side, \textit{Online Media \& Ecommerce} contributed a nearly 3\% increase in daily density, perhaps due to increased demand for Black Friday shopping. \nNov 25 exhibits similar patterns for {query volume}\xspace. \n\n\noindent \textbf{Dec 24 and Dec 25 (Christmas).} On the Christmas days too, there is a significant increase in density. As on the Thanksgiving days, health and work-related queries are issued fewer times, leading to an overall increase in daily density (all three categories have attribution scores >1\%). However, we find that the top categories by {query volume}\xspace change are \textit{Hobbies \& Leisure} and \textit{Arts \& Entertainment}. Both categories experience a surge in query volume and, being high-density categories, cause a 2.1\% and 1.8\% increase in daily density, respectively. To explain this, we look at the query logs and find that the rise in \textit{Hobbies \& Leisure} queries is fueled by the \textit{toys \& games} subcategory, which is aligned with expectations for the holidays. On Dec 25, \textit{Hobbies \& Leisure} is also the category with the highest attribution score by {ad demand}\xspace (2.7\%).\nOverall, the category contributes a 4.8\% increase, nearly one-third of the total density increase on Christmas day, signifying the importance of the \textit{toys \& games} subcategory for Christmas.\n\n\n\n\section{Introduction}\nIn large-scale systems, a common problem is to explain the reasons for a change in the output, especially for unexpected and large changes. Explaining the reasons or attributing the change to input factors can help isolate the cause and debug it if the change is undesirable, or suggest ways to amplify the change if desirable. For example, in a distributed system, system failures~\cite{zhang2019inflection} or performance anomalies~\cite{nagaraj2012structured,attariyan2012xray} are important undesirable outcomes. In online platforms such as e-commerce websites or search websites, a desirable outcome is an increase in revenue, and it is important to understand why revenue increased or decreased~\cite{singal2022shapley,dalessandro2012causally}.\n\nTechnically, this problem can be framed as an attribution problem~\cite{efron2020prediction,yamamoto2012understanding, dalessandro2012causally}. Given a set of candidate factors, which of them can best explain the observed change in output? Methods include statistical analysis based on conditional probabilities~\cite{berman2018beyondmta,ji2016probabilistic-mta,shao2011mta} or computation of game-theoretic attribution scores like the Shapley value~\cite{lundberg2017unified,singal2022shapley,dalessandro2012causally}. However, most past work assumes that the output can be written as a function of the inputs, ignoring any structure in the computation of the output. \n\n\begin{figure}\n \centering\n \includegraphics[width=0.8\columnwidth]{figures\/graph-atttribution-crop.pdf}\n \caption{Causal graph for an ad matching system that reflects computation of the matching density metric. For each query category, the number of queries (query volume) and ads (ad demand) determine the categorical density for each day. Different categorical densities combine to yield the daily density. The goal is to attribute daily density to the ad demand and query volume of different categories. 
The category density is directly affected by category-wise ad demand and query volume and thus has a relatively more stable relationship with the inputs than overall daily density.\n }\n \label{fig:causal-graph}\n \vspace{-3em}\n\end{figure}\n\nIn this paper, we consider large-scale systems such as search or ad systems where output metrics are aggregated over different kinds of inputs or composed over multiple pipeline stages, leading to a natural computational structure (instead of a single function of the inputs). For example, in an ad system, the number of ads that are matched per query is a composite measure built from an analogous metric for each query category (see Figure~\ref{fig:causal-graph}). While the overall matching density may fluctuate, the matching density per category is expected to be more stably associated with the input queries and ads. As another example, the output metric may be a result of a series of modules in a pipeline, e.g., recommendations that are finally shown to a user may be a result of multiple pipeline stages where each stage filters some items. Our key insight is that utilizing the computational structure of a real-world system can break up the system into smaller sub-parts that stay stable over time and thus can be modelled accurately. In other words, the system's computation can be modelled as a set of independent causal mechanisms~\cite{peters2017elements} over a structural causal model (SCM)~\cite{pearl2009book}. \n\nModeling the system's computation as an SCM also provides a principled way to define an attribution score. Specifically, we show that attribution can be defined in terms of \textit{counterfactuals} on the SCM. Following recent work on causal Shapley values~\cite{heskes2020causal,pmlr-v162-jung22a}, we posit four axioms that any desirable attribution method for an output metric should satisfy. We then propose a counterfactual variant of the Shapley value that satisfies all these properties. Thus, given the computational structure, our proposed {\textit{CF-Shapley}}\xspace method has the following steps: \textbf{1)} utilize machine learning algorithms to fit the SCM and compute counterfactual values of the metric under any input, and \textbf{2)} use the estimated counterfactuals to construct an attribution score to rank the contribution of different inputs. On simulated data, our results show that the proposed method is significantly more accurate for explaining inputs' contribution to an observed change in a system metric, compared to the Shapley value~\cite{lundberg2017unified} or its recent causal variants~\cite{heskes2020causal,pmlr-v162-jung22a}.\n\n\n\n\nWe apply the proposed method, {\textit{CF-Shapley}}\xspace attribution, to a large-scale ad matching system that outputs relevant ads for each search query issued by a user. The key outcome is \textit{matching density}, the number of ads matched per query. This density is roughly proportional to the revenue generated, since only the queries for which ads are selected contribute to revenue. There are two main causes for a change in matching density: change in query volume or change in demand from advertisers. Given that queries are typically organized by categories, the attribution problem is to understand which of these two is driving an observed change in matching density, and from which categories. \n\nTo do so, we construct a causal graph representing the system's computation pipeline (Figure~\ref{fig:causal-graph}). 
Given six months of the system's log data, we repurpose time-series prediction models to learn the structural equation for category-wise density as a function of query volume and ad demand, its parents in the graph. \n\nFor this system, we find that category-wise attribution is possible with minimal assumptions, while attribution between query volume and ad demand requires knowledge of the structural equations that generate category-wise density.\nIn both cases, we show how the {\textit{CF-Shapley}}\xspace method can be used to estimate the system's counterfactual outputs and the resultant attribution scores.\nAs a sanity check, {\textit{CF-Shapley}}\xspace attribution scores satisfy the \textit{efficiency} property for attributing the matching density metric: their sum matches the observed change in density. We then use {\textit{CF-Shapley}}\xspace scores to explain density changes on five outlier days from November to December 2021, uncovering insights on how changes in {query volume}\xspace or {ad demand}\xspace for different categories affect the density metric. We validate the results through an analysis of external events during the time period.\n\nTo summarize, our contributions include:\n\begin{itemize}\n \item A method for attributing metrics in a large-scale system utilizing its computational structure as a causal graph, which outperforms recent Shapley value-based methods on simulated data.\n \n \item A case study on estimating counterfactuals in a real-world ad matching system, providing a principled way for attributing changes in its output metric.\n\end{itemize}\n\n\n\n\section{Related Work} \nOur work considers a causal interpretation of the attribution problem. Unlike attribution methods on predictions from a (deterministic) machine learning model~\cite{lundberg2017unified,ide2021anomaly}, here we are interested in attributing real-world outcomes where the data-generating process includes noise. Since the attribution problem concerns explaining a single outcome or event, we focus on causality on individual outcomes~\cite{halpern2016book} rather than \textit{general} causality that deals with the average effect of a cause on the outcome over a (sub)population~\cite{pearl2009book}. In other words, we are interested in estimating the counterfactual, given that we already have an observed event. Counterfactuals are the hardest level in Pearl's ladder of causation, compared to observation and intervention~\cite{pearl2019seven}. \n\nWhile counterfactuals have been applied in feature attribution for machine learning models~\cite{kommiya2021towards,verma2020counterfactual}, less work has been done for attributing real-world outcomes in systems using formal counterfactuals. Recent work uses the \textit{do}-intervention to propose do-Shapley values~\cite{heskes2020causal,pmlr-v162-jung22a} that attribute the interventional quantity $P(Y|do(\bm{V}))$ across different inputs $v \in \bm{V}$. While do-Shapley values are useful for calculating the average effect of different inputs on the output $Y$, they are not applicable for attributing an \textit{individual} change in the output. For attributing individual changes, \cite{janzing2019causaloutliers} analyze root cause identification for outliers in a structural causal model, and find that attribution conditional on the parents of a node is more effective than global attribution. 
They quantify the attribution using information-theoretic scores, but do not provide any axiomatic characterization of the resulting attribution score.\nIn this work, we propose four axioms that characterize desirable properties for an attribution score for explaining an individual change in the output and present the CF-Shapley value that satisfies those axioms.\n\n\noindent \textbf{Attribution in ad systems. }\nMulti-touch attribution is the most common attribution problem studied in online ad systems. Given an ad click, the goal is to assign credit to the different preceding exposures of the same item to the user, e.g., previous ad exposures, emails, or other media. Multiple methods have been proposed to estimate the attribution, such as attributing all credit to the last exposure~\cite{berman2018beyondmta}, averaging over all exposures, or using probabilistic models to model the click data as a function of the input exposures~\cite{shao2011mta,ji2016probabilistic-mta}. Recent methods utilize the game-theoretic attribution score using Shapley values that summarize the attribution over multiple simulations of input variables, with~\cite{dalessandro2012causally} or without a causal interpretation~\cite{singal2022shapley}. \nMulti-touch attribution can be considered a one-level SCM problem in which a single output node is affected by all input nodes. It does not cover more complex systems where there is a computational structure.\n\n\noindent \textbf{Performance Anomaly Attribution. }\nComputational structure (e.g., specific system capabilities or logs) has been considered in the systems literature to root-cause performance anomalies~\cite{attariyan2012xray} or system failures~\cite{zhang2019inflection}. Some methods use causal reasoning to motivate their attribution algorithm, but they do so informally.\nOur work provides a formal analysis of the system attribution problem.\n\n\section{Evaluation}\nOur goal is to attribute observed changes in the output metric of an ad matching system. We first describe the system and conduct a simulation study to evaluate {\textit{CF-Shapley}}\xspace scores.\n\n\subsection{Description of the ad matching system}\nWe consider an ad matching system where the goal is to retrieve all the relevant ads for a particular web search query by a user (these ads are ranked later to show only the top ones to the user). The outcome variable is the average number of ads matched for each query, called the \textit{``matching density''} (or simply \textit{density}). This outcome can be affected by multiple factors, including the availability of ads from advertisers, the distribution and volume of user queries issued on the system, algorithm changes, or other system bugs or unknown factors. For simplicity, we consider a matching algorithm based on matching \textit{exact} full text between a query and provided keyword phrases for an ad. This algorithm remains stable over time due to its simplicity. Thus, we can safely assume that there are no algorithm changes or code bugs for the matching algorithm under study. Given an extreme or unexpected value of density, our goal then is to attribute between change in ads and change in queries.\n\nSince there are millions of queries and ads, we categorize the data by nearly 250 semantic query categories. Examples of query categories are ``Fashion Apparel'', ``Health Devices'', ``Internet'', and so on.\nA naive solution may be to simply compare the magnitude of observed change in ad demand or query volume across categories. 
That is, given a change in density on day $t$, choose a reference day $r$ (e.g., same day last week) and compare the values of {ad demand}\xspace and {query volume}\xspace. We may conclude that the factor with the highest percentage change is causing the overall change in density. However, the limitation is that\nthe factor with the highest percentage change may neither be necessary nor sufficient to cause the change because its effect depends on the values of other factors. E.g., an increase in {query volume}\xspace for a category can have a positive, negative, or no effect on the daily density depending on its ad demand compared to other categories. This is because the density is computed as a query volume-weighted average of category densities; an increase in {query volume}\xspace for a low-demand (and hence low-density) category \textit{decreases} the aggregate density (see Eqn.~\ref{eq:dailyden-eqn}). \n\n\n\subsection{Constructing an SCM for the ad density metric}\nTo apply the {\textit{CF-Shapley}}\xspace method for attributing a matching density value, we define a causal graph based on how the metric is computed, as shown in \nFigure~\ref{fig:causal-graph}. The number of queries for a category is measured by the number of search result page views (SRPV). The number of ads is measured by the number of listings posted by advertisers. For simplicity, we call them \textit{{query volume}\xspace} and \textit{{ad demand}\xspace}. We assume that given a category, the {ad demand}\xspace and {query volume}\xspace are independent of each other since they are driven by advertiser and user goals, respectively. The combination of {ad demand}\xspace and {query volume}\xspace for a category determines its category-wise density, which is then aggregated to yield the \textit{daily density}. As we are interested in attribution over days as a time unit, we refer to the aggregate density as daily density, $y$. \nThus, the variables $\{ad^{c1}, qv^{c1}, ad^{c2}, qv^{c2} \cdots ad^{ck}, qv^{ck} \}$ are the $2k$ inputs to the system where $ci$ is the category, ${ad}$ refers to {ad demand}\xspace, ${qv}$ refers to {query volume}\xspace, and $k$ is the number of categories. \n\nThe structural equation from category-wise densities to daily density is known. It is simply a weighted average of the category-wise densities, weighted by the {query volume}\xspace.\n\begin{equation}\label{eq:dailyden-eqn}\n y_t = f({den}^{c1}_t, {qv}^{c1}_t, \cdots {den}^{ck}_t, {qv}^{ck}_t) = \frac{\sum_c {den}^c_t {qv}^c_t}{\sum_c {qv}^c_t}\n\end{equation}\nwhere $den^c_t$ is the density of category $c$ on day $t$ and ${qv}^c_t$ is the {query volume}\xspace for the category on day $t$.\nHowever, the equation from category-wise {ad demand}\xspace and {query volume}\xspace to category density is infeasible to obtain. This would involve ``replaying'' a computationally expensive matching algorithm on the real-time queries and ad listings, but the ad listings are not available (only a daily snapshot of the ads inventory is stored in the logs). We will show how to estimate the structural equation for category density in Section~\ref{sec:fit-scm}.\n\n\n\n\n\subsection{Evaluating {\textit{CF-Shapley}}\xspace on simulated data}\nBefore applying {\textit{CF-Shapley}}\xspace on the ad matching system, we first evaluate the method on simulated data motivated by the causal graph of the system. 
This is because it is impossible to know the ground-truth attribution using data from a real-world system, since we do not know how the change in input variables led to the observed metric value and which inputs were the most important. \n\nWe construct simulated data based on the causal structure of Figure~\ref{fig:causal-graph}. For each category, we model {ad demand}\xspace and {query volume}\xspace as independent Gaussian random variables (we simulate real-world variation in {query volume}\xspace using a Beta prior). The category-wise density is constructed as a monotonic function of ad demand and has a biweekly periodicity. The SCM equations are:\n\begin{align} \label{eq:sim-gen}\n \gamma &= \mathcal{B}(0.5,0.5); \text{\ } {qv}^c_t = \mathcal{N}(1000\gamma, 100); \text{\ } {ad}^c_t = \mathcal{N}(10000, 100) \nonumber \\\n den^c_t &= g({ad}^c_t,{qv}^c_t, den^c_{t-1}) + \mathcal{N}(0,\sigma^2) \nonumber \\\n &= \kappa \, {ad}^c_t\/{qv}^c_t + \beta \, a \, den^c_{t-1} + \mathcal{N}(0,\sigma^2) \\\n y_t &= \frac{\sum_c den^c_t {qv}^c_t}{\sum_c {qv}^c_t} \label{eq:dailyden}\n\end{align}\nwhere ${qv}^c_t$ and ${ad}^c_t$ are the query volume and ad demand respectively for category $c$ at time $t$. They combine to produce the ad matching density $den^c_t$ based on a function $g$ and additive normal noise. The variance of the noise, $\sigma^2$, determines the stochastic variation in the system. For the simulation, we construct $g$ based on two observations about the category density: \textbf{1)} it is roughly a ratio of the \textit{relevant} ads and the number of queries; and \textbf{2)} it exhibits auto-correlation with its previous value and periodicity over a longer time duration. We use $\kappa$ to denote the fraction of relevant ads and add a second term with parameter $a$ to simulate a biweekly pattern: $a=1$ if $\operatorname{floor}(t\/7)$ is even and $a=-1$ otherwise.\n$\beta$ is the relative importance of the previous value in determining the current category density. \n Finally, all the category-wise densities are weighted by their query volume ${qv}^c_t$ and averaged to produce the daily density metric, $y_t$.\n \nEach dataset generated using these equations has 1000 days and 10 categories; we set $\kappa=0.85, \beta=0.15$ for simplicity. We intervene on the {ad demand}\xspace or {query volume}\xspace of the 1000th point to construct an outlier metric that needs to be attributed. Given the biweekly pattern, the reference date is chosen 14 days before the 1000th point.\n\n\noindent \textbf{Setting ground-truth attribution.} Even with simulated data, setting ground-truth attribution can be tricky. For example, if there is an increase in {ad demand}\xspace for one category and an increase in {query volume}\xspace for another, it is not clear which one would have the bigger impact on the daily density. That depends on their {query volume}\xspace and {ad demand}\xspace, respectively, and on any changes in other categories. To evaluate attribution methods, therefore, we consider simple interventions where objective ground-truth can be obtained. Specifically, for ease of interpretation, we intervene on only two categories at a time such that the first has a substantially higher chance of affecting the outcome metric than the second. \n\nWe consider two configurations: change in 1) {ad demand}\xspace and 2) {query volume}\xspace. 
For changing {ad demand}\xspace (\textit{Config 1}), we choose two categories such that the first has the highest {query volume}\xspace and the second has the lowest {query volume}\xspace. We double the {ad demand}\xspace for both categories with a slight difference (x2 for the first category, x2.1 for the second). Since the category-wise densities are weighted by {query volume}\xspace to obtain the daily density metric, for the same (or similar) change in demand, it is natural that the first category has a higher impact on daily density (even though both may have a similar impact on their own category-wise density). Thus, for \textit{Config 1}, the ground-truth attribution is the first category. For changing {query volume}\xspace (\textit{Config 2}), we choose two categories such that the first has the most extreme density and the second has density equal to the reference daily density. Then, we change {query volume}\xspace as above: x2 for the first category and x2.1 for the second. Following Eqn.~\ref{eq:dailyden}, {query volume}\xspace change in a category having the same density as the daily density is expected to have low impact on daily density (keeping other categories constant, if category density is not impacted by {query volume}\xspace, an increase in {query volume}\xspace for a category with density equal to daily density causes zero change in daily density). Thus, the ground-truth attribution (the category with the highest impact on the output metric) is again the first category. Note that {query volume}\xspace has higher variation across categories, so a higher multiplicative factor does not necessarily mean a higher absolute difference.\n\n\n\noindent \textbf{Baseline attribution methods.} We compare {\textit{CF-Shapley}}\xspace to the standard Shapley value (as implemented in SHAP~\cite{lundberg2017unified}, {\texttt{Shapley}}\xspace) and the do-Shapley value ({\texttt{DoShapley}}\xspace)~\cite{pmlr-v162-jung22a}. The {\texttt{Shapley}}\xspace method ignores the structure and fits a model directly predicting daily density $y_t$ using (category-wise) {ad demand}\xspace and {query volume}\xspace features. It uses the predictions of this model for computing the Shapley score. For the {\texttt{DoShapley}}\xspace method, we notice that our causal graph corresponds to the Direct-Causal graph structure in their paper and use the estimator from Eq. (5) in~\cite{pmlr-v162-jung22a}, which depends on the same daily density predictor as the standard Shapley value. \nWe also evaluate on three intuitive baselines based on absolute change in inputs: \textbf{1)} the category with the biggest change in {ad demand}\xspace ({\texttt{AdDemandDelta}}\xspace); \textbf{2)} {query volume}\xspace ({\texttt{QVolumeDelta}}\xspace); or \textbf{3)} density multiplied by {query volume}\xspace ({\texttt{ProductDelta}}\xspace) since this product is used in the daily density equation.\n\nFor the {\textit{CF-Shapley}}\xspace algorithm, we fit the structural equation for category density, using the following features: {ad demand}\xspace, {query volume}\xspace, $den_{t-1}, den_{t-7}, den_{t-14}$. For both the {\textit{CF-Shapley}}\xspace category density prediction and the {\texttt{Shapley}}\xspace daily density prediction model, we use a 3-layer feed-forward network. We use all data up to the 999th day for training and validation for all models. 
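\n\nTo make the simulation concrete, the following is a minimal NumPy sketch of the data-generating process in Eqns.~\ref{eq:sim-gen} and \ref{eq:dailyden}; the function and variable names are ours and purely illustrative, and the intervention on the 1000th day is omitted.\n\begin{verbatim}\nimport numpy as np\n\ndef simulate(T=1000, K=10, kappa=0.85, beta=0.15, sigma=1.0, seed=0):\n    rng = np.random.default_rng(seed)\n    gamma = rng.beta(0.5, 0.5, size=K)\n    qv = rng.normal(1000 * gamma, 100, size=(T, K))  # query volume\n    ad = rng.normal(10000, 100, size=(T, K))         # ad demand\n    den = np.zeros((T, K))\n    for t in range(T):\n        a = 1.0 if (t \/\/ 7) % 2 == 0 else -1.0       # biweekly pattern\n        prev = den[t - 1] if t > 0 else 0.0\n        noise = rng.normal(0, sigma, size=K)\n        den[t] = kappa * ad[t] \/ qv[t] + beta * a * prev + noise\n    y = (den * qv).sum(axis=1) \/ qv.sum(axis=1)      # daily density\n    return ad, qv, den, y\n\end{verbatim}\nAn outlier day can then be created by rescaling the chosen categories' ad demand or query volume on the final day (the x2 and x2.1 interventions above) before recomputing the affected densities and the daily density.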
\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.85\\columnwidth]{figures\/attr_truem_category.pdf}\n \\caption{Category attribution accuracy for various methods.}\n \\label{fig:sim-cat-attr}\n\\end{figure}\n\n\\noindent \\textbf{Results. }\nFor each attribution method, we measure accuracy compared to the ground-truth as we increase the noise ($\\sigma$) in the true data-generating process (SCM) ($\\sigma=\\{0.1, 1, 10\\}$). As noise in the generating process increases, we expect higher error for fitting structural equations and thus the attribution task becomes harder.\n\\textit{Attribution accuracy} is defined as the fraction of times a method outputs the highest attribution score for the correct category (first category), over 20 simulations.\n\nFigure~\\ref{fig:sim-cat-attr} shows the results. {\\textit{CF-Shapley}}\\xspace obtains the highest attribution accuracy for both Config 1 and 2. In general, attribution for {ad demand}\\xspace is easier than {query volume}\\xspace because both the category density and daily density are monotonic functions of the {ad demand}\\xspace. That is why we observe near 100\\% accuracy for {\\textit{CF-Shapley}}\\xspace under \\textit{Config 1}, even with high noise. The attribution accuracy for \\textit{Config 2} is 70-80\\%, decreasing as more noise is introduced.\n\nIn comparison, none of the baselines achieve more than 50\\% (random-guess) accuracy. Note that the {\\texttt{Shapley}}\\xspace and {\\texttt{DoShapley}}\\xspace methods obtain similar attribution accuracies. While their attribution scores are different, the highest ranked category often turns out to be the same since they rely on the same daily density model (but use different formulae). Inspecting the predictive accuracy of the daily density model offers an explanation: error on the daily density prediction is higher than that for category-wise density prediction (and it increases as the noise is increased). This indicates the value of computing an individualized counterfactual using the full graph structure, rather than focusing on the average (causal) effect. Finally, the other intuitive baselines fail on both tasks since they only look at the change in the input variables.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\n\\citet[page 74]{AngristPischke2015} argue that ``careful reasoning about OVB [omitted variables bias] is an essential part of the 'metrics game.'' Largely for this reason, researchers have eagerly adopted new tools that let them quantitatively assess the impact of omitted variables on their results. In particular, researchers now widely use the sensitivity analysis methods developed in \\cite{AltonjiElderTaber2005} and \\cite{Oster2019}. These methods have been extremely influential, with about 3600 and 2500 Google Scholar citations as of June 2022, respectively. Looking beyond citations, researchers are actively using these methods. Every top 5 journal in economics is now regularly publishing papers which use the methods in \\cite{Oster2019}. Aggregating across these five journals, from the three year period starting when \\cite{Oster2019} was published, 2019--2021, 33 published papers have used these methods, and often quite prominently.\n\nThese methods, however, rely on an assumption called \\emph{exogenous controls}. This assumption states that the omitted variables of concern are uncorrelated with all included covariates. 
For example, consider a classic regression of wages on education and controls like parental education. Typically we are worried that the coefficient on education in this regression is a biased measure of the returns to schooling because unobserved ability is omitted. To apply Oster's \\citeyearpar{Oster2019} method for assessing the importance of this unobserved variable, we must assume that unobserved ability is uncorrelated with parent's education, along with all other included controls.\\footnote{Appendix A.1 in \\cite{Oster2019} briefly describes one approach to relaxing the exogenous controls assumption in her setting. We show that this approach changes the interpretation of her sensitivity parameter in a way that may change researchers' conclusions about the robustness of their results. We discuss this in detail in our appendix \\ref{sec:OsterRedefinition}.}\n\nSuch exogeneity assumptions on the control variables are usually thought to be very strong and implausible. For example, \\cite{AngristPischke2017} discuss how\n\\begin{quote}\n``The modern distinction between causal and control variables on the right-hand side of a regression equation requires more nuanced assumptions than the blanket statement of regressor-error orthogonality that's emblematic of the traditional econometric presentation of regression.'' (page 129)\n\\end{quote}\nPut differently: We usually do not expect the included control variables to be uncorrelated with the omitted variables; instead we merely hope that the treatment variable is uncorrelated with the omitted variables after adjusting for the included controls. These control variables are therefore usually thought to be endogenous.\n\nIn this paper we provide a new approach to sensitivity analysis that allows the included control variables to be endogenous, unlike \\cite{AltonjiElderTaber2005} and \\cite{Oster2019}. Like these previous papers, however, we measure the importance of unobserved variables by comparing them to the included covariates. Thus researchers can still measure how strong selection on unobservables must be relative to selection on observables in order to overturn their baseline findings.\n\n\\subsubsection*{Overview of Our Approach}\n\nIn section \\ref{sec:model} we describe the baseline model. The parameter of interest is $\\beta_\\text{long}$, the coefficient on a treatment variable $X$ in a long OLS regression of an outcome variable $Y$ on a constant, treatment $X$, the observed covariates $W_1$, and some unobserved covariates $W_2$. In section \\ref{sec:causalModels} we discuss three causal models which allow us to interpret this parameter causally, based on three different identification strategies: unconfoundedness, difference-in-differences, and instrumental variables. Since $W_2$ is unobserved, we cannot compute the long regression of $Y$ on $(1,X,W_1,W_2)$ in the data. Instead, we can only compute the medium regression of $Y$ on $(1,X,W_1)$. We begin by considering a baseline model with a no selection on unobservables assumption, which implies that the coefficients on $X$ in the long and medium regressions are the same. Importantly, this baseline model does \\emph{not} assume the controls $W_1$ are exogenous. They are included solely to aid in identification of the coefficient on $X$ in the long regression. \n\nWe then move to assess the importance of the no selection on unobservables assumption. 
In section \ref{sec:MainNewAnalysis} we develop a sensitivity analysis that does not rely on the exogenous controls assumption, while still allowing researchers to compare the magnitude of selection on observables with the magnitude of selection on unobservables. Our results use either one, two, or three sensitivity parameters; not all of our results require all three parameters. The first sensitivity parameter compares the relative magnitude of the coefficients on the observed and unobserved covariates in a treatment selection equation. This parameter thus measures the magnitude of selection on unobservables by comparing it with the magnitude of selection on observables. The second sensitivity parameter compares the relative magnitude of the coefficients on the observed and unobserved covariates in the outcome equation. The third sensitivity parameter controls the relationship between the observed and the unobserved covariates; this parameter thus measures the magnitude of control endogeneity.\n\nWe provide three main identification results. Our first result (Theorem \ref{cor:IdsetRyANDcFree}) characterizes the identified set for $\beta_\text{long}$, the coefficient on treatment in the long regression of the outcome on the treatment and the observed and unobserved covariates. This theorem only requires that researchers make an assumption about a single sensitivity parameter---the relative magnitudes of selection on observables and unobservables. In contrast, \cite{Oster2019} requires that researchers reason about two different sensitivity parameters. Moreover, our result allows for arbitrarily endogenous controls, unlike existing results in the literature. We provide a closed form, analytical expression for the identified set, which makes this result easy to use in practice. Using this result, we show how to do breakdown analysis: to find the largest magnitude of selection on unobservables relative to selection on observables needed to overturn a specific baseline finding. This value is called a breakdown point, and can be used to measure the robustness of one's baseline results. We provide a simple expression for the breakdown point and recommend that researchers report estimates of it along with their baseline estimates. This estimated breakdown point provides a scalar summary of a study's robustness to selection on unobservables while allowing for arbitrarily endogenous controls.\n\nIf researchers are willing to partially restrict the magnitude of control endogeneity, then their results will be more robust to selection on unobservables. Our second result (Theorem \ref{thm:IdsetRyFree}) therefore characterizes the identified set for $\beta_\text{long}$ when researchers make an assumption about two sensitivity parameters: the relative magnitude of selection on unobservables and the magnitude of control endogeneity. We again provide a simple closed form expression for the identified set, and then show how to use this result to do breakdown analysis. Finally, if researchers are willing to restrict the impact of unobservables on outcomes, then they can again obtain results that are more robust to selection on unobservables. In this case, the identified set is more difficult to characterize (see Theorem \ref{thm:mainMultivariateResult} in the appendix). 
However, our third main result (Theorem \\ref{cor:BFCalculation3D}) shows that we can nonetheless easily numerically compute objects like breakdown points.\n\nIn section \\ref{sec:empirical} we show how to use our results in empirical practice. We use data from \\citet[\\emph{Econometrica}]{BFG2020} who studied the effect of historical American frontier life on modern cultural beliefs. Specifically, they test a well known conjecture that living on the American frontier cultivated individualism and antipathy to government intervention. They heavily rely on Oster's (2019) method to argue against the importance of omitted variables. Using our results, we obtain more nuanced conclusions about robustness. In particular, when allowing for endogenous controls, we find that effects obtained from questionnaire based outcomes are no longer robust, but effects from election and property tax outcomes remain robust. This analysis highlights that previous empirical findings of robustness based on \\cite{Oster2019}, for example, may no longer be robust once the controls are allowed to be endogenous.\n\n\\subsubsection*{Related Literature}\n\nWe conclude this section with a brief review of the literature. We focus on two literatures: The literature on endogenous controls and the literature on sensitivity analysis in linear or parametric models.\n\nThe idea that the treatment variable and the control variables should be treated asymmetrically in the assumptions goes back to at least \\cite{BarnowCainGoldberger1980}. They developed the ``linear control function estimator'', which is based on an early parametric version of the now standard unconfoundedness assumption. \\citet[page 190]{HeckmanRobb1985}, \\cite{HeckmanHotz1989}, and \\citet[page 5035]{HeckmanVytlacil2007part2} all provide detailed discussions of this estimator. It was also one of the main estimators used in \\cite{LaLonde1986}. \\cite{StockWatson2011} provide a textbook description of it on pages 230--233 and pages 250--251. \\cite{AngristPischke2009} also discuss it at the end of their section 3.2.1. Also see \\cite{Frolich2008}. Note that this earlier analysis was based on mean independence assumptions, while the analysis in our paper only uses linear projections. We do this so that our baseline model is not falsifiable, which allows us to avoid complications that arise in falsifiable models (e.g., see \\citealt{MastenPoirier2021FF}). More recently, \\cite{HuenermundLouw2020} remind researchers that most control variables are likely endogenous and hence their coefficients should not be interpreted as causal. \n\nAlthough control variables are often thought to be endogenous, the literature on sensitivity analysis generally assumes the controls are exogenous. As mentioned earlier, this includes \\cite*{AltonjiElderTaber2005, AltonjiElderTaber2008} and \\cite{Oster2019}. However, Appendix A.1 of \\cite{Oster2019} describes one approach for relaxing the exogenous controls assumption by redefining her sensitivity parameter $\\delta$. We discuss this in detail in appendix \\ref{sec:OsterRedefinition}. There we show that such a redefinition implies that $\\delta = 1$ is no longer a natural reference point. In particular, we show that this redefinition can change researchers' conclusions about the robustness of their results. 
\cite{Krauth2016} explicitly allows for endogenous controls, but he relies on a similar redefinition approach which implies that his sensitivity parameter $\lambda$ measures the magnitude of selection on unobservables as well as the endogeneity of the controls at the same time. In particular, like our results for Oster's $\delta$, when the exogenous controls assumption does not hold, $\lambda = 1$ is no longer a natural reference point for Krauth's $\lambda$. See Appendix \ref{subsec:KrauthDisc} for more discussion. Thus it too does not solely measure selection on unobservables when the controls are endogenous. \cite{Imbens2003} starts from the standard unconfoundedness assumption which allows endogenous controls, but in his parametric assumptions (see his likelihood equation on page 128) he assumes that the unobserved omitted variable is independent of the observed covariates. \n\cite{CinelliHazlett2020} develop an alternative to \cite{Oster2019} that allows researchers to compare the relative strength of the observed and unobserved covariates on outcomes and on treatment selection. Their approach also imposes the exogenous controls assumption (see the last paragraph of their page 53). \cite{AET2019} propose an approach to allow for endogenous controls based on imposing a factor model on all covariates, observable and unobservable. Their approach and ours are not nested; in particular, their results require the number of covariates to go to infinity, while we suppose the number of covariates is fixed. This difference arises because they explicitly model the covariate selection process. We instead take the covariates as given and impose assumptions directly on these covariates, rather than deriving such assumptions from a model of covariate selection. Our results also allow researchers to be fully agnostic about the relationship between the observed and unobserved covariates.\n\nThere are several other related papers on sensitivity analysis. The sensitivity parameters we use are defined based on the relative magnitude of different coefficients. That is similar to previous work by \cite{Chalak2019}, who shows how to use relative magnitude constraints to assess sensitivity to omitted variables when a proxy for the omitted variable is observed. \citet[section 7.3]{ZhangCinelliChenPearl2021} discuss a sensitivity analysis that uses constraints on the relative magnitude of coefficients in a setting with exogenous controls. Finally, note that the standard unconfoundedness assumption (for example, chapter 12 in \citealt{ImbensRubin2015}) allows for endogenous controls. For this reason, several papers that assess sensitivity to unconfoundedness also allow for endogenous controls. This includes \cite{Rosenbaum1995, Rosenbaum2002}, \cite{MastenPoirier2018}, and \cite{MastenPoirierZhang2020}. These methods, however, do not provide formal results for calibrating their sensitivity parameters based on comparing selection on observables with selection on unobservables. These methods also focus on binary or discrete treatments, whereas the analysis in our paper can be used for continuous treatments as well.\n\n\subsubsection*{Notation Remark}\n\nFor a scalar random variable $A$ and a random column vector $B$, we define $A^{\perp B} = A - [\var(B)^{-1} \cov(B,A)]' B$. This is the sum of the residual from a linear projection of $A$ onto $(1,B)$ and the intercept in that projection. Many of our equations therefore do not include intercepts because they are absorbed into $A^{\perp B}$ by definition. 
Note also that $A^{\\perp B}$ is uncorrelated with each component of $B$, by definition. Let $R_{A \\sim B \\mathrel{\\mathsmaller{\\bullet}} C}^2$ denote the R-squared from a regression of $A^{\\perp C}$ on $(1,B^{\\perp C})$. This is sometimes called the partial R-squared.\n\n\\section{The Baseline Model}\\label{sec:model}\n\nLet $Y$ and $X$ be observed scalar variables. Let $W_1$ be a vector of observed covariates of dimension $d_1$. Let $W_2$ be an unobserved scalar covariate; we discuss vector $W_2$ in appendix \\ref{sec:vectorW2}. Let $W = (W_1,W_2)$. Consider the OLS estimand of $Y$ on $(1,X,W_1,W_2)$. Let $(\\beta_\\text{long}, \\gamma_1, \\gamma_2)$ denote the coefficients on $(X,W_1,W_2)$. The following assumption ensures these coefficients and other OLS estimands we consider are well defined.\n\n\\begin{Aassump}\\label{assump:posdefVar}\nThe variance matrix of $(Y,X,W_1,W_2)$ is finite and positive definite.\n\\end{Aassump}\n\nWe can write\n\\begin{equation}\\label{eq:outcome}\n\tY = \\beta_\\text{long} X + \\gamma_1' W_1 + \\gamma_2 W_2 + Y^{\\perp X,W}\n\\end{equation}\nwhere $Y^{\\perp X,W}$ is defined to be the OLS residual plus the intercept term, and hence is uncorrelated with each component of $(X,W)$ by construction. Suppose our parameter of interest is $\\beta_\\text{long}$. In section \\ref{sec:causalModels} we discuss three causal models that lead to this specific OLS estimand as the parameter of interest, using either unconfoundedness, difference-in-differences, or instrumental variables as an identification strategy. Alternatively, it may be that we are simply interested in $\\beta_\\text{long}$ as a descriptive statistic. The specific motivation for interest in $\\beta_\\text{long}$ does not affect our technical analysis. \n\nNext consider the OLS estimand of $X$ on $(1,W_1,W_2)$. Let $(\\pi_1,\\pi_2)$ denote the coefficients on $(W_1,W_2)$. Then we can write\n\\begin{equation}\\label{eq:XprojectionW1W2}\n\tX = \\pi_1' W_1 + \\pi_2 W_2 + X^{\\perp W}\n\\end{equation}\nwhere $X^{\\perp W}$ is defined to be the OLS residual plus the intercept term, and hence is uncorrelated with each component of $W$ by construction. Although equation \\eqref{eq:XprojectionW1W2} is not necessarily causal, we can think of the value of $\\pi_1$ as representing ``selection on observables'' while $\\pi_2$ represents ``selection on unobservables.'' The following is thus a natural baseline assumption.\n\n\\begin{Aassump}[No selection on unobservables]\\label{assump:baselineEndogControl1}\n\t$\\pi_2 = 0$.\n\\end{Aassump}\n\nLet $\\beta_\\text{med}$ denote the coefficient on $X$ in the OLS estimand of $Y$ on $(1,X,W_1)$. With no selection on unobservables, we have the following result.\n\n\\begin{theorem}\\label{thm:baselineEndogControl1}\nSuppose the joint distribution of $(Y,X,W_1)$ is known. Suppose A\\ref{assump:posdefVar} and A\\ref{assump:baselineEndogControl1} hold. Then the following hold.\n\\begin{enumerate}\n\\item $\\beta_\\text{long} = \\beta_\\text{med}$. Consequently, $\\beta_\\text{long}$ is point identified.\n\n\\item The identified set for $\\gamma_1$ is $\\ensuremath{\\mathbb{R}}^{d_1}$.\n\\end{enumerate}\n\\end{theorem}\n\nThis result allows for endogenous controls, in the sense that the observed covariates $W_1$ can be arbitrarily correlated with the unobserved covariate $W_2$. But it restricts the relationship between $(X,W_1,W_2)$ in such a way that we can still point identify $\\beta_\\text{long}$ even though $W_1$ and $W_2$ are arbitrarily correlated. 
The coefficient $\gamma_1$ on the observed controls, however, is completely unidentified. The difference between the roles of $X$ and $W_1$ in Theorem \ref{thm:baselineEndogControl1} reflects the sentiment of \cite{AngristPischke2017} that we discussed in the introduction.\n\nIn practice, we are often worried that the no selection on unobservables assumption A\ref{assump:baselineEndogControl1} does not hold. In section \ref{sec:MainNewAnalysis} we develop a new approach to assess the importance of this assumption. \n\n\section{Sensitivity Analysis}\label{sec:MainNewAnalysis}\n\nWe have argued that, in practice, most controls are endogenous in the sense that they are potentially correlated with the omitted variables of concern. Consequently, methods for assessing the sensitivity of identifying assumptions for the treatment variable of interest should allow for the controls to be endogenous to some extent. In this section, we develop such a method. In section \ref{sec:sensitivityParameters} we first describe the three sensitivity parameters that we use; note that we do not use all of these parameters in all of our results. In section \ref{sec:mainIdentification} we then state our main identification results. In section \ref{sec:interpretation} we make several remarks regarding interpretation of the sensitivity parameters. Finally, in section \ref{sec:estimationInference} we briefly discuss estimation and inference.\n\n\subsection{The Sensitivity Parameters}\label{sec:sensitivityParameters}\n\nRecall from section \ref{sec:model} that our parameter of interest is $\beta_\text{long}$, the OLS coefficient on $X$ in the long regression of $Y$ on $(1,X,W_1,W_2)$. We cannot compute this regression from the data, since $W_2$ is not observed. Instead, we can compute $\beta_\text{med}$, the OLS coefficient on $X$ in the medium regression of $Y$ on $(1,X,W_1)$. The difference between these two regression coefficients is given by the classic omitted variable bias formula. We can write this formula as a function of the coefficient $\gamma_2$ on $W_2$ in the long regression outcome equation \eqref{eq:outcome} and the coefficient $\pi_2$ on $W_2$ in the selection equation \eqref{eq:XprojectionW1W2} as follows:\n\begin{equation}\label{eq:ourOVBformula}\n\t\beta_{\text{med}} - \beta_\text{long}\n\t=\n\t\frac{\gamma_2 \pi_2 (1 - R^2_{W_2 \sim W_1})}{\pi_2^2 \left(1 - R^2_{W_2 \sim W_1}\right) + \var(X^{\perp W})}\n\end{equation}\nwhere $R_{W_2 \sim W_1}^2$ denotes the population $R^2$ in a linear regression of the unobservable $W_2$ on the observables $(1,W_1)$. This bias is a function of the coefficient $\pi_2$. Hence $\pi_2$ is a natural sensitivity parameter. Using $\pi_2$ as a sensitivity parameter, however, would require researchers to make judgment calls about the absolute magnitude of this coefficient. This may be difficult. So, similar to the definition of Oster's $\delta$ (which we review in appendix \ref{subsec:osterredef}), we instead define a \emph{relative} sensitivity parameter. Specifically, let $\| \cdot \|_{\Sigma_{\text{obs}}}$ denote the weighted Euclidean norm on $\ensuremath{\mathbb{R}}^{d_1}$: $\|a\|_{\Sigma_{\text{obs}}} = \left(a'\Sigma_{\text{obs}}a\right)^{1\/2}$ where $\Sigma_{\text{obs}} \equiv \var(W_1)$. 
\n\nWe then consider the following assumption.\n\n\\begin{Aassump}\\label{assump:rx}\n$| \\pi_2 | \\leq \\bar{r}_X \\| \\pi_1 \\|_{\\Sigma_{\\text{obs}}}$ for a known value of $\\bar{r}_X \\geq 0$.\n\\end{Aassump}\n\nTo interpret assumption A\\ref{assump:rx}, we first normalize the variance of the unobserved $W_2$ to 1.\n\n\\begin{Aassump}\\label{assump:normalizeVarianceW2}\n$\\var(W_2) = 1$.\n\\end{Aassump}\n\nUsing this normalization A\\ref{assump:normalizeVarianceW2}, assumption A\\ref{assump:rx} can be written as\n\\[\n\t\\sqrt{ \\var(\\pi_2 W_2) }\n\t\\leq\n\t\\bar{r}_X \\cdot \\sqrt{\\var(\\pi_1' W_1)}.\n\\]\nSo assumption A\\ref{assump:rx} says that the association between treatment $X$ and a one standard deviation increase in the index of unobservables is at most $\\bar{r}_X$ times the association between treatment and a one standard deviation increase in the index of observables. Finally, note that $\\| \\pi_1 \\|_{\\Sigma_{\\text{obs}}}$ is invariant to linear transformations of $W_1$, including rescalings, since the index $\\pi_1'W_1$ is invariant with respect to linear transformations. This invariance ensures that $\\bar{r}_X$ is a unit-free sensitivity parameter.\n\nThe baseline model of section \\ref{sec:model} corresponds to the case $\\bar{r}_X = 0$, since it implies $\\pi_2 = 0$. We relax the baseline model by considering values $\\bar{r}_X > 0$. Our first main result (Theorem \\ref{cor:IdsetRyANDcFree}) describes the identified set using only A\\ref{assump:rx}. Researchers may also be willing to make additional restrictions so we consider two additional sensitivity parameters as well, however. These parameters are also motivated by the omitted variables bias formula in equation \\eqref{eq:ourOVBformula}. The bias is a function of $\\gamma_2$ so it is natural to also consider assumptions that restrict this parameter. Like our assumption on $\\pi_2$, we consider a restriction on the relative magnitudes of $\\gamma_1$ and $\\gamma_2$, the coefficients of $W_1$ and $W_2$ in the outcome equation \\eqref{eq:outcome}.\n\n\\begin{Aassump}\\label{assump:ry}\n$| \\gamma_2 | \\leq \\bar{r}_Y \\| \\gamma_1 \\|_{\\Sigma_{\\text{obs}}}$ for a known value of $\\bar{r}_Y \\geq 0$.\n\\end{Aassump}\n\nMaintaining the normalization A\\ref{assump:normalizeVarianceW2}, A\\ref{assump:ry} has a similar interpretation as A\\ref{assump:rx}: It says that the association between the outcome and a one standard deviation increase in the index of unobservables is at most $\\bar{r}_Y$ times the association between the outcome and a one standard deviation increase in the index of observables. $\\| \\gamma_1 \\|_{\\Sigma_{\\text{obs}}}$ is also invariant to linear transformations of $W_1$ and hence $\\bar{r}_Y$ is also a unit-free sensitivity parameter.\n\nFinally, the omitted variable bias in equation \\eqref{eq:ourOVBformula} is a function of $R_{W_2 \\sim W_1}^2$. So we also consider restrictions directly on the relationship between the observed and unobserved covariates.\n\n\\begin{Aassump}\\label{assump:corr}\n$R_{W_2 \\sim W_1} \\leq \\bar{c}$ for a known value of $\\bar{c} \\in [0,1]$.\n\\end{Aassump}\n\n Assumption A\\ref{assump:corr} allows researchers to constrain the magnitude of control endogeneity. In particular, the exogenous controls assumption is equivalent to $R_{W_2 \\sim W_1}^2 = 0$ and hence can be obtained by setting $\\bar{c} = 0$. Values $\\bar{c} > 0$ allow for partially endogenous controls. 
Note that $R_{W_2 \\sim W_1}^2$ is invariant to linear transformations of $W_1$ as well as to the normalization on $W_2$. Finally, it will sometimes be useful to note that $R_{W_2 \\sim W_1}^2 = \\| \\cov(W_1,W_2) \\|_{\\Sigma_{\\text{obs}}^{-1}}^2$.\n\n\\subsection{Identification}\\label{sec:mainIdentification}\n\nIn this section we state our main results. For simplicity, we first normalize the treatment variable so that $\\var(X) = 1$ and the covariates so that $\\var(W_1) = I$. All of the results below can be rewritten without these normalizations, at the cost of additional notation. With these normalizations, $\\| \\cdot \\|_{\\Sigma_{\\text{obs}}^{-1}} =\\| \\cdot \\|_{\\Sigma_{\\text{obs}}}$. We use $\\| \\cdot \\|$ to refer to this norm throughout.\n\n\\subsubsection*{Identification Using Restrictions on $\\bar{r}_X$ Only}\n\nLet $\\mathcal{B}_I(\\bar{r}_X)$ denote the identified set for $\\beta_\\text{long}$ under the positive definite variance assumption A\\ref{assump:posdefVar}, the normalization assumption A\\ref{assump:normalizeVarianceW2}, and the restriction A\\ref{assump:rx} on $\\pi_2$. In particular, this identified set does not impose the restriction A\\ref{assump:ry} on $\\gamma_2$ or the restriction A\\ref{assump:corr} on $R_{W_2 \\sim W_1}^2$.\nLet\n\\[\n\t\\underline{B}(\\bar{r}_X) = \\inf \\mathcal{B}_I(\\bar{r}_X)\n\t\\qquad \\text{and} \\qquad\n\t\\overline{B}(\\bar{r}_X) = \\sup \\mathcal{B}_I(\\bar{r}_X)\n\\]\ndenote its smallest and largest values. Our first main result, Theorem \\ref{cor:IdsetRyANDcFree} below, provides simple, closed form expressions for these sharp bounds. Similarly, let $\\mathcal{B}_I(\\bar{r}_X,\\bar{c})$ denote the identified set for $\\beta_\\text{long}$ if we also impose A\\ref{assump:corr}. Let\n\\[\n\t\\underline{B}(\\bar{r}_X, \\bar{c}) = \\inf \\mathcal{B}_I(\\bar{r}_X, \\bar{c})\n\t\\qquad \\text{and} \\qquad\n\t\\overline{B}(\\bar{r}_X, \\bar{c}) = \\sup \\mathcal{B}_I(\\bar{r}_X, \\bar{c})\n\\]\ndenote its smallest and largest values. Our second main result, Theorem \\ref{thm:IdsetRyFree} below, similarly provides simple, closed form expressions for these sharp bounds. \n\nTo state these results, we first define some additional notation. For any random vectors $A$ and $B$, let $\\sigma_{A,B} = \\cov(A,B)$, a matrix of dimension $\\text{dim}(A) \\times \\text{dim}(B)$. Let\n\\begin{align*}\n \t\\bar{z}_X(\\bar{r}_X) \n\t&= \n\t\\begin{cases}\n \t\\dfrac{\\bar{r}_X \\|\\sigma_{W_1,X}\\|}{\\sqrt{1 - \\bar{r}_X^2}} & \\text{ if } \\bar{r}_X < 1 \\\\\n \t+\\infty & \\text{ if } \\bar{r}_X \\geq 1.\n\t\\end{cases}\n\\end{align*}\nThe sensitivity parameter $\\bar{r}_X$ will affect the bounds only via this function. Note that this function also depends on the data, via the covariance between treatment $X$ and the covariates $W_1$. The bounds also depend on the data via three constants:\n\\begin{align*}\n k_0 &= 1 - \\|\\sigma_{W_1, X}\\|^2 = \\var(X^{\\perp W_1}) > 0\\\\\n k_1 &= \\sigma_{X,Y} - \\sigma_{W_1, X}'\\sigma_{W_1, Y} = \\cov(Y,X^{\\perp W_1})\\\\\n k_2 &= \\sigma_Y^2 - \\|\\sigma_{W_1, Y}\\|^2 = \\var(Y^{\\perp W_1}) >0.\n\\end{align*}\nThe inequalities here follow from A\\ref{assump:posdefVar}, positive definiteness of $\\var(Y,X,W_1)$. Note that the coefficient on $X$ in the medium OLS regression of $Y$ on $(1,X,W_1)$ can be written as $\\beta_{\\text{med}} = k_1\/k_0$ by the FWL theorem. 
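For concreteness, the following minimal sketch shows how sample analogs of these constants might be computed. It is ours, not from any package, and assumes only \\texttt{numpy}; the normalizations $\\var(X) = 1$ and $\\var(W_1) = I$ are enforced in-sample by standardizing $X$ and whitening $W_1$.

\\begin{verbatim}
import numpy as np

def k_constants(Y, X, W1):
    # Sample analogs of k0, k1, k2 and beta_med, after standardizing X and
    # whitening W1 so that Var(X) = 1 and Var(W1) = I in the sample.
    X = (X - X.mean()) / X.std()
    W1 = W1 - W1.mean(axis=0)
    S = np.atleast_2d(np.cov(W1, rowvar=False, bias=True))
    W1 = W1 @ np.linalg.inv(np.linalg.cholesky(S)).T
    n = len(X)
    s_W1X = W1.T @ X / n                  # cov(W1, X)
    s_W1Y = W1.T @ Y / n                  # cov(W1, Y)
    s_XY = np.cov(X, Y, bias=True)[0, 1]  # cov(X, Y)
    k0 = 1 - s_W1X @ s_W1X                # variance of X after partialling out W1
    k1 = s_XY - s_W1X @ s_W1Y             # covariance of Y with that residual
    k2 = np.var(Y) - s_W1Y @ s_W1Y        # variance of Y after partialling out W1
    beta_med = k1 / k0                    # medium regression coefficient on X (FWL)
    return k0, k1, k2, beta_med
\\end{verbatim}

As noted in section \\ref{sec:estimationInference} below, our identification results depend on the data only through this covariance structure, so these constants are the key inputs to the bounds that follow.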
\n\nWe also use the following assumption.\n\n\\begin{Aassump}\\label{assump:noKnifeEdge}\n$\\sigma_{W_1,Y} \\ne \\sigma_{X,Y}\\sigma_{W_1,X}$ and $\\sigma_{W_1,X} \\neq 0$.\n\\end{Aassump}\n\nThis assumption is not necessary, but it simplifies the proofs. A sufficient condition for A\\ref{assump:noKnifeEdge} is $\\beta_\\text{short} \\neq \\beta_{\\text{med}}$, where $\\beta_\\text{short}$ is the coefficient on $X$ in the short OLS regression of $Y$ on $(1,X)$. This follows because $\\beta_{\\text{med}} - \\beta_\\text{short} = -\\sigma_{W_1,X}'(\\sigma_{W_1,Y} - \\sigma_{X,Y}\\sigma_{W_1,X})\/k_0$, which can be nonzero only if both parts of A\\ref{assump:noKnifeEdge} hold. We can now state our first main result.\n\n\\begin{theorem}\\label{cor:IdsetRyANDcFree}\nSuppose the joint distribution of $(Y,X,W_1)$ is known. Suppose A\\ref{assump:posdefVar}, A\\ref{assump:rx}, A\\ref{assump:normalizeVarianceW2}, and A\\ref{assump:noKnifeEdge} hold. Normalize $\\var(X) = 1$ and $\\var(W_1) = I$. If $\\bar{r}_X^2 < k_0$, then\n\\[\n\t\\underline{B}(\\bar{r}_X) = \\beta_{\\text{med}} - \\text{dev}(\\bar{r}_X) \\qquad \\text{and} \\qquad\n \\overline{B}(\\bar{r}_X) = \\beta_{\\text{med}} + \\text{dev}(\\bar{r}_X)\n\\]\nwhere\n\\[\n\t\\text{dev}(\\bar{r}_X) = \\bar{r}_X \\|\\cov(X,W_1)\\| \\sqrt{\\frac{k_2(1 - R^2_{Y \\sim X \\mathrel{\\mathsmaller{\\bullet}} W_1})}{k_0(k_0 - \\bar{r}_X^2)}}.\n\\]\nOtherwise, $\\underline{B}(\\bar{r}_X) = -\\infty$ and $\\overline{B}(\\bar{r}_X) = +\\infty$.\n\\end{theorem}\n\nTheorem \\ref{cor:IdsetRyANDcFree} characterizes the largest and smallest possible values of $\\beta_\\text{long}$ when some selection on unobservables is allowed, the observed covariates are allowed to be arbitrarily correlated with the unobserved covariate, and we make no restrictions on the coefficients in the outcome equation. In fact, with the exception of at most three extra elements, the interval $[\\underline{B}(\\bar{r}_X), \\overline{B}(\\bar{r}_X)]$ is the identified set for $\\beta_\\text{long}$ under these assumptions. Here we focus on the smallest and largest elements to avoid technical digressions that are unimportant for applications.\n\nThere are two important features of Theorem \\ref{cor:IdsetRyANDcFree}: First, it only requires researchers to reason about \\emph{one} sensitivity parameter, unlike some existing approaches, including \\cite{Oster2019} and \\cite{CinelliHazlett2020}. Second, and also unlike those results, it allows for arbitrarily endogenous controls. So this result allows researchers to examine the impact of selection on unobservables on their baseline results without also having to reason about the magnitude of endogenous controls.\n\nSince Theorem \\ref{cor:IdsetRyANDcFree} provides explicit expressions for the bounds, we can immediately derive a few of their properties. First, when $\\bar{r}_X = 0$, the bounds collapse to $\\beta_\\text{med}$, the point estimand from the baseline model with no selection on unobservables. So we recover the baseline model as a special case. For small values of $\\bar{r}_X > 0$, the bounds no longer coincide, and the length of the interval between them increases continuously as $\\bar{r}_X$ increases away from zero. The rate at which they increase depends on just a few features of the data: The relationship between treatment and the observed covariates, $\\cov(X,W_1)$, the variance in outcomes after adjusting for the covariates, $\\var(Y^{\\perp W_1})$, the variance in treatments after adjusting for the covariates, $\\var(X^{\\perp W_1})$, and the R-squared from the regression of $Y^{\\perp W_1}$ on a constant and $X^{\\perp W_1}$; a minimal computational sketch follows.
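This sketch is purely illustrative (the function names are ours, and only \\texttt{numpy} is assumed): it evaluates the bounds from $k_0$, $k_2$, and $\\beta_\\text{med}$, and it also reports the value of $\\bar{r}_X$ at which the lower bound reaches zero, anticipating the breakdown point introduced next.

\\begin{verbatim}
import numpy as np

def theorem1_bounds(r_x, k0, k2, beta_med):
    # Bounds on beta_long under the normalizations Var(X) = 1 and Var(W1) = I,
    # which imply ||cov(X, W1)||^2 = 1 - k0.
    if r_x**2 >= k0:
        return -np.inf, np.inf
    R2_partial = beta_med**2 * k0 / k2    # R^2 of Y on X after partialling out W1
    dev = r_x * np.sqrt((1 - k0) * k2 * (1 - R2_partial) / (k0 * (k0 - r_x**2)))
    return beta_med - dev, beta_med + dev

def breakdown_point(k0, k2, beta_med):
    # Smallest r_x at which the lower bound reaches zero (take beta_med >= 0);
    # obtained by solving dev(r_x) = beta_med for r_x.
    num = beta_med**2 * k0**2
    return np.sqrt(num / ((1 - k0) * k2 + num))
\\end{verbatim}

Solving $\\text{dev}(\\bar{r}_X) = \\beta_\\text{med}$ for $\\bar{r}_X$ gives the closed form in \\texttt{breakdown\\_point}, which coincides with the expression in Corollary \\ref{corr:breakdownPointRXonly} below.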
We also see that the bounds are symmetric around $\\beta_\\text{med}$. Finally, the bounds can only be finite if $\\bar{r}_X < 1$. We discuss interpretation of the magnitude of $\\bar{r}_X$ in detail in section \\ref{sec:interpretation}.\n\nIn practice, empirical researchers often do breakdown analysis. In this context, they ask:\n\\begin{quote}\nHow strong does selection on unobservables have to be relative to selection on observables in order to overturn our baseline findings?\n\\end{quote}\nWe can use Theorem \\ref{cor:IdsetRyANDcFree} to answer this question. Suppose in the baseline model we find $\\beta_\\text{med} \\geq 0$. We are concerned, however, that $\\beta_\\text{long} \\leq 0$, in which case our positive finding is driven solely by selection on unobservables. Define\n\\[\n\t\\bar{r}_X^\\text{bp} \n\t= \\inf \\{ \\bar{r}_X \\geq 0 : b \\in \\mathcal{B}_I(\\bar{r}_X) \\text{ for some $b \\leq 0$} \\}.\n\\]\nThis value is called a \\emph{breakdown point}. It is the largest amount of selection on unobservables we can allow for while still concluding that $\\beta_\\text{long}$ is nonnegative. Note that the breakdown point when $\\beta_\\text{med} \\leq 0$ can be defined analogously.\n\n\\begin{corollary}\\label{corr:breakdownPointRXonly}\nSuppose the assumptions of Theorem \\ref{cor:IdsetRyANDcFree} hold. Then\n\\[\n\t\\bar{r}_X^\\text{bp} = \\left(\\frac{\\beta_\\text{med}^2\\var(X^{\\perp W_1})^2}{\\|\\cov(X,W_1)\\|^2 \\var(Y^{\\perp W_1}) + \\beta_\\text{med}^2 \\var(X^{\\perp W_1})^2}\\right)^{1\/2}.\n\\]\n\\end{corollary}\n\nThe breakdown point described in Corollary \\ref{corr:breakdownPointRXonly} characterizes the magnitude of selection on unobservables relative to selection on observables needed to overturn one's baseline findings. One of our main recommendations is that researchers present estimates of this point as a scalar measure of the robustness of their results. We illustrate this recommendation in our empirical application in section \\ref{sec:empirical}.\n\n\\subsubsection*{Identification Using Restrictions on $\\bar{r}_X$ and $\\bar{c}$}\n\nIn some applications, the bounds in Theorem \\ref{cor:IdsetRyANDcFree} may be quite large, even for small values of $\\bar{r}_X$. In this case, researchers may be willing to restrict the relationship between the observed covariates and the omitted variable. So next we present a similar result, but now imposing A\\ref{assump:corr}. Generalize the $\\bar{z}_X(\\cdot)$ function as follows:\n\\begin{align*}\n \t\\bar{z}_X(\\bar{r}_X,\\bar{c}) &= \\begin{cases}\n \t\\dfrac{\\bar{r}_X \\|\\sigma_{W_1,X}\\| \\sqrt{1 - \\min\\{\\bar{c}, \\bar{r}_X\\}^2}}{1 -\\bar{r}_X \\min\\{\\bar{c}, \\bar{r}_X\\}} & \\text{ if } \\bar{r}_X\\bar{c} < 1 \\\\\n \t+\\infty & \\text{ if } \\bar{r}_X\\bar{c} \\ge 1.\n\t\\end{cases}\n\\end{align*}\nNote that for $\\bar{c} = 1$, $\\bar{z}_X(\\bar{r}_X,1) = \\bar{z}_X(\\bar{r}_X)$ for all $\\bar{r}_X \\geq 0$. Also, $\\bar{z}_X(\\bar{r}_X,\\bar{c}) = \\bar{z}_X(\\bar{r}_X)$ when $\\bar{r}_X \\leq \\bar{c}$. As before, the sensitivity parameters will only affect the bounds via this function.\n\n\\begin{theorem}\\label{thm:IdsetRyFree}\nSuppose the joint distribution of $(Y,X,W_1)$ is known. Suppose A\\ref{assump:posdefVar}, A\\ref{assump:rx}, A\\ref{assump:normalizeVarianceW2}, A\\ref{assump:corr}, and A\\ref{assump:noKnifeEdge} hold. Normalize $\\var(X) = 1$ and $\\var(W_1) = I$. 
If $\\bar{z}_X(\\bar{r}_X, \\bar{c})^2 < k_0$, then\n\\[\n\t\\underline{B}(\\bar{r}_X, \\bar{c}) = \\beta_{\\text{med}} - \\text{dev}(\\bar{r}_X, \\bar{c})\n\t\\qquad \\text{and} \\qquad\n\t\\overline{B}(\\bar{r}_X, \\bar{c}) = \\beta_{\\text{med}} + \\text{dev}(\\bar{r}_X,\\bar{c})\t\n\\]\nwhere\n\\[\n\t\\text{dev}(\\bar{r}_X, \\bar{c}) = \\sqrt{\\frac{\\bar{z}_X(\\bar{r}_X,\\bar{c})^2\\left(\\frac{k_2}{k_0} - \\beta_{\\text{med}}^2\\right)}{k_0 - \\bar{z}_X(\\bar{r}_X, \\bar{c})^2}}.\n\\]\nOtherwise, $\\underline{B}(\\bar{r}_X, \\bar{c}) = -\\infty$ and $\\overline{B}(\\bar{r}_X, \\bar{c}) = +\\infty$.\n\\end{theorem}\n\nThe interpretation of Theorem \\ref{thm:IdsetRyFree} is similar to that of our earlier result, Theorem \\ref{cor:IdsetRyANDcFree}. It characterizes the largest and smallest possible values of $\\beta_\\text{long}$ when some selection on unobservables is allowed and the controls are allowed to be partially but not arbitrarily endogenous. We also make no restrictions on the coefficients in the outcome equation. As before, with the exception of at most three extra elements, the interval $[\\underline{B}(\\bar{r}_X, \\bar{c}), \\overline{B}(\\bar{r}_X, \\bar{c})]$ is the identified set for $\\beta_\\text{long}$ under these assumptions.\n\nEarlier we saw that $\\bar{r}_X < 1$ is necessary for the bounds of Theorem \\ref{cor:IdsetRyANDcFree} to be finite. Theorem \\ref{thm:IdsetRyFree} shows that, if we are willing to restrict the value of $\\bar{c}$, then we can allow for $\\bar{r}_X > 1$ while still obtaining finite bounds. Thus there is a trade-off between the magnitude of selection on unobservables we can allow for and the magnitude of control endogeneity. One way to summarize this trade-off is to use \\emph{breakdown frontiers} (\\citealt{MastenPoirier2019BF}). Specifically, when $\\beta_\\text{med} \\geq 0$, define\n\\[\n\t\\bar{r}_X^\\text{bf}(\\bar{c}) \n\t= \\inf \\{ \\bar{r}_X \\geq 0 : b \\in \\mathcal{B}_I(\\bar{r}_X, \\bar{c}) \\text{ for some $b \\leq 0$} \\}.\n\\]\nFor any fixed $\\bar{c}$, $\\bar{r}_X^\\text{bf}(\\bar{c})$ is a breakdown point: It is the largest magnitude of selection on unobservables relative to selection on observables that we can allow for while still concluding that our parameter of interest is nonnegative. As we vary $\\bar{c}$, this breakdown point changes: It increases as $\\bar{c}$ gets smaller, because we can allow for more selection on unobservables if we impose stronger restrictions on exogeneity of the observed covariates. Conversely, it decreases as $\\bar{c}$ gets larger, because we can allow for less selection on unobservables if we allow for more endogeneity of the observed covariates. In particular, $\\bar{r}_X^\\text{bf}(1) = \\bar{r}_X^\\text{bp}$, the breakdown point of Corollary \\ref{corr:breakdownPointRXonly}. As with that corollary, we can derive an analytical characterization of the function $\\bar{r}_X^\\text{bf}(\\cdot)$, but we omit it for brevity.\n\n\\subsubsection*{Identification Using Restrictions on $\\bar{r}_X$, $\\bar{c}$, and $\\bar{r}_Y$}\n\nFinally, in some empirical settings the results may not be robust even if we impose exogenous controls ($\\bar{c} = 0$). In these cases, we might be willing to restrict the impact of unobservables on outcomes; that is, we may be willing to impose A\\ref{assump:ry}. Let $\\mathcal{B}_I(\\bar{r}_X,\\bar{c},\\bar{r}_Y)$ denote the identified set for $\\beta_\\text{long}$ under A\\ref{assump:posdefVar} and A\\ref{assump:rx}--A\\ref{assump:corr}.
Unlike the two identified sets we considered above, this set is less tractable. We provide a precise characterization in appendix \\ref{sec:generalIdentSet}. Here we instead use our characterization to show how to do breakdown analysis.\n\nSuppose we are interested in the robustness of the conclusion that $\\beta_\\text{long} \\geq \\underline{b}$ for some known scalar $\\underline{b}$. For example, $\\underline{b} = 0$. Define the function\n\\[\n\t\\bar{r}_Y^\\text{bf}(\\bar{r}_X, \\bar{c}, \\underline{b}) \n\t= \\inf \\{ \\bar{r}_Y \\geq 0 : b \\in \\mathcal{B}_I(\\bar{r}_X, \\bar{c}, \\bar{r}_Y) \\text{ for some $b \\leq \\underline{b}$} \\}.\n\\]\nThis is a three-dimensional breakdown frontier. In particular, we can use it to define the set\n\\[\n\t\\text{RR} = \\{ (\\bar{r}_X, \\bar{r}_Y, \\bar{c}) : \\bar{r}_Y \\leq \\bar{r}_Y^\\text{bf}(\\bar{r}_X, \\bar{c}, \\underline{b}) \\}.\n\\]\n\\cite{MastenPoirier2019BF} call this the \\emph{robust region} because the conclusion of interest, $\\beta_\\text{long} \\geq \\underline{b}$, holds for any combination of sensitivity parameters in this region. The size of this region is therefore a measure of the robustness of our baseline conclusion.\n\nAlthough we do not have a closed form expression for the smallest and largest elements of $\\mathcal{B}_I(\\bar{r}_X, \\bar{c}, \\bar{r}_Y)$, our next main result shows that we can still easily compute the breakdown frontier numerically. To state the result, define the sets\n\\begin{align*}\n\t\\mathcal{D} &= \\mathbb{R} \\times \\{c \\in \\mathbb{R}^{d_1}: \\|c\\| < 1\\} \\times \\mathbb{R} \\\\\n\t\\mathcal{D}^0 &= \\{(z,c,b) \\in \\mathcal{D}: z\\sqrt{1 - \\|c\\|^2}(\\sigma_{W_1,Y} - b\\sigma_{W_1,X}) - (k_1 - k_0b)c \\ne 0 \\}\n\\end{align*} \nand define the functions\n\\begin{align*}\n \\text{devsq}(z) &= \\frac{z^2(k_2\/k_0 - \\beta_\\text{med}^2)}{k_0 - z^2} \\\\\n \\underline{r}_Y(z,c,b) &= \\begin{cases}\n 0 & \\text{ if } b = \\beta_{\\text{med}} \\\\ \n \\dfrac{|k_1 - k_0b|}{\\|z\\sqrt{1 - \\|c\\|^2}(\\sigma_{W_1,Y} - b\\sigma_{W_1,X}) - (k_1 - k_0b)c\\|} & \\text{ if } (z,c,b) \\in \\mathcal{D}^0 \\text{ and } b \\neq \\beta_{\\text{med}} \\\\ \n +\\infty & \\text{ otherwise} \n \\end{cases} \\\\\n p(z,c;\\bar{r}_X) &= \\bar{r}^2_X\\|\\sigma_{W_1,X}\\sqrt{1 - \\|c\\|^2} - cz\\|^2 - z^2.\n\\end{align*}\n\nWe can now state our last main result.\n\n\\begin{theorem}\\label{cor:BFCalculation3D}\nSuppose the joint distribution of $(Y,X,W_1)$ is known. Suppose A\\ref{assump:posdefVar} holds. Suppose A\\ref{assump:rx}--A\\ref{assump:noKnifeEdge} hold. Normalize $\\var(X) = 1$ and $\\var(W_1) = I$. 
\n\\begin{enumerate}\n\\item If $\\underline{b} \\ge \\beta_{\\text{med}}$ then $\\bar{r}_Y^\\text{bf}(\\bar{r}_{X}, \\bar{c}, \\underline{b}) = 0$.\n\n\\item If $\\underline{B}(\\bar{r}_X,\\bar{c}) > \\underline{b}$, then $\\bar{r}_Y^\\text{bf}(\\bar{r}_{X}, \\bar{c}, \\underline{b}) = +\\infty$.\n \n\\item If $\\underline{B}(\\bar{r}_X,\\bar{c}) \\le \\underline{b} < \\beta_\\text{med}$, then\n\\begin{align*}\n \\bar{r}^\\text{bf}_Y(\\bar{r}_X,\\bar{c}, \\underline{b}) = \\min_{(z,c_1,c_2,b) \\in (-\\sqrt{k_0},\\sqrt{k_0}) \\times \\ensuremath{\\mathbb{R}} \\times \\ensuremath{\\mathbb{R}} \\times (-\\infty, \\underline{b}]}& \\ \\ \\underline{r}_Y(z,c_1\\sigma_{W_1,Y} + c_2 \\sigma_{W_1,X},b) \\\\\n \\text{subject to }& \\ \\ p(z, c_1\\sigma_{W_1,Y} + c_2 \\sigma_{W_1,X}; \\bar{r}_X) \\ge 0 \\\\\n &\\ \\ (b - \\beta_{\\text{med}})^2 < \\text{devsq}(z) \\\\\n &\\ \\ \\|c_1\\sigma_{W_1,Y} + c_2 \\sigma_{W_1,X}\\| \\leq \\bar{c}.\n\\end{align*}\n\\end{enumerate}\n\\end{theorem}\n\nTheorem \\ref{cor:BFCalculation3D} shows that the three dimensional breakdown frontier can be computed as the solution to a smooth optimization problem. Importantly, this problem only requires searching over a 4-dimensional space. In particular, this dimension does not depend on the dimension of the covariates $W_1$. Consequently, it remains computationally feasible even with a large number of observed covariates, as is often the case in empirical practice. For example, the results for our empirical application take about 15 seconds to compute.\n\n\\subsection{Interpreting the Sensitivity Parameters}\\label{sec:interpretation}\n\nThus far we have introduced the sensitivity parameters (section \\ref{sec:sensitivityParameters}) and described their implications for identification (section \\ref{sec:mainIdentification}). Next we make several remarks regarding how to interpret the magnitudes of these parameters.\n\n\\subsubsection*{Which Covariates to Calibrate Against?}\n\nAs we discuss below, one of the main benefits of using \\emph{relative} sensitivity parameters like $\\bar{r}_X$ is that $\\bar{r}_X = 1$ is a natural reference point of ``equal selection.'' However, the interpretation of this reference point depends on the choice of covariates that we calibrate against. Put differently, when we say that we compare ``selection on unobservables to selection on observables,'' \\emph{which} observables do we mean?\n\nTo answer this, we split the observed covariates into two groups: (1) The control covariates, which we label $W_0$, and (2) The calibration covariates, which we continue to label $W_1$. Write equation \\eqref{eq:outcome} as\n\\[\n\tY = \\beta_\\text{long} X + \\gamma_0' W_0 + \\gamma_1' W_1 + \\gamma_2 W_2 + Y^{\\perp X,W}\n\t\\tag{\\ref{eq:outcome}$^\\prime$}\n\\]\nwhere $W = (W_0,W_1,W_2)$. Likewise, write equation \\eqref{eq:XprojectionW1W2} as\n\\[\n\tX = \\pi_0' W_0 + \\pi_1' W_1 + \\pi_2 W_2 + X^{\\perp W}.\n\t\\tag{\\ref{eq:XprojectionW1W2}$^\\prime$}\n\\]\nThe key difference from our earlier analysis is that, like in assumption A\\ref{assump:rx}, we will continue to only compare $\\pi_1$ with $\\pi_2$. That is, we only compare the omitted variable to the observed variables in $W_1$; we do not use $W_0$ for calibration. 
A similar remark applies to A\\ref{assump:ry}.\n\nThis distinction between control and calibration covariates is useful because in many applications we do not necessarily think the omitted variables have explanatory power similar to that of \\emph{all} of the observed covariates included in the model. For example, in our empirical application in section \\ref{sec:empirical}, we include state fixed effects as control covariates, but we do not use them for calibration. \n\nWe next briefly describe how to generalize our results in section \\ref{sec:mainIdentification} to account for this distinction. By the FWL theorem, a linear projection of $Y$ onto $(1,X^{\\perp W_0}, W_1^{\\perp W_0}, W_2^{\\perp W_0})$ has the same coefficients as equation \\eqref{eq:outcome}. Likewise for a linear projection of $X$ onto $(1,W_1^{\\perp W_0}, W_2^{\\perp W_0})$. Hence we can write\n\\begin{align*}\n\tY &= \\beta_\\text{long} X^{\\perp W_0} + \\gamma_1'W_1^{\\perp W_0} + \\gamma_2 W_2^{\\perp W_0} + \\widetilde{U}\\\\\n\tX &= \\pi_1' W_1^{\\perp W_0} + \\pi_2 W_2^{\\perp W_0} + \\widetilde{V}\n\\end{align*}\nwhere \n\\begin{align*}\n\t\\widetilde{U} &= Y^{\\perp X^{\\perp W_0}, W_1^{\\perp W_0}, W_2^{\\perp W_0}} \\qquad \\text{ and} \\qquad\t\\widetilde{V} = X^{\\perp W_1^{\\perp W_0}, W_2^{\\perp W_0}}.\n\\end{align*}\nBy construction, $\\widetilde{U}$ has zero covariance with $(X^{\\perp W_0},W_1^{\\perp W_0},W_2^{\\perp W_0})$ and $\\widetilde{V}$ has zero covariance with $(W_1^{\\perp W_0}, \\allowbreak W_2^{\\perp W_0})$. Therefore our earlier results continue to hold when $(X,W_1,W_2)$ are replaced with $(X^{\\perp W_0}, \\allowbreak W_1^{\\perp W_0}, \\allowbreak W_2^{\\perp W_0})$. Finally, this change implies that $\\bar{c}$ should be interpreted as an upper bound on $R_{W_2 \\sim W_1 \\mathrel{\\mathsmaller{\\bullet}} W_0}$, the square root of the R-squared from the regression of $W_2$ on $W_1$ after partialling out $W_0$.\n\n\\subsubsection*{What is a Robust Result?}\n\nHow should researchers determine what values of $\\bar{r}_X$ and $\\bar{r}_Y$ are large and what values are small? As in \\cite{AltonjiElderTaber2005} and \\cite{Oster2019}, these are \\emph{relative} sensitivity parameters. Consequently, the values $\\bar{r}_X = 1$ and $\\bar{r}_Y = 1$ are natural reference points. Specifically, when $\\bar{r}_X < 1$, assumption A\\ref{assump:rx} implies that the magnitude of the coefficient on the unobservable $W_2$ in equation \\eqref{eq:XprojectionW1W2} is smaller than the magnitude $\\| \\pi_1 \\|_{\\Sigma_{\\text{obs}}}$ of the coefficients on the observable controls $W_1$ in that same equation. This is one way to formalize the idea that ``selection on unobservables is smaller than selection on observables.'' Likewise, when $\\bar{r}_X > 1$, we can think of this as formalizing the idea that ``selection on unobservables is larger than selection on observables.'' A similar interpretation applies to $\\bar{r}_Y$. These interpretations do \\emph{not}, however, imply that the value 1 should be thought of as a universal, context independent cutoff between ``small'' and ``large'' values of these two sensitivity parameters.\n\nWhy? As we described above, researchers must choose which of their observed covariates should be used to calibrate against. Consequently, the choice of $W_0$ (and hence $W_1$) affects the interpretation of the magnitude of $\\bar{r}_X$.
One way in which this choice manifests itself is via its impact on the breakdown point: Including more relevant variables in $W_1$ will tend to shrink the breakdown point $\\bar{r}_X^\\text{bp}$, because the explanatory power of the observables we're calibrating against increases when we move variables from $W_0$ to $W_1$. This observation does \\emph{not} necessarily imply that the result is becoming less robust, but rather that the standard by which we are measuring sensitivity is changing. If the calibration variables $W_1$ have a large amount of explanatory power, then even an apparently small value of $\\bar{r}_X^\\text{bp}$ like 0.3 could be considered robust. Conversely, when the calibration variables $W_1$ do not have much explanatory power, then even an apparently large value of $\\bar{r}_X^\\text{bp}$ like 3 could still indicate a sensitive result.\n\nThis discussion can be summarized by the following relationship:\n\\begin{equation}\\label{eq:pseudoEq}\n\t\\text{Selection on Unobservables} = r \\cdot (\\text{Selection on Observables}).\n\\end{equation}\nThe left hand side is the absolute magnitude of selection on unobservables, while the right hand side is the proportion $r$ of the absolute magnitude of selection on observables. $\\bar{r}_X^\\text{bp}$ is a bound on $r$. Our discussion above merely states that even if the bound $\\bar{r}_X^\\text{bp}$ on $r$ seems small, the magnitude of selection on unobservables allowed for can be very large if the magnitude of selection on observables is large. And conversely, even if $\\bar{r}_X^\\text{bp}$ seems large, the amount of unobserved selection allowed for must be small if the magnitude of selection on observables is also small.\n\nOverall, the value of using relative sensitivity parameters like $\\bar{r}_X$ is not that they allow us to obtain a universal threshold for what is or is not a robust result. Instead, such parameters give researchers a \\emph{unit free} measurement of sensitivity that is \\emph{interpretable} in terms of the effects of observed variables. Finally, note that the issues we've raised in this discussion equally apply to the existing methods in the literature as well, including \\cite{Oster2019}; they are not unique to our analysis.\n\n\\subsubsection*{Assessing Exogenous Controls}\n\nThus far we have discussed the interpretation of $\\bar{r}_X$ and $\\bar{r}_Y$. Next consider $\\bar{c}$. This is a constraint on the covariance between the observed calibration covariates and the unobserved covariate. In particular, recall that it is a bound on $R_{W_2 \\sim W_1 \\mathrel{\\mathsmaller{\\bullet}} W_0}$. So what values of this parameter should be considered large, and what values should be considered small? One way to calibrate this parameter is to compute\n\\[\n\tc_k = R_{W_{1k} \\sim W_{1,-k} \\mathrel{\\mathsmaller{\\bullet}} W_0}\n\\]\nfor each covariate $k$ in $W_1$. That is, compute the square root of the population R-squared from the regression of $W_{1k}$ on the rest of the calibration covariates $W_{1,-k}$, after partialling out the control covariates $W_0$. These numbers tell us two things. First, if many of these values are nonzero and large, we may worry that the exogenous controls assumption fails. That is, if $W_2$ is in some way similar to the observed covariates $W_1$, then we might expect that $R_{W_2 \\sim W_1 \\mathrel{\\mathsmaller{\\bullet}} W_0}$ is similar to some of the $c_k$'s. So this gives us one method for assessing the plausibility of exogenous controls.
Second, we can use the magnitudes of these values to calibrate our choice of $\\bar{c}$, in analysis based on Theorems \\ref{thm:IdsetRyFree} or \\ref{cor:BFCalculation3D}. For example, you could choose the largest value of $c_k$. A less conservative approach would be to select the median value.\n\n\\subsection{Estimation and Inference}\\label{sec:estimationInference}\n\nThus far we have described population level identification results. In practice, we only observe finite sample data. Our identification results depend on the observables $(Y,X,W_1)$ solely through their covariance matrix. In our empirical analysis in section \\ref{sec:empirical}, we apply our identification results by using sample analog estimators that replace $\\var(Y,X,W_1)$ with a consistent estimator $\\widehat{\\var}(Y,X,W_1)$. For example, we let $\\widehat{\\beta}_\\text{med}$ denote the OLS estimator of $\\beta_\\text{med}$, the coefficient on $X$ in the medium regression of $Y$ on $(1,X,W_1)$. We expect the corresponding asymptotic theory for estimation and inference on the bound functions to be straightforward, but for brevity we do not develop it in this paper. Inference on the breakdown points and frontiers could also be done as in \\cite{MastenPoirier2019BF}.\n\n\\section{Causal Models}\\label{sec:causalModels}\n\nIn this section we describe three different causal models in which the parameter $\\beta_\\text{long}$ in equation \\eqref{eq:outcome} has a causal interpretation. These models are based on three different kinds of identification strategies: Unconfoundedness, difference-in-differences, and instrumental variables. Here we focus on simple models, but keep in mind that our analysis can be used anytime the causal parameter of interest can be written as the coefficient on a treatment variable in a long regression of the form in equation \\eqref{eq:outcome}.\n\n\\subsection{Unconfoundedness}\n\nRecall that $Y$ denotes the realized outcome, $X$ denotes treatment, $W_1$ denotes the observed covariates, and $W_2$ denotes the unobserved variables of concern. Let $Y(x)$ denote potential outcomes, where $x$ is any logically possible value of treatment. Assume this potential outcome has the following form:\n\\begin{equation}\\label{eq:OsterOutcomeEq}\n\tY(x) = \\beta_c x + \\gamma_1' W_1 + \\gamma_2 W_2 + U\n\\end{equation}\nwhere $(\\beta_c,\\gamma_1,\\gamma_2)$ are unknown constants. The parameter of interest is $\\beta_c$, the causal effect of treatment on the outcome. $U$ is an unobserved random variable. Suppose the realized outcome satisfies $Y = Y(X)$. Consider the following assumption.\n\\begin{itemize}\n\\item[] Linear Latent Unconfoundedness: $\\text{corr}(X^{\\perp W_1,W_2}, U^{\\perp W_1,W_2}) = 0$.\n\\end{itemize}\nThis assumption says that, after partialling out the observed covariates $W_1$ and the unobserved variables $W_2$, treatment is uncorrelated with the unobserved variable $U$. This model has two unobservables, which are treated differently via this assumption. We call $W_2$ the \\emph{confounders} and $U$ the \\emph{non-confounders}. $W_2$ are the unobserved variables which, when omitted, may cause bias. In contrast, as long as we adjust for $(W_1,W_2)$, omitting $U$ does not cause bias. 
Note that, given equation \\eqref{eq:OsterOutcomeEq}, linear latent unconfoundedness can be equivalently written as $\\text{corr}(X^{\\perp W_1,W_2}, Y(x)^{\\perp W_1,W_2}) = 0$ for all logically possible values of treatment $x$.\n\nLinear latent unconfoundedness is a linear parametric version of the nonparametric latent unconfoundedness assumption\n\\begin{equation}\\label{eq:nonparaLatentUnconf}\n\tY(x) \\indep X \\mid (W_1,W_2)\n\\end{equation}\nfor all logically possible values of $x$. In particular, with the linear potential outcomes assumption of equation \\eqref{eq:OsterOutcomeEq}, nonparametric latent unconfoundedness (equation \\eqref{eq:nonparaLatentUnconf}) implies linear latent unconfoundedness. We use the linear parametric version to avoid overidentifying restrictions that can arise from the combination of linearity and statistical independence.\n\nThe following result shows that, in this model, the causal effect of $X$ on $Y$ can be obtained from $\\beta_\\text{long}$, the coefficient on $X$ in the long regression described in equation \\eqref{eq:outcome}.\n\n\\begin{proposition}\\label{prop:unconfoundednessBaseline}\nConsider the linear potential outcomes model \\eqref{eq:OsterOutcomeEq}. Suppose linear latent unconfoundedness holds. Suppose A\\ref{assump:posdefVar} holds. Then $\\beta_c = \\beta_\\text{long}$.\n\\end{proposition}\n\nSince $W_2$ is unobserved, however, this result cannot be used to identify $\\beta_c$. Instead, suppose we believe the no selection on unobservables assumption A\\ref{assump:baselineEndogControl1}. Recall that this assumption says that $\\pi_2 = 0$, where $\\pi_2$ is the coefficient on $W_2$ in the OLS estimand of $X$ on $(1,W_1,W_2)$. Under this assumption, we obtain the following result. Recall that $\\beta_\\text{med}$ denotes the coefficient on $X$ in the medium regression of $Y$ on $(1,X,W_1)$. \n\n\\begin{corollary}\\label{corr:SelectionOnObsCausal}\nSuppose the assumptions of Proposition \\ref{prop:unconfoundednessBaseline} hold. Suppose A\\ref{assump:baselineEndogControl1} holds ($\\pi_2 = 0$). Then $\\beta_c = \\beta_\\text{med}$.\n\\end{corollary}\n\nThe selection on observables assumption A\\ref{assump:baselineEndogControl1} is usually thought to be quite strong, however. Nonetheless, since $\\beta_c = \\beta_\\text{long}$, our results in section \\ref{sec:MainNewAnalysis} can be used to assess sensitivity to selection on unobservables.\n\n\\subsection{Difference-in-differences}\n\nLet $Y_t(x_t)$ denote potential outcomes at time $t$, where $x_t$ is a logically possible value of treatment. For simplicity we do not consider models with dynamic effects of treatment or of covariates. Also suppose $W_{2t}$ is a scalar for simplicity. Suppose\n\\begin{equation}\\label{eq:DiDoutcomeEq}\n\tY_t(x_t) = \\beta_c x_t + \\gamma_1' W_{1t} + \\gamma_2 W_{2t} + V_t\n\\end{equation}\nwhere $V_t$ is an unobserved random variable and $(\\beta_c,\\gamma_1,\\gamma_2)$ are unknown parameters that are constant across units. The classical two way fixed effects model is a special case where\n\\begin{equation}\\label{eq:TWFE}\n\tV_t = A + \\delta_t + U_t.\n\\end{equation}\nwhere $A$ is an unobserved random variable that is constant over time, $\\delta_t$ is an unobserved constant, and $U_t$ is an unobserved random variable.\n\nSuppose there are two time periods, $t \\in \\{1,2\\}$. Let $Y_t = Y_t(X_t)$ denote the observed outcome at time $t$. For any time varying random variable like $Y_t$, let $\\Delta Y = Y_2 - Y_1$. 
Then taking first differences of the observed outcomes yields\n\\[\n\t\\Delta Y = \\beta _c \\Delta X + \\gamma_1' \\Delta W_1 + \\gamma_2 \\Delta W_2 + \\Delta V.\n\\]\nLet $\\beta_\\text{long}$ denote the OLS coefficient on $\\Delta X$ from the long regression of $\\Delta Y$ on $(1,\\Delta X, \\Delta W_1, \\Delta W_2)$.\n\n\\begin{proposition}\\label{prop:DiD}\nConsider the linear potential outcome model \\eqref{eq:DiDoutcomeEq}. Suppose the following exogeneity assumption holds:\n\\begin{itemize}\n\\item $\\cov(\\Delta X, \\Delta V) = 0$, $\\cov(\\Delta W_2, \\Delta V) = 0$, and $\\cov(\\Delta W_1, \\Delta V) = 0$.\n\\end{itemize}\nThen $\\beta_c = \\beta_\\text{long}$.\n\\end{proposition}\n\nThe exogeneity assumption in Proposition \\ref{prop:DiD} says that $\\Delta V$ is uncorrelated with all components of $(\\Delta X, \\Delta W_1, \\Delta W_2)$. A sufficient condition for this is the two way fixed effects assumption \\eqref{eq:TWFE} combined with the assumption that the $U_t$ are uncorrelated with $(X_s,W_{1s}, W_{2s})$ for all $t$ and $s$. Given this exogeneity assumption, the only possible identification problem is that $\\Delta W_2$ is unobserved. Hence we cannot adjust for this trend variable. If we assume, however, that treatment trends $\\Delta X$ are not related to the unobserved trend $\\Delta W_2$, then we can point identify $\\beta_c$. Specifically, consider the linear projection of $\\Delta X$ onto $(1,\\Delta W_1, \\Delta W_2)$:\n\\[\n\t\\Delta X = \\pi_1' (\\Delta W_1) + \\pi_2 (\\Delta W_2) + (\\Delta X)^{\\perp \\Delta W_1, \\Delta W_2}.\n\\]\nUsing this equation to define $\\pi_2$, we now have the following result. Here we let $\\beta_\\text{med}$ denote the coefficient on $\\Delta X$ in the medium regression of $\\Delta Y$ on $(1, \\Delta X, \\Delta W_1)$.\n\n\\begin{corollary}\\label{corr:DiD}\nSuppose the assumptions of Proposition \\ref{prop:DiD} hold. Suppose A\\ref{assump:baselineEndogControl1} holds ($\\pi_2 = 0$). Then $\\beta_c = \\beta_\\text{med}$.\n\\end{corollary}\n\nThis result implies that $\\beta_c$ is point identified when $\\pi_2 = 0$. This assumption is a version of common trends, because it says that the unobserved trend $\\Delta W_2$ is not related to the trend in treatments, $\\Delta X$. Our results in section \\ref{sec:MainNewAnalysis} allow us to analyze the impacts of failure of this common trends assumption on conclusions about the causal effect of $X$ on $Y$, $\\beta_c$. In particular, our results allow researchers to assess the failure of common trends by comparing the impact of observed time varying covariates with the impact of unobserved time varying confounders. In this context, allowing for endogenous controls means allowing for the trend in observed covariates to correlate with the trend in the unobserved covariates.\n\n\\subsection{Instrumental variables}\n\nLet $Z$ be an observed variable that we want to use as an instrument. Let $Y(z)$ denote potential outcomes, where $z$ is any logical value of the instrument. Assume\n\\[\n\tY(z) = \\beta_c z + \\gamma_1' W_1 + \\gamma_2 W_2 + U\n\\]\nwhere $U$ is an unobserved scalar random variable and $(\\beta_c, \\gamma_1, \\gamma_2)$ are unknown constants. Thus $\\beta_c$ is the causal effect of $Z$ on $Y$. In an instrumental variables analysis, this is typically called the reduced form causal effect, and $Y(z)$ are reduced form potential outcomes. Suppose $\\cov(Z,U) = 0$, $\\cov(W_2, U) = 0$, and $\\cov(W_1, U) = 0$. 
Then $\\beta_c$ equals the OLS coefficient on $Z$ from the long regression of $Y$ on $(1,Z,W_1,W_2)$. In this model, Theorem \\ref{thm:baselineEndogControl1} implies that $\\beta_c$ is also obtained as the coefficient on $Z$ in the medium regression of $Y$ on $(1,Z,W_1)$, and thus is point identified. In this case, assumption A\\ref{assump:baselineEndogControl1} is an instrument exogeneity assumption. Our results in section \\ref{sec:MainNewAnalysis} thus allow us to analyze the impacts of instrument exogeneity failure on conclusions about the reduced form causal effect of $Z$ on $Y$, $\\beta_c$.\n\nIn a typical instrumental variable analysis, the reduced form causal effect of the instrument on outcomes is not the main effect of interest. Instead, we usually care about the causal effect of a treatment variable on outcomes. The reduced form is often just an intermediate tool for learning about that causal effect. Our analysis in this paper can be used to assess the sensitivity of conclusions about this causal effect to failures of instrument exclusion or exogeneity too. This analysis is somewhat more complicated, however, and so we develop it in a separate paper. Nonetheless, empirical researchers do sometimes examine the reduced form directly to study the impact of instrument exogeneity failure. For example, see section D7 and table D15 of \\cite{Tabellini2020}.\n\n\\section{Empirical Application: The Frontier Experience and Culture}\\label{sec:empirical}\n\nWhere does culture come from? \\cite{BFG2020} study the origins of people's preferences for or against government redistribution, intervention, and regulation. They provide the first systematic empirical analysis of a famous conjecture that living on the American frontier cultivated individualism and antipathy to government intervention. The idea is that life on the frontier was hard and dangerous, had little to no infrastructure, and required independence and self-reliance to survive. It was far from the federal government. And it was an opportunity for upward mobility through effort, rather than luck. These features then create cultural change, in particular, leading to ``more pervasive individualism and opposition to redistribution''. Overall, \\cite{BFG2020} find evidence supporting this frontier life conjecture.\n\nThe main results in \\cite{BFG2020} are based on an unconfoundedness identification strategy and use linear models. They note that ``the main threat to causal identification of $\\beta$ lies in omitted variables'' and hence they strongly rely on Oster's (2019) method to ``show that unobservables are unlikely to drive our results'' (page 2344). As we have discussed, however, this approach is based on the exogenous controls assumption. In this section, we apply our methods to examine the impact of allowing for endogenous controls on Bazzi et al.'s empirical conclusions. Overall, we come to a more nuanced conclusion about robustness: While they found that all of their analyses were robust to the presence of omitted variables, we find that their analysis using questionnaire based outcomes is quite sensitive, but their analysis using property tax levels and voting patterns is robust. We also find suggestive evidence that the controls are endogenous, which highlights the value of sensitivity analysis methods that allow for endogenous controls. We discuss all of these findings in more detail below.\n\n\\subsection{Data}\n\nWe first describe the variables and data sources. 
The main units of analysis are counties in the U.S., although we will also use some individual level data. The treatment $X$ is the ``total frontier experience'' (TFE). This is defined as the number of years between 1790 and 1890 a county spent ``on the frontier'', divided by 10. A county is ``on the frontier'' if it had a population density less than 6 people per square mile and was within 100 km of the ``frontier line''. The frontier line is a line that divides sparse counties (less than or equal to 2 people per square mile) from less sparse counties. By definition, the frontier line changed over time in response to population patterns, but it did so unevenly, resulting in some counties being ``on the frontier'' for longer than others. Figure 3 in \\cite{BFG2020} shows the spatial distribution of treatment.\n\nThe outcome variable $Y$ is a measure of modern culture. \\cite{BFG2020} consider 8 different outcome variables. Since data is not publicly available for all of them, we only look at 5 of these. They can be classified into two groups. The first are questionnaire based outcomes:\n\\begin{enumerate}\n\\item \\emph{Cut spending on the poor}. This variable comes from the 1992 and 1996 waves of the American National Election Study (ANES), a nationally representative survey. In those waves, it asked\n\\begin{itemize}\n\\item[] ``Should federal spending be increased, decreased, or kept about the same on poor people?''\n\\end{itemize}\nLet $Y_{1i} = 1$ if individual $i$ answered ``decreased'' and 0 otherwise.\n\n\\item \\emph{Cut welfare spending}. This variable comes from the Cooperative Congressional Election Study (CCES), waves 2014 and 2016. In those waves, it asked\n\\begin{itemize}\n\\item[] ``State legislatures must make choices when making spending decisions on important state programs. Would you like your legislature to increase or decrease spending on Welfare? 1. Greatly Increase 2. Slightly Increase 3. Maintain 4. Slightly Decrease 5. Greatly Decrease.''\n\\end{itemize}\nLet $Y_{2i} = 1$ if individual $i$ answered ``slightly decrease'' or ``greatly decrease'' and 0 otherwise.\n\n\\item \\emph{Reduce debt by cutting spending}. This variable also comes from the CCES, waves 2000--2014 (biannual). It asked\n\\begin{itemize}\n\\item[] ``The federal budget deficit is approximately [\\$ year specific amount] this year. If the Congress were to balance the budget it would have to consider cutting defense spending, cutting domestic spending (such as Medicare and Social Security), or raising taxes to cover the deficit. Please rank the options below from what would you most prefer that Congress do to what you would least prefer they do: Cut Defense Spending; Cut Domestic Spending; Raise Taxes.''\n\\end{itemize}\nLet $Y_{3i} = 1$ if individual $i$ chooses ``cut domestic spending'' as a first priority, and 0 otherwise.\n\\end{enumerate}\nThese surveys also collected data on individual demographics, specifically age, gender, and race. The second group of outcome variables is based on behavior rather than questionnaire responses:\n\\begin{enumerate}\n\\setcounter{enumi}{3}\n\n\\item $Y_{4i}$ is the average effective \\emph{property tax rate} in county $i$, based on 2010--2014 data from the National Association of Home Builders (NAHB), which itself uses data from the American Community Survey (ACS) waves 2010--2014.\n\n\\item $Y_{5i}$ is the average \\emph{Republican vote share} over the five presidential elections from 2000 to 2016 in county $i$, using data from Leip's Atlas of U.S.
Presidential Elections.\n\\end{enumerate}\n\nNext we describe the observed covariates. We partition these covariates into $W_1$ and $W_0$ by following the implementation of Oster's \\citeyearpar{Oster2019} approach in \\cite{BFG2020}. $W_1$, the calibration covariates which \\emph{are} used to calibrate selection on unobservables, is a set of geographic and climate controls: Centroid Latitude, Centroid Longitude, Land area, Average rainfall, Average temperature, Elevation, Average potential agricultural yield, and Distance from the centroid to rivers, lakes, and the coast. $W_0$, the control covariates which are \\emph{not} used to calibrate selection on unobservables, includes state fixed effects. The questionnaire based outcomes use individual level data. For those analyses, we also include age, age-squared, gender, race, and survey wave fixed effects in $W_0$. In \\cite{BFG2020}, they were included in $W_1$. We instead include them in $W_0$ to keep the set of calibration covariates $W_1$ constant across the five main specifications.\n\n\\subsection{Baseline Model Results}\n\n\\cite{BFG2020} has a variety of analyses. We focus on the subset of their main results for which replication data is publicly available. These are columns 1, 2, 4, 6, and 7 of their table 3. Panel A in our table \\ref{table:mainTable1} replicates those results. From columns (1)--(3) we see that individuals who live in counties with more exposure to the frontier prefer cutting spending on the poor, on welfare, and to reduce debt by spending cuts. Moreover, these point estimates are statistically significant at conventional levels. From columns (4) and (5), we see that counties with more exposure to the frontier have lower property taxes and are more likely to vote for Republicans. As \\cite{BFG2020} argue, these baseline results therefore support the conjecture that frontier life led to opposition to government intervention and redistribution.\n\n\\def\\rule{0pt}{1.25\\normalbaselineskip}{\\rule{0pt}{1.25\\normalbaselineskip}}\n\\begin{table}[t]\n\\centering\n\\SetTblrInner[talltblr]{rowsep=0pt}\n\\resizebox{.98\\textwidth}{!}{\n\\begin{talltblr}[\n caption = {The Effect of Frontier Life on Opposition to Government Intervention and Redistribution.\\label{table:mainTable1}},\n remark{Note} = {Panel A and the first row of Panel B replicate columns 1, 2, 4, 6, and 7 of table 3 in \\cite*{BFG2020}, while the second row of Panel B and Panel C are new. As in \\cite{BFG2020}, Panel B uses Oster's rule of thumb choice $R_\\text{long}^2 = 1.3 R_\\text{med}^2$.},\n]{\np{0.25\\textwidth}>{\\centering}\n*{3}{>{\\centering \\arraybackslash}p{0.15\\textwidth}} |\n*{2}{>{\\centering \\arraybackslash}p{0.15\\textwidth}}\n}\n \\toprule\n\\rule{0pt}{1.25\\normalbaselineskip}\n& Prefers Cut Public Spending on Poor & Prefers Cut Public Spending on Welfare &Prefers Reduce Debt by Spending Cuts & County Property Tax Rate &Republican Presidential Vote Share \\\\[4pt]\n & (1) & (2) & (3) & (4) & (5) \\\\[4pt]\n \\hline\n\\multicolumn{5}{l}{Panel A. Baseline Results} \\rule{0pt}{1.25\\normalbaselineskip} \\\\[0.5em]\n\\hline\n\\rule{0pt}{1.25\\normalbaselineskip}\nTotal Frontier Exp. 
& 0.010 & 0.007 & 0.014 & -0.034 & 2.055 \\\\\n& {\\footnotesize (0.004) } & {\\footnotesize (0.003) } & {\\footnotesize (0.002) } & {\\footnotesize (0.007)} & {\\footnotesize (0.349)} \\\\[2pt]\n \\rule{0pt}{1.25\\normalbaselineskip}\nMean of Dep Variable & 0.09 & 0.40 & 0.41 & 1.02 & 60.04 \\\\ \nNumber of Individuals & 2322 & 53,472 & 111,853 & - & - \\\\\nNumber of Counties & 95 & 1863 & 1963 & 2029 & 2036 \\\\[2pt]\n \\rule{0pt}{1.25\\normalbaselineskip}\n Controls: & & & & & \\\\[2pt]\n \\ \\ Survey Wave FEs & X & X & X & - & - \\\\\n \\ \\ Ind.\\ Demographics & X & X & X & - & - \\\\\n \\ \\ State Fixed Effects & X & X & X & X & X \\\\\n \\ \\ Geographic\/Climate & X & X & X &X & X \\\\[2pt]\n \\hline\n\\multicolumn{5}{l}{Panel B. Sensitivity Analysis (Exogenous Controls; Oster 2019)} \\rule{0pt}{1.25\\normalbaselineskip} \\\\[0.5em]\n\\hline\n\\rule{0pt}{1.25\\normalbaselineskip}\n$\\delta^\\text{bp}$ (wrong) & 16.01 & 3.1 & 5.9 & -27.4 & -8.5 \\\\\n$\\delta^\\text{bp}$ (correct) & 2.28 & 3.05 & 2.58 & 90.7 & -23.3 \\\\[2pt]\n \\hline\n\\multicolumn{5}{l}{Panel C. Sensitivity Analysis (Endogenous Controls)} \\rule{0pt}{1.25\\normalbaselineskip} \\\\[0.5em]\n\\hline\n\\rule{0pt}{1.25\\normalbaselineskip}\n$\\bar{r}_X^\\text{bp}$ ($\\times 100$) & 2.83 & 3.05 & 5.85 & 72.0 & 80.4 \\\\[2pt]\n\\bottomrule\n\\end{talltblr}\n}\n\\end{table}\n\n\\subsection{Assessing Selection on Observables}\n\nThe baseline results in table 1 rely on a selection on observables assumption, that treatment $X$ is exogenous after adjusting for the observed covariates $(W_0,W_1)$. How plausible is this assumption? \\cite{BFG2020} say\n\\begin{itemize}\n\\item[] ``The main threat to causal identification of $\\beta$ lies in omitted variables correlated with both contemporary culture and TFE. We address this concern in four ways. First, we rule out confounding effects of modern population density. Second, we augment [the covariates] to remove cultural variation highlighted in prior work. \\emph{Third, we show that unobservables are unlikely to drive our results.} Finally, we use an IV strategy that isolates exogenous variation in TFE due to changes in national immigration flows over time.'' (page 2344, emphasis added)\n\\end{itemize}\nTheir first two approaches continue to rely on selection on observables, simply by including additional control variables. We focus on their third strategy: to use a formal econometric method to assess the importance of omitted variables. \n\n\\subsubsection*{Sensitivity Analysis Assuming Exogenous Controls}\n\nWe start by summarizing the sensitivity analysis based on \\cite{Oster2019} (hereafter Oster), as used in \\cite{BFG2020}. Oster's analysis uses two sensitivity parameters: (i) $\\delta$, which we define in equation \\eqref{eq:defOfDelta} in appendix \\ref{subsec:osterredef} and (ii) $R_\\text{long}^2$, the R-squared from the long regression of $Y$ on $(1,X,W_0,W_1,W_2)$, including the omitted variable of concern $W_2$. For any choice of $(\\delta,R_\\text{long}^2)$, Oster's Proposition 2 derives the identified set for $\\beta_\\text{long}$. Oster's Proposition 3 derives the breakdown point for $\\delta$, as a function of $R_\\text{long}^2$, for the conclusion that the identified set does not contain zero. Denote this point by $\\delta^\\text{bp}(R_\\text{long}^2)$. This is the smallest value of $\\delta$ such that the identified set contains zero. 
Put differently: For any $\\delta < \\delta^\\text{bp}(R_\\text{long}^2)$, the true value of $\\beta_\\text{long}$ cannot be zero.\n\nThe second row of Panel B of table \\ref{table:mainTable1} shows sample analog estimates of this breakdown point, which is commonly referred to as \\emph{Oster's delta}. As in \\cite{BFG2020}, we use Oster's rule of thumb choice $R_\\text{long}^2 = 1.3 R_\\text{med}^2$. $R_\\text{med}^2$ is the R-squared from the medium regression of $Y^{\\perp W_0}$ on $(1,X^{\\perp W_0},W_1^{\\perp W_0})$, which can be estimated from the data. Thus the table shows estimates of $\\delta^\\text{bp}(1.3 R_\\text{med}^2)$. The first row of Panel B shows the values of Oster's delta as reported in table 2 of \\cite{BFG2020}. These were incorrectly computed. It appears to us that, rather than using the correct expression in Proposition 3 of \\cite{Oster2019}, they set the first displayed equation on page 193 of that paper equal to zero and solved for $\\delta$. That does not give the correct breakdown point. Note that we noticed this same mistake in several other papers published in top 5 economics journals.\n\n\\cite{BFG2020} conclude:\n\\begin{itemize}\n\\item[] ``Oster (2019) suggests $| \\delta | > 1$ leaves limited scope for unobservables to explain the results'' and therefore, based on their $\\delta^\\text{bp}$ estimates, ``unobservables are unlikely to drive our results'' (page 2344)\n\\end{itemize}\nThis conclusion remains unchanged if the same rule is applied to the correctly computed $\\delta^\\text{bp}$ estimates.\n\n\\subsubsection*{Assessing Exogenous Controls}\n\nAs we have discussed, Oster's method combined with the $\\delta = 1$ cutoff rule relies on the exogenous controls assumption. Is exogenous controls plausible in this application? The answer depends on which omitted variables $W_2$ we are concerned about. \\cite{BFG2020} does not specifically describe the unmeasured omitted variables of concern, nor do they discuss the plausibility of exogenous controls. However, in their extra robustness checks they consider the following variables:\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{l l}\nContemporary population density & Sex ratio \\\\\nConflict with Native Americans & Rainfall risk \\\\\nEmployment share in manufacturing & Portage sites \\\\\nMineral resources & Prevalence of slavery \\\\\nImmigrant share & Scotch-Irish settlement \\\\\nTiming of railroad access & Birthplace diversity \\\\\nRuggedness &\n\\end{tabular}\n\\end{table}\n\n\\noindent The additional omitted variables of concern might therefore be similar to these variables. Thus the question is: Are \\emph{all} of the geographic\/climate variables in $W_1$ uncorrelated with variables like these? This seems unlikely, especially since many of these additional variables are also geographic\/climate type variables. Moreover, although this assumption is not falsifiable---since $W_2$ is unobserved---we can assess its plausibility by examining the correlation structure of the observed covariates. Specifically, we compute the parameters $c_k$ defined in section \\ref{sec:interpretation}. These are square roots of R-squareds from regressing each element of $W_1$ on the other elements, after partialling out $W_0$. Table \\ref{table:ck_calib} shows sample analog estimates of these $c_k$'s.\n\nThe estimates in table \\ref{table:ck_calib} show a substantial range of correlation between the observed covariates in $W_1$. 
Recall that the exogenous controls assumption says that each element of $W_1$ is uncorrelated with $W_2$, after partialling out $W_0$. Thus if $W_2$ were included in this table it would have a value of zero. Therefore, if $W_2$ is a variable similar to the components of $W_1$ then we would expect exogenous controls to fail. This suggests that it is important to use sensitivity analysis methods that allow for endogenous controls.\n\n\def\rule{0pt}{1.25\normalbaselineskip}{\rule{0pt}{1.25\normalbaselineskip}}\n\n\begin{table}[h]\n\caption{Correlations Between Observed Covariates. \label{table:ck_calib}}\n\centering\n\begin{tabular}{l c}\n\toprule\n$W_{1k}$ & $\widehat{R}_{W_{1k} \sim W_{1,-k} \mathrel{\mathsmaller{\bullet}} W_0}$ \\\\[4pt]\n \hline\n \rule{0pt}{1.25\normalbaselineskip}\nAverage temperature & 0.945 \\\\\nCentroid Latitude & 0.936 \\\\\nElevation & 0.825 \\\\\nAverage potential agricultural yield & 0.805 \\\\\nAverage rainfall & 0.748 \\\\\nDistance from centroid to the coast & 0.698 \\\\\nCentroid Longitude & 0.659 \\\\\nDistance from centroid to rivers & 0.367 \\\\\nDistance from centroid to lakes & 0.316 \\\\\nLand area & 0.313 \\\\\n\bottomrule\n\end{tabular}\n\end{table}\n\n\subsubsection*{Sensitivity Analysis Allowing For Endogenous Controls}\n\nNext we present the findings from the sensitivity analysis that we developed in section \ref{sec:MainNewAnalysis}, which allows for endogenous controls. We begin with our simplest result, Theorem \ref{cor:IdsetRyANDcFree}, that only uses a single sensitivity parameter $\bar{r}_X$. Panel C of table \ref{table:mainTable1} reports sample analog estimates of the breakdown point $\bar{r}_X^\text{bp}$ described in Corollary \ref{corr:breakdownPointRXonly}. This is the largest amount of selection on unobservables, as a percentage of selection on observables, that can be allowed while still concluding that $\beta_\text{long}$ is nonzero. Recall that, since those results allow for arbitrarily endogenous controls, Theorem \ref{cor:IdsetRyANDcFree} implies that $\bar{r}_X^\text{bp} < 1$. As discussed in section \ref{sec:interpretation}, however, this does not imply that these results should always be considered non-robust. Instead, when the calibration covariates $W_1$ are a set of variables that are important for treatment selection, researchers should consider large values of $\bar{r}_X^\text{bp}$ to indicate the robustness of their baseline results. For example, in columns (4) and (5) of Panel C we see that the breakdown point estimates for the two behavior based outcomes are 72\% and 80.4\%. In particular, for the average Republican vote share outcome, we can conclude $\beta_\text{long} > 0$ as long as selection on unobservables is at most 80.4\% as large as selection on observables. In contrast, the breakdown point estimates in columns (1)--(3) are substantially smaller: between about 3\% and 6\%. For these outcomes, we therefore only need selection on unobservables to be at least 3 to 6\% as large as selection on observables to overturn our conclusion that $\beta_\text{long} > 0$. Thus, unlike the conclusions based on Oster's method, we find that the analysis using questionnaire based outcomes is highly sensitive to selection on unobservables. In contrast, the analysis using behavior based outcomes is quite robust to selection on unobservables. 
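\n\nThe breakdown points reported in Panels B and C can be described by a single numerical recipe: they are the value of the sensitivity parameter at which zero first enters the estimated identified set. The sketch below is a generic illustration of that recipe, not the \texttt{regsensitivity} implementation; \texttt{idset\_bounds} stands for any routine that returns the lower and upper bounds of the estimated identified set for $\beta_\text{long}$ at a given parameter value, and we assume that the set expands monotonically in that parameter.\n\begin{verbatim}\ndef breakdown_point(idset_bounds, lo=0.0, hi=2.0, tol=1e-6):\n    # Bisection: assumes zero is excluded at lo, included at hi, and that\n    # the identified set only grows as the sensitivity parameter grows.\n    while hi - lo > tol:\n        mid = 0.5 * (lo + hi)\n        lower, upper = idset_bounds(mid)\n        if lower <= 0.0 <= upper:   # zero already inside the set\n            hi = mid\n        else:                       # the conclusion still holds at mid\n            lo = mid\n    return 0.5 * (lo + hi)\n\end{verbatim}\n\n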
This contrast continues to hold after considering restrictions on the magnitude of endogenous controls and the impact of unobservables on outcomes too. We present these analyses next.\n\n\\begin{figure}[t]\n\\caption{Sensitivity Analysis for Average Republican Vote Share. See body text for discussion.\\label{fig:republican}}\n\\centering\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-idset-1d}\n\\hspace{0.05\\textwidth}\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-idset-2d} \\\\[6pt]\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-breakfront-2d}\n\\hspace{0.05\\textwidth}\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-breakfront-3d}\n\\end{figure}\n\nFor brevity we discuss just one questionnaire based outcome, cut spending on the poor, and one behavior based outcome, average Republican vote share. Figure \\ref{fig:republican} shows the results for average Republican vote share. The top left plot shows the estimated identified set $\\beta_\\text{long}$ as a function of $\\bar{r}_X$, allowing for arbitrarily endogenous controls and no restrictions on the outcome equation. This is the set described by Theorem \\ref{cor:IdsetRyANDcFree}. The horizontal intercept is the breakdown point $\\bar{r}_X^\\text{bp} = 80.4\\%$, as reported in Panel C, column (5) of table \\ref{table:mainTable1}. This result allows for arbitrarily endogenous controls.\n\nIf we are willing to somewhat restrict the magnitude of control endogeneity, we can allow for more selection on unobservables. The top right figure shows a sequence of estimated identified sets for $\\beta_\\text{long}$ as a function of $\\bar{r}_X$ on the horizontal axis, as described in Theorem \\ref{thm:IdsetRyFree}. It starts at the darkest line, $\\bar{c} = 1$ (arbitrarily endogenous controls), and then as the shading of the bound functions becomes lighter, we get closer to exogenous controls ($\\bar{c} = 0$). Put differently: For any fixed value of $\\bar{r}_X$, imposing stronger assumptions on exogeneity of the controls results in a smaller identified set. The bottom left picture shows the impact of assuming partially exogenous controls on the breakdown point for selection on unobservables. Specifically, this plot shows the estimated breakdown frontier $\\bar{r}_X^\\text{bf}(\\bar{c})$. This function shows the horizontal intercept in the top right figure, as a function of $\\bar{c}$. At $\\bar{c} = 1$, we recover the breakdown point 80.4\\% that allows for arbitrarily endogenous controls. If we impose exogenous controls, however, and set $\\bar{c} = 0$, we obtain a breakdown point around 135\\%. That is, under exogenous controls, we can allow for selection on unobservables of up to 135\\% as large as selection on observables before our baseline results break down. In fact, we only need $\\bar{c}$ less than or equal to about $0.3$ to obtain a breakdown point at or above 100\\%.\n\nAll of the analysis thus far has left the impact of unobservables on outcomes unrestricted. So in our final analysis we also consider the effect of restricting the impact of unobservables on outcomes. The bottom right plot in figure \\ref{fig:republican} shows the three-dimensional breakdown frontiers $\\bar{r}_Y^\\text{bf}(\\bar{r}_X, \\bar{c})$ described in Theorem \\ref{cor:BFCalculation3D}. Any combination of sensitivity parameters $(\\bar{r}_X, \\bar{r}_Y, \\bar{c})$ below this three-dimensional function lead to an identified set that allows us to conclude $\\beta_\\text{long} > 0$. 
This includes, for example, $\\bar{r}_X = \\bar{r}_Y = 110\\%$ and $\\bar{c} = 0.7$. Note that $0.7$ is around the middle of the distribution of $c_k$ values in table \\ref{table:ck_calib}, and hence might be considered a moderate or slightly conservative value of the magnitude of control endogeneity. For this value, our baseline finding is robust to omitted variables that have up to 110\\% of the effect on treatment and outcomes as the observables. If we impose exogenous controls ($\\bar{c} = 0$) then we can allow the impact of the omitted variable on outcomes to be 200\\% as large as the observables and the impact of the omitted variable on treatment to be up to about 240\\% as large as the observables, and yet still conclude that $\\beta_\\text{long} > 0$.\n\n\\begin{figure}[t]\n\\caption{Sensitivity Analysis for Cut Spending on Poor. See body text for discussion.\\label{fig:cutpoor}}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{images\/bazzi-Aii1-idset-2d}\n\\hspace{0.025\\textwidth}\n\\includegraphics[width=0.45\\textwidth]{images\/bazzi-Aii1-breakfront-3d}\n\\end{figure}\n\nThese findings suggest that the empirical conclusions for average Republican vote share are quite robust to failures of the selection on observables assumption. In contrast, we next consider the analysis for the cut spending on the poor outcome. Figure \\ref{fig:cutpoor} shows the results. The left plot shows the estimated identified sets for $\\beta_\\text{long}$ as a function of $\\bar{r}_X$ on the horizontal axis, as described in Theorem \\ref{thm:IdsetRyFree}. For $\\bar{c} = 1$, the horizontal intercept gives an estimated value for $\\bar{r}_X^\\text{bp}$ of 2.83\\%, as reported in Panel C, column (1) of table \\ref{table:mainTable1}. Moreover, as shown in the figure, even if we impose exogenous controls, $\\bar{c} = 0$, the identified sets do not change much, and hence the breakdown point does not change much. The breakdown frontier $\\bar{r}_X^\\text{bf}(\\bar{c})$ is essentially flat and hence we do not report it. These conclusions do not change substantially if we also impose restrictions on how omitted variables affect the outcomes. The right plot in figure \\ref{fig:cutpoor} shows the estimated three-dimensional breakdown frontier. It shows that we can allow for larger amounts of selection on unobservables if we are willing to greatly restrict the impact of unobservables on outcomes. For example, if we allow for arbitrarily endogenous controls ($\\bar{c} = 1$) then we can allow for the effect of omitted variables on the treatment and outcomes to be as much as 50\\% that of the effect of the observables while still concluding that $\\beta_\\text{long} > 0$. Alternatively, if we restrict the effect of omitted variables on outcomes to be at most 25\\% that of observables, then we can allow the omitted variables to affect treatment by more than 100\\% of the effect of the observables while still concluding that $\\beta_\\text{long} > 0$.\n\nOverall, we see that there are some cases where the results for cutting spending on the poor could be considered robust. But there are also many cases where these results could be considered sensitive. In contrast, the results for average Republican vote share are robust across a wide range of relaxations of the baseline model. 
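\n\nThe frontiers in these figures can be traced with the same ingredients: for each value of $\bar{c}$ one recomputes the largest $\bar{r}_X$ at which the estimated identified set still excludes zero, and the three-dimensional frontier is obtained analogously by also varying $\bar{r}_Y$. The following sketch is again only an illustration under a monotonicity assumption, with \texttt{idset\_bounds\_2d} a hypothetical stand-in for a routine that returns the identified set given $(\bar{r}_X,\bar{c})$.\n\begin{verbatim}\nimport numpy as np\n\ndef excludes_zero(bounds):\n    lower, upper = bounds\n    return not (lower <= 0.0 <= upper)\n\ndef breakdown_frontier(idset_bounds_2d, c_grid, r_grid):\n    # for each cbar, the largest rbar_X on the grid at which the\n    # estimated identified set still excludes zero\n    frontier = []\n    for c in c_grid:\n        ok = [r for r in r_grid if excludes_zero(idset_bounds_2d(r, c))]\n        frontier.append(max(ok) if ok else 0.0)\n    return np.array(frontier)\n\nc_grid = np.linspace(0.0, 1.0, 21)\nr_grid = np.linspace(0.0, 2.0, 201)\n\end{verbatim}\n\n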
Similar findings hold for the other three outcome variables: The three results using the questionnaire based outcomes tend to be much more sensitive than the two results using behavior based outcomes.\n\n\\subsubsection*{The Effect of the Choice of Calibration Covariates}\n\n\\begin{figure}[t]\n\\caption{Effect of Calibration Covariates on Analysis For Republican Vote Share. See body text for discussion.\\label{fig:calibrationCovars}}\n\\centering\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-now0-idset-2d}\n\\hspace{0.05\\textwidth}\n\\includegraphics[width=0.4\\textwidth]{images\/bazzi-Bi5-now0-breakfront-3d}\n\\end{figure}\n\nIn section \\ref{sec:interpretation} we discussed the importance of choosing which variables to calibrate against (the variables in $W_1$) versus which variables to use as controls only (the variables in $W_0$). We next briefly illustrate this in our empirical application. The results in table \\ref{table:mainTable1} and figures \\ref{fig:republican} and \\ref{fig:cutpoor} all include state fixed effects as controls, but do not use them for calibration; that is, these variables are in $W_0$. Next we consider the impact of instead putting them into $W_1$ and calibrating the magnitude of selection on unobservables against them, in addition to the geographic and climate controls already in $W_1$.\n\nFigure \\ref{fig:calibrationCovars} shows figures corresponding to the top right and bottom right plots in figure \\ref{fig:republican}, but now also using state fixed effects for calibration. We first see that the identified sets for $\\beta_\\text{long}$ (left plot) are larger, for any fixed $\\bar{r}_X$. This makes sense because the \\emph{meaning} of $\\bar{r}_X$ has changed with the change in calibration controls. In particular, the breakdown point $\\bar{r}_X^\\text{bp}$ is now about 30\\%, whereas previously it was about 80\\%. This change can be understood as a consequence of equation \\eqref{eq:pseudoEq}. By including state fixed effects---which have a large amount of explanatory power---in our calibration controls, we have increased the magnitude of selection on observables. Holding selection on unobservables fixed, this implies that $r$ must decrease. This discussion reiterates the point that the magnitude of $\\bar{r}_X$ must always be interpreted as dependent on the set of calibration controls. For example, our finding in figure \\ref{fig:calibrationCovars} that the estimated $\\bar{r}_X^\\text{bp}$ is about 30\\% should not be interpreted as saying that the results are sensitive; in fact, an effect about 30\\% as large as these calibration covariates is substantially large, and so it may be that we do not expect the omitted variable to have such a large additional impact.\n\nThe right plot in figure \\ref{fig:calibrationCovars} shows the estimated three-dimensional breakdown frontiers. The frontiers have all shifted inward, compared to the bottom right plot of figure \\ref{fig:republican} which did not use state fixed effects for calibration. Consequently, a superficial reading of this plot may suggest that the results for average Republican vote share are no longer robust. However, as we just emphasized in our discussion of the left plot, by including state fixed effects in the calibration covariates $W_1$, we are changing the meaning of all three sensitivity parameters. 
Since the expanded set of calibration covariates has substantial explanatory power, even a relaxation like $(\\bar{r}_Y, \\bar{r}_X, \\bar{c}) = (50\\%, 50\\%, 1)$---which is below the breakdown frontier and hence allows us to conclude that $\\beta_\\text{long}$ is positive---could be considered to be a large impact of omitted variables. So these figures do not change our overall conclusions about the robustness of the analysis for average Republican vote share.\n\nFinally, as we emphasized in section \\ref{sec:interpretation}, our discussion about the choice of calibration covariates are not unique to our analysis; they apply equally to all other methods that use covariates to calibrate the magnitudes of sensitivity parameters in some way.\n\n\\subsection{Empirical Conclusions}\n\nOverall, a sensitivity analysis based on our new methods leads to a more nuanced empirical conclusion than originally obtained by \\cite{BFG2020}. We found that their analysis using questionnaire based outcomes is quite sensitive to the presence of omitted variables, while their analysis using property tax levels and voting patterns is robust. This has several empirical implications. \n\nFirst, the questionnaire based outcomes are the most easily interpretable as measures of opposition to redistribution, regulation, and preferences for small government. In contrast, it is less clear that property taxes and Republican presidential vote share alone should be interpreted as direct measures of opposition to redistribution. So the fact that the questionnaire based outcomes are sensitive to the presence of omitted variables suggests that Bazzi et al.'s overall conclusion in support of the ``frontier thesis'' should be considered more tentative than previously stated. Second, it suggests that the impact of frontier life may occur primarily through broader behavior based channels like elections, rather than individuals' more specific policy preferences and behavior in their personal lives. It may be useful to explore this difference in future empirical work.\n\nFinally, note that \\cite{BFG2020} perform a wide variety of additional supporting analyses that we have not examined here. It would be interesting to apply our methods to these additional analyses, to see whether allowing for endogenous controls affects the sensitivity of these other analyses. In particular, their figure 5 considers another set of outcome variables: Republican vote share in each election from 1900 to 2016. In contrast, our analysis above looked only at one election outcome: the average Republican vote share over the five elections from 2000 to 2016. They use these additional baseline estimates along with a qualitative discussion of the evolution of Republican party policies over time to argue that the average Republican vote share outcome between 2000--2016 can be interpreted as a measure of opposition to redistribution. It would be interesting to see how these supporting results hold up to a sensitivity analysis that allows for endogenous controls.\n\n\\section{Conclusion}\\label{sec:conclusion}\n\nAs \\cite{AngristPischke2017} emphasize, most researchers do not expect to identify causal effects for many variables at the same time. Instead, we target a single variable, called the treatment, while the other variables are called controls, and are included solely to aid identification of causal effects for the treatment variable. These control variables are therefore usually thought to be endogenous. 
And yet most of the available methods for doing sensitivity analysis rely on an assumption that these controls are exogenous. This raises the question of whether these methods for assessing sensitivity are themselves sensitive to allowing the controls to be endogenous. In this paper we provide a new approach to assessing the sensitivity of selection on observables assumptions in linear models. Our results have two key features that distinguish them from existing methods: First, they allow the controls to be endogenous. Second, our first main result only requires researchers to pick a single sensitivity parameter. In contrast, many existing methods rely on exogenous controls \emph{and} require researchers to pick or reason about at least two different sensitivity parameters. Our results are also simple to implement in practice, via an accompanying Stata package \texttt{regsensitivity}. Finally, in our empirical application to Bazzi et al.'s \citeyearpar{BFG2020} study of the impact of frontier life on modern culture, we showed that allowing for endogenous controls does matter in practice, leading to more nuanced empirical conclusions than those obtained in \cite{BFG2020}.\n\n\subsection*{\emph{Internal and External Calibrations of Sensitivity Parameters}}\n\nOur analysis raises several open questions for the broader literature on sensitivity analysis. A typical method specifies continuous relaxations or deviations from one's baseline assumptions and then asks: How much can we relax or deviate from our baseline assumptions until our conclusions break down? Answering this question requires \emph{calibrating} the sensitivity parameters: How do we know when these sensitivity parameters are `large' in some sense? A key insight of \cite*{AltonjiElderTaber2005} was that we could answer this question by performing an \emph{internal} calibration, by comparing the magnitude of the sensitivity parameters to the magnitude of other parameters in the model. However, as we have discussed in this paper, the value of such internal calibrations is to provide (1) a unit free sensitivity parameter which (2) can be interpreted in terms of the effects of observed variables. It does \emph{not} provide a single universal threshold for what is or is not a robust result. In particular, the choice of which observed variables to calibrate against will change the scale and interpretation of the sensitivity parameter. Consequently, the value 1 should be considered a unit free reference point, not a threshold for robustness.\n\nThis observation leads to several questions: How should researchers choose the covariates against which they calibrate? And for any given choice of covariates, if 1 is not the threshold for robustness, what is the threshold? The difficulty of answering these questions speaks to the difficulty of using a single dataset to assess sensitivity and to calibrate those sensitivity parameters. An alternative approach is \emph{external} calibration, where a secondary dataset is used to calibrate the sensitivity parameters. This approach uses sensitivity parameters that are not defined relative to other parameters in the model, and does not require researchers to pick a set of covariates to calibrate against. Such absolute sensitivity parameters are common in the literature on nonparametric sensitivity analysis (e.g., \citealt{Rosenbaum1995, Rosenbaum2002} or \citealt{MastenPoirier2018}). 
This external calibration approach is also common in the literature on measurement error or missing data, where secondary datasets are used to assess the extent of measurement error or the strength of violations of missing at random assumptions. It is possible that some combination of both internal and external calibration approaches will lead to the most robust set of methods for assessing the role of selection on unobservables in empirical work.\n\n\\singlespacing\n\\bibliographystyle{econometrica}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nGiven a local Artin $\\res$-algebra $A=R\/I$, with $R=\\res[\\![x_1,\\dots,x_n]\\!]$, an interesting problem is to find how far is it from being Gorenstein. In \\cite{Ana08}, Ananthnarayan introduces for the first time the notion of Gorenstein colength, denoted by $\\operatorname{gcl}(A)$, as the minimum of $\\ell(G)-\\ell(A)$ among all Gorenstein Artin $\\res$-algebras $G=R\/J$ mapping onto $A$. Two natural questions arise immediately:\n\n\\medskip\n\n{\\sc Question A}: How we can explicitly compute the Gorenstein colength of a given local Artin $\\res$-algebra $A$?\n\n\\medskip\n\n{\\sc Question B}: Which are its minimal Gorenstein covers, that is, all Gorenstein rings $G$ reaching the minimum $\\operatorname{gcl}(A)=\\ell(G)-\\ell(A)$?\n\n\\medskip\nAnanthnarayan generalizes some results by Teter \\cite{Tet74} and Huneke-Vraciu \\cite{HV06} and provides a characterization of rings of $\\operatorname{gcl}(A)\\leq 2$ in terms of the existence of certain self-dual ideals $\\mathfrak{q}\\in A$ with respect to the canonical module $\\omega_A$ of $A$ satisfying $\\ell(A\/\\mathfrak{q})\\leq 2$. For more information on this, see \\cite{Ana08} or \\cite[Section 4]{EH18}, for a reinterpretation in terms of inverse systems. Later on, Elias and Silva (\\cite{ES17}) address the problem of the colength from the perspective of Macaulay's inverse systems. In this setting, the goal is to find polynomials $F\\in S$ such that $I^\\perp\\subset \\langle F\\rangle$ and $\\ell(\\langle F\\rangle)-\\ell(I^\\perp)$ is minimal. Then the Gorenstein $\\res$-algebra $G=R\/\\operatorname{Ann_R} F$ is a minimal Gorenstein cover of $A$. A precise characterization of such polynomials $F\\in S$ is provided for $\\operatorname{gcl}(A)=1$ in \\cite{ES17} and for $\\operatorname{gcl}(A)=2$ in \\cite{EH18}.\n\nHowever, the explicit computation of the Gorenstein colength of a given ring $A$ is not an easy task even for low colength - meaning $\\operatorname{gcl}(A)$ equal or less than 2 - in the general case. For examples of computation of colength of certain families of rings, see \\cite{Ana09} and \\cite{EH18}.\n\nOn the other hand, if $\\operatorname{gcl}(A)=1$, the Teter variety introduced in \\cite[Proposition 4.2]{ES17} is precisely the variety of all minimal Gorenstein covers of $A$ and \\cite[Proposition 4.5]{ES17} already suggests that a method to compute such covers is possible.\n\n\\bigskip\nIn this paper we address questions A and B by extending the previous definition of Teter variety of a ring of Gorenstein colength 1 to the variety of minimal Gorenstein covers $MGC(A)$ where $A$ has arbitrary Gorenstein colength $t$. 
We use a constructive approach based on the integration method to compute inverse systems proposed by Mourrain in \\cite{Mou96}.\n\nIn section 2 we recall the basic definitions of inverse systems and introduce the notion of integral of an $R$-module $M$ of $S$ with respect to an ideal $K$ of $R$, denoted by $\\int_K M$. Section 3 links generators $F\\in S$ of inverse systems $J^\\perp$ of Gorenstein covers $G=R\/J$ of $A=R\/I$ with elements in the integral $\\int_{\\mathfrak m^t} I^\\perp$, where $\\mathfrak m$ is the maximal ideal of $R$ and $t=\\operatorname{gcl}(A)$. This relation is described in \\Cref{F} and \\Cref{propint} sets the theoretical background to compute a $\\res$-basis of the integral of a module extending Mourrain's integration method.\n\nIn section 4, \\Cref{ThMGC} proves the existence of a quasi-projective sub-variety $MGC^n(A)$ whose set of closed points are associated to polynomials $F\\in S$ such that $G=R\/\\operatorname{Ann_R} F$ is a minimal Gorenstein cover of $A$. Section 5 is devoted to algorithms: explicit methods to compute a $\\res$-basis of $\\int_{\\mathfrak m^t}I^\\perp$ and $MGC(A)$ for colengths 1 and 2. Finally, in section 6 we provide several examples of the minimal Gorenstein covers variety and list the comptutation times of $MGC(A)$ for all analytic types of $\\res$-algebras with $\\operatorname{gcl}(A)\\leq 2$ appearing in Poonen's classification in \\cite{Poo08a}.\n\nAll algorithms appearing in this paper have been implemented in \\emph{Singular}, \\cite{DGPS}, and the library \\cite{E-InvSyst14} for inverse system has also been used.\n\n\\medskip\n{\\sc Acknowledgements:} The second author wants to thank the third author for the opportunity to stay at INRIA Sophia Antipolis - M\\'editerran\\'ee (France) and his hospitality during her visit on the fall of 2017, where part of this project was carried out. This stay was financed by the Spanish Ministry of Economy and Competitiveness through the Estancias Breves programme (EEBB-I-17-12700).\n\n\\section{Integrals and inverse systems}\n\nLet us consider the regular local ring $R=\\res[\\![x_1,\\dots,x_n]\\!]$ over an arbitrary field $\\res$, with maximal ideal $\\mathfrak m$. Let $S=\\res[y_1,\\dots,y_n]$ be the polynomial ring over the same field $\\res$. Given $\\alpha=(\\alpha_1,\\dots,\\alpha_n)$ in $\\mathbb{N}^n$, we denote by $x^\\alpha$ the monomial $x_1^{\\alpha_1}\\cdots x_n^{\\alpha_n}$ and set $\\vert\\alpha\\vert=\\alpha_1+\\dots+\\alpha_n$. Recall that $S$ can be given an $R$-module structure by contraction:\n$$\n\\begin{array}{cccc}\n R\\times S & \\longrightarrow & S & \\\\\n (x^\\alpha, y^\\beta) & \\mapsto & x^\\alpha \\circ y^\\beta = &\n \\left\\{\n \\begin{array}{ll}\n y^{\\beta-\\alpha}, & \\beta \\ge \\alpha; \\\\\n 0, & \\mbox{otherwise.}\n \\end{array}\n \\right.\n\\end{array}\n$$\n\nThe Macaulay inverse system of $A=R\/I$ is the sub-$R$-module $I^\\perp=\\lbrace G\\in S\\mid I\\circ G=0\\rbrace$ of $S$. This provides the order-reversing bijection between $\\mathfrak m$-primary ideals $I$ of $R$ and finitely generated sub-$R$-modules $M$ of $S$ described in Macaulay's duality. As for the reverse correspondence, given a sub-$R$-module $M$ of $S$, the module $M^\\perp$ is the ideal $\\operatorname{Ann_R} M=\\lbrace f\\in R\\mid f\\circ G=0\\,\\mbox{ for any } G\\in M\\rbrace$ of $R$. 
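\n\nTo make the contraction structure concrete, the following minimal sketch (our own illustration, not the \emph{Singular} library \cite{E-InvSyst14} used by the authors) implements $x^\alpha\circ F$ for a polynomial $F\in S$ stored as a dictionary mapping exponent tuples to coefficients.\n\begin{verbatim}\ndef contract(alpha, F):\n    # x^alpha acting by contraction on F, both encoded by exponent tuples\n    result = {}\n    for beta, coeff in F.items():\n        if all(b >= a for a, b in zip(alpha, beta)):\n            gamma = tuple(b - a for a, b in zip(alpha, beta))\n            result[gamma] = result.get(gamma, 0) + coeff\n    return result\n\n# x_1 o (y_1^2 y_2 + y_2^3) = y_1 y_2   in  k[y_1, y_2]\nF = {(2, 1): 1, (0, 3): 1}\nprint(contract((1, 0), F))   # {(1, 1): 1}\n\end{verbatim}\nWe now return to Macaulay's duality between $\mathfrak m$-primary ideals of $R$ and finitely generated sub-$R$-modules of $S$. 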
Moreover, it characterizes zero-dimensional Gorenstein rings $G=R\/J$ as those with cyclic inverse system $J^\\perp=\\langle F\\rangle$, where $\\langle F\\rangle$ is the $\\res$-vector space $\\langle x^\\alpha\\circ F:\\vert\\alpha\\vert\\leq \\deg F\\rangle_\\res$. For more details on this construction, see \\cite{ES17} and \\cite{EH18}.\n\nConsider an Artin local ring $A=R\/I$ of socle degree $s$ and inverse system $I^\\perp$. We are interested in finding Artin local rings $R\/\\operatorname{Ann_R} F$ that cover $R\/I$, that is $I^\\perp\\subset \\langle F\\rangle$, but we also want to control how apart are those two inverse systems. In other words, given an ideal $K$, we want to find a Gorenstein cover $\\langle F\\rangle$ such that $K\\circ \\langle F\\rangle\\subset I^\\perp$. Therefore it makes sense to think of an inverse operation to contraction.\n\n\\begin{definition}[Integral of a module with respect to an ideal] Consider an $R$-submodule $M$ of $S$. We define the integral of $M$ with respect to the ideal $K$, denoted by $\\int_K M$, as $$\\int_K M=\\lbrace G\\in S\\mid K\\circ G\\subset M\\rbrace.$$\n\n\\end{definition}\n\nNote that the set $N=\\lbrace G\\in S\\mid K\\circ G\\subset M\\rbrace$ is, in fact, an $R$-submodule $N$ of $S$ endowed with the contraction structure. Indeed,\ngiven $G_1,G_2\\in N$ then $K\\circ (G_1+G_2)=K\\circ G_1+K\\circ G_2\\subset M$, hence $G_1+G_2\\in N$.\nFor all $a\\in R$ and $G\\in N$ we have $K\\circ (a\\circ G)=aK\\circ G=a\\circ (K\\circ G)\\subset M$, hence $a\\circ G\\in N$.\n\n\\begin{proposition}\n\\label{integral}\nLet $K$ be an $\\mathfrak m$-primary ideal of $R$ and let $M$ be a finitely generated sub-$R$-module of $S$. Then $$\\int_K M=\\left(KM^\\perp\\right)^\\perp.$$\n\\end{proposition}\n\\begin{proof}\nLet $G\\in\\left(KM^\\perp\\right)^\\perp$. Then $\\left(KM^\\perp\\right)\\circ G=0$, so $M^\\perp\\circ\\left(K\\circ G\\right)=0$. Hence $K\\circ G\\subset M$, i.e. $G\\in\\int_K M$. We have proved that $\\left(KM^\\perp\\right)^\\perp\\subseteq\\int_K M$.\nNow let $G\\in\\int_K M$. By definition, $K\\circ G\\subset M$, so $M^\\perp\\circ\\left(K\\circ G\\right)=0$ and hence $\\left(M^\\perp K\\right)\\circ G=0$. Therefore, $G\\in\\left(M^\\perp K\\right)^\\perp$.\n\\end{proof}\n\nOne of the key results of this paper is the effective computation of $\\int_K M$ (see \\Cref{AlINT}).\nLast result gives a method for the computation of this module by computing two Macaulay duals. However, since computing Macaulay duals is expensive, \\Cref{AlINT} avoids the computation of such duals.\n\n\\begin{remark}\\label{incl} The following properties hold:\n\\noindent\n$(i)$ Given $K\\subset L$ ideals of $R$ and $M$ $R$-module, if $K\\subset L$, then $\\int_L M\\subset\\int_K M.$\n\\noindent\n$(ii)$\nGiven $K$ ideal of $R$ and $M\\subset N$ $R$-modules, if $M\\subset N$, then $\\int_K M\\subset\\int_K N.$\n\\noindent\n$(iii)$\nGiven any $R$-module $M$, $\\int_ R M=M$.\n\\end{remark}\n\nThe inclusion $K\\circ \\int_K M\\subset M$ follows directly from the definition of integral. However, the equality does not hold:\n\n\\begin{example}\\label{Ex1}\nLet us consider $R=\\res[\\![x_1,x_2,x_3]\\!]$, $K=(x_1,x_2,x_3)$, $S=\\res[y_1,y_2,y_3]$ and $M=\\langle y_1y_2,y_3^3\\rangle$. \nWe can compute Macaulay duals with the \\emph{Singular} library \\textbf{Inverse-syst.lib}, see \\cite{E-InvSyst14}. 
We get $\\int_K M=\\langle y_1^2,y_1y_2,y_1y_3,y_2^2,y_2y_3,y_3^4 \\rangle$ by \\Cref{integral} and hence $K\\circ \\int_K M=\\langle y_1,y_2,y_3^3\\rangle\\subsetneq M.$\n\\end{example}\n\nWe also have the inclusion $M\\subset\\int_K K\\circ M$. Indeed, for any $F\\in M$, $K\\circ F\\subset K\\circ M$ and hence $F\\in\\int_K K\\circ M=\\lbrace G\\in S\\mid K\\circ G\\subset K\\circ M\\rbrace$. Again, the equality does not hold.\n\n\\begin{example} Using the same example as in \\Cref{Ex1}, we get\n$K\\circ M=\\mathfrak m\\circ \\langle y_1y_2,y_3^3\\rangle=\\langle y_1,y_2,y_3^2\\rangle,$ and\n$\\int_K( K\\circ M)=\\left(K (K\\circ M)^\\perp\\right)^\\perp=\\langle y_1^2,y_1y_2,y_1y_3,y_2^2,y_2y_3,y_3^2\\rangle\\nsubseteq M.$\n\\end{example}\n\n\\begin{remark}\nNote that if we integrate with respect to a principal ideal $K=(f)$ of $R$, then \n$\\int_K M=\\lbrace G\\in S\\mid f\\circ G\\in M\\rbrace$. Hence in this case we will denote it by $\\int_f M$.\n\\end{remark}\n\nIn particular, if we consider a principal monomial ideal $K=(x^\\alpha)$, then the expected equality for integrals $$x^{\\alpha}\\circ\\int_{x^{\\alpha}}M=M$$ holds. Indeed, for any $m\\in M$, take $G=y^{\\alpha}m$. Since $x^{\\alpha}\\circ y^{\\alpha}=1$, then $x^\\alpha\\circ y^{\\alpha}m=m$ and the equality is reached.\n\n\\begin{remark}\nIn general, $\\int_{x^{\\alpha}} x^{\\alpha}\\circ M\\neq x^{\\alpha}\\circ\\int_{x^{\\alpha}} M$, hence the inclusion $M\\subset\\int_K K\\circ M$ is not an equality even for principal monomial ideals. See \\Cref{r1}.\n\\end{remark}\n\nLet us now consider an even more particular case: the integral of a cyclic module $M=\\langle F\\rangle$ with respect to the variable $x_i$. Since the equality $x_i\\circ \\int_{x_i}M=M$ holds, there exists $G\\in S$ such that $x_i\\circ G=F$. This polynomial $G$ is not unique because it can have any constant term with respect to $x_i$, that is $G=y_iF+p(y_1,\\dots,\\hat{y}_i,\\dots,y_n)$. However, if we restrict to the non-constant polynomial we can define the following:\n\n\\begin{definition}[$i$-primitive]\nThe $i$-primitive of a polynomial $f\\in S$ is the polynomial $g\\in S$, denoted by $\\int_i f$, such that\n\\begin{enumerate}\n\\item[(i)] $x_i\\circ g=f$,\n\\item[(ii)] $g\\vert_{y_i=0}=0$.\n\\end{enumerate}\n\\end{definition}\n\nIn \\cite{EM07}, Elkadi and Mourrain proposed a definition of $i$-primitive of a polynomial in a zero-characteristic setting using the derivation structure instead of contraction. Therefore, we can think of the integral of a module with respect to an ideal as a generalization of their $i$-primitive.\n\nSince we are considering the $R$-module structure given by contraction, the $i$-primitive is precisely\n$$\\int_i f=y_if.$$\n\nIndeed, $x_i\\circ(y_if)=f$ and $(y_if)\\mid_{y_i=0}=0$, hence (i) and (ii) hold.\nUniqueness can be easily proved. Consider $g_1,g_2$ to be $i$-primitives of $f$. Then $x_i\\circ (g_1-g_2)=0$ and hence $g_1-g_2=p(y_1,\\dots,\\hat{y}_i,\\dots,y_n)$. Clearly $(g_1-g_2)\\vert_{y_i=0}=p(y_1,\\dots,\\hat{y}_i,\\dots,y_n)$. On the other hand, $(g_1-g_2)\\vert_{y_i=0}=g_1\\vert_{y_i=0}-g_2\\vert_{y_i=0}=0$. Hence $p=0$ and $g_1=g_2$.\n\n\\begin{remark}\\label{r1} Note that, by definition, $x_k\\circ \\int_k f=f$. Any $f$ can be decomposed in $f=f_1+f_2$, where the first term is a multiple of $y_k$ and the second has no appearances of this variable. 
Then\n$$\\int_k x_k\\circ f=\\int_k x_k\\circ f_1+\\int_k x_k\\circ f_2=f_1+\\int_k 0=f_1.$$\n\\noindent\nTherefore, in general, $$f_1=\\int_k x_k\\circ f\\neq x_k\\circ \\int_k f=f.$$\nHowever, for all $l\\neq k$, $$\\int_l x_k\\circ f=\\frac{y_lf_1}{y_k}=x_k\\circ \\int_l f.$$\n\\end{remark}\n\n\\bigskip\n\nLet us now recall Theorem 7.36 of Elkadi-Mourrain in \\cite{EM07}, which describes the elements of the inverse system $I^\\perp$ up to a certain degree $d$. We define $\\mathcal{D}_d=I^\\perp\\cap S_{\\leq d}$, for any $1\\leq d\\leq s$, where $s=\\operatorname{socdeg}(A)$. Since $\\mathcal{D}_s=I^\\perp$, this result leads to an algorithm proposed by the same author to obtain a $\\res$-basis of an inverse system. We rewrite the theorem using the contraction setting instead of derivation.\n\n\\begin{theorem}[Elkadi-Mourrain]\\label{EM}\nGiven an ideal $I=(f_1,\\dots,f_m)$ and $d>1$. Let $\\lbrace b_1,\\dots,b_{t_{d-1}}\\rbrace$ be a $\\res$-basis of $\\mathcal{D}_{d-1}$. The polynomials of $\\mathcal{D}_d$ with no constant term, i.e. no terms of degree zero, are of the form\n\\begin{equation}\\label{thm}\n\\Lambda=\\sum_{j=1}^{t_{d-1}}\\lambda_j^1\\int_1 b_j\\vert_{y_2=\\cdots=y_n=0}+\\sum_{j=1}^{t_{d-1}}\\lambda_j^2\\int_2 b_j\\vert_{y_3=\\cdots=y_n=0}+\\dots+\\sum_{j=1}^{t_{d-1}}\\lambda_j^n\\int_n b_j,\\quad\\lambda_j^k\\in\\res,\n\\end{equation}\nsuch that\n\\begin{equation}\\label{cond1}\n\\sum_{j=1}^{t_{d-1}}\\lambda_j^k (x_l\\circ b_j)-\\sum_{j=1}^{t_{d-1}}\\lambda_j^l(x_k\\circ b_j)=0, 1\\leq k1$.\n\\end{example}\n\n\\subsection{Gorenstein colength 2}\n\nBy \\cite{EH18}, we know that an Artin ring $A$ of socle degree $s$ is of Gorenstein colength 2 if and only if there exists a polynomial $F$ of degree $s+1$ or $s+2$ such that $K_F\\circ F=I^\\perp$, where $K_F=(L_1,\\dots,L_{n-1},L_n^2)$ and $L_1,\\dots,L_n$ are suitable independent linear forms.\n\nObserve that a completely analogous characterization to the one we did for Teter rings is not possible. If $A=R\/I$ has Gorenstein colength 2, by \\Cref{cor}, there exists $F=\\sum_{i=1}^2\\sum_{j=1}^{h_i}a_j^iF_j^i\\in\\int_{\\mathfrak m^2}I^\\perp$, where $\\lbrace\\overline{F^i_j}\\rbrace_{1\\leq i\\leq 2,1\\leq j\\leq h_i}$ is a $\\res$-basis of $\\mathcal{L}_{A,2}$, that generates a minimal Gorenstein cover of $A$ and then trivially $I^\\perp\\subset\\langle F\\rangle$. However, the reverse implication is not true.\n\n\\begin{example} Consider $A=R\/\\mathfrak m^3$, where $R$ is the ring of power series in 2 variables, and consider $F=y_1^2y_2^2$. It is easy to see that $F\\in\\int_{\\mathfrak m^2}I^\\perp=S_{\\leq 4}$ and $I^\\perp\\subset\\langle F\\rangle$. However, it can be proved that $\\operatorname{gcl}(A)=3$ using \\cite[Corollary 3.3]{Ana09}. Note that $K_F=\\mathfrak m^2$ and hence $\\ell(R\/K_F)=3$.\n\\end{example}\n\nTherefore, given $F\\in\\int_{\\mathfrak m^2}I^\\perp$, the condition $I\\subset\\langle F\\rangle$ is not sufficient to ensure that $\\operatorname{gcl}(A)=2$. 
We must require that $\\ell(R\/K_F)=2$ as well.\n\n\\begin{proposition}\\label{gcl2} Given a non-Gorenstein non-Teter local Artin ring $A=R\/I$, $\\operatorname{gcl}(A)=2$ if and only if there exist a polynomial $F=\\sum_{i=1}^2\\sum_{j=1}^{h_i} a_j^iF_j^i\\in\\int_{\\mathfrak m^2}I^\\perp$ such that $\\lbrace\\overline{F_j^i}\\rbrace_{1\\leq i\\leq 2,1\\leq j\\leq h_i}$ is an adapted $\\res$-basis of $\\mathcal{L}_{A,2}$ and $(L_1,\\dots,L_{n-1},L_n^2)\\circ F=I^\\perp$ for suitable independent linear forms $L_1,\\dots,L_n$.\n\\end{proposition}\n\n\\begin{proof} We will only prove that if $F$ satisfies the required conditions, then $\\operatorname{gcl}(A)=2$. By definition of $K_F$, if $(L_1,\\dots,L_{n-1},L_n^2)\\circ F=I^\\perp$, then $(L_1,\\dots,L_{n-1},L_n^2)\\subseteq K_F$. Again by \\cite{EH18}, $\\operatorname{gcl}(A)\\leq \\ell(R\/K_F)$ and hence $\\operatorname{gcl}(A)\\leq \\ell\\left(R\/(L_1,\\dots,L_{n-1},L_n^2)\\right)=2$. Since $\\operatorname{gcl}(A)\\geq 2$ by hypothesis, then $\\operatorname{gcl}(A)=2$. The converse implication follows from \\Cref{KF}.\n\\end{proof}\n\n\\begin{example} Recall the ring $A=R\/I$ in \\Cref{Ex3}. Since\n$$\\int_{\\mathfrak m^2}I^\\perp=\\langle y_1^3,y_1^2y_2,y_1y_2^2,y_2^3,y_1^2y_3,y_1y_2y_3,y_2^2y_3,y_1y_3^2,y_2y_3^3,y_3^5\\rangle$$\nand $\\operatorname{gcl}(A)>1$, its Gorenstein colength is 2 if and only if there exist some\n$$F\\in\\langle y_1^2,y_1y_2,y_1y_3,y_2^2,y_2y_3,y_3^4,y_1^3,y_1^2y_2,y_1y_2^2,y_2^3,y_1^2y_3,y_1y_2y_3,y_2^2y_3,y_1y_3^2,y_2y_3^3,y_3^5\\rangle_\\res$$\nsuch that $(L_1,\\dots,L_{n-1},L_n^2)\\circ F=I^\\perp$. Consider $F=y_3^4+y_1^2y_2$, then\n$$(x_1,x_2^2,x_3)\\circ F=\\langle x_1\\circ F,x_2^2\\circ F,x_3\\circ F\\rangle=\\langle y_1y_2,y_3^3\\rangle$$\nand hence $\\operatorname{gcl}(A)=2$.\n\\end{example}\n\n\\section{Minimal Gorenstein covers varieties}\n\nWe are now interested in providing a geometric interpretation of the set of all minimal Gorenstein covers $G=R\/J$ of a given local Artin $\\res$-algebra $A=R\/I$. From now on, we will assume that $\\res$ is an algebraically closed field. The following result is well known and it is an easy linear algebra exercise.\n\n\\begin{lemma}\n\\label{semic}\nLet $\\varphi_i:\\res^a \t\\longrightarrow \\res^b$, $i=1\\cdots,r$, be a family of Zariski continuous maps.\n Then the function $\\varphi^*:\\res^a\\longrightarrow \\mathbb N$ defined by\n$\\varphi^*(z)=\\dim_{\\res} \\langle \\varphi_1(z),\\cdots, \\varphi_r(z)\\rangle_{\\res}$ is lower semicontinous, i.e. for all $z_0 \\in \\res^a$ there is a Zariski open set\n$z_0\\in U \\subset \\res^a$ such that for all $z\\in U$ it holds\n$\\varphi^*(z)\\geq \\varphi^*(z_0)$.\n\\end{lemma}\n\n\\begin{theorem}\\label{ThMGC}\nLet $A=R\/I$ be an Artin ring of Gorenstein colength $t$.\nThere exists a quasi-projective sub-variety $MGC^n(A)$, $n=\\dim(R)$, of\n$\\mathbb P_{\\res}\\left(\\mathcal{L}_{A,t}\\right)$\nwhose set of closed points are the points $[\\overline{F}]$, $\\overline{F}\\in \\mathcal{L}_{A,t}$,\nsuch that $G=R\/\\operatorname{Ann_R} F$ is a minimal Gorenstein cover of $A$.\n\\end{theorem}\n\\begin{proof}\nLet $E$ be a sub-$\\res$-vector space of $\\int_{\\mathfrak m^t}I^\\perp$ such that\n$$\n\\int_{\\mathfrak m^t}I^\\perp \\cong E\\oplus I^{\\perp},\n$$\nwe identify $\\mathcal{L}_{A,t}$ with $E$. From \\Cref{F}, for all minimal Gorenstein cover $G=R\/\\operatorname{Ann_R} F$ we may assume that $F\\in E$. 
Given $F\\in E$, the quotient $G=R\/\\operatorname{Ann_R} F$ is a minimal cover of $A$ if and only if the following two numerical conditions hold:\n\\begin{enumerate}\n\\item $\\dim_{\\res}(\\langle F\\rangle)= \\dim_{\\res}A+t$, and\n\\item $\\dim_{\\res}(I^{\\perp}+ \\langle F \\rangle) =\\dim_{\\res}\\langle F \\rangle$.\n\\end{enumerate}\n\n\\noindent\nDefine the family of Zariski continuous maps $\\lbrace\\varphi_{\\alpha}\\rbrace_{\\vert\\alpha\\vert\\leq\\deg F}$, $\\alpha\\in\\mathbb{N}^n$, where\n$$\\begin{array}{rrcl}\n\\varphi_{\\alpha}: & E & \\longrightarrow & E\\\\\n& F & \\longmapsto & x^\\alpha\\circ F\\\\\n\\end{array}$$\n\\noindent\nIn particular, $\\varphi_0=Id_R$.\nWe write\n$$\\begin{array}{rrcl}\n\\varphi^\\ast: & E & \\longrightarrow & \\mathbb{N}\\\\\n& F & \\longmapsto & \\dim_\\res\\langle x^\\alpha\\circ F,\\vert\\alpha\\vert\\leq\\deg F\\rangle_\\res\n\\end{array}$$\n\n\\noindent\nNote that $\\varphi^\\ast(F)=\\dim_\\res \\langle F\\rangle$ and, by \\Cref{semic}, $\\varphi^\\ast$ is a lower semicontinuous map. Hence $U_1=\\lbrace F\\in E\\mid \\dim_\\res\\langle F\\rangle\\geq\\dim_\\res A+t\\rbrace$ is an open Zariski set in $E$. Using the same argument,\n$U_2=\\lbrace F\\in E\\mid \\dim_\\res\\langle F\\rangle\\geq\\dim_\\res A+t+1\\rbrace$\nis also an open Zariski set in $E$ and hence $Z_1=E\\backslash U_2$ is a Zariski closed set such that $\\dim_\\res\\langle F\\rangle\\leq\\dim_\\res A+t$ for any $F\\in Z_1$.\nThen $Z_1\\cap U_1=\\lbrace F\\in E\\mid \\dim_\\res\\langle F\\rangle=\\dim_\\res A+t\\rbrace$ is a locally closed set.\n\nLet $G_1,\\cdots,G_e$ be a $\\res$-basis of $I^{\\perp}$ and consider the constant map\n$$\\begin{array}{rrcl}\n\\psi_{i}: & E & \\longrightarrow & E\\\\\n& F & \\longmapsto & G_i\\\\\n\\end{array}$$\nfor any $i=1,\\cdots,e$.\nBy \\Cref{semic},\n\n$$\\begin{array}{rrcl}\n\\psi^\\ast: & E & \\longrightarrow & \\mathbb{N}\\\\\n& F & \\longmapsto & \\dim_\\res \\langle\\lbrace x^{\\alpha}\\circ F\\rbrace_{\\vert\\alpha\\vert\\leq\\deg F}, G_1,\\dots, G_e\\rangle_\\res=\\dim_\\res\\left(\\langle F\\rangle+I^\\perp\\right)\n\\end{array}$$\n\n\\noindent\nis a lower semicontinuous map. Using an analogous argument, we can prove that $T=\\lbrace F\\in E\\mid \\dim_\\res(I^\\perp+\\langle F\\rangle)=\\dim_\\res A+t\\rbrace$ is a locally closed set. Therefore,\n$$W=(Z_1\\cap U_1)\\cap T=\\lbrace F\\in E\\mid \\dim_\\res A+t=\\dim_\\res(I^\\perp+\\langle F\\rangle)=\\dim_\\res\\langle F\\rangle\\rbrace$$\n\\noindent\nis a locally closed subset of $E$ whose set of closed points are all the $F$ in $E$ satisfying $(1)$ and $(2)$, i.e. defining a minimal Gorenstein cover $G=R\/\\operatorname{Ann_R} F$ of $A$.\n\nMoreover, since $\\langle F\\rangle=\\langle \\lambda F\\rangle$ for any $\\lambda\\in\\res^\\ast$, conditions $(1)$ and $(2)$ are invariant under the multiplicative action of $\\res^*$ on $F$ and hence\n$MGC^n(A)=\\mathbb P_{\\res}(W)\\subset \\mathbb P_{\\res}(E)=\\mathbb P_{\\res}\\left(\\mathcal{L}_{A,t}\\right)$.\n\\end{proof}\n\nRecall that the embedding dimension of $A$ is $\\operatorname{emb dim}(A)=\\dim_\\res\\mathfrak m\/(\\mathfrak m^2+I)$.\n\n\\begin{proposition}\nLet $G$ be a minimal Gorenstein cover of $A$.\nThen\n$$\n\\operatorname{emb dim}(G)\\le \\tau(A)+\\operatorname{gcl}(A)-1.\n$$\n\\end{proposition}\n\\begin{proof}\nSet $A=R\/I$ such that $\\operatorname{emb dim}(A)=\\dim R=n$. 
Consider the power series ring $R'$ of dimension $n+t$ over $\\res$ for some $t\\geq 0$ such that $G=R'\/J'$ with $\\operatorname{emb dim}(G)=\\dim R'$. See \\cite{EH18} for more details on this construction. We denote by $\\mathfrak m$ and $\\mathfrak m'$ the maximal ideals of $R$ and $R'$, respectively, and consider $K_{F'}=(I^\\perp:_{R'} F')$. \nFrom \\Cref{KF}$.(i)$, it is easy to deduce that $K_{F'}\/(\\mathfrak m K_{F'}+J')\\simeq I^\\perp\/(\\mathfrak m\\circ I^\\perp)$. Hence $\\tau(A)=\\dim_\\res K_{F'}\/(\\mathfrak m K_{F'}+J')$ by \\cite[Proposition 2.6]{ES17}. Then\n$$\\operatorname{emb dim}(G)+1=\\dim_\\res R'\/(\\mathfrak m')^2\\leq \\dim_\\res R'\/(\\mathfrak m K_{F'}+J')=\\operatorname{gcl}(A)+\\tau(A),$$\nwhere the last equality follows from \\Cref{KF}$.(ii)$.\n\\end{proof}\n\n\\begin{definition}\\label{DefMGC}\nGiven an Artin ring $A=R\/I$, the variety $MGC(A)=MGC^n(A)$, with $n=\\tau(A)+\\operatorname{gcl}(A)-1$, is called the minimal Gorenstein cover variety associated to $A$.\n\\end{definition}\n\n\\begin{remark}\\label{RemMGC}\nLet us recall that in \\cite{EH18} we proved that for low Gorenstein colength of $A$, i.e. $\\operatorname{gcl}(A)\\le 2$, $\\operatorname{emb dim}(G)=\\operatorname{emb dim}(A)$ for any minimal Gorenstein cover $G$ of $A$. In this situation we can consider $MGC(A)$ as the variety $MGC^n(A)$ with $n=\\operatorname{emb dim}(A)$.\n\\end{remark}\n\nObserve that this notion of minimal Gorenstein cover variety generalizes the definition of Teter variety introduced in \\cite{ES17}, which applies only to rings of Gorenstein colength 1, to any arbitrary colength.\n\n\\section{Computing $MGC(A)$ for low Gorenstein colength}\\label{s5}\n\nIn this section we provide algorithms and examples to compute the variety of minimal Gorenstein covers of a given ring $A$ whenever its Gorenstein colength is 1 or 2. These algorithms can also be used to decide whether a ring has colength greater than 2, since it will correspond to empty varieties.\n\nTo start with, we provide the auxiliar algorithm to compute the integral of $I^\\perp$ with respect to the $t$-th power of the maximal ideal of $R$. If there exist polynomials defining minimal Gorenstein covers of colength $t$, they must belong to this integral.\n\n\\subsection{Computing integrals of modules}\n\nConsider a $\\res$-basis $\\mathbf{b}=(b_1,\\dots,b_t)$ of a finitely generated sub-$R$-module $M$ of $S$ and consider $x_k\\circ b_i=\\sum_{j=1}^t a_j^i b_j$, for any $1\\leq i\\leq t$ and $1\\leq k\\leq n$. Let us define matrices $U_k=(a_j^i)_{1\\leq j,i\\leq t}$ for any $1\\leq k\\leq n$. Note that\n\n$$\\left(x_k\\circ b_1 \\cdots x_k\\circ b_t\\right)=\n\\left(b_1 \\cdots b_t\\right)\n\\left(\\begin{array}{ccc}\na_1^1 & \\dots & a_1^t\\\\\n\\vdots & & \\vdots\\\\\na_t^1 & \\dots & a_t^t\n\\end{array}\\right)\n.$$\n\n\\noindent\nNow consider any element $h\\in M$. Then\n$$x_k\\circ h=x_k\\circ\\sum_{i=1}^t h_ib_i=\\sum_{i=1}^t (x_k\\circ h_ib_i)=\\sum_{i=1}^t (x_k\\circ b_i)h_i=$$\n$$=\\left(x_k\\circ b_1 \\cdots x_k\\circ b_t\\right)\n\\left(\\begin{array}{c}\nh_1\\\\\n\\vdots\\\\\nh_t\n\\end{array}\\right)=\\left(b_1 \\cdots b_t\\right)U_k\n\\left(\\begin{array}{c}\nh_1\\\\\n\\vdots\\\\\nh_t\n\\end{array}\\right),$$\n\\noindent\nwhere $h_1,\\dots,h_t\\in\\res$.\n\n\\begin{definition} Let $U_k$, $1\\leq k\\leq n$, be the square matrix of order $t$ such that\n$$x_k\\circ h=\\mathbf{b}\\,U_k\\,\\mathbf{h}^t,$$\nwhere $\\mathbf{h}=(h_1,\\dots,h_t)$ for any $h\\in M$, with $h=\\sum_{i=1}^t h_ib_i$. 
We call $U_k$ the contraction matrix of $M$ with respect to $x_k$ associated to a $\\res$-basis $\\mathbf{b}$ of $M$.\n\\end{definition}\n\n\\begin{remark} Since $x_kx_l\\circ h=x_lx_k\\circ h$ for any $h\\in M$, we have $U_kU_l=U_lU_k$, with $1\\leq k1$.\n\\end{proof}\n\n\\begin{algorithm}[H]\n\\caption{Compute the Teter variety of $A=R\/I$ with $n\\geq 2$}\n\\label{AlMGC1}\n\\begin{algorithmic}\n\\REQUIRE\n$s$ socle degree of $A=R\/I$;\\\\\n$b_1,\\dots,b_t$ $\\res$-basis of the inverse system $I^\\perp$;\\\\\n$F_1,\\dots,F_h$ adapted $\\res$-basis of $\\mathcal{L}_{A,1}$;\\\\\n$U_1,\\dots,U_n$ contraction matrices of $\\int_\\mathfrak m I^\\perp$.\n\\ENSURE\nideal $\\mathfrak{a}$ such that $MGC(A)=\\mathbb{P}_\\res^{h-1}\\backslash\\mathbb{V}_+(\\mathfrak{a})$.\n\\RETURN\n\\begin{enumerate}\n\\item Set $F=a_1F_1+\\dots+a_hF_h$ and $\\mathbf{F}=(a_1,\\dots,a_h)^t$, where $a_1,\\dots,a_h$ are variables in $\\res$.\n\\item Build matrix $A=\\left(\\mu^\\alpha_j\\right)_{1\\leq\\vert\\alpha\\vert\\leq s+1,1\\leq j\\leq t}$, where $$U^\\alpha \\textbf{F}=\\sum_{j=1}^t\\mu^\\alpha_jb_j,\\quad U^\\alpha=U_1^{\\alpha_1}\\cdots U_n^{\\alpha_n}.$$\n\\item Compute the ideal $\\mathfrak{a}$ generated by all minors of order $t$ of the matrix $A$.\n\\end{enumerate}\n\\end{algorithmic}\n\\end{algorithm}\n\nWith the following example we show how to apply and interpret the output of the algorithm:\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2]\\!]$ and $I=\\mathfrak m^2$ \\cite[Example 4.3]{ES17}. From \\Cref{ExAlINT} we gather all the information we need for the input of \\Cref{AlMGC1}:\n{\\sc Input}:\n$b_1=1,b_2=y_1,b_3=y_2$ $\\res$-basis of $I^\\perp$; $F_1=y^2,F_2=y_1y_2,F_3=y_1^2$ adapted $\\res$-basis of $\\mathcal{L}_{A,1}$; $U_1'$,$U_2'$ contraction matrices of $\\int_\\mathfrak m I^\\perp$.\\\\\n{\\sc Output}: $\\operatorname{rad}(\\mathfrak{a})=a_2^2-a_1a_3$.\\\\\nThen $MGC(A)=\\mathbb{P}^2\\backslash\\lbrace a_2^2-a_1a_3=0\\rbrace$ and any minimal Gorenstein cover $G=R\/\\operatorname{Ann_R} F$ of $A$ is given by a polynomial $F=a_1y^2_2+a_2y_1y_2+a_3y_1^2$ such that $a_2^2-a_1a_3\\neq 0$.\n\\end{example}\n\n\\bigskip\n\n\\subsection{Computing $MGC(A)$ in colength 2}\n\nConsider a $\\res$-basis $b_1,\\dots,b_t$ of $I^\\perp$ and an adapted $\\res$-basis $\\overline{F}_1,\\dots,\\overline{F}_{h_1},\\overline{G}_1,\\dots,\\overline{G}_{h_2}$ of $\\mathcal{L}_{A,2}$ (see \\Cref{adapted}) such that\n\\begin{itemize}\n\\item $b_1,\\dots,b_t,F_1,\\dots,F_{h_1}$ is a $\\res$-basis of $\\int_\\mathfrak m I^\\perp$,\n\\item $b_1,\\dots,b_t,F_1,\\dots,F_{h_1},G_1,\\dots,G_{h_2}$ is a $\\res$-basis of $\\int_{\\mathfrak m^2} I^\\perp$.\n\\end{itemize}\n\nThroughout this section, we will Consider local Artin rings $A=R\/I$ such that $\\operatorname{gcl}(A)>1$. If a minimal Gorenstein cover $G=R\/\\operatorname{Ann_R} H$ of colength 2 exists, then, by \\Cref{cor}, we can assume that $H$ is a polynomial of the form\n$$H=\\sum_{i=1}^{h_1}\\alpha_iF_i+\\sum_{i=1}^{h_2}\\beta_iG_i,\\quad \\alpha_i,\\beta_i\\in\\res.$$\nWe want to obtain conditions on the $\\alpha$'s and $\\beta$'s under which $H$ actually generates a minimal Gorenstein cover of colength 2. 
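\n\nBefore deriving these conditions, note that the decomposition of $x_k\circ H$ used in the next paragraph is directly computable: assuming the contraction matrices $U_1,\dots,U_n$ of $\int_{\mathfrak m^2}I^\perp$ with respect to the ordered adapted basis $(b_1,\dots,b_t,F_1,\dots,F_{h_1},G_1,\dots,G_{h_2})$ are available, for instance as returned by the integration method of \Cref{AlINT}, the coordinates of $x_k\circ H$ are obtained by one matrix-vector product. The following sketch is our own illustration in Python, not the authors' \emph{Singular} code.\n\begin{verbatim}\nimport numpy as np\n\ndef coordinate_blocks(U_list, alpha, beta, t, h1):\n    # coordinates of H in the ordered basis: zeros on the b-block,\n    # then the alpha's (F-block) and the beta's (G-block)\n    h = np.concatenate([np.zeros(t), np.asarray(alpha, float),\n                        np.asarray(beta, float)])\n    mu_rows, rho_rows = [], []\n    for U_k in U_list:              # contraction matrix of x_k\n        coords = U_k @ h            # coordinates of x_k o H\n        mu_rows.append(coords[:t])          # component in I^perp\n        rho_rows.append(coords[t:t + h1])   # component along the F_j\n    return np.array(mu_rows), np.array(rho_rows)\n\end{verbatim}\nFor a numerical choice of the coefficients, the second block returned here stacks, for $k=1,\dots,n$, the rows of the matrix $B_H$ introduced below, so its rank can be checked directly, for instance with \texttt{numpy.linalg.matrix\_rank}.\n\n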
By definition, $H\\in\\int_{\\mathfrak m^2}I^\\perp$, hence $x_k\\circ H\\in\\mathfrak m\\circ\\int_\\mathfrak m\\left(\\int_\\mathfrak m I^\\perp\\right)\\subseteq\\int_{\\mathfrak m}I^\\perp$ and\n$$x_k\\circ H=\\sum_{j=1}^t\\mu^j_kb_j+\\sum_{j=1}^{h_1}\\rho^j_kF_j,\\quad \\mu_k^j,\\rho_k^j\\in\\res.$$\nSet matrices $A_H=(\\mu_k^j)$ and $B_H=(\\rho_k^j)$. Let us describe matrix $B_H$ explicitly. We have\n$$x_k\\circ H=\\sum_{i=1}^{h_1}\\alpha_i(x_k\\circ F_i)+\\sum_{i=1}^{h_2}\\beta_i(x_k\\circ G_i).$$\nNote that each $x_k\\circ G_i$, for any $1\\leq i\\leq h_2$, is in $\\int_{\\mathfrak m}I^\\perp$ and hence it can be decomposed as\n$$x_k\\circ G_i=\\sum_{j=1}^t\\lambda^{k,i}_jb_j+\\sum_{j=1}^{h_1}a^{k,i}_jF_j,\\quad \\lambda^{k,i}_j,a_j^{k,i}\\in\\res.$$\nThen\n$$x_k\\circ H=\\sum_{i=1}^{h_1}\\alpha_i(x_k\\circ F_i)+\\sum_{i=1}^{h_2}\\beta_i\\left(\\sum_{j=1}^t\\lambda^{k,i}_jb_j+\\sum_{j=1}^{h_1}a^{k,i}_jF_j\\right)=b+\\sum_{j=1}^{h_1}\\left(\\sum_{i=1}^{h_2}\\beta_ia^{k,i}_j\\right)F_j,$$\nwhere $b:=\\sum_{i=1}^{h_1}\\alpha_i(x_k\\circ F_i)+\\sum_{i=1}^{h_2}\\beta_i\\left(\\sum_{j=1}^t\\lambda_j^{k,i}b_j\\right)\\in I^\\perp$.\nObserve that\n\\begin{equation}\\label{rho}\n\\rho_k^j=\\sum_{i=1}^{h_2}a^{k,i}_j\\beta_i,\n\\end{equation}\nhence the entries of matrix $B_H$ can be regarded as polynomials in variables $\\beta_1,\\dots,\\beta_{h_2}$ with coefficients in $\\res$.\n\n\\begin{lemma}\\label{rk(B)} Consider the matrix $B_H=(\\rho_k^j)$ as previously defined and let $B'_H=(\\varrho^j_k)$ be the matrix of the coefficients of $\\overline{L_k\\circ H}=\\sum_{j=1}^{h_1}\\varrho_k^j\\overline{F}_j\\in\\mathcal{L}_{A,1}$ where $L_1,\\dots,L_n$ are independent linear forms. Then,\n\\begin{enumerate}\n\\item[(i)] $\\operatorname{rk} B_H=\\dim_\\res\\left(\\displaystyle\\frac{\\mathfrak m\\circ H+I^\\perp}{I^\\perp}\\right)$,\n\\item[(ii)] $\\operatorname{rk} B'_H=\\operatorname{rk} B_H$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nSince $\\overline{x_k\\circ H}=\\sum_{j=1}^{h_1}\\rho_k^j\\overline{F}_j$ and $\\overline{F}_1,\\dots,\\overline{F}_{h_1}$ is a $\\res$-basis of $\\mathcal{L}_{A,1}$, then $\\operatorname{rk} B_H=\\dim_\\res\\langle\\overline{x_1\\circ H},\\dots,\\overline{x_n\\circ H}\\rangle_\\res$. Note that $\\langle\\overline{x_1\\circ H},\\dots,\\overline{x_n\\circ H}\\rangle_\\res=(\\mathfrak m\\circ H+I^\\perp)\/I^\\perp\\subseteq\\mathcal{L}_{A,1}$, hence (i) holds.\n\n\\noindent\nFor (ii) it will be enough to prove that $\\langle \\overline{x_1\\circ H},\\dots,\\overline{x_n\\circ H}\\rangle_\\res=\\langle \\overline{L_1\\circ H},\\dots,\\overline{L_n\\circ H}\\rangle_\\res$.\nIndeed, since $L_i=\\sum_{j=1}^n\\lambda^i_jx_j$ for any $1\\leq i\\leq n$, then $\\overline{L_i\\circ H}=\\sum_{j=1}^n\\lambda_j^i(\\overline{x_j\\circ H})\\in\\langle\\overline{x_1\\circ H},\\dots,\\overline{x_n\\circ H}\\rangle_\\res$. The reverse inclusion comes from the fact that $L_1,\\dots,L_n$ are linearly independent and hence $(L_1,\\dots,L_n)=\\mathfrak m$.\n\\end{proof}\n\n\\begin{lemma}\\label{BH0} With the previous notation, consider a polynomial $H\\in\\int_{\\mathfrak m^2}I^\\perp$ with coefficients $\\beta_1,\\dots,\\beta_{h_2}$ of $G_1,\\dots,G_{h_2}$, respectively, and its corresponding matrix $B_H$. 
Then the following are equivalent:\n\\begin{enumerate}\n\\item[(i)] $B_H\\neq 0$,\n\\item[(ii)] $\\mathfrak m\\circ H\\nsubseteq I^\\perp$,\n\\item[(iii)] $(\\beta_1,\\dots,\\beta_{h_2})\\neq (0,\\dots,0)$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n$(i)$ implies $(ii)$. If $B_H\\neq 0$, by \\Cref{rk(B)}, $(\\mathfrak m\\circ H+I^\\perp)\/I^\\perp\\neq 0$ and hence $\\mathfrak m\\circ H\\nsubseteq I^\\perp$.\n\n\\noindent\n$(ii)$ implies $(iii)$. If $\\mathfrak m\\circ H\\nsubseteq I^\\perp$, by definition $H\\notin \\int_\\mathfrak m I^\\perp$ and hence $H\\in\\int_{\\mathfrak m^2} I^\\perp\\backslash\\int_\\mathfrak m I^\\perp$. Therefore, some $\\beta_i$ must be non-zero.\n\n\\noindent\n$(iii)$ implies $(i)$. Since $G_i\\in\\int_{\\mathfrak m^2}I^\\perp\\backslash\\int_\\mathfrak m I^\\perp$ for any $1\\leq i\\leq h_2$ and, by hypothesis, there is some non-zero $\\beta_i$, we have that $H\\in\\int_{\\mathfrak m^2}I^\\perp\\backslash \\int_\\mathfrak m I^\\perp$. We claim that $x_k\\circ H\\in\\int_\\mathfrak m I^\\perp\\backslash I^\\perp$ for some $k\\in\\lbrace 1,\\dots,n\\rbrace$. Suppose the claim is not true. Then $x_k\\circ H\\in I^\\perp$ for any $1\\leq k\\leq n$, or equivalently, $\\mathfrak m\\circ H\\subseteq I^\\perp$ but this is equivalent to $H\\in\\int_\\mathfrak m I^\\perp$, which is a contradiction. Since\n$$x_k\\circ H=b+\\sum_{j=1}^{h_1}\\rho_k^jF_j\\in\\int_\\mathfrak m I^\\perp\\backslash I^\\perp,\\quad b\\in I^\\perp,$$\n\\noindent\nfor some $k\\in\\lbrace 1,\\dots,n\\rbrace$, then $\\rho_k^j\\neq 0$, for some $j\\in\\lbrace 1,\\dots,h_1\\rbrace$. Therefore, $B_H\\neq 0$.\n\\end{proof}\n\n\\begin{lemma}\\label{lemaB} Consider the previous setting. If $B_H=0$, then either $\\operatorname{gcl}(A)=0$ or $\\operatorname{gcl}(A)=1$ or $R\/\\operatorname{Ann_R} H$ is not a cover of $A$.\n\\end{lemma}\n\n\\begin{proof}\nIf $B_H=0$, then $\\mathfrak m\\circ H\\subseteq I^\\perp$ and hence $\\ell(\\langle H\\rangle)-1\\leq \\ell(I^\\perp)$. If $I^\\perp\\subseteq \\langle H\\rangle$, then $G=R\/\\operatorname{Ann_R} H$ is a Gorenstein cover of $A$ such that $\\ell(G)-\\ell(A)\\leq 1$. Therefore, either $\\operatorname{gcl}(A)\\leq 1$ or $G$ is not a cover.\n\\end{proof}\n\n\\bigskip\nSince we already have techniques to check whether $A$ has colength 0 or 1, we can focus completely on the case $\\operatorname{gcl}(A)>1$. Then, according to \\Cref{lemaB}, if $G=R\/\\operatorname{Ann_R} H$ is a Gorenstein cover of $A$, then $B_H\\neq 0$.\n\n\\begin{proposition}\\label{rk B} Assume that $B_H\\neq 0$. Then $\\operatorname{rk} B_H=1$ if and only if $(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subseteq I^\\perp$\nfor some independent linear forms $L_1,\\dots,L_n$.\n\\end{proposition}\n\n\\begin{proof}\nSince $B_H\\neq 0$, there exists $k$ such that $x_k\\circ H\\notin I^\\perp$. Without loss of generality, we can assume that $x_n\\circ H\\notin I^\\perp$. If $\\operatorname{rk} B_H=1$, then any other row of $B_H$ must be a multiple of row $n$. Therefore, for any $1\\leq i\\leq n-1$, there exists $\\lambda_i\\in\\res$ such that $(x_i-\\lambda_ix_n)\\circ H\\in I^\\perp$. Take $L_n:=x_n$ and $L_i:=x_i-\\lambda_ix_n$. Then $L_1,\\dots,L_n$ are linearly independent and $L_i\\circ H\\in I^\\perp$ for any $1\\leq i\\leq n-1$. 
Moreover, $L_n^2\\circ H\\in \\mathfrak m^2\\circ\\int_{\\mathfrak m^2}I^\\perp\\subseteq I^\\perp$.\n\nConversely, let $B'_H=(\\varrho^j_k)$ be the matrix of the coefficients of $\\overline{L_k\\circ H}=\\sum_{j=1}^{h_1}\\varrho_k^j\\overline{F_j}\\in\\mathcal{L}_{A,1}$. By \\Cref{rk(B)}, since $B_H\\neq 0$, then $B'_H\\neq 0$. By hypothesis, $\\overline{L_1\\circ H}=\\dots=\\overline{L_{n-1}\\circ H}=0$ in $\\mathcal{L}_{A,1}$ but, since $B'_H\\neq 0$, then $\\overline{L_n\\circ H}\\neq 0$. Then $\\operatorname{rk} B'_H=1$ and hence, again by \\Cref{rk(B)}, $\\operatorname{rk} B_H=1$.\n\\end{proof}\n\nRecall that $\\langle H\\rangle=\\langle \\lambda H\\rangle$ for any $\\lambda\\in\\res^\\ast$. Therefore, as pointed out in \\Cref{ThMGC}, for any $H\\neq 0$, a Gorenstein ring $G=R\/\\operatorname{Ann_R} H$ can be identified with a point $[H]\\in\\mathbb{P}_\\res\\left(\\mathcal{L}_{A,2}\\right)$ by taking coordinates $(\\alpha_1:\\dots:\\alpha_{h_1}:\\beta_1:\\dots:\\beta_{h_2})$. Observe that $\\mathbb{P}_\\res\\left(\\mathcal{L}_{A,2}\\right)$ is a projective space over $\\res$ of dimension $h_1+h_2-1$, hence we will denote it by $\\mathbb{P}_\\res^{h_1+h_2-1}$.\n\nOn the other hand, by \\Cref{rho}, any minor of $B_H=(\\rho_k^j)$ is a homogeneous polynomial in variables $\\beta_1,\\dots,\\beta_{h_2}$. Therefore, we can consider the homogeneous ideal $\\mathfrak{b}$ generated by all order-2-minors of $B_H$ in $\\res[\\alpha_1,\\dots,\\alpha_{h_1},\\beta_1,\\dots,\\beta_{h_2}]$. Hence $\\mathbb{V}_+(\\mathfrak{b})$ is the projective variety consisting of all points $[H]\\in\\mathbb{P}_\\res^{h_1+h_2-1}$ such that $\\operatorname{rk} B_H\\leq 1$.\n\n\\begin{remark} In this section we will use the notation $MGC_2(A)$ to denote the set of points $[H]\\in\\mathbb{P}_\\res^{h_1+h_2-1}$ such that $G=R\/\\operatorname{Ann_R} H$ is a Gorenstein cover of $A$ with $\\ell(G)-\\ell(A)=2$. Since we are considering rings such that $\\operatorname{gcl}(A)>1$, we can characterize rings of higher colength than 2 as those such that $MGC_2(A)=\\emptyset$. On the other hand, $\\operatorname{gcl}(A)=2$ if and only if $MGC_2(A)\\neq\\emptyset$, hence in this case $MGC_2(A)=MGC(A)$, see \\Cref{DefMGC} and \\Cref{RemMGC}.\n\\end{remark}\n\n\\begin{corollary}\\label{MGC1} Let $A=R\/I$ be an Artin ring such that $\\operatorname{gcl}(A)=2$. Then\n$$MGC_2(A)\\subseteq\\mathbb{V}_+(\\mathfrak{b})\\subseteq\\mathbb{P}_\\res^{h_1+h_2-1}.$$\n\\end{corollary}\n\n\\begin{proof}\nBy \\Cref{KF}$.(ii)$, points $[H]\\in MGC_2(A)$ correspond to Gorenstein covers $G=R\/\\operatorname{Ann_R} H$ of $A$ such that $I^\\perp=(L_1,\\dots,L_{n-1},L_n^2)\\circ H$ for some $L_1,\\dots,L_n$. Since $B_H\\neq 0$ by \\Cref{lemaB}, then we can apply \\Cref{rk B} to deduce that $\\operatorname{rk} B_H=1$.\n\\end{proof}\n\nNote that the conditions on the rank of $B_H$ do not provide any information about which particular choices of independent linear forms $L_1,\\dots,L_n$ satisfy the inclusion $(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subseteq I^\\perp$. In fact, it will be enough to understand which are the $L_n$ that meet the requirements. To that end, we fix $L_n=v_1x_1+\\dots+v_nx_n$, where $v=(v_1,\\dots,v_n)\\neq 0$. We can choose linear forms $L_i=\\lambda_1^ix_1+\\dots+\\lambda_n^ix_n$, where $\\lambda_i=(\\lambda_1^i,\\dots,\\lambda_n^i)\\neq 0$, for $1\\leq i\\leq n-1$, such that $L_1,\\dots,L_n$ are linearly independent and $\\lambda_i\\cdot v=0$. 
Observe that the $k$-vector space generated by $L_1,\\dots,L_{n-1}$ can be expressed in terms of $v_1,\\dots,v_n$, that is,\n$$\\langle L_1,\\dots,L_{n-1}\\rangle_\\res=\\langle v_lx_k-v_kx_l:\\,1\\leq k1$. Consider a point $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})\\subset\\mathbb{P}^{h_1+h_2-1}\\times\\mathbb{P}^{n-1}$. Then\n$$[H]\\in MGC_2(A)\\Longleftrightarrow ([H],[v])\\notin \\mathbb{V}_+(\\mathfrak{a}),$$\n\\end{proposition}\n\n\\begin{proof}\nFrom \\Cref{MGC1} we deduce that if $[H]$ is a point in $MGC_2(A)$, then $\\operatorname{rk} B_H\\leq 1$. The same is true for any point $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})$. Let us consider these two cases:\n\n\\emph{Case $B_H=0$}. Since $\\operatorname{gcl}(A)>1$, then $R\/\\operatorname{Ann_R} H$ is not a Gorenstein cover of $A$ by \\Cref{lemaB}, hence $[H]\\notin MGC_2(A)$. On the other hand, as stated in the proof of \\Cref{proj}, $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})$ for any $v\\neq 0$. By \\Cref{BH0} and $\\operatorname{gcl}(A)\\neq 1$, it follows that\n$$(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subseteq \\mathfrak m\\circ H\\subsetneq I^\\perp$$\n\\noindent\nfor any $L_1,\\dots,L_n$ linearly independent linear forms, where $L_n=v_1x_1+\\dots+v_nx_n$. Therefore, the rank of matrix $U_{H,v}$ is always strictly smaller than $\\dim_\\res I^\\perp$. Hence $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{a})$ for any $v\\neq 0$.\n\n\\emph{Case $\\operatorname{rk} B_H=1$}. If $[H]\\in MGC_2(A)$, then there exist $L_1,\\dots,L_n$ such that $(L_1,\\dots,L_{n-1},L_n^2)\\circ H=I^\\perp$. Take $v$ as the vector of coefficients of $L_n$, it is an admissible vector by definition. By \\Cref{proj}, $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})$ is unique and $\\operatorname{rk} U_{H,v}=\\dim_\\res I^\\perp$. Therefore, $([H],[v])\\notin\\mathbb{V}_+(\\mathfrak{a})$.\n\nConversely, if $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})\\cap\\mathbb{V}_+(\\mathfrak{a})$, then $\\operatorname{rk} U_{H,v}<\\dim_\\res I^\\perp$ and hence $(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subsetneq I^\\perp$, where $L_n=v_1x_1+\\dots+v_nx_n$. By unicity of $v$, no other choice of $L_1,\\dots,L_n$ satisfies the inclusion $(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subset I^\\perp$, hence $[H]\\notin MGC_2(A)$.\n\\end{proof}\n\n\\begin{corollary} Assume $\\operatorname{gcl}(A)>1$. With the previous definitions,\n$$MGC_2(A)=\\mathbb{V}_+(\\mathfrak{b})\\backslash \\pi_1\\left(\\mathbb{V}_+(\\mathfrak{c})\\cap\\mathbb{V}_+(\\mathfrak{a})\\right).$$\n\\end{corollary}\n\n\\begin{proof}\nIt follows from \\Cref{proj} and \\Cref{MGC2}.\n\\end{proof}\n\nFinally, let us recall the following result for bihomogeneous ideals:\n\n\\begin{lemma} Let ideals $\\mathfrak{a},\\mathfrak{c}$ be as previously defined, $\\mathfrak{d}=\\mathfrak{a}+\\mathfrak{c}$ the sum ideal and $\\pi_1:\\mathbb{P}_\\res^{h_1+h_2-1}\\times \\mathbb{P}_\\res^{n-1}\\longrightarrow \\mathbb{P}_\\res^{h_1+h_2-1}$ be the projection map. Let $\\widehat{\\mathfrak{d}}$ be the projective elimination of the ideal $\\mathfrak{d}$ with respect to variables $v_1,\\dots,v_n$. 
Then,\n$$\\pi_1(\\mathbb{V}_+(\\mathbb{\\mathfrak{a}})\\cap\\mathbb{V}_+(\\mathbb{\\mathfrak{c}}))=\\mathbb{V}_+(\\widehat{\\mathfrak{d}}).$$\n\\end{lemma}\n\\begin{proof}\nSee \\cite[Section 8.5, Exercise 16]{CLO97}.\n\\end{proof}\n\n\\bigskip\n\nWe end this section by providing an algorithm to effectively compute the set $MGC_2(A)$ of any ring $A=R\/I$ such that $\\operatorname{gcl}(A)>1$.\n\n\\begin{algorithm}[H]\n\\caption{Compute $MGC_2(A)$ of $A=R\/I$ with $n\\geq 2$ and $\\operatorname{gcl}(A)>1$}\n\\label{AlMGC2}\n\\begin{algorithmic}\n\\REQUIRE\n$s$ socle degree of $A=R\/I$;\n$b_1,\\dots,b_t$ $\\res$-basis of the inverse system $I^\\perp$;\n$F_1,\\dots,F_{h_1},G_1,\\dots,G_{h_2}$ an adapted $\\res$-basis of $\\mathcal{L}_{A,2}$;\n$U_1,\\dots,U_n$ contraction matrices of $\\int_{\\mathfrak m^2} I^\\perp$.\n\\ENSURE\nideals $\\mathfrak{b}$ and $\\widehat{\\mathfrak{d}}$ such that\n$MGC_2(A)=\\mathbb{V}_+(\\mathfrak{b})\\backslash\\mathbb{V}_+(\\widehat{\\mathfrak{d}})$.\n\\RETURN\n\\begin{enumerate}\n\\item Set $H=\\alpha_1F_1+\\dots+\\alpha_{h_1}F_{h_1}+\\beta_1G_1+\\dots+\\beta_{h_2}G_{h_2}$, where $\\alpha,\\beta$ are variables in $\\res$. Set column vectors $\\textbf{H}=(0,\\dots,0,\\alpha,\\beta)^t$ and $v=(v_1,\\dots,v_n)^t$ in $R=\\res[\\alpha,\\beta,v]$, where the first $t$ components of $\\textbf{H}$ are zero.\n\\item Build matrix $B_H=(\\rho_i^j)_{1\\leq i\\leq n,\\,1\\leq j\\leq h_1}$, where\n$U_i\\textbf{H}$ is the column vector $(\\mu_i^1,\\dots,\\mu_i^t,\\rho_i^1,\\dots,\\rho_i^{h_1},0,\\dots,0)^t$.\n\\item Build matrix $C_{H,v}=\\left(\\begin{array}{c|c}\nB_H & v\\\\\n\\end{array}\\right)$ as an horizontal concatenation of $B_H$ and the column vector $v$.\n\\item Compute the ideal $\\mathfrak{c}\\subseteq R$ generated by all minors of order 2 of $B_H$.\n\\item Build matrix $U_{H,v}$ as a vertical concatenation of matrices $(\\varrho^j_{l,k})_{1\\leq j\\leq h_1,\\,1\\leq l2$.\n\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2]\\!]$ and $I=(x_1^2,x_1x_2^2,x_2^4)$. Applying \\Cref{AlINT} twice we get the necessary input for \\Cref{AlMGC2}:\\\\\n{\\sc Input}: $b_1=1,b_2=y_1,b_3=y_2,b_4=y_2^2,b_5=y_1y_2,b_6=y_2^3$ $\\res$-basis of $I^\\perp$; $F_1=y_2^4,F_2=y_1y_2^2,F_3=y_1^2,G_1=y_1^2y_2,G_2=y_1y_2^3,G_3=y_2^5,G_4=y_1^3$ adapted $\\res$-basis of $\\mathcal{L}_{A,2}$; $U_1, U_2$ contraction matrices of $\\int_{\\mathfrak m^2}I^\\perp$.\\\\\n{\\sc Output}: $\\mathfrak{b}=(b_3b_4,b_2b_4)$, $\\mathfrak{\\widehat{d}}=(b_3b_4,b_2b_4,b_2^2-b_1b_3)$.\\\\\n$MGC_2(A)=\\mathbb{V}_+(b_3b_4,b_2b_4)\\backslash\\mathbb{V}_+ (b_3b_4,b_2b_4,b_2^2-b_1b_3)=\\mathbb{V}_+(b_3b_4,b_2b_4)\\backslash\\mathbb{V}_+ (b_2^2-b_1b_3).$\nNote that if $b_3b_4=b_2b_4=0$ and $b_4\\neq 0$, then both $b_2$ and $b_3$ are zero and the condition $b_2^2-b_1b_3=0$ always holds. Therefore, $\\operatorname{gcl}(A)=2$ and hence\n$$MGC(A)=\\mathbb{V}_+(b_4)\\backslash\\mathbb{V}_+ (b_2^2-b_1b_3)\\simeq \\mathbb{P}^5\\backslash\\mathbb{V}_+ (b_2^2-b_1b_3),$$\n\\noindent\nwhere $(a_1:a_2:a_3:b_1:b_2:b_3)$ are the coordinates of the points in $\\mathbb{P}^5$. Moreover, any minimal Gorenstein cover is of the form $G=R\/\\operatorname{Ann_R} H$, where\n$$H=a_1y_2^4+a_2y_1y_2^2+a_3y_1^2+b_1y_1^2y_2+b_2y_1y_2^3+b_3y_2^5$$\n\\noindent\nsatisfies $b_2^2-b_1b_3\\neq 0$. 
All such covers admit $(x_1,x_2^2)$ as the corresponding $K_H$.\n\end{example}\n\n\section{Computations}\n\nThe first aim of this section is to provide a wide range of examples of the computation of the minimal Gorenstein cover variety of a local ring $A$. In \cite{Poo08a}, Poonen provides a complete classification of local algebras over an algebraically closed field of length at most 6. Note that, for higher lengths, the number of isomorphism classes is no longer finite. We will go through all the algebras in Poonen's list and restrict ourselves, for the sake of simplicity, to fields of characteristic zero.\n\nOn the other hand, we also intend to test the efficiency of the algorithms by collecting the computation times. We have implemented Algorithms 1, 2 and 3 of \Cref{s5} in the commutative algebra software \emph{Singular} \cite{DGPS}. The computer we use runs the operating system Microsoft Windows 10 Pro and has the following technical specifications: Surface Pro 3; Processor: 1.90 GHz Intel Core i5-4300U 3 MB SmartCache; Memory: 4GB 1600MHz DDR3.\n\n\subsection{Teter varieties}\n\nIn this first part of the section we are interested in the computation of Teter varieties, that is, the $MGC(A)$ variety for local $\res$-algebras $A$ of Gorenstein colength 1. All the results are obtained by running \Cref{AlMGC1} in \emph{Singular}.\n\n\begin{example}\nConsider $A=R\/I$, with $R=\res[\![x_1,x_2,x_3]\!]$ and $I=(x_1^2,x_1x_2,x_1x_3,x_2x_3,x_2^3,x_3^3)$. Note that $\operatorname{HF}_A=\lbrace 1,3,2\rbrace$ and $\tau(A)=3$. The output provided by our implementation of the algorithm in \emph{Singular} \cite{DGPS} is the following:\n{\scriptsize{\n\begin{verbatim}\nF;\na(4)*x(2)^3+a(1)*x(3)^3+a(6)*x(1)^2+a(5)*x(1)*x(2)+a(3)*x(1)*x(3)+a(2)*x(2)*x(3)\nradical(a);\na(1)*a(4)*a(6)\n\end{verbatim}}}\n\noindent\nWe consider points with coordinates $(a_1:a_2:a_3:a_4:a_5:a_6)\in\mathbb{P}^5$. Therefore, $MGC(A)=\mathbb{P}^5\backslash\mathbb{V}_+(a_1a_4a_6)$ and any minimal Gorenstein cover is of the form $G=R\/\operatorname{Ann_R} H$, where $H=a_1y_3^3+a_2y_2y_3+a_3y_1y_3+a_4y_2^3+a_5y_1y_2+a_6y_1^2$ with $a_1a_4a_6\neq 0$.\n\end{example}\n\nIn \Cref{table:TMGC1} below we show the computation times (in seconds) for all isomorphism classes of local $\res$-algebras $A$ with $\operatorname{gcl}(A)=1$ appearing in Poonen's classification \cite{Poo08a}. In this table, we list the Hilbert function of $A=R\/I$, the expression of the ideal $I$ up to linear isomorphism, the dimension $h-1$ of the projective space $\mathbb{P}^{h-1}$ where the variety $MGC(A)$ lies, and the computation time. 
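The times can be measured with the \texttt{timer} variable of \emph{Singular}; a session of the following shape, shown here for the example above, illustrates the setup, where \texttt{MGC1} is only a placeholder name for our implementation of \Cref{AlMGC1}:\n{\scriptsize{\n\begin{verbatim}\nring S = 0,(x(1..3)),ds;   // three variables with a local ordering\nideal I = x(1)^2,x(1)*x(2),x(1)*x(3),x(2)*x(3),x(2)^3,x(3)^3;\nint t0 = timer;            // start measuring CPU time\nMGC1(I);                   // placeholder for our implementation of the colength-1 algorithm\nint t = timer-t0;          // elapsed time, in seconds with the default timer resolution\n\end{verbatim}}}\n\noindent\n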
Note that our implementation of \\Cref{AlMGC1} includes also the computation of the $\\res$-basis of $\\int_\\mathfrak m I^\\perp$, hence the computation time corresponds to the total.\n\nNote that \\Cref{AlMGC1} also allows us to prove that all the other non-Gorenstein local rings appearing in Poonen's list have Gorenstein colength at least 2.\n\n{\\footnotesize{\n\\begin{table}[H]\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nHF(R\/I) & I & h-1 & t(s)\\\\\n\\hline\n$1,2$ & $(x_1,x_2)^2$ & 2 & 0,06\\\\\n$1,2,1$ & $x_1x_2,x_2^2,x_1^3$ & 2 & 0,06\\\\\n$1,3 $ & $(x_1,x_2,x_3)^2$ & 5 & 0,13\\\\\n$1,2,1,1 $ & $x_1^2,x_1x_2,x_2^4$ & 2 & 0,23\\\\\n$1,2,2$ & $x_1x_2,x_1^3,x_2^3$ & 2 & 0,11\\\\\n & $x_1x_2^2,x_1^2,x_2^3$ & 2 & 0,05\\\\\n$1,3,1$ & $x_1x_2,x_1x_3,x_2x_3,x_2^2,x_3^2,x_1^3$ & 5 & 0,16\\\\\n$1,4$ & $(x_1,x_2,x_3,x_4)^2$ & 9 & 2,30\\\\\n$1,2,1,1,1$ & $x_1x_2,x_1^5,x_2^2$ & 2 & 0,17\\\\\n$1,2,2,1$ & $x_1x_2,x_1^3,x_2^4$ & 2 & 0,09\\\\\n & $x_1^2+x_2^3,x_1x_2^2,x_2^4$ & 2 & 0,1\\\\\n$1,3,1,1$ & $x_1x_2,x_1x_3,x_2x_3,x_2^2,x_3^2,x_1^4$ & 5& 3,05\\\\\n$1,3,2$ & $x_1^2,x_1x_2,x_1x_3,x_2^2,x_2x_3^2,x_3^3$ & 5 & 0,33\\\\\n & $x_1^2,x_1x_2,x_1x_3,x_2x_3,x_2^3,x_3^3$ & 5 & 0,23\\\\\n$1,4,1$ & $x_1x_2,x_1x_3,x_1x_4,x_2x_3,x_2x_4,x_3x_4,x_2^2,x_3^2,x_4^2,x_1^3$ & 9 & 3,21\\\\\n$1,5$ & $(x_1,x_2,x_3,x_4,x_5)^2$ & 14 & 1,25\\\\\n\\hline\n\\end{tabular}\n\\medskip\n\\caption[MGC1]{Computation times of $MGC(A)$ of local rings $A=R\/I$ with $\\ell(A)\\leq 6$ and $\\operatorname{gcl}(A)=1$.}\n\\label{table:TMGC1}\n\\end{table}}}\n\n\\subsection{Minimal Gorenstein covers variety in colength 2}\nNow we want to compute $MGC(A)$ for $\\operatorname{gcl}(A)=2$. All the examples are obtained by running \\Cref{AlMGC2} in \\emph{Singular}.\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2,x_3]\\!]$ and $I=(x_1^2,x_2^2,x_3^2,x_1x_2,x_1x_3)$. Note that $\\operatorname{HF}_A=\\lbrace 1,3,1\\rbrace$ and $\\tau(A)=2$. 
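The input for this example can be prepared in \emph{Singular} along the following lines, where \texttt{MGC2} is again only a placeholder name for our implementation of \Cref{AlMGC2}:\n{\scriptsize{\n\begin{verbatim}\nring S = 0,(x(1..3)),ds;   // three variables with a local ordering\nideal I = x(1)^2,x(2)^2,x(3)^2,x(1)*x(2),x(1)*x(3);\nMGC2(I);                   // placeholder call returning the ideals describing MGC_2(A)\n\end{verbatim}}}\n\noindent\n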
The output provided by our implementation of the algorithm in \\emph{Singular} \\cite{DGPS} is the following:\n{\\tiny{\n\\begin{multicols}{2}\n\\begin{verbatim}\nH;\nb(10)*x(1)^3+b(7)*x(1)^2*x(2)+\n+b(8)*x(1)*x(2)^2+b(9)*x(2)^3+\n+b(1)*x(1)^2*x(3)+b(2)*x(1)*x(2)*x(3)+\n+b(3)*x(2)^2*x(3)+b(4)*x(1)*x(3)^2+\n+b(6)*x(2)*x(3)^2+b(5)*x(3)^3+\n+a(5)*x(1)^2+a(4)*x(1)*x(2)+\n+a(3)*x(2)^2+a(2)*x(1)*x(3)+\n+a(1)*x(3)^2\nradical(b);\n_[1]=b(8)^2-b(7)*b(9)\n_[2]=b(7)*b(8)-b(9)*b(10)\n_[3]=b(6)*b(8)-b(4)*b(9)\n_[4]=b(3)*b(8)-b(2)*b(9)\n_[5]=b(2)*b(8)-b(1)*b(9)\n_[6]=b(1)*b(8)-b(3)*b(10)\n_[7]=b(7)^2-b(8)*b(10)\n_[8]=b(6)*b(7)-b(4)*b(8)\n_[9]=b(4)*b(7)-b(6)*b(10)\n_[10]=b(3)*b(7)-b(1)*b(9)\n_[11]=b(2)*b(7)-b(3)*b(10)\n_[12]=b(1)*b(7)-b(2)*b(10)\n_[13]=b(3)*b(6)-b(5)*b(9)\n_[14]=b(2)*b(6)-b(5)*b(8)\n_[15]=b(1)*b(6)-b(5)*b(7)\n_[16]=b(2)*b(5)-b(4)*b(6)\n_[17]=b(4)^2-b(1)*b(5)\n_[18]=b(3)*b(4)-b(5)*b(8)\n_[19]=b(2)*b(4)-b(5)*b(7)\n_[20]=b(1)*b(4)-b(5)*b(10)\n_[21]=b(2)*b(3)-b(4)*b(9)\n_[22]=b(1)*b(3)-b(4)*b(8)\n_[23]=b(2)^2-b(4)*b(8)\n_[24]=b(1)*b(2)-b(6)*b(10)\n_[25]=b(1)^2-b(4)*b(10)\n_[26]=b(3)*b(5)*b(10)-b(6)^2*b(10)\n_[27]=b(3)^2*b(10)-b(6)*b(9)*b(10)\n_[28]=b(4)*b(6)^2-b(5)^2*b(8)\n_[29]=b(6)^3*b(10)-b(5)^2*b(9)*b(10)\nradical(d);\n_[1]=b(8)^2-b(7)*b(9)\n_[2]=b(7)*b(8)-b(9)*b(10)\n_[3]=b(6)*b(8)-b(4)*b(9)\n_[4]=b(3)*b(8)-b(2)*b(9)\n_[5]=b(2)*b(8)-b(1)*b(9)\n_[6]=b(1)*b(8)-b(3)*b(10)\n_[7]=b(7)^2-b(8)*b(10)\n_[8]=b(6)*b(7)-b(4)*b(8)\n_[9]=b(4)*b(7)-b(6)*b(10)\n_[10]=b(3)*b(7)-b(1)*b(9)\n_[11]=b(2)*b(7)-b(3)*b(10)\n_[12]=b(1)*b(7)-b(2)*b(10)\n_[13]=b(3)*b(6)-b(5)*b(9)\n_[14]=b(2)*b(6)-b(5)*b(8)\n_[15]=b(1)*b(6)-b(5)*b(7)\n_[16]=b(2)*b(5)-b(4)*b(6)\n_[17]=b(4)^2-b(1)*b(5)\n_[18]=b(3)*b(4)-b(5)*b(8)\n_[19]=b(2)*b(4)-b(5)*b(7)\n_[20]=b(1)*b(4)-b(5)*b(10)\n_[21]=b(2)*b(3)-b(4)*b(9)\n_[22]=b(1)*b(3)-b(4)*b(8)\n_[23]=b(2)^2-b(4)*b(8)\n_[24]=b(1)*b(2)-b(6)*b(10)\n_[25]=b(1)^2-b(4)*b(10)\n_[26]=b(3)*b(5)*b(10)-b(6)^2*b(10)\n_[27]=b(3)^2*b(10)-b(6)*b(9)*b(10)\n_[28]=b(4)*b(6)^2-b(5)^2*b(8)\n_[29]=a(5)*b(3)*b(5)-a(5)*b(6)^2\n_[30]=a(5)*b(3)^2-a(5)*b(6)*b(9)\n_[31]=b(6)^3*b(10)-b(5)^2*b(9)*b(10)\n_[32]=a(5)*b(6)^3-a(5)*b(5)^2*b(9)\n\\end{verbatim}\n\\end{multicols}}}\n\\noindent\nWe can simplify the output by using the primary decomposition of the ideal $\\mathfrak{b}=\\bigcap_{i=1}^k \\mathfrak{b}_i$. Then,\n$$MGC(A)=\\left(\\bigcup_{i=1}^k \\mathbb{V}_+(\\mathfrak{b}_i)\\right)\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})=\\bigcup_{i=1}^k \\left(\\mathbb{V}_+(\\mathfrak{b}_i)\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})\\right).$$\n\\noindent\n\\emph{Singular} \\cite{DGPS} provides a primary decomposition $\\mathfrak{b}=\\mathfrak{b}_1\\cap \\mathfrak{b}_2$ that satisfies $\\mathbb{V}_+(\\mathfrak{b}_2)\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})=\\emptyset$. Therefore, we get\n$$MGC(A)=\\mathbb{V}_+(b_1,b_2,b_4,b_7,b_8,b_{10},b_3b_6-b_5b_9)\\backslash\\left(\\mathbb{V}_+(a_5)\\cup\\mathbb{V}_+ (-b_6^3+b_5^2b_9,b_3b_5-b_6^2,b_3^2-b_6b_9)\\right).$$\n\\noindent\nin $\\mathbb{P}^{14}$. 
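Since $b_1$, $b_2$, $b_4$, $b_7$, $b_8$ and $b_{10}$ appear as linear generators, these coordinates vanish at every point of this variety, which therefore lies in a linear subspace $\mathbb{P}^{8}\subset\mathbb{P}^{14}$. 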
We can eliminate some of the variables and consider $MGC(A)$ to be the following variety:\n$$MGC(A)=\\mathbb{V}_+(b_3b_6-b_5b_9)\\backslash\\left(\\mathbb{V}_+(a_5)\\cup\\mathbb{V}_+ (b_5^2b_9-b_6^3,b_3b_5-b_6^2,b_3^2-b_6b_9)\\right)\\subset \\mathbb{P}^8.$$\n\\noindent\nTherefore, any minimal Gorenstein cover is of the form $G=R\/\\operatorname{Ann_R} H$, where\n$$H=a_1y_3^2+a_2y_1y_3+a_3y_2^2+a_4y_1y_2+a_5y_1^2+b_3y_2^2y_3+b_5y_3^3+b_6y_2y_3^2+b_9y_2^3$$\n\\noindent\nsatisfies $b_3b_6-b_5b_9=0$, $a_5\\neq 0$ and at least one of the following conditions: $b_5^2b_9-b_6^3\\neq 0, b_3b_5-b_6^2\\neq 0, b_3^2-b_6b_9\\neq 0$.\n\n\\noindent\nMoreover, note that $\\mathbb{V}_+(\\mathfrak{c})\\backslash\\mathbb{V}_+(\\mathfrak{a})=\\mathbb{V}_+(\\mathfrak{c_1})\\backslash\\mathbb{V}_+(\\mathfrak{a})$, where $\\mathfrak{c}=\\mathfrak{c}_1\\cap \\mathfrak{c}_2$ is the primary decomposition of $\\mathfrak{c}$ and $\\mathfrak{c}_1=\\mathfrak{b}_1+(v_1,v_2b_5-v_3b_6,v_2b_3-v_3b_9)$. Hence, any $K_H$ such that $K_H\\circ H=I^\\perp$ will be of the form $K_H=(L_1,L_2,L_3^2)$, where $L_1,L_2,L_3$ are independent linear forms in $R$ such that $L_3=v_2x_2+v_3x_3$, with $v_2b_5-v_3b_6=v_2b_3-v_3b_9=0$.\n\\end{example}\n\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2,x_3]\\!]$ and $I=(x_1x_2,x_1x_3,x_2x_3,x_2^2,x_3^2-x_1^3)$. Note that $\\operatorname{HF}_A=\\lbrace 1,3,1,1\\rbrace$ and $\\tau(A)=2$. The output provided by our implementation of the algorithm in \\emph{Singular} \\cite{DGPS} is the following:\n{\\tiny{\n\\begin{multicols}{2}\n\\begin{verbatim}\nH;\n-b(10)*x(1)^4+b(9)*x(1)^2*x(2)+\n+b(7)*x(1)*x(2)^2+b(8)*x(2)^3+\n+b(6)*x(1)^2*x(3)+b(1)*x(1)*x(2)*x(3)+\n+b(2)*x(2)^2*x(3+b(3)*x(1)*x(3)^2+\n+b(4)*x(2)*x(3)^2+b(5)*x(3)^3+\n+a(5)*x(1)*x(2)+a(4)*x(2)^2+\n+a(3)*x(1)*x(3)+a(2)*x(2)*x(3)+\n+a(1)*x(3)^2\nradical(b);\n_[1]=b(8)*b(10)\n_[2]=b(7)*b(10)\n_[3]=b(4)*b(10)\n_[4]=b(2)*b(10)\n_[5]=b(1)*b(10)\n_[6]=b(6)*b(8)-b(2)*b(9)\n_[7]=b(7)^2-b(8)*b(9)\n_[8]=b(6)*b(7)-b(1)*b(9)\n_[9]=b(4)*b(7)-b(3)*b(8)\n_[10]=b(3)*b(7)-b(4)*b(9)\n_[11]=b(2)*b(7)-b(1)*b(8)\n_[12]=b(1)*b(7)-b(2)*b(9)\n_[13]=b(4)*b(6)-b(5)*b(9)\n_[14]=b(2)*b(6)-b(4)*b(9)\n_[15]=b(1)*b(6)-b(3)*b(9)\n_[16]=b(4)^2-b(2)*b(5)\n_[17]=b(3)*b(4)-b(1)*b(5)\n_[18]=b(2)*b(4)-b(5)*b(8)\n_[19]=b(1)*b(4)-b(5)*b(7)\n_[20]=b(3)^2-b(5)*b(6)+b(3)*b(10)\n_[21]=b(2)*b(3)-b(5)*b(7)\n_[22]=b(1)*b(3)-b(5)*b(9)\n_[23]=b(2)^2-b(4)*b(8)\n_[24]=b(1)*b(2)-b(3)*b(8)\n_[25]=b(1)^2-b(4)*b(9)\n_[26]=b(5)*b(9)*b(10)\n_[27]=b(3)*b(9)*b(10)\nradical(d);\n_[1]=b(8)*b(10)\n_[2]=b(7)*b(10)\n_[3]=b(4)*b(10)\n_[4]=b(2)*b(10)\n_[5]=b(1)*b(10)\n_[6]=b(6)*b(8)-b(2)*b(9)\n_[7]=b(7)^2-b(8)*b(9)\n_[8]=b(6)*b(7)-b(1)*b(9)\n_[9]=b(4)*b(7)-b(3)*b(8)\n_[10]=b(3)*b(7)-b(4)*b(9)\n_[11]=b(2)*b(7)-b(1)*b(8)\n_[12]=b(1)*b(7)-b(2)*b(9)\n_[13]=b(4)*b(6)-b(5)*b(9)\n_[14]=b(2)*b(6)-b(4)*b(9)\n_[15]=b(1)*b(6)-b(3)*b(9)\n_[16]=b(4)^2-b(2)*b(5)\n_[17]=b(3)*b(4)-b(1)*b(5)\n_[18]=b(2)*b(4)-b(5)*b(8)\n_[19]=b(1)*b(4)-b(5)*b(7)\n_[20]=b(3)^2-b(5)*b(6)+b(3)*b(10)\n_[21]=b(2)*b(3)-b(5)*b(7)\n_[22]=b(1)*b(3)-b(5)*b(9)\n_[23]=b(2)^2-b(4)*b(8)\n_[24]=b(1)*b(2)-b(3)*b(8)\n_[25]=b(1)^2-b(4)*b(9)\n_[26]=b(5)*b(9)*b(10)\n_[27]=b(3)*b(9)*b(10)\n_[28]=a(4)*b(5)*b(10)\n_[29]=a(4)*b(3)*b(10)\n\\end{verbatim}\n\\end{multicols}}}\n\\noindent\n\\emph{Singular} \\cite{DGPS} provides a primary decomposition $\\mathfrak{b}=\\mathfrak{b}_1\\cap \\mathfrak{b}_2\\cap\\mathfrak{b}_3$ such that 
$\\mathbb{V}_+(\\mathfrak{b})\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})=\\mathbb{V}_+(\\mathfrak{b}_2)\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})$. Therefore, we get\n$$MGC(A)=\\mathbb{V}_+(b_1,b_2,b_4,b_7,b_8,b_9,b_3^2-b_5b_6+b_3b_{10})\\backslash\\left(\\mathbb{V}_+(a_4)\\cup\\mathbb{V}_+(b_{10})\\cup\\mathbb{V}_+ (b_3,b_5)\\right).$$\n\\noindent\nin $\\mathbb{P}^{14}$. We can eliminate some of the variables and consider $MGC(A)$ to be the following variety:\n$$MGC(A)=\\mathbb{V}_+(b_3^2-b_5b_6+b_3b_{10})\\backslash\\left(\\mathbb{V}_+(a_4)\\cup\\mathbb{V}_+(b_{10})\\cup\\mathbb{V}_+ (b_3,b_5)\\right)\\subset \\mathbb{P}^8.$$\n\\noindent\nTherefore, any minimal Gorenstein cover is of the form $G=R\/\\operatorname{Ann_R} H$, where\n$$H=a_1y_3^2+a_2y_2y_3+a_3y_1y_3+a_4y_2^2+a_5y_1y_2+b_3y_1y_3^2+b_5y_3^3+b_6y_1^2y_3-b_{10}y_1^4$$\n\\noindent\nsatisfies $b_3^2-b_5b_6+b_3b_{10}=0$, $a_4\\neq 0$, $b_{10}\\neq 0$ and either $b_3\\neq 0$ or $b_5\\neq 0$ (or both).\n\n\\noindent\nMoreover, note that $\\mathbb{V}_+(\\mathfrak{c})\\backslash\\mathbb{V}_+(\\mathfrak{a})=\\mathbb{V}_+(\\mathfrak{c_2})\\backslash\\mathbb{V}_+(\\mathfrak{a})$, where $\\mathfrak{c}=\\mathfrak{c}_1\\cap \\mathfrak{c}_2\\cap \\mathfrak{c}_3$ is the primary decomposition of $\\mathfrak{c}$ and $\\mathfrak{c}_2=\\mathfrak{b}_2+(v_2,v_1b_5-v_3b_3-v_3b_{10},v_1b_3-v_3b_6)$. Hence, any $K_H$ such that $K_H\\circ H=I^\\perp$ will be of the form $K_H=(L_1,L_2,L_3^2)$, where $L_1,L_2,L_3$ are independent linear forms in $R$ such that $L_3=v_1x_1+v_3x_3$, with $v_1b_5-v_3b_3-v_3b_{10}=v_1b_3-v_3b_6=0$.\n\\end{example}\n\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2,x_3]\\!]$ and $I=(x_1^2,x_2^2,x_3^2,x_1x_2)$. Note that $\\operatorname{HF}_A=\\lbrace 1,3,2\\rbrace$ and $\\tau(A)=2$. Doing analogous computations to the previous examples, \\emph{Singular} provides the following variety:\n$$MGC(A)=\\mathbb{P}^7\\backslash\\mathbb{V}_+(b_2^2-b_1b_3)$$\n\\noindent\nThe coordinates of points in $MGC(A)$ are of the form $(a_1:\\dots:a_4:b_1:b_2:b_3:b_4)\\in \\mathbb{P}^7$ and they correspond to a polynomial\n$$H=b_1y_1^2y_3+b_2y_1y_2y_3+b_3y_2^2y_3+b_4y_3^3+a_1y_3^2+a_2y_2^2+a_3y_1y_2+a_4y_1^2$$\n\\noindent\nsuch that $b_2^2-b_1b_3\\neq 0$. Any $G=R\/\\operatorname{Ann_R} H$ is a minimal Gorenstein cover of colength 2 of $A$ and all such covers admit $(x_1,x_2,x_3^2)$ as the corresponding $K_H$.\n\\end{example}\n\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2,x_3,x_4]\\!]$ and $I=(x_1^2,x_2^2,x_3^2,x_4^2,x_1x_2,x_1x_3,x_1x_4,x_2x_3,x_2x_4)$. Note that $\\operatorname{HF}_A=\\lbrace 1,4,1\\rbrace$ and $\\tau(A)=3$. Doing analogous computations to the previous examples, \\emph{Singular} provides the following variety:\n$$MGC(A)=\\mathbb{V}_+(b_6b_{10}-b_9b_{16})\\backslash\\left(\\mathbb{V}_+(d_1)\\cup\\mathbb{V}_+(d_2)\\right)\\subset \\mathbb{P}^{12},$$\n\\noindent\nwhere $d_1=(a_7a_9-a_8^2)$ and $d_2=(b_9^2b_{16}-b_{10}^3,b_6b_9-b_{10}^2,b_6^2-b_{10}b_{16})$. The coordinates of points in $MGC(A)$ are of the form $(a_1:\\dots:a_9:b_6:b_9:b_{10}:b_{16})\\in \\mathbb{P}^{12}$ and they correspond to a polynomial\n$$H=b_{16}y_3^3+b_6y_3^2y_4+b_{10}y_3y_4^2+b_9y_4^3+a_9y_1^2+a_8y_1y_2+a_7y_2^2+$$\n$$+a_6y_1y_3+a_5y_2y_3+a_4y_3^2+a_3y_1y_4+a_2y_2y_4+a_1y_4^2$$\n\\noindent\nsuch that $G=R\/\\operatorname{Ann_R} H$ is a minimal Gorenstein cover of colength 2 of $A$. 
Moreover, any $K_H$ such that $K_H\\circ H=I^\\perp$ will be of the form $K_H=(L_1,L_2,L_3,L_4^2)$, where $L_1,L_2,L_3,L_4$ are independent linear forms in $R$ such that $L_4=v_3x_3+v_4x_4$, with $v_3b_9-v_4b_{10}=v_3b_6-v_4b_{16}=0$.\n\\end{example}\n\n\nAs in the case of colength 1, we now provide a table for the computation times of $MGC(A)$ of all isomorphism classes of local $\\res$-algebras $A$ of length equal or less than 6 such that $\\operatorname{gcl}(A)=2$.\n\n{\\small{\n\\begin{table}[h]\n\\begin{tabular}{|c|c|c|}\n\\hline\n$\\operatorname{HF}(R\/I)$ & $I$ & t(s)\\\\\n\\hline\n$1,3,1 $ & $x_1x_2,x_1x_3,x_1^2,x_2^2,x_3^2$ & 0,42\\\\\n$1,2,2,1 $ & $x_1^2,x_1x_2^2,x_2^4$ & 0,18\\\\\n$1,3,1,1 $ & $x_1x_2,x_1x_3,x_2x_3,x_2^2,x_3^2-x_1^3$ & 3,56\\\\\n$1,3,2 $ & $x_1x_2,x_2x_3,x_3^2,x_2^2-x_1x_3,x_1^3$ & 4,4\\\\\n& $x_1x_2,x_3^2,x_1x_3-x_2x_3,x_1^2+x_2^2-x_1x_3$ & 1254,34\\\\\n& $x_1x_2,x_1x_3,x_2^2,x_3^2,x_1^3$ & 3,33\\\\\n& $x_1x_2,x_1x_3,x_2x_3,x_1^2+x_2^2-x_3^2$ & 4,61\\\\\n& $x_1^2,x_1x_2,x_2x_3,x_1x_3+x_2^2-x_3^2$ & 4,09\\\\\n& $x_1^2,x_1x_2,x_2^2,x_3^2$ & 0,45\\\\\n$1,4,1 $ & $x_1^2,x_2^2,x_3^2,x_4^2,x_1x_2,x_1x_3,x_1x_4,x_2x_3,x_2x_4$ & 242,28\\\\\n\\hline\n\\end{tabular}\n\\medskip\n\\caption[AlMGC2]{Computation times of $MGC(A)$ of local rings $A=R\/I$ with $\\ell(A)\\leq 6$ and $\\operatorname{gcl}(A)=2$.}\n\\label{table:TMGC2}\n\\end{table}}}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}