diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgqxn" "b/data_all_eng_slimpj/shuffled/split2/finalzzgqxn" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgqxn" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\nThis paper is devoted to the question of building an approximate sparse solution of a given convex optimization problem.\nA typical setting is to find an approximate minimum of a real-valued convex function $E$ defined on the Banach space $X$, i.e.\n\\begin{equation}\\label{eq:opt}\n\t\\text{find}\\ \\ x^* = \\mathop{\\operatorname{argmin}}_{x \\in X} E(x).\n\\end{equation}\nWhen optimization is performed over the whole space $X$, it is called an {\\it unconstrained optimization problem}.\nUsually in practice it is desirable to obtain a minimizer that possesses a certain structure or belongs to a given domain $S \\subset X$, in which case problem~\\eqref{eq:opt} becomes a {\\it constrained optimization problem}.\n\nIn particular, it is often preferable that the constructed solution $x^*$ is sparse with respect to a given set of elements $\\mathcal{D} \\subset X$.\nA conventional approach to such a task is to impose an additional $\\ell_1$-regularization on the original problem (see e.g.~\\cite{FNW}) and, instead of~\\eqref{eq:opt}, solve the problem\n\\begin{equation}\\label{eq:opt_reg}\n\t\\text{find}\\ \\ x^* = \\mathop{\\operatorname{argmin}}_{x \\in X} \\big( E(x) + \\lambda \\|x\\|_\\mathcal{D} \\big),\n\\end{equation}\nwhere $\\lambda > 0$ is an appropriate regularization parameter and $\\|\\cdot\\|_\\mathcal{D}$ is the atomic norm with respect to the set $\\mathcal{D}$ (see e.g.~\\cite{chandrasekaran2012convex}), i.e.\n\\begin{equation}\\label{eq:D_norm}\n\t\\|x\\|_\\mathcal{D} := \\inf \\left\\{ \\sum_{g \\in \\mathcal{D}} |c_g| : x = \\sum_{g \\in \\mathcal{D}} c_g \\, g \\right\\}.\n\\end{equation}\nWhile such an approach is quite popular and has its uses, it might not always result in the most appropriate solution since it essentially changes the target function in order to promote sparsity of the solution.\n\nAnother way of obtaining a sparse minimizer (without changing the optimization problem) is to procedurally construct a sequence of minimizers with an increasing support, or, more generally, to design an algorithm that after $m$ iterations provides a point $x_m$ such that $E(x_m)$ is close to the $\\inf_{x \\in S} E(x)$ and that $x_m$ is $m$-sparse with respect to $\\mathcal{D}$, i.e.\n\\[\n\tx_m = \\sum_{j=1}^m c_j \\, g_j\n\t\\ \\ \\text{with}\\ \\ \n\tg_1, \\ldots, g_m \\in \\mathcal{D}\n\t\\ \\ \\text{and}\\ \\ \n\tc_1, \\ldots, c_m \\in \\mathbb{R}.\n\\]\nA wide class of algorithms that fit such requirements is the greedy algorithms in approximation theory, see e.g. \\cite{D}, \\cite{VTbook}.\nA typical problem of greedy approximation is the following.\nLet $X$ be a Banach space with the norm $\\|\\cdot\\|$ and let $\\mathcal{D}$ be a dictionary, i.e. 
a dense set of semi-normalized elements of $X$.\nThe goal of a greedy algorithm is to obtain a sparse (with respect to the dictionary $\\mathcal{D}$) approximation of a given element $f \\in X$.\nGreedy algorithms are iterative by design and generally after $m$ iterations a greedy algorithm constructs an $m$-term linear combination with respect to $\\mathcal{D}$ that approximates the element $f$.\n\nIt is easy to reframe a greedy approximation problem as a convex optimization problem.\nIndeed, for a given dictionary $\\mathcal{D}$ consider the set of all $m$-term linear combinations with respect to $\\mathcal{D}$ ($m$-sparse with respect to $\\mathcal{D}$ elements):\n\\[\n\t\\Sigma_m(\\mathcal{D}) := \\left\\{ x \\in X: x = \\sum_{j=1}^m c_j \\, g_j, \\ \\ g_1, \\ldots, g_m \\in \\mathcal{D} \\right\\}.\n\\]\nGreedy algorithms in approximation theory are designed to provide a simple way to build good approximants of $f$ from $\\Sigma_m(\\mathcal{D})$, hence the problem of greedy approximation is the following:\n\\begin{equation}\\label{eq:opt_ga}\n\t\\text{find}\\ \\ x_m = \\mathop{\\operatorname{argmin}}_{x \\in \\Sigma_m} \\|f - x\\|.\n\\end{equation}\nClearly, problem~\\eqref{eq:opt_ga} is a constrained optimization problem of the real-valued convex function $E(x) := \\|f - x\\|$ over the manifold $\\Sigma_m(\\mathcal{D}) \\subset X$.\n\nAt first glance the settings of approximation and optimization problems appear to be very different since in approximation theory our task is to find a sparse approximation of a given element $f \\in X$, while in optimization theory we want to find an approximate sparse minimizer of a given target function $E : X \\to \\mathbb{R}$ (for instance, energy function or loss function).\nHowever it is now well understood that similar techniques can be used for solving both problems.\nNamely, it was shown in~\\cite{VT140} and in follow up papers (see, for instance, \\cite{DT}, \\cite{GP}, \\cite{NP}, \\cite{VT141}, and~\\cite{VT148}) how methods developed in nonlinear approximation theory (greedy approximation techniques in particular) can be adjusted to find an approximate sparse (with respect to a given dictionary $\\mathcal{D}$) solution to the optimization problem~\\eqref{eq:opt}.\nMoreover, there is an increasing interest in building such sparse approximate solutions using different greedy-type algorithms, for example, \\cite{BD1}, \\cite{BD2}, \\cite{chandrasekaran2012convex}, \\cite{Cl}, \\cite{Ja2}, \\cite{JS}, \\cite{SSZ}, \\cite{TRD}, and~\\cite{Z}.\n\nWith an established framework it is straightforward to adjust a greedy strategy to a context of convex optimization; however, each of these modified techniques requires an individual analysis to guarantee a desirable performance.\nOn the other hand, it is known that the behavior of a greedy method is largely determined by the underlying geometry of the problem setting.\nIn particular, in~\\cite{VT165} we present a unified way of analyzing different greedy-type algorithms in Banach spaces.\nSpecifically, we define the class of Weak Biorthogonal Greedy Algorithms ($\\mathcal{WBGA}$) and prove convergence and rate of convergence results for algorithms from this class.\nSuch an approach allows for a simultaneous analysis of a wide range of seemingly different greedy algorithms based on the smoothness characteristic of the problem.\n\nIn this paper we adopt the approach of unified analysis for the setting of convex minimization.\nIn Section~\\ref{sec:wbga} we adjust the class $\\mathcal{WBGA}$ of algorithms 
designed for greedy approximation in Banach spaces and derive the class of Weak Biorthogonal Greedy Algorithms for convex optimization ($\\mathcal{WBGA}$(co)), which consists of greedy algorithms designed for convex optimization.\nWe prove convergence and rate of convergence results for algorithms from the class $\\mathcal{WBGA}$(co) in Theorems~\\ref{thm:wbga_conv} and~\\ref{thm:wbga_rate} respectively.\nThus, results in Section~\\ref{sec:wbga} address two important characteristics of an algorithm~--- convergence and rate of convergence.\n\nThe rate of convergence is an essential characteristic of an algorithm, though in certain practical applications resistance to various perturbations might be of equal importance.\nA systematic study of the stability of greedy algorithms in Banach spaces was started in~\\cite{T7} and further advanced in~\\cite{De}, where necessary and sufficient conditions for the convergence of a certain algorithm were obtained.\nA transition to the optimization setting was performed in~\\cite{DT} and~\\cite{VT148}, where stability results for greedy-type algorithms for convex optimization were obtained.\nIn Section~\\ref{sec:awbga} we discuss the stability of the algorithms from the $\\mathcal{WBGA}$(co) by analyzing convergence properties of the algorithms from $\\mathcal{WBGA}$(co) under the assumption of imprecise calculations in the steps of the algorithms.\nWe call such algorithms {\\it approximate greedy algorithms} or {\\it algorithms with errors}.\nWe prove convergence and rate of convergence results for the Weak Biorthogonal Greedy Algorithms with errors, which describes the stability of the algorithms from the class $\\mathcal{WBGA}$(co)~--- an important characteristic that is crucial for practical implementation.\n\nSince theoretical analysis cannot always predict the practical behavior of an algorithm, it is of interest to observe its actual implementation for particular problems.\nIn Section~\\ref{sec:numerics} we demonstrate the performance of some algorithms from the class $\\mathcal{WBGA}$(co) by employing them to solve various minimization problems.\nAdditionally, we compare these algorithms with a conventional method of obtaining sparse minimizers~--- optimization with $\\ell_1$-regularization~\\eqref{eq:opt_reg}.\nLastly, in Sections~\\ref{sec:proofs_wbga} and~\\ref{sec:proofs_awbga} we prove the results stated in Sections~\\ref{sec:wbga} and~\\ref{sec:awbga} respectively.\n\n\n\n\\section{Weak Biorthogonal Greedy Algorithms for Convex Optimization}\\label{sec:wbga}\nIn this section we introduce and discuss the class of Weak Biorthogonal Greedy Algorithms for convex optimization, denoted as $\\mathcal{WBGA}$(co).\nWe begin by recalling the relevant terminology.\n\n\\subsection{Preliminaries}\nLet $X$ be a real Banach space with the norm $\\|\\cdot\\|$.\nWe say that a set of elements $\\mathcal{D}$ from $X$ is a dictionary if each $g \\in \\mathcal{D}$ has the norm bounded by one and $\\mathcal{D}$ is dense in $X$, that is\n\\[\n\t\\|g\\| \\le 1\n\t\\ \\ \\text{for any}\\ \\ \n\tg \\in \\mathcal{D},\n\t\\ \\ \\text{and}\\ \\\n\t\\overline{\\operatorname{span}\\mathcal{D}} = X.\n\\]\nFor notational convenience in this paper we consider {\\it symmetric dictionaries}, i.e. 
such that\n\\[\n\tg\\in \\mathcal{D} \\ \\ \\text{implies} \\ \\ -g \\in \\mathcal{D}.\n\\]\nWe denote the closure (in $X$) of the convex hull of $\\mathcal{D}$ by $\\mathcal{A}_1(\\mathcal{D})$:\n\\begin{equation}\\label{eq:A_1(D)}\n\t\\mathcal{A}_1(\\mathcal{D}) := \\overline{\\mathrm{conv} \\mathcal{D}},\n\\end{equation}\nwhich is the standard notation in relevant greedy approximation literature. \n\nThe modulus of smoothness $\\rho(E,S,u)$ of a function $E : X \\to \\mathbb{R}$ on a set $S \\subset X$ is defined as\n\\begin{equation}\\label{eq:mod_smt}\n\t\\rho(E,S,u) := \\frac{1}{2} \\sup_{x\\in S, \\|y\\|=1} \\Big| E(x + uy) + E(x - uy) - 2E(x) \\Big|.\n\\end{equation}\nWe note that, in comparison to the modulus of smoothness of a norm (see, for instance,~\\cite[Part~3]{beauzamy2011introduction}), the modulus of smoothness of a function additionally depends on the chosen set $S \\subset X$.\nThat is because a norm is a positive homogeneous function, thus its smoothness on the whole space is determined by its smoothness on the unit sphere, which is not the case for a general function on a Banach space.\n\nThe function $E$ is uniformly smooth on $S \\subset X$ if $\\rho(E,S,u) = o(u)$ as $u \\to 0$.\nWe say that the modulus of smoothness $\\rho(E,S,u)$ is of power type $1 \\leq q \\leq 2$ if $\\rho(E,S,u) \\leq \\gamma u^q$ for some $\\gamma > 0$.\nNote that the class of functions with the modulus of smoothness of a nontrivial power type is completely different from the class of uniformly smooth Banach spaces with the norms of a nontrivial power type, since any uniformly smooth norm is not uniformly smooth as a function on any set containing $0$.\nHowever, it is shown in~\\cite{borwein2009uniformly} that if a norm $\\|\\cdot\\|$ has the modulus of smoothness of power type $q \\in [1,2]$, then the function $E(\\cdot) := \\|\\cdot\\|^q$ has the modulus of smoothness $\\rho(E,S,u)$ of power type $q$ for any set $S \\subset X$.\nIn particular, it implies (see e.g.~\\cite[Lemma B.1]{donahue1997rates}) that for any $1 \\le p < \\infty$ the function $E_p : L_p \\to \\mathbb{R}$ defined as\n\\[\n\tE_p(x) = \\|x\\|_{L_p}^p\n\\]\nhas the modulus of smoothness that satisfies\n\\[\n\\rho(E_p,L_p,u) \\le\n\t\\left\\{\\begin{array}{ll}\n\t\t\\frac{1}{p} u^p & 1 \\le p \\le 2,\n\t\t\\\\\n\t\t\\frac{p-1}{2} u^2 & 2 \\le p < \\infty,\n\t\\end{array}\\right.\n\\]\ni.e. $\\rho(E_p,L_p,u)$ is of power type $\\min\\{p,2\\}$.\n\\\\\nA typical smoothness assumption in convex optimization is of the form\n\\[\n\t|E(x + uy) - E(x) - u\\<E'(x), y\\>| \\le Cu^2\n\\]\nwith some constant $C > 0$ and any $u \\in \\mathbb{R}$, $x \\in X$, $\\|y\\| = 1$.\nIn terms of the modulus of smoothness~\\eqref{eq:mod_smt} such an assumption corresponds to the case $\\rho(E,X,u) \\le Cu^2 \/ 2$, i.e. that the modulus of smoothness of $E$ is of power type $2$.\n\nThroughout the paper we assume that the target function $E$ is Fr{\\'e}chet-differentiable, i.e. 
that at any $x \\in X$ there is a bounded linear functional $E'(x) : X \\to \\mathbb{R}$ such that\n\\[\n\t\\sup_{\\|y\\|=1} \\Big( \\lim_{u \\to 0} \\frac{E(x + uy) - E(x)}{u} - \\< E'(x), y \\> \\Big) = 0.\n\\]\nThen the convexity of $E$ implies that for any $x,y \\in X$\n\\begin{equation}\\label{eq:E'_conv1}\n\tE(y) \\ge E(x) + \\<E'(x), y-x\\>,\n\\end{equation}\nor, equivalently,\n\\begin{equation}\\label{eq:E'_conv2}\n\tE(x) - E(y) \\le \\<E'(x), x-y\\> = \\<-E'(x),y-x\\>.\n\\end{equation}\n\n\\begin{Remark}\nThe condition of Fr{\\'e}chet-differentiability is not necessary and can be relaxed by considering support functionals in place of the derivative of $E$, as is done in~\\cite[Chapter~5]{dereventsov2017convergence}.\nAlthough the existence of support functionals is guaranteed by the convexity of the target function, we additionally impose the assumption of differentiability for the convenience of presentation.\n\\end{Remark}\n\n\n\\subsection{Weak Biorthogonal Greedy Algorithms}\nTypically in greedy approximation one has to perform a greedy selection from a given dictionary $\\mathcal{D}$, which might not always be possible.\nIn order to guarantee the feasibility of algorithms, it is conventional to perform a {\\it weak} greedy step where the greedy search is relaxed.\nSuch relaxations are represented by a given sequence $\\tau := \\{t_m\\}_{m=1}^\\infty$, referred to as a {\\it weakness sequence}.\n\nFor a convex Fr{\\'e}chet-differentiable target function $E : X \\to \\mathbb{R}$ we define the following class of greedy algorithms.\n\\\\[.5em]\\noindent\n{\\bf Weak Biorthogonal Greedy Algorithms ($\\boldsymbol{\\mathcal{WBGA}}$(co)).\\\\}\nWe say that an algorithm belongs to the class $\\mathcal{WBGA}$(co) with a weakness sequence $\\tau = \\{t_m\\}_{m=1}^\\infty$, $t_m\\in[0,1]$, if sequences of approximators $\\{G_m\\}_{m=0}^\\infty$ and selected elements $\\{\\varphi_m\\}_{m=1}^\\infty$ of the dictionary $\\mathcal{D}$ satisfy the following conditions at every iteration $m \\ge 1$:\n\\begin{enumerate}[label=\\bf(\\arabic*), leftmargin=.5in]\n\t\\item\\label{wbga_gs}\n\t\tGreedy selection: ${\\displaystyle \\<-E'(G_{m-1}), \\varphi_m\\> \\ge t_m \\sup_{\\varphi\\in\\mathcal{D}} \\< -E'(G_{m-1}), \\varphi \\>}$;\n\t\\item\\label{wbga_er}\n\t\tError reduction: ${\\displaystyle E(G_m) \\le \\inf_{\\lambda\\ge0} E(G_{m-1} + \\lambda\\varphi_m)}$;\n\t\\item\\label{wbga_bo}\n\t\tBiorthogonality: ${\\displaystyle \\<E'(G_m), G_m\\> = 0}$.\n\\end{enumerate}\n\n\\smallskip\\noindent\nWe assume that for a given target function $E : X \\to \\mathbb{R}$ the set\n\\[\n\tD = D(E) := \\big\\{ x \\in X : E(x) \\le E(0) \\big\\} \\subset X\n\\]\nis bounded.\nCoupled with the assumption that $E$ is convex and Fr{\\'e}chet-differentiable, boundedness of $D$ guarantees that\n\\[\n\t\\inf_{x \\in X} E(x) = \\inf_{x \\in D} E(x) > -\\infty\n\t\\ \\ \\text{and}\\ \\\n\t\\mathop{\\operatorname{argmin}}_{x \\in X} E(x) = \\mathop{\\operatorname{argmin}}_{x \\in D} E(x) \\in D,\n\\]\ni.e. 
there is a nontrivial and attainable minimum of $E$.\nThen by condition~\\ref{wbga_er} the sequence of $m$-sparse approximants $\\{G_m\\}_{m=0}^\\infty$ constructed by an algorithm from the $\\mathcal{WBGA}$(co) satisfies the relation\n\\[\n\tE(0) = E(G_0) \\ge E(G_1) \\ge E(G_2) \\ge \\dots,\n\\]\nwhich guarantees that $G_m \\in D$ for all $m \\ge 0$.\n\n\\begin{Remark}\nIn the case $E(x) := \\|f - x\\|^q$ with any $f\\in X$ and $q \\ge 1$, the class $\\mathcal{WBGA}$(co) coincides with the class $\\mathcal{WBGA}$ from the approximation theory, which is introduced and analyzed in~\\cite{VT165}.\n\\end{Remark}\n\n\n\\subsection{Examples of algorithms from the $\\mathcal{WBGA}$(co)}\\label{sec:wbga_ga}\nIn this section we briefly overview a few particular algorithms from the class $\\mathcal{WBGA}$(co) that will be utilized in the numerical experiments presented in Section~\\ref{sec:numerics}.\nBy $\\tau := \\{t_m\\}_{m=1}^\\infty$ we denote a weakness sequence, i.e. a given sequence of non-negative numbers $t_m \\le 1$, $m = 1,2,3,\\dots$.\n\nWe first define the Weak Chebyshev Greedy Algorithm for convex optimization that is introduced and studied in~\\cite{VT140}.\n\\\\[.5em]\\noindent\n{\\bf Weak Chebyshev Greedy Algorithm (WCGA(co)).\\\\}\nSet $G^c_0 = 0$ and for each $m \\ge 1$ perform the following steps:\n\\begin{enumerate}\n\t\\item Take any $\\varphi^{c}_m \\in \\mathcal{D}$ satisfying\n\t\t\\[\n\t\t\t\\< -E'(G^c_{m-1}), \\varphi^c_m \\> \\ge t_m \\sup_{\\varphi\\in\\mathcal{D}} \\< -E'(G^c_{m-1}), \\varphi \\>;\n\t\t\\]\n\t\\item Denote $\\Phi_m^c = \\operatorname{span} \\{\\varphi^c_k\\}_{k=1}^m$ and find $G_m^c \\in \\Phi_m^c$ such that\n\t\t\\[\n\t\t\tE(G_m^c) = \\inf_{G \\in \\Phi_m^c} E(G).\n\t\t\\]\n\\end{enumerate}\n\n\\smallskip\\noindent\nAnother algorithm, which utilizes a simpler approach to updating the approximant is the Weak Greedy Algorithm with Free Relaxation for convex optimization (see~\\cite{VT140}).\n\\\\[.5em]\\noindent\n{\\bf Weak Greedy Algorithm with Free Relaxation (WGAFR(co)).\\\\}\nSet $G^f_0 = 0$ and for each $m \\ge 1$ perform the following steps:\n\\begin{enumerate}[label=\\bf(\\arabic*), leftmargin=.5in]\n\t\\item Take any $\\varphi^f_m \\in \\mathcal{D}$ satisfying\n\t\t\\[\n\t\t\t\\< -E'(G^f_{m-1}), \\varphi^f_m \\> \\ge t_m \\sup_{\\varphi\\in\\mathcal{D}} \\< -E'(G^f_{m-1}), \\varphi \\>;\n\t\t\\]\n\t\\item Find $\\omega_m \\in \\mathbb{R}$ and $ \\lambda_m \\in \\mathbb{R}$ such that\n\t\t\\[\n\t\t\tE\\big( (1-\\omega_m) G^f_{m-1} + \\lambda_m \\varphi^f_m \\big)\n\t\t\t= \\inf_{ \\lambda, \\omega \\in \\mathbb{R}} E\\big( (1 - \\omega) G^f_{m-1} + \\lambda \\varphi^f_m \\big)\n\t\t\\]\n\t\tand define $G^f_m = (1 - \\omega_m) G^f_{m-1} + \\lambda_m \\varphi^f_m$.\n\\end{enumerate}\n\n\\smallskip\\noindent\nThe next algorithm~--- the Rescaled Weak Relaxed Greedy Algorithm for convex optimization~--- is an adaptation of its counterpart from the approximation theory (see~\\cite{VT165}) that can be viewed as a generalization of the Rescaled Pure Greedy Algorithm, introduced in~\\cite{Pet} and adapted for convex optimization in~\\cite{GP}.\n\\\\[.5em]\\noindent\n{\\bf Rescaled Weak Relaxed Greedy Algorithm (RWRGA(co)).\\\\}\nSet $G^r_0 = 0$ and for each $m \\ge 1$ perform the following steps:\n\\begin{enumerate}\n\t\\item Take any $\\varphi^r_m \\in \\mathcal{D}$ satisfying\n\t\t\\[\n\t\t\t\\< -E'(G^r_{m-1}), \\varphi^r_m \\> \\ge t_m \\sup_{\\varphi\\in\\mathcal{D}} \\< -E'(G^r_{m-1}), \\varphi \\>;\n\t\t\\]\n\t\\item Find $\\lambda_m \\ge 0$ such 
that\n\t\t\\[\n\t\t\tE(G^r_{m-1} + \\lambda_m \\varphi^r_m) = \\inf_{\\lambda \\ge 0} E(G^r_{m-1} + \\lambda \\varphi^r_m);\n\t\t\\]\n\t\\item Find $\\mu_m \\in \\mathbb{R}$ such that\n\t\t\\[\n\t\t\tE\\big( \\mu_m (G^r_{m-1} + \\lambda_m \\varphi^r_m) \\big)\n\t\t\t= \\inf_{\\mu \\in \\mathbb{R}} E\\big( \\mu (G^r_{m-1} + \\lambda_m \\varphi^r_m) \\big)\n\t\t\\]\n\t\tand define $G^r_m = \\mu_m (G^r_{m-1} + \\lambda_m \\varphi^r_m)$.\n\\end{enumerate}\n\n\\begin{Proposition}\\label{prp:ga_wbga}\nThe WCGA(co), the WGAFR(co), and the RWRGA(co) belong to the class $\\mathcal{WBGA}$(co).\n\\end{Proposition}\n\n\n\\subsection{Convergence results for the $\\mathcal{WBGA}(co)$}\nIn this section we state the results related to convergence and the rate of convergence for algorithms from the class $\\mathcal{WBGA}$(co).\n\nOur setting of an infinite dimensional Banach space makes the formulation of convergence results nontrivial, and thus we require a special sequence which is defined for a given modulus of smoothness $\\rho(u) := \\rho(E,D,u)$ and a given weakness sequence $\\tau = \\{t_m\\}_{m=1}^\\infty$.\n\nLet $E : X \\to \\mathbb{R}$ be a convex uniformly smooth function, then $\\rho(u) := \\rho(E,D,u) : \\mathbb{R} \\to \\mathbb{R_+}$ is an even convex function.\nAssume that $\\rho(u)$ has the property $\\rho(1\/\\theta_0) \\ge 1$ for some $\\theta_0 \\in (0,1]$ and\n\\[\n\t\\lim_{u\\to 0} \\rho(u)\/u = 0.\n\\]\nNote that assumptions on uniform smoothness of $E$ and boundedness of domain $D \\subset X$ guarantee the above properties.\nThen for a given $0 < \\theta \\le \\theta_0$ define $\\xi_m := \\xi_m(\\rho,\\tau,\\theta)$ as the solution of the equation\n\\begin{equation}\\label{eq:theta}\n\t\\rho(u) = \\theta t_m u.\n\\end{equation}\nNote that conditions on $\\rho(u)$ imply that the function\n\\[\n\ts(u) := \\left\\{\\begin{array}{ll}\n\t\t\\rho(u)\/u, & u \\neq 0\n\t\t\\\\\n\t\t0, & u = 0\n\t\\end{array}\\right.\n\\]\nis continuous and increasing on $[0,\\infty)$ with $s(1\/\\theta_0) \\ge \\theta_0$.\nThus equation~\\eqref{eq:theta} has the unique solution $\\xi_m = s^{-1}(\\theta t_m)$ such that $0 < \\xi_m \\le 1\/\\theta_0$.\n\n\\noindent\nWe now formulate our main convergence result for the $\\mathcal{WBGA}$(co).\n\\begin{Theorem}\\label{thm:wbga_conv}\nLet $E$ be a uniformly smooth on $D \\subset X$ convex function with the modulus of smoothness $\\rho(E,D,u)$.\nAssume that a sequence $\\tau := \\{t_m\\}_{m=1}^\\infty$ satisfies the condition that for any $\\theta \\in(0,\\theta_0]$ we have\n\\[\n\t\\sum_{m=1}^\\infty t_m \\xi_m(\\rho,\\tau,\\theta) = \\infty.\n\\]\nThen for any algorithm from the class $\\mathcal{WBGA}$(co) we have\n\\[\n\t\\lim_{m\\to\\infty} E(G_m) = \\inf_{x\\in D} E(x).\n\\]\n\\end{Theorem}\n\n\\noindent\nHere are two simple corollaries of Theorem~\\ref{thm:wbga_conv}.\n\\begin{Corollary}\nLet $E$ be a uniformly smooth on $D \\subset X$ convex function.\nThen any algorithm from the class $\\mathcal{WBGA}$(co) with a constant weakness sequence $\\tau = t \\in (0,1]$ converges, i.e.\n\\[\n\t\\lim_{m\\to\\infty} E(G_m) = \\inf_{x\\in D} E(x).\n\\]\n\\end{Corollary}\n\n\\begin{Corollary}\nLet $E$ be a convex function with the modulus of smoothness of power type $1 < q \\le 2$, that is, $\\rho(E,D,u) \\le \\gamma u^q$.\nLet a sequence $\\tau := \\{t_m\\}_{m=1}^\\infty$, $t_m \\in (0,1]$ for $m = 1,2,3,\\ldots$ be such that \n\\[\n\t\\sum_{m=1}^\\infty t_m^p = \\infty, \\ \\ p = \\frac{q}{q-1}.\n\\]\nThen any algorithm from the class $\\mathcal{WBGA}$(co) with the 
weakness sequence $\\tau$ converges, i.e.\n\\[\n\t\\lim_{m \\to \\infty} E(G_m) = \\inf_{x\\in D} E(x).\n\\]\n\\end{Corollary}\n\n\\noindent\nWe now proceed to the rate of convergence estimates, which are of interest in both finite dimensional and infinite dimensional settings.\nA typical assumption in this regard is formulated in terms of the convex hull $\\mathcal{A}_1(\\mathcal{D})$ of the dictionary $\\mathcal{D}$, defined by~\\eqref{eq:A_1(D)}.\n\\begin{Theorem}\\label{thm:wbga_rate}\nLet $E$ be a convex function with the modulus of smoothness of power type $1 < q \\le 2$, that is, $\\rho(E,D,u) \\le \\gamma u^q$.\nTake an element $f^\\epsilon \\in D$ and a number $\\epsilon \\ge 0$ such that\n\\[\n\tE(f^\\epsilon) \\le \\inf_{x\\in D} E(x) + \\epsilon, \\ \\ f^\\epsilon\/A(\\epsilon) \\in \\mathcal{A}_1(\\mathcal{D})\n\\]\nwith some number $A(\\epsilon) \\ge 1$.\nThen for any algorithm from the class $\\mathcal{WBGA}$(co) we have\n\\[\n\tE(G_m) - \\inf_{x\\in D} E(x) \\le \\max\\left\\{ 2\\epsilon, C(q,\\gamma) A(\\epsilon)^q \\left(C(E,q,\\gamma) + \\sum_{k=1}^m t_k^p\\right)^{1-q} \\right\\},\n\\]\nwhere $p = q\/(q-1)$.\n\\end{Theorem}\n\n\\begin{Corollary}\nLet $E$ be a convex function with the modulus of smoothness of power type $1 < q \\le 2$, that is, $\\rho(E,D,u) \\le \\gamma u^q$.\nIf $\\operatorname{argmin}_{x \\in D} E(x) \\in \\mathcal{A}_1(\\mathcal{D})$ then for any algorithm from the class $\\mathcal{WBGA}$(co) we have\n\\[\n\tE(G_m) - \\inf_{x \\in D} E(x) \\le C(q,\\gamma) \\left(C(E,q,\\gamma) + \\sum_{k=1}^m t_k^p\\right)^{1-q},\n\\]\nwhere $p = q\/(q-1)$.\n\\end{Corollary}\n\n\\begin{Remark}\nWhile the results stated in this section are known for the WCGA(co) and the WGAFR(co) (see~\\cite{VT140}), they are novel for the RWRGA(co).\n\\end{Remark}\n\n\n\n\\section{Weak Biorthogonal Greedy Algorithms with errors for Convex Optimization}\\label{sec:awbga}\nIn this section we address the question of the stability of algorithms from the class $\\mathcal{WBGA}$(co) by introducing the wider class $\\mathcal{WBGA}(\\Delta,\\text{co})$, which allows for imprecise calculations in the realization of algorithms.\nSuch an approach is of a practical interest since computational inaccuracies often occur naturally in applications.\nTo account for imprecise computations we introduce a sequence $\\Delta := \\{\\delta_m, \\epsilon_m\\}_{m=1}^\\infty$, where $\\delta_m \\in [0,1]$ and $\\epsilon_m \\ge 0$ for $m = 1,2,3,\\dots$, that represents the allowed inaccuracies in the steps of the algorithms.\nIn accordance with the conventional notation (see e.g.~\\cite{gribonval2001approximate}, \\cite{galatenko2003convergence}), we refer to a given sequence $\\Delta := \\{\\delta_m, \\epsilon_m\\}_{m=1}^\\infty$ as an {\\it error sequence}.\n\nFor a convex Fr{\\'e}chet-differentiable target function $E : X \\to \\mathbb{R}$ we define the following class of greedy algorithms with errors.\n\\\\[.5em]\\noindent\n{\\bf Weak Biorthogonal Greedy Algorithms with errors ($\\boldsymbol{\\mathcal{WBGA}(\\Delta,\\text{co})}$).\\\\} \nWe say that an algorithm belongs to the class $\\mathcal{WBGA}(\\Delta,\\text{co})$ with a weakness sequence $\\tau = \\{t_m\\}_{m=1}^\\infty$, $t_m\\in[0,1]$ and an error sequence $\\Delta = \\{\\delta_m,\\epsilon_m\\}_{m=1}^\\infty$, $\\delta_m\\in[0,1], \\epsilon_m\\ge0$, if sequences of approximators $\\{G_m\\}_{m=0}^\\infty$ and selected elements $\\{\\varphi_m\\}_{m=1}^\\infty$ of the dictionary $\\mathcal{D}$ satisfy the following conditions at every iteration $m 
\\ge 1$:\n\\begin{enumerate}[label=\\bf(\\arabic*), leftmargin=.5in]\n\t\\item\\label{awbga_gs}\n\t\tGreedy selection: ${\\displaystyle \\<-E'(G_{m-1}), \\varphi_m\\> \\ge t_m \\sup_{\\varphi\\in\\mathcal{D}} \\<-E'(G_{m-1}), \\varphi\\>}$;\n\t\\item\\label{awbga_er}\n\t\tError reduction: ${\\displaystyle E(G_m) \\le \\inf_{\\lambda\\ge0} E(G_{m-1} + \\lambda\\varphi_m) + \\delta_m}$;\n\t\\item\\label{awbga_bo}\n\t\tBiorthogonality: ${\\displaystyle |\\<E'(G_m), G_m\\>| \\le \\epsilon_m}$;${\\displaystyle\\phantom{\\inf_\\lambda}}$\n\t\\item\\label{awbga_bd}\n\t\tBoundedness: ${\\displaystyle E(G_{m}) \\le E(0) + C_0}$.\n\\end{enumerate}\n\n\\smallskip\\noindent\nNote that in addition to conditions~\\ref{wbga_gs}--\\ref{wbga_bo} from the definition of the class $\\mathcal{WBGA}$(co), for the $\\mathcal{WBGA}(\\Delta,\\text{co})$ we require the boundedness condition~\\ref{awbga_bd} to account for the magnitude of allowed errors $\\Delta$.\nIn particular, if the error sequence $\\Delta$ is summable, i.e. $\\sum_{m=1}^\\infty \\delta_m < \\infty$, then condition~\\ref{awbga_bd} follows directly from~\\ref{awbga_er} with $C_0 = \\sum_{m=1}^\\infty \\delta_m$.\n\\\\\nMoreover, we assume that the set\n\\[\n\tD \\subset D_1 := \\{x \\in X : E(x) \\le E(0) + C_0\\} \\subset X,\n\\]\nwhere $C_0$ is the constant from condition~\\ref{awbga_bd}, is bounded.\nThen condition~\\ref{awbga_bd} guarantees that $G_m \\in D_1$ for all $m \\ge 0$ for any algorithm from the $\\mathcal{WBGA}(\\Delta,\\text{co})$.\n\n\\begin{Remark}\nIn the error reduction condition~\\ref{awbga_er} from the definition of the class $\\mathcal{WBGA}(\\Delta,\\text{co})$ the infimum is taken over all $\\lambda \\ge 0$.\nIn order to simplify this problem, one can consider a class wider than the $\\mathcal{WBGA}(\\Delta,co)$~--- the class $\\mathcal{WBGA}(\\Delta,[0,1],co)$ of algorithms satisfying conditions~\\ref{awbga_gs}, \\ref{awbga_bo}, \\ref{awbga_bd}, and the following condition instead of~\\ref{awbga_er}:\n\\[\n\t\\text{{\\bf (2')} {\\rm Restricted error reduction: }}\n\tE(G_m) \\le \\inf_{\\lambda\\in[0,1]} E(G_{m-1} + \\lambda\\varphi_m) + \\delta_m.\n\\]\nThen finding such $\\lambda\\in[0,1]$ is a line search problem, which is known to be a simple one-dimensional convex optimization problem (see e.g.~\\cite{boyd2004convex}, \\cite{N}).\n\\end{Remark}\n\n\n\\subsection{Examples of algorithms from the $\\mathcal{WBGA}(\\Delta,co)$}\\label{sec:awbga_ga}\nIn this section we briefly overview particular algorithms from the class $\\mathcal{WBGA}(\\Delta,\\text{co})$ that correspond to the approximate versions of the algorithms considered in Section~\\ref{sec:wbga_ga}.\nDenote by $\\tau := \\{t_m\\}_{m=1}^\\infty$ and $\\Delta := \\{\\delta_m,\\epsilon_m\\}_{m=1}^\\infty$ a weakness sequence and an error sequence respectively, i.e. given sequences of numbers $t_m \\in [0,1]$, $\\delta_m \\in [0,1]$, and $\\epsilon_m \\ge 0$ for $m = 1,2,3,\\ldots$.\n\nWe begin with the Weak Chebyshev Greedy Algorithm with errors for convex optimization. 
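\nIts formal definition, as well as those of the other two algorithms considered here, is given below; before stating them, we illustrate informally how the relaxed conditions might be realized in computations.\nThe following sketch in Python (the language used for the numerical experiments in Section~\\ref{sec:numerics}) performs a single iteration consisting of a greedy selection followed by an inexact line search in the spirit of conditions~\\ref{awbga_gs} and {\\bf (2')}.\nThe finite-dimensional setting, the function names, and the treatment of the tolerances are our own simplifying assumptions; the sketch is only an illustration and not the implementation of any of the algorithms below.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize_scalar\n\ndef greedy_step_with_errors(E, grad_E, D, G_prev):\n    # One illustrative iteration on R^n:\n    #   E      - the convex target function\n    #   grad_E - its gradient, playing the role of E'\n    #   D      - a finite symmetric dictionary (list of numpy arrays)\n    #   G_prev - the current approximant G_{m-1}\n    g = -grad_E(G_prev)\n    # Picking a maximizer of <-E'(G_{m-1}), phi> over D satisfies the\n    # greedy selection condition (1) for any t_m in (0,1].\n    idx = int(np.argmax([g @ phi for phi in D]))\n    phi_m = D[idx]\n    # Bounded line search over lambda in [0,1]; the solver tolerance\n    # plays the role of the admissible error delta_m in condition (2').\n    res = minimize_scalar(lambda lam: E(G_prev + lam * phi_m),\n                          bounds=(0.0, 1.0), method='bounded')\n    return G_prev + res.x * phi_m, phi_m\n\\end{verbatim}\nThe three algorithms below follow this pattern and differ only in how the approximant is updated after the greedy selection step.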
\n\\\\[.5em]\\noindent\n{\\bf Weak Chebyshev Greedy Algorithm with errors (WCGA($\\Delta,\\text{co}$)).\\\\}\nSet $G^c_0 = 0$ and for each $m \\ge 1$ perform the following steps:\n\\begin{enumerate}\n\t\\item Take any $\\varphi^{c}_m \\in \\mathcal{D}$ satisfying\n\t\t\\[\n\t\t\t\\< -E'(G^c_{m-1}), \\varphi^c_m \\> \\ge t_m \\sup_{\\varphi\\in\\mathcal{D}} \\< -E'(G^c_{m-1}), \\varphi \\>;\n\t\t\\]\n\t\\item Denote $\\Phi_m^c = \\operatorname{span} \\{\\varphi^c_k\\}_{k=1}^m$ and find $G_m^c \\in \\Phi_m^c$ such that\n\t\t\\[\n\t\t\tE(G_m^c) \\le \\inf_{G \\in \\Phi_m^c} E(G) + \\delta_m.\n\t\t\\]\n\\end{enumerate}\n\n\\smallskip\\noindent\nNext, we state the Weak Greedy Algorithm with Free Relaxation and errors for convex optimization, introduced and studied in~\\cite{DT}.\n\\\\[.5em]\\noindent\n{\\bf Weak Greedy Algorithm with Free Relaxation and errors (WGAFR($\\Delta,\\text{co}$)).\\\\}\nSet $G^f_0 = 0$ and for each $m \\ge 1$ perform the following steps:\n\\begin{enumerate}\n\t\\item Take any $\\varphi^f_m \\in \\mathcal{D}$ satisfying\n\t\t\\[\n\t\t\t\\< -E'(G^f_{m-1}), \\varphi^f_m \\> \\ge t_m \\sup_{\\varphi\\in\\mathcal{D}} \\< -E'(G^f_{m-1}), \\varphi \\>;\n\t\t\\]\n\t\\item Find $\\omega_m \\in \\mathbb{R}$ and $ \\lambda_m \\in \\mathbb{R}$ such that\n\t\t\\[\n\t\t\tE\\big( (1-\\omega_m) G^f_{m-1} + \\lambda_m \\varphi^f_m \\big)\n\t\t\t\\le \\inf_{\\lambda, \\omega \\in \\mathbb{R}} E\\big( (1 - \\omega) G^f_{m-1} + \\lambda \\varphi^f_m \\big) + \\delta_m\n\t\t\\]\n\t\tand define $G^f_m = (1 - \\omega_m) G^f_{m-1} + \\lambda_m \\varphi^f_m$.\n\\end{enumerate}\n\n\\smallskip\\noindent\nLastly, we introduce a new algorithm~--- the Rescaled Weak Relaxed Greedy Algorithm with errors for convex optimization.\n\\\\[.5em]\\noindent\n{\\bf Rescaled Weak Relaxed Greedy Algorithm with errors (RWRGA($\\Delta,\\text{co}$)).\\\\}\nSet $G^r_0 = 0$ and for each $m \\ge 1$ perform the following steps:\n\\begin{enumerate}\n\t\\item Take any $\\varphi^r_m \\in \\mathcal{D}$ satisfying\n\t\t\\[\n\t\t\t\\< -E'(G^r_{m-1}), \\varphi^r_m \\> \\ge t_m \\sup_{\\varphi\\in\\mathcal{D}} \\< -E'(G^r_{m-1}), \\varphi \\>;\n\t\t\\]\n\t\\item Find $\\lambda_m \\ge 0$ such that\n\t\t\\[\n\t\t\tE(G^r_{m-1} + \\lambda_m \\varphi^r_m) \\le \\inf_{\\lambda \\ge 0} E(G^r_{m-1} + \\lambda \\varphi^r_m) + \\delta_m\/2;\n\t\t\\]\n\t\\item Find $\\mu_m \\in \\mathbb{R}$ such that\n\t\t\\[\n\t\t\tE\\big( \\mu_m (G^r_{m-1} + \\lambda_m \\varphi^r_m) \\big)\n\t\t\t\\le \\inf_{\\mu \\in \\mathbb{R}} E\\big( \\mu (G^r_{m-1} + \\lambda_m \\varphi^r_m) \\big) + \\delta_m\/2\n\t\t\\]\n\t\tand define $G^r_m = \\mu_m (G^r_{m-1} + \\lambda_m \\varphi^r_m)$.\n\\end{enumerate}\n\n\\begin{Proposition}\\label{prp:ga_awbga}\nThe WCGA($\\Delta$,co), the WGAFR($\\Delta$,co), and the RWRGA($\\Delta$,co) belong to the class $\\mathcal{WBGA}(\\Delta,co)$ with\n\\[\n\t\\epsilon_m = \\inf_{u > 0} \\frac{\\delta_m + 2\\rho(E,D_1,u \\|G_m\\|)}{u}.\n\\]\n\\end{Proposition}\n\n\n\\subsection{Convergence results for the $\\mathcal{WBGA}(\\Delta,co)$}\nIn this section we discuss the convergence and rate of convergence results for algorithms from the class $\\mathcal{WBGA}(\\Delta,\\text{co})$.\n\n\\noindent\nFirst, we state the convergence result.\n\\begin{Theorem}\\label{thm:awbga_conv}\nLet $E$ be a uniformly smooth on $D_1 \\subset X$ convex function.\nAssume that an error sequence $\\Delta := \\{\\delta_m,\\epsilon_m\\}_{m=1}^\\infty$ is such that $\\delta_m \\to 0$ and $\\epsilon_m \\to 0$ as $m \\to \\infty$.\nThen any algorithm from the 
class $\\mathcal{WBGA}(\\Delta,\\text{co})$ with a constant weakness sequence $\\tau = t \\in (0,1]$ converges, i.e.\n\\[\n\t\\lim_{m\\to\\infty} E(G_m) = \\inf_{x\\in D_1} E(x).\n\\]\n\\end{Theorem}\n\n\\noindent\nSecond, we provide the rate of convergence estimate.\n\\begin{Theorem}\\label{thm:awbga_rate}\nLet $E$ be a convex function with the modulus of smoothness of power type $1 < q \\le 2$, that is, $\\rho(E,D_{1},u) \\le \\gamma u^q$.\nTake an element $f^\\epsilon \\in D_1$ and a number $\\epsilon \\ge 0$ such that\n\\[\n\tE(f^\\epsilon) \\le \\inf_{x \\in D_1} E(x) + \\epsilon, \\ \\ \n\tf^\\epsilon\/A \\in \\mathcal{A}_1(\\mathcal{D}),\n\\]\nwith some number $A := A(\\epsilon) \\ge 1$.\nThen for any algorithm from the class $\\mathcal{WBGA}(\\Delta,\\text{co})$ with a constant weakness sequence $\\tau = t \\in (0,1]$ and an error sequence $\\Delta = \\{\\delta_m,\\epsilon_m\\}_{m=1}^\\infty$ with $\\delta_m + \\epsilon_m \\le cm^{-q}$, $m = 1,2,3,\\dots$ we have\n\\[\n\tE(G_m) - \\inf_{x\\in D_1} E(x) \\le \\epsilon + C(E,q,\\gamma,t,c) A(\\epsilon)^q \\, m^{1-q}.\n\\]\n\\end{Theorem}\n\n\\begin{Corollary}\nUnder the conditions of Theorem~\\ref{thm:awbga_rate}, specifying \n\\[\n\tA(\\epsilon) := \\inf \\Big\\{ A > 0 : \\exists f \\in D_1 : f\/A \\in \\mathcal{A}_1(\\mathcal{D}),\\ \\ E(f) \\le \\inf_{x\\in D_1}E(x) + \\epsilon \\Big\\}\n\\]\nand denoting\n\\[\n\t\\eta_m := \\inf \\big\\{ \\epsilon > 0: A(\\epsilon)^q \\, m^{1-q} \\le \\epsilon \\big\\},\n\\]\nwe obtain for any algorithm from the class $\\mathcal{WBGA}(\\Delta,co)$\n\\[\n\tE(G_m) - \\inf_{x\\in D_1} E(x) \\le C(E,q,\\gamma,t) \\, \\eta_m.\n\\]\n\\end{Corollary}\n\n\\begin{Remark}\nIt follows from the proofs of Theorems~\\ref{thm:awbga_conv} and~\\ref{thm:awbga_rate}, given in Section~\\ref{sec:proofs_awbga}, that the results stated in this section also hold for the class $\\mathcal{WBGA}(\\Delta,[0,1],\\text{co})$.\n\\end{Remark}\n\n\n\n\\section{Numerical experiments}\\label{sec:numerics}\nIn this section we demonstrate the performance of the algorithms from the class $\\mathcal{WBGA}$(co) that are discussed in Section~\\ref{sec:wbga_ga}: the Weak Chebyshev Greedy Algorithm (WCGA(co)), the Weak Greedy Algorithm with Free Relaxation (WGAFR(co)), and the Rescaled Weak Relaxed Greedy Algorithm (RWRGA(co)).\n\nFor each of the numerical experiments presented below we consider the Banach space $X = \\ell_1^{(\\mathrm{dim})}$ of dimensionality $\\mathrm{dim}$, a target function $E : X \\to \\mathbb{R}$, and a dictionary $\\mathcal{D} \\in X$.\nWe then employ the aforementioned algorithms to solve the optimization problem~\\eqref{eq:opt}, i.e. 
to find a sparse (with respect to the dictionary $\\mathcal{D}$) minimizer\n\\[\n\tx^* = \\mathop{\\operatorname{argmin}}_{x \\in X} E(x).\n\\]\nSince greedy algorithms are iterative by design, in Examples~1--2 we obtain and present the trade-off between the sparsity of the solution $x^*$ and the value of $E(x^*)$.\nIn Examples~3--4, we additionally compare the greedy algorithms for convex optimization with a conventional method of finding sparse solutions~--- the optimization with $\\ell_1$-regularization, see~\\eqref{eq:opt_reg}.\nSpecifically, we solve the problem\n\\[\n\t\\text{find}\\ \\ x^* = \\mathop{\\operatorname{argmin}}_{x \\in X} \\Big( E(x) + \\lambda \\|x\\|_\\mathcal{D} \\Big),\n\\]\nwhere $\\|\\cdot\\|_\\mathcal{D}$ is the atomic norm with respect to the dictionary $\\mathcal{D}$, defined by~\\eqref{eq:D_norm}.\nTo obtain minimizers of different sparsities, the values of the regularization parameter $\\lambda$ are taken from the sequence $\\{0.1 \\times (0.9)^k\\}_{k=0}^{49}$, i.e. $50$ regularized optimization problems are solved in every setting.\n\nTo avoid an unintentional bias in the selection of dictionary $\\mathcal{D}$ and target function $E$, we generate those randomly, based on certain parameters that are described in the setting of each example.\nIn order to provide a reliable demonstration that is independent of a particular random generation, we compute $100$ simulations for each presented example and provide the distribution of the optimization results (shown in Figures~1--4).\nIn the presented pictures the solid lines represent the mean minimization values for each algorithm and the filled areas represent the minimization distribution across all $100$ simulations.\nFinally, to make the results consistent across simulations, we rescale the optimization results to be in the interval $[0,1]$, i.e. 
instead of reporting the value of $E(x^*)$ we report\n\\[\n\t\\frac{E(x^*) - \\inf_{x \\in X} E(x)}{E(0) - \\inf_{x \\in X} E(x)} \\in [0,1].\n\\]\n\nNumerical experiments presented in this section are performed in Python~3.6 with the use of NumPy and SciPy libraries.\nThe source code is available at~\\url{https:\/\/github.com\/sukiboo\/wbga_co_2020}.\n\n\n\\subsection{Example 1}\n\\begin{figure}[t]\n\t\\includegraphics[width=\\linewidth]{.\/images\/ex1.pdf}\n\t\\caption{Distribution of optimization results for Example~1.}\n\\end{figure}\nIn this example we consider the space $X = \\ell_1^{(500)}$, and construct a dictionary $\\mathcal{D}$ of size $1000$ as linear combinations of the canonical basis $\\{e_i\\}_{i=1}^{500}$ of $X$ with uniformly distributed coefficients, i.e.\n\\[\n\t\\mathcal{D} = \\{\\varphi_j\\}_{j=1}^{1000},\n\t\\ \\ \\text{where}\\ \\ \n\t\\varphi_j = \\sum_{i=1}^{500} c^i_j \\, e_i\n\t\\ \\ \\text{with}\\ \\ \n\tc^i_j \\sim \\mathcal{U}(0,1).\n\\]\nThe target function $E : X \\to \\mathbb{R}$ is chosen as\n\\[\n\tE(x) = \\|x - f\\|_p^p,\n\\]\nwhere $p = 1.2$ and $f \\in X$ is randomly generated as a linear combination of $60$ randomly selected elements of $\\mathcal{D}$ with normally distributed coefficients, i.e.\n\\[\n\tf = \\sum_{k=1}^{60} a_k \\, \\varphi_{\\sigma(k)},\n\t\\ \\ \\text{where}\\ \\ \n\ta_k \\sim \\mathcal{N}(0,1)\n\t\\ \\ \\text{and}\\ \\\n\t\\sigma\\ \\ \\text{is a permutation of}\\ \\ \\{1,\\ldots,1000\\}.\n\\]\nPerformance of the greedy algorithms in this setting is presented in Figure~1.\nThe average number of iterations required to obtain a minimizer of sparsity $50$ is $185$ for the RWRGA(co), $93$ for the WGAFR(co), and $50$ for the WCGA(co).\n\n\n\\subsection{Example 2}\n\\begin{figure}[t]\n\t\\includegraphics[width=\\linewidth]{.\/images\/ex2.pdf}\n\t\\caption{Distribution of optimization results for Example~2.}\n\\end{figure}\nIn this example we once again consider the space $X = \\ell_1^{(500)}$ and a dictionary $\\mathcal{D}$ of size $1000$, constructed as linear combinations of the canonical basis $\\{e_i\\}_{i=1}^{500}$ of $X$ with uniformly distributed coefficients, i.e.\n\\[\n\t\\mathcal{D} = \\{\\varphi_j\\}_{j=1}^{1000},\n\t\\ \\ \\text{where}\\ \\ \n\t\\varphi_j = \\sum_{i=1}^{500} c^i_j \\, e_i\n\t\\ \\ \\text{with}\\ \\ \n\tc^i_j \\sim \\mathcal{U}(0,1).\n\\]\nThe target function $E : X \\to \\mathbb{R}$ is chosen as\n\\[\n\tE(x) = \\|x - f\\|_p^p \\, \\|g\\|_q^q + \\|x - g\\|_q^q \\, \\|f\\|_p^p,\n\\]\nwhere $p = 3$, $q = 1.2$, and the elements $f,g \\in X$ are each randomly generated as linear combinations of $30$ randomly selected elements of $\\mathcal{D}$ with normally distributed coefficients, i.e.\n\\begin{gather*}\n\tf = \\sum_{k=1}^{30} a^1_k \\, \\varphi_{\\sigma_1(k)}\n\t\\ \\ \\text{and}\\ \\ \n\tg = \\sum_{k=1}^{30} a^2_k \\, \\varphi_{\\sigma_2(k)},\n\t\\\\\n\t\\text{where}\\ \\ \n\ta^1_k, a^2_k \\sim \\mathcal{N}(0,1)\n\t\\ \\ \\text{and}\\ \\\n\t\\sigma_1, \\sigma_2\\ \\ \\text{are permutations of}\\ \\ \\{1,\\ldots,1000\\}.\n\\end{gather*}\nPerformance of the greedy algorithms in this setting is presented in Figure~2.\nThe average number of iterations required to obtain a minimizer of sparsity $50$ is $125$ for the RWRGA(co), $78$ for the WGAFR(co), and $50$ for the WCGA(co).\n\n\n\\subsection{Example 3}\n\\begin{figure}[t]\n\t\\includegraphics[width=\\linewidth]{.\/images\/ex3.pdf}\n\t\\caption{Distribution of optimization results for Example~3.}\n\\end{figure}\nIn this example we additionally compare the 
greedy algorithms with conventional optimization with $\\ell_1$-regularization, see~\\eqref{eq:opt_reg}.\nSince obtaining the minimization-sparsity trade-off with $\\ell_1$-regularization is more expensive computationally than it is for the greedy algorithms, we restrict ourselves to work in a space of smaller dimensionality.\nNamely, we consider the space $X = \\ell_1^{(100)}$, and construct a dictionary $\\mathcal{D}$ of size $200$ as linear combinations of the canonical basis $\\{e_i\\}_{i=1}^{100}$ of $X$ with uniformly distributed coefficients, i.e.\n\\[\n\t\\mathcal{D} = \\{\\varphi_j\\}_{j=1}^{200},\n\t\\ \\ \\text{where}\\ \\ \n\t\\varphi_j = \\sum_{i=1}^{100} c^i_j \\, e_i\n\t\\ \\ \\text{with}\\ \\ \n\tc^i_j \\sim \\mathcal{U}(0,1).\n\\]\nThe target function $E : X \\to \\mathbb{R}$ is chosen as\n\\[\n\tE(x) = \\|x - f\\|_p^p \\, \\|g\\|_q^q + \\|x - g\\|_q^q \\, \\|f\\|_p^p,\n\\]\nwhere $p = 4$, $q = 1.5$, and the elements $f,g \\in X$ are each randomly generated as linear combinations of $30$ randomly selected elements of $\\mathcal{D}$ with normally distributed coefficients, i.e.\n\\begin{gather*}\n\tf = \\sum_{k=1}^{30} a^1_k \\, \\varphi_{\\sigma_1(k)}\n\t\\ \\ \\text{and}\\ \\ \n\tg = \\sum_{k=1}^{30} a^2_k \\, \\varphi_{\\sigma_2(k)},\n\t\\\\\n\t\\text{where}\\ \\ \n\ta^1_k, a^2_k \\sim \\mathcal{N}(0,1)\n\t\\ \\ \\text{and}\\ \\\n\t\\sigma_1, \\sigma_2\\ \\ \\text{are permutations of}\\ \\ \\{1,\\ldots,200\\}.\n\\end{gather*}\nPerformance of the greedy algorithms and optimization with $\\ell_1$-regularization in this setting is presented in Figure~3.\nThe average number of iterations required to obtain a minimizer of sparsity $50$ is $105$ for the RWRGA(co), $74$ for the WGAFR(co), and $50$ for the WCGA(co).\n\n\\subsection{Example 4}\n\\begin{figure}[t]\n\t\\includegraphics[width=\\linewidth]{.\/images\/ex4.pdf}\n\t\\caption{Distribution of optimization results for Example~4.}\n\\end{figure}\nIn this example we compare the greedy algorithms with conventional optimization with $\\ell_1$-regularization in a classical setting of canonical basis instead of a randomly-generated dictionary.\nNamely, we consider the space $X = \\ell_1^{(200)}$, and set a dictionary $\\mathcal{D}$ to be the canonical basis $\\{e_i\\}_{i=1}^{200}$ of $X$, i.e.\n\\[\n\t\\mathcal{D} = \\{e_j\\}_{j=1}^{200},\n\t\\ \\ \\text{where}\\ \\ \n\te_j = (\\underbrace{0,\\ldots,0}_{j-1},1,\\underbrace{0,\\ldots,0}_{200-j}).\n\\]\nThe target function $E : X \\to \\mathbb{R}$ is chosen as\n\\[\n\tE(x) = \\|x - f\\|_p^p \\, \\|g\\|_q^q + \\|x - g\\|_q^q \\, \\|f\\|_p^p,\n\\]\nwhere $p = 7$, $q = 3$, and $f,g$ are randomly generated as elements of $X$ with normally distributed coefficients, i.e.\n\\[\n\tf = \\sum_{k=1}^{200} a^1_k \\, e_k\n\t\\ \\ \\text{and}\\ \\ \n\tg = \\sum_{k=1}^{200} a^2_k \\, e_k,\n\t\\ \\text{where}\\ \\ \n\ta^1_k, a^2_k \\sim \\mathcal{N}(0,1).\n\\]\nPerformance of the greedy algorithms and optimization with $\\ell_1$-regularization in this setting is presented in Figure~4.\nNote that in this case all greedy algorithms~--- the RWRGA(co), the WGAFR(co), and the WCGA(co)~--- coincide due to the fact that elements of the dictionary $\\mathcal{D}$ are mutually disjoint.\nHence the number of iterations required to obtain a minimizer of sparsity $100$ is exactly $100$ for all three greedy algorithms.\n\n\n\n\\section{Proofs for Section~\\ref{sec:wbga}}\\label{sec:proofs_wbga}\nIn this section we provide the proofs of the results from Section~\\ref{sec:wbga}.\nWe begin with a known 
lemma.\n\\begin{Lemma}[{\\cite[Lemma~6.1]{VT140}}]\\label{lem:E'_L=0}\nLet $E$ be a uniformly smooth Fr{\\'e}chet-differentiable convex function on a Banach space $X$ and $L$ be a finite-dimensional subspace of $X$.\nLet $x_L$ denote the point from $L$ at which $E$ attains the minimum, i.e.\n\\[\n\tx_L = \\mathop{\\operatorname{argmin}}_{x \\in L} E(x) \\in L.\n\\]\nThen for any $\\phi \\in L$ we have\n\\[\n\t\\<E'(x_L), \\phi\\> = 0.\n\\]\n\\end{Lemma}\n\n\\noindent\nWe now prove that the algorithms stated in Section~\\ref{sec:wbga_ga} belong to the class $\\mathcal{WBGA}$(co).\n\\begin{proof}[Proof of Proposition~\\ref{prp:ga_wbga}]\nIt is easy to see that conditions~\\ref{wbga_gs} and~\\ref{wbga_er} from the definition of the class $\\mathcal{WBGA}$(co) are satisfied for all three algorithms.\nCondition~\\ref{wbga_bo} for any $m \\ge 1$ follows directly from Lemma~\\ref{lem:E'_L=0} with $x_L = \\phi = G_m$ and\n\\[\n\tL = \\Phi^c_m = \\operatorname{span}\\{\\varphi_1^c, \\ldots, \\varphi_m^c\\}\n\\]\nfor the WCGA(co), and\n\\[\n\tL = \\operatorname{span}\\{G_{m-1}^f, \\varphi_m^f\\}\n\t\\ \\ \\text{or}\\ \\ \n\tL = \\operatorname{span}\\{G_{m-1}^r, \\varphi_m^r\\}\n\\]\nfor the WGAFR(co) \/ RWRGA(co) respectively.\n\\end{proof}\n\n\n\\noindent\nWe proceed by listing the lemmas that will be utilized later in the proofs of the main results.\nThe following simple lemma is well-known (see, for instance, \\cite{VT140}).\nFor the reader's convenience we present its proof here.\n\\begin{Lemma}[{\\cite[Lemma~6.3]{VT140}}]\\label{lem:E'_rho}\nLet $E$ be a Fr{\\'e}chet-differentiable convex function.\nThen the following inequality holds for any $x \\in S \\subset X$, $y \\in X$, and $u \\in \\mathbb{R}$\n\\[\n\t0 \\le E(x + uy) - E(x) - u\\<E'(x), y\\> \\le 2\\rho(E,S,u\\|y\\|).\n\\]\n\\end{Lemma}\n\\begin{proof}\nThe left inequality follows directly from~\\eqref{eq:E'_conv1}.\nNext, from the definition of modulus of smoothness~\\eqref{eq:mod_smt} it follows that\n\\[\n\tE(x + uy) + E(x - uy) \\le 2\\big( E(x) + \\rho(E,S,u\\|y\\|) \\big).\n\\]\nFrom inequality \\eqref{eq:E'_conv1} we get\n\\[\n\tE(x - uy) \\ge E(x) - u\\<E'(x), y\\>. \n\\]\nCombining the above two estimates, we obtain\n\\[\n\tE(x + uy) \\le E(x) + u\\<E'(x), y\\> + 2\\rho(E,S,u\\|y\\|),\n\\]\nwhich proves the second inequality. 
\n\\end{proof}\n\n\n\\begin{Lemma}[{\\cite[Lemma~6.10]{VTbook}}]\\label{lem:F_A1(D)}\nFor any bounded linear functional $F$ and any dictionary $\\mathcal{D}$, we have\n\\[\n\t\\sup_{g\\in \\mathcal{D}} \\<F, g\\> = \\sup_{f\\in\\mathcal{A}_1(\\mathcal{D})} \\<F, f\\>.\n\\]\n\\end{Lemma}\n\n\n\\noindent\nThe following lemma is similar to the result from~\\cite{T13}.\nFor the reader's convenience we present a brief proof of this lemma here.\n\\begin{Lemma}\\label{lem:y_k}\nSuppose that a sequence $y_1 \\ge y_2 \\ge y_3 \\ge \\ldots > 0$ satisfies inequalities\n\\[\n\ty_k \\le y_{k-1} (1 - w_k y_{k-1}), \\ \\ w_k \\ge 0\n\\]\nfor any $k > n$.\nThen for any $m > n$ we have\n\\[\n\t\\frac{1}{y_m} \\ge \\frac{1}{y_n} + \\sum_{k=n+1}^m w_k.\n\\]\n\\end{Lemma}\n\\begin{proof}\nThe proof follows directly from the chain of inequalities\n\\[\n\t\\frac{1}{y_k} \\ge \\frac{1}{y_{k-1}} (1 - w_k y_{k-1})^{-1}\n\t\\ge \\frac{1}{y_{k-1}} (1 + w_k y_{k-1})\n\t= \\frac{1}{y_{k-1}} + w_k.\n\\]\n\\end{proof}\n\n\n\\noindent\nThe following lemma is our key tool for establishing convergence and rate of convergence of algorithms from the class $\\mathcal{WBGA}$(co).\n\\begin{Lemma}[{{\\bf Error Reduction Lemma}}]\\label{lem:erl}\nLet $E$ be a uniformly smooth on $D \\subset X$ convex function with the modulus of smoothness $\\rho(E,D,u)$.\nTake a number $\\epsilon\\ge 0$ and an element $f^\\epsilon \\in D$ such that\n\\[\n\tE(f^\\epsilon) \\le \\inf_{x \\in X} E(x) + \\epsilon, \\ \\ \n\tf^\\epsilon \/ A \\in \\mathcal{A}_1(\\mathcal{D}),\n\\]\nwith some number $A := A(\\epsilon) \\ge 1$.\nThen for any algorithm from the class $\\mathcal{WBGA}$(co) we have for any $m \\ge 1$\n\\begin{multline*}\n\tE(G_m) - E(f^\\epsilon) \\le E(G_{m-1}) - E(f^\\epsilon)\n\t\\\\\n\t+ \\inf_{\\lambda\\ge0} \\Big(-\\lambda t_m A^{-1} (E(G_{m-1})-E(f^\\epsilon)) + 2\\rho(E,D,\\lambda) \\Big).\n\\end{multline*}\n\\end{Lemma}\n\\begin{proof}\nThe main idea of the proof is the same as in the proof of the corresponding one-step improvement inequality for the WCGA (see, for instance, \\cite[Lemma~6.11]{VTbook}).\nIt follows from~\\ref{wbga_er} of the definition of the class $\\mathcal{WBGA}$(co) that\n\\[\n\tE(0) \\ge E(G_1) \\ge E(G_2) \\ge \\ldots.\n\\]\nThus if $E(G_{m-1}) - E(f^\\epsilon) \\le 0$ then the claim of Lemma~\\ref{lem:erl} is trivial.\nAssuming $E(G_{m-1}) - E(f^\\epsilon) > 0$, Lemma~\\ref{lem:E'_rho} provides for any $\\lambda \\ge 0$\n\\[\n\tE(G_{m-1} + \\lambda \\varphi_m) \\le E(G_{m-1}) - \\lambda \\<-E'(G_{m-1}),\\varphi_m\\> + 2 \\rho(E,D,\\lambda)\n\\]\nand by~\\ref{wbga_gs} from the definition of the class $\\mathcal{WBGA}$(co) and Lemma~\\ref{lem:F_A1(D)} we get\n\\begin{align*}\n\t\\<-E'(G_{m-1}),\\varphi_m\\> \n\t&\\ge t_m \\sup_{g\\in \\mathcal{D}} \\<-E'(G_{m-1}),g\\> \n\t\\\\\n\t&= t_m\\sup_{\\phi \\in \\mathcal{A}_1(\\mathcal{D})} \\<-E'(G_{m-1}),\\phi\\>\n\t\\ge t_m A^{-1} \\<-E'(G_{m-1}),f^\\epsilon\\>.\n\\end{align*}\nBy~\\ref{wbga_bo} from the definition of the class $\\mathcal{WBGA}$(co) and by convexity~\\eqref{eq:E'_conv2} we obtain\n\\[\n\t\\<-E'(G_{m-1}),f^\\epsilon\\> = \\<-E'(G_{m-1}),f^\\epsilon-G_{m-1}\\> \\ge E(G_{m-1})-E(f^\\epsilon).\n\\]\nThus, by~\\ref{wbga_er} from the definition of the $\\mathcal{WBGA}$(co) we deduce\n\\begin{align*}\n\tE(G_m) &\\le \\inf_{\\lambda\\ge0} E(G_{m-1} + \\lambda\\varphi_m)\n\t\\\\\n\t&\\le E(G_{m-1}) + \\inf_{\\lambda\\ge0} \\Big( -\\lambda t_m A^{-1} (E(G_{m-1}) - E(f^\\epsilon)) + 2\\rho(E,D,\\lambda) \\Big),\n\\end{align*}\nwhich proves the 
lemma.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:wbga_conv}]\nThe error reduction property~\\ref{wbga_er} of the class $\\mathcal{WBGA}$(co) implies that the sequence of approximants $\\{G_m\\}_{m=0}^\\infty$ is in $D$ and the sequence $\\{E(G_m)\\}_{m=0}^\\infty$ is non-increasing.\nTherefore, we have\n\\[\n\t\\lim_{m\\to \\infty} E(G_m) = a \\ge \\inf_{x\\in D}E(x).\n\\]\nDenote\n\\[\n\tb := \\inf_{x\\in D} E(x)\n\t\\ \\ \\text{and}\\ \\ \n\t\\alpha := a - b.\n\\]\nWe prove that $\\alpha = 0$ by contradiction.\nIndeed, assume that $\\alpha > 0$.\nThen for any $m \\ge 0$ we have\n\\[\n\tE(G_m) - b \\ge \\alpha.\n\\]\nWe set $\\epsilon = \\alpha\/2$ and find $f^\\epsilon \\in D$ such that\n\\[\n\tE(f^\\epsilon) \\le b + \\epsilon \\ \\ \\text{and}\\ \\ f^\\epsilon\/A \\in \\mathcal{A}_1(\\mathcal{D})\n\\]\nwith some $A := A(\\epsilon) \\ge 1$.\nThen by Lemma~\\ref{lem:erl} we get\n\\[\n\tE(G_m) - E(f^\\epsilon) \\le E(G_{m-1}) - E(f^\\epsilon) + \\inf_{\\lambda\\ge0} (-\\lambda t_m A^{-1}\\alpha\/2 + 2\\rho(E,D,\\lambda)).\n\\]\nSpecify $\\theta := \\min\\left\\{ \\theta_0,\\frac{\\alpha}{8A} \\right\\}$ and take $\\lambda = \\xi_m(\\rho,\\tau,\\theta)$ given by~\\eqref{eq:theta}.\nThen we obtain\n\\[\n\tE(G_m) \\le E(G_{m-1}) - 2\\theta t_m\\xi_m.\n\\]\nThe assumption\n\\[\n\t\\sum_{m=1}^\\infty t_m\\xi_m =\\infty\n\\]\nimplies a contradiction, which proves the theorem.\n\\end{proof}\n\n \n\\begin{proof}[Proof of Theorem~\\ref{thm:wbga_rate}]\nDenote\n\\[\n\ta_n := E(G_n) - E(f^\\epsilon),\n\\]\nthen the sequence $\\{a_n\\}_{n=0}^\\infty$ is non-increasing.\nIf for some $n \\le m$ we have $a_n \\le 0$ then $E(G_m) - E(f^\\epsilon) \\le 0$, which implies\n\\[\n\tE(G_m) - \\inf_{x \\in D} E(x) \\le \\epsilon,\n\\]\nand hence the statement of the theorem holds.\nThus we assume that $a_n > 0$ for $n \\le m$.\nBy Lemma~\\ref{lem:erl} we have\n\\begin{equation}\\label{B3}\n\ta_m \\le a_{m-1} + \\inf_{\\lambda\\ge0} \\left(-\\frac{\\lambda t_m a_{m-1}}{A} + 2\\gamma \\lambda^q\\right).\n\\end{equation}\nChoose $\\lambda$ from the equation\n\\[\n\t\\frac{\\lambda t_m a_{m-1}}{A} = 4\\gamma \\lambda^q,\n\\]\nwhich implies that\n\\[\n\t\\lambda = \\left(\\frac{ t_m a_{m-1}}{4\\gamma A}\\right)^{\\frac{1}{q-1}} .\n\\]\nLet\n\\[\n\tA_q := 2(4\\gamma)^{\\frac{1}{q-1}}.\n\\]\nUsing the notation $p := q\/(q-1)$ we get from~\\eqref{B3}\n\\[\n\ta_m \\le a_{m-1}\\left(1-\\frac{\\lambda t_m}{2A} \\right)\n\t= a_{m-1}\\left(1 - \\frac{t_m^p}{A_q A^{p}} a_{m-1}^{\\frac{1}{q-1}}\\right).\n\\]\nRaising both sides of this inequality to the power $1\/(q-1)$ and taking into account the inequality $x^r\\le x$ for $r\\ge 1$, $0\\le x\\le 1$, we obtain\n\\[\n\ta_m^{\\frac{1}{q-1}} \\le a_{m-1}^{\\frac{1}{q-1}} \\left(1 - \\frac{t^p_m}{A_q A^{p}} a_{m-1}^{\\frac{1}{q-1}}\\right).\n\\]\nThen Lemma~\\ref{lem:y_k} with $y_k := a_k^{\\frac{1}{q-1}}$, $n=0$, and $w_k=t^p_k\/(A_qA^{p})$ provides\n\\[\n\ta_m^{\\frac{1}{q-1}} \\le C(q,\\gamma) A^{p}\\left(C(E,q,\\gamma) + \\sum_{k=1}^m t_k^p\\right)^{-1},\n\\]\nwhich implies\n\\[\n\ta_m \\le C(q,\\gamma) A^q\\left(C(E,q,\\gamma) + \\sum_{k=1}^m t_k^p\\right)^{1-q},\n\\]\nwhich proves the theorem.\n\\end{proof}\n\n\n\n\\section{Proofs for Section~\\ref{sec:awbga}}\\label{sec:proofs_awbga}\nIn this section we state the proofs for the results from Section~\\ref{sec:awbga}.\nWe begin with the proof that the algorithms stated in Section~\\ref{sec:awbga_ga} belong to the class $\\mathcal{WBGA}(\\Delta,\\text{co})$.\n\\begin{proof}[Proof of 
Proposition~\\ref{prp:ga_awbga}]\nIt is easy to see that conditions~\\ref{awbga_gs} and~\\ref{awbga_er} from the definition of the class $\\mathcal{WBGA}(\\Delta,\\text{co})$ are satisfied for all three algorithms.\nCondition~\\ref{awbga_bd} holds with $C_0 = 1$ since for all three algorithms we have for any $m \\ge 1$\n\\[\n\tE(G_m) \\le E(0) + \\delta_m \\le E(0) + 1.\n\\]\nTo guarantee condition~\\ref{awbga_bo}, first note that for any $m \\ge 1$ and any $u > 0$ the definition of modulus of smoothness~\\eqref{eq:mod_smt} provides\n\\[\n\tE((1+u) G_m) + E((1-u) G_m) \\le 2E(G_m) + 2\\rho(E,D_1,u\\|G_m\\|).\n\\]\nAssume that $\\<E'(G_m), G_m\\> \\ge 0$ (the case $\\<E'(G_m), G_m\\> < 0$ is handled similarly).\nThen from convexity~\\eqref{eq:E'_conv1} we get\n\\[\n\tE((1+u) G_m) \\ge E(G_m) + u \\<E'(G_m), G_m\\>\n\\]\nand from the definitions of the corresponding algorithms we obtain\n\\[\n\tE((1-u) G_m) \\ge E(G_m) - \\delta_m.\n\\]\nCombining the above estimates we deduce\n\\[\n\t\\<E'(G_m), G_m\\> \\le \\frac{\\delta_m + 2\\rho(E,D_1, u \\|G_m\\|)}{u}.\n\\]\nTaking infimum over $u > 0$ completes the proof.\n\\end{proof}\n\n\n\\noindent\nNext, we state necessary technical lemmas that will be utilized in the proofs of the main results.\n\\begin{Lemma}[{\\cite[Lemma~3.2]{VT148}}]\\label{lem:a_delta_0}\nLet $\\rho(u)$ be a non-negative convex function on $[0,1]$ with the property $\\rho(u)\/u\\to0$ as $u\\to 0$.\nAssume that a nonnegative sequence $\\{\\alpha_k\\}_{k=1}^\\infty$ is such that $\\alpha_k\\to0$ as $k\\to\\infty$.\nSuppose that a nonnegative sequence $\\{a_k\\}_{k=0}^\\infty$ satisfies the inequalities\n\\[\n\ta_m \\le a_{m-1} + \\inf_{0\\le\\lambda\\le1}(-\\lambda va_{m-1} + B\\rho(\\lambda)) + \\alpha_m, \\ \\ m = 1,2,3,\\dots\n\\]\nwith positive numbers $v$ and $B$.\nThen\n\\[\n\t\\lim_{m\\to\\infty} a_m = 0.\n\\]\n\\end{Lemma}\n\n\n\\begin{Lemma}[{\\cite[Lemma~3.3]{VT148}}]\\label{lem:a_delta_q}\nSuppose a nonnegative sequence $a_0,a_1,\\dots$ satisfies the inequalities for $m = 1,2,3,\\dots$\n\\[\n\ta_m\\le a_{m-1} + \\inf_{0\\le \\lambda\\le 1}(-\\lambda va_{m-1}+B\\lambda^q) + \\alpha_m, \\ \\ \\alpha_m \\le cm^{-q},\n\\]\nwhere $q\\in (1,2]$, $v\\in(0,1]$, and $B > 0$.\nThen\n\\[\n\ta_m \\le C(q,v,B,a_0,c) \\, m^{1-q} \\le C'(q,B,a_0,c) \\, v^{-q} \\, m^{1-q}.\n\\]\n\\end{Lemma}\n\n\n\\noindent\nLastly, we establish a generalized version of Lemma~\\ref{lem:erl}.\n\\begin{Lemma}[{{\\bf General Error Reduction Lemma}}]\\label{lem:gerl}\nLet $E$ be a uniformly smooth on $S \\subset X$ convex function with the modulus of smoothness $\\rho(E,S,u)$.\nTake a number $\\epsilon \\ge 0$ and an element $f^\\epsilon \\in S$ such that\n\\[\n\tE(f^\\epsilon) \\le \\inf_{x\\in X} E(x) + \\epsilon, \\ \\ \n\tf^\\epsilon\/B \\in \\mathcal{A}_1(\\mathcal{D}),\n\\]\nwith some number $B \\ge 1$.\nSuppose that $G \\in S$ and $\\varphi \\in \\mathcal{D}$ satisfy the following conditions\n\\begin{gather}\n\t\\label{C1}\n\t\\<-E'(G),\\varphi\\> \\ge \\theta \\sup_{g\\in \\mathcal{D}} \\<-E'(G),g\\>, \\ \\ \\theta \\in (0,1];\n\t\\\\\n\t\\label{C2}\n\t|\\<E'(G), G\\>| \\le \\delta, \\ \\ \\delta \\in [0,1].\n\\end{gather}\nThen we have\n\\begin{align*}\n\t\\inf_{0\\le\\lambda\\le1} E(G+\\lambda\\varphi)\n\t&\\le E(G)\n\t\\\\\n\t&+ \\inf_{0\\le\\lambda\\le1} (-\\lambda \\theta B^{-1} (E(G) - E(f^\\epsilon)) + 2\\rho(E,S,\\lambda)) + \\delta.\n\\end{align*}\n\\end{Lemma}\n\\begin{proof}\nIf $E(G) - E(f^\\epsilon) \\le 0$ then the claim of Lemma~\\ref{lem:gerl} is trivial.\nAssuming $E(G) - E(f^\\epsilon) > 0$, Lemma~\\ref{lem:E'_rho} provides for any $\\lambda 
\\ge 0$\n\\[\n\tE(G + \\lambda \\varphi) \\le E(G) - \\lambda \\<-E'(G),\\varphi\\> + 2 \\rho(E,S,\\lambda).\n\\]\nBy~\\eqref{C1} and Lemma~\\ref{lem:F_A1(D)} we get\n\\begin{align*}\n\t\\<-E'(G),\\varphi\\>\n\t&\\ge \\theta \\sup_{g \\in \\mathcal{D}} \\<-E'(G),g\\>\n\t\\\\\n\t&= \\theta\\sup_{\\phi \\in \\mathcal{A}_1(\\mathcal{D})} \\<-E'(G),\\phi\\> \\ge \\theta B^{-1} \\<-E'(G),f^\\epsilon\\>.\n\\end{align*}\nBy~\\eqref{C2} and by convexity~\\eqref{eq:E'_conv2} we obtain\n\\[\n\t\\<-E'(G),f^\\epsilon\\> = \\<-E'(G),f^\\epsilon-G\\> + \\<-E'(G),G\\> \\ge E(G)-E(f^\\epsilon)-\\delta.\n\\]\nThus, for any $0\\le\\lambda\\le1$,\n\\[\n\tE(G + \\lambda\\varphi) \\le E(G) - \\lambda \\theta B^{-1} (E(G) - E(f^\\epsilon)) + 2\\rho(E,S,\\lambda) + \\delta,\n\\]\nand taking the infimum over $\\lambda$ proves the lemma.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:awbga_conv}]\nAssumption~\\ref{awbga_bd} from the definition of the class $\\mathcal{WBGA}(\\Delta,\\text{co})$ implies that for any $m \\ge 0$\n\\[\n\tE(G_m) \\le E(0) + C_0\n\t\\ \\ \\text{and}\\ \\\n\tG_m \\in D_1.\n\\]\nThen from Lemma~\\ref{lem:gerl} with $S = D_1$, $G = G_{m-1}$, $\\varphi = \\varphi_m$, $\\delta = \\epsilon_m$, $\\theta = t$, $B = A(\\epsilon)$ and property~\\ref{awbga_er} from the definition of the class $\\mathcal{WBGA}(\\Delta,\\text{co})$ we obtain\n\\begin{align}\\nonumber\n\tE(G_m)\n\t&\\le \\inf_{0\\le\\lambda\\le1} E(G_{m-1} + \\lambda\\varphi_m) + \\delta_m\n\t\\\\\\nonumber\n\t&\\le E(G_{m-1}) + \\inf_{0\\le\\lambda\\le 1} \\big(-\\lambda t A^{-1} (E(G_{m-1}) - E(f^\\epsilon))\n\t\\\\\\label{eq:E(G_m)}\n\t&\\phantom{E(G_{m-1}) + \\inf_{0\\le\\lambda\\le 1} \\big(-\\lambda}\n\t+ 2\\rho(E,D_1,\\lambda) \\big) + \\delta_m + \\epsilon_m.\n\\end{align}\nDenote\n\\[\n\ta_n := \\max\\big\\{ E(G_n) - E(f^\\epsilon), 0 \\big\\}.\n\\]\nNote that under our assumptions $t \\in (0,1]$ and $A := A(\\epsilon) \\ge 1$ we always have\n\\[\n\ta_{m-1} + \\inf_{0\\le\\lambda\\le 1}(-\\lambda t A^{-1} a_{m-1} + 2\\rho(E,D_1,\\lambda)) \\ge 0.\n\\]\nTherefore estimate~\\eqref{eq:E(G_m)} implies\n\\begin{equation}\\label{eq:a_m}\n\ta_m \\le a_{m-1} + \\inf_{0\\le\\lambda\\le 1} (-\\lambda t A^{-1} a_{m-1} + 2\\rho(E,D_1,\\lambda)) + \\delta_m + \\epsilon_m.\n\\end{equation}\nWe apply Lemma~\\ref{lem:a_delta_0} with $v = tA^{-1}$, $B = 2$, $\\rho(u) = \\rho(E,D_1,u)$, and $\\alpha_m = \\delta_m + \\epsilon_m$ to obtain\n\\[\n\t\\lim_{m\\to\\infty} a_m = 0,\n\\]\nwhich implies\n\\[\n\t\\limsup_{m\\to\\infty} E(G_m) \\le \\epsilon + \\inf_{x\\in D_1} E(x)\n\\]\nand, since $\\epsilon > 0$ is arbitrary,\n\\[\n\t\\lim_{m\\to\\infty} E(G_m) = \\inf_{x\\in D_1} E(x),\n\\]\nwhich completes the proof.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:awbga_rate}]\nFrom estimate~\\eqref{eq:a_m} we get\n\\begin{align*}\n\ta_m\n\t&\\le a_{m-1} + \\inf_{0\\le\\lambda\\le 1} (-\\lambda t A^{-1} a_{m-1} + 2\\rho(E,D_1,\\lambda)) + \\delta_m + \\epsilon_m\n\t\\\\\n\t&\\le a_{m-1} + \\inf_{0\\le\\lambda\\le 1}(-\\lambda t A^{-1} a_{m-1} + 2\\gamma\\lambda^q) + \\delta_m + \\epsilon_m.\n\\end{align*}\nApplying Lemma~\\ref{lem:a_delta_q} with $v = t A^{-1}$, $B = 2\\gamma$, and $\\alpha_m = \\delta_m + \\epsilon_m$ completes the proof.\n\\end{proof}\n\n\n\n\n\n\\section*{Acknowledgments}\nThe first author acknowledges support given by the Oak Ridge National Laboratory, which is operated by UT-Battelle, LLC., for the U.S. 
Department of Energy under Contract DE-AC05-00OR22725.\n\nThe work was supported by the Russian Federation Government Grant N{\\textsuperscript{\\underline{o}}}14.W03.31.0031. The paper contains results obtained in the framework of the program \"Center for the storage and analysis of big data\", supported by the Ministry of Science and Higher Education of the Russian Federation (contract 11.12.2018 N{\\textsuperscript{\\underline{o}}}13\/1251\/2018 between the Lomonosov Moscow State University and the Fond of support of the National technological initiative projects).\n\n\\section*{References}\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThis article is a complementary part to the work done in \\cite{CMFixed} with \nits own independent interest. We discuss geometric conditions under which \nthere are no invariant Beltrami differentials supported on the dissipative set \nof a rational map $R$. \n\nIn this paper we will always assume that the conservative set of the action of \n$R$ belongs to the Julia set.\n\nNow, let us introduce the geometric objects to be treated in this \npaper. \n\nWe denote by $P(R)$ the closure of the postcritical set of $R$ and consider \nthe surface $S_R:=\\bar{\\C}\\setminus P(R)$. The surface $S_R$ is not always \nconnected; however, on each connected component of $S_R$ we fix a \nPoincar\\'e hyperbolic metric and denote by $\\lambda$ the family of all these \nmetrics.\n\nLet $Q(S_R)$ be the subspace of $ L_1(S_R)$ of holomorphic \nintegrable functions on $S_R.$\n\nA rational map $R$ defines a complex push-forward map on $L_1(\\C)$, with \nrespect to the Lebesgue measure $m$, which is a contracting endomorphism and \nis called the complex Ruelle-Perron-Frobenius operator or, for short, the \nRuelle operator. The Ruelle operator has the following formula:\n\n\\[R^*(\\phi)(z)=\\sum_{y \\in R^{-1}(z)} \\frac{\\phi(y)}{R'(y)^2}\n=\\sum_i \\phi(\\zeta_i(z))(\\zeta'_i(z))^2,\\] where $\\zeta_i$ is any local complete \nsystem of branches of $R^{-1}.$ The space $Q(S_R)$ is invariant under the action \nof the Ruelle operator. \nThe Beltrami operator $Bel:L_\\infty(\\C)\\rightarrow L_\\infty(\\C)$ given by \n\\[Bel(\\mu)=\\mu(R)\\frac{\\overline{R'}}{R'}\\] is dual to the Ruelle operator \nacting on $L_1(\\C)$. \n\nThe fixed point space $Fix(Bel)$ of the Beltrami operator is called the \n\\textit{space of invariant Beltrami differentials}. An element $\\alpha \\in \nL_\\infty(\\C)$ is called non trivial if and only if the functional given by \n\\[v_\\alpha(\\phi) = \\int \\phi \\alpha\\] is non zero on $Q(S_R).$ The norm of \n$v_\\alpha$ in $Q^*(S_R)$, for a non trivial element $\\alpha$, is called the \n\\textit{Teichm\\\"uller norm} of $\\alpha$ and it is denoted by $\\| \\alpha \n\\|_{T}.$ \n\nA non trivial element $\\alpha$ is called \\textit{extremal} if and only if \n$\\|\\alpha\\|_\\infty=\\|\\alpha\\|_T.$ \n\nA sequence of unit vectors $\\{\\phi_i\\}$ is called a \n\\textit{Hamilton-Krushkal} sequence, for short HK-sequence, for an extremal \nelement $\\alpha$ if and only if \\[ \\lim_{i\\rightarrow \n\\infty}|v_\\alpha(\\phi_i)|=\\|\\alpha\\|_\\infty.\\]\n\nA HK sequence $\\{\\phi_i\\}$ is called \\textit{degenerated} if it converges to $0$ \nuniformly on compact sets.\n \nLet $T:\\mathcal{B}\\rightarrow \\mathcal{B}$ be a linear contraction of a Banach \nspace $\\mathcal{B}$. 
An element \n$b\\in \\mathcal{B}$ is called \\textit{mean ergodic} with respect to $T$ if and \nonly if the sequence of Ces\\`aro averages with respect to $T$, given by\n$C_n(b)=\\frac{1}{n}\\sum_{i=0}^{n-1} T^i(b)$, forms a weakly precompact family. \nIndeed (see Krengel \\cite{Krengel}), when $\\mathcal{B}$ is weakly complete then, for \na mean ergodic element $b$, the sequence $C_n(b)$ converges in norm to its limit, \nthis limit always is a fixed element of $T$. If every element $b\\in \\mathcal{B}$ \nis mean ergodic with respect to $T$ then the operator $T$ is called \nmean-ergodic.\n\nBy the Bers Representation Theorem, the space $Q^*(S_R)$ is linearly \nquasi-isome\\-trically isomorphic to the \\textit{Bergman} space $B(S_R)$ which \nis \nthe space of holomorphic functions $\\phi$ on $S_R$ with the \nnorm $\\|\\lambda^{-2}\\phi\\|_{L_\\infty(S_R)}.$\n\nIn the case where $S_R$ has finitely many components, a classical theorem, \nsee for example \\cite{Matsuzaki} and references within, states that \n$Q(S_R)\\subset B(S_R)$ if and only if the infimum of the length of simple \nclosed geodesics is bounded away from $0$.\n\n\\section{Main Theorem}\nLet $X$ be an $R$ invariant measurable set, then the set $W:=\\bigcup \nR^{-n}(X)$ is completely invariant. In the following theorem we will only \nconsider Ces\\`aro averages with respect to the Ruelle operator $R^*$ in \n$L_1(W).$\n\n\n\\begin{theorem}\\label{MainTechnicalThm}\nLet $X$ be an $R$ invariant measurable subset such that the restriction map \n$r(\\phi)= \\phi|_X$ from $Q(S_R)$ to $L_1(X) $ is weakly precompact. Then \nevery $\\phi \\in Q(S_R)$ is mean ergodic with respect to $R^*$ in $L_1(W)$. \n\\end{theorem}\n\n\\begin{proof}\nIf $X$ is $R$ invariant then the Ruelle operator $R^*$ defines an endomorphism \nof $L_1(X)$. Given $\\phi \\in Q(S_R)$, the family of Ces\\`aro averages \n$C_n(\\phi)$ restricted on $X$ forms a weakly precompact subset of $L_1(X).$ \nWe claim that $C_n(\\phi)$ converges in norm on $L_1(X).$ Indeed, first \nwe show that every weak accumulation point of $C_n(\\phi)$ is a fixed point for \nthe Ruelle operator. Let $f$ be the weak limit of $C_{n_i}(\\phi)$ for some \nsubsequence $\\{n_i\\},$ then $R^*(f)$ is the weak limit of $R^*(C_{n_i}(\\phi))$.\nBy the Fatou Lemma \n$$\\int_X |f-R^*(f)| \\leq \\liminf \\int_X |C_{n_i}(\\phi)-R^*(C_{n_i}(\\phi))|$$\n$$\\leq \\liminf \\| C_{n_i}(\\phi)-R^*(C_{n_i}(\\phi))\\|_{L_1(S_R)}$$ \n$$\\leq \\limsup \\|C_{n_i}(I-R^*)(\\phi)\\|_{L_1(S_R)}.$$\n\nBut \n$$ \\|C_{n_i}(I-R^*)(\\phi)\\|_{L_1(S_R)}\\leq \\frac{2}{n_i}\\|\\phi\\|_{L_1(S_R)}.$$\nThen $f$ is a non zero fixed point of Ruelle operator. As in \\cite{MakRuelle} \nwe have that $|f|$ defines a finite absolutely continuous invariant measure. \nHence, the support of $f$ is a non trivial subset of the conservative set of \n$R.$ By Lyubich's Ergodicity theorem (see \\cite{Mc1} and \\cite{LyuTypical}) and \nthe fact \nthat $X$ does not intersect the postcritical set we have $X=W=S_R$. But, \nMcMullen's Theorem (Theorem 3.9 of \\cite{Mc1}) implies that in this case $R$ is \na, so called, \\textit{flexible Latt\\`es} map. Furthermore, \nthe space $Q(S_R)$ is finitely dimensional and hence $R^*$ is a compact \nendomorphism of $Q(S_R)$, it follows that $R^*$ is mean ergodic on $Q(S_R)$. \n\nTherefore, if $R$ is not a flexible Latt\\`es map then any weak limit of \n$C_n(\\phi)$ is $0$. 
Since the weak closure of convex bounded sets is equal to \nthe closure in norm of convex bounded sets, we conclude our claim.\n\nNow let $W_n=R^{-n}(X)$, one can inductively prove that \n$\\phi|_{W_n}$ is mean ergodic on $L_1(W_n)$. Indeed, let $\\psi_n=\\phi|_{W_n}$, \nsince $R^*:L_1(W_n)\\rightarrow L_1(W_{n-1}) \\subset L(W_n)$ \nand $R^*(\\psi_n)=R^*(\\phi)|_{W_{n-1}}$, then by arguments above we are done. \n\nNow consider $\\phi|_{W}-\\phi|_{W_n}$, the $L_1$ norm of this difference \nconverges to $0$ in $L_1(W)$, since the Ces\\`aro averages does not expand the \n$L_1$ norm we have $$\\|C_k(\\phi|_{W}-\\phi|_{W_n})\\| \\leq \n\\|\\phi|_{W}-\\phi|_{W_n}\\|.$$\n\nHence $C_k(\\phi|_{W})$ converges to $0$ and $\\phi$ is mean ergodic on $L_1(W)$. \n\n\\end{proof}\n\nNow we state our Main Theorem.\n\n\\begin{theorem}\\label{MainTheorem}\nLet $R$ be a rational map and let $X\\subset S_R$ be an invariant \nmeasurable set of positive Lebesgue measure.\nAssume that the restriction map $r(\\phi)=\\phi|_X$ from $Q(S_R)$ into \n$L_1(X)$ is weakly precompact. If $\\mu$ is a non trivial invariant Beltrami \ndifferential, then $m(supp (\\mu)\\cap X)>0$ if and only if $R$ is a flexible \nLatt\\`es map.\n\\end{theorem}\n\n\\begin{proof}\nAssume that $R$ is a flexible Latt\\`es map. Then $R$ is ergodic on the Riemann \nsphere and therefore the support of any invariant Beltrami differential $\\mu$ \nis the whole Riemann sphere. Hence, if $X$ is invariant of positive Lebesgue \nmeasure then $m(supp(\\mu)\\cap X)=m(X)>0.$ \n\nAgain let $W=\\bigcup R^{-n}(X)$. Now let $\\mu$ be a non \ntrivial invariant Beltrami differential supported on $W$. If $R$ is not \nLatt\\`es, then for any $\\phi \\in Q(S_R)$ we have $$\\int_{S_R} \\phi \\mu \n=\\int_{S_R} \\mu C_k(\\phi)=\\int_{W} \\mu C_k(\\phi).$$ \n\nBy Theorem \\ref{MainTechnicalThm}, the right hand side converges to $0$ as $k$ \nconverges to $\\infty$. Hence $\\int \\phi \\mu=0$ for every quadratic \ndifferential $\\phi$ and the functional $\\phi\\mapsto \\int \\phi \\mu$ is $0$ on \n$Q(S_R).$ Which contradicts the assumption that $\\mu$ is non trivial. \n\n\\end{proof}\n\n\nIn the proofs of the previous theorems, the only ingredient was the\nprecompactness of the Ces\\`aro averages $C_n(\\phi)$. Hence, \nit is enough to assume the weak precompactness only of \nCes\\`aro averages on elements of $Q(S_R)$. \nBy results of the second author in \\cite{MakRuelle}, see also a related work \non \\cite{CMFixed}, it is enough to consider the Ces\\`aro averages of \nrational functions in $Q(S_R)$ having poles only on the set of critical values. \n\n\\section{Compactness}\nWe want to discuss conditions under which the restriction map $\\phi\\mapsto \n\\phi|_A$ is weakly precompact. Unfortunately, so far we have not found \nconditions where the restriction is weakly precompact but not compact.\nLet us start with the following observations and definitions. \n\n\\begin{definition}\nA rational map $R$ satisfies the $B$-condition if and only if for any $\\phi\\in \nQ(S_R)$ we have $$\\|\\lambda^{-2}(z) \n\\phi(z)\\|_{L_\\infty(S_R)}\\leq C \\|\\phi(z)\\|_{L_1(S_R)},$$ where $C$ \nis a constant independent of $\\phi.$ \n\\end{definition}\nIn other words, if $R$ satisfies the $B$-condition, then\n$Q(S_R)\\subset B(S_R)$ and the inclusion\nmap $Q(S_R)\\rightarrow B(S_R)$ is continuous. 
As it was noted on the \nintroduction, this happens when $S_R$ has finitely many components and \nthe infimum of the length of the simple closed geodesics is bounded away from \n$0.$\n\n\\begin{proposition}\\label{prop.Bcond.comp}\nIf $R$ satisfies the $B$-condition and $\\lambda(X)<\\infty$ then the \nrestriction map is compact.\n\\end{proposition}\n\n\\begin{proof}\nIf $R$ satisfies the $B$-condition then $$\\lambda^{-2}|\\phi(z)|\\leq sup_{z\\in \nS_R} |\\lambda^{-2}(z) \\phi(z) | \\leq C \\|\\phi \\|_1,$$ hence $|\\phi(z)|\\leq \nC\\| \n\\phi\\|_1 \\lambda^{2}(z),$ by Lebesgue Theorem \nthe restriction map is compact.\n\\end{proof}\n\nUsing Theorem \\ref{MainTheorem} and Proposition \\ref{prop.Bcond.comp} \nwe have the following.\n\\begin{corollary}\\label{cor.area}\nIf $R$ satisfies the $B$-condition and $X$ is an invariant set of \npositive Lebesgue measure with $Area_\\lambda(X)<\\infty$. If $\\mu$ is a non \nzero invariant Beltrami differential, then $m(supp(\\mu)\\cap X)>0$ if and only \nif $R$ is a flexible Latt\\`es map. \n\\end{corollary}\nIn general, the finiteness of the hyperbolic area of $X$ does not \nimply the finiteness of hyperbolic area of $W$. Generically, it \ncould be that the hyperbolic area of $W$ is infinite regardless of the area of \n$X$. \nOn the other hand, by Corollary \\ref{cor.area}, if $R$ satisfies the \n$B$-condition and the hyperbolic area \n$Area_\\lambda(J(R))$ is bounded then $R$ satisfies Sullivan's conjecture. \nHowever, in this situation, we believe that the following stronger statement \nholds true:\n\nThe $Area_\\lambda(J(R))<\\infty$ if \nand only if either $m(J(R))=0$ or $R$ is postcritically finite. \n\nIn fact, we do not know if the $B$-condition is sufficient on this statement.\n\nNow we consider the more general condition when the restriction map $r_X$ is \ncompact. This condition, in some sense, reflects the geometry of the \npostcritical set.\n \nOn the product $S_R\\times S_R\\subset \\C^2$ there exist a unique \nfunction $K(z,\\zeta)$ which is characterized by the following conditions.\n\n\\begin{enumerate}\n \\item $K(\\zeta,z)=-\\overline{K(z,\\zeta)}$\n \\item For any $\\zeta_0\\in S_R$, the function $\\phi_{\\zeta_0}(z)=K(z,\\zeta_0)$\nbelongs to the intersection $Q(S_R)\\cap B(S_R).$ \n \\item If $z_0,\\zeta_0$ belong to different components of $S_R$, then \n$K(z_0,\\zeta_0)=0.$\n\\item The operator $P(f)(z)=\\int \\lambda^{-2}(\\zeta) K(z,\\zeta)f(\\zeta) d\\zeta \nd\\bar{\\zeta}$ from $L_1(S_R)$ to $Q(S_R)$ is a continuous surjective \nprojection.\n\\end{enumerate}\n\nIn fact, the function $K(z,\\zeta)$ is defined on any planar hyperbolic Riemann \nsurface $S$. In particular, when the surface $S$ is the unit disk $\\mathbb{D}$ \nthe function $K(z,\\zeta)$ has the formula $$K(z,\\zeta)=\\frac{3}{2}\\pi i \nK_\\mathbb{D}(z,\\zeta)^2$$ where \n$K_\\mathbb{D}(z,\\zeta)=[\\pi(1-z\\bar{\\zeta})^2]^{-1}$ is the classical Bergman \nKernel function on the unit disk. For further details on these facts see for \nexample Chapter 3, \\S 7 of the book of I. 
Kra \n\\cite{KraBook} .\n\n\nNow we consider the following function $$\\omega(\\zeta,z)=\\lambda^{-2}(\\zeta) \nK(z,\\zeta)$$ and $$w(z)=\\omega(z,z).$$ \n\n\nThe following proposition is a consequence of H\\\"older inequality and appear as \nLemma 2 on Ohtake's paper \\cite{Ohtakedeform}.\n\n\\begin{proposition}\\label{prop.bound.comp}\n If $X$ has positive measure and $$\\int_X |w|<\\infty$$ then the restriction \n$r_X:\\phi\\mapsto \\phi|_X$ from $Q(S_R)$ to $L_1(X)$ is compact.\n\\end{proposition}\n\\begin{proof}\n\n\nWe follow arguments of Lemma 2 in \\cite{Ohtakedeform}. If $D$ is a \ncomponent of $S_R$, then by H\\\"older's \ninequality as in Lemma 2 of \\cite{Ohtakedeform}, we have that $$|(\\phi|_D)(z)| \n\\leq \nC|(w|_D)(z)|\\int_D |\\phi| $$ where the constant $C$ does not depend on $D$. \nSince $S_R$ is a countable union of components, then \n\n$$|\\phi(z)|\\leq C |w| \\|\\phi(z)\\|.$$ As $w$ is integrable on $X$ then by \napplying once again the Lebesgue Theorem we complete the proof.\n\\end{proof}\n\nAs a consequence we have:\n\\begin{corollary}\\label{cor.finite}\n If $\\int_{J(R)} |w|<\\infty$ then $R$ satisfies Sullivan's conjecture.\n\\end{corollary}\n\n\\begin{proof}\n Follows from Theorem \\ref{MainTheorem} and Proposition \\ref{prop.bound.comp}.\n\\end{proof}\n\n\nRemarks: \n\\begin{enumerate}\n\\item If $R$ satisfies the $B$ condition then by Classical results, see the \ncomments before Proposition 1 in \\cite{Ohtake}, we have that $w(z)\\leq C \n\\lambda^2(z)$ where \n$C$ does not depend on $z.$ Partially, if $X$ has bounded hyperbolic area then \n$w(z)$ is integrable on $X$, hence the conditions of Proposition \n\\ref{prop.bound.comp} implies Proposition \\ref{prop.Bcond.comp}.\nAs it is mentioned in \\cite{Ohtake}, the conditions in Proposition \n\\ref{prop.Bcond.comp} are strictly weaker than conditions of Proposition \n\\ref{prop.bound.comp}.\n\\item Moreover, by other result of Ohtake (Proposition 3 in \n\\cite{Ohtakedeform}) we note \nthat in general, the boundedness of the hyperbolic area is not a quasiconformal \ninvariant.\n\n\n\\end{enumerate}\n\n\nIn other words, Proposition \\ref{prop.Bcond.comp} and Proposition \n\\ref{prop.bound.comp} states that if $X$ is completely invariant positive \nmeasure set and satisfying an integrability condition then $X$ can not support \nextremal \ndifferentials with Hamilton-Krushkal degenerated sequences.\n\nHence, Corollary \\ref{cor.area} and Corollary \\ref{cor.finite}, in the case \nwhen $X$ is a completely invariant, derive from results in \n\\cite{CMFixed}. Together, the corollaries mean that \nif a map $R$ has an invariant line field which does not allow a \nHamilton-Krushkal degenerated sequences on $Q(S_R)$, then $R$ is a Latt\\`es map \nif and only if the postcritical set has Lebesgue measure zero.\n\nLet $Y_n$ be an exhaustion of $S_R$ by compact subsets such that the Lebesgue \nmeasure of \n$Y_{n+1}\\setminus Y_n$ converge to zero. Let $P_n$ be the sequence of \nrestrictions \n$P_n:L_1(S_R)\\rightarrow L_1(S_R)$ given by $P_n(f)=\\chi_{n} P(f)$ where \n$\\chi_{n}$ is the characteristic function on $Y_n$. 
\nImmediately from the definition we have the following facts:\n\n\\begin{enumerate}\n \\item For each $n$, the map $P_n$ is a compact operator.\n \\item The limit \\[\\lim_{n\\rightarrow \\infty} \n\\|P_n(f)-P(f)\\|_{L_1(S_R)}\\rightarrow 0\\] for all $f$ on $L_1(S_R)$.\n\\end{enumerate}\n\nWe have the following Theorem:\n\n\\begin{theorem}\\label{th.exhaustion}\n Let $\\mu\\neq 0$ be an extremal invariant Beltrami differential, then the \nfollowing conditions are equivalent:\n\n\\begin{itemize}\n \\item The map $R$ is a flexible Latt\\`es map. \n \\item There exist an exhaustion of compact sets $Y_n$ as defined above \nsuch that the following inequality is true: \\[\\inf_{n} \\|P_n -P\\|_{L_1(supp \n(\\mu))}<1.\\]\n\\end{itemize}\n \n\\end{theorem}\n\n\\begin{proof}\nAssume that $R$ is a flexible Latt\\`es map, then $Q(S_R)$ is finitely \ndimensional then the operators $P_n$ converge to $P$ by norm. Hence,\nthe infimum $\\inf_{n} \\|P_n -P\\|_{L_1(supp(\\mu))}=0.$ \n\nNow, let us assume that $\\inf_{n} \\|P_n -P\\|_{L_1(supp (\\mu))}<1$. We show \nthat this condition implies that $\\mu$ does not accept degenerated \nHamilton-Krushkal sequences. Indeed, assume that $\\{\\phi_n\\}$ is a degenerated \nHamilton-Krushkal sequence for $\\mu$. By assumption, there exist $n_0$ such \nthat \n\\[\\sup_{f\\in L_1(supp(\\mu)),\\|f\\|=1} \\int |P_{n_0}(f)-P(f)|=r<1.\\] Since \n$\\phi_n$ is degenerated and by the compactness of $P_{n_0}$ we have that \n\\[\\lim_{j\\rightarrow \\infty} \\|P_{n_0}(\\phi_j)\\|_{L_1(S_R)}\\rightarrow 0.\\] \nHence\n\\[\\|\\mu\\|_\\infty=\\lim_{j} \\bigg |\\int \\mu \\phi_j\\bigg |=\\lim_j \\bigg \n|\\int_{supp(\\mu) }\\mu \n\\phi_j\\bigg |\\]\n\\[=\\lim_{j} \\bigg |\\int_{supp(\\mu)} \\mu(P_{n_0}(\\phi_j)-P(\\phi_j))\\bigg |\\]\n\\[\\leq \\|\\mu\\|_\\infty \\sup_{f\\in L_1(supp(\\mu)),\\|f\\|=1} \\int \n|P_{n_0}(f)-P(f)|=r \\|\\mu\\|_\\infty < \\|\\mu\\|_\\infty.\\] \nWhich is a contradiction. \n\nApplying the Corollary 1.5 in \\cite{EarleLi}, the extremal differential $\\mu$ \ndoes \nnot accept Hamilton{-}Krushkal degenerated sequences if \nand only if there exist $\\phi$ in $Q(S_R)$ and a suitable constant $K$ such \nthat \n\\[v_\\mu(\\gamma)=K \\int \\frac{|\\phi|}{\\phi} \\gamma.\\]\n\n\n\nHence, for any $\\gamma$ in $Q(S_R)$ we have\n\n$$\\int \\frac{|\\phi|}{\\phi}R^*(\\gamma)=\\int\\frac{|\\phi|}{\\phi} \\gamma$$ and\n$$1=\\int\\frac{|\\phi|}{\\phi}R^*(\\phi).$$ This implies that \n$$\\frac{|R^*(\\phi)|}{R^*(\\phi)}=\\frac{|\\phi|}{\\phi}$$ but since $\\phi$ is \nholomorphic then $\\phi$ is a non zero fixed point on $Q(S_R)$. \nUsing arguments of the proof of Theorem \\ref{MainTechnicalThm} we are \ndone. \n\\end{proof}\n\nThe following Proposition is an illustration of when the conditions of Theorem \n\\ref{th.exhaustion} are fulfilled. \n\n\\begin{proposition} If $R$ is a rational map satisfying the $B$ condition.\nIf $A$ is a measurable subset of $S_R$ so that \n\\[\\int_A \\int_{S_R} |K(z,\\zeta)| dz\\wedge d\\bar{z} \\wedge d\\zeta \\wedge \nd\\bar{\\zeta}<\\infty\\]\nthen for any exhaustion of $S_R$ by compact sets $Y_n$ and operators \n$P_n$ defined as above we have $\\lim \\|P_n-P\\|_{L_1(A)}=0.$ \n\\end{proposition}\n\n\\begin{proof} Let $Y_n$ be an exhaustion of compact sets as above. 
Since \n$K(z,\\zeta)$ is absolutely integrable on $A\\times S_R$ then \n$$|\\chi_{n} K(z,\\zeta)|\\leq |K(z,\\zeta)|$$ and\n$\\chi_{n}K(z,\\zeta)\\rightarrow K(z,\\zeta)$ pointwise on $A\\times S_R.$ By the \nLebesgue theorem $$\\inf \\int_{A}\\int_{S_R} |K(z,\\zeta)-\\chi_{n} \nK(z,\\zeta)|=0.$$\nFor all $\\phi \\in Q(S_R)$, we have $$\\|P_n(\\phi)-P(\\phi)\\|_{L_1(A)}$$ \n\n\\[\\leq \\int_A |P_n(\\phi)-P(\\phi)|\\leq \\int_{A} \\int_{S_R} \n|\\lambda^{-2}(\\zeta)\\phi(\\zeta)(K(z,\\zeta)-\\chi_n K(z,\\zeta))|d\\zeta dz\\]\n\n\n\\[\\leq \\|\\lambda^{-2} \\phi\\|_\\infty \\int_{A} \\int_{S_R} |K(z,\\zeta)-\\chi_n \nK(z,\\zeta)| d\\zeta dz \n\\]\nwhich by the $B$-condition we have that the latter is \n\\[\\leq C \\|\\phi\\|_{L_1(S_R)}\\int_{A} \\int_{S_R} |K(z,\\zeta)-\\chi_n \nK(z,\\zeta)| d\\zeta dz .\\] For some constant $C$ which does not depend on \n$\\phi$.\n\nNow let $f\\in L_1(A)$, since $P$ is a projection then $f=\\phi+\\omega$ where \n$\\phi\\in \nQ(S_R),$ $P(\\omega)=P_n(\\omega)=0$ and $$\\|\\phi\\|_{Q(S_R)}\\leq \\|P\\| \n\\|f\\|_{L_1(A)}.$$\n Hence $\\lim \\|P_n-P\\|_{L_1(A)}= 0.$ \n\n\n\\end{proof}\nFinally we characterize a Latt\\`es map in terms of the geometry of $Q^*(S_R).$ \nWe start with the following definitions.\n\\begin{definition} \n\\begin{enumerate} \n \\item A set $L$ in $Q^*(S_R)$ is called a geodesic ray if $L$ \nis an isometric image of the non negative real numbers $\\mathbb{R}_+$. \n \\item Let $L_1$ and $L_2$ be geodesic rays with parameterizations \n$\\psi_1:\\mathbb{R}_+\\longrightarrow L_1$ and \n$\\psi_2:\\mathbb{R}_+\\longrightarrow L_2$ respectively. The pair of rays $L_1$ \nand $L_2$ are called equivalent if \n$$\\limsup_{t\\rightarrow \\infty} \\| \\psi_1(t)-\\psi_2(t) \\|_T\\leq d <\\infty$$ for \nsome $d.$\n\\item An element $v$ in $Q^*(S_R)$ is called asymptotically finite if the \nnumber of equivalence classes of geodesic rays in $Q^*(S_R)$ containing $0$ and \n$v$ is finite.\n\\end{enumerate}\n \n\n \n\\end{definition}\n\n\nNow we characterize rational maps which have asymptotically finite non trivial \ninvariant Beltrami differentials.\n\n\\begin{theorem}\\label{thm. uniq.equiv}\nAssume that $S_R$ is connected and let $\\mu$ be non trivial an invariant \nBeltrami differential for $R$ supported on $S_R$. Then the functional \n$v_\\mu(\\phi)=\\int \\phi\\mu$ is asymptotically finite if and only if $R$ is \nLatt\\`es.\n\\end{theorem}\n \n\n\\begin{proof}\n If $R$ is a Latt\\`es map then $Q^*(S_R)$ is finitely dimensional and then \nthere is only a unique geodesic ray passing through any \npair of points in $Q^*(S_R)$ see \\cite{EarleLi} and \\cite{GardLakic}. \n\nReciprocally, suppose that the functional $v_\\mu$ is asymptotically \nfinite. Let us first assume \nthat $\\|v_\\mu \\|_{Q^*(S_R)}=\\|\\mu\\|_{L_\\infty}.$ Then by Corollary 6.4 in \n\\cite{EarleLi}, if $\\mu$ accept degenerated \nHamilton-Krushkal sequences there exist $\\C$-linear isometry \n$I:\\ell_\\infty\\rightarrow Q^*(S_R)$ such that if $m$ is the constant \nsequence \nwith value $\\|\\mu\\|_\\infty$ then $I(m)=v_\\mu.$\n \n\n\nNow let $\\{e_i\\}$ be the canonical basis of $\\ell_\\infty$. Then as in \n\\cite{EarleLi}, we define geodesic rays in $\\ell_\\infty$ as follows:\n\nFor any $r\\geq \\|\\mu\\|_\\infty$ and \n\n$$\\psi_{r,i}(t)=\\bigg \\{ \\begin{array}{l} \nt\\cdot m \\textnormal{ for } t\\leq r. 
\\\\\nr\\cdot m + (t-r)\\|\\mu\\|_\\infty e_i \\textnormal{ for } t>r.\n\\end{array}$$\n\nBut for all $i_0, r_1,r_2$, $$\\limsup_{t\\rightarrow \\infty}\n\\|\\psi_{i_0,r_1}(t)-\\psi_{i_0,r_2}(t)\\|_{\\ell_\\infty}\\leq \n|r_1-r_2|\\|\\mu\\|_\\infty.$$\n\n\nAlso for all $i\\neq j$ and all $r$ we have \n\\[\\limsup_{t\\rightarrow \\infty} \\|\\psi_{j,r}(t)-\\psi_{i,r}(t)\\|_\\infty\\]\n\\[=\\limsup_{t\\rightarrow \\infty}\\|\\mu\\|_\\infty t \\|e_i-e_j\\|=\\infty.\\]\n\n\nBut the existence of the isometry $I$ then contradicts the asymptotic finiteness of $v_\\mu$. \nHence $\\mu$ does not accept Hamilton-Krushkal degenerated sequences.\n\nNow, using similar arguments as in the proof of Theorem \\ref{th.exhaustion}, we \ncomplete the proof in the case where $\\mu$ is extremal.\n\n\nFinally, we show that if $\\mu$ is a non trivial invariant Beltrami differential, \nthen there exists an extremal invariant differential $\\nu$ such that \n$v_\\nu(\\gamma)=v_\\mu(\\gamma)$ for all $\\gamma$ in $Q(S_R).$ \n\n\nIndeed, if $\\mu$ is not extremal then by the Banach Extension Theorem and \nthe Riesz Representation Theorem there exists another Beltrami differential \n$\\alpha$ which is extremal, satisfies \n$\\|\\alpha\\|_\\infty=\\|\\mu\\|_T<\\|\\mu\\|_\\infty$, and \ndefines the same functional as $\\mu$ on $Q(S_R)$. Let $\\beta$ be a \n$*$-weak limit of the Ces\\`aro averages $C_n(\\alpha)=\\frac{1}{n} \n\\sum_{i=0}^{n-1} \n\\alpha(R^i)\\frac{\\overline{(R^i)'}}{(R^i)'}$; then \n$\\beta(R)\\frac{\\overline{R'}}{R'}=\\beta$ and $\\|\\beta\\|_\\infty\\leq \n\\|\\alpha\\|_\\infty.$ We claim that $v_\\beta=v_\\mu$.\nLet $\\{C_{n_i}(\\alpha)\\}$ be a sequence of averages $*$-weakly converging to \n$\\beta$. For any $\\gamma \\in Q(S_R)$ we have $$\\int \\gamma \\beta=\\lim \\int \nC_{n_i}(\\alpha)\\gamma.$$ By duality the previous limit is \nequal to $$\\lim \\int \\alpha \\frac{1}{n_i}\\sum_{k=0}^{n_i-1} R^{*k}(\\gamma)=\\lim \n\\int \\mu \\frac{1}{n_i}\\sum_{k=0}^{n_i-1} R^{*k}(\\gamma),$$ since $\\alpha$ and $\\mu$ define the same functional on $Q(S_R)$; but $\\mu$ is an \ninvariant differential and, again using duality, the previous limit becomes \n$$\\lim \\int \\mu \\gamma=\\int \\mu \\gamma.$$\n\nHence for any $\\gamma$ in $Q(S_R)$ we have $$\\int \\beta \\gamma= \\lim \n\\int C_{n_i}(\\alpha)\\gamma=\\int \\mu \\gamma.$$\nSince $\\alpha$ is extremal and $v_\\beta = v_\\mu$, we have \n$\\| \n\\beta \\|_\\infty=\\| \\alpha \\|_\\infty=\\|\\mu\\|_T$. Thus \n$\\beta$ is the desired extremal invariant differential. \n\n\\end{proof}\n\n\nTo conclude, let us note that the arguments of the theorems in this paper \nwork for entire and meromorphic functions in the class\nof Eremenko-Lyubich. This is the class of all entire or meromorphic functions \nwith finitely many critical and singular values. It is not completely clear \nwhether these arguments can be carried over to entire or meromorphic functions whose \nasymptotic value set contains a compact set of positive Lebesgue measure. \n\n\n\n \\bibliographystyle{amsplain} \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\n\n\\IEEEPARstart{D}{iabetic} \nretinopathy (DR) has become a major worldwide medical concern for the large population of diabetic patients and has been the leading cause of blindness in the working-age population today\\cite{thomas2019idf,ciulla2003diabetic,raman2016diabetic}. DR lesions often present as microaneurysms (MAs), hemorrhages (HEs), soft exudates (SEs), and hard exudates (EXs), which can be observed in color fundus images and are the basis of diagnosis for ophthalmologists. 
\n\nHowever, until now there has been no valid treatment that cures this disease completely. The most widely recognized strategy is early diagnosis and intervention to control the progression of the disease and to avoid eventual loss of vision\\cite{wong2018guidelines}. Thus, many national health institutions are promoting DR screening, which has been proven effective in reducing the rate of blindness caused by DR\\cite{ciulla2003diabetic,ting2016diabetic}. However, large-scale screening places a heavy burden on primary care systems, since ophthalmologists are in very short supply and are already engaged in post-DR treatment. For this reason, automatic segmentation of DR lesions has become an important means of assisting ophthalmologists in diagnosis.\n\n\n\n\\begin{figure}[!t]\n\\centerline{\\includegraphics[width=\\columnwidth]{fig1.png}}\n\\caption{Illustration of a fundus image with characteristics of DR lesions. \nSE: soft exudate; HE: hemorrhage; EX: hard exudate; MA: microaneurysm.\n(a) Original image with different clusters of lesions denoted by red and blue bounding boxes;\n(b) magnified lesion regions where green, purple, red and blue areas represent SE, HE, EX and MA, respectively;\n(c) location statistics of certain lesions on the IDRiD dataset in pixels. Specifically, from left to right are the distances from SE center to nearest HE center, from EX cluster center to nearest MA center, from SE center to the nearest vascular tree midline, and from MA center to the nearest vascular tree midline.}\n\\label{fig1}\n\\end{figure}\n\n\nRecent research efforts have been directed towards automatic DR segmentation based on deep learning (DL). \nGuo et al. \\cite{guo2019seg} adopted a pretrained Vgg16\\cite{simonyan2014very} as the backbone with a multi-scale feature fusion block for DR lesion segmentation. Specifically, they extracted side features from each convolution layer in Vgg16 and fused them in a weighted way. Further, a multi-channel bin loss was proposed to alleviate the class-imbalance and loss-imbalance problems.\nZhou et al. \\cite{zhou2019collaborative} designed a collaborative learning network to jointly improve the performance of DR grading and DR lesion segmentation with an attention mechanism. The attention mechanism allowed features with image-level annotations to be refined by class-specific information, and generated pixel-level pseudo-masks for the training of the segmentation model.\nHowever, previously published studies paid much attention to the design of networks, and many of them have only achieved the segmentation of one or two lesion types \\cite{tavakoli2020automated,mamilla2017extraction,wu2017automatic,khojasteh2019novel,mo2018exudate}. It should be noted that, in such a complex medical lesion segmentation task, the pathological connections among lesions have not received enough attention.\n\nAfter a comprehensive investigation into the possible causes of DR lesions, we found two interesting presentation phenomena, shown in Fig.\\ref{fig1}: 1) lesions are usually close to specific veins and arteries. For example, SEs are generally distributed at the margins of the main trunk of the upper and lower arteries, and MAs are generally distributed at the margins of the capillaries; 2) most lesions have certain spatial interactions with each other. 
Specifically, SEs commonly appear at the edge of HEs, while EXs are usually arranged in a circular pattern around one or several MAs, which is consistent with the underlying pathology.\n\nMotivated by the above observations, we propose a relation transformer block (RTB), comprised of a cross-attention and a self-attention head, to explore dependencies among lesions and other fundus tissues. Specifically, \nthe cross-attention head is designed to capture implicit relations between lesions and vessels. We design a dual-branch network to employ both lesion and vascular information, and the cross-attention head is integrated between the two branches to make effective use of vascular information in the lesion segmentation branch. \nThe fundus tissues, \\textit{e.g.,} vessels, optic disc, nerves and other lesions, are complex and easily confused with the DR lesions of interest; however, since vascular information describes the distribution patterns of these tissues, the cross-attention head is able to locate more lesions through the provided layout and to eliminate false positives far from the relevant vessels.\nThe self-attention head is employed to investigate the relationships among the lesions themselves. Since some of the lesions look similar, \\textit{e.g.,} HE and MA (both red lesions), SE and EX (both exudate lesions), misclassifications between lesions occur frequently. Through the informative features emphasized by the self-attention head, the distinctions and connections among lesions help to reduce confusion in segmentation.\nNote that before DL was utilized in medical imaging, vessels were easily mistaken for red lesions, and vessel detection was regarded as a routine step in the segmentation pipeline, with detected vessels being removed straight away. However, as no fundus dataset is annotated with both vessels and DR lesions, DL-based lesion segmentation no longer extracts vessel features separately. To the best of our knowledge, this is the first attempt to utilize vascular information for DL-based fundus lesion segmentation.\n\n\n\nIn addition, some special lesion patterns, such as MAs of small size and SEs with blurred borders, are hard to locate accurately due to the lack of fine-grained details in high-level features.\nTo alleviate this, we propose a Global Transformer Block (GTB), inspired by GCNet\\cite{GCNet}, to further extract detail information, which can preserve the detailed lesion information and suppress the less useful channel information in each position. In our network, the GTB is also adopted to generate the dual branches. Specifically, after the backbone, the shared fundus features are obtained as the input of the two GTBs, and the GTBs then generate more specific features of vessels and lesions respectively, which are further investigated at the pathological connection level by the RTB.\n\nWe have evaluated our network on two publicly available datasets, IDRiD and DDR. Experimental results show that our network outperforms state-of-the-art DR lesion segmentation methods and achieves the best performance on EX, MA and SE. Furthermore, we also conduct ablation experiments on the IDRiD dataset and validate the effectiveness of the RTB and GTB in improving DR lesion segmentation outcomes.\n\nIn summary, our contributions are as follows:\n\\begin{itemize}\n \\item We propose a dual-branch architecture to obtain vascular information, which contributes to locating DR lesions. 
For effective use of vascular information in multi-lesion segmentation, we design a relation transformer block (RTB) based on the transformer mechanism. To the best of our knowledge, this is the first work to employ a multi-head transformer structure for lesion segmentation in fundus images.\n \\item We present the global transformer block (GTB) and the relation transformer block (RTB) to detect special medical patterns with small sizes or blurred borders. The design explores the internal relationships between DR lesions, which improves the ability to capture the details of interest.\n \\item Experiments on the IDRiD dataset show that our method achieves highly competitive performance on DR multi-lesion segmentation. Specifically, our method achieves the best performance in exudate segmentation and ranks second in HE lesion segmentation.\n Experiments on the DDR dataset show that our method outperforms other methods on the EX, MA and SE segmentation tasks and ranks second on the HE segmentation task.\n\\end{itemize}\n\n\n\\section{Related Work} \n\\subsection{Pathological Analysis of the DR Lesions}\nDR lesion segmentation is a complex task due to large intra-class variance.\nFurthermore, DR lesions vary with different stages of the disease as well, which also brings challenges to segmentation. \nHowever, instead of discovering lesions directly, we notice that there are pathological associations between these lesions, which are reflected in their spatial distribution. \n\nWe first investigate the possible pathological causes of DR lesions. Briefly, MA is the earliest lesion of DR, observed as a spherical localized swelling produced by vascular atresia; EX looks like a yellowish-white, well-defined waxy patch, generally thought to be lipid produced by the rupture of retinal nerve tissues, which also results from vascular atresia. Additionally, when vessels rupture after vascular atresia, blood leaks out of the vessels, and the lipoproteins in the vessels leak into the retina as well. The leaking blood forms the HE patterns and the leaking lipoproteins form the SE patterns with poorly defined borders. In summary, as shown in Fig.\\ref{fig1}(b), most of the EXs are observed arranged in a circular pattern around one or several MAs (the bottom line) and most of the SEs appear at the edge of HEs (the upper line).\n\nIn addition to the intra-class dependencies among lesions, the inter-class relations between DR lesions and vessels are also informative. As mentioned above, vascular abnormalities are the direct or indirect causes of DR lesions; specifically, we observe that SEs are often distributed near the trunks of the upper and lower arteries, and MAs are generally distributed among the capillaries. Furthermore, intricate fundus tissues often confuse the identification of lesions, but we notice that the fundus follows certain distribution rules, especially in the layout of the various veins and arteries, which provide valuable prior information. \n\nTo confirm the above pathological analysis, we count the distances in pixels between different fundus tissues on the IDRiD dataset. Fig.\\ref{fig1} (c) illustrates the distance between the cluster center of EX and the nearest MA, between SE and the nearest HE, between MA and the closest capillaries, and between SE and the closest upper and lower arteries, respectively. The statistical results verify that there is an exploitable pattern in the distribution of DR lesions. 
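\n\nThese distance statistics can be computed directly from the pixel-level lesion annotations and a vessel mask. The following is a minimal Python sketch of one such measurement (assuming binary NumPy masks; the function name and the centroid-based definition of lesion location are illustrative choices rather than the exact protocol used for Fig.\\ref{fig1}(c)):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import ndimage\n\ndef center_to_nearest_distances(lesion_mask, reference_mask):\n    # distance (in pixels) from each lesion centroid to the\n    # nearest foreground pixel of the reference mask\n    dist_to_ref = ndimage.distance_transform_edt(\n        ~reference_mask.astype(bool))\n    labels, n = ndimage.label(lesion_mask.astype(bool))\n    centers = ndimage.center_of_mass(lesion_mask, labels,\n                                     range(1, n + 1))\n    return [dist_to_ref[int(round(r)), int(round(c))]\n            for r, c in centers]\n\\end{verbatim}\nFor example, applying this function to the MA masks with the vessel mask as reference yields a quantity analogous to the rightmost statistic in Fig.\\ref{fig1}(c), which uses the vascular tree midline rather than the full vessel mask.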
\n\n\\subsection{Deep Neural Networks in DR Lesion Segmentations}\n\nDR lesion segmentation task based on traditional image processing techniques\\cite{walter2007automatic,Automatic2005,alipour2012analysis} is facing two main challenges: the great morphological differences of the same lesions in different disease stages and the confusions of DR lesions and similar structures in the fundus. These two problems have not been effectively solved until \ndeep neural networks (DNNs) exploded in the field of computer vision (CV) \\cite{krizhevsky2012imagenet} and have also been widely applied to DR lesion segmentations\\cite{CABNet,CANet,CENet}.\n\nHowever, DNNs also raise new difficulties. For instance, the detailed information is easily lost by deep networks but most of the DR lesions are very small and even just one or two pixels. Besides, considering the balance of different characteristics of red and exudate lesions, the accuracy of multi-task model is limited. \n\nTo deal with the above issues, researchers have proposed many improvements, which can be summarised as two directions:\n\nFirstly, some researchers focus on the designs of attention models fusing low-level and high-level features together to avoid details lost in the deep network.\nZhang et al. \\cite{zhang2019detection} fused multiple features with distinct target features in each layer based on attention mechanism and achieved preliminary MA detection.\nWang et al. \\cite{wang2017zoom} designed a dual-branch attention network, with one producing a 5-graded score map and the other producing an attention gate map combined to the score map to highlight suspicious regions.\nZhou et al. \\cite{zhou2019collaborative} applied low-level and high-level guidances to different lesion features and obtained the refined multi-lesion attention maps, which were further employed as pseudo-masks to train the segmentation model.\n\nSecondly, the task of segmenting DR lesions is divided into segmenting red lesions and exudate lesions separately, which evades the balanced cost of inter-class disparities and enables fully learning of the same type of lesions.\nMo et al. \\cite{mo2018exudate} designed a fully convolutional residual network incorporating multi-level hierarchical information to segment the exudates without taking the red lesions into account.\nXie et al. \\cite{xie2020sesv} built a general framework to predict the errors generated by existing models and then correct them, which performed well in MA segmentation.\n\nHowever, the first direction pays much attention to the network designs, with seldom considering the pathological connections of DR lesions, while the second direction leads to time consuming and large memory requirements. In order to take advantages of the pathological connections and improve the efficiency of the multi-task model, we propose a RTB consisting of self-attention and cross-attention head to segment DR lesions simultaneously.\n\n\\subsection{Transformer in Medical Images}\n\nTransformer network has been one of the fundamental architecture for natural language processing (NLP) since 2017 due to the efficient and effective self-attention mechanism\\cite{vaswani2017attention}. It improves the performance on many NLP tasks, such as text classification, general language understanding and question answering. Compared with recurrent networks, transformer network achieves parallel computation and reduces the computational complexity. 
\nIn a basic transformer attention block, Query ($Q$), Key ($K$), Value ($V$) are the three typical inputs to a attention operation. At first, $Q$ and $K$ are computed in the form of pairwise function to obtain the corresponding attention of all points on $K$ for each point on $Q$. The pairwise function can optionally be Gaussian, Embedding Guassian, Dot-Product, Concatenation and \\textit{etc.}. Then, the product is multiplied by $V$ and passes through a column-wise softmax operator to ensure every column sum to 1. Every position on the output contains the recoded global information by attention mechanism. In self-attention operator, $Q = K = V$, so that the output has the same shape with input.\n\nInspired by the success in the domain of NLP, a standard transformer was applying to CV in 2020\\cite{dosovitskiy2020image} with the fewest modifications, called as Vision Transformer (ViT). The input to a ViT is a sequence of cropping images which are linearly encoded by aliquoting the original image. The image patches are treated the same way as tokens (words) in an NLP application.\n\n\nRecently, the transformer architecture has also been applied to the field of medical image processing. \n Liu et al. \\cite{GPT} proposed a global pixel transformer (GPT) to predict several target fluorescent labels in microscopy images. The GPT is similar to a three-headed transformer with different sizes of query inputs, which allows it to adequately capture features at different scales.\nGuo et al. \\cite{guo2021transformer} applied the ViT to anisotropic 3D medical image segmentation, with the self-attention model arranged at the bottom of the Unet architecture.\nSong et al.\\cite{DRT} built a Deep Relation Transformer (DRT) to combine OCT and VF information for glaucoma diagnosis. They modified the standard transformer to an interactive transformer that utilizes a relationship map of VF features interacting with OCT features. \n\n\\section{Methodology}\nIn this section, we first give a brief overview of our proposed network, and then elaborate on the key network components, \\textit{i.e.,} global transformer block (GTB) and relation transformer block (RTB). Finally, designed loss function is further provided. 
\n\n\n\\subsection{Overview}\nGiven an input fundus image, the proposed network is designed to output one vascular mask and four lesion masks in parallel.\nFig.\\ref{fig2} depicts its overall architecture, which is comprised of four key components: backbone, global transformer block (GTB), relation transformer block (RTB), and segmentation head.\nA dual-branch architecture is employed upon the backbone to explore vascular and pathological features separately, where the transformers based on GTB and RTB\nare incorporated to reason about interactions among both features.\n\n\n\nTo be specific, the fundus image first passes through a backbone to obtain an abstracted feature map $\\mathbf{F}$, with a spatial resolution of $W\\times H$ and $C$ number of channels.\nThen, two parallel branches comprised of global transformer block (GTB) are incorporated to exploit long-range dependencies among pixels in $\\mathbf{F}$, resulting in specific vessel features $\\mathbf{F}_v$ and primary lesion features $\\mathbf{F}_l$ fueled with global contextual information, respectively.\nUpon the branch providing lesion features, we further integrate a relation transformer block (RTB) to model spatial relations between vessels and lesions due to their inherent pathological connections using a self-attention and a cross-attention head:\nthe self-attention head inputs only the lesion features $\\mathbf{F}_l$, and exploits long-range contextual information to generate self-attentive features $\\mathbf{F}_{s}$ through a self-attention mechanism; the cross-attention head inputs both the lesion and vessel features $\\mathbf{F}_{l}$, $\\mathbf{F}_v$,\nand incorporates beneficial fine-grained vessel structural information into $\\mathbf{F}_v$, producing cross-attentive features $\\mathbf{F}_{c}$. \nThe resulting $\\mathbf{F}_{s}$ and $\\mathbf{F}_{c}$ are concatenated together to form the output of the RTB. \nFinally, two sibling heads, each of which contains a Norm layer and a $1\\times 1$ convolution, are used to predict vascular and pathology masks based on the vessel features and concatenated lesion features, respectively.\n\nGTB contains one head while RTB contains two heads. Although the basic heads of GTB and RTB are based on the transformer structure that generates query, key and value for relation reasoning,\nthey are structurally different in our work. \nThe query of head in GTB is similar to a channel-wise weights and the one in RTB has the same size in spatial dimension with the input.\nIn a training process, GTB is employed to generate specific multi-lesion and vessel features independently which maintain more details of interest, and RTB further exploits the inherent pathogenic relationships between multi-lesion and vessels, which eliminate noise and imply the location information. \n\n\\begin{figure*}[htbp]\n\\centerline{\\includegraphics[width=\\textwidth]{fig2.png}}\n\\caption{Pipeline of the proposed method. The input image passes through a backbone to obtain the shared feature $\\mathbf{F}$. Then the shared feature $\\mathbf{F}$ takes two branches to achieve vessels and multi-lesion segmentation respectively. 
Two Global Transformer Blocks (GTB) are applied to both branches to generate specific features, and a Relation Transformer Block (RTB) is incorporated after GTB to explore the inherent pathological connections among multi-lesion and between multi-lesion and vessels.}\n\\label{fig2}\n\\end{figure*}\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=\\columnwidth]{fig3.png}}\n\\caption{The overall structure of Global Transformer Block (GTB).}\n\\label{fig3}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=\\columnwidth]{fig4.png}}\n\\caption{The details of Relation Transformer Block (RTB).}\n\\label{fig4}\n\\end{figure}\n\n\n\\subsection{Global Transformer Block}\nThe Global Transformer Block (GTB) contains two parallel branches of the same architecture to extract features for lesions and vessels separately. Such a dual-branch design owns to the fact that lesions and vessels generally have dramatically different visual patterns. To be concrete, lesions are discrete patterns, with nearly random spatial distribution. While vessels are topological connected structures, and the layout of vascular trunks, containing central retinal artery, ciliary artery and etc, generally follows some common rules. \nIt is hence necessary to use specialized branches to learn specific characteristics of different objects of interest.\n\n \nFig.\\ref{fig3} presents the detailed structure of each GTB branch.\nIt takes an input $\\mathbf{F}\\in \\mathbb{R}^{\\: C\\times W\\times H}$ generated from the backbone, and outputs attentively-refined feature maps $\\mathbf{F}_{i}\\in \\mathbb{R}^{\\: C\\times W\\times H},i \\in\\left\\{l,v\\right\\}$ of lesions and vessels respectively.\nSpecifically, GTB follows the typical framework of transformer networks. \nThree generators, denoted as $\\mathcal{Q}$, $\\mathcal{K}$ and $\\mathcal{V}$ are first employed to transform the input $\\mathbf{F}$ into query, key and value, respectively. In GTB, the generator $\\mathcal{Q}$ is implemented with a $3 \\times 3$ convolution followed by a global average pooling, and outputs a query vector $\\mathcal{Q}(\\mathbf{F}) \\in \\mathbb{R}^{\\: C^{'}\\times 1}$ with the channel number designed as $C^{\\prime}=C\/8$; the generator $\\mathcal{K}$ and $\\mathcal{V}$ have the same architecture as $\\mathcal{Q}$ expect for replacing the global averaging pooling with a reshape operation, leading to the key and value $\\mathcal{K}(\\mathbf{F}),\\mathcal{V}(\\mathbf{F})\\in \\mathbb{R}^{\\:C^{'}\\times HW}$.\n\nWe define the pairwise function of query and key as a matrix multiplication:\n\\begin{equation}\n \\mathcal{F}(\\mathbf{F})=\\mathcal{K}(\\mathbf{F})^T\\mathcal{Q}(\\mathbf{F}) ,\n\\end{equation}\nwhere the superscript $T$ denotes a transpose operator for matrix. \nNote that the query of GTB acts as a channel-wise query instead of the position query as NLNet\\cite{wang2018NLnet}. To be specific, the query vector is considered as a feature selector for channels of key matrix. \nSubsequently, the product $\\mathcal{F}(\\mathbf{F})\\in\\mathbb{R}^{\\:HW\\times 1}$ also acts as a feature selector for spatial positions of value matrix. 
\nIn summary, the GTB can be roughly described as a attention mechanism which fuses channel-wise first and then spatial-wise weighted features together with input information.\n\nNext, we consider the global transform operation defined as:\n\\begin{equation}\n \\mathcal{G}(\\mathbf{F}) = \\mathcal{V}(\\mathbf{F})softmax(\\mathcal{F}(\\mathbf{F}))\\in \\mathbb{R}^{\\:C^{'}\\times 1},\n\\end{equation}\nwhere $softmax$ is a softmax function to normalize the $\\mathcal{F}(\\mathbf{F})$.\n\nThen, We take the obtained attentive features $\\mathcal{G}(\\mathbf{F})$ with a linear embedding as a residual term to the input $\\mathbf{F}$, and get the final output through a residual connection:\n\\begin{equation}\n \\mathbf{F}_{i} = W\\mathcal{G}(\\mathbf{F})+\\mathbf{F},\\ i\\in\\left \\{ l,v\\right \\},\n\\end{equation}\nwhere the $+$ operation denotes broadcasting element-wise sum operation; the $W$ is a linear embedding, implemented as $1 \\times 1$ convolution to convert the channel number of intermediate feature map from $C^{'}$ back to $C$. As a result, the output features are received with the same format as the input, but have been enriched with specialized vessel and lesion features, respectively.\n\n\n\n\n\n\nThe GTB structure is inspired by the GCNet\\cite{GCNet}. They both follow the idea of transformer mechanism but generate the per-channel weights. The weight vectors in GCNet and GTB both obtained by a matrix multiplication, but different from GCNet, GTB attains the two multipliers with three generator to further highlight the useful channels in each position, realizing both channel-wise and spatial-wise attention. Due to the fundus lesion features, especially the small discrete ones, are easily confused with artifacts or idiosyncratic tissues, \nuseful information tends to exist in only a few pixels of certain channels. It is an improved method aimed at the feasibility of small discrete pattern segmentation in fundus images.\n\n\\subsection{Relation Transformer Block}\nRelation Transformer Block (RTB) consists of a self-attention and a cross-attention head, used to capture intra-class dependencies among lesions and inter-class relations between lesions and vessels, respectively, as shown in Fig.\\ref{fig4}. In each head, three trainable linear embeddings, implemented with a $3 \\times 3$ convolution followed by a reshape operation, are employed as the query, key and value generator $\\mathcal{G}_i, \\mathcal{K}_i, \\mathcal{V}_i,i \\in\\left\\{s,c\\right\\}$, respectively.\nThe pairwise computations of query and key in self-attention head and cross-attention head are described as:\n\\begin{equation}\n \\begin{split}\n \\mathcal{F}_{s}(\\mathbf{F}_l)&=\\mathcal{K}_{s}(\\mathbf{F}_l)^T\\mathcal{Q}_{s}(\\mathbf{F}_l) \\\\\n \\mathcal{F}_{c}(\\mathbf{F}_l,\\mathbf{F}_v)&=\\mathcal{K}_{c}(\\mathbf{F}_v)^T\\mathcal{Q}_{c}(\\mathbf{F}_l),\n \\end{split}\n\\end{equation}\nwhere the subscripts $s$ and $c$ denote the self-attention and the cross-attention head, respectively.\nIt is important to emphasize that different from the self-attention head that derives the query, key all from input lesion features $\\mathbf{F}_l$, the cross-attention head generates the key from the vessel features $\\mathbf{F}_v$ instead to integrate vascular information. 
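\n\nTo make the construction concrete, a minimal PyTorch-style sketch of one RTB head is given below. It also performs the attentive-feature and residual steps spelled out in the equations that follow this sketch; the intermediate channel width and the module interface are illustrative assumptions rather than the exact implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass RTBHead(nn.Module):\n    # one RTB head: self-attention if the key/value source equals\n    # the lesion features, cross-attention if it is the vessel features\n    def __init__(self, in_ch, mid_ch):\n        super().__init__()\n        self.q = nn.Conv2d(in_ch, mid_ch, 3, padding=1)  # query\n        self.k = nn.Conv2d(in_ch, mid_ch, 3, padding=1)  # key\n        self.v = nn.Conv2d(in_ch, mid_ch, 3, padding=1)  # value\n        self.w = nn.Conv2d(mid_ch, in_ch, 1)             # embedding W\n\n    def forward(self, f_lesion, f_key_value):\n        b, c, h, w = f_lesion.shape\n        q = self.q(f_lesion).flatten(2)       # B x C' x HW\n        k = self.k(f_key_value).flatten(2)    # B x C' x HW\n        v = self.v(f_key_value).flatten(2)    # B x C' x HW\n        attn = torch.softmax(k.transpose(1, 2).bmm(q), dim=1)\n        out = v.bmm(attn).reshape(b, -1, h, w)\n        return self.w(out) + f_lesion         # residual connection\n\\end{verbatim}\nWith this interface, the self-attention head corresponds to calling the module with the lesion features as both arguments, and the cross-attention head to passing the vessel features as the second argument.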
\n\n\n\nNext, the individual attentive features of the two heads are computed respectively as:\n\\begin{equation}\n\\begin{split}\n \\mathcal{G}_{s}(\\mathbf{F}_{l})&=\\mathcal{V}_{s}(\\mathbf{F}_{l})softmax(\\mathcal{F}_{s}(\\mathbf{F}_{l}))\\\\\n \\mathcal{G}_{c}(\\mathbf{F}_{l},\\mathbf{F}_v)&=\\mathcal{V}_{c}(\\mathbf{F}_{v})softmax(\\mathcal{F}_{c}(\\mathbf{F}_{l},\\mathbf{F}_v)).\n\\end{split}\n\\end{equation}\n\nWe adopt residual learning to each head as well and get the outputs:\n\n\n\\begin{equation}\n\\begin{split}\n \\mathbf{F}_{i} = W_{i}\\mathcal{G}_{i}(\\mathbf{F}_{l},\\mathbf{F}_v)\\oplus\\mathbf{F}_l\\\\i\\in \\left\\{s, c \\right\\},\n\\end{split}\n\\end{equation}\nwhere the $W_i$ is a linear embedding implemented as $1 \\times 1$ convolution, and the $\\oplus$ operation is performed by a residual connection of element-wise addition. \n\nAs such, the self-attention head computes the response in a position as a weighted sum of the features in all positions, and thus well captures long-range dependencies. \nGiven the fact that DR lesions are usually dispersed over a broad range, the self-attention can exchange message among multiple lesions, regardless of their positional distance, and thus allows the modeling of intra-class pairwise relations of lesions. The head is supposed to distinguish the mixtures of more than two lesions and further refine the edges of large patterns in lesion segmentation.\n\n\nThe cross-attention head queries global vascular structures from the vessel features, thus incorporating interactions between lesions and vessels.\nConsidering that lesions and vessels have strong inherent pathogenic connections, the cross attention help to better locate MA and SE, and meanwhile eliminate false positives of EX caused by vessel reflection and MA caused by capillary confusion.\n\nWe concatenate the resulting features $\\mathbf{F}_{s}$ from the self-attention head and that $\\mathbf{F}_{c}$ from the cross-attention head, leading to the final RTB output:\n\n\\begin{equation}\n \\mathbf{F}_{out} = [\\mathbf{F}_s;\\mathbf{F}_c],\n\\end{equation}\nwhere the $[\\cdot\\ ;\\ \\cdot]$ denotes the concatenation at channel dimension.\n\n\n\n\\subsection{Loss Function}\nWe employ two loss functions, \\textit{i.e.}, $\\mathcal{L}_{lesion}$ and $\\mathcal{L}_{vessel}$ for the multi-lesion and vessel segmentation branches respectively, and the total loss of our network is defined as:\n\\begin{equation}\n \\mathcal{L} =\\mathcal{L}_{lesion}+\\lambda \\mathcal{L}_{vessel},\n\\end{equation}\nwhere the $\\mathcal{L}_{lesion}$ denotes the 5-class weighted cross-entropy loss for multi-lesion segmentation and the $\\mathcal{L}_{vessel}$ is a binary weighted cross-entropy loss to learn vascular features; the $\\lambda$ is set as the weight in the loss function. When $\\lambda = 0.0$, the network is optimized by the multi-lesion features only, and as $\\lambda$ grows, vascular information plays an increasing role in optimization.\n\n\n\\section{Experiments And Results}\n\\subsection{Datasets}\n\\textbf{IDRiD Dataset} is available for the segmentation and grading of retinal image challenge 2018\\cite{porwal2018indian, porwal2020idrid}. The segmentation part of the dataset contains 81 $4288 \\times 2848$ sized fundus images, accompanied by four pixel-level annotations, \\textit{i.e.,} EX, HE, MA and SE if the image has this type of lesion. In total, there are 81 EX annotations, 81 MA annotations, 80 HE annotations, and 40 SE annotations. 
The partition of training set and testing set is provided on IDRiD already, with 54 images for training and the rest 27 images for testing. \n\n\\textbf{DDR Dataset} is provided by Ocular Disease Intelligent Recognition (ODIR-2019) for lesion segmentation and lesion detection\\cite{LI2019}. This dataset consists 13,673 fundus images from 147 hospitals, covering 23 provinces in China. For segmentation task, 757 fundus images are provided with pixel-level annotation for EX, HE, MA and SE if the image has this type of lesion. In total, there are 486 EX annotations, 570 MA annotations, 601 HE annotations, and 239 SE annotations. The partition of training set, validation set and testing set is provided on DDR already, with 383 images for training, 149 images for validation and the rest 225 images for testing.\n\n\\begin{table*}\n \\centering\n \\caption{Performance Comparison with the state-of-the-art works reported on the IDRiD dataset, where the \\textbf{separate} and \\textbf{same} indicates the way in which the method segments the lesions separately by different models or at the same time by one model}\n \\label{tab1}\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multirowcell{2}{Method} & \\multirowcell{2}{sepa-\\\\rate} & \\multirowcell{2}{same} & \\multicolumn{2}{c|}{Hard Exudates} &\\multicolumn{2}{c|}{Haemorrhages} \n & \\multicolumn{2}{c|}{Microaneurysms} \n & \\multicolumn{2}{c|}{Soft Exudates} \\\\ \\cline{4-11}\n & & & AUC\\_PR & AUC\\_ROC & AUC\\_PR & AUC\\_ROC & AUC\\_PR & AUC\\_ROC & AUC\\_PR & AUC\\_ROC \\\\ \\hline\n VRT(1st) & \\checkmark & & 0.7127 & - & 0.6804 & - & 0.4951 & - & 0.6995 & - \\\\ \n PATech(2nd) & \\checkmark & & 0.8850 & - & 0.6490 & - & 0.4740 & - & - & - \\\\ \n iFLYTEK-MIG(3rd) & \\checkmark & & 0.8741 & - & 0.5588 & - & 0.5017 & - & 0.6588 & - \\\\ \n \n DRUNet\\cite{kou2019microaneurysms} & \\checkmark & & - & - & - & - & - & 0.9820 & - & - \\\\ \n SESV\\cite{xie2020sesv} & \\checkmark & & - & - & - & - & \\textbf{0.5099} & - & - & - \\\\\n L-Seg\\cite{guo2019seg} & & \\checkmark & 0.7945 & - & 0.6374 & - & 0.4627 & - & 0.7113 & - \\\\\n SSCL\\cite{zhou2019collaborative} & & \\checkmark & 0.8872 & 0.9935 & \\textbf{0.6936} & \\textbf{0.9779} & 0.4960 & 0.9828 & 0.7407 & 0.9936 \\\\ \n RTN(Ours) & & \\checkmark & \\textbf{0.9024} & \\textbf{0.9980} & 0.6880 & 0.9731 & 0.4897 & \\textbf{0.9952} & \\textbf{0.7502} & \\textbf{0.9938} \\\\ \\hline\n \\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n\\caption{Performance Comparison with state-of-the-art segmentation methods on the DDR dataset, where * denotes the results are reproduced by ourselves}\n\\label{tab2}\n \\centering\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multirowcell{2}{Method} & \\multicolumn{2}{c|}{Hard Exudates} &\\multicolumn{2}{c|}{Haemorrhages} \n & \\multicolumn{2}{c|}{Microaneurysms} \n & \\multicolumn{2}{c|}{Soft Exudates} \\\\ \\cline{2-9}\n & AUC\\_PR & AUC\\_ROC & AUC\\_PR & AUC\\_ROC & AUC\\_PR & AUC\\_ROC & AUC\\_PR & AUC\\_ROC \\\\ \\hline\n HED\\cite{xie2015holistically} & 0.4252 & 0.9612 & 0.2014 & 0.8878 & 0.0652 &0.9299 & 0.1301 & 0.8215 \\\\ \n DeepLab v3+\\cite{chen2018deeplabv3} & 0.5405 & 0.9641 & 0.3789 & 0.9308 & 0.0316 & 0.9245 & 0.2185 & 0.8642 \\\\\n UNet\\cite{guan2019fully,Yakubovskiy2019} & 0.5505 & 0.9741 & \\textbf{0.3899} & \\textbf{0.9387} & 0.0334 & 0.9366 & 0.2455 & 0.8778 \\\\\n L-seg*\\cite{guo2019seg} & 0.5645 & 0.9726 & 0.3588 & 0.9298 & 0.1174 & 0.9423 & 0.2654 & 0.8795\\\\ \n RTN(Ours) & \\textbf{0.5671} & \\textbf{0.9751} & 
0.3656 & 0.9321 & \\textbf{0.1176} & \\textbf{0.9452} & \\textbf{0.2943} & \\textbf{0.8845} \\\\ \\hline\n \\end{tabular}\n\\end{table*}\n\n\\subsection{Implementation Details}\n\n\\subsubsection{Data Preparation}\nTo prepare more trainable data, several operations are performed on the original images. First, the images are fed into a segmentation model pretrained on the DRIVE\\cite{staal2004ridgeb91} and STARE\\cite{hoover2000locatingb92} datasets with vessel annotations, and pseudo vascular masks are obtained. Next, \nlimited by GPU memory, the large images are randomly resized and cropped into patches of size $512 \\times 512$; additionally, we apply random horizontal flips, vertical flips, and random rotations as data augmentation to reduce overfitting. Then, in order to enhance image contrast while preserving local details, we apply Contrast Limited Adaptive Histogram Equalization (CLAHE) to all input images with ClipLimit=2 and GridSize=8. CLAHE proves effective because it makes anomalies easier to distinguish from the background in diabetic fundus images; the corresponding quantitative results are presented in Table \\ref{tab5}.\n\n\\subsubsection{Model Settings}\nThe typical UNet architecture\\cite{Yakubovskiy2019} is \na popular method for medical image segmentation, consisting of an encoder and a decoder with skip connections realized by channel-wise concatenation. In this paper, we apply DenseNet-161\\cite{huang2017densely} pretrained on the ImageNet dataset as the backbone of the UNet encoder\\cite{guan2019fully} to achieve better performance. The channel number $C$ of the UNet output is set to 32.\n\n\\subsubsection{Experiment settings}\nOur framework is implemented with a PyTorch backend and runs on an NVIDIA GeForce RTX 3090 GPU with 24GB of memory. During training, the batch size is set to 16. The initial learning rate is set to 0.001 and decays in a step-wise manner to 0.1 times the previous value every 120 epochs. All models are trained for 250 epochs using the SGD optimizer with momentum 0.9 and weight decay 0.0005.\n\nThe loss function settings are as follows: a) the vessel loss weight $\\lambda$ is set to 0.1; b) the class weights of $\\mathcal{L}_{lesion}$ are set to 0.001, 0.1, 0.1, 1.0, 0.1 for background, EX, HE, MA and SE respectively; c) the coefficients of background and vessels in $\\mathcal{L}_{vessel}$ are set to 0.01 and 1.0.\n\n\\subsection{Evaluation Metrics}\nTo evaluate the performance of the proposed method, we employ the area under the curve (AUC) of both the precision-recall (PR) curve and the receiver operating characteristic (ROC) curve \\cite{scikit-learn}, which are also the accepted metrics for fundus image segmentation in previous competitions and studies. The former focuses on how accurately positive predictions match the ground truth, while the latter reflects the trade-off between the true and false positive rates. In medical images more emphasis is placed on recall, which measures how many true lesion pixels are successfully detected; \\textit{i.e.,} the AUC\\_PR, which is computed from the recall, has more practical value, while the AUC\\_ROC also characterizes the overall effectiveness of the model. A minimal sketch of how these two scores are obtained per lesion class is given below.\n\n\\subsection{Comparisons with Other State-of-the-art Methods}\nWe compare our method with previous works reported on the IDRiD dataset (Table \\ref{tab1}). Our method ranks first in AUC\\_ROC of EX, MA and SE and in AUC\\_PR of EX and SE, and ranks second in AUC\\_ROC and AUC\\_PR of HE. 
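\n\nFor reference, both scores can be computed per lesion class with the scikit-learn routines cited above. The following minimal sketch assumes that the predicted probability map and the binary ground-truth mask of one lesion class are given as arrays; it illustrates the metric computation only and is not the exact evaluation script.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.metrics import auc, precision_recall_curve, roc_auc_score\n\ndef lesion_auc_scores(prob_map, gt_mask):\n    # prob_map: predicted probabilities for one lesion class (any shape)\n    # gt_mask:  binary ground-truth mask of the same shape\n    y_score = np.asarray(prob_map).ravel()\n    y_true = np.asarray(gt_mask).ravel().astype(int)\n    precision, recall, _ = precision_recall_curve(y_true, y_score)\n    auc_pr = auc(recall, precision)           # area under the PR curve\n    auc_roc = roc_auc_score(y_true, y_score)  # area under the ROC curve\n    return auc_pr, auc_roc\n\\end{verbatim}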
\nNote that the first five methods of Table \\ref{tab1} all employ individual models segmenting the four lesions separately: according to the conference reports of the top three IDRiD competition teams, four separate models were developed, one for each lesion; DRUNet\\cite{kou2019microaneurysms} and SESV\\cite{xie2020sesv} propose a specific network to segment MA only. Although SESV achieves the best AUC\\_PR of MA, such individual designs require tuning a large number of hyper-parameters during training and are time-consuming at inference.\nThe remaining methods of Table \\ref{tab1}, L-Seg\\cite{guo2019seg}, SSCL\\cite{zhou2019collaborative} and our network, all use one model to segment the four lesions at the same time. SSCL performs better than ours in AUC\\_PR of MA segmentation but worse in AUC\\_ROC, which might result from the fact that although RTB reduces false positives of MA far away from the vessels, it also introduces false negatives of MA.\n\nWe also verify the effectiveness of the proposed method on the DDR dataset. Compared with the IDRiD dataset, the DDR dataset is more recent and few results have been reported on it, so we apply several state-of-the-art segmentation methods to the DDR dataset for comparison. As shown in Table \\ref{tab2}, our method achieves the best performance in EX, MA and SE and ranks second in HE. Our method does not perform best in HE segmentation on either the IDRiD or the DDR dataset, which may be explained by the fact that large HE patterns are formed by blood irregularly haloing on the retina, so that the specific bleeding points are blurred. In this case RTB, an approach that is more concerned with theoretical correlations, does not contribute useful information to the segmentation of large HEs. As mentioned in the dataset section, the DDR dataset contains many low-quality images with uneven illumination, underexposure, overexposure, image blurring, retinal artifacts and other disturbing tissues, and is therefore more challenging than the IDRiD dataset, which explains why the performance on the former lags behind that on the latter. \n\n\n\n\n\\begin{table}[]\n \\centering\n \\caption{Performance Comparison of different backbones within our network}\n \\scalebox{0.9}{\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n \\multirowcell{2}{Framework} &\n \\multirowcell{2}{Encoder}&\n \\multicolumn{4}{c|}{AUC\\_PR}\\\\\\cline{3-6}\n & & EX & HE & MA & SE \\\\\n \\hline\n \\multirowcell{5}{UNet\\cite{guan2019fully}} & ResNet-34\\cite{he2016deep} & 0.8778 & 0.6764 & 0.4659 & 0.7407 \\\\\n & ResNet-50\\cite{he2016deep} & 0.8858 & 0.6874 & 0.4692 & 0.7475 \\\\\n & Xception\\cite{chollet2017xception} & 0.8924 & 0.6806 & 0.4851 & 0.7424 \\\\\n & Vgg19-bn\\cite{simonyan2014very} & 0.8821 & 0.6832 & 0.4789 & 0.7423 \\\\\n & DenseNet-161\\cite{huang2017densely} & \\textbf{0.9024} & \\textbf{0.6880} & \\textbf{0.4897} & \\textbf{0.7502} \\\\\n \\hline\n \\end{tabular}}\n \\label{tab4}\n\\end{table}\n\n\\subsection{Ablation Studies on IDRiD Dataset}\nWe conduct ablation studies to better understand the impact of each component of our network. First, several encoding architectures are compared in order to select a proper backbone. Then, considering the topology of vessels, regularization terms for the loss function are discussed. 
Next, we analyze the effect of GTB relative to the baseline, which is defined as the complete workflow without GTB and RTB. In order to identify in advance whether vascular information contributes to lesion segmentation, we directly apply a concatenation operation to the two outputs of the GTBs. Finally, the roles of both the self-attention head and the cross-attention head in RTB are discussed thoroughly.\nSince AUC\\_PR is considered the most important clinical evaluation metric for lesion segmentation in fundus images, we restrict the metrics to AUC\\_PR for the ablative comparisons. The loss and PR curves obtained from the different experiments are set out in Fig.\\ref{fig5} and detailed AUC\\_PR values can be compared in Table \\ref{tab3}.\n\n\n\\subsubsection{Analysis on the Backbone}\nTo compare the effectiveness of backbone models, we perform experiments to select a proper encoding architecture. \nResNet-34\\cite{he2016deep}, ResNet-50\\cite{he2016deep}, Xception\\cite{chollet2017xception}, Vgg19-bn\\cite{simonyan2014very} and DenseNet-161\\cite{huang2017densely} integrated with UNet are implemented with the segmentation heads of our network. As can be seen from Table \\ref{tab4}, DenseNet-161 integrated with UNet achieves the best performance for all lesions and is used as the backbone in the following.\n\n\\begin{figure*}[htbp]\n\\centerline{\\includegraphics[width=\\textwidth]{fig5.png}}\n\\caption{Loss and PR curves for segmentation over four DR lesions. Ablation studies are compared to explore the effectiveness of the baseline itself and of the baseline successively extended by GTB, self-attention head (sah), cross-attention head (cah) and RTB.}\n\\label{fig5}\n\\end{figure*}\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=\\columnwidth]{fig6.png}}\n\\caption{Visualization of (a) original images with annotations; (b) spatial attention features with the Convolutional Block Attention Module (CBAM) and (c) query attention features $\\mathcal{F}$ with the Global Transformer Block (GTB) for three images from the IDRiD dataset. The GTB picks up more small and discrete lesions and refines the patterns of interest.}\n\\label{fig6}\n\\end{figure}\n\n\n\n\\subsubsection{Analysis on the Regularization Term}\nConsidering the topology of vessels, we extend the loss function with regularization terms. In vessel segmentation results there are two common failure cases, neglected vessel ends and truncated trunks. \nBased on the above cases, we propose two regularization terms $R_{thin}$ and $R_{cl}$. The former, inspired by \\cite{yang2021hybrid}, applies a focal loss specifically to peripheral vessels, and the latter utilizes the center-line idea of \\cite{shit2021cldice} to ensure connectivity. Table \\ref{tab_loss} indicates that $R_{thin}$ improves the performance of MA segmentation and $R_{cl}$ achieves the best AUC\\_PR of HE and MA. However, the improvements are small, probably because the vessel ground truths are pseudo-masks generated by semi-supervision rather than manually annotated masks. 
The upper bound on the performance of the vessel segmentation restricts the improvement of the regularization terms on the final results.\n\n\\begin{table}[]\n \\centering\n \\caption{Performance Comparison of Different Regularization Terms on IDRiD Dataset}\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n \\multirowcell{2}{$R_{thin}$} &\\multirowcell{2}{$R_{cl}$} & \\multicolumn{4}{c|}{AUC\\_PR}\\\\\\cline{3-6}\n & & EX & HE & MA & SE \\\\\n \\hline\n&&\\textbf{0.9024} & 0.6880 & 0.4897 & \\textbf{0.7502}\\\\\n\\checkmark&&0.8975&0.6845&0.4899&0.7432\\\\\n&\\checkmark&0.8912&\\textbf{0.6891}&\\textbf{0.4901}&0.7435\\\\\n\\checkmark&\\checkmark&0.8923&0.6808&0.4895&0.7426\\\\\n \\hline\n \\end{tabular}\n \\label{tab_loss}\n\\end{table}\n\n\n\\begin{figure*}[htbp]\n\\centerline{\\includegraphics[width=\\textwidth]{fig7.png}}\n\\caption{Visualization of the query position (red points) of different lesions and their two query-specific attention maps with Relation Transformer Block (RTB). The red borders denote the self-attention maps, and the blue denote the cross-attention maps. The attention of different query positions in EX, HE, MA and SE varies.}\n\\label{fig7}\n\\end{figure*}\n\n\\begin{table*}\n \\centering\n \\caption{Performance Comparison of different attention blocks on the IDRiD dataset}\n \\label{tab5}\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n lesion & \\multicolumn{2}{|c|}{Hard Exudates} &\\multicolumn{2}{|c|}{Haemorrhages} \n & \\multicolumn{2}{|c|}{Microaneurysms} \n & \\multicolumn{2}{|c|}{Soft Exudates} \\\\ \\hline\n method & AUC\\_PR & AUC\\_ROC & AUC\\_PR & AUC\\_ROC & AUC\\_PR & AUC\\_ROC & AUC\\_PR & AUC\\_ROC \\\\ \\hline\n baseline(without CLAHE) & 0.8025 & 0.9912 & 0.6031 & 0.9498 & 0.3912 & 0.9803 & 0.5478 & 0.9120\\\\ \\hline\n baseline & 0.8593 & 0.9919 & 0.6284 & 0.9553 & \\textbf{0.4279} & 0.9830 & 0.5766 & 0.9171 \\\\ \n baseline+SENet\\cite{hu2018squeeze} & 0.8653 & 0.9931 & 0.6408 & 0.9497 & 0.3861 & 0.9869 & 0.5683 & 0.9299 \\\\ \n baseline+CBAM\\cite{woo2018cbam} & 0.8606 & 0.9918 & 0.6470 & 0.9480 & 0.3979 & 0.9871 & 0.5525 & 0.9373 \\\\ \n baseline+GC\\cite{GCNet} & 0.8633 & 0.9929 & 0.6406 & 0.9488 & 0.4031 & 0.9809 & 0.5540 & 0.9251 \\\\ \n baseline+GTB(Ours) & \\textbf{0.8659} & \\textbf{0.9933} & \\textbf{0.6570} & \\textbf{0.9534} & 0.4071 & \\textbf{0.9879} & \\textbf{0.5968} & \\textbf{0.9458} \\\\ \\hline\n \\end{tabular}\n\\end{table*}\n\n\n\\begin{table*}[]\n \\centering\n \\caption{Performance Comparison of different components of our network on the IDRiD dataset, where \\textbf{cat} denotes a simple concatenate of multi-lesion and vessel features in channel-wise, \\textbf{cah} and \\textbf{sah} is abbreviations for cross-attention head and self-attention head respectively}\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multirowcell{2}{Framework} & \\multirowcell{2}{GTB} &\\multirowcell{2}{cat} & \\multicolumn{2}{c|}{RTB} & \\multicolumn{4}{c|}{AUC\\_PR}\\\\\\cline{4-9}\n & & & cah & sah & EX & HE & MA & SE \\\\\n \\hline\n \\multirowcell{6}{baseline} & & & & & 0.8593 & 0.6284 & 0.4071 & 0.5766 \\\\\n & \\checkmark & & & & 0.8659 & 0.6570 & 0.4279 & 0.5968 \\\\\n & \\checkmark & \\checkmark & & & 0.8672 & 0.6756 & 0.4294 & 0.6663 \\\\\n & \\checkmark & & \\checkmark & & 0.8682 & 0.6818 & 0.4847 & 0.7463 \\\\\n & \\checkmark & & & \\checkmark & 0.8862 & 0.6846 & 0.4663 & 0.7422 \\\\\n & \\checkmark & & \\checkmark & \\checkmark & \\textbf{0.9024} & \\textbf{0.6880} \n & \\textbf{0.4897} & \\textbf{0.7502} \\\\\n \\hline\n 
\\end{tabular}\n \\label{tab3}\n\\end{table*}\n\n\n\n\\subsubsection{Analysis on the Effect of GTB}\n\nTable \\ref{tab3} shows that GTB improves the performance over the baseline, which indicates that the weights assigned by GTB to the channels in each position benefit the segmentation of all four lesions. To highlight the performance of GTB further, we compare it with other popular attention blocks under the same model parameters. \nTable \\ref{tab5} shows that GTB performs better than the other blocks on the IDRiD dataset. In contrast to the pooling operations used by the other attention blocks, the channel-wise weights of GTB are specific to each pixel, so that different channels are highlighted at different pixels.\n\nFig.\\ref{fig6} shows color snapshots of the attentive features after the softmax function. Comparing the spatial attention features of CBAM\\cite{woo2018cbam} with the attention features $\\mathcal{F}$ of GTB, many small and discrete patterns overlooked by the former are noticed by the latter.\n\n\\subsubsection{Analysis on the Effect of RTB}\nTable \\ref{tab3} lists the ablation results. We first reaffirm the idea that vascular information contributes to multi-lesion segmentation by concatenating the vascular and multi-lesion features channel-wise. Comparing the results with and without concatenation, the greatest performance gains are obtained for HE and SE, which is consistent with the previous analysis that HE and SE have strong relations with vessels. In order to make full use of the vascular information, RTB is applied instead of the simple concatenation.\nCompared with the simple concatenation, the cross-attention head, which incorporates multi-lesion and vascular features in a transformer manner, further improves the scores of all four lesions. Likewise, the integration of the self-attention head yields a substantial improvement as well. \n\nAs visualized in Fig.\\ref{fig7}, the query-specific attention maps focus on specialized tissues. Taking a query pixel on an MA as an example, the self-attention attends to smaller patterns, and the cross-attention specializes in the vascular tributaries, which is supposed to assist in reducing false negatives mistaken for tributaries and in eliminating false alarms far from the tributaries.\nBoth the self-attention head and the cross-attention head play a role in improving the network performance, which shows that exploring the internal relationships among lesions and between lesions and vessels is worthwhile.\n\nFinally, both GTB and RTB are incorporated and the network achieves its best results. As shown in the bottom half of Fig.\\ref{fig5}, the curve corresponding to the complete network almost entirely encloses the others. \n\n\n\n\\begin{figure*}[htbp]\n\\centerline{\\includegraphics[width=\\textwidth]{fig8.png}}\n\\caption{Visualization of segmentation results for multi-lesion segmentation on the IDRiD dataset. The columns represent the original images, the segmentation results generated by the baseline, baseline+GTB and baseline+GTB+RTB, and the groundtruths, respectively. The yellow boxes denote the improvements over the baseline brought about by GTB, in the form of recovered missing detections, while the green boxes denote the improvements over baseline+GTB brought about by RTB, mainly in the form of fewer false alarms.\n}\n\\label{fig8}\n\\end{figure*}\n\n\\subsection{Generalization Studies on DDR and IDRiD Dataset}\nFor medical images, it is challenging but meaningful to achieve generalization across domains acquired under different imaging conditions. 
To validate the generalization capability, models are trained on the images from the training set of the DDR dataset and tested on the test set of the IDRiD dataset, which is captured from another source. Table \\ref{tab6} compares the results obtained from this preliminary generalization analysis. It can be seen that our method achieves the best performance, narrowing the gap between images acquired under different conditions.\n\n\n\n\n\n\n\\subsection{Qualitative Results}\nTo better illustrate the effect of GTB and RTB, we visualize the results for several images. \nFig.\\ref{fig8} compares the segmentation results with the corresponding original images and groundtruths. We show the segmentation maps of the baseline, the baseline with GTB, and the baseline with GTB and RTB to present the improvements contributed by the different components of our network.\nThe yellow boxes highlight the improvements of GTB, where missing detections are recovered. The green boxes mark further improvements by RTB, which picks up additional missing detections and reduces false alarms. Additionally, the edges of large lesion patterns are refined more precisely by RTB, especially for SE with blurred edges.\n\n\\begin{table}[]\n \\centering\n \\caption{Performance Comparison of different methods on the generalization from DDR dataset to IDRiD dataset}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\multirowcell{2}{Framework} & \\multicolumn{4}{c|}{AUC\\_PR}\\\\\\cline{2-5}\n & EX & HE & MA & SE \\\\\n \\hline\n HED\\cite{xie2015holistically} & 0.5420 & 0.2104 & 0.1245 & 0.1278 \\\\\n DeepLab v3+\\cite{chen2018deeplabv3} & 0.6480 & 0.4472 & 0.1823 & 0.2926 \\\\\n UNet\\cite{guan2019fully,Yakubovskiy2019} & 0.6472 & 0.4452 & 0.1965 & 0.2845 \\\\\n Lseg\\cite{guo2019seg} & 0.6501 & 0.4405 & 0.1986 & 0.3059 \\\\\n RTN(Ours) & \\textbf{0.6799} & \\textbf{0.4504} & \\textbf{0.2114} & \\textbf{0.3401} \\\\\n \\hline\n \\end{tabular}\n \\label{tab6}\n\\end{table}\n\n\n\n\\section{Conclusion And Discussion}\nIn this paper, we present a novel network that employs a dual-branch architecture with GTB and RTB to segment the four DR lesions simultaneously. \nThe strong experimental results of our network can be attributed to GTB and RTB, which exploit the intra-class dependencies among lesions and the inter-class relations between lesions and vessels.\n\n\nHowever, owing to the considerable cost of expert pixel-level annotations, the vessel pseudo-masks provided by semi-supervised learning are inevitably coarse-grained and limit the performance of our network. Therefore,\nin future work, we will further improve the vascular semi-supervised learning strategy and keep refining the transformer structures to achieve better performance in DR multi-lesion segmentation with a smaller memory requirement. 
\n\n\n\n\n\n\\input{ref.bbl}\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction and statement of results}\n\nThis paper gives a systematic treatment of two topologies for spaces of smooth functions from a finite-dimensional manifold to a (possibly infinite-dimensional) manifold modeled on a locally convex space.\n\nIn particular, we establish the continuity of certain mappings between spaces of smooth mappings, e.g.\\ the continuity of the joint composition map.\nAs a first application we prove that the bisection group of an arbitrary Lie groupoid (with finite-dimensional base) is a topological group.\nFor the most part, these results are generalizations of well known constructions to spaces of smooth functions with infinite-dimensional range.\nWe refer to \\cite{illman,michor,hirsch} for topologies on spaces of smooth functions between finite-dimensional manifolds.\n\\medskip\n\nTo understand these results of the present article, recall first the situation for spaces of smooth functions between finite-dimensional manifolds. \nFor $0 \\leq r \\leq \\infty$, let $C^r (M,N)$ denote the set of $r$-times continuously differentiable functions between manifolds $M$ and $N$\nIn the case where $r$ is finite, the standard choice for a topology on $C^r (M,N)$ is the well known Whitney $C^r$-topology (cf.\\ \\cite{illman,hirsch}). \nFor $r=\\infty$ and $M$ non-compact there are several choices for a suitable topology. \nOne can for example choose the topology generated by the union of all Whitney $C^r$-topologies. \nWe call this topology the strong $C^\\infty$-topology and write $C^\\infty_S (M,N)$ for the smooth functions with this topology.\\footnote{The strong topology is in the literature often also called the ``Whitney $C^\\infty$-topology''. Following Illman in \\cite{illman}, we will not use this naming convention as it can be argued that the strong $C^\\infty$-topology is not a genuine $C^\\infty$-topology. See ibid. for more information.}\nNote that each basic neighborhood of the strong $C^\\infty$-topology allows one to control derivatives of functions only up to a fixed upper bound.\nHowever, in applications one wants to control the derivatives of up to arbitrary high order (this is made precise in Section \\ref{sect: vsTop}).\nTo achieve this one has to refine the strong topology, obtaining the \\emph{very strong} topology\\footnote{In \\cite{michor} this topology is called the $\\mathcal{D}$-topology.} in the process (cf.\\ \\cite{illman} for an exposition).\nWe denote by $C_{\\textup{vS}}^\\infty (M,N)$ the smooth functions with the very strong topology and note that this topology is fine enough for many questions arising from differential topology.\n\nUnfortunately, as is argued in \\cite{michor} this topology is still not fine enough, if one wants to obtain manifold structures on $C^\\infty (M,N)$ (and subsequently on the group of diffeomorphisms $\\Diff (M)$).\nHence Michor constructed a further refinement of the very strong topology, called the $\\mathcal{FD}$-topology. \nIn the present paper, we call this topology the \\emph{fine very strong} topology and denote the space of smooth functions with this topology by $C_{\\textup{fS}}^\\infty (M,N)$.\n\nNote that the topologies discussed so far coincide if the source manifold is compact. \nIn fact in this case, all of these topologies coincide with the compact open $C^\\infty$-topology (see e.g.\\ \\cite[Definition 5.1]{neeb}). 
\nThe compact open $C^\\infty$-topology for infinite-dimensional target manifolds is already well understood and has been used in many investigations, for example in infinite-dimensional Lie theory, e.g.\\ \\cite{glock1}.\nHence our investigation will only turn up new results for non-compact source manifolds and infinite-dimensional target manifolds.\n\\smallskip\n\nWe will now go into some more detail and explain the main results of the present paper.\nOur aim is now to generalize the construction of the very strong and fine very strong topology to the set of smooth functions $C^\\infty (M,X)$, where $X$ is a locally convex manifold.\nHere smooth maps are understood in the sense of Bastiani's calculus \\cite{bastiani} (often also called Keller's $C^r$-theory~\\cite{keller}).\nWe refer to \\cite{milnor1983,glock1,neeb} for streamlined expositions, but have included a brief recollection in Appendix \\ref{calculus}.\n\nWorking in this framework we construct the very strong and the fine very strong topology for $C^\\infty (M,X)$, where $M$ is finite-dimensional and $X$ is a locally convex manifold.\nOur exposition mostly follows Illman's article \\cite{illman} and we adapt his arguments to our setting. \nIn particular, we describe the topology in terms of local charts as in \\cite{illman} (cf.\\ also \\cite{hirsch}).\nFor finite-dimensional manifolds one can alternatively introduce the topology using jet bundles and it is well known that both approaches yield the same topology.\nThis fact seems to be a folklore theorem, but we were not able to locate a proof in the literature. \nAs this fact is needed later on, a proof is given in Appendix \\ref{folklore}. \nThe advantage of the approach using local charts can be summarized as follows: Arguments and proofs often split into two distinct steps.\nFirst one establishes a property of the function space topology only for the (easier) special case of vector space valued smooth mappings. \nThen a localization argument involving manifold charts allows one to establish the result for smooth maps between manifolds.\n\nTo our knowledge the topologies discussed in the present paper have so far only been studied for finite-dimensional manifolds. A topology somewhat similar to the very strong topology but for infinite-dimensional manifolds can be found in \\cite[Section 41]{KM97}. Albeit the similar look, be aware that the jet bundles used in the construction are only manifolds in the inequivalent convenient setting of calculus. In particular, the topology in loc.cit.\\ does not coincide with the one constructed here if $M$ is non-compact (cf.\\ \\cite[42.2 Remarks]{KM97}). We refer to Remark \\ref{rem: topgeneral} for related topologies on function spaces between Banach manifolds.\nFor finite-dimensional manifolds, our construction recovers exactly the ones in the literature. \nWe exploit this and recall that the set $\\Prop (N,N) \\subseteq C^\\infty (M,N)$ of all proper maps is open in the very strong and the fine very strong topology.\nThen one can establish continuity of certain composition mappings, in particular our results subsume the following theorem.\\smallskip\n\n\\textbf{Theorem A} \\emph{Let $M$, $N$ be finite-dimensional manifolds, $X$ and $Y$ be (possibly infinite-dimensional) manifolds. In the following, endow all function spaces either with the very strong or the fine very strong topology. 
Then \n the joint composition \n \\begin{displaymath}\n \\Gamma \\colon \\Prop (M,N) \\times C^\\infty (N,X) \\rightarrow C^\\infty (M,X) ,\\quad (f,g) \\mapsto g\\circ f\n \\end{displaymath}\n is continuous.}\n \n \\emph{Further, for any smooth map $h \\colon X \\rightarrow Y$, the pushforward $$h_* \\colon C^\\infty (M,X) \\rightarrow C^\\infty (M,Y) , \\quad f \\mapsto h\\circ f$$ \n is continuous.}\n\nHaving this theorem at our disposal, we construct an interesting class of topological groups: \nSuppose $\\mathcal{G} = (G \\ensuremath{\\nobreak\\rightrightarrows\\nobreak} M)$ is a Lie groupoid. This means that $G,M$ are smooth manifolds, equipped with submersions $\\alpha,\\beta \\colon G\\rightarrow M$ and an associative and smooth multiplication $G\\times _{\\alpha,\\beta}G \\rightarrow G$ that\n admits a smooth identity map $1 \\colon M\\rightarrow G$ and a smooth inversion $\\iota\\colon G\\rightarrow G$. \n Then the bisections $\\Bis(\\mathcal{G})$ of $\\mathcal{G}$ are the sections\n $\\sigma\\colon M\\rightarrow G$ of $\\alpha$ such that $\\beta \\circ \\sigma$ is a\n diffeomorphism of $M$. This becomes a group with respect to \n \\begin{equation*}\n (\\sigma \\star \\tau ) (x) := \\sigma ((\\beta \\circ \\tau)(x))\\tau(x)\\text{ for } x \\in M.\n \\end{equation*}\nMany interesting groups from differential geometry such as diffeomorphism groups, automorphism groups and gauge transformations of principle bundles can be realised as bisection groups of suitable Lie groupoids. \nBy construction $\\Bis (\\mathcal{G}) \\subseteq C^\\infty (M,G)$ and with respect to the topologies on the space of smooth functions we obtain the following.\n\n\\textbf{Theorem B} \n\\emph{Let $\\mathcal{G} = (G\\ensuremath{\\nobreak\\rightrightarrows\\nobreak} M)$ be a Lie groupoid with finite-dimensional base $M$. Then $(\\Bis (\\mathcal{G}),\\star)$ is a topological group with respect to the subspace topology induced by either the very strong or the fine very strong topology on $C^\\infty (M,G)$.}\n\nThis result is a first step needed to turn the bisection group into an infinite-dimensional Lie group. \nIn fact, it turns out that one can establish this result quite easily (see below) once Theorem B is available.\nThe key step to establish the applications mentioned below, is to work out the continuity of certain composition mappings (which has been done in Theorem A).\nThen Proposition C and Theorem D below can be established using standard techniques from the literature.\nIn the present paper we will be only concerned with properties of the topology on function spaces. \nHence the next results are stated without a proof. 
We provide only some references to the literature and hope to provide details in future work.\n\n\n\\textbf{Proposition C} \\emph{\nLet $M$ be a finite-dimensional manifold and $X$ be a possibly infinite-dimensional manifold which admits a local addition.\\footnote{This is for example satisfied if $X$ is a Lie group, see also \\cite[Section 42.4]{KM97} for a definition of local additions and more examples.} \nThen $C_{\\textup{fS}}^\\infty (M,X)$ can be turned into a manifold modeled on spaces of compactly supported sections of certain bundles.}\n\nIt turns out that once the space of smooth functions is endowed with the correct topology it is not hard to prove Proposition C.\nMore details and references to literature containing the necessary auxiliary facts can be found at the end of Section \\ref{sect: vsTop}.\nProposition C generalizes \\cite[Theorem 10.4]{michor} in so far as it admits arbitrary infinite-dimensional manifolds as target manifolds (whereas loc.cit.\\ was confined to finite-dimensional targets). \nWe remark that in \\cite[42.4 Theorem]{KM97} the smooth functions $C^\\infty (M,X)$ for $M$ and $X$ as in Proposition C have been endowed with a manifold structure in the inequivalent convenient setting of calculus.\nHowever, following \\cite[42.2 Remarks]{KM97} the topology on $C^\\infty (M,X)$ used in the construction does not coincide with the fine very strong topology if $M$ is non-compact. \nHence both constructions are inequivalent even if both $M$ and $X$ are finite-dimensional (and \\( M \\) is non-compact).\n\nThe manifold structure provided by Proposition C allows one to establish the Lie group structure for a general class of bisection groups. Adapting arguments from \\cite{michor} and \\cite{Schmeding2015} one can prove that \n\n\\textbf{Theorem D} \\emph{\nThe group of bisections of a Lie groupoid $\\mathcal{G} = (G\\ensuremath{\\nobreak\\rightrightarrows\\nobreak} M)$ with $M$ finite-dimensional and $G$ a Banach manifold\\footnote{Assuming certain mild conditions on $G$ (i.e.\\ an adapted local addition, cf. \\cite{Schmeding2015}), it is not necessary to assume that $G$ is a Banach manifold.} is an infinite-dimensional Lie group. }\n\nThis generalizes the construction from \\cite{Schmeding2015}, where the group of bisections of a Lie groupoid with \\emph{compact} base was turned into an infinite-dimensional Lie group. \nThus one obtains a conceptual approach to the Lie group structures of many groups which are of interest in differential geometry (e.g.\\ automorphism groups and gauge transformation groups of principle bundles over a \\emph{non compact} base).\nMoreover, Theorem D is a crucial ingredient if one wants to extend the strong connection between Lie groupoids and infinite-dimensional Lie groups which was developed in \\cite{SchmedingWockel15}.\n\n\\newpage\n\\section{The very strong topology}\\label{sect: vsTop}\nIn this section, we introduce the \\emph{very strong topology} on the space $ C^\\infty (M,X) $ of smooth maps from a finite-dimensional smooth manifold $ M $ to a possibly infinite-dimensional smooth manifold $ X $. 
\nThe very strong topology allows us to control derivatives of smooth maps up to arbitrarily high order on certain families of compact sets.\nThis is a straightforward generalization of the very strong topology on the space of smooth maps between finite-dimensional manifolds as described in \\cite{illman}.\n\n\\textbf{Notation and conventions.} We write $ \\mathbb{N} := \\lbrace 1,2,\\dots \\rbrace $ and $ \\mathbb{N}_0 := \\lbrace 0,1,\\dots \\rbrace $, and will only work with vector spaces over the field of real numbers $ \\RR $. Finite-dimensional manifolds are always assumed to be $\\sigma$-compact, i.e. a countable union of compact subspaces (which for finite-dimensional manifolds is equivalent to being second countable). We always endow \\( \\RR^n \\) with the supremum norm $ \\Vert \\cdot \\Vert_\\infty $ unless otherwise stated. We define $ B_\\epsilon^n (x) := \\lbrace y \\in \\mathbb{R}^n : \\Vert y - x \\Vert_\\infty < \\epsilon \\rbrace $. Notation and conventions regarding locally convex vector spaces, smooth maps, and infinite-dimensional manifolds is covered in Appendix \\ref{calculus}. Typically, $ M $ and $ N $ will be finite-dimensional smooth manifolds, $ X $ a smooth manifold modeled on a locally convex vector space, and $ E $ a locally convex vector space.\n\n\\begin{definition}\n\t\\label{norm}\n\tLet $E$ be a locally convex vector space, $p$ a continuous seminorm on $E$, $f \\colon \\mathbb{R}^m \\to E $ smooth, $A \\subseteq \\mathbb{R}^m $ compact, $r \\in \\mathbb{N}_0 $, and $e_1, \\dots, e_m $ the standard basis vectors in $\\mathbb{R}^m$. Then define\n\t\\begin{displaymath}\n\t\\Vert f \\Vert (r,A,p) = \\sup \\lbrace p(\\dd^{(k)} f(a;\\alpha)) : a \\in A, \\alpha \\in \\lbrace e_1, \\dots , e_m \\rbrace^k, 0 \\leq k \\leq r \\rbrace .\n\t\\end{displaymath}\n\\end{definition}\n\\begin{remark}\n\t\\label{normremark}\n\tThe symbol $ \\dd^{(k)} f $ is defined in Definition \\ref{Crmap}. Elsewhere in the literature, $ \\dd^{(k)} f (x;y) = \\dd^{(k)} f (x;y_1,\\dots,y_k) $ is often denoted\n\t\\begin{align*}\n\t\\frac{\\partial^k}{\\partial y_k \\cdots \\partial y_1}f(x) && \\mbox{or} && \\frac{\\partial}{\\partial y} f(x),\n\t\\end{align*}\n\twhere $ y = (y_1,\\dots,y_k) $.\n\t\n\tIn the definition above we require $ \\alpha \\in \\lbrace e_1,\\dots,e_m \\rbrace^k $. But for any $ \\alpha \\in B_1^n (0) $ and $ a \\in A $ and $ k \\leq r $ we have $ p (\\dd^{(k)} f (a;\\alpha)) \\leq K \\Vert f \\Vert (r,A,p) $ for some constant $ K $ depending only on $ r $ and $ m $, by \\eqref{schwarzrule} in Proposition \\ref{chainruleprop}.\n\t\n\tIf $ E = \\RR^n $, any norm generates the topology on $ E $ and norms are in particular seminorms. By Proposition \\ref{generatingfamilies}, the very strong topology is not affected if we always assume that the seminorm $p$ on $ \\RR^n $ is the supremum norm $ \\Vert\\cdot \\Vert_\\infty $. In this case we simply write $ \\Vert f \\Vert (r,A) $ for $ \\Vert f \\Vert (r,A,\\Vert\\cdot \\Vert_\\infty ) $.\n\\end{remark}\n\n\\begin{lemma}[Triangle inequality]\n\t\\label{triangleinequality}\n\tLet $ E,p,A,r $ be as in Definition \\ref{norm}. Then the map\n\t\\begin{displaymath}\n\t\t\\Vert\\cdot \\Vert (r,A,p) : C^\\infty (\\mathbb{R}^m,E) \\to \\mathbb{R}\n\t\\end{displaymath}\n\tsatisfies the triangle inequality. 
In fact it is a seminorm on $ C^\\infty (\\mathbb{R}^m,E) $.\n\\end{lemma}\n\\begin{proof}\n\t\tUse linearity of $ d(-)(a,\\alpha) $ for fixed $ (a, \\alpha) $, and the fact that $ p $ satisfies the triangle inequality.\n\\end{proof}\n\n\\begin{definition}[Elementary neighborhood]\n\t\\label{elementarynbh}\n\tLet $E$, $p$, and $r$ be as in Definition \\ref{norm}, $M$ an $m$-dimensional smooth manifold, $ X $ a smooth manifold modeled on $ E $. Consider \\( f \\colon M \\to X \\) smooth, $(U,\\phi)$ a chart on $M$, $ (V,\\psi) $ a chart on $ X $, $ A \\subseteq U $ compact such that $ f(A) \\subseteq V $, and $ \\epsilon > 0 $. Define\n\t\\begin{align*}\n\t\\mathcal{N}^r (f; A,(U,\\phi),(V,\\psi),p,\\epsilon ) = \\lbrace h &\\in C^\\infty (M,E) : \\mbox{$ h(A) \\subseteq V $ and } \\\\\n\t&\\Vert \\psi \\circ h \\circ \\phi^{-1} - \\psi \\circ f \\circ \\phi^{-1} \\Vert (r, \\phi (A), p) < \\epsilon \\rbrace .\n\t\\end{align*}\n\tWe call this set an \\emph{elementary ~$C^r$-neighborhood of }~$ f $ in ~$ C^\\infty (M,E) $.\n\\end{definition}\n\n\\textbf{Conventions for elementary neighborhoods}\nIf $ X = \\RR^n $, we will assume that $ p $ is the supremum norm and omit the $ p $ when writing down the elementary neighborhoods.\n\nWhen there is a canonical choice of charts for our manifolds, e.g.\\ if $ X = E $ is a locally convex vector space, we omit the obvious charts when writing down elementary $ C^r $-neighborhoods.\nThus for $ f \\colon M \\to E $ we write e.g.\\ $ \\mathcal{N}^r (f;A,(U,\\phi),p,\\epsilon) := \\mathcal{N}^r (f;A,(U,\\phi),(X,\\id ),p,\\epsilon ) .$ \n\n\n\\begin{remark}\n\t\\begin{enumerate}\n\t \\item The conditions $ f(A) \\subseteq V $ and $ h(A) \\subseteq V $ ensure that the map $ \\psi \\circ h \\circ \\phi^{-1} - \\psi \\circ f \\circ \\phi^{-1} $ makes sense. \n\tFurther, the conditions enable us to control the open sets into which a (given) compact set is mapped, i.e.\\ the kind of control provided by the well known compact open topology (cf.\\ \\cite[Definition I.5.1]{neeb}).\n Indeed, by restricting to elementary $C^0$-neighborhoods, one would recover a subbase of the compact open topology on $C^\\infty (M,X)$.\n\t \\item We define elementary neighborhoods only for finite-dimensional source manifolds as the seminorms in Definition \\ref{norm} make only sense for these manifolds. \n\t Compare Remark \\ref{rem: topgeneral} for more information on alternative approaches to the topology which avoid this problem. \n\t\\end{enumerate}\n\\end{remark}\n\nWe now define what will become the basis sets in the very strong topology on $ C^\\infty (M,X) $. \n\\begin{definition}[Basic neighborhood]\n\t\\label{basicnbh}\n\tLet $ f \\colon M \\to X $ be a smooth map from a finite-dimensional smooth manifold $M$ to a smooth manifold $ X $ modeled on a locally convex vector space $E$. A \\emph{basic neighborhood of $f$ in ~$ C^\\infty (M,X) $} is a set of the form\n\t\\begin{displaymath}\n\t\\bigcap_{i \\in \\Lambda } \\mathcal{N}^{r_i} (f; A_i,(U_i, \\phi_i),(V_i,\\psi_i),p_i, \\epsilon_i),\n\t\\end{displaymath}\n\twhere $ \\Lambda $ is a possibly infinite indexing set, for all \\( i \\) the other parameters are as in Definition \\ref{elementarynbh}, and $ \\lbrace A_i \\rbrace_{i \\in \\Lambda} $ is locally finite. 
We call $ \\lbrace A_i \\rbrace_{i \\in \\Lambda} $ the \\emph{underlying compact family} of the neighborhood.\n\\end{definition}\nWithout loss of generalization, \\( \\Lambda = \\mathbb{N} \\), since every locally finite family over a \\( \\sigma \\)-compact space is countable.\n\nAs Proposition \\ref{basicnbhsisbasis} show, the basic neighborhoods in ~$ C^\\infty (M,X) $ form a basis for a topology on ~$ C^\\infty (M,X) $. In order to prove the proposition we need the following lemma.\n\\begin{lemma}\n\t\\label{trianglelemma}\n\tLet $ f : M \\to X $ be smooth, and ~$ g \\in \\mathcal{N} := \\mathcal{N}^r (f; A, (U,\\phi),(V,\\psi),p,\\epsilon) $. \n\tThen there exists $ \\epsilon' > 0 $ such that ~$ \\mathcal{N}' := \\mathcal{N}^r (g; A, (U, \\phi ),(V,\\psi), p, \\epsilon') \\subseteq \\mathcal{N} $.\n\\end{lemma}\n\\begin{proof}\n\tFor $ h,\\tilde{h} \\in C^\\infty (M,X) $ with $ h(A),\\tilde{h}(A) \\subseteq V $, let\n\t\\begin{displaymath}\n\t\td(h,\\tilde{h}) = \\Vert \\psi \\circ \\tilde{h} \\circ \\phi^{-1} - \\psi \\circ h \\circ \\phi^{-1} \\Vert (r,\\phi(A),p).\n\t\\end{displaymath}\n\tNote that $ d $ satisfies the triangle inequality by Lemma \\ref{triangleinequality}, and that $ h \\in \\mathcal{N} $ is equivalent to $ d(f,h)<\\epsilon $.\n\t\n\tSet $ \\epsilon' = \\epsilon - d(f,g) $, and let $ \\mathcal{N}' $ be as in the statement of the lemma. If $ h \\in \\mathcal{N}' $, then \n\t\\begin{displaymath}\n\td(f,h) \\leq d(f,g) + d(g,h) < d(f,g) + (\\epsilon - d(f,g)) = \\epsilon.\n\t\\end{displaymath}\n\tHence $ h \\in \\mathcal{N} $, and $ \\mathcal{N}' \\subseteq \\mathcal{N} $.\n\\end{proof}\n\\begin{proposition}\n\t\\label{basicnbhsisbasis}\n\tLet $ \\mathcal{U} $ and $ \\mathcal{U}' $ be basic neighborhoods of $ f $ and $ f' $ in $ C^\\infty (M,X) $, respectively. If $ g \\in \\mathcal{U} \\cap \\mathcal{U}' $, then there exists a basic neighborhood $ \\mathcal{V} $ of $ g $ such that $ \\mathcal{V} \\subseteq \\mathcal{U} \\cap \\mathcal{U}' $.\n\t\n\tHence the basic neighborhoods form a basis for a topology on $ C^\\infty (M,E) $, called \\emph{the very strong topology on $ C^\\infty (M,E) $}.\n\\end{proposition}\n\\begin{proof}\n\tWe may write\n\t\\begin{align*}\n\t\\mathcal{U} = \\bigcap_{i \\in \\Lambda} \\mathcal{N}_i && \\mbox{and} && \\mathcal{U}' = \\bigcap_{j \\in \\Lambda'} \\mathcal{N}'_j \n\t\\end{align*}\n\tfor some sets $ \\Lambda $ and $ \\Lambda' $, where $ \\mathcal{N}_i $ and $ \\mathcal{N}'_i $ are elementary neighborhoods of $ f $ and $ f' $, respectively.\n\tFor all $ i \\in \\Lambda $ and $ j \\in \\Lambda $ choose as in Lemma \\ref{trianglelemma} elementary neighborhoods $ \\mathcal{M}_i $ and $ \\mathcal{M}'_j $ of $ g $ such that $ \\mathcal{M}_i \\subset \\mathcal{N}_i $ and $ \\mathcal{M}'_j \\subset \\mathcal{N}'_j $. Then \n\t\\begin{displaymath}\n\t\\mathcal{V} := \\left( \\bigcap_{i \\in \\Lambda} \\mathcal{M}_i \\right) \\cap \\left( \\bigcap_{j \\in \\Lambda'} \\mathcal{M}'_i \\right) \\subseteq \\mathcal{U} \\cap \\mathcal{U}'. \n\t\\end{displaymath} \n\tIt remains to check that $ \\mathcal{V} $ is in fact a basic neighborhood of $ g $. The set $ \\mathcal{V} $ is a basic neighborhood of $ g $ provided that the underlying compact family of $ \\mathcal{V} $ is locally finite. 
This is indeed the case since the underlying compact families of $ \\mathcal{U} $ and $ \\mathcal{U}' $ are locally finite and finite unions of locally finite families are locally finite.\n\\end{proof}\nThe preceding proposition justifies the following definition.\n\\begin{definition}[Very strong topology]\n\tThe \\emph{very strong topology on $ C^\\infty (M,X) $} is the topology on $ C^\\infty (M,X) $ with basis the basic neighborhoods in $ C^\\infty (M,X) $.\\\\\n\tThe set $ C^\\infty (M,X) $ equipped with the very strong topology will be denoted by $ C_{\\textup{vS}}^\\infty (M,X) $.\n\\end{definition}\n\n\\begin{remark}\n We will work later on with $C_{\\textup{vS}}^\\infty (M,E)$, where $E$ is a locally convex space. To this end, we considered $E$ as a manifold with the canonical atlas given by the identity.\n This may seem artificial at first glance as one in principle needs to take all ``manifold charts'' of $E$ into account.\n Note however that by Lemma \\ref{lem: atlaschoice} the very strong topology on $C^\\infty (M,E)$ is generated by all basic neighborhoods of the form \n \\begin{equation}\\label{loc: charts}\n \\bigcap_{i \\in \\Lambda } \\mathcal{N}^{r_i} (f; A_i,(U_i, \\phi_i),(E,\\id_E),p_i, \\epsilon_i),\n \\end{equation}\n i.e.\\ it suffices to consider elementary neighborhoods with respect to the identity chart.\n Hence the topology on $C^\\infty (M,E)$ is quite natural.\n \n Similarly for $C^\\infty (\\mathbb{R}^n,E)$ the charts $(U_i,\\phi_i)$ in \\eqref{loc: charts} can be replaced by $(\\mathbb{R}^n,\\id_{\\mathbb{R}^n})$ by Lemma \\ref{lem: atlaschoice2}. \n In the following, we will always assume that our elementary and basic neighborhoods are constructed with respect to the identity if one (or both) of the manifolds are a locally convex space. \n\\end{remark}\n\n\n\\begin{remark}\n\tThere are other well-known topologies on $ C^\\infty (M,X) $. The \\emph{strong} topology (or \\emph{Whitney} $C^\\infty$-topology) and the \\emph{compact open} $C^\\infty$-topology (or \\emph{weak} topology) have as bases neighborhoods of the form described in Definition \\ref{basicnbh}, with some additional restrictions. For the strong topology the collection $\\lbrace r_i \\rbrace_{i \\in \\Lambda} $ of indices giving differentiation order is bounded, and for the compact open $C^\\infty$-topology we require that the indexing set $ \\Lambda $ is finite. \n\t\n\tThe very strong topology is finer than the strong topology which is finer than the compact open $C^\\infty$-topology, and in the case that $ M $ is compact all of these topologies coincide (since every locally finite family meets a compact set only finitely many times). We refer the reader to section 2.1 in \\cite{hirsch} for information about the strong and compact open $C^\\infty$ topologies in the case that $ X $ is finite-dimensional. A comparison of the strong topology and the very strong topology can be found in the introduction of \\cite{illman}.\n\t\n\tSince the very strong topology is finer than the strong topology, subsets of $ C^\\infty (M,X) $ that are open in the strong topology are also open in the very strong topology. \\cite[Section 2.1]{hirsch} has several results stating that certain subsets of $ C^\\infty (M,N) $ are open in the strong topology, consequently also in the very strong topology. In particular, the set $ \\Prop (M,N) $ of proper smooth maps is open in $ C_{\\textup{vS}}^\\infty (M,N) $. 
We write $ \\Prop_{\\textup{vS}} (M,N) $ for the subspace $ \\Prop (M,N) $ of $ C_{\\textup{vS}}^\\infty (M,N) $ equipped with the subspace topology.\n\\end{remark}\n\n\\begin{remark}\\label{rem: topgeneral}\nOne can also define the very strong topology on the space $ C^\\infty (X,Y) $ where $ X $ and $ Y $ are Banach manifolds (i.e. modeled on Banach spaces). \nTo this end one needs to redefine the seminorms generating the topology, which in the vector space case will take the following form:\n\nIf \\( X,Y \\) are Banach spaces, \\( f \\colon X \\to Y \\) smooth, \\( r \\in \\mathbb{N}_0 \\), and \\( A \\subseteq X \\) compact define\n\\begin{equation}\\label{semi:Frechet}\n\t\\Vert f \\Vert (r,A) = \\sup \\left\\lbrace \\Vert D^k f (x) \\Vert_Y : \\mbox{ \\( 0 \\leq k \\leq r \\) and \\( x \\in A \\)} \\right\\rbrace,\n\\end{equation}\nwhere \\( D^k f \\) denotes the \\( k \\)-th \\emph{\\Frechet derivative} of \\( f \\).\n\nHere we use that every smooth Bastiani map is also smooth in the sense of \\Frechet differentiability by \\cite[Lemma 2.10]{milnor1983}. It is easy to see that all statements made on elementary neighborhoods in the present section remain valid. Hence we obtain a very strong topology on smooth functions between Banach manifolds. \n\n\tNote that one can prove as in Appendix \\ref{folklore} that the ``very strong topology'' constructed with respect to the seminorms \\eqref{semi:Frechet} induces again the (original) very strong topology on $ C^\\infty (X,Y) $ if $ X $ is finite-dimensional. \n\tUnfortunately, for an infinite-dimensional Banach manifold $ X $ this topology does not allow us to control the behavior of functions ``at infinity'' (or anywhere for that matter since compact subsets of infinite-dimensional Banach spaces have empty interior).\nTo see this recall that manifolds modeled on infinite-dimensional Banach space don't have a locally finite compact exhaustion by the Baire category theorem. \n\n\tRecall however, that one can define a Whitney $ C^\\infty $-topology for $X,Y$ Banach manifolds via jet bundles (see e.g.\\ \\cite{michor,KM97} or Appendix \\ref{folklore} for a short exposition). As shown in \\cite[Chapter 9]{marg}, this topology then allows one to control the behavior of a function on all of $ X $. \n\tThe key difference is that the Whitney topology defined in this way controls the behavior of jets on locally finite families of \\textbf{closed} sets. \n\tObviously, one can not hope to describe it via the seminorms as the existence of the suprema in the seminorms is tied to the compactness of the sets. Even worse, for an infinite-dimensional manifold $ X $ and $Y=E$ a locally convex space, the largest topological vector space contained in $C^\\infty(X,E)$ with respect to this topology is trivial (cf. \\cite[437]{KM97}). For these reasons we work exclusively with the very strong topology for finite-dimensional source manifolds.\n\\end{remark}\n\n\\textbf{Additional facts about the very strong topology.} Sometimes it is convenient to assume that the continuous seminorms $ p $ used in constructing very strong neighborhoods are of a certain form, as we have already remarked. There is no loss of generality in making such assumptions if the family of seminorms that we restrict to is ``big enough''.\n\n\\begin{proposition}\n\t\\label{generatingfamilies}\n\tLet $ M $ be a finite-dimensional smooth manifold and $ X $ a smooth manifold modeled on a locally convex vector space $ E $. 
Suppose $ \\mathcal{P} $ is a generating family of seminorms for $ E $ (see Definition \\ref{seminormfamilydef}). \n\t\n\tIf we replace every instance of ``$p$ is a continuous seminorm on $ E $'' in the definitions and results earlier in this section with ``$ p \\in \\mathcal{P} $'', then the resulting very strong topology on $ C^\\infty (M,X) $ is unaffected.\n\\end{proposition}\n\\begin{proof}\n\tLet $ \\mathcal{T} $ be the very strong topology on $ C^\\infty (M,X) $ constructed with respect to all continuous seminorms on $ E $, and let $ \\mathcal{T}' $ be the very strong topology on $ C^\\infty (M,X) $ obtained by restricting to seminorms in $ \\mathcal{P} $. Then $ \\mathcal{T}' $ is obviously coarser than $ \\mathcal{T} $ since every $ p \\in \\mathcal{P} $ is continuous, so it suffices to show that $ \\mathcal{T} $ is coarser than $ \\mathcal{T}' $. This will be the case if for every basic $ \\mathcal{T} $-very strong neighborhood $ \\mathcal{U} = \\bigcap_{i\\in\\Lambda} \\mathcal{N}_i $ of $ f \\in C^\\infty (M,X) $, where each $ \\mathcal{N}_i $ is an elementary $ \\mathcal{T} $-very strong neighborhood $$ \\mathcal{N}_i = \\mathcal{N}^{r_i} (f;A_i,(U_i,\\phi_i),(V_i,\\psi_i),p_i,\\epsilon_i ), $$ there exists a basic $ \\mathcal{T}'$-very strong neighborhood $ \\mathcal{U}' $ of $ f $ such that $ \\mathcal{U}' \\subseteq \\mathcal{U} $.\n\t\n\tFix $ i \\in \\Lambda $. By \\eqref{generating family criterion} in Proposition \\ref{locally convex prop} there exist $ n_i \\in \\mathbb{N} $ and $ p_{i,1},\\dots,p_{i,n_i} \\in \\mathcal{P} $ and $ c_i > 0 $ such that $ p_i \\leq c_i \\sup_{1\\leq j \\leq n_i} p_{i,j} $. And then\n\t\\begin{align*} \n\t\\mathcal{V}_i := \\bigcap_{j=1}^{n_i} \\mathcal{N}^{r_i} \\left( f;A_i,(U_i,\\phi_i),(V_i,\\psi_i),p_{i,j}, \\frac{ \\epsilon_i }{2c_i} \\right) \\subseteq \\mathcal{N}_i.\n\t\\end{align*}\n\tIndeed, if $ g \\in \\mathcal{V}_i $, then for $ a \\in A_i $, $ \\alpha \\in \\lbrace e_1,\\dots,e_m \\rbrace^k $, $ 0 \\leq k \\leq r_i $, and $ 0 \\leq j \\leq n_i $, we have $$ c_i p_{i,j} ( \\dd (\\psi_i \\circ g \\circ \\phi_i^{-1} - \\psi_i \\circ f \\circ \\phi_i^{-1})^{(k)}(a,\\alpha) ) < \\frac{ \\epsilon_i}{2}, $$ which together with $ p_i \\leq c_i \\sup p_{i,j} $ clearly implies that $ g \\in \\mathcal{N}_i $.\n\t\n\tNow set $ \\mathcal{U}' := \\bigcap_{i \\in \\Lambda} \\mathcal{V}_i $. This is a basic $ \\mathcal{T}' $-very strong neighborhood of $ f $ such that $ \\mathcal{U}' \\subseteq \\mathcal{U} $.\n\\end{proof}\n\nThe following lemma is useful when constructing certain basic neighborhoods. The proof given here is fairly detailed, but throughout the remainder of this text the details of similar arguments will be omitted.\n\\begin{lemma}\n\t\\label{hackinglemma}\n\tLet \\( M \\) be a finite-dimensional smooth manifold, \\( X \\) a locally convex manifold, and \\( f \\colon M \\to X \\) a smooth map. Suppose \\( \\lbrace K_n \\rbrace_{n \\in \\mathbb{N}} \\) is a locally finite family of compact subsets of \\( M \\). 
Then there exist families of charts \\( \\lbrace (V_i,\\psi_i) \\rbrace_{i\\in\\mathbb{N}} \\) for \\( X \\) and \\( \\lbrace (U_i,\\phi_i) \\rbrace_{i\\in\\mathbb{N}} \\) for \\( M \\), and a locally finite family \\( \\lbrace A_i \\rbrace_{i\\in\\mathbb{N}} \\) of compact subsets of \\( M \\) such that \n\t\\begin{enumerate}\n\t\t\\item \\( \\bigcup_{i\\in\\mathbb{N}} A_i = \\bigcup_{n \\in \\mathbb{N}} K_n \\),\n\t\t\\item \\( A_i \\subseteq U_i \\) for all \\( i \\in \\mathbb{N} \\),\n\t\t\\item \\( f(U_i) \\subseteq V_i \\) for all \\( i \\in \\mathbb{N} \\).\n\t\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\tFix \\( n \\in \\mathbb{N} \\). For every \\( x \\in K_n \\) choose a chart \\( (V_{n,x},\\psi_{n,x}) \\) around \\( f(x) \\) and a chart \\( (U_{n,x},\\phi_{n,x}) \\) around \\( x \\). By shrinking \\( U_{n,x} \\) we may assume that \\( f(U_{n,x}) \\subseteq V_{n,x} \\). Since \\( M \\) is locally compact there exists a compact neighborhood \\( A_{n,x}' \\) around \\( x \\) such that \\( A_{n,x}' \\subseteq U_{n,x} \\). Now set \\( A_{n,x} = K_n \\cap A_{n,x}' \\). By compactness of \\( K_n \\) there exist finitely many \\( x_{n,1},\\dots,x_{n,k_n} \\in K_n \\) such that \\( \\lbrace A_{n,x_{n,j}} \\rbrace_{i=1}^{k_n} \\) covers \\( K_n \\). \n\t\n\tThe families \\( \\lbrace (V_{n,x_{n,i}},\\psi_{n,x_{n,i}}) \\rbrace_{n,i} \\), \\( \\lbrace (U_{n,x_{n,i}},\\phi_{n,x_{n,i}}) \\rbrace_{n,i} \\) and \\( \\lbrace A_{n,x_{n,i}} \\rbrace_{n,i} \\) have the desired properties. By relabeling the indices we can take the indexing set to be \\( \\mathbb{N} \\).\n\\end{proof}\n\n\\begin{lemma}\n\t\\label{biglemma}\n\tLet $ M $ be a finite-dimensional smooth manifold, $ X $ a smooth manifold modeled on a locally convex vector space $ E $, and let \\( U \\subseteq M \\) and \\( V \\subseteq X \\) be open subsets. 
Consider the subspace \\( C^\\infty_{\\text{vS,sub}} (U,V) := \\left\\lbrace f \\in C^\\infty (M,X) : f(U) \\subseteq V \\right\\rbrace \\subseteq C_{\\textup{vS}}^\\infty (M,X) \\).\n\t\\begin{enumerate}\n\t\t\\item \\( C^\\infty_{\\text{vS,sub}} (U,V) \\) is an open subset of \\( C_{\\textup{vS}}^\\infty (M,X) \\).\n\t\t\\item The restriction \\( \\res_{\\text{vS}} \\colon C^\\infty_{ \\text{vS,sub}} (U,V) \\to C_{\\textup{vS}}^\\infty (U,V) \\) is continuous.\n\t\t\\item If $f \\in C^\\infty (U,V)$ and $\\mathcal{N}^r (f; A,(U_\\phi,\\phi),(V_\\psi,\\psi),p,\\epsilon )$ is an elementary neighborhood of $f$ such that $\\psi (V_\\psi)$ is a convex set, then there exists $g \\in C^\\infty (M,X)$ with\n\t\t\t \\begin{equation}\\label{eq: elres}\n\t\t \\res_{\\text{vS}}^{-1} (\\mathcal{N}^r (f; A,(U_\\phi,\\phi),(V_\\psi,\\psi),p,\\epsilon )) = \\mathcal{N}^r (g; A,(U_\\phi,\\phi),(V_\\psi,\\psi),p,\\epsilon ).\n\t\t \\end{equation}\n\t\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\t\\begin{enumerate}\n\t\t\\item Suppose \\( f \\in C^\\infty_{\\text{vS,sub}}(U,V) \\).\n\t\t\tSince $ U $ is an open subset of the $ \\sigma $-compact manifold $ M $, the subspace \\( U \\) is locally compact and second countable, hence $ \\sigma $-compact.\n\t\t\tCombining Lemma \\ref{dugundjifact} and Lemma \\ref{hackinglemma}, we find a locally finite exhaustion \\( \\lbrace A_n \\rbrace_{n \\in \\mathbb{N}} \\) of \\( U \\) by compact sets, charts \\( \\lbrace (U_n,\\phi_n) \\rbrace_{n \\in \\mathbb{N}} \\) for \\( M \\), and charts \\( \\lbrace (V_n,\\psi_n) \\rbrace_{n \\in \\mathbb{N}} \\) for \\( X \\) such that \\( A_n \\subseteq U_n \\) and \\( f(A_n) \\subseteq V_n \\) for all \\( n \\in \\mathbb{N} \\).\n\t\t\tSince \\( f(A_n) \\subseteq V \\), shrink the \\( V_n \\) if necessary such that \\( V_n \\subseteq V \\) for all \\( n \\in \\mathbb{N} \\) (while still \\( f( A_n ) \\subseteq V_n \\)).\n\t\t\tTake any continuous seminorm $ p $ on $ E $ and define \n\t\t\t\\[ \\mathcal{U} = \\bigcap_{n \\in \\mathbb{N}} \\mathcal{N}^0 \\left( f; A_n,(U_n,\\phi_n),(V_n,\\psi_n),p,1 \\right). \\] \n\t\t\tIf $ g \\in \\mathcal{U} $, then $ g(A_n) \\subseteq V_n \\subseteq V $ for all $ n \\in \\mathbb{N} $, from which it follows that $ g(U) = g \\left( \\bigcup A_n \\right) \\subseteq V $. So $ \\mathcal{U} $ is a neighborhood of $ f $ in $ C_{\\textup{vS}}^\\infty (M,X) $ such that $ \\mathcal{U} \\subseteq C^\\infty_{\\text{vS,sub}} (U,V) $.\n\t\t\\item Take an arbitrary basic neighborhood\n\t\t\t\\[\n\t\t\t\t\\mathcal{U} = \\bigcap_{i\\in\\Lambda} \\mathcal{N}^{r_i} \\left( f;A_i,(U_i,\\phi_i),(V_i,\\psi_i),p_i,\\epsilon_i \\right)\n\t\t\t\\]\n\t\t\tin \\( C_{\\textup{vS}}^\\infty (U,V) \\). We will show that given \\( g \\in \\res_{\\text{vS}}^{-1} (\\mathcal{U}) \\), there exists a basic neighborhood of \\( g \\) in \\( C_{\\textup{vS}}^\\infty (M,X) \\) that is contained in \\( \\res_{\\text{vS}}^{-1} (\\mathcal{U}) \\), i.e. \\( \\res_{\\text{vS}}^{-1} (\\mathcal{U}) \\) is open. By Lemma \\ref{basicnbhsisbasis} there exist $\\delta_i > 0$ such that the basic neighborhood\n\t\t\t\\[\n\t\t\t\t\\mathcal{V} = \\bigcap_{i\\in\\Lambda} \\mathcal{N}^{r_i} \\left( \\res_{\\text{vS}} (g) ;A_i,(U_i,\\phi_i),(V_i,\\psi_i),p_i,\\delta_i \\right)\n\t\t\t\\]\n\t\t\tof $\\res_{\\text{vS}} (g)$ is contained in $\\mathcal{U}$. 
\n\t\t\tNow clearly $\\bigcap_{i\\in\\Lambda} \\mathcal{N}^{r_i} \\left( g ;A_i,(U_i,\\phi_i),(V_i,\\psi_i),p_i,\\delta_i \\right)$ is contained in $\\res_{\\text{vS}}^{-1} (\\mathcal{V}) \\subseteq \\res_{\\text{vS}}^{-1} (\\mathcal{U})$.\n \t\t\\item Since $M$ is finite-dimensional, hence paracompact, we can choose a neighborhood $W$ of $A$ and a smooth cutoff function $\\rho \\colon M \\rightarrow [0,1]$ with $\\rho|_{W} \\equiv 1$ and $\\operatorname{supp} \\rho \\subseteq f^{-1} (V_\\psi) \\cap U_\\phi$.\n \t\tComposing with a suitable translation, we may assume without loss of generality that $\\psi(V_\\psi)$ is a convex $0$-neighborhood.\n \t\tSuppressing the translation we can thus define $$g \\colon M \\rightarrow X , \\quad x \\mapsto \\begin{cases}\n \t\t \\psi^{-1} ( \\rho(x) \\cdot \\psi \\circ f (x)) & \\text{if } x \\in f^{-1} (V_\\psi) \\cap U_\\phi, \\\\\n \t\t \\psi^{-1} (0) & \\text{else}.\n \t\t \\end{cases}\n\t\t $$\n\t\tThe two cases agree on the open set $M \\setminus \\operatorname{supp} \\rho$, so $g$ is smooth. Now as $g|_W = f|_W$ and $A \\subseteq W$, the identity \\eqref{eq: elres} is satisfied. \\qedhere\n\t\\end{enumerate}\n\\end{proof}\n\n\n\\section{Composition of maps in the very strong topology}\\label{sect: compo}\nThroughout this section, $ M $ and $ N $ are finite-dimensional smooth manifolds and $ X $ denotes a smooth manifold modeled on a locally convex vector space $ E $. \n\nIt is a desirable property of the very strong topology on \\( C^\\infty (M,X) \\) that composition\n\\begin{align*}\n\t\\Gamma : C_{\\textup{vS}}^\\infty (M,N) \\times C_{\\textup{vS}}^\\infty (N,X) &\\to C_{\\textup{vS}}^\\infty (M,X) \\\\\n\t(f,h) &\\mapsto h \\circ f\n\\end{align*}\nis continuous. But this is not the case in general; a counterexample can be found in Example \\ref{counterexample}. However, the restriction of the composition map\n\\begin{displaymath}\n\\Gamma \\colon \\Prop_{\\textup{vS}} (M,N) \\times C_{\\textup{vS}}^\\infty (N,X) \\to C_{\\textup{vS}}^\\infty (M,X)\n\\end{displaymath}\n\\textit{is} continuous, where $ \\Prop_{\\textup{vS}} (M,N) $ denotes the subspace of $ C_{\\textup{vS}}^\\infty (M,N) $ consisting of all the proper maps. This is precisely what Theorem \\ref{compiscont} says, and proving it is the main goal of this section.\n\nAs we will see, the crucial property of proper maps needed is the fact that if $ \\lbrace A_i \\rbrace $ is a locally finite family of subsets of $ M $ and $ f : M \\to N $ is proper, then $ \\lbrace f(A_i) \\rbrace $ is locally finite. This will enable us to choose, for a basic neighborhood $ \\mathcal{V} $ of some composition $ h \\circ f $ in $ C_{\\textup{vS}}^\\infty (M,X) $, basic neighborhoods $ \\mathcal{U} $ and $ \\mathcal{U}' $ of $ f $ and $ h $, respectively, such that $ \\Gamma (\\mathcal{U} \\times \\mathcal{U}' ) \\subseteq \\mathcal{V} $. The challenge is to choose $ \\mathcal{U}' $ in such a way that the underlying compact family of the neighborhood is locally finite.\n\nWe now give the promised counterexample to the statement that composition of maps in the very strong topology is continuous in general. 
This example is inspired by the proof of \\cite[Proposition 2.2(b)]{counterglock}.\n\\begin{example}\n\t\\label{counterexample}\n\tThe composition map\n\t\\begin{align*}\n\t\t\\Gamma \\colon C_{\\textup{vS}}^\\infty (\\mathbb{R},\\mathbb{R}) \\times C_{\\textup{vS}}^\\infty (\\mathbb{R},\\mathbb{R}) \\to C_{\\textup{vS}}^\\infty (\\mathbb{R},\\mathbb{R}), \\quad (f,h) \\mapsto h \\circ f\n\t\\end{align*}\n\tis not continuous.\n\\end{example}\n\\begin{proof}\n\tNote that for every basic neighborhood $ \\mathcal{U} $ of $ f \\in C_{\\textup{vS}}^\\infty (\\mathbb{R},\\mathbb{R}) $ there exists a basic neighborhood $ \\mathcal{U}' $ of $ f $ with underlying compact family $ \\lbrace [2n-1,2n+1] \\rbrace_{n\\in\\mathbb{Z}} $ such that $ f \\in \\mathcal{U}' \\subseteq \\mathcal{U} $, since each compact interval $ [2n-1,2n+1] $ intersects only finitely many sets belonging to the locally finite underlying compact family of $ \\mathcal{U} $.\n\t\n\tTo show discontinuity of $ \\Gamma $ it suffices to show discontinuity at $ (0,0) $. Let $ \\mathcal{V} $ be the basic neighborhood of 0 given by $$ \\mathcal{V} := \\bigcap_{n\\in\\mathbb{N}} \\mathcal{N}^{n} (0;[2n-1,2n+1],1) . $$ We will show that for any pair of basic neighborhoods\n\t\\begin{align*}\n\t\\mathcal{U} = \\bigcap_{n\\in\\mathbb{Z}} \\mathcal{N}^{r_n} (0;[2n-1,2n+1],\\epsilon_n), \\\\\n\t\\mathcal{U}' = \\bigcap_{n\\in\\mathbb{Z}} \\mathcal{N}^{r_n'} (0;[2n-1,2n+1],\\epsilon_n'),\n\t\\end{align*}\n\tthere exists a pair of functions $ (f,h) \\in \\mathcal{U}' \\times \\mathcal{U} $ such that $ h \\circ f \\notin \\mathcal{V} $. \n\t\n\tConstruct $ h \\in C^\\infty (\\mathbb{R},\\mathbb{R}) $ such that in a neighborhood of $ 0 $, $ h $ is given by $ h(x) = x^{r_0 + 1} $, and such that $ \\operatorname{supp} h \\subseteq ]-1,1[ $. For some sufficiently small $ k > 0 $ we will have $ kh \\in \\mathcal{U} $. For every $ m \\in \\mathbb{N} $ define $$ h_m (x) := \\frac{k}{m^{r_0}} h(mx) $$ and note that $ h_m \\in \\mathcal{U} $, since $$ | h_m^{(j)} (x) | = \\frac{k m^j}{m^{r_0}} | h^{(j)} (mx) | \\leq k | h^{(j)}(mx) | < \\epsilon_0 $$ for $ j \\leq r_0 $, where we use the notation \\( g^{(j)} (y) = \\dd^{(j)} g(y;1,\\dots,1) \\) for smooth maps \\( g \\colon \\RR \\to \\RR \\).\n\t\n\tFix $ n \\in \\mathbb{N} $ with $ n \\geq r_0 + 1 $, so that the elementary neighborhood $ \\mathcal{N}^{n} (0;[2n-1,2n+1],1) $ appearing in $ \\mathcal{V} $ controls derivatives up to order $ r_0 + 1 $, and construct $ \\tilde{f} \\in C^\\infty (\\mathbb{R},\\mathbb{R}) $ such that $ \\tilde{f}(x) = x - 2n $ in a neighborhood of $ 2n $ and $ \\operatorname{supp} \\tilde{f} \\subseteq ]2n-1,2n+1[ $. Then for some sufficiently small $ s > 0 $ we have $ f := s \\tilde{f} \\in \\mathcal{U}' $.\n\t\n\tSo far we have a sequence $ \\lbrace h_m \\rbrace_{m\\in\\mathbb{N}} \\subset \\mathcal{U} $ and $ f \\in \\mathcal{U}' $. By construction, $ h_m \\circ f (x) = kms^{r_0+1} (x-2n)^{r_0+1} $ in a neighborhood of $ 2n $. Hence $$ |(h_m \\circ f)^{(r_0+1)}(2n)| = kms^{r_0+1} (r_0+1)! \\geq 1 $$ for large enough $ m $, in which case $ h_m \\circ f \\notin \\mathcal{V} $.\n\\end{proof}\nHaving given the example above, we return our focus to the main task of the section, which is proving Theorem \\ref{compiscont}. Leading up to the theorem is a sequence of lemmata. \n\nAlthough we are actually interested in mapping spaces between manifolds, we first give a lemma that only applies to vector spaces. 
In a sense, this lemma resolves the main difficulty, and generalizing to manifolds is only a matter of dealing with charts.\n\\begin{lemma}\n\t\\label{realdomains}\n\tConsider the composition map\n\t\\begin{displaymath}\n\t\t\\Gamma : C_{\\textup{vS}}^\\infty (\\mathbb{R}^m,\\mathbb{R}^n) \\times C_{\\textup{vS}}^\\infty (\\mathbb{R}^n,E) \\to C_{\\textup{vS}}^\\infty (\\mathbb{R}^m,E),\\quad (f,h) \\mapsto h \\circ f.\n\t\\end{displaymath}\n\tLet $ (f,h) \\in C_{\\textup{vS}}^\\infty (\\mathbb{R}^m,\\mathbb{R}^n) \\times C_{\\textup{vS}}^\\infty (\\mathbb{R}^n,E) $, and consider an arbitrary elementary neighborhood\n\n\t$\t\\mathcal{N} = \\mathcal{N}^r (h\\circ f;A,p,\\epsilon) \\subseteq C_{\\textup{vS}}^\\infty (\\mathbb{R}^m,E)$\n\n\tof $ h \\circ f $. For all compact neighborhoods \\( A' \\) of \\( f(A) \\) there exist \\( \\delta, \\delta' > 0 \\) such that the elementary neighborhoods\n\t\\begin{align*}\n\t\t\\mathcal{M} = \\mathcal{N}^r (f;A,\\delta) && \\mbox{and} && \\mathcal{M}' = \\mathcal{N}^r (h;A',p,\\delta')\n\t\\end{align*}\n\tsatisfy $ \\Gamma (\\mathcal{M} \\times \\mathcal{M}') \\subseteq \\mathcal{N} $.\n\\end{lemma}\n\\begin{proof}\n\tLet $ A' $ be any compact neighborhood of $ f(A) $. We proceed in several steps.\n\t\n\t\\textbf{Step 1.} Our first goal is to define $ \\mathcal{M} $. We want a $ \\delta > 0 $ such that for all $ \\hat{f} \\in C^\\infty (\\mathbb{R}^m, \\mathbb{R}^n) $, the inequality $ \\Vert \\hat{f} - f \\Vert (r,A) < \\delta $ implies\n\t\\begin{align*}\n\t\t\\Vert h \\circ (\\hat{f} - f) \\Vert (r,A,p) < \\frac{\\epsilon}{2} && \\mbox{and} && \\hat{f} (A) \\subseteq A'.\n\t\\end{align*}\n\tBy Lemma \\ref{greenlemma} it is possible to choose $ \\delta $ such that the first property holds. We may choose $ \\delta $ such that the second property also holds, because $ \\hat{f}(A) = (\\hat{f}-f)(A) + f(A) \\subseteq B_\\delta^n (0) + f(A) $. Pick such a $ \\delta $ and define\n\t\\begin{displaymath}\n\t\t\\mathcal{M} := \\mathcal{N}^r (f;A,\\delta).\n\t\\end{displaymath}\n\t\n\tObserve that by the triangle inequality (Lemma \\ref{triangleinequality}) there exists an $ R > 0 $ such that every $ \\hat{f} \\in \\mathcal{M} $ satisfies $ \\Vert \\hat{f} \\Vert (r,A) \\leq R $.\n\t\n\t\\textbf{Step 2.} Our second goal is to define $ \\mathcal{M}' $. We want a $ \\delta' > 0 $ such that for all $ \\hat{h} \\in C^\\infty (\\mathbb{R}^n,E) $ and all $ \\hat{f} \\in \\mathcal{M} $,\n\t\\begin{align*}\n\t\t\\Vert \\hat{h} - h \\Vert (r,A',p) < \\delta' && \\implies && \\Vert (\\hat{h} - h) \\circ \\hat{f} \\Vert (r,A,p) < \\frac{\\epsilon}{2}.\n\t\\end{align*}\n\tA $ \\delta' $ having this property exists by Lemma \\ref{greenlemma} and the observation at the end of step 1. Now define\n\t\\begin{displaymath}\n\t\t\\mathcal{M}' := \\mathcal{N}^r (h;A',p,\\delta').\n\t\\end{displaymath}\n\t\n\t\\textbf{Step 3.} Now we must show that $ \\mathcal{M} $ and $ \\mathcal{M}' $ have the desired property. Let $ \\hat{f} \\in \\mathcal{M} $ and $ \\hat{h} \\in \\mathcal{M}' $. By the triangle inequality,\n\t\\begin{displaymath}\n\t\t\\Vert \\hat{h} \\circ \\hat{f} - h \\circ f \\Vert (r,A,p) \\leq \\Vert (\\hat{h} - h) \\circ \\hat{f} \\Vert (r,A,p) + \\Vert h \\circ (\\hat{f} - f) \\Vert (r,A,p) < \\epsilon.\n\t\\end{displaymath} \n\tSo $ \\hat{h}\\circ \\hat{f} \\in \\mathcal{N} $. 
Thus $ \\Gamma (\\mathcal{M} \\times \\mathcal{M}') \\subseteq \\mathcal{N} $.\n\\end{proof}\n\nA version of the preceding lemma still holds if we replace $ \\mathbb{R}^m $ and $ \\mathbb{R}^n $ with finite-dimensional smooth manifolds and $ E $ with a (possibly infinite-dimensional) smooth manifold. This is our next result.\n\n\\begin{lemma}\n\t\\label{illmannslemma}\n\tGiven $ f \\in C^\\infty (M,N) $ and $ h \\in C^\\infty (N,X) $ and an arbitrary elementary neighborhood\n\t\\begin{displaymath}\n\t\t\\mathcal{N} := \\mathcal{N}^r (h\\circ f;A,(U,\\phi),(W,\\eta),p,\\epsilon) \\subseteq C_{\\textup{vS}}^\\infty (M,X)\n\t\\end{displaymath}\n\tof $ h\\circ f $ there exist finitely many\n\t\\begin{align*}\n\t\t\\mathcal{M}_j := \\mathcal{N}^r (f;A_j,(U,\\phi),(V_j,\\psi_j),\\delta_j) && \\mbox{and} && \\mathcal{M}_j' := \\mathcal{N}^r (h;A_j',(V_j,\\psi_j),(W,\\eta),p,\\delta_j')\n\t\\end{align*}\n\tsuch that\n\t\\begin{enumerate} \n\t\t\\item $\\displaystyle\\Gamma \\left( \\bigcap_j \\left( \\mathcal{M}_j \\times \\mathcal{M}_j' \\right) \\right) \\subseteq \\mathcal{N} $,\n\t\t\\item $\\displaystyle \\bigcup_j A_j = A $,\n\t\t\\item $ f(A_j) \\subseteq \\interior A_j' \\subseteq A_j' \\subseteq V_j $ for all $ j $.\n\t\\end{enumerate}\n\t\n\tMoreover, given any neighborhood $ Q $ of $ f(A) $ we may choose the $ V_j $ such that all $ V_j \\subseteq Q $.\n\\end{lemma}\n\\begin{proof}\n\tSince $ f(A) $ is compact we may choose finitely many sets\n\t\\begin{align*}\n\t\tD_j \\subseteq \\interior A_j' \\subseteq A_j' \\subseteq V_j \\subseteq N && \\mbox{such that} && f(A) \\subseteq \\bigcup_j D_j,\n\t\\end{align*}\n\twhere $ D_j $ and $ A_j' $ are compact, and $ V_j $ is a chart domain for a chart $ (V_j, \\psi_j) $ on $ N $. Shrinking the $ V_j $ we may assume that every $ V_j \\subseteq Q $. Set\n\t\\begin{displaymath} \n\t\tA_j := A \\cap f^{-1} (D_j),\n\t\\end{displaymath} \n\tto obtain compact sets that satisfy\n\t\\begin{align*}\n\t\t\\bigcup_j A_j = A && \\mbox{and} && f(A_j) \\subseteq D_j \\mbox{, for all $ j $.}\n\t\\end{align*}\n\t\n\tLet\n\t\\( \\mathcal{N}_j := \\mathcal{N}^r (h\\circ f ; A_j, (U,\\phi),(W,\\eta),p,\\epsilon) \\),\n\tand note that $ \\mathcal{N} = \\bigcap_j \\mathcal{N}_j $. For each $ j $ apply Lemma \\ref{realdomains} to the maps $ \\psi_j \\circ f \\circ \\phi^{-1} $ and $ \\eta \\circ h \\circ \\psi_j^{-1} $ and the elementary neighborhood\n\t\\begin{displaymath}\n\t\t\\tilde{\\mathcal{N}}_j := \\mathcal{N}^r (\\eta \\circ h \\circ f \\circ \\phi^{-1};\\phi(A_j),p,\\epsilon)\n\t\\end{displaymath}\n\tto obtain elementary neighborhoods\n\t\\begin{align*}\n\t\t\\tilde{\\mathcal{M}}_j = \\mathcal{N}^r (\\psi_j \\circ f \\circ \\phi^{-1};\\phi(A_j),\\delta_j) && \\mbox{and} && \\tilde{\\mathcal{M}}_j' = \\mathcal{N}^r (\\eta \\circ h \\circ \\psi_j^{-1};\\psi_j (A_j'),p,\\delta_j')\n\t\\end{align*}\n\tsuch that $ \\Gamma (\\tilde{\\mathcal{M}}_j \\times \\tilde{\\mathcal{M}}_j') \\subseteq \\tilde{\\mathcal{N}}_j $. \n\t\n\tThe elementary neighborhoods $ \\tilde{\\mathcal{M}}_j $ and $ \\tilde{\\mathcal{M}}_j' $ of $ \\psi_j \\circ f \\circ \\phi^{-1} $ and $ \\eta \\circ h \\circ \\psi_j^{-1} $, respectively, induce elementary neighborhoods\n\t\\begin{align*}\n\t\t\\mathcal{M}_j = \\mathcal{N}^r (f;A_j,(U,\\phi),(V_j,\\psi_j),\\delta_j) && \\mbox{and} && \\mathcal{M}_j' = \\mathcal{N}^r (h;A_j',(V_j,\\psi_j),(W,\\eta),p,\\delta_j')\n\t\\end{align*}\n\tof $ f $ and $ h $, respectively. 
These neighborhoods correspond to each other in the sense that $ \\hat{f} \\in \\mathcal{M}_j $ if and only if $ \\psi_j \\circ \\hat{f} \\circ \\phi^{-1} \\in \\tilde{\\mathcal{M}}_j $, and $ \\hat{h} \\in \\mathcal{M}_j' $ if and only if $ \\eta \\circ \\hat{h} \\circ \\psi_j^{-1} \\in \\tilde{\\mathcal{M}}_j' $. Similarly for $ \\tilde{\\mathcal{N}}_j $ and $ \\mathcal{N}_j $. Since $ \\Gamma (\\tilde{\\mathcal{M}}_j \\times \\tilde{\\mathcal{M}}_j') \\subseteq \\tilde{\\mathcal{N}}_j $, one has by the correspondence described here that $ \\Gamma (\\mathcal{M}_j \\times \\mathcal{M}_j') \\subseteq \\mathcal{N}_j $. \n\t\n\tNow just observe that\n\t\\begin{displaymath}\n\t\t\\Gamma \\left( \\bigcap_j \\left( \\mathcal{M}_j \\times \\mathcal{M}_j' \\right) \\right) \\subseteq \\bigcap_j \\Gamma \\left( \\mathcal{M}_j \\times \\mathcal{M}_j' \\right) \\subseteq \\bigcap_j \\mathcal{N}_j = \\mathcal{N}.\n\t\\end{displaymath}\n\\end{proof}\n\\begin{lemma}\n\t\\label{compislemma}\n\tConsider smooth maps $ f \\in C_{\\textup{vS}}^\\infty (M,N) $ and $ h \\in C_{\\textup{vS}}^\\infty (N,X) $ and a basic neighborhood $ \\mathcal{U} = \\bigcap_{i \\in \\Lambda} \\mathcal{N}_i $, where each\n\t\\begin{displaymath} \n\t\\mathcal{N}_i = \\mathcal{N}^{r_i} (h\\circ f;A_i,(U_i,\\phi_i),(W_i,\\eta_i),p_i,\\epsilon_i)\n\t\\end{displaymath}\n\tis an elementary neighborhood of $ h\\circ f $. \n\t\n\tIf $ \\lbrace f(A_i) \\rbrace_{i \\in \\Lambda} $ is locally finite, then there exist basic neighborhoods $ \\mathcal{V} $ and $ \\mathcal{V}' $ of $ f $ and $ h $, respectively, such that $ \\Gamma (\\mathcal{V} \\times \\mathcal{V}') \\subseteq \\mathcal{U} $.\n\\end{lemma}\n\\begin{proof}\n\tSince $ \\lbrace f(A_i) \\rbrace_{i\\in\\Lambda} $ is locally finite, there exist compact neighborhoods $ Q_i $ of $ f(A_i) $ such that $ \\lbrace Q_i \\rbrace_{i \\in \\Lambda} $ is locally finite, by \\cite[30.C.10]{cech}. 
Here we use our assumption that finite-dimensional manifolds are $ \\sigma $-compact.\n\t\n\tFor each $ i \\in \\Lambda $, Lemma \\ref{illmannslemma} implies that there exist\n\t\\begin{align*}\n\t\t\\mathcal{W}_i :=& \\bigcap_{j=1}^{n_i} \\mathcal{N}^{r_i} (f;A_{i,j},(U_i,\\phi_i),(V_{i,j},\\psi_{i,j}),\\delta_{i,j}), \\\\\n\t\t\\mathcal{W}_i' :=& \\bigcap_{j=1}^{n_i} \\mathcal{N}^{r_i} (h;A_{i,j}',(V_{i,j},\\psi_{i,j}),(W_i,\\eta_i),p_i,\\delta_{i,j}')\n\t\\end{align*}\n\tsuch that\n\t\\begin{enumerate}\n\t\t\\item $\\displaystyle \\Gamma (\\mathcal{W}_i \\times \\mathcal{W}_i') \\subseteq \\mathcal{N}_i $,\n\t\t\\item $\\displaystyle \\bigcup_{j=1}^{n_i} A_{i,j} = A_i $,\n\t\t\\item $ A_{i,j}' \\subseteq V_{i,j} \\subseteq Q_i $ for all \\( j \\).\n\t\\end{enumerate}\n\tThen $ \\lbrace A_{i,j} \\rbrace_{i,j} $ is locally finite by (2) and since $ \\lbrace A_i \\rbrace_i $ is locally finite, and $ \\lbrace A_{i,j}' \\rbrace_{i,j} $ is locally finite by (3) and since $ \\lbrace Q_i \\rbrace_i $ is locally finite.\tHence\n\t\\begin{align*}\n\t\t\\mathcal{V} := \\bigcap_{i\\in\\Lambda} \\mathcal{W}_i && \\mbox{and} && \\mathcal{V}' := \\bigcap_{i\\in\\Lambda} \\mathcal{W}_i'\n\t\\end{align*}\n\tare basic neighborhoods of $ f $ and $ h $, respectively, such that\n\t\\begin{displaymath}\n\t\t\\Gamma \\left( \\mathcal{V} \\times \\mathcal{V}' \\right) = \\Gamma \\left( \\bigcap_{i\\in\\Lambda} \\left( \\mathcal{W}_i \\times \\mathcal{W}_i' \\right) \\right) \\subseteq \\bigcap_{i\\in\\Lambda} \\Gamma (\\mathcal{W}_i \\times \\mathcal{W}_i') \\subseteq \\bigcap_{i\\in\\Lambda} \\mathcal{N}_i = \\mathcal{U}.\n\t\\end{displaymath}\n\\end{proof}\n\\begin{theorem}\n\t\\label{compiscont}\n\tLet $M$ and $N$ be finite-dimensional smooth manifolds and let $ X $ be a smooth manifold modeled on a locally convex vector space $E$. Then the composition map\n\t\\begin{displaymath}\n\t\\Gamma \\colon \\Prop_{\\textup{vS}} (M,N) \\times C_{\\textup{vS}}^\\infty (N,X) \\to C_{\\textup{vS}}^\\infty (M,X)\n\t\\end{displaymath}\n\tsending $ (f,h) $ to $ h \\circ f $ is continuous.\n\\end{theorem}\n\\begin{proof}\n\tIt suffices to show that given maps $ f \\in \\Prop_{\\textup{vS}} (M,N) $ and $ h \\in C_{\\textup{vS}}^\\infty (N,X) $ and a basic neighborhood\n\t\\begin{displaymath}\n\t\t\\mathcal{U} = \\bigcap_{i\\in\\Lambda} \\mathcal{N}^{r_i} (h\\circ f;A_i,(U_i,\\phi_i),(V_i,\\psi_i),p_i,\\epsilon_i)\n\t\\end{displaymath}\n\tof $ h\\circ f $ in $ C_{\\textup{vS}}^\\infty (M,X) $, there exist basic neighborhoods $ \\mathcal{V} $ and $ \\mathcal{V}' $ of $ f $ and $ h $, respectively, such that $ \\Gamma (\\mathcal{V} \\times \\mathcal{V}') \\subseteq \\mathcal{U} $.\n\t\n\tSo suppose that we are given $ f , h $ and $ \\mathcal{U} $ as above. Then $ \\lbrace f(A_i) \\rbrace_{i \\in \\Lambda} $ is locally finite since $ f $ is proper, by \\cite[Lemma 3.10.11]{engelking}. Thus we may apply Lemma \\ref{compislemma} to obtain the desired neighborhoods $ \\mathcal{V} $ and $ \\mathcal{V}' $.\n\\end{proof}\nUnfortunately, precomposition is not continuous in general, as an examination of Example \\ref{counterexample} reveals. However, precomposition by a proper map is continuous. \n\\begin{proposition}\n\t\\label{precomposition}\n\tLet $ f \\in \\Prop_{\\textup{vS}} (M,N)$. 
Then the following map is continuous \n\t\\begin{align*}\n\t\tf^* \\colon C_{\\textup{vS}}^\\infty (N,X) &\\to C_{\\textup{vS}}^\\infty (M,X), \\quad\t\th \\mapsto h \\circ f \n\t\\end{align*}\n\\end{proposition}\n\\begin{proof}\n\tThe map $ \\iota_f \\colon C_{\\textup{vS}}^\\infty (N,X) \\to \\Prop (M,N) \\times C_{\\textup{vS}}^\\infty (N,X) $ given by $ \\iota_f (h) = (f,h) $ is continuous. Hence $ f^* $, which is the composition \n\t\\begin{displaymath}\n\t\tC_{\\textup{vS}}^\\infty (N,X) \\xrightarrow{\\iota_f} \\Prop_{\\textup{vS}} (M,N) \\times C_{\\textup{vS}}^\\infty (N,X) \\xrightarrow{\\Gamma} C_{\\textup{vS}}^\\infty (M,X),\n\t\\end{displaymath}\n\tis also continuous. \n\\end{proof}\n\nWe will now prove that postcomposition is always continuous. \nThis result is needed even for postcomposition by a map $f\\colon X \\rightarrow Y$ between infinite-dimensional manifolds.\nThus the next proposition can not readily be deduced from Theorem \\ref{compiscont} (or the other results in this section)\\footnote{For the finite-dimensional case, a proof along these lines can be found in \\cite{illman}.}. \nInstead we have to take a detour using results on the (coarser) compact open $C^\\infty$-topology (cf.\\ to the proof of Lemma \\ref{lem: atlaschoice}). \n\n\\begin{proposition}\\label{postcomposition}\n Let $f \\colon X \\rightarrow Y$ be smooth, where $Y$ is a (possibly infinite-dimensional) manifold. \n Then for any finite-dimensional manifold $M$ the following map is continuous \n \\begin{displaymath}\n f_* \\colon C_{\\textup{vS}}^\\infty (M,X) \\rightarrow C_{\\textup{vS}}^\\infty (M,Y) , \\quad h \\mapsto f\\circ h\n \\end{displaymath}\n\\end{proposition}\n\n\\begin{proof}\n To see that $f_*$ is continuous, we proceed in several steps.\n \n \\textbf{Step 1} \\emph{Special elementary neighborhoods.} \n Consider first an arbitrary elementary neighborhood $\\mathcal{N} = \\mathcal{N}^{r} (f\\circ h; A,(U, \\phi),(V,\\psi),q, \\epsilon)$ in $C_{\\textup{vS}}^\\infty (M,Y)$.\n Since $h(A)$ is compact, there are finitely many manifold charts $(W_i,\\kappa_i)$ of $X$ with $h(A) \\subseteq \\bigcup_{i} W_i$.\n Now the open sets $h^{-1} (W_i)$ cover $A$ and thus there are finitely many compact sets $K_j$ such that $A=\\bigcup_{j} K_j$ and $h(K_j) \\subseteq W_{i_j}$.\n Thus we replace $A$ by the finitely many compact sets. Note that this will ensure that the families of compact sets considered later remain locally finite.\n To shorten the notation, assume without loss of generality that there is a manifold chart $(W,\\kappa)$ of $X$ such that $h(A) \\subseteq W$ and $f(W) \\subseteq V$.\n In particular, we can thus consider the mapping $f_{\\kappa}^\\psi := \\psi \\circ f \\circ \\kappa^{-1} \\colon \\kappa (W) \\rightarrow \\psi (V)$\n \n \\textbf{Step 2} \\emph{The preimage of a special elementary neighborhood of $f\\circ h$ is a neighborhood of $h$.} We work locally in charts.\n Let $Y$ be modeled on the locally convex space $F$ and $X$ be modeled on the locally convex space $E$.\n Recall that the compact open $C^\\infty$-topology (see \\cite[Definition I.5.1]{neeb}) controls the derivatives of functions on compact sets.\n Moreover, the elementary neighborhoods of the very strong topology form a subbase of the compact open $C^\\infty$-topology.\n We denote by $C^\\infty (M,F)_{\\text{co}}$ the vector space of smooth functions with the compact open $C^\\infty$-topology. 
\n \n Choose a compact neighborhood $C \\subseteq U$ of $A$ such that $h(C) \\subseteq W$ (this entails $f\\circ h(C)\\subseteq V$). \n We endow the subset $\\lfloor C,\\kappa(W)\\rfloor := \\{g\\in C^\\infty (M,E) \\mid g(C) \\subseteq \\kappa(W)\\}$ with the subspace topology induced by the compact open $C^\\infty$-topology.\n As $\\kappa (W) \\subseteq E$ is open, we note that $\\lfloor C,\\kappa(W)\\rfloor$ is open in $C^\\infty (M,E)_{\\text{co}}$.\n Now \\cite[Proposition 4.23 (a)]{glockomega} shows that \n \\begin{displaymath}\n (f_{\\kappa}^\\psi)_* \\colon \\lfloor C,\\kappa(W) \\rfloor \\rightarrow C^\\infty (\\interior C,F)_{\\text{co}},\\quad h \\mapsto (\\psi\\circ f\\circ \\kappa^{-1}) \\circ h\n \\end{displaymath}\n is continuous. \n Moreover, $\\mathcal{N}_{loc} := \\mathcal{N}^{r} (\\psi \\circ f\\circ h|_{\\interior C}; A,(U\\cap \\interior C, \\phi),(F,\\id_F),q, \\epsilon)$ is open in $C^\\infty (\\interior C,F)_{\\text{co}}$.\n Further, $f_{\\kappa}^\\psi \\circ \\kappa \\circ h|_{\\interior C} = \\psi \\circ f \\circ h|_{\\interior C}$. \n Observe that thus $\\kappa \\circ h \\in ((f_{\\kappa}^\\psi)_*)^{-1} (\\mathcal{N}_{loc})$. \n As the elementary neighborhoods form a subbase of the compact open $C^\\infty$-topology, Lemma \\ref{lem: atlaschoice} together with continuity of $(f_{\\kappa}^\\psi)_*$ yields\n \\begin{equation}\\label{eq: locpush}\n \\lfloor C, \\kappa (W)\\rfloor \\cap \\bigcap_{k=1}^N \\mathcal{N}^{r} (\\kappa \\circ h; A_k,(U_k, \\phi_k),(E,\\id_E),p_k, \\epsilon_k) \\subseteq ((f_{\\kappa}^\\psi)_*)^{-1} (\\mathcal{N}_{loc}).\n \\end{equation}\n Recall from the proof of \\cite[Proposition 4.23 (a)]{glockomega} that the compact sets $A_k$ are contained by construction in $\\interior C$. \n Thus one easily deduces from \\eqref{eq: locpush} that \n \\begin{displaymath}\n \\mathcal{N}^{0} (h; C,(U, \\phi),(W,\\kappa),p_1, 1) \\cap \\bigcap_{k=1}^N \\mathcal{N}^{r} (h; A_k,(U_k, \\phi_k),(W,\\kappa),p_k, \\epsilon_k) \\subseteq (f_*)^{-1} (\\mathcal{N})\n \\end{displaymath}\n Summing up, we see that $(f_*)^{-1} (\\mathcal{N})$ is a neighborhood of $h$. \n\n\n Further, this \\emph{finite} family of neighborhoods controls the behavior of mappings only on a pre chosen compact set $C$ (which depends of course on $h$). \n \n \\textbf{Step 3} \\emph{Preimages of basic neighborhoods are open} \n Let $\\mathcal{M} = \\bigcap_{i \\in \\mathbb{N}} \\mathcal{N}_i$ be a basic neighborhood of $f\\circ h \\in C^\\infty (M,Y)$ with $\\{A_k\\}_{k \\in \\mathbb{N}}$ its the underlying compact family.\n We will prove that for arbitrary $g \\in (f_*)^{-1} (\\mathcal{M})$ the preimage is a neighborhood of $g$.\n Choose with Proposition \\ref{basicnbhsisbasis} a basic neighborhood of $f\\circ g$ which is contained in $\\mathcal{M}$. \n Replacing $\\mathcal{M}$ with this basic neighborhood, it suffices thus to consider the case $g=h$. \n Splitting each $A_k$ as in Step 1 we may assume without loss of generality that each $\\mathcal{N}_i$ is of the form considered in Step 2.\n Use \\cite[30.C.10]{cech} to construct for every $A_k$ a compact neighborhood $C_k$ such that $\\{C_k\\}_{k \\in \\mathbb{N}}$ is locally finite.\n Now we proceed for every elementary neighborhood $\\mathcal{N}_k$ as above (replace $C$ in Step 2 by $C_k$ and shrink $C_k$ if necessary!). 
\n Since the family $\\{C_k\\}_{k \\in \\mathbb{N}}$ is locally finite, we end up with a basic neighborhood $\\mathcal{M}_h$ around $h$ which is mapped by $f_*$ into $\\mathcal{M}$.\n We conclude that $(f_*)^{-1} (\\mathcal{M})$ is a neighborhood of $h$, and hence of each of its elements.\n Hence preimages of basic neighborhoods under $f_*$ are open in $C_{\\textup{vS}}^\\infty (M,X)$, so $f_*$ is continuous.\n\\end{proof}\n\n\n\nAs an application, we can now identify (as topological spaces) spaces of maps into a product with products of spaces of mappings to the factors.\n\n\\begin{theorem}\n\t\\label{product theorem}\n\tLet $ M $ be a finite-dimensional manifold, and let $ X_1 $ and $ X_2 $ be smooth manifolds modeled on locally convex vector spaces $ E_1 $ and $E_2$, respectively. Then \n\t\\begin{align*}\n\t\t\\iota : C_{\\textup{vS}}^\\infty (M, X_1 \\times X_2) \\to C_{\\textup{vS}}^\\infty (M, X_1) \\times C_{\\textup{vS}}^\\infty (M, X_2), \\quad f \\mapsto (\\pr_1 \\circ f, \\pr_2 \\circ f)\n\t\\end{align*}\n\tis a homeomorphism, where for \\( i \\in \\lbrace 1,2 \\rbrace \\) \\( \\pr_i \\colon X_1 \\times X_2 \\to X_i \\) is the canonical projection.\n\\end{theorem}\n\\begin{proof}\n\tClearly $ \\iota $ is a bijection, and it is continuous by Proposition \\ref{postcomposition}. We will prove that $ \\iota^{-1} $ is continuous, i.e. that $ \\iota $ is open.\n\tBy \\eqref{locally convex product} in Proposition \\ref{locally convex prop}, the set \n\t\t\\begin{displaymath}\n\t\t\t\\mathcal{P} := \\lbrace p \\circ \\pr_i : p \\mbox{ is a continuous seminorm on } E_i \\rbrace\n\t\t\\end{displaymath} \n\tis a generating family of seminorms on $ E_1 \\times E_2 $.\n\tConsider a basic neighborhood $ \\mathcal{U} = \\bigcap_{i\\in\\Lambda} \\mathcal{N}_i $ of $ f \\in C_{\\textup{vS}}^\\infty (M,X_1 \\times X_2) $, where each\n\t$\n\t\t\\mathcal{N}_i = \\mathcal{N}^{r_i} (f;A_i,(U_i,\\phi_i),(V_i,\\psi_i),p_i,\\epsilon_i).\n\t$\n\tBy Proposition \\ref{generatingfamilies} we may assume that each $ p_i \\in \\mathcal{P} $. Take an arbitrary $ i \\in \\Lambda $. If $ p_i = p\\circ \\pr_1 $ for some continuous seminorm $ p $ on $ E_1 $, let\n\t\\begin{align*}\n\t\t\\mathcal{M}_i = \\mathcal{N}^{r_i} (\\pr_1 \\circ f; A_i,(U_i,\\phi_i),(V_i,\\psi_i),p,\\epsilon_i) && \\mbox{and} && \\mathcal{M}_i' = C_{\\textup{vS}}^\\infty (M,X_2).\n\t\\end{align*}\n\tIf $ p_i = q\\circ \\pr_2 $ for a continuous seminorm $ q $ on $ E_2 $, reverse the roles of $ \\mathcal{M}_i $ and $ \\mathcal{M}_i' $.\n\t\n\tNow suppose without loss of generality that $ p_i = p \\circ \\pr_1 $ for some continuous seminorm $ p $ on $ E_1 $. For $ g \\colon \\mathbb{R}^m \\supseteq \\phi_i (A_i) \\to \\psi_i (V_i) \\subseteq E_1 \\times E_2 $, one has\n\t\\begin{displaymath}\n\t\\dd^{(k)} g = \\dd^{(k)} (\\pr_1 \\circ g , \\pr_2 \\circ g) = ( \\dd^{(k)} (\\pr_1 \\circ g), \\dd^{(k)} (\\pr_2 \\circ g) ),\n\t\\end{displaymath}\n\tso the condition\n\t\\( p_i \\left( \\dd^{(k)}g (a;\\alpha) \\right) < \\epsilon_i \\)\n\tis equivalent to\n\t\\( p \\left( \\dd^{(k)} (\\pr_1 \\circ g) (a;\\alpha) \\right) < \\epsilon_i \\).\n\tHence $ \\mathcal{M}_i \\times \\mathcal{M}_i' = \\iota (\\mathcal{N}_i) $. 
Since $ \\iota $ is bijective one has\n\t\\begin{displaymath}\n\t\t\\iota (\\mathcal{U}) = \\bigcap_{i\\in\\Lambda} \\iota (\\mathcal{N}_i) = \\bigcap_{i\\in\\Lambda} \\mathcal{M}_i \\times \\bigcap_{i\\in\\Lambda} \\mathcal{M}_i'.\n\t\\end{displaymath}\n\tSo $ \\iota $ is open, and a homeomorphism.\n\\end{proof}\n\n\\begin{corollary}\n\t\\label{product corollary}\n\tIf $ Q $ is a compact smooth manifold, then following map is continuous\n\t\\begin{align*}\n\t\t\\chi \\colon C_{\\textup{vS}}^\\infty (M,X) \\to C_{\\textup{vS}}^\\infty (Q \\times M, Q \\times X), \\quad f \\mapsto \\id \\times f.\n\t\\end{align*}\n\\end{corollary}\n\\begin{proof}\n\tBy Theorem \\ref{product theorem} it suffices to show that the maps\n\t\\begin{align*}\n\t\t\\chi_1 \\colon C_{\\textup{vS}}^\\infty (M,X) &\\to C_{\\textup{vS}}^\\infty (Q\\times M, Q) && &\\mbox{and} && \\chi_2 \\colon C_{\\textup{vS}}^\\infty(M,X) &\\to C_{\\textup{vS}}^\\infty (Q \\times M, X) \\\\\n\t\tf &\\mapsto \\pr_1 \\circ (\\id \\times f) && & && f &\\mapsto \\pr_2 \\circ (\\id \\times f)\n\t\\end{align*}\n\tare continuous. For $ (q,m) \\in Q \\times M $, one has\n\t\\begin{align*}\n\t\t\\chi_1(f)(q,m) &= \\pr_1 \\circ (\\id \\times f) (q,m) = \\pr_1 (q,f(m)) = q = \\pr_1 (q,m), \\\\ \n\t\t\\chi_2(f)(q,m) &= \\pr_2 \\circ (\\id \\times f) (q,m) = \\pr_2 (q,f(m)) = f(m) \\\\ &= f \\circ \\pr_2 (q,m) = \\pr_2^* (f) (q,m).\n\t\\end{align*}\n\tThe map \\( \\chi_1 \\) is constant in $ f $, hence continuous, and the map \\( \\chi_2 = \\pr_2^* \\). Since $ Q $ is compact, $ \\pr_2 $ is proper, so Proposition \\ref{precomposition} implies that $ \\chi_2 = \\pr_2^* $ is also continuous.\n\\end{proof}\n\n\\section{The fine very strong topology}\n\nIn the end, we would like a structure on \\( C^\\infty (M,X) \\) as a locally convex manifold, where \\( M \\) is a finite-dimensional smooth manifolds and \\( X \\) is a manifold modeled on a locally convex vector space \\( E \\), but for this purpose the very strong topology is not fine enough. A first step in the direction of making \\( C^\\infty (M,X) \\) into a locally convex manifold would be having a similar structure on \\( C^\\infty (M,E) \\). One might hope that \\( C_{\\textup{vS}}^\\infty (M,E) \\) itself with the vector space structure induced by pointwise operations would be a locally convex vector space. But as Corollary \\ref{supportcorollary} points out, this is not the case when $ E $ is a (non-trivial) locally convex vector space and $ M $ is a non-compact manifold. However, we will see in the next section that the subspace of \\( C_{\\textup{vS}}^\\infty (M,E) \\) consisting of maps with compact support, denoted \\( C_{\\text{vS,c}}^\\infty (M,E) \\), is a locally convex vector space. Following \\cite{michor}, we refine the topology on \\( C_{\\textup{vS}}^\\infty (M,E) \\) to obtain a structure on \\( C^\\infty (M,E) \\) as a smooth manifold modeled on \\( C_{\\text{vS,c}}^\\infty (M,E) \\). The resulting topology on \\( C^\\infty (M,E) \\), or more generally \\( C^\\infty (M,X) \\), is called the \\emph{fine very strong} topology on $ C^\\infty (M,X) $. The space \\( C^\\infty (M,X) \\) equipped with the fine very strong topology is denoted $ C_{\\textup{fS}}^\\infty (M,X) $. \n\nFortunately, the results of the previous sections are easily extended to hold in the fine very strong topology. 
This is done in Proposition \\ref{fsmoothprop}.\n\nIt is a folklore fact (Proposition \\ref{prop: topologiescoincide}) that in the finite-dimensional case, the very strong topology is equivalent to the $ \\mathcal{D} $-topology as described in \\cite[36]{michor}.\\footnote{The $\\mathcal{D}$-topology was defined using jet bundles (also reviewed in Appendix \\ref{folklore}). Our treatment of the topology has the advantage that only elementary arguments are needed. Further, only our approach generalizes to arbitrary locally convex target manifolds.} Consequently, the fine very strong topology is equivalent to the $ \\mathcal{FD} $-topology defined in \\cite[40]{michor}.\n\n\\begin{proposition}\n\t\\label{supportprop}\n\tLet $ M $ be a finite-dimensional smooth manifold and $ E $ be a locally convex vector space. Consider a sequence $ \\lbrace f_n \\rbrace_{n\\in \\mathbb{N}} \\subseteq C_{\\textup{vS}}^\\infty (M,E) $ which converges in the very strong topology towards \\( f \\in C^\\infty (M,E) \\). \n\tThen there exist a compact $ K \\subseteq M $ and an $ N \\in \\mathbb{N} $ such that for all $ n \\geq N $ we have $$ \\osupp{f}{f_n} := \\lbrace y \\in M : f_n (y) \\neq f(y) \\rbrace \\subseteq K. $$\n\\end{proposition}\n\\begin{proof}\n\tFor $ f \\in C_{\\textup{vS}}^\\infty (M,E) $, we will show that $ f $ cannot be a limit of $ \\lbrace f_n \\rbrace $ if for all compact $ K \\subseteq M $ and all $ N \\in \\mathbb{N} $ there exists $ n \\geq N $ such that $ \\osupp{f}{f_n} \\nsubseteq K $.\n\n\tLet \\( \\lbrace A_n \\rbrace_{n \\in \\mathbb{N}} \\) be a locally finite exhaustion of \\( M \\) by compact sets (which exists by Lemma \\ref{dugundjifact} since \\( M \\) is \\( \\sigma \\)-compact), and for \\( n \\in \\mathbb{N} \\) set \\( K_n = \\bigcup_{i=1}^n A_i \\).\n\n\tConstruct a basic neighborhood of \\( f \\) recursively, using the following procedure. Let \\( n_0 = 1 \\), \\( m_0 = 1 \\). For \\( i \\in \\mathbb{N} \\), choose \\( n_i > n_{i-1} \\) such that \\( \\osupp{f}{f_{n_i}} \\nsubseteq K_{m_{i-1}} \\). By construction there exists \\( m_i > m_{i-1} \\) such that \\( \\osupp{f}{f_{n_i}} \\cap \\left( M \\setminus K_{m_{i-1}} \\right) \\cap A_{m_i} \\neq \\emptyset \\). Take any \\( x \\) in this nonempty set. Since \\( f (x) \\neq f_{n_i} (x) \\), there exists a continuous seminorm \\( p_i \\) on \\( E \\) such that \\( 2 \\epsilon_i := p_i (f_{n_i} (x) - f(x)) > 0 \\), and then\n\t\\[\n\t\tf_{n_i} \\notin \\mathcal{N}_i := \\mathcal{N}^0 \\left( f; A_{m_i}, p_i, \\epsilon_i \\right).\n\t\\]\n\tNow \\( \\mathcal{U} := \\bigcap_{i \\in \\mathbb{N}} \\mathcal{N}_i \\) is a basic neighborhood of \\( f \\) such that for all \\( N \\in \\mathbb{N} \\) there exists \\( n \\geq N \\) such that \\( f_n \\notin \\mathcal{U} \\). So the sequence \\( \\lbrace f_n \\rbrace_{n \\in \\mathbb{N}} \\) does not converge to \\( f \\).\n\\end{proof}\n\\begin{remark}\n\tOne can easily prove the proposition above for \\( E \\) a locally convex manifold rather than a locally convex vector space, by ``hacking'' the compact sets \\( A_i \\) in the proof into smaller compact sets that are contained in charts.\n\\end{remark}\n\\begin{corollary}\n\t\\label{supportcorollary}\n\tLet $ M $ be a finite-dimensional non-compact manifold and $ E \\neq \\lbrace 0 \\rbrace $ a locally convex vector space. 
Then $ C_{\\textup{vS}}^\\infty (M,E) $ with the vector space structure induced by pointwise operations is not a topological vector space.\n\\end{corollary}\n\\begin{proof}\n\tLet $ f \\in C_{\\textup{vS}}^\\infty (M,E) $ be a non-zero constant map. Then Proposition \\ref{supportprop} shows that $ \\lim_{\\lambda \\to 0} (\\lambda f) \\neq 0 = (\\lim_{\\lambda \\to 0} \\lambda) f $, hence scalar multiplication is not continuous.\n\\end{proof}\n\\begin{remark}\n\t\\label{supportcorollaryremark}\n\tAlthough $ C_{\\textup{vS}}^\\infty (M,E) $ is not a topological vector space, it is a topological group under pointwise addition by Lemma \\ref{continuous addition}. And $ C_{\\textup{vS}}^\\infty (M,\\mathbb{R}) $ is a topological ring under the pointwise operations induced by addition and multiplication in $ \\mathbb{R} $.\n\\end{remark}\n\\begin{definition}[The fine very strong topology]\n\tDefine an equivalence relation $ \\sim $ on $ C^\\infty (M,X) $ by declaring that $ f \\sim g $ whenever $$ \\csupp{f}{g} := \\overline{\\lbrace y \\in M : f(y) \\neq g(y) \\rbrace} $$ is compact. Now refine the very strong topology on $ C^\\infty (M,X) $ by demanding that the equivalence classes are open in $ C^\\infty (M,X) $. In other words, equip $ C^\\infty (M,X) $ with the topology generated by the very strong topology and the equivalence classes. This is the \\emph{fine very strong topology} on $ C^\\infty (M,X) $. We write $ C_{\\textup{fS}}^\\infty (M,X) $ for $ C^\\infty (M,X) $ equipped with the fine very strong topology.\n\\end{definition}\n\\begin{remark}\n\t\\label{fvs remark}\n\tHere is another way to look at the fine very strong topology. Start with $ C_{\\textup{vS}}^\\infty (M,X) $ and equip the equivalence classes $ [f] $ with the subspace topology. Then $$ C_{\\textup{fS}}^\\infty (M,X) = \\bigsqcup_{[f] \\in C^\\infty (M,X)\/\\sim} [f] $$ as topological spaces.\n\tTaking the family of all sets of the form $ \\mathcal{U} \\cap [f] $, where $ \\mathcal{U} $ runs through the basic neighborhoods in $ C^\\infty (M,X) $ and $ [f] $ runs through the equivalence classes, yields a basis for the fine very strong topology on $ C^\\infty (M,X) $.\n\\end{remark}\n\\begin{remark}\n\tIf $ f \\in C^\\infty (M,X) $ is a proper map and $ f \\sim \\hat{f} $, then $ \\hat{f} $ is also proper. Indeed, if $ K \\subseteq X $ is compact, then $ \\hat{f}^{-1} (K) \\subseteq f^{-1}(K) \\cup \\csupp{f}{\\hat{f}} $. Since closed subspaces of compact spaces are compact, $ \\hat{f}^{-1} (K) $ is compact.\n\\end{remark}\nWe would obviously like the results of the previous sections to remain true in the fine very strong topology. Fortunately, it is easy to extend the results to this case using the following lemma.\n\\begin{lemma}\n\t\\label{corollarylemma}\n\tLet $ T $ be a topological space, and $ \\zeta \\colon T \\to C^\\infty (M,X) $ a function. If $ \\zeta $ is continuous as a map to $ C_{\\textup{vS}}^\\infty (M,X) $ and $ \\zeta^{-1} ([f]) \\subseteq T $ is open for all equivalence classes $ [f] \\subseteq C^\\infty (M,X) $, then $ \\zeta $ is continuous as a map to $ C_{\\textup{fS}}^\\infty (M,X) $.\n\\end{lemma}\n\\begin{proof}\n\tThe map $ \\zeta $ is continuous if preimages of basis elements are open. 
Basis elements for $ C_{\\textup{fS}}^\\infty (M,X) $ are of the form $ \\mathcal{U} \\cap [f] $ for some basic neighborhood $ \\mathcal{U} $ and some equivalence class $ [f] $, and $ \\zeta^{-1}(\\mathcal{U}\\cap [f]) = \\zeta^{-1}(\\mathcal{U}) \\cap \\zeta^{-1} ([f]) $.\n\\end{proof}\n\\begin{proposition}\n\t\\label{fsmoothprop}\n\tTheorem \\ref{compiscont}, Proposition \\ref{precomposition}, Proposition \\ref{postcomposition}, Theorem \\ref{product theorem}, and Corollary \\ref{product corollary} still hold if we in every case replace the very strong topology with the fine very strong topology.\n\t\n\tIn the cases that we consider $ \\Prop_{\\textup{vS}} (M,N) $, replace this with $ \\Prop_{\\mbox{fS}} (M,N) $, by which is meant the subset $ \\Prop (M,N) \\subseteq C_{\\textup{fS}}^\\infty (M,N) $ equipped with the subspace topology.\n\\end{proposition}\n\\begin{proof}\n\tThe proof is case by case. In all cases except for the generalization of Theorem \\ref{product theorem} and its corollary, it suffices by \\ref{corollarylemma} to check that preimages of equivalence classes are open. Unless otherwise stated, letters (such as $ f $ or $ N $) are always assumed to have the same role here as in the statement of the corresponding result.\n\n\t\\textit{Theorem \\ref{compiscont} (the full composition map is continuous).} Suppose that $ f \\sim \\hat{f} $ and $ h \\sim \\hat{h} $. We have $ \\csupp{h \\circ f}{ \\hat{h} \\circ \\hat{f} } \\subseteq \\csupp{f}{\\hat{f}} \\cup f^{-1}\\left( \\csupp{h}{\\hat{h}} \\right) $. The right hand side is compact since $ f $ is proper, so $ \\csupp{h \\circ f}{ \\hat{h} \\circ \\hat{f} } $ is a closed subset of a compact space, hence compact. By definition this means that $ h \\circ f \\sim \\hat{h} \\circ \\hat{f} $. \n\t\t\n\tConsider an equivalence class $ [g] \\subseteq C_{\\textup{fS}}^\\infty (M,X) $. By what we just observed, if $ h \\circ f \\sim g $ and $ \\hat{f} \\sim f $ and $ \\hat{h} \\sim h $, then $ \\hat{h} \\circ \\hat{f} \\sim g $. Hence $$ \\Gamma^{-1} ([g]) = \\bigcup_{h \\circ f \\sim g} [f] \\times [h], $$ which is open.\n\t\n\t\\textit{Proposition \\ref{precomposition} (precomposition is continuous).} If $ h \\sim \\hat{h} $, then $ h \\circ f \\sim \\hat{h} \\circ f $ by the same argument as before. So for any equivalence class $ [g] \\subseteq C^\\infty (M,X) $, we have $$ (f^*)^{-1} ([g]) = \\bigcup_{h \\circ f \\sim g} [h]. $$\n\t\n\t\\textit{Proposition \\ref{postcomposition} (postcomposition is continuous).} If $ h, \\hat{h} \\in C^\\infty (M,X) $, then it is easy to see that $ \\csupp{f \\circ h}{f \\circ \\hat{h}} \\subseteq \\csupp{h}{\\hat{h}} $. So if $ h \\sim \\hat{h} $, then $ f \\circ h \\sim f \\circ \\hat{h} $, since closed subsets of compact spaces are compact. It follows that for any equivalence class $ [g] \\subseteq C^\\infty (M,X) $, we have $ (f_*)^{-1}([g]) = \\bigcup_{f \\circ h \\sim g} [h]. $ \n\t\n\t\\textit{Theorem \\ref{product theorem} (the product theorem).} For the same reasons as in the proof of the very strong version of the theorem, $ \\iota $ is clearly a bijective continuous map. So by Lemma \\ref{corollarylemma} it suffices to show that images of equivalence classes are open. \n\t\n\tObserve that for $ f, \\hat{f} \\in C^\\infty (M, X_1 \\times X_2) $, we have $ \\csupp{f}{\\hat{f}} = \\csupp{\\pr_1 \\circ f}{\\pr_1 \\circ \\hat{f}} \\cup \\csupp{\\pr_2 \\circ f}{\\pr_2 \\circ \\hat{f}} $. 
Hence $ f \\sim \\hat{f} $ if and only if $ \\pr_1 \\circ f \\sim \\pr_1 \\circ \\hat{f} $ and $ \\pr_2 \\circ f \\sim \\pr_2 \\circ \\hat{f} $. Another way of stating this fact is $ \\iota ([f]) = [\\pr_1 \\circ f] \\times [\\pr_2 \\circ f] $ for all $ f \\in C^\\infty (M,X_1 \\times X_2) $.\n\t\n\t\\textit{Corollary \\ref{product corollary}.} Same proof as in the very strong case.\n\\end{proof}\n\n\n\\section{The manifold structure on smooth vector valued functions}\\label{sect: smmfd}\nThroughout this section, $ M $ is a finite-dimensional manifold, $ E $ is a locally convex vector space, and \\( X \\) is a locally convex manifold.\n\nRecall from Corollary \\ref{supportcorollary} that \\( C_{\\textup{vS}}^\\infty (M,E) \\) with pointwise operations is not a locally convex vector space; in fact, it is not even a topological vector space. Neither is \\( C_{\\textup{fS}}^\\infty (M,E) \\). However, we will in this section make \\( C_{\\textup{fS}}^\\infty (M,E) \\) into a locally convex manifold. This is a first step towards making \\( C_{\\textup{fS}}^\\infty (M,X) \\) into a locally convex manifold (but we will not do this). The modeling space for \\( C_{\\textup{fS}}^\\infty (M,E) \\) as a locally convex manifold is \\( C_{\\text{vS,c}}^\\infty (M,E) \\), defined below.\n\n\\begin{definition}\n\tWe define $ C_{\\text{vS,c}}^\\infty (M,E) $ to be the subspace of $ C_{\\textup{vS}}^\\infty (M,E) $ consisting of the functions with compact support, i.e. \n\t$$ C_{\\text{vS,c}}^\\infty (M,E) = \\lbrace f \\in C_{\\textup{vS}}^\\infty (M,E) : \\mbox{$\\csupp{f}{0}$ is compact} \\rbrace $$ equipped with the subspace topology from $ C_{\\textup{vS}}^\\infty (M,E) $.\n\tNote that $ C_{\\text{vS,c}}^\\infty (M,E) = [0] $ in $ C_{\\textup{fS}}^\\infty (M,E) $.\n\\end{definition}\n\nAs a first step towards proving that \\( C_{\\text{vS,c}}^\\infty (M,E) \\) with pointwise operations is a locally convex vector space, we show that \\( C^\\infty (M,E) \\) with pointwise addition is a topological group in the very strong and fine very strong topologies.\n\n\\begin{lemma}\n\t\\label{continuous addition}\n\tAddition\n\t\\begin{align*}\n\t\t\\Sigma \\colon C^\\infty (M,E) \\times C^\\infty (M,E) &\\to C^\\infty (M,E), \\quad (f,h) &\\mapsto f+h = \\left[ m \\mapsto f(m) + h(m) \\right]\n\t\\end{align*}\n\tis continuous when $ C^\\infty (M,E) $ is equipped with the very strong topology or fine very strong topology.\n\\end{lemma}\n\\begin{proof}\n\tWe prove the assertion only for the very strong topology as the proof carries over verbatim to the fine very strong topology.\n\tBy Theorem \\ref{product theorem} there is a canonical homeomorphism $ \\iota \\colon C_{\\textup{vS}}^\\infty (M,E) \\times C_{\\textup{vS}}^\\infty (M,E) \\cong C_{\\textup{vS}}^\\infty (M,E \\times E ) $. Since addition $ S \\colon E \\times E \\to E $ in $ E $ is smooth, the induced postcomposition $ S_* \\colon C_{\\textup{vS}}^\\infty (M,E\\times E) \\to C_{\\textup{vS}}^\\infty (M,E) $ is continuous by Proposition \\ref{postcomposition}.\n\tHence $ \\Sigma = S_* \\circ \\iota $ is continuous.\n\\end{proof}\n\nOnce we have established the following proposition, it will be easy to make \\( C_{\\textup{fS}}^\\infty (M,E) \\) into a locally convex manifold modeled on \\( C_{\\text{vS,c}}^\\infty (M,E) \\). 
The hard work lies here.\n\\begin{proposition}\n\t\\label{locally convex vs prop}\n\tThe topological space $ C_{\\text{vS,c}}^\\infty (M,E) $ with vector space structure induced by pointwise operations in $ E $ is a locally convex vector space.\n\\end{proposition}\n\\begin{proof}\n\tIn Lemma \\ref{continuous addition} we showed that addition is continuous, and the topological space $ C_{\\text{vS,c}}^\\infty (M,E) $ is Hausdorff since the compact open \\( C^\\infty \\)-topology on \\( C^\\infty(M,E) \\) is Hausdorff and the very strong topology is finer than the compact open \\( C^\\infty \\)-topology. It is therefore only necessary to check that scalar multiplication is continuous in order to conclude that $ C_{\\text{vS,c}}^\\infty (M,E) $ is a topological vector space. Finally, we must verify that this topological vector space is locally convex.\n\t\n\t\\textit{Scalar multiplication $\\mu \\colon \\mathbb{R} \\times C_{\\text{vS,c}}^\\infty (M,E) \\to C_{\\text{vS,c}}^\\infty (M,E),\\ \n\t\t(\\lambda, f) \\mapsto \\lambda f $ is continuous.} \n\tLet $ (\\lambda,f) \\in \\mathbb{R} \\times C_{\\text{vS,c}}^\\infty (M,E) $, and consider a basic neighborhood $ \\mathcal{V} = \\bigcap_{i \\in \\Lambda} \\mathcal{N}_i $ of $ \\lambda f $, where each $\\mathcal{N}_i = \\mathcal{N}^{r_i} (\\lambda f; A_i, (U_i,\\phi_i),p_i,\\epsilon_i) $ is an elementary neighborhood of $ \\lambda f $. We will show that there exist open sets $ I \\subseteq \\RR $ and $ \\mathcal{U} \\subseteq C_{\\textup{vS}}^\\infty (M,E) $ such that $ \\mu (I \\times \\mathcal{U}) \\subseteq \\mathcal{V} $. \n\t\n\tSince $ \\csupp{f}{0} $ is compact, only finitely many $ A_i $ intersect $ \\csupp{f}{0} $, say only for $ i = i_1, \\dots , i_n $. Define $ \\epsilon := \\min (\\epsilon_{i_1},\\dots,\\epsilon_{i_n}) $. \n\t\n\tSet $m_1 := \\max\\left\\{\\sup_{1 \\leq j \\leq n} \\Vert f \\circ \\phi_{i_j}^{-1} \\Vert (r_{i_j}, \\phi_{i_j}(A_{i_j}),p_{i_j}), 1\\right\\}$ and \n\t\\[ I := B_{\\frac{\\epsilon}{2 m_1 }}^1 (\\lambda) = \\left] \\lambda - \\frac{\\epsilon}{2 m_1}, \\lambda + \\frac{\\epsilon}{2 m_1} \\right[ . \\] \n\t\n\tDefine $ m_2 := \\sup \\lbrace | t | : t \\in I \\rbrace $, and set $ \\mathcal{U} := \\bigcap_{i \\in \\Lambda} \\mathcal{N}^{r_i} \\left( f;A_i,(U_i,\\phi_i),p_i,\\frac{\\epsilon_i}{2 m_2} \\right). $\n\tSuppose $ (\\lambda',f') \\in I \\times \\mathcal{U} $. \n\tFor all $ i \\in \\Lambda $, $ x \\in A_i $, $ 0 \\leq k \\leq r_i $, and $ \\alpha \\in \\lbrace e_1, \\dots , e_{\\dim M} \\rbrace^k $, we have\n\t\\begin{align*}\n\t\t& p_i \\left( \\dd^{(k)} (\\lambda' f' \\circ \\phi_i^{-1} - \\lambda f \\circ \\phi_i^{-1})(\\phi_i(x);\\alpha) \\right) \\\\\n\t\t\\leq& | \\lambda' | p_i \\left( \\dd^{(k)} (f' \\circ \\phi_i^{-1} - f \\circ \\phi_i^{-1})(\\phi_i(x);\\alpha) \\right) + | \\lambda' - \\lambda | p_i \\left( \\dd^{(k)} (f \\circ \\phi_i^{-1})(\\phi_i(x);\\alpha) \\right) \\\\\n\t\t<& \\frac{\\epsilon_i}{2} + \\frac{\\epsilon}{2 m_1} p_i \\left( \\dd^{(k)} (f \\circ \\phi_i^{-1})(\\phi_i(x);\\alpha) \\right) =: C.\n\t\\end{align*}\n\tIf $ i \\notin \\lbrace i_1,\\dots,i_n \\rbrace $, then $ p_i \\left( \\dd^{(k)} (f \\circ \\phi_i^{-1})(\\phi_i(x);\\alpha) \\right) = 0 $, in which case $ C \\leq \\epsilon_i $. And if $ i \\in \\lbrace i_1, \\dots,i_n \\rbrace $, then $ \\epsilon \\leq \\epsilon_i $ and $ p_i \\left( \\dd^{(k)} (f \\circ \\phi_i^{-1})(\\phi_i(x);\\alpha) \\right) \\leq m_1 $, in which case we still have $ C \\leq \\epsilon_i $. 
Hence $ \\lambda' f' \\in \\mathcal{V} $, and $ \\mu (I \\times \\mathcal{U}) \\subseteq \\mathcal{V} $. Consequently, $\\mu$ is continuous.\n\t\n\t\\textit{The space is locally convex.} We have now established that $ C_{\\mbox{vS,c}}^\\infty (M,E) $ is a topological vector space. It remains to see that this topological vector space is locally convex. For $ r \\in \\mathbb{N}_0 $, $ (U,\\phi) $ a chart on $ M $, $ A \\subseteq U $ compact, and $ p $ a continuous seminorm on $ E $, define\n\t\\begin{align*}\n\t\t\\Vert\\cdot \\Vert(r,A,(U,\\phi),p) \\colon C_{\\mbox{vS,c}}^\\infty (M,E) \\to [0,\\infty), \\quad f \\mapsto \\Vert f \\circ \\phi^{-1} \\Vert (r,\\phi (A),p). \n\t\\end{align*}\n\tThis is a seminorm on $ C_{\\mbox{vS,c}}^\\infty (M,E) $. Consider a family $ \\lbrace \\Vert\\cdot \\Vert (r_i,A_i,(U_i,\\phi_i),p_i) \\rbrace_{i\\in \\Lambda} $ of such seminorms, where $ \\lbrace A_i \\rbrace_{i \\in \\Lambda} $ is locally finite. For a family $ \\lbrace \\epsilon_i \\rbrace_{i \\in \\Lambda} $ of positive numbers define $ q \\colon C_{\\mbox{vS,c}}^\\infty (M,E) \\to [0,\\infty) $ by \\[ q(f) = \\sup_{i\\in\\Lambda} \\epsilon_i^{-1} \\Vert f \\Vert (r_i,A_i,(U_i,\\phi_i),p_i). \\] \n\tEvery $ f \\in C_{\\mbox{vS,c}}^\\infty (M,E) $ has compact support, so $ \\operatorname{supp}(f,0) $ intersects only finitely many of the $ A_i $, from which it follows that $ \\Vert f \\Vert (r_i,A_i,(U_i,\\phi_i),p_i) \\neq 0 $ for only finitely many $ i \\in \\Lambda $. Hence $ q(f) < \\infty $, so $ q $ is well-defined. Clearly $ q $ is a seminorm. Also $ q $ is continuous as for all $ \\lambda > 0 $, the preimage $ q^{-1} [0,\\lambda) $ is a basic neighborhood of $ 0 $, e.g.\\ \n\t\\[ q^{-1}[0,1) = \\left( \\bigcap_{i \\in \\Lambda } \\mathcal{N}^{r_i} (0;A_i,(U_i,\\phi_i),p_i,\\epsilon_i) \\right) \\cap C_{\\text{vS,c}}^\\infty (M,E). \\] So every basic neighborhood of 0 arises as a preimage of a continuous seminorm. Consequently, $ C_{\\text{vS,c}}^\\infty (M,E) $ is locally convex (see \\cite[\\S 18]{koethe}).\n\\end{proof}\n\nWe will now provide an alternative description of the topology on $ C_{\\text{vS,c}}^\\infty (M,E) $ as an inductive limit of certain locally convex spaces.\nThis characterization also implies that $ C_{\\text{vS,c}}^\\infty (M,E) $ is a locally convex space (thus providing an elegant proof of Proposition \\ref{locally convex vs prop}).\nNote, however, that though the proof of Proposition \\ref{locally convex vs prop} is a bit cumbersome, it is also completely elementary and does not use auxiliary results on inductive limits.\n\n\n\\begin{definition}\nLet $K \\subseteq M$ be a compact subset and $E$ be a locally convex space.\nThen we define \n \\begin{displaymath}\n C^\\infty_K (M,E) := \\{f\\in C^\\infty (M,E) \\mid \\operatorname{supp} (f,0) \\subseteq K\\} \n \\end{displaymath}\n and topologize this space with the compact open $C^\\infty$-topology, i.e.\\ the topology generated by the subbase $\\mathcal{N} \\cap C^\\infty_K (M,E)$ where $\\mathcal{N}$ runs through all elementary neighborhoods of $C^\\infty_{\\text{vS}} (M,E)$.\n Recall from \\cite[Proposition 4.19]{glockomega} that $C^\\infty_K (M,E)$ is a locally convex vector space.\n\\end{definition}\n\n\\begin{remark}\n Since all functions in $C^\\infty_K (M,E)$ have compact support contained in $K$, one can prove that the compact open $C^\\infty$-topology coincides with the subspace topologies induced by $C_{\\textup{vS}}^\\infty (M,E)$ and $C_{\\textup{fS}}^\\infty (M,E)$.\n However, we will not need this. 
\n\\end{remark}\n\nDenote by $\\mathcal{K} (M)$ the set of compact subsets of $M$. \nObserve that as sets $C^\\infty_{\\text{vS,c}} (M,E) = \\bigcup_{K \\in \\mathcal{K} (M)} C^\\infty_K (M,E)$.\nWe claim that the topology on the compactly supported functions is determined by the smaller locally convex spaces:\nTo see this, recall that with respect to inclusion, $\\mathcal{K} (M)$ is a directed set. \nFurther, for $K,L \\in \\mathcal{K} (M)$ with $K \\subseteq L$ the canonical inclusion $\\iota_K^L\\colon C^\\infty_K (M,E) \\rightarrow C^\\infty_L (M,E)$ is continuous linear by definition of the topology.\nHence we can form the locally convex inductive limit $\\displaystyle \\lim_{\\rightarrow} C^\\infty_K (M,E)$ (cf.\\ \\cite[\\S 19 3.]{koethe}) of the family $\\{C^\\infty_K (M,E)\\}_{\\mathcal{K} (M)}$ (with respect to the canonical inclusions).\n\n\\begin{lemma}\\label{lem: indlim}\n Let $E$ be a locally convex space, then as locally convex spaces\n \\begin{displaymath}\n C^\\infty_{\\text{vS,c}} (M,E) = \\lim_{\\rightarrow} C^\\infty_K (M,E).\n \\end{displaymath}\n\\end{lemma}\n\n\\begin{proof}\n Since as sets $C^\\infty_{\\text{vS,c}} (M,E) = \\displaystyle\\lim_{\\rightarrow} C^\\infty_K (M,E)$, we only have to prove that the topologies coincide.\n However, since $M$ is $\\sigma$-compact, \\cite[Proposition 8.13 (d)]{glockomega} implies that a basis for the inductive limit topology on $C^\\infty_{\\text{vS,c}} (M,E) = \\displaystyle\\lim_{\\rightarrow} C^\\infty_K (M,E)$ is given by the basic neighborhoods of the very strong topology.\n\\end{proof}\n\n\n\\begin{proposition}\n\tFor each class $ [f] $ in $ C_{\\textup{fS}}^\\infty (M,E) $ define $ \\phi_{[f]} \\colon [f] \\to C_{\\text{vS,c}}^\\infty (M,E) $ by $ \\phi_{[f]}(g) = g-f $. \n\tThen $ \\mathcal{A} = \\lbrace (\\phi_{[f]},[f]) \\rbrace_{f \\in C^\\infty (M,E)} $ is a smooth atlas for $ C_{\\textup{fS}}^\\infty (M,E) $. Hence $ C_{\\textup{fS}}^\\infty(M,E) $ is a smooth manifold modeled on $ C_{\\text{vS,c}}^\\infty (M,E) $.\n\\end{proposition}\n\\begin{proof}\n\tWe will first show that every chart $ \\phi_{[f]} $ is a homeomorphism. First of all, note that $ \\phi_{[f]} $ is well-defined since $ g - f $ is smooth and compactly supported for $ g \\in [f] $. It is bijective with inverse $ \\phi_{[f]}^{-1} (h) = h + f $. Both $ \\phi_{[f]} $ and $ \\phi_{[f]}^{-1} $ are continuous by Lemma \\ref{continuous addition}.\n\t\n\tThe chart domains of $ \\mathcal{A} $ cover $ C_{\\textup{fS}}^\\infty $, whence we have to check that chart transformations are smooth. Let $ (\\phi_{[f]},[f]) $ and $(\\phi_{[g]},[g]) $ be charts with $ [f] \\cap [g] \\neq \\emptyset $.\n\tThen $ [f] = [g] $ and $ \\phi_{[g]} \\circ \\phi_{[f]}^{-1}(h) = h + f - g $, whence it is smooth in $ h $ as addition in $C_{\\text{vS,c}}^\\infty (M,E)$ is so.\n\\end{proof}\nStructurally, the manifold $ C_{\\textup{fS}}^\\infty (M,E) $ is just a collection of (affine) copies of $ C_{\\text{vS,c}}^\\infty (M,E)$. For this reason, it is also called in \\cite{michor} a \\emph{local topological affine space}.\n\nTo construct a manifold structure on $ C_{\\textup{fS}}^\\infty (M,X) $ for an arbitrary locally convex manifold $X$ one needs a so called \\emph{local addition} on $X$ (cf.\\ \\cite{michor,KM97}).\nA local addition replaces the vector space addition. \nIt allows to ``smoothly choose'' charts on $X$ (see \\cite{stacey} for more information). 
\nThe details are similar to \\cite[Section 10]{michor} but require certain analytical tools (e.g.\\ a suitable version of the $\\Omega$-Lemma, \\cite[Appendix F]{glockomega})\\footnote{To apply the $\\Omega$-Lemma as stated in \\cite{glockomega}, one needs a topology on spaces of compactly supported sections in vector bundles. In ibid.\\ the compact open $C^\\infty$-topology is used, however by arguments similar to Lemma \\ref{lem: indlim} one proves that this topology coincides with the very strong topology.}.\n\n\\section{Application to bisection groups}\nIn this section we use our results on the very strong and the fine very strong topology to turn certain groups into topological groups.\nThe groups envisaged here are the bisection groups associated to certain Lie groupoids.\nA reference on (finite-dimensional) Lie groupoids is \\cite{mackenzie}, see \\cite{Schmeding2015,SchmedingWockel15} for infinite-dimensional Lie groupoids.\n\\begin{definition}[Lie groupoid]\n\tLet $ M $ be a finite-dimensional smooth manifold and $ G $ a smooth manifold modeled on a locally convex vector space. Then a groupoid $ \\mathcal{G} = (G \\rightrightarrows M) $ with source projection $ \\alpha \\colon G \\to M $ and target projection $ \\beta \\colon G \\to M $ is a \\emph{(locally convex) Lie groupoid} if $ \\alpha $ and $ \\beta $ are smooth submersions (i.e. locally projections), partial multiplication $ m \\colon G \\times_{\\alpha,\\beta} G \\to G $ is smooth, object inclusion $ 1 \\colon M \\to G $ is smooth, and inversion $ i \\colon G \\to G $ is smooth.\n\\end{definition}\n\\begin{definition}[Bisection group]\n\tThe \\emph{group of bisections} $ \\Bis (\\mathcal{G}) $ of a Lie groupoid $ \\mathcal{G} = (G \\rightrightarrows M) $ is the set of sections $ \\sigma \\colon M \\to G $ of $ \\alpha $ such that $ \\beta \\circ \\sigma $ is a diffeomorphism of $ M $. The group operation $ \\star $ is given by $$ (\\sigma \\star \\tau)(x) := \\sigma ((\\beta \\circ \\tau)(x)) \\tau (x). $$ With this operation, the object inclusion $ 1 \\colon M \\to G $ becomes the neutral element and the inverse of a section $ \\sigma $ is $ \\sigma^{-1}(x) = i(\\sigma((\\beta \\circ \\sigma)^{-1}(x))). $\n\\end{definition}\n\\begin{example}\n\t\\begin{enumerate}\n\t\t\\item For a finite-dimensional manifold $ M $, the \\emph{unit Lie groupoid} is the groupoid $ (M \\rightrightarrows M) $ with both source and target projection $ \\id_M $. The bisection group of this groupoid is trivial.\n\t\t\\item Let $ M $ be a finite-dimensional smooth manifold. Then $ \\mathcal{P}(M) := (M \\times M \\rightrightarrows M) $ with source projection $ \\alpha = \\pr_2 $ and target projection $ \\beta = \\pr_1 $ is a Lie groupoid. Multiplication in the groupoid is given by $ (x,y)(y,z) = (x,z) $. Postcomposition $ \\beta_* $ induces an isomorphism $ \\Bis (\\mathcal{P}(M)) \\cong \\Diff(M) $ of groups, where \\( \\Diff (M) \\) is the group of smooth diffeomorphisms of \\( M \\).\n\t\t\\item Suppose $ G $ is a locally convex Lie group, and $ * $ is the one-point space. 
Then $ (G \\rightrightarrows *) $ is a Lie groupoid with bisection group $ G $.\n\t\\end{enumerate}\n\\end{example}\n\nTo prepare the construction of a topological group structure on bisection groups, recall the following facts on diffeomorphism groups of finite-dimensional manifolds.\n\\begin{remark}\\label{rem: topgroup}\n Let $M$ be a finite-dimensional manifold and $\\Diff (M)$ be the group of smooth diffeomorphisms of $M$.\n As $\\Diff (M) \\subseteq C^\\infty (M,M)$, we can endow $\\Diff (M)$ either with the subspace topology induced by the very strong topology (write $\\Diff_{\\text{vS}} (M)$) or with respect to the fine very strong topology (we write $\\Diff_{\\text{fS}} (M)$).\n Now as a consequence of \\cite[Corollary 7.7]{michor} and Proposition \\ref{prop: topologiescoincide}, both $\\Diff_{\\text{vS}} (M)$ and $\\Diff_{\\text{fS}} (M)$ are topological groups.\n Note that $\\Diff_{\\text{fS}} (M)$ is even a locally convex Lie group by \\cite[Theorem 11.11]{michor}.\n In particular, we remark that the (subspace topology induced by the) fine very strong topology is the Lie group topology of $\\Diff (M)$.\n\\end{remark}\n\n\\begin{proposition}\\label{prop: topgp}\n\tIf $ \\Bis (\\mathcal{G}) $ is equipped with the subspace topology with respect to $ C_{\\textup{vS}}^\\infty (M,G) $ or $ C_{\\textup{fS}}^\\infty (M,G) $, then $ \\Bis (\\mathcal{G}) $ becomes a topological group.\n\\end{proposition}\n\\begin{proof}\n\tWe will prove that $ \\Bis (\\mathcal{G}) $ becomes a topological group when equipped with the subspace topology with respect to $ C_{\\textup{vS}}^\\infty (M,G) $. The case where we consider the subspace topology with respect to $ C_{\\textup{fS}}^\\infty (M,G) $ can be proven identically, since we only use results that hold in both topologies.\n\tLet $ \\Omega \\colon \\Bis (\\mathcal{G}) \\times \\Bis (\\mathcal{G}) \\to \\Bis (\\mathcal{G}) $ be the multiplication map defined by $ \\Omega (\\sigma,\\tau) = \\sigma \\star \\tau $, and let $ \\iota $ be the inclusion $ \\Bis (\\mathcal{G}) \\to C_{\\textup{vS}}^\\infty (M,G) $. Observe that we can write $$ \\Omega(\\sigma,\\tau)(x) = \\sigma((\\beta \\circ \\tau)(x))\\tau(x) = m(\\Gamma(\\beta \\circ \\tau, \\sigma)(x),\\tau(x)). $$ So $ \\iota \\circ \\Omega $ can be written as a composition of continuous maps; the diagram\n\t\\begin{align*}\n\t\t\\xymatrix{\n\t\t\t\\Bis (\\mathcal{G}) \\times \\Bis (\\mathcal{G}) \\ar[d]^{(\\beta_* \\circ \\pr_2, \\iota \\times \\iota )} \\ar@{.>}[rr]^{\\iota \\circ \\Omega} &&C_{\\textup{vS}}^\\infty (M,G)\t\\\\\n\t\t\t\\Prop_{\\textup{vS}} (M,M) \\times C_{\\textup{vS}}^\\infty(M,G) \\times C_{\\textup{vS}}^\\infty (M,G) \\ar[d]^{\\Gamma \\times \\id} && \\\\\n\t\t\tC_{\\textup{vS}}^\\infty (M,G) \\times C_{\\textup{vS}}^\\infty (M,G) \\ar[rr]^-{\\cong} & & C_{\\textup{vS}}^\\infty (M,G \\times G) \\ar[uu]^{m_*}\n\t\t\t}\n\t\\end{align*}\n\tcommutes. Here we have used that $ \\beta_* (\\Bis (\\mathcal{G})) \\subseteq \\Diff (M) \\subseteq \\Prop (M,M) $ by definition of bisections. All of the maps represented by normal arrows in the diagram are continuous by results in the previous sections. Since $ \\iota \\circ \\Omega $ is continuous, so is $ \\Omega $.\n\t\n\tLet $ \\Phi \\colon \\Bis (\\mathcal{G}) \\to \\Bis (\\mathcal{G}) $ be the inversion map. \n\tInversion $ \\Inv \\colon \\Diff_{\\text{vS}} (M) \\to \\Diff_{\\text{vS}}(M) $ is continuous by \\cite[Theorem 7.6]{michor} and Proposition \\ref{prop: topologiescoincide}. 
The diagram\n\t\\begin{align*}\n\t\t\\xymatrix{\n\t\t\t\\Bis(\\mathcal{G}) \\ar[d]^{\\beta_*} \\ar@{.>}[rr]^{\\iota \\circ \\Phi} && C_{\\textup{vS}}^\\infty(M,G) \\\\\n\t\t\t\\Diff_{\\text{vS}}(M) \\ar[r]^{\\Inv} & \\Diff_{\\text{vS}}(M) \\ar[r]^{\\sigma_*} & C_{\\textup{vS}}^\\infty (M,G) \\ar[u]^{i_*}\n\t\t\t}\n\t\\end{align*}\n\tcommutes as $ \\Phi(\\sigma)(x) = i(\\sigma((\\beta \\circ \\sigma)^{-1}(x))) = (i_*(\\sigma_*(\\Inv(\\beta_*(\\sigma))))(x)$. \n\tAll maps represented by normal arrows are continuous. Thus $ \\iota \\circ \\Phi $ and also $ \\Phi $ are continuous.\n\\end{proof}\n\nAs noted in Remark \\ref{rem: topgroup}, $\\Diff (M)$ is a topological group with respect to the subspace topologies induced by the (fine) very strong topology on $C^\\infty (M,M)$.\nThus we obtain the following morphisms of topological groups.\n\\begin{corollary}\n\tThe target projection \\( \\beta \\colon G \\to M \\) of a locally convex Lie groupoid \\( \\mathcal{G} = (G \\rightrightarrows M) \\) induces a map \\( \\beta_* \\colon \\Bis (\\mathcal{G}) \\to \\Diff(M) \\) given by postcomposition. This is a homomorphism of topological groups with respect to the very strong and fine very strong topologies on both groups.\n\\end{corollary}\n\\begin{proof}\n\tSince $\\beta_* \\colon C_{\\textup{vS}}^\\infty (M,G) \\rightarrow C_{\\textup{vS}}^\\infty (M,M)$ is continuous, so is the (co)restriction of \\( \\beta_* \\) to $\\Bis (\\mathcal{G})$ and \\( \\Diff (M) \\). The same argument holds in the fine very strong topology.\n\tThe map \\( \\beta_* \\) is also a group homomorphism, since \n\t\\[ \\left( \\beta_* ( \\sigma \\star \\tau ) \\right) (x) = \\beta \\left( \\sigma ((\\beta \\circ \\tau)(x)) \\tau (x) \\right) = \\beta ( \\sigma ( \\beta \\circ \\tau ) (x)) = \\left( \\beta_*(\\sigma) \\circ \\beta_* (\\tau)\\right) (x).\\qedhere \\]\n\\end{proof}\n\nThe results of this section enable the construction of a Lie group structure on $\\Bis (\\mathcal{G})$. \nIt is worth noting that the key step in constructing the Lie group structure is sorting out the topology of the function spaces.\nUsing the manifold structure on $C_{\\textup{fS}}^\\infty (M,G)$ (see comments in Section \\ref{sect: smmfd}) one establishes the smoothness of joint composition and postcomposition with respect to these structures.\nSince Theorem A and the $\\Omega$-Lemma \\cite[Appendix F]{glockomega} are at our disposal, one can copy exactly the arguments from the finite-dimensional case outlined in \\cite[\\S 10 and \\S 11]{michor}.\nAfter that one can proceed as in \\cite{Schmeding2015} and establish smoothness of the group operations following the proof of Proposition \\ref{prop: topgp}.\nAgain, results of this type are beyond the scope of the present paper. \n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nHigh-fidelity numerical simulations play a critical role in modern-day engineering and scientific investigations. The computational cost of high-fidelity or full-order models (FOMs) is, however, often prohibitively expensive. This limitation has led to the emergence of reduced-order modeling techniques. Reduced-order models (ROMs) are formulated to \\textit{approximate} solutions to a FOM on a low-dimensional manifold. 
Common reduced-order modeling techniques include balanced truncation~\\cite{balanced_truncation_moore,balanced_truncation_roberts}, Krylov subspace techniques~\\cite{krylov_rom}, reduced-basis methods~\\cite{Hesthaven2016}, and the proper orthogonal decomposition approach~\\cite{chatterjee_pod_intro}.\nReduced-order models based on such techniques have been implemented in a wide variety of disciplines and have been effective in reducing the computational cost associated with high-fidelity numerical simulations~\\cite{kerschen_mech_pod,padhi_neural_net_pod,cao_meteorology_pod}.\n\nProjection-based reduced-order models constructed from proper orthogonal decomposition (POD) have proved to be an effective tool for model order reduction of complex systems. In the POD-ROM approach, snapshots from a high-fidelity simulation (or experiment) are used to construct an orthonormal basis spanning the solution space. A small, truncated set of these basis vectors forms the \\emph{trial} basis.\nThe POD-ROM then seeks a solution within the range of the trial basis via projection. Galerkin projection, in which the FOM equations are projected onto the same trial subspace, is the simplest type of projection. The Galerkin ROM (G ROM) has been used successfully in a variety of problems. When applied to general non-self-adjoint and non-linear problems, however, theoretical analysis and numerical experiments have shown that Galerkin ROM lacks \\textit{a priori} guarantees of stability, accuracy, and convergence~\\cite{rowley_pod_energyproj}. This last issue is particularly challenging as it demonstrates that enriching a ROM basis does not necessarily improve the solution~\\cite{huang_combustion_roms}. The development of stable and accurate reduced-order modeling techniques for complex non-linear systems is the motivation for the current work.\n\\begin{comment}\nResearch examining the stability and accuracy of ROMs is typically approached from either a stabilization viewpoint or from a closure modeling viewpoint. \n\\end{comment}\n\nA significant body of research aimed at producing accurate and stable ROMs for complex non-linear problems exists in the literature. These efforts include, but are not limited to, ``energy-based\" inner products~\\cite{rowley_pod_energyproj,Kalashnikova_sand2014}, symmetry transformations~\\cite{sirovich_symmetry_trans}, basis adaptation~\\cite{carlberg_hadaptation,adeim_peherstorfer}, $L^1$-norm minimization~\\cite{l1}, projection subspace rotations~\\cite{basis_rotation}, and least-squares residual minimization approaches~\\cite{bui_resmin_steady,bui_unsteady,rovas_thesis,carlberg_thesis,bui_thesis,carlberg_lspg,carlberg_lspg_v_galerkin,carlberg_gnat}. The Least-Squares Petrov--Galerkin (LSPG)~\\cite{carlberg_lspg} method comprises a particularly popular least-squares residual minimization approach and has been proven to be an effective tool for non-linear model reduction. Defined at the fully-discrete level (i.e., after spatial and temporal discretization), LSPG relies on least-squares minimization of the FOM residual at each time-step. While the method lacks \\textit{a priori} stability guarantees for general non-linear systems, it has been shown to be effective for complex problems of interest~\\cite{carlberg_gnat, carlberg_lspg_v_galerkin, huang_scitech19}. Additionally, as it is formulated as a minimization problem, physical constraints such as conservation can be naturally incorporated into the ROM formulation~\\cite{carlberg_conservative_rom}. 
At the fully-discrete level, LSPG is sensitive to both the time integration scheme as well as the time-step. For example, in Ref.~\\cite{carlberg_lspg_v_galerkin} it was shown that LSPG produces optimal results at an intermediate time-step. Another example of this sensitivity is that, when applied to explicit time integration schemes, the LSPG approach reverts to a Galerkin approach. This limits the scope of LSPG to implicit time integration schemes, which can in turn increase the cost of the ROM~\\cite{carlberg_lspg_v_galerkin}\\footnote{It is possible to use LSPG with an explicit time integrator by formulating the ROM for an implicit time integration scheme, and then time integrating the resulting system with an explicit integrator.}. This is particularly relevant in the case where the optimal time-step of LSPG is small, thus requiring many time-steps of an implicit solver. Despite these challenges, the LSPG approach is arguably the most robust technique that is used for ROMs of non-linear dynamical systems.\n\nA second school of thought addresses stability and accuracy of ROMs from a closure modeling viewpoint. 
This follows from the idea that instabilities and inaccuracies in ROMs can, for the most part, be attributed to the truncated modes. While these truncated modes may not contain a significant portion of the system energy, they can play a significant role in the dynamics of the ROM~\\cite{Wang_ROM_thesis}. This is analogous to the closure problem encountered in large eddy simulation. Research has examined the construction of mixing length~\\cite{aubry_mixlength_pod}, Smagorinsky-type~\\cite{Wang_ROM_thesis,Ullmann_smag,wang_smag,smag_ROM}, and variational multiscale (VMS) closures~\\cite{Wang_ROM_thesis,san_iliescu_geostrophic,Bergmann_pod_vms,Stabile2019} for POD-ROMs. The VMS approach is of particular relevance to this work. Originally developed in the context of finite element methods, VMS is a formalism to derive stabilization\/closure schemes for numerical simulations of multiscale problems. The VMS procedure is centered around a sum decomposition of the solution $u$ in terms of resolved\/coarse-scales $\\tilde{u}$ and unresolved\/fine-scales ${u}^{\\prime}$. The impact of the fine-scales on the evolution of the coarse-scales is then accounted for by devising an approximation to the fine-scales. This approximation is often referred to as a ``subgrid-scale'' or ``closure'' model. \n\nResearch has examined the application of both phenomenological and residual-based subgrid-scale models to POD-ROMs. In Refs.~\\cite{san_iliescu_geostrophic,iliescu_pod_eddyviscosity,iliescu_vms_pod_ns}, Iliescu and co-workers examine the construction of eddy-viscosity-based ROM closures via the VMS method. These eddy-viscosity methods are directly analogous to the eddy-viscocity philosophy used in turbulence modeling. While they do not guarantee stability \\textit{a priori}, these ROMs have been shown to enhance accuracy on a variety of problems in fluid dynamics. However, as eddy-viscosity methods are based on phenomenological assumptions specific to three-dimensional turbulent flows, their scope may be limited to specific types of problems. Residual-based methods, which can also be derived from VMS, constitute a more general modeling strategy. The subgrid-scale model emerging from a residual-based method typically appears as a term that is proportional to the residual of the full-order model; if the governing equations are exactly satisfied by the ROM, then the model is inactive. While residual-based methods in ROMs are not as well-developed as they are in finite element methods, they have been explored in several contexts. In Ref.~\\cite{Bergmann_pod_vms}, ROMs of the Navier-Stokes equations are stabilized using residual-based methods. This stabilization is performed by solving a ROM stabilized with a method such as streamline upwind Petrov--Galerkin (SUPG) and augmenting the POD basis with additional modes computed from the residual of the Navier-Stokes equations. In Ref.~\\cite{iliescu_ciazzo_residual_rom}, residual-based stabilization is developed for velocity-pressure ROMs of the incompressible Navier-Stokes equations. Both eddy-viscosity and residual-based methods have been shown to improve ROM stability and performance. The majority of existing work on residual-based stabilization (and eddy-viscosity methods) is focused on ROMs formulated from continuous projection (i.e., projecting a continuous PDE using a continuous basis). In this instance, the ROM residual is defined at the continuous level and is directly linked to the governing partial differential equation. 
In many applications (arguably the majority~\\cite{Kalashnikova_sand2014}), however, the ROM is constructed through discrete projection (i.e., projecting the spatially discretized PDE using a discrete basis). In this instance, the ROM residual is defined at the semi-discrete level and is tied to the \\textit{spatially discretized} governing equations. Residual-based methods for ROMs developed through discrete projections have, to the best of the authors' knowledge, not been investigated.\n \nAnother approach that displays similarities to the variational multiscale method is the Mori-Zwanzig (MZ) formalism. Originally developed by Mori~\\cite{MoriTransport} and Zwanzig~\\cite{ZwanzigLangevin} and reformulated by Chorin and co-workers~\\cite{ChorinOptimalPrediction,ChorinOptimalPredictionMemory,Chorin_book,ProblemReduction}, the MZ formalism is a type of model order reduction framework. The framework consists of decomposing the state variables in a dynamical system into a resolved (coarse-scale) set and an unresolved (fine-scale) set. An exact reduced-order model for the resolved scales is then derived in which the impact of the unresolved scales on the resolved scales appears as a memory term. This memory term depends on the temporal history of the resolved variables. In practice, the evaluation of this memory term is not tractable. It does, however, serve as a starting point to develop closure models. As MZ is formulated systematically in a dynamical system setting, it promises to be an effective technique for developing stable and accurate ROMs of non-linear dynamical systems. A range of research examining the MZ formalism as a multiscale modeling tool exists in the community. Most notably, Stinis and co-workers~\\cite{stinisEuler,stinisHighOrderEuler,Stinis-rMZ,stinis_finitememory,PriceMZ,PriceMZ2} have developed several models for approximating the memory, including finite memory and renormalized models, and examined their application to the semi-discrete systems emerging from Fourier-Galerkin and Polynomial Chaos Expansions of Burgers' equation and the Euler equations. Application of MZ-based techniques to the classic POD-ROM approach has not been undertaken.\n\nThis manuscript leverages work that the authors have performed on the use of the MZ formalism to develop closure models of partial differential equations~\\cite{parishAIAA2016,parishMZ1,parish_dtau,GouasmiMZ1,parishVMS}. In addition to focusing on the development and analysis of MZ models, the authors have examined the formulation of the MZ formalism within the context of the VMS method~\\cite{parishVMS}. By expressing MZ models within a VMS framework, similarities were discovered between MZ and VMS models. In particular, it was discovered that several existing MZ models are residual-based methods. \n\n The contributions of this work include:\n\\begin{enumerate}\n\\item The development of a novel projection-based reduced-order modeling technique, termed the Adjoint Petrov--Galerkin (APG) method. The method leads to a ROM equation that is driven by the residual of the discretized governing equations. The approach is equivalent to a Petrov--Galerkin ROM and displays similarities to the LSPG approach. The method can be evolved in time with explicit integrators (in contrast to LSPG). 
This potentially lowers the cost of the ROM.\n\n\\item Theoretical error analysis examining conditions under which the \\textit{a priori} error bounds in APG may be smaller than in the Galerkin method.\n\n\\item Computational cost analysis (in FLOPS) of the proposed APG method as compared to the Galerkin and LSPG methods. This analysis shows that the APG ROM is twice as expensive as the G ROM for a given time step, for both explicit and implicit time integrators. In the implicit case, the ability of the APG ROM to make use of Jacobian-Free Newton-Krylov methods suggests that it may be more efficient than the LSPG ROM.\n\n\\item Numerical evidence on ROMs of compressible flow problems demonstrating that the proposed method is more accurate and stable than the G ROM on problems of interest. Improvements over the LSPG ROM are observed in most cases. An analysis of the computational cost shows that the APG method can lead to lower errors than the LSPG and G ROMs for the same computational cost.\n\n\\item Theoretical results and numerical evidence that provides a relationship between the time-scale in the APG ROM and the spectral radius of the right-hand side Jacobian. Numerical evidence suggests that this relationship also applies to the selection of the optimal time-step in LSPG.\n\n\\end{enumerate}\n\n\nThe structure of this paper is as follows: Section~\\ref{sec:FOM} outlines the full-order model of interest and its formulation in generalized coordinates. Section~\\ref{sec:ROM} outlines the reduced-order modeling approach applied at the semi-discrete level. Galerkin, Petrov--Galerkin, and VMS ROMs will be discussed. Section~\\ref{sec:MZ} details the Mori-Zwanzig formalism and the construction of the Adjoint Petrov--Galerkin ROM. Section~\\ref{sec:analysis} provides theoretical error analysis. Section~\\ref{sec:cost} discusses the implementation and computational cost of the Adjoint Petrov--Galerkin method. Numerical results and comparisons with Galerkin and LSPG ROMs are presented in Section~\\ref{sec:numerical}. Conclusions are provided in Section~\\ref{sec:conclude}.\n\nMathematical notation in this manuscript is as follows: matrices are written as bold uppercase letters (e.g. ${\\mathbf{V}}$), vectors as lowercase bold letters (e.g. ${\\mathbf{u}}$), and scalars as italicized lowercase letters (e.g. $a_i$). Calligraphic script may denote vector spaces or special operators (e.g. $\\mathcal{V}$, $\\mathcal{L}$). Bold letters followed by parentheses indicate a matrix or vector function (e.g. $\\mathbf{R}(\\cdot)$, ${\\mathbf{u}} (\\cdot)$), and those followed by brackets indicate a linearization about the bracketed argument (e.g. $\\mathbf{J}[\\cdot]$).\n\n\\section{Full-Order Model and Generalized Coordinates}\\label{sec:FOM}\nConsider a full-order model that is described by the dynamical system,\n\\begin{equation}\\label{eq:FOM}\n\\frac{d }{dt}{\\mathbf{u}}(t) = \\mathbf{R}({\\mathbf{u}}(t)), \\qquad {\\mathbf{u}}(0) = {\\mathbf{u}}_0, \\qquad t \\in [0,T], \n\\end{equation}\nwhere $T \\in \\mathbb{R}^+$ denotes the final time, ${\\mathbf{u}} : [0,T] \\rightarrow \\RR{N}$ denotes the state, and ${\\mathbf{u}}_0 \\in \\mathbb{R}^N$ the initial conditions. The function $\\mathbf{R}: \\mathbb{R}^N \\rightarrow \\mathbb{R}^N$ with $\\mathbf{y} \\mapsto \\mathbf{R}(\\mathbf{y})$ is a (possibly non-linear) function and will be referred to as the ``right-hand side\" operator. 
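As a concrete illustration (a stand-in example introduced here, not part of the formulation above), a short Python sketch of such a right-hand side operator is given below for a periodic one-dimensional viscous Burgers discretization; the grid size, viscosity, and central-differencing choices are assumptions made purely for the sketch.\n\\begin{verbatim}\n
import numpy as np\n
\n
def make_burgers_rhs(N=256, nu=1e-2, L=1.0):\n
    # Return R(u) for a periodic 1D viscous Burgers semi-discretization;\n
    # central differences are used for the convective and viscous terms.\n
    dx = L / N\n
    def R(u):\n
        up = np.roll(u, -1)   # u_{i+1}\n
        um = np.roll(u, 1)    # u_{i-1}\n
        return -u * (up - um) / (2.0 * dx) + nu * (up - 2.0 * u + um) / dx**2\n
    return R\n
\n
# One explicit Euler step of the full-order model du/dt = R(u).\n
R = make_burgers_rhs()\n
x = np.linspace(0.0, 1.0, 256, endpoint=False)\n
u = np.sin(2.0 * np.pi * x)\n
u = u + 1e-4 * R(u)\n
\\end{verbatim}\n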
Equation~\\ref{eq:FOM} arises in many disciplines, including the numerical discretization of partial differential equations. In this context, $\\mathbf{R}(\\cdot)$ may represent a spatial discretization scheme with source terms and applicable boundary conditions. \n\nIn many practical applications, the computational cost associated with solving Eq.~\\ref{eq:FOM} is prohibitively expensive due to the high dimension of the state. The goal of a ROM is to transform the $N$-dimensional dynamical system presented in Eq.~\\ref{eq:FOM} into a $K$ dimensional dynamical system, with $K \\ll N$. To achieve this goal, we pursue the following agenda:\n\\begin{enumerate}\n\\item Develop a weak form of the FOM in generalized coordinates.\n\\item Decompose the generalized coordinates into a $K$-dimensional resolved coarse-scale set and an $N-K$ dimensional unresolved fine-scale set.\n\\item Develop a $K$-dimensional ROM for the coarse-scales by making approximations to the fine-scale coordinates.\n\\end{enumerate}\nThe remainder of this section will address task 1 in the above agenda.\n\n\nTo develop the weak form of Eq.~\\ref{eq:FOM}, we start by defining a trial basis matrix comprising $N$ orthonormal basis vectors,\n\\begin{equation*}\n\\mathbf{V} \\equiv \\begin{bmatrix}\n\\mathbf{v}_1 & \\mathbf{v}_2 & \\cdots & \\mathbf{v}_N\n\\end{bmatrix},\n\\end{equation*}\nwhere $\\mathbf{v}_i \\in \\mathbb{R}^N$, $\\mathbf{v}_i^T \\mathbf{v}_j = \\delta_{ij}$.\nThe basis vectors may be generated, for example, by the POD approach. \nWe next define the \\textit{trial space} as the range of the trial basis matrix,\n$${\\MC{V}} \\trieq \\text{Range}({\\mathbf{V}}).$$\nAs $\\mathbf{V}$ is a full-rank $N \\times N$ matrix, $\\mathcal{V} \\equiv \\RR{N}$ and the state variable can be exactly described by a linear combination of these basis vectors,\n\\begin{equation}\\label{eq:genCord}\n{\\mathbf{u}}(t) = \\sum_{i=1}^N \\mathbf{v}_i a_i(t). \n\\end{equation}\nFollowing~\\cite{carlberg_lspg}, we collect the basis coefficients $a_i(t)$ into $\\mathbf{a} : [0,T] \\rightarrow \\RR{N}$ and refer to $\\mathbf{a}$ as the \\emph{generalized coordinates}. We similarly define the test basis matrix, ${\\mathbf{W}}$, whose columns comprise linearly independent basis vectors that span the \\textit{test space}, $\\mathcal{W}$,\n\\begin{equation*}\n{\\mathbf{W}} \\equiv \\begin{bmatrix}\n\\mathbf{w}_1 & \\mathbf{w}_2 & \\cdots& \\mathbf{w}_N\n\\end{bmatrix}, \\qquad \n\\mathcal{W} \\trieq \\text{Range}({\\mathbf{W}}) ,\n\\end{equation*}\nwith $\\mathbf{w}_i \\in \\mathbb{R}^N$. \n\nEquation~\\ref{eq:FOM} can be expressed in terms of the generalized coordinates by inserting Eq.~\\ref{eq:genCord} into Eq.~\\ref{eq:FOM}, \n\\begin{equation}\\label{eq:FOM2}\n{\\mathbf{V}} \\frac{d }{dt}\\mathbf{a}(t)= \\mathbf{R}({\\mathbf{V}} \\mathbf{a}(t)).\n\\end{equation}\nThe weak form of Eq.~\\ref{eq:FOM2} is obtained by taking the $L^2$ inner product with ${\\mathbf{W}}$,\\footnote{The authors recognize that many types of inner products are possible in formulating a ROM. 
To avoid unnecessary abstraction, we focus here on the simplest case.}\n\\begin{equation}\\label{eq:FOM3}\n{\\mathbf{W}}^T {\\mathbf{V}} \\frac{d }{dt}\\mathbf{a}(t) = {\\mathbf{W}}^T \\mathbf{R}({\\mathbf{V}} \\mathbf{a}(t)).\n\\end{equation}\nManipulation of Eq.~\\ref{eq:FOM3} yields the following dynamical system,\n\\begin{equation}\\label{eq:FOM_generalized}\n\\frac{d }{dt}\\mathbf{a}(t) = [{\\mathbf{W}}^T {\\mathbf{V}} ]^{-1} {\\mathbf{W}}^T \\mathbf{R}({\\mathbf{V}} \\mathbf{a}(t)), \\qquad \\mathbf{a}(t=0) = \\mathbf{a}_0, \\qquad t \\in [0,T],\n\\end{equation}\nwhere $\\mathbf{a}_0 \\in \\RR{N}$ with $\\mathbf{a}_0 = [{\\mathbf{W}}^T {\\mathbf{V}} ]^{-1} {\\mathbf{W}}^T {\\mathbf{u}}_0.$ \nNote that Eq.~\\ref{eq:FOM_generalized} is an $N$-dimensional ODE system and is simply Eq.~\\ref{eq:FOM} expressed in a different coordinate system. It is further worth noting that, since ${\\mathbf{W}}$ and ${\\mathbf{V}}$ are invertible (both are square matrices with linearly independent columns), one has $[{\\mathbf{W}}^T {\\mathbf{V}} ]^{-1} {\\mathbf{W}}^T = {\\mathbf{V}}^{-1}.$ This will not be the case for ROMs.\n\n\\section{Reduced-Order Models}\\label{sec:ROM}\n\\subsection{Multiscale Formulation}\nThis subsection addresses task 2 in the aforementioned agenda. Reduced-order models seek a low-dimensional representation of the original high-fidelity model. To achieve this, we examine a multiscale formulation of Eq.~\\ref{eq:FOM_generalized}. Consider sum decompositions of the trial and test space,\n\\begin{equation}\n{\\MC{V}} = \\tilde{\\MC{V}} \\oplus {\\MC{V}^{\\prime}}, \\qquad \\mathcal{W} = \\mathcal{\\tilde{W}} \\oplus \\mathcal{W}'.\n\\end{equation}\nThe space $\\tilde{\\MC{V}}$ is referred to as the coarse-scale trial space, while ${\\MC{V}^{\\prime}}$ is referred to as the fine-scale trial space. We refer to $\\tilde{\\MC{W}}$ and ${\\MC{W}^{\\prime}}$ in a similar fashion.\nFor simplicity, define $\\tilde{\\MC{V}}$ to be the column space of the first $K$ basis vectors in ${\\mathbf{V}}$ and ${\\MC{V}^{\\prime}}$ to be the column space of the last $N-K$ basis vectors in ${\\mathbf{V}}$. This approach is appropriate when the basis vectors are ordered in a hierarchical manner, as is the case with POD. 
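A short Python sketch of this splitting of the trial basis into coarse and fine blocks is given below; the random snapshot matrix, the dimensions, and the array names are assumptions made only for illustration.\n\\begin{verbatim}\n
import numpy as np\n
\n
# Stand-in snapshot matrix (columns are FOM states).\n
rng = np.random.default_rng(0)\n
N, n_snap, K = 400, 60, 10\n
U_snap = rng.standard_normal((N, n_snap))\n
\n
# POD basis: left singular vectors of the snapshot matrix.\n
V, _, _ = np.linalg.svd(U_snap, full_matrices=False)\n
V_coarse = V[:, :K]   # spans the coarse-scale trial space\n
V_fine = V[:, K:]     # remaining snapshot modes (part of the fine-scale space)\n
\n
# The modes are orthonormal and the two blocks are mutually orthogonal.\n
assert np.allclose(V_coarse.T @ V_coarse, np.eye(K))\n
assert np.allclose(V_coarse.T @ V_fine, 0.0)\n
\n
# Coarse and fine components of a state sum back to the state itself.\n
u = U_snap[:, 0]\n
u_coarse = V_coarse @ (V_coarse.T @ u)\n
u_fine = u - u_coarse\n
assert np.allclose(u_coarse + u_fine, u)\n
\\end{verbatim}\n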
Note the following properties of the decomposition:\n\\begin{enumerate}\n\\item The coarse-scale space is a subspace of $\\mathcal{V}$, i.e., $\\tilde{\\MC{V}} \\subset {\\MC{V}}$.\n\\item The fine-scale space is a subspace of $\\mathcal{V}$, i.e., ${\\MC{V}^{\\prime}} \\subset {\\MC{V}}$.\n\\item The fine and coarse-scale subspaces do not overlap, i.e., $\\tilde{\\MC{V}} \\cap {\\MC{V}^{\\prime}} = \\{ 0 \\}.$\n\\item The fine-scale and coarse-scale subspaces are orthogonal, i.e., $\\tilde{\\MC{V}} \\perp {\\MC{V}^{\\prime}}.$ This is due to the fact that the basis vectors that comprise $\\mathbf{V}$ are orthonormal.\n\\end{enumerate}\nFor notational purposes, we make the following definitions for the trial and test spaces:\n\\begin{equation*}\n{\\mathbf{V}} \\equiv \\begin{bmatrix} \\tilde{\\mathbf{V}} & ; & {\\mathbf{V}^{\\prime}} \\end{bmatrix}, \\quad {\\mathbf{W}} \\equiv \\begin{bmatrix} \\tilde{\\mathbf{W}} & ; & {\\mathbf{W}^{\\prime}} \\end{bmatrix}, \n\\end{equation*}\nwhere $[ \\cdot \\hspace{0.05 in}; \\hspace{0.05 in} \\cdot ]$ denotes the concatenation of two matrices and,\n\\begin{alignat*}{2}\n&\\tilde{\\mathbf{V}} \\equiv \\begin{bmatrix}\n\\mathbf{v}_1 & \\mathbf{v}_2 & \\cdots & \\mathbf{v}_K\n\\end{bmatrix}, && \\quad \\tilde{\\MC{V}} \\trieq \\text{Range}(\\tilde{\\mathbf{V}}) , \\\\\n&{\\mathbf{V}^{\\prime}} \\equiv \\begin{bmatrix}\n\\mathbf{v}_{K+1} & \\mathbf{v}_{K+2} & \\cdots & \\mathbf{v}_{N}\n\\end{bmatrix}, && \\quad {\\MC{V}^{\\prime}} \\trieq \\text{Range}({\\mathbf{V}^{\\prime}}). \\\\\n&\\tilde{\\mathbf{W}} \\equiv \\begin{bmatrix}\n\\mathbf{w}_1 & \\mathbf{w}_2 & \\cdots & \\mathbf{w}_K\n\\end{bmatrix}, && \\quad \\tilde{\\MC{W}} \\trieq \\text{Range}(\\tilde{\\mathbf{W}}) , \\\\\n&{\\mathbf{W}^{\\prime}} \\equiv \\begin{bmatrix}\n\\mathbf{w}_{K+1} & \\mathbf{w}_{K+2} & \\cdots & \\mathbf{w}_{N}\n\\end{bmatrix}, && \\quad {\\MC{W}^{\\prime}} \\trieq \\text{Range}({\\mathbf{W}^{\\prime}}) .\n\\end{alignat*}\nThe coarse and fine-scale states are defined as,\n\\begin{equation*}\n\\tilde{\\mathbf{u}}(t) \\trieq \\sum_{i=1}^K \\mathbf{v}_i a_i(t) \\equiv \\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}(t), \\qquad {\\mathbf{u}^{\\prime}}(t) \\trieq \\sum_{i = K+1}^N \\mathbf{v}_i a_i(t) \\equiv {\\mathbf{V}^{\\prime}} {\\mathbf{a}^{\\prime}}(t),\n\\end{equation*}\nwith $\\tilde{\\mathbf{u}} : [0,T] \\rightarrow \\tilde{\\mathcal{V}}$, ${\\mathbf{u}^{\\prime}} : [0,T] \\rightarrow \\mathcal{V}', \\tilde{\\mathbf{a}}: [0,T] \\rightarrow \\RR{K},$ and ${\\mathbf{a}^{\\prime}} : [0,T] \\rightarrow \\RR{N-K}.$\n\\begin{comment}\nWe make similar definitions for the test space,\n\\begin{equation*}\n{\\mathbf{W}} \\overset{\\Delta}{=} \\begin{bmatrix} \\tilde{\\mathbf{W}} & ; & {\\mathbf{W}^{\\prime}} \\end{bmatrix}, \n\\end{equation*}\nwhere,\n\\begin{alignat*}{2}\n&\\tilde{\\mathbf{W}} \\overset{\\Delta}{=} \\begin{bmatrix}\n\\mathbf{w}_1, \\mathbf{w}_2, \\hdots, \\mathbf{w}_K\n\\end{bmatrix}, && \\quad \\text{Range}(\\tilde{\\mathbf{W}}) \\overset{\\Delta}{=} \\tilde{\\MC{W}}, \\\\\n&{\\mathbf{W}^{\\prime}} \\overset{\\Delta}{=} \\begin{bmatrix}\n\\mathbf{w}_{K+1}, \\mathbf{w}_{K+2}, \\hdots, \\mathbf{w}_{N}\n\\end{bmatrix}, && \\quad \\text{Range}({\\mathbf{W}^{\\prime}})\\overset{\\Delta}{=} {\\MC{W}^{\\prime}}.\n\\end{alignat*}\n\\end{comment}\nThese decompositions allow Eq.~\\ref{eq:FOM3} to be expressed as two linearly independent systems,\n\\begin{equation}\\label{eq:FOM_VMS_coarse}\n\\tilde{\\mathbf{W}}^T \\tilde{\\mathbf{V}} \\frac{d }{dt}\\tilde{\\mathbf{a}}(t) 
+ \\tilde{\\mathbf{W}}^T {\\mathbf{V}^{\\prime}} \\frac{d }{dt}{\\mathbf{a}^{\\prime}}(t) =\\tilde{\\mathbf{W}}^T \\mathbf{R}(\\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}(t) + {\\mathbf{V}^{\\prime}} {\\mathbf{a}^{\\prime}}(t)),\n\\end{equation}\n\\begin{equation}\\label{eq:FOM_VMS_fine}\n{\\mathbf{W}^{\\prime}}^T \\tilde{\\mathbf{V}} \\frac{d }{dt} \\tilde{\\mathbf{a}}(t) + {\\mathbf{W}^{\\prime}}^T {\\mathbf{V}^{\\prime}} \\frac{d }{dt}{\\mathbf{a}^{\\prime}}(t) ={\\mathbf{W}^{\\prime}}^T \\mathbf{R}(\\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}(t) + {\\mathbf{V}^{\\prime}} {\\mathbf{a}^{\\prime}}(t)).\n\\end{equation}\nEquation~\\ref{eq:FOM_VMS_coarse} is referred to as the coarse-scale equation, while Eq.~\\ref{eq:FOM_VMS_fine} is referred to as the fine-scale equation. It is important to emphasize that the system formed by Eqs.~\\ref{eq:FOM_VMS_coarse} and~\\ref{eq:FOM_VMS_fine} is still an exact representation of the original FOM.\n\nThe objective of ROMs is to solve the coarse-scale equation. The challenge encountered in this objective is that the evolution of the coarse-scales depends on the fine-scales. This is a type of ``closure problem\" and must be addressed to develop a closed ROM.\n\n\\subsection{Reduced-Order Models}\nAs noted above, the objective of a ROM is to solve the (unclosed) coarse-scale equation. We now develop ROMs of Eq.~\\ref{eq:FOM} by leveraging the multiscale decomposition presented above. This section addresses task 3 in the mathematical agenda.\n\n The most straightforward technique to develop a ROM is to make the approximation,\n\\begin{equation*}\\label{eq:ufine_ansatz}\n{\\mathbf{u}^{\\prime}} \\approx \\mathbf{0}.\n\\end{equation*}\nThis allows for the coarse-scale equation to be expressed as,\n\\begin{equation}\\label{eq:FOM_VMS_coarse_2}\n\\tilde{\\mathbf{W}}^T \\tilde{\\mathbf{V}} \\frac{d }{dt}\\tilde{\\mathbf{a}}(t) =\\tilde{\\mathbf{W}}^T \\mathbf{R}(\\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}(t) ).\n\\end{equation}\nEquation~\\ref{eq:FOM_VMS_coarse_2} forms a $K$-dimensional reduced-order system (with $K \\ll N$) and provides the starting point for formulating several standard ROM techniques. The Galerkin and Least-Squares Petrov--Galerkin ROMs are outlined in the subsequent subsections.\n\n\\subsubsection{The Galerkin Reduced-Order Model}\nGalerkin projection is a common choice for producing a reduced set of ODEs. In Galerkin projection, the test basis is taken to be equivalent to the trial basis, i.e. $\\tilde{\\mathbf{W}} = \\tilde{\\mathbf{V}}$. The Galerkin ROM is then,\n\\begin{equation}\\label{eq:GROM}\n\\tilde{\\mathbf{V}}^T \\frac{d }{dt}\\tilde{\\mathbf{u}}(t) = \\tilde{\\mathbf{V}}^T \\mathbf{R}(\\tilde{\\mathbf{u}}(t)), \\qquad \\tilde{\\mathbf{u}}(0) = \\tilde{\\mathbf{u}}_0, \\qquad t \\in [0,T].\n\\end{equation}\nGalerkin projection can be shown to be optimal in the sense that it minimizes the $L^2$-norm of the FOM ODE residual over $\\text{Range}(\\tilde{\\mathbf{V}})$~\\cite{carlberg_lspg}. As the columns of $\\tilde{\\mathbf{V}}$ no longer spans ${\\MC{V}}$, it is possible that the initial state of the full system, ${\\mathbf{u}}_0$, may differ from the initial state of the reduced system, $\\tilde{\\mathbf{u}}_0$. For simplicity, however, it is assumed here that the initial conditions lie fully in the coarse-scale trial space, i.e. 
${\\mathbf{u}}_0 \\in \\tilde{\\MC{V}}$ such that,\n\\begin{equation}\\label{eq:ROM_IC}\n\\tilde{\\mathbf{u}}_0 = {\\mathbf{u}}_0.\n\\end{equation}\n Note that this issue can be formally addressed by using an affine trial space to ensure that $\\tilde{\\mathbf{u}}_0 = {\\mathbf{u}}_0$.\n\nEquation~\\ref{eq:GROM} can be equivalently written for the generalized coarse-scale coordinates, $\\tilde{\\mathbf{a}}$,\n\\begin{equation}\\label{eq:GROM_modal}\n\\frac{d }{dt}\\tilde{\\mathbf{a}}(t) = \\tilde{\\mathbf{V}}^T \\mathbf{R}(\\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}(t)), \\qquad \\tilde{\\mathbf{a}}(0) = \\tilde{\\mathbf{a}}_0, \\qquad t \\in [0,T],\n\\end{equation}\nwhere $\\tilde{\\mathbf{a}}_0 \\in \\RR{K}$ with $\\tilde{\\mathbf{a}}_0 =\\tilde{\\mathbf{V}}^T {\\mathbf{u}}_0.$\nEquation~\\ref{eq:GROM_modal} is a $K$-dimensional ODE system (with $K\\ll N$) and is hence of lower dimension than the FOM. Note that, similar to Eq.~\\ref{eq:FOM_generalized}, the projection via $\\tilde{\\mathbf{V}}^T$ would normally be $\\big[ \\tilde{\\mathbf{V}}^T \\tilde{\\mathbf{V}} \\big]^{-1} \\tilde{\\mathbf{V}}^T$. When $\\tilde{\\mathbf{V}}$ is constructed via POD, its columns are orthonormal and $\\tilde{\\mathbf{V}}^T \\tilde{\\mathbf{V}} = \\mathbf{I}$; the projector has been simplified to reflect this. Non-orthonormal basis vectors will require the full computation of $\\big[ \\tilde{\\mathbf{V}}^T \\tilde{\\mathbf{V}} \\big]^{-1} \\tilde{\\mathbf{V}}^T$.\n\nIt is important to note that, in order to develop a computationally efficient ROM, some ``hyper-reduction'' method must be devised to reduce the cost associated with evaluating the matrix-vector product, $\\tilde{\\mathbf{V}}^T \\mathbf{R}(\\tilde{\\mathbf{u}}(t))$. Gappy POD~\\cite{everson_sirovich_gappy} and the (discrete) empirical interpolation method~\\cite{eim,deim} are two such techniques. More details on hyper-reduction are provided in Appendix~\\ref{appendix:hyper}.\n\nWhen applied to unsteady non-linear problems, the Galerkin ROM is often inaccurate and, at times, unstable. Examples of this are seen in Ref.~\\cite{carlberg_lspg_v_galerkin}. These issues motivate the development of more sophisticated reduced-order modeling techniques. \n\n\\subsubsection{Petrov--Galerkin and Least-Squares Petrov--Galerkin Reduced-Order Models}\nIn the Petrov--Galerkin approach, the test space is different from the trial space. Petrov--Galerkin approaches have a rich history in the finite element community~\\cite{brooks_supg,hughes_petrovgalerkin} and can enhance the stability and robustness of a numerical method. In the context of reduced-order modeling for dynamical systems, the Least-Squares Petrov--Galerkin method (LSPG)~\\cite{carlberg_lspg} is a popular approach. The LSPG approach is a ROM technique that seeks to minimize the fully discrete residual (i.e., the residual after spatial and temporal discretization) at each time-step. The LSPG method can be shown to be optimal in the sense that it minimizes the $L^2$-norm of the \\textit{fully discrete} residual at each time-step over $\\text{Range}(\\tilde{\\MC{V}})$. 
To illustrate the LSPG method, consider the algebraic system of equations for the FOM obtained after an implicit Euler temporal discretization,\n\\begin{equation}\\label{eq:coarse_implicit_euler_0}\n\\frac{{\\mathbf{u}}^{n} - {\\mathbf{u}}^{n-1} }{\\Delta t} - \\mathbf{R}({\\mathbf{u}}^{n}) = \\mathbf{0},\n\\end{equation}\nwhere ${\\mathbf{u}}^n \\in \\RR{N}$ denotes the solution at the $n^{th}$ time-step.\nThe FOM will exactly satisfy Eq.~\\ref{eq:coarse_implicit_euler_0}. The ROM, however, will not. The LSPG method minimizes the residual of Eq.~\\ref{eq:coarse_implicit_euler_0} over each time-step. For notational purposes, we define the residual vector for the implicit Euler method,\n\\begin{equation*}\n\\mathbf{r}_{\\text{IE}}: (\\mathbf{y};{\\mathbf{u}}^{n-1}) \\mapsto \\frac{\\mathbf{y} - {\\mathbf{u}}^{n-1} }{\\Delta t} - \\mathbf{R}(\\mathbf{y}).\n\\end{equation*}\nThe LSPG method is defined as follows,\n\\begin{equation*}\n{\\mathbf{u}}^n = \\underset{\\mathbf{y} \\in \\text{Range}(\\tilde{\\mathbf{V}}) }{\\text{arg min}}|| \\mathbf{A}(\\mathbf{y}) \\mathbf{r}_{\\text{IE}}(\\mathbf{y};{\\mathbf{u}}^{n-1}) ||_2^2,\n\\end{equation*}\nwhere $\\mathbf{A}(\\cdot) \\in \\mathbb{R}^{z \\times n}$ with $z \\le N$ is a weighting matrix. The standard LSPG method takes $\\mathbf{A} = \\mathbf{I}$. For the implicit Euler time integration scheme (as well as various other implicit schemes) the LSPG approach can be shown to have an equivalent continuous representation using a Petrov--Galerkin projection~\\cite{carlberg_lspg_v_galerkin}. For example, the LSPG method for any backward differentiation formula (BDF) time integration scheme can be written as a Petrov--Galerkin ROM with the test basis,\n\\begin{equation*}\n\\mathbf{\\tilde{\\mathbf{W}}} =\\big( \\mathbf{I} - \\alpha \\Delta t \\mathbf{J}[\\tilde{\\mathbf{u}}(t)] \\big) \\tilde{\\mathbf{V}},\n\\end{equation*}\nwhere $\\mathbf{J}[\\tilde{\\mathbf{u}}(t)] = \\frac{\\partial \\mathbf{R}}{\\partial \\mathbf{y}}(\\tilde{\\mathbf{u}}(t))$ is the Jacobian of the right-hand side function evaluated about the coarse-scale state and $\\alpha$ is a constant, specific to a given scheme (e.g. $\\alpha = 1$ for implicit Euler, $\\alpha = \\frac{2}{3}$ for BDF2, $\\alpha = \\frac{6}{11}$ for BDF3, etc.).\nWith this test basis, the LSPG ROM can be written as,\n\\begin{equation}\\label{eq:LSPGROM}\n \\tilde{\\mathbf{V}}^T \\bigg( \\frac{d }{dt}\\tilde{\\mathbf{u}}(t) - \\mathbf{R}(\\tilde{\\mathbf{u}}(t)) \\bigg) = \\tilde{\\mathbf{V}}^T \\mathbf{J}^T [\\tilde{\\mathbf{u}}(t)] \\alpha \\Delta t\\bigg( \\frac{d }{dt}\\tilde{\\mathbf{u}}(t) - \\mathbf{R}(\\tilde{\\mathbf{u}}(t)) \\bigg) , \\qquad \\tilde{\\mathbf{u}}(0) = \\tilde{\\mathbf{u}}_0, \\qquad t \\in [0,T].\n\\end{equation}\nIn writing Eq.~\\ref{eq:LSPGROM}, we have coupled all of the terms from the standard Galerkin ROM on the left-hand side, and have similarly coupled the terms introduced by the Petrov--Galerkin projection on the right-hand side. One immediately observes that the LSPG approach is a residual-based method, meaning that the stabilization added by LSPG is proportional to the residual. The LSPG method is similar to the Galerkin\/Least-Squares (GLS) approach commonly employed in the finite element community~\\cite{hughes_GLS,hughes0}. 
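Returning to the fully discrete definition, a short Python sketch of one LSPG step under this implicit Euler discretization is given below; the stand-in operator, the basis, and the use of a generic SciPy least-squares solver are assumptions made only for illustration.\n\\begin{verbatim}\n
import numpy as np\n
from scipy.optimize import least_squares\n
\n
# Illustrative stand-ins for the right-hand side R, the coarse basis Vtil,\n
# the previous state u_prev, and the time-step dt.\n
rng = np.random.default_rng(1)\n
N, K, dt = 200, 8, 1e-2\n
A_lin = -np.diag(np.linspace(1.0, 5.0, N))\n
R = lambda u: A_lin @ u\n
Vtil, _ = np.linalg.qr(rng.standard_normal((N, K)))\n
u_prev = Vtil @ rng.standard_normal(K)\n
\n
def ie_residual(a):\n
    # Fully discrete implicit Euler residual evaluated at y = Vtil a.\n
    y = Vtil @ a\n
    return (y - u_prev) / dt - R(y)\n
\n
a0 = Vtil.T @ u_prev                  # initial guess: previous coefficients\n
sol = least_squares(ie_residual, a0)  # minimize the residual over Range(Vtil)\n
u_next = Vtil @ sol.x                 # LSPG solution at the new time level\n
\\end{verbatim}\n
As noted above, this residual-minimization structure places LSPG close to GLS stabilization. 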
This can be made apparent by writing Eq.~\\ref{eq:LSPGROM} as,\n\\begin{equation*}\n \\bigg( \\mathbf{v}_i , \\frac{d }{dt}\\tilde{\\mathbf{u}}(t) - \\mathbf{R}(\\tilde{\\mathbf{u}}(t)) \\bigg)= \\bigg(\\mathbf{J} [\\tilde{\\mathbf{u}}(t)] \\mathbf{v}_i , \\tau \\big[ \\frac{d }{dt}\\tilde{\\mathbf{u}}(t) - \\mathbf{R}(\\tilde{\\mathbf{u}}(t)) \\big] \\bigg) , \\qquad i = 1,2,\\hdots,K, \n\\end{equation*}\nwhere $(\\mathbf{a},\\mathbf{b}) = \\mathbf{a}^T \\mathbf{b}$ and $\\tau = \\alpha \\Delta t$ is the stabilization parameter. Compare the above to, say, Eq. 70 and 71 in Ref.~\\cite{hughes0}. A rich body of literature exists on residual-based methods, and viewing the LSPG approach in this light helps establish connections with other methods.\nWe highlight several important aspects of LSPG. Remarks 1 through 3 are derived by Carlberg et al. in Ref.~\\cite{carlberg_lspg_v_galerkin}:\n\\begin{enumerate}\n\\item The LSPG approach is inherently tied to the temporal discretization. For different time integration schemes, the ``stabilization\" added by the LSPG method will vary. For optimal accuracy, the LSPG method requires an intermediary time-step size.\n\\item In the limit of $\\Delta t \\rightarrow 0$, the LSPG approach recovers a Galerkin approach.\n\\item For explicit time integration schemes, the LSPG and Galerkin approach are equivalent.\n\\item For backwards differentiation schemes, the LSPG approach is a type of GLS stabilization for non-linear problems. \n\\item While commonalities exist between LSPG and multiscale approaches, the authors believe that the LSPG method should \\textit{not} be viewed as a subgrid-scale model. The reason for this is that it is unclear how Eq.~\\ref{eq:LSPGROM} can be derived from Eq.~\\ref{eq:FOM_VMS_coarse}. This is similar to the fact that, in Ref.~\\cite{hughes0}, \\textit{adjoint} stabilization is viewed as a subgrid-scale model while GLS stabilization is not. The challenge in deriving Eq.~\\ref{eq:LSPGROM} from Eq.~\\ref{eq:FOM_VMS_coarse} lies primarily in the fact that the Jacobian in Eq.~\\ref{eq:LSPGROM} contains a transpose operator. We thus view LSPG as mathematical stabilization rather than a subgrid-scale model.\n\\end{enumerate}\nWhile the LSPG approach has enjoyed much success for constructing ROMs of non-linear problems, remarks 1, 2, 3, and 5 suggest that improvements over the LSPG method are possible. Remark 1 suggests improvements in computational speed and accuracy are possible by removing sensitivity to the time-step size. Remark 3 suggests that improvements in computational speed and flexibility are possible by formulating a method that can be used with explicit time-stepping schemes. Lastly, remark 5 suggests that improvements in accuracy are possible by formulating a method that accounts for subgrid effects.\n\n\n\\subsection{Mori-Zwanzig Reduced-Order Models}\\label{sec:MZ}\nThe optimal prediction framework formulated by Chorin et al.~\\cite{ChorinOptimalPrediction,ChorinOptimalPredictionMemory,Chorin_book}, which is a (significant) reformulation of the Mori-Zwanzig (MZ) formalism of statistical mechanics, is a model order reduction tool that can be used to develop representations of the impact of the fine-scales on the coarse-scale dynamics. In this section, the optimal prediction framework is used to derive a compact approximation to the impact of the fine-scale POD modes on the evolution of the coarse-scale POD modes. For completeness, the optimal prediction framework is first derived in the context of the Galerkin POD ROM. 
It is emphasized that the content presented in Sections~\\ref{sec:liouville} and \\ref{sec:projOpsLangevin} is simply a formulation of Chorin's framework, with a specific projection operator, in the context of the Galerkin POD ROM. \n\n\nWe pursue the MZ approach on a Galerkin formulation of Eq.~\\ref{eq:FOM_generalized}. Before describing the formalism, it is beneficial to re-write the original FOM in terms of the generalized coordinates with the solution being defined implicitly as a function of the initial conditions,\n\\begin{equation}\\label{eq:FOM_generalized_b}\n\\frac{d }{dt} \\mathbf{a} (\\mathbf{a}_0,t) = {\\mathbf{V}}^T \\mathbf{R}({\\mathbf{V}} \\mathbf{a} (\\mathbf{a}_0,t)), \\qquad \\mathbf{a}(0) = \\mathbf{a}_0, \\qquad t \\in [0,T],\n\\end{equation}\nwith $\\mathbf{a}: \\mathbb{R}^N \\times [0,T] \\rightarrow \\mathbb{R}^N$, $\\mathbf{a} \\in \\RR{N} \\otimes \\RRC{N} \\otimes \\MC{T}$ the time-dependent generalized coordinates, $\\RRC{N}$ the space of (sufficiently smooth) functions acting on $\\RR{N}$, $\\MC{T}$ the space of (sufficiently smooth) functions acting on $[0,T]$, and $\\mathbf{a}_0 \\in \\mathbb{R}^N$ the initial conditions. Here, $\\mathbf{a}(\\mathbf{a}_0,t)$ is viewed as a function that maps from the coordinates $\\mathbf{a}_0$ (i.e., the initial conditions) and time to a vector in $\\mathbb{R}^N$. It is assumed that the right-hand side operator $\\mathbf{R}$ is continuously differentiable on $\\mathbb{R}^N$.\n\\subsubsection{The Liouville Equation}\\label{sec:liouville}\nThe starting point of the MZ approach is to transform the non-linear FOM (Eq.~\\ref{eq:FOM_generalized_b}) into a linear partial differential equation. \nEquation~\\ref{eq:FOM_generalized_b} can be written equivalently as the following partial differential equation in $\\mathbb{R}^N \\times [0,T]$~\\cite{ChorinOptimalPredictionMemory},\n\\begin{equation}\\label{eq:Liouville}\n\\frac{\\partial }{\\partial t}v(\\mathbf{a}_0,t) = \\mathcal{L} v(\\mathbf{a}_0,t); \\qquad\nv(\\mathbf{a_0},0) = g(\\mathbf{a}_0),\n\\end{equation}\nwhere $v: \\mathbb{R}^N \\times [0,T] \\rightarrow \\mathbb{R}^{N_v}$ with $v \\in \\RR{N_v} \\otimes \\RRC{N} \\otimes \\MC{T}$ is a set of $N_v$ observables and $g: \\mathbb{R}^N \\rightarrow \\mathbb{R}^{N_v}$ is a state-to-observable map. The operator $\\mathcal{L}$ is the Liouville operator, also known as the Lie derivative, and is defined by,\n\\begin{align*}\n\\mathcal{L} &: \\mathbf{q} \\mapsto \\bigg[ \\frac{\\partial }{\\partial \\mathbf{a_0}} \\mathbf{q} \\bigg] {\\mathbf{V}}^T \\mathbf{R}( {\\mathbf{V}} \\mathbf{a_0} ),\\\\\n &: \\RR{q} \\otimes \\RRC{N} \\rightarrow \\RR{q} \\otimes \\RRC{N},\n\\end{align*}\nfor arbitrary q.\nEquation~\\ref{eq:Liouville} is referred to as the Liouville equation and is an exact statement of the original dynamics. The Liouville equation describes the solution to Eq.~\\ref{eq:FOM_generalized_b} for \\textit{all} possible initial conditions. The advantage of reformulating the system in this way is that the Liouville equation is linear, allowing for the use of superposition and aiding in the removal of the fine-scales.\n\nThe solution to Eq.~\\ref{eq:Liouville} can be written as,\n\\begin{equation*}\nv(\\mathbf{a_0},t) = e^{t \\mathcal{L}} g(\\mathbf{a_0}) .\n\\end{equation*}\nThe operator $e^{t \\mathcal{L}}$, which has been referred to as a ``propagator\", evolves the solution along its trajectory in phase-space~\\cite{ZwanzigBook}. The operator $e^{t \\mathcal{L}}$ has several interesting properties. 
Most notably, the operator can be ``pulled\" inside of a non-linear functional~\\cite{ZwanzigBook},\n\\begin{equation*}\ne^{t \\mathcal{L}} g(\\mathbf{a}_0) = g( e^{t \\mathcal{L}} \\mathbf{a}_0).\n\\end{equation*}\nThis is similar to the composition property inherent to Koopman operators~\\cite{Koopman}. With this property, the solution to Eq.~\\ref{eq:Liouville} may be written as,\n\\begin{equation*}\nv(\\mathbf{a_0},t) = g( e^{t \\mathcal{L}} \\mathbf{a}_0).\n\\end{equation*}\nThe implications of this property are significant: given the trajectories $\\mathbf{a}(\\mathbf{a_0},t)$, the solution $v$ is known for any observable $g$. \n\nNoting that $\\mathcal{L}$ and $e^{t \\mathcal{L}}$ commute, Eq.~\\ref{eq:Liouville} may be written as,\n\\begin{equation*}\n\\frac{\\partial }{\\partial t} v(\\mathbf{a_0},t) = e^{t \\mathcal{L}} \\mathcal{L} v(\\mathbf{a_0},0).\n\\end{equation*}\nA set of partial differential equations for the resolved generalized coordinates can be obtained by taking $g(\\mathbf{a_0}) = \\tilde{\\mathbf{a}}_0$,\n\\begin{equation}\\label{eq:Liouville_sg_res}\n\\frac{\\partial }{\\partial t} e^{t \\mathcal{L}} \\tilde{\\mathbf{a}}_0 = e^{t \\mathcal{L}} \\mathcal{L} \\tilde{\\mathbf{a}}_0.\n\\end{equation}\nThe remainder of the derivation is performed for $g(\\mathbf{a_0}) = \\tilde{\\mathbf{a}}_0$, thus $N_v = K$. \n\\subsubsection{Projection Operators and the Generalized Langevin Equation}\\label{sec:projOpsLangevin}\nThe objective now is to remove the dependence of Eq.~\\ref{eq:Liouville_sg_res} on the fine-scale variables.\nSimilar to the VMS decomposition, $\\RRC{N}$ can be decomposed into resolved and unresolved subspaces,\n\\begin{equation*}\n\\RRC{N} = \\tilde{\\MC{H}} \\oplus \\MC{H}',\n\\end{equation*} \nwith $\\tilde{\\MC{H}}$ being the space of all functions of the resolved coordinates, $\\tilde{\\mathbf{a}}_0$, and $\\MC{H}'$ the complementary space.\nThe associated projection operators are defined as $\\mathcal{P}: \\RRC{N} \\rightarrow \\tilde{\\MC{H}}$ and $\\mathcal{Q} = I - \\mathcal{P}$. Various types of projections are possible, and here we consider,\n\\begin{equation*}\n\\mathcal{P}f( \\mathbf{a}_0 ) = \\int_{\\RR{N}} f( \\mathbf{a}_0 ) \\delta({\\mathbf{a}^{\\prime}}_0) d {\\mathbf{a}^{\\prime}}_0,\n\\end{equation*}\nwhich leads to,\n\\begin{equation*}\n\\mathcal{P}f(\\mathbf{a}_0 )= f([\\tilde{\\mathbf{a}}_0;\\mathbf{0}]).\n\\end{equation*}\n\nThe projection operators can be used to split the Liouville equation,\n\\begin{equation}\\label{eq:Liouville_sg_split}\n\\frac{\\partial }{\\partial t} e^{t \\mathcal{L}} \\tilde{\\mathbf{a}}_0 = e^{t \\mathcal{L}} \\mathcal{PL} \\tilde{\\mathbf{a}}_0 + e^{t \\mathcal{L}}\\mathcal{QL}\\tilde{\\mathbf{a}}_0.\n\\end{equation}\nThe objective now is to remove the dependence of the right-hand side of Eq.~\\ref{eq:Liouville_sg_split} on the fine-scales, ${\\mathbf{a}_0^{\\prime}}$ (i.e. $\\mathcal{QL}\\tilde{\\mathbf{a}}_0$). 
This may be achieved by Duhamel's principle,\n\\begin{equation}\\label{eq:duhamel}\ne^{t \\mathcal{L}} = e^{t \\mathcal{Q} \\mathcal{L}} + \\int_0^t e^{(t - s)\\mathcal{L}} \\mathcal{P}\\mathcal{L} e^{s \\mathcal{Q} \\mathcal{L}} ds.\n\\end{equation}\nInserting Eq.~\\ref{eq:duhamel} into Eq.~\\ref{eq:Liouville_sg_split}, the generalized Langevin equation is obtained,\n\\begin{equation}\\label{eq:MZ_Identity}\n\\frac{\\partial }{\\partial t} e^{t \\mathcal{L}} \\tilde{\\mathbf{a}}_0 = \\underbrace{e^{t\\mathcal{L}}\\mathcal{PL} \\tilde{\\mathbf{a}}_0}_{\\text{Markovian}} + \\underbrace{e^{t\\mathcal{QL}}\\mathcal{QL} \\tilde{\\mathbf{a}}_0}_{\\text{Noise}} + \n \\underbrace{ \\int_0^t e^{{(t - s)}\\mathcal{L}} \\mathcal{P}\\mathcal{L} e^{s \\mathcal{Q} \\mathcal{L}} \\mathcal{QL}\\tilde{\\mathbf{a}}_0 ds}_{\\text{Memory}}.\n\\end{equation}\nBy the definition of the initial conditions (Eq.~\\ref{eq:ROM_IC}), the noise-term is zero and we obtain,\n\\begin{equation}\\label{eq:MZ_Identity3}\n\\frac{\\partial}{\\partial t} e^{t \\mathcal{L}} \\tilde{\\mathbf{a}}_0 = e^{t\\mathcal{L}}\\mathcal{PL} \\tilde{\\mathbf{a}}_0+ \\int_0^t e^{{(t - s)}\\mathcal{L}} \\mathcal{P}\\mathcal{L} e^{s \\mathcal{Q} \\mathcal{L}} \\mathcal{QL} \\tilde{\\mathbf{a}}_0 ds.\n\\end{equation}\nThe system described in Eq.~\\ref{eq:MZ_Identity} is precise and not an approximation to the original ODE system. For notational purposes, define,\n\\begin{equation}\\label{eq:kerndef}\n\\mathbf{K}(\\tilde{\\mathbf{a}}_0,t) \\equiv \\mathcal{PL}e^{t\\mathcal{QL}}\\mathcal{QL}\\tilde{\\mathbf{a}}_0.\n\\end{equation}\nThe term $\\mathbf{K}: \\mathbb{R}^K \\times [0,T] \\rightarrow \\mathbb{R}^K$ with $\\mathbf{K} \\in \\RR{K} \\otimes \\tilde{\\MC{H}} \\otimes \\MC{T}$ is referred to as the memory kernel. \n\nUsing the identity $e^{t \\mathcal{L}} \\mathcal{PL}\\tilde{\\mathbf{a}}_0 = \\tilde{\\mathbf{V}}^T \\mathbf{R}(\\tilde{\\mathbf{u}}(t))$ and Definition~\\eqref{eq:kerndef}, Equation~\\ref{eq:MZ_Identity3} can be written in a more transparent form,\n\\begin{equation}\\label{eq:MZ_Identity_VMS}\n\\tilde{\\mathbf{V}}^T \\bigg( \\frac{\\partial }{ \\partial t}\\tilde{\\mathbf{u}}(t) - \\mathbf{R}(\\tilde{\\mathbf{u}}(t)) \\bigg) = \\int_0^t \\mathbf{K}(\\tilde{\\mathbf{a}}(t-s),s) ds,\n\\end{equation}\nNote that the time derivative is represented as a partial derivative due to the Liouville operators embedded in the memory.\n\nThe derivation up to this point has cast the original full-order model in generalized coordinates (Eq.~\\ref{eq:FOM_generalized_b}) as a linear PDE. Through the use of projection operators and Duhamel's principle, an \\textit{exact} equation (Eq.~\\ref{eq:MZ_Identity_VMS}) for the coarse-scale dynamics \\textit{only in terms of the coarse-scale variables} was then derived. The effect of the fine-scales on the coarse-scales appeared as a memory integral. This memory integral may be thought of as the closure term that is required to exactly account for the unresolved dynamics.\n \n\\subsubsection{The $\\tau$-model and the Adjoint Petrov--Galerkin Method}\\label{sec:tau-model}\nThe direct evaluation of the memory term in Eq.~\\ref{eq:MZ_Identity_VMS} is, in general, computationally intractable. To gain a reduction in computational cost, an approximation to the memory must be devised. A variety of such approximations exist, and here we outline the $\\tau$-model~\\cite{parish_dtau,BarberThesis}. 
The $\\tau$-model can be interpreted as the result of assuming that the memory is driven to zero in finite time and approximating the integral with a quadrature rule. This can be written as a two-step approximation,\n\\begin{equation*}\n\\int^t_0 \\mathbf{K}(\\tilde{\\mathbf{a}}(t-s),s) ds \\approx \\int^t_{t-\\tau} \\mathbf{K}(\\tilde{\\mathbf{a}}(t-s),s) ds \\approx \\tau \\mathbf{K}(\\tilde{\\mathbf{a}}(t),0).\n\\end{equation*}\nHere, $\\tau \\in \\RR{}$ is a stabilization parameter that is sometimes referred to as the ``memory length.\" It is typically static and user-defined, though methods of dynamically calculating it have been developed in \\cite{parish_dtau}. The \\textit{a priori} selection of $\\tau$ and sensitivity of the model output to this selection are discussed later in this manuscript. \n\nThe term $\\mathbf{K}(\\tilde{\\mathbf{a}}(t),0)$ can be shown to be~\\cite{parishVMS},\n\\begin{equation*}\n\\mathbf{K}(\\tilde{\\mathbf{a}}(t),0) = \\tilde{\\mathbf{V}}^T \\mathbf{J}[\\tilde{\\mathbf{u}}(t)] {\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}}(t)),\n\\end{equation*}\nwhere ${\\Pi^{\\prime}}$ is the ``orthogonal projection operator,\" defined as ${\\Pi^{\\prime}} \\equiv \\big(\\mathbf{I}-\\tilde{\\mathbf{V}} \\tilde{\\mathbf{V}}^T\\big)$. We define the corresponding coarse-scale projection operator as $\\tilde{\\Pi} \\equiv \\tilde{\\mathbf{V}} \\tilde{\\mathbf{V}}^T$. The coarse-scale equation with the $\\tau$-model reads,\n\\begin{equation}\\label{eq:MZ_coarse_tau_NL}\n\\tilde{\\mathbf{V}}^T \\bigg( \\frac{d }{dt}\\tilde{\\mathbf{u}}(t) - \\mathbf{R}(\\tilde{\\mathbf{u}}(t)) \\bigg) = \\tau \\tilde{\\mathbf{V}}^T \\mathbf{J}[\\tilde{\\mathbf{u}}(t)] {\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}}(t)).\n\\end{equation}\nEquation~\\ref{eq:MZ_coarse_tau_NL} provides a closed equation for the evolution of the coarse-scales. The left-hand side of Eq.~\\ref{eq:MZ_coarse_tau_NL} is the standard Galerkin ROM, and the right-hand side can be viewed as a subgrid-scale model. \n\nWhen compared to existing methods, the inclusion of the $\\tau$-model leads to a method that is analogous to a non-linear formulation of the \\textit{adjoint} stabilization technique developed in the finite element community. The ``adjoint\" terminology arises from writing Eq.~\\ref{eq:MZ_coarse_tau_NL} in a Petrov--Galerkin form,\n\\begin{equation}\\label{eq:adjoint_Galerkin}\n \\bigg[ \\bigg( \\mathbf{I} + \\tau {\\Pi^{\\prime}}^T \\mathbf{J}^T[\\tilde{\\mathbf{u}}(t)]\\bigg) \\tilde{\\mathbf{V}} \\bigg]^T \\bigg( \\frac{d }{dt}\\tilde{\\mathbf{u}}(t) - \\mathbf{R}(\\tilde{\\mathbf{u}}(t)) \\bigg) = \\mathbf{0}.\n\\end{equation}\nIt is seen that Eq.~\\ref{eq:adjoint_Galerkin} involves taking the inner product of the coarse-scale ODE with a test-basis that contains the adjoint of the coarse-scale Jacobian. Unlike GLS stabilization, adjoint stabilization can be derived from the multiscale equations~\\cite{hughes0}. Due to the similarity of the proposed method with adjoint stabilization techniques, as well as the LSPG terminology, the complete ROM formulation will be referred to as the Adjoint Petrov--Galerkin (APG) method. \n\n\\subsubsection{Comparison of APG and LSPG}\nThe APG method displays similarities to LSPG. 
From Eq.~\\ref{eq:adjoint_Galerkin}, it is seen that the test basis for the APG ROM is given by,\n\\begin{equation}\\label{eq:MZ_testbasis}\n\\tilde{\\mathbf{W}}_{A} = \\bigg( \\mathbf{I} + \\tau {\\Pi^{\\prime}}^T \\mathbf{J}^T[\\tilde{\\mathbf{u}}]\\bigg) \\tilde{\\mathbf{V}} .\n\\end{equation}\nRecall the LSPG test basis for backward differentiation schemes,\n\\begin{equation}\\label{eq:LSPG_testbasis}\n\\tilde{\\mathbf{W}}_{LSPG} = \\big( \\mathbf{I} - \\alpha \\Delta t \\mathbf{J}[\\tilde{\\mathbf{u}}] \\big) \\tilde{\\mathbf{V}}.\n\\end{equation}\nComparing Eq.~\\ref{eq:LSPG_testbasis} to Eq.~\\ref{eq:MZ_testbasis}, we can draw several interesting comparisons between the LSPG and APG method. Both contain a time-scale: $\\tau$ for APG and $\\alpha \\Delta t$ for LSPG. Both include Jacobians of the non-linear function $\\mathbf{R}(\\tilde{\\mathbf{u}})$. The two methods differ in the presence of the orthogonal projection operator in APG, a transpose on the Jacobian, and a sign discrepancy on the Jacobian. These last two differences are consistent with the discrepancies between GLS and adjoint stabilization methods used in the finite element community. See, for instance, Eqs. 71 and 73 in Ref~\\cite{hughes0}.\n\n\n\n\n\\section{Analysis}\\label{sec:analysis}\nThis section presents theoretical analyses of the Adjoint Petrov--Galerkin method. Specifically, error and eigenvalue analyses are undertaken for linear time-invariant (LTI) systems.\nSection~\\ref{sec:error_bound} derives \\textit{a priori} error bounds for the Galerkin and Adjoint Petrov--Galerkin ROMs. Conditions under which the APG ROM may be more accurate than the Galerkin ROM are discussed. Section~\\ref{sec:selecttau} outlines the selection of the parameter $\\tau$ that appears in APG. \n\\subsection{A Priori Error Bounds}\\label{sec:error_bound}\nWe now derive \\textit{a priori} error bounds for the Galerkin and Adjoint Petrov--Galerkin method for LTI systems. Define ${\\mathbf{u}}_F$ to be the solution to the FOM, $\\tilde{\\mathbf{u}}_G$ to be the solution to the Galerkin ROM, and $\\tilde{\\mathbf{u}}_A$ the solution to the Adjoint Petrov--Galerkin ROM. The full-order solution, Galerkin ROM, and Adjoint Petrov--Galerkin ROMs obey the following dynamical systems,\n\\begin{equation}\\label{eq:fom_error}\n\\frac{d }{dt}{\\mathbf{u}}_F(t) = \\mathbf{R}({\\mathbf{u}}_F(t)), \\qquad {\\mathbf{u}}_F(0) = {\\mathbf{u}}_0,\n\\end{equation}\n\\begin{equation}\\label{eq:grom_error}\n\\frac{d }{dt}\\tilde{\\mathbf{u}}_G(t) = \\mathbb{P}_G \\mathbf{R}(\\tilde{\\mathbf{u}}_G(t)), \\qquad {\\mathbf{u}}_G(0) = {\\mathbf{u}}_0,\n\\end{equation}\n\\begin{equation}\\label{eq:ag_error}\n\\frac{d }{dt}\\tilde{\\mathbf{u}}_A(t) = \\mathbb{P}_{A} \\mathbf{R}(\\tilde{\\mathbf{u}}_A(t)), \\qquad {\\mathbf{u}}_A(0) = {\\mathbf{u}}_0,\n\\end{equation}\nwhere the Galerkin and Adjoint Petrov--Galerkin projections are, respectively,\n\\begin{equation*}\n\\mathbb{P}_G = \\tilde{\\Pi}, \\qquad \\mathbb{P}_A = \\tilde{\\Pi} \\big[ \\mathbf{I} + \\tau \\mathbf{J}[\\tilde{\\mathbf{u}}_A] {\\Pi^{\\prime}} \\big]. 
\n\\end{equation*}\nThe residual of the full-order model is defined as,\n\\begin{equation*}\n\\mathbf{r}_F : {\\mathbf{u}} \\mapsto \\frac{d {\\mathbf{u}}}{dt} - \\mathbf{R}({\\mathbf{u}}).\n\\end{equation*}\nWe define the error in the Galerkin and Adjoint Petrov--Galerkin method as,\n\\begin{equation*}\n\\mathbf{e}_G \\overset{\\Delta}{=} {\\mathbf{u}}_F - \\tilde{\\mathbf{u}}_G, \\qquad \\mathbf{e}_A \\trieq {\\mathbf{u}}_F - \\tilde{\\mathbf{u}}_A. \n\\end{equation*}\nSimilarly, the coarse-scale error is defined as,\n\\begin{equation*}\n\\mathbf{\\tilde{e}}_G \\overset{\\Delta}{=} \\tilde{\\Pi} {\\mathbf{u}}_F - \\tilde{\\mathbf{u}}_G, \\qquad \\tilde{\\mathbf{e}}_A \\trieq \\tilde{\\Pi} {\\mathbf{u}}_F - \\tilde{\\mathbf{u}}_A. \n\\end{equation*}\nIn what follows, we assume Lipschitz continuity of the right-hand side function: there exists a constant $\\kappa > 0$ such that $\\forall \\mathbf{x},\\mathbf{y} \\in \\mathbb{R}^N$,\n\\begin{equation*}\n\\norm{ \\mathbf{R}(\\mathbf{x}) - \\mathbf{R}(\\mathbf{y})} \\le \\kappa \\norm{ \\mathbf{x} - \\mathbf{y}}. \n\\end{equation*}\nTo simplify the analysis, the Adjoint Petrov--Galerkin projection is approximated to be stationary in time. Note that the Galerkin projection is stationary in time. For clarity, we suppress the temporal argument on the states when possible in the proofs.\n\\begin{theorem}\n\\textit{A priori} error bounds for the Galerkin and Adjoint Petrov--Galerkin ROMs are, respectively,\n\\begin{equation}\\label{eq:g_nlbound} \\norm{\\mathbf{e}_G(t)} \\le \\int_0^t e^{ \\norm{\\mathbf{P}_G} \\kappa s }\\norm{\\big[\\mathbf{I} - \\mathbb{P}_G \\big] \\mathbf{R}( {\\mathbf{u}}_F(t-s) ) } ds .\\end{equation}\n\\begin{equation}\\label{eq:ag_nlbound} \\norm{\\mathbf{e}_A(t)} \\le \\int_0^t e^{\\norm{\\mathbf{P}_A} \\kappa s }\\norm{\\big[\\mathbf{I} - \\mathbb{P}_A \\big] \\mathbf{R}({\\mathbf{u}}_F(t-s))} ds .\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nWe prove only Eq.~\\ref{eq:ag_nlbound} as Eq.~\\ref{eq:g_nlbound} is obtained through the same arguments. 
Following~\\cite{carlberg_lspg_v_galerkin}, start by subtracting Eq.~\ref{eq:ag_error} from Eq.~\ref{eq:fom_error}, and adding and subtracting $\mathbb{P}_A\mathbf{R}({\mathbf{u}}_F)$,\n\begin{equation*}\label{eq:ea_lti_1}\n\frac{d \mathbf{e}_A}{dt} = \mathbf{R} ({\mathbf{u}}_F) + \mathbb{P}_A \mathbf{R} ({\mathbf{u}}_F) - \mathbb{P}_A \mathbf{R} ({\mathbf{u}}_F) - \mathbb{P}_A \mathbf{R}(\tilde{\mathbf{u}}_A), \qquad \mathbf{e}_A(0) = \mathbf{0}.\n\end{equation*}\nTaking the $L^2$-norm,\n\begin{equation*}\label{eq:ea_lti_2}\n\norm{ \frac{d \mathbf{e}_A}{dt} } = \norm{ \mathbf{R} ({\mathbf{u}}_F) + \mathbb{P}_A \mathbf{R} ({\mathbf{u}}_F) - \mathbb{P}_A \mathbf{R} ({\mathbf{u}}_F) - \mathbb{P}_A \mathbf{R}(\tilde{\mathbf{u}}_A)}.\n\end{equation*}\nApplying the triangle inequality,\n\begin{equation*}\label{eq:ea_lti_3}\n\norm{ \frac{d \mathbf{e}_A}{dt} }\le \norm{\big[\mathbf{I} - \mathbb{P}_A \big] \mathbf{R}( {\mathbf{u}}_F) } + \norm{ \mathbb{P}_A \big( \mathbf{R}( {\mathbf{u}}_F) - \mathbf{R}(\tilde{\mathbf{u}}_A) \big) } .\n\end{equation*}\nInvoking the assumption of Lipschitz continuity,\n\begin{equation*}\label{eq:ea1}\n\norm{ \frac{d \mathbf{e}_A}{dt} }\le \norm{\big[\mathbf{I} - \mathbb{P}_A \big] \mathbf{R}( {\mathbf{u}}_F)} +\norm{ \mathbb{P}_A} \kappa \norm{\mathbf{e}_A}.\n\end{equation*}\nNoting that $\frac{d \norm{ \mathbf{e}_A } }{dt} \le \norm{ \frac{d \mathbf{e}_A}{dt}}$,\footnote{ \n$$\frac{d \norm{ \mathbf{e}_A } }{dt} = \frac{1}{\norm{\mathbf{e}_A}} \mathbf{e}_A^T \frac{d \mathbf{e}_A}{dt} \le \norm{ \frac{1}{\norm{\mathbf{e}_A}} \mathbf{e}_A} \norm{ \frac{d \mathbf{e}_A}{dt}} \le \norm{ \frac{d \mathbf{e}_A}{dt}}\n$$} we have,\n\begin{equation}\label{eq:ea2}\n\frac{d \norm{\mathbf{e}_A }}{dt} \le \norm{\big[\mathbf{I} - \mathbb{P}_A \big] \mathbf{R}( {\mathbf{u}}_F)} +\norm{ \mathbb{P}_A} \kappa \norm{\mathbf{e}_A}.\n\end{equation}\n\begin{comment}\nNext, we show $\int_0^t \norm{ \frac{d \mathbf{e}_A}{ds} }ds$ is bounded from below by $\norm{ \int_0^t \frac{d \mathbf{e}_A}{ds} ds }$.
Expanding $\\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds }^2$,\n\\begin{equation*}\n\\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds }^2 = \\bigg[ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds \\bigg]^T\\bigg[ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds \\bigg] .\n\\end{equation*}\nPulling the first integral on the right-hand side into the second integral as it is independent of $s$,\n\\begin{equation*}\n\\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds }^2 = \\int_0^t \\bigg[ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds \\bigg]^T \\bigg[\\frac{d \\mathbf{e}_A}{ds} \\bigg] ds \n\\end{equation*}\nUsing $\\mathbf{a}^T \\mathbf{b} \\le \\norm{\\mathbf{a}}\\norm{\\mathbf{b}}$,\n\\begin{equation*}\n\\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds }^2\\le \\int_0^t \\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds } \\norm{ \\frac{d \\mathbf{e}_A}{ds} } ds .\n\\end{equation*}\nPulling the first term on the right-hand side outside of the integral,\n\\begin{equation*}\n\\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds }^2 \\le \\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds } \\int_0^t \\norm{ \\frac{d \\mathbf{e}_A}{ds} } ds \n\\end{equation*}\nDividing through by $\\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds }$ gives,\n\\begin{equation*}\n\\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds } \\le \\int_0^t \\norm{ \\frac{d \\mathbf{e}_A}{ds} } ds.\n\\end{equation*}\nNext, it is noted that,\n\\begin{align*}\n\\norm{ \\int_0^t \\frac{d \\mathbf{e}_A}{ds} ds } &= \\norm{\\mathbf{e}_A(t) - \\mathbf{e}_A(0)} \\\\\n&= \\norm{\\mathbf{e}_A(t)} , \\\\\n&= \\int_0^t \\frac{d \\norm{\\mathbf{e}_A}} {ds} ds .\n\\end{align*}\nTherefore,\n\\begin{equation}\\label{eq:norm_inequality}\n\\int_0^t \\frac{d \\norm{\\mathbf{e}_A}} {ds} ds \\le \\int_0^t \\norm{ \\frac{d \\mathbf{e}_A}{ds} } ds.\n\\end{equation}\n\\end{comment}\nAn upper bound on the error for the Adjoint Petrov--Galerkin method is then obtained by solving Eq.~\\ref{eq:ea2} for $\\norm{\\mathbf{e}_A}$, which yields,\n\\begin{equation*}\n\\norm{ \\mathbf{e}_A(t) } \\le \\int_0^t e^{\\norm{ \\mathbb{P}_A } \\kappa s }\\norm{\\big[\\mathbf{I} - \\mathbb{P}_A \\big] \\mathbf{R} ({\\mathbf{u}}_F(t-s))} ds .\n\\end{equation*}\n\n\n \\end{proof}\nNote that, for the Adjoint Petrov--Galerkin method, the error bound provided in Eq.~\\ref{eq:ag_nlbound} is not truly an \\textit{a priori} bound as $\\mathbb{P}_A$ is a function of $\\tilde{\\mathbf{u}}_A$.\nEquation~\\ref{eq:g_nlbound} (and~\\ref{eq:ag_nlbound}) indicates an exponentially growing error and contains two distinct terms. The term $e^{\\norm{\\mathbb{P}_A} \\kappa s }$ indicates the exponential growth of the error in time. The second term of interest is $\\norm{\\big[\\mathbf{I} - \\mathbb{P}_A \\big] \\mathbf{R} ({\\mathbf{u}}_F(t))}$. This term corresponds to the error introduced at time $t$ due to projection. It is important to note that the first term controls how the error will grow in time, while the second term controls how much error is added at a given time. \n\nUnfortunately, for general non-linear systems, \\textit{a priori} error analysis provides minimal insight beyond what was just mentioned. To obtain a more intuitive understanding of the APG method, error analysis in the case that $\\mathbf{R}(\\cdot)$ is a linear time-invariant operator is now considered. 
\n\n\\input{proofs}\n\n\\begin{comment}\n\\begin{corollary}\\label{cor:1}\nIf $\\norm{ \\mathbb{P}_A} \\le \\norm{\\mathbb{P}_G }$, then the upper bound on the error will accumulate at a slower rate in the Adjoint Petrov--Galerkin method than in the Galerkin method.\n\\end{corollary}\n\\begin{proof}\nIf $\\norm{ \\mathbb{P}_A} \\le \\norm{\\mathbb{P}_G }$ then $e^{\\norm{\\mathbb{P}_A} s} < e^{ \\norm{\\mathbb{P}_G} s}$ $\\forall s > 0$, and the desired result is obtained.\n\\end{proof}\nCorollary~\\ref{cor:1} shows that it is possible for the APG ROM to be more accurate than the G ROM. Theorem 2 demonstrates one possible situation in which this will occur.\n\n\\begin{theorem}\\label{theorem:symmetric}\nIf $ \\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}} $ is a diagonalizable symmetric matrix with negative eigenvalues, then \n$$ \\norm{ \\mathbb{P}_A} \\le \\norm{\\mathbb{P}_G }; \\qquad \\forall \\tau \\in \\bigg[0, \\frac{2}{ \\rho( \\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}} )} \\bigg],$$\nand errors in the Adjoint Petrov--Galerkin method grow slower than in the Galerkin method. \\footnote{Note that the function $\\rho(\\cdot)$ indicates the spectral radius, not to be confused with the physical density $\\rho$ which appears later in this manuscript.}\n\\end{theorem}\n\n\\begin{proof}\nStart by defining an upper bound on $\\mathbb{P}_A$. It is helpful to note that by the orthonormality of $\\tilde{\\mathbf{V}}$,\n\\begin{equation}\\label{eq:picoarse_norm}\n\\norm{\\mathbb{P}_G} = \\norm{\\tilde{\\Pi}} = 1.\n\\end{equation}\nBy the sub-multiplicative property of the $L^2$-norm,\n$$\\norm{ \\mathbb{P}_A } \\le \\norm{\\tilde{\\Pi}} \\norm{\\big[ \\mathbf{I} + \\tau \\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}} \\big]}.$$\nBy Eq.~\\ref{eq:picoarse_norm},\n$$\\norm{ \\mathbb{P}_A }\\le \\norm{\\big[ \\mathbf{I} + \\tau \\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}} \\big]}.$$\nInvoking the assumption that $ \\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}} $ is diagonalizable, the eigendecomposition can be written as,\n$$ \\mathbf{I} + \\tau \\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}} = \\mathbf{S} (\\tau \\Lambda + \\mathbf{I}) \\mathbf{S}^{-1},$$\nwhere $\\Lambda$ is a diagonal matrix containing the $N-K$ non-zero eigenvalues of $\\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}}$. \nInvoking the assumption that $ \\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}} $ is symmetric, one has $\\norm{\\mathbf{S}}=\\norm{\\mathbf{S}^{-1}} = 1$, obtaining the upper bound,\n$$\\norm{ \\mathbb{P}_A }\\le \\norm{ (\\tau \\Lambda + \\mathbf{I}) }.$$\nThe norm of a diagonal matrix is the maximum absolute value of its diagonal elements.\nKnowing that the eigenvalues of $\\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}}$ are all negative, this can be written as,\n$$\\norm{ \\mathbb{P}_A }\\le |\\tau \\cdot \\underset{i}{min} (\\lambda_i) + 1|.$$\nNoting that $\\norm{ \\mathbb{P}_G } = 1$, we can enforce that $\\norm{ \\mathbb{P}_A } \\le \\norm{ \\mathbb{P}_G}$ via the inequality,\n$$ |\\tau \\cdot \\underset{i}{min} (\\lambda_i) + 1| \\leq 1 $$\nRecognizing that $\\underset{i}{min} (\\lambda_i) = - \\rho( \\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}} )$, we can solve for bounds on $\\tau$ to arrive at,\n$$\\norm{ \\mathbb{P}_A } \\le \\norm{ \\mathbb{P}_G}; \\qquad \\forall \\tau \\in \\bigg[0, \\frac{2}{ \\rho( \\mathbf{J}[\\tilde{\\mathbf{u}}_A]{\\Pi^{\\prime}} ) } \\bigg] .$$\n\\end{proof}\n\n\n\n\nTheorem~\\ref{theorem:symmetric} is interesting. 
First, it provides theoretical results showing the conditions under which the Adjoint Petrov--Galerkin method will have a lower rate of error growth than the Galerkin method. Further, it provides valuable insight into the selection of the stabilization parameter $\\tau$. Specifically, it is an upper bound on $\\tau$ and shows that this bound varies inversely with the spectral radius of the Jacobian.\n\\end{comment}\n\n\\subsection{Selection of Memory Length $\\tau$}\\label{sec:selecttau}\nThe APG method requires the specification of the parameter $\\tau$. Theorem~\\ref{theorem:errorbound_symmetric} showed that, for a self-adjoint linear system, bounds on the value of $\\tau$ are related to the eigenvalues of the Jacobian of the full-dimensional right-hand side operator. While such bounds provide intuition into the behavior of $\\tau$, they are not particularly useful in the selection of an optimal value of $\\tau$ as they 1.) are conservative due to repeated use of inequalities and 2.) require the eigenvalues of the full right-hand side operator, which one does not have access to in a ROM. Further, the bounds were derived for a self-adjoint linear system, and the extension to non-linear systems is unclear. \n\nIn practice, it is desirable to obtain an expression for $\\tau$ using only the coarse-scale Jacobian, $\\tilde{\\mathbf{V}}^T \\mathbf{J}[\\tilde{\\mathbf{u}}]\\tilde{\\mathbf{V}}$.\nIn Ref.~\\cite{parishMZ1}, numerical evidence showed a strong correlation between the optimal value of $\\tau$ and this coarse-scale Jacobian. Based on this numerical evidence and the analysis in the previous section, the following heuristic for selecting $\\tau$ is used:\n\\begin{equation}\\label{eq:taueq}\n\\tau = \\frac{C}{\\rho (\\tilde{\\mathbf{V}}^T \\mathbf{J}[\\tilde{\\mathbf{u}}] \\tilde{\\mathbf{V}})},\n\\end{equation}\nwhere $C$ is a model parameter and $\\rho(\\cdot)$ indicates the spectral radius. In Ref.~\\cite{parishMZ1}, $C$ was reported to be $0.2.$ In the numerical experiments presented later in this manuscript, the sensitivity of APG to the value of $\\tau$ and the validity of Eq.~\\ref{eq:taueq} are examined. \n\nSimilar to the selection of $\\tau$ in the APG method, the LSPG method requires the selection of an appropriate time-step~\\cite{carlberg_lspg_v_galerkin}. In practice, this fact can be problematic as finding an optimal time-step for LSPG which minimizes error may result in a small time-step and, hence, an expensive simulation. The selection of the parameter $\\tau$, on the other hand, does not impact the computational cost of the APG ROM.\n\n\\section{Implementation and Computational Cost of the Adjoint Petrov--Galerkin Method}\\label{sec:cost}\n\nThis section details the implementation of the Adjoint Petrov--Galerkin ROM for simple time integration schemes. Algorithms for explicit and implicit time integration schemes are provided, and the approximate cost of each method in floating-point operations (FLOPs) is analyzed. Here, a FLOP refers to any floating-point addition or multiplication; no distinction is made between the computational cost of either operation. The notation used here is as follows: $N$ is the full-order number of degrees of freedom, $K$ is the number of modes retained in the POD basis, and $\\omega N$ is the number of FLOPs required for one evaluation of the right-hand side, $\\mathbf{R}(\\tilde{\\mathbf{u}}(t))$. For sufficiently complex problems, $\\omega$ is usually on the order of $\\mathcal{O}(10) < \\omega < \\mathcal{O}(1000)$. 
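To make the cost comparisons that follow concrete, the short sketch below simply evaluates the approximate FLOP totals derived later in Table~\ref{tab:alg_apg_exp} for the explicit Galerkin and APG updates; the values $N = 1000$, $K = 50$, and $\omega = 50$ are illustrative assumptions only, chosen to be representative of the regime $K \ll N$, $\omega \gg 1$.\n\begin{verbatim}\n# Minimal sketch (assumed values): evaluate the approximate FLOP totals\n# for one explicit Euler update, taken from the cost tables below.\ndef flops_apg_explicit(N, K, w):\n    return 8 * N * K + (2 * w + 5) * N\n\ndef flops_galerkin_explicit(N, K, w):\n    return 4 * N * K + (w - 1) * N + K\n\nN, K, w = 1000, 50, 50   # illustrative values only\nratio = flops_apg_explicit(N, K, w) / flops_galerkin_explicit(N, K, w)\nprint(ratio)             # approximately 2 when K << N and w >> 1\n\end{verbatim}\nFor these values the ratio is approximately $2$, consistent with the observation below that the explicit APG update is roughly twice as expensive as the explicit Galerkin update.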
The analysis presented in this section does not consider hyper-reduction; it can be approximately extended to hyper-reduction by replacing the full-order degrees of freedom with the dimension of the hyper-reduced right-hand side.\footnote{An accurate cost analysis for hyper-reduced ROMs should consider oversampling of the right-hand side and the FOM stencil.}\n\n\subsection{Explicit Time Integration Schemes}\nThis section explores the cost of the APG method within the scope of explicit time integration schemes. For simplicity, the analysis is carried out only for the explicit Euler scheme. The computational cost of more sophisticated time integration methods, such as Runge-Kutta and multistep schemes, is generally a proportional scaling of the cost of the explicit Euler scheme. Algorithm~\ref{alg:alg_apg_exp} provides the step-by-step procedure for performing an explicit Euler update to the Adjoint Petrov--Galerkin ROM. Table~\ref{tab:alg_apg_exp} provides the approximate floating-point operations for the steps reported in Algorithm~\ref{alg:alg_apg_exp}. The algorithm for an explicit update to the Galerkin ROM, along with the associated FLOP counts, is provided in Algorithm~\ref{alg:alg_g_exp} and Table~\ref{tab:alg_g_exp} in Appendix~\ref{appendix:algorithms}. As noted previously, LSPG reverts to the Galerkin method for explicit schemes, and so is not detailed in this section. Table~\ref{tab:alg_apg_exp} shows that, in the case that $K \ll N$ (standard for a ROM) and $\omega \gg 1$ (sufficiently complex right-hand side), the Adjoint Petrov--Galerkin ROM is approximately twice as expensive as the Galerkin ROM.\n\n\n\n\n\n\begin{algorithm}\n\caption{Algorithm for an explicit Euler update for the APG ROM}\n\label{alg:alg_apg_exp}\nInput: $\tilde{\mathbf{a}}^n$\;\n\newline\nOutput: $\tilde{\mathbf{a}}^{n+1}$\;\n\newline\nSteps:\n\begin{enumerate}\n\item Compute the state from the generalized coordinates, $\tilde{\mathbf{u}}^n = \tilde{\mathbf{V}} \tilde{\mathbf{a}}^{n}$\n\item Compute the right-hand side from the state, $\mathbf{R}(\tilde{\mathbf{u}}^n)$\n\item Compute the projection of the right-hand side, $\tilde{\Pi} \mathbf{R}(\tilde{\mathbf{u}}^n) = \tilde{\mathbf{V}} \tilde{\mathbf{V}}^T \mathbf{R}(\tilde{\mathbf{u}}^n)$\n\item Compute the orthogonal projection of the right-hand side, ${\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{u}}^n) = \mathbf{R}(\tilde{\mathbf{u}}^n) - \tilde{\Pi} \mathbf{R}(\tilde{\mathbf{u}}^n)$\n\item Compute the action of the Jacobian on ${\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{u}}^n)$ using either of the two following strategies:\n \begin{enumerate}\n \item Finite difference approximation:\n \begin{equation*}\n \mathbf{J}[\tilde{\mathbf{u}}^n] {\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{u}}^n) \approx \frac{1}{\epsilon} \Big[ \mathbf{R}\big(\tilde{\mathbf{u}}^n + \epsilon {\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{u}}^n) \big) - \mathbf{R}(\tilde{\mathbf{u}}^n ) \Big], \n \end{equation*}\n where $\epsilon$ is a small constant value, usually $\sim \mathcal{O}(10^{-5})$.\n \item Exact linearization:\n \begin{equation*}\n \mathbf{J}[\tilde{\mathbf{u}}^n] {\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{u}}^n) = \mathbf{R'}[\tilde{\mathbf{u}}^n]({\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{u}}^n)),\n \end{equation*}\n where $\mathbf{R}'[\tilde{\mathbf{u}}^n]$ is the right-hand side operator linearized about $\tilde{\mathbf{u}}^n$.\n 
\\end{enumerate}\n\\item Compute the full right-hand side: $\\mathbf{R}(\\tilde{\\mathbf{u}}^n) + \\tau \\mathbf{J}[\\tilde{\\mathbf{u}}^n] {\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}}^n)$\n\n\\item Project: $\\tilde{\\mathbf{V}}^T \\bigg[ \\mathbf{R}(\\tilde{\\mathbf{u}}^n) + \\tau \\mathbf{J}[\\tilde{\\mathbf{u}}^n] {\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}}^n) \\bigg]$\n\\item Update the state $\\tilde{\\mathbf{a}}^{n+1} = \\tilde{\\mathbf{a}}^n + \\Delta t \\tilde{\\mathbf{V}}^T \\bigg[ \\mathbf{R}(\\tilde{\\mathbf{u}}^n) + \\tau\\mathbf{J}[\\tilde{\\mathbf{u}}^n] {\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}}^n) \\bigg]$\n\\end{enumerate}\n\\end{algorithm}\n\n\\begin{table}\n\\begin{tabular}{p{7cm} p{8cm}}\n\\hline\nStep in Algorithm~\\ref{alg:alg_apg_exp}& Approximate FLOPs \\\\\n\\hline\n1 & $2 N K - N$ \\\\\n2 & $\\omega N$ \\\\\n3 & $4 N K - N - K$ \\\\\n4 & $N $ \\\\\n5 & $(\\omega + 4)N $ \\\\\n6 & $2N $ \\\\\n7 & $2NK - K $ \\\\\n8 & $2K $ \\\\\n\\hline\nTotal & $8 N K + (2\\omega + 5) N$ \\\\\nTotal for Galerkin Method & $4 N K + (\\omega-1) N + K$ \\\\\n\\hline\n\\end{tabular}\n\\caption{Approximate floating-point operations for an explicit Euler update to the Adjoint Petrov--Galerkin method reported in Algorithm~\\ref{alg:alg_apg_exp}. The total FLOP count for the Galerkin ROM with an explicit Euler update is additionally reported for comparison. A full description of the Galerkin update is provided in Appendix~\\ref{appendix:algorithms}.}\n\\label{tab:alg_apg_exp}\n\\end{table}\n\n\n\n\\subsection{Implicit Time Integration Schemes}\nThis section evaluates the computational cost of the Galerkin, Adjoint Petrov--Galerkin, and Least-Squares Petrov--Galerkin methods for implicit time integration schemes. For non-linear systems, implicit time integration schemes require the solution of a non-linear algebraic system at each time-step. Newton's method, along with a preferred linear solver, is typically employed to solve the system. For simplicity, the analysis provided in this section is carried out for the implicit Euler time integration scheme along with Newton's method to solve the non-linear system. 
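For concreteness, the explicit update of Algorithm~\ref{alg:alg_apg_exp} can also be written compactly in array form. The following minimal sketch assumes only a full-order right-hand side callable \texttt{R} and an orthonormal trial basis \texttt{V} (corresponding to $\tilde{\mathbf{V}}$), and uses the finite-difference form of the Jacobian action from step 5(a); it is an illustration of the steps above, not a reference implementation.\n\begin{verbatim}\nimport numpy as np\n\ndef apg_explicit_euler_step(a, V, R, tau, dt, eps=1e-5):\n    # a : (K,) generalized coordinates at time n;  V : (N, K) orthonormal basis\n    # R : callable returning the (N,) full-order right-hand side\n    u = V @ a                               # step 1: state from coordinates\n    r = R(u)                                # step 2: right-hand side\n    r_coarse = V @ (V.T @ r)                # step 3: coarse-scale projection\n    r_prime = r - r_coarse                  # step 4: orthogonal projection\n    Jr = (R(u + eps * r_prime) - r) / eps   # step 5(a): Jacobian action\n    rhs = r + tau * Jr                      # step 6: modified right-hand side\n    return a + dt * (V.T @ rhs)             # steps 7-8: project and update\n\end{verbatim}\nThe implicit algorithms described next reuse the same building blocks within each Newton iteration.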
Before proceeding, the full-order residual, Galerkin residual, and APG residual at time-step $(n + 1)$ are denoted as,\n\begin{align*}\n &\mathbf{r}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}^{n+1}) = \tilde{\mathbf{V}} \tilde{\mathbf{a}}^{n+1} - \tilde{\mathbf{V}} \tilde{\mathbf{a}}^n - \Delta t \mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}^{n+1}),\\\n&\mathbf{r}_G(\tilde{\mathbf{a}}^{n+1}) = \tilde{\mathbf{a}}^{n+1} - \tilde{\mathbf{a}}^n - \Delta t \tilde{\mathbf{V}}^T \mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}^{n+1}),\\\n&\mathbf{r}_{A}(\tilde{\mathbf{a}}^{n+1}) = \tilde{\mathbf{a}}^{n+1} - \tilde{\mathbf{a}}^n - \Delta t \tilde{\mathbf{V}}^T\bigg[\mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}^{n+1}) + \tau \mathbf{J}[\tilde{\mathbf{u}}]{\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}^{n+1} ) \bigg].\n\end{align*}\nAs the future state, $\tilde{\mathbf{a}}^{n+1}$, is unknown, we denote an intermediate state, $\tilde{\mathbf{a}}_k$, that is updated after every Newton iteration until some convergence criterion is met.\nNewton's method is defined by the iteration,\n\begin{equation}\label{eq:newton_linear}\n \frac{\partial \mathbf{r}(\tilde{\mathbf{a}}_k)}{\partial \tilde{\mathbf{a}}_k} \big[ \tilde{\mathbf{a}}_{k+1} - \tilde{\mathbf{a}}_k\big] = - \mathbf{r}(\tilde{\mathbf{a}}_k).\n\end{equation}\nNewton's method solves Eq.~\ref{eq:newton_linear} for the change in the state, $\tilde{\mathbf{a}}_{k+1} - \tilde{\mathbf{a}}_k$, for $k = 1,2,\hdots$, until the residual converges to a sufficiently small value. For a ROM, both the assembly and solution of this linear system are the dominant cost of an implicit method.\n\nTwo methods are considered for the solution to the non-linear algebraic system arising from implicit time discretizations of the G and APG ROMs: Newton's method with direct Gaussian elimination and Jacobian-Free Newton-Krylov GMRES. The Gauss-Newton method with Gaussian elimination is considered for the solution to the least-squares problem arising in LSPG.\n\nAlgorithm~\ref{alg:alg_apg_imp} provides the step-by-step procedure for performing an implicit Euler update to the Adjoint Petrov--Galerkin ROM with the use of Newton's method and Gaussian elimination. Table~\ref{tab:alg_apg_imp} provides the approximate floating-point operations for the steps reported in this algorithm. Analogous results for the Galerkin and LSPG ROMs are reported in Algorithms~\ref{alg:alg_g_imp} and~\ref{alg:alg_LSPG}, and Tables~\ref{tab:alg_g_imp} and~\ref{tab:alg_LSPG} in Appendix~\ref{appendix:algorithms}.\nIn the limit that $K \ll N$ and $\omega \gg 1$, the total FLOP counts reported show that APG is twice as expensive as both the LSPG and Galerkin ROMs. It is observed that the dominant cost for all three methods lies in the computation of the low-dimensional residual Jacobian. Computation of the low-dimensional Jacobian requires $K$ evaluations of the unsteady residual. Depending on the values of $\omega$, $N$, and $K$, this step can account for over $50\%$ of the CPU time $(K \ll N)$.\footnote{It is noted that the low-dimensional Jacobian can be computed in parallel.}\n\nTo avoid the cost of computing the low-dimensional Jacobian required in the linear solve at each Newton step, the Galerkin and APG ROMs can make use of Jacobian-Free Newton-Krylov (JFNK) methods to solve the linear system, as opposed to direct methods such as Gaussian elimination.
JFNK methods are iterative methods that allow one to circumvent the expense associated with computing the full low-dimensional Jacobian. Instead, JFNK methods only compute the \textit{action} of the Jacobian on a vector at each iteration of the linear solve. This can drastically decrease the cost of the implicit solve. JFNK utilizing the Generalized Minimal Residual (GMRES) method~\\cite{gmres}, for example, is guaranteed to converge to the solution of the $K$-dimensional linear system in at most $K$ iterations. By contrast, it takes $K$ residual evaluations just to form the Jacobian required for direct methods.\n\nThe LSPG method is formulated as a non-linear least-squares problem. The use of Jacobian-free methods to solve non-linear least-squares problems is significantly more challenging. The principal issue encountered in attempting to use Jacobian-free methods for such applications is that one requires the action of the \textit{transpose} of the residual Jacobian on a vector. This quantity cannot be computed via a standard finite difference approximation or linearization. It is only recently that true Jacobian-free methods have been utilized for solving non-linear least-squares problems. In Ref.~\\cite{nlls_JacobianFree}, for example, automatic differentiation is utilized to compute the action of the transposed Jacobian on a vector. Due to the challenges associated with Jacobian-free methods for non-linear least-squares problems, this approach is not considered here as a solution technique for LSPG.\n\nAlgorithm~\ref{alg:alg_apg_jfnk} and Table~\ref{tab:alg_apg_jfnk} report the algorithm and FLOPs required for an implicit Euler update to APG using JFNK GMRES. The term $\eta \le K$ is the number of iterations needed for convergence of the GMRES solver at each Newton iteration. For brevity, the analogous update for the Galerkin ROM is not presented. Figure~\ref{fig:implicitcost} shows the ratio of the cost of the various implicit ROMs as compared to the Galerkin ROM solved with Gaussian elimination. The standard LSPG method is seen to be approximately the same cost as Galerkin, while APG is seen to be approximately $2$x the cost of Galerkin. The success of the JFNK methods depends on the number of GMRES iterations required for convergence. If $\eta = K$, which is the maximum number of iterations required for GMRES, the cost of JFNK methods is seen to be the same as that of their direct-solve counterparts. For cases where JFNK converges with $\eta < K$, the iterative methods outperform their direct-solve counterparts.\n\nThe analysis presented here shows that, for a given basis dimension, the Adjoint Petrov--Galerkin ROM is approximately twice the cost of the Galerkin ROM for both implicit and explicit solvers. In the implicit case, the APG ROM utilizing a direct linear solver is approximately 2x the cost of LSPG. It was highlighted, however, that APG can be solved via JFNK methods. For cases where one either does not have access to the full Jacobian, or the full Jacobian cannot be stored, JFNK methods can significantly decrease the ROM cost. The use of JFNK methods within the LSPG approach is more challenging due to the presence of the transpose of the residual Jacobian.
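To illustrate the Jacobian-free solve, the following minimal sketch forms the action of the reduced residual Jacobian by finite differences of the residual itself and passes it to GMRES as a linear operator; the residual callable \texttt{res} (e.g. the implicit Euler APG residual $\mathbf{r}_A$ defined above) and the perturbation size are assumptions for illustration only.\n\begin{verbatim}\nfrom scipy.sparse.linalg import LinearOperator, gmres\n\ndef jfnk_newton_update(res, a_k, eps=1e-5):\n    # res : callable returning the (K,) reduced residual, e.g. r_A\n    # a_k : (K,) current Newton iterate (numpy array)\n    r_k = res(a_k)\n    K = a_k.size\n\n    def jac_vec(v):\n        # finite-difference action of the residual Jacobian on v;\n        # the Jacobian itself is never assembled\n        return (res(a_k + eps * v) - r_k) / eps\n\n    J_op = LinearOperator((K, K), matvec=jac_vec)\n    delta_a, info = gmres(J_op, -r_k)  # full GMRES: at most K iterations\n    return a_k + delta_a\n\end{verbatim}\nEach GMRES iteration in this sketch costs one additional residual evaluation, which is the source of the $\eta$-dependent terms in Table~\ref{tab:alg_apg_jfnk}.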
Lastly it is noted that, although hyper-reduction can decrease the cost of a residual evaluation, it does not entirely alleviate the cost of forming the Jacobian.\n\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.65\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{implicit_cost.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{Estimates of the G, APG, and LSPG reduced-order models for an implicit Euler update. This plot is generated for values of $N=1000$ and $\\omega = 50$, and $\\eta = \\{K,K\/2,K\/5\\},$ where $\\eta$ is the total number of iterations required for the GMRES solver at each Newton step.}\n\\label{fig:implicitcost}\n\\end{figure}\n\n\n\n\n\n\\begin{comment} \n\nLSPG, on the other hand, cannot leverage these methods as the full residual Jacobian must be formed in computing the test basis. Unless hyper-reduction methods are implemented, this fact severely limits the practical application LSPG to problems with many degrees of freedom.\n\nThis linear system solve is generally the dominant cost of the implicit method, and many methods are available for computing the solution are available. In calculating the computational cost for this step, we use Gaussian elimination, which is the simplest but most expensive method (and is also potentially unstable).\n\nAlgorithms~\\ref{alg:alg_g_imp},~\\ref{alg:alg_apg_imp}, and~\\ref{alg:alg_LSPG} provide step-by-step procedures for performing an implicit Euler update to the Galerkin, Adjoint Petrov--Galerkin, and Least-Squares Petrov--Galerkin ROMs, respectively. Tables~\\ref{tab:alg_g_imp},~\\ref{tab:alg_apg_imp}, and~\\ref{tab:alg_LSPG} provide the approximate floating-point operations for the steps reported these algorithms. \n\nThe analysis reveals a couple key points in comparing the three ROM methods. Again, we see that in the case of $N \\gg K$ and $\\omega \\gg 1$, the cost of the implicit APG ROM is roughly double the cost of implicit Galerkin ROM. Additionally, we see that the cost of LSPG suffers from the fact that the original problem is posed as a minimization of the full-order residual. This results in the evaluation of an $N\\times K$ residual Jacobian. In the case of $N \\gg K$, as is generally the case for most ROMs, this cost far outpaces the extra cost incurred by computing the modified right-hand side of the APG method. In general, note that the computational cost of all methods is dominated by computing the residual Jacobian. In fact, for $N \\gg K$, it is likely that computing this Jacobian is even more expensive than computing $\\Delta \\tilde{\\mathbf{a}}_k$ via Gaussian elimination. This leads to another critical point, as we scrutinize what linear system is solved to compute the Newton update $\\Delta \\tilde{\\mathbf{a}}$ in each method. For a residual Jacobian $\\mathbf{J}_k$, the linear system for the Galerkin and APG methods takes the form,\n\\begin{equation}\n \\mathbf{J}_k \\Delta \\tilde{\\mathbf{a}} = - \\mathbf{r}(\\tilde{\\mathbf{a}}_k),\n\\end{equation}\nwhile the LSPG linear system takes the form,\n\\begin{equation}\n \\big[ \\tilde{\\mathbf{V}}^T \\mathbf{J}_k^T \\mathbf{J}_k \\tilde{\\mathbf{V}}] \\Delta \\tilde{\\mathbf{a}}_k = - \\tilde{\\mathbf{V}}^T \\mathbf{J}_k^T \\mathbf{r}(\\tilde{\\mathbf{u}}_k)\n\\end{equation}\nThat is, the APG\/Galerkin methods result in a strict Newton's method form. 
Instead of first computing the residual Jacobian and then solving a generic linear system, indirect Jacobian-free Newton-Krylov methods may be used to solve the system without the explicit computation of the residual Jacobian. The popular Generalized Minimal Residual (GMRES) method, for example, is guaranteed to converge to the solution $\tilde{\mathbf{a}}_k$ in $K$ iterations, although the residual Jacobian is never actually generated. LSPG, on the other hand, cannot leverage these methods as the full residual Jacobian must be formed in computing the test basis. Unless hyper-reduction methods are implemented, this fact severely limits the practical application of LSPG to problems with many degrees of freedom.\n\end{comment} \n\n\n\n\n\n\n\begin{algorithm}\n\caption{Algorithm for an implicit Euler update for the APG ROM using Newton's Method with Gaussian Elimination}\n\label{alg:alg_apg_imp}\nInput: $\tilde{\mathbf{a}}^n$, residual tolerance $\xi$ \;\n\newline\nOutput: $\tilde{\mathbf{a}}^{n+1}$\;\n\newline\nSteps:\n\begin{enumerate}\n\item Set initial guess, $\tilde{\mathbf{a}}_k$\n\item Loop while $\mathbf{r}^k > \xi$\n\begin{enumerate}\n \item Compute the state from the generalized coordinates, $\tilde{\mathbf{u}}_k = \tilde{\mathbf{V}} \tilde{\mathbf{a}}_k$\n \item Compute the right-hand side from the full state, $\mathbf{R}(\tilde{\mathbf{u}}_k)$\n \item Compute the projection of the right-hand side, $\tilde{\Pi} \mathbf{R}(\tilde{\mathbf{u}}_k) = \tilde{\mathbf{V}} \tilde{\mathbf{V}}^T \mathbf{R}(\tilde{\mathbf{u}}_k)$\n \item Compute the orthogonal projection of the right-hand side, ${\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{u}}_k) = \mathbf{R}(\tilde{\mathbf{u}}_k) - \tilde{\Pi} \mathbf{R}(\tilde{\mathbf{u}}_k)$\n \item Compute the action of the right-hand side Jacobian on ${\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{u}}_k)$, as in Alg.~\ref{alg:alg_apg_exp}.\n \item Compute the modified right-hand side, $ \mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}_k) + \tau \mathbf{J}[\tilde{\mathbf{u}}]{\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}_k )$\n \item Project the modified right-hand side, $\tilde{\mathbf{V}}^T\Big[\mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}_k) + \tau \mathbf{J}[\tilde{\mathbf{u}}]{\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}_k ) \Big]$\n \item Compute the APG residual, $\mathbf{r}_A(\tilde{\mathbf{a}}_k) = \tilde{\mathbf{a}}_k - \tilde{\mathbf{a}}^n - \Delta t \tilde{\mathbf{V}}^T\Big[\mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}_k) + \tau \mathbf{J}[\tilde{\mathbf{u}}]{\Pi^{\prime}} \mathbf{R}(\tilde{\mathbf{V}} \tilde{\mathbf{a}}_k ) \Big]$\n \item Compute the residual Jacobian, $\frac{\partial \mathbf{r}(\tilde{\mathbf{a}}_k)}{\partial \tilde{\mathbf{a}}_k}$\n \item Solve the linear system via Gaussian Elimination: $\frac{\partial \mathbf{r}(\tilde{\mathbf{a}}_k)}{\partial \tilde{\mathbf{a}}_k} \Delta \tilde{\mathbf{a}} = - \mathbf{r}(\tilde{\mathbf{a}}_k)$\n \item Update the state: $\tilde{\mathbf{a}}_{k+1} = \tilde{\mathbf{a}}_k + \Delta \tilde{\mathbf{a}}$\n \item $k = k + 1$\n\end{enumerate}\n\item Set final state, $\tilde{\mathbf{a}}^{n+1} = \tilde{\mathbf{a}}_k$\n\end{enumerate}\n\end{algorithm}\n\n\begin{table}[]\n\centering\n\begin{tabular}{p{7cm} p{8cm}}\n\hline\nStep in Algorithm~\ref{alg:alg_apg_imp}& Approximate FLOPs 
\\\\\n\\hline\n2a & $2 N K - N $ \\\\\n2b & $ \\omega N $ \\\\\n2c & $4 N K - N - K$ \\\\\n2d & $ N $ \\\\\n2e & $ (\\omega + 4) N $ \\\\\n2f & $ 2N $ \\\\\n2g & $ 2NK - K $ \\\\\n2h & $ 3K $ \\\\\n2i & $ (2\\omega + 5) NK + K^2 + 8NK^2 $ \\\\\n2j & $ K^3 $ \\\\\n2k & $ K $ \\\\\n\\hline\nTotal & $ (2\\omega + 5)N + 2K + (2\\omega + 13) NK + K^2 + 8NK^2 + K^3 $ \\\\\nGalerkin ROM FLOP count & $ (\\omega - 1)N + 3K + (\\omega + 3)NK + 2K^2 + 4NK^2 + K^3 $ \\\\\nLSPG ROM FLOP count& $ (\\omega + 2)N + (\\omega + 6) NK - K^2 + 4NK^2 + K^3 $ \\\\\n\n\\end{tabular}\n\\caption{Approximate floating-point operations for one Newton iteration for the implicit Euler update to the Adjoint Petrov--Galerkin method reported in Algorithm~\\ref{alg:alg_apg_imp}. FLOP counts for the Galerkin ROM and LSPG ROM with an implicit Euler update are additionally reported for comparison. A full description of the Galerkin and LSPG ROM updates are provided in Appendix~\\ref{appendix:algorithms}}\n\\label{tab:alg_apg_imp}\n\\end{table}\n\n\n\n\n\n\n\n\n\\begin{algorithm}\n\\caption{Algorithm for an implicit Euler update for the APG ROM using JFNK GMRES}\n\\label{alg:alg_apg_jfnk}\nInput: $\\tilde{\\mathbf{a}}^n$, residual tolerance $\\xi$ \\;\n\\newline\nOutput: $\\tilde{\\mathbf{a}}^{n+1}$\\;\n\\newline\nSteps:\n\\begin{enumerate}\n\\item Set initial guess, $\\tilde{\\mathbf{a}}_k$\n\\item Loop while $\\mathbf{r}^k > \\xi$\n\\begin{enumerate}\n\n \\refstepcounter{enumii}\\item[$(a\\text{--}h)$] Compute steps 2a through 2h in Algorithm~\\ref{alg:alg_apg_imp}\n \\setcounter{enumii}{8}\n \\item Solve the linear system, $\\frac{\\partial \\mathbf{r}(\\tilde{\\mathbf{a}}_k)}{\\partial \\tilde{\\mathbf{a}}_k} \\Delta \\tilde{\\mathbf{a}}_k = \\mathbf{r}_A(\\tilde{\\mathbf{a}}_k)$ using Jacobian-Free GMRES\n \\item Update the state: $\\tilde{\\mathbf{a}}_{k+1} = \\tilde{\\mathbf{a}}_k + \\Delta \\tilde{\\mathbf{a}}$\n \\item $k = k + 1$\n\\end{enumerate}\n\\item Set final state, $\\tilde{\\mathbf{a}}^{n+1} = \\tilde{\\mathbf{a}}_k$\n\\end{enumerate}\n\\end{algorithm}\n\n\n\\begin{table}[]\n\\centering\n\\begin{tabular}{p{6cm} p{9cm}}\n\\hline\nStep in Algorithm~\\ref{alg:alg_apg_jfnk}& Approximate FLOPs \\\\\n\\hline\n2a & $2 N K - N $ \\\\\n2b & $ \\omega N $ \\\\\n2c & $4 N K - N - K$ \\\\\n2d & $ N $ \\\\\n2e & $ (\\omega + 4) N $ \\\\\n2f & $ 2N $ \\\\\n2g & $ 2NK - K $ \\\\\n2h & $ 3K $ \\\\\n2i & $ (2\\omega + 5) N \\eta + K \\eta + 8N K \\eta + \\eta^2 K $\\\\\n2k & $ K $ \\\\\n\\hline\nTotal & $ \\big((2\\eta + 2) \\omega + 5\\eta + 5) \\big)N + (\\eta^2 + \\eta + 2)K + (8\\eta + 8) NK $ \\\\\n\\end{tabular}\n\\caption{Approximate floating-point operations for one Newton iteration for the implicit Euler update to the Adjoint Petrov--Galerkin method using Jacobian-Free GMRES reported in Algorithm~\\ref{alg:alg_apg_jfnk}.}\n\\label{tab:alg_apg_jfnk}\n\\end{table}\n\n\n\n\\begin{comment}\nThe Adjoint Petrov--Galerkin method is straightforward to implement in a Jacobian-free fashion and requires minimal modifications to a standard Galerkin ROM. 
The Jacobian-free implementation of the additional terms in the Adjoint Petrov--Galerkin method is as follows:\n\\begin{enumerate}\n \\item Compute the coarse-scale right-hand side, $\\mathbf{R}(\\tilde{\\mathbf{u}})$.\n \\item Compute the orthogonal projection of the right-hand side, ${\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}}) = \\mathbf{R}(\\tilde{\\mathbf{u}}) - \\tilde{\\Pi} \\mathbf{R}(\\tilde{\\mathbf{u}})$.\n \\item Compute the action of the Jacobian on ${\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}})$. This can be done without explicitly forming the Jacobian using either of the two following strategies:\n \\begin{enumerate}\n \\item Finite difference approximation:\n \\begin{equation*}\n \\mathbf{J}[\\tilde{\\mathbf{u}}] {\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}}) = \\frac{1}{\\epsilon} \\Big[ \\mathbf{R}\\big(\\tilde{\\mathbf{u}} + \\epsilon {\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}}) \\big) - \\mathbf{R}(\\tilde{\\mathbf{u}} ) \\Big] + \\mathcal{O}(\\epsilon^2),\n \\end{equation*}\n where $\\epsilon$ is a small constant value, usually $\\sim \\mathcal{O}(10^{-5})$.\n \\item Exact linearization:\n \\begin{equation*}\n \\mathbf{J}[\\tilde{\\mathbf{u}}] {\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}}) = \\mathbf{R'}[\\tilde{\\mathbf{u}}]({\\Pi^{\\prime}} \\mathbf{R}(\\tilde{\\mathbf{u}})),\n \\end{equation*}\n where $\\mathbf{R}'[\\tilde{\\mathbf{u}}]$ is right-hand side operator linearized about $\\tilde{\\mathbf{u}}$.\n \\end{enumerate}\n \\item Multiply by $\\tau \\tilde{\\mathbf{V}}^T$ to compute the subgrid-scale term.\n\\end{enumerate}\nThe Adjoint Petrov--Galerkin method is observed to require an extra right-hand side (or linearized right-hand side) evaluation. Assuming the right-hand side evaluation is the dominant cost of the ROM, the Adjoint Petrov--Galerkin method is roughly twice as expensive as the standard Galerkin ROM.\n\\end{comment}\n\n\n\\section{Numerical Examples}\\label{sec:numerical}\n\nApplications of the APG method are presented for ROMs of compressible flows: the 1D Sod shock tube problem and 2D viscous flow over a cylinder. In both problems, the test bases are chosen via POD. The shock tube problem highlights the improved stability and accuracy of the APG method over the standard Galerkin ROM, as well as improved performance over the LSPG method. The impact of the choice of $\\tau$ (APG) and $\\Delta t$ (LSPG) time-scales are also explored. The cylinder flow experiment examines a more complex problem and assesses the predictive capability of APG in comparison with Galerkin and LSPG ROMs. The effect of the choice of $\\tau$ on simulation accuracy is further explored. \n\n\\subsection{Example 1: Sod Shock Tube with reflection}\nThe first case considered is the Sod shock tube, described in more detail in \\cite{sod}. The experiment simulates the instantaneous bursting of a diaphragm separating a closed chamber of high-density, high-pressure gas from a closed chamber of low-density, low pressure gas. This generates a strong shock, a contact discontinuity, and an expansion wave, which reflect off the shock tube walls at either end and interact with each other in complex ways. 
The system is described by the one-dimensional compressible Euler equations with the initial conditions,\n\\begin{comment}\n\\begin{equation}\\label{eq:euler_1D}\n \\frac{\\partial {\\mathbf{u}}}{\\partial t} + \\frac{\\partial \\mathbf{f}}{\\partial x} = 0, \\quad\n {\\mathbf{u}} = \n \\begin{Bmatrix} \\rho \\\\ \\rho u \\\\ \\rho E \\end{Bmatrix}, \\quad \n \\mathbf{f} = \\begin{Bmatrix} \\rho u \\\\ \\rho u^2 + p \\\\ u(\\rho E + p) \\end{Bmatrix}.\n\\end{equation}\nThe problem setup is given by the initial conditions,\n\\end{comment} \n\n\\begin{equation*}\n\\rho = \n\\begin{cases} \n 1 & x\\leq 0.5 \\\\\n 0.125 & x > 0.5 \n \\end{cases},\n\\qquad\np = \n\\begin{cases} \n 1 & x\\leq 0.5 \\\\\n 0.1 & x > 0.5 \n \\end{cases},\n\\qquad\nu = \n\\begin{cases} \n 0 & x\\leq 0.5 \\\\\n 0 & x > 0.5 \n \\end{cases},\n\\end{equation*} \nwith $x \\in [0,1]$. Impermeable wall boundary conditions are enforced at x = 0 and x = 1.\n\n\\subsubsection{Full-Order Model}\nThe 1D compressible Euler equations are solved using a finite volume method and explicit time integration. The domain is partitioned into 1,000 cells of uniform width. The finite volume method uses the first-order Roe flux~\\cite{roescheme} at the cell interfaces. A strong stability-preserving RK3 scheme~\\cite{SSP_RK3} is used for time integration. The solution is evolved for $t \\in [0.0,1.0]$ with a time-step of $\\Delta t = 0.0005$, ensuring CFL$\\leq 0.75$ for the duration of the simulation. The solution is saved every other time-step, resulting in 1,000 solutions snapshots for each conserved variable.\n\n\\subsubsection{Solution of the Reduced-Order Model}\nUsing the FOM data snapshots, trial bases for the ROMs are constructed via the proper orthogonal decomposition (POD) approach. A separate basis is constructed for each conserved variable. The complete basis construction procedure is detailed in Appendix~\\ref{appendix:basisconstruction}. Once a coarse-scale trial basis $\\tilde{\\mathbf{V}}$ is built, a variety of ROMs are evaluated according to the following formulations:\n\n\\begin{enumerate}\n\\item Galerkin ROM:\n\\begin{equation*}\\label{eq:galerkin_ROM}\n\\tilde{\\mathbf{V}}^T \\bigg( \\frac{d \\tilde{\\mathbf{u}}}{dt} - \\mathbf{R}(\\tilde{\\mathbf{u}}) \\bigg) = 0, \\qquad t \\in [0,1].\n\\end{equation*}\n\n\\item Adjoint Petrov--Galerkin ROM:\n\\begin{equation*}\\label{eq:MZPG_ROM}\n\\tilde{\\mathbf{V}}^T\\bigg(\\mathbf{I} + \\tau \\mathbf{J}[\\tilde{\\mathbf{u}}] {\\Pi^{\\prime}} \\bigg) \\bigg( \\frac{d \\tilde{\\mathbf{u}}}{dt} - \\mathbf{R}(\\tilde{\\mathbf{u}}) \\bigg) = 0 , \\qquad t \\in [0,1].\n\\end{equation*}\n\\textit{Remark: The Adjoint Petrov--Galerkin ROM requires specification of $\\tau$.}\n\n\\item Least-Squares Petrov--Galerkin ROM (Implicit Euler Time Integration):\n\\begin{equation*}\n{\\mathbf{u}}^n = \\underset{\\mathbf{y} \\in \\text{Range}(\\tilde{\\mathbf{V}}) }{\\text{arg min}}\\norm{ \\frac{\\mathbf{y} - \\tilde{\\mathbf{u}}^{n-1}}{\\Delta t} - \\mathbf{R}(\\mathbf{y}) }^2, \\qquad \\text{for } n = 1,2,\\hdots , \\text{ceil}\\big(\\frac{1}{\\Delta t}\\big).\n\\end{equation*}\n\\textit{Remark: The LSPG approach is strictly coupled to the time integration scheme and time-step.}\n\n\n\\end{enumerate}\n\n\\subsubsection{Numerical Results}\nThe first case considered uses $50$ basis vectors each for the conserved variables $\\rho, \\rho u,$ and $\\rho E$. The total dimension of the reduced model is thus $K = 150$. Roughly 99.9--99.99\\% of the POD energy is captured by this 150-mode basis. 
In fact, 99\\% of the energy is contained in the first 5-12 modes of each conserved variable.\n\nThe Adjoint Petrov--Galerkin ROM requires specification of the memory length $\\tau$. Similarly, LSPG requires the selection of an appropriate time-step. The sensitivity of both methods to this selection will be discussed later in this section.\nThe simulation parameters are provided in Table~\\ref{tab:sod_tab1}.\n\nDensity profiles at $t = 0.25$ and $t = 1.0$ for explicit Galerkin and APG ROMs, along with an implicit LSPG ROM, are displayed in Fig.~\\ref{fig:sod_density}. All three ROMs are capable of reproducing the shock tube density profile in Fig.~\\ref{fig:sod_density_0p25}; a normal shock propagates to the right and is followed closely behind by a contact discontinuity, while an expansion wave propagates to the left. All three methods exhibit oscillations at $x = 0.5$, the location of the imaginary burst diaphragm, and near the shock at $x = 0.95$. At $t = 1.0$, when the shock has reflected from the right wall and interacted with the contact discontinuity, much stronger oscillations are present, particularly near the reflected shock at $x = 0.45$. These oscillations are reminiscent of Gibbs phenomenon, and are an indicator of the inability to accurately reconstruct sharp gradients. The Galerkin ROM exhibits the largest oscillations of the ROMs considered, while LSPG exhibits the smallest.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{ l l l l l}\\hline\n ROM Type & Time Scheme & $\\Delta t$ & $\\tau$ & $\\int ||e||_2 dt$\\\\ \\hline\n Galerkin & SSP-RK3 & 0.0005 & N\/A & 1.5752 \\\\\n Galerkin & Imp. Euler & 0.0005 & N\/A & 1.0344 \\\\\n APG & SSP-RK3 & 0.0005 & 0.00043 & 1.0637 \\\\\n APG & Imp. Euler & 0.0005 & 0.00043 & 0.8057 \\\\\n APG & Imp. Euler & 0.001 & 0.00043 & 0.7983 \\\\\n LSPG & Imp. Euler & 0.0005 & N\/A & 1.1668 \\\\\n LSPG & Imp. Euler & 0.001 & N\/A & 1.4917 \\\\ \\hline\n\\end{tabular}\n\\caption{Computational details for Sod shock tube ROM cases, $K = 150$}\n\\label{tab:sod_tab1}\n\\end{table}\n\n\\begin{comment}\n\\begin{figure}\n \\centering\n \\begin{minipage}{0.49\\linewidth}\n \\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/PODspectrumZoomed.png}\n \\caption{Sod shock tube POD energy spectrum}\n \\label{fig:pod_spectrum}\n \\end{minipage}\\hfill\n \\begin{minipage}{0.49\\linewidth}\n \\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/err_consVars_k50_apgExplicit.png}\n \\caption{Conserved variable error profiles, APG w\/ SSP-RK3 time integration, $K = 150$, $\\Delta t = 0.0005$}\n \\label{fig:sod_error_consVars}\n \\end{minipage}\\hfill\n\\end{figure}\n\\end{comment}\n\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/rho_k50_dt0p0005_t0p25__explicit.png}\n\\caption{$t=0.25$.}\n\\label{fig:sod_density_0p25}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/rho_k50_dt0p0005_t1p00_explicit.png}\n\\caption{$t=1$.}\n\\label{fig:sod_density_1p0}\n\\end{subfigure}\n\\end{center}\n\\caption{Density profiles for the Sod shock tube with $K = 150$, $\\Delta t = 0.0005$.}\n\\label{fig:sod_density}\n\\end{figure}\n\n\nFigure~\\ref{fig:sod_error} shows the evolution of the error for all of the ROMs listed in Table~\\ref{tab:sod_tab1}. 
The $L^2$-norm of the error is computed as,\n\\begin{equation*}\n||e||_2 = \\sqrt{ \\sum_{i=1}^{1000} \\Big[(\\tilde{\\rho}_{i,ROM} - \\tilde{\\rho}_{i,FOM} )^2 + (\\widetilde{\\rho u}_{i,ROM} - \\widetilde{\\rho u}_{i,FOM} )^2 + (\\widetilde{\\rho E}_{i,ROM} - \\widetilde{\\rho E}_{i,FOM})^2 \\Big] }.\n\\end{equation*}\n Here, the subscript $i$ denotes each finite volume cell. The FOM values used for error calculations are projections of the FOM data onto $\\tilde{\\MC{V}}$, e.g. $\\tilde{\\rho}_{FOM} = \\tilde{\\Pi} \\rho_{FOM}$. This error measure provides a fair upper bound on the accuracy of the ROMs, as the quality of the ROM is generally dictated by the richness of the trial basis and the projection of the FOM data is the maximum accuracy that can be reasonably hoped for.\n\nIn Figure~\\ref{fig:sod_error}, it is seen that the APG ROM exhibits improved accuracy over the Galerkin ROM. The LSPG ROM for $\\Delta t = 0.0005$ performs slightly better than the explicit Galerkin ROM, and worse than the implicit Galerkin ROM. Increasing the time-step to $\\Delta t = 0.001$ results in a significant increase in error for the LSPG ROM. This is due to the fact that the performance of LSPG is influenced by the time-step. For a trial basis containing much of the residual POD energy, LSPG will generally require a very small time-step to improve accuracy; this sensitivity will be explored later. Lastly, it is observed that the APG ROM is \\textit{not} significantly affected by the time-step. The APG ROM with $\\Delta t = 0.001$ shows moderately increased error prior to $t = 0.3$ and similar error afterwards when compared against the $\\Delta t = 0.0005$ APG ROM case. \n\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/err_k50_explicit.png}\n\\caption{Explicit Galerkin\/APG, implicit LSPG, $\\Delta t = 0.0005$}\n\\label{fig:sod_err_explicit}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/err_k50_implicit.png}\n\\caption{All implicit, various $\\Delta t$}\n\\label{fig:sod_err_implicit}\n\\end{subfigure}\n\\end{center}\n\\caption{$L^2$-norm error profiles for the Sod shock tube with $150$ basis vectors.}\n\\label{fig:sod_error}\n\\end{figure}\n\nFigure~\\ref{fig:sod_mode_study} studies the effect of the number of modes retained in the trial basis on the stability and accuracy, over the range $K = 60,75,90,\\ldots,180$. Missing data points indicate an unstable solution. Values of $\\tau$ for the APG ROMs are again selected by user choice. The most striking feature of these plots is the fact that even though the explicit Galerkin ROM is unstable for $K \\leq 135$ and the implicit Galerkin ROM is unstable for $K \\leq 75$, the APG and LSPG ROMs are stable for all cases. Furthermore, the APG and LSPG ROMs are capable of achieving stability with a time-step twice as large as that of the Galerkin ROM. The cost of the APG and LSPG ROMs are effectively halved, but they are still able to stabilize the simulation. Interestingly, the Galerkin and APG ROMs both exhibit abrupt peaks in error at $K = 120$, while the LSPG ROMs do not. The exact cause of this is unknown, but displays that a monotonic decrease in error with enrichment of the trial space is not guaranteed. \n\nSeveral interesting comparisons between APG and LSPG arise from Figure~\\ref{fig:sod_mode_study}. 
First, with the exception of the $K = 120$ case, Fig.~\\ref{fig:sod_err_explicit} shows that the APG ROM with explicit time integration exhibits accuracy comparable to that of the LSPG ROM with implicit time integration. As can be seen in comparing Tables~\\ref{tab:alg_apg_exp} and~\\ref{tab:alg_LSPG}, the cost of APG with explicit time integration is significantly lower than the cost of LSPG. This is an attractive feature of APG, as it is able to use inexpensive explicit time integration while LSPG is restricted to implicit methods. Additionally, we draw attention to the poor performance of LSPG at high $K$ for a moderate time-step in Fig.~\\ref{fig:sod_modeSens_implicit}. Increasing the time-step to $\\Delta t = 0.001$ to decrease simulation cost only exacerbates this issue; as the trial space is enriched, LSPG requires a smaller time-step to yield accurate results. If we wish to improve the LSPG solution for $K = 150$, we must decrease the time-step below that of the FOM. The accuracy of the APG ROM does not change when the time-step is doubled from $\\Delta t = 0.0005$ to $\\Delta t = 0.001$. This halves the cost of the APG ROM with no significant drawbacks. \n\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/modeSensExplicit.png}\n\\caption{Explicit Galerkin\/APG, implicit LSPG, fixed $\\Delta t$}\n\\label{fig:sod_modeSens_explicit}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/modeSensImplicit.png}\n\\caption{All implicit, various $\\Delta t$}\n\\label{fig:sod_modeSens_implicit}\n\\end{subfigure}\n\\end{center}\n\\caption{Integrated error mode sensitivity study for the Sod shock tube.}\n\\label{fig:sod_mode_study}\n\\end{figure}\n\n\\subsubsection{Optimal Memory Length Investigations}\nAs mentioned previously, the success of LSPG is tied to the physical time-step and the time integration scheme, while the parameter $\\tau$ in the APG method may be chosen independently from these factors. In minimizing ROM error, finding an optimal value of $\\tau$ for the APG ROM may permit the choice of a much larger time-step than the optimal LSPG time-step. Further, the APG method may be applied with explicit time integration schemes, which are generally much less expensive than the implicit methods which LSPG is restricted to. To demonstrate this, the APG ROM and LSPG ROM with $K=150$ are simulated for a variety of time scales ($\\tau$ for APG and $\\Delta t$ for LSPG). \n\nFigure~\\ref{fig:sod_memlength_sensitivity} shows the integrated error of the ROMs versus the relevant time scale. For this case, the optimal value of $\\Delta t$ for LSPG is less than $0.0001$ and is not shown. The optimal value of $\\tau$ is not greatly affected by the choice of time integration scheme (implicit or explicit) or time-step. Furthermore, because $\\tau$ can be chosen independently from $\\Delta t$ for APG, the APG ROM can produce low error at a much larger time-step ($\\Delta t = 0.001$) than the optimal time-step for the LSPG ROM. This highlights the fact that the ``optimal\" LSPG ROM may be computationally expensive due to a small time-step, whereas the ``optimal\" APG ROM requires only the specification of $\\tau$ and can use, potentially, much larger time-steps than LSPG. 
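For concreteness, the time-scale sweep described above amounts to evaluating the integrated error for a list of candidate values of $\\tau$ (or $\\Delta t$ for LSPG) and retaining the minimizer. A minimal NumPy sketch of the error accumulation is given below; the snapshot arrays, their shapes, and the snapshot spacing are illustrative stand-ins rather than part of the actual implementation.

\\begin{verbatim}
import numpy as np

def l2_error(q_rom, q_fom):
    # L2 error at one time instant; q_* has shape (3, n_cells) and holds the
    # projected conserved variables (rho, rho*u, rho*E) in each cell.
    return np.sqrt(np.sum((q_rom - q_fom) ** 2))

def integrated_error(rom_snaps, fom_snaps, dt_snap):
    # Approximate int ||e||_2 dt by a Riemann sum over the stored snapshots.
    return dt_snap * sum(l2_error(r, f) for r, f in zip(rom_snaps, fom_snaps))

# Synthetic demo: random arrays stand in for ROM / projected-FOM snapshots.
rng = np.random.default_rng(0)
fom_snaps = rng.standard_normal((200, 3, 1000))
rom_snaps = fom_snaps + 1e-2 * rng.standard_normal((200, 3, 1000))
print(integrated_error(rom_snaps, fom_snaps, dt_snap=0.005))
\\end{verbatim}

In a sweep, this scalar is simply tabulated once per candidate $\\tau$ (re-running the APG ROM for each value) and the minimizer is retained.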
It should be noted, however, that the choice of $\\tau$ is not arbitrary: selections of $\\tau$ larger than those plotted in Fig.~\\ref{fig:sod_memlength_sensitivity} caused the ROM to lose stability. \n\nAs discussed previously, computing an optimal value of $\\tau$ \\textit{a priori} may be linked to the spectral radius of the coarse-scale Jacobian (i.e., $\\rho \\big( \\tilde{\\mathbf{V}}^T \\mathbf{J}[\\tilde{\\mathbf{u}}_0]\\tilde{\\mathbf{V}} \\big)$, not to be confused with the physical density $\\rho$). We consider the APG ROM for basis sizes of $K=30,60,90,150,180,240$. For each case, an ``optimal\" $\\tau$ is found by minimizing the misfit between the ROM solution and the projected FOM solution. The misfit is defined as follows,\n\\begin{equation}\\label{eq:misfit}\n\\mathcal{J}(\\tau) = \\sum_{i=1}^{200} ||e(\\tau,t = 10 i \\Delta t)||_2.\n\\end{equation}\nEquation~\\ref{eq:misfit} corresponds to summing the $L^2$-norm of the error at every $10^{th}$ time-step. Figure~\\ref{fig:sod_tau_specRad} shows the resulting optimal $\\tau$ for each case plotted against the inverse of the spectral radius of the coarse-scale Jacobian evaluated at $t=0$. \nThe strong linear correlation suggests that a near-optimal value of $\\tau$ may be chosen by evaluating the spectral radius $\\rho \\big( \\tilde{\\mathbf{V}}^T \\mathbf{J}[\\tilde{\\mathbf{u}}_0]\\tilde{\\mathbf{V}} \\big)$ and using the observed linear relationship to obtain $\\tau$. \n\nThree points are emphasized here:\n\\begin{enumerate}\n\\item The spectral radius plays an important role in both implicit and explicit time integrators and is often the determining factor in the choice of the time-step. Theoretical analysis on the stability of explicit methods (and convergence of implicit methods) shows a similar dependence on the spectral radius. Choosing the memory length to be $\\tau = \\Delta t$ is one simple heuristic that may be used.\n\\item\nWhile a linear relationship between $\\tau$ and the spectral radius of the coarse-scale Jacobian has been observed in every problem the authors have examined, the slope of the fit is somewhat problem dependent. For the purpose of reduced-order modeling, however, this is only a minor inconvenience as an appropriate \nvalue of $\\tau$ can be selected by assessing the performance of the ROM on the training set, i.e., on the simulation used to construct the POD basis. \n\\item Finally, more complex methods may be used to compute $\\tau$. A method to dynamically compute $\\tau$ based on Germano's identity, for instance, was proposed in~\\cite{parish_dtau} in the context of the simulation of turbulent flows with Fourier-Galerkin methods.
Extension of this technique to projection-based ROMs and the development of additional techniques to select $\\tau$ will be the subject of future work.\n\\end{enumerate}\n\n\\begin{figure}\n \\centering\n \\begin{minipage}{0.49\\linewidth}\n \\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/dt_tau_sensitivity_semilog.png}\n \\caption{Error as a function of time scale, $K = 150$.}\n \\label{fig:sod_memlength_sensitivity}\n \\end{minipage}\\hfill\n \\begin{minipage}{0.49\\linewidth}\n \\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{sodFigs\/tauSpecRadCorr_revis.png}\n \\caption{Optimal $\\tau$ as a function of the spectral radius evaluated at $t=0$.}\n \\label{fig:sod_tau_specRad}\n \\end{minipage}\\hfill\n\\end{figure}\n\n\\input{cyl_new}\n\n\n\n\n\\section{Conclusion}\\label{sec:conclude}\nThis work introduced the Adjoint Petrov--Galerkin method for non-linear model reduction. Derived from the variational multiscale method and Mori-Zwanzig formalism, the Adjoint Petrov--Galerkin method is a Petrov--Galerkin projection technique with a non-linear time-varying test basis. The method is designed to be applied at the semi-discrete level, i.e., after spatial discretization of a partial differential equation, and is compatible with both implicit and explicit time integration schemes. The method displays commonalities with the adjoint-stabilization method used in the finite element community as well as the Least-Squares Petrov--Galerkin approach used in non-linear model-order reduction. Theoretical error analysis was presented that showed conditions under which the Adjoint Petrov--Galerkin ROM may have lower \\textit{a priori} error bounds than the Galerkin ROM. The theoretical cost of the Adjoint Petrov--Galerkin method was considered for both explicit and implicit schemes, where it was shown to be approximately twice that of the Galerkin method. In the case of implicit time integration schemes, the Adjoint Petrov--Galerkin ROM was shown to be capable of being more efficient than Least-Squares Petrov--Galerkin when the non-linear system is solved via Jacobian-Free Newton-Krylov methods. \n\nNumerical experiments with the Adjoint Petrov--Galerkin, Galerkin, and Least-Squares Petrov--Galerkin method were presented for the Sod shock tube problem and viscous compressible flow over a cylinder parameterized by the Reynolds number. In all examples, the Adjoint Petrov--Galerkin method provided more accurate predictions than the Galerkin method for a fixed basis dimension. Improvements over the Least-Squares Petrov--Galerkin method were observed in most cases. In particular, the Adjoint Petrov--Galerkin method was shown to provide relatively accurate predictions for the cylinder flow at Reynolds numbers outside of the training set used to construct the POD basis. The Galerkin method, with both an equivalent and an enriched trial space, failed to produce accurate results in these cases. Additionally, numerical evidence showed a correlation between the spectral radius of the reduced Jacobian and the optimal value of the stabilization parameter appearing in the Adjoint Petrov--Galerkin method.\n\nWhen augmented with hyper-reduction, the Adjoint Petrov--Galerkin ROM was shown to be capable of producing accurate predictions within the POD training set with computational speedups up to 5000 times compared to the full-order models. 
This speed-up is a result of hyper-reduction of the right-hand side, as well as the ability to use explicit time integration schemes at large time-steps. A study of the Pareto front for simulation error versus relative wall time showed that, for the compressible cylinder problem, the Adjoint Petrov--Galerkin ROM is competitive with the Galerkin ROM, and more efficient than the LSPG ROM for the problems considered. \n\n\n\n\n\n\\section{Acknowledgements}\\label{sec:acknowledge}\nThe authors acknowledge support from the US Air Force Office of Scientific Research through the Center of Excellence Grant FA9550-17-1-0195 (Tech. Monitors: Mitat Birkan \\& Fariba Fahroo) and the project LES Modeling of Non-local effects using Statistical Coarse-graining (Tech. Monitors: Jean-Luc Cambier \\& Fariba Fahroo). E. Parish acknowledges an appointment to the Sandia National Laboratories John von Neumann fellowship. This paper describes objective technical results and analysis. Any subjective\nviews or opinions that might be expressed in the paper do not necessarily\nrepresent the views of the U.S. Department of Energy or the United States\nGovernment. Sandia National Laboratories is a multimission laboratory managed\nand operated by National Technology and Engineering Solutions of Sandia, LLC.,\na wholly owned subsidiary of Honeywell International, Inc., for the U.S.\nDepartment of Energy's National Nuclear Security Administration under contract\nDE-NA-0003525.\n\n\\begin{appendices}\n\n\\section{Hyper-reduction for the Adjoint Petrov--Galerkin Reduced-Order Model}\\label{appendix:hyper}\n\nIn the numerical solution of non-linear dynamical systems, the evaluation of the non-linear right-hand side term usually accounts for a large portion (if not the majority) of the computational cost. Equation~\\ref{eq:GROM_modal} shows that standard projection-based ROMs are incapable of reducing this cost, as the evaluation of $\\mathbf{R}(\\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}})$ still scales with the number of degrees of freedom $N$. If this issue is not addressed, and there is no reduction in temporal dimensionality, the ROM will typically be \\emph{more} expensive than the FOM due to additional matrix-vector products from projection onto the reduced-order space. Techniques for overcoming this bottleneck are typically referred to as hyper-reduction methods. The premise of hyper-reduction methods is that, instead of computing the entire right-hand side vector, only a few entries are calculated. The missing entries can either be ignored (as done in collocation methods) or reconstructed (as done in the discrete interpolation and gappy POD methods). This section outlines the Gappy POD method, selection of the sampling points through QR factorization, and the algorithm for the hyper-reduced Adjoint Petrov--Galerkin ROM.\n\n\\subsection{Gappy POD}\nThe gappy POD method~\\cite{everson_sirovich_gappy} seeks to find an approximation for $\\mathbf{R}(\\cdot)$ that evaluates the right-hand side term at a reduced number of spatial points $r \\ll N$. This is achieved through the construction of a trial space for the right-hand side and least-squares reconstruction of a sampled signal.
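To make the reconstruction step concrete, the following NumPy sketch builds the gappy least-squares operator from a right-hand-side basis and recovers a full signal from a handful of sampled entries. The sizes, the random sampling, and the array names are synthetic placeholders; in the actual offline stage the basis and the sample points are computed as described in the algorithms that follow.

\\begin{verbatim}
import numpy as np

N, r, n_p = 1000, 20, 60                      # dof, RHS modes, sample points
rng = np.random.default_rng(1)

U, _ = np.linalg.qr(rng.standard_normal((N, r)))     # RHS POD basis (orthonormal)
sample_idx = rng.choice(N, size=n_p, replace=False)  # rows selected by P^T

# Offline: pseudo-inverse of the sampled basis, [P^T U]^+
PTU_pinv = np.linalg.pinv(U[sample_idx, :])

# Online: reconstruct a full signal from its sampled entries
f = U @ rng.standard_normal(r)        # a signal lying in the range of U
a_f = PTU_pinv @ f[sample_idx]        # generalized coordinates of the signal
f_hat = U @ a_f                       # gappy POD reconstruction
print(np.linalg.norm(f - f_hat))      # ~0 since f lies in range(U)
\\end{verbatim}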
The offline steps required in the Gappy POD method are given in Algorithm~\\ref{alg:gappy_hyper_offline}.\n\n\\begin{algorithm}\n\\caption{Algorithm for the offline steps required for hyper-reduction via gappy POD.}\n\\label{alg:gappy_hyper_offline}\nOffline Steps:\n\\begin{enumerate}\n \\item Compute the full-order solution, storing $n_t$ time snapshots in the following matrices,\n \\begin{alignat*}{2}\n &\\text{Right-hand side snapshots:} \\; &&\\mathbf{F} = [\\mathbf{R}({\\mathbf{u}}_1) \\quad \\mathbf{R}({\\mathbf{u}}_2) \\quad ... \\quad \\mathbf{R}({\\mathbf{u}}_{n_t})] \\in \\mathbb{R}^{N \\times n_t}\n \\end{alignat*} \n \n \\item Compute the right-hand side POD basis $\\mathbf{U} \\in \\mathbb{R}^{N \\times r}$ from $\\mathbf{F}$.\n \\item Compute the sampling point matrix $\\mathbf{P} = [\\mathbf{e}_{p_1} \\quad \\mathbf{e}_{p_2} \\quad ... \\quad \\mathbf{e}_{p_{N_p}}] \\in \\mathbb{R}^{N \\times N_{p}}$, where $\\mathbf{e}_i$ is the $i$th canonical unit vector and $N_p$ is the number of sampling points.\n \\item Compute the stencil matrix $\\mathbf{P}_s = [\\mathbf{e}_{s_1} \\quad \\mathbf{e}_{s_2} \\quad ... \\quad \\mathbf{e}_{s_{N_s}}] \\in \\mathbb{R}^{N \\times N_{s}}$, where $\\mathbf{e}_i$ is the $i$th canonical unit vector and $N_s$ is the number of stencil points required to reconstruct the residual at the sample points. Note $N_s \\ge N_p$.\n\n \\item Compute the least-squares reconstruction matrix: $\\big[ \\mathbf{P}^T \\mathbf{U} \\big]^{+}$, where the superscript $+$ denotes the pseudo-inverse. \n Note that this matrix corresponds to the solution of the least-squares problem for a gappy signal $\\mathbf{f} \\in \\mathbb{R}^N$:\n$$ \\mathbf{a}_{\\mathbf{f}} = \\underset{\\mathbf{b} \\in \\mathbb{R}^r}{\\text{argmin}} || \\mathbf{P}^T \\mathbf{U} \\mathbf{b} - \\mathbf{P}^T \\mathbf{f}||,$$\nwhich has the solution,\n$$\\mathbf{a}_{\\mathbf{f}} = \\bigg[ \\mathbf{P}^T \\mathbf{U} \\bigg]^{+} \\mathbf{P}^T \\mathbf{f}.$$\n\\end{enumerate}\n\\end{algorithm}\n\n\nNote that, because the gappy POD approximation of the right-hand side only samples the right-hand side term at $N_p$ spatial points, the cost of the ROM no longer scales with the full-order degrees of freedom $N$, but instead with the number of POD basis modes $K$ and the number of sample points $N_p$. Thus, the cost of evaluating the full right-hand side may be drastically reduced at the price of storing another snapshot matrix $\\mathbf{F}$ and computing another POD basis $\\mathbf{U}$ in the offline stage of computation. Furthermore, the product $\\tilde{\\mathbf{V}}^T \\mathbf{U} \\big[ \\mathbf{P}^T \\mathbf{U} \\big]^{+}$ may be precomputed during the offline stage if both $\\mathbf{P}$ and $\\mathbf{U}$ remain static throughout the simulation. This results in a relatively small $K\\times N_p$ matrix. As such, an increase in offline computational cost may produce a significant decrease in online computational cost.\n\n\\subsection{Selection of Sampling Points}\nStep 3 in Algorithm~\\ref{alg:gappy_hyper_offline} requires the construction of the sampling point matrix, for which several methods exist. In the discrete interpolation method proposed by Chaturantabut and Sorensen~\\cite{deim}, the sample points are selected inductively from the basis $\\mathbf{U}$, based on an error measure between the basis vectors and approximations of the basis vectors via interpolation. The method proposed by Drmac and Gugercin~\\cite{qdeim_drmac} leverages the rank-revealing QR factorization to compute $\\mathbf{P}$.
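For reference, this QR-based selection can be expressed in a few lines with SciPy's column-pivoted QR factorization. The sketch below uses a synthetic basis and omits the DG-specific augmentation of the sample set described in the algorithm below.

\\begin{verbatim}
import numpy as np
from scipy.linalg import qr

def qr_sample_points(U):
    # Column-pivoted QR of U^T; the first r pivot indices give the sample points.
    _, _, piv = qr(U.T, pivoting=True, mode='economic')
    return np.sort(piv[:U.shape[1]])

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((1000, 20)))  # synthetic RHS basis
print(qr_sample_points(U))
\\end{verbatim}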
Dynamic updates of $\\mathbf{U}$ and $\\mathbf{P}$ via periodic sampling of the full-order right-hand side term are even possible via the methods developed by Peherstorfer and Willcox~\\cite{adeim_peherstorfer}. \n\nThe 2D compressible cylinder simulations presented in this manuscript use a modified version of the rank-revealing QR factorization proposed in~\\cite{qdeim_drmac} to obtain the sampling points. The modifications are added to enhance the stability and accuracy of the hyper-reduced ROM within the discontinuous Galerkin method. Algorithm~\\ref{alg:qdeim} outlines the steps used in this manuscript to compute the sampling points.\n\n\\begin{algorithm}\n\\caption{Algorithm for QR-factorization-based selection of sampling matrix.}\n\\label{alg:qdeim}\nInput: Right-hand side POD basis $\\mathbf{U} \\in \\mathbb{R}^{N \\times r}$ \\;\n\\newline\nOutput: Sampling matrix $\\mathbf{P}$ \\;\n\\newline\nSteps:\n\\begin{enumerate}\n \n \\item Transpose the right-hand side POD basis, $\\mathbf{U}' = \\mathbf{U}^T$\n \\item Compute the rank-revealing QR factorization of $\\mathbf{U}'$, generating a permutation matrix $\\Gamma \\in \\mathbb{R}^{N \\times N}$, unitary $\\mathbf{Q} \\in \\mathbb{R}^{r \\times r}$, and upper triangular $\\mathbf{R} \\in \\mathbb{R}^{r \\times N}$ such that\n \\begin{equation*}\n \\mathbf{U}' \\Gamma = \\mathbf{QR}\n \\end{equation*}\n Details on computing rank-revealing QR decompositions can be found in~\\cite{qr_decomp_gu}, though many math libraries include optimized routines for this operation.\n \\item From the permutation matrix $\\Gamma$, select the first $r$ columns to form the interpolation point matrix $\\mathbf{P}$. \n \\item When applied to systems of equations, sampling approaches that select only specific indices of the residual can be inaccurate due to the fact that, at a given cell, the residual of all unknowns at that cell may not be calculated. This issue is further exacerbated in the discontinuous Galerkin method, where each cell has a number of quadrature points. As such, we perform the additional step: \n \\begin{enumerate}\n \\item Augment the sampling point matrix, $\\mathbf{P}$, with additional columns such that all unknowns are computed at the mesh cells selected by Step 3. In the present context, these additional unknowns correspond to each conserved variable and quadrature point in the selected cells. This step leads to $N_p > r$.\n \\end{enumerate}\n\\end{enumerate}\n\n\n\\end{algorithm}\n\n\n\\subsection{Hyper-Reduction of the Adjoint Petrov--Galerkin ROM}\nLastly, the online steps required for an explicit Euler update to the Adjoint Petrov--Galerkin ROM with Gappy POD hyper-reduction are provided in Algorithm~\\ref{alg:alg_ag_hyper}. It is worth noting that Step 3 in Algorithm~\\ref{alg:alg_ag_hyper} requires one to reconstruct the right-hand side at the stencil points. Hyper-reduction via a standard collocation method, which provides no means to reconstruct the right-hand side, is thus not compatible with the Adjoint Petrov--Galerkin ROM.
\n\\begin{algorithm}\n\\caption{Algorithm for an explicit Euler update for the Adjoint Petrov--Galerkin ROM with gappy POD hyper-reduction.}\n\\label{alg:alg_ag_hyper}\nInput: $\\tilde{\\mathbf{a}}^n$ \\;\n\\newline\nOutput: $\\tilde{\\mathbf{a}}^{n+1}$\\;\n\\newline\nOnline steps at time-step $n$:\n\\begin{enumerate}\n\\item Compute the state at the stencil points: $\\tilde{\\mathbf{u}}_s^n = \\mathbf{P}_s^T \\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}^n,$ with $\\tilde{\\mathbf{u}}_s^n \\in \\mathbb{R}^{N_s}$ \n\\item Compute the generalized coordinates to the right-hand side evaluation via,\n\\begin{equation*}\\label{eq:RHSapprox}\n \\mathbf{a}_{\\mathbf{R}}^n = \\big[ \\mathbf{P}^T \\mathbf{U} \\big]^{+} \\mathbf{P}^T \\mathbf{R}(\\tilde{\\mathbf{u}}_s^n),\n\\end{equation*}\nwith $\\mathbf{a}_{\\mathbf{R}}^n \\in \\mathbb{R}^r$.\nNote that the product $\\mathbf{P}^T \\mathbf{R}(\\tilde{\\mathbf{u}}_s^n)$ requires computing $\\mathbf{R}(\\tilde{\\mathbf{u}}_s^n)$ \\emph{only at the sample points}, as given by the unit vectors stored in $\\mathbf{P}$.\n\\item Reconstruct the right-hand side at the stencil points: $\\overline{\\mathbf{R}_s(\\tilde{\\mathbf{u}}_s^n)} = \\mathbf{P}_s^T \\mathbf{U} \\mathbf{a}_{\\mathbf{R}}^n$\n\\item Compute the orthogonal projection of the approximated right-hand side at the stencil points:\n$${\\Pi^{\\prime}} \\overline{\\mathbf{R}_s(\\tilde{\\mathbf{u}}^n)} = \\overline{\\mathbf{R}_s(\\tilde{\\mathbf{u}}^n)} - \\tilde{\\mathbf{V}} \\tilde{\\mathbf{V}}^T \\overline{ \\mathbf{R}_s(\\tilde{\\mathbf{u}}^n)}$$\n\\item Compute the generalized coordintes for the action of the Jacobian on ${\\Pi^{\\prime}} \\overline{\\mathbf{R}_s(\\tilde{\\mathbf{u}}^n)}$ at the sample points using either finite difference or exact linearization. For finite difference:\n \\begin{equation*}\n \\mathbf{a}_{\\mathbf{J}}^n \\approx \\frac{1}{\\epsilon} \\big[ \\mathbf{P}^T \\mathbf{U} \\big]^{+} \\mathbf{P}^T \\Big[ \\mathbf{R}_s\\big(\\tilde{\\mathbf{u}}^n_s + \\epsilon {\\Pi^{\\prime}} \\overline{\\mathbf{R}_s}(\\tilde{\\mathbf{u}}^n_s) \\big) - {\\mathbf{R}_s(\\tilde{\\mathbf{u}}^n_s )} \\Big], \n \\end{equation*}\n with $\\mathbf{a}_{\\mathbf{J}}^n \\in \\mathbb{R}^r$. Note $\\epsilon$ is a small constant value, usually $\\sim \\mathcal{O}(10^{-5})$.\n\\item Compute the combined sampled right-hand side: $\\tilde{\\mathbf{V}}^T \\mathbf{U} \\bigg[ \\mathbf{a}_{\\mathbf{R}}^n + \\tau \\mathbf{a}_{\\mathbf{J}}^n \\bigg]$\n\\item Update the state: $\\tilde{\\mathbf{a}}^{n+1} = \\tilde{\\mathbf{a}}^n + \\Delta t \\tilde{\\mathbf{V}}^T \\mathbf{U} \\bigg[ \\mathbf{a}_{\\mathbf{R}}^n + \\tau \\mathbf{a}_{\\mathbf{J}}^n \\bigg] $\n\\end{enumerate}\n\n\\end{algorithm}\n\n\n\n\\section{POD Basis Construction}\\label{appendix:basisconstruction}\n\nFor the 1D Euler case detailed in this manuscript, the procedure for constructing separate POD bases for each conserved variables is as follows:\n\n\\begin{enumerate}\n\\item Run the full-order model for $t \\in (0,1)$ at a time-step of $\\Delta t = 0.0005$. 
The state vector is saved at every other time step to create 1000 state snapshots.\n\\item Collect the snapshots for each state into three state snapshot matrices:\n$$\n\\mathbf{S}_{\\rho} = \\begin{bmatrix}\n\\mathbf{\\rho}_1 & \\mathbf{\\rho}_2 & \\hdots & \\mathbf{\\rho}_{1000}\n\\end{bmatrix},\n\\mathbf{S}_{\\rho u} = \\begin{bmatrix}\n\\mathbf{\\rho u}_1 & \\mathbf{\\rho u}_2 & \\hdots & \\mathbf{\\rho u}_{1000}\n\\end{bmatrix},\n\\mathbf{S}_{\\rho E} = \\begin{bmatrix}\n\\mathbf{\\rho E}_1 & \\mathbf{\\rho E}_2 & \\hdots & \\mathbf{\\rho E}_{1000}\n\\end{bmatrix},\n$$\nwhere $\\mathbf{\\rho}_i, \\mathbf{\\rho u}_i, \\mathbf{\\rho E}_i \\in \\mathbb{R}^{1000}$.\n\\item Compute the singular-value decomposition (SVD) of each snapshot matrix, e.g. for $\\mathbf{S}_{\\rho}$,\n$$\\mathbf{S}_{\\rho} \\mathrel{\\overset{\\makebox[0pt]{\\mbox{\\normalfont\\tiny\\sffamily SVD}}}{=}} \\mathbf{V}_{\\rho} {\\Sigma}_{\\rho} \\mathbf{U}^T_{\\rho}.$$\nThe columns of $\\mathbf{V}_{\\rho}$ and $\\mathbf{U}_{\\rho}$ are the left and right singular vectors of $\\mathbf{S}_{\\rho}$, respectively. ${\\Sigma}_{\\rho}$ is a diagonal matrix of the singular values of $\\mathbf{S}_{\\rho}$. The columns of $\\mathbf{V}_{\\rho}$ form a basis for the solution space of $\\rho_i$. \n\n\\textit{Remarks}\n\\begin{enumerate}\n\\item In this example, a separate basis is computed for each conserved quantity. It is also possible to construct a global basis by stacking $\\mathbf{S}_{\\rho}$, $\\mathbf{S}_{\\rho \\mathbf{u}}$, and $\\mathbf{S}_{\\rho \\mathbf{E}}$ into one snapshot matrix and computing one ``global\" SVD.\n\\end{enumerate}\n\\item Decompose each basis into bases for the resolved and unresolved scales by selecting the first $K$ columns and last $1000-K$ columns, respectively, e.g.\n$$\\mathbf{V}_\\rho = \\begin{bmatrix} \\tilde{\\mathbf{V}}_{\\rho} & {\\mathbf{V}^{\\prime}}_{\\rho} \\end{bmatrix},$$\nwhere $\\tilde{\\mathbf{V}}_{\\rho} \\in \\mathbb{R}^{1000 \\times K}$ and ${\\mathbf{V}^{\\prime}}_{\\rho} \\in \\mathbb{R}^{1000 \\times (1000 - K)}.$ \\\\\n\\textit{Remarks}\n\\begin{enumerate}\n\\item Each basis vector is orthogonal to the others, hence the coarse and fine scales are orthogonal.\n\\item In this example, we have selected 1000 snapshots such that the column space of ${\\mathbf{V}}$ spans ${\\MC{V}}$. In general, this is not the case. As the APG method requires no processing of the fine-scale basis functions, however, this is not an issue.\n\\end{enumerate}\n\\item Construct a global coarse-scale basis,\n$$\\tilde{\\mathbf{V}} = \\begin{bmatrix}\n\\tilde{\\mathbf{V}}_{\\rho} & \\mathbf{0} & \\mathbf{0} \\\\\n\\mathbf{0} & \\tilde{\\mathbf{V}}_{\\rho u} & \\mathbf{0} \\\\\n\\mathbf{0} & \\mathbf{0} & \\tilde{\\mathbf{V}}_{\\mathbf{\\rho E}} \\\\\n\\end{bmatrix}.$$\n\\end{enumerate}
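The construction above can be summarized in a few lines of NumPy/SciPy. The sketch below uses random snapshot matrices purely as placeholders for $\\mathbf{S}_{\\rho}$, $\\mathbf{S}_{\\rho u}$, and $\\mathbf{S}_{\\rho E}$.

\\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def pod_basis(S, K):
    # Left singular vectors associated with the K largest singular values.
    V, _, _ = np.linalg.svd(S, full_matrices=False)
    return V[:, :K]

rng = np.random.default_rng(3)
# Placeholders for the three 1000 x 1000 snapshot matrices.
S_rho, S_rho_u, S_rho_E = (rng.standard_normal((1000, 1000)) for _ in range(3))

K = 150
V_rho, V_rho_u, V_rho_E = (pod_basis(S, K) for S in (S_rho, S_rho_u, S_rho_E))

# Global coarse-scale basis: block-diagonal arrangement of the three bases.
V_tilde = block_diag(V_rho, V_rho_u, V_rho_E)   # shape (3000, 3 * K)
print(V_tilde.shape)
\\end{verbatim}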
\\section{Algorithms for the Galerkin and LSPG ROMs}\\label{appendix:algorithms}\nSection~\\ref{sec:cost} presented an analysis on the computational cost of the Adjoint Petrov--Galerkin ROM. This appendix presents similar algorithms and FLOP counts for the Galerkin and LSPG ROMs. The following algorithms and FLOP counts are reported:\n\\begin{enumerate}\n \\item An explicit Euler update to the Galerkin ROM (Algorithm~\\ref{alg:alg_g_exp}, Table~\\ref{tab:alg_g_exp}).\n \\item An implicit Euler update to the Galerkin ROM using Newton's method with Gaussian elimination (Algorithm~\\ref{alg:alg_g_imp}, Table~\\ref{tab:alg_g_imp}).\n \\item An implicit Euler update to the Least-Squares Petrov--Galerkin ROM using the Gauss-Newton method with Gaussian elimination (Algorithm~\\ref{alg:alg_LSPG}, Table~\\ref{tab:alg_LSPG}).\n\\end{enumerate}\n\n\\begin{algorithm}[h!]\n\\caption{Algorithm for an explicit Euler update for the Galerkin ROM.}\n\\label{alg:alg_g_exp}\nInput: $\\tilde{\\mathbf{a}}^n$ \\;\n\\newline\nOutput: $\\tilde{\\mathbf{a}}^{n+1}$\\;\n\\newline\nSteps:\n\\begin{enumerate}\n\\item Compute the state from the generalized coordinates, $\\tilde{\\mathbf{u}}^n =\\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}^{n}$\n\\item Compute the right-hand side from the state, $\\mathbf{R}(\\tilde{\\mathbf{u}}^n)$\n\\item Project the right-hand side, $\\tilde{\\mathbf{V}}^T\\mathbf{R}(\\tilde{\\mathbf{u}}^n)$\n\\item Update the state, $\\tilde{\\mathbf{a}}^{n+1} = \\tilde{\\mathbf{a}}^n + \\Delta t \\tilde{\\mathbf{V}}^T\\mathbf{R}(\\tilde{\\mathbf{u}}^n)$\n\\end{enumerate}\n\\end{algorithm}\n\n\\begin{table}[h!]\n\\begin{tabular}{p{7cm} p{8cm}}\n\\hline\nStep in Algorithm~\\ref{alg:alg_g_exp} & Approximate FLOPs \\\\\n\\hline\n1 & $2NK - N$ \\\\\n2 & $\\omega N$ \\\\\n3 & $2NK - K$ \\\\\n4 & $2K $ \\\\\n\\hline\nTotal & $4 N K + (\\omega-1) N + K$ \\\\\n\\hline\n\\end{tabular}\n\\caption{Approximate floating-point operations for an explicit Euler update to the Galerkin method reported in Algorithm~\\ref{alg:alg_g_exp}.}\n\\label{tab:alg_g_exp}\n\\end{table}\n\n\n\\begin{algorithm}[h!]\n\\caption{Algorithm for an implicit Euler update for the Galerkin ROM using Newton's Method with Gaussian Elimination}\n\\label{alg:alg_g_imp}\nInput: $\\tilde{\\mathbf{a}}^n$, residual tolerance $\\xi$ \\;\n\\newline\nOutput: $\\tilde{\\mathbf{a}}^{n+1}$\\;\n\\newline\nSteps:\n\\begin{enumerate}\n\\item Set initial guess, $\\tilde{\\mathbf{a}}_k$\n\\item Loop while $\\|\\mathbf{r}_G(\\tilde{\\mathbf{a}}_k)\\| > \\xi$\n\\begin{enumerate}\n \\item Compute the state from the generalized coordinates, $\\tilde{\\mathbf{u}}_k = \\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}_k$\n \\item Compute the right-hand side from the full state, $\\mathbf{R}(\\tilde{\\mathbf{u}}_k)$\n \\item Project the right-hand side, $\\tilde{\\mathbf{V}}^T \\mathbf{R}(\\tilde{\\mathbf{u}}_k)$\n \\item Compute the Galerkin residual, $\\mathbf{r}_G(\\tilde{\\mathbf{a}}_k) = \\tilde{\\mathbf{a}}_k - \\tilde{\\mathbf{a}}^n - \\Delta t \\tilde{\\mathbf{V}}^T \\mathbf{R}(\\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}_k)$\n \\item Compute the residual Jacobian, $\\frac{\\partial \\mathbf{r}(\\tilde{\\mathbf{a}}_k)}{\\partial \\tilde{\\mathbf{a}}_k}$\n \\item Solve the linear system via Gaussian Elimination: $\\frac{\\partial \\mathbf{r}(\\tilde{\\mathbf{a}}_k)}{\\partial \\tilde{\\mathbf{a}}_k} \\Delta \\tilde{\\mathbf{a}} = - \\mathbf{r}(\\tilde{\\mathbf{a}}_k)$\n \\item Update the state: $\\tilde{\\mathbf{a}}_{k+1} = \\tilde{\\mathbf{a}}_k + \\Delta \\tilde{\\mathbf{a}}$\n \\item $k = k + 1$\n\\end{enumerate}\n\\item Set final state, $\\tilde{\\mathbf{a}}^{n+1} = \\tilde{\\mathbf{a}}_k$\n\\end{enumerate}\n\\end{algorithm}\n\n\n\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{p{7cm} p{8cm}}\n\\hline\nStep in Algorithm~\\ref{alg:alg_g_imp} & Approximate FLOPs
\\\\\n\\hline\n2a & $2 N K - N $ \\\\\n2b & $ \\omega N $ \\\\\n2c & $2 N K - K$ \\\\\n2d & $ 3K $ \\\\\n2e & $ 4 N K^2 + (\\omega - 1) N K + 2K^2 $ \\\\\n2f & $ K^3 $ \\\\\n2g & $ K $ \\\\\n\\hline\nTotal & $ (\\omega - 1)N + 3K + (\\omega + 3)NK + 2K^2 + 4NK^2 + K^3 $\n\\end{tabular}\n\\caption{Approximate floating-point operations for one Newton iteration for the implicit Euler update to the Galerkin method reported in Algorithm~\\ref{alg:alg_g_imp}.}\n\\label{tab:alg_g_imp}\n\\end{table}\n\n\\clearpage\n\n\\begin{algorithm}[H]\n\\caption{Algorithm for an implicit Euler update for the LSPG ROM using a Gauss-Newton method with Gaussian Elimination}\n\\label{alg:alg_LSPG}\nInput: $\\tilde{\\mathbf{a}}^n$, residual tolerance $\\xi$ \\;\n\\newline\nOutput: $\\tilde{\\mathbf{a}}^{n+1}$\\;\n\\newline\nSteps:\n\\begin{enumerate}\n\\item Set initial guess, $\\tilde{\\mathbf{a}}_k$\n\\item Loop while $\\mathbf{r}_k > \\xi$\n\\begin{enumerate}\n \\item Compute the state from the generalized coordinates, $\\tilde{\\mathbf{u}}_k =\\tilde{\\mathbf{V}} \\tilde{\\mathbf{a}}_{k}$\n \\item Compute the right-hand side from the full state, $\\mathbf{R}(\\tilde{\\mathbf{u}}_k)$\n \\item Compute the residual, $\\mathbf{r} (\\tilde{\\mathbf{u}}_k) = \\tilde{\\mathbf{u}}_k - \\tilde{\\mathbf{u}}^n - \\Delta t \\mathbf{R}(\\tilde{\\mathbf{u}}_k)$\n \\item Compute the test basis, $\\mathbf{W}_k = \\frac{\\partial \\mathbf{r}(\\tilde{\\mathbf{u}}_k)}{\\partial \\tilde{\\mathbf{u}}_k} \\tilde{\\mathbf{V}} = \\frac{\\partial \\mathbf{r}(\\tilde{\\mathbf{u}}_k)}{\\partial \\tilde{\\mathbf{a}}_k} $\n \\item Compute the product, $\\tilde{\\mathbf{W}}_k^T \\tilde{\\mathbf{W}}_k$\n \\item Project the residual onto the test space, $\\tilde{\\mathbf{W}}^T \\mathbf{r}(\\tilde{\\mathbf{a}}_k)$\n \\item Solve $\\tilde{\\mathbf{W}}^T \\tilde{\\mathbf{W}} \\Delta \\tilde{\\mathbf{a}} = - \\tilde{\\mathbf{W}}^T \\mathbf{r}(\\tilde{\\mathbf{a}}_k) $ for $\\Delta \\tilde{\\mathbf{a}}$ via Gaussian elimination\n \\item Update solution, $\\tilde{\\mathbf{a}}_{k+1} = \\tilde{\\mathbf{a}}_k + \\Delta \\tilde{\\mathbf{a}}$\n \\item k = k + 1\n\\end{enumerate}\n\\item Set final state, $\\tilde{\\mathbf{a}}^{n+1} = \\tilde{\\mathbf{a}}_k$ \n\\end{enumerate}\n\\end{algorithm}\n\n\n\\begin{table}[H]\n\\begin{tabular}{p{7cm} p{8cm}}\n\\hline\nStep in Algorithm~\\ref{alg:alg_LSPG}& Approximate FLOPs \\\\\n\\hline\n2a & $ 2NK - N $ \\\\\n2b & $ \\omega N $ \\\\\n2c & $ 3N $ \\\\\n2d & $ (\\omega + 2)NK + 2NK^2 $ \\\\\n2e & $ 2NK^2 - K^2 $ \\\\\n2f & $ 2NK - K $ \\\\\n2g & $ K^3 $ \\\\\n2h & $ K $ \\\\\n\\hline\nNewton Iteration Total & $ (\\omega + 2)N + (\\omega + 6) NK - K^2 + 4NK^2 + K^3 $ \\\\\n\n\\end{tabular}\n\\caption{Approximate floating-point operations for one Newton iteration for the implicit Euler update to the LSPG method reported in Algorithm~\\ref{alg:alg_LSPG}.}\n\\label{tab:alg_LSPG}\n\\end{table}\n\n\n\n\\end{appendices}\n\n\\clearpage\n\n\\bibliographystyle{aiaa}\n\n\n\n\\subsection{Example 2: Flow Over Cylinder}\\label{sec:cylinder}\nThe second case considered is viscous compressible flow over a circular cylinder. The flow is described by the two-dimensional compressible Navier-Stokes equations. 
A Newtonian fluid and a calorically perfect gas are assumed.\n\\begin{comment} \nThe flow is described by the two-dimensional compressible Navier-Stokes equations,\n\\begin{equation}\\label{eq:compressible_ns}\n\\frac{\\partial \\mathbf{u}}{\\partial t} + \\nabla \\cdot \\big( \\mathbf{F}(\\mathbf{u} ) - \\mathbf{F}_v (\\mathbf{u},\\nabla \\mathbf{u} ) \\big) =0,\n\\end{equation}\nwhere $\\mathbf{F}$ and $ \\mathbf{F}_v$ are the inviscid and viscous fluxes, respectively. For a two-dimensional flow the state vector and inviscid fluxes are,\n$$\n\\mathbf{u} = \\begin{Bmatrix}\n\\rho \\\\ \\rho u_1 \\\\ \\rho u_2 \\\\ \\rho E \\end{Bmatrix}, \\qquad \\mathbf{F}_{1} = \\begin{Bmatrix} \\rho u_1 \\\\ \\rho u_1^2 + p \\\\ \\rho u_1 u_2 \\\\ u_1(E + p) \\end{Bmatrix}, \n\\qquad \\mathbf{F}_{2} = \\begin{Bmatrix} \\rho u_2 \\\\ \\rho u_1 u_2 \\\\ \\rho u_2^2 + p \\\\ u_2(E + p) \\end{Bmatrix}.\n$$\nThe viscous fluxes are given by,\n$$\n\\qquad \\mathbf{F}_{v_1} = \\begin{Bmatrix} 0 \\\\ \\tau_{11} \\\\ \\tau_{12} \\\\ u_j \\tau_{j1} + c_p \\frac{\\mu}{\\text{Pr}} \\frac{\\partial T}{\\partial x_1} \\end{Bmatrix}, \n\\qquad \\mathbf{F}_{v_2} = \\begin{Bmatrix} 0 \\\\ \\tau_{21} \\\\ \\tau_{22} \\\\ u_j \\tau_{j2} + c_p \\frac{\\mu}{\\text{Pr}} \\frac{\\partial T}{\\partial x_2} \\end{Bmatrix}.\n$$\nWe assume a Newtonian fluid, which leads to a viscous stress tensor of the form,\n\\begin{equation*}\n\\tau_{ij} = 2\\mu S_{ij},\n\\end{equation*}\nwhere,\n\\begin{equation*}\n S_{ij} = \\frac{1}{2} \\big( \\frac{\\partial u_i}{\\partial x_j} + \\frac{\\partial u_j}{\\partial x_i} \\big) - \\frac{1}{3} \\frac{\\partial u_k}{\\partial x_i} \\delta_{ij}.\n\\end{equation*}\nThe Navier-Stokes equations are closed with a constitutive relationship for a calorically perfect gas,\n$$p = (\\gamma - 1)( \\rho E - \\frac{1}{2} \\rho u_1^2 - \\frac{1}{2} \\rho u_2^2 \\big),$$\nwhere $\\gamma = 1.4$ is the heat-capacity ratio. \n\\end{comment}\n\\subsubsection{Full-Order Model}\\label{sec:cylinder_fom}\nThe compressible Navier-Stokes equations are solved using a discontinuous Galerkin (DG) method and explicit time integration. Spatial discretization with the discontinuous Galerkin method leads to a semi-discrete system of the form,\n\\begin{equation*}\n \\frac{d {\\mathbf{u}}}{dt} = \\mathbf{M}^{-1}\\mathbf{f}({\\mathbf{u}}), \\qquad {\\mathbf{u}}(t=0) = {\\mathbf{u}}_0,\n\\end{equation*}\nwhere $\\mathbf{M}\\in \\mathbb{R}^{N \\times N}$ is a block diagonal mass matrix and $\\mathbf{f}({\\mathbf{u}}) \\in \\mathbb{R}^N$ is a vector containing surface and volume integrals. Thus, using the notation defined in Eq.~\\ref{eq:FOM}, the right-hand side operator for the DG discretization is defined as,\n$$\\mathbf{R}({\\mathbf{u}}) = \\mathbf{M}^{-1}\\mathbf{f}(\\tilde{\\mathbf{u}}).$$\n\nFor the flow over cylinder problem considered in this section, a single block domain is constructed in polar coordinates by uniformly discretizing in $\\theta$ and by discretizing in the radial direction by,\n$$r_{i+1} = r_i + r_i (R_g - 1),$$\nwhere $R_g$ is a stretching factor and is defined by,\n$$R_g = r_{max}^{1\/N_r}.$$\n\nThe DG method utilizes the Roe flux at the cell interfaces and uses the first form of Bassi and Rebay~\\cite{BR1} for the viscous fluxes. Temporal integration is again performed using a strong stability preserving RK3 method. Far-field boundary conditions and linear elements are used. Details of the FOM are presented in Table~\\ref{tab:cylinder}. 
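For clarity, the radial stretching above is a geometric progression. A short sketch of the grid-point generation is given below; placing the cylinder surface at $r=1$ is an assumption, chosen because it is consistent with $N_r$ applications of the recursion reaching $r_{max}$.

\\begin{verbatim}
import numpy as np

def polar_mesh(r_max=60.0, N_r=80, N_theta=80):
    # r_{i+1} = r_i + r_i (R_g - 1) = r_i * R_g, with R_g = r_max ** (1 / N_r).
    # r_0 = 1 is assumed, so that r_{N_r} = r_max.
    R_g = r_max ** (1.0 / N_r)
    r = np.array([R_g ** i for i in range(N_r + 1)])
    theta = np.linspace(0.0, 2.0 * np.pi, N_theta, endpoint=False)
    return r, theta

r, theta = polar_mesh()
print(r[0], r[-1])   # 1.0 and 60.0 (to round-off)
\\end{verbatim}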
\n\n\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|c | c | c | c | c | c | c | c | c | c |}\\hline\n $r_{max}$ & $N_r$ & $N_{\\theta}$ & $p_r$ & $p_{\\theta}$ & $\\Delta t$ & Mach & $a_{\\infty}$ & $p_{\\infty}$ & $T_{\\infty}$ \\\\ \\hline\n 60 & 80 & 80 & 3 & 3 & $5e-3$ & 0.2 & $1.0$ & $1.0$ & $\\gamma^{-1}$ \\\\\\hline\n\\end{tabular}\n\\caption{Details used for flow over cylinder problem. In the above table, $N_r$ and $N_{\\theta}$ are the number of cells in the radial and $\\theta$ direction, respectively. Similarly, $p_r$ and $p_{\\theta}$ are the polynomial orders in the radial and $\\theta$ direction. Lastly, $a_{\\infty}$, $p_{\\infty}$, and $T_{\\infty}$ are the free-stream speed of sound, pressure, and temperature.}\n\\label{tab:cylinder}\n\\end{table}\n\n\n\\subsubsection{Solution of the Full-Order Model and Construction of the ROM Trial Space}\\label{sec:cyl_romsteps}\nFlow over a cylinder at Re=$100,200,$ and $300$, where Re=$ \\rho_{\\infty} U_{\\infty} D \/ \\mu$ is the Reynolds number, are considered. These Reynolds numbers give rise to the well studied von K\\'arm\\'an vortex street. Figure~\\ref{fig:vonkarman} shows the FOM solution at Re=100 for several time instances to illustrate the vortex street.\n\nThe FOM is used to construct the trial spaces used in the ROM simulations. The process used to construct these trial spaces is as follows:\n\\begin{enumerate}\n \\item Initialize FOM simulations at Reynold's numbers of Re=$100,200,$ and $300.$ The Reynold's number is controlled by raising or lowering the viscosity.\n \\item Time-integrate the FOM at each Reynolds number until a statistically steady-state is reached.\n \\item Once the flow has statistically converged to a steady state, reset the time coordinate to be $t=0$, and solve the FOM for $t \\in [0,100]$.\n \\item Take snapshots of the FOM solution obtained from Step 3 at every $t=0.5$ time units over a time-window of $t \\in [0,100]$ time units, for a total of $200$ snapshots at each Reynolds number. This time window corresponds to roughly two cycles of the vortex street, with $100$ snapshots per cycle.\n \\item Assemble the snapshots from each case into one global snapshot matrix of dimension $N \\times 600$. This snapshot matrix is used to construct the trial subspace through POD. Note that only one set of basis functions for all conserved variables is constructed. \n \\item Construct trial spaces of dimension $N\\times 11$, $N\\times 43$, and $N \\times 87$. These subspace dimensions correspond to an energy criterion of $99, 99.9,$ and $99.99\\%$. 
The different trial spaces are summarized in Table~\\ref{tab:rom_basis_1}.\n\\end{enumerate}\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.05\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/colorbar.png}\n\\label{fig:vonkarman0}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.3\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/cyl0000.png}\n\\caption{$x$-velocity at $t=0.0$}\n\\label{fig:vonkarman1}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.3\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/cyl0005.png}\n\\caption{$x$-velocity at $t=25.0$}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.3\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/cyl0010.png}\n\\caption{$x$-velocity at $t=50.0$}\n\\label{fig:vonkarman3}\n\\end{subfigure}\n\\end{center}\n\\caption{Evolution of the von K\\'arm\\'an vortex street at Re=100.}\n\\label{fig:vonkarman}\n\\end{figure}\n\n\\subsubsection{Solution of the Reduced-Order Models}\nThe G ROM, APG ROM, and LSPG ROMs are considered. Details on their implementation are as follows:\n\\begin{enumerate}\n\\item Galerkin ROM: The Galerkin ROM is evolved in time using both explicit and implicit time integrators. In the explicit case, a strong stability RK3 method is used. In the implicit case, Crank-Nicolson time integration is used. The non-linear algebraic system is solved using SciPy's Jacobian-Free Netwon-Krylov solver. LGMRES is employed as the linear solver. The convergence tolerance for the max-norm of the residual is set at the default ftol=6e-6. \n\n\\item Adjoint Petrov-Galerkin ROM: The APG ROM is evolved in time using the same time integrators as the Galerkin ROM. The extra term appearing in the APG model is computed via finite difference\\footnote{It is noted that, when used in conjunction with a JFNK solver that utilizes finite difference to approximate the action of the Jacobian on a vector, approximating the extra RHS term in APG via finite difference leads to computing the finite difference approximation of a finite difference approximation. While not observed in the examples presented here, this can have a detrimental effect on accuracy and\/or convergence.} with a step size of $\\epsilon = $1e-5. Unless noted otherwise, the memory length $\\tau$ is selected to be $\\tau = \\frac{0.2}{\\rho(\\tilde{\\mathbf{V}}^T \\mathbf{J}[\\tilde{\\mathbf{u}}(0)] \\tilde{\\mathbf{V}} ) } .$ The impact of $\\tau$ on the numerical results is considered in the subsequent sections. \n\n\\item LSPG ROM: The LSPG ROM is formulated from an implicit Crank-Nicolson temporal discretization. The resulting non-linear least-squares problem is solved using SciPy's least-squares solver with the `dogbox' method~\\cite{scipy_leastsquares_dogbox}. The tolerance on the change to the cost function is set at ftol=1e-8. The tolerance on the change to the generalized coordinates is set at xtol=1e-8. The SciPy least-squares solver is comparable in speed to our own least-squares solver that utilizes the Gauss-Newton method with a thin QR factorization to solve the least-squares problem. 
The SciPy solver, however, was observed to be more robust in driving down the residual than the basic Gauss-Newton method with QR factorization, presumably due to SciPy's inclusion of trust-regions, and hence results are reported with the SciPy solver.\n\\end{enumerate}\nAll ROMs are initialized with the solution of the Re=$100$ FOM at time $t=0$, the $x$-velocity of which is shown in Figure~\\ref{fig:vonkarman1}.\n\n\n\n\\begin{table}[]\n\\begin{center}\n\\begin{tabular}{c c c c}\n\\hline\nBasis \\# & Trial Basis Dimension ($K$) & Energy Criteria & $\\tau$ (Adjoint Petrov-Galerkin) \\\\\n\\hline\n1 & $ 11$ & $99\\%$ & $1.0$ \\\\\n2 & $ 42$ & $99.9\\%$ & $0.3$ \\\\\n3 & $ 86$ & $99.99\\%$ & $0.1$ \\\\\n\\hline\n\\end{tabular}\n\\caption{Summary of the various basis dimensions used for Example 2}\n\\label{tab:rom_basis_1}\n\\end{center}\n\\end{table}\n\n\n\n\\subsubsection{Reconstruction of Re=100 Case}\\label{sec:re100_rec}\nReduced-order models of the Re=100 case are first considered. This case was explicitly used in the construction of the POD basis and tests the ability of the ROM to reconstruct previously ``seen\" dynamics. Unless otherwise noted, the default time-step for all ROMs is taken to be $\\Delta t = 0.5$. The values of $\\tau$ used in the APG ROMs are selected from the spectral radius heuristic and are given in Table~\\ref{tab:rom_basis_1}. Figures~\\ref{subfig:re100a} and~\\ref{subfig:re100b} show the lift coefficient as well as the mean squared error (MSE) of the full-field ROM solutions for the G ROM, APG ROM, and LSPG ROMs for Basis \\#2, while Figure~\\ref{subfig:re100c} shows the integrated MSE for $t \\in [0,200]$ for Basis \\#1, 2, and 3. Figure~\\ref{subfig:re100d} shows the integrated error as a function of relative CPU time for the various ROMs. The relative CPU time is defined with respect to the FOM, which is integrated with an explicit time-step 100 times lower than the ROMs. The lift coefficients predicted by all three ROMs are seen to overlay the FOM results. The mean squared error shows that, for a given trial basis dimension, the APG ROM is more accurate than both the Galerkin and LSPG ROMs. This is the case for both explicit and implicit time integrators. As shown in Figure~\\ref{subfig:re100c}, the APG ROM converges at a similar rate to the G ROM as the dimension of the trial space grows. For Basis \\#2 and \\#3, the implicit time-marching schemes are slightly less accurate than the explicit time-marching schemes. Finally, Figure~\\ref{subfig:re100d} shows that, for a given CPU time, the G ROM with explicit time-marching produces the least error. The APG ROM with explicit time-marching is the second-best performing method. In the implicit case, both the G and APG ROMs lead to lower error at a given CPU time than LSPG. This decrease in cost is due to the fact that the G and APG ROMs utilize Jacobian-Free Netwon-Krylov solvers. As discussed in Section~\\ref{sec:cost}, it is much more challenging for LSPG to utilize Jacobian-Free methods. Due to the increased cost associated with implicit solvers, only explicit time integration is used for the G ROM and APG ROM beyond this point. 
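To make the preceding solver details concrete, a single explicit update of the APG ROM without hyper-reduction requires only one extra right-hand-side evaluation per stage relative to the Galerkin ROM. The sketch below reflects our reading of the finite-difference treatment of the extra term (cf. Algorithm~\\ref{alg:alg_ag_hyper} with the hyper-reduction operators removed); explicit Euler is used for brevity, and the right-hand-side function and arrays are synthetic stand-ins.

\\begin{verbatim}
import numpy as np

def apg_rhs(a, V, rhs, tau, eps=1e-5):
    # Galerkin term plus the finite-difference approximation of
    # tau * V^T J (I - V V^T) R evaluated at the current state.
    u = V @ a                                # reconstruct the full state
    R = rhs(u)                               # full-order right-hand side
    R_orth = R - V @ (V.T @ R)               # orthogonal (fine-scale) component
    JR = (rhs(u + eps * R_orth) - R) / eps   # Jacobian action via finite difference
    return V.T @ R + tau * (V.T @ JR)

def apg_step(a, V, rhs, tau, dt):
    return a + dt * apg_rhs(a, V, rhs, tau)  # explicit Euler update

# Tiny synthetic demo on a stable linear system du/dt = A u.
rng = np.random.default_rng(4)
N, K = 200, 10
A = -np.eye(N) + 0.01 * rng.standard_normal((N, N))
rhs = lambda u: A @ u
V, _ = np.linalg.qr(rng.standard_normal((N, K)))
a = V.T @ rng.standard_normal(N)
a = apg_step(a, V, rhs, tau=0.01, dt=1e-3)
print(a[:3])
\\end{verbatim}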
\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.48\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/cl_999case.pdf}\n\\caption{Lift Coefficient as a function of time for Basis \\# 2.}\n\\label{subfig:re100a}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.48\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/mse_vs_t_999case.pdf}\n\\caption{Normalized error as a function of time for Basis \\# 2.}\n\\label{subfig:re100b}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.48\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/converge.pdf}\n\\caption{Integrated normalized error vs trial subspace dimension.}\n\\label{subfig:re100c}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.48\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/rom_pareto.pdf}\n\\caption{Integrated normalized error vs relative CPU time.}\n\\label{subfig:re100d}\n\\end{subfigure}\n\\end{center}\n\\caption{ROM results for flow over cylinder at Re=100. The lift coefficient is defined as $C_L = \\frac{2L}{\\rho U_{\\infty}^2 D}$, with $L$ being the integrated force on the cylinder perpendicular to the free-stream velocity vector.}\n\\label{fig:re100}\n\\end{figure}\n\nNext, we investigate the sensitivity of the different ROMs to the time-step size. Reduced-order models of the Re=$100$ case using Basis \\#2 are solved using time-steps of $\\Delta t = \\big[0.1,0.2,0.5,1]$. Note that the largest time-step considered is $200$ times larger than the FOM time-step, thus reducing the temporal dimensionality of the problem by 200 times. The mean-squared error of each ROM solution is shown in Figure~\\ref{fig:re100_dtvary}. The G and APG ROMs are stable for all time-steps considered. Further, it is seen that varying the time-step has a minimal effect on the accuracy of the G and APG ROMs. In contrast, the accuracy of LSPG deteriorates if the time-step grows too large. This is due to the fact that, as shown in Ref.~\\cite{carlberg_lspg_v_galerkin}, the stabilization added by LSPG depends on the time-step size. Optimal accuracy of the LSPG method requires an intermediate time-step. The ability of the APG and G ROMs to take large time-steps without a significant degradation in accuracy allows for significant computational savings. This advantage is further amplified when large time-steps can be taken with an explicit solver, as is the case here. This will be discussed in more detail in Section~\\ref{sec:cylhyper}.\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/mse_vs_t_dtvary}\n\\caption{Normalized error as a function of time.}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/converge_dtvary.pdf}\n\\caption{Integrated normalized error as a function of the time-step.}\n\\end{subfigure}\n\\end{center}\n\\caption{Results for time-step study of flow over cylinder at Re$=100$.}\n\\label{fig:re100_dtvary}\n\\end{figure}\n\nLastly, we numerically investigate the sensitivity of APG to the parameter $\\tau$ by running simulations for $\\tau=[0.001,0.01,0.1,0.3,0.5,1.]$. All simulations are run at $\\Delta t = 0.5$. The results of the simulations are shown in Figure~\\ref{fig:re100_tau}. It is seen that, for all values of $\\tau$, the APG ROM produces a better solution than the G ROM. 
The lowest error is observed for an intermediate value of $\\tau$, in which case the APG ROM leads to over a $50\\%$ reduction in error from the G ROM. As $\\tau$ approaches zero, the APG ROM solution approaches the Galerkin ROM solution. Convergence plots for LSPG as a function of $\\Delta t$ are additionally shown in Figure~\\ref{subfig:re100b_tau}. It is seen that the optimal time-step in LSPG is similar to the optimal value of $\\tau$ in APG.\n\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.45\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/mse_vs_t_tauvary.pdf}\n\\caption{Normalized error as a function of time}\n\\label{subfig:re100a_tau}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.45\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/converge_tauvary.pdf}\n\\caption{Integrated normalized error as a function of $\\tau$. Results for how the time-step $\\Delta t$ impacts LSPG are included for reference.}\n\\label{subfig:re100b_tau}\n\\end{subfigure}\n\\end{center}\n\\caption{Summary of numerical results investigating the impact of the parameter $\\tau$ on the performance of the Adjoint Petrov-Galerkin method.}\n\\label{fig:re100_tau}\n\\end{figure}\n\n\n\\begin{comment}\n\n\\subsubsection{Prediction at Re$=150$}\nSimulations at a Reynolds number of Re$=150$ are now considered. The Re$=150$ case was not considered in the construction of the POD basis, and thus this case tests the predictive ability of the ROM. The case is initialized using the flow field from the Re$=100$ simulation. The Reynolds number is modified by lowering the viscosity. Figure~\\ref{fig:re150_unsteady} shows the temporal evolution of the lift coefficient as predicted by the FOM, G ROM, APG ROM, and LSPG ROM. The G ROM and LSPG ROMs are seen to behave similarly for this case, both under-predicting the growth in amplitude of the lift coefficient and shedding frequency. The solution generated by the APG ROM provides a better prediction for both the lift coefficient, as well as the full-field mean-squared error. By $t=200$, the instantaneous MSE of the APG ROM is approximately an order of magnitude better than both the G ROM and LSPG ROM. It is interesting to again observe that, at early time, the APG ROM is less accurate than both the G ROM and LSPG ROM. This further supports the theoretical analysis in Section~\\ref{sec:analysis}. \n \\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/re150_cl999case.pdf}\n\\caption{Lift coefficient prediction}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/re150_mse_vs_t_999case.pdf}\n\\caption{Extended time integration of ROM cases}\n\\label{fig:re150_longtime}\n\\end{subfigure}\n\\end{center}\n\\caption{Temporal evolution of cylinder case for Re$=150$.}\n\\label{fig:re150_unsteady}\n\\end{figure}\n\n\\end{comment}\n\n\\subsubsection{Parametric Study of Reynolds Number Dependence}\nNext, the ability of the different ROMs to interpolate between between different Reynolds numbers is studied. Simulations at Reynolds numbers of Re=100,150,200,250, and 300 with Basis $\\#2$ and $\\#3$ are performed. All cases are initialized from the Re=100 simulation. Note that the trial spaces in the ROMs were constructed from statistically steady-state FOM simulations of Re=100,200,300. 
The Reynolds number is modified by changing the viscosity.\n\nFigure~\\ref{fig:ROM_summary} summarizes the amplitude of the lift coefficient signal as well as the shedding frequency for the various methods. The values reported in Figure~\\ref{fig:ROM_summary} are computed from the last 150 time units of the simulations\\footnote{Not all G ROMs reached a statistically steady state over the time window considered.}. The Galerkin ROM is seen to do poorly in predicting the lift coefficient amplitude for both Basis $\\#2$ and Basis $\\#3$. Unlike in the Re=100 case, enhancing the basis dimension does not improve the performance of the ROMs. Both the LSPG and APG ROMs are seen to offer much improved predictions over the Galerkin ROM. This result is promising, as the ultimate goal of reduced-order modeling is to provide predictions in new regimes.\n\nThe results presented in this example highlight the shortcomings of the Galerkin ROM. To obtain results that are even qualitatively correct for the Re=\\{150,200,250,300\\} cases, either APG or LSPG must be used. As reported in Figure~\\ref{subfig:re100d}, explicit APG is over an order of magnitude faster than LSPG, and implicit APG with a JFNK solver is anywhere from 2x to 5x faster than LSPG. Therefore, APG is the best-performing method for this example. \n\n\n\n \\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/re_vs_amp.pdf}\n\\caption{Prediction for lift coefficient amplitudes}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/re_vs_st.pdf}\n\\caption{Prediction for shedding frequency}\n\\end{subfigure}\n\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/cl_re150.pdf}\n\\caption{Re=$150$}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/cl_re200.pdf}\n\\caption{Re=$200$}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/cl_re250.pdf}\n\\caption{Re=$250$}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/cl_re300.pdf}\n\\caption{Re=$300$}\n\\end{subfigure}\n\n\n\\end{center}\n\\caption{Summary of ROM simulations.}\n\\label{fig:ROM_summary}\n\\end{figure}\n\n\n\n\\subsubsection{Flow Over Cylinder at Re=$100$ with Hyper-Reduction}\\label{sec:cylhyper}\nThe last example considered is again flow over a cylinder, but this time the reduced-order models are augmented with hyper-reduction. The purpose of this example is to examine the performance of the different reduced-order models when fully equipped with state-of-the-art reduction techniques. For hyper-reduction, an additional snapshot matrix of the right-hand side is generated by following steps one through five provided in Section~\\ref{sec:cyl_romsteps}. Hyper-reduction for the G ROM and the APG ROM is achieved through the Gappy POD method~\\cite{everson_sirovich_gappy}.
Hyper-reduction for LSPG is achieved through collocation using the same sampling points.\\footnote{It is noted that collocated LSPG out-performed the GNAT method for this example, and thus GNAT is not considered.} When augmented with hyper-reduction, the trial basis dimension ($K$), right-hand side basis dimension ($r$), and number of sample points $(N_s)$ can impact the performance of the ROMs. Table~\\ref{tab:rom_basis_details} summarizes the various permutations of $K$, $r$, and $N_s$ considered in this example. The sample points are selected through a QR factorization of the right-hand side snapshot matrix~\\cite{qdeim_drmac}. These sample points are then augmented such that they contain every conserved variable and quadrature point at the selected cells. The sample mesh corresponding to Basis numbers 4,5, and 6 in Table~\\ref{tab:rom_basis_details} is shown in Figure~\\ref{fig:re100_sample}. Details on hyper-reduction and its implementation in our discontinuous Galerkin code are provided in Appendix~\\ref{appendix:hyper}. \n\n\\begin{table}[]\n\\begin{tabular}{c c c c c}\n\\hline\nBasis \\# & Trial Basis Dimension ($K$) & RHS Basis Dimension ($r$) & Sample Points ($N_S$) & Maximum Stable $\\Delta t$\\\\\n\\hline\n1 & $ 11$ & $103$ & $4230$ & 4.0\\\\\n2 & $ 42$ & $103$ & $4230$ & 2.0\\\\\n3 & $ 86$ & $103$ & $4230$ & 1.0\\\\\n4 & $ 11$ & $268$ & $8460$ & 4.0\\\\\n5 & $ 42$ & $268$ & $8460$ & 2.0\\\\\n6 & $ 86$ & $268$ & $8460$ & 1.0\\\\\n\\hline\n\\end{tabular}\n\\caption{Summary of the various reduced-order models evaluated on the flow over cylinder problem. The maximum stable $\\Delta t$ for each basis is reported for the SSP-RK3 explicit time-marching scheme and was empirically determined.}\n\\label{tab:rom_basis_details}\n\\end{table}\n\n\n\nFlow at Re=$100$ is considered. All simulations are performed at the maximum stable time-step for a given basis dimension, as summarized in Table~\\ref{tab:rom_basis_details}. Figure~\\ref{fig:re100_qdeim_pareto} shows the integrated error as a function of relative wall time for the various ROM techniques and basis numbers. All methods show significant computational speedup, with the G and APG ROMs producing wall-times up to 5000 times faster than the FOM while retaining a reasonable MSE. It is noted that the majority of this speed up is attributed to the increase in time-step size.\nFor a given level of accuracy, LSPG is significantly more expensive than the G and APG ROMs. The reason for this expense is three-fold. First, LSPG is inherently implicit. For a given time-step size, an implicit step is more expensive than an explicit step. Second, LSPG requires an intermediate time-step for optimal accuracy. In this example, this intermediate time-step is small enough that the computational gains that could be obtained with an implicit method are negated. The third reason is that, for each Gauss-Newton iteration, LSPG requires the computation of the action of the Jacobian on the trial space basis, $\\tilde{\\mathbf{V}}$. This expense can become significant for large basis dimensions as it requires the computation of a dense Jacobian. 
\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.48\\textwidth}\n\\includegraphics[trim={4cm 0cm 4cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/samplemesh1.png}\n\\caption{Full sample mesh}\n\\label{subfig:re100_sampling_full}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.48\\textwidth}\n\\includegraphics[trim={0cm 0cm 4cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/samplemesh_wake2.png}\n\\caption{Close-up of the wake}\n\\label{subfig:re100_sampling_zoom}\n\\end{subfigure}\n\\end{center}\n\\caption{Mesh used for hyper-reduction. Cells colored in red are the sampled cells. Note that all conserved variables and quadrature points are computed within a sampled cell.}\n\\label{fig:re100_sample}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/qdeim_pareto.pdf}\n\\caption{Wall time versus normalized error for hyper-reduced ROMs.}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/qdeim_mse_999case.pdf}\n\\caption{Normalized error as a function of time for Basis \\#5.}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/qdeim_cl_999case.pdf}\n\\caption{Lift coefficient as a function of time for Basis \\#5.}\n\\end{subfigure}\n\\begin{subfigure}[t]{0.49\\textwidth}\n\\includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1.\\linewidth]{figs_cylnew\/qdeim_cl_9999case.pdf}\n\\caption{Lift coefficient as a function of time for Basis \\#6.}\n\\end{subfigure}\n\n\\end{center}\n\\caption{Results for hyper-reduced ROMs at Re=$100$.}\n\\label{fig:re100_qdeim_pareto}\n\\end{figure}\n\n\n\n\\subsubsection{Discussion}\\label{sec:theorem_discussion}\nTheorems~\\ref{theorem:apg_error} and~\\ref{theorem:errorbound_symmetric} contain three results that are worth discussing. First, as discussed in Corollary~\\ref{corollary:resid_bound}, Theorem~\\ref{theorem:apg_error} shows that, in the limit $\\tau \\rightarrow 0^+$, the upper bound provided in Theorem~\\ref{theorem:residualbounds} on the error introduced at time $t$ in the APG ROM due to the fine scales is less than that introduced in the Galerkin ROM (per unit $\\tau$). This is an appealing result, as APG is derived as a subgrid-scale model. Two remarks are worth making regarding this result. First, while the result was demonstrated in the limit $\\tau \\rightarrow 0^+$, it will hold so long as the norm of the truncation error in the quadrature approximation is less than the norm of the integral it is approximating; i.e., the approximation is doing a better job than neglecting the integral entirely. Second, the result derived in Theorem~\\ref{theorem:apg_error} does not \\textit{directly} translate to showing that\n\\begin{equation}\\label{eq:resid_discussion}\n\\norm{\\tilde{\\mathbf{V}}^T \\mathbb{P}_A \\mathbf{r}_F(\\tilde{\\mathbf{u}}_F(t)) } \\le \\norm{\\tilde{\\mathbf{V}}^T \\mathbb{P}_G \\mathbf{r}_F(\\tilde{\\mathbf{u}}_F(t)) }.\n\\end{equation}\nThis is a consequence of Theorem~\\ref{theorem:residualbounds}, in which the integral that defines the fine-scale solution was split into two intervals. 
The APG ROM attempts to approximate the first integral, while the Galerkin ROM ignores both terms. Theorem~\\ref{theorem:apg_error} showed that, in the limit $\\tau \\rightarrow 0^+$, the APG approximation to the first integral is better than in the case of Galerkin (i.e., the APG approximation is better than no approximation). The only case in which the result provided in Theorem~\\ref{theorem:apg_error} will \\textit{not} translate to APG providing a better approximation to the \\textit{entire} integral (and thus proving Eq.~\\ref{eq:resid_discussion}) is when integration over the second interval ``cancels out'' the integration over the first interval. To make this idea concrete, consider the integral\n$$\\int_0^{2 \\pi} \\sin(x) dx = \\int_0^{\\pi} \\sin(x) dx + \\int_{\\pi}^{2 \\pi} \\sin(x) dx .$$\nClearly, $\\int_0^{2 \\pi} \\sin(x) dx = 0$. If the entire integral were approximated using just the interval $0$ to $\\pi$ (which is analogous to APG), then one would end up with the approximation $\\int_0^{2 \\pi} \\sin(x) dx \\approx \\int_0^{\\pi} \\sin(x) dx = 2$. Alternatively, if one were to ignore the integral entirely (which is analogous to Galerkin) and make the approximation $\\int_0^{2 \\pi} \\sin(x) dx \\approx 0$ (which in this example is exact), a better approximation would be obtained.\n
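As a simple numerical check of this worked example (for illustration only, using standard trapezoidal quadrature), the half-interval approximation indeed yields approximately 2 while the full integral is approximately 0:\n\\begin{verbatim}\nimport numpy as np\n\n# `APG-like' choice: approximate the full integral by the first half.\nx_half = np.linspace(0.0, np.pi, 10001)\nprint(np.trapz(np.sin(x_half), x_half))      # ~2.0\n\n# Exact value of the full integral, which the `Galerkin-like' choice\n# of neglecting the integral happens to reproduce here.\nx_full = np.linspace(0.0, 2.0 * np.pi, 20001)\nprint(np.trapz(np.sin(x_full), x_full))      # ~0.0\n\\end{verbatim}\n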
\nThe next interesting result is presented in Theorem~\\ref{theorem:errorbound_symmetric}, where it is shown that, for a self-adjoint system with negative eigenvalues, the eigenvalues associated with the APG ROM error equation are \\textit{greater} than those of the Galerkin ROM. This implies that APG is \\textit{less} dissipative than Galerkin, meaning that errors may be slower to decay in time. Thus, while Theorem~\\ref{theorem:apg_error} shows that the \\textit{a priori} contributions to the error due to the closure problem may be smaller in the APG ROM than in the Galerkin ROM, the errors that \\textit{are} incurred may be slower to decay. Finally, Corollary~\\ref{corollary:errorbound_symmetric} shows that, for self-adjoint systems with negative eigenvalues, the bounds on the parameter $\\tau$ such that all eigenvalues associated with the evolution of the error in APG remain negative depend on the spectral content of the Jacobian of $\\mathbf{A}$. This observation has been made heuristically in Ref.~\\cite{parishMZ1}. Although the upper bound on $\\tau$ in Eq.~\\ref{eq:tau_bound} is very conservative due to the repeated use of inequalities, it provides insight into the selection and behavior of $\\tau$.\n\n\n\n\\section{Significance}\n\n\nThis work develops a new reduced-order modeling technique for discretely projected ROMs. We show that the method outperforms the standard Galerkin ROM and, in some cases, the popular least-squares Petrov-Galerkin approach. This work is novel to both the reduced-order modeling community and the Mori-Zwanzig community. For the reduced-order modeling community, the novel aspects and contributions of this work are:\n\\begin{enumerate}\n\\item A new reduced-order model derived from the Mori-Zwanzig formalism and the variational multiscale method that is developed specifically for discretely projected ROMs. The method leads to a coarse-scale ROM equation that is driven by the coarse-scale residual. The method can be evolved in time with explicit integrators (in contrast to the popular least-squares Petrov-Galerkin approach), potentially lowering the cost of the ROM. \n\n\\item An analysis of the present approach as a Petrov-Galerkin method. This setting exposes similarities between the present approach and the least-squares Petrov-Galerkin (LSPG) approach. We find that the proposed method displays similarities to the well-known adjoint stabilization technique, while LSPG displays similarities with the Galerkin least-squares technique. \n\n\\item A comprehensive analysis of the algorithmic implementation of the proposed method, as well as an estimation of the computational cost (in FLOPs) for the proposed method and the Galerkin and least-squares Petrov-Galerkin approaches. Implicit and explicit time-marching schemes are considered.\n\n\\item A detailed \\textit{a priori} error analysis, where we show circumstances in which the presented technique is expected to be more accurate than the Galerkin method.\n\n\\item Numerical evidence on ROMs of compressible flow problems demonstrating that the proposed method is more accurate and stable than a Galerkin ROM. Improvements over the LSPG ROM are observed in most cases. \n\n\\item A thorough examination of errors versus CPU time for the presented method, the Galerkin method, and the LSPG method. We show that the presented method leads to lower errors for a given CPU time than the LSPG method. Improvements over Galerkin are observed in some cases. These results are presented for cases both with and without hyper-reduction.\n\n\\end{enumerate}\nThis work is also novel to the Mori-Zwanzig research community. The contributions include:\n\\begin{enumerate}\n\\item This is the first application of an analytical MZ method to a POD ROM. All previous work with analytic MZ models has focused on analytical bases.\n\\item By formulating MZ in the context of the variational multiscale method, we express the proposed model in a general ``residual-based'' context that allows us to apply it to complex systems. The applications presented in this work (POD-ROMs of the Sod shocktube and flow over a cylinder) mark a significant step up in complexity compared to the numerical problems previously explored in the MZ community. In the authors' view, this is one of the first MZ methods that can, out of the box, be directly applied to complex systems of engineering interest.\n\\item We provide numerical evidence of a relationship between the time scale in the proposed model and the spectral radius of the right-hand-side Jacobian. This relationship applies to the selection of the optimal time-step in LSPG as well.\n\\end{enumerate}\n\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}