diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkszl" "b/data_all_eng_slimpj/shuffled/split2/finalzzkszl" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkszl" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nIn nature, diversity refers to the fact that many species coexist (among many other definitions). In society, it sometimes refers to the idea of gathering people coming from different cultures and background.\nIn all these domains, diversity (a fact) is considered essential for the emergence of resilience, stability or novelty (a property) \\cite{mccann00}. \nIn software, we take the problem upside-down. We want properties, e.g. resilience, for which diversity may be the key. The main research question is thus formulated as: how to create, maintain, exploit -- i.e. engineer -- diversity in software? \n\nFor instance, early experiments with software diversity in the mid 1970's (e.g. recovery blocks \\cite{randell75}) advocate design and implementation diversity as a means for tolerating faults. Indeed, similarly to natural systems, software systems including diverse functions and elements are able to cope with many kinds of unanticipatable problems and failures.\nCurrently, the concept of software diversity appears as a rich and polymorphic notion, with multiple applications. Yet, the exploration of this concept is very fragmented over different communities, who do not necessarily know each other. \n\nWe aim at putting together the many pieces of the puzzle of software diversity. \nPrevious surveys on classical work about diversity for fault-tolerance \\cite{deswarte98} or for security \\cite{just04} provide important milestones in this direction. Yet, their scope is very focused on a single type of software diversity and they do not include the most recent works in the area.\nOur paper contributes to the field of software diversity, as the first paper that adopts an inclusive vision of the area, with an emphasis on the most recent advances in the field. \n\n\\paragraph{Scope}\nThis survey includes classical work about design and data diversity for fault tolerance, as well as the cybersecurity literature that investigates randomization at different system levels. \nBeyond that, we broaden this standard scope of diversity, to include work about the study and exploitation of natural diversity and about the management of diverse software products in software architecture. \nSince the main barriers between communities are words, we had to cross terminological chasms several times: diversity, randomization, poly- and meta-morphism, to only cite a few that are intrinsically related.\nThis inclusive definition allows us to draw a more complete landscape of software diversity than previous surveys \\cite{Knight2011,Schaefer2012,just04,deswarte98}, which we discuss in section \\ref{sec:other-surveys}.\nFor the first time, this survey gathers under the same umbrella works that are often considered very different, while they share a similar underlying concept: software diversity. \n\n\\paragraph{Novelty}\nThe field of software diversity has been very active in the 70's and 80's for fault-tolerance purposes. There has been a revival in the late 90's, early 2000's, this time with automatic diversity for security. Both periods have been covered by previous surveys \\cite{deswarte98,just04}. \nThe last decade's research on software diversity has also been extremely rich and dynamic. 
Yet, this activity is only partially covered in recent surveys by Schaefer et al. \cite{Schaefer2012}, Knight \cite{Knight2011} and Larsen et al. \cite{Larsen14}, which have specific focuses.
Our survey includes the most recent works in all areas of software diversity, with an emphasis on work from 2000 to the present.

\paragraph{Audience}
The targeted audience of this paper is researchers and practitioners in one of the surveyed fields, who may miss the big picture of software diversity.
Our intention is to make them aware of related approaches that have so far remained unknown to them because of community boundaries.
We believe that this shared awareness and understanding, across different technical backgrounds, will be the key enabling factor for the development of integrated and multi-tier software diversification techniques \cite{allier14}. This will contribute to the construction of future resilient and secure software systems.


\paragraph{Structure}
Given the breadth of this work's scope, there is no single decomposition criterion to structure our paper. Software diversity has multiple facets: the goal of diversity, the diversification techniques, the scale of diversity, the application domain, when it is applied~\ldots
This diversity of software diversity is reflected in Table \ref{tab:diversities}.
As shown in Figure \ref{fig:global-map}, we organize this survey mainly along two oppositions.
First, we differentiate engineering work that aims at exploiting diversity (Sections \ref{sec:managed} and \ref{sec:automated-diversity}) from papers that are more observational in nature, where software diversity is a study subject (Section \ref{sec:natural-study}).
Then, we split the engineering papers into those describing \emph{managed diversity} approaches, which aim at manually controlling software diversity (Section \ref{sec:managed}), and those describing \emph{automated diversity} techniques (Section \ref{sec:automated-diversity}).
This structuring supports our main goal of bridging different research communities and enables us to discuss, in the same section, papers coming from very different fields.
The paper can be read linearly. However, each section is meant to be self-contained and there is a diversity of reading pathways.
We invite the reader to use Figure \ref{fig:global-map} for choosing her own.

\begin{table}
\tbl{The diversity of software diversity (non-exhaustive overview).
Over time and across research communities, many kinds of software diversity have been proposed or studied.}{
\begin{tabularx}{\textwidth}{p{4cm}|X}
Software diversity for \ldots & Fault tolerance \cite{randell75,avizienis84}, security \cite{Forrest:1997,cox06}, reusability \cite{pohl2005software}, software testing \cite{chen2010adaptive}, performance \cite{Sidiroglou-Douskos2011}, bypassing antivirus
software \cite{BorelloFM10} \ldots\\
\hline
Software diversity at the scale of \ldots & Networks \cite{donnell04}, operating systems \cite{koopman99}, components \cite{gashi04}, data structures \cite{ammann88}, statements \cite{Schulte2013mutrob}, \ldots\\
\hline
Software diversity as \ldots & a natural phenomenon \cite{mendez13}, a goal \cite{cohen93}, a means \cite{collberg12}, a research object \cite{knight86} \ldots\\
\hline

Software diversity in \ldots & market products \cite{han09}, operating systems \cite{koopman99}, developer expertise \cite{posnett13}, \ldots\\
\hline

Software diversity when \ldots & the specifications are written \cite{Yoo2002111}, the code is developed \cite{avizienis84}, the application is deployed \cite{franz10}, executed \cite{ammann88} \ldots\\


\end{tabularx}}
\label{tab:diversities}
\end{table}





\begin{figure}[h]
	\begin{center}
		\includegraphics[width=\columnwidth]{diversity-of-diversity-3.pdf}
		\caption{The diverse dimensions of software diversity}
		\label{fig:global-map}
	\end{center}
\end{figure}


\section{Survey Process}

To prepare this survey, we first analyzed the existing surveys on the topic (see Section \ref{sec:other-surveys}). None of them covers the material we cover. Second, we set up and conducted a systematic process, described in Section \ref{sec:process}.

\subsection{Other Surveys on Software Diversity}
\label{sec:other-surveys}

The oldest survey we found is by Deswarte et al. in 1998 \cite{deswarte98}.
It clearly shows that software diversity has different scales:
from the level of human users or operators to the level of hardware and execution.
Our survey follows this very line of exploring the diversity of diversities.
In addition to the classical software diversity of the 1970s to 1990s, our survey discusses the rich work that has been done on software diversity during the last fifteen years: instruction-set randomization, adaptive random testing, and many others.

In 2001, Littlewood et al. \cite{littlewood01} focus on design diversity (N-version programming). They review in particular their own work on the probabilistic reasoning that can be applied to N-version systems.
To this extent, as the abstract puts it, the survey is more a tutorial on design diversity than a broad perspective on software diversity.

The goal of Just et al.'s review paper \cite{just04} is to list the techniques of synthetic diversity that can improve software survivability.
``Synthetic diversity'' is equivalent, in our view, to ``artificial automated diversity''.
In our paper, we consider goals other than security (such as quality of service, see Section \ref{sec:unsound}), and we consider other diversity engineering techniques (e.g., managed software diversity, see Section \ref{sec:managed}).

John Knight published a survey in 2011 \cite{Knight2011}.
He discusses four kinds of diversity:
classical design diversity (N-version and recovery block),
data diversity (a research direction he both invented and led),
artificial diversity (in the sense of instruction-set randomization for security and the like),
and N-variant systems (compared to N-version programming, N-variant systems use artificial, automated diversity).
In addition, he introduces the concept of ``temporal diversity'' as a diversity over time, for instance by regularly changing the key for instruction-set randomization.
We agree on all points that Knight considers as software diversity. However, we have a broader definition of software diversity: we discuss more kinds of managed software diversity (such as software product lines, see Section \ref{sec:spl}),
more kinds of artificial diversity (such as runtime diversity, see Section \ref{sec:dyn-randomization}),
and papers for which diversity is the main study subject (see Section \ref{sec:study-subject}).

In 2012, Schaefer and colleagues co-authored ``Software diversity: state of the art and perspectives'' \cite{Schaefer2012}.
Despite what the title suggests, this paper surveys only one kind of software diversity: software product lines.
As we will discuss later, the techniques of software product lines enable one to manage a set of related features to build diverse products in a specific domain. We refer to this kind of diversity as ``managed software diversity''.
In our paper, not only do we describe other kinds of managed software diversity, such as design diversity, but we also discuss artificial and natural diversity.

Larsen et al. \cite{Larsen14} recently authored a survey about automated software diversity for security and privacy. They discuss the different threat models that can be addressed via diversification. Then, they classify the surveyed approaches according to the nature of the object to be diversified and the temporal dimension of the diversification process. They conclude with an insightful discussion about compiler-based vs. binary-rewriting diversity synthesis.






\subsection{Systematic Process}
\label{sec:process}
We followed a systematic process to select the papers discussed in this paper. We started with 30 papers that we knew and that are written by the most prominent authors: Avizienis, Randell, Forrest, Cohen, Knight and Leveson, Schaefer, etc. They appear in top publications of these fields (ACM TISSEC, IEEE TSE, IEEE S\&P, CCS, ICSE, PLDI, DSN, etc.) and are generally considered seminal work in each area. Then, we increased this set through a systematic keyword-based search using Google Scholar, IEEE Xplore and the ACM DL. This set went through a second expansion phase when we followed the citation graph of the selected papers. This provided us with a set of more than 300 papers.
Then, we filtered out papers. First, we discarded the redundant papers that discuss a similar problem or solution (e.g., we selected only a few papers about product lines or about multi-version execution). Second, we filtered out the papers that had no impact on the literature (papers that appeared in obscure venues or that had fewer than 5 citations after 20 years). Since our survey focuses on recent developments in the field of software diversity, we took special care to keep the most significant recent works (up to papers that appeared in 2014).




\section{Managed Software Diversity}
\label{sec:managed}

``Managed software diversity'' relates to technical approaches aiming at encouraging or controlling software diversity.
This kind of diversity is principally embodied in the work on multi-version software (early structuring of diversity), open software architecture (encouraging diversity) and software product lines (controlling diversity).


\subsection{Design Diversity (N-Version)}
\label{sec:design}

Since the late 1970s, many different authors have devised engineering methods for software diversification to cope with accidental and deliberate faults. Here, an accidental fault is any form of bug, \textit{i.e.}, an internal problem unintentionally introduced by a developer or by the execution environment. N-version programming \cite{avizienis85} and recovery blocks \cite{randell75} were the two initial proposals to introduce diversity in computation to limit the impact of bugs. Those techniques are traditionally called ``design diversity'' techniques.

N-version design is defined as ``the independent generation of $N\geq2$ functionally
equivalent programs from the same initial specification'' \cite{avizienis84,avizienis85}. This consists in providing N development teams with the same requirements.
Those teams then develop N independent versions, using different technologies, processes, verification techniques, etc. The N versions are then run in parallel and a voting mechanism is executed on the N results.
The increased diversity in design, programming languages and humans is meant to reduce the number of faults through the emergence of the best behavior, which results from the vote on the output values.

Since the initial definition of the N-version paradigm, it has been refined along different dimensions: the process, the product and the environment necessary for N-version development \cite{avizienis95}. For example, Kelly \cite{kelly91} distinguishes random diversity (letting independent teams develop their versions) from enforced diversity, in which there is an explicit effort to design diverse algorithms or data structures. More recently, Avizienis proposed to adapt the concept to software survivability \cite{avizienis00}.

Recovery blocks were developed at the same time as N-version design and propose a way of structuring the code, using diverse alternative software solutions, for fault tolerance \cite{randell75}. The idea is to have recovery blocks in the program, \textit{i.e.}, blocks equipped with error detection mechanisms and one or more spares that are executed in case of errors. These spares are diverse variant implementations of the function.

In the latest work about N-version development, both N-version design and recovery blocks were included in the same global framework \cite{avizienis95}. This framework has since been used in multiple domains, including the design of multiple versions of firewalls \cite{liu2008}.
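To make the voting mechanism described above concrete, here is a minimal C sketch of N-version majority voting. It is our own illustration, under the assumption of three independently developed implementations of the same trivial specification (squaring a non-negative integer); the variants are hypothetical placeholders, not code from the cited works.

\begin{verbatim}
#include <stdio.h>

/* Three "independently developed" versions of the same specification.
   In a real N-version system, these would be written by separate teams
   using different technologies and processes. */
static int square_v1(int x) { return x * x; }            /* team A */
static int square_v2(int x) {                            /* team B */
    int s = 0;
    for (int i = 0; i < x; i++) s += x;  /* iterative variant, x >= 0 */
    return s;
}
static int square_v3(int x) { return x * x; }            /* team C */

/* Majority vote on the three results; -1 signals total disagreement,
   which a real system would report as a detected failure. */
static int vote(int a, int b, int c) {
    if (a == b || a == c) return a;
    if (b == c) return b;
    return -1;
}

int main(void) {
    int x = 7;
    printf("majority result: %d", vote(square_v1(x), square_v2(x), square_v3(x)));
    return 0;
}
\end{verbatim}

A real N-version system would of course vote on richer outputs than a single integer, but the masking principle is the same: a fault in one version is outvoted by the other two.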
While the essential conceptual elements of design diversity have remained stable over time, most subsequent works have focused on experimenting with and quantifying the effects of this approach on fault tolerance. The work related to the analysis of N-version programming is synthesized in Section \ref{sec:study-n-version}.



\subsection{Managed Natural Software Diversity}
\label{sec:managed-natural}
We call ``natural diversity'' the existence of different software solutions that provide similar functionalities and that spontaneously emerge from software development processes.
There exist several forms of natural software diversity. For example, programs that can be customized through several parameters embed a natural mechanism for diversification (two instances of the same program, tuned with different parameters, can have different behaviors in terms of performance). The software market and competition are also strong vectors that drive the natural emergence of software diversity. For example, the gigantic business opportunities offered by the world wide web have driven the emergence of many competing web browsers. Web browsers are diverse in their implementation, in their performance, and in some of their plugins, yet they are functionally very similar and can be substituted for one another in most cases. Other examples of such market software diversity include operating systems, firewalls, database management systems, virtual machines, routers, middleware, application servers, etc. In this section we present a set of works that exploit this natural diversity for different purposes. We will come back to natural diversity later in Section \ref{sec:natural-study}, to discuss authors who study natural diversity with no engineering goals at all.

Hiltunen et al. \cite{Hiltunen00} propose the Cactus mechanism for survivability, i.e., a mechanism that monitors and controls a running application in order to tolerate unpredictable events such as bugs or attacks. The Cactus approach relies on fine-grained customization of the different components in the application, as well as runtime adaptation, to achieve survivability. The authors discuss how they can switch between different security and fault-tolerance solutions through customization; they also discuss how this natural way of changing a system supports the emergence of natural diversity and thus increases resilience.

Caballero et al. \cite{caballero08} exploit the existing diversity in router technology to design a network topology that has a diverse routing infrastructure. Their work introduces a novel metric to quantify the robustness of a network. Then, they use it to compare the robustness of different, more or less diverse, routing infrastructures. They explore the impact of different levels of diversity by converting the problem into a graph coloring problem. They show that a small number of diverse router technologies, combined with a well-designed topology, actually increases the global robustness of the infrastructure.

Totel et al. \cite{totel06} propose to design an intrusion detection mechanism by design diversity, leveraging the natural diversity of off-the-shelf components (COTS). They exploit the fact that COTS for database management and web servers have very few common mode failures \cite{wang03,gashi04} and are thus very good candidates for N-version design based on natural diversity.
The authors deploy an architecture with three diverse servers running on three different operating systems and feed it with the requests sent to their campus web page over the previous month (800,000 requests, of which around 1\% are potentially harmful). The results show that the COTS-based IDS raises only a small number of false positives. Along the same line, Garcia et al. \cite{garcia2014analysis} conducted a study on the impact of operating system diversity with respect to the security bugs of the NIST National Vulnerability Database (NVD). Their results show that diversity indeed contributes to building intrusion-tolerant systems.

Oberheide et al. \cite{oberheide08} exploit the diversity of antivirus and malware detection systems to propose what is called ``N-version protection''. It is based on multiple and diverse detection engines running in parallel. Their prototype system intercepts suspicious files on a host machine and sends them to the cloud to check for viruses and malware against diverse antivirus systems. They evaluate their system over 7,220 malware samples and show that it is able to detect 98\% of the malware. It provides better results than a single antivirus in 35\% of the cases. The idea has been further explored by Bishop et al. \cite{bishop2011diversity}, who explored the deep characteristics of the dataset of known malware to reduce global vulnerability.

O'Donnell and Sethu \cite{donnell04} leverage the diversity of software packages in operating systems and investigate several algorithms to increase the global diversity in a network of machines. They model the diversification of distributed machines as a graph coloring problem and compare different algorithms according to their ability to set up a network that is tolerant to attacks. The experiments are based on a simulation, which uses the topology from email traffic at the authors' institution. They show that the introduction of diversity at multiple levels provides the best defense.

Carzaniga et al. \cite{carzaniga10} find multiple different sequences of method calls in JavaScript code that happen to have the same behavior. They harness this redundancy to set up a runtime recovery mechanism for web applications.

Gorbenko et al. \cite{gorbenko11} propose an intrusion avoidance architecture based on multi-level software diversity and dynamic software reconfiguration in IaaS cloud layers. The approach leverages the natural diversity of off-the-shelf components that are found in the cloud (operating system, web server, database management system and application server), in combination with dynamic reconfiguration strategies. The authors illustrate the approach with an experiment over several weeks, during which they switch between 4 diverse operating systems that have different open vulnerabilities. They discuss how this mechanism reduces exposure to vulnerabilities.


\subsection{Managed Functional Diversity}
In software, it is known that many functions are the same yet different.
For instance, passing a message to a distant machine or writing to a local file is conceptually the same: writing data to a location.
However, the different implementations (say, for network or for file input/output) of this abstract function are radically different.
One responsibility of software abstractions is to capture this conceptual identity and to abstract over the diversity of implementation details.
For instance, Unix is well known because its concept of file captures all input/output operations, whether on the network, on a physical file on disk or in the memory of a kernel module.
We refer to this facet of abstraction as managing functional diversity.

Many software abstractions have the clear goal of managing functional diversity.
In the following, we review classical object-oriented software, software product lines and plugin-based architectures.

\subsubsection{Class Diversity}
The object-oriented software paradigm is a rich paradigm with implications on understandability, reuse, etc.
There is one point in this paradigm that directly relates to managing diversity: polymorphism.

Polymorphism is the mechanism that enables code to call other pieces of code in a non-predefined manner. The late binding between functions enables an object to call a diverse set of functions and even to call code that will be written in the future. To this extent, polymorphism is the key mechanism for managing functional diversity (as embodied in classes).
In other words, polymorphism (with abstract methods, interfaces or other fancy object-oriented constructs) supports the construction of a program architecture that is ready for handling diversity.

As Bertrand Meyer~\cite{meyer1988object} puts it:
\begin{quote}
\emph{``We are at the heart of the object-oriented method's contribution to reusability: offering not just frozen components (such as found in subroutine libraries), but flexible solutions that provide the basic schemes and can be adapted to suit the needs of many diverse applications.''}
\end{quote}


\subsubsection{Software product lines}
\label{sec:spl}

The techniques around software product lines can be considered as means of controlling a diversity of software solutions capable of handling a diversity of requirements (user requirements or environmental constraints) \cite{pohl2005software,clements02}.
Software product line engineering is about the development of ``\emph{a diversity of software products and software-intensive systems at lower costs, in shorter time, and with higher quality}'' \cite{pohl2005software}. This consists in building an explicit variability model, which captures all commonalities and variation points in requirements and software solutions. In other words, the variability model is an explicit definition of the space of diverse solutions that can be engineered in a particular domain. This model is usually expressed as a form of feature model \cite{kang90}.

In the context of software product lines, the main challenge for software diversity management consists in providing systematic ways to reuse existing parts of software systems in order to derive diverse solutions.

We synthesize the main works in software product lines; for an exhaustive survey, we refer the reader to Schaefer et al.'s ``Software diversity: state of the art and perspectives'' \cite{Schaefer2012}. We start by looking at solutions that handle diversity in design, then we summarize solutions for diversity in implementation.

Software product lines mainly offer support for design diversity through architectural solutions \cite{clements02}.
An essential challenge is to handle both the logical variability (the set of features that architects manipulate) and the variability of concrete assets (the diversity of software pieces that can actually be composed to implement a particular product). Initial solutions are based on annotations to relate both views \cite{atkinson02}. Hendrickson et al. \cite{hendrickson07} propose a product line architecture modeling approach that unites the two, using change sets to cluster related architectural differences. Several approaches are founded on a compositional approach to derive products from architectural models. Ziadi et al. \cite{ziadi04} propose sound composition operations for UML 2.0 scenarios in order to automatically synthesize diverse statecharts inside a given product line, while Morin et al. \cite{morin08} compose software components to derive software configurations at runtime. Other approaches rely on an orthogonal variability model associated with model transformations for product derivation, as is the case for the Common Variability Language \cite{haugen08} or the Orthogonal Variability Model \cite{pohl2005software}.
At the boundary between models and implementation, it is possible to capture the variants of a program with explicit design patterns, as suggested by J\'ez\'equel \cite{jezequel1998}.
At the source code level, there exist several mechanisms to manage a set of variants for a given program: delta-oriented programming \cite{schaefer2010} instantiates the concept of delta modeling \cite{clarke2011} to specify a specific set of deltas for a program, as well as transformations that can systematically inject a set of selected deltas into a program to derive a variant; Figueiredo and colleagues have reported on the usage of aspect-oriented programming to handle variants in a product line and discuss the positive and negative effects on design stability \cite{figueiredo2008}; preprocessing was one of the first language technologies used to handle program variants and has been extensively analyzed, for example in the recent work by Liebig et al. \cite{liebig2010}.

\subsubsection{Diversity through Plugin- and Component-based Software Architecture}

Plugin-based software architectures offer means to design open software systems. Plugins are software units that encapsulate a given functionality as well as some information about their dependencies. As far as we know, Wijnstra \cite{wijnstra00} was one of the first authors to assess the suitability of plugins to handle the diversity of configurations and usages of a complex software system. He proposed to use plugins, together with a component framework, to design an extensible system for medical imaging. In this context, he needed a core set of functionalities to deploy a diversity of products that fit different requirements or different environments.

More recently, very successful software projects such as WordPress, Firefox or Eclipse have adopted plugin-based architectures. This allows them to be open, thus leveraging the efforts of large open source communities, while keeping a core set of functionalities across all versions. Most importantly, this architecture supports a true explosion of functional software diversity. For example, there are 25,000 plugins available for WordPress, which can be combined by users in billions of functionally diverse configurations, each of them fitting a specific purpose or need.
This was somehow predicted by Ommering \cite{ommering02}, who used a plugin-based architecture in which connections between plugins handle design-time or run-time diversity.

\subsubsection{Discussion}
The main benefit of those software construction paradigms with respect to diversity is reusability: a large range of diverse products can be made with a smaller number of software ``bricks''. This is our motivation for considering software construction and design paradigms in our survey.

However, the overall effect of those paradigms is to reduce software design diversity for a given set of product functions. Indeed, those reuse-oriented paradigms create a tension between reusability and monoculture \cite{allier14}. Both relate to diversity (the second one in a dual manner). In practice, there is an engineering tradeoff between the increase of diversity due to the infinite number of possible combinations and the decrease of diversity due to massive reuse.


\subsection{Summary}
This section has focused on three areas of software engineering that \textit{manage} software diversity. The first was about multi-version design, an approach to fault tolerance that aims at managing the manual development of diverse program versions. The second part was about managing and exploiting the software diversity that naturally emerges in software markets or open source communities, in order to build fault- or attack-tolerant systems. The last part opened on a series of works dedicated to the management of functional diversity, in order to fulfill the various usages of a given system. These three parts refer to different research communities; yet, they all share a common approach: software diversity can be managed and harnessed in order to achieve specific software engineering objectives.


\section{Automated Software Diversity}
\label{sec:automated-diversity}


``Automated software diversity'' consists of techniques for artificially and automatically synthesizing diversity in software.
Instead of using the adjective automated, some authors call it ``synthetic diversity'' \cite{just04} or ``artificial'' diversity (e.g. \cite{SidiroglouLK06}). However, artificial literally means \emph{``created or caused by people''}\footnote{Merriam-Webster, \url{http://www.merriam-webster.com/dictionary/artificial}}. To this extent, N-version programming also produces artificial diversity, but the diverse program variants are produced manually.
We prefer ``automated diversity'', which emphasizes the absence of a human in the loop and is in clear opposition to managed software diversity.
Beyond those details, we actually equate those three terms: artificial, synthetic and automated diversity.

Automated software diversity is valuable in different contexts, for instance software security or fault tolerance. However, these different \emph{goals} are not the only dimension along which we can characterize the various approaches to automated software diversity.
First, the \emph{scale} dimension characterizes the fact that software systems are engineered at several scales:
from a set of interacting machines in a distributed system down to the optimization of a particular loop.
Research has produced techniques for automated software diversity at all those different scales.
Second, the \emph{genericity} dimension explores whether the diversification technique is domain-specific or not.
Third, the \\emph{integrated} dimension is about the assembly of multiple diversification techniques in a global approach. \n\n\\subsection{Randomization}\n\nThe mainstream software paradigms are built on determinism. \nAll layers of the software stack tend to be deterministic, from programming language constructs, to compilers, to middleware, up to application-level code.\n\nHowever, it is known that randomization can be useful, for instance to improve security \\cite{bhatkar03}. \nA classical example of randomization is compiler based-randomization: a compiler may compile the same code with different memory layouts to decrease the risk of code injection.\n\nWhat is the relation between randomization and diversity? A randomization technique creates, directly or indirectly, set of unique executions for the very same program. As mentioned by \\cite{bhatkar03}, \\emph{``the use of randomized program transformations [is] a way to introduce diversity into applications''}. The notion of ``diversity of execution'' is broad: it may mean diverse performances, diverse outputs, diverse memory locations, etc. \nWe present an overview of diversifying randomization techniques in this survey.\nFor a more detailed survey about randomization, we refer the reader to surveys dedicated to that topic, in particular the one of Keromytis and Prevelakis \\cite{keromytis2005survey}.\n\nThere are different kinds of diversifying randomization. \n First, one can create different versions of the same program.\nFor instance, one can randomize the data structures at the source or at the binary level. \nWe call this kind of randomization ``static''. \nStatic randomization is discussed in Section \\ref{sec:static-randomization}. \n\nSecond, one can automatically integrate randomization points in the executable program. \nFor instance, a malloc primitive (memory allocation) with random padding is a randomization point: each execution of malloc yields a different result. \nContrary to static randomization, there is still one single version of the executable program but their executions are diverse. We call this kind of randomization ``dynamic randomization'' (also called runtime randomization \\cite{xu2003transparent}) and discuss it in \\ref{sec:dyn-randomization}. \n\nThird, some randomization techniques do not aim at providing a strict behavioral equivalence between the the original program and the randomized executions. They are are discussed in Section \\ref{sec:unsound}.\n\nFinally, as we will see later in Section \\ref{integrated-diversity}, diversification techniques can be stacked. This also holds for randomization: one can stack static and dynamic randomization. In this case, there are diverse versions of the same program which embed randomization points that themselves produce different executions. \n\n\n\\subsubsection{Static Randomization}\n\\label{sec:static-randomization}\n\nOne of seminal papers on static randomization is by Forrest and colleagues \\cite{Forrest:1997}, who highlight two families of randomization: \nrandomly adding or deleting non-functional code and\nreordering code.\nThose transformations are also described by Cohen \\cite{cohen93} in the context of operating system protection. \nLin et al. \\cite{Lin2009} randomize the data structure of C code. Following the line of thought of Forrest et al. \\cite{Forrest:1997} they re-order fields of data structures (\\texttt{struct} and \\texttt{class} in C\/C++ code) and insert garbage ones. 
\n\nThe concept of instruction-set randomization has been invented in 2003 in two independent teams \\cite{Kc03,barrantes:03}\nIt consists of creating a unique mapping between artificial CPU instructions and real ones. This mapping is encoded in a key which must be known at runtime to actually execute the program. Eventually, the instruction set of a machine can be considered as unique, and it is very hard for an attacker ignoring the key to inject executable code.\nInstruction-set randomization can be done statically (a variant of the program using a generated instruction set is written somewhere) or dynamically (the artificial instruction set is synthesized at load time).\nIn both cases, instruction-set randomization indeed creates a diversity of execution which is the essence of the counter-measure against code injection.\n\nIn some execution environments (e.g. x86 CPUs), there exists a ``NOP'' instruction. It means ``no operation'' and it has been invented for the sake of optimization, in order to align instructions with respect to some alignment criteria (e.g. memory or cache). \nMerckx \\cite{merckxsoftware} and later Jackson \\cite{jackson} have explored how to use NOP to statically diversify programs. The intuition is simple: by construction ``NOP'' does nothing and the insertion of any amount of it results in a semantically equivalent program. However, it breaks the predictability of program execution and to this extent mitigates certain exploits. \n\nObfuscation is a classical application domain of static randomization.\nCode obfuscation consists of modifying software for the sake of hindering reverse engineering and code tampering. Its main goal is to protect intellectual property and business secrets.\nA basic obfuscation technique simply transforms a program $P$ in a program $P'$ which is distributed. \nHowever, since obfuscation is automated, it is often possible to generate several different obfuscated versions of the same program (as proposed by Collberg et al. \\cite{collberg12} for example). To this extent, code obfuscation is one kind of software diversification, with one specific criterion in mind.\nFor an overview on code obfuscation, we refer to the now classical taxonomy by Collberg and colleagues \\cite{collberg1997taxonomy}. For an example of a concrete obfuscation engine for Java programs, we refer to \\cite{collberg1998manufacturing} and its Figure 1. \nWhen obfuscation happens at runtime, it is a kind of execution diversity and we discuss it in \\ref{sec:dyn-randomization}.\n\n\n\n\\subsubsection{Dynamic Randomization}\n\\label{sec:dyn-randomization}\n\nChew and Song \\cite{Chew02mitigatingbuffer} target ``operating system randomization''. More specifically, they randomize the interface between the operating system and the user-land applications:\nthe system call numbers, the library entry points (memory addresses) and the stack placement. All those techniques are dynamic, done at runtime using load-time preprocessing and rewriting. \n\nDynamic randomization can address different kinds of problems. In particular, it mitigates a large range of memory error exploits. Bathkar et al. \\cite{bhatkar03,bhatkar2005efficient} have proposed some of the seminal research in this direction. Their approach is based on three kinds of randomization transformations: randomizing the base addresses of applications and libraries memory regions, random permutation of the order of variables and routines, and the random introduction of random gaps between objects. 
\n\nStatic randomization creates diverse version of the same program at compilation time, dynamic randomization creates diverse executions of the same program under the same input at runtime. What about just-in-time compilation randomization? This point has been studied by Homescu and colleagues at the University of California Irvine \\cite{homescu2013profile}. Their approach neither creates diverse versions of the same program nor introduces randomization points: the randomization happens in the just-in-time compiler directly. \nTheir randomization is based on two diversification techniques: insertion of NOP instructions and constant blinding.\n\nIn the techniques we have just discussed, the support for dynamic randomization is implemented within the execution environment. On the contrary, self-modifying programs embed their own randomization techniques \\cite{mavrogiannopoulos2011taxonomy}. This is done for sake of security and is considered one of the strongest obfuscation mechanism \\cite{mavrogiannopoulos2011taxonomy}. \n\nAmmann and Knight's ``data diversity'' \\cite{ammann88} represents another family of randomization. The goal of data diversity is not security but fault tolerance. The technique aims at enabling the computation of a program in the presence of failures. The idea of data diversity is that, when a failure occurs, the input data is changed so that the new input does not result in a failure. The output based on this artificial input, through a inverse transformation, remains acceptable in the domain under consideration. \nTo this extent, this technique dynamically diversifies the input data. \n\nThe notion of ``environment diversity'' \\cite{vaidyanathan2005comprehensive} refers to techniques that change the environment to overcome failures. For instance, changing the scheduler or its parameter is indeed a change in the environment. This is larger in scope than just changing some process data, such as standard randomization. \n\n\n\\subsubsection{Unsound Randomization}\n\\label{sec:unsound}\nTraditional randomization techniques are meant to produce programs or executions that are semantically equivalent to the original program or execution. \nHowever, have explored the domain of ``unsound'' randomization techniques, either statically or dynamically.\n\nFoster and Somayaji \\cite{Foster2010} recombine binary object files of commodity applications. If an application is made of two binary files A and B, they show that is possible to run the application by artificially linking a version of A with a different yet close version of B. The technique enables them to tolerate bugs and even let new functions emerging but has no guarantee on the behavior of the recombination.\n\nSchulte et al. \\cite{schultesoftware} describe a property of software that has never been reported before. Software can be mutated and at the same time, it can preserve a certain level of correctness. Using an analogy from genomics, they call this property ``software mutational robustness''. This property has a direct relation to diversification: one can mutate the code in order to get functionally equivalent variants of a program. Doing this in advance is called ``proactive diversity''. The authors present a set of experiments that show that this proactive diversity is able to fix certain bugs.\n\nIn our previous work \\cite{baudry14}, we experiment with different transformation strategies, on Java statements, to synthesize ``sosie'' programs. 
The sosies of a program P are variants of P, i.e., programs with different source code, which pass the same test suite and exhibit a form of computation diversity. In other words, our technique synthesizes large quantities of variants that provide the same functionality as the original through a different control or data flow, reducing the predictability of the program's computation.

Another kind of runtime diversity emerges from the technique of loop perforation \cite{Sidiroglou-Douskos2011}.
In this paper, Sidiroglou et al. have shown that in some domains it is possible to skip the execution of loop iterations. For instance, in a video decoding algorithm (codec), skipping some loop iterations has an effect on some pixels or contours but does not further degrade or crash the software application. On the other hand, skipping loop iterations is key with respect to performance. In other words, there is a trade-off between performance and accuracy. This trade-off can be set offline (e.g. by arbitrarily skipping one out of every two iterations) or dynamically, based on the current load of the machine.
In both cases, this kind of technique results in a semantic diversity of execution profiles, and consequently is deeply related to automated diversity.

\subsubsection{Summary}
\label{sec:summary-randomization}

In this subsection, we have focused on techniques that automatically randomize some aspect of a program, thus producing a diversity of program versions. Diversity occurs in memory, in the operating system, in the bytecode or in the source code, but in all cases it happens with no human intervention, through random processes.



\subsection{Domain-specific Diversity}

The techniques we have presented so far are independent of any application domain.
Yet, domain knowledge can be essential to devise efficient diversification techniques. This section illustrates such situations.

For instance, a common vulnerability of web applications is the possibility of injecting SQL code in order to access unauthorized data or corrupt existing data. Boyd et al. \cite{Boyd04} proposed a technique to diversify the SQL queries themselves. By simply prefixing all SQL keywords with an execution-specific token, they create an unpredictable language that is hard to attack from the outside and that is diverse for each database.

Feldt \cite{Feldt1998} exploited the structure of the genetic programming problem domain for the sake of diversification. He uses a genetic programming system to create a pool of diverse airplane arrestment controllers. He then shows that the failure modes of the synthesized programs are diverse, i.e. that the approach is effective for the generation of a kind of failure diversity.

Oh et al. \cite{oh2002} presented a program transformation aiming at detecting a particular hardware fault (stuck-at faults in the data paths of functional units).
The transformation consists of multiplying all numerical computations by a constant $k$ in a semantics-preserving way.
The authors show that this technique is effective with respect to their fault model. Obviously, it also enables one to automatically obtain diverse implementations of the same program (for different values of $k$).

Computer viruses are programs whose main opponents are anti-virus systems.
Inventors of computer viruses of course care about not being reverse-engineered.
However, more importantly for them, the computer viruses must remain undetectable as long as possible.
Diversification is one solution in this very specific domain: if the virus exists under many different forms, it is harder for anti-virus systems to detect them all. From the perspective of the virus itself, it is even better to constantly change. This kind of diversification is performed through so-called ``metamorphic engines'', where metamorphism refers to the concept of having different forms for the same identity. For a recent account of this kind of diversification, we refer the reader to Borello and Mé \cite{BorelloFM10}.

In the domain of sensor networks, Alarifi and Du \cite{alarifi2006} propose an approach to diversifying sensor software in order to hinder reverse engineering. Their approach diversifies both the data (e.g. the keys used to communicate between nodes) and the code. As a result, each node in a sensor network is very likely to be unique.




So far, we have discussed the diversification of software applications.
Test cases are executable programs, but very specific ones.
Although they are often written in general-purpose programming languages,
their unique goal is to verify the correctness of an application.
They do not provide services to users.
Interestingly, this fundamental difference does not prevent diversity and diversification from being valuable in test cases as well.
Adaptive random testing \cite{chen2010adaptive} is a random testing technique whose goal is to generate input test data.
It is adaptive in the sense that the generated test cases depend on the previously generated ones. The final goal is to evenly spread test cases throughout the input domain.
To this extent, adaptive random testing aims at generating diverse test cases, and this is clear for the authors themselves, who subtitled their flagship paper ``\emph{The art of test case diversity}''.
Feldt et al.'s VAT model is an example of adaptive random testing \cite{Feldt2008TestDiversity}.
They use an information distance from information theory to maximize the diversity of generated test cases.


\subsection{Integrated Diversity}
\label{integrated-diversity}

Integrated software diversity is about works that aim at automatically injecting different forms of diversity at the same time in the same program.
In this line of thought, previous researchers have either emphasized the fact that the diversity is stacked (Section~\ref{sec:stacked-diversity}) or the fact that these different forms of diversity are managed with a specific diversity controller (Section~\ref{sec:controller-diversity}).

\subsubsection{Stacked Diversity}
\label{sec:stacked-diversity}

The different contributions discussed in this section all share the same intuition: each kind of artificial diversity has value from one perspective (a specific kind of attack or bug), and thus, integrating several forms of diversity should increase the global resilience of the software system with respect to security or fault tolerance.

Wang et al. \cite{wang01} propose a multi-level program transformation that aims at introducing diversity at multiple levels of the control flow so as to provide in-depth obfuscation. This work on program transformation takes place in the context of a software architecture for survivable systems, as proposed by Knight et al. \cite{knight00}.
Wang et al.'s architecture relies on probing mechanisms that integrate two forms of diversity: in time (the probe algorithms are replaced regularly) and in space (different probing algorithms run on the different nodes of the distributed system).

Bhatkar et al. \cite{bhatkar03} aim at developing a technique for address obfuscation in order to thwart code injection attacks. This obfuscation approach relies on the combination of several randomization transformations: randomizing the base addresses of memory regions to make the addresses of objects unpredictable; permuting the order of variables in the stack; and introducing random gaps in the memory layout. Since all these transformations have a random component, they synthesize different outputs on different machines, thus increasing the diversity of attack surfaces that are visible to attackers.

Knight et al., in a report of the DARPA project Self-Regenerative System (SRS) \cite{knight07}, summarize the main features of the Genesis Diversity Toolkit. This tool is one of the most recent approaches that integrate multiple forms of artificial diversity. The goal of the project was to generate 100 diverse versions of a program that were functionally equivalent but for which a maximum of 33 versions had the same deficiency. The tool supports the injection of 5 forms of diversity: Address Space Randomization (ASR), Stack Space Randomization (SSR), Simple Execution Randomization (SER), Strong Instruction Set Randomization (SISR), and Calling Sequence Diversity (CSD).

The GENESIS project, also coordinated by Knight's group, explored a complete program compilation chain that applies diversity transformations at different steps to break the monoculture \cite{Williams09}. Diversity transformations are applied at compile time, link time, load time,
and runtime. The latter step is the main innovation of GENESIS and relies on the Strata virtual machine technology, which supports the injection of runtime software diversity. This application-level virtual machine realizes two forms of diversification: calling sequence diversity and instruction set diversity.

Jacob et al. \cite{jacob08} propose superdiversification as a technique that integrates several forms of diversification to synthesize individualized versions of programs. The approach, inspired by compilation superoptimization, consists in selecting sequences of bytecode and in synthesizing new sequences that are functionally equivalent. Given the very large number of potential candidate sequences, the authors discuss several strategies to reduce the search space, including learning the occurrence frequencies of certain sequences.

Franz \cite{franz10} advocates massive-scale diversity as a new paradigm for software security. The idea is that today some programs are distributed several million times and all these software clones run on millions of machines in the world. The essential issue is that, even if it takes an attacker a long time to discover a way to exploit a vulnerability, this time is worth spending since the exploit can be reused to attack millions of machines. Franz envisions a new context in which, each time a binary program is shipped, it is automatically diversified and individualized, to prevent large-scale reuse of exploits.
The approach relies on four paradigm shifts as enablers for this vision: online software distribution, ultra-reliable compilers, cloud computing and good-enough performance.

In 2010, Moving Target Defense (MTD) was announced as one of the three ``game-changing'' themes in cyber security put forward by the President's Cyber Policy Review. The software component of MTD integrates spatial and temporal software diversity, in order to ``limit the exposure of vulnerabilities and opportunities for attack'' \cite{jajodia11}. With such a statement, future solutions for MTD will heavily rely on the integration of various software diversity mechanisms to achieve their objectives.

Inspired by the work of Cohen, who suggested multiple kinds of program transformations to diversify software \cite{cohen93}, Collberg et al. \cite{collberg12} compose multiple forms of diversity and code replacement in a distributed system in order to protect it from remote man-at-the-end attacks. The diversification transformations used in this work are adapted from obfuscation techniques: flattening the control flow, merging or splitting functions, adding non-functional code, reordering parameters and encoding variables. These transformations for spatial diversity are combined with temporal diversity (when and how frequently diversity is injected), which relies on a diversity scheduler that regularly produces new variants.

Allier et al. recently proposed to use software diversification in multiple components of web applications \cite{allier14}. They combine different software diversification strategies, from the deployment of different vendor solutions to fine-grained code transformations, in order to provide different forms of protection. Their form of multi-tier software diversity is a kind of integrated diversity in application-level code.

\subsubsection{Controllers of Automated Diversity}
\label{sec:controller-diversity}

If mixed together and deployed at a certain scale of automation and size, all kinds of automated diversity need to be controlled. Popov et al. \cite{popov2012empirical} provide an in-depth analysis of diversity controllers, showing that diversity controlled with specific diversity management decisions is better than naive diversity maximization. On the engineering side, several researchers have discussed how to manage the diverse variants of the same program.

Cox et al. \cite{cox06} introduce the idea of N-variant systems, which consists in automatically generating variants of a given program and then running them in parallel in order to detect security issues. This is different from N-version programming because the variants are generated automatically and not written manually.
The approach is integrated because it synthesizes variants using two different techniques: address space partitioning and instruction set tagging. Both techniques are complementary, since address space partitioning protects against attacks that rely on absolute memory addresses, while instruction set tagging is effective against the injection of malicious instructions. In subsequent work, the same group proposed another transformation that aims at thwarting user ID corruption attacks \cite{nguyen08}.

Salamat and colleagues found a nice name for this concept: ``multi-variant execution environment''~\cite{salamat2008reverse,jackson2011compiler}.
A multi-variant execution environment provides support for running multiple diverse versions of the same program in parallel.
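The monitor at the core of such an environment can be sketched as follows. This is our own simplified illustration: real multi-variant environments fork diversified binaries and compare them in lockstep at the level of system calls, not of final return values, and the two variants below are hypothetical placeholders for diversified builds of the same program.

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

/* Placeholders for two diversified builds of the same program. */
static int variant_a(int x) { return 2 * x; }   /* hypothetical build A */
static int variant_b(int x) { return x + x; }   /* hypothetical build B */

static int monitored_run(int input) {
    int ra = variant_a(input);
    int rb = variant_b(input);
    if (ra != rb) {
        /* A single exploit is unlikely to affect both diversified
           variants identically, so a divergence very likely signals
           an attack or a fault. */
        fprintf(stderr, "variant divergence detected");
        abort();
    }
    return ra;
}

int main(void) {
    printf("result: %d", monitored_run(21));
    return 0;
}
\end{verbatim}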
The diverse versions are automatically synthesized at compile-time, with reverse stack execution \\cite{salamat09,salamat11}.\nThe execution differences allow for analysis and reasoning on the program behavior. For instance, in \\cite{salamat2008reverse}, multi-variant execution enables the authors to detect malicious code trying to manipulate the stack. \n\nLocasto and colleagues \\cite{SidiroglouLK06} introduced the idea of collaborative application communities. \nThe same application (e.g. a web server) is run on different nodes. In the presence of bugs (invalid memory accesses), each node tries a different runtime fix alternative. If the fix proves to be successful, a controller shares it with the other nodes. This healing process contains both a diversification phase (at the level of nodes) and a convergence phase (at the level of the community).\n\n\n\n\\subsubsection{Summary}\n\\label{sec:summary-integrated}\n\nEach form of software diversification targets a specific goal (e.g., against a specific attack vector). Many recent works have thus experimented with the integration of multiple forms of diversity in a system, to benefit from several forms of protection. We have discussed these works here, as well as the specific kinds of controllers that are required to integrate various diversification techniques.\n\n\n\\subsection{Summary}\nThis section has presented a broad range of contributions on automated software diversity.\nThey come from different research communities, and some of them do not even use the word diversity. \nHowever, they all share the same idea that programs and program executions need not be identical. With respect to the rest of this paper, they are fully automated, which is different from the natural diversity discussed in sections \\ref{sec:managed-natural} and \\ref{sec:natural-study} and the managed, yet mostly manual diversity presented in section \\ref{sec:managed}.\n\n\n\n\\section{Diversity as Study Subject}\n\\label{sec:study-subject}\n\nIn this section, we present different works that focus on analyzing and quantifying software diversity and its effects on different aspects of reliability (e.g., fault-tolerance or intrusion-avoidance). \nContrary to the previous sections, the work presented here is not primarily an engineering contribution: it is not a new technique to support, encourage, or create a new kind of software diversity. \nThese approaches all have in common that they consider software diversity as their research subject per se. They simply aim at understanding the deep nature of software diversity, from its causes to its implications.\n \nFirst, section \\ref{sec:study-n-version} discusses the theoretical models of design diversity and its effects on fault-tolerance. Then, section \\ref{sec:natural-study} presents the literature on the analysis of the natural diversity that is found in off-the-shelf components and source code. \n\n\\subsection{Theoretical Modeling Of Design Diversity}\n\\label{sec:study-n-version}\n\nFailure independence is a critical assumption of the design diversity principle for fault-tolerant critical systems. After the introduction of N-version programming and recovery blocks in the late 70's, a large number of studies have investigated their theoretical foundations and the validity of their assumptions. We discuss the most important studies here.\n\nDesign diversity (N-version programming, recovery blocks) was one of the earliest proposals to leverage diversity and redundancy in software for the sake of fault-tolerance. 
Fault-tolerance is ensured under one essential assumption: the independence of failures among the diverse solutions. Because of the critical impact of this assumption, a large number of papers have investigated its validity. While section \\ref{sec:design} focused on the principles of design diversity, here we focus on the studies that have evaluated the impact of this approach through empirical studies and statistical modeling.\n\nKnight and Leveson \\cite{knight86} provided the first large-scale experiment that aimed at validating the independence assumption in N-version programming. They asked students to write a program from a single requirements document (for a simple antimissile system) and obtained 27 programs. Each program was tested against 1 million random test cases. The quality of the programs was very high (very few faults), but still there were errors that were found in more than one version (the same error in independently developed programs). A statistical analysis of the results revealed a significant lack of independence between certain errors in the multiple versions of this program. Consequently, the paper was the first major criticism of the effectiveness of design diversity.\n\nBishop et al. \\cite{bishop86} summarized the results of the PODS project, which aimed at evaluating the effect of N-version design on the reliability of software. Their experimental setup is based on the development of three versions of a controller for over-power protection. The requirements document is the same for the three teams, but then they use different methods and languages for the implementation. They concluded that running the three versions, with a voting mechanism, produces a system that is more reliable than the most reliable single version and also that back-to-back testing on all three versions is an effective solution to find residual bugs. \n\nSeveral pieces of work proposed theoretical frameworks to analyze and quantify the effects of N-version design on reliability.\nEckhardt and Lee \\cite{eckhardt85} developed a theoretical statistical model for evaluating the impact of diversity on fault-tolerance. This model quantifies the effect of joint occurrences of errors on the reliability of the global system. Then, they use this model to explore the conditions under which N-version design can improve fault-tolerance and the limits that coincidental errors impose on the effectiveness of N-version design. Littlewood and colleagues have refined the work of Eckhardt and Lee, first by considering the diversity of development methods \\cite{littlewood89}, and more recently by adding further hypotheses and studying two-channel systems \\cite{littlewood2012reasoning}. They show that methodological diversity, analyzed as the diversity of development decisions, is very likely to produce behavioral diversity. Popov and Strigini \\cite{popov01} proposed another model to analyze the effects of design diversity, in which they rely on data that are more related to physical attributes than previous proposals, making the model more actionable for reliability analysis and prediction. Mitra et al. \\cite{mitra99} defined metrics to quantify diversity in N-version designs and highlighted new results about the effectiveness of N versions on software reliability: diversity increases fault tolerance in the presence of common mode failures, as well as self-testing capacities, but the effects of diversity decrease over time. 
Nicola and Goyal \\cite{nicola90} proposed a statistical model that captures the distribution of correlated failures in multiple versions, as well as a combinatorial formula to predict the reliability of a system running N versions. They analyze the effectiveness of N-version design and demonstrate the need for loose correlations between failures in the N versions.\nHatton \\cite{hatton97} evaluates N-version design slightly differently: he proposes a theoretical model to compare the development of a single highly reliable version of a software component with the development of N versions of the component. He concludes that N-version design is advantageous, especially considering our inability to make a really good single version.\n\nKanoun \\cite{kanoun99} focuses on a cost analysis of developing 2 diverse versions of the same program. She aims at providing feedback about the overhead of developing the second version, considering one version as the reference. She relies on working hours records for cost estimates. She observes between 25\\% and 134\\% overhead depending on the development phase (the highest overhead is for the coding and unit tests, while the lowest is for functional specification). These results confirm other observations from controlled experiments, with actual data from industrial software development.\n\nPartridge and Krzanowski \\cite{partridge97} start from the framework of Littlewood and Miller and extend it: they look at the impact of multiple versions beyond failure diversity, including other targets for diversity, such as specializing the performance of some versions for specific tasks.\nThey evaluate the possibility of an optimal diversity level for reliable software. Partridge and Krzanowski provide an initial attempt to understand the role of software diversity at multiple levels and to systematically quantify diversity in complex systems.\n\nMore recently, van der Meulen and Revilla \\cite{Meulen08} analyze the impact of design diversity with thousands of programs that all implement the same set of requirements. \nThose programs come from the UVa Online Judge Website, which proposes a set of programming challenges that can be automatically corrected. Hence, the programs were written by thousands of anonymous programmers attracted by the website concept.\nvan der Meulen and Revilla use the frameworks of Eckhardt and Lee \\cite{eckhardt85} and Littlewood and Miller \\cite{littlewood89}. The authors classify different categories of faults that occur in different versions, and then, through random selections of pairs of versions, evaluate the reliability of the system (assuming that the system does not fail if one of the versions does not fail). They confirm that N-version design is more effective when different versions fail independently and that the diversity of programming language has a positive effect (programmers make different faults and different kinds of faults, with different languages). \nGiven the size of their dataset, their findings carry strong statistical validity.\n\nSalako et al. \\cite{salako13} question the independent sampling assumption posed by the models of Eckhardt and Lee \\cite{eckhardt85} and Littlewood and Miller \\cite{littlewood89}. They analyze the consequences of violating this assumption and evaluate the opportunity of using different versions of a program (not developed independently) to build fault-tolerant systems. Their results confirm the important influence of independence on diversity. 
Yet, they also open the discussion about different forms of independence and different processes that can be applied to mitigate the influences between different versions.\n\nA large number of theoretical and empirical studies have dissected the foundations of design diversity. We have summarized these works here and discussed how they have contributed to a finer-grained understanding of the conditions for effective design diversity. \n\n\\subsection{Study of Natural Software Diversity}\n\\label{sec:natural-study}\n\n``Natural software diversity'' is any form of software diversity that spontaneously emerges from software development. The emergence comes from many factors such as the market competition, the diversity of developers, of languages or of execution environments. \nIn Section \\ref{sec:managed},\n we have discussed how natural diversity can be used to establish reliable software systems (Section \\ref{sec:managed-natural}).\nIn this section, we return to natural diversity and discuss the literature that studies and describes it.\nThe different studies presented in this section explore different kinds of software diversity: in software components, in source code, as well as in the social behaviors in open source communities.\n\nGashi et al. \\cite{gashi04} have studied bug reports for 4 off-the-shelf SQL servers (Oracle 8.0.5, Microsoft SQL, PostgreSQL 7.0.0 and Interbase 6.0), to understand whether these solutions could be good candidates for fault-tolerance, i.e., exhibit failure diversity. The study consisted in selecting bugs for each of the servers, collecting the test cases that trigger the bug on one server and running them on the other servers to check whether the other solutions present the same bug. Following this protocol, for a total of 181 bugs, they observed that only 4 bugs were present in two versions simultaneously, and no bug was found in more than 2 versions. They emphasize that the diversity of solutions is a major asset for forward error recovery, since it is possible to copy the state of a correct database into a failed one. They have proposed to use this natural diversity to design an architecture for a fault-tolerant database management system \\cite{gashi07}. \n\n\nBarman et al. \\cite{barman09} focus on host intrusion detection systems (HIDS) deployed on all machines of enterprise networks. The ability of an IDS to detect intrusions depends on several thresholds that should be tuned to each user, yet these thresholds are usually set to the same value on each machine, because of a lack of guidelines about how to configure them. The authors analyze the impact of this monoculture of HIDS, showing that it provides very poor results in terms of intrusion detection. These poor results arise mainly because user behaviors are so diverse that the HIDS configurations should also be diverse to be effective. Then, the authors experiment with increasing configuration diversity and observe a clear benefit in reducing the number of missed detections.\n\nKoopman and De Vale \\cite{koopman99} evaluate the diversity of POSIX operating systems, using a robustness metric based on failure rates. The authors compare 13 implementations of POSIX. They use the Ballista testing tool to generate large quantities of robustness test cases that they run on each version. This reveals failure rates between 6\\% and 19\\%. 
Then, the authors perform a multi-version comparison to analyze the diversity of failures and thus the usability of these POSIX versions for N-version fault-tolerance. The results demonstrate that multiple versions can be used to increase robustness; yet, even with the 2 most diverse solutions, there is still a 9.7\\% common mode failure exposure for system calls.\n\nHan et al. \\cite{han09} analyze the diversity of off-the-shelf components with respect to their diversity of vulnerabilities. They provide a systematic analysis of the ability of multi-version systems to prevent exploits. The study is based on 6000 vulnerabilities published in 2007. The main result is that components available for web servers are diverse with respect to their vulnerabilities and cannot be compromised by the same exploit.\nMoreover, all these components can run on multiple operating systems in order to increase diversity. They conclude that the natural diversity of off-the-shelf software applications is beneficial to build attack-tolerant systems.\n\n\nSome recent works study the natural diversity or redundancy that emerges in large-scale source code. Gabel and Su \\cite{gabel10} analyze uniqueness in source code through the analysis of 6000 programs covering 420 million lines of code. The authors focus on the level of granularity at which diversity emerges in source code. Their main finding is that, for sequences up to 40 tokens, there is a lot of redundancy. Beyond this (admittedly fuzzy) threshold, the diversity and uniqueness of source code appear. \nJiang and Su \\cite{jiang09} propose an approach for the identification of functionally equivalent source code snippets in large software projects. This approach consists in extracting code snippets of a given length, randomly generating input data for these snippets and identifying the snippets that produce the same output values (which are considered functionally equivalent, w.r.t. the set of random test inputs). They run their analysis on the Linux kernel 2.6.24 over several days and find a large number of functionally equivalent code fragments, most of which are syntactically different. Both studies explore the tension between redundancy and diversity that exists in software. \n\nMendez et al. \\cite{mendez13} analyze the diversity in source code at the level of usages of Java classes. They analyze hundreds of thousands of Java classes, looking for type usages, i.e. sets of methods called on an object of a given type. They find 748 classes with more than 100 different usages of the API, the most extreme case being the \\texttt{String} class of the Java library, for which they found 2460 different usages. This reveals a very high degree of usage diversity in object-oriented software.\n\nDiversity also emerges in social behaviors in open source software development. In this area, Posnett et al. \\cite{posnett13} analyze the focus of developers (whether they contribute to few or many artifacts) and the ownership (to what extent an artifact is ``owned'' by one or several developers). Through an analogy with predator-prey relations, they set up entropy measures to quantify the diversity in focus and ownership. They observe high levels of diversity in open source projects, and also demonstrate that these entropy metrics have good predictive properties: focused developers introduce fewer defects, while artifacts that receive contributions from several developers tend to have more defects. \nVasilescu et al. 
\\cite{vasilescu13} studied the development of the GNOME community and observed diversity both from the point of view of contributors (how diverse are the activities of different project contributors) and from the point of view of projects (how diverse are the activities going on in different GNOME projects).\n\nSoftware diversity spontaneously emerges through multiple phenomena. In this section we have discussed the methods to study these different phenomena, as well as the experimental procedures that have been implemented to analyze the impact of this specific form of software diversity. These recent studies illustrate how the analysis of complex diversification processes must leverage techniques from multiple domains, ranging from software analysis, data mining and statistics to threat models and exploit replication.\n\n\n\\subsection{Summary}\nThis section has presented two main areas in the analysis and the theoretical modeling of software diversity and its impact. The first part provided an overview of three decades of work that analyzed N-version programming and proposed several statistical methods and foundational assumptions that underlie the effectiveness of this technique for fault-tolerant software systems.\nThe second part discusses novel works that analyze the implications and the effectiveness of natural software diversity (as presented in section \\ref{sec:managed-natural}) for building resilient systems.\n\n\\section{Conclusion}\n\nIn this paper, we provided a global picture of the software diversity landscape. We decided to broaden the standard scope of diversity, in order to give a very inclusive vision of the field and, hopefully, a better understanding of the nature of software diversity. The survey gathered work from various scientific communities (security, software engineering, programming languages), which we organized around one dimension: the diversity engineering technique (managed, automated, natural).\n\nLooking at all these works from a temporal perspective, we realize that interest in diversity has persisted throughout the last 40 years. The latest studies even discover phenomena of natural diversity emergence, i.e. diversity is observed but the processes that led to its presence are unknown. We believe that harnessing this natural diversity will be an essential step in the future of software diversification. This could be the intermediate step towards the amplification of natural diversity. Indeed, diversity in natural complex systems is never explicitly developed, but emerges as a side effect of other phenomena. For example, biodiversity at different scales of ecosystems emerges as the result of sexual reproduction, mutation, dispersal and frequency-dependent selection \\cite{de2009,melian2010}. To this end, the main area of future work is to identify the software engineering principles and evolution rules that drive the emergence and the constant renewal of diversity in software systems. In other words, can we engineer open-ended software diversification?\n\n\n\n\n\n\\section*{Acknowledgements}\nWe would like to thank Paul Ammann, Benoit Gauzens and Sebastian Banescu for their valuable feedback on this paper. This work is partially supported by the EU FP7-ICT-2011-9 600654 DIVERSIFY project.\n\n\n\n\n\\bibliographystyle{ACM-Reference-Format-Journals} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nSolutions to partial differential equations that exhibit singularity (e.g.
cracks) or high variations in the computational domain are usually approximated by adaptive numerical methods. There is nowadays a large body of literature concerned with the development of reliable a posteriori error estimators aiming for mesh refinement in regions of large errors (see e.g. \\cite{BaR78a, BaR78b, BaW85, Ver96a}). However, classical adaptive methods are usually based on iterative processes which rely on recomputing the solution on the whole computational domain for each new mesh obtained after a refinement procedure. \n\nIn this paper we present a scheme which solves local problems defined on refined regions only. Local schemes have been proposed in the past; we mention the Local Defect Correction (LDC) method \\cite{Hac84}, the Fast Adaptive Composite (FAC) grid algorithm \\cite{McT86} and the Multi-Level Adaptive (MLA) technique \\cite{Bra77}. At each iteration, these algorithms solve a problem on a coarse mesh on the whole domain and a local problem on a finer mesh. The coarse solution is used for artificial boundary conditions while the local solution is used to correct the residual in the coarse grid. In \\cite{BRL15} the LDC scheme has been coupled with error estimators, which are used to select the local domain. \n\nIn \\cite{AbR19} we proposed a Local Discontinuous Galerkin Gradient Discretization (LDGGD) method which decomposes the computational domain into local subdomains encompassing the large gradient regions. This scheme iteratively improves a coarse solution on the full domain by solving local elliptic problems on finer meshes. \nHence, the full problem is solved only in the first iteration on a coarse mesh, while a sequence of solutions on smaller subdomains is subsequently computed. In turn, iterations between subdomains, as in the LDC, FAC or MLA schemes, are not needed, and the condition numbers of the small systems are considerably smaller than the condition number of a large system describing data and mesh variations on the whole domain. The LDGGD method has been shown to converge under minimal regularity assumptions, i.e. when the solution is in $H^1_0(\\Omega)$ and the forcing term in $H^{-1}(\\Omega)$ \\cite{AbR19}. \nHowever, the marking of the subdomains in this scheme has so far relied on a priori knowledge of the location of high gradient regions. \n\nThe main contribution of this paper is to propose an adaptive local LDGGD method. This adaptive method is based on a posteriori error estimators that automatically identify the subdomains to be refined. \nThis is crucial for practical applications of the method. The LDGGD relies on the symmetric weighted interior penalty Galerkin (SWIPG) method \\cite{PiE12,ESZ09} and we consider linear advection-diffusion-reaction equations\n\\begin{equation}\\label{eq:elliptic}\n\t\\begin{aligned}\n\t-\\nabla\\cdot(A\\nabla u)+\\bm{\\beta}\\cdot\\nabla u+\\mu u &=f\\qquad && \\text{in } \\Omega, \\\\\n\tu&=0 && \\text{on } \\partial \\Omega, \n\t\\end{aligned}\n\\end{equation}\nwhere $\\Omega$ is an open bounded polytopal connected subset of $\\mathbb{R}^{d}$ for $d\\geq 2$, $A$ is the diffusion tensor, $\\bm{\\beta}$ the velocity field, $\\mu$ the reaction coefficient and $f$ a forcing term. \nIn \\cite{ESV10} the authors introduce a posteriori error estimators for the SWIPG scheme based on cutoff functions and conforming flux and potential reconstructions; these estimators are shown to be efficient and robust in singularly perturbed regimes. 
Following the same strategy, we derive estimators for the local scheme by weakening the regularity requirements on the reconstructed fluxes. The new estimators are likewise free of unknown constants, and their robustness is verified numerically. Furthermore, they are employed to define the local subdomains and provide error bounds on the numerical solution of the LDGGD method.\nWe prove that the error estimators are reliable. Because of the local nature of our scheme, we introduce two new estimators that measure the jumps at the boundaries of the local domains. However, these two new terms have a lower convergence rate than the other terms and we cannot establish the efficiency of our a posteriori estimators with our current approach.\nNevertheless, the two new terms are useful in our algorithm: whenever the errors are localized, these new terms become negligible; in contrast, when these estimators dominate, it is an indication that the error is not localized and one can switch to a nonlocal method. Other boundary conditions than those of \\cref{eq:elliptic} can be considered, at the cost of modifying the error estimators introduced in \\cite{ESV10}. The new estimators introduced here need no changes.\n\n\nThe outline of the paper is as follows. In \\cref{sec:ladg} we describe the local scheme; in \\cref{sec:err_flux} we introduce the error estimators and state the main a posteriori error analysis results. \\Cref{sec:errbound} is dedicated to the definition of the reconstructed fluxes and proofs of the main results. Finally, various numerical examples illustrating the efficiency, versatility and limits of the proposed method are presented in \\cref{sec:num}.\n\n\n\\section{Local adaptive discontinuous Galerkin method}\\label{sec:ladg}\nIn this section we introduce the local algorithm based on the discontinuous Galerkin method. \nWe start with some assumptions on the data and the domain, before introducing the weak form corresponding to \\eqref{eq:elliptic}.\nWe assume that $\\Omega\\subset\\mathbb{R}^{d}$ is a polytopal domain with $d\\geq 2$, $\\bm{\\beta}\\in W^{1,\\infty}(\\Omega)^d$, $\\mu\\in L^\\infty(\\Omega)$ and $A\\in L^\\infty(\\Omega)^{d\\times d}$, with $A(\\bm{x})$ a symmetric piecewise constant matrix with eigenvalues in $[\\underline\\lambda,\\overline \\lambda]$, where $\\overline\\lambda\\geq\\underline\\lambda>0$. Moreover, we assume that $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}\\geq 0$ a.e. in $\\Omega$. This term $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}$ appears in the symmetric part of the operator $\\mathcal{B}(\\cdot,\\cdot)$ defined in \\eqref{eq:bform} and hence the assumption $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}\\geq 0$ is needed for coercivity. 
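This assumption indeed yields coercivity of the symmetric part: for $v\\in H^1_0(\\Omega)$, a standard integration by parts gives $\\int_\\Omega(\\bm{\\beta}\\cdot\\nabla v)v\\dif\\bm{x}=-\\frac{1}{2}\\int_\\Omega(\\nabla\\cdot\\bm{\\beta})v^2\\dif\\bm{x}$, so that\n\\begin{equation*}\n\\mathcal{B}(v,v)=\\int_\\Omega (A\\nabla v\\cdot\\nabla v+(\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta})v^2)\\dif\\bm{x}\\geq \\underline\\lambda\\nLdd{\\nabla v}^2.\n\\end{equation*}\n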
Finally, we set $f\\in L^2(\\Omega)$.\nUnder these assumptions, the unique weak solution $u\\in H^1_0(\\Omega)$ of \\eqref{eq:elliptic} satisfies\n\\begin{equation}\\label{eq:weak}\n\\mathcal{B}(u,v)=\\int_\\Omega fv\\dif \\bm{x}\\qquad \\text{for all }v\\in H^1_0(\\Omega),\n\\end{equation} \nwhere\n\\begin{equation}\\label{eq:bform}\n\\mathcal{B}(u,v)= \\int_\\Omega (A\\nabla u\\cdot\\nabla v+(\\bm{\\beta}\\cdot\\nabla u) v+\\mu u v)\\dif\\bm{x}.\n\\end{equation}\n\n\n\\subsection{Preliminary definitions}\\label{sec:prel}\nWe start by collecting some notations related to the geometry and the mesh of the subdomains, before recalling the definition of the discontinuous Galerkin finite element method.\n\n\\subsubsection*{Subdomains and meshes}\nLet $M\\in\\mathbb{N}$ and $\\{\\Omega_k\\}_{k=1}^M$ be a sequence of open subdomains of $\\Omega$ with $\\Omega_1=\\Omega$. The domains $\\Omega_k$ for $k\\geq 2$ can be any polytopal subsets of $\\Omega$; in practice they will be chosen by the error estimators (see \\cref{sec:localg}). We consider a sequence $\\{\\mathcal{M}_k\\}_{k=1}^M$ of simplicial meshes on $\\Omega$ and denote by $\\mathcal{F}_k=\\mathcal{F}_{k,b}\\cup\\mathcal{F}_{k,i}$ the set of boundary and internal faces of $\\mathcal{M}_k$. The assumption below ensures that $\\mathcal{M}_{k+1}$ is a refinement of $\\mathcal{M}_k$ inside the subdomain $\\Omega_{k+1}$.\n\\begin{assumption}\\label{ass:mesh}\n\t$\\phantom{=}$\n\t\\begin{enumerate}\n\t\t\\item For each $k=1,\\ldots,M$, $\\overline{\\Omega}_{k}=\\cup_{K\\in\\mathcal{M}_k,\\,K\\subset\\Omega_k} \\overline{K}$.\n\t\t\\item For $k=1,\\ldots,M-1$,\n\t\t\\begin{enumerate}[label=\\alph*)]\n\t\t\t\\item $\\{K\\in\\mathcal{M}_{k+1}\\,:\\, K\\subset \\Omega\\setminus\\Omega_{k+1}\\}=\\{K\\in\\mathcal{M}_k\\,:\\, K\\subset \\Omega\\setminus\\Omega_{k+1}\\}$,\n\t\t\t\\item if $K,T\\in \\mathcal{M}_k$ with $K\\subset \\Omega_{k+1}$, $T\\subset\\Omega\\setminus\\Omega_{k+1}$ and $\\partial K\\cap\\partial T\\neq\\emptyset$ then $K\\in \\mathcal{M}_{k+1}$,\n\t\t\t\\item if $K\\in\\mathcal{M}_k$ and $K\\subset \\Omega_{k+1}$, either $K\\in \\mathcal{M}_{k+1}$ or $K$ is a union of elements in $\\mathcal{M}_{k+1}$.\n\t\t\\end{enumerate}\n\t\\end{enumerate}\n\\end{assumption}\n\nLet $\\widehat{\\mathcal{M}}_k=\\{K\\in\\mathcal{M}_k\\,:\\, K\\subset \\Omega_k\\}$ and $\\widehat{\\mathcal{F}}_k=\\widehat{\\mathcal{F}}_{k,b}\\cup \\widehat{\\mathcal{F}}_{k,i}$ the set of faces of $\\widehat{\\mathcal{M}}_k$, with $\\widehat{\\mathcal{F}}_{k,b}$ and $\\widehat{\\mathcal{F}}_{k,i}$ the boundary and internal faces, respectively. Condition 1 in \\cref{ass:mesh} ensures that $\\widehat{\\mathcal{M}}_k$ is a simplicial mesh on $\\Omega_k$. Condition 2 guarantees that in $\\Omega\\setminus\\Omega_{k+1}$ and in the neighborhood of $\\partial\\Omega_{k+1}\\setminus\\partial\\Omega$ the meshes $\\mathcal{M}_k$ and $\\mathcal{M}_{k+1}$ are equal and that $\\mathcal{M}_{k+1}$ is a refinement of $\\mathcal{M}_k$ in $\\Omega_{k+1}$. 
An example of domains and meshes satisfying \\cref{ass:mesh} is illustrated in \\cref{fig:illustrationmesh}.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{tikzpicture}[xscale=1]\n\t\n\t\t\\draw[step=1.0,black, thin] (0,0) grid (12,4);\n\t\t\\foreach \\x in {0,1,2,3,4,5,6,7,8}\n\t\t\t\\draw[thin] (\\x,0)--(\\x+4,4);\n\t\t\\draw[thin] (0,3)--(1,4);\n\t\t\\draw[thin] (0,2)--(2,4);\n\t\t\\draw[thin] (0,1)--(3,4);\n\t\t\\draw[thin] (9,0)--(12,3);\n\t\t\\draw[thin] (10,0)--(12,2);\n\t\t\\draw[thin] (11,0)--(12,1);\n\t\t\\draw[ultra thick] (0,0)--(12,0)--(12,4)--(0,4)--(0,0);\n\t\t\\draw[ultra thick] (2,1)--(10,1)--(10,4)--(2,4)--(2,1);\n\t\t\\draw[ultra thick] (4,2)--(8,2)--(8,4)--(4,4)--(4,2);\n\t\t\n\t\n\t\t\\draw[step=0.5,black, thin] (3,2) grid (9,4);\n\t\t\\draw[thin] (3,3.5)--(3.5,4);\n\t\t\\draw[thin] (3,2.5)--(4.5,4);\n\t\t\\draw[thin] (4.5,2.5)--(6,4);\n\t\t\\draw[thin] (4.5,2.0)--(6.5,4);\n\t\t\\draw[thin] (5.5,2.5)--(7,4);\n\t\t\\draw[thin] (5.5,2.0)--(7.5,4);\n\t\t\\draw[thin] (6.5,2.5)--(7.5,3.5);\n\t\t\\draw[thin] (3.5,2.0)--(5.5,4);\n\t\t\\draw[thin] (4.5,3.5)--(5.0,4);\n\t\t\\draw[thin] (6.5,2.0)--(8.5,4.0);\n\t\t\\draw[thin] (7.5,2.0)--(9,3.5);\n\t\t\\draw[thin] (8.5,2.0)--(9,2.5);\n\t\t\n\t\n\t\t\\foreach \\x in {4.5,5,...,7}\n\t\t\\foreach \\y in {3,3.5,4}\n\t\t\\draw[thin] (\\x,\\y)--(\\x+0.5,\\y-0.5);\n\t\t\n\t\t\n\t\t\\node[align=right,fill=white] at (12.5,1.5) {\\Large{$\\Omega_1$}};\n\t\t\\node[align=right,fill=white] at (10.5,2.5) {\\Large{$\\Omega_2$}};\n\t\t\\node[align=right,fill=white] at (7.5,1.5) {\\Large{$\\Omega_3$}};\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\\caption{Example of possible meshes for three embedded domains $\\Omega_1$, $\\Omega_2$, $\\Omega_3$.}\n\t\\label{fig:illustrationmesh}\n\\end{figure}\n\n\\subsubsection*{Discontinuous Galerkin finite element method}\nThe local adaptive discontinuous Galerkin method will solve local elliptic problems in $\\Omega_k$ by using a discontinuous Galerkin scheme introduced in \\cite{ESZ09}, which we recall here. \nIn what follows, $\\mathfrak{T}=(D,\\mathcal{M},\\mathcal{F})$ denotes a tuple defined by a domain $D$, a simplicial mesh $\\mathcal{M}$ on $D$ and its set of faces $\\mathcal{F}=\\mathcal{F}_b\\cup\\mathcal{F}_i$. In practice we will consider $\\mathfrak{T}_k=(\\Omega,\\mathcal{M}_k,\\mathcal{F}_k)$ or $\\widehat{\\mathfrak{T}}_k=(\\Omega_k,\\widehat{\\mathcal{M}}_k,\\widehat{\\mathcal{F}}_k)$. For $\\mathfrak{T}=(D,\\mathcal{M},\\mathcal{F})$ we define \n\\begin{equation}\\label{eq:defVT}\nV(\\mathfrak{T}) = \\{v\\in L^2(D)\\,:\\, v|_K\\in \\mathbb{P}_\\ell(K),\\,\\forall K\\in\\mathcal{M}\\},\n\\end{equation}\nwhere $\\mathbb{P}_\\ell(K)$ is the set of polynomials in $K$ of total degree $\\ell$. As usual for such discontinuous Galerkin methods we need to define appropriate averages, jumps, weights and penalization parameters. For $K\\in\\mathcal{M}$ we denote $\\bm{n}_K$ the unit normal outward to $K$ and $\\mathcal{F}_K=\\{\\sigma\\in\\mathcal{F}\\,:\\,\\sigma\\subset\\partial K\\}$. 
Let $\\sigma\\in\\mathcal{F}_{i}$ and $K,T\\in\\mathcal{M}$ with $\\sigma=\\partial K\\cap\\partial T$, then $\\bm{n}_\\sigma=\\bm{n}_K$ and\n\\begin{equation}\n\\delta_{K,\\sigma}=\\bm{n}_\\sigma^\\top A|_K\\bm{n}_\\sigma, \\qquad\\qquad \\delta_{T,\\sigma}=\\bm{n}_\\sigma^\\top A|_T\\bm{n}_\\sigma.\n\\end{equation}\nThe weights are defined by\n\\begin{equation}\n\\omega_{K,\\sigma}=\\frac{\\delta_{T,\\sigma}}{\\delta_{K,\\sigma}+\\delta_{T,\\sigma}}, \\qquad\\qquad \\omega_{T,\\sigma}=\\frac{\\delta_{K,\\sigma}}{\\delta_{K,\\sigma}+\\delta_{T,\\sigma}}\n\\end{equation}\nand the penalization parameters by\n\\begin{equation}\n\\gamma_\\sigma=2\\frac{\\delta_{K,\\sigma}\\delta_{T,\\sigma}}{\\delta_{K,\\sigma}+\\delta_{T,\\sigma}}, \\qquad\\qquad \\nu_\\sigma=\\frac{1}{2}|\\bm{\\beta}\\cdot \\bm{n}_\\sigma|.\n\\end{equation}\nIf $\\sigma\\in\\mathcal{F}_{b}$ and $K\\in\\mathcal{M}$ with $\\sigma=\\partial K\\cap \\partial D$ then $\\bm{n}_\\sigma$ is $\\bm{n}_D$ the unit outward normal to $\\partial D$ and\n\\begin{equation}\n\\delta_{K,\\sigma}=\\bm{n}_\\sigma^\\top A|_K\\bm{n}_\\sigma, \\qquad \\omega_{K,\\sigma}=1, \\qquad \\gamma_\\sigma=\\delta_{K,\\sigma}, \\qquad \\nu_\\sigma=\\frac{1}{2}|\\bm{\\beta}\\cdot \\bm{n}_\\sigma|.\n\\end{equation}\nLet $g\\in L^2(\\partial D)$; we define the averages and jumps of $v\\in V(\\mathfrak{T})$ as follows.\nFor $\\sigma\\in\\mathcal{F}_{b}$ with $\\sigma=\\partial K\\cap\\partial D$ we set\n\\begin{equation}\n\\mean{v}_{\\omega,\\sigma}=v|_K, \\qquad\\qquad \\mean{v}_{g,\\sigma}=\\frac{1}{2}(v|_K+g), \\qquad\\qquad \\jump{v}_{g,\\sigma}=v|_K-g\n\\end{equation}\nand for $\\sigma\\in\\mathcal{F}_{i}$ with $\\sigma=\\partial K\\cap\\partial T$\n\\begin{equation}\n\\mean{v}_{\\omega,\\sigma}=\\omega_{K,\\sigma}v|_K+\\omega_{T,\\sigma}v|_T, \\qquad\\qquad\n\\mean{v}_{g,\\sigma}=\\frac{1}{2}(v|_K+v|_T ), \\qquad\\qquad\n\\jump{v}_{g,\\sigma} = v|_K-v|_T.\n\\end{equation}\nWe define $\\jump{\\cdot}_{\\sigma}= \\jump{\\cdot}_{0,\\sigma}$ and $\\mean{\\cdot}_{\\sigma}= \\mean{\\cdot}_{0,\\sigma}$. A similar notation holds for vector-valued functions and whenever no confusion can arise the subscript $\\sigma$ is omitted. Let $h_\\sigma$ be the diameter of $\\sigma$ and $\\eta_\\sigma>0$ a user parameter; for $u,v\\in V(\\mathfrak{T})$ we define the bilinear form\n\\begin{align}\\label{eq:opB}\n\\begin{split}\n\\mathcal{B}(u,v,\\mathfrak{T},g)&=\n\\int_{D} (A\\nabla u\\cdot \\nabla v +(\\mu-\\nabla\\cdot \\bm{\\beta})u v-u\\bm{\\beta}\\cdot \\nabla v)\\dif\\bm{x}\\\\\n&\\quad-\\sum_{\\sigma\\in\\mathcal{F}}\\int_\\sigma(\\jump{v}\\mean{A\\nabla u}_{\\omega}\\cdot \\bm{n}_\\sigma+\\jump{u}_{g}\\mean{A\\nabla v}_{\\omega}\\cdot \\bm{n}_\\sigma)\\dif\\bm{y}\\\\\n&\\quad+\\sum_{\\sigma\\in\\mathcal{F}}\\int_\\sigma ((\\eta_\\sigma\\frac{\\gamma_\\sigma}{h_\\sigma}+\\nu_\\sigma)\\jump{u}_{g}\\jump{v}+\\bm{\\beta}\\cdot\\bm{n}_\\sigma\\mean{u}_{g}\\jump{v})\\dif\\bm{y},\n\\end{split}\n\\end{align}\nwhere the gradients are taken element wise. The bilinear form $\\mathcal{B}(\\cdot,\\cdot,\\mathfrak{T},g)$ will be used to approximate elliptic problems in $D$ with Dirichlet boundary condition $g$. This scheme is known as the Symmetric Weighted Interior Penalty (SWIP) scheme \\cite{ESZ09}. The SWIP method is an improvement of the Interior Penalty scheme (IP) \\cite{Arn82}, where the weights are defined as $\\omega_{K,\\sigma}=\\omega_{T,\\sigma}=1\/2$. 
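Note that on internal faces the penalty $\\gamma_\\sigma=2\\delta_{K,\\sigma}\\delta_{T,\\sigma}\/(\\delta_{K,\\sigma}+\\delta_{T,\\sigma})$ is the harmonic mean of the two normal diffusivities, so that\n\\begin{equation*}\n\\gamma_\\sigma\\leq 2\\min\\{\\delta_{K,\\sigma},\\delta_{T,\\sigma}\\},\n\\end{equation*}\ni.e. the penalization is governed by the less diffusive side of $\\sigma$, while the weights $\\omega_{K,\\sigma},\\omega_{T,\\sigma}$ give more weight to the flux coming from the less diffusive element.\n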
The use of diffusivity-dependent averages increases the robustness of the method for problems with strong diffusion discontinuities. The bilinear form defined in \\cref{eq:opB} is mathematically equivalent to other formulations where $v\\bm{\\beta}\\cdot\\nabla u$ or $\\nabla\\cdot(\\bm{\\beta} u)v$ appear instead of $u\\bm{\\beta}\\cdot\\nabla v$ (see \\cite{ESZ09} and \\cite[Section 4.6.2]{PiE12}). Our choice of formulation is convenient to express local conservation laws (see \\cite[Section 2.2.3]{PiE12}).\n\n\\subsection{Local method algorithm}\\label{sec:localg}\nIn this section we present the local scheme. In order to facilitate the comprehension of the method, we start with an informal description and then provide a pseudo-code for the algorithm. We denote by $u_k$ the global solutions on $\\Omega$ and by $\\hat{u}_k$ the local solutions on $\\Omega_k$, which are used to correct the global solutions.\n\nGiven a discretization $\\mathfrak{T}_1=(\\Omega,\\mathcal{M}_1,\\mathcal{F}_1)$ on $\\Omega$ the local scheme computes a first approximate solution $u_1\\in V(\\mathfrak{T}_1)$ to \\eqref{eq:weak}. The algorithm then performs the following steps for $k=2,\\ldots,M$.\n\\begin{enumerate}[label=\\roman*)]\n\t\\item Given the current solution $u_{k-1}$, identify the region $\\Omega_k$ where the error is large and define a new refined mesh $\\mathcal{M}_k$ satisfying \\cref{ass:mesh} by iterating the following steps.\n\t\\begin{enumerate}[label=\\alph*)]\n\t\t\\item For each element $K\\in\\mathcal{M}_{k-1}$ compute an error indicator $\\eta_{M,K}$ (defined in \\eqref{eq:marketa}) and mark the local domain $\\Omega_{k}$ using the fixed energy fraction marking strategy \\cite[Section 4.2]{Dor96}. Hence, $\\Omega_{k}$ is defined as the union of the elements with largest error indicator $\\eta_{M,K}$ and it is such that the error committed inside of $\\Omega_{k}$ is at least a prescribed fraction of the total error.\n\t\t\\item Define the new mesh ${\\mathcal{M}}_{k}$ by refining the elements $K\\in\\mathcal{M}_{k-1}$ with $K\\subset\\Omega_{k}$.\n\t\t\\item Enlarge the local domain $\\Omega_{k}$ defined at step a) by adding a one-element-wide boundary layer (i.e. in order to satisfy item 2b of \\cref{ass:mesh}).\n\t\t\\item Define the local mesh $\\widehat{\\mathcal{M}}_{k}$ by the elements of $\\mathcal{M}_{k}$ inside of $\\Omega_{k}$.\n\t\\end{enumerate}\n\t\\item Solve a local elliptic problem in $\\Omega_k$ on the refined mesh $\\widehat{\\mathcal{M}}_k$ using $u_{k-1}$ as artificial Dirichlet boundary condition on $\\partial\\Omega_k\\setminus\\partial\\Omega$. The solution is denoted $\\hat{u}_k\\in V(\\widehat{\\mathfrak{T}}_k)$, where $\\widehat{\\mathfrak{T}}_k=(\\Omega_k,\\widehat{\\mathcal{M}}_k,\\widehat{\\mathcal{F}}_k)$.\n\t\\item The local solution $\\hat{u}_k$ is used to correct the previous solution $u_{k-1}$ inside of $\\Omega_k$ and obtain the new global solution $u_k$.\n\\end{enumerate}\nThe pseudo-code of the local scheme is given in \\cref{alg:local}, where $\\chi_{\\Omega\\setminus\\Omega_k}$ is the indicator function of $\\Omega\\setminus\\Omega_k$ and $(\\cdot,\\cdot)_k$ is the inner product in $L^2(\\Omega_k)$. The function $\\text{LocalDomain}(u_k,\\mathfrak{T}_k)$ used in \\cref{alg:local} performs steps a)-d) of i). 
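For instance, the fixed energy fraction marking of step a) can be written as follows: given a user parameter $\\theta\\in(0,1]$, the marked region $\\Omega_k$ is a union of elements of $\\mathcal{M}_{k-1}$ with the largest indicators satisfying\n\\begin{equation*}\n\\sum_{K\\in\\mathcal{M}_{k-1},\\,K\\subset\\Omega_{k}}\\eta_{M,K}^2\\geq\\theta^2\\sum_{K\\in\\mathcal{M}_{k-1}}\\eta_{M,K}^2,\n\\end{equation*}\nwhere the precise normalization of the fraction follows \\cite[Section 4.2]{Dor96}.\n\n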
For purely diffusive problems, it is shown in \\cite[Theorem 8.2]{Ros20} that \\cref{alg:local} is equivalent to the LDGGD introduced in \\cite{AbR19}, hence the scheme converges for exact solutions $u\\in H^1_0(\\Omega)$.\n\n\\begin{algorithm}\n\t\\caption{LocalScheme($\\mathfrak{T}_1$)}\n\t\\label{alg:local}\n\t\\begin{algorithmic}\n\t\t\\State Find $u_1\\in V(\\mathfrak{T}_1)$ solution to $\\mathcal{B}(u_1,v_1,\\mathfrak{T}_1,0)=(f,v_1)_1$ for all $v_1\\in V(\\mathfrak{T}_1)$.\n\t\t\\For{$k=2,\\ldots,M$}\n\t\t\\State $(\\mathfrak{T}_k,\\widehat{\\mathfrak{T}}_{k}) = \\text{LocalDomain}(u_{k-1},\\mathfrak{T}_{k-1})$.\n\t\t\\State $g_k=u_{k-1}\\chi_{\\Omega\\setminus\\Omega_k}\\in V(\\mathfrak{T}_k)$.\n\t\t\\State Find $\\hat{u}_k\\in V(\\widehat{\\mathfrak{T}}_k)$ solution to $\\mathcal{B}(\\hat{u}_k,v_k,\\widehat{\\mathfrak{T}}_k,g_k)=(f,v_k)_k$ for all $v_k\\in V(\\widehat{\\mathfrak{T}}_k)$.\n\t\t\\State $u_k=g_k+\\hat{u}_k\\in V(\\mathfrak{T}_k)$.\n\t\t\\EndFor\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Error estimators via flux and potential reconstructions}\\label{sec:err_flux}\nThe error estimators used to mark the local domains $\\Omega_k$ and to provide error bounds on the numerical solution $u_k$ are introduced here.\n\nIn the framework of self-adjoint elliptic problems, the equilibrated fluxes method \\cite{AiO93,BaW85} is a technique widely used to derive a posteriori error estimators free of undetermined constants and is based on the definition of local fluxes which satisfy a local conservation property. Since local fluxes and conservation properties are intrinsic to the discontinuous Galerkin formulation, this discretization is well suited for the equilibrated fluxes method \\cite{Ain05,CoN08}. In \\cite{ENV07,Kim07} the Raviart-Thomas-N\u00e9d\u00e9lec space is used to build an $H_{\\divop}(\\Omega)$ conforming reconstruction $\\bm{t}_h$ of the discrete diffusive flux $-A\\nabla u_h$. A diffusive flux $\\bm{t}_h$ with optimal divergence, in the sense that its divergence coincides with the orthogonal projection of the right-hand side $f$ onto the discontinuous Galerkin space, is obtained. 
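Concretely, in the pure diffusion case $-\\nabla\\cdot(A\\nabla u)=f$, this optimality of the divergence reads, element by element,\n\\begin{equation*}\n\\int_K(f-\\nabla\\cdot\\bm{t}_h)q\\dif\\bm{x}=0\\qquad\\text{for all }q\\in\\mathbb{P}_\\ell(K),\n\\end{equation*}\nand taking $q=1$ recovers the usual local conservation property.\n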
In \\cite{ESV10} the authors extend this approach to convection-diffusion-reaction equations by defining an $H_{\\divop}(\\Omega)$ conforming convective flux $\\bm{q}_h$ approximating $\\bm{\\beta} u_h$ and satisfying a conservation property.\n\nWe follow a similar strategy and define in the next section error estimators as functions of diffusive and convective flux reconstructions $\\bt_k,\\bq_k$ for the local scheme, as well as of an $H^1_0(\\Omega)$ conforming potential reconstruction $s_k$ of the solution $u_k$.\n\n\\subsection{Definition of the error estimators}\\label{sec:errest}\nThe error estimators are defined in this section in terms of the potential reconstruction $s_k$ approximating the solution $u_k$ and of the diffusive and convective fluxes $\\bt_k$ and $\\bq_k$ approximating $-A\\nabla u_k$ and $\\bm{\\beta} u_k$, respectively.\n\nFollowing the iterative and local nature of our scheme, we define the diffusive and convective flux reconstructions as\n\\begin{equation}\\label{eq:defflux}\n\\bt_k=\\bm{t}_{k-1}\\chi_{\\Omega\\setminus\\Omega_k}+\\hat{\\bt}_k,\\qquad \\bq_k=\\bm{q}_{k-1}\\chi_{\\Omega\\setminus\\Omega_k}+\\hat{\\bq}_k,\n\\end{equation}\nwhere $\\bm{t}_0=\\bm{q}_0=0$ and $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ are $H_{\\divop}(\\Omega_k)$ conforming flux reconstructions of $-A\\nabla \\hat{u}_k$, $\\bm{\\beta} \\hat{u}_k$, respectively, and where $\\hat{u}_k$ is the local solution. To avoid any abuse of notation in \\cref{eq:defflux}, we extend $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ to zero outside of $\\Omega_k$.\nThe flux reconstructions $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ satisfy a local conservation property and are defined in \\cref{sec:potflux}. We readily see that \\cref{eq:defflux} allows for flux jumps at the subdomain boundaries, while giving enough freedom to define $\\hat{\\bt}_k,\\hat{\\bq}_k$ in a way that a conservation property is satisfied. The flux reconstructions are used to measure the non conformity of the numerical fluxes. In the same spirit we define a potential reconstruction $s_k\\in H^1_0(\\Omega)$ used to measure the non conformity of the numerical solution. It is defined recursively as\n\\begin{equation}\\label{eq:defpot}\ns_k = s_{k-1}\\chi_{\\Omega\\setminus\\Omega_k}+\\hat s_k,\n\\end{equation}\nwhere $s_0=0$ and $\\hat s_k\\in H^1(\\Omega_k)$ is such that $s_k\\in H^1_0(\\Omega)$; similarly, we extend $\\hat s_k$ to zero outside of $\\Omega_k$. More details about the definitions of $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ and $\\hat s_k$ will be given in \\cref{sec:potflux}; for the time being we define the error estimators.\n\nFor $K\\in\\mathcal{M}_k$ and $v\\in H^1(K)$, let \n\\begin{equation}\\label{eq:defnBK}\n\\nB{v}_K^2=\\nLddK{A^{1\/2}\\nabla v}^2+\\nLdK{(\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta})^{1\/2}v}^2,\n\\end{equation}\nwhere $\\nLdK{{\\cdot}}$ is the $L^2$-norm for scalar-valued functions in $K$ and $\\nLddK{{\\cdot}}$ the $L^2$-norm for vector-valued functions in $K$. 
The non conformity of the numerical solution $u_k$ is measured by the estimator\n\\begin{subequations}\n\t\\begin{equation}\\label{eq:etaNC}\n\t\\eta_{NC,K}= \\nB{u_k-s_k}_K.\n\t\\end{equation}\n\tIn the following, $m_K$, $\\tilde m_K$, $m_\\sigma$, $C_{t,K,\\sigma}$, $D_{t,K,\\sigma}$, $c_{\\bm{\\beta},\\mu,K}>0$ are known constants which will be defined in \\cref{sec:ctedef}.\n\tThe residual estimator is\n\t\\begin{equation}\\label{eq:etaR}\n\t\\eta_{R,K}= m_K \\nLdK{f-\\nabla\\cdot\\bt_k-\\nabla\\cdot\\bq_k-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k},\n\t\\end{equation}\n\twhich can be seen as the residual of \\eqref{eq:weak} where we first replace $u$ by $u_k$, then $-A\\nabla u_k$ by $\\bt_k$, $\\bm{\\beta} u_k$ by $\\bq_k$ and finally use the Green theorem. The error estimators defined in \\cref{eq:etaDF,eq:etaC1,eq:etaC2,eq:etaU,eq:etaG1,eq:etaG2,eq:etatC1,eq:etatU} measure the error introduced by these substitutions and the error introduced when applying the Green theorem to $\\bt_k,\\bq_k$, which are not in $H_{\\divop}(\\Omega)$.\n\t\n\tThe diffusive flux estimator measures the difference between $-A\\nabla u_k$ and $\\bt_k$. It is given by $\\eta_{DF,K}=\\min\\lbrace \\eta_{DF,K}^1,\\eta_{DF,K}^2\\rbrace$, where\n\t\\begin{equation}\\label{eq:etaDF}\n\t\\begin{aligned}\n\t\\eta_{DF,K}^1 &= \\nLddK{A^{1\/2}\\nabla u_k+A^{-1\/2}\\bt_k},\\\\\n\t\\eta_{DF,K}^2 &= m_K\\nLdK{(\\mathcal{I}-\\pi_0)(\\nabla\\cdot(A\\nabla u_k+\\bt_k))}\\\\\n\t&\\quad +\\tilde{m}_K^{1\/2}\\sum_{\\sigma\\in \\mathcal{F}_K}C_{t,K,\\sigma}^{1\/2}\\nLds{(A\\nabla u_k+\\bt_k)\\cdot\\bm{n}_\\sigma},\n\t\\end{aligned}\n\t\\end{equation}\n\t$\\pi_0$ is the $L^2$-orthogonal projector onto $\\mathbb{P}_0(K)$ and $\\mathcal{I}$ is the identity operator. Let $\\sigma\\in\\mathcal{F}_k$ and $\\pi_{0,\\sigma}$ be the $L^2$-orthogonal projector onto $\\mathbb{P}_0(\\sigma)$. The convection and upwinding estimators, which measure the difference between $\\bm{\\beta} u_k$, $\\bm{\\beta} s_k$ and $\\bq_k$, are defined by\n\t\\begin{align}\\label{eq:etaC1}\n\t\\eta_{C,1,K}&= m_K\\nLdK{(\\mathcal{I}-\\pi_0)(\\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k))},\\\\ \\label{eq:etaC2}\n\t\\eta_{C,2,K}&= \\frac{1}{2}c_{\\bm{\\beta},\\mu,K}^{-1\/2}\\nLdK{(\\nabla\\cdot\\bm{\\beta})(u_k-s_k)},\\\\ \\label{eq:etatC1}\n\t\\tilde \\eta_{C,1,K}&= m_K\\nLdK{(\\mathcal{I}-\\pi_0)(\\nabla\\cdot (\\bq_k-\\bm{\\beta} u_k))},\\\\ \\label{eq:etaU}\n\t\\eta_{U,K} &= \\sum_{\\sigma\\in\\mathcal{F}_K}\\chi_\\sigma m_\\sigma\\nLds{\\pi_{0,\\sigma}\\mean{\\bq_k-\\bm{\\beta} s_k}\\cdot \\bm{n}_\\sigma},\\\\ \\label{eq:etatU}\n\t\\tilde\\eta_{U,K}&= \\sum_{\\sigma\\in\\mathcal{F}_K}\\chi_\\sigma m_\\sigma\\nLds{\\pi_{0,\\sigma}\\mean{\\bq_k-\\bm{\\beta} u_k}\\cdot \\bm{n}_\\sigma},\n\t\\end{align}\n\twhere $\\chi_\\sigma=2$ if $\\sigma\\in\\mathcal{F}_{k,b}$ and $\\chi_\\sigma=1$ if $\\sigma\\in\\mathcal{F}_{k,i}$.\n\tFinally, we introduce the jump estimators coming from the application of the Green theorem to $\\bt_k$ and $\\bq_k$ (see \\cref{lemma:boundBBA}). 
Those are defined by \n\t\\begin{align}\\label{eq:etaG1}\n\t\\eta_{\\Gamma,1,K} &= \\frac{1}{2}(|K|c_{\\bm{\\beta},\\mu,K})^{-1\/2}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\nLus{\\pi_{0,\\sigma}\\jump{\\bq_k}\\cdot\\bm{n}_\\sigma},\\\\ \\label{eq:etaG2}\n\t\\eta_{\\Gamma,2,K} &= \\frac{1}{2}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}} D_{t,K,\\sigma}\\nLds{\\jump{\\bt_k}\\cdot \\bm{n}_\\sigma}.\n\t\\end{align}\n\\end{subequations}\nWe end this section by defining the marking error estimator $\\eta_{M,K}$ used to mark $\\Omega_k$ in the LocalDomain routine of \\cref{alg:local}; let\n\\begin{equation}\\label{eq:marketa}\n\\begin{aligned}\n\\eta_{M,K}&= \\eta_{NC,K}+\\eta_{R,K}+\\eta_{DF,K}+\\eta_{C,1,K}+\\eta_{C,2,K}+\\eta_{U,K}\\\\\n&\\quad +\\eta_{\\Gamma,1,K}+\\eta_{\\Gamma,2,K}+\\tilde\\eta_{C,1,K}+\\tilde\\eta_{U,K}.\n\\end{aligned}\n\\end{equation}\n\n\\subsection{Main results}\\label{sec:thms}\nWe state here our main results related to the a posteriori analysis of the local scheme; in particular we will provide reliable error bounds on the numerical solution $u_k$ which are free of undetermined constants. We will also comment on why we cannot prove the efficiency of the new estimators.\n\n\nWe start by defining the norms for which we provide the error bounds; the same norms are used in \\cite{ESV10}. The operator $\\mathcal{B}$ defined in \\eqref{eq:bform} can be written $\\mathcal{B}=\\mathcal{B}_S+\\mathcal{B}_A$, where $\\mathcal{B}_S$ and $\\mathcal{B}_A$ are symmetric and skew-symmetric operators defined by\n\\begin{equation}\\label{eq:bsba}\n\\begin{aligned}\n\\mathcal{B}_S(u,v)&= \\int_\\Omega (A\\nabla u\\cdot\\nabla v+(\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta})u v)\\dif\\bm{x},\\\\\n\\mathcal{B}_A(u,v)&=\\int_{\\Omega}(\\bm{\\beta}\\cdot\\nabla u+\\frac{1}{2}(\\nabla\\cdot\\bm{\\beta})u)v\\dif\\bm{x},\n\\end{aligned}\n\\end{equation}\nfor $u,v\\in H^1(\\mathcal{M}_k)$. The energy norm is defined by the symmetric operator as\n\\begin{equation}\n\\nB{v}^2 = \\mathcal{B}_S(v,v) = \\nLdd{A^{1\/2}\\nabla v}^2+\\nLd{(\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta})^{1\/2}v}^2.\n\\end{equation}\nObserve that $\\nB{v}^2=\\sum_{K\\in\\mathcal{M}_k}\\nB{v}_K^2$, with $\\nB{{\\cdot}}_K$ as in \\eqref{eq:defnBK}. Since the norm $\\nB{{\\cdot}}$ is defined by the symmetric operator, it is well suited to study problems with dominant diffusion or reaction. On the other hand, it is inappropriate for convection dominated problems since it lacks a term measuring the error along the velocity direction. For this kind of problem we use the augmented norm\n\\begin{equation}\\label{eq:augnorm}\n\\nBp{v}=\\nB{v}+\\sup_{\\substack{w\\in H^1_0(\\Omega)\\\\ \\nB{w}=1}}(\\mathcal{B}_A(v,w)+\\mathcal{B}_J(v,w)),\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathcal{B}_J(v,w)=-\\sum_{\\sigma\\in\\mathcal{F}_{k,i}}\\int_\\sigma \\jump{\\bm{\\beta} v}\\cdot\\bm{n}_\\sigma \\mean{\\pi_0 w}\\dif\\bm{y}\n\\end{equation}\nis a term needed to sharpen the error bounds. The next two theorems give a bound on the error of the local scheme, measured in the energy or the augmented norm.\n\\begin{theorem}\\label{thm:energynormbound}\n\tLet $u\\in H^1_0(\\Omega)$ be the solution to \\eqref{eq:weak}, $u_k\\in V(\\mathfrak{T}_k)$ given by \\cref{alg:local}, $s_k\\in V(\\mathfrak{T}_k)\\cap H^1_0(\\Omega)$ from \\cref{eq:defpot,eq:defhsk} and $\\bt_k,\\bq_k\\in \\mathbf{RTN}_\\mathcalligra{r}(\\mathcal{M}_k)$ be defined by \\cref{eq:defflux,eq:deflocflux}. 
Then, the error measured in the energy norm is bounded as\n\t\\begin{equation}\n\t\\nB{u-u_k}\\leq \\eta = \\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{NC,K}^2\\right)^{1\/2}+\\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{1,K}^2\\right)^{1\/2},\n\t\\end{equation}\n\twhere $\\eta_{1,K}=\\eta_{R,K}+\\eta_{DF,K}+\\eta_{C,1,K}+\\eta_{C,2,K}+\\eta_{U,K}+\\eta_{\\Gamma,1,K}+\\eta_{\\Gamma,2,K}$.\n\\end{theorem}\n\\begin{theorem}\\label{thm:augmentednormbound}\n\tUnder the same assumptions as in \\cref{thm:energynormbound}, the error measured in the augmented norm is bounded as\n\t\\begin{equation}\n\t\\nBp{u-u_k}\\leq \\tilde\\eta = 2\\eta +\\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{2,K}^2\\right)^{1\/2},\n\t\\end{equation}\n\twith $\\eta$ from \\cref{thm:energynormbound} and $\\eta_{2,K}=\\eta_{R,K}+\\eta_{DF,K}+\\tilde\\eta_{C,1,K}+\\tilde\\eta_{U,K}+\\eta_{\\Gamma,1,K}+\\eta_{\\Gamma,2,K}$.\n\\end{theorem}\nThe error estimators of \\cref{thm:energynormbound,thm:augmentednormbound} are free of undetermined constants: indeed, they depend only on the numerical solution, the smallest eigenvalues of the diffusion tensor, the essential minimum of $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}$, the mesh size and known geometric constants. In contrast, the error estimators are not efficient. The reason is that, compared to the true errors $\\nB{u-u_k}$ and $\\nBp{u-u_k}$, the error estimators $\\eta_{\\Gamma,1,K},\\eta_{\\Gamma,2,K}$ have a lower order of convergence. We illustrate this numerically in \\cref{exp:conv}.\nHowever, $\\eta_{\\Gamma,1,K},\\eta_{\\Gamma,2,K}$ are useful in practice: whenever they are small, the error estimators are efficient. When they become large, they indicate that the error is not localized and one should switch to a nonlocal method. This is also illustrated numerically in \\cref{exp:conv}.\n\n\n\\section{Potential and flux reconstructions, proofs of the main results}\\label{sec:errbound}\nIn this section, we define the potential, diffusive and convective flux reconstructions, specify the geometric constants appearing in the error estimators of \\cref{eq:etaNC,eq:etaR,eq:etaDF,eq:etaC1,eq:etaC2,eq:etaU,eq:etaG1,eq:etaG2,eq:etatC1,eq:etatU}, and finally prove \\cref{thm:energynormbound,thm:augmentednormbound}.\n\n\\subsection{Potential and flux reconstructions via the equilibrated flux method}\\label{sec:potflux}\nWe define here the flux reconstructions $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ of \\eqref{eq:defflux} and the potential reconstruction $\\hat s_k$ of \\eqref{eq:defpot}. In what follows we assume that $\\mathcal{M}_k$ does not have hanging nodes, i.e. we consider matching meshes, since it simplifies the analysis; however, in practice nonmatching meshes possessing hanging nodes can be employed (as in \\cref{sec:num}). Roughly speaking, the next results are extended to nonmatching meshes by building matching submeshes and computing the error estimators on those submeshes; we refer to \\cite[Appendix]{ESV10} for the details.\n\nWe start by defining some broken Sobolev spaces and then the potential and flux reconstructions. 
For $k=1,\\ldots,M$ let $\\mathcal{G}_k=\\{G_j\\,|\\, j=1,\\ldots,k\\}$, where $G_k=\\Omega_k$ and \n\\begin{equation}\nG_j =\\Omega_j\\setminus\\cup_{i=j+1}^{k}\\overline{\\Omega}_{i} \\qquad \\text{for }j=1,\\ldots,k-1.\n\\end{equation}\nIn \\cref{fig:Omegak,fig:Dk} we give an example of a sequence of domains $\\Omega_k$ and the corresponding set $\\mathcal{G}_k$.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\t\\centering\n\t\t\t\\captionsetup{justification=centering}\n\t\t\t\\begin{tikzpicture}[scale=1]\n\t\t\t\\draw (0,0) rectangle (4,4);\n\t\t\n\t\t\t\\draw (2,2) rectangle (4,4);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,2.2)--(4,2.2);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,2.4)--(4,2.4);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,2.6)--(4,2.6);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,2.8)--(4,2.8);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3)--(4,3);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3.2)--(4,3.2);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3.4)--(4,3.4);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3.6)--(4,3.6);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3.8)--(4,3.8);\n\t\t\n\t\t\t\\draw (3,1) rectangle (4,3);\n\t\t\t\\draw[color=OliveGreen,line width=1pt] (3.2,1)--(3.2,3);\n\t\t\t\\draw[color=OliveGreen,line width=1pt] (3.4,1)--(3.4,3);\n\t\t\t\\draw[color=OliveGreen,line width=1pt] (3.6,1)--(3.6,3);\n\t\t\t\\draw[color=OliveGreen,line width=1pt] (3.8,1)--(3.8,3);\n\t\t\t\\end{tikzpicture}\n\t\t\n\t\t\t\\caption{Sequence of domains\\\\$\\Omega_1$= \\tikz \\draw (0,0) rectangle (10pt,10pt);, $\\Omega_2$= \\tikz{\\draw(0,0) rectangle (10pt,10pt);\\draw[color=NavyBlue,line width=1pt] (0,6.6pt)--(10pt,6.6pt);\\draw[color=NavyBlue,line width=1pt] (0,3.3pt)--(10pt,3.3pt);}, $\\Omega_3$= \\tikz{\\draw(0,0) rectangle (10pt,10pt);\\draw[color=OliveGreen,line width=1pt] (6.6pt,0pt)--(6.6pt,10pt);\\draw[color=OliveGreen,line width=1pt] (3.3pt,0pt)--(3.3pt,10pt);} .}\n\t\t\t\\label{fig:Omegak}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\t\\centering\n\t\t\t\\captionsetup{justification=centering}\n\t\t\t\\begin{tikzpicture}[scale=1]\n\t\t\t\\draw[pattern=dots] (0,0) rectangle (4,4);\n\t\t\n\t\t\n\t\t\t\\draw[fill=Goldenrod] (2,2) rectangle (4,4);\n\t\t\n\t\t\n\t\t\t\\draw[fill=BrickRed] (3,1) rectangle (4,3);\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Set $\\mathcal{G}_3=\\{G_1,G_2,G_3\\}$ with\\\\ $G_1$= \\tikz \\draw[pattern=dots] (0,0) rectangle (10pt,10pt);, $G_2$= \\tikz \\draw[fill=Goldenrod] (0,0) rectangle (10pt,10pt);, $G_3$= \\tikz \\draw[fill = BrickRed] (0,0) rectangle (10pt,10pt); .}\n\t\t\t\\label{fig:Dk}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\t\\centering\n\t\t\t\\captionsetup{justification=centering}\n\t\t\t\\begin{tikzpicture}[scale=1]\n\t\t\t\\draw[draw=none] (0,0) rectangle (4,4);\n\t\t\t\\draw[color=YellowOrange, line width=1pt, solid] (2,4)--(2,2)--(3,2);\n\t\t\t\\draw[color=PineGreen, line width=1pt, densely dotted] (3,2)--(3,1)--(4,1);\n\t\t\t\\draw[color=Purple, line width=1pt, loosely dashed] (3,2)--(3,3)--(4,3);\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Skeleton $\\Gamma_3$ with \\\\$\\partial G_1\\cap\\partial G_2$= \\tikz\\draw[color=YellowOrange, line width=1pt, solid] (0,0)--(10pt,0pt);, $\\partial G_1\\cap\\partial G_3$= \\tikz \\draw[color=PineGreen, line width=1pt, densely dotted] (0,0) -- (10pt,0pt);,\\\\$\\partial G_2\\cap\\partial G_3$= \\tikz \\draw[color=Purple, line width=1pt, loosely dashed] 
(0,0) -- (15pt,0pt);.}
			\label{fig:Gk}
		\end{subfigure}
	\end{center}
	\caption{Example of sequence of domains $\Omega_1,\Omega_2,\Omega_3$, set $\mathcal{G}_3$ and skeleton $\Gamma_3$.}
	\label{fig:illustrationDk}
\end{figure}
We define the broken spaces
\begin{align}
H_{\divop}(\mathcal{G}_k) &= \{\bm{v}\in L^2(\Omega)^d\,:\, \bm{v}|_G\in H_{\divop}(G)\text{ for all }G\in \mathcal{G}_k\},\\
H^1({\mathcal{M}}_k)&=\{v\in L^2(\Omega)\,:\,v|_K\in H^1(K)\text{ for all }K\in\mathcal{M}_k\};
\end{align}
the divergence and gradient operators in $H_{\divop}(\mathcal{G}_k)$ and $H^1(\mathcal{M}_k)$ are taken elementwise.
We extend the jump operator $\jump{\cdot}_\sigma$ to the broken space $H^1(\mathcal{M}_k)$. We call $\Gamma_k$ the internal skeleton of $\mathcal{G}_k$, that is
\begin{equation}
\Gamma_k=\{\partial G_i\cap\partial G_j\,|\, G_i,G_j\in\mathcal{G}_k,\, i\neq j\};
\end{equation}
an example of $\Gamma_k$ is given in \cref{fig:Gk}.
For each $\gamma\in\Gamma_k$ we define $\mathcal{F}_\gamma = \{\sigma\in\mathcal{F}_{k,i}\,|\,\sigma\subset \gamma\}$ and set $\bm{n}_\gamma$, the normal to $\gamma$, as $\bm{n}_\gamma|_\sigma=\bm{n}_\sigma$. The jump $\jump{\cdot}_\gamma$ on $\gamma$ is defined by $\jump{\cdot}_\gamma|_\sigma=\jump{\cdot}_\sigma$.


In \cite{ESV10} the reconstructed fluxes live in $H_{\divop}(\Omega)$. For the local algorithm we need to build such fluxes using the recursive relation \eqref{eq:defflux}. This leads to fluxes having jumps across the boundaries of the subdomains, i.e. across $\gamma\in\Gamma_k$; hence they lie in the broken space $H_{\divop}(\mathcal{G}_k)$. In the rest of this section we explain how to build fluxes which are in an approximation space of $H_{\divop}(\mathcal{G}_k)$ and satisfy a local conservation property.
We start by introducing a broken version of the usual Raviart-Thomas-Nédélec spaces \cite{Ned80,RaT77}, which we define as
\begin{equation}\label{eq:RTN}
\mathbf{RTN}_\mathcalligra{r}(\mathcal{M}_k):=\{\bm{v}_k\in H_{\divop}(\mathcal{G}_k)\,:\, \bm{v}_k|_K\in\mathbf{RTN}_\mathcalligra{r}(K)\text{ for all }K\in\mathcal{M}_k\},
\end{equation}
where $\mathcalligra{r}\in\{\ell-1,\ell\}$ and $\mathbf{RTN}_\mathcalligra{r}(K)=\mathbb{P}_\mathcalligra{r}(K)^d+\bm{x} \mathbb{P}_{\mathcalligra{r}}(K)$. In order to build functions in $\mathbf{RTN}_\mathcalligra{r}(\mathcal{M}_k)$ we need a characterization of this space.
Let $\bm{v}_k\in L^2(\Omega)^d$ be such that $\bm{v}_k|_K\in\mathbf{RTN}_\mathcalligra{r}(K)$ for each $K\in\mathcal{M}_k$; it is known that $\bm{v}_k\in H_{\divop}(\Omega)$ if and only if $\jump{\bm{v}_k}_\sigma\cdot\bm{n}_\sigma=0$ for all $\sigma\in\mathcal{F}_{k,i}$ (see \cite[Lemma 1.24]{PiE12}).
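For instance, in the lowest-order case $\mathcalligra{r}=0$ with $d=2$ one has $\mathbf{RTN}_0(K)=\{\bm{a}+b\bm{x}\,:\,\bm{a}\in\mathbb{R}^2,\,b\in\mathbb{R}\}$; since $\bm{x}\cdot\bm{n}_\sigma$ is constant along each straight face $\sigma$, the normal trace $\bm{v}_k\cdot\bm{n}_\sigma$ is constant on every face, and the above characterization says that a piecewise $\mathbf{RTN}_0$ field belongs to $H_{\divop}(\Omega)$ precisely when these facewise constant normal traces agree from both sides of each interior face.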
Since we search for fluxes $\bm{v}_k$ in $H_{\divop}(\mathcal{G}_k)$, we relax this condition and allow $\jump{\bm{v}_k}_\gamma\cdot\bm{n}_\gamma\neq 0$ for $\gamma\in\Gamma_k$.

\begin{lemma}
	Let $\bm{v}_k\in L^2(\Omega)^d$ be such that $\bm{v}_k|_K\in\mathbf{RTN}_\mathcalligra{r}(K)$ for each $K\in\mathcal{M}_k$. Then $\bm{v}_k\in \mathbf{RTN}_\mathcalligra{r}(\mathcal{M}_k)$ if and only if $\jump{\bm{v}_k}_\sigma\cdot\bm{n}_\sigma=0$ for all $\sigma\in\mathcal{F}_{k,i}\setminus\cup_{\gamma\in\Gamma_k}\mathcal{F}_\gamma$.
\end{lemma}
\begin{proof}
	The proof follows the lines of \cite[Lemma 1.24]{PiE12}.
\end{proof}
The diffusive and convective fluxes $\bt_k,\bq_k\in \mathbf{RTN}_\mathcalligra{r}(\mathcal{M}_k)$ are defined recursively as in \eqref{eq:defflux}, where $\hat{\bt}_k,\hat{\bq}_k\in \mathbf{RTN}_\mathcalligra{r}(\widehat{\mathcal{M}}_k)$, with
\begin{equation}
\mathbf{RTN}_\mathcalligra{r}(\widehat{\mathcal{M}}_k):=\{\bm{v}_k\in H_{\divop}(\Omega_k)\,:\, \bm{v}_k|_K\in\mathbf{RTN}_\mathcalligra{r}(K)\text{ for all }K\in\widehat{\mathcal{M}}_k\},
\end{equation}
are given by the relations
\begin{subequations}\label{eq:deflocflux}
	\begin{equation}\label{eq:deflocflux1}
	\begin{aligned}
	\int_\sigma \hat{\bt}_k\cdot\bm{n}_\sigma p_k\dif\bm{y}&= \int_\sigma (-\mean{A\nabla \hat{u}_k}_\omega\cdot\bm{n}_\sigma+\eta_\sigma\frac{\gamma_\sigma}{h_\sigma}\jump{\hat{u}_k}_{g_k})p_k\dif\bm{y},\\
	\int_\sigma \hat{\bq}_k\cdot\bm{n}_\sigma p_k\dif\bm{y} &= \int_\sigma (\bm{\beta}\cdot\bm{n}_\sigma\mean{\hat{u}_k}_{g_k}+\nu_\sigma\jump{\hat{u}_k}_{g_k})p_k\dif\bm{y}
	\end{aligned}
	\end{equation}
	for all $\sigma\in\widehat{\mathcal{F}}_k$ and $p_k\in \mathbb{P}_\mathcalligra{r}(\sigma)$ and
	\begin{equation}\label{eq:deflocflux2}
	\begin{aligned}
	\int_K \hat{\bt}_k \cdot\hat{\br}_k\dif\bm{x} &= -\int_K A\nabla\hat{u}_k\cdot\hat{\br}_k\dif\bm{x}+\sum_{\sigma\in\mathcal{F}_K}\int_\sigma\omega_{K,\sigma}\jump{\hat{u}_k}_{g_k} A|_K\hat{\br}_k\cdot\bm{n}_\sigma\dif\bm{y},\\
	\int_K\hat{\bq}_k\cdot\hat{\br}_k\dif\bm{x} &= \int_K \hat{u}_k\bm{\beta}\cdot\hat{\br}_k\dif\bm{x}
	\end{aligned}
	\end{equation}
\end{subequations}
for all $K\in\widehat{\mathcal{M}}_k$ and $\hat{\br}_k\in\mathbb{P}_{\mathcalligra{r}-1}(K)^d$. Since $\hat{\bt}_k|_K\cdot\bm{n}_\sigma$, $\hat{\bq}_k|_K\cdot\bm{n}_\sigma\in\mathbb{P}_\mathcalligra{r}(\sigma)$ (see \cite[Proposition 3.2]{BrF91}), relation \eqref{eq:deflocflux1} defines $\hat{\bt}_k|_K\cdot\bm{n}_\sigma$, $\hat{\bq}_k|_K\cdot\bm{n}_\sigma$ on $\sigma$. The remaining degrees of freedom are fixed by \eqref{eq:deflocflux2} \cite[Proposition 3.3]{BrF91}.
Thanks to \eqref{eq:deflocflux1} we have $\jump{\hat{\bt}_k}\cdot\bm{n}_\sigma=0$ and $\jump{\hat{\bq}_k}\cdot\bm{n}_\sigma=0$ for $\sigma\in\widehat{\mathcal{F}}_{k,i}$, and hence $\hat{\bt}_k,\hat{\bq}_k\in \mathbf{RTN}_\mathcalligra{r}(\widehat{\mathcal{M}}_k)$. By construction it follows that $\bt_k,\bq_k\in \mathbf{RTN}_\mathcalligra{r}(\mathcal{M}_k)$.

Let $K\in\mathcal{M}_k$ and let $\pi_\mathcalligra{r}$ be the $L^2$-orthogonal projector onto $\mathbb{P}_\mathcalligra{r}(K)$. The following lemma states a local conservation property of the reconstructed fluxes.
The proof follows the lines of \cite[Lemma 2.1]{ESV10}.
\begin{lemma}\label{lemma:cons}
	Let $u_k\in V(\mathfrak{T}_k)$ be given by \cref{alg:local} and $\bt_k,\bq_k\in H_{\divop}(\mathcal{G}_k)$ be defined by \cref{eq:defflux,eq:deflocflux}. For all $K\in\mathcal{M}_k$ it holds
	\begin{equation}\label{eq:cons}
	(\nabla\cdot \bt_k+\nabla\cdot\bq_k+\pi_\mathcalligra{r}((\mu-\nabla\cdot\bm{\beta})u_k))|_K = \pi_\mathcalligra{r} f|_K.
	\end{equation}
\end{lemma}
\begin{proof}
	Let $K\in\mathcal{M}_k$ and $j=\max\{i=1,\ldots,k\,:\, K\subset\Omega_i\}$, so that $K\in\widehat{\mathcal{M}}_j$, $\bt_k|_K=\hat{\bt}_j|_K$, $\bq_k|_K=\hat{\bq}_j|_K$ and $u_k|_K=\hat u_j|_K$. Let $v_j\in \mathbb{P}_{\mathcalligra{r}}(K)$, extended by $v_j=0$ outside of $K$. By Green's theorem we have
	\begin{equation}\label{eq:greentq}
	\int_K (\nabla\cdot \hat{\bt}_j+\nabla\cdot\hat{\bq}_j)v_j\dif \bm{x} = -\int_K (\hat{\bt}_j+\hat{\bq}_j)\cdot\nabla v_j\dif \bm{x} +\sum_{\sigma\in\mathcal{F}_K}\int_\sigma v_j(\hat{\bt}_j+\hat{\bq}_j)\cdot\bm{n}_K\dif \bm{y}
	\end{equation}
	and using $\mathcal{B}(\hat u_j,v_j,\widehat{\mathfrak{T}}_j,g_j)=(f,v_j)_j$ it follows that
	\begin{equation}
	\begin{aligned}
	\int_K f v_j \dif \bm{x} &= \int_K (A\nabla \hat u_j\cdot \nabla v_j+(\mu-\nabla\cdot \bm{\beta})\hat u_j v_j-\hat{u}_j\bm{\beta}\cdot \nabla v_j)\dif \bm{x}\\
	&\quad -\sum_{\sigma\in\mathcal{F}_K}\int_\sigma(\jump{v_j}\mean{A\nabla \hat{u}_j}_{\omega}\cdot \bm{n}_\sigma+\jump{\hat{u}_j}_{g_j}\mean{A\nabla v_j}_{\omega}\cdot \bm{n}_\sigma)\dif \bm{y}\\
	&\quad +\sum_{\sigma\in\mathcal{F}_K}\int_\sigma ((\eta_\sigma\frac{\gamma_\sigma}{h_\sigma}+\nu_\sigma)\jump{\hat{u}_j}_{g_j}\jump{v_j}+\bm{\beta}\cdot\bm{n}_\sigma\mean{\hat{u}_j}_{g_j}\jump{v_j})\dif \bm{y}.
	\end{aligned}
	\end{equation}
	Since $\mean{A\nabla v_j}_\omega =\omega_{K,\sigma}A|_K\nabla v_j$ and $\jump{v_j}\bm{n}_\sigma=v_j|_K\bm{n}_K$, using \cref{eq:deflocflux,eq:greentq} we obtain
	\begin{equation}\label{eq:precons}
	\int_K f v_j \dif \bm{x} = \int_K (\nabla\cdot \hat{\bt}_j+\nabla\cdot\hat{\bq}_j+(\mu-\nabla\cdot\bm{\beta})\hat u_j)v_j\dif \bm{x}
	\end{equation}
	and the result follows from $\nabla\cdot\hat{\bt}_j,\nabla\cdot\hat{\bq}_j\in\mathbb{P}_\mathcalligra{r}(K)$, $\bt_k|_K=\hat{\bm{t}}_j|_K$, $\bq_k|_K=\hat{\bm{q}}_j|_K$ and $u_k|_K=\hat{u}_j|_K$.
\end{proof}

In order to define the $H^1_0(\Omega)$-conforming approximation $s_k$ of $u_k$ we need the so-called Oswald operator, already considered in \cite{KaP03} for a posteriori estimates. Let $\mathfrak{T}=(D,\mathcal{M},\mathcal{F})$, let $g\in C^0(\partial D)$ and consider $\mathcal{O}_{\mathfrak{T},g}:V(\mathfrak{T})\rightarrow V(\mathfrak{T})\cap H^1(D)$. For a function $v\in V(\mathfrak{T})$, the value of $\mathcal{O}_{\mathfrak{T},g} v$ is prescribed at the Lagrange interpolation nodes $p$ of the conforming finite element space $V(\mathfrak{T})\cap H^1(D)$. Let $p\in \overline{D}$ be a Lagrange node; if $p\notin \partial D$ we set
\begin{equation}
\mathcal{O}_{\mathfrak{T},g}v(p)=\frac{1}{\# \mathcal M_p}\sum_{K\in\mathcal{M}_p}v|_K(p),
\end{equation}
where $\mathcal M_{p}=\{K\in\mathcal{M}\,:\,p\in\overline{K}\}$.
If instead $p\\in\\partial D$ then $\\mathcal{O}_{\\mathfrak{T},g}v(p)=g(p)$, where $g$ is the Dirichlet condition on $\\partial D$. The reconstructed potential $s_k\\in V(\\mathfrak{T}_k)\\cap H^1_0(\\Omega)$ is built as in \\eqref{eq:defpot}, where\n\\begin{equation}\\label{eq:defhsk}\n\\hat s_k = \\mathcal{O}_{\\widehat{\\mathfrak{T}}_k,s_{k-1}} \\hat{u}_k.\n\\end{equation}\n\n\\subsection{Constants definition and preliminary results}\\label{sec:ctedef}\nHere we define the constants appearing in \\cref{eq:etaNC,eq:etaR,eq:etaDF,eq:etaC1,eq:etaC2,eq:etaU,eq:etaG1,eq:etaG2,eq:etatC1,eq:etatU} and derive preliminary results needed to prove \\cref{thm:energynormbound,thm:augmentednormbound}.\n\n\nLet $K\\in\\mathcal{M}_k$ and $\\sigma\\in\\mathcal{F}_K$, we recall that $|K|$ is the measure of $K$ and $|\\sigma|$ the $d-1$ dimensional measure of $\\sigma$. We denote by $c_{A,K}$ the minimal eigenvalue of $A|_K$. Next, we denote by $c_{\\bm{\\beta},\\mu,K}$ the essential minimum of $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}\\geq 0$ on $K$. \nIn what follows we will assume that $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}>0$ a.e. in $\\Omega$, hence $c_{\\bm{\\beta},\\mu,K}>0$ for all $K\\in\\mathcal{M}_k$, and provide error estimators under this assumption. We explain in \\cref{sec:altbounds} how to overcome this limitation slightly modifying the proofs and error estimators.\n\nThe cutoff functions $m_K,\\tilde m_K$ and $m_\\sigma$ are defined by \n\\begin{subequations} \\label{eq:cutoff}\n\t\\begin{align} \\label{eq:mK}\n\tm_K =& \\min\\{ C_p^{1\/2}h_K c_{A,K}^{-1\/2},c_{\\bm{\\beta},\\mu,K}^{-1\/2}\\},\\\\ \\label{eq:tmK}\n\t\\tilde m_K=& \\min\\{ (C_p+C_p^{1\/2})h_Kc_{A,K}^{-1}, h_K^{-1}c_{\\bm{\\beta},\\mu,K}^{-1}+c_{\\bm{\\beta},\\mu,K}^{-1\/2}c_{A,K}^{-1\/2}\/2\\},\\\\ \\label{eq:ms}\n\tm_\\sigma^2=& \\min\\lbrace \\max_{K\\in\\mathcal{M}_\\sigma}\\{3d|\\sigma|h_K^2|K|^{-1}c_{A,K}^{-1}\\},\\max_{K\\in\\mathcal{M}_\\sigma}\\{|\\sigma||K|^{-1}c_{\\bm{\\beta},\\mu,K}^{-1}\\}\\rbrace,\n\t\\end{align}\n\\end{subequations}\nwhere $C_p=1\/\\pi^2$ is an optimal Poincar\u00e9 constant for convex domains \\cite{PaW60}. Let $v\\in H^1(\\mathcal{M}_k)$, it holds\n\\begin{subequations}\\label{eq:bounds}\n\t\\begin{align} \\label{eq:bounds1}\n\t\\nLdK{v-\\pi_0 v}&\\leq m_K \\nB{v}_K & \\text{for all }& K\\in\\mathcal{M}_k,\\\\ \\label{eq:bounds2}\n\t\\nLds{v-\\pi_0 v|_K}&\\leq C_{t,K,\\sigma}^{1\/2}\\tilde{m}_K^{1\/2}\\nB{v}_K & \\text{for all }& \\sigma\\in \\mathcal{F}_k \\text{ and } K\\in\\mathcal{M}_\\sigma,\\\\ \\label{eq:bounds3}\n\t\\nLds{\\jump{\\pi_0 v}}&\\leq m_\\sigma\\sum_{K\\in\\mathcal{M}_\\sigma}\\nB{v}_K & \\text{for all }& \\sigma\\in\\mathcal{F}_k,\n\t\\end{align}\n\\end{subequations}\nwhere $\\mathcal{M}_\\sigma = \\{K\\in\\mathcal{M}_k\\,:\\, \\sigma\\subset\\partial K\\}$ and $C_{t,K,\\sigma}$ is the constant of the trace inequality\n\\begin{equation}\\label{eq:trace}\n\\nLds{v|_K}^2\\leq C_{t,K,\\sigma}(h_K^{-1}\\nLdK{v}^2+\\nLdK{v}\\nLddK{\\nabla v}).\n\\end{equation}\nIt has been proved in \\cite[Lemma 3.12]{Ste07} that for a simplex it holds $C_{t,K,\\sigma}=|\\sigma|h_K\/|K|$. \n\nLet us briefly explain the role of constants \\eqref{eq:cutoff} and how the bounds \\eqref{eq:bounds} are obtained. We observe that for each bound in \\eqref{eq:bounds} the cut off functions take the minimum between two possible values, allowing for robust error estimation in singularly perturbed regimes. 
For \\eqref{eq:bounds1}, using the Poincar\u00e9 inequality \\cite[equation 3.2]{PaW60} we have\n\\begin{subequations}\n\t\\begin{equation}\\label{eq:bounds1a}\n\t\\begin{aligned}\n\t\\nLdK{v-\\pi_0 v}&\\leq C_p^{1\/2} h_K \\nLddK{\\nabla v}\\\\\n\t& \\leq C_p^{1\/2}h_Kc_{A,K}^{-1\/2}\\nLddK{A^{1\/2}\\nabla v}\\leq C_p^{1\/2}h_Kc_{A,K}^{-1\/2}\\nB{v}_K.\n\t\\end{aligned}\n\t\\end{equation}\n\tDenoting $(\\cdot,\\cdot)_K$ the $L^2(K)$ inner product, it holds\n\t\\begin{equation}\n\t\\nLdK{v-\\pi_0 v}^2=(v-\\pi_0 v,v-\\pi_0 v)_K=(v-\\pi_0 v,v)_K\\leq \\nLdK{v-\\pi_0 v}\\nLdK{v},\n\t\\end{equation}\n\thence\n\t\\begin{equation}\\label{eq:bounds1b}\n\t\\nLdK{v-\\pi_0 v}\\leq \\nLdK{v} \\leq c_{\\bm{\\beta},\\mu,K}^{-1\/2}\\nLdK{(\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta})^{1\/2}v}\\leq c_{\\bm{\\beta},\\mu,K}^{-1\/2}\\nB{v}_K\n\t\\end{equation}\n\\end{subequations}\nand \\eqref{eq:bounds1} follows. The choice between bounds \\cref{eq:bounds1a,eq:bounds1b} depends on whether the problem is singularly perturbed or not. Bounds \\eqref{eq:bounds2} and \\eqref{eq:bounds3} are obtained similarly, see \\cite[Lemma 4.2]{CFP09} and \\cite[Lemma 4.5]{Voh08}. Finally, for $K\\in\\mathcal{M}_k$ and $\\sigma\\in \\mathcal{F}_K$ we define\n\\begin{equation}\\label{eq:Dk}\nD_{t,K,\\sigma}=\\left(\\frac{C_{t,K,\\sigma}}{2 h_K c_{\\bm{\\beta},\\mu,K}}\\left(1+\\sqrt{1+h_K^2\\frac{c_{\\bm{\\beta},\\mu,K}}{c_{A,K}}}\\right)\\right)^{1\/2},\n\\end{equation}\nwhich is used to bound $\\nLds{v|_K}$ in terms of $\\nB{v}_K$ in the next lemma.\n\\begin{lemma}\\label{lemma:boundsigma}\n\tLet $v_k\\in H^1(\\mathcal{M}_k)$, for each $K\\in\\mathcal{M}_k$ and $\\sigma\\in \\mathcal{F}_K$ it holds\n\t\\begin{equation}\n\t\\nLds{v_k|_K}\\leq D_{t,K,\\sigma} \\nB{v_k}_K.\n\t\\end{equation}\n\\end{lemma}\n\\begin{proof}\n\tLet $v_k\\in H^1(\\mathcal{M}_k)$ and $\\epsilon>0$. Applying H\u00f6lder inequality to the trace inequality \\cref{eq:trace} we get\n\t\\begin{equation}\n\t\\nLds{v_k|_K}^2 \\leq C_{t,K,\\sigma}((h_K^{-1}+\\frac{1}{2\\epsilon})\\nLdK{v_k}^2+\\frac{\\epsilon}{2}\\nLddK{\\nabla v_k}^2).\n\t\\end{equation}\n\tHence, if there exists $D_{t,K,\\sigma}>0$ independent of $v_k$ such that\n\t\\begin{equation}\\label{eq:Dkeps}\n\t\\begin{aligned}\n\tC_{t,K,\\sigma}((h_K^{-1}+\\frac{1}{2\\epsilon})\\nLdK{v_k}^2+&\\frac{\\epsilon}{2}\\nLddK{\\nabla v_k}^2)\\\\\n\t& \\leq D_{t,K,\\sigma}^2 (c_{A,K}\\nLddK{\\nabla v_k}^2+c_{\\bm{\\beta},\\mu,K}\\nLdK{v_k}^2) \n\t\\end{aligned}\n\t\\end{equation}\n\tthen $\\nLds{v_k|_K}^2\\leq D_{t,K,\\sigma}^2 \\nB{v_k}^2_K$ and the result holds. Relation \\eqref{eq:Dkeps} holds if\n\t\\begin{equation}\n\tC_{t,K,\\sigma}(h_K^{-1}+\\frac{1}{2\\epsilon})\\leq D_{t,K,\\sigma}^2c_{\\bm{\\beta},\\mu,K}, \\qquad\\qquad C_{t,K,\\sigma}\\frac{\\epsilon}{2} \\leq D_{t,K,\\sigma}^2c_{A,K}\n\t\\end{equation}\n\tand hence $D_{t,K,\\sigma}^2=\\max\\{C_{t,K,\\sigma}(h_K^{-1}+\\frac{1}{2\\epsilon})c_{\\bm{\\beta},\\mu,K}^{-1},C_{t,K,\\sigma}\\frac{\\epsilon}{2}c_{A,K}^{-1}\\}$.\n\tTaking $\\epsilon$ such that the maximum is minimized we get $D_{t,K,\\sigma}$ as in \\cref{eq:Dk}.\n\\end{proof}\nThe proof of the following Lemma is inspired from \\cite[Theorem 3.1]{ESV10}, the main difference is that we take into account the weaker regularity of the reconstructed fluxes. 
\n\\begin{lemma}\\label{lemma:boundBBA}\n\tLet $u\\in H^1_0(\\Omega)$ be the solution to \\eqref{eq:weak}, $u_k\\in V(\\mathfrak{T}_k)$ given by \\cref{alg:local}, $s_k\\in H^1_0(\\Omega)$ from \\cref{eq:defpot,eq:defhsk}, $\\bt_k,\\bq_k\\in H_{\\divop}(\\mathcal{G}_k)$ defined by \\cref{eq:defflux,eq:deflocflux} and $v\\in H^1_0(\\Omega)$. Then\n\t\\begin{equation}\n\t|\\mathcal{B}(u -u_k ,v)+\\mathcal{B}_A(u_k-s_k,v)| \\leq \\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{1,K}^2\\right)^{1\/2}\\nB{v},\n\t\\end{equation}\n\twith $\\eta_{1,K}=\\eta_{R,K}+\\eta_{DF,K}+\\eta_{C,1,K}+\\eta_{C,2,K}+\\eta_{U,K}+\\eta_{\\Gamma,1,K}+\\eta_{\\Gamma,2,K}$.\n\\end{lemma}\n\\begin{proof}\n\tSince $u$ satisfies \\eqref{eq:weak}, using the definition of $\\mathcal{B}$ and $\\mathcal{B}_A$\n\t\\begin{align}\n\t\\mathcal{B}(u-u_k,v)+\\mathcal{B}_A(u_k-s_k,v) \n\t&= \\int_\\Omega (f-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k)v\\dif \\bm{x} -\\int_\\Omega A\\nabla u_k\\cdot \\nabla v\\dif \\bm{x}\\\\\n\t&\\quad -\\int_\\Omega \\frac{1}{2}(\\nabla\\cdot\\bm{\\beta})(u_k-s_k)v\\dif \\bm{x} -\\int_\\Omega \\nabla\\cdot(\\bm{\\beta} s_k)v\\dif \\bm{x}.\n\t\\end{align}\n\tUsing $v \\bt_k\\in H_{\\divop}(\\mathcal{G}_k)$, from the divergence theorem we have\n\t\\begin{align}\n\t\\int_\\Omega (v\\nabla\\cdot \\bt_k +\\nabla v\\cdot\\bt_k)\\dif \\bm{x} &= \\sum_{G\\in\\mathcal{G}_k}\\int_{G}\\nabla\\cdot(v\\bt_k)\\dif \\bm{x} =\\sum_{G\\in\\mathcal{G}_k}\\int_{\\partial G} v\\bt_k\\cdot\\bm{n}_{\\partial G}\\dif \\bm{y} \\\\\n\t&=\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{v \\bt_k}\\cdot \\bm{n}_\\gamma\\dif \\bm{y} = \\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot \\bm{n}_\\gamma v\\dif \\bm{y}\n\t\\end{align}\n\tand hence\n\t\\begin{equation}\\label{eq:integrBBA}\n\t\\begin{aligned}\n\t\\mathcal{B}(u-u_k,v)+\\mathcal{B}_A(u_k- s_k ,v)&=\\int_\\Omega (f-\\nabla\\cdot\\bt_k-\\nabla\\cdot\\bq_k-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k)v\\dif \\bm{x} \\\\\n\t&\\quad -\\int_\\Omega \\frac{1}{2}(\\nabla\\cdot\\bm{\\beta})(u_k-s_k)v\\dif \\bm{x} +\\int_\\Omega \\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)v\\dif \\bm{x}\\\\\n\t&\\quad -\\int_\\Omega (A\\nabla u_k+\\bt_k)\\cdot \\nabla v\\dif \\bm{x} +\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k }\\cdot\\bm{n}_\\gamma v\\dif \\bm{y}.\n\t\\end{aligned}\n\t\\end{equation}\n\tFrom \\cref{lemma:cons} we deduce\n\t\\begin{subequations}\\label{eq:boundsBBAterms}\n\t\t\\begin{equation}\\label{eq:boundsBBAterm0}\n\t\t\\begin{aligned}\n\t\t&\\left|\\int_\\Omega (f-\\nabla\\cdot\\bt_k-\\nabla\\cdot\\bq_k-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k)v\\dif \\bm{x}\\right| \\\\\n\t\t&\\qquad\\qquad\\qquad\\qquad = \\left|\\int_\\Omega (f-\\nabla\\cdot\\bt_k-\\nabla\\cdot\\bq_k-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k)(v-\\pi_0 v)\\dif \\bm{x}\\right| \\\\\n\t\t&\\qquad\\qquad\\qquad\\qquad \\leq \\sum_{K\\in\\mathcal{M}_k} \\eta_{R,K}\\nB{v}_K.\n\t\t\\end{aligned}\n\t\t\\end{equation}\n\t\tSimilarly, we get\n\t\t\\begin{equation}\\label{eq:boundsBBAterms1}\n\t\t\\begin{aligned}\n\t\t\\left| \\int_\\Omega (A\\nabla u_k+\\bt_k)\\cdot \\nabla v\\dif \\bm{x}\\right|&\\leq \\sum_{K\\in\\mathcal{M}_k}\\eta_{DF,K}\\nB{v}_K,\\\\\n\t\t\\left| \\int_\\Omega \\frac{1}{2}(\\nabla\\cdot\\bm{\\beta})(u_k-s_k)v\\dif \\bm{x}\\right|&\\leq \\sum_{K\\in\\mathcal{M}_k} \\eta_{C,2,K}\\nB{v}_K.\n\t\t\\end{aligned}\n\t\t\\end{equation}\n\t\tSince $\\jump{\\bt_k}_\\sigma=0$ for $\\sigma\\in \\mathcal{F}_{k,i}\\setminus\\cup_{\\gamma\\in\\Gamma_k}\\mathcal{F}_\\gamma$, it 
holds\n\t\t\\begin{equation}\n\t\t\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} = \\sum_{\\sigma\\in\\mathcal{F}_{k,i}}\\int_\\sigma\\jump{\\bt_k}\\cdot\\bm{n}_\\sigma v\\dif \\bm{y} = \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\int_\\sigma \\jump{\\bt_k}\\cdot\\bm{n}_\\sigma v\\dif \\bm{y}.\n\t\t\\end{equation}\n\t\tUsing \\cref{lemma:boundsigma} we obtain\n\t\t\\begin{equation}\\label{eq:boundsBBAterms2}\n\t\t\\begin{aligned}\n\t\t\\left|\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} \\right| &\\leq \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\nLds{\\jump{\\bt_k}\\cdot\\bm{n}_\\sigma}\\nLds{v} \\\\\n\t\t&\\leq \\sum_{K\\in\\mathcal{M}_k}\\eta_{\\Gamma,2,K}\\nB{v}_K.\n\t\t\\end{aligned}\n\t\t\\end{equation}\n\t\tIt remains to estimate $\\int_\\Omega \\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)v\\dif \\bm{x}$. For that, we use\n\t\t\\begin{align}\n\t\t\\int_\\Omega \\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)v\\dif \\bm{x} \n\t\n\t\n\t\t=& \\sum_{K\\in\\mathcal{M}_k}\\int_K (\\mathcal{I}-\\pi_0)\\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)(v-\\pi_0 v)\\dif \\bm{x} \\\\\n\t\t&+\\sum_{K\\in\\mathcal{M}_k} \\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma (\\bq_k-\\bm{\\beta} s_k)\\cdot \\bm{n}_K \\pi_0 v\\dif \\bm{y}\n\t\t\\end{align}\n\t\tand from \\cref{eq:bounds1} we get\n\t\t\\begin{align}\\label{eq:boundsBBAterms3}\n\t\t\\left|\\sum_{K\\in\\mathcal{M}_k}\\int_K (\\mathcal{I}-\\pi_0)\\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)(v-\\pi_0 v)\\dif \\bm{x} \\right|\\leq \\sum_{K\\in\\mathcal{M}_k}\\eta_{C,1,K}\\nB{v}_K.\n\t\t\\end{align}\n\t\tFor the second term we write\n\t\t\\begin{align}\n\t\t&\\sum_{K\\in\\mathcal{M}_k} \\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma (\\bq_k-\\bm{\\beta} s_k)\\cdot \\bm{n}_K \\pi_0 v\\dif \\bm{y}= \\sum_{\\sigma\\in\\mathcal{F}_k}\\int_\\sigma \\jump{\\pi_{0,\\sigma}(\\bq_k-\\bm{\\beta} s_k)\\pi_0 v}\\cdot \\bm{n}_\\sigma\\dif \\bm{y}\\\\\n\t\t&=\\sum_{\\sigma\\in\\mathcal{F}_{k,i}}\\int_\\sigma \\mean{\\pi_0 v}\\jump{\\pi_{0,\\sigma}(\\bq_k-\\bm{\\beta} s_k)}\\cdot \\bm{n}_\\sigma+\\jump{\\pi_0 v}\\mean{\\pi_{0,\\sigma}(\\bq_k-\\bm{\\beta} s_k)}\\cdot \\bm{n}_\\sigma\\dif \\bm{y}\\\\\n\t\t&\\quad +\\sum_{\\sigma\\in \\mathcal{F}_{k,b}}\\int_\\sigma\\pi_0 v\\, \\pi_{0,\\sigma}(\\bq_k-\\bm{\\beta} s_k)\\cdot\\bm{n}_\\sigma \\dif \\bm{y} = \\operatorname{I}+\\operatorname{II}+\\operatorname{III}\n\t\t\\end{align}\n\t\tand we easily obtain, since $\\jump{\\bm{\\beta} s_k}=0$,\n\t\t\\begin{equation}\n\t\t\\operatorname{I} = \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\int_\\sigma \\pi_0 v|_K \\jump{\\pi_{0,\\sigma}\\bq_k}\\cdot\\bm{n}_\\sigma\\dif \\bm{y}.\n\t\t\\end{equation}\n\t\tUsing $|\\pi_0 v|_K| = |K|^{-1\/2}\\nLdK{\\pi_0 v}\\leq |K|^{-1\/2}\\nLdK{v}\\leq (|K|c_{\\bm{\\beta},\\mu,K})^{-1\/2}\\nB{v}_K$ we get\n\t\t\\begin{equation}\\label{eq:boundsBBAterms4}\n\t\t\\operatorname{I} \\leq \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}(|K|c_{\\bm{\\beta},\\mu,K})^{-1\/2}\\nLus{\\jump{\\pi_{0,\\sigma}\\bq_k}\\cdot\\bm{n}_\\sigma}\\nB{v}_K= \\sum_{K\\in\\mathcal{M}_k}\\eta_{\\Gamma,1,K}\\nB{v}_K.\n\t\t\\end{equation}\n\t\tLet $\\mathcal{M}_\\sigma=\\{K\\in\\mathcal{M}_k\\,:\\, \\sigma\\subset\\partial K\\}$, using \\eqref{eq:bounds3} for the second term we 
have
		\begin{align}
		\operatorname{II} & \leq \sum_{\sigma\in\mathcal{F}_{k,i}} m_\sigma\nLds{\pi_{0,\sigma}\mean{\bq_k-\bm{\beta} s_k}\cdot\bm{n}_\sigma}\sum_{K\in \mathcal{M}_\sigma}\nB{v}_{K}\\
		&= \sum_{K\in\mathcal{M}_k} \sum_{\sigma\in\mathcal{F}_K\cap\mathcal{F}_{k,i}} m_\sigma\nLds{\pi_{0,\sigma}\mean{\bq_k-\bm{\beta} s_k}\cdot\bm{n}_\sigma}\nB{v}_K.
		\end{align}
		For the last term we similarly obtain
		\begin{equation}
		\operatorname{III} \leq \sum_{K\in\mathcal{M}_k} \sum_{\sigma\in\mathcal{F}_K\cap\mathcal{F}_{k,b}} m_\sigma\nLds{\pi_{0,\sigma}(\bq_k-\bm{\beta} s_k)\cdot \bm{n}_\sigma}\nB{v}_K
		\end{equation}
		and hence
		\begin{equation}\label{eq:boundsBBAterms5}
		\operatorname{II}+\operatorname{III} \leq \sum_{K\in\mathcal{M}_k}\sum_{\sigma\in\mathcal{F}_K}\chi_\sigma m_\sigma\nLds{\pi_{0,\sigma}\mean{\bq_k-\bm{\beta} s_k}\cdot \bm{n}_\sigma} \nB{v}_K= \sum_{K\in\mathcal{M}_k}\eta_{U,K}\nB{v}_K,
		\end{equation}
	\end{subequations}
	where $\chi_\sigma=2$ if $\sigma\in\mathcal{F}_{k,b}$ and $\chi_\sigma=1$ if $\sigma\in\mathcal{F}_{k,i}$. Plugging relations \cref{eq:boundsBBAterm0,eq:boundsBBAterms1,eq:boundsBBAterms2,eq:boundsBBAterms3,eq:boundsBBAterms4,eq:boundsBBAterms5} into \eqref{eq:integrBBA} we get the result.
\end{proof}

In \cref{lemma:boundBBA} we use \cref{lemma:cons} to deduce that
\begin{equation}\label{eq:weakcons}
\int_K (\nabla\cdot \bt_k+\nabla\cdot\bq_k+(\mu-\nabla\cdot\bm{\beta})u_k) \dif \bm{x} = \int_K f \dif \bm{x}
\end{equation}
and hence \eqref{eq:boundsBBAterm0}. However, when the mesh has hanging nodes inside the local domains, \cref{lemma:cons} is not valid. Indeed, if $\widehat{\mathcal{M}}_k$ has hanging nodes, the fluxes $\hat{\bt}_k,\hat{\bq}_k$ must be constructed on a matching (free of hanging nodes) submesh $\overline{\mathcal{M}}_k$ of $\widehat{\mathcal{M}}_k$, otherwise they may fail to be in $H_{\divop}(\Omega_k)$. The constructed fluxes satisfy relation \cref{eq:precons}, but since $\nabla\cdot\hat{\bt}_k,\nabla\cdot\hat{\bq}_k\in \mathbb{P}_\mathcalligra{r}(K')$ for $K'\in\overline{\mathcal{M}}_k$ and $\overline{\mathcal{M}}_k$ is finer than $\widehat{\mathcal{M}}_k$, we cannot conclude as we did in \cref{lemma:cons}. Nonetheless, \cref{eq:precons} still implies \cref{eq:weakcons}, which is enough to prove \cref{lemma:boundBBA}.


\subsection{Proof of the theorems}\label{sec:proofs}
Here we prove \cref{thm:energynormbound,thm:augmentednormbound}. We consider the form $\mathcal{B}:H^1_0(\Omega)\times H^1_0(\Omega)\rightarrow\mathbb{R}$ defined in \eqref{eq:bform}, extended to functions in $H^1(\mathcal{M}_k)$.
\begin{proof}[Proof of \cref{thm:energynormbound}]
	It has been proved in \cite[Lemma 3.1]{Ern08} that for any $u_k\in V(\mathfrak{T}_k)$ and $u,s\in H^1_0(\Omega)$ it holds
	\begin{equation}
	\nB{u-u_k}\leq \nB{u_k-s}+|\mathcal{B}(u-u_k,v)+\mathcal{B}_A(u_k-s,v)|,
	\end{equation}
	with $v=(u-s)/\nB{u-s}$.
Choosing $u$ as the exact solution to \\cref{eq:weak}, $u_k$ given by \\cref{alg:local}, $s=s_k$ from \\cref{eq:defpot} and using \\cref{lemma:boundBBA} gives the result.\n\\end{proof}\n\n\\begin{proof}[Proof of \\cref{thm:augmentednormbound}]\n\tSince $u\\in H^1_0(\\Omega)$ it holds $\\mathcal{B}_J(u,w)=0$ for all $w\\in H^1_0(\\Omega)$, using $\\mathcal{B}_A\\leq\\mathcal{B}+|\\mathcal{B}_S|$ we get\n\t\\begin{equation}\n\t\\nBp{u-u_k}\\leq 2\\nB{u-u_k}+\\sup_{\\substack{w\\in H^1_0(\\Omega)\\\\ \\nB{w}=1}}(\\mathcal{B}(u-u_k,w)-\\mathcal{B}_J(u_k,w)).\n\t\\end{equation}\n\tTo conclude the proof we show that\n\t\\begin{equation}\\label{eq:supBBD}\n\t\\sup_{\\substack{w\\in H^1_0(\\Omega)\\\\ \\nB{w}=1}}(\\mathcal{B}(u-u_k,w)-\\mathcal{B}_J(u_k,w))\\leq \\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{2,K}^2\\right)^{1\/2}.\n\t\\end{equation}\n\tFollowing \\cref{lemma:boundBBA}, we easily get\n\t\\begin{multline}\n\t\\mathcal{B}(u-u_k,w)-\\mathcal{B}_J(u_k,w) \\leq \\sum_{K\\in\\mathcal{M}_k}(\\eta_{R,K}+\\eta_{DF,K}+\\tilde\\eta_{C,1,K}+\\eta_{\\Gamma,2,K})\\nB{w}_K\\\\\n\t+\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma\\pi_0 w (\\bq_k-\\bm{\\beta} u_k)\\cdot\\bm{n}_K\\dif \\bm{y}-\\mathcal{B}_J(u_k,w).\n\t\\end{multline}\n\tThe two last terms satisfy\n\t\\begin{align}\n\t&\\sum_{\\sigma\\in\\mathcal{F}_k}\\int_\\sigma\\jump{\\pi_0 w(\\bq_k-\\bm{\\beta} u_k)}\\cdot \\bm{n}_\\sigma\\dif \\bm{y}-\\mathcal{B}_J(u_k,w) \\\\\n\t&= \\sum_{\\sigma\\in\\mathcal{F}_k}\\chi_\\sigma\\int_\\sigma \\jump{\\pi_0 w}\\pi_{0,\\sigma}\\mean{\\bq_k-\\bm{\\beta} u_k}\\cdot\\bm{n}_\\sigma\\dif \\bm{y} +\\sum_{\\sigma\\in\\mathcal{F}_{k,i}}\\int_\\sigma \\mean{\\pi_0 w}\\jump{\\pi_{0,\\sigma}\\bq_k}\\cdot\\bm{n}_\\sigma\\dif \\bm{y} \\\\\n\t&\\leq\\sum_{K\\in\\mathcal{M}_k}(\\tilde\\eta_{U,K}+\\eta_{\\Gamma,1,K})\\nB{w}_K,\n\t\\end{align}\n\twhere in the last step we followed again \\cref{lemma:boundBBA}.\n\\end{proof}\n\n\\subsection{Alternative error bounds}\\label{sec:altbounds}\nOur aim here is to explain how to avoid the assumption $c_{\\bm{\\beta},\\mu,K}>0$ for all $K\\in\\mathcal{M}_k$ made in \\cref{sec:errest,sec:ctedef}. This assumption is needed to define $\\eta_{\\Gamma,1,K}$, $\\eta_{\\Gamma,2,K}$ but can be avoided if \\cref{eq:boundsBBAterms2,eq:boundsBBAterms4} are estimated differently. 
For \\cref{eq:boundsBBAterms2}, using the trace inequality \\cref{eq:trace} we get\n\\begin{equation}\n\\begin{aligned}\n\\left|\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} \\right| &\\leq \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\nLds{\\jump{\\bt_k}\\cdot\\bm{n}_\\sigma}\\nLds{v|_K} \\\\\n&\\leq \\sum_{K\\in\\mathcal{M}_k}\\tilde\\eta_{\\Gamma,2,K}(\\nLdK{v}^2+h_K\\nLdK{v}\\nLddK{\\nabla v})^{1\/2},\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n\\tilde \\eta_{\\Gamma,2,K} = \\frac{1}{2}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}h_K^{-1\/2}C_{t,K,\\sigma}^{1\/2}\\nLds{\\jump{\\bt_k}\\cdot\\bm{n}_\\sigma}.\n\\end{equation}\nSetting $\\tilde\\eta_{\\Gamma,2}^2=\\sum_{K\\in\\mathcal{M}_k}\\tilde \\eta_{\\Gamma,2,K}^2$, it yields\n\\begin{align}\n\\left|\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} \\right| &\\leq \\tilde \\eta_{\\Gamma,2}\\left(\\sum_{K\\in\\mathcal{M}_k} \\nLdK{v}^2+h_K\\nLdK{v}\\nLddK{\\nabla v}\\right)^{1\/2}\\\\\n&\\leq \\tilde \\eta_{\\Gamma,2} \\left(\\nLd{v}^2+h_{\\mathcal{M}_k}\\nLd{v}\\nLdd{\\nabla v}\\right)^{1\/2}.\n\\end{align}\nUsing the Poincar\u00e9 inequality $\\nLd{v}\\leq d_\\Omega\\nLdd{\\nabla v}$, where $d_\\Omega$ is the diameter of $\\Omega$, we get\n\\begin{equation}\n\\left|\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} \\right| \\leq \\tilde \\eta_{\\Gamma,2} \\left(d_\\Omega^2+h_{\\mathcal{M}_k}d_\\Omega\\right)^{1\/2}\\nLdd{\\nabla v}\\leq \\tilde \\eta_{\\Gamma,2} c_A^{-1\/2} \\left(d_\\Omega^2+h_{\\mathcal{M}_k}d_\\Omega\\right)^{1\/2}\\nB{v},\n\\end{equation}\nwhere $c_A$ is the minimal eigenvalue of $A(\\bm{x})$ over $\\Omega$. The same procedure can be used to replace \\cref{eq:boundsBBAterms4} by a relation avoiding the term $c_{\\bm{\\beta},\\mu,K}^{-1\/2}$. The new bounds can be used to modify the results of \\cref{thm:energynormbound,thm:augmentednormbound} and obtain error estimators when $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}>0$ is not satisfied.\n\n\n\n\\section{Numerical Experiments}\\label{sec:num}\nIn order to study the properties and illustrate the performance of the local scheme we consider here several numerical examples.\nFirst, in \\cref{exp:conv}, we look at the convergence rates of the error estimators, focusing on the errors introduced by solving only local problems. Considering a local and a nonlocal problem, we also compare the size of the new error estimators $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$ against the classical terms. We emphasize that we do not use the automatic subdomains' identification algorithm for this example, as the subdomains are fixed beforehand.\nWe also perform in \\cref{exp:corner} an experiment for a smooth problem, where the errors are not localized, illustrating the role of $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$. To do so, we also compare the local scheme against a classical adaptive method, where after each mesh refinement the problem is solved again on the whole domain. The classical method we refer to is given by \\cref{alg:classical}.\nSecond, we investigate the efficiency of the new local algorithm for non smooth problems in \\cref{exp:bndlayer_sym,exp:bndlayer_notsym}. For such examples, that are the target of our method, the local scheme performs better than the classical one. 
We conclude in \\cref{exp:nonlin} with a nonlinear problem, where \\cref{thm:energynormbound,thm:augmentednormbound} do not apply but \\cref{alg:local} can nevertheless be employed in conjunction with a Newton scheme.\n\n\\begin{algorithm}[!tbhp]\n\t\\caption{ClassicalScheme($\\mathfrak{T}_1$)}\n\t\\label{alg:classical}\n\t\\begin{algorithmic}\n\t\t\\State Find $\\overline{u}_1\\in V(\\mathfrak{T}_1)$ solution to $\\mathcal{B}(\\overline{u}_1,v_1,\\mathfrak{T}_1,0)=(f,v_1)_1$ for all $v_1\\in V(\\mathfrak{T}_1)$.\n\t\t\\For{$k=2,\\ldots,M$}\n\t\t\\State $(\\mathfrak{T}_k,\\widehat{\\mathfrak{T}}_{k}) = \\text{LocalDomain}(\\overline{u}_{k-1},\\mathfrak{T}_{k-1})$.\n\t\t\\State Find $\\overline{u}_k\\in V(\\mathfrak{T}_k)$ solution to $\\mathcal{B}(\\overline{u}_k,v_k,\\mathfrak{T}_k,0)=(f,v_k)_1$ for all $v_k\\in V(\\mathfrak{T}_k)$.\n\t\t\\EndFor\n\t\\end{algorithmic}\n\\end{algorithm}\n\nIn all the experiments we use $\\mathbb P_1$ elements ($\\ell=1$ in \\eqref{eq:defVT}) on a simplicial mesh with penalization parameter $\\eta_\\sigma=10$, the diffusive and convective fluxes $\\bt_k,\\bq_k$ are computed with $\\mathcalligra{r}=0$ (see \\eqref{eq:RTN}). Furthermore, $\\bm{\\beta}$ is always such that $\\nabla\\cdot\\bm{\\beta}=0$. These choices give $\\eta_{C,1,K}=\\eta_{C,2,K}=\\tilde\\eta_{C,1,K}=0$. For an estimator $\\eta_{*,K}$ we define $\\eta_{*}^2=\\sum_{K\\in\\mathcal{M}_k}\\eta_{*,K}^2$.\nSimilarly to \\cite{ESV10}, if $A=\\varepsilon I_2$ and $\\bm{\\beta}$ is constant then for $v_k\\in H^1(\\mathcal{M}_k)$ the augmented norm is well estimated by \n\\begin{align}\n\\nBp{v_k}\\leq \\nB{v_k}_{\\oplus'}&= \\nB{v_k}+\\varepsilon^{-1\/2}\\Vert\\bm{\\beta}\\Vert_2\\nLd{v_k}\\\\\n&\\quad +\\frac{1}{2}\\left(\\sum_{K\\in\\mathcal{M}_k}\\left(\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\tilde m_K^{1\/2} C_{t,K,\\sigma}^{1\/2}\\nLds{\\jump{v_k}\\bm{\\beta}\\cdot\\bm{n}_\\sigma}\\right)^2\\right)^{1\/2}.\n\\end{align}\nHence, in the numerical experiments we consider the computable norm $\\nB{\\cdot}_{\\oplus'}$. The effectivity indexes of the error estimators $\\eta$ and $\\tilde \\eta$ from \\cref{thm:energynormbound,thm:augmentednormbound} are defined as\n\\begin{equation}\\label{eq:effind}\n\\frac{\\eta}{\\nB{u-u_k}} \\qquad\\text{and}\\qquad \\frac{\\tilde\\eta}{\\nB{u-u_k}_{\\oplus'}},\n\\end{equation}\nrespectively. For the solution $\\overline u_k$ of the classical algorithm we use the error estimators $\\eta$ and $\\tilde \\eta$ from \\cite{ESV10}. They are equivalent to the estimators presented in this paper except that for $\\overline u_k$ we have $\\eta_{\\Gamma,1,K}=\\eta_{\\Gamma,2,K}=0$, as in this case the reconstructed fluxes are in $H_{\\divop}(\\Omega)$. The effectivity indexes for $\\overline u_k$ are as in \\eqref{eq:effind} but with $u_k$ replaced by $\\overline u_k$. The numerical experiments have been performed with the help of the C++ library \\texttt{libMesh} \\cite{KPS06}.\n\n\n\n\n\\subsection{Problem shifting from localized to nonlocalized errors}\\label{exp:conv}\nWe investigate an example in two different locality regimes. First, the errors are confined in a small region and then they are distributed in the whole domain. We will study the effects of this transition on the size of the new error estimators $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$.\n\nWe solve \\eqref{eq:elliptic} in $\\Omega=[0,1]\\times [0,1]$ with $A=I_2$, $\\bm{\\beta}=-(1,1)^\\top$ and $\\mu=1$. 
The force term $f$ is chosen so that the exact solution reads
\begin{equation}\label{eq:solsmooth}
	u(\bm{x})=e^{-\kappa ||\bm{x}||_2}\left( x_1-\frac{1-e^{-\kappa x_1}}{1-e^{-\kappa}}\right)\left(x_2-\frac{1-e^{-\kappa x_2}}{1-e^{-\kappa}} \right),
\end{equation}
with $\kappa=100$ or $\kappa=10$. When $\kappa=100$ the solution has a narrow peak and the errors are localized around that region; when $\kappa=10$ the solution is smoother and the errors are distributed over the whole domain. See \cref{fig:sol_conv_100,fig:sol_conv_10}.

First, we investigate the convergence rate of the error estimators and then we comment on the size of the new error estimators $\eta_{\Gamma,1}$, $\eta_{\Gamma,2}$ when the errors are localized or not, i.e. when $\kappa=100$ or $\kappa=10$.
We define two domains $\Omega_1,\Omega_2$ as follows: $\Omega_1=\Omega$ and $\bm{x}\in\Omega_2$ if $\Vert\bm{x}\Vert_\infty\leq 1/2$, see \cref{fig:domains_priori}.
\begin{figure}
	\begin{center}
		\begin{subfigure}[t]{0.32\textwidth}
			\centering
			\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=\textwidth]{images/corner/sol_sing_1e-2.png}
			\caption{$u(\bm{x})$ for $\kappa=100$.}
			\label{fig:sol_conv_100}
		\end{subfigure}
		\begin{subfigure}[t]{0.32\textwidth}
			\centering
			\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=\textwidth]{images/corner/sol_sing_1e-1.png}
			\caption{$u(\bm{x})$ for $\kappa=10$.}
			\label{fig:sol_conv_10}
		\end{subfigure}
	\begin{subfigure}[t]{0.32\textwidth}
		\centering
		\begin{tikzpicture}
			\node at (0,0) {\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=\textwidth]{images/corner/domains_dld_1_k_1-2.png}};
		\end{tikzpicture}
		\caption{Domains $\Omega_2\subset\Omega_1$.}
		\label{fig:domains_priori}
	\end{subfigure}
	\end{center}
	\caption{Solution $u(\bm{x})$ in \cref{eq:solsmooth} for two values of $\kappa$ and local domains $\Omega_1$, $\Omega_2$.}
	\label{fig:sol_dom_conv}
\end{figure}
Let $h$ be the grid size of $\widehat{\mathcal M}_1$; the grid size of $\widehat{\mathcal{M}}_2$ is then $h/2$.
For different choices of $h$ we run \cref{alg:local} without calling LocalDomain, since the local domains and meshes are chosen beforehand. After the second iteration we compute the exact energy error and the error estimators. The results are reported in \cref{tab:conv_a,tab:conv_b} for $\kappa=100$ and $\kappa=10$, respectively.
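For reference, and assuming the standard convention (our reading of the tables, not restated elsewhere), the orders reported in the last row of \cref{tab:conv_a,tab:conv_b} are observed rates: for a quantity $e$ computed on two consecutive grid sizes $h$ and $h/2$, the observed order is
\begin{equation}
\log_2\left(\frac{e(h)}{e(h/2)}\right),
\end{equation}
so that, for example, an estimator behaving like $\mathcal{O}(h^2)$ yields the value $2$.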
We recall that $\\eta_{NC}$ measures the non conformity of $u_k$, $\\eta_{R}$ measures the error in the energy conservation, $\\eta_{DF}$ the difference between $-A\\nabla u_k$ and the reconstructed diffusive flux $\\bt_k$, $\\eta_U,\\tilde\\eta_{U}$ are upwind errors and $\\eta_{\\Gamma,1},\\eta_{\\Gamma,2}$ measure the jumps of $\\bt_k,\\bq_k$ across subdomains boundaries.\n\n\\begin{table}\n\t\\csvreader[\n\tbefore reading=\\small\\centering\\sisetup{table-number-alignment=left,table-parse-only,zero-decimal-to-integer,round-mode=figures,round-precision=2,output-exponent-marker = \\text{e},fixed-exponent=0},\n\ttabular={lSSSSSSSS},head to column names,\n\ttable head=\\toprule $h$ & \\text{$\\nB{u-u_k}$} &$\\eta_{NC}$ & $\\eta_{R}$ & $\\eta_{DF}$ & $\\eta_{U}$ & \\text{$\\tilde{\\eta}_{U}$} & $\\eta_{\\Gamma,1}$ & $\\eta_{\\Gamma,2}$ \\\\\\midrule,\n\tlate after last line=\\\\\\toprule Order & $1$ & $1$ & $2$ & $1$ & $2$ & $2$ & \\text{$0.5$} & \\text{$0.5$} \\\\\\midrule]\n\t{data\/corner\/local_sing_1e-2_diff_1e0_a_posteriori_data.csv}{}\n\t{$2^{-{\\the\\numexpr\\thecsvrow+5\\relax}}$ & \\erren & \\etaNC & \\etaR & \\etaDF & \\etaU & \\etatU & \\etaGu & \\etaGd}\n\t\\caption{Convergence rate of error estimators for $\\kappa=100$.}\n\t\\label{tab:conv_a}\n\\end{table}\n\\begin{table}\n\t\\csvreader[\n\tbefore reading=\\small\\centering\\sisetup{table-number-alignment=left,table-parse-only,zero-decimal-to-integer,round-mode=figures,round-precision=2,output-exponent-marker = \\text{e},fixed-exponent=0},\n\ttabular={lSSSSSSSS},\n\thead to column names,\n\ttable head=\\toprule $h$ & \\text{$\\nB{u-u_k}$} & $\\eta_{NC}$ & $\\eta_{R}$ & $\\eta_{DF}$ & $\\eta_{U}$ & \\text{$\\tilde{\\eta}_{U}$} & $\\eta_{\\Gamma,1}$ & $\\eta_{\\Gamma,2}$ \\\\\\midrule,\n\tlate after last line=\\\\\\toprule Order & $1$ & $1$ & $2$ & $1$ & $1.5$ & $1.5$ & \\text{$0.5$} & \\text{$0.5$} \\\\\\midrule]\n\t{data\/corner\/local_sing_1e-1_diff_1e0_a_posteriori_data.csv}{}\n\t{$2^{-{\\the\\numexpr\\thecsvrow+5\\relax}}$ & \\erren & \\etaNC & \\etaR & \\etaDF & \\etaU & \\etatU & \\etaGu & \\etaGd}\n\t\\caption{Convergence rate of error estimators for $\\kappa=10$.}\n\t\\label{tab:conv_b}\n\\end{table}\n\n\nWe see that the energy error converges with order one, as predicted by the a priori error analysis of \\cite{AbR19}. We also observe that the error estimators $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$ measuring the reconstructed fluxes' jumps across subdomains' boundaries have a lower rate of convergence. Therefore, the error estimators are not efficient, in the sense that they cannot be bounded from above by the energy error multiplied by a mesh-size independent constant.\nHowever, the relative size of $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ compared to the other estimators gives an information on the suitability of the local scheme:\n\\begin{itemize}\n\t\\item if $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ are comparable to the other estimators one should use the local scheme. The typical situation is when the errors are localized, with local regions covering the large error regions (see \\cref{fig:sol_conv_100,fig:domains_priori} and \\cref{tab:conv_a});\n\t\\item if the relative size of $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ is larger than the other estimators, this is an indication that one should switch from local to classical method. The typical situation is when the errors are not (or less) localized (see \\cref{fig:sol_conv_10,fig:domains_priori} and \\cref{tab:conv_b}). 
We purposely chose a local domain that is too small to cover the error region.
\end{itemize}

In the next experiments we let the scheme select the local subdomains on the fly, using the fixed energy fraction marking strategy \cite[Section 4.2]{Dor96} implemented in the $\text{LocalDomain}(u_k,\mathfrak{T}_k)$ routine of \cref{alg:local} (the smallest set of elements whose squared error indicators sum to a prescribed fraction of the total squared estimate is selected). First, we revisit the example of \cref{exp:conv}. Second, we consider two examples where the errors are localized, illustrating the efficiency of the algorithm.


\subsection{A nonlocal smooth problem}\label{exp:corner}
Considering the same problem as in \cref{exp:conv} with $\kappa=10$, we run the local and classical schemes for $k=1,\ldots,15$, starting with a uniform mesh of 128 elements. Here we employ the automatic subdomain identification algorithm, and the goal is to show when one should switch from local to nonlocal methods.
As the error is distributed over the whole domain, it is not possible to choose the subdomains $\Omega_{k}$ so that the errors at their boundaries are negligible. Consequently, the error estimators $\eta_{\Gamma,1}$, $\eta_{\Gamma,2}$ will dominate.
Indeed, we see in \cref{tab:dom} that the error estimators $\eta_{\Gamma,1}$, $\eta_{\Gamma,2}$ measuring the reconstructed fluxes' jumps dominate the other estimators.
\begin{table}
	\csvreader[
	before reading=\small\centering\sisetup{table-number-alignment=left,table-parse-only,zero-decimal-to-integer,round-mode=figures,round-precision=2,output-exponent-marker = \text{e},fixed-exponent=0},
	tabular={lSSSSSSSS},
	head to column names,
	table head=\toprule $k$ & \text{$\nB{u-u_k}$} & $\eta_{NC}$ & $\eta_{R}$ & $\eta_{DF}$ & $\eta_{U}$ & \text{$\tilde{\eta}_{U}$} & $\eta_{\Gamma,1}$ & $\eta_{\Gamma,2}$ \\\midrule,
	]
	{data/corner/SPA2FFM_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data_first_5_levels.csv}{}
	{\level & \erren & \etaNC & \etaR & \etaDF & \etaU & \etatU & \etaGu & \etaGd}
	\caption{\Cref{exp:corner}, nonlocal smooth problem. Dominance of $\eta_{\Gamma,1}$ and $\eta_{\Gamma,2}$ over the other error estimators. Only the results of the first five iterations are shown, i.e. $k\leq 5$.}
	\label{tab:dom}
\end{table}
This phenomenon introduces two issues into the algorithm. First, the effectivity index of the local scheme is significantly larger than the index of the classical scheme, as we illustrate in \cref{fig:corner_effind_eta}. Second, the marking error estimator $\eta_{M,K}$ \cref{eq:marketa} will be larger at the boundaries of the local domains than in the large error regions; indeed, we see in \cref{fig:corner_doms} that the local domain $\Omega_4$ chosen by the algorithm does not correspond to a large error region but lies in a neighborhood of the boundary of $\Omega_3$, where $\eta_{\Gamma,1}$, $\eta_{\Gamma,2}$ are large.
For this reason the algorithm is unable to detect the high error regions, and we see in \cref{fig:corner_effenerr}, where we show the computational cost as a function of the energy error, that the error of the local method stagnates.
\begin{figure}
	\begin{center}
		\begin{subfigure}[t]{0.49\textwidth}
			\centering
			\begin{tikzpicture}[scale=0.98]
				\begin{semilogyaxis}[height=0.66*0.9\textwidth,width=0.9\textwidth,legend style={at={(0,1)},anchor=north west},xlabel={Iteration $k$}, ylabel={Effectivity index of $\eta$},label style={font=\normalsize},tick label style={font=\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}, log basis y=10,ymin=1,ymax=400]
					\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=eff,col sep=comma] 
					{data/corner/SPA2FFM_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data.csv};\addlegendentry{Local}
					\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma] 
					{data/corner/SPA1_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data.csv};\addlegendentry{Classical}
				\end{semilogyaxis}
			\end{tikzpicture}
			\caption{Effectivity index of $\eta$.}
			\label{fig:corner_effind_eta}
		\end{subfigure}\hfill
		\begin{subfigure}[t]{0.49\textwidth}
			\centering
			\begin{tikzpicture}[scale=0.98]
				\begin{loglogaxis}[height=0.66*0.9\textwidth,width=0.9\textwidth, x dir=reverse,legend style={at={(0,1)},anchor=north west},
					xlabel={Energy norm error.}, ylabel={GMRES cost [sec.]},log basis x={2},label style={font=\normalsize},tick label style={font=\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]
					\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=erren,y=linsolvertot,col sep=comma,select coords between index={0}{14}] 
					{data/corner/SPA2FFM_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data.csv};\addlegendentry{Local}
					\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=erren,y=linsolvertot,col sep=comma,select coords between index={0}{14}] 
					{data/corner/SPA1_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data.csv};\addlegendentry{Classical}
				\end{loglogaxis}
			\end{tikzpicture}
			\caption{GMRES cost versus energy norm error.}
			\label{fig:corner_effenerr}
		\end{subfigure}
	\end{center}
	\caption{\Cref{exp:corner}, nonlocal smooth problem. Effectivity index of $\eta$ as a function of the iteration number and GMRES cost versus the energy norm error.}
\end{figure}
\begin{figure}
	\begin{center}
		\begin{tikzpicture}
			\node at (0,0) {\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=0.22\textheight]{images/corner/domains_dld_2_k_3.png}};
			\node[opacity=0.7] at (0,0) {\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=0.22\textheight]{images/corner/domains_dld_2_k_4.png}};
		\end{tikzpicture}
		\caption{Local domains $\Omega_3$ (darker) and $\Omega_4$ (brighter).}
		\label{fig:corner_doms}
	\end{center}
\end{figure}

This example shows that if the errors are not localized, then the estimators $\eta_{\Gamma,1}$, $\eta_{\Gamma,2}$ dominate, the local scheme becomes inefficient, and a classical \emph{global} method should be preferred over a local method.
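A possible way to automate this switch, sketched here under our own choice of threshold (the parameter $\theta>0$ below is an illustrative assumption, not a quantity prescribed by the preceding analysis), is to keep iterating \cref{alg:local} as long as
\begin{equation}
\eta_{\Gamma,1}+\eta_{\Gamma,2}\leq \theta\left(\eta_{NC}+\eta_{R}+\eta_{DF}+\eta_{U}\right),
\end{equation}
and to restart from the current mesh with \cref{alg:classical} as soon as this inequality is violated. For instance, with a moderate value such as $\theta=1$, the values in \cref{tab:dom} would trigger the switch within the first few iterations.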
More generally, monitoring the size of the error estimators $\eta_{\Gamma,1}$ and $\eta_{\Gamma,2}$ along the iterations, and detecting when they start to dominate the other error indicators (as seen in \cref{tab:dom}), provides a practical switching criterion.

\subsection{Reaction dominated problem}\label{exp:bndlayer_sym}
In our next example we consider a symmetric problem and compare the local and classical schemes (\cref{alg:local,alg:classical}) in a singularly perturbed regime. We investigate the efficiency, measured in terms of computational cost, and analyze the effectivity indexes. The setting is as follows: we solve \eqref{eq:elliptic} in $\Omega=[0,1]\times [0,1]$ with $\varepsilon=10^{-6}$, $A=\varepsilon I_2$, $\bm{\beta}=(0,0)^\top$, $\mu=1$ and we choose $f$ such that the exact solution is given by
\begin{equation}\label{eq:bndlayer}
u(\bm{x})=e^{x_1+x_2}\left( x_1-\frac{1-e^{-\zeta x_1}}{1-e^{-\zeta}}\right)\left(x_2-\frac{1-e^{-\zeta x_2}}{1-e^{-\zeta}} \right),
\end{equation}
where $\zeta=10^{4}$. The solution is illustrated in \cref{fig:bndlayer_sol}.

\begin{figure}
	\begin{center}
		\begin{subfigure}[t]{0.49\textwidth}
			\centering
			\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\textheight]{images/bndlayer/sol_vlq.png}
			\caption{Solution $u(\bm{x})$.}
			\label{fig:bndlayer_sol}
		\end{subfigure}
		\begin{subfigure}[t]{0.49\textwidth}
			\centering
			\begin{tikzpicture}[spy using outlines= {circle, connect spies,every spy on node/.append style={thick}}]
				\coordinate (spypoint) at (-0.1,0.15);
				\coordinate (magnifyglass) at (1.5,0.5);
				\coordinate (spypoint_bis) at (-0.03,-0.55);
				\coordinate (magnifyglass_bis) at (1.5,-1.2);
				\node at (0,0) {\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\textheight]{images/bndlayer/dom_1.png}};
				\node at (0,0) {\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\textheight]{images/bndlayer/dom_2.png}};
				\node at (0,0) {\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\textheight]{images/bndlayer/dom_3.png}};
				\node at (0,0) {\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\textheight]{images/bndlayer/dom_4.png}};
				\spy [WildStrawberry, size=1.3cm, magnification=4] on (spypoint) in node[fill=white] at (magnifyglass);
				\spy [WildStrawberry, size=1.3cm, magnification=4] on (spypoint_bis) in node[fill=white] at (magnifyglass_bis);
			\end{tikzpicture}
			\caption{First local domains $\Omega_k$, $k=1,\ldots,4$.}
			\label{fig:bndlayer_doms}
		\end{subfigure}
	\end{center}
	\caption{Solution $u(\bm{x})$ in \eqref{eq:bndlayer} of the reaction dominated problem and first local domains chosen by the error estimators.}
\end{figure}

Since the problem is symmetric we have $\nB{{\cdot}}=\nBp{{\cdot}}$, but the related error estimators $\eta$ and $\tilde\eta$ satisfy $\tilde\eta>\eta$, hence the effectivity index of $\eta$ will be lower (see \cref{thm:energynormbound,thm:augmentednormbound}).

Starting from a coarse mesh (128 elements), we let the two algorithms run for $k=1,\ldots,20$. In \cref{fig:bndlayer_doms} we show the first four subdomains $\Omega_k$ chosen by the local scheme. Note that the local domain $\Omega_4$ chosen by the algorithm is disconnected, while subdomain $\Omega_3$ has a hole, as is allowed by the theory.
Several of the subsequent subdomains (not displayed) are also disconnected or contain holes. The first iterations are needed to capture the boundary layer and reach the convergence regime, hence we will plot the results for $k\\geq 7$. The most expensive part of the code is the solution of linear systems by means of the conjugate gradient (CG) method preconditioned with the incomplete Cholesky factorization, followed by the computation of the potential and fluxes reconstruction and then by the evaluation of the error estimators. In the local scheme, the time spent doing these tasks is proportional to the number of elements inside each subdomain $\\Omega_k$. For the classical scheme, the cost of these tasks depends on the total number of elements in the mesh. Since the CG routine is the most expensive part, we take the time spent in it as an indicator for the computational cost.\n\nIn \\cref{fig:bndlayer_sym_etacost}, we plot the simulation cost against the error estimator $\\eta$, for both the local and classical algorithms. Each circle or star in the figure represents an iteration $k$. We observe that the local scheme provides similar error bounds but at a smaller cost. The effectivity index of $\\eta$ at each iteration $k$ is shown in \\cref{fig:bndlayer_sym_etaeffind}, we can observe that the local scheme has an effectivity index similar to the classical scheme.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Error estimator $\\eta$}, ylabel={CG cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=etafull,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=etafull,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{CG cost versus $\\eta$.}\n\t\t\t\\label{fig:bndlayer_sym_etacost}\n\t\t\\end{subfigure}\\hfill\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\eta$},ymin=0,ymax=5,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma,select coords between index={6}{19}] 
			{data/bndlayer/SPA1_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\addlegendentry{Classical}
			\end{axis}
			\end{tikzpicture}
			\caption{Effectivity index of $\eta$.}
			\label{fig:bndlayer_sym_etaeffind}
		\end{subfigure}
	\end{center}
	\caption{\Cref{exp:bndlayer_sym}, reaction dominated problem. Computational cost vs. $\eta$ and effectivity index as a function of the iteration number.}
	\label{fig:bndlayer_sym_etacost_etaeffind}
\end{figure}

In \cref{fig:bndlayer_sym_effenerr} we exhibit the cost against the exact energy error and we notice that for some values of $k$ the mesh is refined but the error stays almost constant. This phenomenon significantly increases the simulation cost of the classical scheme without improving the solution. In contrast, the cost of the local scheme increases only marginally. Dividing the two curves in \cref{fig:bndlayer_sym_effenerr} we obtain the relative speed-up, which is plotted in \cref{fig:bndlayer_sym_speedup}. We note that, as the error decreases, the local scheme becomes faster than the classical one.
\begin{figure}
	\begin{center}
		\begin{subfigure}[t]{0.49\textwidth}
			\centering
			\begin{tikzpicture}[scale=0.98]
			\begin{loglogaxis}[height=0.66*0.9\textwidth,width=0.9\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east},
			xlabel={Energy norm error.}, ylabel={CG cost [sec.]},log basis x={2},label style={font=\normalsize},tick label style={font=\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]
			\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=erren,y=linsolvertot,col sep=comma,select coords between index={6}{19}] 
			{data/bndlayer/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\addlegendentry{Local}
			\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=erren,y=linsolvertot,col sep=comma,select coords between index={6}{19}] 
			{data/bndlayer/SPA1_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\addlegendentry{Classical}
			\end{loglogaxis}
			\end{tikzpicture}
			\caption{CG cost versus energy norm error.}
			\label{fig:bndlayer_sym_effenerr}
		\end{subfigure}\hfill
		\begin{subfigure}[t]{0.49\textwidth}
			\centering
			\begin{tikzpicture}[scale=0.98]
			\begin{loglogaxis}[height=0.66*0.9\textwidth,width=0.9\textwidth,legend style={at={(0,1)},anchor=north west},
			xlabel={Energy norm error.}, ylabel={Speed-up}, x dir=reverse,log basis x={2},log basis y={2},ymax=8, ytick={2,4,8,16},label style={font=\normalsize},tick label style={font=\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]
			\addplot[color=black,line width=1.0 pt,mark=none] table[x=erren,y=speeden,col sep=comma] 
			{data/bndlayer/speedup_sing_4_diff_6_b_0_nref_3_lay_21.csv};\addlegendentry{Speed-up}
			\end{loglogaxis}
			\end{tikzpicture}
			\caption{Speed-up as a function of the error.}
			\label{fig:bndlayer_sym_speedup}
		\end{subfigure}
	\end{center}
	\caption{\Cref{exp:bndlayer_sym}, reaction dominated problem. Computational cost vs. energy norm error and speed-up as a function of the error.}
	\label{fig:bndlayer_sym_effenerr_speedup}
\end{figure}
In \cref{fig:bndlayer_sym_etateffind} we plot the effectivity index of $\tilde\eta$.
As expected, for this symmetric problem, it is worse than the effectivity of $\\eta$. Finally, we run the same experiment but for different diffusion coefficients $\\varepsilon=10^{-4},10^{-6},10^{-8}$ and display in \\cref{fig:bndlayer_sym_eta_diff_eps} the effectivity index of $\\eta$. We note that it always remains below 4.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\tilde\\eta$},ymin=0,ymax=15,,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\tilde\\eta$.}\n\t\t\t\\label{fig:bndlayer_sym_etateffind}\n\t\t\\end{subfigure}\\hfill\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend columns=2, ,legend style={at={(0,0)},anchor=south west,draw=none,fill=none},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\eta$},ymin=0,ymax=4.0,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_4_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-4}$}\n\t\t\t\\addlegendimage{empty legend}\\addlegendentry{}\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-6}$}\n\t\t\t\\addplot[color=NavyBlue,mark=triangle,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_8_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-8}$}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\\caption{Effectivity index of $\\eta$ for different diffusion coefficients $\\varepsilon$.}\n\t\t\\label{fig:bndlayer_sym_eta_diff_eps}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_sym}, reaction dominated problem. 
Effectivity index of $\\tilde\\eta$, and effectivity index of $\\eta$ for different diffusion coefficients $\\varepsilon$.}\n\t\\label{fig:bndlayer_sym_effetat_effeta}\n\\end{figure}\n\n\n\\subsection{Convection dominated problem}\\label{exp:bndlayer_notsym}\nIn this section we perform the same experiment as in \\cref{exp:bndlayer_sym} but instead of choosing $\\bm{\\beta}=(0,0)^\\top$ we set $\\bm{\\beta}=-(1,1)^\\top$, hence we solve a nonsymmetric singularly perturbed problem. The linear systems are solved with the GMRES method preconditioned with the incomplete LU factorization. As in \\cref{exp:bndlayer_sym}, we investigate the effectivity indexes and efficiency of the local and classical schemes.\n\nFor convection dominated problems, the norm $\\nBp{{\\cdot}}$ is more appropriate than $\\nB{{\\cdot}}$ since it also measures the error in the advective direction. In \\cref{fig:bndlayer_notsym_etatcost}, we plot the simulation cost versus the error estimator $\\tilde\\eta$; we remark that again the local scheme provides similar error bounds at a smaller cost. The effectivity index of $\\tilde\\eta$ is displayed in \\cref{fig:bndlayer_notsym_etateffind}; we note that the local and classical schemes again have similar effectivity indexes.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Error estimator $\\tilde\\eta$}, ylabel={GMRES cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=etatfull,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=etatfull,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{GMRES cost versus $\\tilde\\eta$.}\n\t\t\t\\label{fig:bndlayer_notsym_etatcost}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(1,1)},anchor=north east},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\tilde\\eta$},ymin=0,ymax=15,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=efft,col sep=comma,select coords between index={6}{19}] 
\n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\tilde\\eta$.}\n\t\t\t\\label{fig:bndlayer_notsym_etateffind}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_notsym}, convection dominated problem. Computational cost vs. $\\tilde\\eta$ and effectivity index in function of the iteration number.}\n\t\\label{fig:bndlayer_notsym_etatcost_etateffind}\n\\end{figure}\n\nIn \\cref{fig:bndlayer_notsym_effenerr_speedup} we plot the simulation cost versus the error in the augmented norm $\\nBp{{\\cdot}}$ and the relative speed-up. We again observe that the local scheme is faster. \n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Augmented norm error.}, ylabel={GMRES cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=erraug,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=erraug,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{GMRES cost versus augmented norm error.}\n\t\t\t\\label{fig:bndlayer_notsym_effenerr}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(0,1)},anchor=north west},\n\t\t\txlabel={Augmented norm error.}, ylabel={Speed-up}, x dir=reverse,log basis x={2},log basis y={2},ymax=8,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot+[color=black,line width=1.0 pt,mark=none] table[x=erraug,y=speeden,col sep=comma] \n\t\t\t{data\/bndlayer\/speedup_sing_4_diff_6_b_1_nref_3_lay_21.csv};\\addlegendentry{Speed-up}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Speed-up in function of the error.}\n\t\t\t\\label{fig:bndlayer_notsym_speedup}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_notsym}, convection dominated problem. Computational cost vs. augmented norm error and speed-up in function of the error.}\n\t\\label{fig:bndlayer_notsym_effenerr_speedup}\n\\end{figure}\n\nFor completeness, we plot in \\cref{fig:bndlayer_notsym_etaeffind} the effectivity index of $\\eta$. We see that it is completely off. This illustrates that this estimator does not capture the convective error and is hence not appropriate for convection dominated problems.
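\n\nThe effectivity indices shown in these figures are simply the ratio between the estimator and the error of the numerical solution measured in the corresponding norm. A minimal sketch of their computation (in Python, with placeholder values rather than the data of our experiments) reads:\n\\begin{verbatim}\nimport numpy as np\n\n# Placeholder per-iteration values; in the experiments, eta (or eta tilde)\n# comes from the estimators and err from a reference solution.\neta = np.array([2.0e-1, 9.5e-2, 4.8e-2, 2.3e-2])\nerr = np.array([8.0e-2, 3.6e-2, 1.7e-2, 8.0e-3])\n\neff = eta / err  # effectivity index at each iteration k\nfor k, e in enumerate(eff):\n    print(f"iteration {k}: effectivity index = {e:.2f}")\n\\end{verbatim}\nBy \\cref{thm:energynormbound,thm:augmentednormbound} this ratio stays above one for $\\eta$ and $\\tilde\\eta$ in their respective norms; the closer it stays to one, the sharper the estimator.\n\n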
Then, we run again the same experiment but considering different diffusion coefficients $\\varepsilon=10^{-4}, 10^{-6}, 10^{-8}$ and display the effectivity indexes of $\\tilde\\eta$ in \\cref{fig:bndlayer_notsym_diff_eps}.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(1,1)},anchor=north east},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\eta$},ymin=0,ymax=800,,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\eta$.}\n\t\t\t\\label{fig:bndlayer_notsym_etaeffind}\n\t\t\\end{subfigure}\\hfill\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend columns=2, ,legend style={at={(1,1)},anchor=north east,draw=none,fill=none},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\tilde\\eta$},ymin=0,ymax=20,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_4_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-4}$}\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-6}$}\n\t\t\t\\addlegendimage{empty legend}\\addlegendentry{}\n\t\t\t\\addplot[color=NavyBlue,mark=triangle,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=efft,col sep=comma,select coords between index={6}{19}] \t\n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_8_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-8}$}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\tilde\\eta$ for different diffusion coefficients $\\varepsilon$.}\n\t\t\t\\label{fig:bndlayer_notsym_diff_eps}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_notsym}, convection dominated problem. 
Effectivity index of $\\eta$, and effectivity index of $\\tilde\\eta$ for different diffusion coefficients $\\varepsilon$.}\n\t\\label{fig:bndlayer_notsym_etaeffind_diff_eps}\n\\end{figure}\n\n\n\\subsection{A nonlinear nonsmooth problem with multiple local structures}\\label{exp:nonlin}\nWe conclude with an experiment on a nonlinear nonsmooth problem, where the diffusion tensor is solution dependent and has multiple discontinuities; hence, the solution presents several local structures. More precisely, we solve \\cref{eq:elliptic} with $\\Omega=[-3\/2,3\/2]\\times [-3\/2,3\/2]$, $\\bm{\\beta}=-(1,1)^\\top$, $\\mu=1$ and $f(\\bm{x})=\\nld{\\bm{x}}^2$. The diffusion tensor is $A(u,\\bm{x})=A_1(u)A_2(\\bm{x})$, with $A_1(u)=1\/\\sqrt{1+u^2}$. We divide $\\Omega$ into nine squares of size $1\/2\\times 1\/2$ and $A_2(\\bm{x})$ alternates between $1$ and $0.01$, in a checkerboard-like manner. A reference solution is displayed in \\cref{fig:nonlinsol}.\n\n\\Cref{thm:energynormbound,thm:augmentednormbound} do not apply straightforwardly as the problem is nonlinear. Nevertheless, \\cref{alg:local} can be used in combination with a Newton scheme as shown in \\cite{AbR19}. In this experiment we investigate the efficiency of the error estimators in identifying the local subdomains for a nonlinear nonsmooth problem with multiple local structures. Starting with a $32\\times 32$ element mesh, we run the code and let it automatically select the subdomains for twenty iterations. We do the same with the classical \\cref{alg:classical} and compare the results in \\cref{fig:nonlin_eff}, where we display the cost of the Newton method versus the error, computed in the energy norm, against a reference solution. We remark that the local method is faster.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.7\\textwidth]{images\/nonlin\/sol.png}\n\t\t\t\\caption{Solution $u(\\bm{x})$ of the nonlinear nonsmooth problem.}\n\t\t\t\\label{fig:nonlinsol}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east,fill=none,draw=none},\n\t\t\t\t\txlabel={Energy norm error.}, ylabel={Newton cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=erren,y=linsystemtot,col sep=comma] \n\t\t\t\t\t{data\/nonlin\/spa_2_nref_5_nlev_20_nlay1_2_nlay2_2_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=erren,y=linsystemtot,col sep=comma] \n\t\t\t\t\t{data\/nonlin\/spa_1_nref_5_nlev_20_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Newton cost versus energy norm error.}\n\t\t\t\\label{fig:nonlin_eff}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{Solution $u(\\bm{x})$ and efficiency experiment on the nonlinear nonsmooth problem of \\cref{exp:nonlin}.}\n\\end{figure}\n\n\\section{Conclusion}\nIn this paper we have derived a local adaptive discontinuous Galerkin method for the scheme introduced in \\cite{AbR19}. 
The scheme, defined in \\cref{sec:localg}, relies on a coarse solution which is successively improved by solving a sequence of localized elliptic problems in confined subdomains, where the mesh is refined. Starting from error estimators for the symmetric weighted interior penalty Galerkin scheme based on conforming potential and flux reconstructions, and allowing for flux jumps across the subdomain boundaries, we have derived new estimators for the local method and proved their reliability in \\cref{thm:energynormbound,thm:augmentednormbound}. An important property of the original estimators (for nonlocal schemes) is preserved: the absence of unknown constants.\nNumerical experiments confirm the error estimators' effectivity for singularly perturbed convection-reaction dominated problems and illustrate the efficiency of the local scheme when compared to a classical adaptive algorithm, where at each iteration the solution on the whole computational domain must be recomputed. We also showed that the growth of boundary error indicators (the reason why efficiency cannot be proved in general) can be monitored in order to switch from the local to a nonlocal method. Switching automatically from the local to the classical scheme, based on the indicators $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$, could easily be integrated in a finite element code. Testing such an integrated code would be an interesting direction for future work.\n\n\\section*{Acknowledgments} The authors are partially supported by the Swiss National Science Foundation, under grant No. $200020\\_172710$.\n\n\n\n\n\\section{Introduction} \\label{sec:intro}\nSolutions to partial differential equations that exhibit singularities (e.g. at cracks) or high variations in the computational domain are usually approximated by adaptive numerical methods. There is nowadays a large body of literature concerned with the development of reliable a posteriori error estimators aiming for mesh refinement in regions of large errors (see e.g. \\cite{BaR78a, BaR78b, BaW85, Ver96a}). However, classical adaptive methods are usually based on iterative processes which rely on recomputing the solution on the whole computational domain for each new mesh obtained after a refinement procedure. \n\nIn this paper we present a scheme which solves local problems defined on refined regions only. Local schemes have been proposed in the past; we mention the Local Defect Correction (LDC) method \\cite{Hac84}, the Fast Adaptive Composite (FAC) grid algorithm \\cite{McT86} and the Multi-Level Adaptive (MLA) technique \\cite{Bra77}. At each iteration, these algorithms solve a problem on a coarse mesh on the whole domain and a local problem on a finer mesh. The coarse solution is used for artificial boundary conditions while the local solution is used to correct the residual in the coarse grid. In \\cite{BRL15} the LDC scheme has been coupled with error estimators, which are used to select the local domain. \n\nIn \\cite{AbR19} we proposed a Local Discontinuous Galerkin Gradient Discretization (LDGGD) method which decomposes the computational domain into local subdomains encompassing the large gradient regions. This scheme iteratively improves a coarse solution on the full domain by solving local elliptic problems on finer meshes. \nHence, the full problem is solved only in the first iteration on a coarse mesh while a sequence of solutions on smaller subdomains is subsequently computed. 
In turn, iterations between subdomains are not needed, in contrast to the LDC, FAC or MLA schemes, and the condition numbers of the small systems are considerably smaller than those of the large systems (which describe data and mesh variations on the whole domain). The LDGGD method has been shown to converge under minimal regularity assumptions, i.e. when the solution is in $H^1_0(\\Omega)$ and the forcing term in $H^{-1}(\\Omega)$ \\cite{AbR19}. \nHowever, the marking of the subdomains in this scheme has so far relied on a priori knowledge of the location of the high gradient regions. \n\nThe main contribution of this paper is to propose an adaptive local LDGGD method. This adaptive method is based on a posteriori error estimators that automatically identify the subdomains to be refined. \nThis is crucial for practical applications of the method. The LDGGD relies on the symmetric weighted interior penalty Galerkin (SWIPG) method \\cite{PiE12,ESZ09} and we consider linear advection-diffusion-reaction equations\n\\begin{equation}\\label{eq:elliptic}\n\t\\begin{aligned}\n\t-\\nabla\\cdot(A\\nabla u)+\\bm{\\beta}\\cdot\\nabla u+\\mu u &=f\\qquad && \\text{in } \\Omega, \\\\\n\tu&=0 && \\text{on } \\partial \\Omega, \n\t\\end{aligned}\n\\end{equation}\nwhere $\\Omega$ is an open bounded polytopal connected subset of $\\mathbb{R}^{d}$ for $d\\geq 2$, $A$ is the diffusion tensor, $\\bm{\\beta}$ the velocity field, $\\mu$ the reaction coefficient and $f$ a forcing term. \nIn \\cite{ESV10} the authors introduce a posteriori error estimators for the SWIPG scheme based on cutoff functions and conforming flux and potential reconstructions; these estimators are shown to be efficient and robust in singularly perturbed regimes. Following the same strategy, we derive estimators for the local scheme by weakening the regularity requirements on the reconstructed fluxes. The new estimators are likewise free of unknown constants, and their robustness is verified numerically. Furthermore, they are employed to define the local subdomains and provide error bounds on the numerical solution of the LDGGD method.\nWe prove that the error estimators are reliable. Because of the local nature of our scheme, we introduce two new estimators that measure the jumps at the boundaries of the local domains. However, these two new terms have a lower convergence rate than the other terms, and we cannot establish the efficiency of our a posteriori estimators with our current approach.\nNevertheless, the two new terms are useful in our algorithm: whenever the errors are localized these new terms become negligible; in contrast, when these estimators dominate it is an indication that the error is not localized and one can switch to a nonlocal method. Other boundary conditions than those of \\cref{eq:elliptic} can be considered, at the cost of modifying the error estimators introduced in \\cite{ESV10}. The new estimators introduced here need no changes.\n\n\nThe outline of the paper is as follows. In \\cref{sec:ladg} we describe the local scheme; in \\cref{sec:err_flux} we introduce the error estimators and state the main a posteriori error analysis results. \\Cref{sec:errbound} is dedicated to the definition of the reconstructed fluxes and proofs of the main results. 
Finally, various numerical examples illustrating the efficiency, versatility and limits of the proposed method are presented in \\cref{sec:num}.\n\n\n\\section{Local adaptive discontinuous Galerkin method}\\label{sec:ladg}\nIn this section we introduce the local algorithm based on the discontinuous Galerkin method. \nWe start with some assumptions on the data and the domain, before introducing the weak form corresponding to \\eqref{eq:elliptic}.\nWe assume that $\\Omega\\subset\\mathbb{R}^{d}$ is a polytopal domain with $d\\geq 2$, $\\bm{\\beta}\\in W^{1,\\infty}(\\Omega)^d$, $\\mu\\in L^\\infty(\\Omega)$ and $A\\in L^\\infty(\\Omega)^{d\\times d}$, with $A(\\bm{x})$ a symmetric piecewise constant matrix with eigenvalues in $[\\underline\\lambda,\\overline \\lambda]$, where $\\overline\\lambda\\geq\\underline\\lambda>0$. Moreover, we assume that $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}\\geq 0$ a.e. in $\\Omega$. This term $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}$ appears in the symmetric part of the operator $\\mathcal{B}(\\cdot,\\cdot)$ defined in \\eqref{eq:bform} and hence the assumption $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}\\geq 0$ is needed for coercivity. Finally, we set $f\\in L^2(\\Omega)$.\nUnder these assumptions, the unique weak solution $u\\in H^1_0(\\Omega)$ of \\eqref{eq:elliptic} satisfies\n\\begin{equation}\\label{eq:weak}\n\\mathcal{B}(u,v)=\\int_\\Omega fv\\dif \\bm{x}\\qquad \\text{for all }v\\in H^1_0(\\Omega),\n\\end{equation} \nwhere\n\\begin{equation}\\label{eq:bform}\n\\mathcal{B}(u,v)= \\int_\\Omega (A\\nabla u\\cdot\\nabla v+(\\bm{\\beta}\\cdot\\nabla u) v+\\mu u v)\\dif\\bm{x}.\n\\end{equation}\n\n\n\\subsection{Preliminary definitions}\\label{sec:prel}\nWe start by collecting some notations related to the geometry and the mesh of the subdomains, before recalling the definition of the discontinuous Galerkin finite element method.\n\n\\subsubsection*{Subdomains and meshes}\nLet $M\\in\\mathbb{N}$ and $\\{\\Omega_k\\}_{k=1}^M$ be a sequence of open subdomains of $\\Omega$ with $\\Omega_1=\\Omega$. The domains $\\Omega_k$ for $k\\geq 2$ can be any polytopal subsets of $\\Omega$; in practice they will be chosen by the error estimators (see \\cref{sec:localg}). We consider a sequence $\\{\\mathcal{M}_k\\}_{k=1}^M$ of simplicial meshes on $\\Omega$ and denote by $\\mathcal{F}_k=\\mathcal{F}_{k,b}\\cup\\mathcal{F}_{k,i}$ the set of boundary and internal faces of $\\mathcal{M}_k$. 
The assumption below ensures that $\\mathcal{M}_{k+1}$ is a refinement of $\\mathcal{M}_k$ inside the subdomain $\\Omega_{k+1}$.\n\\begin{assumption}\\label{ass:mesh}\n\t$\\phantom{=}$\n\t\\begin{enumerate}\n\t\t\\item For each $k=1,\\ldots,M$, $\\overline{\\Omega}_{k}=\\cup_{K\\in\\mathcal{M}_k,\\,K\\subset\\Omega_k} \\overline{K}$.\n\t\t\\item For $k=1,\\ldots,M-1$,\n\t\t\\begin{enumerate}[label=\\alph*)]\n\t\t\t\\item $\\{K\\in\\mathcal{M}_{k+1}\\,:\\, K\\subset \\Omega\\setminus\\Omega_{k+1}\\}=\\{K\\in\\mathcal{M}_k\\,:\\, K\\subset \\Omega\\setminus\\Omega_{k+1}\\}$,\n\t\t\t\\item if $K,T\\in \\mathcal{M}_k$ with $K\\subset \\Omega_{k+1}$, $T\\subset\\Omega\\setminus\\Omega_{k+1}$ and $\\partial K\\cap\\partial T\\neq\\emptyset$ then $K\\in \\mathcal{M}_{k+1}$,\n\t\t\t\\item if $K\\in\\mathcal{M}_k$ and $K\\subset \\Omega_{k+1}$, either $K\\in \\mathcal{M}_{k+1}$ or $K$ is a union of elements in $\\mathcal{M}_{k+1}$.\n\t\t\\end{enumerate}\n\t\\end{enumerate}\n\\end{assumption}\n\nLet $\\widehat{\\mathcal{M}}_k=\\{K\\in\\mathcal{M}_k\\,:\\, K\\subset \\Omega_k\\}$ and let $\\widehat{\\mathcal{F}}_k=\\widehat{\\mathcal{F}}_{k,b}\\cup \\widehat{\\mathcal{F}}_{k,i}$ be the set of faces of $\\widehat{\\mathcal{M}}_k$, with $\\widehat{\\mathcal{F}}_{k,b}$ and $\\widehat{\\mathcal{F}}_{k,i}$ the boundary and internal faces, respectively. Condition 1 in \\cref{ass:mesh} ensures that $\\widehat{\\mathcal{M}}_k$ is a simplicial mesh on $\\Omega_k$. Condition 2 guarantees that in $\\Omega\\setminus\\Omega_{k+1}$ and in the neighborhood of $\\partial\\Omega_{k+1}\\setminus\\partial\\Omega$ the meshes $\\mathcal{M}_k$ and $\\mathcal{M}_{k+1}$ are equal and that $\\mathcal{M}_{k+1}$ is a refinement of $\\mathcal{M}_k$ in $\\Omega_{k+1}$. An example of domains and meshes satisfying \\cref{ass:mesh} is illustrated in \\cref{fig:illustrationmesh}.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{tikzpicture}[xscale=1]\n\t\n\t\t\\draw[step=1.0,black, thin] (0,0) grid (12,4);\n\t\t\\foreach \\x in {0,1,2,3,4,5,6,7,8}\n\t\t\t\\draw[thin] (\\x,0)--(\\x+4,4);\n\t\t\\draw[thin] (0,3)--(1,4);\n\t\t\\draw[thin] (0,2)--(2,4);\n\t\t\\draw[thin] (0,1)--(3,4);\n\t\t\\draw[thin] (9,0)--(12,3);\n\t\t\\draw[thin] (10,0)--(12,2);\n\t\t\\draw[thin] (11,0)--(12,1);\n\t\t\\draw[ultra thick] (0,0)--(12,0)--(12,4)--(0,4)--(0,0);\n\t\t\\draw[ultra thick] (2,1)--(10,1)--(10,4)--(2,4)--(2,1);\n\t\t\\draw[ultra thick] (4,2)--(8,2)--(8,4)--(4,4)--(4,2);\n\t\t\n\t\n\t\t\\draw[step=0.5,black, thin] (3,2) grid (9,4);\n\t\t\\draw[thin] (3,3.5)--(3.5,4);\n\t\t\\draw[thin] (3,2.5)--(4.5,4);\n\t\t\\draw[thin] (4.5,2.5)--(6,4);\n\t\t\\draw[thin] (4.5,2.0)--(6.5,4);\n\t\t\\draw[thin] (5.5,2.5)--(7,4);\n\t\t\\draw[thin] (5.5,2.0)--(7.5,4);\n\t\t\\draw[thin] (6.5,2.5)--(7.5,3.5);\n\t\t\\draw[thin] (3.5,2.0)--(5.5,4);\n\t\t\\draw[thin] (4.5,3.5)--(5.0,4);\n\t\t\\draw[thin] (6.5,2.0)--(8.5,4.0);\n\t\t\\draw[thin] (7.5,2.0)--(9,3.5);\n\t\t\\draw[thin] (8.5,2.0)--(9,2.5);\n\t\t\n\t\n\t\t\\foreach \\x in {4.5,5,...,7}\n\t\t\\foreach \\y in {3,3.5,4}\n\t\t\\draw[thin] (\\x,\\y)--(\\x+0.5,\\y-0.5);\n\t\t\n\t\t\n\t\t\\node[align=right,fill=white] at (12.5,1.5) {\\Large{$\\Omega_1$}};\n\t\t\\node[align=right,fill=white] at (10.5,2.5) {\\Large{$\\Omega_2$}};\n\t\t\\node[align=right,fill=white] at (7.5,1.5) {\\Large{$\\Omega_3$}};\n\t\t\\end{tikzpicture}\n\t\\end{center}\n\t\\caption{Example of possible meshes for three embedded domains $\\Omega_1$, $\\Omega_2$, 
$\\Omega_3$.}\n\t\\label{fig:illustrationmesh}\n\\end{figure}\n\n\\subsubsection*{Discontinuous Galerkin finite element method}\nThe local adaptive discontinuous Galerkin method will solve local elliptic problems in $\\Omega_k$ by using a discontinuous Galerkin scheme introduced in \\cite{ESZ09}, which we recall here. \nIn what follows, $\\mathfrak{T}=(D,\\mathcal{M},\\mathcal{F})$ denotes a tuple defined by a domain $D$, a simplicial mesh $\\mathcal{M}$ on $D$ and its set of faces $\\mathcal{F}=\\mathcal{F}_b\\cup\\mathcal{F}_i$. In practice we will consider $\\mathfrak{T}_k=(\\Omega,\\mathcal{M}_k,\\mathcal{F}_k)$ or $\\widehat{\\mathfrak{T}}_k=(\\Omega_k,\\widehat{\\mathcal{M}}_k,\\widehat{\\mathcal{F}}_k)$. For $\\mathfrak{T}=(D,\\mathcal{M},\\mathcal{F})$ we define \n\\begin{equation}\\label{eq:defVT}\nV(\\mathfrak{T}) = \\{v\\in L^2(D)\\,:\\, v|_K\\in \\mathbb{P}_\\ell(K),\\,\\forall K\\in\\mathcal{M}\\},\n\\end{equation}\nwhere $\\mathbb{P}_\\ell(K)$ is the set of polynomials in $K$ of total degree at most $\\ell$. As usual for such discontinuous Galerkin methods we need to define appropriate averages, jumps, weights and penalization parameters. For $K\\in\\mathcal{M}$ we denote by $\\bm{n}_K$ the unit outward normal to $K$ and $\\mathcal{F}_K=\\{\\sigma\\in\\mathcal{F}\\,:\\,\\sigma\\subset\\partial K\\}$. Let $\\sigma\\in\\mathcal{F}_{i}$ and $K,T\\in\\mathcal{M}$ with $\\sigma=\\partial K\\cap\\partial T$; then $\\bm{n}_\\sigma=\\bm{n}_K$ and\n\\begin{equation}\n\\delta_{K,\\sigma}=\\bm{n}_\\sigma^\\top A|_K\\bm{n}_\\sigma, \\qquad\\qquad \\delta_{T,\\sigma}=\\bm{n}_\\sigma^\\top A|_T\\bm{n}_\\sigma.\n\\end{equation}\nThe weights are defined by\n\\begin{equation}\n\\omega_{K,\\sigma}=\\frac{\\delta_{T,\\sigma}}{\\delta_{K,\\sigma}+\\delta_{T,\\sigma}}, \\qquad\\qquad \\omega_{T,\\sigma}=\\frac{\\delta_{K,\\sigma}}{\\delta_{K,\\sigma}+\\delta_{T,\\sigma}}\n\\end{equation}\nand the penalization parameters by\n\\begin{equation}\n\\gamma_\\sigma=2\\frac{\\delta_{K,\\sigma}\\delta_{T,\\sigma}}{\\delta_{K,\\sigma}+\\delta_{T,\\sigma}}, \\qquad\\qquad \\nu_\\sigma=\\frac{1}{2}|\\bm{\\beta}\\cdot \\bm{n}_\\sigma|.\n\\end{equation}\nIf $\\sigma\\in\\mathcal{F}_{b}$ and $K\\in\\mathcal{M}$ with $\\sigma=\\partial K\\cap \\partial D$ then $\\bm{n}_\\sigma$ is $\\bm{n}_D$, the unit outward normal to $\\partial D$, and\n\\begin{equation}\n\\delta_{K,\\sigma}=\\bm{n}_\\sigma^\\top A|_K\\bm{n}_\\sigma, \\qquad \\omega_{K,\\sigma}=1, \\qquad \\gamma_\\sigma=\\delta_{K,\\sigma}, \\qquad \\nu_\\sigma=\\frac{1}{2}|\\bm{\\beta}\\cdot \\bm{n}_\\sigma|.\n\\end{equation}\nLet $g\\in L^2(\\partial D)$; we define the averages and jumps of $v\\in V(\\mathfrak{T})$ as follows.\nFor $\\sigma\\in\\mathcal{F}_{b}$ with $\\sigma=\\partial K\\cap\\partial D$ we set\n\\begin{equation}\n\\mean{v}_{\\omega,\\sigma}=v|_K, \\qquad\\qquad \\mean{v}_{g,\\sigma}=\\frac{1}{2}(v|_K+g), \\qquad\\qquad \\jump{v}_{g,\\sigma}=v|_K-g\n\\end{equation}\nand for $\\sigma\\in\\mathcal{F}_{i}$ with $\\sigma=\\partial K\\cap\\partial T$\n\\begin{equation}\n\\mean{v}_{\\omega,\\sigma}=\\omega_{K,\\sigma}v|_K+\\omega_{T,\\sigma}v|_T, \\qquad\\qquad\n\\mean{v}_{g,\\sigma}=\\frac{1}{2}(v|_K+v|_T ), \\qquad\\qquad\n\\jump{v}_{g,\\sigma} = v|_K-v|_T.\n\\end{equation}\nWe define $\\jump{\\cdot}_{\\sigma}= \\jump{\\cdot}_{0,\\sigma}$ and $\\mean{\\cdot}_{\\sigma}= \\mean{\\cdot}_{0,\\sigma}$. A similar notation holds for vector-valued functions, and whenever no confusion can arise the subscript $\\sigma$ is omitted.
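\n\nTo make these definitions concrete, the following minimal sketch (in Python; the data and variable names are purely illustrative and not taken from our code) evaluates the weights, the penalization parameters and the weighted average on a single internal face $\\sigma=\\partial K\\cap\\partial T$:\n\\begin{verbatim}\nimport numpy as np\n\nn = np.array([1.0, 0.0])               # unit normal to sigma, oriented from K to T\nA_K = np.diag([1.0, 1.0])              # diffusion tensor restricted to K\nA_T = np.diag([1e-2, 1e-2])            # diffusion tensor restricted to T\nbeta = np.array([-1.0, -1.0])          # velocity field on sigma\n\nd_K = n @ A_K @ n                      # delta_{K,sigma} = n^T (A|_K) n\nd_T = n @ A_T @ n                      # delta_{T,sigma} = n^T (A|_T) n\n\nw_K = d_T / (d_K + d_T)                # diffusivity-dependent weights\nw_T = d_K / (d_K + d_T)\ngamma = 2.0 * d_K * d_T / (d_K + d_T)  # penalization: harmonic average of d_K, d_T\nnu = 0.5 * abs(beta @ n)\n\nv_K, v_T = 0.7, 0.3                    # traces of v on sigma from K and from T\navg_w = w_K * v_K + w_T * v_T          # weighted average of v\njump = v_K - v_T                       # jump of v\n\nprint(f"weights {w_K:.4f}, {w_T:.4f}; gamma = {gamma:.4f}; nu = {nu:.2f}")\nprint(f"weighted average = {avg_w:.4f}, jump = {jump:.4f}")\n\\end{verbatim}\nSince $\\omega_{K,\\sigma}=\\delta_{T,\\sigma}\/(\\delta_{K,\\sigma}+\\delta_{T,\\sigma})$, for $\\delta_{T,\\sigma}\\ll\\delta_{K,\\sigma}$ the weighted average essentially selects the trace from the element with the smaller normal diffusivity, and $\\gamma_\\sigma$ is the harmonic average of the two normal diffusivities; this is the mechanism behind the robustness for strong diffusion discontinuities mentioned below.\n\n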
Let $h_\\sigma$ be the diameter of $\\sigma$ and $\\eta_\\sigma>0$ a user parameter; for $u,v\\in V(\\mathfrak{T})$ we define the bilinear form\n\\begin{align}\\label{eq:opB}\n\\begin{split}\n\\mathcal{B}(u,v,\\mathfrak{T},g)&=\n\\int_{D} (A\\nabla u\\cdot \\nabla v +(\\mu-\\nabla\\cdot \\bm{\\beta})u v-u\\bm{\\beta}\\cdot \\nabla v)\\dif\\bm{x}\\\\\n&\\quad-\\sum_{\\sigma\\in\\mathcal{F}}\\int_\\sigma(\\jump{v}\\mean{A\\nabla u}_{\\omega}\\cdot \\bm{n}_\\sigma+\\jump{u}_{g}\\mean{A\\nabla v}_{\\omega}\\cdot \\bm{n}_\\sigma)\\dif\\bm{y}\\\\\n&\\quad+\\sum_{\\sigma\\in\\mathcal{F}}\\int_\\sigma ((\\eta_\\sigma\\frac{\\gamma_\\sigma}{h_\\sigma}+\\nu_\\sigma)\\jump{u}_{g}\\jump{v}+\\bm{\\beta}\\cdot\\bm{n}_\\sigma\\mean{u}_{g}\\jump{v})\\dif\\bm{y},\n\\end{split}\n\\end{align}\nwhere the gradients are taken elementwise. The bilinear form $\\mathcal{B}(\\cdot,\\cdot,\\mathfrak{T},g)$ will be used to approximate elliptic problems in $D$ with Dirichlet boundary condition $g$. This scheme is known as the Symmetric Weighted Interior Penalty (SWIP) scheme \\cite{ESZ09}. The SWIP method is an improvement of the Interior Penalty (IP) scheme \\cite{Arn82}, where the weights are defined as $\\omega_{K,\\sigma}=\\omega_{T,\\sigma}=1\/2$. The use of diffusivity-dependent averages increases the robustness of the method for problems with strong diffusion discontinuities. The bilinear form defined in \\cref{eq:opB} is mathematically equivalent to other formulations where $v\\bm{\\beta}\\cdot\\nabla u$ or $\\nabla\\cdot(\\bm{\\beta} u)v$ appear instead of $u\\bm{\\beta}\\cdot\\nabla v$ (see \\cite{ESZ09} and \\cite[Section 4.6.2]{PiE12}). Our choice of formulation is convenient to express local conservation laws (see \\cite[Section 2.2.3]{PiE12}).\n\n\\subsection{Local method algorithm}\\label{sec:localg}\nIn this section we present the local scheme. In order to facilitate the comprehension of the method, we start with an informal description and then provide a pseudo-code for the algorithm. We denote by $u_k$ the global solutions on $\\Omega$ and by $\\hat{u}_k$ the local solutions on $\\Omega_k$, which are used to correct the global solutions.\n\nGiven a discretization $\\mathfrak{T}_1=(\\Omega,\\mathcal{M}_1,\\mathcal{F}_1)$ on $\\Omega$ the local scheme computes a first approximate solution $u_1\\in V(\\mathfrak{T}_1)$ to \\eqref{eq:weak}. The algorithm then performs the following steps for $k=2,\\ldots,M$.\n\\begin{enumerate}[label=\\roman*)]\n\t\\item Given the current solution $u_{k-1}$, identify the region $\\Omega_k$ where the error is large and define a new refined mesh $\\mathcal{M}_k$ satisfying \\cref{ass:mesh} by iterating the following steps.\n\t\\begin{enumerate}[label=\\alph*)]\n\t\t\\item For each element $K\\in\\mathcal{M}_{k-1}$ compute an error indicator $\\eta_{M,K}$ (defined in \\eqref{eq:marketa}) and mark the local domain $\\Omega_{k}$ using the fixed energy fraction marking strategy \\cite[Section 4.2]{Dor96}. Hence, $\\Omega_{k}$ is defined as the union of the elements with the largest error indicators $\\eta_{M,K}$ and it is such that the error committed inside of $\\Omega_{k}$ is at least a prescribed fraction of the total error.\n\t\t\\item Define the new mesh ${\\mathcal{M}}_{k}$ by refining the elements $K\\in\\mathcal{M}_{k-1}$ with $K\\subset\\Omega_{k}$.\n\t\t\\item Enlarge the local domain $\\Omega_{k}$ defined at step a) by adding a one-element-wide boundary layer (i.e. 
in order to satisfy item 2b of \\cref{ass:mesh}).\n\t\t\\item Define the local mesh $\\widehat{\\mathcal{M}}_{k}$ by the elements of $\\mathcal{M}_{k}$ inside of $\\Omega_{k}$.\n\t\\end{enumerate}\n\t\\item Solve a local elliptic problem in $\\Omega_k$ on the refined mesh $\\widehat{\\mathcal{M}}_k$ using $u_{k-1}$ as artificial Dirichlet boundary condition on $\\partial\\Omega_k\\setminus\\partial\\Omega$. The solution is denoted by $\\hat{u}_k\\in V(\\widehat{\\mathfrak{T}}_k)$, where $\\widehat{\\mathfrak{T}}_k=(\\Omega_k,\\widehat{\\mathcal{M}}_k,\\widehat{\\mathcal{F}}_k)$.\n\t\\item The local solution $\\hat{u}_k$ is used to correct the previous solution $u_{k-1}$ inside of $\\Omega_k$ and obtain the new global solution $u_k$.\n\\end{enumerate}\nThe pseudo-code of the local scheme is given in \\cref{alg:local}, where $\\chi_{\\Omega\\setminus\\Omega_k}$ is the indicator function of $\\Omega\\setminus\\Omega_k$ and $(\\cdot,\\cdot)_k$ is the inner product in $L^2(\\Omega_k)$. The function $\\text{LocalDomain}(u_k,\\mathfrak{T}_k)$ used in \\cref{alg:local} performs steps a)-d) of i). For purely diffusive problems, it is shown in \\cite[Theorem 8.2]{Ros20} that \\cref{alg:local} is equivalent to the LDGGD introduced in \\cite{AbR19}; hence the scheme converges for exact solutions $u\\in H^1_0(\\Omega)$.\n\n\\begin{algorithm}\n\t\\caption{LocalScheme($\\mathfrak{T}_1$)}\n\t\\label{alg:local}\n\t\\begin{algorithmic}\n\t\t\\State Find $u_1\\in V(\\mathfrak{T}_1)$ solution to $\\mathcal{B}(u_1,v_1,\\mathfrak{T}_1,0)=(f,v_1)_1$ for all $v_1\\in V(\\mathfrak{T}_1)$.\n\t\t\\For{$k=2,\\ldots,M$}\n\t\t\\State $(\\mathfrak{T}_k,\\widehat{\\mathfrak{T}}_{k}) = \\text{LocalDomain}(u_{k-1},\\mathfrak{T}_{k-1})$.\n\t\t\\State $g_k=u_{k-1}\\chi_{\\Omega\\setminus\\Omega_k}\\in V(\\mathfrak{T}_k)$.\n\t\t\\State Find $\\hat{u}_k\\in V(\\widehat{\\mathfrak{T}}_k)$ solution to $\\mathcal{B}(\\hat{u}_k,v_k,\\widehat{\\mathfrak{T}}_k,g_k)=(f,v_k)_k$ for all $v_k\\in V(\\widehat{\\mathfrak{T}}_k)$.\n\t\t\\State $u_k=g_k+\\hat{u}_k\\in V(\\mathfrak{T}_k)$.\n\t\t\\EndFor\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Error estimators via flux and potential reconstructions}\\label{sec:err_flux}\nThe error estimators used to mark the local domains $\\Omega_k$ and to provide error bounds on the numerical solution $u_k$ are introduced here.\n\nIn the framework of self-adjoint elliptic problems, the equilibrated fluxes method \\cite{AiO93,BaW85} is a technique widely used to derive a posteriori error estimators free of undetermined constants; it is based on the definition of local fluxes which satisfy a local conservation property. Since local fluxes and conservation properties are intrinsic to the discontinuous Galerkin formulation, this discretization is well suited for the equilibrated fluxes method \\cite{Ain05,CoN08}. In \\cite{ENV07,Kim07} the Raviart-Thomas-Nédélec space is used to build an $H_{\\divop}(\\Omega)$ conforming reconstruction $\\bm{t}_h$ of the discrete diffusive flux $-A\\nabla u_h$. A diffusive flux $\\bm{t}_h$ with optimal divergence is obtained, in the sense that its divergence coincides with the orthogonal projection of the right-hand side $f$ onto the discontinuous Galerkin space. 
In \\cite{ESV10} the authors extend this approach to convection-diffusion-reaction equations by defining an $H_{\\divop}(\\Omega)$ conforming convective flux $\\bm{q}_h$ approximating $\\bm{\\beta} u_h$ and satisfying a conservation property.\n\nWe follow a similar strategy and define in the next section error estimators in terms of diffusive and convective flux reconstructions $\\bt_k,\\bq_k$ for the local scheme, as well as an $H^1_0(\\Omega)$ conforming potential reconstruction $s_k$ of the solution $u_k$.\n\n\\subsection{Definition of the error estimators}\\label{sec:errest}\nIn this section we define the error estimators in terms of the potential reconstruction $s_k$ approximating the solution $u_k$ and of the diffusive and convective fluxes $\\bt_k$ and $\\bq_k$ approximating $-A\\nabla u_k$ and $\\bm{\\beta} u_k$, respectively.\n\nFollowing the iterative and local nature of our scheme, we define the diffusive and convective flux reconstructions as\n\\begin{equation}\\label{eq:defflux}\n\\bt_k=\\bm{t}_{k-1}\\chi_{\\Omega\\setminus\\Omega_k}+\\hat{\\bt}_k,\\qquad \\bq_k=\\bm{q}_{k-1}\\chi_{\\Omega\\setminus\\Omega_k}+\\hat{\\bq}_k,\n\\end{equation}\nwhere $\\bm{t}_0=\\bm{q}_0=0$ and $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ are $H_{\\divop}(\\Omega_k)$ conforming flux reconstructions of $-A\\nabla \\hat{u}_k$, $\\bm{\\beta} \\hat{u}_k$, respectively, and where $\\hat{u}_k$ is the local solution. To avoid any abuse of notation in \\cref{eq:defflux}, we extend $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ to zero outside of $\\Omega_k$.\nThe flux reconstructions $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ satisfy a local conservation property and are defined in \\cref{sec:potflux}. We readily see that \\cref{eq:defflux} allows for flux jumps at the subdomain boundaries, while giving enough freedom to define $\\hat{\\bt}_k,\\hat{\\bq}_k$ in a way that a conservation property is satisfied. The flux reconstructions are used to measure the nonconformity of the numerical fluxes. In the same spirit we define a potential reconstruction $s_k\\in H^1_0(\\Omega)$ used to measure the nonconformity of the numerical solution. It is defined recursively as\n\\begin{equation}\\label{eq:defpot}\ns_k = s_{k-1}\\chi_{\\Omega\\setminus\\Omega_k}+\\hat s_k,\n\\end{equation}\nwhere $s_0=0$ and $\\hat s_k\\in H^1(\\Omega_k)$ is such that $s_k\\in H^1_0(\\Omega)$; similarly, we extend $\\hat s_k$ to zero outside of $\\Omega_k$. More details about the definitions of $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ and $\\hat s_k$ will be given in \\cref{sec:potflux}; for the time being we define the error estimators.\n\nFor $K\\in\\mathcal{M}_k$ and $v\\in H^1(K)$ we set \n\\begin{equation}\\label{eq:defnBK}\n\\nB{v}_K^2=\\nLddK{A^{1\/2}\\nabla v}^2+\\nLdK{(\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta})^{1\/2}v}^2,\n\\end{equation}\nwhere $\\nLdK{{\\cdot}}$ is the $L^2$-norm for scalar-valued functions in $K$ and $\\nLddK{{\\cdot}}$ the $L^2$-norm for vector-valued functions in $K$.
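\n\nBefore turning to the individual estimators, note the purely mechanical nature of the recursive compositions \\cref{eq:defflux,eq:defpot}: outside the current subdomain the previous reconstruction is kept, while inside $\\Omega_k$ it is replaced by the local one, exactly as for $u_k=g_k+\\hat{u}_k$ in \\cref{alg:local}. A minimal sketch (in Python, with illustrative names, and ignoring that the mesh inside $\\Omega_k$ is refined between iterations) reads:\n\\begin{verbatim}\n# A reconstruction is stored as a dict mapping each mesh element to its data.\ndef compose(prev, local, omega_k):\n    # Keep prev on elements outside Omega_k, overwrite with local inside.\n    new = {K: data for K, data in prev.items() if K not in omega_k}\n    new.update(local)  # hat-quantities are supported in Omega_k only\n    return new\n\n# Hypothetical elements 0..3, with Omega_k = {2, 3}:\nt_prev = {0: "t0", 1: "t1", 2: "t2", 3: "t3"}\nt_hat = {2: "t2_loc", 3: "t3_loc"}\nprint(compose(t_prev, t_hat, {2, 3}))\n# {0: 't0', 1: 't1', 2: 't2_loc', 3: 't3_loc'}\n\\end{verbatim}\n\n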
The nonconformity of the numerical solution $u_k$ is measured by the estimator\n\\begin{subequations}\n\t\\begin{equation}\\label{eq:etaNC}\n\t\\eta_{NC,K}= \\nB{u_k-s_k}_K.\n\t\\end{equation}\n\tIn the following, $m_K$, $\\tilde m_K$, $m_\\sigma$, $D_{t,K,\\sigma}$, $c_{\\bm{\\beta},\\mu,K}>0$ are some known constants which will be defined in \\cref{sec:ctedef}.\n\tThe residual estimator is\n\t\\begin{equation}\\label{eq:etaR}\n\t\\eta_{R,K}= m_K \\nLdK{f-\\nabla\\cdot\\bt_k-\\nabla\\cdot\\bq_k-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k},\n\t\\end{equation}\n\twhich can be seen as the residual of \\eqref{eq:weak} where we first replace $u$ by $u_k$, then $-A\\nabla u_k$ by $\\bt_k$, $\\bm{\\beta} u_k$ by $\\bq_k$ and finally use Green's theorem. The error estimators defined in \\cref{eq:etaDF,eq:etaC1,eq:etaC2,eq:etaU,eq:etaG1,eq:etaG2,eq:etatC1,eq:etatU} measure the error introduced by these substitutions and the error introduced when applying Green's theorem to $\\bt_k,\\bq_k$, which are not in $H_{\\divop}(\\Omega)$.\n\t\n\tThe diffusive flux estimator measures the difference between $-A\\nabla u_k$ and $\\bt_k$. It is given by $\\eta_{DF,K}=\\min\\lbrace \\eta_{DF,K}^1,\\eta_{DF,K}^2\\rbrace$, where\n\t\\begin{equation}\\label{eq:etaDF}\n\t\\begin{aligned}\n\t\\eta_{DF,K}^1 &= \\nLddK{A^{1\/2}\\nabla u_k+A^{-1\/2}\\bt_k},\\\\\n\t\\eta_{DF,K}^2 &= m_K\\nLdK{(\\mathcal{I}-\\pi_0)(\\nabla\\cdot(A\\nabla u_k+\\bt_k))}\\\\\n\t&\\quad +\\tilde{m}_K^{1\/2}\\sum_{\\sigma\\in \\mathcal{F}_K}C_{t,K,\\sigma}^{1\/2}\\nLds{(A\\nabla u_k+\\bt_k)\\cdot\\bm{n}_\\sigma},\n\t\\end{aligned}\n\t\\end{equation}\n\twhere $\\pi_0$ is the $L^2$-orthogonal projector onto $\\mathbb{P}_0(K)$ and $\\mathcal{I}$ is the identity operator. Let $\\sigma\\in\\mathcal{F}_k$ and $\\pi_{0,\\sigma}$ be the $L^2$-orthogonal projector onto $\\mathbb{P}_0(\\sigma)$. The convection and upwinding estimators, which measure the difference between $\\bm{\\beta} u_k$, $\\bm{\\beta} s_k$ and $\\bq_k$, are defined by\n\t\\begin{align}\\label{eq:etaC1}\n\t\\eta_{C,1,K}&= m_K\\nLdK{(\\mathcal{I}-\\pi_0)(\\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k))},\\\\ \\label{eq:etaC2}\n\t\\eta_{C,2,K}&= \\frac{1}{2}c_{\\bm{\\beta},\\mu,K}^{-1\/2}\\nLdK{(\\nabla\\cdot\\bm{\\beta})(u_k-s_k)},\\\\ \\label{eq:etatC1}\n\t\\tilde \\eta_{C,1,K}&= m_K\\nLdK{(\\mathcal{I}-\\pi_0)(\\nabla\\cdot (\\bq_k-\\bm{\\beta} u_k))},\\\\ \\label{eq:etaU}\n\t\\eta_{U,K} &= \\sum_{\\sigma\\in\\mathcal{F}_K}\\chi_\\sigma m_\\sigma\\nLds{\\pi_{0,\\sigma}\\mean{\\bq_k-\\bm{\\beta} s_k}\\cdot \\bm{n}_\\sigma},\\\\ \\label{eq:etatU}\n\t\\tilde\\eta_{U,K}&= \\sum_{\\sigma\\in\\mathcal{F}_K}\\chi_\\sigma m_\\sigma\\nLds{\\pi_{0,\\sigma}\\mean{\\bq_k-\\bm{\\beta} u_k}\\cdot \\bm{n}_\\sigma},\n\t\\end{align}\n\twhere $\\chi_\\sigma=2$ if $\\sigma\\in\\mathcal{F}_{k,b}$ and $\\chi_\\sigma=1$ if $\\sigma\\in\\mathcal{F}_{k,i}$.\n\tFinally, we introduce the jump estimators coming from the application of Green's theorem to $\\bt_k$ and $\\bq_k$ (see \\cref{lemma:boundBBA}). 
Those are defined by \n\t\\begin{align}\\label{eq:etaG1}\n\t\\eta_{\\Gamma,1,K} &= \\frac{1}{2}(|K|c_{\\bm{\\beta},\\mu,K})^{-1\/2}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\nLus{\\pi_{0,\\sigma}\\jump{\\bq_k}\\cdot\\bm{n}_\\sigma},\\\\ \\label{eq:etaG2}\n\t\\eta_{\\Gamma,2,K} &= \\frac{1}{2}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}} D_{t,K,\\sigma}\\nLds{\\jump{\\bt_k}\\cdot \\bm{n}_\\sigma}.\n\t\\end{align}\n\\end{subequations}\nWe end the section by defining the marking error estimator $\\eta_{M,K}$ used to mark $\\Omega_k$ in the LocalDomain routine of \\cref{alg:local}:\n\\begin{equation}\\label{eq:marketa}\n\\begin{aligned}\n\\eta_{M,K}&= \\eta_{NC,K}+\\eta_{R,K}+\\eta_{DF,K}+\\eta_{C,1,K}+\\eta_{C,2,K}+\\eta_{U,K}\\\\\n&\\quad +\\eta_{\\Gamma,1,K}+\\eta_{\\Gamma,2,K}+\\tilde\\eta_{C,1,K}+\\tilde\\eta_{U,K}.\n\\end{aligned}\n\\end{equation}\n\n\\subsection{Main results}\\label{sec:thms}\nWe state here our main results related to the a posteriori analysis of the local scheme; in particular, we provide reliable error bounds on the numerical solution $u_k$ which are free of undetermined constants. We also comment on why we cannot prove the efficiency of the new estimators.\n\n\nWe start by defining the norms for which we provide the error bounds; the same norms are used in \\cite{ESV10}. The operator $\\mathcal{B}$ defined in \\eqref{eq:bform} can be written $\\mathcal{B}=\\mathcal{B}_S+\\mathcal{B}_A$, where $\\mathcal{B}_S$ and $\\mathcal{B}_A$ are symmetric and skew-symmetric operators defined by\n\\begin{equation}\\label{eq:bsba}\n\\begin{aligned}\n\\mathcal{B}_S(u,v)&= \\int_\\Omega (A\\nabla u\\cdot\\nabla v+(\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta})u v)\\dif\\bm{x},\\\\\n\\mathcal{B}_A(u,v)&=\\int_{\\Omega}(\\bm{\\beta}\\cdot\\nabla u+\\frac{1}{2}(\\nabla\\cdot\\bm{\\beta})u)v\\dif\\bm{x},\n\\end{aligned}\n\\end{equation}\nfor $u,v\\in H^1(\\mathcal{M}_k)$. The energy norm is defined by the symmetric operator as\n\\begin{equation}\n\\nB{v}^2 = \\mathcal{B}_S(v,v) = \\nLdd{A^{1\/2}\\nabla v}^2+\\nLd{(\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta})^{1\/2}v}^2;\n\\end{equation}\nobserve that $\\nB{v}^2=\\sum_{K\\in\\mathcal{M}_k}\\nB{v}_K^2$, with $\\nB{{\\cdot}}_K$ as in \\eqref{eq:defnBK}. Since the norm $\\nB{{\\cdot}}$ is defined by the symmetric operator, it is well suited to study problems with dominant diffusion or reaction. On the other hand, it is inappropriate for convection dominated problems since it lacks a term measuring the error along the velocity direction. For this kind of problem we use the augmented norm\n\\begin{equation}\\label{eq:augnorm}\n\\nBp{v}=\\nB{v}+\\sup_{\\substack{w\\in H^1_0(\\Omega)\\\\ \\nB{w}=1}}(\\mathcal{B}_A(v,w)+\\mathcal{B}_J(v,w)),\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathcal{B}_J(v,w)=-\\sum_{\\sigma\\in\\mathcal{F}_{k,i}}\\int_\\sigma \\jump{\\bm{\\beta} v}\\cdot\\bm{n}_\\sigma \\mean{\\pi_0 w}\\dif\\bm{y}\n\\end{equation}\nis a term needed to sharpen the error bounds. The next two theorems give a bound on the error of the local scheme, measured in the energy or the augmented norm.\n\\begin{theorem}\\label{thm:energynormbound}\n\tLet $u\\in H^1_0(\\Omega)$ be the solution to \\eqref{eq:weak}, $u_k\\in V(\\mathfrak{T}_k)$ given by \\cref{alg:local}, $s_k\\in V(\\mathfrak{T}_k)\\cap H^1_0(\\Omega)$ from \\cref{eq:defpot,eq:defhsk} and $\\bt_k,\\bq_k\\in \\mathbf{RTN}_\\mathcalligra{r}(\\mathcal{M}_k)$ be defined by \\cref{eq:defflux,eq:deflocflux}. 
Then, the error measured in the energy norm is bounded as\n\t\\begin{equation}\n\t\\nB{u-u_k}\\leq \\eta = \\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{NC,K}^2\\right)^{1\/2}+\\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{1,K}^2\\right)^{1\/2},\n\t\\end{equation}\n\twhere $\\eta_{1,K}=\\eta_{R,K}+\\eta_{DF,K}+\\eta_{C,1,K}+\\eta_{C,2,K}+\\eta_{U,K}+\\eta_{\\Gamma,1,K}+\\eta_{\\Gamma,2,K}$.\n\\end{theorem}\n\\begin{theorem}\\label{thm:augmentednormbound}\n\tUnder the same assumptions as in \\cref{thm:energynormbound}, the error measured in the augmented norm is bounded as\n\t\\begin{equation}\n\t\\nBp{u-u_k}\\leq \\tilde\\eta = 2\\eta +\\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{2,K}^2\\right)^{1\/2},\n\t\\end{equation}\n\twith $\\eta$ from \\cref{thm:energynormbound} and $\\eta_{2,K}=\\eta_{R,K}+\\eta_{DF,K}+\\tilde\\eta_{C,1,K}+\\tilde\\eta_{U,K}+\\eta_{\\Gamma,1,K}+\\eta_{\\Gamma,2,K}$.\n\\end{theorem}\nThe error estimators of \\cref{thm:energynormbound,thm:augmentednormbound} are free of undetermined constants; indeed, they depend only on the numerical solution, the smallest eigenvalues of the diffusion tensor, the essential minimum of $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}$, the mesh size and known geometric constants. On the other hand, the error estimators are not efficient. The reason is that, compared to the true errors $\\nB{u-u_k}$ and $\\nBp{u-u_k}$, the error estimators $\\eta_{\\Gamma,1,K},\\eta_{\\Gamma,2,K}$ have a lower order of convergence. We illustrate this numerically in \\cref{exp:conv}.\nHowever, $\\eta_{\\Gamma,1,K},\\eta_{\\Gamma,2,K}$ are useful in practice: whenever they are small, the error estimators are efficient; when they become large, they indicate that the error is not localized and one should switch to a nonlocal method. This is also illustrated numerically in \\cref{exp:conv}.\n\n\n\\section{Potential and flux reconstructions, proofs of the main results}\\label{sec:errbound}\nIn this section we define the potential, diffusive and convective flux reconstructions, specify the geometric constants appearing in the error estimators \\cref{eq:etaNC,eq:etaR,eq:etaDF,eq:etaC1,eq:etaC2,eq:etaU,eq:etaG1,eq:etaG2,eq:etatC1,eq:etatU} and finally prove \\cref{thm:energynormbound,thm:augmentednormbound}.\n\n\\subsection{Potential and flux reconstructions via the equilibrated flux method}\\label{sec:potflux}\nWe define here the flux reconstructions $\\hat{\\bt}_k$, $\\hat{\\bq}_k$ of \\eqref{eq:defflux} and the potential reconstruction $\\hat s_k$ of \\eqref{eq:defpot}. In what follows we assume that $\\mathcal{M}_k$ does not have hanging nodes, i.e. we consider matching meshes, since it simplifies the analysis; however, in practice nonmatching meshes possessing hanging nodes can be employed (as in \\cref{sec:num}). Roughly speaking, the next results are extended to nonmatching meshes by building matching submeshes and computing the error estimators on those submeshes; we refer to \\cite[Appendix]{ESV10} for the details.\n\nWe start by defining some broken Sobolev spaces, and then the potential and flux reconstructions. 
For $k=1,\\ldots,M$ let $\\mathcal{G}_k=\\{G_j\\,|\\, j=1,\\ldots,k\\}$, where $G_k=\\Omega_k$ and \n\\begin{equation}\nG_j =\\Omega_j\\setminus\\cup_{i=j+1}^{k}\\overline{\\Omega}_{i} \\qquad \\text{for }j=1,\\ldots,k-1.\n\\end{equation}\nIn \\cref{fig:Omegak,fig:Dk} we give an example of a sequence of domains $\\Omega_k$ and the corresponding set $\\mathcal{G}_k$.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\t\\centering\n\t\t\t\\captionsetup{justification=centering}\n\t\t\t\\begin{tikzpicture}[scale=1]\n\t\t\t\\draw (0,0) rectangle (4,4);\n\t\t\n\t\t\t\\draw (2,2) rectangle (4,4);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,2.2)--(4,2.2);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,2.4)--(4,2.4);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,2.6)--(4,2.6);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,2.8)--(4,2.8);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3)--(4,3);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3.2)--(4,3.2);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3.4)--(4,3.4);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3.6)--(4,3.6);\n\t\t\t\\draw[color=NavyBlue,line width=1pt] (2,3.8)--(4,3.8);\n\t\t\n\t\t\t\\draw (3,1) rectangle (4,3);\n\t\t\t\\draw[color=OliveGreen,line width=1pt] (3.2,1)--(3.2,3);\n\t\t\t\\draw[color=OliveGreen,line width=1pt] (3.4,1)--(3.4,3);\n\t\t\t\\draw[color=OliveGreen,line width=1pt] (3.6,1)--(3.6,3);\n\t\t\t\\draw[color=OliveGreen,line width=1pt] (3.8,1)--(3.8,3);\n\t\t\t\\end{tikzpicture}\n\t\t\n\t\t\t\\caption{Sequence of domains\\\\$\\Omega_1$= \\tikz \\draw (0,0) rectangle (10pt,10pt);, $\\Omega_2$= \\tikz{\\draw(0,0) rectangle (10pt,10pt);\\draw[color=NavyBlue,line width=1pt] (0,6.6pt)--(10pt,6.6pt);\\draw[color=NavyBlue,line width=1pt] (0,3.3pt)--(10pt,3.3pt);}, $\\Omega_3$= \\tikz{\\draw(0,0) rectangle (10pt,10pt);\\draw[color=OliveGreen,line width=1pt] (6.6pt,0pt)--(6.6pt,10pt);\\draw[color=OliveGreen,line width=1pt] (3.3pt,0pt)--(3.3pt,10pt);} .}\n\t\t\t\\label{fig:Omegak}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\t\\centering\n\t\t\t\\captionsetup{justification=centering}\n\t\t\t\\begin{tikzpicture}[scale=1]\n\t\t\t\\draw[pattern=dots] (0,0) rectangle (4,4);\n\t\t\n\t\t\n\t\t\t\\draw[fill=Goldenrod] (2,2) rectangle (4,4);\n\t\t\n\t\t\n\t\t\t\\draw[fill=BrickRed] (3,1) rectangle (4,3);\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Set $\\mathcal{G}_3=\\{G_1,G_2,G_3\\}$ with\\\\ $G_1$= \\tikz \\draw[pattern=dots] (0,0) rectangle (10pt,10pt);, $G_2$= \\tikz \\draw[fill=Goldenrod] (0,0) rectangle (10pt,10pt);, $G_3$= \\tikz \\draw[fill = BrickRed] (0,0) rectangle (10pt,10pt); .}\n\t\t\t\\label{fig:Dk}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\t\\centering\n\t\t\t\\captionsetup{justification=centering}\n\t\t\t\\begin{tikzpicture}[scale=1]\n\t\t\t\\draw[draw=none] (0,0) rectangle (4,4);\n\t\t\t\\draw[color=YellowOrange, line width=1pt, solid] (2,4)--(2,2)--(3,2);\n\t\t\t\\draw[color=PineGreen, line width=1pt, densely dotted] (3,2)--(3,1)--(4,1);\n\t\t\t\\draw[color=Purple, line width=1pt, loosely dashed] (3,2)--(3,3)--(4,3);\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Skeleton $\\Gamma_3$ with \\\\$\\partial G_1\\cap\\partial G_2$= \\tikz\\draw[color=YellowOrange, line width=1pt, solid] (0,0)--(10pt,0pt);, $\\partial G_1\\cap\\partial G_3$= \\tikz \\draw[color=PineGreen, line width=1pt, densely dotted] (0,0) -- (10pt,0pt);,\\\\$\\partial G_2\\cap\\partial G_3$= \\tikz \\draw[color=Purple, line width=1pt, loosely dashed] 
(0,0) -- (15pt,0pt);.}\n\t\t\t\\label{fig:Gk}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{Example of a sequence of domains $\\Omega_1,\\Omega_2,\\Omega_3$, set $\\mathcal{G}_3$ and skeleton $\\Gamma_3$.}\n\t\\label{fig:illustrationDk}\n\\end{figure}\nWe define the broken spaces\n\\begin{align}\nH_{\\divop}(\\mathcal{G}_k) &= \\{\\bm{v}\\in L^2(\\Omega)^d\\,:\\, \\bm{v}|_G\\in H_{\\divop}(G)\\text{ for all }G\\in \\mathcal{G}_k\\},\\\\\nH^1({\\mathcal{M}}_k)&=\\{v\\in L^2(\\Omega)\\,:\\,v|_K\\in H^1(K)\\text{ for all }K\\in\\mathcal{M}_k\\},\n\\end{align}\nwhere the divergence and gradient operators in $H_{\\divop}(\\mathcal{G}_k)$ and $H^1(\\mathcal{M}_k)$ are taken elementwise.\nWe extend the jump operator $\\jump{\\cdot}_\\sigma$ to the broken space $H^1(\\mathcal{M}_k)$. We call $\\Gamma_k$ the internal skeleton of $\\mathcal{G}_k$, that is\n\\begin{equation}\n\\Gamma_k=\\{\\partial G_i\\cap\\partial G_j\\,|\\, G_i,G_j\\in\\mathcal{G}_k,\\, i\\neq j\\};\n\\end{equation}\nan example of $\\Gamma_k$ is given in \\cref{fig:Gk}.\nFor each $\\gamma\\in\\Gamma_k$ we define $\\mathcal{F}_\\gamma = \\{\\sigma\\in\\mathcal{F}_{k,i}\\,|\\,\\sigma\\subset \\gamma\\}$ and set $\\bm{n}_\\gamma$, the normal to $\\gamma$, as $\\bm{n}_\\gamma|_\\sigma=\\bm{n}_\\sigma$. The jump $\\jump{\\cdot}_\\gamma$ on $\\gamma$ is defined by $\\jump{\\cdot}_\\gamma|_\\sigma=\\jump{\\cdot}_\\sigma$. \n\n\nIn \\cite{ESV10} the reconstructed fluxes live in $H_{\\divop}(\\Omega)$. For the local algorithm we need to build such fluxes using the recursive relation \\eqref{eq:defflux}. This leads to fluxes having jumps across the boundaries of the subdomains, i.e. across $\\gamma\\in\\Gamma_k$; hence they lie in the broken space $H_{\\divop}(\\mathcal{G}_k)$. In the rest of this section we explain how to build fluxes which are in an approximation space of $H_{\\divop}(\\mathcal{G}_k)$ and satisfy a local conservation property. \nWe start by introducing a broken version of the usual Raviart-Thomas-Nédélec spaces \\cite{Ned80,RaT77}, which we define as\n\\begin{equation}\\label{eq:RTN}\n\\mathbf{RTN}_\\mathcalligra{r}(\\mathcal{M}_k):=\\{\\bm{v}_k\\in H_{\\divop}(\\mathcal{G}_k)\\,:\\, \\bm{v}_k|_K\\in\\mathbf{RTN}_\\mathcalligra{r}(K)\\text{ for all }K\\in\\mathcal{M}_k\\},\n\\end{equation}\nwhere $\\mathcalligra{r}\\in\\{\\ell-1,\\ell\\}$ and $\\mathbf{RTN}_\\mathcalligra{r}(K)=\\mathbb{P}_\\mathcalligra{r}(K)^d+\\bm{x} \\mathbb{P}_{\\mathcalligra{r}}(K)$. In order to build functions in $\\mathbf{RTN}_\\mathcalligra{r}(\\mathcal{M}_k)$ we need a characterization of this space. \nLet $\\bm{v}_k\\in L^2(\\Omega)^d$ be such that $\\bm{v}_k|_K\\in\\mathbf{RTN}_\\mathcalligra{r}(K)$ for each $K\\in\\mathcal{M}_k$; it is known that $\\bm{v}_k\\in H_{\\divop}(\\Omega)$ if and only if $\\jump{\\bm{v}_k}_\\sigma\\cdot\\bm{n}_\\sigma=0$ for all $\\sigma\\in\\mathcal{F}_{k,i}$ (see \\cite[Lemma 1.24]{PiE12}). 
Since we search for fluxes $\\bm{v}_k$ in $H_{\\divop}(\\mathcal{G}_k)$, we relax this condition and allow $\\jump{\\bm{v}_k}_\\gamma\\cdot\\bm{n}_\\gamma\\neq 0$ for $\\gamma\\in\\Gamma_k$.\n\n\\begin{lemma}\n\tLet $\\bm{v}_k\\in L^2(\\Omega)^d$ be such that $\\bm{v}_k|_K\\in\\mathbf{RTN}_\\mathcalligra{r}(K)$ for each $K\\in\\mathcal{M}_k$. Then $\\bm{v}_k\\in \\mathbf{RTN}_\\mathcalligra{r}(\\mathcal{M}_k)$ if and only if $\\jump{\\bm{v}_k}_\\sigma\\cdot\\bm{n}_\\sigma=0$ for all $\\sigma\\notin \\cup_{\\gamma\\in\\Gamma_k}\\mathcal{F}_\\gamma$.\n\\end{lemma} \n\\begin{proof}\n\tThe proof follows the lines of \\cite[Lemma 1.24]{PiE12}.\n\\end{proof}\nThe diffusive and convective fluxes $\\bt_k,\\bq_k\\in \\mathbf{RTN}_\\mathcalligra{r}(\\mathcal{M}_k)$ are defined recursively as in \\eqref{eq:defflux}, where $\\hat{\\bt}_k,\\hat{\\bq}_k\\in \\mathbf{RTN}_\\mathcalligra{r}(\\widehat{\\mathcal{M}}_k)$, with\n\\begin{equation}\n\\mathbf{RTN}_\\mathcalligra{r}(\\widehat{\\mathcal{M}}_k):=\\{\\bm{v}_k\\in H_{\\divop}(\\Omega_k)\\,:\\, \\bm{v}_k|_K\\in\\mathbf{RTN}_\\mathcalligra{r}(K)\\text{ for all }K\\in\\widehat{\\mathcal{M}}_k\\},\n\\end{equation}\nare given by the relations\n\\begin{subequations}\\label{eq:deflocflux}\n\t\\begin{equation}\\label{eq:deflocflux1}\n\t\\begin{aligned}\n\t\\int_\\sigma \\hat{\\bt}_k\\cdot\\bm{n}_\\sigma p_k\\dif\\bm{y}&= \\int_\\sigma (-\\mean{A\\nabla \\hat{u}_k}_\\omega\\cdot\\bm{n}_\\sigma+\\eta_\\sigma\\frac{\\gamma_\\sigma}{h_\\sigma}\\jump{\\hat{u}_k}_{g_k})p_k\\dif\\bm{y},\\\\\n\t\\int_\\sigma \\hat{\\bq}_k\\cdot\\bm{n}_\\sigma p_k\\dif\\bm{y} &= \\int_\\sigma (\\bm{\\beta}\\cdot\\bm{n}_\\sigma\\mean{\\hat{u}_k}_{g_k}+\\nu_\\sigma\\jump{\\hat{u}_k}_{g_k})p_k\\dif\\bm{y}\n\t\\end{aligned}\n\t\\end{equation}\n\tfor all $\\sigma\\in\\widehat{\\mathcal{F}}_k$ and $p_k\\in \\mathbb{P}_\\mathcalligra{r}(\\sigma)$ and\n\t\\begin{equation}\\label{eq:deflocflux2}\n\t\\begin{aligned}\n\t\\int_K \\hat{\\bt}_k \\cdot\\hat{\\br}_k\\dif\\bm{x} &= -\\int_K A\\nabla\\hat{u}_k\\cdot\\hat{\\br}_k\\dif\\bm{x}+\\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma\\omega_{K,\\sigma}\\jump{\\hat{u}_k}_{g_k} A|_K\\hat{\\br}_k\\cdot\\bm{n}_\\sigma\\dif\\bm{y},\\\\\n\t\\int_K\\hat{\\bq}_k\\cdot\\hat{\\br}_k\\dif\\bm{x} &= \\int_K \\hat{u}_k\\bm{\\beta}\\cdot\\hat{\\br}_k\\dif\\bm{x}\n\t\\end{aligned}\n\t\\end{equation}\n\\end{subequations}\nfor all $K\\in\\widehat{\\mathcal{M}}_k$ and $\\hat{\\br}_k\\in\\mathbb{P}_{\\mathcalligra{r}-1}(K)^d$. Since $\\hat{\\bt}_k|_K\\cdot\\bm{n}_\\sigma$, $\\hat{\\bq}_k|_K\\cdot\\bm{n}_\\sigma\\in\\mathbb{P}_\\mathcalligra{r}(\\sigma)$ (see \\cite[Proposition 3.2]{BrF91}), relation \\eqref{eq:deflocflux1} defines $\\hat{\\bt}_k|_K\\cdot\\bm{n}_\\sigma$, $\\hat{\\bq}_k|_K\\cdot\\bm{n}_\\sigma$ on $\\sigma$. The remaining degrees of freedom are fixed by \\eqref{eq:deflocflux2} \\cite[Proposition 3.3]{BrF91}.\nThanks to \\eqref{eq:deflocflux1} we have $\\jump{\\hat{\\bt}_k}\\cdot\\bm{n}_\\sigma=0$ and $\\jump{\\hat{\\bq}_k}\\cdot\\bm{n}_\\sigma=0$ for $\\sigma\\in\\widehat{\\mathcal{F}}_{k,i}$ and hence $\\hat{\\bt}_k,\\hat{\\bq}_k\\in \\mathbf{RTN}_\\mathcalligra{r}(\\widehat{\\mathcal{M}}_k)$. By construction it follows that $\\bt_k,\\bq_k\\in \\mathbf{RTN}_\\mathcalligra{r}(\\mathcal{M}_k)$.\n
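\nFor instance, in the lowest-order case $\\mathcalligra{r}=0$ (the case used in the numerical experiments of \\cref{sec:num}) we have $\\mathbb{P}_{-1}(K)^d=\\{\\bm{0}\\}$, so \\eqref{eq:deflocflux2} is void and the fluxes are determined by their face moments alone: taking $p_k=1$ in \\eqref{eq:deflocflux1} and using that the normal components of $\\mathbf{RTN}_0$ functions are constant on each face, we obtain, for every $K\\in\\widehat{\\mathcal{M}}_k$ and all $\\sigma\\in\\mathcal{F}_K$,\n\\begin{equation}\n\\hat{\\bt}_k|_K\\cdot\\bm{n}_\\sigma=\\frac{1}{|\\sigma|}\\int_\\sigma \\left(-\\mean{A\\nabla \\hat{u}_k}_\\omega\\cdot\\bm{n}_\\sigma+\\eta_\\sigma\\frac{\\gamma_\\sigma}{h_\\sigma}\\jump{\\hat{u}_k}_{g_k}\\right)\\dif\\bm{y},\\qquad\n\\hat{\\bq}_k|_K\\cdot\\bm{n}_\\sigma=\\frac{1}{|\\sigma|}\\int_\\sigma \\left(\\bm{\\beta}\\cdot\\bm{n}_\\sigma\\mean{\\hat{u}_k}_{g_k}+\\nu_\\sigma\\jump{\\hat{u}_k}_{g_k}\\right)\\dif\\bm{y}.\n\\end{equation}\n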
Let $K\\in\\mathcal{M}_k$ and let $\\pi_\\mathcalligra{r}$ be the $L^2$-orthogonal projector onto $\\mathbb{P}_\\mathcalligra{r}(K)$. The following lemma states a local conservation property of the reconstructed fluxes; the proof follows the lines of \\cite[Lemma 2.1]{ESV10}.\n\\begin{lemma}\\label{lemma:cons}\n\tLet $u_k\\in V(\\mathfrak{T}_k)$ be given by \\cref{alg:local} and $\\bt_k,\\bq_k\\in H_{\\divop}(\\mathcal{G}_k)$ defined by \\cref{eq:defflux,eq:deflocflux}. For all $K\\in\\mathcal{M}_k$ it holds\n\t\\begin{equation}\\label{eq:cons}\n\t(\\nabla\\cdot \\bt_k+\\nabla\\cdot\\bq_k+\\pi_\\mathcalligra{r}((\\mu-\\nabla\\cdot\\bm{\\beta})u_k))|_K = \\pi_\\mathcalligra{r} f|_K.\n\t\\end{equation}\n\\end{lemma}\n\\begin{proof}\n\tLet $K\\in\\mathcal{M}_k$ and $j=\\max\\{i=1,\\ldots,k\\,:\\, K\\subset\\Omega_j\\}$; then $K\\in\\widehat{\\mathcal{M}}_j$, $\\bt_k|_K=\\hat{\\bt}_j|_K$, $\\bq_k|_K=\\hat{\\bq}_j|_K$ and $u_k|_K=\\hat u_j|_K$. Let $v_j\\in \\mathbb{P}_{\\mathcalligra{r}}(K)$, extended by $v_j=0$ outside of $K$; by Green's theorem we have\n\t\\begin{equation}\\label{eq:greentq}\n\t\\int_K (\\nabla\\cdot \\hat{\\bt}_j+\\nabla\\cdot\\hat{\\bq}_j)v_j\\dif \\bm{x} = -\\int_K (\\hat{\\bt}_j+\\hat{\\bq}_j)\\cdot\\nabla v_j\\dif \\bm{x} +\\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma v_j(\\hat{\\bt}_j+\\hat{\\bq}_j)\\cdot\\bm{n}_K\\dif \\bm{y}\n\t\\end{equation}\n\tand using $\\mathcal{B}(\\hat u_j,v_j,\\widehat{\\mathfrak{T}}_j,g_j)=(f,v_j)_j$ it follows\n\t\\begin{equation}\n\t\\begin{aligned}\n\t\\int_K f v_j \\dif \\bm{x} &= \\int_K (A\\nabla \\hat u_j\\cdot \\nabla v_j+(\\mu-\\nabla\\cdot \\bm{\\beta})\\hat u_j v_j-\\hat{u}_j\\bm{\\beta}\\cdot \\nabla v_j)\\dif \\bm{x}\\\\\n\t&\\quad -\\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma(\\jump{v_j}\\mean{A\\nabla \\hat{u}_j}_{\\omega}\\cdot \\bm{n}_\\sigma+\\jump{\\hat{u}_j}_{g_j}\\mean{A\\nabla v_j}_{\\omega}\\cdot \\bm{n}_\\sigma)\\dif \\bm{y}\\\\\n\t&\\quad +\\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma ((\\eta_\\sigma\\frac{\\gamma_\\sigma}{h_\\sigma}+\\nu_\\sigma)\\jump{\\hat{u}_j}_{g_j}\\jump{v_j}+\\bm{\\beta}\\cdot\\bm{n}_\\sigma\\mean{\\hat{u}_j}_{g_j}\\jump{v_j})\\dif \\bm{y}.\n\t\\end{aligned}\n\t\\end{equation}\n\tSince $\\mean{A\\nabla v_j}_\\omega =\\omega_{K,\\sigma}A|_K\\nabla v_j$ and $\\jump{v_j}\\bm{n}_\\sigma=v_j|_K\\bm{n}_K$, using \\cref{eq:deflocflux,eq:greentq}, we obtain\n\t\\begin{equation}\\label{eq:precons}\n\t\\int_K f v_j \\dif \\bm{x} = \\int_K (\\nabla\\cdot \\hat{\\bt}_j+\\nabla\\cdot\\hat{\\bq}_j+(\\mu-\\nabla\\cdot\\bm{\\beta})\\hat u_j)v_j\\dif \\bm{x}\n\t\\end{equation}\n\tand the result follows from $\\nabla\\cdot\\hat{\\bt}_j,\\nabla\\cdot\\hat{\\bq}_j\\in\\mathbb{P}_\\mathcalligra{r}(K)$, $\\bt_k|_K=\\hat{\\bm{t}}_j|_K$, $\\bq_k|_K=\\hat{\\bm{q}}_j|_K$ and $u_k|_K=\\hat{u}_j|_K$. \n\\end{proof}\n\nIn order to define the $H^1_0(\\Omega)$ conforming approximation $s_k$ of $u_k$ we will need the so-called Oswald operator, already considered in \\cite{KaP03} for a posteriori estimates. Let $\\mathfrak{T}=(D,\\mathcal{M},\\mathcal{F})$, let $g\\in C^0(\\partial D)$ and consider $\\mathcal{O}_{\\mathfrak{T},g}:V(\\mathfrak{T})\\rightarrow V(\\mathfrak{T})\\cap H^1(D)$; for a function $v\\in V(\\mathfrak{T})$, the value of $\\mathcal{O}_{\\mathfrak{T},g} v$ is prescribed at the Lagrange interpolation nodes $p$ of the conforming finite element space $V(\\mathfrak{T})\\cap H^1(D)$. Let $p\\in \\overline{D}$ be a Lagrange node; if $p\\notin \\partial D$ we set\n\\begin{equation}\n\\mathcal{O}_{\\mathfrak{T},g}v(p)=\\frac{1}{\\# \\mathcal M_p}\\sum_{K\\in\\mathcal{M}_p}v|_K(p),\n\\end{equation}\nwhere $\\mathcal M_{p}=\\{K\\in\\mathcal{M}\\,:\\,p\\in\\overline{K}\\}$. If instead $p\\in\\partial D$, then $\\mathcal{O}_{\\mathfrak{T},g}v(p)=g(p)$, where $g$ is the Dirichlet condition on $\\partial D$.\n
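\nThe following minimal C++ sketch (a hypothetical stand-alone helper, not taken from our \\texttt{libMesh} implementation) illustrates the nodal averaging rule defining $\\mathcal{O}_{\\mathfrak{T},g}$; for $\\mathbb P_1$ elements it is a plain average of the elemental values at the node.\n\\begin{verbatim}\n\/\/ Oswald value at a Lagrange node p: average of the elemental values\n\/\/ v|_K(p) over the elements K in M_p, or the Dirichlet datum g(p)\n\/\/ if p lies on the boundary of D.\n#include <vector>\n\ndouble oswald_value(const std::vector<double>& values_at_p, \/\/ v|_K(p), K in M_p\n                    bool p_on_boundary, double g_at_p) {\n  if (p_on_boundary) return g_at_p;       \/\/ O_{T,g} v(p) = g(p)\n  double sum = 0.0;\n  for (double v : values_at_p) sum += v;  \/\/ sum over the #M_p elements\n  return sum \/ static_cast<double>(values_at_p.size());\n}\n\\end{verbatim}\n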
The reconstructed potential $s_k\\in V(\\mathfrak{T}_k)\\cap H^1_0(\\Omega)$ is built as in \\eqref{eq:defpot}, where\n\\begin{equation}\\label{eq:defhsk}\n\\hat s_k = \\mathcal{O}_{\\widehat{\\mathfrak{T}}_k,s_{k-1}} \\hat{u}_k.\n\\end{equation}\n\n\\subsection{Definition of the constants and preliminary results}\\label{sec:ctedef}\nHere we define the constants appearing in \\cref{eq:etaNC,eq:etaR,eq:etaDF,eq:etaC1,eq:etaC2,eq:etaU,eq:etaG1,eq:etaG2,eq:etatC1,eq:etatU} and derive preliminary results needed to prove \\cref{thm:energynormbound,thm:augmentednormbound}.\n\n\nLet $K\\in\\mathcal{M}_k$ and $\\sigma\\in\\mathcal{F}_K$; we recall that $|K|$ is the measure of $K$ and $|\\sigma|$ the $d-1$ dimensional measure of $\\sigma$. We denote by $c_{A,K}$ the minimal eigenvalue of $A|_K$. Next, we denote by $c_{\\bm{\\beta},\\mu,K}$ the essential minimum of $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}\\geq 0$ on $K$. \nIn what follows we will assume that $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}>0$ a.e. in $\\Omega$, hence $c_{\\bm{\\beta},\\mu,K}>0$ for all $K\\in\\mathcal{M}_k$, and provide error estimators under this assumption. We explain in \\cref{sec:altbounds} how to overcome this limitation by slightly modifying the proofs and error estimators.\n\nThe cutoff functions $m_K,\\tilde m_K$ and $m_\\sigma$ are defined by \n\\begin{subequations} \\label{eq:cutoff}\n\t\\begin{align} \\label{eq:mK}\n\tm_K =& \\min\\{ C_p^{1\/2}h_K c_{A,K}^{-1\/2},c_{\\bm{\\beta},\\mu,K}^{-1\/2}\\},\\\\ \\label{eq:tmK}\n\t\\tilde m_K=& \\min\\{ (C_p+C_p^{1\/2})h_Kc_{A,K}^{-1}, h_K^{-1}c_{\\bm{\\beta},\\mu,K}^{-1}+c_{\\bm{\\beta},\\mu,K}^{-1\/2}c_{A,K}^{-1\/2}\/2\\},\\\\ \\label{eq:ms}\n\tm_\\sigma^2=& \\min\\lbrace \\max_{K\\in\\mathcal{M}_\\sigma}\\{3d|\\sigma|h_K^2|K|^{-1}c_{A,K}^{-1}\\},\\max_{K\\in\\mathcal{M}_\\sigma}\\{|\\sigma||K|^{-1}c_{\\bm{\\beta},\\mu,K}^{-1}\\}\\rbrace,\n\t\\end{align}\n\\end{subequations}\nwhere $C_p=1\/\\pi^2$ is an optimal Poincaré constant for convex domains \\cite{PaW60}. Let $v\\in H^1(\\mathcal{M}_k)$; it holds\n\\begin{subequations}\\label{eq:bounds}\n\t\\begin{align} \\label{eq:bounds1}\n\t\\nLdK{v-\\pi_0 v}&\\leq m_K \\nB{v}_K & \\text{for all }& K\\in\\mathcal{M}_k,\\\\ \\label{eq:bounds2}\n\t\\nLds{v-\\pi_0 v|_K}&\\leq C_{t,K,\\sigma}^{1\/2}\\tilde{m}_K^{1\/2}\\nB{v}_K & \\text{for all }& \\sigma\\in \\mathcal{F}_k \\text{ and } K\\in\\mathcal{M}_\\sigma,\\\\ \\label{eq:bounds3}\n\t\\nLds{\\jump{\\pi_0 v}}&\\leq m_\\sigma\\sum_{K\\in\\mathcal{M}_\\sigma}\\nB{v}_K & \\text{for all }& \\sigma\\in\\mathcal{F}_k,\n\t\\end{align}\n\\end{subequations}\nwhere $\\mathcal{M}_\\sigma = \\{K\\in\\mathcal{M}_k\\,:\\, \\sigma\\subset\\partial K\\}$ and $C_{t,K,\\sigma}$ is the constant of the trace inequality\n\\begin{equation}\\label{eq:trace}\n\\nLds{v|_K}^2\\leq C_{t,K,\\sigma}(h_K^{-1}\\nLdK{v}^2+\\nLdK{v}\\nLddK{\\nabla v}).\n\\end{equation}\nIt has been proved in \\cite[Lemma 3.12]{Ste07} that for a simplex it holds $C_{t,K,\\sigma}=|\\sigma|h_K\/|K|$. \n\nLet us briefly explain the role of the constants \\eqref{eq:cutoff} and how the bounds \\eqref{eq:bounds} are obtained. We observe that for each bound in \\eqref{eq:bounds} the cutoff functions take the minimum between two possible values, allowing for robust error estimation in singularly perturbed regimes.\n
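\nTo make this min structure concrete, the following self-contained C++ snippet (illustrative only; the values of $h_K$ and $c_{\\bm{\\beta},\\mu,K}$ are arbitrary sample inputs, not data from \\cref{sec:num}) evaluates $m_K$ and $\\tilde m_K$ from \\eqref{eq:cutoff} for a decreasing sequence of $c_{A,K}$; in particular, $m_K$ remains bounded by $c_{\\bm{\\beta},\\mu,K}^{-1\/2}$ as $c_{A,K}\\to 0$.\n\\begin{verbatim}\n\/\/ Cutoff constants of one element K, following (eq:cutoff); cA plays\n\/\/ the role of the diffusion magnitude. Columns printed: cA, m_K,\n\/\/ tilde m_K.\n#include <algorithm>\n#include <cmath>\n#include <initializer_list>\n#include <iostream>\n\nint main() {\n  const double pi  = std::acos(-1.0);\n  const double Cp  = 1.0 \/ (pi * pi);  \/\/ optimal Poincare constant\n  const double hK  = 1.0 \/ 64.0;       \/\/ sample element diameter\n  const double cbm = 1.0;              \/\/ sample c_{beta,mu,K}\n  for (double cA : {1.0, 1e-4, 1e-8}) {\n    double mK  = std::min(std::sqrt(Cp) * hK \/ std::sqrt(cA),\n                          1.0 \/ std::sqrt(cbm));\n    double tmK = std::min((Cp + std::sqrt(Cp)) * hK \/ cA,\n                          1.0 \/ (hK * cbm) + 0.5 \/ std::sqrt(cbm * cA));\n    std::cout << cA << ' ' << mK << ' ' << tmK << std::endl;\n  }\n  return 0;\n}\n\\end{verbatim}\n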
For \\eqref{eq:bounds1}, using the Poincaré inequality \\cite[equation 3.2]{PaW60} we have\n\\begin{subequations}\n\t\\begin{equation}\\label{eq:bounds1a}\n\t\\begin{aligned}\n\t\\nLdK{v-\\pi_0 v}&\\leq C_p^{1\/2} h_K \\nLddK{\\nabla v}\\\\\n\t& \\leq C_p^{1\/2}h_Kc_{A,K}^{-1\/2}\\nLddK{A^{1\/2}\\nabla v}\\leq C_p^{1\/2}h_Kc_{A,K}^{-1\/2}\\nB{v}_K.\n\t\\end{aligned}\n\t\\end{equation}\n\tDenoting by $(\\cdot,\\cdot)_K$ the $L^2(K)$ inner product, it holds\n\t\\begin{equation}\n\t\\nLdK{v-\\pi_0 v}^2=(v-\\pi_0 v,v-\\pi_0 v)_K=(v-\\pi_0 v,v)_K\\leq \\nLdK{v-\\pi_0 v}\\nLdK{v},\n\t\\end{equation}\n\thence\n\t\\begin{equation}\\label{eq:bounds1b}\n\t\\nLdK{v-\\pi_0 v}\\leq \\nLdK{v} \\leq c_{\\bm{\\beta},\\mu,K}^{-1\/2}\\nLdK{(\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta})^{1\/2}v}\\leq c_{\\bm{\\beta},\\mu,K}^{-1\/2}\\nB{v}_K\n\t\\end{equation}\n\\end{subequations}\nand \\eqref{eq:bounds1} follows. The choice between bounds \\cref{eq:bounds1a,eq:bounds1b} depends on whether the problem is singularly perturbed or not. Bounds \\eqref{eq:bounds2} and \\eqref{eq:bounds3} are obtained similarly, see \\cite[Lemma 4.2]{CFP09} and \\cite[Lemma 4.5]{Voh08}. Finally, for $K\\in\\mathcal{M}_k$ and $\\sigma\\in \\mathcal{F}_K$ we define\n\\begin{equation}\\label{eq:Dk}\nD_{t,K,\\sigma}=\\left(\\frac{C_{t,K,\\sigma}}{2 h_K c_{\\bm{\\beta},\\mu,K}}\\left(1+\\sqrt{1+h_K^2\\frac{c_{\\bm{\\beta},\\mu,K}}{c_{A,K}}}\\right)\\right)^{1\/2},\n\\end{equation}\nwhich is used to bound $\\nLds{v|_K}$ in terms of $\\nB{v}_K$ in the next lemma.\n\\begin{lemma}\\label{lemma:boundsigma}\n\tLet $v_k\\in H^1(\\mathcal{M}_k)$. For each $K\\in\\mathcal{M}_k$ and $\\sigma\\in \\mathcal{F}_K$ it holds\n\t\\begin{equation}\n\t\\nLds{v_k|_K}\\leq D_{t,K,\\sigma} \\nB{v_k}_K.\n\t\\end{equation}\n\\end{lemma}\n\\begin{proof}\n\tLet $v_k\\in H^1(\\mathcal{M}_k)$ and $\\epsilon>0$. Applying Young's inequality to the trace inequality \\cref{eq:trace} we get\n\t\\begin{equation}\n\t\\nLds{v_k|_K}^2 \\leq C_{t,K,\\sigma}((h_K^{-1}+\\frac{1}{2\\epsilon})\\nLdK{v_k}^2+\\frac{\\epsilon}{2}\\nLddK{\\nabla v_k}^2).\n\t\\end{equation}\n\tHence, if there exists $D_{t,K,\\sigma}>0$ independent of $v_k$ such that\n\t\\begin{equation}\\label{eq:Dkeps}\n\t\\begin{aligned}\n\tC_{t,K,\\sigma}((h_K^{-1}+\\frac{1}{2\\epsilon})\\nLdK{v_k}^2+&\\frac{\\epsilon}{2}\\nLddK{\\nabla v_k}^2)\\\\\n\t& \\leq D_{t,K,\\sigma}^2 (c_{A,K}\\nLddK{\\nabla v_k}^2+c_{\\bm{\\beta},\\mu,K}\\nLdK{v_k}^2) \n\t\\end{aligned}\n\t\\end{equation}\n\tthen $\\nLds{v_k|_K}^2\\leq D_{t,K,\\sigma}^2 \\nB{v_k}^2_K$ and the result holds. Relation \\eqref{eq:Dkeps} holds if\n\t\\begin{equation}\n\tC_{t,K,\\sigma}(h_K^{-1}+\\frac{1}{2\\epsilon})\\leq D_{t,K,\\sigma}^2c_{\\bm{\\beta},\\mu,K}, \\qquad\\qquad C_{t,K,\\sigma}\\frac{\\epsilon}{2} \\leq D_{t,K,\\sigma}^2c_{A,K}\n\t\\end{equation}\n\tand hence $D_{t,K,\\sigma}^2=\\max\\{C_{t,K,\\sigma}(h_K^{-1}+\\frac{1}{2\\epsilon})c_{\\bm{\\beta},\\mu,K}^{-1},C_{t,K,\\sigma}\\frac{\\epsilon}{2}c_{A,K}^{-1}\\}$.\n\tThe first argument of the maximum is decreasing in $\\epsilon$ while the second is increasing, so the maximum is minimized when the two arguments are equal, i.e. for $\\epsilon=\\frac{c_{A,K}}{c_{\\bm{\\beta},\\mu,K}h_K}\\left(1+\\sqrt{1+h_K^2\\frac{c_{\\bm{\\beta},\\mu,K}}{c_{A,K}}}\\right)$; with this choice of $\\epsilon$ we get $D_{t,K,\\sigma}$ as in \\cref{eq:Dk}.\n\\end{proof}\nThe proof of the following lemma is inspired by \\cite[Theorem 3.1]{ESV10}; the main difference is that we take into account the weaker regularity of the reconstructed fluxes. 
\n\\begin{lemma}\\label{lemma:boundBBA}\n\tLet $u\\in H^1_0(\\Omega)$ be the solution to \\eqref{eq:weak}, $u_k\\in V(\\mathfrak{T}_k)$ given by \\cref{alg:local}, $s_k\\in H^1_0(\\Omega)$ from \\cref{eq:defpot,eq:defhsk}, $\\bt_k,\\bq_k\\in H_{\\divop}(\\mathcal{G}_k)$ defined by \\cref{eq:defflux,eq:deflocflux} and $v\\in H^1_0(\\Omega)$. Then\n\t\\begin{equation}\n\t|\\mathcal{B}(u -u_k ,v)+\\mathcal{B}_A(u_k-s_k,v)| \\leq \\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{1,K}^2\\right)^{1\/2}\\nB{v},\n\t\\end{equation}\n\twith $\\eta_{1,K}=\\eta_{R,K}+\\eta_{DF,K}+\\eta_{C,1,K}+\\eta_{C,2,K}+\\eta_{U,K}+\\eta_{\\Gamma,1,K}+\\eta_{\\Gamma,2,K}$.\n\\end{lemma}\n\\begin{proof}\n\tSince $u$ satisfies \\eqref{eq:weak}, using the definition of $\\mathcal{B}$ and $\\mathcal{B}_A$\n\t\\begin{align}\n\t\\mathcal{B}(u-u_k,v)+\\mathcal{B}_A(u_k-s_k,v) \n\t&= \\int_\\Omega (f-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k)v\\dif \\bm{x} -\\int_\\Omega A\\nabla u_k\\cdot \\nabla v\\dif \\bm{x}\\\\\n\t&\\quad -\\int_\\Omega \\frac{1}{2}(\\nabla\\cdot\\bm{\\beta})(u_k-s_k)v\\dif \\bm{x} -\\int_\\Omega \\nabla\\cdot(\\bm{\\beta} s_k)v\\dif \\bm{x}.\n\t\\end{align}\n\tUsing $v \\bt_k\\in H_{\\divop}(\\mathcal{G}_k)$, from the divergence theorem we have\n\t\\begin{align}\n\t\\int_\\Omega (v\\nabla\\cdot \\bt_k +\\nabla v\\cdot\\bt_k)\\dif \\bm{x} &= \\sum_{G\\in\\mathcal{G}_k}\\int_{G}\\nabla\\cdot(v\\bt_k)\\dif \\bm{x} =\\sum_{G\\in\\mathcal{G}_k}\\int_{\\partial G} v\\bt_k\\cdot\\bm{n}_{\\partial G}\\dif \\bm{y} \\\\\n\t&=\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{v \\bt_k}\\cdot \\bm{n}_\\gamma\\dif \\bm{y} = \\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot \\bm{n}_\\gamma v\\dif \\bm{y}\n\t\\end{align}\n\tand hence\n\t\\begin{equation}\\label{eq:integrBBA}\n\t\\begin{aligned}\n\t\\mathcal{B}(u-u_k,v)+\\mathcal{B}_A(u_k- s_k ,v)&=\\int_\\Omega (f-\\nabla\\cdot\\bt_k-\\nabla\\cdot\\bq_k-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k)v\\dif \\bm{x} \\\\\n\t&\\quad -\\int_\\Omega \\frac{1}{2}(\\nabla\\cdot\\bm{\\beta})(u_k-s_k)v\\dif \\bm{x} +\\int_\\Omega \\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)v\\dif \\bm{x}\\\\\n\t&\\quad -\\int_\\Omega (A\\nabla u_k+\\bt_k)\\cdot \\nabla v\\dif \\bm{x} +\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k }\\cdot\\bm{n}_\\gamma v\\dif \\bm{y}.\n\t\\end{aligned}\n\t\\end{equation}\n\tFrom \\cref{lemma:cons} we deduce\n\t\\begin{subequations}\\label{eq:boundsBBAterms}\n\t\t\\begin{equation}\\label{eq:boundsBBAterm0}\n\t\t\\begin{aligned}\n\t\t&\\left|\\int_\\Omega (f-\\nabla\\cdot\\bt_k-\\nabla\\cdot\\bq_k-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k)v\\dif \\bm{x}\\right| \\\\\n\t\t&\\qquad\\qquad\\qquad\\qquad = \\left|\\int_\\Omega (f-\\nabla\\cdot\\bt_k-\\nabla\\cdot\\bq_k-(\\mu-\\nabla\\cdot\\bm{\\beta})u_k)(v-\\pi_0 v)\\dif \\bm{x}\\right| \\\\\n\t\t&\\qquad\\qquad\\qquad\\qquad \\leq \\sum_{K\\in\\mathcal{M}_k} \\eta_{R,K}\\nB{v}_K.\n\t\t\\end{aligned}\n\t\t\\end{equation}\n\t\tSimilarly, we get\n\t\t\\begin{equation}\\label{eq:boundsBBAterms1}\n\t\t\\begin{aligned}\n\t\t\\left| \\int_\\Omega (A\\nabla u_k+\\bt_k)\\cdot \\nabla v\\dif \\bm{x}\\right|&\\leq \\sum_{K\\in\\mathcal{M}_k}\\eta_{DF,K}\\nB{v}_K,\\\\\n\t\t\\left| \\int_\\Omega \\frac{1}{2}(\\nabla\\cdot\\bm{\\beta})(u_k-s_k)v\\dif \\bm{x}\\right|&\\leq \\sum_{K\\in\\mathcal{M}_k} \\eta_{C,2,K}\\nB{v}_K.\n\t\t\\end{aligned}\n\t\t\\end{equation}\n\t\tSince $\\jump{\\bt_k}_\\sigma=0$ for $\\sigma\\in \\mathcal{F}_{k,i}\\setminus\\cup_{\\gamma\\in\\Gamma_k}\\mathcal{F}_\\gamma$, it 
holds\n\t\t\\begin{equation}\n\t\t\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} = \\sum_{\\sigma\\in\\mathcal{F}_{k,i}}\\int_\\sigma\\jump{\\bt_k}\\cdot\\bm{n}_\\sigma v\\dif \\bm{y} = \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\int_\\sigma \\jump{\\bt_k}\\cdot\\bm{n}_\\sigma v\\dif \\bm{y}.\n\t\t\\end{equation}\n\t\tUsing \\cref{lemma:boundsigma} we obtain\n\t\t\\begin{equation}\\label{eq:boundsBBAterms2}\n\t\t\\begin{aligned}\n\t\t\\left|\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} \\right| &\\leq \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\nLds{\\jump{\\bt_k}\\cdot\\bm{n}_\\sigma}\\nLds{v} \\\\\n\t\t&\\leq \\sum_{K\\in\\mathcal{M}_k}\\eta_{\\Gamma,2,K}\\nB{v}_K.\n\t\t\\end{aligned}\n\t\t\\end{equation}\n\t\tIt remains to estimate $\\int_\\Omega \\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)v\\dif \\bm{x}$. For that, we use\n\t\t\\begin{align}\n\t\t\\int_\\Omega \\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)v\\dif \\bm{x} \n\t\n\t\n\t\t=& \\sum_{K\\in\\mathcal{M}_k}\\int_K (\\mathcal{I}-\\pi_0)\\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)(v-\\pi_0 v)\\dif \\bm{x} \\\\\n\t\t&+\\sum_{K\\in\\mathcal{M}_k} \\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma (\\bq_k-\\bm{\\beta} s_k)\\cdot \\bm{n}_K \\pi_0 v\\dif \\bm{y}\n\t\t\\end{align}\n\t\tand from \\cref{eq:bounds1} we get\n\t\t\\begin{align}\\label{eq:boundsBBAterms3}\n\t\t\\left|\\sum_{K\\in\\mathcal{M}_k}\\int_K (\\mathcal{I}-\\pi_0)\\nabla\\cdot(\\bq_k-\\bm{\\beta} s_k)(v-\\pi_0 v)\\dif \\bm{x} \\right|\\leq \\sum_{K\\in\\mathcal{M}_k}\\eta_{C,1,K}\\nB{v}_K.\n\t\t\\end{align}\n\t\tFor the second term we write\n\t\t\\begin{align}\n\t\t&\\sum_{K\\in\\mathcal{M}_k} \\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma (\\bq_k-\\bm{\\beta} s_k)\\cdot \\bm{n}_K \\pi_0 v\\dif \\bm{y}= \\sum_{\\sigma\\in\\mathcal{F}_k}\\int_\\sigma \\jump{\\pi_{0,\\sigma}(\\bq_k-\\bm{\\beta} s_k)\\pi_0 v}\\cdot \\bm{n}_\\sigma\\dif \\bm{y}\\\\\n\t\t&=\\sum_{\\sigma\\in\\mathcal{F}_{k,i}}\\int_\\sigma \\mean{\\pi_0 v}\\jump{\\pi_{0,\\sigma}(\\bq_k-\\bm{\\beta} s_k)}\\cdot \\bm{n}_\\sigma+\\jump{\\pi_0 v}\\mean{\\pi_{0,\\sigma}(\\bq_k-\\bm{\\beta} s_k)}\\cdot \\bm{n}_\\sigma\\dif \\bm{y}\\\\\n\t\t&\\quad +\\sum_{\\sigma\\in \\mathcal{F}_{k,b}}\\int_\\sigma\\pi_0 v\\, \\pi_{0,\\sigma}(\\bq_k-\\bm{\\beta} s_k)\\cdot\\bm{n}_\\sigma \\dif \\bm{y} = \\operatorname{I}+\\operatorname{II}+\\operatorname{III}\n\t\t\\end{align}\n\t\tand we easily obtain, since $\\jump{\\bm{\\beta} s_k}=0$,\n\t\t\\begin{equation}\n\t\t\\operatorname{I} = \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\int_\\sigma \\pi_0 v|_K \\jump{\\pi_{0,\\sigma}\\bq_k}\\cdot\\bm{n}_\\sigma\\dif \\bm{y}.\n\t\t\\end{equation}\n\t\tUsing $|\\pi_0 v|_K| = |K|^{-1\/2}\\nLdK{\\pi_0 v}\\leq |K|^{-1\/2}\\nLdK{v}\\leq (|K|c_{\\bm{\\beta},\\mu,K})^{-1\/2}\\nB{v}_K$ we get\n\t\t\\begin{equation}\\label{eq:boundsBBAterms4}\n\t\t\\operatorname{I} \\leq \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}(|K|c_{\\bm{\\beta},\\mu,K})^{-1\/2}\\nLus{\\jump{\\pi_{0,\\sigma}\\bq_k}\\cdot\\bm{n}_\\sigma}\\nB{v}_K= \\sum_{K\\in\\mathcal{M}_k}\\eta_{\\Gamma,1,K}\\nB{v}_K.\n\t\t\\end{equation}\n\t\tLet $\\mathcal{M}_\\sigma=\\{K\\in\\mathcal{M}_k\\,:\\, \\sigma\\subset\\partial K\\}$, using \\eqref{eq:bounds3} for the second term we 
have\n\t\t\\begin{align}\n\t\t\\operatorname{II} & \\leq \\sum_{\\sigma\\in\\mathcal{F}_{k,i}} m_\\sigma\\nLds{\\pi_{0,\\sigma}\\mean{\\bq_k-\\bm{\\beta} s_k}\\cdot\\bm{n}_\\sigma}\\sum_{K\\in \\mathcal{M}_\\sigma}\\nB{v}_{K}\\\\\n\t\t&= \\sum_{K\\in\\mathcal{M}_k} \\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}} m_\\sigma\\nLds{\\pi_{0,\\sigma}\\mean{\\bq_k-\\bm{\\beta} s_k}\\cdot\\bm{n}_\\sigma}\\nB{v}_K.\n\t\t\\end{align}\n\t\tFor the last term we similarly obtain\n\t\t\\begin{equation}\n\t\t\\operatorname{III} \\leq \\sum_{K\\in\\mathcal{M}_k} \\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,b}} m_\\sigma\\nLds{\\pi_{0,\\sigma}(\\bq_k-\\bm{\\beta} s_k)\\cdot \\bm{n}_\\sigma}\\nB{v}_K\n\t\t\\end{equation}\n\t\tand hence\n\t\t\\begin{equation}\\label{eq:boundsBBAterms5}\n\t\t\\operatorname{II}+\\operatorname{III} \\leq \\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K}\\chi_\\sigma m_\\sigma\\nLds{\\pi_{0,\\sigma}\\mean{\\bq_k-\\bm{\\beta} s_k}\\cdot \\bm{n}_\\sigma} \\nB{v}_K= \\sum_{K\\in\\mathcal{M}_k}\\eta_{U,K}\\nB{v}_K,\n\t\t\\end{equation}\n\t\\end{subequations}\n\twhere $\\chi_\\sigma=2$ if $\\sigma\\in\\mathcal{F}_{k,b}$ and $\\chi_\\sigma=1$ if $\\sigma\\in\\mathcal{F}_{k,i}$. Plugging relations \\cref{eq:boundsBBAterm0,eq:boundsBBAterms1,eq:boundsBBAterms2,eq:boundsBBAterms3,eq:boundsBBAterms4,eq:boundsBBAterms5} into \\eqref{eq:integrBBA} we get the result.\n\\end{proof}\n\nIn \\cref{lemma:boundBBA} we use \\cref{lemma:cons} to deduce that\n\\begin{equation}\\label{eq:weakcons}\n\\int_K (\\nabla\\cdot \\bt_k+\\nabla\\cdot\\bq_k+(\\mu-\\nabla\\cdot\\bm{\\beta})u_k) \\dif \\bm{x} = \\int_K f \\dif \\bm{x}\n\\end{equation}\nand hence \\eqref{eq:boundsBBAterm0}. However, when the mesh has hanging nodes inside the local domains, \\cref{lemma:cons} is not valid. Indeed, if $\\widehat{\\mathcal{M}}_k$ has hanging nodes, the fluxes $\\hat{\\bt}_k,\\hat{\\bq}_k$ must be constructed on a matching (free of hanging nodes) submesh $\\overline{\\mathcal{M}}_k$ of $\\widehat{\\mathcal{M}}_k$, otherwise they may fail to be in $H_{\\divop}(\\Omega_k)$. The constructed fluxes will satisfy relation \\cref{eq:precons}, but since $\\nabla\\cdot\\hat{\\bt}_k,\\nabla\\cdot\\hat{\\bq}_k\\in \\mathbb{P}_\\mathcalligra{r}(K')$ for $K'\\in\\overline{\\mathcal{M}}_k$ and $\\overline{\\mathcal{M}}_k$ is finer than $\\widehat{\\mathcal{M}}_k$, we cannot conclude as we did in \\cref{lemma:cons}. Nonetheless, \\cref{eq:precons} still implies \\cref{eq:weakcons}, which is enough to prove \\cref{lemma:boundBBA}.\n\n\n\\subsection{Proof of the theorems}\\label{sec:proofs}\nHere we prove \\cref{thm:energynormbound,thm:augmentednormbound}. We will consider $\\mathcal{B}:H^1_0(\\Omega)\\times H^1_0(\\Omega)\\rightarrow\\mathbb{R}$ defined in \\eqref{eq:bform} for functions in $H^1(\\mathcal{M}_k)$.\n\\begin{proof}[Proof of \\cref{thm:energynormbound}]\n\tIt has been proved in \\cite[Lemma 3.1]{Ern08} that for any $u_k\\in V(\\mathfrak{T}_k)$ and $u,s\\in H^1_0(\\Omega)$ it holds\n\t\\begin{equation}\n\t\\nB{u-u_k}\\leq \\nB{u_k-s}+|\\mathcal{B}(u-u_k,v)+\\mathcal{B}_A(u_k-s,v)|,\n\t\\end{equation}\n\twith $v=(u-s)\/\\nB{u-s}$. 
Choosing $u$ as the exact solution to \\cref{eq:weak}, $u_k$ given by \\cref{alg:local}, $s=s_k$ from \\cref{eq:defpot} and using \\cref{lemma:boundBBA} gives the result.\n\\end{proof}\n\n\\begin{proof}[Proof of \\cref{thm:augmentednormbound}]\n\tSince $u\\in H^1_0(\\Omega)$ it holds $\\mathcal{B}_J(u,w)=0$ for all $w\\in H^1_0(\\Omega)$; using $\\mathcal{B}_A\\leq\\mathcal{B}+|\\mathcal{B}_S|$ we get\n\t\\begin{equation}\n\t\\nBp{u-u_k}\\leq 2\\nB{u-u_k}+\\sup_{\\substack{w\\in H^1_0(\\Omega)\\\\ \\nB{w}=1}}(\\mathcal{B}(u-u_k,w)-\\mathcal{B}_J(u_k,w)).\n\t\\end{equation}\n\tTo conclude the proof we show that\n\t\\begin{equation}\\label{eq:supBBD}\n\t\\sup_{\\substack{w\\in H^1_0(\\Omega)\\\\ \\nB{w}=1}}(\\mathcal{B}(u-u_k,w)-\\mathcal{B}_J(u_k,w))\\leq \\left(\\sum_{K\\in\\mathcal{M}_k}\\eta_{2,K}^2\\right)^{1\/2}.\n\t\\end{equation}\n\tFollowing \\cref{lemma:boundBBA}, we easily get\n\t\\begin{multline}\n\t\\mathcal{B}(u-u_k,w)-\\mathcal{B}_J(u_k,w) \\leq \\sum_{K\\in\\mathcal{M}_k}(\\eta_{R,K}+\\eta_{DF,K}+\\tilde\\eta_{C,1,K}+\\eta_{\\Gamma,2,K})\\nB{w}_K\\\\\n\t+\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K}\\int_\\sigma\\pi_0 w (\\bq_k-\\bm{\\beta} u_k)\\cdot\\bm{n}_K\\dif \\bm{y}-\\mathcal{B}_J(u_k,w).\n\t\\end{multline}\n\tThe last two terms satisfy\n\t\\begin{align}\n\t&\\sum_{\\sigma\\in\\mathcal{F}_k}\\int_\\sigma\\jump{\\pi_0 w(\\bq_k-\\bm{\\beta} u_k)}\\cdot \\bm{n}_\\sigma\\dif \\bm{y}-\\mathcal{B}_J(u_k,w) \\\\\n\t&= \\sum_{\\sigma\\in\\mathcal{F}_k}\\chi_\\sigma\\int_\\sigma \\jump{\\pi_0 w}\\pi_{0,\\sigma}\\mean{\\bq_k-\\bm{\\beta} u_k}\\cdot\\bm{n}_\\sigma\\dif \\bm{y} +\\sum_{\\sigma\\in\\mathcal{F}_{k,i}}\\int_\\sigma \\mean{\\pi_0 w}\\jump{\\pi_{0,\\sigma}\\bq_k}\\cdot\\bm{n}_\\sigma\\dif \\bm{y} \\\\\n\t&\\leq\\sum_{K\\in\\mathcal{M}_k}(\\tilde\\eta_{U,K}+\\eta_{\\Gamma,1,K})\\nB{w}_K,\n\t\\end{align}\n\twhere in the last step we followed again \\cref{lemma:boundBBA}.\n\\end{proof}\n\n\\subsection{Alternative error bounds}\\label{sec:altbounds}\nOur aim here is to explain how to avoid the assumption $c_{\\bm{\\beta},\\mu,K}>0$ for all $K\\in\\mathcal{M}_k$ made in \\cref{sec:errest,sec:ctedef}. This assumption is needed to define $\\eta_{\\Gamma,1,K}$ and $\\eta_{\\Gamma,2,K}$, but it can be avoided if \\cref{eq:boundsBBAterms2,eq:boundsBBAterms4} are estimated differently. 
For \\cref{eq:boundsBBAterms2}, using the trace inequality \\cref{eq:trace} we get\n\\begin{equation}\n\\begin{aligned}\n\\left|\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} \\right| &\\leq \\frac{1}{2}\\sum_{K\\in\\mathcal{M}_k}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\nLds{\\jump{\\bt_k}\\cdot\\bm{n}_\\sigma}\\nLds{v|_K} \\\\\n&\\leq \\sum_{K\\in\\mathcal{M}_k}\\tilde\\eta_{\\Gamma,2,K}(\\nLdK{v}^2+h_K\\nLdK{v}\\nLddK{\\nabla v})^{1\/2},\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n\\tilde \\eta_{\\Gamma,2,K} = \\frac{1}{2}\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}h_K^{-1\/2}C_{t,K,\\sigma}^{1\/2}\\nLds{\\jump{\\bt_k}\\cdot\\bm{n}_\\sigma}.\n\\end{equation}\nSetting $\\tilde\\eta_{\\Gamma,2}^2=\\sum_{K\\in\\mathcal{M}_k}\\tilde \\eta_{\\Gamma,2,K}^2$, this yields\n\\begin{align}\n\\left|\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} \\right| &\\leq \\tilde \\eta_{\\Gamma,2}\\left(\\sum_{K\\in\\mathcal{M}_k} \\nLdK{v}^2+h_K\\nLdK{v}\\nLddK{\\nabla v}\\right)^{1\/2}\\\\\n&\\leq \\tilde \\eta_{\\Gamma,2} \\left(\\nLd{v}^2+h_{\\mathcal{M}_k}\\nLd{v}\\nLdd{\\nabla v}\\right)^{1\/2}.\n\\end{align}\nUsing the Poincaré inequality $\\nLd{v}\\leq d_\\Omega\\nLdd{\\nabla v}$, where $d_\\Omega$ is the diameter of $\\Omega$, we get\n\\begin{equation}\n\\left|\\sum_{\\gamma\\in\\Gamma_k}\\int_\\gamma \\jump{\\bt_k}\\cdot\\bm{n}_\\gamma v\\dif \\bm{y} \\right| \\leq \\tilde \\eta_{\\Gamma,2} \\left(d_\\Omega^2+h_{\\mathcal{M}_k}d_\\Omega\\right)^{1\/2}\\nLdd{\\nabla v}\\leq \\tilde \\eta_{\\Gamma,2} c_A^{-1\/2} \\left(d_\\Omega^2+h_{\\mathcal{M}_k}d_\\Omega\\right)^{1\/2}\\nB{v},\n\\end{equation}\nwhere $c_A$ is the minimal eigenvalue of $A(\\bm{x})$ over $\\Omega$. The same procedure can be used to replace \\cref{eq:boundsBBAterms4} by a relation avoiding the term $c_{\\bm{\\beta},\\mu,K}^{-1\/2}$. The new bounds can be used to modify the results of \\cref{thm:energynormbound,thm:augmentednormbound} and obtain error estimators when $\\mu-\\frac{1}{2}\\nabla\\cdot\\bm{\\beta}>0$ is not satisfied.\n\n\n\n\\section{Numerical Experiments}\\label{sec:num}\nIn order to study the properties and illustrate the performance of the local scheme we consider here several numerical examples.\nFirst, in \\cref{exp:conv}, we look at the convergence rates of the error estimators, focusing on the errors introduced by solving only local problems. Considering a local and a nonlocal problem, we also compare the size of the new error estimators $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$ against the classical terms. We emphasize that we do not use the automatic subdomains' identification algorithm for this example, as the subdomains are fixed beforehand.\nWe also perform in \\cref{exp:corner} an experiment for a smooth problem, where the errors are not localized, illustrating the role of $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$. To do so, we also compare the local scheme against a classical adaptive method, where after each mesh refinement the problem is solved again on the whole domain. The classical method we refer to is given by \\cref{alg:classical}.\nSecond, we investigate the efficiency of the new local algorithm for nonsmooth problems in \\cref{exp:bndlayer_sym,exp:bndlayer_notsym}. For such examples, which are the target of our method, the local scheme performs better than the classical one. 
We conclude in \\cref{exp:nonlin} with a nonlinear problem, where \\cref{thm:energynormbound,thm:augmentednormbound} do not apply but \\cref{alg:local} can nevertheless be employed in conjunction with a Newton scheme.\n\n\\begin{algorithm}[!tbhp]\n\t\\caption{ClassicalScheme($\\mathfrak{T}_1$)}\n\t\\label{alg:classical}\n\t\\begin{algorithmic}\n\t\t\\State Find $\\overline{u}_1\\in V(\\mathfrak{T}_1)$ solution to $\\mathcal{B}(\\overline{u}_1,v_1,\\mathfrak{T}_1,0)=(f,v_1)_1$ for all $v_1\\in V(\\mathfrak{T}_1)$.\n\t\t\\For{$k=2,\\ldots,M$}\n\t\t\\State $(\\mathfrak{T}_k,\\widehat{\\mathfrak{T}}_{k}) = \\text{LocalDomain}(\\overline{u}_{k-1},\\mathfrak{T}_{k-1})$.\n\t\t\\State Find $\\overline{u}_k\\in V(\\mathfrak{T}_k)$ solution to $\\mathcal{B}(\\overline{u}_k,v_k,\\mathfrak{T}_k,0)=(f,v_k)_1$ for all $v_k\\in V(\\mathfrak{T}_k)$.\n\t\t\\EndFor\n\t\\end{algorithmic}\n\\end{algorithm}\n\nIn all the experiments we use $\\mathbb P_1$ elements ($\\ell=1$ in \\eqref{eq:defVT}) on a simplicial mesh with penalization parameter $\\eta_\\sigma=10$; the diffusive and convective fluxes $\\bt_k,\\bq_k$ are computed with $\\mathcalligra{r}=0$ (see \\eqref{eq:RTN}). Furthermore, $\\bm{\\beta}$ is always such that $\\nabla\\cdot\\bm{\\beta}=0$. These choices give $\\eta_{C,1,K}=\\eta_{C,2,K}=\\tilde\\eta_{C,1,K}=0$. For an estimator $\\eta_{*,K}$ we define $\\eta_{*}^2=\\sum_{K\\in\\mathcal{M}_k}\\eta_{*,K}^2$.\nSimilarly to \\cite{ESV10}, if $A=\\varepsilon I_2$ and $\\bm{\\beta}$ is constant then for $v_k\\in H^1(\\mathcal{M}_k)$ the augmented norm is well estimated by \n\\begin{align}\n\\nBp{v_k}\\leq \\nB{v_k}_{\\oplus'}&= \\nB{v_k}+\\varepsilon^{-1\/2}\\Vert\\bm{\\beta}\\Vert_2\\nLd{v_k}\\\\\n&\\quad +\\frac{1}{2}\\left(\\sum_{K\\in\\mathcal{M}_k}\\left(\\sum_{\\sigma\\in\\mathcal{F}_K\\cap\\mathcal{F}_{k,i}}\\tilde m_K^{1\/2} C_{t,K,\\sigma}^{1\/2}\\nLds{\\jump{v_k}\\bm{\\beta}\\cdot\\bm{n}_\\sigma}\\right)^2\\right)^{1\/2}.\n\\end{align}\nHence, in the numerical experiments we consider the computable norm $\\nB{\\cdot}_{\\oplus'}$. The effectivity indexes of the error estimators $\\eta$ and $\\tilde \\eta$ from \\cref{thm:energynormbound,thm:augmentednormbound} are defined as\n\\begin{equation}\\label{eq:effind}\n\\frac{\\eta}{\\nB{u-u_k}} \\qquad\\text{and}\\qquad \\frac{\\tilde\\eta}{\\nB{u-u_k}_{\\oplus'}},\n\\end{equation}\nrespectively. For the solution $\\overline u_k$ of the classical algorithm we use the error estimators $\\eta$ and $\\tilde \\eta$ from \\cite{ESV10}. They are equivalent to the estimators presented in this paper except that for $\\overline u_k$ we have $\\eta_{\\Gamma,1,K}=\\eta_{\\Gamma,2,K}=0$, as in this case the reconstructed fluxes are in $H_{\\divop}(\\Omega)$. The effectivity indexes for $\\overline u_k$ are as in \\eqref{eq:effind} but with $u_k$ replaced by $\\overline u_k$. The numerical experiments have been performed with the help of the C++ library \\texttt{libMesh} \\cite{KPS06}.\n\n\n\n\n\\subsection{Problem shifting from localized to nonlocalized errors}\\label{exp:conv}\nWe investigate an example in two different locality regimes. First, the errors are confined in a small region and then they are distributed in the whole domain. We will study the effects of this transition on the size of the new error estimators $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$.\n\nWe solve \\eqref{eq:elliptic} in $\\Omega=[0,1]\\times [0,1]$ with $A=I_2$, $\\bm{\\beta}=-(1,1)^\\top$ and $\\mu=1$. 
The force term $f$ is chosen so that the exact solution reads\n\\begin{equation}\\label{eq:solsmooth}\n\tu(\\bm{x})=e^{-\\kappa ||\\bm{x}||_2}\\left( x_1-\\frac{1-e^{-\\kappa x_1}}{1-e^{-\\kappa}}\\right)\\left(x_2-\\frac{1-e^{-\\kappa x_2}}{1-e^{-\\kappa}} \\right),\n\\end{equation}\nwith $\\kappa=100$ or $\\kappa=10$. When $\\kappa=100$ the solution has a narrow peak and the errors are localized around that region, whereas when $\\kappa=10$ the solution is smoother and the errors are distributed in the whole domain. See \\cref{fig:sol_conv_100,fig:sol_conv_10}.\n\nFirst, we investigate the convergence rate of the error estimators and then we comment on the size of the new error estimators $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ when the errors are localized or not, i.e. when $\\kappa=100$ or $\\kappa=10$.\nWe define two domains $\\Omega_1,\\Omega_2$ as follows: $\\Omega_1=\\Omega$ and $\\bm{x}\\in\\Omega_2$ if $\\Vert\\bm{x}\\Vert_\\infty\\leq 1\/2$, see \\cref{fig:domains_priori}. \n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.32\\textwidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=\\textwidth]{images\/corner\/sol_sing_1e-2.png}\n\t\t\t\\caption{$u(\\bm{x})$ for $\\kappa=100$.}\n\t\t\t\\label{fig:sol_conv_100}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.32\\textwidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=\\textwidth]{images\/corner\/sol_sing_1e-1.png}\n\t\t\t\\caption{$u(\\bm{x})$ for $\\kappa=10$.}\n\t\t\t\\label{fig:sol_conv_10}\n\t\t\\end{subfigure}\n\t\\begin{subfigure}[t]{0.32\\textwidth}\n\t\t\\centering\n\t\t\\begin{tikzpicture}\n\t\t\t\\node at (0,0) {\\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=\\textwidth]{images\/corner\/domains_dld_1_k_1-2.png}};\n\t\t\n\t\t\n\t\t\\end{tikzpicture}\n\t\t\\caption{Domains $\\Omega_2\\subset\\Omega_1$.}\n\t\t\\label{fig:domains_priori}\n\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{Solution $u(\\bm{x})$ in \\cref{eq:solsmooth} for two values of $\\kappa$ and local domains $\\Omega_1$, $\\Omega_2$.}\n\t\\label{fig:sol_dom_conv}\n\\end{figure}\nLet $h$ be the grid size of $\\widehat{\\mathcal M}_1$; then the grid size of $\\widehat{\\mathcal{M}}_2$ is $h\/2$. \nFor different choices of $h$ we run \\cref{alg:local} without calling LocalDomain, since the local domains and meshes are chosen beforehand. After the second iteration we compute the exact energy error and the error estimators. The results are reported in \\cref{tab:conv_a,tab:conv_b} for $\\kappa=100$ and $\\kappa=10$, respectively.\n
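\nFor concreteness, the following stand-alone C++ sketch outlines the structure of this study; every helper below is a hypothetical stub standing in for the actual \\texttt{libMesh}-based solver, flux reconstruction and estimator evaluation, and the range of levels is for illustration only.\n\\begin{verbatim}\n\/\/ Fixed-domain convergence study: run the local algorithm for k = 1, 2\n\/\/ on prescribed meshes (no LocalDomain call) and record the error and\n\/\/ the estimators. All functions are illustrative stubs.\n#include <iostream>\n\nstruct Solution {};\nSolution solve_global(double \/*h*\/)                        { return {}; }\nSolution solve_local(double \/*h*\/, const Solution& \/*u1*\/) { return {}; }\ndouble   energy_error(const Solution&)                     { return 0.0; }\ndouble   eta_Gamma_1(const Solution&)                      { return 0.0; }\ndouble   eta_Gamma_2(const Solution&)                      { return 0.0; }\n\nint main() {\n  for (int l = 6; l <= 10; ++l) {\n    const double h = 1.0 \/ double(1 << l);  \/\/ grid size of the first mesh\n    Solution u1 = solve_global(h);          \/\/ k = 1: solve on Omega_1 = Omega\n    Solution u2 = solve_local(h \/ 2, u1);   \/\/ k = 2: solve on Omega_2, data from u_1\n    std::cout << l << ' ' << energy_error(u2) << ' '\n              << eta_Gamma_1(u2) << ' ' << eta_Gamma_2(u2) << std::endl;\n  }\n  return 0;\n}\n\\end{verbatim}\n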
We recall that $\\eta_{NC}$ measures the nonconformity of $u_k$, $\\eta_{R}$ measures the error in the energy conservation, $\\eta_{DF}$ the difference between $-A\\nabla u_k$ and the reconstructed diffusive flux $\\bt_k$, $\\eta_U,\\tilde\\eta_{U}$ are upwind errors and $\\eta_{\\Gamma,1},\\eta_{\\Gamma,2}$ measure the jumps of $\\bt_k,\\bq_k$ across subdomain boundaries.\n\n\\begin{table}\n\t\\csvreader[\n\tbefore reading=\\small\\centering\\sisetup{table-number-alignment=left,table-parse-only,zero-decimal-to-integer,round-mode=figures,round-precision=2,output-exponent-marker = \\text{e},fixed-exponent=0},\n\ttabular={lSSSSSSSS},head to column names,\n\ttable head=\\toprule $h$ & \\text{$\\nB{u-u_k}$} &$\\eta_{NC}$ & $\\eta_{R}$ & $\\eta_{DF}$ & $\\eta_{U}$ & \\text{$\\tilde{\\eta}_{U}$} & $\\eta_{\\Gamma,1}$ & $\\eta_{\\Gamma,2}$ \\\\\\midrule,\n\tlate after last line=\\\\\\toprule Order & $1$ & $1$ & $2$ & $1$ & $2$ & $2$ & \\text{$0.5$} & \\text{$0.5$} \\\\\\midrule]\n\t{data\/corner\/local_sing_1e-2_diff_1e0_a_posteriori_data.csv}{}\n\t{$2^{-{\\the\\numexpr\\thecsvrow+5\\relax}}$ & \\erren & \\etaNC & \\etaR & \\etaDF & \\etaU & \\etatU & \\etaGu & \\etaGd}\n\t\\caption{Convergence rate of error estimators for $\\kappa=100$.}\n\t\\label{tab:conv_a}\n\\end{table}\n\\begin{table}\n\t\\csvreader[\n\tbefore reading=\\small\\centering\\sisetup{table-number-alignment=left,table-parse-only,zero-decimal-to-integer,round-mode=figures,round-precision=2,output-exponent-marker = \\text{e},fixed-exponent=0},\n\ttabular={lSSSSSSSS},\n\thead to column names,\n\ttable head=\\toprule $h$ & \\text{$\\nB{u-u_k}$} & $\\eta_{NC}$ & $\\eta_{R}$ & $\\eta_{DF}$ & $\\eta_{U}$ & \\text{$\\tilde{\\eta}_{U}$} & $\\eta_{\\Gamma,1}$ & $\\eta_{\\Gamma,2}$ \\\\\\midrule,\n\tlate after last line=\\\\\\toprule Order & $1$ & $1$ & $2$ & $1$ & $1.5$ & $1.5$ & \\text{$0.5$} & \\text{$0.5$} \\\\\\midrule]\n\t{data\/corner\/local_sing_1e-1_diff_1e0_a_posteriori_data.csv}{}\n\t{$2^{-{\\the\\numexpr\\thecsvrow+5\\relax}}$ & \\erren & \\etaNC & \\etaR & \\etaDF & \\etaU & \\etatU & \\etaGu & \\etaGd}\n\t\\caption{Convergence rate of error estimators for $\\kappa=10$.}\n\t\\label{tab:conv_b}\n\\end{table}\n\n\nWe see that the energy error converges with order one, as predicted by the a priori error analysis of \\cite{AbR19}. We also observe that the error estimators $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$ measuring the reconstructed fluxes' jumps across subdomain boundaries have a lower rate of convergence. Therefore, these error estimators are not efficient, in the sense that they cannot be bounded from above by the energy error multiplied by a mesh-size independent constant.\nHowever, the relative size of $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ compared to the other estimators gives information on the suitability of the local scheme:\n\\begin{itemize}\n\t\\item if $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ are comparable to the other estimators, one should use the local scheme. The typical situation is when the errors are localized, with local regions covering the large error regions (see \\cref{fig:sol_conv_100,fig:domains_priori} and \\cref{tab:conv_a});\n\t\\item if the relative size of $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ is larger than that of the other estimators, this is an indication that one should switch from the local to the classical method. The typical situation is when the errors are not (or less) localized (see \\cref{fig:sol_conv_10,fig:domains_priori} and \\cref{tab:conv_b}). 
On purpose, we chose a local domain that is too small to cover the error region.\n\\end{itemize}\n\nIn the next experiments we let the scheme select the local subdomains on the fly, using the fixed energy fraction marking strategy \\cite[Section 4.2]{Dor96} implemented in the $\\text{LocalDomain}(u_k,\\mathfrak{T}_k)$ routine of \\cref{alg:local}. First, we revisit the example of \\cref{exp:conv}. Second, we consider two examples where the errors are localized, illustrating the efficiency of the algorithm.\n\n\n\\subsection{A nonlocal smooth problem}\\label{exp:corner}\nConsidering the same problem as in \\cref{exp:conv} with $\\kappa=10$, we run the local and classical schemes for $k=1,\\ldots,15$ starting with a uniform mesh of 128 elements. Here, we employ the automatic subdomains' identification algorithm and the goal is to show when one should switch from local to nonlocal methods.\nAs the error is distributed in the whole domain, it is not possible to choose the subdomains $\\Omega_{k}$ so that the errors at their boundaries are negligible. Consequently, the error estimators $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ will dominate.\nIndeed, we see in \\cref{tab:dom} that the error estimators $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ measuring the reconstructed fluxes' jumps dominate the other estimators.\n\\begin{table}\n\t\\csvreader[\n\tbefore reading=\\small\\centering\\sisetup{table-number-alignment=left,table-parse-only,zero-decimal-to-integer,round-mode=figures,round-precision=2,output-exponent-marker = \\text{e},fixed-exponent=0},\n\ttabular={lSSSSSSSS},\n\thead to column names,\n\ttable head=\\toprule $k$ & \\text{$\\nB{u-u_k}$} & $\\eta_{NC}$ & $\\eta_{R}$ & $\\eta_{DF}$ & $\\eta_{U}$ & \\text{$\\tilde{\\eta}_{U}$} & $\\eta_{\\Gamma,1}$ & $\\eta_{\\Gamma,2}$ \\\\\\midrule,\n\t]\n\t{data\/corner\/SPA2FFM_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data_first_5_levels.csv}{}\n\t{\\level & \\erren & \\etaNC & \\etaR & \\etaDF & \\etaU & \\etatU & \\etaGu & \\etaGd}\n\t\\caption{\\Cref{exp:corner}, nonlocal smooth problem. Dominance of $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$ over the other error estimators. Only the results of the first five iterations are shown, i.e. $k\\leq 5$.}\n\t\\label{tab:dom}\n\\end{table}\nThis phenomenon brings two issues into the algorithm. First, the effectivity index of the local scheme is significantly larger than the index for the classical scheme, as we illustrate in \\cref{fig:corner_effind_eta}. Second, the marking error estimator $\\eta_{M,K}$ \\cref{eq:marketa} will be larger at the boundaries of the local domains than in the large error regions; indeed, we see in \\cref{fig:corner_doms} that the local domain $\\Omega_4$ chosen by the algorithm does not correspond to a large error region but is in a neighborhood of the boundary of $\\Omega_3$, where $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ are large. 
For this reason the algorithm is unable to detect the high error regions, and we see in \\cref{fig:corner_effenerr}, where we show the computational cost as a function of the energy error, that the error of the local method stagnates.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\t\\begin{semilogyaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(0,1)},anchor=north west},xlabel={Iteration $k$}, ylabel={Effectivity index of $\\eta$},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}, log basis y=10,ymin=1,ymax=400]\n\t\t\t\t\t\\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=eff,col sep=comma] \n\t\t\t\t\t{data\/corner\/SPA2FFM_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\t\t\\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma] \n\t\t\t\t\t{data\/corner\/SPA1_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\t\\end{semilogyaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\eta$.}\n\t\t\t\\label{fig:corner_effind_eta}\n\t\t\\end{subfigure}\\hfill\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(0,1)},anchor=north west},\n\t\t\t\t\txlabel={Energy norm error.}, ylabel={GMRES cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\t\t\\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=erren,y=linsolvertot,col sep=comma,select coords between index={0}{14}] \n\t\t\t\t\t{data\/corner\/SPA2FFM_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\t\t\\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=erren,y=linsolvertot,col sep=comma,select coords between index={0}{14}] \n\t\t\t\t\t{data\/corner\/SPA1_sing_1_diff_0_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{GMRES cost versus energy norm error.}\n\t\t\t\\label{fig:corner_effenerr}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:corner}, nonlocal smooth problem. Effectivity index of $\\eta$ as a function of the iteration number, and GMRES cost versus energy norm error.}\n\\end{figure}\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{tikzpicture}\n\t\t\t\\node at (0,0) {\\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=0.22\\textheight]{images\/corner\/domains_dld_2_k_3.png}};\n\t\t\t\\node[opacity=0.7] at (0,0) {\\includegraphics[trim=4cm 3cm 2.3cm 6.2cm, clip, width=0.22\\textheight]{images\/corner\/domains_dld_2_k_4.png}};\n\t\t\\end{tikzpicture}\n\t\t\\caption{Local domains $\\Omega_3$ (darker) and $\\Omega_4$ (brighter).}\n\t\t\\label{fig:corner_doms}\n\t\\end{center}\n\\end{figure}\n\nThis example shows that if the errors are not localized then the estimators $\\eta_{\\Gamma,1}$, $\\eta_{\\Gamma,2}$ dominate, the local scheme becomes inefficient and a classical \\emph{global} method should be preferred over a local method. However, our algorithm allows us to monitor the size of the error estimators $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$, and when these start to dominate the other error indicators (as seen in \\cref{tab:dom}) this provides a criterion for switching from the local to the classical scheme.\n
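\nA possible realization of this switching test is sketched in the following self-contained C++ snippet (illustrative only, not part of our implementation; the dominance threshold is a free tuning parameter):\n\\begin{verbatim}\n\/\/ Fall back to the classical global scheme when the subdomain-jump\n\/\/ estimators dominate the remaining indicators by a given factor.\n#include <cmath>\n\nbool switch_to_classical(double etaG1, double etaG2, double etaNC,\n                         double etaR, double etaDF, double etaU,\n                         double threshold) {\n  double jumps = std::sqrt(etaG1 * etaG1 + etaG2 * etaG2);\n  double rest  = std::sqrt(etaNC * etaNC + etaR * etaR +\n                           etaDF * etaDF + etaU * etaU);\n  return jumps > threshold * rest;  \/\/ e.g. threshold = 1\n}\n\\end{verbatim}\n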
\n\\subsection{Reaction dominated problem}\\label{exp:bndlayer_sym}\nIn our next example we consider a symmetric problem and want to compare the local and classical schemes (\\cref{alg:local,alg:classical}) in a singularly perturbed regime. We investigate their efficiency, measured by the computational cost, and analyze their effectivity indexes. The setting is as follows: we solve \\eqref{eq:elliptic} in $\\Omega=[0,1]\\times [0,1]$ with $\\varepsilon=10^{-6}$, $A=\\varepsilon I_2$, $\\bm{\\beta}=(0,0)^\\top$, $\\mu=1$ and we choose $f$ such that the exact solution is given by\n\\begin{equation}\\label{eq:bndlayer}\nu(\\bm{x})=e^{x_1+x_2}\\left( x_1-\\frac{1-e^{-\\zeta x_1}}{1-e^{-\\zeta}}\\right)\\left(x_2-\\frac{1-e^{-\\zeta x_2}}{1-e^{-\\zeta}} \\right),\n\\end{equation}\nwhere $\\zeta=10^{4}$. The solution is illustrated in \\cref{fig:bndlayer_sol}. \n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\\textheight]{images\/bndlayer\/sol_vlq.png}\n\t\t\t\\caption{Solution $u(\\bm{x})$.}\n\t\t\t\\label{fig:bndlayer_sol}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[spy using outlines= {circle, connect spies,every spy on node\/.append style={thick}}]\n\t\t\t\t\\coordinate (spypoint) at (-0.1,0.15);\n\t\t\t\t\\coordinate (magnifyglass) at (1.5,0.5);\n\t\t\t\t\\coordinate (spypoint_bis) at (-0.03,-0.55);\n\t\t\t\t\\coordinate (magnifyglass_bis) at (1.5,-1.2);\n\t\t\t\t\\node at (0,0) {\\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\\textheight]{images\/bndlayer\/dom_1.png}};\n\t\t\t\t\\node at (0,0) {\\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\\textheight]{images\/bndlayer\/dom_2.png}};\n\t\t\t\t\\node at (0,0) {\\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\\textheight]{images\/bndlayer\/dom_3.png}};\n\t\t\t\t\\node at (0,0) {\\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.22\\textheight]{images\/bndlayer\/dom_4.png}};\n\t\t\t\t\\spy [WildStrawberry, size=1.3cm, magnification=4] on (spypoint) in node[fill=white] at (magnifyglass);\n\t\t\t\t\\spy [WildStrawberry, size=1.3cm, magnification=4] on (spypoint_bis) in node[fill=white] at (magnifyglass_bis);\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{First local domains $\\Omega_k$, $k=1,\\ldots,4$.}\n\t\t\t\\label{fig:bndlayer_doms}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{Solution $u(\\bm{x})$ in \\eqref{eq:bndlayer} of the reaction dominated problem and first local domains chosen by the error estimators.}\n\\end{figure}\n\nSince the problem is symmetric we have $\\nB{{\\cdot}}=\\nBp{{\\cdot}}$, but their related error estimators $\\eta$ and $\\tilde\\eta$, respectively, satisfy $\\tilde\\eta>\\eta$ and hence the effectivity index of $\\eta$ will be lower (see \\cref{thm:energynormbound,thm:augmentednormbound}). \n\nStarting from a coarse mesh (128 elements), we let the two algorithms run for $k=1,\\ldots,20$. In \\cref{fig:bndlayer_doms} we show the first four subdomains $\\Omega_k$ chosen by the local scheme. Note that the local domain $\\Omega_4$ chosen by the algorithm is disconnected, while subdomain $\\Omega_3$ has a hole, as is allowed by the theory. 
Several of the subsequent subdomains (not displayed) are also disconnected or contain holes. The first iterations are needed to capture the boundary layer and reach the convergence regime; hence we plot the results for $k\\geq 7$. The most expensive part of the code is the solution of linear systems by means of the conjugate gradient (CG) method preconditioned with the incomplete Cholesky factorization, followed by the computation of the potential and flux reconstructions and then by the evaluation of the error estimators. In the local scheme, the time spent doing these tasks is proportional to the number of elements inside each subdomain $\\Omega_k$. For the classical scheme, the cost of these tasks depends on the total number of elements in the mesh. Since the CG routine is the most expensive part, we take the time spent in it as an indicator for the computational cost.\n\nIn \\cref{fig:bndlayer_sym_etacost}, we plot the simulation cost against the error estimator $\\eta$, for both the local and classical algorithms. Each circle or star in the figure represents an iteration $k$. We observe that the local scheme provides similar error bounds but at a smaller cost. The effectivity index of $\\eta$ at each iteration $k$ is shown in \\cref{fig:bndlayer_sym_etaeffind}; we can observe that the local scheme has an effectivity index similar to that of the classical scheme.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Error estimator $\\eta$}, ylabel={CG cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=etafull,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=etafull,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{CG cost versus $\\eta$.}\n\t\t\t\\label{fig:bndlayer_sym_etacost}\n\t\t\\end{subfigure}\\hfill\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\eta$},ymin=0,ymax=5,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma,select coords between index={6}{19}] 
\n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\eta$.}\n\t\t\t\\label{fig:bndlayer_sym_etaeffind}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_sym}, reaction dominated problem. Computational cost vs. $\\eta$ and effectivity index as a function of the iteration number.}\n\t\\label{fig:bndlayer_sym_etacost_etaeffind}\n\\end{figure}\n\nIn \\cref{fig:bndlayer_sym_effenerr} we exhibit the cost against the exact energy error and we notice that for some values of $k$ the mesh is refined but the error stays almost constant. This phenomenon significantly increases the simulation cost of the classical scheme without improving the solution. In contrast, the cost of the local scheme increases only marginally. Dividing the two curves in \\cref{fig:bndlayer_sym_effenerr} we obtain the relative speed-up, which is plotted in \\cref{fig:bndlayer_sym_speedup}. We note that as the error decreases the local scheme becomes faster than the classical scheme.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Energy norm error.}, ylabel={CG cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=erren,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=erren,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{CG cost versus energy norm error.}\n\t\t\t\\label{fig:bndlayer_sym_effenerr}\n\t\t\\end{subfigure}\\hfill\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(0,1)},anchor=north west},\n\t\t\txlabel={Energy norm error.}, ylabel={Speed-up}, x dir=reverse,log basis x={2},log basis y={2},ymax=8, ytick={2,4,8,16},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=black,line width=1.0 pt,mark=none] table[x=erren,y=speeden,col sep=comma] \n\t\t\t{data\/bndlayer\/speedup_sing_4_diff_6_b_0_nref_3_lay_21.csv};\\addlegendentry{Speed-up}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Speed-up as a function of the error.}\n\t\t\t\\label{fig:bndlayer_sym_speedup}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_sym}, reaction dominated problem. Computational cost vs. energy norm error and speed-up as a function of the error.}\n\t\\label{fig:bndlayer_sym_effenerr_speedup}\n\\end{figure}\nIn \\cref{fig:bndlayer_sym_etateffind} we plot the effectivity index of $\\tilde\\eta$. 
As expected, for this symmetric problem, it is worse than the effectivity of $\\eta$. Finally, we run the same experiment but for different diffusion coefficients $\\varepsilon=10^{-4},10^{-6},10^{-8}$ and display in \\cref{fig:bndlayer_sym_eta_diff_eps} the effectivity index of $\\eta$. We note that it always remains below 4.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\tilde\\eta$},ymin=0,ymax=15,,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\tilde\\eta$.}\n\t\t\t\\label{fig:bndlayer_sym_etateffind}\n\t\t\\end{subfigure}\\hfill\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend columns=2, ,legend style={at={(0,0)},anchor=south west,draw=none,fill=none},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\eta$},ymin=0,ymax=4.0,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_4_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-4}$}\n\t\t\t\\addlegendimage{empty legend}\\addlegendentry{}\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-6}$}\n\t\t\t\\addplot[color=NavyBlue,mark=triangle,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_8_b_0_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-8}$}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\\caption{Effectivity index of $\\eta$ for different diffusion coefficients $\\varepsilon$.}\n\t\t\\label{fig:bndlayer_sym_eta_diff_eps}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_sym}, reaction dominated problem. 
Effectivity index of $\\tilde\\eta$, and effectivity index of $\\eta$ for different diffusion coefficients $\\varepsilon$.}\n\t\\label{fig:bndlayer_sym_effetat_effeta}\n\\end{figure}\n\n\n\\subsection{Convection dominated problem}\\label{exp:bndlayer_notsym}\nIn this section we perform the same experiment as in \\cref{exp:bndlayer_sym}, but instead of choosing $\\bm{\\beta}=(0,0)^\\top$ we set $\\bm{\\beta}=-(1,1)^\\top$; hence, we solve a nonsymmetric singularly perturbed problem. The linear systems are solved with the GMRES method preconditioned with the incomplete LU factorization. As in \\cref{exp:bndlayer_sym}, we investigate the effectivity indexes and efficiency of the local and classical schemes.\n\nFor convection dominated problems, the norm $\\nBp{{\\cdot}}$ is more appropriate than $\\nB{{\\cdot}}$ since it also measures the error in the advective direction. In \\cref{fig:bndlayer_notsym_etatcost}, we plot the simulation cost versus the error estimator $\\tilde\\eta$ and remark that again the local scheme provides similar error bounds at smaller cost. The effectivity index of $\\tilde\\eta$ is displayed in \\cref{fig:bndlayer_notsym_etateffind}, where we note that the local and classical schemes again have similar effectivity indexes.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Error estimator $\\tilde\\eta$}, ylabel={GMRES cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=etatfull,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=etatfull,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{GMRES cost versus $\\tilde\\eta$.}\n\t\t\t\\label{fig:bndlayer_notsym_etatcost}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(1,1)},anchor=north east},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\tilde\\eta$},ymin=0,ymax=15,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\tilde\\eta$.}\n\t\t\t\\label{fig:bndlayer_notsym_etateffind}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_notsym}, convection dominated problem. Computational cost vs. $\\tilde\\eta$ and effectivity index in function of the iteration number.}\n\t\\label{fig:bndlayer_notsym_etatcost_etateffind}\n\\end{figure}\n
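\nThe speed-up curves in \\cref{fig:bndlayer_sym_speedup} above and \\cref{fig:bndlayer_notsym_speedup} below are obtained by dividing the cost of the classical scheme by the cost of the local scheme at matched error values. A minimal post-processing sketch in Python, assuming the per-iteration errors and costs of both schemes have been logged to arrays (all names here are illustrative and not part of the actual code):\n\\begin{verbatim}\nimport numpy as np\n\ndef speedup(err_loc, cost_loc, err_cls, cost_cls):\n    # Common grid of error values on the overlap of the two curves;\n    # interpolation is done in log-log, matching the plots.\n    e = np.geomspace(max(err_loc.min(), err_cls.min()),\n                     min(err_loc.max(), err_cls.max()), 50)\n    # np.interp needs increasing abscissae; the error decreases with k.\n    c_loc = np.exp(np.interp(np.log(e), np.log(err_loc[::-1]),\n                             np.log(cost_loc[::-1])))\n    c_cls = np.exp(np.interp(np.log(e), np.log(err_cls[::-1]),\n                             np.log(cost_cls[::-1])))\n    return e, c_cls \/ c_loc  # values above 1 mean the local scheme wins\n\\end{verbatim}\n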
\nIn \\cref{fig:bndlayer_notsym_effenerr_speedup} we plot the simulation cost versus the error in the augmented norm $\\nBp{{\\cdot}}$ and the relative speed-up. We again observe that the local scheme is faster. \n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east},\n\t\t\txlabel={Augmented norm error.}, ylabel={GMRES cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot+[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=erraug,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot+[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=erraug,y=linsolvertot,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{GMRES cost versus augmented norm error.}\n\t\t\t\\label{fig:bndlayer_notsym_effenerr}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(0,1)},anchor=north west},\n\t\t\txlabel={Augmented norm error.}, ylabel={Speed-up}, x dir=reverse,log basis x={2},log basis y={2},ymax=8,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot+[color=black,line width=1.0 pt,mark=none] table[x=erraug,y=speeden,col sep=comma] \n\t\t\t{data\/bndlayer\/speedup_sing_4_diff_6_b_1_nref_3_lay_21.csv};\\addlegendentry{Speed-up}\n\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Speed-up in function of the error.}\n\t\t\t\\label{fig:bndlayer_notsym_speedup}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_notsym}, convection dominated problem. Computational cost vs. augmented norm error and speed-up in function of the error.}\n\t\\label{fig:bndlayer_notsym_effenerr_speedup}\n\\end{figure}\n\nFor completeness, we plot in \\cref{fig:bndlayer_notsym_etaeffind} the effectivity index of $\\eta$. We see that it is far from one. This illustrates that this estimator does not capture the convective error and is hence not appropriate for convection dominated problems. 
Then, we run again the same experiment but considering different diffusion coefficients $\\varepsilon=10^{-4}, 10^{-6}, 10^{-8}$ and display the effectivity indexes of $\\tilde\\eta$ in \\cref{fig:bndlayer_notsym_diff_eps}.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend style={at={(1,1)},anchor=north east},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\eta$},ymin=0,ymax=800,,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=eff,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA1_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\eta$.}\n\t\t\t\\label{fig:bndlayer_notsym_etaeffind}\n\t\t\\end{subfigure}\\hfill\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\\begin{axis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth,legend columns=2, ,legend style={at={(1,1)},anchor=north east,draw=none,fill=none},\n\t\t\txlabel={Iteration $k$}, ylabel={Effectivity index of $\\tilde\\eta$},ymin=0,ymax=20,label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_4_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-4}$}\n\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=level,y=efft,col sep=comma,select coords between index={6}{19}] \n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_6_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-6}$}\n\t\t\t\\addlegendimage{empty legend}\\addlegendentry{}\n\t\t\t\\addplot[color=NavyBlue,mark=triangle,line width=1.0 pt,mark size=2.5 pt] table[x=level,y=efft,col sep=comma,select coords between index={6}{19}] \t\n\t\t\t{data\/bndlayer\/SPA2FFM_sing_4_diff_8_b_1_nref_3_lay_21_a_posteriori_data.csv};\\addlegendentry{$\\varepsilon=10^{-8}$}\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Effectivity index of $\\tilde\\eta$ for different diffusion coefficients $\\varepsilon$.}\n\t\t\t\\label{fig:bndlayer_notsym_diff_eps}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{\\Cref{exp:bndlayer_notsym}, convection dominated problem. 
Effectivity index of $\\eta$, and effectivity index of $\\tilde\\eta$ for different diffusion coefficients $\\varepsilon$.}\n\t\\label{fig:bndlayer_notsym_etaeffind_diff_eps}\n\\end{figure}\n\n\n\\subsection{A nonlinear nonsmooth problem with multiple local structures}\\label{exp:nonlin}\nWe conclude with an experiment on a nonlinear nonsmooth problem, where the diffusion tensor is solution dependent and has multiple discontinuities; hence, the solution presents several local structures. More precisely, we solve \\cref{eq:elliptic} with $\\Omega=[-3\/2,3\/2]\\times [-3\/2,3\/2]$, $\\bm{\\beta}=-(1,1)^\\top$, $\\mu=1$ and $f(\\bm{x})=\\nld{\\bm{x}}^2$. The diffusion tensor is $A(u,\\bm{x})=A_1(u)A_2(\\bm{x})$, with $A_1(u)=1\/\\sqrt{1+u^2}$. We divide $\\Omega$ into nine squares of size $1\/2\\times 1\/2$ and $A_2(\\bm{x})$ alternates between $1$ and $0.01$, in a checkerboard-like manner. A reference solution is displayed in \\cref{fig:nonlinsol}.\n\n\\Cref{thm:energynormbound,thm:augmentednormbound} do not apply straightforwardly as the problem is nonlinear. Nevertheless, \\cref{alg:local} can be used in combination with a Newton scheme, as shown in \\cite{AbR19}. In this experiment we investigate the efficiency of the error estimators in identifying the local subdomains for a nonlinear nonsmooth problem with multiple local structures. Starting with a $32\\times 32$ element mesh, we run the code and let it automatically select the subdomains for twenty iterations. We do the same with the classical \\cref{alg:classical} and compare the results in \\cref{fig:nonlin_eff}, where we display the cost of the Newton method versus the error, computed in energy norm, against a reference solution. We remark that the local method is faster; a sketch of the driving loop is given after the figure.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=0.7\\textwidth]{images\/nonlin\/sol.png}\n\t\t\t\\caption{Solution $u(\\bm{x})$ of the nonlinear nonsmooth problem.}\n\t\t\t\\label{fig:nonlinsol}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[t]{0.49\\textwidth}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.98]\n\t\t\t\t\\begin{loglogaxis}[height=0.66*0.9\\textwidth,width=0.9\\textwidth, x dir=reverse,legend style={at={(1,0)},anchor=south east,fill=none,draw=none},\n\t\t\t\t\txlabel={Energy norm error.}, ylabel={Newton cost [sec.]},log basis x={2},label style={font=\\normalsize},tick label style={font=\\normalsize},legend image post style={scale=1},legend style={nodes={scale=1, transform shape},draw=none}]\n\t\t\t\t\t\\addplot[color=OrangeRed,mark=o,line width=1.0 pt,mark size=2.5 pt] table [x=erren,y=linsystemtot,col sep=comma] \n\t\t\t\t\t{data\/nonlin\/spa_2_nref_5_nlev_20_nlay1_2_nlay2_2_a_posteriori_data.csv};\\addlegendentry{Local}\n\t\t\t\t\t\\addplot[color=ForestGreen,mark=star,line width=1.0 pt,mark size=2.5 pt] table[x=erren,y=linsystemtot,col sep=comma] \n\t\t\t\t\t{data\/nonlin\/spa_1_nref_5_nlev_20_a_posteriori_data.csv};\\addlegendentry{Classical}\n\t\t\t\t\\end{loglogaxis}\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{Newton cost versus energy norm error.}\n\t\t\t\\label{fig:nonlin_eff}\n\t\t\\end{subfigure}\n\t\\end{center}\n\t\\caption{Solution $u(\\bm{x})$ and efficiency experiment on the nonlinear nonsmooth problem of \\cref{exp:nonlin}.}\n\\end{figure}\n
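\nThe experiments above all iterate the same estimate--mark--refine--solve cycle. The following Python-style sketch summarizes the loop (schematic only: \\texttt{solve}, \\texttt{estimate}, \\texttt{mark\\_subdomain}, \\texttt{refine} and \\texttt{local\\_solve} are placeholder names for the corresponding finite element routines, not the interface of an actual library; in the nonlinear case the solves are wrapped in a Newton iteration):\n\\begin{verbatim}\ndef adaptive_local_solve(mesh, tol, max_iter=20):\n    u = solve(mesh)                  # coarse solve on the initial mesh\n    for k in range(max_iter):\n        eta, indicators = estimate(mesh, u)   # a posteriori estimators\n        if eta < tol:\n            break\n        omega = mark_subdomain(mesh, indicators)  # confined subdomain\n        mesh = refine(mesh, omega)\n        # Local scheme: solve only on the refined subdomain, keeping the\n        # previous solution outside; the classical scheme would instead\n        # recompute u = solve(mesh) on the whole domain at every step.\n        u = local_solve(mesh, omega, u)\n    return u\n\\end{verbatim}\n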
\n\\section{Conclusion}\nIn this paper we have derived a local adaptive discontinuous Galerkin method for the scheme introduced in \\cite{AbR19}. The scheme, defined in \\cref{sec:localg}, relies on a coarse solution which is successively improved by solving a sequence of localized elliptic problems in confined subdomains, where the mesh is refined. Starting from error estimators for the symmetric weighted interior penalty Galerkin scheme based on conforming potential and flux reconstructions, and allowing for flux jumps across the subdomain boundaries, we have derived new estimators for the local method and proved their reliability in \\cref{thm:energynormbound,thm:augmentednormbound}. An important property of the original estimators (for nonlocal schemes) is preserved: the absence of unknown constants.\nNumerical experiments confirm the error estimators' effectivity for singularly perturbed convection-reaction dominated problems and illustrate the efficiency of the local scheme when compared to a classical adaptive algorithm, where at each iteration the solution on the whole computational domain must be recomputed. We also showed that the growth of the boundary error indicators (the reason why efficiency cannot be proved in general) can be monitored in order to switch from the local to a nonlocal method. Switching automatically from the local to the classical scheme, based on the indicators $\\eta_{\\Gamma,1}$ and $\\eta_{\\Gamma,2}$, could easily be integrated in a finite element code. Testing such an integrated code would be an interesting direction for future work.\n\n\\section*{Acknowledgments} The authors are partially supported by the Swiss National Science Foundation, under grant No. $200020\\_172710$.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nTopological complexity ($\\operatorname{TC}$) is an invariant introduced by Michael Farber in \\cite{farberTC}, connecting a motion planning problem in robotics with algebraic topology. Intuitively, we think of topological complexity as the smallest number of ``rules'' needed to form a continuous motion planning algorithm on a topological space $X$. For robotics, we think of $X$ as the configuration space of some robot, and our motion planning algorithm outputs paths between the configurations in $X$. Continuity then requires that the paths remain ``close'' when the configurations are ``close'' in some sense. It turns out that $\\operatorname{TC}(X)$ only depends on the topology of $X$, and so topologists study the invariant applied to various topological spaces in the abstract sense rather than as configuration spaces of specific robots.\n\nIn the years since its introduction, several different variations of topological complexity have been studied pertaining to different motion planning problems. Of interest to us is relative topological complexity. Here, we restrict which configurations are allowed to be starting and ending configurations, but we permit the path to move within a larger configuration space. This variant is mentioned in Farber's book on the subject \\cite{farber}, and it is used there to prove that $\\operatorname{TC}(X)$ is a homotopy invariant. In this paper, we restrict our attention further to a certain method for choosing starting and ending configurations.\n\nOur invariant is motivated by the following motion planning problem. Suppose there is a robot with configuration space $X$, and the robot is given to us in an arbitrary configuration within $X$. Our goal is to plan the robot's motion to a configuration within some specified subset $Y \\subseteq X$. 
Then the relative topological complexity of the pair $(X,Y)$ is the smallest number of ``rules'' needed to form a continuous motion planning algorithm on $X$ where the paths must end in $Y$. This restriction provides two major advantages. First, we are able to develop some standard tools for estimating this value, which we do in Section 2. Then, there are natural relationships between this value and both $\\operatorname{TC}(X)$ and the Lusternik-Schnirelmann category of $X$ which we explore in Section 3.\n\nIn Section 4, we apply this new variant of relative topological complexity to pairs of real projective spaces. In so doing, we draw a deep connection to the existence of certain axial maps. We draw this connection explicitly in Theorem \\ref{axial}. This connection follows a similar logic to \\cite{farberRP}, where Farber, Tabachnikov, and Yuzvinsky connect the immersion dimension of real projective spaces to their topological complexity using axial maps.\n\nFinally, in Section 5, we apply this new variant to pairs of spatial polygon spaces. These have been studied by Hausmann and Knutson in \\cite{hausknut} as well as by Davis in \\cite{davis}. In our study, we introduce new notation for interesting submanifolds of the spatial polygon spaces for consideration using our variant. We also compute the relative topological complexity for pairs of spatial polygon spaces, relying upon the symplectic structure of these spaces.\n\nThis work is a piece of the author's PhD thesis under the supervision of Don Davis. We are grateful for his guidance and support throughout this process, and for giving productive comments on early drafts. We would also like to thank Jean-Claude Hausmann, Jesus Gonzalez, Steve Scheirer, Alan Hylton, and Brian Klatt for various productive and interesting conversations over the course of this project.\n\n\\section{Basic Estimates}\nWe begin by reframing the intuitions established in the introduction in terms of the Schwarz genus of a fibration. This was introduced by Schwarz in \\cite{schwarz}.\n\n\\begin{defn}\nLet $f:E\\to B$ be a fibration. The \\textit{Schwarz genus of $f$}, denoted $\\operatorname{genus}(f)$, is the smallest $k$ such that there exists $\\{U_i \\}_{i=1}^k$ an open cover of $B$ along with sections $s_i:U_i \\to E$ of $f$.\n\\end{defn}\n\nTo apply this to topological complexity, note that there is a natural fibration $p:PX\\to X\\times X$ where $PX$ is the space of paths in $X$. This fibration assigns to each path in $X$ the endpoints, i.e. $p(\\sigma)=(\\sigma(0),\\sigma(1))$. A section of this fibration is a way to assign a path to a pair of points in $X$, aligning this with a motion planning rule in the intuitive notion. Formally, we are left with the following definition for $\\operatorname{TC}(X)$.\n\n\\begin{defn}\nLet $p:PX\\to X\\times X$ be the fibration defined above. Then, the \\textit{topological complexity of $X$} is the Schwarz genus of $p$, or in other words $\\operatorname{TC}(X)=\\operatorname{genus}(p)$.\n\\end{defn}\n\nIt is worth noting that the definition we are using is the unreduced version of topological complexity used by Farber in \\cite{farber}. Many researchers in this field use a reduced version of topological complexity where $\\overline{\\operatorname{TC}}(X) = \\operatorname{TC}(X)-1$. We will use the unreduced version throughout this paper.\n\nIn addition to $\\operatorname{TC}(X)$, Farber also introduced a relative topological complexity for general subsets of $X\\times X$. 
Again, this is defined in terms of Schwarz genus, but here the motion planning rule is to only move between select pairs of points within $X \\times X$.\n\n\\begin{defn}{\\cite{farber}}\nIf $A \\subseteq X \\times X$, the \\textit{relative topological complexity}, denoted $\\operatorname{TC}_X(A)$, is the Schwarz genus of the pullback fibration over $A$ induced by the inclusion map.\n\\end{defn}\n\nIf we consider $A$ as the set of pairs of allowed configurations in the intuitive notion of relative topological complexity, then this tracks the smallest number of rules needed to move through $X$ where the pairs of starting and ending points must lie in $A$.\n\n\\subsection{Relative Topological Complexity of a Pair}\n\nAs a variant of relative topological complexity, we consider the following problem. Suppose we had a specified set of target configurations $Y \\subseteq X$. We wish to determine the smallest number of rules needed to create a continuous motion planner from any configuration in $X$ to a configuration in $Y$. This natural question is answered by our new variant.\n\n\\begin{defn}\nLet $Y \\subseteq X$. Let $P_{X\\times Y} = \\{\\gamma\\in PX|\\gamma(0)\\in X \\text{ and }\\gamma(1)\\in Y\\}$. There is a natural fibration $P_{X\\times Y} \\overset{\\pi}{\\longrightarrow} X \\times Y$ where $\\pi(\\gamma)=(\\gamma(0),\\gamma(1))$. Then, the \\textit{relative topological complexity of the pair $(X,Y)$} is the Schwarz genus of $\\pi$. In other words, $\\operatorname{TC}(X,Y)=\\operatorname{genus}(\\pi)$.\n\\end{defn}\n\nOne can also think of the fibration $\\pi$ as the pullback of the usual topological complexity fibration induced by the inclusion map $X\\times Y \\hookrightarrow X\\times X$. Doing so, we can immediately get a convenient upper bound on $\\operatorname{TC}(X,Y)$ thanks to a theorem from Schwarz.\n\n\\begin{prop}{\\cite[Prop 7]{schwarz}}\nLet $p:E\\to B$ be a fibration and suppose $i:A\\to B$ is a continuous map. Let $i^*p:i^*E\\to A$ be the pullback fibration over $A$ induced by $i$. Then, $\\operatorname{genus}(i^*p) \\leq \\operatorname{genus}(p)$.\n\\end{prop}\n\nWe can easily get the following corollary to this proposition.\n\n\\begin{cor}\\label{RelTC<TC}\nFor any $Y \\subseteq X$, $\\operatorname{TC}(X,Y) \\leq \\operatorname{TC}(X)$.\n\\end{cor}\n\nSchwarz's dimensional bound on the genus of a fibration also descends to this setting, giving a convenient upper bound when $X$ is highly connected.\n\n\\begin{cor}\\label{schwarzubcor}\nIf $X$ is a simply-connected CW complex, then $\\operatorname{TC}(X,Y) \\leq \\frac{\\dim (X\\times Y)}{2}+1$.\n\\end{cor}\n\nWe also have a cohomological lower bound in the style of Farber's zero-divisor bound for $\\operatorname{TC}(X)$. Let $Z(X\\times Y)$ denote the set of \\textit{zero-divisors}, i.e. the kernel of $\\pi^*:H^*(X\\times Y) \\to H^*(P_{X\\times Y})$, and let $\\Delta^*:H^*(X\\times Y)\\otimes \\cdots \\otimes H^*(X\\times Y) \\to H^*(X\\times Y)$ denote the map induced by the cup product.\n\n\\begin{thm}\\label{cohomlb}\nIf there exist $\\alpha_1, \\dots, \\alpha_k \\in Z(X\\times Y)$ such that $\\Delta^*(\\alpha_1 \\otimes \\cdots \\otimes \\alpha_k) \\neq 0$, then $\\operatorname{TC}(X,Y) > k$.\n\\end{thm}\n\n\\begin{proof}\nAssume $\\operatorname{TC}(X,Y) \\leq k$. Then, take $\\{U_j\\}_{j=1}^k$ to be an open cover of $X\\times Y$ such that for each $j$, there exists $s_j:U_j \\to P_{X\\times Y}$ with $s_j$ a section of $\\pi$. Then, for each $j$, we get the following commutative diagram:\n\n{\\centering\n\\begin{tikzcd}\n\\pi^{-1}(U_j) \\ar[r,\"a\"] \\ar[d,\"\\pi_j\"] & P_{X\\times Y} \\ar[d,\"\\pi\"] \\\\\nU_j \\ar[u,bend left,dashed, \"s_j\"] \\ar[r,\"b\"] & X\\times Y\n\\end{tikzcd}\n\\par}\n\nThis induces the following diagram in cohomology:\n\n{\\centering\n\\begin{tikzcd}\nH^*(\\pi^{-1}(U_j)) & \\ar[l,\"a^*\"'] H^*(P_{X\\times Y}) \\\\\nH^*(U_j) \\ar[u,\"\\pi_j^*\"']& H^*(X\\times Y) \\ar[l,\"b^*\"'] \\ar[u,\"\\pi^*\"']\n\\end{tikzcd}\n\\par}\n\nSince $\\pi_j$ has a section, $\\pi_j^*$ is injective. So, if we take $\\alpha \\in Z(X\\times Y)$, we have that $\\pi^*(\\alpha)=0 \\implies a^*(\\pi^*(\\alpha))=0$. By the diagram above, this implies that $\\pi_j^*(b^*(\\alpha))=0$, but $\\pi_j^*$ is injective, so we see that $b^*(\\alpha)=0$. Thus, by exactness, $\\alpha \\in \\operatorname{Im}(H^*(X\\times Y, U_j) \\to H^*(X\\times Y))$. 
We can then use this in the following diagram:\n\n{\\centering\n\\begin{tikzcd}\nH^*(X\\times Y\\, , U_1) \\otimes \\cdots \\otimes H^*(X\\times Y\\, , U_k) \\ar[r] \\ar[d] & H^*(X\\times Y, \\bigcup\\limits_{j=1}^k U_j) = 0 \\ar[d]\\\\\nH^*(X\\times Y) \\otimes \\cdots \\otimes H^*(X\\times Y) \\ar[r,\"\\Delta^*\"] & H^*(X\\times Y)\n\\end{tikzcd}\n\\par}\n\nFollowing the diagram, if $\\alpha_1 \\otimes \\cdots \\otimes \\alpha_k \\in H^*(X\\times Y) \\otimes \\cdots \\otimes H^*(X\\times Y)$ is such that $\\alpha_j \\in Z(X\\times Y)$, then $\\alpha_1 \\otimes \\cdots \\otimes \\alpha_k$ pulls back to $\\widetilde{\\alpha}_1 \\otimes \\cdots \\otimes \\widetilde{\\alpha}_k \\in H^*(X\\times Y\\, , U_1) \\otimes \\cdots \\otimes H^*(X\\times Y\\, , U_k)$. By commutativity, we get that $\\Delta^*(\\alpha_1 \\otimes \\cdots \\otimes \\alpha_k) = 0$.\n\nThus, by contrapositive, if we have elements $\\alpha_1, \\cdots, \\alpha_k \\in Z(X\\times Y)$ such that $\\Delta^*(\\alpha_1 \\otimes \\cdots \\otimes \\alpha_k) \\neq 0$, we must have that $\\operatorname{TC}(X,Y) >k$.\n\n\\end{proof}\n\nThe primary takeaway of the above result is that we can compute cohomological lower bounds using knowledge of the cup product in $Y$ and knowledge of the inclusion-induced map $\\iota^*:H^*(X) \\to H^*(Y)$. Symplectic structures often exhibit useful knowledge of the inclusion-induced map in a powerful way. Inspired by \\cite[Thm 1]{farberRP}, we get the following theorem.\n\n\\begin{thm}\\label{sympTC}\nLet $(X,\\omega_X)$ be a simply-connected, closed, symplectic manifold of dimension $2n$ with submanifold $Y$ of dimension $2m$ carrying a symplectic form $\\omega_Y$ such that $\\iota^*([\\omega_X])=[\\omega_Y]$. Then $\\operatorname{TC}(X,Y) = n+m+ 1$.\n\\end{thm}\n\n\\begin{proof}\nFirst, note that $\\operatorname{TC}(X,Y) \\leq n+m+1$ via Corollary \\ref{schwarzubcor}, since $X$ is simply-connected.\n\nFor the lower bound, consider the cohomology classes $[\\omega_X]$ and $[\\omega_Y]$. Since $\\iota^*[\\omega_X] = [\\omega_Y]$, $[\\omega_X]\\otimes 1 - 1 \\otimes [\\omega_Y]$ is a zero-divisor in $Z(X \\times Y)$. Expanding via the binomial theorem, $([\\omega_X] \\otimes 1 - 1 \\otimes [\\omega_Y])^{n+m} = (-1)^m\\binom{n+m}{m}[\\omega_X]^n \\otimes [\\omega_Y]^m \\neq 0$. Thus, $\\operatorname{TC}(X,Y) > n+m$ by Theorem \\ref{cohomlb}. The result follows.\n\n\\end{proof}\n\nAs an example, notice that $\\mathbb{C}P^n$ is a closed, simply-connected, symplectic manifold and also that $\\mathbb{C}P^m \\subset \\mathbb{C}P^n$ is a symplectic submanifold where the natural inclusion map satisfies that $\\iota^*([\\omega_{\\mathbb{C}P^n}]) = [\\omega_{\\mathbb{C}P^m}]$. Thus, we get an easy corollary.\n\n\\begin{cor}\\label{CPn}\n$\\operatorname{TC}(\\mathbb{C}P^n,\\mathbb{C}P^m) = n+m+1$.\n\\end{cor}\n\nThis corollary generalizes the result from \\cite[Cor 2]{farberRP} that $\\operatorname{TC}(\\mathbb{C}P^n) = 2n+1$. In general, projective spaces provide examples where the inclusion-induced map is well-behaved in cohomology. We will return to the case of real projective spaces in Section 4.\n\n\\section{Relationship with Other Invariants}\n\nThe other two invariants we consider here are $\\operatorname{TC}(X)$ and the Lusternik-Schnirelmann category (L-S cat) of $X$, denoted $\\operatorname{cat}(X)$. 
We defined $\\operatorname{TC}(X)$ earlier, but we can use the Schwarz genus to give a definition for $\\operatorname{cat}(X)$ that works well for our purposes.\n\nFor a pointed space $(X,x_0)$, there is a natural fibration $p_0:P_0X \\to X\\times \\{x_0\\}$ where $P_0X$ is the space $\\{\\sigma \\in PX \\, | \\, \\sigma(1)=x_0\\}$. Then $p_0(\\sigma)=(\\sigma(0),x_0)$ defines the fibration. Notice that this is again the pullback of the fibration we used to define $\\operatorname{TC}(X)$ over the inclusion map $X \\times \\{x_0\\} \\hookrightarrow X\\times X$.\n\n\\begin{defn}\nLet $p_0:P_0X\\to X\\times \\{x_0\\}$ be the fibration defined above. Then, the \\textit{Lusternik-Schnirelmann category of $X$} is the Schwarz genus of $p_0$, denoted by $\\operatorname{cat}_{x_0}(X)$ or $\\operatorname{cat}(X)$ if the basepoint is implied.\n\\end{defn}\n\n\\begin{remark}\nThe usual definition of $\\operatorname{cat}(X)$ is the smallest number of sets $U_i \\subseteq X$ needed to cover $X$ where $U_i$ is contractible in $X$. In \\cite[Thm 18]{schwarz}, Schwarz proves that when $p:E\\to B$ is a fibration with $E$ contractible, then $\\operatorname{genus}(p)=\\operatorname{cat}(B)$. In the above fibration, $P_0X$ is contractible, so this definition corresponds to the usual definition of L-S cat. We will need a quick lemma to show that our definition is also independent of the choice of basepoint under reasonable conditions.\n\\end{remark}\n\n\\begin{lem}\\label{catequiv}\nIf $X$ is path-connected, then for any $x_0, y_0 \\in X$, $\\operatorname{cat}_{x_0}(X)=\\operatorname{cat}_{y_0}(X)$.\n\\end{lem}\n\n\\begin{proof}\nIt suffices to show that $\\operatorname{cat}_{x_0}(X)\\leq \\operatorname{cat}_{y_0}(X)$ by symmetry.\n\nSuppose $\\operatorname{cat}_{y_0}(X)=k$. Then there is an open cover of $X\\times \\{y_0\\}$, say $\\{U_i\\}_{i=1}^k$ with sections $s_i:U_i \\to P_0$. To construct an open cover of $X\\times \\{x_0\\}$, let $V_i=\\{(u,x_0)\\, | \\, (u,y_0) \\in U_i\\}$. Then, since $X$ is path-connected, there exists some path $\\sigma$ such that $\\sigma(0)=y_0$, and $\\sigma(1)=x_0$. The map $s_i':V_i \\to P_0$ given by $s_i'(u,x_0)=s_i(u,y_0)\\cdot \\sigma$, where $\\cdot$ denotes concatenation, is a continuous section of the fibration for $\\operatorname{cat}_{x_0}(X)$. Thus, $\\operatorname{cat}_{x_0}(X)\\leq k$.\n\\end{proof}\n\nFor a pointed space $(X,x_0)$, there is a natural inclusion $f:X\\times \\{x_0\\} \\hookrightarrow X\\times Y$ when $x_0\\in Y$. Moreover, we have the following relationship:\n\n\\begin{prop}\\label{cat<TC}\nLet $x_0 \\in Y \\subseteq X$. Then $\\operatorname{cat}(X) \\leq \\operatorname{TC}(X,Y)$.\n\\end{prop}\n\n\\begin{proof}\nThe fibration $p_0$ is the pullback of $\\pi$ along the inclusion $f$, so by \\cite[Prop 7]{schwarz} we have $\\operatorname{cat}(X) = \\operatorname{genus}(p_0) \\leq \\operatorname{genus}(\\pi) = \\operatorname{TC}(X,Y)$.\n\\end{proof}\n\n\\begin{example}\nNote that $\\operatorname{TC}(Y)$ is not, in general, a lower bound for $\\operatorname{TC}(X,Y)$. For instance, since $\\operatorname{TC}(S^4,\\mathbb{R}P^2) \\leq \\operatorname{TC}(S^4) = 3$, we have $\\operatorname{TC}(\\mathbb{R}P^2) = 4 > \\operatorname{TC}(S^4,\\mathbb{R}P^2)$.\n\\end{example}\n\nOne nice application of this relationship occurs in the presence of a topological group. Let $G$ be a path connected topological group. As a simple exercise following \\cite[Prop 4.19]{farber}, Farber indicates that $\\operatorname{TC}(G)=\\operatorname{cat}(G)$. Using this, we have the following corollary:\n\n\\begin{cor}\\label{toplgrp}\nLet $H$ be a non-empty subset of a path-connected topological group $G$. Then $\\operatorname{TC}(G,H) = \\operatorname{cat}(G)$.\n\\end{cor}\n\n\\begin{proof}\n$\\operatorname{cat}(G) \\leq \\operatorname{TC}(G,H) \\leq \\operatorname{TC}(G) = \\operatorname{cat}(G)$.\n\\end{proof}\n\nSince any torus $T^n=(S^1)^n$ has a topological group structure, and $\\operatorname{TC}(T^n) = n+1$ is a well-known result, we can compute their relative topological complexity as a corollary to this:\n\n\\begin{cor}\\label{torus}\nLet $H$ be a non-empty subset of $T^n$. Then $\\operatorname{TC}(T^n,H) = n+1$. In particular, $\\operatorname{TC}(T^n, T^m)=n+1$ when $n\\geq m$.\n\\end{cor}\n
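\nTo make the count in Corollary \\ref{torus} concrete, consider the following Python sketch of a motion planner on $T^n$ with $n+1$ rules, where points are recorded as vectors of angles. It uses the standard device of grouping pairs by the number of antipodal coordinates; the $j$th domain (pairs with exactly $j$ antipodal coordinates) is an ENR, and these domains partition $T^n\\times T^n$ rather than forming an open cover. The sketch is purely illustrative and not taken from any of the cited sources:\n\\begin{verbatim}\nimport numpy as np\n\ndef circle_path(a, b, t):\n    # Angle at time t of the path on S^1 from angle a to angle b:\n    # shortest arc, going counterclockwise by convention when antipodal.\n    d = (b - a) % (2 * np.pi)\n    if d > np.pi:\n        d -= 2 * np.pi\n    return a + t * d\n\ndef rule_index(x, y, eps=1e-9):\n    # Which of the n+1 rules applies: the number of antipodal coordinates.\n    d = np.abs((y - x + np.pi) % (2 * np.pi) - np.pi)  # angular distances\n    return int(np.sum(np.abs(d - np.pi) < eps))\n\ndef plan(x, y, t):\n    # Coordinatewise path in T^n from x to y, evaluated at time t.\n    return np.array([circle_path(a, b, t) for a, b in zip(x, y)])\n\\end{verbatim}\nWithin each domain the assignment of paths is continuous, since the set of antipodal coordinates is locally constant there.\n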
\nA standard result for both L-S cat and TC says that $\\operatorname{TC}(X)=1$ if and only if $X$ is contractible (similarly $\\operatorname{cat}(X)=1$ if and only if $X$ is contractible). We now establish a similar result for relative topological complexity of the pair $(X,Y)$.\n\n\\begin{prop}\\label{Xcontract}\n$\\operatorname{TC}(X,Y) = 1$ if and only if $X$ is contractible.\n\\end{prop}\n\n\\begin{proof}\nWe will prove both implications separately although the proofs are similar.\n\n\\begin{itemize}\n\\item [$\\implies$:] Suppose $\\operatorname{TC}(X,Y) = 1$. Then $1 \\leq \\operatorname{cat}(X) \\leq \\operatorname{TC}(X,Y) =1$, so $\\operatorname{cat}(X)=1$. But $\\operatorname{cat}(X)=1$ if and only if $X$ is contractible.\n\n\\item [$\\impliedby$:] Suppose $X$ is contractible. Then $\\operatorname{TC}(X)=1$. Then $1\\leq \\operatorname{TC}(X,Y) \\leq \\operatorname{TC}(X)=1$, so $\\operatorname{TC}(X,Y) =1$.\n\\end{itemize}\n\\end{proof}\n\nIt is also useful to think of what the contractibility of $Y$ can yield in terms of $\\operatorname{TC}(X,Y)$ results. This yields the following definition and theorem.\n\n\\begin{defn}\nWe say that a space $Y$ is \\textit{contractible in $X$} if the inclusion map $\\iota:Y\\to X$ is homotopic to a constant map.\n\\end{defn}\n\n\\begin{thm}\\label{Ycontract}\nIf $Y$ is contractible in $X$, then $\\operatorname{TC}(X,Y) = \\operatorname{cat}(X)$.\n\\end{thm}\n\n\\begin{proof}\nWe know $\\operatorname{cat}(X) \\leq \\operatorname{TC}(X,Y)$, so all that remains is to see that $\\operatorname{cat}(X) \\geq \\operatorname{TC}(X,Y)$ when $Y$ is contractible in $X$.\n\nSuppose $\\operatorname{cat}(X) = k$ and this is exhibited by an open cover $U_1, \\dots, U_k$ of $X\\times \\{x_0\\}$ with sections $s_i$ over each $U_i$. Let $p_X(U_i)$ be the projection of $U_i$ onto its $X$ component. Define $V_i = p_X(U_i)\\times Y$. Then, since $p_X(U_i)$ covers $X$, the collection of $V_i$ sets covers $X\\times Y$. Let $H:Y\\times I \\to X$ be the homotopy where $H(Y,0)=x_0$ and $H(Y,1) = \\iota(Y)$, and let $h(y)=H|_{\\{y\\}\\times I}$ be the path from $x_0$ to $\\iota(y)$ for $y\\in Y$. Define $s'_i(x,y) = s_i(x,x_0) \\cdot h(y) $. The pairs $(V_i,s'_i)$ form an open cover with sections for $\\operatorname{TC}(X,Y)$ with $k$ elements. Thus, $\\operatorname{cat}(X) \\geq \\operatorname{TC}(X,Y)$.\n\\end{proof}\n\n\\subsection{Examples Involving Spheres}\nSince $\\pi_m(S^n)=0$ for $m<n$, any embedded copy of $S^m$ is contractible in $S^n$. Theorem \\ref{Ycontract} thus immediately yields the following.\n\n\\begin{cor}\\label{spherepair}\nIf $n>m>0$, then $\\operatorname{TC}(S^n,S^m) =\\operatorname{cat}(S^n)=2$.\n\\end{cor}\n\n\\begin{remark}\nIt is beneficial to see an explicit motion planning algorithm exhibiting the fact that $\\operatorname{TC}(S^n,S^m)=2$. We provide this here as an example of the construction used in Theorem \\ref{Ycontract}.\n\nFirst we construct the open sets needed to see that $\\operatorname{cat}(S^n)=2$. Take the distinguished point of $S^n$ to be $e_1 = (1,0,\\dots , 0)$, let its antipode be $e_2 = (-1,0, \\dots, 0)$, and let $0<\\varepsilon <1$. Take $\\pi_1:S^n \\to \\mathbb{R}$ to be projection onto the first component. We define the open cover of $S^n \\times \\{e_1\\}$ by $U_1 = \\pi_1^{-1}((-\\varepsilon,1]) \\times \\{e_1\\} $ and $U_2 = \\pi_1^{-1}([-1,\\varepsilon)) \\times \\{e_1\\}$.\n\nFor $i=1,2$, let $f_i:U_i \\hookrightarrow S^n$ be the natural inclusion map. 
We can easily define homotopies $F_i:U_i \\times I \\to S^n$ such that $F_i((x,e_1),0)=f_i(x,e_1)$ and $F_i((x,e_1),1) = e_i$. Fix a path $\\sigma:I \\to S^n$ with $\\sigma(0) = e_2$ and $\\sigma(1)=e_1$. Then, we can define sections over each $U_i$ by $s_1(x,e_1) = F_1((x,e_1),-)$ and $s_2(x,e_1)=F_2((x,e_1),-) \\cdot \\sigma$.\n\nTo incorporate $S^m$, we proceed exactly as we did in the proof of Theorem \\ref{Ycontract}. Let $\\iota:S^m \\to S^n$ be the inclusion map, and WLOG choose $h:S^m \\times I \\to S^n$ to be a homotopy where $h(S^m,0) = e_1$ and $h(S^m,1) = \\iota(S^m)$. Then for each $p \\in S^m$, take $h(p) = h|_{\\{p\\}\\times I}$ to be the path from $e_1$ to $\\iota(p)$. Take $V_1 = \\pi_1^{-1}((-\\varepsilon,1]) \\times S^m$ and $V_2 = \\pi_1^{-1}([-1,\\varepsilon)) \\times S^m$ mimicking $U_1$ and $U_2$ above so that $U_i \\subseteq V_i$. Then, define $s'_i(x,p) = s_i(x,e_1) \\cdot h(p)$ for each $i$. This exhibits the rules for $\\operatorname{TC}(S^n,S^m)$ explicitly.\n\\end{remark}\n
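\nThese two rules are easy to realize numerically. In the following Python sketch (illustrative only), the homotopies $F_i$ and the paths $\\sigma$ and $h(p)$ are all realized by great-circle arcs, and $\\iota$ is taken to be the equatorial embedding $p \\mapsto (0,p,0,\\dots,0)$; the planner returns the point of the concatenated path from $x$ to $\\iota(p)$ at time $t$:\n\\begin{verbatim}\nimport numpy as np\n\ndef slerp(a, b, t):\n    # Great-circle arc from a to b on the unit sphere (requires a != -b).\n    ang = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))\n    if ang < 1e-12:\n        return a\n    return (np.sin((1 - t) * ang) * a + np.sin(t * ang) * b) \/ np.sin(ang)\n\ndef concat(paths, t):\n    # Evaluate the concatenation of a list of paths at time t in [0, 1].\n    k = min(int(t * len(paths)), len(paths) - 1)\n    return paths[k](t * len(paths) - k)\n\ndef planner(x, p, t, n=4, m=2, eps=0.5):\n    # Path from x in S^n to iota(p), p in S^m, following rules s_1', s_2'.\n    e1 = np.eye(n + 1)[0]\n    e2, mid = -e1, np.eye(n + 1)[1]\n    iota_p = np.concatenate(([0.0], p, np.zeros(n - m - 1)))\n    h = lambda s: slerp(e1, iota_p, s)            # the path h(p)\n    if x[0] > -eps:                               # rule s_1' on V_1\n        return concat([lambda s: slerp(x, e1, s), h], t)\n    # rule s_2' on V_2: contract to e2, cross to e1 via a fixed\n    # waypoint (the path sigma), then follow h(p)\n    return concat([lambda s: slerp(x, e2, s),\n                   lambda s: slerp(e2, mid, s),\n                   lambda s: slerp(mid, e1, s), h], t)\n\\end{verbatim}\nNote that $h(p)$ is continuous in $p$ because $e_1$ and $\\iota(p)$ are always orthogonal, so the arc between them is never degenerate.\n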
\nNote that this differs significantly from the $\\operatorname{TC}(S^n)$ case where the parity of the sphere's dimension determines the value.\n\nWe finish this section by putting all of the tools we developed to use on tackling pairs of bouquets of spheres.\n\n\\begin{prop}\\label{wedge}\nSuppose $(a_i)_{i=1}^n$ is a sequence of positive integers. Then,\n\\begin{enumerate}\n\\item $\\operatorname{TC}\\bigg( \\bigvee\\limits_{i=1}^n S^{a_i}, * \\bigg) = 2$; and\n\\item For $1 < m < n$, $\\operatorname{TC}\\bigg( \\bigvee\\limits_{i=1}^n S^{a_i}, \\bigvee\\limits_{j=1}^m S^{a_j} \\bigg) = 3$.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nFor (1), without loss of generality, suppose $\\iota(*) = x_0$ where $x_0$ is the wedge point of $\\bigvee^n_{i=1} S^{a_i}$. Thus, by Theorem \\ref{Ycontract}, $\\operatorname{TC}\\big( \\bigvee^n_{i=1} S^{a_i},*\\big) = \\operatorname{TC}\\big( \\bigvee^n_{i=1} S^{a_i}, x_0 \\big) = \\operatorname{cat}\\big( \\bigvee^n_{i=1} S^{a_i} \\big) = 2$.\n\nFor (2), let $x_i$ denote the point in $S^{a_i}$ antipodal to $x_0$. Let $C_a = \\bigvee^n_{i=1} S^{a_i} - \\{ x_i \\}_{i=1}^n$ and let $C_b=\\bigvee^m_{j=1} S^{a_j}-\\{x_j\\}_{j=1}^m$. Notice that $C_a$ is contractible and $C_b \\subseteq C_a$, so there exists a homotopy $h:C_a\\times I \\to C_a$ such that $h(x,-):I \\to C_a$ is a path from $x$ to $x_0$ for each $x \\in C_a$. This homotopy also assigns paths from points in $C_b$ to $x_0$. Also, for each $1 \\leq i \\leq n$, take $D_i$ to denote a contractible neighborhood of $x_i$ such that $x_0 \\notin D_i$ and with contraction $k_i:D_i \\times I \\to D_i$ such that $k_i(x,0)=x$ and $k_i(x,1) = x_i$. Finally, fix paths $\\sigma_i:I \\to \\bigvee_{i=1}^n S^{a_i}$ where $\\sigma_i(0) = x_i$ and $\\sigma_i(1)=x_0$ for each $1 \\leq i \\leq n$.\n\nWe can now construct a motion planning algorithm on $\\bigvee^n_{i=1}S^{a_i} \\times \\bigvee^m_{j=1}S^{a_j}$. Let $\\overline{\\sigma}$ denote the path $\\sigma$ traversed backwards. Define $U_1 = C_a \\times C_b$, $U_2 = \\bigcup_{i=1}^n D_i \\times C_b \\, \\cup \\, \\bigcup_{j=1}^m C_a \\times D_j$, and $U_3=\\bigcup_{(i,j) \\in [n]\\times [m]} D_i \\times D_j$. Let $X = \\bigvee_{i=1}^n S^{a_i} \\times \\bigvee_{j=1}^m S^{a_j}$ and let $P_X \\to X$ denote the relative topological complexity fibration. Define $s_1:U_1 \\to P_X$ by $s_1(x,y)=h(x,-)\\cdot \\overline{h}(y,-)$. Each of $U_2$ and $U_3$ is a topologically disjoint union of open sets in $X$. Then, we need only define sections over each set in the union and appropriately combine them for sections over $U_2$ and $U_3$. We break these into the following three cases:\n\n\\begin{itemize}\n\\item For $D_i \\times C_b$, use $s(x,y) = k_i(x,-)\\cdot \\sigma_i \\cdot \\overline{h}(y,-)$.\n\\item For $C_a \\times D_j$, use $s'(x,y) = h(x,-)\\cdot \\overline{\\sigma}_j \\cdot \\overline{k}_j(y,-)$.\n\\item For $D_i \\times D_j$, use $s''(x,y) = k_i(x,-) \\cdot \\sigma_i \\cdot \\overline{\\sigma}_j \\cdot \\overline{k}_j(y,-)$.\n\\end{itemize}\n\nFor the lower bound in part (2), we must locate two non-zero cohomology elements in $H^*\\big(\\bigvee_{i=1}^n S^{a_i} \\big)$. Let $\\pi_k:\\bigvee_{i=1}^n S^{a_i} \\to S^{a_k}$ denote the map sending the index $k$ sphere to $S^{a_k}$ and all other spheres to $x_0$. For $0 < k \\leq n$, let $u_k \\in H^{a_k}\\big(\\bigvee_{i=1}^n S^{a_i}\\big)$ denote the pullback under $\\pi_k$ of a generator of $\\widetilde{H}^*(S^{a_k})$, and write $v_1 = \\iota^*(u_1)$ for the corresponding generator on the subwedge. Since $n > m$, we have $\\iota^*(u_n)=0$, so both $u_n \\otimes 1$ and $u_1 \\otimes 1 - 1 \\otimes v_1$ are zero-divisors. As $u_nu_1 = 0$ in the wedge, we compute $\\Delta^*\\big( (u_n \\otimes 1) \\otimes (u_1 \\otimes 1 - 1 \\otimes v_1) \\big) = -u_n \\otimes v_1 \\neq 0$. By Theorem \\ref{cohomlb}, this gives $\\operatorname{TC}\\big( \\bigvee_{i=1}^n S^{a_i}, \\bigvee_{j=1}^m S^{a_j} \\big) > 2$, and combined with the three-rule motion planner above, the result follows.\n\\end{proof}\n\n\\section{Pairs of Real Projective Spaces}\n\nIn this section we compute the relative topological complexity for pairs of real projective spaces by connecting it to the existence of certain axial maps, following the approach of \\cite{farberRP}. Recall that a map $g:\\mathbb{R}P^n \\times \\mathbb{R}P^m \\to \\mathbb{R}P^k$ is called \\textit{axial of type $(n,m,k)$} if its restrictions $g|_{\\mathbb{R}P^n \\times *}$ and $g|_{* \\times \\mathbb{R}P^m}$ are not null-homotopic. Our main result is the following theorem, whose proof occupies the remainder of this section.\n\n\\begin{thm}\\label{axial}\nLet $1 < m < n$. Then $\\operatorname{TC}(\\mathbb{R}P^n,\\mathbb{R}P^m)$ equals the smallest $k$ such that there exists an axial map of type $(n,m,k-1)$.\n\\end{thm}\n\nLet $\\xi_n$ denote the canonical line bundle over $\\mathbb{R}P^n$, let $\\xi_n\\boxtimes\\xi_m$ denote the exterior tensor product of $\\xi_n$ and $\\xi_m$ over $\\mathbb{R}P^n \\times \\mathbb{R}P^m$, and let $k(\\xi_n\\boxtimes\\xi_m)$ denote the Whitney sum of $k$ copies of $\\xi_n\\boxtimes\\xi_m$. We write $q$ for the double cover of $\\mathbb{R}P^n \\times \\mathbb{R}P^m$ associated to $\\xi_n\\boxtimes\\xi_m$, namely $q:\\frac{S^n\\times S^m}{(x,y)\\raise.17ex\\hbox{$\\scriptstyle\\sim$}(-x,-y)} \\to \\mathbb{R}P^n \\times \\mathbb{R}P^m$.\n\n\\begin{thm}\\label{TC>genus}\nIf $m<n$, then $\\operatorname{TC}(\\mathbb{R}P^n,\\mathbb{R}P^m) \\geq \\operatorname{genus}(q)$.\n\\end{thm}\n\n\\begin{proof}\nSuppose $U \\subseteq \\mathbb{R}P^n \\times \\mathbb{R}P^m$ admits a section $s$ of the relative topological complexity fibration. For $(L,L') \\in U$, moving $L$ to $L'$ along the path of lines $s(L,L')$ transports each unit vector $u \\in L$ to a unit vector $u' \\in L'$, and the assignment $(L,L') \\mapsto [u,u']$ is well defined and continuous up to the simultaneous sign change $(u,u')\\raise.17ex\\hbox{$\\scriptstyle\\sim$}(-u,-u')$. This is precisely a section of $q$ over $U$, so any open cover witnessing $\\operatorname{TC}(\\mathbb{R}P^n,\\mathbb{R}P^m)$ also witnesses $\\operatorname{genus}(q)$.\n\\end{proof}\n\n\\begin{cor}\\label{TC>genuscor}\nIf $m<n$ and $\\operatorname{TC}(\\mathbb{R}P^n,\\mathbb{R}P^m)=k$, then there exists a nowhere-zero section of $k(\\xi_n\\boxtimes\\xi_m)$.\n\\end{cor}\n\n\\begin{proof}\nFor a double cover, the genus is at most $k$ if and only if the $k$-fold Whitney sum of the associated line bundle admits a nowhere-zero section \\cite{schwarz}. By Theorem \\ref{TC>genus}, this yields the result.\n\n\\end{proof}\n\nNext we need to connect nowhere-zero sections of $k(\\xi_n\\boxtimes \\xi_m)$ to non-singular maps as defined in \\cite{farberRP}. We reproduce the definition here.\n\n\\begin{defn}\nA map $f:\\mathbb{R}^n \\times \\mathbb{R}^m \\to \\mathbb{R}^k$ is \\textit{non-singular} if for any $\\lambda,\\mu \\in \\mathbb{R}$, and any $(x,y) \\in \\mathbb{R}^n \\times \\mathbb{R}^m$, we have that:\n\\begin{itemize}\n\\item $f(\\lambda x, \\mu y) = \\lambda \\mu f(x,y)$, and\n\\item $f(x,y)=0 \\implies x=0 \\text{ or } y=0$.\n\\end{itemize}\n\\end{defn}\n\nWe connect the two ideas using the lemma below.\n\n\\begin{lem}\\label{sect-nonsing}\nIf $n>m>1$ and there exists a nowhere-zero section of $k(\\xi_n\\boxtimes\\xi_m)$, then there exists a non-singular map $\\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1} \\to \\mathbb{R}^k$.\n\\end{lem}\n\n\\begin{proof}\nSuppose $s$ is a nowhere-zero section of $k(\\xi_n \\boxtimes \\xi_m)$. Then, consider the following commutative diagram:\n\n{\\centering\n\t\\begin{tikzcd}\n\t\tS^n \\times S^m \\times \\mathbb{R}^k \\arrow[r,\"q_1\"] \\arrow[d,\"p_1\"]& \\frac{S^n\\times S^m\\times \\mathbb{R}^k}{(x,y,t)\\raise.17ex\\hbox{$\\scriptstyle\\sim$} (-x,y,-t)\\raise.17ex\\hbox{$\\scriptstyle\\sim$} (x,-y,-t)} \\arrow[d,\"k(\\xi_n\\boxtimes \\xi_m)\"] \\\\\n\t\tS^n\\times S^m \\arrow[u,\"s_2\",bend left,dashed] \\arrow[r,\"q_2\"] \\arrow[ur,\"s_1\",dashed] & \\frac{S^n \\times S^m}{(x,y)\\raise.17ex\\hbox{$\\scriptstyle\\sim$}(-x,y)\\raise.17ex\\hbox{$\\scriptstyle\\sim$}(x,-y)}\\arrow[u,\"s\",bend left,dashed] \n\\end{tikzcd}\n\\par}\n\nLet each $q_i$ be the natural quotient map, and let $p_1$ be the projection $(x,y,t)\\mapsto (x,y)$.\n\nDefine $s_1=s\\circ q_2$. Notice that $s_1(x,y)=s_1(-x,y)=s_1(x,-y)$.\n\nNow, $q_1$ defines a covering space, and since $S^n\\times S^m$ is simply-connected, we can lift $s_1$ to some map $S^n\\times S^m \\to S^n\\times S^m \\times \\mathbb{R}^k$. This lift is not unique, but we can define $s_2$ as the unique lift of $s_1$ which is also a section of $p_1$.\n\nLet $f$ be the $\\mathbb{R}^k$ component of $s_2$, so that $s_2(x,y)=(x,y,f(x,y))$. Notice that, in order for $s_2$ to be a lift of $s_1$, it must be that $q_1(s_2(x,y))=q_1(x,y,f(x,y))=s_1(x,y)=[x,y,f(x,y)]=[-x,y,-f(x,y)]=q_1(s_2(-x,y))$. 
Thus, $f(-x,y)=-f(x,y)=f(x,-y)$ for any $(x,y)\\in S^n\\times S^m$.\n\nWe can then use $f$ to define a non-singular map $g:\\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1}\\to \\mathbb{R}^k$ by $$g(x,y)=\\left\\{ \\begin{array}{cl}\n\t|x||y|f(\\tfrac{x}{|x|},\\tfrac{y}{|y|}) &\\text{if }x,y \\neq 0\\\\\n\t0 &\\text{if }x=0 \\text{ or } y=0\n\t\\end{array}\n\t\\right.$$\n\\end{proof}\n\nAn easy corollary of Lemma \\ref{sect-nonsing} and Corollary \\ref{TC>genuscor} is the following.\n\n\\begin{cor}\\label{sect-nonsingcor}\nIf $\\operatorname{TC}(\\mathbb{R}P^n,\\mathbb{R}P^m)=k$ and $n>m>1$, then there exists a non-singular map $\\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1}\\to\\mathbb{R}^k$.\n\\end{cor}\n\nThe last piece of this direction of the proof is to connect this result to axial maps.\n\n\\begin{lem}\\label{nonsing-axial}\nAssume $1 < m < n \\leq k-1$. There exists a bijection between non-singular maps $\\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1} \\to \\mathbb{R}^{k}$ (identified under multiplication by a non-zero scalar) and axial maps of type $(n,m,k-1)$.\n\\end{lem}\n\n\\begin{proof}\nSuppose $f: \\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1} \\to \\mathbb{R}^k$ is non-singular. Then, we can descend to a map $g: \\mathbb{R}P^n \\times \\mathbb{R}P^m \\to \\mathbb{R}P^{k-1}$ by following the quotient maps. This map is defined by sending an element $(u,v) \\in S^n \\times S^m$ to the line containing $f(u,v)$ in $\\mathbb{R}P^{k-1}$. To see that this is axial, consider $g|_{\\mathbb{R}P^n \\times *}$. For a fixed $v \\in S^m$, $g|_{\\mathbb{R}P^n \\times *}$ lifts to a function $\\tilde{g}:S^n \\to S^{k-1}$ such that $u \\mapsto f(u,v)$. Since $f(-u,v) = -f(u,v)$ by the non-singularity of $f$, we get that $g|_{\\mathbb{R}P^n \\times *}$ is not null-homotopic. A similar argument shows that $g|_{*\\times \\mathbb{R}P^m}$ is also not null-homotopic.\n\nFor the other direction, suppose $g: \\mathbb{R}P^n \\times \\mathbb{R}P^m \\to \\mathbb{R}P^{k-1}$ is an axial map of type $(n,m,k-1)$. Passing to the universal covers, we have a continuous map $\\tilde{g}:S^n \\times S^m \\to S^{k-1}$. As above, $g$ being an axial map gives us that $\\tilde{g}(-u,v) = -\\tilde{g}(u,v) = \\tilde{g}(u,-v)$ for any $(u,v) \\in S^n \\times S^m$. Thus, we can extend $\\tilde{g}$ to a non-singular map $f:\\mathbb{R}^{n+1} \\times \\mathbb{R}^{m+1} \\to \\mathbb{R}^k$ defined by $$f(u,v) = |u| |v| \\tilde{g}\\bigg( \\frac{u}{|u|} , \\frac{v}{|v|}\\bigg)$$\n\nThis yields the bijection.\n\\end{proof}\n\nOne benefit that this gives us is a method for choosing a non-singular map with some specific benefits. We see this in the following corollary.\n\n\\begin{cor}\\label{firstcoord}\nLet $1 < m < n < k-1$ be integers such that there exists a non-singular map $\\mathbb{R}^{n+1} \\times \\mathbb{R}^{m+1} \\to \\mathbb{R}^k$. Then, there exists a non-singular map $f:\\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1} \\to \\mathbb{R}^k$ such that for any $0 \\neq u \\in \\mathbb{R}^{m+1}$, the first coordinate of $f((u,\\overline{0}),u) \\in \\mathbb{R}^k$ is positive.\n\\end{cor}\n\n\\begin{proof}\nFor the given non-singular map that is assumed to exist, let $g: \\mathbb{R}P^n \\times \\mathbb{R}P^m \\to \\mathbb{R}P^{k-1}$ be the corresponding axial map from Lemma \\ref{nonsing-axial}. By the axial map property, restricting to the diagonal $\\mathbb{R}P^m \\subseteq \\mathbb{R}P^n \\times \\mathbb{R}P^m$ yields a null-homotopic function. 
To see this quickly, note that $H^*(\\mathbb{R}P^{k-1}) \\overset{g^*}{\\to} H^*(\\mathbb{R}P^n) \\otimes H^*(\\mathbb{R}P^m) \\overset{\\iota^*\\otimes 1}{\\to} H^*(\\mathbb{R}P^m) \\otimes H^*(\\mathbb{R}P^m) \\overset{\\Delta^*}{\\to} H^*(\\mathbb{R}P^m)$ sends the generator $x \\in H^1(\\mathbb{R}P^{k-1})$ to 0. Thus, there is some $g' \\simeq g$ such that $g': \\mathbb{R}P^n \\times \\mathbb{R}P^m \\to \\mathbb{R}P^{k-1}$ is constant along the diagonal.\n\nBy Lemma \\ref{nonsing-axial}, $g'$ corresponds to some non-singular function $f: \\mathbb{R}^{n+1} \\times \\mathbb{R}^{m+1} \\to \\mathbb{R}^k$. By construction, $f((u,\\overline{0}),u)$ lies on a single line through the origin as $u$ varies. Via some rotation, we may assume that the first coordinate of $f((u,\\overline{0}),u)$ is positive, as desired.\n\\end{proof}\n\nFinally, we require a way to point from non-singular maps to bounds on the relative topological complexity of the pair of real projective spaces. For this, we again follow \\cite{farberRP} with some modifications, using Corollary \\ref{firstcoord} in a critical way.\n\n\\begin{lem}\\label{nonsing>TC}\nIf there exists a non-singular map $\\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1}\\to \\mathbb{R}^k$, then $\\operatorname{TC}(\\mathbb{R}P^n,\\mathbb{R}P^m)\\leq k$.\n\\end{lem}\n\n\\begin{proof}\nGiven a non-singular map $\\rho:\\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1}\\to \\mathbb{R}^k$, we can decompose $\\rho$ into maps $\\rho = (\\rho_1, \\dots , \\rho_k)$ with $\\rho_i:\\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1}\\to \\mathbb{R}$ for each $i$. We can also choose these so that $\\rho_1((u,\\overline{0}),u)>0$ for any $0 \\neq u \\in \\mathbb{R}^{m+1}$, by Corollary \\ref{firstcoord}.\n\nOur goal is to create an open cover of $\\mathbb{R}P^n \\times \\mathbb{R}P^m$ using the $k$ maps in the decomposition of $\\rho$. We will consider $\\mathbb{R}P^n$ as the space of lines through the origin in $\\mathbb{R}^{n+1}$, where $\\mathbb{R}P^m$ sits as a natural subspace of $\\mathbb{R}P^n$.\n\nWe construct the sets as follows. For $2 \\leq i \\leq k$, define\n$$U_i = \\{(L,L')\\in \\mathbb{R}P^n \\times \\mathbb{R}P^m \\, | \\, L \\neq L' \\text{ and } \\rho_i(u,u') \\neq 0 \\text{ for some } u\\in L, u'\\in L'\\}.$$\n\nFor $i=1$, we have to do something a little different to guarantee we have pairs of lines $(L,L)$ covered. Let\n\n$$U_1 = \\{(L,L') \\in \\mathbb{R}P^n \\times \\mathbb{R}P^m \\, | \\, \\rho_1(u,u')\\neq 0 \\text{ for some }u\\in L , u' \\in L' \\}.$$\n\nNote that since $\\rho$ is non-singular, each pair $(L,L') \\in \\mathbb{R}P^n \\times \\mathbb{R}P^m$ must fall into at least one of the $U_i$ sets. Thus, $\\{U_i\\}_{i=1}^k$ forms an open cover of $\\mathbb{R}P^n \\times \\mathbb{R}P^m$.\n\nNext, we need sections of the relative topological complexity fibration over each $U_i$. If $L\\neq L'$, then there exists a plane in $\\mathbb{R}^{n+1}$ spanned by the two lines. Once we orient this plane, we can move one line to the other by rotating in the plane along the direction of positive orientation.\n\nIf $(L,L') \\in U_i$, then we can use $\\rho_i$ to orient the plane. Suppose $u \\in L$ and $u' \\in L'$ are two unit vectors such that $\\rho_i(u,u')>0$. Then, define the positive orientation of the plane spanned by $L$ and $L'$ to be the direction given by moving $u$ to $u'$ through the angle smaller than $\\pi$. We can then define $s_i(L,L')$ to be the path moving $L$ in the positive orientation of this plane given by $\\rho_i$.\n\nWhen $L=L'$, which only occurs when $L \\in \\mathbb{R}P^m$, we can use the constant path. Since this only occurs when $i=1$, we only need to make this distinction for $s_1$. Using this, it is clear that each $s_i:U_i \\to P_{\\mathbb{R}P^n \\times \\mathbb{R}P^m}$ is a continuous section of the relative topological complexity fibration.\n\n\\end{proof}\n
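\nFor a concrete feel for these rules, the following Python sketch implements $s_i$ for the non-singular map $\\rho(u,v) = u\\bar{v}$ given by quaternion multiplication composed with conjugation: $\\rho:\\mathbb{R}^4 \\times \\mathbb{R}^4 \\to \\mathbb{R}^4$ is bilinear with $|\\rho(u,v)|=|u||v|$, and $\\rho(u,u)=(|u|^2,0,0,0)$ has positive first coordinate, so it plays the role of the map produced by Corollary \\ref{firstcoord}. The example has $n=m=3$, ignoring the standing assumption $m<n$; it is purely illustrative:\n\\begin{verbatim}\nimport numpy as np\n\ndef quat_mul(u, v):\n    a, b, c, d = u\n    w, x, y, z = v\n    return np.array([a*w - b*x - c*y - d*z,\n                     a*x + b*w + c*z - d*y,\n                     a*y - b*z + c*w + d*x,\n                     a*z + b*y - c*x + d*w])\n\ndef rho(u, v):\n    # rho(u, v) = u * conj(v): bilinear, |rho(u,v)| = |u||v| (non-singular),\n    # and rho(u, u) = (|u|^2, 0, 0, 0) has positive first coordinate.\n    return quat_mul(u, np.array([v[0], -v[1], -v[2], -v[3]]))\n\ndef rule(i, L, Lp, t):\n    # Section s_i: rotate the line L to L' inside span(L, L'),\n    # in the direction oriented by the sign of rho_i.\n    u = L \/ np.linalg.norm(L)\n    up = Lp \/ np.linalg.norm(Lp)\n    if rho(u, up)[i] < 0:\n        u = -u                  # choose representatives with rho_i(u,u') > 0\n    d = float(np.dot(u, up))\n    if abs(d) > 1 - 1e-12:      # L = L' (or a degenerate pair outside U_i):\n        return up               # use the constant path, as in the i = 1 case\n    w = (up - d * u) \/ np.linalg.norm(up - d * u)\n    ang = np.arccos(np.clip(d, -1.0, 1.0))\n    return np.cos(t * ang) * u + np.sin(t * ang) * w\n\\end{verbatim}\nThe returned vector spans the moving line at time $t$; by construction it rotates $L$ to $L'$ through the angle smaller than $\\pi$, in the direction determined by $\\rho_i$.\n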
\nFinally, we have the tools needed to prove our main result.\n\n\\begin{proof}[Proof of Theorem \\ref{axial}]\nBy Corollary \\ref{TC>genuscor} and Lemma \\ref{nonsing>TC}, we have that when $1 < m < n$, $\\operatorname{TC}(\\mathbb{R}P^n, \\mathbb{R}P^m) = \\min \\{k \\, | \\, \\text{there exists a non-singular map } f: \\mathbb{R}^{n+1}\\times \\mathbb{R}^{m+1} \\to \\mathbb{R}^k\\}$. Then, by Lemma \\ref{nonsing-axial}, we can replace the non-singular map with an axial map of type $(n,m,k-1)$.\n\nTo complete this, we need only verify that $n+1 \\leq \\operatorname{TC}(\\mathbb{R}P^n, \\mathbb{R}P^m)$ to satisfy the conditions of Lemma \\ref{nonsing-axial}. But it is well-known that $\\operatorname{cat}(\\mathbb{R}P^n) = n+1$, and $\\operatorname{TC}(\\mathbb{R}P^n,\\mathbb{R}P^m) \\geq \\operatorname{cat}(\\mathbb{R}P^n)$ by Proposition \\ref{cat<TC}, so the condition holds.\n\n\\end{proof}\n\n\\section{Spatial Polygon Spaces}\n\nConfiguration spaces of polygons in $\\mathbb{R}^3$, called the spatial polygon spaces, have been an interesting example in algebraic geometry for some time. They come endowed with a symplectic structure which will prove useful to us later. This structure has been studied by Klyachko in \\cite{klyachko} as well as Kapovich and Millson in \\cite{kapomill}. We first encountered the spatial polygon spaces in the work of Jean-Claude Hausmann and Allen Knutson in \\cite{hausknut}, but the topological complexity of spatial polygon spaces was not studied explicitly until the work of Don Davis in \\cite{davis}.\n\nThese spatial polygon spaces are determined by the lengths of the sides of the polygons involved. This motivates the following definition.\n\n\\begin{defn}\nLet $\\ell = (\\ell_1, \\dots , \\ell_n) \\in \\mathbb{R}_+^n$ be a length vector of size $n$. The \\textit{spatial polygon space of $\\ell$} is defined as:\n$$\\mathcal{N}(\\ell) = \\{ (z_1, \\dots , z_n) \\in (S^2)^n \\, | \\, \\Sigma \\ell_i z_i = \\vec{0} \\} \/ SO(3)$$\n\\end{defn}\n\nWe can think of $\\mathcal{N}(\\ell)$ as the set of ways to draw a polygon in $\\mathbb{R}^3$ with the given side lengths, allowing for possible self-intersections. A natural question to ask is which sides of the polygon we are capable of making collinear, or parallel. We refer to edges by the index of the corresponding length in $\\ell$, and doing so leads to a natural and topologically valuable definition. Take $[n] = \\{1, \\dots, n\\}$.\n\n\\begin{defn}\nA subset $S \\subseteq [n]$ is \\textit{short} (with respect to $\\ell$) if $\\sum\\limits_{i \\in S} \\ell_i < \\sum\\limits_{j \\notin S} \\ell_j$. Correspondingly, we say a subset $L \\subseteq [n]$ is \\textit{long} (with respect to $\\ell$) if $\\sum\\limits_{i \\in L} \\ell_i > \\sum\\limits_{j \\notin L} \\ell_j$.\n\\end{defn}\n\nNote that not every subset has to be short or long. As an example, consider $\\ell = (1,1,2,2)$. Here, the subset $\\{1,3\\}$ is neither short nor long as $\\ell_1 + \\ell_3 = 3 = \\ell_2 + \\ell_4$. However, when we have subsets like this, we can arrange the polygon into a configuration where all edges are collinear. 
These collinear configurations create singularities in $\\mathcal{N}(\\ell)$, which can cause $\\mathcal{N}(\\ell)$ to fail to be a manifold. To make sure we get a manifold, we will need to impose a few reasonable conditions on our length vectors.\n\n\\begin{defn}\nLet $\\ell$ be a length vector of size $n$.\n\\begin{enumerate}\n\\item We say $\\ell$ is \\textit{generic} if for any $S \\subseteq [n]=\\{ 1, \\dots, n\\}$ we have $\\sum\\limits_{i\\in S} \\ell_i \\neq \\sum\\limits_{j \\notin S} \\ell_j .$\n\n\\item We say $\\ell$ is \\textit{non-degenerate} if for any $i\\in [n]$ we have $\\ell_i < \\sum\\limits_{j\\neq i} \\ell_j$.\n\n\\item We say $\\ell$ is \\textit{ordered} if $\\ell_1 \\leq \\ell_2 \\leq \\dots \\leq \\ell_{n}$.\n\\end{enumerate}\n\n\\end{defn}\n\nSo long as $\\ell$ is generic and non-degenerate, we can guarantee that $\\mathcal{N}(\\ell)$ is a manifold. The topology of $\\mathcal{N}(\\ell)$ also respects permuting the order of the edges. As stated precisely in \\cite[1.4]{hausgeom}, for any $\\sigma \\in \\Sigma_{n}$, let $\\ell_{\\sigma} = (\\ell_{\\sigma(1)}, \\dots, \\ell_{\\sigma(n)})$; then $\\mathcal{N}(\\ell)$ is diffeomorphic to $\\mathcal{N}(\\ell_{\\sigma})$. That is, any length vector can be associated to an ordered length vector which generates the same topology. As such, we can safely assume that our length vectors are ordered. Finally, for a generic and non-degenerate length vector, every subset of $[n]$ is either short or long.\n\nIt will become necessary to have a way of sorting and categorizing the different short and long subsets of a length vector. We use the following notation for this purpose.\n\n\\begin{defn}\n$$\\mathcal{S}_i(\\ell) = \\{ S \\subseteq [n] \\, | \\, i \\in S, \\, S\\text{ is short}\\} \\qquad \\qquad \\mathcal{S}(\\ell) = \\bigcup_{i=1}^n\\mathcal{S}_i(\\ell)$$\n$$\\mathcal{L}_i(\\ell) = \\{ L \\subseteq [n] \\, | \\, i \\in L, \\, L\\text{ is long} \\} \\qquad \\qquad \\mathcal{L}(\\ell) = \\bigcup_{i=1}^n\\mathcal{L}_i(\\ell)$$\n\\end{defn}\n\nIn \\cite{hausknut}, Hausmann and Knutson give the following description for the cohomology ring of $\\mathcal{N}(\\ell)$ which uses short and long subsets in an essential way.\n\n\\begin{thm}\\cite[Thm 6.4(2)]{hausknut}\\label{hauscohom}\nThe cohomology ring of $\\mathcal{N}(\\ell)$ is given as\n$$H^*(\\mathcal{N}(\\ell))=\\mathbb{Z}[R,V_1, \\dots , V_{n-1}]\/\\mathcal{I}$$\nwhere $R,V_i \\in H^2(\\mathcal{N}(\\ell))$, and $\\mathcal{I}$ is the ideal generated by three families of relations:\n\\begin{enumerate}\n\\item $V_i^2 + RV_i$ for $i \\in [n-1]$,\n\\item $\\prod\\limits_{i\\in L}V_i$ for $L \\in \\mathcal{L}_n(\\ell)$, and\n\\item $\\sum\\limits_{\\overset{S \\subset L}{S \\text{ short}}}(\\prod\\limits_{i\\in S}V_i)R^{|L-S|-1}$ for $L \\in \\mathcal{L}(\\ell)-\\mathcal{L}_n(\\ell)$.\n\\end{enumerate}\n\\end{thm}\n\nHausmann and Knutson derive this description for the cohomology ring by studying and utilizing the symplectic structure of $\\mathcal{N}(\\ell)$. They also describe a collection of natural $\\operatorname{SO}(2)$-bundles over $\\mathcal{N}(\\ell)$ whose Chern classes prove particularly useful to us.\n\n\\begin{defn}\\cite[\\S 7]{hausknut}\\label{bdledefn}\nLet $\\ell$ be a generic, non-degenerate length vector of size $n$. 
Define $$A_j(\\ell) = \\{ (z_1, \\dots , z_n) \\in (S^2)^n \\, | \\, \\Sigma \\ell_i z_i = \\vec{0} \\text{ and } z_j = (0,0,1) \\in S^2 \\}.$$\n\nLet $c_j(\\ell) = c_1(A_j(\\ell)) \\in H^2(\\mathcal{N}(\\ell))$ denote the Chern class of the bundle $A_j(\\ell) \\to \\mathcal{N}(\\ell)$.\n\\end{defn}\n\nHausmann and Knutson then provide a method for describing each $c_j(\\ell)$ using their description for $H^*(\\mathcal{N}(\\ell))$.\n\n\\begin{prop}\\cite[Prop 7.3]{hausknut} \\label{cherndesc}\nIn $H^2(\\mathcal{N}(\\ell))$, one has\n\\begin{itemize}\n\\item $c_j(\\ell) = R + 2V_j$ if $j < n$; and\n\\item $c_n(\\ell) = -R$.\n\\end{itemize}\n\\end{prop}\n\nHausmann and Knutson use these Chern classes to determine a very nice expression for the cohomology class for the symplectic form of $\\mathcal{N}(\\ell)$.\n\n\\begin{prop}[\\cite{hausknut}, Remark 7.5]\\label{haussymp}\nIf $\\ell \\in \\mathbb{Z}^n$, then the symplectic form $[\\omega] \\in H^2(\\mathcal{N}(\\ell))$ is given by $$[\\omega] = \\sum\\limits_{i=1}^n \\ell_i c_i(\\ell)$$\n\\end{prop}\n\nFinally, it is well-known that $\\mathcal{N}(\\ell)$ is a simply-connected manifold of dimension $2(n-3)$ when $\\ell$ is of size $n$ (see \\cite[\\S 1]{hausknut}, \\cite[Lemma 10.3.33]{hausbook}). Since it is a closed, symplectic, simply-connected manifold, we can compute $\\operatorname{TC}(\\mathcal{N}(\\ell))$ as a corollary of Theorem \\ref{sympTC} with $m=n$. This is computed directly using Theorem \\ref{hauscohom} in \\cite{davis}.\n\n\\begin{prop}\nFor $\\ell$ generic and non-degenerate of size $n$, $\\operatorname{TC}(\\mathcal{N}(\\ell)) = 2n-5$.\n\\end{prop}\n\n\\subsection{Edge-Identifying Submanifolds}\nThere is a natural way to form submanifolds within $\\mathcal{N}(\\ell)$ by restricting our attention to configurations where selected edges are aligned together. This space of configurations forms a submanifold of $\\mathcal{N}(\\ell)$ which is homeomorphic, in some cases, to $\\mathcal{N}(\\ell^\\P)$ for a different, but related, length vector $\\ell^\\P$.\n\n\\begin{defn}\\label{eilvdefn}\nLet $\\ell = (\\ell_1, \\dots , \\ell_n)$ be a length vector and let $\\P = (\\P_1, \\dots, \\P_m)$ be an ordered set partition of $[n]$ into $m$ parts. We say that an \\textit{edge-identified length vector} from $\\ell$ is a vector $\\ell^\\P = (\\ell^\\P_1, \\dots , \\ell^\\P_m)$ such that $$\\ell^\\P_k = \\sum\\limits_{i\\in \\P_k}\\ell_i.$$\n\\end{defn}\n\nAs this is a novel method of describing these spaces, we present some examples of edge-identified length vectors.\n\n\\begin{example}\\label{eilvexs}\nLet $\\ell = (1,1,2,3,5,7)$. Note that $\\ell$ is a generic, non-degenerate, ordered length vector. We give four examples of edge-identified length vectors of $\\ell$.\n\n\\begin{itemize}\n\\item $\\ell^{\\P'} = (1,2,3,6,7)$: Here, we have combined $\\ell_2$ and $\\ell_5$ into a single edge. Explicitly, the ordered set partition is $\\P' = (\\{1\\}, \\{3\\}, \\{4\\}, \\{2,5\\}, \\{6\\})$ giving us that $\\ell^{\\P'}_4 = \\ell_2 + \\ell_5$.\n\n\\item $\\ell^{\\P''}=(1,1,3,5,9)$: Here, we have combined $\\ell_3$ and $\\ell_6$ into a single edge. Explicitly, the ordered set partition is $\\P'' = (\\{1\\}, \\{2\\}, \\{4\\}, \\{5\\}, \\{3,6\\})$ giving us that $\\ell^{\\P''}_5 = \\ell_3 + \\ell_6$. Notice that we can identify other edges with the last edge $\\ell_6$ as we do in this example.\n\n\\item $\\ell^{\\P'''}=(4,7,8)$: Here, we have combined several of the edges together. Explicitly, the ordered set partition is $\\P'''=(\\{1,4\\}, \\{6\\}, \\{2,3,5\\})$ giving us that $\\ell^{\\P'''}_1 = \\ell_1 + \\ell_4$, and $\\ell^{\\P'''}_3 = \\ell_2 + \\ell_3 + \\ell_5$. Notice that it is possible, as in this example, to supplant the largest length by identifying other edges. We can always permute the ordered set partition in order to end up with an ordered edge-identified length vector if we desire this.\n\n\\item $\\ell^{\\P''''}=(1,1,7,10)$: Here, we have combined $\\ell_3$, $\\ell_4$, and $\\ell_5$ into a single large edge. In fact, $\\ell^{\\P''''}_4 = \\ell_3 + \\ell_4 + \\ell_5 > \\ell^{\\P''''}_1 + \\ell^{\\P''''}_2 + \\ell^{\\P''''}_3$, giving us a degenerate length vector. Thus, edge-identified length vectors need not preserve non-degeneracy in general.\n\\end{itemize}\n\\end{example}\n
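\nThese bookkeeping conditions are easy to check mechanically. The following Python sketch computes $\\ell^\\P$ from $\\ell$ and an ordered set partition $\\P$, and tests shortness, genericity, and non-degeneracy directly from the definitions (illustrative only; indices are 0-based here, whereas the text uses 1-based indices):\n\\begin{verbatim}\nfrom itertools import combinations\n\ndef edge_identify(ell, P):\n    # ell^P: sum the lengths in each part of the ordered set partition P;\n    # parts are given as lists of 0-based indices into ell.\n    return [sum(ell[i] for i in part) for part in P]\n\ndef is_short(ell, S):\n    # sum over S < sum over the complement  <=>  2*sum over S < perimeter\n    return 2 * sum(ell[i] for i in S) < sum(ell)\n\ndef is_generic(ell):\n    # No subset sums to exactly half the perimeter.\n    return all(2 * sum(ell[i] for i in S) != sum(ell)\n               for r in range(len(ell) + 1)\n               for S in combinations(range(len(ell)), r))\n\ndef is_nondegenerate(ell):\n    # Each length is less than the sum of all the others.\n    return all(2 * l < sum(ell) for l in ell)\n\n# Example: ell = (1,1,2,3,5,7) and P'''' = ({1},{2},{6},{3,4,5})\nell = [1, 1, 2, 3, 5, 7]\nP4 = [[0], [1], [5], [2, 3, 4]]\nprint(edge_identify(ell, P4), is_nondegenerate(edge_identify(ell, P4)))\n# -> [1, 1, 7, 10] False   (non-degeneracy is not preserved, as noted)\n\\end{verbatim}\n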
\n\nNotice that every edge-identified length vector preserves genericity, since each subset sum of $\\ell^\\P$ is a subset sum of $\\ell$ taken over a union of blocks, but non-degeneracy can fail, as we have just seen. If we assume $\\ell$ is non-degenerate, then we can preserve non-degeneracy by only identifying edges whose indices form short subsets.\n\nThe essential data in the above examples is which lengths of $\\ell$ are combined into each length of $\\ell^\\P$. We encode this in the function $\\phi:[n] \\to [m]$ given by $\\phi(i)=j \\iff i \\in \\P_j$, where $\\P_j$ is the $j$th block of $\\P$. This function controls the topology of $\\mathcal{N}(\\ell^\\P)$, and it also controls the inclusion map $\\mathcal{N}(\\ell^\\P) \\hookrightarrow \\mathcal{N}(\\ell)$. We see this in the following proposition.\n\n\\begin{prop}\\label{inducedcohom}\nLet $\\ell$ be a generic, non-degenerate length vector with $\\ell^\\P$ a non-degenerate edge-identified length vector of $\\ell$. Then the inclusion induced map $\\iota^*:H^*(\\mathcal{N}(\\ell)) \\to H^*(\\mathcal{N}(\\ell^\\P))$ acts on the Chern classes by $\\iota^*(c_j(\\ell)) = c_{\\phi(j)}(\\ell^\\P)$.\n\\end{prop}\n\n\\begin{proof}\nConsidering $\\iota: \\mathcal{N}(\\ell^\\P) \\to \\mathcal{N}(\\ell)$, we know by the naturality of Chern classes that $\\iota^*(c_j(\\ell)) = \\iota^*(c_1(A_j(\\ell))) = c_1(\\iota^*(A_j(\\ell)))$. Thus, it suffices to show that the pullback bundle satisfies $\\iota^*(A_j(\\ell)) = A_{\\phi(j)}(\\ell^\\P)$.\n\nWe can think of $A_j(\\ell)$ as the space of polygons in $\\mathbb{R}^3$ with the $j$th edge parallel to the $z$-axis. Since $\\mathcal{N}(\\ell^\\P)$ identifies $\\ell_j$ as part of $\\ell_{\\phi(j)}^\\P$, the pullback $\\iota^*(A_j(\\ell))$ has all of the edges contributing to $\\ell_{\\phi(j)}^\\P$ parallel to the $z$-axis. Thus, $\\iota^*(A_j(\\ell)) = A_{\\phi(j)}(\\ell^\\P)$, and so $\\iota^*(c_j(\\ell)) = c_1(A_{\\phi(j)}(\\ell^\\P)) = c_{\\phi(j)}(\\ell^\\P)$.\n\\end{proof}
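\n\nFor a concrete instance of Proposition \\ref{inducedcohom} (our own illustration), take $\\ell = (1,1,2,3,5,7)$ and the partition $\\P'$ of Example \\ref{eilvexs}. Then $\\phi(1)=1$, $\\phi(3)=2$, $\\phi(4)=3$, $\\phi(2)=\\phi(5)=4$, and $\\phi(6)=5$, so, for instance, $$\\iota^*(c_2(\\ell)) = \\iota^*(c_5(\\ell)) = c_4(\\ell^{\\P'}),$$ reflecting the fact that the second and fifth edges of $\\ell$ have been aligned into the fourth edge of $\\ell^{\\P'}$.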
\n\nWith this information, we can determine the relative topological complexity for pairs of spatial polygon spaces.\n\n\\begin{thm}\\label{NlTC}\nLet $\\ell$ be a generic, non-degenerate length vector with $\\ell^\\P$ a non-degenerate edge-identified length vector of $\\ell$ as in Definition \\ref{eilvdefn}. Then, $\\operatorname{TC}(\\mathcal{N}(\\ell),\\mathcal{N}(\\ell^\\P))=n+m-5$.\n\\end{thm}\n\n\\begin{proof}\nFirst, notice that $(\\mathcal{N}(\\ell),\\omega)$ is a simply-connected symplectic manifold of dimension $2(n-3)$ and $(\\mathcal{N}(\\ell^\\P), \\omega_\\P)$ is a submanifold of dimension $2(m-3)$ with its own symplectic structure. We need only verify that $\\iota^*([\\omega]) = [\\omega_\\P]$, and then Theorem \\ref{sympTC} yields the result. We show this in the following computation.\n\n\\begin{eqnarray*}\n\\iota^*([\\omega]) =& \\iota^*\\bigg( \\sum\\limits_{i=1}^n \\ell_i c_i(\\ell) \\bigg) & \\text{ (by Proposition \\ref{haussymp})}\\\\\n=&\\sum\\limits_{i=1}^n \\ell_i \\iota^*c_i(\\ell) & \\\\\n=&\\sum\\limits_{i=1}^n \\ell_i c_{\\phi(i)}(\\ell^\\P) &\\text{ (by Proposition \\ref{inducedcohom})} \\\\\n=&\\sum\\limits_{j=1}^m \\ell_j^\\P c_{j}(\\ell^\\P) & \\text{ (grouping over blocks: $\\sum_{i\\in\\P_j}\\ell_i = \\ell^\\P_j$)} \\\\\n=&[\\omega_\\P] & \\text{ (by Proposition \\ref{haussymp})}\n\\end{eqnarray*}\n\\end{proof}\n\n
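As a concrete illustration (ours), take $\\ell = (1,1,2,3,5,7)$ and $\\ell^{\\P'} = (1,2,3,6,7)$ from Example \\ref{eilvexs}, so $n=6$ and $m=5$. Theorem \\ref{NlTC} then gives $$\\operatorname{TC}(\\mathcal{N}(\\ell),\\mathcal{N}(\\ell^{\\P'})) = 6+5-5 = 6,$$ which lies between the absolute values $\\operatorname{TC}(\\mathcal{N}(\\ell^{\\P'})) = 2(5)-5 = 5$ and $\\operatorname{TC}(\\mathcal{N}(\\ell)) = 2(6)-5 = 7$.\n\n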
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nGalaxy clusters are powerful cosmological probes: observations of their internal structure provide information on dark matter and can be used to estimate distances, while studies of their evolution gauge the influence of dark energy on structure formation -- as rare objects at the top of the mass hierarchy, their number density and its evolution are extremely sensitive to the underlying cosmology. For these reasons, the Dark Energy Task Force included cluster surveys among the four primary methods for constraining dark energy. In this context, massive clusters are the most pertinent because their properties are little affected by non--gravitational processes.\n\nToday, we do not yet have the large samples of clusters, most notably massive clusters, out to high redshifts (e.g., unity and beyond) needed to fully realize the potential of cluster studies. This is changing, thanks in large part to Sunyaev-Zel'dovich (SZ) cluster surveys. The SZ effect\\,\\cite{sz1,sz2,sz3} is a distortion of the cosmic microwave background (CMB) black body spectrum due to inverse Compton scattering of CMB photons off electrons in the intra--cluster medium (ICM). It is one of the most promising ways of finding new galaxy clusters, since its amplitude (in terms of surface brightness) and spectrum are independent of redshift (in the non-relativistic case). \n\\commentaire{The change in the intensity of the CMB induced by the SZ effect follows:\n\\begin{equation}\n\\frac{\\Delta I_{\\nu}}{I_0} = y f(\\nu) = \\int_{los}\\frac{kT_e}{m_ec^2}n_e\\sigma_T dl \\times \\frac{x^4e^x}{(e^x-1)^2}\\left[\\frac{x(e^x+1)}{e^x-1}-4\\right]\\, ,\n\\label{eq:delta_i_sz}\n\\end{equation}\nwhere $I_{\\nu} = I_0\\frac{x^3}{e^x-1}$ is Planck's law with $I_0 = \\frac{2(kT_{\\mbox{\\tiny{CMB}}})^3}{(hc)^2}$, $T_e$ and $n_e$ are the temperature and density of the electrons in the ICM, and $x = \\frac{h\\nu}{kT_{\\mbox{\\tiny{CMB}}}}$ is the dimensionless frequency ($\\nu$ is the frequency of observation). The amplitude of the effect is characterized by the so-called \\textit{Compton-y parameter}, which corresponds to the integral of the pressure along the line of sight (los). The spectrum of the distortion, characterized by $f(\\nu)$ in Eq. \\ref{eq:delta_i_sz}, is completely independent of the cluster's properties (at least in the non-relativistic case) and is thus universal; the distortion corresponds to a deficit of photons relative to the mean sky ($\\Delta I_{\\nu}<0$) for $\\nu \\lesssim 217$~GHz, an excess ($\\Delta I_{\\nu}>0$) for $\\nu \\gtrsim 217$~GHz, and is null for $\\nu\\simeq 217$~GHz. Except for the changes of angular size on the sky, two identical clusters at, for instance, redshifts $0.5$ and $5$ will produce the same distortion.}\nAs SZ surveys begin to open this new window on cluster science, relating them to surveys in other wavebands becomes a critical issue, both to understand what we are finding and to fully exploit their scientific potential.\n\n\\section{An SZ\/X--ray Cluster Model}\n\nObservations of the SZ effect give only two--dimensional information projected onto the sky, and follow--up in other wavebands is essential for most studies. Obviously, optical\/NIR follow--up is needed to obtain redshifts. Much can also be gained by combining X--ray and SZ data sets, for instance to better understand various survey selection functions. Follow--up with \\textit{XMM-Newton}\\ and \\textit{Chandra} will enable us to probe the ICM with unprecedented precision, e.g., its thermal structure and the gas mass fraction. Moreover, with X--ray data we can estimate cluster masses through the application of hydrostatic equilibrium.\n\nIn order to inform such SZ\/X--ray comparisons and follow--up of SZ surveys, we have constructed an empirical and easily adaptable model\\,\\cite{wam} relating the SZ and X--ray properties of clusters. This section briefly summarizes its most relevant aspects.\n\n\\subsection{Description of the model}\n\nOur model is based on several ingredients derived from observations, numerical simulations and theory. We employ scaling laws to relate observed properties to the fundamental cluster parameters, mass and redshift: the $M_{500}-T$\\,\\cite{m500_1,m500_2}, $L_X-T$\\,\\cite{l-t} and $f_{gas}-T$\\,\\cite{fgas} relations. The evolution of all these scaling relations is still poorly constrained. However, recent observations\\,\\cite{evol_mt,evol_lt} indicate that self--similar evolution tends to reproduce the data well. Given this, we adopted self--similar evolution in all cases, and we subsequently validated this choice (see the next section).\n\nWe approximate the spatial structure of the gas with an isothermal $\\beta$-model\\,\\footnote{This will be improved in a future version of the model, to account for recent observations showing that this profile is inadequate, especially in the core and outer parts of clusters.} with $\\beta=2\/3$. Fitting the $L_X-T$ relation then requires a deviation from self--similarity in the $f_{gas}-T$ relation, i.e., the gas mass fraction varies with cluster total mass; this variation is allowed by present observations. For the dark matter, we adopt an NFW profile and use the Jenkins mass function\\,\\cite{mf}. Local cluster counts in terms of the X-ray Temperature Function\\,\\cite{xtf} (XTF) then fix the normalization of the fluctuation power spectrum (using the measured $M_{500}-T$ relation).\n\nBy combining these different ingredients we constrain all free parameters of the model, namely those describing cluster physics -- like the core radius $r_c$ and the central electron density $n_e$ -- and those describing population statistics, such as $\\sigma_8$ (see below). We took particular care with the various mass definitions available in the literature, related to theoretical studies (e.g. $M_{vir}$), observations (e.g. $M_{500}$) or numerical simulations (e.g. masses estimated by the friends--of--friends method), transforming among them with the NFW dark matter profile. This was indispensable for coherently combining the variety of constraints.
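\n\nFor orientation, the skeleton of these ingredients can be sketched in a few lines of Python (a schematic illustration of our own; the normalizations and slopes below are placeholders, not the values actually fitted by the model):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef E(z, Om=0.3, Ol=0.7):\n    # dimensionless Hubble rate H(z)\/H0 for a flat LCDM background\n    return np.sqrt(Om*(1.0 + z)**3 + Ol)\n\n# A_M, A_L and alpha are illustrative placeholders, not fitted values.\ndef M500_from_T(T_keV, z, A_M=3.0e14):\n    # M500-T relation with self-similar evolution: E(z)*M500 ~ T^(3\/2)\n    return A_M * T_keV**1.5 \/ E(z)\n\ndef Lx_from_T(T_keV, z, A_L=1.0e44, alpha=3.0):\n    # observed L_X-T slope (steeper than the self-similar value of 2),\n    # evolved self-similarly through E(z)\n    return A_L * T_keV**alpha * E(z)\n\ndef ne_beta_model(r, ne0, rc, beta=2.0\/3.0):\n    # isothermal beta-model for the ICM electron density profile\n    return ne0 * (1.0 + (r\/rc)**2)**(-1.5*beta)\n\\end{verbatim}\n\nSchematically, inverting such relations at fixed mass and redshift is what ties the model's free parameters, such as $r_c$ and $n_e$, to the observed scaling laws.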
\n\n\\subsection{Model validation}\n\nTo validate the model, we checked it against additional observational constraints that were not used to fix its parameters. We discuss the most relevant of these below\\commentaire{, redshift distributions of X--ray surveys}, but first cite another notable one in passing: fitting the local XTF\\,\\cite{xtf}, we find $\\sigma_8=0.78\\pm 0.027$, in complete agreement with the WMAP-5 results\\,\\cite{wmap}.\nMore specifically, we tested the model by comparing the observed redshift distributions from the REFLEX\\,\\cite{reflex} and 400 square--degree\\,\\cite{400} surveys to its predictions. Figure \\ref{fig:xray_counts} shows the predicted and observed counts in both cases. In the former case, the observed total number of clusters is 447 with a completeness estimated to be at least 90\\%; the predicted number is 508 clusters, which corresponds to 457 clusters for a completeness of 90\\%. Moreover, the shapes of the two distributions are in very good agreement.\n\nIn the case of the 400deg$^2$ survey, the model reproduces the high redshift distribution ($z>0.4$) extremely well, although it seems to predict too many low redshift clusters. Noting that this is a serendipitous survey, in which known local clusters are by construction missing, we conclude once again that the model is in reasonable agreement with the data. This last result is particularly satisfying since the high redshift clusters contained in this deep survey are of the kind expected to be found in SZ surveys like \\textit{Planck}\\, (as discussed below).\n\n\\begin{figure}[t!]\n\\begin{center}\n\\subfigure{\\epsfig{figure=Plots\/N_z_REFLEX_0.0_thick.eps,width=7.0cm}}\n\\subfigure{\\epsfig{figure=Plots\/N_z_400square_0.0_thick.eps,width=7.0cm}}\n\\end{center}\n\\vspace{-0.5cm}\n\\caption[X--ray survey redshift distributions]{Two examples of the redshift distribution of clusters from ROSAT surveys (red) compared to model predictions (blue). \n{\\em Left}: The REFLEX survey. {\\em Right}: The 400 square--degree survey.}\n\\label{fig:xray_counts}\n\\end{figure}\n\n\\section{An application of the model}\n\nThe model is completely general and can be used to predict the results of any set of SZ and X--ray observations. As an example of its application, we discuss potential follow--up of \\textit{Planck}\\ SZ clusters with \\textit{XMM-Newton}.\n\n\\subsection{The \\textit{Planck}\\ cluster catalog}\n\nTo accurately model the \\textit{Planck}\\ cluster catalog, we employed the selection function derived by applying the detection algorithms developed by Melin et al.\\,\\cite{jb} to detailed simulations of \\textit{Planck}\\ observations (the \\textit{Planck}\\ Sky Model\\,\\cite{psm}). We find that a non--negligible fraction of otherwise bright SZ clusters remain undetected: these are resolved, low to intermediate redshift clusters whose SZ flux is diluted over several pixels. This selection effect is shown in the left--hand panel of Figure \\ref{fig:catalog}, where we plot the cumulative redshift distribution of \\textit{Planck}\\ clusters.\n\nThe result is that the \\textit{Planck}\\ catalog is expected to contain $\\sim$2350 clusters\\commentaire{ (blue curve)}, of which $\\sim$180 are at $z>0.6$ and $\\sim$15 at $z>1$; there is, of course, a certain amount of model uncertainty associated with these predictions, in particular from the normalization of the SZ--mass relation.
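\n\nThe qualitative behaviour of this selection can be mimicked with a toy criterion (purely illustrative, and ours; the actual selection function comes from the detection chain of Melin et al.\\,\\cite{jb} run on the \\textit{Planck}\\ Sky Model):\n\n\\begin{verbatim}\ndef detected(Y500, theta500, Y_lim=2.0e-3, theta_beam=7.0):\n    # Toy Planck-like criterion; all numbers and the functional form\n    # are illustrative placeholders.  An unresolved cluster (theta500,\n    # in arcmin, below the beam) is kept above a flat threshold on its\n    # integrated SZ flux Y500 (in arcmin^2), while a resolved cluster\n    # sees an effective threshold raised by the dilution of its flux\n    # over several beams.\n    dilution = max(1.0, theta500 \/ theta_beam)\n    return Y500 > Y_lim * dilution\n\\end{verbatim}\n\nSuch a criterion penalizes extended, low redshift systems exactly as described above, without attempting to reproduce the detailed selection function.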
\n\n\\subsection{Follow--up with \\textit{XMM-Newton}}\n\nWe wish to identify the X--ray nature of these new \\textit{Planck}\\ clusters and evaluate the ability of \\textit{XMM-Newton}\\ to observe a significant number of them. We therefore examine those clusters with X--ray fluxes below the ROSAT All Sky Survey (RASS) limit, which we take to be $f_{X}[0.1-2.4]\\mbox{keV}= 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ (i.e., the lowest limit of the MACS survey\\,\\cite{macs}). The distribution of this sub--catalog of new \\textit{Planck}\\ clusters is given as a function of redshift and predicted temperature in the central panel of Figure \\ref{fig:catalog}. Most of these $\\sim$520 clusters are relatively cool and local; however, $\\sim$168 clusters lie at $z>0.6$ and have temperatures $T>6$~keV. Note that only six such clusters are presently known.\n\nIn the right--hand panel of Figure \\ref{fig:catalog}, we show the expected X--ray flux of these objects in the \\textit{XMM-Newton}\\ [0.5-2]--keV band as contours projected onto the redshift--temperature plane. This allows us to evaluate their detectability, and we see that all of these \\textit{Planck}\\ clusters have fluxes larger than $10^{-13}$ erg s$^{-1}$ cm$^{-2}$. They are bright, falling in the flux decade just below the ROSAT limit. This has important consequences for follow--up programs.\n\nUsing observations of MS1054-0321\\,\\cite{ms} and ClJ1226.9+3332\\,\\cite{clj} -- two clusters of the same kind as the newly--discovered high redshift \\textit{Planck}\\ clusters -- as a guide, we estimate that \\textit{XMM-Newton}\\ could measure the temperature of \\textit{Planck}\\ clusters at $z>0.6$ to 10\\% with a relatively short exposure of 25-50 ks (per cluster). It should also be possible to obtain masses and mass profiles for the reasonably relaxed clusters by applying the hydrostatic equilibrium equation.\n \n\\begin{figure}[t!]\n\\begin{center}\n\\hspace{-0.8cm}\n \\subfigure{\\epsfig{figure=Plots\/cumulative_dist_Planck.eps,width=5.9cm}}\\\n \\subfigure{\\epsfig{figure=Plots\/catalog_fullsf_planck3sig-norosat_zT_moriond.eps,width=4.8cm, height=4.8cm}}\\hspace{-0.5cm}\n \\subfigure{\\epsfig{figure=Plots\/zT_fx_band_contour_Planck_moriond.eps,width=6.3cm}}\n\\end{center}\n\\vspace{-0.3cm}\n\\caption[The Planck catalog]{{\\em Left}: Predicted cumulative redshift distribution of the {\\it Planck} cluster catalog. The dashed red line corresponds to the case where all clusters are (falsely) imagined to be unresolved (point--source approximation); \nthe blue line is the realistic case, accounting for the fact that some clusters are resolved. \n{\\em Middle}: The {\\it Planck} newly--discovered cluster catalog (in which clusters are \nobserved by {\\it Planck} but not by ROSAT) distributed in bins over temperature and redshift. {\\em Right}: The same \ndistribution projected onto the ($z$, $T$)--plane with iso--flux contours in the \n{\\it XMM-Newton} [0.5-2]--keV band.}\n\\label{fig:catalog}\n\\end{figure}
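\n\nThe exposure estimate above scales in a simple way with flux. The toy scaling below (ours; the reference values are illustrative and are not the actual MS1054-0321 or ClJ1226.9+3332 numbers) conveys the idea:\n\n\\begin{verbatim}\ndef exposure_ks(f_x, f_ref=2.0e-13, t_ref=35.0):\n    # Crude counting-statistics scaling: to collect a fixed number of\n    # source photons, and hence reach roughly a fixed fractional error\n    # on the temperature, the exposure scales inversely with the X-ray\n    # flux.  f_ref (erg\/s\/cm^2) and t_ref (ks) are placeholders.\n    return t_ref * (f_ref \/ f_x)\n\nprint(exposure_ks(1.0e-13))   # ~70 ks for a cluster at the faint end\n\\end{verbatim}\n\nIn practice the required exposure also depends on the background level and on the cluster's extent, so such a scaling should only be read as a rough guide.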
\n\n\\section{Summary}\n\nWe presented a model for the SZ and X--ray signals of galaxy clusters based on current X--ray data. Using a realistic mock \\textit{Planck}\\ cluster catalog, we employed the model to predict, firstly, that $\\sim$168 newly--discovered clusters lie at $z>0.6$ with \\commentaire{temperatures }$T>6$~keV and, secondly, that these clusters can be observed in some detail in only 25--50 ks with \\textit{XMM-Newton}. Thus we could follow up the majority of these new \\textit{Planck}\\ clusters with a dedicated program of several Msec on \\textit{XMM-Newton}; this falls in the category of the {\\em Very Large Programme} observations now possible with the satellite. Follow--up observations with \\textit{XMM-Newton}\\ would therefore dramatically increase the sample of well--studied, massive, high redshift ($0.6