\\section{Introduction}\\label{sec:introduction}\n\n\n\n\nIn recent years, machine learning approaches, in particular convolutional neural networks (CNNs), have achieved state-of-the-art results for both discriminative \\cite{Krizhevsky2012,Girshick2014,Girshick2015,Ren2015} and generative tasks \\cite{Mao2016,Cai2017,Pathak2016,Aharon2006, Mairal2008, Mairal2009,Dabov2007,Dabov2009,Lebrun2012,Radford2015,Ledig2016}.\nHowever, applying the ideas from these powerful learning techniques, like Dictionary Learning and CNNs, to 3D shapes is not straightforward, as a common parameterization of the 3D mesh has to be decided before the application of the learning algorithm. A simple way of achieving such a parameterization is the voxel representation of the shape. For discriminative tasks, this generic voxel representation performs very well \\cite{Maturana2015, Wu2015, Su2015, Brock2016}. However, when this representation is used for global generative tasks, the results are often blotchy, with spurious points floating as noise \\cite{Wu2016, Dai2016, Brock2016}. The aforementioned methods reconstruct the global outline of the shape impressively, but smaller sharp features are lost - more due to the limitations of the voxel based representation and the nature of the problem being solved than to the performance of the CNN.\n\nIn this paper we intend to reconstruct fine-scale surface details of a 3D shape using ideas taken from the powerful learning methods used in the 2D domain. This problem is different from voxel based shape generation, where the entire global shape is generated with the loss of fine-scale accuracy. 
Instead, we intend to restore and inpaint surfaces in settings where a global outline of the noisy mesh being reconstructed is already available. \nInstead of the lossy voxel based global representation, we propose local patches computed with the help of mesh quadrangulation. These local patches provide a collection of \\textit{fixed-length} and \\textit{regular} local units for 3D shapes which can then be used for fine-scale shape processing and analysis. \n\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{1\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_3dv17\/patch_framework_quads.jpg} \n\\end{subfigure}%\n\\caption{Our patch computation framework - Local patches are computed on reference frames from quad orientations of the quad mesh obtained from the low resolution version of the input mesh.}\n\\label{fig:patchframework}\n\\end{figure}\n\n\nOur local patch computation procedure makes it possible to obtain a large number of overlapping patches of intermediate size from a single mesh. These patches cover the surface variations of the mesh and are numerous enough to train popular machine learning algorithms such as deep CNNs. At the same time, thanks to the stable orientation provided by our patch computation procedure (via quadrangulation), the patches are sufficiently large to capture meaningful surface details. This makes the patches suitable for repairing a damaged part of a mesh while learning from its undamaged parts or from other clean meshes. Because of the locality and the density of the computed patches, we do not need a large database of shapes to correct a damaged part or fill a moderate-sized hole in a mesh. 
We explore ideas from 2D images and use methods such as dictionary learning and deep generative CNNs for surface analysis.\n\nWe compute local patches of moderate size by applying an automatic mesh quadrangulation algorithm~\\cite{Ebke2013} to the low-resolution representation of an input 3D mesh and taking the stable quad orientations for patch computation. The low-resolution mesh is obtained by applying mesh smoothing to the input 3D scan, which captures the broad outline of the shape over which local patches can be placed. We then set the average quad size and thereby choose the required scale for computing local patches. The mesh quadrangulation is a quasi-global method, which determines the local orientations of the quads based on the distribution of corners and edges on the overall shape. At the same time, the scanlines of the quads retain some robustness towards surface noise and partial scans - these orientations can be fine-tuned further by the user if needed. The patch computation approach is summarized in Figure \\ref{fig:patchframework}.\n\n\nPrior work using local surface patches for 3D shape compression \\cite{Digne2014} assumed the patch size to be sufficiently small that a local patch on the input 3D scan could be mapped to a unit disc. Such small patch sizes would prevent learning based methods from capturing shape detail at larger scales, and would limit applications like surface inpainting. \n\nThe contributions of our paper are as follows. \n\\begin{enumerate}\n\\item We propose a novel shape encoding by local patches oriented by mesh quadrangulation. Unlike previous works, we do not require\nthe patches to be exceedingly small \\cite{Digne2014}.\n\n\\item Using our quadrangulated patches, we propose a method for learning a 3D patch dictionary. 
Using the self-similarity among the 3D patches, we address surface analysis problems such as inpainting and compression.\n\n\\item We extend the insights from designing CNN architectures for 2D image inpainting to surface inpainting of 3D shapes using our 3D patches. We provide an analysis of their applicability to shape denoising and inpainting.\n\n\\item We validate the applicability of our models (patch-dictionary and CNN), learned from multiple 3D scans pooled into a common data set, towards repairing an individual 3D scan. \n\\end{enumerate} \n\n\nThe related work is discussed in the following section. We first explain our encoding of quadrangulated patches in Section \\ref{sec:3Dpatches}. We then present both linear and CNN based generative models in Section \\ref{sec:generativemodels}. We follow with the experiments section, where both generative models are evaluated.\n\n\\section{Related Work}\n\\label{sec:related_work}\n\\subsection{3D global shape parameterization} Aligning a data set of 3D meshes to a common global surface parameterization is very challenging and requires the shapes to be of the same topology. For example, {\\em geometry images}~\\cite{Sinha2016} can parameterize genus-0 shapes on a unit sphere, and even higher topology shapes with some distortion. Alternatively, the shapes can be aligned on the spectral distribution spanned by the Laplace-Beltrami Eigenfunctions~\\cite{Masci2015a,Boscaini2016}. However, even small changes to the 3D mesh structure and topology can create large variations in the global spectral parameterization - something which cannot be avoided when dealing with real world 3D scans. Another problem is with learning partial scans and shape variations, where the shape detail is preserved only locally at certain places. Sumner and Popovic \\cite{Sumner2004} proposed the {\\em deformation gradient} encoding of a deforming surface through the individual geometric transformations of the mesh facets. 
This encoding can be used for statistical modeling of pre-registered 3D scans~\\cite{Neumann2013}, and describes a Riemannian manifold structure with a Lie algebra~\\cite{Freifeld2012}. All these methods assume that the shapes are pre-registered globally to a common mesh template, which is a significant challenge for shapes with arbitrary topologies. Another alternative is to embed a shape of arbitrary topology in a set of 3D cubes in the extrinsic space, known as {\\em PolyCube-Maps}~\\cite{Tarini2004}. Unfortunately, this encoding is not robust to intrinsic deformations of the shape, such as bending and articulated deformations that typically occur with real world shapes. We therefore choose an intrinsic quadrangular parameterization on the shape itself~\\cite{Ebke2013} (see also Jakob et al.~\\cite{Jakob2015}).\n\n\\subsection{Statistical learning of 3D shapes} For reconstructing specific classes of shapes, such as human bodies or faces, fine-scale surface detail can be learned, {\\em e.g.},~\\cite{Garrido2016,Bermano2014,Bogo2015}, from high resolution scans registered to a common mesh template model. This presumes a common shape topology or registration to a common template model, which is not possible for arbitrary shapes as considered in our work. \nFor shapes of arbitrary topology, existing learning architectures for deep neural networks on 2D images can be harnessed by using projections of the model from different perspectives~\\cite{Su2015, Sarkar2017}, or by using its depth images~\\cite{Wei2016}. 3D shapes are also converted into common global descriptors by voxel sampling. The availability of large databases of 3D shapes like ShapeNet \\cite{Chang2015} has made it possible to learn deep CNNs on such voxelized spaces for the purpose of both discrimination \\cite{Maturana2015, Wu2015, Su2015, Brock2016} and shape generation \\cite{Wu2016, Dai2016, Brock2016}. 
Unfortunately, these methods cannot preserve fine-scale surface detail, though they are good at identifying the global shape outline. More recently, there has been considerable effort to find alternative ways of applying CNNs to 3D data, such as OctNet \\cite{Riegler2017} and PointNet \\cite{Qi2016}. The OctNet system uses a compact version of the voxel based representation where only occupied cells are stored in an octree instead of the entire voxel grid, and has computational power similar to that of voxel based CNNs.\nPointNet, on the other hand, takes unstructured 3D points as input and obtains a global feature by applying max pooling as a symmetric function to the outputs of an MLP (multi-layer perceptron) evaluated on individual points. \nNeither of these networks has been fully explored yet for its generative properties ({\\em e.g.}, OctNetFusion \\cite{Riegler2017a}). They are still, at their core, systems for global representation and are not targeted specifically at surfaces. In contrast, we encode a 3D shape by fixed-length and regular local patches and learn generative models (patch dictionary and generative CNNs) for reproducing fine-scale surface details.\n\n\\subsection{CNN based generative models in images} One of the earliest works on unsupervised feature learning is the autoencoder \\cite{Hinton2006}, which can also be seen as a generative network. A slight variation, the denoising autoencoder \\cite{Vincent2008,Xie2012}, reconstructs the image from local corruptions, and is used as a tool for both unsupervised feature learning and the application of noise removal. Our generative CNN model is, in principle, a variant of the denoising autoencoder, where we use convolutional layers following the modern advances in the field of CNNs. Similar networks with convolutional layers are used for image inpainting in \\cite{Mao2016,Cai2017,Pathak2016}. 
Generating natural images using a neural network has also been studied extensively - mostly after the introduction of Generative Adversarial Networks (GANs) by Goodfellow et al. \\cite{Goodfellow2014} and their successful implementation using convolutional layers in DCGAN (Deep Convolutional GAN) \\cite{Radford2015}. As discussed in Section \\ref{sec:networkdesign}, our networks for patch inpainting are inspired by all the aforementioned ideas and are used to inpaint height map based 3D patches instead of images.\n\n\\subsection{Dense patch based generative models in images} 2D patch based methods have been very popular for image denoising. These non-local algorithms can be categorised into dictionary based \\cite{Aharon2006, Mairal2008, Mairal2009} and BM3D (Block-matching and 3D filtering) based \\cite{Dabov2007, Dabov2009, Lebrun2012} methods. \nBecause of the presence of a block matching step in BM3D (patches are matched and kept in a block if they are similar), it is not simple to extend it to the task of inpainting, though the algorithm can be applied indirectly in a different domain \\cite{Li2014}. In contrast, dictionary based methods can be extended to the problem of inpainting by introducing missing data masks in the matrix factorization step - making them the most popular methods for comparison on inpainting tasks. \nFor 3D meshes, due to the lack of a common patch parameterization, this task becomes difficult. \nIn this work, we use our novel encoding to compute moderately sized dense 3D patches, and process them with the generative models of patch dictionaries and non-linear deep CNNs.\n\n \n\\subsection{3D patch dictionaries} A lossy encoding of local shape detail can be obtained by 3D feature descriptors~\\cite{Kim2013}. However, they typically do not provide a complete local surface parameterization. Recently, Digne et al.~\\cite{Digne2014} used a 3D patch dictionary for point cloud compression. 
There, local surface patches are encoded as 2D height maps over a circular disc, and a sparse linear dictionary of patch variations is learned~\\cite{Aharon2006}. They assume that the local patches are sufficiently small (where the shape is parameterizable to a unit disc). In contrast to\nthis work, (i) we use mesh quadrangulation for obtaining the patch locations and orientations (in comparison to uniform sampling and PCA in \\cite{Digne2014}), enabling us to get large patches at good locations, (ii) \nwe address the problem of inpainting by generative models (a masked version of matrix factorization and a blind method for CNN models) instead of compression, (iii) as a result of the aforementioned differences, our patch size is much larger, in order to have a meaningful patch description in the presence of missing regions.\n\n\\subsection{General 3D surface inpainting} Earlier methods for 3D surface inpainting regularized from the geometric neighborhood~\\cite{Liepa2003,Bendels2006}. More recently, Sahay et al.~\\cite{Sahay2015} inpaint the holes in a shape by pre-registering it to a {\\em self-similar} proxy model in a dataset that broadly resembles the shape. The holes are inpainted using a patch-dictionary. In this paper, we use a similar approach, but avoid the assumption of finding and pre-registering to a proxy model. The term {\\em self-similarity} in our paper refers to finding similar patches in other areas of the shape. Our method automatically detects suitable patches, either from within the shape, or from a diverse data set of 3D models. Zhong et al.~\\cite{Zhong2016} propose an alternative learning approach by applying sparsity on the Laplacian Eigenbasis of the shape. 
We show that our method (both the patch dictionary and the generative CNN models) performs better than this approach on publicly available meshes.\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{0.6\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/images_3dv17\/heightmap-eps-converted-to.pdf} \n\\end{subfigure}%\n\\begin{subfigure}{0.35\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/images_3dv17\/multipatches-eps-converted-to.pdf} \n\\end{subfigure}\n\\caption{(Left) Patch representation - Points are sampled as a height map over the planar grid of a reference frame at the seed point. (Right) Patches computed at multiple offsets from the quad centres to simulate dense sampling of patches while keeping the stable quad orientation. The black connected square represents the quad in a quad mesh and the dotted squares represent the patches computed at different offsets.}\n\\label{fig:heightmap}\n\\end{figure}\n\n\n\n\n\n\n\\section{3D Patch Encoding}\n\\label{sec:3Dpatches}\n\n\nGiven a mesh $\\model{M} = \\{F, V\\}$ depicting a 3D shape and the input parameters - patch radius $r$ and grid resolution $N$ - our aim is to decompose it into a set of fixed-length local patches $\\{P_s\\}$, along with the settings $\\model{S} = \\{(s, T_{s})\\}, Conn$ having information on the location (by $s$) and orientation (by the transformation $T_{s}$) of each patch and vertex connectivity (by $Conn$) for reconstructing back the original shape.\n\nTo compute uniform-length patches, a point cloud $C$ is computed by dense uniform sampling of points in $\\model{M}$. Given a seed point $s$ on the model surface $C$, a reference frame $\\model{F}_s$ corresponding to a transformation matrix $T_{s}$ at $s$, and an input patch-radius $r$, we consider all the points in the $r$-neighbourhood, $\\model{P}_s$. \nEach point in $\\model{P}_s$ is represented w.r.t. $\\model{F}_s$ as $P_{\\model{F}_s}$. 
That is, if the rotation between global coordinates and $\\model{F}_s$ is given by the rotation matrix $R_s$, a point $\\bm{p}$ represented in the local coordinate system of $\\model{F}_s$ is given by $\\bm{p}_{s}= T_s \\bm{p}$, where $T_s = \\begin{pmatrix}R_s & {-R_s}s\\\\ 0 & 1\\end{pmatrix}$ is the transformation matrix between the two coordinate systems.\n\n\n\\subsection{Local parameterisation and patch representation}\nAn $N\\times N$ square grid of side length $\\sqrt{2}r$ is placed on the X-Y plane of $\\model{F}_s$, and points in $P_{\\model{F}_s}$ are sampled over the grid w.r.t. their X-Y coordinates. Each sampled point is then represented by its `height' from the square grid, which is its Z coordinate, to finally get a height-map representation of dimension $(N \\times N)$ (Figure \\ref{fig:heightmap}). Thus, each patch around a point $s$ is defined by a \\textit{fixed size} vector $\\operatorname{vec}(P_s)$ of size $N^2$ and a transformation $T_s$. \n\n\\subsection{Mesh reconstruction}\n\\label{sec:connectedmeshrec}\n To reconstruct a connected mesh from the patch set we need to store connectivity information $Conn$. This can be achieved by keeping track of the exact patch-bin $(P_s, i)$ to which a vertex $v_j \\in V$ in the input mesh corresponds (i.e., where it would get sampled during the patch computation), via the mapping $\\{(j, \\{(P_s, i)\\})\\}$.\n\nTherefore, given the patch set $\\{P_s\\}$ along with the settings $\\model{S} = \\{(s, T_{s})\\}, Conn$ with $Conn = \\{(j, \\{(P_s, i)\\})\\}$, and $F$, it is possible to reconstruct the original shape with accuracy up to the sampling length. For each patch $P_{s}$ and each bin $i$, the height map representation $P_s[i]$ is first converted to the XYZ coordinates in its reference frame, $\\bm{p}_s$, and then to the global coordinates $\\bm{p}'$, by $\\bm{p}'= T_s^{-1} \\bm{p}_s$. Then the estimate of each vertex index $j$, $v_j \\in V$, is given by a set of vertices $\\{v_e\\}$. 
The final value of the vertex, $v_j'$, is taken as the mean of $\\{v_e\\}$. The reconstructed mesh is then given by $\\{\\{v_j'\\}, F\\}$. If the estimate of a vertex $v_j$ is empty, we take the average of the vertices in its 1-ring neighbourhood.\n\n\n \\begin{algorithm}[t]\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n\\floatname{algorithm}{Steps}\n \\caption{3D Patch computation based on quad mesh}\n \\begin{algorithmic}[1]\n \\REQUIRE Mesh - $M$, Patch radius - $r$, resolution - $N$\n \\STATE Compute the quad mesh of the smoothed $M$ using \\cite{Jakob2015}.\n\t\t\t\\STATE Densely sample points in $M$ to get the cloud $C$.\n\\STATE At each quad center, compute the $r$-neighborhood in $C$ and orient it using the quad orientation to get local patches.\n\\STATE Sample the local patches in a ($N \\times N$) square grid in a height map based representation.\n\\STATE Store the vertex connections (details in the text).\n \\ENSURE Patch set $\\{P_s\\}$ of ($N \\times N$) dimension, orientations, vertex connections.\n \\end{algorithmic}\n \\label{algorithm}\n \n\\end{algorithm}\n\n\n\\subsection{Reference frames from quad mesh}\n\\label{sec:rfcomputation} \\label{sec:globalproperties}\nThe height map based representation accurately encodes a surface only when the patch radius is below the distance between the surface points and the shape medial axis. In other words, the $r$-neighbourhood $\\model{P}_s$ should delimit a topological disk on the underlying surface to enable parameterization over the grid defined by the reference frame. In real world shapes, either this assumption breaks, or the patch radius becomes too small to provide a meaningful sampling of the shape. 
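As an illustration, steps 3-4 of the patch computation above (transform the $r$-neighbourhood into the quad's reference frame, then bin heights over the square grid) can be sketched in NumPy. The function and variable names are ours, and averaging the heights per bin is one simple choice of sampling, not necessarily the exact one used in our implementation:

```python
import numpy as np

def extract_patch(points, seed, R_s, r, N):
    """Sample the r-neighbourhood of `seed` into an N x N height map.

    points : (M, 3) densely sampled surface points (the cloud C)
    seed   : (3,) seed point (e.g. a quad centre)
    R_s    : (3, 3) rotation of the local frame F_s (rows: X, Y, Z axes,
             with Z along the quad normal)
    Returns the height map and a boolean mask of occupied bins.
    """
    # T_s p = R_s (p - seed): express points in the local frame F_s.
    local = (points - seed) @ R_s.T
    nbhd = local[np.linalg.norm(local, axis=1) <= r]

    # Square grid of side sqrt(2) * r on the X-Y plane of F_s.
    half = np.sqrt(2.0) * r / 2.0
    heights = np.zeros((N, N))
    counts = np.zeros((N, N))
    for x, y, z in nbhd:
        i = int(np.floor((x + half) / (2 * half) * N))
        j = int(np.floor((y + half) / (2 * half) * N))
        if 0 <= i < N and 0 <= j < N:
            heights[i, j] += z          # accumulate 'height' above the grid
            counts[i, j] += 1
    mask = counts > 0
    heights[mask] /= counts[mask]       # average height per occupied bin
    return heights, mask
```

Points of the neighbourhood that fall outside the inscribed square grid are simply ignored, mirroring the fact that the grid of side $\sqrt{2}r$ does not cover the whole disc of radius $r$.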
A good choice of seed points enables the computation of the patches in well-behaved areas, such that, even with moderately sized patches in arbitrary real world shapes, the $r$-neighbourhood $\\model{P}_s$ of a given point $s$ delimits a topological disk on the grid of parameterisation. It should also provide an orientation consistent with the global shape. \n\n\nGiven a mesh $\\model{M}$, we obtain a low-resolution representation by Laplacian smoothing \\cite{Sorkine2004}. The low resolution mesh captures the broad outline of the shape over which local patches can be placed. In our experiments, for all the meshes, we performed $30$ Laplacian smoothing iterations (normal smoothing + vertex fitting).\n\nGiven the smooth coarse mesh, the quad mesh $\\model{M}^Q$ is extracted following Jakob et al.~\\cite{Jakob2015}. At this step, the quad length is specified in proportion to the final patch length, and hence sets the scale of the patch computation. For each quad $q$ in the quad mesh, its center and $4k$ offsets are considered as seed points, where $k$ is the overlap level (Figure \\ref{fig:heightmap} (Right)). These offsets capture more patch variations for the learning algorithm. For all these seed points, the reference frames are taken from the orientation of the quad $q$, denoted by its transformation $T_{s}$. In this reference frame, the $Z$ axis, on which the height map is computed, is taken to be in the direction normal to the quad. The other two orthogonal axes, $X$ and $Y$, are computed from the two consistent sides of the quads. To keep the orientation of the $X$ and $Y$ axes consistent, we do a breadth-first traversal starting from a specific quad location in the quad mesh and reorient all the axes consistently with the initial axes. \n\n\n\n\n\n\n\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/patchaa_summary_gen} \n\\end{subfigure}%\n\\caption{Summary of the inpainting framework. 
Generative models are trained on the 3D patches computed from 3D shapes for the purpose of inpainting. During testing (dashed line) the generative model is used to reconstruct noisy patches computed on the noisy mesh.}\n\\label{fig:patchaa_summary}\n\\end{figure}\n\n\\section{Learning on 3D patches}\n\\label{sec:generativemodels}\nGiven a set of 3D meshes, we first decompose them into local rectangular patches. Using this large database of 3D patches, we learn a generative model to reconstruct a denoised version of the input 3D patches. We use both Matrix Factorization and CNN based generative models for inpainting, whose details are explained in this section. The overall approach for training is presented in Figure \\ref{fig:patchaa_summary}.\n\nLet $\\bm{x_i} := \\operatorname{vec}(P_i) \\in \\mathbb{R}^{N^2}$ be the vectorization of the patch $P_i$ in the patch set $\\{P_i\\}$, and let $X$ be the domain of the vectorized patches generated from a mesh (or a pool of meshes). Given such a patch set, we learn a generative model $\\model{M}: X \\mapsto X$, such that $\\model{M}(\\bm{x}) = \\bm{x'}$ produces a cleaned version of the noisy input $\\bm{x}$. The following sections describe two popular generative modeling methods used in the context of patch inpainting, namely Dictionary Learning and Denoising Autoencoders. These methods, inspired by their popularity as generative models in the 2D domain, are designed to meet the needs of the patch encoding.\n\n\\subsection{Dictionary Learning and Sparse Models}\nGiven a matrix ${D}$ in $\\mathbb{R}^{m \\times p}$ with $p$ column vectors, sparse models in signal processing aim at representing a signal $\\bm{x}$ in $\\mathbb{R}^{m}$ as a sparse linear combination of the column vectors of $D$. The matrix $D$ is called the \\textit{dictionary} and its columns \\textit{atoms}. 
In terms of optimization, approximating $\\bm{x}$ by a sparse linear combination of atoms can be formulated as finding a sparse vector $\\bm{y}$ in $\\mathbb{R}^p$, with $k$ non-zero coefficients, that minimizes\n\\vspace{-0.2cm}\n\\begin{equation} \\label{eq:sparsity}\n \\quad \\min \\limits _{\\bm{y}} \\frac{1}{2}\\|\\bm{x} - D\\bm{y}\\|^2_2 \\qquad \\text{s.t. } \\|\\bm{y}\\|_0 \\le k\n \\vspace{-0.2cm}\n\\end{equation}\n\nThe dictionary $D$ can be learned from the signal dataset itself, which gives better performance than off-the-shelf dictionaries on natural images. In this work we learn the dictionary from the 3D patches for the purpose of mesh processing. Given a dataset of $n$ training signals $\\bm{X} = [\\bm{x}_1, ..., \\bm{x}_n]$, dictionary learning can be formulated as the following minimization problem\n\\vspace{-0.2cm}\n\\begin{equation} \\label{eq:dlearning}\n \\quad \\min \\limits _{D, \\bm{Y}} \\sum_{i=1}^n \\frac{1}{2}\\|\\bm{x}_i - D\\bm{y}_i\\|^2_2 + \\lambda \\psi(\\bm{y}_i),\n \\vspace{-0.2cm}\n\\end{equation}\n\nwhere $\\bm{Y} = [\\bm{y}_1, ..., \\bm{y}_n] \\in \\mathbb{R}^{p \\times n}$ is the set of sparse decomposition coefficients of the input signals $\\bm{X}$, and $\\psi$ is a sparsity-inducing regularization function, which is often the $l_1$ or $l_0$ norm.\n\n\nBoth optimization problems described by Equations \\ref{eq:sparsity} and \\ref{eq:dlearning} are solved by approximate or greedy algorithms; for example, Orthogonal Matching Pursuit (OMP) \\cite{Pati1993} and Least Angle Regression (LARS) \\cite{Efron2004} for sparse encoding (optimization of Equation \\ref{eq:sparsity}), and KSVD \\cite{Aharon2006} for dictionary learning (optimization of Equation \\ref{eq:dlearning}).\n\n\n\\textbf{Missing Data:} Missing information in the original signal can be handled well by the sparse encoding. 
To deal with unobserved information, the sparse encoding formulation of Equation \\ref{eq:sparsity} can be modified by introducing a binary mask $M$ for each signal $\\bm{x}$. Formally, $M$ is defined as a diagonal matrix in $\\mathbb{R}^{m \\times m}$ whose $j$-th diagonal entry is 1 if the $j$-th entry of $\\bm{x}$ is observed and 0 otherwise. Then the sparse encoding formulation becomes \n\n\\begin{equation}\\label{eq:maskedsparsity}\n \\quad \\min \\limits _{\\bm{y}} \\frac{1}{2}\\|M(\\bm{x} - D\\bm{y})\\|^2_2 \\qquad \\text{s.t. } \\|\\bm{y}\\|_0 \\le k\n \\vspace{-0.2cm}\n\\end{equation}\n\nHere $M\\bm{x}$ represents the observed data of the signal $\\bm{x}$ and $\\bm{x'} = D\\bm{y}$ is the estimate of the full signal. The binary mask does not drastically change the optimization procedure, and one can still use the classical optimization techniques for sparse encoding. \n\n\n\\subsubsection{3D Patch Dictionary}\n\\label{sec:shapeencoding}\nWe learn a patch dictionary $D$ with the generated patch set $\\{P_s\\}$ as training signals ($m = N^2$). This patch set may come from a single mesh (providing a \\textit{local dictionary}), or be accumulated globally using patches coming from different shapes (providing a \\textit{global dictionary} of the dataset). For the application of hole-filling, a dictionary can also be learnt on the patches from the clean part of the mesh, which we call a \\textit{self-similar} dictionary; such dictionaries are powerful for meshes with repetitive structures. For example, a tiled floor or the side of a shoe has many repetitive elements that can be learned automatically. We computed patches at the resolution of (24 $\\times$ 24) following the mesh resolution. More details on the 3D dataset, patch sizes and resolutions for different types of meshes are provided in the Evaluation section. 
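As an illustration, the dictionary learning of Equation \ref{eq:dlearning} and the OMP-based sparse encoding of Equation \ref{eq:sparsity} can be sketched with scikit-learn on toy stand-ins for vectorised patches. The sizes below are arbitrary, and note that scikit-learn stores the atoms as *rows* of `components_`, i.e. the transpose of the $m \times p$ dictionary used in the text:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy stand-in for vectorised patches x_i = vec(P_i), here m = 64 (N = 8).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))

dico = MiniBatchDictionaryLearning(
    n_components=128,              # p atoms (overcomplete: p > m)
    transform_algorithm='omp',     # sparse coding of Eq. (1) via OMP
    transform_n_nonzero_coefs=5,   # k non-zero coefficients
    random_state=0,
)
Y = dico.fit(X).transform(X)       # sparse codes y_i, shape (n, p)
X_hat = Y @ dico.components_       # reconstruction x_i' ~ D y_i

# Each code respects the l0 constraint of Eq. (1).
assert (np.count_nonzero(Y, axis=1) <= 5).all()
```

In practice, the training signals would be the vectorised 3D patches rather than random vectors, and $p$ and $k$ would be tuned per dictionary (local, global or self-similar).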
Note that we also computed patches at the resolution of (100 $\\times$ 100) for the larger CNN based generative models, as they are more complex than the linear dictionary based models; details are given in the next section.\n\n\\textbf{Reconstruction}\nUsing a given patch dictionary $D$, we can reconstruct the original shape with an accuracy that depends on the number of atoms chosen for the dictionary. For each 3D patch $\\bm{x_i} = \\operatorname{vec}(\\bm{P}_i)$ from the generated patches and the learnt dictionary $D$ of a shape, its sparse representation $\\bm{y}$ is found following the optimization in Equation \\ref{eq:sparsity} using the algorithm of Orthogonal Matching Pursuit (OMP). Its approximate representation, the locally reconstructed patch $\\bm{x}_i'$, is found as $\\bm{x}_i' \\approx D\\bm{y}$. The final reconstruction is performed using the altered patch set $\\{P_i'\\}$ and $\\model{S}$ following the procedure in Section \\ref{sec:connectedmeshrec}. \n\n\\textbf{Reconstruction with missing data}\n\\label{sec:missingdatarec}\nIn the case of a 3D mesh with missing data, for each 3D patch $\\bm{x_i}$ computed from the noisy data having missing values, we find the sparse encoding $\\bm{y_i}$ following Equation \\ref{eq:maskedsparsity}. The estimate of the full reconstructed patch is then $\\bm{x}' = D\\bm{y}$. \n\nResults of inpainting using Dictionary Learning are provided in the Evaluation section (Section \\ref{sec:results}). 
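A minimal sketch of the masked sparse encoding of Equation \ref{eq:maskedsparsity}: since the mask $M$ only selects rows, running OMP on the observed rows of $\bm{x}$ and $D$ is equivalent to including the mask in the objective. The helper below is illustrative, under our naming assumptions, and is not the exact implementation used in our experiments:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def inpaint_patch(x, observed, D, k):
    """Estimate a full patch from its observed entries (Eq. 3).

    x        : (m,) vectorised patch with missing bins
    observed : (m,) boolean mask (the diagonal of M)
    D        : (m, p) dictionary, atoms as columns
    """
    # Restricting the rows of x and D to the observed entries is
    # equivalent to multiplying both by the binary mask M.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    omp.fit(D[observed], x[observed])
    y = omp.coef_                     # sparse code y
    return D @ y                      # x' = D y: full-patch estimate
```

The missing bins are thus filled by extrapolating the sparse code fitted on the observed bins through the full dictionary.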
We now present the second generative model in the next section.\n\n\n\n\n\n\n\\begin{figure*}\n\\small\n\\centering\n\\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/AAsummary2}\n \\end{subfigure}\n\\resizebox{0.8\\textwidth}{!}{\n\\begin{tabular}{|l|l|l|l|l|l|l|}\n\\hline\n & \\textbf{small\\_4x} & \\textbf{multi\\_6x} & \\textbf{6x\\_128} & \\textbf{6x\\_128\\_FC} & \\textbf{long\\_12x} & \\textbf{long\\_12x\\_SC} \\\\ \\hline\nInput & (24x24) & (100x100x1) & (100x100x1) & (100x100x1) & (100x100x1) & (100x100x1) \\\\ \\hline\n & 3x3, 32 & 3x3, 32 & & & & \\\\ \n & 3x3, 32, (2, 2) & 3x3, 32, (2, 2) & 5x5, 32, (2, 2) & 5x5, 32, (2, 2) & 5x5, 64 & 5x5, 64 \\\\ \\cline{2-7}\n \\multirow{3}{*}{\\begin{turn}{90} Convolution blocks\\end{turn}} & 3x3, 32 & 3x3, 32 & & & & 5x5, 64, (2, 2) \\\\ \n & 3x3, 32, (2, 2) & 3x3, 32, (2, 2) & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) & Out (1) \\\\ \\cline{3-7}\n & & 3x3, 32 & & & & \\\\ \n & & 3x3, 32, (2, 2) & 5x5, 128, (2, 2) & 5x5, 128, (2, 2) & 5x5, 64 & 5x5, 64 \\\\ \\cline{5-6} \\cline{6-7}\n\n & & & & & & 5x5, 64, (2, 2) \\\\ \n & & & & & 5x5, 64, (2, 2) & Out (2) \\\\ \\cline{6-7}\n & & & & & & \\\\ \n & & & & & 5x5, 64 & 5x5, 64 \\\\ \\cline{6-7}\n & & & & & & \\\\ \n & & & & FC 4096 & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) \\\\ \\hline \\hline \n\n\n & & & & & & \\\\ \n & & & & & 5x5, 64 & 5x5, 64 \\\\ \\cline{6-7}\n \\multirow{3}{*}{\\begin{turn}{90}Transposed conv blocks\\end{turn}} & & & & & & 5x5, 64, (2, 2) \\\\ \n & & & & & 5x5, 64, (2, 2) & Relu + (2) \\\\ \\cline{6-7}\n & & 3x3, 32 & & & & \\\\ \n & & 3x3, 32, (2, 2) & 5x5, 128, (2, 2) & 5x5, 128, (2, 2) & 5x5, 64 & 5x5, 64 \\\\ \\cline{3-7}\n & 3x3, 32 & 3x3, 32 & & & & 5x5, 64, (2, 2) \\\\ \n & 3x3, 32, (2, 2) & 3x3, 32, (2, 2) & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) & Relu + (1) \\\\ \\cline{2-7}\n & 3x3, 32 & 3x3, 32 & & & & \\\\ \n & 3x3, 32, (2, 2) & 3x3, 32, (2, 2) & 5x5, 
32, (2, 2) & 5x5, 32, (2, 2) & 5x5, 64 & 5x5, 64 \\\\ \\cline{2-7}\n \n & 3x3, 1 & 3x3, 1 & 5x5, 1 & 5x5, 1 & 5x5, 1, (2, 2) & 5x5, 1, (2, 2) \\\\ \\hline\n\\end{tabular}\n\n}\n\\caption{(Left) - Summary of our network architecture showing the building blocks. Dashed lines and blocks are optional parts depending on the network, as described in the table on the right. Conv, FC and TConv denote Convolution, Fully Connected and Transposed Convolution layers respectively. (Right) - The detailed description of the different networks used. Each column represents a network where the input is processed from top to bottom. Each block lists the kernel size, the number of filters or output channels, and optional strides when they differ from (1, 1). The network complexity in terms of computation and parameters increases from left to right, except for \\textit{6x\\_128\\_FC}, which has the maximum number of parameters because of the presence of the FC layer. Other details are provided in Section \\ref{sec:networkdesign}.}\n\\label{table:networks}\n\\end{figure*}\n \n\\subsection{Denoising Autoencoders for 3D patches}\n\\label{sec:cnnintro}\nIn this section we present the generative model $\\model{M}: X \\mapsto X$ as a Convolutional Denoising Autoencoder. Autoencoders are generative networks which try to reconstruct the input. A Denoising Autoencoder reconstructs the de-noised version of a noisy input, and is one of the most well known methods for image restoration and unsupervised feature learning \\cite{Xie2012}. We use a denoising autoencoder architecture with convolutional layers, following the success of deep convolutional neural networks (CNNs) in image classification and generation. Instead of images, we use the 3D patches generated from different shapes as input, and show that this height map based representation can be successfully used in CNNs for geometry restoration and surface inpainting. 
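Training such a denoising autoencoder needs pairs of a corrupted input patch and its clean target. One simple way to synthesise such pairs is sketched below, zeroing a random square hole in a height-map patch; the square hole model and its size are assumptions for illustration, not necessarily the corruption used in our experiments:

```python
import numpy as np

def make_training_pair(patch, hole_frac=0.25, rng=None):
    """Return (noisy input, clean target) for denoising-autoencoder training.

    A random square hole covering `hole_frac` of the patch side is set to
    zero, simulating missing height-map bins.
    """
    if rng is None:
        rng = np.random.default_rng()
    noisy = patch.copy()
    n = patch.shape[0]
    h = max(1, int(n * hole_frac))          # hole side length in bins
    i, j = rng.integers(0, n - h, size=2)   # top-left corner of the hole
    noisy[i:i + h, j:j + h] = 0.0           # zero out the 'missing' bins
    return noisy, patch
```

The network is then trained to map `noisy` back to `patch`, so that at test time it can fill holes in patches computed from a damaged mesh.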
\n\nFollowing typical denoising autoencoders, our network has two parts - an encoder and a decoder. The encoder takes a 3D patch with missing data as input and produces a latent feature representation of that patch. The decoder takes this feature representation and reconstructs the complete patch, filling in the missing content. The encoder contains a sequence of convolutional layers which reduce the spatial dimension of the output as we go deeper into the network; this part can therefore also be called the \\textit{downsampling} part. It is followed by an optional fully connected layer, completing the encoding part of the network. The decoding part consists of fractionally strided convolution (or transposed convolution) layers, which increase the spatial dimension back to the original patch size, and hence can be called the \\textit{upsampling} part. The general design is shown in Figure \\ref{table:networks} (Left). \n\n\n\n\\subsubsection{Network design choices}\n\\label{sec:networkdesign}\nOur denoising autoencoder should be designed to meet the needs of patch encoding. The common design choices are presented in Figure \\ref{table:networks} and are discussed in the following paragraphs in detail.\n\n\\textbf{Pooling vs strides} \nFollowing the approach of powerful generative models like the Deep Convolutional Generative Adversarial Network (DCGAN) \\cite{Radford2015}, we use strided convolutions for downsampling and strided transposed convolutions for upsampling, and do not use any pooling layers. For small networks the effect is insignificant, but for large networks the strided version performs better. \n\n\n\\textbf{Patch dimension} We computed patches at resolutions of 16 $\\times$ 16, 24 $\\times$ 24 and 100 $\\times$ 100 with the same patch radius (providing patches at the same scale) on our 3D models. Patches with high resolution capture more details than their low resolution counterparts. However, reconstructing higher dimensional images is also more difficult for a neural network. 
This creates a trade-off which needs to be considered. Higher resolution also requires a bigger network to capture intricate details, as discussed in the following paragraphs. For lower dimensions (24 $\\times$ 24 input), we used two down-sampling blocks followed by two up-sampling blocks. We call this network \\textbf{small\\_4x} as described in Figure \\ref{table:networks}. \nAll the other considered networks take an input of 100 $\\times$ 100 dimensions. The simplest ones, with 3 encoder and decoder blocks each, are \\textbf{multi\\_6x} and \\textbf{6x\\_128}.\n\n\\textbf{Kernel size} Convolutional kernels of large size tend to perform better than smaller ones for image inpainting. \\cite{Mao2016} found filter sizes of (5 $\\times$ 5) to (7 $\\times$ 7) to be optimal, with larger sizes degrading the quality. Following this intuition and the general network of DCGAN \\cite{Radford2015}, we use a filter size of (5 $\\times$ 5) in all the experiments.\n\n\\textbf{FC latent layer} A fully connected (FC) layer can be present at the end of the encoder part. Without it, the propagation of information from one corner of the feature map to the other is not possible. However, adding an FC layer where the latent feature dimension from the convolutional layers is already high causes an explosion in the number of parameters. It is to be noted that for inpainting we want to retain as much information as possible, unlike simple autoencoders where the latent layer is often small for compact feature representation and dimension reduction.\nWe use one network with an FC layer, \\textbf{6x\\_128\\_FC}, with 4096 units for the 100 $\\times$ 100 feature input. 
Note that although the number of output neurons in this FC layer can be considered large (in comparison to classical CNNs for classification), the output dimension is smaller than the input dimension, which causes some loss of information for generative tasks such as inpainting.\n\n\\textbf{Symmetrical skip connections}\nFor deep networks, symmetrical skip connections have been shown to perform better for the task of image inpainting \\cite{Mao2016}. The idea is to provide a short-cut (addition followed by ReLU activation) from the convolutional feature maps to their mirrored transposed-convolution layers in a symmetrical encoding-decoding network. This is particularly helpful for a network of large depth. In our experiments, we consider a deep network of 12 layers with skip connections, \\textbf{long\\_12x\\_SC}, and compare it with its counterpart without skip connections, \\textbf{long\\_12x}. All the networks are summarized in Figure \\ref{table:networks}.\n\n\n\\subsubsection{Training details}\n\\label{sec:training_details}\n3D patches can be straightforwardly treated as images with 1 channel. Instead of a pixel value, we have the height at a particular 2D bin, which can be negative. Depending on the scale at which the patches are computed, this height can depend on the 3D shape from which it is computed. Therefore, we need to perform dataset normalization before training and testing. \n\n\\textbf{Patch normalization}\nWe normalize the patch set between 0 and 0.83 (= 1\/1.2) before training and assign the missing regions or hole-masks the value 1. This lets the network easily identify the holes during training - as the training procedure is technically a blind inpainting method. We found empirically that the network has difficulty in reconstructing fine-scale details when this threshold is lowered further (e.g. 1\/1.5). 
The main idea here is to let the network easily identify the missing regions without sacrificing a big part of the input spectrum.\n\n\n\\textbf{Training} We train on the densely overlapped clean patches computed on a set of clean meshes. Square and circular hole-masks of length 0 to 0.8 times the patch length are created on the fly at random locations on the patches with uniform probability, and passed through the denoising network during training. The output of the network is matched against the original patches without holes using a soft binary cross-entropy loss between 0 and 1. Note that this training scheme is aimed at reconstructing holes smaller than 0.8 times the patch length. The use of patches of moderate length computed on quad orientations enables this method to inpaint holes of small to moderate size.\n\n\\subsection{Inpainting pipeline}\n\\label{sec:testing_inpainting}\nTesting consists of inpainting holes in a given 3D mesh. This involves patch computation on the noisy mesh, patch inpainting through a generative model, and the reconstruction of the final mesh. \nFor a 3D mesh with holes, the regions to be inpainted are completely empty and have no edge connectivity or vertex information. Thus, to establish the final interior mesh connectivity after CNN based patch reconstruction, there has to be a way of inserting vertices and performing triangulation. We use a popular existing method \\cite{Liepa2003} for this purpose of hole triangulation, to get a connected hole-filled mesh based on local geometry. This hole-triangulated mesh is also used for quad mesh computation on the mesh with holes. This is important as quad mesh computation is affected by the presence of holes. 
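The patch normalization described above can be sketched as follows (a minimal numpy illustration, not the authors' code; the helper `normalize_patch` and the example values are our own assumptions). Heights are rescaled to $[0, 1/1.2]$ so that the value 1 remains reserved for marking hole bins:

```python
import numpy as np

# Sketch of the patch normalization: heights rescaled to [0, 1/1.2],
# hole bins set to 1 so the network can identify the missing regions.

def normalize_patch(heights, hole_mask, top=1.0 / 1.2):
    """Rescale a height-map patch to [0, top]; hole bins are set to 1."""
    h = heights.astype(float)
    lo, hi = h.min(), h.max()
    scaled = (h - lo) / (hi - lo) * top if hi > lo else np.zeros_like(h)
    scaled[hole_mask] = 1.0  # holes stand out above the height spectrum
    return scaled

patch = np.array([[-0.2, 0.0], [0.4, 1.0]])        # heights, can be negative
mask = np.array([[False, True], [False, False]])   # one hole bin
out = normalize_patch(patch, mask)
print(out)
```

The gap between the top of the height range ($1/1.2 \approx 0.83$) and the hole value 1 is what lets the network distinguish holes from legitimate surface maxima.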
\n\n\\begin{figure}[t]\n\\centering\n\n\\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figures\/images_3dv17\/dict_Totem} \n\\end{subfigure}%\n\\hfill\n\\begin{subfigure}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_3dv17\/shape_accuracy2} \n\\end{subfigure}%\n\\caption{(Left) Visualization of dictionary atoms learnt from the shape \\textit{Totem} ($m = 16 \\times 16$). (Right) Reconstruction of the shape \\textit{Totem} using local dictionary of size 5 atoms and 100 atoms}\n\\label{fig:dictionariesvis}\n\\end{figure}\n\\section{Experimental Results}\n\\label{sec:results}\nIn this section we provide the different experiments performed to evaluate our design choices of both dictionary and CNN based generative models for mesh processing. We first provide the details of the meshes used in our experiments by introducing our dataset in Section \\ref{sec:dataset_patches}. We then provide the different parameters used for patch computations followed by the mesh restoration results with dictionary learning (Section \\ref{sec:results_patchdict}). We then provide our results of inpainting with CNN based approach and its comparison with our dictionary based approach (Section \\ref{sec:conv_results}). As seen both quantitatively and qualitatively, our the CNN based approach provides better results than the dictionary based approach. We finally end up with a section with the generalizing capability of our local patches through global generative models (by both global dictionary and global denoising autoencoder) and discuss the possibility of having a global universal generative model for local 3D patches.\n\n\\subsection{Dataset}\n\\label{sec:dataset_patches}\nWe considered dataset having 3D shapes of two different types. The first type (\\textbf{Type 1}) consists of meshes that are in general, simple in nature without much surface texture. 
In descending order of complexity, the 5 objects considered are \\emph{Totem, Bunny, Milk-bottle, Fandisk} and \\emph{Baseball}. \\emph{Totem} is of very high complexity, containing a large amount of fine level details, whereas \\emph{Bunny} and \\emph{Fandisk} are standard graphics models with moderate complexity. In addition, we considered 5 models with high surface texture and details (\\textbf{Type 2}), consisting of shoe soles and a human brain, specifically to evaluate our hole-filling algorithm - \\emph{Supernova, Terrex, Wander, LeatherShoe} and \\emph{Brain}. This subset of meshes is also referred to as the \\textit{high texture} dataset in subsequent sections. Therefore, we consider in total 10 different meshes for our evaluation (all meshes are shown in the supplementary material). \n\n\nOther than the models \\textit{Baseball, Fandisk} and \\textit{Brain}, all models considered for the experimentation are reconstructed using a vision based reconstruction system - 3Digify \\cite{3Digify}. Since this system uses structured light, the output models are quite accurate, but do have inherent noise coming from structured light reconstruction and alignment. Nonetheless, because of their high accuracy, we consider these meshes to be `clean' for computing the global patch database. These models were also reconstructed with varying accuracy by changing the reconstruction environment, before being considered for the inpainting experiments. In an extreme case, some of these models are reconstructed using Structure From Motion for the purpose of denoising using their `clean' counterparts, as described in Section \\ref{sec:denoisingres}.\n\n\n\n\n\\textbf{Dataset normalization and scale selection}\nFor normalization, we put each mesh into a unit cube, followed by upsampling (by subdivision) or downsampling (by edge collapse) to bring it to a common resolution. After normalization, we obtained the low resolution mesh by applying Laplacian smoothing with 30 iterations. 
We then performed the automatic quadriangulation procedure of \\cite{Ebke2013} on the low resolution mesh, with the targeted number of faces chosen such that it results in an average quad length of 0.03 for the Type 1 dataset and 0.06 for the Type 2 dataset (for larger holes), which in turn becomes the average patch length of our dataset. The procedure of smoothing and generating the quad mesh can be supervised manually in order to get a better quad mesh for reference frame computation, but on our varied dataset the automatic procedure gave us the desired results. \n\nWe then generated 3D patches from each of the clean meshes using the procedure provided in Section \\ref{sec:3Dpatches}. We chose the number of bins $N$ to be 16 for the Type 1 dataset and 24 for the Type 2 dataset, to match the resolution of the input mesh. To perform experiments in a common space (global dictionary), we also generated patches with a patch dimension of 16 for the Type 2 dataset, at the loss of some output resolution.\n\n\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics[width=0.85\\linewidth]{figures\/images_3dv17\/error_shapecomplexity} \n\\end{subfigure}%\n\\vspace{-0.4cm}\n\\caption{Reconstruction error of different shapes with dictionaries of increasing numbers of atoms.}\n\\label{fig:reconstructioncomplexity_quantitative}\n\\end{figure}\n\n\n\\begin{table}\n\\small\n\\centering\n\\begin{tabular}{cccccc}\n\\toprule\n{} & Mesh & & Patch & Compr & \\\\\n{Meshes} & entities & \\#patches & entities & factor & PSNR \\\\\n\\midrule\nTotem & 450006 & 658 & 12484 & 36.0 & 56.6 \\\\\nMilkbottle & 441591 & 758 & 14420 & 30.6 & 72.3 \\\\\nBaseball & 415446 & 787 & 14974 & 27.7 & 75.6 \\\\\nBunny & 501144 & 844 & 16030 & 31.3 & 60.6 \\\\\nFandisk & 65049 & 874 & 16642 & 3.9 & 62.1 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Results for compression, in terms of the number of entities, with a representation using a global dictionary of 100 atoms. 
Mesh entities is the number of entities for representing the mesh, which is 3 $\\times$ \\#Faces + \\#Vertices. Patch entities is the total number of sparse dictionary coefficients (20 per patch) used to represent the mesh, plus the entities in the quad mesh. Compr factor is the compression factor between the two representations. PSNR is the Peak Signal to Noise Ratio, where the bounding box diameter of the mesh is considered as the peak signal, following \\cite{Praun2003}.}\n\n\\label{table:compression}\n\\end{table}\n\\subsection{Evaluating 3D Patch Dictionaries}\n\\label{sec:results_patchdict}\n\\subsubsection{Dictionary Learning and Mesh Reconstruction}\n\n\\textbf{Dictionary Learning}\nWe learn the local dictionary for each shape with varying numbers of dictionary atoms, with the aim of reconstructing the shape with varying details. Atoms of one such learned dictionary are shown in Figure \\ref{fig:dictionariesvis} (Left). Observe the `stripe like' structures in the dictionary of \\textit{Totem}, in accordance with the fact that \\textit{Totem} has more line-like geometric textures. \n\n\\textbf{Reconstruction of shapes}\nWe then perform reconstruction of the original shape using the local dictionaries with different numbers of atoms (Section \\ref{sec:shapeencoding}). \nFigure \\ref{fig:dictionariesvis} (Right) shows qualitatively the difference in output shape when reconstructed with dictionaries of 5 and 100 atoms. \nFigure \\ref{fig:reconstructioncomplexity_quantitative} shows the plot between the \\textit{Global Reconstruction Error} - the mean Point to Mesh distance of the vertices of the reconstructed mesh and the reference mesh - and the number of atoms in the learned dictionary for our Type 1 dataset. 
We note that the reconstruction error saturates after a certain number of atoms (50 for all).\n\n\\textbf{Potential for Compression} The reconstruction error is low after a certain number of atoms in the learned dictionary, even when a global dictionary is used for reconstructing all the shapes (more on shape independence in Section \\ref{sec:generalization}). Thus, only the sparse coefficients and the connectivity information need to be stored to represent a mesh using a common global dictionary, which can be used as a means of mesh compression. Table \\ref{table:compression} shows the results of information compression on the Type 1 dataset. \n\n\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}{1\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_3dv17\/missingvqualitative1-compressed} \n\\end{subfigure}\n\\caption{Inpainting of the models with 50\\% missing vertices (Left - noisy mesh, Middle - inpainted mesh, Right - ground truth) of \\textit{Terrex} and \\textit{Bunny}, using the local dictionary. 
Here we use the quad mesh provided at testing time.}\n\\label{fig:missingvertices}\n\n\\end{figure}\n\n\n\n\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{l|cc|cc}\n\\hline\n{Missing Ratio} & \\multicolumn{2}{c|}{0.2} & \\multicolumn{2}{c}{0.5} \\\\\n{} & ours & \\cite{Zhong2016} & ours & \\cite{Zhong2016} \\\\\n\\hline\n\nbunny & \\textbf{1.11e-3} & 1.90e-2 & \\textbf{1.62e-3} & 2.20e-2 \\\\\nfandisk & \\textbf{1.32e-3} & 8.30e-3 & \\textbf{1.34e-3} & 1.20e-2\\\\\n\\hline\n\\end{tabular}\n\\caption{RMS inpainting error of missing vertices for our method using the local dictionary and its comparison to \\cite{Zhong2016}.}\n\\label{table:zhongcomp}\n\n\\end{table}\n\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{1\\linewidth}\n \\centering\n \\includegraphics[width=.75\\linewidth]{figures\/images_3dv17\/qualitative_shoe} \n\\end{subfigure}%\n\n\\begin{subfigure}{1\\linewidth}\n \\centering\n \\includegraphics[width=0.75\\linewidth]{figures\/images_3dv17\/qualitative_milkbottle} \n\\end{subfigure}%\n\\caption{Qualitative analysis of the inpainting algorithm on \\textit{Supernova} and \\textit{Milk-bottle}. From left to right - mesh with holes, hole filling with \\cite{Liepa2003}, our results from the global dictionary, and the ground truth mesh. Detailed visualizations of the results on other meshes are presented in the supplementary material.}\n\\label{fig:inpaintqualitative}\n\n\\end{figure}\n\n\n\n\\subsubsection{Surface Inpainting}\n\n\\textbf{Recovering missing geometry}\nTo evaluate our algorithm for geometry recovery, we randomly label a certain percentage of vertices in the mesh as missing. The reconstructed vertices are then compared with the original ones. A visualization of our results is shown in Figure \\ref{fig:missingvertices}. Zoomed views highlighting the captured details, as well as results on other objects, are provided in the supplementary material. 
We compare our results with \\cite{Zhong2016}, which performs the similar task of estimating missing vertices, on the publicly available meshes \\textit{Bunny} and \\textit{Fandisk}, and provide the recovery error measured as the Root Mean Square Error (RMSE) of the missing coordinates in Table \\ref{table:zhongcomp}. Because of the unavailability of the other two meshes used in \\cite{Zhong2016}, we limit our comparison to these meshes. As seen in the table, we improve over them by a large margin.\n\nThis experiment also covers the case when the coarse mesh of the noisy data is provided to us, which we can directly use for computing the quad mesh and inferring the final mesh connectivity (Section \\ref{sec:connectedmeshrec}). This is true for the application of recovering a damaged part. If the coarse mesh is not provided, we can easily perform Poisson surface reconstruction using the non-missing vertices, followed by Laplacian smoothing, to get our low resolution mesh for quadriangulation. Since the low resolution mesh is needed just for the shape outline without any details, Poisson surface reconstruction performs sufficiently well even when 70\\% of the vertices are missing in our meshes.\n\n\n\\textbf{Hole filling}\nWe systematically punched holes of different sizes (limited to the patch length) a uniform distance apart in the models of our dataset to create a noisy test dataset. We follow the procedure of Section \\ref{sec:testing_inpainting} on this noisy dataset and report our inpainting results in Table \\ref{table:inpaintingall}. Here we use the mean of the Cloud-to-Mesh error of the inpainted vertices as our error metric. Please note that the noisy patches are independently generated on their own quad mesh. No information about the reference frames from the training data is used for patch computation on the noisy data. Also, note that this logically covers the inpainting of the missing geometry of a scan due to occlusions. 
We use both local and global dictionaries for filling in the missing information and find the results to be quite similar to each other. \n\nFor a baseline comparison, we computed the error of the popular hole-filling algorithm of \\cite{Liepa2003}, available in MeshLab \\cite{Cignoni2008}. Here the comparison is meant to point out the improvement achieved using a data driven approach over a purely geometric one. We could not compare our results with \\cite{Zhong2016} because of the lack of a systematic evaluation of hole-filling in their paper. As can be seen, our method is clearly better than \\cite{Liepa2003} both quantitatively and qualitatively (Figure \\ref{fig:inpaintqualitative}). The focus of our evaluation here is on the Type 2 dataset - which captures complex textures. On this particular dataset we also performed the hole filling procedure using self-similarity, where we learn a dictionary from the patches computed on the noisy mesh having holes, and use it to reconstruct the missing data. The results obtained are very similar to those using the local or global dictionary (Table \\ref{table:inpaintselfsimilar}).\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}{0.65\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_3dv17\/denoising_totemrec-compressed} \n\\end{subfigure}\\begin{subfigure}{0.35\\linewidth}\\centering\\includegraphics[width=0.6\\linewidth]{figures\/images_3dv17\/keyboard_denoised} \n\\end{subfigure}%\n\\caption{Denoising meshes using a clean patch dictionary of a similar object. (Left) Results on \\textit{Totem} (from left to right) - noisy reconstruction from SFM, our denoising using the patch dictionary from a clean reconstruction, denoising by Laplacian smoothing \\cite{Sorkine2004}, and the high quality clean mesh with a different global configuration. (Right) Result for the mesh \\textit{Keyboard} with the same experiment. 
Zoomed versions of similar results are provided in the supplementary material.}\n\\label{fig:denoising_qualitative}\n\\end{figure}\n\n\\begin{table}[t]\n\\centering\n\\small\n\\begin{tabular}{lrrr}\n\\toprule\n{} & \\cite{Liepa2003} & Our - Local & Our - Global \\\\\n\\midrule\nSupernova & 0.001646 & \\textbf{0.000499} & 0.000524 \\\\\nTerrex & 0.001258 & 0.000595 & \\textbf{0.000575} \\\\\nWander & 0.002214 & 0.000948 & \\textbf{0.000901} \\\\\nLeatherShoe & 0.000854 & 0.000569 & \\textbf{0.000532} \\\\\nBrain & 0.002273 & 0.000646 & \\textbf{0.000587} \\\\ \\hline\nMilk-bottle & 0.000327 & 0.000126 & \\textbf{0.000123} \\\\\nBaseball & 0.000158 & \\textbf{0.000138} & 0.000168 \\\\\nTotem & 0.001065 & 0.001065 & \\textbf{0.001052} \\\\\nBunny & \\textbf{0.000551} & 0.000576 & 0.000569 \\\\\nFandisk & 0.001667 & 0.000654 & \\textbf{0.000634} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Mean inpainting error on our dataset, with hole sizes 0.015, 0.025 and 0.035 for the Type 2 dataset (top block of the table) and 0.01 and 0.02 for the Type 1 dataset (bottom block of the table). \\textit{Local} uses the local dictionary learned from the clean mesh of the corresponding shape and \\textit{Global} uses a global dictionary learned from the entire dataset.}\n\\label{table:inpaintingall}\n\n\\end{table}\n\n\\begin{table}[th]\n\\centering\n\\small\n\\begin{tabular}{lrr}\n\\toprule\n{} & \\cite{Liepa2003} & Self-Similar \\\\\n\\midrule\nSupernova & 0.001162 & \\textbf{0.000401} \\\\\nTerrex & 0.000900 & \\textbf{0.000585} \\\\\nWander & 0.001373 & \\textbf{0.000959} \\\\\nLeatherShoe & 0.000596 & \\textbf{0.000544} \\\\\nBrain & 0.001704 & \\textbf{0.000614} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Mean inpainting error comparison with a self-similar dictionary of 100 atoms. 
The hole size considered is 0.035.}\n\\label{table:inpaintselfsimilar}\n\\end{table}\n\n\n\nWith smaller holes, the method of \\cite{Liepa2003} performs as well as our algorithm, as shape information is not present at such a small scale. The performance of our algorithm becomes noticeably better as the hole size increases, as shown in Figure \\ref{fig:brain_holewise_algo}. This shows the advantage of our method for moderately sized holes. \n\n\\textbf{Improving quality of noisy reconstruction}\n\\label{sec:denoisingres}\nOur algorithm for inpainting can be easily extended for the purpose of denoising. We can use the dictionary learned on the patches from a clean or high quality reconstruction of an object to improve the quality of a low quality reconstruction of that object. Here we approximate the noisy patch with its closest linear combination in the dictionary, following Equation \\ref{eq:sparsity}. Because our patches are local, the low quality reconstruction need not be globally similar to the clean shape. This is depicted in Figure \\ref{fig:denoising_qualitative} (Left), where a different configuration of the model \\textit{Totem} (with the wings turned compared to the horizontal position in its clean counterpart), reconstructed with structure-from-motion and containing noisy bumps, has been denoised using the patch dictionary learnt on its clean version reconstructed by structured light. 
A similar result on \\textit{Keyboard} is shown in Figure \\ref{fig:denoising_qualitative} (Right).\n\\begin{figure*}[th]\n\\centering\n\\begin{subfigure}[b]{0.33\\linewidth}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/images_3dv17\/brain_holewise_algo} \n \\vspace{-0.25cm}\n \\caption{} \n \\label{fig:brain_holewise_algo}\n\\end{subfigure}\\begin{subfigure}[b]{0.33\\linewidth}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/images_3dv17\/recerror_global_local} \n \\vspace{-0.25cm}\n \\caption{} \n \\label{fig:recerror_global_local}\n\\end{subfigure}\\begin{subfigure}[b]{0.33\\linewidth}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/images_3dv17\/recerror_noobject} \n \\vspace{-0.25cm}\n \\caption{} \n \\label{fig:reconstructioncomplexity}\n\\end{subfigure}%\n\\vspace{-0.4cm}\n\\caption{(a) Inpainting error vs hole-size to patch-size ratio for \\textit{Brain}, inpainted using the global dictionary. The patch size here is 0.062 (patch radius $\\approx$ 0.044). Plots for other shapes are provided in the supplementary material. (b) Comparison of the reconstruction error of \\textit{Totem} using local and global dictionaries with different numbers of atoms. For better visualization, the X axis is provided in logarithmic scale. (c) Reconstruction error of \\textit{Totem} with global dictionaries (with 500 atoms) having patches from different numbers of shapes.}\n\\end{figure*}\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Evaluating Denoising Autoencoders}\n\\label{sec:conv_results}\n\nWe use the same dataset mentioned in the beginning of this section for evaluating Convolutional Denoising Autoencoders and put more emphasis on the \\textit{high texture dataset}. We compute another set of patches with resolution 100 $\\times$ 100 (in addition to computing patches with resolution 24 $\\times$ 24 as presented in Section \\ref{sec:dataset_patches}) for performing a fine level analysis of patch reconstruction w.r.t. 
the network complexity.\n\n\n\n\n\\begin{figure*}\n\\centering\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics[width=0.85\\linewidth]{figures\/images_wacv18\/plot_exp_comp} \n\\end{subfigure}%\n\\caption{Qualitative results of our inpainting method with different patches of dimension 100 $\\times$ 100 (24 $\\times$ 24 for \\textit{small\\_4x}) with global networks. Patches are taken at random from the test set of meshes of shoe soles and brain, and random masks of variable size, shown in cyan (light blue), are chosen for the task of inpainting. Results of the inpainted patches with different network architectures are shown in the bottom rows. }\n\\label{fig:qualitative_test}\n\\end{figure*}\n\n\\begin{table*}\n\\centering\n\\small\n\\begin{tabular}{l|rr|rrrrrr}\n\\toprule\nMeshes & \\cite{Liepa2003} & Global Dict & \\textbf{small\\_4x} & \\textbf{multi\\_6x} & \\textbf{6x\\_128} & \\textbf{6x\\_128\\_FC} & \\textbf{l\\_12x} & \\textbf{l\\_12x\\_SC} \\\\\n\\midrule\nSupernova & 0.001646 & 0.000524 & 0.000427 & 0.000175 & 0.000173 & 0.000291 & 0.000185 & 0.000162 \\\\\nTerrex & 0.001258 & 0.000575 & 0.000591 & 0.000373 & 0.000371 & 0.000488 & 0.000395 & 0.000369\\\\\nWander & 0.002214 & 0.000901 & 0.000894 & 0.000631 & 0.000628 & 0.001033 & 0.000694 & 0.000616 \\\\\nLeatherShoe & 0.000854 & 0.000532 & 0.000570 & 0.000421 & 0.000412 & 0.000525 & 0.000451 & 0.000407 \\\\\nBrain & 0.002273 & 0.000587 & 0.000436 & 0.000166 & 0.000171 & 0.000756 & 0.000299 & 0.000165 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Mean inpainting error for our high texture dataset with hole sizes 0.015, 0.025 and 0.035, with single CNNs of different architectures and their comparison to the global dictionary based method. 
As expected, the error decreases with the increase in complexity (network length, skip connections, etc.).}\n\n\\label{table:inpaintingcnn}\n\\end{table*}\n\n\n\\textbf{Training and Testing} We train different CNNs from the clean meshes as described in the following sections. For testing or hole filling, we systematically punched holes of different sizes (limited to the patch length) a uniform distance apart in the models of our dataset to create a noisy test dataset. The holes are triangulated to get connectivity as described in Section \\ref{sec:testing_inpainting}. Finally, noisy patches are generated on a different set of quad meshes (reference frames) computed on the hole-triangulated mesh, so that we use a different set of patches during testing. More on the generalization capability of the CNNs is discussed in Section \\ref{sec:generalization}.\n\n\n\n\\begin{figure*}\n\\centering\n\\small\n\\begin{tabular}{|cccccc|}\n\nHoles & GT & \\cite{Liepa2003} & Global Dict & small\\_4x & long\\_12x\\_SC \\\\\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot00.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot01.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot02.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot03.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot04.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot05.jpg}\\\\\n\\end{tabular}\\begin{subfigure}{0.12\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_wacv18\/snaps\/totem_quads\/bh\/snapshot00} \n\\end{subfigure}%\n\\begin{subfigure}{0.12\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_wacv18\/snaps\/totem_quads\/bh\/snapshot01} 
\n\\end{subfigure}\n\\caption{(Left) Qualitative results of hole filling on the mesh Supernova with a hole radius of 0.025 with Global generative methods. (Right) Example of the quad mesh used in training (Left) and testing (Right) for the mesh Totem. Best viewed when zoomed digitally. Enlarged version and more results are provided in the supplementary material. }\n\\label{fig:inpaint_mesh_qual}\n\\end{figure*}\n\n\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{lccc}\n\\toprule\nMeshes & \\cite{Liepa2003} & Local Dictionary& Local CNN - small\\_4x \\\\\n\\midrule\nSupernova & 0.001646 & 0.000499 & 0.000415 \\\\\nTerrex & 0.001258 & 0.000595 & 0.000509 \\\\\nWander & 0.002214 & 0.000948 & 0.000766 \\\\\nLeatherShoe & 0.000854 & 0.000569 & 0.000512 \\\\\nBrain & 0.002273 & 0.000646 & 0.000457 \\\\\n\n\\bottomrule\n\\end{tabular}\n\\caption{Mean inpainting error of hole size 0.015, 0.025 and 0.035 for high texture dataset which uses Local patches generated on the same clean mesh of the corresponding shape.}\n\\label{table:inpaint_local}\n\\end{table}\n\n\n\\subsubsection{Hole filling on a single mesh}\n\\label{sec:singlemesh}\nAs explained before, our 3D patches from a single mesh are sufficient in amount to train a generative model for that mesh.\nNote that we still need to provide an approximately correct scale for the quad mesh computation of the noisy mesh, so that the training and testing patches are not too different by size. \nTable \\ref{table:inpaint_local} shows the result of hole filling using our smallest network - \\textit{small\\_4x} in terms of mean of the Cloud-to-Mesh error of the inpainted vertices and its comparison with our linear dictionary based inpainting results. We also provide the results from \\cite{Liepa2003} in the table for better portray of the comparison. 
In this experiment, we learn one CNN per mesh on the patches of the clean input mesh (similar to the local dictionary model), and test it on the hole data as explained in the above section. As seen, our smallest network beats the linear approach to surface inpainting. \n\nWe also train a long network \\textit{long\\_12x\\_SC} (our best performing global network) with an offset factor of $k = 7$, giving us a total of 28 overlapping patches per quad location for the model \\textit{Supernova}, and we show the qualitative result in Figure \\ref{fig:cnn_length} (Left). The figure verifies qualitatively that, with a sufficient number of dense overlapping patches and a more complete CNN architecture, our method is able to inpaint surfaces with very high accuracy.\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{Global Denoising Autoencoder}\n\\label{sec:result_globalcnn}\nEven though the inputs to the CNN are local patches, we can still create a single CNN designed for repairing a set of meshes, provided the meshes are pooled from a similar domain. This is analogous to the \\textit{global dictionary}, where the dictionary was learnt from the patches of a pool of meshes. But to incorporate more variation between the meshes in the set, the network needs to be well designed. \nThis is shown in the column \\textit{global CNN} of Table \\ref{table:inpaint_global}, where our inpainting result with a single CNN (\\textit{small\\_4x}) for common meshes (Type 1 dataset) is comparable to our linear global-dictionary based method (column \\textit{global dictionary}), but not better. Under the premise that CNNs are more powerful than linear dictionary based methods, we perform additional experiments incorporating all the careful design choices discussed in Section \\ref{sec:networkdesign} for creating global CNNs for the purpose of inpainting different meshes in a similar domain. 
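With a dense offset factor such as $k = 7$, every surface location is covered by many overlapping patches, whose per-location predictions must be fused into a single height value. A minimal sketch of such fusion by per-cell averaging on a height-map grid (the grid layout, names and the averaging rule are our assumptions, not the paper's implementation):

```python
import numpy as np

def fuse_overlapping_patches(patches, offsets, out_shape):
    """Average per-cell predictions of overlapping height-map patches.
    patches: list of (h, w) arrays; offsets: top-left (row, col) of each patch."""
    acc = np.zeros(out_shape)   # sum of predicted heights per cell
    cnt = np.zeros(out_shape)   # number of patches covering each cell
    for p, (r, c) in zip(patches, offsets):
        h, w = p.shape
        acc[r:r + h, c:c + w] += p
        cnt[r:r + h, c:c + w] += 1
    cnt[cnt == 0] = 1           # leave uncovered cells at zero
    return acc / cnt

# Two half-overlapping constant patches: the overlap averages to 1.5.
fused = fuse_overlapping_patches(
    [np.ones((4, 4)), 2 * np.ones((4, 4))], [(0, 0), (0, 2)], (4, 6))
```

Averaging overlapping predictions suppresses per-patch reconstruction noise, which is one motivation for using dense overlaps.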
The objectives of these experiments are 1) to evaluate different denoising autoencoder ideas for inpainting in the context of height map based 3D patches, 2) to verify that carefully designed generative CNNs perform better than the linear dictionary based methods, and 3) to show how to design a single denoising autoencoder for inpainting meshes from a similar domain, or for inpainting meshes across a varied domain, when the number of meshes is not too high. We, however, do not claim that this procedure makes it possible to have a single model (be it a global dictionary or a global CNN) capable of learning and inpainting across a large number of meshes (say, all meshes in ShapeNet); nor is this our intention. \n\nFigure \\ref{fig:qualitative_test} provides qualitative results for the different networks, showing the patches reconstructed from the masked incomplete patches. The results show that the quality of the reconstruction increases with the network complexity. In terms of capturing overall details, the network with an FC layer seems to reconstruct patches close to the original, but with a lack of contrast. This is reflected in the quantitative results, where the network with FC performs worse than most of the networks. The quantitative results are shown in Table \\ref{table:inpaint_local}. The best result, qualitatively and quantitatively, is achieved by \\textbf{long\\_12x\\_SC} - the longest network with symmetrical skip connections. Figure \\ref{fig:cnn_length} (Right) provides more insights on the importance of the skip connections. 
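The masked incomplete patches that the networks reconstruct can be simulated from clean height-map patches by blanking out a region. A minimal sketch of generating such a training pair (the disc-shaped mask and the zero fill value are our assumptions):

```python
import numpy as np

def mask_patch(patch, center, radius, fill=0.0):
    """Simulate a hole in a height-map patch: set the values inside a disc
    to `fill` and return the masked input together with the binary hole mask."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    hole = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    masked = patch.copy()
    masked[hole] = fill
    return masked, hole

# A 32x32 clean patch with a simulated hole of radius 6 in its centre.
clean = np.random.default_rng(1).normal(size=(32, 32))
masked, hole = mask_patch(clean, center=(16, 16), radius=6)
```

A denoising autoencoder is then trained to map `masked` back to `clean`, with the loss typically evaluated over the whole patch or only inside `hole`.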
Visualizations of the reconstructed hole-filled mesh are provided in Figure \\ref{fig:inpaint_mesh_qual} (Left).\n\n\n\n\n\n\n\\subsection{Generalisation capability}\n\\label{sec:generalization}\n\\textbf{Patches from common pool}\nWe perform reconstruction of \\textit{Totem} using both the local dictionary and the global dictionary with different numbers of atoms, to determine whether the reconstruction error, or the shape information encoded by the dictionary, depends on where the patches come from at training time. We observed that when the number of dictionary atoms is sufficiently large (200 - 500), the global dictionary performs as well as the local dictionary (Figure \\ref{fig:recerror_global_local} ). This is also supported by the superior performance of the global dictionary in terms of hole filling. \n\nKeeping the number of atoms fixed at the value where the performance of the local and global dictionaries becomes indistinguishable (500 in our combined dataset), we learned global dictionaries using the patches from different shapes, adding one shape at a time. The reconstruction error of \\textit{Totem} using these global dictionaries varied very little. However, we notice a steady increase in the reconstruction error with the number of objects used for learning, which levels off after a certain number of objects. After that point (6 objects), adding more shapes for learning makes no difference in the final reconstruction error (Figure \\ref{fig:reconstructioncomplexity}). This verifies our hypothesis that the reconstruction quality does not deteriorate significantly with the size of the dataset of common meshes used for learning.\n\n\\textbf{Different test meshes}\nWe perform experiments to see how the inpainting method generalizes among different shapes, using the Type 1 dataset of \\cite{Sarkar2017a} consisting of general shapes like Bunny, Fandisk and Totem. These meshes do not have a high amount of specific surface patterns. 
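The dependence of the reconstruction error on the number of dictionary atoms can be sketched numerically. Below, a PCA basis serves as a hypothetical stand-in for the learned dictionary of \cite{Sarkar2017a} (which uses sparse coding; the dense least-squares coding here is a deliberate simplification), illustrating that the error decreases monotonically as atoms are added:

```python
import numpy as np

def reconstruction_error(patches, dictionary):
    """Mean l2 error when reconstructing flattened patches by least-squares
    coding over the dictionary columns (a dense stand-in for sparse coding)."""
    coeffs, *_ = np.linalg.lstsq(dictionary, patches.T, rcond=None)
    recon = (dictionary @ coeffs).T
    return float(np.mean(np.linalg.norm(patches - recon, axis=1)))

rng = np.random.default_rng(2)
patches = rng.normal(size=(500, 64))              # flattened 8x8 height maps
# Top principal directions as a simple "learned" dictionary.
_, _, vt = np.linalg.svd(patches - patches.mean(0), full_matrices=False)
err_small = reconstruction_error(patches, vt[:10].T)   # 10 atoms
err_large = reconstruction_error(patches, vt[:50].T)   # 50 atoms
```

Since the span of the first 10 atoms is contained in that of the first 50, the least-squares residual can only shrink, mirroring the saturation behaviour observed in Figure \ref{fig:recerror_global_local}.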
Column \\textit{global CNN ex} of Table \\ref{table:inpaint_global} shows the quantitative results of the network \\textit{small\\_4x} inpainting meshes when trained on the patches of other meshes. It is seen that if the shape being inpainted does not have too much characteristic surface texture, the inpainting method generalizes well. Note that this result is still better than the geometry based inpainting result of \\cite{Liepa2003}. Thus, it can be concluded that our system is valid for inpainting simple and standard surface meshes (e.g., \\textit{Bunny}, \\textit{Milk-bottle}, \\textit{Fandisk}). \n\nHowever, for complicated and characteristic surfaces (e.g., the shoe dataset), we need to learn on the surface itself, because of the inherent nature of the input to our CNN - \\textit{local patches} (instead of global features that take an entire mesh as input), which are supposed to capture the surface details of their own mesh. Evaluating the generalizing capability of such a system requires patch computation at different locations for the training and testing sets, instead of on a different mesh altogether. As explained before, in all our inpainting experiments, we explicitly made sure that the patches used during testing do not belong to the training set, by manually computing a different set of quad meshes (Reference frames) for the hole-triangulated mesh. To make absolutely sure that the testing is done on a different set of patches, we manually tuned different parameters in \\cite{Ebke2013} for quadriangulation. One example of such a pair of quad meshes for the mesh Totem is shown in Figure \\ref{fig:inpaint_mesh_qual} (Right). \n\n\n\n\n\n\n\n\n\nThe generalization capability can also be tested across surfaces that are similar in nature but come from different samples. \nThe mesh Stone Wall from \\cite{Zhou2013} provides a good example of such data, as it has two sides of the wall of similar nature. 
We fill holes on one side by training the CNN on the other side and show the qualitative result in Figure \\ref{fig:wall}. This indicates that the CNN generalizes well to reconstructing unseen patches.\n\n\\textbf{Discussion on texture synthesis} We add a short discussion on texture synthesis, as a good part of our evaluation is focused on a dataset of meshes rich in texture.\nAs stated in the related work, both dictionary based \\cite{Aharon2006} and BM3D based \\cite{Dabov2007} algorithms are well known to work with textures for denoising 2D images. Both approaches have been extended to denoising 3D surfaces. Because of the patch matching step in BM3D (patches are matched and kept in a block if they are similar), it is not simple to extend it to 3D inpainting with moderately sized holes, as a good matching technique has to be proposed for incomplete patches. Iterative Closest Point (ICP) is a promising means of such grouping, as used by \\cite{Rosman2013} to extend BM3D to 3D point cloud denoising. Since the contribution in \\cite{Rosman2013} is limited to denoising surfaces, we could not compare our results with it - further extending \\cite{Rosman2013} to inpainting is not trivial and requires further investigation. Instead, we compared our results with the dictionary based inpainting algorithm proposed in \\cite{Sarkar2017a}.\n\nInpainting of repeating structures is well studied in \\cite{Pauly2008}. Because their code is unavailable and no results on standard meshes are provided, we could not compare our results to theirs. We also do not claim our method to be superior to theirs in the high-texture scenario, though we show a high quality result with an indistinguishable inpainted region for one of the meshes in Figure \\ref{fig:cnn_length} (Left) using a deep network. However, we do claim our method to be more general, and to work with shapes that have no explicit repeating patterns (e.g., 
the Type 1 dataset), which is not possible with \\cite{Pauly2008}.\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{0.25\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/supernovaqual.pdf} \n\\end{subfigure} \n\\begin{subfigure}{0.6\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/plot_cnnlength.pdf} \n\\end{subfigure}\n\\caption{(Left) Qualitative result of inpainting on a single mesh with an overlap factor of $k = 7$. (Right) Mean inpainting error for high-texture meshes w.r.t.\\ the number of parameters in the CNN. The inpainting error decreases with increasing network depth, saturates at a certain point, and worsens if the depth is increased further. The presence of symmetrical skip connections decreases the error further, showing their importance for training longer networks.}\n\\label{fig:cnn_length}\n\\end{figure}\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{lcc|c}\n\\toprule\n{} & global & global CNN & global CNN ex\\\\\n& dictionary & small\\_4x & small\\_4x \\\\\n\\midrule\nMilk-bottle & 0.000123 & 0.000172 & 0.000187 \\\\\nBaseball & 0.000168 & 0.000113 & 0.000138\\\\\nTotem & 0.001052 & 0.001038 & 0.001406\\\\\nBunny & 0.000569 & 0.000780 & 0.000644\\\\\nFandisk & 0.000634 & 0.000916 & 0.000855\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{(Left) Mean inpainting error for hole sizes 0.01, 0.02 and 0.03 on the common mesh dataset using global models. For the column \\textit{global CNN}, we use a single global CNN (small\\_4x) trained on the local patches of all the meshes. The result of this small network is comparable to that of the linear global dictionary, but not better. This shows that there is scope for improvement with a better network design for CNNs. \n(Right) In the column \\textit{global CNN ex}, for each mesh, we use a global CNN (small\\_4x) trained on the local patches of all the meshes except itself. 
More discussion can be found in Section \\ref{sec:generalization}.\n}\n\\label{table:inpaint_global}\n\\end{table}\n\n\\subsection{Limitation and failure cases}\n\\label{sec:failurecases}\n\n\\noindent\n\\textbf{General limitations} - The quad mesh on the low resolution mesh provides a good way of achieving stable orientations for computing moderate-length patches on 3D surfaces. However, in highly complicated areas such as joints, and for large patch lengths, the height map based patch description becomes invalid due to multiple overlapping surfaces over the reference quad, as shown in Figure \\ref{fig:failurecase} (left). Also, the method in general does not work for full shape completion, where the entire global outline has to be predicted.\n\n\\noindent\n\\textbf{Generative network failure cases} - It is observed that small missing regions are reconstructed accurately by our long generative networks. Failure cases arise when the missing region is large. In the first case, the network reconstructs the region according to the patch context slightly differently from the ground truth (Figure \\ref{fig:failurecase}-A). The second case is similar: the network misses fine details in the missing region, but still reconstructs well according to the other dominant features. The third case, which is often seen in the network with FC, is a lack of contrast in the final reconstruction (Figure \\ref{fig:failurecase}-C). 
Failure cases for the smaller networks can be seen in Figure \\ref{fig:qualitative_test}.\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}[b]{0.55\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_wacv18\/stonewall} \n \\caption{Experiment on Stone Wall} \n \\label{fig:wall}\n\\end{subfigure}%\n\\begin{subfigure}[b]{0.45\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/failurecase_badpatch} \n \\caption{Failure cases.}\n \\label{fig:failurecase}\n\\end{subfigure}\n\\caption{(a) Scanned mesh of the Stone Wall \\cite{Zhou2013}, which has two sides of similar nature, shown at the top. The CNN \\textbf{6x\\_128} was trained on the patches generated on one side (Top Left) to recover the missing details on the other side (Top Right), with the result shown at the bottom. \n(b) Failure cases - (Left) bad or invalid patches (point cloud with RF at the top, and its corresponding broken and invalid surface representation at the bottom) in complicated areas of a mesh. (Right) Three failure case scenarios of the CNN.\n}\n\\end{figure}\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn this paper, we presented a first attempt at using generative models on 3D shapes with a representation and parameterization other than voxel grids or 2D projections. To this end, we proposed a new method for encoding the 3D surface of arbitrary shapes using rectangular local patches.\nWith these local patches we designed generative models, inspired by those for 2D images, for inpainting moderately sized holes, and showed our results to be better than those of the geometry based methods. \nWith this, we identified an important direction of future work - the exploration of CNNs applied to 3D shapes in a parameterization different from the generic voxel representation. 
\nIn continuation of this work, we would like to extend the local quad based representation to a global shape representation based on mesh quadriangulation, as it inherently provides the grid-like structure required for the application of convolutional layers. This, we hope, will provide an alternative way of 3D shape processing in the future.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nReproducing kernel Hilbert spaces were introduced by Zaremba \\cite{za1907} and Mercer \\cite{me1909} and were first studied in a systematic fashion by Aronszajn \\cite{aro50} in 1950. Ever since, these spaces have played an important role in many branches of mathematics, such as complex analysis \\cite{duschu04}, approximation theory \\cite{wah90} and, more recently, learning theory and classification due to the celebrated representer theorem \\cite{schhesm01}. Another field with manifold connections to reproducing kernels is frame theory and its relatives.\n\nDiscrete frames were introduced in the 1950's in the context of nonharmonic Fourier analysis \\cite{duscha52} and were later generalized to continuous frames on arbitrary positive measure spaces in the early 1990's \\cite{alanga93,ka90}. Reproducing kernel theory can be employed to construct continuous frames and, conversely, frame theory can be used to study reproducing kernels \\cite{jo06}.\n\nAlthough frames are convenient objects to work with, there exists a large reservoir of interesting systems that are complete and do not satisfy both frame conditions. Therefore, semi-frames \\cite{jpaxxl09,jpaxxl12} and reproducing pairs \\cite{ansptr15,spexxl14,spexxl16} have been introduced. An upper (resp. lower) semi-frame is a complete system that only satisfies the upper (resp. lower) frame inequality. 
A reproducing pair is a pair of mappings that generates a bounded and boundedly invertible analysis\/synthesis process without assuming any frame inequality.\n\nThis paper is divided into three major parts portraying different connections between frames, reproducing pairs and reproducing kernel Hilbert spaces. In the first part we investigate systems taking values in a reproducing kernel Hilbert space. We present an explicit expression for the reproducing kernel in terms of a reproducing pair. This is an extension of the results from \\cite{paul09,raca05}. Moreover, we introduce a novel necessary condition for a vector family to form a frame.\n\n\nThe second part is devoted to studying the redundancy of (semi-)frames. In the discrete case, the redundancy of a frame measures how much the Hilbert space is oversampled by the frame, see for example \\cite{bocaku11,cacahe11}. It is however impossible to directly generalize the notion of redundancy to continuous (semi-) frames. The approach chosen in \\cite{hedera00} thus takes a detour via the concept of Riesz bases, i.e., non-redundant discrete frames. A discrete frame $\\Psi$ is a Riesz basis if its analysis operator $C_\\Psi$ is surjective. Following \\cite{hedera00}, the redundancy of a (semi-)frame is defined by\n\\begin{equation*}\nR(\\Psi):=\\dim({\\sf Ran}\\, C_\\Psi {}^\\bot).\n\\end{equation*}\nIt has been observed in several articles \\cite{hedera00,hogira13,jale15} that $R(\\Psi)$ depends on the underlying measure space $(X,\\mu)$. In particular, if a (lower semi-)frame has finite redundancy, then it follows that $(X,\\mu)$ is atomic. The proofs in the aforementioned papers all rely in one way or the other on the following argument: If the redundancy of a frame is zero (finite), then\n$$\n\\inf\\big\\{\\mu(A):\\ A\\mbox{ measurable and }\\mu(A)>0\\big\\}=C>0,\n$$\nwhich implies that $(X,\\mu)$ is atomic. 
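In finite dimensions the redundancy is transparent: a frame of $N$ vectors spanning $\mathbb{R}^n$ has an analysis operator of rank $n$, so $R(\Psi)=N-n$, recovering the classical redundancy count. A small numerical illustration (our own, not taken from the cited works):

```python
import numpy as np

def redundancy(frame_vectors):
    """dim((Ran C_Psi)^perp): for a finite frame, the analysis operator
    maps R^n into R^N = l^2({1,...,N}), so the orthogonal complement of
    its range has dimension N - rank."""
    analysis = np.asarray(frame_vectors, dtype=float)  # row i encodes <., psi_i>
    N = analysis.shape[0]
    return N - np.linalg.matrix_rank(analysis)

# Five vectors spanning R^3: redundancy 5 - 3 = 2; an orthonormal basis
# (a non-redundant frame, i.e. a Riesz basis) has redundancy 0.
frame = np.vstack([np.eye(3), [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]])
r = redundancy(frame)
```

The continuous case studied below is precisely where this simple dimension count breaks down and the measure-theoretic structure of $(X,\mu)$ enters.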
We will give a new proof here using the reproducing kernel Hilbert space structure of the range of $C_\\Psi$. It is interesting to note that\nupper semi-frames behave essentially differently in this regard. We show that there exist upper semi-frames on non-atomic measure spaces with redundancy zero. \n\nAs a by-product, we conclude that efforts to generalize Riesz bases to the continuous setting \\cite{arkatota12,gaha03} cannot succeed. This is because the underlying measure space of a frame with redundancy zero is atomic (and therefore discrete).\nMoreover, we show that every frame can be split into a discrete and a strictly continuous Bessel system.\n\n\nThe final part of this paper is concerned with characterizing the ranges of the analysis operators of a reproducing pair. The omission of the frame inequalities causes the problem that ${\\sf Ran}\\, C_\\Psi$ need no longer be contained in $L^2(X,\\mu)$. We will demonstrate how a reproducing pair intrinsically generates a pair of reproducing kernel Hilbert spaces and calculate the reproducing kernel.\n\n\\begin{comment}\n See also the definition of excess of frames \\cite{bacahela03-1, bacahela03}.\\\\\n {\\bf Idea\/Question:}\\begin{itemize}\n \\item Does this concept (excess) make sense for continous frames?\n \\item If we assume that $R(\\Psi)$ is finite, we should get to the theory of Fredholm operators. Can this help? 
Could this provide an idea to distinguish between different infinite redundancies?\n \\item Is there any hope to distinguish between $\\{e_1,e_1,e_2,e_2,e_3,...\\}$ and\\\\ $\\{e_1,e_1,e_1,e_2,e_2,e_2,e_3,...\\}$ with another concept?\n \\end{itemize}\n\nWe will present known results from \\cite{hedera00,hogira13} but with proofs that are lie closer to the heart of frame theory using RKHS and \nallow insights in the structure of continuous frames.\nThe proofs in the previous paper follow more measure theoretical arguments.\n If a frame is non-redundant ($R(\\Psi)=0$) it is a continuous Riesz bases (Riesz-type frames) \\cite{gaha03,arkatota12}.\nWe will explain why continuous Riesz bases only exist on \n atomic measure spaces which are essentially discrete measure spaces.\n \n For finite dimensional frames $F=\\{f_i\\}_{i=1}^N\\subset\\mathbb{R}^n$ we have the classical notion of redundancy $N\/n=R(F)\/n+1$, as $R(F)=N-n$.\n \\end{comment}\n\n\nThis paper is organized as follows. After introducing the main concepts in Section \\ref{sec:prel-rkhs} we first consider systems on reproducing kernel Hilbert spaces in Section \\ref{sec:char-RKHS}. 
Then, in Section \\ref{redundancy-section} we investigate the redundancy of continuous (semi-)frames.\nFinally, we show how a reproducing pair intrinsically generates a pair of RKHSs in Section \\ref{sec:rep-pair-rkhs} and characterize the reproducing kernels.\n \n\\section{Preliminaries}\\label{sec:prel-rkhs}\n\n\\subsection{Atomic and non-atomic measures}\nThroughout this paper we will assume that $(X,\\mu)$ is a nontrivial measure space with $\\mu$ being $\\sigma$-finite and positive.\nA measurable set $A \\subset X$ is called an atom if $\\mu(A)>0$ and for any measurable subset $B\\subset A$, \nwith $\\mu(B)<\\mu(A)$,\nit holds $\\mu(B)=0$.\nA measure space is called atomic if there exists a partition $\\{A_n\\}_{n\\in \\mathbb{N}}$ of $X$\nconsisting of atoms and null sets.\n$(X,\\mu)$ is called non-atomic if there are no atoms in $(X,\\mu)$. To our knowledge there is no term to denote a\nmeasure space which is \nnot atomic. In order to avoid any confusion with non-atomic spaces, we will therefore call a measure space an-atomic if it is not atomic.\n\n\nA well-known result by Sierpi{\\'n}ski states that non-atomic measures take a continuity of values.\n\\begin{theorem}[Sierpi{\\'n}ski \\cite{sie22}]\\label{sierpinski-thm}\n Let $(X,\\mu)$ be a non-atomic measure space and let $A\\subset X$ be measurable with positive measure, then,\n for every $0\\leq b\\leq \\mu(A)$,\n there exists $B\\subset A$ such that $\\mu(B)=b$.\n\\end{theorem}\nWe will later separate the purely continuous part of a frame from the discrete part. For the construction, we need the following auxiliary result. 
Since we could not find any reference for the second part, we will provide a proof in the appendix.\n\\begin{lemma}\\label{not-atomic-non-atomic}\nLet $(X,\\mu)$ be a $\\sigma$-finite measure space.\\begin{enumerate}[(i)]\\item There exists $\\mu_a$ atomic and $\\mu_c$ non-atomic such that \n\\begin{equation}\\label{measure-partition}\n\\mu=\\mu_a+\\mu_c.\n\\end{equation}\n\\item If $(X,\\mu)$ is an-atomic, then there exists $A\\subset X$ with $\\mu(A)>0$ and $(A,\\mu)$ non-atomic.\n\\end{enumerate}\n\\end{lemma}\n\n\n\n\\subsection{Continuous frames, semi-frames and reproducing pairs}\nFrames were first introduced by Duffin and Schaeffer \\cite{duscha52} in the context of non-harmonic Fourier analysis. \nIn the early 1990's, Ali et al. \\cite{alanga93} and Kaiser \\cite{ka90} independently extended frames to\nmappings acting on a measure space $(X,\\mu)$. \n\nDenote by $GL(\\H)$ the space of all bounded linear operators on $\\H$ with bounded inverse and let\n$\\mathcal{H}$ be a separable Hilbert space.\n\\begin{definition}\\label{def-cont-frame}\nA mapping $\\Psi:X\\rightarrow \\mathcal{H}$ is called a continuous frame if\n\\begin{enumerate}[(i)]\n \\item $\\Psi$ is weakly measurable, that is, $x\\mapsto\\langle f,\\Psi(x)\\rangle$ is a measurable function for every\n $f\\in\\mathcal{H}$, \n \\item there exist positive constants $m,M>0$ such that\n \\begin{equation}\\label{frame-condition}\n m\\left\\|f\\right\\|^2\\leq\\int_{X}\\left|\\langle f,\\Psi(x)\\rangle\\right|^2d\\mu(x)\\leq M\\left\\|f\\right\\|^2,\\ \\\n \\forall f\\in\\mathcal{H}.\n \\end{equation}\n\\end{enumerate}\n\\end{definition}\nThe constants $m,M$ are called the frame bounds and $\\Psi$ is called Bessel if at least the second inequality in \n(\\ref{frame-condition}) is satisfied.\nIf $(X,\\mu)$ is a countable set equipped with\nthe counting measure then one recovers the classical definition of a discrete frame, see for example \\cite{christ1}. 
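For the counting measure on a finite set $X$, condition \eqref{frame-condition} reduces to $m\|f\|^2 \le \sum_i |\langle f,\Psi(x_i)\rangle|^2 \le M\|f\|^2$, and the optimal bounds are the extreme eigenvalues of the frame operator. A quick numerical check of this standard fact (our own illustration):

```python
import numpy as np

def optimal_frame_bounds(frame_vectors):
    """Optimal frame bounds of a finite frame: the smallest and largest
    eigenvalues of the frame operator S = sum_i psi_i psi_i^T."""
    psi = np.asarray(frame_vectors, dtype=float)   # rows are the psi_i
    S = psi.T @ psi
    eig = np.linalg.eigvalsh(S)
    return float(eig[0]), float(eig[-1])

# The Mercedes-Benz frame (three unit vectors at 120 degrees in R^2) is
# tight: S = (3/2) I, so m = M = 3/2.
mb = np.array([[0.0, 1.0],
               [-np.sqrt(3) / 2, -0.5],
               [np.sqrt(3) / 2, -0.5]])
m, M = optimal_frame_bounds(mb)
```

For any $f$, the quantity $\sum_i |\langle f,\psi_i\rangle|^2 = \langle Sf,f\rangle$ then lies between $m\|f\|^2$ and $M\|f\|^2$, which is exactly the discrete instance of \eqref{frame-condition}.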
For a short and self-contained introduction to continuous \nframes, we refer the reader to \\cite{ranade06}.\n\nThe fundamental operators in frame theory are the analysis operator\n$\n C_\\Psi:\\mathcal{H}\\rightarrow L^2(X,\\mu)$, $C_\\Psi f(x):=\\langle f,\\Psi(x)\\rangle,\n$\nand the synthesis operator\n\\begin{equation*}\n D_\\Psi:L^2(X,\\mu)\\rightarrow \\mathcal{H},\\ \\ \\ D_\\Psi F:=\\int_X F (x)\\Psi(x)d\\mu(x),\n\\end{equation*}\nwhere the integral is defined weakly. Observe that $C_\\Psi^\\ast =D_\\Psi$ whenever $\\Psi$ is Bessel. The \nframe operator $S_\\Psi\\in GL(\\H)$ is defined as the composition of $C_\\Psi$ and $D_\\Psi$\n\\begin{equation*}\n S_\\Psi:\\mathcal{H}\\rightarrow \\mathcal{H},\\ \\ \\ \n S_\\Psi f:=D_\\Psi C_\\Psi f=\\int_X\\langle f,\\Psi(x)\\rangle \\Psi(x)d\\mu(x).\n\\end{equation*}\nEvery frame $\\Psi^d$ satisfying\n\\begin{equation*}\nf=D_\\Psi C_{\\Psi^d} f=D_{\\Psi^d}C_\\Psi f,\\ \\ \\forall f\\in \\mathcal{H},\n\\end{equation*}\nis called a dual frame for $\\Psi$. For every frame there exists at least one dual frame $S_\\Psi^{-1}\\Psi$, called the canonical \ndual frame. As the analysis operator is in general not onto $L^2(X,\\mu)$, there may exist several dual frames for $\\Psi$. \n\nFrames have proven to be a useful tool in many different fields of mathematics such as signal processing \\cite{nsdgt10} or mathematical physics \\cite{alanga00,xxlbayasg11}. There is, however, a great variety of examples of complete systems that do not meet both frame conditions. Several concepts to generalize the frame property have thus been proposed. An upper (resp. lower) semi-frame is a complete system that only satisfies the upper (resp. lower) frame inequality, see \\cite{jpaxxl09,jpaxxl11,jpaxxl12}.\n\nAnother generalization is the concept of reproducing pairs, defined in \\cite{spexxl14} \nand further investigated in \\cite{ansptr15,antra16,spexxl16}. 
Here, one considers a pair of mappings instead of a single one and no frame inequality is assumed to hold.\n\\begin{definition}\\label{rep-pair-definition}\n Let $\\Psi,\\Phi:X\\rightarrow\\mathcal{H}$ weakly measurable.\n The pair of mappings $(\\Psi,\\Phi)$ is called a reproducing pair for $\\mathcal{H}$ if the resolution operator \n $S_{\\Psi,\\Phi}:\\mathcal{H}\\rightarrow \\mathcal{H}$, weakly defined by\n \\begin{equation}\\label{rep-pair-def}\n \\langle S_{\\Psi,\\Phi} f,g\\rangle:=\\int_X \\langle f,\\Psi(x)\\rangle \\langle\\Phi(x),g\\rangle d\\mu(x),\n \\end{equation}\nis an element of $GL(\\mathcal{H})$.\n\\end{definition}\nObserve that Definition \\ref{rep-pair-definition} is indeed a generalization of continuous frames. On the one hand, neither \n$\\Psi$ nor $\\Phi$ are required to meet the frame conditions and,\non the other hand, a weakly measurable mapping $\\Psi$ is a continuous frame if, and only if, $(\\Psi,\\Psi)$ is a reproducing pair.\nNote that reproducing pairs are conceptually similar to the concept of weak duality \\cite{fezi98} where one considers expansions in terms of a Gelfand triplet.\n\n\\subsection{Reproducing kernel Hilbert spaces (RKHS)}\nLet $\\mathcal{F}(X,\\mathbb{C})$ denote the vector space of all functions $f:X\\rightarrow\\mathbb{C}$. 
\nReproducing kernel Hilbert spaces are in a way convenient subspaces of $\\mathcal{F}(X,\\mathbb{C})$ since they allow for pointwise interpretation of functions, unlike for example Lebesgue spaces.\n\n\\begin{definition}\nLet $\\H_{K}\\subset \\mathcal{F}(X,\\mathbb{C})$ be a Hilbert space, $\\H_K$ is called a reproducing kernel Hilbert space (RKHS) if \nthe point evaluation functional $\\delta_x:\\H_{K}\\rightarrow\\mathbb{C}$, $\\delta_x(f):=f(x)$ is bounded for \n every $x\\in X$, that is, if there exists $C_x>0$ such that $|\\delta_x(f)|\\leq C_x\\|f\\|$, for all $f\\in\\H_K$.\n\\end{definition}\nAs $\\delta_x$ is bounded, there exists a unique vector $k_x\\in \\H_{K}$ such that $f(x)=\\langle f,k_x\\rangle,$ for all $f\\in\\H_{K}$.\nThe function $K(x,y)=k_y(x)=\\langle k_y,k_x\\rangle$ is called the reproducing kernel for $\\H_{K}$. The \nreproducing kernel is unique, $K(x,y)=\\overline{K(y,x)}$ and \nits diagonal is of the following form\n$$K(x,x)=\\langle k_x,k_x\\rangle=\\|k_x\\|^2=\\sup\\big\\{|f(x)|^2:\\ f\\in\\H_K,\\ \\|f\\|=1\\big\\}.$$\nThe following result can be found in \\cite[Theorem 3.1]{alanga93}.\n\\begin{theorem}\\label{charact-of-RKHS}\n Let $\\H_{K}$ be a RKHS and $\\{\\phi_i\\}_{i\\in\\mathcal{I}}\\subset \\H_{K}$ an orthonormal basis, then \n \\begin{equation}K(x,y)=\\sum_{i\\in\\mathcal{I}}\\phi_i(x)\\overline{\\phi_i(y)}, \\end{equation}\n with pointwise convergence. 
In particular, \n \\begin{equation}\\label{pointwise-l2-onb}\n0<\\sum_{i\\in\\mathcal{I}}|\\phi_i(x)|^2=K(x,x)<\\infty,\\ \\forall x\\in X.\n\\end{equation}\nConversely, if there exists an orthonormal basis for a Hilbert space $\\H_K\\subset \\mathcal{F}(X,\\mathbb{C})$ that satisfies \\eqref{pointwise-l2-onb},\nthen $\\H_{K}$ can be identified with a RKHS consisting of functions $f:X\\rightarrow \\mathbb{C}$.\n\nIf $X$ is equipped with a measure $\\mu$ and $\\H_K\\subset L^2(X,\\mu)$, then $\\Psi(x):=K(x,\\cdot)$ is a continuous Parseval frame.\n\\end{theorem}\nFor a thorough introduction to RKHS we refer the reader to \\cite{aro50,paul09}. We will investigate the connection between RKHS and frames (resp. reproducing pairs) in two different ways. In Section 3 we consider frames (resp. reproducing pairs) taking values in a RKHS, whereas in Section 4 and 5 we investigate the RKHS generated by the range of the analysis operator of a frame (resp. reproducing pair).\n\n\n\\begin{comment}\n\n\\section{Preliminaries from Gabardo-Han \\cite{gaha03}: Proposition 2.6. to 2.10}\n\n\n\n\\begin{theorem}\\label{frame-repres-equiv}\n The following are equivalent:\n \\begin{enumerate}[(i)]\n \\item $\\Psi$ is a continuous frame\n \\item there exists an orthonormal basis $\\{e_i\\}_{i\\in\\mathcal{I}}\\subset \\mathcal{H}$ and a Riesz sequence \n $\\{\\psi_i\\}_{i\\in\\mathcal{I}}\\subset L^2(X,\\mu)$, such that\n for a.e. $x\\in X$, it holds \n $$\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty\\ \\ \\text{and}\\ \\ \\Psi(x)=\\sum_{i\\in\\mathcal{I}}\\psi_i(x)e_i.$$\n \\item there exists a Riesz basis $\\{r_i\\}_{i\\in\\mathcal{I}}\\subset\\mathcal{H}$ and an orthonormal family \n $\\{\\psi_i\\}_{i\\in\\mathcal{I}}\\subset L^2(X,\\mu)$, \n such that for a.e. 
$x\\in X$, it holds\n $$\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty \\ \\ \\text{and}\\ \\ \\Psi(x)=\\sum_{i\\in\\mathcal{I}}\\psi_i(x)e_i.$$\n \\item there exists a Riesz basis $\\{r_i\\}_{i\\in\\mathcal{I}}\\subset\\mathcal{H}$ and a Riesz sequence \n $\\{\\psi_i\\}_{i\\in\\mathcal{I}}\\subset L^2(X,\\mu)$, \n such that for a.e. $x\\in X$, it holds $$\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty\\ \\ \\text{and}\\ \\ \\Psi(x)=\\sum_{i\\in\\mathcal{I}}\\psi_i(x)r_i.$$\n \\end{enumerate}\n\\end{theorem}\n\n\n\\textbf{Proof:\\ }\n$(i)\\Rightarrow (ii)$ Take an orthonormal basis $\\{e_i\\}$ and define $\\psi_i(x)=\\langle \\Psi(x),e_i\\rangle$, then\n\\begin{equation*}\n \\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2=\\sum_{i\\in\\mathcal{I}}|\\langle e_i,\\Psi(x)\\rangle|^2=\\|\\Psi(x)\\|_\\mathcal{H}^2<\\infty,\\ for\\ a.e.\\ x\\in X\n\\end{equation*}\nMoreover,\n\\begin{equation*}\n \\sum_{i\\in\\mathcal{I}}\\psi_i(x)e_i=\\sum_{i\\in\\mathcal{I}}\\langle \\Psi(x),e_i\\rangle e_i=\\Psi(x),\\ for\\ a.e.\\ x\\in X\n\\end{equation*}\nIt remains to show that $\\{\\psi_i\\}$ is a Riesz basis for its closed span. 
As $\\Psi$ is a frame we have\n\\begin{equation*}\n A\\|f\\|^2_\\mathcal{H}\\leq\\int_X|\\langle f,\\Psi(x)\\rangle|^2d\\mu(x)=\\Big\\|\\sum_{i\\in\\mathcal{I}}\\langle f,e_i\\rangle \\overline{\\psi_i}\n \\Big\\|^2_2\\leq B\\|f\\|^2_\\mathcal{H}.\n\\end{equation*}\nSince $l^2(\\mathcal{I})=\\Big\\{\\{\\langle f,e_i\\rangle\\}_{i\\in\\mathcal{I}}:\\ f\\in\\mathcal{H}\\Big\\}$, Parseval's formula yields\n\\begin{equation*}\n A\\sum_{i\\in\\mathcal{I}}|c_i|^2\\leq\\Big\\|\\sum_{i\\in\\mathcal{I}}c_i \\overline{\\psi_i}\n \\Big\\|^2_2\\leq B\\sum_{i\\in\\mathcal{I}}|c_i|^2,\\ \\ \\forall c\\in l^2(\\mathcal{I}).\n\\end{equation*}\n\n$(ii)\\Rightarrow (iii)$ Let $\\Psi(x)=\\sum_{i\\in\\mathcal{I}}\\psi_i(x)e_i$ with $\\{e_i\\}$ an orthonormal basis and $\\{\\psi_i\\}$ a Riesz basis for\n$\\mathcal{H}_\\psi$, the closure of $\\text{span}\\{\\psi_i, \\ i\\in\\mathcal{I}\\}$. Let $\\{\\phi_i\\}$ be an orthonormal basis for $\\mathcal{H}_\\psi$, then \nthere exists $T\\in GL(\\mathcal{H}_\\psi)$, such that $T\\psi_i=\\phi_i$, or in other words, \n$$\n\\phi_i(x)=\\sum_{j\\in\\mathcal{I}}\\langle T \\psi_i ,\\widetilde\\psi_j\\rangle \\psi_j(x),\\ \\ \\text{for a.e. }x\\in X,\n$$\nwhere $\\{\\widetilde\\psi_i\\}$ is the unique dual of $\\{\\psi_i\\}$.\nThe matrix $(\\langle T \\psi_i ,\\widetilde\\psi_j\\rangle)_{i,j\\in\\mathcal{I}}$ defines a bounded operator\non $l^2(\\mathcal{I})$ (see \\cite[Theorem 3.1]{xxl08}), that is, \n$$ \n\\sum_{i\\in\\mathcal{I}}|\\phi_i(x)|^2\\leq C\\sum_{j\\in\\mathcal{I}}|\\psi_j(x)|^2<\\infty,\\ \\ \\text{for a.e. }x\\in X.\n$$\nMoreover, we have\n$$\n\\Psi(x)=\\sum_{i\\in\\mathcal{I}}\\psi_i(x)e_i=\\sum_{i,j\\in\\mathcal{I}}\\langle\\psi_i,\\phi_j\\rangle \\phi_j(x) e_i=\n\\sum_{j\\in\\mathcal{I}}\\Big(\\sum_{i\\in\\mathcal{I}}\\langle\\psi_i,\\phi_j\\rangle e_i\\Big) \\phi_j(x) \n$$\n$$\n=\\sum_{j\\in\\mathcal{I}} \\phi_{j}(x)r_{j},\n$$\nwhere $r_j:=\\sum_{i\\in\\mathcal{I}}\\langle\\psi_{i},\\phi_{j}\\rangle e_{i}$. 
It remains to show that $\\{r_j\\}$ is a Riesz basis.\n$$\n\\Big\\|\\sum_{j\\in\\mathcal{I}}c_jr_j\\Big\\|^2=\\sum_{i,j\\in\\mathcal{I}}c_i\\overline{c_j}\\langle r_i,r_{j}\\rangle\n$$\nThe inner product $\\langle r_i,r_{j}\\rangle$ can be expressed in the following way\n$$\n\\langle r_i,r_{j}\\rangle=\\sum_{k,l\\in\\mathcal{I}}\\ip{\\psi_k}{\\phi_i}\\ip{e_k}{e_l}\\ip{\\phi_j}{\\psi_l}\n$$\n$$\n=\\sum_{k\\in\\mathcal{I}}\\ip{\\psi_k}{\\phi_i}\\ip{\\phi_j}{\\psi_k}=\\ip{S_\\psi \\phi_j}{\\phi_i}=(\\mathcal{M}_\\phi(S_\\psi))_{i,j}\n$$\nBy Proposition \\ref{op-rep-positive} and \\ref{op-rep-invertible} it follows that $\\mathcal{M}_\\phi(S_\\psi)$ is positive and invertible on $l^2(\\mathcal{I})$\nas $\\phi$ is an orthonormal basis on ${\\sf Ran}\\, C_\\Psi$.\nHence, we get for all $c\\in l^2(\\mathcal{I})$\n$$\n\\Big\\|\\sum_{j\\in\\mathcal{I}}c_jr_j\\Big\\|^2=\\sum_{i,j \\in\\mathcal{I}}(\\mathcal{M}_\\phi(S_\\psi))_{i,j}c_i\\overline{c_j}=\\ip{\\mathcal{M}_\\phi(S_\\psi)c}{c}\n=\\norm{}{(\\mathcal{M}_\\phi(S_\\psi))^{1\/2}c}^2\n$$\nAgain by Proposition \\ref{op-rep-invertible} and using that $\\phi$ is an orthonormal basis (its frame bounds are 1), one gets\n$$\nA_\\psi \\norm{2}{c}^2\\leq \\norm{}{(\\mathcal{M}_\\phi(S_\\psi))^{1\/2}c}^2\\leq B_\\psi \\norm{2}{c}^2\n$$\n\\\\\n$(iii)\\Rightarrow (iv)$ trivial. 
\\\\ \n\n\n$(iv)\\Rightarrow (i)$ It holds\n\\begin{equation*}\n \\|V_\\Psi f\\|_2^2=\\int_X|\\langle f,\\Psi(x)\\rangle|^2d\\mu(x)=\\Big\\|\\sum_{i\\in\\mathcal{I}}\\langle f,r_i\\rangle \\overline{\\psi_i}\\Big\\|^2_2\n\\end{equation*}\nNow, as $\\{\\psi_i\\}$, and consequently $\\{\\overline{\\psi_i}\\}$, is a Riesz basis for its closed span, we get, using that any Riesz basis\nis a frame and that $\\{r_i\\}$ is a Riesz basis for $\\mathcal{H}$,\n\\begin{equation*}\n A\\|f\\|_\\mathcal{H}^2\\leq A'\\sum_{i\\in\\mathcal{I}}|\\langle f,r_i\\rangle|^2\\leq\\|V_\\Psi f\\|^2_2\\leq B'\\sum_{i\\in\\mathcal{I}}|\\langle f,r_i\\rangle|^2\\leq B\\|f\\|_\\mathcal{H}^2.\n\\end{equation*}\nHence, $\\Psi$ is a frame.\\\\\n\n\\hfill$\\Box$\\\\\n\n \n \\textcolor{red}{From this corollary one can derive that the frame operator of a continuous frame can be represented by the frame operator\n of a Riesz basis}\n\n\n\\begin{corollary}\\label{exist-frame-for-RKHS}\n Let $U$ be a closed subspace of $L^2(X,\\mu)$. The following are equivalent:\n \\begin{enumerate}[(i)]\n \\item There exists a continuous frame $\\Psi$, such that $Ran(C_\\Psi)=U$.\n \\item There exists an orthonormal basis $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ of $U$ such that $\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty$, for a.e.\n $x\\in X$.\n \\item For every orthonormal basis $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ of $U$ it holds $\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty$, for a.e.\n $x\\in X$.\n \\end{enumerate}\n\\end{corollary}\n\n\\textcolor{red}{write something on almost every and every point }\n\nIn particular \n\n\\begin{corollary}\n The following are equivalent:\n \\begin{enumerate}[(i)]\n \\item there exists a Riesz-type frame for $\\mathcal{H}$\n \\item there exists an orthonormal basis $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ of $L^2(X,\\mu)$ such that $\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty$, for a.e.\n $x\\in X$\n \\item for every orthonormal basis $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ of $L^2(X,\\mu)$ it holds 
$\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty$, for a.e.\n $x\\in X$\n \\end{enumerate}\n\\end{corollary}\n\n\n\nOne should say something about the independence of the choice of the orthonormal basis....\n\n\\end{comment}\n\n\\section{Frames and reproducing pairs taking values in a RKHS}\\label{sec:char-RKHS}\nIn this section we will mainly investigate two questions. First, given a RKHS, what can be said about the pointwise behavior of frames, and how can the reproducing kernel be characterized? Second, which conditions on a frame ensure that the space possesses a reproducing kernel?\\\\\nThe following result adapts the \narguments of the proof of \\cite[Theorem 3.12]{paul09} to the case of reproducing pairs.\n\\begin{theorem}\\label{kernel-and-rep-pair}\n Let $\\H_{K}$ be a RKHS and $\\Psi=\\{\\phi_i\\}_{i\\in\\mathcal{I}},\\ \\Phi=\\{\\psi_i\\}_{i\\in\\mathcal{I}}\\subset\\H_K$. The pair $(\\Psi,\\Phi)$\n is a reproducing pair for $\\H_K$ if, and only if, there exists $A\\in GL(\\H_K)$ such that\n\\begin{equation}\\label{rep-prod-char-ker}\nK(x,y)=\\sum_{i\\in\\mathcal{I}} (A\\phi_i)(x)\\overline{\\psi_i(y)}=\\sum_{i\\in\\mathcal{I}} (A^\\ast\\psi_i)(x)\\overline{\\phi_i(y)},\n\\end{equation}\n where the series converges pointwise. In particular, $A$ is unique and given by $S_{\\Psi,\\Phi}^{-1}$.\n \\end{theorem}\n\\textbf{Proof:\\ } Let $(\\Psi,\\Phi)$ be a reproducing pair; then it holds\n$$\nK(x,y)=\\ip{k_y}{k_x}=\\sum_{i\\in\\mathcal{I}} \\ip{k_y}{\\psi_i}\\ip{S_{\\Psi,\\Phi}^{-1}\\phi_i}{k_x}=\\sum_{i\\in\\mathcal{I}} \\overline{\\psi_i(y)}(S_{\\Psi,\\Phi}^{-1}\\phi_i)(x).\n$$\nConversely, assume that $K$ is given by \\eqref{rep-prod-char-ker}. 
Let $f,g\\in \\mbox{span}\\{k_x\\hspace{-0.07cm}:\\ x\\in X\\}$, that is, there exist $N\\in\\mathbb{N}$, \n$\\alpha_n,\\beta_m\\in \\mathbb{C}$ and $x_n,y_m\\in X$\nsuch that $f=\\sum_{n=1}^N \\alpha_n k_{x_n}$ and $g=\\sum_{m=1}^N \\beta_m k_{y_m}$, then\n$$\n\\ip{f}{g}=\\sum_{n,m=1}^N\\alpha_n\\overline{\\beta_m}\\ip{k_{x_n}}{k_{y_m}}=\\sum_{n,m=1}^N\\alpha_n\\overline{\\beta_m}K(y_m,x_n)\n$$\n$$\n=\\sum_{n,m=1}^N\\alpha_n\\overline{\\beta_m}\\sum_{i\\in\\mathcal{I}}(A\\phi_i)(y_m)\\overline{\\psi_i(x_n)}=\n\\sum_{n,m=1}^N\\alpha_n\\overline{\\beta_m}\\sum_{i\\in\\mathcal{I}}\\ip{k_{x_n}}{\\psi_i}\\ip{A\\phi_i}{k_{y_m}}\n$$\n$$\n=\\sum_{i\\in\\mathcal{I}}\\Big\\langle\\sum_{n=1}^N\\alpha_n k_{x_n},\\psi_i\\Big\\rangle\\Big\\langle A\\phi_i,\\sum_{m=1}^N\\beta_mk_{y_m}\\Big\\rangle$$\n$$\n=\\sum_{i\\in\\mathcal{I}}\\ip{f}{\\psi_i}\\ip{A\\phi_i}{g}=\\ip{AS_{\\Psi,\\Phi}f}{g}.\n$$\nIn \\cite[Proposition 3.1]{paul09} it is shown that $\\mbox{span}\\{k_x\\hspace{-0.07cm}:\\ x\\in X\\}$ is dense in $\\H_K$. Therefore, it follows\nthat $AS_{\\Psi,\\Phi}=I$. As $A\\in GL(\\H_K)$ we may conclude that $S_{\\Psi,\\Phi}\\in GL(\\H_K)$, that is,\n$(\\Psi,\\Phi)$ is a reproducing pair.\n\\hfill$\\Box$\\\\\n\\begin{remark}\nAnalogous results have been proven for the cases where $\\Psi$ and $\\Phi$ are dual frames \n \\cite[Theorem 7]{raca05} or where $\\Psi=\\Phi$ is a Parseval frame \\cite[Theorem 3.12]{paul09}; these are particular cases of Theorem \\ref{kernel-and-rep-pair} with\n$A=I$.\n\\end{remark}\n\n\n\\begin{comment}\n {\\bf Idea\/Question:}\\begin{itemize}\n \\item There should be other convergence possible. Weakly unconditional is clear.... For the RKHS property the pointwise convergence is the right one!\n \\item The following should be possible:\n \\begin{theorem}\\label{charact-of-RKHS_1}\n Let $\\H_{K}$ be a RKHS and $\\{\\psi_k\\}_{k\\in\\mathcal{I}}\\subset \\H_{K}$ a frame, then \n \\begin{equation}K(x,y)=\\sum_{k\\in\\mathcal{I}}\\psi_k(x)\\overline{\\tilde \\psi_k(y)} \\end{equation}\n with pointwise convergence. 
In particular, \n \\begin{equation}\\label{pointwise-l2-frame}\n0<\\sum_{k\\in\\mathbb{N}} \\psi_k(x) \\overline{\\widetilde{\\psi_k}(x)} = |K(x,x)| < \\infty,\\ \\forall x\\in X.\n\\end{equation}\nConversely, if there exists a frame for a Hilbert space $\\H_K$ that satisfies \\eqref{pointwise-l2-frame},\nthen $\\H_{K}$ is identifiable with a RKHS in $\\mathcal{F}(X,\\mathbb{C})$.\n\\end{theorem}\nas well as \n\\begin{theorem}\\label{charact-of-RKHS_2}\n Let $\\H_{K}$ be a RKHS and $\\{\\psi_k\\}_{k\\in\\mathcal{I}},\\{\\phi_k\\}_{k\\in\\mathcal{I}}\\subset \\H_{K}$ weakly dual systems, then \n \\begin{equation}K(x,y)=\\sum_{k\\in\\mathcal{I}}\\psi_k(x)\\overline{\\phi_k(y)}, \\end{equation}\n with pointwise convergence. In particular, \n \\begin{equation}\\label{pointwise-l2-weak}\n0<\\sum_{k\\in\\mathbb{N}} \\psi_k(x) \\overline{{\\phi_k}(x)} = |K(x,x)| < \\infty,\\ \\forall x\\in X.\n\\end{equation}\nConversely, if there exists a frame for a Hilbert space $\\H_K$ that satisfies \\eqref{pointwise-l2-weak},\nthen $\\H_{K}$ is identifiable with a RKHS consisting of functions $F:X\\rightarrow \\mathbb{C}$.\n\\end{theorem}\n\\end{comment}\n\n\\begin{proposition}\\label{bessel-rkhs}\nLet $\\H_K$ be a RKHS and $\\{\\psi_i\\}_{i\\in\\mathcal{I}}\\subset \\H_K$ Bessel, then it holds \n\\begin{equation}\\label{eq-lower-rkhs}\n\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n\\end{equation}\nIf $\\{\\psi_i\\}_{i\\in\\mathcal{I}}\\subset \\H_K$ is a frame, then \n\\begin{equation}\\label{eq-lower-rkhs2}\n0<\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n\\end{equation}\n\\end{proposition}\n\\textbf{Proof:\\ } Let $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ be Bessel, then, for every $x\\in X$, it holds\n$$ \\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2=\\sum_{i\\in\\mathcal{I}}|\\ip{k_x}{\\psi_i}|^2\\leq M \\|k_x\\|^2<\\infty.$$ \nAn analogous argument shows the lower bound in \\eqref{eq-lower-rkhs2} if $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ is a 
frame.\\hfill$\\Box$\\\\\n\\begin{remark}\\label{discrete-subspace-rkhs}\n \\begin{enumerate}[(i)]\n \\item\n Observe that \\eqref{eq-lower-rkhs2} is not a direct consequence of Theorem \\ref{kernel-and-rep-pair} as \n\\eqref{rep-prod-char-ker} only ensures\n$$\n0<\\sum_{i\\in\\mathcal{I}}(S_{\\Psi,\\Phi}^{-1}\\phi_i)(x)\\overline{\\psi_i(x)}=K(x,x)<\\infty,\\ \\forall\\ x\\in X,\n$$\nwhich does not imply \\eqref{eq-lower-rkhs2}.\n \\end{enumerate}\n\\end{remark}\n\n\\begin{definition}\nA weakly measurable mapping $\\Psi:X\\rightarrow\\H$ is called strictly continuous if there exists no measurable set $A\\subset X$ with $\\mu(A)>0$ such that $C_\\Psi f$ is constant on $A$, for all $f\\in\\H$.\n\\end{definition}\nSquare-integrable group representations \\cite{grmopa86} like the short-time Fourier system or the continuous wavelet system, see \\cite{groe1}, are just one class out of a large reservoir of strictly continuous mappings.\nIn the rest of this section we show that continuous frames can be decomposed into a discrete and a strictly continuous system.\nTo this end, we will need two auxiliary lemmata.\n\\begin{lemma}[\\cite{si96}, Theorem 3.8.1]\\label{constant-on-atoms}\nLet $A\\subset X$ be an atom. Every measurable function $F:X\\rightarrow \\mathbb{C}$ is constant\nalmost everywhere on $A$.\n\\end{lemma}\n\n\\begin{lemma}\\label{atomic-means-discrete}\nLet $\\Psi$ be Bessel and $A\\subset X$ such that $\\mu(A)>0$ and $\\ip{f}{\\Psi(\\cdot)}$ is constant on $A$ for every $f\\in \\H$, then there exists a unique $\\psi\\in \\H$ such that $$\n\\|C_\\Psi f\\|_2^2=\\|C_\\Psi f|_{X\\backslash A}\\|_2^2+|\\ip{f}{\\psi}|^2,\\ \\forall\\ f\\in\\H.\n$$\nIn particular, $\\psi$ is weakly given by\n\\begin{equation}\\label{defin-of-psi-const}\n\\ip{f}{\\psi}:=\\mu(A)^{-1\/2}\\int_{A}\\ip{f}{\\Psi(x)}d\\mu(x),\\ \\forall\\ f\\in\\H.\n\\end{equation}\n\\end{lemma}\n\\textbf{Proof:\\ } First, observe that $\\psi$ defined by \\eqref{defin-of-psi-const} is well-defined and unique by the Riesz representation theorem, since\n$$\n|\\ip{f}{\\psi}|\\leq\\frac{1}{\\sqrt{\\mu(A)}}\\int_{A}|\\ip{f}{\\Psi(x)}|d\\mu(x)\n\\leq \\left(\\int_{A}|\\ip{f}{\\Psi(x)}|^2d\\mu(x)\\right)^{\\frac{1}{2}}\\leq \\sqrt{M}\\|f\\|,\n$$\nwhere $M$ is the Bessel bound of $\\Psi$. 
Moreover, \n$$\n\\int_X|\\ip{f}{\\Psi(x)}|^2d\\mu(x)=\\int_{X\\backslash A}|\\ip{f}{\\Psi(x)}|^2d\\mu(x)+\\int_{A}|\\ip{f}{\\Psi(x)}|^2d\\mu(x)\n$$\n$$\n=\\int_{X\\backslash A}|\\ip{f}{\\Psi(x)}|^2d\\mu(x)+|\\ip{f}{\\psi}|^2,\n$$\nwhere we have used that $\\ip{f}{\\Psi(\\cdot)}$ is almost everywhere constant on $A$ and \\eqref{defin-of-psi-const}.\\hfill$\\Box$\\\\\n\n\\begin{theorem}\nEvery frame $\\Psi$ can be written as $\\Psi=\\Psi_d\\cup \\Psi_c$, where $\\Psi_d$ is a discrete Bessel system and $\\Psi_c$ is a strictly continuous Bessel mapping.\n\\end{theorem}\n\\textbf{Proof:\\ } By Lemma \\ref{not-atomic-non-atomic} $(i)$, any measure $\\mu$ can be written as $\\mu=\\mu_a+\\mu_c$, where $\\mu_a$ is atomic and $\\mu_c$ is non-atomic. By Lemma \\ref{constant-on-atoms} and \\ref{atomic-means-discrete} we deduce that $\\Psi$ defined on $(X,\\mu_a)$ can be identified with a discrete Bessel system $\\Psi_d^a$. Let $X_d\\subset X$ be the disjoint union of all sets of positive measure with respect to $\\mu_c$ on which $C_\\Psi f$ is constant for all $f\\in\\H$ and $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ the corresponding collection of vectors. By definition $\\Psi_c:=\\Psi|_{X\\backslash X_d}$ is strictly continuous. It therefore remains to show that $\\mathcal{I}$ is countable.\nThis, however, is a direct consequence of the fact that $\\sigma$-finite measure spaces can only be partitioned into countably many sets of positive measure. 
Hence, setting $\\Psi_d:=\\Psi_d^a\\cup \\{\\psi_i\\}_{i\\in\\mathcal{I}}$ yields the result.\\hfill$\\Box$\\\\\n\n\\noindent In an attempt to generalize the concept of Riesz bases, continuous Riesz bases \\cite{arkatota12} and Riesz-type mappings \\cite{gaha03} have been introduced.\nIt turns out that \nthese notions are equivalent and can be characterized as frames with redundancy zero \\cite[Proposition 2.5 \\& Theorem 2.6]{arkatota12}.\n\n\\begin{corollary}\nEvery continuous Riesz basis (Riesz-type mapping) can be written as a discrete Riesz basis.\n\\end{corollary}\n\\textbf{Proof:\\ } Let $\\Psi$ be a continuous Riesz basis, then $R(\\Psi)=0$. By Theorem \\ref{reproduced-result}, $(X,\\mu)$ is \natomic. Consequently, $\\Psi$ corresponds to a discrete Riesz basis by Lemma \\ref{constant-on-atoms} and \\ref{atomic-means-discrete}.\\hfill$\\Box$\\\\\n\n\\noindent With the results of this section in mind, we suggest using the term continuous frame only in the case of a strictly continuous frame, and semi-continuous frame if there is both a strictly continuous and a discrete part. Moreover, the notion of continuous Riesz basis\/Riesz-type mapping should be discarded, as there are no such systems on \nnon-atomic measure spaces and continuous Riesz bases on atomic spaces reduce to discrete Riesz bases.\n\n\\subsection{Upper semi-frames}\\label{sec:upper-semi}\n\nIn this section we want to illustrate that upper semi-frames behave essentially differently from (lower semi-)frames with respect to the problems of Section \\ref{sec:frames-and-redund}. In particular, the closure of the range of the analysis operator is not necessarily a reproducing kernel Hilbert space and there exist upper semi-frames on non-atomic measure spaces with redundancy zero (compare to Proposition \\ref{exist-frame-for-RKHS} and Theorem \\ref{reproduced-result}). 
Throughout this section we will assume that any upper semi-frame genuinely violates the lower frame inequality, that is, that it is not a frame.\\\\\n\n\\noindent \\textbf{Example 1}\nIn \\cite{jpaxxl09,ansptr15} the following upper \nsemi-frame has been studied. \nSet $\\H_n:=L^2(\\mathbb{R}^+,r^{n-1}d r)$, where $n\\in\\mathbb{N}$, and $(X,\\mu)=(\\mathbb{R},d x)$.\n We use the following convention to denote the Fourier transform\n $$\n \\widehat f(\\omega)=\\int_\\mathbb{R} f(x)e^{-2\\pi i x\\omega}dx.\n $$\n Let $\\psi\\in \\H_n$ and define the affine coherent state by\n$$\n\\Psi(x)(r):=e^{-2\\pi ixr}\\psi(r),\\ \\ r\\in\\mathbb{R}^+,\\ x\\in\\mathbb{R}.\n$$\nThe mapping $\\Psi$ forms an upper semi-frame if \n$\\esssup_{r \\in {\\mathbb{R}}^{+}}{\\mathfrak s}(r)<\\infty,$ where ${\\mathfrak s}(r):=r^{n-1}|\\psi (r)|^{2}$,\nand $|\\psi(r)|\\neq 0$, for a.e. $r\\in\\mathbb{R}^+$.\nThe frame operator is then given by a multiplication operator on $\\H_n$, that is,\n$$\n(S_\\Psi f)(r)= {\\mathfrak s}(r) f(r).\n$$\nIt is thus easy to see that $\\Psi$ cannot form a frame: for every $\\psi\\in \\H_n$ one has ${\\mathfrak s}\\in L^1(\\mathbb{R}^+,dr)$ and therefore\n$\\essinf_{r \\in {\\mathbb{R}}^{+}} {\\mathfrak s}(r)=0$.\nIn \\cite[Section 5.2]{ansptr15} it is shown that ${\\sf Ker}\\, D_\\Psi=\\mathcal F_+$, where \n$$\\mathcal{F}_+ :=\\{f\\in L^2(\\mathbb{R}):\\ \\widehat f(\\omega)=0,\\ \\text{for a.e. }\\omega\\geq0\\}.$$\nClearly, $\\overline{{\\sf Ran}\\, C_\\Psi}=(\\ker D_\\Psi)^\\bot=\\mathcal{F}_+{}^\\bot=\\mathcal{F}_-$, where\n$$\\mathcal{F}_- :=\\{f\\in L^2(\\mathbb{R}):\\ \\widehat f(\\omega)=0,\\ \\text{for a.e. }\\omega\\leq0\\}.$$\nTherefore, $\\Psi$ has infinite redundancy and a short argument shows that $\\mathcal F_-$ is not a RKHS: \n\n The dilation operator $D_a $, defined by $D_a f(x):=a^{-1\/2}f(x\/a)$, $a\\in \\mathbb{R}^+$, acts isometrically on $\\mathcal{F}_-$. Take $f\\in\\mathcal{F}_-$ with $\\|f\\|=1$ and $f(0)\\neq0$, then $|D_af(0)|=|a^{-1\/2}f(0)|\\rightarrow\\infty$, as $a\\rightarrow 0$. 
Consequently, the point evaluation $f\\mapsto f(0)$ cannot be continuous and $\\mathcal{F}_-$ is not a RKHS.\n\nThe mapping $\\Psi$ possesses several other interesting properties, see \\cite{ansptr15}. For instance, it forms\na total Bessel system with no dual; in other words, there is no mapping $\\Phi$ such that \n$(\\Psi,\\Phi)$ generates a reproducing pair.\n\n\nNext, we will show the existence of upper semi-frames with ${\\sf Ran}\\, C_\\Psi$ dense in $L^2(X,\\mu)$ \nif there exists an\n orthonormal basis of $L^2(X,\\mu)$ which is uniformly bounded. In particular, there exist upper semi-frames on non-atomic measure spaces with redundancy zero.\n\\begin{proposition}\nLet $(X,\\mu)$ be a measure space, such that there exists an orthonormal basis $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ of $L^2(X,\\mu)$ satisfying\n\\begin{equation}\\label{assumpt-on-onb}\n\\sup_{n\\in\\mathbb{N}}\\sup_{x\\in X}|\\psi_n(x)|=C< \\infty,\n\\end{equation}\nthen there exists an upper semi-frame $\\Psi$ for $\\H$ such that $\\overline{{\\sf Ran}\\, C_\\Psi}=L^2(X,\\mu)$.\nIn particular, $R(\\Psi)=0$.\n\\end{proposition}\n\\textbf{Proof:\\ } Take an arbitrary orthonormal basis $\\{e_n\\}_{n\\in\\mathbb{N}}$ of $\\H$, and define\n$$\\Psi(x):=\\sum_{n\\in\\mathbb{N}}n^{-1} e_n \\psi_n(x),$$\nwith the sum converging absolutely at every point. 
Then, $\\Psi$ is an upper semi-frame with the desired properties.\nTo see this, we first observe that $\\Psi:X\\rightarrow\\H$ is well-defined \nas, for $x\\in X$ fixed,\n$$\n|\\ip{f}{\\Psi(x)}|\\leq \\sum_{n\\in\\mathbb{N}}|\\ip{f}{e_n}n^{-1}\\psi_n(x)|\\leq \\norm{}{f}\\Big(\\sum_{n\\in\\mathbb{N}}n^{-2}| \\psi_n(x)|^2\\Big)^{1\/2}\n$$\n$$\n\\leq C\\norm{}{f}\\Big(\\sum_{n\\in\\mathbb{N}}n^{-2}\\Big)^{1\/2}= \\frac{\\pi}{\\sqrt{6}} C\\norm{}{f},\n$$\nwhere we used \\eqref{assumpt-on-onb} and Cauchy-Schwarz inequality.\nMoreover,\n$$\n\\int_X|\\ip{f}{\\Psi(x)}|^2d\\mu(x)\\leq\\int_X\\norm{}{f}^2\\sum_{n\\in\\mathbb{N}}n^{-2}| \\psi_n(x)|^2d\\mu(x)\n$$\n$$\n=\\norm{}{f}^2\\sum_{n\\in\\mathbb{N}}n^{-2}\\int_X| \\psi_n(x)|^2 d\\mu(x)=\\norm{}{f}^2\\sum_{n\\in\\mathbb{N}}n^{-2}=\n\\frac{\\pi^2}{6}\\norm{}{f}^2.\n$$\n Since $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ is an orthonormal basis of $L^2(X,\\mu)$ it follows that $\\Psi$ is total in $\\H$ as, for $f\\neq0$,\n$$\n\\int_X|\\ip{f}{\\Psi(x)}|^2d\\mu(x)= \\int_X \\sum_{n,k\\in\\mathbb{N}}\\ip{f}{e_n}\\ip{e_k}{f}(nk)^{-1}\\psi_n(x)\\overline{\\psi_k(x)}d\\mu(x)\n$$\n$$\n=\\sum_{n,k\\in\\mathbb{N}}\\ip{f}{e_n}\\ip{e_k}{f}(nk)^{-1}\\delta_{n,k}=\\sum_{n\\in\\mathbb{N}}|\\ip{f}{e_n}|^2n^{-2}>0.\n$$\nFinally, the range of the analysis operator of the system $\\{n^{-1} e_n \\}_{n\\in\\mathbb{N}}$ is dense in $l^2(\\mathbb{N})$, which implies that ${\\sf Ran}\\, C_\\Psi$ is dense \nin $L^2(X,\\mu)$.\\hfill$\\Box$\\\\\n\n\n\\noindent \\textbf{Example 2} Let $(X,\\mu)=(\\mathbb{T},dx)$ be the torus with Lebesgue measure, and $\\psi_n(x)=e^{2\\pi i xn},$ $n\\in\\mathbb{Z}$.\nThen, $\\{\\psi_n\\}_{n\\in\\mathbb{Z}}$ is an orthonormal basis and\n$$\\sup_{n\\in\\mathbb{Z}}\\sup_{x\\in \\mathbb{T}}|\\psi_n(x)|=1.$$\nHence, there exists an upper semi-frame $\\Psi$ with the closure of ${\\sf Ran}\\, C_\\Psi$ being $L^2(\\mathbb T,dx)$.\n\n\n\\subsection{Correction of the proof of a result on the existence of duals for lower 
semi-frames}\nIn this section we present a corrected version of the proof of \\cite[Proposition 2.6]{jpaxxl09}, which states that for every lower semi-frame $\\Psi$ there exists a dual mapping $\\Phi$ such that $S_{\\Psi,\\Phi}=I$ on ${\\sf Dom}\\, C_\\Psi$. While the result itself is correct,\nthe construction of the dual system $\\Phi$ in \\cite{jpaxxl09} is in general not well-defined. In particular, $\\Phi$ is \ndefined as\n$$\n\\Phi(x):=\\sum_{n\\in\\mathbb{N}}\\phi_n(x)V\\phi_n=V\\Big(\\sum_{n\\in\\mathbb{N}}\\phi_n(x)\\phi_n\\Big),\n$$\nwhere $V:L^2(X,\\mu)\\rightarrow \\H$ is a bounded operator depending on $\\Psi$ and $\\{\\phi_n\\}_{n\\in\\mathbb{N}}$ is an orthonormal basis\nfor $L^2(X,\\mu)$. However, if $(X,\\mu)$ is non-atomic, then there exists a set of positive measure $A$ such that\n$\\sum_{n\\in\\mathbb{N}}|\\phi_n(x)|^2=\\infty,$ for all $x\\in A$, by Corollary \\ref{noonbpointwise1}.\nThus, $\\Phi$ is not well-defined on a set of positive measure.\n\n\n\\begin{proposition}[\\cite{jpaxxl09}, Proposition 2.6]\\label{corrected-result}\n Let $\\Psi$ be a lower semi-frame. Then there exists an upper semi-frame $\\Phi$ such that \n $$\n f=\\int_X\\ip{f}{\\Psi(x)}\\Phi(x)d\\mu(x),\\ \\ \\ \\forall \\ f\\in {\\sf Dom}\\, C_\\Psi.\n $$\n\\end{proposition}\n\\textbf{Proof:\\ } Let $\\Psi$ be a lower semi-frame, then ${\\sf Ran}\\, C_\\Psi$ is a RKHS in $L^2(X,\\mu)$ by Proposition \\ref{exist-frame-for-RKHS}.\nMoreover, let $P$ denote the orthogonal projection from $L^2(X,\\mu)$ onto ${\\sf Ran}\\, C_\\Psi$, and $\\{e_n\\}_{n\\in\\mathbb{N}}$ be an orthonormal\nbasis for $\\H$.\nDefine the linear operator $V:L^2(X,\\mu)\\rightarrow \\H$ by $V:=C_\\Psi^{-1}$ on \n${\\sf Ran}\\, C_\\Psi$ and $V:=0$ on $({\\sf Ran}\\, C_\\Psi)^\\bot$. 
Then $V$ is bounded and for all $f\\in {\\sf Dom}\\, C_\\Psi,\\ g\\in\\H$,\nit holds\n$$\n\\ip{f}{g}=\\ip{VC_\\Psi f}{g}=\\ip{C_\\Psi f}{V^\\ast g}_2=\\ip{C_\\Psi f}{V^\\ast (\\sum_{n\\in\\mathbb{N}}\\ip{ g}{e_n}e_n)}_2\n$$\n$$\n=\\ip{C_\\Psi f}{\\sum_{n\\in\\mathbb{N}}\\ip{g}{ e_n}V^\\ast e_n}_2=\\ip{C_\\Psi f}{\\sum_{n\\in\\mathbb{N}}\\ip{g}{ e_n}P V^\\ast e_n}_2\n=\\ip{C_\\Psi f}{C_\\Phi g}_2,\n$$\nwhere $\\Phi(x):=\\sum_{n\\in\\mathbb{N}}\\overline{(PV^\\ast e_n)}(x)e_n$. It remains to show that $\\Phi(x)$ is well-defined for every $x\\in X$. Since $\\{e_n\\}_{n\\in\\mathbb{N}}$ is an orthonormal basis, one has that $\\Phi(x)$ is well defined\nif, and only if, \n$$\n\\sum_{n\\in\\mathbb{N}}|(PV^\\ast e_n)(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n$$ \nBy Proposition \\ref{bessel-rkhs}, it is sufficient to show that $\\Theta:=\\{P V^\\ast e_n\\}_{n\\in\\mathbb{N}}$ is a Bessel sequence on ${\\sf Ran}\\, C_\\Psi$.\nLet $F\\in {\\sf Ran}\\, C_\\Psi$, then\n$$\n\\sum_{n\\in\\mathbb{N}}|\\ip{F}{ \\Theta_n}_2|^2=\\sum_{n\\in\\mathbb{N}}|\\ip{VPF}{e_n}|^2=\\|VF\\|^2\\leq C\\|F\\|_2^2,\n$$\nas $PF=F$ and $V$ is bounded. It hence remains to show that $\\Phi$ is Bessel. Let $f\\in \\H$, then\n$$\n\\int_X|\\ip{f}{\\Phi(x)}|^2d\\mu(x)=\\int_X\\Big|\\sum_{n\\in\\mathbb{N}}\\ip{f}{e_n}\\Theta_n(x)\\Big|^2d\\mu(x)\n$$\n$$\n=\\|D_\\Theta \\{\\ip{f}{e_n}\\}_{n\\in\\mathbb{N}}\\|_2^2\n\\leq C\\sum_{n\\in\\mathbb{N}}|\\ip{f}{e_n}|^2=C\\|f\\|^2,\n$$\nas $\\Theta$ is Bessel.\n\\hfill$\\Box$\\\\\n\\begin{remark}\nThere is no analogous result to Proposition \\ref{corrected-result} if $\\Psi$ is an upper semi-frame. 
In \\cite{ansptr15} it is shown that the affine coherent state system presented in Section \\ref{sec:upper-semi} is a complete Bessel mapping with no dual.\n\\end{remark}\n\n\\section{Reproducing pairs and RKHSs}\\label{sec:rep-pair-rkhs}\n\n\nThe absence of frame bounds causes problems in the analysis of the ranges of $C_\\Psi$ and $C_\\Phi$ of a reproducing pair $(\\Psi,\\Phi)$. On the one hand, without the upper frame bound it is no longer guaranteed that ${\\sf Ran}\\, C_\\Psi$ is a subspace of $L^2(X,\\mu)$. The lower frame inequality, on the other hand, ensures that ${\\sf Ran}\\, C_\\Psi$ is a RKHS.\nA construction of two mutually dual Hilbert spaces intrinsically generated by the pair $(\\Psi,\\Phi)$ is presented in \\cite{ansptr15}.\nLet us first recall some of these results before we explain how reproducing kernel Hilbert spaces come into play.\n\nLet ${\\mathcal V}_\\Phi(X, \\mu)$ be the space of all measurable functions $F : X \\to \\mathbb{C}$ for which\n there exists $M>0$ such that\n$$\\label{eq-Vphi}\n\\left| \\int_X F(x) \\ip{\\Phi(x)}{g} d\\mu(x) \\right| \\leq M \\norm{}{g}, \\; \\forall\\, g \\in \\H.\n$$\nNote that in general neither ${\\mathcal V}_\\Phi(X, \\mu)\\subset L^2(X, \\mu)$ nor $ L^2(X, \\mu)\\subset {\\mathcal V}_\\Phi(X, \\mu)$.\nThe linear map $T_\\Phi :{\\mathcal V}_\\Phi(X, \\mu) \\rightarrow \\H$ given weakly by\n\\begin{equation}\\label{def-T-phi}\n\\ip{T_\\Phi F}{g} =\\int_X F(x) \\ip{\\Phi(x)}{g} d\\mu(x) , \\; g\\in\\H,\n\\end{equation}\nis thus well defined by the Riesz representation theorem.\nThe operator $T_\\Phi$ can be seen as the natural extension of the synthesis operator $D_\\Phi$ (defined on ${\\sf Dom}\\, D_\\Phi\\subseteq L^2(X,\\mu)$) to ${\\mathcal V}_\\Phi(X, \\mu)$. 
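To make the non-comparability of ${\mathcal V}_\Phi(X,\mu)$ and $L^2(X,\mu)$ concrete, the following minimal sketch may help; it is our own illustration (not taken from \cite{ansptr15}) and assumes $X=\mathbb{N}$ with the counting measure, an orthonormal basis $\{e_n\}_{n\in\mathbb{N}}$ of $\H$, and the hypothetical mappings $\Phi(n):=n^{-1}e_n$ resp. $\Phi(n):=n\,e_n$.

```latex
% Sketch under our own assumptions (not from the source):
% $X=\mathbb{N}$ with counting measure, $\{e_n\}_{n\in\mathbb{N}}$ an ONB of $\H$.
For $\Phi(n):=n^{-1}e_n$, the functional $g\mapsto\sum_{n}F(n)\ip{\Phi(n)}{g}$
is bounded if, and only if, $\{n^{-1}F(n)\}_{n\in\mathbb{N}}\in l^2(\mathbb{N})$, hence
$$
F(n)=n^{1/4}\ \Rightarrow\ F\in{\mathcal V}_\Phi(X,\mu)\backslash L^2(X,\mu),
$$
since $\sum_{n}n^{-3/2}<\infty$ while $\sum_{n}n^{1/2}=\infty$. Conversely, for
$\Phi(n):=n\,e_n$ one obtains $F\in{\mathcal V}_\Phi(X,\mu)$ if, and only if,
$\{nF(n)\}_{n\in\mathbb{N}}\in l^2(\mathbb{N})$, so
$$
F(n)=n^{-1}\ \Rightarrow\ F\in L^2(X,\mu)\backslash{\mathcal V}_\Phi(X,\mu).
$$
```

In both cases the relevant sums are standard $p$-series, which shows that the two inclusions can fail independently of each other.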
\n\nLet $(\\Psi,\\Phi)$ be a reproducing pair; according to \\cite{ansptr15},\nit then holds\n\\begin{equation}\\label{dir-sum-rp}\n{\\mathcal V}_\\Phi(X,\\mu)={\\sf Ran}\\, C_\\Psi\\oplus \\ker T_\\Phi.\n\\end{equation}\nThis observation, together with the fact that $T_\\Phi$ is in general not injective, motivates defining the redundancy for arbitrary complete mappings via \n\\begin{equation}\\label{redund-rep-pair}\nR(\\Phi):=\\dim(\\ker T_\\Phi).\n\\end{equation}\nWe expect that similar results on $R(\\Phi)$ as in Section \\ref{sec:frames-and-redund} hold.\n\\begin{conjecture} If $R(\\Phi)<\\infty$, then $(X,\\mu)$ is atomic.\n\\end{conjecture}\nThe main difficulty is that there is no characterization of ${\\mathcal V}_\\Phi(X,\\mu)$ which would allow one to treat the problem in a manner similar to Section \\ref{sec:frames-and-redund} using \\eqref{dir-sum-rp}. In particular, it is not even clear if ${\\mathcal V}_\\Phi(X,\\mu)$ is normable.\n\nLet us introduce the following vector space\n$$ \nV_\\Phi(X, \\mu)= {\\mathcal V}_\\Phi(X, \\mu)\/{{\\sf Ker}\\,}\\,T_\\Phi,\n$$\nequipped with the inner product\n$$\n\\ip{F}{G}_{\\Phi}: =\\ip{T_\\Phi F}{T_\\Phi G}, \\mbox{ where } F,\nG\\ \\in V_\\Phi(X,\\mu).\n$$\nThis is indeed an inner product as $\\ip{F}{F}_{\\Phi}=0$ if, and only if, $F\\in{\\sf Ker}\\, T_\\Phi$. Hence, $V_\\Phi(X,\\mu)$ forms a pre-Hilbert space and $T_\\Phi:V_\\Phi(X,\\mu)\\rightarrow \\H$ is an isometry.\nBy \\eqref{def-T-phi} $ \\ip{\\cdot}{\\cdot}_{\\Phi}$ can be written explicitly as\n\\begin{equation}\\label{phi-inner-expl}\n \\ip{F}{G}_{\\Phi}=\\int_X\\int_X F(x) \\ip{\\Phi(x)}{\\Phi(y)}\\overline{G(y)} d\\mu(x)d\\mu(y).\n\\end{equation}\\\\\n\n\nWith the basic definitions at hand, we are now able to give an interpretation of \\cite[Theorem 4.1]{ansptr15} in terms of RKHSs.\nIn particular, this result answers the question whether, given a mapping $\\Phi$, \nthere exists another mapping $\\Psi$ such that $(\\Psi,\\Phi)$ forms a reproducing pair. 
\n \\begin{theorem}[\\cite{ansptr15}, Theorem 4.1]\\label{theo-partner}\nLet $\\Phi:X\\rightarrow\\H$ be a weakly measurable mapping and $\\{e_i\\}_{i\\in\\mathcal{I}}$ an orthonormal \nbasis of $\\H$. There exists another family $\\Psi$, such that $(\\Psi,\\Phi)$ is a reproducing pair if, and only if,\n\\begin{enumerate}[(i)]\\item ${\\sf Ran}\\, T_\\Phi =\\H$, \n\\item there exists $\\{\\mathcal{E}_i\\}_{i\\in\\mathcal{I}}\\subset {\\mathcal V}_\\Phi(X,\\mu)$ satisfying $T_\\Phi \\mathcal{E}_i= e_i,\\ \\forall\\ i\\in\\mathcal{I},$ and\n\\begin{equation}\\label{second-assumption}\n\\sum_{i\\in\\mathcal{I}}|\\mathcal{E}_i(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n\\end{equation}\n\\end{enumerate}\n A reproducing partner $\\Psi$ is then given by\n\\begin{equation}\\label{def-repr-partn}\n \\Psi(x):=\\sum_{i\\in\\mathcal{I}}\\overline{\\mathcal{E}_i(x)}e_i.\n\\end{equation}\n \\end{theorem}\n Theorem \\ref{theo-partner} is a powerful tool for the study of complete systems. It has for example been used to construct a reproducing partner for the Gabor system of integer time-frequency shifts of the Gaussian window \\cite{spexxl16}.\n\nLet us briefly discuss the conditions $(i)$ and $(ii)$. For a complete system one can show that (under very mild conditions \\cite[Lemma 2.2]{jpaxxl09}) $\\overline{{\\sf Ran}\\, D_\\Phi}=\\H$ holds. It might therefore seem that $(i)$ is mainly a formality since $T_\\Phi$ extends $D_\\Phi$ to the domain ${\\mathcal V}_\\Phi(X,\\mu)$. The upper semi-frame from Section \\ref{sec:upper-semi}, however, does not satisfy $(i)$, see \\cite[Section 6.2.3]{ansptr15}. In addition, there are intuitive interpretations of $(i)$ and $(ii)$ in different contexts.\n\n{\\bf Coefficient map interpretation:} Property $(i)$ ensures the existence of a linear coefficient\nmap $A:\\H\\rightarrow {\\mathcal V}_\\Phi(X,\\mu)$ satisfying $f=T_\\Phi A(f)$ for every $f\\in\\H$. 
\nProperty $(ii)$ then guarantees that $A(f)$ can be calculated by taking inner products of $f$ with a second mapping\n$\\Psi:X\\rightarrow\\H$.\n\n{\\bf RKHS interpretation:} Let us assume that $(i)$ and $(ii)$ are satisfied. The family\n$\\{\\mathcal{E}_i\\}_{i\\in\\mathcal{I}}$ forms an orthonormal system with respect to the inner product $\\ip{\\cdot}{\\cdot}_\\Phi$, since by $(ii)$ it holds\n$$\n\\langle \\mathcal{E}_i,\\mathcal{E}_k\\rangle_\\Phi=\\langle T_\\Phi \\mathcal{E}_i,T_\\Phi \\mathcal{E}_k\\rangle=\\langle e_i,e_k\\rangle=\\delta_{i,k}.\n$$\nHence, $\\{\\mathcal{E}_i\\}_{i\\in\\mathcal{I}}$ forms an orthonormal basis for\n$$ \n\\H_K^\\Phi:=\\overline{\\mbox{span}\\{\\mathcal{E}_i:\\ i\\in\\mathcal{I}\\}}^{\\|\\cdot\\|_\\Phi}.\n$$\nTheorem \\ref{charact-of-RKHS} together with \\eqref{second-assumption} ensures that $\\H_K^\\Phi$ is a RKHS.\nMoreover, the definition of the reproducing partner $\\Psi$ in \\eqref{def-repr-partn} yields that\n\\begin{equation}\\label{rkhs-ran-anal}\\H_K^\\Phi\\simeq V_\\Phi(X,\\mu)\\simeq({\\sf Ran}\\, C_\\Psi,\\|\\cdot\\|_\\Phi).\n\\end{equation}\nTo put it another way, $(i)$ and $(ii)$ guarantee that there exists a RKHS $\\H_K^\\Phi\\subset {\\mathcal V}_\\Phi(X,\\mu)$\nwhich reproduces $\\H$ in the sense that $T_\\Phi(\\H_K^\\Phi)=\\H$.\n\n\nLet us assume that $(\\Psi,\\Phi)$ is a reproducing pair. There is a natural way to generate frames on $\\H$ and $\\H_K^\\Phi$ using the analysis and synthesis operators.\n\\begin{proposition}\\label{rep-pair-frame-dec}\nLet $(\\Psi,\\Phi)$ be a reproducing pair for $\\H$, $\\{g_i\\}_{i\\in\\mathcal{I}}$ a frame for $\\H$ and $\\{G_i\\}_{i\\in\\mathcal{I}}$ a frame for \n$\\H_K^\\Phi$. 
\nDefine $H_i(x):=\\ip{g_i}{\\Psi(x)}$ and $h_i:=T_\\Phi G_i$, then\n$\\{H_i\\}_{i\\in\\mathcal{I}}$ is a frame for $\\H_K^\\Phi$ and $\\{h_i\\}_{i\\in\\mathcal{I}}$ is a frame for $\\H$.\n\\end{proposition}\n\\textbf{Proof:\\ } Let $F\\in \\H_K^\\Phi$, then \n$$\n\\sum_{i\\in\\mathcal{I}}|\\langle F,H_i\\rangle_\\Phi|^2=\\sum_{i\\in\\mathcal{I}}|\\langle T_\\Phi F,T_\\Phi H_i\\rangle|^2\n=\n\\sum_{i\\in\\mathcal{I}}|\\langle T_\\Phi F,S_{\\Psi,\\Phi} g_i\\rangle|^2\n$$\n$$\n=\\sum_{i\\in\\mathcal{I}}|\\langle (S_{\\Psi,\\Phi})^\\ast T_\\Phi F, g_i\\rangle|^2\\leq M\\|(S_{\\Psi,\\Phi})^\\ast T_\\Phi F\\|^2\n$$\n$$\n\\leq M\\|S_{\\Psi,\\Phi}\\|^2\\| T_\\Phi F\\|^2 =\\widetilde M \\|F\\|_\\Phi^2.\n$$\nThe lower bound follows from the same argument as $(S_{\\Psi,\\Phi})^\\ast$ is boundedly invertible.\nHence, $\\{H_i\\}_{i\\in\\mathcal{I}}$ is a frame for $\\H_K^\\Phi$. \\\\\nLet $f\\in \\H$, then \n$$\n\\|f\\|=\\|T_\\Phi C_\\Psi S_{\\Psi,\\Phi}^{-1}f\\|=\\|C_\\Psi S_{\\Psi,\\Phi}^{-1}f\\|_\\Phi, \n$$\ntogether with \n$$\n\\sum_{i\\in\\mathcal{I}}|\\langle f,h_i\\rangle|^2=\\sum_{i\\in\\mathcal{I}}|\\langle T_\\Phi C_\\Psi S_{\\Psi,\\Phi}^{-1}f,T_\\Phi G_i\\rangle|^2\n=\n\\sum_{i\\in\\mathcal{I}}|\\langle C_\\Psi S_{\\Psi,\\Phi}^{-1}f,G_i\\rangle_\\Phi|^2,\n$$\nyields that $\\{h_i\\}_{i\\in\\mathcal{I}}$ is a frame for $\\H$.\n\\hfill$\\Box$\\\\\n\n\\noindent The rest of this section is concerned with the explicit calculation of the reproducing kernel for $\\H_K^\\Phi$.\n Let $(\\Psi,\\Phi)$ be a reproducing pair, then there exists a similar characterization of the range of the \n analysis operators as in \\eqref{frame-rep-kernel-op}.\n Let $F\\in {\\mathcal V}_\\Phi(X,\\mu)$ and define $R_{\\Psi,\\Phi}(x,y):=\\ip{S_{\\Psi,\\Phi}^{-1}\\Phi(y)}{\\Psi(x)}$ and its associated\n integral operator\n$$\n\\mathcal{R}_{\\Psi,\\Phi}(F)(x):=\\int_X F(y)R_{\\Psi,\\Phi}(x,y)d\\mu(y).\n$$\nBy\n\\cite[Proposition 2]{spexxl14} it follows that $\\mathcal{R}_{\\Psi,\\Phi}(F)(x)=F(x)$ 
if, and only if, there \nexists $f\\in \\H$ such that $F(x)=\\ip{f}{\\Psi(x)}$, for all $x\\in X$.\nHowever, $R_{\\Psi,\\Phi}$ is not the reproducing kernel for $\\H_K^\\Phi$ since the reproducing formula is based\non the inner product of $L^2(X,\\mu)$ and not on $\\langle \\cdot,\\cdot\\rangle_\\Phi$. \n\nBy \\eqref{rkhs-ran-anal}, the reproducing kernel is given by a function $k_x\\in {\\sf Ran}\\, C_\\Psi$ such that $F(x)=\\ip{F}{k_x}_\\Phi$. \nLet $F\\in{\\sf Ran}\\, C_\\Psi$; applying \\eqref{phi-inner-expl} and the identity $f=T_\\Phi C_\\Psi S_{\\Psi,\\Phi}^{-1} f$ yields \n\\begin{align*}\n F(x)&=\\mathcal{R}_{\\Psi,\\Phi}(F)(x)=\\int_X F(y)\\ip{\\Phi(y)}{(S_{\\Psi,\\Phi}^{-1})^\\ast\\Psi(x)}d\\mu(y)\\\\\n &=\\int_X \\int_X F(y)\\ip{\\Phi(y)}{\\Phi(z)}\\ip{\\Psi(z)}{S_{\\Psi,\\Phi}^{-1}(S_{\\Psi,\\Phi}^{-1})^\\ast\\Psi(x)}d\\mu(z)d\\mu(y)\\\\\n&=\\Big\\langle F,\\big\\langle (S_{\\Psi,\\Phi}^{-1})^\\ast\\Psi(x),(S_{\\Psi,\\Phi}^{-1})^\\ast\\Psi(\\cdot)\\big\\rangle\\Big\\rangle_\\Phi.\n \\end{align*}\nHence, using $(S_{\\Psi,\\Phi}^{-1})^\\ast=S_{\\Phi,\\Psi}^{-1}$, we finally obtain\n$$\nK_{\\Psi,\\Phi}(x,y)=k_x(y)=\\big\\langle S_{\\Phi,\\Psi}^{-1}\\Psi(x),S_{\\Phi,\\Psi}^{-1}\\Psi(y)\\big\\rangle.\n$$\n\n\n\n\\begin{comment}\nIt therefore remains\nto show that point evaluation is continuous.\nLet $F\\in V_\\Phi(X,\\mu)$, by Lemma one obtains\n$$\n|F(x)|=|\\ip{f}{\\Psi(x)}|\\leq \\norm{}{f}\\norm{}{\\Psi(x)}\\leq C\\norm{\\Phi}{C_\\Psi f}\\norm{}{\\Psi(x)}=C_x\\norm{\\Phi}{F}\n$$\n\n \\begin{theorem}\\label{theo-partner}\nLet $\\phi$ be a weakly measurable function and $e=\\{e_n\\}_{n\\in\\mathbb{N}}$ an orthonormal basis of $\\H$. 
There exists another measurable\nfunction $\\psi$, such that $(\\psi,\\phi)$ is a reproducing pair if, and only if, \n there exist $m,M>0$ such that \n\\begin{equation}\\label{norm_equiv_f_Cf}\nm\\norm{}{f}\\leq \\norm{\\phi^\\ast}{C_\\phi f} \\leq M\\norm{}{f},\\ \\forall f\\in\\mathcal{H}\n\\end{equation}\nand there exists a family $\\{\\xi_n\\}_{n\\in\\mathbb{N}}\\subset \\mathcal{V}_\\phi(X,\\mu)$ such that \n\\begin{equation}\\label{second-assumption}\n[\\xi_n]_\\phi=[\\widehat T_\\phi^{-1} e_n]_\\phi,\\ \\forall n\\in\\mathbb{N},\\hspace{0.5cm} \\text{and} \\hspace{0.5cm}\n\\sum_{n\\in\\mathbb{N}}|\\xi_n(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n\\end{equation}\n\\end{theorem}\n\\end{comment}\n\n\n\n\n\\begin{comment}\n\\begin{definition}\n Let $(B,\\norm{B}{\\cdot})$ be a Banach space. A family $\\{g_n\\}_{n\\in\\mathbb{N}}\\subset B$ is called an \\textbf{atomic decomposition} if there exists a \n Banach space of sequences $(B^\\natural,\\norm{B^\\natural}{\\cdot})$ and bounded linear functionals $\\{\\lambda_n\\}_{n\\in\\mathbb{N}}$ such that\n \\begin{enumerate}[(i)]\n \\item $\\{\\lambda_n(f)\\}_{n\\in\\mathbb{N}}\\in B^\\natural$ and there exists $C_1>0$ such that\n $$\n \\norm{B^\\natural}{\\{\\lambda_n(f)\\}_{n\\in\\mathbb{N}}}\\leq C_1\\norm{B}{f},\\ \\forall\\ f\\in B\n $$\n \\item If $\\{\\lambda_n\\}_{n\\in\\mathbb{N}}\\in B^\\natural$ then $f=\\sum_{n\\in\\mathbb{N}}\\lambda_n g_n\\in B$ (with unconditional convergence in some\n suitable topology) and there exists $C_2>0$ such that \n $$\n \\norm{B}{f}\\leq C_2\\norm{B^\\natural}{\\{\\lambda_n\\}_{n\\in\\mathbb{N}}}\n $$\n \\item $f=\\sum_{n\\in\\mathbb{N}}\\lambda_n(f)g_n,\\ \\forall\\ f\\in B$\n \\end{enumerate}\n\n \\end{definition}\n \n\\begin{definition}\n Let $(B,\\norm{B}{\\cdot})$ be a Banach space. 
A family $\\{h_n\\}_{n\\in\\mathbb{N}}\\subset B^\\ast$ is called a \\textbf{Banach frame} if there exists a \n Banach space of sequences $(B^\\natural,\\norm{B^\\natural}{\\cdot})$ and a bounded linear reconstruction operator $\\Omega$ such that\n \\begin{enumerate}[(i)]\n \\item If $f\\in B$ then $\\{h_n(f)\\}_{n\\in\\mathbb{N}}\\in B^\\natural$ and there exists $C_1,C_2>0$ such that \n $$\n C_1\\norm{B}{f}\\leq\\norm{B^\\natural}{\\{h_n(f)\\}_{n\\in\\mathbb{N}}}\\leq C_2\\norm{B}{f},\\ \\forall\\ f\\in B\n $$\n \\item $f=\\Omega(\\{\\lambda_n(f)\\}_{n\\in\\mathbb{N}}),\\ \\forall\\ f\\in B$\n \\end{enumerate}\n\n \\end{definition}\n\n\n\n\\begin{proposition}\\label{rep-pair-atomic-dec}\nLet $(\\Psi,\\Phi)$ be a reproducing pair, $\\{f_n\\}_{n\\in\\mathbb{N}}$ a frame for $\\H$, $B:=({\\sf Ran}\\, C_\\Psi,\\norm{\\Phi}{\\cdot})$ and $B^\\natural:=l^2(\\mathbb{N})$.\nDefine $\\psi_n(x):=\\ip{f_n}{\\Psi(x)}$, then $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ is an atomic decomposition for $B$.\n\\end{proposition}\n\\textbf{Proof:\\ } Let $\\{\\widetilde f_n\\}_{n\\in\\mathbb{N}}$ be the canonical dual frame of $\\{f_n\\}_{n\\in\\mathbb{N}}$ and define\n$$\n\\lambda_n(F):=\\ip{S_{\\Psi,\\Phi}^{-1}T_\\Phi F}{\\widetilde f_n}\n$$\nBy a combination of \\cite[Lemma 2.5 \\& Proposition 2.10]{ansptr15} we get\n$$\n\\norm{l^2}{\\lambda(F)}^2=\\sum_{n\\in\\mathbb{N}}\\big|\\lambda_n(F)\\big|^2=\\sum_{n\\in\\mathbb{N}}\\big|\\ip{S_{\\Psi,\\Phi}^{-1}T_\\Phi F}{\\widetilde f_n}\\big|^2\n$$\n$$\n\\leq C \\norm{}{S_{\\Psi,\\Phi}^{-1}T_\\Phi F}^2\n\\leq C\\norm{}{T_\\Phi F}^2\n\\leq C\\norm{\\Phi}{F}^2\n$$\nNow let $\\lambda\\in l^2(\\mathbb{N})$ and $F=\\sum_{n\\in\\mathbb{N}}\\lambda_n\\phi_n$, 
then\n$$\n\\norm{\\Phi}{F}=\\sup_{\\norm{}{g}=1}\\Big|\\int_X\\sum_{n\\in\\mathbb{N}}\\lambda_n\\psi_n(x)\\ip{\\Phi(x)}{g}d\\mu(x)\\Big|\n$$\n$$\n=\\sup_{\\norm{}{g}=1}\\Big|\\sum_{n\\in\\mathbb{N}}\\lambda_n\\int_X\\ip{f_n}{\\Psi(x)}\\ip{\\Phi(x)}{g}d\\mu(x)\\Big|=\\sup_{\\norm{}{g}=1}\\Big|\n\\sum_{n\\in\\mathbb{N}}\\lambda_n\\ip{S_{\\Psi,\\Phi}f_n}{g}\\Big|\n$$\n$$\n=\\sup_{\\norm{}{g}=1}\\Big|\\ip{S_{\\Psi,\\Phi}D_f\\lambda}{g}\\Big|=\\Big\\|S_{\\Psi,\\Phi}D_f\\lambda\\Big\\|\\leq C\\norm{2}{\\lambda}\n$$\nFinally, for every $F=\\ip{f}{\\Psi(\\cdot)}\\in {\\sf Ran}\\, C_\\Psi$, we have\n$$\n\\sum_{n\\in\\mathbb{N}}\\lambda_n(F)\\psi_n(x)=\\sum_{n\\in\\mathbb{N}}\\ip{S_{\\Psi,\\Phi}^{-1}D_\\Phi F}{\\widetilde f_n}\\ip{f_n}{\\Psi(x)}\n=\\ip{S_{\\Psi,\\Phi}^{-1}T_\\Phi F}{\\Psi(x)}\n$$\n$$\n=\\ip{S_{\\Psi,\\Phi}^{-1}S_{\\Psi,\\Phi} f}{\\Psi(x)}=\\ip{ f}{\\Psi(x)}=F(x)\n$$\n\\hfill$\\Box$\\\\\n\n\\begin{proposition}\nLet $(\\Psi,\\Phi)$ be a reproducing pair, $\\{f_n\\}_{n\\in\\mathbb{N}}$ a frame for $\\H$, $B:=({\\sf Ran}\\, C_\\Psi,\\norm{\\Phi}{\\cdot})$ and $B^\\flat:=l^2(\\mathbb{N})$.\nDefine $\\phi_n(x):=\\ip{f_n}{\\Phi(x)}$ and $\\{h_n\\}_{n\\in\\mathbb{N}}\\subset B^\\ast$ defined by\n$$\nh_n(F):=\\int_X F(x)\\overline{\\phi_n(x)}d\\mu(x),\n$$ then $\\{\\phi_n\\}_{n\\in\\mathbb{N}}$ is an Banach frame for $B$.\n\\end{proposition}\n\\textbf{Proof:\\ } Let $F\\in B$ and $C_\\Psi f=F$, then\n$$\nC_1\\norm{\\phi}{F}\\leq \\norm{}{S_{\\Psi,\\Phi}^{-1}}^{-1}\\norm{}{f}\\leq \\norm{}{S_{\\Psi,\\Phi}f}\\leq \\norm{}{S_{\\Psi,\\Phi}}\\norm{}{f}\\leq C_2\\norm{\\Phi}{F}\n$$\nMoreover, we have\n$$\n\\sum_{n\\in\\mathbb{N}}|h_n(F)|^2=\\sum_{n\\in\\mathbb{N}}\\Big|\\int_X \\ip{f}{\\Psi(x)}\\ip{\\Phi(x)}{f_n}d\\mu(x)\\Big|^2\n$$\n$$\n=\\sum_{n\\in\\mathbb{N}}|\\ip{S_{\\Psi,\\Phi}f}{f_n}|^2\\asymp\\norm{}{S_{\\Psi,\\Phi}f}^2\n$$\nIt remains to show that there exists a bounded reconstruction operator $\\Omega:l^2(\\mathbb{N})\\rightarrow B$, such that 
$\\Omega(h(F))=F$.\nDefine $\\Omega(c):=C_\\Psi S_{\\Psi,\\Phi}^{-1}D_{\\widetilde f} c$. $\\Omega$ is bounded as \n$$\n\\norm{\\phi}{\\Omega(c)}=\\norm{\\phi}{C_\\Psi S_{\\Psi,\\Phi}^{-1}D_{\\widetilde f} c}\\leq C\\norm{}{S_{\\Psi,\\Phi}^{-1}D_{\\widetilde f} c}\\leq C\\norm{2}{c}\n$$\nMoreover, $\\Omega$ reproduces $F=C_\\Psi f$ as\n$$\n\\Omega(h(F))=C_\\Psi S_{\\Psi,\\Phi}^{-1}\\Big(\\sum_{n\\in\\mathbb{N}}\\int_X F(x)\\overline{\\phi_n(x)}d\\mu(x)\\widetilde f_n\\Big)\n$$\n$$\nC_\\Psi S_{\\Psi,\\Phi}^{-1}\\Big(\\sum_{n\\in\\mathbb{N}}\\int_X C_\\Psi f(x)\\ip{\\Phi(x)}{f_n}d\\mu(x){\\widetilde f}_n\\Big)\n=C_\\Psi S_{\\Psi,\\Phi}^{-1}\\Big(\\sum_{n\\in\\mathbb{N}}\\ip{S_{\\Psi,\\Phi}f}{f_n}{\\widetilde f}_n\\Big)\n$$\n$$\nC_\\Psi S_{\\Psi,\\Phi}^{-1}S_{\\Psi,\\Phi}f=C_\\Psi f=F\n$$\n\\hfill$\\Box$\\\\\n\\xxl{\n\\begin{itemize}\n\\item Can we use this for a Gelfand triplet \n$$ B \\subseteq \\H \\subseteq B'$$\nto construct Banach frames?\n\\item In this setting a reproducing pair $\\Phi \\subseteq B$ and $ \\phi \\subseteq B'$ makes sense, and gives a reproducing pair for $\\H$, right?\n\\end{itemize}\n}\n\n\n\nFinally, we give a characterization of the reproducing kernel of a reproducing pair via atomic decompositions (in analogy to Theorem \\ref{charact-of-RKHS})\nand show their pointwise square summability.\n\\begin{theorem}\\label{rep-kernel-repres}\nLet $(\\Psi,\\Phi)$ be a reproducing pair.\nThe reproducing kernel can be written as \n\\begin{equation}\\label{kernel-characterization}\nK_{\\Psi,\\Phi}(x,y)=\\sum_{n\\in\\mathbb{N}}\\psi_n(x)\\overline{\\phi_n(y)}\n\\end{equation}\nwhere $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ and $\\{\\phi_n\\}_{n\\in\\mathbb{N}}$ are atomic decompositions of $({\\sf Ran}\\, C_\\Psi,\\norm{\\Phi}{\\cdot})$ and \n$({\\sf Ran}\\, C_\\Phi,\\norm{\\Psi}{\\cdot})$ respectively.\n\\end{theorem}\n\\textbf{Proof:\\ } Let $\\{e_n\\}_{n\\in\\mathbb{N}}$ be an ONB of $\\H$ and define $g_n=S_{\\Psi,\\Phi}e_n$, then 
$\\{g_n\\}_{n\\in\\mathbb{N}}$ is a Riesz basis for $\\H$. \nIt holds\n$$\nK_{\\Psi,\\Phi}(x,y)=\\ip{S_{\\Psi,\\Phi}^{-1}\\Phi(y)}{\\Psi(x)}=\n\\Big\\langle S_{\\Psi,\\Phi}^{-1}\\sum_{n\\in\\mathbb{N}}\\ip{\\Phi(y)}{\\widetilde g_n}g_n,\\sum_{n\\in\\mathbb{N}}\\ip{\\Psi(x)}{e_k}e_k\\Big\\rangle\n$$\n$$\n=\\sum_{k,n\\in\\mathbb{N}}\\ip{\\Phi(y)}{\\widetilde g_n}\\ip{e_k}{\\Psi(x)}\\langle S_{\\Psi,\\Phi}^{-1}g_n,e_k\\rangle\n=\\sum_{k,n\\in\\mathbb{N}}\\ip{\\Phi(y)}{\\widetilde g_n}\\ip{e_k}{\\Psi(x)}\\langle e_n,e_k\\rangle\n$$\n$$\n=\\sum_{k,n\\in\\mathbb{N}}\\ip{e_n}{\\Psi(x)}\\overline{\\ip{\\widetilde g_n}{\\Phi(y)}}=\\sum_{k,n\\in\\mathbb{N}}\\psi_n(x)\\overline{\\phi_n(y)}\n$$\nwith $\\psi_n:=\\ip{e_n}{\\Psi(\\cdot)}$ and $\\phi_n:=\\ip{\\widetilde g_n}{\\Phi(\\cdot)}$. The result then follows by Proposition \n\\ref{rep-pair-atomic-dec}.\\hfill$\\Box$\\\\\n\n\\begin{corollary}\nUnder the same assumptions as in Theorem \\ref{rep-kernel-repres}, \nwe have that $\\sum_{n\\in\\mathbb{N}}|\\psi_n(x)|^2<\\infty$ and $\\sum_{n\\in\\mathbb{N}}|\\phi_n(x)|^2<\\infty$ \n\\end{corollary}\n\\textbf{Proof:\\ } Let $x\\in X$\n$$\n\\sum_{n\\in\\mathbb{N}}|\\phi_n(x)|^2=\\sum_{n\\in\\mathbb{N}}|\\ip{\\widetilde g_n}{\\Phi(x)}|^2\\leq B\\norm{}{\\Phi(x)}^2<\\infty\n$$\nThe argument for $\\psi_n$ is the same.\n\\hfill$\\Box$\\\\ \n\n\n\\textcolor{red}{conjectures:\n\\begin{proposition}\nLet $(B,B')$ be a pair of mutually dual, separable RKBS, then there exists an atomic decomposition of $B$ such that\n$$\n\\sum_{n\\in\\mathbb{N}}|\\psi_n(x)|^2<\\infty\n$$\n\\end{proposition}\n\\begin{proposition}\nLet $(B,B')$ be a pair of mutually dual, separable RKBS, and the reproducing kernel is given by\n$$\nK(x,y)=\\sum_{n\\in\\mathbb{N}}\\psi_n(x)\\overline{\\phi_n(y)}\n$$\nsatisfying\n$$\n\\sum_{n\\in\\mathbb{N}}|\\psi_n(x)|^2<\\infty\n$$\n$$\n\\sum_{n\\in\\mathbb{N}}|\\phi_n(x)|^2<\\infty\n$$\nThen there exist a reproducing pair $(\\Psi,\\Phi)$ with 
....\n\\end{proposition}\n}\n\n\n\\begin{proposition}\nLet $(B,B')$ be a pair of separable RKBS defined on $(X,\\mu)$ with $B'$ being the conjugate dual space of $B$ w.r.t. the $L^2$ duality pairing.\nAssume that $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ is an atomic decompositions of $B$ and $\\{\\phi_n\\}_{n\\in\\mathbb{N}}$ a Banach frame for $B$ with\n$B^\\natural=B^\\flat=l^2$. \nThen there exists a reproducing pair $(\\Psi,\\Phi)$ such that ${\\sf Ran}\\, C_\\Psi=B$ and ${\\sf Ran}\\, C_\\Phi= B'$ as sets with equivalent \nnorms\n\\end{proposition}\n\\textbf{Proof:\\ } Let $\\{g_n\\}_{n\\in\\mathbb{N}}$, $\\{h_n\\}_{n\\in\\mathbb{N}}$ be frames of $\\H$, such that ${\\sf Ran}\\, C_g={\\sf Ran}\\, \\lambda$ and ${\\sf Ran}\\, C_h={\\sf Ran}\\, \\gamma$. As \n${\\sf Ran}\\, \\lambda$ and ${\\sf Ran}\\, \\gamma$ are closed by the atomic decomposition and Banach frame assumptions, it follows by Corollary \\ref{exist-frame-for-RKHS}\nand Remark \\ref{discrete-subspace-rkhs} that such \nframes always exist. Define $\\Psi$ via \n$\\ip{\\widetilde g_n}{\\Psi(x)}=\\psi_n(x)$ and $\\Phi$ via $\\ip{\\widetilde h_n}{\\Psi(x)}=\\phi_n(x)$. First we have to show that $\\Psi(x)$ is \nwell-defined. Let $f\\in\\H$, it holds\n$$\n|\\ip{f}{\\Psi(x)}|=\\Big|\\sum_{n\\in\\mathbb{N}}\\ip{f}{g_n}\\psi_n(x)\\Big|\\leq M_x\\Big\\|\\sum_{n\\in\\mathbb{N}}\\ip{f}{g_n}\\psi_n\\Big\\|_B\n$$\n$$\n\\leq M_x' \\|C_g f\\|_{2}=M_x''\\|f\\|\n$$\nwhere we have used that point evaluation is continuous and $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ defines an atomic decomposition. 
\n$$\n|\\ip{f}{\\Phi(x)}|=\\Big|\\sum_{n\\in\\mathbb{N}}\\ip{f}{h_n}\\phi_n(x)\\Big|\\leq M_x\\Big\\|\\sum_{n\\in\\mathbb{N}}\\ip{f}{h_n}\\phi_n\\Big\\|_{B'}\n$$\n$$\n= M_x\\sup_{\\norm{B}{F}=1}\\Big|\\sum_{n\\in\\mathbb{N}}\\ip{f}{h_n}\\phi_n(F)\\Big|\\leq M_x\\sup_{\\norm{B}{F}=1}\\norm{l^2}{C_h f}\\norm{2}{\\{\\phi_n(F)\\}_{n\\in\\mathbb{N}}}\n$$\n$$\n\\leq M_x'\\norm{}{f}\\sup_{\\norm{B}{F}=1}\\norm{B}{F}=M_x'\\norm{}{f}\n$$\nwhere we used Cauchy-Schwarz inequality, the upper frame inequality and the upper inequality from the definition of a Banach frame.\nHence, Riesz representation theorem assures the existence and uniqueness of $\\Psi(x)$ for every $x\\in X$. The same construction can \nbe made to define $\\Phi$.\nIt remains to show that $S_{\\Psi,\\Phi}\\in GL(\\H)$\n$$\n\\ip{S_{\\Psi,\\Phi}f}{g}=\\int_X \\ip{f}{\\Psi(x)}\\ip{\\Phi(x)}{g}d\\mu(x)\n$$\n$$\n=\\int_X \\sum_{n\\in\\mathbb{N}}\\ip{f}{g_n}\\psi_n(x)\\sum_{k\\in\\mathbb{N}}\\overline{\\phi_k(x)}\\ip{h_k}{g}d\\mu(x)\n$$\n$$\n=\\sum_{k\\in\\mathbb{N}}\\int_X \\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n(x)\\Big)\\overline{\\phi_k(x)}d\\mu(x)\\ip{h_k}{g}\n$$\n \\textcolor{red}{why are we allowed to interchange summation and integration???}\n \n We will now show that $O:{\\sf Ran}\\,\\Lambda\\rightarrow{\\sf Ran}\\, \\Gamma$ defined by \n$$\n(O c)_k:=\\int_X \\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n(x)\\Big)\\overline{\\phi_k(x)}d\\mu(x)= \\gamma_k\\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big)\n$$\nis bounded and bijective. 
Therefore let $c\\in {\\sf Ran}\\,\\Lambda$, then $O$ is bounded as\n$$\n\\sum_{k\\in\\mathbb{N}}|(O c)_k|^2=\\sum_{k\\in\\mathbb{N}}\\Big|\\gamma_k\\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big)\\Big|^2\\leq M \\Big\\|\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big\\|_B\\leq M'\\norm{2}{c}\n$$\nLet $F\\in B$ such that $F=\\sum_{n\\in\\mathbb{N}}c_n \\psi_n$, then\n$$\n\\sum_{k\\in\\mathbb{N}}|(O c)_k|^2=\\sum_{k\\in\\mathbb{N}}\\Big|\\gamma_k\\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big)\\Big|^2\\geq m \\Big\\|\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big\\|_B\n$$\n$$\n= m \\Big\\|\\sum_{n\\in\\mathbb{N}}\\lambda_n(F) \\psi_n\\Big\\|_B \\geq m'\\norm{2}{\\Lambda(F)}\n$$\n\nAs $B=\\big\\{\\sum_{n\\in\\mathbb{N}}c_n\\psi_n:\\ c\\in{\\sf Ran}\\,\\Lambda\\big\\}$ it follows that $O$ is surjective.\n\n\nFinally, $S_{\\Psi,\\Phi}=D_hO C_g\\in GL(\\H)$ as $C_g:\\H\\rightarrow {\\sf Ran}\\, \\Lambda={\\sf Ran}\\, C_g$ and $D_h:{\\sf Ran}\\, \\Gamma={\\sf Ran}\\, C_h\\rightarrow \\H$ \nare bijective.\n\\end{comment}\n\n\\section{Conclusion}\nThe results of this paper suggest changing the usage of some notions in frame theory. We have shown that any frame can be decomposed into a discrete and a strictly continuous part. \nIn this light, it is reasonable to use the term continuous (semi-)frame only if the frame is actually strictly continuous, and the term semi-continuous (resp. discrete) frame otherwise.\nMoreover, since the underlying measure space of a frame with finite redundancy is atomic, all efforts to generalize Riesz bases to general measure spaces are condemned to failure from the beginning.\n\nWe have investigated the redundancy of (semi-)frames in detail and shown that, in this regard, upper semi-frames may behave essentially differently from systems satisfying the lower frame bound. 
It is an open question to us whether a result similar to Theorem \\ref{reproduced-result} can be proven for the redundancy of a reproducing pair defined in \\eqref{redund-rep-pair}.\n\nAnother interesting topic for future research is to find and study alternative notions of redundancy for continuous frames. A promising approach that may be adapted can be found in \\cite{bacahela06}.\nStudying the dependence on the measure space should thereby remain a key objective.\n\nTo sum up, we hope that we have emphasized the fundamental importance of RKHSs for analysis\/synthesis processes like frames or reproducing pairs.\n\n\\section*{Appendix}\n\n\\textbf{Proof of Lemma \\ref{not-atomic-non-atomic}:} Ad $(i)$: See \\cite{fi72}.\n\nAd $(ii)$: Let $(X,\\mu)$ be non-atomic. Let us assume on the contrary that \nfor every measurable set $A\\subset X$ with \n$\\mu(A)>0$ there exists an atom $B\\subset A$ and let $\\{A_n\\}_{n\\in\\mathcal{I}}\\subset X$ be a countable partition of $X$ by sets of finite measure. We will show that each $A_n$ can be partitioned into atoms and null sets, a contradiction. Assume without loss of generality that $\\mu(A_{1})>0$. By assumption, there exists an atom\n$B_1\\subset A_{1}$. \nIf $\\mu(B_1)=\\mu(A_{1})$, then $A_{1}$ is an atom. If $0<\\mu(B_1)<\\mu(A_{1})$, then\n$\\mu(A_{1}\\backslash B_1)>0$. Hence, there exists an atom $B_2\\subset A_{1}\\backslash B_1$ and the preceding\nargument can be repeated. If one has\n$\\mu\\big(A_{1}\\backslash \\big(\\bigcup_{k=1}^KB_k\\big)\\big)>0$ for all iteration steps $K$, then $\\mu_K:=\\mu\\big(\\bigcup_{k=1}^K B_k\\big)$\ndefines a strictly increasing sequence, bounded by $\\mu(A_{1})$.\nHence, $\\mu_K$ converges to some $\\mu^\\ast$ and the limit equals $\\mu(A_{1})$. 
\nIndeed, if $\\mu^\\ast<\\mu(A_1)$ then, by assumption,\nthere exists an atom $B^\\ast\\subset A_{1}\\backslash \\bigcup_{k\\in\\mathbb{N}} B_k$ and\n$$\\mu\\Big(\\bigcup_{k\\in\\mathbb{N}} B_k\\cup B^\\ast\\Big)>\\mu^\\ast,$$ \na contradiction.\nConsequently,\n$\nA_{1}=\\bigcup_{k\\in\\mathbb{N}} B_k\\cup N,$\nwhere $N=A_{1}\\backslash \\bigcup_{k\\in\\mathbb{N}} B_k$ is of measure zero. In particular, we have constructed a \npartition of $A_{1}$ consisting of atoms and null sets.\nRepeating this argument for all $A_n$, $n\\in\\mathcal{I}$, with $\\mu(A_n)>0$ shows that $(X,\\mu)$ is atomic, a\ncontradiction. \\hfill$\\Box$\\\\\n\n\n\n \n\n\\section*{Acknowledgement}\nThis work was funded by the Austrian Science Fund (FWF) START-project FLAME ('Frames and\nLinear Operators for Acoustical Modeling and Parameter Estimation'; Y 551-N13).\n\nThe authors would like to thank Jean-Pierre Antoine for fruitful discussions on the physical interpretation\nof the result on the redundancy of continuous frames.\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n \n The emergence of atomically thin, single-layer graphene spawned a new class of materials, known as two-dimensional (2D) materials~\\cite{Xu2013, Novoselov2011}. These extraordinary 2D materials have attracted significant attention within the scientific community due to their wide range of properties - from large band-gap insulators to the very best conductors, the mechanically tough to the soft and malleable, and semi-metals to topological insulators~\\cite{Singh2015, Paul2017,Blonsky2015,Akiyama2021}. The diverse pool of properties that 2D materials possess promises many novel next-generation device applications in nanoelectronics, quantum computing, field-effect transistors, microwave and terahertz photonics, and catalysis~\\cite{Rode2017, Xu2015, Yu2014, Kang2013, Amani2014, Li2019, Luo2016}. 
Despite the excitement surrounding these promising materials, surprisingly few 2D materials are used in industry. Roughly 55 of the $>$5,000 theoretically predicted 2D materials have been experimentally synthesized~\\cite{Mounet2018, Ashton2017, c2db, Singh2014, Zhou2019}.\n \n Of the various methods used to synthesize 2D materials, substrate-assisted methods such as chemical vapor deposition result in large-area and low-defect flakes at a reasonable cost per mass~\\cite{Novoselov2012}. Substrate-assisted methods have the added benefit of being able to synthesize 2D materials that have non-van der Waals (vdW) bonded bulk counterparts. On the other hand, exfoliation techniques, like mechanical exfoliation~\\cite{Singh2015}, can only be used to generate 2D flakes from vdW-bonded bulk counterparts. Currently, substrate-assisted synthesis of 2D materials relies on expensive trial-and-error processes requiring significant experimental effort and intuition for choosing the substrate, precursors, and growth conditions (substrate temperature, growth rate, etc.), resulting in slow progress in realizing and utilizing these materials. Furthermore, the properties of 2D materials can be dramatically altered by placing them on substrates. For example, the mobility of carriers in 2D-MoS$_2$ is reduced by more than an order of magnitude by placing it on a sapphire substrate~\\cite{singh2015al2o3}. To enable the functionalization and to assist in the selection of substrates for synthesis, a detailed understanding of the substrate-assisted modification of the energetic, physical, and electronic properties of 2D materials is required. \n \n In this work, we present the $Hetero2d$ workflow package, inspired by existing community workflow packages. $Hetero2d$ is tailored to address scientific questions regarding the stability and properties of 2D-substrate heterostructured materials. 
$Hetero2d$ provides automated routines for the generation of low-lattice-mismatched heterostructures for arbitrary 2D materials and substrate surfaces, the creation of vdW-corrected density-functional theory (DFT) input files, the submission and monitoring of simulations on computing resources, and the post-processing of the key parameters, namely, (a) the interface interaction energy of 2D-substrate heterostructures, (b) the identification of substrate-induced changes in the interfacial structure, and (c) the charge doping of the 2D material. The 2D-substrate information generated by our routines is stored in a MongoDB database tailored for 2D-substrate heterostructures.\n \n As an example, we demonstrate the use of $Hetero2d$ in screening for substrate surfaces that stabilize the following four 2D materials - $2H$-MoS$_2$, $1T$- and $2H$-NbO$_2$, and hexagonal-ZnTe. We considered the low-index planes of a total of 50 cubic metallic materials as potential substrates. Using the $Hetero2d$ workflow, we determine that Cu, Hf, Mn, Nd, Ni, Pd, Re, Rh, Sc, Ta, Ti, V, W, Y, and Zr substrates sufficiently stabilize the formation energies of these 2D materials, with binding energies in the range of $\\sim$0.1 -- 0.6 eV\/atom. Upon examining the $z$-separation, the charge transfer, and the electronic density of states at the 2D-substrate interface using the post-processing tools of $Hetero2d$, we find covalent-type bonding at the interface, which suggests that these substrates can be used as contact materials. \\href{https:\/\/github.com\/cmdlab\/Hetero2d}{Hetero2d} is shared on GitHub as an open-source package under the GNU license. \n \n\\section{DFT Approach to Identifying Stable 2D-Substrate Heterostructures}\n \n 2D materials are inherently meta-stable and are often created by peeling 2D films from layered, vdW-bonded bulk counterparts. Their meta-stability arises from the removal of the vdW bonds between the individual flakes. 
However, the vdW bonds are an order of magnitude weaker than the in-plane covalent or ionic bonds of 2D materials; thus, many 2D materials can remain stable at room temperature or above. A quantitative measure of the stability of 2D materials to remain as a free-standing 2D film is given by the formation energy, $\\Delta E_{\\mathrm{vac}}^f$, with respect to the bulk phase\n \n \\begin{equation}\n \t\\label{eq:Eform}\n \t\\begin{aligned}[t]\n \t\t\\hspace*{-1.5cm} \\Delta E_{\\mathrm{vac}}^f &= \\dfrac{ E_{\\mathrm{2D}}}{ N_{\\mathrm{2D}} } - \\dfrac{E_{\\mathrm{3D}}}{N_{\\mathrm{3D}}},\\\\\n \t\\end{aligned}\n \\end{equation} where $E_{\\mathrm{2D}}$\\ is the energy of a 2D material in vacuum, $E_{\\mathrm{3D}}$\\ is the energy of the bulk counterpart of the 2D material, and $N_{\\mathrm{2D}}$\\ and $N_{\\mathrm{3D}}$\\ are the number of atoms in the unit cells of the 2D material and its bulk counterpart, respectively. \n \n The $\\Delta E_{\\mathrm{vac}}^f$\\ of a 2D material indicates the stability of a 2D flake to retain the 2D form over its bulk counterpart, where the higher the $\\Delta E_{\\mathrm{vac}}^f$, the larger the driving force to lower the free energy. Singh et al. and others have shown that when $\\Delta E_{\\mathrm{vac}}^f$\\ < 0.2 eV\/atom, 2D materials are stable as free-standing films, but for larger $\\Delta E_{\\mathrm{vac}}^f$\\ they are highly unstable and may only be synthesized using substrate-assisted methods~\\cite{Singh2015, c2db}. \n \n \n For substrate surfaces to stabilize a 2D material during the growth process, the 2D-substrate heterostructure should be energetically stable. Thus the interactions between the 2D material and the substrate surface have to be attractive in nature. 
This interaction energy, known as the binding energy, can be estimated as $\\Delta E_{\\mathrm{b}} = (E_{\\mathrm{2D}} + E_{\\mathrm{S}} - E_{\\mathrm{2D+S}} )\/N_{\\mathrm{2D}}$, where $E_{\\mathrm{2D+S}}$\\ is the energy of the 2D material adsorbed on the surface of a substrate, $E_{\\mathrm{S}}$\\ is the energy of the substrate slab, $E_{\\mathrm{2D}}$\\ is the energy of the free-standing 2D material, and $N_{\\mathrm{2D}}$\\ is the number of atoms in the unit cell of the 2D material. Note that strain is applied to the 2D material to place it on the substrate surface due to the lattice mismatch between the two lattices. For the 2D-substrate heterostructure interaction to be attractive, $\\Delta E_{\\mathrm{b}}$\\ > 0. In addition, this $\\Delta E_{\\mathrm{b}}$\\ should be greater than the $\\Delta E_{\\mathrm{vac}}^f$\\ of the 2D material to ensure that the 2D material remains in its 2D form on the substrate. Singh et al. have shown previously that the successful synthesis of a 2D material on a particular substrate surface is feasible when the adsorption formation energy, $\\Delta E_{\\mathrm{ads}}^f$\\ = $\\Delta E_{\\mathrm{vac}}^f$\\ - $\\Delta E_{\\mathrm{b}}$\\ < 0.\n\n\\section{Hetero2d: The High-Throughput Implementation of the DFT Approach}\n \\subsection{Introduction}\n \n The $Hetero2d$ package is an all-in-one workflow approach to model the heterostructures formed by arbitrary combinations of 2D materials and substrate surfaces. $Hetero2d$ can calculate the $\\Delta E_{\\mathrm{vac}}^f$, $\\Delta E_{\\mathrm{b}}$, and $\\Delta E_{\\mathrm{ads}}^f$\\ for each 2D-substrate heterostructure and store the relevant simulation parameters and post-processing results in a queryable MongoDB database that can be accessed through an application programming interface (API) or a web portal. $Hetero2d$ is written in Python 3.6, a high-level coding language widely used on modern scientific computing resources. 
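Since the three stability descriptors reduce to simple arithmetic on DFT total energies, their bookkeeping can be sketched in a few lines of Python. The helper below is a hypothetical illustration (not the actual $Hetero2d$ API); all energies are in eV and the toy numbers are not DFT results:

```python
def stability_descriptors(E_2D, N_2D, E_3D, N_3D, E_S, E_2D_plus_S):
    """Compute the three stability descriptors in eV/atom.

    Returns (dE_vac_f, dE_b, dE_ads_f); dE_ads_f < 0 suggests the substrate
    can stabilize the 2D film during growth.
    """
    dE_vac_f = E_2D / N_2D - E_3D / N_3D        # formation energy, Eq. (1)
    dE_b = (E_2D + E_S - E_2D_plus_S) / N_2D    # binding energy of 2D film on slab
    dE_ads_f = dE_vac_f - dE_b                  # adsorption formation energy
    return dE_vac_f, dE_b, dE_ads_f


# Toy numbers for illustration only.
vac, b, ads = stability_descriptors(E_2D=-20.0, N_2D=3, E_3D=-41.4, N_3D=6,
                                    E_S=-50.0, E_2D_plus_S=-71.5)
```

Here the binding energy (0.5 eV/atom) exceeds the vacuum formation energy, so the adsorption formation energy is negative and synthesis on this hypothetical substrate would be deemed feasible.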
$Hetero2d$ utilizes \\textit{MPInterfaces}~\\cite{Mathew2016} routines and the robust high-throughput computational tools developed by the Materials Project~\\cite{atomate,Jain2013,Jain2015,Ong2013} (MP), namely \\textit{atomate}, \\textit{FireWorks}, \\textit{pymatgen}, and \\textit{custodian}.\n \n \n $Hetero2d$'s framework is inspired by \\textit{atomate}'s straightforward statement-based workflow design to perform complex materials science computations with pre-built workflows that automate various types of DFT calculations. Figure \\ref{fig:Figure1} illustrates the framework of our workflow within the $Hetero2d$ package. $Hetero2d$ extends some powerful high-throughput techniques available in existing community packages and combines them with new routines created for this work to generate 2D-substrate heterostructures, perform vdW-corrected DFT calculations, store the stability related data within a queryable database, and analyze key properties of the heterostructure. In the following sections, we discuss each step outlined in Figure \\ref{fig:Figure1} underscoring the new computational tools developed for $Hetero2d$.\n \n \n \\begin{figure}[!th]\n \\centering\n \\includegraphics[width=\\textwidth]{img\/WorkflowFlowChart.pdf}\n \\caption{Outline for our computational workflow used in our study to investigate the properties of the 2D-substrate heterostructures as coded in the $Hetero2d$ package. All structures imported from an external database are relaxed using vdW-corrected DFT with our parameters (discussed below) to maintain consistency. Boxes in gold denote a DFT simulation step and boxes in silver denote a pre-processing or post-processing step.}\n \\vspace{-0.25\\intextsep}\n \\label{fig:Figure1}\n \\end{figure}\n \n \\subsection{Workflow Framework}\n $Hetero2d$'s \\textit{atomate}-inspired framework utilizes the \\textit{FireWorks} package to break down and organize each task within a workflow. 
Workflows within the \\textit{FireWorks} package are organized into three task levels -- (1) workflow, (2) firework, and (3) firetask. A workflow is a set of fireworks with dependencies and information shared between them through the use of a unique specification file that determines the order of execution of each firework (FW) and firetask. Each FW is composed of one or more related firetasks designed to accomplish a specific task such as DFT structure relaxation. Firetasks are the lowest level task in the workflow. Firetasks can be simple tasks such as writing files, copying files from a previous directory, or more complex tasks such as calling script-based functions to generate 2D-substrate heterostructures, starting and monitoring a DFT calculation, or post-processing a DFT calculation and updating the database. \n \n $Hetero2d$'s workflow \\textit{get\\_heterostructures\\_stabilityWF} shown in Figure \\ref{fig:Figure1}, has a total of five firework steps (1) FW$_1$: the DFT structural optimization of the 2D material, (2) FW$_2$: the DFT structural optimization of the bulk counterpart of the 2D material, (3) FW$_3$: the DFT structural optimization of the substrate, (4) FW$_4$: the creation and DFT structural optimization of the substrate slab, and (5) FW$_5$: the generation and DFT structural optimization of the 2D-substrate heterostructure configurations. Each firework can be composed of a single or many related firetasks. The tasks are gathered from the specification file that controls the execution of each firetask. For example, FW$_1$ is used to perform a vdW-corrected DFT structure optimization of the 2D material. Note that the DFT simulations are performed using the Vienna \\textit{ab initio} simulation package ~\\cite{Kresse5, Kresse4, Kresse1, Kresse2, Kresse3}. 
FW$_1$ is composed of firetasks which (1) write VASP input files to the job's launch directory, (2) write the structure file, (3) run VASP using \\textit{custodian}~\\cite{Ong2013} to perform just-in-time job management, error checking, and error recovery, (4) collect information regarding the location of the calculation and update the specification file, and (5) perform analysis and convergence checks for the calculation and store all pre-defined information about the calculation in our MongoDB database. A more detailed explanation of each firework in the workflow is discussed in section 3.6, \\textit{Workflow Steps}. \n \n \\subsection{Package Functionalities}\n As mentioned earlier, $Hetero2d$ adapts and extends existing community packages to assess the stability of 2D-substrate heterostructures. Table \\ref{tab:Table1} lists the functionalities of $Hetero2d$ compared with two other workflow-based packages, \\textit{MPInterfaces}~\\cite{Mathew2016} and \\textit{atomate}~\\cite{atomate}, highlighting new and common features within the three packages. \n \n \\begin{table}\n \\centering\n \\caption{A list of functionalities present in the $Hetero2d$ package compared with two other workflow-based packages \\textit{MPInterfaces} and \\textit{atomate}. $Hetero2d$ is the only workflow package with all the specific features needed to create 2D-substrate heterostructures using high-throughput computational methods.}\n \\begin{adjustbox}{width=0.5\\textwidth}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n & $Hetero2d$ & \\textit{MPInterfaces} & \\textit{Atomate} \\\\\n \\hline\n Structure processing & \\checked & \\checked & \\checked \\\\\n \\hline\n Error recovery & \\checked & \\checked & \\checked \\\\\n \\hline\n Database integration & \\checked & \\checked & \\checked \\\\\n \\hline\n \\textit{FireWorks} compatible & \\checked & & \\checked \\\\\n \\hline\n 2D hetero. routines & \\checked & \\checked & \\\\\n \\hline\n 2D hetero. 
workflow & \\checked & & \\\\\n \\hline\n 2D post-processing & \\checked & & \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\label{tab:Table1}\n \\end{table}\n \n All three packages utilize the \\textit{pymatgen} package to perform various structure processing tasks. \\textit{Pymatgen} is used to perform various types of structure-manipulation processes such as reducing\/increasing simulation cell size, creating a vacuum, or creating a slab during the execution of the workflow. Throughout $Hetero2d$, we utilized \\textit{pymatgen} to handle structure-manipulation for (a) the bulk materials and (b) some basic pre-\/post-processing of structures and generation of files for the DFT calculations. Within $Hetero2d$, \\textit{pymatgen}'s structure-manipulation tools are used to create conventional unit cells for the substrate and create the substrate slab surface. Additionally, we have integrated \\textit{pymatgen}'s structure analysis modules to decorate the fireworks in the workflow with structural information for each input structure to populate our database. The pre-processing enables one to differentiate crystal phases with similar compound formulas, easily reference and sort data within the database, and perform analysis in later fireworks. \n \n All three packages use the \\textit{custodian} package~\\cite{Ong2013} to perform error recovery. Error recovery routines are pivotal for any workflow package to reduce the need for human intervention and correct simple run-time errors with pre-defined functions. Additionally, \\textit{custodian} alerts the user if an unrecoverable error has occurred.\n \n Database integration is another functionality present in all three packages that stores and analyzes the vast amount of information generated by each calculation. \n \n Only $Hetero2d$ and \\textit{atomate} are \\textit{FireWorks} compatible whereas, \\textit{MPInterfaces} uses the python package \\textit{fabric} to remote launch jobs over SSH. 
\\textit{FireWorks} is a single package used to define, manage, and execute scientific workflows, with built-in failure-detection routines capable of concurrent job execution and remote job tracking over an arbitrary number of computing resources, accessible from a clean and flexible Python API. \n \n Routines used to automate the generation of 2D-substrate heterostructures given user constraints are available in $Hetero2d$ and \\textit{MPInterfaces}. \\textit{MPInterfaces} implements a mathematical algorithm developed by Zur et al.~\\cite{Zur1984} for generating supercells of lattice-matched heterostructures given two arbitrary lattices and user-specified tolerances for the lattice-mismatch and heterostructure surface area. $Hetero2d$ incorporates functions from \\textit{MPInterfaces} to create 2D-substrate heterostructures and adapts them for use with \\textit{FireWorks}, with which \\textit{MPInterfaces} is currently incompatible. Additionally, by incorporating these routines in $Hetero2d$, we can modify the function to return critical information regarding the 2D-substrate heterostructures that is not returned by the \\textit{MPInterfaces} function. Our 2D-substrate heterostructure function returns the strain of the 2D material along the \\textbf{a} and \\textbf{b} lattice vectors, the angle mismatch between the \\textbf{ab} lattice vectors of the substrate and the 2D material, and the scaling matrix used to generate the aligned 2D-substrate heterostructures. \n \n \n \n The 2D-substrate heterostructure workflow and post-processing routines are uniquely available in $Hetero2d$. The workflow automates all steps needed to study 2D-substrate heterostructure stability and properties via the DFT method. The post-processing routines produce a curated database from which all calculation results can be viewed and additional analysis or calculations can be performed. 
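As a rough illustration of the quantities described above, the lattice-mismatch bookkeeping can be sketched in plain Python. This is a minimal sketch, not the actual $Hetero2d$ API: the function name \textit{lattice\_mismatch} and its inputs are hypothetical, and it only compares the in-plane lattice parameters of an already-matched supercell pair.

```python
# Illustrative sketch (not the Hetero2d API): the quantities our heterostructure
# routine reports -- strain along the a and b lattice vectors and the angle
# mismatch between the ab lattice vectors of the substrate and the 2D material.
def lattice_mismatch(film_a, film_b, film_gamma, sub_a, sub_b, sub_gamma):
    """Return (strain_a, strain_b, angle_mismatch) for a film forced onto a
    matched substrate supercell; strains are fractional, angle in degrees."""
    strain_a = (sub_a - film_a) / film_a
    strain_b = (sub_b - film_b) / film_b
    angle_mismatch = abs(sub_gamma - film_gamma)
    return strain_a, strain_b, angle_mismatch

# Example: a hexagonal 2D film (a = b = 3.16 A, gamma = 120 deg) placed on a
# matched supercell of a (111) surface (a = b = 3.21 A, gamma = 120 deg).
sa, sb, dg = lattice_mismatch(3.16, 3.16, 120.0, 3.21, 3.21, 120.0)
print(f"strain along a: {sa:.2%}, along b: {sb:.2%}, angle mismatch: {dg} deg")
```

A positive strain here means the film is stretched to match the substrate supercell; in the workflow these values are checked against the user-specified tolerances before a heterostructure is built.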
\n \n \\subsection{Default Computational Parameters}\n \\textit{CMDLInterfaceSet} is based on \\textit{pymatgen}'s \\textit{VASPInputSet} class that creates custom input files for DFT calculations. Our new class \\textit{CMDLInterfaceSet} has all the functionality of the parent \\textit{pymatgen} class but is tailored to perform structural optimizations of 2D-substrate heterostructures and implements vdW-corrections, on-the-fly dipole corrections for slabs, generation of custom $k$-point mesh grid density, and addition of selective dynamics tags for the 2D-substrate structures. All DFT calculations are performed using the projector-augmented wave method as implemented in the plane-wave code VASP~\\cite{Kresse5, Kresse4, Kresse1, Kresse2, Kresse3}. The vdW interactions between the 2D material and substrate are modeled using the vdW\u2013DF~\\cite{Rydber2003} functional with the optB88 exchange functional~\\cite{Klimes2011}. \n \n The \\textit{CMDLInterfaceSet} has a default energy cutoff of 520 eV used for all calculations to ensure consistency between structures that have the cell shape and volume relaxed and those that only have ionic positions relaxed. The default $k$-point grid density was automated using \\textit{pymatgen}~\\cite{Ong2013} routines to 20 $k$-points\/unit length by taking the nearest integer value after multiplying $\\frac{1}{\\textbf{a}}$ and $\\frac{1}{\\textbf{b}}$ by 20. These settings were sufficient to converge all calculations to a total force per atom of less than 0.02 eV\/\\AA. Additional information regarding default settings set in the \\textit{CMDLInterfaceSet} and convergence tests performed to benchmark our calculations are in the section 1 and 2 of the SI.\n \n \\subsection{Workflow Initialization and Customization}\n \n To use $Hetero2d$'s workflow, \\textit{get\\_heterostructures\\_stabilityWF}, we import the 2D structure, its bulk counterpart, and the substrate structure from existing databases through their APIs. 
When initialized, the workflow can accept up to three structures: (1) the 2D structure, (2) the bulk counterpart of the 2D structure, and (3) the substrate structure in the bulk or slab form. \n \n To perform structure transformations to generate the substrate slabs or the 2D-substrate heterostructures, our workflow requires two dictionaries during initialization: (1) the \\textit{h\\_params} dictionary and (2) the \\textit{slab\\_params} dictionary. Figure \\ref{fig:Figure2} is a code excerpt demonstrating the parameters one can supply to generate a 2D-substrate heterostructure on a (111) substrate slab surface. In Figure \\ref{fig:Figure2}, the \\textit{slab\\_params} dictionary generates a substrate slab with a vacuum spacing of 19 \\AA\\ and a substrate slab thickness of at least 12 \\AA. The \\textit{h\\_params} dictionary creates the lattice-matched, symmetry-matched 2D-substrate heterostructures with 3.0 \\AA\\ $z$-separation distance between the 2D material and the substrate surface. The \\textit{h\\_params} dictionary also sets the maximum allowed lattice-mismatch along \\textbf{ab} to be less than 5\\%, restricts the surface area to less than 130 \\AA$^2$, and sets the selective dynamics tags in the DFT input file to relax all layers of the 2D material and the top two layers of the substrate slab. \n \n \n \\begin{wrapfigure}[11]{r}{0.55\\textwidth}\n \\vspace{-1.4\\intextsep}\n \\hspace*{-0.4\\columnsep}\\includegraphics[width=0.55\\textwidth]{img\/CodeExcerpt.pdf}\n \\vspace{-0.55\\intextsep}\n \\caption{Simplified code excerpt illustrating the setup necessary to set up the 2D-substrate heterostructure workflows using \\textit{get\\_heterostructures\\_stabilityWF}, used throughout this work. A full example Jupyter notebook is located in the SI.}\n \\vspace{10\\intextsep}\n \\label{fig:Figure2}\n \\end{wrapfigure}\n \n \n The workflow provides commands for two VASP executables, compiled with vdW-corrections, for performing DFT calculations on (1) 2D materials and (2) 3D materials. 
The first is a custom executable that relaxes 2D materials with a large vacuum, preventing the vacuum from shrinking by fixing the cell length along the vacuum direction. The second executable allows the cell volume to change in all directions. Other optional arguments used to initialize the workflow include dipole correction for substrate slabs, tags for database entries, and avenues to modify the INCAR of each firework in the workflow. The parameters \\textit{vis} and \\textit{vis\\_i}, where $i$ = 2d, 3d2d, bulk, trans, or iface, are used to override the default \\textit{VaspInputSet} with one provided by the user. This can be provided for all fireworks using \\textit{vis} or for a specific firework using \\textit{vis\\_i}. The parameters \\textit{uis} and \\textit{uis\\_i} can be set to change the default settings in the INCAR. The parameter \\textit{uis} will set the specified parameters for all INCARs in the workflow, while \\textit{uis\\_i} will set the INCAR parameters for the corresponding firework. Additional details regarding workflow customization options and current functionality available in \\textit{Hetero2d} are discussed in SI section 3 as well as in an example Jupyter notebook.\n \n \\subsection{Workflow Steps}\n As mentioned previously, our workflow has five firework steps. Here, we discuss the pre-processing steps that occur when initializing the workflow, each firework, and the firetasks composing each firework for the 2D-substrate heterostructure workflow introduced in section 3.2, \\textit{Workflow Framework}.\n \n \n The first firework, FW$_1$, in the workflow optimizes the 2D material structure. During initialization of the workflow, the 2D material is centered within the simulation cell, crystallographic information regarding the structure is obtained, the \\textit{CMDLInterfaceSet} is initialized to create VASP input files, and a list of user-defined\/default tags is created for the 2D material. 
The structure, tags, and \\textit{CMDLInterfaceSet} are used to initialize the firework \\textit{HeteroOptimizeFW} that performs the structure optimization. The default tags appended to the firework are the unique identification tags (provided to the workflow by the user), the crystallographic information, workflow and firework name, and the structure's composition. In FW$_1$, \\textit{HeteroOptimizeFW} executes firetasks that -- (a) create directories for the firework, (b) write all input files initialized using \\textit{CMDLInterfaceSet}, (c) submit the VASP calculation to supercomputing resources to perform full structure optimization and monitor the calculation to correct errors, (d) run our \\textit{HeteroAnalysisToDb} class to store all information necessary for data analysis within the database, and (e) lastly pass the information to the next firework. Details regarding \\textit{HeteroAnalysisToDb} can be found in the next section.\n \n \n Similar to FW$_1$, FW$_2$ and FW$_3$ perform a full structural optimization for the bulk counterpart of the 2D material and the substrate, respectively. FW$_2$ and FW$_3$ differ from FW$_1$ only in the pre-processing steps. The step to center the 2D material is not performed, however, the conventional standard structure is utilized during the pre-processing for FW$_3$.\n \n \n FW$_3$ spawns a child firework passing the optimized substrate structure to FW$_4$ which transforms the conventional unit cell of the substrate into a substrate slab using the \\textit{slab\\_params} dictionary and performs the structure optimization. When the workflow is initialized, FW$_4$ undergoes similar pre-processing steps that are used to initialize the firework \\textit{SubstrateSlabFW} that creates a substrate slab from the substrate. 
\\textit{SubstrateSlabFW} is the firework that transforms the conventional unit cell of the substrate into a slab, sets the selective dynamics tags on the surface layers, and sets the number of compute nodes necessary to relax the substrate slab. The \\textit{slab\\_params} variable is the input dictionary that initializes \\textit{pymatgen}'s \\textit{SlabTransformation} module that creates the substrate slab. All required and optional input arguments used in the \\textit{SlabTransformation} module must be supplied using this dictionary (key: value) format. This dictionary format is implemented to enable $Hetero2d$ to be flexible and extendable in future updates. Additionally, the \\textit{slab\\_params} dictionary is only required when creating a new substrate slab from a substrate. \n \n \n After the first four fireworks have been completed and successfully stored in the database, the fifth firework (FW$_5$) obtains the optimized structures and information from previous fireworks and the specification file. FW$_5$ calls the \\textit{GenHeteroStructuresFW} firework to generate the 2D-substrate heterostructure configurations using \\textit{h\\_params} and spawns a firework to perform structure optimization for each configuration. The input required for the \\textit{h\\_params} dictionary are those that are required by $Hetero2d$'s \\textit{hetero\\_interfaces} function. This function attempts to find a matching lattice between the substrate surface and the 2D material. The parameters used to initialize \\textit{hetero\\_interfaces} are listed in the \\textit{h\\_params} dictionary shown in Figure \\ref{fig:Figure2} and the jupyter notebook in the SI. \n \n \n Our function \\textit{hetero\\_interfaces} generates the 2D-substrate heterostructure configurations utilizing \\textit{MPInterfaces}'s interface matching algorithm. We developed \\textit{hetero\\_interfaces} to ensure functions within the workflow are compatible with \\textit{FireWorks}. 
Additionally, we can return key variables regarding the interface matching algorithm, such as the strain or angle mismatch, and store these values in our database. \\textit{MPInterfaces} is used to (a) generate heterostructures within an allowed lattice-mismatch and surface area of the supercell at any rotation between the 2D material and bulk material surface and (b) create distinct configurations in which the 2D material can be placed on the bulk material surface based on the Wyckoff positions of the near-interface atoms.\n \n \n FW$_5$ calls \\textit{GenHeteroStructuresFW}, which generates the 2D-substrate heterostructure configurations; the total number of configurations is computed, and each unique configuration is labeled from 0 to $n$-1, where $n$ is the total number of configurations, and stored under the \\textit{Interface Config} tag. For each configuration, a new firework is spawned to optimize the corresponding 2D-substrate heterostructure. The data generated within FW$_5$ are stored in the database.\n \n \n After all previous FWs have successfully converged, \\textit{HeteroAnalysisToDb} is called one final time to compute the $\\Delta E_{\\mathrm{vac}}^f$, $\\Delta E_{\\mathrm{b}}$, and $\\Delta E_{\\mathrm{ads}}^f$\\ for each heterostructure configuration generated by the workflow. The calculation of the $\\Delta E_{\\mathrm{vac}}^f$\\ references the simulations for the 2D material and its bulk counterpart. The bulk counterpart is simulated using a standard periodic simulation cell. The calculation of $\\Delta E_{\\mathrm{b}}$\\ references the 2D material, substrate slab, and 2D-substrate heterostructure simulations, which all employ a standard supercell slab model. The calculation of the $\\Delta E_{\\mathrm{ads}}^f$\\ references both $\\Delta E_{\\mathrm{b}}$\\ and $\\Delta E_{\\mathrm{vac}}^f$. 
Once each value is computed, all the information is curated and stored in the MongoDB database.\n\n \\subsection{Post-Processing Throughout Our Workflow} \n \n After each VASP simulation is complete, post-processing is performed within the calculation directory using our \\textit{HeteroAnalysisToDb} class, an adaptation of \\textit{atomate}'s \\textit{VaspToDb} module. It is used to parse the calculation directory, perform error checks, and curate a wide range of quantities from the calculation inputs and outputs, energetic parameters, and structural information for storage in our MongoDB. \\textit{HeteroAnalysisToDb} detects the type of calculation performed within the workflow and parses the calculation accordingly. \\textit{HeteroAnalysisToDb} has the same functionality as \\textit{VaspToDb} with additional analyzers developed for 2D-substrate heterostructures that -- (a) identify layer-by-layer interface atom IDs for the substrate and 2D material, (b) store the initial and final configuration of all structures, (c) compute the $\\Delta E_{\\mathrm{vac}}^f$, $\\Delta E_{\\mathrm{b}}$, and $\\Delta E_{\\mathrm{ads}}^f$, (d) store the results obtained from the interface matching, and (e) ensure each database entry has any custom tags added to the database such as those appended by the user. The workflow design ensures that the DFT simulations for each 2D-substrate surface pair are performed independently of each other; as soon as all simulations for a given 2D-substrate surface pair are completed, the data are analyzed and curated in the MongoDB database.\n\n\\section{An Example of Substrate Screening via Hetero2d}\n \\subsection{Materials Selection}\n \n To demonstrate the functionalities of the $Hetero2d$ package, we screened for suitable substrates for four 2D materials, namely $2H$-MoS$_2$, $1T$-NbO$_2$, $2H$-NbO$_2$~\\cite{c2db}, and hexagonal-ZnTe~\\cite{Torrisi2020}. 
The four 2D materials in consideration possess hexagonal symmetry as illustrated in Figure \\ref{fig:2ds}. \n \n MoS$_2$ was selected because there is a large amount of experimental and computational~\\cite{Chen2013, Zhuang2013b, Yun2012, singh2015al2o3} data available in the literature which we can use to validate the computed properties from our $Hetero2d$ workflow. The hexagonal-ZnTe~\\cite{Torrisi2020}, $1T$-NbO$_2$, and $2H$-NbO$_2$~\\cite{c2db} are yet to be synthesized. In addition, these particular 2D materials have diverse predicted properties; see Table \\ref{tab:2dProp}. It is noteworthy that hexagonal-ZnTe has been predicted to be an excellent CO$_2$ reduction photocatalyst~\\cite{Torrisi2020}. \n \n \n \\begin{table}[!htbp]\n \\centering\n \\caption{The electronic properties and band gap of the four selected 2D materials used in this work. FM represents ferromagnetic.}\n \\begin{adjustbox}{width=\\textwidth}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n 2D Mat. & MoS$_2$ & $1T$-NbO$_2$ & $2H$-NbO$_2$ & ZnTe \\\\\n \\hline\n Classification & Semiconductor & FM~\\cite{c2db} & FM~\\cite{c2db} & Semiconductor\\\\\n \\hline\n Band Gap (eV) & 1.88~\\cite{Gusakova2017} & 0.0~\\cite{c2db} & 0.0~\\cite{c2db} & 2.88~\\cite{Torrisi2020} \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\label{tab:2dProp}\n \\end{table}\n \n \\begin{table}[!htbp]\n \\centering\n \\caption{A list of matching substrate surfaces for the four 2D materials given our heterostructure search criteria discussed in the next section.}\n \\begin{adjustbox}{width=\\textwidth}\n \\begin{tabular}{|l|c|c|}\n \\hline\n 2D Mat. 
& (111) Substrate & (110) Substrate \\\\\n \\hline\n MoS$_2$ & Hf, Ir, Pd, Zr, Re, Rh & Ta, Rh, Sc, Pb, W, Y \\\\\n \\hline\n $1T$-NbO$_2$ & Ni, Mn, V, Nd, Pd, Ir, Hf, Zr, Cu & Rh, Ta, Sc, W \\\\\n \\hline\n $2H$-NbO$_2$ & Ni, Mn, Nd, Ir, Hf, Al, Te, Ag, Ti, Cu, Au & Ta, Sc, W, Y, Rh \\\\\n \\hline\n ZnTe & Sr, Ni, Mn, V, Al, Ti, Cu & W\\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\label{tab:iface}\n \\end{table}\n \n \n The properties of a 2D material can differ when placed on different Miller-index planes of the same substrate. Thus, we investigated all unique low-index substrate surfaces (with $h$, $k$, $l$ equal to 1 or 0) for these 2D materials. A material available in the Materials Project (MP)~\\cite{Ong2013} database was considered a potential substrate if it satisfied all of the following criteria: (a) it is metallic, (b) it is a cubic phase, (c) it has a single-element composition, (d) it has a valid ICSD ID~\\cite{ICSD} (and has thus been experimentally synthesized), and (e) it has an $E_{above\\ hull}<0.1$ eV\/atom. A total of 50 substrates satisfy these criteria when queried from the MP database. \n \n \n \\begin{wrapfigure}[19]{r}{0.5\\textwidth}\n \\centering\n \\vspace{-1.25\\intextsep}\n \\includegraphics[width=0.5\\textwidth]{img\/StructureModels.pdf}\n \\vspace{-2\\columnsep}\n \\caption{Structure models illustrating the crystal structures of the 2D films. The top view demonstrates the hexagonal symmetry of each 2D material. The $1T$ and $2H$ phases of NbO$_2$ are labeled to clarify the two phases.}\n \\label{fig:2ds}\n \\end{wrapfigure} \n \n The bulk counterpart of each 2D material is also obtained from the MP database. We query the database for bulk materials that have the same composition as the 2D material and select the structure with the lowest $E_{above\\ hull}$. SI Tables 1--3 provide additional reference information regarding all the optimized substrate slabs, 2D materials, and their bulk counterparts. 
SI Table 1 contains information about the Materials Project material\\_id, $E_{above\\ hull}$, ICSD ID, crystal system, and Miller plane for the substrate surface. SI Table 2 contains information about the reference database ID, $\\Delta E^{f}_{vac}$ (eV\/atom), and crystal system for each 2D material, while SI Table 3 contains information about the reference database ID, $E_{above\\ hull}$, E$_{gap}$, and the crystal system for the bulk counterpart of the 2D material. \n \n \\subsection{Symmetry-Matched, Lattice-Matched 2D-Substrate Heterostructures}\n \n In this study, we focus our search for 2D-substrate heterostructures on substrate planes with indices $h$, $k$, $l$ equal to 0 or 1. The following studies focus on the heterostructures with the (111) and (110) substrate surfaces because we find that only these two Miller planes yield an appreciable number of heterostructures. The (001) substrate plane resulted in only one heterostructure. \n \n Restricting our search for 2D-substrate matches to only the (111) and (110) yields a total of 4 (\\# of 2D materials) $\\times$ 2 (\\# of planes) $\\times$ 50 (\\# of substrates) = 400 potential 2D-substrate heterostructure combinations. As illustrated in Figure \\ref{fig:Workflow}, after introducing our constraints for the surface area to be $< 130$ \\AA$^2$ and the applied strain on the 2D material to be $< 5\\%$, a total of 49 2D-substrate heterostructure workflows are found. Table \\ref{tab:iface} lists all metallic substrates matching each of the 2D materials given our heterostructure criteria.\n \n \n \\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{img\/WorkflowDataPipeline.pdf}\n \\caption{Schematic representing the materials selection process identifying stable 2D-substrate heterostructures using the $Hetero2d$ workflow. Tier 1 represents choosing 2D materials, substrates, and their surfaces. Tier 2 applies constraints on the surface area and lattice strain. 
Tier 3 shows the energetic stability of the heterostructures stored in the database.}\n \\vspace{-0.25\\intextsep}\n \\label{fig:Workflow}\n \\end{figure}\n \n \n Of the total 49 workflows, 33 workflows correspond to the (111) substrate surfaces, and 16 workflows correspond to the (110) substrate surfaces. Generally, the (111) surface has more substrate matches than the (110) surface due to the intrinsic hexagonal symmetry of the (111) surface that matches the hexagonal symmetry of the selected 2D materials. Each workflow generates between 2--4 2D-substrate heterostructure configurations for a given 2D-substrate surface pair, resulting in a total of 123 2D-substrate heterostructure configurations. Of those 2D-substrate heterostructures, 78 configurations, spanning a total of 29 workflows, stabilize the meta-stable 2D materials when placed upon the substrate slab. Additional details regarding these simulations can be found in section 4 of the SI.\n \n \\subsection{Stability of Free-Standing 2D Films and Adsorbed 2D-Substrate Heterostructures}\n \n \\begin{wrapfigure}[13]{r}{0.56\\textwidth}\n \\vspace{-1\\intextsep}\n \\hspace*{-0.75\\columnsep}\\includegraphics[width=0.56\\textwidth]{img\/FormationEnergy.pdf}\n \\vspace{-0.1\\intextsep}\n \\caption{The $\\Delta E_{\\mathrm{vac}}^f$\\ for 2D MoS$_2$ (\\tikzcircle[gray, fill=orange]{2pt}), \n $1T$-NbO$_2$ (\\tikzcircle[gray, fill=red]{2pt}), \n $2H$-NbO$_2$ (\\tikzcircle[gray, fill=green]{2pt}), and ZnTe (\\tikzcircle[gray, fill=blue]{2pt}). The $\\Delta E_{\\mathrm{vac}}^f$\\ is used to assess the thermodynamic stability of the free-standing 2D film with respect to its bulk counterpart. 
MoS$_2$ and ZnTe have relatively low $\\Delta E_{\\mathrm{vac}}^f$\\ while the $1T$ and $2H$ phase of NbO$_2$ have high $\\Delta E_{\\mathrm{vac}}^f$.}\n \\vspace{10\\intextsep}\n \\label{fig:form}\n \\end{wrapfigure}\n \n Figure \\ref{fig:form} shows the $\\Delta E_{\\mathrm{vac}}^f$\\ of the isolated unstrained 2D materials with respect to their bulk counterparts. We find the $\\Delta E_{\\mathrm{vac}}^f$\\ for both MoS$_2$ and ZnTe are low, less than 0.2 eV\/atom. Both the $1T$ and $2H$ phase for NbO$_2$ possess high $\\Delta E_{\\mathrm{vac}}^f$, as shown by the red shaded region in Figure \\ref{fig:form}, making substrate-assisted synthesis methods the most feasible route to synthesize these 2D films. The $\\Delta E_{\\mathrm{vac}}^f$\\ values in Figure \\ref{fig:form} are consistent with prior computational~\\cite{c2db, Torrisi2020} and experimental work~\\cite{Lee2013}.\n \n \n Figures \\ref{fig:Eads}a and \\ref{fig:Eads}b show the $\\Delta E_{\\mathrm{ads}}^f$\\ for the four 2D materials on the (110) and (111) substrate surfaces, respectively. The black lines in Figure \\ref{fig:Eads} separate the 2D materials, while the shaded regions indicate stabilization of the 2D material on the substrate surface. When generating 2D-substrate heterostructures, the first challenge is finding a matching lattice between the 2D material and substrate surface. The next challenge is identifying \"ideal\" or likely locations to place the 2D material on the substrate surface to generate stable low-energy heterostructures. To reduce the large number of in-plane shifts possible for a given 2D-substrate heterostructure, we selectively placed the 2D material on the substrate slab by enumerating combinations of high-symmetry points (Wyckoff sites) between the 2D material and substrate slab, stacking the 2D material on top of these sites $z$ \\AA\\ above the substrate surface. 
Each unique 2D-substrate heterostructure configuration is represented by 0=$\\triangle$, 1=\\textbf{x}, 2=$\\circ$, and 3=$\\square$ in Figure \\ref{fig:Eads}.\n \n\t \n \t\\begin{figure}[t!]\n \t \\centering\n \\vspace{-1\\intextsep}\n \t \\includegraphics[width=\\textwidth]{img\/AdsEnergy.pdf}\n \\vspace{-2\\intextsep}\n \t \\caption{Adsorption formation energy, $\\Delta E_{\\mathrm{ads}}^f$, for the symmetry-matched, low lattice-mismatched (a) (110) and (b) (111) substrate surfaces. The rectangular symmetry of the (110) surface results in fewer matches while the hexagonal symmetry of the (111) substrate surface results in numerous matches within the given constraints on the surface area and lattice strain. Negative $\\Delta E_{\\mathrm{ads}}^f$\\ values indicate stabilization of the 2D material. Each set of symbols (up to 4 points per substrate) represents the unique 2D-substrate configurations. }\n \\vspace{-1\\intextsep} \n \t \\label{fig:Eads}\n \t\\end{figure}\n \n \n The $\\Delta E_{\\mathrm{ads}}^f$\\ on the (110) surface is shown in Figure \\ref{fig:Eads}a. In the figure, 9 substrates stabilize the $\\Delta E_{\\mathrm{ads}}^f$\\ of the 2D materials. The $\\Delta E_{\\mathrm{ads}}^f$\\ appears to be correlated with the substrate where the 2D material is placed, however, there are not enough data points in Figure \\ref{fig:Eads}a to distinguish the origin of this trend. Interestingly, when MoS$_2$ is placed on the (110) Ta substrate surface, the 2D material buckles which likely increases the $\\Delta E_{\\mathrm{ads}}^f$\\ significantly above the other substrates. SI Figure 6 shows both configurations for MoS$_2$ on the (110) Ta substrate surface. There are an additional 5 2D-(110) substrate pairs that were studied but are not shown in Figure \\ref{fig:Eads}a because the 2D materials\/substrate interface becomes highly distorted\/completely disintegrated. These cases are shown in SI Figure 4a and discussed in section 5 of the SI. 
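The screening logic described above can be sketched as a simple two-stage filter. This is an illustrative toy, not the $Hetero2d$ code path: the candidate records and their numbers are hypothetical, whereas the real workflow queries these quantities from the MongoDB database.

```python
# Toy sketch of the screening logic (hypothetical data, not real results):
# keep lattice matches within the surface-area and strain constraints, then
# flag stabilized configurations by a negative adsorption formation energy.
candidates = [
    # (2D material, substrate, plane, area (A^2), max |strain| (%), dE_ads (eV/atom))
    ("MoS2",     "Hf", "(111)",  85.0, 2.1, -0.15),
    ("MoS2",     "Ta", "(110)", 120.0, 4.6,  0.08),
    ("ZnTe",     "W",  "(110)", 140.0, 3.0, -0.05),  # rejected: area >= 130 A^2
    ("1T-NbO2",  "Ir", "(111)",  95.0, 6.2, -0.30),  # rejected: strain >= 5%
]

matched = [c for c in candidates if c[3] < 130.0 and c[4] < 5.0]
stabilized = [c for c in matched if c[5] < 0]
print(len(matched), "matched,", len(stabilized), "stabilized")
```

With this toy data, two candidates pass the geometric constraints and one of those is stabilized; in the paper the same two-stage reduction takes 400 combinations down to 49 workflows and then to the energetically stabilized configurations.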
\n \n \n The (111) substrate surface matches for each 2D material are shown in Figure \\ref{fig:Eads}b, where 15 substrates result in a $\\Delta E_{\\mathrm{ads}}^f$\\ $<$ 0. An additional 8 2D-substrate pairs, shown in SI Figure 4b, have 2D material\/substrate surfaces that are disintegrated and are discussed in section 5 of the SI.\n \n \n A correlation between the substrate surface and the $\\Delta E_{\\mathrm{ads}}^f$\\ is more apparent for the (111) surface in Figure \\ref{fig:Eads}b due to the increased number of 2D-substrate pairs. For MoS$_2$ on Zr and Hf, the triangle configurations have $\\Delta E_{\\mathrm{ads}}^f$\\ significantly lower than the other configurations; see SI Figure 6 for structures of the three configurations. The lower $\\Delta E_{\\mathrm{ads}}^f$\\ is correlated with smaller bond distances between the substrate surface and the 2D material. When the $\\Delta E_{\\mathrm{ads}}^f$\\ is lower for these structures, we find that the $2h$ Wyckoff site of the 2D material is stacked on top of the $2a$ Wyckoff site of the substrate surface. The location of a 2D material on a substrate surface has previously been shown to influence the type of bonding present between the 2D material and substrate surface~\\cite{Singh2014a,Zhuang2017}.\n \n The $1T$ phase of NbO$_2$ on Hf, Zr, and Ir substrates has a $\\Delta E_{\\mathrm{ads}}^f$\\ difference between configurations that is larger than for other 2D-substrate pairs. The differences in $\\Delta E_{\\mathrm{ads}}^f$\\ for $1T$-NbO$_2$ on Ir are partly due to some structural disorder of the 2D material from the O atoms bonding strongly with the substrate surface, shown in SI Figure 7. For both Hf and Zr, the differences in $\\Delta E_{\\mathrm{ads}}^f$\\ do not arise from structural disorder. 
The $\\Delta E_{\\mathrm{ads}}^f$\\ of $1T$-NbO$_2$ on Hf and Zr is more strongly affected by the location of the 2D material on the substrate surface.\n \n $2H$-NbO$_2$ has two substrate surfaces, Ti and Au, where the $\\Delta E_{\\mathrm{ads}}^f$\\ varies strongly with the configuration of 2D material on the substrate, unlike other 2D-substrate pairs for $2H$-NbO$_2$. $2H$-NbO$_2$ on Ti and Au have no structural distortions that explain the difference in $\\Delta E_{\\mathrm{ads}}^f$. For $2H$-NbO$_2$ on Ti, each configuration possesses a different $\\Delta E_{\\mathrm{ads}}^f$\\ arising from the unique placement of the 2D material on the substrate surface for each configuration. The strong bonding between the 2D material and substrate surface may be due to the affinity for Ti to form a metal oxide. SI Figure 8 shows each configuration for $2H$-NbO$_2$ on the (111) Ti substrate surface. For $2H$-NbO$_2$ on Au, the circle configuration has a lower $\\Delta E_{\\mathrm{ads}}^f$\\ due to the bottom layer of the $2H$-NbO$_2$ being stacked directly on the top layer of the Au substrate surface. \n \n \n The properties of MoS$_2$ have been studied both computationally and experimentally, and previous computational works~\\cite{Zhuang2013b, Singh2015} have found similar values for the $\\Delta E_{\\mathrm{vac}}^f$\\ of MoS$_2$. Chen et al. found that Ir bonds more strongly with the substrate surface than Pd~\\cite{Chen2013}. This may explain the small structural modulations observed in our study for MoS$_2$ on the Ir (111) substrate surface, while no such modulation is observed for MoS$_2$ on the Pd (111) substrate surface. Additionally, the $z$-separation distance between the 2D material and substrate surface found in this work agrees well with Chen et al.'s values despite using a different functional. Our $z$-separation distances are within 0.05 \\AA\\ for Ir and 0.16 \\AA\\ for Pd~\\cite{Chen2013}. 
\n\n \\subsection{Separation Distance of Adsorbed 2D Films on Substrate Slab Surfaces}\n\t \n The change in the thickness of the adsorbed 2D material may provide insight into the nature of bonding between the 2D-substrate heterostructures. For instance, vdW bonds are weak and thus typically result in minimal structural and electronic changes in the 2D material. Using our database, we determine the change in the thickness of post-adsorbed 2D materials from that of the free-standing 2D material. The thickness of the free-standing\/adsorbed 2D material is computed by first finding the average $z$ coordinate of the top and bottom layer of the 2D material given by $\\bar{d}_z = \\sum\\limits_{i=1}^n d^{top}_{i,z}\/n - \\sum\\limits_{i=1}^m d^{bottom}_{i,z}\/m$ where $d_{i,z}$ is the $z$ coordinate of the $i^{th}$ atom summed up to $n$ and $m$, the total number of atoms in the top and bottom layers, respectively. The change in thickness is then obtained by taking the difference between the average thickness of the adsorbed 2D material and that of the free-standing 2D material, $\\delta d$=$\\bar{d}^{adsorbed}_z-\\bar{d}^{free}_z$, with positive (negative) values corresponding to an increase (decrease) in the thickness of the adsorbed 2D material. \n \n Figure \\ref{fig:Zdiff} illustrates the change in thickness from the free-standing 2D material to the adsorbed 2D material for each 2D-substrate heterostructure. For vdW type bonding, each atom should show minimal deviations from the free-standing 2D film due to the weak interaction between the adsorbed 2D material and the substrate surface. Figure \\ref{fig:Zdiff} shows that many of the 2D-substrate pairs have a significant change in the thickness of the 2D material, which may indicate more covalent\/ionic type bonding. 
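The thickness measure defined above is simple enough to sketch in a few lines of Python; the layer coordinates below are hypothetical illustration values (an idealized S--Mo--S film), not entries from our database.

```python
# Sketch of the thickness measure defined in the text:
#   d_bar_z = mean(top-layer z) - mean(bottom-layer z)
#   delta_d = d_bar_z(adsorbed) - d_bar_z(free-standing)
# All coordinates below are hypothetical, in Angstrom.

def mean_thickness(top_z, bottom_z):
    """Average z-extent of a 2D film from its top/bottom layer coordinates."""
    return sum(top_z) / len(top_z) - sum(bottom_z) / len(bottom_z)

def thickness_change(ads_top, ads_bot, free_top, free_bot):
    """delta_d > 0 means the film thickened upon adsorption."""
    return mean_thickness(ads_top, ads_bot) - mean_thickness(free_top, free_bot)

# Free-standing film: flat S layers separated by 3.13 A
free_top, free_bot = [3.13, 3.13, 3.13], [0.0, 0.0, 0.0]
# Adsorbed film: slightly rumpled and thickened
ads_top, ads_bot = [3.30, 3.24, 3.27], [0.05, -0.02, 0.0]

delta_d = thickness_change(ads_top, ads_bot, free_top, free_bot)
```

For this illustrative data the film thickens by 0.13 \AA\ upon adsorption, which would appear as a positive entry in the violin plots of Figure \\ref{fig:Zdiff}.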
The change in the thickness of the 2D material for the majority of the MoS$_2$-substrate configurations is minimal ($\\textless$0.1 \\AA), which may indicate weak interactions between the 2D material and substrate surface. Figure \\ref{fig:Zdiff} indicates that for the majority of the adsorbed 2D materials, the substrates tend to induce an increase in the thickness of the adsorbed 2D material.\n \n \\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.50\\textwidth]{img\/2dThickness.pdf}\n \\caption{Each 2D material is separated spatially along the $x$-axis using a violin plot. The change in the 2D material's thickness, $\\delta d$, for all substrates is plotted along the $y$-axis. A positive $y$-value indicates the 2D material's thickness has increased during adsorption onto the substrate slab. The width of the violin plot is non-quantitative, as the density curve is scaled by the number of counts per violin; however, within one violin plot, the relative $x$-width does represent the frequency that a 2D material's thickness changes by $y$ amount relative to the total number of data points in the plot.}\n \\label{fig:Zdiff}\n \\end{figure}\n\n \\subsection{Charge Layer Doping of Adsorbed 2D Films}\n The $Hetero2d$ workflow package has a similar infrastructure to \\textit{atomate}, which allows our package to integrate seamlessly with the workflows developed within \\textit{atomate}. These workflows enable us to expand our database by performing additional calculations, such as Bader~\\cite{Tang2009,Henkelman2006} charge analysis and high-quality density of states (DOS) calculations, to assess the charge transfer that occurs between the adsorbed 2D material and the substrate surface, changes in the DOS between the adsorbed and pristine 2D material, and changes in the charged state of the 2D-substrate pairs. 
\n \n \n \\begin{table}[h!]\n \\centering\n \\caption{Q$_x$ is obtained with Bader analysis and represents the average number of electrons transferred to\/from (positive\/negative) specific atomic layers, with the initial number of electrons taken from the POTCAR. The first four columns are the electrons transferred to\/from -- the Hf substrate atoms, Q$_{sub}$, the bottom layer of S atoms, Q$_{S_b}$, the Mo atoms, Q$_{Mo}$, and the top layer of S atoms, Q$_{S_t}$ -- for the adsorbed 2D-substrate heterostructure. The last three columns denote the charge transfer in the pristine MoS$_2$ structure. MoS$_2$ has an increased charge accumulation on the bottom layer of the 2D material from the substrate slab.}\n \\begin{adjustbox}{width=3in}\n \\begin{tabular}{|c|c|c|c|c|c|c|c|}\n \\hline\n electrons & Q$_{sub}$ & Q$_{S_b}$ & Q$_{Mo}$ & Q$_{S_t}$ & Q$^{prist}_{S_b}$ & Q$^{prist}_{Mo}$ & Q$^{prist}_{S_t}$ \\\\\n \\hline\n Q$_x$ & -0.11 & 1.10 & -1.03 & 0.57 & 0.60 & -1.20 & 0.60 \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\label{tab:bader}\n \\end{table}\n \n \\begin{figure}[hb!]\n \\centering\n \\includegraphics[width=\\textwidth]{img\/BaderCharges_DOS.pdf}\n \\caption{(a) The element projected density of states (DOS) where red and blue lines correspond to S and Mo states, respectively, for the isolated strained 2D material (dashed lines), the adsorbed 2D material (solid lines), and the pristine MoS$_2$ material (dashed-dotted lines). The Hf (111) substrate influences the DOS for MoS$_2$ causing a semiconductor to metal transition. (b) The $z$ plane-averaged electron density difference ($\\Delta\\rho$) for MoS$_2$ on Hf. The electron density difference is computed by summing the charge densities of the isolated MoS$_2$ and isolated Hf and then subtracting that from the charge density of the interacting MoS$_2$ on Hf system. The charge densities were computed with fixed geometries. 
The red and blue colors indicate electron accumulation and depletion in the combined MoS$_2$ on Hf system, respectively, compared to the isolated MoS$_2$ and isolated Hf slab. (c) The charge density distribution for MoS$_2$ on the (111) Hf substrate. The cross section is taken along the (110) plane passing through Mo, S, and Hf atoms. The charge density is in units of electrons\/\\AA$^3$.}\n \\label{fig:DosChg}\n \\end{figure}\n Most 2D materials are desirable due to their unique electronic properties. We selected MoS$_2$ on the Hf (111) surface to demonstrate the capability of \\textit{Hetero2d} in providing detailed electronic and structural information. Our Bader analysis illustrated in Table \\ref{tab:bader} shows that there is charge transfer from the substrate to the bottom layer of the 2D material, which is consistent with the findings presented by Zhuang et al.~\\cite{Zhuang2017}. In Figure \\ref{fig:DosChg}a, the DOS for the isolated un-strained, isolated strained, and adsorbed MoS$_2$ is shown, where the black dashed line represents the Fermi level. There is a small shift in the DOS when comparing the un-strained and strained DOS for MoS$_2$. Comparing the DOS of the adsorbed MoS$_2$ to that of the isolated cases, however, reveals a significant change. We can see that the substrate influences the DOS of MoS$_2$ when placed on the Hf (111) surface, causing a semiconductor to metal transition of the MoS$_2$. This is consistent with the Bader analysis, which indicates that electron doping of the MoS$_2$ material occurs. Figure \\ref{fig:DosChg}b shows the redistribution of charge due to the interaction of the 2D material and substrate surface, where red and blue regions indicate charge accumulation (gaining electrons) and depletion (losing electrons) of the combined system. 
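The plane-averaged profile of Figure \\ref{fig:DosChg}b follows a simple recipe on a real-space grid; the Python sketch below uses synthetic random arrays as stand-ins for the three DFT charge densities (it is not the actual post-processing code of \\textit{Hetero2d}).

```python
import numpy as np

# Synthetic stand-ins for charge densities on an (nx, ny, nz) real-space grid,
# all evaluated at the same fixed geometry (as described in the caption).
rng = np.random.default_rng(0)
shape = (8, 8, 24)
rho_combined = rng.random(shape)   # interacting MoS2-on-Hf system
rho_2d = rng.random(shape)         # isolated MoS2
rho_sub = rng.random(shape)        # isolated Hf slab

# Difference density: positive = accumulation, negative = depletion
delta_rho = rho_combined - (rho_2d + rho_sub)

# Average over the in-plane (x, y) directions to obtain a profile along z
delta_rho_z = delta_rho.mean(axis=(0, 1))
```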
The charge density difference is computed by subtracting the sum of the charge densities of the isolated MoS$_2$ and the isolated Hf substrate slab from that of the combined MoS$_2$ on Hf system. Figure \\ref{fig:DosChg}c shows the charge density of the combined MoS$_2$ on Hf system along the (110) plane. Thus, the electronic properties of MoS$_2$ are dramatically affected by the substrate. \\textit{Hetero2d} can analyze such substrate-induced changes in the electronic structure of 2D materials. This will lead to a fundamental understanding and engineering of complex interfaces.\n\n\\section{Conclusions} \n \n In summary, we have developed an open-source workflow package, $Hetero2d$, that automates the generation of 2D-substrate heterostructures, the creation of DFT input files, the submission and monitoring of computational jobs on supercomputing facilities, and the storage of relevant parameters alongside the post-processed results in a MongoDB database. Using the example of four candidate 2D materials and low-index planes of 50 potential substrates, we demonstrate that our open-source package can address the immense number of 2D material-substrate surface pairs to guide the experimental realization of novel 2D materials. Among the 123 configurations studied, we find that only 78 configurations (29 workflows) result in stable 2D-substrate heterostructures. We exemplify the use of $Hetero2d$ in examining the changes in thickness of the adsorbed 2D materials, the Bader charges, and the electronic density of states of the heterostructures to study the fundamental changes in the properties of the 2D material post-adsorption on the substrate. $Hetero2d$ is freely available on our GitHub website under the GNU license along with example Jupyter notebooks. \n \n\\section{Acknowledgements}\n The authors acknowledge start-up funds from Arizona State University and the National Science Foundation grant number DMR-1906030. 
This work used the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant number TG-DMR150006. The authors acknowledge Research Computing at Arizona State University for providing HPC resources that have contributed to the research results reported within this paper. This research also used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The authors acknowledge Akash Patel for his dedicated work maintaining our database and API. We thank Peter A. Crozier for valuable discussions and suggestions. \n\n\\section{Supporting Information}\n Supporting information provides additional descriptions, figures, and tables supporting the results described in the main text.\n\n\\section{Data Availability}\n The results reported in this article and the workflow package can be found on our GitHub website \\href{https:\/\/github.com\/cmdlab\/Hetero2d}{Hetero2d}. \n
The results for the computation with $\\alpha = 10^{-5},$ $\\beta = 10^{-3},$ and without additional box constraints, are also presented in Figure \\ref{fig::res3d}, with the discretization involving $35937$ degrees of freedom.\n\\begin{figure}[htb!]\n\\begin{center}\n \\setlength\\figureheight{0.33\\linewidth} \t\n\t\\subfloat[Computed control $\\rm u$]{\n \t\t\\includegraphics[width=0.33\\textwidth]{figures\/control3d.png}\n\t}\n\t\\subfloat[Computed state $\\rm y$]{\n\t\t\\includegraphics[width=0.33\\textwidth]{figures\/state3d.png}\n\t}\n\t\\subfloat[Desired state $\\rm y_d$]{\n\t\t\\includegraphics[width=0.33\\textwidth]{figures\/destate3D.png}\n\t}\n\\end{center}\n\n\t\\caption{Three-dimensional Poisson problem with partial observations: computed solutions for the control, state, and desired state.}\\label{fig::res3d}\n\\end{figure}\nTo illustrate the performance of the proposed preconditioner $\\mathcal{P}_\\Pi$ with respect to changes in the parameter regimes, in Table \\ref{tab::resultspoisson2} we provide results for a computation involving sparsity constraints applied to the control, as well as partial observation of the state, and set $\\rm u_a=-2$, $\\rm u_b=1.5.$ \nAgain, the results are very promising and a large degree of robustness is achieved.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\nIn this paper we address the challenge of solving large-scale problems arising from PDE-constrained\noptimization \\cite{book::hpuu09,book::IK08,book::FT2010}. Such formulations arise in a multitude of applications, ranging\nfrom the control of fluid flows \\cite{Hinze2000} to image processing contexts \\cite{de2013image}. \nThe particular question considered in this paper is how to efficiently handle sparsity-promoting cost\nterms within the objective function, as well as additional constraints imposed on the control variable and even the state variable. 
\nIn fact, seeking optimal control functions that are both contained within a range of function\nvalues, and zero on large parts of the domain, has become extremely relevant in practical applications \\cite{Sta09}.\n\nIn detail, we commence by studying the problem of finding $(\\rm y,\\rm u) \\in H^1(\\Omega) \\times L^2(\\Omega)$\nsuch that the functional\n\\begin{align}\n\\ \\mathcal{F}(\\rm y,\\rm u)&=\\frac{1}{2}\\|\\rm y-\\rm y_d\\|^ 2_{L^2(\\Omega)}+ \\frac{\\alpha}{2}\\|\\rm u\\|^ 2_{L^2(\\Omega)} + \n\\beta\\|u\\|_{L^1(\\Omega)} \\label{pb}\n\\end{align}\nis minimized subject to the PDE constraint\n\\begin{align}\n-\\Delta \\rm y &= \\rm u + \\rm f ~~\\mbox{ in } \\Omega, \\label{eq:lap} \\\\ \n\\rm y &= \\rm g \\hspace{2.3em}\\mbox{ on } \\Gamma,\n\\end{align}\nwhere we assume that the equation \\eqref{eq:lap} is understood in the weak sense \\cite{book::FT2010}. Here, $\\Omega\\subset\\mathbb{R}^2$ or $\\mathbb{R}^3$ denotes a spatial domain with boundary $\\Gamma$.\nAdditionally, we allow for box constraints on the control\n\\begin{equation}\\label{box}\n\\rm u_a \\le \\rm u \\le \\rm u_b \\quad\\mbox{ a.e. in } \\Omega,\n\\end{equation}\nand,\nfor the sake of generality, consider the possibility that there are also box constraints on the state\n\\begin{equation}\\label{boxs}\n\\rm y_a \\le \\rm y \\le \\rm y_b \\quad\\mbox{ a.e. in } \\Omega.\n\\end{equation}\nWe follow the convention of recent numerical studies (see \\cite{SongYu3,SongYu2,SongYu1,wwL1}, for instance) and investigate the case where the lower (upper) bounds of the box constraints are non-positive (non-negative). Here, the functions $\\rm y_d,f, g,\\rm u_a,u_b, y_a,y_b \\in \\ensuremath{L^{2}(\\Omega)}$ are provided in the problem statement, with $\\alpha,\\beta >0$ given problem-specific \\emph{regularization parameters}. 
The functions $\\rm y,\\rm y_d,\\rm u$ denote the state, the desired state, and the control, respectively.\nThe state $\\rm y$ and the control $\\rm u$ are then linked via a state equation (the PDE). In this work we examine a broad class of state equations, including Poisson's equation (\\ref{eq:lap}) as well as the convection--diffusion equation and the heat equation.\nFurthermore, we consider the case where the difference between state $\\rm y$ and desired state \n$\\rm y_d$ is only observed on a certain part of the domain,\ni.e. over $\\Omega_1\\subset\\Omega$, with the first quadratic term in (\\ref{pb}) \nthen having the form $\\frac{1}{2}\\|\\rm y-\\rm y_d\\|^ 2_{L^2(\\Omega_1)}$. We refer to this case\nas the ``partial observation'' case.\n\nThere are many difficulties associated with the problem (\\ref{pb})--(\\ref{boxs}), such as selecting a suitable discretization, and choosing an efficient approach for handling the box constraints and the sparsity term. In particular, the state constrained problem itself, not even including the $\\rm L^1$-norm term, leads to a problem formulation where the regularity of the Lagrange multiplier is reduced, see \\cite{Cas86} for details. Additionally, the simultaneous treatment of control and state constraints is a complex task.\nFor this, G\\\"unther and co-authors in \\cite{gunther2012posteriori} propose the use of Moreau--Yosida regularization in order to add the state constraints as a penalty to the objective function. Other approaches are based on a semismooth Newton method, see e.g. \\cite{HS10,pss17}.\nIn fact, the inclusion of control\/state constraints leads to a semismooth nonlinear formulation of the first-order optimality \nconditions \\cite{BIK99,HIK02,pst15}. Interestingly, the structure of the arising nonlinear system is preserved if the $\\rm L^1$-norm \npenalization is added \\cite{HS10,pss17,Sta09}. 
Therefore its solution also generally relies on semismooth Newton approaches, and\nan infinite dimensional formulation is commonly utilized to derive the first-order optimality system. \nStadler in \\cite{Sta09} was the first to study PDE-constrained optimization with the $\\rm L^1$ term included,\nutilizing a semismooth approach, and many contributions have been made to the study of these problems in recent years \n(cf. \\cite{HerOW15,HSW11_DS} among others).\nOur objective is to tackle the coupled problem of both box constraints combined with the sparsity-promoting term, using the Interior Point method.\n\nThe paper \\cite{pss17} provides a complete analysis of a globally convergent\nsemismooth Newton method proposed for the problem (\\ref{pb})--(\\ref{box}).\nTheoretical and practical aspects are investigated \nfor both the linear algebra phase and the convergence behavior of the nonlinear method.\nThe numerical experiments carried out revealed a drawback of the method, as it exhibited \npoor convergence behavior for limiting values of the regularization parameter $\\alpha$. \n\n\nThe aim of this paper is to propose a new framework for the solution of\n(\\ref{pb})--(\\ref{boxs}) for a wider class of state equations and boundary conditions\nand, at the same time, attempt to overcome the numerical limitations of the global semismooth approach.\n\nTo pursue this issue we utilize Interior Point methods (IPMs), which \nhave shown great applicability for nonlinear programming problems \\cite{NocW06,IPMWright}, \nand have also found effective use within the PDE-constrained optimization framework \\cite{PGIP17,ulbrich2009primal}.\nIn particular, IPMs for linear and (convex) quadratic programming problems display \nseveral features which make them particularly attractive for very large-scale optimization, see e.g. \nthe recent survey paper \\cite{gondzio12}. 
Their main advantages are undoubtedly \ntheir low-degree polynomial worst-case complexity, and \ntheir ability to deliver optimal solutions in an almost constant number of iterations which\ndepends very little, if at all, on the problem dimension.\nThis feature makes IPMs perfect candidates for huge-scale discretized PDE-constrained\noptimal control problems.\n\nRecently, in \\cite{PGIP17}, an Interior Point approach has been successfully applied to the solution\nof problem (\\ref{pb})--(\\ref{boxs}), with $\\beta=0$. In this case the discretization\nof the optimization problem leads to a convex quadratic programming problem,\nand IPMs may naturally be applied. Furthermore, the rich structure of the linear systems\narising in this framework allows one to design efficient and robust preconditioners, based on those originally developed for the Poisson control problem without box constraints \\cite{PW10}.\n\nIn this work we extend the approach proposed in \\cite{PGIP17} to the more difficult \nand general case with $\\beta > 0$, and apply it to a broad class of PDE-constrained optimal control problems.\nTo achieve this goal we utilize two key ingredients that will be described in detail\nin Section \\ref{sec::ipa}: an appropriate discretization of the\n$\\rm L^1$-norm that allows us to write the discretized problem in a matrix-vector form, and\na suitable smoothing of the arising vector $\\ell_1$-norm that yields a final quadratic programming\nform of the discretized problem. The first ingredient is based on the discretization described in \\cite{wwL1},\nand recently applied to problem (\\ref{pb})--(\\ref{box}) in \\cite{SongYu3,SongYu2,SongYu1},\nwhere block-coordinate like methods are then introduced.\nThe second ingredient has been widely used for solving the ubiquitous\n$\\rm L^1$-norm regularized quadratic problem as, for example, when computing \nsparse solutions in wavelet-based deconvolution problems and compressed sensing \\cite{GPSR}. 
\nOn the other hand, its use is completely new within the PDE-constrained optimization context.\nFinally, we propose new preconditioners for the sequence of saddle-point systems\ngenerated by the IPM, based on approximations of the $(1,1)$-block and the Schur complement.\nIn particular, the case where the $(1,1)$-block is singular is taken into account\nwhen examining the partial observation case.\nWe may then analyse the spectral properties of the preconditioned $(1,1)$-block and Schur complement, to guide us as to the effectiveness of our overall preconditioning strategies.\n\nWe structure the paper as follows. The discretization of the continuous problem is discussed \nin Section \\ref{sec::dis}, while an Interior Point scheme is\nintroduced in Section \\ref{sec::ipa} together with the description of the \nlinear algebra considerations. Hence, Section \\ref{sec::prec} is devoted \nto introducing preconditioning strategies to improve the convergence behavior of the linear iterative solver. \nWe highlight a ``matching approach'' that introduces robust approximations to the Schur complement of the linear system.\nAdditionally, we propose a preconditioning strategy for partial observations in \nSection \\ref{subsec::po}, and time-dependent problems in Section \\ref{subsec::td}.\nSection \\ref{exp} illustrates the performance of our scheme for a variety of different parameter regimes, \ndiscretization levels, and PDE constraints.\n\n\n\\subsection*{Notation}\nThe $\\rm L^1$-norm of a function $\\rm u$ is denoted by $\\|\\rm u\\|_{L^1}$,\nwhile the $\\ell_1$-norm of a vector $u$ is denoted by $\\| u\\|_1$. \nComponents of a vector $x$ are denoted by $x_j$, or by $x_{a,j}$\nfor a vector $x_a$. 
The matrix $I_n$ denotes the $n\\times n$ identity matrix,\nand $1_n$ is the column vector of ones of dimension $n$.\n\n\\section{Problem Discretization and Quadratic Programming Formulation}\n\\label{sec::dis}\nWe here apply a discretize-then-optimize approach to (\\ref{pb})--(\\ref{boxs}), and \nuse a finite element discretization that retains a favorable property of the vector $\\ell\n_1$-norm, specifically that it is separable with respect to the vector components.\nThis key step allows us to state the discretized problem as a convex quadratic program\nthat may be tackled using an IPM.\n\nLet $n$ denote the dimension of the discretized space, for both state and control variables. \nLet the matrix $L$ represent a discretization of the Laplacian \noperator (the \\textit{stiffness matrix}) when Poisson's equation is considered or, more generally, the discretization of a non-selfadjoint elliptic differential operator, \nand let the matrix $M$ be the finite element Gram matrix, or \\textit{mass matrix}.\nFinally, we denote by $y,u,y_d,f,u_a,u_b,y_a,y_b$ the discrete counterparts of the functions\n$\\rm y,u,y_d,f,u_a,u_b,y_a,y_b$, respectively.\n\nThe discretization without the additional sparsity term follows a standard Galerkin approach \\cite{HS10,RSW09,book::FT2010}.\nFor the discretization of the $\\rm L^1$ term, we here follow \\cite{SongYu3,SongYu2,SongYu1,wwL1}\nand apply the nodal quadrature rule:\n$$\\|{\\rm u}\\|_{{\\rm L}^1(\\Omega)} \\approx \\sum^n_{i=1} |u_i| \\int_{ \\Omega} \\phi_i(x)~{\\rm d}x,$$ \nwhere $\\{\\phi_i\\}$ are the finite element basis functions used\nand $u_i$ are the components of $u$. It is shown in \\cite{wwL1} that first-order convergence may be achieved using this approximation with piecewise linear discretizations of the control. 
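As a small numerical illustration of this quadrature rule (a sketch, assuming piecewise linear elements on a uniform 1D mesh, where the basis functions sum to one so that each integral $\\int_{ \\Omega} \\phi_i(x)~{\\rm d}x$ equals the $i$th row sum of the mass matrix; the mass matrix below is a hypothetical 1D example, not taken from our discretization):

```python
import numpy as np

# Hypothetical 1D P1 mass matrix on a uniform mesh of width h (3 nodes)
h = 0.25
M = (h / 6.0) * np.array([[2.0, 1.0, 0.0],
                          [1.0, 4.0, 1.0],
                          [0.0, 1.0, 2.0]])

# Since the P1 basis functions sum to one, int_Omega phi_i = i-th row sum of M
weights = M @ np.ones(3)

u = np.array([1.5, -2.0, 0.5])           # nodal coefficients of the control
l1_approx = np.sum(weights * np.abs(u))  # sum_i |u_i| * int_Omega phi_i
```

Collecting the weights on a diagonal matrix recovers exactly the weighted vector norm used for the discretized $\\rm L^1$ term.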
We define a lumped mass matrix $D$ as\n$$\nD := \\text{diag}\\left ( \\int_{ \\Omega} \\phi_i(x)~{\\rm d}x\\right )_{i=1}^{n},\n$$\nso that the discretized $\\rm L^1$-norm can be written in matrix-vector form as $\\|D u\\|_1$.\nAs a result, the overall finite element discretization of problem (\\ref{pb})--(\\ref{boxs}) may be stated as\n\\begin{equation}\n\\begin{array}{cl}\\label{pb_fe}\n\\displaystyle\\min_{y\\in \\IR^n,u\\in \\IR^{n}} & \\frac 1 2 (y-y_d)^TM (y-y_d) + \n \\frac{\\alpha}{2} u^TMu + \\beta \\|D u\\|_1\\\\\n\\mbox{ s.t. } & L y - Mu = f,\n\\end{array}\n\\end{equation}\nwhile additionally being in the presence of control constraints and state constraints: \n\\begin{equation}\\label{boxvector}\nu_a \\le u \\le u_b,\\quad\\quad y_a \\le y \\le y_b.\n\\end{equation}\nThe problems we consider will always have control constraints present, and will sometimes also involve state constraints.\n\nProblem (\\ref{pb_fe})--(\\ref{boxvector}) is a linearly constrained quadratic problem with bound \nconstraints on the state and control variables $(y,u)$, and with an additional nonsmooth weighted \n$\\ell_1$-norm term of the variable $u$. \nA possible approach to handle the nonsmoothness in the problem \nconsists of using smoothing techniques for the $\\ell_1$-norm term, see e.g. \\cite{GPSR,FG-pseudo16,FG-IPM14}.\nWe here consider a classical strategy proposed in \\cite{GPSR} that linearizes\nthe $\\ell_1$-norm by splitting the variable $u$ as follows.\nLet $w, v \\in \\IR^n$ be such that\n$$|u_i | = w_i + v_i, \\ \\ i = 1, \\dots, n,\n$$\nwhere $w_i = \\max(u_i,0)$ and $v_i = \\max(-u_i,0)$. 
Therefore\n$$\n\\|u\\|_1 = 1_n^Tw + 1_n^Tv, \n$$\nwith $w,v\\ge 0$.\nIn the weighted case, which we are interested in when approximating the discretized version of $\\|\\rm u\\|_{\\rm L^1(\\Omega)}$ by $\\|Du\\|_1$, we obtain \n$$\n\\|D u\\|_1 = 1_n^T Dw + 1_n^T Dv.\n$$\n\nBy using the relationship \n\\begin{equation}\\label{split}\n u = w - v,\n\\end{equation}\none may now rewrite problem (\\ref{pb_fe}) in terms of variables $(y,z)$, with \n$$z= \\begin{bmatrix}\n w \\\\\n v\n \\end{bmatrix}.\n $$\nNote that bounds for $u$\n$$u_a \\le u \\le u_b$$\nnow have to be replaced by the following bounds for $z$:\n$$z_a \\le z \\le z_b,$$\nwith\n\\begin{equation*\nz_a = \\left[\\begin{array}{c}\n\\max\\{u_a,0\\} \\\\ -\\min\\{u_b,0\\} \\\\\n\\end{array}\\right], \\qquad z_b = \\left[\\begin{array}{c}\n\\max\\{u_b, 0\\} \\\\ -\\min\\{u_a, 0\\} \\\\\n\\end{array}\\right].\n\\end{equation*}\nWe note that these bounds automatically satisfy the constraint $z\\ge 0$. Overall, we have the desired quadratic programming formulation:\n\\begin{equation}\n\\begin{array}{cl}\\label{pb_fe_lin}\n\\displaystyle\\min_{y\\in \\IR^n,z\\in \\IR^{2n}} & Q(y,z):= \\frac 1 2 (y-y_d)^TM (y-y_d) + \n \\frac{\\alpha}{2} z^T \\widetilde M z + \n \\beta\\, 1_{2n}^T \\bar D z \\\\\n\\mbox{ s.t. } & L y - \\bar M z = f, \\\\\n & z_a \\le z \\le z_b, \\\\\n & y_a \\le y \\le y_b,\n\\end{array}\n\\end{equation}\nwhere\n$$\n \\widetilde M = \\begin{bmatrix}\n M & -M \\\\ \n -M & M\n \\end{bmatrix}, \\quad\\quad \\bar D = \\begin{bmatrix}\n D & D \n \\end{bmatrix} ,\\quad\\quad\n \\bar M = \\begin{bmatrix}\n M & -M \n \\end{bmatrix}.\n$$\nIn the next section we derive an Interior Point scheme for the solution of the above problem. \nClearly once optimal values of variables $z$, and therefore of\n$w$ and $v$, are found, the control $u$ of the initial problem is retrieved by\n(\\ref{split}). 
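The splitting is easy to verify numerically; the following sketch uses illustrative data and a hypothetical diagonal of the lumped mass matrix $D$.

```python
import numpy as np

# Illustrative nodal control values and lumped-mass weights (hypothetical)
u = np.array([0.7, -1.2, 0.0, 2.5])
d = np.array([0.1, 0.2, 0.2, 0.1])   # diagonal entries of D

w = np.maximum(u, 0.0)    # positive part, w_i = max(u_i, 0)
v = np.maximum(-u, 0.0)   # negative part, v_i = max(-u_i, 0)

recovered_u = w - v                 # the relationship u = w - v
weighted_l1 = d @ w + d @ v         # 1^T D w + 1^T D v = ||D u||_1
```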
We observe that we gain smoothness in the problem at the expense of\nincreasing the number of variables by 50\\% within the problem statement. \n Fortunately, this increase will not\nhave a significant impact in the linear algebra solution phase of our method, as we only require additional sparse matrix-vector multiplications, and the storage of the additional control vectors.\n\n\n\\section{Interior Point Framework and Newton Equations}\n\\label{sec::ipa}\n\nThe three key steps to set up an IPM are the following. First, the\nbound constraints are ``eliminated'' by using a logarithmic barrier function.\nFor problem (\\ref{pb_fe_lin}), \nthe barrier function takes the form:\n\\begin{align*}\nL_{\\mu}(y,z,p) = Q(y,z) + p^T ( L y - \\bar M z - f)&{}- \\mu \\sum \\log(y_j - y_{a,j}) - \\mu \\sum \\log(y_{b,j} -y_j)\\\\\n &{}-\\mu \\sum \\log(z_j - z_{a,j}) - \\mu \\sum \\log(z_{b,j} -z_j),\n\\end{align*}\nwhere $p\\in\\IR^n$ is the Lagrange multiplier (or adjoint variable) associated with the state equation,\nwhile $\\mu > 0$ is the barrier parameter that controls the relation between\nthe barrier term and the original objective $Q(y,z)$. 
As the IPM progresses, $\\mu$ is decreased towards zero.\n\nThe second step involves applying duality theory and\nderiving the first-order optimality conditions to obtain a nonlinear system\nparameterized by $\\mu$.\nDifferentiating $L_\\mu$ with respect to $(y,z,p)$ gives the nonlinear system\n\\begin{eqnarray*}\n M y - M y_d +L^T p - \\lambda_{y,a} + \\lambda_{y,b} & = & 0, \\\\\n \\alpha \\widetilde M z + \\beta \\bar D ^T 1_{n} - \\bar M^T p \n - \\lambda_{z,a} + \\lambda_{z,b} & = & 0, \\\\\n L y - \\bar M z - f & = & 0,\n\\end{eqnarray*}\nwhere the $j$th entries of the Lagrange multipliers $\\lambda_{y,a},\\lambda_{y,b},\n\\lambda_{z,a},\\lambda_{z,b}$ are defined as follows:\n$$\n(\\lambda_{y,a})_j = \\frac{\\mu}{y_j - y_{a,j}}, \\quad\\quad\n(\\lambda_{y,b})_j = \\frac{\\mu}{y_{b,j} - y_j}, \\quad\\quad\n(\\lambda_{z,a})_j = \\frac{\\mu}{z_j - z_{a,j}}, \\quad\\quad\n(\\lambda_{z,b})_j = \\frac{\\mu}{z_{b,j} - z_j}.\n$$\nIn addition, the nonnegativity of the Lagrange multipliers enforces the \nbound constraints on $y$ and $z$:\n$$\\lambda_{y,a} \\ge 0 , \\quad\\quad \\lambda_{y,b} \\ge 0, \\quad\\quad \\lambda_{z,a} \\ge 0, \\quad\\quad \\lambda_{z,b} \\ge 0.$$ \n\nThe third crucial step of the IPM is the application of Newton's method\nto the nonlinear system. 
\nWe now derive the Newton equations, following the description in \\cite{PGIP17}.\nLetting \n$y,z,p, \\lambda_{y,a}, \\lambda_{y,b}, \\lambda_{z,a}, \\lambda_{z,b}$\ndenote the most recent Newton iterates, these quantities \nare updated at each iteration by computing the corresponding Newton steps\n$ \\Delta y, \\Delta z, \\Delta p, \\Delta \\lambda_{y,a}, \\Delta \\lambda_{y,b}, \\Delta \\lambda_{z,a},$ $\\Delta \\lambda_{z,b}$,\nthrough the solution of the following Newton system:\n\\begin{align}\n\\ \\label{7by7} &\\begin{bmatrix}\n M & 0 & L^T & - I_n & I_n & 0 & 0 \\\\\n 0 & \\alpha \\widetilde M & -\\bar M^T & 0 & 0 & -I_{2n} & I_{2n} \\\\\n L & -\\bar M & 0 & 0 & 0 & 0 & 0 \\\\\n \\Lambda_{y,a} & 0 & 0 & Y - Y_a& 0 & 0 & 0 \\\\\n-\\Lambda_{y,b} & 0 & 0 & 0 &Y_b-Y & 0 & 0 \\\\\n0 & \\Lambda_{z,a} & 0 & 0 &0 & Z - Z_a & 0 \\\\\n0 &-\\Lambda_{z,b} & 0 & 0 &0 & 0 & Z_b - Z\n\\end{bmatrix}\n\\begin{bmatrix}\n \\Delta y\\\\\n \\Delta z \\\\\n \\Delta p \\\\\n \\Delta \\lambda_{y,a} \\\\\n \\Delta \\lambda_{y,b} \\\\\n \\Delta \\lambda_{z,a} \\\\\n \\Delta \\lambda_{z,b} \n\\end{bmatrix} \\\\\n\\ \\nonumber &\\hspace{17.5em}=-\\begin{bmatrix}\n M y - M y_d +L^T p - \\lambda_{y,a} + \\lambda_{y,b} \\\\\n \\alpha \\widetilde M z + \\beta \\bar D ^T 1_{n} - \\bar M^T p \n - \\lambda_{z,a} + \\lambda_{z,b} \\\\\n L y - \\bar M z - f \\\\\n (y-y_a).*\\lambda_{y,a} - \\mu 1_n \\\\\n (y_b-y).*\\lambda_{y,b} - \\mu 1_n \\\\\n (z-z_a).*\\lambda_{z,a} - \\mu 1_{2n} \\\\\n (z_b - z).*\\lambda_{z,b} - \\mu 1_{2n} \n\\end{bmatrix},\n\\end{align}\nwhere $Y, Z, \\Lambda_{y,a}, \\Lambda_{y,b}, \\Lambda_{z,a}, \\Lambda_{z,b}$ are diagonal matrices, \nwith the most recent iterates $y,z,p, \\lambda_{y,a},$ $\\lambda_{y,b}, \\lambda_{z,a}, \\lambda_{z,b}$\nappearing on their diagonal entries. Similarly, the matrices $Y_a , Y_b , Z_a, Z_b$ are diagonal matrices\ncorresponding to the bounds $y_a, y_b, z_a, z_b$. 
\nHere we utilize the {\\scshape matlab} notation `$.*$' to denote the componentwise product.\nWe observe that the contribution of the $\\ell_1$-norm term only arises in the right-hand side, that is to say\n$\\beta$ does not appear within the matrix we need to solve for.\n\n\n\n\n\n\nEliminating $\\Delta \\lambda_{y,a}, \\Delta \\lambda_{y,b}, \\Delta \\lambda_{z,a}, \\Delta \\lambda_{z,b}$ from \\eqref{7by7},\nwe obtain the following reduced linear system:\n\\begin{align}\\label{NewtonSystem}\n&\\begin{bmatrix}\n M + \\Theta_y & 0 & L^T \\\\\n 0 & \\alpha \\widetilde M + \\Theta_z & -\\bar M^T \\\\\n L & -\\bar M & 0 \\\\ \n\\end{bmatrix}\n\\begin{bmatrix}\n \\Delta y\\\\\n \\Delta z \\\\\n \\Delta p \\\\\n \\end{bmatrix} \\\\\n\\nonumber &\\hspace{5em}=-\\begin{bmatrix}\n M y - M y_d +L^T p -\\mu (Y-Y_a)^{-1}1_n + \\mu (Y_b-Y)^{-1}1_n \\\\\n \\alpha \\widetilde M z + \\beta \\bar D ^T 1_{n} - \\bar M^T p \n -\\mu (Z-Z_a)^{-1}1_{2n} + \\mu (Z_b-Z)^{-1}1_{2n} \\\\\n L y - \\bar M z - f \\\\\n\\end{bmatrix},\n\\end{align}\nwith\n$$\\Theta_y = (Y - Y_a )^{-1} \\Lambda_{y,a} + (Y_b - Y )^{-1} \\Lambda_{y,b},\n\\quad\\quad\\Theta_z = (Z - Z_a )^{-1} \\Lambda_{z,a} + (Z_b - Z )^{-1} \\Lambda_{z,b}\n$$\nboth diagonal and positive definite matrices, which are typically very ill-conditioned. \nOnce the above system is solved, one can compute the steps for the\nLagrange multipliers:\n\\begin{eqnarray}\n \\Delta \\lambda_{y,a} & = & - (Y-Y_a)^{-1} \\Lambda_{y,a} \\Delta y - \\Lambda_{y,a} + \\mu (Y-Y_a)^{-1}1_n, \\label{zupdate1}\\\\\n \\Delta \\lambda_{y,b} & = & (Y_b-Y)^{-1} \\Lambda_{y,b} \\Delta y - \\Lambda_{y,b} + \\mu (Y_b-Y)^{-1}1_n, \\label{zupdate2}\\\\\n \\Delta \\lambda_{z,a} & = & - (Z-Z_a)^{-1} \\Lambda_{z,a} \\Delta z - \\Lambda_{z,a} + \\mu (Z-Z_a)^{-1}1_{2n}, \\label{zupdate3}\\\\\n \\Delta \\lambda_{z,b} & = & (Z_b-Z)^{-1} \\Lambda_{z,b} \\Delta z - \\Lambda_{z,b} + \\mu (Z_b-Z)^{-1}1_{2n}. 
\\label{zupdate4}\n \\end{eqnarray}\nAfter updating the iterates, and ensuring that they remain feasible, the barrier parameter $\\mu$ is reduced and \na new Newton step is performed.\n\nFor the sake of completeness, the structure of the overall Interior Point algorithm is reported in the Appendix,\nand follows the standard infeasible Interior Point path-following scheme outlined in \\cite{gondzio12}.\nWe report on the formulas for the primal and dual feasibilities, given by \n\\begin{equation}\\label{prdu}\n \\xi_p^k = L y^k - \\bar{M} z^k - f, \\quad \\quad \n \\xi_d^k = \\begin{bmatrix}\n M y^k - M y_d + L^T p^k - \\lambda^k_{y,a} + \\lambda^k_{y,b} \\\\\n \\alpha \\widetilde M z^k + \\beta \\bar D ^T 1_{n} - \\bar {M}^T p^k \n - \\lambda^k_{z,a} + \\lambda^k_{z,b} \n \\end{bmatrix},\n\\end{equation}\nrespectively, and the complementarity gap\n\\begin{equation}\\label{gap}\n \\xi_c^k = \\begin{bmatrix}\n (y^k-y_a).* \\lambda^k_{y,a} - \\mu^k 1_n \\\\\n (y_b-y^k).* \\lambda^k_{y,b} - \\mu^k 1_n \\\\\n (z^k-z_a).* \\lambda^k_{z,a} - \\mu^k 1_{2n} \\\\\n (z_b - z^k).* \\lambda^k_{z,b} - \\mu^k 1_{2n} \n \\end{bmatrix},\n \\end{equation}\nfor problem (\\ref{pb_fe_lin}). 
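The passage from the full system \eqref{7by7} to the reduced system \eqref{NewtonSystem}, together with the multiplier recovery formula \eqref{zupdate1}, can be verified on a small example. The sketch below uses random stand-in matrices (hypothetical data, not a finite element discretization) and checks that both formulations produce identical steps:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Hypothetical stand-in matrices, chosen only to exercise the block elimination
M  = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)   # SPD mass-like matrix
Mt = np.block([[M, -M], [-M, M]])                               # \widetilde{M}
L  = rng.standard_normal((n, n)) + 3 * np.eye(n)                # stand-in for L
Mb = rng.standard_normal((n, 2 * n))                            # stand-in for \bar M
alpha, mu = 0.5, 0.05

y = rng.uniform(0.3, 0.7, n);     y_a, y_b = np.zeros(n), np.ones(n)
z = rng.uniform(0.3, 0.7, 2 * n); z_a, z_b = np.zeros(2 * n), np.ones(2 * n)
lya, lyb = rng.uniform(0.1, 1.0, n), rng.uniform(0.1, 1.0, n)
lza, lzb = rng.uniform(0.1, 1.0, 2 * n), rng.uniform(0.1, 1.0, 2 * n)
r1, r2, r3 = rng.standard_normal(n), rng.standard_normal(2 * n), rng.standard_normal(n)

D, Zn, Zw = np.diag, np.zeros((n, n)), np.zeros((n, 2 * n))
I1, I2 = np.eye(n), np.eye(2 * n)
Z2 = np.zeros((2 * n, 2 * n))
# Full 7x7 block Newton matrix
K = np.block([
    [M,       Zw,        L.T,   -I1,        I1,         Zw,         Zw],
    [Zw.T,    alpha*Mt,  -Mb.T, Zw.T,       Zw.T,       -I2,        I2],
    [L,       -Mb,       Zn,    Zn,         Zn,         Zw,         Zw],
    [D(lya),  Zw,        Zn,    D(y - y_a), Zn,         Zw,         Zw],
    [-D(lyb), Zw,        Zn,    Zn,         D(y_b - y), Zw,         Zw],
    [Zw.T,    D(lza),    Zw.T,  Zw.T,       Zw.T,       D(z - z_a), Z2],
    [Zw.T,    -D(lzb),   Zw.T,  Zw.T,       Zw.T,       Z2,         D(z_b - z)],
])
rhs = -np.concatenate([r1, r2, r3,
                       (y - y_a) * lya - mu, (y_b - y) * lyb - mu,
                       (z - z_a) * lza - mu, (z_b - z) * lzb - mu])
sol = np.linalg.solve(K, rhs)
dy, dz, dp, dl_ya = sol[:n], sol[n:3*n], sol[3*n:4*n], sol[4*n:5*n]

# Reduced system after eliminating the multiplier steps
Th_y = D(lya / (y - y_a) + lyb / (y_b - y))
Th_z = D(lza / (z - z_a) + lzb / (z_b - z))
Kr = np.block([[M + Th_y, Zw,              L.T],
               [Zw.T,     alpha*Mt + Th_z, -Mb.T],
               [L,        -Mb,             Zn]])
rr = -np.concatenate([r1 + lya - lyb - mu/(y - y_a) + mu/(y_b - y),
                      r2 + lza - lzb - mu/(z - z_a) + mu/(z_b - z),
                      r3])
red = np.linalg.solve(Kr, rr)

assert np.allclose(red, np.concatenate([dy, dz, dp]))
# Multiplier recovery, as in the update formula for Delta lambda_{y,a}
assert np.allclose(dl_ya, -(lya / (y - y_a)) * dy - lya + mu / (y - y_a))
```

Note that the reduced right-hand side here is expressed through the full residuals `r1`, `r2`; the barrier relation between the multipliers and the iterates is not required for the elimination to be exact.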
Here $k$ denotes the iteration counter for the Interior Point method, with $y^k,z^k,p^k,\\lambda^k_{y,a},\\lambda^k_{y,b},\\lambda^k_{z,a},\\lambda^k_{z,b},\\mu^k$ the values of $y,z,p,\\lambda_{y,a},\\lambda_{y,b},\\lambda_{z,a},\\lambda_{z,b},\\mu$ at the $k$th iteration.\n\nMonitoring the norms of $\\xi_p^k$, $\\xi_d^k$, and $\\xi_c^k$ allows us to track the convergence of the entire process.\nComputationally, the main bottleneck of the algorithm is the linear algebra phase,\nthat is, the efficient solution of the Newton system (\\ref{NewtonSystem}).\nThis is the focus of the forthcoming section.\n\n\n\n\n\\section{Preconditioning}\n\\label{sec::prec}\n\nHaving arrived at the Newton system \\eqref{NewtonSystem}, the main task at this stage is to construct fast and effective methods for the solution of such systems. In this work, we elect to apply iterative (Krylov subspace) solvers, both the {\\scshape minres} method \\cite{minres} for symmetric matrix systems, and the {\\scshape gmres} algorithm \\cite{gmres}, which may also be applied to non-symmetric matrices. 
We wish to accelerate these methods using carefully chosen preconditioners.\n\nTo develop these preconditioners, we observe that \\eqref{NewtonSystem} is a \\emph{saddle-point system} (see \\cite{BenGolLie05} for a review of such systems), of the form\n\\begin{equation*}\n\\ \\mathcal{A}=\\left[\\begin{array}{cc}\nA & B^T \\\\\nB & C \\\\\n\\end{array}\\right],\n\\end{equation*}\nwith\n\\begin{equation*}\n\\ A=\\left[\\begin{array}{cc}\nM+\\Theta_y & 0 \\\\\n0 & \\alpha\\widetilde{M}+\\Theta_z \\\\\n\\end{array}\\right],\\quad\\quad{}B=\\left[\\begin{array}{cc}\nL & -\\bar{M} \\\\\n\\end{array}\\right],\\quad\\quad{}C=\\left[\\begin{array}{c}\n0 \\\\\n\\end{array}\\right].\n\\end{equation*}\nProvided $A$ is nonsingular, it is well known that two \\emph{ideal preconditioners} for the saddle-point matrix $\\mathcal{A}$ are given by\n\\begin{equation*}\n\\ \\mathcal{P}_1=\\left[\\begin{array}{cc}\nA & 0\\\\\n0 & S \\\\\n\\end{array}\\right],\\quad\\quad\\mathcal{P}_2=\\left[\\begin{array}{cc}\nA & 0\\\\\nB & -S \\\\\n\\end{array}\\right],\n\\end{equation*}\nwhere the (negative) \\emph{Schur complement} $S:=-C+BA^{-1}B^T$. In particular, provided the preconditioned system is nonsingular, it can be shown that \\cite{Ipsen,Ku95,preconMGW}\n\\begin{equation*}\n\\ \\lambda(\\mathcal{P}_1^{-1}\\mathcal{A})\\in\\left\\{1,\\frac{1}{2}(1\\pm\\sqrt{5})\\right\\},\\quad\\quad\\lambda(\\mathcal{P}_2^{-1}\\mathcal{A})\\in\\{1\\},\n\\end{equation*}\nand hence that a suitable Krylov method preconditioned by $\\mathcal{P}_1$ or $\\mathcal{P}_2$ will converge in $3$ or $2$ iterations, respectively.\n\nOf course, we would not wish to work with the preconditioners $\\mathcal{P}_1$ or $\\mathcal{P}_2$ in practice, as they would be prohibitively expensive to invert. 
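The eigenvalue statements for the ideal preconditioners can nonetheless be reproduced numerically at small scale. Below is a minimal sketch with a randomly generated saddle-point matrix (hypothetical data, with $C=0$ as in our setting), confirming that the eigenvalues of $\mathcal{P}_1^{-1}\mathcal{A}$ take only the three predicted values $1$ and $\frac{1}{2}(1\pm\sqrt{5})$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, k = 6, 3
A = rng.standard_normal((m, m)); A = A @ A.T + m * np.eye(m)   # SPD (1,1)-block
B = rng.standard_normal((k, m))                                # full row rank
S = B @ np.linalg.solve(A, B.T)                                # Schur complement (C = 0)

mat = np.block([[A, B.T], [B, np.zeros((k, k))]])              # saddle-point matrix
P1  = np.block([[A, np.zeros((m, k))], [np.zeros((k, m)), S]]) # ideal block diagonal

eigs = np.linalg.eigvals(np.linalg.solve(P1, mat))
targets = np.array([1.0, (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2])
assert np.max(np.abs(eigs.imag)) < 1e-8                        # eigenvalues are real
assert all(np.min(np.abs(e - targets)) < 1e-8 for e in eigs.real)
```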
We therefore wish to develop analogous preconditioners of the form\n\\begin{equation*}\n\\ \\mathcal{P}_D=\\left[\\begin{array}{cc}\n\\widehat{A} & 0\\\\\n0 & \\widehat{S} \\\\\n\\end{array}\\right],\\quad\\quad\\mathcal{P}_T=\\left[\\begin{array}{cc}\n\\widehat{A} & 0\\\\\nB & -\\widehat{S} \\\\\n\\end{array}\\right],\n\\end{equation*}\nwhere $\\widehat{A}$ and $\\widehat{S}$ are suitable and computationally cheap approximations of the $(1,1)$-block $A$ and the Schur complement $S$. Provided $\\widehat{A}$ and $\\widehat{S}$ are symmetric positive definite, the preconditioner $\\mathcal{P}_D$ may be applied within the {\\scshape minres} algorithm, and $\\mathcal{P}_T$ is applied within a non-symmetric solver such as {\\scshape gmres}.\n\nOur focus is therefore to develop such approximations for the corresponding matrices for the Newton system \\eqref{NewtonSystem}:\n\\begin{equation*}\n\\ A=\\left[\\begin{array}{cc}\nM+\\Theta_y & 0 \\\\\n0 & \\alpha\\widetilde{M}+\\Theta_z \\\\\n\\end{array}\\right],\\quad\\quad{}S=\\left[\\begin{array}{cc}\nL & -\\bar{M} \\\\\n\\end{array}\\right]\\left[\\begin{array}{cc}\nM+\\Theta_y & 0 \\\\\n0 & \\alpha\\widetilde{M}+\\Theta_z \\\\\n\\end{array}\\right]^{-1}\\left[\\begin{array}{c}\nL^T \\\\\n-\\bar{M}^T \\\\\n\\end{array}\\right].\n\\end{equation*}\n\n\n\\subsection{Approximation of \\boldmath{$(1,1)$}-block}\n\nAn effective approximation of the $(1,1)$-block $A$ will require cheap and accurate approximations of the matrices $M+\\Theta_y$ and $\\alpha\\widetilde{M}+\\Theta_z$.\n\nWhen considering the matrix $M+\\Theta_y$, our first observation is that the mass matrix $M$ may be effectively approximated by its diagonal \\cite{wathen87} within a preconditioner. This can be exploited and enhanced by applying the \\emph{Chebyshev semi-iteration} method \\cite{VGI61,VGII61,RW08}, which utilizes the effectiveness of the diagonal approximation and accelerates it. 
Now, it may be easily shown that\n\\begin{align*}\n\\ &\\Big[\\lambda_{\\min}\\big((D_M+\\Theta_y)^{-1}(M+\\Theta_y)\\big),\\lambda_{\\max}\\big((D_M+\\Theta_y)^{-1}(M+\\Theta_y)\\big)\\Big] \\\\\n\\ &\\hspace{10em}\\subset\\Big[\\min\\left\\{\\lambda_{\\min}(D_M^{-1}M),1\\right\\},\\max\\left\\{\\lambda_{\\max}(D_M^{-1}M),1\\right\\}\\Big],\n\\end{align*}\nwhere $D_M:=\\text{diag}(M)$, due to the positivity of the diagonal matrix $\\Theta_y$. Here, $\\lambda_{\\min}(\\cdot)$, $\\lambda_{\\max}(\\cdot)$ denote the smallest and largest eigenvalues of a matrix, respectively. In other words, the diagonal of $M+\\Theta_y$ also clusters the eigenvalues within a preconditioner. The same argument may therefore be used to apply Chebyshev semi-iteration to $M+\\Theta_y$ within a preconditioner, and so we elect to use this approach.\n\nWe now turn our attention to the matrix $\\alpha\\widetilde{M}+\\Theta_z$, first decomposing $\\Theta_z=\\text{blkdiag}(\\Theta_w,\\Theta_v)$, where $\\Theta_w$, $\\Theta_v$ denote the components of $\\Theta_z$ corresponding to $w$, $v$. 
Therefore, in this notation,\n\\begin{equation*}\n\\ \\alpha\\widetilde{M}+\\Theta_z=\\left[\\begin{array}{cc}\n\\alpha{}M+\\Theta_w & -\\alpha{}M \\\\\n-\\alpha{}M & \\alpha{}M+\\Theta_v \\\\\n\\end{array}\\right].\n\\end{equation*}\nNote that $\\widetilde{M}$ is positive semidefinite but $\\alpha\\widetilde{M}+\\Theta_z$ is positive definite since the diagonal $\\Theta_z$ is positive definite (the control and state bounds are enforced as strict inequalities at each Newton step).\n\nA result which we apply is that of \\cite[Theorems 2.1(i) and 2.2(i)]{LSinverses02}, which gives us the following statements about the inverse of $2\\times2$ block matrices:\n\\begin{teo}\nConsider the inverse of the block matrix\n\\begin{equation}\n\\ \\label{ABCD} \\left[\\begin{array}{cc}\nA & B_1 \\\\\nB_2 & C \\\\\n\\end{array}\\right].\n\\end{equation}\nIf $A$ is nonsingular and $C-B_{2}A^{-1}B_1$ is invertible, then \\eqref{ABCD} is invertible, with\n\\begin{equation}\n\\ \\label{ABCDinv1} \\left[\\begin{array}{cc}\nA & B_1 \\\\\nB_2 & C \\\\\n\\end{array}\\right]^{-1}=\\left[\\begin{array}{cc}\nA^{-1}+A^{-1}B_1(C-B_{2}A^{-1}B_1)^{-1}B_{2}A^{-1} & -A^{-1}B_1(C-B_{2}A^{-1}B_1)^{-1} \\\\\n-(C-B_{2}A^{-1}B_1)^{-1}B_{2}A^{-1} & (C-B_{2}A^{-1}B_1)^{-1} \\\\\n\\end{array}\\right].\n\\end{equation}\nAlternatively, if $B_1$ is nonsingular and $B_2-CB_1^{-1}A$ is invertible, then \\eqref{ABCD} is invertible, with\n\\begin{equation}\n\\ \\label{ABCDinv2} \\left[\\begin{array}{cc}\nA & B_1 \\\\\nB_2 & C \\\\\n\\end{array}\\right]^{-1}=\\left[\\begin{array}{cc}\n-(B_2-CB_1^{-1}A)^{-1}CB_1^{-1} & (B_2-CB_1^{-1}A)^{-1} \\\\\nB_1^{-1}+B_1^{-1}A(B_2-CB_1^{-1}A)^{-1}CB_1^{-1} & -B_1^{-1}A(B_2-CB_1^{-1}A)^{-1} \\\\\n\\end{array}\\right].\n\\end{equation}\n\\end{teo}\n\nFor the purposes of this working, we may therefore consider the matrix $\\alpha\\widetilde{M}+\\Theta_z$ itself \nas a block matrix \\eqref{ABCD}, \nwith $A=\\alpha{}M+\\Theta_w$, $B_1=B_2=-\\alpha{}M$, $C=\\alpha{}M+\\Theta_v$. 
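As a quick numerical sanity check of the inverse formula \eqref{ABCDinv1}, the following sketch verifies it blockwise on hypothetical random blocks (unrelated to our discretization):

```python
import numpy as np

rng = np.random.default_rng(3)
k = 3
A  = rng.standard_normal((k, k)) + 4 * np.eye(k)   # nonsingular (1,1)-block
B1 = rng.standard_normal((k, k))
B2 = rng.standard_normal((k, k))
C  = rng.standard_normal((k, k))

Ai = np.linalg.inv(A)
Si = np.linalg.inv(C - B2 @ Ai @ B1)   # (C - B2 A^{-1} B1)^{-1}, invertible here

# Inverse assembled block by block, following the first formula of the theorem
blockwise = np.block([[Ai + Ai @ B1 @ Si @ B2 @ Ai, -Ai @ B1 @ Si],
                      [-Si @ B2 @ Ai,               Si]])
full = np.block([[A, B1], [B2, C]])

assert np.allclose(blockwise, np.linalg.inv(full))
```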
It may easily be verified that $A$, $C-B_{2}A^{-1}B_1$, $B_1$, $B_2-CB_1^{-1}A$ are then invertible matrices, and so the results \\eqref{ABCDinv1} and \\eqref{ABCDinv2} both hold in this setting.\n\nWe now consider approximating $\\alpha\\widetilde{M}+\\Theta_z$ within a preconditioner by replacing all mass matrices with their diagonals, i.e. writing\n\\begin{equation*}\n\\ \\alpha\\widetilde{D}_M+\\Theta_z:=\\left[\\begin{array}{cc}\n\\alpha{}D_M+\\Theta_w & -\\alpha{}D_M \\\\\n-\\alpha{}D_M & \\alpha{}D_M+\\Theta_v \\\\\n\\end{array}\\right].\n\\end{equation*}\nThis would give us a practical approximation, by using the expression \\eqref{ABCDinv1} to apply $(\\alpha\\widetilde{D}_M+\\Theta_z)^{-1}$, provided it can be demonstrated that $\\alpha\\widetilde{D}_M+\\Theta_z$ well approximates $\\alpha\\widetilde{M}+\\Theta_z$. This is indeed the case, as demonstrated using the result below:\n\\begin{teo}\n\t\\label{theorem1}\nThe eigenvalues $\\lambda$ of the matrix\n\\begin{equation}\n\\ \\label{PrecDiag} \\left[\\begin{array}{cc}\n\\alpha{}D_M+\\Theta_w & -\\alpha{}D_M \\\\\n-\\alpha{}D_M & \\alpha{}D_M+\\Theta_v \\\\\n\\end{array}\\right]^{-1}\\left[\\begin{array}{cc}\n\\alpha{}M+\\Theta_w & -\\alpha{}M \\\\\n-\\alpha{}M & \\alpha{}M+\\Theta_v \\\\\n\\end{array}\\right]\n\\end{equation}\nare all contained within the interval:\n\\begin{equation*}\n\\ \\lambda\\in\\Big[\\min\\{\\lambda_{\\min}(D_M^{-1}M),1\\},\\max\\{\\lambda_{\\max}(D_M^{-1}M),1\\}\\Big].\n\\end{equation*}\n\\end{teo}\n\\emph{Proof.}~~The eigenvalues of \\eqref{PrecDiag} satisfy\n\\begin{equation*}\n\\ \\left[\\begin{array}{cc}\n\\alpha{}M+\\Theta_w & -\\alpha{}M \\\\\n-\\alpha{}M & \\alpha{}M+\\Theta_v \\\\\n\\end{array}\\right]\\left[\\begin{array}{c}\n\\mathbf{x}_1 \\\\\n\\mathbf{x}_2 \\\\\n\\end{array}\\right]=\\lambda\\left[\\begin{array}{cc}\n\\alpha{}D_M+\\Theta_w & -\\alpha{}D_M \\\\\n-\\alpha{}D_M & \\alpha{}D_M+\\Theta_v \\\\\n\\end{array}\\right]\\left[\\begin{array}{c}\n\\mathbf{x}_1 
\\\\\n\\mathbf{x}_2 \\\\\n\\end{array}\\right],\n\\end{equation*}\nwith $\\mathbf{x}_1$, $\\mathbf{x}_2$ not both equal to $\\mathbf{0}$, which may be decomposed to write\n\\begin{align}\n\\ \\label{EigEqn1} (\\alpha{}M+\\Theta_w)\\mathbf{x}_1-\\alpha{}M\\mathbf{x}_2={}&\\lambda(\\alpha{}D_M+\\Theta_w)\\mathbf{x}_1-\\lambda\\alpha{}D_M\\mathbf{x}_2, \\\\\n\\ \\label{EigEqn2} -\\alpha{}M\\mathbf{x}_1+(\\alpha{}M+\\Theta_v)\\mathbf{x}_2={}&-\\lambda\\alpha{}D_M\\mathbf{x}_1+\\lambda(\\alpha{}D_M+\\Theta_v)\\mathbf{x}_2.\n\\end{align}\nSumming \\eqref{EigEqn1} and \\eqref{EigEqn2} gives that\n\\begin{equation*}\n\\ \\Theta_w\\mathbf{x}_1+\\Theta_v\\mathbf{x}_2=\\lambda{}\\Theta_w\\mathbf{x}_1+\\lambda{}\\Theta_v\\mathbf{x}_2=\\lambda(\\Theta_w\\mathbf{x}_1+\\Theta_v\\mathbf{x}_2),\n\\end{equation*}\nwhich tells us that either $\\lambda=1$ or $\\Theta_w\\mathbf{x}_1+\\Theta_v\\mathbf{x}_2=\\mathbf{0}$. In the latter case, we substitute $\\mathbf{x}_1=-\\Theta_w^{-1}\\Theta_v\\mathbf{x}_2$ into \\eqref{EigEqn1} to give that\n\\begin{align*}\n\\ -(\\alpha{}M+\\Theta_w)\\Theta_w^{-1}\\Theta_v\\mathbf{x}_2-\\alpha{}M\\mathbf{x}_2={}&-\\lambda(\\alpha{}D_M+\\Theta_w)\\Theta_w^{-1}\\Theta_v\\mathbf{x}_2-\\lambda\\alpha{}D_M\\mathbf{x}_2 \\\\\n\\ \\Rightarrow\\quad\\quad~~\\Big[\\alpha{}M(\\Theta_w^{-1}\\Theta_v+I)+\\Theta_v\\Big]\\mathbf{x}_2={}&\\lambda\\Big[\\alpha{}D_M(\\Theta_w^{-1}\\Theta_v+I)+\\Theta_v\\Big]\\mathbf{x}_2,\n\\end{align*}\nwhich in turn tells us that\n\\begin{equation*}\n\\ \\Big[\\alpha{}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v(\\Theta_w^{-1}\\Theta_v+I)^{-1\/2}\\Big]\\mathbf{x}_3=\\lambda\\Big[\\alpha{}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v(\\Theta_w^{-1}\\Theta_v+I)^{-1\/2}\\Big]\\mathbf{x}_3,\n\\end{equation*}\nwhere $\\mathbf{x}_3=(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\mathbf{x}_2\\neq\\mathbf{0}$. 
Premultiplying both sides of the equation by $(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}$ then gives that\n\\begin{equation*}\n\\ \\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v\\Big]\\mathbf{x}_3=\\lambda\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v\\Big]\\mathbf{x}_3,\n\\end{equation*}\nand therefore that the eigenvalues may be described by the Rayleigh quotient\n\\begin{equation*}\n\\ \\frac{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v\\Big]\\mathbf{x}_3}{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v\\Big]\\mathbf{x}_3}.\n\\end{equation*}\nNow, as $\\mathbf{x}_3^T\\Theta_v\\mathbf{x}_3$ is a positive number, $\\lambda$ may be bounded within the range of the following Rayleigh quotient:\n\\begin{align*}\n\\ \\lambda\\in{}&\\left[\\min\\left\\{\\min_{\\mathbf{x}_3}\\frac{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\Big]\\mathbf{x}_3}{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\Big]\\mathbf{x}_3},1\\right\\},\\right. 
\\\\\n\\ &\\quad\\quad\\quad\\quad\\left.\\max\\left\\{\\max_{\\mathbf{x}_3}\\frac{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\Big]\\mathbf{x}_3}{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\Big]\\mathbf{x}_3},1\\right\\}\\right] \\\\\n\\ ={}&\\left[\\min\\left\\{\\min_{\\mathbf{x}_4}\\frac{\\mathbf{x}_4^{T}M\\mathbf{x}_4}{\\mathbf{x}_4^{T}D_M\\mathbf{x}_4},1\\right\\},\\max\\left\\{\\max_{\\mathbf{x}_4}\\frac{\\mathbf{x}_4^{T}M\\mathbf{x}_4}{\\mathbf{x}_4^{T}D_M\\mathbf{x}_4},1\\right\\}\\right] \\\\\n\\ \\subset{}&\\Big[\\min\\{\\lambda_{\\min}(D_M^{-1}M),1\\},\\max\\{\\lambda_{\\max}(D_M^{-1}M),1\\}\\Big],\n\\end{align*}\nwhere in the above derivation $\\mathbf{x}_4=(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\mathbf{x}_3\\neq\\mathbf{0}$. This gives the stated result.~~$\\Box$\n\n\n\\vspace{1em}\n\n\\begin{remark} Theorem \\ref{theorem1} is indeed a positive result. We utilize the fact that a mass matrix preconditioned by its diagonal gives tight eigenvalue bounds \\cite{wathen87}. \nWe have now obtained a cheap approximation of the $(1,1)$-block of our saddle-point system, with eigenvalues of the preconditioned matrix provably contained within a tight interval. 
\nWe wish to emphasize the fact that the interval boundaries, and thus the region of interest where the eigenvalues will lie, are independent of all system parameters, such as penalization-, regularization-, mesh-, and time-step parameters.\n\\end{remark}\n\n\n\\subsection{Approximation of Schur Complement}\\label{sec:Schur}\n\nThe Schur complement of the Newton system \\eqref{NewtonSystem} under consideration is given by\n\\begin{equation*}\n\\ S=L(M+\\Theta_y)^{-1}L^{T}+\\left[\\begin{array}{cc}\n-M & M \\\\\n\\end{array}\\right]\\left[\\begin{array}{cc}\n\\alpha{}M+\\Theta_w & -\\alpha{}M \\\\\n-\\alpha{}M & \\alpha{}M+\\Theta_v \\\\\n\\end{array}\\right]^{-1}\\left[\\begin{array}{c}\n-M \\\\\nM \\\\\n\\end{array}\\right].\n\\end{equation*}\nFor the matrix inverse in the above expression, we again consider the matrix $\\alpha\\widetilde{M}+\\Theta_z$ as\na block matrix of the form \\eqref{ABCD}, with $A=\\alpha{}M+\\Theta_w$, $B_1=B_2=B=-\\alpha{}M$, $C=\\alpha{}M+\\Theta_v$. Using \\eqref{ABCDinv2} then gives that\n\\begin{align*}\n\\ &\\left[\\begin{array}{cc}\n-M & M \\\\\n\\end{array}\\right]\\left[\\begin{array}{cc}\nA & B \\\\\nB & C \\\\\n\\end{array}\\right]^{-1}\\left[\\begin{array}{c}\n-M \\\\\nM \\\\\n\\end{array}\\right] \\\\\n\\ ={}&\\left[\\begin{array}{cc}\n-M & M \\\\\n\\end{array}\\right]\\left[\\begin{array}{c}\n(B-CB^{-1}A)^{-1}CB^{-1}M+(B-CB^{-1}A)^{-1}M \\\\\n-B^{-1}M-B^{-1}A(B-CB^{-1}A)^{-1}CB^{-1}M-B^{-1}A(B-CB^{-1}A)^{-1}M \\\\\n\\end{array}\\right] \\\\\n\\ \\ ={}&-M\\Big[B^{-1}+(B^{-1}A+I)(B-CB^{-1}A)^{-1}(CB^{-1}+I)\\Big]M,\n\\end{align*}\nwhereupon substituting in the relevant $A$, $B$, $C$ gives that this expression can be written as follows:\n\\begin{align*}\n\\ &\\frac{1}{\\alpha}M-\\left(-\\frac{1}{\\alpha}A+M\\right)\\left(-\\alpha{}M+\\frac{1}{\\alpha}CM^{-1}A\\right)^{-1}\\left(-\\frac{1}{\\alpha}C+M\\right) \\\\\n\\ 
={}&\\frac{1}{\\alpha}M+\\left(\\frac{1}{\\alpha}\\Theta_w\\right)\\left(\\alpha{}M-\\left(\\alpha{}M+\\Theta_w+\\Theta_v+\\frac{1}{\\alpha}\\Theta_v{}M^{-1}\\Theta_w\\right)\\right)^{-1}\\left(\\frac{1}{\\alpha}\\Theta_v\\right) \\\\\n\\ ={}&\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}.\n\\end{align*}\nTherefore, $S$ may be written as\n\\begin{equation}\n\\ \\label{Schur} S=L(M+\\Theta_y)^{-1}L^{T}+\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}.\n\\end{equation}\nIt can be shown that $S$ consists of a sum of two symmetric positive semidefinite matrices. The matrix $L(M+\\Theta_y)^{-1}L^{T}$ clearly satisfies this property due to the positive definiteness of $M+\\Theta_y$, and $\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}$ is in fact positive definite by the following argument:\n\\begin{align*}\n\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\frac{1}{\\alpha}M^{-1}+\\Theta_w^{-1}+\\Theta_v^{-1}\\right)^{-1}\\succ0\\quad\\Leftrightarrow\\quad&\\frac{1}{\\alpha^2}\\left(\\frac{1}{\\alpha}M^{-1}+\\Theta_w^{-1}+\\Theta_v^{-1}\\right)^{-1}\\prec\\frac{1}{\\alpha}M \\\\\n\\ \\Leftrightarrow\\quad&\\alpha^2\\left(\\frac{1}{\\alpha}M^{-1}+\\Theta_w^{-1}+\\Theta_v^{-1}\\right)\\succ\\alpha{}M^{-1} \\\\\n\\ \\Leftrightarrow\\quad&\\ M^{-1}+\\alpha\\Theta_w^{-1}+\\alpha\\Theta_v^{-1}\\succ{}M^{-1}.\n\\end{align*}\nBased on this observation, we apply a ``\\emph{matching strategy}'' previously derived in \\cite{PSW11,PW10} for simpler PDE-constrained optimization problems, which relies on a Schur complement being written in this form. 
In more detail, we approximate the Schur complement $S$ by\n\\begin{equation}\n\\ \\label{SchurApprox} \\widehat{S}=\\left(L+\\widehat{M}\\right)(M+\\Theta_y)^{-1}\\left(L+\\widehat{M}\\right)^T,\n\\end{equation}\nwhere $\\widehat{M}$ is chosen such that the `outer' term of $\\widehat{S}$ in \\eqref{SchurApprox} approximates the second and third terms of $S$ in \\eqref{Schur}, that is\n\\begin{equation*}\n\\ \\widehat{M}(M+\\Theta_y)^{-1}\\widehat{M}^{T}\\approx\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}.\n\\end{equation*}\nThis may be achieved if\n\\begin{equation*}\n\\ \\widehat{M}\\approx\\left[\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}\\right]^{1\/2}(M+\\Theta_y)^{1\/2}.\n\\end{equation*}\nA natural choice, which is cheap to apply on a computer, is obtained by replacing the mass matrices with their diagonals, so that the required matrix square roots become trivial to form; we thus set\n\\begin{equation*}\n\\ \\widehat{M}=\\left[\\frac{1}{\\alpha}D_M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}D_M^{-1}\\right)^{-1}\\right]^{1\/2}(D_M+\\Theta_y)^{1\/2}.\n\\end{equation*}\nWe therefore have a Schur complement approximation $\\widehat{S}$ which may be approximately inverted by applying a multigrid method to the matrix $L+\\widehat{M}$ and its transpose, along with a matrix-vector multiplication for $M+\\Theta_y$.\n\nBelow we present a result concerning a lower bound on the eigenvalues of the preconditioned Schur complement.\n\\begin{teo}\nIn the case of lumped (diagonal) mass matrices, the eigenvalues of the preconditioned Schur complement all satisfy:\n\\begin{equation*}\n\\ \\lambda(\\widehat{S}^{-1}S)\\geq\\frac{1}{2}.\n\\end{equation*}\n\\end{teo}\n\\emph{Proof.}~~Bounds for the eigenvalues of $\\widehat{S}^{-1}S$ are determined by the extrema of the Rayleigh 
quotient\n\\begin{equation*}\n\\ R:=\\frac{\\mathbf{v}^{T}S\\mathbf{v}}{\\mathbf{v}^{T}\\widehat{S}\\mathbf{v}}=\\frac{\\boldsymbol\\chi^T\\boldsymbol\\chi+\\boldsymbol\\omega^T\\boldsymbol\\omega}{(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma)},\n\\end{equation*}\nwhere\n\\begin{align*}\n\\ \\boldsymbol\\chi={}&(M+\\Theta_y)^{-1\/2}L^T\\mathbf{v}, \\\\\n\\ \\boldsymbol\\omega={}&\\left[\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}\\right]^{1\/2}\\mathbf{v}, \\\\\n\\ \\boldsymbol\\gamma={}&(M+\\Theta_y)^{-1\/2}(D_M+\\Theta_y)^{1\/2}\\left[\\frac{1}{\\alpha}D_M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}D_M^{-1}\\right)^{-1}\\right]^{1\/2}\\mathbf{v}.\n\\end{align*}\nFollowing the argument used in \\cite[Lemma 2]{PGIP17}, we may bound $R$ as follows:\n\\begin{equation}\\label{Rbound}\n\\ R=\\frac{\\boldsymbol\\chi^T\\boldsymbol\\chi+\\displaystyle{\\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma}}\\hspace{0.25em}\\boldsymbol\\gamma^T\\boldsymbol\\gamma}{(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma)}\\geq\\min\\left\\{\\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma},1\\right\\}\\cdot\\frac{\\boldsymbol\\chi^T\\boldsymbol\\chi+\\boldsymbol\\gamma^T\\boldsymbol\\gamma}{(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma)}\\geq\\frac{1}{2}\\cdot\\min\\left\\{\\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma},1\\right\\},\n\\end{equation}\nusing the argument\n\\begin{align*}\n\\ 
\\frac{1}{2}(\\boldsymbol\\chi-\\boldsymbol\\gamma)^T(\\boldsymbol\\chi-\\boldsymbol\\gamma)\\geq0\\quad\\Leftrightarrow&\\quad\\boldsymbol\\chi^T\\boldsymbol\\chi+\\boldsymbol\\gamma^T\\boldsymbol\\gamma\\geq\\frac{1}{2}(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma) \\\\\n\\ \\Leftrightarrow&\\quad\\frac{\\boldsymbol\\chi^T\\boldsymbol\\chi+\\boldsymbol\\gamma^T\\boldsymbol\\gamma}{(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma)}\\geq\\frac{1}{2}.\n\\end{align*}\n\nWe now turn our attention to the product $\\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma}$. Straightforward calculation tells us that\n\\begin{equation*}\n\\ \\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma}=\\underbrace{\\frac{\\mathbf{v}^T[M-(\\Theta+M^{-1})^{-1}]\\mathbf{v}}{\\mathbf{v}^T[D_M-(\\Theta+D_M^{-1})^{-1}]\\mathbf{v}}}_{=:R_{\\Theta}}\\cdot\\frac{\\mathbf{w}^T(D_M+\\Theta_y)^{-1}\\mathbf{w}}{\\mathbf{w}^T(M+\\Theta_y)^{-1}\\mathbf{w}},\n\\end{equation*}\nwhere $\\Theta:=\\alpha\\Theta_w^{-1}+\\alpha\\Theta_v^{-1}$ and $\\mathbf{w}:=(D_M+\\Theta_y)^{1\/2}\\big[\\frac{1}{\\alpha}D_M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}D_M^{-1}\\right)^{-1}\\big]^{1\/2}\\mathbf{v}$. It may be observed that\n\\begin{equation*}\n\\ \\frac{\\mathbf{w}^T(D_M+\\Theta_y)^{-1}\\mathbf{w}}{\\mathbf{w}^T(M+\\Theta_y)^{-1}\\mathbf{w}}\\geq\\lambda_{\\min}\\Big((D_M+\\Theta_y)^{-1}(M+\\Theta_y)\\Big)\\geq\\min\\left\\{\\lambda_{\\min}(D_M^{-1}M),1\\right\\},\n\\end{equation*}\nand hence that\n\\begin{equation}\\label{omegabound}\n\\ \\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma}\\geq{}R_{\\Theta}\\cdot\\min\\left\\{\\lambda_{\\min}(D_M^{-1}M),1\\right\\}.\n\\end{equation}\n\nFinally, we observe that $R_{\\Theta}=\\lambda_{\\min}(D_M^{-1}M)=1$ for lumped mass matrices, as $D_M=M$. 
Inserting \\eqref{omegabound} into \\eqref{Rbound} then gives the required result.~~$\\Box$\n\n\\vspace{1em}\n\n\\begin{remark} For consistent mass matrices, the working above still holds, except $R_{\\Theta}$ and $\\lambda_{\\min}(D_M^{-1}M)$ are not equal to $1$. Therefore, the bound reads\n\\begin{equation*}\n\\ \\lambda(\\widehat{S}^{-1}S)\\geq\\frac{1}{2}\\cdot\\min\\Big\\{\\min\\hspace{0.1em}R_{\\Theta}\\cdot\\min\\left\\{\\lambda_{\\min}(D_M^{-1}M),1\\right\\},1\\Big\\},\n\\end{equation*}\nand depends on the matrix $[D_M-(\\Theta+D_M^{-1})^{-1}]^{-1}[M-(\\Theta+M^{-1})^{-1}]$, which does not have uniformly bounded eigenvalues. This is, however, a weak bound, and in practice we find that the (smallest and largest) eigenvalues of the preconditioned Schur complement are moderate in size.\n\n\nFurthermore, in numerical experiments, we find the vast majority of the eigenvalues of $\\widehat{S}^{-1}S$ \nto be clustered in the interval $\\left [\\frac{1}{2},1 \\right]$, particularly as the Interior Point method approaches convergence, for the following reasons. In \\cite[Theorem 4.1]{PW11}, it is shown that\n\\begin{eqnarray}\\label{half1}\n\\ \\lambda\\left(\\left[\\left(L+\\frac{1}{\\sqrt{\\alpha}}M\\right)M^{-1}\\left(L+\\frac{1}{\\sqrt{\\alpha}}M\\right)^T\\right]^{-1}\\left[LM^{-1}L^T+\\frac{1}{\\alpha}M\\right]\\right)\\in\\left[\\frac{1}{2},1\\right],\n\\end{eqnarray}\nfor any (positive) value of $\\alpha$, and any mesh-size, provided $L+L^T$ is positive semidefinite, which is the case for Poisson and convection--diffusion problems for instance. 
For the Schur complement \\eqref{Schur} and Schur complement approximation \\eqref{SchurApprox}, as the Interior Point method approaches convergence, two cases will arise: (i) some entries of $\\Theta_w^{-1}+\\Theta_v^{-1}$ will approach zero, whereupon substituting these values into \\eqref{Schur} and \\eqref{SchurApprox} gives that $S$ and $\\widehat{S}$ are both approximately $L(M+\\Theta_y)^{-1}L^T$, so the eigenvalues of $\\widehat{S}^{-1}S$ should be roughly $1$; (ii) some entries of $\\Theta_w^{-1}+\\Theta_v^{-1}$ approach infinity (with many entries of $\\Theta_y$ correspondingly approaching zero), so $S$ is approximately $LM^{-1}L^T+\\frac{1}{\\alpha}M$, with $\\widehat{S}$ an approximation of $(L+\\frac{1}{\\sqrt{\\alpha}}M)M^{-1}(L+\\frac{1}{\\sqrt{\\alpha}}M)^T$, giving clustered eigenvalues as predicted by \\eqref{half1}.\nNumerical evidence of the described behavior, for consistent mass matrices, is shown in Figure \\ref{eig}.\n\n\\vspace{1em}\n\n\\tikzexternaldisable\n \\begin{figure}[htb]\n\\begin{center}\n\t\\setlength\\figureheight{0.3\\linewidth} \n\t\\setlength\\figurewidth{0.4\\linewidth}\n\t\\subfloat[Poisson eigenvalues]{\n\t\\input{figures\/eigplot1.tikz}\n\t}\n\t\\subfloat[Convection--diffusion eigenvalues]{\n\t\\input{figures\/eigplot2.tikz}\n\t}\n \\end{center}\n\\caption{Eigenvalue distribution of $\\widehat{S}^{-1}S $ at later Interior Point iterations for test problems involving Poisson's equation (left)\nand the convection--diffusion equation (right) (with mesh-size $h=2^{-4}$). 
\n} \n\\label{eig}\n\\end{figure}\n\\tikzexternalenable\n\\end{remark}\n\n\nWe note that the $(1,1)$-block and Schur complement approximations that we have derived are both symmetric positive definite, so we may apply the {\\scshape minres} algorithm with a block diagonal preconditioner\nof the form\n\\begin{equation*}\n\\ \\mathcal{P}_D=\\left[\\begin{array}{cccc}\nM+\\Theta_y & 0 & 0 & 0 \\\\\n0 & \\alpha{}D_M+\\Theta_w & -\\alpha{}D_M & 0 \\\\\n0 & -\\alpha{}D_M & \\alpha{}D_M+\\Theta_v & 0 \\\\\n0 & 0 & 0 & \\widehat{S} \\\\\n\\end{array}\\right],\n\\end{equation*}\nwith $\\widehat{S}$ defined as above.\n\n\nIt is also possible to exploit the often faster convergence achieved by block triangular preconditioners within {\\scshape gmres}, and utilize the block triangular preconditioner:\n\\begin{equation*}\n\\ \\mathcal{P}_T=\\left[\\begin{array}{cccc}\nM+\\Theta_y & 0 & 0 & 0 \\\\\n0 & \\alpha{}D_M+\\Theta_w & -\\alpha{}D_M & 0 \\\\\n0 & -\\alpha{}D_M & \\alpha{}D_M+\\Theta_v & 0 \\\\\nL & -M & M & -\\widehat{S} \\\\\n\\end{array}\\right].\n\\end{equation*}\n\n\n\n\\subsection{Preconditioner for Partial Observations}\n\\label{subsec::po}\nIn many applications, the quantity of interest is the difference between the state variable and the desired state on only a certain region of the domain,\ni.e. $\\Omega_1\\subset\\Omega$, in which case one would instead consider the term $\\frac{1}{2}\\|\\rm y-\\rm y_d\\|^2_{L^2(\\Omega_1)}$ within the cost functional \\eqref{pb}.\nThis results in a mass matrix where many of the eigenvalues are equal to zero. In more detail, the matrix $M+\\Theta_y$ is in practice $M_s+\\Theta_y$, where $M_s$ is a (singular) mass matrix acting on a subdomain, although for the purposes of our working we retain the existing notation. Hence, the standard saddle-point preconditioning\napproach cannot be straightforwardly applied, due to the $(1,1)$-block being singular. 
One strategy is to replace the singular mass matrix with a slightly perturbed \nversion in the preconditioning step. However, it is not straightforward to estimate the strength of this perturbation and its effect on the preconditioner.\n\nAnother alternative is presented in \\cite{BenDOS15,herzog2018fast}, and we follow this strategy here. \nThis method is tailored to the case where\nthe leading block of the saddle-point system is highly singular (meaning a large proportion of its eigenvalues are zero), due to the fact that the observations \nare placed only on parts of the domain.\nIn more detail, we consider the matrix system\n\\begin{equation}\\label{MatrixPartial}\n\\left [\\begin{array}{cc c}\n M+\\Theta_y&0&L^T\\\\\n 0&\\alpha \\widetilde M + \\Theta_z &-\\bar M^T \\\\\n L&-\\bar M&0\\\\ \n\\end{array} \\right],\n\\end{equation}\nwith $M+\\Theta_y$ often a highly singular matrix, as $\\Theta_y=0$ when no state constraints are present. \nThe mass matrix used to construct $\\widetilde M$ is then defined on the control domain, which can be the whole domain or part of it.\nWe start by considering the following permutation of the matrix to be solved:\n\\begin{equation}\\label{Permuted}\n\t\\Pi\n\\left [\\begin{array}{ccc}\n M+\\Theta_y&0&L^T\\\\\n 0&\\alpha \\widetilde M + \\Theta_z &-\\bar M^T \\\\\n L&-\\bar M&0\\\\ \n\\end{array} \\right]\n=\n\\left [\\begin{array}{ccc}\nL&-\\bar M&0\\\\ \n0&\\alpha \\widetilde M + \\Theta_z &-\\bar M^T \\\\\nM+\\Theta_y&0&L^T\\\\ \n\\end{array} \\right]\n,\n\\end{equation}\nwhere \n\\begin{equation*}\n\t\\Pi:=\n\t\\left[\n\t\t\\begin{array}{ccc}\n\t\t\t0&0&I\\\\\n\t\t\t0&I&0\\\\\n\t\t\tI&0&0\\\\\n\t\t\\end{array}\n\t\\right].\n\\end{equation*}\nThe matrix \\eqref{Permuted} is a block matrix of the form \\eqref{ABCD} with\n\\begin{equation*}\n\t\\ A=\\left[\n\t\t\\begin{array}{cc}\nL&-\\bar M\\\\ \n0&\\alpha \\widetilde M + 
\\Theta_z\\\\\n\t\t\\end{array}\n\t\\right],\\quad\\quad{}B_1=\\left[\n\t\t\\begin{array}{cc}\n\t\t\t0\\\\\n\t\t\t-\\bar M^T \\\\\n\t\t\\end{array}\n\t\\right],\\quad\\quad{}B_2=\\left[\n\t\t\\begin{array}{cc}\n\t\t\tM+\\Theta_y&0\\\\\n\t\t\\end{array}\n\t\\right],\\quad\\quad{}C=\\left[\n\t\t\\begin{array}{c}\n\t\t\tL^T \\\\\n\t\t\\end{array}\n\t\\right],\n\\end{equation*}\nwhich is a modification to a general saddle-point system, with non-symmetric extra-diagonal blocks and a non-zero $(2,2)$-block given by $L^T$. \nBased on this we propose the following preconditioner of block-triangular type for the permuted system:\n\\begin{equation*}\n\t\\widetilde{\\mathcal{P}}=\n\t\\left[\n\t\t\\begin{array}{ccc}\nL&-\\bar M&0\\\\ \n0&\\alpha \\widetilde M + \\Theta_z &0\\\\\nM+\\Theta_y&0&-\\widehat{S}_{\\Pi}\\\\\n\t\t\\end{array}\n\t\\right],\n\\end{equation*}\nwith the inverse then given by\n\\begin{equation*}\n\t\\widetilde{\\mathcal{P}}^{-1}=\n\t\\left[\n\t\t\\begin{array}{ccc}\n\t\t\tL^{-1}&L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}&0\\\\\n\t\t\t0&(\\alpha \\widetilde M + \\Theta_z)^{-1} &0\\\\\n\t\t\t\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}&\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}&-\\widehat{S}_{\\Pi}^{-1}\\\\\n\t\t\\end{array}\n\t\\right].\n\\end{equation*}\nThe matrix $\\widehat{S}_{\\Pi}$ is designed to approximate the Schur complement $S_{\\Pi}$ of the \\emph{permuted matrix system}, that is\n\\begin{equation*}\n\t\\widehat{S}_{\\Pi}\\approx\n\tS_{\\Pi}\n\t=L^{T}+(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\bar M^T.\n\\end{equation*}\nWe now propose a preconditioner $\\mathcal{P}_{\\Pi}$ for the original matrix \\eqref{MatrixPartial}, such that $\\mathcal{P}_{\\Pi}^{-1}=\\widetilde{\\mathcal{P}}^{-1}\\Pi$, and we therefore obtain\n\\begin{equation}\n\t\\label{eq:prec1}\n\t\\mathcal{P}_{\\Pi}^{-1}=\n\\left[\n\\begin{array}{ccc}\n0&L^{-1}\\bar M(\\alpha \\widetilde M + 
\\Theta_z )^{-1}&L^{-1}\\\\\n0&(\\alpha \\widetilde M + \\Theta_z)^{-1} &0\\\\\n-\\widehat{S}_{\\Pi}^{-1}&\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}&\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\\\\n\\end{array}\n\\right].\n\\end{equation}\nApplying the preconditioner is in fact more straightforward than it currently appears. To compute a vector $\\mathbf{v}=\\mathcal{P}_{\\Pi}^{-1}\\mathbf{w}$, where $\\mathbf{v}:=\\left[\\mathbf{v}_{1}^T,~\\mathbf{v}_{2}^T,~\\mathbf{v}_{3}^T\\right]^T$, $\\mathbf{w}:=\\left[\\mathbf{w}_{1}^T,~\\mathbf{w}_{2}^T,~\\mathbf{w}_{3}^T\\right]^T$, we first observe from the second block of $\\mathcal{P}_{\\Pi}^{-1}$ that\n\\begin{equation*}\n\t(\\alpha \\widetilde M + \\Theta_z)^{-1}\\mathbf{w}_2=\\mathbf{v}_2.\n\\end{equation*}\nThe first equation derived from \\eqref{eq:prec1} then gives that\n\\begin{align*}\nL^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\mathbf{w}_2+L^{-1}\\mathbf{w}_3&=\\mathbf{v}_1\\\\\n\\Rightarrow\\hspace{7.2em}L^{-1}(\\bar M\\mathbf{v}_2+\\mathbf{w}_3)&=\\mathbf{v}_1,\n\\end{align*}\nand applying this within the last equation in \\eqref{eq:prec1} that\n\\begin{align*}\n-\\widehat{S}_{\\Pi}^{-1}\\mathbf{w}_1+\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\mathbf{w}_2+\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\mathbf{w}_3&=\\mathbf{v}_3\\\\\n\\Rightarrow\\hspace{5.1em}-\\widehat{S}_{\\Pi}^{-1}\\mathbf{w}_1+\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)\\big(L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\mathbf{w}_2+L^{-1}\\mathbf{w}_3\\big)&=\\mathbf{v}_3\\\\\n\\Rightarrow\\hspace{20.95em}\\widehat{S}_{\\Pi}^{-1}\\big((M+\\Theta_y)\\mathbf{v}_1-\\mathbf{w}_1\\big)&=\\mathbf{v}_3.\n\\end{align*}\n\nThus we need to approximately solve with $\\widehat{S}_{\\Pi}$, $L$, and $\\alpha \\widetilde M + \\Theta_z$, which are all invertible matrices, to apply the preconditioner. 
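In code, the three-step application of $\\mathcal{P}_{\\Pi}^{-1}$ derived above can be sketched as follows. This is a minimal pure-Python illustration using scalar stand-ins for the blocks (all names and values here are hypothetical; in practice the blocks are large sparse matrices and each block solve is performed only approximately):

```python
# Scalar stand-ins (hypothetical values) for the blocks of the permuted
# system: L, bar M, alpha*Mtilde + Theta_z, M + Theta_y, and S_hat_Pi.
L_blk = 4.0   # discretized PDE operator L
Mbar  = 2.0   # bar M
aMz   = 3.0   # alpha * Mtilde + Theta_z
MTy   = 1.5   # M + Theta_y
S_hat = 5.0   # Schur complement approximation S_hat_Pi

w1, w2, w3 = 1.0, 2.0, 3.0   # right-hand side w

# Step 1: v2 = (alpha*Mtilde + Theta_z)^{-1} w2
v2 = w2 / aMz
# Step 2: v1 = L^{-1} (bar M v2 + w3)
v1 = (Mbar * v2 + w3) / L_blk
# Step 3: v3 = S_hat^{-1} ((M + Theta_y) v1 - w1)
v3 = (MTy * v1 - w1) / S_hat

# Check: the block triangular matrix P_tilde applied to v, with its block
# rows then reversed by Pi, must reproduce w (since P_Pi = Pi * P_tilde).
Pt_v = [L_blk * v1 - Mbar * v2, aMz * v2, MTy * v1 - S_hat * v3]
assert all(abs(a - b) < 1e-12 for a, b in zip(reversed(Pt_v), (w1, w2, w3)))
```

The final check confirms the defining relation $\\Pi\\widetilde{\\mathcal{P}}\\mathbf{v}=\\mathbf{w}$, i.e. $\\mathcal{P}_{\\Pi}\\mathbf{v}=\\mathbf{w}$, so only one solve with each of the three matrices is required per application.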
We now briefly discuss our choice of $\\widehat{S}_{\\Pi}.$ We suggest a matching strategy as above, to write\n\\begin{align*}\nS_{\\Pi}=L^{T}+(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\bar M^T\\approx\\big(L^{T}+{M}_l\\big)L^{-1}\\big(L+{M}_r\\big)=\\widehat{S}_{\\Pi},\n\\end{align*}\nwhere \n\\begin{equation*}\n\t{M}_lL^{-1}{M}_r\\approx(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\bar M^T.\n\\end{equation*}\nSuch an approximation may be achieved if, for example, \n\\begin{equation*}\n\t{M}_l=M+\\Theta_y,\\quad\\quad{M}_r\\approx \\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\bar M^T.\n\\end{equation*}\nAlternatively, we can use a matrix based on the approximation $\\widehat{M}$ from the previous section to approximate ${M}_r.$\nWe thus build such approximations into our preconditioner $\\mathcal{P}_{\\Pi}$, although further tailoring of such preconditioners is a subject of future investigation.\n\n\n\n\\subsection{Time-Dependent Problems}\n\\label{subsec::td}\nTo demonstrate the applicability of our preconditioners to time-dependent PDE-constrained optimization problems, we now consider the minimization of the cost functional\n\\begin{equation*}\n\\ \\mathcal{F}(\\rm y,\\rm u)=\\frac{1}{2}\\|\\rm y-\\rm y_d\\|^ 2_{L^2(\\Omega\\times(0,T))}+ \\frac{\\alpha}{2}\\|\\rm u\\|^ 2_{L^2(\\Omega\\times(0,T))} + \\beta\\|u\\|_{L^1(\\Omega\\times(0,T))},\n\\end{equation*}\nsubject to the PDE $\\rm y_{t}-\\Delta\\rm y=\\rm u+\\rm f$ on the space-time interval $\\Omega\\times(0,T)$, along with suitable boundary and initial conditions.\n\n\nWith the backward Euler method used to handle the time derivative, the matrix within the system to be solved is of the form\n\\begin{equation}\\label{TimeDeptSystem}\n \\mathcal{A} = \\left [\\begin{array}{c c c }\n\n \\tau \\mathcal{M}_c + \\Theta_y & 0 & \\mathcal{L}^T \\\\\n 0 & \\alpha\\tau\\widetilde{\\mathcal{M}}_c + \\Theta_z & -\\tau\\bar{\\mathcal{M}}^T \\\\\n \\mathcal{L} & 
-\\tau\\bar{\\mathcal{M}} & 0 \\\\ \n \\end{array} \\right],\n\\end{equation}\nwith $\\tau$ the time-step used.\n\nThe matrix $\\mathcal{M}_c$ is a block diagonal matrix consisting of multiples of mass matrices on each block diagonal corresponding to each time-step, depending on the quadrature rule used to approximate the cost functional in the time domain. For example, if a trapezoidal rule is used, then $\\mathcal{M}_c=\\text{blkdiag}(\\frac{1}{2}M,M,...,M,\\frac{1}{2}M)$, and if a rectangle rule is used, then $\\mathcal{M}_c=\\mathcal{M}:=\\text{blkdiag}(M,M,...,M,M)$. Further,\n\\begin{equation*}\n\\ \\widetilde{\\mathcal{M}}_c=\\left[\\begin{array}{cc}\n\\mathcal{M}_c & -\\mathcal{M}_c \\\\\n-\\mathcal{M}_c & \\mathcal{M}_c \\\\\n\\end{array}\\right],\\quad\\quad\\bar{\\mathcal{M}}=\\left[\\begin{array}{cc}\n\\mathcal{M} & -\\mathcal{M} \\\\\n\\end{array}\\right],\n\\end{equation*}\nand $\\mathcal{L}$ is defined as follows (with its dimension equal to that of $L$, multiplied by the number of time-steps):\n\\begin{equation*}\n\\ \\mathcal{L}=\\left[\\begin{array}{cccc}\nM+\\tau{}L & & & \\\\\n-M & M+\\tau{}L & & \\\\\n & \\ddots & \\ddots & \\\\\n & & -M & M+\\tau{}L \\\\\n\\end{array}\\right].\n\\end{equation*}\n\nWe now consider saddle-point preconditioners for the matrix \\eqref{TimeDeptSystem}. 
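For concreteness, the lower block-bidiagonal structure of $\\mathcal{L}$ (diagonal blocks $M+\\tau{}L$, subdiagonal blocks $-M$) can be assembled as in the following minimal sketch; the dense list-of-lists stand-ins for $M$ and $L$ are purely illustrative, not actual finite element matrices:

```python
def assemble_calL(M, L, tau, nt):
    """Assemble the lower block-bidiagonal matrix with diagonal blocks
    M + tau*L and subdiagonal blocks -M, for nt time-steps."""
    n = len(M)
    A = [[0.0] * (n * nt) for _ in range(n * nt)]
    for k in range(nt):
        for i in range(n):
            for j in range(n):
                A[k * n + i][k * n + j] = M[i][j] + tau * L[i][j]
                if k > 0:  # coupling of time-step k to step k - 1
                    A[k * n + i][(k - 1) * n + j] = -M[i][j]
    return A

# Tiny example: 1x1 "mass" and "stiffness" blocks, 3 time-steps.
calL = assemble_calL([[2.0]], [[1.0]], tau=0.5, nt=3)
# calL == [[2.5, 0.0, 0.0], [-2.0, 2.5, 0.0], [0.0, -2.0, 2.5]]
```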
We may apply preconditioners of the form\n\\begin{align*}\n\\ \\mathcal{P}_{D}={}&\\left [\\begin{array}{c c c c}\n\\tau \\mathcal{M}_c + \\Theta_y & 0 & 0 & 0 \\\\\n0 & \\alpha\\tau\\mathcal{D}_{M_c} + \\Theta_w & -\\alpha\\tau\\mathcal{D}_{M_c} & 0 \\\\\n0 & -\\alpha\\tau\\mathcal{D}_{M_c} & \\alpha\\tau\\mathcal{D}_{M_c} + \\Theta_v & 0 \\\\\n0 & 0 & 0 & \\widehat{\\mathcal{S}} \\\\ \n\\end{array} \\right] \\\\\n\\ \\text{or}\\quad\\mathcal{P}_{T}={}&\\left [\\begin{array}{c c c c}\n\\tau \\mathcal{M}_c + \\Theta_y & 0 & 0 & 0 \\\\\n0 & \\alpha\\tau\\mathcal{D}_{M_c} + \\Theta_w & -\\alpha\\tau\\mathcal{D}_{M_c} & 0 \\\\\n0 & -\\alpha\\tau\\mathcal{D}_{M_c} & \\alpha\\tau\\mathcal{D}_{M_c} + \\Theta_v & 0 \\\\\n\\mathcal{L} & -\\tau\\mathcal{M} & \\tau\\mathcal{M} & -\\widehat{\\mathcal{S}} \\\\ \n\\end{array} \\right],\n\\end{align*}\nwhere $\\mathcal{D}_{M_c}:=\\text{diag}(\\mathcal{M}_c)$, the matrix $\\tau\\mathcal{M}_{c}+\\Theta_y$ can be approximately inverted by applying Chebyshev semi-iteration to the matrices arising at each time-step, and $\\widehat{\\mathcal{S}}$ is an approximation of the Schur complement:\n\\begin{equation*}\n\\ \\mathcal{S}=\\mathcal{L}(\\tau\\mathcal{M}_{c}+\\Theta_y)^{-1}\\mathcal{L}^{T}+\\frac{\\tau}{\\alpha}\\mathcal{M}\\mathcal{M}_c^{-1}\\mathcal{M}-\\frac{1}{\\alpha^2}\\mathcal{M}\\mathcal{M}_c^{-1}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha\\tau}\\mathcal{M}_c^{-1}\\right)\\mathcal{M}_c^{-1}\\mathcal{M}.\n\\end{equation*}\nWe select the approximation\n\\begin{equation*}\n\\ \\widehat{\\mathcal{S}}=\\left(\\mathcal{L}+\\widehat{\\mathcal{M}}\\right)(\\tau\\mathcal{M}_{c}+\\Theta_y)^{-1}\\left(\\mathcal{L}+\\widehat{\\mathcal{M}}\\right)^{T},\n\\end{equation*}\nusing the same reasoning as in Section \\ref{sec:Schur}, where\n\\begin{equation*}\n\\ 
\\widehat{\\mathcal{M}}=\\left[\\frac{\\tau}{\\alpha}\\mathcal{D}_{M}^2\\mathcal{D}_{M_c}^{-1}-\\frac{1}{\\alpha^2}\\mathcal{D}_{M}^2\\mathcal{D}_{M_c}^{-2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha\\tau}\\mathcal{D}_{M_c}^{-1}\\right)\\right]^{1\/2}(\\tau\\mathcal{D}_{M_c}+\\Theta_y)^{1\/2},\n\\end{equation*}\nwith $\\mathcal{D}_{M}:=\\text{diag}(\\mathcal{M})$. Within the numerical experiments of the forthcoming section, we apply the preconditioning strategy that arises from the working above.\n\n\n\\section{Numerical Experiments}\\label{exp}\n\n\nWe now implement the Interior Point algorithm described in the Appendix, using {\\scshape matlab}\\textsuperscript{\\textregistered} R2017b\non an Intel\\textsuperscript{\\textregistered} Xeon\\textsuperscript{\\textregistered} computer with a 2.40GHz processor, and 250GB of RAM.\nWithin the algorithm we employ the preconditioned {\\scshape minres}\\ \\cite{minres} and {\\scshape gmres} \\cite{gmres} methods with the following preconditioners:\n\\begin{itemize}\n\\item \\ipmbt: {\\sc gmres} and block triangular preconditioner $\\mathcal{P}_T,$ \n\\item {\\sc ipm-minres-${\\cal P}_D$} : {\\sc minres} with block diagonal preconditioner $\\mathcal{P}_D,$\n\\item {\\sc ipm-gmres-${\\cal P}_\\Pi$} : {\\sc gmres} and block triangular preconditioner $\\mathcal{P}_\\Pi.$\n\\end{itemize}\nRegarding the parameters listed in the Appendix, we use\n$\\alpha_0 = 0.995$ and $\\epsilon_p=\\epsilon_d=\\epsilon_c = 10^{-6}$.\nFor the barrier reduction parameter $\\sigma$, we consider for each class of\nproblems tested a value that ensures a smooth decrease in the complementarity measure\n$\\xi^k_c$ in (\\ref{gap}), that is to say $\\|\\xi^k_c\\| = \\mathcal{O}(\\mu^k)$. 
This way, the number of \nnonlinear (Interior Point) iterations typically depends only on $\\sigma$.\nWe solve the linear matrix systems to a (relative unpreconditioned residual norm) tolerance of $10^{-10}$.\n\n\n\\begin{figure}\n\\begin{center}\n\t\\setlength\\figureheight{0.225\\linewidth} \n\t\\setlength\\figurewidth{0.225\\linewidth} \n \\subfloat[Control $\\rm u$, $\\beta=5\\times10^{-2}$]{\n\t\\input{figures\/controlPoissonbeta5e_2.tikz}\n\t}\n\t\\subfloat[Control $\\rm u$, $\\beta=5\\times10^{-3}$]{\n\t\\input{figures\/controlPoissonbeta5e_3.tikz}\n\t}\n \\end{center}\n\\caption{Poisson problem: computed solutions of the control $\\rm u$, for two values of $\\beta$.} \\label{fig::poissonu}\n\\end{figure}\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{ccccccc}\n\\toprule\n & \\multicolumn{ 2}{c}{$\\beta = 10^{-1}$} & \\multicolumn{ 2}{c}{$\\beta = 10^{-2}$} & \\multicolumn{ 2}{c}{$\\beta = 10^{-3}$} \\\\\n\\midrule\n & {\\sc sparsity} & $\\|u\\|_1$ & {\\sc sparsity} & $\\|u\\|_1$ & {\\sc sparsity} & $\\|u\\|_1$ \\\\\n\\midrule\n$\\alpha = 10^{-2}$ & 99\\% & 3 & 15\\% & $7\\times 10^2$ & 12\\% & $1\\times 10^3$ \\\\\n\n$\\alpha = 10^{-4}$ & 100\\% & 2 & 38\\% & $9\\times 10^2$ & 12\\% & $1\\times 10^3$ \\\\\n\n$\\alpha = 10^{-6}$ & 100\\% & 2 & 39\\% & $9\\times 10^2$ & 12\\% & $1\\times 10^3$ \\\\\n\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Poisson problem: sparsity features of the computed optimal control, for a range of $\\alpha$ and $\\beta$, and mesh-size $h = 2^{-5}$. \n\\label{tab::sparsity}}\n\\end{table}\n\n\nWe apply the {\\scshape ifiss} software package \\cite{ifissmatlab,ifisslink} to build\nthe relevant finite element matrices for the 2D examples shown in this section, and use the\n{\\scshape deal.II} library \\cite{dealii} in the 3D case. 
In each case we utilize $Q1$ finite elements\nfor the state, control, and adjoint variables.\n\n\nWe apply $20$ steps of Chebyshev semi-iteration to approximate the inverse of mass matrices, as well as mass matrices plus positive diagonal matrices, whenever they arise within the preconditioners.\nApplying the approximate inverses of the Schur complement approximations derived for each of our preconditioners\nrequires solving for matrices of the form $L + \\widehat M$ and its transpose.\nFor this we utilize $3$ V-cycles of the algebraic multigrid routine {\\sc hsl-mi20} \\cite{Boyle2007},\nwith a Gauss--Seidel coarse solver, and apply $5$ steps of pre- and post-smoothing.\nFor time-dependent problems, we also use Chebyshev semi-iteration and algebraic multigrid within the preconditioner, \nbut are required to apply the methods to matrices arising from each time-step.\nIn all the forthcoming tables of results, we report the average number of linear ({\\scshape minres} or {\\scshape gmres}) iterations {\\sc av-li},\nand the average CPU time {\\sc av-cpu}. The overall number of nonlinear (Interior Point) iterations {\\sc nli} is specified in the table captions. 
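The Chebyshev semi-iteration used here for mass-matrix solves admits a compact implementation via the classical three-term recurrence. The sketch below is a minimal pure-Python version (the routine name and dense list-of-lists format are our own; production codes operate on sparse matrices), applied to a scaled 1D linear-element mass matrix, for which the eigenvalues of $D^{-1}M$, with $D$ the diagonal of $M$, are known to lie in $[1\/2,3\/2]$:

```python
def cheb_semi_iteration(A, b, lmin, lmax, steps):
    """Chebyshev semi-iteration for A x = b, accelerating the Jacobi
    (diagonal) splitting, given bounds [lmin, lmax] on the eigenvalues
    of D^{-1} A, where D = diag(A).  Dense list-of-lists format."""
    n = len(b)
    D = [A[i][i] for i in range(n)]
    theta = 0.5 * (lmax + lmin)    # center of the eigenvalue interval
    delta = 0.5 * (lmax - lmin)    # half-width of the interval
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = [0.0] * n
    r = b[:]                       # residual for zero initial guess
    d = [r[i] / (D[i] * theta) for i in range(n)]
    for _ in range(steps):
        x = [x[i] + d[i] for i in range(n)]
        Ad = [sum(A[i][j] * d[j] for j in range(n)) for i in range(n)]
        r = [r[i] - Ad[i] for i in range(n)]
        rho_next = 1.0 / (2.0 * sigma1 - rho)
        d = [rho_next * rho * d[i] + 2.0 * rho_next / delta * r[i] / D[i]
             for i in range(n)]
        rho = rho_next
    return x

# Scaled 1D linear-element mass matrix; the eigenvalues of D^{-1}M lie in
# [1/2, 3/2] independently of the mesh size.
M = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
x = cheb_semi_iteration(M, [1.0, 2.0, 3.0], 0.5, 1.5, 20)
```

With $20$ steps the iterate agrees with the exact solution to well below the Krylov solver tolerance, which is why a fixed number of steps suffices inside the preconditioner.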
\nWe believe these demonstrate the effectiveness of our proposed Interior Point and preconditioning approaches, as well as the robustness of the\noverall method, for a range of PDEs, matrix dimensions, and parameters involved in the problem set-up.\n\n\n\\subsection{A Poisson Problem}\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrrrrrr}\n\\toprule\n & & \\multicolumn{ 2}{c}{\\ipmbt } & \\multicolumn{ 2}{|c }{{\\sc ipm-minres-${\\cal P}_D$} } \\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\n\\multicolumn{ 1}{c}{6} & $-2$ & 8.9 & 0.2 & 19.4 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.2 & 0.2 & 16.3 & 0.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 7.1 & 0.2 & 14.6 & 0.3 \\\\\n\n\\multicolumn{ 1}{c}{7} & $-2$ & 9.0 & 0.8 & 19.5 & 1.6 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.1 & 0.7 & 15.8 & 1.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 6.8 & 0.6 & 14.4 & 1.4 \\\\\n\n\\multicolumn{ 1}{c}{8} & $-2$ & 6.9 & 2.5 & 14.3 & 5.0 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 6.5 & 2.4 & 13.4 & 4.7 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 6.5 & 2.4 & 12.8 & 4.5 \\\\\n\n\\multicolumn{ 1}{c}{9} & $-2$ & 7.9 & 12.4 & 13.8 & 21.8 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.6 & 12.0 & 12.7 & 20.2 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 7.5 & 11.9 & 12.3 & 20.0 \\\\\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Poisson problem: average Krylov iterations and CPU times for problem with control constraints, for a range of $h$ and $\\alpha$, $\\beta = 10^{-2}$, $\\sigma = 0.2$, $\\textsc{nli} = 9$.\n\\label{tab::resultspoisson1}}\n\\end{table}\n\nWe first examine an optimization problem involving Poisson's equation, investigating the behavior of the IPM and our proposed preconditioners. 
\n\n\n\\subsection*{Two-Dimensional Case}\nWe focus initially on the performance of our solvers for the two-dimensional Poisson problem, employing both \\ipmbt and {\\sc ipm-minres-${\\cal P}_D$} methods, as well as considering some sparsity issues.\nWe set the box constraints for the control to be $\\rm u_a=-2$, $\\rm u_b=1.5,$ and the desired state\n$\\rm y_d=\\sin(\\pi {\\rm x_1})\\sin(\\pi {\\rm x_2}) $, with ${\\rm x}_i$ denoting the $i$th spatial variable. Figure \\ref{fig::poissonu} displays the computed optimal controls for this problem for a particular set-up on the domain $\\Omega=(0,1)^2$, for both $\\beta=5\\times10^{-2}$ and $\\beta=5\\times10^{-3}$,\nas well as $\\alpha = 10^{-2}$. Table \\ref{tab::sparsity} reports the level of sparsity in the computed solution, as well as its \n$\\ell_1$-norm, when varying the regularization parameters $\\alpha$ and $\\beta$. The value of {\\sc sparsity} in the table is computed by\nmeasuring the percentage of components of $u$ which are below a certain threshold ($10^{-2}$ in our case),\nsee e.g. \\cite{fpcas}. We observe that our algorithm reliably computes sparse\ncontrols, and as expected the sparsity of the solution increases as $\\beta$ is increased.\n\nIn Table \\ref{tab::resultspoisson1} we compare the performance of the preconditioners $\\mathcal{P}_T$ and $\\mathcal{P}_D$ within the IPM, varying the \nspatial mesh-size $h = 2^{-\\ell},\\ \\ell = 6, \\dots, 9$, as well as the regularization parameter $\\alpha$, while fixing the value $\\beta = 10^{-2}$ (Table \\ref{tab::sparsity} indicates that this value of $\\beta$ gives rise to the most computationally interesting case). We set $\\sigma = 0.2$, and\ntake $9$ Interior Point iterations with a final value $\\mu^k = 5 \\times 10^{-7}$. 
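The {\\sc sparsity} measure reported in Table \\ref{tab::sparsity} can be computed in a few lines. The following sketch is a minimal pure-Python version; the example control vector is purely illustrative, and comparing entries in absolute value is an assumption on our part:

```python
def sparsity_percentage(u, threshold=1e-2):
    """Percentage of components of the control vector u whose absolute
    value falls below the given threshold (10^{-2} in our experiments)."""
    small = sum(1 for ui in u if abs(ui) < threshold)
    return 100.0 * small / len(u)

# Illustrative control vector: three near-zero entries out of four.
u = [0.0, 3e-3, -1.2, 5e-4]
assert sparsity_percentage(u) == 75.0
```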
Figure \\ref{fig::convh} shows the typical convergence behavior for the feasibilities $\\xi^k_p, \\xi^k_d$ and complementarity $\\xi^k_c$, together with \nthe decrease of $\\mu^k$ with this value of $\\sigma$.\nThe reported results demonstrate good robustness of both preconditioners with respect to both $h$ and $\\alpha$ in terms of linear iterations and\nCPU time, with \\ipmbt outperforming {\\sc ipm-minres-${\\cal P}_D$} in each measure.\nAlthough the value of {\\sc av-li} is roughly constant in both implementations, we observe that when using {\\sc ipm-minres-${\\cal P}_D$} the number of\npreconditioned {\\scshape minres} iterations slightly increases as $\\mu^k \\rightarrow 0$, as many entries of $\\Theta_{z}$ tend to zero. \nBy contrast, the number of preconditioned {\\scshape gmres} iterations hardly varies with $k$.\n\n\n\\tikzexternaldisable\n \\begin{figure}[htb]\n \\centering\n\t\\setlength\\figureheight{0.35\\linewidth} \n\t\\setlength\\figurewidth{0.45\\linewidth}\n\\input{figures\/convhist.tikz}\n \\caption{Typical convergence history of the relevant quantities $\\mu^k, \\xi^k_p, \\xi^k_d, \\xi^k_c$. \n\\label{fig::convh}}\n\\end{figure}\n\\tikzexternalenable\n\nAs a final validation of the general framework outlined, we report in Table \\ref{tab::resultspoisson2}\nresults obtained when imposing both control and state constraints within the Poisson setting described above.\nIn particular, we set $\\rm y_a=-0.1$, $\\rm y_b=0.8$, $\\rm u_a=-1$, $\\rm u_b=15$ and test the most promising implementation\nof the IPM, that is the \\ipmbt routine, while varying $h$ and $\\alpha$. 
The reported values of {\\sc av-li} confirm the robustness of\nthe preconditioning strategy proposed.\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrr}\n\\toprule\n & & \\multicolumn{ 2}{c}{\\ipmbt} \\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} \\\\% & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\\multicolumn{ 1}{c}{6} & $-2$ & 15.8 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 11.4 & 0.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 10.6 & 0.2 \\\\\n\n\\multicolumn{ 1}{c}{7} & $-2$ & 14.8 & 1.5 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 11.4 & 1.0 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 10.3 & 0.9 \\\\\n\n\\multicolumn{ 1}{c}{8} & $-2$ & 14.6 & 5.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 10.8 & 3.9 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 10.1 & 3.5 \\\\\n\n\\multicolumn{ 1}{c}{9} & $-2$ & 14.5 & 22.1 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 10.8 & 16.6 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 9.0 & 15.4 \\\\\n\n\n\\bottomrule\n\\end{tabular} \\hfill \\begin{tabular}{clrrrr}\n\\toprule\n&&\\multicolumn{ 2}{c}{{\\sc ipm-gmres-${\\cal P}_\\Pi$} }\\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\n\\multicolumn{ 1}{c}{3} & $-2$ & 10.2 & 0.04 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 11.3 & 0.05 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 11.3 & 0.05 \\\\\n\n\\multicolumn{ 1}{c}{4} & $-2$ & 11.2 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 11.3 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 11.3 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{5} & $-2$ & 15.0 & 7.2 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 15.1 & 7.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 15.1 & 7.3 \\\\\n\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{\\emph{(Left)} Poisson problem: average Krylov iterations and CPU times for problem with both control and state constraints, for a range of $h$ and $\\alpha$, $\\beta = 10^{-2}$, $\\sigma = 0.2$ ($\\textsc{nli} = 14$).\\\\\\emph{(Right)} 
Three-dimensional Poisson problem with partial observations: average Krylov iterations and CPU times for problem, for a range of $h$ and $\\alpha$, $\\beta = 10^{-3}$, $\\sigma = 0.25$ ($\\textsc{nli} = 11$).\n\\label{tab::resultspoisson2}}\n\\end{table}\n\n\n\\input{partial_rev.tex}\n\n\\subsection{A Convection--Diffusion Problem}\nWe next consider the optimal control of the convection--diffusion equation given by\n$- \\varepsilon \\Delta {\\rm y} + \\vec{\\rm w} \\cdot \\nabla {\\rm y} = {\\rm u}$\non the domain $\\Omega=(0,1)^2$, with the wind vector $\\vec{\\rm w}$ given by $\\vec{\\rm w} = \\big[{\\rm 2x_2(1-x_1^2)}, {\\rm -2x_1(1-x_2^2)}\\big]^T$, and the bounds on the control given by $\\rm u_a=-2$ and $\\rm u_b = 1.5$.\nThe desired state is here defined by\n$\\rm y_d = \\exp(\\rm -64((x_1-0.5)^2+(x_2-0.5)^2))$.\nThe discretization is again performed using $Q1$ finite elements, while also employing the Streamline Upwind Petrov--Galerkin (SUPG) \\cite{BroH82} upwinding scheme as implemented in {\\scshape ifiss}. 
The results of our scheme are given in Table \\ref{tab::resultscd1}, which again exhibit robustness with respect to $h$ and $\\alpha$, while also performing well for both values of $\\varepsilon$ tested.\n\\begin{figure}\n\\begin{center}\n\t\\setlength\\figureheight{0.225\\linewidth} \n\t\\setlength\\figurewidth{0.225\\linewidth} \n\n\t\t\\subfloat[Control $\\rm u$, $\\beta=10^{-2}$]{\n\t\\input{figures\/controlCDbeta1e_2.tikz}\n\t}\n\t\\subfloat[Control $\\rm u$, $\\beta=10^{-3}$]{\n\t\\input{figures\/controlCDbeta1e_3.tikz}\n\t}\n \\end{center}\n\\caption{Convection--diffusion problem: computed solutions of the control $\\rm u$, for two values of $\\beta$.} \\label{fig::CDu}\n\\end{figure}\n\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrrrr|rrrr}\n\\toprule\n & & \\multicolumn{ 4}{c|}{$\\varepsilon = 10^{-1}$ } & \\multicolumn{ 4}{c}{$\\varepsilon = 10^{-2}$} \\\\\n\t\t\t\t\t\\midrule\n & & \\multicolumn{ 2}{c}{\\ipmbt} & \\multicolumn{ 2}{|c }{{\\sc ipm-minres-${\\cal P}_D$} }\n\t\t\t\t\t\t\t\t\t\t\t\t & \\multicolumn{ 2}{|c}{\\ipmbt} & \\multicolumn{ 2}{|c }{{\\sc ipm-minres-${\\cal P}_D$} }\\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\\multicolumn{ 1}{c}{6} & $-2$ & 9.4 & 0.2 & 21.1 & 0.5 & 11.2 & 0.5 & 25.8 & 1.1 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 8.3 & 0.2 & 18.2 & 0.4 & 10.5 & 0.5 & 23.2 & 1.0 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 8.2 & 0.2 & 17.8 & 0.4 & 10.5 & 0.5 & 23.5 & 1.0 \\\\\n\n\\multicolumn{ 1}{c}{7} & $-2$ & 8.2 & 0.8 & 18.0 & 1.7 & 9.2 & 1.6 & 20.6 & 3.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.5 & 0.7 & 16.3 & 1.5 & 8.7 & 1.5 & 19.0 & 3.1 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 7.5 & 0.7 & 16.1 & 1.5 & 8.7 & 1.5 & 19.4 & 3.1 \\\\\n\n\\multicolumn{ 1}{c}{8} & $-2$ & 7.5 & 2.7 & 16.3 & 5.6 & 8.0 & 3.8 & 17.1 & 7.9 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.0 & 2.5 & 
15.1 & 5.2 & 7.7 & 3.7 & 16.4 & 7.5 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 7.0 & 2.5 & 14.8 & 5.1 & 7.7 & 3.7 & 16.4 & 7.5 \\\\\n\n\\multicolumn{ 1}{c}{9} & $-2$ & 7.0 & 11.2 & 14.9 & 23.0 & 7.3 & 13.1 & 15.1 & 26.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 6.7 & 11.0 & 14.2 & 22.4 & 6.8 & 12.5 & 14.4 & 25.5 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 6.7 & 11.0 & 13.9 & 21.7 & 6.8 & 12.5 & 14.5 & 25.5 \\\\\n\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Convection--diffusion problem: average Krylov iterations and CPU times for problem with control constraints, for a range of $h$ and $\\alpha$, $\\beta = 10^{-3}$, $\\sigma=0.25$ ($\\textsc{nli} = 11$) with $\\varepsilon = 10^{-1}$, and $\\sigma=0.4$ ($\\textsc{nli} = 16$) with $\\varepsilon = 10^{-2}$.\\label{tab::resultscd1}}\n\\end{table}\n\nWe now provide numerical insight into the comparison between the proposed IPM approach\nand the commonly used semismooth Newton approach \\cite{HIK02}.\nWe therefore compare \\ipmbt and the implementation \\ssnip of the global semismooth Newton method proposed for PDE-constrained optimization problems with sparsity-promoting terms in \\cite{pss17}. When using the \\ssnip approach, global convergence is attained using a nonsmooth line-search strategy\nand the linear systems arising in the linear algebra phase are solved \nusing preconditioned {\\scshape gmres}. We consider the $2\\times2$ block formulation and \nan indefinite preconditioner available in a factorized form \\cite{pss17,pst15}. 
\nSince the semismooth approach requires a diagonal mass matrix in the discretization of the complementarity\nconditions, in the experiments with \\ssnip we use a lumped mass matrix.\nTable \\ref{tab::resultscd1_new} collects results concerning the nonlinear behavior of\nthe two methods: the number of nonlinear iterations ({\\sc nli}) and the total CPU time ({\\sc tcpu}).\n\n\nIt is interesting to note that the number of nonlinear Interior Point iterations does not vary with $\\alpha$.\nIn fact, the mildly aggressive choice of barrier reduction factor $\\sigma$ yields a low number of nonlinear iterations,\n even for limiting values of $\\alpha$.\nBy contrast, \\ssnip struggles as $\\alpha \\rightarrow 0$. Furthermore, overall the \nInterior Point strategy outperforms the semismooth method in terms of total CPU time.\\\\\n\n\n\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrrrr}\n\\toprule\n & & \\multicolumn{ 2}{c}{\\ipmbt} & \\multicolumn{ 2}{|c }{\\ssnip}\\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc nli} & {\\sc tcpu} & {\\sc nli} & {\\sc tcpu} \\\\\n\\midrule\n \n\\multicolumn{ 1}{c}{6} & -2 & 11 & 2.8 & 5 & 4.2 \\\\\n\n\\multicolumn{ 1}{c}{} & -4 & 11 & 2.5 & 19 & 27.9 \\\\\n\n\\multicolumn{ 1}{c}{} & -6 & 11 & 2.4 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{} & -8 & 11 & 2.4 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{7} & -2 & 11 & 9.4 & 5 & 14.0 \\\\\n\n\\multicolumn{ 1}{c}{} & -4 & 11 & 8.7 & 18 & 101.9 \\\\\n\n\\multicolumn{ 1}{c}{} & -6 & 11 & 8.7 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{} & -8 & 11 & 9.1 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{8} & -2 & 11 & 36.6 & 5 & 43.4 \\\\\n\n\\multicolumn{ 1}{c}{} & -4 & 11 & 34.4 & 20 & 345.3 \\\\\n\n\\multicolumn{ 1}{c}{} & -6 & 11 & 33.9 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{} & -8 & 11 & 33.8 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{9} & -2 & 11 & 155.9 & 5 & 147.3 \\\\\n\n\\multicolumn{ 1}{c}{} & -4 & 11 & 149.8 & 21 & 1265.4 \\\\\n\n\\multicolumn{ 1}{c}{} & -6 & 11 & 148.9 & 
$>100$ & \\\\\n\n\\multicolumn{ 1}{c}{} & -8 & 11 & 149.6 & $>100$ & \\\\\n\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Convection--diffusion problem: comparison between \\ipmbt and \\ssnip in terms of nonlinear iterations and total CPU times for problem with control constraints, for a range of $h$ and $\\alpha$,\n $\\beta = 10^{-3}$, $\\varepsilon = 10^{-1}$. \\label{tab::resultscd1_new}}\n\\end{table}\n\n\n\n\n\\subsection{A Heat Equation Problem}\nTo demonstrate the applicability of our methodology to time-dependent problems, we now perform experiments on an optimization problem with the heat equation acting as a constraint. We utilize the implicit Euler scheme on a time interval up to $T=1$, for varying values of time-step $\\tau$, and set a time-independent desired state to be $\\rm y_d=\\sin(\\pi {\\rm x_1})\\sin(\\pi {\\rm x_2}) $. We consider a control problem with full observations, with Table \\ref{tab::resultsheat1} illustrating the performance of the Interior Point method and preconditioner $\\mathcal{P}_T$ for varying mesh-sizes and values of $\\alpha$, with fixed $\\beta=10^{-2}$. 
Considerable robustness is again achieved, in particular with respect to changes in the time-step.\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrrrrrr}\n\\toprule\n & & \\multicolumn{ 6}{c}{\\ipmbt} \\\\\n\t\t\t\t\t\\midrule\n & & \\multicolumn{ 2}{c}{$\\tau = 0.04$ } & \\multicolumn{ 2}{c}{$\\tau = 0.02$ } & \\multicolumn{ 2}{c}{$\\tau = 0.01$ } \\\\\n\t\t\t\t\t\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\\multicolumn{ 1}{c}{4} & $-2$ & 13.9 & 0.6 & 13.1 & 1.0 & 13.1 & 2.2 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 13.3 & 0.5 & 12.2 & 1.0 & 12.3 & 2.0 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 12.8 & 0.5 & 12.0 & 1.0 & 12.0 & 2.0 \\\\\n\n\\multicolumn{ 1}{c}{5} & $-2$ & 14.6 & 1.6 & 14.0 & 3.1 & 14.7 & 6.6 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 13.9 & 1.5 & 13.3 & 2.9 & 13.3 & 5.8 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 13.6 & 1.5 & 12.8 & 2.8 & 13.0 & 5.7 \\\\\n\n\\multicolumn{ 1}{c}{6} & $-2$ & 15.5 & 5.9 & 14.6 & 11.4 & 15.4 & 23.7 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 14.8 & 5.8 & 14.0 & 10.6 & 14.0 & 21.7 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 14.6 & 5.5 & 13.8 & 10.6 & 13.9 & 21.5 \\\\\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Heat equation problem: average Krylov iterations and CPU times for problem with control constraints, \nfor a range of $h$, $\\alpha$, and $\\tau$, $\\beta = 10^{-2}$, $\\sigma=0.25$ ($\\textsc{nli} = 13$). \n\\label{tab::resultsheat1}}\n\\end{table}\n\n\n\\vspace{1em}\n\n\\begin{remark} We highlight that the number of nonlinear Interior Point iterations almost does not vary with $\\alpha$, due\nto the suitable choices made for the barrier reduction factor $\\sigma$. 
In particular, in all the test cases\ndiscussed, the choice of $\\sigma$ is mildly aggressive (from $0.2$ to $0.4$ in the most difficult cases),\nyielding a low number of nonlinear iterations, even for limiting values of $\\alpha$.\nBy contrast, a semismooth Newton approach globalized with a line-search\nstrategy may perform poorly as $\\alpha \\rightarrow 0$.\n\\end{remark}\n\n\\section{Conclusions}\n\nWe have presented a new Interior Point method for PDE-constrained optimization problems that include additional box constraints on the control variable, as well as possibly the state variable, and a sparsity-promoting $\\rm L^1$-norm term for the control within the cost functional. We incorporated a splitting of the control into positive and negative parts, as well as a suitable nodal quadrature rule, to linearize the $\\rm L^1$-norm, and considered preconditioned iterative solvers for the Newton systems arising at each Interior Point iteration. Through theoretical justification for our approximations of the $(1,1)$-block and Schur complement of the Newton systems, as well as numerical experiments, we have demonstrated the effectiveness and robustness of our approach, which may be applied within symmetric and non-symmetric Krylov methods, for a range of steady and time-dependent PDE-constrained optimization problems.\n\n\\Appendix\n\\section{Interior Point Algorithm for Quadratic Programming}\\label{IPalgo}\nIn the Algorithm below, we present the structure of the Interior Point method that we apply within our numerical experiments, following the Interior Point path-following scheme described in \\cite{gondzio12}. 
It is clear that the main computational effort arises from solving the Newton system \\eqref{NewtonSystem} at each iteration.\n\n\\algo{ipm_algo}{Interior Point Algorithm for Quadratic Programming}{\\vspace{-2em}\n\\begin{align*}\n\\ &\\textbf{Parameters} \\\\\n\\ &\\quad\\quad\\alpha_{0} \\in(0,1),~~\\text{step-size factor to boundary} \\\\\n\\ &\\quad\\quad\\sigma\\in(0,1),~~\\text{barrier reduction parameter} \\\\\n\\ &\\quad\\quad\\epsilon_{p},~\\epsilon_{d},~\\epsilon_{c},~~\\text{stopping tolerances} \\\\\n\\ &\\quad\\quad\\text{Interior point method stops when }\\big\\|{\\xi}_{p}^{k}\\big\\|\\leq\\epsilon_{p},~\\big\\|{\\xi}_{d}^{k}\\big\\|\\leq\\epsilon_{d},~\\big\\|{\\xi}_{c}^{k}\\big\\|\\leq\\epsilon_{c} \\\\\n\\ &\\textbf{Initialize IPM} \\\\\n\\ &\\quad\\quad\\text{Set the initial guesses for }{y}^{0},~{z}^{0},~{p}^{0},~{\\lambda}_{y,a}^{0},~{\\lambda}_{y,b}^{0},~{\\lambda}_{z,a}^{0},~{\\lambda}_{z,b}^{0} \\\\\n\\ &\\quad\\quad\\text{Set the initial barrier parameter }\\mu^{0} \\\\\n\\ &\\quad\\quad\\text{Compute primal infeasibility } {\\xi}_{p}^{0}, \\text{ dual infeasibility } {\\xi}_{d}^{0}, \\text{ and} \n\\text{ complementarity gap }{\\xi}_{c}^{0}, \\\\\n\\ &\\quad\\quad\\quad\\quad \\text{as in \\eqref{prdu}--\\eqref{gap} with }k=0 \\\\ \n\\ &\\textbf{Interior Point Method} \\\\\n\\ &\\quad\\quad\\text{while}~~\\left(\\big\\|{\\xi}_{p}^{k}\\big\\|>\\epsilon_{p}~~\\text{or}~~\\big\\|{\\xi}_{d}^{k}\\big\\|>\\epsilon_{d}~~\\text{or}~~\\big\\|{\\xi}_{c}^{k}\\big\\|>\\epsilon_{c}\\right) \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Reduce barrier parameter}~\\mu^{k+1}=\\sigma\\mu^{k} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Solve Newton system }\\eqref{NewtonSystem}\\text{ for primal-dual Newton direction}~{\\Delta}{y},~{\\Delta}{z},~{\\Delta p} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Use }\\text{\\eqref{zupdate1}--\\eqref{zupdate4}}\\text{ to find }{\\Delta}{\\lambda}_{y,a},~{\\Delta}{\\lambda}_{y,b},~{\\Delta}{\\lambda}_{z,a},~{\\Delta}{\\lambda}_{z,b} 
\\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Find }\\alpha_{P},~\\alpha_{D}~\\text{s.t. bound constraints on primal and dual variables hold} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Set }\\alpha_{P}=\\alpha_{0}\\alpha_{P},~\\alpha_{D}=\\alpha_{0}\\alpha_{D} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Make step: }{y}^{k+1}={y}^{k}+\\alpha_{P}{\\Delta}{y},~{z}^{k+1}={z}^{k}+\\alpha_{P}{\\Delta}{z},~{p}^{k+1}={p}^{k}+\\alpha_{D}{\\Delta p} \\\\\n\\ &\\quad\\quad\\quad\\quad\\quad\\quad{\\lambda}_{y,a}^{k+1}={\\lambda}_{y,a}^{k}+\\alpha_{D}{\\Delta}{\\lambda}_{y,a}, \\ \n {\\lambda}_{y,b}^{k+1}={\\lambda}_{y,b}^{k}+\\alpha_{D}{\\Delta}{\\lambda}_{y,b} \\\\\n\\ &\\quad\\quad\\quad\\quad\\quad\\quad{\\lambda}_{z,a}^{k+1}={\\lambda}_{z,a}^{k}+\\alpha_{D}{\\Delta}{\\lambda}_{z,a}, \\ \n {\\lambda}_{z,b}^{k+1}={\\lambda}_{z,b}^{k}+\\alpha_{D}{\\Delta}{\\lambda}_{z,b} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Update infeasibilities } {\\xi}_{p}^{k+1},~{\\xi}_{d}^{k+1}, \\text{ and compute the complementarity gap } {\\xi}_{c}^{k+1} \\\\\n\\ &\\quad\\quad\\quad\\quad\\quad\\quad\\text{as in \\eqref{prdu}--\\eqref{gap}} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Set iteration number }k=k+1 \\\\\n\\ &\\quad\\quad\\text{end}\n\\end{align*}\\vspace{-1.5em}\n}\n\n\\textbf{Acknowledgments.}\nJ. W. Pearson gratefully acknowledges support from the Engineering and Physical Sciences Research Council (EPSRC) Fellowship EP\/M018857\/2, and a Fellowship from The Alan Turing Institute in London.\nM. Porcelli and M. Stoll were partially supported by the {\\em DAAD-MIUR Joint Mobility Program} 2018--2020 (Grant 57396654).\nThe work of M. 
Porcelli was also partially supported by the {\em National Group of Computing Science (GNCS-INDAM)}.\n\n\bibliographystyle{siam}\n\n\section{Introduction}\n\nIn most papers dealing with the statistical analysis of\nmeteorological data available to the authors, the suggested\nanalytical models for the observed statistical regularities in\nprecipitation are rather idealized and inadequate. For example, it is\ntraditionally assumed that the duration of a wet period (the number\nof subsequent wet days) follows the geometric distribution (see,\nfor example,~\cite{Zolina2013}), although the goodness-of-fit of this\nmodel is far from acceptable. Perhaps this prejudice is based\non the conventional interpretation of the geometric distribution in\nterms of the Bernoulli trials as the distribution of the number of\nsubsequent wet days (``successes'') till the first dry day\n(``failure''). But the framework of Bernoulli trials assumes that\nthe trials are independent, whereas a thorough statistical analysis\nof precipitation data registered at different points demonstrates\nthat the sequence of dry and wet days is not only dependent, but\neven devoid of the Markov property, so that the framework of\nBernoulli trials is absolutely inadequate for analyzing\nmeteorological data.\n\nIt turned out that the statistical regularities of the number of\nsubsequent wet days can be very reliably modeled by the negative\nbinomial distribution with the shape parameter less than one. For\nexample, in~\cite{Gulev} we analyzed meteorological data registered\nat two geographic points with very different climate: Potsdam\n(Brandenburg, Germany), with a mild climate influenced by the closeness\nof the ocean with the warm Gulfstream flow, and Elista (Kalmykia, Russia),\nwith a radically continental climate. The initial data of daily\nprecipitation in Elista and Potsdam are presented on Figures~1a and\n1b, respectively.
On these figures the horizontal axis is discrete\ntime measured in days. The vertical axis is the daily precipitation\nvolume measured in centimeters. In other words, the height of each\n``pin'' on these figures is the precipitation volume registered at\nthe corresponding day (at the corresponding point on the horizontal\naxis).\n\n\\renewcommand{\\figurename}{\\rm{Fig.}}\n\n\\begin{figure}[h]\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{DataElista_en.png}\n\\\\a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth]{DataPotsdam_en.png} \\\\\nb)}\n\\end{minipage}\n\\label{Data} \\caption{The initial data of daily precipitation in\nElista (a) and Potsdam (b).}\n\\end{figure}\n\nIn order to analyze the statistical regularities of the duration of\nwet periods this data was rearranged as shown on Figures~2a and 2b.\n\n\\begin{figure}[h]\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{ElistaDataWet_en.png}\n\\\\a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth]{PotsdamDataWet_en.png} \\\\\nb)}\n\\end{minipage}\n\\label{WetPeriod} \\caption{The durations of wet periods in Elista\n(a) and Potsdam (b).}\n\\end{figure}\n\nOn these figures the horizontal axis is the number of successive wet\nperiods. It should be mentioned that directly before and after each\nwet period there is at least one dry day, that is, successive wet\nperiods are separated by dry periods. On the vertical axis there lie\nthe durations of wet periods. In other words, the height of each\n``pin'' on these figures is the length of the corresponding wet\nperiod measured in days and the corresponding point on the\nhorizontal axis is the number of the wet period.\n\nThe samples of durations in both Elista and Potsdam were assumed\nhomogeneous and independent. 
It was demonstrated that the\nfluctuations of the numbers of successive wet days with very high\nconfidence fit the negative binomial distribution with shape\nparameter less than one (also see~\\cite{Gorshenin2017}). Figures~3a\nand~3b show the histograms constructed from the corresponding\nsamples of duration periods and the fitted negative binomial\ndistribution. In both cases the shape parameter $r$ turned out to be\nless than one. For Elista $r=0.876$, $p=0.489$, for Potsdam\n$r=0.847$, $p=0.322$.\n\n\\begin{figure}[h]\n\\begin{minipage}[h]{0.5\\textwidth}\n\\center{\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{ElistaWetPeriod_en.png}\n\\\\a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[h]{0.5\\textwidth}\n\\center{\n\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{PotsdamWetPeriod_en.png} \\\\ b)}\n\\end{minipage}\n\\label{WetHist} \\caption{The histogram of durations of wet periods\nin Elista (a) and Potsdam (b) and the fitted negative binomial\ndistribution.}\n\\end{figure}\n\nIt is worth noting that at the same time the statistical analysis\nconvincingly suggests the Pareto-type model for the distribution of\ndaily precipitation volumes, see Figures~4a and 4b. 
For comparison,\non these figures there are also presented the graphs of the best\ngamma-densities which, nevertheless, fit the histograms in a\nnoticeably worse way than the Pareto distributions.\n\n\\begin{figure}[h]\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{PrecipElista_en.png}\n\\\\a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\n\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{PrecipPotsdam_en.png} \\\\ b)}\n\\end{minipage}\n\\label{WetHist} \\caption{The histogram of daily precipitation\nvolumes in Elista (a) and Potsdam (b) and the fitted Pareto and\ngamma distributions.}\n\\end{figure}\n\nIn the same paper a schematic attempt was undertaken to explain this\nphenomenon by the fact that negative binomial distributions can be\nrepresented as mixed Poisson laws with mixing gamma-distributions.\nAs is known, the Poisson distribution is the best model for the\ndiscrete stochastic chaos~\\cite{Kingman1993} by virtue of the\nuniversal principle of non-decrease of entropy in closed systems\n(see, e. g., \\cite{GnedenkoKorolev1996, KorolevBeningShorgin2011})\nand the mixing distribution accumulates the statistical regularities\nin the influence of stochastic factors that can be assumed exogenous\nwith respect to the local system under consideration.\n\nIn the paper \\cite{Korolev2017} this explanation of the adequacy of\nthe negative binomial model was concretized. For this purpose, the\nconcept of a mixed geometric distribution introduced\nin~\\cite{Korolev2016TVP} (also see~\\cite{KorolevPoisson,\nKorolev2016}) was used. In~\\cite{Korolev2017} it was demonstrated\nthat any negative binomial distribution with shape parameter no\ngreater than one is a mixed geometric distribution (this result is\nreproduced below as Theorem 1). Thereby, a ``discrete'' analog of a\ntheorem due to L.~Gleser~\\cite{Gleser1989} was proved. 
Gleser's\ntheorem establishes that a gamma distribution with shape parameter\nno greater than one can be represented as a mixed exponential\ndistribution.\n\nThe representation of a negative binomial distribution as a mixed\ngeometric law can be interpreted in terms of the Bernoulli trials as\nfollows. First, as a result of some ``preliminary'' experiment the\nvalue of some random variable (r.v.) taking values in $[0,1]$ is\ndetermined, which is then used as the probability of success in the\nsequence of Bernoulli trials, in which the original ``unconditional''\nr.v. with the negative binomial distribution is nothing else than\nthe ``conditionally'' geometrically distributed r.v. representing\nthe number of trials up to the first failure. This makes it\npossible to assume that the sequence of wet\/dry days is not\nindependent, but is conditionally independent, with the random\nprobability of success determined by some outer stochastic\nfactors; such factors may include, for example, the seasonality or the type of the\ncause of a rainy period.\n\nThe negative binomial model for the distribution of the duration of\nwet periods makes it possible to obtain asymptotic approximations\nfor important characteristics of precipitation such as the\ndistribution of the total precipitation volume per wet period and\nthe distribution of the maximum daily precipitation volume within a\nwet period. The first of these approximations was proposed\nin~\cite{Korolev2017}, where an analog of the law of large numbers\nfor negative binomial random sums was presented, stating that the\nlimit distribution for these sums is the gamma distribution.\n\nThe construction of the second approximation is the target of the\npresent paper.\n\nThe paper is organized as follows. Definitions and notation are\nintroduced in Section~2 which also contains some preliminary results\nproviding some theoretical grounds for the negative binomial model\nof the probability distribution of the duration of wet periods.
Main\nresults are presented and proved in Section 3 where the asymptotic\napproximation is proposed for the distribution of the maximum daily\nprecipitation volume within a wet period. Some analytic properties\nof the obtained limit distribution are described. In particular, it\nis demonstrated that under certain conditions the limit distribution\nis mixed exponential and hence, is infinitely divisible. It is shown\nthat under the same conditions the limit distribution can be\nrepresented as a scale mixture of stable or Weibull or Pareto or\nfolded normal laws. The corresponding product representations for\nthe limit random variable can be used for its computer simulation.\nSeveral methods for the statistical estimation of the parameters of\nthis distribution are proposed in Section 4. Section 5 contains the\nresults of fitting the distribution proposed in Section 3 to real\ndata by the methods described in Section 4.\n\n\section{Preliminaries}\n\nAlthough the main objects of our interest are the probability\ndistributions, for convenience and brevity in what follows we will\nexpound our results in terms of r.v:s with the corresponding\ndistributions assuming that all the r.v:s under consideration are\ndefined on one and the same probability space\n$(\Omega,\,\mathfrak{F},\,{\sf P})$.\n\nIn the paper, conventional notation is used. The symbols $\stackrel{d}{=}$ and\n$\Longrightarrow$ denote the coincidence of distributions and\nconvergence in distribution, respectively. The integer and\nfractional parts of a number $z$ will be respectively denoted $[z]$\nand $\{z\}$.\n\nA r.v. having the gamma distribution with shape parameter $r>0$ and\nscale parameter $\lambda>0$ will be denoted $G_{r,\lambda}$,\n$$\n{\sf P}(G_{r,\lambda}<x)=\int_{0}^{x}g(z;r,\lambda)dz,\ \ \\ng(x;r,\lambda)=\frac{\lambda^{r}x^{r-1}}{\Gamma(r)}e^{-\lambda x},\ \ \ x>0.\n$$\n\nIn this notation, obviously, $G_{1,1}$ is a r.v. with the standard\nexponential distribution: ${\sf P}(G_{1,1}<x)=1-e^{-x}$, $x\ge0$. The\ngamma distribution is a particular case of generalized gamma\ndistributions (GG-distributions) defined by the density\n$$\ng^*(x;r,\gamma,\lambda)=\frac{|\gamma|\lambda^{r}x^{\gamma r-1}}{\Gamma(r)}e^{-\lambda x^{\gamma}},\ \ \ x>0,\n$$\nwith $\gamma\neq0$, $\lambda>0$, $r>0$.\n\nThe properties of GG-distributions are described in \cite{Stacy1962,\nKorolevZaks2013}. A r.v.
with the density $g^*(x;r,\gamma,\lambda)$\nwill be denoted $G^*_{r,\gamma,\lambda}$. It can be easily made sure\nthat\n\begin{equation}\label{GG}\nG^*_{r,\gamma,\lambda}\stackrel{d}{=} G_{r,\lambda}^{1\/\gamma}.\n\end{equation}\nFor a r.v. with the Weibull distribution, a particular case of\nGG-distributions corresponding to the density $g^*(x;1,\gamma,1)$\nand the distribution function (d.f.)\n$\big[1-e^{-x^{\gamma}}\big]{\bf 1}(x\ge0)$, we will use a special\nnotation $W_{\gamma}$. Thus, $G_{1,1}\stackrel{d}{=} W_1$. It is easy to see\nthat\n\begin{equation}\label{Weibull}\nW_1^{1\/\gamma}\stackrel{d}{=} W_{\gamma}.\n\end{equation}\nA r.v. with the standard normal d.f. $\Phi(x)$ will be denoted $X$,\n$$\n{\sf P}(X<x)=\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-z^{2}\/2}dz,\ \ \ x\in\mathbb{R}.\n$$\nA r.v. $S_{\gamma,1}$ having the one-sided strictly stable\ndistribution with characteristic exponent $\gamma\in(0,1]$ is defined\nby the Laplace transform ${\sf E}e^{-sS_{\gamma,1}}=e^{-s^{\gamma}}$,\n$s\ge0$. For $\gamma\in(0,1)$, the ratio $R_{\gamma}$ of two\nindependent r.v:s having one and the same one-sided strictly stable\ndistribution with characteristic exponent $\gamma$ has the density\n\begin{equation}\np_{R_{\gamma}}(x)=\frac{\sin(\pi\gamma)x^{\gamma-1}}{\pi[x^{2\gamma}+2x^{\gamma}\cos(\pi\gamma)+1]},\ \ \ x>0.\n\label{Rdensity}\n\end{equation}\n\nA r.v. $N_{r,p}$ is said to have the {\it negative binomial\ndistribution} with parameters $r>0$ (``shape'') and $p\in(0,1)$\n(``success probability''), if\n$$\n{\sf P}(N_{r,p}=k)=\frac{\Gamma(r+k)}{k!\Gamma(r)}\cdot p^r(1-p)^k,\ \ \ \ k=0,1,2,...\n$$\n\nA particular case of the negative binomial distribution\ncorresponding to the value $r=1$ is the {\it geometric\ndistribution}. Let $p\in(0,1)$ and let $N_{1,p}$ be the r.v. having\nthe geometric distribution with parameter $p\,$:\n$$\n{\sf P}(N_{1,p}=k)=p(1-p)^{k},\ \ \ \ k=0,1,2,...\n$$\nThis means that for any $m\in\mathbb{N}$\n$$\n{\sf P}(N_{1,p}\ge\nm)=\sum\nolimits_{k=m}^{\infty}p(1-p)^{k}=(1-p)^{m}.\n$$\n\nLet $Y$ be a r.v. taking values in the interval $(0,1)$. Moreover,\nlet for all $p\in(0,1)$ the r.v. $Y$ and the geometrically\ndistributed r.v. $N_{1,p}$ be independent. Let $V=N_{1,Y}$, that is,\n$V(\omega)=N_{1,Y(\omega)}(\omega)$ for any $\omega\in\Omega$.
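The conditionally geometric construction $V=N_{1,Y}$ is straightforward to simulate: first draw the random ``success probability'' $Y$, then draw a geometric count with that probability. A minimal Monte Carlo sketch in Python; the $\mathrm{Beta}(2,2)$ mixing law and the seed are purely illustrative:

```python
import math
import random

rng = random.Random(12345)
n = 200_000
hits = 0
for _ in range(n):
    y = rng.betavariate(2.0, 2.0)   # random "success probability" Y
    # Inverse transform for P(V = k | Y = y) = y * (1 - y)**k, k = 0, 1, ...
    u = 1.0 - rng.random()          # uniform on (0, 1]
    v = math.floor(math.log(u) / math.log(1.0 - y))
    if v >= 2:
        hits += 1

# Conditionally, P(V >= 2 | Y) = (1 - Y)^2, so P(V >= 2) = E(1 - Y)^2,
# which equals 0.3 for the Beta(2, 2) mixing law.
print(hits / n)  # close to 0.3
```

The same two-stage scheme works for any mixing law of $Y$; only the first draw changes.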
The\ndistribution\n$$\n{\sf P}(V\ge m)=\int_{0}^{1}(1-y)^{m}d{\sf P}(Y<y),\ \ \ m\in\mathbb{N},\n$$\nis called {\it mixed geometric} (see~\cite{Korolev2016TVP}). It is\nwell known that for any $r>0$, $p\in(0,1)$ and $k\in\{0\}\bigcup\mathbb{N}$ we have\n\begin{equation}\n\frac{\Gamma(r+k)}{k!\Gamma(r)}\cdot\np^r(1-p)^k=\frac{1}{k!}\int_{0}^{\infty}e^{-z}z^kg(z;r,\mu)dz,\label{NBMixt}\n\end{equation}\nwhere $\mu=p\/(1-p)$.\n\nBased on representation \eqref{NBMixt}, in \cite{Korolev2017} it was\nproved that any negative binomial distribution with the shape\nparameter no greater than one is a mixed geometric distribution.\nNamely, the following statement was proved; it gives an analytic\nexplanation of the validity of the negative binomial model for the\nduration of wet periods measured in days (see the Introduction).\n\n\smallskip\n\n{\sc Theorem 1} \cite{Korolev2017}. {\it The negative binomial\ndistribution with parameters $r\in(0,1)$ and $p\in(0,1)$ is a mixed\ngeometric distribution$:$ for any $k\in\{0\}\bigcup\mathbb{N}$}\n$$\n\frac{\Gamma(r+k)}{k!\Gamma(r)}\cdot\np^r(1-p)^k=\int_{\mu}^{\infty}\Big(\frac{z}{z+1}\Big)\Big(1-\frac{z}{z+1}\Big)^kp(z;r,\mu)dz=\int_{p}^{1}y(1-y)^kh(y;r,p)dy,\n$$\n{\it where $\mu=p\/(1-p)$ and the probability densities $p(z;r,\mu)$\nand $h(y;r,p)$ have the forms\n$$\np(z;r,\mu)=\frac{\mu^r}{\Gamma(1-r)\Gamma(r)}\cdot\frac{\mathbf{1}(z\ge\mu)}{(z-\mu)^rz},\n$$\n$$\nh(y;r,p)=\frac{p^r}{\Gamma(1-r)\Gamma(r)}\cdot\frac{(1-y)^{r-1}\mathbf{1}(p<y<1)}{(y-p)^{r}y}.\n$$\nMoreover, if $r\in(0,1)$, $\mu>0$, $p\in(0,1)$, then the density\n$p(z;r,\mu)$ corresponds to the r.v.\n\begin{equation}\nZ_{r,\mu}=\frac{\mu(G_{r,\,1}+G_{1-r,\,1})}{G_{r,\,1}}\n\label{Zdef}\n\end{equation}\nand the density $h(y;r,p)$ corresponds to the r.v.\n$$\nY_{r,p}=\frac{p(G_{r,\,1}+G_{1-r,\,1})}{G_{r,\,1}+pG_{1-r,\,1}}.\n$$\n}\n\n\smallskip\n\nLet $P(t)$, $t\ge0$, be the standard Poisson process (homogeneous\nPoisson process with unit intensity). Then\ndistribution~\eqref{NBMixt} corresponds to the r.v.\n$N_{r,p}=P(G_{r,p\/(1-p)})$, where the r.v.
$G_{r,p\/(1-p)}$ is\nindependent of the process $P(t)$.\n\n\\section{The probability distribution of extremal precipitation}\n\nIn this section we will deduce the probability distribution of\nextremal daily precipitation within a wet period.\n\nLet $r>0$, $\\lambda>0$, $q\\in(0,1)$, $n\\in\\mathbb{N}$,\n$p_n=\\min\\{q,\\,\\lambda\/n\\}$. It is easy to make sure that\n\\begin{equation}\nn^{-1}G_{r,p_n\/(1-p_n)}\\Longrightarrow G_{r,\\lambda}\\label{2}\n\\end{equation}\nas $n\\to\\infty$.\n\n\\smallskip\n\n{\\sc Lemma 1.} {\\it Let $\\Lambda_1,\\Lambda_2,\\ldots$ be a sequence\nof positive r.v$:$s such that for any $n\\in\\mathbb{N}$ the r.v.\n$\\Lambda_n$ is independent of the Poisson process $P(t)$, $t\\ge0$.\nThe convergence\n$$\nn^{-1}P(\\Lambda_n)\\Longrightarrow \\Lambda\n$$\nas $n\\to\\infty$ to some nonnegative r.v. $\\Lambda$ takes place if\nand only if\n\\begin{equation}\nn^{-1}\\Lambda_n\\Longrightarrow \\Lambda \\label{3}\n\\end{equation}\nas $n\\to\\infty$.}\n\n\\smallskip\n\n{\\sc Proof}. This statement is a particular case of Lemma 2\nin~\\cite{Korolev1998} (also see Theorem 7.9.1 in\n\\cite{KorolevBeningShorgin2011}).\n\n\\smallskip\n\nConsider a sequence of independent identically distributed (i.i.d.)\nr.v:s $X_1,X_2,\\ldots$. Let $N_1,N_2,\\ldots$ be a sequence of\nnatural-valued r.v:s such that for each $n\\in\\mathbb{N}$ the r.v.\n$N_n$ is independent of the sequence $X_1,X_2,\\ldots$. Denote\n$M_n=\\max\\{X_1,\\ldots,X_{N_n}\\}$.\n\nLet $F(x)$ be a d.f., $a\\in\\mathbb{R}$. Denote\n$\\mathrm{rext}(F)=\\sup\\{x:\\,F(x)<1\\}$, $F^{-1}(a)=\\inf\\{x:\\,F(x)\\ge\na\\}$.\n\n\\smallskip\n\n{\\sc Lemma 2.} {\\it Let $\\Lambda_1,\\Lambda_2,\\ldots$ be a sequence\nof positive r.v$:$s such that for each $n\\in\\mathbb{N}$ the r.v.\n$\\Lambda_n$ is independent of the Poisson process $P(t)$, $t\\ge0$.\nLet $N_n=P(\\Lambda_n)$. Assume that there exists a nonnegative r.v.\n$\\Lambda$ such that convergence~{\\rm \\eqref{3}} takes place. 
Let\n$X_1,X_2,\ldots$ be i.i.d. r.v$:$s with a common d.f. $F(x)$. Assume\nalso that $\mathrm{rext}(F)=\infty$ and there exists a number\n$\gamma>0$ such that for each $x>0$\n\begin{equation}\n\lim_{y\to\infty}\frac{1-F(xy)}{1-F(y)}=x^{-\gamma}.\label{4}\n\end{equation}\nThen}\n$$\n\lim_{n\to\infty}\sup_{x\ge 0}\bigg|{\sf\nP}\bigg(\frac{M_n}{F^{-1}(1-\frac{1}{n})}<x\bigg)-{\sf E}e^{-\Lambda x^{-\gamma}}\bigg|=0.\n$$\n\n\smallskip\n\n{\sc Theorem 2.} {\it Let $\lambda>0$, $q\in(0,1)$\nand let $N_{r,p_n}$ be a r.v. with the negative binomial\ndistribution with parameters $r>0$ and $p_n=\min\{q,\lambda\/n\}$.\nLet $X_1,X_2,\ldots$ be i.i.d. r.v$:$s with a common d.f. $F(x)$.\nAssume that $\mathrm{rext}(F)=\infty$ and there exists a number\n$\gamma>0$ such that relation~{\rm \eqref{4}} holds for any $x>0$.\nThen\n$$\n\lim_{n\to\infty}\sup_{x\ge 0}\bigg|{\sf\nP}\bigg(\frac{\max\{X_1,\ldots,X_{N_{r,p_n}}\}}{F^{-1}(1-\frac{1}{n})}<x\bigg)-F(x;r,\gamma,\lambda)\bigg|=0,\n$$\nwhere\n$$\nF(x;r,\gamma,\lambda)=\Big(\frac{\lambda x^{\gamma}}{1+\lambda x^{\gamma}}\Big)^{r},\ \ \ x\ge0.\n$$\n}\n\n\smallskip\n\n{\sc Proof}. Since the d.f. $e^{-x^{-\gamma}}$, $x>0$,\ncorresponds to the r.v. $W_{\gamma}^{-1}$, it is easy to make sure\nthat the d.f. $F(x;r,\gamma,\lambda)$ corresponds to the r.v.\n$M_{r,\gamma,\lambda}\equiv\nG_{r,\lambda}^{1\/\gamma}W_{\gamma}^{-1}$, where the multipliers on\nthe right-hand side are independent. From~\eqref{GG}\nand~\eqref{Weibull} it follows that\n\begin{equation}\label{M}\nM_{r,\gamma,\lambda}\stackrel{d}{=}\Big(\frac{G_{r,\lambda}}{W_1}\Big)^{1\/\gamma}\n\stackrel{d}{=}\frac{G^*_{r,\gamma,\lambda}}{W_{\gamma}},\n\end{equation}\nwhere in each term the multipliers are independent. Consider the\nr.v. $G_{r,\lambda}\/W_1$ in \eqref{M} in more detail. We have\n$$\n\frac{G_{r,\lambda}}{W_1}\stackrel{d}{=}\frac{G_{r,\lambda}}{G_{1,1}}\stackrel{d}{=}\frac{G_{r,1}}{\lambda\nG_{1,1}}\stackrel{d}{=}\frac{Q_{r,1}}{\lambda r},\n$$\nwhere $Q_{r,1}$ is the r.v. having the Snedecor--Fisher distribution\nwith parameters $r,\,1$ (`degrees of freedom') defined by the\nLebesgue density\n$$\nf_{r,1}(x)=\frac{r^{r+1}x^{r-1}}{(1+rx)^{r+1}},\ \ \ x\ge0,\n$$\n(see, e.
g., \\cite{Bolshev}, Section 27).\n\nSo,\n\\begin{equation}\\label{MQ}\nM_{r,\\gamma,\\lambda}\\stackrel{d}{=}\\Big(\\frac{Q_{r,1}}{\\lambda\nr}\\Big)^{1\/\\gamma},\n\\end{equation}\nand the statement of theorem 2 can be re-formulated as\n\\begin{equation}\n\\label{Mdef}\n\\frac{\\max\\{X_1,\\ldots,X_{N_{r,p_n}}\\}}{F^{-1}(1-\\frac{1}{n})}\\Longrightarrow\nM_{r,\\gamma,\\lambda}\\equiv\n\\frac{G_{r,\\lambda}^{1\/\\gamma}}{W_{\\gamma}}\\stackrel{d}{=}\n\\Big(\\frac{Q_{r,1}}{\\lambda r}\\Big)^{1\/\\gamma}\\ \\ \\ \\ (n\\to\\infty).\n\\end{equation}\n\nThe density of the limit distribution $F(x;r,\\gamma,\\lambda)$ of the\nextreme daily precipitation within a wet period has the form\n\\begin{equation}\np(x;r,\\gamma,\\lambda)=\\frac{r\\gamma\\lambda^rx^{\\gamma\nr-1}}{(1+\\lambda x^{\\gamma})^{r+1}}=\\frac{\\gamma\nr\\lambda^r}{x^{1+\\gamma}(\\lambda+x^{-\\gamma})^{r+1}},\\ \\ \\\nx>0.\\label{ExtrPDF}\n\\end{equation}\n\nIt is easy to see that $p(x;r,\\gamma,\\lambda)=O(x^{-1-\\gamma})$ as\n$x\\to\\infty$. Therefore ${\\sf\nE}M_{r,\\gamma,\\lambda}^{\\delta}<\\infty$ only if $\\delta<\\gamma$.\nMoreover, from~\\eqref{Mdef} it is possible to deduce explicit\nexpressions for the moments of the r.v. $M_{r,\\gamma,\\lambda}$.\n\n\\smallskip\n\n{\\sc Theorem 3.} {\\it Let $0<\\delta<\\gamma<\\infty$. Then}\n$$\n{\\sf\nE}M_{r,\\gamma,\\lambda}^{\\delta}=\n\\frac{\\Gamma\\big(r+\\frac{\\delta}{\\gamma}\\big)\\Gamma\\big(1-\\frac{\\delta}{\\gamma}\\big)}{\\lambda^{\\delta\/\\gamma}\\Gamma(r)}.\n$$\n\n\\smallskip\n\n{\\sc Proof}. 
From \\eqref{Mdef} it follows that\n\\begin{equation}\n\\label{Mmoments} {\\sf E}M_{r,\\gamma,\\lambda}^{\\delta}={\\sf\nE}G_{r,\\lambda}^{\\delta\/\\gamma}\\cdot{\\sf E}W_1^{-\\delta\/\\gamma}.\n\\end{equation}\nIt is easy to verify that\n\\begin{equation}\n\\label{moments} {\\sf\nE}G_{r,\\lambda}^{\\delta\/\\gamma}=\\frac{\\Gamma\\big(r+\\frac{\\delta}{\\gamma}\\big)}{\\lambda^{\\delta\/\\gamma}\\Gamma(r)},\\\n\\ \\ {\\sf\nE}W_1^{-\\delta\/\\gamma}=\\Gamma\\big(1-{\\textstyle\\frac{\\delta}{\\gamma}}\\big).\n\\end{equation}\nHence follows the desired result.\n\n\\smallskip\n\nTo analyze the properties of the limit distribution in theorem 2\nmore thoroughly we will require some additional auxiliary results.\n\n\\smallskip\n\n{\\sc Lemma 3} \\cite{KorolevWeibull2016}. {\\it Let $\\gamma\\in(0,1]$.\nThen\n$$\nW_{\\gamma}\\stackrel{d}{=} \\frac{W_1}{S_{\\gamma,1}}\n$$\nwith the r.v:s on the right-hand side being independent.}\n\n\\smallskip\n\n{\\sc Lemma 4} \\cite{Korolev2017}. {\\it Let $r\\in(0,1]$,\n$\\gamma\\in(0,1]$, $\\lambda>0$. Then\n$$\nG_{r,\\lambda}^{1\/\\gamma}\\stackrel{d}{=}\nG^*_{r,\\gamma,\\lambda}\\stackrel{d}{=}\\frac{W_{\\gamma}}{Z_{r,\\lambda}^{1\/\\gamma}}\\stackrel{d}{=}\n\\frac{W_1}{S_{\\gamma,1}Z_{r,\\lambda}^{1\/\\gamma}},\n$$\nwhere the r.v. $Z_{r,\\lambda}$ was defined in \\eqref{Zdef} and all\nthe involved r.v$:$s are independent.}\n\n\\smallskip\n\n{\\sc Theorem 4}. {\\it Let $r\\in(0,1]$, $\\gamma\\in(0,1]$,\n$\\lambda>0$. 
Then the following product representations are valid$:$\n\begin{equation}\label{T3_1}\nM_{r,\gamma,\lambda}\stackrel{d}{=}\n\frac{G_{r,\lambda}^{1\/\gamma}S_{\gamma,1}}{W_1},\n\end{equation}\n\begin{equation}\label{T3_2}\nM_{r,\gamma,\lambda}\stackrel{d}{=}\n\frac{W_{\gamma}}{W'_{\gamma}}\cdot\frac{1}{Z_{r,\lambda}^{1\/\gamma}}\stackrel{d}{=}\nW_1\cdot\frac{R_{\gamma}}{W'_1Z_{r,\lambda}^{1\/\gamma}}\stackrel{d}{=}\n\frac{\Pi R_{\gamma}}{Z_{r,\lambda}^{1\/\gamma}}\stackrel{d}{=}\n\frac{|X|\sqrt{2W_1}R_{\gamma}}{W'_1Z_{r,\lambda}^{1\/\gamma}},\n\end{equation}\nwhere $W_{\gamma}\stackrel{d}{=} W'_{\gamma}$, $W_1\stackrel{d}{=} W'_1$, the r.v.\n$R_{\gamma}$ has the density {\rm\eqref{Rdensity}}, the r.v. $\Pi$\nhas the Pareto distribution$:$ ${\sf P}(\Pi>x)=(x+1)^{-1}$, $x\ge0$,\nand in each term the involved r.v$:$s are independent.}\n\n\smallskip\n\n{\sc Proof}. Relation \eqref{T3_1} follows from \eqref{Mdef} and\nLemma 3, relation \eqref{T3_2} follows from \eqref{Mdef} and Lemma 4\nwith the account of the representation $W_1\stackrel{d}{=} |X|\sqrt{2W_1}$, the\nproof of which can be found in, say, \cite{KorolevWeibull2016}.\n\n\smallskip\n\nWith the account of the relation $R_{\gamma}\stackrel{d}{=} R_{\gamma}^{-1}$,\nfrom~\eqref{T3_2} we obtain the following statement.\n\n\smallskip\n\n{\sc Corollary 1.} {\it Let $r\in(0,1]$, $\gamma\in(0,1]$,\n$\lambda>0$. Then the d.f. $F(x;r,\gamma,\lambda)$ is mixed\nexponential$:$\n$$\n1-F(x;r,\gamma,\lambda)=\int_{0}^{\infty}e^{-ux}dA(u),\ \ \ x\ge0,\n$$\nwhere\n$$\nA(u)={\sf P}\big(W_1R_{\gamma}Z_{r,\lambda}^{1\/\gamma}<u\big),\ \ \ u\ge0.\n$$\n}\n\n\smallskip\n\n{\sc Corollary 2.} {\it Let $r\in(0,1]$, $\gamma\in(0,1]$,\n$\lambda>0$. Then the d.f.
$F(x;r,\gamma,\lambda)$ is infinitely\ndivisible.}\n\n\smallskip\n\n{\sc Proof.} This statement immediately follows from Corollary 1 and\nthe result of Goldie \cite{Goldie1967} stating that the product of\ntwo independent non-negative random variables is infinitely\ndivisible, if one of the two is exponentially distributed.\n\n\smallskip\n\nTheorem 4 and Corollary 1 state that the limit distribution in\nTheorem 2 can be represented as a scale mixture of exponential or\nstable or Weibull or Pareto or folded normal laws. The corresponding\nproduct representations for the r.v. $M_{r,\gamma,\lambda}$ can be\nused for its computer simulation.\n\nIn practice, the asymptotic approximation $F(x;r,\gamma,\lambda)$\nfor the distribution of the extreme daily precipitation within a wet\nperiod proposed by Theorem~2 is adequate if the ``success\nprobability'' is small enough, that is, if on average the wet\nperiods are long enough.\n\n\section{Estimation of the parameters $r$, $\lambda$ and $\gamma$}\n\nFrom~\eqref{ExtrPDF} it can be seen that the realization of the\nmaximum likelihood method for the estimation of the parameters $r$,\n$\lambda$ and $\gamma$ inevitably requires the numerical solution of\na system of transcendental equations by iterative procedures,\nwithout any guarantee that the resulting maximum is global. The\ncloseness of the initial approximation to the true maximum\nlikelihood point in the three-dimensional parameter set may give\nhope that the extreme point found by the numerical algorithm is the\nglobal one.\n\nFor rough estimation of the parameters, the following considerably\nsimpler method can be used. The resulting rough estimates can be\nused as a starting point for the `full' maximum likelihood algorithm\nmentioned above in order to ensure the closeness of the initial\napproximation to the true solution. The rough method is based on the\nfact that the quantiles of the d.f. $F(x;r,\gamma,\lambda)$ can be\nwritten out explicitly.
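Integrating the density \eqref{ExtrPDF} gives the closed form $F(x;r,\gamma,\lambda)=\big(\lambda x^{\gamma}/(1+\lambda x^{\gamma})\big)^{r}$, while the representation \eqref{Mdef} gives a direct simulation recipe for $M_{r,\gamma,\lambda}$. A minimal sketch checking the two against each other; the parameter values, sample size and seed are illustrative only:

```python
import random

rng = random.Random(0)
r, gam, lam = 0.5, 0.8, 1.0   # illustrative parameter values
n = 200_000
x0 = 1.0

# M = (G / W)**(1/gamma) with independent G ~ Gamma(shape r, rate lam)
# and W ~ Exp(1), i.e. the representation M = (G_{r,lambda}/W_1)^{1/gamma}.
hits = 0
for _ in range(n):
    g = rng.gammavariate(r, 1.0 / lam)   # second argument is the scale 1/lam
    w = rng.expovariate(1.0)
    if (g / w) ** (1.0 / gam) < x0:
        hits += 1

t = lam * x0 ** gam
theoretical = (t / (1.0 + t)) ** r       # F(x0; r, gamma, lambda)
print(hits / n, theoretical)
```

The empirical frequency agrees with the closed-form d.f. to within Monte Carlo error, which also validates the quantile formula obtained by inverting $F$.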
Namely, the quantile\n$x(\epsilon;r,\gamma,\lambda)$ of the d.f. $F(x;r,\gamma,\lambda)$\nof order $\epsilon\in(0,1)$, that is, the solution of the equation\n$F(x;r,\gamma,\lambda)=\epsilon$ with respect to $x$, obviously has\nthe form\n$$\nx(\epsilon;r,\gamma,\lambda)=\bigg(\frac{\epsilon^{1\/r}}{\lambda-\lambda\epsilon^{1\/r}}\bigg)^{1\/\gamma}.\n$$\nAssume that observations $\{X_{i,j}\}$,\n$i=1,\ldots,m$, $j=1,\ldots,m_i$, are at our disposal, where $i$ is the number of a wet\nperiod (the number of a sequence of rainy days), $j$ is the number\nof a day in the wet sequence, $m_i$ is the length of the $i$th wet\nsequence (the number of rainy days in the $i$th wet period), $m$ is\nthe total number of wet sequences, and $X_{i,j}$ is the precipitation\nvolume on the $j$th day of the $i$th wet sequence. Construct the\nsample $X^*_1,\ldots,X^*_m$ as\n\begin{equation}\nX^*_k=\max\{X_{k,1},\ldots,X_{k,m_k}\},\ \ \ k=1,\ldots,m.\label{VarSample}\n\end{equation}\nLet $X^*_{(1)},\ldots,X^*_{(m)}$ be the order statistics constructed\nfrom the sample $X^*_1,\ldots,X^*_m$. Since we have three unknown\nparameters $r$, $\lambda$ and $\gamma$, fix three numbers\n$0