\section{Introduction}
A time-parametrized family $\{\Gamma(t)\}_{t\geq 0}$ of $n$-dimensional surfaces in $\mathbb R^{n+1}$ (or in an open domain $U \subset \R^{n+1}$) is called a \emph{mean curvature flow} (abbreviated hereafter as MCF) if the velocity of motion of $\Gamma(t)$ is equal to the mean curvature of $\Gamma(t)$ at each point and time. The aim of the present paper is to establish a global-in-time existence theorem for the MCF $\{\Gamma(t)\}_{t\geq 0}$ starting from a given surface $\Gamma_0$ while keeping the boundary of $\Gamma(t)$ fixed for all times $t \geq 0$. In particular, we are interested in the case when the initial surface $\Gamma_0$ is not smooth. Typical MCF under consideration in this setting may look like a moving network with multiple junctions for $n=1$, or a moving cluster of bubbles for $n=2$, and they may undergo various topological changes as they evolve. Due to the presence of singularities, we work in the framework of the generalized, measure-theoretic notion of MCF introduced by Brakke and since known as the Brakke flow \cite{Brakke,Ton1}. A global-in-time existence result for a Brakke flow \emph{without} fixed boundary conditions was established by Kim and the second-named author in \cite{KimTone} by reworking \cite{Brakke} thoroughly. The major challenge of the present work is to devise a modification of the approximation scheme in \cite{KimTone} which preserves the boundary data.

\smallskip

Though somewhat technical, in order to clarify the setting of the problem at this point, we state the assumptions on the initial surface $\Gamma_0$ and the domain $U$ hosting its evolution. Their validity will be assumed throughout the paper.
\begin{assumption} \label{ass:main}
Integers $n\geq 1$ and $N\geq 2$ are fixed, and ${\rm clos}\,A$ denotes the topological closure of $A$ in $\mathbb R^{n+1}$.
\begin{itemize}
\item[(A1)] $U \subset \R^{n+1}$ is a strictly convex bounded domain with boundary $\partial U$ of class $C^2$.
\smallskip
\item[(A2)] $\Gamma_0 \subset U$ is a relatively closed, countably $n$-rectifiable set with finite $n$-dimensional Hausdorff measure.
\smallskip
\item[(A3)] $E_{0,1},E_{0,2},\ldots,E_{0,N}$ are non-empty, open, and mutually disjoint subsets of $U$ such that $U\setminus\Gamma_0=\bigcup_{i=1}^N E_{0,i}$.
\smallskip
\item[(A4)] $\partial\Gamma_0 := ({\rm clos}\,\Gamma_0) \setminus U$ is not empty, and for each $x \in \partial\Gamma_0$ there exist at least two indices $i_1 \ne i_2$ in $\{1,\ldots,N\}$ such that $x \in {\rm clos}\left({\rm clos}(E_{0,i_j}) \setminus (U \cup \partial \Gamma_0)\right)$ for $j=1,2$.
\end{itemize}
\end{assumption}
\noindent
Since $N\geq 2$, we implicitly assume that $U\setminus \Gamma_0$ is not connected. When $n=1$, $\Gamma_0$ could be, for instance, a union of Lipschitz curves joined at junctions, with ``labels'' from $1$ to $N$ assigned to each connected component of $U\setminus\Gamma_0$. If one defines $F_i:=({\rm clos}\,E_{0,i})\setminus(U\cup\partial\Gamma_0)$ for $i=1,\ldots,N$, one can check that each $F_i$ is relatively open in $\partial U$, that $F_1,\ldots,F_N$ are mutually disjoint, and that $\bigcup_{i=1}^N F_i=\partial U\setminus\partial\Gamma_0$. The assumption (A4) is equivalent to the requirement that each $x\in \partial \Gamma_0$ belongs to $\partial F_{i_1}\cap\partial F_{i_2}$ for some indices $i_1\neq i_2$.
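\noindent
As a concrete illustration of Assumption \ref{ass:main} (the example plays no role in the proofs), take $n=1$, $N=3$, $U=\{x\in\R^2\,:\,|x|<1\}$, and let
\[
\Gamma_0=\bigcup_{k=0}^{2}\left\{r\,(\cos\theta_k,\sin\theta_k)\,:\,0\leq r<1\right\}\,,\qquad \theta_k=\frac{2\pi k}{3}\,,
\]
be the triod consisting of three radii meeting at the origin. Here $E_{0,1},E_{0,2},E_{0,3}$ are the three open circular sectors forming $U\setminus\Gamma_0$, the boundary $\partial\Gamma_0$ consists of the three points $(\cos\theta_k,\sin\theta_k)$, and each of these points lies in the closure of exactly two of the arcs $F_i$, so that (A1)-(A4) are all satisfied.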
The main result of the present paper can then be roughly stated as follows.
\begin{theoremletter}
Under the assumptions (A1)-(A4), there exists a MCF $\{\Gamma(t)\}_{t\geq 0}$ such that
\[
\Gamma(0) = \Gamma_0\,, \qquad \mbox{and} \qquad \partial \Gamma(t) := ({\rm clos}\, \Gamma(t)) \setminus U = \partial \Gamma_0 \quad \mbox{for all $t\geq 0$}\,.
\]
For all $t>0$, $\Gamma(t)$ remains within the convex hull of $\Gamma_0\cup\partial\Gamma_0$.
\end{theoremletter}
\noindent
More precisely, $\{\Gamma(t)\}_{t\geq 0}$ is a MCF in the sense that $\Gamma(t)$ coincides with the slice, at time $t$, of the space-time support of a Brakke flow $\{V_t\}_{t\geq 0}$ starting from $\Gamma_0$. The method adopted to produce the evolving generalized surfaces $\Gamma(t)$ actually gives us more. Indeed, we show the existence of $N$ families $\{E_i(t)\}_{t \geq 0}$ ($i = 1,\ldots,N$) of evolving open sets such that $E_i(0)=E_{0,i}$ for every $i$, and $\Gamma(t)=U\setminus\bigcup_{i=1}^N E_i(t)$ for all $t \geq 0$. At each time $t\geq 0$, the sets $E_1(t),\ldots,E_N(t)$ are mutually disjoint, and together with $\Gamma(t)$ they form a partition of $U$. Moreover, for each fixed $i$ the Lebesgue measure of $E_i(t)$ is a continuous function of time, so that the evolving $\Gamma(t)$ do not exhibit arbitrary instantaneous loss of mass. See Theorems \ref{thm:main} and \ref{thm:main2} for the full statement.

\smallskip

It is reasonable to expect that the flow $\Gamma(t)$ converges, as $t\rightarrow\infty$, to a minimal surface in $U$ with boundary $\partial \Gamma_0$.
We are not able to prove such a result in full generality; nonetheless, we can show the following.
\begin{theoremletter}
There exists a sequence of times $\{t_k\}_{k=1}^\infty$ with $\lim_{k\to\infty} t_k = \infty$ such that the corresponding varifolds $V_k := V_{t_k}$ converge to a \emph{stationary} integral varifold $V_\infty$ in $U$ such that $({\rm clos}\,(\spt\|V_\infty\|)) \setminus U = \partial \Gamma_0$.
\end{theoremletter}
See Corollary \ref{main:cor} for a precise statement. The limit $V_{\infty}$ is a solution to Plateau's problem with boundary $\partial \Gamma_0$, in the sense that it has the prescribed boundary in the topological sense specified above and it is minimal in the sense of varifolds. We warn the reader that $V_{\infty}$ may not be area-minimizing. Furthermore, the flow may converge to different limit varifolds along different diverging sequences of times in all cases when uniqueness of a minimal surface with the prescribed boundary is not guaranteed. The possibility of using Brakke flow to select solutions to Plateau's problem in classes of varifolds seems an interesting byproduct of our theory. See Section \ref{propla} for further discussion on these points.

\smallskip

Next, we discuss closely related results. While there are several works on the global-in-time existence of MCF, there are relatively few results on the existence of MCF with fixed boundary conditions. When $\Gamma_0$ is a smooth graph over a bounded domain $\Omega$ in $\R^n$, global-in-time existence follows from the classical work of Lieberman \cite{Lieb}. Furthermore, under the assumption that $\Omega$ is mean convex, convergence of the flow to the unique solution to the minimal surface equation in $\Omega$ with the prescribed boundary was established by Huisken in \cite{Hu1}; see also the subsequent generalizations to the Riemannian setting in \cite{Priw,spruck}.
The case of network flows with fixed endpoints and a single triple junction was extensively studied in \cite{MNT,MMN}. For other configurations and related works on network flows, see the survey paper \cite{MNPS} and references therein. In the case when $N=2$ (which does not allow triple junctions in general), a powerful approach is the level set method \cite{CGG,ES}. Existence and uniqueness in this setting were established in \cite{SZ1}, and the asymptotic limit as $t\rightarrow\infty$ was studied in \cite{SZ2}. Recently, White \cite{Wh1} proved the existence of a Brakke flow with prescribed smooth boundary in the sense of integral flat chains ${\rm mod}(2)$. The proof uses the elliptic regularization scheme discovered by Ilmanen \cite{Ilm1}, which allows one to obtain a Brakke flow with additional good regularity and compactness properties; see also \cite{SW} for an application of elliptic regularization within the framework of flat chains with coefficients in suitable finite groups to the long-time existence and short-time regularity of unconstrained MCF starting from a general surface cluster. Observe that the homological constraint used by White prevents the flow from developing interior junction-type singularities of odd order (namely, junctions which are locally diffeomorphic to the union of an odd number of half-hyperplanes), because these singularities are necessarily boundary points ${\rm mod}(2)$. As a consequence, the flows obtained in \cite{Wh1} may differ greatly from those produced in the present paper. This is not surprising, as solutions to Brakke flow may be highly non-unique. A complete characterization of the topological changes that the evolving surfaces can undergo with either of the two approaches is, in fact, an interesting open question.
It is worth noticing that an analogous generic non-uniqueness holds true also for Plateau's problem: in that context, different definitions of the key words \emph{surfaces, area, spanning} in its formulation lead to solutions with dramatically different regularity properties, thus making each model a better or worse predictor of the geometric complexity of physical soap films; see e.g. the survey papers \cite{David_Plateau,HP_Plateau} and the references therein, as well as the more recent works \cite{DGM,MSS,KMS,KMS2,KMS3,DLHMS_linear,DLHMS_nonlinear}. It is then interesting and natural to investigate different formulations for Brakke flow as well.

\medskip

{\bf Acknowledgments.} The work of S.S. was supported by the NSF grants DMS-1565354, DMS-RTG-1840314 and DMS-FRG-1854344. Y.T. was partially supported by JSPS Grants-in-Aid for Scientific Research 18H03670, 19H00639, and 17H01092.

\section{Definitions, Notation, and Main Results}

\subsection{Basic notation}
The ambient space we will be working in is the Euclidean space $\R^{n+1}$. We write $\R^+$ for $[0,\infty)$. For $A\subset\mathbb R^{n+1}$, ${\rm clos}\,A$ (or $\overline A$) is the topological closure of $A$ in $\mathbb R^{n+1}$ (and not in $U$), ${\rm int}\,A$ is the set of interior points of $A$, and ${\rm conv}\,A$ is the convex hull of $A$. The standard Euclidean inner product between vectors in $\R^{n+1}$ is denoted $x \cdot y$, and $\abs{x} := \sqrt{x \cdot x}$. If $L,S \in \mathscr{L}(\R^{n+1};\R^{n+1})$ are linear operators in $\R^{n+1}$, their (Hilbert-Schmidt) inner product is $L \cdot S := {\rm trace}(L^T \circ S)$, where $L^T$ is the transpose of $L$ and $\circ$ denotes composition.
The corresponding (Euclidean) norm in $\mathscr{L}(\R^{n+1};\R^{n+1})$ is then $\abs{L} := \sqrt{L \cdot L}$, whereas the operator norm in $\mathscr{L}(\R^{n+1};\R^{n+1})$ is $\|L\| := \sup\left\lbrace \abs{L(x)} \, \colon \, \mbox{$x\in\R^{n+1}$ with $\abs{x}\leq 1$} \right\rbrace$. If $u,v \in \R^{n+1}$ then $u \otimes v \in \mathscr{L}(\R^{n+1};\R^{n+1})$ is defined by $(u \otimes v)(x) := (x \cdot v)\, u$, so that $\| u \otimes v \| = \abs{u}\,\abs{v}$. The symbol $U_{r}(x)$ (resp.~$B_r(x)$) denotes the open (resp.~closed) ball in $\R^{n+1}$ centered at $x$ and having radius $r > 0$. The Lebesgue measure of a set $A \subset \R^{n+1}$ is denoted $\Leb^{n+1}(A)$ or $|A|$. If $1 \leq k \leq n+1$ is an integer, $U_r^k(x)$ denotes the open ball with center $x$ and radius $r$ in $\R^k$. We will set $\omega_k := \Leb^k(U_1^k(0))$. The symbol $\Ha^k$ denotes the $k$-dimensional Hausdorff measure in $\R^{n+1}$, so that $\Ha^{n+1}$ and $\Leb^{n+1}$ coincide as measures.

\smallskip

A Radon measure $\mu$ in $U\subset\mathbb R^{n+1}$ is always also regarded as a linear functional on the space $C_c(U)$ of continuous and compactly supported functions on $U$, with the pairing denoted $\mu(\phi)$ for $\phi \in C_c(U)$. The restriction of $\mu$ to a Borel set $A$ is denoted $\mu\, \mres_A$, so that $(\mu \,\mres_A)(E) := \mu(A \cap E)$ for any $E \subset U$. The support of $\mu$ is denoted $\spt\,\mu$, and it is the relatively closed subset of $U$ defined by
\[
\spt\,\mu := \left\lbrace x \in U \, \colon \, \mu(B_r(x)) > 0 \mbox{ for every $r > 0$} \right\rbrace\,.
\]
The upper and lower $k$-dimensional densities of a Radon measure $\mu$ at $x \in U$ are
\[
\theta^{*k}(\mu,x) := \limsup_{r \to 0^+} \frac{\mu(B_r(x))}{\omega_k\, r^k} \,, \qquad \theta^k_*(\mu,x) := \liminf_{r \to 0^+} \frac{\mu(B_r(x))}{\omega_k\, r^k}\,,
\]
respectively.
If $\theta^{*k}(\mu,x) = \theta^k_*(\mu,x)$ then the common value is denoted $\theta^k(\mu,x)$, and is called the {\it $k$-dimensional density} of $\mu$ at $x$. For $1 \leq p \leq \infty$, the space of $p$-integrable (resp.~locally $p$-integrable) functions with respect to $\mu$ is denoted $L^p(\mu)$ (resp.~$L^p_{loc}(\mu)$). For a set $E \subset U$, $\chi_E$ is the characteristic function of $E$. If $E$ is a set of finite perimeter in $U$, then $\nabla \chi_E$ is the associated Gauss-Green measure in $U$, and its total variation $\|\nabla \chi_E\|$ in $U$ is the perimeter measure; by De Giorgi's structure theorem, $\| \nabla \chi_E\| = \Ha^n \mres_{\partial^* E}$, where $\partial^* E$ is the reduced boundary of $E$ in $U$.
\subsection{Varifolds}
The symbol $\bG(n+1,k)$ will denote the Grassmannian of (unoriented) $k$-dimensional linear planes in $\R^{n+1}$. Given $S \in \bG(n+1,k)$, we shall often identify $S$ with the orthogonal projection operator onto it. The symbol $\V_k(U)$ will denote the space of $k$-dimensional {\it varifolds} in $U$, namely the space of Radon measures on $\bG_k(U) := U \times \bG(n+1,k)$ (see \cite{Allard,Simon} for a comprehensive treatment of varifolds). To any given $V \in \V_k(U)$ one associates a Radon measure $\|V\|$ on $U$, called the {\it weight} of $V$, and defined by projecting $V$ onto the first factor in $\bG_k(U)$, explicitly:
\[
\|V\|(\phi) := \int_{\bG_k(U)} \phi(x) \, dV(x,S) \qquad \mbox{for every $\phi \in C_c(U)$}\,.
\]
A set $\Gamma \subset \R^{n+1}$ is {\it countably $k$-rectifiable} if it can be covered by countably many Lipschitz images of $\R^k$ into $\R^{n+1}$ up to a $\Ha^k$-negligible set. We say that $\Gamma$ is (locally) {\it $\Ha^k$-rectifiable} if it is $\Ha^k$-measurable, countably $k$-rectifiable, and $\Ha^k(\Gamma)$ is (locally) finite.
If $\Gamma \subset U$ is locally $\Ha^k$-rectifiable, and $\theta \in L^{1}_{loc}(\Ha^k \mres_\Gamma)$ is a positive function on $\Gamma$, then there is a $k$-varifold canonically associated to the pair $(\Gamma,\theta)$, namely the varifold $\var(\Gamma,\theta)$ defined by
\begin{equation} \label{varGammatheta}
\var(\Gamma,\theta)(\varphi) := \int_\Gamma \varphi(x, T_x \Gamma) \, \theta(x)\, d\Ha^k(x) \qquad \mbox{for every } \varphi \in C_c(\bG_k(U))\,,
\end{equation}
where $T_x\Gamma$ denotes the approximate tangent plane to $\Gamma$ at $x$, which exists $\Ha^k$-a.e. on $\Gamma$. Any varifold $V \in \V_k(U)$ admitting a representation as in \eqref{varGammatheta} is said to be \emph{rectifiable}, and the space of rectifiable $k$-varifolds in $U$ is denoted by ${\bf RV}_k(U)$. If $V = \var(\Gamma,\theta)$ is rectifiable and $\theta(x)$ is an integer at $\Ha^k$-a.e. $x \in \Gamma$, then we say that $V$ is an \emph{integral} $k$-dimensional varifold in $U$: the corresponding space is denoted $\IV_k(U)$.

\subsection{First variation of a varifold}
If $V \in \V_k(U)$ and $f \colon U \to U'$ is $C^1$ and proper, then we let $f_\sharp V \in \V_k(U')$ denote the push-forward of $V$ through $f$.
Recall that the weight of $f_\sharp V$ is given by
\begin{equation}\label{pushfd}
\|f_\sharp V\|(\phi) = \int_{\bG_{k}(U)} \phi \circ f(x) \, \abs{\Lambda_k \nabla f(x) \circ S} \, dV(x,S) \qquad \mbox{for every }\, \phi \in C_{c}(U')\,,
\end{equation}
where
\[
\abs{\Lambda_k \nabla f(x) \circ S} := \abs{\nabla f(x) \cdot v_1 \, \wedge \, \ldots \,\wedge\, \nabla f(x) \cdot v_k} \quad \mbox{for any orthonormal basis $\{ v_1, \ldots, v_k \}$ of $S$}
\]
is the Jacobian of $f$ along $S \in \bG(n+1,k)$. Given a varifold $V \in \V_k(U)$ and a vector field $g \in C^1_c(U; \R^{n+1})$, the {\it first variation} of $V$ in the direction of $g$ is the quantity
\begin{equation}
\label{defFV}
\delta V(g) := \left.\frac{d}{dt}\right|_{t=0} \|(\Phi_t)_\sharp V\|(\tilde U)\,,
\end{equation}
where $\Phi_t(\cdot) = \Phi(t,\cdot)$ is any one-parameter family of diffeomorphisms of $U$ defined for sufficiently small $|t|$ such that $\Phi_0 = {\rm id}_U$ and $\partial_t \Phi(0,\cdot) = g(\cdot)$. Here, $\tilde U$ is any open set such that ${\rm clos}\,\tilde U\subset U$ is compact and ${\rm spt}\,g\subset \tilde U$; the value in \eqref{defFV} does not depend on the choice of $\tilde U$.
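\noindent
As a simple check of \eqref{pushfd} (not needed in the sequel), consider the dilation $f(x)=\lambda x$ with $\lambda>0$: then $\nabla f(x)=\lambda\,{\rm id}$, so that $\abs{\Lambda_k \nabla f(x)\circ S}=\lambda^k$ for every $S\in\bG(n+1,k)$, and \eqref{pushfd} gives $\|f_\sharp V\|(\phi)=\lambda^k\int_U \phi(\lambda x)\,d\|V\|(x)$. In particular, if $V=\var(\Gamma,1)$ for a $\Ha^k$-rectifiable set $\Gamma$, then $f_\sharp V=\var(\lambda\Gamma,1)$, consistently with the scaling $\Ha^k(\lambda\Gamma)=\lambda^k\,\Ha^k(\Gamma)$.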
It is well known that $\delta V$ is a linear and continuous functional on $C^1_c(U; \R^{n+1})$, and in fact that
\begin{equation}
\label{defFV1}
\delta V(g) = \int_{\bG_k(U)} \nabla g(x) \cdot S \, dV(x,S) \qquad \mbox{for every $g \in C^1_c(U;\R^{n+1})$}\,,
\end{equation}
where, after identifying $S \in \bG(n+1,k)$ with the orthogonal projection operator $\R^{n+1} \to S$,
\[
\nabla g \cdot S = {\rm trace}(\nabla g^T \circ S) = \sum_{i,j=1}^{n+1} S_{ij} \, \frac{\partial g_i}{\partial x_j} = {\rm div}^S g\,.
\]
If $\delta V$ can be extended to a linear and continuous functional on $C_c(U;\R^{n+1})$, we say that $V$ has {\it bounded first variation} in $U$. In this case, $\delta V$ is naturally associated with a unique $\R^{n+1}$-valued measure on $U$ by means of the Riesz representation theorem. If such a measure is absolutely continuous with respect to the weight $\|V\|$, then there exists a $\|V\|$-measurable and locally $\|V\|$-integrable vector field $h(\cdot,V)$ such that
\begin{equation} \label{def:generalized mean curvature}
\delta V(g) = - \int_{U} g(x) \cdot h(x,V) \, d\|V\|(x) \qquad \mbox{for every $g \in C_c(U,\R^{n+1})$}
\end{equation}
by the Lebesgue-Radon-Nikod\'ym differentiation theorem. The vector field $h(\cdot,V)$ is called the {\it generalized mean curvature vector} of $V$. In particular, if $\delta V(g)=0$ for all $g\in C_c^1(U;\mathbb R^{n+1})$, then $V$ is called {\it stationary}; this is equivalent to $h(\cdot,V)=0$ $\|V\|$-almost everywhere. For any $V\in {\bf IV}_k(U)$ with bounded first variation, {\it Brakke's perpendicularity theorem} \cite[Chapter 5]{Brakke} says that
\begin{equation}
\label{BPT}
S^{\perp}(h(x,V))=h(x,V) \qquad \mbox{for $V$-a.e. $(x,S) \in {\bf G}_k(U)$}\,.
\end{equation}
Here, $S^{\perp}$ is the projection onto the orthogonal complement of $S$ in $\R^{n+1}$.
This means that the generalized mean curvature vector is perpendicular to the approximate tangent plane almost everywhere.

\smallskip

Besides the first variation $\delta V$ discussed above, we shall also use a {\it weighted first variation}, defined as follows. For given $\phi\in C^1_c(U;\mathbb R^+)$, $V\in {\bf V}_k(U)$, and $g \in C^1_c(U;\R^{n+1})$, we modify \eqref{defFV} to introduce the $\phi$-weighted first variation of $V$ in the direction of $g$, denoted $\delta(V,\phi)(g)$, by setting
\begin{equation} \label{defFV_modified}
\delta(V,\phi)(g) := \left.\frac{d}{dt}\right|_{t=0} \| (\Phi_t)_\sharp V \|(\phi)\,,
\end{equation}
where $\Phi_t$ denotes the one-parameter family of diffeomorphisms of $U$ induced by $g$ as above. Proceeding as in the derivation of \eqref{defFV1}, one then obtains the expression
\begin{equation}
\label{defFV2}
\delta(V,\phi)(g)=\int_{{\bf G}_k(U)} \phi(x)\, \nabla g(x)\cdot S\,dV(x,S)+\int_U g(x)\cdot\nabla\phi(x)\,d\|V\|(x)\,.
\end{equation}
Using $\phi\nabla g=\nabla(\phi g)- g\otimes\nabla\phi$ in \eqref{defFV2} and \eqref{defFV1}, we obtain
\begin{equation}
\label{defFV3}
\begin{split}
\delta(V,\phi)(g)&=\delta V(\phi g)+\int_{{\bf G}_k(U)} g(x)\cdot(\nabla\phi(x)-S(\nabla\phi(x)))\,dV(x,S) \\
&=\delta V(\phi g)+\int_{{\bf G}_k(U)} g(x)\cdot S^{\perp}(\nabla\phi(x))\,dV(x,S)\,.
\end{split}
\end{equation}
If $\delta V$ has generalized mean curvature $h(\cdot,V)$, then we may use \eqref{def:generalized mean curvature} in \eqref{defFV3} to obtain
\begin{equation}
\label{defFV4}
\delta(V,\phi)(g)=-\int_U \phi(x)g(x)\cdot h(x,V) \, d\|V\|(x)+\int_{{\bf G}_k(U)} g(x)\cdot S^{\perp}(\nabla\phi(x))\,dV(x,S)\,.
\end{equation}

The definition of Brakke flow requires considering weighted first variations in the direction of the mean curvature.
Suppose that $V\in {\bf IV}_k(U)$, that $\delta V$ is locally bounded and absolutely continuous with respect to $\|V\|$, and that $h(\cdot,V)$ is locally square-integrable with respect to $\|V\|$. In this case, it is natural from the expression \eqref{defFV4} to define, for $\phi\in C_c^1(U;\mathbb R^+)$,
\begin{equation}
\label{defFV5}
\delta(V,\phi)(h(\cdot,V)):=\int_U \lbrace -\phi(x)|h(x,V)|^2+h(x,V)\cdot\nabla\phi(x) \rbrace\,d\|V\|(x)\,.
\end{equation}
Observe that here we have used \eqref{BPT} in order to replace the term $h(x,V)\cdot S^{\perp}(\nabla\phi(x))$ with $h(x,V)\cdot \nabla\phi(x)$.

\subsection{Brakke flow}
To motivate a weak formulation of the MCF, note that a smooth family of $k$-dimensional surfaces $\{\Gamma(t)\}_{t\geq 0}$ in $U$ is a MCF if and only if the following inequality holds true for all $\phi = \phi (x,t)\in C_c^1(U\times[0,\infty);\mathbb R^+)$:
\begin{equation}
\label{smMCF1}
\frac{d}{dt}\int_{\Gamma(t)}\phi\,d\mathcal H^k \leq \int_{\Gamma(t)} \left\lbrace -\phi\,|h(\cdot,\Gamma(t))|^2+\nabla\phi\cdot h(\cdot,\Gamma(t))+\frac{\partial\phi}{\partial t} \right\rbrace \,d\mathcal H^k \,.
\end{equation}
In fact, the ``only if'' part holds with equality in place of inequality. For a more comprehensive treatment of the Brakke flow, see \cite[Chapter 2]{Ton1}. Formally, if $\partial\Gamma(t)\subset\partial U$ is fixed in time, taking $\phi=1$ we also obtain
\begin{equation}
\label{smMCF2}
\frac{d}{dt}\mathcal H^k(\Gamma(t)) \leq -\int_{\Gamma(t)}|h(x,\Gamma(t))|^2\,d\mathcal H^k(x)\,,
\end{equation}
which expresses the well-known fact that the squared $L^2$-norm of the mean curvature is the dissipation rate of area along the MCF.
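\noindent
To illustrate \eqref{smMCF2} in the simplest smooth situation (the computation is classical and plays no role in the sequel), consider the shrinking sphere $\Gamma(t)=\partial B_{r(t)}(0)$ in $\R^{n+1}$, for which $h=-\frac{n}{r}\,\nu$, $\nu$ being the outer unit normal, so that the MCF equation reduces to $\dot r=-n/r$, i.e.~$r(t)=\sqrt{r_0^2-2nt}$ up to the extinction time $r_0^2/(2n)$. Writing $\mathcal H^n(\Gamma(t))=\sigma_n\, r(t)^n$ with $\sigma_n:=\mathcal H^n(\partial B_1(0))$, one computes
\[
\frac{d}{dt}\mathcal H^n(\Gamma(t))=\sigma_n\, n\, r^{n-1}\,\dot r=-\sigma_n\, n^2\, r^{n-2}=-\int_{\Gamma(t)}|h|^2\,d\mathcal H^n\,,
\]
so that, formally taking $\phi\equiv 1$ as above, equality holds in \eqref{smMCF2}.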
Motivated by \eqref{smMCF1} and \eqref{smMCF2}, and for the purposes of this paper, we give the following definition.
\begin{definition} \label{def:Brakke_bc}
We say that a family of varifolds $\{V_t\}_{t\geq 0}$ in $U$ is a {\it Brakke flow with fixed boundary} $\Sigma\subset\partial U$ if all of the following hold:
\begin{enumerate}
\item[(a)] For a.e.~$t\geq 0$, $V_t\in{\bf IV}_k(U)$;
\item[(b)] For a.e.~$t\geq 0$, $\delta V_t$ is bounded and absolutely continuous with respect to $\|V_t\|$;
\item[(c)] The generalized mean curvature $h(x,V_t)$ (which exists for a.e.~$t$ by (b)) satisfies for all $T>0$
\begin{equation}
\|V_T\|(U)+\int_0^{T}dt\int_U|h(x,V_t)|^2\,d\|V_t\|(x)\leq \|V_0\|(U);
\label{brakineq2}
\end{equation}
\item[(d)] For all $0\leq t_1<t_2<\infty$ and all $\phi\in C^1_c(U\times[0,\infty);\mathbb R^+)$,
\begin{equation}
\|V_{t_2}\|(\phi(\cdot,t_2))-\|V_{t_1}\|(\phi(\cdot,t_1))\leq \int_{t_1}^{t_2}\Big\{ \delta(V_t,\phi(\cdot,t))(h(\cdot,V_t))+\|V_t\|\Big(\frac{\partial\phi}{\partial t}(\cdot,t)\Big)\Big\}\,dt;
\end{equation}
\item[(e)] For all $t\geq 0$, $({\rm clos}\,({\rm spt}\,\|V_t\|))\setminus U=\Sigma$.
\end{enumerate}
\end{definition}
\noindent
If the union of the \emph{reduced boundaries} of the initial partition in $U$ coincides with $\Gamma_0$ modulo $\Ha^n$-negligible sets (note that the assumptions (A2) and (A3) in Assumption \ref{ass:main} imply that $\Gamma_0 = U\cap\bigcup_{i=1}^N \partial E_{0,i}$), then the claim is that the initial condition is satisfied continuously as measures. Otherwise, an instantaneous loss of measure may occur at $t=0$. As far as regularity is concerned, under the additional assumption that $\{V_t\}_{t > 0}$ is a unit density flow, the partial regularity theorems of \cite{Brakke,Kasai-Tone,Ton-2} show that $V_t$ is a smooth MCF for a.e.~time and a.e.~point in space, just as in \cite{KimTone}; see \cite[Theorem 3.6]{KimTone} for the precise statement. No claim of uniqueness is made here, but the next Theorem \ref{thm:main2} gives additional structure to $V_t$ in the form of ``moving partitions'' starting from $E_{0,1},\ldots,E_{0,N}$.
\begin{theorem} \label{thm:main2}
Under the same assumptions as in Theorem \ref{thm:main}, and in addition to $\{V_t\}_{t\geq 0}$, for each $i=1,\dots,N$ there exists a one-parameter family $\{E_i(t)\}_{t \geq 0}$ of open sets $E_{i}(t) \subset U$ with the following properties. Let $\Gamma(t):=U\setminus\bigcup_{i=1}^N E_i(t)$.
\begin{enumerate}

\item $E_{i}(0) = E_{0,i}$ $\forall i=1,\dots,N$;

\item $\forall t \geq 0$, the sets $\{E_i(t)\}_{i=1}^N$ are mutually disjoint;

\item $\forall\tilde U\subset\joinrel\subset U$ and $\forall t\geq 0$, $\mathcal H^n(\Gamma(t)\cap \tilde U)<\infty$;

\item $\forall t\geq 0$, $\Gamma(t)=U\cap \bigcup_{i=1}^N \partial E_i(t)$;

\item $\forall t\geq 0$, $\Gamma(t)\subset {\rm conv}(\Gamma_0\cup\partial\Gamma_0)$;

\item $\forall t\geq 0$ and $\forall i=1,\ldots,N$, $E_i(t)\setminus {\rm conv}(\Gamma_0\cup\partial\Gamma_0)=E_{0,i}\setminus {\rm conv}(\Gamma_0\cup\partial\Gamma_0)$;

\item $\forall t\geq 0$, $\partial\Gamma(t):=({\rm clos}\,\Gamma(t))\setminus U=\partial\Gamma_0$;

\item $\forall t\geq 0$ and $\forall i=1,\ldots,N$, $\|\nabla\chi_{E_i(t)}\| \leq \|V_t\|$ and $\sum_{i=1}^N\|\nabla\chi_{E_i(t)}\|\leq 2\|V_t\|$;

\item Fix $i=1,\ldots,N$ and $U_r(x)\subset\joinrel\subset U$, and define $g(t):=\mathcal L^{n+1}(U_r(x)\cap E_i(t))$. Then, $g\in C^0([0,\infty))\cap C^{0,\frac12}((0,\infty))$;

\item For each $i=1,\ldots,N$, $\chi_{E_i(t)}\in C([0,\infty);L^1(U))$;

\item Let $\mu$ be the product measure of $\|V_t\|$ and $dt$ defined on $U\times\R^+$, i.e. $d\mu:=d\|V_t\|dt$.
Then, $\forall t>0$, we have
\begin{equation*}
{\rm spt}\,\|V_t\|\subset \{x\in U\,:\, (x,t)\in {\rm spt}\,\mu\}=\Gamma(t)\,.
\end{equation*}

\end{enumerate}

\end{theorem}
\noindent
Claims (1)-(4) imply that $\{E_i(t)\}_{i=1}^N$ is an $\mathcal{L}^{n+1}$-partition of $U$; in particular, $\Gamma(t)$ has empty interior. Claim (5) is an expected property of the MCF, and, by (11), ${\rm spt}\,\|V_t\|$ is contained in the same convex hull. Claim (7) says that $\Gamma(t)$ has the fixed boundary $\partial\Gamma_0$. In general, the reduced boundaries of the partition and $\|V_t\|$ may not match, but the latter is bounded from below by the former, as in (8). By (10), the Lebesgue measure of each $E_i(t)$ changes continuously in time, so that an arbitrary sudden loss of measure of $\|V_t\|$ is not allowed. The statement in (11) says that the time-slice of the support of $\mu$ at time $t$ contains the support of $\|V_t\|$ and is equal to the topological boundary of the moving partition.

As a corollary of the above, we deduce the following.

\begin{corollary}\label{main:cor}
There exist a sequence $\{t_k\}_{k=1}^\infty$ with $\lim_{k\rightarrow\infty} t_k=\infty$ and a varifold $V \in \IV_n(U)$ such that $V_{t_k} \to V$ in the sense of varifolds. The varifold $V$ is stationary.
Furthermore, there is a mutually disjoint family $\{E_i\}_{i=1}^N$ of open subsets of $U$ such that
\begin{enumerate}
\item $\forall i=1,\ldots,N$, $\| \nabla \chi_{E_i} \| \leq \|V\|$ and $\sum_{i=1}^N\|\nabla\chi_{E_i}\| \leq 2\|V\|$;
\item $\forall i=1,\ldots,N$, $E_i \setminus {\rm conv}(\Gamma_0\cup\partial\Gamma_0) = E_{0,i} \setminus {\rm conv}(\Gamma_0\cup\partial\Gamma_0)$;
\item $U \setminus \bigcup_{i=1}^N E_i = \spt\|V\|$, and $0 < \Ha^n (U \setminus \bigcup_{i=1}^N E_i) \leq \|V\|(U) \leq \Ha^n(\Gamma_0)$;
\item $({\rm clos}\,(\spt\|V\|))\setminus U= ({\rm clos}(U \setminus \bigcup_{i=1}^N E_i)) \setminus U = \partial\Gamma_0$.
\end{enumerate}
\end{corollary}

The varifold $V$ in Corollary \ref{main:cor} is a solution to Plateau's problem in $U$ in the class of stationary varifolds satisfying the topological constraint $({\rm clos}\,(\spt \|V\|)) \setminus U = \partial \Gamma_0$. This is an interesting byproduct of our construction, above all considering that $\partial \Gamma_0$ enjoys in general rather poor regularity (in particular, it may have infinite $(n-1)$-dimensional Hausdorff measure, and it may fail to be countably $(n-1)$-rectifiable). Even though the \emph{topological} boundary condition specified above seems natural in this setting, other notions of spanning may be adopted: for instance, in Proposition \ref{final spanning} we show that a \emph{strong homotopic spanning condition} in the sense of \cite{HP16,DGM} is preserved along the flow and in the limit if it is satisfied at the initial time $t=0$. We postpone further discussion and questions concerning the application to Plateau's problem to Section \ref{propla}.
\subsection{General strategy and structure of the paper}

The general idea behind the proof of Theorems \ref{thm:main} and \ref{thm:main2} is to suitably modify the time-discrete approximation scheme introduced in \cite{KimTone,Brakke}. There, one constructs a time-parametrized flow of open partitions which is piecewise constant in time. We will call \emph{epoch} any time interval during which the approximating flow is constant. The open partition at a given epoch is constructed from the open partition at the previous epoch by applying two operations, which we call \emph{steps}. The first step is a small Lipschitz deformation of partitions, whose effect is to ``regularize singularities'' by ``locally minimizing the area of the boundary of partitions'' at a small scale. This deformation is defined in such a way that, if the boundary of partitions is regular (relative to a certain length scale), then the deformation reduces to the identity. The second step consists of flowing the boundary of partitions by a suitably defined ``approximate mean curvature vector''. The latter is computed by smoothing the surface measures via convolution with a localized heat kernel. Note that, typically, the boundary of open partitions has bounded $n$-dimensional measure, but the unit-density varifold associated to it may not have bounded first variation. In \cite{KimTone}, a time-discrete approximate MCF is obtained by alternating these two steps, epoch after epoch. In the present work, we need to fix the boundary $\partial\Gamma_0$. The rough idea to achieve this is to perform an ``exponentially small'' truncation of the approximate mean curvature vector near $\partial\Gamma_0$, so that the boundary cannot move in the ``polynomial time scale'' defining an epoch with respect to a certain length scale. We also need to make sure that the time-discrete movement does not push the boundary of open partitions outside of $U$.
To prevent this, in addition to the two steps (Lipschitz deformation and motion by smoothed and truncated mean curvature vector), we add a third ``retraction to $U$'' step to be performed in each epoch. All these operations have to come with suitable estimates on the surface measures, in order to have convergence of the approximating flow when we let the epoch time scale approach zero. The final goal is to show that this limit flow is indeed a Brakke flow with fixed boundary $\partial\Gamma_0$ in the sense of Definition \ref{def:Brakke_bc}.

\smallskip

The rest of the paper is organized as follows. Section \ref{sec:prelim} lays the foundations for the technical construction of the approximate flow by proving the relevant estimates to be used in the Lipschitz deformation and flow by smoothed mean curvature steps, and by defining the boundary truncation of the mean curvature. Both the discrete approximate flow and its ``vanishing epoch'' limit are constructed in Section \ref{sec:limit flow}. In Section \ref{sec:Brakke} we show that the one-parameter family of measures obtained in the previous section satisfies conditions (a) to (d) in Definition \ref{def:Brakke_bc}. The boundary condition (e) is, instead, proved in Section \ref{sec:bb}, which therefore also contains the proofs of Theorems \ref{thm:main} and \ref{thm:main2}. Finally, Section \ref{propla} is dedicated to the limit $t \to \infty$: hence, it contains the proof of Corollary \ref{main:cor}, as well as a discussion of related results and open questions concerning the application of our construction to Plateau's problem.


\section{Preliminaries} \label{sec:prelim}

In this section we collect the preliminary results that will play a pivotal role in the construction of the time-discrete approximate flows.
Some of the results are straightforward adaptations of the corresponding ones in \cite{KimTone}: when that is the case, we shall omit the proofs, and refer the reader to that paper. \n\n\subsection{Classes of test functions and vector fields}\n\nDefine, for every $j \in \Na$, the classes $\cA_j$ and $\cB_j$ as follows:\n\n\begin{equation} \label{classA}\n\begin{split}\n\cA_j := \{ \phi \in C^2(\R^{n+1}; \R^+) \, \colon \, &\phi(x) \leq 1,\; \abs{\nabla \phi(x)} \leq j\, \phi(x), \\ &\|\nabla^2\phi(x) \| \leq j \, \phi(x)\,\, \mbox{for every $x \in \R^{n+1}$} \}\,,\n\end{split}\n\end{equation}\n\n\begin{equation} \label{classB}\n\begin{split}\n\cB_j := \{ g \in C^2(\R^{n+1}; \R^{n+1}) \, \colon \, &|g(x)| \leq j,\,\, \norm{\nabla g(x)} \leq j\, , \\ &\|\nabla^2 g(x) \| \leq j \, \, \mbox{for every $x \in \R^{n+1}$, and } \|g\|_{L^2} \leq j \}\,.\n\end{split}\n\end{equation}\n\nThe properties of functions $\phi \in \cA_j$ and vector fields $g \in \cB_j$ are precisely as in \cite[Lemma 4.6, Lemma 4.7]{KimTone}, and we record them in the following lemma for future reference.\n\n\begin{lemma} \label{l:class properties}\n\nLet $x,y \in \R^{n+1}$ and $j \in \Na$. For every $\phi \in \cA_j$, the following properties hold:\n\begin{align}\n\phi(x) & \leq \phi(y) \exp(j\, \abs{x-y})\,, \label{e:Gronwall} \\\n\abs{\phi(x) - \phi(y)} &\leq j \, \abs{x-y} \phi(y) \exp(j\, \abs{x-y})\,, \label{e:1st_order} \\\n\abs{\phi(x) - \phi(y) - \nabla\phi(y) \cdot (x-y)} &\leq j\, \abs{x-y}^2 \phi(y) \exp(j\, \abs{x-y}) \label{e:2nd_order}\,.\n\end{align}\n\nAlso, for every $g \in \cB_j$:\n\begin{equation} \label{e:vectorfield}\n\abs{g(x) - g(y)} \leq j\, \abs{x-y}\,.\n\end{equation}\n\n\end{lemma}\n\n\subsection{Open partitions and admissible functions}\n\nLet $\tilde U\subset \R^{n+1}$ be a bounded open set. 
\nLater, $\\tilde U$ will be an open set\nwhich is very close to $U$ in Assumption \\ref{ass:main}. \n\n\\begin{definition}\\label{def:op}\n\nFor $N \\geq 2$, an \\emph{open partition} of $\\tilde U$ in $N$ elements is a finite and ordered collection $\\E = \\{E_{i}\\}_{i=1}^N$ of subsets $E_{i} \\subset \\tilde U$ such that:\n\\begin{itemize}\n\\item[(a)] $E_1,\\dots,E_N$ are open and mutually disjoint;\n\\item[(b)] $\\Ha^n(\\tilde U \\setminus \\bigcup_{i=1}^N E_i) < \\infty$;\n\\item[(c)] $\\bigcup_{i=1}^N \\partial E_i\\subset \\tilde U$ is countably $n$-rectifiable.\n\\end{itemize}\nThe set of all open partitions of $\\tilde U$ of $N$ elements will be denoted $\\op^N(\\tilde U)$. \n\\end{definition}\n\\noindent\nNote that some of the $E_i$ may be empty. Condition (b) implies that\n\\begin{equation}\\label{etop}\n\\tilde U \\setminus \\bigcup_{i=1}^N E_i = \\bigcup_{i=1}^N \\partial E_i\\,,\n\\end{equation}\nand thus that $\\bigcup_{i=1}^N \\partial E_i$ is $\\Ha^n$-rectifiable and each $E_i$ is in fact an open set with finite perimeter in $\\tilde U$. By De Giorgi's structure theorem, the reduced boundary $\\partial^*E_i$ is $\\Ha^n$-rectifiable: nonetheless, the reduced boundary $\\partial^*E_i$ may not coincide in general with the topological boundary $\\partial E_i$, which makes condition (c) not redundant. We keep the following for \nlater use. The proof is straightforward. \n\\begin{lemma} \\label{difpa}\nSuppose $\\E=\\{E_i\\}_{i=1}^N\\in \\op^N(\\tilde U)$ and $f:\\R^{n+1}\\to\\R^{n+1}$ is a\n$C^1$ diffeomorphism. Then we have $\\{f(E_i)\\}_{i=1}^N\\in \\op^N(f(\\tilde U))$. 
\n\\end{lemma}\n\\begin{notation}\n\nGiven $\\E \\in \\op^N(\\tilde U)$, we will set\n\\begin{equation} \\label{e:interior boundary}\n\\partial\\E := \\var\\left(\\bigcup_{i=1}^N \\partial E_i , 1\\right) \\in \\IV_{n}(\\R^{n+1})\\,.\n\\end{equation}\nHere, to avoid some possible confusion, we emphasize that we want to consider\n$\\partial\\E$ as a varifold on $\\R^{n+1}$ when we construct approximate MCF. \nOn the other hand, note\nthat we still consider the relative topology of $\\tilde U$, as $\\partial E_i\\subset \\tilde U$ here. In particular, writing $\\Gamma=\\cup_{i=1}^N\\partial E_i$, we have $\\|\\partial\\E\\|=\\Ha^n\\mres_{\\Gamma}$, and \n\\[\n\\partial\\E(\\varphi) = \\int_{\\Gamma} \\varphi(x,T_x\\,\\Gamma)\\, d\\Ha^n(x) \\qquad \\mbox{for every $\\varphi \\in C_{c}(\\bG_n(\\R^{n+1}))$}\\,,\n\\]\nwhere $T_x\\,\\Gamma \\in \\bG(n+1,n)$ is the approximate tangent plane to $\\Gamma$ at $x$, which exists and is unique at $\\Ha^n$-a.e. $x \\in \\Gamma$ because of Definition \\ref{def:op}(c).\n\\end{notation}\n\n\\begin{definition} \\label{def:E-admissible}\n\nGiven $\\E=\\{E_i\\}_{i=1}^N \\in \\op^N(\\tilde U)$ and a closed set $C\\subset\\joinrel\\subset \\tilde U$, a function $f \\colon \\R^{n+1} \\to \\R^{n+1}$ is \\emph{$\\E$-admissible} in $C$ if it is Lipschitz continuous and satisfies the following. Let $\\tilde{E}_i := {\\rm int}\\,(f(E_i))$ for $i = 1,\\dots,N$. Then:\n\\begin{itemize}\n\\item[(a)] $\\{x\\,:\\,x\\neq f(x)\\}\\cup \\{f(x)\\,:\\,x\\neq f(x)\\}\\subset C$;\n\\item[(b)] $\\{\\tilde{E}_i\\}_{i=1}^N$ are mutually disjoint;\n\\item[(c)] $\\tilde U \\setminus \\bigcup_{i=1}^N \\tilde{E}_i \\subset f(\\bigcup_{i=1}^N \\partial E_i)$.\n\n\\end{itemize}\n\n\\end{definition}\n\n\n\\begin{lemma} \\label{l:preserving partitions}\n\nLet $\\E = \\{E_i\\}_{i=1}^N \\in \\op^N(\\tilde U)$ be an open partition of $\\tilde U$ in $N$ elements,\n$C\\subset\\joinrel\\subset \\tilde U$, and let $f$ be $\\E$-admissible in $C$. 
If we define $\tilde\E := \{\tilde E_{i}\}_{i=1}^N$ with $\tilde E_i := {\rm int}\,(f(E_i))$, then $\tilde\E \in \op^N(\tilde U)$.\n\n\end{lemma}\n\n\begin{proof}\nWe check that $\tilde \E$ satisfies properties (a)-(c) in Definition \ref{def:op}.\nBy Definition \ref{def:E-admissible}(a) and (b), it is clear that $\tilde E_1,\ldots,\n\tilde E_N$ are open and mutually disjoint subsets of $\tilde U$, which gives (a).\nIn order to prove (b), we use Definition \ref{def:E-admissible}(c) and the area formula to compute:\n\[\n\Ha^n\Big( \tilde U \setminus \bigcup_{i=1}^N \tilde E_i \Big)\n\leq \Ha^n\Big(f(\bigcup_{i=1}^N \partial E_i)\Big) \n\leq \Lip(f)^n \, \Ha^n\Big( \bigcup_{i=1}^N \partial E_i \Big) < \infty\,,\n\]\nwhere we have used Definition \ref{def:op}(b) and \eqref{etop}. This also shows \n$\tilde U\setminus\bigcup_{i=1}^N \tilde E_i=\bigcup_{i=1}^N\partial\tilde E_i$.\nFinally, we prove property (c). Since $\bigcup_{i=1}^N \partial E_i$ is countably $n$-rectifiable and $f$ is Lipschitz, the image $f(\bigcup_{i=1}^N \partial E_i)$ is countably $n$-rectifiable as well. Moreover, by Definition \ref{def:E-admissible}(c) and the identity just proved, $\bigcup_{i=1}^N \partial\tilde E_i = \tilde U \setminus \bigcup_{i=1}^N \tilde E_i \subset f(\bigcup_{i=1}^N \partial E_i)$. Since any subset of a countably $n$-rectifiable set is countably $n$-rectifiable, also $\bigcup_{i=1}^N \partial\tilde E_i$ is countably $n$-rectifiable.\n\end{proof}\n\n\begin{notation}\nIf $\E \in \op^N(\tilde U)$ and $f \in \Lip(\R^{n+1}; \R^{n+1})$ is $\E$-admissible in $C$\nfor some $C\subset\joinrel\subset \tilde U$, then the open partition $\tilde \E \in \op^N(\tilde U)$ will be denoted $f_{\star}\E$. 
\n\\end{notation}\n\n\n\n\\subsection{Area reducing Lipschitz deformations}\n\n\\begin{definition}\\label{def:Lip_def}\nFor $\\E = \\{E_i\\}_{i=1}^N \\in \\op^N(\\tilde U)$, $j \\in \\Na$ and a closed set $C \\subset\\joinrel\\subset\\tilde U$, define $\\bE(\\E, C, j)$ to be the set of all $\\E$-admissible functions $f$ in $C$ such that:\n\n\\begin{itemize}\n\n\\item[(a)] $\\abs{f(x) - x} \\leq \\sfrac{1}{j^2}$ for every $x \\in C$;\n\n\\item[(b)] $\\Leb^{n+1}(\\tilde E_i \\triangle E_i) \\leq \\sfrac1j$ for all $i = 1,\\dots,N$, where $\\tilde E_i = {\\rm int}\\,(f(E_i))$, and where $E \\triangle F := \\left[ E \\setminus F \\right] \\cup \\left[ F \\setminus E \\right]$ is the symmetric difference of the sets $E$ and $F$;\n\n\\item[(c)] $\\| \\partial f_{\\star}\\E\\|(\\phi) \\leq \\|\\partial\\E\\|(\\phi)$ for all $\\phi \\in \\cA_j$. Here, $f_{\\star}\\E = \\{\\tilde E_i \\}_{i=1}^N$ and $\\|\\partial\\E\\|$ is the weight of the multiplicity one varifold associated to the open partition $\\E$.\n\n\n\\end{itemize}\n\n\\end{definition}\n\\noindent\nThe set $\\bE(\\E,C,j)$ is not empty, as it contains the identity map. \n\n\\begin{definition} \\label{def:excess}\nGiven $\\E \\in \\op^N(\\tilde U)$ and $j$, and given a closed set $C \\subset\\joinrel\\subset \\tilde U$, we define \n\\begin{equation} \\label{e:excess}\n\\begin{split}\n\\Delta_j\\|\\partial\\E\\|(C) :&= \\inf_{f \\in \\bE(\\E,C,j)} \\left\\lbrace \\|\\partial f_{\\star}\\E\\|(C) - \\|\\partial \\E\\|(C) \\right\\rbrace \\\\ &= \\inf_{f \\in \\bE(\\E,C,j)} \\left\\lbrace \\|\\partial f_{\\star}\\E\\|(\\R^{n+1}) - \\|\\partial \\E\\|(\\R^{n+1}) \\right\\rbrace\\,.\n\\end{split}\n\\end{equation}\n\\end{definition}\n\\noindent\nObserve that it always holds $\\Delta_j\\|\\partial\\E\\|(C) \\leq 0$, since the identity map $f(x)=x$ belongs to $\\bE(\\E,C,j)$. 
The quantity $\\Delta_j\\|\\partial\\E\\|(C)$ measures the extent to which $\\|\\partial\\E\\|$ can be reduced by acting with area reducing Lipschitz deformations in $C$. \n\n\\subsection{Smoothing of varifolds and first variations}\n\nWe let $\\psi \\in C^{\\infty}(\\R^{n+1})$ be a radially symmetric function such that\n\\begin{equation} \\label{e:cutoff_smoothing}\n\\begin{split}\n& \\psi(x) = 1 \\mbox{ for } \\abs{x} \\leq 1\/2\\,, \\qquad \\psi(x) = 0 \\mbox{ for } \\abs{x} \\geq 1\\,, \\\\\n& 0 \\leq \\psi(x) \\leq 1\\,, \\quad \\abs{\\nabla \\psi(x)} \\leq 3\\,, \\quad \\|\\nabla^2\\psi(x)\\| \\leq 9 \\mbox{ for all } x \\in \\R^{n+1}\\,, \n\\end{split}\n\\end{equation}\nand we define, for each $\\eps \\in \\left( 0, 1 \\right)$, \n\\begin{equation} \\label{e:smoothing_kernel}\n\\hat\\Phi_\\eps(x) := \\frac{1}{(2\\pi \\eps^2)^{\\frac{n+1}{2}}} \\, \\exp\\left( - \\frac{\\abs{x}^2}{2\\eps^2} \\right)\\,, \\quad \\Phi_\\eps(x) := c(\\eps)\\, \\psi(x) \\, \\hat\\Phi_\\eps(x)\\,,\n\\end{equation}\nwhere the constant $c(\\eps)$ is chosen in such a way that\n\\begin{equation} \\label{e:normalization_kernel}\n\\int_{\\R^{n+1}} \\Phi_\\eps(x) \\, dx = 1\\,.\n\\end{equation}\n\nThe function $\\Phi_\\eps$ will be adopted as a convolution kernel for the definition of the smoothing of a varifold. We record the properties of $\\Phi_\\eps$ in the following lemma (cf. \\cite[Lemma 4.13]{KimTone}).\n\n\\begin{lemma} \\label{l:properties_kernel}\nThere exists a constant $c = c(n)$ such that, for $\\eps \\in \\left( 0, 1 \\right)$, we have:\n\n\\begin{align} \n\\abs{\\nabla \\Phi_\\eps(x)} &\\leq \\frac{\\abs{x}}{\\eps^2} \\, \\Phi_\\eps(x) + c\\, \\chi_{B_1 \\setminus B_{1\/2}}(x) \\, \\exp(-\\eps^{-1})\\,, \\label{e:1st_bound} \\\\\n\\|\\nabla^2 \\Phi_\\eps(x)\\| &\\leq \\frac{\\abs{x}^2}{\\eps^4} \\, \\Phi_\\eps(x) + \\frac{c}{\\eps^2}\\, \\Phi_\\eps(x) + c\\, \\chi_{B_1 \\setminus B_{1\/2}}(x) \\, \\exp(-\\eps^{-1})\\,. 
\\label{e:2nd_bound}\n\\end{align}\n\\end{lemma}\n\nNext, we use the convolution kernel $\\Phi_\\eps$ in order to define the smoothing of a varifold and its first variation. Recall that, given a Radon measure $\\mu$ on $\\R^{n+1}$, the smoothing of $\\mu$ by means of the kernel $\\Phi_\\eps$ is defined to be the Radon measure $\\Phi_\\eps \\ast \\mu$ given by\n\\begin{equation} \\label{e:smoothing_measure}\n(\\Phi_\\eps \\ast \\mu)(\\phi) := \\mu(\\Phi_\\eps \\ast \\phi) = \\int_{\\R^{n+1}} \\int_{\\R^{n+1}} \\Phi_\\eps(x-y) \\, \\phi(y) \\, dy \\, d\\mu(x) \\qquad \\mbox{for every } \\phi \\in C_c(\\R^{n+1}) \\,.\n\\end{equation}\n\nThe definition of smoothing of a varifold $V$ is the equivalent of \\eqref{e:smoothing_measure} when regarding $V$ as a Radon measure on $\\bG_n(\\R^{n+1})$, keeping in mind that the operator $(\\Phi_\\eps \\ast)$ acts on a test function $\\varphi \\in C_c(\\bG_n(\\R^{n+1}))$ by convolving only the space variable. Explicitly, we give the following definition.\n\n\\begin{definition} \\label{def:smoothing_varifold}\nGiven $V \\in \\V_n(\\R^{n+1})$, we let $\\Phi_\\eps \\ast V \\in \\V_n(\\R^{n+1})$ be the varifold defined by\n\\begin{equation} \\label{e:smoothing_varifold}\n(\\Phi_\\eps \\ast V) (\\varphi) := V (\\Phi_\\eps \\ast \\varphi) = \\int_{\\bG_n(\\R^{n+1})} \\int_{\\R^{n+1}} \\Phi_\\eps(x-y) \\, \\varphi(y,S) \\, dy \\, dV(x,S) \n\\end{equation} \nfor every $\\varphi \\in C_c(\\bG_n(\\R^{n+1}))$. \n\\end{definition}\n\nObserve that, given a Radon measure $\\mu$ on $\\R^{n+1}$, one can identify the measure $\\Phi_\\eps \\ast \\mu$ with a $C^{\\infty}$ function by means of the Hilbert space structure of $L^2(\\R^{n+1}) = L^2(\\mathcal L^{n+1})$. 
Indeed, for any $\\phi \\in C_c(\\R^{n+1})$ we have that\n\\[\n(\\Phi_\\eps \\ast \\mu) (\\phi) = \\langle \\Phi_\\eps \\ast \\mu\\,|\\, \\phi \\rangle_{L^2(\\R^{n+1})}\\,,\n\\] \nwhere $\\Phi_\\eps \\ast \\mu \\in C^\\infty(\\R^{n+1})$ is defined by \n\\[\n(\\Phi_{\\eps} \\ast \\mu)(x) := \\int_{\\R^{n+1}} \\Phi_\\eps(x-y) \\, d\\mu(y)\\,.\n\\]\n\nThese considerations suggest the following definition for the smoothing of the first variation of a varifold.\n\n\\begin{definition} \\label{def:smoothing_first_var}\nGiven $V \\in \\V_n(\\R^{n+1})$, the smoothing of $\\delta V$ by means of the convolution kernel $\\Phi_\\eps$ is the vector field $\\Phi_\\eps \\ast \\delta V \\in C^{\\infty}(\\R^{n+1}; \\R^{n+1})$ defined by\n\\begin{equation} \\label{e:smoothing_first_var}\n (\\Phi_\\eps \\ast \\delta V) (x) := \\int_{\\bG_n(\\R^{n+1})} S( \\nabla \\Phi_\\eps(y-x) ) \\, dV(y,S)\\,, \n\\end{equation}\nin such a way that\n\\begin{equation} \\label{e:why_smoothing_first_var}\n \\delta V (\\Phi_\\eps \\ast g) = \\langle \\Phi_\\eps \\ast \\delta V \\, | \\, g \\rangle_{L^2(\\R^{n+1})} \\qquad \\mbox{for every } g \\in C^{1}_c(\\R^{n+1}; \\R^{n+1})\\,.\n\\end{equation}\n\n\\end{definition}\n\n\\begin{lemma} \\label{l:smoothing}\nFor $V \\in \\V_n(\\R^{n+1})$, we have\n\\begin{align} \n\\Phi_\\eps \\ast \\|V \\| &= \\| \\Phi_\\eps \\ast V \\| \\,, \\label{e:comm1} \\\\\n\\Phi_\\eps \\ast \\delta V &= \\delta (\\Phi_\\eps \\ast V) \\,. \\label{e:comm2}\n\\end{align}\nMoreover, if $\\|V\\|(\\R^{n+1}) < \\infty$ then\n\\begin{equation} \\label{smoothing mass estimate}\n\\| \\Phi_\\eps \\ast V \\| (\\R^{n+1}) \\leq \\|V\\|(\\R^{n+1}) \\,.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n\nThe identities \\eqref{e:comm1} and \\eqref{e:comm2} are proved in \\cite[Lemma 4.16]{KimTone}. 
Concerning \\eqref{smoothing mass estimate}, we observe that for any $\\varphi \\in C_c(\\bG_n(\\R^{n+1}))$ with $\\| \\varphi \\|_{0} \\leq 1$, setting $\\tau_z(x) := x-z$, it holds:\n\\[\n\\begin{split}\n(\\Phi_\\eps \\ast V) (\\varphi) &= \\int_{\\bG_n(\\R^{n+1})} \\int_{\\R^{n+1}} \\Phi_\\eps(x-y) \\, \\varphi(y,S) \\, dy \\, dV(x,S) \\\\\n&= \\int_{\\bG_n(\\R^{n+1})} \\int_{\\R^{n+1}} \\Phi_\\eps(z) \\, \\varphi(x-z,S) \\, dz \\, dV(x,S) \\\\\n&= \\int_{\\R^{n+1}} \\Phi_\\eps(z) \\int_{\\bG_n(\\R^{n+1})} \\varphi(\\tau_z(x),S) \\, dV(x,S)\\, dz \\leq \\|V\\|(\\R^{n+1})\\,.\n\\end{split}\n\\]\nTaking the supremum among all functions $\\varphi \\in C_c(\\bG_n(\\R^{n+1}))$ with $\\|\\varphi\\|_0 \\leq 1$ completes the proof.\n\\end{proof}\n\n\\subsection{Smoothed mean curvature vector}\n\n\\begin{definition} \\label{def:smcv}\nGiven $V \\in \\V_n(\\R^{n+1})$ and $\\eps \\in \\left( 0, 1 \\right)$, the \\emph{smoothed mean curvature vector} of $V$ is the vector field $h_\\eps(\\cdot, V) \\in C^\\infty(\\R^{n+1}; \\R^{n+1})$ defined by\n\\begin{equation} \\label{e:smcv}\nh_\\eps(\\cdot,V) := - \\Phi_\\eps \\ast \\left( \\frac{\\Phi_\\eps \\ast \\delta V}{\\Phi_\\eps \\ast \\|V \\| + \\eps} \\right)\\,.\n\\end{equation}\n\n\\end{definition}\n\nWe will often make use of \\cite[Lemma 5.1]{KimTone} with $\\Omega \\equiv 1$ (and $c_1 = 0$). For the reader's convenience, we provide here the statement.\n\n\\begin{lemma} \\label{l:smc estimates}\nFor every $M > 0$, there exists a constant $\\eps_1 \\in \\left( 0,1 \\right)$, depending only on $n$ and $M$ such that the following holds. Let $V \\in \\V_{n}(\\R^{n+1})$ be an $n$-dimensional varifold in $\\R^{n+1}$ such that $\\|V\\|(\\R^{n+1}) \\leq M$, and, for every $\\eps \\in \\left( 0, \\eps_1 \\right)$, let $h_{\\eps}(\\cdot, V)$ be its smoothed mean curvature vector. 
Then:\n\\begin{equation} \\label{e:h in L infty}\n\\abs{h_\\eps(x, V)} \\leq 2 \\, \\eps^{-2}\\,,\n\\end{equation}\n\n\\begin{equation} \\label{e:nabla h in L infty}\n\\|\\nabla h_\\eps(x,V)\\| \\leq 2\\, \\eps^{-4}\\,,\n\\end{equation}\n\n\\begin{equation} \\label{e:nabla2 h in L infty}\n\\| \\nabla^2 h_\\eps(x,V) \\| \\leq 2\\, \\eps^{-6}\\,.\n\\end{equation}\n\n\\end{lemma}\n\n\\subsection{The cut-off functions $\\eta_j$}\n\nIn this subsection we construct the cut-off functions which will later be used to truncate the smoothed mean curvature vector in order to produce time-discrete approximate flows which \\emph{almost} preserve the boundary $\\partial\\Gamma_0$.\n\n\\smallskip\n\nGiven a set $E \\subset \\R^{n+1}$ and $s > 0$, $(E)_s$ denotes the $s$-neighborhood of $E$, namely the open set\n\\[\n(E)_s := \\bigcup_{x \\in E} U_{s}(x)\\,.\n\\] \nWe shall also adopt the convention that $(E)_0 = E$. \\\\\n\n\\smallskip\n\nLet $U$ and $\\Gamma_0$ be as in Assumption \\ref{ass:main}.\n\n\\begin{definition} \\label{D and K sets}\n\nWe define for $j\\in\\mathbb N$:\n\\begin{equation} \\label{D sets}\nD_{j} := \\left\\lbrace x \\in U \\, \\colon \\, \\dist(x, \\partial U) \\geq \\frac{2}{j^{\\sfrac{1}{4}}} \\right\\rbrace\\,.\n\\end{equation}\nObserve that $D_j$ is not empty for all $j$ sufficiently large (depending on $U$).\n\nAlso, we define the sets\n\\begin{equation} \\label{K sets}\nK_j := \\left( \\Gamma_0 \\setminus D_j \\right)_{1\/j^{\\sfrac{1}{4}}}\\,, \\qquad \\tilde K_j := \\left( \\Gamma_0 \\setminus D_j \\right)_{2\/j^{\\sfrac{1}{4}}}\\,, \\quad \\mbox{and} \\quad \\hat K_j := \\left( \\Gamma_0 \\setminus D_j \\right)_{3\/j^{\\sfrac18}}\\,,\n\\end{equation}\nso that $K_j \\subset \\tilde K_j \\subset \\hat K_j$.\n\\end{definition}\n\n\\begin{definition} \\label{def:etaj}\nLet $\\psi \\colon \\left(0, \\infty \\right) \\to \\R$ be a smooth function satisfying the following properties:\n\\begin{itemize}\n\\item[(a)] $0 \\leq \\psi(t) \\leq 1$ for every 
$t>0$, $\\psi(t) = t$ for $t \\in \\left( 0, 1\/2 \\right]$, $t\/2 \\leq \\psi(t) \\leq t$ for $t \\in \\left[ 1\/2, 3\/2 \\right]$, $\\psi(t) = 1$ for $t \\geq 3\/2$;\n\n\\item[(b)] $0 \\leq \\psi'(t) \\leq 1$ for every $t > 0$;\n\n\\item[(c)] $\\abs{\\psi''(t)} \\leq 2$ for every $t > 0$.\n\\end{itemize}\n\nFor every $j \\in \\Na$, set\n\\[\n\\hat{\\d}_j(x) := \\dist(x, \\R^{n+1} \\setminus \\left( \\Gamma_0 \\setminus D_j \\right)_{2\/j^{\\sfrac18}}) \\qquad \\mbox{for every $x \\in \\R^{n+1}$}\\,.\n\\]\n\nLet $\\{\\phi_\\rho\\}_{\\rho}$, $\\rho > 0$, be a standard family of mollifiers: precisely, let \n\\[\n\\phi(w) := \n\\begin{cases}\nA_n \\, \\exp\\left( \\frac{1}{\\abs{w}^2 - 1} \\right) & \\mbox{if $\\abs{w} < 1$}\\\\\n0 & \\mbox{otherwise}\\,,\n\\end{cases}\n\\]\nfor a suitable normalization constant $A_n$ chosen in such a way that $\\int_{\\R^{n+1}} \\phi(w) \\, dw = 1$, and define $\\phi_\\rho(z) := \\rho^{-(n+1)}\\, \\phi(z\/\\rho)$. Then, set $\\rho_j := 1\/(j^{\\sfrac14})$, and $\\d_j := \\phi_{\\rho_j} \\ast \\hat{\\d}_j$. We finally define\n\\begin{equation} \\label{e:etaj}\n\\eta_j(x) := \\psi\\left( \\exp\\left( - j^{\\sfrac14} (\\d_j(x) - j^{-\\sfrac14}) \\right) \\right)\\,.\n\\end{equation}\n\\end{definition}\n\n\\begin{lemma} \\label{l:etaj} \nThere exists $J = J(n)$ such that the following properties hold for all $j \\geq J$:\n\\begin{enumerate}\n\\item $\\eta_j \\equiv 1$ on $\\R^{n+1} \\setminus \\hat K_j$;\n\n\\item $0 < \\eta_j \\leq \\exp(-j^{\\sfrac18})$ on $\\tilde K_j$;\n\n\\item $\\eta_j \\in \\cA_{j^{\\sfrac34}}$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nFor the proof of (1), if $x \\notin \\hat{K}_j$ then $\\hat\\d_j(x) = 0$. Moreover, since $\\rho_j = j^{-\\sfrac{1}{4}} < j^{-\\sfrac{1}{8}}$, evidently $\\hat\\d_j(y) = 0$ for all $y \\in B_{\\rho_j}(x)$. 
This implies that\n\[\n\d_j(x) = (\phi_{\rho_j} \ast \hat\d_j)(x) = \int_{B_{\rho_j}(x)} \phi_{\rho_j}(x-y)\, \hat\d_j(y) \, dy = 0\,.\n\]\nHence, $\eta_j(x) = \psi(e) = 1$ because of property (a) of $\psi$ in Definition \ref{def:etaj}. \n\n\smallskip\n\nNext, we prove (2). Let $x \in \tilde K_j$, so that there exists $z \in \Gamma_0 \setminus D_j$ such that $\abs{x-z} < 2\,j^{-\sfrac14}$. If $y \in B_{\rho_j}(x)$, then $\abs{y-z} < 3\, j^{-\sfrac14}$ by the definition of $\rho_j$, and thus, for $j$ suitably large,\n\[\n\hat\d_j(y) = \dist(y, \R^{n+1} \setminus \left( \Gamma_0 \setminus D_j \right)_{2\/j^{\sfrac18}}) \geq 2 j^{-\sfrac{1}{8}} - 3\, j^{-\sfrac14}\,,\n\]\nwhich in turn implies\n\[\n\d_j(x) = (\phi_{\rho_j} \ast \hat\d_j)(x) = \int_{B_{\rho_j}(x)} \phi_{\rho_j}(x-y)\, \hat\d_j(y) \, dy \geq 2 j^{-\sfrac{1}{8}} - 3\, j^{-\sfrac14}\,.\n\]\nHence, setting $t := \exp\left( - j^{\sfrac14} (\d_j(x) - j^{-\sfrac14})\right)$ we have that $0 < t \leq \exp(4 - 2\, j^{\sfrac{1}{8}}) \leq 1\/2$ for $j$ large enough. Thus, by property (a) of $\psi$ in Definition \ref{def:etaj}:\n\[\n\eta_j(x) = \psi(t) = t \leq \exp(4 - 2\, j^{\sfrac{1}{8}}) \qquad \mbox{for every $x \in \tilde K_j$}\,.\n\]\nIn particular, up to taking larger values of $j$, we see that \n\[\n0 < \eta_j(x) \leq e^{-j^{\sfrac18}} \qquad \mbox{for every $x \in \tilde K_j$}\,.\n\]\n\n\smallskip\n\nFinally, we prove (3). 
To this aim, we compute the gradient of $\eta_j$: at any point $x$, we have\n\[\n\nabla\eta_j = - j^{\sfrac14}\, \psi'(t) \, t\, \nabla\d_j\,.\n\]\nUsing that $t = \psi(t)$ for $0 \leq t \leq 1\/2$, $\psi'(t) = 0$ for $t \geq 3\/2$, and that $\abs{t} = t \leq 2\,\psi(t)$ for $t \in \left[ 1\/2,3\/2\right]$, together with the fact that $\abs{\psi'} \leq 1$, we can estimate\n\begin{equation} \label{e:gradient etaj}\n\abs{\nabla\eta_j} \leq 2\,j^{\sfrac14} \, \abs{\nabla\d_j}\, \eta_j \leq 2\, j^{\sfrac14}\,\eta_j\,, \n\end{equation}\nwhere we have used that $\nabla\d_j(x) = \phi_{\rho_j} \ast \nabla \hat{\d}_j (x)$, so that\n\[\n\abs{\nabla \d_j (x)} \leq \int_{B_{\rho_j}(x)} \phi_{\rho_j}(x-y) \, \abs{\nabla \hat\d_j (y)} \, dy \leq 1\,.\n\]\n\nIn particular, $\abs{\nabla \eta_j} \leq j^{\sfrac34}\, \eta_j$ as soon as $j \geq 4$. Next, we compute the Hessian of $\eta_j$:\n\[\n\nabla^2\eta_j = j^{\sfrac12} \, t \, \left( t\,\psi''(t) + \psi'(t) \right) \nabla\d_j \otimes \nabla\d_j - j^{\sfrac14} \, \psi'(t) \, t \, \nabla^2\d_j\,,\n\]\nfrom which we estimate\n\[\n\| \nabla^2\eta_j\| \leq 100 \, j^{\sfrac12} \, \eta_j + 2\, j^{\sfrac14} \, \eta_j \, \|\nabla^2\d_j\|\,.\n\]\n\nNow, observe that \n\[\n\begin{split}\n\| \nabla^2\d_j\| & \leq \int_{B_{\rho_j}(x)} \| \nabla \phi_{\rho_j} (x-y) \otimes \nabla \hat\d_j(y)\| \, dy \leq \int_{B_{\rho_j}} \abs{\nabla \phi_{\rho_j} (z)} \, dz \\\n& = \rho_j^{-1} \int_{B_1} \abs{\nabla\phi(w)} \, dw = C(n)\, \rho_j^{-1}\,.\n\end{split}\n\]\n\nHence, recalling that $\rho_j = j^{-\sfrac14}$, we conclude the estimate\n\begin{equation} \label{e:hessian etaj}\n\|\nabla^2\eta_j\| \leq C(n)\, j^{\sfrac12} \, \eta_j\n\end{equation}\nfor a constant $C$ depending only on $n$. Thus, we conclude $\eta_j \in \cA_{j^{\sfrac34}}$ for $j$ sufficiently large. 
\n\\end{proof}\n\n\n\\subsection{$L^2$ approximations}\n\nIn this subsection, we collect a few estimates of the error terms deriving from working with smoothed first variations and smoothed mean curvature vectors. They will be critically important to deduce the convergence of the discrete approximation algorithm. The first estimate is a modification of \\cite[Proposition 5.3]{KimTone}. We let $\\eta_j$ be the cut-off function as in Definition \\ref{def:etaj}, corresponding to $U$ and $\\Gamma_0$, and we will suppose that $j \\geq J(n)$, in such a way that the conclusions of Lemma \\ref{l:etaj} are satisfied.\n\n\\begin{proposition} \\label{p:prop53}\nFor every $M > 0$, there exists $\\eps_2 \\in \\left( 0, 1 \\right)$ depending only on $n$ and $M$ such that the following holds. For any $j \\geq J(n)$, $g \\in \\cB_j$, $V \\in \\V_n(\\R^{n+1})$ with $\\|V\\|(\\R^{n+1}) \\leq M$, $\\eps \\in \\left( 0, \\eps_2 \\right)$ with \n\\begin{equation} \\label{e:eps_smallness}\nj \\leq \\frac12 \\, \\eps^{-\\frac16}\\,,\n\t\\end{equation}\nwe have for $h_\\eps(\\cdot) = h_\\eps(\\cdot,V)$:\n\\begin{equation} \\label{e:prop53}\n\\Abs{\\int_{\\R^{n+1}} h_\\eps \\cdot \\eta_j\\,g \\, d\\|V\\| + \\int_{\\R^{n+1}} \\, (\\Phi_\\eps \\ast \\delta V) \\cdot \\eta_j\\,g \\, dx } \\leq \\eps^{\\frac14} \\, \\left( \\int_{\\R^{n+1}} \\eta_j \\frac{\\abs{\\Phi_\\eps \\ast \\delta V}^2}{\\Phi_\\eps \\ast \\|V\\| + \\eps} \\, dx \\right)^{\\frac12}\\,.\n\\end{equation}\n\\end{proposition}\nGiven the validity of \\eqref{e:why_smoothing_first_var}, we see that \\eqref{e:prop53} measures the deviation from the identity \\eqref{def:generalized mean curvature}. The difference with \\cite[Proposition 5.3]{KimTone} is that there, in place of \n$\\eta_j g$ (left-hand side of \\eqref{e:prop53}) and $\\eta_j$ (right-hand side\nof \\eqref{e:prop53}), we have $g$ and $\\Omega$, respectively. 
We note that $g\,\eta_j$ satisfies\n$|(g\,\eta_j)(x)|\leq j\eta_j(x)$ and $\|\nabla(g\,\eta_j)(x)\|\leq 2\,j^{\sfrac74}\eta_j(x)$: using these, the modification of the proof is \nstraightforward, and thus we omit the details. \n\n\smallskip\n\nThe following is \cite[Proposition 5.4]{KimTone}.\n\begin{proposition} \label{p:prop54}\nFor every $M > 0$, there exists a constant $\eps_3 \in \left( 0, 1 \right)$, depending only on $n$ and $M$, with the following property. Given any $V \in \V_n(\R^{n+1})$ with $\|V\|(\R^{n+1}) \leq M$, $j \in \Na$, $\phi \in \cA_j$, and $\eps \in \left( 0, \eps_3 \right)$ satisfying \eqref{e:eps_smallness}, we have:\n\begin{align}\n\label{e:fv along h vs h in L2}\n\Big| \delta V(\phi\, h_\eps) + \int_{\R^{n+1}} \phi \, \frac{\abs{\Phi_\eps \ast \delta V}^2}{\Phi_\eps \ast \|V\| + \eps} \, dx \Big| & \leq \eps^{\frac{1}{4}} \, \left( \int_{\R^{n+1}} \phi \, \frac{\abs{\Phi_\eps \ast \delta V}^2}{\Phi_\eps \ast \|V\| + \eps} \, dx + 1 \right), \\\n\label{e:L2 norm of h vs approx}\n\int_{\R^{n+1}} \abs{h_\eps}^2 \, \phi \, d\|V\| & \leq (1+\eps^{\frac14}) \int_{\R^{n+1}} \phi \, \frac{\abs{\Phi_\eps \ast \delta V}^2}{\Phi_\eps \ast \|V\| + \eps} \, dx + \eps^{\frac14}\,.\n\end{align}\n\end{proposition}\nNote that formula \eqref{e:fv along h vs h in L2} estimates the deviation from the identity \eqref{def:generalized mean curvature} with $g = h_\eps(\cdot,V)$.\n\n\smallskip\n\nThe next statement is \cite[Proposition 5.5]{KimTone}.\nThe proof is a straightforward modification, using \eqref{e:prop53}.\n\n\begin{proposition} \label{p:prop55}\nFor every $M > 0$, there exists $\eps_4 \in \left( 0, 1 \right)$ depending only on $n$ and $M$ with the following property. 
For any $j \\geq J(n)$, $g \\in \\cB_j$, $V \\in \\V_n(\\R^{n+1})$ with $\\|V\\|(\\R^{n+1}) \\leq M$, $\\eps \\in \\left( 0, \\eps_4 \\right)$ satisfying \\eqref{e:eps_smallness}, it holds\n\\begin{equation} \\label{e:prop55}\n\\Abs{ \\int_{\\R^{n+1}} h_\\eps \\cdot \\eta_j\\,g \\, d\\|V\\| + \\delta V (\\eta_j \\, g) } \\leq \\eps^{\\frac14} \\left( 1 + \\left( \\int_{\\R^{n+1}} \\eta_j\\, \\frac{\\abs{\\Phi_\\eps \\ast \\delta V}^2}{\\Phi_\\eps \\ast \\|V\\| + \\eps} \\, dx \\right)^{\\frac12} \\right) \\,.\n\\end{equation}\n\\end{proposition}\n\n\n\\subsection{Curvature of limit varifolds}\n\nThe next Proposition \\ref{p:prop56} corresponds to \\cite[Proposition 5.6]{KimTone} when there is\nno boundary.\n\\begin{proposition} \\label{p:prop56}\nSuppose that $\\{V_{j_\\ell}\\}_{\\ell=1}^{\\infty} \\subset \\V_n(\\R^{n+1})$ and $\\{\\eps_{j_\\ell}\\}_{\\ell=1}^{\\infty} \\subset \\left( 0, 1 \\right)$ are such that:\n\\begin{enumerate}\n\\item $\\sup_{\\ell} \\|V_{j_\\ell}\\|(\\R^{n+1}) < \\infty$,\n\\item $\\liminf_{\\ell \\to \\infty} \\int_{\\R^{n+1}} \\eta_{j_\\ell}\\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell}} \\ast \\delta V_{j_\\ell}}^2}{\\Phi_{\\eps_{j_\\ell}} \\ast \\|V_{j_\\ell}\\| + \\eps_{j_\\ell}} \\, dx < \\infty $,\n\\item $\\lim_{\\ell \\to \\infty} \\eps_{j_\\ell} = 0$ and $j_\\ell\\leq \\varepsilon_{j_\\ell}^{-\\frac16}\/2$. 
\n\\end{enumerate}\nThen, there exists a subsequence $\\{j'_\\ell\\}\\subset\\{j_\\ell\\}$ such that $V_{j'_{\\ell}} \\to V \\in \\V_{n}(\\R^{n+1})$ in the sense of varifolds, and $V$ has a generalized mean curvature vector $h(\\cdot, V)$ in $U$ such that\n\\begin{equation} \\label{e:prop56}\n\\int_{U} \\abs{h(\\cdot,V)}^2 \\, \\phi \\, d\\|V\\| \\leq \\liminf_{\\ell \\to \\infty} \\int_{\\R^{n+1}} \\eta_{j_\\ell}\\, \\phi \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell}} \\ast \\delta V_{j_\\ell}}^2}{\\Phi_{\\eps_{j_\\ell}} \\ast \\|V_{j_\\ell}\\| + \\eps_{j_\\ell}} \\, dx\n\\end{equation}\nfor every $\\phi \\in C_c(U; \\R^+)$.\n\\end{proposition}\n\\begin{proof}\nBy (1), we may choose a (not relabeled) subsequence $V_{j_\\ell}$ converging to $V$\nas varifolds on $\\R^{n+1}$, and we may assume that the integrals in (2) for this subsequence \nconverge to the $\\liminf$ of the original sequence. \nFix $g\\in C_c^2(U;\\R^{n+1})$. \nFor all sufficiently large $\\ell$, we have \n$g\\,\\eta_{j_\\ell}=g$ due to Lemma \\ref{l:etaj}(1), \\eqref{K sets} and \\eqref{D sets}. Moreover, we may assume that $g\\,\\eta_{j_\\ell}\\in\n\\mathcal B_{j_\\ell}$ due to Lemma \\ref{l:etaj}(3). 
Then, by \\eqref{e:prop55}, (2) and (3), we have\n\\begin{equation}\n\\label{p56-1}\n\\delta V(g)=\\lim_{\\ell\\rightarrow\\infty}\n\\delta V_{j_\\ell}(g\\,\\eta_{j_{\\ell}})=-\\lim_{\\ell\\rightarrow\\infty}\\int_{\\R^{n+1}}\nh_{\\eps_{j_\\ell}}(\\cdot,V_{j_\\ell})\\cdot \\eta_{j_\\ell}\\,g\\,d\\|V_{j_\\ell}\\|.\n\\end{equation}\nSince $\\eta_{j_\\ell}\\in \\mathcal A_{j_\\ell}$ in particular, by the Cauchy-Schartz inequality \nand \\eqref{e:L2 norm of h vs approx}, we have\n\\begin{equation}\n\\label{p56-2}\n\\delta V(g)\\leq \\Big(\\liminf_{\\ell\\rightarrow\\infty} \\int_{\\R^{n+1}} \\frac{|\\Phi_{\\eps_{j_\\ell}}\\ast\n\\delta V_{j_\\ell}|^2\\,\\eta_{j_\\ell}}{\\Phi_{\\eps_{j_\\ell}}\\ast\\|V_{j_\\ell}\\|+\\eps_{j_\\ell}}\\,dx\\Big)^{\\sfrac12}\n\\Big(\\int_{\\R^{n+1}} |g|^2\\,d\\|V\\|\\Big)^{\\sfrac12}.\n\\end{equation}\nThis shows that $\\delta V$ is absolutely continuous with respect to $\\|V\\|$ on\n$U$ and $h(\\cdot, V)$ satisfies\n\\begin{equation}\n\\label{p56-3}\n\\int_{U}|h(\\cdot,V)|^2\\,d\\|V\\|\\leq \n\\liminf_{\\ell\\rightarrow\\infty} \\int_{\\R^{n+1}} \\frac{|\\Phi_{\\eps_{j_\\ell}}\\ast\n\\delta V_{j_\\ell}|^2\\,\\eta_{j_\\ell}}{\\Phi_{\\eps_{j_\\ell}}\\ast\\|V_{j_\\ell}\\|+\\eps_{j_\\ell}}\\,dx.\n\\end{equation}\nGiven $\\phi\\in C^2_c(U;\\R^+)$ ($C_c$ case is by \napproximation), let $i\\in \\mathbb N$ be\narbitrary and consider $\\hat\\phi:=\\phi+i^{-1}$. For all sufficiently large $\\ell$, \nwe have $g\\,\\eta_{j_\\ell}\\hat\\phi\\in\\mathcal B_{j_\\ell}$ and $\\eta_{j_\\ell}\\hat\\phi\\in \\mathcal A_{j_\\ell}$\n(we may assume $|\\hat\\phi|<1$ without loss of generality). 
\nThus, the same computation as above with $g\,\eta_{j_\ell}\hat \phi$ yields\n\begin{equation}\n\label{p56-4}\n\int_{\R^{n+1}}h\cdot g\,\hat\phi\,d\|V\|\leq \n\Big(\liminf_{\ell\rightarrow\infty} \n\int_{\R^{n+1}} \frac{|\Phi_{\eps_{j_\ell}}\ast\n\delta V_{j_\ell}|^2\,\eta_{j_\ell}\hat\phi}{\Phi_{\eps_{j_\ell}}\ast\|V_{j_\ell}\|+\eps_{j_\ell}}\,dx\Big)^{\sfrac12}\n\Big(\int_{\R^{n+1}}|g|^2\hat\phi\,d\|V\|\Big)^{\sfrac12}.\n\end{equation}\nWe then let $i\rightarrow\infty$ in \eqref{p56-4} to replace $\hat \phi$ by $\phi$,\nand finally we approximate $h(\cdot,V)$ by $g$ to obtain \eqref{e:prop56}. \n\end{proof}\n\n\subsection{Motion by smoothed mean curvature with boundary damping}\n\nWe now prove the following proposition, which contains the perturbation estimates for a varifold $V$ moved for a time $\Delta t$ by a vector field obtained by damping its smoothed mean curvature near the boundary. \n\n\begin{proposition} \label{p:motion by smc}\nFor every $M > 0$, there exists $\eps_5 \in \left( 0,1 \right)$, depending only on $n$, $M$ and $U$, such that the following holds. 
Suppose that:\n\\begin{enumerate}\n\n\\item $V \\in \\V_n(\\R^{n+1})$ satisfies $\\spt\\,\\|V\\| \\subset \\left( U \\right)_{1}$ and $\\|V\\|(\\R^{n+1}) \\leq M$;\n\n\\item $j \\geq J(n)$ and $\\eta_j$ is as in Definition \\ref{def:etaj};\n\n\\item $\\eps \\in \\left( 0, \\eps_5 \\right)$ satisfies \\eqref{e:eps_smallness};\n\n\\item $\\Delta t \\in \\left[ 2^{-1} \\eps^{\\kappa}, \\eps^\\kappa\\right]$, with \n\\[\n\\kappa = 3n + 20 \\,.\n\\]\n\\end{enumerate}\n Define\n\\[\nf(x) := x + \\eta_j(x) h_\\eps(x,V) \\Delta t\\,.\n\\] \nThen, for every $\\phi \\in \\cA_j$ we have the following estimates.\n\n\\begin{equation} \\label{e:smc1}\n\\left| \\frac{\\|f_\\sharp V\\|(\\phi) - \\|V\\|(\\phi)}{\\Delta t} - \\delta(V,\\phi)(\\eta_j h_{\\eps}(\\cdot,V)) \\right| \\leq \\eps^{\\kappa - 10}\\,,\n\\end{equation}\n\n\\begin{equation} \\label{e:smc2}\n\\frac{\\|f_\\sharp V\\|(\\R^{n+1}) - \\|V\\|(\\R^{n+1})}{\\Delta t} + \\frac{1}{4} \\int_{\\R^{n+1}} \\eta_j\\, \\frac{\\abs{\\Phi_\\eps \\ast \\delta V}^2}{\\Phi_\\eps \\ast \\|V\\| + \\eps} \\, dx \\leq 2\\,\\eps^{\\sfrac14}\\,.\n\\end{equation}\n\nFurthermore, if also $\\|f_\\sharp V\\|(\\R^{n+1}) \\leq M$, then we have\n\n\\begin{equation} \\label{e:smc3}\n\\abs{\\delta(V,\\phi)(\\eta_j \\, h_\\eps(\\cdot, V)) - \\delta(f_\\sharp V, \\phi)(\\eta_j \\, h_\\eps(\\cdot, f_\\sharp V))} \\leq \\eps^{\\kappa-2n-18}\\,,\n\\end{equation}\n\n\\begin{equation} \\label{e:smc4}\n\\left| \\int_{\\R^{n+1}} \\eta_j \\frac{\\abs{\\Phi_\\eps \\ast \\delta V}^2}{\\Phi_\\eps \\ast \\|V\\| + \\eps} \\, dx - \\int_{\\R^{n+1}} \\eta_j \\frac{\\abs{\\Phi_\\eps \\ast \\delta (f_\\sharp V)}^2}{\\Phi_\\eps \\ast \\|f_\\sharp V\\| + \\eps} \\, dx \\right| \\leq \\eps^{\\kappa-3n-18}\\,.\n\\end{equation}\n\n\\end{proposition}\n\n\n\\begin{proof}\nWe want to estimate the following quantity\n\n\\[\nA := \\|f_\\sharp V \\|(\\phi) - \\|V\\|(\\phi) - \\delta(V,\\phi)(\\eta_j h_\\eps(\\cdot,V)) \\, \\Delta t = \\|f_\\sharp V \\|(\\phi) - \\|V\\|(\\phi) 
- \\delta(V,\\phi)(F)\\,,\n\\]\nwhere $F(x) := \\eta_j(x) h_\\eps(x,V) \\Delta t = f(x) - x$. By \\eqref{pushfd} and \\eqref{defFV2}, we have that\n\\[\nA = \\int_{\\bG_{n}(\\R^{n+1})} \\{ \\phi(f(x)) \\, \\abs{\\Lambda_n\\nabla f(x) \\circ S} - \\phi(x) - \\phi(x) \\, \\nabla F \\cdot S - F \\cdot \\nabla \\phi \\} \\, dV(x,S)\\,,\n\\]\nwhich can be written as\n\\[\nA = I_1 + I_2 + I_3\\,,\n\\]\nwith \n\\begin{align*}\nI_1 :&= \\int_{\\bG_{n}(\\R^{n+1})} \\left( \\phi(f(x)) - \\phi(x) \\right) \\, \\left( \\abs{\\Lambda_n\\nabla f(x) \\circ S} -1 \\right) \\, dV(x,S)\\,, \\\\\nI_2 :&= \\int_{\\bG_n(\\R^{n+1})} \\phi(x)\\, \\left( \\abs{\\Lambda_n\\nabla f(x) \\circ S} - 1 - \\nabla F \\cdot S \\right) \\, dV(x,S)\\,, \\\\\nI_3 :&= \\int_{\\bG_{n}(\\R^{n+1})} \\left( \\phi(f(x)) - \\phi(x) - \\nabla\\phi(x) \\cdot F(x) \\right) \\, dV(x,S)\\,.\n\\end{align*}\n\nChoose $\\eps_5 \\leq \\min\\{\\eps_1, \\eps_3\\}$, so that the conclusions of Lemma \\ref{l:smc estimates} and Proposition \\ref{p:prop54} hold with $\\eps \\in \\left( 0, \\eps_5 \\right)$. 
In order to estimate the size of the various integrands appearing in the definition of $I_1, I_2$ and $I_3$, we first observe that, by \\eqref{e:h in L infty} and our assumption on $\\Delta t$,\n\n\\begin{equation} \\label{F in L infty}\n\\abs{F(x)} = \\abs{\\eta_j(x)\\, h_\\eps(x, V)\\, \\Delta t} \\leq 2\\, \\eps^{\\kappa - 2}\\,.\n\\end{equation}\n\nFurthermore, using \\eqref{e:h in L infty}, \\eqref{e:nabla h in L infty}, \\eqref{e:eps_smallness}, and the fact that $\\eta_j \\in \\cA_j$ we obtain\n\n\\begin{equation} \\label{nabla F in L infty}\n\\| \\nabla F \\| \\leq \\Delta t\\, \\left( \\eta_j \\|\\nabla h_\\eps\\| + \\| h_\\eps \\otimes \\nabla \\eta_j\\| \\right) \\leq \\eps^\\kappa \\left( 2\\,\\eps^{-4} + 2\\, j \\, \\eps^{-2} \\right) \\leq 3\\, \\eps^{\\kappa - 4}\\,.\n\\end{equation}\n\nSince $\\phi \\in \\cA_j$, we can use the results of Lemma \\ref{l:class properties} to estimate:\n\n\\begin{align} \n\\abs{\\phi(f(x)) - \\phi(x)} &\\overset{\\eqref{e:1st_order}}{\\leq} j \\abs{F(x)} \\phi(x) \\exp\\left( j \\abs{F(x)} \\right) \\leq \\eps^{\\kappa - 3}\\,, \\label{e:test1}\\\\\n\\abs{\\phi(f(x)) - \\phi(x) - \\nabla\\phi(x) \\cdot F(x)} &\\overset{\\eqref{e:2nd_order}}{\\leq} j \\abs{F(x)}^2 \\phi(x) \\exp\\left( j \\abs{F(x)} \\right) \\leq \\eps^{\\kappa-5} \\, \\Delta t\\,. 
\\label{e:test2}\n\\end{align}\n\nAnalogously, using that $f(x) = x + F(x)$, so that\n\\[\n\\abs{\\Lambda_n\\nabla f(x) \\circ S} = \\abs{({\\rm Id} + \\nabla F(x)) \\cdot v_1 \\wedge \\ldots \\wedge ({\\rm Id} + \\nabla F(x)) \\cdot v_n}\n\\]\nfor any orthonormal basis $\\{v_1,\\ldots,v_n \\}$ of $S$, we can Taylor expand the tangential Jacobian and deduce the estimates\n\n\\begin{align}\n\\Big|\\abs{\\Lambda_n\\nabla f(x) \\circ S} - 1\\Big| &\\leq c(n) \\, \\|\\nabla F \\| \\overset{\\eqref{nabla F in L infty}}{\\leq} c(n) \\, \\eps^{\\kappa - 4} \\leq c(n) \\, \\Delta t\\, \\eps^{-4} \\leq \\Delta t\\, \\eps^{-5}\\,, \\label{e:Jacobian1}\\\\\n\\Big|\\abs{\\Lambda_n\\nabla f(x) \\circ S} - 1 - \\nabla F \\cdot S\\Big| &\\leq c(n) \\, \\|\\nabla F\\|^2 \\overset{\\eqref{nabla F in L infty}}{\\leq} c(n) \\eps^{2\\,\\kappa - 8} \\leq \\eps^{\\kappa-9} \\, \\Delta t\\,, \\label{e:Jacobian2}\n\\end{align}\nmodulo choosing a smaller value of $\\eps$ if necessary. Putting everything together, we can finally conclude the proof of \\eqref{e:smc1}:\n\n\\begin{equation} \\label{smc1_final}\n\\abs{A} \\leq \\abs{I_1} + \\abs{I_2} + \\abs{I_3} \\leq \\left( \\eps^{\\kappa-8} + \\eps^{\\kappa-9} + \\eps^{\\kappa-5} \\right) \\, \\Delta t \\, \\|V\\|(\\R^{n+1}) \\leq \\eps^{\\kappa-10} \\Delta t\\,.\n\\end{equation}\n\n\\smallskip\n\nIn order to prove \\eqref{e:smc2}, we use \\eqref{e:smc1} with $\\phi(x) \\equiv 1$, which implies that\n\\begin{equation} \\label{smc1 implies smc2}\n\\frac{\\|f_\\sharp V\\|(\\R^{n+1}) - \\|V\\|(\\R^{n+1})}{\\Delta t} \\leq \\delta V(\\eta_j h_{\\eps}(\\cdot, V)) + \\eps^{\\kappa-10}\\,.\n\\end{equation}\n\nOn the other hand, since $\\eta_j \\in \\cA_j$ we can apply \\eqref{e:fv along h vs h in L2} to further estimate\n\n\\begin{equation} \\label{first variation estimate}\n\\delta V(\\eta_j h_{\\eps}) \\leq - (1- \\eps^{\\sfrac14}) \\left( \\int_{\\R^{n+1}} \\eta_j \\, \\frac{\\abs{\\Phi_\\eps \\ast \\delta V}^2}{\\Phi_\\eps \\ast \\|V\\| + \\eps} \\, dx 
\\right) + \\eps^{\\sfrac14}\\,,\n\\end{equation}\n\nso that \\eqref{e:smc2} follows by choosing $\\eps$ so small that $1 - \\eps^{\\sfrac14} \\geq 1\/4$.\n\n\\smallskip\n\nFinally, we turn to the proof of \\eqref{e:smc3} and \\eqref{e:smc4}. In order to simplify the notation, let us write $\\hat{V}$ instead of $f_\\sharp V$. Using the same strategy as in \\cite[Proof of Proposition 5.7]{KimTone}, we can estimate\n\\[\n\\abs{\\Phi_\\eps \\ast \\|\\hat V\\|(x) - \\Phi_\\eps \\ast \\|V\\|(x)} \\leq I_1 + I_2\\,,\n\\]\nwhere \n\\[\nI_1 = \\int \\abs{\\Phi_\\eps(f(y) - x) - \\Phi_\\eps(y-x)}\\, \\abs{\\Lambda_n \\nabla f(y) \\circ S} \\, dV(y,S)\\,,\n\\]\nand\n\\[\nI_2 = \\int \\Phi_\\eps(y-x) \\, \\abs{\\abs{\\Lambda_n \\nabla f \\circ S} -1 }\\, dV(y,S)\\,.\n\\]\n\nThe first term can be estimated by observing that for some point $\\hat y$ on the segment $\\left[ y-x, f(y)-x\\right]$,\n\\[\n\\begin{split}\n\\abs{\\Phi_\\eps(f(y) - x) - \\Phi_\\eps(y-x)} &\\leq \\abs{\\nabla \\Phi_\\eps(\\hat y)} \\, \\abs{F(y)} \\\\ \n&\\overset{\\eqref{e:1st_bound}}{\\leq} \\abs{F(y)} \\, \\left( \\eps^{-2} \\abs{\\hat y} \\Phi_\\eps(\\hat y) + c\\, \\chi_{B_1 \\setminus B_{1\/2}}(\\hat y) \\, \\exp(-\\eps^{-1}) \\right)\\\\\n&\\overset{\\eqref{F in L infty}}{\\leq} c(n)\\, \\eps^{\\kappa-n-5}\\, \\chi_{B_2(x)}(y)\\,,\n\\end{split}\n\\]\nand using that\n\\[\n\\abs{\\Lambda_n\\nabla f(y) \\circ S} \\leq 1 + \\eps^{\\kappa - 5}\n\\]\nbecause of \\eqref{e:Jacobian1}, so that\n\\[\nI_1 \\leq \\eps^{\\kappa-n-6} \\, \\|V\\|(B_2(x))\\,.\n\\]\nConcerning the second term in the sum, we can use \\eqref{e:Jacobian1} again to estimate\n\\[\nI_2 \\leq c(n)\\, \\eps^{-n-1}\\, \\eps^{\\kappa - 5 }\\, \\|V\\|(B_{1}(x))\\,.\n\\]\nPutting the two estimates together, we see that\n\\begin{equation} \\label{smoothed measures}\n\\abs{\\Phi_\\eps \\ast \\|\\hat V\\|(x) - \\Phi_\\eps \\ast \\|V\\|(x)} \\leq \\eps^{\\kappa-n-7} \\, \\|V\\|(B_2(x))\\,.\n\\end{equation}\nAnalogous calculations lead 
to\n\\begin{equation} \\label{smoothed first variations}\n\\abs{\\Phi_\\eps \\ast \\delta \\hat V(x) - \\Phi_\\eps \\ast \\delta V(x)} \\leq \\eps^{\\kappa-n-9} \\, \\|V\\|(B_2(x))\\,.\n\\end{equation}\nThe rough estimates also give\n\\begin{equation} \\label{rough estimates}\n\\abs{\\Phi_\\eps \\ast \\delta V(x)}\\,, \\abs{\\Phi_\\eps \\ast \\delta \\hat V(x)} \\leq \\eps^{-n-4} \\, \\|V\\|(B_{2}(x))\\,.\n\\end{equation}\nThe estimates \\eqref{smoothed measures}, \\eqref{smoothed first variations}, and \\eqref{rough estimates} immediately yield\n\\begin{equation} \\label{comparison1}\n\\left| \\frac{\\Phi_\\eps \\ast \\delta\\hat V}{\\Phi_\\eps \\ast \\|\\hat V\\| + \\eps} - \\frac{\\Phi_\\eps \\ast \\delta V}{\\Phi_\\eps \\ast \\|V\\| + \\eps} \\right| \\leq \\eps^{\\kappa - n - 10}\\, \\|V\\|(B_2(x)) + \\eps^{\\kappa-2n-13} \\, \\|V\\|(B_2(x))^2\\,,\n\\end{equation}\nas well as\n\\begin{equation} \\label{comparison2}\n\\left| \\frac{\\abs{\\Phi_\\eps \\ast \\delta\\hat V}^2}{\\Phi_\\eps \\ast \\|\\hat V\\| + \\eps} - \\frac{\\abs{\\Phi_\\eps \\ast \\delta V}^2}{\\Phi_\\eps \\ast \\|V\\| + \\eps} \\right| \\leq \\eps^{\\kappa - 2n - 15} \\, \\|V\\|(B_2(x))^2 + \\eps^{\\kappa-3n-17} \\, \\|V\\|(B_2(x))^3\\,.\n\\end{equation}\n\nObserve that, since $\\spt\\|V\\|\\subset \\left( U \\right)_1$, the right-hand side of estimates \\eqref{comparison1} and \\eqref{comparison2} is zero whenever $\\dist(x, {\\rm clos}(U)) > 3$. 
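\nFor the reader's convenience, we record the elementary identity behind \\eqref{comparison1}. Setting $a:=\\Phi_\\eps\\ast\\delta\\hat V(x)$, $a':=\\Phi_\\eps\\ast\\delta V(x)$, $b:=\\Phi_\\eps\\ast\\|\\hat V\\|(x)$ and $b':=\\Phi_\\eps\\ast\\|V\\|(x)$, we have\n\\[\n\\frac{a}{b+\\eps}-\\frac{a'}{b'+\\eps}=\\frac{a-a'}{b+\\eps}+\\frac{a'\\,(b'-b)}{(b+\\eps)(b'+\\eps)}\\,,\n\\]\nso that, since $b+\\eps\\geq\\eps$ and $b'+\\eps\\geq\\eps$, the two terms are bounded by $\\eps^{-1}\\,\\eps^{\\kappa-n-9}\\,\\|V\\|(B_2(x))$ and $\\eps^{-2}\\,\\eps^{-n-4}\\,\\eps^{\\kappa-n-7}\\,\\|V\\|(B_2(x))^2$ respectively, thanks to \\eqref{smoothed first variations}, \\eqref{rough estimates}, and \\eqref{smoothed measures}; this is precisely \\eqref{comparison1}. The estimate \\eqref{comparison2} follows along the same lines, after writing\n\\[\n\\frac{\\abs{a}^2}{b+\\eps}-\\frac{\\abs{a'}^2}{b'+\\eps}=\\Big(\\frac{a}{b+\\eps}-\\frac{a'}{b'+\\eps}\\Big)\\cdot a+\\frac{a'}{b'+\\eps}\\cdot(a-a')\\,.\n\\]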
Hence,\n\\eqref{comparison2} and the mass bound $\\|V\\|(B_2(x)) \\leq \\|V\\|(\\R^{n+1}) \\leq M$ imply that\n\\[\n\\begin{split}\n&\\left| \\int_{\\R^{n+1}} \\eta_j \\frac{\\abs{\\Phi_\\eps \\ast \\delta V}^2}{\\Phi_\\eps \\ast \\|V\\| + \\eps} \\, dx - \\int_{\\R^{n+1}} \\eta_j \\frac{\\abs{\\Phi_\\eps \\ast \\delta (f_\\sharp V)}^2}{\\Phi_\\eps \\ast \\|f_\\sharp V\\| + \\eps} \\, dx \\right| \\\\ &\\hspace{2cm}\\leq \\left(\\eps^{\\kappa - 2n - 15} \\, M^2 + \\eps^{\\kappa-3n-17} \\, M^3 \\right)\\, \\int_{\\left( U \\right)_3} \\eta_j(x) \\, dx \\leq \\eps^{\\kappa-3n-18} \n\\end{split}\n\\]\nby possibly choosing a smaller value of $\\eps$ (depending on $U$ and $M$). This proves \\eqref{e:smc4}. \n\nFinally, we prove \\eqref{e:smc3}. By \\eqref{e:smcv}, \\eqref{comparison1}, and\nthe properties of $\\Phi_\\eps$, we deduce that\n\\begin{align}\\label{heV}\n\\norm{\\nabla^l h_\\eps(V) - \\nabla^l h_\\eps(\\hat V)} &\\leq \\eps^{\\kappa-2n-14-2l}(M+M^2)\n\\end{align}\nfor $l=0,1,2$. We can conclude using \\eqref{heV}, \\eqref{F in L infty}-\\eqref{e:Jacobian1} and\nsuitable interpolations that:\n\\begin{align*}\n&\\abs{\\delta (V,\\phi)(\\eta_j \\, h_\\eps(V)) - \\delta(\\hat V, \\phi)(\\eta_j \\, h_\\eps(\\hat V))} \\\\\n&\\qquad = \\Big| \\int_{\\bG_n(\\R^{n+1})} \\left\\lbrace \\phi \\, \\nabla (\\eta_j \\, h_{\\eps}(V) ) \\cdot S + \\eta_j \\, h_{\\eps}(V) \\cdot \\nabla \\phi\\right\\rbrace dV(x,S) \\\\\n&\\qquad \\qquad - \\int_{\\bG_{n}(\\R^{n+1})} \\big\\{ \\phi \\circ f \\, \\left[ \\nabla(\\eta_j \\, h_\\eps(\\hat V)) \\right] \\circ f \\cdot (\\nabla f \\circ S) \\\\\n&\\qquad \\qquad \\qquad \\qquad \\qquad + (\\eta_j \\, h_\\eps(\\hat V)) \\circ f \\cdot (\\nabla \\phi \\circ f)\\big\\} \\abs{\\Lambda_n\\nabla f \\circ S} \\, dV(x,S) \\Big| \\\\\n&\\qquad\\leq \\eps^{\\kappa-2n-18}\\,. 
\\qedhere\n\\end{align*}\n\\end{proof}\n\n\\section{Existence of limit measures} \\label{sec:limit flow}\n\n\\subsection{The construction of the approximate flows}\n\nSuppose $U$ and $\\Gamma_0$ are as in Assumption \\ref{ass:main}. Together with the sets $D_j, K_j, \\tilde K_j, \\hat K_j$ introduced in Definition \\ref{D and K sets}, for $k = 0,1,\\ldots$, we set\n\\[\nD_{j,k} := \\left\\lbrace x \\in U \\, \\colon \\, \\dist(x, \\partial U) \\geq \\frac{1}{j^{\\sfrac14}} - k \\, \\exp(-j^{\\sfrac18}) \\right\\rbrace \\,.\n\\]\nOnce again, the indices $j$ and $k$ are chosen in such a way that the corresponding sets $D_{j,k}$ are non-empty proper subsets of $U$. Observe that we have the elementary inclusions $D_{j,0} \\subset D_{j,k} \\subset D_{j,k'}$ for every $0 \\leq k \\leq k'$, and that $D_j \\subset D_{j,k}$ for every $k$.\n\n\\smallskip \n\nBefore proceeding with the construction of the time-discrete approximate flows, we need to introduce a suitable new class of test functions. Since $U$ is an open and bounded convex domain with boundary $\\partial U$ of class $C^2$, there exists a neighborhood $\\left( \\partial U \\right)_{s_0}$ such that, denoting by $\\d_{U}(x) := \\dist(x, \\R^{n+1} \\setminus U)$, for $x \\in \\left( \\partial U \\right)_{s_0} \\cap U$, the distance function from the boundary, the vector field $\\nu_{U}(x) := - \\nabla \\d_{U}(x)$ is a $C^1$ extension to $\\left( \\partial U \\right)_{s_0}^{-} := \\left( \\partial U \\right)_{s_0} \\cap U$ of the exterior unit normal vector field to $\\partial U$.\n\n\\begin{definition} \\label{radially increasing functions}\nDefine the tubular neighborhood of $\\partial U$ and the vector field $\\nu_{U}$ as above. 
Given an open set $W$, a function $\\phi \\in C^1(\\R^{n+1}; \\R^+)$ is said to be non decreasing in $W$ along the fibers of the normal bundle of $\\partial U$ oriented by $\\nu_{U}$, or simply \\emph{$\\nu_{U}$-non decreasing} in $W$, if for every $x \\in W \\cap \\left( \\partial U \\right)_{s_0}^{-}$ the map\n\\[\nt \\mapsto \\phi(x + t \\, \\nu_{U} (x))\n\\]\nis monotone non decreasing for $t$ such that $x + t \\, \\nu_{U} (x) \\in W \\cap \\left( \\partial U\\right)_{s_0}^{-}$. For $j \\in \\Na$, we will set \n\\begin{equation} \\label{classR}\n\\mathcal{R}_j := \\left\\lbrace \\phi \\in C^1(\\R^{n+1}; \\R^+) \\, \\colon \\, \\phi \\mbox{ is $\\nu_{U}$-non decreasing in $\\R^{n+1} \\setminus D_j$}\\right\\rbrace\\,.\n\\end{equation}\n\\end{definition}\n\nThe following proposition and its proof contain the constructive algorithm which produces the time-discrete approximations of our Brakke flow with fixed boundary.\n\n\\begin{proposition} \\label{p:induction}\nLet $U$, $\\E_0 = \\{E_{0,i}\\}_{i=1}^N \\in \\op^N(U)$, and $\\Gamma_0$ be as in Assumption \\ref{ass:main}. There exists a positive integer $J=J(n)$ with the following property. 
For every $j \\geq J(n)$, there exist $\\eps_j \\in \\left( 0, 1 \\right)$ satisfying \\eqref{e:eps_smallness}, $p_j \\in \\Na$, and, for every $k \\in \\{0,1,\\ldots,j\\,2^{p_j}\\}$, a bounded open set $U_{j,k} \\subset \\R^{n+1}$ with boundary $\\partial U_{j,k}$ of class $C^2$ and an open partition $\\E_{j,k} = \\{E_{j,k,i}\\}_{i=1}^N \\in \\op^N(U_{j,k})$ such that\n\\begin{equation} \\label{e:initial partiation}\nU_{j,0} = U \\quad \\mbox{and} \\quad \\E_{j,0} = \\E_0 \\qquad \\mbox{for every $j$}\\,,\n\\end{equation}\nand such that, setting $\\Delta t_j := 2^{-p_j}$, and defining $\\Gamma_{j,k} := U_{j,k} \\setminus \\bigcup_{i=1}^N E_{j,k,i}$, the following holds true:\n\\begin{enumerate}\n\n\\item $\\partial U_{j,k} \\subset (\\partial U)_{k\\,\\exp(-j^{\\sfrac18})}$ and $U_{j,k} \\triangle U \\subset \\left( \\partial U \\right)_{k\\,\\exp(-j^{\\sfrac18})}$,\n\n\\item $K_j\\cap \\Gamma_{j,k}\\setminus D_{j,k}\\subset (\\Gamma_0)_{k\\,\\exp(-j^{\\sfrac18})}$,\n\n\\item $\\Gamma_{j,k}\\setminus K_j\\subset (D_{j,k})_{j^{-10}}$.\n\\end{enumerate}\n\nMoreover, we have:\n\n\\begin{equation} \\label{induction:mass estimate}\n\\| \\partial \\E_{j,k} \\|(\\R^{n+1}) \\leq \\| \\partial \\E_0 \\|(\\R^{n+1}) + k \\, \\Delta t_j \\, \\eps_j^{\\sfrac16}\\,,\n\\end{equation}\n\n\\begin{equation} \\label{induction:mean curvature}\n\\begin{split}\n \\frac{\\| \\partial \\E_{j,k}\\| (\\R^{n+1}) - \\| \\partial \\E_{j,k-1}\\| (\\R^{n+1}) }{\\Delta t_j} &+ \\frac{1}{4} \\int_{\\R^{n+1}} \\eta_j \\frac{\\abs{\\Phi_{\\eps_j} \\ast \\delta (\\partial \\E_{j,k})}^2}{\\Phi_{\\eps_j} \\ast \\| \\partial \\E_{j,k} \\| + \\eps_j} \\, dx \\\\\n&- \\frac{(1 - j^{-5})}{\\Delta t_j} \\, \\Delta_j \\|\\partial \\E_{j,k-1}\\|(D_j) \\leq \\eps_{j}^{\\sfrac18}\\,,\n\\end{split}\n\\end{equation}\n\n\\begin{equation} \\label{induction:mass variation}\n\\frac{\\| \\partial \\E_{j,k}\\| (\\phi) - \\| \\partial \\E_{j,k-1}\\| (\\phi) }{\\Delta t_j} \\leq \\delta(\\partial \\E_{j,k}, 
\\phi)(\\eta_j\\,h_{\\eps_j}(\\cdot, \\partial \\E_{j,k})) + \\eps_j^{\\sfrac18}\n\\end{equation}\n\nfor every $k \\in \\{1,\\ldots,j\\,2^{p_j}\\}$ and $\\phi \\in \\cA_j \\cap \\mathcal{R}_j$.\n\n\n\\end{proposition}\n\n\\smallskip\n\n\\begin{proof}[{\\bf Proof of Proposition \\ref{p:induction}}]\nSet\n\\begin{equation} \\label{def of M}\nM := \\| \\partial \\E_0 \\|(\\R^{n+1}) + 1 \\,,\n\\end{equation}\nlet $\\kappa = 3n+20$ as in Proposition \\ref{p:motion by smc}, and consider the following set of conditions for $\\eps \\in \\left( 0,1 \\right)$:\n\\begin{equation} \\label{epsilon conditions}\n\\begin{cases}\n& \\eps < \\eps_* := \\min\\{\\eps_1\\,,\\ldots\\,,\\eps_5\\}\\,, \\mbox{with $\\eps_* = \\eps_*(n,U,M)$}\\,,\\\\\n&\\mbox{\\eqref{e:eps_smallness} holds, namely $\\eps^{\\sfrac16} \\leq 1\/(2\\,j)$}\\,,\\\\\n& 2\\,\\eps^{\\kappa-2} \\leq j^{-10}\\,,\\\\\n&2\\, j \\, \\eps^{-\\kappa} \\, \\exp(-j^{\\sfrac18}) \\leq 1\/(4j^{\\sfrac14})\\,.\n\\end{cases}\n\\end{equation}\nNotice that the conditions in \\eqref{epsilon conditions} are compatible for large $j$, namely there exists $j_0$ with the property that for every $j \\geq j_0$ the set of $\\eps \\in \\left( 0, 1 \\right)$ satisfying \\eqref{epsilon conditions} is not empty. Letting $J(n)$ be the number provided by Lemma \\ref{l:etaj}, for every $j \\geq \\max\\{j_0, J(n)\\}$ we choose $\\eps_j \\in \\left( 0, 1 \\right)$ such that all conditions in \\eqref{epsilon conditions} are met. Observe that $\\lim_{j \\to \\infty} \\eps_j = 0$. Then, we choose $p_j \\in \\Na$ such that\n\\begin{equation} \\label{d:time step}\n\\Delta t_j := \\frac{1}{2^{p_j}} \\in \\left( 2^{-1} \\, \\eps_j^{\\kappa}, \\eps_{j}^{\\kappa} \\right] \\,.\n\\end{equation}\n\n\\smallskip\n\nThe argument is constructive, and it proceeds by means of an induction process on $k \\in \\{0,1,\\ldots,j\\, 2^{p_j}\\}$. We set $U_{j,0} := U$ and $\\E_{j,0} := \\E_0$. 
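\nBefore starting the induction, let us note that the compatibility of the conditions in \\eqref{epsilon conditions} can also be checked on an explicit (though by no means unique) choice: for all sufficiently large $j$ one may take, for instance,\n\\[\n\\eps_j:=j^{-7}\\,,\n\\]\nsince then $\\eps_j<\\eps_*$, $\\eps_j^{\\sfrac16}=j^{-\\sfrac76}\\leq 1\/(2j)$ as soon as $j^{\\sfrac16}\\geq 2$, $2\\,\\eps_j^{\\kappa-2}=2\\,j^{-7(\\kappa-2)}\\leq j^{-10}$ because $\\kappa-2\\geq 2$, and $2\\,j\\,\\eps_j^{-\\kappa}\\,\\exp(-j^{\\sfrac18})=2\\,j^{1+7\\kappa}\\,\\exp(-j^{\\sfrac18})\\leq 1\/(4j^{\\sfrac14})$ because the exponential decay dominates every power of $j$.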
Properties (1), (2), (3), as well as the estimate in \\eqref{induction:mass estimate} are then trivially satisfied, given the definition of $M$ and since $U_{j,0}=U$, $\\Gamma_0\\setminus D_{j,0}\\subset\n\\Gamma_0$ and $\\Gamma_0\\setminus K_j\\subset \\Gamma_0 \\cap D_j \\subset D_{j,0}$. Next, let $k \\geq 1$, and assume we have obtained the open partition $\\E_{j,k-1} = \\{E_{j,k-1,i}\\}_{i=1}^N$ of $U_{j,k-1}$ satisfying (1), (2), (3), and \\eqref{induction:mass estimate} with $k-1$ in place of $k$. We will now produce $U_{j,k}$ and $\\E_{j,k} = \\{E_{j,k,i}\\}_{i=1}^N$ satisfying the same conditions with $k$ in place of $k-1$. At the same time, we will also show that each inductive step satisfies \\eqref{induction:mean curvature} and \\eqref{induction:mass variation}. Before proceeding, let us record the inductive assumptions for $U_{j,k-1}$ and $\\Gamma_{j,k-1}:=U_{j,k-1}\\cap\\cup_{i=1}^N\\partial E_{j,k-1,i}$ in the following set of equations:\n\\begin{equation}\n\\label{ind0}\n\\partial U_{j,k-1}\\subset (\\partial U)_{(k-1)\\exp(-j^{\\sfrac18})}\\, \\quad \\mbox{and} \\quad U_{j,k-1} \\triangle U \\subset \\left( \\partial U \\right)_{{(k-1)\\,\\exp(-j^{\\sfrac18})}}\\,,\n\\end{equation}\n\\begin{equation}\n\\label{ind1}\nK_j\\cap \\Gamma_{j,k-1}\\setminus D_{j,k-1}\\subset (\\Gamma_0)_{(k-1)\\exp(-j^{\\sfrac18})}\\,,\n\\end{equation}\n\\begin{equation}\n\\label{ind2}\n\\Gamma_{j,k-1}\\setminus K_j\\subset (D_{j,k-1})_{j^{-10}}\\,,\n\\end{equation}\n\\begin{equation} \\label{indmass}\n\\| \\partial \\E_{j,k-1} \\|(\\R^{n+1}) \\leq \\| \\partial \\E_0 \\|(\\R^{n+1}) + (k-1) \\, \\Delta t_j \\, \\eps_j^{\\sfrac16}\\,.\n\\end{equation}\n\n\\smallskip\n\n{\\bf Step 1: area reducing Lipschitz deformation.} First notice that $D_{j,k-1} \\subset U_{j,k-1}$. Indeed, the definition of $D_{j,k-1}$, \\eqref{ind0}, and the choice of $\\eps_j$ imply that $D_{j,k-1} \\cap (U_{j,k-1} \\triangle U) = \\emptyset$, so that our claim readily follows from $D_{j,k-1} \\subset U$. 
In particular,\n$D_j\\subset D_{j,k-1}\\subset U_{j,k-1}$. Hence, we can choose $f_{1} \\in \\bE(\\E_{j,k-1},D_j,j)$ such that, setting $\\E_{j,k}^\\star := (f_1)_{\\star}\\E_{j,k-1}$ ($\\in \\op^N(U_{j,k-1})$ by Lemma \\ref{l:preserving partitions}), we have\n\\begin{equation} \\label{e:almost minimizing}\n\\| \\partial \\E_{j,k}^\\star\\|(\\R^{n+1}) - \\| \\partial \\E_{j,k-1}\\|(\\R^{n+1}) \\leq (1 - j^{-5})\\, \\Delta_{j}\\|\\partial \\E_{j,k-1}\\|(D_j) \\, \\footnote{Recall that $\\Delta_{j}\\|\\partial \\E_{j,k-1}\\|(D_j) \\leq 0$}\\,. \n\\end{equation}\nSet $\\Gamma_{j,k}^\\star := U_{j,k-1} \\cap \\bigcup_{i=1}^N \\partial E_{j,k,i}^\\star$, and note that\n\\begin{equation}\n\\label{starinv}\n\\Gamma_{j,k}^{\\star}\\setminus D_j=\\Gamma_{j,k-1}\\setminus D_j\n\\end{equation}\nand \n\\begin{equation} \\label{mass estimate step 1}\n\\|\\partial \\E_{j,k}^\\star\\|(\\phi) \\leq \\|\\partial \\E_{j,k-1}\\|(\\phi) \\qquad \\mbox{for every $\\phi \\in \\cA_j$} \\,.\n\\end{equation}\n\n\n{\\bf Step 2: retraction.} Outside of $D_{j,k-1}$, we perform a suitable retraction procedure so that \n$\\Gamma_{j,k}^\\star\\setminus (D_{j,k-1}\\cup K_j)$ is retracted to $\\partial D_{j,k-1}$. \nThis retraction step is not needed for $k=1$, since $\\Gamma_{j,1}^\\star \\cap D_{j,0}^c = \\Gamma_{j,0} \\cap D_{j,0}^c$, and $\\Gamma_{j,0}\\setminus K_j\\subset D_{j,0}$ already. \n\nDefine\n\\begin{equation}\n\\label{ind4}\nA_{j,k}:=\\{x\\in \\partial (D_{j,k-1})_{j^{-10}}\\,:\\, {\\rm dist}\\,(x,\\Gamma_0\\setminus\nD_j)>1\/(2j^{1\/4})\\}\\,,\n\\end{equation}\nand observe that $\\left. f_{1} \\right|_{A_{j,k}} = \\left. {\\rm id} \\right|_{A_{j,k}}$, so that $A_{j,k} \\cap E_{j,k,i}^\\star = A_{j,k} \\cap {\\rm int}(f_{1}(E_{j,k-1,i})) = A_{j,k} \\cap E_{j,k-1,i}$ for every $i = 1,\\ldots,N$. 
In particular, $\\Gamma_{j,k}^\\star \\cap A_{j,k} = \\Gamma_{j,k-1} \\cap A_{j,k}$.\n\nWe claim the validity of the following\n\\begin{lemma}\n\\label{ind5}\nWe have $A_{j,k}\\cap \\Gamma_{j,k}^\\star=\\emptyset$. Moreover, for any $x\\in \\partial A_{j,k}$\n(the boundary as a subset of $\\partial (D_{j,k-1})_{j^{-10}}$), we have ${\\rm dist}\\,(x,\\Gamma_{j,k}^\\star)\n\\geq j^{-10}$. \n\\end{lemma}\n\\begin{proof}\nBy the discussion above, $A_{j,k} \\cap \\Gamma_{j,k}^\\star = A_{j,k} \\cap \\Gamma_{j,k-1}$. By \\eqref{ind2}, $A_{j,k}\\cap \\Gamma_{j,k-1}\\setminus K_j=\\emptyset$. If $x\\in A_{j,k}\\cap \n\\Gamma_{j,k-1}\\cap K_j$, then $x\\in K_j\\cap \\Gamma_{j,k-1}\\setminus D_{j,k-1}$. Then\nby \\eqref{ind1}, ${\\rm dist}\\,(x,\\Gamma_0)<(k-1)\\exp(-j^{\\sfrac18}) \\leq 1\/(4\\, j^{\\sfrac14})$, where the last inequality follows from $k \\leq j\\, 2^{p_j} \\leq 2\\, j\\, \\eps_j^{-\\kappa}$ and the choice of $\\eps_j$. By \\eqref{ind4}, we need to \nhave some $\\tilde x\\in \\Gamma_0\\cap D_j$ such that $|x-\\tilde x|<(k-1)\\exp(-j^{\\sfrac18}) $. \nOn the other hand, by the definitions of $D_{j,k-1}$ and $D_j$, $|x-\\tilde x|\\geq {\\rm dist}(A_{j,k},D_j)>1\/j^{1\/4}$, and we have reached a\ncontradiction. Thus the first claim follows. For the second claim, such a point $x$\nsatisfies ${\\rm dist}\\,(x, \\Gamma_0\\setminus D_j)=1\/(2j^{1\/4})$.\nIf there exists \n$\\tilde x\\in \\Gamma_{j,k}^\\star$ with $|x-\\tilde x|< j^{-10}$, then $\\tilde x \\in \\Gamma_{j,k-1}$, and\n${\\rm dist}\\,(\\tilde x,\\Gamma_0\\setminus D_j)<1\/(2j^{1\/4})+j^{-10}$, so that $\\tilde x\\in K_j\\cap\n\\Gamma_{j,k-1}\\setminus D_{j,k-1}$. By \\eqref{ind1}, ${\\rm dist}\\,(\\tilde x,\\Gamma_0)\n\\leq (k-1)\\exp(-j^{\\sfrac18})$ and thus ${\\rm dist}\\,(x,\\Gamma_0)<(k-1)\\exp(-j^{\\sfrac18})+j^{-10}<1\/(2j^{1\/4})$. Hence there exists $\\hat y\\in \\Gamma_0\\cap D_j$ with $|x-\\hat y|<1\/(2j^{1\/4})$, while, as in the proof of the first claim, $|x-\\hat y|\\geq {\\rm dist}(A_{j,k},D_j)>1\/j^{1\/4}$,\nwhich is a contradiction. Thus we have the second claim. 
\n\\end{proof}\n\n\\smallskip\n\nNext, for each point $x\\in \\partial(D_{j,k-1})_{j^{-10}}$, let $r_0(x)\\in \\partial D_{j,k-1}$ be the nearest point projection of $x$ onto $\\partial D_{j,k-1}$, and set $r_s(x):=sx+(1-s)r_0(x)$\nfor $s\\in (0,1)$.\nWith this notation, define\n\\begin{equation*}\n{\\rm Ret}_{j,k}:=\\{r_s(x)\\,:\\, x\\in A_{j,k}, \\,\\,s\\in (0,1)\\}.\n\\end{equation*}\n\\begin{lemma}\\label{ind6}\nWe have\n$(D_{j,k-1})_{j^{-10}}\\setminus (K_j\\cup D_{j,k-1})\\subset {\\rm Ret}_{j,k}$.\n\\end{lemma}\n\n\\begin{proof}\nFor any point $\\tilde x\\in (D_{j,k-1})_{j^{-10}}\\setminus (K_j\\cup D_{j,k-1})$, there\nexist $s\\in (0,1)$ and $x\\in \\partial(D_{j,k-1})_{j^{-10}}$ such that \n$\\tilde x=r_s(x)$. The condition $\\tilde x\\notin K_j$ means that ${\\rm dist}\\,(\\tilde x,\\Gamma_0\\setminus\nD_j)\\geq 1\/j^{1\/4}$, and then ${\\rm dist}\\,(x,\\Gamma_0\\setminus D_j)\\geq 1\/j^{1\/4}\n-j^{-10}$. Thus $x\\in A_{j,k}$ and $\\tilde x\\in {\\rm Ret}_{j,k}$. \n\\end{proof}\n\nThe set $A_{j,k}$ is a relatively open subset of $\\partial (D_{j,k-1})_{j^{-10}}$. Let \n$A_{j,k,l}\\subset A_{j,k}$ be any of the (at most countably many) connected components of $A_{j,k}$ and\ndefine\n\\begin{equation*}\n{\\rm Ret}_{j,k,l}:=\\{r_s(x)\\,:\\, x\\in A_{j,k,l},\\,\\, s\\in (0,1)\\}.\n\\end{equation*}\n\\begin{lemma}\n\\label{supind6}\nWe have $(A_{j,k,l}\\cup (\\partial A_{j,k,l})_{j^{-10}})\\cap \\Gamma_{j,k}^\\star=\\emptyset$.\n\\end{lemma}\n\\begin{proof} The claim follows directly from Lemma \\ref{ind5}.\n\\end{proof}\nLemma \\ref{supind6} implies that for each $l$\nthere exists some $i(l)\\in\\{1,\\ldots,N\\}$ such that $E_{j,k,i(l)}^{\\star}$ contains $A_{j,k,l} \\cup (\\partial A_{j,k,l})_{j^{-10}}$. \nFor each index $l$, let $i(l)$ be this correspondence. 
\nWe define for each $i=1,\\ldots,N$\n\\begin{equation*}\n\\tilde E_{j,k,i}:=E_{j,k,i}^{\\star}\\cup (\\cup_{i(l)=i} {\\rm Ret}_{j,k,l}).\n\\end{equation*}\nIn other words, when $A_{j,k,l}\\cup (\\partial A_{j,k,l})_{j^{-10}}$ is \ncontained in $E_{j,k,i(l)}^{\\star}$ with $i(l)=i$, the partition inside \n${\\rm Ret}_{j,k,l}$ is absorbed into the $i$-th element $\\tilde E_{j,k,i}$. \nFor the resulting open\npartition $\\tilde\\E_{j,k} := \\{\\tilde E_{j,k,i}\\}_{i=1}^N \\in \\op^N(U_{j,k-1})$, define $\\tilde \\Gamma_{j,k}:=U_{j,k-1}\\cap\n\\cup_{i=1}^N\\partial \\tilde E_{j,k,i}$.\n\\begin{lemma}\nWe have \n\\begin{equation}\n\\label{supind6aeq}\n\\tilde \\Gamma_{j,k}\\setminus K_j\\subset D_{j,k-1}\n\\end{equation}\nand\n\\begin{equation}\n\\label{supind6aeq2}\n\\tilde\\Gamma_{j,k}\\setminus D_{j,k-1}= \\Gamma_{j,k}^\\star\\setminus (D_{j,k-1}\\cup\n{\\rm Ret}_{j,k}) \n=\\Gamma_{j,k-1}\\setminus (D_{j,k-1}\\cup\n{\\rm Ret}_{j,k}).\n\\end{equation}\n\\label{supind6a}\n\\end{lemma}\n\\begin{proof}\nNote that $\\tilde \\Gamma_{j,k}\\cap \\overline{{\\rm Ret}_{j,k}}\\setminus D_{j,k-1}=\\emptyset$\nsince $\\partial {\\rm Ret}_{j,k}\\setminus D_{j,k-1}$ is contained in a single element of the partition by\nLemma \\ref{supind6}\nand $\\tilde\\Gamma_{j,k}\\cap {\\rm Ret}_{j,k}=\\emptyset$. If there exists \n$x\\in \\tilde\\Gamma_{j,k}\\setminus (K_j\\cup D_{j,k-1})$, then $x\\notin \\overline{{\\rm Ret}_{j,k}}$ and thus $x\\in \\Gamma_{j,k}^{\\star}\\setminus (K_j\\cup D_{j,k-1}) = \\Gamma_{j,k-1}\\setminus (K_j\\cup D_{j,k-1})$. By \\eqref{ind2}, \n$x\\in (D_{j,k-1})_{j^{-10}}\\setminus(K_j\\cup D_{j,k-1})$. By Lemma \\ref{ind6}, \n$x\\in {\\rm Ret}_{j,k}$, which is a contradiction. This proves the first claim. The \nsecond claim follows from the definition of $\\tilde\\Gamma_{j,k}$, in the sense that the new partition has no boundary in ${\\rm Ret}_{j,k}$, while $\\Gamma_{j,k}^\\star \\setminus\n(D_{j,k-1}\\cup{\\rm Ret}_{j,k})$ is kept intact. 
The identity in \\eqref{starinv} is also used to\nobtain the last equality.\n\\end{proof}\n\\begin{lemma} \\label{l:mass estimate}\nFor any $\\phi \\in \\mathcal{R}_j$ we have:\n\\begin{equation} \\label{e:mass estimate after retraction}\n\\int_{\\tilde \\Gamma_{j,k}}\\phi\\, d\\mathcal H^n\n\\leq \\int_{\\Gamma_{j,k}^\\star}\\phi\\,d\\mathcal H^{n}\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nNote that $\\tilde\\Gamma_{j,k} \\triangle \\Gamma_{j,k}^\\star \\subset (\\partial D_{j,k-1}\\cap \\overline{{\\rm Ret}_{j,k}})\\cup {\\rm Ret}_{j,k}$, and that $\\tilde \\Gamma_{j,k} \\cap {\\rm Ret}_{j,k} = \\emptyset$. Let \n${\\rm Ret}_{j,k,l}$ and $E_{j,k,i(l)}^{\\star}$ be as before. For any\n$x\\in\\tilde\\Gamma_{j,k}\\cap \\overline{{\\rm Ret}_{j,k,l}}\\subset\\partial D_{j,k-1}$, \nconsider $\\tilde x\\in \\partial (D_{j,k-1})_{j^{-10}}$ such that $r_0(\\tilde x)=x$. \nNote that $\\tilde x=r_1(\\tilde x)\\in E_{j,k,i(l)}^{\\star}$. If $r_s(\\tilde x)\\notin \\Gamma_{j,k}^\\star$ for all\n$s\\in [0,1)$, then $r_0(\\tilde x)=x\\in E_{j,k,i(l)}^{\\star}$ and we have $x\\in \\tilde E_{j,k,i(l)}$,\nwhich is a contradiction to $x\\in \\tilde\\Gamma_{j,k}$. Thus there exists $s\\in [0,1)$ such that $r_s(\\tilde x)\n\\in \\Gamma_{j,k}^\\star$. In particular, we see that $\\tilde\\Gamma_{j,k} \\cap\\overline{{\\rm Ret}_{j,k}}$ is in the image of $\\Gamma_{j,k}^\\star\\cap \n\\overline{{\\rm Ret}_{j,k}}$ through the normal nearest point projection onto $\\partial D_{j,k-1}$. Furthermore, since $r_s(\\tilde x) = x + s \\, \\abs{\\tilde x - x} \\, \\nu_{U}(x)$, and since $\\phi$ is $\\nu_{U}$-non decreasing in $\\R^{n+1} \\setminus D_j$, it holds $\\phi(x) \\leq \\phi(r_s(\\tilde x))$. Given that the normal nearest point projection onto $\\partial D_{j,k-1}$ is a\nLipschitz map with Lipschitz constant $=1$, the desired estimate follows from the area formula. 
\n\\end{proof}\n\nNote that, as a corollary of Lemma \\ref{l:mass estimate}, we have that, setting $\\tilde\\E_{j,k} = \\{ \\tilde E_{j,k,i} \\}_{i=1}^N$,\n\\begin{equation} \\label{mass estimate step 2}\n\\| \\partial \\tilde \\E_{j,k} \\|(\\R^{n+1}) \\leq \\| \\partial \\E_{j,k}^\\star \\|(\\R^{n+1}) \\,.\n\\end{equation}\n\n\\smallskip\n\n{\\bf Step 3: motion by smoothed mean curvature with boundary damping.} Let $\\tilde V_{j,k} = \\partial \\tilde{\\E}_{j,k}$ as defined in \\eqref{e:interior boundary}, and compute $h_{\\eps_j}(\\cdot):=h_{\\eps_j}\n(\\cdot,\\tilde V_{j,k})$. Also, let $\\eta_j \\in \\cA_{j^{\\sfrac34}}$ be the cut-off function defined in Definition \\ref{def:etaj}. Observe that $j$ has been chosen so that the conclusions of Lemma \\ref{l:etaj} hold. Define the smooth diffeomorphism $f_{j,k}(x):=x+\\eta_j(x)\\,h_{\\eps_j}(x)\\,\\Delta t_j$. Observe that the induction hypothesis \\eqref{indmass}, together with \\eqref{mass estimate step 1} and \\eqref{mass estimate step 2}, implies that $\\|\\tilde V_{j,k} \\|(\\R^{n+1}) \\leq M$ as defined in \\eqref{def of M}. Hence, by Lemma \\ref{l:etaj}, and using \\eqref{e:h in L infty} and the definition of $\\Delta t_j$, we can conclude that $\\abs{\\eta_j \\, h_{\\eps_j}\\, \\Delta t_j} \\leq \\exp(-j^{\\sfrac18})$ on $\\tilde K_j$. By the choice of $\\eps_j$, we also have that $|\\eta_j\\, h_{\\eps_j}\\, \\Delta t_j|\\leq j^{-10}$ everywhere. \n\nSet $U_{j,k}:=f_{j,k}(U_{j,k-1})$, $E_{j,k,i}:=f_{j,k}(\\tilde E_{j,k,i})$ \nand $\\Gamma_{j,k}:=U_{j,k}\\cap\\cup_{i=1}^N\\partial E_{j,k,i}$. 
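\nSchematically, the $k$-th step of the construction is thus the composition of the three operations described above:\n\\[\n\\E_{j,k-1}\\;\\overset{\\text{Step 1}}{\\longmapsto}\\;\\E_{j,k}^\\star=(f_1)_\\star\\E_{j,k-1}\\;\\overset{\\text{Step 2}}{\\longmapsto}\\;\\tilde\\E_{j,k}\\;\\overset{\\text{Step 3}}{\\longmapsto}\\;\\E_{j,k}=\\{f_{j,k}(\\tilde E_{j,k,i})\\}_{i=1}^N\\,,\n\\]\nnamely an area reducing Lipschitz deformation in $D_j$, a retraction onto $\\partial D_{j,k-1}$ outside of $K_j\\cup D_{j,k-1}$, and the motion by smoothed mean curvature with boundary damping for a time $\\Delta t_j$.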
\n\n\\begin{lemma} \\label{l:step1}\nWe have\n\\[\n\\partial U_{j,k} \\subset \\left( \\partial U \\right)_{k\\, \\exp(-j^{\\sfrac18})}\\, \\quad \\mbox{and} \\quad U_{j,k} \\triangle U \\subset \\left( \\partial U \\right)_{k \\, \\exp(-j^{\\sfrac18})}\\,,\n\\]\nnamely \\eqref{ind0} with $k$ in place of $k-1$ holds true.\n\\end{lemma}\n\n\\begin{proof}\nSince $|x-f_{j,k}(x)|\\leq \\eta_j |h_{\\eps_j}|\\Delta t_j\\leq \\exp(-j^{\\sfrac18})$ on $K_j$ by Lemma \\ref{l:etaj}(2), \nwe see with \\eqref{ind0} that $f_{j,k}(K_j\\cap (\\partial U_{j,k-1}\\cup U_{j,k-1}\\triangle U))\\subset (\\partial U)_{k\\exp(-j^{\\sfrac18})}$.\nIn order to show that also $f_{j,k}((\\partial U_{j,k-1}\\cup U_{j,k-1}\\triangle U)\\setminus K_j) \\subset (\\partial U)_{k\\exp(-j^{\\sfrac18})}$, \nwe next claim that \n\\begin{equation} \\label{claim:boundary}\n\\min\\{\\dist(\\partial U_{j,k-1} \\setminus K_j, \\tilde\\Gamma_{j,k})\\,, \\; \\dist( (U_{j,k-1} \\triangle U) \\setminus K_j, \\tilde\\Gamma_{j,k}) \\} \\geq 1\/(4\\,j^{\\sfrac14})\\,.\n\\end{equation}\nTo see this, let $x \\in (\\partial U_{j,k-1} \\cup (U_{j,k-1} \\triangle U)) \\setminus K_j$ and $y \\in \\tilde \\Gamma_{j,k}$. Since $x \\in \\partial U_{j,k-1} \\cup (U_{j,k-1} \\triangle U)$, by \\eqref{ind0} there is $\\tilde x \\in \\partial U$ such that $\\abs{x - \\tilde x} \\leq (k-1) \\exp(-j^{\\sfrac18})$. Now, if $y \\notin K_j$, then by Lemma \\ref{supind6a} we have $y\\in D_{j,k-1}$; and whenever $y \\in D_{j,k-1}$, the definition of $D_{j,k-1}$ gives $\\abs{x - y} \\geq \\abs{y-\\tilde x}-\\abs{\\tilde x-x}\\geq 1\/j^{\\sfrac14} - 2(k-1)\\exp(-j^{\\sfrac18})$, so that $\\abs{x-y} \\geq 1\/(4\\,j^{\\sfrac14})$. Finally, if $y \\in K_j \\setminus D_{j,k-1}$ then, \nby \\eqref{supind6aeq2}, $y\\in \\Gamma_{j,k-1}\\cap K_j\\setminus D_{j,k-1}$. \nThen by \\eqref{ind1}, $y\\in (\\Gamma_0)_{(k-1)\\exp(-j^{\\sfrac18})}\\setminus \nD_{j,k-1}$. 
By the definition of $K_j$, we have $|x-y|\\geq j^{-\\sfrac14}-\n(k-1)\\exp(-j^{\\sfrac18})>1\/(4j^{\\sfrac14})$. This proves \\eqref{claim:boundary}. \nFor any point $x\\notin (\\tilde \\Gamma_{j,k})_{1\/(4j^{\\sfrac14})}$, note that \n\\begin{equation*}\n|h_{\\eps_j}(x,\\tilde V_{j,k})|\\leq \\eps_j^{-1}\\int_{\\tilde\\Gamma_{j,k}}\n|\\nabla \\Phi_{\\eps_j}(x-y)|\\,d\\Ha^n (y)\\leq M\\exp(-1\/\\eps_j)<\\exp(-j^{\\sfrac18})\n\\end{equation*}\nfor all sufficiently large $j$. This shows that $f_{j,k}((\\partial U_{j,k-1}\n\\cup (U_{j,k-1}\\triangle U))\\setminus K_j)\\subset (\\partial U)_{k\\exp(-j^{\n\\sfrac18})}$ and concludes the proof. \n\\end{proof}\n\\begin{lemma} We have\n\\[f_{j,k} (D_{j,k-1})\\cap (K_j\\setminus D_{j,k})=\\emptyset.\\]\n\\label{inc1}\n\\end{lemma}\n\\begin{proof}\nSuppose, towards a contradiction, that $x \\in f_{j,k} (D_{j,k-1})\\cap (K_j\\setminus D_{j,k})$.\nSince $|\\Delta t_j\\eta_j h_{\\eps_j}|\\ll 1\/j^{\\sfrac14}$ everywhere, in particular $\\hat x:=f_{j,k}^{-1}(x)$ belongs to \n$\\tilde K_j$. Then, $|\\eta_j(\\hat x)\\, h_{\\eps_j}(\\hat x)\\,\\Delta t_j|\\leq \\exp(-j^{\\sfrac18})$.\nThis means that $|x-\\hat x|\\leq \\exp(-j^{\\sfrac18})$. Since $x\\notin D_{j,k}$, we must have\n$\\hat x\\notin D_{j,k-1}$ by the definition of these sets. But this is a contradiction \nsince $x=f_{j,k}(\\hat x) \\in f_{j,k}(D_{j,k-1})$ and $f_{j,k}$ is bijective. \n\\end{proof}\n\\begin{lemma} \\label{l:step2}\nWe have\n\\begin{equation}\n\\label{ind3}\n(\\Gamma_{j,k}\\cap K_j)\\setminus D_{j,k}\\subset (\\Gamma_0)_{k\\exp(-j^{\\sfrac18})}\\,,\n\\end{equation}\nnamely \\eqref{ind1} with $k$ in place of $k-1$ holds true.\n\\end{lemma}\n\\begin{proof}\nFor any $x\\in (\\Gamma_{j,k}\n\\cap K_j)\\setminus D_{j,k}$, by Lemma \\ref{inc1}, $x\\notin f_{j,k}(D_{j,k-1})$ and there exists $\\hat x\\in \\tilde\\Gamma_{j,k}\\setminus D_{j,k-1}$ such that $f_{j,k}(\\hat x)\n=x$. 
By \\eqref{supind6aeq} and \\eqref{supind6aeq2}, $\\hat x\\in (\\Gamma_{j,k}^\\star \\cap K_j) \\setminus D_{j,k-1} = (\\Gamma_{j,k-1}\\cap K_j) \\setminus D_{j,k-1}$. \nBy \\eqref{ind1}, $\\hat x\\in (\\Gamma_0)_{(k-1)\\exp(-j^{\\sfrac18})}$; on the other hand, $\\hat x\\in K_j$ implies\n$|x-\\hat x|\\leq \\exp(-j^{\\sfrac18})$. These two estimates together prove \\eqref{ind3}. \n\\end{proof}\n\n\\begin{lemma} \\label{incsn}\nWe have\n\\begin{equation} \\label{l:step3}\n\\Gamma_{j,k} \\setminus K_j \\subset (D_{j,k})_{j^{-10}}\\,,\n\\end{equation}\nnamely \\eqref{ind2} with $k$ in place of $k-1$ holds true.\n\\end{lemma}\n\\begin{proof}\nIf $x \\in \\Gamma_{j,k} \\setminus K_j$, then there is $\\tilde x \\in \\tilde \\Gamma_{j,k}$ such that $x = f_{j,k}(\\tilde x)$. If $\\tilde x \\notin K_j$, then $\\tilde x \\in D_{j,k-1} \\subset D_{j,k}$ by Lemma \\ref{supind6a}, and since $\\abs{x- \\tilde x} < j^{-10}$ by the properties of the diffeomorphism $f_{j,k}$, our claim holds true. Hence, suppose that $\\tilde x \\in K_j$. Since in this case $\\abs{x - \\tilde x} \\leq \\exp(-j^{\\sfrac18})$, if $\\tilde x \\in D_{j,k-1}$ then evidently $x \\in D_{j,k}$, and the proof is complete. On the other hand, we claim that necessarily $\\tilde x \\in D_{j,k-1}$. Indeed, otherwise we would have $\\tilde x \\in \\tilde \\Gamma_{j,k} \\cap K_j \\setminus D_{j,k-1}$, and thus, again by Lemma \\ref{supind6a}, $\\tilde x \\in \\Gamma_{j,k}^\\star \\cap K_j \\setminus D_{j,k-1} = \\Gamma_{j,k-1} \\cap K_j \\setminus D_{j,k-1}$. But then, by \\eqref{ind1}, there exists $y \\in \\Gamma_0$ such that $\\abs{\\tilde x-y} < (k-1)\\, \\exp(-j^{\\sfrac18})$. \nSince $\\tilde x \\notin D_{j,k-1}$, we have $y \\notin D_j$, and therefore $\\dist(x, (\\Gamma_0 \\setminus D_j)) \n\\leq |x-\\tilde x|+|\\tilde x-y|< k\\, \\exp(-j^{\\sfrac18}) < 1\/j^{\\sfrac14}$. But this contradicts the fact that $x \\notin K_j$ and completes the proof. 
\n\\end{proof}\n\n\\smallskip\n\n{\\bf Conclusion.} Together, Lemmas \\ref{l:step1}, \\ref{l:step2} and \\ref{incsn} complete the induction step from $k-1$ to $k$ for properties (1), (2), (3). Concerning \\eqref{induction:mass estimate}, first we observe that, since $f_{j,k}$ is a diffeomorphism,\n\\begin{equation} \\label{e:partition after motion}\n\\partial \\E_{j,k} = \\var\\left( \\bigcup_{i=1}^N (U_{j,k} \\cap \\partial E_{j,k,i})\\,, \\; 1 \\right) = \\var\\left( f_{j,k}\\Big( \\bigcup_{i=1}^N (U_{j,k-1} \\cap \\partial \\tilde E_{j,k,i}) \\Big) \\,, \\; 1 \\right) = (f_{j,k})_\\sharp \\partial \\tilde \\E_{j,k}\\,.\n\\end{equation}\nWe can then use \\eqref{e:smc2} with $V = \\partial \\tilde \\E_{j,k}$, $M$ as defined in \\eqref{def of M}, $\\eps = \\eps_j$, and $\\Delta t = \\Delta t_j$ in order to conclude that\n\\begin{equation} \\label{mass estimate step 3}\n\\| \\partial \\E_{j,k} \\|(\\R^{n+1}) \\leq 2\\, \\Delta t_j \\, \\eps_j^{\\sfrac14} + \\| \\partial \\tilde \\E_{j,k} \\|(\\R^{n+1})\\,.\n\\end{equation}\nCombining \\eqref{mass estimate step 3} with \\eqref{mass estimate step 1} and \\eqref{mass estimate step 2}, and using that $2 \\, \\eps_{j}^{\\sfrac14} < \\eps_{j}^{\\sfrac16} $, we get\n\\begin{equation} \\label{e:key mass bound}\n\\| \\partial \\E_{j,k} \\|(\\R^{n+1}) \\leq \\| \\partial \\E_{j,k-1} \\|(\\R^{n+1}) + \\Delta t_j \\, \\eps_j^{\\sfrac16}\\,,\n\\end{equation}\nwhich, together with \\eqref{indmass}, gives \\eqref{induction:mass estimate}. Lastly, we show that the construction of the induction step satisfies \\eqref{induction:mean curvature} and \\eqref{induction:mass variation}. Since $\\eps_j$ satisfies \\eqref{e:eps_smallness} and \\eqref{induction:mass estimate} implies \n$\\| (f_{j,k})_\\sharp \\partial \\tilde \\E_{j,k} \\|(\\R^{n+1})\\leq M$, the estimates in \\eqref{e:smc3} and \\eqref{e:smc4} hold true. 
Then \\eqref{induction:mean curvature} follows from \\eqref{e:smc2}, \\eqref{e:smc4}, \\eqref{mass estimate step 2}\nand \\eqref{e:almost minimizing}.\nFinally, \\eqref{induction:mass variation} is a consequence of \\eqref{e:smc1}, \\eqref{e:smc3}, \\eqref{e:mass estimate after retraction} and \\eqref{mass estimate step 1}.\n\\end{proof}\n\nWe are now in a position to define an approximate flow of open partitions. As anticipated in the introduction, the flow is piecewise constant in time; the parameter $\\Delta t_j$ defined in \\eqref{d:time step} is the \\emph{epoch length}, namely the length of the time intervals in which the flow is set to be constant.\n\n\\begin{definition} \nFor every $j \\geq \\max\\{j_0,J(n)\\}$, define a family $\\E_j(t)$ for $t \\in \\left[ 0, j \\right]$ by setting\n\\[\n\\E_j (t) := \\E_{j,k}\\quad \\mbox{if $t \\in \\left( (k-1) \\, \\Delta t_j, k \\, \\Delta t_j \\right]$}\\,,\n\\]\nwith the convention that $\\E_j(0) := \\E_{j,0}$.\n\\end{definition}\n\n\\subsection{Convergence in the sense of measures}\n\n\\begin{proposition} \\label{p:limit_measure}\n\nUnder the assumptions of Proposition \\ref{p:induction}, there exist a subsequence $\\{j_{\\ell} \\}_{\\ell=1}^{\\infty}$ and a one-parameter family of Radon measures $\\{\\mu_t\\}_{t \\geq 0}$ on $U$ such that \n\\begin{equation} \\label{e:limit_measure}\n\\mu_t(\\phi) = \\lim_{\\ell \\to \\infty} \\| \\partial \\E_{j_\\ell}(t) \\| (\\phi)\n\\end{equation}\nfor all $\\phi \\in C_{c}(U)$ and $t\\in \\mathbb R^+$. The limits $\\lim_{s\\to t+} \\mu_s(\\phi)$ and $\\lim_{s\\to t-}\\mu_s(\\phi)$ exist and satisfy\n\\begin{equation}\\label{muconti}\n\\lim_{s\\to t+} \\mu_s(\\phi)\\leq \\mu_t(\\phi)\\leq \\lim_{s\\to t-}\\mu_s(\\phi)\n\\end{equation}\nfor all $\\phi \\in C_c(U;\\mathbb R^+)$ and $t\\in \\mathbb R^+$. Furthermore, $\\lim_{s\\to t+} \\mu_s(\\phi)=\\lim_{s\\to t-}\\mu_s(\\phi)$ for all $t\\in\\mathbb R^+\\setminus B$, where $B \\subset \\R^+$ is countable. 
Finally, for every $T > 0$ we have\n\\begin{equation} \\label{finite total mean curvature}\n\\limsup_{\\ell \\to \\infty} \\int_{0}^T \\left( \\int_{\\R^{n+1}} \\eta_{j_\\ell} \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell}} \\ast \\delta (\\partial \\E_{j_\\ell}(t))}^2}{\\Phi_{\\eps_{j_\\ell}} \\ast \\| \\partial \\E_{j_\\ell}(t) \\| + \\eps_{j_\\ell}} \\, dx - \\frac{1}{\\Delta t_{j_\\ell}} \\, \\Delta_{j_\\ell} \\| \\partial \\E_{j_\\ell}(t) \\| (D_{j_\\ell}) \\right) \\, dt < \\infty\\,,\n\\end{equation}\nand for a.e. $t \\in \\R^+$ it holds\n\\begin{equation} \\label{decay of mass reduction}\n\\lim_{\\ell \\to \\infty} j_\\ell^{2(n+1)} \\, \\Delta_{j_\\ell} \\| \\partial \\E_{j_\\ell}(t) \\|(D_{j_\\ell}) = 0\\,.\n\\end{equation}\n\\end{proposition}\n\n\n\\begin{proof}\n\nLet $2_\\Q$ be the set of all non-negative numbers of the form $\\frac{i}{2^m}$ for some $i,m \\in \\Na \\cup \\{0\\}$; it is countable and dense in $\\R^+$. For each fixed $T \\in \\Na$, the mass estimate in \\eqref{induction:mass estimate} implies that\n\\begin{equation} \\label{e:precompactness}\n\\limsup_{j \\to \\infty} \\sup_{t \\in \\left[ 0, T \\right]} \\| \\partial \\E_{j}(t) \\| (\\R^{n+1}) \\leq \\| \\partial \\E_0 \\| (\\R^{n+1})\\,.\n\\end{equation}\nTherefore, by a diagonal argument we can choose a subsequence $\\{j_{\\ell}\\}$ and a family of Radon measures $\\{ \\mu_t \\}_{t \\in 2_\\Q}$ on $\\R^{n+1}$ such that\n\\begin{equation} \\label{e:convergence 2Q}\n\\mu_t (\\phi) = \\lim_{\\ell \\to \\infty} \\| \\partial \\E_{j_\\ell}(t) \\| (\\phi) \\qquad \\mbox{for every $\\phi \\in C_{c}(\\R^{n+1})$, for every $t \\in 2_\\Q$}\\,.\n\\end{equation}\nFurthermore, with \\eqref{e:precompactness}, we also deduce that\n\\begin{equation} \\label{e:limit mass bound}\n\\mu_{t} (\\R^{n+1}) \\leq \\| \\partial \\E_0 \\|(\\R^{n+1}) \\qquad \\mbox{for every $t \\in 2_\\Q$}\\,.\n\\end{equation}\n\nNext, let $Z := \\{ \\phi_q \\}_{q \\in \\Na}$ be a countable subset of $C^2_{c}(U; \\R^+)$ which is 
dense in $C_{c} (U; \\R^+)$ with respect to the supremum norm. We claim that the function\n\\begin{equation} \\label{monotone function}\nt \\in 2_\\Q \\mapsto g_{q}(t) := \\mu_{t}(\\phi_q) - t\\, \\| \\nabla^2 \\phi_q \\|_{\\infty} \\, \\| \\partial \\E_0 \\| (\\R^{n+1})\n\\end{equation}\nis monotone non-increasing. To see this, first observe that since $\\phi_q$ has compact support, and since the definition in \\eqref{monotone function} depends linearly on $\\phi_q$, we can assume without loss of generality that $\\phi_q < 1$. For convenience, for $t\\leq 0$, we define $g_q(t):=\\mu_0(\\phi_q)=\\|\\partial\\E_0\\|(\\phi_q)$. Next, given any $j \\geq J(n)$ as in Proposition \\ref{p:induction}, for every positive function $\\phi$ such that $\\eta_j \\, \\phi \\in \\cA_j$ we can compute\n\\begin{equation} \\label{monotonicity estimate basic}\n\\begin{split}\n\\delta (\\partial \\E_j(t), \\phi) (\\eta_j \\, h_{\\eps_j}) &= \\delta (\\partial \\E_j(t)) (\\eta_j \\, \\phi \\, h_{\\eps_j}) + \\int_{\\bG_n(\\R^{n+1})} \\eta_j(x) \\, h_{\\eps_j} \\cdot S^{\\perp} (\\nabla \\phi (x)) \\, d(\\partial \\E_{j}(t))(x,S) \\\\\n&=: I_1 + I_2\n\\end{split}\n\\end{equation}\nfor every $t \\in \\left[ 0, j \\right]$, and where $h_{\\eps_j}(\\cdot) = h_{\\eps_j}(\\cdot, \\partial\\E_{j}(t))$. 
By the choice of $\\eps_j$, and since $\\eta_j \\, \\phi \\in \\cA_j$, we can use \\eqref{e:fv along h vs h in L2} to estimate\n\\begin{equation} \\label{monotonicity estimate 1}\nI_1 \\leq \\eps_j^{\\sfrac14} - \\left( 1 - \\eps_{j}^{\\sfrac14} \\right) \\, \\int_{\\R^{n+1}} \\eta_j \\, \\phi \\, \\frac{\\abs{\\Phi_{\\eps_j} \\ast \\delta (\\partial\\E_{j}(t))}^2}{\\Phi_{\\eps_j} \\ast \\| \\partial \\E_{j}(t) \\| + \\eps_j} \\, dx\\,,\n\\end{equation}\nwhereas Young's inequality together with \\eqref{e:L2 norm of h vs approx} yields\n\\begin{equation} \\label{monotonicity estimate 2}\n\\begin{split}\nI_2 &\\leq \\frac12 \\, \\int_{\\R^{n+1}} \\eta_j \\, \\phi \\, \\abs{h_{\\eps_j}}^2 \\, d\\|\\partial \\E_{j}(t)\\| + \\frac{1}{2} \\, \\int_{\\R^{n+1}} \\eta_j \\, \\frac{\\abs{S^\\perp(\\nabla\\phi)}^2}{\\phi} \\, d\\| \\partial \\E_{j}(t) \\| \\\\\n&\\leq \\frac{\\eps_j^{\\sfrac14}}{2} + \\left( \\frac12 + \\frac{\\eps_j^{\\sfrac14}}{2} \\right) \\, \\int_{\\R^{n+1}} \\eta_j \\, \\phi \\, \\frac{\\abs{\\Phi_{\\eps_j} \\ast \\delta (\\partial\\E_{j}(t))}^2}{\\Phi_{\\eps_j} \\ast \\| \\partial \\E_{j}(t) \\| + \\eps_j} \\, dx + \\frac{1}{2} \\, \\int_{\\R^{n+1}} \\eta_j \\, \\frac{\\abs{S^\\perp(\\nabla\\phi)}^2}{\\phi} \\, d\\| \\partial \\E_{j}(t) \\|.\n\\end{split}\n\\end{equation}\nPlugging \\eqref{monotonicity estimate 1} and \\eqref{monotonicity estimate 2} into \\eqref{monotonicity estimate basic}, we obtain\n\\begin{equation} \\label{monotonicity estimate final}\n\\delta (\\partial \\E_j(t), \\phi) (\\eta_j \\, h_{\\eps_j}) \\leq 2\\, \\eps_{j}^{\\frac14} + \\frac12\\, \\int_{\\R^{n+1}} \\eta_j \\, \\frac{\\abs{\\nabla \\phi}^2}{\\phi} \\, d\\| \\partial \\E_j(t) \\|\n\\end{equation}\nfor every $t \\in \\left[ 0, j \\right]$ and for every positive function $\\phi$ such that $\\eta_j \\, \\phi \\in \\cA_j$. 
Now, for every $T \\in \\Na$, for every $\\phi_q \\in Z$ with $\\phi_q < 1$, and for every sufficiently large $i \\in \\Na$, choose $j_* \\geq \\max\\{ T, J(n)\\}$ so that \n\\begin{itemize}\n\\item[(i)] $\\phi_q + i^{-1} \\in \\cA_{j} \\cap \\mathcal{R}_{j}$,\n\\item[(ii)] $\\eta_{j} \\, (\\phi_q + i^{-1}) \\in \\cA_{j}$\n\\end{itemize}\nfor every $j \\geq j_*$. Using that $\\eta_j \\in \\cA_{j^{\\sfrac34}}$ for every $j \\geq J(n)$ and that $\\phi_q = 0$ outside some compact set $K \\subset U$, it is easily seen that the two conditions above can be met by choosing $j_*$ sufficiently large, depending on $i$, $\\|\\phi_q\\|_{C^2}$, and $K$. In particular, $j_*$ is so large that $\\phi_q \\equiv 0$ on $\\left( \\partial U \\right)_{s_0}^{-} \\setminus D_{j_*}$, so that $\\phi_q + i^{-1}$ is trivially $\\nu_{U}$-non-decreasing in $\\R^{n+1} \\setminus D_{j_*}$ because it is constant there. For any fixed $t_1,t_2 \\in \\left[ 0, T \\right] \\cap 2_\\Q$ with $t_2 > t_1$, enlarge $j_*$ if necessary so that both $t_1$ and $t_2$ are integer multiples of $1\/2^{p_{j_*}}$. Then, both $t_1$ and $t_2$ are integer multiples of $\\Delta t_{j_\\ell}$ for every $j_\\ell \\geq j_*$. 
Hence, for every $j_\\ell \\geq j_*$ we can apply \\eqref{induction:mass variation} repeatedly with $\\phi = \\phi_q + i^{-1} \\in \\cA_{j_\\ell} \\cap \\mathcal{R}_{j_\\ell}$, and then \\eqref{monotonicity estimate final} with the same $\\phi$ (which is admissible since $\\eta_{j_\\ell} \\, \\phi \\in \\cA_{j_\\ell}$), in order to deduce\n\\begin{equation} \\label{towards monotonicity 1}\n\\begin{split}\n&\\| \\partial \\E_{j_\\ell}(t_2) \\| (\\phi_q + i^{-1}) - \\| \\partial\\E_{j_\\ell}(t_1) \\| (\\phi_q + i^{-1})\\\\ &\\qquad \\qquad \\qquad \\leq \\left( \\eps_{j_\\ell}^{\\sfrac18} + 2\\, \\eps_{j_\\ell}^{\\sfrac14} \\right) (t_2 - t_1) + \\frac12 \\, \\int_{t_1}^{t_2} \\int_{\\R^{n+1}} \\eta_{j_\\ell} \\, \\frac{\\abs{\\nabla \\phi_q}^2}{\\phi_q + i^{-1}} \\, d\\|\\partial \\E_{j_\\ell}(t)\\| \\, dt\\,.\n\\end{split}\n\\end{equation}\nAs we let $\\ell \\to \\infty$, the left-hand side of \\eqref{towards monotonicity 1} can be bounded from below, using \\eqref{e:precompactness} and \\eqref{e:convergence 2Q}, as follows:\n\\begin{equation} \\label{lhs lower bound}\n\\geq \\mu_{t_2}(\\phi_q) - \\mu_{t_1}(\\phi_q) - i^{-1} \\, \\| \\partial\\E_0 \\|(\\R^{n+1})\\,.\n\\end{equation}\nIn order to estimate the right-hand side of \\eqref{towards monotonicity 1}, we note that\n\\begin{equation} \\label{trick}\n\\frac{\\abs{\\nabla \\phi_q}^2}{\\phi_q + i^{-1}} \\leq \\frac{\\abs{\\nabla\\phi_q}^2}{\\phi_q} \\leq 2\\, \\| \\nabla^2 \\phi_q \\|_{\\infty}\\,,\n\\end{equation}\nwhere the last inequality follows from Taylor's theorem combined with the non-negativity of $\\phi_q$. Hence, if we plug \\eqref{trick} in \\eqref{towards monotonicity 1}, use that $\\eta_{j_\\ell} \\leq 1$, let $\\ell \\to \\infty$ by means of \\eqref{e:precompactness}, and finally let $i \\to \\infty$, we conclude\n\\begin{equation} \\label{towards monotonicity 2}\n\\mu_{t_2}(\\phi_q) - \\mu_{t_1}(\\phi_q) \\leq \\| \\nabla^2 \\phi_q \\|_{\\infty} \\, \\| \\partial \\E_0 \\|(\\R^{n+1}) \\, (t_2 - t_1)\n\\end{equation}\n for every $t_1,t_2 \\in \\left[0, T \\right] \\cap 2_\\Q$ with $t_2 > t_1$ and for any 
$\\phi_q \\in Z$ with $\\phi_q < 1$, thus proving that the function defined in \\eqref{monotone function} is indeed monotone non-increasing on $\\left[ 0, T \\right] \\cap 2_\\Q$. Since $T$ is arbitrary, the same holds on $2_\\Q$. \n \n\\smallskip \n \n Define now\n \\[\n B := \\left\\lbrace t \\in \\mathbb R^+ \\, \\colon \\, \\lim_{2_\\Q \\ni s \\to t-} g_{q}(s) > \\lim_{2_\\Q \\ni s \\to t+} g_{q}(s) \\quad \\mbox{for some $q \\in \\Na$} \\right\\rbrace\\,.\n \\] \n By the monotonicity of each $g_{q}$, $B$ is a countable subset of $\\R^+$, and for every $t \\in \\R^+ \\setminus (B \\cup 2_\\Q)$ we can define $\\mu_t(\\phi_q)$ for every $\\phi_q \\in Z$ by \n \\begin{equation} \\label{e:mu family extended}\n \\mu_t(\\phi_q) := \\lim_{2_\\Q \\ni s \\to t} \\left( g_{q}(s) + s\\, \\|\\nabla^2 \\phi_q\\|_{\\infty} \\, \\| \\partial \\E_0\\|(\\R^{n+1}) \\right) = \\lim_{2_\\Q \\ni s \\to t} \\mu_{s}(\\phi_q)\\,.\n\\end{equation} \n \nWe claim that \n\\begin{equation} \\label{mu_t is the correct limit}\n\\exists \\, \\lim_{\\ell \\to \\infty} \\| \\partial \\E_{j_\\ell}(t) \\| (\\phi_q) = \\mu_t (\\phi_q) \\qquad \\mbox{for every $t \\in \\R^+ \\setminus (B \\cup 2_\\Q)$ and $\\phi_q \\in Z$}\\,. \n\\end{equation} \n\nIndeed, due to the definition of $\\partial \\E_{j_\\ell}(t)$, there exists a sequence $\\{t_\\ell\\}_{\\ell=1}^{\\infty} \\subset 2_\\Q$ with $t_\\ell > t$ such that $\\lim_{\\ell \\to \\infty} t_\\ell = t$ and $\\partial \\E_{j_\\ell}(t) = \\partial \\E_{j_\\ell}(t_\\ell)$. 
For any $s \\in 2_\\Q$ with $s > t$, and for all sufficiently large $\\ell$ so that $s > t_\\ell$, we deduce from \\eqref{towards monotonicity 1} that\n\\begin{equation} \\label{correct limit1}\n\\| \\partial \\E_{j_\\ell}(s) \\| (\\phi_q + i^{-1}) \\leq \\| \\partial \\E_{j_\\ell}(t_\\ell) \\| (\\phi_q + i^{-1}) + {\\rm O}(s-t)\\,.\n\\end{equation}\n Taking the $\\liminf_{\\ell \\to \\infty}$ and then the $\\lim_{i \\to \\infty}$ on both sides of \\eqref{correct limit1} we obtain that\n \\begin{equation} \\label{correct limit2}\n \\mu_{s}(\\phi_q) \\leq \\liminf_{\\ell\\to\\infty} \\| \\partial \\E_{j_\\ell}(t_\\ell)\\| (\\phi_q) + {\\rm O}(s-t)\\,,\n \\end{equation}\n so that when we let $s \\to t+$ the definition of $\\mu_t$ and the fact that $\\partial \\E_{j_\\ell}(t_\\ell) = \\partial \\E_{j_\\ell}(t)$ yield\n \\begin{equation} \\label{correct limit3}\n \\mu_t(\\phi_q) \\leq \\liminf_{\\ell \\to \\infty} \\| \\partial \\E_{j_\\ell}(t) \\| (\\phi_q) \\,.\n \\end{equation}\n An analogous argument provides, at the same time,\n \\begin{equation} \\label{correct_limit4}\n \\limsup_{\\ell \\to \\infty} \\| \\partial \\E_{j_\\ell}(t) \\| (\\phi_q) \\leq \\mu_t(\\phi_q)\\,,\n \\end{equation}\n so that \\eqref{correct limit3} and \\eqref{correct_limit4} together complete the proof of \\eqref{mu_t is the correct limit}. Since $Z$ is dense in $C_{c}(U ;\\R^+)$, \\eqref{mu_t is the correct limit} determines the limit measure uniquely, and the convergence holds for every $\\phi \\in C_c(U)$ at every $t \\in \\R^+ \\setminus B$. On the other hand, since $B$ is countable we can extract a further subsequence of $\\{ \\partial \\E_{j_\\ell}(t)\\}_{\\ell=1}^{\\infty}$ converging to a Radon measure $\\mu_t$ in $U$ for every $t \\geq 0$.\nThe continuity of $\\mu_t(\\phi)$ on $\\mathbb R^+\\setminus B$ follows from the definition of $B$ and\na density argument. 
The existence of limits and the inequalities \\eqref{muconti} can also be deduced from \\eqref{towards monotonicity 2}\nin the case $\\phi=\\phi_q$, and by density for $\\phi\\in C_c(U;\\mathbb R^+)$. This completes the proof of the first part of the statement.\\\\\n \n \\smallskip\n \n The claim in \\eqref{finite total mean curvature} follows from \\eqref{induction:mean curvature}. Finally, \\eqref{finite total mean curvature} implies that for each $T > 0$\n\\begin{equation} \n\\limsup_{\\ell \\to \\infty} \\int_{0}^T - j_{\\ell}^{2(n+1)} \\, \\Delta_{j_\\ell} \\| \\partial \\E_{j_\\ell}(t) \\| (D_{j_\\ell}) \\, dt \\lesssim \\lim_{\\ell \\to \\infty} j_{\\ell}^{2(n+1)} \\, \\Delta t_{j_\\ell} = 0\\,,\n\\end{equation} \nwhere in the last equality we have used that \n\\[\n\\Delta t_{j_\\ell} \\leq \\eps_{j_\\ell}^{\\kappa} \\ll j_{\\ell}^{-2(n+1)}\\,,\n\\]\ngiven the definition of $\\kappa$ and the fact that $\\eps_j$ satisfies \\eqref{e:eps_smallness}. The proof is now complete.\n \\end{proof}\n\n\n\\section{Brakke's inequality, rectifiability and integrality of the limit} \\label{sec:Brakke}\n\nIn the next proposition we deduce further information concerning the family $\\{\\mu_t\\}_{t \\geq 0}$ of measures in $U$ introduced in Proposition \\ref{p:limit_measure}.\n\n\\begin{proposition} \\label{p:integral varifold limit}\nLet $\\{ \\partial \\E_{j_\\ell}(t)\\}$ for $\\ell \\in \\Na$ and $t \\geq 0$, and $\\{\\mu_t\\}$ for $t \\geq 0$ be as in Proposition \\ref{p:limit_measure} satisfying \\eqref{e:limit_measure}, \\eqref{finite total mean curvature} and \\eqref{decay of mass reduction}. Then, we have the following.\n\\begin{enumerate}\n\n\\item For a.e. $t \\in \\R^+$ the measure $\\mu_t$ is integral, namely\nthere exists an integral varifold $V_t \\in \\IV_n(U)$ such that $\\mu_t = \\|V_t\\|$.\n\n\\item For a.e. 
$t \\in \\R^+$, if a subsequence $\\{j_{\\ell}'\\}_{\\ell=1}^\\infty \\subset \\{j_\\ell\\}_{\\ell=1}^\\infty $ is such that \n\\begin{equation} \\label{hp:uniform bound on L2 mean curvature}\n\\sup_{\\ell \\in \\Na} \\int_{\\R^{n+1}} \\eta_{j_\\ell'} \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell'}} \\ast \\delta (\\partial \\E_{j_\\ell'}(t))}^2}{\\Phi_{\\eps_{j_\\ell'}} \\ast \\| \\partial \\E_{j_\\ell'}(t) \\| + \\eps_{j_\\ell'}} \\, dx < \\infty\\,,\n\\end{equation}\nthen $\\partial \\E_{j_\\ell'}(t)$ converges, as $\\ell \\to \\infty$, to $V_t\\in \\IV_n(U)$ in the sense of varifolds in $U$, namely\n\\begin{equation} \\label{p2 of limit varifold}\n\\lim_{\\ell \\to \\infty} \\partial \\E_{j_\\ell'}(t) (\\varphi) = V_t(\\varphi) \\qquad \\mbox{for every $\\varphi \\in C_{c}(\\bG_{n}(U))$}\\,.\n\\end{equation}\n\n\\item For a.e. $t \\in \\R^+$, $V_t$ has generalized mean curvature $h(\\cdot, V_t)$ in $U$ which satisfies\n\\begin{equation} \\label{e:lsc of L2 norm mean curvature}\n\\int_{U} \\abs{h(\\cdot, V_t)}^2 \\, \\phi \\, d\\|V_t\\| \\leq \\liminf_{\\ell \\to \\infty} \\int_{\\R^{n+1}} \\phi \\, \\eta_{j_\\ell} \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell}} \\ast \\delta (\\partial \\E_{j_\\ell}(t))}^2}{\\Phi_{\\eps_{j_\\ell}} \\ast \\| \\partial \\E_{j_\\ell}(t) \\| + \\eps_{j_\\ell}} \\, dx<\\infty \n\\end{equation}\nfor any $\\phi \\in C_c(U;\\R^+)$.\n\n\\end{enumerate}\n\\end{proposition} \n\nBefore proving Proposition \\ref{p:integral varifold limit}, we need to state two important results, which are obtained by suitably modifying \\cite[Theorem 7.3 \\& Theorem 8.6]{KimTone}, respectively.\n\n\\begin{theorem}[Rectifiability Theorem] \\label{t:rectifiability}\nSuppose that $\\{U_{j_\\ell}\\}_{\\ell=1}^{\\infty}$ are open sets in $\\R^{n+1}$, $\\{\\E_{j_\\ell}\\}_{\\ell=1}^\\infty$ are such that $\\E_{j_\\ell} \\in \\op^N(U_{j_\\ell})$, and $\\{\\eps_{j_\\ell}\\}_{\\ell=1}^{\\infty} \\subset \\left(0,1\\right)$. 
Suppose that they satisfy\n\\begin{enumerate}\n\\item $\\partial U_{j_\\ell} \\subset \\left( \\partial U \\right)_{1\/(4\\, j_\\ell^{\\sfrac14})}$ and $U_{j_\\ell} \\, \\triangle \\, U \\subset \\left( \\partial U \\right)_{1\/(4\\, j_{\\ell}^{\\sfrac14})}$,\n\\item $\\lim_{\\ell \\to \\infty} j_\\ell^4\\,\\eps_{j_\\ell} = 0$ and $j_\\ell\\leq \\eps_{j_\\ell}^{-\\sfrac16}\/2$,\n\\item $\\sup_{\\ell \\in \\Na} \\| \\partial \\E_{j_\\ell} \\|(\\R^{n+1}) < \\infty$,\n\\item $\\liminf_{\\ell \\to \\infty} \\int_{\\R^{n+1}} \\eta_{j_\\ell} \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell}} \\ast \\delta (\\partial \\E_{j_\\ell})}^2}{\\Phi_{\\eps_{j_\\ell}} \\ast \\| \\partial \\E_{j_\\ell} \\| + \\eps_{j_\\ell}}\\, dx < \\infty$,\n\\item $\\lim_{\\ell \\to \\infty} \\Delta_{j_\\ell} \\|\\partial \\E_{j_\\ell} \\|(D_{j_\\ell}) = 0$. \n\\end{enumerate}\n\nThen, there exist a subsequence $\\{j'_\\ell\\}_{\\ell=1}^{\\infty}\\subset\\{j_\\ell\\}_{\\ell=1}^{\\infty}$ and a varifold $V \\in \\V_n(\\R^{n+1})$ such that $\\partial \\E_{j'_\\ell} \\to V$ in the sense of varifolds, $\\spt\\, \\|V\\| \\subset {\\rm clos}\\,U$, and\n\\begin{equation} \\label{e:lower density bound}\n\\theta^{*n}(\\|V\\|,x) \\geq c_0 > 0 \\qquad \\mbox{for $\\|V\\|$-a.e. $x\\in U$}\\,.\n\\end{equation}\n\nHere, $c_0$ is a constant depending only on $n$. Furthermore, $V \\mres \\bG_n(U) \\in \\RV_n(U)$.\n\n\\end{theorem} \n\n\\begin{proof}\nThe existence of a subsequence $\\{ \\partial \\E_{j'_\\ell} \\}_{\\ell=1}^{\\infty}$ converging in the sense of varifolds to $V \\in \\V_n(\\R^{n+1})$ follows from the compactness theorem for Radon measures using assumption (3). The limit varifold $V$ satisfies $\\spt\\|V\\| \\subset {\\rm clos}\\,U$ because of assumption (1). 
Indeed, since $\\spt\\| \\partial\\E_{j_\\ell}\\| \\subset {\\rm clos}\\,U_{j_\\ell}$ by definition of open partition, if $x \\in \\R^{n+1} \\setminus {\\rm clos}\\,U$ then (1) implies that there is a radius $r > 0$ such that $\\| \\partial \\E_{j'_\\ell}\\| (U_r(x)) = 0$ for all sufficiently large $\\ell$, which in turn gives $\\|V\\|(U_r(x)) = 0$. Furthermore, the validity of (2), (3), and (4) allows us to apply Proposition \\ref{p:prop56} in order to deduce that $\\| \\delta V \\| \\mres U$ is a Radon measure. Hence, the rectifiability of the limit varifold in $U$ is a consequence of Allard's rectifiability theorem \\cite[Theorem 5.5(1)]{Allard} once we prove \\eqref{e:lower density bound}. In turn, the latter can be obtained by repeating \\emph{verbatim} the arguments in \\cite[Theorem 7.3]{KimTone}. Indeed, the proof there is local, and for a given $x_0 \\in U$ it can be reproduced by replacing $B_1(x_0)$ in \\cite[Theorem 7.3]{KimTone} by $B_{\\rho}(x_0)$ for sufficiently small $\\rho>0$ and large $\\ell$\nso that $B_{\\rho}(x_0)\\subset D_{j'_\\ell}$ and $\\eta_{j'_\\ell} = 1$ on $B_{\\rho}(x_0)$.\n\\end{proof}\n\n\n\\begin{theorem}[Integrality Theorem]\\label{t:integrality}\nUnder the same assumptions as in Theorem \\ref{t:rectifiability}, if the stronger condition\n\\begin{itemize}\n\\item[(5)'] $\\lim_{\\ell \\to \\infty} j_\\ell^{2(n+1)} \\, \\Delta_{j_\\ell} \\| \\partial \\E_{j_\\ell} \\|(D_{j_\\ell}) = 0$\n\\end{itemize}\nholds in place of (5), then there is a converging subsequence $\\{\\partial\\E_{j'_\\ell}\\}_{\\ell=1}^{\\infty}$ such that the limit varifold $V$ satisfies $V \\mres \\bG_n(U) \\in \\IV_n(U)$.\n\\end{theorem}\nJust like Theorem \\ref{t:rectifiability}, the claim is local in nature, and the proof is the same as \nthat of \\cite[Theorem 8.6]{KimTone}. 
\n\n\\begin{proof}[Proof of Proposition \\ref{p:integral varifold limit}]\n\nFirst, observe that by \\eqref{finite total mean curvature} and Fatou's lemma we have\n\\begin{equation} \\label{vlim1}\n\\liminf_{\\ell \\to \\infty} \\int_{\\R^{n+1}} \\eta_{j_\\ell} \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell}} \\ast \\delta (\\partial \\E_{j_\\ell}(t))}^2}{\\Phi_{\\eps_{j_\\ell}} \\ast \\| \\partial \\E_{j_\\ell}(t) \\| + \\eps_{j_\\ell}} \\, dx < \\infty\n\\end{equation}\nfor a.e. $t \\in \\R^+$. Furthermore, from \\eqref{induction:mass estimate} and the definition of $\\partial \\E_{j}(t)$ we also have that for every $T < \\infty$\n\\begin{equation} \\label{vlim2}\n\\sup_{\\ell \\in\\Na} \\sup_{t \\in \\left[ 0, T \\right]} \\| \\partial \\E_{j_\\ell}(t) \\| (\\R^{n+1}) < \\infty\\,.\n\\end{equation}\nLet $t \\in \\R^+$ be such that \\eqref{vlim1} and \\eqref{decay of mass reduction} hold. We want to show that the sequence $\\{ \\partial \\E_{j_\\ell}(t) \\}_{\\ell=1}^\\infty$ satisfies the assumptions of Theorem \\ref{t:integrality}. Assumption (1) follows from the construction of the discrete flow in Proposition \\ref{p:induction} and the choice of $\\eps_{j_\\ell}$; (2) follows again from the choice of $\\eps_{j_\\ell}$, more precisely from \\eqref{e:eps_smallness}; (3) and (4) are \\eqref{vlim2} and \\eqref{vlim1}, respectively; (5)' is \\eqref{decay of mass reduction}. Hence, Theorem \\ref{t:integrality} implies that, along a further subsequence $\\{j_\\ell'\\}_{\\ell=1}^\\infty \\subset \\{j_\\ell\\}_{\\ell=1}^\\infty$, $\\partial \\E_{j_\\ell'}(t)$ converges, as $\\ell \\to \\infty$, to a varifold $V_t \\in \\V_n(\\R^{n+1})$ with $\\spt \\|V_t\\| \\subset {\\rm clos}\\,U$ and such that $V_t \\mres \\bG_n(U) \\in \\IV_n(U)$. 
Since the convergence is in the sense of varifolds, the weights converge as Radon measures, and thus $\\lim_{\\ell \\to \\infty} \\| \\partial \\E_{j_\\ell'}(t) \\| = \\| V_t \\|$: \\eqref{e:limit_measure} then readily implies that $\\| V_t \\| \\mres U = \\mu_t$ as Radon measures on $U$, thus proving (1). Concerning the statement in (2), let $\\{j_\\ell'\\}_{\\ell=1}^\\infty$ be a subsequence along which \\eqref{hp:uniform bound on L2 mean curvature} holds. Then, any converging further subsequence must converge to a varifold satisfying the conclusion of Theorem \\ref{t:integrality}. A priori, two distinct subsequences may converge to different limits. On the other hand, each subsequential limit $V_t$ is a rectifiable varifold when restricted to the open set $U$, and furthermore it satisfies $\\|V_t\\| \\mres U = \\mu_t$. Since rectifiable varifolds are uniquely determined by their weight, we deduce that the limit in $U$ is independent of the particular subsequence, and thus \\eqref{hp:uniform bound on L2 mean curvature} forces the whole sequence $\\partial \\E_{j_\\ell'}(t)$ to converge to a uniquely determined integral varifold $V_t$ in $U$. Finally, (3) follows from Proposition \\ref{p:prop56}.\n\\end{proof}\n\nA byproduct of the proof of Proposition \\ref{p:integral varifold limit} is the existence of a (uniquely defined) integral varifold $V_t \\in \\IV_{n}(U)$ with weight $\\|V_t\\| = \\mu_t$ for every $t \\in \\R^+ \\setminus Z$, where $\\mathcal{L}^1(Z) = 0$. Such a varifold $V_t$ is the limit on $U$ of any sequence $\\partial \\E_{j_\\ell'}(t)$ along which \\eqref{hp:uniform bound on L2 mean curvature} holds true. We can now extend the definition of $V_t$ to $t \\in Z$ so as to have a one-parameter family $\\{V_t\\}_{t \\in \\R^+} \\subset \\V_{n}(U)$ of varifolds satisfying $\\|V_t\\| = \\mu_t$ for every $t \\in \\R^+$. 
Such an extension can be defined in an arbitrary fashion: for instance, if $t \\in Z$ then we can set $V_t(\\varphi) := \\int \\varphi(x,S) \\, d\\mu_t(x)$ for every $\\varphi \\in C_{c}(\\bG_n(U))$, where $S$ is any constant plane in $\\bG(n+1,n)$.\n\n\\medskip\n\nIn the next theorem, we show that the family of varifolds $\\{V_t\\}$ is indeed a Brakke flow in $U$. \nThe boundary condition and the initial condition will be discussed in the following section. \n\n\\begin{theorem}[Brakke's inequality] \\label{t:Brakke inequality}\nFor every $T > 0$ we have\n\\begin{equation} \\label{e:mean curvature bound}\n\\|V_T\\|(U)+\\int_0^T \\int_{U} \\abs{h(x,V_t)}^2 \\, d\\|V_t\\|(x) \\, dt \\leq \\Ha^n(\\Gamma_0)\\,.\n\\end{equation}\nFurthermore, for any $\\phi \\in C^1_{c}(U \\times \\R^+ ; \\R^+)$ and $0 \\leq t_1 < t_2 < \\infty$ we have:\n\\begin{equation} \\label{e:Brakke}\n\\|V_t\\|(\\phi(\\cdot, t))\\Big|_{t=t_1}^{t_2} \\leq \\int_{t_1}^{t_2} \\left( \\delta (V_t, \\phi(\\cdot, t))(h(\\cdot, V_t)) + \\|V_t\\|( \\frac{\\partial \\phi}{\\partial t}(\\cdot, t) ) \\right) \\, dt\\,. \n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\n\nIn order to prove \\eqref{e:mean curvature bound}, we use \\eqref{induction:mass variation} with\n$\\phi=1$, which belongs to $\\cA_j\\cap \\mathcal{R}_j$ for all $j$. Assume $T\\in 2_\\Q$\nfirst. Summing over the\nindex $k$, for all sufficiently large $j$ we have\n\\begin{equation*}\n\\|\\partial \\E_{j}(T)\\|(U)-\\int_0^T \\delta(\\partial\\E_{j}(t))\n(\\eta_{j} h_{\\eps_j})\\,dt\\leq \\Ha^n(\\Gamma_0)+T\\eps_{j}^{\\sfrac18}.\n\\end{equation*}\nBy \\eqref{e:fv along h vs h in L2} and \\eqref{e:lsc of L2 norm mean curvature} as well as $\\|V_T\\|(U)\\leq \\liminf_{\\ell\\rightarrow\\infty}\\|\\partial\n\\E_{j_\\ell}(T)\\|(U)$, we obtain \\eqref{e:mean curvature bound}. For $T\\notin 2_\\Q$, \nuse \\eqref{muconti} to deduce the same inequality. 
\n\n\\medskip\n\nWe now focus on proving the validity of Brakke's inequality \\eqref{e:Brakke}. \\\\\n\n\\smallskip\n\n{\\bf Step 1.} We will first assume that $\\phi$ is independent of $t$, and then extend the proof to the more general case. By an elementary density argument, we can assume that $\\phi \\in C^\\infty_c(U; \\R^+)$. Moreover, since the support of $\\phi$ is compact and \\eqref{e:Brakke} depends linearly on $\\phi$, we can also normalize $\\phi$ in such a way that $\\phi < 1$ everywhere. Then, for all sufficiently large $i \\in \\mathbb N$, also $\\hat \\phi := \\phi + i^{-1} < 1$ everywhere. Arguing as in the proof of Proposition \\ref{p:limit_measure}, we can choose $m \\in \\mathbb N$ so that $m \\geq J(n)$ (see Lemma \\ref{l:etaj}) and furthermore\n\\begin{itemize}\n\\item[(i)] $\\hat \\phi \\in \\cA_j \\cap \\mathcal{R}_j$,\n\\item[(ii)] $\\eta_j \\, \\hat \\phi \\in \\cA_j$\n\\end{itemize}\nfor all $j \\geq m$. Next, fix $0 \\leq t_1 < t_2 < \\infty$, and let $\\ell$ be such that $j_\\ell \\geq m$ and $j_\\ell \\geq t_2$, so that $\\partial \\E_{j_\\ell}(t)$ is certainly well defined for $t \\in \\left[ t_1, t_2 \\right]$. By the condition (i) above, we can apply \\eqref{induction:mass variation} with $\\hat \\phi$ and deduce\n\\begin{equation} \\label{Brakke1}\n\\| \\partial \\E_{j_\\ell}(t)\\|(\\hat \\phi) - \\| \\partial \\E_{j_\\ell} (t - \\Delta t_{j_\\ell}) \\| (\\hat \\phi) \\leq \\Delta t_{j_\\ell} \\, \\left(\\delta (\\partial \\E_{j_\\ell}(t), \\hat \\phi)(\\eta_{j_\\ell} \\, h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell}(t))) + \\eps_{j_\\ell}^{\\sfrac18}\\right)\n\\end{equation}\nfor every $t = \\Delta t_{j_\\ell}, 2\\, \\Delta t_{j_\\ell}, \\ldots, j_\\ell\\, 2^{p_{j_\\ell}} \\, \\Delta t_{j_\\ell}$. 
Since $\\Delta t_{j_\\ell} \\to 0$ as $\\ell \\to \\infty$, we can assume without loss of generality that $\\Delta t_{j_\\ell} < t_2 - t_1$, so that there exist $k_1, k_2 \\in \\mathbb N$ with $k_1 < k_2$ such that $t_1 \\in \\left( (k_1 - 2) \\, \\Delta t_{j_\\ell}, (k_1 - 1) \\, \\Delta t_{j_\\ell} \\right]$ and $t_2 \\in \\left( (k_2 - 1) \\, \\Delta t_{j_\\ell}, k_2 \\, \\Delta t_{j_\\ell} \\right]$. If we sum \\eqref{Brakke1} on $t = k \\, \\Delta t_{j_\\ell}$ for $k \\in \\left[ k_1, k_2 \\right] \\cap \\mathbb N$ we get\n\\begin{equation} \\label{Brakke2}\n\\| \\partial \\E_{j_\\ell}(t) \\|(\\hat \\phi) \\Big|_{t= (k_1 - 1) \\, \\Delta t_{j_\\ell}}^{k_2\\,\\Delta t_{j_\\ell}} \\leq \\sum_{k=k_1}^{k_2} \\Delta t_{j_\\ell} \\, \\left(\\delta (\\partial \\E_{j_\\ell}(k\\,\\Delta t_{j_\\ell}), \\hat \\phi)(\\eta_{j_\\ell} \\, h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell}(k\\,\\Delta t_{j_\\ell}))) + \\eps_{j_\\ell}^{\\sfrac18}\\right)\\,.\n\\end{equation}\nSince $\\hat \\phi = \\phi + i^{-1}$, we can estimate the left-hand side of \\eqref{Brakke2} from below as\n\\begin{equation} \\label{Brakke3}\n\\| \\partial \\E_{j_\\ell}(t) \\|(\\hat \\phi) \\Big|_{t= (k_1 - 1) \\, \\Delta t_{j_\\ell}}^{k_2\\,\\Delta t_{j_\\ell}} \\geq \\| \\partial \\E_{j_\\ell}(t_2)\\| (\\phi) - \\| \\partial \\E_{j_\\ell}(t_1) \\|(\\phi) - i^{-1} \\| \\partial \\E_{j_\\ell} (t_1) \\|(\\R^{n+1})\\,,\n\\end{equation}\nso that when we let $\\ell \\to \\infty$ we conclude\n\\begin{equation} \\label{Brakke4}\n\\limsup_{\\ell \\to \\infty} \\| \\partial \\E_{j_\\ell}(t) \\|(\\hat \\phi) \\Big|_{t= (k_1 - 1) \\, \\Delta t_{j_\\ell}}^{ k_2\\,\\Delta t_{j_\\ell}} \\geq \\| V_{t} \\|(\\phi) \\Big|_{t=t_1}^{t_2} - i^{-1}\\, \\| \\partial \\E_0 \\|(\\R^{n+1})\\,,\n\\end{equation}\nwhere we have used \\eqref{e:limit_measure} together with Proposition \\ref{p:integral varifold limit}(1).\\\\\n\n\\smallskip\n\nNext, we estimate the right-hand side of \\eqref{Brakke2} from above. 
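\nIn the computation below, we will repeatedly use the following elementary identity for the $\\phi$-weighted first variation (we record it as a sketch, in the conventions of \\cite{KimTone}): for a varifold $V$, a non-negative $C^1$ function $\\phi$ and a vector field $X\\in C^1_c$,\n\\begin{equation*}\n\\delta(V,\\phi)(X)=\\int \\phi\\,\\nabla X\\cdot S\\,dV(x,S)+\\int X\\cdot\\nabla\\phi\\,d\\|V\\|=\\delta V(\\phi\\,X)+\\int S^{\\perp}(\\nabla\\phi)\\cdot X\\,dV(x,S)\\,,\n\\end{equation*}\nwhich follows from $\\nabla(\\phi X)\\cdot S=\\phi\\,\\nabla X\\cdot S+S(\\nabla\\phi)\\cdot X$ together with $\\nabla\\phi-S(\\nabla\\phi)=S^{\\perp}(\\nabla\\phi)$.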
Setting $\\partial \\E_{j_\\ell} = \\partial \\E_{j_\\ell}(t)$ and $h_{\\eps_{j_\\ell}} = h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell})$, we proceed as in \\eqref{monotonicity estimate basic} writing\n\\begin{equation} \\label{Brakke5}\n\\delta(\\partial \\E_{j_\\ell}, \\hat \\phi)(\\eta_{j_\\ell}\\, h_{\\eps_{j_\\ell}}) = \\delta (\\partial \\E_{j_\\ell})(\\eta_{j_\\ell}\\, \\hat \\phi \\, h_{\\eps_{j_\\ell}}) + \\int_{\\bG_{n}(\\R^{n+1})} \\eta_{j_\\ell}\\, S^\\perp (\\nabla \\phi) \\cdot h_{\\eps_{j_\\ell}}\\, d(\\partial \\E_{j_\\ell})\\,,\n\\end{equation}\nwhere we have used that $\\nabla \\hat\\phi = \\nabla \\phi$. Since $\\eta_{j_\\ell}\\, \\hat\\phi \\in \\cA_{j_\\ell}$, we can apply \\eqref{e:fv along h vs h in L2} in order to obtain that\n\\begin{equation} \\label{Brakke6}\n\\abs{\\delta (\\partial \\E_{j_\\ell})(\\eta_{j_\\ell}\\, \\hat \\phi \\, h_{\\eps_{j_\\ell}}) + b_{j_\\ell} } \\leq \\eps_{j_\\ell}^{\\sfrac14} \\left( b_{j_\\ell} + 1 \\right)\\,,\n\\end{equation}\nwhere we have set for simplicity\n\\begin{equation} \\label{Brakke7}\nb_{j_\\ell} := \\int_{\\R^{n+1}} \\eta_{j_\\ell}\\, \\hat \\phi \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell}} \\ast \\delta (\\partial \\E_{j_\\ell}) }^2}{\\Phi_{\\eps_{j_\\ell}} \\ast \\| \\partial \\E_{j_\\ell} \\| + \\eps_{j_\\ell} } \\, dx\\,.\n\\end{equation}\nConcerning the second summand in \\eqref{Brakke5}, we use the Cauchy-Schwarz inequality to estimate\n\\begin{equation} \\label{Brakke8}\n\\begin{split}\n\\Abs{ \\int_{\\bG_{n}(\\R^{n+1})} \\eta_{j_\\ell}\\, S^\\perp (\\nabla \\phi) \\cdot h_{\\eps_{j_\\ell}}\\, d(\\partial \\E_{j_\\ell}) } &\\leq \\left( \\int_{\\R^{n+1}} \\eta_{j_\\ell} \\, \\frac{\\abs{\\nabla \\phi}^2}{\\hat\\phi} \\right)^{\\sfrac12} \\left( \\int_{\\R^{n+1}} \\eta_{j_\\ell} \\, \\hat\\phi\\, \\abs{h_{\\eps_{j_\\ell}}}^2 \\right)^{\\sfrac12} \\\\\n&\\leq c \\, \\| \\partial \\E_{j_\\ell} \\| (\\R^{n+1})^{\\sfrac12}\\, \\left( (1+\\eps_{j_\\ell}^{\\sfrac14})\\, b_{j_\\ell} + 
\\eps_{j_\\ell}^{\\sfrac14} \\right)^{\\sfrac12}\\,,\n\\end{split}\n\\end{equation}\nwhere $c$ depends only on $\\| \\phi \\|_{C^2}$, and where we have used \\eqref{e:L2 norm of h vs approx}. Using \\eqref{Brakke6}, \\eqref{Brakke8} and \\eqref{induction:mass estimate}, we can then conclude that\n\\begin{equation} \\label{Brakke9}\n\\sup_{t \\in \\left[ t_1, t_2 \\right]} \\delta (\\partial \\E_{j_\\ell}(t), \\hat \\phi) (\\eta_{j_\\ell} \\, h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell}(t))) \\leq c\\,, \n\\end{equation}\nwhere $c$ depends only on $\\| \\phi \\|_{C^2}$ and $\\| \\partial \\E_0 \\|(\\R^{n+1})$. Using \\eqref{Brakke9} together with the definition of $\\partial \\E_{j_\\ell}(t)$ and Fatou's lemma, one can readily show that, when we take the $\\limsup$ as $\\ell \\to \\infty$, the right-hand side of \\eqref{Brakke2} can be bounded by \n\\begin{equation} \\label{Brakke10}\n\\begin{split}\n\\limsup_{\\ell \\to \\infty}&\\sum_{k=k_1}^{k_2} \\Delta t_{j_\\ell} \\, \\left(\\delta (\\partial \\E_{j_\\ell}(k\\,\\Delta t_{j_\\ell}), \\hat \\phi)(\\eta_{j_\\ell} \\, h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell}(k\\,\\Delta t_{j_\\ell}))) + \\eps_{j_\\ell}^{\\sfrac18}\\right) \\\\ \n&= \\limsup_{\\ell \\to \\infty} \\int_{t_1}^{t_2} \\delta (\\partial \\E_{j_\\ell}(t), \\hat\\phi) (\\eta_{j_\\ell} \\, h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell}(t))) \\, dt \\\\\n&\\leq \\int_{t_1}^{t_2} \\limsup_{\\ell \\to \\infty} \\delta (\\partial \\E_{j_\\ell}(t), \\hat\\phi) (\\eta_{j_\\ell} \\, h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell}(t))) \\, dt \\,.\n\\end{split}\n\\end{equation}\n\nNow, fix $t \\in \\left[ t_1, t_2 \\right]$ such that $\\liminf_{\\ell\\to \\infty}b_{j_\\ell}<\\infty$ \n(which holds for a.e.~$t$), and let $\\{j_\\ell'\\} \\subset \\{j_\\ell\\}$ be a subsequence which realizes the $\\limsup$, namely with\n\\begin{equation} \\label{Brakke11}\n\\lim_{\\ell \\to \\infty} \\delta (\\partial \\E_{j_\\ell'}(t), \\hat\\phi) 
(\\eta_{j_\\ell'} \\, h_{\\eps_{j_\\ell'}}(\\cdot, \\partial \\E_{j_\\ell'}(t))) = \\limsup_{\\ell \\to \\infty} \\delta (\\partial \\E_{j_\\ell}(t), \\hat\\phi) (\\eta_{j_\\ell} \\, h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell}(t)))\\,.\n\\end{equation}\nBy the identity in \\eqref{Brakke5}, we also have that along the same subsequence\n\\begin{equation} \\label{Brakke12}\n\\begin{split}\n\\lim_{\\ell\\to \\infty} \\Big( - \\delta (\\partial \\E_{j_\\ell'})&(\\eta_{j_\\ell'}\\, \\hat \\phi \\, h_{\\eps_{j_\\ell'}}) - \\int_{\\bG_{n}(\\R^{n+1})} \\eta_{j_\\ell'}\\, S^\\perp (\\nabla \\phi) \\cdot h_{\\eps_{j_\\ell'}}\\, d(\\partial \\E_{j_\\ell'}) \\Big) \\\\\n&= \\liminf_{\\ell \\to \\infty} \\Big( - \\delta (\\partial \\E_{j_\\ell})(\\eta_{j_\\ell}\\, \\hat \\phi \\, h_{\\eps_{j_\\ell}}) - \\int_{\\bG_{n}(\\R^{n+1})} \\eta_{j_\\ell}\\, S^\\perp (\\nabla \\phi) \\cdot h_{\\eps_{j_\\ell}}\\, d(\\partial \\E_{j_\\ell}) \\Big) \\,,\n\\end{split}\n\\end{equation}\nwhere once again $\\partial \\E_{j_\\ell} = \\partial \\E_{j_\\ell}(t)$ and $h_{\\eps_{j_\\ell}} = h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell})$. Using \\eqref{Brakke6} and \\eqref{Brakke8}, we see that the right-hand side of \\eqref{Brakke12} can be bounded from above by $\\liminf_{\\ell \\to \\infty} 2\\, b_{j_\\ell} + c$, whereas the left-hand side can be bounded from below by $\\limsup_{\\ell\\to \\infty} \\frac12\\, b_{j_\\ell'} - c$, where $c$ depends on $\\| \\phi \\|_{C^2}$ and $\\| \\partial \\E_0 \\|(\\R^{n+1})$. 
As a consequence, along any subsequence $\\{j_\\ell'\\}$ satisfying \\eqref{Brakke11} one has that \n\\begin{equation} \\label{Brakke13}\n\\limsup_{\\ell \\to \\infty} \\int_{\\R^{n+1}} \\eta_{j_\\ell'}\\, \\hat \\phi \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell'}} \\ast \\delta (\\partial \\E_{j_\\ell'}) }^2}{\\Phi_{\\eps_{j_\\ell'}} \\ast \\| \\partial \\E_{j_\\ell'} \\| + \\eps_{j_\\ell'} } \\, dx \\leq 4\\, \\liminf_{\\ell \\to \\infty} \\int_{\\R^{n+1}} \\eta_{j_\\ell}\\, \\hat \\phi \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell}} \\ast \\delta (\\partial \\E_{j_\\ell}) }^2}{\\Phi_{\\eps_{j_\\ell}} \\ast \\| \\partial \\E_{j_\\ell} \\| + \\eps_{j_\\ell} } \\, dx + c<\\infty\\,,\n\\end{equation}\nwhere $\\partial\\E_{j_\\ell'} = \\partial\\E_{j_\\ell'}(t)$. Let us denote the right-hand side of \\eqref{Brakke13} as $B(t)$. Since $\\hat \\phi \\geq i^{-1}$, and thanks to \\eqref{Brakke13}, if $B(t) < \\infty$ then the assumption \n\\eqref{hp:uniform bound on L2 mean curvature} of Proposition \\ref{p:integral varifold limit} is satisfied along $j_\\ell'$: hence, the whole sequence $\\{\\partial \\E_{j_\\ell'}(t)\\}_{\\ell=1}^{\\infty}$ converges to $V_t\n\\in \\IV_n(U)$ as varifolds in $U$. 
Furthermore, using one more time that $\\hat \\phi \\geq i^{-1}$ we deduce that\n\\begin{equation} \\label{Brakke14}\n\\limsup_{\\ell \\to \\infty} \\int_{\\R^{n+1}} \\eta_{j_\\ell'} \\, \\frac{\\abs{\\Phi_{\\eps_{j_\\ell'}} \\ast \\delta (\\partial \\E_{j_\\ell'}) }^2}{\\Phi_{\\eps_{j_\\ell'}} \\ast \\| \\partial \\E_{j_\\ell'} \\| + \\eps_{j_\\ell'} } \\, dx \\leq i \\, B(t)\\,.\n\\end{equation} \nUsing \\eqref{Brakke11}, \\eqref{Brakke5}, \\eqref{Brakke6}, $\\hat \\phi > \\phi$, and Proposition \\ref{p:integral varifold limit}(3) with $\\phi$ (recalling $\\phi\\in C_c^{\\infty}(U;\\mathbb R^+)$), we have\n\\begin{equation} \\label{Brakke key1}\n\\begin{split}\n\\limsup_{\\ell \\to \\infty} \\delta (\\partial \\E_{j_\\ell}(t), \\hat\\phi) &(\\eta_{j_\\ell} \\, h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell}(t))) = \\lim_{\\ell \\to \\infty} \\delta (\\partial \\E_{j_\\ell'}(t), \\hat\\phi) (\\eta_{j_\\ell'} \\, h_{\\eps_{j_\\ell'}}(\\cdot, \\partial \\E_{j_\\ell'}(t))) \\\\\n\\leq &- \\int_{U} \\abs{h(\\cdot, V_t)}^2 \\, \\phi \\, d\\|V_t\\| \\\\ &+ \\limsup_{\\ell \\to \\infty} \\int_{\\bG_n(U)} S^\\perp(\\nabla \\phi) \\cdot h_{\\eps_{j_\\ell'}}(\\cdot, \\partial \\E_{j_\\ell'}(t)) \\, d(\\partial \\E_{j_\\ell'}(t))\\,,\n\\end{split}\n\\end{equation}\nwhere we have also used that, as $\\ell \\to \\infty$, $\\eta_{j_\\ell'} = 1$ on $\\{ \\nabla \\phi \\neq 0 \\} \\subset\\joinrel\\subset U$.\\\\\n\n\\smallskip\n\nNow, recall that $V_t \\in \\IV_n(U)$. Therefore, there is an $\\Ha^n$-rectifiable set $M_t \\subset U$ such that \n\\begin{equation} \\label{Brakke15}\n\\int_{\\bG_n(U)} S^\\perp(\\nabla \\phi(x)) \\, dV_t(x,S) = \\int_{U} T_{x}M_t^\\perp(\\nabla\\phi(x)) \\, d\\|V_t\\|(x)\\,. 
\n\\end{equation}\nFurthermore, since the map $x \\mapsto T_{x}M_t^\\perp(\\nabla\\phi(x))$ is in $L^2(\\|V_t\\|)$, for any $\\eps > 0$ there are a vector field $g \\in C^\\infty_c(U; \\R^{n+1})$ and a positive integer $m'$ such that $g \\in \\cB_{m'}$ and \n\\begin{equation} \\label{Brakke16}\n\\int_{U} \\abs{T_{x}M_t^\\perp(\\nabla\\phi(x)) - g(x)}^2 \\, d\\|V_t\\|(x) \\leq \\eps^2\\,.\n\\end{equation}\n\nIn order to estimate the $\\limsup$ in the right-hand side of \\eqref{Brakke key1}, we can now compute, for $\\partial\\E_{j_\\ell'} = \\partial\\E_{j_\\ell'}(t)$:\n\\begin{equation} \\label{Brakke add and subtract}\n\\begin{split}\n\\int_{\\bG_n(U)} & S^\\perp(\\nabla \\phi) \\cdot h_{\\eps_{j_\\ell'}}(\\cdot, \\partial \\E_{j_\\ell'}) \\, d(\\partial \\E_{j_\\ell'}) \\\\\n= &\\int_{\\bG_n(U)} (S^\\perp(\\nabla\\phi) - g) \\cdot h_{\\eps_{j_\\ell'}} \\, d(\\partial\\E_{j_\\ell'})\\\\\n&+ \\left( \\int_{U} g \\cdot h_{\\eps_{j_\\ell'}} \\, d\\|\\partial \\E_{j_\\ell'}\\| + \\delta (\\partial\\E_{j_\\ell'})(g) \\right) - \\delta (\\partial\\E_{j_\\ell'})(g) + \\delta V_t(g) \\\\ \n&+ \\int_{U} h(\\cdot, V_t) \\cdot \\left( g - T_{\\cdot}M_t^\\perp(\\nabla \\phi) \\right) \\, d\\|V_t\\| \\\\\n&+ \\int_{\\bG_{n}(U)} h(\\cdot, V_t) \\cdot S^\\perp(\\nabla\\phi) \\, dV_t(\\cdot,S)\\,.\n\\end{split}\n\\end{equation}\n\nWe proceed estimating each term of \\eqref{Brakke add and subtract}.\nUsing that $\\eta_{j_\\ell'} = 1$ on $\\{ \\nabla\\phi \\neq 0 \\}$ for all $\\ell$ sufficiently large, the Cauchy-Schwarz inequality gives that\n\\begin{equation} \\label{Brakke17}\n\\begin{split}\n\\Big|\\int_{\\bG_n(U)} (S^\\perp(\\nabla\\phi) - g) & \\cdot h_{\\eps_{j_\\ell'}} \\, d(\\partial\\E_{j_\\ell'})\\Big| \\\\ & \\leq \\left( \\int_{\\bG_{n}(U)} \\abs{S^\\perp(\\nabla\\phi) - g}^2 \\, d(\\partial\\E_{j_\\ell'}) \\right)^{\\frac12} \\, \\left( \\int_{\\R^{n+1}} \\eta_{j_\\ell'} \\, \\abs{h_{\\eps_{j_\\ell'}}}^2 \\, d\\| \\partial\\E_{j_\\ell'}\\| 
\\right)^{\\frac12}\n\\end{split}\n\\end{equation}\nfor all $\\ell$ sufficiently large. Since $(x,S) \\mapsto \\abs{S^\\perp(\\nabla\\phi(x)) - g(x)}^2 \\in C_{c}(\\bG_n(U))$, we have that\n\\begin{equation} \\label{Brakke18}\n\\begin{split}\n\\lim_{\\ell\\to\\infty} \\int_{\\bG_{n}(U)} \\abs{S^\\perp(\\nabla\\phi) - g}^2 \\, d(\\partial\\E_{j_\\ell'}) &= \\int_{\\bG_n(U)} \\abs{S^\\perp(\\nabla\\phi) - g}^2 \\, dV_t \\\\\n&= \\int_{U} \\abs{T_xM_t^\\perp (\\nabla\\phi(x)) - g(x)}^2 \\, d\\|V_t\\|(x) \\overset{\\eqref{Brakke16}}{\\leq} \\eps^2\\,.\n\\end{split}\n\\end{equation} \nUsing \\eqref{e:L2 norm of h vs approx}, \\eqref{Brakke14}, \\eqref{Brakke17} and \\eqref{Brakke18}, we then conclude that\n\\begin{equation} \\label{Brakke19}\n\\limsup_{\\ell\\to\\infty}\\Big|\\int_{\\bG_n(U)} (S^\\perp(\\nabla\\phi) - g) \\cdot h_{\\eps_{j_\\ell'}} \\, d(\\partial\\E_{j_\\ell'})\\Big| \\leq \\left( i\\, B(t) \\right)^{\\frac12} \\, \\eps\\,.\n\\end{equation}\n\nAnalogously, since $\\eta_{j_\\ell'} = 1$ on $\\{g \\neq 0\\}$ for all $\\ell$ sufficiently large, we have that\n\\begin{equation} \\label{Brakke20}\n\\lim_{\\ell\\to\\infty} \\Abs{\\int_{U} g \\cdot h_{\\eps_{j_\\ell'}} \\, d\\|\\partial \\E_{j_\\ell'}\\| + \\delta (\\partial\\E_{j_\\ell'})(g)} = \\lim_{\\ell\\to\\infty} \\Abs{\\int_{\\R^{n+1}} \\eta_{j_\\ell'} \\,g \\cdot h_{\\eps_{j_\\ell'}} \\, d\\|\\partial \\E_{j_\\ell'}\\| + \\delta (\\partial\\E_{j_\\ell'})(\\eta_{j_\\ell'}\\,g)} = 0\n\\end{equation}\nby \\eqref{e:prop55} and \\eqref{Brakke14}. 
\n\nNext, by varifold convergence of $\\partial\\E_{j_\\ell'}$ to $V_t$ on $U$, given that $g$ has compact support in $U$, we also have\n\\begin{equation} \\label{Brakke21}\n\\lim_{\\ell\\to\\infty} \\abs{\\delta(\\partial\\E_{j_\\ell'})(g) - \\delta V_t (g)} = 0\\,.\n\\end{equation}\n\nFinally, letting $\\psi$ be any function in $C_c(U; \\R^+)$ such that $\\psi = 1$ on $\\{g \\neq 0 \\} \\cup \\{\\nabla\\phi \\neq 0\\}$ and $0\\leq \\psi\\leq 1$, the Cauchy-Schwarz inequality allows us to estimate\n\\begin{equation} \\label{Brakke22}\n\\begin{split}\n\\Big|\\int_{U} h(x, V_t) \\cdot &\\left( g(x) - T_xM_t^\\perp(\\nabla\\phi(x)) \\right) \\, d\\|V_t\\| \\Big|\\\\\n&\\leq \\left( \\int_{U} \\abs{h(x,V_t)}^2\\,\\psi(x) \\, d\\|V_t\\|(x) \\right)^{\\frac12} \\, \\left( \\int_{U} \\abs{g(x) - T_xM_t^\\perp (\\nabla\\phi(x))}^2 \\, d\\|V_t\\|(x) \\right)^{\\frac12} \\\\\n&\\leq \\left( i \\, B(t) \\right)^{\\frac12} \\, \\eps\\,,\n\\end{split}\n\\end{equation}\nwhere in the last inequality we have used \\eqref{e:lsc of L2 norm mean curvature} with $\\psi$ in place of $\\phi$, \\eqref{Brakke14} and \\eqref{Brakke16}.\n\nFrom \\eqref{Brakke add and subtract}, combining \\eqref{Brakke19}-\\eqref{Brakke22} we conclude that\n\\begin{equation} \\label{Brakke ci siamo quasi}\n\\limsup_{\\ell\\to\\infty} \\int_{\\bG_n(U)} S^\\perp (\\nabla\\phi) \\cdot h_{\\eps_{j_\\ell'}}(\\cdot, \\partial\\E_{j_\\ell'}) \\, d(\\partial\\E_{j_\\ell'}) \\leq \\int_{U} h(\\cdot, V_t) \\cdot \\nabla\\phi \\, d\\|V_t\\| + 2\\, \\left( i \\, B(t) \\right)^{\\frac12} \\, \\eps\\,,\n\\end{equation}\nwhere we have also used \\eqref{BPT}.\n\nWe can now combine \\eqref{Brakke2}, \\eqref{Brakke4}, \\eqref{Brakke10}, \\eqref{Brakke key1}, and \\eqref{Brakke ci siamo quasi} to deduce that\n\\begin{equation} \\label{Brakke last effort}\n\\begin{split}\n \\|V_t\\|(\\phi) \\Big|_{t=t_1}^{t_2} \\leq &- \\int_{t_1}^{t_2} \\int_{U} \\left(\\abs{h(\\cdot, V_t)}^2\\, \\phi - h(\\cdot, V_t) \\cdot 
\\nabla\\phi\\right) \\, d\\|V_t\\| \\, dt \\\\\n &+ i^{-1} \\, \\|\\partial\\E_0\\|(\\R^{n+1}) + 2 i^{\\frac12} \\eps \\, \\int_{t_1}^{t_2} B(t)^{\\frac12} \\, dt\\,.\n \\end{split}\n\\end{equation}\n\nWe use the Cauchy-Schwarz inequality one more time, and combine it with the definition of $B(t)$ as the right-hand side of \\eqref{Brakke13} and with Fatou's lemma to obtain the bound\n\\begin{equation} \\label{errore sotto controllo}\n\\int_{t_1}^{t_2} B(t)^{\\frac12} \\, dt \\leq (t_2 - t_1) + c\\, (t_2 - t_1) + 4\\, \\liminf_{\\ell\\to\\infty} \\int_{t_1}^{t_2} \\int_{\\R^{n+1}} \\eta_{j_\\ell} \\, \\hat\\phi \\frac{\\abs{\\Phi_{\\eps_{j_\\ell}} \\ast \\delta (\\partial\\E_{j_\\ell})}^2}{\\Phi_{\\eps_{j_\\ell}} \\ast \\| \\partial\\E_{j_\\ell} \\| + \\eps_{j_\\ell}} \\,,\n\\end{equation}\nwhich is finite (depending on $t_2$) by \\eqref{finite total mean curvature} (recall that $\\hat \\phi \\leq 1$ everywhere). Brakke's inequality \\eqref{e:Brakke} for a test function $\\phi$ which does not depend on $t$ is then deduced from \\eqref{Brakke last effort} after letting $\\eps \\downarrow 0$ and then $i \\uparrow \\infty$.\n\n\\smallskip\n\n{\\bf Step 2.} We now consider the general case of a time-dependent test function $\\phi \\in C^{1}_{c}(U \\times \\R^+ ;\\R^+)$. We can once again assume that $\\phi$ is smooth, and then conclude by a density argument. The proof follows the same strategy as Step 1. We define $\\hat\\phi$ analogously, and then we apply \\eqref{induction:mass variation} with $\\phi = \\hat\\phi(\\cdot, t)$.
In place of \\eqref{Brakke1}, we then obtain a formula with one extra term, namely\n\\begin{equation} \\label{Brakke1bis}\n\\begin{split}\n\\| \\partial \\E_{j_\\ell}(s)\\|(\\hat \\phi(\\cdot,s)) \\Big|_{s=t-\\Delta t_{j_\\ell}}^{t} \\leq \\Delta t_{j_\\ell} \\, &\\left(\\delta (\\partial \\E_{j_\\ell}(t), \\hat \\phi(\\cdot, t))(\\eta_{j_\\ell} \\, h_{\\eps_{j_\\ell}}(\\cdot, \\partial \\E_{j_\\ell}(t))) + \\eps_{j_\\ell}^{\\sfrac18}\\right)\\\\\n&+ \\| \\partial \\E_{j_\\ell}(t-\\Delta t_{j_\\ell}) \\| (\\phi (\\cdot, t) - \\phi (\\cdot, t-\\Delta t_{j_\\ell}))\\,.\n\\end{split}\n\\end{equation}\nSimilarly, the inequality in \\eqref{Brakke2} needs to be replaced with an analogous one containing, in the right-hand side, also the term\n\\begin{equation} \\label{Brakke time}\n\\sum_{k=k_1}^{k_2} \\| \\partial \\E_{j_\\ell}((k-1)\\Delta t_{j_\\ell}) \\| (\\phi (\\cdot, k\\, \\Delta t_{j_\\ell}) - \\phi (\\cdot, (k-1)\\Delta t_{j_\\ell}))\\,.\n\\end{equation} \n\nUsing the regularity of $\\phi$ and the estimates in \\eqref{induction:mass estimate} and \\eqref{induction:mean curvature}, we may deduce that\n\\begin{equation} \\label{Brakke time 2}\n\\begin{split}\n\\lim_{\\ell\\to\\infty} \\eqref{Brakke time} &= \\lim_{\\ell \\to \\infty} \\sum_{k=k_1}^{k_2} \\| \\partial \\E_{j_\\ell}(k\\,\\Delta t_{j_\\ell}) \\| \\left( \\frac{\\partial\\phi}{\\partial t}(\\cdot, k\\,\\Delta t_{j_\\ell}) \\right) \\Delta t_{j_\\ell} \\\\\n&= \\lim_{\\ell\\to\\infty} \\int_{t_1}^{t_2} \\| \\partial \\E_{j_\\ell}(t) \\| \\left( \\frac{\\partial\\phi}{\\partial t}(\\cdot, t) \\right) \\, dt \\\\\n&= \\int_{t_1}^{t_2} \\|V_t\\| \\left( \\frac{\\partial\\phi}{\\partial t}(\\cdot, t) \\right)\\, dt\\,,\n\\end{split}\n\\end{equation}\nwhere the last identity is a consequence of \\eqref{e:limit_measure}, Proposition \\ref{p:integral varifold limit}(1), and Lebesgue's dominated convergence theorem. The remaining part of the argument stays the same, modulo the following variation. 
The identity in \\eqref{Brakke10} remains true if $\\hat \\phi$ is replaced by the piecewise constant function $\\hat\\phi_{j_\\ell}$ defined by\n\\[\n\\hat\\phi_{j_\\ell}(x,t) := \\hat\\phi(x,k\\, \\Delta t_{j_\\ell}) \\qquad \\mbox{if $t \\in \\left( (k-1) \\, \\Delta t_{j_\\ell}, k\\,\\Delta t_{j_\\ell} \\right]$}\\,.\n\\]\nThe error made when putting $\\hat \\phi$ back into \\eqref{Brakke10} in place of $\\hat\\phi_{j_\\ell}$ is then bounded by the product of $\\Delta t_{j_\\ell}$ with negative powers of $\\eps_{j_\\ell}$; nonetheless, this error converges to $0$ uniformly as $\\ell \\uparrow \\infty$ by the choice of $\\Delta t_{j_\\ell}$, see \\eqref{d:time step}. This allows us to conclude the proof of \\eqref{e:Brakke} precisely as in the case of a time-independent $\\phi$ whenever $\\phi \\in C^\\infty_c(U \\times \\R^+ ; \\R^+)$, and in turn, by approximation, also when $\\phi \\in C^1_c(U \\times \\R^+ ; \\R^+)$.\n\\end{proof}\n\n\n\n\\section{Boundary behavior and proof of main results} \\label{sec:bb}\n\n\n\\subsection{Vanishing of measure outside the convex hull of initial data}\n\nFirst, we prove that the limit measures $\\|V_t\\|$ vanish uniformly in time near $\\partial U\\setminus \\partial\\Gamma_0$. This is a preliminary result: using Brakke's inequality, we will eventually prove in Proposition \\ref{vvmp} that they actually vanish outside the convex hull of $\\Gamma_0\\cup\\partial \\Gamma_0$.\n\\begin{proposition} \n\\label{vmn}\nFor $\\hat x\\in \\partial U\\setminus \\partial\\Gamma_0$, suppose that an affine hyperplane $A\\subset\\R^{n+1}$ with $\\hat x\\notin A$ has the following property. Let $A^+$ and $A^-$ be the open half-spaces separated by $A$, i.e., $\\R^{n+1}$ is the disjoint union of $A^+$, $A$ and $A^-$, with $\\hat x\\in A^+$.
\nDefine $d_A(x):={\\rm dist}\\,(x,A^-)$, and suppose that \n\\begin{enumerate}\n\\item $\\Gamma_0\\cup \\partial \\Gamma_0\\subset A^-$,\n\\item $d_A$ is $\\nu_{U}$-non decreasing in $A^+$. \n\\end{enumerate}\nThen for any compact set $C\\subset A^+$, we have\n\\begin{equation}\n\\label{vmn1}\n\\lim_{j\\rightarrow\\infty}\\sup_{t\\in[0,j^{\\sfrac12}]}\\|\\partial\\E_j(t)\\|(C)=0.\n\\end{equation}\n\\end{proposition}\n\n\\begin{remark} \\label{rmk:hyperplane}\nDue to the definition of $\\partial\\Gamma_0$ and the strict convexity of $U$, such an affine hyperplane $A$ exists for any given $\\hat x\\in \\partial U\\setminus\\partial \\Gamma_0$. For example, we may choose a hyperplane $A$ which is parallel to the tangent space of $\\partial U$ at $\\hat x$ and which passes through $\\hat x- \\nu_{U}(\\hat x)c$. By the strict convexity of $U$ and the $C^1$ regularity of $\\nu_{U}$, one can show that such an $A$ satisfies (1) and (2) above for all sufficiently small $c>0$. \n\\end{remark}\n\\begin{remark}\nIn the following proof, we adapt a computation from \\cite[p.60]{Ilm1}. There, the computation concerns the Brakke flow itself; the point here is that a similar computation can be carried out for the approximate MCF, with suitable error estimates. \n\\end{remark}\n\\begin{proof}\nWe may assume, after a suitable change of coordinates, that $A=\\{x_{n+1}=0\\}$ and $A^+=\\{x_{n+1}>0\\}$. With this, we have ${\\rm clos}\\,\\Gamma_0\\subset\\{x_{n+1}<0\\}$, and $d_A(x)=\\max\\{x_{n+1},0\\}$ is $\\nu_{U}$-non decreasing in $\\{x_{n+1} > 0\\}$. Let $s>0$ be arbitrary, and define\n\\begin{equation}\n\\label{vmn2}\n\\phi(x):=s+ (d_A(x))^\\beta\n\\end{equation}\nfor some $\\beta\\geq 3$ to be fixed later.
Then $\\phi\\in C^2(\\R^{n+1}; \\R^+)$, and letting $\\{e_1,\\,\\ldots,\\,e_{n+1}\\}$ denote the standard basis of $\\R^{n+1}$, we have\n\\begin{equation}\n\\label{ilm11}\n\\nabla\\phi = \\beta \\, d_A^{\\beta - 1} \\, e_{n+1}\\,, \\qquad \\nabla^2\\phi = \\beta\\,(\\beta-1)\\, d_A^{\\beta-2} \\, e_{n+1} \\otimes e_{n+1}\\,.\n\\end{equation}\nWith $s>0$ fixed, we choose sufficiently large $j$ \nso that $\\phi\\in \\mathcal A_j$. Actually, the function $\\phi$ as defined in \\eqref{vmn2} is unbounded. Nonetheless, since we know that $\\spt\\,\\|\\partial\\E_j(t)\\|\\subset (U)_{1\/(4j^{\\sfrac14})}$, we may modify $\\phi$ suitably \naway from $U$ by multiplying it by a small number and truncating it, so that \n$\\phi\\leq 1$. We assume that we have done this modification if necessary. \nWe also choose $j$ so large that $\\eta_j=1$ on $\\{x_{n+1}\\geq 0\\}$. This is \npossible due to Lemma \\ref{l:etaj}(1). \nAdditionally, since $d_A$ is $\\nu_{U}$-non decreasing in $A^+$, and since $\\phi$ is constant in $\\R^{n+1} \\setminus A^+$, we have $\\phi\n\\in \\mathcal R_j$. 
\nThus, by \\eqref{induction:mass variation}, writing $V:=\\partial\\E_{j,k}$ and $\\hat V:=\\partial\\E_{j,k-1}$ with $k\\in\\{1,\\ldots,j2^{p_j}\\}$, we have\n\\begin{equation}\n\\label{ilm1}\n\\frac{\\|V\\|(\\phi)-\\|\\hat V\\|(\\phi)}{\\Delta t_j}\\leq \\eps_j^{\\sfrac18}+\\delta\n(V,\\phi)(\\eta_j\\,h_{\\eps_j}(\\cdot,V)).\n\\end{equation}\nFor all sufficiently large $j$, we also have $\\eta_j\\phi\\in \\mathcal A_j$, thus we may proceed as in \\eqref{monotonicity estimate basic} and estimate\n\\begin{equation}\\label{ilm4}\n\\begin{split}\n\\delta(V,\\phi)(\\eta_j h_{\\eps_j}(\\cdot,V))&=\\delta V(\\phi\\,\\eta_j\\, h_{\\eps_j})+\n\\int_{\\bG_n(\\R^{n+1})}\\eta_j\\, h_{\\eps_j}\\cdot(I-S)(\\nabla\\phi)\\,dV(x,S) \\\\\n&\\leq -(1-\\eps_j^{\\sfrac14})\\int \\eta_j\\,\\phi\\,\\frac{|\\Phi_{\\eps_j}\\ast\\delta V|^2}{\\Phi_{\\eps_j}\\ast\\|V\\|\n+\\eps_j}\\,dx+\\eps_j^{\\sfrac14}+\\frac12\\int \\eta_j \\, \\phi\\, |h_{\\eps_j}|^2\\,d\\|V\\|\\\\\n&\\qquad +\\frac12\\int\\frac{|S(\\nabla\\phi)|^2}{\\phi}\\,dV+\\int h_{\\eps_j}\\cdot\\nabla\\phi\\,d\\|V\\|.\n\\end{split}\n\\end{equation}\n\nHere we have used that $\\eta_j=1$ when $\\nabla\\phi\\neq 0$. In the present proof, we omit the domains of integration, which are either $\\R^{n+1}$ or $\\bG_n(\\R^{n+1})$ unless specified otherwise.\nWe use \\eqref{e:L2 norm of h vs approx} to continue the estimate as\n\\[\n\\leq -\\left(1-\\frac12-\\frac{3\\eps_j^{\\sfrac14}}{2}\\right)\\int\\eta_j\\,\\phi\\,\\frac{|\\Phi_{\\eps_j}\\ast\\delta V|^2}{\\Phi_{\\eps_j}\\ast\\|V\\|\n+\\eps_j}\\,dx+2\\eps_j^{\\sfrac14}+\\frac12\\int\\frac{|S(\\nabla\\phi)|^2}{\\phi}\\,dV+\\int h_{\\eps_j}\\cdot\\nabla\\phi\\,d\\|V\\|.\n\\]\nWe now show that the last term gives a good negative contribution.
\n We have\n \\begin{equation}\n \\begin{split}\n \\int & h_{\\eps_j}\\cdot \\nabla\\phi\\,d\\|V\\|=-\\int \\Phi_{\\eps_j}\\ast\\frac{\\Phi_{\\eps_j}\n \\ast\\delta V}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j}\\cdot\\nabla\\phi\\, d\\|V\\| \\\\\n &\n =-\\int \\Big(\\frac{\\Phi_{\\eps_j}\n \\ast\\delta V}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j}\\Big)(y)\\cdot \\int \\Phi_{\\eps_j}(x-y)\\nabla\\phi(x)\n \\,d\\|V\\|(x)\\,dy.\n \\end{split}\n \\label{ilm5}\n \\end{equation}\n Here we replace $\\nabla\\phi(x)$ by $\\nabla\\phi(y)$ and estimate the error\n \\begin{equation}\\label{ilm2}\n \\Big|\n \\int \\Big(\\frac{\\Phi_{\\eps_j}\n \\ast\\delta V}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j}\\Big)(y)\\cdot \\int \\Phi_{\\eps_j}(x-y)(\\nabla\\phi(x)\n -\\nabla\\phi(y))\\,d\\|V\\|(x)\\,dy\\Big|.\n \\end{equation}\nTo estimate \\eqref{ilm2}, since $\\eta_j\\phi\\in \\mathcal A_j$, \\eqref{classA} and \\eqref{e:Gronwall} imply\n \\[|\\nabla\\phi(x)-\\nabla\\phi(y)|=|\\nabla(\\eta_j\\phi)(x)-\\nabla(\\eta_j\\phi)(y)|\\leq j\\,\n |x-y|\\,\\eta_j(y)\\,\\phi(y)\\,\\exp(j|x-y|)\\,.\\]\n By separating the integration to $B_{\\sqrt{\\eps_j}}(y)$ and $B_1(y)\\setminus B_{\\sqrt{\\eps_j}}(y)$,\n\\begin{equation}\n\\label{ilm3}\n\\begin{split}\n\\int\\Phi_{\\eps_j}(x-y) &|\\nabla\\phi(x)-\\nabla\\phi(y)|\\,d\\|V\\|(x)\\leq j\\,\\sqrt{\\eps_j}\\,\\exp(j\\sqrt{\n\\eps_j})\\,\\eta_j(y)\\,\\phi(y)\\,(\\Phi_{\\eps_j}\\ast\\|V\\|)(y) \\\\\n&+c(n)\\,\\eps_j^{-n-1}\\,j\\,\\exp(j-(2\\eps_j)^{-1})\\,\\eta_j(y)\\,\\phi(y)\\,\\|V\\|(B_1(y)).\n\\end{split}\n\\end{equation}\nLet us denote $c_{\\eps_j}:=c(n)\\eps_j^{-n-1}j\\exp(j-(2\\eps_j)^{-1})$ and note that \nit is exponentially small (say, $\\leq \\exp(-\\eps_j^{-\\sfrac12})$ for all large $j$) due to $j\\leq \\eps_j^{-1\/6}\/2$. 
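\nIndeed, this follows from an elementary computation, which we record as a sketch: since $j-(2\\eps_j)^{-1}\\leq \\eps_j^{-\\sfrac16}/2-\\eps_j^{-1}/2\\leq -\\eps_j^{-1}/4$ for all sufficiently large $j$, we have\n\\begin{equation*}\nc_{\\eps_j}=c(n)\\,\\eps_j^{-n-1}\\,j\\,\\exp\\big(j-(2\\eps_j)^{-1}\\big)\\leq c(n)\\,\\eps_j^{-n-2}\\,\\exp\\big(-\\eps_j^{-1}/4\\big)\\leq\\exp\\big(-\\eps_j^{-\\sfrac12}\\big)\\,,\n\\end{equation*}\nbecause the polynomial factor $\\eps_j^{-n-2}$ is absorbed by the decaying exponential as $\\eps_j\\downarrow 0$.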
Similarly we have $j\\sqrt{\\eps_j}\n\\exp(j\\sqrt{\\eps_j})\\leq \\eps_j^{\\sfrac14}$, so that\n\\[\n\\int\\Phi_{\\eps_j}(x-y) |\\nabla\\phi(x)-\\nabla\\phi(y)|\\,d\\|V\\|(x)\n\\leq (\\eps_j^{\\sfrac14}(\\Phi_{\\eps_j}\\ast\\|V\\|)(y)+c_{\\eps_j}\\|V\\|(B_1(y)))\\eta_j(y)\\phi(y).\\]\nUsing this, we can estimate\n\\begin{equation}\n\\label{ilm9}\n\\begin{split}\n|\\eqref{ilm2}|&\\leq \\left(\\int\\eta_j \\, \\phi \\,\\frac{|\\Phi_{\\eps_j}\\ast\\delta V|^2}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j}\\right)^{\\frac12}\\,\\left(2\\,\\int \\eps_j^{\\frac12}\\,(\\Phi_{\\eps_j}\\ast\\|V\\|)(y)+c_{\\eps_j}^2\\,\\eps_j^{-1}\\,\\|V\\|(B_1(y))^2\\,dy\\right)^{\\frac12} \\\\\n&\\leq \\eps_j^{\\frac14}\\, \\int\\eta_j\\,\\phi\\,\\frac{|\\Phi_{\\eps_j}\\ast\\delta V|^2}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j}\n+\\int \\eps_j^{\\frac14}\\,(\\Phi_{\\eps_j}\\ast\\|V\\|)(y)+c_{\\eps_j}^2\\,\\eps_j^{-\\frac54}\\,\\|V\\|(B_1(y))^2\\,dy.\n\\end{split}\n\\end{equation}\nIn view of \\eqref{ilm4}, this shows that \\eqref{ilm2} can be absorbed as a small error term.\n Continuing from \\eqref{ilm5} with $\\nabla\\phi(y)$ replacing $\\nabla\\phi(x)$, \n \\begin{equation}\n \\begin{split}\n & -\\int \\Big(\\frac{\\Phi_{\\eps_j}\\ast\\delta V}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j}\\Big)(y)\\cdot \\int \\Phi_{\\eps_j}(x-y)\\nabla\\phi(y)\n \\,d\\|V\\|(x)\\,dy \\\\\n =& -\\int \\Big(\\frac{\\Phi_{\\eps_j}\n \\ast\\delta V}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j}\\Big)(y)\\cdot \\nabla\\phi(y)\\, ( \\Phi_{\\eps_j}\\ast\\|V\\|)(y)\\,dy \\\\\n =& -\\int(\\Phi_{\\eps_j}\\ast\\delta V)\\cdot\\nabla\\phi\\,dy+\\eps_j\n \\int \\Big(\\frac{\\Phi_{\\eps_j}\n \\ast\\delta V}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j}\\Big)(y)\\cdot \\nabla\\phi(y)\\,dy\\,.\n \\end{split}\\label{ilm6}\\end{equation}\n The last term of \\eqref{ilm6} may be estimated as\n \\begin{equation}\\label{ilm7}\n \\begin{split}\n \\eps_j \\Big|\\int \\Big(\\frac{\\Phi_{\\eps_j}\n \\ast\\delta V}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j}\\Big)(y)\\cdot& 
\\nabla\\phi(y)\\,dy\\Big|\n \\leq j\\, \\eps_j \\int_{(U)_{2}} \\eta_j \\, \\phi \\,\\frac{|\\Phi_{\\eps_j}\n \\ast\\delta V|}{\\Phi_{\\eps_j}\\ast\\|V\\|+\\eps_j} \\\\\n &\\leq j\\, \\eps_j^{\\frac12} \\, \\Big(\\int \\eta_j \\, \\phi\\,\\frac{|\\Phi_{\\eps_j}\\ast\\delta V|^2}{\\Phi_{\\eps_j}\\ast\\|V\\|+\n \\eps_j}\\Big)^{\\frac12}\\Big(\\int_{(U)_2}\\eta_j\\,\\phi\\Big)^{\\frac12} \\\\\n &\\leq \\eps_j^{\\frac14}\\int\\eta_j \\,\\phi\\,\\frac{|\\Phi_{\\eps_j}\\ast\\delta V|^2}{\\Phi_{\\eps_j}\\ast\\|V\\|+\n \\eps_j}+j^2\\,\\eps_j^{\\frac34}\\,\\int_{(U)_2}\\eta_j\\phi.\n \\end{split}\n\\end{equation}\nHere, we used the fact that the integrand is $0$ far away from $U$, for\nexample, outside of $(U)_2$. \nThe last term of \\eqref{ilm7} can be absorbed as a small error since $j\\leq \\eps_j^{-1\/6}\/2$ \nand $\\int_{(U)_2}\\eta_j\\,\\phi$ is bounded by a constant.\nWe can continue as \n \\begin{equation*}\n \\begin{split}\n-\\int (\\Phi_{\\eps_j}\\ast\\delta V)\\cdot\\nabla\\phi\\,dy &=-\\iint S(\\nabla\\Phi_{\\eps_j}(x-y))\\, dV(x,S)\\nabla\\phi(y)\\,dy \\\\ \n &\n =-\\int S\\cdot\\Big(\\int \\nabla\\Phi_{\\eps_j}(x-y)\\otimes\\nabla\\phi(y)\\,dy\\Big)\\,dV(x,S)\n \\\\ &\n =-\\int S\\cdot \\int\\Phi_{\\eps_j}(x-y)\\,\\nabla^2\\phi(y)\\, dy\\, dV(x,S).\n \\end{split}\n \\end{equation*}\n We replace $\\nabla^2\\phi(y)$ by $\\nabla^2\\phi(x)$, with the resulting error being estimated, for instance, by $\\leq M \\eps_j^{\\sfrac12}$ using standard methods as above. 
Then, we have
 \begin{equation}
 \label{ilm8}
 -\int(\Phi_{\eps_j}\ast\delta V)\cdot\nabla\phi\,dy\leq 
 -\int S\cdot \nabla^2\phi(x)\, dV(x,S) +M\eps_j^{\sfrac12}.
 \end{equation}
 Thus, combining \eqref{ilm1}-\eqref{ilm8} and restoring the original notation,
 we obtain
\begin{equation}
 \label{ilm10}
\frac{\|\partial\E_{j,k}\|(\phi) - \|\partial\E_{j,k-1}\|(\phi)}{\Delta t_j}
\leq 2\eps_j^{\sfrac18}+\int\frac{|S(\nabla\phi)|^2}{2\phi}-S\cdot\nabla^2\phi\,dV
\end{equation}
for all sufficiently large $j$. By \eqref{ilm11}, 
we have 
 \begin{equation} \label{ilm12}
 \begin{split}
 \frac{|S(\nabla\phi)|^2}{2\phi}-S\cdot\nabla^2\phi &= \left( \frac{\beta^2}{2} \, \sum_{i=1}^{n+1} S_{i,n+1}^2 - \beta\, (\beta - 1) \, S_{n+1,n+1} \right) \, d_A^{\beta - 2}\\ &= \left(\frac{\beta^2}{2}
- \beta\,(\beta-1)\right)\, \abs{S_{n+1,n+1}}\, d_A^{\beta-2}\,,
\end{split}
 \end{equation}
where in the last identity we have used that $S$ is the matrix representing an orthogonal projection operator, so that $S$ is symmetric and $S^2 = S$, whence
\[
S_{n+1,n+1} = (S^2)_{n+1,n+1} = \sum_{i=1}^{n+1} S_{i,n+1}^2 \geq 0\,.
\] 
 
In particular, since $\frac{\beta^2}{2}-\beta\,(\beta-1)=8-12=-4<0$ when $\beta=4$, the quantity in \eqref{ilm12} is non-positive for this choice of $\beta$. This shows that the right-hand side of \eqref{ilm10} is at most $2\eps_j^{\sfrac18}$. By summing
over $k=1,\ldots,j^{\sfrac12}/(\Delta t_j)$ and using that $\|\partial\E_{j,0}\|(\phi)
=s\,\Ha^n(\Gamma_0)$, we obtain
\begin{equation}
\label{ilm13}
\sup_{t\in[0,j^{\sfrac12}]}\|\partial\E_j(t)\|(\phi)\leq 2\eps_j^{\sfrac18} j^{\sfrac12}
+s\,\Ha^n(\Gamma_0).
\end{equation}
Fix $\rho>0$ so that 
$C\subset\{x_{n+1}>\rho\}$. Then we have $\phi\geq \rho^{\beta}$ on $C$. 
With this, we have $\|\partial\E_j(t)\|(C)
\leq \rho^{-\beta} \|\partial\E_j(t)\|(\phi)$.
We use this in \eqref{ilm13}, 
and we first let $j\rightarrow\infty$ and then $s\rightarrow 0$ in order to obtain \eqref{vmn1}.
\end{proof}

\begin{proposition}\label{vvmp}
For all $t\geq 0$, we have ${\rm spt}\,\|V_t\|\subset{\rm conv}\,(\Gamma_0\cup\partial\Gamma_0)$. 
\end{proposition}
\begin{proof}
Suppose that $A\subset\R^{n+1}$ is a hyperplane such that, using the notation in the 
statement of Proposition \ref{vmn}, $\Gamma_0\cup\partial\Gamma_0\subset A^-$. If 
$d_A$ is $\nu_{U}$-non-decreasing in $A^+$, then \eqref{vmn1} immediately shows that
$\|V_t\|(A^+)=0$ for all $t\geq 0$. Thus, suppose that $d_A$ does not satisfy this property.
Still, due to Proposition \ref{vmn}, for each $x\in\partial U\setminus\partial\Gamma_0$, 
there exists a neighborhood $B_r(x)$ such that $\|V_t\|(B_r(x)\cap U)=0$ for all $t\geq 0$. 
In particular, there exists some $r_0>0$ such that 
\begin{equation}
\|V_t\|((\partial U)_{r_0}\cap A^+)=0
\label{vvm1}
\end{equation}
for all $t\geq 0$. Let $\psi\in C^{\infty}_c(U;\R^+)$ be such that $\psi=1$ on $U\setminus 
(\partial U)_{r_0}$ and $\psi=0$ on $(\partial U)_{\frac{r_0}{2}}$. We next use $\phi
=\psi\, d_A^{4}$ in \eqref{e:Brakke} with $t_1=0$ and an arbitrary $t_2=t>0$ to obtain
\begin{equation}
\begin{split}
\label{vvm2}
\|V_s\|(\phi)\Big|_{s=0}^{t}&\leq \int_0^t \int_{U}(\nabla\phi-\phi\, h(\cdot,V_s))\cdot h(\cdot,V_s)\,d\|V_s\|\,ds \\
& \leq -\int_0^t\int_{U} S\cdot\nabla^2\phi\,dV_s(\cdot,S)\,ds.
\end{split}
\end{equation}
By \eqref{vvm1}, $\phi=d_A^4$ on the support of $\|V_s\|$. Since $S\cdot \nabla^2 d_A^4\geq 0$
for any $S\in {\bf G}(n+1,n)$ (see \eqref{ilm12}), the right-hand side of \eqref{vvm2}
is $\leq 0$. Since $\|V_0\|(\phi)=0$, we have $\|V_t\|(A^+)=0$ for all $t>0$. This proves
the claim.
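For the reader's convenience, we record the elementary computation behind the sign condition $S\cdot\nabla^2 d_A^4\geq 0$ used above; here we assume, as in \eqref{ilm11}, coordinates in which $A=\{x_{n+1}=0\}$ and $d_A(x)=x_{n+1}$ on $A^+$. Then $\nabla^2 d_A^4=12\, d_A^2\, e_{n+1}\otimes e_{n+1}$, so that, for every $S\in{\bf G}(n+1,n)$,
\[
S\cdot\nabla^2 d_A^4 = 12\, d_A^2\, S_{n+1,n+1} = 12\, d_A^2 \sum_{i=1}^{n+1} S_{i,n+1}^2 \geq 0\,,
\]
where the last equality uses, as in \eqref{ilm12}, that $S$ is the symmetric matrix of an orthogonal projection with $S^2=S$.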
\n\\end{proof}\n\nIn the following, we list results from \\cite[Section 10]{KimTone}. The results are local in nature, thus\neven if we are concerned with a Brakke flow in $U$ instead of $\\R^{n+1}$, the \nproofs are the same. We recall the following (cf. Theorem \\ref{thm:main2}(11)):\n\\begin{definition} \\label{spacetime_measure}\nDefine a Radon measure $\\mu$ on $U \\times \\R^+$ by setting $d\\mu := d\\|V_t\\|\\,dt$, namely\n\\begin{equation} \\label{e:spacetime_measure}\n\\int_{U\\times\\R^+} \\phi(x,t) \\, d\\mu(x,t) := \\int_0^\\infty \\left(\\int_U \\phi(x,t) \\, d\\|V_t\\|(x)\\right)\\,dt \\qquad \\mbox{for every $\\phi \\in C_c(U \\times \\R^+)$}\\,. \n\\end{equation}\n\\end{definition}\n\n\\begin{lemma} \\label{sptfini}\nWe have the following properties for $\\mu$ and $\\{V_t\\}_{t\\in\\R^+}$.\n\\begin{enumerate}\n\\item ${\\rm spt}\\,\\|V_t\\|\\subset\\{x\\in U\\,:\\,(x,t)\\in{\\rm spt}\\,\\mu\\}$ for all $t>0$.\n\\item For each $\\tilde U\\subset\\joinrel\\subset U$ and $t>0$, we have $\\Ha^n(\\{x\\in\\tilde U\\,:\\,\n(x,t)\\in {\\rm spt}\\,\\mu\\})<\\infty$. \n\\end{enumerate}\n\\end{lemma} \nThe next Lemma (see \\cite[Lemma 10.10 and 10.11]{KimTone}) is used to prove the continuity of the labeling of partitions. 
\n\\begin{lemma} \\label{contidom}\nLet $\\{\\mathcal E_{j_{\\ell}}(t)\\}_{\\ell=1}^{\\infty}$ be the sequence obtained in Proposition \\ref{p:integral varifold limit}, and let $\\{E_{j_\\ell,i}(t)\\}_{i=1}^N$ denote the open partitions\nfor each $j_\\ell$ and $t\\in\\R^+$, i.e., $\\mathcal E_{j_\\ell}(t)=\\{E_{j_\\ell,i}(t)\\}_{i=1}^N$.\n\\begin{enumerate}\n\\item\nFor fixed $i\\in \\{1,\\ldots,N\\}$, $B_{2r}(x)\\subset\\joinrel\\subset U$, $t>0$ with $t-r^2>0$, suppose that\n\\[ \\lim_{\\ell\\rightarrow\\infty} \\mathcal L^{n+1}(B_{2r}(x)\\setminus E_{j_\\ell,i}(t))=0\n\\hspace{.5cm}\\mbox{and}\\hspace{.5cm}\\mu(B_{2r}(x)\\times [t-r^2,t+r^2])=0.\\]\nThen for all $t'\\in (t-r^2,t+r^2]$, we have \n\\[ \\lim_{\\ell\\rightarrow\\infty} \\mathcal L^{n+1}(B_r(x)\\setminus E_{j_\\ell,i}(t'))=0.\\]\n\\item For fixed $i\\in\\{1,\\ldots,N\\}$, $B_{2r}(x)\\subset\\joinrel\\subset U$ and $r>0$, suppose that\n\\[B_{2r}(x)\\subset E_{j_\\ell,i}(0)\\hspace{.3cm}\\mbox{for all $\\ell\\in\\mathbb N$}\\hspace{.5cm} \n\\mbox{and}\n\\hspace{.5cm} \\mu(B_{2r}(x)\\times[0,r^2])=0.\\]\nThen for all $t'\\in (0,r^2]$, we have\n\\[ \\lim_{\\ell\\rightarrow\\infty} \\mathcal L^{n+1}(B_r(x)\\setminus E_{j_\\ell,i}(t'))=0.\\]\n\\end{enumerate}\n\\end{lemma} \nThe following is from \\cite[3.7]{Brakke}. \n\\begin{lemma}\\label{lemma1012}\nSuppose that $\\|V_t\\|(U_r(x)) = 0$ for some $t \\in \\R^+$ and $U_r(x)\\subset\\joinrel\\subset U$. Then, for every $t' \\in \\left[t, t+\\frac{r^2}{2n} \\right]$ it holds $\\|V_{t'}\\|(U_{\\sqrt{r^2 - 2n\\,(t'-t)}}(x)) = 0$.\n\\end{lemma}\n\n\\begin{proof}[{\\bf Proof of Theorem \\ref{thm:main2}}]\n\n\nLet $\\{ \\E_{j_\\ell}(t) \\}_{\\ell=1}^{\\infty}$ be a sequence as in Lemma \\ref{contidom}, with $\\E_{j_\\ell}(t) = \\{ E_{j_\\ell,i}(t) \\}_{i=1}^N$ for every $\\ell \\in \\mathbb{N}$. Since $E_{j_\\ell,i}(t) \\subset \\left( U \\right)_1$, for each $t$ and $i$ the volumes $\\mathcal{L}^{n+1}(E_{j_\\ell,i}(t))$ are uniformly bounded in $\\ell$. 
Furthermore, by the mass estimate in \eqref{e:precompactness} we also have that $\| \nabla \chi_{E_{j_\ell,i}(t)} \| (\R^{n+1})$ are uniformly bounded. Hence, by the compactness theorem for sets of finite perimeter and a diagonal argument over the countable set of times $\Q \cap \R^+$, we can select a (not relabeled) subsequence with the property that, for each fixed $i \in \{1,\ldots,N\}$, 
\begin{equation} \label{e:convergence}
\chi_{E_{j_\ell,i}(t)} \to \chi_{E_i(t)} \quad \mbox{in $L^1_{loc}(\R^{n+1})$ for every $t \in \Q \cap \R^+$}\,,
\end{equation} 
where $E_i(t)$ is a set of locally finite perimeter in $\R^{n+1}$. Moreover, using that $E_{j_\ell,i}(t) \subset \left( U \right)_{1/(4\,j_\ell^{\sfrac14})}$ (see Proposition \ref{p:induction} and \eqref{epsilon conditions}) we see that $\mathcal{L}^{n+1}(E_i(t) \setminus U) = 0$. Since sets of finite perimeter are defined up to measure zero sets, we can then assume without loss of generality that $E_i(t) \subset U$. Hence, since $\Ha^n(\partial U) < \infty$, $E_i(t)$ is in fact a set of finite perimeter in $\R^{n+1}$. 
Since also $\\mu (B_{2\\,r}(x) \\times \\left[0,r^2\\right]) = 0$, we can apply Lemma \\ref{contidom}(2) and conclude that \n\\begin{equation} \\label{e:conclusion t=0}\n\\lim_{\\ell \\to \\infty}\\mathcal{L}^{n+1}(B_r(x) \\setminus E_{j_\\ell,i(x,0)}(t')) = 0 \\qquad \\mbox{for all $t' \\in \\left(0, r^2 \\right]$}\\,.\n\\end{equation}\nSimilarly, if $t > 0$, since $\\mu (B_{2\\,r}(x) \\times \\left[t-r^2,t+r^2 \\right]) = 0$, we can apply Lemma \\ref{contidom}(1) to conclude that there is a unique $i(x,t)\\in \\{1,\\ldots,N\\}$ such that\n\\begin{equation} \\label{e:conclusion t>0}\n\\lim_{\\ell \\to \\infty}\\mathcal{L}^{n+1}(B_r(x) \\setminus E_{j_\\ell,i(x,t)}(t')) = 0 \\qquad \\mbox{for all $t' \\in \\left(t-r^2, t+ r^2 \\right]$}\\,.\n\\end{equation}\n\n\\smallskip\n\nNow, observe that if $S$ is any connected component of the complement of $\\spt\\,\\mu \\cup (\\Gamma_0 \\times \\{0\\})$ in $U \\times \\R^+$, then by \\eqref{e:conclusion t=0} and \\eqref{e:conclusion t>0}, and since $S$ is connected, for any two points $(x,t)$ and $(y,s)$ in $S$ it has to be $i(x,t) = i(y,s)$. For every $i \\in \\{1,\\ldots,N\\}$, we can then let $S(i)$ denote the union of all connected components $S$ such that $i(x,t) = i$ for every $(x,t) \\in S$. It is clear that $S(i)$ are open sets, and that $E_{0,i} = \\left\\lbrace x \\in U \\, \\colon \\, (x,0) \\in S(i) \\right\\rbrace$ (notice that if $x \\in E_{0,i}$ then $(x,0) \\notin \\spt\\,\\mu$ as a consequence of Lemma \\ref{lemma1012}), so that each $S(i)$ is not empty. Furthermore, we have that $\\bigcup_{i=1}^N S(i) = (U \\times \\R^+) \\setminus \\left( \\spt\\,\\mu \\cup (\\Gamma_0 \\times \\{0\\}) \\right)$. 
For every $t \\in \\R^+$, we can thus define\n\\begin{equation} \\label{def partition final}\nE_i(t) := \\left\\lbrace x \\in U \\, \\colon \\, (x,t) \\in S(i) \\right\\rbrace\\,,\\,\\,\n\\Gamma(t):=U\\setminus \\cup_{i=1}^N E_i(t).\n\\end{equation}\nBy examining the definition, one obtains $\\Gamma(t)=\\{x\\in U\\,:\\, (x,t)\\in {\\rm spt}\\,\\mu\\}$ for all $t>0$.\nCombined with Lemma \\ref{sptfini}(1), we have (11). By Lemma \\ref{sptfini}(2), we have (3), and this \nalso proves that $\\Gamma(t)$ has empty interior, which shows (4). The claims (1) and (2) hold true by construction. (5) is a consequence of Proposition \\ref{vvmp} and the definition of $\\mu$ being\nthe product measure. (6) is similar: if $x \\in U \\setminus {\\rm conv}(\\Gamma_0\\cup\\partial\\Gamma_0)$ then the half-line $t \\in \\R^+ \\mapsto \\gamma_x(t) := \\left( x,t \\right) \\in U \\times \\R^+$ must be contained in the same connected component of $(U \\times \\R^+) \\setminus ( \\spt \\,\\mu \\cup (\\Gamma_0 \\times \\{0\\}) )$, for otherwise there would be $t > 0$ such that $(x,t) \\in \\spt\\,\\mu$, thus contradicting (5). \nFor (7), by the strict convexity of $U$ and (5), we have $\\partial\\Gamma(t)\\subset \\partial\\Gamma_0$ for all\n$t>0$. Later in Proposition \\ref{p:boundary data}, we prove $({\\rm clos}\\,({\\rm spt}\\,\\|V_t\\|))\\setminus\nU=\\partial\\Gamma_0$ and $\\partial\\Gamma_0\\subset\\partial\\Gamma(t)$ follows from this and (11). Coming to (8), we use \\eqref{e:conclusion t>0} together with the conclusions in Proposition \\ref{p:induction}(1) to see that $\\chi_{E_{j_\\ell,i}(t)} \\to \\chi_{E_i(t)}$ in $L^1(\\R^{n+1})$ as $\\ell \\uparrow\\infty$, for every $t \\in \\R^+$. 
In particular, the lower semi-continuity of perimeter allows us to deduce that for any $\\phi\\in C_c(U;\\R^+)$ \\[\\| \\nabla \\chi_{E_i(t)} \\| (\\phi) \\leq \\liminf_{\\ell \\to \\infty} \\| \\nabla \\chi_{E_{j_\\ell,i}(t)} \\| (\\phi) \\leq \\liminf_{\\ell\\to\\infty} \\| \\partial \\E_{j_\\ell}(t) \\| (\\phi) = \\|V_t\\|(\\phi)\\,, \\] \nthus proving $\\|\\nabla\\chi_{E_i(t)}\\|\\leq \\|V_t\\|$ of (8). Using the cluster structure of each $\\partial \\E_{j_\\ell}(t)$ (see e.g. \\cite[Proposition 29.4]{Maggi_book}), we have in fact that\n\\[\n\\frac12 \\sum_{i=1}^N \\| \\nabla \\chi_{E_{j_\\ell,i}(t)} \\| (\\phi) =\\Ha^n\\mres_{(\\cup_{i=1}^N \\partial^*\nE_{j_\\ell,i}(t))}(\\phi)\\leq \\| \\partial \\E_{j_\\ell}(t) \\| (\\phi) \\qquad \\mbox{for every $\\phi$ as above}\\,,\n\\]\nwhich shows the other statement $\\sum_{i=1}^N \\| \\nabla \\chi_{E_i(t)} \\| \\leq 2\\, \\|V_t\\|$ in (8). \nSince the claim of (9) is interior in nature, the proof is identical to the case without boundary\nas in \\cite[Theorem 3.5(6)]{KimTone}. For the proof of (10), for $\\bar t\\geq 0$, \nwe prove that $\\chi_{E_i(t)}\\to \\chi_{E_i(\\bar t)}$ in $L^1(U)$ as $t\\to \\bar t$ for\neach $i=1,\\ldots,N$. Since $\\|\\nabla\\chi_{E_i(t)}\\|(U)\\leq \\|V_t\\|(U)\\leq \\Ha^n(\\Gamma_0)$, \nfor any $t_k\\to \\bar t$, there exists a subsequence (denoted by the same index) and \n$\\tilde E_i\\subset U$ such that $\\chi_{E_i(t_k)}\\to \\chi_{\\tilde E_i}$ in $L^1(U)$ and \n$\\mathcal L^{n+1}$ a.e.~by the\ncompactness theorem for sets of finite perimeter. We also have $\\mathcal L^{n+1}(\\tilde E_i\\cap \\tilde E_j)=0$\nfor $i\\neq j$ and $\\mathcal L^{n+1}(U\\setminus\\cup_{i=1}^N \\tilde E_i)=0$. For a contradiction,\nassume that $\\mathcal L^{n+1}(E_i(\\bar t)\\setminus \\tilde E_i)>0$ for some $i$. Then, there must be \n$U_r(x)\\subset\\joinrel\\subset E_i(\\bar t)$ such that $\\mathcal L^{n+1}(U_r(x)\\setminus \\tilde E_i)>0$. 
We then use
Theorem \ref{thm:main2}(9) with $g(t)=\mathcal L^{n+1}(E_i(t)\cap U_r(x))$, which gives
$\lim_{t\to \bar t}g(t)=g(\bar t)=\mathcal L^{n+1}(E_i(\bar t)\cap U_r(x))=\mathcal L^{n+1}(U_r(x))$. On the 
other hand, $\chi_{E_i(t)}\to \chi_{\tilde E_i}$ in $L^1(U)$ implies $\lim_{t\to\bar t}g(t)=
\mathcal L^{n+1}(\tilde E_i\cap U_r(x))<\mathcal L^{n+1}(U_r(x))$ because of $\mathcal L^{n+1}(U_r(x)\setminus \tilde E_i)>0$. This is a contradiction. Thus, we have $\mathcal L^{n+1}(E_i(\bar t)\setminus
\tilde E_i)=0$ for all $i=1,\ldots, N$. Since $\{\tilde E_1,\ldots,\tilde E_N\}$ is a partition of $U$,
we have $\mathcal L^{n+1}(E_i(\bar t)\triangle \tilde E_i)=0$ for all $i$. This proves (10),
and finishes the proof of (1)-(11) except for (7), which 
is independent and is proved once we prove Proposition \ref{p:boundary data}. 
\end{proof}



\begin{proposition} \label{p:boundary data}
For all $t \geq 0$, it holds $({\rm clos}\,(\spt\|V_t\|) )\setminus U = \partial\Gamma_0$. 
\end{proposition}

\begin{proof}
Let $x \in ({\rm clos}\,(\spt\|V_t\|) )\setminus U$, and let $\{x_k\}_{k=1}^\infty$ be a sequence with $x_k \in \spt\,\|V_t\|$ such that $x_k \to x$ as $k \uparrow \infty$. If $x \notin \partial \Gamma_0$, then by Proposition \ref{vmn} there is $r > 0$ such that $\|V_t\| (B_r(x) \cap U) = 0$. For all suitably large $k$ so that $\abs{x-x_k} < r$ we then have $\|V_t\|(B_{r-\abs{x-x_k}}(x_k) \cap U) = 0$, which contradicts the fact that $x_k \in \spt\|V_t\|$.

\smallskip

Conversely, let $x \in \partial \Gamma_0$, and suppose for a contradiction that $x \notin {\rm clos}\,(\spt\|V_t\|)$, so that there is a radius $r > 0$ with the property that $B_{r}(x) \cap \spt\|V_t\| = \emptyset$. 
Then, Theorem \ref{thm:main2}(8) implies that $\|\nabla \chi_{E_i(t)}\|(B_{r}(x) \cap U) = 0$ for every $i\in \{1,\ldots,N\}$. 
Since $B_{r}(x) \\cap U$ is connected by the convexity of $U$, every $\\chi_{E_i(t)}$ is either constantly equal to $0$ or $1$ on $B_{r}(x) \\cap U$, namely\n\\begin{equation} \\label{contradiction}\nB_{r}(x) \\cap U \\subset E_\\ell(t) \\qquad \\mbox{for some $\\ell \\in \\{1,\\ldots,N\\}$}\\,.\n\\end{equation}\n\nIf $t=0$, since $E_i(0) = E_{0,i}$ for every $i =1,\\ldots,N$, the conclusion in \\eqref{contradiction} is evidently incompatible with $(A4)$, thus providing the desired contradiction. We can then assume $t > 0$. By $(A4)$, there are at least two indices $i \\neq i' \\in \\{1,\\ldots,N\\}$ and sequences of balls $\\{B_{r_j}(x_j)\\}_{j=1}^\\infty$, $\\{B_{r_j'}(x_j')\\}_{j=1}^\\infty$ such that $x_j, x_j' \\in \\partial U$, $\\lim_{j\\to\\infty} x_j = \\lim_{j\\to\\infty} x_j' = x$ and $B_{r_j}(x_j) \\cap U \\subset E_{0,i}$ whereas $B_{r_j'} (x_j') \\cap U \\subset E_{0,i'}$. Let $z$ denote any of the points $x_j$ or $x_j'$, and observe that the above condition guarantees that $z \\in \\partial U \\setminus \\partial \\Gamma_0$. In turn, by arguing as in Remark \\ref{rmk:hyperplane} we deduce that there is a neighborhood $B_{\\rho}(z) \\cap U$ such that $\\|V_t\\|(B_\\rho(z) \\cap U) = 0$ for all $t \\geq 0$, and thus also $\\| \\nabla \\chi_{E_l(t)} \\| (B_\\rho(z) \\cap U) = 0$ for every $t \\geq 0$ and for every $l \\in \\{1,\\ldots,N\\}$. Since $B_\\rho(z) \\cap U$ is connected this implies that $B_\\rho(z) \\cap U \\subset E_l(t)$ for some $l$. Applying this argument with $z=x_j$ and $z=x_j'$ we then find radii $\\rho_j$ and $\\rho_j'$ such that, necessarily, $B_{\\rho_j}(x_j) \\cap U \\subset E_i(t)$ whereas $B_{\\rho_j'}(x_j') \\cap U \\subset E_{i'}(t)$ for all $t \\geq 0$. Since $x_j \\to x$ and $x_j' \\to x$ this conclusion is again incompatible with \\eqref{contradiction}, thus completing the proof. 
\n\\end{proof}\n\\begin{proposition}\n\\label{inidata}\nWe have for each $\\phi\\in C_c(U;\\R^+)$\n\\[\\Ha^n\\mres_{(\\cup_{i=1}^N \\partial^*E_{0,i})}(\\phi)\\leq \\liminf_{t\\downarrow 0}\\|V_t\\|(\\phi)=\\limsup_{t\\downarrow 0}\n\\|V_t\\|(\\phi)\\leq \\Ha^n\\mres_{\\Gamma_0}(\\phi).\\]\nIn particular, if $\\Ha^n(\\Gamma_0\\setminus \\cup_{i=1}^N\\partial^*E_{0,i})=0$, then we have\n\\[\\lim_{t\\downarrow 0}\\|V_t\\|=\\Ha^n\\mres_{\\Gamma_0} \\qquad \\mbox{as Radon measures in $U$}\\,.\\]\n\\end{proposition}\n\\begin{proof}\n\nBy \\cite[Proposition 29.4]{Maggi_book}, we have for each $\\phi\\in C_c(U;\\R^+)$\n\\begin{equation*}\n\\begin{split}\n&2\\Ha^n\\mres_{(\\cup_{i=1}^N \\partial^*E_{0,i})}(\\phi)=\\sum_{i=1}^N \\|\\nabla\\chi_{E_{0,i}}\\|(\\phi)\n\\leq\\sum_{i=1}^N \\liminf_{t\\downarrow 0} \\|\\nabla\\chi_{E_i(t)}\\|(\\phi) \\\\\n&\\leq \\liminf_{t\\downarrow 0}\\sum_{i=1}^N \\|\\nabla\\chi_{E_i(t)}\\|(\\phi) \\leq 2\\liminf_{t\\downarrow 0}\n\\|V_t\\|(\\phi)\n\\end{split}\n\\end{equation*}\nwhere we also used Theorem \\ref{thm:main2}(8) and (10). This proves the first inequality. \nThe second equality and the third inequality follow from \\eqref{muconti}, $\\mu_t=\\|V_t\\|$ and \n$\\|V_0\\|=\\Ha^n\\mres_{\\Gamma_0}$. \n\\end{proof}\nThe proof of Theorem \\ref{thm:main} is now complete: $\\{V_t\\}_{t\\geq 0}$ is a Brakke flow with fixed\nboundary $\\partial\\Gamma_0$ due to Proposition \\ref{p:integral varifold limit}(1), Theorem \\ref{t:Brakke inequality}\nand Proposition \\ref{p:boundary data}. Proposition \\ref{inidata} proves the claim on the continuity\nof measure at $t=0$. \n\\section{Applications to the problem of Plateau}\n\\label{propla}\n\nAs anticipated in the introduction, an interesting byproduct of our global existence result for Brakke flow is the existence of a stationary integral varifold $V$ in $U$ satisfying the topological boundary constraint ${\\rm clos}(\\spt \\|V\\|) \\setminus U = \\partial \\Gamma_0$. 
This is the content of Corollary \\ref{main:cor}, which we prove next.\n\n\\begin{proof}[Proof of Corollary \\ref{main:cor}]\nBy the estimate in \\eqref{e:mean curvature bound}, the function\n\\[\nH(t) := \\int_U \\abs{h(x,V_t)}^2 \\, d\\|V_t\\|(x)\n\\]\nis in $L^1(\\left(0,\\infty\\right))$. Hence, there exists a sequence $\\{t_k\\}_{k=1}^\\infty$ such that\n\\begin{equation} \\label{vanishing sequence}\n\\lim_{k \\to \\infty} t_k = \\infty\\,, \\qquad \\lim_{k\\to \\infty} H(t_k) = 0\\,.\n\\end{equation}\nLet $V_k := V_{t_k}$. Again by \\eqref{e:mean curvature bound}, we have that\n\\begin{equation} \\label{mass bound}\n\\sup_{k} \\|V_k\\|(U) \\leq \\Ha^n(\\Gamma_0)\\,.\n\\end{equation}\nFurthermore, combining \\eqref{def:generalized mean curvature} with \\eqref{mass bound} yields, via the Cauchy-Schwarz inequality, that\n\\begin{equation}\n\\abs{\\delta V_k (g)} \\leq \\|g\\|_{C^0} \\, \\left( \\Ha^n(\\Gamma_0) \\right)^{\\frac12} \\, \\left( H(t_k) \\right)^{\\frac12} \\qquad \\mbox{for every $g \\in C_c(U;\\R^{n+1})$}\\,,\n\\end{equation} \nso that\n\\begin{equation} \\label{first variation limit}\n\\lim_{k \\to \\infty} \\| \\delta V_k \\| (U) = 0\\,.\n\\end{equation}\nHence, we can apply Allard's compactness theorem for integral varifolds, see \\cite[Theorem 42.7]{Simon}, in order to conclude the existence of a stationary integral varifold $V \\in \\IV_n(U)$ such that $V_k \\to V$ in the sense of varifolds.\n\n\\smallskip\n\nNext, we prove the existence of the family $\\{E_i\\}_{i=1}^N$. Fix $i \\in \\{1,\\ldots,N\\}$, and consider the sequence $\\{E_i^k\\}_{k=1}^\\infty$, where $E_i^k := E_i(t_k)$. By Theorem \\ref{thm:main2}(8) and \\eqref{e:mean curvature bound} we have, along a (not relabeled) subsequence, the convergence\n\\begin{equation} \\label{long_time_limit_sets}\n\\chi_{E_i^k} \\to \\chi_{E_i} \\qquad \\mbox{in $L^1(U)$ and pointwise $\\mathcal{L}^{n+1}$-a.e. 
as $k \\to \\infty$}\\,,\n\\end{equation}\nwhere $E_i \\subset U$ are sets of finite perimeter. Since, by Theorem \\ref{thm:main2}(3), $\\sum_{i=1}^N \\chi_{E_i^k} = \\chi_U$ as $L^1$ functions, we conclude that \n\\[\n\\mathcal{L}^{n+1}\\left(U \\setminus \\bigcup_{i=1}^N E_i \\right) =0\\,, \\qquad \\mbox{and} \\qquad \\mathcal{L}^{n+1}(E_i \\cap E_j) = 0 \\quad \\mbox{if $i \\neq j$}\\,,\n\\]\nso that $\\bigcup_{i=1}^N E_i$ is an $\\mathcal{L}^{n+1}$-partition of $U$. The validity of Theorem \\ref{thm:main2}(8) implies conclusion (1), namely that \n\\begin{equation} \\label{measure_inclusion}\n\\| \\nabla \\chi_{E_i} \\| \\leq \\|V\\| \\quad \\mbox{for every $i \\in \\{1,\\ldots, N\\}$} \\qquad \\mbox{and} \\qquad \\sum_{i=1}^N \\| \\nabla \\chi_{E_i} \\| \\leq 2\\, \\|V\\|\n\\end{equation}\nin the sense of Radon measures in $U$. As a consequence of \\eqref{measure_inclusion}, we have that $\\spt\\, \\| \\nabla \\chi_{E_i} \\| \\subset \\spt\\, \\|V\\|$ for every $i=1,\\ldots,N$. Since $V$ is a stationary integral varifold, the monotonicity formula implies that $\\spt\\|V\\|$ is $\\Ha^n$-rectifiable, and $V=\\var(\\spt\\,\\|V\\|,\\theta)$ for some upper semi-continuous $\\theta\\,\\colon\\,U \\to \\mathbb R^+$ with $\\theta(x) \\ge 1$ at each $x \\in \\spt\\|V\\|$. In particular, setting $\\Gamma := \\spt\\,\\|V\\|$, we have\n\\begin{equation}\\label{gamma1}\n\\Ha^n(\\Gamma) = \\| \\var(\\Gamma,1) \\| (U) \\leq \\| V \\| (U) \\leq \\Ha^n(\\Gamma_0)\\,,\n\\end{equation}\nwhere the last inequality is a consequence of \\eqref{e:mean curvature bound} and the lower semicontinuity of the weight with respect to varifold convergence. \n\n\\smallskip\n\nNext, we observe that, since $\\spt\\,\\|\\nabla\n\\chi_{E_i}\\|\\subset\\Gamma$, on each connected component of $U\\setminus \\Gamma$ each $\\chi_{E_i}$ is almost everywhere constant. 
Denoting $\\{O_h\\}_{h \\in \\mathbb{N}}$ the connected components of the open set $U \\setminus \\Gamma$, we may then modify each set $E_i$ ($i \\in \\{1,\\ldots,N\\}$) by setting\n\\[\nE_{i}^* := \\bigcup_{ \\{ O_h \\, \\colon \\, \\chi_{E_i} = 1 \\quad \\mbox{a.e. on }O_h \\}} O_h.\n\\]\nBy definition, each set $E_i^*$ is open; furthermore, the sets $E_i^*$ are pairwise disjoint, and $\\bigcup_{i=1}^N E_i^* = U \\setminus \\Gamma$. Since for each $i$ we have $\\mathcal{L}^{n+1}(E_i \\Delta E_i^*) = 0$, and since sets of finite perimeter are defined up to $\\mathcal{L}^{n+1}$-negligible sets, we can thus replace the family $\\{E_i\\}$ with $\\{E_i^*\\}$, and drop the superscript $\\,^*$ to ease the notation. \n\n\\smallskip\n\nProperty (2) is a consequence of Theorem \\ref{thm:main2}(6), since the convergence $\\chi_{E_i^k} \\to \\chi_{E_i}$ now holds pointwise on $U\\setminus{\\rm conv}(\\Gamma_0\\cup\\partial\\Gamma_0)$. \nWe have not excluded the possibility that $\\Ha^n(\\Gamma)=0$. But this should imply $\\|V\\|=0$ \nby \\eqref{gamma1}, and $\\|\\nabla\\chi_{E_i}\\|=0$ for every $i\\in\\{1,\\ldots,N\\}$ by \\eqref{measure_inclusion},\nwhich is a contradiction to (2). Thus we have necessarily \n$\\Ha^n(\\Gamma)>0$ and this completes the proof of (3).\nIn order to conclude the proof, we are just left with the boundary condition (4), namely\n\\begin{equation} \\label{final_bc}\n({\\rm clos}\\,(\\spt\\,\\|V\\|)) \\setminus U = \\partial \\Gamma_0\\,.\n\\end{equation}\nTowards the first inclusion, suppose that $x \\in ({\\rm clos}\\,(\\spt\\,\\|V\\|)) \\setminus U$, and let $\\{x_h\\}_{h=1}^\\infty$ be a sequence with $x_h \\in \\spt\\|V\\|$ such that $x_h \\to x$ as $h \\to \\infty$. 
If $x \\notin \\partial \\Gamma_0$ then Proposition \\ref{vmn} implies that there exists $r > 0$ such that \n\\[\n\\limsup_{k \\to \\infty} \\|V_k\\|( U \\cap B_r(x)) = 0\\,.\n\\]\nBy the lower semi-continuity of the weight with respect to varifold convergence, we deduce then that $\\|V\\|(U \\cap U_r(x)) = 0$. For $h$ large enough so that $\\abs{x-x_h} < r$ we then have $\\|V\\| (U \\cap U_{r - \\abs{x-x_h}}(x_h)) = 0$, thus contradicting that $x_h \\in \\spt\\|V\\|$. For the second inclusion, let $x \\in \\partial \\Gamma_0$, and suppose towards a contradiction that $x \\notin {\\rm clos}(\\spt\\,\\|V\\|) \\setminus U$. Then, there exists a radius $r > 0$ such that $U_r(x) \\cap \\spt \\, \\|V\\| = \\emptyset$. In particular, $\\|\\nabla \\chi_{E_i}\\| (U \\cap U_r(x)) = 0$ for every $i \\in \\{1, \\ldots, N \\}$. Since $U$ is convex, $U \\cap U_r(x)$ is connected, and thus every $\\chi_{E_i}$ is either identically $0$ or $1$ in $U_r(x) \\cap U$, namely\n\\begin{equation} \\label{one_domain_only}\nU_r(x) \\cap U \\subset E_{\\ell} \\qquad \\mbox{for some $\\ell \\in \\{ 1, \\ldots, N \\}$}\\,.\n\\end{equation}\nBecause $x \\in \\partial \\Gamma_0$, by property $(A4)$ in Assumption \\ref{ass:main} there are two indices $i \\neq i' \\in \\{1,\\ldots,N\\}$ and sequences $\\{x_j\\}_{j=1}^\\infty\\,, \\{x'_j\\}_{j=1}^\\infty$ with $\\lim_{j \\to \\infty} x_j = x = \\lim_{j \\to \\infty} x_j'$ such that $x_j, x_j' \\in \\partial U \\setminus \\partial \\Gamma_0$ and $U_{r_j}(x_j) \\cap U\\subset E_{0,i}$, $U_{r_j'}(x_j') \\cap U\\subset E_{0,i'}$ for some $r_j, r_j' > 0$. If $z$ denotes any of the points $x_j$ or $x_j'$, Proposition \\ref{vmn} and Remark \\ref{rmk:hyperplane} ensure the existence of $\\rho$ such that $\\|V_t\\|(B_\\rho(z) \\cap U) = 0$ for all $t \\geq 0$. Again by lower semicontinuity of the weight with respect to varifold convergence, $\\| V \\| (U_\\rho(z) \\cap U) = 0$. 
Since each $U_\\rho(z) \\cap U$ is connected and $\\spt\\|\\nabla \\chi_{E_i}\\| \\subset \\spt\\|V\\|$ for all $i$, we deduce that $U_{\\rho_j}(x_j) \\cap U \\subset E_{i}$ and $U_{\\rho_j'}(x_j') \\cap U \\subset E_{i'}$ for some $i \\neq i'$. Since both $x_j \\to x$ and $x_j' \\to x$, this conclusion is incompatible with \\eqref{one_domain_only}. This completes the proof.\n\\end{proof}\n\nThe stationary varifold $V$ from Corollary \\ref{main:cor} is a generalized minimal surface in $U$, and for this reason it can be thought of as a solution to Plateau's problem in $U$ with the prescribed boundary $\\partial \\Gamma_0$. Brakke flow provides, therefore, an interesting alternative approach to the existence theory for Plateau's problem compared to more classical methods based on mass (or area) minimization. Another novelty of this approach is that the structure of partitions allows to prescribe the boundary datum in the purely \\emph{topological} sense, by means of the constraint $({\\rm clos}\\,(\\spt\\|V\\|)) \\setminus U = \\partial \\Gamma_0$. This adds to the several other possible interpretations of the spanning conditions that have been proposed in the literature: among them, let us mention the \\emph{homological} boundary conditions in Federer and Fleming's theory of integral currents \\cite{FF60} or of integral currents ${\\rm mod}(p)$ \\cite{Federer_book} (see also Brakke's covering space model for soap films \\cite{Brakke_covering}); the \\emph{sliding} boundary conditions in David's sliding minimizers \\cite{David_Plateau,David_taylorsthm}; and the \\emph{homotopic} spanning condition of Harrison \\cite{Harrison14}, Harrison-Pugh \\cite{HP16} and De Lellis-Ghiraldin-Maggi \\cite{DGM}.\n\nConcerning the latter, we can actually show that, under a suitable extra assumption on the initial partition $\\E_0$, a homotopic spanning condition is satisfied at all times along the flow. 
Before stating and proving this result, which is Proposition \\ref{final spanning} below, let us first record the definition of homotopic spanning condition after \\cite{DGM}.\n\n\\begin{definition}[{see \\cite[Definition 3]{DGM}}]\nLet $n \\geq 2$, and let $\\Sigma$ be a closed subset of $\\R^{n+1}$. Consider the family\n\\begin{equation} \\label{spanning class}\n\\mathcal{C}_\\Sigma := \\left\\lbrace \\gamma \\colon \\Sf^1 \\to \\R^{n+1} \\setminus \\Sigma \\, \\colon \\, \\gamma \\mbox{ is a smooth embedding of $\\Sf^1$ into $\\R^{n+1} \\setminus \\Sigma$} \\right\\rbrace\\,.\n\\end{equation}\nA subfamily $\\mathcal C \\subset \\mathcal C_\\Sigma$ is said to be \\emph{homotopically closed} if $\\gamma \\in \\mathcal C$ implies that $\\tilde \\gamma \\in \\mathcal C$ for every $\\tilde \\gamma \\in \\left[ \\gamma \\right]$, where $\\left[ \\gamma \\right]$ is the equivalence class of $\\gamma$ modulo homotopies in $\\R^{n+1} \\setminus \\Sigma$. Given a homotopically closed $\\mathcal{C} \\subset \\mathcal{C}_\\Sigma$, a relatively closed subset $K \\subset \\R^{n+1} \\setminus \\Sigma$ is $\\mathcal{C}$-\\emph{spanning $\\Sigma$} if \\footnote{With a slight abuse of notation, in what follows we will always identify the map $\\gamma$ with its image $\\gamma(\\mathbb{S}^1) \\subset \\R^{n+1} \\setminus \\Sigma$.}\n\\begin{equation} \\label{C-spanning}\nK \\cap \\gamma \\neq \\emptyset \\qquad \\mbox{for every $\\gamma \\in \\mathcal{C}$}\\,. \n\\end{equation}\n\\end{definition}\n\n\\begin{remark}\nIf $\\mathcal C \\subset \\mathcal{C}_\\Sigma$ contains a homotopically trivial curve, then any $\\mathcal C$-spanning set $K$ will necessarily have non-empty interior (and therefore infinite $\\Ha^n$ measure). 
For this reason, we are only interested in subfamilies $\\mathcal C$ with $\\left[ \\gamma \\right] \\neq 0$ for every $\\gamma \\in \\mathcal C$.\n\\end{remark}\n\n\\begin{definition}\nWe will say that a relatively closed subset $K \\subset \\R^{n+1} \\setminus \\Sigma$ \\emph{strongly homotopically spans $\\Sigma$} if it $\\mathcal{C}$-spans $\\Sigma$ for \\emph{every} homotopically closed family $\\mathcal{C} \\subset \\mathcal{C}_\\Sigma$ which does not contain any homotopically trivial curve. Namely, if $K \\cap \\gamma \\neq \\emptyset$ for every $\\gamma \\in \\mathcal{C}_\\Sigma$ such that $\\left[ \\gamma \\right] \\neq 0$ in $\\pi_1(\\R^{n+1} \\setminus \\Sigma)$.\n\\end{definition}\n\nWe can prove the following proposition, whose proof is a suitable adaptation of the argument in \\cite[Lemma 10]{DGM}.\n\n\\begin{proposition} \\label{final spanning}\n\nLet $n \\geq 2$, and let $U,\\Gamma_0,\\E_0$ be as in Assumption \\ref{ass:main}. Suppose that the initial partition $\\E_0$ satisfies the following additional property:\n\\begin{equation} \\label{disconnected components} \\tag{$\\diamond$}\n\\begin{split}\n&\\mbox{Given any two connected components $S_1$ and $S_2$ of $\\partial U \\setminus \\partial \\Gamma_0$}\\,,\\\\ &\\mbox{there are two indices $i,j \\in \\{1,\\ldots,N\\}$ with $i \\neq j$}\\\\&\\mbox{such that $S_1 \\subset {\\rm clos}\\,E_{0,i}$ and $S_2 \\subset {\\rm clos}\\,E_{0,j}$}\\,. \n\\end{split}\n\\end{equation}\nThen, the set $\\Gamma(t)$ strongly homotopically spans $\\partial \\Gamma_0$ for every $t \\in \\left[0,\\infty \\right]$.\n\n\\end{proposition}\n\n\\begin{proof}\n\nLet $\\gamma \\colon \\mathbb{S}^1 \\to \\R^{n+1} \\setminus \\partial \\Gamma_0$ be a smooth embedding that is not homotopically trivial in $\\R^{n+1} \\setminus \\partial \\Gamma_0$. The goal is to prove that, for every $t \\in \\left[ 0, \\infty \\right]$, $\\Gamma(t) \\cap \\gamma \\neq \\emptyset$. 
First observe that it cannot be that $\\gamma \\subset U$, for otherwise $\\gamma$ would be homotopically trivial. For the same reason, and since the ambient dimension is $n+1 \\geq 3$, also the inclusion $\\gamma \\subset \\R^{n+1} \\setminus {\\rm clos}\\,U$ is incompatible with the properties of $\\gamma$. Hence, we conclude that $\\gamma$ must necessarily intersect $\\partial U$. We first prove the result under the additional assumption that $\\gamma$ and $\\partial U$ intersect transversally. We can then find finitely many closed arcs $I_h = \\left[ a_h, b_h \\right] \\subset \\Sf^1$ with the property that $\\gamma \\cap U = \\bigcup_{h} \\gamma(\\left( a_h, b_h \\right))$, and $\\gamma \\cap ( \\partial U \\setminus {\\partial\\Gamma_0} ) = \\bigcup_{h} \\{ \\gamma(a_h), \\gamma(b_h) \\}$. If there is $h$ such that $\\gamma(a_h)$ and $\\gamma(b_h)$ belong to two distinct connected components of $\\partial U \\setminus {\\partial\\Gamma_0}$, then \\eqref{disconnected components} implies that the arc $\\sigma_h := \\left. \\gamma \\right|_{\\left( a_h, b_h \\right)}$ must intersect $U \\cap \\partial E_i(0)$ for some $i=1,\\ldots,N$. In fact, since the labeling of the open partition at the boundary of $U$ is invariant along the flow, the same conclusion holds for every $t \\in \\left[ 0, \\infty \\right]$. In particular, in this case $\\gamma$ intersects $\\bigcup_{i} (\\partial E_i(t) \\cap U) = \\Gamma(t)$ for every $t \\in \\left[0,\\infty\\right]$. Hence, if, by contradiction, $\\gamma$ has empty intersection with $\\Gamma(t)$, then necessarily for every $h$ there is a connected component $S_h$ of $\\partial U \\setminus {\\partial\\Gamma_0}$ such that $\\gamma(a_h), \\gamma(b_h) \\in S_h$ (note that it may happen that $S_h = S_{h'}$ for $h \\neq h'$). Since each $S_h$ is connected, for every $h$ we can find a smooth embedding $\\tau_h \\colon I_h \\to S_h$ with the property that $\\tau_h(a_h) = \\gamma(a_h)$ and $\\tau_h(b_h) = \\gamma(b_h)$. 
Furthermore, this can be achieved under the additional condition that $\\tau_h(I_h) \\cap \\tau_{h'} (I_{h'}) = \\emptyset$ for every $h \\neq h'$. We can then define a piecewise smooth embedding $\\tilde \\gamma$ of $\\Sf^1$ into $\\R^{n+1} \\setminus {\\partial\\Gamma_0}$ such that $\\left.\\tilde \\gamma \\right|_{I_h} := \\left. \\tau_h \\right|_{I_h}$ for every $h$, and $\\tilde \\gamma = \\gamma$ on the open set $\\Sf^1 \\setminus \\bigcup_{h} I_h$. We have $\\left[ \\tilde \\gamma \\right] = \\left[ \\gamma \\right]$ in $\\pi_1(\\R^{n+1} \\setminus {\\partial\\Gamma_0})$. We can then construct a smooth embedding $\\hat \\gamma \\colon \\Sf^1 \\to \\R^{n+1} \\setminus {\\partial\\Gamma_0}$ such that $\\left[ \\hat \\gamma \\right] = \\left[ \\gamma \\right]$ in $\\pi_1(\\R^{n+1} \\setminus {\\partial\\Gamma_0})$, and with $\\hat \\gamma \\subset \\R^{n+1} \\setminus \\partial U$. Since $n+1 \\geq 3$ this contradicts the assumption that $\\left[ \\gamma \\right] \\neq 0$ and completes the proof if $\\gamma$ and $\\partial U$ intersect transversally.\n\n\\smallskip\n\nFinally, we remove the transversality assumption. Let $\\delta = \\delta (\\partial U) > 0$ be such that the tubular neighborhood $(\\partial U)_{2\\delta}$ has a well-defined smooth nearest point projection $\\Pi$, and consider, for $\\abs{s} < \\delta$, the open sets $U_s$ having boundary $\\partial U_s = \\left\\lbrace x - s\\, \\nu_{U}(x) \\, \\colon \\, x \\in \\partial U \\right \\rbrace$, where $\\nu_{U}$ is the exterior normal unit vector field to $\\partial U$. Since $\\gamma$ is smooth, by Sard's theorem $\\gamma$ intersects $\\partial U_s$ transversally for a.e. $\\abs{s} < \\delta$. 
Fix such an $s \\in \\left( 0, \\delta \\right)$, and let $\\Phi_s \\colon \\R^{n+1} \\to \\R^{n+1}$ be the smooth diffeomorphism of $\\R^{n+1}$ defined by\n\\begin{equation} \\label{diffeo}\n\\Phi_s(x) := x + \\varphi_s(\\rho_U(x)) \\, \\nu_U(\\Pi(x))\\,,\n\\end{equation}\nwhere \n\\[\n\\rho_U(x) := \n\\begin{cases}\n\\abs{x - \\Pi(x)} & \\mbox{if } x \\in (\\partial U)_{2\\delta} \\cap U \\\\\n- \\abs{x - \\Pi(x)} & \\mbox{if } x \\in (\\partial U)_{2\\delta} \\setminus U\n\\end{cases}\n\\]\nis the signed distance function from $\\partial U$, and $\\varphi_s = \\varphi_s(t)$ is a smooth function such that\n\\[\n\\varphi_s(t) = 0 \\quad \\mbox{for all $|t| \\geq 2s$}\\,, \\qquad \\mbox{and} \\qquad \\varphi_s(s)=s\\,.\n\\]\n\nIn particular, $\\Phi_s$ maps $\\partial U_s$ diffeomorphically onto $\\partial U$, and furthermore\n\\begin{equation} \\label{convergence diffeo}\n\\Phi_{s} \\to {\\rm id} \\qquad \\mbox{uniformly on $\\R^{n+1}$ as $s \\to 0+$}\\,.\n\\end{equation}\nSince $\\gamma$ intersects $\\partial U_s$ transversally, the curve $\\Phi_s \\circ \\gamma$ intersects $\\partial U$ transversally. Furthermore, since $\\gamma$ and ${\\partial\\Gamma_0}$ are two compact sets with empty intersection, \\eqref{convergence diffeo} implies that if we choose $s$ sufficiently small then also $(\\Phi_s \\circ \\gamma) \\cap {\\partial\\Gamma_0} = \\emptyset$. Since $\\left[ \\Phi_s \\circ \\gamma \\right] = \\left[ \\gamma \\right] \\neq 0$ in $\\pi_1(\\R^{n+1} \\setminus \\partial\\Gamma_0)$, the first part of the proof guarantees that for every $t \\in \\left[ 0, \\infty \\right]$ we have $\\Gamma(t) \\cap (\\Phi_s \\circ \\gamma) \\neq \\emptyset$. For every $t$ we then have points $z_s(t) \\in \\Gamma(t) \\cap \\Phi_s \\circ \\gamma$. Along a sequence $s_h \\to 0+$, then, by compactness, \\eqref{convergence diffeo}, and the fact that each set $\\Gamma(t)$ is closed, we have that the points $z_{s_h}(t)$ converge to a point $z_0(t) \\in \\Gamma(t) \\cap \\gamma$. 
The proof is now complete.\n\\end{proof}\n\n\\begin{example} \\label{ex:two circles}\nSuppose that $U = U_1(0) \\subset \\R^3$, and $\\partial \\Gamma_0$ is the union of two parallel circles contained in $\\mathbb{S}^2 = \\partial U$ at distance $2h$ from one another, with $h \\in \\left(0,1\\right)$. Then, $\\partial U \\setminus \\partial \\Gamma_0$ is the union of three connected components $S_{u} \\cup S_{l} \\cup S_{d}$ (here $u,l,d$ stand for \\emph{up, lateral}, and \\emph{down}, respectively). If $h$ is suitably small, then there are two smooth minimal catenoidal surfaces $C_1 \\subset U$ and $C_2 \\subset U$, one stable and the other unstable, satisfying ${\\rm clos}(C_j) \\setminus U = \\partial \\Gamma_0$. Nonetheless, if the initial partition $\\{E_{0,i}\\}_{i}$ satisfies \\eqref{disconnected components}, then, as a consequence of Proposition \\ref{final spanning}, neither $C_1$ nor $C_2$ is an admissible limit of a Brakke flow as in Corollary \\ref{main:cor}, since there exists a smooth and homotopically non-trivial embedding $\\gamma \\colon \\mathbb{S}^1 \\to \\R^3 \\setminus \\partial\\Gamma_0$ having empty intersection with each of them. 
For instance, if $N=3$ and the initial partition is such that $S_u \\subset {\\rm clos}\\,E_{0,1}$, $S_l \\subset {\\rm clos}\\,E_{0,2}$, and $S_d \\subset {\\rm clos}\\,E_{0,3}$, then the corresponding Brakke flows will converge, instead, to a \\emph{singular} minimal surface $\\Gamma$ in $U$ consisting of the union $\\Gamma = \\tilde C_1 \\cup \\tilde C_2 \\cup D$, where $\\tilde C_j$ are pieces of catenoids, and $D$ is a disc contained in the plane $\\{z=0\\}$, which join together forming $120^{\\circ}$ angles along the ``free boundary'' circle $\\Sigma = \\partial D$; see Figure \\ref{singular_cat}.\n\n\\begin{figure}[h]\n\\includegraphics[scale=0.6]{singular_cat.pdf}\n\\caption{The singular limit varifold detailed in Example \\ref{ex:two circles}.} \\label{singular_cat}\n\\end{figure}\n\n\\end{example}\n\n\nWe will conclude the section with three remarks outlining some interesting directions for future research.\n\n\\begin{remark}\nFirst, we stress that the requirements on $\\partial \\Gamma_0$ are rather flexible, above all in terms of regularity. It would be interesting to characterize, for a given strictly convex domain $U \\subset \\R^{n+1}$, all its \\emph{admissible boundaries}, namely all subsets $\\Sigma \\subset \\partial U$ for which there exist $N \\geq 2$ and $\\E_0$, $\\Gamma_0$ as in Assumption \\ref{ass:main} with $\\Sigma = \\partial \\Gamma_0$. A first observation is that admissible boundaries do not need to be countably $(n-1)$-rectifiable, or to have finite $(n-1)$-dimensional Hausdorff measure: for example, it is not difficult to construct an admissible $\\Sigma \\subset \\partial U_1(0)$ in $\\R^2$ with $\\Ha^1(\\Sigma) > 0$, essentially a ``fat'' Cantor set in $\\mathbb{S}^1$. The assumption $(A4)$ requires any admissible boundary to have empty interior. 
It is unclear whether this condition is also sufficient for a subset $\\Sigma$ to be admissible.\n\\end{remark}\n\n\n\\begin{remark}\nLet us explicitly observe that, even in the case when $\\Gamma_0$ (or more precisely $V_0 := \\var(\\Gamma_0,1)$) is stationary, it is false in general that $V_{t} = V_0$ for $t > 0$. In other words, the approximation scheme which produces the Brakke flow $V_t$ may move the initial datum $V_0$ even when the latter is stationary. A simple example is a set consisting of two line segments with a crossing, for which multiple non-trivial solutions (depending on the choice of the initial partition) are possible; see Figure \\ref{fig:multiple_sols}. In fact, one can\nprove that such a one-dimensional configuration \\emph{cannot} stay time-independent with respect to the Brakke flow constructed in the present paper: \\cite[Theorem 2.2]{KiTo20}, indeed, shows that one-dimensional Brakke flows obtained in the present paper and in \\cite{KimTone} necessarily satisfy a specific angle condition at junctions\nfor a.e. time, with the only admissible angles being $0$, $60$, or $120$ degrees. Thus, depending\non the initial labeling of domains, one of the two evolutions depicted in Figure \\ref{fig:multiple_sols} has to occur instantly. \n\n\\begin{figure}[h]\n\\includegraphics[scale=0.65]{non-unique.pdf}\n\\caption{Non-uniqueness without loss of measure when $N=2$ (top) or $N=4$ (bottom).} \\label{fig:multiple_sols}\n\\end{figure}\n\nIf $\\Gamma_0$ is a smooth minimal surface\nwith smooth boundary $\\partial\\Gamma_0$, the uniqueness theorem for classical MCF should yield $\\Gamma_t\\equiv\\Gamma_0$ as the unique solution, even if\nthe latter is unstable (i.e.~the second variation is negative for some direction). 
In other words, in the smooth case we expect that there is no Brakke flow starting from $\\Gamma_0$ other than the time-independent\nsolution (notice, in passing, that both the area-reducing Lipschitz deformation step and the motion by smoothed mean curvature step in our time-discrete approximation of Brakke flow trivialize in this case, at least locally, because smooth minimal surfaces are already locally area minimizing at suitably small scales around each point). \n\nOn the other hand, in \\cite{stu-tone2} we show that time-dependent solutions may arise even from the existence, on $\\Gamma_0$, of singular points at which $V_0$ has a \\emph{flat} tangent cone, that is, a tangent cone which is a plane $T$ with multiplicity $Q \\ge 2$. It would be interesting to characterize the regularity properties of those stationary $\\Gamma_0$ with\n$E_{0,1},\\ldots,E_{0,N}$ satisfying Assumption \\ref{ass:main} and $\\Ha^n(\\Gamma_0\\setminus \\cup_{i=1}^N\n\\partial^* E_{0,i})=0$ \nwhich do not allow any non-trivial Brakke flows (\\emph{dynamically stable} stationary varifolds, in the terminology introduced in \\cite{stu-tone2}). We expect that such a $\\Gamma_0$ \nshould have some local measure minimizing properties. \n\\end{remark}\n\n\\begin{remark}\nLet $V$, $\\{E_i\\}_{i=1}^N$ and $\\Gamma$ be as in Corollary \\ref{main:cor} obtained as $t_k\\to \\infty$ along a Brakke flow. \nSince $V$ is integral and stationary, $V=\\var (\\Gamma,\\theta)$ for some \n$\\Ha^n$-measurable function $\\theta:\n\\Gamma\\to \\mathbb N$. One can check that $\\Gamma$ and $\\{E_i\\}_{i=1}^N$ (after removing empty\n$E_i$'s if necessary) again satisfy Assumption \\ref{ass:main}, thus \nwe may apply Theorem \\ref{thm:main} and obtain another Brakke flow with the same\nfixed boundary. 
Note that if we have $\\|V\\|(\\{x\\,:\\,\\theta(x)\\geq 2\\})>0$, then $\\var(\\Gamma,1)$ may \nnot be stationary, and the Brakke flow starting from the non-stationary $\\var(\\Gamma,1)$ \nis genuinely time-dependent. \nWe then obtain another stationary varifold as $t\\to\\infty$ by Corollary \\ref{main:cor}. \nIt is likely that,\nafter a finite number of iterations, this process produces a unit density stationary varifold which does not move anymore. The other possibility is \nalso interesting, in that we would have \ninfinitely many different integral stationary varifolds with the same boundary condition,\neach having strictly smaller $\\Ha^n$ measure than the previous one. \n\\end{remark}\n\n\\newpage\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\\section{Conclusion and Future Work}\\label{sec:conclusion}\n\nGrammar-based fuzzing is effective for fuzzing applications with\ncomplex structured inputs, provided a comprehensive input grammar is\navailable. This paper describes the first attempt at using\nneural-network-based statistical learning techniques to automatically\ngenerate input grammars from sample inputs. We presented and evaluated\nalgorithms that leverage recent advances in sequence learning by\nneural networks, namely \\t{seq2seq} recurrent neural networks, to\nautomatically learn a generative model of PDF objects. We devised\nseveral sampling techniques to generate new PDF objects from the\nlearnt distribution. We show that the learnt models are not only\nable to generate a large set of new well-formed objects, but also\nresult in increased coverage of the PDF parser used in our\nexperiments, compared to various forms of random fuzzing. 
\n\nWhile the results presented in Section~\\ref{sec:evaluation} may vary\nfor other applications, our general observations about the tension\nbetween conflicting learning and fuzzing goals will remain valid:\nlearning wants to capture the structure of well-formed inputs, while\nfuzzing wants to break that structure in order to cover unexpected\ncode paths and find bugs. We believe that the inherent statistical\nnature of learning by neural networks is a powerful tool to address\nthis learn\\&fuzz challenge.\n\nThere are several interesting directions for future work. While the focus of our paper was on learning the structure of PDF objects, it would be worth exploring how to learn, as automatically as possible, the higher-level hierarchical structure of PDF documents involving cross-reference tables, object bodies, and trailer sections that maintain certain complex invariants amongst them. Perhaps some combination of logical inference techniques with neural networks could be powerful enough to achieve this. Also, our learning algorithm is currently agnostic to the application under test. We are considering using some form of reinforcement learning to guide the learning of \\t{seq2seq} models with coverage feedback from the application, which could potentially guide the learning more explicitly towards increasing coverage.\n\n\n\\section{Statistical Learning of Object Contents}\\label{sec:learning}\n\nWe now describe our statistical learning approach for learning a generative model of PDF objects. The main idea is to learn a generative language model over the set of PDF object characters given a large corpus of objects. We use a sequence-to-sequence (seq2seq)~\\cite{seq2seq,machinetranslation} network model that has been shown to produce state-of-the-art results for many different learning tasks such as machine translation~\\cite{machinetranslation} and speech recognition~\\cite{speechrecognition}. 
The seq2seq model allows for learning arbitrary-length contexts to predict the next sequence of characters, as compared to traditional n-gram-based approaches that are limited by contexts of finite length. Given a corpus of PDF objects, the seq2seq model can be trained in an unsupervised manner to learn a generative model to generate new PDF objects using a set of input and output sequences. The input sequences correspond to sequences of characters in PDF objects and the corresponding output sequences are obtained by shifting the input sequences by one position. The learnt model can then be used to generate new sequences (PDF objects) by sampling the distribution given a starting prefix (such as \\quotes{\\texttt{obj}}).\n\n\\subsection{Sequence-to-Sequence Neural Network Models}\n\nA recurrent neural network (RNN) is a neural network that operates on a variable-length input sequence $\\langle x_1,x_2,\\cdots,x_T \\rangle$ and consists of a hidden state $h$ and an output $y$. The RNN processes the input sequence in a series of time stamps (one for each element in the sequence). For a given time stamp $t$, the hidden state $h_t$ at that time stamp and the output $y_t$ are computed as:\n\\begin{equation*}\nh_t = f(h_{t-1},x_{t})\n\\end{equation*}\n\\begin{equation*}\ny_t = \\phi(h_t)\n\\end{equation*}\nwhere $f$ is a non-linear activation function such as sigmoid or $\\tanh$, and $\\phi$ is a function such as \\texttt{softmax} that computes the output probability distribution over a given vocabulary conditioned on the current hidden state. 
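For illustration only, this recurrence with $f = \tanh$ and $\phi = \texttt{softmax}$ can be sketched in pure Python; the toy weight matrices and their dimensions below are our own assumptions for the sketch, not part of any actual implementation:

```python
import math

def rnn_step(h_prev, x_onehot, W_h, W_x, W_y):
    """One RNN time step: h_t = tanh(W_h h_{t-1} + W_x x_t), y_t = softmax(W_y h_t)."""
    # hidden-state update (the non-linear activation f is tanh here)
    h = [math.tanh(sum(W_h[i][j] * h_prev[j] for j in range(len(h_prev))) +
                   sum(W_x[i][k] * x_onehot[k] for k in range(len(x_onehot))))
         for i in range(len(h_prev))]
    # output distribution over the vocabulary (phi is softmax here)
    logits = [sum(W_y[v][i] * h[i] for i in range(len(h))) for v in range(len(W_y))]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    y = [e / z for e in exps]
    return h, y
```

Sampling the next character then amounts to drawing from `y`; a trained model would supply learnt weights in place of the toy matrices.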
RNNs can learn a probability distribution over a character sequence $\\langle x_1,\\cdots,x_{t-1} \\rangle$ by training to predict the next character $x_t$ in the sequence, i.e., they can learn the conditional distribution $p(x_t|\\langle x_1,\\cdots,x_{t-1} \\rangle)$.\n\nCho et al.~\\cite{seq2seq} introduced a sequence-to-sequence (seq2seq) model that consists of two recurrent neural networks, an encoder RNN that maps a variable-length input sequence to a fixed-dimensional representation, and a decoder RNN that takes the fixed-dimensional representation and generates a variable-length output sequence. The decoder network generates output sequences by using the predicted output character generated at time step $t$ as the input character for time step $t+1$. An illustration of the \\texttt{seq2seq} architecture is shown in Figure~\\ref{seqseq}. This architecture allows us to learn a conditional distribution over a sequence of next outputs, i.e., $p( \\langle y_1,\\cdots,y_{T_1} \\rangle | \\langle x_1,\\cdots,x_{T_2} \\rangle)$.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.4]{figures\/seqseq_cropped.pdf}\n\\caption{A sequence-to-sequence RNN model to generate PDF objects.}\n\\label{seqseq}\n\\end{figure}\n\nWe train the seq2seq model using a corpus of PDF objects treating each one of them as a sequence of characters. During training, we first concatenate all the object files $s_i$ into a single file resulting in a large sequence of characters $\\tilde{s} = s_1 + \\cdots + s_n$. We then split the sequence into multiple training sequences of a fixed size $d$, such that the $i^{\\texttt{th}}$ training instance $t_i = \\tilde{s}[i*d:(i+1)*d]$, where $s[k:l]$ denotes the subsequence of $s$ between indices $k$ and $l$. The output sequence for each training sequence is the input sequence shifted by $1$ position, i.e., $o_i=\\tilde{s}[i*d+1:(i+1)*d+1]$. 
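The slicing of the concatenated corpus into fixed-size training instances, each paired with its one-position-shifted output, can be sketched as follows (a minimal illustration; the example objects and the window size are arbitrary):

```python
def make_training_pairs(objects, d):
    """Concatenate the object files and split the result into fixed-size
    input sequences, each paired with the same sequence shifted by one
    position as its output."""
    s = "".join(objects)  # the concatenated character sequence
    pairs = []
    i = 0
    while (i + 1) * d + 1 <= len(s):
        t_i = s[i * d : (i + 1) * d]          # i-th training instance
        o_i = s[i * d + 1 : (i + 1) * d + 1]  # its output, shifted by 1
        pairs.append((t_i, o_i))
        i += 1
    return pairs

# every output sequence is its input sequence shifted by one character
pairs = make_training_pairs(["1 0 obj<<>>endobj", "2 0 obj null endobj"], 8)
assert all(o[:-1] == t[1:] for t, o in pairs)
```

A trailing fragment shorter than the window is simply dropped in this sketch; how the real pipeline handles it is not specified in the text.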
The seq2seq model is then trained end-to-end to learn a generative model over the set of all training instances.\n\n\\subsection{Generating new PDF objects}\n\nWe use the learnt seq2seq model to generate new PDF objects. There are many different strategies for object generation depending upon the sampling strategy used to sample the learnt distribution. We always start with a prefix of the sequence \\quotes{\\texttt{obj }} (denoting the start of an object instance), and then query the model to generate a sequence of output characters until it produces \\quotes{\\texttt{endobj}} corresponding to the end of the object instance. We now describe three different sampling strategies we employ for generating new object instances.\n\n\\paragraph{{\\bf \\nosample}:} In this generation strategy, we use the learnt distribution to greedily predict the best character given a prefix. This strategy results in generating PDF objects that are most likely to be well-formed and consistent, but it also limits the number of objects that can be generated. Given a prefix like \\quotes{\\texttt{obj}}, the best sequence of next characters is uniquely determined and therefore this strategy results in the same PDF object. This limitation precludes this strategy from being useful for fuzzing.\n\n\\paragraph{{\\bf \\orig}:} In this generation strategy, we use the learnt distribution to \\emph{sample} next characters (instead of selecting the top predicted character) in the sequence given a prefix sequence. This sampling strategy is able to generate a diverse set of new PDF objects by combining various patterns the model has learnt from the diverse set of objects in the training corpus. Because of sampling, the generated PDF objects are not always guaranteed to be well-formed, which is useful from the fuzzing perspective.\n\n\\paragraph{{\\bf \\fuzz}:} This sampling strategy is a combination of $\\orig$ and $\\nosample$ strategies. 
It samples the distribution to generate the next character only when the current prefix sequence ends with a whitespace, whereas it uses the best character from the distribution in the middle of tokens (i.e., prefixes ending with non-whitespace characters), similar to the $\\nosample$ strategy. This strategy is expected to generate more well-formed PDF objects compared to the $\\orig$ strategy, as the sampling is restricted to positions immediately following whitespace characters.\n\n\n\\subsection{\\textsc{SampleFuzz}: Sampling with Fuzzing}\n\nOur goal of learning a generative model of PDF objects is ultimately to perform fuzzing. A perfect learning technique would always generate well-formed objects that would not exercise any error-handling code, whereas a bad learning technique would result in ill-formed objects that would be quickly rejected by the parser upfront. To explore this tradeoff, we present a new algorithm, dubbed \\t{SampleFuzz}, to perform some fuzzing while sampling new objects. We use the learnt model to generate new PDF object instances, but at the same time introduce anomalies to exercise error-handling code. \n\nThe \\t{SampleFuzz} algorithm is shown in Algorithm~\\ref{samplefuzzalgo}. It takes as input the learnt distribution $\\mathcal{D}(\\t{x},\\theta)$, the probability of fuzzing a character $t_\\t{fuzz}$, and a threshold probability $p_t$ that is used to decide whether to modify the predicted character. While generating the output sequence \\t{seq}, the algorithm samples the learnt model to get some next character $c$ and its probability $p(c)$ at a particular timestamp $t$. If the probability $p(c)$ is higher than a user-provided threshold $p_t$, i.e., if the model is confident that $c$ is likely the next character in the sequence, the algorithm chooses instead to sample a different character $c'$ in its place, where $c'$ has the minimum probability $p(c')$ in the learnt distribution. 
This modification (fuzzing) takes place only if the result $p_\\t{fuzz}$ of a random coin toss returns a probability higher than input parameter $t_\\t{fuzz}$, which lets the user further control the probability of fuzzing characters. The key intuition of the \\t{SampleFuzz} algorithm is to introduce unexpected characters in objects only in places where the model is {\\em highly confident}, in order to trick the PDF parser. The algorithm also ensures that the object length is bounded by \\t{MAXLEN}. Note that the algorithm is not guaranteed to always terminate, but we observe that it always terminates in practice.\n\n\\begin{algorithm}[t]\n\\caption{\\t{SampleFuzz}($\\mathcal{D}(\\t{x},\\theta),t_\\t{fuzz}, p_t$)}\n\\begin{algorithmic}\n\\STATE {\\t{seq} := \\quotes{obj }}\n\\WHILE{$\\neg$ \\t{seq}.\\t{endswith}(\\quotes{endobj})}\n\\STATE{c,p(c) := \\t{sample}($\\mathcal{D}$(\\t{seq},$\\theta$))} (* Sample c from the learnt distribution *)\n\\STATE{$p_\\t{fuzz} := \\t{random}(0,1) $} (* random variable to decide whether to fuzz *)\n\\IF{$p_\\t{fuzz} > t_\\t{fuzz} \\land p(c) > p_t$}\n\\STATE{c := $\\argmin_{c'} \\{p(c') \\sim \\mathcal{D}(\\t{seq},\\theta)\\} $} (* replace c by c' (with lowest likelihood) *)\n\\ENDIF\n\\STATE{\\t{seq} := \\t{seq} + c}\n\\IF{\\t{len(seq)} $>$ \\t{MAXLEN}}\n\\STATE{\\t{seq} := \\quotes{obj }} (* Reset the sequence *)\n\\ENDIF\n\\ENDWHILE\n\\RETURN{\\t{seq}}\n\\end{algorithmic}\n\\label{samplefuzzalgo}\n\\end{algorithm}\n\n\\subsection{Training the Model}\n\nSince we train the seq2seq model in an unsupervised learning setting, we do not have test labels to explicitly determine how well the learnt models are performing. We instead train multiple models parameterized by number of passes, called \\emph{epochs}, that the learning algorithm performs over the training dataset. An \\emph{epoch} is thus defined as an iteration of the learning algorithm to go over the complete training dataset. 
We evaluate the seq2seq models trained for five different numbers of epochs: 10, 20, 30, 40, and 50. In our setting, one epoch takes about 12 minutes to train the seq2seq model, and the model with 50 epochs takes about 10 hours to learn. We use an LSTM model~\\cite{lstm} (a variant of RNN) with 2 hidden layers, where each layer consists of 128 hidden states. \n\\section{Experimental Evaluation}\\label{sec:evaluation}\n\n\\subsection{Experiment Setup}\n\nIn this section, we present results of various fuzzing experiments\nwith the PDF viewer included in Microsoft's new Edge browser. We used\na self-contained single-process test-driver executable provided by\nthe Windows team for testing\/fuzzing purposes. \\comment{anonymized: provided\nto us by the Windows organization.} This executable takes a PDF file\nas input argument, executes the PDF parser included in the Microsoft\nEdge browser, and then stops. If the executable detects any parsing\nerror due to the PDF input file being malformed, it prints an error\nmessage in an execution log. In what follows, we simply refer to it\nas the {\\em Edge PDF parser}. All experiments were performed on 4-core\n64-bit Windows 10 VMs with 20Gb of RAM.\n\nWe use three main standard metrics to measure fuzzing effectiveness:\n\\begin{description}\n\\topsep0pt\n\\itemsep0pt\n\\item [Coverage.] For each test execution, we measure instruction coverage, that is, the set of all unique instructions executed during that test. Each instruction is uniquely identified by a pair of values {\\tt dll-name} and {\\tt dll-offset}. The coverage for a set of tests is simply the union of the coverage sets of each individual test.\n\n\\item [Pass rate.] For each test execution, we programmatically check ({\\tt grep}) for the presence of parsing-error messages in the PDF-parser execution log. If there are no error messages, we call this test {\\em pass} otherwise we call it {\\em fail}. 
Pass tests correspond to PDF files that are considered to be well-formed by the Edge PDF parser. This metric is less important for fuzzing purposes, but it will help us estimate the quality of the learning.\n\n\\item [Bugs.] Each test execution is performed under the monitoring of AppVerifier, a free runtime monitoring tool that can catch memory corruption bugs (such as buffer overflows) with low runtime overhead (typically a few percent) and that is widely used for fuzzing on Windows (for instance, this is how SAGE~\\cite{SAGE} detects bugs).\n\n\\end{description}\n\n\\subsection{Training Data}\n\nWe extracted about 63,000 non-binary PDF objects out of a diverse set\nof 534 PDF files. These 534 files themselves were\nprovided to us by the Windows fuzzing team and had been used for prior\nextended fuzzing of the Edge PDF parser. This set of 534\nfiles was itself the result of {\\em seed minimization}, that is, the\nprocess of computing a subset of a larger set of input files which\nprovides the same instruction coverage as the larger set. Seed\nminimization is a standard first step applied before file\nfuzzing~\\cite{fuzzing-book,SAGE}. The larger set of PDF files came\nfrom various sources, like past PDF files used for fuzzing but also\nother PDF files collected from the public web. \\comment{anonymized:\nand our own intranet.}\n\nThese 63,000 non-binary objects are the training set for the RNNs we\nused in this work. Binary objects embedded in PDF files (typically\nrepresenting images in various image formats) were not considered in\nthis work.\n\nWe learn, generate, and fuzz PDF objects, but the Edge PDF\nparser processes full PDF files, not single objects. Therefore we wrote a simple\nprogram to correctly {\\em append} a new PDF object to an existing\n(well-formed) PDF file, which we call a {\\em host}, following the\nprocedure discussed in Section~\\ref{pdf-struc} for updating a PDF\ndocument. 
Specifically, this program first identifies the last trailer\nin the PDF host file. This provides information about the file, such\nas addresses of objects and the cross-reference table, and the last used object\nID. Next, a new body section is added to the file. In it, the new\nobject is included with an object ID that overrides the last object in\nthe host file. A new cross-reference table is appended, which\nincreases the generation number of the overridden object. Finally, a\nnew trailer is appended.\n\n\\subsection{Baseline Coverage}\n\nTo allow for a meaningful interpretation of coverage results, we\nrandomly selected 1,000 PDF objects out of our 63,000 training\nobjects, and we measured their coverage of the Edge PDF parser, to be\nused as a baseline for later experiments.\n\nA first question is which host PDF file we should use in our\nexperiments: since any PDF file will have some objects in it, will a\nnew appended object interfere with other objects already present in\nthe host, and hence influence the overall coverage and pass rate?\n\nTo study this question, we selected the smallest three PDF files in\nour set of 534 files, and used those as hosts. These three hosts are\nof size 26KB, 33KB, and 16KB, respectively.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.3]{figures\/baselineCov.pdf}\n\\vspace*{-0.5cm}\n\\caption{Coverage for PDF hosts and baselines.}\n\\label{fig:baseline-coverage}\n\\end{figure}\n\nFigure~\\ref{fig:baseline-coverage} shows the instruction coverage\nobtained by running the Edge PDF parser on the three hosts, denoted\n{\\tt host1}, {\\tt host2}, and {\\tt host3}. It also shows the coverage\nobtained by computing the union of these three sets, denoted {\\tt\nhost123}. Coverage ranges from 353,327 ({\\tt host1}) to 457,464 ({\\tt\nhost2}) unique instructions, while the union ({\\tt host123}) is 494,652\nand larger than all three -- each host covers some unique instructions\nnot covered by the other two. 
Note that the smallest file {\\tt host3}\ndoes not lead to the smallest coverage.\n\nNext, we recombined each of our 1,000 baseline objects with each of\nour three hosts, to obtain three sets of 1,000 new PDF files, denoted\n{\\tt baseline1}, {\\tt baseline2} and {\\tt baseline3},\nrespectively. Figure~\\ref{fig:baseline-coverage} shows the coverage of\neach set, as well as their union {\\tt baseline123}. We observe the\nfollowing.\n\\begin{itemize}\n\\topsep0pt\n\\itemsep0pt\n\\item The baseline coverage varies\ndepending on the host, but is larger than the host alone (as\nexpected). The largest difference between a host and a baseline\ncoverage is 59,221 instructions for {\\tt host123} out of 553,873\ninstructions for {\\tt baseline123}. In other words, 90\\% of all\ninstructions are included in the host coverage no matter what new\nobjects are appended.\n\n\\item Each test typically covers on the\norder of half a million unique instructions; this confirms that the\nEdge PDF parser is a large and non-trivial application.\n\n\\item 1,000 PDF files take about 90 minutes to be processed (both to be\ntested and to collect the coverage data).\n\n\\end{itemize}\nWe also measured the pass rate for each experiment. As expected, the\npass rate is 100\\% for all 3 hosts.\n\n{\\bf Main Takeaway:} Even though coverage varies across hosts because\nobjects may interact differently with each host, the re-combined PDF\nfile is always perceived as well-formed by the Edge PDF parser.\n\n\\subsection{Learning PDF Objects}\n\nWhen training the RNN, an important parameter is the number of epochs\nbeing used (see Section~\\ref{sec:learning}). We report here results of\nexperiments obtained after training the RNN for 10, 20, 30, 40, and 50\nepochs, respectively. After training, we used each learnt RNN model to generate 1,000 unique PDF objects. 
We also compared the generated objects with the 63,000 objects used for training the model, and found no exact matches.\n\nAs explained earlier in Section~\\ref{sec:learning}, we consider two\nmain RNN generation modes: the $\\orig$ mode where we sample the\ndistribution at every character position, and the $\\fuzz$ mode where we sample\nthe distribution only after whitespaces and generate the top predicted character for other positions.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.3]{figures\/passRate.pdf}\n\\vspace*{-0.5cm}\n\\caption{Pass rate for $\\orig$ and $\\fuzz$ from 10 to 50 epochs.}\n\\label{fig:passRate}\n\\end{figure}\n\nThe pass rate for $\\orig$ and $\\fuzz$ when training with 10 to 50 epochs is\nreported in Figure~\\ref{fig:passRate}. We observe the following:\n\\begin{itemize}\n\\topsep0pt\n\\itemsep0pt\n\\item The pass rate for $\\fuzz$ is consistently better than the one for $\\orig$.\n\\item For 10 epochs only, the pass rate for $\\orig$ is already above 70\\%. 
This means that the learning is of good quality.\n\\item As the number of epochs increases, the pass rate increases, as expected, since the learned models become more precise but they also take more time (see Section~\\ref{sec:learning}).\n\\item The best pass rate is 97\\% obtained with $\\fuzz$ and 50 epochs.\n\\end{itemize}\nInterestingly, the pass rate is essentially the same regardless of the\nhost PDF file being used: it varies by at most 0.1\\% across hosts (data not shown here).\n\n{\\bf Main Takeaway:} The pass rate ranges between $70\\%$ and $97\\%$\nand shows the learning is of good quality.\n\n\n\\subsection{Coverage with Learned PDF Objects}\n\n\\begin{figure}[t]\n\\centering\n\\hspace*{-2cm}\n\\includegraphics[scale=0.5]{figures\/epochsCoverage.pdf}\n\\vspace*{-0.5cm}\n\\caption{Coverage for $\\orig$ and $\\fuzz$ from 10 to 50 epochs, for {\\tt host 1, 2, 3,} and {\\tt 123}.}\n\\label{fig:epochs-coverage}\n\\end{figure}\n\nFigure~\\ref{fig:epochs-coverage} shows the instruction coverage\nobtained with $\\orig$ and $\\fuzz$ from 10 to 50 epochs and using {\\tt host1}\n(top left), {\\tt host2} (top right), {\\tt host3} (bottom left), and the\noverall coverage for all hosts {\\tt host123} (bottom right). The\nfigure also shows the coverage obtained with the corresponding {\\tt\nbaseline}. We observe the following:\n\\begin{itemize}\n\\topsep0pt\n\\itemsep0pt\n\\item Unlike for the pass rate, the host impacts coverage significantly, as already pointed out earlier. 
Moreover, the shape of each line varies across hosts.\n\n\\item For {\\tt host1} and {\\tt host2}, the coverage for $\\orig$ and $\\fuzz$ is above the {\\tt baseline} coverage for most epoch results, while it is mostly below the {\\tt baseline} coverage for {\\tt host3} and {\\tt host123}.\n\n\\item The best overall coverage is obtained with $\\orig$ 40-epochs (see the {\\tt host123} data at the bottom right).\n\n\\item The {\\tt baseline123} coverage is overall second best behind $\\orig$ 40-epochs.\n\n\\item The best coverage obtained with $\\fuzz$ is also with 40-epochs.\n\n\\end{itemize}\n{\\bf Main Takeaway:} The best overall coverage is obtained with $\\orig$ 40-epochs.\n\n\\subsection{Comparing Coverage Sets}\n\n\\begin{figure}[t]\n\\centering\n\\begin{tabular}{c|c|c|c|c}\nRow$\\setminus$Column & $\\orig$-40e & $\\fuzz$-40e & {\\tt baseline123} & {\\tt host123} \\\\\n\\hline\n$\\orig$-40e & 0 & 10,799 & 6,658 & 65,442 \\\\\n$\\fuzz$-40e & 1,680 & 0 & 3,393 & 56,323 \\\\\n{\\tt baseline123} & 660 & 6,514 & 0 & 59,444 \\\\\n{\\tt host123} & 188 & 781 & 223 & 0 \\\\\n\\end{tabular}\n\\caption{Comparing coverage: unique instructions in each row compared to each column.}\n\\label{fig:coverage-overlap}\n\\end{figure}\n\nSo far, we simply counted the number of unique instructions being\ncovered. We now drill down into the overall {\\tt host123} coverage\ndata of Figure~\\ref{fig:epochs-coverage}, and compute the overlap\nbetween overall coverage sets obtained with our 40-epochs winner\n$\\orig$-40e and $\\fuzz$-40e, as well as the {\\tt baseline123} and {\\tt\nhost123} overall coverage. The results are presented in\nFigure~\\ref{fig:coverage-overlap}.
We observe the following:\n\\begin{itemize}\n\\topsep0pt\n\\itemsep0pt\n\\item All sets are almost supersets of {\\tt host123} as expected (see the {\\tt host123} row), except for a few hundred instructions each.\n\n\\item $\\orig$-40e is almost a superset of all other sets,\nexcept for 1,680 instructions compared to $\\fuzz$-40e, and a few\nhundred instructions compared to {\\tt baseline123} and {\\tt host123}\n(see the $\\orig$-40e column).\n\n\\item $\\orig$-40e and $\\fuzz$-40e have way more instructions in common\nthan they differ (10,799 and 1,680), with $\\orig$-40e having better\ncoverage than $\\fuzz$-40e.\n\n\\item $\\fuzz$-40e is incomparable with {\\tt baseline123}: it has 3,393 more instructions but also 6,514 missing instructions.\n\n\\end{itemize}\n{\\bf Main Takeaway:} Our coverage winner $\\orig$-40e is almost a\nsuperset of all other coverage sets.\n\n\\subsection{Combining Learning and Fuzzing}\n\nIn this section, we consider several ways to combine learning with\nfuzzing, and evaluate their effectiveness.\n\nWe consider a widely-used simple blackbox random fuzzing algorithm,\ndenoted {\\tt Random}, which randomly picks a position in a file and\nthen replaces the byte value by a random value between 0 and 255. The\nalgorithm uses a {\\em fuzz-factor} of 100: the length of the file\ndivided by 100 is the average number of bytes that are fuzzed in that\nfile.\n\nWe use {\\tt Random} to generate 10 variants of every PDF object\ngenerated by 40-epochs $\\orig$-40e, $\\fuzz$-40e, and {\\tt\nbaseline}.
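The Random algorithm just described can be sketched in a few lines. In this sketch each byte is mutated independently with probability 1/fuzz-factor, which matches the stated average of len/100 fuzzed bytes per file; it is an illustration, not the exact implementation used in the experiments.

```python
import random

def random_fuzz(data: bytes, fuzz_factor: int = 100, seed: int = 0) -> bytes:
    """Blackbox random fuzzing: replace byte values at random positions
    with uniformly random values in 0..255. Mutating each byte with
    probability 1/fuzz_factor yields len(data)/fuzz_factor fuzzed
    bytes on average, as described above."""
    rng = random.Random(seed)
    out = bytearray(data)
    for i in range(len(out)):
        if rng.random() < 1.0 / fuzz_factor:
            out[i] = rng.randrange(256)  # may occasionally equal the original byte
    return bytes(out)
```

Seeding the generator makes each variant reproducible, which is convenient when a fuzzed file later needs to be regenerated for bug triage.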
The resulting fuzzed objects are re-combined with our 3\nhost files, to obtain three sets of 30,000 new PDF files, denoted by\n$\\origrandom$, $\\fuzzrandom$ and {\\tt baseline+Random}, respectively.\n\nFor comparison purposes, we also include the results of running\n$\\orig$-40e to generate 10,000 objects, denoted $\\orig$-10K.\n\nFinally, we consider our new algorithm $\\morefuzz$ described in\nSection~\\ref{sec:learning}, which decides where to fuzz values based on the\nlearnt distribution. We applied this algorithm with the learnt\ndistribution of the 40-epochs RNN model, $t_\\t{fuzz} = 0.9$,\nand a threshold $p_t = 0.9$.\n\n\\begin{figure}[t]\n\\centering\n\\begin{tabular}{c|c|c}\nAlgorithm & Coverage & Pass Rate \\\\\n\\hline\n$\\fuzzrandom$ & 563,930 & 36.97\\%\\\\\n{\\tt baseline+Random} & 564,195 & 44.05\\%\\\\\n$\\orig$-10K & 565,590 & 78.92\\% \\\\\n$\\origrandom$ & 566,964 & 41.81\\%\\\\\n$\\morefuzz$ & 567,634 & 68.24\\% \\\\\n\\end{tabular}\n\\caption{Results of fuzzing experiments with 30,000 PDF files each.}\n\\label{fig:fuzzing-results}\n\\end{figure}\n\nFigure~\\ref{fig:fuzzing-results} reports the overall coverage and the\npass-rate for each set. Each set of 30,000 PDF files takes about 45\nhours to be processed. The rows are sorted by increasing coverage.\nWe observe the following:\n\\begin{itemize}\n\\topsep0pt\n\\itemsep0pt\n\\item After applying {\\tt Random} on objects generated with $\\orig$, $\\fuzz$ and {\\tt baseline}, coverage goes up while the pass rate goes down: it is consistently below $50\\%$.\n\n\\item After analyzing the overlap among coverage sets (data not shown here), all fuzzed sets are almost supersets of their original non-fuzzed sets (as expected).\n\n\\item Coverage for $\\orig$-10K also increases by 6,173 instructions compared to $\\orig$, while the pass rate remains around $80\\%$ (as expected).\n\n\\item Perhaps surprisingly, the best overall coverage is obtained with $\\morefuzz$. 
Its pass rate is $68.24\\%$.\n\n\\item The difference in absolute coverage between $\\morefuzz$ and the next best $\\origrandom$ is only 670 instructions. Moreover, after analyzing the coverage set overlap, $\\morefuzz$ covers 2,622 more instructions than $\\origrandom$, but also misses 1,952 instructions covered by $\\origrandom$. Therefore, none of these two top-coverage winners fully ``simulate'' the effects of the other.\n\\end{itemize}\n{\\bf Main Takeaway:} All the learning-based algorithms considered here\nare competitive compared to {\\tt baseline+Random}, and three of those\nbeat that baseline coverage.\n\n\n\\subsection{Main Takeaway: Tension between Coverage and Pass Rate}\n\nThe main takeaway from all our experiments is the {\\em tension we\nobserve between the coverage and the pass rate}.\n\nThis tension is visible in Figure~\\ref{fig:fuzzing-results}. But it is\nalso visible in earlier results: if we correlate the coverage results\nof Figure~\\ref{fig:epochs-coverage} with the pass-rate results of\nFigure~\\ref{fig:passRate}, we can clearly see that $\\fuzz$ has a\nbetter pass rate than $\\orig$, but $\\orig$ has a better overall\ncoverage than $\\fuzz$ (see {\\tt host123} in the bottom right of\nFigure~\\ref{fig:epochs-coverage}).\n\nIntuitively, this tension can be explained as follows. A pure\nlearning algorithm with a nearly-perfect pass-rate (like $\\fuzz$)\ngenerates almost only well-formed objects and exercises little\nerror-handling code. 
In contrast, a {\\em noisier} learning algorithm\n(like $\\orig$) with a lower pass-rate can not only generate many\nwell-formed objects, but it also generates some ill-formed ones which\nexercise error-handling code.\n\nApplying a random fuzzing algorithm (like {\\tt Random}) to\npreviously-generated (nearly) well-formed objects has an even more\ndramatic effect on lowering the pass rate (see\nFigure~\\ref{fig:fuzzing-results}) while increasing coverage, again\nprobably due to increased coverage of error-handling code.\n\nThe new $\\morefuzz$ algorithm seems to hit a sweet spot between both\npass rate and coverage. In our experiments, the sweet spot for the\npass rate seems to be around $65\\%-70\\%$: {\\em this pass rate is high\nenough to generate diverse well-formed objects that cover a lot of\ncode in the PDF parser, yet low enough to also exercise error-handling\ncode in many parts of that parser.}\n\nNote that instruction coverage is ultimately a better indicator of\nfuzzing effectiveness than the pass rate, which is instead a\nlearning-quality metric.\n\n\n\n\\subsection{Bugs}\n\nIn addition to coverage and pass rate, a third metric of interest is\nof course the number of bugs found. During the experiments previously\nreported in this section, no bugs were found. Note that the Edge PDF\nparser had been thoroughly fuzzed for months with other fuzzers\n(including SAGE~\\cite{SAGE}) before we performed\nthis study, and that all the bugs found during this prior fuzzing had\nbeen fixed in the version of the PDF parser we used for this study.\n\nHowever, during a longer experiment with $\\origrandom$, 100,000\nobjects and 300,000 PDF files (which took nearly 5 days), a\nstack-overflow bug was found in the Edge PDF parser: a regular-size\nPDF file is generated (its size is 33Kb) but it triggers an unexpected\nrecursion in the parser, which ultimately results in a stack overflow.\nThis bug was later confirmed and fixed by the Microsoft Edge\ndevelopment team.
We plan to conduct other longer experiments in the\nnear future.\n\n\n\\section{Introduction}\n\n{\\em Fuzzing} is the process of finding security vulnerabilities in\ninput-parsing code by repeatedly testing the parser with modified, or\n{\\em fuzzed}, inputs. There are three main types of fuzzing techniques\nin use today: (1) {\\em blackbox random} fuzzing~\\cite{fuzzing-book},\n(2) {\\em whitebox constraint-based} fuzzing~\\cite{SAGE}, and (3) {\\em\ngrammar-based} fuzzing~\\cite{purdom1972sgt,fuzzing-book}, which can be\nviewed as a variant of model-based\ntesting~\\cite{utting2006tmb}. Blackbox and whitebox fuzzing are fully\nautomatic, and have historically proved to be very effective at\nfinding security vulnerabilities in binary-format file parsers. In\ncontrast, grammar-based fuzzing is not fully automatic: it requires an\ninput grammar specifying the input format of the application under\ntest. This grammar is typically written by hand, and this process is\nlaborious, time consuming, and error-prone. Nevertheless,\ngrammar-based fuzzing is the most effective fuzzing technique known\ntoday for fuzzing applications with complex structured input formats,\nlike web-browsers which must take as (untrusted) inputs web-pages\nincluding complex HTML documents and JavaScript code.\n\nIn this paper, we consider the problem of {\\em automatically}\ngenerating input grammars for grammar-based fuzzing by using\nmachine-learning techniques and sample inputs. Previous attempts have\nused variants of traditional automata and context-free-grammar\nlearning algorithms (see Section~\\ref{sec:related-work}). In contrast\nwith prior work, this paper presents the {\\em first attempt} at using\n{\\em neural-network-based statistical learning techniques} for this\nproblem. 
Specifically, we use {\\em recurrent neural networks} for\nlearning a statistical input model that is also {\\em generative}: it\ncan be used to generate new inputs based on the probability\ndistribution of the learnt model (see Section~\\ref{sec:learning} for\nan introduction to these learning techniques). We use unsupervised\nlearning, and our approach is fully automatic and does not require any\nformat-specific customization.\n\nWe present an in-depth case study for a very complex input format:\nPDF. This format is so complex (see Section~\\ref{pdf-struc}) that it\nis described in a 1,300-page (PDF) document~\\cite{pdf-manual}. We\nconsider a large, complex and security-critical parser for this\nformat: the PDF parser embedded in Microsoft's new Edge\nbrowser. Through a series of detailed experiments (see\nSection~\\ref{sec:evaluation}), we discuss the {\\em learn\\&fuzz\nchallenge}: how to learn and then generate diverse well-formed inputs\nin order to maximize parser-code coverage, while still injecting\nenough ill-formed input parts in order to exercise unexpected code\npaths and error-handling code.\n\nWe also present a novel {\\em learn\\&fuzz} algorithm (in\nSection~\\ref{sec:learning}) which uses a learnt input probability\ndistribution to intelligently guide {\\em where} to fuzz (statistically\nwell-formed) inputs. We show that this new algorithm can outperform\nthe other learning-based and random fuzzing algorithms considered in\nthis work.\n\nThe paper is organized as follows. Section~\\ref{pdf-struc} presents an\noverview of the PDF format, and the specific scope of this\nwork. Section~\\ref{sec:learning} gives a brief introduction to\nneural-network-based learning, and discusses how to use and adapt such\ntechniques for the learn\\&fuzz problem. Section~\\ref{sec:evaluation}\npresents results of several learning and fuzzing experiments with the\nEdge PDF parser. Related work is discussed in\nSection~\\ref{sec:related-work}.
We conclude and discuss directions for\nfuture work in Section~\\ref{sec:conclusion}.\n\n\n\n\n\n\n\\section{The Structure of PDF Documents}\\label{pdf-struc}\n\n\\begin{figure}[t]\n\\centering\n\\setlength{\\tabcolsep}{12pt}\n\\begin{tabular}[t]{ccc}\n\\begin{lstlisting}\n2 0 obj\n<<\n\/Type \/Pages\n\/Kids [ 3 0 R ]\n\/Count 1\n>>\nendobj\n\\end{lstlisting} &\n\\begin{lstlisting}\nxref\n0 6\n0000000000 65535 f\n0000000010 00000 n\n0000000059 00000 n\n0000000118 00000 n\n0000000296 00000 n\n0000000377 00000 n\n0000000395 00000 n\n\\end{lstlisting}&\n\\begin{lstlisting}\ntrailer\n<<\n\/Size 18\n\/Info 17 0 R\n\/Root 1 0 R\n>>\nstartxref\n3661\n\\end{lstlisting}\\\\\n(a) & (b) & (c)\n\\end{tabular}\n\\caption{Excerpts of a well-formed PDF document. (a) is a sample object, (b) is a cross-reference table with one subsection, and (c) is a trailer.}\\label{pdf-samples}\n\\end{figure}\n\nThe full specification of the PDF format is over $1,300$ pages long~\\cite{pdf-manual}. Most of this specification -- roughly 70\\% -- deals with the description of {\\em data objects} and their relationships between parts of a PDF document.\n\\comment{\nHowever, they are made up of common components, which take up a much smaller portion of the specification, and are used to store data and internal references. These components are rigidly structured, and there is great repetition in their use.\n\nWhile the components take up a (relatively) small part of the specification, there are still many of them and they are tedious to encode by hand. Furthermore, their use in the specific data objects is varied and complex. 
This combination of data and text-encoded data structures which have some similarity but varied uses is what makes PDF data objects such an attractive target for learning, and at the same time such a challenge to learn.\n}\n\n\nPDF files are encoded in a textual format, which may contain binary information streams (e.g., images, encrypted data).\nA PDF document is a sequence of at least one PDF body.\nA PDF body is composed of three sections: objects, cross-reference table, and trailer.\n\n\\paragraph{Objects.}\nThe data and metadata in a PDF document are organized in basic units called objects. Objects are all similarly formatted, as seen in \\Cref{pdf-samples}(a), and have a joint outer structure.\nThe first line of an object gives its identifier (used for indirect references), its generation number (which is incremented if the object is overridden with a newer version), and ``\\scode{obj}'', which indicates the start of an object. The ``\\scode{endobj}'' indicator closes the object.\n\nThe object in \\Cref{pdf-samples}(a) contains a dictionary structure, which is delimited by ``\\scode{<<}'' and ``\\scode{>>}'', and contains keys that begin with \\scode{\/} followed by their values. \\scode{[ 3 0 R ]} is a cross-object reference to an object in the same document with the identifier $3$ and the generation number $0$.
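The outer object structure just described (`<id> <generation> obj ... endobj`) can be recognized with a simple pattern. The sketch below is purely illustrative: real PDF lexing must also handle streams, comments, and nested strings, which this regular expression ignores.

```python
import re

# Simplified recognizer for the outer object structure:
# "<id> <generation> obj ... endobj". Illustrative only; real PDF
# lexing also handles streams, comments, and string literals.
OBJ_RE = re.compile(rb"(\d+)\s+(\d+)\s+obj\b(.*?)\bendobj", re.S)

def outer_objects(body: bytes):
    """Yield (object id, generation number, raw content) triples."""
    for m in OBJ_RE.finditer(body):
        yield int(m.group(1)), int(m.group(2)), m.group(3).strip()
```

Applied to the object of \Cref{pdf-samples}(a), this yields identifier 2, generation 0, and the dictionary between `<<` and `>>` as the raw content.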
Since a document can be very large, a referenced object is accessed using random-access via a cross-reference table.\n\n\\begin{figure}[t]\n\\centering\n\\newcolumntype{C}[1]{>{\\centering\\let\\newline\\\\\\arraybackslash\\hspace{0pt}}m{#1}}\n\\begin{tabular}{cC{0.2in}cC{0.2in}c}\n\\begin{lstlisting}\n125 0 obj\n[680.6 680.6]\nendobj\n\\end{lstlisting} & &\n\\begin{lstlisting}\n88 0 obj\n(Related Work)\nendobj\n\\end{lstlisting} & &\n\\begin{lstlisting}\n75 0 obj\n4171\nendobj\n\\end{lstlisting}\\\\\n(a) & & (b) & & (c)\n\\end{tabular}\n\\begin{tabular}{c}\n\\\\\n\\begin{lstlisting}\n47 1 obj\n[false 170 85.5 (Hello) \/My#20Name]\nendobj\n\\end{lstlisting}\\\\\n(d)\n\\end{tabular}\n\\caption{PDF data objects of different types.}\\label{object-samples}\n\\end{figure}\n\nOther examples of objects are shown in \\Cref{object-samples}. The object in \\Cref{object-samples}(a) has the content \\scode{[680.6 680.6]}, which is an \\emph{array object}. Its purpose is to hold coordinates referenced by another object. \\Cref{object-samples}(b) is a string literal that holds the bookmark text for a PDF document section. \\Cref{object-samples}(c) is a numeric object. \\Cref{object-samples}(d) is an object containing a multi-type array. These are all examples of object types that are both used on their own and as the basic blocks from which other objects are composed (e.g., the dictionary object in \\Cref{pdf-samples}(a) contains an array). The rules for defining and composing objects comprise the majority of the PDF-format specification.\n\n\\paragraph{Cross reference table.}\nThe cross reference tables of a PDF body contain the address in bytes of referenced objects within the document. \\Cref{pdf-samples}(b) shows a cross-reference table with a subsection that contains the addresses for five objects with identifiers $1$-$5$ and the placeholder for identifier $0$ which never refers to an object.
The row of the table determines which object an entry describes (this subsection covers $6$ objects starting with identifier $0$). An entry marked \\scode{n} describes an object in use, and its first column gives the address of that object in the file; an entry marked \\scode{f} describes an object not in use, and its first column gives the identifier of the previous free object -- or, in the case of object $0$, the value $65535$, the last available object ID, closing the circle.\n\n\\paragraph{Trailer.}\nThe trailer of a PDF body contains a dictionary (again contained within ``\\scode{<<}'' and ``\\scode{>>}'') of information about the body, and \\scode{startxref} which is the address of the cross-reference table. This allows the body to be parsed from the end: reading \\scode{startxref}, then skipping back to the cross-reference table and parsing it, and only parsing objects as they are needed.\n\n\\paragraph{Updating a document.}\nPDF documents can be {\\em updated incrementally}. This means that if a PDF writer wishes to update the data in object $12$, it will start a new PDF body, write in it a new object with identifier $12$ and a generation number greater than the one that appeared before, then write a new cross-reference table pointing to the new object, and append this body to the previous document. Similarly, an object is deleted by creating a new cross-reference table and marking it as free. We use this method in order to append new objects to a PDF file, as discussed later in Section~\\ref{sec:evaluation}.\n\n\\paragraph{Scope of this work.}\nIn this paper, we investigate how to leverage and adapt\nneural-network-based learning techniques to learn a grammar for {\\em\nnon-binary PDF data objects}. Such data objects are formatted text,\nsuch as shown in \\Cref{pdf-samples}(a) and \\Cref{object-samples}.\nRules for defining and composing such data objects make up the bulk of\nthe 1,300-page PDF-format specification.
These rules are numerous and\ntedious, but repetitive and structured, and therefore well-suited for\nlearning with neural networks (as we will show later). In contrast,\nautomatically learning the structure (rules) for defining\ncross-reference tables and trailers, which involve constraints on\nlists, addresses, pointers and counters, looks too complex and less\npromising for learning with neural networks. We also do not consider\nbinary data objects, which are encoded in binary (e.g., image)\nsub-formats and for which fully-automatic blackbox and whitebox\nfuzzing are already effective.\n\n\n\n\\section{Related Work}\\label{sec:related-work}\n\n\\newcommand{\\footnoteurl}[1]{\\footnote{\\scriptsize\\url{#1}}}\n\n\\paragraph{Grammar-based fuzzing.}\nMost popular blackbox random fuzzers today support some form of\ngrammar representation, e.g.,\nPeach\\footnoteurl{http:\/\/www.peachfuzzer.com\/} and\nSPIKE\\footnoteurl{http:\/\/resources.infosecinstitute.com\/fuzzer-automation-with-spike\/},\namong many others~\\cite{fuzzing-book}. Work on grammar-based test\ninput generation started in\nthe~1970's~\\cite{hanford1970agt,purdom1972sgt} and is related to\nmodel-based testing~\\cite{utting2006tmb}. Test generation from a\ngrammar is usually either\nrandom~\\cite{maurer1990gtd,sirer1999upg,coppit2005yeu} or\nexhaustive~\\cite{lammel2006ccc}. Imperative\ngeneration~\\cite{quickcheck,BrettDGM07} is a related approach in which\na custom-made program generates the inputs (in effect, the program\nencodes the grammar). Grammar-based fuzzing can also be combined with\nwhitebox fuzzing~\\cite{MX07,GKL08}.\n\n\\paragraph{Learning grammars for grammar-based fuzzing.} Bastani et al.~\\cite{bastani} present an algorithm to synthesize a context-free grammar given a set of input examples, which is then used to generate new inputs for fuzzing.
This algorithm uses a set of generalization steps by introducing repetition and alternation constructs for regular expressions, and merging non-terminals for context-free grammars, which in turn results in a monotonic generalization of the input language. This technique is able to capture hierarchical properties of input formats, but is not well suited for formats such as PDF objects, which are relatively flat but include a large diverse set of content types and key-value pairs. Instead, our approach uses sequence-to-sequence neural-network models to learn {\\em statistical} generative models of such flat formats. Moreover, learning a statistical model also allows for guiding additional fuzzing of the generated inputs.\n\nAUTOGRAM~\\cite{autogram} also learns (non-probabilistic) context-free grammars given a set of inputs, but by dynamically observing how inputs are processed in a program. It instruments the program under test with dynamic taints that tag memory with the input fragments from which it was derived. The parts of the inputs that are processed by the program become syntactic entities in the grammar. Tupni~\\cite{tupni} is another system that reverse engineers an input format from examples, using a taint tracking mechanism that associates data structures with addresses in the application address space. Unlike our approach, which treats the program under test as a black-box, AUTOGRAM and Tupni require access to the program for adding instrumentation, are more complex, and their applicability and precision for complex formats such as PDF objects are unclear.\n\n\\paragraph{Neural-networks-based program analysis.} There has been a lot of recent interest in using neural networks for program analysis and synthesis. Several neural architectures have been proposed to learn simple algorithms such as array sorting and copying~\\cite{neuralram,neuralpi}.
Neural FlashFill~\\cite{neuralflashfill} uses novel neural architectures for encoding input-output examples and generating regular-expression-based programs in a domain specific language. Several seq2seq based models have been developed for learning to repair syntax errors in programs~\\cite{synfix,deepfix,evanmooc}. These techniques learn a seq2seq model over a set of correct programs, and then use the learnt model to predict syntax corrections for buggy programs. Other related work optimizes assembly programs using neural representations~\\cite{neuraloptimize}. In this paper, we present a novel application of seq2seq models to learn grammars from sample inputs for fuzzing purposes.","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThere exists a growing interest in the construction of $Spin(7)$\nholonomy metrics due to their application in supergravity\ncompactification preserving a certain amount of supersymmetry\n\\cite{Spin10}-\\cite{Spin20}. The present work is concerned with this\ntask, and the analysis performed here yields the\nfollowing proposition.\n\\\\\n\n{\\bf Proposition }{ \\it Let us consider a compact quaternion\nK\\\"ahler space $M$ in $d=4$ with metric $g_q$ and with cosmological\nconstant $\\Lambda$ normalized to $3$.
For any such metric there\nalways exists a basis $e^a$ such that $g_q=\\delta_{ab}e^a\\otimes e^b$\nfor which the $Sp(1)$ part of the spin connection $\\omega^{a}_{-}$\nand the negatively oriented K\\\"ahler triplet $\\overline{J}_i$ defined\nby\n$$\n\\omega^{a}_{-}=\\omega^a_{0}- \\epsilon_{abc}\\omega^b_c,\\qquad\n\\overline{J}_1=e^1\\wedge e^2-e^3\\wedge e^4,\n$$\n$$\n\\overline{J}_2=e^1\\wedge e^3-e^4\\wedge e^2,\\qquad\n\\overline{J}_3=e^1\\wedge e^4-e^2\\wedge e^3,\n$$\nsatisfy the relations \\begin{equation}\\label{relat}\nd\\omega_{-}^i+\\epsilon_{ijk}\\omega_{-}^j \\wedge \\omega_{-}^k=\n\\overline{J}_i, \\qquad\nd\\overline{J}^i=\\epsilon_{ijk}\\overline{J}^{j}\\wedge\\omega_{-}^{k}.\n\\end{equation} Let $\\tau$ and $u_i$ be four new coordinates, $u=\\sqrt{u^iu^i}$,\n$\\alpha_i=du^i+\\epsilon^{ijk}\\omega_{-}^j u^k$ and $H$ a\n$\\tau$-independent one-form. Then the 8-dimensional metric\n\\begin{equation}\\label{eyo}\ng_8=\\frac{(d\\tau+H)^2}{e^{\\frac{3}{2}h}}+e^{2f+\\frac{1}{2}h}\n\\alpha_i\\alpha_i+e^{2g+\\frac{1}{2}h} g_q \\end{equation} together with the four-form\n$$\n\\Phi_4=(d\\tau+H)\\wedge\\bigg( e^{3f}\\alpha_1\\wedge \\alpha_2\\wedge\n\\alpha_3+e^{f+2g}\\alpha_i\\wedge \\overline{J}_i\n\\bigg)+e^{2(f+g)+h}\\frac{\\epsilon_{ijk}}{2}\\alpha_i\\wedge\n\\alpha_j\\wedge \\overline{J}_k+e^{4g+h}e_1\\wedge e_2\\wedge e_3\\wedge\ne_4,\n$$\nconstitute a $Spin(7)$ structure preserved by the Killing vector\n$\\partial_{\\tau}$.
Moreover, if $f$, $g$ and $h$ are functions of $u$\nrelated by\n$$\nu e^{3f}=(e^{f+2g})', \\qquad\n\\lambda(e^{3f}-\\frac{e^{f+2g}}{u^2})=(e^{2(f+g)+h})',\n$$\n$$4 ue^{2(f+g)+h}-2\\lambda e^{f+2g}=(e^{4g+h})', $$ where $\\lambda$ is a constant, and the 1-form\n$H$ satisfies \\begin{equation}\\label{mur}\ndH=-\\widetilde{u}_i\\overline{J}_i+\\frac{\\epsilon_{ijk}}{2}\\widetilde{u}_i\\theta_j\\wedge\n\\theta_k, \\end{equation} where\n$\\theta_i=d(\\widetilde{u}^i)+\\epsilon^{ijk}\\omega_{-}^j\n\\widetilde{u}^k$ and $\\widetilde{u}^i=u^i\/u$, then $\\Phi_4$ will be\nclosed and, therefore, the holonomy of the metric (\\ref{eyo}) will be\nincluded in $Spin(7)$.}\n\\\\\n\n We will show below that the integrability condition for\n(\\ref{mur}), namely\n$$\nd(\\widetilde{u}_i\\overline{J}_i-\\frac{\\epsilon_{ijk}}{2}\\widetilde{u}_i\\theta_j\\wedge\n\\theta_k)=0,\n$$\nis always satisfied due to the fact that the twistor space $Z$ of\nany compact quaternion K\\\"ahler space $M$ carries a\nK\\\"ahler-Einstein metric of positive scalar curvature \\cite{Salomon},\nand the right hand side of (\\ref{mur}) is, up to a sign, the K\\\"ahler form of such a\nmetric. Also, the closure of $\\Phi_4$ follows directly from the\nformulas (\\ref{use}) given below. By construction, the vector field\n$\\partial_{\\tau}$ is Killing, and if the quaternion K\\\"ahler base\npossesses an isometry group $G$ which preserves the forms\n$\\omega_{-}^i$, then $G$ will be an isometry of $g_8$. The\nconstruction of $Spin(7)$ holonomy metrics that follows from the\nproposition can also be applied to quaternion K\\\"ahler orbifolds.\n\n\n\n\n\\section{Closed $G_2$ and $Spin(7)$ structures and K\\\"ahler-Einstein metrics}\n\n\\textit{Quaternion K\\\"ahler spaces in brief}\n\\\\\n\n A key ingredient in the construction of the metrics (\\ref{eyo}) are quaternion\nK\\\"ahler manifolds, so it is convenient to give a brief\ndescription of their properties.
By definition, a quaternion\nK\\\"ahler space $M$ is a euclidean $4n$-dimensional space with\nholonomy group $\\Gamma$ included in the Lie group $Sp(n)\\times\nSp(1)\\subset SO(4n)$ \\cite{Berger}-\\cite{Ishihara}. This affirmation\nis non-trivial if $4n>4$, but in four dimensions we have the well known\nisomorphism $Sp(1)\\times Sp(1)\\simeq SU(2)_L\\times SU(2)_R \\simeq\nSO(4)$, and so to state that $\\Gamma\\subseteq Sp(1)\\times Sp(1)$ is\nthe same as to state that $\\Gamma\\subseteq SO(4)$. The last\naffirmation is trivially satisfied for any oriented space and imposes\nalmost no restriction on the space; therefore the definition of\nquaternion K\\\"ahler spaces should be modified in $d=4$. Their main\nproperties are the following.\n\\\\\n\n- There exist three automorphisms $J^i$ ($i=1$, $2$, $3$) of the\ntangent space $TM_x$ at a given point $x$, with multiplication rule\n$J^{i} \\cdot J^{j} = -\\delta_{ij} + \\epsilon_{ijk}J^{k}$, and for\nwhich the metric $g_q$ is quaternion hermitian, that is,\n\\begin{equation}\\label{hermoso} g_q(X,Y)=g_q(J^i X, J^i Y), \\end{equation} where $X$ and $Y$ are\narbitrary vector fields.\n\\\\\n\n- The automorphisms $J^i$ satisfy the fundamental relation\n\\begin{equation}\\label{rela2} \\nabla_{X}J^{i}=\\epsilon_{ijk}J^{j}\\omega_{-}^{k}, \\end{equation}\nwith $\\nabla_{X}$ the Levi-Civita connection of $M$ and\n$\\omega_{-}^{i}$ its $Sp(1)$ part.
As a consequence of hermiticity\nof $g_q$, the tensor $\\overline{J}^{i}_{ab}=(J^{i})_{a}^{c}g_{cb}$ is\nantisymmetric, and the associated 2-form\n$$\n\\overline{J}^i=\\overline{J}^{i}_{ab} e^a \\wedge e^b\n$$\nsatisfies \\begin{equation}\\label{basta}\nd\\overline{J}^i=\\epsilon_{ijk}\\overline{J}^{j}\\wedge\\omega_{-}^{k},\n\\end{equation} where $d$ is the usual exterior derivative.\n\\\\\n\n- Corresponding to the $Sp(1)$ connection we can define the 2-form\n$$\nF^i=d\\omega_{-}^i+\\epsilon_{ijk}\\omega_{-}^j \\wedge \\omega_{-}^k.\n$$\nThen for a quaternion K\\\"ahler manifold \\begin{equation}\\label{lamas}\nR^i_{-}=2n\\kappa \\overline{J}^i, \\end{equation} \\begin{equation}\\label{rela} F^i=\\kappa\n\\overline{J}^i, \\end{equation} where $\\kappa$ is a constant proportional to\nthe scalar curvature. The tensor $R^i_{-}$ is the $Sp(1)$ part of\nthe curvature. The last two conditions imply that $g_q$ is Einstein\nwith non-zero cosmological constant, i.e., $R_{ij}=3\\kappa\n(g_{q})_{ij}$, where $R_{ij}$ is the Ricci tensor constructed from\n$g_q$.
Notice that (\\ref{rela}) is equivalent to (\\ref{relat}) if we\nchoose the normalization $\\kappa=1$.\n\\\\\n\n- For any quaternion K\\\"ahler space the $(0,4)$ and $(2,2)$ tensors\n$$\n\\Theta=\\overline{J}^1 \\wedge \\overline{J}^1 + \\overline{J}^2 \\wedge\n\\overline{J}^2 + \\overline{J}^3 \\wedge \\overline{J}^3,\n$$\n$$\n\\Xi= J^1 \\otimes J^1 + J^2 \\otimes J^2 + J^3 \\otimes J^3\n$$\nare globally defined and covariantly constant with respect to the\nusual Levi-Civita connection.\n\\\\\n\n- Any quaternion K\\\"ahler space is orientable.\n\\\\\n\n- In four dimensions the K\\\"ahler triplet $\\overline{J}_i$ and the\none-forms $\\omega^{a}_{-}$ are\n$$\n\\omega^{a}_{-}=\\omega^a_{0}- \\epsilon_{abc}\\omega^b_c,\\qquad\n\\overline{J}_1=e^1\\wedge e^2-e^3\\wedge e^4,\n$$\n$$\n\\overline{J}_2=e^1\\wedge e^3-e^4\\wedge e^2,\\qquad\n\\overline{J}_3=e^1\\wedge e^4-e^2\\wedge e^3.\n$$\nIn this dimension quaternion K\\\"ahler spaces are defined by the\nconditions (\\ref{rela}) and (\\ref{lamas}). This definition is\nequivalent to stating that quaternion K\\\"ahler spaces are Einstein\nwith self-dual Weyl tensor.\n\\\\\n\n\\textit{The twistor space of a quaternion K\\\"ahler space}\n\\\\\n\n Another very important property of compact quaternion K\\\"ahler spaces\nis that their twistor space is \\emph{K\\\"ahler-Einstein}. In order to\ndefine the twistor space let us note that any linear combination of\nthe form $J=\\widetilde{u}_i J_i$ is an almost complex structure on\n$M$, and the metric $g_q$ is Hermitian with respect to it. Here we\nhave defined the scalar fields $\\widetilde{u}^i=u^i\/u$ and it is\nevident that they are constrained by the condition $\\widetilde{u}^i\n\\widetilde{u}^i=1$. This means that the bundle of almost complex\nstructures over $M$ is parameterized by points on the two-sphere\n$S^2$. This bundle is known as the twistor space $Z$ of $M$. 
The\nspace $Z$ is endowed with the metric \\begin{equation}\\label{kahlo} g_6=\\theta_i\n\\theta_i + g_q,\\end{equation} where we have defined\n$$\n\\theta_i=d(\\widetilde{u}^i)+\\epsilon^{ijk}\\omega_{-}^j\n\\widetilde{u}^k.\n$$\nThe metric (\\ref{kahlo}) is six dimensional due to the constraint\n$\\widetilde{u}^i \\widetilde{u}^i=1$. Corresponding to this metric we\nhave the K\\\"ahler two form \\begin{equation}\\label{two} \\overline{J}=\n\\widetilde{u}_i\\overline{J}_i-\\frac{\\epsilon_{ijk}}{2}\\widetilde{u}_i\\theta_j\\wedge\n\\theta_k. \\end{equation} It has been proved in \\cite{Salomon} that $J$ is\nintegrable and $\\overline{J}$ is closed (see also \\cite{Lolo}),\ntherefore $J$ is truly a complex structure and $g_6$ is\n\\emph{K\\\"ahler}. The calculation of the Ricci tensor of $g_6$ shows\nthat it is also Einstein, therefore the space $Z$ is\n\\emph{K\\\"ahler-Einstein}. We are using the normalization $\\kappa=1$\nhere, for other normalization certain coefficients must be included\nin (\\ref{two}). Let us introduce the covariant derivative\n\\begin{equation}\\label{alf} \\alpha_i=du^i+\\epsilon^{ijk}\\omega_{-}^j u^k, \\end{equation} which\nis related to $\\theta_i$ by\n$$\n\\theta^i=\\frac{\\alpha_i}{u}-\\frac{u_i du}{u^2}.\n$$\nWith the help of this relation and the definition (\\ref{two}) it\nfollows that \\begin{equation}\\label{two2} \\overline{J}=\\frac{u_i}{u}\\overline{J}_i\n-\\frac{\\epsilon_{ijk}}{2}u_i\\frac{\\alpha_j\\wedge\n\\alpha_k}{u^3}-\\epsilon_{ijk}u_iu_j\\frac{\\alpha_k\\wedge du}{u^4}\\end{equation}\n$$ =\\frac{u_i}{u}\\overline{J}_i\n-\\frac{\\epsilon_{ijk}}{2}u_i\\frac{\\alpha_j\\wedge \\alpha_k}{u^3}.\n$$ The last expression will be needed in the following, although it is completely equivalent\nto (\\ref{two}). 
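For completeness, the relation between $\theta_i$ and $\alpha_i$ quoted above follows by differentiating $\widetilde{u}^i=u^i/u$ directly:\n$$\n\theta_i=d\Big(\frac{u^i}{u}\Big)+\epsilon^{ijk}\omega_{-}^j\,\frac{u^k}{u}=\frac{du^i+\epsilon^{ijk}\omega_{-}^j u^k}{u}-\frac{u_i\,du}{u^2}=\frac{\alpha_i}{u}-\frac{u_i\,du}{u^2}.\n$$ 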
Also, the following formulae\n$$\n\\overline{J}_i\\wedge \\overline{J}_j=-2\\delta_{ij}e_1 \\wedge\ne_2\\wedge e_3\\wedge e_4,\n$$\n$$\nd\\alpha_i=\\epsilon_{ijk}(u_j\\overline{J}_k+\\alpha_k\\wedge\n\\omega_j^-),\n$$\n$$\nd(u^iu^i)=d(u^2)=2u du=2u^i\\alpha_i,\n$$\n\\begin{equation}\\label{use} d(\\epsilon_{ijk}\\alpha_i\\wedge \\alpha_j\\wedge\\alpha_k)=-\nu du\\wedge \\alpha_i\\wedge \\overline{J}_i, \\end{equation}\n$$\nd(\\alpha_i\\wedge \\overline{J}_i)=0,\n$$\n$$\nd(e^{3f})\\wedge\\alpha_1\\wedge\n\\alpha_2\\wedge\\alpha_3=(e^{3f})'du\\wedge\\alpha_1\\wedge\n\\alpha_2\\wedge\\alpha_3=0,\n$$\nrelating $\\overline{J}_i$ and $\\alpha_i$ will be useful for our\npurposes.\\footnote{A more complete account of formulae can be\nfound, for instance, in \\cite{bernie}.} For instance, the closure of\n(\\ref{two2}) is a direct consequence of the second formula in (\\ref{use})\ntogether with (\\ref{basta}).\n\\\\\n\n\\textit{A proof of the proposition}\n\\\\\n\n Let us go back to our task of constructing the metric (\\ref{eyo}).\nOur starting point is an eight-dimensional metric ansatz of the form\n\\begin{equation}\\label{anzatz}\ng_8=\\frac{(dt+H)^2}{e^{\\frac{3}{2}h}}+e^{\\frac{1}{2}h}g_7, \\end{equation} where\n$g_7$ is a metric on a 7-manifold $Y$ and $h$ is a function on $Y$.\nNeither the one-form $H$ nor the function $h$ depends on $t$;\ntherefore the vector field $\\partial_t$ is, by construction,\nKilling. Associated to the metric (\\ref{anzatz}) we can construct\nthe octonionic $4$-form\n \\begin{equation}\\label{ol}\n\\Phi_4=(dt+H)\\wedge \\Phi+e^{h}\\ast\\Phi, \\end{equation} where $\\Phi$ is a $G_2$-invariant\nthree-form corresponding to the metric $g_7$ and $\\ast\n\\Phi$ is its dual. The precise form for $\\Phi$ will be found below. If\nwe impose the condition $d\\Phi_4=0$ then the metric $g_8$ will have\n$Spin(7)$ holonomy.\\footnote{The converse of this statement is\nnot true; that is, the holonomy group of $g_8$ could be $Spin(7)$\nwhile (\\ref{ol}) is not closed. 
The form (\\ref{ol}) is preserved\nby the Killing vector, but there could exist cases for which this\nsimplifying condition does not hold and the holonomy is still\n$Spin(7)$. In such cases there will exist another closed 4-form\n$\\Phi_4$ which is not preserved by $\\partial_t$.} We will suppose\nthat the seven-dimensional metric $g_7$ is of the form \\begin{equation}\\label{anz}\ng_7=e^{2f} \\alpha_i\\alpha_i+e^{2g} g_q, \\end{equation} where $g_q$ is a quaternion\nK\\\"ahler metric in $d=4$ and $\\alpha_i$ is defined in (\\ref{alf}). The\nfunctions $f$, $g$ will depend only on the \"radius\"\n$u=\\sqrt{u^iu^i}$. The form (\\ref{anz}) for the 7-metric is well\nknown and is inspired by the Bryant-Salamon construction for $G_2$\nholonomy metrics \\cite{Bryant}. For a metric with $G_2$ holonomy we\nhave $d\\Phi=d\\ast \\Phi=0$, but we will not suppose that the holonomy\nof (\\ref{anz}) is $G_2$, as in the Bryant-Salamon case. Instead we\nwill consider 7-spaces for which the form $\\Phi$ is closed but not\nco-closed. These are known as \\emph{closed $G_2$ structures}\n\\cite{Chiozzi}-\\cite{Spin12}. In this case the $Spin(7)$ holonomy\ncondition $d\\Phi_4=0$ for (\\ref{ol}) will reduce to \\begin{equation}\\label{imita}\ndH\\wedge \\Phi=-d(e^h\\ast \\Phi). \\end{equation} We will find below a suitable\n$H$ and $g_7$ for which (\\ref{imita}) gets simplified even more. It\nseems reasonable for us to choose $H$ in (\\ref{ol}) such that\n\\begin{equation}\\label{h} dH=-\\lambda\\overline{J}. \\end{equation} The reason for this choice\nis that the integrability condition $d\\overline{J}=0$ will be\nautomatically satisfied because, as we have seen above, the two-form\n$\\overline{J}$ is the K\\\"ahler form of a K\\\"ahler-Einstein metric.\nHere $\\lambda$ is a parameter, and the minus sign was introduced for\nconvenience. 
Also, by selecting the basis\n$$\n\\widetilde{e}_{i}=e^f\\alpha_i,\\qquad i=1,2,3,\\qquad\n\\widetilde{e}_{\\alpha}= e^g e_{\\alpha},\\qquad \\alpha=1,2,3,4,\n$$\nfor (\\ref{anz}), we can construct the $G_2$-invariant three-form\n \\begin{equation}\\label{tre}\n\\Phi=c_{abc}\\widetilde{e}^a\\wedge \\widetilde{e}^b\\wedge\n\\widetilde{e}^c= e^{3f}\\alpha_1\\wedge \\alpha_2\\wedge\n\\alpha_3+e^{f+2g}\\alpha_i\\wedge \\overline{J}_i \\end{equation} and its dual\n\\begin{equation}\\label{cuat}\n\\ast\\Phi=e^{2(f+g)}\\frac{\\epsilon_{ijk}}{2}\\alpha_i\\wedge\n\\alpha_j\\wedge \\overline{J}_k+e^{4g}e_1\\wedge e_2\\wedge e_3\\wedge\ne_4. \\end{equation} Here $e^{\\alpha}$ is a basis for the quaternion K\\\"ahler\nmetric $g_q$. As we stated above, we will consider 7-spaces for\nwhich the form $\\Phi$ is closed but not co-closed. In other words, we\nwill have $d\\Phi=0$ but $d\\ast\\Phi\\neq 0$. We also suppose that\n$f$, $g$ and $h$ are functions of the radius $u$ only. By using\n(\\ref{use}) it follows that the closure condition $d\\Phi=0$ for\n(\\ref{tre}) leads to the equation \\begin{equation}\\label{closo} u\ne^{3f}=(e^{f+2g})', \\end{equation} where the prime denotes differentiation with\nrespect to $u$. This is one of the equations that we need.\n\n\n On the other hand, with the choice (\\ref{h}) for $H$ and using that $d\\Phi=0$, it\nfollows from (\\ref{ol}) that \\begin{equation}\\label{cla} d\\Phi_4=dH\\wedge\n\\Phi+d(e^{h}\\ast\\Phi)=-\\lambda\\overline{J}\\wedge\\Phi+d(e^{h}\\ast\\Phi)\n\\end{equation} and therefore the $Spin(7)$ holonomy condition $d\\Phi_4=0$ is\nequivalent to \\begin{equation}\\label{clad}\n\\lambda\\overline{J}\\wedge\\Phi=d(e^{h}\\ast\\Phi). 
\\end{equation} From (\\ref{tre})\nand (\\ref{two2}) we see that the left-hand side of (\\ref{clad}) is\n \\begin{equation}\\label{left}\n\\lambda\\overline{J}\\wedge\\Phi=\\lambda(\\frac{e^{3f}}{u}\n-\\frac{e^{f+2g}}{u^3})u^i\\overline{J}^i\\wedge\\alpha_1\\wedge\n\\alpha_2\\wedge \\alpha_3-2\\lambda \\frac{e^{f+2g}}{u}u^i\\alpha_i\\wedge\ne_1\\wedge e_2\\wedge e_3\\wedge e_4. \\end{equation} By using the formula\n$u^i\\alpha_i=u du$ and that\n$$\nu^i\\overline{J}^i\\wedge\\alpha_1\\wedge \\alpha_2\\wedge \\alpha_3=u\ndu\\wedge \\frac{\\epsilon_{ijk}}{2}\\alpha_i\\wedge\n\\alpha_j\\wedge\\overline{J}^k,\n$$\nwe can reexpress (\\ref{left}) as \\begin{equation}\\label{alr}\n\\lambda\\overline{J}\\wedge\\Phi=\\lambda(e^{3f}-\\frac{e^{f+2g}}{u^2})\ndu\\wedge \\frac{\\epsilon_{ijk}}{2}\\alpha_i\\wedge\n\\alpha_j\\wedge\\overline{J}^k-2\\lambda e^{f+2g} du\\wedge e_1\\wedge\ne_2\\wedge e_3\\wedge e_4. \\end{equation} On the other hand, the right-hand side of\nequation (\\ref{clad}) is found directly from (\\ref{cuat}) and\n(\\ref{use}); the result is\n \\begin{equation}\\label{right}\nd(e^{h}\\ast\\Phi)=(e^{2(f+g)+h})'du\\wedge\n\\frac{\\epsilon_{ijk}}{2}\\alpha_i\\wedge\n\\alpha_j\\wedge\\overline{J}^k+\\bigg((e^{4g+h})'-4\nue^{2(f+g)+h}\\bigg)du\\wedge e_1\\wedge e_2\\wedge e_3\\wedge e_4. \\end{equation}\nBy equating (\\ref{right}) with (\\ref{alr}) and taking into account\nthe closure condition (\\ref{closo}) we obtain the following\ndifferential system\n$$\nu e^{3f}=(e^{f+2g})', \\qquad\n\\lambda(e^{3f}-\\frac{e^{f+2g}}{u^2})=(e^{2(f+g)+h})',\n$$\n\\begin{equation}\\label{sistema} 4 ue^{2(f+g)+h}-2\\lambda e^{f+2g}=(e^{4g+h})'. \\end{equation}\nFrom this system of equations we obtain the proposition stated\nabove.\n\n\n\n\\subsection{A particular solution: the Swann bundle}\n\n If we are able to solve the system (\\ref{sistema}) then we will\nobtain a family of $Spin(7)$ holonomy metrics based on arbitrary\nquaternion K\\\"ahler spaces. 
We do not know its general solution, but\nwe have found a particular one. It is not difficult to check that\nindeed \\begin{equation}\\label{ck} e^f=u^{-1\/3},\\qquad e^g=u^{2\/3},\\qquad e^h=\\lambda\nu^{-2\/3}, \\end{equation} is a solution of (\\ref{sistema}). By introducing it\ninto the expression (\\ref{anzatz}), defining the variable\n$\\tau=t\/\\lambda$ and rescaling by $g_8\\to\\lambda^{-1}g_8$, we obtain\nthe following metric \\begin{equation}\\label{eyo}\ng_8=u(d\\tau+H)^2+\\frac{\\alpha_1^2+\\alpha_2^2+\\alpha_3^2}{u} + u g_q,\n\\end{equation} and the corresponding closed four-form is\n$$\n\\Phi_4=(d\\tau+H)\\wedge\\bigg( \\frac{\\alpha_1\\wedge \\alpha_2\\wedge\n\\alpha_3}{u}+u\\alpha_i\\wedge \\overline{J}_i\n\\bigg)+\\frac{\\epsilon_{ijk}}{2}\\alpha_i\\wedge \\alpha_j\\wedge\n\\overline{J}_k+u^2 e_1 \\wedge e_2\\wedge e_3\\wedge e_4.\n$$\nThe closure of this form follows directly from the formulas (\\ref{use}).\nAn inspection of this metric shows that it is a cone over an\nEinstein-Sasaki metric, thus the holonomy is in $SU(4)\\subset Spin(7)$. In\norder to see this we need to show the orthogonality condition\n$\\widetilde{u}_i\\theta_i=0$, which is a consequence of the following\ncalculation\n$$\n\\widetilde{u}_i\\theta_i=\\widetilde{u}_i\nd\\widetilde{u}_i+\\epsilon^{ijk}\\widetilde{u}^i\\omega_{-}^j\n\\widetilde{u}^k=\\widetilde{u}_i d\\widetilde{u}_i=\\frac{1}{2}\nd(\\widetilde{u}_i\\widetilde{u}_i)=0,\n$$\nwhere we have used $\\widetilde{u}_i\\widetilde{u}_i=1$ in the last step. We\nalso have that\n$$\nu\\theta^i+\\frac{u_i du}{u}=\\alpha_i,\n$$\nand inserting this expression into (\\ref{eyo}), applying the\northogonality condition and defining the new radius $r=2u^{1\/2}$,\ngives the following conical form of the metric\n\\begin{equation}\\label{cono} g_8=dr^2+r^2g_7, \\end{equation} where $g_7$ is given by \\begin{equation}\\label{owner}\ng_7=(d\\tau+H)^2+g_6=(d\\tau+H)^2+\\theta_i \\theta_i + g_q. 
\\end{equation} We have\nseen that the six-dimensional metric $g_6$ is K\\\"ahler-Einstein and\ntherefore $g_7$ is Einstein-Sasaki (see the lectures \\cite{Galicki}\nand references therein). Any cone over an Einstein-Sasaki space is\nCalabi-Yau and therefore its holonomy is in $SU(4)\\subset Spin(7)$.\n\n But more information about these metrics can be found by\nfinding explicitly the one-form $H$, which is defined by\n$dH=\\overline{J}$. In order to solve $dH=\\overline{J}$ we need to\nsimplify the expression (\\ref{two}). Let us recall that\n$$\n\\overline{J}=\n\\widetilde{u}_i\\overline{J}_i-\\frac{\\epsilon_{ijk}}{2}\\widetilde{u}_i\\theta_j\\wedge\n\\theta_k\n$$\nand that $\\theta_i=d(\\widetilde{u}^i)+\\epsilon^{ijk}\\omega_{-}^j\n\\widetilde{u}^k$. The orthogonality condition $\\widetilde{u}_i\\theta_i=0$\nis equivalent to\n$$\n\\theta_3=-\\frac{(\\widetilde{u}_1\\theta_1+\\widetilde{u}_2\\theta_2)}{\\widetilde{u}_3}.\n$$\nFrom the last relation it follows that\n$$\n\\frac{\\epsilon_{ijk}}{2}\\widetilde{u}_i\\theta_j\\wedge\n\\theta_k=\\frac{\\theta_1\\wedge \\theta_2}{\\widetilde{u}_3}.\n$$\nAfter some calculation we obtain\n$$\n\\frac{\\theta_1\\wedge \\theta_2}{\\widetilde{u}_3}\n=\\frac{d\\widetilde{u}_1\\wedge\nd\\widetilde{u}_2}{\\widetilde{u}_3}-d\\widetilde{u}_i\\wedge\n\\omega_{-}^i+\\frac{\\epsilon^{ijk}}{2}\\widetilde{u}_i\\omega_{-}^j\\wedge\n\\omega_{-}^k.\n$$\nTherefore\n\\begin{equation}\\label{mam}\n\\frac{\\epsilon_{ijk}}{2}\\widetilde{u}_i\\theta_j\\wedge\n\\theta_k=\\frac{d\\widetilde{u}_1\\wedge\nd\\widetilde{u}_2}{\\widetilde{u}_3}-d\\widetilde{u}_i\\wedge\n\\omega_{-}^i+\\frac{\\epsilon^{ijk}}{2}\\widetilde{u}_i\\omega_{-}^j\\wedge\n\\omega_{-}^k.\n\\end{equation}\nOn the other hand, we have the fundamental relation for quaternion K\\\"ahler manifolds,\n\\begin{equation}\\label{mamsa}\n\\overline{J}_i=d\\omega_{-}^i+\\frac{\\epsilon^{ijk}}{2}\\omega_{-}^j\\wedge\n\\omega_{-}^k.\n\\end{equation}\nInserting expressions 
(\\ref{mam}) and (\\ref{mamsa}) into (\\ref{two}) gives us a\nremarkably simple expression for $\\overline{J}$, namely\n\\begin{equation}\\label{simple}\n\\overline{J}=d(\\widetilde{u}_i\\omega_{-}^i)-\\frac{d\\widetilde{u}_1\\wedge\nd\\widetilde{u}_2}{\\widetilde{u}_3}. \\end{equation} By parameterizing the\ncoordinates $\\widetilde{u}_i$ in the spherical form\n$$\n\\widetilde{u}_1=\\cos\\theta,\\qquad\n\\widetilde{u}_2=\\sin\\theta \\cos\\varphi,\\qquad\n\\widetilde{u}_3=\\sin\\theta \\sin\\varphi,\n$$\nwe find that\n$$\n\\frac{d\\widetilde{u}_1\\wedge\nd\\widetilde{u}_2}{\\widetilde{u}_3}=-d\\varphi\\wedge d\\cos\\theta.\n$$\nWith the help of the last expression we can reexpress (\\ref{simple}) as\n$$\n\\overline{J}=d(\\widetilde{u}_i\\omega_{-}^i)-d\\varphi\\wedge\nd\\cos\\theta,\n$$\nfrom which it follows directly that the form $H$ such that\n$dH=\\overline{J}$ is given by \\begin{equation}\\label{simplon}\nH=\\widetilde{u}_i\\omega_{-}^i+\\cos\\theta d\\varphi, \\end{equation} up to a total\ndifferential term. 
By introducing the expression (\\ref{simplon})\ninto (\\ref{owner}) we find directly the following expression for the Einstein-Sasaki\nmetric\n$$\ng_7=(d\\tau+\\cos\\theta d\\varphi+\\cos\\theta\\omega_{-}^1+\\sin\\theta \\cos\\varphi\\omega_{-}^2\n+\\sin\\theta \\sin\\varphi\\omega_{-}^3)^2+(d\\theta-\\sin\\varphi\\omega_{-}^2\n+\\cos\\varphi\\omega_{-}^3)^2\n$$\n\\begin{equation}\\label{alfinal}\n+(\\sin\\theta d\\varphi\n+\\sin\\theta\\omega_{-}^1-\\cos\\theta \\cos\\varphi\\omega_{-}^2\n-\\cos\\theta \\sin\\varphi\\omega_{-}^3)^2+g_q.\n\\end{equation}\nLet\nus introduce the coordinates $u_0,u_1,u_2,u_3$ written in spherical form\n$$\nu_1=|u| \\sin\\theta\\cos\\varphi\\cos\\phi,\n$$\n$$\nu_2=|u| \\sin\\theta\\cos\\varphi\\sin\\phi,\n$$\n$$\nu_3=|u| \\sin\\theta\\sin\\varphi,\n$$\n$$\nu_0=|u| \\cos\\theta.\n$$\nThen it is not difficult to check that the\ncone $g_8=dr^2+r^2g_7$, where $g_7$ is given in (\\ref{alfinal}), can be expressed as\n\\begin{equation}\\label{explico}\ng_8=|u|^2 g_q+(du_0-u_i\\omega_{-}^i)^2 +(du_i+\nu_0\\omega_{-}^i + \\epsilon_{ijk}u_j\\omega_{-}^k)^2.\\end{equation}\n The coordinates $u_0,u_1,u_2,u_3$ can be extended to a single quaternion-valued coordinate\n$$\nu=u_0 + u_1 I + u_2 J + u_3 K ,\\;\\;\\;\\;\\;\\; \\overline{u}= u_0 - u_1\nI - u_2 J - u_3 K.\n$$\nHere $I, J, K$ denote the unit quaternions, and it follows that\n$|du|^2=(du_0)^2+(du_1)^2 +(du_2)^2 + (du_3)^2$. The $Sp(1)\\sim\nSU(2)$ triplet $\\omega_{-}^i$ can be used to define a quaternion-valued\none-form\n$$\n\\omega_{-}=\\omega_{-}^1 I+\\omega_{-}^2 J +\\omega_{-}^3 K,\n$$\nand the K\\\"ahler triplet $\\overline{J}^a$ can be extended to a\nquaternion-valued two-form $\\overline{J}= \\overline{J}^1 I +\n\\overline{J}^2 J + \\overline{J}^3 K$. 
The metric (\\ref{explico}) can\nbe expressed in this notation as \\begin{equation}\\label{Swann2} g_8=|u|^2 g_q + |du\n+ u \\omega_{-}|^2. \\end{equation} Under the transformation $u\\to G u$ with $G:\nM \\to SU(2)$, the $SU(2)$ instanton $\\omega_{-}$ is gauge transformed\nas $\\omega_{-}\\to G\\omega_{-}G^{-1}+ GdG^{-1}$. Therefore the form\n$du + \\omega_{-}u$ is transformed as\n$$\ndu + \\omega_{-}u\\rightarrow d(Gu) + (G\\omega_{-}G^{-1}+ GdG^{-1})\nGu=G du+dG\\,u+G\\omega_{-}u-dG\\,u=G (du + \\omega_{-}u),\n$$\nand it is seen that $du + \\omega_{-}u$ is a well-defined\nquaternion-valued one-form over the chiral bundle. Associated to the\nmetric (\\ref{Swann2}) we have the quaternion-valued two-form\n\\begin{equation}\\label{quato} \\widetilde{\\overline{J}}=u\\overline{J}\\overline{u}+(du\n+ u \\omega_{-})\\wedge \\overline{(du + u \\omega_{-})}, \\end{equation} and it can\nbe checked that the metric (\\ref{explico}) is Hermitian with respect to\nany of the components of (\\ref{quato}). We have that\n$$\nd\\widetilde{\\overline{J}}=du\\wedge\n(\\overline{J}+d\\omega_{-}-\\omega_{-}\\wedge\n\\omega_{-})\\overline{u}+u\n(\\overline{J}+d\\omega_{-}-\\omega_{-}\\wedge \\omega_{-})\\wedge d\\overline{u}\n$$\n$$+u(d\\overline{J}+\\omega_{-}\\wedge\nd\\omega_{-}-d\\omega_{-}\\wedge \\omega_{-})\\overline{u}.\n$$\nThe first two terms of the last expression are zero due to\n(\\ref{rela}). Also, by introducing (\\ref{rela}) into the relation\n(\\ref{basta}) we obtain that\n$$\nd\\overline{J}+\\omega_{-}\\wedge d\\omega_{-}-d\\omega_{-}\\wedge\n\\omega_{-}=0\n$$\nand therefore the third term is also zero. This means that the\nmetric (\\ref{Swann2}) is hyperkahler with respect to the triplet\n$\\widetilde{\\overline{J}}$ and the holonomy is reduced to\n$Sp(2)\\subset SU(4)\\subset Spin(7)$.\n\n\n The hyperkahler metrics (\\ref{Swann2}) are indeed well known.\nThey are the Swann principal $CO(3)$ bundles of co-frames over\nquaternion K\\\"ahler spaces \\cite{Swann}. 
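As a cross-check of this identification, substituting the particular solution (\ref{ck}) into the system (\ref{sistema}), with $e^{3f}=u^{-1}$, $e^{f+2g}=u$, $e^{2(f+g)+h}=\lambda$ and $e^{4g+h}=\lambda u^2$, gives\n$$\nu\,e^{3f}=1=(e^{f+2g})',\qquad \lambda\Big(e^{3f}-\frac{e^{f+2g}}{u^2}\Big)=\lambda\Big(\frac{1}{u}-\frac{1}{u}\Big)=0=(e^{2(f+g)+h})',\n$$\n$$\n4u\,e^{2(f+g)+h}-2\lambda e^{f+2g}=4\lambda u-2\lambda u=2\lambda u=(e^{4g+h})',\n$$\nso all three equations of (\ref{sistema}) are indeed satisfied. 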
Hyperkahler quotients of\nsuch metrics by tri-holomorphic isometries are related to quaternion\nK\\\"ahler quotients of the base spaces. The hyperkahler condition for\nthe Swann metric implies that the seven-dimensional metric\n(\\ref{owner}) is not only Einstein-Sasaki, but tri-Sasaki. The\nvector field $\\partial_{\\phi}$ is the Reeb vector of the tri-Sasaki\nmetric.\n\\\\\n\n\\textit{The self-duality of the spin connection}\n\\\\\n\nAlthough we have found that our example is hyperkahler, it is\ninstructive to check that the spin connection $\\omega_{ab}$ of the\nmetric (\\ref{eyo}) is self-dual. We choose the basis\n$$\n\\overline{e}^{\\alpha}=u^{1\/2}e^{\\alpha},\\qquad\n\\overline{e}^{i}=\\frac{\\alpha^i}{u^{1\/2}},\\qquad\n\\overline{e}^{8}=u^{1\/2}(d\\tau+H),\n$$\nwhere $e^{\\alpha}$ is a basis for $g_q$, with $\\alpha=1,2,3,4$ and\n$i=1,2,3$. With the help of the first Cartan equation\n$$\nd\\overline{e}^{m}+\\hat{\\omega}_{mn}\\wedge \\overline{e}^{n}=0,\n$$\nwhere $m$ is an index that can be Latin or Greek, we obtain the\nfollowing components of the spin connection\n$$\n\\hat{\\omega}_{ij}=\\frac{u^{[i}\\alpha^{j]}}{2u^2}-\\epsilon_{ijk}\\omega^{k}_{-}\n+\\epsilon_{ijk}\\frac{u_k}{u}(d\\tau+H),\n$$\n$$\n\\hat{\\omega}_{\\alpha\\beta}=\\omega_{\\alpha\\beta}\n-\\epsilon_{ijk}\\frac{u_k}{u^2}(\\overline{J}_j)_{\\alpha\\beta}\\alpha^i\n-\\frac{u_i}{u}(\\overline{J}_i)_{\\alpha\\beta}(d\\tau+H),\n$$\n$$\n\\hat{\\omega}_{\\alpha i}=\\frac{u^i}{2}\ne^{\\alpha}-\\epsilon_{ijk}\\frac{u_k}{u}(\\overline{J}_j)_{\\alpha\\beta}e^{\\beta},\n$$\n$$\n\\hat{\\omega}_{8\ni}=\\frac{u_i}{u}(d\\tau+H)+\\epsilon_{ijk}\\frac{u_j}{u^2}\\alpha^k,\n$$\n$$\n\\hat{\\omega}_{8\n\\alpha}=\\frac{u_i}{u^2}(\\overline{J}_i)_{\\alpha\\beta}e^{\\beta}.\n$$\nBy using that\n$2\\omega^{i}_{-}=\\omega_{\\alpha\\beta}(\\overline{J}_i)_{\\alpha\\beta}$\nand the representation\n$$\nJ^{1}=\\left(\\begin{array}{cccc}\n 0 & -1 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & -1 \\\\\n 0 & 0 & 1 
& 0\n\\end{array}\\right),\\;\\;\\;\\;\nJ^{2}=\\left(\\begin{array}{cccc}\n 0 & 0 & -1 & 0 \\\\\n 0 & 0 & 0 & 1 \\\\\n 1 & 0 & 0 & 0 \\\\\n 0 & -1 & 0 & 0\n\\end{array}\\right)\n$$\n\\begin{equation}\\label{reprodui} J^{3}=J^{1}J^{2}=\\left(\\begin{array}{cccc}\n 0 & 0 & 0 & -1 \\\\\n 0 & 0 & -1 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 1 & 0 & 0 & 0\n\\end{array}\\right),\n\\end{equation} for the matrix $(\\overline{J}_j)_{\\alpha\\beta}$, it can be\nchecked that\n$$\n\\hat{\\omega}_{81}=-(\\hat{\\omega}_{23}+\\hat{\\omega}_{65}+\\hat{\\omega}_{47}),\n\\qquad\n\\hat{\\omega}_{82}=-(\\hat{\\omega}_{31}+\\hat{\\omega}_{46}+\\hat{\\omega}_{57}),\n$$\n$$\n\\hat{\\omega}_{83}=-(\\hat{\\omega}_{12}+\\hat{\\omega}_{54}+\\hat{\\omega}_{67}),\n\\qquad\n\\hat{\\omega}_{84}=-(\\hat{\\omega}_{62}+\\hat{\\omega}_{35}+\\hat{\\omega}_{71}),\n$$\n$$\n\\hat{\\omega}_{85}=-(\\hat{\\omega}_{16}+\\hat{\\omega}_{43}+\\hat{\\omega}_{72}),\n\\qquad\n\\hat{\\omega}_{86}=-(\\hat{\\omega}_{15}+\\hat{\\omega}_{24}+\\hat{\\omega}_{73}),\n$$\n$$\n\\hat{\\omega}_{87}=-(\\hat{\\omega}_{14}+\\hat{\\omega}_{36}+\\hat{\\omega}_{25}).\n$$\nThese conditions for $\\hat{\\omega}_{mn}$ can be written more\nconcisely as\n$$\n\\hat{\\omega}_{8i}=-c_{imn}\\hat{\\omega}_{mn},\n$$\nwhich is equivalent to saying that, in the basis $\\overline{e}^m$,\nthe spin connection $\\hat{\\omega}_{mn}$ is self-dual.\n\n\\section{Discussion}\n\n In this brief work we have proposed an ansatz for $Spin(7)$ metrics\nas an $R$ bundle over closed $G_2$ structures. These $G_2$\nstructures are $R^3$ bundles over 4-dimensional compact quaternion\nK\\\"ahler spaces. We also have used the fact that the twistor space\nof any compact quaternion K\\\"ahler space is K\\\"ahler-Einstein and\ntherefore possesses a six-dimensional symplectic form defined\nover it. We have imposed the conditions for the reduction of the\nholonomy to $Spin(7)$ and we have found a nonlinear system relating\nthree unknown functions. 
We have found a particular solution and the\nresult was the Swann bundle in eight dimensions, which is\nhyperkahler and therefore the holonomy is $Sp(2)\\subset Spin(7)$.\nLet us recall that the direct sum\n$$\ng_{11}=g_{1,2}+g_8,\n$$\nof the Swann metric with the three-dimensional flat Minkowski metric\n$g_{1,2}$ is a solution of the supergravity equations of motion with\nall the fields \"turned off\" except the graviton, and preserving four\nsupersymmetries after compactification. This solution can be\nrewritten in the IIA form\n$$\ng_{11}= e ^{-\\frac{2}{3}\\phi} g_{10} + e^{\\frac{4}{3}\n\\phi}(d\\tau+H)^2,\n$$\nwhere the dilaton $\\phi$ is defined by $\\phi=\\frac{3}{4}\\log u$. The\nreduction along the isometry $\\partial_{\\tau}$ will give a\nbackground of the form\n$$\ng_{IIA}= u ^{1\/2} g_{1,2} + u ^{-1\/2} \\widetilde{g}_{7},\n$$\nwhere $\\widetilde{g}_7$ is given by\n$$\n\\widetilde{g}_7=\\frac{(du^1+\\omega_{-}^2\nu^3)^2}{u^{1\/2}}+\\frac{(du^2+\\omega_{-}^3\nu^1)^2}{u^{1\/2}}+\\frac{(du^3+\\omega_{-}^1 u^2)^2}{u^{1\/2}} + u^{3\/2}\ng_q.\n$$\nThe last metric together with the 3-form\n$$\n\\Phi=\\frac{1}{u^{3\/4}}\\alpha_1\\wedge \\alpha_2\\wedge\n\\alpha_3+u^{5\/4}\\alpha_i\\wedge \\overline{J}_i $$ and its dual\n\\begin{equation}\\label{cuat2} \\ast\\Phi=u\\frac{\\epsilon_{ijk}}{2}\\alpha_i\\wedge\n\\alpha_j\\wedge \\overline{J}_k+u^3e_1\\wedge e_2\\wedge e_3\\wedge e_4,\n\\end{equation} constitute a $G_2$ structure. Therefore, on general grounds,\nwe have\n$$\nd\\Phi=\\tau_0\\ast\\Phi+3\\tau_1\\wedge \\Phi+\\ast\\tau_3,\n$$\n$$\nd\\ast\\Phi=4\\tau_1\\wedge \\ast\\Phi+\\tau_2\\wedge \\Phi,\n$$\nwhere the $\\tau_i$ are the four torsion classes. We have calculated them for\nour case and we have found that $\\tau_3=\\tau_0=0$ and that the class\n$\\tau_2$ can be expressed entirely in terms of the symplectic form\n$\\overline{J}$ of the K\\\"ahler-Einstein metric. The class $\\tau_1$\ncan be eliminated by a conformal transformation. 
The expression for\nthe Swann metric (\\ref{eyo}) in terms of $\\widetilde{g}_7$ is\n$$\ng_8=u(d\\tau+H)^2+\\frac{1}{u^{1\/2}}\\widetilde{g}_7.\n$$\n We can therefore paraphrase the\nresults described in this work by stating that the Swann bundle\ndefines a conformally closed $G_2$ structure with $\\tau_3=\\tau_0=0$\nby reduction along one isometry (which should not be confused with\na hyperkahler reduction or quotient). It would be interesting to see\nif it is possible to find a one-parameter deformation of the\nsolution presented here and to see if the holonomy group obtained is\nbigger than $Sp(2)$. In our opinion, this question deserves some\nattention.\n\\\\\n\n\n {\\bf Acknowledgement:} I benefited from discussions\nwith G. Giribet, who checked some calculations. My thanks also go\nto D. Joyce, who pointed out that the example\npresented here could be the Swann bundle, and to S. Salamon for\npointing me to useful literature for comparison.\n\\\\\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Summary:} We present several recent improvements to minimap2, a\nversatile pairwise aligner for nucleotide sequences. Now minimap2 v2.22 can\nmore accurately map long reads to highly repetitive regions and align through\ninsertions or deletions up to 100kb by default, addressing major weaknesses in\nminimap2 v2.18 or earlier.\n\n\\section{Availability and implementation:}\n\\href{https:\/\/github.com\/lh3\/minimap2}{https:\/\/github.com\/lh3\/minimap2}\n\n\\section{Contact:} hli@ds.dfci.harvard.edu\n\\end{abstract}\n\n\\section{Introduction}\nMinimap2~\\citep{Li:2018ab} is widely used for mapping long sequence\nreads and assembly contigs. \\citet{Jain:2020aa} found minimap2 v2.18 or earlier occasionally\nmisaligned reads from highly repetitive regions as minimap2 ignored seeds of\nhigh occurrence. They also noticed minimap2 may misplace reads with structural\nvariations (SVs) in such regions~\\citep{Jain2020.11.01.363887}. 
These\nmisalignments have become a pressing issue with the advent of\ntelomere-to-telomere human assembly~\\citep{Miga:2020aa}. Meanwhile, old minimap2\nwas unable to efficiently align long insertions\/deletions (INDELs) and often\nbroke an alignment around variable-number tandem repeats (VNTRs). This has\ninspired new chaining algorithms~\\citep{Li:2020aa,Ren:2021aa} which are not\nintegrated into minimap2. Here we will describe recent efforts implemented\nin v2.19 through v2.22 to improve mapping results.\n\n\\begin{methods}\n\\section{Methods}\n\n\\subsection{Rescuing high-occurrence $k$-mers}\nMinimap2 keeps all $k$-mer minimizers~\\citep{Roberts:2004fv} during indexing. Its original\nimplementation only selected low-occurrence minimizers during mapping. The\ncutoff is a few hundred for mapping long reads against a human genome. If a\nread harbors only a few or even no low-occurrence minimizers, it will fail\nchaining due to insufficient anchors.\n\nTo resolve this issue, we implemented a new heuristic to select additional\nminimizers. Suppose we are looking at two adjacent low-occurrence $k$-mers\nlocated at positions $x_1$ and $x_2$, respectively. If $|x_1-x_2|\\ge500$,\nminimap2 v2.22 additionally selects $\\lfloor|x_1-x_2|\/500\\rfloor$ minimizers\nof the lowest occurrence among the minimizers between $x_1$ and $x_2$.\nWe use a binary heap data\nstructure to select the minimizers of the lowest occurrence in this interval.\nThis strategy adds necessary anchors at the cost of increasing the total alignment\ntime by a few percent on real data.\n\n\\subsection{Aligning through longer INDELs}\nThe original minimap2 may fail to align long INDELs due to its chaining\nheuristics. Briefly, minimap2 applies dynamic programming (DP) to chain\nminimizer anchors. This is a quadratic algorithm, slow for chaining\ncontigs. 
For acceptable performance, the original minimap2 uses a 500bp band by\ndefault, which means a gap longer than 500bp will stop chaining.\nTo align through longer gaps, older minimap2 implemented a long-join heuristic as follows.\nIf there is an INDEL longer than 500bp and the two chains around the INDEL\nhave no overlaps on either the query or the reference sequence, minimap2 may\njoin the two short chains later.\nThis heuristic may fail around VNTRs because short chains\noften have overlaps in VNTRs. More subtly, minimap2 may escape the inner DP\nloop early, again for performance, if the chaining result is not improved for\n50 iterations. When there is a copy-number change in a long segmental\nduplication, the early escape may break the alignment around the event even if users\nspecify a large band.\n\nIn minigraph~\\citep{Li:2020aa}, we developed a new chaining algorithm that\nfinds INDELs up to 1kb with DP-based chaining and goes through longer INDELs with a\nsubquadratic algorithm~\\citep{DBLP:conf\/wabi\/AbouelhodaO03}. We ported the same\nalgorithm to minimap2 for contig mapping. For long-read mapping, the minigraph\nalgorithm is slower. Minimap2 v2.22 still uses the DP-based algorithm to\nfind short chains and then invokes the minigraph algorithm to rechain anchors in\nthese short chains. The rechaining step achieves the same goal as long-join\nbut is more reliable because it can resolve overlaps between short chains. The old\nlong-join heuristic has since been removed.\n\n\\subsection{Properly mapping long reads with SVs}\nThe original minimap2 ranks an alignment by its Smith-Waterman score and 
However, when there are SVs on the read,\nthe best scoring alignment is sometimes not the correct alignment.\n\\citet{Jain2020.11.01.363887} resolved this dilemma by altering the mapping\nalgorithm.\n\nIn our view, this problem is rooted in impropriate scoring: affine-gap penalty\nover-penalizes a long INDEL that was often evolutionarily created in one event.\nWe should not penalize a SV by a function linear in the SV length. Minimap2 v2.22 instead rescores\nan alignment with the following scoring function. Suppose an alignment consists\nof $M$ matching bases, $N$ substitutions and $G$ gap opens, we empirically\nscore the alignment with\n$$\nS=M-\\frac{N+G}{2d}-\\sum_{i=1}^G\\log_2(1+g_i)\n$$\nwhere $g_i\\ge1$ is the length of the $i$-th gap and\n$$\nd=\\max\\left\\{\\frac{N+G}{M+N+G},0.02\\right\\}\n$$\nIt approximates per-base sequence divergence except with the smallest value set\nto 2\\%. As an analogy to affine-gap scoring, the matching score in our scheme\nis 1, the mismatch and gap open penalties are both $1\/2d$ and the gap extension\npenalty is a logarithm function of the gap length. Our scoring gives a long SV\na much milder penalty. In terms of time complexity, scoring an alignment is\nlinear in the length of the alignment. The time spent on rescoring is negligible in\npractice.\n\n\n\\end{methods}\n\n\\section{Results}\n\n\\begin{table}\n\\processtable{Evaluation of minimap2 v2.22}\n{\\footnotesize\\label{tab:1}\\begin{tabular}{p{4.2cm}rrrr}\n\\toprule\n$[$Benchmark$]$ Metric & v2.22 & v2.18 & Winno & lra \\\\\n\\midrule\n$[$sim-map$]$ \\% mapped reads at Q10 & 97.9 & 97.6 & {\\bf 99.0} & 97.3 \\\\\n$[$sim-map$]$ err. rate at Q10 (phredQ) & {\\bf 52} & {\\bf 52} & 38 & 24 \\\\\n$[$winno-cmp$]$ rate of diff. 
(phredQ) & {\\bf 41} & 37 & N\/A & 18 \\\\\n$[$sim-sv$]$ \\% false negative rate & {\\bf 0.5} & 2.0 & {\\bf 0.5} & 1.4 \\\\\n$[$sim-sv$]$ \\% false discovery rate & {\\bf 0.0} & 0.1 & {\\bf 0.0} & 0.1 \\\\\n$[$real-sv-1k$]$ \\% false negative rate & {\\bf 7.3} & 20.0 & 13.0 & N\/A \\\\\n$[$real-sv-1k$]$ \\% false discovery rate & 2.7 & {\\bf 2.4} & 2.7 & N\/A \\\\\n\\botrule\n\\end{tabular}}\n{In $[$sim-map$]$, 152,713 reads were simulated from the CHM13 telomere-to-telomere assembly v1.1\n(AC: GCA\\_009914755.3) with pbsim2~\\citep{Ono:2021aa}: ``pbsim2 -{}-hmm\\_model R94.model -{}-length-min\n5000 -{}-length-mean 20000 -{}-accuracy-mean 0.95''. Alignments of mapping quality\n10 or higher were evaluated by ``paftools.js mapeval''. The mapping error rate\nis measured in the phred scale: if the error rate is $e$, $-10\\log_{10}e$ is\nreported in the table. In $[$winno-cmp$]$, 1.39 million CHM13 HiFi reads from\nSRR11292121 were mapped against the same CHM13 assembly. 99.3\\% of them were mapped by Winnowmap2\nat mapping quality 10 or higher and were taken as ground truth to evaluate\nminimap2 and lra with ``paftools.js pafcmp''. $[$sim-sv$]$ simulated 1,000\n50bp to 1000bp INDELs from chr8 in CHM13 using SURVIVOR~\\citep{Jeffares:2017aa} and simulated Nanopore\nreads at 30-fold coverage with the same pbsim2 command line. SVs were called with\n``sniffles -q 10''~\\citep{Sedlazeck:2018ab} and compared to the simulated truth with ``SURVIVOR eval\ncall.vcf truth.bed 50''. In $[$real-sv-1k$]$, small and long variants were\ncalled by dipcall-0.3~\\citep{Li:2018aa} for HG002 assemblies (AC: GCA\\_018852605.1 and\nGCA\\_018852615.1) and compared to the GIAB truth~\\citep{Zook:2020aa} using ``truvari -r 2000 -s\n1000 -S 400 -{}-multimatch -{}-passonly'' which sets the minimum INDEL size to 1kb in evaluation. }\n\\end{table}\n\nWe evaluated minimap2 v2.22 along with v2.18, Winnowmap2 v2.03 and lra v1.3.2\n(Table~\\ref{tab:1}). 
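The rescoring scheme described in the Methods section is a single linear pass over an alignment. The following Python sketch is our own illustration of the formula for $S$ and $d$, not minimap2's actual implementation; the function name and interface are hypothetical.

```python
import math

def rescore(M, N, gaps, min_div=0.02):
    """Rescore an alignment with M matching bases, N substitutions and
    one gap length in `gaps` per gap open, following
    S = M - (N+G)/(2d) - sum(log2(1+g_i))."""
    G = len(gaps)
    # per-base divergence, floored at 2% as in the formula for d
    d = max((N + G) / (M + N + G), min_div)
    # mild, logarithmic penalty for long gaps
    gap_penalty = sum(math.log2(1 + g) for g in gaps)
    return M - (N + G) / (2 * d) - gap_penalty
```

The floor on $d$ caps the per-event mismatch and gap-open penalty at $1/(2\cdot 0.02)=25$ for nearly identical alignments, so a single long INDEL can never dominate the score the way an affine gap penalty would.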
Both versions of minimap2 achieved high mapping accuracy on\nsimulated Nanopore reads (sim-map). Winnowmap2 aligned more reads at mapping\nquality 10 or higher (mapQ10). However, it may occasionally assign a high mapping\nquality to a read with multiple identical best alignments. This reduced its\nmapping accuracy.\n\nIn the absence of ground truth for real data, we took Winnowmap2 mapping as ground\ntruth to evaluate other mappers (winno-cmp in Table~\\ref{tab:1}). Out of 1,378,092 reads with mapQ10\nalignments by Winnowmap2, minimap2 v2.22 could map all of them. 118 reads, less\nthan 0.01\\% of all reads, were mapped differently by v2.22. 51 of them have\nmultiple identical best alignments. We believe these are more likely to be\nWinnowmap2 errors. Most of the remaining 67 (=118-51) reads have multiple\nhighly similar but not identical alignments. We are not sure how many are real\nmapping errors. Minimap2 v2.18 is less consistent with Winnowmap2. Most of the\ndifferences are located in highly repetitive regions.\n\nThe two benchmarks above only evaluate read mappings when there are no variations between the reads and the reference.\nTo measure the mapping accuracy in the presence of SVs (sim-sv), we reproduced\nthe results of \\citet{Jain2020.11.01.363887}. Minimap2 v2.22 is now as good as\nWinnowmap2. Note that we set the Sniffles mapping quality\nthreshold to 10, consistent with the benchmarks above. If we used the\ndefault threshold 20, v2.22 would miss five additional SVs (accounting for\n0.5\\% of simulated SVs). For four of these five missed SVs, minimap2 v2.22\nmapped more variant reads than Winnowmap2. Sniffles did not call these SVs\nbecause minimap2 tended to give them conservative mapping quality. It is worth\nnoting that the simulation here only considers a simple scenario in evolution.\nNon-allelic gene conversions, which happen often in segmental\nduplications~\\citep{Harpak:2017aa}, would obscure the optimal mapping\nstrategies. 
How much such simple SV simulation informs real-world SV calling\nremains a question.\n\nTo see if minimap2 v2.22 could improve long INDEL alignment, we ran dipcall on\ncontig-to-reference alignments and focused on INDELs longer than 1kb\n(real-sv-1k). v2.22 is more sensitive at comparable specificity, confirming its\nadvantage in more contiguous alignment. lra is supposed to handle long INDELs\nwell, too. However, we could not get dipcall to work well with lra,\nso we did not report its numbers.\n\nMinimap2 spends most computing time on base alignment. As the recent improvements\nin v2.22 incur little additional computing and do not change the base alignment\nalgorithm, the new version has similar performance to older versions. It is\nconsistently several times faster than Winnowmap2. Sometimes simple\nheuristics can be as effective as more sophisticated yet slower solutions.\n\n\\section*{Acknowledgements}\nWe thank Arang Rhie and Chirag Jain for providing motivating examples for which\nolder minimap2 underperforms.\n\n\\paragraph{Funding\\textcolon} This work is funded by NHGRI grant R01HG010040.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{secintro}\n\nUncertainty in the coefficients of a linear program is often handled by probability constraints or, more generally, by bounds on a risk measure.\nThe random restrictions are then captured by imposing risk constraints on their violation.\nConsider the linear program\n\\begin{equation}\\label{SLP}\n {\\mathbf c}^\\prime {\\mathbf x} \\longrightarrow \\min \\quad s.t. \\ \\tilde {\\mathbf A} {\\mathbf x} \\ge {\\mathbf b}\\,,\n\\end{equation}\nand assume that $\\tilde {\\mathbf A}$ is a stochastic $m\\times d$ matrix and ${\\mathbf b}\\in \\mathbb{R}^m$.\nThis is a stochastic linear optimization problem. 
To handle the stochastic side conditions a \\textit{joint risk constraint},\n\\begin{equation}\n \\rho^m(\\tilde {\\mathbf A} {\\mathbf x}-{\\mathbf b}) \\le {\\mathbf 0}\\,, \\label{jointRiskConstraint}\n\\end{equation}\nmay be introduced, where $\\rho^m$ is an $m$-variate risk measure.\nFor example, with $\\rho^m(Y)=\\text{Prob}[Y<0]-\\alpha$ the restriction (\\ref{jointRiskConstraint}) becomes\n\\begin{equation}\\label{jointProbConstraint}\n \\text{Prob}[\\tilde {\\mathbf A} {\\mathbf x} \\ge {\\mathbf b}] \\ge 1 - \\alpha\\,,\n\\end{equation}\nand a usual \\textit{chance-constrained linear program} is obtained.\nAlternatively, the side conditions may be subjected to \\textit{separate risk constraints},\n\\begin{equation}\n\\rho^1(\\tilde {\\mathbf A}_j {\\mathbf x}-b_j) \\le 0\\,, \\quad j=1\\dots m\\,, \\label{sepRiskConstraint}\n\\end{equation}\nwith $\\tilde {\\mathbf A}_j$ denoting the $j$-th row of $\\tilde {\\mathbf A}$.\nIn (\\ref{sepRiskConstraint}) each side condition is subject to the same bound that limits the risk of violating the condition.\nA linear program that minimizes ${\\mathbf c}'{\\mathbf x}$ subject to one of the restrictions (\\ref{jointRiskConstraint}) or (\\ref{sepRiskConstraint}) is called\na \\textit{risk-constrained stochastic linear program}.\n\nFor stochastic linear programs (SLPs) in general and risk-constrained SLPs in particular, the reader is e.g.\\ referred to \\cite{KallM10}. What we call a risk measure here is referred to there as a \\textit{quality measure}, and useful representations of the corresponding constraints are given. Like most of the literature, \\cite{KallM10} focuses on classes of SLPs with chance constraints that lead to convex programming problems, since these have obvious computational advantages; see also \\citet{prek95}. Our choice of the quality measure, besides its generality, enjoys a meaningful interpretation and, as will be seen later, enables the use of convex structures in the problem. 
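As a concrete reading of the chance constraint (\ref{jointProbConstraint}), one can check it empirically for a candidate decision ${\mathbf x}$ by counting how many sampled coefficient matrices satisfy all rows. The following Python sketch is purely illustrative; the function name and the data layout (a list of sampled $m\times d$ matrices) are our own assumptions.

```python
def chance_constraint_satisfied(A_samples, x, b, alpha):
    """Empirically check Prob[A x >= b] >= 1 - alpha, where A_samples
    is a list of sampled m x d coefficient matrices (lists of rows)."""
    def row_ok(row, bj):
        # one side condition: a_j' x >= b_j
        return sum(aj * xj for aj, xj in zip(row, x)) >= bj
    # a sample satisfies the joint restriction iff every row does
    hits = sum(all(row_ok(row, bj) for row, bj in zip(A, b))
               for A in A_samples)
    return hits / len(A_samples) >= 1.0 - alpha
```

Note the joint constraint counts a sample as a violation as soon as a single row fails, which is why it is more demanding than the separate constraints (\ref{sepRiskConstraint}).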
\n\nIn the case of a single constraint ($m=1$) we write\n\\begin{equation}\n\\label{eqambiguni}\n \\rho(\\tilde {\\mathbf a}'{\\mathbf x} -b) \\le 0\\,.\n\\end{equation}\n\nA practically important example of an SLP with a single risk constraint (\\ref{eqambiguni}) is the \\emph{portfolio selection problem}.\nLet $\\tilde r_1, \\dots, \\tilde r_d$ be the return rates on $d$ assets and write $\\tilde {\\mathbf r}=(\\tilde r_1, \\dots, \\tilde r_d)'$.\n A convex combination of the assets' returns is sought, $\\tilde {\\mathbf r}'{\\mathbf x}= \\sum_{j=1}^d \\tilde r_j x_j$, that has maximum expectation under a risk constraint and an additional deterministic constraint,\n\\begin{equation}\\label{portfolio}\n \\max_{{\\mathbf x}}\\ E[\\tilde {\\mathbf r}]'{\\mathbf x} \\quad s.t.\\ \\rho(\\tilde {\\mathbf r}'{\\mathbf x})\\le \\rho_0,\\; {\\mathbf x} \\in {\\cal C}\\,,\n\\end{equation}\nwhere $\\rho$ is a risk measure, $\\rho_0\\in \\mathbb{R}$ is a given upper bound of risk (a nonnegative monetary value), and ${\\cal C}\\subset \\mathbb{R}^d$ is a deterministic set which restricts the coefficients $x_k$ in some way.\nFor example, if short sales are excluded, ${\\cal C}$ is the nonnegative orthant in $\\mathbb{R}^d$. The solution ${\\mathbf x}^*$ is the optimal investment under the given model. We will see that, if a solution exists, it is typically finite and unique.\nIn our geometric approach such a solution corresponds to the intersection of\nsome line and a convex body that both contain the point $E[\\tilde {\\mathbf r}]$.\n\nRegarding the choice of $\\rho$, two special cases are well known. First, let $\\rho(\\tilde {\\mathbf r}'{\\mathbf x})= \\text{Prob}[\\tilde {\\mathbf r}'{\\mathbf x}\\le - v_0]$ and $\\rho_0=\\alpha$. 
Then the optimization problem (\\ref{portfolio}) says: Maximize the mean return $E[\\tilde {\\mathbf r}'{\\mathbf x}]$ under the restrictions ${\\mathbf x} \\in {\\cal C}$ and\n\\[ \\text{V@R}_\\alpha(\\tilde {\\mathbf r}'{\\mathbf x}) \\le v_0\\,.\n\\]\nThat is, the \\textit{value at risk} $\\text{V@R}_\\alpha$ of the portfolio return must not exceed the bound $v_0$.\nSecond, let\n\\begin{equation}\\label{expected shortfall}\n\\rho(\\tilde {\\mathbf r}'{\\mathbf x})= - \\frac 1\\alpha \\int_0^\\alpha Q_{\\tilde {\\mathbf r}'{\\mathbf x}}(t) dt\\,,\n\\end{equation}\nwhere $Q_Z$ signifies the quantile function of a random variable $Z$. This means that the \\emph{expected shortfall} of the portfolio return\nis employed in the risk restriction.\n\nIn practice, $\\tilde {\\mathbf a}$ has to be estimated from data.\nIf the solution of the SLP is based on a sample of observed coefficient vectors ${\\mathbf a}^1,\\dots,{\\mathbf a}^n \\in \\mathbb{R}^d$, that is, on an \\textit{external sample}, the SLP is called an \\textit{empirical risk-constrained SLP}.\nIn other words, we assume that $\\tilde {\\mathbf a}$ follows an \\textit{empirical distribution} that gives equal mass $\\frac 1n$ to some observed points ${\\mathbf a}^1,\\dots, {\\mathbf a}^n\\in \\mathbb{R}^d$.\n\\cite{rockur00} investigate\nan empirical stochastic program that arises in portfolio choice when the expected shortfall of a portfolio is minimized. They convert the objective into a function that is convex in the decision vector ${\\mathbf x}$ and optimize it by standard methods. This approach is commonly used in more recent works by these and other authors on portfolio optimization. 
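For an empirical distribution, both risk measures above reduce to simple computations on the ordered sample of returns: $\text{V@R}_\alpha$ is minus the empirical $\alpha$-quantile, and the expected shortfall averages the quantile function over $(0,\alpha]$. A small Python sketch, with function names of our own choosing:

```python
import math

def var_alpha(returns, alpha):
    """Empirical value at risk: V@R_alpha = -Q(alpha), with
    Q the empirical quantile function Q(t) = y_(ceil(n t))."""
    ys = sorted(returns)
    k = max(math.ceil(alpha * len(ys)), 1)
    return -ys[k - 1]

def expected_shortfall(returns, alpha):
    """Empirical expected shortfall: -(1/alpha) * integral of Q
    over (0, alpha], cf. the definition above."""
    ys = sorted(returns)
    n = len(ys)
    k = int(n * alpha)
    # full weight 1/n for the lowest floor(n*alpha) observations,
    # a fractional weight for the next one
    total = sum(ys[:k]) / n
    if k < n:
        total += (n * alpha - k) / n * ys[k]
    return -total / alpha
```

For instance, with four equally likely returns and $\alpha=1/2$, the expected shortfall is minus the mean of the two worst returns, which always dominates $\text{V@R}_{1/2}$.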
\n\nA more complex situation is investigated by \\cite{BertsimasB09}, who discuss the risk-constrained SLP with arbitrary coherent distortion risk measures, which also include expected shortfall.\nThese allow for a sound interpretation in terms of expected utility with distorted probabilities.\nFor the linear restriction a so-called \\textit{uncertainty set} is constructed which consists of all coefficients satisfying the risk constraint.\n\\cite{BertsimasB09} discuss the uncertainty set that turns the SLP into a minimax problem, called a \\textit{robust linear program}; however, they provide no optimal solution of this program.\nThe uncertainty set is a convex body and, as will be made precise below in this paper, turns out to equal a weighted-mean trimmed region.\n\\cite{Natar09}, conversely, construct similar risk measures from given polyhedral and conic uncertainty sets. \\cite{Pflug2006} has proposed an iterative algorithm for optimizing a portfolio using distortion functionals, at each step adding a constraint to the problem and solving it by the simplex method.\nMeanwhile, many other authors have recently contributed to the development of robust linear programs related to risk-constrained optimization problems, see, e.g. 
\\cite{NemirovskiS06}, \\cite{Ben-TalEN09} and \\cite{ChenSST10}.\nFor a review of robust linear programs in portfolio optimization the reader is referred to \\citet{FabozziHZh10}.\n\nIn this paper we contribute to this discussion in three respects:\n\\begin{enumerate}\n\\item The uncertainty set of an SLP under a general coherent distortion risk constraint is shown to be a \\emph{weighted-mean region}, which provides a useful visual and computable characterization of the set.\n\\item An \\textit{algorithm} is constructed that solves the minimax problem over the uncertainty set, hence the SLP.\n\\item If the external sample is i.i.d.\\ from a general probability distribution, the uncertainty set and the solution of the SLP are shown to be \\textit{consistent estimators} of the uncertainty set and the SLP solution.\n\n\\end{enumerate}\n\nThe paper is organized as follows: In Section~\\ref{secdrmwmtr} constraints on distortion risk measures and their equivalence to uncertainty sets in the parameter space are discussed; further these uncertainty sets are shown to be so-called weighted-mean trimmed regions that satisfy a coherency property. In Section~\\ref{secsolve} a robust linear program is investigated by which the SLP with a distortion risk constraint is solved.\nSection~\\ref{secalguni} introduces an algorithm for this program and discusses sensitivity issues of its solution. In Section~\\ref{secsamplesol} we address the SLP and its solution for generally distributed coefficients and investigate the limit behavior of our algorithm if based on an independent sample of coefficients. Section~\\ref{secdiscuss} contains first computational results and concludes.\n\n\n\\section{Distortion risk constraints and weighted-mean regions}\n\\label{secdrmwmtr}\n\nLet us consider a probability space $\\langle\\Omega,{\\cal F}, P \\rangle$ and a set $\\cal R$ of random variables\n(e.g. 
returns of portfolios).\nA function $\\rho: {\\cal R} \\rightarrow \\mathbb{R}$ is a law invariant \\textit{risk measure} if for $Y,Z \\in{\\cal R}$ it holds:\n \\begin{enumerate}\n \\item \\textit{Monotonicity}: If $Y$ is pointwise larger than $Z$ then it has less risk, $\\rho(Y) \\le \\rho(Z)$\\,.\n \\item \\textit{Translation invariance}: $\\rho(Y+\\gamma) = \\rho(Y) - \\gamma \\ \\text{for all}\\;\\; \\gamma\\in \\mathbb{R}$\\,.\n\\item \\textit{Law invariance}: If $Y$ and $Z$ have the same distribution, $P_Y=P_Z$, then $\\rho(Y)=\\rho(Z)$\\,.\n\\end{enumerate}\n$\\rho$ is a \\textit{coherent risk measure} if it is, in addition, positive homogeneous and subadditive,\n\\begin{enumerate}\n\\setcounter{enumi}{3}\n \\item \\textit{Positive homogeneity}: $\\rho(\\lambda Y) = \\lambda \\rho(Y)\\quad \\text{for all}\\;\\; \\lambda \\ge 0$\\,,\n \\item \\textit{Subadditivity}: $\\rho(Y+Z) \\le \\rho(Y) + \\rho(Z)\\quad \\text{for all}\\;\\; Y,Z \\in {\\cal R}$\\,.\n\\end{enumerate}\nThe last two properties imply that \\textit{diversification} is encouraged, which is crucial for risk management. 
Distortion risk measures are essentially the same as \\textit{spectral risk measures}.\nFor the theory of such risk measures, see e.g.\\ \\citet{Foellmer04}.\nA function $\\rho: {\\cal R} \\rightarrow \\mathbb{R}$ is said to satisfy the \\emph{Fatou property} if\n${\\lim \\inf}_{n\\to\\infty} \\rho(Y_n) \\ge \\rho(Y)$ for any bounded sequence converging pointwise to $Y$.\nWith the notion of coherent risk measures, we reformulate a fundamental representation result of \\cite{Huber81}:\n\n\\begin{proposition}\n$\\rho$ is a coherent risk measure satisfying the {Fatou property} if and only if there exists a family $\\mathbb{Q}$ of probability measures that are dominated by $P$ (i.e.\\ $P(S)= 0 \\Rightarrow Q(S)=0$ for any $S\\in {\\cal F}$ and $Q\\in \\mathbb{Q}$) such that\n\\[\n\\rho(Y) = \\sup_{Q\\in\\mathbb{Q}}E_Q(-Y)\\,.\n\\]\n\\end{proposition}\n\nWe say that the family $\\mathbb{Q}$ generates $\\rho$. In particular, let $(\\Omega, {\\cal A})=(\\mathbb{R}^d,{\\cal B}^d)$ and $P$ be the probability distribution of a random vector $\\tilde {\\mathbf a}$. 
Huber's Theorem implies that for any coherent risk measure $\\rho$ there exists a family $\\mathbb{G}$ of $P$-dominated probabilities on ${\\cal B}^d$ so that\n\\begin{align*}\n \\rho(\\tilde {\\mathbf a}'{\\mathbf x} - b)\\le 0 & \\quad \\Leftrightarrow \\quad \\rho(\\tilde {\\mathbf a}'{\\mathbf x} )\\le - b\\\\\n& \\quad \\Leftrightarrow \\quad \\inf_{G\\in \\mathbb{G}} E_G(\\tilde {\\mathbf a}'{\\mathbf x}) \\ge b\\\\\n& \\quad \\Leftrightarrow \\quad E_G(\\tilde {\\mathbf a}'{\\mathbf x}) \\ge b \\;\\; \\text{for all} \\;\\; G\\in \\mathbb{G}\\,.\n \\end{align*}\n\nLet us denote by $\\Delta^n$ the unit simplex in $\\mathbb{R}^n$,\n\\[ \\Delta^n= \\{{\\mathbf x} \\in \\mathbb{R}^n\\,|\\,\\sum_{k=1}^n x_k = 1, x_k \\ge 0\\;\\; \\forall k\\}\\,.\n\\]\nThen, if $\\tilde {\\mathbf a}$ has an empirical distribution on $n$ given points in $\\mathbb{R}^d$, any subset ${\\cal Q}$ of $\\Delta^n$ corresponds to a\nfamily of $P$-dominated probabilities, and thus defines a coherent risk measure $\\rho$. As an immediate consequence of Huber's theorem an equivalent characterization of the risk constraint is obtained (see also \\cite{BertsimasB09}):\n\\begin{proposition}\\label{empHuber}\nLet $\\rho: {\\cal R}\\to \\mathbb{R}$ be a coherent risk measure and let $\\tilde {\\mathbf a}$ have an empirical distribution on ${\\mathbf a}^1,\\dots, {\\mathbf a}^n\\in \\mathbb{R}^d$. 
Then there exists some ${\\cal Q_\\rho} \\subset \\Delta^n$ such that\n \\begin{align*}\n\\rho(\\tilde {\\mathbf a}'{\\mathbf x} - b) \\le 0 \\quad \\Leftrightarrow \\quad & {\\mathbf a}'{\\mathbf x} \\ge b \\;\\; \\text{for all}\\\\\n & {\\mathbf a}\\in {\\cal U}_\\rho:= {\\rm conv}\\{{\\mathbf a} \\in \\mathbb{R}^d\\,|\\, {\\mathbf a}=[{\\mathbf a}^1,\\dots,{\\mathbf a}^n]{\\mathbf q}\\,,\\ {\\mathbf q} \\in {\\cal Q}_\\rho\\}\\,.\n \\end{align*}\n\\end{proposition}\nHere, ${\\rm conv}(W)$ denotes the convex closure of a set $W$.\nProposition~\\ref{empHuber} says that a deterministic side condition ${\\mathbf a}'{\\mathbf x}\\ge b$ holding uniformly for all ${\\mathbf a}$ in the \\textit{uncertainty set} ${\\cal U}_\\rho$\nis equivalent to the above risk constraint (\\ref{eqambiguni}) on the stochastic side condition. This will be used below in providing an algorithmic solution of the risk-constrained SLP.\n\n\n\n\n\n\\subsection{Distortion risk measures}\n\\label{ssecdistrm}\n\nA large and versatile subclass of risk measures is the class of distortion risk measures \\citep{Acerbi02}.\nAgain, let $Q_Y$ denote the quantile function of a random variable $Y$.\n\\begin{definition}[Distortion risk measure]\\label{distortionrisk}\nLet $r$ be an increasing function $[0,1]\\to [0,1]$ with $r(0)=0$ and $r(1)=1$. The risk measure $\\rho$ given by\n\\begin{equation}\\label{defdistortionrisk}\n\\rho(Y)= - \\int_0^1 Q_{Y}(t) dr(t)\n\\end{equation}\nis a \\textit{distortion risk measure} with \\textit{weight generating function} $r$.\n\\end{definition}\n\nA distortion risk measure is coherent if and only if $r$ is concave. For example, with $r(t)=0$ if $t < \\alpha$ and $r(t)=1$ if $t\\ge \\alpha$, the \\textit{value at risk} $\\text{V@R}_\\alpha(Y)= - Q_{Y}(\\alpha)$ is obtained, which is a non-coherent distortion risk measure. A prominent example of a coherent distortion risk measure is the \\textit{expected shortfall}, which is obtained with $r(t)=t\/\\alpha$ if $t < \\alpha$ and $r(t)=1$ otherwise. 
Note that with $r(t)=t$, the risk measure becomes the expectation of $-Y$. A general distortion risk measure $\\rho(Y)$ can thus be interpreted as the expectation of $-Y$ with respect to a probability distribution that has been distorted by the function $r$. In particular, a concave function $r$ distorts the probabilities of lower outcomes of $Y$ in positive direction (the lower the more) and conversely for higher outcomes (the higher the less).\nIn empirical applications, coherent distortion risk measures other than expected shortfall have been recently used by many authors; see, e.g., \\cite{AdamHL08} for a comparison of various such measures in portfolio choice.\n\nAn equivalent characterization of a coherent distortion risk measure\nis that it is coherent and {comonotonic} (\\cite{Acerbi02}). $\\rho$ is \\textit{comonotonic} if\n\\[\n \\rho(Y+Z) = \\rho(Y) + \\rho(Z) \\;\\; \\text{for all $Y$ and $Z$ that are comonotonic},\n\\]\ni.e., that satisfy $\\big(Y(\\omega) - Y(\\omega^\\prime)\\big)\\big(Z(\\omega)-Z(\\omega^\\prime)\\big) \\ge 0$ for every $\\omega, \\omega^{\\prime}\\in \\Omega$.\nIf $Y$ has an empirical distribution on $y_1,\\dots, y_n\\in \\mathbb{R}$, the definition (\\ref{defdistortionrisk}) of a {distortion risk measure}\nspecializes to\n\\begin{equation}\\label{empdistortionrisk}\n \\rho(Y) = -\\sum_{i=1}^{n}{q_i y_{[i]}},\n\\end{equation}\nwhere $y_{[i]}$ are the values ordered from above and $q_i$ are nonnegative weights adding up to $1$. 
(Observe that $q_i= r\\big(\\tfrac{n+1-i}n\\big)-r\\big(\\tfrac{n-i}n\\big)$.)\nThen, the distortion risk measure (\\ref{empdistortionrisk}) is coherent if and only if the weights are ordered, i.e.\\ ${\\mathbf q}\\in \\Delta^n_{\\le}:= \\{{\\mathbf q}\\in \\Delta^n \\,|\\, 0\\le q_1\\le\\dots\\le q_n\\}$.\n\n\\subsection{Weighted-mean regions as uncertainty sets}\\label{subsec2.2}\nIf $\\rho$ is a coherent distortion risk measure, the uncertainty set ${\\cal U}_\\rho$ has a special geometric structure, which will be explored now in order to visualize the optimization problem and to provide the basis for an algorithm.\nWe will demonstrate that ${\\cal U}_\\rho$ equals a so-called \\textit{weighted-mean (WM) region} of the distribution of $\\tilde {\\mathbf a}$.\n\n Given the probability distribution $F_Y$ of a random vector $Y$ in $\\mathbb{R}^d$, weighted-mean regions form a nested family of convex compact sets, $\\{D_\\alpha(F_Y)\\}_{\\alpha\\in [0,1]}$, that are affine equivariant (that is, $D_\\alpha(F_{AY+b})= A\\,D_\\alpha(F_Y) + b$ for any regular matrix $A$ and $b\\in \\mathbb{R}^d$). 
By this, the regions describe the distribution with respect to its location, dispersion and shape.\nWeighted-mean regions have been introduced in \\cite{DyckerhoffM10} for empirical distributions, and in \\cite{DyckerhoffM10a} for general ones.\n\nFor an empirical distribution on ${\\mathbf a}^1, \\dots, {\\mathbf a}^n\\in \\mathbb{R}^d$, a weighted-mean region is a polytope in $\\mathbb{R}^d$ and defined as\n\\begin{equation}\n\\label{eqregconv}\nD_{\\bmw_\\alpha}({\\mathbf a}^1,\\dots,{\\mathbf a}^n)={\\rm conv}\\left\\{\\sum_{j=1}^n w_{\\alpha, j}{\\mathbf a}^{\\pi(j)}\\,\\Big|\\,\n\\text{$\\pi$ permutation of $\\{1,\\dots,n\\}$}\\,\\right\\}\\,.\n\\end{equation}\nHere ${\\mathbf w}_\\alpha=[w_{\\alpha,1}, \\dots, w_{\\alpha,n}]'$ is a vector of ordered weights, i.e.\\ ${\\mathbf w}_\\alpha \\in \\Delta^n_\\le$, indexed by $0\\le \\alpha \\le 1$,\nthat for $\\alpha< \\beta$ satisfies\n\\begin{equation}\\label{majorization}\n\\sum_{j=1}^kw_{\\alpha,j}\\le \\sum_{j=1}^kw_{\\beta,j}\\,,\\quad\\forall k=1,\\dots,n\\,.\n\\end{equation}\nAny such family of \\textit{weight vectors} $\\{{\\mathbf w}_\\alpha\\}_{0\\le \\alpha\\le 1}$ specifies a particular notion of weighted-mean regions.\nThere are many types of weighted-mean regions. They contain well known trimmed regions like the\nzonoid regions, the expected convex hull regions and several others. For example,\n\\[\nw_{\\alpha,j}=\\left\\{\\begin{array}{cl}\n\\frac{1}{n\\alpha}\\,&\\text{if $j>n-\\lfloor n\\alpha\\rfloor$,}\\\\[1ex]\n\\frac{n\\alpha-\\lfloor n\\alpha\\rfloor}{n\\alpha}\\,&\\text{if $j=n-\\lfloor n\\alpha\\rfloor$,}\\\\[1ex]\n0\\,&\\text{if $j<n-\\lfloor n\\alpha\\rfloor$}\\end{array}\\right.\n\\]\nyields the \\textit{zonoid regions}.\n\nConsequently, the minimum value of the robust stochastic LP cannot be smaller than the value of an LP with any deterministic parameter ${\\mathbf u}$ chosen from the uncertainty set. 
Figure~\\ref{fighypcurve} (left panel) illustrates how a deterministic feasible set in dimension two compares to a general robust one: The line that bounds the halfspace ${\\cal X}_{{\\mathbf u}}$ `folds' into a piecewise linear curve delimiting ${\\cal X}$.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{frobust.png}\n \\caption{Deterministic and robust cases: feasible set (left panel), uncertainty set (right panel).}\n \\label{fighypcurve}\n\\end{figure}\n\nLet\n\\[{U_{\\mathbf x}} = \\{{\\mathbf a}\\in \\mathbb{R}^d| {\\mathbf a}'{\\mathbf x} \\ge b\\}\\,, \\quad {\\mathbf x} \\in \\mathbb{R}^d\\,.\n\\]\n\n\\begin{lemma\n \\label{ldualdatatoparam}\nIt holds that\n\\[\n {\\cal U} \\subset \\bigcap_{{\\mathbf x}\\in {\\cal X}}{U_{\\mathbf x}} \\subset \\bigcap_{{\\mathbf x}\\in \\ext{\\cal X}}{U_{\\mathbf x}}\\,.\n\\]\nMoreover, each vertex ${\\mathbf x}\\in \\ext{\\cal X}$ corresponds to a facet of ${\\cal U}$.\n\\end{lemma}\n\n\\textbf{Proof.}\n By Lemma~\\ref{ldualparamtodata} we have ${\\mathbf x}\\in{\\cal X} \\quad \\Leftrightarrow \\quad {\\mathbf a}'{\\mathbf x}\\ge b$ for all ${\\mathbf a}\\in {\\cal U}$. Now let ${\\mathbf a}\\in {\\cal U}$; then for any ${\\mathbf x}\\in{\\cal X}$ it holds that ${\\mathbf a}'{\\mathbf x}\\ge b$, hence ${\\mathbf a}\\in U_{\\mathbf x}$. Conclude ${\\cal U} \\subset \\bigcap_{{\\mathbf x}\\in {\\cal X}}{U_{\\mathbf x}}$.\n Further, it is clear that an extreme point ${\\mathbf x}\\in \\ext{\\cal X}$ yields a facet of ${\\cal U}$.\n\\qed\n\n\\textbf{Remark.} While ${\\cal U}$ is always compact, ${\\cal X}$ is in general not.\nTherefore neither inclusion holds with equality.\n\nThe ordinary simplex algorithm, operating on the vertices of ${\\cal X}$, constructs a chain of adjacent facets in the space of parameters. The chain ends at the solution of the optimization task. Notice that this chain corresponds to a chain of facets of the uncertainty set. 
So, in principle we could try to calculate this chain of facets in the parameter set. However, in our algorithm, another way is pursued to find the optimal solution.\n\nTo manage this task let us consider the goal function ${\\mathbf c}'{\\mathbf x}$. In the parameter space ${\\mathbf c}$ corresponds to a point or a direction. In the solution space it corresponds to all hyperplanes that have ${\\mathbf c}$ as their normal.\nTo produce all these hyperplanes in the parameter space, ${\\mathbf c}$ has to be multiplied by some scaling factor. Hence the hyperplanes are obtained by passing along a straight ray $\\phi$ starting at the origin and containing ${\\mathbf c}$.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{fduality.png}\n \\caption{Duality between spaces.}\n \\label{figduality}\n\\end{figure}\n\nNext we search for the intersection of ${\\cal U}$ with the ray $\\phi$. Note that finding the intersection of a line and a polyhedron in $\\mathbb{R}^3$ is an important problem in computer graphics (cf. \\citet{Kay1986}). The same principle is employed for a general dimension $d$.\nThe uncertainty set ${\\cal U}$ is the finite intersection of halfspaces ${\\cal H}_j$, $j=1\\dots J$, each being defined by a hyperplane $H_j$ with normal ${\\mathbf n}_j$ pointing into ${\\cal H}_j$ and an intercept $d_j$.\n\nConsider some point ${\\mathbf u}$ on the ray $\\phi$ that is not in ${\\cal U}$. Compute $\\frac{d_j}{{\\mathbf u}'{\\mathbf n}_j}$ for all halfspaces ${\\cal H}_j$ that do \\emph{not} include ${\\mathbf u}$, i.e.\\ where $({\\mathbf u}'{\\mathbf n}_j - d_j)< 0$ holds. (In other words, $H_j$ is \\textit{visible} from ${\\mathbf u}$.) Find $j_*$ at which this value is the largest.\n Recall that moving a point ${\\mathbf u}$ along $\\phi$ is equivalent to multiplying ${\\mathbf u}$ by some constant. The furthest move is given by the biggest constant. 
The \\emph{optimal solution} ${\\mathbf x}^*$ of the robust SLP has to satisfy\n ${\\mathbf a}'{\\mathbf x}^*\\ge b$, which is equivalent to\n \\[ {\\mathbf a}'\\left(\\frac{d_{j_*}}b {\\mathbf x}^*\\right) \\ge d_{j_*}\\,.\n \\]\nHence, to obtain ${\\mathbf x}^*$, the normal ${\\mathbf n}_{j_*}$ has to be scaled by the constant $\\frac{b}{d_{j_*}}$,\n\\begin{equation}\\label{optimalsolution}\n{\\mathbf x}^* = \\frac{b}{d_{j_*}} {\\mathbf n}_{j_*}\\,.\n\\end{equation}\n\nBesides the regular situation described above, two special cases can arise:\n\\begin{enumerate}\n \\item There is no facet visible from the origin. This means that no solution is obtained.\n\\item $\\phi$ does not intersect $\\cal U$. Then the whole procedure is repeated with the opposite ray $- \\phi$. If this still gives no intersection, an infinite solution exists.\n\\end{enumerate}\n\n\nFinally, we would like to point out that\nnot the whole polytope ${\\cal U}$ needs to be calculated, but only the part of it which intersects the\nray~$\\phi$. In searching for the optimum not all facets need to be checked, but only a subset of the surface where the intersection will happen. Such a filtration makes the procedure more efficient. The search for a proper subset can be driven by \\textit{geometrical} considerations.\nLet ${\\mathbf x}^*$ be an \\emph{optimal solution} of the robust SLP. 
A subset ${\\cal U}_\\text{eff}$ of ${\\cal U}$ will be called an \\textit{efficient parameter set} if\n\\begin{itemize}\n \\item ${\\mathbf x}^* \\in \\bigcap_{{\\mathbf a}\\in {\\cal U}_\\text{eff}}\\{{\\mathbf x}| {\\mathbf a}' {\\mathbf x} \\ge b\\ \\} \\subset {\\cal X} \\quad \\text{and}$\n \\item ${\\mathbf a}, {\\mathbf d}\\in {\\cal U}_{\\text{eff}}, \\ {\\mathbf a}'{\\mathbf x}\\ge b, \\ {\\mathbf d}'{\\mathbf x}\\ge b \\quad \\text{implies} \\quad {\\mathbf a}={\\mathbf d}\\,.$\n\\end{itemize}\n\n\nThat is to say, ${\\cal U}_\\text{eff}$ is the minimal subset of $\\cal U$ containing all facets that can be optimal for some ${\\mathbf c}$.\n\n\\begin{proposition}\n\\label{proplowerbound}\n${\\cal U}_\\text{eff}$ is the union of all facets of $\\cal U$ for which $d_j\\ge 0$ holds.\n\\end{proposition}\nIn other words, the efficient parameter set ${\\cal U}_\\text{eff}$ consists of that part of the surface of $\\cal U$ that is visible from the origin $\\mathbf 0$. The proof is obvious.\n\n\nTo visualize the efficient parameter set we will use the \\textit{augmented uncertainty set}, which is defined as\n\\[ \\{{\\mathbf a}| {\\mathbf a} = \\lambda {\\mathbf a}^*, \\lambda>1,{\\mathbf a}^*\\in{\\cal U}_\\text{eff}\\}\\,.\n\\]\nIt includes all parameters that are dominated by ${\\cal U}_\\text{eff}$; see the shaded area in the right panel of Figure~\\ref{figalguni}.\n\nSo far we have assumed that $b>0$. It is easy to show that with $b<0$ we have to construct the intersection of $\\phi$ with the part of the surface of $\\cal U$ that is \\textit{invisible} from the origin $\\mathbf 0$, denoted by $\\tilde{\\cal U}_\\text{eff}$ in this case. In the sense of Proposition~\\ref{proplowerbound}, $\\tilde{\\cal U}_\\text{eff}$ contains all facets of $\\cal U$ with $d_j\\le 0$. Obviously, $\\tilde{\\cal U}_\\text{eff}$ is always non-empty in this case, which, in turn, means that the existence of a solution is guaranteed. 
However, the solution can be infinite if $\\phi$ does not intersect $\\tilde{\\cal U}_\\text{eff}$.\n\nThe situation $b<0$ is common in maximizing SLPs. Indeed, if we have the model\n\\begin{equation}\\label{LProbust-max}\n {\\mathbf c}^\\prime {\\mathbf x} \\longrightarrow \\max \\quad s.t. \\ {\\mathbf a}'{\\mathbf x} \\le b\\;\\; \\text{for all } {\\mathbf a}\\in{\\cal U}\\,,\n\\end{equation}\nit is possible to rewrite it as follows:\n\\begin{equation}\\label{LProbust-max-trans}\n (-{\\mathbf c})^\\prime {\\mathbf x} \\longrightarrow \\min \\quad s.t. \\ (-{\\mathbf a})'{\\mathbf x} \\ge -b\\;\\; \\text{for all } {\\mathbf a}\\in{\\cal U}\\,.\n\\end{equation}\n\nClearly,~\\eqref{LProbust-max-trans} is equivalent to~\\eqref{LProbust} except that the coefficient~$b$ is negative.\n\n\n\\section{The algorithm}\n\\label{secalguni}\n\n\nIn this section a precise procedure for obtaining the optimal solution is given.\n\n\\textit{Input:}\n\\begin{itemize}\n\\item a vector ${\\mathbf c}\\in \\mathbb{R}^d$ of coefficients of the goal function,\n\\item an external sample $\\{{\\mathbf a}^1,\\dots,{\\mathbf a}^n\\}\\subset \\mathbb{R}^d$ of coefficient vectors of the restriction,\n \\item a right-hand side $b\\in \\mathbb{R}$ of the restriction,\n\\item a distortion {risk measure} $\\rho$ (defined either by name or by a weight vector).\n\\end{itemize}\n\n\\textit{Output:}\n\\begin{itemize}\n\\item the {uncertainty set} ${\\cal U}$ of parameters given by\n\\begin{itemize}\n \\item facets (i.e. 
normals and intercepts),\n \\item vertices,\n\\end{itemize}\n\\item the {optimal solution} ${\\mathbf x}^*$ of the robust LP and its value ${\\mathbf c}'{\\mathbf x}^*$.\n\\end{itemize}\n\n\\textit{Steps:}\n\\begin{enumerate}\n\\renewcommand{\\theenumi}{\\Alph{enumi}.} \\renewcommand{\\labelenumi}{\\theenumi}\n\n\\renewcommand{\\theenumii}{\\alph{enumii}.} \\renewcommand{\\labelenumii}{\\theenumii}\n\n\\renewcommand{\\theenumiii}{\\Roman{enumiii}.} \\renewcommand{\\labelenumiii}{\\theenumiii}\n\n\\renewcommand{\\theenumiv}{\\roman{enumiv}.} \\renewcommand{\\labelenumiv}{\\theenumiv}\n\n\\item Calculate the subset ${\\cal U}_\\text{eff}\\subset {\\cal U}$ consisting of facets $\\{({\\mathbf n}_j,d_j)\\}_{j\\in J}$.\n\\item Create a line $\\phi$ passing through the origin $\\mathbf 0$ and ${\\mathbf c}$.\n\\item Search for a facet $H_{j_*}$ of ${\\cal U}_\\text{eff}$ that is intersected by $\\phi$:\n\\begin{enumerate}\n\\item Select a subset ${\\cal U}_{sel}\\subseteq{\\cal U}_\\text{eff}$ of facets: This may be either ${\\cal U}_\\text{eff}$ itself or the part of it where the intersection is expected; ${\\cal U}_{sel}=\\{ ({\\mathbf n}_j,d_j)\\,|\\,j\\in J_{sel}\\}$. For example, we can search for the best solution on a pre-specified subset of parameters. Another possible filtering strategy is an iterative transition to facets with better criterion values.\n\\item\n\\label{stfindbest} Take a point ${\\mathbf u}=\\lambda {\\mathbf c}, \\lambda \\ge 0$, outside the augmented uncertainty set. Find $j_* = \\arg\\underset{j}{\\max}\\{\\lambda_j=\\frac{d_j}{{\\mathbf u}'{\\mathbf n}_j}|\\lambda_j>0\\}_{j\\in J_{sel}\\subseteq J}$. In the case $b<0$ just replace $\\arg{\\max}$ with $\\arg{\\min}$.\n\\begin{enumerate}\n \\item If $\\phi$ does not intersect ${\\cal U}_\\text{eff}$, then the solution is \\textit{infinite}. 
If $b>0$, then repeat \\ref{stfindbest} for the opposite ray $-\\phi$; else stop.\n\\item If, in the case $b>0$, the interior of $\\cal U$ contains the origin, then \\textit{no solution} exists; stop.\n\\end{enumerate}\n\n\\item ${\\mathbf x}^* = \\frac{b}{d_{j_*}} {\\mathbf n}_{j_*}$ is the optimal {solution} of the robust LP.\n\\end{enumerate}\n\n\\end{enumerate}\n\n\n In fact, the line $\\phi$ consists of points that correspond to hyperplanes whose normal is the vector ${\\mathbf c}$ in the dual space. One part of $\\phi$ is dominated by points from ${\\cal U}_\\text{eff}$, while the other is not (which results from Proposition~\\ref{proplowerbound}). The crossing point ${\\mathbf a}^*$ defines the hyperplane that touches the feasible set at the optimum as its dual.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{falgorRLO.png}\n \\caption{Finding the optimal solution on the uncertainty set.}\n \\label{figalguni}\n\\end{figure}\n\nMoreover, a typical nonnegativity side constraint ${\\mathbf x} \\ge {\\bf 0}$ can be easily accounted for in the algorithm: the search for facets simply has to be restricted to those having nonnegative normals.\n\nTo solve the portfolio selection problem~\\eqref{portfolio} with the algorithm, we treat the realizations of the vector $-\\tilde{\\mathbf r}$ of loss rates as $\\{{\\mathbf a}^1,\\dots,{\\mathbf a}^n\\}$, and minimize ${\\mathbf c}' {\\mathbf x}$ with ${\\mathbf c} = \\frac{1}{n}\\sum_{i=1}^{n}{{\\mathbf a}^i}$. This corresponds to transforming the maximizing SLP by \\eqref{LProbust-max-trans} and running the procedure outlined above.\nNote that both $\\phi$ and $\\cal U$ contain the point $\\frac{1}{n}\\sum_{i=1}^{n}{{\\mathbf a}^i}$, that is, they always intersect, which, in turn, guarantees the existence of a finite solution.\nTo meet a unit budget constraint, the solution ${\\mathbf x}^*$ is finally rescaled so that $\\sum_{j=1}^d x^*_j=1$. 
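The ray-shooting step \\ref{stfindbest} and the closed-form solution ${\\mathbf x}^* = \\frac{b}{d_{j_*}} {\\mathbf n}_{j_*}$ can be sketched as follows. This is a minimal Python transcription of the formulas above, assuming $b>0$ and ${\\cal U}_{sel}={\\cal U}_\\text{eff}$; the facet data in the usage example are hypothetical.

```python
import numpy as np

def ray_shoot(c, normals, intercepts, b):
    """Step B/C of the algorithm for b > 0: compute lambda_j = d_j / (u' n_j)
    along the ray phi through 0 and c, take j* = argmax over the lambda_j > 0,
    and return x* = (b / d_{j*}) n_{j*}.  Returns None if phi does not
    intersect U_eff (the solution is then infinite)."""
    u = np.asarray(c, dtype=float)      # any u = lambda * c with lambda > 0
    denom = normals @ u                 # the values u' n_j
    lam = np.full(len(intercepts), -np.inf)
    nz = denom != 0
    lam[nz] = intercepts[nz] / denom[nz]
    if not (lam > 0).any():
        return None                     # no intersection with U_eff
    j_star = int(np.argmax(np.where(lam > 0, lam, -np.inf)))
    return b / intercepts[j_star] * normals[j_star]

# Hypothetical efficient facets and goal coefficients:
normals = np.array([[1.0, 0.0], [0.0, 1.0]])
intercepts = np.array([2.0, 1.5])
x_star = ray_shoot(np.array([1.0, 1.0]), normals, intercepts, b=1.0)
print(x_star)  # -> [0.5 0. ]   (j* = 0, since lambda_0 = 2 > lambda_1 = 1.5)
```

For $b<0$ the `argmax` would be replaced by `argmin` over the positive $\\lambda_j$, exactly as in step \\ref{stfindbest}.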
Recall that the risk measure is, by definition, scale equivariant.\n\n\n\\subsection{Sensitivity and complexity issues}\n\\label{ssecsens}\n\nNext we discuss how the robust SLP and its optimal solution behave when the data $\\{{\\mathbf a}^1,\\dots,{\\mathbf a}^n\\}$ on the coefficients are slightly changed. From (\\ref{supportWMT}) it is immediately seen that the support function $h_{\\cal U}$ of the uncertainty set is continuous in the data\n${\\mathbf a}^j$ as well as in the weight vector ${\\mathbf w}_\\alpha$. (Note that the support function $h_{\\cal U}$ is even \\textit{uniformly continuous} in ${\\mathbf a}^1,\\dots,{\\mathbf a}^n$ and ${\\mathbf w}_\\alpha$, which is tantamount to saying that the uncertainty set ${\\cal U}$ is \\textit{Hausdorff continuous} in the data and the risk weights.)\nConsequently, a slight perturbation of the data will only slightly change the value of the support function of ${\\cal U}$, which is a practically useful result regarding the sensitivity of the uncertainty set with respect to the data. The same is true for a small change in the weights of the risk measure.\n\nWe conclude that the point ${\\mathbf a}^{j_*}$ where the line through the origin and ${\\mathbf c}$ cuts ${\\cal U}$ depends continuously on the data and\nthe weights. However, this is not true for the optimal solution ${\\mathbf x}^*$, which may `jump' when the cutting point moves from one facet of ${\\cal U}$ to a neighboring one.\n\nThe theoretical time complexity of finding the solution is composed of the complexity of one transition to the next facet and the total number of such transitions until the sought-for facet is reached. \\cite{BazovkinM10} have shown that the transition has a complexity of ${\\cal O}(d^2n)$. In turn, in the same paper the number of facets $N(n,d)$ of a WM region is shown to lie between ${\\cal O}(n^d)$ and ${\\cal O}(n^{2d})$, depending on the type of the WM region. 
Thus, the average number of facets in a facet chain of fixed length is determined by the density of facets on the region's surface, $\\sqrt[d]{N(n,d)}$, and is estimated to lie between ${\\cal O}(n)$ and ${\\cal O}(n^2)$. The overall complexity is then ${\\cal O}(d^2n^2)$ up to ${\\cal O}(d^2n^3)$. Note that the lower complexity is achieved for zonoid regions, namely when the expected shortfall is used as the risk measure.\n\n\\subsection{Ordered sensitivity analysis}\n\\label{ssecorder}\n\nAlternative uncertainty sets that are ordered by inclusion may also be compared. From Lemma~\\ref{ldualparamtodata} it is clear that the respective sets of feasible solutions are then ordered in the reverse direction; see e.g.\\ Figure~\\ref{figoutofcentral}. In particular, we may consider the robust LP for two alternative distortion risk measures which are based on weight vectors ${\\mathbf w}_\\alpha$ and ${\\mathbf w}_\\beta$, respectively, that satisfy the monotonicity restriction (\\ref{majorization}). Then the resulting uncertainty sets are nested, ${\\cal U}_\\beta \\subset {\\cal U}_\\alpha$, and so are, in reverse order, the feasible sets, ${\\cal X}_\\beta \\supset {\\cal X}_\\alpha$.\nThis is a useful approach for visualizing the {sensitivity} of the robust LP against changes in risk evaluation.\n\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{foutofcentral.png}\n \\caption{Example of the `reversed' central regions in dimension 2.}\n \\label{figoutofcentral}\n\\end{figure}\n\n\n\n\n\\section{Robust SLP for generally distributed coefficients}\n\\label{secsamplesol}\n\nSo far an SLP~\\eqref{SLP} has been considered where the coefficient vector $\\tilde {\\mathbf a}$ follows an empirical distribution. It has been solved on the basis of an external sample $\\{{\\mathbf a}^1,\\dots,{\\mathbf a}^n\\}$.\nIn this section the SLP will be addressed with a general probability distribution $P$ of $\\tilde {\\mathbf a}$. 
We formulate the robust SLP in the general case and demonstrate that the solution of this SLP can be consistently estimated by random sampling from $P$.\n\nConsider a distortion risk measure $\\rho$ (\\ref{defdistortionrisk})\nthat measures the risk of a general random variable $Y$ and has weight generating function $r$,\n$\\rho(Y)= - \\int_0^1 Q_{Y}(t) dr(t)$.\nAs in Section \\ref{subsec2.2}, a convex compact set $\\cal U$ in $\\mathbb{R}^d$ is constructed through its support\nfunction $h_{\\cal U}$,\n\\[ h_{\\cal U}({\\mathbf p}) = \\int_0^1 Q_{{\\mathbf p}'\\tilde {\\mathbf a}}(t) dr(t)\\,.\\]\n\nNow, let a sequence $(\\tilde {\\mathbf a}^n)_{n\\in \\mathbb{N}}$ of independent random vectors be given that are identically distributed with $P$, and consider the sequence of random uncertainty sets ${\\cal U}_n$ based on $\\tilde {\\mathbf a}^1, \\dots, \\tilde {\\mathbf a}^n$. \\cite{DyckerhoffM10} have shown:\n\\begin{proposition}[\\cite{DyckerhoffM10}]\n\\label{propcont}\n${\\cal U}_n$ converges to ${\\cal U}$ almost surely in the Hausdorff sense.\n\\end{proposition}\n\nThe proposition implies that, by drawing an independent sample of $\\tilde {\\mathbf a}$ and solving the robust LP based on the observed empirical distribution, a consistent estimate of the uncertainty set ${\\cal U}$ is obtained. 
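In the empirical case, the support function $h_{\\cal U}$ becomes a weighted sum of the order statistics of the projections ${\\mathbf p}'{\\mathbf a}^i$. The following Python sketch is a hedged illustration only: the weight vector plays the role of ${\\mathbf w}_\\alpha$ from the weighted-mean region construction referenced in the text (its exact definition is given there, not here), and the data are hypothetical.

```python
import numpy as np

def support_function(p, sample, weights):
    """Empirical counterpart of h_U(p): a weighted sum of the ordered
    projections p'a^(1) <= ... <= p'a^(n), where `weights` stands in for
    the risk measure's weight vector w_alpha (assumed nonnegative and
    summing to one; its construction follows the text)."""
    projections = np.sort(sample @ p)   # order statistics of the p'a^i
    return float(weights @ projections)

# With uniform weights the support function reduces to the sample mean
# of the projections (hypothetical data):
sample = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
p = np.array([1.0, 2.0])
h = support_function(p, sample, np.full(3, 1 / 3))
print(h)  # -> 2.0
```

Evaluating such a weighted sum for each direction ${\\mathbf p}$ is also what makes the Hausdorff convergence of Proposition~\\ref{propcont} directly checkable on simulated data.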
Moreover, the cutting point ${\\mathbf a}^{j_*}$, where the line through the origin and ${\\mathbf c}$ hits the uncertainty set, is consistently estimated by our algorithm.\nHowever, in particular for a discretely distributed $\\tilde {\\mathbf a}$, the optimal solution ${\\mathbf x}^*$ need not be a consistent estimate, as it may perform a jump when ${\\mathbf a}^{j_*}$ moves from one facet of ${\\cal U}$ to a neighboring one.\n\n\n\n\\section{Concluding remarks}\n\\label{secdiscuss}\nA stochastic linear program (SLP) has been investigated, where the coefficients of the linear restrictions are random.\nRisk constraints are imposed on the random side conditions and an equivalent robust SLP is modeled, whose worst-case solution is sought over an uncertainty set of coefficients.\nIf the risk is measured by a general coherent distortion risk measure, the uncertainty set of a side condition has been shown to be a \\emph{weighted-mean region}. This provides a comprehensive visual and computable characterization of the uncertainty set.\nAn \\textit{algorithm} has been developed that solves the robust SLP under a single stochastic constraint, given an external sample.\nIt is available as an R-package \\textit{StochaTR} \\citep{StochaTRpack}.\nMoreover, if the data are generated by an infinite i.i.d.\\ sample, the limit behavior of the solution has been investigated.\nThe algorithm allows the introduction of\nadditional deterministic constraints, in particular, those regarding nonnegativity.\n\n\\begin{table}[h!]\n \\centering\n\\begin{tabular}{ |c||c|c|c|c|c|c|c|c|c| }\n \\hline\n $d$\\textbackslash$n$ & 1000 & 2000 & 3000 & 4000 & 5000 & 10000 & 15000 & 20000 & 25000 \\\\ \\hline\n3 & 0.3 & 1.14 & 1.76 & 2.92 & 3.41 & 6.18& 12.61 & 15.06 & 47.54\\\\\n 4 & 0.66 & 2.21 & 3.47 & 4.48 & 4.27 &7.68 & 16.97 & 20.04 & \\\\\n 5 & 1.85 & 3.09 & 5.68 & 9.28 & 11.03 & 13.52& 27.34 & 54.86 & \\\\\n 6 & 2.08 & 4.41 & 5.62 & 14.99 & 18.73 &25.07 & 46.88 & & \\\\\n 7 & 2.16 & 6.22 & 13.3 & 
25.44 & 28.56 & 52.33& & & \\\\\n 8 & 4.18 & 9.78 & 20.18 & 31.82 & 34.23 & & & & \\\\\n 9 & 5.18 & 14.75 & 24.11 & 35.94 & 61.14 & & & & \\\\\n 10 &6.17 & 16.97 & 33.82 & 42.11 & 67.06 & & & & \\\\ \\hline\n\\end{tabular}\n\\caption{Running times of \\emph{StochaTR} for different $n$ and $d$ (in seconds).}\n\\label{tabcompres}\n \\end{table}\n\nTable~\\ref{tabcompres} reports simulated running times (in seconds) of the R-package for the $5\\%$-level expected shortfall and different $d$ and $n$. The data are simulated by mixing the uniform distribution on a $d$-dimensional parallelogram with a multivariate Gaussian distribution.\nIn light of the table, the complexity seems to grow with $d$ and $n$ more slowly than ${\\cal O}(d^2n^2)$. \n\nBesides this, we contrast our new procedure with the seminal approach of \\cite{rockur00}, who solve the portfolio problem by optimizing the expected shortfall with a simplex-based method.\nIn illustrating their method, they simulate three-dimensional normal returns with specified expectations and covariance matrices.\nWe have applied our package to data simulated likewise. The results are exhibited in Table~\\ref{tabcompres2}. For comparison, some cells also contain a second value, which corresponds to the \\cite{rockur00} procedure and is taken from Table 5 there. 
\n\n\\begin{table}[h!]\n \\centering\n\\begin{tabular}{ |c||c|c|c|c|c|c| }\n \\hline\n $\\alpha$\\textbackslash$n$ & 1000 & 5000 & 10000 & 15000 & 20000 & 25000 \\\\ \\hline\n0.10 & 1.1 \\;($<$5)& 7.2 \\; (6) &23.7\\; (20) &46 & 56.3\\; (45)& 74.4\\\\\n0.05 & 0.5\\; ($<$5)& 4.7\\; (6) &14.0\\; (12) & 20.0& 39.8\\; (40)& 53.2\\\\\n0.01 & 0.3\\; ($<$5) &2.3\\; (6) &3.8\\; (6) &7.9 &22.1\\; (50) & 38.5 \\\\\n\\hline\n\\end{tabular}\n\\caption{Running times of \\emph{StochaTR} for different $n$ and $\\alpha$ (in seconds); in parentheses running times of \\cite{rockur00}.} \n\\label{tabcompres2}\n \\end{table}\n\n\nAs we see from Table~\\ref{tabcompres2}, the computational times of the two approaches do not differ much. However, our algorithm usually needs only a few dozen iterations, which is substantially fewer than the algorithm of \\cite{rockur00} requires. Also, in contrast to the latter, where the resulting portfolio can vary between $(0.42,0.13,0.45)$ for $n=1000$ and $(0.64,0.04,0.32)$\nfor $n=5000$, we get a \\textit{stable} optimal portfolio. Our solution averages at $(0.36,0.15,0.49)$, which has approximately the same $\\text{V@R}$ and expected shortfall as that in the compared study but yields a \\textit{better} value of the \\textit{expected return}. \n\n\n\nFinally, our approach turns out to be very flexible. \nIn particular, non-sample information can be introduced into the procedure in an interactive way\nby explicitly changing and modifying the uncertainty set.\nMore research is needed in extending the algorithm to solve SLPs with multiple constraints \\eqref{jointRiskConstraint}. 
Procedures that allow for a stochastic right-hand side in the constraints and random coefficients in the goal function also remain to be explored.\n\n\\nocite{Delbaen02}\n\\nocite{bental00}\n\n\\section*{Acknowledgments}\nPavel Bazovkin was partly supported by a grant of the German Research Foundation (DFG).\n\n\n\\bibliographystyle{ormsv080}