diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzziffa" "b/data_all_eng_slimpj/shuffled/split2/finalzziffa" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzziffa" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\nRecall that for a positive integer $d$ a $d$-dimensional grid $\\Lambda^{d}$ is a graph,\nwhich vertexes are iteger tuples $(a_{1},\\ldots a_{d})$ and two vertexes\n$(a_{1}',\\ldots,a_{d}')$ and $(a_{1}'',\\ldots,a_{d}'')$ are adjacent if and only if \n$|a_{1}'-a_{1}''|+\\ldots+|a_{d}'-a_{d}''|=1$.\nFor a finite graph $\\Delta$ define a connected graph $\\Gamma$ to be {\\it \na symmetrical extension of $\\Lambda^{d}$ by $\\Delta$} if there exists a vertex-transitive\ngroup $G$ of automorphisms of $\\Gamma$ and an imprimitivity system $\\sigma$ of $G$ on $V(\\Gamma)$\nsuch that subgraphs of $\\Gamma$, generated by blocks of $\\sigma$ are isomorphic to $\\Delta$\nand there exists an isomorphism of $\\Gamma\/\\sigma$ (i.e. of factor-graph of $\\Gamma$ by partition $\\sigma$ of its vertex set)\nonto $\\Lambda^{d}$. A tuple $(\\Gamma, G, \\sigma, \\varphi)$ with specified components if called a {\\it\nrealizaion of symmetrical extension $\\Gamma$ of the grid $\\Lambda^{d}$ by the graph $\\Delta$}.\nFor a positive integer $q$ a graph $\\Gamma$ is called {\\it a symmetrical $q$-extension of the grid\n $\\Lambda^{d}$}, if $\\Gamma$ is a symmetrical extension of the grid $\\Lambda^d$ by some graph $\\Delta$, such\nthat $|V(\\Delta)|=q$. In this situation the tuple $(\\Gamma, G, \\sigma, \\varphi)$ with specified components\nis called {\\it a realization of the symmetrical $q$-extension} $\\Gamma$ of the grid $\\Lambda^{d}$,\nand $\\Gamma$ we call a graph of this realization.\nAlong with purely mathematical interest,\nsymmetrical $q$-extensions of the grid $\\Lambda^{d}$ for small $d\\geq 1$ and $q>1$ are iteresting\nfor molecular crystallography and some physical theories (see \\cite{IV2}).\nFor the crystallography symmetrical $2$-extensions of grids $\\Lambda^d$\nare of the most interest. They naturally arise when considering ``molecular'' crystals,\nwhose ``molecules'' consist of two ``atoms'' or, more generally, have a accentuated axis.\n\nIt is natural to consider realizations of symmetric $q$-extensions\nof the grid $\\Lambda^{d}$ up to equivalence defined as follows (see \\cite {IV2}).\nWe call two such realizations $R_1=(\\Gamma_1, G_1,$ $\\sigma_1, \\varphi_1)$ and $R_2=(\\Gamma_2, G_2, \\sigma_2, \\varphi_2)$ \n{\\it equivalent}, and we will write $R_1 \\sim R_2$ if there exists an isomorphism of the graph \n$\\Gamma_1$ to the graph $\\Gamma_2$ which maps $\\sigma_1$ onto $\\sigma_2$.\nThe realizaion $(\\Gamma, G, \\ sigma, \\ varphi)$ of the symmetrical $q$-extension of the grid $\\Lambda^{d}$\nwe call {\\it maximal} if $G=\\mathrm{Aut}_\\sigma(\\Gamma)$ is the group of all automorphisms of the graph \n$\\Gamma$ which preserve the partition $\\sigma$.\nIt is clear that each realization of the symmetrical $q$-extension of the grid $\\Lambda^{d}$ \nhas an equivalent maximal realization (unique up to equivalence).\nV.I. 
Trofimov proved that, for an arbitrary positive integer $d$, there is, up to equivalence, only a finite number of realizations of symmetrical 2-extensions\nof the $d$-dimensional grid (see \\cite[Theorem 2]{IV3}).\nAn algorithm for constructing these extensions is also proposed in \\cite{IV3}.\n\nUsing this algorithm, all realizations of symmetrical 2-extensions of the grid $\\Lambda^2$ were found, up to equivalence, in \\cite{dim2partI} and \\cite{dim2partII} (162 realizations).\nAmong the graphs of these realizations, there are exactly 152 pairwise nonisomorphic graphs.\n\n\nFor an arbitrary realization $(\\Gamma, G, \\sigma, \\varphi)$ of a symmetrical 2-extension of the grid $\\Lambda^{d}$\nand an arbitrary pair of adjacent vertices $B_1, B_2$ of the graph $\\Gamma\/\\sigma$,\nthe set of edges of the graph $\\Gamma$ with one end in $B_1$ and the other in $B_2$\nwe call a {\\it connection}.\nThe following types of connections are possible:\n{\\it type $1$} --- a single edge;\n{\\it type $2_{||}$} --- two non-adjacent edges;\n{\\it type $2_V$} --- two adjacent edges;\n{\\it type $3$} --- three edges;\n{\\it type $4$} --- full connection (4 edges).\nA realization which necessarily has connections of types other than $2_{||}$ and $4$ we call a {\\it realization of class} I.\nA realization which has connections only of types $2_{||}$ and $4$ (possibly of only one of these types) we call a {\\it realization of class} II.\nBy Proposition 4 from \\cite{IV3}, realizations of class I\nare exactly the realizations of symmetrical 2-extensions of the grid $\\Lambda^d$ such that\nonly the trivial automorphism of their graph fixes all blocks (as wholes).\n\nAll 162 realizations of symmetrical 2-extensions of the grid $\\Lambda^2$\nare distributed between classes I and II as follows: 87 realizations of class I (see \\cite{dim2partI})\nand 75 realizations of class II (see \\cite{dim2partII}).\nThis paper is devoted to the description of all realizations of symmetrical 2-extensions of the grid $\\Lambda^3$ of class I, up to equivalence.\n\nA realization of a symmetrical extension of the grid $\\Lambda^{d}$ by the graph $K_2$ (the complete graph on two vertices)\nwe call a {\\it saturated} realization of a symmetrical 2-extension of the grid $\\Lambda^{d}$.\nAccordingly, a realization of a symmetrical extension of the grid $\\Lambda^{d}$ by the complement of $K_2$ (two vertices and no edge)\nwe call a {\\it non-saturated} realization of a symmetrical 2-extension of the grid $\\Lambda^{d}$.\n\nIn this paper, we show that, up to equivalence,\nthere are 5573 realizations of symmetrical 2-extensions of the grid $\\Lambda^{3}$ of\nclass I, among which 2872 are saturated and 2701 are non-saturated (see Theorem 1 and Corollary 1).\nAmong the graphs of saturated realizations of symmetrical 2-extensions of the grid $\\Lambda^{3}$ of class I \nthere are exactly 2792 pairwise nonisomorphic;\namong the graphs of non-saturated realizations of class I there are\nexactly 2594 pairwise nonisomorphic;\nand among all graphs of realizations of class I there are\n5350 pairwise nonisomorphic (see Corollary 2).\n\nIn Sec. \\ref{main_res} we give the description of all realizations of symmetrical 2-extensions\nof the grid $\\Lambda^{3}$ of class I, up to equivalence (Theorem 1 and Corollary 1). It is obtained in this article\n using the approach from \\cite{IV3} implemented in GAP \\cite{gap} (Algorithms 1 and 2 from \\cite{dim2partI}). \nSec.
\\ref{auxiliary} contains preliminary results.\n\nOur work was performed using the \u00abUran\u00bb supercomputer of IMM UB RAS.\n\n\\section{Preliminaries}\\label{auxiliary}\n\nEach vertex-transitive group of automorphisms of\n$\\Lambda^{3}$ is generated by the stabilizer of the vertex $(0,0,0)$ in this \ngroup and six elements of this group\ntranslating the vertex $(0,0,0)$ to the vertices adjacent to it.\nBased on this, using GAP \\cite{gap}, we have listed\nall conjugacy classes of vertex-transitive subgroups of the group $\\mathrm{Aut}(\\Lambda^{3})$.\nIt turned out that there are 786 such classes, \nwhose system of representatives we denote by $\\textbf{H}=\\big\\{H_{1},\\ldots, H_{786}\\big\\}$.\nThe groups $H_{1},\\ldots, H_{786}$ are given in Table 2 below by their generating systems.\nThe following notation is used for certain automorphisms of the grid $\\Lambda^{3}$:\n\\begin{center}\n\\begin{tabular}{cc}\n$r_x: (x,y,z)\\mapsto (x,-z,y),$ &\n$r_y: (x,y,z)\\mapsto (z,y,-x),$ \\\\\n$r_z: (x,y,z)\\mapsto (y,-x,z),$ &\n$m_x: (x,y,z)\\mapsto (-x,y,z),$ \\\\\n$m_y: (x,y,z)\\mapsto (x,-y,z),$ &\n$m_z: (x,y,z)\\mapsto (x,y,-z),$ \\\\\n$i: (x,y,z)\\mapsto (-x,-y,-z),$ & \n$t_x: (x,y,z)\\mapsto (x+1,y,z),$ \\\\\n$t_y: (x,y,z)\\mapsto (x,y+1,z),$ &\n$t_z: (x,y,z)\\mapsto (x,y,z+1),$\n\\end{tabular}\n\\end{center}\nwhere $x,y,z\\in\\mathbb Z$.\n\\smallskip\n\n\\begin{rem} \nIn the natural embedding of the grid $\\Lambda^{3}$ in the Euclidean space\n$\\mathbb R^3$, each automorphism\n$g\\in \\mathrm{Aut}(\\Lambda^{3})$ is induced by a unique isometry $\\tilde{g}$ of this space.\nThe isometries that induce the above automorphisms of $\\Lambda^{3}$ have the following geometric meaning:\n$\\tilde{r_x}, \\tilde{r_y}, \\tilde{r_z}$ are rotations by the angle $\\frac{\\pi}{2}$ around the coordinate axes $x$, $y$ and $z$, respectively;\n$\\tilde{m_x}, \\tilde{m_y}, \\tilde{m_z}$ are reflections in the coordinate planes;\n$\\tilde i$ is the central symmetry about the origin; and\n$\\tilde{t_x}, \\tilde{t_y}, \\tilde{t_z}$ are translations by 1 along the axes $x$, $y$ and $z$, respectively.\\smallskip\n\\end{rem}\n\nUsing GAP, we constructed the 786 stabilizers \\linebreak $\\{H_{(0,0,0)} : H\\in \\textbf{H}\\}$ and compared them up to conjugacy in $\\mathrm{Aut}(\\Lambda^{3})_{(0,0,0)}$. \nIt turned out that, up to conjugacy, there are only 33 such stabilizers.\nWe give them in Table 1 by their generators (column 3).
\nFor each of the 33 groups, the abstract group structure is also given (column 2).\n\n\\hspace*{97mm} \\mbox{T\\ a\\ b\\ l\\ e\\ \\ 1}\\\\\n\\centerline{\\small\\bf\n Stabilizers of the vertex $(0,0,0)$ in vertex-transitive subgroups of $\\mathrm{Aut}(\\Lambda^3)$}\n \\centerline{\\small\\bf up to conjugacy in $\\mathrm{Aut}(\\Lambda^3)_{(0,0,0)}$}\n\\begin{longtable}{lll}\n\\textnumero & Group structure & Generators \\\\\n\\hline\\endhead\n1& $1$ & 1\\\\\n2& $C_2$ & $\\langle i \\rangle$\\\\\n3& $C_2$ & $\\langle m_z \\rangle$\\\\\n4& $C_2$ & $\\langle r_z^2 r_x \\rangle$\\\\\n5& $C_2$ & $\\langle r_z^2 \\rangle$\\\\\n6& $C_2$ & $\\langle m_z r_x \\rangle$\\\\\n7& $C_3$ & $\\langle r_y^{-1} r_z^{-1} \\rangle$\\\\\n8& $C_2\\times C_2$ & $\\langle r_y^2, r_z^2 \\rangle$\\\\\n9& $C_2\\times C_2$ & $\\langle i, r_z^2 \\rangle$\\\\\n10& $C_2\\times C_2$ & $\\langle m_z r_x^{-1}, r_x^2 \\rangle$\\\\\n11& $C_2\\times C_2$ & $\\langle m_x, r_y^2 r_x \\rangle$\\\\\n12& $C_2\\times C_2$ & $\\langle i, r_z^2 r_x \\rangle$\\\\\n13& $C_2\\times C_2$ & $\\langle r_y^2 r_x, r_z^2 r_x \\rangle$\\\\\n14& $C_2\\times C_2$ & $\\langle m_x, r_z^2 \\rangle$\\\\\n15& $C_4$ & $\\langle r_x^{-1}, r_x^2 \\rangle$\\\\\n16& $C_4$ & $\\langle m_x r_x^{-1} \\rangle$\\\\\n17& $C_6$ & $\\langle i, m_x r_y^{-1} r_x^{-1} \\rangle$\\\\\n18& $S_3$ & $\\langle m_x r_y, r_y^{-1} r_z^{-1} \\rangle$\\\\\n19& $S_3$ & $\\langle r_y^2 r_z, r_z^2 r_x \\rangle$\\\\\n20& $C_2\\times C_2\\times C_2$ & $\\langle i, r_y^2 r_x, r_z^2 r_x \\rangle$\\\\\n21& $C_2\\times C_2\\times C_2$ & $\\langle i, r_y^2, r_z^2 \\rangle$\\\\\n22& $C_4\\times C_2$ & $\\langle i, m_x r_x^{-1} \\rangle$\\\\\n23& $D_8$ & $\\langle m_x r_x^{-1}, r_z^2 r_x \\rangle$\\\\\n24& $D_8$ & $\\langle r_z^2, r_z^2 r_x \\rangle$\\\\\n25& $D_8$ & $\\langle m_y, m_z r_x^{-1}, r_x^2 \\rangle$\\\\\n26& $D_8$ & $\\langle m_x r_x^{-1}, r_z^2 \\rangle$\\\\\n27& $A_4$ & $\\langle r_y r_x^{-1}, r_y^2, r_z^2 \\rangle$\\\\\n28& $D_{12}$ & $\\langle i, r_y^2 r_z, r_z^2 r_x \\rangle$\\\\\n29& $C_2\\times D_8$ & $\\langle i, r_z^2, r_z^2 r_x \\rangle$\\\\\n30& $C_2\\times A_4$ & $\\langle i, m_x r_y^{-1} r_x^{-1}, r_y^2, r_z^2 \\rangle$\\\\\n31& $S_4$ & $\\langle m_x r_x^{-1}, m_x r_z, r_z^2 \\rangle$\\\\\n32& $S_4$ & $\\langle r_y^2 r_z, r_z^2, r_z^2 r_x \\rangle$\\\\\n33& $C_2\\times S_4$ & $\\langle i, r_y^2 r_z, r_z^2, r_z^2 r_x \\rangle$\\\\\n\\end{longtable} \n\nEach group $H\\in\\textbf{H}$ can be identified with a space group\nand, therefore, has a point group $\\mathrm{P}(H)$ and a translation basis (see \\cite{cryst}).\nUsing GAP, we verified that the set of point groups $\\{\\mathrm{P}(H) : H\\in \\textbf{H}\\}$,\nup to conjugacy in $\\mathrm{P}(\\mathrm{Aut}(\\Lambda^{3}))$, coincides with the set of 33 stabilizers \ngiven in Table 1.\nIn column 1 of Table 2 below, we give the groups of the set $\\textbf{H}$ defined by their generators.\nIn column 2, for each group $H\\in \\textbf{H}$ we give the \\textnumero\\, of the group from Table 1 conjugate to\nthe stabilizer $H_{(0,0,0)}$ in $\\mathrm{Aut}(\\Lambda^{3})_{(0,0,0)}$.\nIn column 3, for each group $H\\in \\textbf{H}$ we give the \\textnumero\\, of the group from Table 1 conjugate to the point group \n$\\mathrm{P}(H)$ in $\\mathrm{P}(\\mathrm{Aut}(\\Lambda^{3}))$.\nIn column 4, for each group $H\\in\\textbf{H}$ we give its translation basis.\nThe groups in Table 2 are sorted lexicographically first by \\textnumero\\, in column 2 and then by \\textnumero\\, in column 3.\n\n\n\\hspace*{97mm} \\mbox{T\\ a\\ b\\ l\\ e\\ \\ 
2}\\\\\n\\centerline{\\small\\bf\n Representatives of conjugacy classes of vertex-transitive}\n \\centerline{\\small\\bf subgroups of the group $\\mathrm{Aut}(\\Lambda^3)$}\n{\\tiny\\begin{longtable}{llll}\n$H$ & $H_{(0,0,0)}$ & $P(H)$ & Translation basis of $H$\\\\\n\\hline\\endhead\n\\input{Gs_gens.tex}\n\\end{longtable} }\n\n\n\\section{Main result}\\label{main_res}\n\nWe have implemented the approach proposed in \\cite{IV3}, which can be called a coordinatization\nof symmetrical extensions of graphs.\nAccording to it, a realization of a symmetrical 2-extension of the grid\n$\\Lambda^{3}$ of class I can be defined by a triple $(H, L, X)$, where $H$ is a vertex-transitive\nsubgroup of $\\mathrm{Aut}(\\Lambda^{3})$, $L$ is a subgroup of index $2$ of the stabilizer\nof the vertex $(0,0,0)$ in $H$, and $X$ is some subset of elements of $H$ mapping\nthe vertex $(0,0,0)$ of $\\Lambda^{3}$ to some of its adjacent vertices (for details, see \\cite{dim2partI}).\n\nAlgorithm 1 (generation of all saturated realizations of symmetrical 2-extensions of $\\Lambda^{2}$)\nand Algorithm 2 (checking two realizations for equivalence) from \\cite{dim2partI}\nwe adapted (without essential changes) and applied to $\\Lambda^{3}$.\nThe list of saturated realizations generated by Algorithm 1 and thinned out by Algorithm 2\ncontains 2872 realizations (given in Table 4 below). \nWhen thinning, in each class of equivalent realizations the\nrealization whose group $H_i$ is maximal by inclusion was selected.\nDue to this, the realizations in the resulting list are maximal.\n\nWe split the set of all realizations of symmetrical 2-extensions of $\\Lambda^{3}$\nof class I into subclasses defined by the types of connections in the neighbourhood of a vertex.\nIn the first column of Table 3 below, we give all occurring combinations of connection types in the neighbourhood of a vertex (59 combinations).\nCombinations are of the form $x_1 x_2\\_ y_1 y_2\\_ z_1 z_2$, where \n$x_1$ is the type of the first connection in the first direction\n(the grid $\\Lambda^{3}$, and therefore a 2-extension, has three directions along the coordinate axes),\n$x_2$ is the type of the second connection in the first direction,\n$y_1$ is the type of the first connection in the second direction,\n$y_2$ is the type of the second connection in the second direction,\n$z_1$ is the type of the first connection in the third direction, and\n$z_2$ is the type of the second connection in the third direction.\nHere, for each extension, the numbering of the directions and of the connections within a direction is chosen so that\nthe combination is lexicographically minimal.\nIn the pictures of combinations in the first column of Table 3,\nthe first direction is shown horizontally\n(first connection to the left, second to the right),\nthe second direction is shown vertically\n(bottom connection is first, top connection is second), and\nthe third direction is shown diagonally\n(bottom-left connection is first, top-right connection is second).\nFor each combination of connection types, the remaining columns of Table 3 contain\npictures of all found corresponding extensions of the vertex neighborhood up to equivalence.\nIn these pictures, the edges inside blocks are not shown because\nwe use these types of vertex neighborhood both for saturated and for non-saturated realizations.\n\n\\hspace*{97mm} \\mbox{T\\ a\\ b\\ l\\ e\\ \\ 3}\\\\\n\\centerline{\\bf Vertex neighbourhood extensions for}\\\\\n \\centerline{\\small\\bf 2-extensions of $\\Lambda^{3}$ of class 
I}\n\\begin{longtable}{llll}\nconnection types & \\multicolumn{3}{c}{vertex neighbourhood extensions} \\\\\n\\hline\\endhead\n\\begin{tabular}{c} \\input{pictures\/det1.tex} \\\\$11\\_11\\_11$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball1_1.tex} \\\\1A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball1_2.tex} \\\\1B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det2.tex} \\\\$11\\_11\\_2_{||}2_{||}$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball2_1.tex} \\\\2A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball2_2.tex} \\\\2B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det3.tex} \\\\$11\\_11\\_2_{||}4$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball3_1.tex} \\\\3A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball3_2.tex} \\\\3B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det4.tex} \\\\$11\\_11\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball4_1.tex} \\\\4A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball4_2.tex} \\\\4B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det5.tex} \\\\$11\\_11\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball5_1.tex} \\\\5A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball5_2.tex} \\\\5B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det6.tex} \\\\$11\\_12_{||}\\_12_{||}$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball6_1.tex} \\\\6\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det7.tex} \\\\$11\\_13\\_13$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball7_1.tex} \\\\7A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball7_2.tex} \\\\7B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det8.tex} \\\\$11\\_14\\_14$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball8_1.tex} \\\\8\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det9.tex} \\\\$11\\_2_{||}2_{||}\\_2_{||}2_{||}$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball9_1.tex} \\\\9\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det10.tex} \\\\$11\\_2_{||}2_{||}\\_2_{||}4$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball10_1.tex} \\\\10\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det11.tex} \\\\$11\\_2_{||}2_{||}\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball11_1.tex} \\\\11\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det12.tex} \\\\$11\\_2_{||}2_{||}\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball12_1.tex} \\\\12\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det13.tex} \\\\$11\\_2_{||}3\\_2_{||}3$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball13_1.tex} \\\\13\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det14.tex} \\\\$11\\_2_{||}4\\_2_{||}4$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball14_1.tex} \\\\14\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det15.tex} \\\\$11\\_2_{||}4\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball15_1.tex} \\\\15\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det16.tex} \\\\$11\\_2_{||}4\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball16_1.tex} \\\\16\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det17.tex} \\\\$11\\_2_V2_V\\_2_V2_V$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball17_1.tex} \\\\17A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball17_2.tex} \\\\17B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det18.tex} 
\\\\$11\\_33\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball18_1.tex} \\\\18A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball18_2.tex} \\\\18B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det19.tex} \\\\$11\\_33\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball19_1.tex} \\\\19\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det20.tex} \\\\$11\\_34\\_34$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball20_1.tex} \\\\20\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det21.tex} \\\\$11\\_44\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball21_1.tex} \\\\21\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det22.tex} \\\\$12_{||}\\_12_{||}\\_2_{||}2_{||}$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball22_1.tex} \\\\22\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det23.tex} \\\\$12_{||}\\_12_{||}\\_2_{||}4$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball23_1.tex} \\\\23\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det24.tex} \\\\$12_{||}\\_12_{||}\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball24_1.tex} \\\\24\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det25.tex} \\\\$12_{||}\\_12_{||}\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball25_1.tex} \\\\25\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det26.tex} \\\\$12_V\\_12_V\\_2_V2_V$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball26_1.tex} \\\\26A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball26_2.tex} \\\\26B\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball26_3.tex} \\\\26C\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det27.tex} \\\\$13\\_13\\_2_{||}2_{||}$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball27_1.tex} \\\\27A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball27_2.tex} \\\\27B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det28.tex} \\\\$13\\_13\\_2_{||}4$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball28_1.tex} \\\\28A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball28_2.tex} \\\\28B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det29.tex} \\\\$13\\_13\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball29_1.tex} \\\\29A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball29_2.tex} \\\\29B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det30.tex} \\\\$13\\_13\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball30_1.tex} \\\\30A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball30_2.tex} \\\\30B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det31.tex} \\\\$14\\_14\\_2_{||}2_{||}$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball31_1.tex} \\\\31\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det32.tex} \\\\$14\\_14\\_2_{||}4$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball32_1.tex} \\\\32\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det33.tex} \\\\$14\\_14\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball33_1.tex} \\\\33\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det34.tex} \\\\$14\\_14\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball34_1.tex} \\\\34\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det35.tex} \\\\$2_{||}2_{||}\\_2_{||}2_{||}\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball35_1.tex} \\\\35\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det36.tex} 
\\\\$2_{||}2_{||}\\_2_{||}3\\_2_{||}3$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball36_1.tex} \\\\36\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det37.tex} \\\\$2_{||}2_{||}\\_2_{||}4\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball37_1.tex} \\\\37\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det38.tex} \\\\$2_{||}2_{||}\\_2_V2_V\\_2_V2_V$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball38_1.tex} \\\\38A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball38_2.tex} \\\\38B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det39.tex} \\\\$2_{||}2_{||}\\_33\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball39_1.tex} \\\\39A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball39_2.tex} \\\\39B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det40.tex} \\\\$2_{||}2_{||}\\_33\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball40_1.tex} \\\\40\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det41.tex} \\\\$2_{||}2_{||}\\_34\\_34$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball41_1.tex} \\\\41\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det42.tex} \\\\$2_{||}2_V\\_2_{||}2_V\\_2_V2_V$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball42_1.tex} \\\\42A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball42_2.tex} \\\\42B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det43.tex} \\\\$2_{||}3\\_2_{||}3\\_2_{||}4$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball43_1.tex} \\\\43\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det44.tex} \\\\$2_{||}3\\_2_{||}3\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball44_1.tex} \\\\44\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det45.tex} \\\\$2_{||}3\\_2_{||}3\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball45_1.tex} \\\\45\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det46.tex} \\\\$2_{||}4\\_2_{||}4\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball46_1.tex} \\\\46\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det47.tex} \\\\$2_{||}4\\_2_V2_V\\_2_V2_V$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball47_1.tex} \\\\47A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball47_2.tex} \\\\47B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det48.tex} \\\\$2_{||}4\\_33\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball48_1.tex} \\\\48A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball48_2.tex} \\\\48B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det49.tex} \\\\$2_{||}4\\_33\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball49_1.tex} \\\\49\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det50.tex} \\\\$2_{||}4\\_34\\_34$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball50_1.tex} \\\\50\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det51.tex} \\\\$2_V2_V\\_2_V2_V\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball51_1.tex} \\\\51A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball51_2.tex} \\\\51B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det52.tex} \\\\$2_V2_V\\_2_V2_V\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball52_1.tex} \\\\52A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball52_2.tex} \\\\52B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det53.tex} \\\\$2_V2_V\\_2_V3\\_2_V3$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball53_1.tex} 
\\\\53A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball53_2.tex} \\\\53B\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball53_3.tex} \\\\53C\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det54.tex} \\\\$2_V2_V\\_2_V4\\_2_V4$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball54_1.tex} \\\\54A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball54_2.tex} \\\\54B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det55.tex} \\\\$33\\_33\\_33$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball55_1.tex} \\\\55A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball55_2.tex} \\\\55B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det56.tex} \\\\$33\\_33\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball56_1.tex} \\\\56A\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball56_2.tex} \\\\56B\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det57.tex} \\\\$33\\_34\\_34$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball57_1.tex} \\\\57\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det58.tex} \\\\$33\\_44\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball58_1.tex} \\\\58\\end{tabular}\\\\\n\\begin{tabular}{c} \\input{pictures\/det59.tex} \\\\$34\\_34\\_44$\\end{tabular}& \\begin{tabular}{c} \\input{pictures\/ball59_1.tex} \\\\59\\end{tabular}\\\\\n\\end{longtable}\n\nThe list of 2872 saturated realizations generated using Algorithm 1 and thinned out using Algorithm 2\nis given below in Table 4.\nSaturated realizations are defined by triples $(H, L, X)$ (see above):\nthe group $H\\in \\textbf{H}$ is given in the fourth column,\nthe subgroup $L$ of index $2$ in $H_{(0,0,0)}$ in the fifth column,\nand the subset $X$ of elements of the group $H$ in the seventh column.\nIn addition, the sixth column gives an element $m$ such that\n$L\\cup mL = H_{(0,0,0)}$. The \\textnumero\\, of the realization is given in the third column.\nSaturated realizations are sorted by the vertex neighborhood extensions given in Table 3\n(the \\textnumero\\, of the vertex neighborhood extension is given in the first column of Table 4).\nThe set of realizations with a given vertex neighborhood extension is, in turn, \ndivided into classes\ndefined by the orders of the balls of radii 1, 2, ..., 10 of the graph of a realization\n(we call this sequence of orders of balls of radii 1, 2, ..., 10 the {\\it growth} and give it in the second column of Table 4).\nIn this subdivision, classes are sorted lexicographically by increasing growth.\nIn Table 4, along with saturated realizations, we give non-saturated realizations by the\n\\textnumero\\, of the corresponding saturated realizations marked with an asterisk in the third column (see details before Corollary 1 below).\n\n\\hspace*{97mm}{\\small \\mbox{T\\ a\\ b\\ l\\ e\\ \\ 4}}\\\\\n\\centerline{\\small\\bf 2872 saturated and 2701 non-saturated maximal realizations}\\\\\n \\centerline{\\small\\bf of symmetrical 2-extensions of the grid $\\Lambda^{3}$ of class I}\n{\\tiny\\begin{longtable}{p{0.01\\linewidth}p{0.01\\linewidth}llp{0.12\\linewidth}p{0.03\\linewidth}p{0.55\\linewidth}}\nNbr. 
& gr & \\textnumero & $H_i$ & $L$ & $m$ & $X$ \\\\\n\\hline\\endhead\n\\input{PartitReprs.tex}\n \\end{longtable}}\n\n\\begin{theorem}\nSaturated realizations of symmetrical 2-extensions of the grid $\\Lambda^{3}$ of class I, up to equivalence,\nare exhausted by the $2872$ pairwise nonequivalent saturated realizations given in Table 4.\n\\end{theorem}\n\n\nNote that the listing of all realizations of symmetrical 2-extensions of the grid $\\Lambda^{3}$, up to equivalence,\nis reduced to the listing of all saturated realizations of symmetrical 2-extensions of $\\Lambda^{3}$, up to equivalence.\nIndeed, it is obvious that every non-saturated realization of a symmetrical 2-extension of $\\Lambda^{3}$\nis obtained from a uniquely defined saturated realization by removing the edge in each block.\nAll maximal non-saturated realizations obtained in this way are given in Table 4 by the \\textnumero\\, \nof the corresponding saturated maximal realizations taken with an asterisk.\n\n\\begin{corollary} \nNon-saturated realizations of symmetrical 2-extensions of the grid $\\Lambda^{3}$ of class I, up to equivalence,\nare exhausted by the $2701$ pairwise nonequivalent non-saturated realizations given in Table 4.\n\\end{corollary}\n\\begin{proof}\nUsing a computer, it is easily verified that, when the edges inside the blocks of the $2872$ realizations given in Table 4 are removed,\nthe graphs of 171 of them become disconnected,\nand the remaining 2701 realizations (see the \\textnumero's with an asterisk in Table 4) give all, up to equivalence,\nnon-saturated realizations of symmetrical 2-extensions of the grid $\\Lambda^{3}$ of class I.\n\\end{proof}\n\nAccording to Theorem 1 and Corollary 1, all realizations given in Table 4 are pairwise nonequivalent.\nHowever, among the graphs of these realizations, isomorphic ones occur.\nWith GAP, we built a\npartition of the set of graphs of the realizations from Table 4 into classes of isomorphic graphs\n(for details, see the proof of Corollary \\ref{cor2} below).\nIn the following Table 5, we give all the non-singleton classes of this partition.\n\n\\hspace*{97mm}{\\small \\mbox{T\\ a\\ b\\ l\\ e\\ \\ 5}}\\\\\n\\centerline{\\small\\bf Non-singleton classes of isomorphic graphs of realizations}\\\\\n \\centerline{\\small\\bf of symmetrical 2-extensions of $\\Lambda^{3}$ of class I}\n\n{\\tiny\n\\begin{multicols}{4}\n{\n\\noindent\n1) 1, 45*\\\\\n2) 7, 8, 76*\\\\\n3) 9, 10\\\\\n4) 16, 17\\\\\n5) 18, 19\\\\\n6) 26, 64*\\\\\n7) 30, 97*\\\\\n8) 36, 198*, 199*, 337*\\\\\n9) 37, 38, 202*, 203*, 204*, 205*, 206*, 207*, 1119*, 1120*, 1484*\\\\\n10) 39, 1121*\\\\\n11) 41, 279\\\\\n12) 42, 280\\\\\n13) 43, 44, 214*, 215*, 343*\\\\\n14) 46, 123*\\\\\n15) 47, 122*\\\\\n16) 54, 293\\\\\n17) 55, 294\\\\\n18) 57, 348*, 352*\\\\\n19) 59, 144*\\\\\n20) 64, 65\\\\\n21) 110, 250*\\\\\n22) 122, 123\\\\\n23) 198, 199\\\\\n24) 202, 203\\\\\n25) 206, 207\\\\\n26) 216, 343\\\\\n27) 229, 348\\\\\n28) 233, 352\\\\\n29) 286, 216*\\\\\n30) 296, 1123*, 1124*\\\\\n31) 299, 229*\\\\\n32) 301, 233*\\\\\n33) 324, 363*\\\\\n34) 332, 366*\\\\\n35) 427, 2114*\\\\\n36) 428, 503*, 658*\\\\\n37) 431, 667*, 1489*\\\\\n38) 439, 1422*\\\\\n39) 440, 1421*\\\\\n40) 455, 1440*\\\\\n41) 456, 1439*\\\\\n42) 651, 652\\\\\n43) 653, 654\\\\\n44) 664, 1489\\\\\n45) 672, 1421\\\\\n46) 673, 1422\\\\\n47) 712, 1439\\\\\n48) 713, 1440\\\\\n49) 1091, 1094\\\\\n50) 1092, 1093\\\\\n51) 1282, 1412*\\\\\n52) 1283, 1414*\\\\\n53) 1288, 664*, 1423*\\\\\n54) 1289, 1420*\\\\\n55) 1290, 1424*\\\\\n56) 1294, 672*\\\\\n57) 1295, 673*\\\\\n58) 1307, 1437*\\\\\n59) 1313, 712*\\\\\n60) 1314, 
713*\\\\\n61) 1716, 1717\\\\\n62) 1720, 1721\\\\\n63) 1722, 1724\\\\\n64) 1725, 1726\\\\\n65) 1727, 1728\\\\\n66) 1756, 1757\\\\\n67) 1759, 1760\\\\\n68) 1762, 1763\\\\\n69) 1857, 1858\\\\\n70) 1919, 1920\\\\\n71) 1928, 1929\\\\\n72) 1941, 1943\\\\\n73) 1944, 1945\\\\\n74) 1962, 1963\\\\\n75) 1990, 1991\\\\\n76) 1998, 1999\\\\\n77) 2002, 2006\\\\\n78) 2004, 2005\\\\\n79) 2043, 2047\\\\\n80) 2050, 2051\\\\\n81) 2053, 2054\\\\\n82) 2055, 2056\\\\\n83) 2058, 2059\\\\\n84) 2071, 2085\\\\\n85) 2072, 2086\\\\\n86) 2075, 2084\\\\\n87) 2080, 2082\\\\\n88) 2089, 2090\\\\\n89) 2091, 2092\\\\\n90) 2093, 2094\\\\\n91) 2109, 2110\\\\\n92) 2346, 2349\\\\\n93) 2347, 2348\\\\\n94) 2353, 2355\\\\\n95) 2357, 2358\\\\\n96) 2362, 2365\\\\\n97) 2363, 2364\\\\\n98) 2368, 2369\\\\\n99) 2371, 2372\\\\\n100) 2374, 2375\\\\\n101) 2376, 2378\\\\\n102) 2492, 2495\\\\\n103) 2552, 2553\\\\\n104) 2651, 2659\\\\\n105) 2652, 2660\\\\\n106) 2653, 2658\\\\\n107) 2654, 2655\\\\\n108) 2656, 2657\\\\\n109) 2702, 2703\\\\\n110) 2707, 2708\\\\\n111) 2713, 2714\\\\\n112) 2718, 2720\\\\\n113) 2724, 2726\\\\\n114) 63*, 64*, 65*\\\\\n115) 89*, 314*\\\\\n116) 102*, 105*\\\\\n117) 116*, 1121*\\\\\n118) 117*, 339*\\\\\n119) 121*, 219*\\\\\n120) 130*, 1127*\\\\\n121) 141*, 1125*\\\\\n122) 146*, 353*\\\\\n123) 167*, 245*\\\\\n124) 175*, 1139*\\\\\n125) 209*, 210*\\\\\n126) 211*, 212*\\\\\n127) 228*, 350*\\\\\n128) 234*, 235*\\\\\n129) 328*, 329*\\\\\n130) 360*, 361*\\\\\n131) 651*, 652*, 653*, 654*, 2114*\\\\\n132) 660*, 2115*\\\\\n133) 662*, 2116*\\\\\n134) 817*, 818*\\\\\n135) 819*, 820*\\\\\n136) 821*, 2117*\\\\\n137) 825*, 2118*\\\\\n138) 1091*, 1094*\\\\\n139) 1092*, 1093*\\\\\n140) 1133*, 1134*\\\\\n141) 1136*, 1137*\\\\\n142) 1143*, 1144*\\\\\n143) 1147*, 1148*\\\\\n144) 1149*, 1150*\\\\\n145) 1152*, 2521*\\\\\n146) 1163*, 1622*\\\\\n147) 1716*, 1717*\\\\\n148) 1720*, 1721*\\\\\n149) 1722*, 1724*\\\\\n150) 1725*, 1726*\\\\\n151) 1727*, 1728*\\\\\n152) 1756*, 1757*\\\\\n153) 1759*, 1760*\\\\\n154) 1762*, 1763*\\\\\n155) 1857*, 1858*\\\\\n156) 1919*, 1920*\\\\\n157) 1928*, 1929*\\\\\n158) 1941*, 1943*\\\\\n159) 1944*, 1945*\\\\\n160) 1962*, 1963*\\\\\n161) 1990*, 1991*\\\\\n162) 1998*, 1999*\\\\\n163) 2002*, 2006*\\\\\n164) 2004*, 2005*\\\\\n165) 2043*, 2047*\\\\\n166) 2050*, 2051*\\\\\n167) 2053*, 2054*\\\\\n168) 2055*, 2056*\\\\\n169) 2058*, 2059*\\\\\n170) 2071*, 2085*\\\\\n171) 2072*, 2086*\\\\\n172) 2075*, 2084*\\\\\n173) 2080*, 2082*\\\\\n174) 2089*, 2090*\\\\\n175) 2091*, 2092*\\\\\n176) 2093*, 2094*\\\\\n177) 2109*, 2110*\\\\\n178) 2346*, 2349*\\\\\n179) 2347*, 2348*\\\\\n180) 2353*, 2355*\\\\\n181) 2357*, 2358*\\\\\n182) 2362*, 2365*\\\\\n183) 2363*, 2364*\\\\\n184) 2368*, 2369*\\\\\n185) 2371*, 2372*\\\\\n186) 2374*, 2375*\\\\\n187) 2376*, 2378*\\\\\n188) 2492*, 2495*\\\\\n189) 2552*, 2553*\\\\\n190) 2651*, 2659*\\\\\n191) 2652*, 2660*\\\\\n192) 2653*, 2658*\\\\\n193) 2654*, 2655*\\\\\n194) 2656*, 2657*\\\\\n195) 2702*, 2703*\\\\\n196) 2707*, 2708*\\\\\n197) 2713*, 2714*\\\\\n198) 2718*, 2720*\\\\\n199) 2724*, 2726*\\\\ \n} \\end{multicols} }\n\n\n\\begin{corollary}\\label{cor2} \n$(1)$ Up to isomorphism, there are $2792$ graphs of saturated realizations of symmetrical 2-extensions of the grid $\\Lambda^{3}$ of class I.\\\\\n\\\\\n$(2)$ Up to isomorphism, there are $2594$ graphs of non-saturated realizations \nof symmetrical 2-extensions of the grid $\\Lambda^{3}$ of class I.\\\\ \n$(3)$ Up to isomorphism, there are $5350$ graphs of realizations \nof symmetrical 2-extensions of the grid $\\Lambda^{3}$ of class 
I.\\\\\n\\end{corollary}\n\n\\begin{proof}\nUsing GAP, for each of the graphs of the 5573 (2872 saturated and 2701 non-saturated) realizations \nof symmetrical 2-extensions of $\\Lambda^{3}$ of class I,\nwe generated the subgraph $B$ induced by\nthe set of vertices at distance $\\leq 4$ from some arbitrary vertex (i.e. $B$ is a ball of radius 4).\nIn the obtained set of 5573 finite graphs, balls are isomorphic if and only if\nthey correspond to realizations which are in the same line of Table 5.\nAfter that, each isomorphism $\\varphi_b$ between balls $B_1$ and $B_2$ we extended to an isomorphism $\\varphi$\nof the whole graphs of the corresponding realizations $R_1 = (\\Gamma_1, G_1, \\sigma_1, \\varphi_1)$ and\n$R_2 = (\\Gamma_2, G_2, \\sigma_2, \\varphi_2)$ as follows.\n\nLet a realization $R_1$ satisfy the condition of $[p_{x1}, p_{y1}, p_{z1}]$-periodicity,\nand let $R_2$ satisfy the condition of $[p_{x2}, p_{y2}, p_{z2}]$-periodicity\n(according to \\cite{IV2}, a realization $R = (\\Gamma, G, \\sigma, \\varphi)$ of a symmetrical 2-extension of $\\Lambda^{3}$\n{\\it satisfies the condition of $[p_x, p_y, p_z]$-periodicity}, where $p_x, p_y, p_z$ are positive integers, if\nthere exist $g_1, g_2, g_3 \\in G$ such that $[g_i, g_j] = 1$ for $i\\neq j$ and\n$\\varphi g_1^{\\sigma} \\varphi^{-1} = t_x^{p_x}$, $\\varphi g_2^{\\sigma} \\varphi^{-1} = t_y^{p_y}$, \n$\\varphi g_3^{\\sigma} \\varphi^{-1} = t_z^{p_z}$).\nWe identify the set of vertices of $\\Gamma_1$ with the set\n$\\{(x,y,z,w): x,y,z\\in \\mathbb Z, w\\in\\{0,1\\}\\}$, so that $\\sigma_1=\\{\\{(x,y,z,0),(x,y,z,1)\\}: x,y,z\\in \\mathbb Z\\}$\nand $\\{(x_1,y_1,z_1,w_1),(x_2,y_2,z_2,w_2)\\} \\in \\mathrm E(\\Gamma_1) \n\\Leftrightarrow \\{(x_1+p_{x1},y_1,z_1,w_1),(x_2+p_{x1},y_2,z_2,w_2)\\} \\in \\mathrm E(\\Gamma_1)\n\\Leftrightarrow \\{(x_1,y_1+p_{y1},z_1,w_1),(x_2,y_2+p_{y1},z_2,w_2)\\} \\in \\mathrm E(\\Gamma_1)\n\\Leftrightarrow \\{(x_1,y_1,z_1+p_{z1},w_1),(x_2,y_2,z_2+p_{z1},w_2)\\} \\in \\mathrm E(\\Gamma_1)$.\nSimilarly, we identify the set of vertices of $\\Gamma_2$ with the set\n$\\{(x,y,z,w): x,y,z\\in \\mathbb Z, w\\in\\{0,1\\}\\}$.\n\nWe select positive integers $p_x, p_y, p_z$ so that\nthe realization $R_1$ satisfies the condition of $[p_x, p_y, p_z]$-periodicity,\n$p_x | p_{x1}, p_y | p_{y1}, p_z | p_{z1}$ and\n$(p_x, 0, 0) M$, $(0, p_y, 0) M$, $(0, 0, p_z) M \\in \\langle(p_{x2}, 0,0), (0,p_{y2},0), (0,0,p_{z2})\\rangle$\n(angle brackets denote the subgroup generated in the additive group of row vectors), where\nthe $3\\times 3$-matrix $M$ is obtained from the $3\\times 4$ matrix\n$$\\left(\\begin{array}{c}\n(p_x,0,0,0)\\varphi_b\/p_x\\\\\n(0,p_y,0,0)\\varphi_b\/p_y\\\\\n(0,0,p_z,0)\\varphi_b\/p_z\n\\end{array}\\right)$$\nby deleting the last column.\nWe need to ensure that the extension of the fragment $[0...p_x-1]\\times[0...p_y-1]\\times[0...p_z-1]$\n lies inside the ball $B_1$. 
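Since the lattice on the right-hand side is generated by axis-aligned vectors, the membership condition above reduces to componentwise divisibility and can be checked mechanically. A minimal Python sketch of this test (the function names are ours, and integer entries of the matrix $M$ are assumed):\n\\begin{verbatim}\nimport numpy as np\n\ndef in_lattice(v, periods):\n    # The lattice spanned by (p_x2,0,0), (0,p_y2,0), (0,0,p_z2)\n    # consists of the integer row vectors whose i-th coordinate\n    # is divisible by the i-th period.\n    return all(int(c) % int(p) == 0 for c, p in zip(v, periods))\n\ndef periods_compatible(p, M, periods2):\n    # Check (p_x,0,0)M, (0,p_y,0)M, (0,0,p_z)M against the lattice,\n    # where p = (p_x, p_y, p_z) and periods2 = (p_x2, p_y2, p_z2).\n    return all(in_lattice(row, periods2) for row in np.diag(p) @ M)\n\\end{verbatim}\n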
To choose $p_x, p_y, p_z$ in such a way, for some\n pairs of realizations $R_1$ and $R_2$ we had to take the radius of the balls $B_1$ and $B_2$ greater than 4.\n\nThe image of an arbitrary vertex $u$ of the extension $\\Gamma_1$ is now defined by\n $u\\varphi = u t^{-1} \\varphi_b t^{M}$, where the shift $t^{-1}=t_x^{p_x n_1} t_y^{p_y n_2} t_z^{p_z n_3}$ \n maps $u$ into the extension of the fragment $[0...p_x-1]\\times[0...p_y-1]\\times[0...p_z-1]$\n ($n_1, n_2, n_3$ are suitable positive integers).\n\\end{proof}\n\n\\bigskip\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\\section{Introduction}\n\\label{Introduction}\nConvolutional Neural Networks \\cite{LeCun89} have become an essential part of deep learning models \\cite{LeCun2015} designed to tackle a wide range of computer vision tasks, including image classification and recognition \\cite{KrizhevskySH12, SermanetEZMFL13, SimonyanZ14a, SzegedyLJSRAEVR14, ZeilerF14}, image captioning \\cite{KarpathyL15, XuBKCCSZB15, VinyalsTBE15}, and object detection \\cite{GirshickDDM14, Girshick15, RenHG015}. Recent advances in computing technologies with efficient utilisation of Graphical Processing Units (GPUs), as well as the availability of large-scale datasets \\cite{DengDSLL009, LinMBHPRDZ14}, have been among the primary factors in such a rapid rise in CNN popularity.\n\nAn adaptation of convolutional network models \\cite{LongSD15}, pre-trained for the image classification task, has fostered extensive research on the exploitation of CNNs in semantic image segmentation, the problem of marking (or classifying) each pixel of the image with one of the given semantic labels. Among important applications of this problem are road scene understanding \\cite{AlvarezGLL12, BadrinarayananH15, SturgessALT09}, biomedical imaging \\cite{RonnebergerFB15, CiresanGGS12}, and aerial imaging \\cite{KlucknerMRB09, MnihH10}.\n\nRecent breakthrough methods in the area have efficiently and effectively combined neural networks with probabilistic graphical models, such as Conditional Random Fields (CRFs) \\cite{ChenPKMY14, LinSRH15, ZhengJRVSDHT15} and Markov Random Fields (MRFs) \\cite{LiuLLLT15}. These approaches usually refine per-pixel features extracted by CNNs (so-called `unary potentials') with the help of pairwise similarities between the pixels based on location and colour features, followed by an approximate inference of the obtained fully connected graphical model \\cite{KrahenbuhlK11}.\n\nIn this work, we address two main challenges of current CNN-based semantic segmentation methods \\cite{ChenPKMY14, LongSD15}: an effective deconvolution, or upsampling, of the low-resolution output of a neural network; and an inclusion of global information, or context, into existing models without relying on graphical models. 
Our contribution is twofold: i) we propose a novel approach for performing the deconvolution operation of the encoded signal and ii) demonstrate that this new architecture, called \\emph{`Global Deconvolutional Network'}, achieves close to the state-of-the-art performance on semantic segmentation with a simpler architecture and a significantly lower number of parameters in comparison to the existing models~\\cite{ChenPKMY14,NohHH15,LiuLLLT15}.\n\\begin{figure*}\n\\vskip 0.2in\n\\begin{center}\n\\centerline{\\includegraphics[width=\\linewidth]{figures\/GND_ORG-01}}\n\\caption{\\textbf{Global Deconvolutional Network.} Our adaptation of FCN-32s \\cite{LongSD15}. After hierarchical blocks of convolutional-subsampling-nonlinearity layers, we upsample the reduced signal with the help of the global interpolation block. In addition to the pixel-wise softmax loss (not shown here), we also use the multi-label classification loss to increase the recognition accuracy.}\n\\label{architecture}\n\\end{center}\n\\vskip -0.2in\n\\end{figure*} \n\nThe rest of the paper is structured as follows. We briefly explore recent common practices of semantic segmentation models in Section~\\ref{Related Work}. Section~\\ref{GDN} presents our approach designed to overcome the issues outlined above. Section~\\ref{exps} describes the experimental part, including the evaluation results of the proposed method on the popular PASCAL VOC dataset. Finally, Section~\\ref{concl} contains conclusions.\n\n\\section{Related Work}\n\\label{Related Work}\n\nExploitation of fully convolutional neural networks has become ubiquitous in semantic image segmentation ever since the publication of the paper by Long \\emph{et al}\\bmvaOneDot \\cite{LongSD15}. Further research has been concerned with the combination of CNNs and probabilistic graphical models \\cite{ChenPKMY14, LinSRH15, LiuLLLT15, ZhengJRVSDHT15}, training in the presence of weakly-labelled data \\cite{HongNH15, PapandreouCMY15, RussakovskyBFL15},\nand learning an additional (deconvolutional) network \\cite{NohHH15}. \n\nThe problem of incorporation of contextual information has been an active research topic in computer vision \\cite{RabinovichVGWB07, HeitzK08, DivvalaHHEH09, DoerschGE14, MottaghiCLCLFUY14}.\nTo some extent, probabilistic graphical models address this issue in semantic segmentation and can be either a) used as a separate post-processing step \\cite{ChenPKMY14} or b) trained end-to-end with CNNs \\cite{LinSRH15, LiuLLLT15, ZhengJRVSDHT15}.\nIn setting a), graphical models are unable to refine the parameters of the CNN, and thus the errors from the CNN will still be present during post-processing. On the other hand, in b) one needs to carefully design the inference part in terms of usual neural network operations, and it still relies on computing high-dimensional Gaussian kernels \\cite{AdamsBD10}. \nBesides that, Yu and Koltun \\cite{YuK15} have recently shown that dilated convolution filters are generally applicable and make it possible to increase the contextual capacity of the network as well.\n\nIn terms of improving the deconvolutional part of the network for dense predictions, the prevalent strategy has been to use information from lower layers: the so-called `Skip Architecture' \\cite{LongSD15,RonnebergerFB15} and Multi-scale \\cite{ChenPKMY14} are two notable examples. \nNoh \\emph{et al}\\bmvaOneDot \\cite{NohHH15} proposed to train a separate deconvolutional network to effectively decode information from the original fully convolutional model. 
While these methods have given better results, all of them contain significantly more parameters than the corresponding baseline models. \n\nIn turn, we propose another approach, called \\emph{`Global Deconvolutional Network'}, which includes a global interpolation block with an additional recognition loss and gives better results than the multi-scale and `skip' variants. \nOur architecture is depicted in Figure~\\ref{architecture}.\n\n\\section{Global Deconvolutional Network}\n\\label{GDN}\nIn this section, we describe our approach intended to boost the performance of deep learning models on the semantic segmentation task.\n\\subsection{Baseline Models}\n\\label{baseline}\n \nAs baseline models, we choose two publicly available deep CNN models: FCN-32s\\footnote{https:\/\/github.com\/BVLC\/caffe\/wiki\/Model-Zoo} \\cite{LongSD15} and DeepLab\\footnote{https:\/\/bitbucket.org\/deeplab\/deeplab-public\/} \\cite{ChenPKMY14}.\nBoth of them are based on the VGG 16-layer net \\cite{SimonyanZ14a} from the ILSVRC-2014 competition \\cite{RussakovskyDSKS15}. This network contains 16 weight layers, including two fully-connected ones, and can be represented as hierarchical stacks of convolutional layers with rectified linear unit non-linearity \\cite{GlorotBB11} followed by pooling operations after each stack. The output of the fully-connected layers is fed into a softmax classifier.\n\nFor semantic segmentation, where one needs to acquire dense predictions, the fully-connected layers have been replaced by convolution filters followed by a learnable deconvolution or a fixed bilinear interpolation to match the original spatial dimensions of the image. The pixel-wise softmax loss represents the objective function.\n\n\\subsection{Global Interpolation}\n\\label{lin_deconv}\n\nThe output of multiple blocks of convolutional and pooling layers is an encoded image with severely reduced dimensions. To predict a segmentation mask of the same resolution as the original image, one needs to simultaneously decode and upsample this coarse output. A natural approach is to perform an interpolation. In this work, instead of applying conventional local methods, we devise a learnable global interpolation. \n\nWe denote the decoded information of the RGB-image $\\mathbf{I}: \\mathbf{I}\\in{\\mathbb{R}^{3\\times{H}\\times{W}}}$ as $\\mathbf{x}:\\mathbf{x}\\in{\\mathbb{R}^{C\\times{h}\\times{w}}}$, where $C$ represents the number of channels, and $h$ and $w$ denote the reduced counterparts of the height $H$ and the width $W$, respectively. To acquire $\\mathbf{y}:\\mathbf{y}\\in{\\mathbb{R}^{C\\times{H}\\times{W}}}$, the upsampled signal, we apply the following formula:\n\\begin{equation}\n\\label{eqn1}\n\\mathbf{y_{c}}=\\mathbf{K_{h}}\\mathbf{x_{c}}\\mathbf{K_{w}^{\\rm T}},\n\\forall{\\mathbf{c}\\in{\\mathbf{C}}}\n\\end{equation}\nwhere the matrices $\\mathbf{K_{h}}\\in{\\mathbb{R}^{H\\times{h}}}$ and $\\mathbf{K_{w}}\\in{\\mathbb{R}^{W\\times{w}}}$ interpolate each feature map of $\\mathbf{x}$ along the corresponding spatial dimensions. In contrast to simple bilinear interpolation, which operates only on the four closest points, the equation above allows the network to draw on much more information from the rectangular grid. 
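To make the operation concrete, the following minimal NumPy sketch (our illustration, not the released implementation) applies Equation~\\eqref{eqn1} channel by channel:\n\\begin{verbatim}\nimport numpy as np\n\ndef global_interpolation(x, K_h, K_w):\n    # x: (C, h, w) coarse feature maps; K_h: (H, h); K_w: (W, w).\n    # Each channel is upsampled as y_c = K_h @ x_c @ K_w.T, so every\n    # output pixel is a learned combination of whole rows and columns\n    # of the coarse map, not just of its four nearest points.\n    return np.einsum('Hh,chw,Ww->cHW', K_h, x, K_w)\n\nx = np.random.randn(21, 16, 16)        # e.g. 21 maps of a 16x16 coarse output\nK_h = np.random.randn(500, 16)\nK_w = np.random.randn(500, 16)\ny = global_interpolation(x, K_h, K_w)  # shape (21, 500, 500)\n\\end{verbatim}\n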
An illustrative example can be seen in Figure~\\ref{equation1}.\n\n\\begin{figure}[ht]\n\\vskip 0.2in\n\\begin{center}\n\\centerline{\\includegraphics[scale=0.4]{figures\/matrix_plus}}\n\\caption{Depiction of Equation~\\eqref{eqn1} for synthetic data.}\n\\label{equation1}\n\\end{center}\n\\vskip -0.2in\n\\end{figure} \n\nNote that this operation is differentiable, and during the backpropagation algorithm \\cite{Rumelhart1986} the derivatives of the pixelwise cross-entropy loss function $\\pazocal{L}_{s}$ with respect to the input $\\mathbf{x}$ and the parameters $\\mathbf{K_{h}},\\mathbf{K_{w}}$ can be found as follows:\n\\begin{align}\n\\label{eqn2}\n\\begin{gathered}\n\\frac{\\partial \\pazocal{L}_{s}}{\\partial \\mathbf{x_{c}}}=\\mathbf{K_{h}^{\\rm T}}\\frac{\\partial \\pazocal{L}_{s}}{\\partial \\mathbf{y_{c}}}\\mathbf{K_{w}}, \\\\\n\\frac{\\partial \\pazocal{L}_{s}}{\\partial \\mathbf{K_{w}}}=\\Big(\\frac{\\partial \\pazocal{L}_{s}}{\\partial \\mathbf{y_{c}}}\\Big)^{\\rm T}\\mathbf{K_{h}}\\mathbf{x_{c}}, \\quad \\frac{\\partial \\pazocal{L}_{s}}{\\partial \\mathbf{K_{h}}}=\\Big(\\frac{\\partial \\pazocal{L}_{s}}{\\partial \\mathbf{y_{c}}}\\Big)\\mathbf{K_{w}}\\mathbf{x_{c}^{\\rm T}},\n\\end{gathered}\n\\end{align}\nwhere the derivatives with respect to the shared parameters $\\mathbf{K_{h}}$ and $\\mathbf{K_{w}}$ are accumulated over all channels $\\mathbf{c}$.\n\nWe call the operation performed by Equation~\\eqref{eqn1} \\textit{`global deconvolution'.} We only use this term to underline the fact that we mimic the behaviour of standard deconvolution using a global function; note that our method is not the inverse of the convolution operation and therefore does not represent deconvolution in the strictest sense as, for example, in \\cite{ZeilerKTF10}. \n\n\\subsection{Multi-task loss}\n\nIt is not uncommon to force intermediate layers of deep learning networks to preserve meaningful and discriminative representations. For example, Szegedy~\\emph{et al}\\bmvaOneDot \\cite{SzegedyLJSRAEVR14} appended several auxiliary classifiers to the middle blocks of their model. \n \nAs semantic image segmentation essentially comprises image classification as one of its sub-tasks, we append an additional objective function on top of the coarse output to further improve the model performance on the particular task of classification (Figure~\\ref{architecture}). This supplementary block consists of $3$ fully-connected layers, with the length of the last one being equal to the pre-defined number of possible labels (excluding the background). As there are usually multiple instances of the same label present in the image, we do not explicitly encode the counts and only denote the presence or absence of a particular class. The scores from the last layer are transformed with the sigmoid function followed by the multinomial cross entropy loss. 
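For reference, the binary presence targets for this classification loss can be read off directly from the ground-truth segmentation masks; a minimal sketch (the helper name \\texttt{presence\\_targets} is ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef presence_targets(mask, num_classes):\n    # mask: (H, W) integer ground-truth labels; 0 is background.\n    # Returns a {0,1} vector marking which foreground labels occur\n    # in the image, ignoring how many instances of each are present.\n    targets = np.zeros(num_classes)\n    labels = np.unique(mask)\n    labels = labels[(labels > 0) & (labels <= num_classes)]  # drop bg\/ignore\n    targets[labels - 1] = 1.0\n    return targets\n\\end{verbatim}\n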
\n\nThe loss functions are defined as follows:\n\\begin{align}\n\\label{eqn3}\n\\begin{gathered}\n\\pazocal{L}_{s}(I, G)=-\\frac{1}{|I|}\\sum_{i\\in{I}}\\log(\\hat{p}_{iG_{i}}), \\\\\n\\pazocal{L}_{c}(I, L)=-\\frac{1}{|C|}\\sum_{c\\in{C}}[p_{c}\\log(\\hat{p}_{c})+(1-p_{c})\\log(1-\\hat{p}_{c})],\n\\end{gathered}\n\\end{align}\nwhere $\\pazocal{L}_{c}$ is the multi-label classification loss; $\\pazocal{L}_{s}$ is the pixelwise cross-entropy loss; $I$ is the set of pixels; $G$ is a ground truth map; $C$ is the set of possible labels; $L$ is a ground truth binary vector of length $|C|$; $\\hat{p}_{ic}\\in[0,1]$ is the softmax probability of pixel $i$ being assigned to class $c$; $p_{c}\\in\\{0,1\\}$ indicates the presence of class $c$ or its absence; and $\\hat{p}_{c}\\in[0,1]$ corresponds to the predicted probability of class $c$ being present in the image. Note that it is possible to use a weighted sum of the two losses depending on which task's performance we want to optimise.\n\nOverall, each component of the proposed approach aims to capture global information and incorporate it into the network, hence the name \\textit{global deconvolutional network}. Besides that, the proposed interpolation also effectively upsamples the coarse output, and a nonlinear upsampling can be achieved with the addition of an activation function on top of the block. The complete architecture of our approach is presented in Figure~\\ref{architecture}.\n\n\n\\section{Experiments}\n\\label{exps}\n\\subsection{Implementation details} \n\nWe have implemented the proposed methods using Caffe \\cite{JiaSDKLGGD14}, the popular deep learning framework. Our training procedure follows the practice of the corresponding baseline models: DeepLab \\cite{ChenPKMY14} and FCN \\cite{LongSD15}. Both of them employ the VGG-16 net pre-trained on ImageNet \\cite{DengDSLL009}. \n\nWe use Stochastic Gradient Descent (SGD) with momentum and train with a minibatch size of 20 images. \nWe start the training process with the learning rate equal to $10^{-8}$ and divide it by $10$ when the validation accuracy stops improving. \nWe use momentum of $0.9$ and weight decay of $0.0005$.\nWe initialise all additional layers randomly as in \\cite{GlorotB10} and fine-tune them by backpropagation with a lower learning rate before finally training the whole network. \n\n\\subsection{Dataset}\n\nWe evaluate the performance of the proposed approach on the PASCAL VOC 2012 segmentation benchmark \\cite{EveringhamGWWZ10}, which consists of 20 semantic categories and one background category.\nFollowing \\cite{HariharanABMM11}, we augment the training data to 8498 images and to 10582 images for the FCN and DeepLab models, respectively. \n\nThe segmentation performance is evaluated by the mean pixel-wise intersection-over-union (mIoU) score \\cite{EveringhamGWWZ10}, defined as follows:\n\\begin{equation}\n\\label{miou}\n\\mathrm{mIoU} = \\frac{1}{|C|}\\sum_{c\\in{C}}n_{cc}\/\\left(\\sum_{c'\\in{C}}{(n_{cc'}+n_{c'c})}-n_{cc}\\right),\n\\end{equation}\nwhere $n_{cc'}$ represents the number of pixels of class $c$ predicted to belong to class $c'$ (a minimal sketch of this computation is given at the end of this subsection).\n\nFirst, we conduct all our experiments on the PASCAL VOC \\textit{val} set, and then compare the best performing models with their corresponding baseline models on the PASCAL VOC \\textit{test} set. 
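As promised above, a minimal NumPy sketch of the mIoU computation of Equation~\\eqref{miou} via a confusion matrix (our illustration; 255 is treated as the usual PASCAL VOC ignore label):\n\\begin{verbatim}\nimport numpy as np\n\ndef mean_iou(preds, gts, num_classes):\n    # preds, gts: iterables of (H, W) integer label maps.\n    conf = np.zeros((num_classes, num_classes), dtype=np.int64)\n    for p, g in zip(preds, gts):\n        valid = g != 255                    # skip ignored pixels\n        idx = num_classes * g[valid] + p[valid]\n        conf += np.bincount(idx, minlength=num_classes ** 2\n                            ).reshape(num_classes, num_classes)\n    inter = np.diag(conf)                   # n_cc\n    union = conf.sum(0) + conf.sum(1) - inter\n    return np.mean(inter \/ np.maximum(union, 1))\n\\end{verbatim}\n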
As the annotations for the test data are not available, we send our predictions to the PASCAL VOC Evaluation Server.\\footnote{http:\/\/host.robots.ox.ac.uk\/}\n\n\\begin{figure*}\n\\vskip 0.2in\n\\begin{center}\n\\centerline{\\includegraphics[page=1,height=\\dimexpr\\textheight-101pt\\relax]{figures\/combo}}\n\\caption{ \\textbf{Qualitative results on the validation set}. \\textit{(a)} The last column represents our approach, which includes the replacement of standard deconvolution with global deconvolution and the addition of the multi-label classification loss. \\textit{(b)} The fourth and sixth columns demonstrate our model, where bilinear interpolation is replaced with global deconvolution. The last two columns also incorporate a conditional random field (CRF) as a post-processing step. \\textit{Best viewed in colour.}}\n\\label{fcn32s}\n\\end{center}\n\\vskip -0.2in\n\\end{figure*}\n\n\\subsection{Experiments with FCN-32s}\nWe conduct several experiments with FCN-32s as a baseline model. During the training stage the images are resized to $500\\times 500$.\\footnote{This is the maximum value for both the height and width in the PASCAL VOC dataset.} We evaluate all the models on the holdout dataset of 736 images as in \\cite{LongSD15}, and send the test results of the best performing ones to the evaluation server.\n\nThe original FCN-32s model employs the standard deconvolution operation (also known as \\textit{backwards convolution}) to upsample the coarse output. We replace it with the proposed global deconvolution and randomly initialise the new parameters as in \\cite{GlorotB10}. We fix the rest of the network to pre-train the added block, and after that train the whole network. \nGlobal interpolation already improves on its baseline model on the validation dataset, as can be seen in Table \\ref{table1}.\n\nThe baseline model deals with inputs of different sizes via cropping the predicted mask to the same resolution as the corresponding input. Other popular options include either 1) padding or 2) resizing to fixed input dimensions. In the case of global deconvolution, we propose a more elegant solution. Recall that the parameters of this block can be represented as matrices $\\mathbf{K_{h}}\\in{\\mathbb{R}^{H\\times{h}}}$ and $\\mathbf{K_{w}}\\in{\\mathbb{R}^{W\\times{w}}}$, where ${H}={W}=500$, ${h}={w}=16$ during the training stage. Then, given a test image $\\{\\mathbf{I}\\in{\\mathbb{R}^{3\\times{\\hat{H}}\\times{\\hat{W}}}}|\\hat{W},\\hat{H}\\leq{500}\\}$, we subset the learned matrices to acquire $\\mathbf{\\hat{K}_{{h}}}\\in{\\mathbb{R}^{\\hat{H}\\times{\\hat{h}}}}$ and $\\mathbf{\\hat{K}_{w}}\\in{\\mathbb{R}^{\\hat{W}\\times{\\hat{w}}}}$ ($\\hat{w},\\hat{h}\\leq{16}$) and proceed with the same operation. To subset, we keep only the first $\\hat{H},\\hat{W}$ rows and the first $\\hat{h},\\hat{w}$ columns of the corresponding matrices, and discard all the rest. We have found that this does not affect the final performance of our model.\n\nNext, to increase the recognition accuracy we also append the multi-label classification loss. This slightly improves the validation score in comparison to the baseline model, while the combination with global interpolation gives a further boost in performance (FCN-32s+GDN). \n \nBesides that, we have also conducted additional experiments with FCN-32s, where we insert a fully-connected layer directly after the coarse output (FCN-32s+FC). The idea behind this trick is to allow the network to refine the local predictions based on the information from all the pixels. 
One drawback of such an approach is that the fully-connected layer requires a fixed-size input, although the solutions discussed above are also applicable here. Nevertheless, neither of these methods gives satisfactory results during the empirical evaluations. Therefore, we proceed with a slightly different architecture: before appending the fully-connected layer, we first add a spatial pyramid pooling layer \\cite{HeZR015}, which produces the output of the same length given an arbitrarily sized input. In particular, we use a $5$-level pyramid with max-pooling. Though during evaluation this approach alone does not give any improvement in the validation score over the baseline model, its ensemble with the global deconvolution model (FCN-32s+GDN+FC) improves previous results, which indicates that these models may be complementary to each other. \n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|l || c|}\n\\hline\nMethod & mean IoU \\\\\n\\hline\nFCN-32s \\cite{LongSD15} & 59.4 \\\\\nFCN-32s + Label Loss & 59.8 \\\\\nFCN-32s + Global Interp. & 60.9 \\\\\nFCN-32s + GDN & \\textbf{61.2} \\\\\nFCN-32s + GDN + FC & \\textbf{62.5} \\\\ \n\\hline\nDL-LargeFOV \\cite{ChenPKMY14} & 73.3 \\\\\nDL-LargeFOV + Label Loss & 73.9 \\\\\nDL-LargeFOV + Global Interp. & 74.2 \\\\\nDL-LargeFOV + GDN & \\textbf{75.1} \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{Mean intersection over union accuracy of our approach (GDN), which includes the addition of the multi-label classification loss and global interpolation, compared with the baseline model on the reduced validation dataset (for FCN-32s) and on the PASCAL VOC 2012 validation dataset (for DL-LargeFOV).}\n\\label{table1}\n\\end{table}\n\nWe continue with the evaluation of the best performing models on the test set (Table~\\ref{table3}). Both of them improve their baseline model, FCN-32s, and even outperform FCN-8s, another model by Long~\\emph{et al}\\bmvaOneDot~\\cite{LongSD15} with the skip-architecture, which combines information from lower layers with the final prediction layer.\n\nSome examples of our approach can be seen in Figure~\\ref{fcn32s}.\n\n\\subsection{Experiments with DeepLab}\n\nAs the next baseline model we consider DeepLab-LargeFOV \\cite{ChenPKMY14}. With the help of the \\textit{algorithme {\\`a} trous} \\cite{holschneider1989real, Shensa92}, the model has a larger Field-of-View (FOV), which results in finer predictions from the network. Besides that, this model is significantly faster and contains fewer parameters than the plain modification of the VGG-16 net, due to the reduced number of filters of the last two convolutional layers. The model employs simple bilinear interpolation to acquire the output of the same resolution as the input. \n\nWe proceed with the same experiments as for the FCN-32s model, except for the ones involving the fully-connected layer. As DeepLab-LargeFOV has a higher-resolution coarse output, the inclusion of the fully-connected layer would result in a weight matrix with several billion parameters. Therefore, we omit these experiments. \n\nWe separately replace the bilinear interpolation with global deconvolution, append the label loss and estimate the joint GDN model. We carry out the same strategy outlined above during the testing stage to deal with variable-size inputs. All the experiments lead to improvements over the baseline model, with GDN showing a significantly higher score on the PASCAL VOC 2012 val set (Table \\ref{table1}). 
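\n\nConcretely, the upsampling and the test-time subsetting used above amount to two matrix products and a slicing operation. A minimal NumPy sketch (our illustration only; the separable form $\\mathbf{K_{h}}\\mathbf{X}\\mathbf{K_{w}}^{T}$ is the one suggested by the matrix dimensions, and all names are hypothetical) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef gdn_upsample(coarse, K_h, K_w, H_hat, W_hat):\n    # coarse: (h_hat, w_hat) score map of a test image;\n    # K_h: (H, h) and K_w: (W, w), learned at H = W = 500.\n    h_hat, w_hat = coarse.shape\n    Kh = K_h[:H_hat, :h_hat]   # keep first H_hat rows, h_hat cols\n    Kw = K_w[:W_hat, :w_hat]   # keep first W_hat rows, w_hat cols\n    return Kh @ coarse @ Kw.T  # (H_hat, W_hat) upsampled map\n\\end{verbatim}\n\n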
\n\n\\begin{figure}[ht]\n\\vskip 0.2in\n\\begin{center}\n\\centerline{\\includegraphics[scale=0.25]{figures\/failure_cases}}\n\\caption{Failure cases of our approach, Global Deconvolutional Network (GDN), on the PASCAL VOC 2012 val set. \\textit{Best viewed in colour.} \n}\n\\label{failure}\n\\end{center}\n\\vskip -0.2in\n\\end{figure}\n\\addtocounter{footnote}{-1}\n\\begin{table*}[t]\n\\setlength{\\tabcolsep}{3pt}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\resizebox{\\textwidth}{!}\n{\\begin{tabular}{|c | *{20}{c} || c |}\n\\hline\nMethod & aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mIoU\\\\\n\\hline\nFCN-8s~\\cite{LongSD15} & \\textbf{76.8} & \\textbf{34.2} & 68.9 & 49.4 & 60.3 & 75.3 & 74.7 & 77.6 & 21.4 & \\textbf{62.5} & 46.8 & 71.8 & \\textbf{63.9} & \\textbf{76.5} & 73.9 & 45.2 & \\textbf{72.4} & 37.4 & 70.9 & 55.1 & 62.20 \\\\\n\\textbf{FCN-32s + GDN} & 74.5 & 31.8 & 66.6 & 49.7 & 60.5 & 76.9 & 75.8 & 76.0 & 22.8 & 57.5 & 54.5 & 72.9 & 59.4 & 74.9 & 73.6 & 50.9 & 67.5 & 43.2 & 70.0 & 56.4 & 62.22 \\\\ \n\\textbf{FCN-32s + GDN + FC} & 75.6 & 31.5 & \\textbf{69.2} & \\textbf{51.6} & \\textbf{62.9} & \\textbf{78.8} & \\textbf{76.7} & \\textbf{78.6} & \\textbf{24.6} & 61.6 & \\textbf{60.3} & \\textbf{74.5} & 62.6 & 76.0 & \\textbf{74.3} & \\textbf{51.4} & 70.6 & \\textbf{47.3} & \\textbf{73.9} & \\textbf{58.3} & \\textbf{64.37} \\\\\n\\hline\nDL-LargeFOV-CRF~\\cite{ChenPKMY14} & 83.4 & 36.5 & 82.5 & 62.2 & 66.4 & 85.3 & 78.4 & 83.7 & 30.4 & 72.9 & 60.4 & 78.4 & 75.4 & 82.1 & 79.6 & 58.2 & 81.9 & 48.8 & 73.6 & 63.2 & 70.34 \\\\\nDeconvNet+CRF\\_VOC\\cite{NohHH15} & 87.8 & 41.9 & 80.6 & 63.9 & 67.3 & 88.1 & 78.4 & 81.3 & 25.9 & 73.7 & 61.2 & 72.0 & 77.0 & 79.9 & 78.7 & 59.5 & 78.3 & 55.0 & 75.2 & 61.5 & 70.50 \\\\\nDL-MSC-LargeFOV-CRF~\\cite{ChenPKMY14} & 84.4 & \\textbf{54.5} & 81.5 & 63.6 & 65.9 & 85.1 & 79.1 & 83.4 & 30.7 & 74.1 & 59.8 & 79.0 & 76.1 & 83.2 & 80.8 & 59.7 & 82.2 & 50.4 & 73.1 & 63.7 & 71.60 \\\\\nEDeconvNet+CRF\\_VOC\\cite{NohHH15} & 89.9 & 39.3 & 79.7 & 63.9 & 68.2 & 87.4 & 81.2 & 86.1 & 28.5 & 77.0 & 62.0 & 79.0 & 80.3 & 83.6 & 80.2 & 58.8 & 83.4 & 54.3 & 80.7 & 65.0 & 72.50 \\\\\n\\textbf{DL-LargeFOV-CRF + GDN} & 87.9 & 37.8 & \\textbf{88.8} & 64.5 & 70.7 & 87.7 & 81.3 & \\textbf{87.1} & 32.5 & 76.7 & \\textbf{66.7} & 80.4 & 76.6 & 82.2 & 82.3 & 57.9 & 84.5 & 55.9 & 78.5 & 64.2 & 73.21 \\\\\n\\textbf{DL-L\\_FOV-CRF + GDN\\_ENS} & 88.6 & 48.6 & \\textbf{88.8} & 64.7 & 70.4 & 87.2 & 81.8 & 86.4 & 32.0 & 77.1 & 64.1 & 80.5 & 78.0 & 84.0 & 83.3 & 59.2 & \\textbf{85.9} & 56.8 & 77.9 & 65.0 & \\textbf{74.02} \\\\\nAdelaide\\_Cont\\_CRF\\_VOC~\\cite{LiuLLLT15} & \\textbf{90.6} & 37.6 & 80.0 & \\textbf{67.8} & \\textbf{74.4} & \\textbf{92.0} & \\textbf{85.2} & 86.2 & \\textbf{39.1} & \\textbf{81.2} & 58.9 & \\textbf{83.8} & \\textbf{83.9} & \\textbf{84.3} & \\textbf{84.8} & \\textbf{62.1} & 83.2 & \\textbf{58.2} & \\textbf{80.8} & \\textbf{72.3} & \\textbf{75.30} \\\\\n\\hline\n\\end{tabular}}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\caption{Mean intersection over union accuracy of our approach (\\textbf{GDN}), compared with the competing models on the PASCAL VOC 2012 test set.\\protect\\footnotemark~Lacking the results of FCN-32s on the test set, we thus compare it directly with a more powerful model, FCN-8s \\cite{LongSD15}. 
None of the methods presented here uses the MS COCO dataset.}\n\\label{table3}\n\\vskip -0.1in\n\\end{table*}\n\n\n\n\nThe DeepLab-LargeFOV model also incorporates a fully-connected CRF \\cite{LaffertyMP01, KohliLT09, KrahenbuhlK11} as a post-processing step. \nTo set the parameters of the fully-connected CRF, we employ the same method of cross-validation as in \\cite{ChenPKMY14} on a subset of the validation data. Then we send our best performing model enriched by CRF to the evaluation server. \nOn the PASCAL VOC 2012 test set our single model (DL-LargeFOV-CRF+GDN) achieves $73.2\\%$ mIoU, a significant improvement over the baseline model (around $2.9\\%$), and even surpasses the multiscale DeepLab-MSc-LargeFOV model by $1.6\\%$ (Table~\\ref{table3}); the predictions averaged across our several models (DL-L\\_FOV-CRF+GDN\\_ENS) give a further improvement of $0.8\\%$, showing a score competitive with the models that do not exploit the Microsoft COCO dataset~\\cite{LinMBHPRDZ14}.\\\\\nAs is the case with the FCN-32s model, we obtain performance on par with the multi-resolution variant using a much simpler architecture. Moreover, our single CRF-equipped global deconvolutional network (DL-LargeFOV-CRF+GDN) even surpasses the results of the competing approach (DeconvNet+CRF~\\cite{NohHH15}) by $2.7\\%$, where the deconvolutional part of the network contains significantly more parameters: almost 126M compared to fewer than 70K for global deconvolution; in the case of ensembles, the improvement is over $1.5\\%$.\n\nThe illustrative examples are presented in Figures~\\ref{fcn32s} and~\\ref{failure}.\n\n\n\n\n\\section{Conclusion} \n\\label{concl}\nIn this paper we addressed two important problems of semantic image segmentation: the upsampling of the low-resolution output from the network and the refinement of this coarse output, incorporating global information and the additional classification loss. We proposed a novel approach, \\textit{global deconvolution}, to acquire the output of the same size as the input for images of variable resolutions. We showed that \\textit{global deconvolution} effectively replaces standard approaches, and can be trained in a straightforward manner. \n\nOn the benchmark competition, PASCAL VOC 2012, we showed that the proposed approach outperforms the results of the baseline models. Furthermore, our method even surpasses the performance of more powerful multi-resolution models, which combine information from several blocks of the deep neural network. \\\\\n\n\\noindent\n\\textbf{Acknowledgements}\nThe authors would like to thank the anonymous reviewers for their helpful and constructive comments, and Gaee Kim for making Fig.~\\ref{architecture}.\nThis work is supported by the Ministry of Science, ICT \\& Future Planning (MSIP), Korea, under Basic\nScience Research Program through the National Research Foundation of Korea (NRF) grant (NRF-2014R1A1A1002662), under the ITRC (Information Technology Research Center) support program (IITP-2016-R2720-16-0007) supervised by the IITP (Institute for Information \\& communications Technology Promotion) and under NIPA (National IT Industry Promotion Agency) program (NIPA-S0415-15-1004).\n\n\\footnotetext{\\url{http:\/\/host.robots.ox.ac.uk:8080\/leaderboard\/displaylb.php?challengeid=11&compid=6}}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction \\label{s1}}\\rm\n\nAs usual, $\\C$ is the field of complex numbers, $\\Z_+$ is the set of\nnon-negative integers and $\\N$ is the set of positive integers. 
Let\n$X$ be a topological vector space and $T$ be a continuous linear\noperator acting on $X$. Recall that $x\\in X$ is called a {\\it\nhypercyclic vector} for $T$ if the orbit $\\{T^nx:n\\in\\Z_+\\}$ is\ndense in $X$. By $H(T)$ we denote the set of hypercyclic vectors for\n$T$. The operator $T$ is called {\\it hypercyclic} if it has a\nhypercyclic vector. For more information on hypercyclic operators\nsee the surveys \\cite{ge1,ge2} and references therein. We would just\nlike to mention that the set $H(T)$ for any hypercyclic operator $T$\ncontains all non-zero vectors from a dense linear subspace of $X$.\nThis follows from a fact due to Bourdon \\cite{bourd} (see also\n\\cite{ansa}): if $x\\in H(T)$, then $p(T)x\\in H(T)$ for any\nnon-zero polynomial $p$. The question whether $H(T)$ for a given\noperator $T$ must contain all non-zero vectors from a closed\ninfinite dimensional subspace of $X$ was studied by several authors.\nSee \\cite{alf1,alf2} for sufficient conditions in terms of the\nspectrum of $T$ for $H(T)$ to contain all non-zero vectors from an\ninfinite dimensional closed linear subspace of $X$ in the\ncase when $X$ is a complex Banach space.\n\nBy $\\H$ we denote the space of all entire functions $f:\\C\\to\\C$ with\nthe topology of uniform convergence on compact sets. It is\nwell-known that $\\H$ is a Fr\\'echet space. That is, $\\H$ is a complete\nmetrizable locally convex space, whose topology is defined by the\nincreasing sequence of norms $f\\mapsto \\max\\{|f(z)|:|z|\\leq n\\}$ for\n$n\\in\\N$. The differentiation operator\n\\begin{equation*}\nD:\\H\\to\\H,\\quad Df=f'\n\\end{equation*}\nis a continuous linear operator on $\\H$. Due to MacLane \\cite{mac},\n$D$ is hypercyclic. It is well-known \\cite{ge1} that the set of\nhypercyclic vectors of any hypercyclic operator on a separable\nmetrizable topological vector space is a dense $G_\\delta$-set. Hence\n$H(D)$ is a dense $G_\\delta$-set in $\\H$. We deal with two problems\nraised by Aron, Conejero, Peris and Seoane-Sep\\'ulveda in\n\\cite{aron1}. It is worth mentioning that $\\H$ is an algebra with\nrespect to pointwise multiplication.\n\n\\begin{question}\\label{ar1} Does $H(D)$ contain all non-zero vectors\nfrom a closed infinite dimensional linear subspace of $\\H$?\n\\end{question}\n\n\\begin{question}\\label{ar2} Does $H(D)$ contain all non-constant\nfunctions from a non-trivial subalgebra of $\\H$? In other words,\ndoes there exist $f\\in\\H$ such that $p\\circ f\\in H(D)$ for any\nnon-constant polynomial $p$?\n\\end{question}\n\nNote that the analog of the last question for the translation\noperator $Tf(z)=f(z-1)$ on $\\H$ has been answered negatively by the\nsame set of authors \\cite{aron2}. Namely, they have shown that for\nany $f\\in\\H$ and any $k\\geq 2$, $f^k\\notin H(T)$. In \\cite{aron1} it\nis also shown that the set $\\{f\\in\\H:f^n\\in H(D)\\ \\ \\text{for any}\\\nn\\in\\N\\}$ is a dense $G_\\delta$-set in $\\H$, thus providing\nevidence that the answer to Question~\\ref{ar2} could be affirmative.\nIn the present paper both questions above are answered affirmatively\nand constructively. It is worth noting that Question~\\ref{ar2} was\nrecently independently answered by Bayart and Matheron by\napplying the Baire theorem. 
Their proof will soon appear in the book\n\\cite{bama}.\n\n\\begin{theorem}\\label{main1} There is a closed infinite\ndimensional subspace $L$ of $\\H$ such that $L\\setminus\\{0\\}\\subset\nH(D)$.\n\\end{theorem}\n\n\\begin{theorem}\\label{main2} There exists $f\\in\\H$ such that $p\\circ f\\in H(D)$ for\nany non-constant polynomial $p$.\n\\end{theorem}\n\n\\section{Preliminaries}\n\nBefore proving Theorems~\\ref{main1} and~\\ref{main2}, we would like\nto introduce some notation and mention a few elementary facts.\nThroughout the paper $\\P$ stands for the space $\\C[z]$ of all\ncomplex polynomials in one variable. Clearly $\\P$ is a dense linear\nsubspace of $\\H$. Let\n$$\n\\P_0=\\{0\\}\\ \\ \\text{and}\\ \\ \\P_k=\\{p\\in\\P:\\deg p<k\\}\\ \\ \\text{for}\\ \\ k\\in\\N.\n$$\nFor $k\\in\\N$ and $c>0$, we denote\n$$\n\\P_{k,c}=\\biggl\\{p(z)=\\sum_{j=0}^{k-1}c_jz^j:|c_j|\\leq c\\ \\\n\\text{for}\\ \\ 0\\leq j\\leq k-1\\biggr\\}.\n$$\nSince we are going to deal with the Taylor series expansion of\nfunctions $f\\in\\H$ rather than their values, we consider a sequence\nof norms defining the topology of $\\H$ different from the one\nmentioned in the introduction. Namely, for $a\\in\\N$ and $f\\in\\H$, we\nwrite\n\\begin{equation}\\label{norm}\n\\|f\\|_a=\\sum_{n=0}^\\infty |f_n|a^n,\\ \\ \\text{where $f\\in\\H$,}\\ \\\nf(z)=\\sum_{n=0}^\\infty f_nz^n.\n\\end{equation}\nIt is easy to see that the above sequence of norms is increasing and\ndefines the original topology on $\\H$. Moreover, each of these norms\nis submultiplicative:\n\\begin{equation}\\label{norm1}\n\\text{$\\|f\\|_a\\leq\\|f\\|_b$ and $\\|fg\\|_a\\leq \\|f\\|_a\\|g\\|_a$\nwhenever $f,g\\in\\H$, $a,b\\in\\N$, $a\\leq b$.}\n\\end{equation}\n\nObserve that $D(\\P_k)\\subseteq \\P_{k-1}$ for any $k\\in\\N$. In\nparticular, $D^n(\\P_k)=\\{0\\}$ if $n\\geq k$. Moreover, $\\|Dp\\|_a\\leq\n\\frac{k-1}{a}\\|p\\|_a$ for each $k,a\\in\\N$ and any $p\\in\\P_k$.\nIterating this estimate, we obtain\n\\begin{equation}\\label{normd2}\n\\|D^np\\|_a\\leq \\frac{|(k-n)\\dots(k-1)|}{a^n}\\|p\\|_a\\leq\n(k\/a)^n\\|p\\|_a\\ \\ \\text{for any $k,n,a\\in\\N$ and $p\\in\\P_k$}.\n\\end{equation}\nWe also consider the Volterra operator $V:\\H\\to\\H$, $Vf(z)=\\int_0^z\nf(t)\\,dt$. It is easy to see that\n\\begin{equation}\\label{vo}\nVf(z)=\\sum_{n=1}^\\infty \\frac{f_{n-1}}{n}z^n,\\ \\ \\text{where\n$f\\in\\H$,}\\ \\ f(z)=\\sum_{n=0}^\\infty f_nz^n\n\\end{equation}\nand that $V$ is a right inverse of $D$. In particular,\n\\begin{equation}\\label{ri}\nD^nV^n=I\\ \\ \\text{for any}\\ \\ n\\in\\N.\n\\end{equation}\nUsing (\\ref{vo}), one can easily verify that\n\\begin{equation}\\label{normv1}\n\\|V^nf\\|_a\\leq \\frac{a^n}{n!}\\|f\\|_a\\ \\ \\text{for any $n,a\\in\\N$ and\nany $f\\in\\H$.}\n\\end{equation}\nFor $f\\in\\H$, the {\\it support} of $f$ is the set\n$\\{n\\in\\Z_+:f^{(n)}(0)\\neq 0\\}$. Obviously $D$ shifts the supports\nto the left and $V$ shifts them to the right. 
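\nFor instance, if $f(z)=z^m$ with $m\\in\\Z_+$, then $V^nf(z)=\\frac{m!}{(m+n)!}\\,z^{m+n}$ and\n$$\n\\|V^nf\\|_a=\\frac{m!\\,a^{m+n}}{(m+n)!}\\leq\\frac{a^n}{n!}\\,a^m=\\frac{a^n}{n!}\\|f\\|_a,\n$$\nso that (\\ref{normv1}) holds with equality when $m=0$; accordingly, the support $\\{m\\}$ of $f$ is moved to $\\{m+1\\}$ by $V$ and to $\\{m-1\\}\\cap\\Z_+$ by $D$.\n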
In general, if $A$ is the\nsupport of $f$, then $A+1=\\{n+1:n\\in A\\}$ is the support of $Vf$ and\n$(A-1)\\cap \\Z_+$ is the support of $Df$.\n\nIn the proof of Theorems~\\ref{main1} and~\\ref{main2} we use a\nsequence in $\\N\\times\\P$ with specific properties.\n\n\\begin{lemma}\\label{NtP} There exists a sequence\n$\\{(d_k,p_k)\\}_{k\\in\\N}$ of elements of $\\N\\times\\P$ such that\n\\begin{itemize}\\itemsep=-3pt\n\\item[\\rm(\\ref{NtP}.1)]$d_k\\leq k$ and $p_k\\in\\P_{k,k}$ for each\n$k\\in\\N;$\n\\item[\\rm(\\ref{NtP}.2)]for any $d\\in\\N$, the set $\\{p_k:d_k=d\\}$ is\ndense in $\\H$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof} Since $\\P$ is dense in $\\H$ and $\\P$ is the union of $\\P_{k,k}$\nfor $k\\in\\N$, we can pick a sequence $\\{s_j\\}_{j\\in\\N}$ in $\\P$ such\nthat $\\{s_j:j\\in\\N\\}$ is dense in $\\H$ and $s_j\\in\\P_{j,j}$ for any\n$j\\in\\N$. It is well-known and easy to see that there is a bijection\n$\\phi:\\N\\times\\N\\to \\N$ such that $\\max\\{m,j\\}\\leq \\phi(m,j)$ for\nany $m,j\\in\\N$. We define a sequence $\\{(d_k,p_k)\\}_{k\\in\\N}$ of\nelements of $\\N\\times\\P$ by the formula\n$$\n(d_k,p_k)=(m,s_j)\\ \\ \\text{if}\\ \\ \\phi(m,j)=k.\n$$\nThe condition $\\max\\{m,j\\}\\leq \\phi(m,j)$ and the obvious inclusion\n$\\P_{j,j}\\subseteq \\P_{k,k}$ for $j\\leq k$ imply that (\\ref{NtP}.1)\nis satisfied. Next, let $d\\in\\N$. From the definition of $(d_k,p_k)$\nand bijectivity of $\\phi$ it follows that\n$\\{p_k:d_k=d\\}=\\{s_j:j\\in\\N\\}$. Hence (\\ref{NtP}.2) is also\nsatisfied.\n\\end{proof}\n\n\\section{Proof of Theorem~\\ref{main1}}\n\nLet $\\{(d_k,p_k)\\}_{k\\in\\N}$ be the sequence of elements of\n$\\N\\times\\P$ provided by Lemma~\\ref{NtP}. For each $d\\in\\N$ let\n$B_d=\\{k\\in\\N:d_k=d\\}$. By Lemma~\\ref{NtP}, the sets $B_d$ are infinite\ndisjoint subsets of $\\N$ whose union is $\\N$. Let $m_d=\\min B_d$\nand $B'_d=B_d\\setminus\\{m_d\\}$. By (\\ref{NtP}.1), $m_d\\geq d$. We\nalso need a sequence that increases fast enough. Namely, pick\n$\\beta:\\N\\to\\N$ such that\n\\begin{equation}\\label{beta}\n\\text{$\\beta(k+1)>\\beta(k)+k$ and $\\beta(k+1)^{\\beta(k)}\\leq\n2^{\\beta(k+1)}$ for any $k\\in\\N$}.\n\\end{equation}\nFor each $d\\in\\N$, we consider the series\n$$\nf_d=g_d+\\sum_{k\\in B'_d} V^{\\beta(k)}p_k,\\ \\ \\text{where}\\ \\\ng_d(z)=z^{\\beta(m_d)}.\n$$\nLet $a\\in\\N$. Since $p_k\\in\\P_{k,k}$, we have $\\|p_k\\|_a\\leq\nk^2a^k$. Thus using (\\ref{normv1}), we obtain\n$$\n\\sum_{k\\in B'_d} \\|V^{\\beta(k)}p_{k}\\|_a\\leq \\sum_{k=1}^\\infty\n\\|V^{\\beta(k)}p_k\\|_a\\leq \\sum_{k=1}^\\infty \\frac{k^2a^k\na^{\\beta(k)}}{\\beta(k)!}<\\infty.\n$$\nHence the series defining $f_d$ converges absolutely and therefore\n$f_d\\in\\H$ for $d\\in\\N$. The inclusions $p_k\\in\\P_k$ and the\ninequality $\\beta(k+1)>\\beta(k)+k$ imply that the supports of $g_d$\nand $V^{\\beta(k)}p_{k}$ are pairwise disjoint. Hence the supports of\n$f_d$ are pairwise disjoint. It is easy to verify that each sequence\nof non-zero functions in $\\H$ with pairwise disjoint supports is a\nSchauder basic sequence. Hence $\\{f_d\\}_{d\\in\\N}$ is a Schauder\nbasic sequence in $\\H$ and the closed linear span $L$ of\n$\\{f_d:d\\in\\N\\}$ consists of the sums of convergent series of the\nshape $\\sum\\limits_{d=1}^\\infty c_df_d$ with $c_d\\in\\C$. In order to\nprove Theorem~\\ref{main1}, it is enough to demonstrate that\n$L\\setminus\\{0\\}\\subseteq H(D)$.\n\nLet $f\\in L\\setminus\\{0\\}$. 
Then $f$ is the sum of a convergent\nseries $\\sum\\limits_{d=1}^\\infty c_df_d$ with $c_d\\in\\C$ not\nall zero. Since a non-zero scalar multiple of a hypercyclic vector\nis hypercyclic, multiplying $f$ by a non-zero constant, we can\nassume that there is $b\\in\\N$ such that $c_b=1$. Considering the\nnatural projection onto the subspace of $\\H$ of functions whose\nsupport is contained in $\\{\\beta(m_d):d\\in\\N\\}$, we see that the\nseries $\\sum\\limits_{d=1}^\\infty c_dg_d$ converges in $\\H$. Since\n$g_d(z)=z^{\\beta(m_d)}$, it follows that $|c_d|^{1\/\\beta(m_d)}\\to\n0$. In order to verify that $f\\in H(D)$ it suffices to demonstrate\nthat\n\\begin{equation}\\label{clo1}\n\\text{$D^{\\beta(k)}f-p_k\\to 0$ in $\\H$ as $k\\to\\infty$, $k\\in B_b$}.\n\\end{equation}\nIndeed, by Lemma~\\ref{NtP}, $\\{p_k:k\\in B_b\\}$ is dense in $\\H$.\nThen (\\ref{clo1}) implies that $\\{D^{\\beta(k)}f:k\\in B_b\\}$ is dense\nin $\\H$ and therefore $f\\in H(D)$.\n\nIt remains to prove (\\ref{clo1}). Let $C=\\{m_d:d\\in\\N\\}$. Using\nthe definitions of $f$ and $f_d$ and the condition\n$|c_d|^{1\/\\beta(m_d)}\\to 0$ it is easy to see that\n$$\nf=\\sum_{d\\in\\N}c_df_d=g+h,\\ \\ \\text{where}\\ \\ g=\\sum_{d\\in\\N}c_dg_d\\\n\\ \\text{and}\\ \\ h=\\sum_{k\\in\\N\\setminus C} c_{d_k}V^{\\beta(k)}p_k,\n$$\nwhere the series defining $h$ and $g$ are absolutely convergent in\n$\\H$. Let $a\\in\\N$ and $k\\in B'_b$. Since $D^n(\\P_j)=\\{0\\}$ for\n$n\\geq j$, using the above display together with (\\ref{ri}), we\nobtain\n\\begin{equation}\\label{sum}\nD^{\\beta(k)}f=p_k+q_k+h_k,\\ \\ \\text{where}\\ \\\nq_k=\\sum_{m_d>k}c_dD^{\\beta(k)}g_d\\ \\ \\text{and}\\ \\\nh_k=\\sum_{n\\notin C,\\ n>k} c_{d_n}V^{\\beta(n)-\\beta(k)}p_n.\n\\end{equation}\nThe condition $|c_d|^{1\/\\beta(m_d)}\\to 0$ implies that there is\n$c=c(a)>0$ such that $|c_d|\\leq c(4a)^{-\\beta(m_d)}$ for any\n$d\\in\\N$. As we have already mentioned, $\\|p_n\\|_a\\leq n^2a^n$ for\nany $n\\in\\N$. By (\\ref{beta}), $\\beta(n)-\\beta(k)\\geq n$ for any\n$n>k$. Hence, using (\\ref{normv1}) and the inequality $|c_d|\\leq c$,\nwe have\n$$\n\\|h_k\\|_a\\leq \\sum_{n=k+1}^\\infty\n\\frac{cn^2a^{\\beta(n)-\\beta(k)+n}}{(\\beta(n)-\\beta(k))!}\\leq\n\\!\\!\\!\\sum_{n=k+1}^\\infty\n\\frac{c(\\beta(n)-\\beta(k))^2a^{2(\\beta(n)-\\beta(k))}}{(\\beta(n)-\\beta(k))!}\n\\leq \\!\\!\\!\\sum_{m=k+1}^\\infty \\frac{cm^2a^{2m}}{m!}\\to 0\n$$\nas $k\\to\\infty$. Next, we estimate $\\|q_k\\|_a$. Using the\ninequality $|c_d|\\leq c(4a)^{-\\beta(m_d)}$, we obtain\n$$\n\\|q_k\\|_a\\leq\\!\\!\n\\sum_{m_d>k}\\!\\!c(4a)^{-\\beta(m_d)}\\|D^{\\beta(k)}g_d\\|_a\\leq c\\!\\!\n\\sum_{m_d>k}\\!\\!\n(4a)^{-\\beta(m_d)}\\beta(m_d)^{\\beta(k)}a^{\\beta(m_d)}=\nc\\!\\!\\sum_{m_d>k}\\! 4^{-\\beta(m_d)}\\beta(m_d)^{\\beta(k)},\n$$\nwhere the last inequality in the above display follows from the\nequalities $g_d(z)=z^{\\beta(m_d)}$ and (\\ref{normd2}). According to\n(\\ref{beta}), $\\beta(m_d)^{\\beta(k)}\\leq 2^{\\beta(m_d)}$ whenever\n$m_d>k$. Substituting these inequalities into the above display we\narrive at\n$$\n\\|q_k\\|_a\\leq c \\sum_{m_d>k} 2^{-\\beta(m_d)}.\n$$\nIt follows that $\\|q_k\\|_a\\to 0$ as $k\\to\\infty$. As we already\nknow, $\\|h_k\\|_a\\to 0$ as $k\\to\\infty$. Since $a\\in\\N$ is\narbitrary, $q_k\\to 0$ and $h_k\\to 0$ in $\\H$ as $k\\to\\infty$. By\n(\\ref{sum}), $D^{\\beta(k)}f-p_k\\to 0$ in $\\H$ as $k\\to\\infty$, $k\\in\nB_b$. 
The proof of (\\ref{clo1}) and of Theorem~\\ref{main1} is now\ncomplete.\n\n\\section{Proof of Theorem~\\ref{main2}}\n\nThe main building blocks of our construction of a generator of an\nalgebra contained in $H(D)$ are polynomials of the shape\n\\begin{equation}\\label{ra}\nr_{\\alpha,n}(z)=\\frac{z^n}{n^n}+q_{\\alpha,n}(z),\\ \\ \\text{where}\\ \\\nq_{\\alpha,n}(z)=n^{(d-1)n}\\frac{V^{n^2+(d-1)n}p(z)}{dz^{(d-1)n}},\\\nn>1,\\ \\alpha=(d,p)\\in \\N\\times\\P.\n\\end{equation}\n\nDirect calculations show that\n\\begin{equation}\\label{ra1}\n\\text{if}\\ p(z)=\\sum\\limits_{j=0}^{k-1}c_jz^j,\\ \\ \\text{then}\\ \\\nq_{\\alpha,n}(z)=\\frac{n^{(d-1)n}}{d}\\sum_{j=0}^{k-1}\\frac{j!c_jz^{j+n^2}}{(j+(d-1)n+n^2)!}.\n\\end{equation}\nThe next lemma is the reason for the choice of $r_{\\alpha,n}$.\n\n\\begin{lemma}\\label{est1} Let $\\alpha=(d,p)\\in \\N\\times\\P$ and $a\\in\\N$.\nThen\n\\begin{align}\\label{ra2}\n&\\lim_{n\\to\\infty}\\|r_{\\alpha,n}\\|_a=0,\n\\\\ \\label{ra3}\n& \\text{for any $\\nu,b\\in\\N$ and $h\\in\\P$,}\\ \\\n\\lim_{n\\to\\infty}\\|D^{\\nu}(hr_{\\alpha,n}^b)\\|_a=0,\n\\\\ \\label{ra4}\n&\\text{for any $h\\in\\P$ and $b\\in\\N$, $1\\leq b<d$,}\\ \\\n\\lim_{n\\to\\infty}\\|D^{n^2+(d-1)n}(hr_{\\alpha,n}^b)\\|_a=0,\n\\\\ \\label{ra5}\n&\\lim_{n\\to\\infty}\\|p-D^{n^2+(d-1)n}(r_{\\alpha,n}^d)\\|_a=0.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof} Pick $k\\in\\N$ such that $p\\in\\P_k$ and write\n$p(z)=\\sum\\limits_{j=0}^{k-1}c_jz^j$. By (\\ref{ra1}),\n$$\n\\|q_{\\alpha,n}\\|_a=\\frac{n^{(d-1)n}}{d}\\sum_{j=0}^{k-1}\\frac{j!|c_j|a^{j+n^2}}{(j+(d-1)n+n^2)!}\\ \\ \\text{for any $n>1$}.\n$$\nHence there exists\n$c=c(a,d,p)>0$ such that\n\\begin{equation}\\label{qan}\n\\|q_{\\alpha,n}\\|_a\\leq (cn)^{-2n^2}\\ \\ \\text{for any $n>1$}.\n\\end{equation}\nBy (\\ref{ra}), $r_{\\alpha,n}(z)=(z\/n)^n+q_{\\alpha,n}(z)$. Hence\n$\\|r_{\\alpha,n}\\|_a\\leq (a\/n)^n+\\|q_{\\alpha,n}(z)\\|_a$ and therefore\n(\\ref{ra2}) follows from (\\ref{qan}).\n\nAccording to (\\ref{qan}), we can pick $c_1=c_1(a,p,d)>1$ such that\n$\\|q_{\\alpha,n}\\|_a\\leq (c_1-1)(a\/n)^n$ for any $n>1$. Then\n$\\|r_{\\alpha,n}\\|_a\\leq (a\/n)^n+\\|q_{\\alpha,n}(z)\\|_a\\leq\nc_1(a\/n)^n$ for any $n>1$. Let $h\\in\\P$ and $b,\\nu\\in\\N$. Using\nsubmultiplicativity of the norm $\\|\\cdot\\|_a$, we obtain\n$$\n\\|hr_\\alpha^b\\|_a\\leq c_1^b\\|h\\|_a (a\/n)^{bn}\\ \\ \\text{for any\n$n>1$}.\n$$\nNext, pick $\\mu\\in\\N$ such that $h\\in\\P_\\mu$. By (\\ref{ra}),\n$hr_\\alpha^b\\in\\P_{bn^2+bk+\\mu}$. Thus, applying the above display\nand the estimate (\\ref{normd2}), we have\n$$\n\\|D^\\nu(hr_\\alpha^b)\\|_a\\leq c_1^b\\|h\\|_a\n(bn^2+bk+\\mu)^\\nu(a\/n)^{bn}\\ \\ \\text{for any $n>1$}.\n$$\nEquality (\\ref{ra3}) follows immediately from the above estimate.\n\nFinally, assume that $1\\leq b\\leq d$. Since\n$r_{\\alpha,n}(z)=(z\/n)^n+q_{\\alpha,n}(z)$, we have\n\\begin{align*}\n&(hr_{\\alpha,n}^b)(z)=\\sum_{j=0}^b \\bin{b}{j}\n\\frac{z^{nj}q_{\\alpha,n}(z)^{b-j}h(z)}{n^{nj}}=f_n(z)+g_n(z),\\ \\\n\\text{where}\n\\\\\n&f_n(z)=\\frac{z^{bn}h(z)}{n^{bn}}+\\frac{bq_{\\alpha,n}(z)h(z)z^{(b-1)n}}{n^{(b-1)n}}\\\n\\ \\text{and}\\ \\ g_n(z)=\\sum_{0\\leq j\\leq b-2}\\bin{b}{j}\n\\frac{z^{nj}h(z)q_{\\alpha,n}(z)^{b-j}}{n^{nj}}.\n\\end{align*}\nFirst, we shall estimate $\\|g_n\\|_a$. Using (\\ref{qan}) and\nsubmultiplicativity of $\\|\\cdot\\|_a$, we get\n$$\n\\|g_n\\|_a\\leq \\|h\\|_a\\sum_{0\\leq j\\leq b-2}\\bin{b}{j}\n\\frac{a^{nj}}{n^{nj}}(cn)^{-2(b-j)n^2}\\ \\ \\text{for any $n>1$}.\n$$\nFor $n\\geq a$ we have $(a\/n)^{nj}\\leq 1$. Since $b-j$ in the above\nsum is at least $2$, for $n\\geq c^{-1}$ we have\n$(cn)^{-2(b-j)n^2}\\leq (cn)^{-4n^2}$. Hence, we can write\n\\begin{equation}\\label{ges}\n\\|g_n\\|_a\\leq \\|h\\|_a\\,(cn)^{-4n^2}\\sum_{0\\leq j\\leq b-2}\\bin{b}{j}\n\\leq 2^b\\|h\\|_a\\,(cn)^{-4n^2}\\ \\ \\text{for $n> \\max\\{a,c^{-1}\\}$}.\n\\end{equation}\nRecall that $h\\in\\P_\\mu$ and $p\\in\\P_k$. Then\n$g_n\\in\\P_{bn^2+bk+\\mu}$. 
According to (\\ref{normd2}),\n$\\|D^{n^2+(d-1)n}g_n\\|_a\\leq (bn^2+bk+\\mu)^{n^2+(d-1)n}\\|g_n\\|_a$.\nBy (\\ref{ges}),\n$$\n\\|D^{n^2+(d-1)n}g_n\\|_a\\leq\n2^b\\|h\\|_a\\,(bn^2+bk+\\mu)^{n^2+(d-1)n}(cn)^{-4n^2}\\ \\ \\text{for $n>\n\\max\\{a,c^{-1}\\}$}.\n$$\nPassing to the limit as $n\\to\\infty$, we arrive at\n\\begin{equation}\\label{ges1}\n\\lim_{n\\to\\infty}\\|D^{n^2+(d-1)n}g_n\\|_a =0.\n\\end{equation}\nIf $1\\leq b<d$, then for all sufficiently large $n$ the support of\n$f_n$ lies strictly below $n^2+(d-1)n$: indeed,\n$\\deg\\bigl(z^{bn}h(z)\\bigr)<bn+\\mu$, while the support of the second\nsummand of $f_n$ is contained in $[n^2+(b-1)n,\\,n^2+(b-1)n+k+\\mu)$.\nHence $D^{n^2+(d-1)n}f_n=0$ for such $n$, and (\\ref{ra4}) follows\nfrom (\\ref{ges1}). Now let $b=d$ and $h=1$, so that\n$f_n(z)=\\frac{z^{dn}}{n^{dn}}+\\frac{dq_{\\alpha,n}(z)z^{(d-1)n}}{n^{(d-1)n}}$.\nSince $n^2>n$, we have $n^2+(d-1)n>dn$ and therefore\n$D^{n^2+(d-1)n}$ annihilates the first summand in the above display.\nSince $q_{\\alpha,n}(z)=r_{\\alpha,n}(z)-(z\/n)^n$, from (\\ref{ra}) it\nfollows that\n$$\n\\frac{dz^{(d-1)n}q_{\\alpha,n}(z)}{n^{(d-1)n}}=(V^{n^2+(d-1)n}p)(z).\n$$\nAccording to (\\ref{ri}), we obtain\n$D^{n^2+(d-1)n}f_n=D^{n^2+(d-1)n}V^{n^2+(d-1)n}p=p$. Since\n$r_{\\alpha,n}^d=f_n+g_n$,\n$p-D^{n^2+(d-1)n}(r_{\\alpha,n}^d)=-D^{n^2+(d-1)n}g_n$ and\n(\\ref{ra5}) follows from (\\ref{ges1}).\n\\end{proof}\n\nWe are ready to prove Theorem~\\ref{main2}. Let\n$\\{(d_k,p_k)\\}_{k\\in\\N}$ be the sequence in $\\N\\times\\P$ provided by\nLemma~\\ref{NtP}. Denote $\\alpha_k=(d_k,p_k)$. We shall construct\ninductively natural numbers $n_k\\geq 2$ such that for any $k\\in\\N$,\n\\begin{itemize}\\itemsep=-3pt\n\\item[(a1)] $n_k>n_{k-1}$ if $k\\geq 2$;\n\\item[(a2)] $\\|r_k\\|_k\\leq 2^{-k}$, where $r_k=r_{\\alpha_k,n_k}$;\n\\item[(a3)] if $k\\geq 2$, then $\\|D^\\nu (f_k^j-f_{k-1}^j)\\|_k\\leq 2^{-k}$\nfor any $j\\leq k$ and $\\nu\\leq n^2_{k-1}+kn_{k-1}$, where\n$f_a=\\sum\\limits_{l=1}^a r_l$;\n\\item[(a4)] $\\|D^{\\nu_k} (f_k^{j})\\|_k\\leq 2^{-k}$ for $1\\leq j<d_k$,\nwhere $\\nu_k=n_k^2+(d_k-1)n_k$;\n\\item[(a5)] $\\|p_k-D^{\\nu_k}(f_k^{d_k})\\|_k\\leq 2^{-k}$.\n\\end{itemize}\n\nWe set $n_0=1$ and $f_0=0$. Suppose that $m\\in\\N$ and that the\nnumbers $n_1<\\ldots<n_{m-1}$ are already constructed. For $n>n_{m-1}$\nconsider\n$$\n\\text{$\\rho_{n}=r_{\\alpha_m,n}$, $\\phi_n=f_{m-1}+\\rho_n$ and\n$\\beta_n=n^2+(d_m-1)n$}.\n$$\nApplying Lemma~\\ref{est1}, we obtain\n\\begin{align}\\label{e1}\n&\\lim_{n\\to\\infty}\\|\\rho_n\\|_m=0.\n\\\\ \\label{e2}\n&\\lim_{n\\to\\infty}\\|D^\\nu(h\\rho_n^j)\\|_m=0\\ \\ \\text{for any\n$\\nu,j\\in\\N$ and $h\\in\\P$.}\n\\\\ \\label{e3}\n&\\lim_{n\\to\\infty}\\|p_m-D^{\\beta_n}(\\rho_n^{d_m})\\|_m=0\\ \\ \\text{and}\\ \\ \\lim_{n\\to\\infty}\\|D^{\\beta_n}(h\\rho_n^j)\\|_m=0\\ \\\n\\text{for any $h\\in\\P$ and $1\\leq j<d_m$.}\n\\end{align}\nBy (\\ref{e1})--(\\ref{e3}), we can choose $n_m>n_{m-1}$ such that\nconditions (a1)--(a5) with $k=m$ are satisfied, which completes the\ninductive construction. By (a2), the series\n$\\sum\\limits_{l=1}^\\infty r_l$ converges absolutely in $\\H$; let $f$\nbe its sum. Now let $p(z)=\\sum\\limits_{j=0}^{d}a_jz^j$ be a\nnon-constant polynomial, $d\\geq 1$, $a_d\\neq 0$. Using (a3)--(a5),\none easily verifies that $D^{\\nu_k}(p\\circ f)-a_dp_k\\to 0$ in $\\H$\nas $k\\to\\infty$, $k\\in B_d$, where $B_d=\\{k\\in\\N:d_k=d\\}$. Since, by\nLemma~\\ref{NtP}, $\\{p_k:k\\in B_d\\}$ is dense in $\\H$ and $a_d\\neq\n0$, the orbit $\\{D^{\\nu}(p\\circ f):\\nu\\in\\Z_+\\}$ is dense in $\\H$,\nthat is, $p\\circ f\\in H(D)$. This completes the proof of\nTheorem~\\ref{main2}.\n\n\\section{Concluding remarks}\n\nIt is easy to verify that the function $f$ constructed in the proof\nof Theorem~\\ref{main2} satisfies $|f(z)|\\leq Ce^{a|z|}$ for some\n$C,a>0$. In particular, $f$ has finite exponential type. It is\nworth noting that a function from $H(D)$ cannot have exponential\ntype $<1$ \\cite{mac}. It is also easy to verify that there can be no\ncommon growth restriction for the functions from the space $L$ from\nTheorem~\\ref{main1}. We would like to discuss possible modifications\nof Questions~\\ref{ar1} and~\\ref{ar2}. Note that $H(D)$ contains (up\nto the zero function) no non-trivial ideals in $\\H$. Indeed, let\n$f\\in\\H\\setminus\\{0\\}$. Then the function\n$g(z)=\\overline{f(\\overline{z})}$ also belongs to $\\H\\setminus\\{0\\}$\nand $(fg)(\\R)\\subseteq \\R$. The latter inclusion implies that\n$fg\\notin H(D)$. Indeed, the set of functions real on the real axis\nis closed and nowhere dense in $\\H$ and is preserved by $D$. Thus\nthe ideal generated by $f$ contains the non-zero function $fg$,\nwhich is not hypercyclic for $D$. 
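\nFor instance, for $f(z)=iz$ one gets $g(z)=-iz$ and $(fg)(z)=z^2$; in general, $g(x)=\\overline{f(x)}$ and hence $(fg)(x)=|f(x)|^2$ for every $x\\in\\R$.\n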
Finally, we would like to raise the\nfollowing question.\n\n\\begin{question}\\label{qs} Does $H(D)$ contain all non-constant\nfunctions from a non-trivial closed subalgebra of $\\H$?\nEquivalently, does there exist $f\\in\\H$ such that $g\\circ f\\in H(D)$\nfor any non-constant $g\\in\\H$?\n\\end{question}\n\nIt seems likely that the answer to the above question is negative.\nTo prove this, it would be sufficient, for any $f\\in\\H$, to find a\nnon-constant $g\\in\\H$ and a bounded sequence $\\{z_n\\}_{n\\in\\Z_+}$ in\n$\\C$ such that the sequence $\\{(g\\circ f)^{(n)}(z_n)\\}_{n\\in\\Z_+}$\nis bounded. Indeed, boundedness of the last sequence would prevent\n$g\\circ f$ from being hypercyclic for $D$.\n\nLeon and Montes \\cite{alf2} have shown that if $T$ is a continuous\nlinear operator on a Banach space $X$ and $\\sigma_e(T)$, the\nset of $\\lambda\\in\\C$ such that $T-\\lambda I$ is not Fredholm, does\nnot intersect the closed unit ball $\\{z\\in\\C:|z|\\leq 1\\}$, then\nthere are no closed infinite dimensional subspaces $L\\subset X$ such\nthat $L\\setminus\\{0\\}\\subset H(T)$. It is easy to see that\n$\\sigma_e(D)=\\varnothing$. Thus the above result does not carry\nthrough to operators on Fr\\'echet spaces.\n\n\\bigskip\n\n{\\bf Acknowledgements.} \\ The author is grateful to the referee for\nhelpful comments and numerous corrections.\n\n\\small\\rm\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn swarm robotics, a vast number of autonomous mobile robots cooperate to achieve complex goals \\cite{hsiang}. Individual members of the swarm are usually assumed to be simple, expendable, and computationally limited, and to move and act according to online local rules of behaviour. In this work, our goal is to formally study the ability of a simple, ``two-layered'' swarm-robotic system to complete an environmental coverage task called \\textit{uniform dispersal} assuming asynchronicity and that robots may crash whenever they attempt to move.\n\n``Coverage'' algorithms that enable a single- or multi-robot system to cover or explore unknown or dynamically uncertain environments are an important topic in mobile robotics. There has been great interest in applications to, for example, mapping \\cite{amigoni2010information, howard2002, corah2017efficient}, servicing and surveillance \\cite{chen2013temporal}, or search and rescue operations \\cite{jorgensen2017risk, calisi2005autonomous,basilico2011exploration, basilico2016semantically}, and a rich body of theoretical work exists (we refer the reader to the surveys \\cite{altshuler2018introduction, galceran2013survey}). A natural coverage problem for robotic swarms is the \\textit{uniform dispersal} problem, introduced in \\cite{hsiang}. In \\textit{uniform dispersal}, many robotic agents enter an unknown discrete graph environment over time via one or several source locations and are tasked to eventually occupy every vertex of the graph with a robot while avoiding collisions. \n\nSwarms are often claimed to be highly fault-tolerant, as redundancy and sheer numbers can enable the swarm to go on with its mission even if many robots malfunction \\cite{winfield2006faulttolerant}. However, as the size of a robotic fleet grows, so too does the opportunity for error. 
Specifically, three different complications that arise in multi-robot systems are further exacerbated in the swarm setting:\n\n\textbf{Asynchronicity.} As the number of robots grows, coordinating the robots' actions becomes a formidable task, as their actions and internal clocks can become highly unsynchronized. \n\n\textbf{Crashes.} We cannot expect to release a huge swarm of simple robots into an unknown environment without the occurrence of hardware or software faults that may cause robots to crash. \n\n\textbf{Traffic.} To avoid collisions, we do not wish for there to be too many robots crowding a given area, and so mobile robots should maintain safe distances from each other. In restricted physical environments, such requirements cause traffic delays, as robots must wait for other robots to move away before entering a target location.\n\nSuch challenges are discussed as a central direction of research for swarm robotics in \\cite{peleg2005distributed}. If the number of errors scales with the number of robots, are swarms ``worth the trouble''? The purpose of this work is to give a perspective on this question via a formal mathematical analysis. We study, in an abstract setting, the ability of a simple local rule to achieve \\textit{uniform dispersal} in the presence of crashes and asynchronicity. We are specifically interested in how the frequency of crashes affects the time to mission completion.\n\nWe first describe a ``two-layered'' rule of behaviour for swarms that is capable of achieving uniform dispersal. Using this algorithm, we show that a swarm can complete its mission quickly and reliably in unknown discrete environments, even in the presence of asynchronicity and frequent crashes. Hence, we claim that in our setting, many robots can win against many errors. In the spirit of swarm robotics, the algorithm relies only on local information to dictate robots' actions and is quite simple. This simplicity makes it amenable to analysis.\n\nOur swarm consists of a large reservoir of simple, anonymous, identical, and autonomous mobile robots that enter the environment over time via a source location $s$. The robots move across a discrete environment represented by an \\textit{a priori unknown} connected graph $G$ whose vertices represent spatial locations. The robots gradually expand their coverage of the environment by occupying certain locations and assisting other nearby robots in navigational tasks using a local, indirect communication scheme.\n\nThe swarm's robots switch between two modes: mobile and settled. The settled robots act as `nodes' of the current coverage of the graph environment, and the mobile robots move between locations with settled robots until they can find a new location where they themselves can settle. The settled `robot nodes' are capable of \\textit{pointing to} (``\\textit{marking}'') a single neighbouring location where there is another settled robot. ``Marking'' is understood to be a generic capability of the robots and could be accomplished by many different technologies, such as local radio communication or visual sensing; we refer to the Related Work section for possible implementations. \n\nAs more and more mobile robots become settled, their marks serve as a navigational network of the environment that is utilised by the remaining mobile robots. The mobile robots are capable of sensing the number of robots in neighbouring locations, and sensing when a settled robot is pointing to (marking) their location. 
They rely only on this information\nto make decisions. Hence, they operate in a GPS-denied, low-memory setting, meaning they act based only on local communications and local geographic features. The robots are tasked with settling at every vertex of $G$, and constructing an implicit \\textit{spanning tree} of $G$ via the settled robots and their pointer marks. \n\nThere are no restrictions on $G$ as long as it is connected. In principle, different robots need not even agree on the graph representation of their environment for our algorithm to work (e.g., in case they gradually build it from local sensory data), as the settled robots gradually construct a spanning tree which all robots agree on and use to move between locations. We assume, for simplicity, that they share the same representation. \n\n\\textbf{Physical constraints and asynchronicity.} We model the mobile robots as activating repeatedly at stochastic, independent exponential waiting times of rate $1$. When a robot activates, it may move or move-and-settle at a nearby location (once a robot settles, it remains stationary). We assume the physical constraint that any given location may contain no more than a single mobile robot and a single settled robot (and perhaps a number of crashed robots). Frequent traffic obstructions occur as robots block each other off from progressing.\n\nThis model of asynchronicity and limited vertex capacity in a graph environment is motivated by the \\textit{totally asymmetric simple exclusion process} (TASEP) in statistical mechanics. There is an extensive literature on this process as a model for a great variety of transport phenomena, such as traffic flow \\cite{chowdhury2000statistical} and biological transport \\cite{chou2011biologicaltasep}. Rigorous exact and asymptotic results for TASEP are known \\cite{Johansson2000, tracy2009asymptotics}, and our analysis technique shall be to compare our swarm's performance to a two-layered TASEP-like process. Since our robots are mostly in a state of ``traffic flow'' (waiting for other robots to move), references such as \\cite{chowdhury2000statistical} suggest that our model in fact captures many of the relevant traffic phenomena that will occur in real-life implementations.\n\n\\textbf{Adversarial crashing.} Similar to, e.g., \\cite{jorgensen2017risk}, we consider a risky traversal model where robots may crash whenever they try to move across an edge. We assume robots remain safe when not moving, as staying put is less risky than travelling (in fact, we need just the weaker assumption that settled robots, which \\textit{never} move, are safe). To facilitate analysis, we assume crashed robots do not prevent travel between the graph's vertex hubs. This assumption is applicable when such robots can be manoeuvred around or pushed aside, or the crash causes the robot to disappear. For example, we may consider crashed air-based robots falling to the ground during exploration of an environment. Alternatively, in a ground robot scenario, we can, with foresight, make vertices large enough that they can contain a small number of crashed robots in addition to the two active robots (and such local crashed robots are then bypassed using, e.g., local collision avoidance). \n\nSince, in our model, at most one new robot may arrive in the environment per time step, we assume that the number of crashes that occur is bounded in terms of the current time $t$ and a parameter $c$ which reflects the frequency at which crashes occur over time. 
When $c$ is close to $1$, the \\textit{vast majority} of robots that enter the environment will crash before achieving anything.\n\nBesides these limitations, we assume nothing more about the crashes that occur. In particular, a \\textit{virtual adversary} may choose crashes so as to be as obstructive as possible.\n\n\\textbf{Results.} We describe a local rule of behaviour (Algorithm \\ref{alg:localrule}) that can achieve uniform dispersion, even in the presence of frequent crashes and traffic obstructions. The rule is easy to understand and implement and is well-suited for a swarm of simple robots, mimicking a kind of branching depth-first search. In many mobile robot systems one wishes to construct a spanning tree of the environment for purposes of mapping, routing or broadcasting \\cite{abbas2006distributedspanning, agmon2006spanning, broder1989spanning, mapdrawing, gabriely2001spanning}. Our rule achieves this as well, by having robots act as nodes of the tree, and making them aware of their immediate descendants. Our goal is to study how crashes, asynchronicity, and traffic affect the swarm's performance under this rule of behaviour. \n\nWe prove that our robots are able to complete their mission in time linear in the size of the environment, and that performance degrades gracefully (by a factor $(1-c)^{-1}$) with the frequency of crashes. Given our assumptions and algorithm, it is not surprising that the robots can complete the dispersal in the presence of some crashes; rather, we show that even with many frequent crashes, the robots can still do so efficiently. \n\nSpecifically, let $n$ be the number of vertices in the environment $G$. We prove that dispersal completes before time $8 \\cdot \\big((1-c)^{-1} + o(1)\\big)n$ asymptotically almost surely (meaning with probability approaching $1$ as $n$ grows)--a worst-case bound on performance. No dispersal algorithm can complete in expected time less than linear in $n$, since this is the time it takes to even explore $n$ vertices, so when there are no crashes (but still there is traffic and asynchronicity) this bound is asymptotically tight. For, say, $c = 0.5$, we expect up to (roughly) $50$\\% of robots to crash before achieving anything, and our analysis says that therefore the swarm will take twice as long to achieve dispersal. This seems intuitive, but consider that the robots that eventually crash are (uselessly) present in the environment in the time leading to the crash, blocking other robots from entering or progressing. The analysis says that nevertheless, the ability of the rest of the swarm to achieve its goal is not disproportionately worsened.\n\nTo the best of our knowledge, with or without crashes, we are the first to consider a non-synchronous setting for the uniform dispersal problem where time to completion can explicitly be bounded, hence also the first to give explicit performance guarantees in a non-synchronous setting. In an asynchronous as opposed to a synchronous setting, there are many more possible configurations in which the robots might find themselves, which makes the analysis more difficult. We believe the references and techniques from statistical mechanics \\cite{Johansson2000, tracy2009asymptotics, chowdhury2000statistical} might be of general interest for tackling these kinds of topics.\n\nOur analysis extends also to a synchronous time setting, and to the case where robots enter the environment from \\textit{multiple} locations. Multiple entrance locations result in the robots constructing an implicit \\textit{spanning forest} instead. 
In both these settings, dispersion completes faster. The bound on performance we derive for the synchronous case is exact.\n\nFinally, we confirm our findings by numerically simulating our system in a number of environments and measuring performance. \n\n\\vspace{-0.5em}\n\\subsection{Related work}\n\\vspace{-0.25em}\nUniform dispersal was introduced by Hsiang et al. in \\cite{hsiang} for discrete grid environments of connected pixels (but their work can be extended to arbitrary graph environments). They considered a synchronous time setting where robots are allowed to send short messages to nearby robots, and showed time-optimal algorithms for this setting. Many variations have since been studied. Barrameda et al. extend the problem to the asynchronous setting with no explicit non-visual communication \\cite{barrameda2013uniform, barraswarm1}. Recent works include dispersal with weakened sensing \\cite{hideg2017uniformtime}, dispersal in arbitrary graph environments \\cite{dispersalgraphs2019}, and dispersal under energy constraints \\cite{arxivminimizingtravel}. Our model differs from previous work on several central points, including the presence of crashes, the two layers, and the ability to mark neighbours. Marking is weaker than the radio communication available to robots in \\cite{hsiang}, which enables robots to transfer many bits of data locally, but stronger than the indirect, visual communication assumed in several other works. \n\n\\begin{table}[]\n\\centering\n\\fontsize{7}{7}\\selectfont\n\\begin{tabularx}{\\textwidth}{p{1cm}p{1.1cm}p{0.7cm}p{1.4cm}p{1cm}p{0.8cm}}\n Reference & Environment & Time & Communication & Makespan & Crashes \\\\\n\\centering\\arraybackslash\\cite{hsiang} & Arbitrary & Synch. & Radio & $O(n)$ & \\centering\\arraybackslash x \\\\\n\\centering\\arraybackslash\\cite{arxivminimizingtravel} & Hole-less grid & Synch. & Visual & $O(n)$ & \\centering\\arraybackslash x \\\\\n\\centering\\arraybackslash\\cite{hideg2017uniformtime} & Grid & Synch. & Visual & $O(n)$ & \\centering\\arraybackslash x \\\\\n\\centering\\arraybackslash\\cite{barraswarm1} & Hole-less grid & Asynch. & Visual & undefined & \\centering\\arraybackslash x \\\\\n\\centering\\arraybackslash\\cite{barrameda2013uniform} & Grid & Asynch. & Radio & undefined & \\centering\\arraybackslash x \\\\\nOur work & Arbitrary & Stochastic Asynch. & Marking & $O(n)$ & \\centering\\arraybackslash\\checkmark\n\\end{tabularx}\n\\newline\n\\newline\n\\vspace{-1em}\n\\caption{A comparison of works on uniform dispersal.}\n\\vspace{-3em}\n\\label{tablecompare}\n\\end{table}\n\nBecause of differences in the settings, assumptions, and constraints, quantitative comparison of works on uniform dispersal is very difficult. Table \\ref{tablecompare} gives a \\textit{rough, non-exhaustive} overview of some differences, such as the supported kinds of environments (grid environment, hole-less grid environment, or arbitrary graph environment), synchronous versus asynchronous time, expected makespan (i.e., how long it takes the robots to complete their mission), and whether crashes are considered in the model.\n\nRobotic coverage, patrolling, and exploration with adversarial interference, as well as crashes, have been studied in problem settings different from our own. Agmon and Peleg studied a gathering problem for robots where a single robot may crash \\cite{agmon2006fault}, and gathering with multiple crashes was later discussed by Bouzid et al. in a similar setting \\cite{bouzid2013gathering}. 
Robotic exploration in an environment containing threats has been studied in \\cite{yehoshua2013robotic, yehoshua2015frontier}. Moreover, adversarial crashes of processes are often studied in general distributed algorithms (e.g., \\cite{delporte2011disagreement}). Differing from many of these works, we study a situation where the number of crashes scales with the mission's complexity (the time it takes to cover the environment), and where even the vast majority of robots may crash. However, to enable this, we assume access to a huge reservoir of robots waiting to replace crashed robots--i.e., a robotic swarm.\n \nRobotic coverage in various hazardous or adversarial GPS-denied settings has become an important topic in recent decades, since this opens the possibility of deploying robotic swarms in the real world, outside laboratory conditions \\cite{galceran2013survey, altshuler2018introduction, agmon2017adversarialrobotic}. Theoretical and empirical results about the performance of swarms in such settings may help inform our expectations of real world swarm-robotic fleets. To implement such systems in practice, the robots themselves must be capable of relative visual localization. This poses a technical challenge, as considerations of depth, angle of view, and persistent coverage come into play. In \\cite{biswas2012depth} a system of relative visual localization for mobile ground vehicles with low computing power is proposed. The system enables autonomous ground vehicles to navigate their environment while avoiding obstacles. In \\cite{saska2017system} a relative visual localization technique is developed for small quadcopters, with similar capabilities. In \\cite{prorok2011reciprocal} the authors discuss a localization algorithm for lightweight asynchronous multi-robot systems with lossy communication. These are examples of the techniques that may be used for the sensors of robots in such systems as the one described in this paper (see similar discussion in \\cite{arxivminimizingtravel}). \n \nA fascinating introduction to TASEP-like processes and their connection to other fields is \\cite{kriecherbauer2010pedestrian}. \n\n\n\\section{Model and System}\n\\label{modelsection}\n\nWe consider a swarm of mobile robotic agents performing world-embedded calculations on an unknown discrete environment represented by a connected graph $G$. The vertices of $G$ represent spatial locations, and the edges represent connections between these locations, such that the existence of an edge $(u,v)$ indicates that a robot may move from $u$ to $v$. \n\nWe assume an infinite collection of robots (also referred to as `agents') attempt to enter $G$ over time through a \\textit{source} vertex $s \\in G$. The robots are identical and execute the same algorithm. They begin in the \\textit{mobile} state, and eventually enter the \\textit{settled} state. Settled robots are stationary, and are capable of \\textit{marking} a neighbouring vertex that contains another settled robot. Mobile robots move between the vertices of $G$ and sometimes crash while in motion. They are oblivious, and decide where to move based only on local information provided by their sensors: the number of robots at neighbouring vertices, and whether any of the neighbouring settled robots mark their current location. Each vertex has limited capacity: it can contain at most one settled and one mobile robot.\n\nMobile robots are only allowed to move to a neighbouring vertex when they are \\textit{activated}. 
Each robot, including robots outside $G$, reactivates infinitely often and independently of other robots, at random exponential waiting times of mean $1$. \n\nWhen $s$ contains fewer than two robots, robots from outside $G$ attempt to enter it when they are activated. It is convenient to give the robots arbitrary labels $A_1, A_2, \\ldots$ and assume that $A_i$ cannot enter $s$ before all robots with lower indices have entered or crashed. This assumption makes the analysis simpler, but the performance bound we prove in this work holds also for the entrance model where robot entrance depends only on which robot is activated first. Hence, whenever the current lowest-index robot outside of $G$ activates and there is no \\textit{mobile} robot at $s$, it moves to $s$. If $s$ is completely empty, the robot settles upon arrival and becomes the root of the spanning tree. Otherwise it remains a mobile robot. \n\nWe denote by $G(t)$ the graph whose vertices are vertices of $G$ containing settled robots at time $t$, and there is a directed edge $(u,v) \\in G(t)$ if $u$ is marked by a settled robot at $v$. The goal of the robots is to reach a time $T$ wherein $G(T)$ is a spanning tree of the entire environment $G$. The \\textit{makespan} of an algorithm is the first time $T_0$ when this occurs. \n\nCrashes are modelled as follows: when a robot $A_i$ is activated and attempts to enter $s$ or move from $u$ to $v$ via the edge $(u,v)$, occasionally an \\textit{adversarial event} occurs, causing the deletion of $A_i$ from $G$. Robots do not crash unless attempting to move. Hence, mobile robots are volatile but settled robots are safe. This assumption is somewhat stronger than necessary: our results still hold if mobile (but not settled) robots are allowed to crash while they stay put, but this tediously lengthens the analysis. We assume the number of adversarial events before time $t$ is bounded by a fraction of $t$. Adversarial events may otherwise be as inconvenient as possible: we may assume there is an \\textit{adversary} choosing crashes to maximize the makespan of our algorithm. \n\nUnless stated otherwise, when discussing the configuration of robots ``at time $t$'', we always refer to the configuration before any activation at time $t$ has occurred.\n\n\n\\section{Dispersal and Spanning Trees}\n\nWe study a simple local behaviour (Algorithm \\ref{alg:localrule}) that disperses robots and incrementally constructs a distributed spanning tree of $G$. The rule determines the behaviour of \\textit{mobile} robots whenever they are activated (settled robots merely remain in place and continue to mark their target). We prove that using this rule, the makespan is linear in the number of vertices of $G$ asymptotically almost surely, and that performance degrades gracefully with the density of crashes.\n\n\\begin{algorithm}[!htb]\n \\caption{Local rule for a mobile robot $A$.}\n \\begin{algorithmic}\n \\State Let $v$ be the current location of $A$ in $G$ (if $A$ is outside $G$, see Section \\ref{modelsection}).\n \\If{a neighbour $u$ of $v$ contains exactly one robot, and this robot marks $v$}\n \\State Attempt to move to $u$. \n \\ElsIf{a neighbour $u$ of $v$ contains no robots}\n \\State Attempt to move to $u$ and become \\textit{settled} if no crash. \n \\State \\textit{Mark} the vertex $v$.\n \\Else\n \\State Stay put.\n \\EndIf\n \\end{algorithmic}\n \\label{alg:localrule}\n\\end{algorithm}\n\nThe rule grows $G(t)$ as a partial spanning tree of $G$. 
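\n\nFor concreteness, a single activation of a mobile robot under Algorithm~\\ref{alg:localrule} can be sketched as follows (our illustration only; the \\texttt{env} helpers are hypothetical, and crashes are decided by the adversary inside \\texttt{try\\_move}):\n\\begin{verbatim}\ndef activate(robot, env):\n    v = robot.location\n    # Rule 1: follow a settled robot that marks v.\n    for u in env.neighbours(v):\n        if (env.num_robots(u) == 1 and env.settled_at(u)\n                and env.mark_of(u) == v):\n            env.try_move(robot, u)       # may crash en route\n            return\n    # Rule 2: settle at an empty neighbour, marking v.\n    for u in env.neighbours(v):\n        if env.num_robots(u) == 0:\n            if env.try_move(robot, u):   # True iff no crash\n                robot.settled = True\n                robot.mark = v\n            return\n    # Rule 3: otherwise stay put.\n\\end{verbatim}\n\n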
Algorithm~\\ref{alg:localrule} acts as a kind of depth-first search that splits into parallel processes whenever a mobile robot is blocked by another mobile robot. Every vertex of the tree $G(t)$ is marked by settled robots at its descendants. Mobile robots follow these marks to discover the leaves of the current tree $G(t)$ and expand it. Robots grow the tree by settling at unexplored vertices that then become new leaves. Our main result is Theorem \\ref{alg1performancethm}:\n\n\\begin{theorem}\nIf for all $t$ the number of adversarial events before time $t$ is allowed to be at most $ct\/4$, $0 \\leq c < 1$, then the makespan of Algorithm \\ref{alg:localrule} over graph environments with $n$ vertices is at most $8((1-c)^{-1}+o(1))n$ asymptotically almost surely as $n \\to \\infty$. \\label{alg1performancethm}\n\\end{theorem}\n\nFigure \\ref{fig:simulation} shows an execution of our algorithm on a grid environment with $n=62$ square vertices (white region) and obstacles (blue region). We allowed a naive adversary to arbitrarily delete at most $ct\/4$ robots before time $t$, with $c = 0.8$. This corresponded to a deletion of 56\\% of robots that entered the environment before the makespan. In a more constrained topology (such as a path graph, see Section \\ref{pathgraphsection}), the robots would progress more slowly, and a greater percentage would be deleted. The makespan (bottom-right figure) was $613$, consistent with the upper bound of Theorem \\ref{alg1performancethm}. After the spanning tree completes, robots keep entering the region until there are two robots at every vertex. This is related to the ``slow makespan'', which we will later define. The slow makespan was 831. See Section \\ref{simulationsection} for more simulations. \n\n\\begin{figure}[htb]\n \\centering\n\\begin{subfigure}{0.13\\textwidth}\n \\includegraphics[width=\\linewidth]{tas1.png}\n \\label{fig:1}\n\\end{subfigure}\\hfil\n\\begin{subfigure}{0.13\\textwidth}\n \\includegraphics[width=\\linewidth]{tas2.png}\n \\label{fig:2}\n\\end{subfigure}\\hfil\n\\begin{subfigure}{0.13\\textwidth}\n \\includegraphics[width=\\linewidth]{tas5.png}\n \\label{fig:3}\n\\end{subfigure}\n\n\\medskip\n\\vspace{-1.5em}\n\\begin{subfigure}{0.13\\textwidth}\n \\includegraphics[width=\\linewidth]{tas9.png}\n \\label{fig:4}\n\\end{subfigure}\\hfil\n\\begin{subfigure}{0.13\\textwidth}\n \\includegraphics[width=\\linewidth]{tas12.png}\n \\label{fig:5}\n\\end{subfigure}\\hfil\n\\begin{subfigure}{0.13\\textwidth}\n \\includegraphics[width=\\linewidth]{tas16.png}\n \\label{fig:6}\n\\end{subfigure}\n\\vspace{-1em}\n\\caption{An execution of Algorithm \\ref{alg:localrule} on a grid environment. The source is denoted by a square box in the center. The arrows denote settled robots, and their direction points to the adjacent location with a settled robot that they mark. Red arrows indicate that a mobile robot is on top of the settled robot (note that by the algorithm, a mobile robot will never occupy a vertex that does not have a settled robot). The environment is a priori unknown to the robots, and they construct a spanning tree representation over time.}\n\\label{fig:simulation}\n\\vspace{-2em}\n\\end{figure}\n\n\n\\subsection{Analysis}\n\\label{analysissection}\n\nWe study the makespan of Algorithm \\ref{alg:localrule}. Some of the proofs are placed in the Appendix.\n\nFor the analysis, we will assume that robots from $A_1, A_2, \\ldots$ that settle or crash keep being activated. 
Such activations are purely ``virtual'': a settled or crashed robot does nothing and affects nothing upon being activated. We start with a structural lemma:\n\n\\begin{lemma}\n\\label{treelemma}\n$G(t)$ is a tree at all times $t$ with probability $1$.\n\\end{lemma}\n\n\\begin{proof} When the first robot enters and successfully settles, $G(t)$ contains only $s$. No settled robots are ever deleted, so $G(t)$ can only gain new vertices. Whenever a mobile robot settles, it extends the tree $G(t)$ by one vertex, connecting its current location $v$ to $G(t)$ via a single directed edge. By definition, the edge is directed from the vertex the settled robot \\textit{marks}--which is its previous location--to $v$. This turns $v$ into a leaf of $G(t)$. With probability $1$ no two robots on $G$ activate at the exact same time, so no two robots settle at the same vertex. Hence $G(t)$ remains a tree.\n\\end{proof}\n\n\\subsubsection{Event orders}\n\\label{eventordersection}\nWe explain how we intend to bound the makespan. Our strategy shall be to use coupling to compare the performance of Algorithm \\ref{alg:localrule} with the performance of different random processes of robots moving on different structures. Coupling is a technique in probability theory for comparing different random processes (see \\cite{lindvall2002coupling}). \n\nThe basic idea is this: whenever we run Algorithm \\ref{alg:localrule} on $G$, we can log the exact times at which the robots activate, as well as the times adversarial events happen and which robots they affect. This gives us an \\textit{order of events} $S$ sampled from some random distribution. Note that robots keep activating forever (but these activations do nothing once the graph is full), so $S$ is infinitely long.\n We then ``re-enact'' or \\textbf{``simulate''} $S$ on a new environment (or several new environments) involving the robots $A_1, A_2, \\ldots$ by activating and deleting the robots according to $S$.\n \n To make things more precise, by ``simulating'' $S$ on different environments we mean that we consider the \\textit{coupled} process $(G, G_2, \\ldots, G_m)$ wherein the different environments $G, G_2, \\ldots, G_m$ have robots that are \\textit{paired}, such that whenever $A_i$ in $G$ is scheduled for an activation or a deletion according to the event order $S$ ($S$ is simply an infinite list of scheduled activation and deletion times), the copies of $A_i$ in \\textit{all} the environments $G_2, \\ldots, G_m$ also activate or are deleted. When the copies of $A_i$ are activated they act according to Algorithm \\ref{alg:localrule} with respect to their local neighborhood. Robot entrances are modelled as usual (Section \\ref{modelsection}), but note that even if $A_i$ manages to enter $G$ following an activation, its copy might not enter its own environment, because in that environment the entrance is blocked, or there is a lower-index robot waiting to enter. During Algorithm \\ref{alg:localrule}'s analysis, we will often be talking about a deterministic event order $S$ being simulated over different environments. The end-goal, however, is to say something about the event order $S$ when it is randomly sampled from the execution of Algorithm \\ref{alg:localrule} on $G$. \n \n The event order $S$ must be a \\textit{possible} sequence of events that occurred during an execution of our algorithm on the base graph environment $G$. This means, due to our model, that a robot $A_i$ in $G$ will never be scheduled for deletion except at times when it is activated and attempts to move. 
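\n\nFor intuition, the activation part of such an event order is easy to sample directly (a Python sketch with our own helper names; each robot carries an independent rate-$1$ exponential clock, and the adversary would interleave deletion events, only at attempted moves):\n\n\\begin{verbatim}\nimport heapq, random\n\ndef sample_activations(num_robots, horizon):\n    # one exponential clock per robot; merge ticks chronologically\n    clocks = [(random.expovariate(1.0), i) for i in range(num_robots)]\n    heapq.heapify(clocks)\n    log = []\n    while clocks:\n        t, i = heapq.heappop(clocks)\n        if t > horizon:\n            continue                 # retire this robot's clock\n        log.append((t, i))           # robot A_{i+1} activates at time t\n        heapq.heappush(clocks, (t + random.expovariate(1.0), i))\n    return log                       # chronologically ordered\n\\end{verbatim}\n\n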
However, while simulating $S$ on the environments $G_2, \\ldots, G_m$, we must be allowed to break the rules of the model: we might delete robots even when they don't attempt to move, or while they are outside of the new graph environment. Whenever we say ``for any event order $S$'', we mean event orders $S$ \\textbf{that could have happened over $G$}. \n \n In $S$, define $t_0$ to be the first time $A_1$ activates, $t_1$ to be the first time \\textit{after $t_0$} that either $A_1$ or $A_2$ activates, and $t_i$ to be the first time $t > t_{i-1}$ that any robot in the set $\\{A_1, \\ldots, A_{i+1}\\}$ is activated. \n \n \\begin{definition}\n The times $t_0 < t_1 < t_2 < \\ldots$ in $S$ are called the \\textbf{meaningful event times} of $S$. \n \\end{definition}\n \n For meaningful event times to be well-defined there must be a \\textit{minimal} time $t > t_{i-1}$ at which one of the robots $\\{A_1, \\ldots, A_{i+1}\\}$ activates. Because the activation times of the robots are independent exponential waiting times of mean $1$, this is true with probability $1$ for a randomly sampled $S$. Moreover, with probability $1$, at any time $t_i$ there is precisely one robot $A$ of $A_1, A_2, \\ldots, A_{i+1}$ scheduled for activation by $S$. Because both these things are true with probability $1$, we assume they are true for any event order $S$ referred to at any point in this analysis. This does not affect our main result (Theorem \\ref{alg1performancethm}), which is probabilistic. \n \n Our end-goal is to randomly sample $S$ from $G$ and simulate it on four increasingly ``slower'' environments: $\\mathcal{P}(n)$, $\\mathcal{P}(\\infty)$, $\\mathcal{P}^*(\\infty)$, $B$, so that all environments ($G$ and these four) are coupled. \\textit{Meaningful event times} are so called because, prior to the first activation of $A_i$, none of the robots $A_{i+1}, A_{i+2}, \\ldots$ can enter or move in any of these environments, and activating them has no effect. Hence, at any time $t$ which is not a meaningful event time, the configuration of robots cannot change (no robots move and no robots are deleted in any of the environments $S$ is simulated on). \n \n The possibility of creating an event order $S$ is the only reason we labelled the robots and made the assumption about entrance orders in Section \\ref{modelsection}. \n \n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{tasepdiagram.png}\n\\caption{The processes $\\mathcal{P}(n), \\mathcal{P}(\\infty), \\mathcal{P}^*(\\infty), B$ that we will be interested in. White vertices are empty, black vertices contain a settled robot, and red vertices contain both a mobile and a settled robot. Edge directions indicate edge directions in $G(t)$. Note that $B$ does not have a source vertex.}\n\\label{fig:processes}\n\\vspace{-2em}\n\\end{figure}\n \n\\subsubsection{$\\mathcal{P}(n)$ versus $G$}\n\\label{pathgraphvsGsection}\n\nLet $n$ be the number of vertices of $G$. The path graph $\\mathcal{P}(n)$ over $n$ vertices is a graph over the vertices $v_1, v_2, \\ldots, v_n$ such that there is an edge $(v_i, v_{i+1})$ for all $1 \\leq i \\leq n-1$. We simulate $S$ on the graph environment $\\mathcal{P}(n)$ where the source vertex $s$ is $v_1$. Simulating $S$ on $\\mathcal{P}(n)$ results in what is mostly a normal-looking execution of Algorithm \\ref{alg:localrule} on $\\mathcal{P}(n)$, but as discussed, it might lead to some oddities such as robots being deleted while they are still outside the graph environment.\n\nLet us introduce some notation. 
$A_i^G$ refers to the copy of $A_i$ being simulated by $S$ on $G$, and $A_i^{\\mathcal{P}(n)}$ is similarly defined. \n\n\\begin{definition}\nThe \\textit{depth} of $A_i^G$ at time $t$, written $d(A_i^G, t)$, is the number of times $A_i^G$ has \\textit{successfully moved} before time $t$. Depth is initially $0$. Entering at $s$ is considered a movement, so robots that have entered at $s$ have depth $1$. \n\\end{definition}\n\n$d(A_i^{\\mathcal{P}(n)}, t)$ is similarly defined with respect to $\\mathcal{P}(n)$.\n\n\\begin{definition}\nLet $T$ be a tree graph environment (such as $\\mathcal{P}(n)$) with source vertex $s$. A vertex $v$ of $T$ becomes \\textbf{slow} at time $t$ if a mobile robot on $v$ was activated and found no vertex it could move to, and also, either $v$ is a leaf of $T$ or all of its descendants in $T$ are slow at time $t$. \n\nA robot $A_i$ is \\textbf{slow} at time $t$ if it is located at a slow vertex at time $t$.\n\\end{definition}\n\n\\begin{definition}\nThe \\textbf{slow makespan} of $S$ on $T$, $M_{slow}^T$, is the first time all vertices of $T$ are slow when simulating the event order $S$.\n\\end{definition}\n\n$G$ is not always a tree, but given a fixed event order $S$, we can associate to $S$ a spanning tree of $G$, $T_S$, containing $G(t)$ as a subtree for all times $t$. By Lemma \\ref{treelemma}, robots only use edges of $T_S$, so we may define the slow makespan of $S$ on the $G$-simulation as the slow makespan on $T_S$. The slow makespan is clearly also defined for the $\\mathcal{P}(n)$-simulation. Furthermore, $M_{slow}^G$ is an upper bound on the (regular) makespan of the $G$-simulation, since every vertex must have a settled robot before it becomes slow and, as the settled robots of $G$ never move, they cannot be deleted by $S$.\n\nOur motivation for introducing the slow makespan is that we wish to show $\\mathcal{P}(n)$ is the environment that maximizes the slow makespan on $n$ vertices. However, it does not maximize the normal makespan (see Table \\ref{makespancomparetable} for an example). \n\n\\begin{lemma}\nA slow robot $A_i^G$ is forever unable to move and is never deleted in the event order $S$.\n\\label{slowdeletelemma}\n\\end{lemma}\n\n\\begin{proof}\nOnly robots attempting to move can be deleted. If $A_i^G$ is at a leaf of $T_S$, it can never move: its parent vertex in $T_S$ contains a settled robot which marks a vertex other than the location of $A_i^G$, and settled robots are never deleted. Hence, $A_i^G$ is never deleted. Slow vertices propagate upwards from the leaves of $T_S$, so the statement of the lemma follows by induction.\n\\end{proof}\n\n\\begin{proposition}\nFor any event order $S$, $M_{slow}^G \\leq M_{slow}^{\\mathcal{P}(n)}$. \\label{slowmakespancoupling}\n\\end{proposition}\n\nAn intuitive argument for this proposition is that if the spanning tree $T_S$ of $G$ is not $\\mathcal{P}(n)$, then some vertex $v$ of $T_S$ must have multiple descendants, hence robots entering $v$ will be able to branch to different neighbours and $v$ is less likely to be blocked. Consequently, robots will enter $G$ faster than they enter $\\mathcal{P}(n)$, and so $M_{slow}^G \\leq M_{slow}^{\\mathcal{P}(n)}$. We need to formalize this intuition into an argument that holds for any event order $S$. It turns out there are many subtleties involving asynchronicity, settling and crashing which make this not straightforward, and we require a rather technical argument. 
(Such subtleties are also why it is simpler to compare the environments $G$, $\\mathcal{P}(n)$, $\\mathcal{P}(\\infty)$, $\\mathcal{P}^*(\\infty)$, $B$ rather than compare $G$ to $B$ directly.)\n\nWe prove Proposition \\ref{slowmakespancoupling} by induction on the \\textit{meaningful event times} $t_0, t_1, \\ldots$ in the event order $S$. We show the following statements to be true for non-deleted robots at all times $t_m$:\n\n\\begin{enumerate}[label=(\\alph*)]\n \\item If $A_i^G$ is neither slow nor settled, then $d(A_i^G, t_m) \\geq d(A_i^{\\mathcal{P}(n)}, t_m)$. \n \n \\item If $A_i^{\\mathcal{P}(n)}$ is slow or settled, then $A_i^G$ is slow or settled at time $t_m$, and $d(A_i^G, t_m) \\leq d(A_i^{\\mathcal{P}(n)}, t_m)$.\n\\end{enumerate}\n\nWe note that both statements are (trivially) true at time $t_0$, as no event has occurred yet. \n\n\\begin{lemma}\nIf statement (b) is true up to time $t_m$, settled and slow robots of $\\mathcal{P}(n)$ neither move nor get deleted as a result of an event of $S$ scheduled for time $t_m$ (i.e., the robots still exist and are in the same place at time $t_{m+1}$).\n\n\\label{neverdeleteslowcorollary}\n\\end{lemma}\n\nAssuming (a) and (b) hold at all times, let us see how to infer Proposition \\ref{slowmakespancoupling}. If a vertex becomes slow at some time $t$, it must contain a settled and a mobile robot, both of which become slow. Lemma \\ref{neverdeleteslowcorollary} says that slow and settled robots of $\\mathcal{P}(n)$ never get deleted. Hence, the first time there are $2n$ slow robots on $\\mathcal{P}(n)$ (two at every vertex) is $M_{slow}^{\\mathcal{P}(n)}$. Statement (b) implies that if $\\mathcal{P}(n)$ has $2n$ slow robots, $G$ must also contain $2n$ slow or settled robots. It is immediate to verify that this can only happen when $G$ has $2n$ slow robots. Hence, at time $M_{slow}^{\\mathcal{P}(n)}$, $G$ has $2n$ slow robots--two at every vertex. The inequality $M_{slow}^{\\mathcal{P}(n)} \\geq M_{slow}^G$ follows by definition. \\qed\n\n\\begin{lemma} \nIf statements (a) and (b) are true up to time $t_{m}$, statement (a) is true at time $t_{m+1}$.\n\\label{statementaproof}\n\\end{lemma}\n\n\\begin{lemma}\nIf statements (a) and (b) are true up to time $t_m$, statement (b) is true at time $t_{m+1}$. \n\\label{statementbproof}\n\\end{lemma}\n\n\\subsubsection{$\\mathcal{P}(n)$ versus $\\mathcal{P}(\\infty)$}\n\\label{pathgraphsection}\n\nWe wish to bound $M_{slow}^{\\mathcal{P}(n)}$ (which is determined by the event order $S$). We do this by comparing simulations of $S$ on different environments. To start, let $\\mathcal{P}(\\infty)$ be the path graph with infinitely many vertices, where $s = v_1$. We may simulate $S$ on $\\mathcal{P}(\\infty)$ as we did on $\\mathcal{P}(n)$. \n\n\\begin{lemma}\nFor any event order $S$ simulated on $\\mathcal{P}(n)$ and $\\mathcal{P}(\\infty)$ and any time $t < M_{slow}^{\\mathcal{P}(n)}$, $\\mathcal{P}(n)$ and $\\mathcal{P}(\\infty)$ contain the exact same number of robots.\n\\label{Pinftylemma}\n\\end{lemma}\n\n\\begin{proof}\nThe configuration of robots in the first $n$ vertices of $\\mathcal{P}(n)$ and $\\mathcal{P}(\\infty)$ is identical until $v_n$ becomes slow in $\\mathcal{P}(n)$. After $v_n$ becomes slow, the configuration of robots in the first $n-1$ vertices is still the same in both graphs until a robot in $v_{n-1}$ is prevented from moving by a robot in $v_n$, meaning $v_{n-1}$ becomes slow. 
By induction, the configuration of robots in the first $k$ vertices of both graphs is identical until $v_k$ in $\\mathcal{P}(n)$ becomes slow (we use Lemma \\ref{neverdeleteslowcorollary} to infer that the slow robots at $v_{k+1}$ are never deleted). Hence, until $v_1$ becomes slow, robots enter at the same times in $\\mathcal{P}(n)$ and $\\mathcal{P}(\\infty)$. $v_1$ becomes slow precisely at time $M_{slow}^{\\mathcal{P}(n)}$.\n\\end{proof}\n\n\\subsubsection{$\\mathcal{P}(\\infty)$ versus $\\mathcal{P}^*(\\infty)$} \nWe simulate $S$ on a new environment $\\mathcal{P}^*(\\infty)$: it is $\\mathcal{P}(\\infty)$ with the modification that at time $t=0$ there is a settled robot at every vertex $v_i$, and the settled robot at $v_i$ marks $v_{i-1}$. These ``dummy'' robots are never activated, and are not among the indexed robots $A_1, A_2, \\ldots$. Because there is already a settled robot at every vertex, the robots $A_1, A_2, \\ldots$ never become settled. Lemma \\ref{Pstarinftyslowerlemma} shows $\\mathcal{P}^*(\\infty)$ is strictly slower than $\\mathcal{P}(\\infty)$:\n\n\\begin{lemma}\nFor any event order $S$ and at any time $t$, the number of \\textbf{mobile} robots in $\\mathcal{P}^*(\\infty)$ at time $t$ is at most the total number of robots in $\\mathcal{P}(\\infty)$.\n\\label{Pstarinftyslowerlemma}\n\\end{lemma}\n\n\\subsubsection{$\\mathcal{P}^*(\\infty)$ versus totally asymmetric simple exclusion}\n\nWe bound the arrival rate of robots at $\\mathcal{P}^*(\\infty)$ by another, even slower process. This process, $B$, takes place on the path graph $\\mathcal{P}(\\infty)$ extended by the non-positive vertices $v_0, v_{-1}, v_{-2}, \\ldots$, with an edge $(v_i, v_{i+1})$ for every $i$. As in $\\mathcal{P}^*(\\infty)$, there is initially a settled robot at every vertex, marking the vertex before it. Unlike the other processes, robots do not enter at $s$: the robot $A_{i}$ begins inside the graph environment as a mobile robot located at $v_{-i+1}$. To compare $B$ with $\\mathcal{P}^*(\\infty)$, we count the robots that cross the edge $(v_0, v_1)$. There is one more crucial feature of $B$: robots are never deleted from $B$. Scheduled robot deletions in $S$ are treated as regular activations of the robot. Besides these differences, $S$ can be simulated on $B$ as before.\n\n\\begin{lemma}\nFor any event order $S$ and at any time $t$, the number of mobile robots that crossed the $(v_0, v_1)$ edge of $B$ is at most the number of robots that entered or were deleted before entering $\\mathcal{P}^*(\\infty)$.\n\\label{TASEPboundPstarlemma}\n\\end{lemma}\n\nRecall that $S$ is an event order of some execution of Algorithm \\ref{alg:localrule} on the graph environment of interest, $G$. We may randomly sample $S$ by running Algorithm \\ref{alg:localrule} on $G$ and logging the events.\n\nThe stochastic process resulting from simulating a \\textit{randomly sampled} event order $S$ on $B$ is called a \\textit{totally asymmetric simple exclusion process} (TASEP) with step initial condition, first introduced in \\cite{spitzer1991interaction}. In this process, robots (also called ``particles'') are activated at exponential rate $1$ and attempt to move rightward whenever no other robot blocks their path. 
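\n\nThe crossing count of this particle process is easy to estimate by direct simulation (a Python sketch with our own helper names; the settled ``dummy'' robots are implicit, and truncating to finitely many particles is harmless for $t$ much smaller than their number):\n\n\\begin{verbatim}\nimport random\n\ndef crossings_by_time(t_max, num_robots=2000):\n    pos = [-i for i in range(num_robots)]   # A_1 at v_0, A_2 at v_{-1}, ...\n    occupied = set(pos)\n    crossings, t = 0, 0.0\n    while True:\n        t += random.expovariate(num_robots) # superposed rate-1 clocks\n        if t >= t_max:\n            return crossings\n        i = random.randrange(num_robots)    # the robot that activates\n        if pos[i] + 1 not in occupied:      # site ahead is free\n            occupied.remove(pos[i])\n            pos[i] += 1\n            occupied.add(pos[i])\n            if pos[i] == 1:                 # crossed the edge (v_0, v_1)\n                crossings += 1\n\n# e.g. crossings_by_time(1000.0) concentrates around 1000\/4 = 250\n\\end{verbatim}\n\n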
This process is precisely the outcome of simulating $S$ on $B$ (robot activations that would lead to a deletion in the other processes are treated as regular activations in $B$).\n\nIn TASEP with step initial condition, let us write $B_t$ to denote the number of robots that have crossed $(v_0, v_1)$ by time $t$. It is shown in \\cite{rost1981non} that $B_t\/t$ converges to $\\frac{1}{4}$ asymptotically almost surely (i.e., with probability 1 as $t \\to \\infty$). \\cite{Johansson2000} shows that the deviations are of order $t^{1\/3}$. Specifically, we have in the limit: \n\n\\vspace{-1em}\n\n\\begin{equation} \\label{tasepequation}\n \\lim_{t\\to\\infty} \\mathbb{P}(B_t - \\frac{t}{4} \\leq 2^{-4\/3} st^{1\/3}) = 1 - F_2(-s),\n\\end{equation}\n\nvalid for all $s \\in \\mathbb{R}$, where $F_2$ is the Tracy-Widom distribution, which obeys the asymptotics $F_2(-s) = O(e^{-c_1 s^3})$ and $1-F_2(s) = O(e^{-c_2 s^{3\/2}})$ as $s \\to \\infty$. We employ Equation \\ref{tasepequation} and the prior analysis to prove Theorem \\ref{alg1performancethm}:\n\n\\begin{proof}\nLet $G$ be a graph environment with $n$ vertices. Let $S$ be the randomly sampled event order of an execution of Algorithm \\ref{alg:localrule} on $G$. We will bound the slow makespan, $M_{slow}^G$.\n\nWe simulate $S$ over the environments $\\mathcal{P}(n)$, $\\mathcal{P}(\\infty)$, $\\mathcal{P}^*(\\infty)$, and $B$. From Lemma \\ref{TASEPboundPstarlemma} we know that at all times the number of robots that crossed the $(v_0, v_1)$ edge of $B$, meaning $B_t$, is at most the number of robots that entered $\\mathcal{P}^*(\\infty)$ or were deleted before entering. At most $ct\/4$ robots are deleted by time $t$, so the number of mobile robots at $\\mathcal{P}^*(\\infty)$ at time $t$ is at least $B_t - ct\/4$. Lemmas \\ref{Pinftylemma} and \\ref{Pstarinftyslowerlemma} imply that the number of robots at $\\mathcal{P}(n)$ is at least this quantity at any time $t < M_{slow}^{\\mathcal{P}(n)}$.\n\nAt any time $t$, there cannot be more than $2n$ robots at $\\mathcal{P}(n)$. Hence, if $B_t - ct\/4 > 2n$, then $t \\geq M_{slow}^{\\mathcal{P}(n)}$. By Proposition \\ref{slowmakespancoupling}, we shall then also have $t \\geq M_{slow}^{G}$.\n\nWrite $t_n = 8((1-c)^{-1}+n^{-1\/3})n$. We wish to show $t_n$ is an upper bound on $M_{slow}^G$ asymptotically almost surely, which is precisely the statement of Theorem \\ref{alg1performancethm}. To show this, we are interested in $X_n = \\mathbb{P}(B_{t_n} - ct_n\/4 \\leq 2n)$, the probability that $B_{t_n} - ct_n\/4$ has not exceeded $2n$ by time $t_n$; by the above, showing that $X_n$ tends to $0$ as $n \\to \\infty$ completes our proof. Define the probability\n\n\\vspace{-0.75em}\n\\begin{equation} \n p(n,s) = \\mathbb{P}(B_{t_n} - \\frac{t_n}{4} \\leq 2^{-4\/3} s t_n^{1\/3})\n\\end{equation}\n\\vspace{-1.25em}\n\n$p(n,s)$ is the parametrized left innermost part of Equation \\ref{tasepequation} with $t = t_n$ ($n$ is a positive integer). Note that $p(n,s)$ is monotonically increasing in $s$. Define $s_n = (2n + ct_n\/4 - t_n\/4)2^{4\/3} t_n^{-1\/3}$. By algebra, we have $X_n = p(n, s_n)$. Fix any constant $\\mathcal{S}^*$ and define $Y_n = p(n, \\mathcal{S}^*)$. Again by algebra, $s_n$ tends to $-\\infty$ as $n \\to \\infty$. Hence, for large $n$, we must have $s_n < \\mathcal{S}^*$ and therefore $X_n \\leq Y_n$ (by the monotonicity of $p(n,s)$). By Equation \\ref{tasepequation}, $Y_n$ tends to $1 - F_2(-\\mathcal{S}^*)$ as $n \\to \\infty$. Hence $X_n$ is at most $1 - F_2(-\\mathcal{S}^*)$ in the limit. 
By taking $\\mathcal{S}^* \\to -\\infty$ we see that $X_n$ in the limit is at most $1 - F_2(\\infty) = 0$. \\end{proof}\n\nWe note that the slow makespan can be nearly equal to the makespan (see Table \\ref{makespancomparetable}, or consider a path graph $\\mathcal{P}(n)$ with the source vertex placed at $s = v_2$ and robots initially moving rightwards). Hence, one does not ``miss out'' on much by using it to bound the makespan.\n\n\\subsection{Synchronous time and multiple sources}\nWe describe extensions of our results to two settings.\n\n\\textbf{Synchronous time}. We may consider a synchronous time setting that is discretized to steps $t = 1, 2, \\ldots$ such that at every step, all the robots activate at once. In this setting, Algorithm \\ref{alg:localrule} ends up exploring just one branch of the tree at a time, like depth-first search; so no two robots ever attempt to enter the same vertex. Analysis similar to the asynchronous case shows that robots then enter at rate $t\/2$ (instead of approximately $t\/4$) on $\\mathcal{P}(n)$, and reasoning analogous to Proposition \\ref{slowmakespancoupling} and Theorem \\ref{alg1performancethm} gives an upper bound of $4(1-c)^{-1} n$ on the makespan of a graph with $n$ vertices, assuming at most $ct\/2$ adversarial events before time $t$. Consider the path graph $\\mathcal{P}(n)$ with $s = v_2$ (not the usual $s = v_1$), where the robots first fill the vertices $v_3, v_4, \\ldots$ with a double layer before reaching $v_1$. The synchronous makespan of this environment is asymptotically $4(1-c)^{-1} n$. Hence, the bound on the makespan in the synchronous case is exact.\n\n\\textbf{Multiple source vertices.} Instead of just having a single source vertex $s$, we may consider environments with multiple source vertices such that each of them corresponds to its own set of robots $A_1, A_2, \\ldots$ entering over time. In asynchronous time, Lemma \\ref{treelemma} can be generalized to show that $G(t)$ is then a \\textit{forest}, and the robots attempt to create a spanning forest of $G$. The technique in this paper can be generalized to show that the makespan bound of Theorem \\ref{alg1performancethm} still holds. In general graph environments multiple sources may not improve the makespan by much. For example, consider the path graph $\\mathcal{P}(n)$ with $k$ sources on $v_1, v_2, \\ldots, v_k$. The makespan of this graph is bounded below by the makespan of the path graph $\\mathcal{P}(n-k-1)$ with a single source vertex $v_1$. \n \n\\section{Simulation and evaluation}\n\\label{simulationsection}\n\nFor empirical confirmation of our analysis, we numerically simulated our algorithm on a number of environments. On these environments, we measured the makespan and the percentage of robots that crashed for the parameters $c = 0, 0.25, 0.75$, averaging over 30 simulations per configuration and rounding to the nearest integer. Data on several environments is found in Table \\ref{makespancomparetable}. Figure \\ref{fig:simulation2} shows stills from some simulations. 
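\n\nAs a quick sanity check against the slow makespans reported in Table \\ref{makespancomparetable} below, one can tabulate the bound $8(1-c)^{-1}n$ of Theorem \\ref{alg1performancethm} without the $o(1)$ term (a Python sketch; the labels and values are copied from the table, and recall that for $c>0$ the bound concerns a worst-case adversary, while our simulations crash robots naively):\n\n\\begin{verbatim}\n# (environment, n, c, measured slow makespan), from the makespan table\ndata = [('fig:simulation',      62, 0.00,  373),\n        ('fig:simulation',      62, 0.75,  743),\n        ('fig:simulation2 (a)', 121, 0.00,  554),\n        ('fig:simulation2 (a)', 121, 0.75,  921),\n        ('fig:simulation2 (b)', 300, 0.00, 1907),\n        ('fig:simulation2 (b)', 300, 0.75, 4116),\n        ('P(300)',              300, 0.00, 2325),\n        ('P(300)',              300, 0.75, 6147)]\nfor env, n, c, slow in data:\n    bound = 8.0 * n \/ (1.0 - c)   # Theorem 1, without the o(1) term\n    print(env, 'c=%.2f' % c, 'slow makespan', slow, '<= %.0f' % bound)\n\\end{verbatim}\n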
\n\n\\vspace{-1em}\n\n\\begin{figure}[!htb]\n \\centering\n \n\\medskip\n\\begin{subfigure}{0.145\\textwidth}\n \\includegraphics[width=\\linewidth]{grid1.png}\n\\end{subfigure}\\hfil\n\\begin{subfigure}{0.145\\textwidth}\n \\includegraphics[width=\\linewidth]{grid2.png}\n\\end{subfigure}\\hfil\n\\begin{subfigure}{0.145\\textwidth}\n \\includegraphics[width=\\linewidth]{grid3.png}\n\\end{subfigure}\n\n\\vspace{-0.25em}\n\\medskip \n\\begin{subfigure}{0.145\\textwidth}\n \\includegraphics[width=\\linewidth]{complex1.png}\n\\end{subfigure}\\hfil\n\\begin{subfigure}{0.145\\textwidth}\n \\includegraphics[width=\\linewidth]{complex4.png}\n\\end{subfigure}\\hfil\n\\begin{subfigure}{0.145\\textwidth}\n \\includegraphics[width=\\linewidth]{complex8.png}\n\\end{subfigure}\n\n\\caption{An execution of Algorithm \\ref{alg:localrule} on (a) an $11\\times11$ square grid and (b) an ``indoor'' environment. The legend is the same as in Figure \\ref{fig:simulation}.}\n\\label{fig:simulation2}\n\\vspace{-1.5em}\n\\end{figure}\n\n\\begin{table}[!htb]\n\\begin{tabular}{lllll}\nEnvironment & $n$ & $c=0$ & $c=0.25$ & $c=0.75$ \\\\ \\hline\nFigure \\ref{fig:simulation} & 62 & 272;373 & 321;446 (18\\%) & 555;743 (52\\%) \\\\ \nFigure \\ref{fig:simulation2}, (a) & 121 & 463;554 & 484;586 (13\\%) & 715;921 (39\\%) \\\\ \nFigure \\ref{fig:simulation2}, (b) & 300 & 1791;1907 & 2154;2281 (19\\%) & 3894;4116 (56\\%) \\\\ \n$\\mathcal{P}(300)$ & 300 & 1677;2325 & 1940;2883 (23\\%) & 3727;6147 (66\\%) \\\\ \n\\end{tabular}\n\\caption{The makespan and slow makespan of Algorithm \\ref{alg:localrule} over the environments in the referred-to Figures and over the path graph of length $300$, $\\mathcal{P}(300)$. We vary the crash density parameter $c$. The cell format is \\textit{makespan ; slow makespan (\\% of robots crashed)}. The column ``$n$'' gives the number of vertices in the environment.}\n\\label{makespancomparetable}\n\\vspace{-2em}\n\\end{table}\n\nFrom the data, it is clear that the makespan is affected by the shape of the environment and by $c$. We see that an increase in the percentage of crashed robots scales the makespan up gracefully, and that spacious environments generally have lower makespans. We also confirm that the slow makespans are always lower than the bound of Theorem \\ref{alg1performancethm}. Closest to the bound is the scenario where the environment is the path graph $\\mathcal{P}(300)$ and $c=0$, in which case the slow makespan is almost exactly the bound, $8 \\cdot 300$. This is consistent with our analysis that the $\\mathcal{P}(n)$ environment has the largest slow makespan. It also verifies that Theorem \\ref{alg1performancethm} gives a correct upper bound. The data further suggests that for spacious environments, and for large $c$, average performance is better than the \\textit{worst-case} guarantee of Theorem \\ref{alg1performancethm}. In the simulations, we did not choose our adversarial events to be maximally obstructing, but rather crashed robots arbitrarily--a cleverer adversary would cause the makespan and slow makespan to be closer to the worst case (and cause a larger percentage of robots to crash).\n\n\n\\section{Discussion}\nIn swarm robotics, where one must coordinate an enormous robotic fleet, we must anticipate many faults, such as crashing and traffic jams. Because robots in the swarm are usually assumed to be autonomous and to have limited computational power, complex techniques for handling such faults are not necessarily feasible. 
Hence, it is important to ask whether simple rules of behaviour can be effective. To this end, we investigated the problem of covering an unknown graph environment, and constructing an implicit spanning tree, with a swarm of frequently crashing robots. We presented a simple and local rule of behaviour that enables the swarm to quickly and reliably finish this task in the presence of crashes. The swarm's performance degrades gracefully as the crash density increases. \n\nWe outline here several directions for future research. First, our model interprets the ``swarm'' part of swarm robotics as a vast and redundant fleet of robots that can be dispersed into the environment over time. We used this model for uniform dispersal, but it would be interesting to adapt it to other kinds of missions, and to design algorithms for those missions that can handle crashes or other forms of interference. For example, in \\cite{arxivprobabilisticpursuits}, mobile agents entering at a source node $s$ over time sequentially pursue each other to discover shortest paths between $s$ and some target node $t$. The algorithm succeeds even if some of the agents are interrupted and have their location changed.\n\nNext, in this work, we made the simplifying assumption that the environment of the robots is discrete. If the robots instead attempted to cover a continuous planar domain by an algorithm similar to ours, the robots would need to construct a shared discrete graph representation of the environment through the settled robots in $G(t)$ and their markings. We believe that our algorithm can readily be extended to such settings.\n\nLastly, can we exploit the large number of robots in a swarm to handle other kinds of errors? There are many situations and modes of failure that can be discussed, such as Byzantine robotic agents, or dynamic changes to the environment. \n\n\n\n\n\n\n\n\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nLet \\(X\\) be a complete metric space with metric denoted by \\(d\\). Despite this abstract setting, the best-motivated examples come from, and live in, Euclidean spaces, with a possible prospect for Hilbert and Banach spaces.\n\nRecall from fractal geometry that a system $\\Phi = (X;f_i,i=1,{\\ldots},N)$ of maps $f_i:X{\\to} X$ is called an \\textbf{iterated function system}, IFS for short (\\cite{Edgar, FractalsEver}). We assume that the maps $f_i$ are nonexpansive:\n\\[\n\\forall_{x_{1},x_{2}{\\in} X}\\; d(f_i(x_1),f_i(x_2)) \\leq d(x_1,x_2).\n\\]\nThis allows for situations where neither a strict attractor (\\cite{BarnsleyVinceProjective}) nor a Lasota-Myjak semiattractor (\\cite{LasotaMyjak}) exists, despite the fact that such attractors may be present in permanently noncontractive cases (cf. \\cite{BarnsleyVinceChaos}). However, the nature of the basic examples which caught our attention and sparked this research justifies the assumptions we make here and add further. To avoid a mystery: we are interested in invariant sets rather than attractors of IFSs. Concerning the existence of invariant sets, it can be assured under very mild dissipativity conditions even in the absence of continuity of the actions; see e.g. the references in \\cite{LesniakCEJM} or \\cite{Kieninger}. So let us refine our goal. 
\n\nThe study of systems of contractions (and their relatives) has become a standard topic of several books, e.g., \\cite{FractalsEver, Edgar, MauldinUrbanskiGraph, Massopust, LaTorre}, to mention only a small portion of the literature. Quite a lot is also known about the dynamics of a single nonexpansive map and its omega-limit sets, see \\cite{DafermosSlemrod, RoehrigSine, AkcogluKrengel, GoebelKirk, Nussbaum, Sine, DiLena}. However, visibly less is known about iterated systems of nonexpansive maps. In fact, the idea of a nonexpansive IFS appeared in a different disguise shortly before 1939 in the context of linear algebra and functional analysis: the method of projections (introduced by von Neumann and Kaczmarz). This topic has also received great attention (\\cite{BauschkeThesis, BauschkeBorwein, Galantai, AlternatingProjection, BauschkeReich, BauschkeDeutsch}), again, as with fractal geometry, due to its potential for applications. Yet for a full understanding and synthesis of the projection method some gaps still await explanation. One of them, which we pursue here, reads as follows: \n\\emph{what happens when we use projections onto sets with empty intersection?}\nIn the case of two sets one easily finds out that the iteration recovers the two nearest points in the sets realizing the (infimum) distance between these sets (\\cite{BauschkeThesis} 11.4.3). Our goal is to show, under very weak contractivity conditions (allowing for orthogonal projections among others), a general principle that the \\emph{iterated maps recover a minimal invariant set} in the sense that the omega-limit set of the generated orbit constitutes an invariant set. (Note that for $N=1$, a single map, this is an elementary exercise). Related results together with a panorama of examples can be found in \\cite{Angelos}.\n\nGiven an IFS $(X;f_i,i=1,{\\ldots},N)$ we define the \\textbf{Hutchinson operator} $\\Phi: 2^{X}\\setminus\\{\\emptyset\\} {\\to} 2^{X}\\setminus\\{\\emptyset\\}$ via\n\\[\n\\forall_{S{\\in}2^{X}\\setminus\\{\\emptyset\\}}\\; \\Phi(S) := \\bigcup_{i=1}^{N} f_{i}(S),\n\\]\nwhere $2^{X}\\setminus\\{\\emptyset\\}$ stands for the family of nonempty subsets of $X$. (Note that we do not take the closure of the union in this variant of the definition).\n\nA nonempty $S\\subset X$ is said to be an \\textbf{invariant set} for the system of maps $\\{f_{1},{\\ldots},f_{N}\\}$ when $\\Phi(S)=S$, and a \\textbf{subinvariant set} when $\\Phi(S){\\subset} S$. Traditional dynamics focuses on compact invariant sets (see however \\cite{MauldinUrbanskiGraph} for a reasonable deviation from this rule). This is the case here too, but keeping a general definition will prove handy in the next Section.\n\nBy an \\textbf{orbit} $(x_{n})_{n=0}^{\\infty}$ starting at $x_{0}\\in X$ with a driving sequence of symbols $(i_{n})_{n=1}^{\\infty} \\in\\{1,\\ldots,N\\}^{\\infty}$ we understand\n\\[\nx_{n} := f_{i_n}{\\circ}{\\ldots}{\\circ} f_{i_1}(x_0).\n\\]\nSuch iterations are fundamental for some numerical methods in fractal geometry (chaos game algorithm \\cite{FractalsEver, LaTorre}) and convex geometry (cyclic projection algorithm \\cite{BauschkeBorwein, AlternatingProjection}). 
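\n\nFor concreteness, computing an orbit from a driver is a short recursion; the following Python sketch (the naming is ours) illustrates it with a toy Kaczmarz-type iteration, i.e., orthogonal projections onto two lines through the origin of the Euclidean plane:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef orbit(maps, x0, driver):\n    xs = [np.asarray(x0, dtype=float)]\n    for i in driver:                    # symbols i_n in {1, ..., N}\n        xs.append(maps[i - 1](xs[-1]))  # x_n = f_{i_n}(x_{n-1})\n    return xs\n\ndef proj_line(direction):               # orthogonal projection onto R*u\n    u = np.asarray(direction, dtype=float)\n    u = u \/ np.linalg.norm(u)\n    return lambda x: np.dot(x, u) * u\n\nP1, P2 = proj_line([1.0, 0.0]), proj_line([1.0, 1.0])\nxs = orbit([P1, P2], [2.0, 3.0], [1, 2] * 20)   # cyclic driver\nprint(xs[-1])   # tends to the intersection point, here the origin\n\\end{verbatim}\n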
\nThe driving sequence may be called a driver (\\cite{McFarlaneHoggar}) or a control sequence (\\cite{BauschkeThesis}); by a slight abuse, against the direction of composition of the maps, a code or an address (\\cite{FractalsEver, Kieninger}) might be accepted too. If the process generating the symbols is stochastic, then the terms `random driver' and `random orbit' are justified. However, `random driver' can also mean a sequence where each symbol repeats infinitely often (\\cite{BauschkeThesis}; repetitive below), and `random orbit' can stand for an orbit driven by a sufficiently complex deterministic sequence of symbols; in this respect also `chaotic orbit' is in use (\\cite{FractalsEver, BarnsleyLesniak}). \n\nWe list below various types of drivers (\\cite{BauschkeThesis, BarnsleyLesniak, CaludeStaiger}). A sequence $(i_{n})_{n=1}^{\\infty} \\in\\{1,\\ldots,N\\}^{\\infty}$ is called\n\\begin{itemize}\n\\item \\textbf{cyclic}, if $i_{n}=\\pi((n-1) \\mod N + 1)$ for all $n\\geq 1$, under some fixed permutation $\\pi$ of $\\{1,{\\ldots},N\\}$,\n\\item \\textbf{repetitive}, if for every $\\sigma\\in\\{1,{\\ldots},N\\}$ the set $\\{n\\geq 1: i_{n}=\\sigma\\}$ is infinite,\n\\item \\textbf{disjunctive}, if it contains every possible finite word as its subword, namely for all $m\\geq 1$ and every word $(\\sigma_{1},{\\ldots},\\sigma_{m})\\in\\{1,{\\ldots},N\\}^{m}$ there exists $n_{0}\\geq 1$ s.t. $i_{n_{0}-1\\, +j}= \\sigma_{j}$ for $j= 1,{\\ldots},m$.\n\\end{itemize}\nBoth disjunctive and cyclic drivers are necessarily repetitive, but the reverse implications are obviously false. Moreover, a cyclic sequence is never disjunctive, nor vice-versa.\n\nFurther on we shall employ a disjunctive driver in the main theorem; a concrete construction of a disjunctive driver is sketched at the end of this Section. To understand why this feature fits a standard cyclic projection onto two sets well, one should recognize that the (linear or metric nearest point) projection map $P$ is idempotent, $P{\\circ} P= P$ (a retraction put in a nonlinear topology framework). The cancellations in any orbit built from $N=2$ projections show that the result is similar (modulo repetitions) to an alternating projection orbit regardless of how complex a disjunctive driver was applied, see Examples \\ref{ex:KaczmarzLines} and \\ref{ex:ParaLines}. \n\nWe define the \\textbf{omega-limit set} of $(x_{n})_{n=0}^{\\infty}$ in the usual way, by the descending intersection of the closures of the tails of the orbit\n\\[\n\\omega((x_{n})) := \\bigcap_{m=0}^{\\infty} \\overline{\\{x_{n}: n{\\geq} m \\}}.\n\\]\nOne should be aware that in the case of IFSs and multivalued dynamical systems various kinds of omega-limit sets can be defined, consult e.g. \\cite{McGehee, Akin, Kieninger, Potzsche}. Comparing nonautonomous discrete dynamical systems (\\cite{nonautonomouSystems}) with the framework of IFSs (and also multivalued systems, cf. \\cite{Igudesman, LasotaMyjak}), one sees that the orbit of a nonautonomous system is determined by the starting point, though the dynamics changes over time, while to determine the orbit of an IFS one additionally needs to specify the driving sequence; loosely speaking, in the theory of IFSs we deal with an infinite number of nonautonomous systems built upon a finite (sometimes countable \\cite{MauldinUrbanskiGraph} or compact \\cite{Wicks, Kieninger, LesniakCEJM}) set of generating maps. Yet one can cast the IFS as a skew-product system, e.g., \\cite[Example 4.3]{Potzsche}. 
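\n\nThe concrete disjunctive driver announced above can be produced by concatenating all finite words over $\\{1,{\\ldots},N\\}$ in length-lexicographic order; every finite word then occurs as a block, hence as a subword (a Python sketch, with our own naming, in the spirit of Champernowne-type sequences):\n\n\\begin{verbatim}\nfrom itertools import count, product\n\ndef disjunctive_driver(N):\n    for length in count(1):\n        for word in product(range(1, N + 1), repeat=length):\n            yield from word\n\ng = disjunctive_driver(2)\nprint([next(g) for _ in range(14)])\n# -> [1, 2, 1, 1, 1, 2, 2, 1, 2, 2, 1, 1, 1, 1]\n\\end{verbatim}\n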
\n\nWe finish this Section by giving motivating examples \nwhere the projection method invites a \nnonexpansive IFSs viewpoint (cf. \\cite{Angelos}).\nSpecifically Example \\ref{ex:BarnsleyTriangle}\nsuggests the way we should interpret the result of the\nprojection algorithm in general. We use a common notation\n$H_{i}\\subset X := {\\mathbb{R}}^{2}$ for lines \n(hyperplanes) and $P_{i}:X \\to H_{i}$ for orthogonal \nprojections onto $H_{i}$ constituting a nonexpansive \nIFS $(X; f_{1},f_{2},{\\ldots})$, $f_{i} := P_{i}$.\n\n\\begin{example}\\label{ex:KaczmarzLines}\nGiven two lines intersecting at $x_{*}$ one projects alternately \nonto them to recover solution $x_{*}$ of the linear system.\nThe picture in Figure \\ref{fig:KaczmarzLines} is the hallmark\nof the projection method. We have the orbit\n$P_{2}{\\circ}P_{1}{\\circ}P_{2}{\\circ}P_{1}{\\circ}P_{2}(x_0)$\nconverging to $x_{*}$. Note that the composition \n$P_{1}{\\circ}P_{2}$ is contractive.\n\n\\begin{figure}\n\\caption{Alternating projections onto two lines.}\n\\label{fig:KaczmarzLines}\n\\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]\n\\clip(-0.24,2.61) rectangle (4.76,6.99);\n\\draw [domain=-0.24:4.76] plot(\\x,{(--0.22-0.49*\\x)\/-0.26});\n\\draw [domain=-0.24:4.76] plot(\\x,{(--2.28-0.51*\\x)\/0.31});\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (0.29,5.68)-- (0.85,6.02);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (0.85,6.02)-- (3.03,4.87);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (3.03,4.87)-- (1.93,4.21);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (1.93,4.21)-- (2.52,3.9);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (2.22,3.73)-- (2.52,3.9);\n\\begin{scriptsize}\n\\fill [color=black] (2.33,3.54) circle (2.0pt);\n\\draw[color=black] (2.46,3.02) node {$x_{*}$};\n\\draw[color=black] (3.65,6.58) node {$H_1$};\n\\draw[color=black] (0.88,6.63) node {$H_2$};\n\\fill [color=black] (0.29,5.68) circle (1.5pt);\n\\draw[color=black] (0.16,5.42) node {$x_0$};\n\\fill [color=black] (0.85,6.02) circle (1.5pt);\n\\draw[color=black] (1.3,6.02) node {$x_1$};\n\\fill [color=black] (3.03,4.87) circle (1.5pt);\n\\draw[color=black] (3.38,4.77) node {$x_2$};\n\\fill [color=black] (1.93,4.21) circle (1.5pt);\n\\draw[color=black] (1.57,4.22) node {$x_3$};\n\\fill [color=black] (2.52,3.9) circle (1.5pt);\n\\draw[color=black] (2.8,3.83) node {$x_4$};\n\\fill [color=black] (2.22,3.73) circle (1.5pt);\n\\draw[color=black] (1.8,3.6) node {$x_5$};\n\\end{scriptsize}\n\\end{tikzpicture}\n\\end{figure}\n\\end{example}\n\n\\begin{example}\\label{ex:ParaLines}\nGiven two parallel lines one projects alternately \nonto them to get a pair of minimally distanced points.\nThe visualization provides Figure \\ref{fig:ParaLines}.\nHere $P_{1}{\\circ} P_{2}$ is not contractive,\nyet it behaves contractively on the orbit, \nsee the last Section for precise formulation of this \nphenomenon.\nUsually the case of parallel lines exhibits instability upon \nparameters for the solution problem of linear systems.\n\n\\begin{figure}\n\\caption{Alternating projections onto two parallel lines.}\n\\label{fig:ParaLines}\n\\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]\n\\clip(1.34,4.48) rectangle (4.44,7.19);\n\\draw [domain=1.34:4.44] plot(\\x,{(--0.22-0.49*\\x)\/-0.26});\n\\draw [domain=1.34:4.44] plot(\\x,{(-0.67-0.49*\\x)\/-0.26});\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (1.97,6.3)-- (2.86,5.83);\n\\draw [line width=1.2pt,dash 
\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (2.86,5.83)-- (3.39,5.55);\n\\begin{scriptsize}\n\\draw[color=black] (3.56,6.83) node {$H_1$};\n\\fill [color=black] (2.86,5.83) circle (1.5pt);\n\\draw[color=black] (2.56,5.56) node {$x_0$};\n\\draw[color=black] (2.56,6.74) node {$H_2$};\n\\fill [color=black] (1.97,6.3) circle (2.0pt);\n\\draw[color=black] (2.91,6.28) node {$x_2 = x_4$};\n\\fill [color=black] (3.39,5.55) circle (2.0pt);\n\\draw[color=black] (3.95,5.14) node {$x_1=x_3$};\n\\end{scriptsize}\n\\end{tikzpicture}\n\\end{figure}\n\\end{example}\n\n\\begin{example}\\label{ex:Square}\nSuppose we have the following configuration: four lines such that each one is orthogonal to two of the others and parallel to the third. The points of intersection, denoted $y_{12}, y_{13}, y_{23}, y_{24}$, span a rectangle, see Figure \\ref{fig:Square}. Then, projecting onto these lines in a sufficiently ``random'' manner, e.g., $P_{2}{\\circ}P_{3}{\\circ}P_{1}{\\circ}P_{4}{\\circ}P_{2}{\\circ}P_{1}{\\circ}P_{2}{\\circ}P_{1}(x_0) $, one quickly recovers the four corner points $C:=\\{y_{12}, y_{13}, y_{23}, y_{24}\\}$. Note that, analogously to Example \\ref{ex:KaczmarzLines}, the composition $P_{3}{\\circ} P_{1}$ ($H_1$ and $H_3$ orthogonal) is contractive; hence $P_{4}{\\circ} P_{2}{\\circ} P_{3}{\\circ} P_{1}$ is contractive. The minimal closed set invariant under the joint action of all the projections $P_{1},P_{2},P_{3},P_{4}$ is exactly the set $C$; see the next Section for the precise definition of an invariant set.\n\n\\begin{figure}\n\\caption{Projections onto four pairwise orthogonal or parallel lines.}\n\\label{fig:Square}\n\\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]\n\\clip(-0.12,3.37) rectangle (6.01,8.45);\n\\draw (4.25,3.37) -- (4.25,8.45);\n\\draw (2.01,3.37) -- (2.01,8.45);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (2.01,5.52)-- (1.26,5.52);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (1.26,5.52)-- (4.25,5.52);\n\\draw [domain=-0.12:6.01] plot(\\x,{(--14.92-0*\\x)\/2.24});\n\\draw [domain=-0.12:6.01] plot(\\x,{(--9.89-0*\\x)\/2.24});\n\\begin{scriptsize}\n\\draw[color=black] (3.95,7.6) node {$H_1$};\n\\fill [color=black] (1.26,5.52) circle (1.5pt);\n\\draw[color=black] (1.26,5.19) node {$x_0$};\n\\draw[color=black] (2.3,7.59) node {$H_2$};\n\\fill [color=black] (2.01,5.52) circle (1.5pt);\n\\draw[color=black] (2.75,5.13) node {$x_2 = x_4$};\n\\fill [color=black] (4.25,5.52) circle (1.5pt);\n\\draw[color=black] (5.0,5.41) node {$x_1=x_3$};\n\\fill [color=black] (4.25,6.66) circle (2.5pt);\n\\draw[color=black] (5.0,6.34) node {$y_{13}=x_7$};\n\\fill [color=black] (2.01,6.66) circle (2.5pt);\n\\draw[color=black] (2.75,6.25) node {$y_{23} = x_8$};\n\\fill [color=black] (2.01,4.41) circle (2.5pt);\n\\draw[color=black] (2.75,4.04) node {$y_{24} = x_5$};\n\\fill [color=black] (4.25,4.41) circle (2.5pt);\n\\draw[color=black] (5.1,4.03) node {$y_{12}=x_6$};\n\\draw[color=black] (0.86,6.99) node {$H_3$};\n\\draw[color=black] (0.72,4) node {$H_4$};\n\\end{scriptsize}\n\\end{tikzpicture}\n\\end{figure}\n\\end{example}\n\n
Since the orthogonal projection onto a hyperplane is nonexpansive w.r.t. the taxi-cab ${\\ell}^1$-norm in the Euclidean space, one might hope (in accordance with the Examples so far) that the omega-limit sets of systems consisting of orthogonal projections are finite sets (\\cite{AkcogluKrengel, Nussbaum, DiLena}). This is not true, as shown below.\n\n\\begin{example}\\label{ex:BarnsleyTriangle}\nIf we project onto three lines, any two of which intersect, and the choice of projections follows a disjunctive driver, then the omega-limit set of such an iteration constitutes a triangle with vertices at the intersection points. This phenomenon was observed quite a long time ago for drivers generated via discrete stochastic processes (`chaos game algorithm' \\cite{BarnsleyPrivate}). \n\nWe sketch the orbit $P_{1}{\\circ}P_{3}{\\circ}P_{1}{\\circ}P_{2}{\\circ}P_{3}{\\circ}P_{1}{\\circ}P_{2}(x_0) $ in Figure~\\ref{fig:BarnsleyTriangle}. Note that any two projections $P_{i}{\\circ} P_{j}$, $i{\\neq}j$, compose to a contraction; in particular $P_{3}{\\circ}P_{2}{\\circ}P_{1}$ is a contraction.\n\n\\begin{figure}\n\\caption{Randomly applied projections onto three lines.}\n\\label{fig:BarnsleyTriangle}\n\\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]\n\\clip(0.37,1.14) rectangle (4.79,5.01);\n\\draw [line width=3.2pt] (2.3,3.54)-- (4.12,1.57);\n\\draw [line width=3.2pt] (4.12,1.57)-- (1.41,1.8);\n\\draw [line width=3.2pt] (1.41,1.8)-- (2.3,3.54);\n\\draw [domain=0.37:4.79] plot(\\x,{(--0.85-1.75*\\x)\/-0.9});\n\\draw [domain=0.37:4.79] plot(\\x,{(--10.98-1.98*\\x)\/1.81});\n\\draw [domain=0.37:4.79] plot(\\x,{(--5.19-0.23*\\x)\/2.71});\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (2.25,4.57)-- (1.77,4.13);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (1.77,4.13)-- (2.43,3.79);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (2.43,3.79)-- (2.25,1.73);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (2.25,1.73)-- (3.19,2.58);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (3.19,2.58)-- (2.1,3.14);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (2.1,3.14)-- (1.98,1.75);\n\\draw [line width=1.2pt,dash pattern=on 2pt off 2pt] (1.98,1.75)-- (1.51,1.99);\n\\begin{scriptsize}\n\\draw[color=black] (3.1,4.44) node {$H_1$};\n\\draw[color=black] (1.1,4.37) node {$H_2$};\n\\draw[color=black] (0.9,2.1) node {$H_3$};\n\\fill [color=black] (2.25,4.57) circle (1.5pt);\n\\draw[color=black] (2.4,4.79) node {$x_0$};\n\\fill [color=black] (1.77,4.13) circle (1.5pt);\n\\draw[color=black] (1.56,3.99) node {$x_1$};\n\\fill [color=black] (2.43,3.79) circle (1.5pt);\n\\draw[color=black] (2.86,3.7) node {$x_2$};\n\\fill [color=black] (2.25,1.73) circle (1.5pt);\n\\draw[color=black] (2.3,1.48) node {$x_3$};\n\\fill [color=black] (3.19,2.58) circle (1.5pt);\n\\draw[color=black] (3.49,2.64) node {$x_4$};\n\\fill [color=black] (2.1,3.14) circle (1.5pt);\n\\draw[color=black] (1.78,3.21) node {$x_5$};\n\\fill [color=black] (1.98,1.75) circle (1.5pt);\n\\draw[color=black] (1.95,1.5) node {$x_6$};\n\\fill [color=black] (1.51,1.99) circle (1.5pt);\n\\draw[color=black] (1.34,2.16) node {$x_7$};\n\\end{scriptsize}\n\\end{tikzpicture}\n\\end{figure}\n\\end{example}\n \nThe reader is encouraged to delve into \\cite{Angelos} for more examples with detailed analyses of polygonal omega-limit sets.\n\n\\section{Generalities}\n\nWe shall present here a general relationship between invariant sets and omega-limit sets. 
Throughout let $C\\subset X$ denote a nonempty closed bounded subinvariant set of the nonexpansive IFS $(X;f_i,i=1,{\\ldots},N)$, $\\Phi$ the Hutchinson operator and $(x_{n})_{n=0}^{\\infty}$ the orbit.\n\nGiven a nonempty set $S\\subset X$, we also employ the notation:\n\\begin{itemize}\n\\item $d(p,S) := \\inf_{s\\in S} d(p,s)$ for the distance from the point $p\\in X$ to $S$;\n\\item $N_{\\varepsilon}S := \\{x\\in X: d(x,S) < \\varepsilon\\}$ for the $\\varepsilon$-neighbourhood of $S$.\n\\end{itemize}\n\nIt turns out to be convenient to use a known sequential characterization of the omega-limit set:\n\\begin{equation*}\nx_{*}\\in\\omega((x_{n})) \\;\\text{iff}\\; x_{k_n}\\to x_{*} \\;\\text{for some subsequence}\\; k_{n}\\nearrow\\infty.\n\\end{equation*}\nWhenever we make statements about omega-limit sets we \\emph{assume} that they are nonempty; for empty omega-limit sets the statements are void. In this respect we have the following basic criterion.\n\\begin{lemma}[\\cite{McGehee, Akin, nonautonomouSystems}]\nIf the orbit $(x_{n})_{n=0}^{\\infty}$ is precompact, then $\\omega((x_{n}))$ is nonempty and compact. \n\\end{lemma}\nRecall that the orbit $\\{x_{n}\\}_{n=0}^{\\infty}$ in a complete space $X$ is precompact if it has compact closure.\n\nThe simple observations below are crucial for the basic relationship between invariant sets and omega-limit sets. \n\\begin{lemma}\\label{monotonedist}\nLet $(x_{n})_{n=0}^{\\infty}$ be the orbit and suppose that there exists a nonempty closed bounded set $C$ which is subinvariant, $\\Phi(C)\\subset C$. Then\n\\begin{enumerate}\n\\item[(i)] $d(x_{n+1},C)\\leq d(x_{n},C)\\leq d(x_{0},C)$,\n\\item[(ii)] $(x_{n})_{n=0}^{\\infty}$ is bounded,\n\\item[(iii)] $d(y,C) = \\inf_{n} d(x_{n},C) \\equiv \\text{const}$ for $y\\in\\omega((x_{n}))$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nFix ${\\varepsilon} >0$. Find an appropriate $c_0\\in C$ to write\n\\begin{gather*}\nd(x_0,C)+\\varepsilon \\geq d(x_0,c_0) \\geq d(f_{i_1}(x_0),f_{i_1}(c_0)) = \\\\\n= d(x_{1},f_{i_1}(c_0)) \\geq d(x_{1},C); \n\\end{gather*}\nthe last inequality relies on $f_{i_1}(c_0)\\in \\Phi(C)\\subset C$. By induction (i) follows. \n\nMoreover $x_{n}\\in N_{2d(x_{0},C)}C$ for $n\\geq 1$, where the latter set is bounded. Hence (ii) follows.\n\nThe sequence $d(x_{n},C)$ is monotonically decreasing and bounded from below by $0$, thus it is convergent to $\\text{const} = \\inf_{n} d(x_{n},C)$. Let $x_{k_n}\\to y$. By monotonicity again\n\\[\nd(x_{k_n},C) \\to \\inf_{n} d(x_{k_n},C) = \\text{const},\n\\]\nbut continuity of the distance yields $d(x_{k_n},C)\\to d(y,C)$. Therefore we have (iii).\n\\end{proof}\n\n\\begin{proposition}\\label{intersectomega}\nIf a closed bounded subinvariant set $C$ intersects the omega-limit set, $C\\cap \\omega((x_{n}))\\neq\\emptyset$, then it contains that omega-limit set, $C\\supset\\omega((x_{n}))$. In particular, the omega-limit set is the minimal invariant set, provided it is invariant.\n\\end{proposition}\n\\begin{proof}\nLet $y_0\\in C\\cap \\omega((x_{n}))$. From Lemma \\ref{monotonedist} we know that any $y\\in \\omega((x_{n}))$ necessarily obeys $d(y,C)=d(y_0,C)=0$.\n\\end{proof}\n\nNote that the set $C$ in the above proposition need not be bounded (as a careful analysis of the proof of Lemma \\ref{monotonedist} (i), (iii) shows). 
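\n\nA quick numeric illustration of Lemma \\ref{monotonedist} (the setup is ours): for orthogonal projections onto the two parallel lines $y=0$ and $y=1$ in the plane, the pair $C=\\{(0,0),(0,1)\\}$ satisfies $\\Phi(C)=C$, and $d(x_{n},C)$ is non-increasing along any orbit:\n\n\\begin{verbatim}\nimport math, random\n\nP1 = lambda x: (x[0], 0.0)      # projection onto the line y = 0\nP2 = lambda x: (x[0], 1.0)      # projection onto the line y = 1\nC = [(0.0, 0.0), (0.0, 1.0)]    # an invariant set: Phi(C) = C\ndC = lambda x: min(math.hypot(x[0] - c[0], x[1] - c[1]) for c in C)\n\nx = (3.0, 2.0)\nfor _ in range(50):\n    d_before = dC(x)\n    x = random.choice([P1, P2])(x)\n    assert dC(x) <= d_before + 1e-12   # part (i) of the Lemma\nprint(x, dC(x))   # the distance stabilizes at inf_n d(x_n, C) = 3\n\\end{verbatim}\n\nNote that in this example the orbit accumulates on $\\{(3,0),(3,1)\\}$, itself an invariant set, on which $d(\\cdot,C)$ is constant, in accordance with part (iii).\n\n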
Although the omega-limit set need not be invariant, the following holds.\n\\begin{proposition}[\\cite{BarnsleyPrivate}]\\label{superinvomega}\nThe omega-limit set is superinvariant, i.e.,\n\\[\n\\Phi(\\omega((x_{n}))\\,)\\supset \\omega((x_{n})).\n\\]\n\\end{proposition}\n\\begin{proof}\nLet $x_{*}\\in \\omega((x_{n}))$, $x_{k_n}\\to x_{*}$. Then there exists a symbol $\\sigma\\in\\{1,{\\ldots},N\\}$ s.t. $x_{{k_n}+1}=f_{\\sigma}(x_{k_n})$ for infinitely many $k_n$'s, say $k_{l_n}$. Hence\n\\[\nx_{k_{l_n}+1}= f_{\\sigma}(x_{k_{l_n}}) \\to f_{\\sigma}(x_{*}) \\in\\Phi(\\omega((x_{n}))\\,).\n\\]\n\\end{proof}\n\nFor good properties of orbits and omega-limit sets one may need the notion of a proper space. A metric space $(X,d)$ is called \\textbf{proper} (cf. \\cite{BarnsleyVinceChaos}) if every ball not equal to the whole space has compact closure; the terminology is not standardized, e.g., Beer uses the term `space with nice closed balls', see \\cite{Beer} 5.1.8 p.142. Necessarily any such space is complete and locally compact. Conversely, a locally compact space is proper after suitable remetrization (\\cite{Beer} 5.1.12 p.143). The main advantage of $X$ being proper is that any bounded orbit is precompact and consequently admits a nonempty compact omega-limit set. We would like to stress that our further considerations do not rely on properness.\n\n\\section{Main theorem}\n\nLet $\\Phi = (X;f_1,{\\ldots},f_N)$ be a nonexpansive IFS on a complete space $X$. We assume throughout that $\\Phi$ possesses a nonempty closed bounded subinvariant set. Hence, by the observations made in the previous Section, all orbits and omega-limit sets are warranted to be bounded. This is not a very restrictive hypothesis, because if the omega-limit set is to recover an invariant set, then at least one (sub)invariant set must be present.\n\nSuppose $(x_{n})_{n=0}^{\\infty}$, $(y_{n})_{n=0}^{\\infty}$ are two orbits generated by the driving sequences $(i_{n})_{n=1}^{\\infty}$, $(j_{n})_{n=1}^{\\infty}$ and starting at $x_{0},y_{0}\\in X$, respectively.\n\n\\begin{proposition}\\label{disjunctiveomega1}\nLet us assume that\n\\begin{enumerate}\n\\item[(D)] the driving sequences for the orbits $(x_{n})_{n=0}^{\\infty}$ and $(y_{n})_{n=0}^{\\infty}$ are disjunctive,\n\\item[(C)] there exists a sequence of symbols $(u_1,{\\ldots},u_{l})\\in\\{1,{\\ldots},N\\}^{l}$ s.t. the composition $f_{u_{l}}\\circ{\\ldots}\\circ f_{u_{1}}$ is a Lipschitz contraction. \n\\end{enumerate}\nThen the omega-limit set does not depend on the initial point:\n\\[\\omega((x_n)) = \\omega((y_n)).\\]\n\\end{proposition}\n\\begin{proof}\nWe additionally assume that the driving sequences are the same for both orbits, $i_{n}=j_{n}$. The general case will follow from Lemma~\\ref{disjunctiveomega} below.\n\nDenote by $L<1$ the Lipschitz constant of $f_{u_{l}}\\circ{\\ldots}\\circ f_{u_{1}}$. By disjunctivity of the driver $(i_{n})_{n=1}^{\\infty}$, given any subsequence $k_{n}$ there exists a deeper subsequence $k_{l_n}$ s.t. 
\n$(i_{k_{l_n}}, i_{k_{l_n}-1}, {\\ldots}, i_{1})$ contains as a subword the sequence $(u_{l},{\\ldots},u_{1},{\\ldots}, u_{l},{\\ldots},u_{1}) \\in \\{1,{\\ldots},N\\}^{m_{n}{\\cdot} l}$, i.e., $(u_{l},{\\ldots},u_{1})$ repeated $m_{n}$-times, and additionally $m_{n}\\nearrow\\infty$. Therefore\n\\[\nd(x_{k_{l_n}},y_{k_{l_n}}) \\leq L^{m_n}\\cdot d(x_0,y_0)\\to 0.\n\\]\nThis means that both orbits admit the same limit points.\n\\end{proof}\n\nWe are going to strengthen this result by showing, under a contractivity condition substantially weaker than (C), that the omega-limit sets of orbits starting at the same point are identical provided the sequences driving these orbits are disjunctive.\n\n\\begin{lemma}\\label{disjunctiveomega}\nLet us assume about the orbits $(x_{n})_{n=0}^{\\infty}$, $(y_{n})_{n=0}^{\\infty}$ that they start from the same point $y_0=x_0$, and obey (D) and\n\\begin{enumerate}\n\\item[(CO)] there exists a sequence of symbols $(u_1,{\\ldots},u_{l})\\in\\{1,{\\ldots},N\\}^{l}$ s.t. the composition $f_{u_{l}}\\circ{\\ldots}\\circ f_{u_{1}}$ is a Lipschitz contraction when restricted to the set\n\\begin{gather*}\n\\bigcup_{n=0}^{\\infty} \\Phi^{n}(\\{x_0\\}) = \\{x_0, f_1(x_0), {\\ldots}, f_{N}(x_0), \\\\\nf_{1}\\circ f_{1}(x_0), f_{1}\\circ f_{2}(x_0), {\\ldots}, f_{1}\\circ f_{N}(x_0), {\\ldots}, \\\\\n f_{N}\\circ f_{1}(x_0), {\\ldots}, f_{N}\\circ f_{N}(x_0), {\\ldots}\\}\n\\end{gather*}\n(a branching tree with root at $x_0$). \n\\end{enumerate}\nThen the omega-limit sets coincide:\n\\[\\omega((x_n)) = \\omega((y_n)).\\]\n\\end{lemma}\n\\begin{proof}\nDenote by $L$ the Lipschitz constant of $f_{u_{l}}\\circ{\\ldots}\\circ f_{u_{1}}$. Fix $x_{*}\\in \\omega((x_{n}))$, $x_{k_{n}}\\to x_{*}$. Similarly as in the proof of Proposition \\ref{disjunctiveomega1} there exists $k_{l_n}$ s.t. $(i_{k_{l_n}}, {\\ldots}, i_{1})$ contains $(u_{l},{\\ldots},u_{1})$ repeated consecutively $m_{n}$-times, $m_{n}\\nearrow\\infty$. Then, due to disjunctivity of $j_{n}$, we can find a subsequence $j_{r_n}$ s.t. \n\\begin{gather*}\nf_{j_{r_n}}= f_{i_{k_{l_n}}}, \\\\\nf_{j_{(r_{n}-1)}}= f_{i_{(k_{l_n}-1)}},\\\\\n{\\ldots},\nf_{j_{(r_{n}-(k_{l_n}-1))}} = f_{i_1}.\\\\\n\\end{gather*}\nHence we arrive at\n\\[\nd(x_{k_{l_n}},y_{r_n})\\leq L^{m_n}\\cdot d(x_0,y_{{r_n}-{k_{l_n}}}) \\to 0.\n\\]\nTherefore $x_{*}\\in\\omega((y_n))$.\n\\end{proof}\n\nNow we establish the main result of the whole article.\n\n\\begin{theorem}\\label{OmegaThm}\nIf the orbit $(x_{n})_{n=0}^{\\infty}$ of the nonexpansive IFS $(X;f_1,{\\ldots},f_N)$ is bounded and driven by a disjunctive sequence of symbols, and the system has the property (CO) of contractivity on orbits, then $\\omega((x_{n}))$ is a minimal closed invariant set. Under the stronger condition (C) of contractivity, any two omega-limit sets generated by a disjunctive choice of maps coincide.\n\\end{theorem}\n\\begin{proof}\nBy Proposition \\ref{intersectomega} (or by Proposition \\ref{superinvomega} if one prefers) we only need to prove subinvariance:\n\\[\nf_{\\sigma}(\\omega((x_{n}))\\,) \\subset \\omega((x_{n})) \\;\\text{ for every }\\; \\sigma\\in\\{1,{\\ldots},N\\}.\n\\]\nLet $f_{\\sigma}(x_{*})\\in f_{\\sigma}(\\omega((x_{n}))\\,)$, $x_{k_n}\\to x_{*}$. 
Take a finer subsequence $k_{l_n}$ according to the following rules: the sequence $(i_{k_{(l_{n-1})}+2},{\\ldots},i_{k_{l_n}})$ contains \n\\begin{enumerate}\n\\item[(a)] all finite words of length $n$ built over the alphabet $\\{1,{\\ldots},N\\}$,\n\\item[(b)] the word $(u_1,{\\ldots},u_{l})$ appearing in the condition (CO), repeated $m_{n}$ times, $m_{n}\\nearrow\\infty$. \n\\end{enumerate}\nRedefine $i_{n}$ in such a way that $\\tilde{i}_{k_{l_n}+1} = \\sigma$ and $\\tilde{i}_{n}= i_{n}$ otherwise. The sequence $\\tilde{i}_{n}$ is disjunctive due to (a). Hence the orbit $y_{n}$ starting at $y_0:= x_0$ with the driver $j_{n} := \\tilde{i}_{n}$ has the property that $d(y_{k_{l_n}},x_{k_{l_n}}) \\to 0$, as warranted by (b). So \n\\[\ny_{k_{l_n}+1} = f_{\\sigma}(y_{k_{l_n}}) \\to f_{\\sigma}(x_{*})\\in \\omega((y_{n})).\n\\]\nTherefore \n\\[\n\\omega((x_{n})) = \\omega((y_{n})) \\ni f_{\\sigma}(x_{*})\n\\]\nvia Lemma \\ref{disjunctiveomega}.\n\\end{proof}\n\n\n\\begin{example}\nConsider a system $(X; f_{i}, i=1,{\\ldots},N)$ comprising orthogonal projections $f_{i}:= P_{i}$ onto affine subspaces $H_{i}\\subset X$ in the Euclidean space $X$. Due to \\cite{Meshulam} we know that any orbit produced by projections is bounded. Therefore in the standard situation we do not need to assume that there exists a bounded subinvariant set. Moreover, Theorem \\ref{OmegaThm} warrants the existence of a bounded invariant set.\n \nThe most important fact about orthogonal projections is that the composition $P_{N}{\\circ}{\\ldots}{\\circ}P_{1}$ is contractive on orbits, i.e., it obeys condition (CO). Several results in this direction have been obtained throughout the years: \\cite{Kosmol, BauschkeAs, Kirchheim}.\n\nThis settles the case of the Kaczmarz algorithm when projecting onto multiple hyperplanes with empty intersection (cf. \\cite{Angelos}).\n\\end{example}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}