\\section{Introduction}\n\\label{sec:intro}\nMotivated by applications in a wide array of fields, ranging from sociology and systems biology to, most closely related to this work, probabilistic combinatorics and statistical physics, the last few years have witnessed an explosion in both network models and interacting particle systems on these models. In this context, the two major themes of this work are as follows:\n\n\n\\begin{enumeratea}\n\\item {\\bf Connectivity, percolation and critical random graphs:} A fundamental question in this general area is understanding the connectivity properties of a network model, including the time and nature of emergence of the giant component. Writing $[n] = \\set{1,2,\\ldots, n}$ for the vertex set, most of these models have a parameter $t$ (related to the edge density) and a model-dependent critical time $t_c$ such that for $t< t_c$ (the subcritical regime), there exists no giant component (the size of the largest component satisfies $|\\cC_1(t)| = o_P(n)$), while for $t> t_c$ (the supercritical regime), the size of the largest component scales like $f(t)n$ for a model-dependent $f(t) > 0$. Behavior in the so-called ``critical regime'' (i.e., when $t = t_c$) is the main content of this paper.\n To prime the reader, let us informally describe the types of results closest in spirit to this work. We defer precise definitions of technical aspects (e.g., definitions of the limiting objects, the underlying topology, etc.)
to Section \\ref{sec:notation} and precise statements of related results to Section \\ref{sec:disc}.\nThe prototypical example of the ``critical'' phenomenon is the Erd\\H{o}s-R\\'enyi random graph at criticality, which is constructed as follows: Fix a parameter $\\lambda\\in \\bR$ and vertex set $[n]$, and let $\\ERRG(n,\\lambda)$ be the random graph obtained by including each of the ${n\\choose 2}$ possible edges independently with probability $n^{-1} + \\lambda n^{-4\/3}$.\nMaximal component sizes in $\\ERRG(n,\\lambda)$ were studied extensively in \\cite{bollobas1984evolution,luczak1990component,janson1993birth,aldous-crit,nachmias-peres-diameter}. The scaling limit of the maximal components of $\\ERRG(n,\\lambda)$, viewed as metric spaces, was identified in \\cite{BBG-12}. \nIt is believed that a large class of random discrete structures, in the critical regime, belongs to the ``Erd\\H{o}s-R\\'enyi universality class.''\nSoon after the work \\cite{BBG-12}, an abstract universality principle was developed in \\cite{SBSSXW14,bhamidi-broutin-sen-wang}, which was used to establish Erd\\H{o}s-R\\'enyi type scaling limits for a wide array of critical random graph models, including the configuration model and various models of inhomogeneous random graphs.\nIt is strongly believed that the components of critical percolation on high-dimensional tori \\cite{hofstad-sapozhnikov,heydenreich-hofstad-1,heydenreich-hofstad-2} and the hypercube \\cite{hofstad-nachmias} also share the Erd\\H{o}s-R\\'enyi scaling limit, but these problems remain open.\n\n\n\\item {\\bf Vacant set left by random walk on graphs:} The second main theme is the area of random interlacements and percolative properties of the vacant set of random walks on finite graphs; see, e.g., \\cite{sznitman2010vacant}. See \\cite{vcerny2012random} for a recent survey most closely related to this paper, and \\cite{drewitz-rath-sapozhnikov} for an introduction to random interlacements.
This question was initially posed by Hilhorst, who wanted to understand the geometry of crystals affected by corrosion. The precise mathematical model is as follows: consider a finite graph with vertex set $[n]$ (assumed connected to fix ideas) that represents the crystalline structure of the object of interest. Now suppose a ``corrosive particle'' wanders through the structure via a simple random walk $\\set{X_t:t\\geq 0}$ (started from, say, a uniformly chosen vertex), marking each vertex it visits as ``corroded'' (this marking does not affect the dynamics of the walk). For a fixed parameter $u\\geq 0$, define the vacant set as the set of all vertices that have not been ``corroded'' (i.e., not visited by the walk) by time $u n$,\n\\begin{equation}\n\\label{eqn:vac-set-def}\n\t\\mathcal{V}^u = [n]\\setminus \\big\\{X_t: 0 \\leq t \\leq n u\\big\\}.\n\\end{equation}\nWhen $u$ is ``small'', one expects that only a small fraction of the vertices have been visited by the corrosive particle, and thus the maximal connected component $\\cC_1(u)$ of the non-corroded set $\\mathcal{V}^u$ is large, namely $|\\cC_1(u)| = \\Theta_P(n)$; if $u$ increases beyond a ``critical point'' $u_{\\star}$, then the corrosion in the crystal has spread far enough that $|\\cC_1(u)| = o_P(n)$. The ``critical'' regime $u=u_{\\star}$, and in particular the fractal properties of connected components in this regime, are of great interest.\n\n\\end{enumeratea}\n\n\n\\subsection{Organization of the paper}\n\\label{sec:org}\nIn the remaining subsections of the introduction, we describe the random graph models considered in this paper and give an informal description of our results. Section \\ref{sec:res} contains precise statements of our main results. We have deferred major definitions to Section \\ref{sec:not}.
We discuss the relevance of this work and connections to existing results in Section \\ref{sec:disc}.\nIn Section \\ref{sec:prf-prelim} we define some constructs used in the proofs.\nWe start Section \\ref{sec:proof-plane-tree} with the statement of Lemma \\ref{lem:plane-trees}, which contains the main technical estimates related to the uniform measure on plane trees with a prescribed children sequence. This lemma is the crucial workhorse for the rest of the proofs. The proof of this lemma occupies the rest of Section \\ref{sec:proof-plane-tree}.\nSection \\ref{sec:proof-conn} and Section \\ref{sec:proof-main-deg-vac-thm} contain the proofs of our main results.\n\n\n \\subsection{Random graph models}\n \\label{sec:rg-models}\n\n Fix a collection of $n$ vertices labeled by $[n] := \\set{1,2,\\ldots, n}$ and an associated degree sequence $\\mathbf{d}^{\\scriptscriptstyle(n)} = (d_v: v\\in [n])$, where $\\ell_n := \\sum_{v\\in [n]}d_v$ is assumed even. There are two natural constructions that generate a random graph on the above vertex set with the prescribed degree sequence.\n\n \\begin{enumeratea}\n \t\\item {\\bf Uniformly distributed simple graph:} Let $\\mathbb{G}_{n,\\mathbf{d}}$ denote the space of all simple graphs on the vertex set $[n]$ with degree sequence $\\mathbf{d}$.
Let $\\pr_{n,\\mathbf{d}}$ denote the uniform distribution on this space and write $\\mathcal{G}_{n,\\mathbf{d}}$ for the random graph with distribution $\\pr_{n, \\mathbf{d}}$.\n\t\\item {\\bf Configuration model \\cite{Boll-book,molloy1995critical,bender1978asymptotic}:} Recall that a multigraph is a graph in which we allow multiple edges and self-loops. Write $\\overline\\mathbb{G}_{n,\\mathbf{d}}$ for the space of all multigraphs on vertex set $[n]$ with prescribed degree sequence $\\mathbf{d}$.\nWrite $\\CM_{n}(\\mathbf{d})$ for the random multigraph\nconstructed sequentially as follows: Equip each vertex $v\\in [n]$ with $d_v$ half-edges or stubs. Pick two half-edges uniformly at random from the set of half-edges that have not yet been paired, and pair them to form a full edge. Repeat until all half-edges have been paired.
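The sequential pairing above is distributionally the same as drawing a uniform perfect matching on the set of half-edges, which gives a simple way to simulate $\\CM_n(\\mathbf{d})$. The following sketch is illustrative only (the function name is ours, not from the paper):

```python
import random

def configuration_model(degrees, seed=None):
    """Sample CM_n(d): pair half-edges uniformly at random.

    Returns a list of edges; self-loops and multiple edges are possible,
    as in the multigraph model described above.
    """
    assert sum(degrees) % 2 == 0, "the total degree must be even"
    rng = random.Random(seed)
    # one stub (half-edge) per unit of degree
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    # a uniform shuffle followed by pairing consecutive stubs is
    # equivalent in distribution to the sequential uniform pairing
    rng.shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]
```

Conditioned on producing a simple graph, the output is uniform on the simple graphs with degree sequence $\\mathbf{d}$, which is the standard rejection route for sampling the uniform model above.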
Write $\\overline\\pr_{n, \\mathbf{d}}$ for the law of $\\CM_n(\\mathbf{d})$.\n \\end{enumeratea}\n\n\n\\subsection{Informal description of our contribution}\nThis work has four major contributions, which we now informally describe:\n\\begin{enumeratea}\n\t\\item We provide an explicit algorithm (Lemma \\ref{lem:alternate-construction}) for sampling a uniform random graph with given degree sequence by first sampling a plane tree with a modified degree sequence via an appropriately defined tilt with respect to the uniform distribution on the space of trees with this modified degree sequence. This allows us to derive scaling limits for the uniform distribution on the space of simple connected graphs with degree sequence satisfying regularity conditions, including a finite number of surplus edges (Theorem \\ref{prop:condition-on-connectivity}).\n\t\\item We then use this result to derive scaling limits for the critical regime of both the configuration model and the uniform random graph model with prescribed degree sequence (Theorem \\ref{thm:graphs-given-degree-scaling}). This is the strongest metric space scaling result for graphs with given degree sequence under optimal assumptions. It improves on the work of \\cite{bhamidi-broutin-sen-wang}, which was obtained as a consequence of a general universality principle but under stronger assumptions (an exponential moment condition on the degree sequence). The technique used in this paper is also completely different from that of \\cite{bhamidi-broutin-sen-wang}.\n\t\\item Write $C(n, n+k)$ for the number of {\\bf connected graphs} with $n$ labeled vertices and $n+k$ edges.
Deriving asymptotics for $C(n, n+k)$ for fixed $k$ as $n\\to\\infty$ has inspired a large body of work, both in the combinatorics community \\cite{wright1977number,bender1990asymptotic,spencer-count} and in the probability community \\cite{aldous-crit,janson2007brownian,takacs1991bernoulli}. Extending such results to count {\\bf connected graphs} with a {\\bf prescribed degree} sequence seems beyond the ken of existing techniques. As a consequence of our proof technique, we derive asymptotics for such counts (Theorem \\ref{thm:number-of-connected-graphs}). \n\\item As a substantive application, we answer a question raised by \\v{C}ern{\\'y} and Teixeira \\cite[Remark 6.1 (2)]{cerny-teixeira} (see also the work of Sznitman, e.g., \\cite[Remark 4.5]{sznitman2011lower}) by obtaining the metric space scaling limit of the vacant set left by random walk on random regular graphs. This is the first result on the continuum scaling limit of maximal components in the critical regime for this model. The eventual hope, albeit not addressed in this paper, is as follows: Consider spatial systems such as the $d$-dimensional lattice or, perhaps more relevant to this paper, the vacant set left by random walks on the $d$-dimensional torus $(\\bZ\/n\\bZ)^d$ in the large-$n$ network limit. As described in \\cite{sznitman2011lower}, the basic intuition is that for high enough dimensions $d$, the corresponding objects should behave similarly to what one sees in the context of $2d$-regular random graphs.\n\\end{enumeratea}\n\n\n\n\n\\section{Main results}\n\\label{sec:res}\nWe will now describe our main results. To motivate the interested reader, we will eschew some efficiency in the ordering of the statements of the results and first show the implications in the context of the main substantive application in Section \\ref{sec:res-vac}. We will then come back to the general results in Section \\ref{sec:abs-res}.
We first fix a convention that we will follow throughout this paper.\n\n\\medskip\n\n\\noindent{\\bf Convention.} All our metric measure spaces will be probability spaces. For any metric measure space $\\mvX=(X,d,\\mu)$ and $\\alpha>0$, $\\alpha\\mvX$ will denote the metric measure space $(X,\\alpha d,\\mu)$, i.e., the space where the metric has been multiplied by $\\alpha$ and the (probability) measure $\\mu$ has remained unchanged. Precise definitions of metric space convergence, including the Gromov-Hausdorff-Prokhorov (GHP) topology, are deferred to Section \\ref{sec:not}.\n\\subsection{Geometry of vacant sets left by random walk}\n\\label{sec:res-vac}\nFix $r\\geq 3$ and $n\\geq 1$. Here and throughout we assume $nr$ is even.\nRecall the definitions of $\\mathbb{G}_{n,\\mathbf{d}}$, $\\pr_{n,\\mathbf{d}}$, $\\overline\\mathbb{G}_{n,\\mathbf{d}}$, and $\\overline\\pr_{n,\\mathbf{d}}$ from Section \\ref{sec:rg-models}.\nLet $\\mathbf{d}_r^{\\scriptscriptstyle(n)}=(r,r,\\ldots,r)$, and\ndefine\n\\[\\mathbb{G}_{n,r}:=\\mathbb{G}_{n,\\mathbf{d}_r^{\\scriptscriptstyle(n)}},\\ \\ \\text{ and }\\ \\ 
\\overline\\mathbb{G}_{n,r}:=\\overline\\mathbb{G}_{n,\\mathbf{d}_r^{\\scriptscriptstyle(n)}}.\\]\nAnalogously, write $\\pr_{n,r}$ (resp. $\\overline\\pr_{n,r}$) for $\\pr_{n,\\mathbf{d}_r^{\\scriptscriptstyle(n)}}$ (resp. $\\overline\\pr_{n,\\mathbf{d}_r^{\\scriptscriptstyle(n)}}$).\nLet $\\mathcal{G}_{n,r}\\sim \\pr_{n,r}$.\nFor any (multi)graph $G$, write $P^G$ for the distribution of a simple random walk $\\set{X_t: t\\geq 0}$ on $G$ with the initial state $X_0$ chosen uniformly at random. Recall the definition of the vacant set from \\eqref{eqn:vac-set-def}. Define\n\\begin{equation}\n\\label{eqn:ustar-def}\n\tu_{\\star}=\\frac{r(r-1)\\ln(r-1)}{(r-2)^2}.\n\\end{equation}\n\nWrite $\\cC_{\\scriptscriptstyle(1)}(u)$ for the maximal connected component in $\\mathcal{V}^u$.
Then the following was shown in \\cite{vcerny2011giant}: With $\\pr_{n,r}$-probability converging to one as $n\\to\\infty$,\n\\begin{enumeratea}\n\t\\item given any $u< u_{\\star}$ and $\\sigma > 0$, there exist strictly positive constants $\\rho, c> 0$ depending only on $u,\\sigma,r$ such that\n\t\\[P^{\\mathcal{G}_{n,r}}\\big(\\big|\\cC_{\\scriptscriptstyle(1)}(u)\\big|\\geq \\rho n\\big)\\geq 1-cn^{-\\sigma};\\]\n\t\\item for any fixed $u> u_{\\star}$ and $\\sigma > 0$, there exist $\\rho^\\prime, c > 0$ depending only on $u, \\sigma, r$ such that\n\t\\[P^{\\mathcal{G}_{n,r}}\\big(\\big|\\cC_{\\scriptscriptstyle(1)}(u)\\big|\\geq \\rho^\\prime \\log(n)\\big)\\leq cn^{-\\sigma}.\\]\n\\end{enumeratea}\nFigures \\ref{fig:test1} and \\ref{fig:test2} display the connectivity structure of the vacant set just above and just below $u_{\\star}$, respectively, where the underlying graph is an $r=4$-regular random graph on $n=50,000$ vertices. The maximal component has been colored red, the second largest component blue, and all other components cyan.\n\n\\begin{figure}[htbp]\n\\centering\n\\begin{minipage}{.5\\textwidth}\n \\centering\n \n \\includegraphics[trim=5.5cm 5cm 3.8cm 4.5cm, clip=true, angle=0, scale=0.9]{Ghigh_50000-4-5-purp.pdf}\n \\captionof{figure}{Connectivity structure at $u=u_{\\star}+0.5$.}\n \\label{fig:test1}\n\\end{minipage}%\n\\begin{minipage}{.5\\textwidth}\n \\centering\n \n \\includegraphics[trim=5.7cm 5.5cm 3.6cm 4cm, clip=true, angle=0, scale=0.9]{Glow_50000-4-5-purp.pdf}\n \\captionof{figure}{Connectivity structure at $u=u_{\\star}-0.5$.
}\n \\label{fig:test2}\n\\end{minipage}\n\\end{figure}\n\nThe main aim of this section is the study of the annealed measures:\n\\[\\vP_{n,r}(\\cdot) := \\frac{1}{|\\mathbb{G}_{n,r}|} \\sum_{G\\in \\mathbb{G}_{n,r}} P^G(\\cdot)\\ \\ \\text{ and }\\ \\\n\\overline\\vP_{n,r}(\\cdot) := \\sum_{G\\in\\overline\\mathbb{G}_{n,r}}\\overline\\pr_{n,r}\\big(G\\big) P^G(\\cdot).\\]\nBuilding on the work of Cooper and Frieze \\cite{cooper-frieze}, \\v{C}ern{\\'y} and Teixeira \\cite[Theorem 1.1]{cerny-teixeira} showed the following for the above annealed distribution: Let $\\set{u_n:n\\geq 1}$ be any sequence for which there exists a fixed $\\beta < \\infty$ such that $n^{1\/3}|u_n - u_{\\star}|\\leq \\beta$ for all large $n$. Then, given any $\\varepsilon> 0$, there exists $A:= A(\\varepsilon,r,\\beta) > 0$ such that for all large $n$,\n\\begin{equation}\n\\label{eqn:cerny-tiex}\n\t\\vP_{n,r}\\left(A^{-1} n^{2\/3} \\leq |\\cC_{\\scriptscriptstyle(1)}|\\leq An^{2\/3}\\right)\\geq 1-\\varepsilon.\n\\end{equation}\n\n\nThey also showed that analogous results hold for $\\overline\\vP_{n,r}(\\cdot)$. The $n^{2\/3}$ scaling of the maximal component size suggests that the critical behavior for this model resembles that of the critical Erd\\H{o}s-R\\'enyi random graph.
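To build intuition for this setup, the following sketch (ours, purely illustrative and not part of any proof; it assumes all degrees are at least one) simulates the vacant set left by a random walk on a configuration-model multigraph and records the vacant-component sizes, with the threshold computed from \\eqref{eqn:ustar-def}:

```python
import math
import random
from collections import defaultdict

def vacant_set_components(degrees, u, seed=None):
    """Walk floor(u*n) steps on CM_n(d); return the sizes of the connected
    components induced by the unvisited vertices, in decreasing order."""
    rng = random.Random(seed)
    n = len(degrees)
    # configuration model: uniform pairing of half-edges
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    adj = defaultdict(list)
    for i in range(0, len(stubs), 2):
        a, b = stubs[i], stubs[i + 1]
        adj[a].append(b)
        adj[b].append(a)
    # simple random walk started from a uniformly chosen vertex
    x = rng.randrange(n)
    visited = {x}
    for _ in range(int(u * n)):
        x = rng.choice(adj[x])
        visited.add(x)
    # component sizes of the vacant set, via graph search
    vacant = set(range(n)) - visited
    sizes = []
    while vacant:
        stack, size = [vacant.pop()], 1
        while stack:
            w = stack.pop()
            for y in adj[w]:
                if y in vacant:
                    vacant.discard(y)
                    stack.append(y)
                    size += 1
        sizes.append(size)
    return sorted(sizes, reverse=True)

r = 4
u_star = r * (r - 1) * math.log(r - 1) / (r - 2) ** 2  # = 3*ln(3) for r = 4
sizes = vacant_set_components([r] * 2000, u=u_star, seed=0)
```

Running this for $u$ below and above `u_star` reproduces, in miniature, the giant/no-giant dichotomy described above.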
Our first main result, stated below, confirms this assertion.\n\n\\begin{thm}[Scaling limit of the vacant set]\\label{thm:vacant-set-scaling}\nLet $r\\geq 3$.\n\\begin{enumeratei}\n\\item\nLet $\\mathcal{G}_{n,r}\\sim\\pr_{n,r}$ and $u_\\star$ be as in \\eqref{eqn:ustar-def}.\nRun a simple random walk on $\\mathcal{G}_{n,r}$ up to time $nu_n$ starting from a uniformly chosen vertex, where\n\\begin{align}\\label{eqn:40}\nn^{1\/3}(u_{\\star}-u_n)\\to a_0\\in\\bR.\n\\end{align}\nLet $\\cC_{(j)}$ be the $j$-th largest component of the subgraph of $\\mathcal{G}_{n,r}$ induced by the vacant set $\\mathcal{V}^{u_n}$. Endow $\\cC_{(j)}$ with the graph distance and the uniform probability measure on its vertices.\nThen there exists a sequence $\\boldsymbol{M}^{\\mathrm{vac}}(a_0)=\\big(M_1^{\\mathrm{vac}}(a_0),M_2^{\\mathrm{vac}}(a_0),\\ldots\\big)$ of random compact metric measure spaces such that under the annealed measure $\\vP_{n,r}$,\n\\[\\frac{1}{n^{1\/3}}\\big(\\cC_{(1)}, \\cC_{(2)},\\ldots\\big)\\stackrel{\\mathrm{w}}{\\longrightarrow}\n\\boldsymbol{M}^{\\mathrm{vac}}(a_0)=\\big(M_1^{\\mathrm{vac}}(a_0),M_2^{\\mathrm{vac}}(a_0),\\ldots\\big)\\]\nwith respect to the product topology induced by the GHP distance (see Section \\ref{sec:gh-mc} for the definition) on each coordinate.\n\\item\nThe conclusion in part (i) continues to hold with the same limiting sequence $\\boldsymbol{M}^{\\mathrm{vac}}(a_0)$ if we replace
$\\mathcal{G}_{n,r}$ by $\\CM_n(\\mathbf{d}_r^{\\scriptscriptstyle(n)})$ and $\\vP_{n,r}$ by the corresponding annealed measure $\\overline\\vP_{n,r}$.\n\\end{enumeratei}\n\\end{thm}\n\n\\begin{rem}\nA complete description of the limiting spaces appearing in Theorem \\ref{thm:vacant-set-scaling}\nrequires certain definitions and is thus deferred to Section \\ref{sec:not}. The limiting object $\\boldsymbol{M}^{\\mathrm{vac}}(a_0)$ is explicitly defined in Construction \\ref{constr:M-vac}. The connection between the scaling limit of the critical Erd\\H{o}s-R\\'enyi random graph $\\ERRG(n,\\lambda)$\nand the limiting spaces in the results stated in this section is also explained in Section \\ref{sec:not}.\n\\end{rem}\n\nThe above result deals with the vacant set left by random walk (VSRW) on the random $r$-regular graph. We in fact conjecture that, for the corresponding problem on random graphs with a general prescribed degree sequence, one has analogous results with a {\\bf universality} phenomenon under moment conditions on the degree sequence.\n\\begin{conj}\\label{conj:general-degree-vacant}\nLet $\\mathbf{d} = \\mathbf{d}^{\\scriptscriptstyle(n)}=(d_1,\\ldots,d_n)$ be a degree sequence, and let $D_n$ denote the degree of a vertex chosen uniformly from $[n]$.\nAssume that as $n\\to\\infty$, $D_n\\stackrel{\\mathrm{w}}{\\longrightarrow} D$ with $\\E(D^2) < \\infty$ and $\\E(D_n^2)\\to \\E(D^2)$.
Further assume\n\\[\\nu := \\frac{\\E[D(D-1)]}{\\E[D]} > 1 \\mbox{ and } \\pr(D \\geq 3) > 0.\\]\nConsider the VSRW on $\\mathcal{G}_{n, \\mathbf{d}}$ or $\\CM_n(\\mathbf{d})$ at level $u$. We conjecture that the following hold:\n\\begin{enumeratea}\n\\item There exists a (model-dependent) critical point $u_{\\star}$ such that for $u< u_{\\star}$, the size of the maximal component satisfies $|\\cC_{\\scriptscriptstyle(1)}(u)| = \\Theta_P(n)$, whilst for $u > u_{\\star}$, $|\\cC_{\\scriptscriptstyle(1)}(u)| = o_P(n)$.\n\n\\item If $\\E(D^3) < \\infty$ and $\\E(D_n^3)\\to \\E(D^3)$, then for $u_n$ satisfying\n\t\\[\\lim_{n\\to\\infty} n^{1\/3} (u_{\\star} - u_n)=a_0\\]\nfor some $a_0\\in \\bR$, the connectivity structure of the VSRW at level $u_n$, with distances in the maximal components rescaled by $n^{-1\/3}$, satisfies results analogous to Theorem \\ref{thm:vacant-set-scaling}.\n\t\n\\item Let $p_k:=\\pr(D = k)$, $k=0, 1, \\ldots$.
Assume that there exist $C>0$ and $\\tau\\in (3,4)$ such that\n\\[p_k \\sim C k^{-\\tau}\\ \\ \\text{ as }\\ \\ k\\to\\infty.\\]\n(In particular, $\\E[D^2]<\\infty$, but $\\E[D^3]=\\infty$.)\nThen for any $a_0\\in\\bR$, there exists a sequence\n$\\boldsymbol{M}^{\\mathrm{vac}}_{\\tau}(a_0)=\\big(M^{\\mathrm{vac}}_{\\tau, 1}(a_0),M^{\\mathrm{vac}}_{\\tau, 2}(a_0),\\ldots\\big)$\nof random metric measure spaces such that under the annealed measure $\\vP_{n,\\mathbf{d}}$ (or $\\overline\\vP_{n,\\mathbf{d}}$),\n\\[n^{-\\frac{\\tau-3}{\\tau -1}}\\big(\\cC_{(1)}(u_n), \\cC_{(2)}(u_n),\\ldots\\big)\\stackrel{\\mathrm{w}}{\\longrightarrow}\n\t\\boldsymbol{M}^{\\mathrm{vac}}_{\\tau}(a_0),\\]\nfor any sequence $u_n$ satisfying\n\\[\\lim_{n\\to\\infty} n^{(\\tau-3)\/(\\tau -1)} (u_{\\star} - u_n) = a_0.\\]\n \\end{enumeratea}\n\n\n\n\\end{conj}\n\n\\begin{rem}\n\t\\label{rem:clar}\n\tAt this point, we owe the reader two clarifications regarding the conjecture. First, we need to clarify the phrase ``results analogous to Theorem \\ref{thm:vacant-set-scaling}'' in (b). Second, we need to give some details on the limit objects $\\boldsymbol{M}^{\\mathrm{vac}}_{\\tau}(a_0)$ in (c).\nBoth of these clarifications are deferred to the discussion in Section \\ref{sec:disc}, item \\eqref{disc:vacant-set}.\n\\end{rem}\n\n\\subsection{Scaling limits of random graphs with prescribed degrees}\n\\label{sec:abs-res}\nThis section describes our main results on graphs with prescribed degree sequence. The first result describes the maximal component structure for critical random graphs under appropriate assumptions.
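For concreteness, the criticality parameter $\\nu$ appearing in Conjecture \\ref{conj:general-degree-vacant}, in its finite-$n$ form $\\nu_n=\\sum_v d_v(d_v-1)\/\\sum_v d_v$, can be computed directly from a degree sequence. The following one-liner is an illustrative sketch (the function name is ours):

```python
def criticality_parameter(degrees):
    """nu_n = sum_v d_v(d_v - 1) / sum_v d_v; values near 1 correspond
    to the critical window discussed in this section."""
    return sum(d * (d - 1) for d in degrees) / sum(degrees)
```

For instance, a degree sequence repeating the block $(1,1,1,3)$ gives $\\nu_n=1$ exactly, i.e., a sequence sitting at criticality.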
For each $n\\geq 1$ let $\\mathbf{d}=\\mathbf{d}^{\\scriptscriptstyle(n)}=(d_v : v\\in[n])$ be a degree sequence with vertex set $[n]$. We will work with degree sequences that satisfy the following assumption.\n\n\\begin{ass}\n\t\\label{ass:cm-deg}\n\tLet $D_n$ be a random variable with distribution given by\n\\[\\mathbb{P}\\big(D_n=i\\big)=\\frac{1}{n}\\#\\big\\{j\\ :\\ d_j=i\\big\\},\\]\ni.e., $D_n$ has the law of the degree of a vertex selected uniformly at random from $[n]$. Assume the following hold as $n\\to\\infty$:\n\\begin{enumeratei}\n\\item There exists a limiting random variable $D$ with $\\pr(D = 1)> 0$ such that $D_n \\stackrel{\\mathrm{w}}{\\longrightarrow} D$.\n\\item Convergence of third moments (and hence all lower moments):\n\\[\\E\\big[D_n^3\\big]:= \\frac{1}{n} \\sum_{v\\in [n]} d_v^3 \\to \\E\\big[D^3\\big]<\\infty. \\]\n\\item We are in the critical scaling window, i.e., there exists $\\lambda \\in \\bR$ such that\n\\[\\nu_n:= \\frac{\\sum_{v\\in [n]} d_v(d_v-1)}{\\sum_{v\\in [n]} d_v} = 1+\\frac{\\lambda}{n^{1\/3}} + o(n^{-1\/3}).
\\]\nIn particular, $\\E[D^2]=2\\E[D]$.\n\\end{enumeratei}\n\\end{ass}\n\nRecall that $\\mathbb{G}_{n,\\mathbf{d}}$ denotes the space of all simple graphs on $n$ vertices with degree sequence $\\mathbf{d}$ and that $\\mathcal{G}_{n,\\mathbf{d}}$ is uniformly distributed over $\\mathbb{G}_{n,\\mathbf{d}}$.\n\n\n\n\n\\begin{thm}[Scaling limit of graphs with given degree sequence under optimal assumptions]\\label{thm:graphs-given-degree-scaling}\nSuppose $\\set{\\mathbf{d}^{\\scriptscriptstyle(n)}:n\\geq 1}$ satisfy Assumption \\ref{ass:cm-deg} with limiting random variable $D$.\n\\begin{enumeratei}\n\\item Let $\\cC_{(j)}$ be the $j$-th largest component of $\\mathcal{G}_{n,\\mathbf{d}}$. Endow $\\cC_{(j)}$ with the graph distance and the uniform probability measure on its vertices.
Then there exists a sequence $\\boldsymbol{M}^D(\\lambda)=(M_1^D(\\lambda), M_2^D(\\lambda),\\ldots)$ of (random) compact metric measure spaces such that\n\\[\\frac{1}{n^{1\/3}}\\big(\\cC_{(1)}, \\cC_{(2)},\\ldots\\big)\\stackrel{\\mathrm{w}}{\\longrightarrow}\\boldsymbol{M}^D(\\lambda)\\]\nwith respect to the product topology induced by the GHP distance on each coordinate.\n\\item The conclusion of part (i) continues to hold with the same limiting sequence $\\boldsymbol{M}^D(\\lambda)$ if we replace $\\mathcal{G}_{n,\\mathbf{d}}$ by $\\CM_n(\\mathbf{d})$.\n\\end{enumeratei}\n\\end{thm}\n\\begin{rem}\n\tThe limit objects $\\boldsymbol{M}^D(\\lambda)$ are described explicitly in Construction \\ref{constr:M-D}.\n\\end{rem}\n\n\nThe main ingredient in proving the above result is the following theorem about the uniform distribution on the space of all {\\bf connected} simple graphs with a prescribed degree sequence.\nFor each fixed $\\widetilde m\\geq 1$, let $\\widetilde\\mathbf{d}=\\widetilde\\mathbf{d}^{\\scriptscriptstyle(\\widetilde m)}=(\\widetilde d_1,\\hdots,\\widetilde d_{\\widetilde m})$ be a given degree sequence.
Consider the following assumption on $\\set{\\widetilde\\mathbf{d}^{\\scriptscriptstyle(\\widetilde m)}: \\widetilde m\\geq 1}$:\n\\begin{ass}\\label{ass:degree}\\hfill\n\\begin{enumeratei}\n\\item\\label{it:1}\n$\\widetilde d_j\\ge 1$ for $1\\le j\\le\\widetilde m$, and $\\widetilde d_1=1$.\n\\item\\label{it:2}\nThere exists a probability mass function $(\\widetilde p_1,\\widetilde p_2,\\ldots)$ with\n\\[\\widetilde p_1>0,\\quad \\sum_{i\\ge 1}i\\widetilde p_i=2,\\quad\\text{and }\\sum_{i\\ge 1}i^2\\widetilde p_i<\\infty\\]\nsuch that\n\\[\\frac{1}{\\widetilde m}\\#\\set{j: \\widetilde d_j=i}\\to\\widetilde p_i\\ \\text{ for }\\ i\\ge 1,\\ \\text{ and }\\\n\\frac{1}{\\widetilde m}\\sum_{j=1}^{\\widetilde m}\\widetilde d_j^2\\to\\sum_{i\\ge 1}i^2\\widetilde p_i.\\]\nIn particular, $\\max_{1\\le j\\le \\widetilde m}\\widetilde d_j=o(\\sqrt{\\widetilde m})$.\n\\end{enumeratei}\n\\end{ass}\n\\begin{rem}\\label{rem:d1=1-irrelevant}\nWe make two observations about the above set of assumptions.\n\\begin{enumeratei}\n\\item\nThe assumption $\\widetilde d_1=1$ makes the notation in the proofs simpler. It has no other special relevance. Indeed,\nsince $\\widetilde p_1>0$, a positive proportion of vertices have degree one when $\\widetilde m$ is large.
Thus, we can always consider the vertex that has the smallest label among all vertices that have degree one.\n\\item We will work with connected graphs with fixed complexity, i.e., for all $\\widetilde m\\geq 1$, the degree sequence $\\widetilde\\mathbf{d}^{\\scriptscriptstyle(\\widetilde m)}$ will satisfy $\\sum_{j\\in[\\widetilde m]}\\widetilde d_j=2(\\widetilde m-1)+2k$ for some fixed $k\\geq 0$.\n Hence, in this case, the assumption $\\sum_{i\\geq 1}i\\widetilde p_i=2$ is redundant, as it follows from the other assumptions.\n\\end{enumeratei}\n\\end{rem}\nLet $\\mathbb{G}_{\\widetilde\\mathbf{d}}^{\\con}$ be the set of all {\\bf connected}, simple, labeled (by $[\\widetilde m]$) graphs with degree sequence $\\widetilde\\mathbf{d}$, where the vertex labeled $j$ has degree $\\widetilde d_j$.\n\n\\begin{thm}[Scaling limit of {\\bf connected} graphs with given degree sequence under optimal assumptions]\\label{prop:condition-on-connectivity}\nConsider a sequence of degree sequences $\\widetilde\\mathbf{d}^{\\scriptscriptstyle(\\widetilde m)}=(\\widetilde d_1,\\hdots,\\widetilde d_{\\widetilde m})$ satisfying Assumption \\ref{ass:degree}.
In addition, assume that for all $\\widetilde m$,\n\\[\\sum_{j\\in[\\widetilde m]}\\widetilde d_j=2(\\widetilde m-1)+2k\\]\nfor some (fixed) nonnegative integer $k$.\nSample $\\mathcal{G}_{\\widetilde\\mathbf{d}}^{\\con}$ uniformly from $\\mathbb{G}_{\\widetilde{\\mathbf{d}}}^{\\con}$, and endow it with the graph distance and the uniform probability measure on vertices. Then there exists a random compact metric measure space $M^{\\scriptscriptstyle(k)}$ such that\n\\[\\frac{1}{\\sqrt{\\widetilde m}}\\mathcal{G}_{\\widetilde\\mathbf{d}}^{\\con}\\stackrel{\\mathrm{w}}{\\longrightarrow} \\frac{1}{\\sigma}M^{\\scriptscriptstyle(k)}\\]\nin the GHP sense, where $\\sigma^2=\\sum_{i\\ge 1}i^2\\widetilde p_i-4$ is the asymptotic variance of the degree distribution.\n\\end{thm}\n\n\\begin{rem}\n\tThe limit object $M^{\\scriptscriptstyle(k)}$ is described explicitly in Construction \\ref{constr:M-k}.\n\\end{rem}\nOur next result concerns enumeration of connected graphs with prescribed degrees.\n\\begin{thm}[Asymptotic number of connected graphs with given degree sequence when the complexity is fixed]\\label{thm:number-of-connected-graphs}\nConsider a sequence of degree sequences $\\widetilde\\mathbf{d}^{\\scriptscriptstyle(\\widetilde m)}=(\\widetilde d_1,\\hdots,\\widetilde d_{\\widetilde m})$ satisfying Assumption \\ref{ass:degree}. Assume further that for all $\\widetilde m$,\n\\[\\sum_{j\\in[\\widetilde m]}\\widetilde d_j=2(\\widetilde m-1)+2k\\]\nfor some (fixed) nonnegative integer $k$. Let $\\sigma^2=\\sum_{i\\ge 1}i^2\\widetilde p_i-4$ be the asymptotic variance of the degree distribution.
Then\n\\[\\lim_{\\widetilde m\\to\\infty}\\ \\frac{\\big|\\mathbb{G}_{\\widetilde\\mathbf{d}}^{\\con}\\big|\\times\n\\prod_{i=1}^{\\widetilde m}\\big(\\widetilde d_i-1\\big)!\\times\n\\widetilde m^{k\/2}}{\\big(\\widetilde m+2k-2\\big)!}=\\frac{\\sigma^k}{k!}\\E\\bigg[\\bigg(\\int_0^1\\ve(x)dx\\bigg)^k\\bigg],\\]\nwhere $\\ve(x)$, $x\\in[0,1]$, is a standard Brownian excursion.\n\\end{thm}\n\\begin{rem}\\label{rem:wright}\nLet $C(n,n+k)$ denote the number of connected graphs with $n$ labeled vertices and $n+k$ edges. Wright \\cite{wright1977number} showed that for any fixed $k\\geq -1$,\n\\[C(n, n+k)\\sim\\rho_k n^{n+(3k-1)\/2}\\qquad \\text{as }\\ n\\to\\infty,\\]\nwhere the constants $\\rho_k$ satisfy a certain recurrence relation. Spencer \\cite{spencer-count} proved a connection between this purely combinatorial result and a probabilistic object by showing that\n\\[\\rho_k=\\frac{1}{(k+1)!}\\E\\bigg[\\bigg(\\int_0^1\\ve(x)dx\\bigg)^{k+1}\\bigg],\\ \\ k\\geq -1,\\]\nwhere $\\ve(x)$, $x\\in[0,1]$, is a standard Brownian excursion. Theorem \\ref{thm:number-of-connected-graphs} proves the analogue of this result for connected graphs when the degree sequence is fixed.\n\\end{rem}\n\n\\section{Definitions and limit objects }\n\\label{sec:not}\nThis section contains basic constructs required to state our main results.\n\n\\subsection{Notation and conventions}\\label{sec:notation}\n\n\nFor any set $A$, we write $|A|$ for its cardinality and $\\mathds{1}\\set{A}$ for the associated indicator function. Given two intervals $A, B \\subset \\bR$, we write $C(A,B)$ for the space of continuous functions $f: A \\to B$, equipped with the $L_\\infty$-norm $\\|f\\|_\\infty := \\sup_{x \\in A}|f(x)|$. We write $D(A,B)$ for the space of RCLL (right continuous with left limits) functions $f: A \\to B$, equipped with the Skorohod topology.
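The Brownian-excursion area moments appearing in the limit above and in Spencer's formula for $\rho_k$ are easy to probe numerically. The following sketch (our own illustration; the step count, sample size, and seed are arbitrary choices, not from the paper) simulates a standard Brownian excursion as the Vervaat transform of a Brownian bridge and estimates the first area moment, whose known value is $\sqrt{\pi/8}\approx 0.6267$.

```python
import numpy as np

def excursion_area_samples(n_samples=20000, n_steps=2000, seed=1):
    """Monte Carlo samples of int_0^1 e(x) dx for a standard Brownian excursion e,
    obtained from a discretized Brownian bridge via the Vervaat transform."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, n_steps) / n_steps
    areas = np.empty(n_samples)
    for i in range(n_samples):
        w = np.cumsum(rng.standard_normal(n_steps)) / np.sqrt(n_steps)
        bridge = w[:-1] - t * w[-1]          # Brownian bridge on the grid
        m = int(np.argmin(bridge))
        # Vervaat transform: rotate the bridge at its minimum to get an excursion.
        exc = np.concatenate((bridge[m:], bridge[:m])) - bridge[m]
        areas[i] = exc.mean()                # Riemann sum for the area
    return areas

areas = excursion_area_samples()
print(round(float(areas.mean()), 3))  # close to sqrt(pi/8), up to a small discretization bias
```

Higher moments $\E[(\int_0^1\ve)^k]$ can be estimated from the same samples by averaging `areas ** k`.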
We use the standard Landau notation of $o(\\cdot)$, $O(\\cdot)$ and the corresponding \\emph{order in probability} notation $o_P(\\cdot)$ and $O_P(\\cdot)$.\n\n We use $\\stackrel{\\mathrm{P}}{\\longrightarrow}$, $\\stackrel{\\mathrm{w}}{\\longrightarrow}$ and $\\stackrel{\\mathrm{a.s.}}{\\longrightarrow}$ to denote convergence in probability, weak convergence and almost-sure convergence, respectively. We say that a sequence of events $E_n$, $n \\in \\bN$, occurs \\emph{with high probability} if $\\pr(E_n) \\to 1$ as $n \\to \\infty$.\n\n\n\\subsection{Gromov-Hausdorff-Prokhorov metric}\n\\label{sec:gh-mc}\n\nWe mainly follow \\cite{EJP2116,AddBroGolMie13,metric-geometry-book}. All metric spaces under consideration will be compact. Let us recall the Gromov-Hausdorff distance $d_{\\GH}$ between metric spaces. Fix two metric spaces $X_1 = (X_1,d_1)$ and $X_2 = (X_2, d_2)$. For a subset $C\\subseteq X_1 \\times X_2$, the distortion of $C$ is defined as\n\\begin{equation}\n\t\\label{eqn:def-distortion}\n\t\\dis(C):= \\sup \\set{|d_1(x_1,y_1) - d_2(x_2, y_2)|: (x_1,x_2) , (y_1,y_2) \\in C}.\n\\end{equation}\nA correspondence $C$ between $X_1$ and $X_2$ is a measurable subset of $X_1 \\times X_2$ such that for every $x_1 \\in X_1$, there exists at least one $x_2 \\in X_2$ such that $(x_1,x_2) \\in C$ and vice-versa. The Gromov-Hausdorff distance between the two metric spaces $(X_1,d_1)$ and $(X_2, d_2)$ is defined as\n\\begin{equation}\n\\label{eqn:dgh}\n\td_{\\GH}(X_1, X_2) = \\frac{1}{2}\\inf \\set{\\dis(C): C \\mbox{ is a correspondence between } X_1 \\mbox{ and } X_2}.\n\\end{equation}\n\nSuppose $(X_1, d_1)$ and $(X_2, d_2)$ are two metric spaces with $p_1\\in X_1$ and $p_2\\in X_2$.
Then the {\\it pointed Gromov-Hausdorff distance} between $\\mvX_1:=(X_1, d_1, p_1)$ and $\\mvX_2:=(X_2, d_2, p_2)$ is given by\n\\begin{align}\n\\label{eqn:dgh-pointed}\n\td_{\\GH}^{\\point}(\\mvX_1, \\mvX_2) = \\frac{1}{2}\\inf \\set{\\dis(C): C \\mbox{ is a correspondence between }X_1 \\mbox{ and } X_2\\mbox{ and }(p_1, p_2)\\in C}.\n\\end{align}\n\n\n\nWe will use the Gromov-Hausdorff-Prokhorov distance, which also keeps track of associated measures on the corresponding metric spaces. A compact metric measure space $(X, d, \\mu)$ is a compact metric space $(X,d)$ with an associated finite measure $\\mu$ on the Borel sigma algebra $\\cB(X)$. Given two compact metric measure spaces $(X_1, d_1, \\mu_1)$ and $(X_2,d_2, \\mu_2)$ and a measure $\\pi$ on the product space $X_1\\times X_2$, the discrepancy of $\\pi$ with respect to $\\mu_1$ and $\\mu_2$ is defined as\n\n\\begin{equation}\n\t\\label{eqn:def-discrepancy}\n\tD(\\pi;\\mu_1, \\mu_2):= ||\\mu_1-\\pi_1|| + ||\\mu_2-\\pi_2||\n\\end{equation}\nwhere $\\pi_1, \\pi_2$ are the marginals of $\\pi$ and $||\\cdot||$ denotes the total variation of signed measures. The metric $d_{\\GHP}$ between $X_1$ and $X_2$ is then defined as\n\\begin{equation}\n\\label{eqn:dghp}\n\td_{\\GHP}(X_1, X_2):= \\inf\\bigg\\{ \\max\\bigg(\\frac{1}{2} \\dis(C),~D(\\pi;\\mu_1,\\mu_2),~\\pi(C^c)\\bigg) \\bigg\\},\n\\end{equation}\nwhere the infimum is taken over all correspondences $C$ and measures $\\pi$ on $X_1 \\times X_2$.
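For intuition, the Gromov-Hausdorff distance defined above can be computed by brute force for very small finite metric spaces by enumerating all correspondences (an illustrative sketch of ours, feasible only for a handful of points since the number of candidate subsets grows like $2^{|X_1||X_2|}$).

```python
def gromov_hausdorff(d1, d2):
    """Brute-force d_GH between finite metric spaces given as distance matrices."""
    n, m = len(d1), len(d2)
    pairs = [(i, j) for i in range(n) for j in range(m)]
    best = float("inf")
    for mask in range(1, 1 << len(pairs)):
        C = [p for k, p in enumerate(pairs) if mask >> k & 1]
        # C must be a correspondence: every point of each space is covered.
        if {i for i, _ in C} != set(range(n)) or {j for _, j in C} != set(range(m)):
            continue
        dis = max(abs(d1[a][c] - d2[b][d]) for (a, b) in C for (c, d) in C)
        best = min(best, dis)
    return best / 2

# Two two-point spaces with diameters 1 and 3: the optimal correspondence has
# distortion |3 - 1| = 2, so d_GH = 1.
print(gromov_hausdorff([[0, 1], [1, 0]], [[0, 3], [3, 0]]))  # 1.0
```

The same loop computes the pointed variant by additionally requiring $(p_1,p_2)\in C$.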
As mentioned in the introduction, unless stated otherwise, the associated measures will typically be \\emph{probability} measures.\n\nSimilar to \\eqref{eqn:dgh-pointed}, we can define a ``{\\it pointed Gromov-Hausdorff-Prokhorov distance}'' $d_{\\GHP}^{\\point}$ between two metric measure spaces $X_1$ and $X_2$ having distinguished points $p_1$ and $p_2$ respectively, by taking the infimum in \\eqref{eqn:dghp} over all correspondences $C$ and measures $\\pi$ on $X_1 \\times X_2$ such that $(p_1, p_2)\\in C$.\n\n\nNow let $\\mathfrak{S}$ be the space of all compact metric measure spaces. The function $d_{\\GHP}$ is a pseudometric on $\\mathfrak{S}$, and defines an equivalence relation $X \\sim Y \\Leftrightarrow d_{\\GHP}(X,Y) = 0$ on $\\mathfrak{S}$. Let $\\bar \\mathfrak{S} := \\mathfrak{S} \/ \\sim $ be the space of isometry-equivalence classes of compact metric measure spaces and $\\bar d_{\\GHP}$ the induced metric. Then by \\cite{EJP2116}, $(\\bar \\mathfrak{S}, \\bar d_{\\GHP})$ is a complete separable metric space. To ease notation, we will continue to use $(\\mathfrak{S}, d_{\\GHP})$ instead of $(\\bar \\mathfrak{S}, \\bar d_{\\GHP})$ and $X = (X, d, \\mu)$ to denote both the metric space and the corresponding equivalence class.\n\nSince we will be interested in not just one metric space but an\ninfinite sequence of metric spaces, the relevant space will be $\\mathfrak{S}^{\\bN}$ equipped with the product topology inherited from $d_{\\GHP}$.\n\n\\subsection{Gromov-weak topology}\\label{sec:gromov-weak}\nHere we mainly follow \\cite{Winter-gromov-weak}.
Introduce an equivalence relation on the space of complete and separable metric spaces that are equipped with a probability measure on the associated Borel $\\sigma$-algebra by declaring two such spaces $(X_1, d_1, \\mu_1)$ and $(X_2, d_2, \\mu_2)$ to be equivalent when there exists an isometry $\\psi:\\mathrm{support}(\\mu_1)\\to\\mathrm{support}(\\mu_2)$ such that $\\mu_2=\\psi_{\\ast}\\mu_1:=\\mu_1\\circ\\psi^{-1}$, i.e., the push-forward of $\\mu_1$ under $\\psi$ is $\\mu_2$. Write $\\mathfrak{S}_{*}$ for the associated space of equivalence classes. As before, we will often ease notation by not distinguishing between a metric space and its equivalence class.\n\nFix $l\\geq 2$, and a complete separable metric space $(X, d)$. Then given a collection of points $\\vx:=(x_1, x_2, \\ldots, x_l)\\in X^l$, let $\\vD(\\vx):= (d(x_i, x_j))_{i,j\\in [l]}$ denote the symmetric matrix of pairwise distances between the collection of points. A function $\\Phi\\colon \\mathfrak{S}_* \\to \\bR$ is called a polynomial of degree $l$ if there exists a bounded continuous function $\\phi\\colon \\bR_+^{l^2}\\to \\bR$ such that\n\\begin{equation}\\label{eqn:polynomial-func-def}\n\t\\Phi((X,d,\\mu)):= \\int \\phi(\\vD(\\vx)) d\\mu^{\\otimes l}(\\vx).\n\\end{equation}\nHere $\\mu^{\\otimes l}$ is the $l$-fold product measure of $\\mu$. 
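On a finite metric measure space the integral defining a polynomial $\Phi$ reduces to a finite sum over $l$-tuples of points, which makes the definition easy to experiment with (an illustrative sketch of ours; the function names are not from the paper).

```python
import itertools

def polynomial(dist, mu, phi, l):
    """Evaluate Phi((X, d, mu)) = int phi(D(x)) dmu^{(x)l}(x) on a finite space,
    where dist is the distance matrix and mu a probability vector."""
    n = len(mu)
    total = 0.0
    for idx in itertools.product(range(n), repeat=l):
        D = [[dist[i][j] for j in idx] for i in idx]  # l-by-l matrix of pairwise distances
        weight = 1.0
        for i in idx:                                 # product-measure weight
            weight *= mu[i]
        total += phi(D) * weight
    return total

# Degree-2 polynomial with phi(D) = D[0][1]: the mean distance between two
# independent mu-samples. Two points at distance 1 with mu = (1/2, 1/2):
print(polynomial([[0, 1], [1, 0]], [0.5, 0.5], lambda D: D[0][1], 2))  # 0.5
```

Gromov-weak convergence of a sequence of such spaces amounts to convergence of these numbers for every bounded continuous $\phi$ and every $l$.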
Let $\\boldsymbol{\\Pi}$ denote the space of all polynomials on $\\mathfrak{S}_*$.\n\n\\begin{defn}[Gromov-weak topology]\n\t\\label{def:gromov-weak}\n\tA sequence $(X_n, d_n, \\mu_n)_{n\\geq 1} \\in \\mathfrak{S}_*$ is said to converge to $(X, d, \\mu) \\in \\mathfrak{S}_*$ in the Gromov-weak topology if and only if $\\Phi((X_n, d_n, \\mu_n))\\to \\Phi((X, d, \\mu))$ for all $\\Phi\\in \\boldsymbol{\\Pi}$.\n\\end{defn}\nIn \\cite[Theorem 1]{Winter-gromov-weak} it is shown that $\\mathfrak{S}_*$ is a Polish space under the Gromov-weak topology. It is also shown that, in fact, this topology can be completely metrized using the so-called Gromov-Prokhorov metric.\n\n\\subsection{Spaces of trees with edge lengths, leaf weights and root-to-leaf measures}\n\\label{sec:space-of-trees}\nThe rest of this section largely follows \\cite{bhamidi-hofstad-sen}. In the proof of the main results we need the following two spaces built on top of the space of discrete trees. The first space $\\vT_{IJ}$ was formulated in \\cite{aldous-pitman-edge-lengths,aldous-pitman-entrance} where it was used to study trees spanning a finite number of random points sampled from an inhomogeneous continuum random tree (ICRT). A more general space $\\vT_{IJ}^*$ was used in the proofs in \\cite{bhamidi-hofstad-sen}.\nThe index $I$ in $\\vT_{IJ}$ and $\\vT_{IJ}^*$ is used to keep track of the number of marked ``hubs,'' i.e., vertices of high (or infinite) degrees in such trees (see \\cite{aldous-pitman-edge-lengths,aldous-pitman-entrance,bhamidi-hofstad-sen} for a proper definition). For our purpose it will suffice to consider the case $I=0$. So we only define the spaces $\\vT_{J}:=\\vT_{0J}$ and $\\vT_{J}^*:=\\vT_{0J}^*$.\n\\medskip\n\n\\noindent{\\bf The space $\\vT_{J}$:} Fix $J\\geq 1$.
Let $\\vT_{J}$ be the space of trees having the following properties:\n\\begin{enumeratea}\n\t\\item There are exactly $J$ leaves labeled $1+, \\ldots, J+$, and the tree is rooted at another labeled vertex $0+$.\n\n\t\\item Every edge $e$ has a strictly positive edge length $l_e$.\n\\end{enumeratea}\nA tree $\\vt\\in \\vT_{J}$ can be viewed as being composed of two parts:\\\\\n(1) $\\shape(\\vt)$ describing the shape of the tree (including the labels of leaves) but ignoring edge lengths. The set of all possible shapes $\\vT_{J}^{\\shape}$ is obviously finite for fixed $J$.\\\\\n(2) The edge lengths $\\vl(\\vt):= (l_e:e\\in \\vt)$. Consider the product topology on $\\vT_{J}$ consisting of the discrete topology on $\\vT_{J}^{\\shape}$ and the product topology for the edge lengths $\\vl(\\vt)$.\n\n\\medskip\n\n\\noindent{\\bf The space $\\vT_{J}^*$:} We will need a slightly more general space. Along with the two attributes above in $\\vT_{J}$, the trees in this space have the following two additional properties. Let $\\cL(\\vt):= \\set{1+, \\ldots, J+}$ denote the collection of non-root leaves in $\\vt$. Then every leaf $v\\in \\cL(\\vt)$ has the following attributes:\n\n\\begin{enumeratea}\n\t\\item[(d)] {\\bf Leaf weights:} A nonnegative number $A(v)$. Write $\\vA(\\vt):=(A(v): v\\in \\cL(\\vt))$.\n\t\\item[(e)] {\\bf Root-to-leaf measures:} A probability measure $\\nu_{\\vt,v}$ on the path $[0+,v]$ connecting the root and the leaf $v$. Here the path is viewed as a line segment pointed at $0+$ and has the usual Euclidean topology. Write $\\boldsymbol{\\nu}(\\vt):= (\\nu_{\\vt,v}: v\\in \\cL(\\vt))$ for this collection of probability measures.\n\\end{enumeratea}\nIn addition to the topology on $\\vT_{J}$, the space $\\vT_{J}^*$ with these two additional attributes inherits the product topology on $\\bR^{J}$ for the leaf weights and the topology of $(d_{\\GHP}^{\\point})^J$ for the root-to-leaf measures.\n\nAdditionally, we include a special element $\\partial$ in $\\vT_{J}^*$.
This will be useful in the proofs as we will view any rooted tree that does not have exactly $J$ distinct leaves as $\\partial$, which will allow us to work entirely in the space $\\vT_{J}^*$.\n\\subsection{Scaling limits of component sizes at criticality}\n\\label{sec:erdos-scaling-limit}\nThe starting point for establishing the metric space scaling limit is understanding the behavior of the component sizes.\nWe first set up some notation. Fix parameters $\\alpha, \\eta, \\beta > 0$, and write $\\mvmu = (\\alpha, \\eta, \\beta)\\in \\bR_+^3$. Let $\\set{B(s):s\\geq 0}$ be a standard Brownian motion. For $\\lambda \\in \\bR$, define\n\\begin{equation}\n\\label{eqn:bm-lamb-kapp-def}\nW^{\\mvmu,\\lambda}(s):= \\frac{\\sqrt{\\eta}}{\\alpha} B(s)+\\lambda s - \\frac{\\eta s^2}{2\\alpha^3},\\qquad s\\geq 0.\n\\end{equation}\nWrite $\\overline{W}^{\\mvmu,\\lambda}$ for the process reflected at zero:\n\\begin{equation}\n\\label{eqn:reflected-pro-def}\n\t\\overline{W}^{\\mvmu,\\lambda}(s) := W^{\\mvmu,\\lambda}(s) -\\min_{0\\le u\\le s} W^{\\mvmu,\\lambda}(u), \\qquad s\\geq 0.\n\\end{equation}\n Consider the metric space,\n\\begin{equation}\n\\label{eqn:ldown-def}\n\tl^2_{\\downarrow}:= \\bigg\\{\\vx= (x_i:i\\geq 1): x_1\\geq x_2 \\geq \\ldots \\geq 0, \\sum_{i=1}^\\infty x_i^2 < \\infty\\bigg\\},\n\\end{equation}\nequipped with the natural metric inherited from $l^2$.\nIt was shown by Aldous in \\cite{aldous-crit} that the excursions of $\\overline{W}^{\\mvmu,\\lambda}$ from zero can be arranged in decreasing order of lengths as\n\\begin{equation}\n\\label{eqn:mvxi-def}\n\t\\mvxi^{\\mvmu}(\\lambda) = \\big(|\\gamma_{\\scriptscriptstyle(i)}^{\\mvmu}(\\lambda)|: i \\geq 1\\big),\n\\end{equation}\nwhere $|\\gamma_{\\scriptscriptstyle(i)}^{\\mvmu}(\\lambda)|$ is the length of the $i$-th largest excursion $\\gamma_{\\scriptscriptstyle(i)}^{\\mvmu}(\\lambda)$, and further $\\mvxi^{\\mvmu}(\\lambda) \\in l^2_{\\downarrow}$. 
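The process $W^{\mvmu,\lambda}$ and its reflected version are easy to simulate on a grid; the sketch below (parameter values, horizon, and step size are illustrative choices of ours) extracts the excursion lengths of the reflected process in decreasing order, the discrete analogue of $\mvxi^{\mvmu}(\lambda)$.

```python
import numpy as np

def excursion_lengths(alpha=1.0, eta=1.0, lam=1.0, T=20.0, n=200_000, seed=0):
    """Lengths, in decreasing order, of the excursions above 0 of the
    discretized reflected process W-bar on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n
    s = np.linspace(dt, T, n)
    B = np.cumsum(rng.standard_normal(n)) * np.sqrt(dt)      # Brownian motion
    W = (np.sqrt(eta) / alpha) * B + lam * s - eta * s**2 / (2 * alpha**3)
    Wbar = W - np.minimum.accumulate(W)                      # reflect at the running minimum
    lengths, run = [], 0
    for positive in Wbar > 0:
        if positive:
            run += 1
        elif run:
            lengths.append(run * dt)
            run = 0
    if run:
        lengths.append(run * dt)
    return sorted(lengths, reverse=True)

lens = excursion_lengths()
```

Because of the negative quadratic drift, all but finitely many excursions are short, mirroring the fact that the ordered lengths lie in $l^2_{\downarrow}$.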
Let $\\mathcal{P}_\\beta$ be a rate-$\\beta$ Poisson point process on $\\bR_+^2$, independent of $W^{\\mvmu,\\lambda}(\\cdot)$.\nFor each $i\\geq 1$, write $N_{\\scriptscriptstyle(i)}^{\\mvmu}(\\lambda)$ for the number of points of $\\mathcal{P}_\\beta$ that fall under the excursion $\\gamma_{\\scriptscriptstyle(i)}^{\\mvmu}(\\lambda)$.\n\n\nAldous \\cite{aldous-crit} studied the maximal components of the Erd\\H{o}s-R\\'enyi random graph in the critical regime and proved the remarkable result that the sizes of the maximal components scaled by $n^{-2\/3}$ and the number of surplus edges in the maximal components of $\\ERRG(n^{-1}+\\lambda n^{-4\/3})$ converge jointly in distribution to\n$\\big(\\big(|\\gamma_{\\scriptscriptstyle(i)}^{\\mvmu_{\\er}}(\\lambda)|, N_{\\scriptscriptstyle(i)}^{\\mvmu_{\\er}}(\\lambda)\\big): i\\geq 1 \\big)$,\nwhere $\\mvmu_{\\er} = (1,1,1)$.\nThis result has since been generalized to a number of other random graph models. In the context of graphs with given degree sequence, Nachmias and Peres \\cite{nachmias-peres-random-regular} studied critical percolation on random regular graphs; Riordan \\cite{riordan2012phase} analyzed the configuration model with bounded degrees; and Joseph \\cite{joseph2014component} considered i.i.d. degrees.\nThe strongest possible such result under minimal assumptions was obtained in \\cite{dhara-hofstad-leeuwaarden-sen}.
We will state a weaker version of this result next.\n\n\\begin{thm}[\\cite{dhara-hofstad-leeuwaarden-sen}]\\label{thm:cm-component-sizes}\nConsider a degree sequence $\\mathbf{d}=\\mathbf{d}^{\\scriptscriptstyle(n)}$ satisfying Assumption \\ref{ass:cm-deg} with the limiting random variable $D$, and define $\\sigma_r:=\\E[D^r]$, $r=1,2,3$.\nWrite $\\cC_{\\scriptscriptstyle(i)}^n(\\lambda)$ for the $i$-th largest connected component of $\\CM_n(\\mathbf{d})$ (or $\\mathcal{G}_{n,\\mathbf{d}}$). Let\n\\[\nN_{\\scriptscriptstyle(i)}^n(\\lambda) := \\#\\text{ edges in } \\cC_{\\scriptscriptstyle(i)}^n(\\lambda) - (|\\cC_{\\scriptscriptstyle(i)}^n(\\lambda)| -1)\n\\]\ndenote the number of surplus edges in $\\cC_{\\scriptscriptstyle(i)}^n(\\lambda)$. Then as $n \\to \\infty$,\n\\[\\bigg(\\big(n^{-2\/3}|\\cC_{\\scriptscriptstyle(i)}^n(\\lambda)|, N_{\\scriptscriptstyle(i)}^n(\\lambda)\\big): i\\geq 1\\bigg)\\stackrel{\\mathrm{w}}{\\longrightarrow} \\mvZ^D(\\lambda):= \\big(\\big(|\\gamma_{\\scriptscriptstyle(i)}^{\\mvmu_{D}}(\\lambda)|, N_{\\scriptscriptstyle(i)}^{\\mvmu_{D}}(\\lambda)\\big): i\\geq 1 \\big)\n\\]\nwith respect to the product topology.
Here $\\mvmu_{D} = (\\alpha_D, \\eta_D, \\beta_D)$ is given by\n\\begin{equation*}\n\\alpha_D = \\sigma_1,\\ \\\n\\eta_D = \\sigma_3\\sigma_1-\\sigma_2^2,\\ \\text{ and }\\\n\\beta_D = 1\/\\sigma_1.\n\\end{equation*}\n\\end{thm}\nThis result, in a stronger form, can be found in \\cite[Theorem 2 and Remark 5]{dhara-hofstad-leeuwaarden-sen}.\nWe will use this result in the next section to describe the limiting metric measure spaces arising in Section \\ref{sec:res}.\n\n\n\n\n\n\\subsection{The limiting metric measure spaces}\n\nA compact metric space $(X,d)$ is called a \\emph{real tree} \\cite{legall-survey,evans-book} if between every two points there is a unique geodesic such that this path is also the only non self-intersecting path between the two points. Functions encoding excursions from zero can be used to construct such metric spaces via a simple procedure. We describe this construction next.\n\nFor $0 < a < b <\\infty$, an \\emph{excursion} on $[a,b]$ is a continuous function $h \\in C([a,b], \\bR)$ with $h(a)=0=h(b)$ and $h(t) > 0$ for $t \\in (a,b)$. The length of such an excursion is $b-a$. For $l \\in(0,\\infty)$, let $\\cE_l$ be the space of all excursions on the interval $[0,l]$. Given an excursion $h \\in \\cE_l$, one can construct a real tree as follows. Define the pseudo-metric $d_h$ on $[0,l]$:\n\\begin{equation}\n\\label{eqn:d-pseudo}\n\td_h(s,t):= h(s) + h(t) - 2 \\inf_{u \\in [s,t]}h(u), \\; \\mbox{ for } s,t \\in [0,l].\n\\end{equation}\nDefine the equivalence relation $s \\sim t \\Leftrightarrow d_h(s,t) = 0$. Let $[0,l]\/\\sim$ denote the corresponding quotient space and consider the metric space $\\cT_h:= ([0,l]\/\\sim, \\bar d_h)$, where $\\bar d_h$ is the metric on the equivalence classes induced by $d_h$. Then $\\cT_h$ is a real tree (\\cite{legall-survey,evans-book}).\nLet $q_h:[0,l] \\to \\cT_h$ be the canonical projection and write $\\mu_{\\cT_h}$ for the push-forward of the Lebesgue measure on $[0,l]$ onto $\\cT_h$ via $q_h$. 
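For a discretized excursion the pseudo-metric $d_h$ defined above can be computed directly; the toy example below (our own, for illustration) exhibits both a positive tree distance and a pair of times that are identified in the quotient space $\cT_h$.

```python
def tree_distance(h, s, t):
    """d_h(s, t) = h(s) + h(t) - 2 * min_{u in [s, t]} h(u) for a grid excursion h,
    with s, t given as grid indices."""
    s, t = min(s, t), max(s, t)
    return h[s] + h[t] - 2 * min(h[s:t + 1])

# A toy excursion with two branches that split at height 1.
h = [0.0, 1.0, 2.0, 1.0, 2.0, 1.0, 0.0]
print(tree_distance(h, 2, 4))  # 2.0: the two height-2 branch tips merge at height 1
print(tree_distance(h, 0, 6))  # 0.0: the endpoints are glued into one point of T_h
```

Times at pseudo-distance zero, such as the two endpoints here, project to the same point of the real tree under $q_h$.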
Further, we assume that $\\cT_h$ is rooted at $\\rho := q_h(0)$. Equipped with $\\mu_{\\cT_h}$, $\\cT_h$ is now a rooted compact metric measure space. Note that by construction, for any $x\\in \\cT_h$, the function $h$ is constant on $q_h^{-1}(x)$. Thus for each $x\\in \\cT_h$, we can write $\\mathrm{ht}(x) = h(q_h^{-1}(x))$ for the height of this vertex.\n\nThe Brownian continuum random tree defined below is a fundamental object in the literature of random real trees.\n\n\n\\begin{defn}[Aldous's continuum random tree \\cite{Aldo91a}]\\label{def:aldous-crt}\n\tLet $\\ve$ be a standard Brownian excursion on $[0,1]$. Construct the random compact real tree $\\cT_{2\\ve}$ as in \\eqref{eqn:d-pseudo} with $h=2\\ve$. The associated measure $\\mu_{\\cT_{2\\ve}}$ is supported on the collection of leaves of $\\cT_{2\\ve}$ almost surely.\n\\end{defn}\nWrite $\\nu$ for the law of a standard Brownian excursion on the space $\\cE_1$ of excursions on $[0,1]$.\nFor $k\\geq 0$, let $\\widetilde \\ve_{(k)}$ be a random excursion with distribution $\\widetilde \\nu_{k}$ given via the following Radon-Nikodym density with respect to $\\nu$:\n\\begin{equation}\\label{eqn:tilde-nu-k-def}\n\\frac{d \\widetilde \\nu_{k} }{d\\nu}(h)\n= \\frac{\\left[\\int_0^{1}h(u) du\\right]^k}{\\E\\left[\\left(\\int_0^{1} \\ve(u)du\\right)^k\\right]}\n= \\frac{\\big(\\int_{\\cT_h}\\mathrm{ht}(x)\\mu_{\\cT_h}(dx)\\big)^k}{\\E\\big[\\big(\\int_{\\cT_{\\ve}}\\mathrm{ht}(x)\\mu_{\\cT_{\\ve}}(dx)\\big)^k\\big]}, \\qquad h\\in \\cE_{1}.\n\t\\end{equation}\n\n\\begin{constr}[The space $M^{\\scriptscriptstyle(k)}$]\\label{constr:M-k}\nFix $k\\geq 0$.\n\\begin{enumeratea}\n\\item Let $\\widetilde \\ve_{(k)}$ be as above, and write $\\cT^\\star=\\cT_{2\\widetilde\\ve_{(k)}}$. Let $\\mu_{\\cT^\\star}$ denote the associated measure.\n\\item Conditional on $\\cT^\\star$, sample $k$ leaves $\\set{x_i: 1\\leq i\\leq k}$ in an i.i.d.
fashion from $\\cT^\\star$ with density proportional to $\\mathrm{ht}(x) \\mu_{\\cT^\\star}(dx)$.\n\\item Conditional on the two steps above, for each of the sampled leaves $x_i$, sample a point $y_i$ uniformly at random on the line $[\\rho, x_i]$. Identify $x_i$ and $y_i$, i.e., introduce the equivalence relation $x_i\\sim y_i$, $1\\leq i\\leq k$, and form the quotient space $\\cT^\\star\/\\sim$.\n\\end{enumeratea}\nSet $M^{\\scriptscriptstyle(k)}$ to be the resultant (compact) random metric measure space.\n\\end{constr}\n\nNext recall the definition of $\\mvZ^D(\\lambda)$ from Theorem \\ref{thm:cm-component-sizes}.\n\\begin{constr}[The sequence $\\boldsymbol{M}^D(\\lambda)$]\\label{constr:M-D}\n\\hfill\n\\begin{enumeratea}\n\\item Sample $\\mvZ^D(\\lambda)=\\big(\\big(|\\gamma_{\\scriptscriptstyle(i)}^{\\mvmu_{D}}(\\lambda)|, N_{\\scriptscriptstyle(i)}^{\\mvmu_{D}}(\\lambda)\\big)\\ :\\ i\\geq 1\\big)$.\nFor simplicity, write\n\\[\\xi_i=|\\gamma_{\\scriptscriptstyle(i)}^{\\mvmu_{D}}(\\lambda)|,\\ \\text{ and }\\ N_i=N_{\\scriptscriptstyle(i)}^{\\mvmu_{D}}(\\lambda).\\]\n\\item Conditional on $\\mvZ^D(\\lambda)$, construct the spaces $S_i:=M^{(N_i)}$ independently for $i\\geq 1$.\n\\end{enumeratea}\nSet\n\\[\\boldsymbol{M}^D(\\lambda)=\\big(M_1^D(\\lambda), M_2^D(\\lambda),\\ldots\\big),\\ \\text{ where }\\ M_i^D(\\lambda)=\\frac{\\alpha_D\\sqrt{\\xi_i}}{\\sqrt{\\eta_D}}\\cdot S_i,\\quad\\ i\\geq 1.\\]\n\\end{constr}\n\nNote that the sequence $\\boldsymbol{M}^D(\\lambda)$ of limiting spaces depends only on the first three moments of the random variable $D$ (which is also true for $\\mvZ^D(\\lambda)$, the scaling limit of the component sizes and the number of surplus edges).\n\nFinally, let $a_0$ be as in Theorem \\ref{thm:vacant-set-scaling}.
Define\n\\begin{align}\\label{eqn:lambda-vac-def}\n\\lambda_{\\mathrm{vac}}=\\frac{a_0(r-2)^2}{r(r-1)},\\ \\text{ and }\\ p_{\\mathrm{vac}}=\\exp\\bigg(-\\frac{r\\ln(r-1)}{(r-2)}\\bigg),\n\\end{align}\nand let $D_{\\mathrm{vac}}$ be the random variable with the mixture distribution\n\\begin{align}\\label{eqn:D-vac-def}\nD_{\\mathrm{vac}}=(1-p_{\\mathrm{vac}})\\cdot\\delta_0+p_{\\mathrm{vac}}\\cdot\\mathrm{Binomial}\\bigg(r,\\ \\frac{1}{r-1}\\bigg).\n\\end{align}\n\\begin{constr}[The sequence $\\boldsymbol{M}^{\\mathrm{vac}}(a_0)$]\\label{constr:M-vac}\nSet\n\\[\\boldsymbol{M}^{\\mathrm{vac}}(a_0):=\\boldsymbol{M}^{D_{\\mathrm{vac}}}\\big(\\lambda_{\\mathrm{vac}}\\big).\\]\n\\end{constr}\n\n\n\\begin{rem}\\label{rem:erdos-renyi}\nThe Erd\\H{o}s-R\\'enyi scaling limit identified in \\cite{BBG-12} can be recovered by taking the limiting random variable to be $D_{\\er}\\sim\\mathrm{Poisson}(1)$, i.e., the scaling limit of $\\ERRG(n^{-1}+\\lambda n^{-4\/3})$ (after rescaling the graph distance by $n^{-1\/3}$) is given by\n\\[\\boldsymbol{M}_{\\er}(\\lambda):=\\boldsymbol{M}^{D_{\\er}}(\\lambda).\\]\n(Note that in this case, $\\alpha_{D_{\\er}}=\\eta_{D_{\\er}}=\\beta_{D_{\\er}}=1$.)
The result for $\\ERRG(n^{-1}+\\lambda n^{-4\/3})$ can be obtained from our results as a special case of Theorem \\ref{thm:graphs-given-degree-scaling} by observing the following two facts:\n\\begin{enumeratei}\n\\item The (random) degree sequence of $\\ERRG(n^{-1}+\\lambda n^{-4\/3})$ satisfies Assumption \\ref{ass:cm-deg} with limiting random variable $D_{\\er}$.\n\\item Conditional on the event where the degree sequence equals $\\mathbf{d}$, $\\ERRG(n^{-1}+\\lambda n^{-4\/3})$ is uniformly distributed over $\\mathbb{G}_{n,\\mathbf{d}}$.\n\\end{enumeratei}\n\\end{rem}\n\n\n\n\\section{Discussion}\n\\label{sec:disc}\n\nHere we briefly discuss related work, the relevance of the work in this paper, and possible extensions and questions raised by this work.\n\n\\begin{enumeratea}\n\t\\item {\\bf Graphs with prescribed degree distribution:} Graphs with prescribed degree sequence have played an integral part in probabilistic combinatorics over the last decade and have also been heavily used in applied fields, including epidemic modeling \\cite{britton2007graphs,newman2002spread,keeling2005networks}, community detection and clustering \\cite{fortunato2010community}, and so on. In the context of this paper, the critical point for the existence of a giant component was established in \\cite{molloy1995critical}.
When the degree sequence corresponds to trees, under suitable assumptions on the degree sequence, Broutin and Marckert \\cite{broutin-marckert} showed that these trees, appropriately normalized, converge to Aldous's continuum random tree; this result will show up in a number of our proofs.\n\\item {\\bf Critical random graphs: } In the context of continuum scaling limits of maximal components in the critical regime, the only other result for the configuration model was derived in \\cite{bhamidi-broutin-sen-wang}; there, using completely different techniques, critical percolation on the configuration model in the {\\bf supercritical} regime, where the degree distribution has exponential tails, was studied. Associated dynamic versions of this model were constructed and coupled appropriately to Aldous's multiplicative coalescent. A general universality principle also derived in the same paper then resulted in the scaling limits of maximal components at critical percolation. The techniques in that paper, however, do not extend to this work. Here we need to start directly with a {\\bf critical} prescribed degree sequence; the proof techniques in this paper are completely different and use a combinatorial description of the uniform distribution on the space of {\\bf connected} simple graphs with a prescribed degree sequence.\n\n\t\\item \\label{disc:vacant-set} {\\bf Vacant sets and random interlacements on general random graphs:} With regards to vacant sets, as stated, Theorem \\ref{thm:vacant-set-scaling} applies to random regular graphs.
However, as elucidated in Conjecture \\ref{conj:general-degree-vacant}, we believe that analogous results hold for the VSRW problem on $\\mathcal{G}_{n, \\mathbf{d}}$ or $\\CM_n(\\mathbf{d})$ constructed using a general degree sequence satisfying the hypothesis of Conjecture \\ref{conj:general-degree-vacant}. Let us now address the two clarifications described in Remark \\ref{rem:clar}. Assuming one can establish the critical point for VSRW for such graphs,\n\t\\begin{enumeratei}\n\t\t\\item Finite third moment: we conjecture that one can construct a (model-dependent) random variable $D^*_{\\mathrm{vac}}$ (analogous to \\eqref{eqn:D-vac-def} for the random regular graph) and a $\\lambda_{\\mathrm{vac}}^*$, a function of both the distribution of $D$ and $a_0$ (analogous to \\eqref{eqn:lambda-vac-def}), such that the maximal connected components in the critical regime, with graph distances rescaled by $n^{-1\/3}$, converge to $\\boldsymbol{M}^{D_{\\mathrm{vac}}^*}(\\lambda_{\\mathrm{vac}}^*)$. This explicates the ``universality'' phenomenon we expect in this regime.\n\t\t\\item Infinite third moment regime: In \\cite{bhamidi-hofstad-sen}, various results related to scaling limits of maximal components in Aldous's multiplicative coalescent were established in terms of tilted inhomogeneous continuum random trees. One ramification of these results (\\cite[Theorem 1.2]{bhamidi-hofstad-sen}) is the continuum scaling limits of the maximal components in the critical regime of the so-called Norros-Reittu model, where the driving weight sequence is assumed to have heavy tails with exponent $\\tau\\in (3,4)$.
A full description of this random graph model as well as the corresponding limits is beyond the scope of this paper; we refer the interested reader to \\cite{bhamidi-hofstad-sen}. We conjecture that the maximal components in the critical regime for the VSRW model are described by the limit objects derived in \\cite{bhamidi-hofstad-sen}.\n\t\\end{enumeratei}\n\t\n\tLet us now say a few words on how one can go about proving the above conjecture (at least in the finite third moment setting). As will become evident from the proofs, the result follows from the following three ingredients: \\begin{inparaenumi}\n\t\t\\item Theorem \\ref{thm:graphs-given-degree-scaling};\n\t\t\\item a result of Cooper and Frieze \\cite{cooper-frieze} which expresses the annealed measure for the vacant set problem in terms of random graphs with prescribed degree sequence;\n\t\t\\item refined bounds on the degree sequence of the vacant set in the critical scaling window derived in \\cite{cerny-teixeira}.\n\t\\end{inparaenumi}\n\tParts (i) and (ii) continue to hold for the vacant set problem for random walks on general graphs with prescribed degree sequence. Thus, to extend our results to the vacant set problem for general graphs, all one needs is an extension of the refined bounds in (iii) to random walks on general graphs.\n\t\n\n\n\t\\item {\\bf Proof techniques:} The techniques used in this paper differ from the standard techniques used to show convergence of such random discrete objects to limiting random tree-like metric spaces.
One standard technique (used in \\cite{BBG-12,SBSSXW14}) is to construct an exploration process of the discrete object of interest that converges to the exploration process of a continuum random tree (see \\cite{legall-survey,evans-book} for beautiful treatments), encode the ``surplus'' edges as a random point process falling under the exploration, and show that this point process converges to a Poisson point process in the limit.\nIn this work, we use a different technique that requires less work. We first prove convergence of the object of interest in the Gromov-weak topology, essentially showing that for each fixed $k\\geq 2$, the distance matrix constructed from $k$ randomly sampled vertices converges in distribution to the distance matrix constructed from $k$ points appropriately sampled from the limiting structure. This result, coupled with a global lower mass bound, implies via general theory \\cite{athreya-lohr-winter} that convergence occurs in the stronger Gromov-Hausdorff-Prokhorov sense. In the context of critical random graphs, this technique was first used in \\cite{bhamidi-hofstad-sen} to analyze the so-called rank-one critical inhomogeneous random graph.\n\\end{enumeratea}\n\n\n\n\\section{Preliminary constructions}\n\\label{sec:prf-prelim}\n\nIn this section we set up important notation related to plane trees that will be used throughout the proofs.\n\n\n\\subsection{Plane trees: Basic functionals and exploration}\n\\label{sec:plane-tree-def}\nThroughout the sequel we will let $\\mvt$ denote a {\\bf plane rooted} tree. We use $\\rho$ to denote the root and think of it as the original progenitor of the tree, so that every vertex other than the root has a unique parent. 
Write $\\cL(\\mvt)$ for the set of leaves of $\\mvt$. For each non-root vertex $u\\in \\mvt$, let $\\overleftarrow{u}$ denote the parent of $u$. Let $[\\rho, u]$ (resp. $[\\rho, u)$) denote the ancestral line of $u$ including (resp. excluding) $u$. Thus $[\\rho, u)=[\\rho,\\overleftarrow{u}]$. Using the planar embedding, any plane tree $\\mvt$ can be explored in a depth-first manner (our convention is to go to the ``leftmost'' child first). Let $\\prec_{\\mathrm{DF}}$ be the linear order on vertices of a plane tree induced by a depth-first exploration starting from the root, i.e., $x\\prec_{\\mathrm{DF}} y$ if $x$ is explored before $y$ in a depth-first search of the plane tree.\n\n\\begin{defn}[Admissible pairs of leaves]\n\t\\label{def:admissible}\n\tFor leaves $u,v\\in\\cL(\\mvt)$, we say that the {\\bf ordered} pair $(u,v)$ is {\\bf admissible} if\n\t\\[\\overleftarrow{\\overleftarrow{v}}\\in[\\rho, \\overleftarrow{u}),\\ \\text{ and }\\\n\t\\overleftarrow{u}\\prec_{\\mathrm{DF}}\\overleftarrow{v}.\\]\n\tLet $\\mvA(\\mvt)$ denote the set of admissible pairs of $\\mvt$.\n\\end{defn}\nSee Figure \\ref{fig:admiss} for an example of an admissible pair.\n\\begin{figure}[htbp]\n\t\\centering\n\t\\pgfmathsetmacro{\\nodebasesize}{1}\n\t\n\t\t\\begin{tikzpicture}[scale=.25, iron\/.style={circle, minimum size=6mm, inner 
sep=0pt, ball color=red!20}, wat\/.style= {circle, inner sep=3pt, ball color=green!20}\n]\n\t\t \\node (1) [iron] at (0,0) {$\\rho$};\n\t\t \\node (11) [iron] at (-4,5) {$\\overleftarrow{\\overleftarrow{v}}$};\n\t\t \\node (12) [iron] at (4,5) {};\n\t\t \\node (111) [iron] at (-8,10) {$\\overleftarrow{u}$};\n\t\t\t\\node (112) [iron] at (1,10) {$\\overleftarrow{v}$};\n\t\t\t\\node (1111) [wat] at (-12,15) {${u}$};\n\t\t\t\\node (1121) [wat] at (5,15) {${v}$};\n\t\t\t\\draw[blue, very thick] (1)-- (11) -- (111) -- (1111);\n\t\t\t\\draw[blue, very thick] (1) -- (12);\n\t\t\t\\draw[blue, very thick] (11) -- (112) -- (1121);\n\t\t\\end{tikzpicture}\n\t\\caption{An example of an admissible pair of leaves.}\n\t\\label{fig:admiss}\n\\end{figure}\n\tWe introduce a linear order $\\ll$ on $\\mvA(\\mvt)$ as follows: $(u_1,v_1)\\ll (u_2,v_2)$ if either $\\overleftarrow{u}_1\\prec_{\\mathrm{DF}}\\overleftarrow{u}_2$, or if $\\overleftarrow{u}_1=\\overleftarrow{u}_2$ and $\\overleftarrow{v}_1\\prec_{\\mathrm{DF}}\\overleftarrow{v}_2$. 
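To make Definition \\ref{def:admissible} concrete, here is a minimal Python sketch (the child-list encoding and helper names are ours, not the paper's) that computes the depth-first order $\\prec_{\\mathrm{DF}}$ and enumerates the admissible pairs of a small plane tree; the example tree mirrors Figure \\ref{fig:admiss}.

```python
# Our own toy encoding of a plane tree: children[w] lists the children of w
# in planar (left-to-right) order.  Root is 0; vertex 1 plays the role of
# the grandparent of v; u = 5 and v = 6 are leaves hanging from 3 and 4.
children = {0: [1, 2], 1: [3, 4], 2: [], 3: [5], 4: [6], 5: [], 6: []}

def parent_map(children):
    return {c: w for w, cs in children.items() for c in cs}

def dfs_order(children, root=0):
    # depth-first order, exploring the leftmost child first
    order, stack = {}, [root]
    while stack:
        w = stack.pop()
        order[w] = len(order)
        stack.extend(reversed(children[w]))
    return order

def ancestral_line(parent, w, root=0):
    # [rho, w): the strict ancestors of w, root included, w excluded
    line = set()
    while w != root:
        w = parent[w]
        line.add(w)
    return line

def admissible_pairs(children, root=0):
    parent, order = parent_map(children), dfs_order(children, root)
    leaves = [w for w in order if not children[w]]
    pairs = []
    for u in leaves:
        for v in leaves:
            if u == v or parent[v] == root:  # v needs a grandparent
                continue
            if (parent[parent[v]] in ancestral_line(parent, parent[u], root)
                    and order[parent[u]] < order[parent[v]]):
                pairs.append((u, v))
    return pairs

print(admissible_pairs(children))  # [(5, 6)]: the single admissible pair
```

Note that $(6,5)$ is not admissible: the ancestry condition holds, but $\\overleftarrow{5}$ is explored before $\\overleftarrow{6}$ in depth-first order.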
For $u\\in\\cL(\\mvt)$, define\n\\begin{equation}\n\\label{eqn:ftu-atu-def}\n\t\\mvA(\\mvt, u):=\\set{v\\in\\cL(\\mvt)\\ :\\ (u,v)\\in\\mvA(\\mvt)},\\ \\text{ and }\\\n\tf_{\\mvt}(u):=|\\mvA(\\mvt, u)|.\n\\end{equation}\nNote that\n\\begin{equation}\n\\label{eqn:At=sum-ftu}\n|\\mvA(\\mvt)|=\\sum_{u\\in\\cL(\\mvt)} f_{\\mvt}(u).\n\\end{equation}\nFor any plane tree $\\mvt$ and a vertex $u$, define\n\\[\\mvB_1(\\mvt,u)=\\set{v\\ :\\ \\overleftarrow{v}\\in[\\rho,u)},\\ \\text{ and }\\\n\\mvB_2(\\mvt,u)=\\set{v\\ :\\ \\overleftarrow{\\overleftarrow{v}}\\in[\\rho,u)},\\]\nwhere $\\rho$ is the root of 
$\\mvt$. Thus for any $u\\in\\cL(\\mvt)$,\n\\begin{equation}\n\\label{eqn:ft-lessb2}\n\tf_{\\mvt}(u)=|\\mvA(\\mvt, u)|\\le|\\mvB_2(\\mvt, u)|.\n\\end{equation}\nFor any plane tree $\\mvt$ and a vertex $u$, define\n\\[\\mvB_1^-(\\mvt,u)=\\set{v\\ :\\ \\overleftarrow{v}\\in[\\rho,u),\\ u\\prec_{\\mathrm{DF}} v},\\ \\text{ and }\\\n\\mvB_1^+(\\mvt,u)=\\set{v\\ :\\ \\overleftarrow{v}\\in[\\rho,u),\\ v\\prec_{\\mathrm{DF}} u}.\\]\nSince our convention is to explore the children of a vertex from left to right in a depth-first search,\n$\\mvB_1^{-}(\\mvt,u)$ (resp. $\\mvB_1^{+}(\\mvt,u)$) is the collection of vertices that are at distance one from the path $[\\rho,u)$ and lie on the right (resp. left) side of $[\\rho,u)$.\n\n\n\n\nNow fix $k\\ge 1$. 
Define\n\\[\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})=\\bigg\\{\\big\\{(u_1,v_1),\\ldots,(u_k,v_k)\\big\\}\\ \\mid\\ (u_j,v_j)\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})\\text{ and }u_1,v_1,\\ldots,u_k,v_k\\text{ are }2k\\text{ distinct leaves}\\bigg\\}.\\]\nLet $\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})$ be the collection of all such ordered $k$-tuples of admissible pairs. Clearly\n\\[\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_1(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})=\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}),\\ |\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|=k!\\times|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|,\\ \\text{ and }|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|\\le |\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|^k.\\]\nFor later use, define 
$\\mvA(\\mvt)^k = \\otimes_{i=1}^k \\mvA(\\mvt)$ to be the $k$-fold Cartesian product of $\\mvA(\\mvt)$. For a plane tree $\\mvt$ and a vertex $u$, let $\\mathrm{Anc}^{\\scriptscriptstyle (1)}(\\mvt,u)$ be the plane subtree (with the root and $u$ marked) whose vertex set is given by\n\\[V=\\big\\{v\\ \\big|\\ v\\in[\\rho, u],\\ \\text{ or }\\ \\overleftarrow{v}\\in[\\rho, u)\\big\\}.\\]\nLet $\\mathrm{Anc}^{\\scriptscriptstyle(2)}(\\mvt, u)$ be the plane subtree (with the root and $u$ marked) whose vertex set is given by\n\\[V^{\\scriptscriptstyle(2)}=\\big\\{v\\ \\big|\\ v\\in[\\rho, u],\\ \\text{ or }\\ \\overleftarrow{v}\\in[\\rho, u),\\ \\text{ or }\\\n\\overleftarrow{\\overleftarrow{v}}\\in[\\rho, u)\\big\\}.\\]\nGiven a plane tree $\\mvt$, write $\\mvc(\\mvt) = (c_v(\\mvt): v\\in \\mvt)$, where $c_v(\\mvt)$ is the number of children of 
$v$ in $\\mvt$. Further write $\\mvs(\\mvt) = (s_i(\\mvt):i\\geq 0)$ for the {\\bf empirical children distribution (ECD)} of $\\mvt$. Namely,\n \t\\[s_i(\\mvt):= \\#\\set{v\\in \\mvt: c_v(\\mvt) = i}, \\qquad i\\geq 0.\\]\nGiven a sequence of integers $\\mvs = (s_i:i\\geq 0)$, we say that the sequence is a \\emph{tenable} ECD for a tree if there exists a finite plane tree $\\mvt$ with $\\mvs(\\mvt) = \\mvs$. It is easy to check that $\\mvs$ is tenable if and only if $s_i \\geq 0$ for all $i$, $s_0 \\geq 1$, and\n\\[\\sum_{i\\ge 0} s_i = 1+ \\sum_{i\\ge 0} i s_i<\\infty.\\]\nGiven a tenable ECD $\\mvs$, let $\\bT_{\\mvs}$ denote the set of all plane trees having ECD $\\mvs$.\n\nIf $\\mvf$ is a finite forest of plane trees, define the ECD $\\mvs(\\mvf)$ analogously. Note that for any sequence of integers $\\mvs=(s_i: i\\ge 0)$ satisfying\n\\[s_i\\ge 0,\\ \\ \\sum_i i s_i<\\infty,\\ \\text{ and }\\ k(\\mvs):=\\sum_i s_i-\\sum_i i s_i\\geq 1,\\]\nthere exists a forest with ECD $\\mvs$. Such a forest has exactly $k(\\mvs)$ many trees.\nGiven such a sequence $\\mvs$, let $\\bF_{\\mvs}$ denote the set of all plane forests with {\\bf ranked roots} having ECD $\\mvs$. 
Thus each forest in $\\bF_{\\mvs}$ comes with an ordering of the roots, so it makes sense to talk about the ``first'' tree of the forest, the ``second'' tree, etc.\n\n\n\nFinally fix $k\\geq 1$ and let $\\bT_{\\mvs}^{\\scriptscriptstyle(k)}$ denote the set of all pairs $(\\mvt, \\mvx)$, where $\\mvt\\in\\bT_{\\mvs}$ and $\\mvx\\in\\mvA_k(\\mvt)$. For a plane tree $\\mvt$ and $\\mvx=\\big\\{(u_1,v_1),\\ldots,(u_k,v_k)\\big\\}\\in\\mvA_k(\\mvt)$, let $\\cI(\\mvt, \\mvx)$ be the rooted space obtained by adding an edge between $\\overleftarrow{u}_j$ and $\\overleftarrow{v}_j$, and deleting $u_j,v_j$ and the two edges incident to them, for $j=1,\\ldots,k$. (See Figure \\ref{fig:admiss-ident} for an example.) We endow the space $\\cI(\\mvt, \\mvx)$ with the graph distance and the uniform probability measure on all vertices. 
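The operation $\\cI$ can be sketched in code (a toy illustration; the parent-pointer encoding and names are ours, not the paper's): for each admissible pair we add a surplus edge between the two parents and delete the two leaves together with their pendant edges.

```python
# Sketch of I(t, x) on a plane tree given by parent pointers: parent[w] is
# the parent of w, and the root has no entry.  For each admissible pair
# (u, v) we add an edge between parent(u) and parent(v), then delete u, v
# and the two edges incident to them.
def identify(parent, pairs):
    edges = {frozenset((w, p)) for w, p in parent.items()}
    deleted = set()
    for u, v in pairs:
        edges.add(frozenset((parent[u], parent[v])))  # the new surplus edge
        edges.discard(frozenset((u, parent[u])))
        edges.discard(frozenset((v, parent[v])))
        deleted |= {u, v}
    return edges, deleted

# The tree from the admissible-pair figure, with its single admissible
# pair (5, 6): the result is a graph with one cycle through 1, 3, 4.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 3, 6: 4}
edges, deleted = identify(parent, [(5, 6)])
```

Note that each application of $\\cI$ removes two vertices and decreases the edge count by one, so applying it to $k$ admissible pairs of a tree yields a connected graph with exactly $k$ surplus edges.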
Similarly if $\\mvt^{\\mathrm{lab}}$ is a labeled plane tree and $\\mvx\\in\\mvA_k(\\mvt^\\mathrm{lab})$, then $\\cI(\\mvt^\\mathrm{lab}, \\mvx)$ is the {\\bf labeled} graph obtained by following the same construction and retaining the labels.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\pgfmathsetmacro{\\nodebasesize}{1}\n\t\n\t\t\\begin{tikzpicture}[scale=.25, iron\/.style={circle, minimum size=6mm, inner sep=0pt, ball color=red!20}, wat\/.style= {circle, inner sep=3pt, ball color=green!20}\n]\n\t\t \\node (1) [iron] at (0,0) {$\\rho$};\n\t\t \\node (11) [iron] at (-4,5) {$\\overleftarrow{\\overleftarrow{v}}$};\n\t\t \\node (12) [iron] at (4,5) {};\n\t\t \\node (111) [iron] at (-8,10) {$\\overleftarrow{u}$};\n\t\t\t\\node (112) [iron] at (1,10) {$\\overleftarrow{v}$};\n\t\t\t\\draw[blue, very thick] (1)-- (11) -- (111);\n\t\t\t\\draw[blue, very thick] (1) -- (12);\n\t\t\t\\draw[blue, very thick] (11) -- (112);\n\t\t\t\\draw[red, very thick] (111) -- (112);\n\t\t\\end{tikzpicture}\n\t\\caption{An example of the operation $\\cI$ applied on the tree $\\mvt$ and admissible pair $(u,v)$ in Figure \\ref{fig:admiss}. 
}\n\t\\label{fig:admiss-ident}\n\\end{figure}\nFinally for a plane tree $\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}$ and $\\mvx=\\big((u_1,v_1),\\ldots,(u_k,v_k)\\big)\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})^k$, let $\\cQ(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}, \\mvx)$ be the space obtained by identifying $u_j$ and $\\overleftarrow{\\overleftarrow{v}}_j$ for $1\\leq j\\leq k$.\nWe endow the space $\\cQ(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}, \\mvx)$ with the graph distance and the push-forward of the uniform probability measure on $\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}$.\n\n\n\n\n\n\n\n\\section{Properties of plane trees}\\label{sec:proof-plane-tree}\nWe start by describing the setting and assumptions. Assume that for each $m\\geq 1$, $\\mvs^{\\scriptscriptstyle(m)}=(s_i: i\\ge 0)$ (where $\\sum s_i=m$) is a tenable ECD for a tree. 
Analogous to Assumption \\ref{ass:degree}, we make the following assumption on $\\set{\\mvs^{\\scriptscriptstyle(m)}: m\\geq 1}$:\n\n\\begin{ass}\\label{ass:ecd}\nThere exists a pmf $(p_0, p_1,\\ldots)$ with\n\\[ p_0>0,\\quad \\sum_{i\\ge 1}i p_i=1,\\quad\\text{and }\\sum_{i\\ge 1}i^2 p_i<\\infty\\]\nsuch that\n\\[\\frac{s_i}{m}\\to p_i\\ \\text{ for }\\ i\\ge 0,\\ \\text{ and }\\\n\\frac{1}{m}\\sum_{i\\ge 0}i^2 s_i\\to\\sum_{i\\ge 1}i^2 p_i.\\]\nIn particular, $\\Delta_m:=\\max\\set{i: s_i\\neq 0}=o(\\sqrt{m})$.\n\nWe will write $\\sigma^2 = \\sum_i i^2 p_i - 1$ for the variance associated with the pmf $(p_0, p_1,\\ldots)$.\n\\end{ass}\nThen the following was shown in \\cite{broutin-marckert}.\n\\begin{theorem}[{\\cite[Theorem 1]{broutin-marckert}}]\n\\label{thm:broutin-marckert}\nLet $\\cT_{\\mvs^{\\scriptscriptstyle(m)}}$ be a uniform element of $\\bT_{\\mvs^{\\scriptscriptstyle(m)}}$, endowed with the {\\bf uniform probability measure on its $m$ vertices} and viewed as a metric measure space. Under Assumption \\ref{ass:ecd}, as $m\\to\\infty$,\n\\[\\frac{\\sigma}{\\sqrt{m}}\\cT_{\\mvs^{\\scriptscriptstyle(m)}}\\stackrel{\\mathrm{w}}{\\longrightarrow}\\cT_{2\\ve}\\]\nin the GHP sense (see Definition \\ref{def:aldous-crt}).\n\\end{theorem}\n\n\\begin{rem}\nIn \\cite[Theorem 1]{broutin-marckert}, the convergence is stated to hold in the Gromov-Hausdorff sense. However, it is easy to see that the proof in fact implies convergence in the GHP sense.\n\\end{rem}\n\n\nThe following technical lemma collects all the ingredients necessary for proving our main results.\n\\begin{lem}\\label{lem:plane-trees}\nLet $\\cT_{\\mvs^{\\scriptscriptstyle(m)}}$ be a uniform plane tree with ECD $\\mvs^{\\scriptscriptstyle(m)}$. 
Under Assumption \\ref{ass:ecd} the following assertions hold.\n\\begin{enumeratei}\n\\item\\label{lem:leaves-counting-measure}\nFor each $k\\ge 1$, we can construct independent random vectors $(U_m^{(1)}, V_m^{(1)}),\\ldots,(U_m^{(k)}, V_m^{(k)})$ such that $U_m^{(j)}$, $j=1,\\ldots,k$ have uniform distribution on the $s_0$ leaves and $V_m^{(j)}$, $j=1,\\ldots,k$ have uniform distribution on the $m$ vertices, and\n\\[m^{-1\/2}d_{\\cT_{\\mvs^{\\scriptscriptstyle(m)}}}\\big(U_m^{(j)}, V_m^{(j)}\\big)\\stackrel{\\mathrm{P}}{\\longrightarrow} 0\\ \\text{ for }j=1,\\ldots,k.\\]\nIn particular,\n\\[d_{\\GHP}\\big(\\frac{1}{\\sqrt{m}}\\cT_{\\mvs^{\\scriptscriptstyle(m)}}, \\frac{1}{\\sqrt{m}}\\cT_{\\mvs^{\\scriptscriptstyle(m)}}^{\\cL}\\big)\\stackrel{\\mathrm{P}}{\\longrightarrow} 0,\\]\nwhere $\\cT_{\\mvs^{\\scriptscriptstyle(m)}}^{\\cL}$ denotes the metric measure space obtained when the underlying tree is endowed with the uniform probability measure on the set of leaves $\\cL(\\cT_{\\mvs^{\\scriptscriptstyle(m)}})$. (Recall that the measure on the space $\\cT_{\\mvs^{\\scriptscriptstyle(m)}}$ is the uniform probability measure on {\\bf all} vertices.)\n\\item\\label{lem:uniform-integrability}\nRecall the definition of an admissible pair of leaves (Definition \\ref{def:admissible}) and of the function $f_{\\cT_{\\mvs^{\\scriptscriptstyle(m)}}}$ (Equation \\eqref{eqn:ftu-atu-def}). Let $U_m$ be uniformly distributed over $\\cL(\\cT_{\\mvs^{\\scriptscriptstyle(m)}})$. 
Then for every $k\\ge 1$,\n\\[\\sup_m\\ \\E\\left(\\frac{|\\mvA(\\cT_{\\mvs^{\\scriptscriptstyle(m)}})|}{s_0\\sqrt{m}}\\right)^k\n\\le \\sup_m\\ \\E\\left(\\frac{f_{\\cT_{\\mvs^{\\scriptscriptstyle(m)}}}(U_m)}{\\sqrt{m}}\\right)^k\n<\\infty.\\]\n\\item\\label{lem:independent-sampling}\nFor every $k\\ge 1$,\n\\[\\frac{1}{m^{3k\/2}}\\bigg(\\big|\\mvA(\\cT_{\\mvs^{\\scriptscriptstyle(m)}})\\big|^k-\\big|\\mvA_k^{\\ord}(\\cT_{\\mvs^{\\scriptscriptstyle(m)}})\\big|\\bigg)\\stackrel{\\mathrm{P}}{\\longrightarrow} 0.\\]\n\\item\\label{lem:finite-dim-convergence}\nLet $k\\ge 0$ and $\\ell\\ge 1$.\nConsider independent random variables $U_m^{(i)}$, $i=1,\\ldots, k$, and $V_m^{(j)}$, $j=1,\\ldots,\\ell$, where $U_m^{(i)}$, $i=1,\\ldots,k$, have uniform distribution on the $s_0$ leaves of $\\cT_{\\mvs^{\\scriptscriptstyle(m)}}$, and $V_m^{(j)}$, $j=1,\\ldots,\\ell$, have uniform distribution on the $m$ vertices.\n\nIf the subtree of $\\cT_{\\mvs^{\\scriptscriptstyle(m)}}$ spanned by the root $\\rho$, $U_m^{(i)}$, $i=1,\\ldots, k$, and $V_m^{(j)}$, $j=1,\\ldots,\\ell$, does not have $(k+\\ell)$ distinct leaves, then set $\\cT_{\\mvs^{\\scriptscriptstyle(m)}}(\\mvU, \\mvV)=\\partial$.\nOtherwise set $\\cT_{\\mvs^{\\scriptscriptstyle(m)}}(\\mvU, \\mvV)$ to be the subtree of $\\cT_{\\mvs^{\\scriptscriptstyle(m)}}$ spanned by the root $\\rho$, $U_m^{(i)}$, $i=1,\\ldots, k$, and $V_m^{(j)}$, $j=1,\\ldots,\\ell$, and view it as an element of $\\ \\vT_{k+\\ell}^*$ as in Section \\ref{sec:space-of-trees} via the following prescription:\nFor $1\\leq i\\leq k$, attach the ``leaf 
value'' $m^{-1\/2}f_{\\cT_{\\mvs^{\\scriptscriptstyle(m)}}}(U_m^{(i)})$ to $U_m^{(i)}$.\nEndow $[\\rho, U_m^{(i)}]$ with a probability measure by assigning mass $p_x^{\\scriptscriptstyle(i)}$ to each $x\\in[\\rho, U_m^{(i)})$, where\n \\[p_x^{\\scriptscriptstyle(i)}:=\\frac{\\#\\bigg\\{v\\in\\mvA\\big(\\cT_{\\mvs^{\\scriptscriptstyle(m)}}, U_m^{(i)}\\big)\\ \\ \\bigg| \\ \\\n \\overleftarrow{\\overleftarrow{v}}=x\\bigg\\}}{f_{\\cT_{\\mvs^{\\scriptscriptstyle(m)}}}(U_m^{(i)})}.\\]\n(The leaf values and root-to-leaf measures attached to $V_m^{(j)}$, $1\\le j\\le\\ell$, are irrelevant in our proof and can be taken to be zero and $\\delta_{\\{\\rho\\}}$ respectively.)\n\nThen\n\\[\\frac{1}{\\sqrt{m}}\\cT_{\\mvs^{\\scriptscriptstyle(m)}}(\\mvU, \\mvV)\\stackrel{\\mathrm{w}}{\\longrightarrow} \\frac{1}{\\sigma}\\cT_{k,\\ell},\\]\nwhere $\\cT_{k,\\ell}$ is the random element of $\\ \\vT_{k+\\ell}^*$ constructed as follows: The shape of $\\cT_{k,\\ell}$ is that of the subtree of $\\cT_{2\\ve}$ spanned by $(k+\\ell)$ points $x_1,\\ldots,x_{k+\\ell}$ sampled independently according to the mass measure $\\mu_{\\cT_{2\\ve}}$. 
The leaf weight attached to $x_i$ is $p_0\\sigma\\cdot\\mathrm{ht}(x_i)\/2$, $i=1,\\ldots,k$, and the measure on $[\\rho,x_i]$ is the normalized line measure.\n\\item\\label{lem:conditional-expectation}\nThe following joint convergence holds:\n\\[\\bigg(\\frac{1}{\\sqrt{m}}\\cT_{\\mvs^{\\scriptscriptstyle(m)}}, \\frac{|\\mvA(\\cT_{\\mvs^{\\scriptscriptstyle(m)}})|}{s_0\\sqrt{m}}\\bigg)\n\\stackrel{\\mathrm{w}}{\\longrightarrow}\\bigg(\\frac{1}{\\sigma}\\cT_{2\\ve},\\frac{p_0\\sigma}{2}\\int_{\\cT_{2\\ve}}\\mathrm{ht}(x)\\ \\mu_{\\cT_{2\\ve}}(dx)\\bigg)\\]\nwith respect to the product topology induced by the GHP topology on the first coordinate and the Euclidean topology on the second coordinate.\n\\end{enumeratei}\n\\end{lem}\n\\subsection{Proof of Lemma \\ref{lem:plane-trees}\\eqref{lem:leaves-counting-measure}}\nConsider $m$ vertices labeled $1,\\ldots,m$. Let $\\bT_{\\mvs}^{\\mathrm{lab}}$ be the set of plane trees in which the vertices labeled $1,\\ldots,s_0$ are leaves, the vertices labeled $s_0+1,\\ldots, s_0+s_1$ have one child, \\ldots, and the vertices labeled $(m-s_{\\Delta}+1),\\ldots, m$ have $\\Delta$ many children. For convenience, denote by $c_j$ the number of children of the vertex labeled $j$. We will now describe a way of generating a tree uniformly distributed over $\\bT_{\\mvs}^{\\mathrm{lab}}$.\n\nLet $\\pi$ be a uniform permutation on $[m]$. Let\n\\begin{align}\\label{eqn:F}\nS(j):=\\sum_{i=1}^j (c_{\\pi(i)}-1),\\ j=1,\\ldots,m,\\ \\text{ and }\\ F(x) = S(\\lfloor m x \\rfloor), \\quad 0\\leq x\\leq 1.\n\\end{align}\nExtend the definition of $\\pi$ periodically by setting $\\pi(j+m):=\\pi(j)$ for $j\\ge 1$.\nLet $i_0$ denote the location of the first minimum of $\\set{S(j):1\\leq j\\leq m}$ and consider the Vervaat transform w.r.t. 
this location:\n\\begin{align}\\label{eqn:F-exc}\nS^{\\exec}(j) = \\sum_{i=1}^{j} \\big(c_{\\pi(i_0 + i)}-1\\big),\\quad 1\\leq j\\leq m,\\ \\text{ and\\ set }\nF^{\\exec}(x) = S^{\\exec}(\\lfloor mx \\rfloor), \\quad 0\\leq x\\leq 1.\n\\end{align}\nLet $\\cT_{\\mvs}^{\\mathrm{lab}}$ be the labeled tree whose {\\L}ukasiewicz walk is $F^{\\exec}(x),\\ x\\in[0,1]$. Note that $\\cT_{\\mvs}^{\\mathrm{lab}}$ can be constructed sequentially from $F^{\\exec}$, and further every interval $((i-1)\/m, i\/m]$, $1\\leq i\\leq m$, corresponds to a unique vertex in the exploration process. Let $\\cT_{\\mvs}^{\\mathrm{lab}-}$ be the tree obtained by removing all labels from $\\cT_{\\mvs}^{\\mathrm{lab}}$.\n\\begin{lem}\\label{lem:generating-tree-from-permutation}\nLet $\\cT_{\\mvs}^{\\mathrm{lab}}$ and $\\cT_{\\mvs}^{\\mathrm{lab}-}$ be as above.\n\\begin{enumeratea}\n\\item $\\cT_{\\mvs}^{\\mathrm{lab}}\\sim\\mathrm{Unif}(\\bT_{\\mvs}^{\\mathrm{lab}})$.\n\\item $\\cT_{\\mvs}^{\\mathrm{lab}-}\\stackrel{\\mathrm{d}}{=}\\cT_{\\mvs}$, i.e., $\\cT_{\\mvs}^{\\mathrm{lab}-}\\sim\\mathrm{Unif}(\\bT_{\\mvs})$.\n\\end{enumeratea}\n\\end{lem}\n\\noindent{\\bf Proof:} Each of the $(m-1)!$ rotation classes of the $m!$ permutations gives rise to a unique $F^{\\exec}$. 
Since there is a bijection between $\\bT_{\\mvs}^{\\mathrm{lab}}$ and the possible realizations of $F^{\\exec}$,\n\\[\\bP(\\cT_{\\mvs}^{\\mathrm{lab}}=\\mvt^\\mathrm{lab})=\\frac{1}{(m-1)!}=\\frac{1}{|\\bT_{\\mvs}^{\\mathrm{lab}}|}\\ \\text{ for any }\\ \\mvt^\\mathrm{lab}\\in\\bT_{\\mvs}^{\\mathrm{lab}}.\\]\nThis implies\n\\[\\bP\\big(\\cT_{\\mvs}^{\\mathrm{lab}-}=\\mvt\\big)=\\frac{\\prod_{i\\ge 0}s_i!}{(m-1)!}=\\frac{1}{|\\bT_{\\mvs}|}\\ \\text{ for any }\\ \\mvt\\in\\bT_{\\mvs}.\\]\n \\hfill $\\blacksquare$\n\n\nWe now state a useful concentration inequality.\n\\begin{lem}\\label{lem:concentration-uniform-permutation}\nThere exist universal constants $c_1, c_2>0$ such that for any $m\\geq 1$ and probability vector $\\mvp:=(p_1,\\ldots,p_m)$,\n\\[\\bP\\bigg(\\max_{j\\in[m]}\\bigg|\\sum_{i=1}^j p_{\\pi(i)}-\\frac{j}{m}\\bigg|\\geq x\\sigma(\\mvp)\\bigg)\n\\leq\\exp\\big(-c_1 x\\log\\log x\\big),\\ \\text{ for }\\ x\\geq c_2,\\]\nwhere $\\pi$ is a uniform permutation on $[m]$, and $\\sigma(\\mvp):=\\sqrt{p_1^2+\\ldots+p_m^2}$.\n\\end{lem}\n\n\n\\noindent{\\bf Proof:}\nThe result is essentially contained in \\cite[Lemma 4.9]{bhamidi-hofstad-sen}, and we only outline how to extract it from the proof given there.\nWe can work with $\\pi$ generated in the following way: let $X_1,\\ldots,X_m$ be i.i.d. 
Unif$[0,1]$, and set $\\pi(i)=j_i$, where $X_{j_1}<\\ldots<X_{j_m}$. The claimed bound then follows by tracing through the proof of \\cite[Lemma 4.9]{bhamidi-hofstad-sen}. \\hfill $\\blacksquare$\n\n\\begin{lem}\\label{lem:concentration}\nFor each $\\kappa\\ge 1$, let $\\tilde\\mvs=(\\tilde s_i: i\\ge 0)$ be the ECD of a plane forest with $\\tilde m$ vertices, $\\tilde z=k(\\tilde\\mvs)$ ranked roots, and maximum child count $\\tilde\\Delta$ (all depending on $\\kappa$), let $\\cF\\sim\\mathrm{Unif}(\\bF_{\\tilde\\mvs})$, and let $X_i$ denote the number of children of the $i$-th root of $\\cF$. Let $f:\\set{0,1,2,\\ldots}\\to[0,\\infty)$, and assume that as $\\kappa\\to\\infty$,\n\\begin{enumeratei}\n\\item there exists $a>0$ such that $\\sum_{i\\ge 0}f(i)\\tilde s_i\/\\tilde m\\to a$;\n\\item $\\sup_{\\kappa}\\sum_{i\\ge 0}f^2(i)\\tilde s_i\/\\tilde m<\\infty$;\n\\item $\\tilde\\Delta=o(\\tilde z)$; and\n\\item $\\max_{1\\le i\\le\\tilde\\Delta}f(i)=o(\\tilde z)$.\n\\end{enumeratei}\nThen\n\\[\\frac{1}{\\tilde z}\\cdot\\max_{1\\le j\\le\\tilde z}\\big|\\sum_{i=1}^j f(X_i)-aj\\big|\\stackrel{\\mathrm{P}}{\\longrightarrow} 0.\n\\]\n\\end{lem}\n\n\\noindent{\\bf Proof:}\nAn argument similar to the one used in \\eqref{eqn:25} gives\n\\[\n\\bP\\big(X_1=i\\big)\n=\\frac{(\\tilde z-1+i)\\tilde s_{i}}{\\tilde z(\\tilde m-1)}.\n\\]\nHence\n\\begin{align}\\label{eqn:26}\n\\E\\bigg[\\sum_{i=1}^{\\tilde z}\\frac{f(X_i)}{\\tilde z}\\bigg]\n=\\E\\big[f(X_1)\\big]=\\sum_{i=1}^{\\tilde\\Delta}\n\\bigg(\\frac{(\\tilde z-1+i)\\tilde m}{\\tilde z(\\tilde m-1)}\\bigg)\\frac{f(i)\\tilde s_{i}}{\\tilde m}\\to a.\n\\end{align}\nSimilarly, using \\eqref{eqn:25}, a direct computation shows that\n$\\cov\\big(f(X_1), f(X_2)\\big)\\to 0,$\nwhich in turn implies\n\\begin{align}\\label{eqn:27}\n\\var\\bigg[\\sum_{i=1}^{\\tilde z}\\frac{f(X_i)}{\\tilde z}\\bigg]\\to 0.\n\\end{align}\nCombining \\eqref{eqn:26} and \\eqref{eqn:27}, we see that\n\\begin{align}\\label{eqn:29}\n\\sum_{i=1}^{\\tilde z}f(X_i)\/\\tilde z\\stackrel{\\mathrm{P}}{\\longrightarrow} a.\n\\end{align}\n\n\nLet $\\hat S=(\\hat S_0,\\hat S_1,\\ldots)$ denote the empirical distribution of $X_1,\\ldots, X_{\\tilde z}$.\nSince\n\\begin{align*}\n\\bP\\big(X_1=i_1,\\ldots, X_{\\tilde z}=i_{\\tilde z}\\big)=\\frac{|\\bF_{\\mvs-\\hat\\mvs}|}{|\\bF_{\\mvs}|}\n\\end{align*}\nfor any $(i_1,\\ldots,i_{\\tilde z})$ with empirical distribution 
$\\hat\\mvs$,\n\\begin{align}\\label{eqn:17}\n\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\big(X_1=i_1,\\ldots, X_{\\tilde z}=i_{\\tilde z}\\ \\big|\\ \\hat S=\\hat\\mvs\\big)=\\frac{\\prod_{i\\geq 0}\\hat s_i!}{\\tilde z!}\n\\end{align}\nDefine $y_1,\\ldots, y_{\\tilde z}$ as follows:\n\\begin{align*}\ny_1=\\ldots=y_{\\hat S_0}=0,\\ \\ y_{\\hat S_0+1}=\\ldots=y_{\\hat S_0+\\hat S_1}=1,\\ldots.\n\\end{align*}\nThen conditional on $\\hat S$, the distribution \\eqref{eqn:17} can be generated by uniformly permuting $y_1,\\ldots,y_{\\tilde z}$ and removing the $y$ labels. Set\n\\[\\hat\\mvp:=(\\hat p_1,\\ldots,\\hat p_{\\tilde z}),\\ \\text{ where }\\\n\\hat p_i=\\frac{f(y_i)}{\\sum_{j=1}^{\\tilde z}f(y_j)}.\\]\nFrom Lemma \\ref{lem:concentration-uniform-permutation}, for a uniform permutation $\\pi$ (independent of $\\hat S$) on $\\tilde z$ elements and $\\varepsilon>0$,\n\\begin{align}\\label{eqn:28}\n\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\bigg(\\max_{1\\le j\\le\\tilde z}\\bigg|\\sum_{i=1}^j \\hat p_{\\pi(i)}-\\frac{j}{\\tilde z}\\bigg|\\ge\\varepsilon\\ \\bigg|\\ \\hat S\\bigg)\n\\le \\exp\\bigg(-c_1\\bigg(\\frac{\\varepsilon}{\\sigma(\\hat\\mvp)}\\bigg)\\log\\log\\bigg(\\frac{\\varepsilon}{\\sigma(\\hat\\mvp)}\\bigg)\\bigg)\\\n\\text{ on }\\ \\big\\{c_2\\sigma(\\hat\\mvp)\\leq\\varepsilon\\big\\}.\n\\end{align}\nSince\n\\[\\sigma(\\hat\\mvp)^2\\le \\hat p_{\\max}\n=\\frac{\\tilde z}{\\sum_{1\\le i\\le\\tilde z}f(X_i)}\\times\\max_{1\\le i\\le\\tilde\\Delta}\\frac{f(i)}{\\tilde z}\n\\stackrel{\\mathrm{P}}{\\longrightarrow} 0,\\]\nwe conclude from \\eqref{eqn:28} that\n\\[\n\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\bigg(\\max_{1\\le j\\le\\tilde z}\\bigg|\n\\frac{\\sum_{i=1}^j f(X_i)}{\\sum_{k=1}^{\\tilde z}f(X_k)}-\\frac{j}{\\tilde z}\\bigg|\\ge\\varepsilon\\bigg)\\to 0,\n\\]\nwhich combined with \\eqref{eqn:29} yields the claim.\n \\hfill 
$\\blacksquare$\n\n\\medskip\n\n\n\\noindent{\\bf Proof of Lemma \\ref{lem:line-measure}:}\nIt follows from \\cite[Proposition 5]{broutin-marckert} that\n\\begin{align}\\label{eqn:31}\n\\frac{1}{\\sqrt{m}}\\ \\max_{k\\le\\mathrm{ht}(U_m)}\\bigg|\\#\\bigg\\{v\\ \\big|\\\n v\\in\\mvB_1^-(\\cT_{\\mvs}, U_m),\\ \\mathrm{ht}\\big(\\overleftarrow{v}\\big)\\le k\\bigg\\}-\\frac{\\sigma^2 k}{2}\\bigg|\\stackrel{\\mathrm{P}}{\\longrightarrow} 0,\n\\end{align}\nand a similar statement is true with $\\mvB_1^+$ by symmetry.\nNow recall from \\eqref{eqn:30} that conditional on $\\mathrm{Anc}^{\\scriptscriptstyle (1)}(\\cT_{\\mvs},U_m)$, the forest $\\cF$ (defined around \\eqref{eqn:forest}) is uniformly distributed over $\\bF_{\\tilde\\mvs}$, where $\\tilde\\mvs$ is the ECD of the vertices in $[\\rho, U_m]^c$.\nWe make the following observations about the forest $\\cF$:\n\\begin{enumeratei}\n\\item The ECD $\\tilde\\mvs$ of $\\cF$ satisfies\n\\[\ns_i-\\mathrm{ht}(U_m)\\leq\\tilde s_i\\leq s_i,\n\\]\nwhere $\\mathrm{ht}(U_m)=\\Theta_P(\\sqrt{m})$ by Theorem \\ref{thm:broutin-marckert}.\n\\item Similarly, the number of vertices $\\tilde m$ (say) in $\\cF$ satisfies\n\\[\nm-\\mathrm{ht}(U_m)\\leq\\tilde m\\leq m.\n\\]\n\\item \\eqref{eqn:31} and its analogue for $\\mvB_1^+$ combined with the fact $\\mathrm{ht}(U_m)=\\Theta_P(\\sqrt{m})$ shows that the number of roots of $\\cF$, namely $|\\mvB(\\cT_{\\mvs}, U_m)|$ satisfies\n\\[|\\mvB(\\cT_{\\mvs}, U_m)|=\\Theta_P(\\sqrt{m}).\\]\n\\end{enumeratei}\nThus, when $\\mvs$ satisfies Assumption \\ref{ass:ecd}, the ECD of $\\cF$ and the function $f(i)=i$ satisfy the assumptions of Lemma \\ref{lem:concentration} with $a=1$. 
This combined with \\eqref{eqn:31} gives\n\\begin{align}\\label{eqn:32}\n\\frac{1}{\\sqrt{m}}\\ \\max_{k\\le\\mathrm{ht}(U_m)}\\bigg|\\#\\bigg\\{v\\ \\big|\\\n \\overleftarrow{v}\\in\\mvB_1^-(\\cT_{\\mvs}, U_m),\\ \\mathrm{ht}\\big(\\overleftarrow{\\overleftarrow{v}}\\big)\\le k\\bigg\\}\n -\\frac{\\sigma^2 k}{2}\\bigg|\\stackrel{\\mathrm{P}}{\\longrightarrow} 0,\n\\end{align}\nand a similar statement is true with $\\mvB_1^+$.\n\n\nLet $\\cF^{\\scriptscriptstyle(2)}$ be the plane forest with ranked roots obtained by deleting the vertices of $\\mathrm{Anc}^{\\scriptscriptstyle (1)}(\\cT_{\\mvs},U_m)$ and the edges incident to them, rooting the resulting trees at the vertices of $\\mvB_2(\\cT_{\\mvs}, U_m)$, and ranking them in the depth-first order.\nThen conditional on $\\mathrm{Anc}^{\\scriptscriptstyle(2)}(\\cT_{\\mvs},U_m)$, $\\cF^{\\scriptscriptstyle(2)}$ is again uniformly distributed over the set of plane forests with ranked roots with the remaining children sequence.\nFurther, reasoning similar to the above shows that the ECD of $\\cF^{\\scriptscriptstyle(2)}$ and the function $f(i)=\\mathds{1}\\set{i=0}$ satisfy the assumptions of Lemma \\ref{lem:concentration} with $a=p_0$.\n\nApplying Lemma \\ref{lem:concentration} to the forest $\\cF^{\\scriptscriptstyle(2)}$ and the function $f(i)=\\mathds{1}\\set{i=0}$, and combining this with \\eqref{eqn:32} yields the claim in Lemma \\ref{lem:line-measure}.\n \\hfill $\\blacksquare$\n\n\\subsection{Proof of Lemma \\ref{lem:plane-trees}\\eqref{lem:conditional-expectation}}\nFirst recall from Lemma \\ref{lem:plane-trees} \\eqref{lem:leaves-counting-measure} that $\\cT_{\\mvs}^{\\cL}$ denotes the metric measure space obtained when the underlying tree is endowed with the uniform probability measure on $\\cL(\\cT_{\\mvs})$. 
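The key quantitative input in the argument below is Chebyshev's inequality for an average of $k$ i.i.d. samples, which produces the $C\/(\varepsilon^2 k)$ error bounds. A minimal numerical sketch of this rate (using a toy stand-in distribution, Uniform$\{0,\ldots,9\}$, rather than the actual leaf heights):

```python
import random

def deviation_prob(k, eps, trials=2000, seed=0):
    """Estimate P(|average of k i.i.d. samples - mean| >= eps) for the toy
    distribution Uniform{0,...,9} (mean 4.5, variance 8.25).  Chebyshev's
    inequality bounds this probability by 8.25 / (eps**2 * k)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        avg = sum(rng.randrange(10) for _ in range(k)) / k
        if abs(avg - 4.5) >= eps:
            hits += 1
    return hits / trials
```

As $k$ grows the estimated probability decays at least like $1\/k$, matching the $C\/(\varepsilon^2 k)$ bounds used twice in the proof below.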
Let $U_m^{\\scriptscriptstyle(i)}$, $1\\leq i\\leq k$, and $x_1,\\ldots,x_k$ be as in Lemma \\ref{lem:plane-trees}\\eqref{lem:finite-dim-convergence}.\nLemma \\ref{lem:plane-trees}\\eqref{lem:leaves-counting-measure} together with Theorem \\ref{thm:broutin-marckert} shows that for all $k\\geq 1$,\n\\begin{align*}\n&\\left(\\frac{1}{\\sqrt{m}}\\cT_{\\mvs},\\\n\\frac{1}{\\sqrt{m}}\\cT_{\\mvs}^{\\cL},\\\n\\frac{1}{k\\sqrt{m}}\\bigg(\\mathrm{ht}(U_m^{\\scriptscriptstyle(1)})+\\ldots+\\mathrm{ht}(U_m^{\\scriptscriptstyle(k)})\\bigg)\\right)\\\\\n&\\hskip30pt\n\\stackrel{\\mathrm{w}}{\\longrightarrow}\\left(\\frac{1}{\\sigma}\\cT_{2\\ve},\\\n\\frac{1}{\\sigma}\\cT_{2\\ve},\\\n\\frac{1}{k\\sigma}\\bigg(\\mathrm{ht}(x_1)+\\ldots+\\mathrm{ht}(x_k)\\bigg)\n\\right)\n\\end{align*}\nwith respect to the product topology induced by the GHP topology on the first two coordinates and the Euclidean topology on $\\bR$ in the third coordinate.\nThus by Lemma \\ref{lem:line-measure},\n\\begin{align}\\label{eqn:33}\n\\left(\\frac{1}{\\sqrt{m}}\\cT_{\\mvs},\\\n\\frac{1}{k\\sqrt{m}}\\bigg(f_{\\cT_{\\mvs}}(U_m^{\\scriptscriptstyle(1)})+\\ldots+f_{\\cT_{\\mvs}}(U_m^{\\scriptscriptstyle(k)})\\bigg)\\right)\n\\stackrel{\\mathrm{w}}{\\longrightarrow}\\left(\\frac{1}{\\sigma}\\cT_{2\\ve},\\\n\\frac{p_0\\sigma}{2k}\\bigg(\\mathrm{ht}(x_1)+\\ldots+\\mathrm{ht}(x_k)\\bigg)\n\\right).\n\\end{align}\nNow for any $\\varepsilon>0$ and $k\\geq 
1$,\n\\begin{align*}\n&\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\bigg(\\bigg|\n\\frac{1}{k\\sqrt{m}}\\bigg(f_{\\cT_{\\mvs}}(U_m^{\\scriptscriptstyle(1)})+\\ldots+f_{\\cT_{\\mvs}}(U_m^{\\scriptscriptstyle(k)})\\bigg)-\\frac{|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\cT_{\\mvs})|}{s_0\\sqrt{m}}\n\\bigg|\\geq\\varepsilon\n\\bigg)\\\\\n&\\hskip20pt\n=\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\bigg(\\bigg|\n\\frac{1}{k\\sqrt{m}}\\bigg(f_{\\cT_{\\mvs}}(U_m^{\\scriptscriptstyle(1)})+\\ldots+f_{\\cT_{\\mvs}}(U_m^{\\scriptscriptstyle(k)})\\bigg)-\n\\frac{\\E\\big(f_{\\cT_{\\mvs}}(U_m^{\\scriptscriptstyle(1)})\\big|\\cT_{\\mvs}\\big)}{\\sqrt{m}}\n\\bigg|\\geq\\varepsilon\n\\bigg)\\\\\n&\\hskip40pt\n\\leq\\frac{1}{\\varepsilon^2 k m}\\E\\bigg[\\var\\bigg(f_{\\cT_{\\mvs}}(U_m^{\\scriptscriptstyle(1)})\\big|\\cT_{\\mvs}\\bigg)\\bigg]\n\\leq\\frac{1}{\\varepsilon^2 k m}\\E\\bigg[f_{\\cT_{\\mvs}}(U_m^{\\scriptscriptstyle(1)})^2\\bigg]\\leq\\frac{C}{\\varepsilon^2k},\n\\end{align*}\nwhere the first equality holds because of \\eqref{eqn:At=sum-ftu} and the last step uses Lemma \\ref{lem:plane-trees}\\eqref{lem:uniform-integrability}.\nBy a similar argument, we can show that\n\\begin{align*}\n\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\bigg(\\bigg|\n\\frac{1}{k}\\big(\\mathrm{ht}(x_1)+\\ldots+\\mathrm{ht}(x_k)\\big)-\n\\int_{\\cT_{2\\ve}}\\mathrm{ht}(x)\\mu_{\\cT_{2\\ve}}(dx)\\bigg|\n\\geq\\varepsilon\n\\bigg)\\leq\\frac{C}{\\varepsilon^2k}.\n\\end{align*}\nThese observations combined with \\eqref{eqn:33} yield\n\\[\\bigg(\\frac{1}{\\sqrt{m}}\\cT_{\\mvs}, \\frac{|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\cT_{\\mvs})|}{s_0\\sqrt{m}}\\bigg)\n\\stackrel{\\mathrm{w}}{\\longrightarrow}\\bigg(\\frac{1}{\\sigma}\\cT_{2\\ve},\\frac{p_0\\sigma}{2}\\int_{\\cT_{2\\ve}}\\mathrm{ht}(x)\\ 
\\mu_{\\cT_{2\\ve}}(dx)\\bigg),\\]\nwhich is the desired result. \\hfill $\\blacksquare$\n\n\n\n\\section{Asymptotics for connected graphs with given degree sequence}\n\\label{sec:proof-conn}\nThe aim of this section is to prove Theorems \\ref{prop:condition-on-connectivity} and \\ref{thm:number-of-connected-graphs}.\nWe will start with a construction of the uniform measure on the space of simple connected graphs with a prescribed degree sequence and a fixed number $k$ of surplus edges (Lemma \\ref{lem:alternate-construction}). Using this lemma together with the technical Lemma \\ref{lem:plane-trees}, we will then complete the proofs of the above two theorems.\n\n\n\n\\subsection{Construction of connected graphs with given degree sequence}\nLet $\\widetilde{\\mathbf{d}} = \\widetilde{\\mathbf{d}}^{\\scriptscriptstyle(\\widetilde m)}$ be as in Theorem \\ref{prop:condition-on-connectivity}, and recall that $\\mathcal{G}_{\\widetilde{\\mathbf{d}}}^{\\con}$ represents a random connected graph with degree sequence $\\widetilde{\\mathbf{d}}$ sampled uniformly from $\\mathbb{G}_{\\widetilde{\\mathbf{d}}}^{\\con}$. Recall also that under Assumption \\ref{ass:degree}, $\\widetilde d_1 = 1$. 
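The passage from the degree sequence $\widetilde{\mathbf{d}}$ to a children sequence, carried out below in \eqref{eqn:children}, can be sketched in code (an illustrative helper, not part of the paper's construction):

```python
def children_sequence(d_tilde, k):
    """Build the children sequence c of (eqn:children) from a degree
    sequence d_tilde with d_tilde[0] = 1 (the degree of vertex 1) and
    k surplus edges: drop vertex 1, subtract 1 from each remaining
    degree, and append 2k zeros (one leaf pair per surplus edge)."""
    assert d_tilde[0] == 1
    return [d - 1 for d in d_tilde[1:]] + [0] * (2 * k)

# With m_tilde = len(d_tilde), the sequence has m = m_tilde - 1 + 2k
# entries, and for a valid input it sums to m - 1, i.e. it is the
# children sequence of a plane tree on m vertices.
```

For example, the degree sequence $(1,3,2,2,2)$ with $k=1$ surplus edge gives the children sequence $(2,1,1,1,0,0)$, which sums to $5=m-1$ with $m=6$.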
Consider the remaining vertices $\\set{2,\\ldots, \\widetilde m}$, and form the children sequence $\\mvc = (c_j: 2\\leq j \\leq (\\widetilde m+2k))$ via\n\\begin{equation}\\label{eqn:children}\n\\mvc:=\\big(\\widetilde d_2-1,\\ldots,\\widetilde d_{\\widetilde m}-1, 0,\\ldots,0\\big)\\ \\ \\ (\\text{with } 2k\\text{ zeros at the end}).\n\\end{equation}\nBy the hypothesis of Theorem \\ref{prop:condition-on-connectivity},\n\\begin{align}\\label{eqn:18}\n\\sum_{j=2}^{\\widetilde m+2k} c_j = (\\widetilde m-1+2k) - 1,\n\\end{align}\nand thus $\\mvc$ represents a valid children sequence for a tree on\n\\begin{align}\\label{eqn:def-m}\nm:=\\widetilde m-1+2k\n\\end{align}\nvertices.\nLet $\\mvs=\\mvs^{\\scriptscriptstyle(m)}=(s_0, s_1,\\ldots)$ be the empirical distribution of $\\mvc$, i.e.,\n\\begin{align}\\label{eqn:35}\ns_i=\\#\\big\\{j\\ :\\ c_j=i\\big\\}.\n\\end{align}\n\nSample $(\\widetilde\\cT_{\\mvs}, \\widetilde\\mvX)$ from $\\bT_{\\mvs}^{\\scriptscriptstyle(k)}$ uniformly. Assume that $\\widetilde\\mvX=\\{(u_1, v_1),\\ldots, (u_k,v_k)\\}$, where\n\\[(u_1, v_1)\\ll\\ldots\\ll (u_k, v_k).\\]\nLabel $u_j$ as $\\widetilde m+2j-1$ and $v_j$ as $\\widetilde m+2j$, $1\\leq j\\leq k$. Label the other $\\widetilde m-1$ vertices of $\\widetilde\\cT_{\\mvs}$ uniformly using labels $2,\\ldots, \\widetilde m$ so that in the resulting labeled plane tree $j$ has $\\widetilde d_j-1$ many children. (Thus there are $(s_0-2k)!\\times\\prod_{i\\geq 1} s_i!$ many ways of obtaining such a labeling of $\\widetilde\\cT_{\\mvs}$). Call this labeled, plane tree $\\widetilde\\cT_{\\mvs}^\\mathrm{lab}$. Construct the graph $\\cI(\\widetilde\\cT_{\\mvs}^\\mathrm{lab}, \\widetilde\\mvX)$, attach a vertex labeled $1$ to the root, and then forget about the planar order and the root. 
Let $\\mathcal{G}}\\newcommand{\\cH}{\\mathcal{H}}\\newcommand{\\cI}{\\mathcal{I}$ be the resulting graph.\n\\begin{lem}\\label{lem:alternate-construction}\nLet $\\mathcal{G}}\\newcommand{\\cH}{\\mathcal{H}}\\newcommand{\\cI}{\\mathcal{I}$ be the random graph resulting from the above construction. Then $\\mathcal{G}}\\newcommand{\\cH}{\\mathcal{H}}\\newcommand{\\cI}{\\mathcal{I}\\sim\\Unif(\\mathbb{G}}\\newcommand{\\bH}{\\mathbb{H}}\\newcommand{\\bI}{\\mathbb{I}_{\\widetilde\\mathbf{d}}\\newcommand{\\ve}{\\mathbf{e}}\\newcommand{\\vf}{\\mathbf{f}}^{\\con})$, i.e., $\\mathcal{G}}\\newcommand{\\cH}{\\mathcal{H}}\\newcommand{\\cI}{\\mathcal{I}\\stackrel{\\mathrm{d}}{=}\\mathcal{G}}\\newcommand{\\cH}{\\mathcal{H}}\\newcommand{\\cI}{\\mathcal{I}_{\\widetilde\\mathbf{d}}\\newcommand{\\ve}{\\mathbf{e}}\\newcommand{\\vf}{\\mathbf{f}}^{\\con}$.\n\\end{lem}\n\\noindent{\\bf Proof:} Fix a graph $G\\in\\mathbb{G}}\\newcommand{\\bH}{\\mathbb{H}}\\newcommand{\\bI}{\\mathbb{I}_{\\widetilde{\\mathbf{d}}\\newcommand{\\ve}{\\mathbf{e}}\\newcommand{\\vf}{\\mathbf{f}}}^{\\con}$. Root the graph at the only neighbor of $1$ (recall that $\\tilde d_1=1$), and remove the vertex $1$ and the edge incident to it. Suppose $H$ is the resulting rooted, labeled graph. We can construct a labeled plane tree from $H$ in the following way:\n\\begin{enumeratei}\n\\item Call the root $u_1$. Set the status of all its neighbors as ``discovered,'' and set the status of $u_1$ as ``explored.'' Shuffle all its neighbors uniformly and go to the ``leftmost'' neighbor and call it $u_2$.\n\\item When we are at $u_k$ ($k\\ge 2$), search for all its neighbors (other than $u_{k-1}$) in the graph at that time. 
If none of these neighbors have been discovered previously, then shuffle them uniformly, set their status as ``discovered,'' set the status of $u_k$ as ``explored,'' and go to the leftmost neighbor and call it $u_{k+1}$.\n\n If some of these neighbors have been previously discovered, then these edges create surplus. Suppose we have found $\\ell_0$ many surplus edges before exploring $u_k$, and at $u_k$ we found $\\ell_1$ many new surplus edges $e_1,\\ldots,e_{\\ell_1}$.\n Assume that $e_j=(u_k, y_j)$ and $y_1\\prec_{\\mathrm{DF}}\\ldots\\prec_{\\mathrm{DF}}y_{\\ell_1}$. For $j=1,\\ldots,\\ell_1$, delete the edge $e_j$, and create two leaves labeled $\\widetilde m+2\\ell_0+2j-1$ and $\\widetilde m+2\\ell_0+2j$, where $u_k=\\overleftarrow{\\widetilde m+2\\ell_0+2j-1}$ (i.e., $u_k$ is the {\\bf parent} of leaf labeled $\\widetilde m+2\\ell_0+2j-1$) and similarly $y_j=\\overleftarrow{\\widetilde m+2\\ell_0+2j}$.\n Shuffle the neighbors of $u_k$ uniformly (including the newly created leaves), set their status as ``discovered,'' set the status of $u_k$ as ``explored,'' and move to the leftmost neighbor and call it $u_{k+1}$. (Note that we do {\\bf not} set the status of $\\widetilde m+2\\ell_0+2j$, $j=1,\\ldots,\\ell_1$ as discovered.)\n\n If $u_k$ has no neighbors other than $u_{k-1}$, then go to the next (in the depth-first order) discovered but unexplored vertex and call it $u_{k+1}$.\n\\end{enumeratei}\nLet $\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}^{\\mathrm{lab}}$ be the resulting labeled plane tree and set\n$\\mvx=\\{(\\widetilde m+1,\\widetilde m+2),\\ldots,(\\widetilde m+2k-1,\\widetilde m+2k)\\}$.\nNote that $(\\widetilde m+2\\ell_0+2j-1, \\widetilde m+2\\ell_0+2j)$ is an admissible pair for $1\\leq j\\leq k$. Note also that the children sequence of $\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}^{\\mathrm{lab}}$ is always $\\mvc$ defined in \\eqref{eqn:children}. 
Thus $(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}^\\mathrm{lab},\\mvx)$ is a (labeled) element of $\\bT_{\\mvs}^{(k)}$. Let $\\mathrm{DF}(H)$ be the set of all labeled elements of $\\bT_{\\mvs}^{(k)}$ one can obtain in this way. Then\n\\[\\big|\\mathrm{DF}(H)\\big|=\\prod_{j=2}^{\\widetilde m}(\\tilde d_j-1)!.\\]\n\nNow\n\\[\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\big((\\widetilde\\cT_{\\mvs}^\\mathrm{lab}, \\widetilde\\mvX)=(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v},\\mvx)\\big)=\\frac{1}{|\\bT_{\\mvs}^{\\scriptscriptstyle(k)}|}\n\\times\\frac{1}{(s_0-2k)!\\times\\prod_{i\\geq 1} s_i!}\n\\ \\text{ for every }\\ (\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v},\\mvx)\\in\\mathrm{DF}(H).\\]\nThus\n\\begin{align}\\label{eqn:34}\n\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\big(\\cI(\\widetilde\\cT_{\\mvs}^\\mathrm{lab}, \\widetilde\\mvX)=H\\big)\n&=\\sum_{(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v},\\mvx)\\in\\mathrm{DF}(H)}\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\big((\\widetilde\\cT_{\\mvs}^\\mathrm{lab}, \\widetilde\\mvX)=(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v},\\mvx)\\big)\\notag\\\\\n&=\\frac{\\prod_{j=2}^{\\widetilde m}(\\tilde d_j-1)!}{|\\bT_{\\mvs}^{\\scriptscriptstyle(k)}|}\\times\\frac{1}{(s_0-2k)!\\times\\prod_{i\\geq 1} s_i!}.\n\\end{align}\nSince this probability is constant and the map from $G$ to $H$ is a bijection, we get the desired result. 
\\hfill $\\blacksquare$\n\\subsection{Proof of Theorem \\ref{prop:condition-on-connectivity}}\nLet $\\mvs=\\mvs^{\\scriptscriptstyle(m)}$ be as in \\eqref{eqn:35}.\nNote that when $\\widetilde{\\mathbf{d}}^{\\scriptscriptstyle(\\widetilde m)}$ satisfies Assumption \\ref{ass:degree}, $\\mvs^{\\scriptscriptstyle(m)}$ satisfies Assumption \\ref{ass:ecd} with limiting p.m.f. $(p_i : i\\geq 0)$, where\n\\[p_i:=\\widetilde p_{i+1},\\ \\ i=0,1,\\ldots.\\]\nIn view of Lemma \\ref{lem:alternate-construction}, it is enough to prove the result for $\\widetilde m^{-1\/2}\\cI(\\widetilde\\cT_{\\mvs}, \\widetilde\\mvX)$. Recall the definition of $\\cQ(\\cdot, \\cdot)$ from Section \\ref{sec:plane-tree-def}. Then\n\\[d_{\\GHP}\\bigg(\\frac{1}{\\sqrt{\\widetilde m}}\\cI(\\widetilde\\cT_{\\mvs},\n\\widetilde\\mvX),\\frac{1}{\\sqrt{\\widetilde m}}\\cQ(\\widetilde\\cT_{\\mvs}, \\widetilde\\mvX)\\bigg)\\leq \\frac{5k}{\\sqrt{\\widetilde m}}.\\]\nThus it is enough to prove the result for the space $m^{-1\/2}\\cQ(\\widetilde\\cT_{\\mvs}, \\widetilde\\mvX)$ (recall the definition of $m$ from \\eqref{eqn:def-m}).\n\nRecall the various notions of weak convergence on the space of metric measure spaces defined in Section \\ref{sec:notation}. We will first prove convergence of $m^{-1\/2}\\cQ(\\widetilde\\cT_{\\mvs}, \\widetilde\\mvX)$ in the Gromov-weak topology by making use of a technique from \\cite{bhamidi-hofstad-sen}, and then strengthen it to convergence in the GHP sense. Let $\\Phi$ and $\\phi$ be as in \\eqref{eqn:polynomial-func-def}. 
Then\n\\begin{align}\\label{eqn:1}\n\\E\\left(\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\widetilde\\cT_{\\mvs}, \\widetilde\\mvX\\big)\\bigg)\\right)\n&=\\frac{\\sum_{(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v},\\mvx)\\in\\bT_{\\mvs}^{\\scriptscriptstyle(k)}}\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v},\\mvx\\big)\\bigg)}{|\\bT_{\\mvs}^{\\scriptscriptstyle(k)}|}\\notag\\\\\n&=\\frac{\\sum_{\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}\\in\\bT_{\\mvs}}\\sum_{\\mvx\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})}\n\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v},\\mvx\\big)\\bigg)\\big\/\\big(|\\bT_{\\mvs}|\\cdot s_0^k m^{k\/2}\\big)}{\\sum_{\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}\\in\\bT_{\\mvs}}|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|\\big\/\\big(|\\bT_{\\mvs}|\\cdot s_0^k m^{k\/2}\\big)}\\notag\\\\\n&=\\frac{\\sum_{\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}\\in\\bT_{\\mvs}}\\sum_{\\mvx\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})}\n\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v},\\mvx\\big)\\bigg)\\big\/\\big(|\\bT_{\\mvs}|\\cdot s_0^k 
m^{k\/2}\\big)}{\\sum_{\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}\\in\\bT_{\\mvs}}|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|\\big\/\\big(|\\bT_{\\mvs}|\\cdot s_0^k m^{k\/2}\\big)}.\n\\end{align}\n\nNext,\n\\begin{align}\\label{eqn:4}\n\\frac{\\sum_{\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}\\in\\bT_{\\mvs}}\\sum_{\\mvx\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})}\n\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v},\\mvx\\big)\\bigg)}{|\\bT_{\\mvs}|\\cdot s_0^k m^{k\/2}}\n&=\\E\\bigg[\\sum_{\\mvx\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\cT_{\\mvs})}\n\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\cT_{\\mvs},\\mvx\\big)\\bigg)\\frac{1}{s_0^km^{k\/2}}\\bigg]\\\\\n&=\\E\\bigg[\\sum_{\\mvx\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\cT_{\\mvs})^k}\n\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\cT_{\\mvs},\\mvx\\big)\\bigg)\\frac{1}{s_0^km^{k\/2}}\\bigg]+o(1),\\notag\n\\end{align}\nwhere the second equality follows from Lemma \\ref{lem:plane-trees} \\eqref{lem:uniform-integrability} and \\eqref{lem:independent-sampling}. 
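Combining \eqref{eqn:1} and \eqref{eqn:4}, everything reduces to expectations over the uniform plane tree $\cT_{\mvs}$ alone; schematically (this is only a restatement of the two displays above, not an additional estimate),

```latex
\E\left[\Phi\bigg(\frac{1}{\sqrt{m}}\cQ\big(\widetilde\cT_{\mvs},\widetilde\mvX\big)\bigg)\right]
=\frac{\E\Big[\sum_{\mvx\in\boldsymbol{A}(\cT_{\mvs})^k}
  \Phi\Big(\frac{1}{\sqrt{m}}\cQ\big(\cT_{\mvs},\mvx\big)\Big)\,s_0^{-k}m^{-k/2}\Big]+o(1)}
 {\E\Big[\big|\boldsymbol{A}_k^{\ord}(\cT_{\mvs})\big|\,s_0^{-k}m^{-k/2}\Big]},
```

so it suffices to identify the limits of the numerator and the denominator separately, which is what the remainder of the argument does.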
Writing\n\\[\\mvx=\\big((u_1,y_1),\\ldots,(u_k,y_k)\\big), \\\n\\sum\\displaystyle_1=\\sum_{\\substack{u_1\\in\\cL(\\cT_{\\mvs})\\\\ \\vdots\\\\ u_k\\in\\cL(\\cT_{\\mvs})}},\n\\ \\text{ and }\\\n\\sum\\displaystyle_2=\\sum_{\\substack{y_1\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\cT_{\\mvs},u_1)\\\\ \\vdots\\\\ y_k\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\cT_{\\mvs},u_k)}},\n\\]\nwe note that\n\\begin{align}\\label{eqn:5}\n\\frac{1}{s_0^km^{k\/2}}\\sum_{\\mvx\\in\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}(\\cT_{\\mvs})^k}\n\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\cT_{\\mvs},\\mvx\\big)\\bigg)\n&=\\frac{1}{s_0^km^{k\/2}}\n\\sum\\displaystyle_1\\sum\\displaystyle_2\\ \\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\cT_{\\mvs},\\mvx\\big)\\bigg)\\\\\n&=\n\\sum\\displaystyle_1\\bigg(\\frac{1}{s_0^k}\\bigg)\\prod_{i=1}^k \\bigg(\\frac{f_{\\cT_{\\mvs}}(u_i)}{\\sqrt{m}}\\bigg)\n\\sum\\displaystyle_2 \\prod_{i=1}^k \\bigg(\\frac{1}{f_{\\cT_{\\mvs}}(u_i)}\\bigg)\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\cT_{\\mvs},\\mvx\\big)\\bigg).\\notag\n\\end{align}\nLet $\\mvU=(U_m^{\\scriptscriptstyle(i)}: 1\\leq i\\leq k),\\ \\boldsymbol{V}}\\newcommand{\\mvW}{\\boldsymbol{W}}\\newcommand{\\mvX}{\\boldsymbol{X}=(V_m^{\\scriptscriptstyle(j)}: 1\\leq j\\leq \\ell)$, and $\\cT_{\\mvs}(\\mvU,\\boldsymbol{V}}\\newcommand{\\mvW}{\\boldsymbol{W}}\\newcommand{\\mvX}{\\boldsymbol{X})$ be as in Lemma \\ref{lem:plane-trees}\\eqref{lem:finite-dim-convergence}. We now recall some constructs from \\cite{bhamidi-hofstad-sen}.\nRecall the space $\\vT_{J}^*$ from Section \\ref{sec:space-of-trees}. Let $\\vt$ be an element in $\\vT_{k+\\ell}^*$. 
Denote its root by $\\rho$ and its leaves by\n\\begin{equation}\n\\label{eqn:leaves-def}\n\\vz_{k,k+\\ell}:= (z_1, z_2, \\ldots,z_{k}, z_{k+1}, \\ldots, z_{k+\\ell}).\n\\end{equation}\nAlso recall that for each $i$, there is a probability measure $\\nu_{\\vt,i }(\\cdot)$ on the path $[\\rho, z_i]$ for $1\\leq i\\leq k+\\ell$. For $1\\leq i\\leq k$, sample $y_i$ according to the distribution $\\nu_{\\vt,i}(\\cdot)$ independently for different $i$ and identify $z_i$ with $y_i$. Let $\\vt'$ denote the (random) space thus obtained, and let $d_{\\vt'}$ denote the induced metric on $\\vt'$. Define the function $g^{\\scriptscriptstyle(k)}_\\phi:\\vT_{k+\\ell}^*\\to \\bR$ by\n\\begin{align}\n\\label{eqn:gphi-def}\ng^{\\scriptscriptstyle(k)}_{\\phi}(\\vt):=\n\\left\\{\n\\begin{array}{l}\n\\E\\left[\\phi\\left(d_{\\vt'}(z_i, z_j): k+1\\leq i\\leq k+\\ell\\right)\\right],\\text{ if }\\vt\\neq\\partial,\\\\\n0,\\text{ if }\\vt=\\partial.\n\\end{array}\n\\right.\n\\end{align}\nIn words, we look at the expectation of $\\phi$ applied to the pairwise distances between the last $\\ell$ leaves after sampling $y_i$ on the path $[\\rho, z_i]$ for $1\\leq i\\leq k$ and identifying $z_i$ with $y_i$. Note that here the expectation is only taken over the choices of $y_i$.\n\n\n\nWrite $d_{\\cQ}$ for the induced metric on the space $m^{-1\/2}\\cQ\\big(\\cT_{\\mvs},\\mvx\\big)$,\nand let $\\sum_3=\\sum_{v_1,\\ldots,v_{\\ell}\\in [m]}$. 
Then\n\\[\\Phi\\bigg(\\frac{1}{\\sqrt{m}}\\cQ\\big(\\cT_{\\mvs},\\mvx\\big)\\bigg)\n=\\sum\\displaystyle_3\\frac{1}{m^{\\ell}}\\phi\\bigg(d_{\\cQ}(v_i, v_j): 1\\leq i0$, define\n\\[\\kappa_{\\delta}(X)=\\kappa_{\\delta}(X,d,\\mu):=\\inf_{x\\in X}\\bigg\\{\\mu\\big\\{y\\ :\\ d(y,x)\\le\\delta\\big\\}\\bigg\\}.\\]\nFrom the definition of $(\\widetilde\\cT_{\\mvs}, \\widetilde\\mvX)$ (given right below \\eqref{eqn:18}), it is clear that\n\\begin{align*}\n\\mathbb{P}}\\newcommand{\\bQ}{\\mathbb{Q}}\\newcommand{\\bR}{\\mathbb{R}\\big(\\widetilde\\cT_{\\mvs}=\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}\\big)\n=\\frac{|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|}{|\\bT_{\\mvs}^{\\scriptscriptstyle(k)}|}=\\frac{|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|}{\\sum_{\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}'\\in\\bT_{\\mvs}}|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}')|}\n\\end{align*}\nfor any $\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}\\in\\bT_{\\mvs}$. Hence for any bounded continuous (w.r.t. 
GHP topology) $h$,\n\\[\\E\\bigg[h\\bigg(\\frac{1}{\\sqrt{m}}\\widetilde\\cT_{\\mvs}\\bigg)\\bigg]\n=\\frac{\\E\\bigg[h\\bigg(\\frac{1}{\\sqrt{m}}\\cT_{\\mvs}\\bigg)\\cdot \\big|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\cT_{\\mvs})\\big|s_0^{-k} m^{-k\/2}\\bigg]}{\\E\\big[|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\cT_{\\mvs})|s_0^{-k}m^{-k\/2}\\big]}.\\]\nUsing Lemma \\ref{lem:plane-trees}\\eqref{lem:independent-sampling}, and Lemma \\ref{lem:plane-trees}\\eqref{lem:conditional-expectation}\ntogether with uniform integrability (\\ref{lem:plane-trees}\\eqref{lem:uniform-integrability}), we conclude that\n\\[\\E\\bigg[h\\bigg(\\frac{1}{\\sqrt{m}}\\widetilde\\cT_{\\mvs}\\bigg)\\bigg]\n\\to\\E\\bigg[h\\bigg(\\frac{1}{\\sigma}\\cT_{2\\widetilde{\\ve}_{(k)}}\\bigg)\\bigg],\n\\]\nwhere $\\widetilde{\\ve}_{(k)}$ is as defined before \\eqref{eqn:tilde-nu-k-def}. Hence $m^{-1\/2}\\widetilde\\cT_{\\mvs}\\stackrel{\\mathrm{w}}{\\longrightarrow}\\sigma^{-1}\\cT_{2\\widetilde{\\ve}_{(k)}}$ in the GHP sense, and in particular, for each $\\delta>0$, $1\/\\kappa_{\\delta}\\big(m^{-1\/2}\\widetilde\\cT_{\\mvs}\\big)$, $m\\geq 1$ is a tight sequence of random variables.\nThis immediately implies that $1\/\\kappa_{\\delta}\\big(m^{-1\/2}\\cQ(\\widetilde\\cT_{\\mvs}, \\widetilde\\mvX)\\big)$, $m\\geq 1$ is also a tight sequence of random variables for each $\\delta>0$. Combining this with \\eqref{eqn:19} and \\cite[Theorem 6.1]{athreya-lohr-winter}, we see that\n\\begin{align*}\n\\frac{1}{\\sqrt{m}}\\cQ\\big(\\widetilde\\cT_{\\mvs}, \\widetilde\\mvX\\big)\n\\stackrel{\\mathrm{w}}{\\longrightarrow}\\frac{1}{\\sigma}M^{\\scriptscriptstyle(k)}\n\\end{align*}\nin the GHP sense. This concludes the proof of Theorem \\ref{prop:condition-on-connectivity}. 
\\hfill $\\blacksquare$\n\n\\subsection{Proof of Theorem \\ref{thm:number-of-connected-graphs}}\\label{sec:proof-cor-number-of-connected-graphs}\nRecall the relation between $m$ and $\\widetilde m$ from \\eqref{eqn:def-m}.\nNow it follows from \\eqref{eqn:34} that\n\\[\\big|\\mathbb{G}}\\newcommand{\\bH}{\\mathbb{H}}\\newcommand{\\bI}{\\mathbb{I}_{\\widetilde\\mathbf{d}}\\newcommand{\\ve}{\\mathbf{e}}\\newcommand{\\vf}{\\mathbf{f}}^{\\con}\\big|=\n\\frac{\\big|\\bT_{\\mvs}^{\\scriptscriptstyle(k)}\\big|\\times(s_0-2k)!\\times\\prod_{i\\geq 1} s_i!}{\\prod_{j=1}^{\\widetilde m}(\\widetilde d_j-1)!}.\n\\]\nFurther,\n\\[\n\\big|\\bT_{\\mvs}^{\\scriptscriptstyle(k)}\\big|=\\sum_{\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}\\in\\bT_{\\mvs}}|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|\n=\\sum_{\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v}\\in\\bT_{\\mvs}}|\\boldsymbol{A}}\\newcommand{\\mvB}{\\boldsymbol{B}}\\newcommand{\\mvC}{\\boldsymbol{C}_k^{\\ord}(\\boldsymbol{t}}\\newcommand{\\mvu}{\\boldsymbol{u}}\\newcommand{\\mvv}{\\boldsymbol{v})|\/k!.\n\\]\nIt thus follows that\n\\begin{align*}\n\\big|\\mathbb{G}}\\newcommand{\\bH}{\\mathbb{H}}\\newcommand{\\bI}{\\mathbb{I}_{\\widetilde\\mathbf{d}}\\newcommand{\\ve}{\\mathbf{e}}\\newcommand{\\vf}{\\mathbf{f}}^{\\con}\\big|\n&=\\frac{s_0^k m^{k\/2}\\times\\big|\\bT_{\\mvs}\\big|\\times (s_0-2k)!\\times\\prod_{i\\geq 1} s_i!}{\\prod_{j=1}^{\\widetilde m}(\\widetilde d_j-1)!\\times 
k!}\n\\times\\sum_{\\boldsymbol{t}\\in\\bT_{\\mvs}}\\frac{|\\boldsymbol{A}_k^{\\ord}(\\boldsymbol{t})|}{\\big|\\bT_{\\mvs}\\big|s_0^k m^{k\/2}}\\\\\n&\\sim \\frac{s_0^k m^{k\/2}\\times(\\widetilde m+2k-2)!\\times (s_0-2k)!}{s_0!\\times\\prod_{j=1}^{\\widetilde m}(\\widetilde d_j-1)!\\times k!}\\times\n\\left(\\frac{p_0\\sigma}{2}\\right)^k\\E\\bigg[\\bigg(\\int_0^1 2\\ve(x)dx\\bigg)^k\\bigg],\n\\end{align*}\nwhere the last step uses \\eqref{eqn:3} and the expression for $|\\bT_{\\mvs}|$ from \\eqref{eqn:pitman}.\nUsing the relations $m\/\\widetilde m\\sim 1$ and $s_0!\/(s_0-2k)!\\sim s_0^k(mp_0)^{k}$, a simple rearrangement of terms completes the proof.\n \\hfill $\\blacksquare$\n\n\n\n\n\n\\section{Proof of Theorems \\ref{thm:vacant-set-scaling} and \\ref{thm:graphs-given-degree-scaling}}\\label{sec:proof-main-deg-vac-thm}\n\nWe start with the distribution of the configuration model.\n\\begin{lem}[\\cite{Hofs13}, Proposition 7.7]\\label{lem:CM-distribution}\nLet $G$ be a multigraph on vertex set $[n]$ in which there are $x_{ij}$ many edges between $i$ and $j$, $1\\leq i<j\\leq n$, and $x_{ii}$ many self-loops at vertex $i$, $i\\in[n]$. Then, with $\\ell_n:=\\sum_{i\\in[n]}d_i$,\n\\[\\mathbb{P}\\big(\\CM_n(\\mathbf{d})=G\\big)\n=\\frac{1}{(\\ell_n-1)!!}\\cdot\n\\frac{\\prod_{i\\in[n]}d_i!}{\\prod_{i\\in[n]}2^{x_{ii}}\\prod_{1\\leq i\\leq j\\leq n}x_{ij}!}.\\]\n\\end{lem}\n\nTwo standard consequences of Lemma \\ref{lem:CM-distribution} are as follows:\n\\begin{enumeratea}\n\\item Conditionally on being simple, $\\CM_n(\\mathbf{d})$ is uniformly distributed over $\\mathbb{G}_{n,\\mathbf{d}}$, i.e.,\n\t\\begin{equation}\n\t\\label{eqn:cm-conditional-uniform}\n\t\t\\mathbb{P}\\big(\\CM_n(\\mathbf{d})=G\\ \\big|\\ \\CM_n(\\mathbf{d})\\in\\mathbb{G}_{n,\\mathbf{d}}\\big)=\\frac{1}{|\\mathbb{G}_{n,\\mathbf{d}}|}\\ \\text{ for each }\\ G\\in\\mathbb{G}_{n,\\mathbf{d}}.\n\t\\end{equation}\n\\item Under Assumption \\ref{ass:cm-deg}, there exists $c > 0$ such that the probability that $\\CM_n(\\mathbf{d})$ is simple satisfies\n\t\\begin{equation}\n\t\\label{eqn:cm-simple-0}\n\t\t\\mathbb{P}\\big(\\CM_n(\\mathbf{d}) \\in \\mathbb{G}_{n, \\mathbf{d}}\\big) \\to c, \\qquad \\text{ as } n\\to\\infty.\n\t\\end{equation}\n\\end{enumeratea}\n\nThis connection between 
$\\CM_n(\\mathbf{d})$ and $\\mathcal{G}_{n,\\mathbf{d}}$ is a very useful tool as it enables one to prove certain results about the uniform simple graph with given degrees by first obtaining a similar result for the configuration model, and then using \\eqref{eqn:cm-conditional-uniform} and \\eqref{eqn:cm-simple-0} to deduce the same for the simple graph.\n\n\nFor any nonnegative random variable $X$ with $\\E X>0$, define the corresponding size-biased random variable $X^\\circ$ with distribution given by\n\\[\\mathbb{P}\\big(X^\\circ\\leq x\\big)=\\frac{\\E\\big[X\\mathds{1}_{X\\leq x}\\big]}{\\E\\big[X\\big]},\\ \\ x\\in[0,\\infty).\\]\n\n\n\\begin{prop}\\label{prop:CM-facts}\nAssume that $\\mathbf{d}$ satisfies Assumption \\ref{ass:cm-deg} with limiting random variable $D$, and let $D^\\circ$ denote the corresponding size-biased random variable. Let\n\\[p_i^\\circ:=\\mathbb{P}\\big(D^\\circ=i\\big)=\\frac{i \\mathbb{P}\\big(D=i\\big)}{\\E\\big[D\\big]},\\ \\ i=1, 2, \\ldots.\\]\n\\begin{enumeratei}\n\\item\nLet $\\cC_{\\scriptscriptstyle(k)}$ be the $k$-th largest component of $\\CM_n(\\mathbf{d})$. 
Then the following hold for each $k\\geq 1$:\n\\begin{align}\n&\\hskip28pt\\frac{1}{|\\cC_{\\scriptscriptstyle(k)}|}\\sum_{v\\in \\cC_{\\scriptscriptstyle(k)}} d_v^2\\stackrel{\\mathrm{P}}{\\longrightarrow} \\sum_{i\\ge 1}i^2 p_i^{\\circ}<\\infty;\\label{eqn:cm-sum-square} \\\\\n&\\hskip52pt\\pr\\big(\\cC_{\\scriptscriptstyle(k)} \\text{ is simple}\\big)\\to 1;\\label{eqn:cm-simple}\\\\\n&\\frac{1}{|\\cC_{\\scriptscriptstyle(k)}|}\\#\\set{v\\in \\cC_{\\scriptscriptstyle(k)}: d_v=i}\\stackrel{\\mathrm{P}}{\\longrightarrow} p^{\\circ}_i\\ \\text{ for }\\ i\\ge 1.\\label{eqn:cm-degree}\n\\end{align}\n\\item Further \\eqref{eqn:cm-sum-square}\nand \\eqref{eqn:cm-degree} continue to hold if we replace $\\CM_n(\\mathbf{d})$ by $\\mathcal{G}_{n,\\mathbf{d}}$.\n\\end{enumeratei}\n\\end{prop}\n\\noindent {\\bf Proof:}\nGiven a sequence $a_1,\\ldots,a_{\\ell}$ of positive real numbers, the (random) size-biased permutation $\\pi(1),\\ldots, \\pi(\\ell)$ can be obtained as follows:\n\\[\\mathbb{P}\\big(\\pi(1)=i\\big)=\\frac{a_i}{\\sum_{j=1}^{\\ell}a_j},\\ \\text{ and }\\\n\\mathbb{P}\\big(\\pi(k)=i\\ \\big|\\ \\mathcal{S}_{k-1}\\big)=\\frac{a_i}{\\sum_{j\\notin\\mathcal{S}_{k-1}}a_j},\\ \\ k=2,\\ldots,\\ell,\\]\nwhere $\\mathcal{S}_k=\\{\\pi(1),\\ldots,\\pi(k)\\}$.\nIt is a standard fact that the random graph $\\CM_n(\\mathbf{d})$ can be explored in a depth-first way so that the vertices appear as a size-biased 
permutation, where vertex $i$ has size $d_i$; see \\cite[Section 5.1]{dhara-hofstad-leeuwaarden-sen} or \\cite{riordan2012phase}.\nIt further follows from \\cite[Lemma 15]{dhara-hofstad-leeuwaarden-sen} and Theorem \\ref{thm:cm-component-sizes} that for every $\\varepsilon>0$,\nthere exists $T_{\\varepsilon}>0$ such that\n\\begin{align}\\label{eqn:36}\n\\limsup_n\\ \\mathbb{P}\\big(\\cC_{(k)}\\text{ is explored by time }T_{\\varepsilon}n^{2\/3}\\big)\\geq 1-\\varepsilon.\n\\end{align}\n\\cite[Lemma 5]{dhara-hofstad-leeuwaarden-sen} shows that for every $T>0$,\n\\begin{align}\\label{eqn:37}\n\\sup_{0\\leq u\\leq T}\\bigg|\\frac{1}{n^{2\/3}}\\sum_{i=1}^{\\lfloor un^{2\/3}\\rfloor}d_{\\pi(i)}^2-\\frac{\\sigma_3 u}{\\sigma_1}\\bigg|\\stackrel{\\mathrm{P}}{\\longrightarrow} 0,\n\\end{align}\nwhere $\\sigma_r=\\E[D^r]$, $r=1,2,3$. Combining \\eqref{eqn:36} and \\eqref{eqn:37}, we get\n\\[\\frac{1}{n^{2\/3}}\\bigg(\\sum_{v\\in \\cC_{\\scriptscriptstyle(k)}} d_v^2-\\frac{\\sigma_3}{\\sigma_1}\\big|\\cC_{(k)}\\big|\\bigg)\\stackrel{\\mathrm{P}}{\\longrightarrow} 0.\\]\nSince $\\sigma_3\/\\sigma_1=\\E\\big[{D^\\circ}^2\\big]=\\sum_{i\\geq 1}i^2 p_i^\\circ$, the last display together with Theorem \\ref{thm:cm-component-sizes} yields \\eqref{eqn:cm-sum-square}.\n\n\n\\eqref{eqn:cm-simple} follows from \\cite[Section 5.3]{dhara-hofstad-leeuwaarden-sen}, or by following verbatim the proof of this exact result given under slightly different moment assumptions in \\cite[Equation 7.6]{dhara-hofstad-leeuwaarden-sen-2}. 
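For completeness, the identity $\\sigma_3\/\\sigma_1=\\E\\big[{D^\\circ}^2\\big]$ used above is immediate from the definition of the size-biased distribution:

```latex
\\E\\big[{D^\\circ}^2\\big]
 =\\sum_{i\\geq 1}i^2 p_i^\\circ
 =\\sum_{i\\geq 1}i^2\\,\\frac{i\\,\\mathbb{P}\\big(D=i\\big)}{\\E\\big[D\\big]}
 =\\frac{\\E\\big[D^3\\big]}{\\E\\big[D\\big]}
 =\\frac{\\sigma_3}{\\sigma_1}.
```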
\\eqref{eqn:cm-degree} follows from \\cite[Equation 6.4]{dhara-hofstad-leeuwaarden-sen}.\n\nFinally, part (ii) follows from (i) by an application of \\eqref{eqn:cm-conditional-uniform} and \\eqref{eqn:cm-simple-0}.\n \\hfill $\\blacksquare$\n\n\\vskip12pt\n\n\\noindent{\\bf Proof of Theorem \\ref{thm:graphs-given-degree-scaling}(i).}\\ \\\nNote that $\\mathbb{P}\\big(D=1\\big)>0$ under Assumption \\ref{ass:cm-deg}. Hence $p_1^\\circ>0$.\nFurther, under Assumption \\ref{ass:cm-deg},\n\\[\\sum_{i\\geq 1}i p_i^\\circ=\\E\\big[D^\\circ\\big]=\\E D^2\/\\E D=2.\\]\nHence, by Proposition \\ref{prop:CM-facts} (ii), for every $k\\geq 1$, $(d_v\\ :\\ v\\in\\cC_{(k)})$ satisfies Assumption \\ref{ass:degree} (after a possible relabeling of vertices) with limiting p.m.f. $(p_1^\\circ, p_2^\\circ,\\ldots)$.\n\nLet $\\mathcal{P}$ denote the partition of $\\mathcal{G}_{n,\\mathbf{d}}$ into different components. Then conditional on $\\mathcal{P}$, each component is uniformly distributed over the set of simple, connected graphs with the degrees prescribed by the partition $\\mathcal{P}$. 
Further, different components are conditionally independent.\nWe thus conclude using Theorem \\ref{thm:cm-component-sizes} and Theorem \\ref{prop:condition-on-connectivity} that for every $k\\geq 1$,\n\\[n^{-2\/3}\\big(|\\cC_{(1)}|,\\ldots,|\\cC_{(k)}|\\big)\n\\stackrel{\\mathrm{w}}{\\longrightarrow}\\big(|\\gamma_{\\scriptscriptstyle(1)}^{\\mvmu_{D}}(\\lambda)|,\\ldots,|\\gamma_{\\scriptscriptstyle(k)}^{\\mvmu_{D}}(\\lambda)|\\big)\\]\njointly with\n\\[\\bigg(\\frac{1}{\\sqrt{|\\cC_{(1)}|}}\\cC_{(1)},\\ldots, \\frac{1}{\\sqrt{|\\cC_{(k)}|}}\\cC_{(k)}\\bigg)\n\\stackrel{\\mathrm{w}}{\\longrightarrow}\\frac{\\alpha_D}{\\sqrt{\\eta_D}}\\big(S_1,\\ldots,S_k\\big)\\]\nin the GHP sense, where $S_i$ are as in Construction \\ref{constr:M-D}, and $\\eta_D$ and $\\alpha_D$ are as in Theorem \\ref{thm:cm-component-sizes}. (Here we have used the fact that $\\sum_{i\\geq 1}i^2 p_i^\\circ-4=\\eta_D\/\\alpha_D^2$.)\nCombining the two yields the result.\n \\hfill $\\blacksquare$\n\n\n\\vskip12pt\n\n\\noindent{\\bf Proof of Theorem \\ref{thm:graphs-given-degree-scaling}(ii).}\\ \\\nBy Proposition \\ref{prop:CM-facts} (i), for every $k\\geq 1$, $(d_v\\ :\\ v\\in\\cC_{(k)})$ satisfies Assumption \\ref{ass:degree} (after a possible relabeling of vertices) with limiting p.m.f. $(p_1^\\circ, p_2^\\circ,\\ldots)$.\nAs before, let $\\mathcal{P}$ denote the partition of $\\CM_n(\\mathbf{d})$ into different components. 
For each $k\\geq 1$, define the event\n\\[E_k:=\\big\\{\\cC_{(j)}\\ \\text{ is simple for all }\\ 1\\leq j\\leq k\\big\\}.\\]\nThen note that by Lemma \\ref{lem:CM-distribution}, conditional on the event $E_k\\cap\\{\\mathcal{P}=P\\}$, $\\cC_{(j)}$, $j\\geq 1$ are independent, and for each $i\\leq k$, $\\cC_{(i)}$ is uniformly distributed over the set of simple, connected graphs with the degrees prescribed by the partition $P$. Since $\\mathbb{P}(E_k^c)\\to 0$ by \\eqref{eqn:cm-simple}, the result follows by imitating the argument used in the proof of Theorem \\ref{thm:graphs-given-degree-scaling}(i). \\hfill $\\blacksquare$\n\n\n\\vskip12pt\n\n\n\\noindent{\\bf Proof of Theorem \\ref{thm:vacant-set-scaling}(ii).}\\ \\\nFor every $u\\geq 0$, let $\\mathcal{V}^u$ denote the vacant set left by a random walk on $\\CM_n(\\mathbf{d}_r^{\\scriptscriptstyle(n)})$ run up to time $nu$.\nLet $\\cE^u$ be the set of all edges of $\\CM_n(\\mathbf{d}_r^{\\scriptscriptstyle(n)})$ both of whose endpoints are in $\\mathcal{V}^u$, i.e.,\n\\[\\cE^u:=\\big\\{\\{v_1,v_2\\}\\in\\CM_n(\\mathbf{d}_r^{\\scriptscriptstyle(n)}) :\\ v_1, v_2\\in\\mathcal{V}^u\\big\\}.\\]\nDefine the vacant graph $\\boldsymbol{V}^u$ by $\\boldsymbol{V}^u:=([n],\\cE^u)$, and let $\\vD^u:=\\big(\\vD^u(j)\\ :\\ j\\in[n]\\big)$ be the degree sequence of 
$\\boldsymbol{V}^u$.\nThen, by \\cite[Proposition 3.1]{cerny-teixeira}, for any collection $A$ of multigraphs on $[n]$,\n\\begin{align}\\label{eqn:38}\n\\overline\\vP_{n,r}\\big(\\boldsymbol{V}^u\\in A\\big)\n=\\sum_{\\mathbf{d}}\\overline\\vP_{n,r}\\big(\\vD^u=\\mathbf{d}\\big)\\times\\overline\\pr_{n,\\mathbf{d}}\\big(A\\big).\n\\end{align}\nIn words, the vacant graph $\\boldsymbol{V}^u$ can be generated in two steps: (i) sample the degree sequence $\\vD^u$ under the annealed measure $\\overline\\vP_{n,r}$, and then (ii) construct a configuration model with this degree sequence.\n\nLet $\\mvs^{u}=\\big(s_0^{u},\\ldots, s_r^{u}\\big)$ denote the empirical distribution of $\\vD^{u}$. Then, by \\cite[Equation 6.1]{cerny-teixeira},\nfor every $\\varepsilon>0$,\n\\begin{align*}\n\\overline\\vP_{n,r}\\bigg(\\bigg|\\frac{1}{n}s_i^{u_{\\star}}-\\pr\\big(D_{\\mathrm{vac}}=i\\big)\\bigg|\\geq\\varepsilon\\bigg)\n\\to 0,\\ \\ 0\\leq i\\leq r,\n\\end{align*}\nwhere $u_{\\star}$ is as in \\eqref{eqn:ustar-def}, and $D_{\\mathrm{vac}}$ is as in \\eqref{eqn:D-vac-def}. 
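The two-step generation just described can be sketched in code. The following is only an illustrative sketch (the degree sequence is fixed by hand here; in step (i) it would be sampled from the annealed law of $\\vD^u$), with the half-edge pairing of step (ii) done by a uniform random matching:

```python
import random

def configuration_model(degrees, rng=None):
    # step (ii): uniform pairing of half-edges -- vertex v contributes
    # degrees[v] half-edges, and a uniformly random perfect matching of
    # all half-edges yields the multigraph (self-loops/multi-edges allowed)
    rng = rng or random.Random(0)
    half_edges = [v for v, d in enumerate(degrees) for _ in range(d)]
    assert len(half_edges) % 2 == 0, "the degree sum must be even"
    rng.shuffle(half_edges)
    return [tuple(sorted(half_edges[i:i + 2]))
            for i in range(0, len(half_edges), 2)]

# step (i) would sample `degrees`; here a small fixed sequence for illustration
edges = configuration_model([3, 2, 2, 1])
assert len(edges) == 4  # number of edges = half the degree sum
```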
The simple observation\n\\[\\big|s_i^{u_{\\star}}-s_i^{u_n}\\big|\\leq (|a_0|+1)n^{2\/3}(r+1),\\]\nvalid for large $n$ when $u_n$ satisfies \\eqref{eqn:40}, leads to\n\\begin{align}\\label{eqn:39}\n\\overline\\vP_{n,r}\\bigg(\\bigg|\\frac{1}{n}s_i^{u_n}-\\pr\\big(D_{\\mathrm{vac}}=i\\big)\\bigg|\\geq\\varepsilon\\bigg)\n\\to 0,\\ \\ 0\\leq i\\leq r.\n\\end{align}\nFurther, by \\cite[Equation 6.4]{cerny-teixeira},\n\\begin{align}\\label{eqn:41}\n\\overline\\vP_{n,r}\\bigg(n^{1\/3}\n\\bigg|\\frac{\\sum_{i=0}^r (i^2-2i)s_i^{u_n}}{\\sum_{i=0}^r i s_i^{u_n}}\n-\\lambda_{\\mathrm{vac}}\\bigg|\\geq\\varepsilon\\bigg)\\to 0,\n\\end{align}\nwhere $\\lambda_{\\mathrm{vac}}$ is as in \\eqref{eqn:lambda-vac-def}. Combining \\eqref{eqn:39} and \\eqref{eqn:41}, we see that\nthe degree sequence $\\vD^{u_n}$ satisfies Assumption \\ref{ass:cm-deg} with limiting random variable $D_{\\mathrm{vac}}$ and $\\lambda=\\lambda_{\\mathrm{vac}}$.\nIn view of \\eqref{eqn:38}, an application of Theorem \\ref{thm:graphs-given-degree-scaling}(ii) completes the proof. \\hfill $\\blacksquare$\n\n\\vskip12pt\n\n\n\\noindent{\\bf Proof of Theorem \\ref{thm:vacant-set-scaling}(i).}\\ \\\nLet $\\boldsymbol{V}^u,\\ \\vD^u$, and $\\mvs^u$ be as in the proof of Theorem \\ref{thm:vacant-set-scaling}(ii), but with $\\mathcal{G}_{n,r}$ as the underlying graph (instead of $\\CM_n(\\mathbf{d}_r^{\\scriptscriptstyle(n)})$). 
By \\cite[Lemma 7]{cooper-frieze}, the analogue of \\eqref{eqn:38} is true in this case, i.e.,\n\\[\\vP_{n,r}\\big(\\boldsymbol{V}^u\\in A\\big)\n=\\sum_{\\mathbf{d}}\\vP_{n,r}\\big(\\vD^u=\\mathbf{d}\\big)\\times\\pr_{n,\\mathbf{d}}\\big(A\\big),\\]\nfor any collection $A$ of simple graphs on $[n]$. Further, using \\eqref{eqn:cm-conditional-uniform} and \\eqref{eqn:cm-simple-0},\nwe conclude that \\eqref{eqn:39} and \\eqref{eqn:41} continue to hold in this case. We complete the proof by an application of Theorem \\ref{thm:graphs-given-degree-scaling}(i). \\hfill $\\blacksquare$\n\n\\section*{Acknowledgements}\nThe authors thank Christina Goldschmidt and Remco van der Hofstad for many helpful discussions, and Bal\\'{a}zs R\\'{a}th for useful comments on a preliminary version of the paper.\nSB has been partially supported by NSF-DMS grants 1105581, 1310002, 160683, 161307 and SES grant 1357622.\nSS has been supported in part by EPSRC grant EP\/J019496\/1, a CRM-ISM fellowship, and the Netherlands Organization for Scientific Research (NWO) through the Gravitation Networks grant 024.002.003.\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sect:intro}\n\nThe demand for theoretical precision predictions for the LHC requires next-to-next-to-leading order (NNLO) calculations for\na number of processes.\nIf one goes beyond the simplest $2 \\rightarrow 2$-processes,\nconsidering a $2 \\rightarrow (n-2)$-process with possibly $n>4$, one is in particular interested\nin methods which allow for automation.\nNumerical methods \nlike numerical loop integration 
\\cite{Soper:1998ye,Soper:1999xk,Nagy:2003qn,Gong:2008ww,Assadsolimani:2009cz,Assadsolimani:2010ka,Becker:2010ng,Becker:2011vg,Becker:2012aq,Becker:2012nk,Becker:2012bi,Goetz:2014lla,Seth:2016hmv}\ncombined with loop-tree duality \\cite{Catani:2008xa,Bierenbaum:2010cy,Bierenbaum:2012th,Buchta:2014dfa,Hernandez-Pinto:2015ysa,Buchta:2015wna,Sborlini:2016gbr,Driencourt-Mangin:2017gop,Driencourt-Mangin:2019aix,Runkel:2019yrs}\nor methods based on numerical unitarity \\cite{Ita:2015tya,Abreu:2017idw,Abreu:2017xsl,Abreu:2017hqn,Abreu:2018jgq}\nare a promising path for this approach.\n\nStarting from two-loops, there are Feynman integrals with higher powers of the propagators.\nThey arise from self-energy insertions on internal lines.\nAn example is shown in fig.~\\ref{fig_1}.\nNote that the contributions we are concerned about are not an artefact of a gauge choice.\nFor gauge theories, we will use Feynman gauge throughout this paper.\nIn Feynman gauge the Feynman propagator has just simple poles.\nIn an analytic calculation Feynman integrals with higher powers of the propagators are not a problem.\nThey are reduced by integration-by-parts identities to master integrals.\nThe master integrals are then calculated analytically.\nIt is possible that the set of master integrals itself contains integrals with raised propagators.\n\nThe situation is different for numerical approaches.\nIn this paper we focus on numerical loop integration \\cite{Soper:1998ye,Soper:1999xk,Nagy:2003qn,Gong:2008ww,Assadsolimani:2009cz,Assadsolimani:2010ka,Becker:2010ng,Becker:2011vg,Becker:2012aq,Becker:2012nk,Becker:2012bi,Goetz:2014lla,Seth:2016hmv}\nin combination with loop-tree duality \\cite{Catani:2008xa,Bierenbaum:2010cy,Bierenbaum:2012th,Buchta:2014dfa,Hernandez-Pinto:2015ysa,Buchta:2015wna,Sborlini:2016gbr,Driencourt-Mangin:2017gop,Driencourt-Mangin:2019aix,Runkel:2019yrs}.\nLet us mention that our results have also implications for methods\nbased on numerical unitarity 
\\cite{Ita:2015tya,Abreu:2017idw,Abreu:2017xsl,Abreu:2017hqn,Abreu:2018jgq}.\nRaised propagators have been considered previously in\n\\cite{Bierenbaum:2012th,Sogaard:2014ila,Abreu:2017idw}.\nWithin these numerical approaches one is interested in\nthe residue when a raised propagator goes on-shell.\nIf $f(z)$ is a function of a complex variable $z$, which has a pole of order $\\nu$ at $z_0$, the standard formula for the \nresidue at $z_0$ is given\nby\n\\begin{eqnarray}\n\\label{residue_example_one_dim}\n \\mathrm{res}\\left(f,z_0\\right)\n & = &\n \\frac{1}{\\left(\\nu-1\\right)!}\n \\left.\n \\left(\\frac{d}{dz}\\right)^{\\nu-1}\n \\left[ \\left(z-z_0\\right)^\\nu f\\left(z\\right) \\right]\n \\right|_{z=z_0}.\n\\end{eqnarray}\nWe may think of the variable $z$ as being the energy flowing through the raised propagator.\nFor $\\nu > 1$ we have a derivative acting on all $z$-dependent quantities in the diagram.\nAlthough this can be done, it is process-dependent and not very well suited for automation.\n(Eq.~(\\ref{residue_example_one_dim}) is a simple univariate example, the generalisation to the multivariate case\nis discussed in ref.~\\cite{Sogaard:2014ila}. 
The computation of the residue in the multivariate case with higher powers of the propagators\nis based on Gr\\\"obner bases.)\nAlternatively, ref.~\\cite{Bierenbaum:2012th} proposes to reduce Feynman integrals with raised propagators through integration-by-parts\nidentities to Feynman integrals without raised propagators.\nThis is possible, but again it is process-dependent and therefore not very well suited for automation.\n\nAlthough our focus is on the loop-tree duality method, where we cut an $l$-loop contribution exactly $l$-times,\nlet us briefly comment on the numerical unitarity method.\nHere, one writes the $l$-loop amplitude as a linear combination of (known) master integrals with (unknown) coefficients.\nThe coefficients are determined by cutting the internal propagators, starting with the maximal cut and working down\nthe hierarchy.\nThe method exploits the fact that the integrand of a loop amplitude factorises on leading poles into products of tree amplitudes.\nHowever, if higher powers of the propagators are present, one needs in addition to the coefficient of the leading poles\nalso the coefficients of the subleading poles, where the above mentioned factorisation property no longer holds.\nRef.~\\cite{Abreu:2017idw} presents a numerical method to extract the coefficients of the subleading poles by considering equations obtained from cutting less propagators.\n\nLet us now return to the loop-tree duality method.\nWe would like to isolate the complication into a small process-independent part.\nIf we only look at the left diagram of fig.~\\ref{fig_1} there is nothing we can do.\nHowever, we may look at the set of all diagrams corresponding to a self-energy insertion on a specific internal line.\nAt two-loops and in $\\phi^3$-theory there are two diagrams, as shown in fig.~\\ref{fig_1}:\nThe left diagram of fig.~\\ref{fig_1}, which we already discussed, and the right diagram of fig.~\\ref{fig_1}, corresponding\nto the counterterm from 
renormalisation.\nIn the on-shell scheme the counterterm is essentially the second-order Taylor expansion of the self-energy around its on-shell value.\nThus, if we performed the one-loop calculation of the self-energy analytically and combined it with the counterterm, \nwe would obtain a transcendental function, which vanishes quadratically in the on-shell limit.\nThis will cancel the double pole and the residue will vanish.\nThis is fine, but has the drawback that we introduced transcendental functions from an analytic one-loop calculation.\nWe would like to work entirely with rational functions, as we do in the numerical approach.\nIt is therefore natural to ask if there exists an integral representation for the counterterm, such that the residue\nvanishes already at the integrand level.\nThis is the topic of this paper and we show that such an integral representation for the counterterm exists.\nSuch integral representations are not unique.\nThere is quite some freedom to construct an integral representation: only the integral, the UV-behaviour and the on-shell behaviour \nare fixed by the requirement that the counterterm should be a proper counterterm, local in loop momentum space \nand leading to a vanishing residue.\nA sufficient condition for the last requirement to hold is that the sum of the integrands for the self-energy vanishes quadratically as\nthe external momentum of the self-energy goes on-shell.\nThus\n\\begin{eqnarray}\n\\label{eq_condition}\n \\lim\\limits_{k^2\\rightarrow m^2} \\left(\\mbox{Self-energy integrand}\\right)\n & = &\n {\\mathcal O}\\left( \\left(E-E^\\flat\\right)^2 \\right),\n\\end{eqnarray}\nwhere $E^\\flat$ denotes the on-shell value of the energy flowing through the raised propagator.\nThis condition will cancel the double pole (and the single pole)\nfrom the propagators, resulting in a vanishing residue.\nFor gauge theories it is sufficient to require eq.~(\\ref{eq_condition}) only up to gauge terms, which vanish when 
contracted\ninto gauge-invariant quantities.\n\nIn this paper we construct counterterms with the property given in eq.~(\\ref{eq_condition}).\nThus, the main result of this paper is that when summed over all relevant diagrams (including counterterms from renormalisation)\nresidues due to higher poles from self-energy insertions on internal lines can be made to vanish at the integrand level.\n\nLet us mention that the counterterms we construct have higher powers of the propagators \nin the self-energy parts.\nAt first sight, this may seem like nothing has been gained: \nWe removed higher powers of the propagators in one part,\nbut introduced new higher powers of the propagators in another part.\nThe essential point is that we removed the higher powers of the propagators from the process-dependent part\nand isolated the higher powers of the propagators in a universal process-independent part.\nThe derivatives for the residues may therefore be calculated once and for all.\n\nOne final remark: Although the explicit results for the counterterms\nfor $\\phi^3$-theory and QCD presented in this paper are for stable particles, where the masses are real,\nthis assumption is not essential.\nWe may allow complex masses.\nWe only require that the renormalised propagator has a pole at the renormalised mass with residue $1$.\nOur method has a straightforward extension towards the complex mass scheme \\cite{Denner:2005fg}.\n\nThis paper is organised as follows:\nIn the next section we consider a simple toy example from complex analysis.\nIn section~\\ref{sect:scalar_theory} we present our argument in detail for the case of a scalar $\\phi^3$-theory.\nAll essential features are already in there.\nIn section~\\ref{sect:QCD} we specialise to the case of quantum chromodynamics, treating spin $1\/2$-fermions and massless spin $1$-gauge bosons.\nFinally, our conclusions are contained in section~\\ref{sect:conclusions}.\nAppendix~\\ref{sect:Feynman_rules} lists the Feynman rules for the 
scalar $\\phi^3$-theory.\n\n\\section{A toy example}\n\\label{sect:toy}\n\nLet us first look at a toy example and consider the polynomials\n\\begin{eqnarray}\n f_2 \\; = \\; z_2,\n \\;\\;\\;\\;\\;\\;\n f_1 \\; = \\; z_1 + \\frac{1}{2} z_2 + 1,\n \\;\\;\\;\\;\\;\\;\n f_6 \\; = \\; z_1 - \\frac{1}{2} z_2 - 1\n\\end{eqnarray}\nin two complex variables $z_1$ and $z_2$.\nWe are interested in the local residues (i.e. two-fold residues in $z_1$ and $z_2$) of the rational function\n\\begin{eqnarray}\n R \n & = &\n \\frac{1}{f_2^2 f_1 f_6}.\n\\end{eqnarray}\nThe local residues are at\n\\begin{eqnarray}\n \\left(z_1,z_2\\right)\n & \\in &\n \\left\\{\n \\; \\left(-1,0\\right),\n \\; \\left(1,0\\right),\n \\; \\left(0,-2\\right) \\;\n \\right\\}.\n\\end{eqnarray}\nThe location of the residues is shown in the left drawing of fig.~\\ref{fig_toy}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1.0]{fig_toy}\n\\hspace*{20mm}\n\\includegraphics[scale=1.0]{fig_toy_CT}\n\\end{center}\n\\caption{\nThe left figure shows the location of the residues in the $(z_1,z_2)$-plane of the rational function $R$,\nthe right figure shows the location of the residues of the rational function $R_{\\mathrm{CT}}$.\n}\n\\label{fig_toy}\n\\end{figure}\nWe are in particular interested in the local residues at $P_1=(-1,0)$ and $P_2=(1,0)$, where we have a double pole from $f_2^2$. We have\n\\begin{eqnarray}\n \\mathrm{res}\\left(R,P_1\\right)\n \\; = \\;\n \\frac{1}{4},\n & &\n \\mathrm{res}\\left(R,P_2\\right)\n \\; = \\;\n - \\frac{1}{4}.\n\\end{eqnarray}\nLet us now define\n\\begin{eqnarray}\n f_1^\\flat \\; = \\; z_1 + 1,\n \\;\\;\\;\\;\\;\\;\n f_6^\\flat \\; = \\; z_1 - 1\n\\end{eqnarray}\nand consider rational functions with poles only along $f_2$, $f_1^\\flat$ and $f_6^\\flat$, i.e. 
rational functions\nof the form\n\\begin{eqnarray}\n \\frac{P\\left(z_1,z_2\\right)}{f_2^{\\nu_2} \\left(f_1^\\flat\\right)^{\\nu_1} \\left(f_6^\\flat\\right)^{\\nu_6}},\n\\end{eqnarray}\nwith $\\nu_1,\\nu_2,\\nu_6 \\in {\\mathbb N}$ and $P(z_1,z_2)$ a polynomial in $z_1$ and $z_2$.\nThese functions have local residues only at the two points $P_1=(-1,0)$ and $P_2=(1,0)$\n(this is shown in the right picture of fig.~\\ref{fig_toy}), and we are interested in a function\n$R_{\\mathrm{CT}}$ which cancels the residues of $R$ at $P_1$ and $P_2$.\nLet us first note that the function\n\\begin{eqnarray}\n R_{\\mathrm{try}}\n & = &\n \\frac{1}{f_2^2 f_1^\\flat f_6^\\flat}\n \\;\\; = \\;\\;\n \\frac{1}{z_2^2 \\left(z_1+1\\right) \\left(z_1-1\\right)}\n\\end{eqnarray}\nhas no residues at $P_1$ or $P_2$:\n\\begin{eqnarray}\n \\mathrm{res}\\left(R_{\\mathrm{try}},P_1\\right)\n \\; = \\;\n 0,\n & &\n \\mathrm{res}\\left(R_{\\mathrm{try}},P_2\\right)\n \\; = \\;\n 0,\n\\end{eqnarray}\nsince $R_{\\mathrm{try}}$ does not have a single pole in $z_2$.\nHowever, expanding $R$ to second order in $z_2$ does the job:\n\\begin{eqnarray}\n R_{\\mathrm{CT}}\n & = &\n -\n \\frac{1}{f_2^2 f_1^\\flat f_6^\\flat}\n \\left( 1 - \\frac{z_2}{2 f_1^\\flat} + \\frac{z_2}{2 f_6^\\flat} \\right)\n\\end{eqnarray}\nWe have\n\\begin{eqnarray}\n \\mathrm{res}\\left(R_{\\mathrm{CT}},P_1\\right)\n \\; = \\;\n - \\frac{1}{4},\n & &\n \\mathrm{res}\\left(R_{\\mathrm{CT}},P_2\\right)\n \\; = \\;\n \\frac{1}{4}.\n\\end{eqnarray}\nThus\n\\begin{eqnarray}\n \\mathrm{res}\\left(R+R_{\\mathrm{CT}},P_1\\right)\n \\; = \\; \n \\mathrm{res}\\left(R+R_{\\mathrm{CT}},P_2\\right)\n & = & 0,\n\\end{eqnarray}\nand the residues at $P_1$ or $P_2$ cancel in the sum.\n\nThe analogy with quantum field theory is as follows: We may think of $z_1$ and $z_2$ as two energy\nvariables, $f_1$, $f_2$ and $f_6$ as propagators and of $f_1^\\flat$ and $f_6^\\flat$ as the on-shell projections\nof $f_1$ and $f_6$, respectively, as $f_2$ goes 
on-shell.\n\n\n\\section{The method for a scalar theory}\n\\label{sect:scalar_theory}\n\nLet us now discuss a simple quantum field theory.\nWe consider a massive $\\phi^3$-theory.\nThe Lagrangian in renormalised quantities is given by\n\\begin{eqnarray}\n {\\mathcal L} \n & = &\n \\frac{1}{2} \\left( \\partial_\\mu \\phi \\right) \\left( \\partial^\\mu \\phi \\right)\n - \\frac{1}{2} m^2 \\phi^2\n + \\frac{1}{3!} \\lambda^{(D)} \\phi^3\n + {\\mathcal L}_{\\mathrm{CT}}.\n\\end{eqnarray}\nUnder renormalisation we have\n\\begin{eqnarray}\n \\phi_0 = Z_\\phi^{\\frac{1}{2}} \\phi,\n \\;\\;\\;\\;\\;\\;\n \\lambda_0 = Z_\\lambda \\; \\lambda^{(D)},\n \\;\\;\\;\\;\\;\\;\n m_0 = Z_m m,\n\\end{eqnarray}\nwhere we denote bare quantities with a subscript ``0''.\nWe work in dimensional regularisation and set $D=4-2\\varepsilon$.\nWe further set\n\\begin{eqnarray}\n\\label{def_lambda_D}\n \\lambda^{(D)}\n & = &\n \\mu^\\varepsilon S_\\varepsilon^{-\\frac{1}{2}} \\lambda.\n\\end{eqnarray}\nThe arbitrary scale $\\mu$ is introduced to keep the mass dimension of the renormalised coupling $\\lambda$ equal to one.\nThe factor $S_\\varepsilon = (4\\pi)^\\varepsilon \\exp(-\\varepsilon \\gamma_E)$ absorbs artefacts of dimensional regularisation \n(logarithms of $4\\pi$ and Euler's constant $\\gamma_E$).\nThe Lagrangian for the counterterms is given by\n\\begin{eqnarray}\n {\\mathcal L}_{\\mathrm{CT}}\n & = &\n - \\frac{1}{2} \\left(Z_\\phi-1\\right) \\phi \\Box \\phi \n - \\frac{1}{2} \\left(Z_\\phi Z_m^2 -1 \\right) m^2 \\phi^2\n + \\frac{1}{3!} \\left(Z_\\phi^{\\frac{3}{2}} Z_\\lambda - 1 \\right) \\lambda^{(D)} \\phi^3.\n\\end{eqnarray}\nThe Feynman rules for the scalar $\\phi^3$-theory are listed in appendix~\\ref{sect:Feynman_rules}.\nFor the perturbative expansion of the renormalisation constants we write\n\\begin{eqnarray}\n Z_a\n & = &\n 1 + \\sum_{n=1}^\\infty Z_a^{(n)} \\left( \\frac{\\lambda^2}{\\left(4\\pi\\right)^2} \\right)^n,\n \\;\\;\\;\\;\\;\\;\n a \\in \\{ \\phi, m, 
\\lambda \\}.\n\\end{eqnarray}\nWe will need $Z_m^{(1)}$ and $Z_\\phi^{(1)}$.\nIn the on-shell scheme these renormalisation constants are given by\n\\begin{eqnarray}\n\\label{renormalisation_constants_phi_cubed}\n Z_m^{(1)}\n & = & \n \\frac{1}{4m^2} B_0\\left(m^2,m^2,m^2\\right),\n \\nonumber \\\\\n Z_\\phi^{(1)}\n & = & \n \\frac{2-\\varepsilon}{6m^2} B_0\\left(m^2,m^2,m^2\\right) - \\frac{1-\\varepsilon}{3m^4} A_0\\left(m^2\\right).\n\\end{eqnarray}\nThe scalar one-loop integrals $A_0$ and $B_0$ are defined by\n\\begin{eqnarray}\n A_0\\left(m^2\\right)\n & = &\n 16 \\pi^2\n S_\\varepsilon^{-1} \\mu^{2\\varepsilon} \n \\int \\frac{d^Dk}{(2 \\pi)^D i } \\; \n \\frac{1}{k^2-m^2},\n \\nonumber \\\\\n B_0\\left(p^2,m_1^2,m_2^2\\right)\n & = &\n 16 \\pi^2\n S_\\varepsilon^{-1} \\mu^{2\\varepsilon} \n \\int \\frac{d^Dk}{(2 \\pi)^D i } \\; \n \\frac{1}{\\left[\\left(k+\\frac{1}{2}p\\right)^2-m_1^2\\right]\\left[\\left(k-\\frac{1}{2}p\\right)^2-m_2^2\\right]}.\n\\end{eqnarray}\nIn this paper we are concerned with diagrams like the one shown in the left picture of fig.~\\ref{fig_1}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1.0]{fig_oneloop}\n\\hspace*{20mm}\n\\includegraphics[scale=1.0]{fig_CT}\n\\end{center}\n\\caption{\nThe left figure shows a self-energy insertion on an internal line.\nThe same momentum is flowing through the two red lines, resulting in a propagator raised to power two.\nA self-energy insertion on an internal line is always accompanied by a counterterm, shown in the right figure.\n}\n\\label{fig_1}\n\\end{figure}\nIn fig.~\\ref{fig_2} we show our choice for the labelling of the propagators and the orientation of the momenta.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1.0]{fig_labelled_propagators}\n\\hspace*{20mm}\n\\includegraphics[scale=1.0]{fig_labelled_momenta}\n\\end{center}\n\\caption{\nThe labelling of the propagators (left figure) and the labelling of the momenta (right 
figure).\n}\n\\label{fig_2}\n\\end{figure}\nWith\n\\begin{eqnarray}\n D_j & = & k_j^2 - m^2 + i \\delta\n\\end{eqnarray}\nwe have for this diagram\n\\begin{eqnarray}\n I_{\\mathrm{twoloop}}\n & = &\n \\frac{i \\lambda^6}{2}\n \\mu^{4\\varepsilon} S_\\varepsilon^{-2}\n \\int \\frac{d^Dk_1}{\\left(2\\pi\\right)^D}\n \\int \\frac{d^Dk_2}{\\left(2\\pi\\right)^D}\n \\frac{1}{D_1 D_2^2 D_3 D_4 D_5 D_6},\n\\end{eqnarray}\nwhere we ignored a prefactor $\\mu^{2\\varepsilon} S_\\varepsilon^{-1}$, accompanying also the Born amplitude.\nWe see that $D_2$ is raised to the power two.\nWithin the loop-tree duality method we take residues in the energy integrations $E_1$ and $E_2$.\nThe residues are classified by the set of spanning trees for our diagram.\nWe may denote a spanning tree by the propagators we remove to get a tree diagram.\nThe set of spanning trees for our two-loop diagram is given by\n\\begin{eqnarray}\n \\left\\{\n \\;\n \\left(1,2\\right),\n \\;\n \\left(1,3\\right),\n \\;\n \\left(1,4\\right),\n \\;\n \\left(1,5\\right),\n \\;\n \\left(1,6\\right),\n \\;\n \\left(2,6\\right),\n \\;\n \\left(3,6\\right),\n \\;\n \\left(4,6\\right),\n \\;\n \\left(5,6\\right)\n \\;\n \\right\\}.\n\\end{eqnarray}\nEach spanning tree defines also a cut graph.\nFor a cut graph, we don't remove internal edges but cut them into half-edges.\nThe half-edges become additional external lines of the cut graph.\nIn fig.~\\ref{fig_3} we show a few examples of cut graphs obtained from spanning trees. 
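The quoted set of nine spanning trees can be cross-checked by brute force. The sketch below assumes a labelling of the internal topology that is our own reconstruction (not taken from the paper): edges 1 and 6 join the two vertices `a` and `b` directly, while the chain of edges 2-3-4-5 connects them through intermediate vertices `v1`, `v2`, `v3`.

```python
from itertools import combinations

# Reconstructed internal topology of the two-loop diagram (our labelling,
# chosen so that the quoted spanning-tree set is reproduced): edges 1 and 6
# connect the vertices a and b directly, while the chain 2-3-4-5 connects
# them through the intermediate vertices v1, v2, v3.
edges = {1: ("a", "b"), 2: ("a", "v1"), 3: ("v1", "v2"),
         4: ("v2", "v3"), 5: ("v3", "b"), 6: ("a", "b")}
vertices = {v for ends in edges.values() for v in ends}

def connected(kept):
    """Graph search: is the subgraph with edge labels `kept` connected?"""
    reached, frontier = {"a"}, ["a"]
    while frontier:
        v = frontier.pop()
        for e in kept:
            for u, w in (edges[e], edges[e][::-1]):
                if u == v and w not in reached:
                    reached.add(w)
                    frontier.append(w)
    return reached == vertices

# Removing 2 of the 6 edges leaves 4 edges on 5 vertices, which form a
# spanning tree if and only if the remaining subgraph is connected.
spanning_trees = [pair for pair in combinations(sorted(edges), 2)
                  if connected(set(edges) - set(pair))]
print(spanning_trees)
# [(1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (2, 6), (3, 6), (4, 6), (5, 6)]
```

The enumeration also shows why pairs such as $(2,3)$ are absent: removing two chain edges isolates an intermediate vertex, so no spanning tree remains.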
\nThe cuts $(1,2)$ and $(2,6)$ are problematic.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1.0]{fig_cut_13}\n\\hspace*{20mm}\n\\includegraphics[scale=1.0]{fig_cut_12}\n\\hspace*{20mm}\n\\includegraphics[scale=1.0]{fig_cut_16}\n\\end{center}\n\\caption{\nVarious cuts of the two-loop diagram, which correspond to spanning trees.\nThe cut $(1,3)$ (left diagram) is unproblematic.\nThe cut $(1,2)$ (middle diagram) requires the residue of a doubled propagator.\nThe right diagram shows the cut $(1,6)$.\n}\n\\label{fig_3}\n\\end{figure}\nAs $D_2$ occurs quadratically, taking the residue for $D_2=0$ forces us to compute a derivative.\n\nIn this paper we would like to point out that the left diagram of fig.~\\ref{fig_1} always comes \nin combination with a counterterm, shown in the right picture of fig.~\\ref{fig_1}.\nThe contribution from the counterterm is \n\\begin{eqnarray}\n I_{\\mathrm{twoloop},\\mathrm{CT}}\n & = &\n -\n \\frac{\\lambda^6}{\\left(4\\pi\\right)^2}\n \\mu^{2\\varepsilon} S_\\varepsilon^{-1}\n \\int \\frac{d^Dk_2}{\\left(2\\pi\\right)^D}\n \\frac{\\left[Z_\\phi^{(1)} k_2^2 - \\left(Z_\\phi^{(1)} + 2 Z_m^{(1)}\\right) m^2 \\right]}{D_2^2 D_3 D_4 D_5}.\n\\end{eqnarray}\nLet us write\n\\begin{eqnarray}\n I_{\\mathrm{twoloop}}\n & = &\n i \\lambda^6\n \\mu^{4\\varepsilon} S_\\varepsilon^{-2}\n \\int \\frac{d^Dk_1}{\\left(2\\pi\\right)^D}\n \\int \\frac{d^Dk_2}{\\left(2\\pi\\right)^D}\n R_{\\mathrm{twoloop}}\\left(k_1,k_2\\right),\n \\nonumber \\\\\n R_{\\mathrm{twoloop}}\\left(k_1,k_2\\right)\n & = &\n \\frac{1}{2 D_1 D_2^2 D_3 D_4 D_5 D_6}.\n\\end{eqnarray}\n$R_{\\mathrm{twoloop}}(k_1,k_2)$ is a rational function in $k_1$ and $k_2$.\nWithin the numerical method one writes \n$I_{\\mathrm{twoloop},\\mathrm{CT}}$ also as a two-loop integral:\n\\begin{eqnarray}\n\\label{twoloop_CT}\n I_{\\mathrm{twoloop},\\mathrm{CT}}\n & = &\n i \\lambda^6\n \\mu^{4\\varepsilon} S_\\varepsilon^{-2}\n \\int \\frac{d^Dk_1}{\\left(2\\pi\\right)^D}\n \\int 
\\frac{d^Dk_2}{\\left(2\\pi\\right)^D}\n R_{\\mathrm{twoloop},\\mathrm{CT}}\\left(k_1,k_2\\right).\n\\end{eqnarray}\nWe may now ask the question if there exists a function $R_{\\mathrm{twoloop},\\mathrm{CT}}(k_1,k_2)$, rational\nin the energies $E_1$ and $E_2$, such that\n\\begin{enumerate}\n\\item $R_{\\mathrm{twoloop},\\mathrm{CT}}(k_1,k_2)$ satisfies eq.~(\\ref{twoloop_CT}),\n\\item the sum of $R_{\\mathrm{twoloop}}$ and $R_{\\mathrm{twoloop},\\mathrm{CT}}$ falls off \nfor $|k_1|\\rightarrow \\infty$ as $|k_1|^{-5}$, i.e.\n\\begin{eqnarray}\n \\lim\\limits_{|k_1|\\rightarrow \\infty} \\left( R_{\\mathrm{twoloop}}\\left(k_1,k_2\\right) + R_{\\mathrm{twoloop},\\mathrm{CT}}\\left(k_1,k_2\\right) \\right)\n & = & {\\mathcal O}\\left(|k_1|^{-5}\\right),\n\\end{eqnarray}\n\\item the sum of $R_{\\mathrm{twoloop}}$ and $R_{\\mathrm{twoloop},\\mathrm{CT}}$\nvanishes quadratically as $k_2$ goes on-shell, i.e.\n\\begin{eqnarray}\n \\lim\\limits_{k_2^2 \\rightarrow m^2} \\left( R_{\\mathrm{twoloop}}\\left(k_1,k_2\\right) + R_{\\mathrm{twoloop},\\mathrm{CT}}\\left(k_1,k_2\\right) \\right)\n & = & {\\mathcal O}\\left( \\left(E_2-E_2^\\flat\\right)^2 \\right),\n\\end{eqnarray}\n\\item $R_{\\mathrm{twoloop},\\mathrm{CT}}(k_1,k_2)$ is independent of the energy $E_2$.\n\\end{enumerate}\nThe first two requirements are just the statement that $R_{\\mathrm{twoloop},\\mathrm{CT}}$ is \na local counterterm at the integrand level \nfor the ultraviolet sub-divergence given by the self-energy sub-graph.\nRequirement 3 is the new condition which we would like to enforce and ensures that the residue\nfrom $D_2 \\rightarrow 0$ will vanish.\nCondition 4 is an additional technical requirement and ensures that \n$I_{\\mathrm{twoloop},\\mathrm{CT}}$ does not receive contributions from the cut $(1,6)$.\nThis cut is shown\nin the right diagram of fig.~\\ref{fig_3}.\n\nLet us point out that all conditions laid out above\nrefer only to the self-energy sub-diagram, not to the full diagram.\nThe 
conditions are therefore universal and process-independent.\n\nLet us now look at the self-energy.\nIt is convenient to adopt a slightly different notation for the momenta, shown in fig.~\\ref{fig_4}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1.0]{fig_selfenergy}\n\\end{center}\n\\caption{\nThe labelling of the momenta for the one-loop self-energy.\n}\n\\label{fig_4}\n\\end{figure}\nFor the (bare) one-loop self-energy\nwe have\n\\begin{eqnarray}\n -i \\Sigma_{\\mathrm{oneloop}}\n & = &\n \\lambda^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1}\n \\int \\frac{d^Dk}{\\left(2\\pi\\right)^D}\n R_{\\mathrm{oneloop}},\n \\;\\;\\;\\;\\;\\;\n R_{\\mathrm{oneloop}}\n \\; = \\;\n \\frac{1}{2 D_1 D_2},\n \\nonumber \\\\\n & &\n D_1\n \\; = \\; \\left(k+\\frac{1}{2}p\\right)^2 - m^2,\n \\;\\;\\;\\;\\;\\;\n D_2\n \\; = \\; \\left(k-\\frac{1}{2}p\\right)^2 - m^2.\n\\end{eqnarray}\nGiven $p=(E,\\vec{p})$ we define $p^\\flat$ by\n\\begin{eqnarray}\n\\label{def_p_flat}\n p^\\flat & = & \\left( \\mathrm{sign}\\left(E\\right) \\sqrt{\\vec{p}^2+m^2} ,\\vec{p}\\right).\n\\end{eqnarray}\nThe momentum $p^\\flat$ is on-shell\n\\begin{eqnarray}\n \\left(p^\\flat\\right)^2 & = & m^2,\n\\end{eqnarray}\nand does not depend on $E$ (apart from the sign).\nWith $n=(1,\\vec{0})$ we may write $p^\\flat$ equivalently as\n\\begin{eqnarray}\n p^\\flat\n & = &\n p - c n,\n \\;\\;\\;\\;\\;\\;\n c \\; = \\;\n \\frac{1}{2n^2} \\left( 2 p \\cdot n - \\mathrm{sign}\\left(2 p \\cdot n\\right) \\sqrt{ \\left(2 p \\cdot n\\right)^2 - 4 n^2 \\left(p^2-m^2\\right)} \\right).\n\\end{eqnarray}\nFor the counterterm we write\n\\begin{eqnarray}\n -i \\Sigma_{\\mathrm{oneloop},\\mathrm{CT}}\n & = &\n \\lambda^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1}\n \\int \\frac{d^Dk}{\\left(2\\pi\\right)^D}\n R_{\\mathrm{oneloop},\\mathrm{CT}}.\n\\end{eqnarray}\nWe require that the only poles of $R_{\\mathrm{oneloop},\\mathrm{CT}}$ originate from\n\\begin{eqnarray}\n D_1^\\flat \n \\; = \\;\n \\left(k + 
\\frac{1}{2}p^\\flat\\right)^2 - m^2,\n \\;\\;\\;\\;\\;\\;\n D_2^\\flat \n \\; = \\;\n \\left(k - \\frac{1}{2}p^\\flat\\right)^2 - m^2.\n\\end{eqnarray}\n$D_1^\\flat$ and $D_2^\\flat$ are the images of $D_1$ and $D_2$ under the map $p \\rightarrow p^\\flat$.\nA possible choice for $R_{\\mathrm{oneloop},\\mathrm{CT}}$ is given by\n\\begin{eqnarray}\n\\label{counterterm_phi_cubed}\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n & = &\n - \\frac{1}{2 D_1^\\flat D_2^\\flat}\n \\left[ 1 - \\frac{4 k \\cdot \\left(p-p^\\flat\\right) + p^2-m^2}{4 D_1^\\flat} + \\frac{4 k \\cdot \\left(p-p^\\flat\\right) - p^2+m^2}{4 D_2^\\flat} \\right]\n \\nonumber \\\\\n & &\n + \\frac{\\left(p-p^\\flat\\right)^2}{8 m^2} \n \\left( \\frac{2}{D_1^\\flat D_2^\\flat} - \\frac{1}{\\left(D_1^\\flat\\right)^2} - \\frac{1}{\\left(D_2^\\flat\\right)^2} \\right).\n\\end{eqnarray}\nThe first line is the expansion of $R_{\\mathrm{oneloop}}$ around the on-shell kinematics, such that\nthe difference between the first line and $R_{\\mathrm{oneloop}}$ is of order ${\\mathcal O}((p^2-m^2)^2)$.\nThe first line gives also a local UV-counterterm, such that\nthe difference between the first line and $R_{\\mathrm{oneloop}}$ is of order ${\\mathcal O}(|k|^{-5})$ or better.\nThus, we see that the first line satisfies conditions~(2) and (3).\nCondition~(4) is trivially satisfied due to our definition of $p^\\flat$ in eq.~(\\ref{def_p_flat}).\nIt remains to satisfy condition~(1).\nThis is the job of the term in the second line.\nThis term ensures that\n\\begin{eqnarray}\n \\lambda^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1}\n \\int \\frac{d^Dk}{\\left(2\\pi\\right)^D}\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n & = &\n \\frac{\\lambda^2}{\\left(4\\pi\\right)^2}\n \\;\n i \\left[ Z_\\phi^{(1)} p^2 - \\left( Z_\\phi^{(1)} + 2 Z_m^{(1)} \\right) m^2 \\right],\n\\end{eqnarray}\nwhere $Z_\\phi^{(1)}$ and $Z_m^{(1)}$ have been defined in eq.~(\\ref{renormalisation_constants_phi_cubed}).\nAt the same time, the term on the second 
line does not spoil the on-shell limit nor the UV-limit.\nTo see this, we note that\n\\begin{eqnarray}\n \\left(p-p^\\flat\\right)^2\n\\end{eqnarray}\nvanishes quadratically in the on-shell limit.\nSecondly, the combination\n\\begin{eqnarray}\n \\frac{2}{D_1^\\flat D_2^\\flat} - \\frac{1}{\\left(D_1^\\flat\\right)^2} - \\frac{1}{\\left(D_2^\\flat\\right)^2}\n\\end{eqnarray}\nfalls off as $\\mathcal{O}(|k|^{-6})$ in the UV-limit.\n\nThe counterterm $R_{\\mathrm{oneloop},\\mathrm{CT}}$ is a rational function in the energy variable $E_k$.\nAn inspection of eq.~(\\ref{counterterm_phi_cubed}) shows that $R_{\\mathrm{oneloop},\\mathrm{CT}}$ has double poles in the variable $E_k$.\nThis is however unproblematic, as it occurs in a universal building block. The residues can be calculated once and for all.\nAs an example we consider the residue at\n\\begin{eqnarray}\n E_{k,D_1^\\flat} & = & - \\frac{1}{2} E_{p^\\flat} + \\sqrt{\\left(\\vec{k}+\\frac{1}{2}\\vec{p}\\right)^2+m^2}.\n\\end{eqnarray}\nAs an abbreviation we set\n\\begin{eqnarray}\n E_1 & = & \\sqrt{\\left(\\vec{k}+\\frac{1}{2}\\vec{p}\\right)^2+m^2}.\n\\end{eqnarray}\nWe find\n\\begin{eqnarray}\n \\mathrm{res}\\left(R_{\\mathrm{oneloop},\\mathrm{CT}}, E_k = E_{k,D_1^\\flat}\\right)\n & = &\n- \\frac{1}{4 E_1 D_2^\\flat}\n + \\frac{\\left(E_p-E_{p^\\flat}\\right)^2}{8 E_1 m^2 D_2^\\flat}\n + \\frac{\\left(E_p-E_{p^\\flat}\\right)^2}{32 E_1^3 m^2}\n - \\frac{\\left(E_p-E_{p^\\flat}\\right)^2}{32 E_1^3 D_2^\\flat}\n \\nonumber \\\\\n & &\n - \\frac{\\left(E_1-E_{p^\\flat}\\right)\\left(E_p-E_{p^\\flat}\\right)}{2 E_1 \\left(D_2^\\flat\\right)^2}\n + \\frac{E_{p^\\flat} \\left(E_p-E_{p^\\flat}\\right)^2}{16 E_1^2 \\left(D_2^\\flat\\right)^2},\n\\end{eqnarray}\nwhere $D_2^\\flat$ is understood to be evaluated at $k=(E_{k,D_1^\\flat},\\vec{k})$.\n\nThe rational function $R_{\\mathrm{oneloop}}$ has a corresponding residue at\n\\begin{eqnarray}\n E_{k,D_1} & = & - \\frac{1}{2} E_{p} + 
\\sqrt{\\left(\\vec{k}+\\frac{1}{2}\\vec{p}\\right)^2+m^2}.\n\\end{eqnarray}\n$R_{\\mathrm{oneloop}}$ has only single poles and the residue is given by\n\\begin{eqnarray}\n \\mathrm{res}\\left(R_{\\mathrm{oneloop}}, E_k = E_{k,D_1}\\right)\n & = &\n \\frac{1}{4 E_1 D_2},\n\\end{eqnarray}\nwhere $D_2$ is understood to be evaluated at $k=(E_{k,D_1},\\vec{k})$.\nFor the sum of the two residues we have\n\\begin{eqnarray}\n\\lefteqn{\n \\mathrm{res}\\left(R_{\\mathrm{oneloop}}, E_k = E_{k,D_1}\\right)\n +\n \\mathrm{res}\\left(R_{\\mathrm{oneloop},\\mathrm{CT}}, E_k = E_{k,D_1^\\flat}\\right)\n = } & &\n \\nonumber \\\\\n & &\n \\frac{1}{4 E_1}\n \\left[\n \\frac{1}{D_2}\n - \\frac{1}{D_2^\\flat}\n - \\frac{2 \\left(E_1-E_{p^\\flat}\\right)\\left(E_p-E_{p^\\flat}\\right)}{\\left(D_2^\\flat\\right)^2}\n \\right]\n \\nonumber \\\\\n & &\n + \\frac{\\left(E_p-E_{p^\\flat}\\right)^2}{8 E_1 m^2 D_2^\\flat}\n + \\frac{\\left(E_p-E_{p^\\flat}\\right)^2}{32 E_1^3 m^2}\n - \\frac{\\left(E_p-E_{p^\\flat}\\right)^2}{32 E_1^3 D_2^\\flat}\n + \\frac{E_{p^\\flat} \\left(E_p-E_{p^\\flat}\\right)^2}{16 E_1^2 \\left(D_2^\\flat\\right)^2}.\n\\end{eqnarray}\nWe note that the term in the square bracket vanishes also quadratically in the on-shell limit.\nIn technical terms we have for $D_2$ evaluated at $k=(E_{k,D_1},\\vec{k})$ and for $D_2^\\flat$ evaluated at $k=(E_{k,D_1^\\flat},\\vec{k})$:\n\\begin{eqnarray}\n \\frac{1}{D_2}\n - \\frac{1}{D_2^\\flat}\n - \\frac{2 \\left(E_1-E_{p^\\flat}\\right)\\left(E_p-E_{p^\\flat}\\right)}{\\left(D_2^\\flat\\right)^2}\n & = &\n {\\mathcal O}\\left( \\left(E_p-E_{p^\\flat}\\right)^2 \\right).\n\\end{eqnarray}\nLet us now go back to fig.~\\ref{fig_1}.\nWe combine the two-loop diagram (left diagram in fig.~\\ref{fig_1})\nwith the one-loop diagram with a counterterm insertion (right diagram in fig.~\\ref{fig_1}).\nFor the latter we derived a two-loop integral representation.\nWe may evaluate the sum of the two-loop integrals by taking residues in the two 
energy integrations.\nOur construction ensures that there is no residue from the cut $(1,2)$ (middle diagram of fig.~\\ref{fig_3}).\nThere are, of course, residues from an unproblematic cut like $(1,3)$ (left diagram of fig.~\\ref{fig_3}).\nFinally, let us note that the residue for the cut $(1,6)$ (right diagram of fig.~\\ref{fig_3})\nreceives only a contribution from the genuine two-loop diagram, but not from the diagram with the counterterm insertion.\nBy construction, the integral representation of the counterterm is independent of the energy flowing through\nthe outer loop; therefore, there is no residue in this energy variable.\n\n\\section{QCD}\n\\label{sect:QCD}\n\nLet us now consider QCD with $N_f$ massless quarks and $N_Q$ massive quarks.\nIt is sufficient to discuss the case where all massive quarks have the same mass $m$.\nWe denote the renormalisation constant for the gluon field by $Z_3$, \nthe one for a massless quark field by $Z_2$ and the one for a massive quark field by $Z_{2,Q}$.\nThe renormalisation constant for the heavy quark mass $m$ is denoted by $Z_m$. \nFor the renormalisation constants we write\n\\begin{eqnarray}\n Z_a\n & = &\n 1 + \\sum_{n=1}^\\infty Z_a^{(n)} \\left( \\frac{\\alpha_s}{4\\pi} \\right)^n.\n\\end{eqnarray}\nWe will need the one-loop renormalisation constants. 
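All QCD self-energies below reuse the on-shell projection $p \rightarrow p^\flat$ of eq.~(\ref{def_p_flat}). As a quick numerical sanity check of that projection (a sketch with arbitrary example numbers, metric signature $(+,-,-,-)$), one can verify that $(p^\flat)^2 = m^2$, that $p^\flat$ depends on $E$ only through its sign, and that $p^\flat = p - c\,n$ with $n=(1,\vec{0})$ and the quoted expression for $c$:

```python
import math

def mink(p, q):
    """Minkowski product with signature (+, -, -, -)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def p_flat(p, m):
    """On-shell projection of eq. (def_p_flat): keep the spatial momentum
    and replace the energy by sign(E) * sqrt(|p_vec|^2 + m^2)."""
    E, px, py, pz = p
    return (math.copysign(math.sqrt(px*px + py*py + pz*pz + m*m), E), px, py, pz)

m = 1.3
p = (2.7, 0.4, -1.1, 0.6)          # arbitrary off-shell example momentum
pf = p_flat(p, m)

# p_flat is on-shell and depends on E only through its sign:
assert abs(mink(pf, pf) - m*m) < 1e-12
assert p_flat((5.9, 0.4, -1.1, 0.6), m) == pf

# Equivalent representation p_flat = p - c*n with n = (1, 0, 0, 0):
n = (1.0, 0.0, 0.0, 0.0)
s = 2.0 * mink(p, n)
c = (s - math.copysign(math.sqrt(s*s - 4.0*mink(n, n)*(mink(p, p) - m*m)),
                       s)) / (2.0 * mink(n, n))
assert all(abs(pf[i] - (p[i] - c*n[i])) < 1e-12 for i in range(4))
```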
For $Z_3^{(1)}$ we write\n\\begin{eqnarray}\n Z_3^{(1)}\n & = &\n Z_{3,l}^{(1)}\n + \n Z_{3,Q}^{(1)},\n\\end{eqnarray}\nseparating the contributions from the massless particles in the loop ($Z_{3,l}^{(1)}$) \nfrom the contribution of the massive quark in the loop ($Z_{3,Q}^{(1)}$).\nIn the on-shell scheme we have\n\\begin{eqnarray}\n Z_2^{(1)}\n & = &\n 0,\n \\nonumber \\\\\n Z_{2,Q}^{(1)}\n & = &\n - \\left(3-2\\varepsilon\\right) C_F B_0\\left(m^2,m^2,0\\right),\n \\nonumber \\\\\n Z_m^{(1)}\n & = &\n - \\left(3-2\\varepsilon\\right) C_F B_0\\left(m^2,m^2,0\\right),\n \\nonumber \\\\\n Z_{3,l}^{(1)}\n & = &\n 0,\n \\nonumber \\\\\n Z_{3,Q}^{(1)}\n & = &\n - \\frac{4}{3} T_R N_Q B_0\\left(0,m^2,m^2\\right).\n\\end{eqnarray}\nThe self-energies are diagonal in colour space. \nWe suppress the corresponding Kronecker deltas.\n\n\\subsection{Light quarks}\n\nIn this paragraph we set\n\\begin{eqnarray}\n D_1^\\flat \n \\; = \\;\n \\left(k + \\frac{1}{2}p^\\flat\\right)^2,\n \\;\\;\\;\\;\\;\\;\n D_2^\\flat \n \\; = \\;\n \\left(k - \\frac{1}{2}p^\\flat\\right)^2.\n\\end{eqnarray}\nThe self-energy for a massless quark is given by\n\\begin{eqnarray}\n - i \\Sigma_{\\mathrm{oneloop}}\n & = & \n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1} \n \\int \\frac{d^Dk}{(2\\pi)^D} \n R_{\\mathrm{oneloop}},\n \\;\\;\\;\\;\\;\\;\n R_{\\mathrm{oneloop}}\n \\; = \\;\n C_F\n \\frac{2 \\left(1-\\varepsilon\\right) \\left( \\slashed{k} + \\frac{1}{2} \\slashed{p} \\right)}{D_1 D_2}.\n\\end{eqnarray}\nFor the counterterm we write\n\\begin{eqnarray}\n - i \\Sigma_{\\mathrm{oneloop},\\mathrm{CT}}\n & = & \n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1} \n \\int \\frac{d^Dk}{(2\\pi)^D} \n R_{\\mathrm{oneloop},\\mathrm{CT}}.\n\\end{eqnarray}\nA possible choice for $R_{\\mathrm{oneloop},\\mathrm{CT}}$ is given by\n\\begin{eqnarray}\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n & = &\n - C_F \\frac{2 \\left(1-\\varepsilon\\right) \\left( \\slashed{k} + \\frac{1}{2} \\slashed{p} \\right)}{D_1^\\flat 
D_2^\\flat}\n \\left[ 1 - \\frac{4 k \\cdot \\left(p-p^\\flat\\right) + p^2}{4 D_1^\\flat} + \\frac{4 k \\cdot \\left(p-p^\\flat\\right) - p^2}{4 D_2^\\flat} \\right].\n \\;\\;\\;\n\\end{eqnarray}\nIntegration is in this case particularly simple. All integrals are scaleless integrals, which vanish in dimensional regularisation. Therefore\n\\begin{eqnarray}\n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1}\n \\int \\frac{d^Dk}{\\left(2\\pi\\right)^D}\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n & = &\n \\frac{\\alpha_s}{4\\pi} \n \\;\n i Z_2^{(1)} \\slashed{p}\n \\; = \\; 0.\n\\end{eqnarray}\n\n\\subsection{Heavy quarks}\n\nIn this paragraph we set\n\\begin{eqnarray}\n D_1^\\flat \n \\; = \\;\n \\left(k + \\frac{1}{2}p^\\flat\\right)^2 - m^2,\n \\;\\;\\;\\;\\;\\;\n D_2^\\flat \n \\; = \\;\n \\left(k - \\frac{1}{2}p^\\flat\\right)^2.\n\\end{eqnarray}\nThe self-energy for a massive quark is given by\n\\begin{eqnarray}\n - i \\Sigma_{\\mathrm{oneloop}}\n & = & \n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1} \n \\int \\frac{d^Dk}{(2\\pi)^D} \n R_{\\mathrm{oneloop}},\n \\nonumber \\\\\n R_{\\mathrm{oneloop}}\n & = &\n C_F\n \\frac{2 \\left(1-\\varepsilon\\right) \\left( \\slashed{k} + \\frac{1}{2} \\slashed{p} \\right) - 4 \\left(1-\\frac{1}{2} \\varepsilon \\right) m}{D_1 D_2}.\n\\end{eqnarray}\nFor the counterterm we write\n\\begin{eqnarray}\n - i \\Sigma_{\\mathrm{oneloop},\\mathrm{CT}}\n & = & \n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1} \n \\int \\frac{d^Dk}{(2\\pi)^D} \n R_{\\mathrm{oneloop},\\mathrm{CT}}.\n\\end{eqnarray}\nA possible choice for $R_{\\mathrm{oneloop},\\mathrm{CT}}$ is given by\n\\begin{eqnarray}\n\\label{counterterm_heavy_quark}\n\\lefteqn{\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n = \n C_F \\left\\{\n - \\frac{2 \\left(1-\\varepsilon\\right) \\left( \\slashed{k} + \\frac{1}{2} \\slashed{p}^\\flat \\right) - 4 \\left(1-\\frac{1}{2} \\varepsilon \\right) m}{D_1^\\flat D_2^\\flat}\n \\left[ 1 - \\frac{4 k \\cdot \\left(p-p^\\flat\\right) + p^2 - m^2}{4 
D_1^\\flat} \n \\right. \\right.\n } & & \\nonumber \\\\\n & &\n \\left.\n + \\frac{4 k \\cdot \\left(p-p^\\flat\\right) - p^2 + m^2}{4 D_2^\\flat} \\right]\n - \\frac{\\left(1-\\varepsilon\\right) \\left( \\slashed{p} - \\slashed{p}^\\flat\\right)}{D_1^\\flat D_2^\\flat}\n \\nonumber \\\\\n & &\n - \\frac{1}{4} \\left( \\slashed{p}^\\flat - m \\right) \\left(p^2-m^2\\right) \\frac{D_1^\\flat-D_2^\\flat+4m^2}{\\left(D_1^\\flat\\right)^2 \\left(D_2^\\flat\\right)^2}\n + \\frac{m \\left(p-p^\\flat\\right)^2}{4 m^2} \n \\frac{\\left(D_1^\\flat-D_2^\\flat\\right)\\left(D_1^\\flat-D_2^\\flat+2m^2\\right)}{\\left(D_1^\\flat\\right)^2 \\left(D_2^\\flat\\right)^2}\n \\nonumber \\\\\n & &\n + \\frac{\\left( \\slashed{p}^\\flat - m \\right) \\left[ p^\\flat \\cdot \\left(p-p^\\flat\\right) \\right]}{m^2}\n \\frac{\\left(D_1^\\flat-D_2^\\flat\\right)\\left(D_1^\\flat-D_2^\\flat+\\frac{3}{2}m^2\\right)}\n {\\left(D_1^\\flat\\right)^2 \\left(D_2^\\flat\\right)^2}\n - \\frac{\\varepsilon m \\left(p-p^\\flat\\right)^2}{2 \\left(D_1^\\flat\\right)^2 D_2^\\flat}\n \\nonumber \\\\\n & &\n \\left.\n + \\frac{\\left[ 2 \\left( \\slashed{p} - m \\right) m^2 - m \\left(p^2-m^2\\right) \\right]}{2m^2} \n \\frac{\\left(D_1^\\flat-D_2^\\flat\\right)\\left(2 D_1^\\flat+D_2^\\flat\\right)}{\\left(D_1^\\flat\\right)^2 \\left(D_2^\\flat\\right)^2}\n \\right\\}.\n\\end{eqnarray}\nThe terms in the first two lines approximate $R_{\\mathrm{oneloop}}$ in the on-shell and in the ultraviolet\nlimit.\nThe terms in the third to fifth line ensure that the integration of $R_{\\mathrm{oneloop},\\mathrm{CT}}$\ngives the desired result.\nWe have\n\\begin{eqnarray}\n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1}\n \\int \\frac{d^Dk}{\\left(2\\pi\\right)^D}\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n & = &\n \\frac{\\alpha_s}{4\\pi} \n \\;\n i \\left[ Z_2^{(1)} \\slashed{p} - \\left( Z_2^{(1)} + Z_m^{(1)} \\right) m \\right].\n\\end{eqnarray}\nThe terms in the third to fifth line vanish\nin the on-shell and in the 
ultraviolet\nlimit.\nFor example, the last term in eq.~(\\ref{counterterm_heavy_quark})\nfalls off like ${\\mathcal O}(|k|^{-5})$ in the UV-limit.\nFor the on-shell limit we note that\n\\begin{eqnarray}\n 2 \\left( \\slashed{p} - m \\right) m^2 - m \\left(p^2-m^2\\right)\n & = &\n - m \\left( \\slashed{p} - m \\right) \\left( \\slashed{p} - m \\right).\n\\end{eqnarray}\n\n\\subsubsection{The $\\overline{\\mathrm{MS}}$-scheme}\n\nFor the mass of a heavy quark, the $\\overline{\\mathrm{MS}}$-scheme and the on-shell scheme are two popular\nrenormalisation schemes.\nIn this paragraph, we comment on the $\\overline{\\mathrm{MS}}$-scheme.\nIn the previous section we constructed an integral representation\n$R_{\\mathrm{oneloop},\\mathrm{CT}}$ in the on-shell scheme with the property\nthat\n\\begin{eqnarray}\n \\lim\\limits_{k^2\\rightarrow m^2} \\left(R_{\\mathrm{oneloop}}+R_{\\mathrm{oneloop},\\mathrm{CT}}\\right)\n & = &\n {\\mathcal O}\\left( \\left(E-E^\\flat\\right)^2 \\right).\n\\end{eqnarray}\nThis is not possible in the $\\overline{\\mathrm{MS}}$-scheme.\nTo see this, let us perform a finite renormalisation from the on-shell mass to the $\\overline{\\mathrm{MS}}$-mass.\nThis amounts to adding the term\n\\begin{eqnarray}\n\\lefteqn{\n \\frac{\\alpha_s}{4\\pi} \n \\;\n i \\left\\{ \n \\left[ Z_2^{(1)} \\slashed{p} - \\left( Z_2^{(1)} + Z_{m,\\overline{\\mathrm{MS}}}^{(1)} \\right) m \\right]\n -\n \\left[ Z_2^{(1)} \\slashed{p} - \\left( Z_2^{(1)} + Z_m^{(1)} \\right) m \\right]\n \\right\\}\n = }\n \\nonumber \\\\\n & = & \n \\frac{\\alpha_s}{4\\pi} \n \\;\n i \\left(\n Z_m^{(1)} - Z_{m,\\overline{\\mathrm{MS}}}^{(1)}\n \\right) m\n \\; = \\;\n \\frac{\\alpha_s}{4\\pi} \n \\;\n i C_F \\left( -4 + 3 \\ln\\frac{m^2}{\\mu^2} \\right) m\n + {\\mathcal O}\\left(\\varepsilon\\right),\n\\end{eqnarray}\nwhere we used\n\\begin{eqnarray}\n Z_{m,\\overline{\\mathrm{MS}}}^{(1)}\n & = &\n - \\frac{3 C_F}{\\varepsilon}.\n\\end{eqnarray}\nThe term from the finite renormalisation 
is a non-zero constant in the on-shell limit and hence does not vanish\nquadratically.\nConsequently, there cannot be an integral representation that vanishes quadratically in the on-shell limit.\n\n\n\\subsection{Gluons}\n\nWe now consider the gluon self-energy.\nLet us first briefly discuss what happens in an analytic calculation.\nWe denote by $-i\\Pi^{\\mu\\nu}_{\\mathrm{oneloop}}$ the one-loop contribution to the gluon self-energy \nand by $-i\\Pi^{\\mu\\nu}_{\\mathrm{oneloop},\\mathrm{CT}}$ the contribution from the counterterm.\nThe self-energy is transverse and we may write\n\\begin{eqnarray}\n -i \\Pi^{\\mu\\nu}_{\\mathrm{oneloop}}\n & = &\n i \\left( p^2 g^{\\mu\\nu} - p^\\mu p^\\nu \\right) \\Pi_{\\mathrm{oneloop}}\\left(p^2\\right),\n\\end{eqnarray}\nwith a scalar function $\\Pi_{\\mathrm{oneloop}}(p^2)$.\nWe expand $\\Pi_{\\mathrm{oneloop}}(p^2)$ around $p^2=0$:\n\\begin{eqnarray}\n \\Pi_{\\mathrm{oneloop}}\\left(p^2\\right)\n & = &\n \\Pi_{\\mathrm{oneloop}}\\left(0\\right)\n + {\\mathcal O}\\left(p^2\\right).\n\\end{eqnarray}\nThis defines $Z_3^{(1)}$:\n\\begin{eqnarray}\n \\frac{\\alpha_s}{4\\pi} \\; Z_3^{(1)} & = & \\Pi_{\\mathrm{oneloop}}\\left(0\\right).\n\\end{eqnarray}\nThus\n\\begin{eqnarray}\n -i \\left( \\Pi^{\\mu\\nu}_{\\mathrm{oneloop}} + \\Pi^{\\mu\\nu}_{\\mathrm{oneloop},\\mathrm{CT}} \\right)\n & = &\n i \\left( p^2 g^{\\mu\\nu} - p^\\mu p^\\nu \\right) \\cdot {\\mathcal O}\\left(p^2\\right).\n\\end{eqnarray}\nWe see that the term proportional to $g^{\\mu\\nu}$ has a factor $(p^2)^2$ and will cancel a double pole from the propagators.\nOn the other hand, the term proportional to $p^\\mu p^\\nu$ comes only with a single factor $p^2$, leaving a residue from a single pole.\nHowever, this term is proportional to $p^\\mu p^\\nu$. 
We may neglect the contribution from this residue if we contract\nthis term into quantities, which vanish when contracted with an on-shell momentum $p^\\mu$ or $p^\\nu$.\n\nFor the gluon self-energy we distinguish the case of massless particles in the loop and the case of a massive quark loop.\n\n\\subsubsection{Contributions from massless particles}\n\nIn this paragraph we set\n\\begin{eqnarray}\n D_1^\\flat \n \\; = \\;\n \\left(k + \\frac{1}{2}p^\\flat\\right)^2,\n \\;\\;\\;\\;\\;\\;\n D_2^\\flat \n \\; = \\;\n \\left(k - \\frac{1}{2}p^\\flat\\right)^2.\n\\end{eqnarray}\nThe contribution to the gluon self-energy from massless particles is given by\n\\begin{eqnarray}\n - i \\Pi^{\\mu\\nu}_{\\mathrm{oneloop}}\n & = & \n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1} \n \\int \\frac{d^Dk}{(2\\pi)^D} \n R_{\\mathrm{oneloop}},\n \\nonumber \\\\\n R_{\\mathrm{oneloop}}\n & = &\n \\left\\{ - 2 C_A \\left[ -p^2 g^{\\mu\\nu} + p^\\mu p^\\nu - 2 \\left( 1 - \\varepsilon \\right) k^\\mu k^\\nu \n + \\frac{1}{2} \\left( 1 - \\varepsilon \\right) g^{\\mu\\nu} \\left( D_1 + D_2 \\right) \\right]\n \\right. \\nonumber \\\\\n & & \\left.\n - 2 T_R N_f \\left[ p^2 g^{\\mu\\nu} - p^\\mu p^\\nu + 4 k^\\mu k^\\nu \n - g^{\\mu\\nu} \\left( D_1 + D_2 \\right) \\right] \\right\\}\n \\frac{1}{D_1 D_2}.\n\\end{eqnarray}\nFor the counterterm we write\n\\begin{eqnarray}\n - i \\Pi^{\\mu\\nu}_{\\mathrm{oneloop},\\mathrm{CT}}\n & = & \n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1} \n \\int \\frac{d^Dk}{(2\\pi)^D} \n R_{\\mathrm{oneloop},\\mathrm{CT}}.\n\\end{eqnarray}\nA possible choice for $R_{\\mathrm{oneloop},\\mathrm{CT}}$ is given by\n\\begin{eqnarray}\n\\lefteqn{\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n = \n \\left\\{ 2 C_A \\left[ -p^2 g^{\\mu\\nu} + p^\\mu p^\\nu - 2 \\left( 1 - \\varepsilon \\right) k^\\mu k^\\nu \n + \\frac{1}{2} \\left( 1 - \\varepsilon \\right) g^{\\mu\\nu} \\left( D_1 + D_2 \\right) \\right]\n \\right. 
} & &\n \\nonumber \\\\\n & & \\left.\n + 2 T_R N_f \\left[ p^2 g^{\\mu\\nu} - p^\\mu p^\\nu + 4 k^\\mu k^\\nu \n - g^{\\mu\\nu} \\left( D_1 + D_2 \\right) \\right] \\right\\}\n \\frac{1}{D_1^\\flat D_2^\\flat}\n \\left\\{ 1 - \\frac{4 k \\cdot \\left(p-p^\\flat\\right) + p^2}{4 D_1^\\flat} \n \\right. \\nonumber \\\\\n & & \\left. \n + \\frac{4 k \\cdot \\left(p-p^\\flat\\right) - p^2}{4 D_2^\\flat} \n + \\left[ k \\cdot \\left(p-p^\\flat\\right) \\right]^2 \\left( \\frac{1}{\\left(D_1^\\flat\\right)^2} + \\frac{1}{\\left(D_2^\\flat\\right)^2} - \\frac{1}{D_1^\\flat D_2^\\flat}\\right)\n \\right\\}.\n\\end{eqnarray}\nIntegration yields\n\\begin{eqnarray}\n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1}\n \\int \\frac{d^Dk}{\\left(2\\pi\\right)^D}\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n & = &\n \\frac{\\alpha_s}{4\\pi} \n \\;\n i Z_{3,l}^{(1)} \\left( - g^{\\mu\\nu} p^2 + p^\\mu p^\\nu \\right)\n \\; = \\; \n 0.\n\\end{eqnarray}\n\n\n\\subsubsection{Contributions from a massive quark}\n\nIn this paragraph we set\n\\begin{eqnarray}\n D_1^\\flat \n \\; = \\;\n \\left(k + \\frac{1}{2}p^\\flat\\right)^2 - m^2,\n \\;\\;\\;\\;\\;\\;\n D_2^\\flat \n \\; = \\;\n \\left(k - \\frac{1}{2}p^\\flat\\right)^2 - m^2.\n\\end{eqnarray}\nThe contribution to the gluon self-energy from massive quarks is given by\n\\begin{eqnarray}\n - i \\Pi^{\\mu\\nu}_{\\mathrm{oneloop}}\n & = & \n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1} \n \\int \\frac{d^Dk}{(2\\pi)^D} \n R_{\\mathrm{oneloop}},\n \\nonumber \\\\\n R_{\\mathrm{oneloop}}\n & = &\n - 2 T_R N_Q\n \\left[\n p^2 g^{\\mu\\nu} - p^\\mu p^\\nu + 4 k^\\mu k^\\nu - g^{\\mu\\nu} \\left( D_1 + D_2 \\right) \n \\right]\n \\frac{1}{D_1 D_2}.\n\\end{eqnarray}\nFor the counterterm we write\n\\begin{eqnarray}\n - i \\Pi^{\\mu\\nu}_{\\mathrm{oneloop},\\mathrm{CT}}\n & = & \n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1} \n \\int \\frac{d^Dk}{(2\\pi)^D} \n R_{\\mathrm{oneloop},\\mathrm{CT}}.\n\\end{eqnarray}\nA possible choice for 
$R_{\\mathrm{oneloop},\\mathrm{CT}}$ is given by\n\\begin{eqnarray}\n\\lefteqn{\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n = \n T_R N_Q\n \\left\\{\n \\frac{2 \\left( p^2 g^{\\mu\\nu} - p^\\mu p^\\nu \\right)}{D_1^\\flat D_2^\\flat}\n \\left[ 1 - \\frac{4 k \\cdot \\left(p-p^\\flat\\right) + p^2}{4 D_1^\\flat} + \\frac{4 k \\cdot \\left(p-p^\\flat\\right) - p^2}{4 D_2^\\flat} \\right]\n \\right.\n } & & \n \\nonumber \\\\\n & &\n \\left.\n + \n \\frac{\\left[ 8 k^\\mu k^\\nu-2g^{\\mu\\nu} \\left( D_1^\\flat + D_2^\\flat \\right) \\right]}{D_1^\\flat D_2^\\flat}\n \\left[ \n 1 - \\frac{4 k \\cdot \\left(p-p^\\flat\\right) + p^2}{4 D_1^\\flat} + \\frac{4 k \\cdot \\left(p-p^\\flat\\right) - p^2}{4 D_2^\\flat} \n \\right. \\right.\n \\nonumber \\\\\n & &\n \\left.\\left.\n + \\left( k \\cdot \\left(p-p^\\flat\\right) \\right)^2 \\left( \\frac{1}{\\left(D_1^\\flat\\right)^2} + \\frac{1}{\\left(D_2^\\flat\\right)^2} - \\frac{1}{D_1^\\flat D_2^\\flat}\\right)\n \\right]\n - \\frac{p^2 g^{\\mu\\nu}}{D_1^\\flat D_2^\\flat}\n \\right.\n \\nonumber \\\\\n & &\n \\left.\n - \\frac{3}{14} \\frac{\\left( p^\\flat \\cdot \\left(p-p^\\flat\\right) \\right)^2 p^\\flat{}^\\mu p^\\flat{}^\\nu }{\\left(D_1^\\flat\\right)^2 \\left(D_2^\\flat\\right)^2}\n + \\left[\n \\left( \\frac{1}{3} p^\\flat \\cdot \\left(p-p^\\flat\\right) - \\frac{p^2}{2} \\right) \\left(p-p^\\flat\\right)^\\mu \\left(p-p^\\flat\\right)^\\nu\n \\right. \\right.\n \\nonumber \\\\\n & & \n \\left. \\left.\n + \\left( \\frac{2}{15} p^\\flat \\cdot \\left(p-p^\\flat\\right) - \\frac{p^2}{2} \\right) \n \\left( \\left(p-p^\\flat\\right)^\\mu p^\\flat{}^\\nu + p^\\flat{}^\\mu \\left(p-p^\\flat\\right)^\\nu \\right) \n - \\frac{1}{6} \\left(p-p^\\flat\\right)^2 p^\\flat{}^\\mu p^\\flat{}^\\nu\n \\right. \\right.\n \\nonumber \\\\\n & & \n \\left. 
\\left.\n + \\frac{2}{5} \\left(p^\\flat \\cdot \\left(p-p^\\flat\\right)\\right)^2 g^{\\mu\\nu}\n + \\frac{1}{6} \\left( \\left(p-p^\\flat\\right)^2 + 2p^2 \\right) p^2 g^{\\mu\\nu}\n - \\frac{4}{15} p^2 p^\\flat{}^\\mu p^\\flat{}^\\nu\n \\right] \n \\frac{D_1^\\flat+D_2^\\flat}{\\left(D_1^\\flat\\right)^2 \\left(D_2^\\flat\\right)^2}\n \\right\\}.\n \\nonumber \\\\\n\\end{eqnarray}\nThe terms in the first three lines approximate $R_{\\mathrm{oneloop}}$ in the on-shell and in the ultraviolet\nlimit.\nThe terms in the fourth to sixth line ensure that the integration of $R_{\\mathrm{oneloop},\\mathrm{CT}}$\ngives the desired result.\nWe have\n\\begin{eqnarray}\n g^2 \\mu^{2\\varepsilon} S_\\varepsilon^{-1}\n \\int \\frac{d^Dk}{\\left(2\\pi\\right)^D}\n R_{\\mathrm{oneloop},\\mathrm{CT}}\n & = &\n \\frac{\\alpha_s}{4\\pi} \n \\;\n i Z_{3,Q}^{(1)} \\left( - g^{\\mu\\nu} p^2 + p^\\mu p^\\nu \\right).\n\\end{eqnarray}\nLet us note that the last term\n\\begin{eqnarray}\n - \\frac{4}{15} T_R N_Q p^2 p^\\flat{}^\\mu p^\\flat{}^\\nu\n \\frac{D_1^\\flat+D_2^\\flat}{\\left(D_1^\\flat\\right)^2 \\left(D_2^\\flat\\right)^2}\n\\end{eqnarray}\nonly vanishes linearly in the on-shell limit.\nIt is however proportional to $p^\\flat{}^\\mu p^\\flat{}^\\nu$ and will give a vanishing contribution when contracted\ninto quantities, which vanish when contracted with $p^\\flat{}^\\mu$ or $p^\\flat{}^\\nu$.\n\n\n\n\\section{Conclusions}\n\\label{sect:conclusions}\n\nIn this paper we showed that residues (or cuts) from raised propagators can be made to vanish for renormalised quantities in the on-shell scheme.\nThis is a significant simplification for numerical methods at two-loops and beyond.\nWe achieve this by constructing an integral representation for the ultraviolet counterterms in the on-shell scheme.\nWe worked out these counterterms explicitly for $\\phi^3$-theory and QCD.\n\n\\subsection*{Acknowledgements}\n\nThis work has been supported by the \nCluster of Excellence ``Precision 
Physics, Fundamental Interactions, and Structure of Matter'' \n(PRISMA+ EXC 2118\/1) funded by the German Research Foundation (DFG) \nwithin the German Excellence Strategy (Project ID 39083149).\n\n\n\n\\begin{appendix}\n\n\\section{Feynman rules}\n\\label{sect:Feynman_rules}\n\nIn this appendix we list the Feynman rules for $\\phi^3$-theory.\nThe Feynman rule for the propagator is\n\\begin{eqnarray}\n\\begin{picture}(85,20)(0,5)\n \\Line(70,10)(20,10)\n\\end{picture} \n & = &\n \\frac{i}{p^2-m^2+i\\delta},\n \\nonumber \\\\\n\\end{eqnarray}\nwith $\\delta$ an infinitesimally small positive number.\nThe vertex is given by\n\\begin{eqnarray}\n\\begin{picture}(100,35)(0,50)\n\\Vertex(50,50){2}\n\\Line(50,50)(80,50)\n\\Line(50,50)(29,71)\n\\Line(29,29)(50,50)\n\\end{picture}\n \\;\\; = \\;\\;\n i \\lambda^{(D)}.\n \\\\ \\nonumber\n\\end{eqnarray}\nThe coupling $\\lambda^{(D)}$ is defined in eq.~(\\ref{def_lambda_D}).\nThe Feynman rules for the counterterms are\n\\begin{eqnarray}\n\\begin{picture}(85,20)(0,5)\n \\Line(70,10)(20,10)\n \\Line(40,5)(50,15)\n \\Line(40,15)(50,5)\n\\end{picture} \n & = &\n i \\left[ \\left(Z_\\phi-1\\right) p^2 - \\left( Z_\\phi Z_m^2 - 1 \\right) m^2 \\right],\n \\nonumber \\\\\n\\begin{picture}(100,35)(0,50)\n\\Line(56,50)(80,50)\n\\Line(43,57)(29,71)\n\\Line(29,29)(43,43)\n\\Line(45,45)(55,55)\n\\Line(45,55)(55,45)\n\\end{picture}\n& = &\n i \\left(Z_\\phi^{\\frac{3}{2}} Z_\\lambda-1\\right) \\lambda^{(D)}.\n \\\\ \\nonumber\n\\end{eqnarray}\n\n\\end{appendix}\n\n{\\footnotesize\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{}} \n\n\\begin{document}\n\\normalem\n\n\\title{Rotational Resonances and Regge Trajectories in Lightly Doped Antiferromagnets}\n\n\\author{A. 
Bohrdt}\n\\email[Corresponding author email: ]{annabelle.bohrdt@tum.de}\n\\affiliation{Department of Physics and Institute for Advanced Study, Technical University of Munich, 85748 Garching, Germany}\n\\affiliation{Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 M\\\"unchen, Germany}\n\\address{ITAMP, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138, USA}\n\\affiliation{Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA}\n\n\\author{E. Demler}\n\\affiliation{Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA}\n\n\\author{F. Grusdt}\n\\affiliation{Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universit\\\"at M\\\"unchen, Theresienstr. 37, M\\\"unchen D-80333, Germany}\n\\affiliation{Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 M\\\"unchen, Germany}\n\n\n\n\\pacs{}\n\n\n\\date{\\today}\n\n\\begin{abstract}\nUnderstanding the nature of charge carriers in doped Mott insulators holds the key to unravelling puzzling properties of strongly correlated electron systems, including cuprate superconductors. Several theoretical models suggested that dopants can be understood as bound states of partons, the analogues of quarks in high-energy physics. However, direct signatures of spinon-chargon bound states are lacking, both in experiment and theory. Here we numerically identify long-lived rotational resonances at low doping, which directly reveal the microscopic structure of spinon-chargon bound states. Similar to Regge trajectories reflecting the quark structure of mesons, we establish a linear dependence of the rotational energy on the super-exchange coupling. Rotational excitations are strongly suppressed in standard angle-resolved photo-emission (ARPES) spectra, but we propose a multi-photon rotational extension of ARPES where they have strong spectral weight. 
Our findings suggest that multi-photon spectroscopy experiments should provide new insights into emergent universal features of strongly correlated electron systems.\n\\end{abstract}\n\n\\maketitle\n\n\n\\section{Introduction}\nOur understanding of strongly correlated quantum matter often involves new emergent structures. For example, emergent gauge fields play a central role in understanding quantum spin liquids \\cite{Wen2004}, and spin-charge separation in the one-dimensional Hubbard model can be related to the fractionalization of fermions into deconfined spinons and chargons \\cite{Giamarchi2003,Kim1996,Vijayan2020}. The fate of those partons in dimensions higher than one remains unresolved. Theoretically and experimentally, one faces problems similar to those in high-energy physics: the mathematical models are too challenging to solve and signatures of parton formation are often indirect or buried in complex observables. In this article we draw analogies to high-energy physics and report on unambiguous signatures of parton structures in the two-dimensional (2D) $t-J$ model.\n\nIn quantum chromodynamics it is well established that directly observable nucleons are not the most elementary constituents of matter. The quark model, introduced more than fifty years ago, explains the larger class of mesons and baryons as composite objects consisting of two or three valence quarks. A smoking-gun demonstration of the quark model was its ability to explain many additional resonances observed in collider experiments as ro-vibrational excitations of the fundamental parton configurations. In the quark model, many heavy mesons are thus understood as excited states of the fundamental mesons: they contain the same quark content but realize a higher vibrational state or have non-zero orbital angular momentum \\cite{Micu1969,Olive2014}. 
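A short classical estimate makes the energy cost of such rotational excitations explicit. The following is the textbook calculation for a rigidly rotating relativistic string with uniform tension $\sigma$ and massless endpoints (units $c=1$; we write $L$ for the angular momentum to avoid confusion with the super-exchange coupling $J$ of the lattice model discussed later):

```latex
% Rigidly rotating string of half-length R; a segment at radius r moves
% with velocity v(r) = r/R, so the endpoints move at the speed of light.
\begin{align}
  E &= 2 \int_0^R \frac{\sigma \, dr}{\sqrt{1 - r^2/R^2}} = \pi \sigma R ,
  &
  L &= 2 \int_0^R \frac{\sigma \, r \, (r/R) \, dr}{\sqrt{1 - r^2/R^2}}
     = \frac{\pi \sigma R^2}{2} .
\end{align}
% Eliminating R gives the linear Regge trajectory:
\begin{equation}
  L = \frac{E^2}{2 \pi \sigma} ,
  \qquad \text{i.e.} \qquad
  E = \sqrt{2 \pi \sigma L} .
\end{equation}
```

Higher angular momentum thus costs an energy growing as $\sqrt{\sigma L}$, with the string tension $\sigma$ as the only parameter.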
\n\nA hallmark signature of rotational mesons comes from analysis of their mass, which can be related to the excitation energy of the pair of quarks relative to the lowest irrotational energy state. In a simplistic model, a meson can be described as a rigid line-like object with constant energy density per unit length, also known as ``string tension''. The two nearly massless quarks are located at the respective ends of this line and carry the (flavor) quantum numbers of the system. The relativistic expression for the energy of a rotating meson of this type scales as the square root of the product of the string tension and the angular momentum \\cite{Greensite2003}. The latter relation, known as a Regge trajectory, can be directly probed in collider experiments and has been observed experimentally \\cite{Bali2001}. It provides a strong indication that the observed mesons are bound states of partons. \n\n\\begin{figure}\n\\centering\n\\epsfig{file=FIG1.pdf, width=0.48\\textwidth}\n\\caption{\\textbf{Rotational meson resonances.} Bound states of spinons and chargons in a $C_4$-symmetric doped 2D AFM Mott insulator feature characteristic ro-vibrational excitations. (a) In an effective microscopic theory, the string with the light chargon (gray) can rotate around the heavy spinon (blue). (b) To detect rotational resonances, we propose a multi-photon ARPES scheme. Following the creation of a hole by a first photon, a second photon couples to lattice vibrations and excites rotational modes with $C_4$ angular momentum $m_4=0,1,2,3$. \n(c) Energy distribution curves for rotational ARPES spectra at the nodal point $\\vec{k}= \\vec{\\pi}\/2$ with different angular momenta, from top to bottom: $m_4=2,1,0$. The lowest (dash-dotted) curve corresponds to the usual ARPES spectrum with $m_4=0$. All spectra are normalized by their total area, $\\int d\\omega~ A_{{\\rm (rot)}}^{(m)}$. 
The lowest mesonic resonances (ground state $\\mathsf{1S}$, vibrational $\\mathsf{2S}$, and rotational $\\mathsf{1P,1D,1F}$ excited states) correspond to long-lived excited states. We performed time-dependent DMRG simulations for a $t-J$ model on a $4 \\times 40$ cylinder, with $t=3 J$. The shaded areas correspond to toy model calculations (see methods \\ref{MethodsC}) where we introduced small energy shifts and broadening as fit parameters. \n}\n\\label{figRotSetup}\n\\end{figure}\n\nAn idea almost as old as the problem of high-$T_c$ superconductivity itself is that strongly correlated electrons in these systems may be governed by principles similar to those of high-energy physics \\cite{Lee2006}. In analogy with quark confinement, B\\'eran et al. suggested a description of a hole doped into a 2D antiferromagnet (AFM) in terms of a composite quasiparticle, consisting of two partons -- a chargon, carrying the charge quantum number, and a spinon, carrying the spin quantum number -- bound together by ``an interaction obeying a string law'' \\cite{Beran1996}. However, finding direct experimental or theoretical signatures of such a structure has proven difficult. In angle-resolved photo-emission spectra (ARPES) no sign of rotational resonances has been seen, and the nature of a possible first vibrational excitation is debated \\cite{Dagotto1990,Leung1995,Mishchenko2001,Manousakis2007,Bohrdt2020}. \n\nDiscerning the nature of charge carriers in lightly doped Mott insulators should provide a major boost to understanding properties of the underdoped cuprates and elucidating the origin of the pseudogap (PG) phase. In particular, it should provide a basis for constructing a consistent description of transport \\cite{Badoux2016} and spectroscopy \\cite{Shen2005} experiments. 
Several theoretical proposals involve emergent structures of partons, ranging from the single-dopant level \\cite{Bulaevskii1968,Beran1996,Trugman1988,Manousakis2007}, to effective theories of the PG phase involving exotic composite Fermi liquids \\cite{Anderson2007,Baskaran2007} and including fractionalized Fermi liquids \\cite{Senthil2003} where deconfined spinons and chargons form electron-like bound states \\cite{Punk2015PNASS,Sachdev2016,Zhang2020}. These scenarios may also explain the sudden and pronounced change of ARPES spectra \\cite{Chen2019} and the carrier density \\cite{Badoux2016} observed in cuprates around $p^* = 19 \\%$ doping, as being related to an unbinding transition of spinons and chargons.\n\nHere we provide strong numerical evidence that charge carriers in lightly doped 2D AFM Mott insulators are composed of partons, which are bound to each other, and exhibit telltale rotational excitations following Regge-like trajectories. We show that these rotational excitations are strongly suppressed in standard ARPES measurements and propose a multi-photon extension of ARPES imparting $C_4$-angular momentum into the system and allowing rotational excitations to be accessed experimentally in solids \\cite{Damascelli2003} or using ultracold atoms \\cite{Bohrdt2018,Brown2019}. Our numerical DMRG simulations of the rotational ARPES spectra in the $t-J$ model, see Fig.~\\ref{figRotSetup}, reveal narrow quasiparticle peaks at low excitation energies, which we interpret as a striking proof of the parton picture. 
Moreover, we describe the rotational resonances by a microscopic spinon-chargon toy model which explains the observed features without any free fit parameters.\n\n\n\n\\section{Rotational ARPES spectrum}\nIn traditional ARPES the spectral function $A(\\vec{k},\\omega) = - \\pi^{-1} {\\rm Im} \\mathcal{G}(\\vec{k},\\omega)$ is measured which reveals information about the one-hole Green's function $\\mathcal{G}(\\vec{j},t) = \\theta(t) \\sum_\\sigma \\bra{\\Psi_0} \\hat{c}^\\dagger_{\\vec{j},\\sigma}(t) \\c_{\\vec{0},\\sigma}(0) \\ket{\\Psi_0}$; the latter describes how a fermion $\\c_{\\vec{j},\\sigma}$ with spin $\\sigma$ is removed from the initial state $\\ket{\\Psi_0}$ and leads to a hole propagating through the system. In the 2D Fermi-Hubbard model, believed to describe lightly doped copper oxides \\cite{Lee2006}, a long-lived quasiparticle peak is found in $A(\\vec{k},\\omega)$ \\cite{Wells1995,Ronning2005,Graf2007} which describes how a hole interacting with magnetic fluctuations forms a spin- or magnetic polaron \\cite{Kane1989,Sachdev1989,Dagotto1990,Martinez1991,Liu1992,Koepsell2019} and moves through the surrounding AFM.\n\nOur goal is to search for long-lived rotational excitations in the one-hole spectrum, which provide a direct route to reveal the composite nature of charge carriers in the Hubbard model. To couple to rotationally excited states one must impart discrete $C_4$ angular momentum into the system. However, the Green's function $\\mathcal{G}(\\vec{k},\\omega)$ respects the symmetries of the underlying Hamiltonian: In this case we are particularly interested in the discrete rotational $C_4$ symmetry of the Hubbard model, which is unbroken in the undoped parent AFM $\\ket{\\Psi_0}$. Hence, for $C_4$ invariant momenta (C4IM) in the magnetic Brillouin zone (MBZ) no angular momentum transfer is allowed and rotational excitations have no weight in the traditional ARPES spectrum $A(\\vec{k},\\omega)$. 
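This selection-rule argument can be made concrete with a quick numerical check: a momentum $\vec{k}$ is $C_4$-invariant if a $90^\circ$ rotation maps it onto itself up to a reciprocal lattice vector of the magnetic unit cell. A minimal sketch (the particular choice of generators for the magnetic reciprocal lattice of the two-site AFM unit cell is our own; any equivalent pair works):

```python
import itertools
import numpy as np

def is_c4_invariant(k, recip_vectors, tol=1e-9):
    """Return True if a 90-degree rotation maps momentum k onto itself,
    up to an integer combination of the given reciprocal-lattice vectors."""
    kx, ky = k
    diff = np.array([-ky, kx]) - np.array(k)   # C4: (kx, ky) -> (-ky, kx)
    for n1, n2 in itertools.product(range(-4, 5), repeat=2):
        if np.allclose(diff, n1 * recip_vectors[0] + n2 * recip_vectors[1],
                       atol=tol):
            return True
    return False

pi = np.pi
# Reciprocal lattice generators of the two-site magnetic (AFM) unit cell:
mbz_vectors = [np.array([pi, pi]), np.array([2 * pi, 0.0])]

for label, k in [("Gamma", (0.0, 0.0)), ("(pi,pi)", (pi, pi)),
                 ("(pi,0)", (pi, 0.0)), ("nodal (pi/2,pi/2)", (pi / 2, pi / 2))]:
    print(label, is_c4_invariant(k, mbz_vectors))
```

The check confirms, for example, that the nodal point $(\pi/2,\pi/2)$ is not a C4IM even in the reduced MBZ.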
\n\nFor non-C4IM, lattice effects can in principle impart $C_4$ angular momentum into the system. However, since the Green's function couples to the center-of-mass momentum $\\vec{k}$ of the hole, the spectral weight of rotational states is still expected to be strongly suppressed if the effective masses of the two supposed partons are significantly different. In this limit, the lighter parton rotates around the heavier parton, which carries most of the linear momentum $\\vec{k}$, thus suppressing couplings of $\\vec{k}$ to the \\emph{relative} angular momentum of the two partons. We confirm this intuition for an analytically solvable toy model in one dimension (see supplements \\ref{SuppMat1Dmes}). The Hubbard model in cuprates, with super-exchange coupling $J \\approx t \/ 3$, is in such a regime where significantly different parton masses $\\simeq 1\/J$ and $\\simeq 1\/t$ are expected. \n\nTo allow significant overlap with possible rotational excitations, we devise a rotational extension of ARPES where angular momentum is directly imparted into the system, even at C4IM. The simplest term creating an excitation with discrete angular momentum $m_4=0,1,2,3$, spin $\\sigma$, charge one, and total momentum $\\vec{k}$ is given by\n\\begin{equation}\n\\hat{R}_{m_4,\\sigma}(\\vec{k}) = \\sum_{\\vec{j}} \\frac{e^{- i \\vec{k} \\cdot \\vec{j}}}{\\sqrt{V}} \\sum_{\\vec{i}: \\ij} e^{i m_4 \\varphi_{\\vec{i} - \\vec{j}}} \\sum_{\\sigma'} \\hat{c}^\\dagger_{\\vec{j},\\sigma'} \\c_{\\vec{i},\\sigma'} \\c_{\\vec{j},\\sigma},\n\\label{eqDefRmk}\n\\end{equation}\nwith $\\varphi_{\\vec{r}} = {\\rm arg}(\\vec{r})$ the polar angle of $\\vec{r}$. 
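The bond phases entering this operator are simply a discrete Fourier transform over the four nearest-neighbour bond directions, which makes the four $m_4$ channels mutually orthogonal. A minimal illustration (our own sketch, not the implementation used for the numerics):

```python
import numpy as np

# Polar angles of the four nearest-neighbour bonds on the square lattice.
bond_angles = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def c4_phases(m4):
    """Phase factors exp(i m4 phi) entering R_{m4,sigma} on the four bonds."""
    return np.exp(1j * m4 * bond_angles)

# The four phase patterns form a discrete Fourier basis over the bond
# directions: sum over bonds of exp(i (m - m') phi) = 4 delta_{m m'}.
gram = np.array([[np.vdot(c4_phases(m), c4_phases(mp)) for mp in range(4)]
                 for m in range(4)])
print(np.round(gram.real))   # 4 times the identity matrix
```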
The action of this operator on a product N\\'eel state, $\\hat{R}_{m_4,\\sigma} \\ket{{\\rm N}}$, is illustrated in Fig.~\\ref{figRotSetup} (b): Here the second and third fermion operators in $\\hat{R}_{m_4,\\sigma}$ create a string-like excitation with $C_4$ angular momentum $m_4$ and non-zero overlap to the rotational states predicted for a hole in an Ising AFM \\cite{Grusdt2018PRX}.\n\nInstead of the usual Green's function, we consider the rotational Green's function\n\\begin{equation}\n\\mathcal{G}_{\\rm rot}^{(m_4)}(\\vec{k},t) = \\theta(t) \\sum_\\sigma \\bra{\\Psi_0} \\hat{R}^\\dagger_{m_4,\\sigma}(\\vec{k},t) \\hat{R}_{m_4,\\sigma}(\\vec{k},0) \\ket{\\Psi_0},\n\\end{equation}\nwhich we calculate by time-dependent DMRG (see methods \\ref{MethodsB}) \\cite{Paeckel2019a,Kjall2013,Zaletel2015}. The corresponding rotational spectrum, $- \\pi^{-1} {\\rm Im} \\mathcal{G}_{\\rm rot}^{(m_4)}(\\vec{k},\\omega)$, in Lehmann representation is\n\\begin{equation}\n A_{\\rm rot}^{(m_4)}(\\vec{k},\\omega) = \\sum_{\\sigma, n>0} \\delta \\l \\omega-E_n+E_0 \\r | \\bra{\\Psi_n} \\hat{R}_{m_4,\\sigma}(\\vec{k}) \\ket{\\Psi_0} |^2,\n \\label{eqDefArot}\n\\end{equation}\nwhere $\\ket{\\Psi_0}$ ($E_0$) is the correlated ground state (energy) and $\\ket{\\Psi_n}$ ($E_n$) for $n>0$ are the eigenstates (eigenenergies) with an added hole. Hence, if long-lived rotational excitations exist, they manifest in pronounced quasiparticle peaks in the rotational ARPES spectrum in Eq.~\\eqref{eqDefArot}. For $m_4=0$ the same selection rules apply as for the conventional ARPES spectrum and the same states contribute, but with modified spectral weights. \n\nThe rotational spectrum can be experimentally measured using a multi-photon extension of ARPES. 
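Before detailing the measurement scheme, we note that spectra of the form above obey a simple sum rule: integrating over $\omega$ yields $\sum_\sigma \langle \Psi_0 | \hat{R}^\dagger_{m_4,\sigma} \hat{R}_{m_4,\sigma} | \Psi_0 \rangle$, which underlies the area normalization used in the figures. A toy illustration of the Lehmann construction, with a random Hermitian matrix and a random operator standing in for the many-body Hamiltonian and the excitation operator (purely illustrative placeholders, not the $t$-$J$ model):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 60

# Stand-ins for the many-body Hamiltonian H and excitation operator R.
H = rng.normal(size=(dim, dim))
H = (H + H.T) / 2
R = rng.normal(size=(dim, dim))

energies, states = np.linalg.eigh(H)        # columns of `states` are |n>
psi0, E0 = states[:, 0], energies[0]

# Lehmann representation: A(w) = sum_n |<n|R|0>|^2 delta(w - E_n + E_0),
# with the delta functions broadened into narrow Gaussians.
weights = np.abs(states.T @ (R @ psi0)) ** 2
omegas = np.linspace(-1.0, energies[-1] - E0 + 1.0, 4000)
eta = 0.05
A = sum(w * np.exp(-((omegas - (En - E0)) ** 2) / (2 * eta ** 2))
        / (eta * np.sqrt(2 * np.pi))
        for w, En in zip(weights, energies))

# Sum rule: the total spectral weight equals <0| R^dagger R |0>.
total_weight = np.sum(A) * (omegas[1] - omegas[0])
print(total_weight, psi0 @ (R.T @ (R @ psi0)))
```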
We propose to use one set of beams for lattice modulation, which imparts angular momentum into the system by coupling to specific phonon modes in solids \\cite{Devereaux1994} or directly by modulating the optical potential with appropriate phases in ultracold atoms \\cite{Bloch2008}. The other beam is the usual ARPES beam which creates the hole excitation. Details of our scheme are provided in the methods \\ref{MethodsA}.\n\n\n\\section{Rotational resonances}\nNow we present our numerical results obtained for one hole doped into a 2D AFM Mott insulator. Specifically, we considered the $t-J$ model on extended four-leg cylinders and for $t\/J=3$, the experimentally most relevant value for cuprates (see methods \\ref{MethodsB}). In Fig.~\\ref{figRotSetup} (c) we show numerically obtained spectra (energy distribution curves) at the nodal point $\\vec{k}= \\vec{\\pi}\/2$, with $\\vec{\\pi} = (\\pi,\\pi)$. For $m_4=0$ (red line, second from bottom) the rotational spectrum shows the same quasiparticle peak as the conventional ARPES spectrum (black, bottom line), at the same energy. This peak, labeled $\\mathsf{1S}$, corresponds to the magnetic polaron ground state \\cite{Bohrdt2020}. A possible excited state is also visible at $m_4=0$, which we label $\\mathsf{2S}$ and which has previously been argued to correspond to the first vibrational excitation of the magnetic polaron \\cite{Dagotto1990,Leung1995,Mishchenko2001,Manousakis2007,Bohrdt2020}. The $\\mathsf{2S}$ state has a reduced spectral weight in the rotational ARPES spectrum, where it is difficult to identify at all. \n\nMuch clearer indications for long-lived excitations of magnetic polarons can be found in the non-trivial rotational ARPES spectra with $m_4 \\neq 0$. For $m_4=2$ (top, blue curve in Fig.~\\ref{figRotSetup}) we find a pronounced quasiparticle peak corresponding to an excitation energy $\\Delta E \\sim 1.7 J$. 
Remarkably, no significant spectral weight appears below this energy; in particular, we find zero spectral weight at the polaron ground state energy. We note that this is not simply a consequence of selection rules: Firstly, the nodal point does not correspond to a C4IM, not even in the reduced MBZ. Secondly, the AFM has gapless magnon modes which should in principle be able to carry away angular momentum, allowing an excited magnetic polaron to decay to its ground state. Based on these observations, we identify the resonance found at $m_4=2$ with a $\\mathsf{1D}$ excited state of the magnetic polaron. \n\nSimilarly, the rotational spectrum with $m_4=1$ features a pronounced peak at a slightly higher excitation energy $\\Delta E \\sim 2.3 J$ above the ground state (second from top, turquoise curve in Fig.~\\ref{figRotSetup}). In this case we find weak hybridization with the $\\mathsf{1S}$ state, as indicated by a small quasiparticle peak at the ground state energy. Based on its quantum numbers, we identify the new excited state as $\\mathsf{1P}$. By applying a combination of time-reversal and inversion symmetry, it follows that the $m_4=1$ and $m_4=3$ rotational spectra coincide exactly. Hence the $\\mathsf{1P}$ state is associated with a degenerate $\\mathsf{1F}$ state at $m_4=3$. \n\nOur observation of long-lived quasiparticle peaks in the rotational spectrum provides a direct indication that mobile holes in lightly doped AFM Mott insulators have a discrete internal structure. To understand our reasoning, consider a theoretical model of magnetic polarons without a rigid internal structure. In this case, the action of the operator $\\hat{R}_{m_4,\\sigma}(\\vec{k})$ in the rotational Green's function would generically be expected to have two separate effects: The first fermion operator $\\c_{\\vec{j},\\sigma}$ in Eq.~\\eqref{eqDefRmk} creates a mobile hole with a large overlap to the structureless magnetic polaron. 
The subsequent pair of fermion operators, $\\sum_{\\sigma'} \\hat{c}^\\dagger_{\\vec{j},\\sigma'} \\c_{\\vec{i},\\sigma'}$ in Eq.~\\eqref{eqDefRmk}, then couples to the surrounding spins and creates separate magnon excitations. In this case one would expect $A_{\\rm rot}^{(m_4)}(\\vec{k},\\omega)$ to become a convolution of a polaron and a magnon contribution, possibly renormalized weakly by interaction effects. This would lead to a broad and mostly featureless spectrum -- in stark contrast with our numerical findings in Fig.~\\ref{figRotSetup} (c). \n\n\n\\section{Regge-like trajectories}\nB\\'eran et al. \\cite{Beran1996} have suggested that mobile holes in an AFM Mott insulator can be described as mesonic bound states of two strongly interacting partons, a light chargon and a heavy spinon, see also \\cite{Laughlin1997,Grusdt2018PRX}. In that case the operators $\\hat{R}_{m_4,\\sigma}(\\vec{k})$ should create rotational excitations, which explains the peaks we found in the rotational spectra in Fig.~\\ref{figRotSetup}. Now we study how the excitation energies $\\Delta E$ of these rotational peaks, as well as the first vibrational peak, depend on the underlying coupling strength $J\/t$ in the system, which gives further insights into the nature of the bound state. \n\nIn Fig.~\\ref{figReggeTraj} we numerically extracted the positions of the peaks from frequency cuts $A_{\\rm rot}^{(m_4)}(\\vec{\\pi}\/2,\\omega)$ of rotational spectra at the nodal point, for different values of $J\/t$. 
We find that the positions of the rotational peaks scale linearly with the spin exchange \n\\begin{equation}\n\\Delta E_{\\rm rot} \\simeq J,\n\\label{eqErotScaling}\n\\end{equation}\nwhereas the gap to the vibrational excitation $\\mathsf{2S}$ has a characteristic power-law dependence on $t$ and $J$ \\cite{Bohrdt2020}, \n\\begin{equation}\n\\Delta E_{\\rm vib} \\simeq t^{1\/3} J^{2\/3}.\n\\label{eqEvibScaling}\n\\end{equation} \n\nThese scaling behaviors can be explained by a simplistic meson model~\\cite{Grusdt2018PRX}: In this model the two partons are connected by a line-like object on the square lattice with constant energy density $dE\/d\\ell$. This string tension must be proportional to the spin exchange energy $dE\/d\\ell \\propto J$ to obtain the observed scaling laws in Eqs.~\\eqref{eqErotScaling}, \\eqref{eqEvibScaling}.\n\nSince $J$ corresponds to the string tension between the two partons, Eq.~\\eqref{eqErotScaling} resembles the celebrated Regge formula from particle physics, which relates the meson excitation energy to its angular momentum and the underlying string tension \\cite{Greensite2003}. While high-energy experiments cannot tune the string tension, which is determined by the coupling constant $g$ of quantum chromodynamics, cold atom quantum simulators \\cite{Cheuk2016,Mazurenko2017,Koepsell2019,Brown2019} can be used to tune the coupling $J\/t$ in the Hubbard model and directly measure the Regge-like trajectories we predict for rotational excitations in Fig.~\\ref{figReggeTraj}.\n\n\\begin{figure}[t!]\n\\centering\n\\epsfig{file=FIG2.pdf, width=0.49\\textwidth}\n\\caption{\\textbf{Meson Regge trajectories.} We show the dependence of the excitation gap $\\Delta E$ at the nodal point $\\vec{k}=\\vec{\\pi}\/2$ on the super-exchange energy $J$. The gap was extracted from peak positions in numerically obtained spectra. 
The low-lying rotational resonances ($\\mathsf{1P, 1D, 1F}$) have a gap scaling linearly with $J$ (light dotted lines denote linear fits: $\\Delta E_{\\mathsf{1D}} = 1.44 J + 0.061 t$ and $\\Delta E_{\\mathsf{1P,1F}} = 2.12 J + 0.047 t$). The gap to the first vibrational peak ($\\mathsf{2S}$) scales with $t^{1\/3} J^{2\/3}$ \\cite{Bohrdt2020}. Solid lines are parameter-free calculations neglecting spinon dynamics, see methods \\ref{MethodsC}. The inset shows the same data, but with energy measured in units of $J$ instead of $t$.\n}\n\\label{figReggeTraj}\n\\end{figure}\n\nIn further analogy with the Regge formula from high-energy physics, we can study the dependence of the meson excitation energy $\\Delta E$ on its angular momentum $m_4$. While quarks in high-energy physics are described in a continuous space-time, lattice effects are strong in the Hubbard or $t-J$ models we consider. As a result, the simplistic meson model from Ref.~\\cite{Grusdt2018PRX} predicts that all rotational states with $m_4 \\neq 0$ should be degenerate when $J\/t \\ll 1$, and be located between purely vibrational states in this limit. Refined meson models predict small splittings of the rotational lines, however. We confirm in Fig.~\\ref{figReggeTraj} that all rotational excitations are close in energy, and well separated from the first vibrational peak. \n\n\n\\begin{figure}[t!]\n\\centering\n\\epsfig{file=FIG3.pdf, width=0.49\\textwidth}\n\\caption{\\textbf{Rotational spectra and meson dispersion.} We show full rotational ARPES spectra $A_{\\rm rot}^{(m_4)}(\\omega,\\vec{k})$ for momenta $\\vec{k}$ along the cuts in the Brillouin zone indicated by blue arrows in (a). For a given value of $m_4=0$ ($m_4=1,2$), the fitted peak positions of the low-lying resonances are indicated by green (gray, blue) dots in (b) and (c). The numerics were carried out on elongated $L_x=40 \\times 4$-leg cylinders, for a $t-J$ model with $t\/J=3$. 
At high-symmetry momenta in the Brillouin zone, see (a), selection rules explain why some meson resonances are invisible. The insets show the same spectral cuts in the spinon-chargon toy model, where discrete delta-functions were replaced by weakly broadened Gaussian peaks but without any further fitting parameters.\n}\n\\label{figSpectralLines}\n\\end{figure}\n\n\\section{Meson dispersion}\nThe dependence of the peak position on the momentum $\\vec{k}$ enables further insights into the properties of the mesonic bound states describing mobile holes in lightly doped AFM Mott insulators. In Fig.~\\ref{figSpectralLines} we show momentum cuts of the rotational ARPES spectrum, along lines indicated in part (a), for a fixed ratio of $t\/J=3$. \n\nIn order to simplify comparison between spectra at different angular momenta, we indicate the numerically extracted dispersions of all mesonic resonances by dots. Different dot colors denote the angular momentum $m_4$ where the respective peak is most pronounced. Since $m_4$ is only a conserved quantum number at C4IM, at generic momenta $\\vec{k}$ most mesonic resonances are visible for different values of $m_4$ at the same time. In addition, even at C4IM our numerics indicate further weak hybridization of rotational states which is caused by the broken $C_4$ rotational symmetry in the elongated four-leg cylinders we consider. For example, at $\\vec{k}=(\\pi,0)$ the $\\mathsf{1S}$ state has non-zero spectral weight for $m_4=2$. On the other hand, the strict vertical and horizontal reflection symmetries $\\sigma_{\\rm v\/h}$ of our finite-size cylinders prevent hybridization of states with even and odd $m_4$, respectively. \n\nOur main findings from the meson dispersions in Fig.~\\ref{figSpectralLines} are as follows. For almost all momenta, the lowest peak corresponds to angular momentum $m_4=0$ (green dots). This peak shows the strongest dispersion, with a minimum at the nodal point. 
In contrast, the rotational meson resonances show significantly reduced dispersion and the locations of dispersion minima depend strongly on the value of $m_4$. These features are explained by the spinon-chargon toy model presented in the methods section \\ref{MethodsC}.\n\n\n\\section{Measurement scheme}\nWe expect that the rotational ARPES spectra we predict can be experimentally measured in cuprate compounds. Specifically, we propose to perform multi-photon spectroscopy which combines the usual ARPES beam with a periodic lattice modulation. To realize the latter, we suggest driving the buckling phonon modes in copper-oxide layers, in particular the Raman-active $A_{\\rm 1g}$ and $B_{\\rm 1g}$ modes in YBCO \\cite{Devereaux1994,Rosch2004}. Overall, this leads to an effective three-photon transition, consisting of a pair of Raman lasers and the usual ARPES beam. As we show in the methods \\ref{MethodsA}, the $s$- and $d$-wave symmetries of the $A_{\\rm 1g}$ and $B_{\\rm 1g}$ modes, respectively, make it possible to measure $m_4=0$ and $m_4=2$ rotational ARPES spectra. \n\nWe also show in the methods \\ref{MethodsA} that a similar rotational extension of scanning-tunneling microscopy (STM) can be envisioned. Even though rotational STM spectra lack momentum resolution, we demonstrate that the predicted weakly dispersive long-lived rotational meson resonances in Fig.~\\ref{figSpectralLines} can be resolved. \n\nFinally, we expect that analogous measurements will become possible in ultracold atoms in the immediate future \\cite{Brown2019,Kollath2007,Bohrdt2018}. There, the required angular momentum can be imparted into the system directly through shaking of the optical lattice and without phonons, resulting in a two-photon spectroscopy scheme; see methods \\ref{MethodsA} for details. 
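The symmetry argument behind the $A_{\rm 1g} \to m_4=0$ and $B_{\rm 1g} \to m_4=2$ correspondence can be checked by projecting the bond-modulation patterns onto the $C_4$ harmonics. Here we assume, as a sketch of the selection rule only, that the two modes modulate the four bonds around a site with $s$-wave and $d$-wave sign patterns, respectively:

```python
import numpy as np

# Polar angles of the four bonds around a lattice site, and the sign
# patterns with which the two phonon modes are assumed to modulate them.
angles = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
patterns = {"A1g": np.array([1.0, 1.0, 1.0, 1.0]),     # s-wave: in phase
            "B1g": np.array([1.0, -1.0, 1.0, -1.0])}   # d-wave: x/y out of phase

# Project each pattern onto the C4 angular-momentum channels m4 = 0..3.
weights = {name: np.array([abs(np.sum(pat * np.exp(-1j * m4 * angles))) / 4
                           for m4 in range(4)])
           for name, pat in patterns.items()}

for name, w in weights.items():
    print(name, np.round(w, 3))
```

The $s$-wave pattern has weight only in the $m_4=0$ channel and the $d$-wave pattern only in $m_4=2$, consistent with the selection rules stated above.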
\n\n\n\\section{Discussion and Outlook}\nWe have proposed a rotational extension of ARPES, which we used to predict the previously unknown long-lived rotational excitations of charge carriers in lightly doped 2D AFM Mott insulators. Our finding of pronounced quasiparticle peaks in the rotational spectra allows us to conclude that strong interactions between spin and charge must be present. By analyzing Regge-like trajectories, describing the dependence of the excitation energy on the super-exchange $J$, we found compelling evidence that mobile holes in lightly doped AFM Mott insulators have a rich internal structure and can be understood as spinon-chargon bound states. This finding is further supported by the good agreement we report with a microscopic toy model of spinons and chargons connected by a string on the square lattice. \n\nWe will extend our numerical analysis, obtained for the one-hole doped $t-J$ model so far, to study the closely related 2D Hubbard model and analyze effects of weak non-zero doping next. We do not expect the clear numerical signatures obtained so far -- which we also confirm in exact diagonalization of $4 \\times 4$ systems in the supplements \\ref{SuppMatAddNum} -- to change significantly in this case. \n\nOur research provides the most direct evidence yet for the decades-old idea \\cite{Beran1996,Laughlin1997} that the physics of lightly doped 2D AFM Mott insulators -- and by extension high-temperature superconductors -- is captured by emergent partons. In particular, our results support the picture of the pseudogap phase in cuprates as a liquid of fermionic mesons, modeling charge carriers as bound states of spinons and chargons. We emphasize the importance of a direct experimental confirmation that charge carriers have a rich internal structure: An observation of the long-lived rotational resonances we predict would provide a strong indication that this picture is correct. 
\n\nThe meson structure of charge carriers in lightly doped 2D AFM Mott insulators may have further theoretical implications. On the one hand, it may contribute to our understanding of how stripes form at low temperatures \\cite{Grusdt2020}, or shed new light on the pairing mechanism underlying high-temperature superconductivity in cuprates. On the other hand, understanding possible unbinding transitions of spinons and chargons may contribute to a deeper understanding of the rich phase diagram of cuprates. An interesting future direction would be to explore how the parton picture relates to the sudden change of carrier properties observed around a critical doping $p^*\\approx 19\\%$ \\cite{Tallon2001,Badoux2016,Chen2019}, see also \\cite{Koepsell2020a}.\n\n\n\n\n\n\n\n\n\n\\newpage\n\n\\section{Methods}\n\n\\subsection{Multi-photon lattice modulation spectroscopy}\n\\label{MethodsA}\nTo detect rotational excitations experimentally, we propose a general lattice modulation scheme. The lattice modulation causes a two-photon side-band (or, depending on the coupling scheme, a Raman side-band) in the ARPES spectrum, shifted by the rotational excitation energy. We discuss possible implementations of the rotational ARPES scheme for ultracold atoms in quantum gas microscopes \\cite{Gross2017}, and in solids where three photon beams are required to Raman-couple to suitable phonon modes. Finally, we show how scanning-tunneling microscopy (STM) can be combined with the same lattice shaking scheme to probe the rotational states without momentum resolution. \n\n\n\\subsubsection{General scheme}\nTo describe the lattice modulation scheme, we denote by $\\H_{0}$ the underlying system Hamiltonian ($t-J$ or Fermi-Hubbard model; additional terms with the same discrete lattice symmetries are also allowed). $\\H_{0}$ determines the dynamics of the underlying particles $\\c_{\\vec{j},\\sigma}$ in the system. 
By $\\hat{V}_{\\rm A}(t)$ we denote the applied perturbation whose linear response yields the ARPES signal: \n\\begin{equation}\n \\hat{V}_{\\rm A}(t) = - \\delta t_{\\rm A} \\sin( \\omega_{\\rm A} t) \\sum_{\\vec{j},\\sigma} \\l \\hat{a}^\\dagger_{\\vec{j},\\sigma} \\c_{\\vec{j},\\sigma} + \\text{h.c.} \\r.\n \\label{eqDefVA}\n\\end{equation}\nHere $\\a_{\\vec{j},\\sigma}$ denotes the \"photo electron\" channel in the ARPES sequence. \n\nWe assume that the detection system is governed by a non-interacting Hamiltonian,\n\\begin{equation}\n \\H_a = \\sum_{\\vec{k},\\sigma} \\epsilon_{\\vec{k},\\sigma} \\hat{a}^\\dagger_{\\vec{k},\\sigma} \\a_{\\vec{k},\\sigma},\n\\end{equation}\nwith a known dispersion $\\epsilon_{\\vec{k},\\sigma}$. Further, we require that the energy, momentum and spin of the \"photo electrons\" created by $\\hat{a}^\\dagger_{\\vec{k},\\sigma}$ can be experimentally measured. The dispersion relation $\\epsilon_{\\vec{k},\\sigma}$ is of the form \n\\begin{equation}\n \\epsilon_{\\vec{k},\\sigma} = \\Delta + \\delta \\epsilon_{\\vec{k},\\sigma},\n\\end{equation}\nwhere $\\Delta$ is an overall energy offset and ${\\rm min}_{\\vec{k}} \\delta \\epsilon_{\\vec{k},\\sigma} = 0$. Note that $\\Delta$ corresponds to the minimum energy required to remove a fermion from the system.\n \n\\begin{figure}[t!]\n\\centering\n\\epsfig{file=FIG4.pdf, width=0.48\\textwidth}\n\\caption{\\textbf{Multi-photon rotational ARPES scheme.} To probe the rotational response of (doped) anti-ferromagnets, we propose to use a red-detuned ARPES beam $\\delta t_{\\rm A}$ and supplement the missing energy to extract a \"photo electron\" (green) by the lattice modulation $\\delta t_{\\rm L}$, i.e. $\\omega_{\\rm A} + \\omega_{\\rm L} \\approx \\Delta$. We assume that the lattice modulation frequency $\\omega_{\\rm L}$ is far-off resonant from the doublon-hole resonance at energy $\\sim U$. 
In solids, the lattice modulation can be realized by Raman-coupling to suitable phonon modes, see inset.\n}\n\\label{figRARPES}\n\\end{figure}\n\nThe second ingredient for our scheme is a super-lattice modulation, which periodically changes the tunnel coupling between neighboring sites $\\vec{i}$ and $\\vec{j}$:\n\\begin{equation}\n \\hat{V}_{\\rm L}(t) = - \\sum_{\\ij} \\delta t_{\\rm L} \\sin ( \\omega_{\\rm L} t + \\phi_{\\ij} ) \\sum_\\sigma \\l \\hat{c}^\\dagger_{\\vec{j},\\sigma} \\c_{\\vec{i},\\sigma} + \\text{h.c.} \\r.\n \\label{eqDefVL}\n\\end{equation}\nWe assume that the phases $\\phi_\\ij$ on different bonds $\\ij$ can be controlled by the spatial structure of the lattice modulation. If resonant, the perturbation \\eqref{eqDefVL} creates doublon-hole pairs with energy $\\approx U \\gg t,J$. \n\nThe basic strategy to detect rotational excitations $m=0,1,2,3$ is to consider a two-photon transition involving both drives $\\hat{V}_{\\rm A}(t)$ and $\\hat{V}_{\\rm L}(t)$. The allowed transitions are shown in Fig.~\\ref{figRARPES}. We assume that both one-photon transitions are off-resonant, i.e. \n\\begin{flalign}\n |\\delta t_{\\rm A}| &\\ll \\underbrace{\\Delta - \\omega_{\\rm A}}_{> 0}, \\\\\n |\\delta t_{\\rm L}| &\\ll | U - \\omega_{\\rm L} |.\n\\end{flalign}\nIn this case, \"photo electrons\" $\\a_{\\vec{k},\\sigma}$ can only be created by a two-photon process. In the linear response regime, i.e. 
for sufficiently weak modulations $\\delta t_{\\rm A,L}$, the probability per time $\\gamma_{\\rm AL}$ for the two-photon transition is given by\n\\begin{multline}\n\\gamma_{\\rm AL}(\\vec{k}) = 2 \\pi ~ \\delta \\bigl( \\omega_{\\rm L}+\\omega_{\\rm A} - ( E_f(\\vec{k}) - E_i ) \\bigr) \\times \\\\\n\\biggl| \\sum_{r} \\biggl\\{ \\frac{\\bra{\\psi_f(\\vec{k})} \\hat{V}_{\\rm A} \\ket{r} \\bra{r} \\hat{V}_{\\rm L} \\ket{\\psi_i}}{\\omega_{\\rm L} - (E_r - E_i) + i \\Gamma_r} + \\frac{\\bra{\\psi_f(\\vec{k})} \\hat{V}_{\\rm L} \\ket{r} \\bra{r} \\hat{V}_{\\rm A} \\ket{\\psi_i}}{\\omega_{\\rm A} - (E_r - E_i) + i \\Gamma_r} \\biggr\\} \\biggr|^2,\n\\label{eqGammaAL}\n\\end{multline}\nwhere we defined for $\\mu={\\rm A, L}$ the time-independent perturbations $\\hat{V}_\\mu$ by the relation\n\\begin{equation}\n \\hat{V}_\\mu(t) \\equiv e^{ - i \\omega_\\mu t} \\hat{V}_\\mu + \\text{h.c.} ~.\n\\end{equation}\nIn Eq.~\\eqref{eqGammaAL}, $E_f(\\vec{k})$ denotes the energy of the final state $\\ket{\\psi_f(\\vec{k})} = \\ket{\\Psi_f}\\ket{\\vec{k},\\sigma}$ with a \"photo electron\" $\\a_{\\vec{k},\\sigma}$ with momentum $\\vec{k}$, $E_i$ is the energy of the initial state $\\ket{\\psi_i} = \\ket{\\Psi_0} \\ket{0}$ without \"photo electrons\"; $\\ket{\\Psi_f}$ and $\\ket{\\Psi_0}$ denote the final and ground states of the many-body system. The sum $\\sum_r$ is taken over all possible intermediate states $\\ket{r}$ with energy $E_r$ and inverse lifetime $\\Gamma_r$ (where $\\Gamma_r \\to 0^+$ for infinite lifetimes); see e.g.~\\cite{Long2002}.\n\nThe rotational spectra defined in the main text correspond to the response $\\hat{V}_{\\rm L} \\hat{V}_{\\rm A}$, where the ARPES transition takes place before the lattice modulation $\\hat{V}_{\\rm L}$ is applied. 
This ensures that $\hat{V}_{\rm L}$ transfers discrete orbital ($\hat{C}_4$) angular momentum to the many-body system but not to the \"photo electrons\" $\a_{\vec{k},\sigma}$; note that $[\hat{V}_{\rm L}, \hat{V}_{\rm A}] \neq 0$ in general. To ensure that the two-photon response in Eq.~\eqref{eqGammaAL} is dominated by terms where $\hat{V}_{\rm A}$ is applied first, we require that\n\begin{equation}\n 0 < \Delta - \omega_{\rm A} \ll |U-\omega_{\rm L}|.\n\end{equation}\nThat is, the ARPES beam is close to resonance while the lattice modulation is far off-resonant; see Fig.~\ref{figRARPES}. \n\nThe response in Eq.~\eqref{eqGammaAL} simplifies further if we assume that the intermediate states $\ket{r}$ are detuned sufficiently far, \n\begin{equation}\n|\omega_{\rm A} - \Delta| \gg t, J,\n\end{equation}\nto neglect any dependence on the specifics of the intermediate states: $(E_r - E_i) \approx \Delta$. This yields\n\begin{multline}\n\gamma_{\rm AL}(\vec{k}) = 2 \pi ~ \frac{| \bra{\psi_f(\vec{k})} \hat{V}_{\rm L} \hat{V}_{\rm A} \ket{\psi_i}|^2 }{ (\omega_{\rm A} - \Delta)^2 } \times \\\n \delta \bigl( \omega_{\rm L}+\omega_{\rm A} - ( E_f(\vec{k}) - E_i ) \bigr).\n \label{eqALres}\n\end{multline}\nNote, however, that this approximation does not affect the energy delta-function in Eq.~\eqref{eqGammaAL}.\n\nAlternatively, if we assume that the quasiparticle lifetimes of the intermediate states are all relatively short,\n\begin{equation}\n \t\Gamma_r \approx \Gamma \geq |E_r - E_i| \approx \Delta,\n\t\label{eqGammaApprx}\n\end{equation}\nwe can again sum over all intermediate states and get\n\begin{multline}\n\gamma_{\rm AL}(\vec{k}) \n\approx 2 \pi ~ \frac{| \bra{\psi_f(\vec{k})} \hat{V}_{\rm L} \hat{V}_{\rm A} \ket{\psi_i}|^2 }{ | \omega_{\rm A} - \Delta + i \Gamma |^2 } \times \\\n \delta \bigl( \omega_{\rm L}+\omega_{\rm A} - ( E_f(\vec{k}) - E_i ) \bigr).\n 
\\label{eqALresGamma}\n\\end{multline}\n\n\nFinally, for $m=0,1,2,3$ we consider a specific lattice modulation $\\hat{V}_{\\rm L}$, with the following phases:\n\\begin{flalign}\n \\phi_{\\langle \\vec{j}, \\vec{j} + \\vec{e}_x \\rangle} &= \\pi m j_x, \\label{eqPhaseMod1}\\\\\n \\phi_{\\langle \\vec{j}, \\vec{j} + \\vec{e}_y \\rangle} &= \\frac{\\pi}{2} m + \\pi m j_y.\n \\label{eqPhaseMod2}\n\\end{flalign}\nFor this choice one obtains\n\\begin{multline}\n \\hat{V}_{\\rm L} \\hat{V}_{\\rm A} \\ket{\\psi_i} = \\frac{\\delta t_{\\rm L} \\delta t_{\\rm A}}{4} \\sum_{\\vec{k}, \\sigma} \\ket{\\vec{k},\\sigma} \\frac{1}{\\sqrt{V}} \\sum_{\\vec{j}} e^{- i (\\vec{k} + \\vec{\\pi} m) \\cdot \\vec{j}} \\\\\n \\sum_{\\vec{i}: \\ij} e^{i m \\varphi_{\\vec{i} - \\vec{j}}} \\sum_{\\sigma'} \\hat{c}^\\dagger_{\\vec{j},\\sigma'} \\c_{\\vec{i},\\sigma'} \\c_{\\vec{j},\\sigma} \\ket{\\Psi_0},\n\\end{multline}\nwhere $V=L^2$ is the area of the system, $\\vec{\\pi}=(\\pi,\\pi)^T$ and $\\varphi_{\\vec{i} - \\vec{j}}$ is the polar angle of the vector $\\vec{i} - \\vec{j}$. Plugging this result into Eq.~\\eqref{eqALres} yields the rotational ARPES spectrum discussed in the main text in spectral, or Lehmann, representation:\n\\begin{multline}\n\\gamma_{\\rm AL}^{(m)}(\\vec{k}) \\propto \\sum_{\\sigma=\\uparrow, \\downarrow} \\sum_{\\ket{f}} |\\bra{f} \\hat{R}_{m,\\sigma}(\\vec{k} + \\vec{\\pi} m) \\ket{\\Psi_0} |^2 \\\\\n\\times \\delta \\bigl( \\omega_{\\rm L}+\\omega_{\\rm A} - ( E_f(\\vec{k}) - E_i ) \\bigr).\n\\end{multline}\nHere $\\hat{R}_{m,\\sigma}(\\vec{k})$ creates rotational states with total momentum $\\vec{k}$, as defined in Eq.~\\eqref{eqDefRmk} in the main text; we summed over all final states $\\ket{f}$ of the many-body system contributing to instances where a \"photo electron\" with momentum $\\vec{k}$ and arbitrary spin $\\sigma$ is detected. 
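As a quick numerical sanity check (ours, not part of the proposal itself), the four discrete rotational channels $m=0,1,2,3$ entering through the bond phases $e^{i m \varphi}$ can be verified to be mutually orthogonal, so the modulation patterns of Eqs.~\eqref{eqPhaseMod1}, \eqref{eqPhaseMod2} address the $\hat{C}_4$ sectors independently. The sketch below assumes only the four nearest-neighbour bond angles of the square lattice:

```python
import numpy as np

# Polar angles of the four nearest-neighbour bonds of the square lattice.
phis = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])  # +x, +y, -x, -y

# Bond phase patterns e^{i m phi} carried by the rotational operators R_m.
patterns = {m: np.exp(1j * m * phis) for m in range(4)}

# Overlap matrix G_{m'm} = sum over bonds of e^{i (m - m') phi}: the four
# discrete rotational channels are mutually orthogonal, G = 4 * identity.
G = np.array([[np.vdot(patterns[mp], patterns[m]) for m in range(4)]
              for mp in range(4)])
orthogonal = np.allclose(G, 4 * np.eye(4))

# Under a C4 rotation (phi -> phi + pi/2) each pattern only picks up the
# phase e^{i m pi/2}, i.e. R_m carries C4 angular momentum m.
c4_ok = all(np.allclose(np.exp(1j * m * (phis + np.pi / 2)),
                        np.exp(1j * m * np.pi / 2) * patterns[m])
            for m in range(4))
print(orthogonal, c4_ok)  # True True
```

This is the discrete analogue of the angular-momentum selection rule exploited by the multi-photon scheme.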
\n\nThe following corrections to the scheme presented above can be expected: (i) In a Hubbard model, the lattice modulation may lead to recombinations of virtual doublon-hole pairs. Such processes correspond to intermediate states $\ket{r}$ with energies $E_r - E_i \simeq J$, i.e. they are close to resonance. However, the spectral weights of such processes are strongly suppressed by the probability for virtual doublon-hole pairs, which scales as $\simeq (t\/U)^2 \ll 1$. (ii) Along with the tunneling $t$, the spin-exchange $J$ is modulated accordingly at the frequency $\omega_{\rm L}$. This leads to another contribution to the response $\gamma_{\rm AL}^{(m)}$. Since $J < t$ it follows that $\delta J < \delta t$, and we neglect this sub-dominant contribution. As can be seen from the delta-function in Eq.~\eqref{eqGammaAL}, all these corrections can only modify the respective spectral weights but not the positions or symmetry properties of the expected spectral lines.\n\n\n\subsubsection{Realizations with ultracold atoms}\nTo realize the two-photon scheme introduced above for ultracold atoms in optical lattices, the \"photo electron\" channel must be replaced by non-interacting atomic states which are experimentally accessible. One option is to use a third internal atomic state, which may occupy the same lattice, but does not interact with the two spin states $\sigma=\uparrow, \downarrow$ used to realize the doped Hubbard model. Such settings have been proposed \cite{Dao2007} and realized in the continuum \cite{Stewart2008} and in lattice systems under a quantum gas microscope \cite{Brown2019}. A second option is to use spatially separated states to realize the ARPES channel \cite{Kollath2007,Bohrdt2018}: Specifically, a decoupled layer in the optical lattice, adjacent to the physical system, can be utilized in quantum gas microscopes \cite{Preiss2015,Koepsell2020PRL}. 
The offset $\\Delta$ corresponds to the energy offset between the physics and detection layer, and can be independently tuned in ultracold atom systems. \n\nIn all these systems, momentum resolved detection of the $\\a_{\\vec{k},\\sigma}$ particles is required. In addition to the momentum resolution itself, this yields the energy of the emitted \"photo electron\" provided its dispersion relation $\\epsilon_{\\vec{k},\\sigma}$ can be independently determined. The required momentum resolution can be obtained by time-of-flight or band-mapping techniques. In quantum gas microscopes with a restricted field of view, a $T\/4$ oscillation in a harmonic trapping potential \\cite{Murthy2014} can be utilized to map momentum to position states \\cite{Bohrdt2018,Brown2019}. A detailed discussion of the proposed ARPES scheme can be found in Ref.~\\cite{Bohrdt2018}.\n\nIf no momentum resolution is desired, the separate detection layer can be replaced by a local probe. In this case the $\\a$-particles have no dispersion: hence their final energy is determined by $\\epsilon_{\\vec{k},\\sigma} \\equiv \\Delta$ and momentum resolution is no longer required to obtain the final state energy. An experimental realization requires a localized optical potential to spatially confine the final $\\a$-state \\cite{Kollath2007}. \n\nIn the two-layer setting, the ARPES coupling itself, Eq.~\\eqref{eqDefVA}, can be realized by introducing a weak tunnel coupling $\\delta t_{\\rm A}$ between the layers and modulating it at the frequency $\\omega_{\\rm A}$. This can be easily achieved by an intensity modulation of the lattice beams. 
In settings where an auxiliary internal atomic state is used, the ARPES coupling corresponds to a radio-frequency, or microwave, transition with frequency $\\omega_{\\rm A}$.\n\nFinally the lattice modulation, Eq.~\\eqref{eqDefVL}, with phase choices as in Eqs.~\\eqref{eqPhaseMod1}, \\eqref{eqPhaseMod2} can be realized by modulated super-lattice potentials in $x$- and $y$-direction. The cases of $s$- and $d$-wave modulations are particularly simple and do not even require an extra super-lattice potential: Here it is sufficient to homogeneously modulate the tunnelings along $x$- and $y$-directions. For $s$-wave, modulations in phase are needed, while $d$-wave requires a $\\pi$-phase shift between the modulations along $x$ and $y$ respectively. \n\n\n\n\\subsubsection{Realizations in solids}\nIn copper-oxide layers, we propose to combine state-of-the-art ARPES with a periodically driven optical phonon mode. These phonons couple to the hopping integral of the electrons, thus realizing the desired lattice modulation $\\hat{V}_{\\rm L}(t)$. The symmetry of the involved phonon mode determines the phases $\\phi_{\\ij}$ of this lattice modulation, responsible for the transfer of angular momentum to the many-body system. \n\nIn the following we will only consider two distinct Raman-active phonon modes, with $s$- and $d$-wave symmetry, respectively. Specifically we will discuss the $A_{1g}$ and $B_{1g}$ buckling modes in the YBCO class. Because the Cu-O plane is not the symmetry plane in these materials, a crystal electric field can introduce a linear coupling of the electrons to the buckling modes of the oxygen ions \\cite{Devereaux1994,Rosch2004}. 
The effect of such phonons in the effective $t-J$ model of the cuprates is to modify the tunneling matrix elements as \\cite{Normand1995}:\n\\begin{equation}\n t_{\\langle \\vec{j}, \\vec{j} + \\vec{e}_\\mu \\rangle} = t \\left[ 1 - \\lambda_t ( u_{\\vec{j}}^\\mu \/ a ) \\right] , \\qquad \\mu=x,y;\n \\label{eqTModulation}\n\\end{equation}\nsee Ref.~\\cite{Devereaux1994} for a discussion of the effect in the three-band model. Here $\\vec{e}_\\mu$ denotes the unit vector along $\\mu$-direction, $a$ is the lattice constant, $\\lambda_t$ is a dimensionless phonon coupling, and $u_{\\vec{j}}^x$ [$u_{\\vec{j}}^y$] is the displacement of the $O(2)$ [$O(3)$] oxygen ion from its equilibrium position along the $c$ axis of the crystal. Note that a similar modification as in Eq.~\\eqref{eqTModulation} is expected for the exchange integral \\cite{Normand1995,Sherman1997}, which we neglect here. \n\nThe symmetry of the phonon mode determines the relative signs of $u_{\\vec{j}}^\\mu$: For the $A_{1g}$ mode with $s$-wave symmetry, $u_{\\vec{j}}^x = u_{\\vec{j}}^y$; the $B_{1g}$ mode has $d$-wave symmetry:\n\\begin{equation}\nu_{\\vec{j}}^x = - u_{\\vec{j}}^y,\n\\end{equation}\nwhich is the central ingredient required for measuring the $d$-wave rotational ARPES spectrum. \n\nCombining the ingredients above, we can use the following electron-phonon Hamiltonian to describe the lattice modulation \\cite{Rosch2004}:\n\\begin{equation}\n \\hat{V}_{\\rm L} = \\sum_{\\ij, \\sigma} \\hat{c}^\\dagger_{\\vec{j},\\sigma} \\c_{\\vec{i},\\sigma} \\sum_{\\vec{q},\\nu} g_{\\ij}(\\vec{q},\\nu) \\l \\b_{\\vec{q},\\nu} + \\hat{b}^\\dagger_{-\\vec{q},\\nu} \\r. \n \\label{eqHeffVLsolids}\n\\end{equation}\nHere $\\nu$ labels the relevant phonon modes and the coupling $g_{\\ij}(\\vec{q},\\nu) \\propto t \\lambda_t$ reflects the symmetry of the respective phonon mode. 
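The correspondence between the $m=2$ modulation phases of Eqs.~\eqref{eqPhaseMod1}, \eqref{eqPhaseMod2} and the $B_{1g}$ displacement pattern $u^x_{\vec{j}} = -u^y_{\vec{j}}$ can be checked directly. A minimal sketch (the helper names are ours, purely illustrative) evaluates the relative sign of the tunneling modulation on $x$- and $y$-bonds:

```python
import numpy as np

def bond_phase(m, bond, j):
    """Phases of Eqs. (eqPhaseMod1)-(eqPhaseMod2); j = (jx, jy), bond in {'x','y'}."""
    jx, jy = j
    if bond == 'x':
        return np.pi * m * jx
    return np.pi / 2 * m + np.pi * m * jy

# For even m all phases are multiples of pi, so sin(wt + phi) = s * sin(wt)
# with a relative sign s = cos(phi) = +-1 on each bond.
def bond_sign(m, bond, j):
    return int(np.round(np.cos(bond_phase(m, bond, j))))

sites = [(jx, jy) for jx in range(4) for jy in range(4)]

# m = 0: all bonds modulated in phase -> s-wave (A_1g-like) pattern.
s_wave = all(bond_sign(0, b, j) == 1 for j in sites for b in 'xy')

# m = 2: x- and y-bonds exactly out of phase -> d-wave (B_1g-like) pattern,
# mirroring the phonon displacements u^x = -u^y.
d_wave = all(bond_sign(2, 'x', j) == -bond_sign(2, 'y', j) == 1 for j in sites)
print(s_wave, d_wave)  # True True
```

Hence driving the $B_{1g}$ ($A_{1g}$) mode implements the $m=2$ ($m=0$) lattice modulation.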
\n\nTo measure the rotational ARPES spectrum, we propose to drive the desired $\vec{q}=0$ phonon using a pair of Raman lasers, see inset in Fig.~\ref{figRARPES}; thus our scheme becomes a three-photon setup. Effectively we can describe the Raman drive by replacing $\b_{\vec{q},\nu} + \hat{b}^\dagger_{-\vec{q},\nu}$ in Eq.~\eqref{eqHeffVLsolids} with $\delta_{\vec{q},0} \beta_\nu e^{-i \omega_{\rm L} t} \/2 + \text{h.c.} $, where $\omega_{\rm L}$ is the frequency of the drive. To obtain a sizable amplitude $\beta_\nu$, the frequency $\omega_{\rm L}$ should be close to the phonon frequency $\Omega_\nu$. For example, the $B_{1g}$ mode in YBCO is located at $\Omega_{B_{1g}} \approx 42 {\rm meV}$. As a result, we obtain $ \hat{V}_{\rm L}(t) $ as in Eq.~\eqref{eqDefVL} with:\n\begin{equation}\n \delta t_{\rm L} = | \beta_\nu ~ g_{\ij}(\vec{0}, \nu)|,~~~ e^{i \phi_{\ij}} = \begin{cases} \n e^{i 2 \varphi_{\vec{i} - \vec{j}}}, & \nu = B_{1g} \\ \n1, & \nu = A_{1g} \end{cases} .\n\end{equation}\n\nSince typical super-exchange energies in YBCO materials are around $J \approx 125 {\rm meV}$, we obtain a situation where the ARPES beam is essentially resonant and can directly create hole excitations. That is, the detuning between the two Raman beams should be around $\omega_{\rm L} \approx 40 {\rm meV}$. In this regime, we use Eq.~\eqref{eqGammaApprx} and assume that the lifetime of hole excitations is short. This is justified by the overall broad structure of the conventional ARPES spectrum, see e.g. the dash-dotted bottom line in Fig.~\ref{figRotSetup} (c). 
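The requirement that $\omega_{\rm L}$ be close to $\Omega_\nu$ can be illustrated by the steady-state response of a damped, Raman-driven phonon mode in the rotating frame, $\beta_\nu \simeq (g_{\rm R}/2)/(\Omega_\nu - \omega_{\rm L} - i\kappa)$. All numbers below are illustrative placeholders, not material parameters:

```python
import numpy as np

# Steady-state amplitude of a damped, harmonically driven phonon mode in the
# rotating frame. Omega, kappa, drive are illustrative values only.
Omega = 42.0   # phonon frequency (meV), the B1g scale quoted above
kappa = 2.0    # phenomenological damping (meV), assumed
drive = 1.0    # Raman drive strength (arbitrary units)

omega_L = np.linspace(20.0, 60.0, 2001)
beta = 0.5 * drive / (Omega - omega_L - 1j * kappa)

# |beta| is maximal when the Raman detuning matches the phonon frequency,
# which is why omega_L should be tuned close to Omega_nu.
omega_peak = omega_L[np.argmax(np.abs(beta))]
print(omega_peak)  # ~42.0
```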
To distinguish the direct and rotational ARPES signals, we propose to take a difference measurement of the photo-electron signal with ($\delta t_{\rm L} \neq 0$) and without ($\delta t_{\rm L} = 0$) the Raman beams on.\n\n\n\subsubsection{Extensions to STM probes}\nThe general multi-photon scheme proposed above to probe rotational excitations of mobile dopants can be straightforwardly generalized to scanning-tunneling microscopy (STM) setups. In this case the photo-electron with momentum $\vec{k}$ created by $\hat{V}_{\rm A}$ is replaced by a localized electronic final state at site $\vec{j}_0$, i.e. \n\begin{equation}\n \hat{V}_{\rm A, STM}(t) = - \delta t_{\rm A} \sin( \omega_{\rm A} t) \sum_{\sigma} \l \hat{a}^\dagger_{\vec{j}_0,\sigma} \c_{\vec{j}_0,\sigma} + \text{h.c.} \r.\n \label{eqDefVAstm}\n\end{equation}\nThe resulting rotational STM signal is obtained by integrating the rotational ARPES spectrum over all momenta:\n\begin{equation}\nI_{\rm rot}^{(m)}(\omega) = \int_{\rm BZ} d^2\vec{k} ~ A_{\rm rot}^{(m)}(\vec{k}, \omega).\n\end{equation}\n\nThe bandwidth of rotational excitations (with $m_4 \neq 0$) is significantly smaller than in the vibrational ground state (with $m_4=0$). Hence, the rotational STM protocol without full momentum resolution is sufficient to resolve rotational meson excitations. As an example, we show the expected STM signal for the experimentally most relevant case $t\/J=3$ in Fig.~\ref{figSTMrot}. Indeed, in addition to the most prominent vibrational ground state at $m_4=0$, the rotational excitations at $m_4=1,2$ are clearly visible as distinct quasiparticle peaks. For $m_4=1,2$ we also observe a weak signal at the ground state energy, owing to hybridization of rotational and vibrational states at non-$C_4$-invariant momenta. \n\nIn Fig.~\ref{figSTMrot} we also compare our numerical results to the expected integrated spectrum from the spinon-chargon toy model. 
In the latter, we include small shifts on the energy axis and weak broadening as fit parameters to obtain better agreement. As for the ARPES spectra at the nodal point, shown in the main text, we find that the toy model reasonably predicts the overall shape even of the incoherent part of the spectrum and the strong suppression of spectral weight at high energies.\n\n\begin{figure}[t!]\n\centering\n\epsfig{file=FIG5.pdf, width=0.49 \textwidth}\n\caption{\textbf{Rotational STM spectra.} The integrated rotational ARPES signal $I^{(m)}_{\rm rot}(\omega)$, which can be measured by the proposed multi-photon rotational STM scheme, is calculated for the $t-J$ model at $t\/J=3$. The rotational meson states are still clearly visible as pronounced peaks above the ground state. We consider an extended $4$-leg cylinder of length $L_x=40$, for which the rotational ARPES signal is integrated over $k_x$ and summed over all discrete values of $k_y=-\pi\/2,0,\pi\/2,\pi$. We compare our numerical DMRG results (solid lines) with predictions by the spinon-chargon toy model (shaded areas); for the latter, the four lowest-lying states were broadened by $J\/4$ (similar to the observed Fourier broadening in the DMRG numerics) while all higher excited states were artificially broadened by $J$. In addition, the toy model curves for $m_4=1$ ($m_4=2$) were shifted by a fitted $\Delta \omega = 0.9 J$ ($\Delta \omega = 1.2 J$) towards the lowest-lying quasiparticle peak. \n}\n\label{figSTMrot}\n\end{figure}\n\n\n\n\subsection{TD-DMRG simulations}\n\label{MethodsB}\nWe use time-dependent matrix product state methods \cite{Schollwock2011,Kjall2013,Zaletel2015,Paeckel2019a}, in particular the TeNPy package \cite{Hauschild2018SciPost,hauschildTenpy}, to numerically calculate the rotational ARPES spectrum introduced in the main text. 
To this end, we start from the numerically obtained ground state $\ket{\Psi_0}$ of the Heisenberg model on a $4 \times L_x$ cylinder, and apply the rotational operator $\hat{R}_{m,\sigma}(\vec{j})$ introduced in Eq.~\eqref{eqDefRmj} below.\nWe assume that the resulting one-hole states are described by the $t-J$ model on the same lattice. In two dimensions and for a single hole, the Hamiltonian of the $t-J$ model becomes~\cite{Auerbach1998},\n\begin{equation}\n\H_{t-J} = -t \sum_{\ij, \sigma} \hat{\mathcal{P}} \left( \hat{c}^\dagger_{\vec{i},\sigma} \c_{\vec{j},\sigma} + h.c. \right) \n\hat{\mathcal{P}}+ J \sum_\ij \hat{\mathbf{S}}_{\vec{i}} \cdot \hat{\mathbf{S}}_{\vec{j}},\n\label{eq:tjmodel}\n\end{equation}\nwhere $\hat{\mathcal{P}}$ projects onto states with fewer than two fermions $\c^{(\dagger)}_{\vec{j},\sigma}$ per site. The first term describes tunneling of holes with amplitude $t$ and the second term describes spin-exchange interactions with coupling constant $J=4 t^2\/U$, where $U$ is the Hubbard interaction. 
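The relation $J = 4t^2/U$ can be checked in a minimal sketch by exact diagonalization of a two-site Hubbard model at half filling: the singlet-triplet splitting approaches $4t^2/U$ for $U \gg t$ (the basis ordering and sign conventions below are ours, chosen for illustration):

```python
import numpy as np

# Two-site Hubbard model at half filling, Sz = 0 sector.
# Basis: |up,dn>, |dn,up>, |updn,0>, |0,updn>.
def singlet_triplet_gap(t, U):
    H = np.array([[0., 0., -t, -t],
                  [0., 0.,  t,  t],
                  [-t,  t,  U, 0.],
                  [-t,  t, 0.,  U]])
    E = np.linalg.eigvalsh(H)
    # The ground state is the singlet; the triplet sits exactly at E = 0,
    # so the super-exchange splitting is -E[0].
    return -E[0]

t, U = 1.0, 50.0
gap = singlet_triplet_gap(t, U)
print(gap, 4 * t**2 / U)  # gap -> 4 t^2 / U for U >> t
```

The exact splitting is $(\sqrt{U^2 + 16t^2} - U)/2$, whose large-$U$ expansion reproduces the $4t^2/U$ coupling used above.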
\n\nTo evaluate the rotational spectral function $ A_{\rm rot}^{(m)}(\vec{k},\omega)$ from Eq.~\eqref{eqDefArot}, we express it in real space and time,\n\begin{equation}\n A_{\rm rot}^{(m)}(\vec{k},\omega) = \int_0^{\infty} dt \, e^{i \omega t} \sum_{\vec{j}} e^{-i \vec{k} \cdot \vec{j}} \mathcal{G}_{\rm rot}^{(m_4)}(\vec{j},t),\n \label{eqArotTime}\n\end{equation}\nsee also Ref.~\cite{Bohrdt2020}, where the rotational Green's function in real space is defined as \n\begin{equation}\n\mathcal{G}_{\rm rot}^{(m_4)}(\vec{j},t) = \theta(t) \sum_\sigma \bra{\Psi_0} \hat{R}^\dagger_{m_4,\sigma}(\vec{j},t) \hat{R}_{m_4,\sigma}(\vec{0},0) \ket{\Psi_0},\n\end{equation}\nwith the corresponding rotational operator\n\begin{equation}\n\hat{R}_{m_4,\sigma}(\vec{j}) = \sum_{\vec{i}: \ij} e^{i m_4 \varphi_{\vec{i} - \vec{j}}} \sum_{\sigma'} \hat{c}^\dagger_{\vec{j},\sigma'} \c_{\vec{i},\sigma'} \c_{\vec{j},\sigma}.\n\label{eqDefRmj}\n\end{equation}\n\nWe start by numerically calculating the ground state of the Heisenberg model, using a bond dimension of $\chi=600$. Subsequently, we apply the rotational operator as defined in Eq.~\eqref{eqDefRmj} at the origin and time-evolve the resulting state under the $t-J$ Hamiltonian using time-dependent matrix product state methods \cite{Kjall2013,Zaletel2015}. We thus obtain the rotational Green's function in real space and time. We use linear prediction to increase the time window and multiply our data with a Gaussian envelope in order to minimize the weight of the data generated by the linear prediction in the spectrum \cite{Verresen2018,Bohrdt2020}. Finally, we perform a Fourier transformation in time and space to obtain the spectral function $ A_{\rm rot}^{(m)}(\vec{k},\omega)$ from Eq.~\eqref{eqDefArot}. 
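The time-to-frequency step can be illustrated on a synthetic Green's function with two known poles. The sketch below (illustrative weights and energies in units of $J$; the linear-prediction step is omitted) mirrors the Gaussian-envelope Fourier transform described above:

```python
import numpy as np

# Synthetic real-time "Green's function" with two known excitation energies;
# this stands in for the time-dependent MPS output.
E_true = np.array([1.5, 4.0])   # illustrative excitation energies
w_true = np.array([0.7, 0.3])   # illustrative spectral weights
tmax, dt = 60.0, 0.05
times = np.arange(0.0, tmax, dt)
G = (w_true[:, None] * np.exp(-1j * E_true[:, None] * times)).sum(axis=0)

# Gaussian envelope suppressing the abrupt cutoff at tmax (used here in place
# of linear prediction plus windowing applied to the actual DMRG data).
envelope = np.exp(-0.5 * (times / (tmax / 3)) ** 2)

# Fourier transform in time: A(w) ~ Re int_0^inf dt e^{i w t} G(t).
omegas = np.linspace(0.0, 6.0, 1201)
A = np.array([np.real(np.sum(np.exp(1j * w * times) * G * envelope)) * dt
              for w in omegas])

# The resulting spectrum peaks at the input excitation energies.
peak = omegas[np.argmax(A)]
print(peak)  # ~1.5, the dominant pole
```

The envelope trades resolution for the suppression of cutoff ringing, which is why the lowest peaks in the DMRG spectra acquire a Fourier broadening $\sim J/4$.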
We carefully checked our results for convergence with the bond dimension.\n\n\n\subsection{Spinon-chargon toy model}\n\label{MethodsC}\nThe identification of rotational and vibrational resonances in the spectrum provides compelling evidence that magnetic polarons are composite objects with an internal structure. We describe the latter by an effective theory, which models magnetic polarons as bound states of spinons and chargons connected by a string on a square lattice, see Fig.~\ref{figRotSetup} (a), with a linear string tension calculated from spin-correlations in the undoped parent AFM \cite{Grusdt2018PRX}. In addition, we extend earlier approaches \cite{Bulaevskii1968,Brinkman1970,Trugman1988,Shraiman1988a,Grusdt2018PRX,Grusdt2019PRB} by including spinon dynamics explicitly in our theory. Details of our theoretical description are presented in the supplements \ref{SuppMatToyModel}.\n\nIn Fig.~\ref{figRotSetup} (c) we compare our DMRG spectra to predictions by the spinon-chargon toy model. To capture Fourier broadening present in our DMRG results, we broadened the lowest rotational and vibrational peaks in the toy model by $\sigma_0=J\/4$. For a better comparison of the overall spectral weight and shape, we added small overall energy shifts $\Delta \omega$ separately for each $m_4$ and introduced broadening $\sigma_1 = J$ of all higher excited states. Such broadening is expected to arise from couplings to magnon excitations in the AFM \cite{Grusdt2018PRX,Wrzosek2020}, which we neglect in our toy model calculation so far. \n\nThe resulting toy-model prediction is in good agreement with the full numerical spectra: It captures the spectral weight of the low-energy mesonic resonances $\mathsf{1S}$, $\mathsf{2S}$, $\mathsf{1P}$, $\mathsf{1D}$ and $\mathsf{1F}$. Remarkably, even the spectral features at higher energies are correctly described. 
In particular, this includes the strong suppression of spectral weight in the $m_4=0$ rotational spectrum (red line in Fig.~\ref{figRotSetup}) between $\omega = -5 J$ and $\omega = 0$, which is followed by a broad continuum at higher energies. This should be contrasted with the non-vanishing spectral weight found in the same frequency range for $m_4 \neq 0$ rotational spectra and for the standard ARPES spectrum with $m_4=0$ (bottom line in Fig.~\ref{figRotSetup}). The tails at the highest energies are also correctly described by the toy model.\n\nIn Fig.~\ref{figReggeTraj} we compare Regge-like trajectories. Here we used the simplified toy model \cite{Grusdt2018PRX} (solid lines in Fig.~\ref{figReggeTraj}) which neglects spinon dynamics and explains the characteristic scaling with $J\/t$, without any free fit parameters. A comparison to the full spinon-chargon toy model including spinon dynamics is provided in Fig.~\ref{figToyModelRegge} in the supplements (section \ref{SecToyModelRes}): There we confirm the power-laws from Eqs.~\eqref{eqErotScaling} and \eqref{eqEvibScaling} and find quantitative agreement similar to that in Fig.~\ref{figReggeTraj}, again without any free fit parameters. Notably, the refined spinon-chargon toy model with spinon dynamics predicts a weak splitting of $\mathsf{1P}$\/$\mathsf{1F}$ and $\mathsf{1D}$ resonances as found by DMRG. Further comparison of Regge-like trajectories to the full spinon-chargon toy model can be found in the supplements \ref{SuppMatAddNum}.\n\nIn the insets in Fig.~\ref{figSpectralLines} (b) and (c) we show full spectral cuts at low energy predicted by the spinon-chargon toy model, without any free fit parameters. As observed in our full DMRG results, we find that the rotational mesonic states have much weaker dispersion than the $\mathsf{1S}$ magnetic polaron ground state. 
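The $(J/t)^{2/3}$ power law behind the Regge-like trajectories can be reproduced in the crudest version of the string picture: a single chargon hopping with amplitude $t$ on a semi-infinite string with linear tension $\sigma J$ (here $\sigma$ is an illustrative dimensionless tension, not fitted to the DMRG data), whose low-lying gaps follow the Airy scaling $\propto t^{1/3}(\sigma J)^{2/3}$:

```python
import numpy as np

# Chargon at string length l = 0, 1, 2, ... with hopping t and a linear
# confining potential sigma*J*l; an open boundary sits at l = 0.
def string_gap(t, J, sigma=2.0, L=300):
    H = np.diag(sigma * J * np.arange(L)) \
        - t * (np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1))
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]  # first vibrational gap

t = 1.0
Js = np.array([0.005, 0.01, 0.02, 0.05])
gaps = np.array([string_gap(t, J) for J in Js])

# Fit the exponent of gap ~ J^alpha on a log-log scale; the continuum (Airy)
# limit gives alpha = 2/3.
alpha = np.polyfit(np.log(Js), np.log(gaps), 1)[0]
print(alpha)  # close to 2/3
```

This is only the scaling skeleton of the toy model; the full spinon-chargon theory additionally includes the spinon dynamics and the lattice geometry of the strings.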
The toy model predicts some entirely flat bands, similar to the flat bands predicted for mesonic bound states of two identical partons \\cite{Shraiman1988a}, in addition to weakly dispersing bands. The overall distribution of spectral weight is also qualitatively captured by the toy model.\n\n\n\n\n\\section*{Acknowledgements}\nThe authors thank M. Knap, Z.X. Shen, A. Cavalleri, E.-A. Kim, I. Morera Navarro, S. Sachdev, U. Schollw\\\"ock, I. Bloch, M. Greiner, J. Koepsell, and F. Pollmann for fruitful discussions.\nThis research was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2111 -- 390814868. ED was supported by the ARO grant number W911NF-20-1-0163, the NSF grant EAGER-QAC-QSA 2222-206-2014111, the NSF grant OAC-1934714, and the Harvard-MIT CUA.\n\n\n\\section*{References}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Properties of the Operators}\n\\label{app:lm:costgstar}\n\\begin{lemma}\n\\label{lm:KPrelation_0}\nConsider matrices $P,Q,R,A,B$ of appropriate dimensions with $P,Q$ being PSD matrices and $R$ being a PD matrix. Define $\\Phi(P,K,Q,R,A,B):=Q + K^\\tp RK\n+ (A+BK)^\\tp P(A+BK)$. Then,\n\\begin{enumerate}\n\\item[(i)]\n\\begin{small}\n\\begin{align}\n\\Omega(P,Q,R,A,B) &= \\Phi(P,\\Psi(P,R,A,B),Q,R,A,B)\\notag \\\\& = \\min_K \\Phi(P,K,Q,R,A,B).\n\\label{eq:relation1}\n\\end{align}\n\\end{small}\nNote that the minimization is in the sense of partial order $\\preceq$, that is, the minimum value $\\Omega(P,Q,R,A,B) \\preceq \\Phi(P,K,Q,R,A,B)$ for all $K$.\n\n\\item[(ii)] Furthermore, for PSD matrices $Y_1 $ and $Y_2$ such that $Y_1 \\preceq Y_2$, we have\n\\begin{align}\n\\Omega(Y_1,Q,R,A,B) \\preceq \\Omega(Y_2,Q,R,A,B).\n\\label{eq:relation2}\n\\end{align}\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nThe statements in the above lemma can be found in the literature (see, for example, \\cite[Chapter 2]{whittle_Optimal_Control}). 
We provide a proof for completeness.\nIt can be established by straightforward algebraic manipulations that \n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n&\\Phi(P,K,Q,R,A,B) = \\Omega(P,Q,R,A,B) \\notag \\\\\n&+(K - \\Psi(P,R,A,B))^\\tp\\mathcal{R}(K - \\Psi(P,R,A,B)), \\label{eq:new_riccati_1}\n\\end{align}\n\\end{small}\nwith $\\mathcal{R}= R + B^\\tp PB$. Then \\eqref{eq:new_riccati_1} implies that $\\Phi(P,K,Q,R,A,B)$ is minimized when $K=\\Psi(P,R,A,B)$ and the minimum value is $\\Omega(P,Q,R,A,B)$.\n\nFor PSD matrices $Y_1 $ and $Y_2$ such that $Y_1 \\preceq Y_2$, it is straightforward to see that $\\Phi(Y_1,K,Q,R,A,B) \\preceq \\Phi(Y_2,K,Q,R,A,B)$ for any $K$. Hence, \n\\begin{align}\n \\min_K \\Phi(Y_1,K,Q,R,A,B) \\preceq \\min_K \\Phi(Y_2,K,Q,R,A,B).\n \\label{min_K_inequality}\n\\end{align}\nFrom \\eqref{min_K_inequality} and \\eqref{eq:relation1}, it follows that \\eqref{eq:relation2} is correct.\n\\end{proof}\n\n\n\\section{Proof of Lemma \\ref{lm:MJ_infinite}}\n\\label{app:lm_MJ_infinite}\nIf matrices $P^{\\diamond}_{t}(m)$, $m \\in \\mathcal{M}$, converge as $t \\to -\\infty$ to PSD matrices $P^{\\diamond}_{*}(m)$, then by continuity, the collection of PSD matrices $P^{\\diamond}_* = \\{P^{\\diamond}_*(0),\\ldots, P^{\\diamond}_*(M)\\}$ satisfy the DCARE in \\eqref{eq:CARE_infinite}. Since the DCARE \\eqref{eq:CARE_infinite} has a PSD solution $P^{\\diamond}_* = \\{P^{\\diamond}_*(0),\\ldots, P^{\\diamond}_*(M)\\}$, then from \\cite[Proposition 7]{costa1995discrete} and the SD assumption of the MJLS, it is also a stabilizing solution of the DCARE (\\cite[Definition 3]{costa1995discrete} and \\cite[Definition 4.4]{costa2006discrete}). 
Then, the MJLS is SS from the definition of the stabilizing solution.\n\n\nOn the other hand, if the MJLS is SS, under the SD assumption of the MJLS, \\cite[Corollary A.16]{costa2006discrete} ensures the existence of a stabilizing solution of the DCARE in \\eqref{eq:CARE_infinite}.\nThe solution is also the unique PSD solution from \\cite[Theorem A. 15]{costa2006discrete} (by taking $X=0$ in Theorem A. 15). Then from \\cite[Proposition A. 23]{costa2006discrete}, matrices $P^{\\diamond}_{t}(m)$, $m \\in \\mathcal{M}$, converge as $t \\to -\\infty$ to PSD matrices $P^{\\diamond}_{*}(m)$.\n\n\\section{Proof of Lemma \\ref{lm:pc_2C}}\n\\label{proof_lm:pc_2C}\nBecause of Lemma \\ref{equality_recursions_2C}, $P^0_t = P^{\\diamond}_t(0)$ and $P^1_t = P^{\\diamond}_t(1)$, where matrices $P^{\\diamond}_t(0), P^{\\diamond}_t(1)$ are defined by \\eqref{eq:P_MJ_init} - \\eqref{P_MJ_cmp_2C_1} for the auxiliary MJLS. Thus, we can focus on the convergence of matrices $P^{\\diamond}_t(0), P^{\\diamond}_t(1)$.\n\n\nTo investigate the convergence of $P^{\\diamond}_t(0), P^{\\diamond}_t(1)$, we first show that \n the auxiliary MJLS described by \\eqref{eq:MJLS_A}-\\eqref{eq:MJLS_theta} is SS if and only if $p^1 < p_c^1$ where $p_c^1$ is the critical threshold given by \\eqref{eq:pc_2c}. According to Lemma \\ref{lm:ss}, the MJLS is SS if and only if there exist matrices $K^{\\diamond}(m)$, $m \\in \\{0,1\\}$, such that $\\rho(\\mathcal{A}_s) < 1$. For the MJLS described by \\eqref{eq:MJLS_A}-\\eqref{eq:MJLS_theta}, we can find $\\mathcal{A}_s$ from \\eqref{eq:bigmatrix_ss} as follows\n\\begin{align}\n\\mathcal{A}_s = \\begin{bmatrix}\nA_s(0)\\otimes A_s(0) & (1-p^1)A_s(1)\\otimes A_s(1) \\\\\n \\mathbf{0} & p^1A_s(1)\\otimes A_s(1)\n\\end{bmatrix},\n\\label{eq:bigmatrix_As}\n\\end{align}\nwhere $A_s (m) = A^{\\diamond} (m) + B^{\\diamond}(m) K^{\\diamond}(m)$, $m \\in \\{0,1\\}$. 
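The reduction used in the following step relies on the spectral radius of a block upper-triangular matrix, such as $\mathcal{A}_s$ in \eqref{eq:bigmatrix_As}, being the maximum over its diagonal blocks. A quick numerical sanity check with random blocks (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random block upper-triangular matrix mimicking the structure of A_s:
# diagonal blocks D0, D1 and an off-diagonal coupling block C.
n = 4
D0 = rng.normal(size=(n, n))
D1 = rng.normal(size=(n, n))
C = rng.normal(size=(n, n))
A = np.block([[D0, C], [np.zeros((n, n)), D1]])

def rho(M):
    """Spectral radius: largest eigenvalue magnitude."""
    return np.max(np.abs(np.linalg.eigvals(M)))

# Eigenvalues of a block triangular matrix are the union of the eigenvalues
# of its diagonal blocks, so Schur stability reduces to the diagonal blocks.
print(np.isclose(rho(A), max(rho(D0), rho(D1))))  # True
```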
Since the matrix $\\mathcal{A}_{s}$ is upper-triangular, it is Schur stable if and only if all its diagonal blocks are Schur stable. \n\nSince $A^{\\diamond}(0) = A, B^{\\diamond}(0) = B$ and $(A,B)$ is stabilizable from Assumption \\ref{assum:det_stb_2C}, there exists $K^{\\diamond}(0)$ such that $\\rho\\big(A_s(0)\\otimes A_s(0)\\big)$, which is equal to $\\big(\\rho\\big(A_s(0)\\big)\\big)^2$, is less than $1$. Therefore, the MJLS is SS if and only if $\\rho\\big(p^1 A_s(1)\\otimes A_s(1)\\big)<1$ for some $K^{\\diamond}(1)$. Note that $\\rho\\big(p^1 A_s(1)\\otimes A_s(1)\\big) = p^1 \\times \\big(\\rho\\big(A_s(1)\\big)\\big)^2$. Therefore, the MJLS is SS if and only if $\\frac{1}{\\sqrt{p^1}} > \\rho\\big(A_s(1)\\big)$ for some $K^{\\diamond}(1)$. Since $A^{\\diamond}(1)= A$ and $B^{\\diamond}(1) = [\\mathbf 0,B^{11}]$, it then follows that the MJLS is SS if and only if \n\\[\\frac{1}{\\sqrt{p^1}} > \\rho\\big(A + B^{11} \\tilde K^{\\diamond}(1)\\big)\\] for some $\\tilde K^{\\diamond}(1)$. This condition is equivalent to $p^1 < p_c^1$, where $p_c^1$ is the critical threshold given by \\eqref{eq:pc_2c}.\n\nNext, we show that the auxiliary MJLS described by \\eqref{eq:MJLS_A}-\\eqref{eq:MJLS_theta} is SD. To this end, we can follow an argument similar to the one described above for establishing that the MJLS is SS and use part 2 of Lemma \\ref{lm:ss} to show that the MJLS is SD if and only if there exist matrices $H^{\\diamond}(0)$ and $H^{\\diamond}(1)$ such that $\\rho\\big(A_d(0)\\otimes A_d(0)\\big)<1$ and $\\rho\\big(p^1 A_d(1)\\otimes A_d(1)\\big)<1$. Since $A^{\\diamond}(0) = A^{\\diamond}(1) = A, Q^{\\diamond}(0) = Q^{\\diamond}(1) = Q$ and $(A,Q)$ is detectable from Assumption \\ref{assum:det_stb_2C}, there exist matrices $H^{\\diamond}(0)$ and $H^{\\diamond}(1)$ such that $\\rho\\big(A_d(0)\\otimes A_d(0)\\big)<1$ and $\\rho\\big(p^1 A_d(1)\\otimes A_d(1)\\big)<1$. 
Hence, the MJLS is SD.\n\nThus, the MJLS of \\eqref{eq:MJLS_A}-\\eqref{eq:MJLS_theta} is SD for any $p^1$ and it is SS if and only if $p^1 < p_c^1$. It then follows from Lemma \\ref{lm:MJ_infinite} that matrices $P^{\\diamond}_{t}(m)$, $m \\in \\{0,1\\}$, converge as $t \\to -\\infty$ to\n PSD matrices $P^{\\diamond}_{*}(m), m \\in \\{0,1\\}$ that satisfy the steady state version of \\eqref{P_MJ_cmp_2C_0}-\\eqref{P_MJ_cmp_2C_1} (i.e., equations \\eqref{eq:P_finite_2C_fixed}-\\eqref{eq:tildeP_finite_2C_fixed}) if and only if $p^1 < p_c^1$. This proves the lemma.\n\n\n\\section{Proof of Lemma \\ref{lm:Q2_2C}, parts 1 and 2}\\label{sec:lm_Q2_2C}\nLet $g^*$ denote the strategies described by \\eqref{eq:K_finite_2C_fixed}-\\eqref{eq:estimator_inf_2C}. We want to show that for any $g \\in \\mathcal G$, $J_{\\infty}(g) \\geq J_{\\infty}(g^*)$ and that $J_{\\infty}(g^*)$ is finite. We will make use of the following claim. \n\\begin{claim}\\label{claim:cost_optimal_2C}\nFor the strategies $g^*$ described by \\eqref{eq:K_finite_2C_fixed}-\\eqref{eq:estimator_inf_2C}, the following equation is true:\n\\begin{align}\nJ_{T}(g^{*}) =&(T+1) \\tr (\\Lambda_*) - \\ee^{g^*} [V_{T+1}],\n\\label{eq:costforgstar_2C}\n\\end{align}\nwhere $J_T(g^*)$ is the finite horizon cost of $g^*$ over a horizon of duration $T$, $\\Lambda_* = (1-p^1) P_{*}^0+p^1 P_{*}^1$ and for any $t\\geq 0$,\n\\begin{align}\nV_t = \\hat X_{t}^\\tp P_*^0 \\hat X_t + \\tr \\big(P_*^{1}\\cov(X_{t}|H^0_{t}) \\big).\n\\label{V_t_2C}\n\\end{align}\n\\end{claim}\n\\begin{proof}\nSee Appendix \\ref{proof_claim:cost_optimal_2C} for a proof of this claim.\n\\end{proof}\nBased on Claim \\ref{claim:cost_optimal_2C}, the infinite horizon average cost for $g^*$ is given as \n\\begin{align}\nJ_{\\infty}(g^*)\n= &\\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} J_{T}(g^*)\n\\notag\\\\\n= & \\tr (\\Lambda_*) - \\liminf_{T\\rightarrow\\infty} \\frac{\\ee^{g^*}[V_{T+1}]}{T+1}\n\\leq \\tr 
(\\Lambda_*),\n\\label{bound_J_infty}\n\\end{align}\nwhere the last inequality holds because $V_{T+1} \\geq 0$. \n\nFor $n =0,1$, define $Y_{0}^n = 0$, and for $k=0,1,2,\\dots,$\n\\begin{align}\n&Y_{k+1}^0 = \\Omega(Y_{k}^0,Q,R,A,B),\n\\label{eq:Y_k_0_2C}\n\\\\\n&Y_{k+1}^1 = \\Omega \\big((1-p^1) Y_{k}^0+p^1 Y_{k}^1,Q,R^{11},A,B^{11}\\big).\n\\label{eq:Y_k_n_2C}\n\\end{align}\nIt is easy to check that for $n=0,1,$ $Y_k^n = P_{T+1-k}^n$ for all $k \\geq 0$, and that $\\lim_{k \\rightarrow \\infty} Y_k^n = \\lim_{t \\rightarrow -\\infty} P_t^n = P^n_*$.\n\nFurther, let us define $\\Lambda_k = (1-p^1) Y_{k}^0+p^1 Y_{k}^1$. From \\eqref{eq:opcost_finite_2C} of Lemma \\ref{lm:opt_strategies_2C}, we know that the optimal finite horizon cost is given as \n\\begin{align}\nJ^*_{T} &= \\sum_{t = 0}^T \\tr \\big((1-p^1)P_{t+1}^0+p^1 P_{t+1}^1 \\big) \\notag \\\\\n&=\\sum_{k = 0}^T \\tr \\big((1-p^1)P_{T+1-k}^0+p^1 P_{T+1-k}^1 \\big) \\notag \\\\\n&=\\sum_{k = 0}^T \\tr \\big((1-p^1)Y_k^0+p^1 Y_k^1 \\big) \\notag \\\\\n&= \\sum_{k = 0}^T \\tr(\\Lambda_k). 
\\label{eq:Jstar}\n\\end{align}\n\n\nWe can therefore write\n\\begin{align}\n \\lim_{T\\rightarrow \\infty}\\frac{1}{T+1} J^*_{T} \n &= \\lim_{T\\rightarrow \\infty}\\frac{1}{T+1} \\sum_{k = 0}^T \\tr (\\Lambda_k)\n= \\tr (\\Lambda_*),\n\\label{limit_optimal_finite_cost}\n\\end{align}\nwhere the last equality is correct because $\\lim_{k \\rightarrow \\infty} Y_k^n = P^n_*$ for $n=0,1$.\n\nNow, for any $g \\in \\mathcal G$,\n\\begin{align}\nJ_{\\infty}(g)\n= &\\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} J_{T}(g)\n\\notag\\\\\n\\geq &\\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} J_{T}^*\n= \\tr (\\Lambda_*)\n\\geq J_{\\infty}(g^*),\n\\label{eq:lowerbound}\n\\end{align}\nwhere the first inequality is true because by definition $J_T^* = \\inf_{g' \\in\\mathcal{G}} \nJ_T(g') \\leq J_T(g)$ for any $g \\in \\mathcal G$, the second equality is true because of \\eqref{limit_optimal_finite_cost} and the last inequality is true because of \\eqref{bound_J_infty}.\nHence, $g^*$ is optimal for Problem \\ref{problem_infinite_2C}, and the optimal cost is finite and equal to $\\tr (\\Lambda_*)$.\n\n\n\n\n\n\n\n\n\n\n\\input{new_proof_claim1}\n\n\\input{stability_proof_new}\n\n\\section{Proof of Lemma \\ref{lm:Q3}} \\label{sec:lmQ3}\n Consider the matrices $Y^n_k$, $n=0,1, k =0,1,\\ldots,$ defined by $Y^n_0 =0$ and the recursions in \\eqref{eq:Y_k_0_2C} and \\eqref{eq:Y_k_n_2C}. Since matrices $P_t^n$, $n =0,1,$ do not converge as $t \\to -\\infty$, it follows that matrices $Y_k^n$, $n=0,1$, do not converge as $k \\rightarrow \\infty$ (recall that $Y_k^n = P_{T+1-k}^n $ for $n=0,1$ and $k \\geq 0$). \n \n Recall from \\eqref{eq:Jstar} that $J^*_T = \\sum_{k = 0}^T \\tr(\\Lambda_k)$, where $\\Lambda_k = (1-p^1) Y_{k}^0+p^1 Y_{k}^1$. Also, from the first inequality in \\eqref{eq:lowerbound}, recall that $J_{\\infty}(g) \\geq \\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} J_{T}^*$ for any strategy $g$. 
Therefore, to show that no strategy can achieve finite cost, it suffices to show that \n \\begin{equation}\n \\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} J_{T}^* = \\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} \\sum_{k = 0}^T \\tr(\\Lambda_k) = \\infty.\n \\end{equation}\n\nTo do so, we first show that the sequence $\\{Y_k^n, k=0,1,\\ldots\\}$ is monotonically increasing\\footnote{In the sense of the partial order $\\preceq$.} for $n =0,1$. To this end, note that $Y_1^n \\succeq Y_0^n=0$ for $n \\in \\{0,1\\}$. Furthermore, the monotonic property of the operator $\\Omega(\\cdot)$ (proved in part (ii) of Lemma \\ref{lm:KPrelation_0} in Appendix \\ref{app:lm:costgstar}) implies that for $n=0,1,$ and for all $k \\geq 0$, $Y_{k+1}^n \\succeq Y_{k}^n$. Now, if the sequences $\\{Y_k^n, k\\geq 0\\}$, $n =0,1,$ were bounded, they would converge due to their monotonicity. This would contradict the fact that these sequences do not converge as $k \\to \\infty$. Therefore, at least one of the two sequences $\\{Y_k^0, k\\geq0\\}$, $\\{Y_k^1, k\\geq0\\}$ is unbounded. Consequently, the sequence $\\{\\Lambda_k, k\\geq 0\\}$ is unbounded. Hence, $\\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} \\sum_{k = 0}^T \\tr(\\Lambda_k) = \\infty.$ This completes the proof.\n\n\n\\section{Proof of Lemma \\ref{lm:opt_strategies_new_rep}}\n\\label{proof_lm:opt_strategies_new_rep}\n\n\n\nBy comparing \\eqref{eq:P_N_init}-\\eqref{eq:P_finite} with \\eqref{eq:barP_init}-\\eqref{eq:barP_finite_0}, it is straightforward to observe that $P_t^0 = \\bar P_t^0$ for all $t$. We will now show by induction that \\eqref{new_rep2} is correct at every time $t$ for $n =1,\\ldots,N$.\nFirst note that by definition, $P_{T+1}^{n} = \\mathbf{0}$ and $\\bar P^n_{T+1} = \\mathbf{0}$ for $n =1,\\ldots,N$. Hence, \\eqref{new_rep2} is correct at time $T+1$. Now, assume that \\eqref{new_rep2} is correct at time $t+1$ (induction hypothesis). 
Then, from \\eqref{eq:barP_finite} and the induction hypothesis, we have for $n =1,\\ldots,N$,\n\\begin{align}\n\\bar P_t^{n} &= \\Omega \\big((1-p^n) P_{t+1}^0+p^n \\mathcal{L}_{zero}(P^0_{t+1}, P_{t+1}^{n},n,n), \\notag \\\\\n& \\hspace{1.0cm} \\mathcal{L}_{zero}(Q,Q^{nn},n,n), \\mathcal{L}_{iden}(R,R^{nn},n+1), \\notag \\\\\n & \\hspace{1.0cm} \\mathcal{L}_{zero}(A,A^{nn},n,n), \\mathcal{L}_{zero}(B,B^{nn},n,n+1) \\big) \\notag \\\\\n&=\n\\mathcal{L}_{zero}(Q,Q^{nn},n,n) + \\mathbb{T}_1 - \\mathbb{T}_2 (\\mathbb{T}_3)^{-1} (\\mathbb{T}_2)^{\\tp},\n\\label{P_MJ_n}\n\\end{align}\nwhere \n\\begin{align}\n\\label{T_1_def}\n&\\mathbb{T}_1 = \\mathcal{L}_{zero}(A,A^{nn},n,n)^{\\tp} \\bar{\\bar{P}}_{t+1} \\mathcal{L}_{zero}(A,A^{nn},n,n) \\\\\n\\label{T_2_def}\n&\\mathbb{T}_2 = \\mathcal{L}_{zero}(A,A^{nn},n,n)^{\\tp} \\bar{\\bar{P}}_{t+1} \\mathcal{L}_{zero}(B,B^{nn},n,n+1) , \\\\\n\\label{T_3_def}\n&\\mathbb{T}_3 = \\mathcal{L}_{iden}(R,R^{nn},n+1) \n\\notag \\\\\n&+ \\mathcal{L}_{zero}(B,B^{nn},n,n+1)^{\\tp} \\bar{\\bar{P}}_{t+1} \\mathcal{L}_{zero}(B,B^{nn},n,n+1),\n\\end{align}\nand we have defined $\\bar{\\bar{P}}_{t+1} = (1-p^n) P_{t+1}^0+p^n \\mathcal{L}_{zero}(P^0_{t+1}, P_{t+1}^{n},n,n)$.\n\nNote that from the definitions of operators $\\mathcal{L}_{zero}$ and $\\mathcal{L}_{iden}$ in \\eqref{L_zero}-\\eqref{L_iden}, it is straightforward to observe that the block dimensions of $\\mathbb{T}_1, \\mathbb{T}_2, \\mathbb{T}_3$ are the same as the block dimensions of $A,B,B^{\\tp} B$, respectively (They are block matrices of sizes $N \\times N$, $N \\times (N+1)$, and $(N+1) \\times (N+1)$, respectively). 
Therefore, through straightforward algebraic manipulations, we can get\n\\begin{align}\n\\label{T_1}\n&\\mathbb{T}_1 = \\mathcal{L}_{zero}(A,\\mathbb{\\tilde T}_1,n,n), \\\\\n\\label{T_2}\n&\\mathbb{T}_2 = \\mathcal{L}_{zero}(B,\\mathbb{\\tilde T}_2,n,n+1), \\\\\n\\label{T_3}\n&\\mathbb{T}_3 = \\mathcal{L}_{iden}(B^{\\tp}B,\\mathbb{\\tilde T}_3,n+1),\n\\end{align}\nwhere\n\\begin{align}\n\\label{tilde_T_1}\n&\\mathbb{\\tilde T}_1 = (A^{nn})^{\\tp}[(1-p^n)[P_{t+1}^0]_{n,n}+p^n P_{t+1}^{n}]A^{nn}, \\\\\n\\label{tilde_T_2}\n&\\mathbb{\\tilde T}_2 = (A^{nn})^{\\tp}[(1-p^n)[P_{t+1}^0]_{n,n}+p^n P_{t+1}^{n}]B^{nn}, \\\\\n\\label{tilde_T_3}\n&\\mathbb{\\tilde T}_3 = R^{nn} + (B^{nn})^{\\tp}[(1-p^n)[P_{t+1}^0]_{n,n}+p^n P_{t+1}^{n}]B^{nn}.\n\\end{align}\n\nFurther, since $\\mathbb{T}_3$ is a block diagonal matrix, we have \n\\begin{align}\n& (\\mathbb{T}_3)^{-1} =\\mathcal{L}_{iden} \\big(B^{\\tp}B,(\\mathbb{\\tilde T}_3)^{-1},n+1 \\big).\n\\label{T_3_inv}\n\\end{align}\n\nNow, using \\eqref{T_1}-\\eqref{T_3_inv} and the fact that matrices $A, Q, BB^{\\tp}$ have the same size as matrix $P^0_t$ (They are block matrices of size $N \\times N$), \\eqref{P_MJ_n} can be simplified to\n\\begin{align}\n\\bar P_t^{n} &= \\mathcal{L}_{zero} \\big(P^0_t, Q^{nn} + \\mathbb{\\tilde T}_1 - \\mathbb{\\tilde T}_2 (\\mathbb{\\tilde T}_3)^{-1} (\\mathbb{\\tilde T}_2)^{\\tp},n,n \\big) \\notag \\\\\n&= \\mathcal{L}_{zero}(P^0_t, P_t^{n},n,n),\n\\end{align}\nwhere the last equality is true because of the definition of $P_t^{n}$ in \\eqref{eq:tildeP_finite}. Hence, \\eqref{new_rep2} is true at time $t$. 
This completes the proof.\n\n\n\\section{Proof of Lemma \\ref{lm:pc_NC}}\n\\label{proof_lm:pc}\n\n\n\\begin{figure*}[t]\n\\begin{small}\n\\begin{align}\n\\mathcal{A}_s =\n\\begin{blockarray}{ccccc}\n\\begin{block}{[ccccc]}\n A_s(0)\\otimes A_s(0) & (1-p^1)A_s(1)\\otimes A_s(1)& (1-p^2)A_s(2)\\otimes A_s(2) & \\ldots & (1-p^N)A_s(N)\\otimes A_s(N) \\\\ \\\\\n \\mathbf{0} & p^1A_s(1)\\otimes A_s(1) &\\mathbf{0} &\\ldots & \\mathbf{0} \\\\ \\\\\n \\vdots &\\ddots &p^2 A_s(2)\\otimes A_s(2) & \\ddots & \\vdots \\\\ \\\\\n \\vdots & &\\ddots &\\ddots & \\mathbf{0} \\\\ \\\\\n \\mathbf{0} & \\ldots &\\ldots & \\mathbf{0} & p^N A_s(N)\\otimes A_s(N)\\\\\n\\end{block}\n\\end{blockarray}\n\\label{eq:bigmatrix3}\n\\end{align}\n\\end{small}\n\\hrule\n\\end{figure*}\n\\begin{figure*}[t]\n\\begin{small}\n\\begin{align}\nA_s(n) &= A^{\\diamond}(n) + B^{\\diamond}(n) K^{\\diamond}(n) \\notag \\\\\n&= \\begin{blockarray}{cccl}\n\\text{$1:n-1$} &n &\\text{$n+1:N$} & \\\\\n\\begin{block}{[ccc]l}\n \\text{\\large 0} & \\text{\\large 0} &\\text{\\large 0} & \\text{$1:n-1$} \\\\\n B^{nn} [K^{\\diamond}(n)]_{n+1,1:n-1} & A^{nn} + B^{nn} [K^{\\diamond}(n)]_{n+1,n} & B^{nn} [K^{\\diamond}(n)]_{n+1,n+1:N} &n \\\\\n \\text{\\large 0} & \\text{\\large 0} & \\text{\\large 0}&\\text{$n+1:N$} \\\\\n\\end{block}\n\\end{blockarray}.\n\\label{A_c_l}\n\\end{align}\n\\end{small}\n\\hrule\n\\end{figure*} \n\n\nFrom Lemma \\ref{lm:opt_strategies_new_rep}, we know that the convergence of matrices $P_t^{n}$, $n \\in \\mathcal{\\overline N}$, is equivalent to the convergence of matrices $\\bar P_t^{n}$, $n \\in \\mathcal{\\overline N}$. Further, because of Lemma \\ref{equality_recursions_NC}, $\\bar P^n_t = P^{\\diamond}_t(n)$, $n \\in \\mathcal{\\overline N}$, where matrices $P^{\\diamond}_t(n)$, $n \\in \\mathcal{\\overline N}$, are defined by \\eqref{eq:P_N_MJ_init}-\\eqref{P_MJ_cmp_NC_1} for the auxiliary MJLS. 
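The threshold phenomenon analyzed in this proof can be illustrated on a scalar two-mode instance of such coupled Riccati recursions. The sketch below uses hypothetical numbers with the second mode uncontrolled (the scalar analogue of $B^{11} = \mathbf{0}$), in which case the critical threshold reduces to $p_c = \rho(A)^{-2}$: the iterates stay bounded below the threshold and diverge above it.

```python
def Omega(P, Q, R, A, B):
    # Scalar discrete-time Riccati operator.
    return Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)

A, B, Q, R = 1.5, 1.0, 1.0, 1.0   # hypothetical scalar system data
p_c = 1.0 / A ** 2                # critical threshold when mode 1 is uncontrolled

def iterate(p, steps=200):
    # Coupled recursions: y0 is the standard Riccati iteration, while y1
    # mixes y0 and y1 with weights (1-p, p) and has no control (B = 0).
    y0 = y1 = 0.0
    for _ in range(steps):
        y0, y1 = Omega(y0, Q, R, A, B), Omega((1 - p) * y0 + p * y1, Q, R, A, 0.0)
    return y1

assert iterate(0.9 * p_c) < 1e6   # below threshold: iterates stay bounded
assert iterate(1.1 * p_c) > 1e6   # above threshold: iterates diverge
```

In the divergent case the dominant growth rate of $y^1_k$ is exactly $p\,A^2 > 1$, mirroring the condition $\rho\big(p^n A_d(n)\otimes A_d(n)\big) < 1$ that appears in the MJLS stability analysis below.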
Thus, in order to study the convergence of matrices $P_t^{n}$, $n \\in \\mathcal{\\overline N}$, we can focus on the convergence of matrices $P^{\\diamond}_t(n)$, $n \\in \\mathcal{\\overline N}$.\n\nTo investigate the convergence of $P^{\\diamond}_t(n)$, $n \\in \\mathcal{\\overline N}$, we first show that the auxiliary MJLS described by \\eqref{A_mj}-\\eqref{transition_prob} is SS if and only if $p^n < p_c^n$ for all $n \\in \\mathcal{N}$. To do so, we can follow a methodology similar to the one used to prove Lemma \\ref{lm:pc_2C} with $\\mathcal{A}_s$ defined as in \\eqref{eq:bigmatrix3} and $A_s(n)$, $n =1,\\ldots,N,$ defined as in \\eqref{A_c_l}. \n\nNext, we can use part 2 of Lemma \\ref{lm:ss} to show that the auxiliary MJLS described by \\eqref{A_mj}-\\eqref{transition_prob} is SD if and only if there exist matrices $H^{\\diamond}(0)$ and $H^{\\diamond}(n)$, $n \\in \\mathcal{N}$, such that $\\mathcal{A}_d$ defined as in \\eqref{eq:bigmatrix4} is Schur stable. Since the matrix $\\mathcal{A}_{d}$ is upper-triangular, it is Schur stable if and only if $\\rho\\big(A_d(0)\\otimes A_d(0)\\big)<1$ and $\\rho\\big(p^n A_d(n)\\otimes A_d(n)\\big)<1$ for all $n \\in \\mathcal{N}$, where $A_d(n)$, $n \\in \\mathcal{N}$, are defined as in \\eqref{A_c_l_2}. The existence of such matrices $H^{\\diamond}(0)$ and $H^{\\diamond}(n)$ follows from the detectability of $(A,Q)$ and $\\big(A^{nn},(Q^{nn})^{1\/2} \\big)$ for $n \\in \\mathcal{N}$ (see Assumptions \\ref{assum:det_stb} and \\ref{assum:det_stb_2}).\nHence, the MJLS is SD.\n\n\nIt then follows from Lemma \\ref{lm:MJ_infinite} that matrices $P^{\\diamond}_{t}(n)$, $n \\in \\overline{\\mathcal{N}}$, converge as $t \\to -\\infty$ if and only if $p^n < p_c^n$ for all $n \\in \\mathcal{N}$. Consequently, matrices $ P^0_t,\\ldots, P^N_t$ converge as $t \\to -\\infty$ to matrices $P_*^0,\\ldots, P_*^N$ that satisfy the coupled fixed point equations \\eqref{eq:P_finite_NC_fixed}-\\eqref{eq:tildeP_finite_NC_fixed} if and only if $p^n < p_c^n$ for all $n \\in \\mathcal{N}$. 
This proves the lemma.\n\n\\begin{figure*}[t]\n\\begin{small}\n\\begin{align}\n\\mathcal{A}_d =\n\\begin{blockarray}{ccccc}\n\\begin{block}{[ccccc]}\n A_d(0)\\otimes A_d(0) & (1-p^1)A_d(1)\\otimes A_d(1)& (1-p^2)A_d(2)\\otimes A_d(2) & \\ldots & (1-p^N)A_d(N)\\otimes A_d(N) \\\\ \\\\\n \\mathbf{0} & p^1A_d(1)\\otimes A_d(1) &\\mathbf{0} &\\ldots & \\mathbf{0} \\\\ \\\\\n \\vdots &\\ddots &p^2 A_d(2)\\otimes A_d(2) & \\ddots & \\vdots \\\\ \\\\\n \\vdots & &\\ddots &\\ddots & \\mathbf{0} \\\\ \\\\\n \\mathbf{0} & \\ldots &\\ldots & \\mathbf{0} & p^N A_d(N)\\otimes A_d(N)\\\\\n\\end{block}\n\\end{blockarray}\n\\label{eq:bigmatrix4}\n\\end{align}\n\\end{small}\n\\hrule\n\\end{figure*}\n\\begin{figure*}[t]\n\\begin{small}\n\\begin{align}\n&A_d(n) = A^{\\diamond} (n) + H^{\\diamond}(n) \\big(Q^{\\diamond}(n) \\big)^{1\/2}\n= \\begin{blockarray}{cccl}\n\\text{$1:n-1$} &n &\\text{$n+1:N$} & \\\\\n\\begin{block}{[ccc]l}\n \\text{\\large 0} & [H^{\\diamond}(n)]_{n,1:n-1} (Q^{nn})^{1\/2} &\\text{\\large 0} & \\text{$1:n-1$} \\\\\n \\text{\\large 0} & A^{nn} + [H^{\\diamond}(n)]_{n,n} (Q^{nn})^{1\/2} & \\text{\\large 0} &n \\\\\n \\text{\\large 0} & [H^{\\diamond}(n)]_{n,n+1:N} (Q^{nn})^{1\/2} & \\text{\\large 0}&\\text{$n+1:N$} \\\\\n\\end{block}\n\\end{blockarray}.\n\\label{A_c_l_2}\n\\end{align}\n\\end{small}\n\\hrule\n\\end{figure*} \n\n\n\n\\section{Proof of Lemma \\ref{lm:Q2_NC}} \\label{sec:proof_lm_Q2_NC}\nLet $g^*$ denote the strategies defined by \\eqref{eq:K_finite_NC_fixed}-\\eqref{eq:opt_U_NC_fixed}. We want to show that for any $g \\in \\mathcal G$, $J_{\\infty}(g) \\geq J_{\\infty}(g^*)$ and that $J_{\\infty}(g^*)$ is finite. 
We will make use of the following claim.\n\n\\begin{claim}\\label{claim:cost_optimal_NC}\nFor the strategies $g^*$ described by \\eqref{eq:K_finite_NC_fixed}-\\eqref{eq:opt_U_NC_fixed}, the following equation is true:\n\\begin{align}\nJ_{T}(g^{*}) =&(T+1) \\tr (\\Lambda_*) - \\ee^{g^*} [V_{T+1}],\n\\label{eq:costforgstar_NC}\n\\end{align}\nwhere $J_T(g^*)$ is the finite horizon cost of $g^*$ over a horizon of duration $T$, $\\Lambda_* = \\sum_{n=1}^N \\big( (1-p^n) [P_{*}^0]_{n,n}+p^n P_{*}^n \\big)$ and for any $t\\geq 0$,\n\\begin{align}\nV_t = \\hat X_{t}^\\tp P_*^0 \\hat X_t + \\sum_{n=1}^N \\tr \\big(P_*^{n}\\cov(X_{t}^n|H^0_{t}) \\big).\n\\label{V_t_NC}\n\\end{align}\n\\end{claim}\n\\begin{proof}See Appendix \\ref{Cost_of_the_Strategies} for a proof of Claim \\ref{claim:cost_optimal_NC}.\n\\end{proof}\n\n\nAlong the lines of the proof of Lemma \\ref{lm:Q2_2C} in Appendix \\ref{sec:lm_Q2_2C}, we define $Y_{0}^0 = 0$, and for $k=0,1,2,\\dots,$\n\\begin{align}\n&Y_{k+1}^0 = \\Omega(Y_{k}^0,Q,R,A,B),\n\\end{align}\nand for $n=1,\\ldots,N,$\n\\begin{align}\n&Y^n_0 =0, \\\\\n&Y_{k+1}^n = \\Omega \\big((1-p^n)[Y_{k}^0]_{n,n}+p^n Y_{k}^{n},Q^{nn},R^{nn},A^{nn},B^{nn}\\big).\n\\end{align}\nIt is easy to check that for $n=0,1,\\ldots,N,$ $Y_k^n = P_{T+1-k}^n$ for all $k \\geq 0$, and that $\\lim_{k \\rightarrow \\infty} Y_k^n = \\lim_{t \\rightarrow -\\infty} P_t^n = P^n_*$.\n The rest of the proof for parts 1 and 2 follows the same arguments as in Appendix \\ref{sec:lm_Q2_2C} for the proof of parts 1 and 2 of Lemma \\ref{lm:Q2_2C}. \nFor the proof of part 3, define for $n=1,\\ldots,N,$ \n \\begin{align*}\n \\bar K_*^n := \n \\Psi \\big( &(1-p^n)\\bar P_*^0+p^n \\bar P_{*}^n, R^{\\diamond}(n), A^{\\diamond}(n), B^{\\diamond}(n) \\big),\n \\end{align*}\n where $\\bar P_*^{0:N}$ are the limits of $\\bar P_t^{0:N}$ (see Lemmas \\ref{lm:opt_strategies_new_rep} and \\ref{lm:pc_NC} and the auxiliary MJLS in Section \\ref{sec:model_N_controllers}). 
\nThen, it can be shown that (i) $\\bar K_*^n = \\mathcal{L}_{zero}(K_*^0,K_*^{n},n+1,n)$ and hence (ii)\\\\ $p^n\\rho( (A^{nn} +B^{nn}K^n_*) \\otimes (A^{nn} +B^{nn}K^n_*)) = p^n \\rho (( A^{\\diamond}(n) + B^{\\diamond}(n)\\bar K_*^n ) \\otimes (A^{\\diamond}(n) + B^{\\diamond}(n)\\bar K_*^n )), $ which is less than 1 since the auxiliary MJLS of \\eqref{A_mj}-\\eqref{transition_prob} is SD and SS (see the proof of Lemma \\ref{lm:pc_NC}). The rest of the proof uses arguments similar to those in Appendix \\ref{sec:stability_proof} for the proof of part 3 of Lemma \\ref{lm:Q2_2C}.\n\n\n\n\n\\section{Proof of Lemma \\ref{lm:Q3_NC}} \\label{sec:proof_lm_Q3_NC}\nThe proof can be obtained by following the arguments in the proof of Lemma \\ref{lm:Q3} and defining $\\Lambda_k = \\sum_{n=1}^N \\big( (1-p^n) [Y_{k}^0]_{n,n}+p^n Y_{k}^n \\big)$, where $Y^0_k,Y^n_k$ are as defined in Appendix \\ref{sec:proof_lm_Q2_NC}.\n\n\n\n\\input{new_proof_claim2}\n\n\n\\subsection{Summary of the Approach}\nThe analysis in Sections \\ref{sec:infinite_2_controllers} and \\ref{sec:model_N_controllers} suggests a general approach for solving infinite horizon decentralized control\/DNCS problems. This can be summarized as follows:\n\\begin{enumerate}\n\\item Solve the finite horizon version of the DNCS\/decentralized control problem (for instance by using the common information approach \\cite{nayyar2013decentralized}). 
Suppose the optimal strategies are characterized by matrices $P_t^m, m \\in \\mathcal M = \\{1,2,\\dots,M\\}$, which satisfy $M$ coupled Riccati recursions \n\\begin{align}\nP_t^m = \\Omega(\\sum_{j} \\theta^{mj} P_{t+1}^j,Q^m,R^m,A^m,B^m),\n\\label{eq:P_finite_general}\n\\end{align}\nfor some matrices $Q^m,R^m,A^m,B^m$ and positive numbers $\\theta^{mj}$ for $m,j \\in\\mathcal M$. Note that we can scale the $\\theta$'s such that $\\sum_{j\\in\\mathcal M}\\theta^{mj} = 1$ by appropriately scaling $A^m$ and $R^m$ for all $m\\in\\mathcal M$.\n\n\n\\item Construct an $M$-mode auxiliary MJLS with transition probabilities $\\theta^{mj}$ and system matrices $Q^m,R^m,A^m,B^m$\nso that the Riccati recursions associated with optimal control of the MJLS coincide with the Riccati recursions \\eqref{eq:P_finite_general}.\n\n\n\\item Analyze stability criteria of the auxiliary MJLS to find conditions under which the Riccati recursions of the DNCS reach a steady state.\n\n\\item Verify that the decentralized strategies characterized by the steady state DNCS Riccati equations are optimal.\n\\end{enumerate}\n\n\n\\subsection{The Information Switching}\n Even though the auxiliary MJLS we used in our analysis is an artificial system without apparent physical meaning (see Remark \\ref{rem:fictitious}), a general DNCS with random packet drops (or random packet delays) does have some aspects of a switched system. In particular, the information at a controller (e.g., the remote controller in our problem) switches between different patterns based on the state of the underlying communication network. The information of the remote controller in Section \\ref{sec:model_2_controllers} clearly switches between two patterns: no observation when the packet is dropped, and perfect observation when the transmission is successful. The number of such patterns seems related to (but not always equal to) the number of modes in the MJLS used to analyze the DNCS. 
For the two-controller DNCS of Section \\ref{sec:model_2_controllers}, the number of patterns between which the remote controller's information switches is the same as the number of modes in its auxiliary MJLS. For the $N+1$ controller DNCS in Section \\ref{sec:model_N_controllers}, the remote controller's information appears to switch between $2^N$ patterns (depending on the state of the $N$ links) but its auxiliary MJLS has only $N+1$ modes. This difference between the number of information patterns and the number of modes is due to the nature of the plant dynamics which ensure that the remote controller's estimate of the $n$th plant is not affected by the state of the $m$th link if $m\\neq n$.\n\n\n\n\\subsection{Answering Q1}\n\\label{sec:Q1_2C} \nIn Q1, we want to know whether $P_t^0$ and $P_t^1$ defined by coupled recursions of \\eqref{eq:P_initial}-\\eqref{eq:tildeP_finite_2C} converge to $P_*^0$ and $P_*^1$ satisfying \\eqref{eq:P_finite_2C_fixed}-\\eqref{eq:tildeP_finite_2C_fixed}. Our approach for answering Q1 is based on establishing a connection between the recursions for matrices $P_t^0$ and $P_t^1$ in our DNCS problem and the recursions for matrices $P_t^{\\diamond}(m)$, $m \\in \\mathcal{M},$ in the MJLS problem reviewed in Section \\ref{sec:mjls}. This approach consists of the following two steps.\n\n\n\n\n\\subsubsection*{\\textbf{Step 1: Constructing an auxiliary MJLS}}\nConsider an auxiliary MJLS where the set $\\mathcal{M}$ of modes is $\\{0,1\\}$. 
Then, we have the following two sequences of matrices, $P^{\\diamond}_t(0),P^{\\diamond}_t(1),$ defined recursively using \\eqref{eq:CARE_init} and \\eqref{eq:CARE_finite} for this MJLS:\n\\begin{align}\n&P^{\\diamond}_{T+1}(0) = P^{\\diamond}_{T+1}(1) =\\mathbf{0}, \\label{eq:P_MJ_init}\\\\\n&P^{\\diamond}_t(0) = \\notag \\\\\n&\\Omega\\big(\\theta^{00} P^{\\diamond}_{t+1}(0) + \\theta^{01} P^{\\diamond}_{t+1}(1),Q^{\\diamond}(0),R^{\\diamond}(0),A^{\\diamond}(0),B^{\\diamond}(0) \\big),\n\\label{P_MJ_cmp_2C_0}\n\\\\\n&P^{\\diamond}_t(1)= \\notag \\\\\n& \\Omega\\big(\\theta^{10} P^{\\diamond}_{t+1}(0) + \\theta^{11} P^{\\diamond}_{t+1}(1),Q^{\\diamond}(1),R^{\\diamond}(1),A^{\\diamond}(1),B^{\\diamond}(1) \\big).\n\\label{P_MJ_cmp_2C_1}\n\\end{align}\nFurthermore, recall that from \\eqref{eq:P_initial}-\\eqref{eq:tildeP_finite_2C}, we have the following recursions for matrices $P_t^0$ and $P_t^1$ in our DNCS problem,\n\\begin{align}\n&P^0_{T+1}=P^1_{T+1}=0, \\label{eq:barP_cmp_init}\\\\\n&P_t^0 = \\Omega(P_{t+1}^0,Q,R,A,B),\n\\label{eq:barP_cmp_0_2C}\n\\\\\n&P_t^{1} = \\Omega((1-p^1) P_{t+1}^0+p^1 P_{t+1}^1,Q,R^{11},A,B^{11}).\n\\label{eq:barP_cmp_n_2C}\n\\end{align}\nIs it possible to find matrices $A^{\\diamond}(m),B^{\\diamond}(m), Q^{\\diamond}(m),R^{\\diamond}(m)$, $m \\in \\{0,1\\}$, and a transition probability matrix $\\Theta$ for the auxiliary MJLS such that the recursions in \\eqref{eq:P_MJ_init} - \\eqref{P_MJ_cmp_2C_1} coincide with the recursions in \\eqref{eq:barP_cmp_init} - \\eqref{eq:barP_cmp_n_2C}?\n\n\nBy comparing \\eqref{P_MJ_cmp_2C_0}-\\eqref{P_MJ_cmp_2C_1} with \\eqref{eq:barP_cmp_0_2C}-\\eqref{eq:barP_cmp_n_2C}, we find that the following definitions would make the two sets of equations identical:\n\\begin{align}\n&A^{\\diamond}(0) = A^{\\diamond}(1) = A, \\label{eq:MJLS_A} \\\\\n&B^{\\diamond}(0) = B, \\quad B^{\\diamond}(1) = [\\mathbf 0,B^{11}], \\label{eq:MJLS_B}\n\\\\\n&Q^{\\diamond}(0) = Q^{\\diamond}(1) = Q,\\label{eq:MJLS_Q} 
\\\\\n&R^{\\diamond}(0) = R, \\quad R^{\\diamond}(1) = \\begin{bmatrix}\n\\mathbf I & \\mathbf 0 \\\\\n\\mathbf 0 & R^{11}\n\\end{bmatrix}, \\label{eq:MJLS_R}\n\\\\\n&\\Theta = \\begin{bmatrix}\n\\theta^{00} & \\theta^{01} \\\\\n\\theta^{10} & \\theta^{11}\n\\end{bmatrix} = \\begin{bmatrix}\n1 & 0 \\\\\n1-p^1 & p^1\n\\end{bmatrix}.\n\\label{eq:MJLS_theta}\n\\end{align}\nTo complete the definition of the auxiliary MJLS, we need to define the initial state and mode probability distributions $\\pi_{X_0^{\\diamond}}$ and $\\pi_{M_0}$. These can be defined arbitrarily and for simplicity we assume that the initial state and mode of the auxiliary MJLS are fixed to be $X^{\\diamond}_0 = 0$ and $M_0 = 1$.\nThe following lemma summarizes the above discussion.\n\n\n\\begin{lemma}\n\\label{equality_recursions_2C}\nFor the auxiliary MJLS described by \\eqref{eq:MJLS_A}-\\eqref{eq:MJLS_theta}, the coupled recursions in \\eqref{eq:P_MJ_init}-\\eqref{P_MJ_cmp_2C_1} are identical to the coupled recursions in \\eqref{eq:barP_cmp_init}-\\eqref{eq:barP_cmp_n_2C}.\n\\end{lemma}\n\\begin{proof}\nThe lemma can be proved by straightforward algebraic manipulations.\n\\end{proof}\n\n\\begin{remark}\nNote that we have not defined $B^{\\diamond}(1)$ to be $B^{11}$ because the MJLS model requires that the dimensions of matrices $B^{\\diamond}(0)$ and $B^{\\diamond}(1)$ be the same (see Section \\ref{sec:mjls}). Similar dimensional considerations prevent us from defining $R^{\\diamond}(1)$ to be simply $R^{11}$.\n\\end{remark}\n\n\\begin{remark}\\label{rem:fictitious}\nIt should be noted that the auxiliary MJLS is simply a mathematical device. It cannot be seen as a reformulation or another interpretation of our DNCS problem. In particular, the binary mode $M_t$ \\emph{is not} the same as the link state $\\Gamma^1_t$. 
The distinction between $M_t$ and $\\Gamma^1_t$ is immediately clear if one recalls that $M_t$ is the state of a Markov chain with transition probability matrix given in \\eqref{eq:MJLS_theta} whereas $\\Gamma^1_t, t \\geq 0,$ is an i.i.d process. \n\\end{remark}\n\\subsubsection*{\\textbf{Step 2: Using MJLS results to answer Q1}}\nNow that we have constructed an auxiliary MJLS such that $P^{\\diamond}_t(m) = P_t^{m}$ for $m \\in \\{0,1\\}$, we can use the MJLS results about convergence of matrices $P^{\\diamond}_t(m)$ (that is, Lemmas \\ref{lm:MJ_infinite} and \\ref{lm:ss}) to answer Q1. The following lemma states this result.\n\n\n\\begin{lemma}\\label{lm:pc_2C}\nSuppose Assumption \\ref{assum:det_stb_2C} holds. Then, the matrices $P_t^0$ and $P_t^1$ defined in \\eqref{eq:P_initial}-\\eqref{eq:tildeP_finite_2C} converge as $t \\to -\\infty$ to matrices $P_*^0$ and $P_*^1$ that satisfy the coupled fixed point equations \\eqref{eq:P_finite_2C_fixed} - \\eqref{eq:tildeP_finite_2C_fixed} if and only if $p^1 < p^1_c$, where $p^1_c$ is the critical threshold given by \n\\begin{align}\n\\frac{1}{\\sqrt{p^1_c}} = \\min_{K \\in \\mathbb{R}^{d^1_U \\times d_X}}\\rho(A+B^{11}K).\n\\label{eq:pc_2c}\n\\end{align} \n\\end{lemma}\n\n\\begin{proof}\nSee Appendix \\ref{proof_lm:pc_2C}.\n\\end{proof}\n\n\\subsection{Answering Q2 and Q3}\n\\label{sec:Q2_2C}\nAssuming that $P_{t}^n \\rightarrow P_*^n$ as $t \\rightarrow -\\infty$ for $n=0,1,$ we want to know whether the control strategies of \\eqref{eq:K_finite_2C_fixed}-\\eqref{eq:estimator_inf_2C} are optimal for Problem \\ref{problem_infinite_2C}. 
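Before stating the optimality result, a scalar numerical sketch (all numbers hypothetical) illustrates the structure of these strategies: iterate the coupled Riccati recursions to their fixed points $P_*^0, P_*^1$, form the gains $K_*^0 = \Psi(P_*^0,R,A,B)$ and $K_*^1 = \Psi((1-p^1)P_*^0 + p^1 P_*^1, R^{11}, A, B^{11})$, and simulate the closed loop under i.i.d. packet drops. The estimator update assumed here (predict via the closed-loop map on a drop, reset to the received state otherwise) is the natural conditional-expectation update; the paper's exact recursion is in \eqref{eq:estimator_inf_2C}.

```python
import random

# Scalar sketch with B^11 = B and R^11 = R (hypothetical values).
A, B, Q, R, p1 = 1.2, 1.0, 1.0, 1.0, 0.5   # p1 = drop probability

def Omega(P, Q, R, A, B):
    # Scalar Riccati operator.
    return Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)

def Psi(P, R, A, B):
    # Scalar optimal gain.
    return -(B * P * A) / (R + B * P * B)

P0 = P1 = 0.0
for _ in range(500):                        # value iteration to the fixed points
    P0, P1 = Omega(P0, Q, R, A, B), Omega((1 - p1) * P0 + p1 * P1, Q, R, A, B)

K0 = Psi(P0, R, A, B)
K1 = Psi((1 - p1) * P0 + p1 * P1, R, A, B)

random.seed(0)
x, xhat = 1.0, 0.0                          # true state and remote estimate
for _ in range(200):
    u = K0 * xhat + K1 * (x - xhat)         # scalar form of the strategy
    x = A * x + B * u + random.gauss(0.0, 0.1)
    if random.random() < p1:                # packet dropped: predict forward
        xhat = (A + B * K0) * xhat
    else:                                   # packet received: estimate resets
        xhat = x
assert abs(x) < 100.0                       # closed loop remains bounded
```

The simulated state remains bounded, consistent with the mean square stability asserted in this subsection; in this scalar controllable case the fixed points exist for any drop probability.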
The following result shows that these control strategies are indeed optimal.\n\n\\begin{lemma}\n\\label{lm:Q2_2C}\nIf $P_{t}^n \\rightarrow P_*^n$ as $t \\rightarrow -\\infty$ for $n=0,1$, then\n\\begin{enumerate}\n\\item Problem \\ref{problem_infinite_2C} has finite optimal cost,\n\\item The strategies described by \\eqref{eq:K_finite_2C_fixed}-\\eqref{eq:estimator_inf_2C} are optimal for Problem \\ref{problem_infinite_2C},\n\\item Under the strategies described by \\eqref{eq:K_finite_2C_fixed}-\\eqref{eq:estimator_inf_2C}, $X_t$ and $(X_t-\\hat X_t)$ are mean square stable, i.e.,\n\\[ \\sup_{t \\geq 0} \\mathds{E}^{g^*}[||X_t||^2] < \\infty \\mbox{~and~} \\sup_{t \\geq 0} \\mathds{E}^{g^*}[||(X_t-\\hat X_t)||^2] < \\infty,\\]\n where $g^*$ denotes the strategy described by \\eqref{eq:K_finite_2C_fixed}-\\eqref{eq:estimator_inf_2C}.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{sec:lm_Q2_2C} for proof of parts 1) and 2). See Appendix \\ref{sec:stability_proof} for proof of part 3).\n\\end{proof}\n\n\nThe following lemma answers Q3.\n\\begin{lemma}\n\\label{lm:Q3}\nIf $P_t^n$, $n=0,1,$ do not converge as $t \\to -\\infty$, then Problem \\ref{problem_infinite_2C} does not have finite optimal cost.\n\\end{lemma}\n\\begin{proof} See Appendix \\ref{sec:lmQ3}.\n\\end{proof}\n\nNow that we have answered Q1, Q2 and Q3, we can summarize our results for the infinite horizon DNCS problem (Problem \\ref{problem_infinite_2C}).\n\n\\subsection{Summary of the Infinite Horizon Results}\nBased on the answers to Q1-Q3, the following theorem summarizes our results for Problem \\ref{problem_infinite_2C}.\n\\begin{theorem}\n\\label{thm:DC_infinite_2C}\nSuppose Assumption \\ref{assum:det_stb_2C} holds. 
Then, \n\\begin{enumerate}[(i)]\n\\item Problem \\ref{problem_infinite_2C} has finite optimal cost if and only if $p^1 < p^1_c$ where the critical threshold $p^1_c$ is given by \\eqref{eq:pc_2c}.\n\n\\item If $p^1 < p^1_c$, there exist symmetric positive semi-definite matrices $P_*^0, P_*^1$ that satisfy \\eqref{eq:P_finite_2C_fixed}-\\eqref{eq:tildeP_finite_2C_fixed} and the optimal strategies for Problem \\ref{problem_infinite_2C} are given by\n\\begin{align}\n\\bmat{U^{0*}_t \\\\ U^{1*}_t \\\\} = K_*^0\\hat X_t + \\bmat{\\mathbf{0} \\\\ K_*^1 }\\left(X_t - \\hat X_t\\right),\n\\label{eq:opt_U_infinite_2C_thm}\n\\end{align}\nwhere the estimate $\\hat X_t$ can be computed recursively using \\eqref{eq:estimator_inf_2C} with $\\hat{X}_0=0$\n and the gain matrices $K_*^0,K_*^1$ are given by\n\\begin{align}\n& K_*^0 = \\Psi(P_{*}^0,R,A,B),\n\\label{eq:K_finite_2C_fixed_final}\n\\\\\n& K_*^1 = \\Psi((1-p^1)P_{*}^0+p^1 P_{*}^1,R^{11},A,B^{11}).\n\\label{eq:tildeK_finite_2C_fixed_final}\n\\end{align} \n\\item If $p^1 < p^1_c$, then under the strategies described in part (ii) above, $X_t$ and $(X_t-\\hat X_t)$ are mean square stable.\n \n\n\n\\end{enumerate}\n\n\n\n\\end{theorem}\n\\begin{proof}\nThe result follows from Lemmas \\ref{lm:pc_2C}, \\ref{lm:Q2_2C} and \\ref{lm:Q3}.\n\\end{proof}\n\n\n\n\n\n\nIf $B^{11} = 0$, the local controller becomes just a sensor without any control ability. In this case, Theorem \\ref{thm:DC_infinite_2C} gives the \ncritical threshold as $p^1_c = \\rho(A)^{-2}$ and the closed-loop system is mean-square stable if $\\rho(A) < 1\/\\sqrt{p^1}$. This recovers the single-controller NCS result in \\cite{Imer2006optimal}. Thus, we have the following corollary of Theorem \\ref{thm:DC_infinite_2C}.\n\\begin{corollary}[Theorem 3 of \\cite{Imer2006optimal} with $\\alpha = 0$ and $\\beta = p^1$]\nSuppose the local controller is just a sensor (i.e., $B^{11} = 0$) and the remote controller is the only controller present. 
Then, if $\\rho(A) < 1\/\\sqrt{p^1}$, the optimal controller of this single-controller NCS is given by $U^{0*}_t$ in \\eqref{eq:opt_U_infinite_2C_thm}, and the corresponding closed-loop system is mean-square stable.\n\\end{corollary}\n\n\n\n\\begin{remark}\nThe value $\\frac{1}{\\sqrt{p^1_c}} = \\min_{K}\\rho(A+B^{11}K)$ equals the magnitude of the largest unreachable mode of $(A,B^{11})$. Therefore, it can be computed using tests for reachability such as the Popov-Belevitch-Hautus (PBH) test \\cite{hespanha2009linear}. Further, if $(A,B^{11})$ is reachable, then $\\rho(A+B^{11}K)$ can be made arbitrarily small, which implies that $p^1_c = \\infty$. This is the case when the local controller can stabilize the system by itself, so the DNCS is stabilizable under any link failure probability $p^1$.\n\\end{remark}\n\n\\begin{remark}\nIf $p^1 < p^1_c$, the coupled Riccati equations in \\eqref{eq:P_finite_2C_fixed}-\\eqref{eq:tildeP_finite_2C_fixed} can be solved by iteratively carrying out the recursions in \\eqref{eq:P_initial}-\\eqref{eq:tildeP_finite_2C} until convergence. This is similar to the procedure in \\cite[Chapter 7]{costa2006discrete}.\n\\end{remark}\n\n\n\n\n\\subsection{Contributions of the Paper}\n\\begin{enumerate}\n\\item We investigate an infinite time horizon decentralized stochastic control problem in which local controllers send their information to a\nremote controller over unreliable links. To the best of our\nknowledge, this is the first paper that solves an infinite time horizon optimal\ndecentralized control problem with unreliable communication\nbetween controllers. The finite time horizon version of our problem was solved in \\cite{ouyang2016optimal, asghari_ouyang_nayyar_tac_2018} and our results in this paper use the finite horizon solutions obtained there. However, unlike the finite horizon case, we have to address the possibility that no control strategy may achieve finite cost over the infinite time horizon. 
Due to such stability-related issues, our approach for the infinite horizon problem is markedly different from the common-information-based approach adopted in \\cite{ouyang2016optimal, asghari_ouyang_nayyar_tac_2018}.\n\n\\item We show that there are critical thresholds for link failure probabilities above which no control strategy can achieve a finite cost in our problem. When the link failure probabilities are below their critical thresholds, we show that the optimal control strategies of this infinite horizon decentralized control problem admit simple structures: the optimal remote controller strategy is a time-invariant linear function of the common estimates of system states and the optimal strategies for local controllers are time-invariant linear functions of the common estimates of system states and the perfectly observed local states. The main strengths of our result are that (i) it provides simple strategies that are proven to be optimal: not only are the strategies in Theorems \\ref{thm:DC_infinite_2C} and \\ref{thm:DC_infinite_NC} linear, they use estimates that can be easily updated; (ii) it shows that the optimal strategies are completely characterized by the solution of coupled Riccati equations.\n\n\n\\item If the local controllers act only as sensors and the remote controller is the only controller in the system, then our model reduces to an NCS with multiple sensors observing different components of the system state and communicating with the remote controller over independent unreliable links. Thus, we obtain the optimal strategy and critical probabilities for a multi-sensor, single-controller NCS as a corollary of our result in Theorem \\ref{thm:DC_infinite_NC}.\n \n\\item Finally, our problem can be viewed as a dynamic team problem by viewing each controller's actions at different time instants as the actions of distinct players \\cite{HoChu:1972}. 
Since we are interested in the infinite time horizon, this team-theoretic viewpoint means that our dynamic team has infinitely many players. Further, due to the unreliable links, our problem does not directly fit into the partially nested LQG team problem. Thus, the standard results for partially nested LQG teams with finitely many players \\cite{HoChu:1972} do not apply to our problem. As observed in \\cite{infinite_team}, results for teams with finitely many players cannot be directly extended to teams with infinitely many players even when the information structure is static or partially nested.\n In spite of this, our dynamic team problem turns out to have simple optimal strategies.\n\\end{enumerate}\n\n\n\n\n\n\\subsection{Organization}\nThe rest of the paper is organized as follows. Section \\ref{sec:pre} summarizes the notations and operators used in this paper. \nIn Section \\ref{sec:model_2_controllers}, we formulate the finite horizon and infinite horizon optimal control problems for a DNCS with one remote controller and one local controller. We briefly review Markov Jump Linear Systems (MJLSs) in Section \\ref{sec:mjls}. We establish a connection between the DNCS of Section \\ref{sec:model_2_controllers} and an auxiliary MJLS in Section \\ref{sec:infinite_2_controllers} and use this connection to provide our main results for the DNCS of Section \\ref{sec:model_2_controllers}. In Section \\ref{sec:model_N_controllers}, we extend our DNCS model to the case with multiple local controllers and provide our main results for this DNCS. We discuss some key aspects of our approach in Section \\ref{sec:discussion}. 
Section \\ref{sec:conclusion} concludes the paper.\nThe proofs of all technical results are in the Appendices.\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\\input{Introduction.tex}\n\n\\section{Preliminaries}\n\\label{sec:pre}\n\\input{prelim.tex}\n\n\\section{System Model and Problem Formulation}\n\\label{sec:model_2_controllers}\n\\input{model_2_controllers.tex}\n\n\\section{Review of Markov Jump Linear Systems}\n\\label{sec:mjls}\n\\input{mjls.tex}\n\n\\section{Infinite Horizon Optimal Control}\n\\label{sec:infinite_2_controllers}\n\\input{infinite_horizon_2_controllers.tex}\n\n\\section{Extension to Multiple Local Controllers}\n\\label{sec:model_N_controllers}\n\\input{model_N_controllers.tex}\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\\input{discussion.tex}\n\\section{Conclusion}\n\\label{sec:conclusion}\n\\input{conc.tex}\n\n\\bibliographystyle{ieeetr}\n\n\\subsubsection{Finite Horizon MJLS Optimal Control}\n\\begin{definition}\n\\label{def:ss}\nThe MJLS of \\eqref{eq:MJ_X}-\\eqref{eq:MJ_M} is stochastically stabilizable (SS) if there exist gain matrices $K^{\\diamond}(m), m\\in\\mathcal M,$ such that for any initial state and mode, $\\sum_{t=0}^\\infty \\ee[||X^{\\diamond}_t||^2]< \\infty$ where $X^{\\diamond}_{t+1} = A_s(M_t) X^{\\diamond}_t$ and \n\\begin{align}\nA_s (M_t) = A^{\\diamond} (M_t) + B^{\\diamond}(M_t) K^{\\diamond}(M_t).\n\\label{A_s_matrix}\n\\end{align}\nIn this case, we say the gain matrices $K^{\\diamond}(m), m \\in \\mathcal M$, stabilize the MJLS.\n\\end{definition}\n\n\\begin{definition}\n\\label{def:sd}\nThe MJLS of \\eqref{eq:MJ_X}-\\eqref{eq:MJ_c} is stochastically detectable (SD) if there exist gain matrices $H^{\\diamond}(m), m \\in \\mathcal{M}$, such that for any initial state and mode, $\\sum_{t=0}^\\infty \\ee[||X^{\\diamond}_t||^2]< \\infty$ where \n$X^{\\diamond}_{t+1} = A_d(M_t) X^{\\diamond}_t$ and \n\\begin{align}\nA_d(M_t) = A^{\\diamond} (M_t) + H^{\\diamond}(M_t) \\big(Q^{\\diamond}(M_t) 
\\big)^{1\/2}.\n\\label{A_d_matrix}\n\\end{align}\n\\end{definition}\n\nFrom the theory of MJLS (\\cite{costa2006discrete,costa1995discrete}), we can obtain the following result for \nthe convergence of matrices $\\{P^{\\diamond}_{t}(m), t = T+1,T,T-1,\\ldots \\}$ to $P^{\\diamond}_{*}(m)$ satisfying the DCARE in \\eqref{eq:CARE_infinite}.\n\n\\begin{lemma}\n\\label{lm:MJ_infinite}\nSuppose the MJLS is stochastically detectable (SD). Then, matrices $P^{\\diamond}_{t}(m)$, $m \\in \\mathcal{M}$, converge as $t \\to -\\infty$ to PSD matrices $P^{\\diamond}_{*}(m)$ that satisfy the DCARE in \\eqref{eq:CARE_infinite} if and only if the MJLS is stochastically stabilizable (SS).\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{app:lm_MJ_infinite}.\n\\end{proof}\n\n\nStochastic stabilizability (SS) and stochastic detectability (SD) of an MJLS can be verified from the system matrices and the transition matrix for the mode of the MJLS \\cite{costa2006discrete,fang2002stochastic}. Specifically, we have the following lemmas.\n\n\\begin{lemma}(\\cite[Theorem 3.9]{costa2006discrete} and also \\cite[Corollary 2.6]{fang2002stochastic})\n\\label{lm:ss}\n\\begin{enumerate}\n\\item An MJLS is SS if and only if there exist matrices $K^{\\diamond}(m)$, $m \\in \\mathcal M$, such that the matrix\n\\begin{align}\n&\\mathcal{A}_s := \\notag \\\\\n&\\diag \\big(A_s(0)\\otimes A_s(0), \\ldots, A_s(M)\\otimes A_s(M) \\big)\n(\\Theta^{\\tp}\\otimes \\mathbf{I}),\n\\label{eq:bigmatrix_ss}\n\\end{align}\nis Schur stable, i.e., $\\rho(\\mathcal{A}_s) < 1$, where $A_s(M_t)$ is given by \\eqref{A_s_matrix}.\n\\item An MJLS is SD if and only if there exist matrices $H^{\\diamond}(m)$, $m \\in \\mathcal M$, such that the matrix\n\\begin{align}\n&\\mathcal{A}_d := \\notag \\\\\n&\\diag \\big(A_d(0)\\otimes A_d(0), \\ldots, A_d(M)\\otimes A_d(M) \\big)\n(\\Theta^{\\tp}\\otimes \\mathbf{I}),\n\\label{eq:bigmatrix_sd}\n\\end{align}\nis Schur stable, i.e., 
$\\rho(\\mathcal{A}_d) < 1$, where $A_d(M_t)$ is given by \\eqref{A_d_matrix}.\n\\end{enumerate}\n\\end{lemma}\n\\subsection{Communication Model}\n\\label{subs:comm_model_2C}\n At each time $t$, the local controller $C^1$ observes the state $X_t$ perfectly and sends the observed state to the remote controller $C^0$ through an unreliable link with packet drop probability $p^1$. \n Let $\\Gamma_t^1$ be a Bernoulli random variable describing the state of this link, that is, $\\Gamma_t^1=0$ if the link is broken (i.e., the packet is dropped) and $\\Gamma_t^1=1$ if the link is active. We assume that $\\Gamma_{t}^1, t \\geq 0,$ is an i.i.d. process and is independent of the noise process $W_{0:t}, t \\geq 0$. Let $Z_t^1$ be the output of the unreliable link. Then,\n \\begin{align}\n\\Gamma_t^1 = &\\left\\{\\begin{array}{ll}\n1 & \\text{ with probability }(1-p^1),\\\\\n0 & \\text{ with probability }p^1.\n\\end{array}\\right. \\label{eq:gamma}\n\\\\\nZ_t^1 = &\n\\left\\{\\begin{array}{ll}\nX_t & \\text{ when } \\Gamma_t^1 = 1,\\\\\n\\emptyset & \\text{ when } \\Gamma_t^1 = 0.\n\\end{array}\\right.\n\\label{Model:channel_2C}\n\\end{align}\nWe assume that $Z_t^1$ is perfectly observed by $C^0$. Further, we assume that $C^0$ sends an acknowledgment to the local controller $C^1$ if it receives the state value. Thus, effectively, $Z_t^1$ is perfectly observed by $C^1$ as well. The two controllers select their control actions at time $t$ after observing $Z_t^1$. We assume that the links for sending acknowledgments as well as the links from the controllers to the plant are perfectly reliable.\n\n\\subsection{Information structure and cost}\n\\label{subs:info_cost_2C}\n\nLet $H_t^0$ and $H_t^1$ denote the information available to the controllers $C^0$ and $C^1$ to make decisions at time $t$, respectively. Then,\n\\begin{align}\nH^0_t = \\{Z_{0:t}^1, U^0_{0:t-1}\\}, \\hspace{2mm} H^1_t= \\{X_{0:t}, Z_{0:t}^1, U^1_{0:t-1}, U^0_{0:t-1}\\}. 
\n\\label{Model:info_2C}\n\\end{align}\n$H^0_t$ will be referred to as the \\textit{common information} among the two controllers at time $t$\\footnote{ We assume that $U^0_{0:t-1}$ is part of $H^1_t$. This is not a restriction because even if $U^0_{0:t-1}$ is not directly observed by $C^1$ at time $t$, $C^1$ can still compute it using $C^0$'s strategy since it knows everything $C^0$ knows.}.\n\nLet $\\mathcal{H}^0_t$ and $\\mathcal{H}^1_t$ be the spaces of all possible \nrealizations of $H^0_t$ and $H^1_t$, respectively.\nThen, the control actions are selected according to\n\\begin{align}\nU^0_t = g^0_t(H^0_t), \\hspace{2mm} U^1_t = g^1_t(H^1_t), \n\\label{Model:strategy_2C}\n\\end{align}\nwhere the control laws $g^0_t:\\mathcal{H}^0_t \\to \\R^{d_0}$ and $g^1_t:\\mathcal{H}^1_t \\to \\R^{d_1}$ are measurable mappings.\nWe use $g:=(g^0_{0},g^0_1,\\dots,g^1_{0},g^1_1,\\dots)$ to denote the control strategies of $C^0$ and $C^1$.\n\n\nThe instantaneous cost $c(X_t,U_t)$ of the system is a quadratic function given by\n\\begin{align}\n&c(X_t,U_t) = \nX_t^\\tp Q X_t + U_t^\\tp R U_t,\n\\label{Model:cost_2C}\n\\end{align}\nwhere $Q$ is a symmetric positive semi-definite (PSD) matrix, and $R = \\begin{bmatrix}\nR^{00} & R^{01}\\\\R^{10} & R^{11}\n\\end{bmatrix}$ is a symmetric positive definite (PD) matrix.\n\n\\subsection{Problem Formulation}\n\\label{subs:prob_formulaiton_2C}\n\nLet $\\mathcal{G}$ denote the set of all possible control strategies of $C^0$ and $C^1$. 
The performance of control strategies $g$ over a\nfinite horizon $T$ is measured by the total expected cost\\footnote{Because the cost function $c(X_t,U_t)$ is always non-negative, the expectation is well-defined on the extended real line $\\R \\cup \\{\\infty\\}$.}:\n\\begin{align}\nJ_T(g):=\\ee^{g}\\left[\\sum_{t=0}^T c(X_t,U_t)\\right].\n\\label{Model:J_T_2C}\n\\end{align}\n\nWe refer to the system described by \\eqref{Model:system_2C}-\\eqref{Model:cost_2C} as the \\emph{decentralized networked control system} (DNCS).\nWe consider the problem of strategy optimization for the DNCS over finite and infinite time horizons. These two problems are formally defined below.\n\\begin{problem}[Finite Horizon DNCS Optimal Control]\n\\label{problem_finite_2C}\nFor the DNCS described by \\eqref{Model:system_2C}-\\eqref{Model:cost_2C}, determine decentralized control strategies $g$ that optimize the total expected cost over a finite horizon of duration $T$. In other words, solve the following strategy optimization problem:\n\\begin{align}\n\\inf_{g \\in\\mathcal{G}} \nJ_T(g).\n\\label{Model:obj_finite_2C}\n\\end{align}\n\\end{problem}\n\n\n\\begin{problem}[Infinite Horizon DNCS Optimal Control]\n\\label{problem_infinite_2C}\nFor the DNCS described by \\eqref{Model:system_2C}-\\eqref{Model:cost_2C}, find decentralized strategies $g$ that minimize the infinite horizon average cost. In other words, solve the following strategy optimization problem:\n\\begin{align}\n&\\inf_{g \\in\\mathcal{G}} \nJ_{\\infty}(g)\n:= \\inf_{g \\in\\mathcal{G}} \n\\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} J_{T}(g).\n\\label{Model:obj_infinite_2C}\n\\end{align}\n\n\\end{problem}\n\nWe make the following standard assumption on the system and cost matrices \\cite{Bertsekas:1995}. 
\n\\begin{assumption}\n\\label{assum:det_stb_2C}\n$(A,Q^{1\/2})$ is detectable and $(A,B)$ is stabilizable.\n\\end{assumption}\n\nThe finite horizon DNCS optimal control problem (Problem \\ref{problem_finite_2C}) has been solved in \\cite{ouyang2016optimal, asghari_ouyang_nayyar_tac_2018}. We summarize the finite horizon results below. \n\n\\begin{lemma}(\\cite[Theorem 2]{ouyang2016optimal})\n\\label{lm:opt_strategies_2C}\nThe optimal control strategies of Problem \\ref{problem_finite_2C} are given by\n\\begin{align}\n\\bmat{U^{0*}_t \\\\ U^{1*}_t \\\\} = K_t^0\\hat X_t + \\bmat{\\mathbf{0} \\\\ K_t^1 }\\left(X_t - \\hat X_t\\right), \\label{eq:opt_U_2C}\n\\end{align}\nwhere $\\hat X_t = \\ee[X_t|H^0_t]$ is the estimate (conditional expectation) of $X_t$ based on the common information $H^0_t$. The estimate can be computed recursively according to\n\\begin{align}\n\\hat X_0 = & 0,\n\\label{eq:estimator_0_2C}\n\\\\\n\\hat X_{t+1}\n= &\\left\\{\n\\begin{array}{ll}\n \\big(A + B K^0_t\\big)\\hat X_t& \\text{ if }Z_{t+1}= \\emptyset,\\\\\n X_{t+1} & \\text{ if }Z_{t+1} = X_{t+1}.\n\\end{array}\\right.\n\\label{eq:estimator_t_2C}\n\\end{align}\nThe gain matrices are given by\n\\begin{align}\n& K_t^0 = \\Psi(P_{t+1}^0,R,A,B),\n\\label{eq:K_finite_2C}\n\\\\\n& K_t^1 = \\Psi((1-p^1)P_{t+1}^0+p^1 P_{t+1}^1,R^{11},A,B^{11}),\n\\label{eq:tildeK_finite_2C}\n\\end{align}\nwhere $P_t^0$ and $P_t^1$ are PSD matrices obtained recursively as follows:\n\\begin{align}\n&P_{T+1}^0 = P_{T+1}^1 = \\mathbf{0}, \\label{eq:P_initial}\\\\\n&P_t^0 = \\Omega(P_{t+1}^0,Q,R,A,B),\n\\label{eq:P_finite_2C}\n\\\\\n&P_t^1 = \\Omega((1-p^1)P_{t+1}^0+p^1 P_{t+1}^1,Q,R^{11},A,B^{11}).\n\\label{eq:tildeP_finite_2C}\n\\end{align}\nFurthermore, the optimal cost is given by\n\\begin{align}\nJ^*_{T} = & \\sum_{t = 0}^T \\tr \\big((1-p^1)P_{t+1}^0+p^1 P_{t+1}^1 \\big).\n\\label{eq:opcost_finite_2C}\n\\end{align}\n\\end{lemma}\n\n\\begin{remark}\nNote that remote controller's action $U^{0*}_t$ in 
\\eqref{eq:opt_U_2C} is a function of $\\hat{X}_t$ only while the local controller's action $U^{1*}_t$ is a function of both $\\hat{X}_t$ and $X_t$. Further, as per \\eqref{eq:estimator_0_2C} and \\eqref{eq:estimator_t_2C}, $\\hat{X}_t$ is computed recursively based only on the knowledge of $Z_{0:t}$.\n\\end{remark}\n\nIn this paper, we will focus on solving the infinite horizon problem (Problem \\ref{problem_infinite_2C}). Our solution will employ results from Markov Jump Linear Systems (MJLS). We provide a review of the relevant results from the theory of Markov jump linear systems before describing our solution to Problem \\ref{problem_infinite_2C}.\n\\subsection{System Model and Problem Formulation}\n\nIn this section, we study an extension of the system model in Section \\ref{sec:model_2_controllers} to the case where instead of one local controller, we have $N$ local controllers, $C^1,C^2, \\ldots,C^N$, each associated with a co-located plant as shown in Fig. \\ref{fig:SystemModel}. We use $\\mathcal{N}$ to denote the set $\\{1,2, \\ldots, N\\}$ and $\\overline{\\mathcal{N}}$ to denote $\\{0,1,\\ldots, N\\}$.\nThe linear dynamics of plant $n \\in \\mathcal{N}$ are given by\n\\begin{align}\n&X_{t+1}^n \\!=\\! A^{nn} X_t^n + B^{nn}U^{n}_t+ B^{n0} U^0_t + W_t^n, t=0,\\dots,T,\n \\label{Model:system}\n\\end{align}\nwhere $X_t^n\\in \\R^{d_X^n}$ is the state of plant $n$ at time $t$,\n$U^n_t \\in \\R^{d_U^n}$ is the control action of the controller $C^{n}$, $U^0_t \\in \\R^{d_U^0}$ is the control action of the controller $C^{0}$, and $A^{nn}, B^{nn}, B^{n0}$ are matrices with appropriate dimensions.\nWe assume that $X^n_0 = 0$, and that $W^n_t$, $n \\in \\mathcal{N}, t \\geq 0$, are i.i.d. random variables with zero mean and $\\cov(W_t^n) = \\mathbf{I}$. 
Note that we do not assume that random variables $W_{t}^{n}$, $n \\in \\mathcal{N}, t \\geq 0$, are Gaussian.\n\nThe overall system dynamics can be written as\n\\begin{align}\nX_{t+1} = A X_t + BU_t + W_t,\n\\label{overall_state_dynamic}\n\\end{align}\nwhere $X_t = \\vecc(X^{1:N}_t), U_t = \\vecc(U^{0:N}_t), W_t = \\vecc(W^{1:N}_t)$ and $A,B$ are defined as\n\n\\begin{align}\nA &= \\begin{bmatrix}\n A^{11} & & \\text{\\huge0}\\\\\n & \\ddots & \\\\\n \\text{\\huge0} & & A^{NN}\n\\end{bmatrix}, \nB= \n\\begin{bmatrix}\nB^{10} & B^{11} & & \\text{\\huge0}\\\\\n\\vdots & & \\ddots & \\\\\nB^{N0} & \\text{\\huge0} & & B^{NN}\n\\end{bmatrix}.\n\\label{eq:thm_matricesABB}\n\\end{align}\n\n\\begin{singlespace}\n\n\\input{system_model}\n\\end{singlespace}\n\n\\subsubsection*{Communication Model}\n\\label{subs:comm_model}\nThe communication model is similar to the one described in Section \\ref{subs:comm_model_2C}. In particular, for each $n \\in \\mathcal{N}$, there is an unreliable link with link failure probability $p^n$ from the local controller $C^{n}$ to the remote controller $C^0$. The local controller $C^{n}$ uses its unreliable link to send the state $X_t^n$ of its co-located plant to the remote controller. The state of this link at time $t$ is described by a Bernoulli random variable $\\Gamma_t^n$ and the output of this link at time $t$ is denoted by $Z_t^n$, where $\\Gamma_t^n$ and $Z_t^n$ are described by equations similar to \\eqref{eq:gamma} and \\eqref{Model:channel_2C}. We assume that $\\Gamma_{0:t}^{1:N}$, $t \\geq 0$, are independent random variables and that they are independent of $W_{0:t}^{1:N}$, $t \\geq 0$.\n\nUnlike the unreliable uplinks, we assume that there exist perfect links from $C^0$ to $C^{n}$, for each $n \\in \\mathcal{N}$. Therefore, $C^0$ can share $Z_t^{1:N}$ and $U_{t-1}^0$ with all local controllers $C^{1:N}$.\nAll controllers select their control actions at time $t$ after observing $Z_t^{1:N}$ and $U_{t-1}^0$. 
\nWe assume that for each $n \\in \\mathcal{N}$, the links from controllers $C^{n}$ and $C^0$ to plant $n$ are perfect.\n\n\n\\subsubsection*{Information structure and cost}\n\\label{subs:info_cost}\nLet $H^{n}_t$ denote the information available to controller $C^{n}$, $n \\in \\overline{\\mathcal{N}}$, at time $t$.\nThen,\n\\begin{align}\nH^{n}_t&= \\{X^n_{0:t}, U^{n}_{0:t-1}, Z_{0:t}^{1:N}, U^0_{0:t-1}\\}, \\hspace{2mm} n \\in \\mathcal{N}, \\notag \\\\\nH^0_t &= \\{Z_{0:t}^{1:N}, U^0_{0:t-1}\\}. \n\\label{Model:info}\n\\end{align}\nLet $\\mathcal{H}^{n}_t$ be the space of all possible realizations of $H_t^n$.\nThen, $C^{n}$'s actions are selected according to\n\\begin{align}\nU^{n}_t &= g^{n}_t(H^{n}_t), \\hspace{2mm} n \\in \\overline{\\mathcal{N}},\n\\label{Model:strategy}\n\\end{align}\nwhere $g^{n}_t:\\mathcal{H}^{n}_t \\to \\R^{d_U^n}$ is a Borel measurable mapping.\nWe use $g:=(g^0_{0},g^0_1,\\dots,g^1_{0},g^1_1,\\dots, g^N_{0},g^N_1,\\dots)$ to collectively denote the control strategies of all $N+1$ controllers.\n\nThe instantaneous cost $c(X_t,U_t)$ of the system is a quadratic function similar to the one described in \\eqref{Model:cost_2C} where $X_t = \\vecc(X^{1:N}_t), U_t = \\vecc(U^{0:N}_t)$ and \n\\begin{align}\nQ= \\begin{bmatrix}\nQ^{11} &\\ldots &Q^{1N} \\\\\n\\vdots & \\ddots & \\vdots \\\\\nQ^{N1} & \\ldots & Q^{NN}\n\\end{bmatrix}, R= \\begin{bmatrix}\nR^{00} &R^{01} &\\ldots &R^{0N} \\\\\nR^{10} &R^{11} &\\ldots &R^{1N} \\\\\n\\vdots &\\vdots & \\ddots & \\vdots \\\\\nR^{N0} & \\ldots & \\ldots & R^{NN} \n\\end{bmatrix}.\n\\label{matrix_structure}\n\\end{align}\n$Q$ is a symmetric positive semi-definite (PSD) matrix and $R$ is a symmetric positive definite (PD) matrix. 
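To make the block structure in \\eqref{eq:thm_matricesABB} and the quadratic cost $c(X_t,U_t) = X_t^\\tp Q X_t + U_t^\\tp R U_t$ concrete, here is a minimal numerical sketch, assuming a hypothetical DNCS with $N = 2$ plants, scalar states and inputs, and placeholder matrix values (none of these numbers come from the paper); it assembles the overall $A$ and $B$ and evaluates the instantaneous cost:

```python
import numpy as np

# Hypothetical dimensions: N = 2 plants, scalar states and inputs (illustrative only).
N, dX, dU0, dU = 2, 1, 1, 1

# Placeholder per-plant matrices A^{nn}, B^{nn}, B^{n0} (made-up values).
Ann = [np.array([[1.2]]), np.array([[0.8]])]
Bnn = [np.array([[1.0]]), np.array([[0.5]])]
Bn0 = [np.array([[0.3]]), np.array([[0.7]])]

# Overall A is block diagonal in the A^{nn}; overall B stacks the B^{n0}
# column next to a block-diagonal arrangement of the B^{nn}.
A = np.zeros((N * dX, N * dX))
B = np.zeros((N * dX, dU0 + N * dU))
for n in range(N):
    A[n * dX:(n + 1) * dX, n * dX:(n + 1) * dX] = Ann[n]
    B[n * dX:(n + 1) * dX, :dU0] = Bn0[n]
    B[n * dX:(n + 1) * dX, dU0 + n * dU:dU0 + (n + 1) * dU] = Bnn[n]

# Quadratic instantaneous cost with PSD Q and PD R (identity for simplicity).
Q = np.eye(N * dX)
R = np.eye(dU0 + N * dU)

def cost(X, U):
    """Evaluate c(X, U) = X' Q X + U' R U."""
    return float(X @ Q @ X + U @ R @ U)

X = np.array([1.0, -2.0])      # stacked state vecc(X^1, X^2)
U = np.array([0.5, 0.0, 1.0])  # stacked input vecc(U^0, U^1, U^2)
print(cost(X, U))              # prints 6.25
```

Since $Q$ is PSD and $R$ is PD (here both identity), the evaluated cost is nonnegative for any stacked state and input, consistent with the assumptions above.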
\n\n\\subsubsection*{Problem Formulation}\n\\label{subs:prob_formulaiton}\nLet $\\mathcal{G}$ denote the set of all possible control strategies of controllers $C^{0},\\ldots,C^N$.\nThe performance of control strategies $g$ over a finite horizon $T$ is measured by $J_T(g)$ defined in \\eqref{Model:J_T_2C}.\nFor the decentralized networked control system (DNCS) described above, we consider the problem of strategy optimization over finite and infinite time horizons. These two problems are formally defined below.\n\\begin{problem}\n\\label{problem_finite}\nFor the DNCS described above,\nsolve the following strategy optimization problem:\n\\begin{align}\n\\inf_{g \\in\\mathcal{G}} \nJ_T(g)\n\\label{Model:obj_finite}\n\\end{align}\n\\end{problem}\n\n\n\\begin{problem}\n\\label{problem_infinite}\nFor the DNCS described above,\nsolve the following strategy optimization problem:\n\\begin{align}\n&\\inf_{g \\in\\mathcal{G}} \nJ_{\\infty}(g)\n:= \\inf_{g \\in\\mathcal{G}} \n\\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} J_{T}(g).\n\\label{Model:obj_infinite}\n\\end{align}\n\\end{problem}\n\nDue to stability issues in the infinite horizon problem, we make the following assumptions on the system and cost matrices. \n\\begin{assumption}\n\\label{assum:det_stb}\n$(A,Q^{1\/2})$ is detectable and $(A,B)$ is stabilizable.\n\\end{assumption}\n\n\\begin{assumption}\n\\label{assum:det_stb_2}\n$\\big(A^{nn},(Q^{nn})^{1\/2} \\big)$ is detectable for all $n \\in \\mathcal{N}$.\n\\end{assumption}\n\nThe finite horizon strategy optimization problem (Problem \\ref{problem_finite}) has been solved in \\cite{asghari_ouyang_nayyar_tac_2018}. We summarize the finite horizon results below. 
\n\\begin{lemma}(\\cite[Theorem 2]{asghari_ouyang_nayyar_tac_2018})\n\\label{lm:opt_strategies}\nThe optimal control strategies of Problem \\ref{problem_finite} are given by\n\\begin{align}\n\\bmat{U^{0*}_t \\\\ U^{1*}_t \\\\ \\vdots \\\\ U^{N*}_t} = K_t^0 \\hat X_t + \n\\begin{bmatrix}\n\\mathbf{0} & \\ldots & \\mathbf{0} \\\\\n&&\\\\\nK_t^{1} & & \\text{\\huge0} \\\\\n &\\ddots & \\\\\n \\text{\\huge0} & & K_t^{N}\n\\end{bmatrix} \\left(X_t - \\hat X_t\\right), \n\\label{eq:opt_U}\n\\end{align}\nwhere $\\hat X_t = \\vecc(\\hat X^{1:N}_t) = \\vecc(\\ee[X^1_t|H^0_t], \\ldots, \\ee[X^N_t|H^0_t]) = \\ee[X_t|H^0_t]$ is the estimate (conditional expectation) of $X_t$ based on the common information $H^0_t$. The estimate $\\hat X_t$ can be computed recursively as follows: for $n \\in \\mathcal{N}$,\n\\begin{align}\n\\hat X_0^n = & 0, \n\\label{eq:estimator_0}\n\\\\\n\\hat X_{t+1}^n\n= &\\left\\{\n\\begin{array}{ll}\n \\big([A]_{n,:} + [B]_{n,:} K^0_t\\big)\\hat X_t& \\text{ if }Z_{t+1}^n= \\emptyset,\\\\\n X_{t+1}^n & \\text{ if }Z_{t+1}^n = X_{t+1}^n.\n\\end{array}\\right. 
\n\\label{eq:estimator_t}\n\\end{align}\nThe gain matrices $K_t^0$ and $K_t^{n}$, $n \\in \\mathcal{N}$, are given by\n\\begin{align}\n& K^0_t = \\Psi(P_{t+1}^0,R,A,B),\n\\label{eq:K_finite}\n\\\\\n&K_t^n = \\Psi \\big ((1-p^n)[P_{t+1}^0]_{n,n}+p^n P_{t+1}^{n},R^{nn},A^{nn},B^{nn} \\big),\n\\label{eq:tildeK_finite}\n\\end{align}\nwhere $P^0_t = \\begin{bmatrix}\n[P_{t}^0]_{1,1} &\\ldots &[P_{t}^0]_{1,N} \\\\\n\\vdots & \\ddots & \\vdots \\\\\n[P_{t}^0]_{N,1} & \\ldots & [P_{t}^0]_{N,N}\n\\end{bmatrix} \\in \\mathbb{R}^{(\\sum_{n=1}^N d^n_X)\\times (\\sum_{n=1}^N d^n_X)}$ and $P_t^{n} \\in \\mathbb{R}^{ d^n_X\\times d^n_X} $, for $n \\in \\mathcal{N}$, are PSD matrices obtained recursively as follows:\n\\begin{align}\n&P_{T+1}^0 = \\mathbf{0}, \\quad P_{T+1}^{n} = \\mathbf{0}, \\label{eq:P_N_init}\\\\\n&P_t^0 = \\Omega(P_{t+1}^0,Q,R,A,B),\n\\label{eq:P_finite}\n\\\\\n&P_t^{n} = \\Omega \\big((1-p^n)[P_{t+1}^0]_{n,n}+p^n P_{t+1}^{n},Q^{nn},R^{nn},A^{nn},B^{nn}\\big).\n\\label{eq:tildeP_finite}\n\\end{align}\nFurthermore, the optimal cost is given by\n\\begin{align}\nJ^*_{T} = & \\sum_{t = 0}^T \\sum_{n = 1}^N \\tr \\Big((1-p^n)[P_{t+1}^0]_{n,n}+p^n P_{t+1}^{n} \\Big).\n\\label{eq:opcost_finite}\n\\end{align}\n\\end{lemma}\n\n\\begin{remark}\nNote that the remote controller's action $U^{0*}_t$ in \\eqref{eq:opt_U} is a function of $\\hat{X}_t$ only while the local controller $C^n$'s action $U^{n*}_t$ is a function of both $\\hat{X}_t$ and $X_t^n$. 
Further, as per \\eqref{eq:estimator_0} and \\eqref{eq:estimator_t}, $\\hat{X}_t$ is computed recursively based only on the knowledge of $Z_{0:t}^{1:N}$.\n\\end{remark}\n\n\\subsection{Infinite Horizon Optimal Control}\n\\label{sec:infinite_N_controllers}\nAs in Section \\ref{sec:infinite_2_controllers}, the infinite horizon problem (Problem \\ref{problem_infinite}) can be solved by answering the following three questions:\n\n\n\n\n\n\n\n\n\\begin{enumerate}[label=(\\mylabel{Q}{\\arabic*})]\n\\item Do matrices $P_t^{0},\\ldots,P_t^N$, defined in \\eqref{eq:P_N_init}-\\eqref{eq:tildeP_finite} converge as $t \\to -\\infty$ to $P_*^0,\\ldots, P_*^N$, that satisfy the coupled fixed point equations \\eqref{eq:P_finite_NC_fixed} - \\eqref{eq:tildeP_finite_NC_fixed} below?\n\\begin{align}\n&P_*^0 = \\Omega \\big(P_{*}^0,Q,R,A,B\\big),\n\\label{eq:P_finite_NC_fixed}\n\\\\\n& P_*^{n} = \\Omega \\big((1-p^n)[P_{*}^0]_{n,n}+p^n P_{*}^{n},Q^{nn},R^{nn},A^{nn},B^{nn}\\big).\n\\label{eq:tildeP_finite_NC_fixed}\n\\end{align}\n\\item If matrices $P_t^{0},\\ldots,P_t^N$ converge and we define matrices $K_*^0, \\ldots, K_*^N$, using matrices $P_*^{0},\\ldots,P_*^N$ as follows, \n\\begin{align}\n& K_*^0 = \\Psi(P_{*}^0,R,A,B),\n\\label{eq:K_finite_NC_fixed}\n\\\\\n& K_*^n = \\Psi \\big((1-p^n)[P_{*}^0]_{n,n}+p^n P_{*}^n,R^{nn},A^{nn},B^{nn}\\big),\n\\label{eq:tildeK_finite_NC_fixed}\n\\end{align}\nare the following strategies optimal for Problem \\ref{problem_infinite}?\n\\begin{align}\n\\bmat{U^{0*}_t \\\\ U^{1*}_t \\\\ \\vdots \\\\ U^{N*}_t} = K_*^0 \\hat X_t + \n\\begin{bmatrix}\n\\mathbf{0} & \\ldots & \\mathbf{0} \\\\\n&&\\\\\nK_*^{1} & & \\text{\\huge0} \\\\\n &\\ddots & \\\\\n \\text{\\huge0} & & K_*^{N}\n\\end{bmatrix} \\left(X_t - \\hat X_t\\right), \n\\label{eq:opt_U_NC_fixed}\n\\end{align}\nwhere $\\hat{X}_t$ can be computed recursively using \\eqref{eq:estimator_0} - \\eqref{eq:estimator_t} by replacing $K^0_t$ with $K^0_*$.\n\n\\item If matrices $P_t^{0},\\ldots,P_t^N$ 
do not converge, is it still possible to find control strategies with finite cost for Problem \\ref{problem_infinite}?\n\\end{enumerate}\n\n\n\\subsection{Answering Q1}\nAs in Section \\ref{sec:Q1_2C}, we will answer Q1 by establishing a connection between the recursions for matrices $P_t^{0},\\ldots,P^N_t$ in our DNCS problem and the recursions for matrices $P_t^{\\diamond}(m)$, $m \\in \\mathcal{M},$ in the MJLS problem. One obstacle in making this connection is the fact that the matrices $P_t^{0},\\ldots,P^N_t$ in our DNCS problem do not have the same dimensions while the matrices $P_t^{\\diamond}(m)$, $m \\in \\mathcal{M},$ in the MJLS problem all have the same dimensions. This obstacle was not present in Section \\ref{sec:infinite_2_controllers}. To get around this difficulty, we first provide a new representation $\\bar P_t^{0},\\ldots,\\bar P_t^{N}$ for the matrices $P_t^{0},\\ldots,P^N_t$ in our DNCS problem such that the new matrices $\\bar P_t^{0},\\ldots,\\bar P_t^{N}$ all have the same dimensions.\n\n\n\\begin{lemma}\n\\label{lm:opt_strategies_new_rep}\nDefine matrices $\\bar P_t^{n} \\in \\mathbb{R}^{(\\sum_{k=1}^N d^k_X)\\times (\\sum_{k=1}^N d^k_X)}$, $n =0,1,\\ldots,N$, recursively as follows:\n\\begin{align}\n&\\bar P_{T+1}^n = \\mathbf{0}, \\label{eq:barP_init}\\\\\n&\\bar P_t^0 = \\Omega(\\bar P_{t+1}^0,Q,R,A,B),\n\\label{eq:barP_finite_0}\n\\\\\n&\\bar P_t^{n} = \\Omega \\big((1-p^n)\\bar P_{t+1}^0+p^n \\bar P_{t+1}^n, \n \\mathcal{L}_{zero}(Q,Q^{nn},n,n), \\notag \\\\\n& \\hspace{1.4cm} \\mathcal{L}_{iden}(R,R^{nn},n+1), \\mathcal{L}_{zero}(A,A^{nn},n,n), \\notag \\\\\n & \\hspace{1.4cm} \\mathcal{L}_{zero}(B,B^{nn},n,n+1) \\big),\n\\label{eq:barP_finite}\n\\end{align}\nwhere the operators $\\mathcal{L}_{zero}$ and $\\mathcal{L}_{iden}$ are as defined in Section \\ref{sec:operators}.\nThen, for $ t \\leq T+1$,\n\\begin{align}\n\\label{new_rep1}\n&\\bar P_t^0 = P_t^0, \\\\\n& \\bar P_t^{n} = \\mathcal{L}_{zero}(P^0_t, P_t^{n},n,n), 
~~n=1,\\ldots,N.\n\\label{new_rep2}\n\\end{align}\nConsequently, matrices $P_t^{0},\\ldots,P^N_t$ converge as $t \\to -\\infty$ if and only if matrices $\\bar P_t^{0},\\ldots,\\bar P_t^{N}$ converge as $t \\to -\\infty$.\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{proof_lm:opt_strategies_new_rep} for a proof.\n\\end{proof}\n\nWe can now proceed with constructing an auxiliary MJLS.\n\\subsubsection*{\\textbf{Step 1: Constructing an auxiliary MJLS}}\nConsider an auxiliary MJLS where the set $\\mathcal{M}$ of modes is $ \\{0,1,\\ldots,N\\}$. Then, we have the following $N+1$ sequences of matrices, $P^{\\diamond}_t(0),P^{\\diamond}_t(1),\\ldots,P^{\\diamond}_t(N)$, defined recursively using \\eqref{eq:CARE_init} and \\eqref{eq:CARE_finite} for this MJLS:\n\\begin{align}\n&P^{\\diamond}_{T+1}(m) =\\mathbf{0}, ~\\forall m \\in \\mathcal{M}, \\label{eq:P_N_MJ_init}\\\\\n&P^{\\diamond}_t(0) = \\notag \\\\\n&\\Omega\\big(\\sum_{k=0}^N \\theta^{0k} P^{\\diamond}_{t+1}(k),Q^{\\diamond}(0),R^{\\diamond}(0),A^{\\diamond}(0),B^{\\diamond}(0) \\big),\n\\label{P_MJ_cmp_NC_0}\n\\\\\n&P^{\\diamond}_t(n)= \\notag \\\\\n& \\Omega\\big(\n\\sum_{k=0}^N \\theta^{nk} P^{\\diamond}_{t+1}(k),Q^{\\diamond}(n),R^{\\diamond}(n),A^{\\diamond}(n),B^{\\diamond}(n) \\big).\n\\label{P_MJ_cmp_NC_1}\n\\end{align}\nFurthermore, we have the recursions of \\eqref{eq:barP_init}-\\eqref{eq:barP_finite} for matrices $\\bar P_t^{0},\\ldots, \\bar P_t^N$ in our DNCS problem.\nBy comparing \\eqref{P_MJ_cmp_NC_0}-\\eqref{P_MJ_cmp_NC_1} with \\eqref{eq:barP_finite_0}-\\eqref{eq:barP_finite}, we find that the following definitions would make the two sets of equations identical:\n\\begin{align}\n&A^{\\diamond}(0) = A, A^{\\diamond}(n) = \\mathcal{L}_{zero}(A,A^{nn},n,n), \\hspace{1mm} n \\in \\mathcal{N},\n\\label{A_mj}\n\\end{align}\n\\begin{align}\n&B^{\\diamond}(0) = B, B^{\\diamond}(n) = \\mathcal{L}_{zero}(B,B^{nn},n,n+1), n \\in 
\\mathcal{N},\n\\label{B_mj}\n\\end{align}\n\\begin{align}\n&Q^{\\diamond}(0) = Q, Q^{\\diamond}(n) = \\mathcal{L}_{zero}(Q,Q^{nn},n,n), \\hspace{1mm} n \\in \\mathcal{N},\n\\label{Q_mj}\n\\\\\n&R^{\\diamond}(0) = R, R^{\\diamond}(n) = \\mathcal{L}_{iden}(R,R^{nn},n+1), \\hspace{1mm} n \\in \\mathcal{N},\n\\label{R_mj} \n\\\\\n&\\Theta =\n\\begin{blockarray}{rccccc}\n&0 &1 &2 &\\ldots &N \\\\\n\\begin{block}{r[ccccc]}\n 0 & 1 & 0 & \\ldots &\\ldots & 0 \\\\\n 1 & 1-p^1 & p^1 &\\ddots & & \\vdots \\\\\n 2 & 1-p^2 &0 &p^2 & \\ddots& \\vdots \\\\\n \\vdots & \\vdots & \\vdots &\\ddots &\\ddots & 0 \\\\\n N & 1-p^N & 0 &\\ldots &0 &p^N \\\\\n\\end{block}\n\\end{blockarray}.\n\\label{transition_prob}\n\\end{align}\nTo complete the definition of the auxiliary MJLS, we need to define the initial state and mode probability distributions $\\pi_{X_0^{\\diamond}}$ and $\\pi_{M_0}$. These can be defined arbitrarily and for simplicity we assume that the initial state is fixed to be $X^{\\diamond}_0 = 0$ and the initial mode $M_0$ is uniformly distributed over the set $\\mathcal{M}$. The following lemma summarizes the above discussion.\n\n\n\n\\begin{lemma}\n\\label{equality_recursions_NC}\nFor the auxiliary MJLS described by \\eqref{A_mj}-\\eqref{transition_prob}, \n the coupled recursions in \\eqref{eq:P_N_MJ_init}-\\eqref{P_MJ_cmp_NC_1} are identical to the coupled recursions in \\eqref{eq:barP_init}-\\eqref{eq:barP_finite}.\n\\end{lemma}\n\\begin{proof}\nThe lemma can be proved by straightforward algebraic manipulations.\n\\end{proof}\n\n\\subsubsection*{\\textbf{Step 2: Using MJLS results to answer Q1}}\nNow that we have constructed an auxiliary MJLS where $P^{\\diamond}_t(m) = \\bar P_t^{m}$ for $m=0,\\ldots,N$, we can use the MJLS results about convergence of matrices $P^{\\diamond}_t(m)$ (that is, Lemmas \\ref{lm:MJ_infinite} and \\ref{lm:ss}) to answer Q1. 
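As a quick sanity check on the auxiliary chain, the transition matrix $\Theta$ of \eqref{transition_prob} can be built directly from the link-failure probabilities and verified to be row-stochastic. The sketch below is illustrative only; the values of $p^n$ are arbitrary.

```python
import numpy as np

def mode_transition_matrix(p):
    """Theta of (transition_prob): mode 0 is absorbing; from mode n the chain
    jumps to mode 0 w.p. 1 - p^n and stays in mode n w.p. p^n."""
    N = len(p)
    Theta = np.zeros((N + 1, N + 1))
    Theta[0, 0] = 1.0
    for n, pn in enumerate(p, start=1):
        Theta[n, 0] = 1.0 - pn
        Theta[n, n] = pn
    return Theta

# arbitrary link-failure probabilities for three subsystems
Theta = mode_transition_matrix([0.2, 0.5, 0.7])
print(Theta.sum(axis=1))  # every row sums to 1
```

Row-stochasticity is exactly what Lemma \ref{equality_recursions_NC} needs for $\Theta$ to define a valid mode process.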
The following lemma states this result.\n\n\\begin{lemma}\n\\label{lm:pc_NC}\nSuppose Assumptions \\ref{assum:det_stb} and \\ref{assum:det_stb_2} hold. Then, the matrices $ P^0_t,\\ldots, P^N_t$ defined in \\eqref{eq:P_N_init}-\\eqref{eq:tildeP_finite} converge as $t \\to -\\infty$ to matrices $P_*^0,\\ldots, P_*^N$ that satisfy the coupled fixed point equations \\eqref{eq:P_finite_NC_fixed}-\\eqref{eq:tildeP_finite_NC_fixed} if and only if $p^n < p_c^n$ for all $n \\in \\mathcal{N}$, where the critical thresholds $p_c^n, n \\in \\mathcal{N}$, are given by\n\\begin{align}\n\\frac{1}{\\sqrt{p_c^n}} = \\min_{K \\in \\mathbb{R}^{d^n_U \\times d^n_X}}\\rho(A^{nn}+B^{nn}K).\n\\label{eq:pc}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{proof_lm:pc}.\n\\end{proof}\n\n\\subsection{Answering Q2 and Q3}\nAssuming that $P_{t}^n \\rightarrow P_*^n$ as $t \\rightarrow -\\infty$ for $n=0,\\ldots,N,$ we want to know whether the control strategies of \\eqref{eq:K_finite_NC_fixed}-\\eqref{eq:opt_U_NC_fixed} are optimal for Problem \\ref{problem_infinite}. The following result shows that these control strategies are indeed optimal.\n\n\n\\begin{lemma}\n\\label{lm:Q2_NC}\nIf $P_{t}^n \\rightarrow P_*^n$ as $t \\rightarrow -\\infty$ for $n=0,\\ldots,N$, then\n\\begin{enumerate}\n\\item Problem \\ref{problem_infinite} has finite optimal cost,\n\\item The strategies described by \\eqref{eq:K_finite_NC_fixed}-\\eqref{eq:opt_U_NC_fixed} are optimal for Problem \\ref{problem_infinite},\n\\item Under the strategies described by \\eqref{eq:K_finite_NC_fixed}-\\eqref{eq:opt_U_NC_fixed}, $X_t$ and $(X_t-\\hat X_t)$ are mean square stable.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{sec:proof_lm_Q2_NC}.\n\\end{proof}\nThe following lemma answers Q3.\n\\begin{lemma}\\label{lm:Q3_NC}\nIf matrices $P_t^{0},\\ldots,P_t^{N}$ do not converge as $t \\rightarrow -\\infty$, then Problem \\ref{problem_infinite} does not have finite optimal cost. 
\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{sec:proof_lm_Q3_NC}.\n\\end{proof}\n\nNow that we have answered Q1, Q2 and Q3, we can summarize our results for the infinite horizon DNCS problem (Problem \\ref{problem_infinite}).\n\n\\subsection{Summary of the Infinite Horizon Results}\nBased on the answers to Q1-Q3, the following theorem summarizes our results for Problem \\ref{problem_infinite}.\n\\begin{theorem}\n\\label{thm:DC_infinite_NC}\nSuppose Assumptions \\ref{assum:det_stb} and \\ref{assum:det_stb_2} hold. Then, \n\\begin{enumerate}[(i)]\n\\item Problem \\ref{problem_infinite} has finite optimal cost if and only if\nfor all $n \\in \\mathcal{N}$, $p^n < p_c^n$ where the critical threshold $p_c^n$ is given by \\eqref{eq:pc}.\n\n\\item If $p^n < p_c^n$ for all $n \\in \\mathcal{N}$, there exist symmetric positive semi-definite matrices $P_*^0,\\ldots, P_*^N$ that satisfy \\eqref{eq:P_finite_NC_fixed}-\\eqref{eq:tildeP_finite_NC_fixed} and the optimal strategies for Problem \\ref{problem_infinite} are given by\n\\begin{align}\n\\bmat{U^{0*}_t \\\\ U^{1*}_t \\\\ \\vdots \\\\ U^{N*}_t} = K_*^0 \\hat X_t + \n\\begin{bmatrix}\n\\mathbf{0} & \\ldots & \\mathbf{0} \\\\\n&&\\\\\nK_*^{1} & & \\text{\\huge0} \\\\\n &\\ddots & \\\\\n \\text{\\huge0} & & K_*^{N}\n\\end{bmatrix} \\left(X_t - \\hat X_t\\right), \n\\label{eq:opt_U_NC_thm}\n\\end{align}\nwhere the estimate $\\hat X_t$ can be computed recursively using \\eqref{eq:estimator_0}-\\eqref{eq:estimator_t} by replacing $K^0_t$ with $K^0_*$ and the gain matrices $K_*^0, \\ldots,K_*^N$ are given by \n\\begin{align}\n& K_*^0 = \\Psi(P_{*}^0,R,A,B),\n\\label{eq:K_finite_NC_final}\n\\\\\n& K_*^n = \\Psi \\big((1-p^n)[P_{*}^0]_{n,n}+p^n P_{*}^n,R^{nn},A^{nn},B^{nn}\\big).\n\\label{eq:tildeK_finite_NC_final}\n\\end{align}\n\\item If $p^n < p_c^n$ for all $n \\in \\mathcal{N}$, then under the strategies described in part (ii) above, $X_t$ and $(X_t-\\hat X_t)$ are mean square 
stable.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{corollary}\nSuppose the local controllers are just sensors (i.e., $B^{nn} = 0$ for $n=1,\\ldots,N$) and the remote controller is the only controller present. Then, if $\\rho(A^{nn}) < 1\/\\sqrt{p^n}$ for all $n=1,\\ldots,N$, the optimal controller of this multi-sensor, single-controller NCS is given by $U^{0*}_t$ in \\eqref{eq:opt_U_NC_thm}, and the corresponding closed-loop system is mean-square stable.\n\\end{corollary}\n\n\n\\begin{remark}\nIf Assumption \\ref{assum:det_stb_2} is not true, define, for $n=1,\\ldots,N$, $p^n_c = \\min \\{p^n_s, p^n_d\\},$ where \n\\begin{align}\n\\frac{1}{\\sqrt{p_s^n}} &= \\min_{K \\in \\mathbb{R}^{d^n_U \\times d^n_X}}\\rho(A^{nn}+B^{nn}K),\n\\label{eq:pc_ss}\n\\\\\n\\frac{1}{\\sqrt{p_d^n}} &= \\min_{H \\in \\mathbb{R}^{d^n_X \\times d^n_X}}\\rho \\big(A^{nn}+H (Q^{nn})^{1\/2} \\big).\n\\label{eq:pc_sd}\n\\end{align}\nThen, using arguments similar to those used for proving Theorem \\ref{thm:DC_infinite_NC}, we can show that if $p^n < p_c^n$ for all $n \\in \\mathcal{N}$, the strategies in \\eqref{eq:opt_U_NC_thm} are optimal for Problem \\ref{problem_infinite}. Moreover, Problem \\ref{problem_infinite} has finite optimal cost and the system state is mean square stable under optimal strategies.\n\\end{remark}\n\n\n\\subsubsection*{Communication Model}\n\\label{subs:comm_model}\nWe assume that the communication network between the local controller $C^{n}$, $n \\in \\mathcal{N}$, and the remote controller $C^0$ is the same as the one described for the local controller $C^{1}$ in Section \\ref{subs:comm_model_2C}. In particular, there exists an unreliable channel with link failure probability $p^n$ from the local controller $C^{n}$, $n \\in \\mathcal{N}$, to the remote controller $C^0$, through which the local controller $C^{n}$ sends the perfectly observed state $X_t^n$ of its co-located plant. 
The state of this channel at time $t$ is described by the Bernoulli random variable $\\Gamma_t^n$ and the output of this channel at time $t$ is denoted by $Z_t^n$, where $\\Gamma_t^n$ and $Z_t^n$ are described similarly to \\eqref{Model:channel_2C}.\n\nUnlike the unreliable uplinks, we assume that there exist perfect links from $C^0$ to $C^{n}$, for $n \\in \\mathcal{N}$. Therefore, $C^0$ can share $Z_t^{1:N}$ and $U_{t-1}^0$ with all local controllers $C^{1:N}$.\nAll controllers select their control actions at time $t$ after observing $Z_t^{1:N}$. \nA schematic of the time ordering of variables is shown in Fig. \\ref{fig:timing}. \nWe assume that for all $n \\in \\mathcal{N}$, the links from controllers $C^{n}$ and $C^0$ to the plant $n$ are perfect.\n\n\n\\subsubsection*{Information structure and cost}\n\\label{subs:info_cost_2C}\nLet $H^{n}_t$ denote the information available to controller $C^{n}$, $n \\in {\\color{black} \\overline{\\mathcal{N}}}$, to make decisions at time $t$.\nThen,\n\\begin{align}\nH^{n}_t&= \\{X^n_{0:t}, U^{n}_{0:t-1}, Z_{0:t}^{1:N}, U^0_{0:t-1}\\}, \\hspace{2mm} n \\in \\mathcal{N} \\notag \\\\\nH^0_t &= \\{Z_{0:t}^{1:N}, U^0_{0:t-1}\\}. 
\n\\label{Model:info}\n\\end{align}\nLet $\\mathcal{H}^{n}_t$ be the space of all possible realizations of $H_t^n$.\nThen, $C^{n}$'s actions are selected according to\n\\begin{align}\nU^{n}_t &= g^{n}_t(H^{n}_t), \\hspace{2mm} n \\in {\\color{black} \\overline{\\mathcal{N}}},\n\\label{Model:strategy}\n\\end{align}\nwhere $g^{n}_t:\\mathcal{H}^{n}_t{\\color{black} \\to } \\R^{d_U^n}$ is a Borel measurable mapping.\nWe use $g:=(g^0_{0},g^0_1,\\dots,g^1_{0},g^1_1,\\dots, g^N_{0},g^N_1,\\dots)$ to denote the control strategies of $C^0$ and $C^{1:N}$.\n\nThe instantaneous cost $c_t(X_t, U_t)$ of the system is a quadratic function similar to the one described in \\eqref{Model:cost_2C} where $X_t = \\vecc(X^{1:N}_t), U_t = \\vecc(U^{0:N}_t)$ and \n\\begin{align}\nQ= \\begin{bmatrix}\nQ^{11} &\\ldots &Q^{1N} \\\\\n\\vdots & \\ddots & \\vdots \\\\\nQ^{N1} & \\ldots & Q^{NN}\n\\end{bmatrix}, R= \\begin{bmatrix}\nR^{00} &R^{01} &\\ldots &R^{0N} \\\\\nR^{10} &R^{11} &\\ldots &R^{1N} \\\\\n\\vdots &\\vdots & \\ddots & \\vdots \\\\\nR^{N0} & \\ldots & \\ldots & R^{NN} \n\\end{bmatrix}.\n\\label{matrix_structure}\n\\end{align}\n$Q$ is a symmetric positive semi-definite (PSD) matrix and $R$ is a symmetric positive definite (PD) matrix. \n\n\\subsubsection*{Problem Formulation}\n\\label{subs:prob_formulaiton}\nLet $\\mathcal{G}$ denote the set of all possible control strategies of $C^{0:N}$ that ensure that all states and control actions have finite second moments. The performance of control strategies $g$ over a finite horizon $T$ is measured by $J_T(g)$ described in \\eqref{Model:J_T_2C}.\n\nWe refer to the system described above as the \\emph{decentralized networked control system} with $N$ local controllers and one remote controller (NL1R-DNCS).\nWe consider the problem of strategy optimization for the NL1R-DNCS over finite and infinite time horizons. 
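The finite-horizon solution is driven by the coupled backward recursions \eqref{eq:P_N_init}-\eqref{eq:tildeP_finite}. The snippet below iterates a single-subsystem ($N=1$) instance numerically. It assumes the standard discrete-time LQR forms for the operators $\Psi$ and $\Omega$ of Section \ref{sec:operators} and collapses the block structure so that the same $(A,B,Q,R)$ enter both recursions; these simplifications are illustrative only, not the paper's general model.

```python
import numpy as np

# Assumed standard LQR forms for Psi and Omega (a sketch; the paper's
# definitions in the operators section govern):
def Psi(P, R, A, B):
    return -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def Omega(P, Q, R, A, B):
    K = Psi(P, R, A, B)
    return Q + A.T @ P @ A + A.T @ P @ B @ K

def coupled_recursion(A, B, Q, R, p, T):
    """Backward recursion (eq:P_N_init)-(eq:tildeP_finite) for N = 1,
    in which case [P^0]_{1,1} = P^0.  Both P^0 and P^1 start at 0."""
    P0 = np.zeros_like(Q)
    P1 = np.zeros_like(Q)
    for _ in range(T + 1):
        P0, P1 = (Omega(P0, Q, R, A, B),
                  Omega((1 - p) * P0 + p * P1, Q, R, A, B))
    return P0, P1

# scalar example: an unstable plant x_{t+1} = 2 x_t + u_t + w_t
A = np.array([[2.0]]); B = np.array([[1.0]])
Q = np.eye(1); R = np.eye(1)
P0, P1 = coupled_recursion(A, B, Q, R, p=0.2, T=300)
```

For this controllable scalar example the minimum in \eqref{eq:pc} is zero, so the recursion converges for any $p < 1$; the limit $P^0_*$ solves the ordinary discrete algebraic Riccati equation.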
These two problems are formally defined below.\n\\begin{problem}\n\\label{problem_finite}\nFor the NL1R-DNCS described above,\nsolve the following strategy optimization problem:\n\\begin{align}\n\\inf_{g \\in\\mathcal{G}} \nJ_T(g)\n\\label{Model:obj_finite}\n\\end{align}\n\\end{problem}\n\n\n\\begin{problem}\n\\label{problem_infinite}\nFor the NL1R-DNCS described above,\nsolve the following strategy optimization problem:\n\\begin{align}\n&\\inf_{g \\in\\mathcal{G}} \nJ_{\\infty}(g)\n:= \\inf_{g \\in\\mathcal{G}} \n\\limsup_{T\\rightarrow\\infty} \\frac{1}{T+1} J_{T}(g).\n\\label{Model:obj_infinite}\n\\end{align}\n\n\\end{problem}\n\\section{Proof of Claim \\ref{claim:cost_optimal_2C}}\n\\label{proof_claim:cost_optimal_2C}\nIn order to show that \\eqref{eq:costforgstar_2C} holds, it suffices to show that the following equation is true for all $t \\geq 0$:\n\\begin{align}\n\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]= \\tr(\\Lambda_*) + \\ee^{g^*} [V_t - V_{t+1} | H_t^0],\n\\label{main_goal_2C}\n\\end{align}\nwhere $U^*_t$ are the control actions at time $t$ under $g^*$.\nThis is because by taking the expectation of \\eqref{main_goal_2C} and summing it from $t=0$ to $T$, we obtain\n\\begin{align}\nJ_{T}(g^*) \n= & \\sum_{t=0}^T \\ee^{g^*} [ c(X_t,U_t^*) ]\n\\notag\\\\\n=&(T+1) \\tr(\\Lambda_*) \n+ \\ee^{g^*} [V_0 - V_{T+1} ]\n\\notag\\\\\n=&(T+1) \\tr(\\Lambda_*) \n- \\ee^{g^*} [V_{T+1} ],\n\\end{align}\nwhere the last equality holds because $V_0 = 0$ (recall that $X_0 = \\hat X_0 =0$). \n\n\nNow, to show that \\eqref{main_goal_2C} holds, first note that $\\ee^{g^*}[V_t | H_t^0] = V_t$ since $V_t$, given by \\eqref{V_t_2C}, is a function of $H^0_t$. Hence, \\eqref{main_goal_2C} is equivalent to\n\\begin{align}\n\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0] + \\ee^{g^*} [ V_{t+1} | H_t^0] = \\tr(\\Lambda_*) + V_t. 
\\label{eq:main_goal_2C_2}\n\\end{align}\n In the following subsections we will calculate $\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]$ and $\\ee^{g^*}[V_{t+1} | H_t^0]$ and then simplify the left hand side of \\eqref{eq:main_goal_2C_2}. To do so, we define \n\\begin{align}\n&\\hat X_{t+1|t} := \\ee [X_{t+1}|H^0_{t}]\n\\\\\n&\\Sigma_{t+1|t} := \\cov(X_{t+1}|H^0_{t})\n \\\\\n&\\Sigma_t := \\cov(X_{t}|H^0_{t}),\n\\label{Sigma_2C}\n\\end{align}\nand recall that $\\hat{X}_t = \\ee [X_{t}|H^0_{t}]$.\n\n\n\\subsection{Calculating $\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]$}\nNote that \n\\begin{small}\n\\begin{align}\n&\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]= \\underbrace{\\ee^{g^*} [X_t^\\tp Q X_t | H_t^0]}_{\\mathbb{T}_4} + \\underbrace{\\ee^{g^*}[U_t^{*\\tp} R U_t^* | H_t^0]}_{\\mathbb{T}_5}. \n\\label{cost_given_h_2C}\n\\end{align}\n\\end{small}\nWe can simplify the term $\\mathbb{T}_4$ as follows\n\\begin{small}\n\\begin{align}\n\\mathbb{T}_4 &= \\hat X_t^\\tp Q\\hat X_t + \\ee^{g^*} \\Big[(X_t-\\hat X_t)^\\tp Q (X_t-\\hat X_t)|H^0_t\\Big] \\notag \\\\\n& = \\hat X_t^\\tp Q\\hat X_t + \\tr\\big(Q\\Sigma_t \\big). \\label{eq:T4eq}\n\\end{align}\n\\end{small}\n\n From \\eqref{eq:opt_U_2C_fixed}, we have $U_t^* = K_*^0\\hat X_t + \\tilde K_*^1 (X_t - \\hat X_t)$, where $\\tilde K_*^1 = \\bmat{\\mathbf{0} \\\\ K_*^1 }$. 
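The step from $\mathbb{T}_4$ to \eqref{eq:T4eq} uses the identity $\ee[X^\tp Q X \mid H] = \hat X^\tp Q \hat X + \tr(Q\Sigma)$, which holds for any conditional distribution with mean $\hat X$ and covariance $\Sigma$. A quick Monte Carlo sketch of this identity, with arbitrary test values and a Gaussian chosen purely for convenience:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
Q = np.diag([1.0, 2.0, 3.0])
x_hat = np.array([1.0, -1.0, 0.5])        # conditional mean
L = rng.normal(size=(d, d))
Sigma = L @ L.T                           # a PSD conditional covariance

# E[X' Q X] for X ~ N(x_hat, Sigma) versus x_hat' Q x_hat + tr(Q Sigma)
X = rng.multivariate_normal(x_hat, Sigma, size=200_000)
mc = np.einsum('ti,ij,tj->t', X, Q, X).mean()
exact = x_hat @ Q @ x_hat + np.trace(Q @ Sigma)
print(mc, exact)
```

The two numbers agree up to Monte Carlo error at this sample size; the same decomposition, applied with the gain expression for $U_t^*$, yields $\mathbb{T}_5$.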
Therefore, we can simplify the term $\\mathbb{T}_5$ as follows\n\\begin{small}\n\\begin{align}\n\\mathbb{T}_5 &= (K_*^0\\hat X_t)^\\tp R K_*^0\\hat X_t \\notag \\\\\n& + \\ee^{g^*} \\Big[(X_t-\\hat X_t)^\\tp (\\tilde K_*^1)^{\\tp} R \\tilde K_*^1 (X_t-\\hat X_t)|H^0_t\\Big]\n\\notag\\\\\n&= \\hat X_t^\\tp(K_*^0)^\\tp R K_*^0\\hat X_t + \\tr \\big( (\\tilde K_*^1)^\\tp R \\tilde K_*^1 \\Sigma_t \\big).\n \\label{cost_on_U_t_2C}\n\\end{align}\n\\end{small}\n\nPutting \\eqref{cost_given_h_2C}, \\eqref{eq:T4eq} and \\eqref{cost_on_U_t_2C} together, we can write \n\\begin{align}\\label{eq:calculated_c}\n&\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]= \\hat X_t^\\tp Q\\hat X_t +\\tr(Q \\Sigma_t) \\notag \\\\\n&+ \\hat X_t^\\tp(K_*^0)^\\tp R K_*^0\\hat X_t + \\tr( (\\tilde K_*^1)^\\tp R \\tilde K_*^1 \\Sigma_t).\n\\end{align}\n\n\\subsection{Calculating $\\ee^{g^*}[V_{t+1}|H^0_t]$}\nFrom the definition of $V_{t+1}$ (see \\eqref{V_t_2C}) we have\n\\begin{small}\n\\begin{align}\n \\ee^{g^*}[V_{t+1}|H^0_t] &= \\underbrace{\\ee^{g^*}\\Big[\n\\hat X_{t+1}^\\tp P_*^0 \\hat X_{t+1} \\Big| H^0_t\n\\Big]}_{\\mathbb{T}_6} \\notag \\\\\n&+ \\underbrace{\\ee^{g^*}\\Big[\n\\tr \\big(P_*^{1} \\Sigma_{t+1} \\big) \\Big| H^0_t \\Big]}_{\\mathbb{T}_7}.\\label{eq:T6T7}\n\\end{align}\n\\end{small}\nNote that if $\\Gamma_{t+1}^1 = 1$ (i.e., the link is active) $\\hat X_{t+1} = X_{t+1}$ and $\\Sigma_{t+1} = 0$ and if $\\Gamma_{t+1}^1 = 0$ $\\hat X_{t+1} = \\hat X_{t+1|t}$ and $\\Sigma_{t+1} = \\Sigma_{t+1|t}$\\footnote{If $\\Gamma_{t+1}^1=0$, the remote controller gets no new information about $X_{t+1}$. Hence, its belief on $X_{t+1}$ given $H_{t+1}^0$ remains the same as its belief on $X_{t+1}$ given $H_{t}^0$.}. 
That is, \n\\begin{align}\n\\label{estimation_state_2C}\n&\\hat X_{t+1} = \\Gamma_{t+1}^1 X_{t+1} +(1-\\Gamma_{t+1}^1) \\hat X_{t+1|t}, \\\\\n &\\Sigma_{t+1} = (1-\\Gamma_{t+1}^1) \\Sigma_{t+1|t}.\n\\label{estimation_covariance_2C}\n\\end{align}\nNow, we use \\eqref{estimation_state_2C} and \\eqref{estimation_covariance_2C} to calculate the terms $\\mathbb{T}_6$ and $\\mathbb{T}_7$ in \\eqref{eq:T6T7}.\n\nNote that from \\eqref{estimation_state_2C}, $\\hat X_{t+1}$ is equal to $\\hat X_{t+1|t} + \\Gamma_{t+1}^1(X_{t+1} -\\hat X_{t+1|t})$. Therefore, $\\mathbb{T}_6$ can be written as\n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n&\\mathbb{T}_6= \n\\ee^{g^*}\\Big[\n\\hat X_{t+1|t} ^\\tp P_*^0 \\hat X_{t+1|t} \\Big| H^0_t \\Big] \\notag \\\\\n&+ \\ee^{g^*}\\Big[\n(X_{t+1} -\\hat X_{t+1|t})^\\tp \\Gamma_{t+1}^1 P_*^0 \\Gamma_{t+1}^1 (X_{t+1} -\\hat X_{t+1|t}) \\Big| H^0_t \\Big] \\notag \\\\ & = \n\\hat X_{t+1|t} ^\\tp P_*^0 \\hat X_{t+1|t}+ (1-p^1) \\tr (P_*^0 \\Sigma_{t+1|t}).\n\\label{eq:T_4_1}\n\\end{align}\n\\end{small}\nFurthermore, using \\eqref{estimation_covariance_2C}, it is straightforward to see that \n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n&\\mathbb{T}_7\n= p^1 \\tr (P_*^1 \\Sigma_{t+1|t}).\\label{eq:T7}\n\\end{align}\n\\end{small}\n\nCombining \\eqref{eq:T6T7}, \\eqref{eq:T_4_1} and \\eqref{eq:T7}, we get\n\\begin{align}\n \\ee^{g^*}[V_{t+1}|H^0_t]= &\\hat X_{t+1|t} ^\\tp P_*^0 \\hat X_{t+1|t}+ (1-p^1) \\tr (P_*^0 \\Sigma_{t+1|t})\\notag \\\\\n& + p^1 \\tr (P_*^1 \\Sigma_{t+1|t}). \\label{eq:vt1}\n\\end{align}\nNote that the right hand side of \\eqref{eq:vt1} involves $\\hat X_{t+1|t}$ and $\\Sigma_{t+1|t}$. We will now try to write these in terms of $\\hat X_{t}$ and $\\Sigma_{t}$. 
For that purpose, note that under the strategies $g^*$\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\nX_{t+1} &= AX_t + B \\Big[ \nK_*^0\\hat X_t + \\bmat{\\mathbf{0} \\\\ K_*^1 }(X_t - \\hat X_t)\n \\Big] + W_t \\notag \\\\\n& = A_s(0) \\hat X_t + A_s(1) ( X_t - \\hat X_t) + W_t.\n\\label{X_dynamics_proof_2C}\n\\end{align}\n\\end{small}\nIn the above equation, we have defined $A_s(0) = A+ B K_*^0$ and $A_s(1) = A + B \\tilde K_*^1$ where $\\tilde K_*^1 = \\bmat{\\mathbf{0} \\\\ K_*^1 }$. Now using \\eqref{X_dynamics_proof_2C}, we can calculate $\\hat X_{t+1|t}$ and $\\Sigma_{t+1|t}$ as follows, \n\\begin{align}\n\\label{estimation_X_2C}\n&\\hat X_{t+1|t} = A_s(0) \\hat X_t, \\\\\n&\\Sigma_{t+1|t} = \\mathbf I + A_s(1) \\Sigma_t A_s(1)^\\tp.\n\\label{Sigma_t_2C}\n\\end{align}\n\nUsing \\eqref{estimation_X_2C} and \\eqref{Sigma_t_2C} in \\eqref{eq:vt1}, we get\n\\begin{align}\n \\ee^{g^*}[V_{t+1}|H^0_t]&= \\hat X_t ^\\tp A_s(0)^\\tp P_*^0 A_s(0) \\hat X_t+ (1-p^1) \\tr (P_*^0) \\notag \\\\\n& + p^1 \\tr (P_*^1) + (1-p^1) \\tr(P_*^0A_s(1) \\Sigma_t A_s(1)^\\tp) \\notag \\\\\n&+ p^1\\tr(P_*^1A_s(1) \\Sigma_t A_s(1)^\\tp). \\label{eq:vt1A}\n\\end{align}\nRecall that $\\Lambda_* = (1-p^1) P_{*}^0+p^1 P_{*}^1$. Thus, \\eqref{eq:vt1A} can be written as\n\\begin{align}\n \\ee^{g^*}[V_{t+1}|H^0_t]&= \\hat X_t ^\\tp A_s(0)^\\tp P_*^0 A_s(0) \\hat X_t+ \\tr (\\Lambda_*) \\notag \\\\\n& + (1-p^1) \\tr(P_*^0A_s(1) \\Sigma_t A_s(1)^\\tp) \\notag \\\\\n&+ p^1\\tr(P_*^1A_s(1) \\Sigma_t A_s(1)^\\tp). 
\\label{eq:vt1B}\n\\end{align}\n\n\\subsection{Simplifying the left hand side of \\eqref{eq:main_goal_2C_2}}\nNow that we have calculated $\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]$ and $\\ee^{g^*}[V_{t+1} | H_t^0]$, we will try to simplify the left hand side of \\eqref{eq:main_goal_2C_2}.\n\nAdding \\eqref{eq:vt1B} and \\eqref{eq:calculated_c} and grouping together the terms involving $\\hat X_t$ and those involving $\\Sigma_t$, we can write\n\\begin{align}\n& \\ee^{g^*}[c(X_t,U_t^*)|H^0_t] + \\ee^{g^*}[V_{t+1}|H^0_t] \n= \\tr(\\Lambda_*) \\notag \\\\\n& + \\hat X_t ^\\tp \\Phi(P_*^0,K^0_*) \\hat X_t + \\tr(\\Phi((1-p^1)P_*^0 +p^1P^1_*,\\tilde K^1_*)\\Sigma_t), \\label{eq:vt1C}\n\\end{align}\nwhere \n\\begin{small}\n\\begin{align}\n\\Phi(P_*^0,K^0_*) := Q + (K^0_*)^\\tp RK^0_*\n+ (A+BK^0_*)^\\tp P^0_*(A+BK^0_*),\n\\end{align}\n\\end{small}\nand similarly\n\\begin{small}\n\\begin{align}\n&\\Phi((1-p^1)P_*^0 +p^1P^1_*,\\tilde K^1_*):=Q + (\\tilde K_*^1)^\\tp R \\tilde K_*^1 \\notag \\\\\n&~~+(A+B\\tilde K^1_*)^\\tp\\big((1-p^1)P_*^0 +p^1P^1_* \\big)(A+B\\tilde K^1_*) \\notag \\\\\n&= Q + ( K_*^1)^\\tp R^{11} K_*^1 \\notag \\\\\n&~~+(A+B^{11} K^1_*)^\\tp\\big((1-p^1)P_*^0 +p^1P^1_* \\big)(A+B^{11}K^1_*) \n\\end{align}\n\\end{small}\n\nUsing the fact that $K^0_* = \\Psi(P_{*}^0,R,A,B)$, it can be easily established that \n\\begin{equation}\n\\Phi(P_*^0,K^0_*) = \\Omega(P_{*}^0,Q,R,A,B).\n\\end{equation}\nFurther, since $P^0_* = \\Omega(P_{*}^0,Q,R,A,B)$, we have \n\\begin{equation}\n\\Phi(P_*^0,K^0_*) = P^0_*.\\label{eq:phi0}\n\\end{equation}\n\nSimilarly, using the fact that $K_*^1 = \\Psi((1-p^1)P_{*}^0+p^1 P_{*}^1,R^{11},A,B^{11})$, it can be established that \n\\begin{align}\n&\\Phi((1-p^1)P_*^0 +p^1P^1_*,\\tilde K^1_*) \\notag \\\\\n&= \\Omega((1-p^1)P_*^0 +p^1P^1_*,Q,R^{11},A,B^{11}).\n\\end{align}\nFurther, since $P^1_* = \\Omega((1-p^1)P_*^0 +p^1P^1_*,Q,R^{11},A,B^{11})$, we have \n\\begin{equation}\n\\Phi((1-p^1)P_*^0 +p^1P^1_*,\\tilde K^1_*) = 
P^1_*.\\label{eq:phi1}\n\\end{equation}\n\nUsing \\eqref{eq:phi0} and \\eqref{eq:phi1} in \\eqref{eq:vt1C}, we get\n\\begin{align}\n& \\ee^{g^*}[c(X_t,U_t^*)|H^0_t] + \\ee^{g^*}[V_{t+1}|H^0_t] \\notag \\\\\n&= \\tr(\\Lambda_*)\n + \\hat X_t ^\\tp P^0_* \\hat X_t + \\tr(P^1_*\\Sigma_t), \\notag \\\\\n&= \\tr(\\Lambda_*) + V_t. \\label{eq:vt1D}\n\\end{align}\nThis establishes \\eqref{eq:main_goal_2C_2} and hence completes the proof of the claim.\n\\section{Proof of Claim \\ref{claim:cost_optimal_NC}}\n\\label{Cost_of_the_Strategies}\nIn order to show that \\eqref{eq:costforgstar_NC} holds, it suffices to show that the following equation is true for all $t \\geq 0$:\n\\begin{align}\n\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]= \\tr(\\Lambda_*) + \\ee^{g^*} [V_t - V_{t+1} | H_t^0],\n\\label{main_goal_NC}\n\\end{align}\nwhere $U^*_t$ are the control actions at time $t$ under $g^*$.\nThis is because by taking the expectation of \\eqref{main_goal_NC} and summing it from $t=0$ to $T$, we obtain\n\\begin{align}\nJ_{T}(g^*) \n=(T+1) \\tr(\\Lambda_*) \n- \\ee^{g^*} [V_{T+1} ].\n\\end{align}\n\nNow, to show that \\eqref{main_goal_NC} holds, note that $\\ee^{g^*}[V_t | H_t^0] = V_t$ since $V_t$ is a function of $H^0_t$. Hence, \\eqref{main_goal_NC} is equivalent to\n\\begin{align}\n\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0] + \\ee^{g^*} [ V_{t+1} | H_t^0] = \\tr(\\Lambda_*) + V_t. \\label{eq:main_goal_NC_2}\n\\end{align}\n In the following subsections we will calculate $\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]$ and $\\ee^{g^*}[V_{t+1} | H_t^0]$ and then simplify the left hand side of \\eqref{eq:main_goal_NC_2}. 
To do so, we define for $n=1,\\ldots,N,$\n\\begin{align}\n&\\hat X_{t+1|t}^n := \\ee [X_{t+1}^n|H^0_{t}]\n\\\\\n&\\Sigma_{t+1|t}^n := \\cov(X_{t+1}^n|H^0_{t})\n \\\\\n&\\Sigma_t^n := \\cov(X_{t}^n|H^0_{t}).\n\\label{sigma_t}\n\\end{align}\nand recall that $\\hat{X}_t^n = \\ee [X_{t}^n|H^0_{t}]$.\n \n \\subsection{Calculating $\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]$}\n Note that \n\\begin{small}\n\\begin{align}\n&\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]= \\underbrace{\\ee^{g^*} [X_t^\\tp Q X_t | H_t^0]}_{\\mathbb{T}_4} + \\underbrace{\\ee^{g^*}[U_t^{*\\tp} R U_t^* | H_t^0]}_{\\mathbb{T}_5}. \n\\label{cost_given_h_NC}\n\\end{align}\n\\end{small}\n$\\mathbb{T}_4$ can be written as\n\\begin{small}\n\\begin{align}\n\\mathbb{T}_4 &= \\hat X_t^\\tp Q\\hat X_t + \\ee \\Big[(X_t-\\hat X_t)^\\tp Q (X_t-\\hat X_t)|H^0_t\\Big] \\notag \\\\\n& = \\hat X_t^\\tp Q\\hat X_t + \\sum_{n=1}^N \\tr\\big(Q^{nn}\\Sigma_t^n \\big).\n\\end{align}\n\\end{small}\nwhere the second equality is true because according to \\cite[Lemma 3]{asghari_ouyang_nayyar_tac_2018} $X_t^n$ and $X_t^m$, $m \\neq n,$ are conditionally independent given $H_t^0$.\n\nSimilarly, $\\mathbb{T}_5$ can be written as\n\\begin{small}\n\\begin{align}\n\\mathbb{T}_5 &=\\hat X_t^\\tp(K_*^0)^\\tp R K_*^0\\hat X_t + \\sum_{n=1}^N \\tr \\big((K_*^n)^{\\tp} R^{nn} K_*^n \\Sigma_{t}^n \\big ).\n\\end{align}\n\\end{small}\nThus,\n\\begin{align}\\label{eq:calculated_c_N}\n&\\ee^{g^*} [ c(X_t,U_t^*) | H_t^0]= \\hat X_t^\\tp Q\\hat X_t + \\sum_{n=1}^N \\tr\\big(Q^{nn}\\Sigma_t^n \\big) \\notag \\\\\n&~+ \\hat X_t^\\tp(K_*^0)^\\tp R K_*^0\\hat X_t + \\sum_{n=1}^N \\tr \\big((K_*^n)^{\\tp} R^{nn} K_*^n \\Sigma_{t}^n \\big ).\n\\end{align} \n \n \\subsection{Calculating $\\ee^{g^*}[V_{t+1}|H^0_t]$}\n From the definition of $V_{t+1}$ (see \\eqref{V_t_NC}) we have\n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n \\ee^{g^*}[V_{t+1}|H^0_t] &= \\underbrace{\\ee^{g^*}\\Big[\n\\hat X_{t+1}^\\tp P_*^0 \\hat X_{t+1} \\Big| 
H^0_t\n\\Big]}_{\\mathbb{T}_6} \\notag \\\\\n&+ \\underbrace{\\ee^{g^*}\\Big[\n\\sum_{n=1}^N \\tr \\big(P_*^n \\Sigma_{t+1}^n \\big )\n\\Big| H^0_t\n\\Big]}_{\\mathbb{T}_7}.\\label{eq:T6T7_N}\n\\end{align}\n\\end{small}\nNote that if $\\Gamma_{t+1}^n = 1$, $\\hat X_{t+1}^n = X_{t+1}^n$ and $\\Sigma_{t+1}^n = 0$ and if $\\Gamma_{t+1}^n = 0$, $\\hat X_{t+1}^n = \\hat X_{t+1|t}^n$ and $\\Sigma_{t+1}^n = \\Sigma_{t+1|t}^n$\\footnote{If $\\Gamma^n_{t+1}=0$, the remote controller gets no new information about $X^n_{t+1}$. Hence, its belief on $X^n_{t+1}$ given $H_{t+1}^0$ remains the same as its belief on $X^n_{t+1}$ given $H_{t}^0$.}. Let $\\Delta$ be a random block diagonal matrix defined as follows:\n\\begin{align}\n\\Delta := \\begin{bmatrix}\n \\Gamma_{t+1}^1 \\mathbf{I}_{d_X^1} & & \\text{\\huge0}\\\\\n & \\ddots & \\\\\n \\text{\\huge0} & & \\Gamma_{t+1}^N \\mathbf{I}_{d_X^N}\n\\end{bmatrix}.\n\\label{Delta}\n\\end{align}\nThen, we can write \n\\begin{align}\n\\label{estimation_state}\n&\\hat X_{t+1} = \\Delta X_{t+1} +(\\mathbf{I}-\\Delta) \\hat X_{t+1|t}, \\\\\n &\\Sigma_{t+1}^n = (1-\\Gamma_{t+1}^n) \\Sigma_{t+1|t}^n, \\quad n \\in \\mathcal{N}.\n\\label{estimation_covariance}\n\\end{align}\nNow, we use \\eqref{estimation_state} and \\eqref{estimation_covariance} to calculate the terms $\\mathbb{T}_6$ and $\\mathbb{T}_7$ in \\eqref{eq:T6T7_N}. 
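The per-subsystem update \eqref{estimation_state}-\eqref{estimation_covariance} is simple enough to illustrate numerically. The sketch below, with arbitrary illustrative values, checks both branches for a single subsystem: a successful link reveals the state exactly, while a drop keeps the one-step prediction and its covariance.

```python
import numpy as np

def remote_update(x_next, x_pred, Sigma_pred, gamma):
    """One step of (estimation_state)-(estimation_covariance) for one
    subsystem: gamma = 1 (link up) gives x_hat = x and Sigma = 0;
    gamma = 0 (drop) gives x_hat = x_pred and Sigma = Sigma_pred."""
    x_hat = gamma * x_next + (1 - gamma) * x_pred
    Sigma = (1 - gamma) * Sigma_pred
    return x_hat, Sigma

x_next = np.array([1.0, 2.0])   # realized state
x_pred = np.zeros(2)            # remote controller's prediction
Sigma_pred = np.eye(2)          # prediction covariance

up = remote_update(x_next, x_pred, Sigma_pred, gamma=1)
down = remote_update(x_next, x_pred, Sigma_pred, gamma=0)
```

Stacking these per-subsystem updates with the block-diagonal $\Delta$ of \eqref{Delta} reproduces the vector form \eqref{estimation_state}.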
It can be shown through some straightforward manipulations that \n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n&\\mathbb{T}_6= \n \\hat X_{t+1|t} ^\\tp P_*^0 \\hat X_{t+1|t} + \\sum_{n=1}^N (1-p^n) \\tr \\big ([P_*^0]_{n,n} \\Sigma_{t+1|t}^n \\big).\n\\label{T_4_N}\n\\end{align}\n\\end{small}\nSimilarly, it can be shown that\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\\label{eq:T7_N}\n&\\mathbb{T}_7= \\sum_{n=1}^N p^n \\tr \\big (P_*^n \\Sigma_{t+1|t}^n \\big).\n\\end{align}\n\\end{small}\nCombining \\eqref{eq:T6T7_N}, \\eqref{T_4_N} and \\eqref{eq:T7_N}, we get\n \\begin{align}\n &\\ee^{g^*}[V_{t+1}|H^0_t]= \\hat X_{t+1|t} ^\\tp P_*^0 \\hat X_{t+1|t}+ \\notag \\\\ &\\sum_{n=1}^N (1-p^n) \\tr \\big ([P_*^0]_{n,n} \\Sigma_{t+1|t}^n \\big) +\n \\sum_{n=1}^N p^n \\tr \\big (P_*^n \\Sigma_{t+1|t}^n \\big). \\label{eq:vt1_N}\n\\end{align}\nSince the right hand side of \\eqref{eq:vt1_N} involves $\\hat X_{t+1|t}$ and $\\Sigma^n_{t+1|t}$, we will now try to write these in terms of $\\hat X_{t}$ and $\\Sigma^n_{t}$. It can be easily established that \n\\begin{align}\n\\label{estimation_X_NC}\n&\\hat X_{t+1|t} = A_s(0) \\hat X_t, \\\\\n&\\Sigma_{t+1|t}^n = \\mathbf I + A_s(n) \\Sigma_t^n A_s(n)^\\tp,\n\\label{Sigma_t_NC}\n\\end{align}\nwhere $A_s(0) = A+ B K_*^0$ and $A_s(n) = A^{nn} + B^{nn}K^n_*$ for $n=1,\\ldots,N$.\n\nUsing \\eqref{estimation_X_NC}, \\eqref{Sigma_t_NC} and the definition of $\\Lambda_*$ in \\eqref{eq:vt1_N}, we get\n\\begin{align}\n &\\ee^{g^*}[V_{t+1}|H^0_t]= \\hat X_t ^\\tp A_s(0)^\\tp P_*^0 A_s(0) \\hat X_t+ \\tr(\\Lambda_*)\\notag \\\\ \n &+\\sum_{n=1}^N \\big((1-p^n) \\tr([P_{*}^0]_{n,n}A_s(n) \\Sigma_t^n A_s(n)^\\tp) \\notag \\\\\n&+ p^n\\tr(P_*^nA_s(n) \\Sigma_t^n A_s(n)^\\tp)\\big). 
\\label{eq:vt1A_N}\n\\end{align}\n\n\n\n\n \\subsection{Simplifying the left hand side of \\eqref{eq:main_goal_NC_2}}\n Adding \\eqref{eq:vt1A_N} and \\eqref{eq:calculated_c_N} and grouping together the terms involving $\\hat X_t$ and those involving $\\Sigma^n_t$, we can write\n\\begin{small}\n \\begin{align}\n& \\ee^{g^*}[c(X_t,U_t^*)|H^0_t] + \\ee^{g^*}[V_{t+1}|H^0_t] \n \\notag \\\\\n& = \\tr(\\Lambda_*)+ \\hat X_t ^\\tp \\Phi(P_*^0,K^0_*) \\hat X_t + \\notag \\\\\n&\\sum_{n=1}^N \\tr(\\Phi^n((1-p^n)[P_*^0]_{n,n} +p^nP^n_*, K^n_*)\\Sigma^n_t), \\label{eq:vt1C_N}\n\\end{align}\n\\end{small}\nwhere \\begin{small}\n\\begin{align}\n\\Phi(P_*^0,K^0_*) := Q + (K^0_*)^\\tp RK^0_*\n+ (A+BK^0_*)^\\tp P^0_*(A+BK^0_*),\n\\end{align}\n\\end{small}\nand\n\\begin{small}\n\\begin{align}\n&\\Phi^n((1-p^n)[P_*^0]_{n,n} +p^nP^n_*, K^n_*) :=Q^{nn} + (K_*^n)^{\\tp} R^{nn} K_*^n \\notag \\\\\n&+(A^{nn}+B^{nn} K^n_*)^\\tp\\big((1-p^n)[P_*^0]_{n,n} +p^nP^n_* \\big)(A^{nn}+B^{nn}K^n_*).\n\\end{align}\n\\end{small}\nUsing the fact that $K^0_* = \\Psi(P_{*}^0,R,A,B)$ and that $P^0_* = \\Omega(P_{*}^0,Q,R,A,B)$, it can be shown that \n\\begin{equation}\n\\Phi(P_*^0,K^0_*) = P^0_*.\\label{eq:phi0_N}\n\\end{equation}\nSimilarly, using the fact that $K_*^n = \\Psi((1-p^n)[P_{*}^0]_{n,n}+p^n P_{*}^n,R^{nn},A^{nn},B^{nn})$ and that $P_*^{n} = \\Omega \\big((1-p^n)[P_{*}^0]_{n,n}+p^n P_{*}^{n},Q^{nn},R^{nn},A^{nn},B^{nn}\\big)$, it can be shown that \n\\begin{equation}\n\\Phi^n((1-p^n)[P_*^0]_{n,n} +p^nP^n_*, K^n_*) = P^n_*.\\label{eq:phi1_N}\n\\end{equation}\n\nUsing \\eqref{eq:phi0_N} and \\eqref{eq:phi1_N} in \\eqref{eq:vt1C_N}, we get\n\\begin{small}\n\\begin{align}\n& \\ee^{g^*}[c(X_t,U_t^*)|H^0_t] + \\ee^{g^*}[V_{t+1}|H^0_t] \\notag \\\\\n&= \\tr(\\Lambda_*)\n + \\hat X_t ^\\tp P^0_* \\hat X_t + \\sum_{n=1}^N\\tr(P^n_*\\Sigma^n_t), \\notag \\\\\n&= \\tr(\\Lambda_*) + V_t. 
\\label{eq:vt1D_N}\n\\end{align}\n\\end{small}\nThis establishes \\eqref{eq:main_goal_NC_2} and hence completes the proof of the claim.\n\\section{Proof of Claim \\ref{claim:cost_optimal_2C}}\n\\label{proof_claim:cost_optimal_2C}\n\nIn order to show that \\eqref{eq:costforgstar_2C} holds, it suffices to show that the following equation is true for all $t \\geq 0$:\n\\begin{align}\n\\ee^{g*} [ c(X_t,U_t^*) | H_t^0]= \\tr(\\Lambda_*) + \\ee^{g*} [V_t - V_{t+1} | H_t^0],\n\\label{main_goal_2C}\n\\end{align}\nwhere $U^*_t$ are the control actions at time $t$ under $g^*$.\nThis is because by taking the expectation of \\eqref{main_goal_2C} and summing it from $t=0$ to $T$, we obtain\n\\begin{align}\nJ_{T}(g^*) \n= & \\sum_{t=0}^T \\ee^{g*} [ c(X_t,U_t^*) ]\n\\notag\\\\\n=&(T+1) \\tr(\\Lambda_*) \n+ \\ee^{g*} [V_0 - V_{T+1} ]\n\\notag\\\\\n=&(T+1) \\tr(\\Lambda_*) \n- \\ee^{g*} [V_{T+1} ],\n\\end{align}\nwhere the last equality holds because $V_0 = 0$ (recall that $X_0 = \\hat X_0 =0$). \n\n\nNow, to show that \\eqref{main_goal_2C} holds, first note that $\\ee^{g*}[V_t | H_t^0] = V_t$ since $V_t$, given by \\eqref{V_t_2C}, is a function of $H^0_t$. Hence, \\eqref{main_goal_2C} is equivalent to\n\\begin{align}\n\\ee^{g*} [ c(X_t,U_t^*) | H_t^0] + \\ee^{g*} [ V_{t+1} | H_t^0] = \\tr(\\Lambda_*) + V_t. \\label{eq:main_goal_2C_2}\n\\end{align}\n In the following subsections we will calculate $\\ee^{g*} [ c(X_t,U_t^*) | H_t^0]$ and $\\ee^{g*}[V_{t+1} | H_t^0]$. 
To do so, we define \n\\begin{align}\n&\\hat X_{t+1|t} := \\ee [X_{t+1}|H^0_{t}]\n\\\\\n&\\Sigma_{t+1|t} := \\cov(X_{t+1}|H^0_{t})\n \\\\\n&\\Sigma_t := \\cov(X_{t}|H^0_{t}),\n\\label{Sigma_2C}\n\\end{align}\nand recall that $\\hat{X}_t = \\ee [X_{t}|H^0_{t}]$.\n\n\\subsection{Calculating $\\ee^{g*} [ c(X_t,U_t^*) | H_t^0]$}\nNote that \n\\begin{small}\n\\begin{align}\n&\\ee^{g*} [ c(X_t,U_t^*) | H_t^0]= \\underbrace{\\ee^{g*} [X_t^\\tp Q X_t | H_t^0]}_{\\mathbb{T}_4} + \\underbrace{\\ee^{g*}[U_t^{*\\tp} R U_t^* | H_t^0]}_{\\mathbb{T}_5}. \n\\label{cost_given_h_2C}\n\\end{align}\n\\end{small}\nWe can simplify the term $\\mathbb{T}_4$ as follows\n\\begin{small}\n\\begin{align}\n\\mathbb{T}_4 &= \\hat X_t^\\tp Q\\hat X_t + \\ee^{g*} \\Big[(X_t-\\hat X_t)^\\tp Q (X_t-\\hat X_t)|H^0_t\\Big] \\notag \\\\\n& = \\hat X_t^\\tp Q\\hat X_t + \\tr\\big(Q\\Sigma_t \\big)\n= \\hat X_t^\\tp Q^{\\diamond}(0)\\hat X_t + \\tr\\big(Q^{\\diamond}(1)\\Sigma_t \\big), \\label{eq:T4eq}\n\\end{align}\n\\end{small}\nwhere the last equality is correct because from \\eqref{eq:MJLS_Q}, $Q^{\\diamond}(0) = Q^{\\diamond}(1) = Q$. \n\n From \\eqref{eq:opt_U_2C_fixed}, we have $U_t^* = K_*^0\\hat X_t + \\tilde K_*^1 (X_t - \\hat X_t)$, where $\\tilde K_*^1 = \\bmat{\\mathbf{0} \\\\ K_*^1 }$. 
Therefore, we can simplify the term $\\mathbb{T}_5$ as follows\n\\begin{small}\n\\begin{align}\n\\mathbb{T}_5 &= (K_*^0\\hat X_t)^\\tp R K_*^0\\hat X_t \\notag \\\\\n& + \\ee^{g*} \\Big[(X_t-\\hat X_t)^\\tp (\\tilde K_*^1)^{\\tp} R \\tilde K_*^1 (X_t-\\hat X_t)|H^0_t\\Big]\n\\notag\\\\\n&= (K_*^0\\hat X_t)^\\tp R^{\\diamond}(0) K_*^0\\hat X_t + \\tr \\big( (\\tilde K_*^1)^\\tp R^{\\diamond}(1) \\tilde K_*^1 \\Sigma_t \\big),\n \\label{cost_on_U_t_2C}\n\\end{align}\n\\end{small}\nwhere the last equality follows from the definition of $R^{\\diamond}(0), R^{\\diamond}(1)$ in \\eqref{eq:MJLS_R}.\n\nPutting \\eqref{cost_given_h_2C}, \\eqref{eq:T4eq} and \\eqref{cost_on_U_t_2C} together, we can write \n\\begin{align}\\label{eq:calculated_c}\n&\\ee^{g*} [ c(X_t,U_t^*) | H_t^0]=\\underbrace{[ \\hat X_t^\\tp Q^{\\diamond}(0)\\hat X_t +\\tr(Q^{\\diamond}(1) \\Sigma_t)]}_{=\\mathbb{T}_4} \\notag \\\\\n&+ \\underbrace{[(K_*^0\\hat X_t)^\\tp R^{\\diamond}(0) K_*^0\\hat X_t + \\tr( (\\tilde K_*^1)^\\tp R^{\\diamond}(1) \\tilde K_*^1 \\Sigma_t)]}_{=\\mathbb{T}_5}.\n\\end{align}\n\\subsection{Calculating $\\ee^{g*}[V_{t+1}|H^0_t]$}\n\nFrom the definition of $V_{t+1}$ (see \\eqref{V_t_2C}) we have\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n& \\ee^{g*}[V_{t+1}|H^0_t]= \\underbrace{\\ee^{g*}\\Big[\n\\hat X_{t+1}^\\tp P_*^0 \\hat X_{t+1} \\Big| H^0_t\n\\Big]}_{\\mathbb{T}_6} + \\underbrace{\\ee^{g*}\\Big[\n\\tr \\big(P_*^{1} \\Sigma_{t+1} \\big) \\Big| H^0_t \\Big]}_{\\mathbb{T}_7}.\\label{eq:T6T7}\n\\end{align}\n\\end{small}\nNote that if $\\Gamma_{t+1} = 1$ (i.e., the link is active), $X_{t+1}$ becomes part of the common information, so $\\hat X_{t+1} = X_{t+1}$ and $\\Sigma_{t+1} = 0$; if $\\Gamma_{t+1} = 0$, the common information contains no new measurement of $X_{t+1}$, so $\\hat X_{t+1} = \\hat X_{t+1|t}$ and $\\Sigma_{t+1} = \\Sigma_{t+1|t}$. That is, 
\\begin{align}\n\\label{estimation_state_2C}\n&\\hat X_{t+1} = \\Gamma_{t+1}X_{t+1} +(1-\\Gamma_{t+1}) \\hat X_{t+1|t}, \\\\\n &\\Sigma_{t+1} = (1-\\Gamma_{t+1}) \\Sigma_{t+1|t}.\n\\label{estimation_covariance_2C}\n\\end{align}\nNow, we use \\eqref{estimation_state_2C} and \\eqref{estimation_covariance_2C} to calculate the terms $\\mathbb{T}_6$ and $\\mathbb{T}_7$ in \\eqref{eq:T6T7}.\n\nNote that from \\eqref{estimation_state_2C}, $\\hat X_{t+1}$ is equal to $\\hat X_{t+1|t} + \\Gamma_{t+1}(X_{t+1} -\\hat X_{t+1|t})$. Therefore, $\\mathbb{T}_6$ can be written as\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n&\\mathbb{T}_6= \n\\ee^{g^*}\\Big[\n\\hat X_{t+1|t} ^\\tp P_*^0 \\hat X_{t+1|t} \\Big| H^0_t \\Big] \\notag \\\\\n&+ \\ee^{g^*}\\Big[\n(X_{t+1} -\\hat X_{t+1|t})^\\tp \\Gamma_{t+1} P_*^0 \\Gamma_{t+1}(X_{t+1} -\\hat X_{t+1|t}) \\Big| H^0_t \\Big] \\notag \\\\ & = \n\\hat X_{t+1|t} ^\\tp P_*^0 \\hat X_{t+1|t}+ (1-p^1) \\tr (P_*^0 \\Sigma_{t+1|t}).\n\\label{T_6_2C}\n\\end{align}\n\\end{small}\nFurthermore, using \\eqref{estimation_covariance_2C}, it is straightforward to see that \n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n&\\mathbb{T}_7\n= p^1 \\tr (P_*^1 \\Sigma_{t+1|t}).\n\\end{align}\n\\end{small}\n\nIn the subsequent analysis, we will make use of the following definitions and lemmas.\n\\begin{definition}\\label{def:Pi_2}\nLet $P_*$ denote the set of matrices $\\{P^0_*,P^1_*\\}$. For $n=0,1$, we define\n \\begin{align}\n\\Pi (P_*, n) := \\sum_{m=0}^1 \\theta^{nm} P_*^m,\n\\label{Pi_2_matrix}\n\\end{align}\nwhere $\\theta^{nm}$ are the mode transition probabilities given in \\eqref{eq:MJLS_theta}. 
We further define, for $n=0,1$,\n\\begin{align}\n&\\Omega^{\\diamond}(P_*,n) := \\Omega\\big(\\Pi(P_*, n),Q^{\\diamond}(n),R^{\\diamond}(n),A^{\\diamond}(n),B^{\\diamond}(n) \\big),\n\\label{Omega_MJ}\n\\\\\n&\\Psi^{\\diamond}(P_*,n) := \\Psi \\big(\\Pi(P_*, n),R^{\\diamond}(n),A^{\\diamond}(n),B^{\\diamond}(n) \\big),\n\\label{Psi_MJ}\n\\end{align}\nwhere $\\Omega$ and $\\Psi$ are the operators defined in \\eqref{Omega} and \\eqref{Psi}.\nFinally, for a matrix $K$ and for $n=0,1$, we define\n\\begin{align}\n& \\Phi(P_*,K,n) = Q^{\\diamond}(n) + K^\\tp R^{\\diamond}(n)K\n\\notag\\\\\n+ &(A^{\\diamond}(n)+B^{\\diamond}(n)K)^\\tp \\Pi(P_*,n)(A^{\\diamond}(n)+B^{\\diamond}(n)K).\n\\label{Phi_2_def}\n\\end{align}\n\\end{definition}\nWe have the following lemma.\n\\begin{lemma}\nFor $n=0,1,$\n\\begin{small}\n\\begin{align}\n\\Omega^{\\diamond}(P_*,n) = \\Phi(P_*,\\Psi^{\\diamond}(P_*,n),n) = \\min_K \\Phi(P_*,K,n).\n\\label{eq:relation1}\n\\end{align}\n\\end{small}\nNote that the minimization is with respect to the partial order $\\succeq$ on symmetric matrices, that is, the minimum value satisfies $\\Omega^{\\diamond}(P_*,n) \\preceq \\Phi(P_*,K,n)$ for all $K$.\n\\end{lemma}\n\\begin{proof}\nThis follows from the standard completion-of-squares identity $\\Phi(P_*,K,n) = \\Omega^{\\diamond}(P_*,n) + (K-\\Psi^{\\diamond}(P_*,n))^\\tp \\big(R^{\\diamond}(n)+B^{\\diamond}(n)^\\tp \\Pi(P_*,n) B^{\\diamond}(n)\\big)(K-\\Psi^{\\diamond}(P_*,n))$, in which the second term is PSD and vanishes at $K = \\Psi^{\\diamond}(P_*,n)$.\n\\end{proof}\n\\begin{lemma}\n\\begin{small}\n\\begin{align}\n\\Phi(P_*,K_*^0,0) = \\Phi(P_*,\\Psi^{\\diamond}(P_*,0),0) = \\Omega^{\\diamond}(P_*,0) = P_*^0, \\label{eq:claim1P0} \\\\\n\\Phi(P_*,\\tilde K_*^1,1) = \\Phi(P_*,\\Psi^{\\diamond}(P_*,1),1) = \\Omega^{\\diamond}(P_*,1) = P_*^1. \\label{eq:claim1P1}\n\\end{align}\n\\end{small}\n\\end{lemma}\n\\begin{proof}\nBoth chains of equalities follow from \\eqref{eq:relation1}, the fact that $K_*^0 = \\Psi^{\\diamond}(P_*,0)$ and $\\tilde K_*^1 = \\Psi^{\\diamond}(P_*,1)$, and the fixed point property $P_*^n = \\Omega^{\\diamond}(P_*,n)$ for $n=0,1$.\n\\end{proof}\n\nNow, using $\\mathbb{T}_6$ and $\\mathbb{T}_7$ calculated above and Definition \\ref{def:Pi_2}, $\\ee^{g^*}[V_{t+1}|H^0_t]$ can be written as\n\\begin{align}\n\\ee^{g^*}[V_{t+1}|H^0_t]&= \n\\hat X_{t+1|t} ^\\tp \\Pi(P_*,0) \\hat X_{t+1|t}+ \\tr (\\Pi(P_*,1) \\Sigma_{t+1|t}).\n\\label{V_t_1_2C}\n\\end{align}\n\nNote that \\eqref{V_t_1_2C} involves $\\hat X_{t+1|t}$ and $\\Sigma_{t+1|t}$. 
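The relations in \\eqref{eq:relation1} are straightforward to check numerically. The sketch below is illustrative only: it uses randomly generated matrices of hypothetical dimensions, with $\\Pi(P_*,n)$ replaced by a generic PSD matrix $P$, and verifies that $\\Phi$ evaluated at the gain $\\Psi$ recovers $\\Omega$ from \\eqref{Omega}-\\eqref{Psi}, while any other gain is no better in the PSD order:

```python
import numpy as np

rng = np.random.default_rng(0)

def Omega(P, Q, R, A, B):
    # Riccati operator: Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
    S = R + B.T @ P @ B
    return Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)

def Psi(P, R, A, B):
    # Gain operator: -(R + B'PB)^{-1} B'PA
    return -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def Phi(P, Q, R, A, B, K):
    # Quadratic form for an arbitrary gain K
    return Q + K.T @ R @ K + (A + B @ K).T @ P @ (A + B @ K)

n, m = 3, 2                                    # hypothetical dimensions
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
M = rng.standard_normal((n, n)); P = M @ M.T   # PSD stand-in for Pi(P_*, n)
M = rng.standard_normal((n, n)); Q = M @ M.T   # PSD
M = rng.standard_normal((m, m)); R = M @ M.T + np.eye(m)  # PD

K_star = Psi(P, R, A, B)
# Phi evaluated at the optimal gain recovers Omega ...
assert np.allclose(Phi(P, Q, R, A, B, K_star), Omega(P, Q, R, A, B))
# ... and any perturbed gain is no better in the PSD order
K = K_star + rng.standard_normal((m, n))
gap = Phi(P, Q, R, A, B, K) - Omega(P, Q, R, A, B)
assert np.all(np.linalg.eigvalsh(gap) >= -1e-9)
```

Because the minimization is in the PSD order, the check inspects the eigenvalues of the matrix gap rather than a scalar value.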
We will now write $\\hat X_{t+1|t}$ and $\\Sigma_{t+1|t}$ in terms of $\\hat X_{t}$ and $\\Sigma_{t}$. \n For that purpose, note that under the strategies $g^*$\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\nX_{t+1} &= AX_t + B \\Big[ \nK_*^0\\hat X_t + \\bmat{\\mathbf{0} \\\\ K_*^1 }(X_t - \\hat X_t)\n \\Big] + W_t \\notag \\\\\n& = A_s(0) \\hat X_t + A_s(1) ( X_t - \\hat X_t) + W_t.\n\\label{X_dynamics_proof_2C}\n\\end{align}\n\\end{small}\nIn the above equation, we have defined $A_s(0) = A^{\\diamond}(0) + B^{\\diamond}(0) K_*^0$ and $A_s(1) = A^{\\diamond}(1) + B^{\\diamond}(1) \\tilde K_*^1$ where matrices $A^{\\diamond}(m),B^{\\diamond}(m)$, $m=0,1,$ are as defined in \\eqref{eq:MJLS_A}-\\eqref{eq:MJLS_B} and $\\tilde K_*^1 = \\bmat{\\mathbf{0} \\\\ K_*^1 }$. Now using \\eqref{X_dynamics_proof_2C}, we can calculate $\\hat X_{t+1|t}$ and $\\Sigma_{t+1|t}$ as follows, \n\\begin{align}\n\\label{estimation_X_2C}\n&\\hat X_{t+1|t} = A_s(0) \\hat X_t, \\\\\n&\\Sigma_{t+1|t} = \\mathbf I + A_s(1) \\Sigma_t A_s(1)^\\tp.\n\\label{Sigma_t_2C}\n\\end{align}\n\nUsing \\eqref{estimation_X_2C}, we can write \n\\begin{small}\n\\begin{align}\n&\\hat X_{t+1|t} ^\\tp \\Pi(P_*,0) \\hat X_{t+1|t} \\notag \\\\\n& =\\hat X_t ^\\tp A_s(0)^\\tp \\Pi(P_*,0) A_s(0) \\hat X_t\n\\notag\\\\\n&= \\hat X_t ^\\tp \\Big(\\Phi(P_*,K_*^0,0) - Q^{\\diamond}(0) - (K_*^0)^\\tp R^{\\diamond}(0) K_*^0 \\Big)\\hat X_t\n\\notag\\\\\n&= \\hat X_t^\\tp P_*^0\\hat X_{t}\n - \\hat X_t^\\tp Q^{\\diamond}(0) \\hat X_{t}- (K_*^0\\hat X_t)^\\tp R^{\\diamond}(0) K_*^0\\hat X_t,\n\\label{eq:Vtp1_mean_2C}\n\\end{align}\n\\end{small}\nwhere the second equality follows from the definition of $\\Phi(P_*,K,0)$ in Definition \\ref{def:Pi_2} and the last equality is correct because of \\eqref{eq:claim1P0}.\n\n\nSimilarly, using \\eqref{Sigma_t_2C}, we can write\n\\begin{small}\n\\begin{align}\n&\\tr (\\Pi(P_*,1) \\Sigma_{t+1|t})\n\\notag\\\\\n&= \\tr(\\Pi(P_*,1)) + \\tr((A_{s}(1))^\\tp \\Pi(P_*,1) 
A_{s}(1) \\Sigma_t)\n\\notag\\\\\n&= \\tr(\\Pi(P_*,1))\n+\\tr(\\Phi(P_*,\\tilde K_*^1,1) \\Sigma_t)\n\\notag\\\\\n&-\\tr(Q^{\\diamond}(1)\\Sigma_t)-\\tr( (\\tilde K_*^1)^\\tp R^{\\diamond}(1) \\tilde K_*^1 \\Sigma_t)\n\\notag\\\\\n&= \\tr(\\Pi(P_*,1))\n+\\tr(P_*^1 \\Sigma_t)\n\\notag\\\\\n&-\\tr(Q^{\\diamond}(1) \\Sigma_t)- \\tr( (\\tilde K_*^1)^\\tp R^{\\diamond}(1) \\tilde K_*^1 \\Sigma_t),\n\\label{eq:Vtp1_variance_2C}\n\\end{align}\n\\end{small}\nwhere the second equality follows from the definition of $\\Phi(P_*,K,1)$ in Definition \\ref{def:Pi_2} and the last equality is correct because of \\eqref{eq:claim1P1}.\n\nPutting \\eqref{V_t_1_2C}, \\eqref{eq:Vtp1_mean_2C} and \\eqref{eq:Vtp1_variance_2C} together, we get\n\\begin{small}\n\\begin{align}\n \\ee^{g^*}[V_{t+1}|H^0_t] &= \\hat X_t^\\tp P_*^0\\hat X_{t}\n - \\hat X_t^\\tp Q^{\\diamond}(0) \\hat X_{t}- (K_*^0\\hat X_t)^\\tp R^{\\diamond}(0) K_*^0\\hat X_t\n \\notag \\\\\n& + \\tr(\\Pi(P_*,1)) + \\tr(P_*^1 \\Sigma_t)\n\\notag\\\\\n&-\\tr(Q^{\\diamond}(1) \\Sigma_t)- \\tr( (\\tilde K_*^1)^\\tp R^{\\diamond}(1) \\tilde K_*^1 \\Sigma_t) \\notag \\\\\n&= \\tr(\\Pi(P_*,1)) + \\underbrace{\\hat X_t^\\tp P_*^0\\hat X_{t} + \\tr(P_*^1 \\Sigma_t)}_{=V_t} \\notag \\\\\n& - \\underbrace{[ \\hat X_t^\\tp Q^{\\diamond}(0)\\hat X_t +\\tr(Q^{\\diamond}(1) \\Sigma_t)]}_{=\\mathbb{T}_4} \\notag \\\\\n& - \\underbrace{[(K_*^0\\hat X_t)^\\tp R^{\\diamond}(0) K_*^0\\hat X_t + \\tr( (\\tilde K_*^1)^\\tp R^{\\diamond}(1) \\tilde K_*^1 \\Sigma_t)]}_{=\\mathbb{T}_5}.\n\\label{eq:Vtp1_t_2C}\n\\end{align}\n\\end{small}\n\nAdding \\eqref{eq:Vtp1_t_2C} and \\eqref{eq:calculated_c} and noting that $\\Pi(P_*,1) = \\Lambda_*$, we get \n\n\\vspace{-3mm}\n\\begin{small}\n\\begin{align}\n& \\ee^{g^*}[V_{t+1}|H^0_t] + \\ee^{g^*}[c(X_t,U_t^*)|H^0_t]\n= \\tr(\\Lambda_*)\n+ V_t,\n\\end{align}\n\\end{small}\nwhich proves \\eqref{eq:main_goal_2C_2}.\n\\section{Proof of Claim \\ref{claim:cost_optimal_NC}}\n\\label{Cost_of_the_Strategies}\nWe want to show that \\eqref{eq:costforgstar_NC} holds. 
To this end, we show that \n\\begin{align}\n\\ee [ c(X_t,U_t^*) | H_t^0]= \\tr(\\Lambda_*) + \\ee [V_t - V_{t+1} | H_t^0],\n\\label{main_goal_NC}\n\\end{align}\nwhere $\\Lambda_* = \\sum_{n=1}^N \\big( (1-p^n) [P_{*}^0]_{n,n}+p^n P_{*}^n \\big)$.\nThen, taking the expectation of the above equation and summing over $t$ from $0$ to $T$, the correctness of \\eqref{eq:costforgstar_NC} follows:\n\\begin{align}\nJ_{T}(g^*) \n= & \\sum_{t=0}^T \\ee [ c(X_t,U_t^*) ]\n\\notag\\\\\n=&(T+1) \\tr(\\Lambda_*) \n+ \\ee [V_0 - V_{T+1} ]\n\\notag\\\\\n=&(T+1) \\tr(\\Lambda_*) \n- \\ee [V_{T+1} ],\n\\end{align}\nwhere the last equality holds because $V_0 = 0$. \n\nNow, to show that \\eqref{main_goal_NC} holds, first note that $\\ee[V_t | H_t^0] = V_t$ since $V_t$, given by \\eqref{V_t_NC}, is a function of $H^0_t$. Hence, we only need to calculate $\\ee [ c(X_t,U_t^*) | H_t^0]$ and $\\ee[V_{t+1} | H_t^0]$. \n\n\\subsection{Calculating $\\ee [ c(X_t,U_t^*) | H_t^0]$}\nNote that \n\\begin{small}\n\\begin{align}\n&\\ee [ c(X_t,U_t^*) | H_t^0]= \\underbrace{\\ee [X_t^\\tp Q X_t | H_t^0]}_{\\mathbb{T}_4} + \\underbrace{\\ee[U_t^{*\\tp} R U_t^* | H_t^0]}_{\\mathbb{T}_5}. 
\n\\label{cost_given_h}\n\\end{align}\n\\end{small}\nWe can simplify $\\mathbb{T}_4$ as follows,\n\\begin{small}\n\\begin{align}\n\\mathbb{T}_4 &= \\hat X_t^\\tp Q\\hat X_t + \\ee \\Big[(X_t-\\hat X_t)^\\tp Q (X_t-\\hat X_t)|H^0_t\\Big] \\notag \\\\\n& = \\hat X_t^\\tp Q\\hat X_t + \\sum_{n=1}^N \\tr\\big([Q]_{n,n}\\Sigma_t^n \\big) \\notag \\\\\n& = \\hat X_t^\\tp Q(0)\\hat X_t + \\sum_{n=1}^N \\tr\\big([Q(n)]_{n,n}\\Sigma_t^n \\big),\n\\end{align}\n\\end{small}\nwhere the second equality is correct because according to \\cite[Lemma 3]{asghari_optimal_2017_arXiv},\ngiven $H_t^0$, $X_t^n$ is independent of $X_t^m$ for $m \\neq n$ and\nthe last equality is correct because from \\eqref{Q_mj}, $Q(0) = Q$ and $[Q(n)]_{n,n} = [Q]_{n,n}= Q^{nn}$.\nSimilarly, since from \\eqref{eq:opt_U_NC_fixed} we have $U_t^* = K_*^0 \\hat X_t + \\sum_{n=1}^N \\tilde K_*^n (X_t - \\hat X_t)$ where we have defined $\\tilde K_*^n = \\mathcal{L}_{zero}(K_*^0, K_*^n,n+1,n)$, we can simplify $\\mathbb{T}_5$ as follows,\n\n\\vspace{-3mm}\n\\begin{small}\n\\begin{align}\n\\mathbb{T}_5 &=(K_*^0 \\hat X_t)^\\tp R K_*^0 \\hat X_t + \\sum_{n=1}^N \\tr \\big( [(\\tilde K_*^n)^{\\tp} R \\tilde K_*^n]_{n,n} \\Sigma_{t}^n \\big )\n\\notag\\\\\n&= (K_*^0 \\hat X_t)^\\tp R(0) K_*^0 \\hat X_t \n+ \\sum_{n=1}^N \\tr \\big( [(\\tilde K_*^n)^{\\tp} R(n) \\tilde K_*^n]_{n,n} \\Sigma_{t}^n \\big ),\n \\label{cost_on_U_t}\n\\end{align}\n\\end{small}\nwhere the last equality is correct because from \\eqref{R_mj}, $R(0) = R$ and $[(\\tilde K_*^n)^{\\tp} R \\tilde K_*^n]_{n,n} = \n[(\\tilde K_*^n)^{\\tp} R(n) \\tilde K_*^n]_{n,n} = (K_*^n)^{\\tp} R^{nn} K_*^n$.\n\n\\subsection{Calculating $\\ee[V_{t+1}|H^0_t]$}\nFor $n \\in \\mathcal{N}$, we define\n\\begin{align}\n&\\hat X_{t+1|t}^n := \\ee [X_{t+1}^n|H^0_{t}]\n\\\\\n&\\Sigma_{t+1|t}^n := \\cov(X_{t+1}^n|H^0_{t})\n \\\\\n&\\Sigma_t^n := \\cov(X_{t}^n|H^0_{t}).\n\\label{sigma_t}\n\\end{align}\nRecall that $\\hat{X}_t^n = \\ee [X_{t}^n|H^0_{t}]$. 
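The decomposition used for $\\mathbb{T}_4$ rests on the identity $\\ee[X^\\tp Q X \\mid H] = \\hat X^\\tp Q \\hat X + \\tr(Q\\Sigma)$. Its empirical analogue holds exactly for any finite sample when the sample mean and the biased sample covariance are used, which gives a quick deterministic sanity check (the dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# For ANY finite sample, the average of x'Qx equals xbar'Q xbar + tr(Q S),
# where xbar is the sample mean and S the biased sample covariance; this is
# the exact empirical analogue of E[X'QX | H] = Xhat'Q Xhat + tr(Q Sigma).
n, N = 4, 50
X = rng.standard_normal((N, n))               # N hypothetical realizations
M = rng.standard_normal((n, n)); Q = M @ M.T  # PSD weight, stand-in for Q

x_bar = X.mean(axis=0)
D = X - x_bar
Sigma = (D.T @ D) / N                         # biased sample covariance

lhs = np.mean([x @ Q @ x for x in X])         # average quadratic cost
rhs = x_bar @ Q @ x_bar + np.trace(Q @ Sigma)
assert np.allclose(lhs, rhs)
```

The cross terms vanish because the deviations from the sample mean sum to zero, mirroring the step in the derivation of $\\mathbb{T}_4$.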
Note that under the strategies $g^*$ (with $U_t^*$ as given in \\eqref{eq:opt_U_NC_fixed}) and from \\eqref{Model:system}, we have \n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\nX_{t+1} &= AX_t + B \\Big[ \nK_*^0\\hat X_t + \\sum_{n=1}^N \\tilde K_*^n (X_t - \\hat X_t)\n \\Big] + W_t \\notag \\\\\n& = A_s(0) \\hat X_t + \\sum_{n=1}^N A_s(n) ( X_t - \\hat X_t) + W_t.\n\\label{X_dynamics_proof_NC}\n\\end{align}\n\\end{small}\nIn the above equation, we have defined $A_s(0) = A(0) + B(0) K_*^0$ and $A_s(n) = A(n) + B(n) \\tilde K_*^n$ for $n \\in \\mathcal{N}$ where matrices $A(n),B(n)$, $n \\in \\overline{\\mathcal{N}}$, are as defined in \\eqref{A_mj}-\\eqref{B_mj}. Note that the last equality of \\eqref{X_dynamics_proof_NC} is correct because from \\eqref{A_mj}-\\eqref{B_mj}, $A(0) = A$, $\\sum_{n=1}^N A(n) = A$, $B(0) =B$, and $B\\tilde K_*^n = B(n) \\tilde K_*^n$ for $n \\in \\mathcal{N}$. Now using \\eqref{X_dynamics_proof_NC}, we can calculate $\\hat X_{t+1|t}$ and $\\Sigma^n_{t+1|t}$ as follows, \n\\begin{align}\n\\label{estimation_X_NC}\n&\\hat X_{t+1|t} = A_s(0) \\hat X_t, \\\\\n&\\Sigma_{t+1|t}^n = \\mathbf I + [A_s(n)]_{n,n} \\Sigma_t^n [A_s(n)]_{n,n}^\\tp.\n\\label{Sigma_t_NC}\n\\end{align}\nFrom the definition of $V_{t+1}$ in \\eqref{V_t_NC} and considering \\eqref{sigma_t}, we have\n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n& \\ee[V_{t+1}|H^0_t]= \\underbrace{\\ee\\Big[\n\\hat X_{t+1}^\\tp P_*^0 \\hat X_{t+1} \\Big| H^0_t\n\\Big]}_{\\mathbb{T}_6}\n+ \\underbrace{\\ee\\Big[\n\\sum_{n=1}^N \\tr \\big(P_*^n \\Sigma_{t+1}^n \\big )\n\\Big| H^0_t\n\\Big]}_{\\mathbb{T}_7}.\n\\end{align}\n\\end{small}\n\nNote that when $\\Gamma_{t+1}^n = 1$, $\\hat X_{t+1}^n = X_{t+1}^n$ and hence $\\Sigma_{t+1}^n = 0$. Further, when $\\Gamma_{t+1}^n = 0$, $\\hat X_{t+1}^n = \\hat X_{t+1|t}^n$ and $\\Sigma_{t+1}^n = \\Sigma_{t+1|t}^n$. 
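The prediction step \\eqref{Sigma_t_NC} combined with the two cases for $\\Gamma_{t+1}^n$ yields a simple per-link covariance recursion. A minimal numerical sketch follows, with a hypothetical $2\\times 2$ block standing in for $[A_s(n)]_{n,n}$ and the identity matrix playing the role of the process-noise covariance, as in \\eqref{Sigma_t_NC}:

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-link covariance recursion for one subsystem n:
#   predict:  Sigma_{t+1|t} = I + A Sigma_t A'
#   correct:  Sigma_{t+1}   = (1 - Gamma_{t+1}) Sigma_{t+1|t}
A_block = 0.5 * rng.standard_normal((2, 2))   # hypothetical closed-loop block
I = np.eye(2)

def step(Sigma, gamma):
    Sigma_pred = I + A_block @ Sigma @ A_block.T
    return (1 - gamma) * Sigma_pred

Sigma = np.zeros((2, 2))
Sigma = step(Sigma, gamma=0)      # link down: uncertainty grows
assert np.allclose(Sigma, I)      # from Sigma_0 = 0 the prediction is exactly I
Sigma = step(Sigma, gamma=1)      # link up: estimate equals the state
assert np.allclose(Sigma, 0)
```

When the link is down the conditional covariance grows through the prediction step, while a single successful transmission resets it to zero.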
Let $\\Delta$ be a block diagonal matrix whose diagonal blocks are $\\Gamma_{t+1}^1 \\mathbf{I}_{d_X^1}, \\ldots, \\Gamma_{t+1}^N \\mathbf{I}_{d_X^N}$, that is,\n\\begin{align}\n\\Delta := \\begin{bmatrix}\n \\Gamma_{t+1}^1 \\mathbf{I}_{d_X^1} & & \\text{\\huge0}\\\\\n & \\ddots & \\\\\n \\text{\\huge0} & & \\Gamma_{t+1}^N \\mathbf{I}_{d_X^N}\n\\end{bmatrix}.\n\\label{Delta}\n\\end{align}\nNote that $\\Delta$ is a random matrix that depends on $\\Gamma_{t+1}^1,\\ldots,\\Gamma_{t+1}^N$.\nThen, we can write \n\\begin{align}\n\\label{estimation_state}\n&\\hat X_{t+1} = \\Delta X_{t+1} +(\\mathbf{I}-\\Delta) \\hat X_{t+1|t}, \\\\\n &\\Sigma_{t+1}^n = (1-\\Gamma_{t+1}^n) \\Sigma_{t+1|t}^n, \\quad n \\in \\mathcal{N}.\n\\label{estimation_covariance}\n\\end{align}\nNow, we use \\eqref{estimation_state} and \\eqref{estimation_covariance} to calculate $\\mathbb{T}_6$ and $\\mathbb{T}_7$.\n\n\\subsubsection{Calculating $\\mathbb{T}_6$}\nNote that from \\eqref{estimation_state}, $\\hat X_{t+1}$ can be written as $ \\hat X_{t+1}= \\hat X_{t+1|t} + \\Delta (X_{t+1} -\\hat X_{t+1|t})$. 
Considering this, $\\mathbb{T}_6$ can be written as\n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n&\\mathbb{T}_6= \n\\ee\\Big[\n\\hat X_{t+1|t} ^\\tp P_*^0 \\hat X_{t+1|t} \\Big| H^0_t \\Big] \\notag \\\\\n&+ \\ee\\Big[\n(X_{t+1} -\\hat X_{t+1|t})^\\tp \\Delta^{\\tp} P_*^0 \\Delta (X_{t+1} -\\hat X_{t+1|t}) \\Big| H^0_t \\Big] \\notag \\\\ & = \\hat X_{t+1|t} ^\\tp P_*^0 \\hat X_{t+1|t} + \\sum_{n=1}^N (1-p^n) \\tr \\big ([P_*^0]_{n,n} \\Sigma_{t+1|t}^n \\big).\n\\label{T_4_1}\n\\end{align}\n\\end{small}\n\nThe last equality of \\eqref{T_4_1} is correct because\n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n& \\ee\\Big[\n(X_{t+1} -\\hat X_{t+1|t})^\\tp \\Delta^{\\tp} P_*^0 \\Delta (X_{t+1} -\\hat X_{t+1|t}) \\Big| H^0_t \\Big] \\notag \\\\ \n&= \n\\ee\\Big[ \\ee[\n(X_{t+1} -\\hat X_{t+1|t})^\\tp \\Delta^{\\tp} P_*^0 \\Delta (X_{t+1} -\\hat X_{t+1|t}) \n| \\Delta,H_t^0] \\Big| H^0_t \\Big] \\notag \\\\ \n&= \n\\ee\\Big[ \\sum_{n=1}^N\n(X_{t+1}^n -\\hat X_{t+1|t}^n)^\\tp [\\Delta^{\\tp} P_*^0 \\Delta]_{n,n} (X_{t+1}^n -\\hat X_{t+1|t}^n) \n\\Big| H^0_t \\Big] \\notag \\\\ \n&= \n\\sum_{n=1}^N \\ee\\Big[ \n(X_{t+1}^n -\\hat X_{t+1|t}^n)^\\tp (\\Gamma_{t+1}^n)^2 [P_*^0]_{n,n} (X_{t+1}^n -\\hat X_{t+1|t}^n) \n\\Big| H^0_t \\Big] \\notag \\\\ \n&= \\sum_{n=1}^N (1-p^n) \\tr \\big ([P_*^0]_{n,n} \\Sigma_{t+1|t}^n \\big),\n\\label{T_4_1_1}\n\\end{align}\n\\end{small}\nwhere the first equality is correct because of the tower property and the second equality is correct because according to \\cite[Lemma 3]{asghari_optimal_2017_arXiv}, given the common information $H_t^0$, $X_{t+1}^n$ is independent of $X_{t+1}^m$ for $m \\neq n$. The third equality is correct because $[\\Delta^{\\tp} P_*^0 \\Delta]_{n,n} = (\\Gamma_{t+1}^n)^2 [P_*^0]_{n,n}$ which can be verified through straightforward algebraic manipulations by using \\eqref{Delta}. 
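The identity $[\\Delta^{\\tp} P_*^0 \\Delta]_{n,n} = (\\Gamma_{t+1}^n)^2 [P_*^0]_{n,n}$ can also be confirmed directly on a small example (hypothetical dimensions and link realizations):

```python
import numpy as np

rng = np.random.default_rng(3)

# Check [Delta' P Delta]_{n,n} = (Gamma^n)^2 [P]_{n,n} with N = 3
# hypothetical subsystems of dimension 2 each.
dims = [2, 2, 2]
gammas = [1, 0, 1]                        # realizations of Gamma_{t+1}^n
Delta = np.diag(np.repeat(gammas, dims)).astype(float)
P = rng.standard_normal((6, 6))           # stand-in for P_*^0

M = Delta.T @ P @ Delta
offsets = np.cumsum([0] + dims)
for n, g in enumerate(gammas):
    i, j = offsets[n], offsets[n + 1]
    assert np.allclose(M[i:j, i:j], g**2 * P[i:j, i:j])
```

Since each diagonal block of $\\Delta$ is a scalar multiple of the identity, the $(n,n)$ block of $\\Delta^{\\tp} P_*^0 \\Delta$ simply picks up the factor $(\\Gamma_{t+1}^n)^2$.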
Finally, the last equality is correct because each term of the summation is only non-zero when $\\Gamma_{t+1}^n =1$ which happens with probability $1-p^n$.\n\n\\subsubsection{Calculating $\\mathbb{T}_7$}\nConsidering \\eqref{estimation_covariance}, it is straightforward to see that \n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n&\\mathbb{T}_7= \\sum_{n=1}^N p^n \\tr \\big (P_*^n \\Sigma_{t+1|t}^n \\big).\n\\end{align}\n\\end{small}\n\n\\subsubsection{Simplifying $\\ee[V_{t+1}|H^0_t]$}\nNow, using $\\mathbb{T}_6$ and $\\mathbb{T}_7$ calculated above, $\\ee[V_{t+1}|H^0_t]$ can be written as\n\\begin{align}\n\\ee[V_{t+1}|H^0_t]&= \n\\hat X_{t+1|t} ^\\tp \\Pi(\\tilde P_*,0) \\hat X_{t+1|t} \\notag \\\\\n&+ \\sum_{n=1}^N \\tr \\big ([\\Pi(\\tilde P_*,n)]_{n,n} \\Sigma_{t+1|t}^n \\big),\n\\label{V_t_1}\n\\end{align}\nwhere we have defined $\\tilde P_*^0 = P_*^0$ and $\\tilde P_*^n = \\mathcal{L}_{zero}(P_*^0, P_*^n,n,n)$, and $\\tilde P_*$ denotes the collection $\\{\\tilde P_*^0,\\tilde P_*^1,\\ldots,\\tilde P_*^N\\}$. Further, we have used the fact that $[\\tilde P_*^n]_{n,n} = P_*^n$ and the definition of $\\Pi(\\cdot)$ in \\eqref{Pi_matrix} where $\\Theta$ is as defined in \\eqref{eq:MJLS_theta}.\n\nUsing \\eqref{estimation_X_NC}, we can write $\\hat X_{t+1|t} ^\\tp \\Pi(\\tilde P_*,0) \\hat X_{t+1|t}$ as follows,\n\\begin{small}\n\\begin{align}\n& \\hat X_t ^\\tp A_s(0)^\\tp \\Pi(\\tilde P_*,0) A_s(0) \\hat X_t\n\\notag\\\\\n&= \\hat X_t ^\\tp \\Big(\\Phi(\\tilde P_*,K_*^0,0) - Q(0) - (K_*^0)^\\tp R(0) K_*^0 \\Big)\\hat X_t\n\\notag\\\\\n&= \\hat X_t^\\tp \\tilde P_*^0\\hat X_{t}\n - \\hat X_t^\\tp Q(0) \\hat X_{t}- (K_*^0\\hat X_t)^\\tp R(0) K_*^0\\hat X_t,\n\\label{eq:Vtp1_mean_NC}\n\\end{align}\n\\end{small}\nwhere the first equality is true from \\eqref{Phi_def} in Lemma \\ref{lm:KPrelation}. 
Furthermore, the last equality is correct because\n\n\\vspace{-2mm}\n\\begin{small}\n\\begin{align}\n\\Phi(\\tilde P_*,K_*^0,0) = \\Phi(\\tilde P_*,\\Psi^{\\diamond}(\\tilde P_*,0),0) = \\Omega^{\\diamond}(\\tilde P_*,0) = \\tilde P_*^0,\n\\end{align}\n\\end{small}\nwhere the first equality is true because from \\eqref{eq:K_finite_2C_fixed}, $K_*^0 = \\Psi^{\\diamond}(\\tilde P_*,0)$, the second equality is true from \\eqref{eq:relation1} in Lemma \\ref{lm:KPrelation}, and the last equality is true because of the definition of the operator $\\Omega^{\\diamond}(\\cdot)$ in \\eqref{Omega_MJ} and \\eqref{eq:P_finite_2C_fixed}.\n\nSimilarly, using \\eqref{Sigma_t_NC} and by defining $\\hat P_*^n = \\Pi(\\tilde P_*,n)$, $\\tr \\big ([\\Pi(\\tilde P_*,n)]_{n,n} \\Sigma_{t+1|t}^n \\big)$ can be written as\n\\begin{small}\n\\begin{align}\n&\\tr \\big ([\\Pi(\\tilde P_*,n)]_{n,n} \\Sigma_{t+1|t}^n \\big)\n\\notag\\\\\n&=\\tr \\big ([\\hat P_*^n]_{n,n} \\big) + \\tr \\big ([A_{s}(n)]_{n,n}^{\\tp} [\\hat P_*^n]_{n,n} [A_{s}(n)]_{n,n} \\Sigma_t^n \\big)\n\\notag\\\\\n&= \\tr \\big ([\\hat P_*^n]_{n,n} \\big) \n+\\tr(\\Phi(\\tilde P_*,\\tilde K_*^n,n) \\Sigma_t^n)\n\\notag\\\\\n&-\\tr \\big( [Q(n)]_{n,n} \\Sigma_t^n \\big)-\\tr \\big( [(\\tilde K_*^n)^\\tp R(n) \\tilde K_*^n]_{n,n} \\Sigma_t^n \\big)\n\\notag\\\\\n&= \\tr \\big ([\\hat P_*^n]_{n,n} \\big) \n+\\tr([\\tilde P_*^n]_{n,n} \\Sigma_t^n)\n\\notag\\\\\n&-\\tr \\big( [Q(n)]_{n,n} \\Sigma_t^n \\big)- \\tr \\big( [(\\tilde K_*^n)^\\tp R(n) \\tilde K_*^n]_{n,n} \\Sigma_t^n \\big).\n\\label{eq:Vtp1_variance_NC}\n\\end{align}\n\\end{small}\nPutting \\eqref{eq:Vtp1_mean_NC} and \\eqref{eq:Vtp1_variance_NC} into \\eqref{V_t_1} and using the fact that $[\\tilde P_*^n]_{n,n} = P_*^n$, we get\n\\begin{small}\n\\begin{align}\n& \\ee[V_{t+1}|H^0_t]\n\\notag\\\\\n& = \\hat X_t^\\tp P_*^0\\hat X_{t}\n - \\hat X_t^\\tp Q(0) \\hat X_{t}- (K_*^0\\hat X_t)^\\tp R(0) K_*^0\\hat X_t\n \\notag \\\\\n& + \\sum_{n=1}^N \\tr \\big ([\\hat 
P_*^n]_{n,n} \\big) +\\sum_{n=1}^N \\tr(P_*^n \\Sigma_t^n)\n\\notag\\\\\n&- \\sum_{n=1}^N \\tr \\big( [Q(n)]_{n,n} \\Sigma_t^n \\big)- \\sum_{n=1}^N \\tr \\big( [(\\tilde K_*^n)^\\tp R(n) \\tilde K_*^n]_{n,n} \\Sigma_t^n \\big)\n \\notag \\\\\n&=\\sum_{n=1}^N \\tr \\big ([\\hat P_*^n]_{n,n} \\big) + \\underbrace{ \\hat X_t^\\tp P_*^0\\hat X_{t} + \\sum_{n=1}^N \\tr(P_*^n \\Sigma_t^n)}_{=V_t} \\notag \\\\\n& - \\underbrace{[\\hat X_t^\\tp Q(0) \\hat X_{t} + \\sum_{n=1}^N \\tr \\big( [Q(n)]_{n,n} \\Sigma_t^n \\big)]}_{=\\mathbb{T}_4} \\notag \\\\\n& - \\underbrace{ [(K_*^0\\hat X_t)^\\tp R(0) K_*^0\\hat X_t + \\sum_{n=1}^N \\tr \\big( [(\\tilde K_*^n)^\\tp R(n) \\tilde K_*^n]_{n,n} \\Sigma_t^n \\big)]\n}_{=\\mathbb{T}_5}.\n\\label{eq:Vtp1_t}\n\\end{align}\n\\end{small}\n\nNote that from \\eqref{cost_given_h}, we have $\\ee[c(X_t,U_t^*)|H^0_t] = \\mathbb{T}_4 + \\mathbb{T}_5$ and furthermore, \n\n\\vspace{-3mm}\n\\begin{small}\n\\begin{align}\n&\\sum_{n=1}^N [\\hat P_*^n]_{n,n} = \\sum_{n=1}^N [\\Pi(\\tilde P_*,n)]_{n,n} \\notag \\\\\n&= \\sum_{n=1}^N \\big( (1-p^n) [\\tilde P_*^0]_{n,n} + p^n [\\tilde P_*^n]_{n,n} \\big) \\notag \\\\\n&=\n\\sum_{n=1}^N \\big( (1-p^n) [P_{*}^0]_{n,n}+p^n P_{*}^n \\big) = \\Lambda_*.\n\\end{align}\n\\end{small}\n\nTherefore,\n\\begin{small}\n\\begin{align}\n& \\ee[V_{t+1}|H^0_t]\n= \\tr(\\Lambda_*)\n+ V_t - \\ee[c(X_t,U_t^*)|H^0_t],\n\\end{align}\n\\end{small}\nwhich establishes \\eqref{main_goal_NC}.\n\n\\subsection{Notations}\nIn general, subscripts are used as time indices while superscripts are used to index controllers.\nFor time indices $t_1\\leq t_2$, $X_{t_1:t_2}$ is a short hand notation for the collection of variables $(X_{t_1},X_{t_1+1},...,X_{t_2})$.\nRandom variables\/vectors are denoted by upper case letters, their realizations by the corresponding lower case letters.\nFor a sequence of column vectors $X, Y, Z, \\ldots$, the notation $\\vecc(X,Y,Z,\\ldots)$ denotes the vector $[X^{\\tp}, Y^{\\tp}, 
Z^{\\tp},...]^{\\tp}$. $\\prob(\\cdot)$ denotes the probability of an event, and $\\ee[\\cdot]$ and $\\cov(\\cdot)$ denote the expectation and the covariance matrix of a random variable\/vector. The transpose, trace, and spectral radius of a matrix $A$ are denoted by $A^{\\tp}$, $\\tr(A)$, and $\\rho(A)$, respectively. \nFor two symmetric matrices $A,B$, $A\\succeq B$ (resp. $A\\succ B$) means that $(A-B)$ is positive semi-definite (PSD) (resp. positive definite (PD)).\nFor a block matrix $A$, we use $[A]_{m,:}$ to denote the $m$-th block row and $[A]_{:,n}$ to denote the $n$-th block column of $A$. Further, $[A]_{m,n}$ denotes the block located at the $m$-th block row and $n$-th block column of $A$. For example, if \n\\begin{align}\nA = \\begin{bmatrix}\nA^{11} & A^{12} & A^{13} \\\\\nA^{21} & A^{22} & A^{23} \\\\\nA^{31} & A^{32} & A^{33} \n\\end{bmatrix},\\notag\n\\end{align}\nthen $[A]_{2,:} = \\begin{bmatrix}\nA^{21} & A^{22} & A^{23}\n\\end{bmatrix}$, $[A]_{:,3} = \\begin{bmatrix}\nA^{13} \\\\ A^{23} \\\\ A^{33}\n\\end{bmatrix}$, and $[A]_{2,3} = A^{23}$.\nWe use $\\R^n$ to denote the $n$-dimensional Euclidean space and $\\R^{m \\times n}$ to denote the space of all real-valued $m \\times n$ matrices. We use $\\otimes$ to denote the Kronecker product.\n\n\\subsection{Operator Definitions}\\label{sec:operators}\nWe define the following operators. \n\\begin{itemize}\n\\item Consider matrices $P,Q,R,A,B$ of appropriate dimensions with $P,Q$ being PSD matrices and $R$ being a PD matrix. We define $\\Omega(P,Q,R,A,B)$ and $\\Psi(P,R,A,B)$ as follows: \n\\begin{align}\n\\label{Omega}\n&\\Omega(P,Q,R,A,B) \n:= Q+A^\\tp P A- \\notag\\\\\n& ~~~~~~~~ ~~~~~~~~ ~~~~~~~~A^\\tp P B(R+B^\\tp P B)^{-1}B^\\tp P A.\n\\\\\n&\\Psi(P,R,A,B) \n:=\n-(R+B^\\tp P B)^{-1}B^\\tp P A.\n\\label{Psi}\n\\end{align}\nNote that $P = \\Omega(P,Q,R,A,B) $ is the discrete time algebraic Riccati equation.\n\n\\item Let $P$ be a block matrix with $M_1$ block rows and $M_2$ block columns. 
Then, for numbers $m_1, m_2$ and a matrix $Q$, $\\mathcal{L}_{zero}(P,Q,m_1,m_2)$ is a matrix of the same size as $P$ defined as follows:\n\\begin{align}\n&\\mathcal{L}_{zero}(P,Q,m_1,m_2) := \\notag \\\\\n&\\begin{blockarray}{cccl}\n\\text{$1:m_2-1$} &m_2 &\\text{$m_2+1:M_2$} & \\\\\n\\begin{block}{[ccc]l}\n \\mathbf{0} & \\mathbf{0} & \\mathbf{0} & \\text{$1:m_1-1$} \\\\\n \\mathbf{0} & Q & \\mathbf{0} &m_1 \\\\\n \\mathbf{0} & \\mathbf{0} & \\mathbf{0} &\\text{$m_1+1:M_1$} \\\\\n\\end{block}\n\\end{blockarray}\n\\label{L_zero}\n\\end{align}\n\n\\item Let $P$ be a block matrix with $M_1$ block rows and $M_1$ block columns. Then, for a number $m_1$ and a matrix $Q$, $\\mathcal{L}_{iden}(P,Q,m_1)$ is a matrix of the same size as $P$ defined as follows:\n\\begin{align}\n&\\mathcal{L}_{iden}(P,Q,m_1) := \\notag \\\\\n&\\begin{blockarray}{cccl}\n\\text{$1:m_1-1$} &m_1 &\\text{$m_1+1:M_1$} & \\\\\n\\begin{block}{[ccc]l}\n \\mathbf{I} & \\mathbf{0} & \\mathbf{0} & \\text{$1:m_1-1$} \\\\\n \\mathbf{0} & Q & \\mathbf{0} &m_1 \\\\\n \\mathbf{0} & \\mathbf{0} & \\mathbf{I} &\\text{$m_1+1:M_1$} \\\\\n\\end{block}\n\\end{blockarray}\n\\label{L_iden}\n\\end{align}\n\\end{itemize}\n\\section{Proof of Lemma \\ref{lm:pc_NC}}\n\\label{proof_lm:pc}\nThe proof can be obtained by following a methodology similar to the one used to prove Lemma \\ref{lm:pc_2C}. We can first use part 1 of Lemma \\ref{lm:ss} to show that the auxiliary MJLS described by \\eqref{A_mj}-\\eqref{transition_prob} is SS if and only if $p^n < p_c^n$ for all $n \\in \\mathcal{N}$ where $p_c^n$ is the critical threshold given by \\eqref{eq:pc}. \n\nSecond, we can use part 2 of Lemma \\ref{lm:ss} to show that the auxiliary MJLS described by \\eqref{A_mj}-\\eqref{transition_prob} is SD if and only if $\\mathcal{A}_d$ defined as in \\eqref{eq:bigmatrix3} is Schur stable. 
Since the matrix $\\mathcal{A}_{d}$ is upper-triangular, it is Schur stable if and only if there exist matrices $H^{\\diamond}(0)$ and $H^{\\diamond}(n)$ for $n \\in \\mathcal{N}$ such that $\\rho\\big(A_d(0)\\otimes A_d(0)\\big)<1$ and $\\rho\\big(p^n A_d(n)\\otimes A_d(n)\\big)<1$, where $A_d(n)$, $n \\in \\mathcal{N}$, are defined as in \\eqref{A_c_l}. The existence of such matrices follows since $(A,Q^{1\/2})$ and $\\big(A^{nn},(Q^{nn})^{1\/2} \\big)$ for all $n \\in \\mathcal{N}$ are detectable by Assumptions \\ref{assum:det_stb} and \\ref{assum:det_stb_2}.\nHence, the MJLS is SD.\n\nIt then follows from Lemma \\ref{lm:MJ_infinite} that matrices $P^{\\diamond}_{t}(n)$, $n \\in \\overline{\\mathcal{N}}$, converge as $t \\to -\\infty$ to PSD matrices $P^{\\diamond}_{*}(n)$ that satisfy the steady state version of \\eqref{P_MJ_cmp_NC_0}-\\eqref{P_MJ_cmp_NC_1} (i.e., equations \\eqref{eq:P_finite_NC_fixed}-\\eqref{eq:tildeP_finite_NC_fixed}) if and only if $p^n < p_c^n$ for all $n \\in \\mathcal{N}$. 
This proves the lemma.\n\n\\begin{figure*}[t]\n\\begin{small}\n\\begin{align}\n\\mathcal{A}_d =\n\\begin{blockarray}{ccccc}\n\\begin{block}{[ccccc]}\n A_d(0)\\otimes A_d(0) & (1-p^1)A_d(1)\\otimes A_d(1)& (1-p^2)A_d(2)\\otimes A_d(2) & \\ldots & (1-p^N)A_d(N)\\otimes A_d(N) \\\\ \\\\\n \\mathbf{0} & p^1A_d(1)\\otimes A_d(1) &\\mathbf{0} &\\ldots & \\mathbf{0} \\\\ \\\\\n \\vdots &\\ddots &p^2 A_d(2)\\otimes A_d(2) & \\ddots & \\vdots \\\\ \\\\\n \\vdots & &\\ddots &\\ddots & \\mathbf{0} \\\\ \\\\\n \\mathbf{0} & \\ldots &\\ldots & \\mathbf{0} & p^N A_d(N)\\otimes A_d(N)\\\\\n\\end{block}\n\\end{blockarray}\n\\label{eq:bigmatrix3}\n\\end{align}\n\\end{small}\n\\hrule\n\\end{figure*}\n\\begin{figure*}[t]\n\\begin{small}\n\\begin{align}\n&A_d(n) = A^{\\diamond} (n) + H^{\\diamond}(n) \\big(Q^{\\diamond}(n) \\big)^{1\/2}\n= \\begin{blockarray}{cccl}\n\\text{$1:n-1$} &n &\\text{$n+1:N$} & \\\\\n\\begin{block}{[ccc]l}\n \\text{\\large 0} & [H^{\\diamond}(n)]_{n,1:n-1} (Q^{nn})^{1\/2} &\\text{\\large 0} & \\text{$1:n-1$} \\\\\n \\text{\\large 0} & A^{nn} + [H^{\\diamond}(n)]_{n,n} (Q^{nn})^{1\/2} & \\text{\\large 0} &n \\\\\n \\text{\\large 0} & [H^{\\diamond}(n)]_{n,n+1:N} (Q^{nn})^{1\/2} & \\text{\\large 0}&\\text{$n+1:N$} \\\\\n\\end{block}\n\\end{blockarray}.\n\\label{A_c_l}\n\\end{align}\n\\end{small}\n\\hrule\n\\end{figure*} \n\\section{Proof of Lemma \\ref{lm:opt_strategies_new_rep}}\n\\label{proof_lm:opt_strategies_new_rep}\nBy comparing \\eqref{eq:P_N_init}-\\eqref{eq:P_finite} with \\eqref{eq:barP_init}-\\eqref{eq:barP_finite_0}, it is straightforward to observe that $P_t^0 = \\bar P_t^0$ for all $t$. We will now show by induction that at any time $t$, $P_t^n = \\bar P_t^n$ for $n =1,\\ldots,N$.\nFirst note that by definition, $P_{T+1}^{n} = \\mathbf{0}$ and $\\bar P^n_{T+1} = \\mathbf{0}$ for $n =1,\\ldots,N$. Hence, \\eqref{new_rep2} is correct at time $T+1$. Now, assume that \\eqref{new_rep2} is correct at time $t+1$ (induction hypothesis). 
Then, from \\eqref{eq:barP_finite} and the induction hypothesis, we have for $n =1,\\ldots,N$,\n\\begin{align}\n\\bar P_t^{n} &= \\Omega \\big((1-p^n) P_{t+1}^0+p^n \\mathcal{L}_{zero}(P^0_{t+1}, P_{t+1}^{n},n,n), \\notag \\\\\n& \\hspace{1.0cm} \\mathcal{L}_{zero}(Q,Q^{nn},n,n), \\mathcal{L}_{iden}(R,R^{nn},n+1), \\notag \\\\\n & \\hspace{1.0cm} \\mathcal{L}_{zero}(A,A^{nn},n,n), \\mathcal{L}_{zero}(B,B^{nn},n,n+1) \\big) \\notag \\\\\n&=\n\\mathcal{L}_{zero}(Q,Q^{nn},n,n) + \\mathbb{T}_1 - \\mathbb{T}_2 (\\mathbb{T}_3)^{-1} (\\mathbb{T}_2)^{\\tp},\n\\label{P_MJ_n}\n\\end{align}\nwhere \n\\begin{align}\n\\label{T_1_def}\n&\\mathbb{T}_1 = \\mathcal{L}_{zero}(A,A^{nn},n,n)^{\\tp} \\bar{\\bar{P}}_{t+1} \\mathcal{L}_{zero}(A,A^{nn},n,n) \\\\\n\\label{T_2_def}\n&\\mathbb{T}_2 = \\mathcal{L}_{zero}(A,A^{nn},n,n)^{\\tp} \\bar{\\bar{P}}_{t+1} \\mathcal{L}_{zero}(B,B^{nn},n,n+1) , \\\\\n\\label{T_3_def}\n&\\mathbb{T}_3 = \\mathcal{L}_{iden}(R,R^{nn},n+1) \n\\notag \\\\\n&+ \\mathcal{L}_{zero}(B,B^{nn},n,n+1)^{\\tp} \\bar{\\bar{P}}_{t+1} \\mathcal{L}_{zero}(B,B^{nn},n,n+1),\n\\end{align}\nand we have defined $\\bar{\\bar{P}}_{t+1} = (1-p^n) P_{t+1}^0+p^n \\mathcal{L}_{zero}(P^0_{t+1}, P_{t+1}^{n},n,n)$.\n\nNote that from the definitions of operators $\\mathcal{L}_{zero}$ and $\\mathcal{L}_{iden}$ in \\eqref{L_zero}-\\eqref{L_iden}, it is straightforward to observe that the block dimensions of $\\mathbb{T}_1, \\mathbb{T}_2, \\mathbb{T}_3$ are the same as the block dimensions of $A,B,B^{\\tp} B$, respectively (They are block matrices of sizes $N \\times N$, $N \\times (N+1)$, and $(N+1) \\times (N+1)$, respectively). 
Therefore, through straightforward algebraic manipulations, we can get\n\\begin{align}\n\\label{T_1}\n&\\mathbb{T}_1 = \\mathcal{L}_{zero}(A,\\mathbb{\\tilde T}_1,n,n), \\\\\n\\label{T_2}\n&\\mathbb{T}_2 = \\mathcal{L}_{zero}(B,\\mathbb{\\tilde T}_2,n,n+1), \\\\\n\\label{T_3}\n&\\mathbb{T}_3 = \\mathcal{L}_{iden}(B^{\\tp}B,\\mathbb{\\tilde T}_3,n+1),\n\\end{align}\nwhere\n\\begin{align}\n\\label{tilde_T_1}\n&\\mathbb{\\tilde T}_1 = (A^{nn})^{\\tp}[(1-p^n)[P_{t+1}^0]_{n,n}+p^n P_{t+1}^{n}]A^{nn}, \\\\\n\\label{tilde_T_2}\n&\\mathbb{\\tilde T}_2 = (A^{nn})^{\\tp}[(1-p^n)[P_{t+1}^0]_{n,n}+p^n P_{t+1}^{n}]B^{nn}, \\\\\n\\label{tilde_T_3}\n&\\mathbb{\\tilde T}_3 = R^{nn} + (B^{nn})^{\\tp}[(1-p^n)[P_{t+1}^0]_{n,n}+p^n P_{t+1}^{n}]B^{nn}.\n\\end{align}\n\nFurther, since $\\mathbb{T}_3$ is a block diagonal matrix, we have \n\\begin{align}\n& (\\mathbb{T}_3)^{-1} =\\mathcal{L}_{iden} \\big(B^{\\tp}B,(\\mathbb{\\tilde T}_3)^{-1},n+1 \\big).\n\\label{T_3_inv}\n\\end{align}\n\nNow, using \\eqref{T_1}-\\eqref{T_3_inv} and the fact that matrices $A, Q, BB^{\\tp}$ have the same size as matrix $P^0_t$ (They are block matrices of size $N \\times N$), \\eqref{P_MJ_n} can be simplified to\n\\begin{align}\n\\bar P_t^{n} &= \\mathcal{L}_{zero} \\big(P^0_t, Q^{nn} + \\mathbb{\\tilde T}_1 - \\mathbb{\\tilde T}_2 (\\mathbb{\\tilde T}_3)^{-1} (\\mathbb{\\tilde T}_2)^{\\tp},n,n \\big) \\notag \\\\\n&= \\mathcal{L}_{zero}(P^0_t, P_t^{n},n,n),\n\\end{align}\nwhere the last equality is true because of the definition of $P_t^{n}$ in \\eqref{eq:tildeP_finite}. Hence, \\eqref{new_rep2} is true at time $t$. This completes the proof.\n}\n\n\n{\\color{blue}\n\\section{Proof of Lemma \\ref{lm:pc_NC}}\n\\label{proof_lm:pc}\nFrom Lemma \\ref{lm:opt_strategies_new_rep}, we know that the convergence of matrices $P_t^{n}$, $n \\in \\mathcal{\\overline N}$, is equivalent to the convergence of matrices $\\bar P_t^{n}$, $n \\in \\mathcal{\\overline N}$. 
Further, because of Lemma \\ref{equality_recursions_NC}, $\\bar P^n_t = P^{\\diamond}_t(n)$, $n \\in \\mathcal{\\overline N}$, where matrices $P^{\\diamond}_t(n)$, $n \\in \\mathcal{\\overline N}$, are defined by \\eqref{eq:P_N_MJ_init}-\\eqref{P_MJ_cmp_NC_1} for the auxiliary MJLS. Thus, in order to study the convergence of matrices $P_t^{n}$, $n \\in \\mathcal{\\overline N}$, we can focus on the convergence of matrices $P^{\\diamond}_t(n)$, $n \\in \\mathcal{\\overline N}$.\n\nTo investigate the convergence of $P^{\\diamond}_t(n)$, $n \\in \\mathcal{\\overline N}$, we can follow a methodology similar to the one used to prove Lemma \\ref{lm:pc_2C}. We first use part 1 of Lemma \\ref{lm:ss} to show that the auxiliary MJLS described by \\eqref{A_mj}-\\eqref{transition_prob} is SS if and only if $p^n < p_c^n$ for all $n \\in \\mathcal{N}$, where $p_c^n$ is the critical threshold given by \\eqref{eq:pc}. \n\nSecond, we can use part 2 of Lemma \\ref{lm:ss} to show that the auxiliary MJLS described by \\eqref{A_mj}-\\eqref{transition_prob} is SD if and only if $\\mathcal{A}_d$, defined as in \\eqref{eq:bigmatrix3}, is Schur stable. Since the matrix $\\mathcal{A}_{d}$ is upper-triangular, it is Schur stable if and only if there exist matrices $H^{\\diamond}(0)$ and $H^{\\diamond}(n)$ for $n \\in \\mathcal{N}$ such that $\\rho\\big(A_d(0)\\otimes A_d(0)\\big)<1$ and $\\rho\\big(p^n A_d(n)\\otimes A_d(n)\\big)<1$, where $A_d(n)$, $n \\in \\mathcal{N}$, is defined as in \\eqref{A_c_l}. 
The existence of such matrices follows because $(A,Q)$ and $\\big(A^{nn},(Q^{nn})^{1\/2} \\big)$ for all $n \\in \\mathcal{N}$ are detectable by Assumptions \\ref{assum:det_stb} and \\ref{assum:det_stb_2}.\nHence, the MJLS is SD.\n\nIt then follows from Lemma \\ref{lm:MJ_infinite} that the matrices $P^{\\diamond}_{t}(n)$, $n \\in \\overline{\\mathcal{N}}$, converge as $t \\to -\\infty$ to PSD matrices $P^{\\diamond}_{*}(n)$ that satisfy the steady-state version of \\eqref{P_MJ_cmp_NC_0}-\\eqref{P_MJ_cmp_NC_1} (i.e., equations \\eqref{eq:P_finite_NC_fixed}-\\eqref{eq:tildeP_finite_NC_fixed}) if and only if $p^n < p_c^n$ for all $n \\in \\mathcal{N}$. This proves the lemma.\n}\n\n\\section{Proof of Lemma \\ref{lm:Q2_2C}, part 3}\\label{sec:stability_proof}\nLet $\\tilde X_t := X_t - \\hat X_t$ denote the estimation error. 
It suffices to show that $\\hat X_t$ and $\\tilde X_t$ are mean square stable.\n\nThe optimal strategies can be written as \n\\begin{align}\nU_t = K^0_* \\hat X_t + \n\\begin{bmatrix}\n0 \\\\ K^1_* \n\\end{bmatrix}\n\\tilde X_t.\n\\end{align}\n\nAfter some algebra, the closed-loop dynamics of $(\\hat X_t, \\tilde X_t)$ under the optimal strategies can be written as\n\\begin{align}\n\\begin{bmatrix}\n\\hat X_{t+1}\\\\\n\\tilde X_{t+1}\n\\end{bmatrix}\n= \nA_{cl}(\\Gamma^1_{t+1})\n\\begin{bmatrix}\n\\hat X_{t} \\\\\n\\tilde X_{t} \n\\end{bmatrix}\n+ G_{cl}(\\Gamma^1_{t+1}) W_t,\n\\end{align}\nwhere\n\\begin{small}\n\\begin{align}\nA_{cl}(\\Gamma^1_{t+1}) = &\n\\begin{bmatrix}\n(A + B K^0_* ) & \\Gamma^1_{t+1}(A + B^{11}K^1_*) \\\\\n0 & (1-\\Gamma^1_{t+1})(A + B^{11}K^1_*) \n\\end{bmatrix},\n\\\\\nG_{cl}(\\Gamma^1_{t+1}) = &\n\\begin{bmatrix}\n\\Gamma^1_{t+1}\n\\\\\n(1-\\Gamma^1_{t+1})\n\\end{bmatrix},\n\\end{align}\n\\end{small}\nand $\\hat X_0 =0, \\tilde X_0=0$.\n\nIf $p^1 = 0$ or $1$, the stability result follows from standard linear system theory arguments.\nIf $0< p^1 < 1$, the closed-loop system is a MJLS with an i.i.d. switching process\\footnote{Note that this MJLS is not the same as the auxiliary MJLS constructed in Section \\ref{sec:Q1_2C}.}.\nFrom \\cite[Theorem 3.33]{costa2006discrete}, the closed-loop system is mean square stable if the corresponding noiseless system (i.e., with $W_t = 0$) is mean square stable.\nBecause $\\Gamma_{t+1}^1$ is an i.i.d. 
process, from \\cite[Corollary 2.7]{fang2002stochastic}, the noiseless system is mean-square stable if\nthe spectral radius of the matrix\n\\begin{align}\\label{eq:ms_eq1}\n& p^1 A_{cl}(0) \\otimes A_{cl}(0)+ (1 - p^1) A_{cl}(1) \\otimes A_{cl}(1) =\n\\notag\\\\\n&\n\\begin{tiny}\n\\begin{bmatrix} \nA_s(0) \\otimes \n\\begin{bmatrix}\nA_s(0) & (1-p^1)A_s(1)\\\\\n \\mathbf{0} & p^1 A_s(1)\n\\end{bmatrix}\n& \n(1-p^1) A_s(1) \\otimes \n\\begin{bmatrix}\nA_s(0) & A_s(1)\\\\\n \\mathbf{0} & \\mathbf{0}\n\\end{bmatrix}\n\\\\\n \\mathbf{0}\n&\np^1 A_s(1) \\otimes \n\\begin{bmatrix}\nA_s(0) & \\mathbf{0}\\\\\n \\mathbf{0} & A_s(1)\n\\end{bmatrix}\n\\end{bmatrix}\n\\end{tiny}\n\\end{align}\nis less than one, where $A_s(0) = (A + B K^0_* )$ and $A_s(1) = (A + B^{11}K^1_*)$.\n\n\nNote that the gain matrices $K^0_* ,K^1_* $ are obtained from the CARE in \\eqref{eq:CARE_infinite} for the SD and SS auxiliary MJLS described by \\eqref{eq:MJLS_A}-\\eqref{eq:MJLS_theta}, so the corresponding gains stabilize the auxiliary MJLS \\cite[Corollary A.16]{costa2006discrete}, \\cite[Theorem A.15]{costa2006discrete}. That is, the following matrix\n\\begin{align}\n\\mathcal{A}_s = \\begin{bmatrix}\nA_s(0)\\otimes A_s(0) & (1-p^1)A_s(1)\\otimes A_s(1) \\\\\n \\mathbf{0} & p^1A_s(1)\\otimes A_s(1)\n\\end{bmatrix}\n\\end{align}\nhas a spectral radius less than one (see the proof of Lemma \\ref{lm:pc_2C}). Thus, $\\rho(A_s(0)) < 1$ and $\\rho(A_s(1)) < \\frac{1}{\\sqrt{p^1}}$. Consequently, the spectral radius of the matrix in \\eqref{eq:ms_eq1} is less than $1$. This establishes the mean-square stability of $(\\hat X_t, \\tilde X_t)$.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction}\n\n\n\nThe omnipresence of autonomous driving and intelligent manufacturing systems involves tasks of sampling and remotely estimating fresh status information. 
For example, in autonomous driving systems, status information such as the position and the instant speed of cars keeps changing, and the controller has to estimate the up-to-date status based on samples collected from the surrounding sensors. To ensure efficient control and system safety, it is important to estimate the fresh status information precisely under limited communication resources and random channel conditions. \n\nTo measure the freshness of the status update information, the Age of Information (AoI) metric has been proposed in \\cite{roy_12_aoi}. By definition, AoI captures the difference between the current time and the time-stamp at which the freshest information available at the destination was generated. It is revealed that AoI-minimizing sampling and transmission strategies behave differently from those for utility maximization and delay minimization \\cite{roy_15_isit}. Samples with fresher content should be delivered to the destination in a timely manner \\cite{sun_17_tit}. \n\nWhen the evolution of the dynamic source can be modeled by a random signal process, the mean square estimation error (MSE) based on the available information at the receiver can be used to capture freshness. Sampling to minimize the MSE of the random process in different communication networks is studied in \\cite{hajet_03_infocom,ornee_21_ton,nayyar_13_tac,sun_wiener,tsai_2021_ton}. When the random process can be observed at the sampler, the optimum sampling policy is shown to have a threshold structure, i.e., a new sample should be taken once the difference between the actual signal value and the estimate based on past samples exceeds a certain threshold. The optimum threshold can be computed by iterative thresholding \\cite{chichun-19-isit} or bi-section search \\cite{sun_wiener} if the delay distribution and the statistics of the channel are known in advance. 
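As a toy illustration of the threshold structure just described, the following sketch simulates threshold-based sampling on a discretized Wiener process (a minimal sketch only: the step size, horizon, and threshold are arbitrary placeholder values, and transmission delay is ignored):

```python
import numpy as np

def threshold_sampling_times(tau, dt=1e-3, horizon=10.0, seed=0):
    """Take a new sample whenever |X_t - X_{S_k}| reaches the threshold tau
    on a discretized Wiener process (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    # Discretized Wiener increments: each step has variance dt.
    x = np.cumsum(rng.normal(0.0, np.sqrt(dt), int(horizon / dt)))
    samples, last = [], 0.0
    for i, xi in enumerate(x):
        if abs(xi - last) >= tau:  # estimation error exceeds the threshold
            samples.append((i + 1) * dt)
            last = xi
    return samples
```

Since the mean time for a Wiener process to move a distance $\tau$ is $\tau^2$, smaller thresholds produce proportionally more samples in this sketch.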
\n\nWhen the statistics of the communication channel is unknown, the problem of sampling and transmissions for data freshness optimization can be formulated into a sequential decision making problem \\cite{aoibandit,atay2020aging, banerjee_adversarial_aoi,tripathi2021online,li2021efficient}. By using the AoI as the freshness metric, \\cite{aoibandit,atay2020aging,banerjee_adversarial_aoi} design online link rate selection algorithms based on stochastic bandits. When the channels are time-varying and the transmitter has an average power constraint, \\cite{ceran_19_infocomwks,ceran_21_jsac,kam_rl,aba_drl_aoi,aylin_rl} employ reinforcement learning algorithms to minimize the average AoI under unknown channel statistics. Notice that in applications such as the remote estimation, a linear AoI cannot fully capture the data freshness. To solve this problem, Tripathi \\emph{et al. } model the information freshness to be a time-varying function of the AoI \\cite{tripathi2021online}, and a robust online learning algorithm is proposed. The above research tackles with unknown packet loss rate or utility functions, the problem of designing online algorithms under unknown delay statistics are not well studied. The iterative thresholding algorithm proposed in \\cite{chichun-19-isit} can be applied in the online setting when the delay statistics is unknown, whereas the convergence rate and the optimality of the algorithm are not well understood. \n\nIn this paper, we consider an online sampling problem, where a sensor transmits status updates of the Wiener source to a destination through a channel with random delay. Our goal is to design a sampling policy that minimizes the estimation error when the delay distribution is unknown a priori. 
The main contributions of this paper are as follows:\n\\begin{itemize}\n\t\\item The design of the MSE minimum sampling policy is reformulated as an optimal stopping problem.\n\tBy analyzing the sufficient conditions for the optimum threshold, we propose an online sampling policy that learns the optimum threshold adaptively through stochastic approximation. Compared with \\cite{chichun-19-isit,Tang2205:Sending,tang2022age}, the operation of the proposed algorithm does not require prior knowledge of an upper bound on the optimum threshold. \n\t\\item We prove that the time averaged MSE of the proposed algorithm converges almost surely to the minimum MSE if the fourth order moment of the transmission delay is bounded (Theorem \\ref{thm:dep-converge}). In addition, it is shown that the MSE regret, i.e., the sub-optimality gap between the expected cumulative MSE of the proposed algorithm and the optimum offline policy, grows at a rate of $\\mathcal{O}(\\ln k)$, where $k$ is the number of samples (Corollary \\ref{thm:mse-rate}). The perturbed ordinary differential equation (ODE) method is a popular tool for establishing the convergence rate of stochastic approximation algorithms \\cite{Kushner2003}. However, this tool requires either that the threshold being learned lie in a bounded closed set, or that the second moments of the update directions be bounded. Because our algorithm does not require an upper bound on the optimum threshold, and the essential supremum of the transmission delay could be unbounded, we need to develop\n\ta new method for convergence rate analysis, which is based on the Lyapunov drift method for heavy-traffic analysis. \n\t\\item Further, by using the classic Le Cam two-point method, we show that for any causal algorithm that makes sampling decisions based on historical information, under the worst case delay distribution, the MSE regret is lower bounded by $\\Omega(\\ln k)$ (Theorem \\ref{thm:converse}). 
By combining Theorem \\ref{thm:mse-rate} and Theorem \\ref{thm:converse}, we obtain that the proposed online sampling algorithm achieves the minimax order-optimal regret.\n\t\\item We validate the performance of the proposed algorithm via numerical simulations. In contrast to \\cite{chichun-19-isit}, the proposed algorithm can meet an average sampling frequency constraint. \n\\end{itemize}\n\n\n\n\n\\section{System Model and Problem Formulation}\n\n\\subsection{System Model}\nAs is depicted in Fig.~\\ref{fig:model}, we revisit the status update system in \\cite{sun_17_tit,arafa_model,sun_wiener}, where a sensor takes samples from a Wiener process and transmits the samples to a receiver through a network interface queue. The network interface serves the update packets on the First-Come-First-Serve (FCFS) basis. An ACK is sent back to the sensor once an update packet is cleared at the interface. We assume that the transmission duration after passing the network interface is negligible. \n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=.4\\textwidth]{modelicc}\n\t\\caption{System model. }\n\t\\label{fig:model}\n\\end{figure} \n\nLet $X_t\\in\\mathbb{R}$ denote the value of the Wiener process at time $t\\in\\mathbb{R}^+$. The sampling time-stamp of the $k$-th sample, denoted by $S_k$, is determined by the sensor at will. Based on the FCFS principle, the network interface will start serving the $k$-th packet after the $(k-1)$-th packet is cleared at the network interface and has arrived at the receiver. We assume that the service times $D_k$ are independent and identically distributed (i.i.d.) with a probability distribution $\\mathbb{P}_D$. The reception time of the $k$-th packet, denoted by $R_k$, satisfies the following recursive formula: $R_k=\\max\\{S_k, R_{k-1}\\}+D_k$, where we define $R_0=0$ for simplicity. We assume the average transmission delay $\\overline{D}:=\\mathbb{E}_{D\\sim\\mathbb{P}_D}[D]$ is lower bounded by $\\overline{D}_{\\text{lb}}>0$. 
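The FCFS reception-time recursion above can be sketched in a few lines (a minimal illustration; the sampling times and service times below are arbitrary placeholders):

```python
def reception_times(S, D):
    """Compute R_k = max(S_k, R_{k-1}) + D_k with R_0 = 0, i.e., service of
    packet k starts only after packet k-1 is cleared (FCFS)."""
    R, prev = [], 0.0
    for s, d in zip(S, D):
        prev = max(s, prev) + d
        R.append(prev)
    return R

# The second packet queues behind the first; the third finds an idle server.
print(reception_times([0.0, 0.5, 3.0], [1.0, 1.0, 1.0]))  # [1.0, 2.0, 4.0]
```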
\n\\subsection{MMSE Estimation}\n\nLet $i(t):=\\max\\{k\\in\\mathbb{N}|R_k\\leq t\\}$ be the index of the latest sample received by the destination at time $t$. The information available at the receiver at time $t$ can be summarized as follows: (i) the sampling time-stamps, transmission delays and values of previous samples $\\mathcal{M}_t:=\\{(S_j, D_j, X_{S_j})\\}_{j=1}^{i(t)}$; (ii) the fact that no packet was received during $(R_{i(t)}, t]$. Similar to \\cite{sun_17_tit,est_ifac}, we assume that the receiver estimates $X_t$ only based on $\\mathcal{M}_t$ and neglects the second part of the information. The minimum mean-square error (MMSE) estimator \\cite{poor2013introduction} in this case is:\n\\begin{equation}\n\\hat{X}_t=\\mathbb{E}[X_t|\\mathcal{M}_t]=X_{S_{i(t)}}. \\label{eq:MMSEest}\n\\end{equation}\n\nWe use a sequence of sampling time instants $\\pi\\triangleq\\{S_k\\}_{k=1}^{\\infty}$ to represent a sampling policy. The expected time-averaged mean square error (MSE) under $\\pi$ is denoted by $\\overline{\\mathcal{E}}_\\pi$, i.e., \n\\begin{equation}\n\\overline{\\mathcal{E}}_\\pi\\triangleq\\limsup_{T\\rightarrow\\infty}\\mathbb{E}\\left[\\frac{1}{T}\\int_{t=0}^T\\left(X_t-X_{S_{i(t)}}\\right)^2\\mathsf{d}t\\right].\n\\end{equation}\n\\subsection{Problem Formulation}\n\nOur goal in this work is to design a sampling policy that minimizes the MSE of the estimator when the delay distribution $\\mathbb{P}_D$ is unknown. Specifically, we focus on the set of causal policies denoted by $\\Pi$, where each policy $\\pi\\in\\Pi$ selects the sampling time $S_k$ of the $k$-th sample based on the transmission delays $\\{D_{k'}\\}_{k'< k}$ and the Wiener process evolution $\\{X_t\\}_{t\\leq S_k}$ from the past. The transmission delay and the evolution of the Wiener process in the future cannot be used to decide the sampling time. Due to the energy constraint, we require that the sampling frequency remain below a certain threshold. 
The optimal sampling problem is formulated as follows:\n\\begin{pb}[MMSE minimization]\\label{pb:mse}\n\t\\begin{subequations}\n\t\t\\begin{align}\n\t\t\\mathsf{mse}_{\\mathsf{opt}}\\triangleq&\\inf\\limits_{\\pi\\in\\Pi}\\mathop{\\limsup}\\limits_{T\\rightarrow\\infty}\\mathbb{E}\\left[\\frac{1}{T}\\int_{t=0}^T\\left(\\hat{X}_t-X_t\\right)^2\\mathrm{d}t\\right],\\label{eq:primalobj}\\\\\n\t\t&\\hspace{0.5cm}\\text{s.t.}\\hspace{0.2cm}\\mathop{\\limsup}\\limits_{T\\rightarrow\\infty}\\mathbb{E}\\left[\\frac{i(T)}{T}\\right]\\leq f_{\\mathsf{max}}.\n\t\t\\end{align}\n\t\\end{subequations}\n\\end{pb}\t\n\n\\section{Problem Solution}\\label{sec:dep}\nIn this section,\nthe MSE minimization problem (i.e., Problem~\\ref{pb:mse}) is reformulated into an optimal stopping problem. Let $\\pi^\\star$ be an optimum policy whose average MSE achieves $\\mathsf{mse}_{\\mathsf{opt}}$. Sufficient conditions for $\\pi^\\star$ are provided in Subsection~\\ref{sec:dep-off}. The online sampling algorithm $\\pi_{\\mathsf{online}}$ is provided in Subsection~\\ref{sec:dep-online}, and Subsection~\\ref{sec:dep-analysis} characterizes the behavior of the online sampling policy. \n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=.33\\textwidth]{estreformulate}\n\t\\caption{Illustration of the Wiener process and the estimation error. The sampling and reception time-stamps of the $k$-th sample are denoted by $S_k$ and $R_k$, respectively. For the MMSE estimator, $\\hat{X}_t=X_{S_k}, \\forall t\\in [R_k, R_{k+1})$. }\n\t\\label{fig:errevolve}\n\\end{figure}\n\n\n\\subsection{Markov Decision Reformulation of Problem~\\ref{pb:mse}}\\label{sec:dep-rr}\nAccording to \\cite[Theorem 1]{sun_wiener}, policy $\\pi^\\star$ should not take a new sample before the previous sample is delivered to the destination. As is depicted in Fig.~\\ref{fig:errevolve}, the waiting time between the delivery time of the $k$-th sample and the sampling time of the $(k+1)$-th sample is denoted by $W_k$. 
Define frame $k$ as the time interval between the sampling time-stamps of the $k$-th and the $(k+1)$-th samples. The following lemma enables us to reformulate Problem~\\ref{pb:mse} into a Markov Decision Process. \n\\begin{lemma}\\label{coro:sig-dep-reformulate}\n\tLet $\\mathcal{I}_k:=(D_k, (X_{S_k+t}-X_{S_k})_{t\\geq 0})$ denote the recent information of the sampler in frame $k$. The set of sampling policies that determine the waiting time $W_k$ only based on the recent information $\\mathcal{I}_k$ is denoted by $\\Pi_{\\mathsf{recent}}$. Since for each frame $k$, the difference $X_{S_k+t}-X_{S_k}$ evolves as a Wiener process that is independent of the past $\\{X_{S_{k'}+t}-X_{S_{k'}}\\}_{k'<k}$, there is no loss of optimality in restricting attention to policies in $\\Pi_{\\mathsf{recent}}$.\n\\end{lemma}\n\nSince $\\gamma^\\star>0$, for any policy $\\pi\\in\\Pi_{\\mathsf{cons}}$, inequality \\eqref{eq:dep-inequal-equiv} can be rewritten as:\n\\begin{align}\n\\theta_{\\pi}(\\gamma^\\star):=&\\liminf_{K\\rightarrow\\infty}\\left(\\frac{1}{K}\\sum_{k=1}^K\\mathbb{E}\\left[\\frac{1}{6}(X_{S_{k+1}}-X_{S_k})^4\\right]\\right.\\nonumber\\\\\n&\\left.-\\gamma^\\star\\cdot\\frac{1}{K}\\sum_{k=1}^K\\mathbb{E}[D_k+W_k]\\right)\\geq 0. \\label{eq:dep-inequal}\n\\end{align}\n\nThe quantity $\\theta_\\pi(\\gamma^\\star)$ in \\eqref{eq:dep-inequal} attains its minimum value $0$ if and only if policy $\\pi$ is optimum. 
Therefore, if the ratio $\\gamma^\\star$ is known, an optimum policy $\\pi^\\star$ can be obtained by solving the following functional optimization: \n\\begin{pb}[Functional Optimization Problem]\\label{pb:sig-frac}\n\t\\begin{subequations}\\begin{align}\n\t\t\\mathsf{mse}_{\\mathsf{opt}}=&\\inf_{\\pi\\in\\Pi}\\limsup_{K\\rightarrow\\infty}\\left(\\frac{1}{K}\\sum_{k=1}^K\\mathbb{E}\\left[\\frac{1}{6}\\left(X_{S_{k+1}}-X_{S_k}\\right)^4\\right]\\right.\\nonumber\\\\\n\t\t&\\hspace{1cm}\\left.-\\gamma^\\star\\frac{1}{K}\\sum_{k=1}^K\\mathbb{E}\\left[\\left(D_k+W_k\\right)\\right]\\right),\\label{eq:sig-frac-obj}\\\\\n\t\t&\\hspace{0.5cm}\\text{s.t.}\\hspace{0.2cm}\\liminf_{K\\rightarrow\\infty}\\mathbb{E}\\left[\\frac{1}{K}\\sum_{k=1}^K\\left(D_k+W_k\\right)\\right]\\geq\\frac{1}{f_{\\mathsf{max}}}.\\label{eq:cons} \n\t\t\\end{align}\n\t\\end{subequations}\n\\end{pb}\n\nTo solve Problem~\\ref{pb:sig-frac}, we can dualize the constraint \\eqref{eq:cons} with a dual variable $\\nu$ and obtain the Lagrange function $\\mathcal{L}(\\pi, \\gamma, \\nu)$:\n\\begin{align}\n&\\mathcal{L}(\\pi, \\gamma, \\nu)\\triangleq\\limsup_{K\\rightarrow\\infty}\\left(\\frac{1}{K}\\sum_{k=1}^K\\mathbb{E}\\left[\\frac{1}{6}(X_{S_{k+1}}-X_{S_k})^4\\right]\\right.\\nonumber\\\\\n&\\hspace{2cm}\\left.-(\\gamma+\\nu)\\frac{1}{K}\\sum_{k=1}^K\\mathbb{E}\\left[\\left(S_{k+1}-S_k\\right)\\right]\\right)+\\nu\\frac{1}{f_{\\mathsf{max}}}.\\label{eq:lagrange-dep}\n\\end{align}\n\nWe say that a stationary policy $\\pi$ has a threshold structure if the waiting time $W_k$ is determined by:\n\\begin{equation}W_k=\\inf\\{w\\geq 0\\big||X_{S_k+D_k+w}-X_{S_k}|\\geq \\tau\\}.\\label{eq:opt-dep}\n\\end{equation}\n\nLet $Z_t$ be a Wiener process starting from $t=0$. Let $D$ be the transmission delay following distribution $\\mathbb{P}_D$, and denote by $Z_D$ the value of the Wiener process at time $D$. 
Using the threshold policy \\eqref{eq:opt-dep}, the expected frame length $L_k:=D_k+W_k$ and the quantity $\\frac{1}{6}(X_{S_{k+1}}-X_{S_k})^4$ have the following properties:\n\\begin{lemma}\\cite[Corollary 1 Restated]{sun_wiener}\\label{lemma:cond-l}\n\t\\begin{subequations}\n\t\t\\begin{align}\n\t\t&\\mathbb{E}[L_k]=\\mathbb{E}\\left[\\max\\{\\tau^2, Z_D^2\\}\\right],\\\\\n\t\t&\\mathbb{E}\\left[\\frac{1}{6}(X_{S_{k+1}}-X_{S_k})^4\\right]=\\frac{1}{6}\\mathbb{E}\\left[\\max\\{\\tau^2, Z_D^2\\}^2\\right]. \n\t\t\\end{align}\n\t\\end{subequations}\n\\end{lemma}\n\nAs is revealed by \\cite{sun_wiener}, the optimum policy $\\pi^\\star$ has a threshold structure as in equation \\eqref{eq:opt-dep}. To design an offline algorithm that can learn the updating threshold $\\tau^\\star$ of $\\pi^\\star$, \nwe next derive the necessary conditions that $\\tau^\\star$ should satisfy.\nWith a slight abuse of notation, let $\\mathcal{L}(\\tau, \\gamma, \\nu)$ denote the expected value of the Lagrange function $\\mathcal{L}(\\pi, \\gamma, \\nu)$ when a stationary policy $\\pi$ with threshold $\\tau$ is used. 
According to Lemma~\\ref{lemma:cond-l}, $\\mathcal{L}(\\tau, \\gamma, \\nu)$ can be computed as follows:\n\\begin{align}\n\\mathcal{L}(\\tau, \\gamma, \\nu)=&\\mathbb{E}\\left[\\frac{1}{6}\\max\\{\\tau^2, Z_D^2 \\}^2\\right]-(\\gamma+\\nu)\\mathbb{E}[\\max\\{\\tau^2, Z_D^2\\}]\\nonumber\\\\\n&+\\nu\\frac{1}{f_{\\mathsf{max}}}.\n\\end{align}\n\n\\hspace{-10pt}\\emph{Condition 1: }\\cite[Theorem 5 Restated]{sun_wiener} Let $\\tau(\\gamma, \\nu)$ be the optimum sampling threshold that minimizes function $\\mathcal{L}(\\tau, \\gamma, \\nu)$, which can be computed as follows:\n\\begin{equation}\n\\tau(\\gamma, \\nu):=\\arg\\inf_{\\tau\\geq 0}\\mathcal{L}(\\tau, \\gamma ,\\nu)=\\sqrt{3(\\gamma+\\nu)}.\\label{eq:tauopt}\n\\end{equation}\n\n\\hspace{-10pt}\\emph{Condition 2: }\\cite[Eq.~(123, 125)]{sun_wiener}\\begin{equation}\n\\nu^\\star\\left(\\mathbb{E}\\left[\\max\\{3(\\gamma^\\star+\\nu^\\star), Z_D^2\\}\\right]-\\frac{1}{f_{\\mathsf{max}}}\\right)=0, \\nu^\\star\\geq 0. \\label{eq:cs}\n\\end{equation}\n\nRecall that for any policy $\\pi\\in\\Pi_{\\text{cons}}$ with threshold $\\tau$, inequality \\eqref{eq:dep-inequal-equiv} implies\n\\begin{equation}\n\\theta_\\pi(\\gamma^\\star)=\\frac{1}{6}\\mathbb{E}\\left[\\max\\{\\tau^2, Z_D^2\\}^2\\right]-\\gamma^\\star\\mathbb{E}\\left[\\max\\{\\tau^2, Z_D^2\\}\\right]\\geq 0. \\label{eq:theta}\n\\end{equation}\nAccording to \\eqref{eq:tauopt}, inequality \\eqref{eq:theta} holds with equality if and only if $\\pi^\\star$ with threshold $\\tau^\\star=\\sqrt{3(\\gamma^\\star+\\nu^\\star)}$ is used. 
Adding the complementary slackness (CS) condition \\eqref{eq:cs} on both sides of \\eqref{eq:theta}, the necessary condition for $\\gamma^\\star$ then becomes:\n\\begin{equation}\n\\overline{g}_{\\nu^\\star}(\\gamma^\\star)=\\theta_{\\pi^\\star}(\\gamma^\\star)=0, \\label{eq:equation-offline}\n\\end{equation}\nwhere function $\\overline{g}_\\nu(\\gamma):=\\mathbb{E}[g_\\nu(\\gamma;Z_D)]$ is the expectation of the function $g_{\\nu}(\\gamma;Z_D)$ defined as follows: \n\\begin{equation}\ng_\\nu(\\gamma;Z_D):=\\frac{1}{6}\\max\\{3(\\gamma+\\nu), Z_D^2\\}^2-\\gamma\\max\\{3(\\gamma+\\nu), Z_D^2\\}.\\label{eq:gdef}\n\\end{equation}\n\nAs is shown by \\cite[Theorem 7]{sun_wiener}, the duality gap between $\\overline{\\mathcal{E}}_{\\pi^\\star}$ and $\\sup_{\\nu\\geq 0}\\inf_{\\pi}\\mathcal{L}(\\pi, \\gamma^\\star, \\nu)$ is zero, and \\eqref{eq:equation-offline} becomes a necessary and sufficient condition. \n\n\\subsection{An Online Algorithm $\\pi_{\\mathsf{online}}$}\\label{sec:dep-online}\n\nWhen $\\mathbb{P}_D$ is unknown but $\\nu^\\star$ is known, we can approximate $\\gamma^\\star$ by solving equation \\eqref{eq:equation-offline} through stochastic approximation \\cite{neely2021fast,Kushner2003,robbins_monro}. Notice that the role of $\\nu^\\star$ is to satisfy the sampling frequency constraint. To achieve this goal, we approximate $\\nu^\\star$ by maintaining a sequence $\\{U_k\\}$ that records the sampling constraint violations up to frame $k$. \n\nThe algorithm is initialized by selecting $\\gamma_1=0$ and $U_1=0$. In each frame $k$, the sampling and updating rules are as follows:\n\n\\hspace{-10pt}\\underline{1. Sampling: }We treat $\\nu_k:=\\frac{1}{V}U_k^+$ as the dual optimizer $\\nu$, where $V>0$ is a fixed constant. 
The waiting time $W_k$ is selected to minimize the Lagrange function \\eqref{eq:lagrange-dep}; according to the statement after equation \\eqref{eq:theta}, it is given by:\n\\begin{equation}\nW_k=\\inf\\{w\\geq 0|\\left|X_{S_k+D_k+w}-X_{S_k}\\right|\\geq\\sqrt{3\\left(\\gamma_k+\\nu_k\\right)}\\}.\\label{eq:o-dep-wait}\n\\end{equation}\n\n\\hspace{-10pt}\\underline{2. Update $\\gamma_k$: }To search for the root $\\gamma>0$ of equation $\\overline{g}_{\\nu_k}(\\gamma)=0$, we update $\\gamma_{k}$ through the Robbins-Monro algorithm \\cite{robbins_monro}. In each frame $k$, we are given an i.i.d.\\ sample $\\delta X_k=X_{S_k+D_k}-X_{S_k}\\sim Z_D$, and the Robbins-Monro algorithm operates by:\n\\begin{align}\n&\\gamma_{k+1}=\\left(\\gamma_{k}+\\eta_kY_k\\right)^+,\\label{eq:robbins-monro-gamma}\n\\end{align}\nwhere $Y_k=g_{\\nu_k}(\\gamma_k;\\delta X_k)$ and function $g_{\\nu}(\\cdot)$ is defined in \\eqref{eq:gdef}. Recall that $\\overline{D}_{\\text{lb}}$ is a non-zero lower bound on the average delay; the step size $\\{\\eta_k\\}$ is selected as:\n\\begin{equation}\n\\eta_k=\\frac{1}{2\\overline{D}_{\\text{lb}}}k^{-\\alpha},\\alpha\\in(0.5, 1]. \\label{eq:stepsize}\n\\end{equation}\n\n\n\\hspace{-10pt}\\underline{3. Update $U_k$: }To guarantee that the sampling frequency constraint is not violated, we update the violation $U_k$ up to the end of frame $k$ by:\n\\begin{equation}\nU_{k+1}=U_k+\\left(\\frac{1}{f_{\\mathsf{max}}}-(D_k+W_k)\\right). \\label{eq:debt-evolve}\n\\end{equation}\n\n\\subsection{Theoretical Analysis}\\label{sec:dep-analysis}\nWe analyze the convergence and optimality of algorithm $\\pi_{\\mathsf{online}}$. 
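The update steps above can be exercised in a minimal simulation. This is a sketch, not the full policy: it takes $f_{\mathsf{max}}=\infty$ (so $\nu_k=0$ and step 3 is inactive), assumes Uniform$[0,1]$ transmission delays with $\overline{D}_{\text{lb}}=0.4$, and adds an offline Monte Carlo bisection baseline purely for comparison.

```python
import numpy as np

rng = np.random.default_rng(1)

D_LB = 0.4    # assumed non-zero lower bound on the average delay (E[D] = 1/2 here)
ALPHA = 1.0   # step-size exponent in (0.5, 1]

def g(gamma, z2):
    # g_nu(gamma; z) with nu = 0 (no sampling frequency constraint)
    m = np.maximum(3.0 * gamma, z2)
    return m ** 2 / 6.0 - gamma * m

# Online: projected Robbins-Monro iteration, one sample per frame.
gamma_online = 0.0
for k in range(1, 200_001):
    d = rng.uniform()                             # delay D_k (assumed Uniform[0, 1])
    z2 = d * rng.standard_normal() ** 2           # (delta X_k)^2, delta X_k ~ N(0, D_k)
    eta = 1.0 / (2.0 * D_LB * k ** ALPHA)         # step size
    gamma_online = max(gamma_online + eta * g(gamma_online, z2), 0.0)

# Offline baseline: bisection on a Monte Carlo estimate of g_bar (decreasing in gamma).
d_all = rng.uniform(0.0, 1.0, 400_000)
z2_all = d_all * rng.standard_normal(400_000) ** 2
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if np.mean(g(mid, z2_all)) > 0.0 else (lo, mid)
gamma_offline = 0.5 * (lo + hi)

print(gamma_online, gamma_offline)  # both approximate gamma_star
```

The online iterate tracks the offline root without ever evaluating the expectation, which is the point of the stochastic approximation step.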
We assume there is no sampling frequency constraint, i.e., $f_{\\mathsf{max}}=\\infty$, and make the following assumption on the distribution $\\mathbb{P}_D$:\n\\begin{assu}\n\tThe fourth-order moment of the transmission delay is upper bounded by $B$, i.e., \n\t\\[\\mathbb{E}[D^4]\\leq B<\\infty.\\]\n\\end{assu}\n\nThe convergence behavior of $\\gamma_k$ and the resulting MSE performance are characterized in the following theorems:\n\\begin{theorem}\\label{thm:dep-converge}\n\tThe proposed algorithm learns the optimum parameter $\\gamma^\\star$ almost surely, i.e., \n\t\\begin{equation}\n\t\\lim_{k\\rightarrow\\infty}\\gamma_k=\\gamma^\\star, \\hspace{0.3cm}\\text{w.p.1}.\\label{eq:gamma-as}\n\t\\end{equation}\n\\end{theorem}\nThe proof of Theorem \\ref{thm:dep-converge} is based on the ODE method in \\cite[Chapter 5]{Kushner2003} and is provided in Appendix~\\ref{pf:dep-converge}. \n\n\\begin{theorem}\\label{thm:rate-converge}\n\tThe second moment of $(\\gamma_k-\\gamma^\\star)$ satisfies:\n\t\\begin{equation} \n\t\\sup_k\\mathbb{E}\\left[\\frac{|\\gamma_k-\\gamma^\\star|^2}{\\eta_k}\\right]<\\infty. \n\t\\end{equation}\n\tSpecifically, if $\\alpha=1$ and $\\eta_k=\\frac{1}{2\\overline{D}_{{\\rm lb}}k}$, then the mean square error decays with rate $\\mathbb{E}[(\\gamma_k-\\gamma^\\star)^2]=\\mathcal{O}(1\/k)$. \n\\end{theorem}\n\nOne challenge in the proof of Theorem \\ref{thm:rate-converge} is that both $\\gamma_k$ and the second moment of $Y_k$ are unbounded. We notice that $Y_k$ can become very large when $\\gamma_k$ is much larger than the true value $\\gamma^\\star$, but the projection $(\\gamma_k+\\eta_k Y_k)^+$ onto the non-negative half-line prevents the actual update $|(\\gamma_k+\\eta_k Y_k)^+-\\gamma_k|$ from becoming too large. 
Based on this observation, we adopt a technique from heavy-traffic analysis: introducing the unused rate $\\chi_k:=(-(\\gamma_k+\\eta_kY_k))^+$, we prove that the variance of the actual update $(\\eta_kY_k+\\chi_k)$ is finite. Detailed proofs are provided in Section~\\ref{pf:rate-converge}. \n\n\n\\begin{theorem}\\label{thm:mse-as}\n\tThe average MSE under policy $\\pi_{\\mathsf{online}}$ converges to $\\overline{\\mathcal{E}}_{\\pi^\\star}$ almost surely, i.e.,\n\t\\begin{equation}\n\t\\limsup_{k\\rightarrow\\infty}\\frac{\\int_{t=0}^{S_{k+1}}(X_t-\\hat{X}_t)^2{\\rm{d}}t}{S_{k+1}}=\\overline{\\mathcal{E}}_{\\pi^\\star}, \\hspace{0.3cm}\\text{w.p.1}.\\label{eq:thm-1-}\n\t\\end{equation} \n\\end{theorem}\n\nWith the mean-square convergence of $\\gamma_k$, the proof of Theorem~\\ref{thm:mse-as} is a direct application of the perturbed ODE method \\cite{Kushner2003} and is provided in Appendix~\\ref{pf:mse-rate}.\n\nUsing Theorems~\\ref{thm:rate-converge} and \\ref{thm:mse-as}, we can upper bound the growth rate of the cumulative MSE optimality gap in the following corollary:\n\\begin{corollary}\\label{thm:mse-rate}\n\tIf $\\alpha=1$, then the growth rate of the cumulative MSE optimality gap up to the $k$-th sample can be bounded as follows:\n\t\\begin{equation}\n\t\\left(\\mathbb{E}\\left[\\int_0^{S_{k+1}}(X_t-\\hat{X}_t)^2{\\rm d}t\\right]-\\overline{\\mathcal{E}}_{\\pi^\\star}\\mathbb{E}[S_{k+1}]\\right)=\\mathcal{O}\\left(\\ln k\\right). \n\t\\end{equation}\n\\end{corollary}\nThe proof of Corollary~\\ref{thm:mse-rate} is provided in Appendix~\\ref{pf:sig-dep-reformulate}. \n\n\n\\begin{theorem}\\label{thm:converse}\n\tFor any distribution $\\mathbb{P}$, let $\\pi^\\star(\\mathbb{P})$ denote the MSE-minimizing sampling policy when the delay $D\\sim\\mathbb{P}$. The corresponding parameter, obtained by solving equation \\eqref{eq:equation-offline}, is denoted by $\\gamma^\\star(\\mathbb{P})$. 
After $k$ samples are taken, the minimax error of estimating $\\gamma^\\star(\\mathbb{P})$ is lower bounded by:\n\t\\begin{equation}\n\t\\inf_{\\hat{\\gamma}}\\sup_{\\mathbb{P}}\\mathbb{E}\\left[(\\hat{\\gamma}-\\gamma^\\star(\\mathbb{P}))^2\\right]=\\Omega(1\/k). \\label{eq:converse-est}\n\t\\end{equation}\n\t\n\t\n\tLet $p_w(\\mathbb{P}):={\\rm Pr}(Z_D^2\\leq 3\\gamma^\\star(\\mathbb{P})|D\\sim\\mathbb{P})$ denote the probability of waiting under policy $\\pi^\\star(\\mathbb{P})$ and let $\\mathcal{P}_u(\\mu):=\\{\\mathbb{P}|p_w(\\mathbb{P})\\geq\\mu \\}$. Specifically, let $p_{\\rm w, \\rm{uni}}^\\star:={\\rm{Pr}}(Z_D^2\\leq 3\\gamma^\\star_{{\\rm uni}}|D\\sim{\\rm Uni}([0, 1]))$. Let $\\Pi_h$ denote the set of policies in which the sampling decision $S_k$ is made based on the historical information $\\mathcal{H}_{k-1}$. We have the following result for $\\mu\\leq p_{\\rm w, \\rm{uni}}^\\star\/2$:\n\t\\begin{align}\n\t&\\inf_{\\pi\\in\\Pi_h}\\sup_{\\mathbb{P}\\in\\mathcal{P}_u(\\mu)}\\left(\\mathbb{E}\\left[\\int_0^{S_{k+1}}(X_t-\\hat{X}_t)^2{\\rm d}t\\right]-\\overline{\\mathcal{E}}_{\\pi^\\star(\\mathbb{P})}\\mathbb{E}[S_{k+1}]\\right)\\nonumber\\\\\n\t&\\hspace{4.5cm}\\geq\\frac{1}{2}\\mu\\cdot\\Omega\\left(\\ln k\\right). \\label{eq:regconverse}\n\t\\end{align}\n\\end{theorem}\n\nAs the transmission delay distribution $\\mathbb{P}_D$ considered in this paper does not belong to a specific parametric family and can be quite general, obtaining a point-wise converse bound on $\\mathbb{E}[(\\hat{\\gamma}-\\gamma^\\star(\\mathbb{P}))^2]$ for each distribution $\\mathbb{P}$ is impossible. As an alternative, a minimax risk bound on $\\mathbb{E}[(\\hat{\\gamma}-\\gamma^\\star(\\mathbb{P}))^2]$ over a general distribution set $\\mathcal{P}$ can be obtained using Le Cam's two-point method for non-parametric estimation \\cite{nonpara}. 
The core idea is to construct two distributions $\\mathbb{P}_1, \\mathbb{P}_2$ whose $\\ell_1$ distance $|\\mathbb{P}_1^{\\otimes k}-\\mathbb{P}_2^{\\otimes k}|_1$ can be upper bounded by a constant, so that the two distributions are hard to distinguish from $k$ samples, while $(\\gamma^\\star(\\mathbb{P}_1)-\\gamma^\\star(\\mathbb{P}_2))^2=\\Omega(1\/k)$. Such a construction is still challenging because $\\gamma^\\star(\\mathbb{P})$ cannot be obtained in closed form even for the simplest distribution families such as the delta distribution or the exponential distribution. Notice that the estimation error of $\\gamma^\\star$ is closely related to the error of estimating $\\overline{g}_\\nu(\\cdot)$ at a given point. Therefore, the construction of $\\mathbb{P}_1$ and $\\mathbb{P}_2$ used for obtaining converse bounds for H\u00f6lder smooth functions \\cite[Chapter 2]{nonpara} is adopted. The proof of inequality \\eqref{eq:regconverse} is a direct application of the minimax estimation error \\eqref{eq:converse-est}. The detailed proof of Theorem~\\ref{thm:converse} is provided in Section \\ref{pf:converse}. \n\n\n\n\t\\section{Simulation Results}\n\tIn this section, we provide simulation results to verify the theoretical findings and illustrate the performance of our proposed algorithms. We notice that the MSE minimization problem is closely related to the AoI minimization problem, where the AoI at time $t$ is defined as $A(t)=t-S_{i(t)}$. For signal-ignorant sampling policies (i.e., when the sensor cannot always observe the time-varying process), according to the analysis in \\cite[Section IV-B]{sun_17_tit}, policies that minimize the average AoI achieve the minimum MSE. Therefore, we choose both offline and online AoI minimization policies ($\\pi_{\\text{AoI}}^\\star$ from \\cite{sun_17_tit}, $\\pi_{\\text{itr}}$ from \\cite{chichun-19-isit}) for comparison. To show the convergence of the online learning algorithm, we also plot the average MSE performance of the optimum offline algorithm $\\pi^\\star$ from \\cite{sun_wiener}. 
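The delay samples and the corresponding Wiener increments used in such simulations can be generated directly, since $Z_D$ given $D$ is $\mathcal{N}(0, D)$. A minimal sketch with the log-normal delay parameters used in our experiments (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

MU, SIGMA = 0.8, 1.2   # log-normal delay parameters of the experiments
n = 1_000_000
d = rng.lognormal(mean=MU, sigma=SIGMA, size=n)   # transmission delays D ~ P_D
delta_x = np.sqrt(d) * rng.standard_normal(n)     # Wiener increment over D: N(0, D)

# Sanity check: E[(delta X)^2] = E[D] = exp(mu + sigma^2 / 2) ~= 4.57
print((delta_x ** 2).mean(), d.mean(), np.exp(MU + SIGMA ** 2 / 2))
```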
\n\t\n\tThe transmission delay follows the log-normal distribution parameterized by $\\mu$ and $\\sigma$ such that the density function of the probability measure $\\mathbb{P}_D$ is:\n\t\\[p(x):=\\frac{\\mathbb{P}_D(\\text{d}x)}{\\text{d}x}=\\frac{1}{x\\sigma\\sqrt{2\\pi}}\\exp\\left(-\\frac{(\\ln x-\\mu)^2}{2\\sigma^2}\\right). \\]\n\tIn the simulations, we set $\\mu=0.8$ and $\\sigma=1.2$; the expected time-averaged MSE is computed by averaging over 20 runs. Fig.~\\ref{fig:mse-frame} depicts the time-averaged MSE performance up to the $k$-th frame of different sampling policies. The evolution of $\\{\\gamma_k\\}$ and the MSE regret $\\mathbb{E}\\left[\\int_0^{S_{k+1}}(X_t-\\hat{X}_t)^2\\text{d}t\\right]-\\overline{\\mathcal{E}}_{\\pi^\\star}\\mathbb{E}[S_{k+1}]$ are depicted in Fig.~\\ref{fig:gap}. From Fig.~\\ref{fig:mse-frame}, with $5\\times 10^4$ samples, the time-averaged MSE is almost the same as that of the optimum policy. From Fig.~\\ref{fig:gap}, the MSE regret grows approximately logarithmically in the frame index $k$. The asymptotic MSE behavior is consistent with the convergence results in Theorem~\\ref{thm:mse-as} and Corollary~\\ref{thm:mse-rate}. \n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=.5\\textwidth]{frameasympto}\n\t\t\\caption{The time-averaged MSE evolution as a function of frame $k$. }\n\t\t\\label{fig:mse-frame}\n\t\\end{figure}\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=.23\\textwidth]{gamma}\n\t\t\\includegraphics[width=.23\\textwidth]{reg}\n\t\t\\caption{The evolution of the threshold estimate $\\gamma_k$ (left) and the MSE regret $\\Delta_k:=\\mathbb{E}\\left[\\int_0^{S_{k+1}}(X_t-\\hat{X}_t)^2\\text{d}t\\right]-\\overline{\\mathcal{E}}_{\\pi^\\star}\\mathbb{E}[S_{k+1}]$ (right). 
}\n\t\t\\label{fig:gap}\n\t\\end{figure}\n\t\n\tWhen there is a sampling frequency constraint, the average MSE and the average sampling interval achieved by policy $\\pi_{\\text{online}}$ are depicted in Fig.~\\ref{fig:cons} and Fig.~\\ref{fig:inte}, respectively. We set $f_{\\text{max}}=\\frac{1}{10\\overline{D}}$. From these figures, one can observe that the average MSE of $\\pi_{\\text{online}}$ is close to the optimum MSE $\\overline{\\mathcal{E}}_{\\pi^\\star}$ and the sampling frequency constraint can be satisfied. In addition, by choosing a larger $V$, a smaller MSE can be achieved, whereas more iterations are needed to meet the sampling frequency constraint. \n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=.5\\textwidth]{msecons}\n\t\t\\caption{The time-averaged MSE evolution as a function of frame $k$. (Left: $V=10$, Right: $V=1$.) }\n\t\t\\label{fig:cons}\n\t\\end{figure}\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=.5\\textwidth]{freqcons}\n\t\t\\caption{The average sampling interval under different constants $V$. (Left: $V=10$, Right: $V=1$.)}\n\t\t\\label{fig:inte}\n\t\\end{figure}\n\t\\section{Proofs of Main Results}\\label{sec:pf}\n\n\n\n\n\t\\subsection{Notations and Preliminary Lemmas}\n\tIn Table~\\ref{tab:notations}, we summarize the notations used in the following proofs. Throughout the proofs, we use $N_1, N_2, \\cdots$ to denote absolute constants and $C_1(\\cdot), C_2(\\cdot)$ to denote polynomials of finite order. 
For ease of exposition, the specific values and expressions of the constants and functions may vary across different contexts.\n\t\\begin{table}\n\t\t\\caption{Notations}\n\t\t\\label{tab:notations}\n\t\t\\begin{tabular}{cp{7cm}}\n\t\t\t\\toprule\n\t\t\tNotation&Meaning\\\\\n\t\t\t\\midrule\n\t\t\t$Z_t$& a Wiener process starting from time 0\\\\\n\t\t\t$l_\\gamma$ & running time of the stopping rule $\\tau_\\gamma:=\\inf\\{{t\\geq D}||Z_t|\\geq\\sqrt{3\\gamma}\\}$\\\\ \n\t\t\t$\\delta X_k$ & $\\delta X_k:=X_{S_k+D_k}-X_{S_k}$\\\\\n\t\t\t$Q_k$ & $Q_k:=\\frac{1}{6}\\left(X_{S_k+D_k}-X_{S_k}\\right)^4$\\\\\n\t\t\t$L_k$ & $L_k:=S_{k+1}-S_k=D_k+W_k$, length of frame $k$\\\\\n\t\t\t$E_k$ & $E_k:=\\int_{S_k}^{S_{k+1}}(X_t-\\hat{X}_t)^2\\text{d}t$, cumulative estimation error in frame $k$\\\\\n\t\t\t$q(\\gamma)$ & $q(\\gamma):=\\frac{1}{6}\\mathbb{E}\\left[\\max\\{3\\gamma, Z_D^2\\}^2\\right]$, the expectation of $Q_k$ when $\\gamma_k=\\gamma$\\\\\n\t\t\t$l(\\gamma)$ & $l(\\gamma):=\\mathbb{E}[\\max\\{3\\gamma, Z_D^2\\}]$, expected frame length $L_k$ when $\\gamma_k=\\gamma$\\\\\n\t\t\t$\\mathcal{I}_k$ & $(D_k, (X_{t}-X_{S_k})_{S_k\\leq t< S_{k+1}})$, information revealed in frame $k$\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{table}\n\n\t\\subsection{Proof of Theorem~\\ref{thm:rate-converge}}\\label{pf:rate-converge}\n\tThen $\\mathbb{E}_k[\\eta_kY_k]\\geq-(\\gamma_k-\\gamma^\\star)$. 
Inequality \\eqref{eq:stepsize-drift} can be bounded by:\n\t\\begin{align}\n\t&\\mathbb{E}_k[V(\\gamma_{k+1})-V(\\gamma_k)]\\nonumber\\\\\n\t\\overset{(f)}{\\leq}&\\frac{1}{2}(\\gamma_k-\\gamma^\\star)(\\gamma_k-\\gamma^\\star+\\eta_k\\mathbb{E}_k[Y_k])-\\frac{1}{2}(\\gamma_k-\\gamma^\\star)^2+\\frac{1}{2}\\eta_k^2\\text{Var}[Y_k]\\nonumber\\\\\n\t\\leq&\\frac{1}{2}\\eta_k(\\gamma_k-\\gamma^\\star)\\overline{g}_0(\\gamma_k)+\\frac{1}{2}\\eta_k^2\\text{Var}[Y_k]\\nonumber\\\\\n\t\\overset{(g)}{\\leq} &-\\frac{1}{2}\\eta_kl(\\gamma^\\star)(\\gamma_k-\\gamma^\\star)^2+\\frac{1}{2}\\eta_k^2\\text{Var}[Y_k]\\nonumber\\\\\n\t=&-\\eta_kl(\\gamma^\\star) V(\\gamma_k)+\\frac{1}{2}\\eta_k^2\\text{Var}[Y_k],\\label{eq:stepsizez-ub3}\n\t\\end{align}\n\twhere inequality $(f)$ is because $\\left(-\\mathbb{E}_k[\\chi_k]-\\gamma^\\star\\right)^2\\geq(\\gamma^\\star)^2$ and $(\\gamma_k-\\gamma^\\star+\\eta_k\\mathbb{E}_k[Y_k])^2\\leq(\\gamma_k-\\gamma^\\star+\\eta_k\\mathbb{E}_k[Y_k])(\\gamma_k-\\gamma^\\star)$; inequality $(g)$ is due to Lemma~\\ref{lemma:g}-(iii). \n\t\n\tTo proceed to show inequality \\eqref{eq:stepsize-drift} for $\\gamma_k\\geq 3\\gamma^\\star$, we need to upper bound $\\text{Var}[Y_k]$ in inequalities \\eqref{eq:stepsizez-ub2} and \\eqref{eq:stepsizez-ub3}. 
First, we compute the conditional expectation $\\mathbb{E}_k[Y_k]$ as follows:\n\t\\begin{align}\n\t\\mathbb{E}_k[Y_k]=&\\mathbb{E}\\left[\\frac{1}{6}\\max\\{3\\gamma_k, Z_D^2\\}^2-\\gamma_k\\max\\{3\\gamma_k, Z_D^2\\}\\right]\\nonumber\\\\\n\t=&-\\frac{3}{2}\\gamma_k^2+\\mathbb{E}\\left[(\\frac{1}{6}Z_D^4-\\gamma_kZ_D^2+\\frac{3}{2}\\gamma_k^2)\\mathbb{I}_{(Z_D^2\\geq 3\\gamma_k)}\\right]\\nonumber\\\\\n\t=&-\\frac{3}{2}\\gamma_k^2+\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma_k)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma_k)}\\right]\\nonumber\\\\\n\t\\leq&-\\frac{3}{2}\\gamma_k^2+\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2)^2\\right]\\nonumber\\\\\n\t\\leq&-\\frac{3}{2}\\gamma_k^2+\\frac{1}{2}\\mathbb{E}[D^2]\\leq-\\frac{3}{2}\\gamma_k^2+\\frac{1}{2}\\sqrt{B}. \n\t\\end{align}\n\t\n\tGiven historical information $\\mathcal{H}_{k-1}$, the variance of $Y_k$ can be computed by:\n\t\\begin{align}\n\t&\\text{Var}[Y_k|\\mathcal{H}_{k-1}]\\nonumber\\\\\n\t=&\\mathbb{E}_k\\left[(Y_k-\\mathbb{E}_k[Y_k])^2\\right]\\nonumber\\\\\n\t=&\\mathbb{E}_k\\left[\\left(-\\frac{3}{2}\\gamma_k^2-\\mathbb{E}_k[Y_k]\\right)^2\\mathbb{I}_{(Z_D^2\\leq 3\\gamma_k)}\\right]\\nonumber\\\\\n\t&+\\mathbb{E}_k\\left[\\left(\\frac{1}{6}Z_D^4-\\gamma_kZ_D^2+\\frac{3}{2}\\gamma_k^2+(-\\frac{3}{2}\\gamma_k^2-\\mathbb{E}_k[Y_k])\\right)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma_k)}\\right]\\nonumber\\\\\n\t\\overset{(h)}{\\leq} &\\frac{1}{4}B+2\\mathbb{E}_k\\left[\\left(\\frac{1}{6}Z_D^4-\\gamma_kZ_D^2+\\frac{3}{2}\\gamma_k^2\\right)^2\\mathbb{I}_{(Z_D^2> 3\\gamma_k)}\\right]\\nonumber\\\\\n\t&+2\\mathbb{E}_k\\left[\\left(-\\frac{3}{2}\\gamma_k^2-\\mathbb{E}_k[Y_k]\\right)^2\\mathbb{I}_{(Z_D^2> 3\\gamma_k)}\\right]\\nonumber\\\\\n\t\\leq&\\frac{3}{4}B+\\frac{1}{3}\\mathbb{E}_k\\left[(Z_D^2-3\\gamma_k)^4\\mathbb{I}_{(Z_D^2\\geq 3\\gamma_k)}\\right]\\nonumber\\\\\n\t\\leq&\\frac{3}{4}B+\\frac{1}{3}\\mathbb{E}[Z_D^8]\\leq(35+\\frac{3}{4})B,\n\t\\end{align}\n\twhere $(h)$ is because 
$\\mathbb{E}_k[Y_k]\\leq-\\frac{3}{2}\\gamma_k^2+\\frac{1}{2}\\sqrt{B}$ implies $(-\\frac{3}{2}\\gamma_k^2-\\mathbb{E}_k[Y_k])^2\\leq\\frac{1}{4}B$ and $(a+b)^2\\leq 2(a^2+b^2)$.\n\t\n\tDenoting $N_1:=\\max\\{(35+\\frac{3}{4})B,\\frac{1}{36}((9\\gamma^\\star)^4+B)+(3\\gamma^\\star)^2((9\\gamma^\\star)^2+3\\sqrt{B})\\}$, inequalities \\eqref{eq:stepsizez-ub1}, \\eqref{eq:stepsizez-ub2} and \\eqref{eq:stepsizez-ub3} lead to:\n\t\\begin{equation}\n\t\\mathbb{E}_k[V(\\gamma_{k+1})]-V(\\gamma_k)\\leq-\\eta_k\\overline{D}_{\\mathsf{lb}}V(\\gamma_k)+\\eta_k^2 N_1. \\label{eq:stepdrift-final}\n\t\\end{equation}\n\t\\textbf{Step 2: Computing $\\mathbb{E}[V(\\gamma_k)]$ through iteration}\n\tTaking the expectation on both sides of \\eqref{eq:stepdrift-final}, we have:\n\t\\begin{equation}\n\t\\mathbb{E}[V(\\gamma_{k+1})]\\leq(1-\\eta_k\\overline{D}_{\\mathsf{lb}})\\mathbb{E}[V(\\gamma_k)]+\\eta_k^2N_1. \\label{eq:perstep}\n\t\\end{equation}\n\tIterating inequality \\eqref{eq:perstep} from $i=1$ to $k$ yields:\n\t\\begin{equation}\n\t\\mathbb{E}[V(\\gamma_{k+1})]\\leq\\prod_{i=1}^k(1-\\eta_i\\overline{D}_{\\mathsf{lb}})V(\\gamma_0)+\\sum_{i=1}^k\\eta_i^2N_1\\cdot\\prod_{j=i+1}^k(1-\\eta_j\\overline{D}_{\\mathsf{lb}}). \\label{eq:telescope-stepsize}\n\t\\end{equation}\n\t\n\tSince the step size selected by \\eqref{eq:stepsize} satisfies \\[\\eta_k\\rightarrow 0, \\liminf_{k}\\min_{k\\geq i\\geq m(t_k-T)}\\frac{\\eta_k}{\\eta_i}=1,\\]\n\tthe right hand side of \\eqref{eq:telescope-stepsize} is $\\mathcal{O}(\\eta_k)$ according to \\cite[p. 343, Eq. (4.8)]{Kushner2003}. Therefore, \n\t\\begin{equation}\n\t\\sup_k\\mathbb{E}\\left[\\frac{(\\gamma_k-\\gamma^\\star)^2}{\\eta_k}\\right]=\\sup_k\\mathbb{E}\\left[2V(\\gamma_k)\/\\eta_k\\right]=\\mathcal{O}(1). \n\t\\end{equation}\n\t\n\tThis finishes the proof of Theorem~\\ref{thm:rate-converge}. 
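The iteration in Step 2 can also be checked numerically. The sketch below iterates the one-step bound \eqref{eq:perstep} with placeholder constants ($\overline{D}_{\mathsf{lb}}$, $N_1$, and the initial value are arbitrary) and uses $\alpha=0.8$ for illustration, since any $\alpha\in(0.5,1)$ makes the product $\prod_i(1-\eta_i\overline{D}_{\mathsf{lb}})$ decay super-polynomially; the ratio $\mathbb{E}[V(\gamma_k)]/\eta_k$ then remains bounded:

```python
# Iterate E[V(gamma_{k+1})] <= (1 - eta_k * D_lb) E[V(gamma_k)] + eta_k^2 * N_1
# and track v_k / eta_k, which should stay bounded.
D_LB, N1 = 0.4, 10.0        # placeholder constants for the sketch
ALPHA = 0.8                 # illustrative alpha in (0.5, 1)
v = 1.0                     # arbitrary initial value of E[V(gamma_1)]
ratios = []
for k in range(1, 100_001):
    eta = 1.0 / (2.0 * D_LB * k ** ALPHA)
    ratios.append(v / eta)
    v = (1.0 - eta * D_LB) * v + eta ** 2 * N1
print(max(ratios), ratios[-1])
```

After an initial transient driven by the forcing term $\eta_k^2 N_1$, the ratio settles near a constant, which mirrors the $\sup_k\mathbb{E}[(\gamma_k-\gamma^\star)^2/\eta_k]<\infty$ conclusion.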
\n\n\t\\subsection{Proof of Theorem~\\ref{thm:converse}}\\label{pf:converse}\n\t\n\t\\subsubsection{Proof of Inequality \\eqref{eq:converse-est}}\n\t\n\tLet $\\mathbb{P}_1, \\mathbb{P}_2$ be two delay distributions and let $\\gamma_1^\\star, \\gamma_2^\\star$ be the solutions to \\eqref{eq:equation-offline} when $D\\sim\\mathbb{P}_1$ and $D\\sim\\mathbb{P}_2$, respectively. \n\tThrough Le Cam's inequality \\cite{yu1997assouad}, we have:\n\t\\begin{equation}\n\t\\inf_{\\hat{\\gamma}}\\sup_{\\mathbb{P}}\\mathbb{E}\\left[\\left(\\hat{\\gamma}-\\gamma^\\star(\\mathbb{P})\\right)^2\\right]\\geq (\\gamma_1^\\star-\\gamma_2^\\star)^2\\cdot\\left(\\mathbb{P}_1^{\\otimes k}\\wedge \\mathbb{P}_2^{\\otimes k}\\right),\\label{eq:lecam-gamma}\n\t\\end{equation}\n\twhere $\\mathbb{P}\\wedge \\mathbb{Q}:=\\int_{\\Omega}\\min\\{p(x), q(x)\\}\\text{d}x$ and $\\mathbb{P}^{\\otimes k}$ is the product distribution of $k$ i.i.d.\\ random variables drawn from $\\mathbb{P}$. \n\t\n\tTo use Le Cam's inequality \\eqref{eq:lecam-gamma}, we need to find two distributions $\\mathbb{P}_1$ and $\\mathbb{P}_2$ whose $\\ell_1$ distance $|\\mathbb{P}_1^{\\otimes k}-\\mathbb{P}_2^{\\otimes k}|_1$ is bounded, while the difference $(\\gamma_1^\\star-\\gamma_2^\\star)^2$ is of order $1\/k$. We take $\\mathbb{P}_1$ to be the uniform distribution on $[0, 1]$ and let $\\gamma_1^\\star$ be the optimum ratio of distribution $\\mathbb{P}_1$. Through Corollary~\\ref{coro:gammadep-bound}, we can obtain a loose upper bound on $\\gamma_1^\\star$ as follows:\n\t\\begin{equation}\n\t\\gamma_1^\\star<\\frac{1}{2}\\frac{\\mathbb{E}[D^2]}{\\mathbb{E}[D]}=\\frac{1}{3}. 
\n\t\\end{equation} \n\t\n\tLet $c\\leq\\frac{1}{2}$ be a constant and denote\n\t\\begin{equation}\n\t\t\\delta=\\min\\{1-3\\gamma_1^\\star, 1\/3, p_{\\text{w, uni}}^\\star\/2\\}.\\label{eq:deltadef}\n\t\\end{equation} \n\tLet $\\mathbb{P}_2$ be a probability distribution with probability density function $p_2(x)$ defined as follows:\n\t\\begin{equation}\n\tp_2(x)=\\begin{cases}\n\t1-c\\sqrt{1\/k}, &x\\leq \\frac{1}{2}\\delta;\\\\\n\t1, &\\frac{1}{2}\\delta< x\\leq 1-\\frac{1}{2}\\delta;\\\\\n\t1+c\\sqrt{1\/k}, &1-\\frac{1}{2}\\delta< x\\leq 1;\\\\\n\t0, &\\text{otherwise}.\n\t\\end{cases}\\label{eq:p2def}\n\t\\end{equation}\n\t\n\tWe will first bound $(\\gamma_1^\\star-\\gamma_2^\\star)^2$ (in Step 1) and $\\mathbb{P}_1^{\\otimes k}\\wedge\\mathbb{P}_2^{\\otimes k}$ (in Step 2) as follows:\n\t\n\t\\hspace{-10pt}\\textbf{Step 1: Lower bounding $\\gamma_2^\\star-\\gamma_1^\\star$: }\n\tFor notational simplicity, denote function $h_1(\\gamma):=\t\\mathbb{E}_{D\\sim\\mathbb{P}_1}[\\frac{1}{6}\\max\\{3\\gamma, Z_D^2\\}^2-\\gamma\\max\\{3\\gamma, Z_D^2\\}]$ and $h_2(\\gamma):=\t\\mathbb{E}_{D\\sim\\mathbb{P}_2}[\\frac{1}{6}\\max\\{3\\gamma, Z_D^2\\}^2-\\gamma\\max\\{3\\gamma, Z_D^2\\}]$. 
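Before carrying out the comparison analytically, this two-point construction can be sanity-checked by Monte Carlo. In the sketch below, $k$, $c$, and $\delta$ are placeholder values, common random numbers make the $O(1/\sqrt{k})$ density perturbation resolvable, and the shift of the root is estimated through the same first-order (Taylor) argument used in Step 1:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 400_000
d = rng.uniform(0.0, 1.0, n)                 # common samples, D ~ Uniform[0, 1]
z2 = d * rng.standard_normal(n) ** 2         # Z_D^2 given D

def h(gamma, w):
    # importance-weighted estimate of h under density w relative to Uniform[0, 1]
    m = np.maximum(3.0 * gamma, z2)
    return np.mean(w * (m ** 2 / 6.0 - gamma * m))

k, c, delta = 10_000, 0.5, 0.15              # placeholder constants for the sketch
eps = c / np.sqrt(k)
w1 = np.ones(n)                                                    # density of P_1
w2 = 1.0 - eps * (d <= delta / 2) + eps * (d >= 1.0 - delta / 2)   # density of P_2

# Root of h_1 by bisection (h is decreasing in gamma).
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid, w1) > 0.0 else (lo, mid)
g1 = 0.5 * (lo + hi)

# First-order estimate of the root shift: h_2(g1) / |h_2'(g1)|,
# with |h_2'(g1)| = E_{P_2}[max{3 g1, Z_D^2}].
shift = h(g1, w2) / np.mean(w2 * np.maximum(3.0 * g1, z2))
print(g1, shift)   # shift > 0, of order 1/sqrt(k)
```

Moving $O(1/\sqrt{k})$ probability mass from small to large delays pushes the root up by $\Theta(1/\sqrt{k})$, exactly the separation the converse argument needs.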
According to the definition of $\\mathbb{P}_2$ in \\eqref{eq:p2def}, for each $\\gamma$, the difference between $h_1(\\gamma)$ and $h_2(\\gamma)$ can be computed by:\n\t\\begin{align}\n\t&h_2(\\gamma)-h_1(\\gamma)\\nonumber\\\\\n\t=&\\int_{1-\\delta\/2}^1\\frac{c}{\\sqrt{k}}\\mathbb{E}\\left[\\frac{1}{6}\\max\\{3\\gamma, Z_D^2\\}^2-\\gamma\\max\\{3\\gamma, Z_D^2\\}\\big|D=x\\right]\\text{d}x\\nonumber\\\\\n\t&-\\int_{0}^{\\delta\/2}\\frac{c}{\\sqrt{k}}\\mathbb{E}\\left[\\frac{1}{6}\\max\\{3\\gamma, Z_D^2\\}^2-\\gamma\\max\\{3\\gamma, Z_D^2\\}\\big|D=x\\right]\\text{d}x\\nonumber\\\\\n\t\\overset{(a)}{=}&\\int_{1-\\delta\/2}^1\\frac{c}{\\sqrt{k}}\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma)}|D=x\\right]\\text{d}x\\nonumber\\\\\n\t&-\\int_{0}^{\\delta\/2}\\frac{c}{\\sqrt{k}}\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma)}|D=x\\right]\\text{d}x, \\label{eq:hdiff}\n\t\\end{align}\n\twhere equality $(a)$ is obtained because \n\t\\begin{align}\\frac{1}{6}\\max\\{3\\gamma, Z_D^2\\}^2-\\gamma\\max\\{3\\gamma, Z_D^2\\}=-\\frac{3}{2}\\gamma^2+\\frac{1}{6}(Z_D^2-3\\gamma)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma)}.\n\t\\end{align}\n\tSince $\\gamma_1^\\star$ is the optimum ratio for delay distribution $\\mathbb{P}_1$, we have $h_1(\\gamma_1^\\star)=0$. 
\n\n\tAccording to equation \\eqref{eq:hdiff}, $h_2(\\gamma_1^\\star)$ can be lower bounded by:\n\t\\begin{align}\n\t&h_2(\\gamma_1^\\star)\\nonumber\\\\\n\t\\overset{(b)}{\\geq}&\\frac{c}{\\sqrt{k}}\\cdot\\int_{1-\\delta\/2}^1\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma_1^\\star)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma_1^\\star)}\\big|D=x\\right]\\text{d}x\\nonumber\\\\\n\t&-\\int_{0}^{\\delta\/2}\\frac{c}{\\sqrt{k}}\\frac{1}{2}x^2\\text{d}x\\nonumber\\\\\n\t\\geq&\\frac{c}{\\sqrt{k}}\\cdot\\int_{1-\\delta\/2}^1\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma_1^\\star)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma_1^\\star)}\\big|D=x\\right]\\text{d}x-\\frac{c}{\\sqrt{k}}\\frac{1}{6}\\left(\\frac{\\delta}{2}\\right)^3, \\label{eq:h2mid}\n\t\\end{align}\n\twhere inequality $(b)$ is because $\\mathbb{E}[\\frac{1}{6}(Z_D^2-3\\gamma)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma)}|D=x]\\leq\\mathbb{E}[\\frac{1}{6}Z_D^4|D=x]=\\frac{1}{2}x^2$. \n\t\n\tWe then proceed to lower bound $\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma_1^\\star)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma_1^\\star)}|D=x\\right]$ for each delay realization $x\\in[1-\\delta\/2, 1]$ as follows:\n\t\\begin{align}\n\t&\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma_1^\\star)^2\\mathbb{I}_{(Z_D^2\\geq 3\\gamma_1^\\star)}|D=x\\right]\\nonumber\\\\\n\t\\overset{(c)}{\\geq}&\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma_1^\\star)^2\\mathbb{I}_{(3\\gamma_1^\\star\\leq Z_D^2\\leq x)}+\\frac{1}{6}(Z_D^2-x)^2\\mathbb{I}_{(Z_D^2\\geq x)}|D=x\\right]\\nonumber\\\\\n\t\\geq&\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma_1^\\star)^2\\mathbb{I}_{(3\\gamma_1^\\star\\leq Z_D^2\\leq x)}\\right]\\nonumber\\\\\n\t&+\\frac{1}{6}\\left(\\text{Var}[Z_D^2|D=x]-x^2\\text{Pr}\\left(Z_D^2\\leq x|D=x\\right)\\right)\\nonumber\\\\\n\t\\overset{(d)}{\\geq}&\\frac{1}{6}x^2\\geq\\frac{1}{6}(1-\\delta\/2)^2,\\label{eq:h2lb}\n\t\\end{align}\n\twhere inequality $(c)$ is because $\\delta\\leq1-3\\gamma_1^\\star$ by equation \\eqref{eq:deltadef}, 
and for the conditional mean $\\mathbb{E}[Z_D^2|D=x]=x\\geq 1-\\delta\/2\\geq 1-\\delta \\geq3\\gamma_1^\\star$; inequality $(d)$ is because $\\text{Var}[Z_D^2|D=x]=2x^2$ and $x^2\\text{Pr}(Z_D^2\\leq x)\\leq x^2$ and $\\mathbb{E}\\left[\\frac{1}{6}(Z_D^2-3\\gamma_1^\\star)^2\\mathbb{I}_{(3\\gamma_1^\\star\\leq Z_D^2\\leq x)}\\right]\\geq 0$. \n\tPlugging inequality \\eqref{eq:h2lb} into \\eqref{eq:h2mid} and recalling that $\\delta<1$ by definition, we have the lower bound on $h_2(\\gamma_1^\\star)$:\n\t\\begin{equation}\n\th_2(\\gamma_1^\\star)\\geq\\frac{c}{\\sqrt{k}}\\frac{\\delta}{2}\\frac{1}{6}\\left(\\left(1-\\frac{\\delta}{2}\\right)^2-\\left(\\frac{\\delta}{2}\\right)^2\\right)\\geq\\frac{c}{\\sqrt{k}}\\frac{\\delta}{12}(1-\\delta)>0. \\label{eq:h2lbf}\n\t\\end{equation}\n\tBy Lemma~\\ref{lemma:g}-(i), function $h_2(\\cdot)$ is monotonically decreasing. Since $h_2(\\gamma_1^\\star)> 0$ and $h_2(\\gamma_2^\\star)=0$, we can conclude that $\\gamma_2^\\star\\geq\\gamma_1^\\star$. We then proceed to bound $\\gamma_2^\\star-\\gamma_1^\\star$ through a Taylor expansion at $\\gamma=\\gamma_1^\\star$:\n\t\\begin{equation}\n\th_2(\\gamma_2^\\star)=h_2(\\gamma_1^\\star)+h_2'(\\gamma)(\\gamma_2^\\star-\\gamma_1^\\star),\n\t\\end{equation}\n\twhere $\\gamma\\in[\\gamma_1^\\star, \\gamma_2^\\star]$. Therefore, the gap can be written as:\n\t\\begin{equation}\n\t\\gamma_2^\\star-\\gamma_1^\\star=-\\frac{h_2(\\gamma_1^\\star)}{h_2'(\\gamma)}. 
\n\t\\end{equation}\n\t\n\tTo lower bound $\\gamma_2^\\star-\\gamma_1^\\star$, we first find a loose upper bound on $\\gamma_2^\\star$ using Corollary~\\ref{coro:gammadep-bound}: \\begin{equation}\\gamma_2^\\star\\leq\\frac{1}{2}\\frac{\\mathbb{E}_{D\\sim\\mathbb{P}_2}[D^2]}{\\mathbb{E}_{D\\sim\\mathbb{P}_2}[D]}\\leq\\frac{1}{2}\\left(\\frac{1}{3}+\\delta \\cdot c\\sqrt{1\/k}\\right).\\label{eq:gamma2lbloose}\n\t\\end{equation}\n\tTherefore, since $\\delta<1\/3$, we have $|h_2'(\\gamma)|\\leq|h_2'(\\gamma_2^\\star)|=\\mathbb{E}[\\max\\{3\\gamma_2^\\star, Z_D^2\\}]\\leq\\overline{D}+3\\gamma_{\\text{2, ub}}\\leq 1+\\frac{1}{2}+\\frac{3}{2}c\\sqrt{\\frac{1}{k}}\\delta\\leq 2$. Then by inequality \\eqref{eq:h2lbf}, we have \n\t\\begin{equation}\n\t\\gamma_2^\\star-\\gamma_1^\\star\\geq\\frac{-h_2(\\gamma_1^\\star)}{h_2'(\\gamma_2^\\star)}\\geq\\frac{1}{24}(1-\\delta)\\delta c\\sqrt{\\frac{1}{k}}. \\label{eq:gamma2lb}\n\t\\end{equation}\n\t\n\t\\hspace{-10pt}\\textbf{Step 2: Lower bounding $\\mathbb{P}_1^{\\otimes k}\\wedge \\mathbb{P}_2^{\\otimes k}$: }Let $|\\mathbb{P}-\\mathbb{Q}|=\\int_{\\Omega}|\\text{d}\\mathbb{P}-\\text{d}\\mathbb{Q}|$ be the $\\ell_1$ distance between probability distributions $\\mathbb{P}$ and $\\mathbb{Q}$. 
Then\n\t\\begin{align}\n\t\\mathbb{P}_1^{\\otimes k}\\wedge \\mathbb{P}_2^{\\otimes k}=&\\int\\min\\{\\mathbb{P}_1^{\\otimes k}(\\text{d}x), \\mathbb{P}_2^{\\otimes k}(\\text{d}x)\\}\\nonumber\\\\\n\t=&\\int\\mathbb{P}_1^{\\otimes k}(\\text{d}x)\\cdot\\left(1-\\frac{\\left(\\mathbb{P}_2^{\\otimes k}(\\text{d}x)-\\mathbb{P}_1^{\\otimes k}(\\text{d}x)\\right)^+}{\\mathbb{P}_1^{\\otimes k}(\\text{d}x)}\\right)\\nonumber\\\\\n\t=&1-\\int\\left(\\mathbb{P}_2^{\\otimes k}(\\text{d}x)-\\mathbb{P}_1^{\\otimes k}(\\text{d}x)\\right)^+\\nonumber\\\\\n\t=&1-\\frac{1}{2}|\\mathbb{P}_1^{\\otimes k}-\\mathbb{P}_2^{\\otimes k}|_1.\\label{eq:wedgepq}\n\t\\end{align}\n\tEquality \\eqref{eq:wedgepq} enables us to lower bound $\\mathbb{P}_1^{\\otimes k}\\wedge \\mathbb{P}_2^{\\otimes k}$ by upper bounding the $\\ell_1$ distance $|\\mathbb{P}_1^{\\otimes k}-\\mathbb{P}_2^{\\otimes k}|_1$, which is done through Pinsker's inequality:\n\t\\begin{align}\n\t&\\frac{1}{2}\\left|\\mathbb{P}_1^{\\otimes k}-\\mathbb{P}_2^{\\otimes k}\\right|_1\\nonumber\\\\\n\t\\leq&\\sqrt{\\frac{1}{2}D_{\\mathsf{KL}}(\\mathbb{P}_2^{\\otimes k}||\\mathbb{P}_1^{\\otimes k})}\\nonumber\\\\\n\t=&\\sqrt{\\frac{1}{2}kD_{\\mathsf{KL}}(\\mathbb{P}_2||\\mathbb{P}_1)}\\nonumber\\\\\n\t\\overset{(e)}{\\leq}&\\sqrt{\\frac{1}{2}k\\int_0^1p_2(x)\\ln p_2(x)\\text{d}x}\\nonumber\\\\\n\t\\overset{(f)}{\\leq}&\\sqrt{\\frac{1}{2}k\\int_0^1\\left(p_2(x)-1+\\frac{1}{\\min\\{p_2(x), 1\\}}(p_2(x)-1)^2\\right)\\text{d}x}\\nonumber\\\\\n\t\\overset{(g)}{\\leq}&\\sqrt{\\frac{1}{2}k\\frac{1}{\\inf_{0\\leq d\\leq 1} p_2(d)}\\int_0^1(p_2(x)-1)^2\\text{d}x}\\nonumber\\\\\n\t\\leq&\\sqrt{\\frac{1}{2}k\\frac{1}{1-c\\sqrt{1\/k}}\\delta\\frac{c^2}{k}}\\leq\\sqrt{\\delta c^2},\\label{eq:klub-last}\n\t\\end{align}\n\twhere inequality $(e)$ is because the density function $p_1(x)=1$ for the uniform distribution, therefore $D_{\\mathsf{KL}}(\\mathbb{P}_2||\\mathbb{P}_1)=\\int_0^1p_2(x)\\ln p_2(x)$; inequality $(f)$ is because function 
$g(t):=t\\ln t$ is convex with second derivative $g''(t)=1\/t$; therefore, through Taylor expansion we have $g(t)\\leq g(1)+(t-1)+\\frac{1}{2}\\frac{1}{\\min\\{t, 1\\}}(t-1)^2=(t-1)+\\frac{1}{2}\\frac{1}{\\min\\{t, 1\\}}(t-1)^2$; inequality $(g)$ is because $\\int_0^1p_2(x)\\text{d}x=1$. \n\t\n\tBy choosing $c=1\/2$ and recalling that $\\delta<1$, inequality \\eqref{eq:klub-last} can be upper bounded by:\n\t\\begin{equation}\n\t\\frac{1}{2}|\\mathbb{P}_1^{\\otimes k}-\\mathbb{P}_2^{\\otimes k}|_1\\leq\\frac{1}{2}. \\label{eq:klub}\n\t\\end{equation}\n\t\n\tPlugging \\eqref{eq:klub} into \\eqref{eq:wedgepq} yields:\n\t\\begin{equation}\n\t\\mathbb{P}_1^{\\otimes k}\\wedge \\mathbb{P}_2^{\\otimes k}\\geq\\frac{1}{2}. \\label{eq:infub}\n\t\\end{equation}\n\t\n\tFinally, plugging \\eqref{eq:infub} and \\eqref{eq:gamma2lb} into Le Cam's inequality \\eqref{eq:lecam-gamma} finishes the proof of inequality \\eqref{eq:converse-est}:\n\t\\begin{equation}\n\t\\inf_{\\hat{\\gamma}}\\sup_{\\mathbb{P}}(\\hat{\\gamma}- \\gamma^{\\star}(\\mathbb{P}))^2\\geq\\frac{1}{2}\\left(\\frac{1}{24}(1-\\delta)\\delta p_{\\text{w,uni}}^\\star\\right)^2\\cdot\\frac{1}{k}.\\label{eq:minimaxf}\n\t\\end{equation} \n\t\\subsubsection{Proof of Inequality \\eqref{eq:regconverse}}\n\tThe proof is divided into three steps. Consider a delay distribution $\\mathbb{P}\\in\\mathcal{P}_u(\\mu)$. First, we show that for each sampling policy $\\pi$ with a random sampling interval $\\tau$, letting $l_\\pi:=\\mathbb{E}[\\tau]=\\mathbb{E}[Z_\\tau^2]$ denote the expected running length, the following inequality holds:\n\t\\begin{equation}\n\t\\mathbb{E}\\left[\\int_{t=0}^{\\tau}Z_t^2\\text{d}t\\right]-\\gamma^\\star\\mathbb{E}[\\tau]\\geq\\frac{1}{6}p_w(\\mathbb{P})\\left(l_\\pi-l^\\star(\\mathbb{P})\\right)^2,\\label{eq:regrettoerror}\n\t\\end{equation}\n\twhere $l^\\star(\\mathbb{P}):=\\mathbb{E}_{D\\sim\\mathbb{P}}[\\max\\{3\\gamma^\\star(\\mathbb{P}), Z_D^2\\}]$ is the average frame length when the optimum policy 
$\\pi^\\star(\\mathbb{P})$ is used. \n\tNext, we show that given $k$ samples $\\delta X_1, \\cdots, \\delta X_k\\overset{{\\rm i.i.d}}{\\sim}Z_D$, where $D\\sim\\mathbb{P}$, the minimax estimation error satisfies:\n\t\\begin{equation}\n\t\\inf_{\\hat{l}}\\sup_{\\mathbb{P}\\in\\mathcal{P}_u(\\mu)}\\mathbb{E}\\left[\\left(\\hat{l}-l^\\star(\\mathbb{P})\\right)^2\\right]\\geq N\\cdot\\frac{1}{k}, \\label{eq:converses2}\n\t\\end{equation}\n\twhere $N$ is a constant independent of $k$ and $\\mu$, with expression provided in equation \\eqref{eq:minimaxf}. \n\t\n\tFinally, notice that:\n\t\\begin{align}\n\t&\\mathbb{E}\\left[\\int_0^{S_k}(X_t-\\hat{X}_t)^2\\text{d}t-(\\gamma^\\star+\\overline{D})S_k\\right]\\nonumber\\\\\n\t=&\\sum_{k'=1}^k\\mathbb{E}\\left[(X_{S_{k'+1}}-X_{S_{k'}})^2D_{k'}+\\frac{1}{6}(X_{S_{k'+1}}-X_{S_{k'}})^4\\right]\\nonumber\\\\\n\t&-(\\gamma^\\star+\\overline{D})\\sum_{k'=1}^k\\mathbb{E}[S_{k'+1}-S_{k'}]\\nonumber\\\\\n\t\\geq&\\frac{1}{6}p_w(\\mathbb{P})\\sum_{k'=1}^k\\left(\\mathbb{E}[L_{k'}]-l^\\star\\right)^2.\\label{eq:converses3}\n\t\\end{align}\n\t\n\tTaking $\\inf_{\\pi}\\sup_{\\mathbb{P}\\in\\mathcal{P}_u(\\mu)}$ on both sides of inequality \\eqref{eq:converses3}, we have:\n\t\\begin{align}\n\t&\\inf_{\\pi}\\sup_{\\mathbb{P}\\in\\mathcal{P}_u(\\mu)}\\mathbb{E}\\left[\\int_0^{S_k}(X_t-\\hat{X}_t)^2\\text{d}t-(\\gamma^\\star+\\overline{D})S_k\\right]\\nonumber\\\\\n\t\\geq&\\frac{1}{6}\\mu \\inf_{\\hat{l}}\\sup_{\\mathbb{P}\\in \\mathcal{P}_u(\\mu)}\\mathbb{E}[(\\hat{l}-l^\\star(\\mathbb{P}))^2]\\geq\\frac{1}{6}\\mu N\\sum_{k'=1}^k \\frac{1}{k'}=\\Omega(\\ln k). \n\t\\end{align}\n\t\n\tThe proof of inequality \\eqref{eq:regrettoerror} follows ideas similar to \\cite[Lemma 4]{Tang2205:Sending}. Details are provided in Appendix~\\ref{pf:regrettoerror} \n\tdue to space limitations. 
The proof of the minimax risk bound \\eqref{eq:converses2} is based on Le Cam's two-point method as follows:\n\t\n\t\\subsubsection{Proof of inequality \\eqref{eq:converses2}}\n\tLet $\\mathbb{P}_1, \\mathbb{P}_2\\in\\mathcal{P}_u(\\mu)$; \n\tthrough Le Cam's inequality \\cite{yu1997assouad}, we have:\n\t\\begin{equation}\n\t\\inf_{\\hat{l}}\\sup_{\\mathbb{P}\\in\\mathcal{P}_u(\\mu)}\\mathbb{E}\\left[\\left(\\hat{l}-l^\\star(\\mathbb{P})\\right)^2\\right]\\geq (l_1^\\star-l_2^\\star)^2\\cdot\\left(\\mathbb{P}_1^{\\otimes k}\\wedge \\mathbb{P}_2^{\\otimes k}\\right).\\label{eq:lecam}\n\t\\end{equation}\n\t\n\tSimilarly to the proof of bounding $(\\hat{\\gamma}-\\gamma^\\star(\\mathbb{P}))$, let $\\mathbb{P}_1$ be the uniform distribution on $[0, 1]$ and let $\\mathbb{P}_2$ be defined through the density function in equation \\eqref{eq:p2def}. For $\\mu\\leq p_{\\text{w, uni}}^\\star\/2$, it is easy to show that $\\mathbb{P}_2\\in\\mathcal{P}_u(\\mu)$, i.e., $p_w(\\mathbb{P}_2)\\geq\\mu$, as follows:\n\t\\begin{align}\n\t&\\text{Pr}(Z_D^2\\leq 3\\gamma_2^\\star|D\\sim\\mathbb{P}_2)\\nonumber\\\\\n\t=&\\int_0^\\infty\\text{Pr}(Z_D^2\\leq 3\\gamma_2^\\star|D=x)p_2(x)\\text{d}x\\nonumber\\\\\n\t=&\\int_0^\\infty\\text{Pr}(Z_D^2\\leq 3\\gamma_2^\\star|D=x)p_1(x)\\text{d}x\\nonumber\\\\\n\t&-\\int_0^{\\delta\/2}\\text{Pr}(Z_D^2\\leq 3\\gamma_2^\\star|D=x)\\frac{c}{\\sqrt{k}}\\text{d}x\\nonumber\\\\\n\t&+\\int_{1-\\delta\/2}^{1}\\text{Pr}(Z_D^2\\leq 3\\gamma_2^\\star|D=x)\\frac{c}{\\sqrt{k}}\\text{d}x\\nonumber\\\\\n\t\t\\overset{(i)}{\\geq}&\\int_0^\\infty\\text{Pr}(Z_D^2\\leq 3\\gamma_1^\\star|D=x)p_1(x)\\text{d}x-c\/\\sqrt{k}\\delta\\nonumber\\\\\n\\geq&p_{\\text{w, uni}}^\\star-c\/\\sqrt{k}\\delta\\nonumber\\\\\n\t\\overset{(j)}{\\geq}&p_{\\text{w, uni}}^\\star\/2, \n\t\\end{align} \nwhere inequality $(i)$ holds because $\\gamma_1^\\star\\leq\\gamma_2^\\star$ from inequality \\eqref{eq:gamma2lb}; inequality $(j)$ holds because $\\delta