diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzayzw" "b/data_all_eng_slimpj/shuffled/split2/finalzzayzw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzayzw" @@ -0,0 +1,5 @@ +{"text":"\\section{Principal angles}\n\nLet $\\mathbf{K}$ be the real or complex field. For a integer $n$, we equip $\\mathbf{K}^n$ with its usual inner product. We set $[n]=\\{1,\\dots,n\\}$. \nFor $0 \\leq k \\leq n$, we denote by $\\mathsf{G}_{n,k}$ the \\emph{Grassmann manifold} defined as the set of all $k$-dimensional subspaces of $\\mathbf{K}^n$. Given a subspace $E \\subset \\mathbf{K}^n$, we denote by $P_E$ the orthogonal projection onto $E$.\n\nWe now introduce the concept of \\emph{principal angles} which play a central role in this note. Principal angles between two subspaces generalize the notion of the angle between two lines in $\\mathbf{K}^2$. They are defined through the following proposition.\n\n\\begin{proposition} \\label{prop:principal-angles}\nLet $0 \\leq k,l \\leq n$ and consider subspaces $E \\in \\mathsf{G}_{n,k}$ and $F \\in \\mathsf{G}_{n,l}$. There exist\n\\begin{enumerate}\n \\item an orthonormal basis $(e_i)_{i \\in [k]}$ of $E$,\n \\item an orthonormal basis $(f_j)_{j \\in [l]}$ of $F$,\n \\item numbers $\\theta_1 \\leq \\theta_2 \\leq \\cdots \\leq \\theta_{\\min(k,l)}$ in $[0,\\pi\/2]$\n\\end{enumerate}\nsuch that, for every $i \\in [k]$ and $j \\in [l]$\n\\[ \\scalar{e_i}{f_j} = \\begin{cases} 0 & \\textnormal{ if } i \\neq j \\\\ \\cos(\\theta_i) & \\textnormal{ if } i=j. \\end{cases} \\]\nMoreover, the numbers $(\\theta_i)_{i \\in [\\min(k,l)]}$ are uniquely defined by these conditions.\n\\end{proposition}\n\nIn the context of Proposition \\ref{prop:principal-angles}, the numbers $(\\theta_i)_{i \\in [\\min(k,l)]}$ are called the \\emph{principal angles} between $E$ and $F$. The vectors $e_i$ and $f_j$ are sometimes called the principal vectors; they are not uniquely defined.\n\nPrincipal angles are discussed in several places (see, e.g., \\cite{BS,GVL}) and can be related to singular values. If $e_i$, $f_j$ and $\\theta_i$ satisfy the condition of Proposition \\ref{prop:principal-angles}, then\n\\[ P_EP_F = \\left( \\sum_{i \\in [k]} \\ketbra{e_i}{e_i} \\right) \\left( \\sum_{j \\in [l]} \\ketbra{f_j}{f_j} \\right) = \\sum_{i \\in [\\min(k,l)]} \\cos (\\theta_i) \\ketbra{e_i}{f_i} \\]\nis a \\emph{singular value decomposition} of the operator $P_EP_F$. Conversely, one may prove Proposition \\ref{prop:principal-angles} by considering a singular value decomposition of $P_EP_F$; the uniqueness of principal angles follows from the uniqueness of singular values.\n\n\nWe compute, on few simple examples, the spectrum of a self-adjoint expression in two orthogonal projections from the principal angles between their ranges.\n\n\\begin{proposition} \\label{prop:angles-formula}\nLet $E \\in \\mathsf{G}_{n,k}$ and $F \\in \\mathsf{G}_{n,l}$ with $k \\leq l$. Let $m = \\dim( E \\cap F)$ and $(\\theta_i)_{i \\in [k-m]}$ the nonzero principal angles between $E$ and $F$. Set $P=P_E$ and $Q=P_F$. 
Then\n \\begin{enumerate}\n \\item the spectrum of $PQP$ or $QPQ$ is\n \\[ \\sigma(PQP) = \\sigma(QPQ) = \\{0_{(n-k)} \\} \\cup \\{ \\cos^2 \\theta_i \\} \\cup \\{ 1_{(m)} \\},\\]\n\\item the spectrum of $P+Q$ is\n\\[ \\sigma(P+Q)= \\{0_{(n-k-l+m)}\\} \\cup \\{1-\\cos \\theta_i\\} \\cup \\{1_{(l-k)}\\} \\cup\\{1+\\cos \\theta_i\\} \\cup \\{2_{(m)}\\} ,\\]\n \\item the spectrum of $\\imath (PQ-QP)$ is\n \\[ \\sigma(\\imath (PQ-QP)) = \\{ -\\cos \\theta_i \\sin \\theta_i \\} \\cup \\{0_{(n-2k+2m)} \\} \\cup \\{ \\cos \\theta_i \\sin \\theta_i \\}, \\]\n \\item the spectrum of $PQ+QP$ is\n \\[ \\sigma(PQ+QP) = \\{ \\cos^2 \\theta_i - \\cos \\theta_i \\} \\cup \\{0_{(n-2k+m)}\\} \\cup \\{\\cos^2 \\theta_i+\\cos \\theta_i \\} \\cup \\{2_{(m)}\\}. \\]\n \\end{enumerate}\nIn these formulas, the spectrum is counted with multiplicity, the index $i$ ranges in $[k-m]$ and the notation $\\lambda_{(p)}$ stands for the eigenvalue $\\lambda$ repeated $p$ times.\n\\end{proposition}\n\nMore generally, the spectrum of any self-adjoint polynomial in $P_E$, $P_F$ depends only on the principal angles between $E$ and $F$.\n\n\\begin{proof}\nLet $(e_i)_{i \\in [k]}$ and $(f_j)_{j \\in [l]}$ be respective orthonormal bases of $E$ and $F$ satisfying the conclusion of Proposition \\ref{prop:principal-angles}. We have $e_i=f_i$ for $i \\in [m]$. Consider the orthogonal direct sum\n\\[ \\mathbf{K}^n = (E \\cap F) \\bigoplus \\left( \\bigoplus_{i=m+1}^k \\mathspan(e_i,f_i) \\right) \\bigoplus \\left( \\bigoplus_{j=k+1}^l \\mathspan (f_j) \\right) \\bigoplus (E+F)^\\perp.\\]\nThe operators $P$ and $Q$ are jointly block-diagonalizable with respect to this decomposition:\n\\begin{itemize}\n \\item the $m$-dimensional subspace $E \\cap F$ is an eigenspace for $P$ and $Q$, with eigenvalue $1$,\n\\item for $m+1 \\leq i \\leq k$, the $2$-dimensional subspace $\\mathspan \\{e_i,f_i\\}$ is stable for both $P$ and $Q$, which act respectively as the matrices\n \\begin{equation} \\label{eq:joint-space} \\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\end{pmatrix} \\ \\ \\textnormal{ and } \\ \\ \\begin{pmatrix} \\cos^2 \\theta_i & \\cos \\theta_i \\sin \\theta_i \\\\ \\cos \\theta_i \\sin \\theta_i & \\sin^2 \\theta_i \\end{pmatrix} \\end{equation}\n in the orthonormal basis $(e_i,g_i)$, where $g_i$ is defined by the formula $f_i=\\cos(\\theta_i) e_i + \\sin(\\theta_i)g_i$,\n \\item for $k+1 \\leq j \\leq l$, the vector $f_j$ is an eigenvector for both $P$ (with eigenvalue $0$) and $Q$ (with eigenvalue $1$),\n \\item the $(n-k-l+m)$-dimensional subspace $(E+F)^\\perp$ is an eigenspace for $P$ and $Q$, with eigenvalue $0$.\n\\end{itemize}\nEach result follows; the formulas involving $\\theta_i$ are obtained by computing the spectrum of the corresponding polynomial in the $2 \\times 2$ matrices appearing in \\eqref{eq:joint-space}.\n\\end{proof}\n\nFor every integer $0 \\leq k \\leq n$, the Grassmann manifold $\\mathsf{G}_{n,k}$ is equipped with a unique rotation-invariant probability measure, which we call the Haar measure. A concrete way to choose a Haar distributed random element $E \\in \\mathsf{G}_{n,k}$ is to realize $E$ as the linear span of $k$ independent standard Gaussian vectors in $\\mathbf{K}^n$. The following lemma is well known.\n\n\\begin{lemma} \\label{lemma:generic}\nConsider integers $0 \\leq k,l \\leq n$. Let $E \\in \\mathsf{G}_{n,k}$ and $F \\in \\mathsf{G}_{n,l}$ be independent Haar distributed subspaces. The following holds almost surely:\n\\[ \\dim (E + F) = \\min(k+l,n), \\ \\ \\ \\dim (E \\cap F) = \\max(k+l-n,0). 
\\]\nMoreover, the number of nonzero principal angles between $E$ and $F$ is almost surely equal to \n$\\min(k,l,n-k,n-l)$.\n\\end{lemma}\n\n\\begin{proof}\nThe first assertion is clear if we generate $E$, $F$ via Gaussian vectors. The second can then be deduced by writing $E \\cap F$ as $(E^\\perp + F^\\perp)^\\perp$ and using the fact that $E^\\perp \\in \\mathsf{G}_{n,n-k}$ and $F^\\perp \\in \\mathsf{G}_{n,n-l}$ are also independent and Haar distributed. The last point follows since the number of nonzero principal angles between $E$ and $F$ is $\\min(k,l) - \\dim(E \\cap F)$.\n\\end{proof}\n\nIn this paper, we derive the limit distribution of principal angles between random subspaces using the well-known connection to free probability. This question does not seem to have been discussed in the literature; we could only locate the paper \\cite{AEK}, which deals with the largest principal angle only. \n\n\n\\section{Free probability}\n\nWe introduce very briefly some background from free probability needed for our purposes, and refer to classical references such as \\cite{MS,NS,VDN} for more details.\n\nA $*$-probability space is a pair $(\\mathcal{A},\\varphi)$, where $\\mathcal{A}$ is a unital complex $*$-algebra and $\\varphi : \\mathcal{A} \\to \\mathbf{C}$ is a linear form which is positive (i.e., $\\varphi(a^*a) \\geq 0$ for every $a \\in \\mathcal{A}$) and satisfies $\\varphi(1_\\mathcal{A})=1$. Given a self-adjoint element $a \\in \\mathcal{A}$ and a compactly supported probability measure $\\mu$, we say that $\\mu$ is the \\emph{distribution} of $a$ if\n\\[ \\int_\\mathbf{R} x^k \\, \\mathrm{d}\\mu (x) = \\varphi(a^k) \\]\nfor every integer $k \\geq 0$. \n\nIf $p \\in \\mathcal{A}$ is a self-adjoint projection and $\\alpha = \\varphi(p)$, then the distribution of $p$ is $\\mathsf{B}(\\alpha) \\coloneqq \\alpha \\delta_1 + (1-\\alpha) \\delta_0$, the Bernoulli distribution with parameter $\\alpha$.\n\nIf $A$ is a self-adjoint operator on $\\mathbf{K}^n$ with eigenvalues $\\lambda_1,\\dots,\\lambda_n$, its \\emph{empirical spectral distribution} is defined as\n\\[ \\mu_{\\mathrm{sp}}(A) = \\frac{1}{n} \\sum_{i=1}^n \\delta_{\\lambda_i}. \\]\nIf $A$ is an orthogonal projection of rank $r$, then $\\mu_{\\mathrm{sp}}(A) = \\mathsf{B}(r\/n)$.\n\nWe do not repeat here the definition of the fundamental concept of \\emph{free independence} (see \\cite[Chapter 5]{NS}). We rely crucially on the asymptotic freeness of independent large-dimensional random matrices. What we need is summarized by the following proposition, which is a special case of \\cite[Theorem 23.14]{NS}.\n\n\\begin{proposition} \\label{prop:asymptotic-freeness}\nFix $\\alpha, \\beta \\in [0,1]$, and for every $n$, integers $0 \\leq k_n, l_n \\leq n$ such that $\\lim k_n\/n = \\alpha$ and $\\lim l_n\/n=\\beta$. 
Suppose that\n\\begin{enumerate}\n\\item for every $n$, $E_n \\in \\mathsf{G}_{n, k_n}$ and $F_n \\in \\mathsf{G}_{n,l_n}$ are independent Haar distributed random subspaces,\n\\item $p$ and $q$ are free self-adjoint projections in a $*$-probability space, with respective distributions $\\mathsf{B}(\\alpha)$ and $\\mathsf{B}(\\beta)$.\n\\end{enumerate}\nThen, for every self-adjoint polynomial in two non-commuting variables $\\pi$, the sequence of probability measures\n\\[ \\mu_{\\mathrm{sp}} (\\pi(P_{E_n},P_{F_n})) \\]\nconverges towards the distribution of $\\pi(p,q)$.\n\\end{proposition}\n\nIn this paper, the convergence of a sequence of random measures is always meant to be the weak convergence in probability.\n\n\\section{Polynomials in two free projections}\n\nThroughout this section, we consider $p$ and $q$ to be free projections in a $*$-probability space, with distributions $\\mathsf{B}(\\alpha)$ and $\\mathsf{B}(\\beta)$ respectively. \n\nBy Proposition \\ref{prop:asymptotic-freeness}, the distribution of a self-adjoint polynomial in $p$, $q$ is related to the distribution of principal angles between random subspaces. In order to find the latter, we consider the polynomial $pqp$. The distribution of $pqp$ is the \\emph{free multiplicative convolution} of $\\mathsf{B}(\\alpha)$ and $\\mathsf{B}(\\beta)$ and is denoted by $\\mathsf{B}(\\alpha) \\boxtimes \\mathsf{B}(\\beta)$. We take advantage of the fact that an explicit formula appears in the literature (see \\cite[Example 3.6.7]{VDN}):\n\\begin{equation} \\label{eq:boxtimes} \\mathsf{B}(\\alpha) \\boxtimes \\mathsf{B}(\\beta) = (1-\\min(\\alpha,\\beta)) \\delta_0 + \\max(\\alpha+\\beta-1,0) \\delta_1 + \\mu\\end{equation}\nwhere $\\mu$ is an absolutely continuous measure with density $f$ supported on $[\\phi_-,\\phi_+]$, with $\\phi_\\pm = \\alpha + \\beta - 2\\alpha\\beta \\pm 2 \\sqrt{\\alpha\\beta(1-\\alpha)(1-\\beta)}$, given by\n\\[ f(x) = \\frac{\\sqrt{(\\phi_+-x)(x-\\phi_-)}}{2\\pi x(1-x)}. \\]\nThe total mass of $\\mu$ is $\\min(\\alpha,\\beta,1-\\alpha,1-\\beta)$. In the special case $\\alpha=\\beta=1\/2$, we have $\\phi_-=0$, $\\phi_+=1$ and $2\\mu$ is the arcsine distribution.\n\nWe can now derive the limit distribution for principal angles between random large-dimensional subspaces. \n\n\\begin{theorem} \\label{theo:limit-principal-angles}\nFix $\\alpha, \\beta \\in [0,1]$, and for every $n$, integers $0 \\leq k_n,l_n \\leq n$ such that $\\lim k_n\/n = \\alpha$ and $\\lim l_n\/n=\\beta$. Set $r_n=\\min(k_n,l_n,n-k_n,n-l_n)$. For each $n$, let $E_n \\in \\mathsf{G}_{n,k_n}$, $F_n \\in \\mathsf{G}_{n,l_n}$ be independent Haar-distributed random subspaces and let $(\\theta^n_i)_{i \\in [r_n]}$ be the nonzero principal angles between $E_n$ and $F_n$. 
\n\nAs $n \\to \\infty$, the empirical distribution $\\frac{1}{n} \\sum_{i \\in [r_n]} \\delta_{\\theta^n_i}$ converges towards the distribution supported on $[ \\arccos \\sqrt{\\phi_+}, \\arccos \\sqrt{\\phi_-}]$\nwith density\n\\[ s(\\theta) = \\frac{\\sqrt{(\\phi_+ - \\cos^2 \\theta)(\\cos^2 \\theta - \\phi_-)}}{\\pi \\sin \\theta \\cos \\theta}. \\]\nThe total mass of this distribution equals $\\min(\\alpha,\\beta,1-\\alpha,1-\\beta)$.\n\\end{theorem}\n\n\\begin{proof}\nBy Lemma \\ref{lemma:generic}, the number of nonzero principal angles between $E_n$ and $F_n$ is almost surely equal to $r_n$, so the random variables $(\\theta^n_i)_{i \\in [r_n]}$ are well-defined.\nBy Proposition \\ref{prop:asymptotic-freeness}, the sequence $\\mu_{\\mathrm{sp}}(P_{E_n}P_{F_n}P_{E_n})$ converges towards $\\mathsf{B}(\\alpha) \\boxtimes \\mathsf{B}(\\beta)$. On the other hand, we know from Proposition \\ref{prop:angles-formula} that\n\\[ \\mu_{\\mathrm{sp}}(P_{E_n}P_{F_n}P_{E_n}) = \\frac{n-\\max(k_n,l_n)}{n}\\delta_0 + \\frac{\\max (k_n+l_n-n,0)}{n}\\delta_1 + \\frac{1}{n} \\sum_{i \\in [r_n]} \\delta_{\\cos^2 \\theta_i^n}.\\]\nComparing with \\eqref{eq:boxtimes}, we conclude that the sequence $\\frac{1}{n} \\sum_{i \\in [r_n]} \\delta_{\\cos^2 \\theta_i^n}$ converges towards $\\mu$, and therefore that $\\frac{1}{n} \\sum_{i \\in [r_n]} \\delta_{\\theta_i^n}$ converges towards $\\varphi_*\\mu$, the pushforward of $\\mu$ under the map $\\varphi : x\\mapsto \\arccos \\sqrt{x}$. By the chain rule, the density of $\\varphi_*\\mu$ is $(f \\circ \\varphi^{-1})|(\\varphi^{-1})'|$ and the result follows.\n\\end{proof}\n\nIn the special case $\\alpha = \\beta = 1\/2$, i.e., when the involved Bernoulli distributions are fair, the situation is remarkably simple. If $E$, $F$ are random lines in $\\mathbf{K}^2$, their angle obviously follows the uniform distribution in $[0,\\pi\/2]$. (The analogous statement fails in higher dimensions.) Surprisingly, a similar phenomenon appears in the limit.\n\n\\begin{corollary} \\label{cor:principal-angles-uniform}\nFor every $n$, let $E_n, F_n \\in \\mathsf{G}_{2n,n}$ be independent Haar-distributed random subspaces of dimension $n$ in $\\mathbf{K}^{2n}$, and $(\\theta_i^n)_{i \\in [n]}$ the principal angles of the pair $(E_n,F_n)$. As $n \\to \\infty$, the empirical distribution $\\frac{1}{n} \\sum \\delta_{\\theta_i^n}$ converges towards the uniform distribution on $[0,\\pi\/2]$.\n\\end{corollary}\n\nWe could not locate Corollary \\ref{cor:principal-angles-uniform} in the literature. It would be interesting to give a direct proof of this limit theorem, given the very simple form of the limit distribution.\n\n\nWe can now reverse our proof strategy and compute via principal angles the distribution of any self-adjoint polynomial in $p, q$. A basic case, the distribution of $p+q$, is called the \\emph{free additive convolution} of $\\mathsf{B}(\\alpha)$ and $\\mathsf{B}(\\beta)$ and is denoted by $\\mathsf{B}(\\alpha) \\boxplus \\mathsf{B}(\\beta)$. Although techniques to compute free additive convolutions are available (such as the $R$-transform or Boolean cumulants), their implementation is not so obvious. We could not locate the computation of $\\mathsf{B}(\\alpha) \\boxplus \\mathsf{B}(\\beta)$ in the literature (its Cauchy transform appears in \\cite[Section 4.3]{SpeicherRao} as the solution to a $4$th degree equation, but the inversion step to write the density explicitly is nontrivial). 
While such a computation is doable by standard methods, we believe our derivation from Theorem \\ref{theo:limit-principal-angles} to be more economical.\n\n\\begin{theorem} \\label{theo:boxplus-bernoulli}\nFor $\\alpha$, $\\beta$ in $[0,1]$, define\n\\begin{align*} \n\\gamma_1 = 1 - \\sqrt{\\beta(1-\\alpha)} - \\sqrt{\\alpha(1-\\beta)}, \\\\ \n\\gamma_2 = 1 - \\sqrt{\\beta(1-\\alpha)} + \\sqrt{\\alpha(1-\\beta)}, \\\\ \n\\gamma_3 = 1 + \\sqrt{\\beta(1-\\alpha)} - \\sqrt{\\alpha(1-\\beta)}, \\\\ \n\\gamma_4 = 1 + \\sqrt{\\beta(1-\\alpha)} + \\sqrt{\\alpha(1-\\beta)}.\n\\end{align*}\nThe free additive convolution of Bernoulli distributions is given by\n\\[ \\mathsf{B}(\\alpha) \\boxplus \\mathsf{B}(\\beta) = \\max(1-\\alpha-\\beta,0) \\delta_0 \n+ |\\alpha-\\beta| \\delta_1 + \\max(\\alpha+\\beta-1,0) \\delta_2 + \\nu\\]\nwhere $\\nu$ is the absolutely continuous measure supported on \n\\[ \\left[\\gamma_1,\\min(\\gamma_2,\\gamma_3)\\right] \\cup \\left[\\max(\\gamma_2,\\gamma_3),\\gamma_4\\right] \\]\nwith density given by\n\\[ g(t) = \\frac{\\sqrt{-\n(t-\\gamma_1)(t-\\gamma_2)(t-\\gamma_3)(t-\\gamma_4)\n}}{\\pi t (2-t)|t-1|}. \\]\nThe total mass of $\\nu$ equals $2\\min(\\alpha,\\beta,1-\\alpha,1-\\beta)$.\n\\end{theorem}\n\nIn the special case $\\alpha=\\beta=1\/2$, we recover the well-known fact that $\\nu$ is the arcsine distribution supported on $[0,2]$.\n\n\\begin{proof}\nWe use the same notation as in Theorem \\ref{theo:limit-principal-angles}. By Proposition \\ref{prop:asymptotic-freeness}, $\\mathsf{B}(\\alpha) \\boxplus \\mathsf{B}(\\beta)$ is the limit of the sequence $\\mu_{\\mathrm{sp}}(P_{E_n}+P_{F_n})$. On the other hand, we know from Proposition \\ref{prop:angles-formula} that\n\\begin{align*} \\mu_{\\mathrm{sp}}(P_{E_n} + P_{F_n}) \\ = & \\ \\frac{\\max(n-k_n-l_n,0)}{n} \\delta_0 + \\frac{|k_n-l_n|}{n} \\delta_1 + \\frac{\\max(k_n+l_n-n,0)}{n} \\delta_2 \\\\\n&+ \\frac{1}{n} \\sum_{i \\in [r_n]} \\delta_{1 - \\cos \\theta_i^n} + \\delta_{1 + \\cos \\theta_i^n}.\n\\end{align*}\nAssume that $\\alpha \\leq \\beta$ without loss of generality, so that \n$\\gamma_1 = 1 -\\sqrt{\\phi_+}$, $\\gamma_2 = 1 -\\sqrt{\\phi_-} \\leq \\gamma_3 = 1+ \\sqrt{\\phi_-}$ and $\\gamma_4 = 1 +\\sqrt{\\phi_+}$. On both $[\\gamma_1,\\gamma_2]$ and $[\\gamma_3,\\gamma_4]$, the density $g$ is given by the pushforward formula $(s \\circ \\varphi)|\\varphi'|$, where $\\varphi(t)=\\arccos|1-t|$. The result follows.\n\\end{proof}\n\nIn principle, this approach can be used to compute the distribution of a general self-adjoint polynomial in two free projections as the pushforward of the measure described in Theorem \\ref{theo:limit-principal-angles}. \nWe give three examples below.\n\n\\begin{example}[Commutator of free projections]\nWe consider the polynomial $\\imath(pq-qp)$, where the factor $\\imath$ is introduced to make the operator self-adjoint. An immediate adaptation of the proof of Theorem \\ref{theo:boxplus-bernoulli} gives that the distribution of $\\imath(pq-qp)$ equals\n\\[\\max(|2\\alpha-1|,|2\\beta-1|) \\delta_0 + (\\chi_+)_*\\mu + (\\chi_-)_*\\mu,\\]\nwhere the last terms are the pushforwards of the measure $\\mu$ defined in \\eqref{eq:boxtimes} by the maps $\\chi_\\pm(t) = \\pm \\sqrt{t(1-t)}$.\nThis result has already been obtained in \\cite[p.559--560]{NSDuke}. 
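For the reader's convenience, here is the elementary computation behind the maps $\\chi_\\pm$: on each $2$-dimensional block of \\eqref{eq:joint-space}, writing $x = \\cos^2 \\theta_i$, we have
\\[ \\imath(PQ-QP) = \\begin{pmatrix} 0 & \\imath \\cos \\theta_i \\sin \\theta_i \\\\ -\\imath \\cos \\theta_i \\sin \\theta_i & 0 \\end{pmatrix}, \\]
whose eigenvalues are $\\pm \\cos \\theta_i \\sin \\theta_i = \\pm\\sqrt{x(1-x)} = \\chi_\\pm(x)$.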
\nWe point out that in the case $\\alpha = \\beta =1\/2$, the distribution of $\\imath(pq-qp)$ is the arcsine distribution supported on $[-1,1]$.\n\\end{example}\n\n\\begin{example}[Anticommutator of free projections]\nThe free anticommutator $pq+qp$ has attracted some attention in recent years \\cite{FMNS}. While one may repeat the argument given in the proof of Theorem \\ref{theo:boxplus-bernoulli}, it is actually simpler to observe that $pq+qp$ can be written as $(p+q)^2-(p+q)$. It follows that its distribution is the pushforward of $\\mathsf{B}(\\alpha) \\boxplus \\mathsf{B}(\\beta)$ under the map $t \\mapsto t^2-t$. \n\nWe now detail the computations in the special case $\\alpha=\\beta=1\/2$. The distribution $\\mathsf{B}(1\/2) \\boxplus \\mathsf{B}(1\/2)$ has density\n\\[ h(x) = \\frac{1}{\\pi \\sqrt{x(2-x)}}. \\]\nThe map $t \\mapsto t^2-t$ is a bijection from $[0,1\/2]$ to $[-1\/4,0]$ with inverse map $\\psi_-(x) = \\frac{1 - \\sqrt{1+4x}}{2}$, and also from $[1\/2,2]$ to $[-1\/4,2]$ with inverse map $\\psi_+(x) = \\frac{1 + \\sqrt{1+4x}}{2}$. Using the chain rule, we obtain the density for $pq+qp$ as\n\\[ u= (h\\circ \\psi_-)|\\psi_-'| {\\bf 1}_{[-1\/4,0]} + (h \\circ \\psi_+)|\\psi_+'| {\\bf 1}_{[-1\/4,2]} ,\\]\nwhich can be written explicitly as\n\\[ u(x) = \\begin{cases} \\frac{\\sqrt{2}}{\\pi \\sqrt{1+4x}} \\left(\\frac{1}{\\sqrt{1-2x-\\sqrt{1+4x}}}+\\frac{1}{\\sqrt{1-2x+\\sqrt{1+4x}}} \\right) & \\textnormal{ if } -\\frac{1}{4} \\leq x \\leq 0, \\\\ \n\\frac{\\sqrt{2}}{\\pi \\sqrt{1+4x} \\sqrt{1-2x+\\sqrt{1+4x}}} & \\textnormal{ if } 0 \\leq x \\leq 2. \\end{cases} \\]\nThis formula is much simpler than the one obtained in \\cite[Proposition 6.11]{FMNS}. \\end{example}\n\n\\begin{example}\nOur last example is the more involved polynomial $p+qpq$, for which the usual free probability techniques seem ill-suited. To obtain reasonable formulas, we again restrict to the case where $p$ and $q$ are free projections with distribution $\\mathsf{B}(1\/2)$. We first compute the eigenvalues of $A+B AB$, where $A$ and $B$ are $1$-dimensional projections with angle $\\theta$ between their ranges, to be\n\\[ \\frac{1 + \\cos^2 \\theta \\pm \\sqrt{5 \\cos^4 \\theta - 2 \\cos^2 \\theta + 1}}{2} .\\]\nDenote this quantity by $\\rho_\\pm(\\cos^2\\theta)$.\nWe may describe the distribution of $p+qpq$ as the sum of pushforwards of $\\mathsf{B}(1\/2) \\boxtimes \\mathsf{B}(1\/2)$ (i.e., the arcsine distribution) under $\\rho_+$ and under $\\rho_-$. After routine computations, we obtain for $p+qpq$ a distribution supported on $[0,1\/5] \\cup [1,2]$ and with density\n\\[ x \\mapsto \\begin{cases} \n \\frac{1}{2\\pi \\zeta(x) \\sqrt{2x}}\\left( \\frac{3-5x +\\zeta(x)}{\\sqrt{3-3x+\\zeta(x)}} + \n \\frac{3-5x -\\zeta(x)}{\\sqrt{3-3x-\\zeta(x)}} \\right)\n& \\textnormal{ if } 0 < x < \\frac 15\\\\ \\frac{5x-3-\\zeta(x)}{2\\pi \\zeta(x) \\sqrt{2x}\\sqrt{3-3x+\\zeta(x)}}& \\textnormal{ if } 1 < x \\leq 2, \\end{cases}\\]\nwhere $\\zeta(x)=\\sqrt{5x^2-6x+1}$. 
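As a sanity check, this limit density can be compared with the spectrum of an explicit random instance. The following sketch (Python with NumPy; the helper name and the sampling scheme are ours) generates the eigenvalue data for such a comparison.
\\begin{verbatim}
import numpy as np

def random_projection(n, k, rng):
    # orthogonal projection onto the span of k standard Gaussian vectors
    u, _ = np.linalg.qr(rng.standard_normal((n, k)))
    return u @ u.T

rng = np.random.default_rng(0)
n = 2000
P = random_projection(n, n // 2, rng)
Q = random_projection(n, n // 2, rng)
eig = np.linalg.eigvalsh(P + Q @ P @ Q)  # spectrum of P + QPQ
# a histogram of eig approximates the density above
\\end{verbatim}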
In Figure~\\ref{figure1} we compare this limit distribution with its approximation by two half-rank projections in $\\mathbf{R}^{2000}$.\n\\begin{figure}[htpb]\n\\centerline{\\includegraphics[scale=.35]{p-plus-qpq.png}}\n\\caption{Histogram of eigenvalues of $P+QPQ$ when $P, Q$ are projections onto independent Haar distributed subspaces in $\\mathsf{G}_{2n,n}$ for $n=1000$, together with the limit distribution.}\n\\label{figure1}\n\\end{figure}\n\\end{example}\n\n\\medskip\n\nMore generally, our method applies to describe the distribution of a polynomial in two free elements whose distributions are supported on two points, since such elements are affine images of projections. Extending the method to distributions supported on three points seems out of reach.\n\n\\section*{Acknowledgements}\nWe thank the authors of \\cite{FMNS} for fruitful discussions. The author was supported in part by ANR (France) under the grant ESQuisses (ANR-20-CE47-0014-01).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction}\n\n\n\n\n\\IEEEPARstart{L}{ocomotion} in virtual environments is a hotspot for virtual reality (VR) research. Implementing an intuitive and natural locomotion interface is an important consideration when designing a new VR environment and system. Low-cost optical tracking devices for body and hand skeleton tracking, such as the Kinect sensor and the Leap Motion sensor, have resulted in many methods for body gesture\\cite{lun_survey_2015} and hand gesture recognition \\cite{bachmann_review_2018}. Ample opportunities have become available for VR researchers to investigate the utility of hand gestures for virtual locomotion.\n\nOne major advantage of these low-cost optical sensors is that they do not require attaching markers or other tracking devices to the body parts to be tracked while providing tracking information that is sufficiently accurate for interaction in VR. Compared to locomotion interfaces using gamepads, hand gesture interfaces do not require users to hold devices in their hands; hence, more sophisticated gestures can be recognized, and such interfaces are generally more flexible than VR systems using controllers in terms of gesture recognition (although implementation of force feedback without holding controllers or wearing exoskeletons remains a challenging problem). On the other hand, if we compare virtual locomotion interfaces using hand gestures to interfaces using body tracking with the Kinect sensor or other full-body optical tracking systems, hand gesture interfaces do not involve large-scale body movements. This gives hand gesture interfaces the advantage that they can be operated in places with limited physical space. Such scenarios typically include personal gaming, virtual classrooms and teleoperation, in which large-scale tracking may not be available due to constrained physical space or other reasons. Combined with hand gesture recognition methods for other types of interaction, such interfaces can enable a wide range of interaction activities in a VR system. More recent VR headsets, such as the Oculus Quest 2 and the Microsoft Hololens 2, have integrated hand tracking modules and algorithms. 
This makes hand gesture locomotion interfaces, and other interaction using hand gestures, more accessible, as additional hand tracking hardware is not required when using these headsets.\n\nSome previous studies on locomotion using hand gestures used different geometric features of hands for gesture recognition\n\\cite{cardoso_comparison_2016,cisse_user_2020}. In such designs, each hand shape with a set of geometric features corresponds to a specific walking speed. By designing a set of hand gestures with different hand shapes, one can achieve variable walking speed with a set of hand gestures tracked by optical devices (\\textit{e.g.} the Leap Motion sensor). Another approach was to use the distance between the index finger and the centre of a tracked hand and map the distance to locomotion speed \\cite{huang_design_2019}. In addition, methods that simulate bipedal walking using two fingers were designed based on multi-touch pads\\cite{kim_finger_2008,yan_let_2016}. Although previous hand gesture interfaces have been assessed on a case-by-case basis, a comparison among different hand gesture interfaces that reveals their differences in terms of performance and user preference is lacking. In addition, it is uncertain how these hand gesture interfaces compare to locomotion interfaces using gamepads. \n\nTo answer these research questions, we presented three hand gesture interfaces and their algorithms, called the Finger Distance gesture, the Finger Number gesture and the Finger Tapping gesture, which were inspired by previous studies \\cite{kim_finger_2008,huang_design_2019,cardoso_comparison_2016,zhao_learning_2016}. These are typical gestures that people are familiar with in their daily lives or can naturally come up with when they intend to make hand gestures. For comparison, we also designed and implemented a gamepad interface based on the Xbox One controller. We compared these four interfaces using two virtual locomotion tasks, which are called the target pursuit task and the waypoints navigation task. The first task evaluated the performance of the gestures in terms of their usability in speed control, while the second task focused on waypoints navigation when direction control was introduced. We also assessed user preference in each task through a subjective user interface questionnaire.\n\n\nThe goal of the present study was to compare these four interfaces in terms of their performance and user preference in virtual locomotion tasks.\nThe main contributions of the study were two-fold:\n\n(1) We presented three hand gesture interfaces and their algorithms for virtual locomotion.\n\n(2) We systematically evaluated these three hand gesture interfaces and a gamepad interface using two virtual locomotion tasks and demonstrated their performance and user preference.\n\nThe rest of the paper is organized as follows: Section 2 discusses related work; Section 3 presents hand gesture interfaces, a user interface questionnaire and VR hardware and software for experiments; Section 4 presents the design and results of two VR experiments that evaluated four interfaces; Section 5 provides discussion and Section 6 draws the conclusion of the study. 
\n\n\\section{Related Work}\nIn this section, we review previous work on virtual locomotion interfaces using body movements (\\textit{i.e.} leaning, head movements, upper-limb and lower-limb movements, \\textit{etc.}) and hand gestures.\n\n\\subsection{Locomotion Interface using Body Movements}\nZielasko \\textit{et al.}\\cite{zielasko_evaluation_2016} compared the adapted walking-in-place, the accelerator pedal, leaning, the shake your head technique and the gamepad interfaces for virtual locomotion. These interfaces were evaluated using a virtual locomotion task that assessed factors, including walking path, task completion time and comfort. Results showed that the leaning technique and the accelerator pedal technique worked best. They also concluded that the proposed interfaces were easy to learn and inexpensive to integrate into existing VR systems.\n\nNabiyouni \\textit{et al.}\\cite{nabiyouni_comparing_2015} compared three different locomotion interfaces with different levels of naturalism. These interfaces included natural walking (fully natural), the VirtualSphere (semi-natural) interface and the gamepad interface (non-natural). To evaluate these interfaces, participants were asked to perform straight line walking and multi-segment line walking. Objective factors and subjective factors were studied. Results showed that fully natural and non-natural interfaces performed better than the semi-natural interface (the VirtualSphere).\n\nKitson \\textit{et al.}\\cite{kitson_comparing_2017} compared four interfaces using body movements for virtual locomotion with the joystick interface. These four interfaces included the NaviChair, the MuvMan, the head-directed technique and the swivel chair, which were based on tracking body movements. Their results showed that participants preferred the NaviChair and the MuvMan, and they concluded that the joystick is still an easy-to-use and accurate interface. \n\nNguyen-Vo \\textit{et al.}\\cite{nguyen-vo_naviboard_2019} compared the controller, the NaviChair, the Naviboard and real walking interfaces. Their results showed that body-based motion cues and control provided by a low-cost leaning and stepping interface were sufficient for virtual locomotion.\n\nCoomer \\textit{et al.}\\cite{coomer_evaluating_2018} compared the joystick interface, the arm-cycling, the point-tugging and the teleporting techniques. To compare these interfaces, a virtual navigation task was designed that asked participants to walk in a virtual town and look for treasure chests. Their results showed that the arm-cycling method was the best compared to the three other methods. The arm-cycling method resulted in a better sense of spatial awareness and lower simulator sickness scores for participants.\n\nRecently, Buttussi and Chittaro \\cite{buttussi_locomotion_2019} compared the joystick, the teleporting and the leaning interfaces. Results showed that the teleporting interface had better performance compared to other interfaces and also caused less nausea.\n\nNumerous Walking-in-Place (WIP) techniques \\cite{slater_virtual_1995,templeman_virtual_1999,yan_new_2004,feasel_llcm-wip:_2008,wendt_gud_2010,williams_evaluation_2011,bruno_new_2013,bruno_hip-directed_2017,tregillus_vr-step:_2016,hanson_improving_2019} have been proposed over the years. These methods usually use optical trackers or other types of trackers to monitor lower leg motions (including knees and feet, \\textit{etc.}) and use the tracked motion data to calculate the forward walking speed of a user. 
In some studies\\cite{wendt_gud_2010,bruno_new_2013}, comparisons were made between WIP techniques, which revealed differences in performance and user preference between these methods. Another approach for overground walking is the redirected walking approach \\cite{razzaque_redirected_2001,razzaque_redirected_2002}, which manipulates the virtual scene through rotation rate without users noticing it. This technique was applied to systems using VR headsets \\cite{razzaque_redirected_2001} or with projected displays \\cite{razzaque_redirected_2002}. A more recent method proposed by Sun \\textit{et al.} \\cite{sun_towards_2018} redirects users during virtual walking when they make saccadic eye movements. Mechanical repositioning is also a locomotion technique that involves body motion tracking. Examples include treadmills\\cite{souman_cyberwalk:_2008,wang_real_2020}, foot platforms\\cite{iwata_gait_2001}, pedalling devices\\cite{allison_first_2000} and spheres\\cite{medina_virtusphere:_2008}, \\textit{etc}. Finally, several studies have proposed the use of head motions for teleoperation of robots or for virtual locomotion \\cite{higuchi_flying_2013,pittman_exploring_2014,zhao_effects_2018,hashemian_headjoystick_2020}. \n\nWhile we briefly reviewed locomotion techniques that involved body movements, comprehensive coverage of the subject can be found in \\cite{nilsson_natural_2018,al_zayer_virtual_2018}.\n\n\\subsection{Locomotion Interface using Hand Gestures}\nKim \\textit{et al.}\\cite{kim_finger_2008} proposed a hand gesture locomotion technique using a multi-touch pad. The gesture required users to use two fingers, the index finger and the middle finger, to move back and forth on a touchpad to simulate natural walking using two legs. An experiment was conducted to evaluate the method using a locomotion task that asked participants to pass a few waypoints. A joystick interface was also introduced as the control interface for comparison. Subjective factors, including satisfaction, fastness, easiness and tiredness, were evaluated but no objective parameters were analysed in the study. A subsequent study \\cite{kim_effects_2010} evaluated the method on spatial knowledge acquisition.\n\nSimilarly, Yan \\textit{et al.}\\cite{yan_let_2016} also proposed locomotion methods based on a multi-touch pad. These methods included the walking gesture, the segway gesture and the surfing gesture, which allowed users to travel in different modes and at different speeds with different gestures. For each proposed gesture, there was a different locomotion task to compare the interface with the gamepad interface. Both objective and subjective factors were studied, and the authors concluded that the gamepad interface was more time efficient, while gesture interfaces based on a multi-touch pad had similar quality compared to the gamepad interface. In addition, switching between different travel modes was faster on the multi-touch pad.\n\nCardoso \\cite{cardoso_comparison_2016} proposed a method using the number of extended fingers to control locomotion speed. Opening and closing both hands were used to start and stop locomotion. The proposed method was compared to a gaze-based locomotion approach using a head-mounted display and a gamepad interface. Results showed that the gamepad interface was better than the two other interfaces. 
However, implementation details of the gesture recognition algorithms of the hand gesture interface were not given in the paper.\n\nHuang \\textit{et al.}\\cite{huang_design_2019} proposed a technique to control locomotion speed based on the distance between the tip of the index finger and the centre of a tracked hand. The proposed method was evaluated on a linear locomotion task, in which participants were asked to walk to targets at different distances as quickly as possible. Their study experimented with different combinations of parameters of the algorithm but the method has not been compared to other locomotion interfaces using hand gestures.\n\nMore recently, Cisse \\textit{et al.}\\cite{cisse_user_2020} proposed a method to design hand gestures by inviting professionals in architectural design, with exposure to VR in a professional capacity, to elicit their preferred hand gestures for locomotion. In total, sixty-four gestures were elicited from six professionals. These gestures were categorized into eight classes, which included moving forward, moving backwards and moving up a floor, \\textit{etc}. To evaluate and select a set of efficient gestures, twelve university students were invited to evaluate different combinations of gestures for virtual locomotion. Factors, including task completion time and intuitiveness, \\textit{etc}., were included in the study. However, gamepad interfaces were not included for comparison. The designed gestures were static gestures, which potentially lack information that can be given by dynamic gestures through hand and finger movements.\n\nAnother recent work by Caggianese \\textit{et al.}\\cite{caggianese_freehand-steering_2020} proposed three different methods to control locomotion direction: (1) using the pointing direction of a finger; (2) using palm orientation; and (3) using head orientation tracked by a VR headset. However, a speed control mechanism was not studied in this research. Initiation and termination of locomotion were controlled by extending and closing a tracked hand. To evaluate three different steering control methods, objective factors and subjective factors were considered. A limitation of the study was that there was no method to control walking speed.\n\nSch\u00e4fer \\textit{et al.} \\cite{schafer_controlling_2021} compared four different static hand gestures for teleportation-based virtual locomotion using a single hand or both hands. Their results showed that all proposed methods are viable options for virtual locomotion. They also recommended adding these methods to VR systems and letting users choose their preferred method.\n\nTo the authors' knowledge, no comparison had been made among different hand gesture interfaces and gamepad interfaces for virtual locomotion on speed control and waypoints navigation. 
Our study presented three hand gesture interfaces based on previous studies, together with a gamepad interface, and compared their performance and user preference through two virtual locomotion tasks, which respectively evaluated their usability in speed control and waypoints navigation.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.25\\textwidth]{figures\/gesture\/finger-distance.jpg}\n \\caption{Finger Distance gesture ($l$ denotes the Euclidean distance between the fingertips of the index and the thumb of a tracked hand).}\n \\label{fig:finger_distance}\n\\end{figure}\n\n\\begin{figure*}[!ht]\n \\centering\n \\subfigure[Stop (0 km\/h)]{\n \\includegraphics[width=0.15\\textwidth]{figures\/gesture\/number-0.jpg}\n }\n \\subfigure[1 km\/h]{\n \\includegraphics[width=0.15\\textwidth]{figures\/gesture\/number-1.jpg}\n }\n \\subfigure[2 km\/h]{\n \\includegraphics[width=0.15\\textwidth]{figures\/gesture\/number-2.jpg}\n }\n \\subfigure[3 km\/h]{\n \\includegraphics[width=0.15\\textwidth]{figures\/gesture\/number-3.jpg}\n }\n \\subfigure[4 km\/h]{\n \\includegraphics[width=0.15\\textwidth]{figures\/gesture\/number-4.jpg}\n }\n \\subfigure[5 km\/h]{\n \\includegraphics[width=0.15\\textwidth]{figures\/gesture\/number-5.jpg}\n }\n \\caption{Finger Number gesture. Sub-figures (a) - (f) illustrate six different gestures that correspond to walking speeds from 0 km\/h to 5 km\/h.}\n \\label{fig:finger_number}\n\\end{figure*}\n\n\\section{Methods}\n\\subsection{Finger Distance Gesture}\nThe Finger Distance gesture is a method that controls locomotion speed via the Euclidean distance between the fingertips of a user's thumb and index finger. This was inspired by previous work by Huang \\textit{et al.}\\cite{huang_design_2019} who used the distance between the fingertip of a user's index finger and the hand centre to control walking speed. We decided to use the Euclidean distance between the tips of the thumb and index fingers. Intuitively, the distance between the thumb and index fingertips maps to walking speed more linearly than the distance between the tip of the index finger and the centre of the hand. In our method, the locomotion speed is calculated by the following equations:\n\n\\begin{equation}\ns_{walk} = \\frac{l-d}{r-d}(s_{max}-s_{min})\n\\end{equation}\n\\begin{equation}\nl = \\left \\|F_{thumb} - F_{index}\\right \\|_2\n\\end{equation}\n\n\\noindent where $s_{walk}$ is the calculated walking speed, $s_{max}$ and $s_{min}$ are pre-defined maximum and minimum walking speeds that a user can achieve during locomotion, $r$ ($r$ = 8 cm) a pre-defined reference Euclidean distance value between fingertips of the thumb and the index of a hand, $d$ ($d$ = 2.5 cm) the dead zone that ensures a value close to zero can be obtained when users pinch their thumb and index finger together, $F_{thumb}$ and $F_{index}$ the tracked 3-D positions of the thumb and the index fingers and $l$ ($l\\in(d,r]$) the calculated Euclidean distance between these two fingers. The values of $r$ and $d$ were empirical values determined during our initial testing. We found that this choice of values made controlling locomotion speed with the gesture relatively comfortable. Whenever $l$ is less than or equal to $d$, the calculated walking speed $s_{walk}$ is set to zero to stop locomotion. An illustration of the proposed gesture is given in Figure~\\ref{fig:finger_distance}, in which the distance between the tips of the thumb and index is depicted. This simple equation set enabled accurate control of locomotion speed. 
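As an illustration, a minimal Python sketch of this speed mapping is given below (NumPy-based; it assumes fingertip positions are supplied in metres, and the clamping of $l$ to the reference distance $r$ is our addition):
\\begin{verbatim}
import numpy as np

def finger_distance_speed(f_thumb, f_index,
                          s_min=0.0, s_max=5.0,  # km/h
                          r=0.08, d=0.025):      # metres
    # Equations (1)-(2): map the thumb-index fingertip
    # distance l to a walking speed
    l = np.linalg.norm(np.asarray(f_thumb) - np.asarray(f_index))
    if l <= d:          # dead zone: pinching stops locomotion
        return 0.0
    l = min(l, r)       # clamp to the reference distance (assumed)
    return (l - d) / (r - d) * (s_max - s_min)
\\end{verbatim}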
No filtering is necessary to pre-process the fingertip data.\n\n\\begin{figure*}[!ht]\n \\centering\n \\subfigure[Index finger moving down]{\n \\includegraphics[width=0.18\\textwidth]{figures\/gesture\/fingertip-down.jpg}\n }\n \\hspace{5em}\n \\subfigure[Index finger moving up]{\n \\includegraphics[width=0.18\\textwidth]{figures\/gesture\/fingertip-up.jpg}\n }\n \\caption{Finger Tapping gesture. Arrows in sub-figures (a) and (b) illustrate the upward and downward motions of the index finger of a tracked hand during locomotion.}\n \\label{fig:finger_tapping}\n\\end{figure*}\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.25\\textwidth]{figures\/gesture\/direction2.png}\n \\caption{Direction control using the left hand, illustrated in the Leap Motion sensor's coordinate system ($\\theta$ denotes the steering angle).}\n \\label{fig:direction_control}\n\\end{figure}\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.25\\textwidth]{figures\/gesture\/gamepad2.png}\n \\caption{Gamepad interface based on the Xbox One controller. Pushing the left joystick left or right controls walking direction and pushing the right joystick forward controls walking speed.}\n \\label{fig:gamepad}\n\\end{figure}\n\n\\subsection{Finger Number Gesture}\nThe Finger Number gesture is a set of gestures that controls the locomotion speed using the number of extended fingers. This idea was originally proposed by Cardoso \\cite{cardoso_comparison_2016} but details of the algorithms were not described in his paper. To design and implement an algorithm that recognizes the number of extended fingers, we resorted to Marin \\textit{et al.}'s robust feature descriptors \\cite{marin_hand_2016}, which normalize the distance between fingertips and the hand centre with respect to different hand orientations and varied distance to the Leap Motion sensor:\n\n\\begin{equation}\nP_i^x=(F_i-C)\\cdot(N\\times H)\n\\end{equation}\n\\begin{equation}\nP_i^y=(F_i-C)\\cdot H\n\\end{equation}\n\\begin{equation}\nP_i^z=(F_i-C)\\cdot N\n\\end{equation}\n\n\\noindent where $P_i^x$, $P_i^y$ and $P_i^z$ are the extracted features of a tracked 3-D fingertip position $F_i$ ($i$ is the index of a tracked finger), $C$ the tracked hand centre, $N$ the normal perpendicular to the palm of a tracked hand and $H$ the pointing direction of fingertips given by the Leap Motion sensor. $\\cdot$ and $\\times$ denote dot product and cross product, respectively. The set of descriptors forms a column vector of fifteen elements of a single tracked hand. As the control of the start and the stop of locomotion involves both hands (\\textit{i.e.}, extending fingers to start walking and making fists with both hands to stop), we use the set of descriptors to extract features of both hands and stack the column vectors of features of both hands to form a vector $P$ of thirty elements. This compact feature representation enabled us to train a multi-class Support Vector Machine (SVM) using the LibSVM library \\cite{chang_libsvm:_2011}, which made it possible to recognize six types of hand gestures for controlling start and stop (\\textit{i.e.}, by making fists with both hands) and controlling locomotion speed at five different levels (from 1 km\/h to 5 km\/h based on the number of extended fingers as shown in Figure~\\ref{fig:finger_number}). 
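A Python sketch of this recognition pipeline is given below (NumPy plus the LibSVM Python bindings; the per-finger ordering of the stacked features and the helper names are ours, and the model is assumed to have been trained as described next):
\\begin{verbatim}
import numpy as np
from libsvm.svmutil import svm_predict

def hand_features(fingertips, c, n, h):
    # Equations (3)-(5): express each fingertip, relative to
    # the hand centre c, in the frame (n x h, h, n)
    frame = np.stack([np.cross(n, h), h, n])
    return np.concatenate([frame.dot(f - c) for f in fingertips])

def finger_number_speed(model, left, right):
    # Stack both hands' 15-element descriptors into the vector P
    # and classify; the label (0-5) doubles as the speed in km/h
    p = np.concatenate([hand_features(*left), hand_features(*right)])
    labels, _, _ = svm_predict([0], [p.tolist()], model, '-q')
    return labels[0]
\\end{verbatim}
Here \\texttt{left} and \\texttt{right} bundle the tracked fingertips, hand centre, palm normal and pointing direction of each hand.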
Our definition of the hand gestures with respect to walking speed differs from that of Cardoso's method. In his method, starting and stopping locomotion are controlled by extending and closing both hands. In our implementation, extending an arbitrary number of fingers of the right hand (with the left hand fully extended) starts walking at the speed set by the number of extended fingers of the right hand (also see Figure~\\ref{fig:finger_number}). To train a multi-class SVM for gesture recognition, we collected hand gesture data from twelve volunteers (age: 18-21, 6 males and 6 females) using a custom software application developed in Python 3.7 that enabled data collection and labelling. For each participant, we collected four sessions of data. During each data collection session, each gesture was collected for 5 s. We used the first two sessions of hand gesture data of all participants to train a multi-class SVM with the Radial Basis Function (RBF) kernel. We then used the remaining two sessions of the collected data to test the classifier. Overall, the average accuracy of classification was 99.94\\%, which made the trained classifier readily usable for recognizing the Finger Number gesture for virtual locomotion using the $svm\\_predict()$ function in the LibSVM library:\n\n\\begin{equation}\ns_{walk} = svm\\_predict(model, P)\n\\end{equation}\n\n\\noindent where $s_{walk}$ is the calculated walking speed, $model$ the trained multi-class SVM model and $P$ the stacked vector of features extracted from both hands using Marin \\textit{et al.}'s feature descriptor \\cite{marin_hand_2016}. \n\n\\subsection{Finger Tapping Gesture}\nThe Finger Tapping gesture is a method that controls locomotion speed via tapping (up and down) motions of the index finger of a tracked hand (see Figure~\\ref{fig:finger_tapping} for an illustration of the proposed gesture). Our initial idea was to use two fingers (the index and middle fingers) to simulate the walking motion of two legs during actual locomotion, similar to Kim \\textit{et al.}'s Finger Walking in Place (FWIP) method \\cite{kim_finger_2008,kim_effects_2010}, and to use the Leap Motion sensor to track finger motions instead of a multi-touch pad. Our initial design showed that using two fingers to simulate walking motion and control locomotion speed was difficult to achieve, as moving one finger (the index finger) also affected the tracked fingertip position of the other (the middle finger). Thus, to make this algorithm work, we decided to use the tapping motion of a single finger (the index finger) to control walking speed. The tapping motion of the tip of the index finger can be viewed as a 3-D signal in the time domain. As the gesture requires a user to tap the finger up and down, with their hands positioned a certain distance above the Leap Motion sensor, we only needed the $y$-component of the tracked fingertip speed to detect the time interval between two consecutive peaks and use that time interval to control walking speed. Zhao and Allison proposed gait analysis algorithms for real-time virtual locomotion using a treadmill with a large-scale stereo projected display \\cite{zhao_learning_2016} and also adapted the algorithm for offline gait analysis to study the role of stereoscopic viewing in virtual locomotion \\cite{zhao_role_2020}. This method works by first filtering foot motion signals (speed signals for real-time application or position signals for offline gait analysis) with low-pass Butterworth filters. Then, an empirical threshold is used as the starting point for finding the initial swing and terminal swing of a step through gradient descent. 
The search stops whenever the gradient ascends (indicating a different gait cycle is detected) or the minimum threshold is reached. The index of the maximum value between the initial swing and the terminal swing is considered the mid swing. We considered the problem of detecting the peaks of the index fingertip speed signal as finding the mid-swings of steps. In the present implementation, a 2nd order low-pass Butterworth filter with a cut-off frequency of 5 Hz was applied to the $y$-component of the speed signal (buffered for 1 s using a queue) of the index finger. By adapting the algorithm, we were able to detect the peaks between consecutive steps and calculate the time interval between consecutive peaks. The equation to obtain the calculated walking speed is given by:\n\n\\begin{equation}\ns_{walk}=(1-\\frac{t_{step}-t_{min}}{t_{max}-t_{min}})(s_{max}-s_{min})+s_{min}\n\\end{equation}\n\n\\noindent where $s_{walk}$ is the calculated walking speed, $t_{step}$ ($t_{step}\\in(t_{min},t_{max}]$) the time interval between two detected peaks, $t_{max}$ ($t_{max}$ = 0.95 s) and $t_{min}$ ($t_{min}$ = 0.3 s) the pre-defined maximum and minimum time intervals and $s_{max}$ and $s_{min}$ the pre-defined maximum and minimum walking speeds that one can achieve using the method. A larger $t_{step}$ due to slow tapping motion results in a slower walking speed $s_{walk}$, which makes speed control possible. This equation is similar to Tregillus and Folmer's approach \\cite{tregillus_vr-step:_2016}, in which a WIP method was proposed based on the acceleration signals from a mobile phone to detect steps and calculate walking speed based on the time intervals between steps. Additionally, whenever the time interval $t_{step}$ is larger than 0.95 s, the calculated walking speed $s_{walk}$ is set to $s_{min}$ and whenever $t_{step}$ is less than or equal to 0.3 s, the calculated walking speed is set to $s_{max}$.\n\n\\subsection{Direction Control}\nThe three hand gesture locomotion interfaces presented in previous sections required users to use their right hands to control walking speed. To enable direction control during locomotion, our first thought was to use the same hand (right hand) to control both walking speed and walking direction. However, early testing showed that this was not possible as moving fingers to make a gesture with the right hand also changed the tracked hand pointing direction $H$ given by the Leap Motion sensor. Thus, we decided to let users control their walking direction using their left hands. The equations for calculating the steering angle are given by:\n\n\\begin{equation}\nd=V_n\\cdot(H_{init}\\times H_{current})\n\\end{equation}\n\\begin{equation}\n\\theta=sign(d)\\cos^{-1}\\frac{H_{init}\\cdot H_{current}}{\\left\\|H_{init}\\right\\|\\left\\|H_{current}\\right\\|}\n\\end{equation}\n\nwhere the $sign(x)$ function is defined as:\n\\begin{equation*}\nsign(x) = \n\\begin{cases}\n-1, & x<0 \\\\ \n\\ \\ 1, & x\\ge0 \n\\end{cases}\n\\end{equation*}\n\n\\noindent Parameter $d$, through the $sign(x)$ function, controls whether a user is turning left or turning right, $V_n$ the vector that is perpendicular to the $x$-$z$ plane ($V_n=(0,1,0)$), $H_{init}$ the initial vector that points to a user's direction of travel ($H_{init}=(0,0,-1)$), $H_{current}$ the current finger pointing direction tracked by the Leap Motion sensor and $\\theta$ the steering angle that a user intends to achieve. 
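A minimal Python sketch of this steering computation is given below (NumPy-based; clamping the cosine to $[-1,1]$ guards against floating-point error and is our addition):
\\begin{verbatim}
import numpy as np

V_N = np.array([0.0, 1.0, 0.0])      # normal of the x-z plane
H_INIT = np.array([0.0, 0.0, -1.0])  # initial travel direction

def steering_angle(h_current):
    # Equations (8)-(9): signed angle between H_init and the
    # tracked pointing direction of the left hand
    h = np.asarray(h_current, dtype=float)
    d = V_N.dot(np.cross(H_INIT, h))
    c = H_INIT.dot(h) / (np.linalg.norm(H_INIT) * np.linalg.norm(h))
    sign = 1.0 if d >= 0 else -1.0   # sign(x) as defined above
    return sign * np.arccos(np.clip(c, -1.0, 1.0))
\\end{verbatim}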
$H_{current}$ tracked by the Leap Motion sensor is slightly noisy, which made both straight line walking and turning unstable. To ensure smoothness in linear walking and turning, we used the function $SmoothDamp()$ from Unity's library to damp turning motions. An illustration of the direction control gesture is in Figure~\\ref{fig:direction_control}. Users extend their fingers and rotate their palms around the $y$-axis to control their travel direction.\n\n\n\\subsection{Gamepad Interface}\nThe gamepad interface was implemented as a control interface to be compared to the three hand gesture interfaces. In our implementation, pushing the left joystick left and right controls steering and pushing the right joystick upward controls a user's walking speed (see Figure~\\ref{fig:gamepad} for illustration). Moving backwards was not allowed in the experiments, so this function was disabled. As with the gesture interfaces, participants used their left hands for direction control and right hands for speed control.\n\n\\begin{table}%\n\\centering\n\\caption{The User Interface Questionnaire}\n\\label{tab:user_interface}\n\\begin{tabular}{ p{8cm}}\n\\hline\n\\\\[-1em]\nThe interface is easy to learn.\\\\\n\\hline\n\\\\[-1em]\nThe interface is easy to use.\\\\\n\\hline\n\\\\[-1em]\nThe interface is natural and intuitive to use.\\\\\n\\hline\n\\\\[-1em]\nThe interface helps make the task fun.\\\\\n\\hline\n\\\\[-1em]\nUsing the interface is tiring.\\\\\n\\hline\n\\\\[-1em]\nThe interface helps me respond quickly.\\\\\n\\hline\n\\\\[-1em]\nThe interface helps me make accurate responses.\\\\\n\\hline\n\\\\[-1em]\n\\end{tabular}\\\n\\end{table}\n\n\\subsection{User Interface Questionnaire} \\label{sec:ui}\nTo subjectively evaluate the four interfaces, we adapted the user interface questionnaire from \\cite{nabiyouni_comparing_2015,zhao_comparing_2019} and used it in the present study. The questionnaire included statements, such as \"The interface is easy to use\" (see Table~\\ref{tab:user_interface} for the complete list), for users to judge. For both experiments, after completing a block of an experiment with a given interface, participants were asked to rate each factor presented in the questionnaire using a seven-point Likert scale (from strongly disagree to strongly agree) to indicate their preference. This user interface questionnaire allowed us to subjectively evaluate a given interface using factors, including ease-to-learn, ease-to-use, natural-to-use, fun, tiredness, responsiveness and subjective accuracy. These are typical factors for consideration when designing or evaluating new interfaces.\n\n\\subsection{Hardware and Software of the VR system}\nThe software application that included virtual scene presentation, gesture recognition, virtual locomotion tasks and data recording was developed using Unity 2020.1. The software application was hosted on a Windows 10 desktop computer with an Intel i5-10500 CPU, 16 GB memory, and an nVidia Geforce 1660s graphics card with 6 GB graphics memory. Hand movements of participants were tracked by the Leap Motion sensor using the Orion SDK 4.0.0. The display was an AOC 27-inch curved monitor for presenting the virtual scenes of the tasks. VR headsets were not used as sharing headsets among participants is a health concern \\cite{steed_evaluating_2020} and a procedure for sanitizing VR headsets had not been established at the time. 
The avatar that represented a participant in the virtual environment, and that was used for collision detection, was implemented as a 3-D capsule, with the virtual camera placed near the top of the capsule. When users make hand gestures, the walking speed and direction of the capsule are changed. The experimental setup for both experiments is shown in Figure~\\ref{fig:setup}.\n\n\\section{Experiments}\n\\subsection{Experiment 1: Target Pursuit}\n\\subsubsection{Introduction}\nThe purpose of Experiment 1 was to assess the performance and user preference of the four locomotion interfaces on speed control.\n\n\\subsubsection{Participants}\nSixteen undergraduate students (age: 18-25, eleven males and five females) were invited as volunteers for this experiment. All had normal or corrected-to-normal vision. An informed consent form was signed before the experiment. Participants were na\u00efve to the purpose of the experiment.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/setup\/setup3.png}\n\\caption{Experimental setup. The participant sat in front of the 27-inch curved monitor with hands positioned above the Leap Motion sensor. The viewing distance was approximately 60 cm. The virtual scene displayed on the monitor captured in this figure was a snapshot of Experiment 2.}\n\\label{fig:setup}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/setup\/ball.png}\n\\caption{Exemplar snapshot of Experiment 1. Light blue coloured spheres represent a user's tracked fingers and the hand centre. The wooden ball captured in this snapshot was red as the participant was close to the ball.}\n\\label{fig:ball}\n\\end{figure}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/exp1\/ball_errorbar.pdf}\n\\caption{Subjective factors of Experiment 1 (bars denote mean values and error bars denote the standard error of the mean).}\n\\label{fig:ball_errorbar}\n\\end{figure*}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/exp1\/ball_horizontal.pdf}\n\\caption{Detailed responses from participants in Experiment 1.}\n\\label{fig:ball_horizontal}\n\\end{figure*}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/exp1\/objective_ball.pdf}\n\\caption{Objective factors of Experiment 1 (bar plot convention is as in Figure~\\ref{fig:ball_errorbar}).}\n\\label{fig:objective_ball}\n\\end{figure*}\n\n\n\\begin{table*}[!ht]%\n\\centering\n\\caption{Results of Statistical Analyses on Subjective Factors of Experiment 1}\n\\label{tab:ball_subjective}\n\\includegraphics[width=0.8\\textwidth]{tables\/ball_subjective_table.pdf}\n\\end{table*}\n\n\\begin{table*}[!ht]%\n\\centering\n\\caption{Results of Statistical Analyses on Objective Factors of Experiment 1}\n\\label{tab:ball_objective}\n\\includegraphics[width=0.6\\textwidth]{tables\/ball_objective_table.pdf}\n\\end{table*}\n\n\n\\subsubsection{Design}\nWe adapted the experiment from \\cite{zhao_learning_2016} to assess the speed control performance of the four interfaces. We designed a virtual scene (shown in Figure~\\ref{fig:ball}) with foliage and trees in Unity 2020.1 using resources (the Free SpeedTrees Package and the Grass Flowers Pack Free) from the Unity Asset Store. Before the start of an experimental trial, a wooden ball was placed in the scene 3 m in front of the avatar that represented a participant. 
After a trial started, the rolling ball changed its forward speed according to a set of randomly generated speed key frames (with values of 2 km/h, 3 km/h or 4 km/h), which took effect every 10 s. In total, there were seventeen speed key frames per trial. The participants' goal was to pursue the rolling ball using each locomotion interface while maintaining the initial 3 m distance between the avatar and the ball. Direction control was disabled for this task. A complete trial lasted 180 s. To make the distance judgement easier for participants, we manipulated the colour of the rolling wooden ball such that it turned red when the avatar got close and blue when it fell behind. The acceleration of the avatar was set to 0.5 ${\rm m/s^2}$ for all four interfaces. The maximum and minimum locomotion speeds were set to 5 km/h and 0 km/h, respectively. The linear acceleration of the wooden ball was set to 0.3 ${\rm m/s^2}$. Setting the maximum walking speed to 5 km/h and the acceleration to 0.5 ${\rm m/s^2}$ was reasonable, as Teknomo \cite{teknomo_microscopic_2002} reported that the average walking speed of pedestrian flow is 1.38 m/s $\pm$ 0.37 m/s (4.97 km/h $\pm$ 1.33 km/h, mean $\pm$ std) and the average acceleration is 0.68 ${\rm m/s^2}$. Light blue coloured spheres (as shown in Figure~\ref{fig:ball}) were added and rendered on top of all other geometry in the scene to visualise a user's tracked fingertips and hand centre. This decision was made after our initial testing: we found it necessary to give users tracking feedback for their hands so that they knew their hands were being tracked reliably. To control for order effects, we counter-balanced the order of access to the four interfaces using a balanced Latin square design.

\subsubsection{Metrics} \label{sec:exp1_metrics}
Based on the metrics used in \cite{zhao_learning_2016}, we proposed five metrics to study the performance of the interfaces:
\begin{itemize}
\item \textbf{Average position difference \bm{$d_{avg}$}}:\\
The average distance between the avatar and the rolling ball over the entire course. Ideally, the value should be 3 m, as participants were required to maintain this distance throughout a course.
\item \textbf{Standard deviation of position difference \bm{$d_{std}$}}:\\
The standard deviation of the distance between the avatar and the rolling ball over the entire course. This parameter reflects how much the distance oscillated around the 3 m target.
\item \textbf{Average speed difference \bm{$s_{avg}$}}:\\
The average speed difference between the avatar and the rolling ball over the entire course. Ideally, the value should be zero, assuming that participants were able to maintain the 3 m distance perfectly.
\item \textbf{Standard deviation of speed difference \bm{$s_{std}$}}:\\
The standard deviation of the speed difference between the avatar and the rolling ball over the entire course.
This parameter reflects how much the avatar's locomotion speed varied while maintaining the 3 m distance.
\item \textbf{Speed difference at key frames \bm{$s_{inst}$}}:\\
The average speed difference between the avatar and the wooden ball from the instant a speed change of the wooden ball occurred to 0.1 s afterwards. This parameter reflects the transient response of a locomotion interface when used to track speed changes.
\end{itemize}

\subsubsection{Procedure}
During an experimental session, a participant was first introduced to the task and asked to sign an informed consent form. Participants then sat in front of the curved monitor with their hands positioned above the Leap Motion sensor. A researcher sat beside the participant and operated the experimental software to initiate new trials. Participants performed one practice trial to become familiar with the interface. After that, they performed three experimental trials using the same interface, with their locomotion data recorded during the trials. The order of access to the interfaces was determined using a balanced Latin square design. When participants completed a block (consisting of one practice trial and three experimental trials) for a given interface, they rated the interface using the user interface questionnaire introduced in Section~\ref{sec:ui}. They were subsequently introduced to another interface, completed a practice trial and three experimental trials, and filled in another user interface questionnaire. This procedure was repeated until a participant had used all four interfaces.

\subsubsection{Results}
We performed data analyses using R 4.0.3. The independent factor of the statistical analyses was the interface (Finger Distance, Finger Number, Finger Tapping and gamepad), treated as a fixed effect. The dependent factors for the subjective analyses were the factors given by the user interface questionnaire, and those for the objective analyses were the parameters defined in Section~\ref{sec:exp1_metrics}. Participants were treated as a random effect. Subjective ratings from the user interface questionnaire were analysed using the Friedman test, followed by Wilcoxon signed-rank tests with the Bonferroni correction as post-hoc analyses. Figure~\ref{fig:ball_errorbar} shows the mean values and the standard error of the mean, and Figure~\ref{fig:ball_horizontal} shows the distribution of the responses from participants. Objective parameters based on the metrics defined in Section~\ref{sec:exp1_metrics} were extracted from the collected data and analysed using linear mixed-effects models (the NLME package in R). Post-hoc analyses for the objective data were performed using Tukey's HSD (Honestly Significant Difference) test. No significant learning effect of trial number was found on any objective factor; its error contribution was minimal, so it was not included in the analyses. Figure~\ref{fig:objective_ball} shows the mean and the standard error of the mean of all objective parameters.
Results of the statistical analyses on the subjective and objective data are given in Table~\ref{tab:ball_subjective} and Table~\ref{tab:ball_objective}, respectively.

Analyses of the subjective ratings using the Friedman test showed that the interface had significant effects on all factors except fun: ease-to-learn ($p<0.001$), ease-to-use ($p<0.001$), natural-to-use ($p<0.001$), tiredness ($p=0.004$), responsiveness ($p=0.016$) and subjective accuracy ($p=0.006$). Pairwise comparisons were performed using Wilcoxon signed-rank tests with the Bonferroni correction. Results showed that gamepad was significantly easier to learn than Finger Tapping. Although Finger Tapping had a lower mean score on ease-to-learn, the differences from Finger Distance and Finger Number were not statistically significant. This result is reasonable: the gamepad is a traditional device for navigation in games, so participants found it easier to learn, while the hand gesture interfaces were all relatively new to them. Finger Distance, Finger Number and gamepad were found to be significantly easier to use than Finger Tapping, with no significant differences among the three on ease-to-use. A primary reason was that walking speed with Finger Tapping could only be adjusted after two peaks on the speed curve of the index finger had been detected, so there was a latency in adjusting walking speed compared to the other interfaces. We also found that Finger Distance, Finger Number and gamepad were significantly more natural to use than Finger Tapping, again with no significant differences among the three. This suggests that Finger Tapping was somewhat unusual to participants and that they needed practice to become familiar with the interface, hence the lower ratings for this factor. There were no statistically significant differences among the interfaces in terms of fun, but the relatively high mean scores on this factor indicated that all four interfaces were fun for virtual walking in VR. Finger Tapping was found to be significantly more tiring than gamepad, while there was no significant difference among Finger Distance, Finger Number and Finger Tapping, suggesting that the repetitive motions of Finger Tapping caused fatigue. Finger Number was found to be significantly more responsive than Finger Tapping, with no statistically significant differences among Finger Number, Finger Distance and gamepad; this was again due to the latency inherent in the current design of the Finger Tapping gesture. Finally, Finger Number and gamepad were rated significantly more accurate than Finger Tapping. There was no statistically significant difference between Finger Number and gamepad, nor between Finger Distance and Finger Tapping, in terms of subjective accuracy, indicating that Finger Number and gamepad were considered more accurate by participants. Examining Figure~\ref{fig:ball_horizontal}, we found that gamepad had the largest number of ``strongly agree'' ratings, excluding tiredness. Finger Number received a similar number of ``strongly agree'' ratings, slightly more than Finger Distance, while Finger Tapping received the fewest.
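For illustration, the nonparametric pipeline applied to the questionnaire ratings can be sketched as follows. The study used R 4.0.3; this Python translation, and the data layout it assumes, are purely illustrative.

\begin{verbatim}
# Illustrative sketch of the subjective-data analysis (the study used R;
# the ratings array below is a hypothetical placeholder).
from itertools import combinations
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

interfaces = ["FingerDistance", "FingerNumber", "FingerTapping", "Gamepad"]
# ratings[i, j]: 7-point Likert rating by participant i for interface j
ratings = np.random.default_rng(1).integers(1, 8, size=(16, 4))

# Friedman test across the four related samples (one column per interface)
stat, p = friedmanchisquare(*(ratings[:, j] for j in range(4)))
print(f"Friedman: chi2={stat:.2f}, p={p:.4f}")

if p < 0.05:  # post-hoc pairwise Wilcoxon signed-rank tests
    pairs = list(combinations(range(4), 2))
    alpha = 0.05 / len(pairs)  # Bonferroni-corrected threshold
    for a, b in pairs:
        w, pw = wilcoxon(ratings[:, a], ratings[:, b])
        print(f"{interfaces[a]} vs {interfaces[b]}: p={pw:.4f}"
              + (" *" if pw < alpha else ""))
\end{verbatim}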
Analyses of the objective factors using linear mixed-effects models showed significant effects of interface on all five parameters: average distance $d_{avg}$ ($p<0.001$), standard deviation of distance $d_{std}$ ($p<0.001$), average speed difference $s_{avg}$ ($p<0.001$), standard deviation of speed difference $s_{std}$ ($p<0.001$) and speed difference at key frames $s_{inst}$ ($p<0.001$). Further analyses using Tukey's HSD test (with examination of Figure~\ref{fig:objective_ball}) showed that Finger Tapping had a larger error in maintaining the 3 m distance between the avatar and the rolling wooden ball. The performance of Finger Distance, Finger Number and gamepad was similar for this parameter, showing that these three interfaces were more accurate in maintaining a relative distance to the rolling ball. Similar results were found for the standard deviation of distance $d_{std}$ and the average speed difference $s_{avg}$: Finger Tapping had a larger oscillation interval $d_{std}$ for the distance between the avatar and the rolling wooden ball, and a larger error in average speed difference $s_{avg}$ during the target pursuit task. In terms of the standard deviation of speed difference $s_{std}$, Finger Distance and gamepad had similar performance, with values significantly lower than those of the other two interfaces; Finger Number in turn had a significantly lower $s_{std}$ than Finger Tapping. In terms of the speed difference at key frames $s_{inst}$, Finger Distance had performance similar to gamepad, Finger Number was significantly better than the other interfaces and Finger Tapping had significantly lower performance. This showed that Finger Number had the best transient response in tracking speed changes.

\subsubsection{Summary}
Experiment 1 showed that the Finger Number gesture and the Finger Distance gesture attracted user preference similar to the gamepad interface. The Finger Tapping gesture received lower ratings for all factors except fun. In terms of speed control performance, the Finger Distance gesture was comparable to the gamepad interface. The Finger Number gesture was slightly worse in terms of the standard deviation of speed difference and the speed difference at key frames compared to the Finger Distance gesture and the gamepad interface. The Finger Tapping gesture had the largest errors in maintaining distance and speed.

\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/setup/gate.png}
\caption{Exemplar snapshot of Experiment 2. Wooden gates are the waypoints.
Light blue coloured spheres represent a user's tracked fingers and the hand centres of both hands.}
\label{fig:gate}
\end{figure}

\begin{figure*}[!ht]
\centering
\includegraphics[width=0.85\textwidth]{figures/exp2/door_errorbar.pdf}
\caption{Results of subjective factors of Experiment 2 (bar plot convention is as in Figure~\ref{fig:ball_errorbar}).}
\label{fig:door_errorbar}
\end{figure*}

\begin{figure*}[!ht]
\centering
\includegraphics[width=0.85\textwidth]{figures/exp2/door_horizontal.pdf}
\caption{Detailed responses from participants in Experiment 2.}
\label{fig:door_horizontal}
\end{figure*}

\begin{figure*}[!ht]
\centering
\includegraphics[width=0.85\textwidth]{figures/exp2/objective_door.pdf}
\caption{Objective factors of Experiment 2 (bar plot convention is as in Figure~\ref{fig:ball_errorbar}).}
\label{fig:objective_door}
\end{figure*}

\begin{table*}[!ht]%
\centering
\caption{Results of Statistical Analyses on Subjective Factors of Experiment 2}
\label{tab:door_subjective}
\includegraphics[width=0.85\textwidth]{tables/door_subjective_table.pdf}
\end{table*}

\begin{table*}[!ht]%
\centering
\caption{Results of Statistical Analyses on Objective Factors of Experiment 2}
\label{tab:door_objective}
\includegraphics[width=0.6\textwidth]{tables/door_objective_table.pdf}
\end{table*}

\subsection{Experiment 2: Waypoints Navigation}
\subsubsection{Introduction}
The purpose of Experiment 2 was to test the performance and user preference of the four interfaces for waypoints navigation once direction control with the left hand was introduced.

\subsubsection{Participants}
Sixteen undergraduate students (age: 18-24; eight males and eight females) volunteered for this experiment. None had participated in Experiment 1 and all had normal or corrected-to-normal vision. Informed consent was obtained before the experiment. Participants were naïve to the purpose of the experiment.

\subsubsection{Design}
We adapted this experiment from \cite{zhao_effects_2018} to evaluate the usability of the interfaces for waypoints navigation. We used the same terrain as in Experiment 1 but placed wooden gates as waypoints in the virtual environment (shown in Figure~\ref{fig:gate}). The gates had inner dimensions of 2 m (W) $\times$ 2 m (H) $\times$ 0.1 m (D). The distance from the back side of a gate to the front of its successor was fixed at 5 m. In the lateral direction, the gates were offset within an interval of $\pm$2 m. In total, fifty waypoints were placed in the scene; a sketch of this course layout is given below. The initial distance from the avatar to the front side of the first waypoint was 5 m, and the distance from the back side of the last waypoint to the bounding box that triggered the end of the experiment was also 5 m. A trial stopped whenever a participant reached this bounding box. The participants' goal was to navigate through the waypoints using each of the four interfaces while trying to achieve a short task completion time and a fast locomotion speed; it was also necessary to avoid missing waypoints or colliding with them. Participants had direction control and speed control using both hands with each of the four interfaces, and they were not allowed to backtrack to a waypoint if they missed it.
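For concreteness, the course layout described above can be generated as in the following sketch. The dimensions are those given in this section; the uniform random lateral offsets are an assumption, as the exact placement scheme is not specified further.

\begin{verbatim}
# Sketch of the waypoint course. Gate spacing and lateral range are from
# the text; uniform random lateral offsets are an assumption.
import numpy as np

GATE_DEPTH = 0.1     # m, gate depth (D)
GAP = 5.0            # m, back of one gate to front of the next
LATERAL = 2.0        # m, lateral offsets within +/- 2 m
N_GATES = 50

rng = np.random.default_rng(0)
gates = []
z = 5.0  # front of the first gate: 5 m ahead of the avatar start
for _ in range(N_GATES):
    x = rng.uniform(-LATERAL, LATERAL)
    gates.append((x, z))       # (lateral offset, depth of gate front)
    z += GATE_DEPTH + GAP      # advance past this gate plus the 5 m gap

end_trigger_z = z  # bounding box 5 m behind the last gate's back side
\end{verbatim}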
As in Experiment 1, the acceleration of the avatar was set to 0.5 ${\rm m/s^2}$ and the maximum and minimum walking speeds were set to 5 km/h and 0 km/h, respectively, for all four interfaces. We again adopted a balanced Latin square design to control for order effects.

\subsubsection{Metrics}\label{sec:exp2_metrics}
Based on the metrics given in \cite{zhao_effects_2018}, we proposed the following metrics to evaluate the interfaces in our study:
\begin{itemize}
\item \textbf{Task completion time \bm{$t_c$}}:\\
The duration from the start of locomotion to the instant a participant reached the bounding box that triggered the end of the task.
\item \textbf{Average locomotion speed \bm{$s_l$}}:\\ 
The average locomotion speed, computed by dividing the total length of the locomotion path by the task completion time $t_c$.
\item \textbf{Smoothness of the locomotion path \bm{$d_p$}}:\\
The mean lateral ($x$-axis) distance of the actual locomotion path from the optimal locomotion path, where the optimal path is defined by the straight-line segments connecting the centres of adjacent waypoints.
\item \textbf{Number of successfully passed waypoints \bm{$n_w$}}:\\
The number of waypoints that a participant successfully passed.
\item \textbf{Number of collisions with waypoints \bm{$n_c$}}:\\
The number of collisions of the avatar with the frames of the waypoints, obtained from Unity's collision detection during gameplay and recorded for analysis.
\end{itemize}

\subsubsection{Procedure}
The procedure was as in Experiment 1, except that participants performed the waypoints navigation task.

\subsubsection{Results}
The protocols for analysing the subjective and objective data were as in Experiment 1. The independent factor was the interface; the dependent factors for the subjective analyses were the factors given in the user interface questionnaire, and those for the objective analyses were defined in Section~\ref{sec:exp2_metrics}. Results of the statistical analyses on the subjective and objective data are given in Table~\ref{tab:door_subjective} and Table~\ref{tab:door_objective}, respectively.

Analyses of the subjective factors using the Friedman test showed significant effects on all seven factors: ease-to-learn ($p=0.049$), ease-to-use ($p<0.001$), natural-to-use ($p<0.001$), fun ($p=0.021$), tiredness ($p<0.001$), responsiveness ($p=0.004$) and subjective accuracy ($p<0.001$). However, further analyses using Wilcoxon signed-rank tests with the Bonferroni correction showed no significant differences among the four interfaces in terms of ease-to-learn and fun, indicating that the effects of interface on these factors were weak. Finger Distance, Finger Number and gamepad were found to be significantly easier to use than Finger Tapping, consistent with the results of Experiment 1. In terms of natural-to-use, Finger Distance and gamepad were found to be significantly more natural than Finger Tapping, with no significant difference among Finger Distance, Finger Number and gamepad, indicating that Finger Distance and gamepad were the more intuitive interfaces. Finger Distance, Finger Number and gamepad were found to be significantly less tiring than Finger Tapping. As discussed for Experiment 1, this was mainly because the repetitive motion of Finger Tapping caused fatigue.
Finger Distance was found to be significantly more responsive than Finger Tapping, as it allowed fine control of speed through the distance between the thumb and index finger of a tracked hand. Finally, Finger Distance and gamepad were rated significantly more accurate than Finger Tapping. The distribution of user responses in Experiment 2 is shown in Figure~\ref{fig:door_horizontal}. Gamepad again received the most ``strongly agree'' ratings, excluding tiredness. Finger Distance and Finger Number had similar numbers of ``strongly agree'' ratings, while Finger Tapping had the fewest.

Objective data analysed using linear mixed-effects models showed significant effects on all parameters except the number of successfully passed waypoints $n_w$: task completion time $t_c$ ($p<0.001$), average locomotion speed $s_l$ ($p<0.001$), path smoothness $d_p$ ($p<0.001$) and the number of collisions $n_c$ ($p<0.001$). Post-hoc analyses using Tukey's HSD test, combined with examination of Figure~\ref{fig:objective_door}, showed that Finger Distance and gamepad had similar task completion times $t_c$ and were significantly faster than the other interfaces; Finger Number had a longer task completion time, and Finger Tapping was significantly worse than the other interfaces. In terms of average locomotion speed $s_l$, gamepad was significantly faster than the other interfaces, with Finger Distance second, Finger Number third and Finger Tapping last. Locomotion using gamepad resulted in a significantly smoother path $d_p$ than the three other interfaces, among which there was no statistical difference in path smoothness. A similar effect was found for the number of collisions $n_c$: locomotion using gamepad led to significantly fewer collisions with the frames of the wooden gates than the other interfaces, while Finger Distance, Finger Number and Finger Tapping performed similarly. No significant effect was found on the number of successfully passed waypoints, as this parameter was coarse and not sensitive to the interfaces. These results showed that the Finger Distance gesture was very efficient for waypoints navigation. Experiment 2 represents a more general case of virtual locomotion than Experiment 1, with direction control enabled; the implications of its results are therefore correspondingly more important.

\subsubsection{Summary}
Experiment 2 found that participants had similar user preference for the Finger Distance gesture, the Finger Number gesture and the gamepad interface, while the Finger Tapping gesture was the least preferred. In addition, the performance of the Finger Distance gesture was comparable to that of the gamepad interface on the waypoints navigation task. The Finger Number gesture took slightly longer to complete the task, and its average locomotion speed was also lower than that of the Finger Distance gesture and the gamepad interface. The Finger Tapping gesture was the slowest in terms of task completion time and average locomotion speed.


\section{Discussion}
In our study, we found that the Finger Distance gesture was comparable to the gamepad interface in terms of performance and user preference.
Our explanation is that the Finger Distance gesture allowed more precise control of walking speed than the two other hand gesture interfaces, through fine control of the distance between the fingertips of the thumb and the index finger of a tracked hand. The Finger Number gesture was slightly worse on the waypoints navigation task than the Finger Distance gesture, but it performed better on a few factors in the target pursuit task. A problem with the Finger Number gesture is that the interface gives discrete speed values, so precise control of locomotion speed is difficult; it would also have problems tracking a target moving at a constant non-integer speed, were that used as an experimental scenario. However, the Finger Number gesture is still an intuitive technique: it controls locomotion speed based on the number of extended fingers, which is easy for people to learn. The Finger Tapping gesture suffered from latency: walking speed could only be adjusted after two consecutive peaks had been detected, and the wait for the peak following a previously detected one made the interface less responsive for speed control. We expect that modifying the algorithm to allow speed control between two consecutive peaks (\textit{i.e.} within-step speed control, as in the GUD-WIP technique \cite{wendt_gud_2010}) may make the Finger Tapping gesture perform better than the current method. However, tiredness caused by the repetitive tapping motion of the finger remains a problem that cannot easily be solved.

Our study included only one method for direction control, namely rotating the left palm around the $y$-axis of the Leap Motion sensor's coordinate system. For future work, it is necessary to test the utility of the speed control gestures in combination with direction control methods proposed by other researchers, such as using the pointing direction of a single finger \cite{caggianese_freehand-steering_2020} or head orientation \cite{caggianese_freehand-steering_2020,cardoso_comparison_2016}. Furthermore, the present methods focused on using finger motions to control locomotion speed; further research is needed to compare finger motion gestures to static hand gestures \cite{cisse_user_2020} in terms of usability.

One limitation of our study is that we did not use VR headsets in the experiments, as sharing headsets among participants was a health concern \cite{steed_evaluating_2020}. We expect this problem to be solved once a sanitization procedure for VR headsets is established. Other options include recruiting users who have their own VR hardware or establishing a pool of users with funded VR hardware, but many challenges remain \cite{steed_evaluating_2020}. We plan to repeat the experiments using VR headsets in the future and compare the results to the present study, which used a curved monitor for virtual scene presentation.

A very recent framework for evaluating VR locomotion techniques \cite{cannavo_evaluation_2021} offers a new testing tool for virtual locomotion; a direction for future study is to test the present hand gesture interfaces in this framework. Another promising direction is to apply these gestures to teleoperation of robots and assess their usability in that type of application.
A previous study investigated the performance of a set of static hand gestures, a gamepad interface and a traditional desktop user interface for teleoperation of robots \cite{doisy_comparison_2017}. It would be interesting to investigate how different hand gestures perform in such applications and whether hand gesture interfaces improve presence during teleoperation when wearing VR headsets. Finally, another aspect for future study is the effect of hand gesture and gamepad interfaces on simulator sickness \cite{kennedy_simulator_1993}.

\section{Conclusion}
In this paper, we presented three hand gestures and their algorithms for virtual locomotion, and we implemented a gamepad locomotion interface based on the Xbox One controller for comparison. Through two virtual locomotion tasks, we systematically compared the performance and user preference of these interfaces. We showed that the Finger Distance gesture was comparable to the gamepad interface, while the performance and user preference of the Finger Number gesture were slightly below those of the Finger Distance gesture. Our results provide VR researchers and designers with methods and empirical data for implementing locomotion interfaces using hand gestures in their VR systems. We recommend including these methods in new VR systems and letting users select their favourite interface. In addition, multi-modal interfaces that integrate different interaction mechanisms (\textit{e.g.} hand gestures, body gestures and voice recognition) for locomotion can also be considered for new VR systems; such interfaces would provide end-users with more options for interaction in VR. Finally, we believe that hand gestures also have potential for locomotion in augmented reality (AR) and mixed reality (MR) applications and for teleoperation of robots. As few related studies have been conducted, new opportunities lie in these fields.


\bibliographystyle{ieeetr}


\section{Introduction}\label{sec:introduction}

\IEEEPARstart{L}{ocomotion} in virtual environments is a hotspot for virtual reality (VR) research. Implementing intuitive and natural locomotion interfaces is an important consideration when designing a new VR environment and system. Low-cost optical tracking devices for body and hand skeleton tracking, such as the Kinect sensor and the Leap Motion sensor, have enabled many methods for body gesture \cite{lun_survey_2015} and hand gesture recognition \cite{bachmann_review_2018}. Ample opportunities have thus become available for VR researchers to investigate the utility of hand gestures for virtual locomotion.

One major advantage of these low-cost optical sensors is that they do not require attaching markers or other tracking devices to the body parts being tracked, while providing tracking information that is sufficiently accurate for interaction in VR. Compared to locomotion interfaces using gamepads, hand gesture interfaces do not require users to hold devices in their hands; hence, more sophisticated gestures can be recognized, and such systems are generally more flexible than controller-based VR systems in terms of gesture recognition (although implementing force feedback without hand-held controllers or exoskeletons remains a challenging problem).
On the other hand, compared to virtual locomotion interfaces based on body tracking with the Kinect sensor or other full-body optical tracking systems, hand gesture interfaces do not involve large-scale body movements. This gives hand gesture interfaces the advantage that they can be operated in places with limited physical space. Such scenarios typically include personal gaming, virtual classrooms and teleoperation, in which large-scale tracking may not be available due to constrained physical space or other reasons. Combined with hand gesture recognition for other types of interaction, a wide range of interaction activities becomes possible in a VR system. More recent VR headsets, such as the Oculus Quest 2 and the Microsoft HoloLens 2, have integrated hand tracking modules and algorithms. This makes hand gesture locomotion interfaces, and other interaction using hand gestures, more accessible, as no additional hand tracking hardware is required with these headsets.

Some previous studies on locomotion using hand gestures used different geometric features of the hands for gesture recognition \cite{cardoso_comparison_2016,cisse_user_2020}. In such designs, each hand shape with its set of geometric features corresponds to a specific walking speed, so by designing a set of hand shapes one can achieve variable walking speed with gestures tracked by optical devices (\textit{e.g.} the Leap Motion sensor). Another approach was to map the distance between the index finger and the centre of a tracked hand to locomotion speed \cite{huang_design_2019}. In addition, methods that simulate bipedal walking using two fingers were designed based on multi-touch pads \cite{kim_finger_2008,yan_let_2016}. Although previous hand gesture interfaces have been assessed on a case-by-case basis, a comparison among different hand gesture interfaces that reveals their differences in performance and user preference is lacking. It is also unclear how these hand gesture interfaces compare to locomotion interfaces using gamepads.

To answer these research questions, we present three hand gesture interfaces and their algorithms, called the Finger Distance gesture, the Finger Number gesture and the Finger Tapping gesture, inspired by previous studies \cite{kim_finger_2008,huang_design_2019,cardoso_comparison_2016,zhao_learning_2016}. These are typical gestures that people are familiar with from daily life or can naturally come up with when asked to make hand gestures. For comparison, we also designed and implemented a gamepad interface based on the Xbox One controller. We compared these four interfaces using two virtual locomotion tasks: the target pursuit task and the waypoints navigation task. The first task evaluated the usability of the gestures for speed control, while the second focused on waypoints navigation once direction control was introduced.
We also assessed user preference in each task through a subjective user interface questionnaire.


The goal of the present study was to compare these four interfaces in terms of performance and user preference in virtual locomotion tasks.
The main contributions of the study are two-fold:

(1) We presented three hand gesture interfaces and their algorithms for virtual locomotion.

(2) We systematically evaluated these three hand gesture interfaces and a gamepad interface using two virtual locomotion tasks and demonstrated their performance and user preference.

The rest of the paper is organized as follows: Section 2 discusses related work; Section 3 presents the hand gesture interfaces, the user interface questionnaire and the VR hardware and software used in the experiments; Section 4 presents the design and results of the two VR experiments that evaluated the four interfaces; Section 5 provides the discussion and Section 6 concludes the study.

\section{Related Work}
In this section, we review previous work on virtual locomotion interfaces using body movements (\textit{e.g.} leaning, head movements, upper-limb and lower-limb movements) and hand gestures.

\subsection{Locomotion Interface using Body Movements}
Zielasko \textit{et al.} \cite{zielasko_evaluation_2016} compared adapted walking-in-place, accelerator pedal, leaning, shake-your-head and gamepad interfaces for virtual locomotion. These interfaces were evaluated using a virtual locomotion task that assessed factors including walking path, task completion time and comfort. Results showed that the leaning technique and the accelerator pedal technique worked best. They also concluded that the proposed interfaces were easy to learn and inexpensive to integrate into existing VR systems.

Nabiyouni \textit{et al.} \cite{nabiyouni_comparing_2015} compared three locomotion interfaces with different levels of naturalism: natural walking (fully natural), the VirtuSphere (semi-natural) and the gamepad (non-natural). To evaluate these interfaces, participants were asked to perform straight-line walking and multi-segment line walking, and both objective and subjective factors were studied. Results showed that the fully natural and non-natural interfaces performed better than the semi-natural interface (the VirtuSphere).

Kitson \textit{et al.} \cite{kitson_comparing_2017} compared four interfaces based on tracked body movements, the NaviChair, the MuvMan, the head-directed technique and the swivel chair, with the joystick interface. Their results showed that participants preferred the NaviChair and the MuvMan; the authors concluded that the joystick is still an easy-to-use and accurate interface.

Nguyen-Vo \textit{et al.} \cite{nguyen-vo_naviboard_2019} compared the controller, the NaviChair, the NaviBoard and real walking. Their results showed that the body-based motion cues and control provided by a low-cost leaning and stepping interface were sufficient for virtual locomotion.

Coomer \textit{et al.} \cite{coomer_evaluating_2018} compared the joystick interface and the arm-cycling, point-tugging and teleporting techniques. To compare these interfaces, a virtual navigation task was designed in which participants walked in a virtual town looking for treasure chests.
Their results showed that arm-cycling was the best of the four methods, resulting in a better sense of spatial awareness and lower simulator sickness scores for participants.

Recently, Buttussi and Chittaro \cite{buttussi_locomotion_2019} compared the joystick, teleporting and leaning interfaces. Results showed that the teleporting interface performed better than the other interfaces and also caused less nausea.

Numerous Walking-in-Place (WIP) techniques \cite{slater_virtual_1995,templeman_virtual_1999,yan_new_2004,feasel_llcm-wip:_2008,wendt_gud_2010,williams_evaluation_2011,bruno_new_2013,bruno_hip-directed_2017,tregillus_vr-step:_2016,hanson_improving_2019} have been proposed over the years. These methods usually use optical or other trackers to monitor lower-leg motions (knees, feet, \textit{etc}.) and use the tracked motion data to calculate the forward walking speed of a user. In some studies \cite{wendt_gud_2010,bruno_new_2013}, comparisons between WIP techniques revealed differences in their performance and user preference. Another approach to overground walking is redirected walking \cite{razzaque_redirected_2001,razzaque_redirected_2002}, which manipulates the rotation rate of the virtual scene without users noticing it. This technique has been applied to systems using VR headsets \cite{razzaque_redirected_2001} and projected displays \cite{razzaque_redirected_2002}. A more recent method proposed by Sun \textit{et al.} \cite{sun_towards_2018} redirects users during saccadic eye movements while they walk virtually. Mechanical repositioning is another locomotion technique involving body motion tracking; examples include treadmills \cite{souman_cyberwalk:_2008,wang_real_2020}, foot platforms \cite{iwata_gait_2001}, pedalling devices \cite{allison_first_2000} and spheres \cite{medina_virtusphere:_2008}. Finally, several studies have proposed the use of head motions for teleoperation of robots or for virtual locomotion \cite{higuchi_flying_2013,pittman_exploring_2014,zhao_effects_2018,hashemian_headjoystick_2020}.

While we have only briefly reviewed locomotion techniques involving body movements, comprehensive coverage of the subject can be found in \cite{nilsson_natural_2018,al_zayer_virtual_2018}.

\subsection{Locomotion Interface using Hand Gestures}
Kim \textit{et al.} \cite{kim_finger_2008} proposed a hand gesture locomotion technique using a multi-touch pad. The gesture required users to move two fingers, the index and middle fingers, back and forth on the touchpad to simulate natural walking with two legs. An experiment evaluated the method using a locomotion task that asked participants to pass a few waypoints, with a joystick interface introduced as the control condition. Subjective factors including satisfaction, fastness, easiness and tiredness were evaluated, but no objective parameters were analysed in the study. A subsequent study \cite{kim_effects_2010} evaluated the method's effects on spatial knowledge acquisition.

Similarly, Yan \textit{et al.} \cite{yan_let_2016} also proposed locomotion methods based on a multi-touch pad. These included the walking gesture, the segway gesture and the surfing gesture, which allowed users to travel in different modes and at different speeds.
For each proposed gesture, a different locomotion task compared the interface with the gamepad interface. Both objective and subjective factors were studied, and the authors concluded that the gamepad interface was more time efficient, while the gesture interfaces based on a multi-touch pad offered similar quality; in addition, switching between different travel modes was faster on the multi-touch pad.

Cardoso \cite{cardoso_comparison_2016} proposed a method using the number of extended fingers to control locomotion speed, with opening and closing both hands used to start and stop locomotion. The proposed method was compared to a gaze-based locomotion approach using a head-mounted display and to a gamepad interface. Results showed that the gamepad interface was better than the two other interfaces. However, implementation details of the gesture recognition algorithm were not given in the paper.

Huang \textit{et al.} \cite{huang_design_2019} proposed a technique to control locomotion speed based on the distance between the tip of the index finger and the centre of a tracked hand. The method was evaluated on a linear locomotion task in which participants walked to targets at different distances as quickly as possible. Their study experimented with different combinations of algorithm parameters, but the method was not compared to other hand gesture locomotion interfaces.

More recently, Cisse \textit{et al.} \cite{cisse_user_2020} designed hand gestures by inviting professionals in architectural design, with professional exposure to VR, to elicit their preferred hand gestures for locomotion. In total, sixty-four gestures were elicited from six professionals and categorized into eight classes, including moving forward, moving backwards and moving up a floor. To evaluate and select a set of efficient gestures, twelve university students evaluated different combinations of gestures for virtual locomotion, considering factors such as task completion time and intuitiveness. However, gamepad interfaces were not included for comparison, and the designed gestures were static gestures, which potentially lack the information that dynamic gestures can convey through hand and finger movements.

Another recent work by Caggianese \textit{et al.} \cite{caggianese_freehand-steering_2020} proposed three methods to control locomotion direction: (1) using the pointing direction of a finger; (2) using palm orientation; and (3) using head orientation tracked by a VR headset. Initiation and termination of locomotion were controlled by extending and closing a tracked hand, and both objective and subjective factors were considered in evaluating the three steering control methods. A limitation of the study was that no method for controlling walking speed was provided.

Schäfer \textit{et al.} \cite{schafer_controlling_2021} compared four static hand gestures for teleportation-based virtual locomotion using a single hand or both hands. Their results showed that all proposed methods are viable options for virtual locomotion.
They also recommended adding these methods to VR systems and letting users choose their preferred method.

To the authors' knowledge, no comparison has been made among different hand gesture interfaces and gamepad interfaces for virtual locomotion with respect to speed control and waypoints navigation. Our study presented three hand gesture interfaces based on previous studies, together with a gamepad interface, and compared their performance and user preference through two virtual locomotion tasks that respectively evaluated usability in speed control and waypoints navigation.

\begin{figure}
 \centering
 \includegraphics[width=0.25\textwidth]{figures/gesture/finger-distance.jpg}
 \caption{Finger Distance gesture ($l$ denotes the Euclidean distance between the fingertips of the index and the thumb of a tracked hand).}
 \label{fig:finger_distance}
\end{figure}

\begin{figure*}[!ht]
 \centering
 \subfigure[Stop (0 km/h)]{
 \includegraphics[width=0.15\textwidth]{figures/gesture/number-0.jpg}
 }
 \subfigure[1 km/h]{
 \includegraphics[width=0.15\textwidth]{figures/gesture/number-1.jpg}
 }
 \subfigure[2 km/h]{
 \includegraphics[width=0.15\textwidth]{figures/gesture/number-2.jpg}
 }
 \subfigure[3 km/h]{
 \includegraphics[width=0.15\textwidth]{figures/gesture/number-3.jpg}
 }
 \subfigure[4 km/h]{
 \includegraphics[width=0.15\textwidth]{figures/gesture/number-4.jpg}
 }
 \subfigure[5 km/h]{
 \includegraphics[width=0.15\textwidth]{figures/gesture/number-5.jpg}
 }
 \caption{Finger Number gesture. Sub-figures (a)--(f) illustrate six different gestures that correspond to walking speeds from 0 km/h to 5 km/h.}
 \label{fig:finger_number}
\end{figure*}

\section{Methods}
\subsection{Finger Distance Gesture}
The Finger Distance gesture controls locomotion speed via the Euclidean distance between the fingertips of a user's thumb and index finger. It was inspired by previous work by Huang \textit{et al.} \cite{huang_design_2019}, who used the distance between the tip of a user's index finger and the hand centre to control walking speed. We instead used the Euclidean distance between the tips of the thumb and index finger, as the mapping from this distance to walking speed is more nearly linear than one based on the distance between the index fingertip and the hand centre. In our method, the locomotion speed is calculated by the following equations:

\begin{equation}
s_{walk} = \frac{l-d}{r-d}(s_{max}-s_{min})+s_{min}
\end{equation}
\begin{equation}
l = \left \|F_{thumb} - F_{index}\right \|_2
\end{equation}

\noindent where $s_{walk}$ is the calculated walking speed, $s_{max}$ and $s_{min}$ are the pre-defined maximum and minimum walking speeds that a user can achieve during locomotion, $r$ ($r$ = 8 cm) is a pre-defined reference Euclidean distance between the fingertips of the thumb and index finger, $d$ ($d$ = 2.5 cm) is a dead zone that ensures a value close to zero is obtained when users snap their thumb and index finger together, $F_{thumb}$ and $F_{index}$ are the tracked 3-D positions of the thumb and index fingertips and $l$ ($l\in(d,r]$) is the calculated Euclidean distance between these two fingertips. The values of $r$ and $d$ were determined empirically during our initial testing; we found that this choice gave relatively comfortable control of locomotion speed with the gesture. A code sketch of this mapping is given below.
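A minimal sketch of the speed mapping above, including the dead-zone clamp described next (the fingertip positions in the example are hypothetical placeholders, and clamping $l$ to $r$ is our reading of $l\in(d,r]$):

\begin{verbatim}
# Sketch of the Finger Distance speed mapping. Parameter values are those
# given in the text; the fingertip positions are placeholders.
import numpy as np

R = 0.08                 # m, reference thumb-index distance (r)
D = 0.025                # m, dead zone (d)
S_MAX, S_MIN = 5.0, 0.0  # km/h, pre-defined speed limits

def finger_distance_speed(f_thumb, f_index):
    l = np.linalg.norm(np.asarray(f_thumb) - np.asarray(f_index))
    if l <= D:               # inside the dead zone: stop locomotion
        return 0.0
    l = min(l, R)            # keep l within (d, r]
    return (l - D) / (R - D) * (S_MAX - S_MIN) + S_MIN

# Example: fingertips 6 cm apart -> roughly mid-range speed (~3.2 km/h)
print(finger_distance_speed((0.0, 0.2, 0.0), (0.06, 0.2, 0.0)))
\end{verbatim}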
Whenever $l$ is less than or equal to $d$, the calculated walking speed $s_{walk}$ is set to zero to stop locomotion. An illustration of the proposed gesture is given in Figure~\ref{fig:finger_distance}, in which the distance between the tips of the thumb and index finger is depicted. This simple set of equations enabled accurate control of locomotion speed; no filtering was necessary to pre-process the fingertip data.

\begin{figure*}[!ht]
 \centering
 \subfigure[Index finger moving down]{
 \includegraphics[width=0.18\textwidth]{figures/gesture/fingertip-down.jpg}
 }
 \hspace{5em}
 \subfigure[Index finger moving up]{
 \includegraphics[width=0.18\textwidth]{figures/gesture/fingertip-up.jpg}
 }
 \caption{Finger Tapping gesture. Arrows in sub-figures (a) and (b) illustrate the upward and downward motions of the index finger of a tracked hand during locomotion.}
 \label{fig:finger_tapping}
\end{figure*}

\begin{figure}[!ht]
 \centering
 \includegraphics[width=0.25\textwidth]{figures/gesture/direction2.png}
 \caption{Direction control using the left hand, illustrated in the Leap Motion sensor's coordinate system ($\theta$ denotes the steering angle).}
 \label{fig:direction_control}
\end{figure}

\begin{figure}[!ht]
 \centering
 \includegraphics[width=0.25\textwidth]{figures/gesture/gamepad2.png}
 \caption{Gamepad interface based on the Xbox One controller. Pushing the left joystick left or right controls walking direction and pushing the right joystick forward controls walking speed.}
 \label{fig:gamepad}
\end{figure}

\subsection{Finger Number Gesture}
The Finger Number gesture is a set of gestures that controls locomotion speed using the number of extended fingers. This idea was originally proposed by Cardoso \cite{cardoso_comparison_2016}, but details of the algorithm were not described in that paper. To recognize the number of extended fingers, we resorted to Marin \textit{et al.}'s robust feature descriptors \cite{marin_hand_2016}, which normalize the positions of the fingertips relative to the hand centre with respect to different hand orientations and varying distances to the Leap Motion sensor:

\begin{equation}
P_i^x=(F_i-C)\cdot(N\times H)
\end{equation}
\begin{equation}
P_i^y=(F_i-C)\cdot H
\end{equation}
\begin{equation}
P_i^z=(F_i-C)\cdot N
\end{equation}

\noindent where $P_i^x$, $P_i^y$ and $P_i^z$ are the extracted features of a tracked 3-D fingertip position $F_i$ ($i$ is the index of a tracked finger), $C$ is the tracked hand centre, $N$ is the normal perpendicular to the palm of the tracked hand and $H$ is the pointing direction of the fingertips given by the Leap Motion sensor; $\cdot$ and $\times$ denote the dot product and cross product, respectively. The descriptors of a single tracked hand form a column vector of fifteen elements. As control of the start and stop of locomotion involves both hands (\textit{i.e.} extending the fingers to start walking and making fists with both hands to stop), we used the descriptors to extract features of both hands and stacked the two column vectors to form a vector $P$ of thirty elements.
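A sketch of this per-hand feature extraction (the tracked quantities are placeholders for Leap Motion data):

\begin{verbatim}
# Sketch of the per-hand feature extraction. Fingertips F_i, hand centre
# C, palm normal N and pointing direction H are placeholders for the
# Leap Motion tracking data.
import numpy as np

def hand_features(fingertips, C, N, H):
    """Return a 15-element feature vector (3 components x 5 fingers)."""
    C, N, H = map(np.asarray, (C, N, H))
    x_axis = np.cross(N, H)                   # N x H
    feats = []
    for F in fingertips:                      # one (x, y, z) per finger
        v = np.asarray(F) - C
        feats += [v @ x_axis, v @ H, v @ N]   # P_i^x, P_i^y, P_i^z
    return np.array(feats)

def both_hands_features(left, right):
    """Stack left- and right-hand features into the 30-element vector P."""
    return np.concatenate([hand_features(*left), hand_features(*right)])
\end{verbatim}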
This compact feature representation enabled us to train a multi-class Support Vector Machine (SVM) using the LibSVM library \cite{chang_libsvm:_2011}, making it possible to recognize six types of hand gestures: one for stopping (making fists with both hands) and five for walking at different speed levels (from 1 km/h to 5 km/h, based on the number of extended fingers, as shown in Figure~\ref{fig:finger_number}). Our mapping of hand gestures to walking speed differs from Cardoso's method, in which starting and stopping locomotion are controlled by extending and closing both hands. In our implementation, extending any number of fingers of the right hand (with the left hand fully extended) starts walking at the speed set by the number of extended fingers of the right hand (see Figure~\ref{fig:finger_number}). To train the multi-class SVM for gesture recognition, we collected hand gesture data from twelve volunteers (age: 18-21; six males and six females) using a custom software application developed in Python 3.7 that enabled data collection and labelling. For each participant, we collected four sessions of data; during each session, each gesture was recorded for 5 s. We used the first two sessions of hand gesture data from all participants to train a multi-class SVM with the Radial Basis Function (RBF) kernel and the remaining two sessions to test the classifier. Overall, the average classification accuracy was 99.94\%, making the classifier readily usable for recognizing the Finger Number gesture during virtual locomotion via the $svm\_predict()$ function of the LibSVM library:

\begin{equation}
s_{walk} = svm\_predict(model, P)
\end{equation}

\noindent where $s_{walk}$ is the calculated walking speed, $model$ is the trained multi-class SVM model and $P$ is the stacked vector of features extracted from both hands using Marin \textit{et al.}'s feature descriptors \cite{marin_hand_2016}. 

\subsection{Finger Tapping Gesture}
The Finger Tapping gesture controls locomotion speed via tapping (up and down) motions of the index finger of a tracked hand (see Figure~\ref{fig:finger_tapping} for an illustration). Our initial idea was to use two fingers (the index and middle fingers) to simulate the walking motion of two legs during actual locomotion, similar to Kim \textit{et al.}'s Finger Walking in Place (FWIP) method \cite{kim_finger_2008,kim_effects_2010}, but with the Leap Motion sensor tracking finger motions instead of a multi-touch pad. Initial testing showed, however, that using two fingers to simulate walking motion and control locomotion speed was difficult, as moving one finger (the index) also affected the tracked fingertip position of the other (the middle finger). Thus, to make the algorithm work, we decided to use the tapping motion of a single finger (the index finger) to control walking speed. The tapping motion of the index fingertip can be viewed as a 3-D signal in the time domain. As the gesture requires a user to tap the finger up and down with the hand positioned some distance above the Leap Motion sensor, we only needed the $y$-component of the tracked fingertip speed to detect the time interval between two consecutive peaks and use that interval to control walking speed.
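The sketch below illustrates this interval-to-speed control; it uses the low-pass filter and the interval bounds $t_{min}$ and $t_{max}$ introduced in the remainder of this subsection, and its naive peak picker (with an assumed threshold and tracking rate) is a simplified stand-in for the adapted gait-analysis algorithm described next:

\begin{verbatim}
# Illustrative sketch of the tapping-based speed control. The filter
# settings and interval bounds are from the text; the tracking rate,
# the peak threshold and the naive peak picker are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0                 # Hz, assumed tracking rate
T_MIN, T_MAX = 0.3, 0.95   # s, pre-defined interval bounds
S_MAX, S_MIN = 5.0, 0.0    # km/h, pre-defined speed limits

b, a = butter(2, 5.0, btype="low", fs=FS)  # 2nd-order low-pass, 5 Hz

def tapping_speed(vy):
    """vy: buffered y-component of the index fingertip speed (1 s)."""
    smooth = filtfilt(b, a, vy)
    peaks = [i for i in range(1, len(smooth) - 1)
             if smooth[i - 1] < smooth[i] > smooth[i + 1]
             and smooth[i] > 0.05]          # empirical threshold
    if len(peaks) < 2:
        return S_MIN
    t_step = (peaks[-1] - peaks[-2]) / FS   # interval between last peaks
    if t_step > T_MAX:
        return S_MIN
    if t_step <= T_MIN:
        return S_MAX
    return (1 - (t_step - T_MIN) / (T_MAX - T_MIN)) * (S_MAX - S_MIN) + S_MIN
\end{verbatim}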
Zhao and Allison proposed gait analysis algorithms for real-time virtual locomotion on a treadmill with a large-scale stereo projected display \cite{zhao_learning_2016} and later adapted the algorithm for offline gait analysis to study the role of stereoscopic viewing in virtual locomotion \cite{zhao_role_2020}. The method works by first filtering foot motion signals (speed signals for the real-time application, position signals for offline gait analysis) with low-pass Butterworth filters. An empirical threshold is then used as the starting point for finding the initial swing and terminal swing of a step through gradient descent; the search stops whenever the gradient ascends (indicating that a different gait cycle has been detected) or the minimum threshold is reached. The index of the maximum value between the initial swing and the terminal swing is taken as the mid swing. We treated detecting the peaks of the index fingertip speed signal as finding the mid-swings of steps. In the present implementation, a 2nd-order low-pass Butterworth filter with a cut-off frequency of 5 Hz was applied to the $y$-component of the index fingertip speed signal (buffered for 1 s using a queue). By adapting the algorithm, we were able to detect the peaks of consecutive taps and calculate the time interval between them. The calculated walking speed is given by:

\begin{equation}
s_{walk}=\left(1-\frac{t_{step}-t_{min}}{t_{max}-t_{min}}\right)(s_{max}-s_{min})+s_{min}
\end{equation}

\noindent where $s_{walk}$ is the calculated walking speed, $t_{step}$ ($t_{step}\in(t_{min},t_{max}]$) is the time interval between two detected peaks, $t_{max}$ ($t_{max}$ = 0.95 s) and $t_{min}$ ($t_{min}$ = 0.3 s) are the pre-defined maximum and minimum time intervals and $s_{max}$ and $s_{min}$ are the pre-defined maximum and minimum walking speeds achievable with the method. A larger $t_{step}$, caused by slower tapping, results in a slower walking speed $s_{walk}$, which makes speed control possible. This equation is similar to Tregillus and Folmer's approach \cite{tregillus_vr-step:_2016}, in which a WIP method detects steps from a mobile phone's acceleration signals and calculates walking speed from the time intervals between steps. Additionally, whenever $t_{step}$ is larger than $t_{max}$ (0.95 s), the calculated walking speed is set to $s_{min}$, and whenever $t_{step}$ is less than or equal to $t_{min}$ (0.3 s), it is set to $s_{max}$.

\subsection{Direction Control}
The three hand gesture locomotion interfaces presented above require users to use their right hands to control walking speed. To enable direction control during locomotion, our first thought was to use the same (right) hand to control both walking speed and walking direction. However, early testing showed that this was not feasible, as moving the fingers of the right hand to make a gesture also changed the tracked hand pointing direction $H$ given by the Leap Motion sensor. Thus, we decided to let users control their walking direction with their left hands.
\\subsection{Direction Control}\nThe three hand gesture locomotion interfaces presented in the previous sections require users to use their right hands to control walking speed. To enable direction control during locomotion, our first thought was to use the same (right) hand to control both walking speed and walking direction. However, early testing showed that this was not feasible, as moving the fingers to make a gesture with the right hand also changed the tracked hand pointing direction $H$ given by the Leap Motion sensor. Thus, we decided to let users control their walking direction with their left hands. The equations for calculating the steering angle are given by:\n\n\\begin{equation}\nd=V_n\\cdot(H_{init}\\times H_{current})\n\\end{equation}\n\\begin{equation}\n\\theta=sign(d)\\cos^{-1}\\frac{H_{init}\\cdot H_{current}}{\\left\\|H_{init}\\right\\|\\left\\|H_{current}\\right\\|}\n\\end{equation}\n\nwhere the $sign(x)$ function is defined as:\n\\begin{equation*}\nsign(x) = \n\\begin{cases}\n-1, & x<0 \\\\ \n1, & x\\ge0 \n\\end{cases}\n\\end{equation*}\n\n\\noindent Parameter $d$ (through the $sign(x)$ function) determines whether a user is turning left or right, $V_n$ is the vector perpendicular to the $x$-$z$ plane ($V_n=(0,1,0)$), $H_{init}$ the initial vector pointing in a user's direction of travel ($H_{init}=(0,0,-1)$), $H_{current}$ the current finger pointing direction tracked by the Leap Motion sensor and $\\theta$ the steering angle that a user intends to achieve. $H_{current}$ as tracked by the Leap Motion sensor is slightly noisy, which made both straight-line walking and turning unstable. To ensure smoothness in linear walking and turning, we used the function $SmoothDamp()$ from Unity's library to damp turning motions. An illustration of the direction control gesture is given in Figure~\\ref{fig:direction_control}. Users extend their fingers and rotate their palms around the $y$-axis to control their travel direction.
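The damping aside, the steering computation itself reduces to a few vector operations, as in the following numpy sketch (the $SmoothDamp()$ smoothing applied in Unity is omitted here):\n\n\\begin{verbatim}
import numpy as np

V_N = np.array([0.0, 1.0, 0.0])      # normal of the x-z plane
H_INIT = np.array([0.0, 0.0, -1.0])  # initial travel direction

def steering_angle(h_current):
    # Turning direction from the triple product, then the unsigned angle
    # between H_init and H_current, following the equations above.
    d = np.dot(V_N, np.cross(H_INIT, h_current))
    sign = -1.0 if d < 0 else 1.0
    cos_theta = np.dot(H_INIT, h_current) / (
        np.linalg.norm(H_INIT) * np.linalg.norm(h_current))
    return sign * np.arccos(np.clip(cos_theta, -1.0, 1.0))  # radians
\\end{verbatim}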
\\subsection{Gamepad Interface}\nThe gamepad interface was implemented as a control interface to be compared with the three hand gesture interfaces. In our implementation, pushing the left joystick left and right controls steering and pushing the right joystick upward controls a user's walking speed (see Figure~\\ref{fig:gamepad} for an illustration). Moving backwards was not allowed in the experiments, so this function was disabled. As in the gesture interfaces, participants use their left hands for direction control and their right hands for speed control.\n\n\\begin{table}%\n\\centering\n\\caption{The User Interface Questionnaire}\n\\label{tab:user_interface}\n\\begin{tabular}{ p{8cm}}\n\\hline\n\\\\[-1em]\nThe interface is easy to learn.\\\\\n\\hline\n\\\\[-1em]\nThe interface is easy to use.\\\\\n\\hline\n\\\\[-1em]\nThe interface is natural and intuitive to use.\\\\\n\\hline\n\\\\[-1em]\nThe interface helps make the task fun.\\\\\n\\hline\n\\\\[-1em]\nUsing the interface is tiring.\\\\\n\\hline\n\\\\[-1em]\nThe interface helps me respond quickly.\\\\\n\\hline\n\\\\[-1em]\nThe interface helps me make accurate responses.\\\\\n\\hline\n\\\\[-1em]\n\\end{tabular}\n\\end{table}\n\n\\subsection{User Interface Questionnaire} \\label{sec:ui}\nTo subjectively evaluate the four interfaces, we adapted the user interface questionnaire from \\cite{nabiyouni_comparing_2015,zhao_comparing_2019} and used it in the present study. The questionnaire included statements, such as ``The interface is easy to use'' (see Table~\\ref{tab:user_interface} for the complete list), for users to judge. In both experiments, after finishing a block of trials with an interface, participants were asked to rate each factor in the questionnaire on a seven-point Likert scale (from strongly disagree to strongly agree) to indicate their preference. This questionnaire allowed us to subjectively evaluate a given interface on factors including ease-to-learn, ease-to-use, natural-to-use, fun, tiredness, responsiveness and subjective accuracy. These are typical factors to consider when designing or evaluating new interfaces.\n\n\\subsection{Hardware and Software of the VR system}\nThe software application that included virtual scene presentation, gesture recognition, virtual locomotion tasks and data recording was developed using Unity 2020.1. It was hosted on a Windows 10 desktop computer with an Intel i5-10500 CPU, 16 GB of memory and an NVIDIA GeForce 1660S graphics card with 6 GB of graphics memory. Hand movements of participants were tracked by the Leap Motion sensor using the Orion SDK 4.0.0. The display was an AOC 27-inch curved monitor that presented the virtual scenes of the tasks. VR headsets were not used, as sharing headsets among participants is a health concern \\cite{steed_evaluating_2020} and a procedure for sanitizing VR headsets had not been established at the time. The avatar that represented a participant in the virtual environment and was used for collision detection was implemented as a 3-D capsule geometry, with the virtual camera placed near the top of the capsule. When users make hand gestures, the walking speed and direction of the capsule are changed. The experimental setup for both experiments is shown in Figure~\\ref{fig:setup}.\n\n\\section{Experiments}\n\\subsection{Experiment 1: Target Pursuit}\n\\subsubsection{Introduction}\nThe purpose of Experiment 1 was to assess the performance and user preference of the four locomotion interfaces with respect to speed control.\n\n\\subsubsection{Participants}\nSixteen undergraduate students (age: 18-25, eleven males and five females) volunteered for this experiment. All had normal or corrected-to-normal vision. Informed consent was obtained before the experiment. Participants were na\u00efve to the purpose of the experiment.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/setup\/setup3.png}\n\\caption{Experimental setup. The participant sat in front of the 27-inch curved monitor with hands positioned above the Leap Motion sensor. The viewing distance was approximately 60 cm. The virtual scene shown on the monitor in this figure is a snapshot of Experiment 2.}\n\\label{fig:setup}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/setup\/ball.png}\n\\caption{Exemplar snapshot of Experiment 1. Light blue coloured spheres represent a user's tracked fingers and the hand centre.
The wooden ball in this snapshot appears red because the participant was close to the ball.}\n\\label{fig:ball}\n\\end{figure}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/exp1\/ball_errorbar.pdf}\n\\caption{Subjective factors of Experiment 1 (bars denote mean values and error bars denote the standard error of the mean).}\n\\label{fig:ball_errorbar}\n\\end{figure*}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/exp1\/ball_horizontal.pdf}\n\\caption{Detailed responses from participants in Experiment 1.}\n\\label{fig:ball_horizontal}\n\\end{figure*}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/exp1\/objective_ball.pdf}\n\\caption{Objective factors of Experiment 1 (bar plot convention is as in Figure~\\ref{fig:ball_errorbar}).}\n\\label{fig:objective_ball}\n\\end{figure*}\n\n\\begin{table*}[!ht]%\n\\centering\n\\caption{Results of Statistical Analyses on Subjective Factors of Experiment 1}\n\\label{tab:ball_subjective}\n\\includegraphics[width=0.8\\textwidth]{tables\/ball_subjective_table.pdf}\n\\end{table*}\n\n\\begin{table*}[!ht]%\n\\centering\n\\caption{Results of Statistical Analyses on Objective Factors of Experiment 1}\n\\label{tab:ball_objective}\n\\includegraphics[width=0.6\\textwidth]{tables\/ball_objective_table.pdf}\n\\end{table*}\n\n\n\\subsubsection{Design}\nWe adapted the experiment from \\cite{zhao_learning_2016} to assess the speed control performance of the four interfaces. We designed a virtual scene (shown in Figure~\\ref{fig:ball}) with foliage and trees in Unity 2020.1 using resources (the Free SpeedTrees Package and the Grass Flowers Pack Free) from the Unity Asset Store. Before the start of each experimental trial, a wooden ball was placed in the scene 3 m in front of the avatar that represented the participant. After a trial started, the rolling ball changed its forward speed based on a set of randomly generated speed key frames (with values 2 km\/h, 3 km\/h or 4 km\/h), which took effect every 10 s. In total, there were seventeen speed key frames for each trial (a sketch of this key-frame generation is given at the end of this subsection). The participants' goal was to pursue the rolling ball while maintaining the initial 3 m distance between the avatar and the ball using each locomotion interface. Direction control was disabled for this task. A complete trial lasted 180 s, after which it stopped automatically. To make the distance judgement easier for participants, we manipulated the color of the rolling wooden ball such that it became red if the avatar was getting close and blue if it was getting far away. The acceleration of the avatar was set to 0.5 ${\\rm m\/s^2}$ for all four interfaces. The maximum and minimum locomotion speeds were set to 5 km\/h and 0 km\/h, respectively. The linear acceleration of the wooden ball was set to 0.3 ${\\rm m\/s^2}$. Setting the maximum walking speed to 5 km\/h and the acceleration to 0.5 ${\\rm m\/s^2}$ is reasonable, as Teknomo \\cite{teknomo_microscopic_2002} reported that the average walking speed of pedestrian flow is 1.38 m\/s $\\pm$ 0.37 m\/s (4.97 km\/h $\\pm$ 1.33 km\/h, mean $\\pm$ std) and the average acceleration is 0.68 ${\\rm m\/s^2}$. Light blue coloured spheres (as shown in Figure~\\ref{fig:ball}) were added and rendered on top of all other geometry in the scene to visualise a user's tracked fingertips and hand centre. This decision was made after our initial testing: we found it necessary to give users tracking information about their hands so that they knew their hands were reliably tracked. To control for order effects, we counter-balanced the order of access to the four interfaces using a balanced Latin square design.
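The speed key frames mentioned above can be generated as in the following sketch; the exact timing convention (whether the first key frame takes effect at $t=0$ or $t=10$ s) is our assumption:\n\n\\begin{verbatim}
import random

def generate_speed_keyframes(n_frames=17, interval=10.0,
                             speeds_kmh=(2.0, 3.0, 4.0)):
    # One (time, target speed) pair per key frame; a speed drawn from
    # {2, 3, 4} km/h takes effect every 10 s during the 180 s trial.
    return [(i * interval, random.choice(speeds_kmh))
            for i in range(n_frames)]
\\end{verbatim}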
\\subsubsection{Metrics} \\label{sec:exp1_metrics}\nBased on the metrics used in \\cite{zhao_learning_2016}, we proposed five metrics to study the performance of the interfaces (a sketch of their computation is given just before the results):\n\\begin{itemize}\n\\item \\textbf{Average position difference \\bm{$d_{avg}$}}:\\\\\nThe average distance between the avatar and the rolling ball over the entire course. Ideally, the value should be 3 m, as participants were required to maintain this distance between the avatar and the rolling ball throughout a course.\n\\item \\textbf{Standard deviation of position difference \\bm{$d_{std}$}}:\\\\\nThe standard deviation of the distance between the avatar and the rolling ball over the entire course. This parameter reflects the interval within which the avatar oscillates while maintaining the 3 m distance.\n\\item \\textbf{Average speed difference \\bm{$s_{avg}$}}:\\\\\nThe average speed difference between the avatar and the rolling ball over the entire course. Ideally, the value should be zero, assuming that participants are able to perfectly maintain the 3 m distance throughout the course.\n\\item \\textbf{Standard deviation of speed difference \\bm{$s_{std}$}}:\\\\\nThe standard deviation of the speed difference between the avatar and the rolling ball over the entire course. This parameter reflects the interval within which the avatar varies its locomotion speed while maintaining the 3 m distance.\n\\item \\textbf{Speed difference at key frames \\bm{$s_{inst}$}}:\\\\\nThe average speed difference between the avatar and the wooden ball from the instant a speed change of the wooden ball occurs to 0.1 s after the change has occurred. This parameter reflects the transient response of a locomotion interface when tracking speed changes.\n\\end{itemize}\n\n\\subsubsection{Procedure}\nDuring an experimental session, a participant was first introduced to the task and asked to sign an informed consent form. Then, the participant sat in front of the curved monitor with their hands positioned above the Leap Motion sensor. A researcher sat beside the participant and operated the experimental software to initiate new experimental trials. Participants performed one practice trial to get familiar with the interface. After that, they performed three experimental trials using the same interface, with their locomotion data recorded during the trials. The order of access to the interfaces was determined using a balanced Latin square design. When participants completed a block (consisting of one practice trial and three experimental trials) for a given interface, they were asked to rate the interface using the user interface questionnaire introduced in Section~\\ref{sec:ui}. They were subsequently introduced to the next interface, asked to complete a practice trial and three experimental trials, and then filled in another user interface questionnaire. The same procedure was run until a participant had used all four interfaces.
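The sketch below shows how the five metrics defined above could be computed from per-frame logs; for simplicity the positions and speeds are treated as one-dimensional arrays sampled at an assumed rate \\textit{fs}:\n\n\\begin{verbatim}
import numpy as np

def pursuit_metrics(avatar_pos, ball_pos, avatar_speed, ball_speed,
                    keyframe_idx, fs):
    # keyframe_idx: sample indices at which the ball's speed changes.
    dist = np.abs(ball_pos - avatar_pos)
    dspeed = np.abs(ball_speed - avatar_speed)
    w = max(1, int(0.1 * fs))  # 0.1 s window after each speed change
    s_inst = np.mean([dspeed[i:i + w].mean() for i in keyframe_idx])
    return dict(d_avg=dist.mean(), d_std=dist.std(),
                s_avg=dspeed.mean(), s_std=dspeed.std(), s_inst=s_inst)
\\end{verbatim}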
\\subsubsection{Results}\nWe performed the data analyses using R 4.0.3. The independent factor of the statistical analyses was the interface (Finger Distance, Finger Number, Finger Tapping and gamepad) and was treated as a fixed effect. The dependent factors for the subjective analyses were the factors given by the user interface questionnaire. The dependent factors for the objective analyses were the parameters defined in Section~\\ref{sec:exp1_metrics}. Participants were treated as a random effect. Subjective data given by participants using the user interface questionnaire were analysed using the Friedman test, followed by the Wilcoxon signed-rank test with the Bonferroni correction for the post-hoc analyses. Figure~\\ref{fig:ball_errorbar} shows the mean values and the standard error of the mean, and Figure~\\ref{fig:ball_horizontal} shows the distribution of the responses from participants. Objective parameters based on the metrics defined in Section~\\ref{sec:exp1_metrics} were extracted from the collected experimental data and analysed using linear mixed-effects models (package NLME in R). The post-hoc analyses for the objective data were performed using Tukey's HSD (Honestly Significant Difference) test. No significant learning effect of the number of trials was found on any objective factor; the error contribution of the number of trials was minimal, so it was not included in the analyses. Figure~\\ref{fig:objective_ball} shows the mean and the standard error of the mean of all objective parameters. Results of the statistical analyses on both subjective and objective data are given in Table~\\ref{tab:ball_subjective} and Table~\\ref{tab:ball_objective}, respectively.
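For reference, the nonparametric part of this pipeline can be sketched with scipy as follows (the analyses reported here were performed in R; the linear mixed-effects models and Tukey's HSD test are not reproduced in this sketch):\n\n\\begin{verbatim}
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

def subjective_analysis(ratings, alpha=0.05):
    # ratings: dict mapping interface name -> per-participant scores
    # for one questionnaire factor (paired across participants).
    names = list(ratings)
    stat, p = friedmanchisquare(*[ratings[n] for n in names])
    pairwise = {}
    if p < alpha:  # post-hoc Wilcoxon tests, Bonferroni-corrected
        pairs = list(combinations(names, 2))
        for a, b in pairs:
            _, p_ab = wilcoxon(ratings[a], ratings[b])
            pairwise[(a, b)] = min(1.0, p_ab * len(pairs))
    return p, pairwise
\\end{verbatim}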
Analyses of the subjective parameters using the Friedman test showed that the interfaces had significant effects on ease-to-learn ($p<0.001$), ease-to-use ($p<0.001$), natural-to-use ($p<0.001$), tiredness ($p=0.004$), responsiveness ($p=0.016$) and subjective accuracy ($p=0.006$), but not on fun. Pairwise comparisons were performed on the subjective factors using the Wilcoxon signed-rank test with the Bonferroni correction. Results showed that gamepad was significantly easier to learn than Finger Tapping. Although Finger Tapping had a lower mean score on ease-to-learn, the differences from Finger Distance and Finger Number were not statistically significant. This result is reasonable: the gamepad is a traditional device for navigation in games, so participants found it easier to learn, while the hand gesture interfaces were all relatively new to them. Finger Distance, Finger Number and gamepad were found to be significantly easier to use than Finger Tapping, and there was no significant difference among Finger Distance, Finger Number and gamepad on ease-to-use. A primary reason was that walking speed with Finger Tapping could only be adjusted after two peaks on the speed curve of the index finger had been detected, so there was a latency in adjusting walking speed compared with the other interfaces. We also found that Finger Distance, Finger Number and gamepad were significantly more natural to use than Finger Tapping, with no significant difference among the three on natural-to-use. This showed that Finger Tapping was somewhat unusual to participants, who needed practice to get familiar with the interface; hence the lower ratings on this factor. There was no statistically significant difference among the interfaces in terms of fun, but the relatively high mean scores on this factor indicated that all four interfaces were fun for virtual walking in VR. Finger Tapping was found to be significantly more tiring than gamepad, while there was no significant difference among Finger Distance, Finger Number and Finger Tapping, showing that the repetitive motions of Finger Tapping caused fatigue. Finger Number was found to be significantly more responsive than Finger Tapping, and there was no statistically significant difference among Finger Number, Finger Distance and gamepad. This was also due to the latency inherent in the current design of the Finger Tapping gesture. Finally, Finger Number and gamepad were found to be significantly more subjectively accurate than Finger Tapping. There was no statistically significant difference between Finger Number and gamepad, nor between Finger Distance and Finger Tapping, in terms of subjective accuracy, indicating that Finger Number and gamepad were considered more accurate by participants. Examining Figure~\\ref{fig:ball_horizontal}, we found that gamepad had the largest number of ``strongly agree'' ratings on all factors except tiredness. Finger Number had a similar number of ``strongly agree'' ratings, slightly higher than Finger Distance, while Finger Tapping received the lowest number. \n\nAnalyses of the objective factors using linear mixed-effects models showed significant effects on all five parameters: average distance $d_{avg}$ ($p<0.001$), standard deviation of distance $d_{std}$ ($p<0.001$), average speed difference $s_{avg}$ ($p<0.001$), standard deviation of speed difference $s_{std}$ ($p<0.001$) and speed difference at key frames $s_{inst}$ ($p<0.001$). Further analyses using Tukey's HSD test (together with examination of Figure~\\ref{fig:objective_ball}) showed that Finger Tapping had a larger error in maintaining the 3 m distance between the avatar and the rolling wooden ball. The performance of Finger Distance, Finger Number and gamepad was similar on this parameter, showing that these three interfaces were more accurate in maintaining a relative distance to the rolling ball. Similar results were found for the standard deviation of distance $d_{std}$ and the average speed difference $s_{avg}$: Finger Tapping had a larger oscillation interval $d_{std}$ for the distance between the avatar and the rolling wooden ball, and a larger error in the average speed difference $s_{avg}$ during the target pursuit task. In terms of the standard deviation of speed difference $s_{std}$, Finger Distance and gamepad had similar performance, with values significantly lower than those of the other two interfaces; Finger Number also had a significantly lower value of $s_{std}$ than Finger Tapping. In terms of the speed difference at key frames $s_{inst}$, Finger Distance performed similarly to gamepad, Finger Number was significantly better than the other interfaces and Finger Tapping performed significantly worse. This showed that Finger Number had the best transient response in tracking speed changes.\n\n\\subsubsection{Summary}\nExperiment 1 showed that the Finger Number gesture and the Finger Distance gesture received user preference ratings similar to the gamepad interface. The Finger Tapping gesture received lower ratings on all factors except fun.
In terms of performance on speed control, the Finger Distance gesture was comparable to the gamepad method. The Finger Number gesture was slightly worse in terms of the standard deviation of speed difference than the Finger Distance gesture and the gamepad interface, although it had the best transient response at speed key frames. The Finger Tapping gesture had the largest errors in maintaining the distance and speed.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/setup\/gate.png}\n\\caption{Exemplar snapshot of Experiment 2. Wooden gates are the waypoints. Light blue coloured spheres represent a user's tracked fingers and the hand centres of both hands.}\n\\label{fig:gate}\n\\end{figure}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/exp2\/door_errorbar.pdf}\n\\caption{Results of subjective factors of Experiment 2 (bar plot convention is as in Figure~\\ref{fig:ball_errorbar}).}\n\\label{fig:door_errorbar}\n\\end{figure*}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/exp2\/door_horizontal.pdf}\n\\caption{Detailed responses from participants in Experiment 2.}\n\\label{fig:door_horizontal}\n\\end{figure*}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figures\/exp2\/objective_door.pdf}\n\\caption{Objective factors of Experiment 2 (bar plot convention is as in Figure~\\ref{fig:ball_errorbar}).}\n\\label{fig:objective_door}\n\\end{figure*}\n\n\\begin{table*}[!ht]%\n\\centering\n\\caption{Results of Statistical Analyses on Subjective Factors of Experiment 2}\n\\label{tab:door_subjective}\n\\includegraphics[width=0.85\\textwidth]{tables\/door_subjective_table.pdf}\n\\end{table*}\n\n\\begin{table*}[!ht]%\n\\centering\n\\caption{Results of Statistical Analyses on Objective Factors of Experiment 2}\n\\label{tab:door_objective}\n\\includegraphics[width=0.6\\textwidth]{tables\/door_objective_table.pdf}\n\\end{table*}\n\n\\subsection{Experiment 2: Waypoints Navigation}\n\\subsubsection{Introduction}\nThe purpose of Experiment 2 was to test the performance and user preference of the four interfaces for waypoint navigation when direction control with the left hand was introduced.\n\n\\subsubsection{Participants}\nSixteen undergraduate students (age: 18-24, eight males and eight females) volunteered for this experiment. None had participated in Experiment 1 and all had normal or corrected-to-normal vision. Informed consent was obtained before the experiment. Participants were na\u00efve to the purpose of the experiment.\n\n\\subsubsection{Design}\nWe adapted this experiment from \\cite{zhao_effects_2018} to evaluate the usability of the interfaces for waypoint navigation. We used the same terrain as in Experiment 1 but placed wooden gates as waypoints in the virtual environment (shown in Figure~\\ref{fig:gate}). The gates had an inner dimension of 2 m (W) $\\times$ 2 m (H) $\\times$ 0.1 m (D). The distance from the back side of a gate to the front of its successor was fixed at 5 m. In the lateral direction, the gates were placed within an interval of $\\pm$ 2 m. In total, fifty waypoints were placed in the scene (a sketch of this layout is given below). The initial distance from the avatar to the front side of the first waypoint was 5 m, and the distance from the back side of the last waypoint to the bounding box that triggered the end of the experiment was also 5 m. An experimental trial stopped whenever a participant reached the bounding box placed behind the last waypoint.
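The gate layout can be generated as in the following sketch; how the lateral offsets were distributed within $\\pm$ 2 m is our assumption (uniform sampling here):\n\n\\begin{verbatim}
import random

def generate_waypoints(n_gates=50, gap=5.0, depth=0.1,
                       lateral_range=2.0, first_front=5.0):
    # Gate centre positions (x lateral, z forward); avatar starts at z=0.
    gates, front = [], first_front
    for _ in range(n_gates):
        x = random.uniform(-lateral_range, lateral_range)
        gates.append((x, front + depth / 2.0))
        front += depth + gap  # back of this gate plus the 5 m gap
    return gates
\\end{verbatim}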
The participants' goal in the experiment was to navigate through the waypoints using the four different interfaces while trying to achieve a short task completion time and a fast locomotion speed. It was also necessary to avoid missing waypoints or colliding with them. Participants had both direction control and speed control, using both hands, with each of the four interfaces. In addition, participants were not allowed to backtrack to a waypoint in case they missed one. The acceleration of the avatar was again set to 0.5 ${\\rm m\/s^2}$, and the maximum and minimum walking speeds were set to 5 km\/h and 0 km\/h, respectively, for all four interfaces. For this experiment, we also adopted a balanced Latin square design to control for order effects. \n\n\\subsubsection{Metrics}\\label{sec:exp2_metrics}\nBased on the metrics given in \\cite{zhao_effects_2018}, we proposed the following metrics to evaluate the interfaces in our study:\n\\begin{itemize}\n\\item \\textbf{Task completion time \\bm{$t_c$}}:\\\\\nThe duration from the start of locomotion to the instant when the participant reaches the bounding box that triggers the end of the task.\n\\item \\textbf{Average locomotion speed \\bm{$s_l$}}:\\\\ \nThe average locomotion speed, computed by dividing the total length of the locomotion path by the task completion time \\bm{$t_c$}.\n\\item \\textbf{Smoothness of the locomotion path \\bm{$d_p$}}:\\\\\nThe mean lateral distance ($x$-axis) from the actual locomotion path to the optimal locomotion path. The optimal locomotion path is defined by the shortest distance between the centres of two adjacent waypoints. \n\\item \\textbf{Number of successfully passed waypoints \\bm{$n_w$}}:\\\\\nThe number of waypoints that a participant successfully passed.\n\\item \\textbf{Number of collisions with waypoints \\bm{$n_c$}}:\\\\\nThe number of collisions of the avatar with the frames of the waypoints. This parameter was obtained from Unity's collision detection during gameplay and was recorded for analysis.\n\\end{itemize}\n\n\\subsubsection{Procedure}\nThe procedure was as in Experiment 1, except that participants were given the waypoint navigation task.\n\n\\subsubsection{Results}\nThe protocols for analysing the subjective and objective data in Experiment 2 were as in Experiment 1. The independent factor was the interface, the dependent factors for the subjective analyses were the factors given in the user interface questionnaire and the dependent factors for the objective analyses were defined in Section~\\ref{sec:exp2_metrics}. Results of the statistical analyses on both subjective and objective data are given in Table~\\ref{tab:door_subjective} and Table~\\ref{tab:door_objective}, respectively.\n\nAnalyses of the subjective factors using the Friedman test showed significant effects on all seven factors: ease-to-learn ($p=0.049$), ease-to-use ($p<0.001$), natural-to-use ($p<0.001$), fun ($p=0.021$), tiredness ($p<0.001$), responsiveness ($p=0.004$) and subjective accuracy ($p<0.001$). However, further analyses using the Wilcoxon signed-rank test with the Bonferroni correction showed no significant differences among the four interfaces in terms of ease-to-learn and fun, indicating that the effect of the interfaces on these factors was weak. Finger Distance, Finger Number and gamepad were found to be significantly easier to use than Finger Tapping, which is consistent with the result of Experiment 1.
In terms of natural-to-use, Finger Distance and gamepad were found to be significantly more natural than Finger Tapping, and no significant difference was found among Finger Distance, Finger Number and gamepad, indicating that Finger Distance and gamepad were the more intuitive interfaces. Finger Distance, Finger Number and gamepad were found to be significantly less tiring than Finger Tapping. As discussed for Experiment 1, this was mainly due to the repetitive motion of Finger Tapping, which resulted in tiredness. Finger Distance was found to be significantly more responsive than Finger Tapping, as this interface allowed fine control of speed through the distance between the thumb and the index finger of a tracked hand. Finally, Finger Distance and gamepad were significantly more subjectively accurate than Finger Tapping. The distribution of the user responses in Experiment 2 is shown in Figure~\\ref{fig:door_horizontal}. We found that gamepad still received the most ``strongly agree'' ratings, excluding tiredness, compared with the other interfaces. Finger Distance and Finger Number had a similar number of ``strongly agree'' ratings, while Finger Tapping had the lowest.\n\nObjective data analysed using linear mixed-effects models showed significant effects on all parameters except the number of successfully passed waypoints $n_w$: task completion time $t_c$ ($p<0.001$), average locomotion speed $s_l$ ($p<0.001$), path smoothness $d_p$ ($p<0.001$) and the number of collisions $n_c$ ($p<0.001$). Post-hoc analyses performed using Tukey's HSD test, combined with examination of Figure~\\ref{fig:objective_door}, showed that Finger Distance and gamepad had similar task completion times $t_c$ and were significantly faster than the other interfaces. Finger Number had a longer task completion time $t_c$, and Finger Tapping was significantly worse than the other interfaces. In terms of average locomotion speed $s_l$, gamepad was significantly faster than the other interfaces, with Finger Distance second, Finger Number third and Finger Tapping last. Locomotion using gamepad resulted in a significantly smoother path $d_p$ than the three other interfaces, and there was no statistical difference among the other three interfaces in terms of path smoothness $d_p$. A similar significant effect was found on the number of collisions $n_c$: locomotion using gamepad had significantly fewer collisions with the frames of the wooden gates than the other interfaces, while Finger Distance, Finger Number and Finger Tapping had similar performance on this parameter. No significant effect was found on the number of successfully passed waypoints, as this is a coarse parameter that was not sensitive to the different interfaces. These results showed that the Finger Distance gesture is very efficient for waypoint navigation. The second experiment, with direction control enabled, is a more general case of virtual locomotion than Experiment 1; the implications of its results are therefore more important and meaningful.\n\n\\subsubsection{Summary}\nExperiment 2 found that participants had similar user preferences for the Finger Distance gesture, the Finger Number gesture and the gamepad interface, while the Finger Tapping gesture was the least preferred.
In addition, the performance of the Finger Distance gesture was comparable to that of the gamepad interface on the waypoint navigation task. The Finger Number gesture took slightly longer to complete the task, and its average locomotion speed was also lower than that of the Finger Distance gesture and the gamepad interface. The Finger Tapping gesture was the slowest in terms of task completion time and average locomotion speed.\n\n\n\\section{Discussion}\nIn our study, we found that the Finger Distance gesture was comparable to the gamepad interface in terms of performance and user preference. Our explanation is that the Finger Distance gesture allowed more precise control of walking speed, through fine control of the distance between the fingertips of the thumb and the index finger of a tracked hand, than the two other hand gesture interfaces. The Finger Number gesture was slightly worse on the waypoint navigation task than the Finger Distance gesture, but it performed better on a few factors in the target pursuit task. A problem with the Finger Number gesture is that the interface gives discrete speed values, so it has difficulty enabling precise control of locomotion speed. It would also have problems tracking targets moving at constant non-integer speeds, if that were used as an experimental scenario. However, the Finger Number gesture is still an intuitive technique: it controls locomotion speed based on the number of extended fingers, which is easy for people to learn. The Finger Tapping gesture suffered from a latency problem. Controlling walking speed could only be performed after detecting two consecutive peaks, and there was a latency in detecting the peak following a previously detected one, which made the interface not very responsive for speed control. We expect that by modifying the algorithm to allow speed control within two consecutive peaks (\\textit{i.e.} within-step speed control as in the GUD-WIP technique \\cite{wendt_gud_2010}), the modified Finger Tapping gesture may perform better than the current method. However, fatigue from the repetitive tapping motion of the finger remains a problem that cannot be easily solved. \n\nOur study included only one method for direction control, namely rotating the left palm around the $y$-axis in the Leap Motion sensor's coordinate system. For future work, it is necessary to test the utility of the speed control gestures in combination with the direction control methods proposed by other researchers, which include using the pointing direction of a single finger \\cite{caggianese_freehand-steering_2020} and head orientation \\cite{caggianese_freehand-steering_2020,cardoso_comparison_2016}. Furthermore, the present methods focused on using finger motions to control locomotion speed; further research is needed to compare finger motion gestures with static hand gestures \\cite{cisse_user_2020} in terms of usability.\n\nOne limitation of our study is that we did not use VR headsets in the experiments, as sharing headsets among participants is a health concern \\cite{steed_evaluating_2020}. We expect this problem to be solved once a sanitization procedure for VR headsets is established. Other options include recruiting users who have their own VR hardware or establishing a pool of users with funded VR hardware, but many challenges remain \\cite{steed_evaluating_2020}.
We plan to re-run the experiments using VR headsets in the future and compare the results with those of the present study, which used a curved monitor for virtual scene presentation.\n\nA very recent framework proposed for evaluating VR locomotion techniques \\cite{cannavo_evaluation_2021} offers a new testing tool for virtual locomotion; a direction for future study is to test the present hand gesture interfaces within this framework. Another promising direction is to apply these gestures to the teleoperation of robots and assess their usability in this type of application. A previous study investigated the performance of a set of static hand gestures, a gamepad interface and a traditional desktop user interface for the teleoperation of robots \\cite{doisy_comparison_2017}. It would be interesting to investigate how different hand gestures perform in such applications and whether hand gesture interfaces improve presence during teleoperation when wearing VR headsets. Finally, another aspect for future study is the effect of hand gesture and gamepad interfaces on simulator sickness \\cite{kennedy_simulator_1993}.\n\n\\section{Conclusion}\nIn this paper, we presented three hand gestures and their algorithms for virtual locomotion, and also implemented a gamepad locomotion interface, using the Xbox One controller, for comparison. Through two virtual locomotion tasks, we systematically compared the performance and user preference of these interfaces. We showed that the Finger Distance gesture is comparable to the gamepad interface, while the performance and user preference of the Finger Number gesture were slightly below those of the Finger Distance gesture. Our results provide VR researchers and designers with methods and empirical data for implementing locomotion interfaces based on hand gestures in their VR systems. We recommend including these methods in new VR systems and letting users select their preferred interface. In addition, multi-modal interfaces that integrate different interaction mechanisms for locomotion (\\textit{e.g.} hand gestures, body gestures and voice recognition) can be considered for new VR systems; such interfaces would provide end-users with more options for interaction in VR. Finally, we believe that hand gestures also have potential for locomotion in augmented reality (AR) and mixed reality (MR) applications and for the teleoperation of robots. As few related studies have been conducted, new opportunities lie in these fields.\n\n\\bibliographystyle{ieeetr}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the mid-infrared (mid-IR) spectral range, between 2 and 25 \u00b5m, most molecules have strong absorption peaks that can be used to selectively and efficiently detect and quantify chemical or biological species. This is the reason why large-scale projects are currently being developed around mid-infrared photonics for various applications directly concerning important societal issues such as health and environment monitoring or security \\cite{MIRTHE,Minerva,FLAIR,MIRIF}.\n\nThe extension of silicon technology to longer wavelengths is a natural way to make mid-IR photonic components. However, the silicon oxide used in this technology rapidly limits the accessible wavelength range, since it starts to absorb around $\\sim$3.6 \u00b5m.
Therefore, technical solutions to minimize the amount of light propagating in the SiO$_2$ layer should be developed. A first approach is to modify the geometry of the guides to minimize contact between the core and the absorbing layer, for example by suspending them \\cite{Penades2016} or placing them on a pedestal \\cite{Lin2013}. The propagation losses obtained are below 1 dB\/cm (0.82 dB\/cm precisely) in the case of suspended waveguides and 2.7 dB\/cm in the case of a pedestal, but both of these results were obtained for wavelengths below 4 \u00b5m. Another strategy, more suitable for longer wavelengths, is to modify the material. For example, silicon-on-sapphire nanowires have shown propagation losses of 2 dB\/cm at $\\lambda = 5.18$ \u00b5m \\cite{Li2011} and 1 dB\/cm at $\\lambda = 4$ \u00b5m \\cite{Singh2015}. Germanium-based materials also seem very promising, especially for longer wavelengths, although the propagation losses currently remain above 1 dB\/cm \\cite{Chang2012,Brun2014,Ramirez2017,Ramirez2018}. Moreover, the small physical dimensions of the transverse section of those waveguides preclude efficient light collection, and coupling losses of several decibels are often reported.\n\nAnother way to obtain a waveguide operating beyond 4 \u00b5m is to write it by the ultrafast laser inscription (ULI) technique in glass. ULI is a very versatile and cost-effective method for rapid prototyping and production of optical components \\cite{Osellame2012}. It has been applied in the mid-IR to different chalcogenide glasses (gallium lanthanum sulfide and 75GeS$_2$-15Ga$_2$S$_3$-4CsI-2Sb$_2$S$_3$-4SnS in \\cite{Rodenas2012} and Ge$_{15}$As$_{15}$S$_{70}$ in \\cite{DAmico2014}), and guiding up to $\\lambda = 10$ \u00b5m has been experimentally demonstrated. Very recently, propagation losses of around 1-1.5 dB\/cm have been reported at $\\lambda = 7.8$ \u00b5m in another germanium-based composition (Ge$_{33}$As$_{12}$Se$_{55}$) \\cite{Butcher2018}.\n\nWe have previously reported a writing procedure for multicore waveguides in an arsenic-free germanium-based chalcogenide glass (72GeS$_2$-18Ga$_2$S$_3$-10CsCl) that shows propagation losses of 0.11 $\\pm$ 0.03 dB\/cm at $\\lambda = 1.55$ \u00b5m \\cite{Masselin2016}. In this letter we present the extension of these results to the mid-IR, at $\\lambda = 4.5$ \u00b5m.\n\n\\section{Description of the writing procedure}\n\nThe photowritten waveguides are of multicore type: they consist of channels of positive refractive index variation ($\\Delta$n\\xspace), induced by a train of femtosecond pulses, placed parallel to each other on a mesh. The inscription geometry differs from the classical transverse or longitudinal ones, as the irradiation is performed without continuous sample translation. In a first step, the laser beam is focused in front of channel \\ding{172} and the sample is irradiated with a burst of femtosecond pulses. The duration $\\tau$ of this burst is an important parameter of the experiment, as will be described later. The result of this irradiation is an increase of the length of channel \\ding{172}. In a second step, the sample is translated perpendicularly to the writing direction, in the plane of the transverse section, so that channel \\ding{173} is in front of the focus of the laser beam. A second burst of pulses is sent onto the sample, which again increases the channel length.
The operation is repeated as needed for all the channels until the slice of the transverse section is completed. Then the sample is translated parallel to the writing beam and the procedure is repeated over the entire length of the sample.\n\n\n\nThis procedure presents the advantage that the magnitude of the refractive index contrast between the channels and the non-irradiated matrix can easily be controlled by varying the burst duration $\\tau$. We measured $\\Delta$n\\xspace using quantitative phase microscopy followed by an Abel inversion \\cite{Ampem-Lassen2005}, and its dependence on $\\tau$ is presented in Figure \\ref{Fig:Dn_vs_Tau} for different repetition rates of the pulse train. It can be seen from this figure that $\\Delta$n\\xspace increases nearly linearly for $\\tau$ values below 150 ms and saturates for higher values. The saturation level also depends on the repetition rate, which can be attributed to the accumulation of charges released during the writing process \\cite{Caulier2011}.\n\n\\begin{figure}[htbp]\\centering\n \\includegraphics[width=\\linewidth]{.\/Dn_vs_Tau-2.pdf}\n \\caption{Dependence of the refractive index contrast between the individual channels and the glass matrix on the burst duration, for different repetition rates of the pulse train. The dashed lines are a guide for the eye.}\n \\label{Fig:Dn_vs_Tau}\n\\end{figure}\n\nOn the other hand, the diameter of the individual channels is constant, owing to the formation process of $\\Delta$n\\xspace. Indeed, the refractive index variation is related to the formation of a filament during the propagation of the femtosecond pulse in the glass, whose diameter is defined by the properties of the material \\cite{Caulier2011}. The thermal effects that could induce a dependence of the inscription diameter on the experimental parameters \\cite{Caulier2013} are not dominant in the process through which $\\Delta$n\\xspace appears. It is however possible to adjust the diameter of the total structure by changing the distance between the channels or by adding more of them. It is therefore possible to select $\\Delta$n\\xspace and the dimensions of the waveguide independently, in order to engineer its characteristics as needed.\n\n\\section{Waveguide performance measurements}\n\nFirst, we study waveguides composed of channels placed on a hexagonal mesh, and two configurations are compared. They differ in the number of rows that make up the structure. The first is composed of 4 rows of channels separated by a distance of 2.875 \u00b5m and the second of 5 rows with a separation of 2.3 \u00b5m between the channels, so that the total diameter of the structure is the same (23 \u00b5m). A typical example of such a structure is shown in the inset of Figure \\ref{Fig:Loss-Hexa}. The waveguides were written with a laser repetition rate of 200 kHz.\n\n\\begin{figure}[htbp]\\centering\n \\includegraphics[width=\\linewidth]{.\/Loss-Hexa-2.pdf}\n \\caption{Measured propagation losses for hexagonal mesh structures with 4 and 5 rows. The dashed lines are a guide for the eye.}\n \\label{Fig:Loss-Hexa}\n\\end{figure}\n\nThe propagation losses are measured with the back-reflection method \\cite{Ramponi2002}. After the inscription, the sample is cut and repolished to eliminate edge effects.
The input face is cut at a slight angle to the cross-section plane of the guides in order to spatially separate the waves reflected at the input and output faces of the waveguide and thus eliminate interference likely to distort the measurements. The total length of the guides after this step is 28 mm. The results of the loss measurements are shown in Figure \\ref{Fig:Loss-Hexa}. The existence of an optimal irradiation duration $\\tau_{min}$ that leads to a minimum of the propagation losses $\\alpha$ is clearly observed. This minimum value ($\\alpha_{min}$) is equal to 0.20 $\\pm$ 0.05 dB\/cm, which is far below the values obtained for photowritten waveguides or those derived from silicon photonics in the mid-IR. In fact, for $\\tau < \\tau_{min}$ the beam is poorly confined in the structure, whereas for $\\tau > \\tau_{min}$ the field tends to be localized in the individual channels. The optimal condition is a compromise between these two situations \\cite{Masselin2016}.\n\nIt is interesting to note that $\\alpha_{min}$ is the same for the two configurations (4 and 5 rows). This means that it is independent of the density of individual channels, i.e. the number of channels per square micron, since the surface of the transverse section is the same for both configurations. Of course this would no longer hold if the distance between the channels became too large (i.e. structures with 2 or 3 rows at the same total written diameter). It is therefore a good indication that our writing method leads to very homogeneous refractive index variation channels and that the concatenation of the different slices of the transverse section does not add irregularities that would scatter the light in a way similar to sidewall roughness. However, for a 4-row structure with a lower channel density (0.177 channels\/\u00b5m$^2$ compared with 0.265 channels\/\u00b5m$^2$ in the case of 5 rows), the optimal value of $\\tau$ becomes more critical, since the dependence of $\\alpha$ on the burst duration $\\tau$ is more pronounced.\n\nIt should be noted that all of these guides, and those described later in the text, were written in the same piece of glass. Thus the properties of the host matrix, which could influence the performance, are the same for all waveguides, and the differences between the behaviors of the propagation losses are due only to the different structures of the transverse section.\n\n\\begin{figure}[htbp]\\centering\n \\includegraphics[width=\\linewidth]{.\/Loss-Circ-4.pdf}\n \\caption{Measured propagation losses for a circular mesh structure with 5 rings. The dashed line is a guide for the eye.}\n \\label{Fig:Loss-Circ}\n\\end{figure}\n\nTo illustrate the versatility of our method, we modified the morphology of the transverse section of the waveguide. Here the mesh is composed of concentric rings, and on the N$^{\\text{th}}$ ring the channels are separated by an angle of $2\\pi \/6N$. These inscriptions were made with a laser repetition rate of 250 kHz: preliminary evaluations carried out with a rate identical to that used for the hexagonal mesh showed that higher values of $\\Delta$n\\xspace are necessary to obtain minimum losses. A photograph of a structure formed of 5 rings is shown in the inset of Figure \\ref{Fig:Loss-Circ}, which presents the propagation losses for different burst durations.
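As an aside, the channel coordinates of such a circular mesh are easily generated, as in the sketch below; the radial spacing between rings and the presence of a central channel are our assumptions here:\n\n\\begin{verbatim}
import math

def circular_mesh(n_rings=5, dr=2.3):
    # Ring N carries 6N channels separated by an angle 2*pi/(6N);
    # dr is the assumed radial spacing. Coordinates in micrometres.
    points = [(0.0, 0.0)]  # assumed central channel
    for n in range(1, n_rings + 1):
        for k in range(6 * n):
            phi = 2.0 * math.pi * k / (6 * n)
            points.append((n * dr * math.cos(phi),
                           n * dr * math.sin(phi)))
    return points
\\end{verbatim}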
In this figure we can see that the behavior of $\\alpha$ with $\\tau$ is the same as for the hexagonal mesh and that the minimum value is the same within the measurement uncertainties. Here also, we observe that the decrease in the density of $\\Delta$n\\xspace channels (0.219 channels\/\u00b5m$^2$) leads to a stricter dependence of the propagation losses on the duration of the pulse burst, compared with the 5-row hexagonal mesh.\n\n\n\n\nWe can now consider the other sources of losses, noting first that the intrinsic absorption of the bulk material is already taken into account in the measurement of the propagation losses. The reflection coefficient at the air-guide interfaces was measured to be on the order of 13\\%, which is in agreement with the value of the refractive index of the sample \\cite{Masselin2012}. We also measured the power carried by the waveguide and derived the coupling efficiency using the following equation:\n\n\n\\begin{equation}\n \\eta = \\frac{1}{\\left( 1 - R \\right)^2 \\, T_{Opt}} \\, 10^{\\left(0.1 \\,\n \\alpha L\\right)} \\, \\frac{P_{Out}}{P_{In}}\n\\end{equation}\nwhere $R$ is the reflection coefficient of the interface between air and the waveguide, $T_{Opt}$ is the transmission coefficient of the focusing and collimating optics, $\\alpha$ is taken from the values reported in Figures \\ref{Fig:Loss-Hexa} and \\ref{Fig:Loss-Circ}, $L$ is the length of the waveguide (28 mm) and $P_{In}$ and $P_{Out}$ are the powers measured before and after the waveguide, respectively.\n\n\\begin{figure}[htbp]\\centering\n \\includegraphics[width=\\linewidth]{.\/Coupling-v1.pdf}\n \\caption{Coupling efficiency of light inside the waveguides for the hexagonal and circular meshes.}\n \\label{Fig:Coupling}\n\\end{figure}\n\nThe results of these measurements are reported in Figure \\ref{Fig:Coupling}. For all the structures considered, the coupling efficiency $\\eta$ is an increasing function of the burst duration, i.e. of $\\Delta$n\\xspace, with values that can be higher than 0.6. While the behavior of the dependence is the same for all meshes, we note that for most values of $\\tau$ the highest value of $\\eta$ is obtained for the 5-row hexagonal mesh, which corresponds to the structure with the highest density of $\\Delta$n\\xspace channels.\n\nIt should be remarked that, the global performance of the waveguide being a combination of both the propagation losses and the coupling efficiency, the value of $\\tau$ that gives the lowest value of $\\alpha$ may not be the optimal one, since a higher value would increase $\\eta$ and ultimately maximize the carried power. Consequently, $\\tau$ should be chosen according to the functionality of the device. For a short device it would be preferable to select a large $\\tau$ to maximize $\\eta$, while if the device is long enough that the propagation losses are predominant, $\\tau$ should be shorter to minimize $\\alpha$.
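In practice, inverting the coupling-efficiency equation above amounts to the following small helper (a sketch; the parameter names are ours):\n\n\\begin{verbatim}
def coupling_efficiency(p_in, p_out, alpha_db_per_cm, length_cm,
                        reflectivity=0.13, t_opt=1.0):
    # Coupling efficiency eta from the measured powers, correcting for
    # Fresnel reflections, the optics transmission and propagation loss.
    correction = 10.0 ** (0.1 * alpha_db_per_cm * length_cm)
    return (p_out / p_in) * correction / ((1.0 - reflectivity) ** 2 * t_opt)
\\end{verbatim}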
\\begin{figure}[htbp]\\centering\n \\includegraphics[width=\\linewidth]{.\/Mode-2.pdf}\n \\caption{Mode of the propagated beam in a hexagonal mesh with 5 rows. Similar profiles are obtained with 4 rows and with the circular mesh.}\n \\label{Fig:Mode}\n\\end{figure}\n\nFinally, the near-field pattern of the propagated beam was imaged on the InSb sensor of a mid-IR camera (FLIR A6750sc) and the mode was measured for the optimal values of the burst duration $\\tau$. As can be seen from Figure \\ref{Fig:Mode}, the beam profile is very close to Gaussian, indicating the single-mode behavior of the waveguide. This figure shows the mode corresponding to a hexagonal mesh with 5 rows, but similar profiles are obtained for the other structures.\n\n\n\\section{Conclusions}\n\nIn conclusion, we have presented the realization of mid-IR waveguides in an arsenic-free chalcogenide glass with propagation losses below 0.2 dB\/cm at 4.5 \u00b5m. Moreover, our method allows control of the waveguide dimensions, meaning that efficient light collection can be achieved in most situations, as the diameter of the structure can be adapted to the requirements, i.e. butt coupling from a fiber or coupling from free space. As an illustration, in the best configuration we are able to achieve a total carried power, corrected for Fresnel losses, as large as 60\\% of the incident power (we do not take the Fresnel losses into account since they can be minimized by using an anti-reflective coating). Combining these results with those obtained previously \\cite{Masselin2016} demonstrates that the described method can be used to design waveguide devices with high performance over the whole range between 1.5 and 4.5 \u00b5m. It can reasonably be assumed that this range can be extended to longer wavelengths and that the ultimate limit would be set by the transmission of the glass (10 \u00b5m). This work is currently being continued to extend the technique to the production of curved guides.\n\n\nThis work has been partially supported by the French National Research Agency (ANR) through the COMI project (ANR-17-CE24-0002), as well as by the Ministry of Higher Education and Research, the Hauts-de-France council and the European Regional Development Fund (ERDF) through the Contrat de Projets Etat-Region (CPER Photonics for Society P4S).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{Sec:intro}\nDecision Trees are a widely used technique for nonparametric regression and classification. Decision Trees result in interpretable models and form a building block for more complicated methods such as bagging, boosting and random forests. See~\\cite{loh2014fifty} and references therein for a detailed review. The most prominent example of decision trees is classification and regression trees (CART), proposed by~\\cite{breiman1984classification}. CART operates in two stages. In the first stage, it recursively partitions the space of predictor variables in a greedy, top-down fashion. Starting from the root node, a locally optimal split is determined by an appropriate optimization criterion and the process is then iterated for each of the resulting child nodes. The final partition, or decision tree, is reached when a stopping criterion is met for each resulting node. In the second stage, the final tree is pruned by what is called \\textit{cost complexity pruning}, where the cost of a pruned tree thus obtained is proportional to the number of leaves of the tree; see Section $9.2$ in~\\cite{friedman2001elements} for details.\n\n\n\nA possible shortcoming of CART is that it produces locally optimal decision trees. It is natural to attempt to resolve this by computing a globally optimal decision tree. However, computing a globally optimal decision tree is computationally a hard problem. It is known (see~\\cite{laurent1976constructing}) that computing an optimal (in a particular sense) binary tree is NP hard.
A recent paper of~\\cite{bertsimas2017optimal} sets up an optimization problem (see Equation~$1$ in~\\cite{bertsimas2017optimal}) in the context of classification, which aims to minimize, among all decision trees, the misclassification error of a tree plus a penalty proportional to its number of leaves. The paper formulates this problem as an instance of mixed integer optimization (MIO) and claims that modern MIO developments allow reasonably sized problems to be solved. It then reports extensive experiments on simulated and real data sets where the optimal tree outperforms the usual CART. These experiments seem to provide strong empirical evidence that optimal decision trees, if computed, can perform significantly better than CART. Another shortcoming of CART is that it is typically very hard to theoretically analyze the full algorithm because of the sequence of data dependent splits. Some results (related to the current paper) exist for the subtree obtained in the pruning stage, conditional on the maximal tree grown in the first stage; see~\\cite{gey2005model} and references therein. Theoretical guarantees for the widely used Random Forests are also typically hard to obtain in spite of much recent work; see~\\cite{scornet2015consistency},~\\cite{wager2015adaptive},~\\cite{ishwaran2015effect} and references therein. On the other hand, theoretical analysis of optimal decision trees is tractable since they can be seen as penalized empirical risk minimizers. \n\nOne class of decision trees for which an optimal tree can be computed efficiently, in low to moderate dimensions, is the class of \\textit{dyadic decision trees}. These trees are constructed from recursive dyadic partitioning. In the case of regression on a two-dimensional grid design, the paper~\\cite{donoho1997cart} proposed a penalized least squares estimator called the Dyadic CART estimator. The author showed that it is possible to compute this estimator by a fast bottom-up dynamic program with linear computational complexity $O(n \\times n)$ for an $n \\times n$ grid. Moreover, the author showed that Dyadic CART satisfies an oracle risk bound, which in turn was used to show that it is adaptively minimax rate optimal over classes of anisotropically smooth bivariate functions. Ideas in this paper were later used in~\\cite{nowak2004estimating} in the context of adaptively estimating piecewise Holder smooth functions. The idea of dyadic partitioning was also used in classification, in papers such as~\\cite{scott2006minimax} and~\\cite{blanchard2007optimal}, which studied penalized empirical risk minimization over dyadic decision trees of a fixed maximal depth. They also proved oracle risk bounds and showed minimax rate optimality for appropriate classes of classifiers. Minimax rates of convergence have also been obtained for various models of dyadic classification trees in~\\cite{lecue2008classification}. In the related problem of density estimation, dyadic partitioning estimators have also been studied in the context of estimating piecewise polynomial densities; see~\\cite{willett2007multiscale}. The current paper focuses on the regression setting and follows this line of work of studying optimal decision trees, proving an oracle risk bound and then investigating its implications for certain function classes of interest.
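To make the dynamic programming idea concrete, the following is a one-dimensional toy version of such a computation (our own simplification, for a signal whose length is a power of two; it is not the bivariate estimator of~\\cite{donoho1997cart}). Each dyadic interval is either kept as a single cell, fit by its mean and charged a penalty $\\lambda$, or split at its midpoint:\n\n\\begin{verbatim}
import numpy as np
from functools import lru_cache

def dyadic_cart_1d(y, lam):
    # Optimal penalized least squares cost over recursive dyadic
    # partitions of y (len(y) assumed to be a power of two).
    y = np.asarray(y, dtype=float)
    s1 = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])

    def sse(i, j):  # squared error of the best constant fit on y[i:j]
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / (j - i)

    @lru_cache(maxsize=None)
    def best(i, j):
        cost = sse(i, j) + lam       # keep y[i:j] as one cell
        if j - i >= 2:
            m = (i + j) // 2         # the unique dyadic split point
            cost = min(cost, best(i, m) + best(m, j))
        return cost

    return best(0, len(y))
\\end{verbatim}\n\nThe same keep-or-split recursion, run over all dyadic rectangles, underlies the multivariate estimators discussed below.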
The optimal decision trees we study in this paper are computable in time polynomial in the sample size.\n\n\nIn particular, in this paper, we study two decision tree methods for estimating regression functions in general dimensions in the context of estimating some nonsmooth function classes of recent interest. We focus on the fixed lattice design case, as in~\\cite{donoho1997cart}.\nThe first method is an optimal dyadic regression tree and is exactly the same as Dyadic CART in~\\cite{donoho1997cart} when the dimension is $2$. The second method is an Optimal Regression Tree (ORT), very much in the sense of~\\cite{bertsimas2017optimal}, applied to fixed lattice design regression. Here the estimator is computed by optimizing a penalized least squares criterion over the set of all --- not just dyadic --- decision trees. We make the crucial observation that this estimator can be computed by a dynamic programming approach when the design points fall on a lattice. Thus, for instance, one does not need to resort to mixed integer programming, and this dynamic program has computational complexity polynomial in the sample size. This observation may be known to experts but we are unaware of an exact reference. As in~\\cite{donoho1997cart}, we show that it is possible to prove an oracle risk bound (see Theorem~\\ref{thm:adapt}) for both of our optimal decision tree estimators. We then apply this oracle risk bound to three function classes of recent interest by employing approximation theoretic inequalities and show that these optimal decision trees have excellent adaptive and worst case performance.\n\n\nOverall in this paper, we revisit the classical idea of recursive partitioning in the context of finding answers to several\nunsolved questions about some classes of functions of recent interest in the nonparametric regression\nliterature. In the course of doing so, we develop, as well as bring together, several ideas from different areas relevant to the study of regression trees, such as dynamic programming, computational geometry and discrete Sobolev type\ninequalities for vector \/ matrix approximation. We believe that the main novel\naspect of the current work is to recognize, prove and point out --- by an amalgamation of\nthese ideas --- that optimal regression trees often provide a better alternative to state of\nthe art convex optimization methods, in the sense that they are simultaneously\n(near-) minimax rate optimal, adaptive to the complexity of the underlying signal (under fewer assumptions) and\ncomputationally more efficient for some classes of functions of recent interest. To the best of our knowledge, our paper is the first in a series of recent works to show the efficacy of computationally efficient optimal regression tree estimators in these particular nonparametric regression problems.\nWe now describe the function classes we consider in this paper and briefly outline our results and contributions.\n\n\\begin{itemize}\n\t\\item \\textbf{Piecewise Polynomial Functions}: We address the problem of estimating multivariate functions that are (or are close to) \\textit{piecewise polynomial} of some fixed degree on some unknown partition of the domain into axis aligned rectangles. This includes function classes such as piecewise constant\/linear\/quadratic etc. on axis aligned rectangles.
An oracle, who knows the true rectangular partition, i.e., the number of axis aligned rectangles and their arrangement, can just perform least squares separately for the data falling within each rectangle. This oracle estimator provides a benchmark for adaptively optimal performance. The main question of interest to us is how to construct an estimator which is efficiently computable and attains risk as close as possible to the risk of this oracle estimator.\n\tTo the best of our knowledge, this question has not been answered in multivariate settings. In this paper, we propose that our optimal regression tree (ORT) estimator solves this question to a\n\tconsiderable extent. Section~\\ref{secmpp} describes all of our results under this topic. It is worthwhile to mention here that we\n\talso focus on cases where the true rectangular partition does not correspond to any decision tree, which necessarily has a hierarchical structure (see Figure~\\ref{fig1}). We call such partitions\n\tnonhierarchical. Even for such nonhierarchical partitions, we make the case that ORT continues to perform well (see our results in Section~\\ref{sec:non}). We are not aware of nonhierarchical\n\tpartitions being studied before in the literature. Here our proof technique uses results from computational geometry which relate the size of any given (possibly nonhierarchical) rectangular partition\n\tto that of the minimal hierarchical partition refining it.\n\t\n\n\t\n\t\\smallskip\n\t\n\t\n\t\\item \\textbf{Multivariate Bounded Variation Functions}: Consider the function class whose total variation (defined later in Section~\\ref{sec:tv}) is bounded by some number. This is a classical function class for nonparametric regression since it contains functions which demonstrate spatially heterogeneous smoothness; see Section $6.2$ in~\\cite{tibshiraninonparametric} and references therein. Perhaps the most natural estimator for this class of functions is what is called the Total Variation Denoising (TVD) estimator. The two dimensional version of this estimator is also popular for image denoising; see~\\cite{rudin1992nonlinear}. It is known that a well tuned TVD estimator is minimax rate optimal for this class in all dimensions; see~\\cite{hutter2016optimal} and~\\cite{sadhanala2016total}. Also, in the univariate case, it is known that the TVD estimator adapts to piecewise constant functions and attains a near oracle risk with parametric rate of convergence; see~\\cite{guntuboyina2020adaptive} and references therein. However, even in two dimensions, the TVD estimator provably cannot attain the near parametric rate of convergence for piecewise constant truths. This is a result (Theorem $2.3$) in a previous article by the same authors~\\cite{chatterjee2019new}.\n\t\n\t\n\t\n\t\n\tIt would be desirable for an estimator to attain the minimax rate among bounded variation functions as well as retain the near parametric rate of convergence for piecewise constant truths in multivariate settings. Our contribution here is to establish that Dyadic CART enjoys these two desired properties in all dimensions.\n\tWe also show that Dyadic CART adapts to the intrinsic dimensionality of the function in a particular sense. Theorem~\\ref{thm:dcadap} is our main result under this topic. Our proof technique for Theorem~\\ref{thm:dcadap} involves a recursive partitioning strategy to approximate any given bounded variation function by a piecewise constant function (see Proposition~\\ref{prop:division}).
We prove an inequality, which can be thought of as a discrete version of the classical Gagliardo--Nirenberg--Sobolev inequality (see Proposition~\\ref{prop:gagliardo}), which plays a key role in the proof.\n\t\n\t\n\tAs far as we are aware, Dyadic CART has not been investigated before in the context of estimating bounded variation functions. Coupled with the fact that Dyadic CART can be computed in time linear in the sample size, our results put forth the Dyadic CART estimator as a fast and viable option for estimating bounded variation functions.\n\t\n\t\\smallskip\n\t\n\t\\item \\textbf{Univariate Bounded Variation Functions of higher order}:\n\tHigher order versions of the space of bounded variation functions have also been considered in nonparametric regression, albeit mostly in the univariate case. One can consider the univariate function\n\tclass of all $r$ times (weakly) differentiable functions whose $r$-th derivative is of bounded variation. A seminal result of~\\cite{donoho1998minimax} shows that a wavelet thresholding estimator attains the minimax rate in this problem. Locally adaptive regression splines, proposed by~\\cite{mammen1997locally}, are also known to achieve the minimax rate in this problem.\n\tRecently, Trend Filtering, proposed by~\\cite{kim2009ell_1}, has proved to be a popular nonparametric regression method. Trend Filtering is very closely related to locally adaptive regression splines and is also minimax rate optimal over the space of higher order bounded variation functions; see~\\cite{tibshirani2014adaptive} and references therein. Moreover, it is known that Trend Filtering adapts to functions which are piecewise polynomials with regularity at the knots. If the number of pieces is not too large and the length of the pieces is not too small, a well tuned Trend Filtering estimator can attain near parametric risk, as shown in~\\cite{guntuboyina2020adaptive}.\n\t\n\t\n\tOur main contribution here is to show that the univariate Dyadic CART estimator is also minimax rate optimal in this problem and enjoys a near parametric rate of convergence for piecewise polynomials; see Theorem~\\ref{thm:slowrate} and\n\tTheorem~\\ref{thm:fastrate}. Moreover, we show that Dyadic CART requires weaker regularity assumptions on the true function than what Trend Filtering requires for the near parametric rate of convergence to hold. Theorem~\\ref{thm:fastrate} follows directly from a combination of our oracle risk bound and a result about refining an arbitrary (possibly non dyadic) univariate partition to a dyadic one (see Lemma~\\ref{lem:1dpartbd}). Our proof technique for Theorem~\\ref{thm:slowrate} again involves a recursive partitioning strategy to approximate any given higher order bounded variation function by a piecewise polynomial function (see Proposition~\\ref{prop:piecewise}). We prove an inequality (see Lemma~\\ref{lem:approxpoly}) quantifying the error of approximating a higher order bounded variation function by a single polynomial, which plays a key role in the proof.\n\t\n\t\n\tAgain, as far as we are aware, Dyadic CART has not been investigated before in the context of estimating univariate higher order bounded\n\tvariation functions.
Coupled with the fact that Dyadic CART is computable in time\n\tlinear in the sample size, our results again provide a fast and viable alternative for estimating univariate higher order bounded variation functions.\n\t\n\n\t\n\\end{itemize}\n\nThe oracle risk bound in Theorem~\\ref{thm:adapt}, which holds for the optimal decision trees studied in this paper, may imply near optimal results for other function classes as well. In Section~\\ref{Sec:discuss}, we mention some consequences of our oracle risk bounds for shape constrained function classes. We then describe a version of our estimators which can be implemented for arbitrary data with random design and also discuss an extension of our results to dependent noise.\n\n\n\\subsection{Problem Setting and Definitions}\n\nLet us denote the $d$ dimensional lattice with $N$ points by $L_{d,n} \\coloneqq \\{1,\\dots,n\\}^d$ where $N = n^{d}.$ Throughout this paper we will consider the standard fixed design setting where we treat the $N$ design points as fixed and located on the $d$ dimensional grid\/lattice $L_{d,n}.$ One may think of the design points as embedded in $[0,1]^d$ and of the form $\\frac{1}{n}(i_1,\\dots,i_d)$ where $(i_1,\\dots,i_d) \\in L_{d,n}$. This lattice design is quite commonly used for theoretical studies in multidimensional nonparametric function estimation (see, e.g.,~\\cite{nemirovski2000topics}). The lattice design is also the natural setting for certain applications such as image denoising and matrix\/tensor estimation. All our results will be for the lattice design setting. In Section~\\ref{Sec:discuss}, we make some observations and comments about possible extensions to the random design case.\n\n\n\n\n\nLetting $\\theta^*$ denote the evaluation on the grid of the underlying regression function $f$, our observation\nmodel becomes $y = \\theta^* + \\sigma Z$ where\n$y,\\theta^*,Z$ are real valued functions on $L_{d,n}$ and\nhence are $d$ dimensional arrays. Furthermore, $Z$ is a noise array consisting of independent standard Gaussian entries and\n$\\sigma > 0$ is the unknown standard deviation of the noise\nentries. For an estimator $\\hat{\\theta}$, we will evaluate\nits performance by the usual fixed design expected mean\nsquared error $$\\MSE(\\hat{\\theta},\\theta^*) \\coloneqq\n\\frac{1}{N}\\:\\E_{\\theta^*} \\|\\hat{\\theta} - \\theta^*\\|^2.$$\nHere $\\|.\\|$ refers to the usual Euclidean norm of an array\nwhere we treat an array as a vector in $\\R^N.$
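To make this setting concrete, the following minimal Python sketch (our illustration; the piecewise constant truth below is a hypothetical example, not one analyzed in this paper) simulates the observation model $y = \\theta^* + \\sigma Z$ on the lattice $L_{d,n}$ and evaluates the mean squared error of an estimator.\n\\begin{verbatim}\nimport numpy as np\n\ndef simulate_observation(theta_star, sigma, rng):\n    # y = theta* + sigma * Z with Z i.i.d. standard Gaussian on the lattice\n    return theta_star + sigma * rng.standard_normal(theta_star.shape)\n\ndef mse(theta_hat, theta_star):\n    # (1\/N) * ||theta_hat - theta*||^2, treating arrays as vectors in R^N\n    return np.mean((theta_hat - theta_star) ** 2)\n\nrng = np.random.default_rng(0)\nn = 32                                # lattice L_{2,n} with N = n^2 points\ntheta_star = np.zeros((n, n))\ntheta_star[:, :n \/\/ 2] = 1.0          # piecewise constant on two rectangles\ny = simulate_observation(theta_star, sigma=1.0, rng=rng)\nprint(mse(y, theta_star))             # the trivial estimator y has MSE close to sigma^2\n\\end{verbatim}\n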
Let us define the interval of positive integers $[a,b] = \\{i \\in \\Z_{+}: a \\leq i \\leq b\\}$ where $\\Z_{+}$ denotes\nthe set of positive integers. For a positive integer $n$ we\nalso denote the set $[1,n]$ by just $[n].$ A subset $R\n\\subset L_{d,n}$ is called an \\textit{axis aligned\nrectangle} if $R$ is a product of\nintervals, i.e. $R = \\prod_{i = 1}^{d} [a_i,b_i].$\nHenceforth, we will just use the word rectangle to denote an\naxis aligned rectangle. Let us define a \\textit{rectangular\npartition} of $L_{d,n}$ to be a set of rectangles\n$\\mathcal{R}$ such that (a) the rectangles in $\\mathcal{R}$\nare pairwise disjoint and (b) $\\cup_{R \\in \\mathcal{R}} R =\nL_{d,n}.$\n\n\n\nRecall that a multivariate polynomial of degree at most $r \\geq 0$ is a finite linear combination of the monomials $\\prod_{i = 1}^{d} (x_i)^{r_i}$ satisfying $\\sum_{i = 1}^{d} r_i\n\\leq r.$ It is thus clear that they form a linear space of dimension\n$K_{r,d} \\coloneqq {r + d - 1 \\choose d - 1}.$ Let us now define the set of discrete multivariate polynomial arrays as\n\\begin{align*}\n\t\\mathcal{F}^{(r)}_{d,n} = \\big\\{\\theta \\in \\R^{L_{d,n}}: \\theta(i_1,\\dots,i_d) = &f(i_1\/n,\\dots,i_d\/n)\\:\\:\\forall (i_1,\\dots,i_d) \\in [n]^d \\\\&\n\t\\text{for some polynomial $f$ of degree at most $r$}\\big\\}.\n\\end{align*}\n\n\n\nFor a given rectangle $R \\subset L_{d,n}$ and any $\\theta \\in \\R^{L_{d,n}}$ let us denote the array obtained by restricting $\\theta$ to $R$ by $\\theta_{R}.$ We say that $\\theta$ is a degree $r$ polynomial on the rectangle $R$ if $\\theta_{R} = \\alpha_{R}$ for some $\\alpha \\in \\mathcal{F}^{(r)}_{d,n}.$\n\n\n\n\n\n\nFor a given array $\\theta \\in \\R^{L_{d,n}}$, let \\textit{$k^{(r)}(\\theta)$ denote the smallest positive integer $k$ such that there exists a set of $k$ rectangles $R_1,\\dots,R_k$ forming a rectangular partition of $L_{d,n}$ with the restricted array $\\theta_{R_i}$ a degree $r$ polynomial for all $1 \\leq i \\leq k.$}\nIn other words, $k^{(r)}(\\theta)$ is the cardinality of the minimal rectangular partition of $L_{d,n}$ such that $\\theta$ is piecewise polynomial of degree $r$ on the partition.
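As a quick concrete check of these definitions, the sketch below (ours, with hypothetical helper names) represents a rectangle by its tuple of endpoint pairs $((a_1,b_1),\\dots,(a_d,b_d))$ and verifies conditions (a) and (b) for a claimed rectangular partition of $L_{d,n}$; the two rectangles used here form the minimal partition witnessing $k^{(0)}(\\theta^*) \\leq 2$ for the piecewise constant example from the previous sketch.\n\\begin{verbatim}\nimport itertools\n\ndef cells(rect):\n    # all lattice points of the rectangle prod_i [a_i, b_i] (1-indexed, inclusive)\n    return set(itertools.product(*[range(a, b + 1) for a, b in rect]))\n\ndef is_rectangular_partition(rects, n, d):\n    # (a) pairwise disjoint and (b) union equal to L_{d,n}\n    covered = [cells(r) for r in rects]\n    total = sum(len(c) for c in covered)\n    union = set().union(*covered)\n    return total == n ** d and len(union) == n ** d\n\nleft = ((1, 4), (1, 2))\nright = ((1, 4), (3, 4))\nprint(is_rectangular_partition([left, right], n=4, d=2))  # True\n\\end{verbatim}\n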
\\subsection{Description of Estimators}\nThe estimators we consider in this manuscript compute a data dependent decision tree (which is globally optimal in a certain sense) and then fit polynomials within each cell\/rectangle of the decision tree. As mentioned before, computing decision trees greedily and then fitting a constant value within each cell of the decision tree has a long history and is what the usual CART does. Fitting polynomials on such greedily grown decision trees is a natural extension of CART and has also been proposed in the literature; see~\\cite{chaudhuri1994piecewise}. The main difference between these estimators and our estimators is that our decision trees are computed as global optimizers over the set of all decision trees. In particular, they are not grown greedily and no stopping rule is required. The ideas here are mainly inspired by~\\cite{donoho1997cart}. We now define our estimators precisely.\n\n\nRecall the definition of $k^{(r)}(\\theta).$ A natural estimator which fits piecewise polynomial functions of degree $r \\geq 0$ on axis aligned rectangles is the following fully penalized LSE of order $r$:\n\\begin{equation*}\n\t\\hat{\\theta}^{(r)}_{\\all,\\lambda} \\coloneqq \\argmin_{\\theta \\in \\R^{L_{d,n}}} \\big(\\|y - \\theta\\|^2 + \\lambda k^{(r)}(\\theta)\\big).\n\\end{equation*}\n\n\n\n\n\n\nLet us denote the set of all rectangular partitions of $L_{d,n}$ as $\\mathcal{P}_{\\all, d, n}.$ For each rectangular partition $\\Pi \\in \\mathcal{P}_{\\all, d, n}$ and each nonnegative integer $r$, let the (linear) subspace $S^{(r)}(\\Pi)$\ncomprise all arrays which are degree $r$ polynomial on each of the rectangles constituting $\\Pi.$\nFor a generic subspace $S \\subset \\R^N$ let us denote its dimension by $Dim(S)$ and the associated orthogonal projection matrix by $O_{S}.$ Clearly, the dimension of the subspace $S^{(r)}(\\Pi)$ is at most $K_{r,d} |\\Pi|$ where $|\\Pi|$ is the cardinality of the partition. Now note that we can also write $\\hat{\\theta}^{(r)}_{\\all,\\lambda} = O_{S^{(r)}(\\hat{\\Pi}(\\lambda))} y$ where $\\hat{\\Pi}(\\lambda)$ is a data dependent partition defined as\n\\begin{equation}\\label{eq:defplse}\n\\hat{\\Pi}(\\lambda) = \\argmin_{\\Pi: \\Pi \\in \\mathcal{P}_{\\all,d,n}}\\big( \\|y - O_{S^{(r)}(\\Pi)} y\\|^2 + \\lambda |\\Pi| \\big).\n\\end{equation}\n\nThus, computing $\\hat{\\theta}^{(r)}_{\\all,\\lambda}$ really involves optimizing over all\nrectangular partitions $\\Pi \\in \\mathcal{P}_{\\all,d,n}.$ Therefore, one may anticipate that\nthe major roadblock in using this estimator would be computation. For any fixed $d$, the cardinality of $\\mathcal{P}_{\\all, d, n}$ is at least stretched-exponential in $N.$ Thus, a brute\nforce method is infeasible. However, for $d = 1$, a rectangular partition is a set of\ncontiguous intervals, which has enough structure that a dynamic programming\napproach is feasible. The set of all multivariate rectangular partitions is a more\ncomplicated object and the corresponding computation is likely to be provably hard. This is\nwhere the idea of~\\cite{donoho1997cart} comes in, which considers the Dyadic CART estimator (for\n$r = 0$ and $d = 2$) for fitting piecewise constant functions. As we will now explain, it\nturns out that if we constrain the optimization in~\\eqref{eq:defplse} to special\nsubclasses of rectangular partitions of $L_{d,n}$, a dynamic programming approach again becomes tractable. The Dyadic CART estimator is one such constrained version of the optimization problem in~\\eqref{eq:defplse}. We now precisely define these subclasses of rectangular partitions.
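Before doing so, it may help to see what the objective in~\\eqref{eq:defplse} computes for a \\textit{fixed} partition. The following Python sketch (our illustration; the monomial basis construction and function names are ours) evaluates $\\|y - O_{S^{(r)}(\\Pi)} y\\|^2 + \\lambda |\\Pi|$ by solving a separate polynomial least squares problem on each rectangle, which is exactly the projection onto $S^{(r)}(\\Pi)$.\n\\begin{verbatim}\nimport numpy as np\nimport itertools\n\ndef design_matrix(rect, r, d):\n    # monomials prod_i x_i^{r_i} with sum r_i <= r, at the lattice points of rect\n    pts = np.array(list(itertools.product(*[range(a, b + 1) for a, b in rect])), float)\n    exps = [e for e in itertools.product(range(r + 1), repeat=d) if sum(e) <= r]\n    X = np.column_stack([np.prod(pts ** np.array(e, float), axis=1) for e in exps])\n    return X, pts\n\ndef penalized_objective(y, partition, r, lam):\n    # ||y - O_{S^(r)(Pi)} y||^2 + lam * |Pi| via per-rectangle least squares\n    rss, d = 0.0, y.ndim\n    for rect in partition:\n        X, pts = design_matrix(rect, r, d)\n        z = y[tuple((pts - 1).astype(int).T)]   # the lattice is 1-indexed\n        coef = np.linalg.lstsq(X, z, rcond=None)[0]\n        rss += np.sum((z - X @ coef) ** 2)\n    return rss + lam * len(partition)\n\\end{verbatim}\nFor instance, with the left\/right partition from the earlier sketch and $r = 0$, this computes the residual sum of squares of the two within-rectangle means plus $2\\lambda$.\n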
\\subsubsection{Description of Dyadic CART of order $r \\geq 0$}\nLet us consider a generic discrete interval $[a,b].$ We define a \\textit{dyadic split} of the interval to be a split of the interval $[a,b]$ into two nearly equal intervals. To be concrete, the interval $[a,b]$ is split into the intervals $[a,a - 1 + \\ceil{(b - a + 1)\/2}]$ and $[a +\n\\ceil{(b - a + 1)\/2}, b].$ Now consider a generic rectangle $R = \\prod_{i = 1}^{d} [a_i,b_i].$ A \\textit{dyadic split} of the rectangle $R$ involves the choice of a coordinate $1 \\leq j \\leq d$ to be split and then the $j$-th interval in the product defining the rectangle $R$ undergoes a dyadic split. Thus, a dyadic split of $R$ produces two subrectangles $R_1$ and $R_2$ where $R_2 = R \\cap R_1^{c}$ and $R_1$ is of the following form for some $j \\in [d]$,\n\\begin{equation*}\n\tR_1 = \\prod_{i = 1}^{j - 1} [a_i,b_i] \\times [a_j ,a_j - 1 + \\ceil{(b_j - a_j + 1) \/ 2}] \\times \\prod_{i = j + 1}^{d} [a_i,b_i].\n\\end{equation*}\n\n\nStarting from the trivial partition which is just $L_{d,n}$ itself,\nwe can create a refined partition by dyadically splitting $L_{d,n}.$ This will result in a partition of $L_{d,n}$ into two rectangles. We can now keep dividing recursively, generating new partitions. In general, if at some stage we have the partition $\\Pi = (R_1,\\dots,R_k)$, we can choose any of the rectangles $R_i$ and dyadically split it to get a refinement of $\\Pi$ with $k + 1$ nonempty rectangles. \\textit{A recursive dyadic partition} (RDP) is any partition reachable by such successive dyadic splitting. Let us denote the set of all recursive dyadic partitions of $L_{d,n}$ as $\\mathcal{P}_{\\rdp,d,n}.$ Indeed, a natural way of encoding any RDP of $L_{d,n}$ is by a binary tree where each nonleaf node is labeled by an integer in $[d].$ This labeling corresponds to the choice of the coordinate that was used for the split.\n\n\n\n\n\n\n\nWe can now consider a constrained version of $\\hat{\\theta}^{(r)}_{\\all,\\lambda}$ which only optimizes over $\\mathcal{P}_{\\rdp,d,n}$ instead of optimizing over $\\mathcal{P}_{\\all,d,n}.$ Let us define $\\hat{\\theta}^{(r)}_{\\rdp,\\lambda} = O_{S^{(r)}(\\hat{\\Pi}_{\\rdp}(\\lambda))} y$ where $\\hat{\\Pi}_{\\rdp}(\\lambda)$ is a data dependent partition defined as\n\\begin{equation*}\n\t\\hat{\\Pi}_{\\rdp}(\\lambda) = \\argmin_{\\Pi: \\Pi \\in \\mathcal{P}_{\\rdp,d,n}} \\big(\\|y - O_{S^{(r)}(\\Pi)} y\\|^2 + \\lambda |\\Pi|\\big).\n\\end{equation*}\n\n\n\n\n\nThe estimator $\\hat{\\theta}^{(r)}_{\\rdp,\\lambda}$ is precisely the Dyadic CART estimator which was proposed in~\\cite{donoho1997cart} in the case when $d = 2$ and $r = 0.$ The author studied this estimator for estimating anisotropically smooth functions of two variables which exhibit different degrees of smoothness in the two variables. However, to the best of our knowledge, the risk properties of the Dyadic CART estimator (for $r = 0$) have not been examined in the context of estimating nonsmooth function classes such as piecewise constant and bounded variation functions. For $r \\geq 1,$ the above estimator appears not to have been proposed or studied in the literature before. We call the estimator $\\hat{\\theta}^{(r)}_{\\rdp,\\lambda}$ {\\em Dyadic CART of order $r$}.\n\n\n\n\n\n\n\n\n\n\\subsubsection{Description of ORT of order $r \\geq 0$}\nFor our purposes, we would need to consider a larger class of partitions than $\\mathcal{P}_{\\rdp,d,n}.$ To generate an RDP, for each rectangle we choose a dimension to split and then split at the midpoint. Instead of splitting at the midpoint, it is natural to allow the split to be at an arbitrary position. To that end, we define a \\textit{hierarchical split} of the interval to be a split of the interval $[a,b]$ into two intervals, not necessarily of equal size.
To be concrete, the interval $[a,b]$ is split into the intervals $[a,\\ell]$ and $[\\ell + 1, b]$ for some $a \\leq \\ell < b.$ Now consider a generic rectangle $R = \\prod_{i = 1}^{d} [a_i,b_i].$ A \\textit{hierarchical split} of the rectangle $R$ involves the choice of a coordinate $1 \\leq j \\leq d$ to be split and then the $j$-th\ninterval in the product defining the rectangle $R$ undergoes a hierarchical split. Thus, a hierarchical split of $R$ produces two subrectangles $R_1$ and $R_2$ where $R_2 = R \\cap R_1^{c}$ and $R_1$ is of the following form for some $1 \\leq j \\leq d$ and $a_j \\leq \\ell < b_j$,\n\\begin{equation*}\n\tR_1 = \\prod_{i = 1}^{j - 1} [a_i,b_i] \\times [a_j,\\ell] \\times \\prod_{i = j + 1}^{d} [a_i,b_i].\n\\end{equation*}\n\n\n\nStarting from the trivial partition $L_{d,n}$ itself, we can now generate partitions by splitting $L_{d,n}$ hierarchically. Again, in general, if at some stage we obtain the partition $\\Pi = (R_1,\\dots,R_k)$, we can choose any of the rectangles $R_i$ and split it hierarchically to obtain $k + 1$ nonempty rectangles. \\textit{A hierarchical partition} is any partition reachable by such hierarchical splits. We denote the set of all hierarchical partitions of $L_{d,n}$ as $\\mathcal{P}_{\\hier, d, n}.$ Note that hierarchical partitions are in one to one correspondence with decision trees and thus, $\\mathcal{P}_{\\hier, d, n}$ can be thought of as the set of all decision trees.\n\n\nClearly,\n$$\\mathcal{P}_{\\rdp,d,n} \\subset \\mathcal{P}_{\\hier,d,n} \\subset \\mathcal{P}_{\\all, d, n}.$$ In fact, the inclusions are strict as shown in Figure~\\ref{fig1}. In particular, there exist partitions which are not hierarchical.\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.5]{examples_of_rectangles}\n\t\\end{center}\n\t\\caption{\\textit{Figure~(a) is an example of a recursive dyadic partition of the square. Figure~(b) is nondyadic but is a hierarchical partition. Figure~(c) is an example of a nonhierarchical partition. An easy way to see this is that there is no split from top to bottom or left to right.}}\n\t\\label{fig1}\n\\end{figure}
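The two split rules are easy to state in code. The sketch below (ours, with hypothetical function names) returns the children of a rectangle under a dyadic split and enumerates all children under hierarchical splits, making it plain that dyadic splits form a small subset of the hierarchical ones.\n\\begin{verbatim}\nfrom math import ceil\n\ndef dyadic_split(rect, j):\n    # split coordinate j of rect = ((a_1,b_1),...,(a_d,b_d)) at its midpoint\n    a, b = rect[j]\n    m = a - 1 + ceil((b - a + 1) \/ 2)\n    return (rect[:j] + ((a, m),) + rect[j + 1:],\n            rect[:j] + ((m + 1, b),) + rect[j + 1:])\n\ndef hierarchical_splits(rect):\n    # all ways of splitting rect into two nonempty subrectangles R_1, R_2\n    for j, (a, b) in enumerate(rect):\n        for ell in range(a, b):  # a <= ell < b\n            yield (rect[:j] + ((a, ell),) + rect[j + 1:],\n                   rect[:j] + ((ell + 1, b),) + rect[j + 1:])\n\nsquare = ((1, 4), (1, 4))                 # the lattice L_{2,4}\nprint(dyadic_split(square, 0))            # halves along the first coordinate\nprint(sum(1 for _ in hierarchical_splits(square)))  # 6 hierarchical vs 2 dyadic splits\n\\end{verbatim}\n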
We can now consider another constrained version of $\\hat{\\theta}^{(r)}_{\\all,\\lambda}$ which optimizes only over $\\mathcal{P}_{\\hier,d,n}.$ Let us define $\\hat{\\theta}^{(r)}_{\\hier,\\lambda} = O_{S^{(r)}(\\hat{\\Pi}_{\\hier}(\\lambda))} y$ where $\\hat{\\Pi}_{\\hier}(\\lambda)$ is a data dependent partition defined as\n\\begin{equation*}\n\t\\hat{\\Pi}_{\\hier}(\\lambda) = \\argmin_{\\Pi: \\Pi \\in \\mathcal{P}_{\\hier,d,n}} \\big(\\|y - O_{S^{(r)}(\\Pi)} y\\|^2 + \\lambda |\\Pi|\\big).\n\\end{equation*}\n\n\n\n\n\n\nAlthough this is a natural extension of Dyadic CART, we are unable to pinpoint an exact reference where this estimator has been explicitly proposed or studied in the statistics literature. The above optimization problem is an analog of the optimal decision tree problem laid out in~\\cite{bertsimas2017optimal}. The difference is that~\\cite{bertsimas2017optimal} considers classification whereas we consider fixed lattice design regression. Note that the above optimization problem is different from the usual pruning of a tree that is done at the second stage of CART. Pruning can only result in subtrees of the full tree obtained in the first stage whereas the above optimization is over all hierarchical partitions $\\Pi \\in \\mathcal{P}_{\\hier,d,n}.$ We name the estimator $\\hat{\\theta}^{(r)}_{\\hier,\\lambda}$ the \\textit{Optimal Regression Tree} (ORT) of order $r$.\n\n\n\n\n\n\n\n\\subsection{Both Dyadic CART and ORT of all orders are efficiently computable}\nThe crucial fact about $\\hat{\\theta}^{(r)}_{\\rdp,\\lambda}$ and\n$\\hat{\\theta}^{(r)}_{\\hier,\\lambda}$ is that they can be computed\nefficiently and exactly using dynamic\nprogramming approaches. A dynamic programming algorithm to\ncompute $\\hat{\\Pi}_{\\rdp}(\\lambda)$ for $d = 2$ and $r = 0$ was shown\nin~\\cite{donoho1997cart}. This algorithm is extremely fast\nand runs in $O(N)$ (linear in the sample size) time.\nThe basic idea there can actually be extended to compute both Dyadic CART and ORT for any fixed $r,d$ with computational complexity given in the next lemma. The proof is given in Section~\\ref{Sec:proofs} (in the supplementary file).\n\n\\begin{lemma}\\label{lem:compu}\n\tThere exists an absolute constant $C > 0$ such that the computational complexity, i.e., the number of elementary operations involved in the computation of ORT, is bounded by:\n\t\\begin{equation*}\n\t\t\\begin{cases}\n\t\t\tCN^2\\:nd, & \\text{for} \\:\\:r = 0 \\\\\n\t\t\tCN^2\\:(nd + d^{3r}) & \\text{for} \\:\\:r \\geq 1\n\t\t\\end{cases}\n\t\\end{equation*}\n\tSimilarly, the computational complexity of Dyadic CART is bounded by:\n\t\\begin{equation*}\n\t\t\\begin{cases}\n\t\t\tC2^d\\:Nd, & \\text{for} \\:\\:r = 0 \\\\\n\t\t\tC2^d\\:N\\:d^{3r}& \\text{for} \\:\\:r \\geq 1.\n\t\t\\end{cases}\n\t\\end{equation*}\n\\end{lemma}\n\n\n\n\\begin{remark}\n\tSince the sample size $N = n^d \\geq 2^d$ as soon as $n \\geq 2$, it does not make sense to think of $d$ as large when reading the above computational complexity. The lattice design setting is really meaningful when $d$ is low to moderate and fixed and the number of samples per dimension $n$ is growing to $\\infty$. Thus, one should look at the dependence of the computational complexity on $N$ and treat the factors depending on $d$ as constant.\n\\end{remark}\n\n\\begin{remark}\n\tEven for $d = 1$, the brute force computation time is exponential in $N$ as the total number of hierarchical partitions is exponential in $N.$\n\\end{remark}
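To give a feel for why dynamic programming applies, here is a minimal sketch (ours; it is not the algorithm analyzed in Lemma~\\ref{lem:compu}, but a simple $O(n^2)$ variant of the same idea) for the univariate case with $r = 0$: the optimal penalized objective over the first $j$ lattice points satisfies a recursion over the position of the last breakpoint, once interval sums are precomputed. The general $d$-dimensional algorithms recurse over rectangles in the same spirit.\n\\begin{verbatim}\nimport numpy as np\n\ndef ort_1d_constant(y, lam):\n    # minimize ||y - theta||^2 + lam * (number of constant pieces), d = 1, r = 0\n    n = len(y)\n    s1 = np.concatenate([[0.0], np.cumsum(y)])       # prefix sums\n    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])  # prefix sums of squares\n    def sse(i, j):                                   # best constant fit on y[i:j]\n        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 \/ (j - i)\n    B = np.zeros(n + 1)                              # B[j]: optimum for y[:j]\n    back = np.zeros(n + 1, dtype=int)\n    for j in range(1, n + 1):\n        cand = [B[i] + sse(i, j) + lam for i in range(j)]\n        back[j] = int(np.argmin(cand))\n        B[j] = cand[back[j]]\n    pieces, j = [], n                                # backtrack optimal partition\n    while j > 0:\n        pieces.append((back[j] + 1, j))              # 1-indexed interval\n        j = back[j]\n    return B[n], pieces[::-1]\n\nrng = np.random.default_rng(1)\ny = np.concatenate([np.zeros(50), np.ones(50)]) + 0.3 * rng.standard_normal(100)\nprint(ort_1d_constant(y, lam=2.0))  # typically recovers the two true pieces\n\\end{verbatim}\n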
The rest of the paper is organized as follows.\nIn Section~\\ref{Sec:results} we state our oracle risk bound for\nDyadic CART and ORT of all orders. Section~\\ref{secmpp} describes\napplications of the oracle risk bound for ORT to multivariate\npiecewise polynomials. In Sections~\\ref{sec:tv} and~\\ref{Sec:uni}\nwe state applications of the oracle risk bound for Dyadic CART to\nmultivariate bounded variation functions in general dimensions and\nunivariate bounded variation function classes of all orders,\nrespectively. In Section~\\ref{sec:sim} we describe our simulation studies and in Section~\\ref{Sec:discuss} we summarize our\nresults, reiterate the main contributions of this\npaper and discuss some matters related to our work. Section~\\ref{Sec:proofs} contains all the proofs of our results. In Section~\\ref{Sec:appendix} we state and prove some auxiliary results that we use in proving our main results.\n\n\n\n\\textbf{Acknowledgements}: This research was supported by an NSF grant and an IDEX grant from Paris-Saclay. S.G.'s research was carried out in part as a member of the\nInfosys-Chandrasekharan virtual center for Random Geometry, supported by a grant from the\nInfosys Foundation. We thank the anonymous referees for their numerous helpful remarks and\nsuggestions on an earlier manuscript of the paper. We also thank Adityanand\nGuntuboyina for many helpful comments. The project started when SG was a postdoctoral fellow\nat the Institut des Hautes \\'{E}tudes Scientifiques (IHES).\n\n\\section{Oracle risk bounds for Dyadic CART and ORT}\\label{Sec:results}\nIn this section we describe an oracle risk bound. We first need to set up some notation and terminology. Let $\\mathcal{S}$ be any\nfinite collection of subspaces of $\\R^N.$ Recall that for a generic subspace $S\\in \\mathcal{S}$, we denote its dimension by $Dim(S).$ For any given $\\theta \\in \\R^N$ let us define\n\\begin{equation}\\label{eq:compl}\nk_{\\mathcal{S}}(\\theta) = \\min\\{Dim(S): S \\in \\mathcal{S}, \\theta \\in S\\}\n\\end{equation}\nwhere we adopt the convention that the infimum of an empty set is $\\infty$.\n\n\nFor any $\\theta \\in \\R^N,$ the number $k_{\\mathcal{S}}(\\theta)$ can be thought of as describing the complexity of $\\theta$ with respect to the collection of subspaces $\\mathcal{S}.$ Recall the definition of the nested classes of rectangular partitions $\\mathcal{P}_{\\rdp,d,n} \\subset \\mathcal{P}_{\\hier,d,n} \\subset \\mathcal{P}_{\\all,d,n}.$ Also recall that the subspace $S^{(r)}(\\Pi)$\nconsists of all arrays which are degree $r$ polynomial on each of the rectangles constituting $\\Pi.$ For any integer $r \\geq 0$, these classes of partitions induce their respective collections of subspaces of $\\R^N$ defined as follows:\n\\begin{equation*}\n\t\\mathcal{S}^{(r)}_{a} = \\{S^{(r)}(\\Pi): \\Pi \\in \\mathcal{P}_{a, d, n}\\}\n\\end{equation*}\nwhere $a \\in \\{\\rdp,\\hier,\\all\\}$.
\nFor any $\\theta \\in \\R^{L_{d, n}}$ and any integer $r \\geq 0$, let us note that its complexity with respect to the collection of subspaces $\\mathcal{S}^{(r)}_{a}$ is\n\\begin{equation*}\n\tk_{\\mathcal{S}^{(r)}_{a}}(\\theta) = k^{(r)}_{a}(\\theta)\n\\end{equation*}\nwhere again $a \\in \\{\\rdp,\\hier,\\all\\}.$ Here $k^{(r)}_{\\all}(\\theta^*)$ is the same as $k^{(r)}(\\theta^*)$ defined earlier and we use both notations interchangeably.\n\n\n\n\nIt is now clear that for any $\\theta \\in \\R^N$ we have\n\\begin{equation}\\label{eq:chain}\nk_{\\all}^{(r)}(\\theta) \\leq k_{\\hier}^{(r)}(\\theta) \\leq k_{\\rdp}^{(r)}(\\theta).\n\\end{equation}\n\n\n\n\nWe are now ready to state an oracle risk bound for all three estimators $\\hat{\\theta}^{(r)}_{\\all,\\lambda},\\hat{\\theta}^{(r)}_{\\rdp,\\lambda}$ and $\\hat{\\theta}^{(r)}_{\\hier,\\lambda}.$ The theorem is proved in Section~\\ref{Sec:proofs}.\n\\begin{theorem}\\label{thm:adapt}\n\tFix any integer $r \\geq 0$ and recall that $K_{r,d} = {r +\n\t\td - 1 \\choose d - 1}$ was defined earlier. There exists an absolute constant $C > 0$ such that for any $0 < \\delta < 1$, if we set $\\lambda \\geq C K_{r,d} \\frac{\\sigma^2 \\log N}{\\delta}$, then we have the following risk bounds for $a \\in \\{\\rdp,\\hier,\\all\\}$,\n\n\n\t\\begin{equation*}\n\t\t\\E \\|\\hat{\\theta}^{(r)}_{a,\\lambda} - \\theta^*\\|^2 \\leq \\inf_{\\theta \\in \\R^N} \\big[\\frac{(1 + \\delta)}{(1 - \\delta)}\\:\\|\\theta - \\theta^*\\|^2 + \\frac{\\lambda}{1 - \\delta}\\:k^{(r)}_{a}(\\theta)\\big] + C \\frac{\\sigma^2}{\\delta\\:(1 - \\delta)}.\n\t\\end{equation*}\n\n\n\n\n\n\\end{theorem}\nLet us now discuss some aspects of the above theorem.\n\\begin{itemize}\n\t\\item Theorem~\\ref{thm:adapt} gives an oracle risk bound for all three estimators: the fully penalized LSE $\\hat{\\theta}^{(r)}_{\\all,\\lambda}$, the Dyadic CART estimator $\\hat{\\theta}^{(r)}_{\\rdp,\\lambda}$ and the ORT estimator $\\hat{\\theta}^{(r)}_{\\hier,\\lambda}$ of all orders $r \\geq 0$ in all dimensions $d.$ The bound trades off the squared distance of $\\theta^*$ to an approximator against the complexity of the approximator.
Such a result already appeared in~\\cite[Theorem~$7.2$]{donoho1997cart} for the Dyadic CART estimator in the special case when $r = 0$ and $d = 2$, with an additional multiplicative $\\log N$ factor in front of the squared\n\tdistance term. Such an oracle risk bound also appeared in~\\cite{nowak2004estimating} (see equation $8$) where an upper bound to the MSE in terms of approximation error plus estimation error is given. The main points we want to make by proving the above theorem are, firstly, that it continues to hold for Dyadic CART and\n\tORT of all orders and in all dimensions and, secondly, that this oracle inequality potentially implies tight bounds for several classes of functions of recent interest.\n\t\n\t\\smallskip\n\n\t\n\t\\item Oracle risk bounds for Dyadic CART type estimators have also been shown for classification problems; see, e.g.,~\\cite{blanchard2007optimal} and~\\cite{scott2006minimax}.\n\t\n\t\\smallskip\n\t\n\t\\item Due to the inequality in~\\eqref{eq:chain}, the risk bounds are ordered pointwise in $\\theta^*.$ The risk bound is best for the\n\tfully penalized LSE, followed by ORT and then by Dyadic CART.\n\tHowever, Dyadic CART is the cheapest to compute, followed by ORT, in terms of the number of basic operations needed for computation.\n\tThe computation of the fully penalized LSE, of course, is out of the question. Thus, our risk bounds hint at a natural trade-off\n\tbetween statistical risk and computational complexity.\n\t\n\t\\smallskip\n\n\n\n\t\n\t\n\n\t\n\t\n\t\\item\n\tThe Gaussian nature of the noise $Z$ is not essential in the above theorem. The conclusion of the theorem holds as long as the entries of $Z$ are independent and are sub-Gaussian random variables. For the sake of clean exposition, we prove the theorem, and its applications in the next section, for the Gaussian case only. In fact, the assumption of independence of the Gaussian noise can also be relaxed, as discussed in Section~\\ref{sec:dep}.\n\t\n\t\\smallskip\n\t\n\t\\item The proof (done in Section~\\ref{Sec:proofs}) of the above theorem uses relatively standard techniques from high dimensional statistics and is similar to the usual proof of the $\\ell_0$ penalized BIC estimator achieving fast rates in high dimensional linear regression\n\t(see, e.g., Theorem $3.14$ in~\\cite{rigollet2015high}). As is probably well known, a sharp\n\toracle inequality with $\\frac{1 + \\delta}{1 - \\delta}$ replaced by $1$ seems unreachable with\n\tthe current proof technique. Just to reiterate, the point of proving this theorem is to recognize that such a result would imply near optimal results for the function classes of interest in the current manuscript.\n\t\n\t\n\t\\smallskip\n\t\n\t\\item Operationally, to derive risk bounds for Dyadic CART or ORT for some function class, Theorem~\\ref{thm:adapt} behooves us to use approximation theoretic arguments. Precisely, for a given generic $\\theta^*$ in the function class, one needs to understand the approximation error in the Euclidean sense when the approximator $\\theta$ is constrained to satisfy $k^{(r)}_{\\rdp}(\\theta) = k$ or $k^{(r)}_{\\hier}(\\theta) = k$ for any given integer $k$ (a schematic illustration of this recipe is given right after this list). One of the technical contributions of this paper lies in addressing this approximation theoretic question for the three classes of functions considered in this paper.
\n\t\n\t\\smallskip\n\t\n\t\\item The risk bounds in Theorem~\\ref{thm:adapt} as well as the algorithms for computing our estimators (see Section~\\ref{sec:algos} in the supplementary file) can be adapted for any subspace, not only the subspace of degree $r$ polynomials. As long as we consider partitions consisting of axis aligned rectangles, we can choose any subspace of functions to be fitted within each rectangle of the partition. Polynomials are one of the most natural and classical subspaces of functions, which is why we wrote our results for polynomials.\n\t\n\\end{itemize}
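To illustrate the approximation theoretic recipe mentioned in the list above (this is our schematic gloss on how Theorem~\\ref{thm:adapt} is typically applied, not a statement proved in the paper), suppose that for some exponent $\\gamma > 0$ and constant $c > 0$ the function class admits the approximation guarantee $\\inf\\{\\|\\theta - \\theta^*\\|^2: k^{(r)}_{a}(\\theta) \\leq k\\} \\leq c\\,N\\,k^{-\\gamma}$ for all $1 \\leq k \\leq N.$ Then, taking $\\lambda \\asymp \\sigma^2 \\log N$ and a fixed $\\delta$, the oracle inequality gives\n\\begin{equation*}\n\\MSE(\\hat{\\theta}^{(r)}_{a,\\lambda},\\theta^*) \\lesssim \\min_{1 \\leq k \\leq N} \\Big(c\\,k^{-\\gamma} + \\frac{\\sigma^2\\,k \\log N}{N}\\Big) \\asymp c^{\\frac{1}{1 + \\gamma}}\\,\\Big(\\frac{\\sigma^2 \\log N}{N}\\Big)^{\\frac{\\gamma}{1 + \\gamma}},\n\\end{equation*}\nthe two terms being balanced at $k \\asymp (cN\/(\\sigma^2 \\log N))^{1\/(1 + \\gamma)}.$ The applications in Sections~\\ref{sec:tv} and~\\ref{Sec:uni} follow this pattern, with the approximation step supplied by results such as Propositions~\\ref{prop:division} and~\\ref{prop:piecewise}.\n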
\\section{\\textbf{Results for Multivariate Piecewise Polynomial Functions}}\\label{secmpp}\n\n\n\\subsection{The main question}\nFor a given underlying truth $\\theta^*,$ the \\textit{oracle estimator} $\\hat{\\theta}^{(r)}_{({\\rm oracle})}$ --- which {\\em knows} the minimal rectangular partition $(R_1,\\dots,R_k)$ of\n$\\theta^*$ {\\em exactly} --- has a simple form. In words, within each rectangle $R_i,$ it estimates $\\theta^*_{R_i}$ by the\nbest fitting $r$-th degree polynomial in the least squares sense. It is not hard to check that $$\\MSE(\\hat{\\theta}^{(r)}_{({\\rm oracle})},\\theta^*) \\leq K_{r,d}\\, \\sigma^2 \\frac{k^{(r)}_{\\all}(\\theta^*)}{N}.$$ Thus, for any fixed $d$ and $r$, the MSE\nof the oracle estimator scales like the \\textit{number of\npieces $k^{(r)}_{\\all}(\\theta^*)$} divided by the sample size $N$, which is precisely the parametric rate of convergence. Furthermore, we can show that the following minimax lower bound holds.\n\n\n\n\n\n\\begin{lemma}\\label{lem:mlbpc}\n\tFix any positive integers $n,d$. Fix any integer $k$ such that $3d \\leq k \\leq N = n^d$ and let $\\Theta_{k,d,n} \\coloneqq \\{\\theta \\in \\R^{L_{d,n}}: k_{\\hier}^{(0)}(\\theta) \\leq k\\}$. There exists a universal constant $C$ such that the following inequality holds:\n\t\\begin{equation*}\n\t\t\\inf_{\\tilde{\\theta}} \\sup_{\\theta \\in \\Theta_{k,d,n}} \\E \\|\\tilde{\\theta} - \\theta\\|^2 \\geq C \\:\\sigma^2 k \\log \\frac{{\\rm e}N}{k}.\n\t\\end{equation*}\n\tHere the infimum is over all estimators $\\tilde{\\theta}$ which are measurable functions of the data array $y.$\n\\end{lemma}\n\n\\begin{remark}\n\tFor any $r \\geq 1$, since $k_{\\hier}^{(0)}(\\theta) \\geq k_{\\all}^{(0)}(\\theta) \\geq k_{\\all}^{(r)}(\\theta)$, the same minimax lower bound also holds for the parameter space $\\{\\theta \\in \\R^{L_{d,n}}: k_{\\all}^{(r)}(\\theta) \\leq k\\}.$\n\\end{remark}\n\n\nThe above minimax lower bound shows that any estimator must incur MSE (in the worst case)\nwhich is the oracle MSE multiplied by an extra $\\log (eN\/k)$ factor. In particular, if $k = o(N)$, which is the interesting regime, the extra $\\log N$ factor is inevitable. We call this $O\\big(\\frac{k}{N}\\:\\log (eN\/k)\\big)$ rate the minimax rate from here on.\n\n\nWe provide the proof of Lemma~\\ref{lem:mlbpc} in\nSection~\\ref{sec:lempc}. We now ask the following question for every fixed dimension $d$ and\ndegree $r$.\n\n\n\\noindent \\textit{Q: Does there exist an estimator which\n\t\\begin{itemize}\n\t\t\\item \\textit{attains the minimax rate, i.e., {\\rm MSE} scaling like $O\\big(\\sigma^2 \\frac{k^{(r)}_{\\all}(\\theta^*)}{N} \\log N\\big)$ for all $\\theta^*$ adaptively, and}\n\t\n\t\t\\item \\textit{can be computed in time polynomial in the sample size $N = n^d$?}\n\\end{itemize}}\n\n\nTo the best of our knowledge, the above question relating to computationally efficient minimax adaptive estimation of piecewise polynomial functions in multivariate settings, even for piecewise constant functions in the planar case (i.e. $r = 0, d = 2$), has not been rigorously answered in the statistics literature. The fully penalized least squares estimator $\\hat{\\theta}^{(r)}_{\\all,\\lambda}$ is naturally suited for our purpose but is likely to be computationally infeasible. The goal of this section is to show that\n\n\n\\begin{itemize}\n\t\\item In the two dimensional setting, i.e.
$d = 2$, the ORT estimator attains the minimax MSE rate adaptively for any truth $\\theta^*.$ The ORT attains this minimax rate even if the true underlying rectangular partition is not hierarchical. In particular, we show that the ORT incurs the oracle MSE with the exponent of $\\log N$ equalling $1$, thus matching the minimax lower bound in Lemma~\\ref{lem:mlbpc} up to constant factors.\n\t\\\\[0.1em]\n\t\n\t\\item When $d > 2$, as long as the true underlying rectangular partition satisfies natural regularity conditions such as being hierarchical or fat (defined later in this section), the ORT estimator continues to attain this minimax rate.\n\\end{itemize}\n\n\n\nWe prove these results by combining Theorem~\\ref{thm:adapt} with existing results in computational geometry. To the best of our knowledge, our results in this section are the first of their type.\n\n\n\n\n\n\\subsection{Review}\nIn this section, we review existing results pertaining to our question of attaining the near minimax rate risk in the univariate setting $d = 1$. Note that in the univariate setting, the estimator $\\hat{\\theta}^{(r)}_{\\all,\\lambda}$ coincides with the ORT estimator of order $r$ as all univariate rectangular partitions are hierarchical. Let us first discuss the $r = 0$ case where we are fitting piecewise \\textit{constant} functions.\n\n\n\n\nIn the remainder of the paper the constant involved in $O(\\cdot)$ may depend on $r$ and $d$\nunless specifically mentioned otherwise. Also, to lighten notation, we use $\\tilde O(\\cdot)$ for $O(\\cdot)\\mathrm{poly}(\\log N)$.\n\nThe univariate ORT estimator was rigorously studied in~\\cite{boysen2009consistencies} where it is called the \\textit{Potts minimizer}. This is because the objective function defining the estimator arises in the Potts model in statistical physics; see~\\cite{wu1982potts}. Furthermore, this estimator can be computed in $O(n^3)$ time by dynamic programming, as shown in~\\cite{winkler2002smoothers}.\nThis estimator can be thought of as an $\\ell_0$ penalized least squares estimator as it penalizes $k^{(0)}(\\theta)$, which is the same as the $\\ell_0$ norm of the difference vector $D \\theta = (\\theta_{2} - \\theta_{1},\\dots,\\theta_{n} - \\theta_{n - 1}).$ It is known that this estimator, properly tuned, indeed nearly attains the minimax rate risk (for instance, this will be implied by Theorem~\\ref{thm:adapt}).\n\n\n\n\n\nThe other approach is to consider the corresponding $\\ell_1$ penalized estimator,\n\\begin{equation*}\n\t\\hat{\\theta}_{\\ell_1,\\lambda} = \\argmin_{\\theta \\in \\R^{L_{1,n}}}\\big( \\|y - \\theta\\|^2 + \\lambda \\|D \\theta\\|_1\\big).\n\\end{equation*}\nThe above estimator is known as the univariate Total Variation Denoising (TVD) estimator and\nsometimes also as the Fused Lasso in the literature. This estimator is efficiently computable as this is a convex optimization problem.
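For completeness, the univariate TVD estimator displayed above can be computed with any off-the-shelf convex solver; the following sketch (our illustration, using the cvxpy modeling library as one possible choice) is a direct transcription of the optimization problem.\n\\begin{verbatim}\nimport numpy as np\nimport cvxpy as cp\n\ndef tvd_1d(y, lam):\n    # univariate TVD \/ fused lasso: minimize ||y - theta||^2 + lam * ||D theta||_1\n    theta = cp.Variable(len(y))\n    objective = cp.Minimize(cp.sum_squares(y - theta) + lam * cp.norm1(cp.diff(theta)))\n    cp.Problem(objective).solve()\n    return theta.value\n\nrng = np.random.default_rng(2)\ny = np.concatenate([np.zeros(50), np.ones(50)]) + 0.3 * rng.standard_normal(100)\ntheta_hat = tvd_1d(y, lam=5.0)  # nearly piecewise constant with a single jump\n\\end{verbatim}\n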
\n\n\nThe other approach is to consider the corresponding $\ell_1$ penalized estimator,\n\begin{equation*}\n\t\hat{\theta}_{\ell_1,\lambda} = \argmin_{\theta \in \R^{L_{1,n}}}\big( \|y - \theta\|^2 + \lambda \|D \theta\|_1\big).\n\end{equation*}\nThe above estimator is known as the univariate Total Variation Denoising (TVD) estimator and \nsometimes also as the Fused Lasso in the literature. This estimator is efficiently computable since it is defined by a convex optimization problem. Recent results \nin~\cite{guntuboyina2020adaptive},~\cite{dalalyan2017tvd} and \cite{ortelli2018} have shown that, when properly tuned, the above estimator is also capable of attaining the minimax rate under some minimum length assumptions on $\theta^*$ (see Section~\ref{Sec:compare} for details).\n\n\n\nTo generalize the second approach to the multivariate setting, it is perhaps natural to consider the multivariate Total Variation Denoising (TVD) estimator (see~\cite{hutter2016optimal},~\cite{sadhanala2016total}). However, in a previous manuscript \nof the authors, it has been shown that there exists a $\theta^* \in \R^{L_{2,n}}$ with $k^{(0)}_{\all}(\theta^*) = 2$ such that the risk of the ideally tuned {\em constrained} TVD estimator is lower \nbounded by $C N^{-3\/4}$; see Theorem $2.3$ in~\cite{chatterjee2019new}. Thus, even for $d = \n2,$ the TVD estimator \textit{cannot} attain the $\tilde{O}(\frac{k^{(0)}_{\all}(\theta^*)}{N})$ rate of \nconvergence in general. This fact makes us forego the $\ell_1$ penalized approach and return \nto $\ell_0$ penalized least squares.\n\n\n\nComing to the general $r \geq 1$ case, the literature on fitting piecewise polynomial \nfunctions is diverse. Methods based on local polynomials and spline functions abound in the \nliterature. However, in general dimensions, we are not aware of any rigorous results of the \nprecise nature we desire. In the univariate case, there is a family of computationally efficient estimators which fit piecewise polynomials and attain our goal of nearly achieving the oracle risk as stated \nbefore. This family of estimators is known as {\em trend filtering}; it was first proposed in~\cite{kim2009ell_1} and its statistical properties were analyzed in~\cite{tibshirani2014adaptive}. Trend filtering is a higher order generalization of the univariate TVD estimator which penalizes the $\ell_1$ norm of higher order derivatives. A continuous version of these estimators, where discrete derivatives are replaced by continuous derivatives, was proposed much earlier in the statistics literature \nby~\cite{mammen1997locally} under the name {\em locally adaptive regression splines}. The \ndesired risk adaptation property (of any order $r$) was established \nin~\cite{guntuboyina2020adaptive}, where it was shown that trend filtering (of order $r$) attains the minimax rate whenever the underlying truth \n$\theta^*$ is a \textit{discrete spline} satisfying some \textit{minimum length} assumptions. See \nSection~\ref{Sec:compare} for a more detailed discussion. However, to the best of our knowledge, such \nbounds are not available in dimension $2$ and above. To summarize this section: beyond the univariate case, our question does not appear to have been answered.\n
Our goal here is to start filling this gap in the literature.\n\n\n\begin{remark}\n\t\label{remark:scope}\n\tAlthough the fixed lattice design setting is commonly studied, recall that in this setting the sample size $N \geq 2^d$ whenever $n \geq 2.$ In other words, $N$ is necessarily growing exponentially with $d.$ Thus, the results in this paper are really meaningful in the asymptotic regime where $d$ is some fixed ambient dimension and $n \rightarrow \infty.$ Practically speaking, our algorithms and the statistical risk bounds we present in this manuscript are meaningful for low to moderate dimensions $d.$ Even when $d = 2$, the question we posed about attaining the minimax rate adaptively for all truths in a computationally feasible way seems a nontrivial problem. \n\end{remark}\n\n\subsection{Our Results for ORT}\label{secmpp2}\nRecall that $K_{r,d}$ is the dimension of the subspace of $d$ dimensional polynomials with degree at most $r \geq 0.$ An immediate corollary of Theorem~\ref{thm:adapt} is the following.\n\begin{corollary}\label{cor:pc}\n\tThere exists an absolute constant $C > 0$ such that by setting $\lambda = C K_{r,d} \:\sigma^2\:\log N$ we have the following risk bound:\n\n\t\begin{equation*}\n\t\t\MSE(\hat{\theta}^{(r)}_{\hier,\lambda},\theta^*) \leq \frac{C\:K_{r,d}\:\:\sigma^2\:\log N}{N}\:k^{(r)}_{\hier}(\theta^*) + \frac{C\:\sigma^2}{N}.\n\t\end{equation*}\n\end{corollary}\n\n\nLet us discuss some implications of the above corollary. For ORT of order $r \geq 0$, a risk bound scaling like $O(\frac{k^{(r)}_{\hier}(\theta^*)}{N} \log N)$ is guaranteed \nfor all $\theta^*$. Thus, for instance, if the true $\theta^*$ is piecewise constant\/linear on some arbitrary unknown hierarchical \npartition of $L_{d, n}$, the corresponding ORT estimator of order $0$ or $1$ respectively achieves the (near) minimax risk $\nO(\frac{k^{(r)}_{\all}(\theta^*)}{N} \log N)$. Although this result is an immediate implication of Theorem~\ref{thm:adapt}, this is the first \nsuch risk guarantee established for a computationally efficient decision tree estimator in \ngeneral dimensions as far as we are aware.\n\n\n\nAt this point, let us recall that our target is to achieve the ideal upper bound $\tilde O(\frac{k^{(r)}_{\all}(\theta^*)}{N})$ on the MSE for all $\theta^*$, which is attained by the fully penalized LSE; however, that estimator is perhaps not efficiently computable. The best upper bound on the MSE we can get for a computationally efficient estimator is \n$\tilde O(\frac{k^{(r)}_{\hier}(\theta^*)}{N})$, which is attained by the ORT estimator.\n\n\nA natural question that arises at this point is how much worse the upper bound for ORT is than the upper bound for the fully penalized LSE given in Theorem~\ref{thm:adapt}. Equivalently, we know that $k_{\all}^{(r)}(\theta^*) \leq k_{\hier}^{(r)}(\theta^*)$ in general, but how large can the gap be? There certainly exist partitions which are not hierarchical; that is, $\mathcal{P}_{\hier, d, n}$ is a strict subset of $\mathcal{P}_{\all,d, n}$, as shown in Figure~1.\n\n\n\nIn the next section we explore general and possibly nonhierarchical partitions of $L_{d,n}$ and state several results which imply that ORT incurs an MSE at most a constant factor larger than that of the ideal fully penalized LSE for several natural instances of rectangular partitions.\n
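\n\nBefore doing so, it is worth seeing a concrete example of a nonhierarchical rectangular partition. The classic ``windmill'' configuration, with four rectangles arranged around a central square, gives a rectangular partition with $5$ pieces admitting no split that goes all the way from top to bottom or from left to right. A minimal R sketch (our own illustration, separate from Figure~1) of an array which is piecewise constant on such a partition, so that $k^{(0)}_{\all}(\theta) = 5$ while any hierarchical partition on which $\theta$ is piecewise constant must use more rectangles:\n\begin{verbatim}\nn <- 9; p <- 3; q <- 6                     # two cut levels p < q\ntheta <- matrix(NA, n, n)\ntheta[1:p, 1:q]             <- 1           # top left rectangle\ntheta[1:q, (q + 1):n]       <- 2           # top right rectangle\ntheta[(q + 1):n, (p + 1):n] <- 3           # bottom right rectangle\ntheta[(p + 1):n, 1:p]       <- 4           # bottom left rectangle\ntheta[(p + 1):q, (p + 1):q] <- 5           # centre square\nany(is.na(theta))                          # FALSE: the five rectangles tile the grid\n\end{verbatim}\n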
\n\n\n\begin{comment}\n\begin{figure}\label{fig3}\n\begin{center}\n\includegraphics[scale=0.5]{examples_of_rectangles} \n\end{center}\n\caption{\textit{Image (a) is an example of a recursive dyadic partition of the plane. Image (b) is non-dyadic but is a hierarchical partition. Image (c) is an example of a non hierarchical partition. An easy way to see this is that there is no split from top to bottom or left to right.}} \n\end{figure}\n\end{comment}\n\n\n\n\subsubsection{Arbitrary partitions}\label{sec:non}\nThe risk bound for ORT in Theorem~\ref{thm:adapt} is in terms of $k^{(r)}_{\hier}(\theta^*).$ We \nwould like to convert it into a risk bound involving $k^{(r)}_{\all}(\theta^*).$ A natural way of doing this would be to refine an arbitrary partition into a hierarchical partition and \nthen count the number of extra rectangular pieces that arise as a result of this \nrefinement. This raises the following question of a combinatorial flavour.\n\n\n\textit{Can an arbitrary partition of $L_{d,n}$ be refined into a hierarchical partition without increasing the number of rectangles too much?} \n\n\n\nFortunately, the above question has been studied a fair bit in the computational\/combinatorial geometry literature\nunder the name of \textit{binary space partitions}. A binary space \npartition (BSP) is a recursive partitioning scheme for a set of \nobjects in space. The goal is to partition the space recursively \nuntil each smaller space contains only one\/few of the original \nobjects. The main questions of interest are, given the set of \nobjects, the minimal cardinality of such a partition \nand an efficient algorithm to compute it. A nice survey of this area, explaining the \ncentral questions and giving an overview of known results, can be found in~\cite{toth2005binary}. \nWe now leverage some existing results in this area which, with the help of Theorem~\ref{thm:adapt}, yield corresponding risk bounds.\n\n\nFor $d = 2,$ it turns out that any rectangular partition can be refined into a hierarchical one where the number of rectangular pieces at most doubles. The following proposition is due to~\cite{berman2002exact} and states this fact.\n\n\begin{proposition}[\cite{berman2002exact}]\label{prop:partapp}\n\tGiven any partition $\Pi \in \mathcal{P}_{\all,2, n}$ there exists a refinement $\tilde{\Pi} \in \mathcal{P}_{\hier,2, n}$ such that $|\tilde{\Pi}| \leq 2 |\Pi|.$ As a consequence, for any matrix $\theta \in \R^{n \times n}$ and any nonnegative integer $r$, we have $$k^{(r)}_{\hier}(\theta) \leq 2 k^{(r)}_{\all}(\theta).$$\n\end{proposition}\n\n\n\nThe above proposition applied to Theorem~\ref{thm:adapt} immediately yields the following theorem:\n\begin{theorem}\n\tLet $d = 2.$ There exists an absolute constant $C$ such that by setting $\lambda = C\:K_{r,d}\:\:\sigma^2\:\log N$ we have the following risk bound for $\hat{\theta}^{(r)}_{\hier,\lambda}$:\n\t\begin{equation*}\n\t\t\MSE(\hat{\theta}^{(r)}_{\hier,\lambda},\theta^*) \leq \frac{C\:K_{r,d}\:\:\sigma^2\:\log N}{N}\:k^{(r)}_{\all}(\theta^*) + \frac{C\:\sigma^2}{N}.\n\t\end{equation*}\n\end{theorem}\n\n\n\begin{remark}\n\tThus, in the two dimensional setting $d = 2$, {\rm ORT} fulfills the two objectives of computability in polynomial time and attaining the minimax rate\n\tadaptively for all truths $\theta^*.$ This completely solves the main \n\tquestion we posed in the two dimensional case. 
To the best of our knowledge, this is the first result of its kind in the literature. \n\end{remark}\n\n\n\nFor dimensions higher than $2$, the best available result akin to Proposition~\ref{prop:partapp} is due to~\cite{hershberger2005binary}. \n\n\begin{proposition}[\cite{hershberger2005binary}]\label{prop:partapp2}\n\tLet $d > 2.$ Given any partition $\Pi \in \mathcal{P}_{\all, d, n}$ there exists a refinement $\tilde{\Pi} \in \mathcal{P}_{\hier, d, \n\t\tn}$ such that $|\tilde{\Pi}| \leq |\Pi|^{\frac{d + 1}{3}}.$ As a consequence, for any array $\theta \in \R^{L_{d, n}}$ and any nonnegative integer $r$, we have \n\t$$k^{(r)}_{\hier}(\theta) \leq \big(k^{(r)}_{\all}(\theta)\big)^{\frac{d + 1}{3}}.$$\n\end{proposition}\n\n\n\begin{remark}\n\tA matching lower bound is also given in~\cite{hershberger2005binary} for the case $d = 3.$ Thus, to refine a rectangular partition of $k$ pieces into a hierarchical one, the number of rectangular pieces may need to grow to the order of $k^{4\/3}$ in the worst case. \n\end{remark}\n\n\nThe above result suggests that for arbitrary partitions in $d$ dimensions, our current approach will not yield the near minimax rate of convergence. Nevertheless, we state the risk bound that is implied by Proposition~\ref{prop:partapp2}.\n\n\n\begin{theorem}\n\tLet $d > 2.$ There exists an absolute constant $C$ such that by setting $\lambda \geq C\:K_{r,d} \:\sigma^2\:\log N$ we have the following risk bound for $\hat{\theta}^{(r)}_{\hier,\lambda}$:\n\t\begin{equation*}\n\t\t\MSE(\hat{\theta}^{(r)}_{\hier,\lambda},\theta^*) \leq \lambda \frac{ \big(k^{(r)}_{\all}(\theta^*)\big)^{\frac{d + 1}{3}}}{N} + \frac{C\:\sigma^2}{N}.\n\t\end{equation*}\n\end{theorem}\n\n\n\nOur approach of refining an arbitrary partition into a hierarchical partition does not seem to yield the $\tilde O\big(\sigma^2 \frac{k^{(r)}_{\all}(\theta^*)}{N} \big)$ rate of \nconvergence for ORT in dimensions higher than $2$ when the truth is a piecewise polynomial \nfunction on an \textit{arbitrary} rectangular partition. Rectangular partitions in higher \ndimensions could be highly complex, with some rectangles being very ``skinny'' in some \ndimensions. However, it turns out that if we rule out such anomalies, then it is still \npossible to attain our objective. Let us now define a class of partitions which rules out \nsuch anomalies. \n\nLet $R$ be a rectangle defined as $R = \prod_{i = 1}^{d} [a_i,b_i] \subset L_{d,n}.$ Let the sidelengths of $R$ be defined as $n_i = \nb_i - a_i + 1$ for $i \in [d].$ Define its {\em aspect ratio} as $\mathcal{A}(R) = \max \{\frac{n_i}{n_j}: (i,j) \in [d]^2\}.$ For \nany $\alpha \geq 1$, let us call a rectangle $R$ $\alpha$ {\em fat} if \nwe have $\mathcal{A}(R) \leq \alpha.$ Now consider a rectangular \npartition $\Pi \in \mathcal{P}_{\all,d,n}.$ We call $\Pi$ an \textit{$\alpha$ fat partition} if each of its constituent \nrectangles is $\alpha$ fat. 
Let us denote the class of $\\alpha$ fat partitions of $L_{d,n}$ as $\\mathcal{P}_{\\fat(\\alpha),d,n}.$ As \nbefore, we can now define the class of subspaces $S^{(r)}_{\\fat(\\alpha),d,n}$ corresponding to the set of partitions \n$\\mathcal{P}_{\\fat(\\alpha),d,n}.$\nFor any array $\\theta^*$ and any integer $r >0$ we can also denote\n\\begin{equation*}\n\tk^{(r)}_{\\fat(\\alpha)}(\\theta^*) = k_{S^{(r)}_{\\fat(\\alpha),d,n}}(\\theta^*).\n\\end{equation*}\n\n\n\n\n\n\n\n\n\n\nAn important result in the area of binary space partitions is that any fat rectangular partition of $L_{n,d}$ can be refined into a hierarchical one with the number of rectangular pieces inflated by at most a constant factor. This is the content of the following proposition which is due to~\\cite{de1995linear}.\n\\begin{proposition}[\\cite{de1995linear}]\\label{prop:fatpartapp}\n\tThere exists a constant $C(d,\\alpha) \\geq 1$ depending only on $d$ and $\\alpha$ such that any partition $\\Pi \\in \\mathcal{P}_{\\fat(\\alpha),d,n}$ can be refined into a hierarchical partition $\\tilde{\\Pi} \\in \\mathcal{P}_{\\hier,d,n}$ satisfying\n\t\\begin{equation*}\n\t\t|\\tilde{\\Pi}| \\leq C(d,\\alpha) |\\Pi|.\n\t\\end{equation*}\n\tEquivalently, for any $\\theta \\in \\R^{L_{n,d}}$ and any non negative integer $r$ we have\n\t\\begin{equation*}\n\t\tk^{(r)}_{\\hier}(\\theta) \\leq C(d,\\alpha) k^{(r)}_{\\fat(\\alpha)}(\\theta).\n\t\\end{equation*}\n\\end{proposition}\n\n\n\n\n\n\nThe above proposition gives rise to a risk bound for ORT in all dimensions.\n\\begin{theorem}\\label{thm:fat}\n\tFor any dimension $d$ there exists an absolute constant $C$ such that by setting $\\lambda \\geq C\\:K_{r,d}\\:\\sigma^2\\:\\log n$ we have the following risk bound for $\\hat{\\theta}_{\\hier,\\lambda}$:\n\t\\begin{equation*}\n\t\t\\E \\|\\hat{\\theta}^{(r)}_{\\hier,\\lambda} - \\theta^*\\|^2 \\leq \\inf_{\\theta \\in \\R^{L_{n,d}}} \\big(2\\:\\|\\theta - \\theta^*\\|^2 + \\lambda \\:C(d,\\alpha)\\: k^{(r)}_{\\fat(\\alpha)}(\\theta)\\big) + C\\:\\sigma^2.\n\t\\end{equation*}\n\\end{theorem}\n\n\n\n\\begin{remark}\n\tFor any fixed dimension $d$, when $\\theta^*$ is piecewise polynomial of degree $r$ on a fat paritition, the above theorem implies a $O\\big(\\sigma^2 \\frac{k^{(r)}_{\\all}(\\theta^*)}{N} \\log N\\big)$ bound to the MSE of the ORT estimator (of order $r$). Thus, for arbitrary fat partitions in any dimension, ORT attains our objective of enjoying the near minimax rate of convergence and being computationally efficient. For any fixed dimension $d$, this is the first result of its type that we are aware of. \n\\end{remark}\n\n\n\n\\begin{remark}\n\tIt should be mentioned here that the constant $C(d,\\alpha)$ scales exponentially with $d$, at least in the construction which is due to~\\cite{de1995linear}. In any case, recall that all of our results are meaningful when $d$ is low to moderate.\n\\end{remark}\n\n\n\n\\subsection{Our Results for Dyadic CART}\nIn the previous section, we showed that the ORT estimator attains the desired $\\tilde \nO\\big(\\sigma^2 \\frac{k^{(r)}_{\\all}(\\theta^*)}{N}\\big)$ rate for all $\\theta^*$ adaptively \nin dimensions $d = 1,2$ and for all $\\theta^*$ which are piecewise polynomial on a fat \npartition in all dimensions $d > 2.$ Since the ORT is more computationally expensive than \nDyadic CART, a natural question is whether there are analogous results for Dyadic CART. 
In this case, the relevant question is \n\medskip\n\n\textit{Can an arbitrary nonhierarchical partition of $L_{d,n}$ be refined into a recursive dyadic partition without increasing the number of rectangles too much?} \n\medskip\n\nWhen $d = 1$ or $d = 2$, we can show that any arbitrary rectangular partition can be refined into a recursive dyadic partition with the number of rectangles inflated by at most a logarithmic factor. This is the content of our next result, which is proved in Section~\ref{sec:dyaproof}.\n\n\begin{proposition}\label{prop:dyadicref}\n\tGiven any positive integer $n$ and given a partition $\Pi \in \mathcal{P}_{\all,1,n}$ with $k$ intervals, there exists a refinement $\tilde{\Pi} \in \mathcal{P}_{\rdp,1,n}$ which is a recursive dyadic partition with at most $C k \log (en\/k)$ intervals, where $C > 0$ is a universal constant.\n\n\tEquivalently, for all $\theta \in \R^{L_{1,n}}$ and all nonnegative integers $r$, we have \n\t\begin{equation}\label{eq:refine1}\n\tk^{(r)}_{\rdp}(\theta) \leq C k^{(r)}_{\all}(\theta) \log \frac{en}{k^{(r)}_{\all}(\theta)}.\n\t\end{equation}\n\t\n\tMoreover, given any positive integer $n$ and an arbitrary partition $\Pi \in \mathcal{P}_{\all,2,n}$ of $L_{2,n}$ with $k$ rectangles, there exists a refinement $\Pi^{'} \in \mathcal{P}_{\rdp,2,n}$ which is a recursive dyadic partition with at most $C k (\log n)^2$ rectangles, where $C$ is a universal constant. Equivalently, for all $\theta \in \R^{L_{2,n}}$ and all nonnegative integers $r$, we have \n\t\begin{equation}\label{eq:refine2}\n\tk^{(r)}_{\rdp}(\theta) \leq C (\:\log n)^2 k^{(r)}_{\all}(\theta).\n\t\end{equation}\n\t\n\end{proposition}\n\n\nWe have not seen the above result (equation~\eqref{eq:refine2}) stated explicitly in the statistics literature. It is probable that this result is known in the combinatorics or computational geometry literature. However, since we could not locate an exact reference, we provide its proof in Section~\ref{sec:dyaproof}.\n\n\begin{remark}\n\tThe exponent of $\log n$, which is $1$ for $d = 1$ and $2$ for $d = 2$, cannot be improved \n\tin general. It is now natural to conjecture that a result like the above is true for general \n\t$d$ with the exponent of $\log n$ equal to $d.$ However, we do not know whether this is true or \n\tnot. Our current proof for the $d = 2$ case breaks down and cannot be extended to higher dimensions. 
See Remark~\\ref{rem:dyadicint} for more explanations on this.\n\\end{remark}\n\n\n\nThe implication of Proposition~\\ref{prop:dyadicref} is the following corollary for Dyadic CART.\n\\begin{corollary}\\label{cor:pcdc}\n\tFor $d = 1$ and any integer $n$, there exists a universal constant $C > 0$ such that by setting $\\lambda = C K_{r,1} \\:\\sigma^2\\:\\log n$ we have the following risk bound, \n\n\t\\begin{equation*}\n\t\tMSE(\\hat{\\theta}^{(r)}_{\\rdp,\\lambda},\\theta^*) \\leq C K_{r,1} \\sigma^2 \\frac{ k^{(r)}_{\\all}(\\theta^*)}{N} \\log \\frac{n}{k^{(r)}_{\\all}(\\theta)} \\log n + \\frac{C\\:\\sigma^2}{N}.\n\t\\end{equation*}\n\t\n\tFor $d = 2$ and any integer $n$, there exists a universal constant $C > 0$ such that by setting $\\lambda = C K_{r,2} \\:\\sigma^2\\:\\log n$ we have the following risk bound, \n\n\t\\begin{equation*}\n\t\tMSE(\\hat{\\theta}^{(r)}_{\\rdp,\\lambda},\\theta^*) \\leq C K_{r,2}\\:\\sigma^2 \\frac{ k^{(r)}_{\\all}(\\theta^*)}{N} (\\log N)^3 + \\frac{C\\:\\sigma^2}{N}.\n\t\\end{equation*}\n\t\n\\end{corollary}\n\n\nTo summarize, Dyadic CART attains the same rate as the ORT with an extra $\\log N$ factor when $d = 1$ and with an extra $(\\log N)^2$ factor when $d = 2.$ We do not know whether for $d > 2$, a result for Dyadic CART analogous to Theorem~\\ref{thm:fat} for fat partitions is possible or not.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{\\textbf{Results for Multivariate Functions with Bounded Total Variation}}\\label{sec:tv}\n\nIn this section, we will describe an application of Theorem~\\ref{thm:adapt} to show that Dyadic CART of order $0$ has near optimal (worst case and adaptive) risk guarantees in any dimension when we consider estimating functions with bounded total variation. Let us first define what we mean by total variation.\n\n\n\nLet us think of $L_{d,n}$ as the $d$ dimensional regular lattice graph. Then, thinking of $\\theta \\in \\R^{L_{d,n}}$ as a function on $L_{d,n}$ we define\n\\begin{equation}\\label{eq:TVdef}\n\\TV(\\theta) = \\sum_{(u,v) \\in E_{d,n}} |\\theta_{u} - \\theta_{v}| \n\\end{equation}\nwhere $E_{d,n}$ is the edge set of the graph $L_{d,n}.$\nOne way to motivate the above definition is as follows. If we think $\\theta[i_1,\\dots,i_n] = f(\\frac{i_1}{n},\\dots,\\frac{i_d}{n})$ for a differentiable function $f: [0,1]^{d} \\rightarrow \\R$ then the above definition divided by $n^{d - 1}$ is precisely the Reimann approximation for $\\int_{[0,1]^d} \\|\\nabla f\\|_1.$ Of course, the definition in~\\eqref{eq:TVdef} applies to arbitrary arrays, not just for evaluations of a differentiable function on the grid. \n\n\n\n\nThe usual way to estimate functions\/arrays with bounded total variation is to use the Total Variation Denoising (TVD) estimator defined as follows:\n$$\\hat{\\theta}_{\\lambda} = \\argmin_{\\theta \\in \\R^{L_{d,n}}} \\big(\\|y - \\theta\\|^2 + \\lambda \\TV(\\theta)\\big).$$ \nThis estimator was first introduced in the $d = 2$ case by~\\cite{rudin1992nonlinear} for image denoising. This estimator is now a standard and widely succesful technique to do image denoising. In the $d = 1$ setting, it is known (see, e.g.~\\cite{donoho1998minimax},~\\cite{mammen1997locally}) that a well tuned TVD estimator is minimax rate optimal on the class of all bounded variation signals $\\{\\theta: \\TV(\\theta) \n\\leq V\\}$ for $ V > 0$. 
It is also known (e.g, see~\\cite{guntuboyina2020adaptive},~\\cite{dalalyan2017tvd},~\\cite{ortelli2018}) that, when properly tuned, the above estimator is capable of attaining the oracle MSE scaling like $O(\\frac{k_{\\all}^{(0)}(\\theta^*)}{N})$, up to a log factor in $N.$ \n\n\n\n\nIn the multivariate setting ($d \\geq 2$), worst case performance of the TVD estimator has been studied in~\\cite{hutter2016optimal},~\\cite{sadhanala2016total}. These results show that like in the 1D setting, a well tuned TVD estimator is nearly (up to log factors) minimax rate optimal over the class $\\{\\theta \\in \\R^{L_{d,n}}: \\TV(\\theta) \\leq V\\}$ of bounded variation signals in any dimension. \n\n\n\nThe goal of this section is to proclaim that the Dyadic CART estimator $\\hat{\\theta}^{(0)}_{\\rdp,\\lambda}$ enjoys similar statistical guarantees as the TVD estimator and possibly even has some advantages over TVD which we list below. \n\n\n\n\\begin{itemize}\n\t\\item The Dyadic CART estimator $\\hat{\\theta}^{(0)}_{\\rdp,\\lambda}$ is computable in $O(N)$ time in low dimensions $d.$ Note that TVD is mostly used for image processing in the $d = 2,3$ case. Recall that the lattice has at least $2^d$ points as soon as $n \\geq 2$ so it does not make sense to think of high $d.$ While TVD estimator is the solution of a convex optimization procedure, there is no known algorithm which computes it provably in $O(N)$ time to the best of our knowledge. As we show in Theorem~\\ref{thm:tvadapmlb} and Theorem~\\ref{thm:dcadap}, the Dyadic CART estimator is also minimax rate optimal over the class $\\{\\theta \\in \\R^{L_{d,n}}: \\TV(\\theta) \\leq V\\}.$ Thus, the Dyadic CART estimator appears to be the first provably linear time computable estimator achieving the minimax rate, up to log factors, for functions with bounded total variation. \n\t\n\t\n\t\\item We also show that the Dyadic CART estimator is also adaptive to the intrinsic dimensionality of the true signal $\\theta^*.$ We make the meaning of adapting to intrinsic dimensionality precise later in this section. It is not known whether the TVD estimator demonstrates such adaptivity.\n\t\n\t\n\t\\item One corollary of Theorem~\\ref{thm:adapt} is that the Dyadic CART estimator nearly attains the oracle risk when the truth $\\theta^*$ is piecewise constant on a recursive dyadic partition of $L_{n,d}.$ For such signals, the ideally tuned TVD estimator, even in the $d = 2$ case, provably cannot attain the oracle risk for such piecewise constant signals in general; see Theorem $2.3$ in~\\cite{chatterjee2019new}. \n\\end{itemize}\n\n\n\n\n\n\n\\subsubsection{Adaptive Minimax Rate Optimality of Dyadic CART}\nWe now describe risk bounds for the Dyadic Cart estimator for bounded variation arrays.\nLet us define the following class of bounded variation arrays:\n\\begin{equation*}\n\tK_{d,n}(V) = \\{\\theta \\in L_{d,n}: \\TV(\\theta) \\leq V\\}\n\\end{equation*}\nFor any generic subset $S \\subset [d]$, let us denote its cardinality by $|S|.$ For any vector $x \\in [n]^d$ let us define $x_S \\in [n]^{|S|}$ to be the vector $x$ restricted to the coordinates given by $S.$ We now define\n\\begin{equation*}\n\tK^{S}_{d,n}(V) = \\{\\theta \\in K_{d,n}(V): \\theta(x) = \\theta(y) \\:\\forall x,y\\: \\in \\:[n]^d \\:\\:\\text{with}\\:\\: x_S = y_S\\}\n\\end{equation*}\nIn words, $K^{S}_{n,d}(V)$ is just the set of arrays in $K_{d,n}(V)$ which are a function of the coordinates within $S$ only. 
In this section, we will show that the Dyadic CART estimator is minimax rate optimal (up to log factors) over the parameter space $K^{S}_{d,n}(V)$ simultaneously over all subsets $S \subset [d].$ This means that the Dyadic CART performs as well as an oracle estimator which knows the subset $S.$ This is what we mean when we say that the Dyadic CART estimator adapts to intrinsic dimensionality. To the best of our knowledge, such an oracle property in variable selection is rare in nonparametric regression. The work in~\cite{bertin2008selection} gives a two step procedure for adapting to intrinsic dimensionality for multivariate H\"{o}lder smooth function classes. The only comparable result that we are aware of for a spatially heterogeneous function class is Theorem $3$ in~\cite{deng2018isotonic}, which proves a similar adaptivity result in multivariate isotonic regression. \n\n\n\n\nFix a subset $S \subset [d]$ and let $s = |S|.$ Consider our Gaussian mean estimation problem where it is known that the underlying truth $\theta^* \in K^{S}_{d,n}(V).$ We could think of $\theta^*$ as $n^{d - s}$ copies of an $s$ dimensional array $\theta^*_{S} \in \R^{L_{s,n}}.$ It is easy to check that $\theta^*_{S} \in K_{s,n}(V_S)$ where $V_S = \frac{V}{n^{d - s}}.$ Estimating $\theta^*$ is equivalent to estimating the $s$ dimensional array $\theta^*_{S}$ where the noise variance is now reduced to $\sigma^2_{S} = \frac{\sigma^2}{n^{d - s}}$ because we can average over $n^{d - s}$ elements per entry of $\theta^*_{S}.$ Therefore, we now have a reduced Gaussian mean estimation problem where the noise variance is $\sigma^2_{S}$ and the parameter space is $K_{s,n}(V_S).$ A tight lower bound on the minimax risk for the parameter space $K_{d,n}(V)$ for arbitrary $n,d,V > 0$ is available in~\cite{sadhanala2016total}. Using the above logic and this existing minimax lower bound allows us to establish a lower bound on the minimax risk for the parameter space $K^{S}_{d,n}(V).$ The detailed proof is given in Section~\ref{Sec:proofs}.\n\n\n\begin{theorem}[Minimax Lower Bound over $K^{S}_{d,n}(V)$]\n\t\label{thm:tvadapmlb}\n\tFix positive integers $n,d$ and let $S \subset [d]$ be such that $s = |S| \geq 2.$ Let $V > 0$ and $V_S = \frac{V}{n^{d - s}}.$ Similarly, for $\sigma > 0,$ let $\sigma^2_S = \frac{\sigma^2}{n^{d - s}}.$ There exists a universal constant $c > 0$ such that \n\t\begin{equation*}\n\t\t\inf_{\tilde{\theta} \in \R^{L_{d, n}}} \sup_{\theta \in K^{S}_{d,n}(V)} \E_{\theta} \|\tilde{\theta} - \theta\|^2 \geq c\:n^{d - s}\: \min\Big\{\frac{\sigma_{S}\:V_S}{2s} \sqrt{1 + \log\Big(\frac{2\:\sigma\:s\:n^s}{V_S}\Big)}, n^{s} \sigma_S^2, \Big(\frac{V_S}{s}\Big)^2 + \sigma_S^2\Big\}.\n\t\end{equation*}\n\tIf $|S| = 1$ then \n\t\begin{equation*}\n\t\t\inf_{\tilde{\theta} \in \R^{L_{d, n}}} \sup_{\theta \in K^{S}_{d,n}(V)} \E_{\theta} \|\tilde{\theta} - \theta\|^2 \geq c\:n^{d - 1}\: \min\{(\sigma^2_{S} V_{S})^{2\/3} n^{1\/3}, n\:\sigma^2_{S}, n\:V^2_{S}\}.\n\t\end{equation*}\n\end{theorem}\n\n\n\nLet us now explain the above result. If we take $S = [d]$, this is exactly the lower bound in Theorem $2$ of~\cite{sadhanala2016total}. All we have done is to state the same result for an arbitrary subset $S$, since we can reduce the estimation problem over $K^{S}_{d,n}(V)$ to an $s$ dimensional estimation problem over $K_{s,n}(V_S).$ The bound is in terms of a minimum of three terms. 
It is enough to explain this bound in the case when $S = [d]$, as similar reasoning holds for any subset $S$ with $s = |S| \geq 2.$ Thinking of $\sigma$ as a fixed constant, the three terms in the minimum on the right side correspond to different regimes of $V.$ It can be shown that the constant array with each entry equal to $\overline{y}$ attains the $V^2 + \sigma^2$ rate, which is dominant when $V$ is very small. The estimator $y$ itself attains the $N \sigma^2$ rate, which is dominant when $V$ is very large. Hence, these regimes of $V$ can be thought of as trivial regimes. In the nontrivial regime, the lower bound is $c\, \frac{\sigma\:V}{2d} \sqrt{1 + \log(\frac{2\:\sigma\:d\:N}{V})}.$\n\n\n\nIt is also known that a well tuned TVD estimator is minimax rate optimal, in the nontrivial regime, over $K_{d,n}(V)$ for all $d \geq 2$, up to log factors; see~\cite{hutter2016optimal}. In particular, it achieves the above minimax lower bound (up to log factors) in the nontrivial regime. For this reason, we can define an oracle estimator (which knows the set $S$) attaining the minimax lower bound over $K_{d,n}^{S}(V)$ in Theorem~\ref{thm:tvadapmlb}, up to log factors. The oracle estimator would first obtain $\overline{y}_{S}$ by averaging the observation array $y$ over the coordinates in $S^{C}$, and then apply the $s$ dimensional TVD estimator to $\overline{y}_{S}.$ Our main point here is that the Dyadic CART estimator performs as well as this oracle estimator, without the knowledge of $S$. In other words, its risk nearly (up to log factors) matches the minimax lower bound in Theorem~\ref{thm:tvadapmlb} adaptively over all subsets $S \subset [d].$\nThis is the content of our next theorem, which is proved in Section~\ref{Sec:proofs} (in the supplementary file).\n\n\n\begin{theorem}[Adaptive Risk Bound for Dyadic CART]\n\t\label{thm:dcadap}\n\tFix any positive integers $n,d.$ Let $\theta^* \in K^{S}_{d,n}(\infty)$ be the underlying truth where $S \subset [d]$ is any subset with $s = |S| \geq 2.$ Let $V^* = \TV(\theta^*).$\n\tLet $V^*_{S} = \frac{V^*}{n^{d - s}}$ and $\sigma^2_{S} = \frac{\sigma^2}{n^{d - s}}$ be defined as before. The following risk bound holds for the Dyadic CART estimator $\hat{\theta}^{(0)}_{\rdp,\lambda}$ with $\lambda \geq C \sigma^2 \log N$ where $C$ is an absolute constant:\n\t\begin{equation*}\n\t\t\E_{\theta^*} \|\hat{\theta}^{(0)}_{\rdp,\lambda} - \theta^*\|^2 \leq C\:n^{d - s}\:\min\{ \sigma_{S} V^*_{S} \log N, n^{s}\:\sigma_{S}^2 \log N, \big((V^*_{S})^{2} + \sigma_{S}^2\big)\}.\n\t\end{equation*}\n\tIn the case $|S| = 1$ we have\n\t\begin{equation*}\n\t\t\E_{\theta^*} \|\hat{\theta}^{(0)}_{\rdp,\lambda} - \theta^*\|^2 \leq C\:n^{d - 1}\: \min\{(\sigma^2_{S} V^*_{S} \log N)^{2\/3} n^{1\/3}, n\:\sigma^2_{S} \log N, n\:(V^*_{S})^2 + \sigma_{S}^2 \log N\}.\n\t\end{equation*}\n\n\end{theorem}\n\n\n\nWe think the following is an instructive way to read off the implications of the above theorem. Let us consider $d \geq 2$ and the $S = [d]$ case. We will only look at the nontrivial regime even though Dyadic CART remains minimax rate optimal, up to log factors, even in the trivial regimes. In this case, $\MSE(\hat{\theta}^{(0)}_{\rdp,\lambda},\theta^*) = \tilde{O}(\frac{\sigma V^*}{N})$, which is the minimax rate in the nontrivial regime as given by Theorem~\ref{thm:tvadapmlb}. 
Now, for many natural instances of $\theta^*$, the quantity $V^* = O(n^{d - 1})$; for instance, if $\theta^*$ is given by evaluations of a differentiable function on the grid. This $O(n^{d - 1})$ scaling was termed the \textit{canonical scaling} for this problem by~\cite{sadhanala2016total}. Therefore, under this canonical scaling for $V^*$ we have $$\MSE(\hat{\theta}^{(0)}_{\rdp,\lambda},\theta^*) = \tilde{O}(\frac{\sigma}{n}) = \tilde{O}(\frac{\sigma}{N^{1\/d}}).$$\n\n\n\nNow let us consider $d \geq 2$ and a general subset $S \subset [d].$ In the nontrivial regime, by Theorem~\ref{thm:dcadap} we have $\MSE(\hat{\theta}^{(0)}_{\rdp,\lambda},\theta^*) = \tilde{O}(\frac{\sigma_S V^*_{S}}{n^{s}})$, which is also the minimax rate over the parameter space $K_{d,n}^{S}.$ Now, $V^*_{S} = O(n^{s - 1})$ under the canonical scaling in this case. Thus, under this canonical scaling we can write $$\MSE(\hat{\theta}^{(0)}_{\rdp,\lambda},\theta^*) = \tilde{O}(\frac{\sigma_S}{n}) = \tilde{O}(\frac{\sigma_S}{N^{1\/d}}).$$ \nThis is very similar to the last display except that $\sigma$ has been replaced by $\sigma_{S}$, the \textit{actual standard deviation} of this problem. The point is that Dyadic CART attains this rate without knowing $S$. The case when $|S| = 1$ can be read off in a similar way.\n\n\n\section{\textbf{Results for Univariate Functions of Bounded Variation of Higher Orders}}\label{Sec:uni}\nIn this section, we show another application of Theorem~\ref{thm:adapt} to a family of univariate function classes which have been of recent interest. The results in this section are for the univariate Dyadic CART estimator of some order $r \geq 0.$ As mentioned in Section~\ref{Sec:intro}, TV denoising in the 1D setting has been studied as part of a general family of estimators which penalize discrete derivatives of different orders. These estimators have been studied in~\cite{mammen1997locally},~\cite{steidl2006splines},~\cite{tibshirani2014adaptive},~\cite{guntuboyina2020adaptive} and~\cite{kim2009ell_1}, the last of which coined the name trend filtering.\n\n\n\nTo define the trend filtering estimators here, we first need to define variation of all orders. For a vector $\theta \in \R^n,$ let us define $D^{(0)}(\theta) = \theta$, $D^{(1)}(\theta) = (\theta_2 - \theta_1,\dots,\theta_n - \theta_{n - 1})$ and, for $r \geq 2$, define $D^{(r)}(\theta)$ recursively as $D^{(r)}(\theta) = D^{(1)}(D^{(r - 1)}(\theta)).$ Note that $D^{(r)}(\theta) \in \R^{n - r}.$ For simplicity, we denote the operator $D^{(1)}$ by $D.$ For any integer $r \geq 1$, let us also define the $r$th order variation of a vector $\theta$ as follows:\n\begin{equation}\nV^{(r)}(\theta) = n^{r - 1} |D^{(r)}(\theta)|_{1}\n\end{equation}\nwhere $|\cdot|_1$ denotes the usual $\ell_1$ norm of a vector. Note that $V^{(1)}(\theta)$ is the usual total variation of a vector as defined in~\eqref{eq:TVdef}.\n\begin{remark}\n\tThe $n^{r - 1}$ term in the above definition is a normalizing factor and is written following the convention adopted in~\cite{guntuboyina2020adaptive}. 
If we think of $\\theta$ as evaluations of a $r$ times differentiable function $f:[0,1] \\rightarrow \\R$ on the grid $(1\/n,2\/n\\dots,n\/n)$ then the Reimann approximation to the integral $\\int_{[0,1]} f^{(r)}(t) dt$ is precisely equal to $V^{(r)}(\\theta).$ Here $f^{(r)}$ denotes the $r$th derivative of $f.$ Thus, for natural instances of $\\theta$, the reader can imagine that $V^{(r)} = O(1).$ \n\\end{remark}\n\n\n\nLet us now define the following class of sequences for any integer $r \\geq 1$,\n\\begin{equation}\n\\mathcal{BV}^{(r)}_{n}(V) = \\{\\theta \\in \\R^n: V^{(r)}(\\theta) \\leq V\\}.\n\\end{equation}\n\n\nTrend Filtering (of order $r \\geq 1$) estimators are defined as follows for a tuning parameter $\\lambda > 0$:\n\\begin{equation*}\n\t\\hat{\\theta}^{(r)}_{tf,\\lambda} = \\argmin_{\\theta \\in \\R^n} \\big(\\|y - \\theta\\|^2 + \\lambda V^{(r)}(\\theta)\\big).\n\\end{equation*}\nThus, Trend Filtering is penalized least squares where the penalty is proportional to the $\\ell_1$ norm of $D^{(r)}(\\theta).$ As opposed to Trend Filtering, here we will study the univariate Dyadic CART estimator (of order $r - 1$) which penalizes something similar to the $\\ell_0$ norm of $D^{(r)}(\\theta).$ We will show that Dyadic CART (of order $r - 1$) compares favourably with Trend Filtering (of order $r$) in the following aspects: \n\n\n\\begin{itemize}\n\t\n\t\\item For a given constant $V > 0$ and $r \\geq 1$, $n^{-2r\/2r + 1}$ rate is known to be the minimax rate of estimation over the space $\\mathcal{BV}^{(r)}_{n}(V)$; (see e.g,~\\cite{donoho1994ideal}). A standard terminology in this field terms this $n^{-2r\/2r + 1}$ rate as the \\textit{slow rate}. It is also known that a well tuned Trend Filtering estimator is minimax rate optimal over the parameter space $\\mathcal{BV}^{(r)}_{n}(V)$ and thus attains the slow rate. This result has been shown in~\\cite{tibshirani2014adaptive} and~\\cite{wang2014falling} building on earlier results by~\\cite{mammen1997locally}. In Theorem~\\ref{thm:slowrate} we show that Dyadic CART estimator of order $r - 1$ is also near minimax rate optimal (up to a log factor) over the space $\\mathcal{BV}^{(r)}_{n}(V)$ and attains the slow rate.\n\t\n\t\n\t\n\n\t\n\t\\item It is also known that an ideally tuned Trend Filtering (of order $r$) estimator can adapt to $\\|D^{r}(\\theta)\\|_0$, the number of jumps in the $r$ th order differences, \\textit{under some assumptions on $\\theta^*$}. Such a result has been shown in~\\cite{guntuboyina2020adaptive} and~\\cite{van2019prediction}. In this case, the Trend Filtering estimator of order $r$ attains the $\\tilde{O}(\\|D^{(r)}(\\theta)\\|_0\/n)$ rate. Standard terminology in this field terms this as the \\textit{fast rate}. In Theorem~\\ref{thm:fastrate} we show that Dyadic CART estimator of order $r - 1$ attains the fast rate \\textit{without any assumptions} on $\\theta^*.$ \n\t\n\t\n\t\\item If one desires the fast rate for both piecewise constant and piecewise linear functions, there is no way to attain this by a single Trend Filtering Estimator. One needs to use Trend Filtering of order $r = 2$ to attain fast rates for piecewise linear functions and $r = 1$ for piecewise constant functions respectively. In contrast, Dyadic Cart of order $1$ attains the fast rate for both piecewise linear and piecewise constant functions. This is because by our definition, a piecewise constant function is also piecewise linear. 
In general, Dyadic CART of order $r$ attains the fast rate for any piecewise polynomial signal in which each piece has degree at most $r.$ \n\t\n\t\item To the best of our knowledge, different tuning parameters for Trend Filtering are needed depending on whether the slow rate or the fast rate is desired. Thus, technically the estimators giving the slow rate and the fast rate are different. In contrast, the same tuning parameter gives both the slow rate and the fast rate for Dyadic CART.\n\t\n\t\item The univariate Dyadic CART estimator of order $r \geq 0$ can be computed in linear $O(N)$ time. Although Trend Filtering estimators are efficiently computable by convex optimization, we are not aware of a provably $O(N)$ run time bound (for general $r \geq 1$) on their computational complexity.\n\end{itemize}\n\n\begin{remark}\n\tBy Theorem~\ref{thm:adapt}, the univariate ORT would also satisfy all the risk bounds that we prove for univariate Dyadic CART in this section. Recall that for $r = 0$, the ORT is precisely the same as the fully penalized least squares estimator $\hat{\theta}_{\all,\lambda}$ studied in~\cite{boysen2009consistencies}. This is because $\mathcal{P}_{\hier,1,n}$ coincides with $\mathcal{P}_{\all,1,n}$ as all univariate partitions are hierarchical. Since the computational complexity of univariate Dyadic CART is $O(N)$ while that of univariate ORT is $O(N^3)$, we focus on univariate Dyadic CART.\n\end{remark}\n\n\n\subsubsection{Risk Bounds for Univariate Dyadic CART of all orders}\n\nWe start with the bound of $n^{-2r\/(2r + 1)}$ for the risk of Dyadic CART of order $r - 1$ over the parameter space $\mathcal{BV}^{(r)}_{n}(V).$ We also explicitly state the dependence of the bound on $V$ and $\sigma.$ \n\n\begin{theorem}[Slow Rate for Dyadic CART]\label{thm:slowrate}\n\tFix a positive integer $r.$ Let $V^{(r)}(\theta^*) = V.$ For the same constant $C$ as in Theorem~\ref{thm:adapt}, if we set $\lambda \geq C \sigma^2 \log n$ we have\n\t\begin{equation}\n\t\MSE(\hat{\theta}^{(r - 1)}_{\rdp,\lambda},\theta^*) \leq C_r \big(\frac{\sigma^2 V^{1\/r} \log n}{n}\big)^{2r\/(2r + 1)} + C_r \sigma^2 \frac{\log n}{n}\t\n\t\end{equation}\n\twhere $C_r$ is a constant depending only on $r.$\n\end{theorem}\n\n\begin{remark}\n\tThe proof of the above theorem is given in Section~\ref{Sec:proofs} (in the supplementary file). The proof proceeds by approximating any $\theta \in \mathcal{BV}^{(r)}_{n}(V)$ with a vector $\theta^{'}$ which is piecewise polynomial of degree $r - 1$ with an appropriate bound on its number of pieces, and then invoking Theorem~\ref{thm:adapt}.\n\end{remark}\n\n\begin{remark}\n\tThe above theorem shows that the univariate Dyadic CART estimator of order $r - 1$ is minimax rate optimal up to the $(\log n)^{2r\/(2r + 1)}$ factor. The dependence on $V$ is also optimal in the above bound. Up to the log factor, this upper bound matches the bound already known for the Trend Filtering estimator of order $r$ (see, e.g.,~\cite{tibshirani2014adaptive}). \n\end{remark}\n\n\nOur next bound shows that the univariate Dyadic CART estimator achieves our goal of attaining the oracle risk for piecewise polynomial signals. 
\n\n\\begin{theorem}[Fast Rates for Dyadic CART]\\label{thm:fastrate}\n\tFix a positive integer $r$ and $0 < \\delta < 1.$ Let $V^{r}(\\theta^*) = V.$ For the same constant $C$ as in Theorem~\\ref{thm:adapt}, if we set $\\lambda \\geq C \\sigma^2 \\log n$ we have\n\t\\begin{equation*}\n\t\t\\E \\|\\hat{\\theta}^{(r)}_{\\rdp,\\lambda} - \\theta^*\\|^2 \\leq \\inf_{\\theta \\in \\R^N} \\big[\\frac{(1 + \\delta)}{(1 - \\delta)}\\:\\|\\theta - \\theta^*\\|^2 + \\frac{\\lambda C_r}{1 - \\delta}\\:k^{(r)}_{\\all}(\\theta)\\:\\log (\\frac{en}{k^{(r)}_{\\all}(\\theta)})\\big] + C \\frac{\\sigma^2}{\\delta\\:(1 - \\delta)}\t\n\t\\end{equation*}\n\twhere $C_r$ is an absolute constant only depending on $r.$ As a corollary we can conclude that \n\t\\begin{equation*}\n\t\tMSE(\\hat{\\theta}^{(r)}_{\\rdp,\\lambda},\\theta^*) \\leq C_r \\sigma^2 \\frac{k^{(r)}_{\\all}(\\theta^*) \\log n \\log (\\frac{en}{k^{(r)}_{\\all}(\\theta^*)})}{n}.\n\t\\end{equation*} \n\\end{theorem}\n\n\n\\begin{proof}\n\tThe proof follows directly from the risk bound for univariate Dyadic Cart given in Theorem~\\ref{thm:adapt} and applying equation~\\eqref{eq:refine1} in Lemma~\\ref{lem:1dpartbd} which says that $k^{(r)}_{\\rdp}(\\theta) \\leq k^{(r)}_{\\all}(\\theta) \\log (\\frac{en}{k^{(r)}_{\\all}(\\theta)})$ for all vectors $\\theta \\in \\R^n.$ \n\\end{proof}\n\n\n\n\n\nLet us now put our result in Theorem~\\ref{thm:fastrate} in context. It says that in the $d = 1$ case, Dyadic CART achieves our goal of attaining MSE scaling like $\\tilde{O}(k^{(r)}_{\\all}(\\theta^*)\/n)$ (fast rate) for all $\\theta^*.$ The Trend Filtering estimator, ideally tuned, is also capable of attaining this rate of convergence; (see Theorem $3.1$ in~\\cite{van2019prediction} and Theorem $2.1$ in~\\cite{guntuboyina2020adaptive}), under certain \\textit{minimum length conditions} on $\\theta^*.$ Let us discuss this issue now in more detail and compare Theorem~\\ref{thm:fastrate} to the comparable result known for Trend Filtering.\n\\subsubsection{Comparison of Fast Rates for Trend Filtering and Dyadic CART}\\label{Sec:compare}\nThe fast rate results available for Trend Filtering give bounds on the MSE of the form $\\tilde{O}(|D^{(r)}(\\theta^*)|_0\/n)$ where $|.|_0$ refers to the number of nonzero elements of a vector. Now for every vector $\\theta \\in R^n$, $|D^{(r)}(\\theta)|_0 = k$ \nif and only if $\\theta$ equals $(f(1\/n), . . . , f(n\/n))$ for a discrete spline function f that is made of $k + 1$ polynomials each of degree at most $r - 1$. Discrete splines are piecewise polynomials with regularity at the knots. They differ from the usual (continuous) splines in the form of the regularity condition at the knots: for splines, the regularity condition translates to\n(higher order) derivatives of adjacent polynomials agreeing at the knots, while for discrete splines it translates to discrete differences of adjacent polynomials agreeing at the knots; see~\\cite{mangasarian1971discrete} for details. 
This fact about the connection between $|D^{(r)}(\theta^*)|_0$ and discrete splines is standard (see, e.g.,~\cite{steidl2006splines}) and a proof can be found in Proposition D.3 in~\cite{guntuboyina2020adaptive}.\n\n\n\nThe above discussion then directly implies that for any $\theta \in \R^n$ and any $r \geq 1$,\n\begin{equation}\nk^{(r - 1)}_{\all}(\theta) \leq |D^{(r)}(\theta)|_0 + 1.\n\end{equation}\n\nTherefore, any bound of the form $\tilde{O}(k^{(r - 1)}_{\all}(\theta^*)\/n)$ is automatically also a bound of the form $\tilde{O}(|D^{(r)}(\theta^*)|_0\/n).$ Thus, Theorem~\ref{thm:fastrate} implies that Dyadic CART attains the fast rate whenever Trend Filtering does. However, as we now argue, there is a class of functions for which Dyadic CART attains the fast rate but Trend Filtering does not. \n\n\n\nA key point is that a minimum length condition needs to hold for Trend Filtering to attain fast rates, as explained in~\cite{guntuboyina2020adaptive}. For example, when $r = 1$, consider the sequence of vectors in $\R^n$ of the form $\theta^* = (0,\dots,0,1).$ Clearly, $\theta^*$ is piecewise constant with $2$ pieces. However, to the best of our knowledge, the Trend Filtering estimator (with the tuning choices proposed in the literature, such as the ideal tuning for the constrained version of Trend Filtering as in~\cite{guntuboyina2020adaptive}) will not attain an $\tilde{O}(1\/n)$ rate for this sequence of $\theta^*$, since it needs the length of the constant pieces to grow like $n.$ However, the Dyadic CART estimator does not need any minimum length condition for Theorem~\ref{thm:fastrate} to hold and will attain the $\tilde{O}(1\/n)$ rate for this sequence of $\theta^*.$ \n\n\nNow let us come to the case when $r \geq 2.$ It is known that Trend Filtering of order $r$ fits a discrete spline of degree $r - 1$. Thus, if the truth is piecewise polynomial with a small number of pieces but does not satisfy regularity conditions such as being a discrete spline, then Trend Filtering cannot estimate it well. The reason is that Trend Filtering can only fit discrete splines. On the other hand, as long as the truth is piecewise polynomial with not too many pieces, Dyadic CART does not need the regularity conditions to be satisfied in order to perform well. Let us illustrate this with a simple example in the case when $r = 2.$ A similar phenomenon holds for higher $r.$ \n\n\n\nLet us consider a discontinuous piecewise linear function $f^*:[0,1] \rightarrow \R$ defined as follows:\n\begin{equation*}\n\tf^*(x) =\n\t\begin{cases}\n\t\tx, & \text{for} \:\:x \leq 1\/2 \\\n\t\t2x, & \text{for} \:\:1\/2 < x \leq 1\n\t\end{cases}\n\end{equation*}\nLet $\theta^* \in \R^n$ be such that $\theta^* = (f^*(1\/n),\dots,f^*(n\/n)).$ Clearly, $k^{(1)}_{\all}(\theta^*) = 2$ and by Theorem~\ref{thm:fastrate}, the Dyadic CART estimator of order $1$ attains the $\tilde{O}(1\/n)$ rate for this sequence of $\theta^*.$ If we examine the vector $D^{(1)}(\theta^*)$, it is of the form $(a,\dots,a,b,c,\dots,c).$ It is piecewise constant with three pieces. However, it does not satisfy the minimum length condition, as the middle piece has length only $1$ rather than order $n.$ Thus, the Trend Filtering estimator of order $2$ will not attain the fast rate for such a sequence of $\theta^*.$ In fact, it can be shown that the Trend Filtering estimator will not even be able to attain the slow rate and would be inconsistent for such a sequence of $\theta^*$, simply because Trend Filtering can only fit discrete splines. 
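\n\nThis example is easy to verify numerically. A minimal R sketch (our own illustrative code) constructing $\theta^*$ and exhibiting the shape of $D^{(1)}(\theta^*)$:\n\begin{verbatim}\nn <- 16\nfstar <- function(x) ifelse(x <= 1\/2, x, 2 * x)\ntheta.star <- fstar((1:n) \/ n)\nd1 <- diff(theta.star)  # D^(1); diff(theta.star, differences = r) gives D^(r)\ntable(round(d1, 10))    # a = 1\/n (7 times), b = 1\/2 + 2\/n (once), c = 2\/n (7 times)\n\end{verbatim}\nThe middle constant piece of $D^{(1)}(\theta^*)$ indeed has length one, no matter how large $n$ is.\n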
\n\n\nAnother point worth reiterating is that, to the best of our knowledge, the results for Trend Filtering say that the tuning parameter needs to be set differently depending on whether one wants to obtain the slow rates or the fast rates; see~\cite{guntuboyina2020adaptive} and~\cite{van2019prediction}. In the case of Dyadic CART, both Theorem~\ref{thm:slowrate} and Theorem~\ref{thm:fastrate} hold under the choice of the same tuning parameter. Moreover, Theorem~\ref{thm:fastrate} says that fast rates for Dyadic CART of order $r - 1$ hold whenever the true signal is piecewise polynomial with few pieces and each polynomial can have degree ranging from $0$ to $r - 1.$ This is because for any vector $\theta \in \R^n$, the complexity measure $k^{(r)}_{\all}(\theta)$ is nonincreasing in $r.$ This means that if we use Dyadic CART of order $2$, fast rates are guaranteed for piecewise quadratic, piecewise linear and piecewise constant signals. However, the same is not true for Trend Filtering, where the order $r$ needs to be set to $3, 2$ or $1$ depending on whether we want fast rates for piecewise quadratic, linear or constant signals respectively. Against these several advantages, there seems to be only one disadvantage of Dyadic CART versus Trend Filtering: the presence of extra log factors in the risk bounds. All in all, our results indicate that univariate Dyadic CART may enjoy certain advantages over Trend Filtering in both the statistical and computational aspects. \n\n\n\begin{remark}\n\tIt should be remarked here that wavelet shrinkage methods with appropriate tuning can also attain the slow and fast rates, as shown in~\cite{donoho1998minimax},~\cite{donoho1994ideal}. Wavelet shrinkage can also be computed in $O(n)$ time. However, as is well known, wavelet methods require $n$ to be a power of $2$ and often there are boundary effects that need to be addressed for the fitted function. Univariate Dyadic CART seems related to wavelet shrinkage, as both arise from dyadic thinking, but they are different estimators. The way Dyadic CART has been defined in this article, $n$ does not need to be a power of $2$ and no boundary effects appear for Dyadic CART. In any case, our point here is not to compare Dyadic CART with wavelet shrinkage but to demonstrate the efficacy of Dyadic CART in fitting piecewise polynomials. \n\end{remark}\n\n\n\section{Simulations}\label{sec:sim}\nIn this section, we present numerical evidence for our theoretical results. In our simulations, we generated data from ground truths $\theta^*$ of various sizes and performed Monte Carlo repetitions to estimate the MSE. We also fitted a least squares line to $\log$ MSE versus $\log N$; the slope of this line gives an indication of the exponent of $N$ in the rate of convergence of the MSE to $0$. To set the tuning parameter $\lambda$, we did not perform a very systematic optimization; rather, we made choices which gave reasonable results. To implement ORT and Dyadic CART, we wrote our own code in R. Our codes are very basic and it is likely that the run times can be sped up by more efficient implementations. 
All our simulations are completely reproducible and our codes are available on request.\n\n\subsection{Simulation for two dimensional ORT}\nWe take a ground truth $\theta^*$ of size $n \times n$ which is piecewise constant on a nonhierarchical partition as shown in Figure~\ref{fig2}.\n\begin{figure}\n\t\begin{center}\n\t\t\includegraphics[scale=0.75]{ORT.pdf} \n\t\n\t\end{center}\n\t\caption{\textit{Figure depicts the ground truth matrix which is piecewise constant on a non hierarchical partition.}} \n\t\label{fig2}\n\end{figure}\nWe varied $n$ in increments of $5$, going from $30$ to $50$. We generated data $y$ by adding mean $0$ Gaussian noise with standard deviation $0.1$ and then applied ORT (of order $0$) to $y.$ We replicated this experiment $50$ times for each $n.$ We set the tuning parameter $\lambda$ to increase with $n$ from $0.1$ to $0.18$ in increments of $0.02.$\n\n\nFor the sake of comparison, we also implemented the constrained version of two dimensional total variation denoising with ideal \ntuning (i.e., setting the constraint equal to the actual total variation of the truth $\theta^*$). In the low $\sigma$ limit, it can be proved that this ideally tuned constrained \nestimator is better than the corresponding penalized estimator for \nevery deterministic choice of the tuning parameter.\nThis follows from the results of~\cite{oymak2013sharp} as described in Section 5.2 in~\cite{guntuboyina2020adaptive}. In this sense, we are comparing with the best possible version of TVD. \n\n\n\begin{figure}\n\t\begin{center}\n\t\t\includegraphics[scale=0.25]{ORT2d} \n\t\end{center}\n\t\caption{\textit{This is a $\log$ MSE vs $\log N$ plot for ORT (in green) and ideal TVD (in blue). The slope for ideal TVD comes out to be $-0.67$ and the slope for ORT comes out to be $-0.9$.}} \n\t\label{fig3}\n\end{figure}\n\nFrom Figure~\ref{fig3} we see that ORT outperforms the ideal TVD. The slope of the least squares line is close to $-1$ for ORT, which agrees with the $\log N\/N$ rate predicted by our theory for ORT. The slope of the least squares line for ideal TVD also agrees with the $\tilde{O}(N^{-3\/4})$ rate predicted by Theorem $2.3$ in~\cite{chatterjee2019new}. It should be mentioned here that we did not take $n$ larger than $50$ because our R implementation of ORT is slow (one run with $n = 50$ takes $8$ minutes). Recall that the computational complexity of ORT is $O(n^5)$ in this setting. We believe that more efficient implementations might make ORT practically computable for $n$ in the hundreds. We also chose the standard deviation $\sigma = 0.1$ because for larger $\sigma$ one needs a larger sample size to see the rate of convergence of the MSE.\n\n\n\n\subsection{Simulation for two dimensional Dyadic CART}\nHere we compare the performance of Dyadic CART of order $0$ and the constrained version of two dimensional total variation denoising with ideal tuning. \n\n\subsubsection{Two piece matrix}\nWe consider the simple piecewise constant matrix $\theta^* \in \R^{n \times n}$ where $\theta^*(i,j) = \I\{j \leq n\/2\}.$ Hence $\theta^*$ just takes two distinct values and the true rectangular partition is dyadic. Thus, this is expected to be a favourable case for Dyadic CART. We generated data by adding a matrix of independent standard normals. We took a sequence of $n$ geometrically increasing from $2^4$ to $2^9$. 
We chose $\\lambda = \\ell$ when $n = 2^{\\ell}.$\n\n\n\nOur simulations (see Figure~\\ref{fig4}) suggest that in this case, the ideally tuned constrained TVD estimator is outperformed by Dyadic CART in terms of statistical risk. The least squares slope for TVD comes out to be $-0.71$. Theoretically, it is known that the rate of convergence for ideal constrained TVD is actually $N^{-3\/4}$; see Theorem $2.3$ in~\\cite{chatterjee2019new}. The least squares slope for Dyadic CART comes out to be $-1.26$. Of course, we expect the actual rate of convergence for Dyadic CART to be $\\tilde{O}(N^{-1})$ in this case. The constrained TVD estimator was computed by the convex optimization software MOSEK (via the R package\nRmosek). For $n = 2^9 = 512$, Dyadic CART was much faster to compute and there was a significant difference in the runtimes. Actually, we did not take $n$ larger than $2^9$ because the RMosek implementation of TVD was becoming too slow. However, with our implementation of Dyadic Cart, we could run it for sizes as large as $2^{15} \\times 2^{15}.$ \n\n\n\n\\subsubsection{Smooth Matrix}\nWe considered the matrix $\\theta^* \\in \\R^{n \\times n}$ matrix where $\\theta^*(i,j) = \\sin(i\\:\\pi\/n) \\sin(j\\:\\pi\/n).$ We generated data by adding a matrix of independent standard normals. We again took a sequence of $n$ geometrically increasing from $2^4$ to $2^9$ and chose $\\lambda = \\ell$ when $n = 2^{\\ell}.$ The ground truth here is a smooth matrix which is expected to favour TVD more than Dyadic CART. In this case, we saw (see Figure~\\ref{fig4}) that the slopes of the least squares line came out to be around $-0.55$ for both Dyadic CART and TVD. Recall that $\\tilde{O}(N^{-0.5})$ rate is the minimax rate for bounded variation functions. The ideally tuned TVD did have a slightly lower MSE than Dyadic CART for our choice of $\\lambda$ in this example. \n\n\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.25]{2d2p.pdf}\n\t\t\\includegraphics[scale=0.25]{2dsineplot.pdf} \n\t\\end{center}\n\t\\caption{\\textit{The two figures are $\\log$ MSE vs $\\log N$ plot for ideal TVD (in blue) and Dyadic CART (in green). For the figure on the left, the ground truth is piecewise constant with two pieces. The slopes came out to be $-0.71$ and $-1.23$ for ideal TVD and Dyadic CART respectively. For the figure on the right, the ground truth is a smooth bump function. The slopes came out to be $-0.55$ and $-0.56$ for ideal TVD and Dyadic CART respectively.}} \n\t\\label{fig4}\n\\end{figure}\n\n\n\n\n\n\n\n\\subsection{Simulation for univariate Dyadic Cart}\nHere we compare the performance of univariate Dyadic CART of order $1$ and the constrained version of Trend Filtering (which fits piecewise linear functions) with ideal tuning. We consider a piecewise linear function $f$ given by\n\\begin{equation*}\n\tf(x) = -44 \\max(0,x - 0.3) + 48 \\max(0,x - 0.55) -56 \\max(0,x - 0.8) + 0.28 x.\n\\end{equation*}\nA similar function was considered in~\\cite{guntuboyina2020adaptive} where the knots were at dyadic points $0.25,0.5,0.75$ respectively. We intentionally changed the knot points which makes the problem harder for Dyadic CART. We considered the ground truth $\\theta^{*}$ to be evaluations of $f$ on a grid in $[0,1]$ with spacing $1\/N.$ We then added standard Gaussian noise to generate data. 
\begin{figure}
	\begin{center}
		\includegraphics[scale=0.25]{1dplplot.pdf}
		\includegraphics[scale=0.25]{1dplot.pdf}
	\end{center}
	\caption{\textit{The figure on the left is a $\log$ MSE vs $\log N$ (base $2$) plot for ideal Trend Filtering (in blue) and Dyadic CART (in green). The slopes came out to be $-0.65$ and $-0.70$ for ideal Trend Filtering and Dyadic CART respectively. The figure on the right is an instance of our simulation with sample size $N = 256.$ The red piecewise linear curve is the ground truth. The green curve is the ideal Trend Filtering fit and the orange curve is the Dyadic CART fit with $\lambda = 8$.}}
	\label{fig5}
\end{figure}

The slopes of the least squares lines came out to be $-0.65$ and $-0.70$ for Trend Filtering and Dyadic CART respectively (see Figure~\ref{fig5}). These slopes are somewhat larger (less negative) than the exponent in the theoretically expected rate $\tilde{O}(N^{-1})$ for both estimators. This could be because of the inherent log factors. We observed that ideal Trend Filtering indeed has a lower MSE than Dyadic CART. Since the knots are at nondyadic points, fits from Dyadic CART are forced to place several knots near the true knot points. This effect is more pronounced for small sample sizes. For large sample sizes, however, both estimators give high quality fits. In exchange for a slightly worse fit than ideal Trend Filtering, the advantage of Dyadic CART is that it can be computed very fast. We implemented Trend Filtering using Rmosek and again saw a significant difference in running times when $N$ is large. The reason we did not take the sample size larger than $2^{12}$ is again that the Rmosek implementation of Trend Filtering became too slow.

\begin{remark}
	Recall that it is not necessary for the sample size to be a power of $2$ for defining and implementing Dyadic CART. We just need to adopt a convention for implementing a dyadic split of a rectangle. In our simulations, we have taken $N$ to be a power of two because it makes the code less messy. Also, we have compared our estimators to ideal TVD/Trend Filtering. In practice, both estimators would be cross validated and one would need to make the comparison in that setting as well.
\end{remark}

\section{Discussion}\label{Sec:discuss}
Here we discuss a few naturally related matters.

\subsection{Implications for Shape Constrained Function Classes}
Since our oracle risk bound in Theorem~\ref{thm:adapt} holds for all truths $\theta^*$, it is potentially applicable to other function classes as well. A similar oracle risk bound was used by~\cite{donoho1997cart} to demonstrate minimax rate optimality of Dyadic CART for some anisotropically smooth function classes. Since our focus here is on nonsmooth function classes, we now discuss some implications of our results for shape constrained function classes, which have been of recent interest.

Consider the class of bounded monotone signals on $L_{d,n}$ defined as
\begin{equation*}
	\mathcal{M}_{d,n} = \{\theta \in [0,1]^{L_{d,n}}: \theta[i_1,\dots,i_d] \leq \theta[j_1,\dots,j_d] \:\:\text{whenever}\:\: i_1 \leq j_1,\dots,i_d \leq j_d\}.
\end{equation*}
Estimating signals within this class falls under the purview of isotonic regression.
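As a quick empirical sanity check of a fact used just below (our own illustration in R, with $\TV$ taken to be the sum of absolute differences of adjacent entries along each coordinate, and an illustrative monotone matrix):
\begin{verbatim}
# For d = 2, the TV of a monotone matrix with entries in [0,1]
# grows like n^{d-1} = n.
n <- 64
theta <- outer(1:n, 1:n, function(i, j) (i + j) / (2 * n))
tv <- sum(abs(diff(theta))) + sum(abs(t(diff(t(theta)))))
c(tv, 2 * n)    # tv is bounded by d * n^{d-1} = 2n here
\end{verbatim}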
It is known that the LSE is near minimax rate optimal over $\mathcal{M}_{d,n}$ with a $\tilde{O}(n^{-1/d})$ rate of convergence for $d \geq 2$ and an $O(n^{-2/3})$ rate for $d = 1$; see, e.g.,~\cite{chatterjee2015risk},~\cite{chatterjee2018matrix},~\cite{han2019isotonic}. It can be checked that Theorem~\ref{thm:dcadap} actually implies that Dyadic CART also achieves the $\tilde{O}(n^{-1/d})$ rate for signals in $\mathcal{M}_{d,n}$, as the total variation of such signals grows like $O(n^{d - 1}).$ Thus, Dyadic CART is a near minimax rate optimal estimator for multivariate isotonic regression as well.

Let us now consider univariate convex regression. It is known that the LSE is minimax rate optimal, attaining the $\tilde{O}(n^{-4/5})$ rate, over convex functions with bounded entries; see, e.g.,~\cite{GSvex},~\cite{chatterjee2016improved}. It is also known that the LSE attains the $\tilde{O}(k/n)$ rate if the true signal is piecewise linear in addition to being convex. Theorem~\ref{thm:slowrate} and Theorem~\ref{thm:fastrate} imply that both these facts also hold for univariate Dyadic CART of order $1$. An advantage of Dyadic CART over the convex regression LSE is that Dyadic CART is computable in $O(n)$ time, whereas such a fast algorithm is not yet known for the convex regression LSE. Of course, the disadvantage of Dyadic CART here is the presence of a tuning parameter.

It is an interesting question whether Dyadic CART or even ORT can attain near minimax optimal rates for other shape constrained classes such as multivariate convex functions. More generally, going beyond shape constraints, we believe that Dyadic CART of some appropriate order $r$ might be minimax rate optimal over multivariate function classes of bounded variation of higher orders. We leave this as a future avenue of research.

\subsection{Arbitrary Design}
Our estimators have been designed for the case when the design points fall on a lattice. It is practically important to have methods which can be implemented for arbitrary data. Optimal dyadic trees which can be implemented for arbitrary data in the context of classification have already been studied in~\cite{blanchard2007optimal},~\cite{scott2006minimax}. We now describe a similar way to define an Optimal Regression Tree (ORT) fitting piecewise constant functions for arbitrary data.

Suppose we observe pairs $(x_1,y_1),\dots,(x_N,y_N)$ where we assume all the design points lie in the unit cube $[0, 1]^d$, after scaling if necessary. We can divide $[0,1]^d$ into small cubes of side length $1/L$ so that there is a grid of $L^d$ cubes. We can now consider the space of all rectangular partitions where each rectangle is a union of these small cubes. Thus, each partition is necessarily a coarsening of the grid. Call this space of partitions $\mathcal{P}_{L}.$ Define $\mathcal{F}_{L}$ to be the space of functions which are piecewise constant on some partition in $\mathcal{P}_{L}.$ We can now define the optimization problem
$$\hat{f} = \argmin_{f \in \mathcal{F}_{L}} \big(\sum_{i = 1}^{N} (y_i - f(x_i))^2 + \lambda |f|\big)$$
where $|f|$ is the number of constant pieces of $f.$

One can check that the above optimization problem can be rewritten as an optimization problem over the space of partitions $\mathcal{P}_{L}$.
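A minimal R sketch of the binning step, assuming the design points have already been scaled to $[0,1]^d$ (the function name is ours, for illustration):
\begin{verbatim}
# Bin arbitrary design points onto an L^d grid of cells, keeping the
# per-cell sums and counts needed by the dynamic program; cells with
# count 0 are the possibly-empty cubes mentioned below.
bin_to_grid <- function(x, y, L) {
  d <- ncol(x)
  idx <- pmin(floor(x * L) + 1, L)    # cell index along each coordinate
  cell <- idx[, 1]
  if (d > 1) for (j in 2:d) cell <- (cell - 1) * L + idx[, j]
  sums <- rep(0, L^d); counts <- rep(0L, L^d)
  for (i in seq_along(y)) {
    sums[cell[i]] <- sums[cell[i]] + y[i]
    counts[cell[i]] <- counts[cell[i]] + 1L
  }
  list(sums = sums, counts = counts)
}
\end{verbatim}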
The only difference from the lattice design setup is that some of the small cubes here might be empty (depending on the value of $L$ and the sample size $N$), but a bottom up dynamic program can still be carried out. In this method, $L$ is an additional tuning parameter which represents the resolution at which we are willing to estimate $f^*$. Theoretical analysis needs to be done to ascertain how $L$ should depend on the sample size. The computational complexity would scale like $O(L^{2d + 1})$, so this method can still be computationally feasible for low to moderate $d.$

This method is very natural and is likely to be known to experts, but we are not aware of an exact reference. As in~\cite{bertsimas2017optimal}, it might be interesting to compare the performance of this method to the usual CART on real and simulated datasets with a few covariates. If the method performs well, theoretical analysis also needs to be done under the random design setup. We leave this for future work.

\subsection{Dependent Errors}\label{sec:dep}
The proofs of our results can be extended without much effort to the case when $Z \sim N(0,\Sigma)$ where $\Sigma$ is an $N \times N$ covariance matrix with $\ell_2$-operator norm (i.e., maximum eigenvalue) $\|\Sigma\|_{{\rm op}}.$ The only change is that an extra multiplicative factor $\|\Sigma\|_{{\rm op}}$ appears in front of all our risk bounds. Thus, if the maximum eigenvalue of $\Sigma$ remains bounded then all our rates of convergence remain the same.

For a general covariance matrix, the only change would be in the proof of Theorem $8.1$, which in turn relies crucially on Lemma $9.1$. Lemma $9.1$ contains two results: one gives a bound on the expectation and the other gives a tail bound (using the Gaussian concentration inequality for Lipschitz functions) for the random variable
$$\sup_{v \in S, \, v \neq \theta} \langle Z, \frac{v - \theta}{\|v - \theta\|} \rangle$$
where $\theta$ is an arbitrary vector, $S$ is a fixed subspace of $\R^{N}$ and $Z \sim N(0,I).$

For a general covariance matrix $\Sigma$ we now have to give an expectation bound and a tail probability bound for a random variable of the form
$$\sup_{v \in S, \, v \neq \theta} \langle \Sigma^{1/2} Z, \frac{v - \theta}{\|v - \theta\|} \rangle$$
where $Z \sim N(0,I).$ Lemma~\ref{lem:0} has been written for a general positive semidefinite matrix $\Sigma$ and thus gives the required expectation and tail bounds. The rest of the proof is similar, and the reader can check that an extra multiplicative factor $\|\Sigma\|_{{\rm op}}$ results.
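Schematically, the reduction rests on the elementary facts (recorded here for the reader's convenience; this is a sketch, not an excerpt of Lemma~\ref{lem:0})
\begin{equation*}
\langle \Sigma^{1/2} Z, u \rangle = \langle Z, \Sigma^{1/2} u \rangle, \qquad \|\Sigma^{1/2} u\| \leq \|\Sigma\|_{{\rm op}}^{1/2}\, \|u\|,
\end{equation*}
so that $z \mapsto \sup_{u} \langle \Sigma^{1/2} z, u \rangle$, with $u$ ranging over unit vectors of a fixed subspace, is Lipschitz with constant at most $\|\Sigma\|_{{\rm op}}^{1/2}$; squaring this supremum is what produces the factor $\|\Sigma\|_{{\rm op}}$ in the risk bounds.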
\section{Proofs}\label{Sec:proofs}
\subsection{Proof of Lemma~\ref{lem:compu}}\label{sec:algos}
\subsubsection{Case: $r = 0$}
Let us first consider the case $r = 0.$ Throughout this proof, the (multiplicative) constant involved in $O(\cdot)$ is assumed to be absolute, i.e., it does not depend on $r$ or $d$.

We describe the algorithm to compute the ORT estimator denoted by $\hat{\theta}^{(0)}_{\hier,d,n}$. Note that for any fixed $d \geq 1$ we need to compute the minimum
$$OPT(L_{d, n}) \coloneqq \min_{\Pi \in \mathcal{P}_{\hier,d,n}} \big(\|y - \Pi y\|^2 + \lambda |\Pi|\big)$$
and find the optimal partition. Here $\Pi y$ denotes the orthogonal projection of $y$ onto the subspace $S^{(0)}(\Pi).$ In this case, $\Pi y$ is a piecewise constant array taking the mean value of the entries of $y_{R}$ within every rectangle $R$ constituting $\Pi.$

Now, for any given rectangle $R \subset L_{d, n}$ we can define the corresponding minimum restricted to $R$:
\begin{equation}
OPT(R) = \min_{\Pi \in \mathcal{P}_{\hier,R}} \big(\|y_{R} - \Pi y_R\|^2 + \lambda |\Pi|\big)
\end{equation}
where we now optimize only over the class $\mathcal{P}_{\hier,R}$ of hierarchical rectangular partitions of the rectangle $R$, and $|\Pi|$ denotes the number of rectangles constituting the partition $\Pi.$

A key point to note here is that, due to the additive nature of the objective function over any partition of $R$ into two disjoint rectangles, we have the following {\em dynamic programming principle} for computing $OPT(R)$:
\begin{equation}
\label{eq:dynamic}
OPT(R) = \min\Big( \min_{(R_1, R_2)} \big[OPT(R_1) + OPT(R_2)\big],\: \|y_R - \overline{y}_R\|^2 + \lambda \Big)
\end{equation}
where $(R_1, R_2)$ ranges over all possible {\em nontrivial} partitions of $R$ into two disjoint rectangles and $\overline{y}_R$ denotes the array whose entries all equal the mean of the entries of $y_R$. Consequently, in order to compute $OPT(R)$ and obtain the optimal hierarchical partition, the first step is to obtain the corresponding first split of $R.$ Let us denote this first split by $SPLIT(R).$

Let us now make some observations. For any rectangle $R$, the number of splits possible is at most $dn.$ This is because in each dimension, there are at most $n$ possible splits. Any split of $R$ creates two disjoint subrectangles $R_1,R_2.$ Suppose we know $OPT(R_1)$ and $OPT(R_2)$ for the $R_1,R_2$ arising out of each possible split. Then, to compute $SPLIT(R)$ we have to compute the minimum of $OPT(R_1) + OPT(R_2)$ over each possible split, as well as the number $\|y_R - \overline{y}_R\|^2 + \lambda$, which corresponds to not splitting $R$ at all. Thus, we need to compute the minimum of at most $nd + 1$ numbers.

The total number of distinct rectangles of $L_{d,n}$ is at most $n^{2d} = N^2.$ Any rectangle $R$ has dimensions $n_1 \times n_2 \times \dots \times n_d.$ Let us denote the number $n_1 + \dots + n_d$ by $Size(R).$ We are now ready to describe our main subroutine.
\subsubsection{Main Subroutine}
For each rectangle $R$, the goal is to store $SPLIT(R)$ and $OPT(R).$ We do this inductively on $Size(R).$ We make a single pass through all distinct rectangles $R \subset L_{d,n}$, in increasing order of $Size(R)$. Thus, we first start with all $1\times1\times\cdots\times1$ rectangles, of size equal to $d$. Then we visit rectangles of size $d + 1$, $d + 2$, all the way up to $nd.$ Fixing the size, we can choose some arbitrary order in which to visit the rectangles.

For $1\times1\times\dots\times1$ rectangles, computing $SPLIT(R)$ and $OPT(R)$ is trivial. Consider a generic step where we are visiting some rectangle $R.$ Note that we have already computed $OPT(R^{'})$ for all rectangles $R^{'}$ with $Size(R^{'}) < Size(R).$ Any possible split of $R$ generates two rectangles $R_1,R_2$ of strictly smaller size. Thus, it is possible to compute $OPT(R_1) + OPT(R_2)$ and store it. We do this for each possible split to get a list of at most $nd$ numbers. We also compute $\|y_R - \overline{y}_R\|^2 + \lambda$ (described later) and add this number to the list. We now take the minimum of these numbers. This way, we obtain $OPT(R)$ and also $SPLIT(R).$

The number of basic operations needed per rectangle here is $O(nd)$. Since there are at most $N^2$ rectangles in all, the total computational complexity of the whole inductive scheme scales like $O(N^2\:n\:d)$.

To compute $\|y_R - \overline{y}_R\|^2$ for every rectangle $R$, we can again induct on size in increasing order. Define $SUM(R)$ to be the sum of the entries of $y_R$ and $SUMSQ(R)$ to be the sum of squares of the entries of $y_R.$ One can store $SUM(R)$ and $SUMSQ(R)$ bottom up. To compute these quantities for any rectangle $R$ it suffices to consider any nontrivial partition of $R$ into two rectangles $R_1,R_2$ and add the quantities previously stored for $R_1$ and $R_2.$ Therefore, this updating step requires a constant number of basic operations per rectangle $R.$ Once we have computed $SUM(R)$ and $SUMSQ(R)$ we can calculate $\|y_R - \overline{y}_R\|^2 = SUMSQ(R) - SUM(R)^2/|R|$. Thus, this inductive subscheme requires lower order computation.

Once we finish the above inductive scheme, we have stored $SPLIT(R)$ for every rectangle $R.$ We can now proceed top down, starting from the biggest rectangle, which is $L_{d,n}$ itself. We can recreate the full optimal partition by using $SPLIT(R)$ to split the rectangles at every step. Once the full optimal partition is obtained, computing $\hat{\theta}$ just involves computing means within the rectangles constituting the partition. It can be checked that this step requires lower order computation as well.
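As an illustration of the recursion~\eqref{eq:dynamic}, here is a self-contained R sketch of the one dimensional case, where rectangles are subintervals $[a,b]$ of $[n]$ (an illustrative instance, not our actual implementation):
\begin{verbatim}
# Bottom-up dynamic program for 1D ORT of order 0: OPT[a, b] is the
# penalized cost of the best partition of y[a:b] into constant pieces.
ort_1d <- function(y, lambda) {
  n <- length(y)
  S <- c(0, cumsum(y)); S2 <- c(0, cumsum(y^2))  # prefix sums
  sse <- function(a, b)                          # ||y_R - mean||^2
    (S2[b + 1] - S2[a]) - (S[b + 1] - S[a])^2 / (b - a + 1)
  OPT <- matrix(Inf, n, n); SPLIT <- matrix(0L, n, n)
  for (len in 1:n) for (a in 1:(n - len + 1)) {
    b <- a + len - 1
    best <- sse(a, b) + lambda                   # no split: one mean
    if (len > 1) for (m in a:(b - 1)) {          # all nontrivial splits
      v <- OPT[a, m] + OPT[m + 1, b]
      if (v < best) { best <- v; SPLIT[a, b] <- m }
    }
    OPT[a, b] <- best
  }
  list(OPT = OPT, SPLIT = SPLIT)                 # SPLIT = 0 means no split
}
\end{verbatim}
Backtracking through \texttt{SPLIT} from $(1,n)$ recovers the optimal partition; the $d$ dimensional subroutine above is the same scheme with intervals replaced by rectangles and at most $nd$ candidate splits per rectangle.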
\begin{remark}
	The same algorithm can be used to compute Dyadic CART in all dimensions as well. In this case, the number of possible splits per rectangle is $d$. Also, in the inductive scheme, we only need to visit rectangles which are reachable from $L_{d,n}$ by repeated dyadic splits. Such rectangles are necessarily products of dyadic intervals, where dyadic intervals are interval subsets of $[n]$ which are reachable by successive dyadic splits of $[n].$ There are at most $2n$ dyadic intervals of $[n]$ and thus we need to visit at most $(2n)^d = 2^d\:N$ rectangles.
\end{remark}

\subsubsection{Case: $r \geq 1$}
Let us fix a subspace $S \subset \R^{L_{d,n}}$ and a set of basis vectors $B = (b_1:b_2:\dots:b_L)$ for $S.$ Here, we think of each $b_i$ as a column vector in $\R^N.$ For instance, $B$ may consist of all (discrete) monomials of degree at most $r$; however, the algorithm works for any choice of $S$ and $B.$ We abuse notation and also denote by $B$ the matrix obtained by stacking together the columns of $B.$ Thus $B$ is an $N \times L$ matrix; each row corresponds to an entry of the lattice $L_{d,n}.$ Also, for any rectangle $R \subset L_{d,n}$ we let $B_R$ denote the matrix of size $|R| \times L$ consisting only of the rows corresponding to the entries in the rectangle $R.$ Finally, let us denote the orthogonal projection matrix $B_R (B_R^T B_R)^{-1} B_R^T$ by $O_{B_R}.$

We can now again run the inductive scheme described in the last section. The important point here is that when computing $SPLIT(R)$ for each rectangle $R$, one needs to compute $y_R^{T} (I - O_{B_R}) y_R$. The dominating task is to compute $y_R^{T} O_{B_R} y_R.$ We can keep storing $B_R^T B_R$ as follows. Note that $B_R^T B_R = B_{R_1}^T B_{R_1} + B_{R_2}^T B_{R_2}$ for any partition of $R$ into two subrectangles $R_1,R_2.$ Thus we can compute $B_R^T B_R$ inductively by adding two matrices, which requires at most $O(L^2)$ elementary operations. The major step is computing the inverse $(B_R^T B_R)^{-1}$, which is at most $O(L^3)$ work. The next step is to compute $B_R^T y_R$. Again we can compute this as $(B_{R_1}^T y_{R_1}) + (B_{R_2}^T y_{R_2})$, which is $O(L)$ work. Finally, we need to postmultiply $(B_R^T B_R)^{-1}$ by $B_R^T y_R$ and premultiply by $(B_R^T y_R)^T.$ This needs at most $O(L^2)$ work.
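In code, this bookkeeping might be organized as follows (an R sketch; the function names are ours and we assume the Gram matrices are invertible):
\begin{verbatim}
# Merge step: Gram matrix and basis-response products are additive
# over a split of R into R1 and R2.
gram_merge <- function(G1, G2) G1 + G2  # B_R^T B_R via B_R1, B_R2
bty_merge  <- function(v1, v2) v1 + v2  # B_R^T y_R likewise
# Residual sum of squares y_R^T (I - O_{B_R}) y_R, given G = B_R^T B_R,
# v = B_R^T y_R and yss = sum(y_R^2); solve() is the O(L^3) step.
proj_rss <- function(G, v, yss) yss - drop(crossprod(v, solve(G, v)))
\end{verbatim}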
\begin{remark}
	Multiplying $A_{m \times n}$ and $B_{n \times p}$ is actually $o(mnp)$ in general if we invoke Strassen's algorithm. Here we use the standard $O(mnp)$ complexity for the sake of concreteness.
\end{remark}

Per visit to a rectangle $R$, we also need to compute the minimum of $nd + 1$ numbers, which is $O(nd)$ work. Finally, we need to visit at most $N^2$ rectangles in total. Thus the total computational complexity of ORT is $O[N^2\:(nd + L^3)].$ A similar argument gives that the computational complexity of Dyadic CART of order $r \geq 1$ is $O[2^d\:N\:(d + L^3)].$ Note that if we use the polynomial basis, then $L = O(d^r).$ \qed

\subsection{Proof of Theorem~\ref{thm:adapt}}
We will actually prove a more general result which will imply Theorem~\ref{thm:adapt}. Let $\mathcal{S}$ be any finite collection of subspaces of $\R^N.$ Recall that for a generic subspace $S\in \mathcal{S}$, we denote its dimension by $Dim(S)$ and its orthogonal projection matrix by $O_{S}.$ Also, let $N_k(\mathcal{S}) = |\{S \in \mathcal{S}: Dim(S) = k\}|.$ Suppose there exists a constant $c > 0$ such that for each $k \in [N]$ we have
\begin{equation}\label{eq:cardass}
N_k(\mathcal{S}) \leq N^{ck}.
\end{equation}
Let $\Theta = \cup_{S \in \mathcal{S}} S$ be the parameter space. Let $y = \theta^* + \sigma Z$ be our observation, where $\theta^* \in \R^N$ is the underlying mean vector and $Z \sim N(0,I).$ In this context, recall the definition of $k_{\mathcal{S}}(\theta)$ in~\eqref{eq:compl}. For a given tuning parameter $\lambda \geq 0$ we now define the usual penalized likelihood estimator $\hat{\theta}_{\lambda}$:
\begin{equation*}
	\hat{\theta}_{\lambda} = \argmin_{\theta \in \Theta} \big(\|y - \theta\|^2 + \lambda \:k_{\mathcal{S}}(\theta)\big).
\end{equation*}

\begin{theorem}[Union of Subspaces]\label{thm:oracleriskbd}
	Under the setting described above, for any $0 < \delta < 1$ let us set $$\lambda \geq C \frac{\sigma^2\:\log N}{\delta}$$ for a constant $C$ which depends only on $c$. Then we have the following risk bound for $\hat{\theta}_{\lambda}$:
	\begin{equation*}
		\E \|\hat{\theta}_{\lambda} - \theta^*\|^2 \leq \inf_{\theta \in \Theta} \big[\frac{(1 + \delta)}{(1 - \delta)}\:\|\theta - \theta^*\|^2 + \frac{\lambda}{1 - \delta}\:k_{\mathcal{S}}(\theta)\big] + C \frac{\sigma^2}{\delta\:(1 - \delta)}.
	\end{equation*}
\end{theorem}

Using Theorem~\ref{thm:oracleriskbd}, the proof of Theorem~\ref{thm:adapt} is now straightforward.
\begin{proof}[Proof of Theorem~\ref{thm:adapt}]
	We just have to verify the cardinality bound~\eqref{eq:cardass} for the collections of subspaces $\mathcal{S}^{(r)}_{a}$ for $a \in \{\rdp,\hier,\all\}.$ It is enough to verify it for $\mathcal{S}^{(r)}_{\all}$ because it contains the other two. Now $N_k(\mathcal{S}^{(r)}_{\all})$ is clearly at most the number of distinct rectangles in $L_{d,n}$ raised to the power $k.$ The number of distinct rectangles in $L_{d,n}$ is always upper bounded by $N^2.$ Thus the bound~\eqref{eq:cardass} holds with $c = 2.$
\end{proof}
We now give the proof of Theorem~\ref{thm:oracleriskbd}.
\begin{proof}
To avoid notational clutter, we drop the subscript $\mathcal S$ from $k_{\mathcal S}(\cdot)$ in this proof.
By definition, for any arbitrary $\theta \in \Theta$, we have
	\begin{equation*}
		\|y - \hat{\theta}\|^2 + \lambda \:k(\hat{\theta}) \leq \|y - \theta\|^2 + \lambda\: k(\theta).
	\end{equation*}
	Since $y = \theta^* + \sigma Z$ we can equivalently write
	\begin{equation*}
		\|\theta^* - \hat{\theta} + \sigma Z\|^2 + \lambda k(\hat{\theta}) \leq \|\theta^* - \theta + \sigma Z\|^2 + \lambda k(\theta).
	\end{equation*}
	We can further simplify the above inequality by expanding squares to obtain
	\begin{equation*}
		\|\theta^* - \hat{\theta}\|^2 \leq \|\theta^* - \theta\|^2 + \lambda k(\theta) + 2\:\langle \sigma\:Z,\hat{\theta} - \theta \rangle - \lambda k(\hat{\theta}).
	\end{equation*}
	Now using the inequality $2ab \leq \frac{2}{\delta} a^2 + \frac{\delta}{2} b^2$, valid for arbitrary positive numbers $a,b,\delta$, along with the elementary bound $\|\hat{\theta} - \theta\|^2 \leq 2\|\hat{\theta} - \theta^*\|^2 + 2\|\theta^* - \theta\|^2$, we have
	\begin{align*}
		2 \langle \sigma\:Z, \hat{\theta} - \theta \rangle &\leq \frac{2}{\delta} \big(\langle \sigma\:Z, \frac{\hat{\theta} - \theta}{\|\hat{\theta} - \theta\|} \rangle\big)^2 + \frac{\delta}{2} \|\hat{\theta} - \theta\|^2 \\&\leq\frac{2}{\delta} \big(\langle \sigma\:Z, \frac{\hat{\theta} - \theta}{\|\hat{\theta} - \theta\|} \rangle\big)^2 + \delta \|\hat{\theta} - \theta^*\|^2 + \delta \| \theta^* - \theta \|^2.
	\end{align*}
	Combining the last two displays, moving the term $\delta \|\hat{\theta} - \theta^*\|^2$ to the left hand side and dividing by $1 - \delta$, we conclude that for any $0 < \delta < 1$ and all $\theta \in \Theta$ the following pointwise upper bound on the squared error holds:
	\begin{equation}\label{eq:ptwiserisk}
	\|\theta^* - \hat{\theta}\|^2 \leq \frac{(1 + \delta)}{(1 - \delta)} \|\theta^* - \theta\|^2 + \frac{\lambda}{(1 - \delta)} k(\theta) + \frac{2}{\delta (1 - \delta)} \big(\langle \sigma\:Z, \frac{\hat{\theta} - \theta}{\|\hat{\theta} - \theta\|} \rangle\big)^2 - \frac{\lambda}{(1 - \delta)} k(\hat{\theta}).
	\end{equation}
	To get a risk bound, we now need to upper bound the random variable
	$$L(Z) = \frac{2}{\delta (1 - \delta)} \big(\langle \sigma\:Z, \frac{\hat{\theta} - \theta}{\|\hat{\theta} - \theta\|} \rangle\big)^2 - \frac{\lambda}{(1 - \delta)} k(\hat{\theta}).$$

	For the rest of this proof, $C_1,C_2,C_3$ denote constants whose precise values might change from line to line. We can write
	\begin{equation*}
		L(Z) \leq \max_{k \in [N]} \big[\frac{2}{\delta (1 - \delta)}\sigma^2 \sup_{S \in \mathcal{S}: Dim(S) = k} \sup_{v \in S} \big(\langle Z, \frac{v - \theta}{\|v - \theta\|} \rangle\big)^2 \:\:-\:\: \frac{\lambda}{(1 - \delta)} k\big].
	\end{equation*}
	Fix any number $t > 0$ and a subspace $S \in \mathcal{S}$ with $Dim(S) = k.$ Using~\eqref{eq:lem0} in Lemma~\ref{lem:0} (stated and proved in Section~\ref{Sec:appendix}) we obtain
	\begin{equation*}
		\P\big(\frac{2}{\delta (1 - \delta)}\sigma^2 \sup_{v \in S} \big(\langle Z, \frac{v - \theta}{\|v - \theta\|} \rangle\big)^2 \:\:-\:\: \frac{\lambda}{(1 - \delta)} k > t\big) \leq C_1 \exp\big(-C_2\big[\frac{t\:\delta\:(1 - \delta)}{\sigma^2} + \frac{\lambda\:k\:\delta}{\sigma^2}\big]\big).
	\end{equation*}
	Here we also use the fact that $\lambda$ is chosen to be bounded below by a constant.
Using a union bound argument we can now write
	\begin{align*}
		\P\big(&\frac{2}{\delta (1 - \delta)}\sigma^2 \sup_{S \in \mathcal{S}: Dim(S) = k} \sup_{v \in S} \big(\langle Z, \frac{v - \theta}{\|v - \theta\|} \rangle\big)^2 \:\:-\:\: \frac{\lambda}{(1 - \delta)} k > t\big) \\
		&\leq N_k(\mathcal{S})\:C_1 \exp\big(-C_2\big[\frac{t\:\delta\:(1 - \delta)}{\sigma^2} + \frac{\lambda\:k\:\delta}{\sigma^2}\big]\big).
	\end{align*}
	Now use the fact that $\log N_k(\mathcal{S}) \leq c\:k\:\log N$ and set $\lambda \geq C\:\sigma^2\:\log N\:\frac{1}{\delta}$ with $C$ large enough (depending on $c$) to obtain the further upper bound on the right hand side of the above display
	\begin{align*}
		C_1 \exp\big(-C_2\:\frac{t\:\delta\:(1 - \delta)}{\sigma^2} - k\:\log N\big).
	\end{align*}
	The above two displays along with another union bound argument then let us conclude
	\begin{equation*}
		\P(L(Z) > t) \leq \sum_{k = 1}^{N} C_1 \exp\big(-C_2\:\frac{t\:\delta\:(1 - \delta)}{\sigma^2} - k\:\log N\big).
	\end{equation*}
	Finally, integrating the above inequality over $t \geq 0$ gives the inequality
	\begin{equation*}
		\E L(Z) \leq C_3 \frac{\sigma^2}{\delta\:(1 - \delta)}.
	\end{equation*}
	This inequality coupled with~\eqref{eq:ptwiserisk} finishes the proof of the theorem. \qedhere
\end{proof}

\subsection{Proof of Proposition~\ref{prop:dyadicref}}\label{sec:dyaproof}

\begin{proof}[Proof of Proposition~\ref{prop:dyadicref}]
	For simplicity of exposition, we take $n = 2^k$ to be a power of $2$, although the same proof goes through for general $n$. A subinterval $I$ of $[n]$ is called a {\em dyadic interval of $[n]$} if it is of the form $[(a - 1)2^{s} + 1, a2^{s}]$ for some integers $0 \leq s \leq k$ and $1 \leq a \leq 2^{k - s}$. The following lemma characterizes recursive dyadic partitions of $L_{2,n}.$

	\begin{lemma}\label{lem:rdpcharac}
		A partition $\Pi \in \mathcal{P}_{\all,2,n}$ is a recursive dyadic partition if and only if each of its constituent rectangles is a product of dyadic intervals of $[n].$
	\end{lemma}

	\begin{proof}
		The only if part can be shown by induction on the successive steps of constructing a recursive dyadic partition. For the if part, let us argue as follows. Let $\Pi$ be an arbitrary partition such that every rectangle constituting it is a product of dyadic intervals. If $\Pi$ is not the trivial partition $\{L_{2,n}\}$ then we will argue that there exists a dyadic split of $L_{2,n}$ into $R_1,R_2$ such that $\Pi$ is a refinement of the partition composed of just $R_1,R_2.$ Equivalently, we will show that there is a coordinate $1 \leq j \leq 2$ such that performing a dyadic split on the $j$th coordinate does not \textit{cut} any rectangle of $\Pi$ in its interior. We argue by contradiction. Suppose there is no such coordinate. Take coordinate $1$ for example. Then there exists a rectangle $R_1 = [a_1,b_1] \times [a_2,b_2]$ of the partition $\Pi$ which is cut in its interior by a dyadic split on the first coordinate. This necessarily implies that $a_1 \leq n/2$ and $b_1 > n/2.$ Since $[a_1,b_1]$ is a dyadic interval, this then necessarily implies that $a_1 = 1$ and $b_1 = n$.
Repeating this argument for coordinate $2$, we see that there exists a rectangle $R_2$ of the partition $\Pi$ which is of the form $[a'_1,b'_1] \times [1,n].$ But the rectangles $R_1 = [1,n] \times [a_2,b_2]$ and $R_2 = [a'_1,b'_1] \times [1,n]$ always intersect, so they cannot both belong to the partition $\Pi.$ Thus, we have arrived at a contradiction.

		Now take a coordinate $j \in [2]$ for which a dyadic split along that coordinate does not cut any rectangle of $\Pi$ in its interior. Perform this dyadic split into $R_1,R_2.$ Now it is as if we have two separate problems of the same type within $R_1$ and $R_2.$ Again, whenever $\Pi$ restricted to $R_1$ or $R_2$ is not trivial, we can find coordinates along which to dyadically split within $R_1$ and within $R_2$ without cutting any rectangle of $\Pi$ in its interior. We can iterate this argument until we reach the partition $\Pi$ and stop. This shows that $\Pi$ is in fact a recursive dyadic partition, which finishes the proof.
	\end{proof}

	We also need the following observation, which is equivalent to equation~\eqref{eq:refine1} in the statement of Proposition~\ref{prop:dyadicref} for $d = 1.$

	\begin{lemma}\label{lem:1dpartbd}
		Given a partition $\Pi \in \mathcal{P}_{\all,1,n}$, there exists a refinement $\tilde{\Pi} \in \mathcal{P}_{\rdp,1,n}$ such that
		\begin{equation*}
			|\tilde{\Pi}| \leq C |\Pi| \log_2\big(\frac{n}{|\Pi|}\big)
		\end{equation*}
		where $C > 0$ is an absolute constant.
	\end{lemma}

	\begin{proof}[Proof of Lemma~\ref{lem:1dpartbd}]
		Let $\Pi$ be an arbitrary partition of $[n]$ and let us denote $|\Pi|$ by $k.$ Consider the binary tree associated with forming an RDP of $[n]$, and consider the following scheme to obtain an RDP of $[n]$ which is also a refinement of $\Pi.$ Grow the complete binary tree until the number of leaves first exceeds $k.$ At this stage, each node consists of $O(n/k)$ elements. After this, if at any stage a node of the binary tree (denoting some interval of $[n]$) is completely contained within some interval of $\Pi$, we do not split that node; otherwise, we dyadically split the node. Due to our splitting criterion, in each such round we split at most $k$ nodes, because the nodes represent disjoint intervals and each split node must contain an endpoint of some interval of $\Pi$. Also, the number of rounds of such splitting is at most $O(\log \frac{n}{k})$. When this scheme finishes, we clearly get a refinement of $\Pi$, say $\tilde{\Pi} \in \mathcal{P}_{\rdp,1,n}.$ These observations finish the proof.
	\end{proof}

	\begin{corollary}\label{cor:simp}
		Given any interval $I \subset [n]$, there exist at most $C \log_{2} n$ disjoint dyadic intervals partitioning $I$.
	\end{corollary}

	\begin{proof}
		Let $I = [a,b].$ Let $I_0 = [1,a - 1]$, which is empty if $a - 1 = 0$, and $I_1 = [b + 1,n]$, which is empty if $b + 1 > n.$ Then the intervals $I_0,I,I_1$ form a partition of $[n].$ Now use Lemma~\ref{lem:1dpartbd} to obtain a recursive dyadic partition refining it with at most $C \log_2 n$ intervals. Considering the intervals of this recursive dyadic partition lying within $I$ now finishes the proof.
	\end{proof}
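The decomposition underlying Corollary~\ref{cor:simp} is constructive; the following R sketch (our own illustration, assuming $n$ is a power of $2$) greedily peels off the largest admissible dyadic block from the left endpoint:
\begin{verbatim}
# Decompose [a, b] into O(log n) disjoint dyadic intervals of [n]:
# a dyadic block of length 2^s may start at a only if 2^s divides a - 1.
dyadic_decompose <- function(a, b) {
  pieces <- list()
  while (a <= b) {
    s <- 0
    while ((a - 1) %% 2^(s + 1) == 0 && a + 2^(s + 1) - 1 <= b) s <- s + 1
    pieces[[length(pieces) + 1]] <- c(a, a + 2^s - 1)
    a <- a + 2^s
  }
  pieces
}
# Example: dyadic_decompose(3, 8) returns the blocks [3,4] and [5,8].
\end{verbatim}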
	Now we are ready to finish the proof of Proposition~\ref{prop:dyadicref}. Take any rectangle $R = [a_1,b_1] \times [a_2,b_2]$ constituting the partition $\Pi.$ Using Corollary~\ref{cor:simp}, for each $i \in [2]$ write $[a_i,b_i]$ as a union of at most $C \log n$ disjoint dyadic intervals. As a result, we can view $R$ itself as a union of at most $(C\:\log n)^2$ disjoint rectangles, each of which is a product of dyadic intervals of $[n]$. Doing this for each $R$ then gives us a refinement of $\Pi$ into at most $k (C\:\log n)^2$ rectangles, each of which is a product of dyadic intervals. By Lemma~\ref{lem:rdpcharac} this refinement is guaranteed to be a recursive dyadic partition. This finishes the proof. \end{proof}

\begin{remark}\label{rem:dyadicint}
	A natural question is whether the following holds in every dimension $d > 2$ for every $d$ dimensional array $\theta$:
	\begin{equation*}
		k^{(r)}_{\rdp}(\theta) \leq C (\log n)^d k^{(r)}_{\all}(\theta).
	\end{equation*}
	Proposition~\ref{prop:dyadicref} shows the above is true when $d = 1,2$. Our proof technique breaks down for higher $d.$ The reason is that Lemma~\ref{lem:rdpcharac} is no longer true when $d > 2.$ For a counterexample, consider the case $d = 3$ and $n = 2.$ Consider the partition of $L_{3,2}$ consisting of the rectangles
	\begin{enumerate}
		\item $R_1 = \{1,2\} \times \{1\} \times \{2\}$
		\item $R_2 = \{2\} \times \{1,2\} \times \{1\}$
		\item $R_3 = \{1\} \times \{2\} \times \{1,2\}$
		\item $R_4 = \{1\} \times \{1\} \times \{1\}$
		\item $R_5 = \{2\} \times \{2\} \times \{2\}$.
	\end{enumerate}
	One can check that
	\begin{enumerate}
		\item the rectangles $R_1,\dots,R_5$ form a partition of $L_{3,2}$,
		\item each $R_i$ is a product of dyadic intervals of $[2]$,
		\item this partition cannot be a recursive dyadic partition, because the first dyadic split itself will necessarily cut one of the rectangles $R_1,R_2,R_3$ in its interior.
	\end{enumerate}
\end{remark}

\subsection{Proof of Theorem~\ref{thm:dcadap}}
In view of Theorem~\ref{thm:adapt}, we need to show that if $\theta \in \R^{L_{d,n}}$ has small total variation, then it can be approximated well by some $\tilde{\theta} \in \R^{L_{d,n}}$ which is piecewise constant on not too many axis aligned rectangles. To establish this, we need two intermediate results.

\begin{proposition}\label{prop:division}
	Let $\theta \in \R^{L_{d,n}}$ and $\delta > 0.$ Then there exists a recursive dyadic partition $\Pi_{\theta,\delta} = (R_1,\dots,R_k) \in \mathcal{P}_{\rdp,d,n}$ such that
	\newline
	a) $k = |\Pi_{\theta,\delta}| \leq 1 + \log_2 N \:\big(1 + \frac{\TV(\theta)}{\delta}\big)$, \newline
	b) $\TV(\theta_{R_i}) \leq \delta \:\:\:\:\forall i \in [k]$, \newline
	c) $\mathcal{A}(R_i) \leq 2 \:\:\:\:\forall i \in [k]$, \newline
	where $\mathcal{A}(R)$ denotes the aspect ratio of a generic rectangle $R.$
\end{proposition}

\begin{proof}
In order to prove Proposition~\ref{prop:division}, we first describe a general greedy partitioning scheme --- called the $(\TV,\delta)$ scheme --- which takes as input a positive number $\delta$ and outputs a partition satisfying properties~a), b) and c). A very similar procedure was used in~\cite{chatterjee2019new}.

\medskip

\noindent {\em Description of the $(\TV,\delta)$ scheme.} First, let us note that a recursive dyadic partition (RDP) of $L_{d,n}$ can be encoded via a binary tree and a labelling of the nonleaf vertices. This can be seen as follows.
Let the root represent the full set $L_{d,n}.$ If the first step of partitioning is done by dividing in half along coordinate $i$, then label the root vertex by $i \in [d].$ The two children of the root now represent the subsets of $L_{d,n}$ given by $[n]^{i - 1} \times [n/2] \times [n]^{d - i}$ and its complement. Depending upon the coordinate of the next split, these vertices can also be labelled, and so on.

In the first step of the $(\TV,\delta)$ scheme, we check whether $\TV(\theta) \leq \delta.$ If so, then we stop and the root becomes a leaf. If not, then we label the root by $1$ and split $L_{d,n}$ along coordinate $1$ in half. The two vertices now represent rectangles $R_1,R_2$, say. For $i = 1,2$ we then check whether $\TV(\theta_{R_i}) \leq \delta.$ If so, then that node becomes a leaf. Otherwise, we go to the next step and split the node along coordinate $2.$ We iterate this procedure until each leaf of the binary tree represents a rectangle on which $\theta$ has total variation at most $\delta.$ In step $i,$ we label all the nodes that are split by the element of $[d]$ congruent to $i \pmod d.$ In words, we choose the splitting coordinate from $1$ to $d$ in a cyclic manner. This ensures that the aspect ratio of each of the rectangles represented by the leaves is at most $2$.

After carrying out this scheme, we are left with a recursive dyadic partition of $L_{d,n}$, say $\Pi_{\theta,\delta}$, satisfying properties~b) and c). In order to show that $\Pi_{\theta,\delta}$ also satisfies property~a), we need:

\begin{lemma}\label{lem:division}
	Let $\theta \in \R^{L_{d,n}}.$ Then, for any $\delta > 0$, the $(\TV,\delta)$ division scheme satisfies the cardinality bound
	\begin{equation*}
		|\Pi_{\theta,\delta}| \leq 1 + \log_2 N \:\big(1 + \frac{\TV(\theta)}{\delta}\big).
	\end{equation*}
	Moreover, each axis aligned rectangle in $\Pi_{\theta,\delta}$ has aspect ratio at most $2.$
\end{lemma}

\begin{proof}
	We say that a vertex of the binary tree is in generation $i$ if its graph distance to the root is $i - 1.$ Fix any positive integer $i$ and consider the binary tree grown till step $i - 1.$ Note that all the vertices $\{v_1,\dots,v_{k}\}$ in generation $i$ represent disjoint subsets of $L_{d,n}.$ Thus, by subadditivity of total variation, we have $\sum_{j = 1}^{k} \TV(\theta_{v_j}) \leq \TV(\theta).$ Since only vertices with $\TV(\theta_{v_j}) > \delta$ are split, at most $1 + \frac{\TV(\theta)}{\delta}$ vertices of generation $i$ can be split. Now note that since $L_{d,n}$ has $N$ elements, the depth of the binary tree can be at most $\log_2 N.$ Thus there can be at most $\log_2 N \:\big(1 + \frac{\TV(\theta)}{\delta}\big)$ splits in total, and each split increases the number of rectangular blocks by $1.$ This proves the cardinality bound.
	The second assertion is immediate from the fact that the splitting coordinates are chosen cyclically, so that in generation $i$ the split is along coordinate $i \pmod d$.
\end{proof}
This lemma, together with the preceding discussion, now implies the proposition.
\end{proof}

\subsubsection{Gagliardo Nirenberg Inequality}
\input{gagliardo}

We are now ready to prove Theorem~\ref{thm:dcadap}.
\begin{proof}[Proof of Theorem~\ref{thm:dcadap}]
	Without loss of generality let $S = \{1,\dots,s\}$ where $s = |S| \geq 2.$ Let $\theta^* \in K^{S}_{d,n}(V).$ Let us denote
	$$\mathcal{R}(\theta^*) = \inf_{\theta \in \R^{L_{d,n}}} \big[\|\theta - \theta^*\|^2 + \sigma^2 \log N \:k^{(0)}_{\rdp}(\theta)\big].$$ In view of Theorem~\ref{thm:adapt}, it suffices to upper bound $\mathcal{R}(\theta^*).$

	Let us define the $s$ dimensional array $\theta^*_{S}$ which satisfies $$\theta^*_{S}(i_1,\dots,i_s) = \theta^*(i_1,\dots,i_s,1,\dots,1) \:\:\forall (i_1,\dots,i_s) \in [n]^{s}.$$

	For any fixed $\delta > 0,$ let $\Pi_{\theta,\delta}$ be the RDP of $L_{s,n}$ obtained by applying Lemma~\ref{lem:division} to the $s$ dimensional array $\theta^*_{S}.$ Let $\tilde{\theta} \in \R^{L_{s,n}}$ be defined to be piecewise constant on the partition $\Pi_{\theta,\delta}$, so that within each rectangle of $\Pi_{\theta,\delta}$ the array $\tilde{\theta}$ equals the mean of the entries of $\theta^*_{S}$ inside that rectangle. Each rectangle of $\Pi_{\theta,\delta}$ has aspect ratio at most $2$ and we have
	$$k^{(0)}_{\rdp}(\tilde{\theta}) = |\Pi_{\theta,\delta}| \leq C \log N \frac{\TV(\theta^*_{S})}{\delta}.$$
	We can now apply Proposition~\ref{prop:gagliardo} to conclude that within every such rectangle $R$ of $\Pi_{\theta,\delta}$ we have $\|\tilde{\theta}_{R} - (\theta^*_{S})_{R}\|^2 \leq C \delta^2.$ This gives us
	\begin{equation}\label{eq:l2error}
	\|\tilde{\theta} - \theta^*_{S}\|^2 = \sum_{R \in \Pi_{\theta,\delta}} \|\tilde{\theta}_{R} - (\theta^*_{S})_{R}\|^2 \leq C \delta^2 |\Pi_{\theta,\delta}| \leq C \delta \log N\: \TV(\theta^*_{S}).
	\end{equation}

	Now let us define a $d$ dimensional array $\theta^{'} \in \R^{L_{d,n}}$ satisfying, for any $a \in [n]^d$,
	$$\theta^{'}(a) = \tilde{\theta}(a_{S}).$$
	Then we have $k_{\rdp}^{(0)}(\theta^{'}) = k_{\rdp}^{(0)}(\tilde{\theta}).$ We also have
	\begin{equation}\label{eq:e1}
	\|\theta^{'} - \theta^*\|^2 = n^{d - s} \|\tilde{\theta} - \theta^*_{S}\|^2 \leq C n^{d - s} \delta \log N\: \TV(\theta^*_{S}).
	\end{equation}

	We can now upper bound $\mathcal{R}(\theta^*)$ by setting $\theta = \theta^{'}$ in the infimum, and since $\delta > 0$ was arbitrary we can write
	\begin{align}\label{eq:bdd1}
		\mathcal{R}(\theta^*) \leq \:&C\: \inf_{\delta > 0} \big(n^{d - s} \delta \: \log N\: \TV(\theta^*_{S}) + \sigma^2 \log N \frac{\TV(\theta^*_{S})}{\delta}\big) = \\& \nonumber C\:n^{d - s} V^{*}_{S} \log N \inf_{\delta > 0} \big(\delta + \frac{\sigma^2_{S}}{\delta}\big) = C\: n^{d - s} V^*_{S} \sigma_{S} \log N,
	\end{align}
	where in the last equality we have set $\delta = \sigma_{S}.$

	Now, let us define $\overline{\theta^*_{S}}$ as the constant $s$ dimensional array whose entries all equal the mean of the entries of $\theta^*_{S}.$ Define a constant $d$ dimensional array $\theta^{'} \in \R^{L_{d,n}}$ where every entry is again the mean of all entries of $\theta^*_{S}.$
	Then we have $k^{(0)}_{\rdp}(\theta^{'}) = 1.$ In this case, we can bound $\mathcal{R}(\theta^*)$ by setting $\theta = \theta^{'}$ in the infimum to obtain
	\begin{align}\label{eq:bd2}
		\mathcal{R}(\theta^*) \leq &\:C \big(n^{d - s} \|\theta^*_{S} - \overline{\theta^*_{S}}\|^2 + \sigma^2 \log N\big) \leq Cn^{d - s}\:\big((V^*_{S})^{2} + \sigma_{S}^2\big),
	\end{align}
	where in the last inequality we have again used Proposition~\ref{prop:gagliardo}.

	Also, by setting $\theta = \theta^*$ in the infimum and noting that $k^{(0)}_{\rdp}(\theta^*) \leq n^{s}$ we can write
	\begin{equation}\label{eq:bd3}
	\mathcal{R}(\theta^*) \leq n^s \sigma^2 \log N \leq n^{d} \sigma_{S}^2 \log N.
	\end{equation}

	Combining the three bounds given in~\eqref{eq:bdd1},~\eqref{eq:bd2} and~\eqref{eq:bd3} finishes the proof of the theorem in the case $s \geq 2.$

	When $s = 1$ the proof goes along similar lines, with one main difference: in place of Proposition~\ref{prop:gagliardo} we have to use Lemma~\ref{lem:1dapprox}, stated and proved in Section~\ref{Sec:appendix}. We leave the details of this case to the reader. \end{proof}

\subsection{Proof of Theorem~\ref{thm:tvadapmlb}}
Consider the Gaussian mean estimation problem $y = \theta^* + \sigma Z$ where $\theta^* \in K_{d,n}(V).$ Let us denote the minimax risk of this problem under squared error loss by $\mathcal{R}(V,\sigma,d,n).$ A lower bound for $\mathcal{R}(V,\sigma,d,n)$ is already known in the literature when $d \geq 2$ and is due to Theorem $2$ in~\cite{sadhanala2016total}.
\begin{theorem}[Sadhanala et al.]\label{thm:generalmlb}
	Let $V > 0,\sigma > 0$ and let $n,d$ be positive integers with $d \geq 2.$ Let $N = n^d.$ Then there exists a positive universal constant $c$ such that
	\begin{equation*}
		\mathcal{R}(V,\sigma,d,n) = \inf_{\tilde{\theta} \in \R^{L_{d,n}}} \sup_{\theta \in K_{d,n}(V)} \E_{\theta} \|\tilde{\theta} - \theta\|^2 \geq c\: \min\big\{\frac{\sigma\:V}{2d} \sqrt{1 + \log\big(\frac{2\:\sigma\:d\:N}{V}\big)},\: N \sigma^2,\: \frac{V^2}{d^2} + \sigma^2\big\}.
	\end{equation*}
\end{theorem}

We are now ready to proceed with the proof.
\begin{proof}
	Without loss of generality, let $S = \{1,2,\dots,s\}$ where $s = |S|.$ For a generic array $\theta \in \R^{L_{d,n}}$ let us define, for any $a \in [n]^s$,
	$$\overline{\theta}_{S}(a_1,\dots,a_s) = \frac{1}{n^{d - s}}\sum_{(i_1,\dots,i_{d - s}) \in [n]^{d - s}} \theta(a_1,\dots,a_s,i_1,\dots,i_{d - s}).$$ In words, $\overline{\theta}_{S} \in \R^{L_{s,n}}$ is an $s$ dimensional array obtained by averaging $\theta$ over the $d - s$ coordinates in $S^{c}.$

	For any $\theta^* \in K^{S}_{d,n}(V)$ we can consider the reduced $s$ dimensional estimation problem where we observe $\overline{y}_{S} = \theta^*_{S} + \sigma \overline{Z}_{S}.$ This is an $s$ dimensional version of our estimation problem where the parameter space is $K_{s,n}(V_{S})$ and the noise variance is $\sigma^2_{S}.$ Hence we can denote its minimax risk under ($s$ dimensional) squared error loss by $\mathcal{R}(V_{S},\sigma_{S},s,n).$

	By the sufficiency principle, $n^{d - s}$ times the minimax risk of this reduced problem under the $s$ dimensional squared error loss equals the minimax risk of our original $d$ dimensional problem under the $d$ dimensional squared error loss. That is,
	\begin{align*}
		\inf_{\tilde{\theta}(y) \in \R^{L_{d,n}}} \sup_{\theta \in K^{S}_{d,n}(V)} \E_{\theta} \|\tilde{\theta}(y) - \theta\|^2 &= n^{d - s} \inf_{\tilde{\theta}(\overline{y}_{S}) \in \R^{L_{s,n}}} \sup_{\theta \in K_{s,n}(V_S)} \E_{\theta} \|\tilde{\theta}(\overline{y}_{S}) - \theta\|^2 \\
		&= n^{d - s} \mathcal{R}(V_{S},\sigma_{S},s,n).
	\end{align*}

	We can now invoke Theorem~\ref{thm:generalmlb} to finish the proof when $s \geq 2.$ When $s = 1$ we note that the space $\M_{n,V} = \{\theta \in \R^n: 0 \leq \theta_1 \leq \dots \leq \theta_n \leq V\} \subset K_{1,n}(V).$ We can then use an existing minimax lower bound for vector estimation in $\M_{n,V}$, given in Theorem $2.7$ of~\cite{chatterjee2018denoising}, to finish the proof.
\end{proof}

\subsection{Proof of Theorem~\ref{thm:slowrate}}
We first prove the following proposition about the approximation of a vector in $\mathcal{BV}^{(r)}_n$ by a piecewise polynomial vector.
\begin{proposition}\label{prop:piecewise}
	Fix a positive integer $r$ and $\theta \in \R^n$, and let $V \coloneqq V^{(r)}(\theta).$ For any $\delta > 0$, there exists a $\theta^{'} \in \R^n$ such that \newline
	a) $k^{(r)}_{\rdp}(\theta^{'}) \leq C_r \delta^{-1/r}$ for a constant $C_r$ depending only on $r$, and \newline
	b) $|\theta - \theta^{'}|_{\infty} \leq V \delta$, \newline
	where $|\cdot|_{\infty}$ denotes the usual $\ell_{\infty}$ norm of a vector.
\end{proposition}

\begin{remark}
	The above proposition is a discrete version of an analogous result for functions defined on the continuum in~\cite{BirmanSolomjak67}. The proof there uses a recursive partitioning scheme and invokes abstract Sobolev embedding theorems which are not applicable verbatim in the discrete setting. We found that a simpler proof can be written for the discrete version, which we now present.
\end{remark}

\subsection{Proof of Proposition~\ref{prop:piecewise}}

We first need a lemma quantifying the error in approximating an arbitrary vector $\theta$ by a polynomial vector $\theta^{'}.$
Recall that a vector $\theta$ is said to be a polynomial of degree $r$ if $\theta \in \mathcal{F}^{(r)}_{1,n}$, where $\mathcal{F}^{(r)}_{d,n}$ has been defined earlier.
\begin{lemma}{\label{lem:approxpoly}}
	For any $\theta \in \R^n$ there exists a degree $r - 1$ polynomial $\theta^{'}$ such that
	\begin{equation}
	|\theta - \theta^{'}|_{\infty} \leq C_r n^{r - 1} |D^{(r)}(\theta)|_{1} = C_r V^{(r)}(\theta).
	\end{equation}
\end{lemma}

\begin{proof}
	Any vector $\alpha \in \R^n$ can be expressed in terms of $D^{(r)}(\alpha)$ and $D^{(j - 1)}(\alpha)_1$ for $j = 1,\dots,r$ as follows:
	\begin{equation}\label{eq:buildup}
	\alpha_i = \sum_{j = 1}^{i - r} {i - j - 1 \choose r - 1} (D^{(r)}(\alpha)_j) + \sum_{j = 1}^{r} {i - 1 \choose j - 1} D^{(j - 1)}(\alpha)_1
	\end{equation}
	where the convention is that ${a \choose b} = 0$ for $b > a$, ${0 \choose 0} = 1$ and the first sum on the right hand side is $0$ unless $i > r.$ This result appears as Lemma $D.2$ in~\cite{guntuboyina2020adaptive}.

	Let us define $\theta^{'}$ to be the unique degree $r - 1$ polynomial vector such that, for all $j \in \{1,2,\dots,r\}$,
	\begin{equation*}
		D^{(j - 1)}(\theta^{'})_1 = D^{(j - 1)}(\theta)_1.
	\end{equation*}
	Now we apply~\eqref{eq:buildup} to the vector $\theta - \theta^{'}$ to obtain
	\begin{equation}
	|(\theta - \theta^{'})_i| = \big|\sum_{j = 1}^{i - r} {i - j - 1 \choose r - 1} (D^{(r)}(\theta)_j)\big| \leq C_r n^{r - 1} |D^{(r)}(\theta)|_{1}.
	\end{equation}
	The equality follows from~\eqref{eq:buildup}, the fact that $D^{(r)}(\theta - \theta^{'}) = D^{(r)}(\theta)$ (since $D^{(r)}$ is a linear operator and $\theta^{'}$ is a degree $r - 1$ polynomial), and the fact that the second sum in~\eqref{eq:buildup} vanishes by our choice of $\theta^{'}.$ The inequality follows from the simple bound ${n \choose k} \leq n^{k}$ for positive integers $n,k.$ \end{proof}

We are now ready to proceed with the proof.

\begin{proof}[Proof of Proposition~\ref{prop:piecewise}]
	For the sake of clean exposition, we assume $n$ is a power of $2$; the reader can check that the proof holds for arbitrary $n$ as well. For an interval $I \subset [n]$ let us define $$\mathcal{M}(I) = |I|^{r - 1} |D^{(r)} \theta_{I}|_1$$ where $|I|$ is the cardinality of $I$ and $\theta_{I}$ is the vector $\theta$ restricted to the indices in $I.$ Let us now perform recursive dyadic partitioning of $[n]$ according to the following rule. Starting with the root vertex $I = [n]$, we check whether $\mathcal{M}(I) \leq V \delta.$ If so, we stop and the root becomes a leaf. If not, we divide the root $I$ into two equal intervals $I_1 = [1, n/2]$ and $I_2 = [n/2 + 1, n].$ For $j = 1,2$ we now check whether $\mathcal{M}(I_j) \leq V \delta.$ If so, then that node becomes a leaf; otherwise we keep partitioning. When this scheme halts, we are left with a recursive dyadic partition of $[n]$ consisting of disjoint intervals.
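A compact R rendering of this recursion (our own illustrative sketch; \texttt{diff(x, differences = r)} computes the $r$th order differences $D^{(r)}$):
\begin{verbatim}
# Recursively split [a, b] in half while M(I) = |I|^{r-1} |D^r theta_I|_1
# exceeds the threshold V * delta; returns the list of leaf intervals.
rdp_scheme <- function(theta, a, b, r, thresh) {
  len <- b - a + 1
  M <- if (len > r)
    len^(r - 1) * sum(abs(diff(theta[a:b], differences = r))) else 0
  if (M <= thresh || len == 1) return(list(c(a, b)))
  mid <- a + len %/% 2 - 1
  c(rdp_scheme(theta, a, mid, r, thresh),
    rdp_scheme(theta, mid + 1, b, r, thresh))
}
\end{verbatim}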
Say there are $k$ such intervals, denoted $B_1,\\dots,B_{k}.$\n\tBy construction, we have $\\mathcal{M}(B_i) \\leq V \\delta.$ We can now apply Lemma~\\ref{lem:approxpoly} to $\\theta_{B_i}$ and obtain a degree $r - 1$ polynomial vector $v_i \\in \\R^{|B_i|}$ such that $|\\theta_{B_i} - v_i|_{\\infty} \\leq C_r \\mathcal{M}(B_i) \\leq C_r V \\delta$; replacing $\\delta$ by $\\delta\/C_r$ throughout (which only changes the constant in part a)), we may assume $|\\theta_{B_i} - v_i|_{\\infty} \\leq V \\delta.$ We can then concatenate the vectors $v_i$ to define a vector $\\theta^{'} \\in \\R^n$ satisfying $\\theta^{'}_{B_i} = v_i.$ Thus, we have $|\\theta - \\theta^{'}|_{\\infty} \\leq V \\delta.$ Note that, by definition, $k^{(r)}_{\\rdp}(\\theta^{'}) \\leq k.$ We now need to show that $k \\leq C_r \\delta^{-1\/r}.$\n\t\n\tLet us rewrite $\\mathcal{M}(I) = \\big(\\frac{|I|}{n}\\big)^{r - 1} n^{r - 1} |D^{(r)} \\theta_{I}|_1.$ Note that for arbitrary disjoint intervals $I_1,I_2,\\dots,I_{k}$ we have, by sub-additivity of the functional $V^{(r)}(\\theta)$,\n\t\\begin{equation}\\label{eq:subadd}\n\t\\sum_{j \\in [k]} n^{r -1} |D^{(r)} \\theta_{I_j}|_1 \\leq V^{(r)}(\\theta) = V.\n\t\\end{equation}\n\tThe process of obtaining our recursive partition of $[n]$ happens in rounds. In the first round, we possibly partitioned the interval $I = [n]$, which has size proportion $|I|\/n = 1 = 2^{-0}.$ In the second round, we possibly partitioned intervals having size proportion $2^{-1}$. \n\tIn general, in the $\\ell$-th round, we possibly partitioned intervals having size proportion $2^{-\\ell}$. Let $n_\\ell$ be the number of intervals with size proportion $2^{-\\ell}$ that we divided in round $\\ell$; we now give an upper bound on $n_\\ell.$ If we indeed partitioned an interval $I$ with size proportion $2^{-\\ell}$, then by construction $\\mathcal{M}(I) > V \\delta$, which means \n\t\\begin{equation}\n\tn^{r - 1} |D^{(r)} \\theta_{I}|_1 > \\frac{V \\delta}{2^{-\\ell(r - 1)}}.\n\t\\end{equation}\n\tTherefore, by sub-additivity as in~\\eqref{eq:subadd}, we can conclude that the number of such divisions is at most $\\frac{2^{-\\ell(r - 1)}}{\\delta}.$ On the other hand, the number of such divisions is clearly bounded above by $2^{\\ell}$, the total number of dyadic intervals of size proportion $2^{-\\ell}.$ Thus we conclude\n\t\\begin{equation*}\n\t\tn_\\ell \\leq \\min\\Big\\{\\frac{2^{-\\ell(r - 1)}}{\\delta},2^\\ell\\Big\\}.\n\t\\end{equation*}\n\tSince each division increases the number of intervals in the partition by one, we can assert that \n\t\\begin{equation}\n\tk = 1 + \\sum_{\\ell = 0}^{\\infty} n_\\ell \\leq 1 + \\sum_{\\ell = 0}^{\\infty} \\min\\Big\\{\\frac{2^{-\\ell(r - 1)}}{\\delta},2^\\ell\\Big\\} \\leq C_r \\delta^{-1\/r}.\n\t\\end{equation}\n\tIn the above, we set $n_\\ell = 0$ for $\\ell$ exceeding the maximum number of rounds of division possible. The last summation is easily performed: there exists a nonnegative integer $\\ell^*$ with $2^{\\ell^*} = O(\\delta^{-1\/r})$ such that \n\t\\begin{equation*}\n\t\t\\min\\Big\\{\\frac{2^{-\\ell(r - 1)}}{\\delta},2^\\ell\\Big\\} = \n\t\t\\begin{cases}\n\t\t\t2^\\ell, & \\text{for} \\:\\:\\ell < \\ell^* \\\\\n\t\t\t\\frac{2^{-\\ell(r - 1)}}{\\delta} & \\text{for} \\:\\:\\ell \\geq \\ell^*,\n\t\t\\end{cases}\n\t\\end{equation*}\n\tand summing the two resulting series gives the bound $C_r \\delta^{-1\/r}.$ This finishes the proof.
\n\\end{proof}\n\n\nWe can now finish the proof of Theorem~\\ref{thm:slowrate}.\n\\begin{proof}[Proof of Theorem~\\ref{thm:slowrate}]\n\tAs in the proof of Theorem~\\ref{thm:dcadap}, it suffices to upper bound\n\t$$\\mathcal{R}(\\theta^*) = \\inf_{\\theta \\in \\R^{L_{1,n}}} \\big[\\|\\theta - \\theta^*\\|^2 + \\sigma^2 \\log n \\:k^{(r)}_{\\rdp}(\\theta)\\big].$$ By Proposition~\\ref{prop:piecewise}, for every $\\delta > 0$ there exists $\\theta^{'}$ with $k^{(r)}_{\\rdp}(\\theta^{'}) \\leq C_r \\delta^{-1\/r}$ and $\\|\\theta^{'} - \\theta^*\\|^2 \\leq n |\\theta^{'} - \\theta^*|^2_{\\infty} \\leq n V^2 \\delta^2.$ Hence we obtain\n\t$$\\mathcal{R}(\\theta^*) \\leq \\inf_{\\delta > 0} \\big[n V^2 \\delta^2 + C_r \\delta^{-1\/r} \\sigma^2 \\log n\\big].$$\n\tSetting $\\delta = c \\big(\\frac{\\sigma^2 \\log n}{n V^2}\\big)^{r\/(2r + 1)}$ for an appropriate constant $c$ balances the two terms and finishes the proof.\n\\end{proof}\n\n\n\\subsection{Proof of Lemma~\\ref{lem:mlbpc}}\\label{sec:lempc}\nWe will first need the following lemma. Let $\\|\\cdot\\|_{0}$ denote the usual $\\ell_0$ norm, i.e., the number of nonzero entries. \n\\begin{lemma}\\label{lem:sparse}\n\tFix positive integers $n$ and $d$, and set $N = n^d.$ Then for any $\\theta \\in \\R^{L_{d,n}}$, $k^{(0)}_{\\hier}(\\theta) \\leq 3d\\|\\theta\\|_0.$\n\\end{lemma}\n\n\n\\begin{proof}\n\tWe will prove the lemma via induction on the dimension $d$. Let us start with the base case \n\t$d = 1$, and write $m = \\|\\theta\\|_{0}.$ Let $1 \\le i_1 < \\ldots < i_{m} \\le n$ be the support of $\\theta$ (i.e. the points where $\\theta$ has nonzero value). Then $\\theta$ is constant on each interval of the (hierarchical) partition $\\Pi \\coloneqq \\{\\{1, \\ldots, i_1-1\\}, \\{i_1\\}, \\ldots, \\{i_{m - 1} + 1, \\ldots, i_{m} - 1\\}, \\{i_{m}\\}, \\{i_{m} + 1, \\ldots, n\\}\\}$ \n\tof $L_{1, n}$, which has at most $2m + 1 \\leq 3m$ intervals, and consequently $k^{(0)}_{\\hier}(\\theta) \\le 3\\|\\theta\\|_0$. Now suppose the statement holds \n\tfor some $d \\ge 1$. Let $\\theta \\in \\R^{L_{d+1, n}}$ and let $1 \\le i_1 < \\ldots < i_{k_0} \\le \n\tn$ be the first (horizontal) coordinates of the support of $\\theta$. Evidently $k_0 \\le \n\t\\|\\theta\\|_0$. Now, by our induction hypothesis, for every $j \\in [k_0]$, there exists a hierarchical partition $\\Pi_j$ of $L_{d, n; j} \\coloneqq \\{i_j\\} \\times [n]^d$ such that $\\theta$ restricted to $L_{d, n; j}$ --- which we refer to as $\\theta_j$ in the sequel --- is constant on each rectangle of $\\Pi_j$ and $|\\Pi_j| \\le 3d\\|\\theta_j\\|_0$. Let $\\Pi$ be the partition formed by $\\Pi_j; j \\in [k_0]$ together with the rectangles $\\{1, \\ldots, i_1 - 1\\} \\times [n]^{d}, \\ldots, \\{i_{k_0 - 1} + 1, \\ldots, i_{k_0} - 1\\} \\times [n]^{d}, \\{i_{k_0} + 1, \\ldots, n\\} \\times [n]^{d}$; then $\\theta$ is constant on each rectangle of $\\Pi$. Since $\\Pi$ is a hierarchical refinement of the partition comprising the rectangles $\\{1, \\ldots, i_1 - 1\\} \\times [n]^{d}, L_{d, n; 1},\\ldots, \n\t\\{i_{k_0 - 1} + 1, \\ldots, i_{k_0} - 1\\} \\times [n]^{d}, L_{d, n; k_0}, \\{i_{k_0} + 1, \\ldots, n\\} \\times [n]^{d}$, which is clearly hierarchical, it follows that $\\Pi$ is itself a hierarchical \n\tpartition. Finally, since there are at most $k_0 + 1 \\leq 2k_0$ gap rectangles, $$|\\Pi| \\le \\sum_{j \\in [k_0]}|\\Pi_j| + 2k_0 \\le \\sum_{j \\in [k_0]}3d\\|\\theta_j\\|_0 + 2\\|\\theta\\|_0 = 3d\\|\\theta\\|_0 + 2\\|\\theta\\|_0 \\leq 3(d + 1)\\|\\theta\\|_0,$$\n\tthus concluding the induction step.\n\\end{proof}\n\n\n\nNow we can finish the proof of Lemma~\\ref{lem:mlbpc}.\n\\begin{proof}\n\tRecall that $\\Theta_{k,d,n} \\coloneqq \\{\\theta \\in \\R^{L_{d,n}}: k_{\\hier}^{(0)}(\\theta) \\leq k\\}$.
Let us denote $\\Theta^{({\\rm sparse})}_{k} = \\{\\theta \\in \\R^{L_{d,n}}: \\|\\theta\\|_0 \\leq k\\}.$ Then, by Lemma~\\ref{lem:sparse}, we have the set inclusion, for any $1 \\leq k \\leq N$,\n\t$$\\Theta^{({\\rm sparse})}_{k} \\subset \\Theta_{3dk,d,n},$$\n\tso any minimax lower bound over $\\Theta^{({\\rm sparse})}_{k}$ is also a minimax lower bound over $\\Theta_{3dk,d,n}.$ A minimax lower bound (for the squared error) for the parameter space $\\Theta^{({\\rm sparse})}_{k}$ which is tight up to constants is $C \\sigma^2 k \\log \\frac{eN}{k}.$ This result is very well known and can be found, for instance, as Corollary~4.15 in~\\cite{rigollet2015high}. This finishes the proof. \n\\end{proof}\n\n\n\n\n\n\\section{Auxiliary Results}\\label{Sec:appendix}
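\nThis section collects the auxiliary Gaussian facts used in the preceding proofs.\n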
\\begin{lemma}\\label{lem:0} \n\tLet $Z \\sim N(0,I_n)$ and let $S \\subset \\R^n$ be a subspace. Let $\\Sigma$ be a positive \n\tsemi-definite matrix and $\\Sigma^{1\/2}$ denote its usual square root. Also, let \n\t$\\|\\Sigma^{1\/2}\\|_{{\\rm op}}$ denote the $\\ell_2$-operator norm of $\\Sigma^{1\/2}$, i.e., the \n\tsquare root of the maximum eigenvalue of $\\Sigma$.
Then the following holds for every $\\theta \\in \\R^n$: \n\t\\begin{equation*}\n\t\t\\E \\sup_{v \\in S,\\, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{v - \\theta}{\\|v - \\theta\\|} \\rangle \\leq \\|\\Sigma^{1\/2}\\|_{{\\rm op}} (Dim(S)^{1\/2} + 1).\n\t\\end{equation*}\n\tAlso, for any $u > 0$, we have with probability at least $1 - 2 \\exp(-\\frac{u^2}{2}),$\n\t\\begin{equation}\\label{eq:lem0}\n\t\\big(\\sup_{v \\in S, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{v - \\theta}{\\|v - \\theta\\|} \\rangle\\big)^2 \\leq \\|\\Sigma^{1\/2}\\|^2_{{\\rm op}} \\big[2\\:Dim(S) + 4\\:(1 + u^2)\\big].\n\t\\end{equation}\n\\end{lemma}\n\n\n\n\\begin{proof}\n\tWe can write\n\t\\begin{align*}\n\t\t&\\sup_{v \\in S, \\, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{v - \\theta}{\\|v - \\theta\\|} \\rangle = \\sup_{v \\in S, \\, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{v - O_{S} \\theta - (I - O_{S})\\theta}{\\sqrt{|v - O_{S} \\theta|^2 + |(I - O_{S})\\theta|^2}} \\rangle \\\\\\leq& \\sup_{v \\in S, \\, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{v - O_{S} \\theta}{\\sqrt{|v - O_{S} \\theta|^2 + |(I - O_{S})\\theta|^2}} \\rangle + \\sup_{v \\in S, \\, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{(I - O_{S})\\theta}{\\sqrt{|v - O_{S} \\theta|^2 + |(I - O_{S})\\theta|^2}} \\rangle \\\\\\leq& \\sup_{v \\in S: \\|v\\| \\leq 1} \\langle \\Sigma^{1\/2} Z, v \\rangle + \\sup_{v \\in S': \\|v\\| \\leq 1} \\langle \\Sigma^{1\/2} Z, v\\rangle\n\t\\end{align*} \n\twhere $S'$ is the subspace spanned by the vector $(I - O_S)\\theta$.\n\t\n\tNow we will bound the expectation of each of the two terms above separately. Let us work with the first term. First note that $$\\sup_{v \\in S: \\|v\\| \\leq 1} \\langle \\Sigma^{1\/2} Z,v \\rangle = \\sup_{v \\in S: \\|v\\| \\leq 1} \\langle O_S \\Sigma^{1\/2} Z,v \\rangle = \\|O_S \\Sigma^{1\/2} Z\\|.$$\n\tSecondly, since $O_S$ is idempotent,\n\t\\begin{align*}\n\t\t&\\big(\\E \\|O_S \\Sigma^{1\/2} Z\\|\\big)^2 \\leq \\E \\|O_S \\Sigma^{1\/2} Z\\|^2 = \\E \\, Z^{t} \\Sigma^{1\/2} O_{S} \\Sigma^{1\/2} Z \\\\&= Trace(\\Sigma^{1\/2} O_{S} \\Sigma^{1\/2}) \\leq \\|\\Sigma\\|_{{\\rm op}} Trace(O_{S}) = \\|\\Sigma\\|_{{\\rm op}} Dim(S),\n\t\\end{align*}\n\twhere the second equality follows from standard facts about Gaussian quadratic forms and the last inequality follows from the following reasoning. \n\t\n\tConsider the spectral decomposition $\\Sigma = P D P^{t}$, where $P$ is an orthogonal matrix and $D$ is a diagonal matrix consisting of the eigenvalues of $\\Sigma.$ \n\tThen, by the cyclic property of the trace, we have\n\t\\begin{align*}\n\t\t&Trace(\\Sigma^{1\/2} O_{S} \\Sigma^{1\/2}) = Trace(\\Sigma O_{S}) = Trace(P D P^{t} O_{S}) = \\\\& Trace(D P^{t} O_{S} P) \\leq \\|D\\|_{{\\rm op}} Trace(P^{t} O_{S} P) = \\|\\Sigma\\|_{{\\rm op}} Trace(O_{S} P P^{t}) = \\|\\Sigma\\|_{{\\rm op}} Trace (O_{S}).\n\t\\end{align*}\n\t\n\tTherefore, since $\\|\\Sigma\\|_{{\\rm op}} = \\|\\Sigma^{1\/2}\\|^2_{{\\rm op}}$, we can say that \n\t\\begin{equation*}\n\t\t\\E \\sup_{v \\in S: \\|v\\| \\leq 1} \\langle \\Sigma^{1\/2} Z,v \\rangle \\leq \\|\\Sigma^{1\/2}\\|_{{\\rm op}} Dim(S)^{1\/2}.\n\t\\end{equation*}\n\tSimilarly, for the other term we get\n\t\\begin{equation*}\n\t\t\\E \\sup_{v \\in S': \\|v\\| \\leq 1} \\langle \\Sigma^{1\/2} Z,v \\rangle \\leq \\|\\Sigma^{1\/2}\\|_{{\\rm op}} Dim(S')^{1\/2} \\leq \\|\\Sigma^{1\/2}\\|_{{\\rm op}}.\n\t\\end{equation*}\n\tThis proves the first part of the lemma.
\n\t\n\tComing to the second part, note that by symmetry we also have the lower bound\n\t$$\\E \\sup_{v \\in S, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{v - \\theta}{\\|v - \\theta\\|} \\rangle \\ge \\E \\inf_{v \\in S, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{v - \\theta}{\\|v - \\theta\\|} \\rangle \\geq - \\|\\Sigma^{1\/2}\\|_{{\\rm op}} \\big(Dim(S)^{1\/2} + 1\\big).$$\t\n\tNow we note that $\\sup_{v \\in S,\\, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{v - \\theta}{\\|v - \\theta\\|} \\rangle$ is a Lipschitz function of $Z$ with Lipschitz constant $\\|\\Sigma^{1\/2}\\|_{{\\rm op}}$. Thus we can use the well-known Gaussian concentration inequality (see, e.g., \\cite[Theorem~7.1]{L01}), which is stated as Theorem~\\ref{prop:gauss_conc} for the convenience of the reader, to conclude that for any $u > 0,$ with probability at least $1 - 2 \\exp(-u^2\/2)$ we have \n\t\\begin{equation*}\n\t\t\\big|\\sup_{v \\in S, v \\neq \\theta} \\langle \\Sigma^{1\/2} Z, \\frac{v - \\theta}{\\|v - \\theta\\|} \\rangle\\big| \\leq \\|\\Sigma^{1\/2}\\|_{{\\rm op}} \\,\\big(\\sqrt{Dim(S)} + 1 + u\\big).\n\t\\end{equation*}\n\tUsing the elementary inequality $(a + b)^2 \\leq 2 a^2 + 2 b^2$ now finishes the proof of \\eqref{eq:lem0}. \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\begin{theorem}\\label{prop:gauss_conc}\n\tLet $Z_1, Z_2,\\dots, Z_m$ be independent standard Gaussian variables and let $f: \\R^m \\mapsto \\R$ be a Lipschitz function with Lipschitz constant $1$. Then $\\E f(Z_1, \\dots, Z_m)$ is finite and\n\t$$ P (|f(Z_1, \\dots, Z_m) - \\E f(Z_1, \\dots, Z_m)| > t) \\leq 2 e^{-t^2\/2}$$\n\tfor all $t\\geq 0$.\n\\end{theorem}\n\n\nThe following lemma appears as Lemma~7.3 in~\\cite{chatterjee2019new}.\n\\begin{lemma}\\label{lem:1dapprox}\n\tLet $\\theta \\in \\R^n$ and define $\\overline{\\theta} = (\\sum_{i = 1}^{n} \\theta_i)\/n.$ Then we have the following inequality:\n\t\\begin{equation*}\n\t\t\\sum_{i = 1}^{n} \\big(\\theta_i - \\overline{\\theta}\\big)^2 \\leq n \\TV(\\theta)^2\\,.\n\t\\end{equation*}\n\\end{lemma}\n\n\n\n\\bibliographystyle{chicago}\n\\def\\noopsort#1{}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n \nThe notion of isoclinism was introduced by P. Hall in \\cite{PH1940}. It is a well-known fact that the Schur multiplier of any group and the minimal number of generators of a $p$-group are not invariant under isoclinism. In this paper, we define a variant of isoclinism and prove that the Schur multiplier and the minimal number of generators of a $p$-group are invariant under this modified version of isoclinism. We also prove that the Bogomolov multiplier is invariant under this version of isoclinism. Invariance of the Bogomolov multiplier under isoclinism was proved by Bogomolov and B\\"{o}hning in \\cite{BB} and by Moravec in \\cite{PM10}. We define two different notions of $q$-isoclinism, consider two variants of the Bogomolov multiplier, and prove their invariance under $q$-isoclinism. For a group $G$ and a positive integer $q$, the nonabelian $q$-tensor product and the nonabelian $q$-exterior product for $G$-crossed modules were introduced in \\cite{CF} as a generalization of definitions given in \\cite{B} and \\cite{ER}. In this paper, we consider the special case where the crossed module is given by a group $G$ and the identity morphism $Id: G\\rightarrow G$, called the nonabelian $q$-tensor square of $G$. In \\cite{ADST}, the authors prove several structural results for the case $q = 0$.
We generalize these structural results to arbitrary $q$. \n \n\n\\section{Preparatory Results}\n\n The explicit definition of the nonabelian $q$-tensor square is as follows.\n \\begin{definition}\n The \\textit{tensor square modulo q}, $G\\otimes^q G$, of the $G$-crossed module $Id: G \\rightarrow G$ is the group generated by the symbols $g\\otimes h$ and $\\{( g,g )\\}$ for $g, h \\in G$, subject to the following relations for $g,h,g_1,h_1 \\in G$:\n \\begin{equation}\\label{eq:E:2.11}\ngg_1 \\otimes h = (^gg_1\\otimes\\ ^gh)(g\\otimes h),\n \\end{equation}\n \\begin{equation}\\label{eq:E:2.12}\n g\\otimes hh_1 = (g\\otimes h)(^hg \\otimes\\ ^hh_1),\n \\end{equation}\n \\begin{equation}\\label{eq:E:2.13}\n \\{(g,g)\\}(g_1\\otimes h_1)\\{(g,g)\\}^{-1} =\\hspace{2mm} ^{g^{q}}g_1\\otimes\\hspace{0.5mm} ^{g{^q}}h_1,\n \\end{equation}\n \\begin{equation}\\label{eq:E:2.14}\n \\{(gg_1,gg_1)\\} = \\{(g,g)\\}\\prod_{i=1}^{q-1}(g^{-1} \\otimes (^{g^{1-q+i}}g_1)^i)\\{(g_1,g_1)\\},\n \\end{equation}\n \\begin{equation}\\label{eq:E:2.15}\n [\\{(g,g)\\},\\{(g_1,g_1)\\}] = g^q \\otimes {g_1}^q,\n \\end{equation}\n \\begin{equation}\\label{eq:E:2.16}\n \\{([g,h],[g,h])\\} = (g\\otimes h)^{q}.\n \\end{equation}\n \\end{definition}\n For $q= 0$, the $q$-tensor square reduces to the construction introduced by R. Brown and J.-L. Loday in \\cite{BL1} and \\cite{BL2}, called the nonabelian tensor square of the group $G$. Let $\\nabla^q(G)$ denote the normal subgroup of $G\\otimes ^q G$ generated by the elements $g \\otimes g$, where $g\\in G$. \n The \\textit{exterior square modulo q}, $G\\wedge^q G$, is the quotient of the group $G\\otimes ^qG$ by $\\nabla^q(G)$. The coset represented by the element $g\\otimes h$ is denoted by $g\\wedge ^qh$. Let $\\Delta^q(G)$ be the normal subgroup of $G\\otimes^q G$ generated by the elements of the form $(g \\otimes h)(h\\otimes g)$, where $g,h \\in G$. We need the following easy lemma for later use.\n \n \n\\begin{lemma}\\label{L:1.2.3}\nLet $G$ be a group. Then the following hold in $G\\otimes ^q G$.\n\\begin{itemize}\n\\item[(i)] $\\nabla^q(G)$ and $\\Delta^q(G)$ commute with the image of $G\\otimes G$ in $G\\otimes ^q G$ under the natural map: \n\\begin{align*}\n [g\\otimes g, a \\otimes b] = 1,\\\\\n [(g \\otimes h)(h \\otimes g), a \\otimes b] = 1,\n\\end{align*} for all $g,h,a,b \\in G$. In particular, both $\\nabla^q(G)$ and $\\Delta^q(G)$ are abelian.\n\\item[(ii)] $\\Delta^q(G) \\subseteq \\nabla^q(G)$ and $\\exp(\\nabla^q(G)) \\mid q$.\n\\item[(iii)] $G$ acts trivially on $\\nabla^q(G)$. In particular, $G$ acts trivially on $\\Delta^q(G)$.\nFor $g,g_1,h,h_1 \\in G$ and $n \\in \\mathbb{Z}$, we have:\n\n\n\\item[(iv)]\\begin{equation}\\label{eq:2.2.21}\n [g\\otimes h, g_1 \\otimes h_1] = [g,h] \\otimes [g_1,h_1].\n\\end{equation}\n\n\\item[(v)] \n\\begin{equation}\\label{eq:2.2.24}\n\\{(g^{-1}\\otimes g_1)(g_1 \\otimes g^{-1})\\}^{-1} = (g_1 \\otimes g)(g \\otimes g_1).\n\\end{equation}\n\\item[(vi)]\n\\begin{equation}\\label{eq:2.2.25}\n(gg_1 \\otimes gg_1) = (g \\otimes g)(g_1\\otimes g)(g\\otimes g_1)(g_1 \\otimes g_1).\n\\end{equation}\n\\item[(vii)] If $[g,h] =1$, then\n\\begin{equation}\\label{eq:2.2.23}\n(g\\otimes h^n) = (g \\otimes h)^n = (g^n \\otimes h).\n\\end{equation}\n\\item[(viii)]\\begin{equation}\\label{eq:2.2.18}\n(g \\otimes h)(g_1 \\otimes h_1)(g \\otimes h)^{-1} =\\ ^{[g,h]}(g_1 \\otimes h_1).\n\\end{equation}\n\\item[(ix)] If $q$ is odd and $[g,h] = 1$, then\n\\begin{equation}\\label{eq:2.2.30}\n \\{(gh,gh)\\} = \\{(g,g)\\}\\{(h,h)\\}.\n \\end{equation}\n \\item[(x)]\n \\begin{equation}\\label{eq:2.2.31}\n(g \\otimes h^n)(h^n \\otimes g) = \\{(g \\otimes h)(h \\otimes g)\\}^n.\n \\end{equation}\n \\item[(xi)] If $x \\in [G,G]$, then $x \\otimes x = 1$ and $(x \\otimes g)(g \\otimes x) = 1$ for every $g \\in G$.\n\n\\end{itemize}\n\\end{lemma}
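\n\nAs a quick illustration of (vi) and (vii): taking $g_1 = g$ and inducting gives $g^{m} \\otimes g^{m} = (g \\otimes g)^{m^2}$ for every $m \\geq 1$, since all four factors produced by \\eqref{eq:2.2.25} are then powers of $g \\otimes g$. This is the source of the exponents $m_i^2$ and $m_i m_j$ appearing in the proof of Proposition~\\ref{P:1.2.5} below.\n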
\n\n\n\nThe following is an easy generalization of Proposition 2.3 in \\cite{ADST}.\n\n\\begin{prop}\\label{P:1.2.5}\nLet $G$ be a group and set $A=G_{ab}$. If either of the following conditions holds:\n\\begin{itemize}\n\\item[(a)] $G'$ has a complement in $G$, \n\\item[(b)] $G_{ab}$ is finitely generated,\n\\end{itemize}\nthen for an odd integer $q$ we have\n\\begin{itemize}\n\\item[(i)] $\\nabla^q (G) \\cong \\nabla^q (A)$;\n\\item[(ii)] $\\Delta^q (G) \\cong \\Delta^q (A)$.\n\\end{itemize}\n\\end{prop}\n\\begin{proof}\nAn outline of the proof when (b) holds is as follows. Set $G_{ab} = \\langle \\overline{x_i} \\mid i = 1,\\dots,n \\rangle$, where $x_i \\in G$. We have a natural map $\\phi : G\\otimes^q G \\rightarrow G_{ab}\\otimes^q G_{ab}$, and hence a surjective morphism $\\phi\\mid_{\\nabla^q(G)} : \\nabla^q(G) \\rightarrow \\nabla^q(G_{ab})$. Now, note that for every $g \\in G$, we have $g = \\prod\\limits_{i=1}^{n}x_i^{m_i} z$ for some $z \\in [G,G]$ and $m_i \\in \\mathbb{Z}$. Thus, applying $(vi)$ and $(xi)$ of Lemma~\\ref{L:1.2.3} yields\n\\begin{align*}\ng \\otimes g &=\\ \\prod\\limits_{i=1}^{n}x_i^{m_i} z \\otimes \\prod\\limits_{i=1}^{n}x_i^{m_i} z\\\\\n&=\\ \\prod\\limits_{i=1}^{n}x_i^{m_i} \\otimes \\prod\\limits_{i=1}^{n}x_i^{m_i}\\\\\n&=\\ \\prod\\limits_{i=1}^{n}(x_i \\otimes x_i)^{m_i^2} \\prod\\limits_{1\\leq i < j \\leq n}\\{(x_i \\otimes x_j)(x_j \\otimes x_i)\\}^{m_i m_j}.\n\\end{align*}\n\nWe claim that $\\nabla^q A $ is the normal subgroup of $A\\otimes^q A$ generated by the elements of the form $x_i \\otimes x_i$ and $(x_i \\otimes x_j)(x_j \\otimes x_i)$ for $1\\leq i < j \\leq n$. Granting the claim, one obtains that $\\alpha \\coloneqq \\phi\\mid_{\\nabla^q(G)}$ is an isomorphism, which gives the following commutative diagram with exact rows:\n\\begin{equation*}\n\\xymatrix{\n1\\ \\ar@{->}[r]\n&\\nabla^q (G)\\ar@{->}^{j}[r]\n\\ar@{}^{||\\wr\\ \\alpha}[d]\n &G\\otimes^q G\\ar@{->}[r]\n\\ar@{->}^p[d]\n&G\\wedge^q G \\ar@{->}[r]\n\\ar@{->}[d]\n&1 \\\\\n1\\ \\ar@{->}[r]\n&\\nabla^q (G_{ab})\\ar@{->}^{i}[r]\n &G_{ab}\\otimes^q G_{ab}\\ar@{->}[r]\n&G_{ab}\\wedge^q G_{ab}\\ar@{->}[r]\n&1.\n}\\end{equation*}\n Our aim is to show that the top row of the above commutative diagram splits. By Lemma~\\ref{T:6.2}, the bottom row splits; let\n $f: G_{ab}\\otimes^q G_{ab} \\longrightarrow \\nabla^q (G_{ab})$ be the splitting map. Now $f'=\\alpha^{-1}\\circ f\\circ p$ is the required splitting map, and the result follows.\n\\end{proof}\n\n\\section*{Acknowledgement} V. Z. Thomas acknowledges research support from SERB, DST, Government of India grant MTR\/2020\/000483.\n\n\n\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}