diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfpxv" "b/data_all_eng_slimpj/shuffled/split2/finalzzfpxv" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfpxv" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nGiven a bounded, connected domain $\\Omega \\subset \\mathbb R^n$ with Lipschitz boundary $\\partial \\Omega$, the classical Poincar\\' e inequality with a sharp constant $C(p,\\Omega)$ states that\n\\begin{equation}\\label{eqPoincareInequality}\n \\int_\\Omega |u|^p dx \\leqslant C(p,\\Omega) \\int_\\Omega |\\nabla u|^p dx\n\\end{equation}\nfor ``suitable\" function $u$ (usually in the Sobolev space $W^{1,p}(\\Omega)$) with vanishing mean value on $\\Omega$. Without assuming the vanishing mean value on $\\Omega$, the classical Poincar\\'e inequality reads as\n\\begin{equation}\\label{eqPoincareInequalityWithAverage}\n \\int_\\Omega |u - \\overline u|^p dx \\leqslant C(p,\\Omega) \\int_\\Omega |\\nabla u|^p dx\n\\end{equation}\nwhere $\\overline u = (1\/|\\Omega|) \\int_\\Omega u dx$ denotes the mean value (or average) of $u$ over $\\Omega$. Inequality \\eqref{eqPoincareInequality} usually holds for $1 \\leqslant p < +\\infty$ under very general assumptions on $\\Omega$, for example, it holds for domains satisfying the so-called ``segment property\" or ``cone property\"; see \\cite{Agmon, liebloss2001}. An interesting question is that how the constant $C(p,\\Omega)$ depends on the domain $\\Omega$?\n\nFor $p=2$ and $n=3$, Steklov \\cite{Ste96} showed that the constant $C(2,\\Omega)$, when $\\partial \\Omega$ is piecewise smooth, must equal $1\/\\lambda_1$ where $\\lambda_1$ is the first, non-zero eigenvalue of the following Neumann boundary condition problem\n\\[\n\\begin{cases}\n-\\Delta u = \\lambda u & \\text{ in } \\Omega,\\\\\n\\partial_{\\vec n} u = 0 & \\text{ on }\\partial \\Omega.\n\\end{cases}\n\\]\nHere $\\vec n$ is the exterior unit normal. A similar result was also obtained by Steklov \\cite{Ste97} for the Dirichlet boundary condition problem\n\\[\n\\begin{cases}\n-\\Delta u = \\lambda u & \\text{ in } \\Omega,\\\\\nu = 0 & \\text{ on }\\partial \\Omega.\n\\end{cases}\n\\]\nBased on these fundamental results, a few results for the sharp constant $C(2,\\Omega)$ are known; for example, the sharp constant $C(2, B(0,1))$ for the unit ball in $\\mathbb R^3$ is $1\/j_{1,1}$ where $j_{1,1}$ is the first positive zero of the Bessel function $J_1$; see \\cite[Subsection 2.2]{KN} and \\cite{NR}. For convex domains $\\Omega \\subset \\mathbb R^n$ with diameter $d$, in a beautiful work by Payne and Weinberger \\cite{PW}, the authors showed that \\eqref{eqPoincareInequality} for $p=2$ can be obtained from weighted Poincar\\'e inequalities in one dimension. As a consequence of this, they proved that $C(2,\\Omega) = d\/\\pi$. A similar argument applied to the case $p=1$ gives $C(1,\\Omega)=d\/2$; see \\cite{AD}. \n\nPoincar\\'e inequalities for punctured domains was also studied in \\cite{LSY}. For a general domain $\\Omega$ and arbitrary $p$, determining the Poincar\\'e constant $C(p,\\Omega)$ is a hard task since the value $C(p,\\Omega)$ depends on $p$ and the geometry of the domain $\\Omega$.\n\nIn this note, we consider \\eqref{eqPoincareInequality} for the hyperbolic space $\\mathbb H^n$ with $n \\geqslant 2$. 
The motivation of writing this note goes back to a recent higher order Poincar\\'e-type inequality on $\\mathbb H^n$ established by Karmakar and Sandeep in \\cite{KS2016} and subsequently by a few works such that \\cite{BG15, BGG15}; for interested readers, we refer to \\cite{MS08, Tatatu01} for further details and related issues. To go further, let us briefly recall the definition of the space $\\mathbb H^n$. \n\nThe hyperbolic space $\\mathbb H^n$ with $n\\geqslant 2$ is a complete and simply connected Riemannian manifold having constant sectional curvature equal to $-1$. There is a number of models for $\\mathbb H^n$, however, the most important models are the half-space model, the ball model, and the hyperboloid (or Lorentz) model. In this note, we are interested in the ball model since this model is especially useful for questions involving rotational symmetry. \n\nGiven $n \\geqslant 2$, we denote by $B_n$ the open unit ball in $\\mathbb R^n$. Clearly, $B_n$ can be endowed with the following Riemannian metric\n\\[\ng(x) =\\Big(\\frac 2{1-|x|^2}\\Big)^2 dx \\otimes dx,\n\\]\nwhich is then called the ball model of the hyperbolic space $\\mathbb H^n$. In local coordinates, we have $g_{ij} = (2\/(1-|x|^2))^2 \\delta_{ij}$ and $g^{ij} = ((1-|x|^2)\/2)^2 \\delta^{ij}$. Clearly, one can think that $g$ is conformal to $dx^2$ with the conformal factor $\\ln (2\/(1-|x|^2))$. Then, it is well-known that volume element of $\\mathbb H^n$ is given by \n\\[\ndV_g(x) = \\Big(\\frac2{1-|x|^2}\\Big)^n dx,\n\\]\nwhere $dx$ denotes the Lebesgue measure in $\\mathbb R^n$. Let $d(0,x)$ denote the hyperbolic distance between the origin and the point $x$. In the ball model, it is well-known that $d(0,x) = \\ln \\big( (1+|x|)\/(1-|x|) \\big)$ for arbitrary $x\\in B_n$. In this new context, we still use $\\nabla$ and $\\Delta$ to denote the Euclidean gradient and Laplacian as well as $\\la\\cdot ,\\cdot\\ra$ to denote the standard inner product in $\\mathbb R^n$. Then, in terms of $\\nabla$, $\\Delta$, and $\\la\\cdot ,\\cdot\\ra$, with respect to the hyperbolic metric $g$, the hyperbolic gradient $\\nabla_g$, whose local coordinates is $g^{ij}\\partial_j$, and the Laplacian-Beltrami operator $\\Delta_g$, defined to be $\\text{div}_g (\\nabla \\, \\cdot )$, are given by\n\\[\n\\nabla_g = \\Big(\\frac{1-|x|^2}2\\Big)^2 \\nabla,\\quad \\Delta_g = \\Big(\\frac{1-|x|^2}2\\Big)^2 \\Delta + (n-2) \\Big( \\frac{1-|x|^2}2\\Big)^2 \\la x,\\nabla\\ra.\n\\]\nFor higher order derivatives, we shall adopt the following convention\n\\begin{equation*}\n\\nabla_g^m \\cdot =\n\\begin{cases}\n\\Delta_g^{m\/2} \\cdot &\\mbox{if $m$ is even,}\\\\\n\\nabla_g ( \\Delta_g^{(m-1)\/2} \\cdot \\,) &\\mbox{if $m$ is odd.}\n\\end{cases}\n\\end{equation*}\nThen the norm of $\\nabla_g^m$ being calculated with respect to the metric $g$ is understood as follows\n\\begin{equation*}\n|\\nabla_g^m \\cdot |_g=\n\\begin{cases}\n|\\nabla_g^m \\cdot | &\\mbox{if $m$ is even,}\\\\\n\\langle \\nabla_g^m \\, \\cdot \\, , \\nabla_g^m \\, \\cdot \\, \\rangle_g^{1\/2} &\\mbox{if $m$ is odd.}\n\\end{cases}\n\\end{equation*}\nFor simplicity, we write $|\\nabla_g^m \\cdot |$ instead of $|\\nabla_g^m \\cdot |_g$ if no confusion occurs.\n \n\nGiven a function $f$ on $\\mathbb H^n$, we denote $\\|f\\|_p = \\lt(\\int_{\\mathbb H^n} |f|^p dV_g\\rt)^{1\/p}$ and $\\|\\nabla_g^m f\\|_p = \\| |\\nabla_g^m f|_g\\|_p$, for each $1 \\leqslant p < \\infty$ and integer $m\\geqslant 1$. 
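For instance, for $m=1$ these definitions can be written out explicitly in Euclidean terms: since $g^{ij} = ((1-|x|^2)/2)^2\,\delta^{ij}$ and $dV_g = (2/(1-|x|^2))^n dx$, a direct computation gives
\[
|\nabla_g u|_g \;=\; \frac{1-|x|^2}{2}\,|\nabla u|, \qquad \|\nabla_g u\|_p^p \;=\; \int_{B_n} |\nabla u|^p \Big(\frac{2}{1-|x|^2}\Big)^{n-p} dx,
\]
so that the first-order hyperbolic Sobolev norm is a Euclidean Dirichlet-type integral with conformal weight $(2/(1-|x|^2))^{n-p}$.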
We use $W^{m,p}(\\mathbb H^n)$ to denote the Sobolev space of order $m$ in $\\mathbb H^n$. In \\cite{KS2016}, the authors prove the following higher order Poincar\\'e inequality\n\\begin{equation}\\label{eq:HighOrderPoincare}\n \\|\\nabla^l_g u\\|_2 \\leqslant \\big( \\frac 2{n-1}\\big)^{m-l} \\|\\nabla^m_g u\\|_2\n\\end{equation}\nfor all $u \\in W^{m,2}(\\mathbb H^n)$. In view of \\eqref{eq:HighOrderPoincare}, one can ask: \\textit{Whether the constant $(2\/(n-1))^{m-l}$ is sharp and do we have a similar inequality for $L^p$-norm?} We notice that it was claimed in \\cite{BG15} that the constant $(2\/(n-1))^{m-l}$ in \\eqref{eq:HighOrderPoincare} is sharp; however, we have not found any proof of this yet. In this note, we seek for an answer for the above question.\n\nIn order to state our results, for each number $1 < p < +\\infty$, let us first denote the following constant\n\\begin{equation}\\label{eqConstantC}\nC(n,m,p) =\n\\begin{cases}\n\\lt( p p' \/(n-1)^2 \\rt)^{m\/2}&\\mbox{if $m$ is even},\\\\\n(p\/(n-1))\\lt( p p' \/(n-1)^2\\rt)^{(m-1)\/2} &\\mbox{if $m$ is odd},\n\\end{cases}\n\\end{equation}\nwith $p' = p\/(p-1)$. Clearly when $p=2$ and hence $p'=2$, we obtain $C(n,m,2)= \\big(2\/(n-1)\\big)^m$. In this note, our first result is the following.\n\n\\begin{theorem}\\label{thmMAIN}\nGiven $p > 1$, then the following inequality holds\n\\begin{equation}\\label{eq:sharpPoincare}\n\\|u\\|_{p} \\leqslant C(n,m,p) \\|\\nabla^m_g u\\|_{p}\n\\end{equation}\nfor $u \\in W^{m,p}(\\mathbb H^n)$. Moreover, the constant $C(n,m,p)$ is sharp and is never achieved in $W^{m,p}(\\mathbb H^n)$.\n\\end{theorem}\n\nAs a consequence of Theorem \\ref{thmMAIN}, we know that the sharp constant $C(3,1,2)$ is $1\/2$ which is not $1\/j_{1,1}$ as in the Euclidean case. Let us now go back to \\eqref{eq:HighOrderPoincare}. By making use of Theorem \\ref{thmMAIN} above, we obtain the following corollary, which generalizes \\eqref{eq:HighOrderPoincare}.\n\n\\begin{corollary}\\label{corMAIN}\nGiven $p > 1$, then the following inequality holds\n\\begin{equation}\\label{eq:sharpPoincareLM}\n\\|\\nabla^l_g u\\|_{p} \\leqslant C(n,m-l,p) \\|\\nabla^m_g u\\|_{p}\n\\end{equation}\nfor $u \\in W^{m,p}(\\mathbb H^n)$. Moreover, the constant $C(n,m-l,p)$ is sharp and is never achieved in $W^{m,p}(\\mathbb H^n)$.\n\\end{corollary}\n\nAs a special case of Corollary \\eqref{corMAIN}, we conclude that the constant $(2\/(n-1))^{m-l}$ in \\eqref{eq:HighOrderPoincare} is sharp. In view of the results in \\cite{BG15}, it would be nice, since the sharp constant is never achieved, if there is an analogue of \\eqref{eq:sharpPoincare} with reminders. We leave this topic for interested readers.\n\n\n\\section{Proofs}\n\nIn this section, we prove Theorem \\ref{thmMAIN}. Our proof basically consists of two main parts. In the first part, we prove \\eqref{eq:sharpPoincare}. Then in the second part, we show that the constant $C(n,m,p)$ is sharp. We start with the first part. \n\n\\subsection{Proof of (\\ref{eq:sharpPoincare})}\n\nIt is now known that the symmetrization argument works well in the setting of hyperbolic spaces. It is not only the key tool in the proof of several important inequalities such as Adams-Moser--Trudinger in $\\mathbb H^n$ but also a key tool in the present proof. Let us now recall some facts about the rearrangement in the hyperbolic spaces. 
Let the function $f: \\mathbb H^n\\to \\mathbb R$ be such that \n\\[\n\\big |\\{x\\in \\mathbb H^n\\, :\\, |f(x)| > t\\} \\big | = \\int_{\\{x\\in \\mathbb H^n\\,:\\, |f(x)|>t\\}} dV_g < +\\infty\n\\]\nfor every $t >0$. Its \\textit{distribution function} is defined by\n\\[\n\\mu_f(t) = \\big |\\{x\\in \\mathbb H^n\\, :\\, |f(x)| > t\\} \\big |.\n\\]\nThen its \\textit{decreasing rearrangement} $f^*$ is defined by\n\\[\nf^*(t) = \\sup\\{s > 0\\, :\\, \\mu_f(s) > t\\}.\n\\]\nSince $f^*$ is non-increasing, the maximal function $f^{**}$ of $f^*$ is defined by\n\\[\nf^{**}(s) = \\frac 1s\\int_0^s f^*(t) dt.\n\\]\nIt is well-known for any $p \\in (1,\\infty)$ that\n\\begin{equation}\\label{eq:Hardyinequality}\n\\left(\\int_0^\\infty f^{**}(s)^p ds\\right)^{1\/p} \\leqslant p' \\left(\\int_0^\\infty f^*(s)^p ds\\right)^{1\/p}.\n\\end{equation}\nNow, we define $f^\\sharp: \\mathbb H^n \\to \\mathbb R$ by \n\\[\nf^\\sharp(x) = f^*(|B(0,d(0,x))|),\n\\]\nwhere $B(0,d(0,x))$ and $|B(0,d(0,x))|$ denote the ball centered at the origin $0$ with radius $d(0,x)$ in the hyperbolic space and its hyperbolic volume, respectively. Then for any continuous increasing function $\\Phi: [0, +\\infty) \\to [0, +\\infty)$ we have\n\\begin{equation}\\label{eqIDENTITY}\n\\int_{\\mathbb H^n} \\Phi(|f|) dV_g = \\int_{\\mathbb H^n} \\Phi(f^\\sharp) dV_g.\n\\end{equation}\nMoreover, the Polya--Szeg\\\"o principle conclude that\n\\[\n\\int_{\\mathbb H^n} |\\nabla_g f^\\sharp|^p dV_g \\leqslant \\int_{\\mathbb H^n} | \\nabla_g f |^p dV_g\n\\]\nfor any function $f:\\mathbb H^n \\to \\mathbb R$. Now we define a function on $[0,+\\infty)$ as follows\n\\[\n\\Phi(s) = n \\om_n \\int_0^s (\\sinh r)^{n-1} dr, \\quad s\\geqslant 0.\n\\]\nClearly, $\\Phi$ is a continuous and strictly increasing function from $[0,+\\infty)$ to $[0,+\\infty)$. Let $F$ denote the inverse function of $\\Phi$, it is not hard to verify that $F$ is a continuous, strictly increasing function and satisfies\n\\begin{equation}\\label{eq:definitionofF}\ns = n \\om_n \\int_0^{F(s)} (\\sinh r)^{n-1} dr\n\\end{equation}\nfor any $s\\geqslant 0$. Depending on $m$ and for clarity, we divide this part into several small steps as follows.\n\n\\subsubsection{The case $m=1$} \n\nLet $u \\in W^{1,p}(\\mathbb H^n)$ arbitrary. Upon normalization, if necessary, we can assume that $\\|\\nabla_g u\\|_p =1$. Then by the Polya--Szeg\\\"o principle we know that $\\|\\nabla_g u^\\sharp\\|_p \\leqslant 1$. Recall, by definition, that $u^\\sharp(x) = u^*(|B(0, d(0,x))|)$. Let $\\mu_u$ denote the distribution function of $u$. For $t >0$, let $\\rho(t)$ denote the radius of the ball having hyperbolic volume $\\mu_u(t)$. 
Then, we have\n\\[\n\\mu_u(t) = \\int_{B(0,\\rho(t))} dV_g = n\\om_n \\int_0^{\\rho(t)} (\\sinh s)^{n-1} ds.\n\\] \nFrom this and the definition of the function $F$, it is easy to check that\n\\[\n\\rho(t) = F(\\mu_u(t)).\n\\] \nWe now define\n\\begin{equation}\\label{eq:vphi}\n\\varphi (s) = (n \\om_n)^{-p\/(p-1)} \\int_s^{+\\infty} (\\sinh F(t))^{-p(n-1)\/(p-1)} dt\n\\end{equation}\nand choose\n\\begin{equation}\\label{eq:g}\ng( \\varphi (s)) = u^*(s).\n\\end{equation}\nClearly the function $\\varphi$ is decreasing with\n\\[\n-\\varphi '(s) = \\big( n \\om_n (\\sinh F(s))^{n-1} \\big)^{-p\/(p-1)}.\n\\]\nConcerning to the function $g$, it is increasing and\n\\[\n\\int_0^{+\\infty} (g'(s))^p ds = \\int_{\\mathbb H^n} |\\nabla_g u^\\sharp|^p dV_g \\leqslant 1.\n\\]\nDenote $\\underline g = (g')^*$ the decreasing rearrangement of $g'$ on $(0,\\infty)$ and set\n\\begin{equation*}\nf(s) = \\int_0^{\\varphi (s)} \\underline g(t) dt.\n\\end{equation*}\nWe have $f(s) \\geqslant u^*(s)$ and \n\\[\n\\int_0^{+\\infty} \\underline g(s)^p ds =\\int_0^{+\\infty} (g'(s))^p ds \\leqslant 1.\n\\] \nVia integration by parts, we have for any $0< a < b < +\\infty$\n\\begin{equation}\\label{eq:IBPconstant}\n\\begin{split}\n\\int_a^b f(s)^p ds = & -p\\int_a^b s \\varphi '(s) \\underline g(\\varphi (s)) f(s)^{p-1} ds \\\\\n&+ b\\Big(\\int_0^{\\varphi (b)} \\underline g(s) ds\\Big)^p -a\\Big(\\int_0^{\\varphi (a)} \\underline g(s) ds\\Big)^p.\n\\end{split}\n\\end{equation}\nNext we show that \n\\begin{equation}\\label{eq:limit0}\n\\lim_{a\\to 0} a\\Big(\\int_0^{\\varphi (a)} \\underline g(s) ds\\Big)^p = \\lim_{b\\to +\\infty} b\\Big(\\int_0^{\\varphi (b)} \\underline g(s) ds\\Big)^p =0.\n\\end{equation}\nIndeed, for any $\\varepsilon > 0$, there is $R > 0$ such that $\\int_R^{+\\infty} \\underline g(s)^p ds < \\varepsilon^p$, take $s_0$ such that $\\varphi (s_0) =R$, for $0 < a < s_0$ we have\n\\begin{align*}\n\\int_0^{\\varphi (a)} \\underline g(s) ds &= \\int_0^{\\varphi (s_0)} \\underline g(s) ds + \\int_{\\varphi (s_0)}^{\\varphi (a)} \\underline g(s) ds\\\\\n&\\leqslant \\int_0^{\\varphi (s_0)} \\underline g(s) ds +\\Big(\\int_{\\varphi (s_0)}^{\\varphi (a)} \\underline g(s)^p ds\\Big)^{1\/p} \\big(\\varphi (a) -\\varphi (s_0) \\big)^{(p-1)\/p}\\\\\n&\\leqslant \\int_0^{\\varphi (s_0)} \\underline g(s) ds +\\varepsilon \\big( \\varphi (a) -\\varphi (s_0) \\big)^{(p-1)\/p}.\n\\end{align*}\nSince $n\\om_n (\\sinh F(s))^{n-1} \\geqslant (n-1) s$ for all $s > 0$, we conclude that\n\\[\\begin{split}\n\\varphi (a) -\\varphi (s_0) \\leqslant &\\int_a^{s_0} \\big( (n-1)s \\big)^{-p\/(p-1)}ds \\\\\n= &(n-1)^{-p\/(p-1)} (p-1) \\big(a^{-1\/(p-1)} -s_0^{-1\/(p-1)} \\big).\n\\end{split}\\]\nTherefore we get\n\\begin{align*}\n\\limsup_{a\\to 0} a\\Big(\\int_0^{\\varphi (a)} \\underline g(s) ds\\Big)^p \\leqslant& \\limsup_{a\\to 0} a\\Big(\\int_0^{\\varphi (s_0)} \\underline g(s) ds +\\varepsilon \\big(\\varphi (a) -\\varphi (s_0) \\big)^{(p-1)\/p}\\Big)^p\\\\\n=&\\limsup_{a\\to 0} \\Big[ a \\varepsilon^p (\\varphi (a) -\\varphi (s_0))^{p-1} \\Big]\\\\\n\\leqslant &(n-1)^{-p} (p-1)^{p-1} \\varepsilon^p \\\\\n& \\times \\limsup_{a\\to 0} a \\big(a^{-1\/(p-1)} -s_0^{-1\/(p-1)} \\big)^{p-1}\\\\\n=&(n-1)^{-p}(p-1)^{p-1} \\varepsilon^p.\n\\end{align*}\nSince $\\varepsilon >0$ is arbitrary, we get\n\\[\n\\limsup_{a\\to 0} a\\Big(\\int_0^{\\varphi (a)} \\underline g(s) ds\\Big)^p =0.\n\\]\nThe second limit in \\eqref{eq:limit0} follows from the H\\\"older inequality. 
Indeed, first we have\n\\[\n\\Big(\\int_0^{\\varphi (b)} \\underline g(s) ds\\Big)^p \\leqslant \\varphi (b)^{p-1}\\int_0^{\\varphi (b)} \\underline g(s)^p ds .\n\\]\nObserve that\n\\begin{equation}\\label{eq:boundofvphi}\n\\begin{split}\n\\varphi (b) \\leqslant &(n-1)^{-p\/(p-1)}\\int_b^{+\\infty} s^{-p\/(p-1)}ds \\\\\n \\leqslant &(n-1)^{-p\/(p-1)}(p-1) b^{- 1\/(p-1)},\n\\end{split}\n\\end{equation}\nwhich helps us to obtain\n\\[\nb\\Big(\\int_0^{\\varphi (b)} \\underline g(s) ds\\Big)^p \\leqslant (n-1)^{-p} (p-1)^{p-1} \\int_0^{\\varphi (b)} \\underline g(s)^p ds.\n\\]\nFrom this the conclusion follows by the fact $\\lim_{b\\to 0} \\int_0^{\\varphi (b)} \\underline g(s)^p ds =0$ since $\\varphi (b)$ tends to $0$ as $b$ tends to $0$. Thanks to $\\varphi ' \\leqslant 0$, we can denote \n\\[\nh(s) = \\underline g(\\varphi (s)) (-\\varphi '(s))^{1\/p}.\n\\]\nClearly, $\\int_0^{+\\infty} h(s)^p ds \\leqslant 1$. Making use of the H\\\"older inequality and \\eqref{eq:IBPconstant}, we can estimate $\\int_a^b f(s)^p ds$ as follows\n\\begin{align*}\n\\int_a^b f(s)^p ds \\leqslant & p \\left(\\int_a^b \\big( -\\varphi '(s) s \\underline g(\\varphi (s))\\big)^p ds\\right)^{1\/p} \\left(\\int_a^b f(s)^p ds\\right)^{(p-1)\/p} \\\\\n&\\qquad\\qquad + b\\Big(\\int_0^{\\varphi (b)} \\underline g(s) ds\\Big)^p -a\\Big(\\int_0^{\\varphi (a)} \\underline g(s) ds\\Big)^p.\n\\end{align*}\nDividing both sides by $\\big(\\int_a^b f(s)^p ds\\big)^{(p-1)\/p}$, then letting $a \\searrow 0$ and $b \\nearrow +\\infty$ and using \\eqref{eq:limit0}, we obtain\n\\begin{equation}\\label{eq:case1}\n\\Big(\\int_0^{+\\infty} f(s)^p ds\\Big)^{1\/p} \\leqslant p \\left(\\int_a^b \\big[ -\\varphi '(s) s \\underline g(\\varphi (s))\\big]^p ds\\right)^{1\/p}.\n\\end{equation}\nNote that the inequality $n\\om_n (\\sinh F(s))^{n-1} > (n-1) s$ for any $s>0$ and the definition of $\\varphi$ imply that\n\\[\n\\big(-\\varphi '(s) \\big)^{(p-1)\/p} s < (n-1)^{-1}, \\quad\\forall s >0.\n\\]\nCombining the latter inequality and \\eqref{eq:case1}, we obtain\n\\[\n\\Big(\\int_0^{+\\infty} f(s)^p ds\\Big)^{1\/p} < \\frac p{n-1} \\left(\\int_0^\\infty h(s)^p ds\\right)^{1\/p} \\leqslant \\frac p{n-1},\n\\]\nSince $u^* \\leqslant f$, we have\n\\[\n\\left(\\int_{\\mathbb H^n} |u|^p dV_g\\right)^{1\/p} = \\left(\\int_0^{+\\infty} (u^*(s))^p ds\\right)^{1\/p} \\leqslant \\left(\\int_0^{+\\infty} f(s)^p ds\\right)^{1\/p} < \\frac p{n-1},\n\\]\nfor any function $u\\in W^{1,p}(\\mathbb H^n)$ with $\\|\\nabla_g u\\|_p =1$. This proves \\eqref{eq:sharpPoincare} for the case $m=1$ and also shows that the constant $C(n,1,p)$ is not achieved.\n\n\\subsubsection{The case $m=2$} \n\nFor any function $u\\in W^{2,p}(\\mathbb H^n)$ such that $\\|\\Delta_g u\\|_p = 1$, denote $f = -\\Delta_g u$. 
It was proved in \\cite{NgoNguyen16} that\n\\[\nu^*(s) \\leqslant \\int_s^{+\\infty} \\frac{t f^{**}(t)}{[n \\omega_n (\\sinh F(t))^{n-1}]^2}dt =: h(s), \\quad\\forall s>0.\n\\]\nAs in the case $m=1$, we can easily prove that\n\\[\n\\lim_{s\\to 0^+} s h(s)^p = \\lim_{s\\to +\\infty} s h(s)^p = 0.\n\\]\nFor any $b> a> 0$, using integration by parts and the H\\\"older inequality imply that\n\\begin{align*}\n\\int_a^b h(s)^p ds &= p \\int_a^b h(s)^{p-1} \\frac{s^2 f^{**}(s)}{[n \\om_n (\\sinh F(s))^{n-1}]^2} ds + bh(b)^p -a h(a)^p\\\\\n&\\leqslant p \\left(\\int_a^b h(s)^p ds\\right)^{(p-1)\/p} \\left(\\int_a^b \\left[\\frac{s^2 f^{**}(s)}{[n \\om_n (\\sinh F(s))^{n-1}]^2}\\right]^p ds\\right)^{1\/p}\\\\\n&\\qquad\\qquad +b h(b)^p -a h(a)^p.\n\\end{align*}\nDividing both sides by $\\big(\\int_a^b h(s)^p ds\\big)^{1\/p}$ and letting $a \\searrow 0$ and $b \\nearrow +\\infty$, we obtain\n\\begin{equation}\\label{eq:case2}\n\\left(\\int_0^{+\\infty} h(s)^p ds\\right)^{1\/p} \\leqslant p \\left(\\int_0^{+\\infty} \\left[\\frac{s^2 f^{**}(s)}{[n \\om_n (\\sinh F(s))^{n-1}]^2}\\right]^p ds\\right)^{1\/p}.\n\\end{equation}\nUsing the inequalities $n \\omega_n (\\sinh F(s))^{n-1} > (n-1) s$ for any $s>0$, \\eqref{eq:Hardyinequality} and \\eqref{eq:case2}, we have\n\\[\n\\left(\\int_0^{+\\infty} h(s)^p ds\\right)^{1\/p} < \\frac{pp'}{(n-1)^2} \\left(\\int_0^{+\\infty} f^*(s)^p ds\\right)^{1\/p} \\leqslant \\frac{pp'}{(n-1)^2}.\n\\]\nSince $u^* \\leqslant h$, we then obtain\n\\[\n\\left(\\int_{\\mathbb H^n} |u|^p dV_g\\right)^{1\/p} = \\left(\\int_0^{+\\infty} (u^*(s))^p ds\\right)^{1\/p} \\leqslant \\left(\\int_0^{+\\infty} h(s)^p ds\\right)^{1\/p} < \\frac {pp'}{(n-1)^2},\n\\]\nfor any function $u\\in W^{2,p}(\\mathbb H^n)$ with $\\|\\Delta_g u\\|_p =1$. This proves \\eqref{eq:sharpPoincare} for the case $m=2$ and also shows that the constant $C(n,2,p)$ is not achieved.\n\n\\subsubsection{The case $m >2$}\n\nIn this scenario, we have two possible cases:\n\n\\noindent\\textbf{Case 1}. Suppose that $m=2k$ is even. Clearly, this case follows from the case $m=2$ by repeating $k$ times as follows\n\\[\\begin{split}\n\\|u\\|_p \\leqslant \\frac{p p'}{(n-1)^2} \\|\\Delta_g u\\|_p \\leqslant & \\Big(\\frac{p p'}{(n-1)^2} \\Big)^2 \\|\\Delta_g^2 u\\|_p \\\\\n\\leqslant & \\cdots \\leqslant \\Big(\\frac{p p'}{(n-1)^2} \\Big)^k \\|\\Delta_g^k u\\|_p.\n\\end{split}\\] \n\n\\noindent\\textbf{Case 2}. Suppose that $m=2k+1$ is odd. This case can also be derived from the cases $m=1$ and $m=2$ as the following\n\\[\\begin{split}\n\\|u\\|_p \\leqslant \\frac p{n-1} \\|\\nabla_g u\\|_p \\leqslant & \\frac p{n-1} \\frac{p p'}{(n-1)^2}\\|\\nabla_g (\\Delta_g u)\\|_p \\\\\n \\leqslant & \\cdots \\leqslant \\frac p{n-1} \\Big(\\frac{p p'}{(n-1)^2} \\Big)^k \\|\\nabla_g (\\Delta_g^k u)\\|_p.\n\\end{split}\\] \n\nWe now move to the second part of the proof. We shall prove the sharpness of $C(n,m,p)$ given in \\eqref{eqConstantC} in the next subsection.\n\n\\subsection{The sharpness of $C(n,m,p)$}\n\nIt remains to check the sharpness of the constant $C(n,m,p)$. To do this, we will construct a function $u$ in such a way that $\\|\\nabla^m_g u\\|_p\/ \\|u\\|_p$ approximates $C(n,m,p)^{-1}$. 
Observe from \\eqref{eq:definitionofF} that \n$$n\\om_n (\\sinh F(s))^{n-1}\\geqslant (n-1) s$$ \nfor any $s\\geqslant 0$ and\n\\[\n\\lim_{s\\to \\infty} \\frac{n\\om_n (\\sinh F(s))^{n-1}}{(n-1) s} = 1.\n\\]\nHence, for any $\\varepsilon >0$, there is $s_0$ such that\n\\[\n(n-1) s \\leqslant n\\om_n (\\sinh F(s))^{n-1} \\leqslant (1+\\varepsilon) (n-1) s\n\\]\nfor all $s \\geqslant s_0$. For any $R > s_0$, let us construct a positive, continuous, non-increasing function $f_R$ on $[0,\\infty)$ given by\n\\begin{equation}\\label{eq:FunctionF_R}\nf_R(s) = \n\\begin{cases}\ns_0^{-1\/p}&\\mbox{if $s\\in (0,s_0)$}\\\\\ns^{-1\/p} &\\mbox{if $s\\in [s_0,R)$}\\\\\nR^{-1\/p} \\max\\{2- s\/R, 0\\} &\\mbox{if $s\\geqslant R$}.\n\\end{cases}\n\\end{equation}\nThen we define two sequences of functions $\\{v_{R,i}\\}_{i \\geqslant 0}$, $\\{g_{R,i}\\}_{i \\geqslant 1}$ as follows: first we set $v_{R,0} = f_R$, then we define $g_{R,i+1}$ as the maximal function of $v_{R,i}$, that is\n\\[\ng_{R,i+1}(s) = \\frac1s \\int_0^s v_{R,i}(t) dt,\n\\]\nand then\n\\[\nv_{R,i+1}(s) = \\int_s^{+\\infty}\\frac{t g_{R,i+1}(t)}{(n\\om_n (\\sinh F(t))^{n-1})^2} dt,\n\\]\nfor $i=0, 1,2,...$ Note that $v_{R,i}$ and $g_{R,i}$ are non-increasing functions. We can explicitly compute the function $g_{R,1}$ as follows: When $s < R$ we have\n\\[\ng_{R,1}(s) =\n\\begin{cases}\ns_0^{-1\/p}&\\mbox{if $s\\in (0,s_0)$}\\\\\np's^{-1\/p} -s_0^{1-1\/p}\/((p-1)s) &\\mbox{if $s\\in [s_0,R)$,}\n\\end{cases}\n\\]\nwhile for $s\\in [R,2R)$ we have\n\\[\ng_{R,1}(s) = \\bigg(\\Big(p'-\\frac 32\\Big) R^{1-1\/p} - \\frac{s_0^{1-1\/p}}{p-1}\\bigg)\\frac1s +2R^{-1\/p} -\\frac{R^{-1-1\/p} s}2, \\quad\\forall\\, s\\in [R,2R),\n\\]\nand finally when $s \\geqslant 2R$ we have\n\\[\ng_R(s) = \\bigg(p' R^{1-1\/p} - \\frac{s_0^{1-1\/p}}{p-1}\\bigg)\\frac1s + \\frac{R^{1-1\/p}}{2s},\\quad\\forall\\, s\\geqslant 2R.\n\\]\nNote that\n\\[\n\\int_R^{+\\infty} g_R(s)^p ds \\leqslant C\n\\]\nfor some constant $C >0$ independent of $R$\n\nIn the sequel, we use $C$ to denote various constants which are independent of $R$ and whose values can change from line to line and even in one line if no confusion occurs. We will need the following result.\n\n\\begin{proposition}\\label{decomposefunction}\nFor any $i\\geqslant 1$, there exist functions $h_{R,i}$ and $w_{R,i}$ such that $v_{R,i} = h_{R,i} + w_{R,i}$, $\\int_0^{+\\infty} |w_{R,i}|^p ds \\leqslant C$ and \n\\[\n\\frac1{(1+\\varepsilon)^{2i}} \\Big(\\frac{pp'}{(n-1)^2}\\Big)^i f_R \\leqslant h_{R,i} \\leqslant \\Big(\\frac{pp'}{(n-1)^2}\\Big)^i f_R.\n\\]\n\\end{proposition}\n\n\\begin{proof}\nLet us define the operator $T$ acting on functions $v$ on $[0,+\\infty)$ by\n\\[\nTv(s) = \\int_s^{+\\infty} \\frac{r}{(n\\om_n (\\sinh F(r))^{n-1})^2} \\Big(\\frac1r \\int_0^r v(t) dt\\Big) dr.\n\\]\nFor simplicity, for each function $v$ on $[0,+\\infty)$ we define an associated function $\\overline{v}$ on $\\mathbb H^n$ by \n\\[\n\\overline{v}(x) = v(|B(0,d(0,x))|).\n\\]\nWith these notation, it is not hard to see that\n\\[\n\\|\\o{w_{R,i}}\\|_p = \\Big(\\int_0^{+\\infty} |w_{R,i}(s)|^p ds\\Big)^{1\/p}\n\\]\nfor any $i\\geqslant 1$ and \n$$-\\Delta_g \\o{Tw_{R,i}}(x) = \\o{w_{R,i}}(x)$$ \nfor any $x\\in \\mathbb H^n$. Hence, by the Poincar\\'e inequality, we have\n\\[\n\\int_0^{+\\infty} |Tw_{R,i}(s)|^p ds = \\|\\o{Tw_{R,i}}\\|_p^p \\leqslant C \\|\\o{w_{R,i}}\\|_p^p = C \\Big(\\int_0^{+\\infty} |w_{R,i}(s)|^p ds\\Big)^{1\/p}.\n\\]\nThus, using an induction argument, it is enough to prove this proposition for $i=1$. 
We will perform several explicit estimation for the function $v_{R,1}$. Note that for $s\\geqslant s_0$ we have\n\\[\n(n-1) s \\leqslant n\\om_n (\\sinh F(s))^{n-1} \\leqslant (1+\\varepsilon) (n-1) s.\n\\]\n\\noindent\\textbf{Estimate of $v_{R,1}$ when $s \\geqslant 2R$}. Clearly for $s \\geqslant 2R$, we have\n\\begin{align*}\nv_{R,1}(s)& \\leqslant \\frac1{(n-1)^2}\\int_s^{+\\infty} \\frac{(p'+1\/2)R^{1-1\/p} -s_0^{1-1\/p}\/(p-1)}{t^2} dt\\\\\n&=\\frac1{(n-1)^2}\\frac{(p'+1\/2)R^{1-1\/p} -s_0^{1-1\/p}\/(p-1)}{s},\n\\end{align*}\nand similarly we have\n\\[\nv_{R,1}(s) \\geqslant \\frac1{(1+\\varepsilon)^2(n-1)^2}\\frac{(p'+1\/2)R^{1-1\/p} -s_0^{1-1\/p}\/(p-1)}{s}.\n\\]\nThus an easy calculation shows that\n\\begin{equation}\\label{eqEstimateVOut2R}\n\\int_{2R}^{+\\infty} v_{R,1}(s)^p ds \\leqslant C.\n\\end{equation}\n \n\\noindent\\textbf{Estimate of $v_{R,1}$ when $R \\leqslant s < 2R$}. For $s\\in [R,2R)$, we first write\n\\[\nv_{R,1}(s) = v_{R,1}(2R) + \\int_s^{2R} \\frac{t g_{R,1}(t)}{(n\\om_n (\\sinh F(t))^{n-1})^2}dt.\n\\]\nThen we can estimate\n\\[\\begin{split}\nv_{R,1}(2R) +& \\frac1{(1+\\varepsilon)^2(n-1)^2}\\int_s^{2R} \\frac{g_{R,1}(t)}t dt \\\\\n&\\leqslant v_{R,1}(s) \\leqslant v_{R,1}(2R)+ \\frac1{(n-1)^2}\\int_s^{2R} \\frac{g_{R,1}(t)}t dt.\n\\end{split}\\]\nNote that $v_{R,1}(2R)$ is equivalent to $R^{-1\/p}$ and\n\\begin{align*}\n\\int_s^{2R} \\frac{g_{R,1}(t)}t dt &=\\lt(\\Big(p'-\\frac32\\Big) R^{1-1\/p} - \\frac{s_0^{1-1\/p}}{p-1}\\rt) \\Big(\\frac1s-\\frac1{2R}\\Big)\\\\\n&\\qquad\\qquad\\qquad +2R^{-1\/p} \\ln \\frac{2R}s -\\frac{R^{-1-1\/p} (2R-s)}2.\n\\end{align*}\nThis shows that\n\\begin{equation}\\label{eqEstimateVInR2R}\n\\int_R^{2R} v_{R,1}(s)^p ds \\leqslant C,\n\\end{equation}\nand that $v_{R,1}(R)$ is equivalent to $R^{-1\/p}$. Combining the estimates \\eqref{eqEstimateVOut2R} and \\eqref{eqEstimateVInR2R} gives $\\int_R^{+\\infty} v_{R,1}(s)^p ds \\leqslant C$. \n\n\\noindent\\textbf{Estimate of $v_{R,1}$ when $s_0 \\leqslant s < R$}. For $s\\in [s_0,R)$, we also write\n\\[\nv_{R,1}(s) = v_{R,1}(R) + \\int_s^{R} \\frac{t g_{R,1}(t)}{(n\\om_n (\\sinh F(t))^{n-1})^2}dt.\n\\]\nThus\n\\[\nv_{R,1}(R)+ \\frac1{(1+\\varepsilon)^2(n-1)^2}\\int_s^{R} \\frac{g_{R,1}(t)}t dt\\leqslant v_{R,1}(s) \\leqslant v_{R,1}(R)+ \\frac1{(n-1)^2}\\int_s^{R} \\frac{g_{R,1}(t)}t dt.\n\\]\nA simple computation gives\n\\[\n\\int_s^{R} \\frac{g_{R,1}(t)}t dt = pp'(s^{-1\/p} -R^{-1\/p}) -\\frac{s_0^{1-1\/p}}{p-1} \\Big(\\frac1s -\\frac1R\\Big),\n\\]\nwhich implies that\n\\[\n\\int_{s_0}^R \\lt|\\int_s^{R} \\frac{g_{R,1}(t)}t dt - \\frac{pp'}{s^{1\/p}}\\rt|^p ds \\leqslant C.\n\\]\n\n\\noindent\\textbf{Estimate of $v_{R,1}$ when $s 0$ is arbitrary, we obtain \n\\[\n\\inf_{u\\in W_0^{1,p}(\\mathbb H^n)\\backslash \\{0\\}} \\frac{\\int_{\\mathbb H^n} |\\nabla_g u|^p dV_g}{\\int_{\\mathbb H^n} |u|^p dV_g} \\leqslant \\Big(\\frac{n-1}{p}\\Big)^p.\n\\]\nHence the preceding inequality becomes equality. This proves the sharpness of $C(n,1,p)$. 
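As an informal numerical illustration of this sharpness statement (it is not part of the proof), one may evaluate the ratio $\|u\|_p/\|\nabla_g u\|_p$ on the radial test functions $u(r) = e^{-\alpha r}$ with $\alpha > (n-1)/p$, where $r = d(0,x)$: for radial functions $|\nabla_g u|_g = |u'(r)|$ and, up to the common factor $n\omega_n$ which cancels in the ratio, both norms reduce to one-dimensional integrals against the weight $(\sinh r)^{n-1}$, so the ratio equals $1/\alpha$ and approaches $p/(n-1)$ as $\alpha \downarrow (n-1)/p$. A short script along these lines (the truncation of the integrals at a finite radius is harmless for this particular family):
\begin{verbatim}
# Informal numerical check of the case m = 1 (not part of the proof).
import numpy as np
from scipy.integrate import quad

def ratio(u, du, n, p, r_max=80.0):
    # ||u||_p / ||grad_g u||_p for a radial function u(r) on H^n; the common
    # factor n*omega_n in front of both integrals cancels in the ratio.
    num = quad(lambda r: abs(u(r))**p * np.sinh(r)**(n - 1), 0, r_max)[0]
    den = quad(lambda r: abs(du(r))**p * np.sinh(r)**(n - 1), 0, r_max)[0]
    return (num / den) ** (1.0 / p)

n, p = 4, 2.0                            # sharp constant p/(n-1) = 2/3 here
for alpha in [2.0, 1.7, 1.55, 1.51]:     # integrable iff alpha > (n-1)/p = 1.5
    u = lambda r, a=alpha: np.exp(-a * r)
    du = lambda r, a=alpha: -a * np.exp(-a * r)
    print(alpha, ratio(u, du, n, p))     # increases towards 2/3, never reaches it
\end{verbatim}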
Next, we move to a proof for the sharpness of $C(n,2,p)$.\n\n\\subsubsection{The sharpness of $C(n,2,p)$}\n\nIn this case, we set $u_R(x) = v_{R,1}(|B(0,d(0,x))|)$, then we have \n\\[\n-\\Delta_g u_R(x) = f_R(|B(0,d(0,x))|).\n\\]\nUsing this fact and \\eqref{eqIDENTITY}, we easily obtain\n\\begin{equation}\\label{eqIntegralDeltaU}\n\\int_{\\mathbb H^n} |\\Delta_g u_R|^p dV_g = \\int_0^{+\\infty} f_R(s)^p ds = 1+ \\ln (R\/s_0) + \\int_0^1 (1-s)^pds.\n\\end{equation}\nBy Proposition \\ref{decomposefunction}, we have\n\\begin{align*}\n\\|u_R\\|_{p} &= \\Big(\\int_0^{+\\infty} v_{R,1}(s)^p ds\\Big)^{1\/p} \\\\\n&\\geqslant \\Big(\\int_0^{+\\infty} h_{R,1}(s)^p ds\\Big)^{1\/p} - \\Big(\\int_0^{+\\infty} |w_{R,1}|^p ds\\Big)^{1\/p}\\\\\n&\\geqslant \\frac1{(1+\\varepsilon)^2} \\frac{pp'}{(n-1)^2}\\Big(\\int_0^{+\\infty} f_R(s)^p ds\\Big)^{1\/p} -C\\\\\n&= \\frac1{(1+\\varepsilon)^2} \\frac{pp'}{(n-1)^2} \\Big(1 + \\ln(R\/s_0) + \\int_0^1(1-t)^p dt\\Big)^{1\/p} -C.\n\\end{align*}\nCombing this estimate and \\eqref{eqIntegralDeltaU} gives\n\\[\nC(n,2,p) \\geqslant \\liminf_{R\\to +\\infty} \\frac{\\|u_R\\|_p}{\\|\\Delta_g u_R\\|_p} \\geqslant \\frac1{(1+\\varepsilon)^2}\\frac{pp'}{(n-1)^2}.\n\\]\nSince $\\varepsilon >0$ is arbitrary, we conclude that\n\\[\nC(n,2,p) \\geqslant \\frac{pp'}{(n-1)^2}\n\\]\nand this finishes our proof for the case $m=2$.\n\n\\subsubsection{The sharpness of $C(n,2k,p)$ with $k\\geqslant 2$} \n\nIn this case, we set \n$$u_R(x) = v_{R,k}(|B(0,d(0,x))|),$$\nthen it is clear to see that \n$$(-\\Delta_g)^k u_R(x) = f_R (|B(0,d(0,x))|).$$ \nBy Proposition \\ref{decomposefunction}, we can write $v_{R,k} = h_{R,k} + w_{R,k}$ with $\\int_0^{+\\infty} |w_{R,k}|^p ds \\leqslant C$ and \n\\[\n\\frac1{(1+\\varepsilon)^{2k}} \\Big(\\frac{pp'}{(n-1)^2}\\Big)^k f_R \\leqslant h_{R,k} \\leqslant \\Big(\\frac{pp'}{(n-1)^2}\\Big)^k f_R.\n\\]\nUsing again the argument in proving the sharpness of $C(n,2,p)$, we obtain the sharpness of $C(n,2k,p)$.\n\n\\subsubsection{The sharpness of $C(n,2k+1,p)$ with $k\\geqslant 1$} \n\nIn the previous argument, we can find a function $u_R$ on $\\mathbb H^n$ such that \n$$(-\\Delta_g)^k u_R(x) = f_R(|B(0,d(0,x))|)$$\nand that\n\\[\n\\|u_R\\|_p \\geqslant \\frac1{(1+\\varepsilon)^{2k}} \\Big(\\frac{pp'}{(n-1)^2}\\Big)^k \\Big(\\int_0^{+\\infty} f_R(s)^p ds\\Big)^{1\/p}- C.\n\\]\nFrom the proof of the sharpness of $C(n,1,p)$, we know that\n\\[\n\\int_{\\mathbb H^n} |\\nabla_g (\\Delta_g^k u_R) |^p dV_g\\leqslant \\Big( \\frac{ n-1 }{p} \\Big)^p (1+\\varepsilon)^p \\ln \\frac R{s_0} + (n-1)^p (1+\\varepsilon)^p \\int_0^1(1-t)^p dt.\n\\]\nCombining these two estimate implies the sharpness of $C(n,2k+1,p)$.\n\n\n\n\n\\section*{Acknowledgments}\n\nV.H.N would like to acknowledge the support of the CIMI postdoctoral research fellowship. The research of Q.A.N is funded by the VNU University of Science under project number TN.16.01.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{ALP-GMM}\n\\label{app-alp-gmm}\nALP-GMM \\cite{portelas2019} (MIT license) relies on an empirical per-task computation of Absolute Learning Progress (ALP), allowing to fit a GMM on a concatenated space composed of tasks' parameters and respective ALP. 
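To make this mechanism concrete, the following minimal Python sketch (our own illustrative re-implementation relying on scikit-learn, not the authors' released code; task parameters are assumed to be rescaled to $[0,1]^d$) summarizes how parameter-ALP pairs are turned into a sampling distribution; the precise definition of ALP, the fitting schedule and the full pseudo-code are given in the remainder of this section.
\begin{verbatim}
# Illustrative sketch of an ALP-GMM teacher (simplified; names are ours).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import NearestNeighbors

class ALPGMMSketch:
    def __init__(self, p_dim, n_fit=250, k_max=10, rho_rnd=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.p_dim = p_dim        # dimension of the task parameter space P
        self.n_fit = n_fit        # refit the GMM every n_fit episodes
        self.k_max = k_max        # candidate GMMs have 2..k_max components
        self.rho_rnd = rho_rnd    # residual random task sampling
        self.hist_p, self.hist_r = [], []   # all (parameter, reward) pairs
        self.window = []          # (parameter, ALP) pairs; last n_fit are fitted
        self.gmm = None

    def update(self, p, r):
        # ALP of the new task: |r_new - r_old| for the closest previous task.
        if self.hist_p:
            knn = NearestNeighbors(n_neighbors=1).fit(np.array(self.hist_p))
            _, idx = knn.kneighbors([list(p)])
            alp = abs(r - self.hist_r[idx[0][0]])
        else:
            alp = 0.0
        self.hist_p.append(list(p)); self.hist_r.append(r)
        self.window.append(list(p) + [alp])
        if len(self.window) % self.n_fit == 0:          # periodic refit
            data = np.array(self.window[-self.n_fit:])
            fits = [GaussianMixture(k).fit(data) for k in range(2, self.k_max + 1)]
            self.gmm = min(fits, key=lambda g: g.aic(data))  # best AIC wins

    def sample_task(self):
        if self.gmm is None or self.rng.random() < self.rho_rnd:
            return self.rng.random(self.p_dim)           # random exploration
        alp = np.abs(self.gmm.means_[:, -1])             # mean ALP per Gaussian
        if alp.sum() == 0:
            return self.rng.random(self.p_dim)
        k = self.rng.choice(len(alp), p=alp / alp.sum()) # proportional choice
        p = self.rng.multivariate_normal(self.gmm.means_[k, :-1],
                                         self.gmm.covariances_[k][:-1, :-1])
        return np.clip(p, 0.0, 1.0)
\end{verbatim}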
Given a task $a_{new} \\in \\mathcal{A}$ whose parameter is $p_{new} \\in \\mathcal{P}$ and on which the student's policy collected the episodic reward $r_{new} \\in \\mathbb{R}$, Its ALP is computed using the closest previous tasks $a_{old}$ (Euclidean distance) with associated episodic reward $r_{old}$:\n\\begin{equation}\n \\label{eq:2}\n alp_{new} = |r_{new} - r_{old}|\n\\end{equation}\nAll previously encountered task's parameters and their associated ALP, parameter-ALP for short, recorded in a history database $H$, are used for this computation. Contrastingly, the fitting of the GMM is performed every $N$ episodes on a window $\\mathcal{W}$ containing the $N$ most recent parameter-ALP. The resulting mean ALP dimension of each Gaussian of the GMM is used for proportional sampling. To adapt the number of components of the GMM online, a batch of GMMs having from 2 to $k_{max}$ components is fitted on $\\mathcal{W}$, and the best one, according to Akaike's Information Criterion \\cite{aic}, is kept as the new GMM. In all of our experiments we use the same hyperparameters as in \\cite{portelas2019} ($N=250$, $k_{max}=10$), except for the percentage of random task sampling $\\rho_{rnd}$ which we set to $10\\%$ (we found it to perform better than $20\\%$) when running ALP-GMM. See algorithm \\ref{algo:ALP-GMM} for pseudo-code and figure \\ref{ALP-GMM-pipeline} for a schematic pipeline. Note that in the main body of this paper we refer to ALP as LP for simplicity (ie. $LP_{ti}$ in $\\mathcal{C}$ from eq. \\ref{eq:craw} is equivalent to the mean ALP of Gaussians in ALP-GMM).\n\n\n\n\n\\begin{algorithm*}[htb!]\n\t\\caption{~ Absolute Learning Progress Gaussian Mixture Model (ALP-GMM)}\n\t\\label{algo:ALP-GMM}\n\t\\begin{algorithmic}\n\t\n\t\\REQUIRE Student policy $s_\\theta$, parametric procedural environment generator $E$, bounded parameter space $\\mathcal{P}$, probability of random sampling $\\rho_{rnd}$, fitting rate $N$, max number of Gaussians $k_{max}$\n\t\\vspace{0.2cm}\n\t\\STATE Initialize $s_\\theta$\n\t\\STATE Initialize parameter-ALP First-in-First-Out window $\\mathcal{W}$, set max size to $N$\n\t\\STATE Initialize parameter-reward history database $H$\n\t\\LOOP[{$N$ times ~~~~{\\color{gray}~~~\\# Bootstrap phase}}]\n\t \\STATE Sample random $p \\in \\mathcal{P}$, send $E(a \\sim \\mathcal{A}(p))$ to $s_\\theta$, observe episodic reward $r_p$\n\t \\STATE Compute ALP of $p$ based on $r_p$ and $H$ (see equation \\ref{eq:2})\n\t \\STATE Store $(p,r_p)$ pair in $H$, store $(p,ALP_p)$ pair in $\\mathcal{W}$\n\t\\ENDLOOP\n\t\\LOOP[{{\\color{gray}~~~\\# Stop after $K$ inner loops}}]\n\t\\STATE Fit a set of GMM having 2 to $k_{max}$ kernels on $\\mathcal{W}$\n\t\\STATE Select the GMM with best Akaike Information Criterion\n\t\\LOOP[{$N$ times}]\n\t \\STATE $\\rho_{rnd} \\%$ of the time, sample a random parameter $p \\in \\mathcal{P}$\n\t \\STATE Else, sample $p$ from a Gaussian chosen proportionally to its mean ALP value \n\t\t\\STATE Send $E(a \\sim \\mathcal{A}(p))$ to student $s_\\theta$ and observe episodic reward $r_p$\n\t \\STATE Compute ALP of $p$ based on $r_p$ and $H$\n\t \\STATE Store $(p,r_p)$ pair in $H$, store $(p,ALP_p)$ pair in $\\mathcal{W}$\n\t\\ENDLOOP\n\t\\ENDLOOP\n\t\\STATE \\textbf{Return} $s_\\theta$\n\t\n\t\\end{algorithmic}\n\\end{algorithm*}\n\n\\begin{figure*}[ht!]\n\\centering\n\\includegraphics[width=0.80\\textwidth]{graphics\/alp-gmm_pipeline_example_final.pdf}\n\\caption{\\footnotesize{Schematic view of an ALP-GMM teacher's workflow from 
\\cite{portelas2019}}}\n\\label{ALP-GMM-pipeline}\n\\end{figure*}\n\n\\section{AGAIN}\n\\label{app-again}\n\n\n\\paragraph{IN variants.} In order to filter the list $\\mathcal{C}_{raw}$ (see eq. \\ref{eq:craw}) of GMMs extracted from a training trajectory $\\tau_{s}$ selected in training trajectory history $\\mathcal{H}$ into $\\mathcal{C}$ and use it as an expert curriculum, we remove any Gaussian with a $LP_{ti}$ below $\\delta_{LP}=0.2$ (the LP dimension is normalized between $0$ and $1$, which requires to choose an approximate potential reward range, set to $[-150,350]$ for all experiments on Box2D locomotion environments (sec. \\ref{sec:exp:walker-climber} and sec. \\ref{exp:trying-again}). When all Gaussians of a GMM are discarded, the GMM is removed from $\\mathcal{C}$. In practice, it allows to 1) remove non-informative GMMs corresponding to the initial exploration phase of ALP-GMM, when the learner has not made any progress (hence no LP detected by the teacher), and 2) remove an entire training trajectory $\\tau_{s}$ if ALP-GMM never detected high-LP Gaussians, i.e. it failed to train student $s$. $\\mathcal{C}$ is then iterated over to generate a curricula with either of the Time-based (see algo. \\ref{int-algo}), Pool-based (see algo \\ref{inp-algo}) or Reward-based (The one used in our main experiments, see algo \\ref{inr-algo}) IN. The IN-P approach does not require additional hyperparameters. The IN-T requires an update rate $N$ to iterate over $\\mathcal{C}$, which we set to $250$ (same as the fitting rate of ALP-GMM). The IN-R approach requires to extract additional data from the first run, in the form of a list $\\mathcal{R}_{raw}$:\n\\begin{equation}\n\\label{eq:rraw}\n \\mathcal{R}_{raw} = \\{\\mu_r^1, ...,\\mu_r^t, \\mu_r^T\\} ~~s.t~~~ |\\mathcal{R}_{raw}| = |\\mathcal{C}_{raw}|,\n\\end{equation}\nwith T the total number of GMMs in the first run (same as in $\\mathcal{C}_{raw}$), and $\\mu_r^t$ the mean episodic reward obtained by the first DRL agent during the last $50$ tasks sampled from the $t^{th}$ GMM. $\\mathcal{R}$ is simply obtained by removing any $\\mu_r^t$ that corresponds to a GMM discarded while extracting $\\mathcal{C}$ from $\\mathcal{C}_{raw}$. The remaining rewards are then used as thresholds in IN-R to decide when to switch to the next GMM in $\\mathcal{C}$.\n\n\\paragraph{AGAIN} In AGAIN (see algo. \\ref{again-algo}), the idea is to use both IN (R,T or P) and ALP-GMM (without the random bootstrapping period) for curriculum generation. Our main experiments use IN-R as it is the highest performing variant (see app. \\ref{ann:toy-exp}). This means that in the main sections of this paper, AGAIN $=$ AGAIN-R and IN $=$ IN-R. We combine the changing GMM of IN and ALP-GMM over time, simply by building a GMM $G$ containing Gaussians from the current GMM of IN and ALP-GMM. By selecting the Gaussian in $G$ from which to sample a new task using their respective LP, this approach allows to adaptively modulate the task sampling between both, shifting the sampling towards IN when ALP-GMM does not detect high-LP subspaces and towards ALP-GMM when the current GMM of IN have lower-LP Gaussians. While combining ALP-GMM to IN, we reduce the residual random sampling of ALP-GMM from $\\rho_{high}=10\\%$, used for the pretrain phase, to either $\\rho_{low}=2\\%$ for experiments presented in sec. \\ref{sec:exp:toy-env} and sec. \\ref{exp:trying-again}, or $\\rho_{low}=0\\%$ for experiments done in the Parkour environment in sec. 
\\ref{sec:exp:walker-climber} (here we found $\\rho_{low}=0\\%$ to be beneficial in terms of performances w.r.t. $\\rho_{low}=2\\%$, which means that the task-exploration induced by the periodic GMM fit of ALP-GMM was sufficient for exploration). In AGAIN-R and AGAIN-T, when the last GMM $p(T)$ of the IN curriculum is reached, we switch the fixed $LP_{Ti}$ values of all IN Gaussians to periodically updated LP estimates, i.e. we allow AGAIN to modulate the importance of $p(T)$ for task sampling depending on its current student's performance.\n\n\n\\begin{algorithm*}[htb!]\n\t\\caption{~ Pretrain phase (helper function)}\n\t\\label{pret-algo}\n\t\\begin{algorithmic}[1]\n\t\\REQUIRE Student policy $s_\\theta$, teacher training history $\\mathcal{H}$, task-encoding parameter space $\\mathcal{P}$, LP threshold $\\delta_{LP}$, experimental pre-train budget $K_{pre}$, pre-test set size $m$, number of neighbors for student selection $k$, random sampling ratio $\\rho_{high}$, parametric procedural environment generator $E$\n\t\\vspace{0.2cm}\n\t\\STATE Init $s_{\\theta}$, train it for $K_{pre}$ env. steps with ALP-GMM($\\rho_{high}, \\mathcal{P}$) \n\t\\STATE Pre-test $s_{\\theta}$ with $m$ tasks selected uniformly over $\\mathcal{P}$ and get $KC_s^{pre}$ \\COMMENT{{\\color{gray}~~~\\# Pre-test phase}}\n\t\\STATE Apply knn algorithm in KC space of $\\mathcal{H}$, get $k$ students closest to $KC_s^{pre}$ \\STATE Among those $k$, keep the one with highest summed post training $KC^{post}$, extract its $\\mathcal{C}_{raw}$\n\t\\STATE Get $\\mathcal{C}$ from $\\mathcal{C}_{raw}$ by removing any Gaussian with $LP_{ti} < \\delta_{LP}$.\n\t\\STATE \\textbf{Return} $\\mathcal{C}$\n\t\\end{algorithmic}\n\\end{algorithm*}\n\n\\begin{algorithm*}[htb!]\n\t\\caption{~ Inferred progress Niches - Time-based (IN-T)}\n\t\\label{int-algo}\n\t\\begin{algorithmic}[1]\n\t\n\t\\REQUIRE Student policy $s_\\theta$, teacher training history $\\mathcal{H}$, task-encoding parameter space $\\mathcal{P}$, LP threshold $\\delta_{LP}$, update rate $N$, experimental budget $K$, experimental pre-train budget $K_{pre}$, pre-test set size $m$, number of neighbors for student selection $k$, random sampling ratio $\\rho_{high}$, parametric procedural environment generator $E$\n\t\\vspace{0.2cm}\n\t\\STATE Launch Pretrain phase and get expert GMM list $\\mathcal{C}$ \\COMMENT{{\\color{gray}~~~\\# See algo. 
\\ref{pret-algo}}}\n\t\\STATE Initialize expert curriculum index $i_{c}$ to $0$\n\t\\LOOP[{~Stop after $K - K_{pre}$ environment steps}]\n\t \\STATE Set $i_{c}$ to $min(i_{c}+1, len(\\mathcal{C}))$\n\t \\STATE Set current GMM $G_{IN}$ to $i_{c}^{th}$ GMM in $\\mathcal{C}$\n\t \\LOOP[{$N$ times}]\n\t \\STATE Sample $p$ from a Gaussian in $G_{IN}$ chosen proportionally to its $LP_{ti}$\n\t\t\\STATE Send $E(a \\sim \\mathcal{A}(p))$ to student $s_{\\theta}$\n\t\\ENDLOOP\n\t\\ENDLOOP\n\t\\STATE Add student's training trajectory to $\\mathcal{H}$\n\t\\STATE \\textbf{Return} $s_\\theta$\n\t\\end{algorithmic}\n\\end{algorithm*}\n\n\\begin{algorithm*}[htb!]\n\t\\caption{~ Inferred progress Niches - Pool-based (IN-P)}\n\t\\label{inp-algo}\n\t\\begin{algorithmic}[1]\n\t\n\t\\REQUIRE Student policy $s_\\theta$, teacher training history $\\mathcal{H}$, task-encoding parameter space $\\mathcal{P}$, LP threshold $\\delta_{LP}$, update rate $N$, experimental budget $K$, experimental pre-train budget $K_{pre}$, pre-test set size $m$, number of neighbors for student selection $k$, random sampling ratio $\\rho_{high}$, parametric procedural environment generator $E$\n\t\\vspace{0.2cm}\n\t\\STATE Launch Pretrain phase and get expert GMM list $\\mathcal{C}$ \\COMMENT{{\\color{gray}~~~\\# See algo. \\ref{pret-algo}}} \n\t\\STATE Initialize pool GMM $G_{IN}$, containing all Gaussians from $\\mathcal{C}$\n\t\\LOOP[{~Stop after $K - K_{pre}$ environment steps}]\n\t \\STATE Sample $p$ from a Gaussian in $G_{IN}$ chosen proportionally to its $LP_{ti}$\n\t\t\\STATE Send $E(a \\sim \\mathcal{A}(p))$ to student $s_\\theta$\n\t\\ENDLOOP\n\t\\STATE Add student's training trajectory to $\\mathcal{H}$\n\t\\STATE \\textbf{Return} $s_\\theta$\n\t\\end{algorithmic}\n\\end{algorithm*}\n\n\\begin{algorithm*}[htb!]\n\t\\caption{~ Inferred progress Niches - Reward-based (IN-R)}\n\t\\label{inr-algo}\n\t\\begin{algorithmic}[1]\n\t\n\t\\REQUIRE Student policy $s_\\theta$, teacher training history $\\mathcal{H}$, task-encoding parameter space $\\mathcal{P}$, LP threshold $\\delta_{LP}$, update rate $N$, experimental budget $K$, experimental pre-train budget $K_{pre}$, pre-test set size $m$, number of neighbors for student selection $k$, random sampling ratio $\\rho_{high}$, parametric procedural environment generator $E$\n\t\\vspace{0.2cm}\n\t\\STATE Launch Pretrain phase and get expert GMM list $\\mathcal{C}$ \\COMMENT{{\\color{gray}~~~\\# See algo. 
\\ref{pret-algo}}} \n\t\\STATE Initialize reward First-in-First-Out window $\\mathcal{W}$, set max size to $N$\n\t\\STATE Initialize expert curriculum index $i_{c}$ to $0$\n\t\\LOOP[{~Stop after $K - K_{pre}$ environment steps}]\n\t \\STATE If $\\mathcal{W}$ is full, compute mean reward $\\mu_w$ from $\\mathcal{W}$\n\t \\STATE ~~~~If $\\mu_w$ superior to $i_{c}^{th}$ reward threshold in $\\mathcal{R}$, set $i_{c}$ to $min(i_{c}+1, len(\\mathcal{C}))$\n\t \\STATE Set current GMM $G_{IN}$ to $i_{c}^{th}$ GMM in $\\mathcal{C}$\n\t \\STATE Sample $p$ from a Gaussian in $G_{IN}$ chosen proportionally to its $LP_{ti}$\n\t\t\\STATE Send $E(a \\sim \\mathcal{A}(p))$ to student $s_\\theta$ and add episodic reward $r_p$ to $\\mathcal{W}$\n\t\\ENDLOOP\n\t\\STATE Add student's training trajectory to $\\mathcal{H}$\n\t\\STATE \\textbf{Return} $s_\\theta$\n\t\\end{algorithmic}\n\\end{algorithm*}\n\n\\begin{algorithm*}[htb!]\n\t\\caption{~ Alp-Gmm And Inferred progress Niches (AGAIN)}\n\t\\label{again-algo}\n\t\\begin{algorithmic}[1]\n\t\n\t\\REQUIRE Student policy $s_\\theta$, teacher training history $\\mathcal{H}$, task-encoding parameter space $\\mathcal{P}$, LP threshold $\\delta_{LP}$, update rate $N$, experimental budget $K$, experimental pre-train budget $K_{pre}$, pre-test set size $m$, number of neighbors for student selection $k$, random sampling ratio $\\rho_{low}$ and $\\rho_{high}$, parametric procedural environment generator $E$\n\t\\vspace{0.2cm}\n\t\\STATE Launch Pretrain phase and get expert GMM list $\\mathcal{C}$ \\COMMENT{{\\color{gray}~~~\\# See algo. \\ref{pret-algo}}}\n\t\\STATE Setup new ALP-GMM($\\rho_{rnd}=0, \\mathcal{P}$) \\COMMENT{{\\color{gray}~~~\\# See algo. \\ref{algo:ALP-GMM}}} \n\t\\STATE Setup either IN-T, IN-P or IN-R \\COMMENT{{\\color{gray}~~~\\# See algo. \\ref{int-algo}, \\ref{inp-algo} and \\ref{inr-algo}}} \n\t\\LOOP[{~Stop after $K - K_{pre}$ environment steps}]\n\t \\STATE Get composite GMM $G$ from the current GMM of both ALP-GMM and IN\n\t \\STATE $\\rho_{low} \\%$ of the time, sample a random parameter $p \\in \\mathcal{P}$\n\t \\STATE Else, sample $p$ from a Gaussian chosen proportionally to its $LP$ \n\t\t\\STATE Send $E(a \\sim \\mathcal{A}(p))$ to student $s_\\theta$ and observe episodic reward $r_p$\n\t \\STATE Send $(p,r_p)$ pair to both ALP-GMM and IN\n\t\\ENDLOOP\n\t\\STATE Add student's training trajectory to $\\mathcal{H}$\n\t\\STATE \\textbf{Return} $s_\\theta$\n\t\n\t\\end{algorithmic}\n\\end{algorithm*}\n\n\n\\section{Considered ACL and Meta-ACL teachers}\n\\label{an:details}\n\\paragraph{Meta-ACL variants} Our proposed approach, AGAIN, is based on the combination of an inferred expert curriculum with ALP-GMM, an exploratory ACL approach. In section \\ref{sec:methods} and appendix \\ref{app-again}, we present $3$ approaches to use such an expert curriculum, giving the AGAIN-R, AGAIN-P and AGAIN-T algorithms. In our experiments, we also consider ablations were we only use the expert curriculum, giving the IN-R, IN-P and IN-T variants. We also consider two additional AGAIN variants that do not use our proposed KC-based student selection method:\n\\begin{itemize}\n \\item AGAIN with Random selection (AGAIN\\_RND), a lower-baseline ablation were we select the training trajectory $\\tau$ from which to extract the expert curriculum randomly in history $\\mathcal{H}$.\n \\item AGAIN with Ground Truth selection (AGAIN\\_GT), an upper-baseline using privileged information. 
Instead of performing the knn algorithm in the KC space, this approach directly uses the true student distribution. For instance, in the Parkour environment, given a new student $s$, AGAIN\\_GT selects the $k$ previously trained students from $\\mathcal{H}$ that are morphologically closest to $s$ (i.e. same embodiment type and closest limb sizes), and uses the training trajectory of the student with highest score $j_s$ (see sec. \\ref{sec:methods}).\n \n \n\\end{itemize}\nNote that both for AGAIN\\_RND and AGAIN\\_GT, there is no need to pre-test the student, which means we can use the IN expert curriculum directly at the beginning of training rather than after a pre-training phase.\n\n\n\\paragraph{ACL conditions} A first natural ACL approach to compare our AGAIN variants to is ALP-GMM, the underlying ACL algorithm in AGAIN. We also add as a lower-baseline a random curriculum teacher (Random), which samples tasks' parameters randomly over the task space.\n\nIn both the toy environment (sec. \\ref{sec:exp:toy-env}, toy env. for short) and the Parkour environment (sec. \\ref{sec:exp:walker-climber}), we additionally compare to Adaptive Domain Randomization (ADR), an ACL algorithm proposed in \\cite{OpenAI2019SolvingRC}, which is based on inflating a task distribution sampling from a predefined initially feasible task $p_{easy}$ (w.r.t a given student). Each lower and upper boundaries of each dimension of the sampling distribution are modified independently with step size $\\Delta_{step}$ whenever a predefined mean reward threshold $r_{thr}$ is surpassed over a window (of size $q$) of tasks occasionally sampled (with probability $\\rho_b$) at the sampling dimension boundary. More details can be found in \\cite{OpenAI2019SolvingRC}. In our experiments, as we do not assume access to expert knowledge over students sampled within the student distribution, we randomize the setting of $p_{easy}$ uniformly over the task space in Parkour experiments and uniformly over the $4$ possible student starting subspaces in toy env. experiments. Based on the hyperparameters proposed in \\cite{OpenAI2019SolvingRC} and on informal hyperparameter search, we use $[\\rho_b=0.5, r_{thr}=1, \\Delta_{step}=0.05, q=10]$ in toy env. experiments and $[\\rho_b=0.5, r_{thr}=230, \\Delta_{step}=0.1, q=20]$ in Parkour experiments.\n\nIn experiments described in sec \\ref{exp:trying-again}, we compare our approaches to an oracle condition (Oracle), which is a hand-made curriculum that is very similar to IN-R, except that the list $\\mathcal{C}$ is built using expert knowledge before training starts (i.e. no pre-train and pre-test phases), and all reward thresholds $\\mu_r^i$ in $\\mathcal{R}$ (see eq. \\ref{eq:rraw}) are set to $230$, which is an episodic reward value often used in the literature as characterizing a default walker having a \"reasonably efficient\" walking gate in environments derived from the Box2D gym environment BipedalWalker \\cite{poet,portelas2019}.In practice, Oracle starts proposing tasks from a Gaussian (with std of $0.05$) located at the simplest subspace of the task space (ie. 
low stump height and high stump spacing) and then gradually moves the Gaussian towards the hardest subspaces (high stump height and low stump spacing) by small increments ($50$ steps overall) happening whenever the mean episodic reward of the DRL agent over the last $50$ proposed tasks is superior to $230$.\n\n\n\\newpage\\section{Analysing Meta-ACL in a toy environment}\n\\label{ann:toy-exp}\n\nIn this section we report the full comparative experiments done in the toy environment, which includes comparisons with AGAIN-T and AGAIN-P to AGAIN-R, shown in table \\ref{toy-env-results-table}. We also provide visualizations of the KC-based curriculum priors selection process (see fig. \\ref{toy-exps-stud-selec-vizu}) happening after the pretraining phase in AGAIN along with a visualization of the fixed set of $96$ randomly drawn students used to perform the varying classroom experiments reported in sec. \\ref{sec:exp:toy-env} (see fig. \\ref{classroom-toyenv-vizu}).\n\n\\paragraph{Additional comparative analysis} Table \\ref{toy-env-results-table} summarizes the post-training performances obtained by our considered Meta-ACL conditions and ACL baselines on the toy environment with only $4$ possible students on a fixed set of $48$ randomly drawn students. Meta-ACL conditions are given a training trajectory $\\mathcal{H}$ created by training an initial classroom of $128$ students. Using a Reward-based iterating scheme over the inferred expert curriculum (AGAIN-R and IN-R) outperforms the Time-based and Pool-based variants ($p<.001$). This result was expected as both these last two variants do not have flexible mechanisms to adapt to the student being trained. The pool based variants (AGAIN-P and IN-P), which discard the temporal ordering of the expert curriculum are the worst performing variants, statistically significantly inferior to both Reward-based and Time-based conditions ($p<.001$).\n\n\\begin{table}[htb!]\n\\caption{\\footnotesize{\\textbf{Experiments on the toy environment.} The average performance with standard deviation after 200k episodes is reported (48 seeds per conditions). For Meta-ACL variants we report results with column 1) the regular KC-based curriculum prior selection performed after $20$k pre-training episodes, column 2) An ablation that performs the selection at random before training, and column 3) An oracle condition selecting before training the curriculum prior using student ground truth type. \\textit{*} Denotes stat. significant advantage w.r.t. ALP-GMM (Welch's t-test at $200k$ ep. 
with $p<0.05$).}}\n\\vspace{0.3cm}\n \\centering\n \\footnotesize\n\\begin{tabular}{@{}llll@{}}\n\\toprule\nCondition & Regular & Random & Ground Truth \\\\ \\midrule\nAGAIN-R & 98.8 +- 4.8* & 55.4 +- 32.2 & 99.8 +- 0.9* \\\\ \nIN-R & 91.4 +- 3.4* & 26.3 +- 41.1 & 92.5 +- 3.0* \\\\ \nAGAIN-T & 84.3 +- 3.8 & 38.6 +- 34.1 & 89.0 +- 1.7* \\\\ \nIN-T & 79.0 +- 12.0 & 30.3 +- 37.3 & 88.9 +- 1.7* \\\\ \nAGAIN-P & 38.2 +- 7.5 & 9.3 +- 9.2 & 14.8 +- 1.2 \\\\ \nIN-P & 40.6 +- 6.4 & 9.2 +- 9.0 & 15.1 +- 1.2 \\\\ \\midrule\nALP-GMM & 84.6 +- 3.4 & & \\\\ \nADR & 14.9 +- 27.4 & & \\\\ \nRandom & 10.0 +- 0.8 & & \\\\ \\bottomrule\n\\end{tabular}\n\n\n \\label{toy-env-results-table}\n\\end{table}\n\n\\begin{figure*}[htb!]\n \\centering\n \\subfloat{\\includegraphics[width=0.3\\textwidth]{graphics\/sample_cp10_varying_classroom_size_expe.pdf}}\n \\subfloat{\\includegraphics[width=0.3\\textwidth]{graphics\/sample_for_varying_classroom_size_expe.pdf}}\n \\caption{\\footnotesize{ Additional visualizations for the varying classroom size experiments (see sec. \\ref{sec:exp:toy-env}).\n \\textbf{Left:} Visualization of the starting cells of students from a $10$\\% sample of a classroom of $400$ students (one per student type) trained with ALP-GMM and used to populate the training trajectory $\\mathcal{H}$. Each blue circle marks the starting cell of each student (i.e. its type) within the $2$D parameter space $\\mathcal{P}$, which is an initial learning subspace that needs to be detected by the teacher for successful training. \\textbf{Right:}Visualization of the fixed set of $96$ randomly drawn students that have to be trained by Meta-ACL variants given $\\mathcal{H}$. As not all student types are represented in $\\mathcal{H}$, Meta-ACL approaches have to generalize their curriculum generation to these new students.}}\n \\label{classroom-toyenv-vizu}\n\\end{figure*}\n\n\\begin{figure*}[htb!]\n\\centering\n\\subfloat[with type 0 new student]{\\includegraphics[width=0.35\\textwidth]{graphics\/metacl_prior_selection_10-06_toy_env_teacher_CEGT_expert_type_R_use_alpgmm_pt_4_sR_106.png}}\n\\subfloat[with type 1 new student]{\\includegraphics[width=0.35\\textwidth]{graphics\/metacl_prior_selection_10-06_toy_env_teacher_CEGT_expert_type_R_use_alpgmm_pt_4_sR_130.png}}\n\n\\subfloat[with type 2 new student]{\\includegraphics[width=0.35\\textwidth]{graphics\/metacl_prior_selection_10-06_toy_env_teacher_CEGT_expert_type_R_use_alpgmm_pt_4_sR_110.png}}\n\\subfloat[with type 3 new student]{\\includegraphics[width=0.35\\textwidth]{graphics\/metacl_prior_selection_10-06_toy_env_teacher_CEGT_expert_type_R_use_alpgmm_pt_4_sR_131.png}}\n\n\\caption{\\footnotesize{\\textbf{Examples of student selection process in $4$-student type toy environment.} In all figures, we plot the $2$D PCA visualization of the $KC^{pre}$ vectors (after pre-training) of the initial classroom ($128$ students) trained with ALP-GMM and used to populate the training trajectory $\\mathcal{H}$ used by AGAIN variants in our $4$-student type toy env experiments (see sec. \\ref{sec:exp:toy-env}). We then use these $4$ figures to showcase the selection process happening in $4$ different AGAIN-R runs (one per student type). Each triangle represents a student, whose ground truth type (i.e. its initial learning cell) is denoted by the orientation of the triangle. 
Given a new student to train, AGAIN pretrains the student, constructs its KC vector (purple border triangle), infers the k closest previously trained students from $\\mathcal{H}$ (red and golden border triangles), and use the one with highest end of training performance (i.e. highest score $s$, see sec. \\ref{sec:methods}), denoted by a golden border triangle, to infer curriculum priors for the new student.}}\n\\label{toy-exps-stud-selec-vizu}\n\\end{figure*}\n\n\\begin{figure*}[b]\n\\centering\n\\subfloat{\\includegraphics[width=0.35\\textwidth]{graphics\/concatenated_parkour_agent_images.jpg}}\\hspace{0.2cm}\\subfloat{\\includegraphics[width=0.35\\textwidth]{graphics\/concatenated_parkour_tracks_images.jpg}}\n\\caption{\\footnotesize{Visualizations of the student space and the task space of the Parkour environment.\\textbf{Left:} Examples of possible agent embodiments (randomly set for a given DRL learner before training starts). \\textbf{Right:} Examples of randomly sampled parkour tracks.}}\n\\label{parkour-env-vizu}\n\\end{figure*}\n\n\n\\vspace{10cm}\\section{Meta-ACL for DRL students in the Parkour environment}\n\\label{ann:parkour}\nIn this section we give additional details on the Parkour environment presented in section \\ref{sec:exp:walker-climber}, and we provide additional details and visualizations on the experiments that were performed on it.\n\n\\paragraph{Details on the Parkour environment.} In our experiments, we bound the wall spacing dimension of the task space to $\\Delta_{w}=[0,6]$, and the gate y position to $\\mu_{gate}=[2.5,7.5]$. In practice, given a single parameter tuple $(\\mu_{gate},\\Delta_{w})$, we actually encode a distribution of tasks, since for each new wall along the track we add an independent Gaussian noise to each wall's gate y position $\\mu_{gate}$. Examples of parkour tasks randomly sampled within these bounds are available in figure \\ref{parkour-env-vizu} (right). At the beginning of training a given DRL policy, the agent is embodied in either a bipedal walker morphology with two joints per legs or a two-armed climber morphology with 3-joints per arms ended by a grasping \"hand\". Both morphologies are controlled by torque. Climbers have an additional action dimension $g \\in [-1,1]$ used to grasp: if $g \\in [-1,0[$, the climber closes its gripper, and if $g \\in ]0,1]$ it keeps it open. To avoid falling (which aborts the episode with a $-100$ penalty) while moving forward to collect rewards, climber agents must learn to swing themselves forward by successive grasp-and-release action sequences. To increase the diversity of the student distribution, we also randomize limb sizes. See figure \\ref{parkour-env-vizu} (left) for examples of randomly sampled embodiments.\n\n\\paragraph{Soft Actor-Critic students} In our experiments, we use an implementation of Soft Actor-Critic provided by OpenAI\\footnote{https:\/\/github.com\/openai\/spinningup} (MIT license). We use a $2$ layered ($400$,$300$) network for V, Q1, Q2 and the policy. Gradient steps are performed each $10$ environment steps, with a learning rate of $0.001$ and a batch size of $1000$. The entropy coefficient is set to $0.005$.\n\n\\paragraph{Evaluation procedure} To report the performance of our students on the Parkour environment, we use two separate test sets, one per embodiment type. 
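In both cases the procedure is the same: the trained policy is rolled out on a fixed list of task parameters and the per-task episodic returns are recorded (this is also how the KC vectors used for student selection are built). A minimal sketch of such an evaluation loop, with illustrative names and a Gym-style procedural environment constructor (it is not the code used for our experiments), is given below; the concrete test sets are described next.
\begin{verbatim}
# Illustrative test-set evaluation helper (our naming, not the experiment code).
import numpy as np

def evaluate(policy, make_env, test_params, mastered_threshold=230.0):
    returns = []
    for p in test_params:              # e.g. a uniform grid over the task space
        env = make_env(p)              # procedural generation of one track
        obs, done, ep_ret = env.reset(), False, 0.0
        while not done:
            obs, r, done, _ = env.step(policy(obs))
            ep_ret += r
        returns.append(ep_ret)
    returns = np.array(returns)
    return {
        "per_task_returns": returns,   # doubles as the student's KC vector
        "mean_return": float(returns.mean()),
        "mastered_fraction": float((returns > mastered_threshold).mean()),
    }
\end{verbatim}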
For walkers we use a $100$-task test set, uniformly sampled over a subspace of the task space with $\\Delta_w \\in [0,6]$ and $\\mu_{gate} \\in [2.5,3.6]$, which we chose based on 1) what we initially believed to be morphologically feasible for walkers, and 2) previously designed test sets built in recent work \\cite{portelas2019} on comparable bipedal walker experiments. For climbers, because there are no similar experiments in the literature and since it is hard to infer beforehand what will be achievable by such a morphology, we simply use a uniform test set of $225$ tasks sampled over the full task space. Importantly, the customized test set used for walkers is solely used for visualization purposes. In our AGAIN approaches, we pre-test all students with the expert-knowledge-free set of $225$ tasks uniformly sampled over the task space. \n\n\\paragraph{Compute resources.} Each of the 576 seeds required to reproduce our experiments (128 seeds for the classroom and $7 \\times 64$ seeds for our 7 conditions) takes 36 hours on a single CPU. This amounts to around 21,000 CPU hours. Each run requires less than 1GB of RAM.\n\n\n\n\\paragraph{Visualizing student diversity.} To assess whether our proposed multi-modal distribution of possible students in the Parkour environment does produce diverse competence profiles (which is desirable as it creates a challenging Meta-ACL scenario), we plot the 2D PCA of the post-training KC vector of each student of the initial classroom trained with ALP-GMM (used to populate $\\mathcal{H}$). The result, visible in figure \\ref{vizu-parkour-classroom} (top), shows that climber-students and walker-students are located in two independent clusters, i.e. they do have clearly different competence profiles. The spread of each cluster also demonstrates that variations in initial policy parameters and limb sizes create students with diverse learning potentials. The competence differences between walkers and climbers can also be seen in Figure \\ref{vizu-parkour-classroom} (left and right), which shows the episodic reward obtained for each of the $225$ tasks of the KC vector after training by a representative walker student (left) and climber student (right). \n\n\n\n\\begin{figure*}[htb!]\n\\centering\n\\subfloat{\\includegraphics[width=0.6\\textwidth]{graphics\/pca_KC_post_parkour.pdf}}\n\n\n\\subfloat{\\includegraphics[width=0.45\\textwidth]{graphics\/classroom_KC_0_11.pdf}}\\hspace{0.3cm}\\subfloat{\\includegraphics[width=0.45\\textwidth]{graphics\/classroom_KC_1_3.pdf}}\n\\caption{\\footnotesize{\\textbf{top:} PCA of the classroom's KC vectors (128 students) after being trained for 10M student steps with ALP-GMM. \\textbf{left and right:} Episodic reward obtained for each task composing the KC vector by a walker-student (left) and a climber-student (right) of this classroom. Stars are added for all tasks for which the agent obtained more than $r=230$ (which corresponds to an efficient locomotion policy). Walkers only manage to learn tasks with very low gate positions while climbers learn only tasks with medium to high gate positions.}}\n\\label{vizu-parkour-classroom}\n\\end{figure*}\n\n\\clearpage\\newpage\\section{Applying Meta-ACL to a single student: Trying AGAIN instead of trying longer}\n\\label{ann:tryingagain}\n\nIn the following section we report all experiments on applying AGAIN variants to train a single DRL student (i.e. no history $\\mathcal{H}$), which is briefly presented in sec. 
\\ref{exp:trying-again}.\n\n\n\\paragraph{Parametric BipedalWalker env.} We test our modified AGAIN variants along with baselines on an existing parametric BipedalWalker environment proposed in \\cite{portelas2019}, which generates walking tracks paved with stumps whose height and spacing are defined by a $2$D parameter vector used for the procedural generation of tasks. We keep the original bounds of this task space, i.e. we bound the stump-height dimension to $\\mu_h \\in [0,3]$ and the stump-spacing dimension to $\\delta_s \\in [0,6]$. As in their work, we also test our teachers when the learning agent is embodied in a modified short-legged walker, which constitutes an even more challenging scenario (as the task space is unchanged, i.e. more unfeasible tasks). The agent is rewarded for keeping its head straight and going forward and is penalized for torque usage. The episode is terminated after 1) reaching the end of the track, 2) reaching a maximal number of $2000$ steps, or 3) head collision (for which the agent receives a strong penalty). See figure \\ref{pbw-demo} for visualizations.\n\n\n\\begin{figure}[htb!]\n\n\\centering\\includegraphics[width=0.75\\columnwidth]{graphics\/pbw_demo.pdf}\n\\caption{\\footnotesize{Parameterized BipedalWalker environment. \\textbf{Left:} Examples of generated tracks. \\textbf{Right:} The two walker morphologies tested on the environment.} One parameter tuple ($\\mu_h, \\delta_s$) actually encodes a \\textit{distribution} of tasks as the height of each stump along the track is drawn from $\\mathcal{N}(\\mu_h,0.1)$. }\n\\label{pbw-demo}\n\\end{figure}\n\n\\paragraph{Results} To perform our experiments, we ran each condition for either $10$Millions (IN and AGAIN variants) or $20$Millions (others) environment steps ($30$ repeats). The preliminary ALP-GMM runs used in IN and AGAIN variants correspond to the first $10$ Million steps of the ALP-GMM condition (whose end-performance after $20$ Million steps is reported in table \\ref{results-table}. All teacher variants are tested when paired with a Soft-Actor Critic \\cite{sac} student, with same hyperparameters as in the Parkour experiments (see app. \\ref{ann:parkour}). Performance is measured by tracking the percentage of mastered tasks (i.e. $r>230$) from a fixed test set of $100$ tasks sampled uniformly over the task space. We thereafter report results for $2$ independent experiments done with either default walkers or short walkers.\n\n\\begin{figure}\n\\centering\\includegraphics[width=0.5\\columnwidth]{graphics\/compact_demo.pdf}\n\\caption{\\footnotesize{Given a single DRL student to train, AGAIN outperforms ALP-GMM in a parametric BipedalWalker environment. sem plotted, 30 seeds.}}\\label{trying-again-compact-app}\n\\end{figure} \n\\textit{Is re-training from scratch beneficial? - } The end performances of all tested conditions are summarized in table \\ref{results-table}. Interestingly, retraining the DRL agent from scratch in the second run gave superior end performances than fine-tuning using the weights of the first run \\textit{in all tested variants}. This showcases the brittleness of gradient-based training and the difficulty of transfer learning. Despite this, even fine-tuned variants reached superior end-performances than classical ALP-GMM, meaning that the change in curriculum strategy in itself is already beneficial.\n\n\\textit{Is it useful to re-use ALP-GMM in the second run? 
- } In the default walker experiments, AGAIN-R, T and P conditions mixing ALP-GMM and IN in the second run reached lower mean performances than their respective IN variants. However, the exact opposite is observed for IN-R and IN-T variants in the short walker experiments. This can be explained by the difficulty of the short walker experiments for ACL approaches: $16\/30$ of the preliminary 10M-step ALP-GMM runs have a mean end-performance of $0$, compared to $0\/30$ in the default walker experiments. These run failures led to many of the GMM lists $\\mathcal{C}$ used in IN being of very low quality, illustrating the advantage of AGAIN, which can emancipate from IN by relying on ALP-GMM.\n\n\\textit{Highest-performing variants. - } Consistent with the preceding analysis, mixing ALP-GMM with IN in the second run is not essential in the default walker experiments, as the best performing ACL approach is IN-P. This most likely suggests that the improved adaptability of the curriculum when using AGAIN is outweighed by the added noise (due to the low task-exploration). However, in the more complex short walker experiments, mixing ALP-GMM with IN is essential, especially for AGAIN-R, which substantially outperforms ALP-GMM and other AGAIN and IN variants (see fig. \\ref{trying-again-compact}), reaching a mean end performance of $19.0$. The difference in end-performance between AGAIN-R and Oracle, our hand-made expert using privileged information, which obtained $20.1$, is not statistically significant ($p=0.6$).\n\n\\begin{table*}[]\n\\caption{\\footnotesize{\\textbf{Experiments on Parametric BipedalWalker.} The avg. perf. with std. deviation after 10 million steps (IN and AGAIN variants) or 20 million steps (others) is reported (30 seeds). For IN and AGAIN we also test variants that do not retrain the weights of the policy used in the second run \\textit{from scratch} but rather \\textit{fine-tune} them from the preliminary run. $\\mathbf{^{*\/-}}$ indicates whether the perf. difference with ALP-GMM is statistically significant, i.e. $p<0.05$ in a post-training Welch's t-test ($\\mathbf{^{*}}$ for performance advantage w.r.t ALP-GMM and $\\mathbf{^{-}}$ for perf. 
disadvantage).}}\n\\vspace{0.3cm}\n \\footnotesize\n \\centering\n\\begin{tabular}{@{}lll@{}}\n\\toprule\nCondition & Short walker & Default walker \\\\ \\midrule\nAGAIN-R & $19.0\\pm12.0^*$ & $41.6\\pm6.3^*$ \\\\\nAGAIN-R(fine-tune) & $11.4\\pm12.9$ & $39.9\\pm4.6$ \\\\\nIN-R & $13.4\\pm14.4$ & $43.5\\pm9.6^*$ \\\\\nIN-R(fine-tune) & $11.2\\pm12.3$ & $40.8\\pm5.6$ \\\\\nAGAIN-T & $15.1\\pm11.9$ & $40.6\\pm11.5$ \\\\\nAGAIN-T(fine-tune) & $11.4\\pm11.8$ & $40.6\\pm3.8^*$ \\\\\nIN-T & $13.5\\pm13.3$ & $43.5\\pm6.1^*$ \\\\\nIN-T(fine-tune) & $10.7\\pm12.3$ & $40.3\\pm7.6$ \\\\\nAGAIN-P & $13.6\\pm12.5$ & $41.9\\pm5.1^*$ \\\\\nAGAIN-P(fine-tune) & $11.1\\pm12.0$ & $41.5\\pm3.9^*$ \\\\\nIN-P & $14.5\\pm12.6$ & $\\mathbf{44.3}\\pm3.5^*$ \\\\\nIN-P(fine-tune) & $12.2\\pm12.5$ & $41.1\\pm3.8^*$ \\\\\nALP-GMM & $10.2\\pm11.5$ & $38.6\\pm3.5$ \\\\\nOracle & $\\mathbf{20.1}\\pm3.4^*$ & $27.2\\pm15.2^-$ \\\\\nRandom & $2.5\\pm5.9^-$ & $20.9\\pm11.0^-$ \\\\ \\bottomrule\n\\end{tabular}\n \\label{results-table}\n\\end{table*}\n\n\n\n\n\n\n\n\n\n\n\\begin{figure*}[htb!]\n\\centering\n\\subfloat[\\textbf{Pool-based} IN]{\\includegraphics[width=0.31\\textwidth]{graphics\/means_default_P_.pdf}}\n\\subfloat[\\textbf{Time-based} IN]{\\includegraphics[width=0.31\\textwidth]{graphics\/means_default_T_.pdf}}\n\\subfloat[\\textbf{Reward-based} IN]{\\includegraphics[width=0.31\\textwidth]{graphics\/means_default_R_.pdf}}\n\\caption{\\footnotesize{\\textbf{Evolution of performance across 20M environment steps of each condition with default bipedal walker.} Each point in each curve corresponds to the mean performance (30 seeds), defined as the percentage of mastered tracks (ie. $r>230$) on a fixed test set. Shaded areas represent the standard error of the mean. Consistently with \\cite{portelas2019}, which implements a similar approach, Oracle is prone to forgetting with default walkers due to the strong shift in task subspace (which is why it is not the best performing condition for default walker experiments.}}\n\\label{default-exps-curves}\n\\end{figure*}\n\n\\begin{figure*}[htb!]\n\\centering\n\\subfloat[\\textbf{Pool-based} IN]{\\includegraphics[width=0.31\\textwidth]{graphics\/means_short_P_.pdf}}\n\\subfloat[\\textbf{Time-based} IN]{\\includegraphics[width=0.31\\textwidth]{graphics\/means_short_T_.pdf}}\n\\subfloat[\\textbf{Reward-based} IN]{\\includegraphics[width=0.31\\textwidth]{graphics\/means_short_R_.pdf}}\n\\caption{\\footnotesize{\\textbf{Evolution of performance across 20M environment steps of each condition with short bipedal walker.} Each point in each curve corresponds to the mean performance (30 seeds), defined as the percentage of mastered tracks (ie. $r>230$) on a fixed test set. Shaded areas represent the standard error of the mean.}}\n\\label{short-exps-curves}\n\\end{figure*}\n\\section{Introduction}\n\nThe idea of organizing the learning sequence of a machine is an old concept that stems from multiple works in reinforcement learning \\citep{selfridge,Schmid}, developmental robotics \\citep{oudeyer2007intrinsic} and supervised learning \\citep{elman, bengiocl}, from which the Deep RL community borrowed the term \\textit{Curriculum Learning}. Automatic CL \\citep{portelas2020-acl-drl} refers to \\textit{teacher} algorithms able to autonomously adapt their task sampling distribution to their evolving \\textit{student}. 
In DRL, ACL has been leveraged to scaffold learners in a variety of multi-task control problems, including video-games \\citep{icm,rnd,montezuma-single-demo}, multi-goal robotic arm manipulation \\citep{her,curious,cideron2019self,fournier-accuracy-acl} and navigation in sets of environments \\citep{tscl,portelas2019,ADRmila,goalgan,selfpaceddrl}. Concurrently, multiple authors demonstrated the benefits of Procedural Content Generation (PCG) as a tool to create rich task spaces to train generalist agents \\citep{risiPCG,illuminating,quantif-coinrun}. The current limit of ACL is that, when applied to such large continuous task spaces, that often have few learnable subspaces, it either relies on 1) human expert knowledge that is hard\/costly to provide (and which undermines how automatic the ACL approach is), or 2) it loses a lot of time finding tasks of appropriate difficulty through \\textit{task exploration}.\n\nGiven the aforementioned impressive results on training DRL learners with ACL to generalize over tasks (which extended the classical single-task scenarios \\citep{dqn,trpo,ddpg} to multi-tasks), we propose to go further and work on training (unknown) distributions of students on continuous task spaces, thereafter referred to as \\textit{Classroom Teaching} (CT). CT defines a family of problems in which a teacher algorithm is tasked to sequentially generate multiple curricula tailored for each of its students, all having potentially varying abilities. CT differs from the problems studied in population-based developmental robotics \\citep{imgep} and evolutionary algorithms \\citep{poet} as in CT there is no direct control over the characteristics of learners, and the objective is to foster maximal learning progress over all learners rather than iteratively populating a pool of high-performing task-expert policies. Studying CT scenarios brings DRL closer to assisted education research problems and might stimulate the design of methods that alleviate the expensive use of expert knowledge in current state of the art methods \\citep{zpdes, Koedinger13}. CT can also be transposed to (multi-task) robotic training scenarios, e.g. when performing iterative design improvements on a robot, which requires to train a sequence of morphologically related (yet different) robots.\n\n\nGiven multiple students to train, no expert knowledge, and assuming at least partial similarities between each students' optimal curriculum, current \\textit{tabula-rasa} exploratory-ACL approaches that do not reuse knowledge between different students do not seem like the optimal choice. This motivates the research of what we propose to call Meta Automatic Curriculum Learning mechanisms, that is algorithms learning to generalize ACL over multiple students.\nIn this work we formalize this novel setup and propose a first Meta-ACL baseline algorithm (based on an existing ACL method \\citep{portelas2019}). Given a new student to train, our approach is centered on the extraction of adapted curriculum priors from a history of previously trained students. The prior selection is performed by matching competence vectors that are built for each student through pre-testing. 
We show that this simple method can bring significant performance improvements over classical ACL in both a toy environment without DRL students and on Box2D parkour environments with DRL learners.\n\n\n\\paragraph{Related Work.} To approach the problem of curriculum generation for DRL agents, recent works proposed multiple ACL algorithms based on the optimization of surrogate objectives such as learning progress \\citep{portelas2019,tscl,tscllike,curious}, diversity \\citep{diayn,metarl-carml,countbased} or intermediate difficulty \\citep{goalgan,settersolver,OpenAI2019SolvingRC,reverse-cur,ADRmila}. All these works tackled student training through independent ACL runs, while we propose to investigate how one can share information accross multiple trainings.\nWithin DRL, \\textit{Policy Distillation} \\citep{distralTeh2017,pol-dil-review} consists in leveraging one or several previously trained policies to perform \\textit{behavior cloning} on a new policy (e.g. to speed up training and\/or to leverage task-experts to train a multi-task policy). Our work can be seen as proposing a complementary toolbox aiming to perform \\textit{Curriculum Distillation} on a continuous space of tasks.\n\nSimilar ideas were developed for supervised learning by \\citep{hacohen19a-scoring-pacing, ban, banlike}. In \\citep{hacohen19a-scoring-pacing}, authors propose an approach to infer a curriculum from past training for an image classification task: they train their network once without curriculum and use its predictive confidence for each image as a difficulty measure exploited to derive an appropriate curriculum to re-train the network. Although we are mainly interested in training a classroom of diverse students, section \\ref{exp:trying-again} presents similar experiments in a DRL scenario, showing that our Meta-ACL procedure can be beneficial for a single learner that we train once and re-train using curriculum priors inferred from the first run.\n\nParallel to this work, Turcheta et. al. \\citep{safe-rl-curr-induction} studied how to infer safety constraints (i.e. curriculum) over multiple DRL students in a data-driven way to better perform on a given single task. In their work, students are only varying by their network's initialization and their teacher assumes the existence of a pre-defined discrete set of safety constraints to choose from. By contrast, we consider the problem of training generalist students with varying morphologies (and networks initializations), with a teacher algorithm choosing tasks from a continuous task space.\n\n\n\n\n\\paragraph{Main Contributions.}\n\\begin{itemize}\n \\item Introduction to the concept of Meta-ACL, i.e. algorithms that generalize curriculum generation in Classroom Teaching scenarios, and an approach to study these algorithms. 
Formalization of the interaction flows between Meta-ACL algorithms and (unknown) Deep RL student distributions.\n \\item Introduction of AGAIN, a first Meta-ACL baseline algorithm which learns curriculum priors to fasten the identification of learning progress niches for new Deep RL students.\n \\item Design of a toy-environment and of a parametric Box2D Parkour environment featuring a multi-modal distribution of possible agent embodiments well suited to study Meta-ACL.\n \\item Analysis of AGAIN on these environments, demonstrating the performance advantages of this approach over classical ACL, including (and surprisingly) when applied to a single student.\n \n\\end{itemize}\n\n\n\n\\section{Meta Automatic Curriculum Learning Framework}\n\\label{sec:framework}\n\n\\begin{figure}[htb!]\n\\centering\n\\includegraphics[width=\\textwidth]{graphics\/meta-acl-pipeline_final.pdf}\n\\caption{\\footnotesize{In \\textit{Meta Automatic Curriculum Learning }(Meta-ACL), the objective is to leverage previous teaching experience to improve the curriculum generation of a new agent, whose embodiment and learning mechanisms have potentially never been seen before: the teacher has to \\textit{generalize} over students.}}\n\\label{meta-acl-fig}\n\\end{figure}\n\n\\paragraph{Black-box students} The Meta-ACL framework assumes the existence of policy learners, a.k.a students, capable of interacting in episodic control tasks. These students are assumed non-resettable, as in classical ACL scenarios. Their optimization objective is the maximization of some performance measure $P$ w.r.t the task (e.g. episodic reward, exploration). To make the framework problem-independant, we do not assume expert knowledge over the task space w.r.t the student distribution, e.g. task subspaces could be trivial for some students and unfeasible for others. The objective of Meta-ACL is precisely to autonomously infer such prior knowledge from experience in scenarios where human expert knowledge is either hard or impossible to use. Similarly, we consider a black-box teaching scenario, i.e. we do not assume the knowledge of which learning mechanisms are used by students (e.g. DRL agents, evolutionary algorithms, ...).\n\n\n\n\\paragraph{Automatic Curriculum Learning} \\hspace{-0.2cm} Given such a black-box student $s$ to train on a continuous \\textit{task space} $\\mathcal{A}$, the purpose of an ACL algorithm is to sequentially sample (parameterized) tasks for $s$, such that the following evaluation metric, used by the experimenter, is maximized:\n\\begin{equation}\n \\label{eq:acl-obj}\n \\max \\int_{a\\sim \\mathcal{A}} \\! P_{s,a}^E\\, \\mathrm{d}a,\n\\end{equation}\nwith $E$ the episode budget, and $P_{s,a}^E$ the end performance of student $s$ on task $a$ (e.g. exploration score, cumulative reward). Since direct optimization of such a post-training performance is difficult, ACL is often approached using proxy objectives (e.g. intermediate difficulty). Given one such proxy objective, an ACL algorithm usually relies on observing episodic behavioral features of its student w.r.t to proposed tasks (e.g. episodic rewards), allowing to infer a competence status $o \\in \\mathcal{O}$ of its learner (e.g. progress regions in $\\mathcal{A}$) which conditions task sampling. This sequential task selection unrolled by ACL methods along a student's training can be transposed into a policy search problem on a high-level non-episodic POMDP. 
While interacting within this POMDP, an ACL policy $\\Pi(o,s) \\rightarrow \\rho(\\mathcal{A})$ proposes task distributions $\\rho(\\mathcal{A}) \\subset \\mathcal{A}$ to its student, observes the resulting behavioral features to update its competence status $o$, and collects objective-dependant rewards (e.g. learning progress).\n\n\n\n\n\n\nIn practice, approaching this task-level control problem with classical DRL algorithms is challenging because of sample efficiency: an ACL policy has to be learned and exploited along interaction windows typically around a few tens of thousands of steps. This has to be compared to the tens of millions or sometimes billions of interaction steps necessary to train a DRL policy for robotic control tasks. For this reason, most recent ACL research has focused on reducing the teaching problem into a Multi Armed Bandit setup, which ignores the sequential dependency over student states implied in POMDP settings \\citep{tscl,tscllike,curious,portelas2019}. Although out of the scope of this paper, the use of classical DRL approaches as Meta-ACL algorithms is worth investigating in future work.\n\n\\paragraph{Meta-ACL for Classroom Teaching.} We now present the concept of Meta-ACL applied to a Classroom Teaching scenario, i.e. there is no longer a single student $s$ to be trained, but a set of students with varying abilities (e.g. due to morphology and\/or learning mechanisms) sequentially drawn from an unknown distribution $\\mathcal{S}$. The notion of meta-learning refers to \\textit{any type of learning guided by prior experience with other\ntasks} \\citep{surveymetarl}. In meta-RL, agents are \\textit{learning to learn} to act \\citep{wang2016}, i.e. their objective is to maximize performance on previously unseen test tasks after $0$ or a few learning updates. In other words, it is about leveraging knowledge from previously encountered tasks to generalize on new tasks. We propose to extend this concept into Meta-ACL, that is, algorithms that are \\textit{learning to learn} to teach, i.e. they leverage knowledge from curricula built for previous students to improve the curriculum generation for new ones. More precisely, a Meta-ACL algorithm can be formulated as a function:\n\\begin{equation} \\label{eq:meta-acl}\n\\begin{gathered}\nf(\\Pi, \\mathcal{H}, s^{K}) \\rightarrow \\Pi^{'} ~~s.t.~~\\mathcal{H}=[\\tau_{s^0}, \\tau_{s^1}, ...,\n\\tau_{s^{K-1}}] \\\\\n \\tau_{s^{i}} = [(\\rho^0(\\mathcal{A}), o^0), (\\rho^1(\\mathcal{A}), o^1), ..., (\\rho^T(\\mathcal{A}), o^{T})],\n\\end{gathered}\n\\end{equation}\nwith $\\mathcal{H}$ the history of past $K$ training trajectories $\\tau_s$ resulting from the scaffolding of $K$ previous students with an ACL (or Meta-ACL) policy $\\Pi$, and $s^K$ the current student. Given our formalization of ACL (see eq. \\ref{eq:acl-obj}), the experimenter's evaluation objective for Meta-ACL can be expressed as follows:\n\\begin{equation}\n \\label{eq:meta-acl-obj}\n \\max_{f} \\int_{s\\sim \\mathcal{S}}\\int_{a\\sim \\mathcal{A}} \\! P_{s,a}^E\\, \\mathrm{d}a~\\mathrm{d}s.\n\\end{equation}\n\nAs in the case of the ACL evaluation objective expressed in eq. \\ref{eq:acl-obj}, direct optimization of eq. \\ref{eq:meta-acl-obj} is difficult as it implies the joint maximization of multiple students' performance. In our experiments, we reduce the Meta-ACL problem to the sequential independent training of a set of new students by leveraging priors from previous student trainings (with the hope to maximize performance over the entire set). 
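To make this interaction flow concrete, the following is a minimal Python sketch of the Meta-ACL outer loop on a one-dimensional task space. Every name in it (\\texttt{ToyStudent}, \\texttt{pretest}, \\texttt{adapt\\_curriculum}) is purely illustrative and not part of our implementation: tasks are difficulties in $[0,1]$, a student can only learn tasks close to its hidden competence frontier, and $f(\\Pi, \\mathcal{H}, s^{K})$ is reduced to re-using the progress niche of the most similar, best-performing past student.

\\begin{verbatim}
import random

class ToyStudent:
    def __init__(self, start):
        self.frontier = start                        # hidden competence level
    def attempt(self, task):
        success = abs(task - self.frontier) <= 0.1   # only nearby tasks are learnable
        if success:
            self.frontier = max(self.frontier, task)
        return float(success)                        # episodic return

def pretest(student, probes=(0.1, 0.3, 0.5, 0.7, 0.9)):
    return [student.attempt(p) for p in probes]      # cheap competence (KC-like) vector

def adapt_curriculum(history, kc, k=3):
    # f(Pi, H, s): among the k most similar past students (KC distance),
    # re-use the progress niche of the one with the best final score.
    if not history:
        return 0.1                                   # no prior: start with easy tasks
    dist = lambda h: sum((a - b) ** 2 for a, b in zip(h["kc"], kc))
    nearest = sorted(history, key=dist)[:k]
    return max(nearest, key=lambda h: h["score"])["niche"]

history = []                                          # H: one entry per trained student
for student_id in range(8):                           # students arrive sequentially
    student = ToyStudent(start=random.choice([0.0, 0.4]))   # two student "types"
    kc = pretest(student)
    center = adapt_curriculum(history, kc)            # adapted curriculum prior
    for step in range(300):                           # inner ACL loop
        if random.random() < 0.1:                     # residual task exploration
            task = random.random()
        else:
            task = min(1.0, max(0.0, random.gauss(center, 0.08)))
        if student.attempt(task):
            center = task                             # crude progress tracking
    history.append({"kc": kc, "niche": center, "score": student.frontier})
    print("student", student_id, "final competence:", round(student.frontier, 2))
\\end{verbatim}

With an empty history the teacher falls back to easy tasks and must rediscover each student's learnable region; once $\\mathcal{H}$ contains a similar, successfully trained student, the new student starts directly in its progress niche. AGAIN, presented in the next section, implements the same structure with GMM-based curricula instead of a single task center.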
Figure \\ref{meta-acl-fig} provides a visual transcription of the workflow of a Meta-ACL algorithm. While in our experiments we use a fixed-size history $\\mathcal{H}$ of ACL-trained students (to make our experiments computationally tractable), $\\mathcal{H}$ could be grown incrementally by collecting training trajectories online from Meta-ACL trainings.\n\n\n\n\\section{A first Meta-ACL baseline: Alp-Gmm And Inferred Progress Niches}\n\\label{sec:methods}\n\n\\begin{figure*}[b]\n\\centering\n\\includegraphics[width=\\textwidth]{graphics\/AGAIN-pipeline.pdf}\n\\caption{\\footnotesize{Schematic pipeline of Alp-Gmm And Inferred progress Niches (AGAIN), our proposed approach, which first leverages a preliminary run with a high-exploration ALP-GMM curriculum generator, and uses student pre-testing to infer an expert curriculum combined with a low-exploration ALP-GMM.}}\n\\label{again-fig}\n\\end{figure*}\n\nIn this section, we present AGAIN (Alp-Gmm And Inferred Progress Niches), our proposed Meta-ACL algorithm, and connect it to the formalism described in section \\ref{sec:framework}. We first give a broad overview of the approach and then provide detailed explanations of key components.\n\n\\paragraph{Overview} Figure \\ref{again-fig} provides a schematic pipeline of our Meta-ACL approach. Given a history $\\mathcal{H}$ of previously trained students and a new student $s^K \\sim \\mathcal{S}$ to train, AGAIN starts by (1) pre-training $s^K$ using ALP-GMM, an existing ACL algorithm from \\cite{portelas2019} (chosen for its simplicity and its non-reliance on expert knowledge). After pre-training, it (2) challenges the student with a set of test tasks to construct a meaningful competence profile of the student, which we thereafter refer to as a \\textit{Knowledge Component (KC) vector}. This KC vector is then (3) used to select a previously trained student $s^i$ similar to $s^K$ from which the \\textit{training trajectory} $\\tau_{s^{i}}$ is recovered. Based on $\\tau_{s^{i}}$, (4) AGAIN infers a set of curriculum priors $\\mathcal{C}$ (i.e. promising task sub-spaces). Finally, (5) the training of $s^K$ can resume using a composite curriculum generator using both an expert curriculum derived from $\\mathcal{C}$ (for exploitation), and ALP-GMM (for exploration).\n\n\n\\paragraph{1 - ALP-GMM} ALP-GMM \\citep{portelas2019} is a Learning Progress (LP) based ACL technique for continuous task spaces that does not assume prior knowledge. ALP-GMM frames the task sampling problem into a non-stationary Multi-Armed bandit setup \\citep{auer2002nonstochastic} in which arms are Gaussians spanning over the task space. The utility of each Gaussian is defined with a local LP measure derived from episodic reward comparisons. The essence of ALP-GMM is to periodically fit a Gaussian Mixture Model (GMM) on recently sampled tasks' parameters \\textit{concatenated with their respective LP}. This periodically updated GMM used for task sampling can be seen as the evolving competence status $o$ described in section \\ref{sec:framework}. The Gaussian from which to sample a new task is chosen proportionally to its mean LP dimension. Task exploration happens initially through a bootstrapping period of random task sampling and during training through residual random task sampling.\n\n\\paragraph{2,3 - KC-based curriculum priors selection.} For a new student $s^K$, given its capabilities on the considered task space, how to selected the most relevant previously trained student from which to extract curriculum priors? 
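Before turning to this question, the periodic fit-and-sample loop of step (1) can be made concrete. The snippet below is a compressed sketch of ALP-GMM on a $2$D task space, assuming \\texttt{scikit-learn} for the GMM fit; the real implementation differs in several details (the number of components is selected online, ALP is computed from a larger task history, and none of the hyperparameters shown here are the ones we use), so it should be read as an illustration of the mechanism rather than as a reference implementation. The last three lines stand in for the student: any mapping from task parameters to episodic rewards can be plugged in.

\\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
low, high = np.zeros(2), np.ones(2)                  # 2D task space bounds
window, task_hist, reward_hist = [], [], []          # recent (task, ALP) points + history
fit_every, explore_p, gmm = 50, 0.1, None

def sample_task():
    if gmm is None or rng.random() < explore_p:      # bootstrap / residual exploration
        return rng.uniform(low, high)
    lp = np.abs(gmm.means_[:, -1]) + 1e-6            # mean ALP of each Gaussian
    k = rng.choice(len(lp), p=lp / lp.sum())         # Gaussian chosen prop. to its LP
    task = rng.multivariate_normal(gmm.means_[k, :-1],
                                   gmm.covariances_[k][:-1, :-1])
    return np.clip(task, low, high)

def update(task, reward):
    global gmm
    # absolute learning progress: reward change w.r.t. the closest previous task
    if task_hist:
        j = int(np.argmin(np.linalg.norm(np.array(task_hist) - task, axis=1)))
        alp = abs(reward - reward_hist[j])
    else:
        alp = 0.0
    task_hist.append(task); reward_hist.append(reward)
    window.append(np.concatenate([task, [alp]]))
    if len(window) % fit_every == 0:                 # periodic GMM re-fit
        gmm = GaussianMixture(n_components=4,
                              random_state=0).fit(np.array(window[-250:]))

for episode in range(500):                           # dummy student: easy near the origin
    t = sample_task()
    r = float(np.exp(-4.0 * np.linalg.norm(t)) + 0.05 * rng.standard_normal())
    update(t, r)
\\end{verbatim}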
This problem is closely related to knowledge assessment in Intelligent Tutoring Systems setups studied in the educational data mining literature \\citep{vie-mooc-test-assembly,vie-thesis}. Inspired by these works, we use pre-tests to derive a Knowledge Component vectors $KC^{pre} \\in \\mathbb{R}^{m}$ for all trained students. Each dimensions of $KC^{pre}$ contains the episodic return of the student on the corresponding pre-test task. Given that we do not assume access to expert knowledge, we build this pre-test task set by selecting $m$ tasks uniformly over the task space. We use the same task set to build a post-training KC vector $KC^{post} \\in \\mathbb{R}^{m}$ whose dimensions are summed up to get a score $j_s \\in \\mathbb{R}$, used to evaluate the end performance of students in $\\mathcal{H}$. After the initial pre-training of $s^K$ with ALP-GMM, curriculum priors can be obtained in 3 steps: 1) pre-test $s^K$ to get its KC vector $KC^{pre}_{s^K}$, 2) infer the $k$ most similar previously trained students in KC space (using a k-nearest neighbor algorithm), and 3) use the training trajectory $\\tau_s$ of the student with maximal post-training score $j_s$ among those $k$. In essence, this method is about re-using curriculum data from a similarly-skilled and successfully-trained student.\n\n\n\\paragraph{4 - Inferred progress Niches (IN).} Given that the KC-based student selection identified the training trajectory $\\tau_{s^{i}}$ as the most promising for $s^K$, and assuming ALP-GMM as the underlying ACL teacher used for $s^i$, we can derive an expert curriculum from $\\tau_{s^{i}}$ by first considering the ordered sequence of GMMs $\\mathcal{C}_{raw}$ that were periodically fitted along training:\n\n\\begin{equation} \\label{eq:craw}\n \\begin{gathered}\n \\mathcal{C}_{raw} = \\{p(1), ..., p(T)\\} \\\\\n s.t.~~~p(t)= \\sum_{i=1} LP_{ti}\\mathcal{N}(\\bm{\\mu_{ti}},\\bm{\\Sigma_{ti}}),\n\\end{gathered} \n\\end{equation}\n\nwith $T$ the total number of GMMs in the list and $LP_{ti}$ the Learning Progress of the $i^{th}$ Gaussian from the $t^{th}$ GMM. By keeping only Gaussians with $LP_{ti}$ above a predefined threshold $\\delta_{LP}$, we can get a curated list $\\mathcal{C}$ containing only Gaussians located on task subspaces on which $s^i$ experienced learning progress (i.e. curriculum priors). Given a GMM of $\\mathcal{C}$, a task is selected by 1) sampling a Gaussian proportionally to its $LP_{ti}$ value, and 2) sampling the Gaussian to obtain parameters mapping to a task. But how to decide which GMMs to use along the training of the new student $s^K$ ?\n\nWhile the simplest way to obtain such a curriculum would be to start sampling tasks from the first GMM and step to the next GMM at the same rate than the initial ALP-GMM run, we propose a more flexible \\textit{reward-based} method. This method requires to record the mean episodic reward obtained by the previously trained student $s^i$ for each GMM of $\\mathcal{C}$ (which can be done without additional assumptions or computational overhead). Given this, to select which GMM from $\\mathcal{C}$ is used to sample tasks over time along the training of $s^K$, we start with the first GMM and only iterate over $\\mathcal{C}$ once the mean episodic reward over tasks recently sampled from the current GMM matches or surpasses the mean episodic reward recorded during the initial ALP-GMM run. In app. \\ref{ann:toy-exp}, we show that this \\textit{reward-based} variant outperforms other potential methods. See app. 
\\ref{app-again} for algorithmic details. We name the resulting meta-learned expert curriculum approach Infered progress Niches (IN)\n\n\\paragraph{5 - Our proposed approach: AGAIN.} Simply using IN directly for $s^K$ lacks adaptive mechanisms towards the characteristics of the new student (e.g. new embodiment, different initial parameters, ...), which could lead to failure cases where the expert curriculum misses important aspects of training (e.g. detecting task subspaces that are being forgotten). Additionally, the meta-learned ACL algorithm must have the capacity to emancipate from the expert curriculum once the trajectory is completed (i.e. go beyond $\\mathcal{C}$). This motivates why our approach combines IN with an ALP-GMM teacher after the initial pre-training. The resulting Alp-Gmm And Inferred progress Niches approach (AGAIN) samples tasks from a GMM that is composed of the current mixture of both ALP-GMM and IN. See appendix \\ref{app-alp-gmm} \\& \\ref{app-again} for implementation details and pseudo-code algorithms.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Experiments and Results}\n\\label{sec:expes}\n\nWe organize the analysis of our proposed Meta-ACL algorithm around $3$ experimental questions:\n\\begin{itemize}\n \\item What are the properties and important components of AGAIN? In this section we will leverage a toy environment without DRL students to conduct systematic experiments.\n \\item Does AGAIN scale well to Meta-ACL scenarios with DRL students? Here we will present a new Parkour environment that will be used to conduct our experiments.\n \\item Can AGAIN be used for single learners? Here we will show that it can be useful to derive curriculum priors even for a single student (i.e. without any student History $\\mathcal{H}$).\n\\end{itemize}\n\n\n\\paragraph{Considered baselines and AGAIN variants.} In the following experiments we compare AGAIN to variants where 1) we directly use the expert curriculum instead of combining it with ALP-GMM (\\textit{IN} condition), and 2) where we select a training trajectory at random (\\textit{AGAIN\\_RND}) or using the ground truth student distribution (\\textit{AGAIN\\_GT}). We compare these Meta-ACL variants to ACL approaches such as random curriculum generation (\\textit{Random}), \\textit{ALP-GMM} and either Adaptive Domain Randomization (\\textit{ADR}) \\citep{OpenAI2019SolvingRC} or an expert-made \\textit{Oracle} curriculum. See appendix \\ref{an:details} for details.\n\n\\subsection{Analysing Meta-ACL in a toy environment}\n\\label{sec:exp:toy-env}\n\nTo provide in-depth experiments on AGAIN, we first emancipate from DRL students through the use of a modified version of the toy testbed presented in \\citep{portelas2019}. The objective of this environment is to simulate the learning of a student within a $2$D parameter space $\\mathcal{P} = [0,1]^2$. The parameter space is uniformly divided in $400$ square cells $C \\subset \\mathcal{P}$, and each parameter $p \\in \\mathcal{P}$ sampled by the teacher is directly mapped to an episodic reward $r_p$ based on sampling history and whether $C$ is considered \"locked\" or \"unlocked\". Three rules enforce reward collection in $\\mathcal{P}$:~1) Every cell $C$ starts \"locked\", except a randomly chosen one that is \"unlocked\". 2) If $C$ is \"unlocked\" and $p \\in C$, then $r_p = min(|C|,100)$, with $|C|$ the cumulative number of parameters sampled within $C$ while being \"unlocked\" (if $C$ is \"locked\", then $r_p = 0$). 
Finally, 3) If $|C| >= 75$, adjacent cells become \"unlocked\". Given these rules, one can model students with different curriculum needs by assigning them different initially unlocked cells, which itself models what is \"easy to learn\" initially for a given student, and from where it can expand.\n\\begin{figure*}[htb!]\n\\centering\n\\subfloat{\\includegraphics[width=0.45\\textwidth]{graphics\/nips21_reg_toy_env_vizu.pdf}}\n\\subfloat{\\includegraphics[width=0.45\\textwidth]{graphics\/nips21_varying_classroom_size_expe.pdf}}\n\\caption{\\footnotesize{\\textbf{left:} By leveraging meta-learned curriculum priors w.r.t to its students, AGAIN outperforms regular ACL approaches. Avg. perfs. with \\textit{sem} (standard error of the mean) plotted, 48 seeds. The vertical dashed black line indicates when pre-training ends for Meta-ACL conditions. \\textbf{right:} Impact of classroom size and sparsity on Meta-ACL performances. Post-training ($200$k ep.) avg perfs. plotted, 96 seeds.}}\n\\label{vizu-toy-env-classroom}\n\\end{figure*}\n\n\\paragraph{Results} Instead of performing a pre-test to construct the KC vector of a student, we directly compute it by concatenating $|C|$ for all cells, giving a $400$-dimensional KC vector. This vector is computed after $20$k training episodes out of $200$k. To study AGAIN, we first populate our training trajectory history $\\mathcal{H}$ by training with ALP-GMM an initial classroom of 128 students drawn randomly from 4 fixed possible student types (i.e. 4 possible initially unlocked cell positions), and then test it on a new fixed set of $48$ random students.\n\n\\textit{Comparative analysis - } Figure \\ref{vizu-toy-env-classroom} (left) showcases performance across training for our considered Meta-ACL conditions and ACL baselines. Both AGAIN and IN significantly outperform ALP-GMM ($p<.001$ for both, using Welch's t-test at $200$k episodes). The initial performance advantage of IN w.r.t AGAIN is due to the greedy nature of IN, which only exploits the expert curriculum while AGAIN complements it with ALP-GMM for exploration. By the end of training, AGAIN outperforms IN ($p<.001$) thanks to its ability to emancipate from the curriculum priors it initially leverages. The regular KC-based curriculum priors selection used in AGAIN outperformed the random selection used in AGAIN\\_RND ($p<.001$ at $200$k episodes), while being not significantly inferior to the Ground Truth variant AGAIN\\_GT ($p=0.16$). Because we assume no expert knowledge over the set of students to train, i.e. their respective initial learning subspace is unknown, ADR -- which relies on being given an initial easy task -- fails to train most students when given randomly selected starting subspace (among the $4$ possible ones). By contrast, this showcases the ability of AGAIN to autonomously and efficiently infer such expert knowledge.\n\n\\textit{Varying classroom size experiment - } An important property that must be met by a meta-learning procedure is to have a monotonic increase of performance as the database of information being leveraged increases. Another important expected aspect of Meta-ACL is whether the approach is able to generalize to students that were never seen before. To assess whether these properties hold on AGAIN, we consider the full student distribution of the toy environment, i.e. $400$ possible student types. We populate a new history $\\mathcal{H}$ by training (with ALP-GMM) a $400$-students classroom (one per student type). 
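For reference, the cell-unlocking dynamics described above can be reproduced in a few lines of Python. The grid size, the reward cap of $100$ and the $75$-sample unlock threshold follow the rules given in this section; the function and variable names are illustrative only and do not correspond to our actual codebase.

\\begin{verbatim}
import numpy as np

N = 20                                         # 20 x 20 = 400 cells over [0,1]^2
counts = np.zeros((N, N), dtype=int)           # |C|: samples received while unlocked
unlocked = np.zeros((N, N), dtype=bool)
rng = np.random.default_rng(0)
start = tuple(rng.integers(0, N, size=2))      # the student's initially unlocked cell
unlocked[start] = True

def step(p):
    """Episodic reward r_p for a parameter p sampled in [0,1]^2."""
    i, j = (np.clip(np.asarray(p), 0.0, 1.0 - 1e-9) * N).astype(int)
    if not unlocked[i, j]:
        return 0.0                             # rule 2, locked case
    counts[i, j] += 1
    if counts[i, j] >= 75:                     # rule 3: unlock the adjacent cells
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < N and 0 <= j + dj < N:
                unlocked[i + di, j + dj] = True
    return float(min(counts[i, j], 100))       # rule 2, unlocked case
\\end{verbatim}

Modelling a given student type then simply amounts to fixing \\texttt{start} instead of drawing it at random.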
We then analyse the end performance of AGAIN and IN on a fixed test set of 96 random students when given increasingly smaller subsets of $\\mathcal{H}$. The smaller the subset, the harder it becomes to generalize over new students. Results, shown in fig. \\ref{vizu-toy-env-classroom} (right), demonstrate that both AGAIN and IN do have monotonic performance increasements as the classroom grows. With as little as $10$\\% of possible students in the classroom, AGAIN statistically significantly ($p<.001$) outperforms ALP-GMM on the new student set, i.e. it generalizes to never seen before students.\n\n\n \\subsection{Meta-ACL for DRL students in the Parkour environment}\n \\label{sec:exp:walker-climber}\n \n \\begin{wrapfigure}[14]{r}{6.3cm}\n\\vspace{-0.4cm}\n\\includegraphics[width=5.7cm]{graphics\/parkour_vizu.PNG}\n\\caption{\\footnotesize{Our proposed parametric Parkour env. to study Meta-ACL with DRL students.}}\n\\label{parkour-env}\n\\end{wrapfigure}\n \nTo study Meta-ACL with DRL students, we present a Box2D Parkour environment with a $2$D parametric PCG that encodes a large space of tasks (see fig. \\ref{parkour-env}). The first parameter controls the spacing between walls that are positioned along the track, while the second parameter sets the y-position of a gate that is added to each wall. Positive rewards are collected by going forward. To simulate a multi-modal distribution of students well suited to study Meta-ACL, we randomize the student's morphology for each new training (i.e. each seed): It can be embodied in either a bipedal walker, which will be prone to learn tasks with near-ground gate positions, or a two-armed climber, for which tasks with near-roof gate positions are easiest. We also randomize the student's limb sizes which can vary from the length visible in fig. \\ref{parkour-env} to 50\\% shorter.\n\n\n\n\n\\paragraph{Results} In the following experiments our Meta-ACL variants leverage a history $\\mathcal{H}$ built from a classroom of $128$ randomly drawn Soft-Actor-Critic \\citep{sac} students (i.e. varying embodiments and initial policy weights) trained with ALP-GMM. We then compare ACL and Meta-ACL variants on a fixed set of 64 new students and report the mean percentage of mastered environments (i.e. $r>230$) from 2 fixed expert test sets (one per embodiment type) across training. The KC vector is built using a uniform pre-test set of $m=225$ tasks, performed after $2$ millions agent steps out of $10$. See appendix \\ref{ann:parkour} for additional experimental details.\n\n\\textit{Qualitative view - } Figure \\ref{wc-env-vizu} (left) showcases the evolution of task sampling when using AGAIN to train a new student. Three distinct phases emerge: 1) A pre-training exploratory phase used to gather information about the student's capabilities, 2) After building the KC vector and inferring the most appropriate curriculum priors from $\\mathcal{H}$, AGAIN paces through the resulting IN curriculum while mixing it to ALP-GMM, and 3) AGAIN emancipates from IN after completing it.\n\n\\textit{Comparative analysis - } As shown in figure \\ref{wc-env-vizu} (right), through its use of curriculum priors, AGAIN outperforms ALP-GMM on Parkour, mastering an average of $41\\%$ of the test set at $10$M steps, compared to $31\\%$ for ALP-GMM ($p<.001$) after $10.5$M steps ($0.5$M training steps added to account for AGAIN additional pre-test time). 
AGAIN performs better than its AGAIN\\_RND random prior selection variant, and is not statistically different ($p=0.8$) from ground truth sampling (AGAIN\\_GT), although only by the end of training. While AGAIN and IN initially have comparable performances, after $7$ Millions training steps, -- a point at which most students trained with IN or AGAIN reached the last IN GMM --, AGAIN outperforms IN by the end of training ($p<0.02$). This showcases the advantage of emancipating from the expert curriculum once completed. As in the toy environment experiments, when given randomly selected starting subspaces (since we assume no expert knowledge), ADR fails to train most students.\n\n \\begin{figure*}[htb!]\n\\centering\n\\subfloat{\\includegraphics[width=0.49\\textwidth]{graphics\/vizu_wc_AGAIN-R15.pdf}}\n\\subfloat{\\includegraphics[width=0.48\\textwidth]{graphics\/nips21_env_rndls_only_combiperf.pdf}}\n\\caption{\\footnotesize{\\textbf{left:} Example of evolution of task sampling when using AGAIN in the Parkour env. (1 seed). \\textbf{right:} Average performances of AGAIN with variants and baselines in the Parkour env.. 64 seeds, sem plotted. The vertical dashed black line indicates when pre-training ends for Meta-ACL conditions.}}\n\\label{wc-env-vizu}\n\\end{figure*}\n\n\n \\subsection{Applying Meta-ACL to a single student: Trying AGAIN instead of trying longer}\n \\label{exp:trying-again}\n\n \nGiven a single DRL student to train (i.e. no history $\\mathcal{H}$), and if we do not assume access to expert knowledge, current ACL approaches leverage task-exploration (as in ALP-GMM). We hypothesize that these additional tasks presented to the DRL learner have a cluttering effect on the gathered training data, which adds noise in its already brittle gradient-based optimization and leads to sub-optimal performances. We propose to address this problem by modifying AGAIN to fit this no-history setup and by allowing to restart the student along training. More precisely, instead of pre-testing the student to find appropriate curriculum priors in $\\mathcal{H}$, we split the training of the target student into a two stage approach where 1) the DRL student is first trained with ALP-GMM (with high-exploration), and then 2) we extract curriculum priors from the training trajectory of the first run and use them to re-train the same agent \\textit{from scratch}.\n \n\n\n \\begin{wrapfigure}[18]{r}{6.5cm}\n \\vspace{-0.2cm}\n\\includegraphics[width=6.5cm]{graphics\/nips21_short_compact_demo.pdf}\n\\caption{\\footnotesize{Given a single DRL student to train, AGAIN outperforms ALP-GMM in a parametric BipedalWalker environment. sem plotted, 32 seeds.}}\\label{trying-again-compact}\n\\end{wrapfigure} \n\\paragraph{Results.} We test our modified AGAIN along with variants and baselines on a parametric version of BipedalWalker proposed in \\citep{portelas2019}, which generates walking tracks paved with stumps whose height and spacing are defined by a PCG-encoding $2$-D parameter vector. As in their work, we test our approaches with both the default walker and a modified short-legged walker, which constitutes an even more challenging scenario (as the task space is unchanged). Performance is measured by tracking the percentage of mastered tasks from a fixed test set. See App. \\ref{ann:tryingagain} for a complete analysis. \n\nFigure \\ref{trying-again-compact} showcases our proposed approach on the short walker setup (with a SAC student \\citep{sac}). 
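The pacing rule used in this second run can be sketched as follows: the preliminary run leaves a list of progress niches (faked below as three hand-written tuples), entries with learning progress below the threshold $\\delta_{LP}$ are pruned, and the retrained student only advances to the next niche once its recent mean episodic reward reaches the level recorded during the first run. The stand-in student model and all numbers below are illustrative and not taken from our implementation.

\\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

# Output of the preliminary run: (niche center, learning progress, mean reward).
niches = [(np.array([0.2, 0.1]), 0.9,  50.0),
          (np.array([0.5, 0.3]), 0.6, 120.0),
          (np.array([0.8, 0.6]), 0.1, 180.0)]        # low-LP entry, pruned below
delta_lp = 0.2
curriculum = [n for n in niches if n[1] > delta_lp]  # curated list C

def episode_reward(task, competence):                # stand-in for the DRL student
    return 200.0 * np.exp(-np.linalg.norm(task - competence)) + rng.normal(0.0, 5.0)

competence = np.zeros(2)                             # second run: retrain from scratch
idx, recent = 0, []
for episode in range(600):
    center, lp, target_reward = curriculum[idx]
    task = np.clip(rng.normal(center, 0.05), 0.0, 1.0)
    recent.append(episode_reward(task, competence))
    competence += 0.002 * (task - competence)        # toy "learning" dynamics
    if (len(recent) >= 50 and np.mean(recent[-50:]) >= target_reward
            and idx < len(curriculum) - 1):
        idx, recent = idx + 1, []                    # advance to the next inferred niche
\\end{verbatim}

In the full approach this expert curriculum is additionally mixed with ALP-GMM, which is what allows AGAIN to recover when the inferred niches do not suit the new student.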
On this short walker scenario, mixing ALP-GMM with IN is essential: while IN end performances are not statistically significantly superior to ALP-GMM, AGAIN clearly outperforms ALP-GMM $(p<0.01)$, reaching a mean end performance of $19.0$. The difference in end-performance between AGAIN and Oracle, our hand-made curriculum using privileged information who obtained $20.1$, is not significant ($p=0.6$).\n\n\\section{Conclusion and Discussion}\nIn this work we attempted to motivate and formalize the study of Classroom Teaching problems, in which a set of diverse students have to be trained optimally, and we proposed to attain this goal through the use of Meta-ACL algorithms. We then presented AGAIN, a first Meta-ACL baseline, and demonstrated its advantages over classical ACL and variants for CT problems in both a toy environment and in a new parametric Parkour environment with DRL learners. We also showed how AGAIN can bring performance gains over ACL in classical single student ACL scenarios.\n\n\\paragraph{Future work}In future work, AGAIN could be improved by using adaptive approaches to build compact pre-test sets, e.g. using decision tree based test pruning methods, or by combining curriculum priors from multiple previously trained learners. While AGAIN is built on top of an existing ACL algorithm, developing an end-to-end Meta-ACL algorithm that generates curricula using a DRL teacher policy trained across multiple students is also a promising line of work to follow. \nAdditionally, this work opens-up exciting new perspectives in transferring Meta-ACL methods to educational data-mining, e.g. in MOOC scenarios, given a previously trained pilot classroom, one could use Meta-ACL to infer adaptive curricula for new students.\n\n\\paragraph{Potential negative societal impact} Meta Automatic Curriculum Learning exploits previous curriculum data of DRL students to better train new ones. As such, if previously trained students acquired unwanted biases, they could potentially be transferred and further amplified from the Meta-ACL training of new DRL students. This can have serious negative societal impacts if considering applications in socially impactful domains (e.g. deciding whether to give an insurance to someone). Further work will need to study how one\ncan safely learn curriculum priors that are not harmful in sensitive application areas.\n\n\\section*{Acknowledgments}\nThis work was supported by Microsoft Research through its PhD Scholarship Programme. All presented experiments were carried out using 1) the computing facilities MCIA (M\u00e9socentre de Calcul Intensif Aquitain) of the Universit\u00e9 de Bordeaux and of the Universit\u00e9 de Pau et des Pays de l'Adour, and 2) the HPC resources of IDRIS under the allocation 2020-[A0091011996] made by GENCI.\n\n\\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nHuman-robotic fusions controlled through brain computer interfaces (BCI) have tremendous potential to impact human health and capabilities through applications like intelligent motor and cognitive prostheses~\\cite{guger1999prosthetic, muller2007control, mcfarland2008brain, fifer2013simultaneous, hotson2016individual, Zhao2017, Akinola2017}. BCI-based control approaches are currently limited by the number of independent degrees of freedom that can be reliability controlled directly. The number of degrees of freedom on motor prostheses for example can be several dozen~\\cite{pasquina2015recent}. 
This is about an order of magnitude more degrees of freedom than can be reliability generated by current non-invasive BCI systems. The dilemma is how to control high-degree of freedom complex machines with only a few control inputs. \n\nOne approach to increase the impact of limited control inputs is through modularity and hierarchical control mechanisms~\\cite{Zhao2017, Akinola2017}. The idea is to use the limited number of inputs to select a primitive control policy, from a library of primitive behaviors, and potentially a target. Complex tasks are performed by chaining primitive behaviors.\n\nAs an example of this scenario, consider a Universal Robots UR5~\\cite{UR} manipulator mounted on a Clearpath Husky platform~\\cite{Husky} as shown in Fig.~\\ref{fig:husky}. The UR5 is used to demonstrate reaching, grabbing, and lifting a block on a table. Other tasks may require performing these actions in another order, so it may be useful to learn and maintain a collection of these primitive behaviors for later use. While the underlying behavior primitives are well defined for the reach-and-grasp scenario, other example scenarios may not have as well defined or labeled primitives. In this work, we assume that the underlying label of the behaviors shown in the task demonstrations is unknown. \n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[width=.8\\linewidth]{img\/husky.png}\n \\caption{Husky-UR5 Reach and Grasp Environment }\n \\label{fig:husky}\n\\end{figure} \n\nThe questions we investigate are how might we learn and maintain the primitive library from unlabeled demonstrations and, assuming the behavior primitive library exists, how would one know when to use, adapt, or create a new primitive behavior. We propose that the behavior library should be actively maintained to minimize redundancy and maximize the ability to reconstruct complex tasks through chains of the primitive behaviors. In this work, we explore techniques to directly optimize for these criteria by building on methods that learn from demonstration.\n\n\nWe explore maintaining a behavior primitive library in an online learning scenario. Given a potentially non-empty behavior primitive library and a new set of unlabeled task demonstrations, we seek to update the behavior primitive library to maximally accommodate the new demonstrations while maintaining the ability to reconstruct previously demonstrated trajectories.\n\nOur contribution is an approach called $PICO$\\xspace that simultaneously learns subtask decomposition from unlabeled task demonstrations, trains behavior primitives, and learns a hierarchical control mechanism that allows blending of primitive behaviors to create even greater behavioral diversity, an overview is shown in Fig.~\\ref{fig:overview}. Our approach directly optimizes the contents of the primitive library to maximize the ability to reconstruct unlabeled task demonstrations from sequences of primitive behaviors.\n\\begin{figure}[ht]\n\\centering\n \\includegraphics[width=1\\linewidth]{img\/overview.png}\n \\caption{An overview of $PICO$\\xspace. The approach takes as input unlabeled demonstrations and a library of primitive behaviors. The goal is to predict the primitive behavior label associated with each time point in all demonstrations. Additional behavior primitive models can be trained to fill gaps that are not well represented by existing behavior primitives. 
}\n \\label{fig:overview}\n\\end{figure} \n\n\\section{PRELIMINARIES}\n\n\n\n\nLearning from demonstration (LfD) and imitation learning allow agents to execute a task by observing the task being performed \\cite{Hussein:2017:ILS:3071073.3054912}. In the robotics domain, a goal of imitation learning is to produce a mapping, $\\pi$, from states to actions, known as a control \\emph{policy} \\cite{ARGALL2009469, schaal2010learning}, that has the maximum likelihood of producing the demonstration dataset $\\mathcal{D} = \\{\\rho_1,\\rho_2,\\dots,\\rho_n\\}$, where each $\\rho = ((s_1,a_1),(s_2,a_2),\\dots,(s_T,a_T)$ is a demonstration trajectory of of state, action pairs. The demonstrations can be created by another control policy \\cite{distillation}, by a human expert \\cite{konidaris2012}, or in a simulated environment \\cite{TACO18, compile2019}. Let $\\pi_\\theta$ parameterized by $\\theta$. The goal is then to optimize Equation \\ref{bc} by varying $\\theta$.\n\n\\begin{equation}\n \\max \\mathbb{E}_\\rho[\\sum_{t=1}^T\\log\\pi_\\theta(a_t|s_t)]\\label{bc}\n\\end{equation}\n\nFollowing optimization, covariate drift can cause errors in the control process that can place the robot in a previously unobserved state. Control policies will have higher action prediction errors in parts of the state space that it has not observed, leading to poor action predictions and compounding errors with increased iterations of the policy. One approach that has been introduced to decrease the impact of covariate shift is to introduce noise into the demonstrations used for learning \\cite{DART}. This approach increases the amount of state space covered by the policy and improves action predictions around the demonstrations, leading to better generalization and error tolerance. \n\n\\subsection{Model Agnostic Meta-Learning}\nIn meta-learning a model is trained on a variety of learning tasks and the parameters of the method are fine-tuned for generalization. The idea of meta-learning is to combine a set of learner models to improve performance on a task more quickly than one without pretrained models. This is a common strategy for one-shot~\\cite{santoro2016oneshot} or few shot scenarios, where a model must be trained using one or a few examples. Some approaches for meta-learning come from the reinforcement learning~\\cite{Finn2017}, which typically differ in how they update individual learners. Some meta-learning methods update models using gradient information~\\cite{Finn2017} and others learn how to update learners from data~\\cite{learning_to_learn,Bengio2002}.\n\n\\section{RELATED WORK}\n\n\nImitation learning alone does not provide a mechanism to generalize demonstrations to new tasks. One mechanism to address this challenge is task decomposition, which has the goal of identifying subtasks from demonstration. Subtasks can be made into sub-policies through imitation learning, including methods methods that combine subtask discovery with imitation learning~\\cite{TACO18,NTP18}. By decomposing demonstrations into subtasks, it becomes possible to permute the sequence of sub-policies to achieve greater task diversity and generalizability. However, decomposing demonstrations into subtasks that are maximally useful for recombination is a challenge in task decomposition~\\cite{TACO18}. \n\n\nOnce sub-task policies are established, a hierarchical control policy can be learned that identifies the sequence of policies needed to achieve a specified goal. 
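As a point of reference for the imitation-learning objective in Equation \\ref{bc}, the sketch below fits a linear-Gaussian policy to synthetic demonstrations by gradient ascent on the log-likelihood, which for this policy class reduces to least squares on the state-action pairs. The added state noise is only a crude stand-in for noise-injection schemes such as DART, which optimize the noise injected at demonstration-collection time rather than perturbing the data after the fact; all array shapes and constants are illustrative.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(2, 4))                     # unknown demonstrator policy
states = rng.normal(size=(500, 4))                   # demonstrated states
actions = states @ W_true.T + 0.05 * rng.normal(size=(500, 2))

noise = 0.1 * rng.normal(size=states.shape)          # widen the covered state region
S = np.vstack([states, states + noise])
A = np.vstack([actions, actions])

W = np.zeros((2, 4))                                 # pi(a|s) = N(W s, sigma^2 I)
for it in range(2000):                               # gradient ascent on the log-likelihood
    grad = (A - S @ W.T).T @ S / len(S)
    W += 0.1 * grad
print("parameter error:", float(np.linalg.norm(W - W_true)))
\\end{verbatim}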
Given a sufficiently diverse set of demonstrations the reasoning layer can be learned from a set of demonstrations~\\cite{NTP18}. Several approaches for learning hierarchical architectures for control policies from limited demonstrations have been proposed~\\cite{TACO18, NTP18, duan2017}. We were inspired by the work on mixtures-of-experts\\cite{mixture_of_experts,experts}\nwhich includes a similar hierarchical representation.\n\n\nSome approaches assume that the behavior primitive library is fully trained in advance~\\cite{NTP18}. In the reinforcement learning domain, the options framework~\\cite{drl_options, andreas2017, kulkarni2016} and hierarchical reinforcement learning~\\cite{HRL} are common approaches for organising hierarchies of policies. The techniques in reinforcement learning are often predicated on being able to interact with an environment and collect a lot of data. In this work, we focus on learning hierarchical task decomposition strategies from a limited set of demonstrations.\n\n\\subsection{Task Sketch for Sub-policy Discovery}\nSome related approaches~\\cite{andreas2017, mu2019plots} perform demonstration decomposition by combining both demonstrations and task sketches. The literature refers to these approaches as \\textit{weakly-supervised} because the order of tasks is given and the exact transition points within a demonstration must be inferred. \n\nLet $\\mathcal{D}$ be our dataset containing\ntrajectories $\\rho = ((s_0,a_0),(s_1,a_1),\\ldots,((s_T,a_T))$ of length $T$ containing state-action tuples $(s,a)$ for state $s$ and action $a$.\nGiven a library of sub-tasks policies $\\mathcal{B}=(\\pi_1,\\pi_2,\\ldots,\\pi_K)$, A task sketch $\\tau =(\\tau_1,\\tau_2,\\ldots,\\tau_L$) is a sequence of sub-tasks labels where $L$ is the length of the sketch. A path is a sequence of sub-task labels $\\zeta = (\\zeta_1,\\zeta_2,\\ldots,\\zeta_T)$ where $T$ is the length of a demonstration. We assume that $L<p(\\pi|\\rho_t)$ for $\\pi \\in \\mathcal{B}$. The data $\\rho_t$ is then used to train $\\pi_{new}$. For nearby data in the same gap region $\\rho_{t+1}$, it is now more likely that $p(\\pi_{new}|\\rho_{t+1})>p(\\pi|\\rho_{t+1})$ for $\\pi \\in \\mathcal{B}$.\nThis mechanism allows $\\pi_{new}$ to develop in to a new behavior primitive that is not well covered by existing primitives.\n\n\n\n\\subsection{Training Details}\n$PICO$\\xspace is trained end-to-end by back propagation. This is possible because all functions in the model are differentiable with the exception the \\texttt{argmax} function. For experiments making use of pretrained behavior primitive models, the contents of the behavior primitive library are trained using the DART~\\cite{DART} technique for imitation learning.\n\nAs shown in Equation \\ref{mse}, the loss used to train the model is mean squared error between the predicted and observed actions over all timepoints and all demonstrations. There is no loss term for label prediction accuracy, because we assume that the demonstrations are unlabeled.\n\n\\subsection{Metrics}\n\nTwo metrics are computed to estimate performance. First, we evaluate mean squared error (MSE) as shown in Equation \\ref{mse} between the predicted and given action. \nSecond, we compute behavior primitive label accuracy which is a comparison between the predicted and given behavior primitive label. Label accuracy is computed as the number of matching labels divided by the total number of comparisons. 
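The two metrics are straightforward to compute; the sketch below (array names are our own and purely illustrative) pins down the computation, with Equation \\ref{mse} as the reference definition for the error term:\n\\begin{verbatim}\nimport numpy as np\n\ndef action_mse(pred_actions, true_actions):\n    # mean squared error between predicted and demonstrated actions\n    return np.mean((pred_actions - true_actions) ** 2)\n\ndef label_accuracy(pred_labels, true_labels):\n    # fraction of time points whose predicted primitive label matches\n    return np.mean(pred_labels == true_labels)\n\\end{verbatim}\n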
Both metrics are computed over all timepoints and over all demonstrations in the test set.\n\n\n\\subsection{Baseline Implementations}\nShiarlis et al. \\cite{TACO18} developed TACO, which aligns subtasks to demonstrations given a library of primitives and a \\emph{task sketch}, where a task sketch describes the sequence in which subtasks will appear. In addition, in their recent work \\cite{TACO18}, they extended the connectionist temporal classification (CTC) algorithm \\cite{CTC06}, commonly used to align sequences for speech recognition, for use with identifying subtasks. For this work, we use TACO and the extended version of CTC as baseline comparisons for our algorithm, using an open source implementation~\\footnote{https:\/\/github.com\/KyriacosShiarli\/taco}. Both were tested using MLP and RNN architectures. \n\n\n\n\\section{EXPERIMENTS AND DISCUSSION}\nWe evaluate $PICO$\\xspace on a reach-grab-lift task in a Husky+UR5 environment. The dataset consists of 100 demonstrations of a Clearpath Husky robot with a UR5 manipulator performing a variety of reach, grasp, and lift tasks; see Figure~\\ref{fig:husky}. The number of time steps in the demonstrations varied from 1000 to 1800, but each used all three primitives: reach, grasp, and lift. \n\nThe first experiment quantifies the ability of $PICO$\\xspace to identify primitive task labels from demonstrations independently of learning behavior primitives. The second experiment evaluates the ability of $PICO$\\xspace to identify parts of demonstrations that are not represented by existing behavior primitives and rebuild the missing behavior primitive.\n\n\\subsection{Reconstruction from existing primitives}\n\n\nOur initial experiment is an ablation study that separately evaluates the estimate of the primitive behavior probability distribution and the action predictions from learning behavior primitives. We train and freeze behavior primitive models for \\emph{reach}, \\emph{grasp}, and \\emph{lift} using the ground truth labeled data from trajectories. We evaluated $PICO$\\xspace, TACO \\cite{TACO18}, and CTC based on label classification accuracy. For TACO and CTC we additionally compared the methods using MLP- and RNN-based underlying network models. We evaluated all methods based on an 80\/20 split of demonstrations into training and test sets. The average of five independent runs was obtained for each approach. In Table \\ref{table:husky}, we show the results of the comparison. \n\n\n\\begin{figure}[ht]\n\\centering\n \\subfloat[Sample trajectory label accuracy]{\n \\includegraphics[width=.75\\linewidth]{img\/reconstruction.png}\n }\\break\n \\subfloat[Missing primitive label accuracy]{\n \\includegraphics[width=.75\\linewidth]{img\/discovery.png}\n }\n \\caption{Example behavior primitive label accuracy for a single test demonstration. We compared the label predictions given by $PICO$\\xspace (red) to the ground truth (blue). (a) A sample reconstruction for a single trajectory with an existing behavior primitive library. Timepoints are on the x-axis and the behavior primitive label is on the y-axis. The labels 0, 1, and 2 correspond to reach, grasp, and lift, respectively. (b) Reconstruction of an example trajectory and discovery of a missing behavior primitive (grasp). }\n \\label{fig:reconstruction}\n\\end{figure}\n\n\nFigure \\ref{fig:reconstruction}(a) shows a comparison between the predicted label based on Equation \\ref{eqn:label} and the ground truth label. 
Over all trajectories in the test set, the average label classification accuracy was 96\\% compared to the ground truth label. The summary of results are shown in Table \\ref{table:husky}.\n\n\\subsection{Behavior Primitive Discovery}\n\n\n\nIn our next experiment, we evaluate the ability of $PICO$\\xspace to recognize and build a missing behavior primitive model. We ran a leave-one-behavior-out experiment where one of the three primitives (i.e. reach, grasp, lift) was replaced with a randomly-initialized behavior primitive. This experiment used the same 100 trajectories on the Husky+UR5 dataset discussed in the previous section and a 80\/20 split between training and validation sets. Again, five trials were run with the training and validation sets randomly chosen. The label accuracy and action prediction MSE are shown in \\ref{fig:labelling}. The leftmost bar shows the results with all primitives pre-trained with behavior cloning. The remaining bars show the accuracy when reach, grasp and lift, respectively, were replaced with the gap primitive. Note, the gap primitive was updated throughout the training with back-propagation such that the final primitive ideally would perform as well as the original pre-trained, behavior-cloned version; this comparison is shown with the action prediction MSE. The error bars show the standard deviation across the five trials. While the label accuracy across all three replaced primitives is approximately the same, the action prediction for the lift primitive is significantly worse. We believe this is due to the larger variance in lift trajectories. Unlike the reach and grasp which have restrictions placed on their final target position (it needs to be near the block), the final position of lift is randomly placed above the block's starting position. \n\n\n\nAs shown in the sample trajectory in Figure \\ref{fig:reconstruction}(b), the label prediction of the trained model closely aligns with the ground truth label from the example trajectory. Over all of the test trajectories, the average label classification accuracy was 96\\%.\n\n\\begin{figure}[ht]\n\\centering\n \\subfloat[behavior label accuracy]{\n \\includegraphics[width=.75\\linewidth]{img\/label_accuracy.png}\n }\n \\break\n \\subfloat[action prediction MSE]{\n \\includegraphics[width=.75\\linewidth]{img\/action_mse_accuracy.png}\n }\n \\caption{Accuracy of $PICO$\\xspace to correctly identify a primitive's label on the validation set (twenty randomly selected trajectories). (a) The leftmost bar shows performance when all primitives are in the library, successive bars denote accuracy when the \\emph{reach}, \\emph{grasp}, and \\emph{lift} primitives are dropped out and learned from a randomly generated ``gap\" primitive. Error bars represent the standard deviation across five validation trials. (b) Mean squared error between the ground truth action and the learned model's estimate averaged across twenty randomly selected test trajectories five times. }\n \\label{fig:labelling}\n\\end{figure}\n\n\n\n\\subsection{Visualizing the Learned Latent Space}\n\nTo better understand the role of the embedding space for predicting the primitive probability distribution, we visualized the embedding of all states vectors from the test set in the recurrent hidden layer. 
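A minimal way to produce such a two-dimensional picture is sketched below; the use of PCA for the projection and the placeholder arrays are our own assumptions for illustration only, since the projection method is not essential to the argument.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.decomposition import PCA\nimport matplotlib.pyplot as plt\n\n# Placeholder data; in practice these are the recurrent hidden states\n# collected on the test set and their ground-truth primitive labels.\nhidden = np.random.randn(500, 64)\nlabels = np.random.randint(0, 3, size=500)\n\nxy = PCA(n_components=2).fit_transform(hidden)\nplt.scatter(xy[:, 0], xy[:, 1], c=labels, s=4)\nplt.show()\n\\end{verbatim}\n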
We would expect that a useful latent embedding would naturally cluster states that correspond to different primitives into distinct locations in the embedding space.\n\\begin{figure}[ht]\n\\centering\n \\includegraphics[width=.75\\linewidth]{img\/latent2.png}\n \\caption{The organization of the learned latent space associated with the Husky-UR5 dataset for reach, grasp, and lift (red, green, and purple, respectively). }\n \\label{fig:latent}\n\\end{figure}\nFigure \\ref{fig:latent} shows the layout of the latent space in two dimensions. Each point corresponds to a state vector from the test dataset. The points are colored by the ground truth label.\n\n\\subsection{Jaco Dial Domain Dataset}\n\\begin{figure}[ht]\n\\centering\n \\includegraphics[width=.75\\linewidth]{img\/pinpad.png}\n \\caption{The joint-domain dial scenario. A Jaco manipulator modeled in Mujoco presses a sequence of 4 keys on a dialpad. The positions of the keys are randomly shuffled for each demonstration. The positions of the joints and positions of the keys are given as state information. }\n \\label{fig:pinpad}\n\\end{figure}\n\nWe also make use of the Jaco dial domain dataset~\\cite{TACO18} illustrated in Figure \\ref{fig:pinpad}. The dial dataset is composed of demonstrations from a Jaco manipulator pressing 4 keys in sequence (e.g., 3, 5, 4, 7). The positions of the keys are randomly shuffled for each demonstration, but the position of each key is given in the state vector. The intention is to treat pressing an individual digit as a behavior primitive. For this dataset, label prediction accuracy is a challenging metric without a task sketch because the starting position of the Jaco may not provide clues about which button will be pressed. As the Jaco gets closer to a button, it becomes clearer which button will be pressed. The dataset of dialpad demonstrations was generated using default parameters and code from TACO~\\cite{TACO18}.\n\n\n\n\n\n\n\n\\subsection{Dial Domain Comparison}\nThe goal of this comparison is to evaluate the label prediction accuracy of the metacontroller in $PICO$\\xspace. To isolate the label predictions of the metacontroller, the behavior primitive library is pretrained on the training dataset including 1200 demonstrations and frozen. Label classification and action prediction accuracy are then evaluated on the test set including 280 demonstrations.\n\nThe average results of 5 runs are shown for TACO and CTC. We evaluate each approach using the same label accuracy and action prediction metrics. The summary of results is shown in Table \\ref{table:pinpad}. We found that our approach achieves the highest label accuracy at 65\\%. The overall label accuracy of $PICO$\\xspace on the dial dataset is lower than on the Husky+UR5 dataset. Additional analysis revealed that many of the mislabelings occurred at the beginning of a new key press, where context about where the Jaco is moving next is weakest. The dataset is also more challenging than the Husky dataset because the number of unique behavior primitives has increased from 3 to 10. \n\nAlso of note, we compare our results to TACO, which is a weakly supervised approach. TACO is given the ordering of tasks. For task sequences of length 4, this means that a random baseline would be expected to achieve an accuracy of 25\\%. For an unlabeled approach like $PICO$\\xspace, any of the 10 behavior primitives could be selected at each timepoint. 
This means that unlabeled demonstrations the expected accuracy of a random baseline would be 10\\%.\n\n\n\\section{CONCLUSION}\n\n\n\nIn this paper, we describe $PICO$\\xspace, an approach to learn behavior primitives from unlabeled demonstrations and a partial set of behavior primitives. We optimize a metric that directly minimizes reconstruction error for a set of demonstrations using sequences of behavior primitives. We directly compare our results to similar approaches using demonstrations generated from simulations of two different robotic platforms and achieve both better label accuracy and reconstruction accuracy as measured by action prediction mean squared error. While we have demonstrated success in these tasks, there are limitations to our approach. The number additional primitives to add to the library must be decided prior to training. In spite of these limitations, we believe that $PICO$\\xspace is a useful contribution to the community that may be relevant in a number of different domains.\n\n\n\\begin{table}[]\n\\caption{Method comparisons using the Husky UR5 Reach and Grasp dataset.}\n\\label{table:husky}\n\\begin{tabular}{|c|c|c|}\n\\hline\nHusky UR5 & Label Accuracy & MSE Action Prediction \\\\ \\hline\n$PICO$\\xspace & \\textbf{96\\%} & \\textbf{0.053} \\\\ \\hline\nTACO (MLP) & 74\\% & 3.59 \\\\ \\hline\nTACO (RNN) & 73\\% & 3.75 \\\\ \\hline\nCTC (MLP) & 25\\% & 4.20 \\\\ \\hline\nCTC (RNN) & 33\\% & 2.68 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[]\n\\caption{Method comparisons using the Jaco Pinpad dataset. *TACO (RNN) resulted in NaN loss after repeated attempts.}\n\\label{table:pinpad}\n\\begin{tabular}{|c|c|c|}\n\\hline\nJaco Pinpad & Label Accuracy & MSE Action Prediction \\\\ \\hline\n$PICO$\\xspace & \\textbf{65\\%} & \\textbf{0.0061} \\\\ \\hline\nTACO (MLP) & 47\\% & 0.55 \\\\ \\hline\nTACO (RNN) & * & * \\\\ \\hline\nCTC (MLP) & 31\\% & 0.57 \\\\ \\hline\nCTC (RNN) & 29\\% & 0.58 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n \n\n\n\n \n \n \n \n \n\n\n\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $n$ be a positive integer, \nand let $\\mathbb{F}$ be any field.\nThe \\emph{subspace lattice} $\\mathcal{P}_n(\\mathbb{F})$\n(also known as the \\emph{finite projective space})\nis the poset of linear subspaces of $\\mathbb{F}^n$.\nThe general linear group $\\mathrm{GL}_n(\\mathbb{F})$ \nacts on $\\mathcal{P}_n(\\mathbb{F})$.\nThe natural grading structure\nof $\\mathcal{P}_n(\\mathbb{F})$\nis given by $\\mathrm{GL}_n(\\mathbb{F})$-action,\nand\neach fiber (i.e., orbit) is called a \\emph{Grassmannian}.\nIn other words,\nthe Grassmannian $\\mathrm{Gr}(m,n)$ is the set of \n$m$-dimensional subspaces in $\\mathbb{F}^n$, where $0 \\le m \\le n$.\nLet $\\mathcal{B}$ denote the Borel subgroup of $\\mathrm{GL}_n(\\mathbb{F})$.\nThen, the $\\mathcal{B}$-action defines \na finer ``hyper-cubic'' grading structure\nof $\\mathcal{P}_n(\\mathbb{F})$,\nand\neach fiber contained in $\\mathrm{Gr}(m,n)$\nis called a \\emph{Schubert cell of $\\mathrm{Gr}(m,n)$}.\nSee \\cite{MR2474907} for details.\nThe author showed in \\cite{W}\nthat\nthe algebra defined from the ``hyper-cubic'' grading structure of \n$\\mathcal{P}_n(\\mathbb{F})$ together with its incidence structure\nhas a close relation to the quantum affine algebra\n$U_q(\\widehat{\\mathfrak{sl}}_2)$, if $\\mathbb{F}$ is a finite field of $q^2$ elements.\nIn this paper,\nwe study the Schubert cells of a Grassmannian \nfrom the 
combinatorial point of view of \\emph{association schemes}.\nMore precisely,\nwe show that the association scheme\ndefined by the $\\mathcal{B}$-action\non each Schubert cell is a \n\\emph{generalized wreath product} of \none-class association schemes\nwith the base set $\\mathbb{F}$.\nThe concept of \na generalized wreath product\nof association schemes\nwas introduced by\nR.~A.~Bailey~\\cite{MR2206477} in 2006.\nThe (usual) wreath product\nof association schemes has\nbeen actively studied (see e.g.,\n\\cite{MR2712017, MR2610292, MR2002219, MR2964723, MR3612420, MR2747797, MR3047011, MR2802177}),\nand we may view the result of this paper as demonstrating \nthe fundamental importance of Bailey's generalization as well.\n\nBefore the main discussion,\nwe briefly recall the notion of the generalized wreath product of association schemes.\nFor the definition of association schemes,\nsee \\cite{MR882540, MR2535398, MR2184345},\nand\nfor the theory of posets,\nsee \\cite{MR2868112}.\nLet $(X,\\le)$ be a nonempty finite poset.\nA subset $Y$ in $X$ is called an \\emph{anti-chain}\nif any two elements in $Y$ is incomparable.\nFor an anti-chain $Y$ in $X$,\ndefine the \\emph{down-set} (also known as the \\emph{order ideal}) by \n\\[\n\\mathrm{Down}(Y) = \\lbrace x \\in X \\mid \\text{$x < y$ for some $y \\in Y$}\\rbrace.\n\\]\nNote that this definition follows \\cite{MR2206477}\nand it is different from \\cite{MR2868112},\nwhere \n$\\mathrm{Down}(Y) \\cup Y$ is called \nthe down-set of $Y$.\nFor each $x \\in X$,\nlet $\\mathcal{Q}_x$ denote an $r_x$-class association scheme\non a set $\\Omega_x$.\nWe do not assume either $\\mathcal{Q}_x$ is symmetric\nor $\\Omega_x$ is finite.\nLet $R_{x,i}$ denote the $i$-th associate class\nfor $i \\in \\lbrace 0,1,\\ldots,r_x\\rbrace$.\nBy convention,\nwe choose the index \nso that $R_{x,0} = \\lbrace (\\omega, \\omega) \\mid \\omega \\in \\Omega_x\\rbrace$.\nWe set $\\Omega = \\prod_{x \\in X} \\Omega_x$.\nFor each anti-chain $Y$ in $X$\nand for each $(i_x)_{x \\in Y} \\in \\prod_{x \\in Y} \\lbrace 1,2,\\ldots,r_x\\rbrace$,\nlet\n$R(Y,(i_x)_{x \\in Y})$ denote the set of\n$((\\alpha_x)_{x \\in X}, (\\beta_x)_{x \\in X}) \\in \\Omega \\times \\Omega$\nsatisfying\n(i) $\\alpha_x = \\beta_x$ if $x \\in X \\setminus (Y \\cup \\mathrm{Down}(Y))$,\nand\n(ii) $(\\alpha_x, \\beta_x) \\in R_{x,i_x}$ if $x \\in Y$.\nLet $\\mathcal{R}$ denote the set of \n$R(Y,(i_x)_{x \\in Y})$ for all anti-chains $Y$ in $X$ and \n$(i_x)_{x \\in Y} \\in \\prod_{x \\in Y} \\lbrace 1,2,\\ldots,r_x\\rbrace$.\n\n\\begin{thm}[cf. 
{\\cite[Theorem 3]{MR2206477}}]\\label{Bailey}\nThe pair $(\\Omega, \\mathcal{R})$ is an association scheme.\n\\end{thm}\n\nThe association scheme $(\\Omega, \\mathcal{R})$ in Theorem \\ref{Bailey} is called the \n\\emph{generalized wreath product of $\\mathcal{Q}_x$\nover the poset $X$}.\nWe remark that Bailey~\\cite[Theorem 3]{MR2206477}\nassumes that each base set $\\Omega_x$ is finite\nand each association scheme $\\mathcal{Q}_x$ is symmetric.\nHowever, the theorem is still true if \nwe drop both of these assumptions.\n\n\\section{Subspace lattices}\n\nThroughout this paper, we fix a positive integer $n$ and a field $\\mathbb{F}$.\nLet $\\mathbb{F}^n$ denote the $n$-dimensional column vector space over $\\mathbb{F}$.\nBy the \\emph{subspace lattice}, denoted by \n$\\mathcal{P}_n(\\mathbb{F})$,\nwe mean the poset consisting of all subspaces\nin $\\mathbb{F}^n$ with partial order given by inclusion.\nWe fix \nthe sequence $\\lbrace V_i \\rbrace_{i = 0}^n$ in $\\mathcal{P}_n(\\mathbb{F})$ such that\neach $V_i$ consists of vectors whose\nbottom $n-i$ entries are zero.\nWe remark that \n$\\lbrace V_i \\rbrace_{i = 0}^n$ is a \\emph{(complete) flag} (i.e., a maximal chain) in $\\mathcal{P}_n(\\mathbb{F})$.\n\n\nLet $\\mathrm{Mat}_n(\\mathbb{F})$\ndenote the set of $n \\times n$ matrices with entries in $\\mathbb{F}$.\nLet $\\mathrm{GL}_n(\\mathbb{F})$ denote the set of \ninvertible matrices in $\\mathrm{Mat}_n(\\mathbb{F})$.\nObserve that \n$\\mathrm{GL}_n(\\mathbb{F})$ acts on $\\mathbb{F}^n$\nby left multiplication,\nand hence it acts on $\\mathcal{P}_n(\\mathbb{F})$.\nLet $\\mathcal{B}$ denote the (Borel) subgroup in $\\mathrm{GL}_n(\\mathbb{F})$ stabilizing\n$\\lbrace V_i \\rbrace_{i = 0}^n$.\nIn other words,\n$\\mathcal{B}$ consists of all \nupper triangular invertible matrices in $\\mathrm{Mat}_n(\\mathbb{F})$.\nIn this paper,\nwe consider the $\\mathcal{B}$-action on $\\mathcal{P}_n(\\mathbb{F})$.\n\nA matrix in $\\mathrm{Mat}_n(\\mathbb{F})$\nis said to be in \\emph{reverse column echelon form}\nif the following two conditions are met:\n\\begin{enumerate}\n\\item[(CE1)] Any zero columns are right of all nonzero columns.\n\\item[(CE2)] The last nonzero entry of a nonzero column is always\nstrictly below of the last nonzero entry of its right column.\n\\end{enumerate}\nA matrix in $\\mathrm{Mat}_n(\\mathbb{F})$\nis said to be in \\emph{reduced reverse column echelon form}\nif it is in reverse column echelon form and \nthe following third condition is also met:\n\\begin{enumerate}\n\\item[(CE3)] Every last nonzero entry of a nonzero column is $1$ and \nis the only nonzero entry in its row.\n\\end{enumerate}\n\nBy elementary linear algebra, we have the following.\n\\begin{prop}\\label{prop:bij}\nThere exists a bijection between the following two sets:\n\\begin{enumerate}\n\\item the set of all matrices in $\\mathrm{Mat}_n(\\mathbb{F})$ \nin reduced reverse column echelon form,\n\\item the set $\\mathcal{P}_n(\\mathbb{F})$ of all subspaces in $\\mathbb{F}^n$,\n\\end{enumerate}\nthat sends a matrix in $\\mathrm{Mat}_n(\\mathbb{F})$\nto its column space.\n\\end{prop}\n\\begin{proof}\nSee for instance \\cite{MR3013937}.\n\\end{proof}\n\n\\begin{ex}[$n=7$]\\label{ex1}\nLet $e_1, e_2, \\ldots, e_7$ denote the standard basis for $\\mathbb{F}^7$.\nSuppose $U$ is the $4$-dimensional subspace\nin $\\mathbb{F}^7$ given by\n$U = \\mathrm{Span}\\lbrace\n8e_1+6e_3+4e_6+2e_7,\n8e_1+9e_3+e_4+e_5,\n4e_1+e_2+5e_3+e_4,\n3e_1+e_2\\rbrace$.\nThen the following is the matrix corresponding to $U$\nby the 
bijection in Proposition \\ref{prop:bij}:\n\\[\nM = \n\\begin{pmatrix}\n4 & 7 & 1 & 3 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n3 & 4 & 5 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n2 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 & 0 & 0\n\\end{pmatrix}.\n\\]\n\\end{ex}\nObserve that\nfor \n$M, N \\in \\mathrm{Mat}_n(\\mathbb{F})$\nin reduced reverse column echelon form\nand for $G \\in \\mathcal{B}$,\nthe column space of $N$ moves to \nthat of $M$ by the $G$-action\nif and only if \n$M$ and $GN$ are column equivalent.\nSince $GN$ is in reverse column echelon form\n(but not necessarily reduced),\nthese conditions are equivalent to\n$M = GNH^{T}$ for some $H \\in \\mathcal{B}$.\nFor notational convenience, \nwe write $M \\sim N$\nif there exist $G, H \\in \\mathcal{B}$ such that $M = GNH^T$.\nObserve that $\\sim$ is an equivalence relation on $\\mathrm{Mat}_n(\\mathbb{F})$.\n\nFor the rest of this paper,\nwe will identify $\\mathcal{P}_n(\\mathbb{F})$ with the set of \nall matrices in $\\mathrm{Mat}_n(\\mathbb{F})$ in reduced reverse column echelon form\nby the bijection in Proposition \\ref{prop:bij}.\n\n\\section{The $\\mathcal{B}$-action on $\\mathcal{P}_n(\\mathbb{F})$}\n\nFor a positive integer $m$, we write \n$[m] = \\lbrace 1,2,\\ldots, m\\rbrace$.\nWe define a partial order in\nthe index set $[n] \\times [n]$ of matrices in $\\mathrm{Mat}_n(\\mathbb{F})$ by \n$(i,j) \\le (k,l)$ if $i \\le k$ and $j \\le l$.\nThis is known as the \\emph{direct product order}\nin \\cite[Section~3.2]{MR2868112}.\nFor $M \\in \\mathrm{Mat}_n(\\mathbb{F})$,\nby the \\emph{support} of $M$, denoted by $\\mathrm{Supp}(M)$, we mean \nthe subposet of $[n] \\times [n]$\nconsisting of all indices $(i,j) \\in [n] \\times [n]$ with $M_{i,j} \\neq 0$.\nThe \\emph{pivot-set} of $M$,\ndenoted by $\\mathrm{Piv}(M)$,\nis the set of all maximal elements in $\\mathrm{Supp}(M)$.\nEach element in the pivot-set is called a \\emph{pivot}.\nObserve that\n$(i,j) \\in \\mathrm{Piv}(M)$\nif and only if\n$M_{i,j} \\neq 0$ and $M_{k,l} = 0$ if $(k,l) > (i,j)$.\nWe remark that\nevery entry indexed by a pivot of a matrix in $\\mathcal{P}_n(\\mathbb{F})$ \nmust be $1$ by the condition (CE3).\n\n\\begin{lem}\\label{lem:wequiv}\nFor $M, N \\in \\mathcal{P}_n(\\mathbb{F})$,\nthe following are equivalent:\n\\begin{enumerate}\n\\item $M \\sim N$,\n\\item $\\mathrm{Piv}(M) = \\mathrm{Piv}(N)$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\n(i) $\\Rightarrow$ (ii)\nSuppose $M \\sim N$.\nThere exist\n$G, H \\in \\mathcal{B}$ such that $M = GNH^T$.\nIt suffices to show $\\mathrm{Piv}(N) \\subseteq \\mathrm{Piv}(M)$.\nFor $(i,j) \\in \\mathrm{Piv}(N)$,\nwe have\n$N_{i,j} = 1$ and $N_{k,l} = 0$ if $(k,l) > (i,j)$.\nSince\n$G, H$ are upper triangular,\nwe have\n\\[\nM_{k,l} = \\sum_{s = k}^n\\sum_{t = l}^n G_{k,s}N_{s,t}H_{l,t}\n=\n\\begin{cases}\nG_{i,i}H_{j,j} & \\text{if $(k,l) = (i,j)$},\\\\\n0 & \\text{if $(k,l) > (i,j)$}.\n\\end{cases}\n\\]\nSince $G, H$ are invertible,\n$G_{i,i}H_{j,j} \\neq 0$.\nThese imply $(i,j) \\in \\mathrm{Piv}(M)$\nand hence $\\mathrm{Piv}(N) \\subseteq \\mathrm{Piv}(M)$.\n\n(ii) $\\Rightarrow$ (i)\nSuppose $\\mathrm{Piv}(M) = \\mathrm{Piv}(N)$.\nTake $X \\in \\mathcal{P}_n(\\mathbb{F})$ with\n$X_{i,j} = 1$ if $(i,j) \\in \\mathrm{Piv}(M)$ and $X_{i,j} = 0$ otherwise.\nObserve that\nfor each $j \\in [n]$,\nthere exists at most one $k$ such that $(j,k) \\in \\mathrm{Piv}(M)$\nand then we\ndefine $G \\in \\mathrm{Mat}_n(\\mathbb{F})$ by\n\\[\nG_{i,j} 
=\n\\begin{cases}\nM_{i,k} & \\text{if $(j,k) \\in \\mathrm{Piv}(M)$ for some $k$},\\\\\n\\delta_{i,j} & \\text{if there is no $k$ such that $(j,k) \\in \\mathrm{Piv}(M)$}\n\\end{cases}\n\\]\nfor $i,j \\in [n]$.\nThen we have $G_{i,i} = 1$ for $i \\in [n]$\nand $G_{i,j} = 0$ if $j < i$ for $i, j \\in [n]$.\nThus \n$G \\in \\mathcal{B}$.\nBy the direct calculation, we have $M = GX$\nand \nhence, $M \\sim X$.\nSimilarly we have $N \\sim X$ and so $M \\sim N$.\n\\end{proof}\n\nLet $1 \\le m \\le n-1$ and $M \\in \\mathcal{P}_n(\\mathbb{F})$\nwith $\\rank M = m$.\nNote that \nwe avoid the trivial cases $m = 0$ and $m = n$.\nSince the pivots of $M$ lie in the first $m$ columns,\n$\\mathrm{Piv}(M)$\nis an anti-chain in $[n] \\times [m]$ of size $m$.\nFor $1 \\le m \\le n-1$\nand for an anti-chain $\\alpha$ in $[n] \\times [m]$\n of size $m$,\nwe set\n\\begin{equation}\n\\mathcal{O}_{\\alpha} = \\lbrace M \\in \\mathcal{P}_n(\\mathbb{F}) \\mid\n\\mathrm{Piv}(M) = \\alpha\n\\rbrace.\n\\label{OA}\n\\end{equation}\nFor each $1 \\le m \\le n-1$ and\neach anti-chain $\\alpha$ in $[n] \\times [m]$ of size $m$,\nconsider\n$M \\in \\mathcal{P}_n(\\mathbb{F})$ with\n$M_{i,j} = 1$ if $(i,j) \\in \\alpha$ and $M_{i,j} = 0$ otherwise.\nThen we have $M \\in \\mathcal{O}_\\alpha$\nand in particular $\\mathcal{O}_\\alpha \\neq \\emptyset$.\n\n\n\\begin{prop}\\label{prop:OA}\nThe rank of any matrix in \\eqref{OA} \nis $m = |\\alpha|$.\nMoreover,\neach subset \\eqref{OA}\nis an orbit of the $\\mathcal{B}$-action on $\\mathcal{P}_n(\\mathbb{F})$.\n\\end{prop}\n\\begin{proof}\nImmediate from the construction and Lemma \\ref{lem:wequiv}.\n\\end{proof}\n\nRecall the Grassmannian $\\mathrm{Gr}(m,n)$\nand we identify $\\mathrm{Gr}(m,n)$ with a set of matrices\nby the bijection in Proposition \\ref{prop:bij}.\nIn other words,\n\\[\n\\mathrm{Gr}(m,n) = \\lbrace M \\in \\mathcal{P}_n(\\mathbb{F}) \\mid\n\\rank M = m\n\\rbrace.\n\\]\nBy Proposition \\ref{prop:OA},\neach $\\mathcal{O}_\\alpha$ in \\eqref{OA} is\na $\\mathcal{B}$-orbit in $\\mathrm{Gr}(m,n)$,\nwhere $m = |\\alpha|$.\nThus,\nit is called a \\emph{Schubert cell of a Grassmannian} \\cite{MR2474907}.\n\n\\begin{ex}[$n=7$, $m = 4$]\\label{ex2}\nTake $M \\in \\mathcal{P}_7(\\mathbb{F})$ as in Example \\ref{ex1}.\nThen we have\n$\\mathrm{Piv}(M) = \\lbrace\n(2,4), (4,3), (5,2), (7,1)\n\\rbrace$.\nMoreover,\n$\\mathcal{O}_{\\mathrm{Piv}(M)}$ is\nthe set of matrices of the form\n\\begin{equation}\n\\begin{pmatrix}\n\\ast & \\ast & \\ast & \\ast & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n\\ast & \\ast & \\ast & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n\\ast & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 & 0 & 0\n\\end{pmatrix},\n\\label{ex:Oalpha}\n\\end{equation}\nwhere the symbol $\\ast$ denotes an arbitrary element\nin $\\mathbb{F}$.\n\\end{ex}\n\n\\begin{lem}\\label{lem:diag}\nLet $1 \\le m \\le n-1$\nand let $\\alpha$ denote an anti-chain in $[n] \\times [m]$ of size $m$.\nFor $M, N, M', N' \\in \\mathcal{O}_\\alpha$,\nthe following are equivalent:\n\\begin{enumerate}\n\\item $(M,N)$ moves to $(M',N')$ by the diagonal $\\mathcal{B}$-action,\n\\item $\\mathrm{Piv}(M-N) = \\mathrm{Piv}(M'-N')$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\n(i) $\\Rightarrow$ (ii)\nSuppose there exist $G, H, K \\in \\mathcal{B}$\nsuch that $M' = GMH^T$ and $N' = GNK^T$.\nThen we have\n\\begin{equation}\n\\mathrm{Piv}(GM) = \\mathrm{Piv}(GN) = \\alpha,\n\\label{*}\n\\end{equation}\n\\begin{equation}\n\\mathrm{Piv}(GM-GN) = 
\\mathrm{Piv}(M-N)\n\\label{**}\n\\end{equation}\nsince $G \\in \\mathcal{B}$ (cf. Lemma \\ref{lem:wequiv}).\nWe write $\\alpha = \\lbrace(k_r,r) \\mid r \\in [m] \\rbrace$\nand observe that $k_1 > k_2 > \\cdots > k_m$\nand that $\\mathrm{Piv}(M-N) \\subseteq \\mathrm{Down}(\\alpha)$.\n\nTake $(i,j) \\in \\mathrm{Piv}(M-N)$.\nObserve that $j \\in [m]$ and $k_j > i$\nand hence \nthere exists $m' = \\max\\lbrace r \\in [m] \\mid k_r > i\\rbrace$.\nFor $r, l \\in [m]$ with $j \\le r \\le m'$ and $j \\le l \\le m'$,\nsince $H, K$ are upper triangular,\nwe have\n\\begin{align*}\nM'_{k_r,l} &= \\sum_{t = l}^n (GM)_{k_r,t}H_{l,t}, \\\\\nN'_{k_r,l} &= \\sum_{t = l}^n (GN)_{k_r,t}K_{l,t}.\n\\end{align*}\nIn the above equations, we have the following:\nFor each $r,l$, we have $M'_{k_r,l} = N'_{k_r,l} = \\delta_{r,l}$ by (CE3);\nFor each $r,t$, we have $(GM)_{k_r,t} = (GN)_{k_r,t}$\nsince $(k_r,t) > (i,j)$ and by \\eqref{**};\nFor each $r, t$ with $r < t$, we have $(GM)_{k_r,r} = (GN)_{k_r,r} = G_{k_r,k_r} \\neq 0$ \nand $(GM)_{k_r,t} = (GN)_{k_r,t} = 0$ \nsince $(k_r,r) \\in \\alpha$ and by \\eqref{*}.\nBy these comments,\nfor each $l \\in [m]$ with $j \\le l \\le m'$,\nboth $(H_{l,l}, H_{l,l+1}, \\ldots, H_{l,m'})$\nand $(K_{l,l}, K_{l,l+1}, \\ldots, K_{l,m'})$\nare solutions to the same system of $m'-j+1$ independent linear equations.\nHence, $H_{l,t} = K_{l,t}$ for $j,t \\in [m]$ with \n$j \\le l \\le t \\le m'$.\n\nFor $(k,l) \\in [n] \\times [n]$ with $(k,l) > (i,j)$,\nwe have\n$(GM)_{k,t} = (GN)_{k,t}$ if $t \\ge l$ since $(k,t) > (i,j)$ and by \\eqref{**},\nand we also have \n$(GM)_{k,t} = (GN)_{k,t} = 0$ if $t > m'$ by the definition of $m'$ and by \\eqref{*}.\nRecall that we have shown $H_{l,t} = K_{l,t}$ if $j \\le l \\le t \\le m'$.\nBy these comments, we have\n\\begin{align*}\nM'_{k,l} = \\sum_{t = l}^{n} (GM)_{k,t}H_{l,t}\n= \\sum_{t = l}^{n} (GN)_{k,t}K_{l,t} = N'_{k,l}.\n\\end{align*}\nSimilarly,\nwe have\n\\begin{align*}\nM'_{i,j} - N'_{i,j} = \\sum_{t = j}^{n} (GM)_{i,t}H_{j,t}\n- \\sum_{t = j}^{n} (GN)_{i,t}K_{j,t} \n= ((GM)_{i,j} - (GN)_{i,j})H_{j,j}.\n\\end{align*}\nIn the above equations,\nwe have $(GM)_{i,j} \\neq (GN)_{i,j}$ by \\eqref{**},\nand we also have $H_{j,j} \\neq 0$ since $H \\in \\mathcal{B}$.\nTherefore we obtain $M'_{i,j} \\neq N'_{i,j}$.\nThese imply $(i,j) \\in \\mathrm{Piv}(M'-N')$\nand hence $\\mathrm{Piv}(M-N) \\subseteq \\mathrm{Piv}(M'-N')$.\nSince $G, H, K$ are invertible,\nwe also have \n$\\mathrm{Piv}(M'-N') \\subseteq \\mathrm{Piv}(M-N)$.\nConsequently, we have\n$\\mathrm{Piv}(M-N) = \\mathrm{Piv}(M'-N')$.\n\n(ii) $\\Rightarrow$ (i)\nLet\n$M, N, M', N' \\in \\mathcal{O}_\\alpha$\nwith $\\mathrm{Piv}(M-N) = \\mathrm{Piv}(M'-N')$.\nTake $X \\in \\mathcal{O}_\\alpha$ with\n$X_{i,j} = 1$ if $(i,j) \\in \\alpha \\cup \\mathrm{Piv}(M-N)$ and $X_{i,j} = 0$ otherwise,\nand $Y \\in \\mathcal{O}_\\alpha$ with\n$Y_{i,j} = 1$ if $(i,j) \\in \\alpha$ and $Y_{i,j} = 0$ otherwise.\nDefine $G \\in \\mathrm{Mat}_n(\\mathbb{F})$ by\n\\[\nG_{i,j} = \\begin{cases}\nN_{i,k} & \\text{if $(j,k) \\in \\alpha$ for some $k$},\\\\\nM_{i,k} - N_{i,k} & \\text{if $(j,k) \\in \\mathrm{Piv}(M-N)$ for some $k$}, \\\\\n\\delta_{i,j} & \\text{if there is no $k$ such that $(j,k) \\in \\alpha \\cup \\mathrm{Piv}(M-N)$}\n\\end{cases}\n\\]\nfor $i,j \\in [n]$.\nThen we have $G \\in \\mathcal{B}$\nand $M = GX$ and $N = GY$.\nSimilarly there exists $G' \\in \\mathcal{B}$\nsuch that\n$M' = G'X$ and $N' = G'Y$.\nTherefore\n$(M,N)$ moves to $(M',N')$ by the diagonal action of 
\n$G'G^{-1}$.\n\\end{proof}\n\nLet $1 \\le m \\le n-1$\nand let $\\alpha$ denote an anti-chain in $[n] \\times [m]$ of size $m$.\nLet $M, N \\in \\mathcal{O}_\\alpha$.\nObserve that\n$\\mathrm{Piv}(M-N)$\nis an anti-chain in \n\\begin{equation}\n\\mathcal{D}(\\alpha)\n=\n\\lbrace (i,j) \\in \\mathrm{Down}(\\alpha) \\mid\n\\text{there is no $k$ such that $(i,k) \\in \\alpha$}\\rbrace.\n\\label{Dalpha}\n\\end{equation}\nFor $1 \\le m \\le n-1$\nand for an anti-chain $\\alpha$ in $[n] \\times [m]$ of size $m$\nand for an anti-chain $\\beta$ in $\\mathcal{D}(\\alpha)$,\nwe set\n\\begin{equation}\n\\mathcal{R}_{\\alpha,\\beta} = \\lbrace (M,N) \\in \\mathcal{O}_\\alpha \\times \\mathcal{O}_\\alpha \\mid \n\\mathrm{Piv}(M-N) = \\beta\n\\rbrace.\n\\label{Rab}\n\\end{equation}\nFor each $1 \\le m \\le n-1$ and\neach anti-chain $\\alpha$ in $[n] \\times [m]$ of size $m$\nand each anti-chain $\\beta$ in $\\mathcal{D}(\\alpha)$,\nconsider\n$M \\in \\mathcal{P}_n(\\mathbb{F})$ with\n$M_{i,j} = 1$ if $(i,j) \\in \\alpha \\cup \\beta$ and $M_{i,j} = 0$ otherwise,\nand $N \\in \\mathcal{P}_n(\\mathbb{F})$ with\n$N_{i,j} = 1$ if $(i,j) \\in \\alpha$ and $N_{i,j} = 0$ otherwise.\nThen we have $(M, N) \\in \\mathcal{R}_{\\alpha,\\beta}$\nand in particular $\\mathcal{R}_{\\alpha,\\beta} \\neq \\emptyset$.\n\n\\begin{prop}\\label{prop:Rab}\nLet $1 \\le m \\le n-1$\nand let $\\alpha$ denote an anti-chain in $[n] \\times [m]$ of size $m$.\nEach subset \\eqref{Rab}\nis an orbital of the $\\mathcal{B}$-action on $\\mathcal{O}_\\alpha$.\n\\end{prop}\n\\begin{proof}\nImmediate from Lemma \\ref{lem:diag}.\n\\end{proof}\n\nLet $1 \\le m \\le n-1$.\nFor an anti-chain $\\alpha$ in $[n] \\times [m]$ of size $m$,\nconsider\n\\begin{equation}\n\\mathcal{D}_1(\\alpha) = \\lbrace i \\mid \\text{there is no $k$ such that $(i,k) \\in \\alpha$}\\rbrace.\n\\label{D1alpha}\n\\end{equation}\nThen we have $|\\mathcal{D}_1(\\alpha)| = n-m$ since \n$|\\alpha| = m$. 
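For instance, for the anti-chain $\\alpha = \\lbrace(2,4), (4,3), (5,2), (7,1)\\rbrace$ of Example \\ref{ex2} (so $n = 7$ and $m = 4$), the rows without pivots are $\\mathcal{D}_1(\\alpha) = \\lbrace 1, 3, 6\\rbrace$, which indeed has $n - m = 3$ elements.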
\nFor $1 \\le i \\le n-m$,\nwe define\n$\\lambda_i = |\\lbrace j \\mid (d_i,j) \\in \\mathcal{D}(\\alpha)\\rbrace|$,\nwhere $d_i$ denotes the $i$-th smallest element in $\\mathcal{D}_1(\\alpha)$.\nThen $\\lambda = (\\lambda_1, \\lambda_2, \\ldots, \\lambda_{n-m}) \\in \\mathbb{N}^{n-m}$ is \nan integer partition (i.e., a non-increasing sequence) with largest part at most $m$,\nwhere \n\\[\n\\mathbb{N} = \\lbrace 0,1,\\ldots\\rbrace.\n\\]\nConsider the map $\\varphi_m$ which sends $\\alpha$\nto $\\lambda$.\n\nFor an integer partition $\\lambda = (\\lambda_1, \\lambda_2, \\ldots, \\lambda_l)$,\nthe \\emph{Ferrers board} of shape $\\lambda$\nis defined by\n\\[\n\\lbrace (i,j) \\in \\mathbb{N} \\times \\mathbb{N} \\mid 1 \\le i \\le l, 1 \\le j \\le \\lambda_i \\rbrace.\n\\]\nWe endow the Ferrers board\nwith direct product order in $\\mathbb{N} \\times \\mathbb{N}$.\n\n\\begin{lem}\\label{al}\nFor $1 \\le m \\le n-1$,\nthe map $\\varphi_m$ is a bijection between the following two sets:\n\\begin{enumerate}\n\\item the anti-chains in $[n] \\times [m]$ of size $m$;\n\\item the integer partitions in $\\mathbb{N}^{n-m}$ with largest part at most $m$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nLet $1 \\le m \\le n-1$.\nIt is clear that $\\varphi_m$ is a map from (i) to (ii).\nWe define the map $\\varphi_m'$ from (ii) to (i) as follows.\nFor a given integer partition $\\lambda$ in $\\mathbb{N}^{n-m}$ with largest part at most $m$,\nwe define\n$\\alpha$ as the set of maximal elements \nin the Ferrers board of shape $\\mu = \\lambda \\cup (m, m-1,\\ldots,1)$,\nwhich is the integer partition obtained by rearranging \nparts of both $\\lambda$ and $(m, m-1,\\ldots,1)$ in non-increasing order.\nSince $\\mu$ is in $\\mathbb{N}^{n}$ and its largest part is $m$,\n$\\alpha$ is an anti-chain in $[n] \\times [m]$ of size $m$.\nThe map $\\varphi_m'$ is defined to send $\\lambda$\nto $\\alpha$.\nBy construction,\n$\\varphi_m$ and $\\varphi_m'$ are inverses and hence bijections.\n\\end{proof}\n\n\n\\begin{lem}\\label{bmu}\nLet $1 \\le m \\le n-1$\nand\nlet $\\alpha$ denote an anti-chain in $[n] \\times [m]$ of size $m$.\nThe poset\n$\\mathcal{D}(\\alpha)$ in \\eqref{Dalpha} is isomorphic to\nthe Ferrers board of shape $\\varphi_m(\\alpha)$.\nMoreover,\nthere is a one-to-one correspondence\nbetween\nthe anti-chains in $\\mathcal{D}(\\alpha)$ \nand the subpartitions of $\\varphi_m(\\alpha)$.\n\\end{lem}\n\\begin{proof}\nRecall the set $\\mathcal{D}_1(\\alpha)$ in \\eqref{D1alpha}.\nThen define the map $\\psi$\nfrom\nthe Ferrers board of shape $\\varphi_m(\\alpha)$ \nto $\\mathcal{D}(\\alpha)$ by\n$\\psi(i,j) = (d_i,j)$,\nwhere $d_i$ denotes the $i$-th smallest element in $\\mathcal{D}_1(\\alpha)$.\nIt is obvious that $\\psi$ is an order-preserving bijection.\nThe first assertion follows.\nTo show the second assertion,\nwe define the map $\\rho$ from \nthe subpartitions of $\\varphi_m(\\alpha)$\nto\nthe anti-chains in $\\mathcal{D}(\\alpha)$\nby \n$\\rho(\\mu) = \\psi(\\max(\\mu))$,\nwhere $\\max(\\mu)$ is the set of all maximal elements in \nthe Ferrers board of shape $\\mu$.\nFrom the construction, \nthe map $\\rho$ is also a bijection.\nThe second assertion follows.\n\\end{proof}\n\n\\begin{ex}[$n=7$, $m=4$]\nTake the anti-chain $\\alpha = \\lbrace\n(2,4), (4,3), (5,2), (7,1)\n\\rbrace$ as in Example \\ref{ex2}.\nRecall that $\\mathcal{O}_\\alpha$ is the set of \nmatrices of the form \\eqref{ex:Oalpha}.\nThen $\\varphi_4(\\alpha) = (4,3,1)$.\nWe remark that\neach number in $(4,3,1)$\nequals the number of 
$\\ast$'s in each row without a pivot.\n\\end{ex}\n\n\\section{The association scheme on each Schubert cell}\n\nFor $1 \\le m \\le n-1$\nand for an anti-chain $\\alpha$ in $[n] \\times [m]$ of size $m$,\nby Propositions \\ref{prop:OA} and \\ref{prop:Rab},\nthe pair\n\\begin{equation}\n\\mathfrak{X}_\\alpha = (\\mathcal{O}_\\alpha, \\lbrace\\mathcal{R}_{\\alpha, \\beta}\\rbrace_\\beta)\n\\label{Xalpha}\n\\end{equation}\nbecomes an association scheme,\nwhere $\\beta$ runs over all anti-chains in $\\mathcal{D}(\\alpha)$ in \\eqref{Dalpha}.\nSee \\cite[Preface]{MR2184345}.\nWe remark that by Lemma \\ref{al},\nthe family of association schemes $\\lbrace \\mathfrak{X}_\\alpha \\rbrace_\\alpha$\ncan be indexed by integer partitions $\\lambda \\in \\mathbb{N}^{n-m}$ with largest part at most $m$.\nIn this case,\nthe associate classes of $\\mathfrak{X}_\\alpha = \\mathfrak{X}_\\lambda$ are indexed by \nthe subpartitions of $\\lambda$\nby Lemma \\ref{bmu}.\n\\begin{thm}\nLet $1 \\le m \\le n-1$\nand let $\\alpha$ denote an anti-chain in $[n] \\times [m]$ of size $m$.\nThe association scheme $\\mathfrak{X}_\\alpha$ in \\eqref{Xalpha} is symmetric.\n\\end{thm}\n\\begin{proof}\nImmediate from the definition of $\\mathcal{R}_{\\alpha, \\beta}$.\n\\end{proof}\n\n\\begin{thm}\nFor $1 \\le m \\le n-1$\nand for an anti-chain $\\alpha$ in $[n] \\times [m]$ of size $m$,\nthe association scheme $\\mathfrak{X}_\\alpha$ in \\eqref{Xalpha} is \nthe generalized wreath product of the one-class association schemes \nwith the base set $\\mathbb{F}$ over the poset $\\mathcal{D}(\\alpha)$ in \\eqref{Dalpha}.\n\\end{thm}\n\\begin{proof}\nFor \n$M, N \\in \\mathcal{O}_\\alpha$\nand for\nan anti-chain $\\beta$ in $\\mathcal{D}(\\alpha)$,\nwe have\n$(M,N) \\in \\mathcal{R}_{\\alpha, \\beta}$\nif and only if\n$M_{i,j} = N_{i,j}$ if $(i,j) \\not\\in \\beta \\cup \\mathrm{Down}(\\beta)$, $M_{i,j} \\neq N_{i,j}$ if $(i,j) \\in \\beta$.\nTherefore,\nthis associate relation\nis the same as that of \nthe generalized wreath product of the one-class association schemes \nwith the base set $\\mathbb{F}$ over the poset $\\mathcal{D}(\\alpha)$.\nSo the result follows.\n\\end{proof}\n\n\\section{Concluding remarks}\n\nThis paper focuses on the Schubert cells\nof a Grassmannian.\nIt would be an interesting problem to find similar results on Schubert cells for other types of BN-pairs.\n\nThe Terwilliger algebra, introduced by P.~Terwilliger \\cite{MR1203683}, \nof the wreath product of one-class association schemes\nis discussed in several papers \\cite{MR2610292, MR2747797, MR2802177}.\nWe will consider\nthe Terwilliger algebra of the generalized wreath product of one-class association schemes\nin a future paper.\n\n\\section*{Acknowledgments}\nThe author gratefully acknowledges the many helpful suggestions of his advisor, Hajime Tanaka\nduring the preparation of the paper.\nThe author thanks Paul Terwilliger\nfor giving valuable comments\nand Motohiro Ishii for drawing the author's attention to the theory of Schubert cells.\n\n\n\\begin{bibdiv}\n\\begin{biblist}\n\n\\bib{MR2206477}{article}{\n author={Bailey, R. A.},\n title={Generalized wreath products of association schemes},\n journal={European J. Combin.},\n volume={27},\n date={2006},\n number={3},\n pages={428--435},\n issn={0195-6698},\n}\n\n\\bib{MR882540}{book}{\n author={Bannai, Eiichi},\n author={Ito, Tatsuro},\n title={Algebraic combinatorics. 
I},\n subtitle={Association schemes},\n publisher={The Benjamin\/Cummings Publishing Co., Inc., Menlo Park, CA},\n date={1984},\n pages={xxiv+425},\n isbn={0-8053-0490-8},\n}\n\n\\bib{MR2712017}{thesis}{\n author={Bhattacharyya, Gargi},\n title={Terwilliger algebras of wreath products of association schemes},\n type={Thesis (Ph.D.)--Iowa State University},\n publisher={ProQuest LLC, Ann Arbor, MI},\n date={2008},\n pages={84},\n isbn={978-0549-68830-3},\n}\n\n\\bib{MR2610292}{article}{\n author={Bhattacharyya, Gargi},\n author={Song, Sung Y.},\n author={Tanaka, Rie},\n title={Terwilliger algebras of wreath products of one-class association\n schemes},\n journal={J. Algebraic Combin.},\n volume={31},\n date={2010},\n number={3},\n pages={455--466},\n issn={0925-9899},\n}\n\n\\bib{MR2002219}{article}{\n author={Hanaki, Akihide},\n author={Hirotsuka, Kaoru},\n title={Irreducible representations of wreath products of association\n schemes},\n journal={J. Algebraic Combin.},\n volume={18},\n date={2003},\n number={1},\n pages={47--52},\n issn={0925-9899},\n}\n\n\\bib{MR3013937}{collection}{\n title={Handbook of linear algebra},\n series={Discrete Mathematics and its Applications (Boca Raton)},\n editor={Hogben, Leslie},\n edition={2},\n publisher={CRC Press, Boca Raton, FL},\n date={2014},\n pages={xxx+1874},\n isbn={978-1-4665-0728-9},\n}\n\n\\bib{MR2964723}{article}{\n author={Kim, Kijung},\n title={Terwilliger algebras of wreath products by quasi-thin schemes},\n journal={Linear Algebra Appl.},\n volume={437},\n date={2012},\n number={11},\n pages={2773--2780},\n issn={0024-3795},\n}\n\n\\bib{MR2474907}{book}{\n author={Lakshmibai, V.},\n author={Brown, Justin},\n title={Flag varieties},\n series={Texts and Readings in Mathematics},\n volume={53},\n subtitle={An interplay of geometry, combinatorics, and representation\n theory},\n publisher={Hindustan Book Agency, New Delhi},\n date={2009},\n pages={xiv+272},\n isbn={978-81-85931-92-0},\n}\n\n\\bib{MR2535398}{article}{\n author={Martin, William J.},\n author={Tanaka, Hajime},\n title={Commutative association schemes},\n journal={European J. Combin.},\n volume={30},\n date={2009},\n number={6},\n pages={1497--1525},\n issn={0195-6698},\n}\n\n\\bib{MR3612420}{article}{\n author={Song, Sung Y.},\n author={Xu, Bangteng},\n author={Zhou, Shenglin},\n title={Combinatorial extensions of Terwilliger algebras and wreath\n products of association schemes},\n journal={Discrete Math.},\n volume={340},\n date={2017},\n number={5},\n pages={892--905},\n issn={0012-365X},\n}\n\n\\bib{MR2868112}{book}{\n author={Stanley, Richard P.},\n title={Enumerative combinatorics. Volume 1},\n series={Cambridge Studies in Advanced Mathematics},\n volume={49},\n edition={2},\n publisher={Cambridge University Press, Cambridge},\n date={2012},\n pages={xiv+626},\n isbn={978-1-107-60262-5},\n}\n\n\\bib{MR2747797}{article}{\n author={Tanaka, Rie},\n title={Classification of commutative association schemes with almost\n commutative Terwilliger algebras},\n journal={J. Algebraic Combin.},\n volume={33},\n date={2011},\n number={1},\n pages={1--10},\n issn={0925-9899},\n}\n\n\\bib{MR3047011}{article}{\n author={Tanaka, Rie},\n author={Zieschang, Paul-Hermann},\n title={On a class of wreath products of hypergroups and association\n schemes},\n journal={J. 
Algebraic Combin.},\n volume={37},\n date={2013},\n number={4},\n pages={601--619},\n issn={0925-9899},\n}\n\n\\bib{MR1203683}{article}{\n author={Terwilliger, Paul},\n title={The subconstituent algebra of an association scheme. I},\n journal={J. Algebraic Combin.},\n volume={1},\n date={1992},\n number={4},\n pages={363--388},\n issn={0925-9899},\n}\n\n\\bib{W}{article}{\n author={Watanabe, Yuta},\n title={An algebra associated with a flag in a subspace lattice over a finite field and the quantum affine algebra $U_q(\\widehat{\\mathfrak{sl}}_2)$},\n eprint={arXiv:1709.06329 [math.CO]},\n date={2017},\n}\n\n\\bib{MR2802177}{article}{\n author={Xu, Bangteng},\n title={Characterizations of wreath products of one-class association\n schemes},\n journal={J. Combin. Theory Ser. A},\n volume={118},\n date={2011},\n number={7},\n pages={1907--1914},\n issn={0097-3165},\n}\n\n\\bib{MR2184345}{book}{\n author={Zieschang, Paul-Hermann},\n title={Theory of association schemes},\n series={Springer Monographs in Mathematics},\n publisher={Springer-Verlag, Berlin},\n date={2005},\n pages={xvi+283},\n isbn={978-3-540-26136-0},\n isbn={3-540-26136-2},\n}\n\\end{biblist}\n\\end{bibdiv}\n\n\\bigskip\n\n\\noindent\nYuta Watanabe \\\\\nGraduate School of Information Sciences \\\\\nTohoku University \\\\\nSendai, 980-8579 Japan\\\\\nemail: \\texttt{watanabe@ims.is.tohoku.ac.jp}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{s1}\nReaction-diffusion equations arise naturally when modeling certain phenomena in biological, chemical and physical systems. In this paper we study reaction-diffusion equations for infinity Laplacian, which despite being too degenerate to realistically represent a physical diffusion process, has been studied in the framework of optimization and free boundary problems (see, for example, \\cite{ATU19}, \\cite{RST17}, \\cite{RT12,RTU15,TU17}, just to cite a few). More precisely, we establish regularity and geometric properties of solutions of the problem\n\\begin{equation}\\label{1.1}\n\\Delta_\\infty u=f(u) \\quad \\text{in} \\quad \\Omega,\n\\end{equation}\nwhere $\\Omega\\subset\\mathbb{R}^n$, $f\\in C(\\mathbb{R}_+)$ and\n\\begin{equation}\\label{1.2}\n0\\le f(\\delta t)\\leq M\\delta^\\gamma f(t),\n\\end{equation}\nwith $M>0$, $\\gamma\\in[0,3)$, $t>0$ bounded, and $\\delta>0$ small enough. Additionally, we assume that \n\\begin{equation}\\label{comparison}\nf \\text{ is non-decreasing }.\n\\end{equation}\nHere, $\\mathbb{R}_+$ is the set of non-negative numbers, and the infinity Laplacian is defined as follows:\n$$\n\\Delta_\\infty u(x):=\\sum_{i,j=1}^nu_{x_i}u_{x_j}u_{x_ix_j},\n$$\nwith $u_{x_i}=\\partial u\/\\partial x_i$. Note that the continuity of $f$ provides that $f(u)$ is bounded once $u$ is bounded. Note also that \\eqref{1.2} is quite general in the sense that it needs to hold only for $\\delta$ close to zero. For example, it holds for functions that are homogeneous of degree $\\gamma$. Condition \\eqref{comparison} is needed to guarantee the comparison principle. Solutions of \\eqref{1.1} are understood in the viscosity sense according to the following definition:\n\\begin{definition}\\label{d1.1}\n\tA function $u\\in C(\\Omega)$ is called a viscosity super-solution (resp. sub-solution) of \\eqref{1.1}, and written as $\\Delta_\\infty u\\le f(u)$ (resp. 
$\\ge$), if for every $\\phi\\in C^2(\\Omega)$ such that $u-\\phi$ has a local minimum at $x_0\\in\\Omega$, with $\\phi(x_0)=u(x_0)$, we have\n\t$$\n\t\\Delta_\\infty\\phi(x_0)\\leq f(\\phi(x_0)).\\quad \\textrm{(resp. $\\geq$)}\n\t$$\n\tA function $u$ is called a viscosity solution if it is both a viscosity super-solution and a viscosity sub-solution.\n\\end{definition}\n\nThe infinity Laplace operator is related to the absolutely minimizing Lipschitz extension problem: for a given Lipschitz function on the boundary of a bounded domain, find its extension inside the domain in a way that has the minimal Lipschitz constant, \\cite{A67}. It is known (see \\cite{J93}) that such function $u$ has to be an infinity harmonic one, i.e. $\\Delta_\\infty u=0$ (in the viscosity sense). The regularity issue of infinity harmonic functions received extensive attention over the years. As was shown in \\cite{ES08}, the infinity harmonic functions in the plane are $C^{1,\\alpha}$, for a small $\\alpha$ (it is conjectured that the optimal regularity is $C^{1,\\frac{1}{3}}$). In higher dimensions infinity harmonic functions are known to be everywhere differentiable (see \\cite{ES11}). \n\nAs for the inhomogeneous case of $\\Delta_\\infty u=f$, it is known that the Dirichlet problem has a unique viscosity solution, provided $f$ does not change sign (see \\cite{LW08}). Moreover, as was shown in \\cite{L14}, for bounded right hand side, the Lipschitz estimate and everywhere differentiability of solutions remain true. The case of $f$ not being bounded away from zero, mainly, when $f=u_+^\\gamma$, where $u_+:=\\max\\left(u,0\\right)$ and $\\gamma\\in[0,3)$ is a constant, was studied in \\cite{ALT16} (dead-core problem). The authors show that for such right hand side (strong absorbtion) across the free boundary $\\partial\\{u>0\\}$ non-negative viscosity solutions are of class $C^{\\frac{4}{3-\\gamma}}$. The denominator $3-\\gamma$ is related to the degree of homogeneity of the operator, which is three, i.e., $\\Delta_\\infty(Cu)=C^3\\Delta_\\infty u$, for any constant $C$. Note that for $\\gamma\\in(0,3)$ this regularity is more than the conjectured $C^{1,\\frac{1}{3}}$, i.e., we obtain higher regularity across the free boundary. This result allows to establish Hausdorff dimension estimate for the free boundary $\\partial\\{u>0\\}$ and conclude that it has Lebesgue measure zero. \n\nWe extend these results for the source term $f$ satisfying \\eqref{1.2}. In particular, it includes equations with the right hand side \n$$\nf(t)=e^t-1\\,\\,\\,\\textrm{ and }\\,\\,\\,f(t)=\\log(t^2+1)\n$$\namong others (see Section \\ref{s7} for more examples). In fact, our results are true in a broader context, when allowing ``coefficients'' in the right hand side, that is, when in \\eqref{1.1} one has $f=f(x,u)$, as long as $f(x,u)$ satisfies \\eqref{1.2} as a function of $u$ and is continuous (and bounded) as a function of $x$ (see Section \\ref{s7}). For simplicity, we restrict ourselves to the case of $f(x,u)=f(u)$.\n\nOur strategy is the following: by means of a flattening argument, we show that across the free boundary $\\partial\\{u>0\\}\\cap\\Omega$ non-negative viscosity solutions of \\eqref{1.1} are of class $C^{\\frac{4}{3-\\gamma}}$, when \\eqref{1.2} holds. When the source term is comparable to a homogeneous function of degree $\\gamma$, this result is sharp in the sense that across the free boundary non-negative viscosity solutions grow exactly as $r^{\\frac{4}{3-\\gamma}}$ in the ball of radius $r$. 
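Indeed, the exponent $\\frac{4}{3-\\gamma}$ can be read off from a radial model computation: for $v(x)=c|x|^{\\beta}$ with $c>0$ one has $\\Delta_\\infty v=c^3\\beta^3(\\beta-1)|x|^{3\\beta-4}$, so matching this with $\\lambda v^{\\gamma}$ forces $3\\beta-4=\\beta\\gamma$, that is, $\\beta=\\frac{4}{3-\\gamma}$, which degenerates precisely as $\\gamma\\to3$. 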
We also analyze the borderline (critical) case, that is, when $\\gamma=3$ (which is also the degree of the homogeneity of the infinity Laplacian). Unlike \\cite{ALT16}, $f$ is not given explicitly, which makes it harder to construct a barrier function - needed for our analysis. Nevertheless, we are able to show that in this case \\eqref{1.1} has a viscosity sub-solution whose gradient has modulus separated from zero. We use this function to build up a suitable barrier to conclude that if a viscosity solution vanishes at a point, it has to vanish everywhere. Our results remain true when the right hand side has some ``bounded coefficients'' (see Remark \\ref{r7.1}). For simplicity we restrict ourselves with the right hand side ``without coefficients''.\n\nThe paper is organized as follows: in Section \\ref{s2}, we prove an auxiliary result (flattening solutions) (Lemma \\ref{l2.2}), which we use in Section \\ref{s3} to derive the main regularity result (Theorem \\ref{t3.1}), and as a consequence, in Section \\ref{Liouville}, we obtain Liouville type theorems (Theorem \\ref{t3.2} and Theorem \\ref{t3.3}). In Section \\ref{s4}, we prove several geometric measure estimates (Theorem \\ref{t4.1} (non-degeneracy) and Corollary \\ref{c4.1} (porosity)), and conclude that the free boundary has Lebesgue measure zero (Corollary \\ref{c4.2}). In Section \\ref{s5}, when $\\gamma=3$, we show that the only non-negative viscosity solution that has zero, is the function that is identically zero (Theorem \\ref{t5.1}). Finally, in Section \\ref{s7} we bring some examples of source terms for which our results are true.\n\n\\section{Preliminaries}\\label{s2}\nIn this section we list some preliminaries, as well as prove an auxiliary lemma for future reference. We start by the comparison principle, the proof of which can be found in \\cite{CIL92,LW08}.\n\\begin{lemma}\\label{l2.1}\n\tLet $u$, $v\\in C(\\overline{\\Omega})$ be such that\n\t$$\n \\Delta_\\infty u-f(u)\\le0, \\,\\,\\,\\Delta_\\infty v-f(v)\\ge0\\,\\,\\textrm{ in }\\,\\,\\Omega\n\t$$\n\tin the viscosity sense, and $f$ satisfy \\eqref{comparison} or $\\inf f>0$. If $u\\geq v$ on $\\partial\\Omega$, then $u\\geq v$ in $\\Omega$.\n\\end{lemma}\nThe comparison principle, together with Perron's method leads to the following result (for the proof we refer the reader to \\cite{CIL92}, for example). In fact, existence of solutions can be shown even without directly applying the comparison principle, as it was done, for example, in \\cite[Theorem 3.1]{RTU19}.\n\\begin{theorem}\\label{t2.1}\n\tIf $\\Omega\\subset\\mathbb{R}^n$ is bounded and $\\varphi\\in C(\\partial\\Omega)$ is a non-negative function, then there is a unique and non-negative function $u$ that solves the Dirichlet problem\n\t\\begin{equation}\\label{2.1}\n\t\\left\\{\n\t\\begin{aligned}\n\t\t\\Delta_\\infty u&=f(u) \\text { in } \\Omega, \\\\\n\t\tu&=\\varphi\\text { on } \\partial\\Omega\n\t\\end{aligned}\\right.\n\t\\end{equation}\nin the viscosity sense.\t\n\\end{theorem}\nThe following auxiliary lemma is a variant of the flatness improvement technique introduced in \\cite{ALT16,T13,T16} to study the regularity properties of solutions of dead-core problems.\n\\begin{lemma}\\label{l2.2} Let $g\\in L^\\infty(B_1)\\cap C(B_1)$ be a non-negative function such that \n\t$$\n\t\\|g\\|_\\infty\\le\\max\\{1,M\\}\\sup_{[0,\\|u\\|_\\infty]}f,\n\t$$\n\twhere $f$ and $M$ are as in \\eqref{1.2}. 
For any given $\\mu >0 $ there exists a constant $\\kappa _{\\mu}=\\kappa(\\mu,n)>0$ such that if in $B_1$ a continuous functions $v$, which vanishes at the origin and $v\\in[0,1]$, satisfies, in viscosity sense,\n\t$$\\Delta_\\infty v - \\kappa_\\mu^4g(v) = 0$$\\\\\n\tfor $0<\\kappa\\leq \\kappa_{\\mu}$, then \\\\\n\t$$\\sup_{B_{1\/2}} v \\leq \\mu.$$\n\\end{lemma}\n\\begin{proof}\nWe argue by contradiction assuming that there exist $\\mu^* >0$, $\\{v_i\\}_{i\\in \\mathbb{N}}$ and $\\{\\kappa_i\\}_{i\\in \\mathbb{N}}$ with $v_i(0)=0$, $0\\leq v_i\\leq1$, in $B_1$ satisfying in viscosity sense to\n\t$$\\Delta_\\infty v_i-\\kappa_i^4 g(v_i)= 0$$\\\\\n\twhere $\\kappa_i = \\text{o}(1)$, while\n\\begin{equation}\\label{2.2}\t\t\t\t\t\n\\sup_{B_{1\/2}} v_i > \\mu^*.\n\\end{equation}\\\\\nBy local Lipschitz regularity (see \\cite[Corollary 2]{L14}, for example), the sequence $\\{v_i\\}_{i\\in \\mathbb{N}}$ is pre-compact in the $C^{0,1}(B_{3\/4})$. Hence, by Arzel\\`{a}-Ascoli theorem, $v_i$ converges (up to a subsequence) to a function $v_\\infty$ locally uniformly in $B_{2\/3}$. Moreover, $v_\\infty(0)=0, \\; 0\\leq v_\\infty \\leq 1 $ and $\\Delta_\\infty v_\\infty = 0.$ The maximum principle for the infinity harmonic functions then yields $v\\equiv 0$, which contradicts to \\eqref{2.2} once $i$ is big enough.\n\\end{proof}\nThe following definition is for future reference.\n\\begin{definition}\\label{d1.2}\n\tA function $u$ is called an entire solution, if it is a viscosity solution of \\eqref{1.1} in $\\mathbb{R}^n$.\n\\end{definition}\nWe close this section by reminding the notion of porosity.\n\\begin{definition}\\label{porosity}\n\tThe set $E\\subset\\mathbb{R}^n$ is called porous with porosity $\\sigma$, if there is $R>0$ such that $\\forall x\\in E$ and $\\forall r\\in (0,R)$ there exists $y\\in\\mathbb{R}^n$ such that\n\t$$\n\tB_{\\sigma r}(y)\\subset\tB_{r}(x)\\setminus E.\n\t$$\n\\end{definition}\n\tA porous set of porosity $\\sigma$ has Hausdorff dimension not exceeding\n$n-c\\sigma^n$, where $c>0$ is a constant depending only on dimension. In particular, a porous set has Lebesgue measure zero (see \\cite{Z88}, for instance).\n\n\\section{Regularity across the free boundary}\\label{s3}\nIn this section we make use of Lemma \\ref{l2.2} and derive regularity result for viscosity solutions of \\eqref{1.1} across the free boundary $\\partial\\{u>0\\}$.\n\\begin{theorem}\\label{t3.1}\n\tIf $u$ is a non-negative viscosity solution of \\eqref{1.1}, where $f$ satisfies \\eqref{1.2}, and $x_0\\in\\partial \\{u>0\\}\\cap\\Omega$, then there exists a constant $C>0$, depending only on $\\gamma$, $\\|u\\|_\\infty$ and $\\mathrm{dist} (x_0 , \\partial \\Omega)$, such that\n\t$$u(x) \\leq C |x - x_0|^{\\frac{4}{3-\\gamma}}$$\n\tfor $x \\in \\{u>0\\} $ near $x_0$.\n\\end{theorem}\n\\begin{proof}\n\tThe idea is to use an iteration argument and carefully choose sequence of functions that allows to make use of the Lemma \\ref{l2.2}. Observe that without loss of generality, we may assume that $x_0=0$ and $B_1\\subset\\Omega$.\n\t\n\tFor $\\mu =2^{-\\frac{4}{3-\\gamma}}$, let now $\\kappa_{\\mu} >0$ be as in Lemma \\ref{l2.2}. 
We then construct the first member of the sequence by setting\n\t$$\n\tw_0(x):= \\tau u(\\rho x) \\quad \\text{in} \\quad B_1,\n\t$$\n\twhere\n\t$$\n\t\\tau:= \\min \\left\\{1, \\|u\\|_\\infty^{-1} \\right\\}\\,\\,\\,\\textrm{ and }\\,\\,\\,\\rho :=\\kappa_\\mu \\tau^{-\\frac{3-\\gamma}{4}}.\n\t$$\n\tNote that $\\tau^3\\rho^4=\\kappa_\\mu^4\\tau^\\gamma$, $w_0(0)=0$ and $w_0\\in[0,1]$. Since $u$ is a viscosity solution of \\eqref{1.1}, we have\n\t$$\n\t\\Delta_\\infty w_0(x) - \\tau^3\\rho^4 f(\\tau^{-1}w_0(x))=0\n\t$$\n\tor, equivalently,\n\t\\begin{equation}\\label{3.1}\n\t\\Delta_\\infty w_0(x)-\\kappa_\\mu^4\\tau^{\\gamma} f(\\tau^{-1}w_0(x)) = 0.\n\t\\end{equation}\n\tSince $\\tau\\le1$, we have\n\t$g(w_0):=\\tau^\\gamma f(\\tau^{-1}w_0)\\le f(u(\\rho x))\\le\\displaystyle\\sup_{[0,\\|u\\|_\\infty]}f$. From Lemma \\ref{l2.2}, we obtain\n\t$$\n\t\\sup_{B_{1\/2}} w_0 \\leq 2^{-\\frac{4}{3-\\gamma}}.\n\t$$\n\tFor $i\\in \\mathbb{N}$, we then define\n\t\\[ w_i(x) := 2^{\\frac{4}{3-\\gamma}}w_{i-1}(2^{-1}x) \\]\n\tand observe that $w_i(0)=0$, $w_i\\in[0,1]$ and $w_i$ satisfies\n\t$$\n\t\\Delta_\\infty w_i(x)=\\kappa_\\mu^4 2^{\\frac{4\\gamma}{3-\\gamma}i}\\tau^\\gamma f\\left(\\tau^{-1}2^{-\\frac{4}{3-\\gamma}i}w_i(x)\\right).\n\t$$\n\tUsing \\eqref{1.2}, for $i$ big we estimate\n\t$$\n\t2^{\\frac{4\\gamma}{3-\\gamma}i}\\tau^\\gamma f\\left(\\tau^{-1}2^{-\\frac{4}{3-\\gamma}i}w_i(x)\\right)\\le M\\tau^\\gamma f\\left(\\tau^{-1}w_i(x)\\right)\\le M\\displaystyle\\sup_{[0,\\|u\\|_\\infty]}f.\n\t$$\n\tOnce again applying Lemma \\ref{l2.2}, one gets\n\t\\[ \\sup_{B_{1\/2}} w_i \\leq 2^{-\\frac{4}{3-\\gamma}}, \\]\n\tor in other terms,\n\t\\[ \\sup_{B_{1\/4}} w_{i-1} \\leq 2^{-2 \\frac{4}{3-\\gamma}}. \\]\n\tContinuing this way, for $w_0$ we obtain\n\t\\begin{equation}\\label{3.2}\n\t\\sup_{B_{2^{-i}}} w_0 \\leq 2^{-i\\frac{4}{3-\\gamma}}.\n\t\\end{equation}\n\tNext, for a fixed $0<r<\\rho\/2$, we choose $i\\in\\mathbb{N}$ such that $2^{-(i+1)}<r\/\\rho\\le 2^{-i}$. Then, by \\eqref{3.2},\n\t$$\n\t\\tau\\sup_{B_r}u=\\sup_{B_{r\/\\rho}} w_0\\le\\sup_{B_{2^{-i}}} w_0 \\leq 2^{-i\\frac{4}{3-\\gamma}}\\le\\left(\\frac{2r}{\\rho}\\right)^{\\frac{4}{3-\\gamma}},\n\t$$\n\twhich gives\n\t$$\n\t\\sup_{B_r}u\\le\\frac{1}{\\tau}\\left(\\frac{2}{\\rho}\\right)^{\\frac{4}{3-\\gamma}}r^{\\frac{4}{3-\\gamma}}=:Cr^{\\frac{4}{3-\\gamma}},\n\t$$\n\tand the proof is complete.\n\\end{proof}\nTheorem \\ref{t3.1} reveals that although, in general, a non-negative viscosity solution of \\eqref{1.1} is not better than $C^{1,\\alpha}$ in its positivity set $\\{u>0\\}$, it touches the free boundary $\\partial\\{u>0\\}$ smoothly. In other words, a non-negative viscosity solution of \\eqref{1.1} may have cusp singularities in its positivity set, and yet it is smooth near its free boundary.\n\n\\section{Liouville type results}\\label{Liouville}\n\nDespite the regularity information being available only across the free boundary, it is enough to derive the following Liouville type theorem.\n\\begin{theorem}\\label{t3.2}\n\tIf $u$ is an entire solution, \\eqref{1.2} holds and $u(x_0)=0$ for some $x_0\\in\\mathbb{R}^n$ with\n \\begin{equation}\\label{3.3}\n \tu(x) = o\\left(|x|^{\\frac{4}{3-\\gamma}}\\right),\\,\\,\\,\\textrm{ as }\\,\\,\\,|x| \\rightarrow \\infty,\n \\end{equation}\n then $u \\equiv0$.\n\\end{theorem}\n\\begin{proof}\n\tWithout loss of generality we may assume that $x_0=0$. For $k\\in\\mathbb{N}$, set\n\t\\[ u_k(x):= k^{\\frac{-4}{3-\\gamma}}u(kx),\\quad x\\in B_1,\\]\n\twhere $B_1$ is the ball of radius one centered at the origin. Note that $u_k(0)=0$. Since $u$ is an entire solution, for $x\\in B_1$ one has\n\t\\begin{equation*}\n\t\\Delta_\\infty u_k(x)=k^{\\frac{-4 \\gamma}{3-\\gamma}} f\\left(k^{\\frac{4}{3-\\gamma}} u_{k}(x)\\right).\n\t\\end{equation*}\n\tNote that the right hand side of the last equation satisfies \\eqref{1.2}. 
From Theorem \\ref{t3.1}, we then deduce that if $x_k\\in\\overline{B}_r$ is such that\n $$\n u_k(x_k)=\\sup_{\\overline{B}_r} u_k,\n $$\n where $r>0$ is small, then in $B_r$ one has\n \\begin{equation}\\label{3.4}\n \\|u_k \\|_\\infty\\to0,\\,\\,\\,\\textrm{ as }\\,\\,\\,k\\to\\infty.\n \\end{equation}\n In fact, if $|kx_k|$ remains bounded as $k\\to\\infty$, then applying Theorem \\ref{t3.1} to $u_k$ we obtain\n\t\\begin{equation}\\label{3.5}\n\tu_k (x_k) \\le C_k|x_k|^{\\frac{4}{3-\\gamma}},\n\t\\end{equation}\n\twhere $C_k>0$ and $C_k\\to0$. This implies that $u(kx_k)$ remains bounded as $k\\to\\infty$, and therefore $u_k(x_k)\\to0$, as $k\\to\\infty$, so \\eqref{3.4} holds. It remains true also in the case when $|kx_k|\\to\\infty$, as $k\\to\\infty$, since then from \\eqref{3.3} we get\n\t\\begin{equation*}\n\tu_k(x_k) = |x_k|^{\\frac{4}{3-\\gamma}}\\dfrac{u(kx_k)}{|kx_k|^{\\frac{4}{3-\\gamma}}}\\le r^{\\frac{4}{3-\\gamma}}\\dfrac{u(kx_k)}{|kx_k|^{\\frac{4}{3-\\gamma}}}\\to0,\\,\\,\\,\\textrm{ as }\\,\\,\\,k\\to\\infty.\n\t\\end{equation*}\n\tNow, if there exists $y\\in\\mathbb{R}^n$ such that $u(y)>0$, by choosing $k\\in\\mathbb{N}$ large enough so that $y\\in B_{kr}$ and using \\eqref{3.4} and \\eqref{3.5}, we estimate\n\t\\[\\dfrac{u(y)}{|y|^{\\frac{4}{3-\\gamma}}} \\leq \\sup_{B_{kr}} \\dfrac{u(x)}{|x|^{\\frac{4}{3-\\gamma}}}=\\sup_{B_r} \\dfrac{u_k(x)}{|x|^{\\frac{4}{3-\\gamma}}}\\leq \\dfrac{u(y)}{2|y|^{\\frac{4}{3-\\gamma}}},\\]\n\twhich is a contradiction.\n\\end{proof}\nIn fact, once the comparison principle holds, the condition \\eqref{3.3} can be weakened in the following sense. Let $x_0\\in\\mathbb{R}^n$ and $r>0$ be fixed, and let $u\\ge0$ be the unique solution of \\eqref{2.1} in $B_r(x_0)$ with $\\varphi\\equiv\\alpha_r>0$ constant, guaranteed by Theorem \\ref{t2.1}. Note that $u$ is a viscosity sub-solution of\n\\begin{equation}\\label{3.7}\n\\left\\{\n\\begin{aligned}\n\\Delta_\\infty v &= \\lambda v_+^{\\gamma} &&\\textrm{ in } B_r(x_0),\\\\\nv&=\\alpha_r &&\\textrm{ on } \\partial B_r(x_0),\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere\n\\begin{equation}\\label{3.6}\n\\lambda:=M^{-1}\\beta^{-\\gamma}f(\\beta),\n\\end{equation}\nand $\\beta>\\|u\\|_\\infty$ is a constant large enough so that \\eqref{1.2} holds. Then the condition \\eqref{3.3} can be substituted by\n\\begin{equation}\\label{3.8}\n\t\\limsup_{|x| \\rightarrow \\infty} \\dfrac{u(x)}{|x-x_0|^{\\frac{4}{3-\\gamma}}} < \\left( \\lambda \\frac{(3-\\gamma)^4}{64(1+\\gamma)} \\right)^{\\frac{1}{3-\\gamma}},\n\\end{equation}\nwhere $\\lambda$ is defined by \\eqref{3.6}, and Theorem \\ref{t3.2} can be improved to the following variant (Theorem \\ref{t3.3} below). The choice of the right hand side of \\eqref{3.8} comes from the explicit structure of the unique solution of \\eqref{3.7}, which, as observed in \\cite{ALT16}, is given by\n\\begin{equation}\\label{3.9}\nv(x):=\\Upsilon\\left(|x-x_0|-r + \\left(\\frac{\\alpha_r}{\\Upsilon}\\right)^{\\frac{3-\\gamma}{4}} \\right)^{\\frac{4}{3-\\gamma}}_+,\n\\end{equation}\nwhere\n\\begin{equation}\\label{3.10}\n\t\\Upsilon:= \\left(\\lambda \\frac{(3-\\gamma)^4}{64(1+\\gamma)}\\right)^{\\frac{1}{3-\\gamma}}.\n\\end{equation}\n\\begin{theorem}\\label{t3.3}\n\tLet \\eqref{1.2} and \\eqref{comparison} hold. 
If $u$ is an entire solution and satisfies \\eqref{3.8}, then $u\\equiv0$.\n\\end{theorem}\n\\begin{proof}\n\tOnce $r>0$ is large enough, \\eqref{3.8} guarantees, with $\\Upsilon>0$ defined by \\eqref{3.10},\n\t\\[ \\sup_{\\partial B_r} \\dfrac{u(x)}{r^{\\frac{4}{3-\\gamma}}} \\leq \\theta\\Upsilon,\\]\n\tfor some $\\theta < 1$. On the other hand, using \\eqref{1.2}, one has that the unique solution of \\eqref{3.7}, with $\\alpha_r=\\displaystyle\\sup_{\\partial B_r(x_0)}u$, given by \\eqref{3.9}, is a viscosity sub-solution of \\eqref{1.1}. The comparison principle, Lemma \\ref{l2.1}, then implies that $u\\le v$ in $B_r(x_0)$. Letting $r\\to\\infty$, we conclude that $u\\equiv0$.\n\\end{proof}\n\\begin{remark}\\label{r3.1}\n\tAs can be seen from \\eqref{3.9}, the plateau of $v$, i.e., the set $\\{v=0\\}$, is the ball $\\overline{B}_{R}(x_0)$, where\n\t$$\n\t0<R:=r-\\left(\\frac{\\alpha_r}{\\Upsilon}\\right)^{\\frac{3-\\gamma}{4}},\n\t$$\n\tprovided $\\alpha_r<\\Upsilon r^{\\frac{4}{3-\\gamma}}$.\n\\end{remark}\n\n\\section{Geometric measure estimates}\\label{s4}\nIn this section we show that if, additionally, $f$ satisfies the lower bound \\eqref{5.1} with $\\lambda>0$, $\\gamma\\in[0,3)$, $t>0$ bounded, and $\\delta>0$ small enough, then across the free boundary non-negative viscosity solutions of \\eqref{1.1} grow exactly as $r^{\\frac{4}{3-\\gamma}}$ in the ball $B_r$, for $r>0$ small enough. As a consequence, we conclude that the touching ground surface is a porous set, which implies that it has Hausdorff dimension less than $n$, and so its Lebesgue measure is zero (see \\cite{Z88}). We start with the following non-degeneracy theorem.\n\\begin{theorem}\\label{t4.1}\n\tLet \\eqref{5.1} hold. Let also $f$ satisfy \\eqref{comparison} or $\\inf f>0$. If $u$ is a non-negative viscosity solution of \\eqref{1.1}, then there exists a universal constant $c>0$, depending only on dimension and $\\gamma$, such that\n\t\\[ \\sup_{B_{r}(x_0)} u \\ge cr^{\\frac{4}{3-\\gamma}}, \\]\nwhere $x_0\\in\\overline{\\{u>0\\}}\\cap\\Omega$ and $0<r<\\mathrm{dist}(x_0,\\partial\\Omega)$.\n\\end{theorem}\n\\begin{proof}\n\tBy continuity, it is enough to consider the case $x_0\\in\\{u>0\\}\\cap\\Omega$. Set\n\t$$\n\tv(x):=c|x-x_0|^{\\frac{4}{3-\\gamma}},\n\t$$\n\twith a constant $c\\in(0,\\Upsilon)$, where $\\Upsilon>0$ is defined by \\eqref{3.10}. Using \\eqref{5.1}, a direct computation reveals that this choice of $c$ makes $v$ a viscosity super-solution of \\eqref{1.1} in $B_r(x_0)$, where $r>0$ is such that $B_r(x_0)\\subset\\Omega$. If $v\\ge u$ on $\\partial B_r(x_0)$, then the comparison principle, Lemma \\ref{l2.1}, would imply $v\\ge u$ in $B_r(x_0)$, contradicting the fact that $0=v(x_0)<u(x_0)$. Hence, there exists a point $y\\in\\partial B_r(x_0)$ such that $u(y)>v(y)=cr^{\\frac{4}{3-\\gamma}}$, which gives the desired estimate.\n\\end{proof}\nAs a consequence, we obtain the porosity of the free boundary.\n\\begin{corollary}\\label{c4.1}\n\tIf \\eqref{1.2}, \\eqref{comparison} and \\eqref{5.1} hold, and $u$ is a non-negative viscosity solution of \\eqref{1.1}, then the free boundary $\\partial\\{u>0\\}$ is a porous set.\n\\end{corollary}\n\\begin{proof}\n\tLet $x\\in\\partial\\{u>0\\}$ and $y\\in\\overline{B}_r(x)$ be such that\n\t$$\n\tu(y)=\\sup_{B_{r}(x)}u.\n\t$$\n\tBy Theorem \\ref{t4.1}, $u(y)\\ge cr^{\\frac{4}{3-\\gamma}}$. On the other hand, Theorem \\ref{t3.1} provides\n\t$$\n\tu(y)\\le C\\left[d(y)\\right]^{\\frac{4}{3-\\gamma}},\n\t$$\n\twhere $d(y):=\\text{dist}\\left(y,\\partial\\{u>0\\}\\right)$. Therefore,\n\t$$\n\t\\left(\\frac{c}{C}\\right)^{\\frac{3-\\gamma}{4}}r\\le d(y).\n\t$$\n\tHence, if $\\sigma:=\\frac{1}{2}\\left(\\frac{c}{C}\\right)^{\\frac{3-\\gamma}{4}}$, one has\n\t$$\n\tB_{2\\sigma r}(y)\\subset\\{u>0\\}.\n\t$$\n\tWe now choose $\\xi\\in(0,1)$ such that for the point $z:=\\xi y+(1-\\xi)x$ we have $|y-z|=\\sigma r$. 
Then\n\t$$\n\tB_{\\sigma r}(z)\\subset B_{2\\sigma r}(y)\\cap B_r(x).\n\t$$\n\tMoreover, we have\n\t$$\n\tB_{2\\sigma r}(y)\\cap B_r(x)\\subset \\{u>0\\},\n\t$$\n\twhich together with the previous inclusion implies\n\t$$\n\tB_{\\sigma r}(z)\\subset B_{2\\sigma r}(y)\\cap B_r(x)\\subset B_r(x)\\setminus\\partial\\{u>0\\},\n\t$$\n\tthat is, the set $\\partial\\{u>0\\}$ is porous with porosity $\\sigma$.\n\\end{proof}\n\\begin{corollary}\\label{c4.2}\n\tIf \\eqref{1.2}, \\eqref{comparison}, \\eqref{5.1} hold, and $u$ is a non-negative viscosity solution of \\eqref{1.1}, then the Lebesgue measure of the set $\\partial\\{u>0\\}$ is zero.\n\\end{corollary}\n\\section{The borderline case}\\label{s5}\nAlthough, in general, one cannot expect more than $C^{1,\\alpha}$ regularity for viscosity solutions of \\eqref{1.1}, Theorem \\ref{t3.1} provides higher and higher regularity across the free boundary, as $\\gamma\\in[0,3)$ gets closer to 3. In this section we analyze the limit case $\\gamma=3$. The scaling property of the operator plays an essential role here, as $\\gamma=3$ is also the degree of homogeneity of the infinity Laplacian, meaning that $\\Delta_\\infty(Cu)=C^3\\Delta_\\infty u$, for any constant $C$. Observe that Theorem \\ref{t3.1} cannot be applied directly, since the estimates deteriorate as $\\gamma\\to3$. Thus, in this section \\eqref{1.2} is substituted by\n\\begin{equation}\\label{6.1}\n0\\le f(\\delta t)\\le M\\delta^3f(t),\n\\end{equation}\nwith $M>0$, $t>0$ bounded and $\\delta>0$ small. Our first observation is as follows.\n\\begin{lemma}\\label{l5.1}\n\tIf $u$ is a non-negative viscosity solution of \\eqref{1.1}, where $f$ satisfies \\eqref{6.1}, then every zero of $u$ is of infinite order.\n\\end{lemma}\n\\begin{proof}\n\tThis is a consequence of Theorem \\ref{t3.1}. To see this, it is enough to rewrite \\eqref{6.1} as\n\t$$\n\tf(\\delta t)\\leq M_\\delta\\delta^{3-\\beta}f(t),\n\t$$\n\twhere $M_\\delta:=M\\delta^\\beta$ and $\\beta>0$. An application of Theorem \\ref{t3.1} with $\\gamma$ replaced by $3-\\beta$ (and $M$ by $M_\\delta$) leads to the conclusion that if $u(z)=0$ for some $z\\in\\Omega$, then $u(x)=O\\left(|x-z|^{\\frac{4}{\\beta}}\\right)$ as $x\\to z$; since $\\beta>0$ is arbitrary, $z$ is a zero of infinite order.\n\\end{proof}\nFurthermore, we show that if a non-negative viscosity solution of \\eqref{1.1} vanishes at a point, then it must vanish everywhere. For $f\\equiv0$ this follows from the Harnack inequality. The particular case when $f$ is homogeneous of degree three, that is, $f(t)=M t^3$, was studied in \\cite{ALT16}, where, by means of a suitable barrier function, it was concluded that if a non-negative viscosity solution vanishes at an interior point, then it has to vanish everywhere. Unlike \\cite{ALT16}, our function $f$ is not given explicitly, which makes the construction of a suitable barrier function more complicated. Observe that once \\eqref{6.1} holds, then $f(0)=0$, hence $\\inf f=0$, so to use the comparison principle, one needs to assume that $f$ is non-decreasing.\n\\begin{theorem}\\label{t5.1} Let $u$ be a non-negative viscosity solution of \\eqref{1.1}, where $f$ satisfies \\eqref{comparison} and \\eqref{6.1}. If $\\{u=0\\}\\cap\\Omega\\neq\\emptyset$, then $u\\equiv0$.\n\\end{theorem}\n\\begin{proof}\n\tWe argue by contradiction, assuming that there is $x\\in\\Omega$ such that $u(x)=0$, but $u(y)>0$ for a point $y\\in\\Omega$. 
Without loss of generality we may assume that\n\t$$\n\tr:=\\mathrm{dist}\\left(y,\\{u=0\\}\\right)<\\frac{1}{10}\\mathrm{dist}\\left(y,\\partial\\Omega\\right).\n\t$$\n\tWe aim to construct a sub-solution of \\eqref{1.1} which stays below $u$ on $\\partial B_r(y)$.\n\t\n\tLet $w$ be an infinity sub-harmonic function in $B_r(y)$ such that $|\\nabla w|\\ge\\eta$ for a constant $\\eta>0$ to be chosen later. Such a function can be built up as a limit, as $p\\to\\infty$, of $p$-sub-harmonic functions with modulus of gradient separated from zero by $\\eta$. We refer the reader to \\cite{J93} for details.\n\n Now if $g$ is a smooth function and $v=g(w)$, a direct computation reveals that\n $$\n \\Delta_{\\infty} v=\\left[g'(w)\\right]^3\\Delta_{\\infty} w+\\left[g^{\\prime}(w)\\right]^{2} g^{\\prime \\prime}(w)|\\nabla w|^{4}.\n $$\n Thus, for $g(t)=e^t+t$,\n \\begin{equation}\\label{5.2}\n \\Delta_{\\infty} v\\ge\\left[g^{\\prime}(w)\\right]^{2} g^{\\prime \\prime}(w)|\\nabla w|^{4},\n \\end{equation}\n since $g^\\prime\\ge1$ and $\\Delta_\\infty w\\ge0$. Also $g^{\\prime\\prime}\\ge e^{-\\|w\\|_\\infty}>0$, and \\eqref{5.2} yields (recall that $|\\nabla w|\\ge\\eta$)\n \\begin{equation}\\label{5.3}\n \t\\Delta_\\infty v\\ge\\mu\\eta^4,\n \\end{equation}\n where $\\mu:=e^{-\\|w\\|_\\infty}>0$. Choosing $\\eta>0$ so that\n $$\n \\mu\\eta^4>M\\max_{[0,\\|v\\|_\\infty]}f,\n $$\n from \\eqref{5.3} we obtain\n $$\n \\Delta_\\infty v-Mf(v)\\ge\\mu\\eta^4-M\\max_{[0,\\|v\\|_\\infty]}f\\ge0,\n $$\n i.e., $v$ is a sub-solution of \\eqref{1.1}. The latter together with \\eqref{6.1} gives, for any small constant $\\delta>0$,\n\t$$\n\t\\Delta_\\infty\\left(\\delta v\\right)-f\\left(\\delta v\\right)\\ge\\delta^3\\left(\\Delta_\\infty v-Mf(v)\\right)\\ge0,\n\t$$\n\tthat is, the function $\\delta v$ is also a sub-solution of \\eqref{1.1}. We choose $\\delta>0$ small enough to guarantee\n\t$$\n\t\t\\delta v(x)\\le u(x),\\,\\,\\,x\\in\\partial B_r(y),\n\t$$\n\tand by the comparison principle, Lemma \\ref{l2.1},\n\t\\begin{equation}\\label{5.4}\n\t\t\\delta v(x)\\le u(x),\\,\\,\\,x\\in B_r(y).\n\t\\end{equation}\n Observe that, writing \\eqref{6.1} as\n $$\n f(\\delta t)\\le \\left(M\\delta\\right)\\delta^2f(t),\n $$\n and applying Theorem \\ref{t3.1} with $\\gamma=2$ and $\\widetilde{M}=M\\delta$, we arrive at\n \\begin{equation}\\label{5.5}\n \t\\sup_{B_d(z)}u\\le Cd^4,\n \\end{equation}\n where $z\\in\\partial B_r(y)\\cap\\partial\\{u>0\\}$, and $d>0$ is small. In fact, choosing $0<d<r$ small enough, \\eqref{5.4} and \\eqref{5.5} become incompatible near $z$: by \\eqref{5.4}, the solution $u$ stays above the barrier $\\delta v$, whose gradient has modulus bounded from below by a positive constant, while \\eqref{5.5} forces $u$ to decay as $d^4$. This contradiction completes the proof.\n\\end{proof}\n\n\\section{Examples}\\label{s7}\nIn this section we present some examples of source terms to which our results apply. Consider first $f(t):=e^t-1$, which satisfies \\eqref{1.2} with $\\gamma=0$ and $M=1$, for $\\delta>0$ small and $t>0$. Hence, applying Theorem \\ref{t3.1} to\n$$\n\\Delta_\\infty u=e^u-1,\n$$\nwe conclude that non-negative viscosity solutions of the above equation are $C^\\frac{4}{3}$ near the free boundary $\\partial\\{u>0\\}$. Moreover, if $u$ is an entire solution of the last equation, which vanishes at a point and\n$$\nu(x)=o\\left(|x|^\\frac{4}{3}\\right),\\,\\,\\,\\textrm{ as }\\,\\,\\,|x|\\rightarrow\\infty,\n$$\nthen Theorem \\ref{t3.2} implies that it has to be identically zero.\n\nThe same conclusion holds when the source term in \\eqref{1.1} is $f(t):=\\log (t^2+1)$. Of course, Theorem \\ref{t3.1} can be applied also to any linear combination (with continuous, bounded coefficients) of the above source terms. In fact, Theorem \\ref{t3.1} is true for any non-negative continuous function which is non-decreasing around zero and vanishes at the origin. 
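\nFor the reader's convenience, we also record a minimal sketch of the radial computation behind the exponent $\\frac{4}{3-\\gamma}$ and the constant $\\Upsilon$ in \\eqref{3.10}, which explains the $C^{\\frac{4}{3}}$ growth rate quoted in the examples above; the auxiliary notation $V$, $\\alpha$ and $s$ is introduced only here, and we assume the normalization $\\Delta_\\infty v=\\langle D^2v\\,\\nabla v,\\nabla v\\rangle$, under which radial functions satisfy $\\Delta_\\infty V=(\\partial_s V)^2\\partial_s^2 V$. For $\\alpha:=\\frac{4}{3-\\gamma}$ and $V(x):=\\Upsilon|x-x_0|^{\\alpha}$ one computes, away from $x_0$,\n$$\n\\Delta_\\infty V=\\left(\\Upsilon\\alpha s^{\\alpha-1}\\right)^{2}\\Upsilon\\alpha(\\alpha-1)s^{\\alpha-2}=\\Upsilon^{3}\\alpha^{3}(\\alpha-1)s^{3\\alpha-4}=\\Upsilon^{3-\\gamma}\\,\\frac{64(1+\\gamma)}{(3-\\gamma)^{4}}\\,V^{\\gamma},\\quad s:=|x-x_0|,\n$$\nsince $3\\alpha-4=\\gamma\\alpha$ and $\\alpha^3(\\alpha-1)=\\frac{64(1+\\gamma)}{(3-\\gamma)^4}$. Hence $\\Delta_\\infty V=\\lambda V^{\\gamma}$ exactly when $\\Upsilon$ is given by \\eqref{3.10}; in particular, for $\\gamma=0$ the profile $V(x)=\\Upsilon|x-x_0|^{\\frac{4}{3}}$ satisfies $\\Delta_\\infty V=\\lambda$, which is the $C^{\\frac{4}{3}}$ rate appearing above.\n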
We point out that \\eqref{1.2} (as well as \\eqref{5.1} and \\eqref{6.1}) is required to hold only around zero, so Theorem \\ref{t3.1} remains true for source terms that coincide with any of the above examples around zero and can be arbitrary away from zero, as long as they stay non-negative and continuous (and, in the case of Theorem \\ref{t4.1} and its consequences, also non-decreasing).\n\\begin{remark}\\label{r7.1}\n\tOur results also remain true when the right hand side of \\eqref{1.1} carries bounded coefficients, that is, when $f(u)$ is replaced by $a(x)f(u)$ for a continuous function $a$ satisfying $0<a_0\\le a(x)\\le a_1<\\infty$.\n\\end{remark}\n\nWe finish with an application of Theorem \\ref{t5.1}. Let $u$ be a non-negative viscosity solution of\n\\begin{equation}\\label{7.1}\n\t\\Delta_\\infty u=\\log\\left(1+u^3\\right).\n\\end{equation}\nSince $f(t)=\\log(1+t^3)$ satisfies \\eqref{1.2} with $M=1$ and $\\gamma=0$, Theorem \\ref{t3.1} implies $C^{\\frac{4}{3}}$ regularity of $u$ near the touching ground. On the other hand, the function $f(t)$ can be written as\n$$\n\\log\\left(1+t^3\\right)=t^3\\frac{\\log\\left(1+t^3\\right)}{t^3}=:t^3g(t).\n$$\nSetting $g(0):=1$, the function $g$ is continuous and bounded ($0\\le g\\le1$), and the non-decreasing function $f(t)=g(t)t^3$ satisfies \\eqref{6.1} with a suitable constant $M>0$, depending only on the upper bound for $t$. Applying Theorem \\ref{t5.1}, we conclude that if $u$ is a non-negative viscosity solution of \\eqref{7.1}, which is zero at a point, then it must be identically zero.\n\n\\bigskip\n\n\\noindent{\\bf Acknowledgments.} NMLD was partially supported by Instituto Federal de Educa\\c{c}\\~ao, Ci\\^encia e Tecnologia do Rio Grande do Sul. NMLD thanks the Analysis group at Centre for Mathematics of the University of Coimbra (CMUC) for fostering a pleasant and productive scientific atmosphere during his postdoctoral program. RT was partially supported by FCT -- Funda\\c c\\~ao para a Ci\\^encia e a Tecnologia, I.P., through projects PTDC\/MAT-PUR\/28686\/2017 and UTAP-EXPL\/MAT\/0017\/2017, and by CMUC -- UID\/MAT\/00324\/2013, funded by the Portuguese government through FCT and co-funded by the European Regional Development Fund through Partnership Agreement PT2020.