\section{Introduction}

The numerical solution of fluid flow problems is an important topic in computational science and engineering, and it has received much attention in the last few decades \cite{bathe2007benchmark,brandt1979multigrid,brandt2011multigrid,connor2013finite,vanka1986block,wesseling2001geometric}. The Stokes-Darcy Brinkman problem is one such problem, used to model fluid motion in porous media with fractures. The discretization of fluid flow problems often leads to a saddle-point system, which is ill-conditioned. Designing fast solvers for these problems is often challenging due to the small magnitude of the physical parameters of the model.

We consider the multigrid numerical solution of the Stokes-Darcy Brinkman equations
\begin{subequations}
\label{eq:SDB}
\begin{align}
 - \epsilon^2 \Delta\boldsymbol{u} +\boldsymbol{u} + \nabla p =&\boldsymbol{f} \qquad \text{in}\,\, \Omega \label{eq:stokes-DB-one}\\
 \nabla \cdot \boldsymbol{u}=&g \qquad \text{in}\,\, \Omega \label{eq:stokes-DB-two} \\
 \boldsymbol{u} =&0 \qquad \text{on}\,\, \partial \Omega \label{eq:stokes-DB-three},
\end{align}
\end{subequations}
where $\epsilon \in(0,1]$. The source term $g$ is assumed to satisfy the solvability condition
\begin{equation*}
 \int_{\Omega} g\, d\Omega =0.
\end{equation*}
Under this condition, equations \eqref{eq:stokes-DB-one}, \eqref{eq:stokes-DB-two}, and \eqref{eq:stokes-DB-three} have a unique solution.

A variety of discretization schemes are available for equations \eqref{eq:stokes-DB-one}, \eqref{eq:stokes-DB-two}, and \eqref{eq:stokes-DB-three}, including finite element methods \cite{gulbransen2010multiscale,vassilevski2014mixed,xie2008uniformly,xu2010new,zhai2016new,zhang2009low,zhao2020new}, finite difference techniques \cite{he2017finite,sun2019stability}, and divergence-conforming B-spline methods \cite{evans2013isogeometric}. When $\epsilon= 0$, the model problem reduces to the Darcy problem \cite{arraras2021multigrid,harder2013family}. For $\epsilon \in(0,1]$, designing a robust discretization and numerical solver is challenging. When certain stable Stokes elements, such as the Taylor--Hood element, are used, the convergence rate deteriorates as the Stokes-Darcy Brinkman problem becomes Darcy-dominated \cite{hannukainen2011computations}. Conversely, when Darcy-stable elements, such as the lowest-order Raviart-Thomas elements, are used, convergence degrades as the problem becomes Stokes-dominated \cite{mardal2002robust}.

Upon discretization, large-scale indefinite linear systems typically need to be solved, at times repeatedly. Within the context of multigrid, several effective block-structured relaxation schemes are available for such saddle-point systems, including the Braess-Sarazin smoother \cite{braess1997efficient, he2018local,MR1810326}, distributive smoothers \cite{chen2015multigrid,he2018local}, Schwarz-type smoothers \cite{schoberl2003schwarz}, Vanka smoothers \cite{claus2021nonoverlapping,MR3217219,manservisi2006numerical,molenaar1991two,wobker2009numerical}, and Uzawa-type relaxation \cite{MR1451114,MR3217219,MR1302679,luo2017uzawa}.

We note also that a number of effective preconditioning methods are available for the Stokes-Darcy Brinkman problem, for example, the scalable block diagonal preconditioner \cite{vassilevski2013block} and Uzawa-type preconditioning \cite{kobelkov2000effective,sarin1998efficient}.
Multigrid methods have been studied in depth \cite{butt2018multigrid,coley2018geometric,kanschat2017geometric,larin2008comparative,olshanskii2012multigrid}. Braess-Sarazin, Uzawa, and Vanka smoothers within multigrid with finite element discretization have been discussed in \cite{larin2008comparative}; however, the convergence rate is highly dependent on the physical parameters. A Gauss--Seidel smoother based on a Uzawa-type iteration is studied in \cite{MR3217219}, where the authors provide an upper bound on the smoothing factor. Moreover, the performance of Uzawa with a Gauss--Seidel type coupled Vanka smoother \cite{vanka1986block}, in which the pressure and the velocities in a grid cell are updated simultaneously, has been investigated in \cite{MR3217219}, showing that the actual convergence of the W-cycle of Uzawa is approximately the same as that obtained with the Vanka smoother.

Our interest is in the marker-and-cell (MAC) scheme, a finite difference method on a staggered mesh. On a uniform mesh, the method is second-order accurate for both velocity and pressure \cite{sun2019stability}. We propose a Vanka-type Braess-Sarazin relaxation (V-BSR) scheme for the Stokes-Darcy Brinkman equations discretized by the MAC scheme on staggered meshes. In contrast to the Vanka smoother of \cite{MR3217219}, our work builds an algorithm that decouples velocity and pressure, which is often preferred for cost efficiency. Specifically, in our relaxation scheme, the shifted Laplacian operator, $ - \epsilon^2 \Delta\boldsymbol{u} +\boldsymbol{u}$, is solved by an additive Vanka-type smoother.
Instead of solving the many subproblems involved in the Vanka setting, we derive the stencil of the Vanka smoother, which means that we can form its global matrix. As a result, our multigrid method involves only matrix-vector products. This represents significant savings compared with traditional approaches that require computationally expensive exact solves; in V-BSR, we solve the Schur complement system by only two or three iterations of the Jacobi method, which achieves the same performance as an exact solve. We apply local Fourier analysis (LFA) to select the multigrid damping parameter and predict the actual multigrid performance. From this analysis, we derive an optimal damping parameter and the corresponding optimal smoothing factor. These depend on the physical parameters and the meshsize, which means that we can propose an adaptive damping parameter on each multigrid level. The optimal parameter turns out to be close to one and relatively insensitive to the physical parameters and the meshsize, which allows for an easy choice of an approximately optimal damping parameter. We quantitatively compare the LFA results for the optimal parameter with those for the value one, and present numerical results for two-grid and V-cycle multigrid to validate the high efficiency of our methods. Our V-cycle results outperform those of the Uzawa and Vanka smoothers \cite{MR3217219}, especially for small $\epsilon$.

The rest of the work is organized as follows. In Section \ref{sec:Discretization-relaxation} we review the MAC scheme for our model problem and propose the aforementioned Vanka-based Braess-Sarazin relaxation. We apply LFA to study the smoothing process in Section \ref{sec:LFA}, where the optimal LFA smoothing factor is derived. In Section \ref{sec:numerical} we present our LFA predictions for the two-grid method and actual multigrid performance.
Finally, we draw conclusions in Section \ref{sec:con}.


\section{Discretization and relaxation}\label{sec:Discretization-relaxation}

As mentioned in the Introduction, we use throughout the well-known MAC scheme to solve~\eqref{eq:SDB}. For the discretization of \eqref{eq:SDB}, a staggered mesh is needed to guarantee numerical stability. The discrete unknowns $u$, $v$, $p$ are placed in different locations; see Figure \ref{fig:MAC-stokes-brinkman}. The stability and convergence of the MAC scheme for this problem have been studied in \cite{sun2019stability}.

\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{./figures/figure_MAC_StokesBrinkman.pdf}
 \caption{The location of the unknowns in the staggered grid: $\Box-u,\,\, \lozenge-v, \,\, \bigcirc-p$.}\label{fig:MAC-stokes-brinkman}
\end{figure}

The stencil representation of the MAC scheme for the Stokes-Darcy Brinkman equations is
 \begin{equation}\label{eq:Kh--stencil-operator}
 \mathcal{K}_h =\begin{pmatrix}
 - \epsilon^2\Delta_{h} + I & 0 & (\partial_{x})_{h/2}\\
 0 & - \epsilon^2 \Delta_{h} + I & (\partial_{y})_{h/2} \\
 - (\partial_{x})_{h/2} & - (\partial_{y})_{h/2} & 0
 \end{pmatrix},
 \end{equation}
where
\begin{equation*}
 -\Delta_{h} =\frac{1}{h^2}\begin{bmatrix}
 & -1 & \\
 -1 & 4 & -1 \\
 & -1 &
 \end{bmatrix},\quad
 (\partial_{x})_{h/2} =\frac{1}{h}\begin{bmatrix}
 -1& 0 & 1 \\
 \end{bmatrix},\quad
 (\partial_{y})_{h/2} =\frac{1}{h}\begin{bmatrix}
 1 \\
 0 \\
 -1
 \end{bmatrix}.
\end{equation*}

After discretization, the corresponding linear system is
\begin{equation}\label{eq:linear-system}
\mathcal{K}_h \boldsymbol{x}_h=\begin{pmatrix}
\mathcal{A} & \mathcal{B}^T\\
\mathcal{B} & 0
\end{pmatrix}
\begin{pmatrix}
\boldsymbol{u}_h
\\
p_h
\end{pmatrix}
=\begin{pmatrix}
\boldsymbol{f}_h
\\
g_h
\end{pmatrix}=b_h,
\end{equation}
where $\mathcal{A}$
is the matrix corresponding to the discretization of $ - \epsilon^2 \Delta\boldsymbol{u} +\boldsymbol{u}$, and $\mathcal{B}^T$ is the discrete gradient.

In order to solve \eqref{eq:linear-system} efficiently by multigrid, we use Braess-Sarazin relaxation (BSR), with the smoother
 \begin{equation}\label{eq:Mh-form}
\mathcal{M}_h= \begin{pmatrix}
\mathcal{C} & \mathcal{B}^T\\
\mathcal{B} & 0
\end{pmatrix},
\end{equation}
where $\mathcal{C}$ is an approximation to $\mathcal{A}$. In the context of preconditioning, such an approach is known as {\em constraint preconditioning} \cite{keller2000constraint,chidyagwai2016constraint,rees2007preconditioner}, and it has received quite a bit of attention due to its attractive property of computing interim approximate solutions that satisfy the constraints.

A number of studies \cite{he2018local,YH2021massStokes} have shown that the efficiency of solving the Laplacian determines the convergence of BSR. To construct an efficient approximation $\mathcal{C}$, we first investigate the discrete operator corresponding to $-\epsilon^2 \Delta u +u $, denoted by
 \begin{equation}\label{eq:positive-shift-Laplace-system}
L= A+ I,
\end{equation}
where $A$ corresponds to the five-point discretization of the operator $-\epsilon^2 \Delta u$. The stencil notation for the discrete operator $-\epsilon^2 \Delta u +u $ is
\begin{equation}\label{eq:shift-Laplace-stencil}
L= \frac{\epsilon^2}{h^2}
\begin{bmatrix}
& -1 & \\
-1 & 4+\frac{h^2}{\epsilon^2} & -1 \\
& -1 &
\end{bmatrix}=\frac{\epsilon^2}{h^2}
\begin{bmatrix}
& -1 & \\
-1 & 4+r & -1 \\
& -1 &
\end{bmatrix},
\end{equation}
where $r=\frac{h^2}{\epsilon^2}$. When $r=0$, \eqref{eq:shift-Laplace-stencil} reduces to the discretization of $-\epsilon^2 \Delta u$.

Recently, we proposed an additive element-wise Vanka smoother \cite{CH2021addVanka} for $\Delta u$. Our current goal is to extend our approach to \eqref{eq:positive-shift-Laplace-system}.
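As a concrete illustration, the action of the stencil \eqref{eq:shift-Laplace-stencil} on a grid function can be sketched as follows (a minimal NumPy sketch, assuming zero Dirichlet boundary values stored in a halo; the function and variable names are illustrative, not part of our implementation):

```python
import numpy as np

def apply_shifted_laplacian(u, h, eps):
    """Apply L = -eps^2 * Delta_h + I, i.e. the five-point stencil
    (eps^2/h^2) * [[ , -1, ], [-1, 4+r, -1], [ , -1, ]] with r = h^2/eps^2,
    to the interior of u; the boundary halo of u is assumed to hold zeros."""
    r = h**2 / eps**2
    Lu = np.zeros_like(u)
    Lu[1:-1, 1:-1] = (eps**2 / h**2) * (
        (4 + r) * u[1:-1, 1:-1]
        - u[:-2, 1:-1] - u[2:, 1:-1]
        - u[1:-1, :-2] - u[1:-1, 2:]
    )
    return Lu
```

Note that the center coefficient $(\epsilon^2/h^2)(4+r) = 4\epsilon^2/h^2 + 1$ reproduces the identity shift exactly.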
An immediate challenge here, in contrast to \cite{CH2021addVanka}, is the difference in scale between the discretized scaled Laplacian and the identity operator.

Denote the element-wise smoother by $M_e$, which has the form
 \begin{equation}\label{eq:Vanka-operator}
M_e = \sum_{j=1}^{N} V_j^T D_j L_j^{-1} V_j,
\end{equation}
where $D_j=\frac{1}{4}I$ with $I$ being the $4\times 4$ identity matrix, $L_j$ is the coefficient matrix of the $j$-th subproblem defined for one element, and $V_j$ is a restriction operator mapping the global vector to the $j$-th subproblem.
We consider
\begin{equation*}
\mathcal{C}^{-1} =
\begin{pmatrix}
M_e & 0\\
0 & M_e
\end{pmatrix}.
\end{equation*}
The relaxation scheme for \eqref{eq:linear-system} is
\begin{equation}\label{eq:BSR-relax-scheme}
\boldsymbol{x}^{k+1}_h = \boldsymbol{x}^k_h+\omega \mathcal{M}^{-1}_h(b_h- \mathcal{K}_h\boldsymbol{x}^k_h).
\end{equation}
We refer to the above relaxation as {\em Vanka-based Braess-Sarazin relaxation} (V-BSR).

Let $b_h- \mathcal{K}_h\boldsymbol{x}^k_h=( r_{ \boldsymbol{u}}, r_p)$. In \eqref{eq:BSR-relax-scheme}, we need to solve for
 $(\delta \boldsymbol{u}, \delta p)=\mathcal{M}^{-1}_h(r_{\boldsymbol{u}}, r_p)$ by
\begin{eqnarray}
 (\mathcal{B}\mathcal{C}^{-1}\mathcal{B}^{T})\delta p&=&\mathcal{B}\mathcal{C}^{-1} r_{ \boldsymbol{u}}- r_{p}, \label{eq:solution-schur-complement}\\
 \delta \boldsymbol{u}&=& \mathcal{C}^{-1}(r_{ \boldsymbol{u}}-\mathcal{B}^{T}\delta p).\nonumber
\end{eqnarray}

Solving \eqref{eq:solution-schur-complement} exactly is prohibitive and impractical in the current context, and it has been shown in a few studies \cite{he2018local,MR1810326} that an inexact solve can be applied and performs well.
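The two-stage solve \eqref{eq:solution-schur-complement}, with a few weighted Jacobi sweeps in place of the exact Schur-complement solve, can be sketched as follows (a minimal NumPy sketch on generic dense matrices; the names, the number of sweeps, and the Jacobi weight are illustrative choices, not the exact configuration used in our experiments):

```python
import numpy as np

def vbsr_solve(Cinv, B, r_u, r_p, sweeps=3, w=0.8):
    """Approximately solve M (du, dp) = (r_u, r_p) for the smoother
    M = [[C, B^T], [B, 0]], given the action of C^{-1} as a matrix."""
    S = B @ Cinv @ B.T                 # Schur complement B C^{-1} B^T
    rhs = B @ (Cinv @ r_u) - r_p
    dp = np.zeros_like(rhs)
    Dinv = 1.0 / np.diag(S)            # Jacobi uses the diagonal of S
    for _ in range(sweeps):            # a few weighted Jacobi sweeps
        dp += w * Dinv * (rhs - S @ dp)
    du = Cinv @ (r_u - B.T @ dp)
    return du, dp
```

When the Jacobi sweeps converge, the update satisfies both block rows of $\mathcal{M}_h(\delta\boldsymbol{u},\delta p)=(r_{\boldsymbol{u}},r_p)$; the velocity update is exact for whatever $\delta p$ the inner iteration returns.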
In the sequel we present a smoothing analysis for the exact solve; in practice, for assessing the performance of the multigrid scheme, we apply a few iterations of weighted Jacobi to \eqref{eq:solution-schur-complement}.

The relaxation error operator for \eqref{eq:BSR-relax-scheme} is given by
\begin{equation}\label{eq:relaxation-error-operator}
\mathcal{S}_h = I-\omega \mathcal{M}^{-1}_h\mathcal{K}_h,
\end{equation}
where $\omega$ is a damping parameter to be determined.

For a two-grid method, the error propagation operator is
\begin{equation}\label{eq:two-grid-error-operator}
E_h=S^{\nu_2}_h (I- P_h(L_{2h})^{-1} R_h L_h) S^{\nu_1}_h,
\end{equation}
where the coarse-grid operator $L_{2h}$ is obtained by rediscretization, and the integers $\nu_1$ and $\nu_2$ are the numbers of pre- and post-smoothing steps. For simplicity, we denote the overall number of smoothing steps by $\nu=\nu_1+\nu_2$. We consider simple restriction operators using six points for the $u$ and $v$ components of the velocity, that is,
 \begin{equation*}
 R_{h,u} = \frac{1}{8}\begin{bmatrix}
 1 & & 1 \\
 2 & \star & 2 \\
 1 & & 1
 \end{bmatrix},
 \quad
 R_{h,v} = \frac{1}{8}\begin{bmatrix}
 1 & 2 & 1 \\
 & \star & \\
 1 & 2 & 1
 \end{bmatrix},
\end{equation*}
where the $\star$ denotes the position (on the coarse grid) at which the discrete operator is applied. For interpolation, we take $P_{h,u} =4 R^T_{h,u}$ and $P_{h,v} =4 R^T_{h,v}$.
For the restriction of the pressure, we use
 \begin{equation*}
 R_{h,p} = \frac{1}{4}\begin{bmatrix}
 1 & & 1 \\
 & \star & \\
 1 & & 1
 \end{bmatrix},
\end{equation*}
and $P_{h,p}=4R^T_{h,p}$. Consequently,
\begin{equation*}
R_h = \begin{pmatrix}
 R_{h,u} & 0 & 0 \\
 0 & R_{h,v} & 0 \\
 0 & 0 & R_{h,p}
 \end{pmatrix}, \quad P_h = 4 R^T_h.
\end{equation*}

\section{Local Fourier analysis}\label{sec:LFA}

To identify a proper parameter $\omega$ in \eqref{eq:BSR-relax-scheme} and construct fast multigrid methods, we apply LFA \cite{wienands2004practical,trottenberg2000multigrid} to examine the multigrid relaxation scheme.
The LFA smoothing factor, see Definition \ref{def:LFA-mu}, often offers a sharp prediction of actual multigrid performance.

\begin{definition}
Let $L_h =[s_{\boldsymbol{\kappa}}]_{h}$ be a scalar stencil operator acting on grid ${G}_{h}$ as
\begin{equation*}
 L_{h}w_{h}(\boldsymbol{x})=\sum_{\boldsymbol{\kappa}\in{V}}s_{\boldsymbol{\kappa}}w_{h}(\boldsymbol{x}+\boldsymbol{\kappa}h),
\end{equation*}
where $s_{\boldsymbol{\kappa}}\in \mathbb{R}$ is constant, $w_{h}(\boldsymbol{x}) \in l^{2} ({G}_{h})$, and ${V}$ is a finite index set.
Then, the symbol of $L_{h}$ is defined as
\begin{equation}\label{eq:symbol-calculation-form}
 \widetilde{L}_{h}(\boldsymbol{\theta})=\displaystyle\sum_{\boldsymbol{\kappa}\in{V}}s_{\boldsymbol{\kappa}}e^{i \boldsymbol{\theta}\cdot\boldsymbol{\kappa}},\qquad i^2=-1.
\end{equation}
\end{definition}

We consider standard coarsening.
The low and high frequencies are given by
$$\boldsymbol{\theta} \in T^{\rm{L}} =\left[-\frac{\pi}{2}, \frac{\pi}{2}\right)^d; \qquad \boldsymbol{\theta} \in T^{\rm{H}} =\left[-\frac{\pi}{2}, \frac{3\pi}{2}\right)^d \setminus T^{\rm{L}}.$$

\begin{definition}\label{def:LFA-mu}
We define the LFA smoothing factor for the relaxation error operator $\mathcal{S}_h$ as
\begin{equation*}
\mu_{\rm loc}(\mathcal{S}_h) = \max_{\boldsymbol{\theta} \in T^{\rm{H}}}\{\rho(\widetilde{\mathcal{S}}_h(\boldsymbol{\theta}))\},
\end{equation*}
where $\rho(\widetilde{\mathcal{S}}_h(\boldsymbol{\theta}))$ stands for the spectral radius of $\widetilde{\mathcal{S}}_h(\boldsymbol{\theta})$.
\end{definition}

The symbol of $\mathcal{S}_h$ defined in \eqref{eq:relaxation-error-operator} is a $3\times 3$ matrix, since $\mathcal{K}_h$ is a $3\times 3$ block system; see \eqref{eq:Kh--stencil-operator}. The same holds for $\mathcal{M}_h$, see \eqref{eq:Mh-form}, and the symbol of each block is a scalar. For more details on how to compute the symbol of $\mathcal{S}_h$, refer to other studies \cite{farrell2021local,he2018local}. Since $\mu_{\rm loc}(\mathcal{S}_h)$ is a function of the parameter $\omega$, we are interested in minimizing $\mu_{\rm loc}(\mathcal{S}_h)$ over $\omega$ to obtain a fast convergence speed.
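As a concrete instance of \eqref{eq:symbol-calculation-form}, the symbol of a constant-coefficient stencil can be evaluated numerically; the sketch below (NumPy; the names and the sampled frequency are illustrative) does so for the shifted-Laplacian stencil \eqref{eq:shift-Laplace-stencil}, whose symbol works out to $\frac{\epsilon^2}{h^2}(4+r-2\cos\theta_1-2\cos\theta_2)$:

```python
import numpy as np

def stencil_symbol(stencil, theta):
    """Evaluate L~(theta) = sum_kappa s_kappa exp(i theta . kappa)
    for a stencil given as a dict {(k1, k2): s_kappa}."""
    return sum(s * np.exp(1j * (theta[0] * k1 + theta[1] * k2))
               for (k1, k2), s in stencil.items())

def shifted_laplacian_stencil(h, eps):
    """Five-point stencil of -eps^2 Delta_h + I with r = h^2/eps^2."""
    r = h**2 / eps**2
    s = eps**2 / h**2
    return {(0, 0): s * (4 + r),
            (1, 0): -s, (-1, 0): -s, (0, 1): -s, (0, -1): -s}
```

The symmetry of the stencil makes this symbol real for every $\boldsymbol{\theta}$.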
We define the optimal smoothing factor as
 \begin{equation*}
\mu_{\rm opt} =\min_{\omega}\mu_{\rm loc}(\mathcal{S}_h).
\end{equation*}

For the two-grid error operator $E_h$ defined in \eqref{eq:two-grid-error-operator}, the two-grid LFA convergence factor is
\begin{equation}\label{eq:LFA-two-grid-convergence-factor}
\rho_h(\nu)= \max_{\boldsymbol{\theta}\in T^{\rm L}}\{\rho( \widetilde{\mathbf{E}}_h(\omega,\boldsymbol{\theta}))\},
\end{equation}
where $\widetilde{\mathbf{E}}_h$ is the two-grid error operator symbol and $\rho( \widetilde{\mathbf{E}}_h)$ stands for the spectral radius of the matrix $\widetilde{\mathbf{E}}_h$. Since $E_h$ contains the coarse- and fine-grid operators, its symbol is a $12\times 12$ matrix, accounting for the four harmonic frequencies.

From this point onward, we drop the subscript $h$, unless it is necessary.

The element-wise Vanka-type smoother has been successfully applied to complex-shifted Laplacian systems arising from optimal control problems \cite{HL2022shiftLaplacianPiTMG}. Here, we consider an element-wise additive Vanka smoother applied to \eqref{eq:shift-Laplace-stencil}.
The subproblem coefficient matrix $L_j$ in \eqref{eq:Vanka-operator} has a symmetric structure,
 \begin{equation*}
L_j =\frac{\epsilon^2}{h^2}
\begin{pmatrix}
4+r & -1 & -1 &0 \\
-1 & 4+r& 0 &-1\\
-1 & 0 & 4+r &-1\\
0 & -1 & -1 &4+r
\end{pmatrix}.
\end{equation*}

It follows that
\begin{equation} \label{eq:inverse-Li-form}
L^{-1}_j =\frac{h^2}{\epsilon^2}
\begin{pmatrix}
a & b & b &c \\
b & a& c &b\\
b & c & a &b \\
c & b & b &a
\end{pmatrix},
\end{equation}
where
\begin{subequations}
\label{eq:Vanka}
\begin{eqnarray}
a&=& \frac{r^2+8r+14}{(2+r)(4+r)(6+r)}, \label{eq:Vanka-a}\\
b&=& \frac{1}{(2+r)(6+r)}, \label{eq:Vanka-b}\\
c&= & \frac{2}{(2+r)(4+r)(6+r)}.\label{eq:Vanka-c}
\end{eqnarray}
\end{subequations}
It is easy to show that $a>b>c$, which is useful for our analysis.

Based on \eqref{eq:Vanka-operator} and \eqref{eq:inverse-Li-form}, the stencil of the element-wise Vanka smoother $M_e$ is given by
\begin{equation*}
M_e = \frac{h^2}{4\epsilon^2}
\begin{bmatrix}
c & 2b &c\\
2b & 4a &2b\\
c & 2b &c
\end{bmatrix}.
\end{equation*}
Using \eqref{eq:symbol-calculation-form}, we have
\begin{eqnarray*}
\widetilde{L} &=&\frac{\epsilon^2}{h^2}(4+r-2\cos \theta_1 -2\cos \theta_2),\\
 \widetilde{M}_e &=&\frac{h^2}{\epsilon^2} (a+b\cos \theta_1 +b\cos \theta_2 +c\cos\theta_1\cos \theta_2).
\end{eqnarray*}
Let $t= \epsilon^2 (4+r-2\cos \theta_1 -2\cos \theta_2)$ and $\hat{t}=\frac{\epsilon^2}{ a+b\cos \theta_1 +b\cos \theta_2 +c\cos\theta_1\cos \theta_2}$.
Then,
\begin{equation*}
\widetilde{\mathcal{K}}= \frac{1}{h^2}\begin{pmatrix}
t & 0 & i 2h \sin(\theta_1/2)\\
0 & t & i 2h \sin(\theta_2/2)\\
-i 2h \sin(\theta_1/2) & -i 2h \sin(\theta_2/2) &0
\end{pmatrix}
\end{equation*}
and
\begin{equation*}
\widetilde{\mathcal{M}}= \frac{1}{h^2} \begin{pmatrix}
\hat{t} & 0 &i 2h \sin(\theta_1/2)\\
0 & \hat{t} & i 2h \sin(\theta_2/2)\\
-i 2h \sin(\theta_1/2) & -i 2h \sin(\theta_2/2) &0
\end{pmatrix}.
\end{equation*}
To identify the eigenvalues of $\widetilde{\mathcal{M}}^{-1} \widetilde{\mathcal{K}}$, we first compute the determinant of $\widetilde{\mathcal{K}}-\lambda \widetilde{\mathcal{M}}$:
 \begin{eqnarray*}
 | \widetilde{\mathcal{K}}-\lambda\widetilde{\mathcal{M}}| &= &
 \frac{1}{h^2}\begin{vmatrix}
 t-\lambda \hat{t} & 0 & (1-\lambda) i 2h \sin(\theta_1/2) \\
 0 & t-\lambda \hat{t} & (1-\lambda) i 2h \sin(\theta_2/2) \\
 -(1-\lambda) i 2h \sin(\theta_1/2) &-(1-\lambda) i 2h \sin(\theta_2/2) &0
 \end{vmatrix} \\
 &=& \frac{1}{h^2}(t-\lambda \hat{t})(1-\lambda)^2\left((i 2h \sin(\theta_1/2))^2 + (i 2h \sin(\theta_2/2) )^2\right)\\
 &=& 4\hat{t}\left((\sin(\theta_1/2))^2 + ( \sin(\theta_2/2) )^2\right)(1-\lambda)^2 \left(\lambda -\frac{t}{\hat{t}}\right).
 \end{eqnarray*}
The three eigenvalues of $\widetilde{\mathcal{M}}^{-1} \widetilde{\mathcal{K}}$ are $1, 1$, and $\frac{t}{\hat{t}}=:\lambda^*$, where
\begin{equation}\label{eq:lambda-form-r}
\lambda^*(r;\cos \theta_1,\cos \theta_2) =(a+b\cos \theta_1 +b\cos \theta_2 +c\cos\theta_1\cos \theta_2)(4+r-2\cos \theta_1 -2\cos \theta_2).
\end{equation}

For $\boldsymbol{\theta} \in T^{\rm H}$, it is easy to show that
 \begin{equation}\label{eq:cos-range}
 (\cos\theta_1,\cos\theta_2) \in \mathcal{D}=[-1,1] \times [-1,0] \bigcup [-1,0]\times [0, 1].
\end{equation}

Next, we explore
the range of $\lambda^*$ over $\boldsymbol{\theta}$ for the high frequencies.

\begin{theorem}\label{thm:maxmin-and-theta}
For $\boldsymbol{\theta} \in T^{\rm H}$,
\begin{eqnarray*}
\max_{\boldsymbol{\theta} }\lambda^*(r;\cos \theta_1,\cos \theta_2) &=& \lambda^*(r;-1,-1) =(a-2b+c)(8+r)=:d_1(r),\\
\min_{\boldsymbol{\theta} }\lambda^*(r;\cos \theta_1,\cos \theta_2) &=& \lambda^*(r;1,0) =(a+b)(2+r)=:d_2(r).
\end{eqnarray*}
\end{theorem}
\begin{proof}
For simplicity, let $\eta_1=\cos\theta_1$ and $\eta_2=\cos \theta_2$. Then, we rewrite \eqref{eq:lambda-form-r} as
\begin{equation*}
 \lambda^*=\psi(\eta_1,\eta_2) =(a+b\eta_1+b\eta_2 +c\eta_1\eta_2)(4+r-2\eta_1 -2\eta_2).
\end{equation*}

We first look for critical points of $ \psi(\eta_1,\eta_2) $ in $\mathcal{D}$, see \eqref{eq:cos-range}, by computing the partial derivatives of $ \psi(\eta_1,\eta_2)$, which are given by
\begin{eqnarray}
\psi'_{\eta_1} (\eta_1,\eta_2)&=&rb+4b-2a-4b\eta_1+(4c+cr-4b)\eta_2-2c\eta_2^2-4c\eta_1\eta_2=0, \label{eq:partial-psi-eta1} \\
\psi'_{\eta_2}(\eta_1,\eta_2) &=& rb+4b-2a-4b\eta_2+(4c+cr-4b)\eta_1-2c\eta_1^2-4c\eta_1\eta_2=0. \label{eq:partial-psi-eta2}
\end{eqnarray}
Subtracting \eqref{eq:partial-psi-eta2} from \eqref{eq:partial-psi-eta1} gives
\begin{equation}\label{eq:critical-point-solu}
(\eta_1-\eta_2)\left(2(\eta_1+\eta_2)-4-r\right) =0.
\end{equation}
It follows that $\eta_1=\eta_2$ or $2(\eta_1+\eta_2)-4-r=0$. However, $\eta_1+\eta_2\leq 2<2+\frac{r}{2}$ for $r>0$, so the latter cannot occur. For $\eta_1=\eta_2$, we
replace $\eta_2$ by $\eta_1$ in \eqref{eq:partial-psi-eta1}, leading to
\begin{equation}\label{eq:solve-eta1=0}
6c\eta_1^2-(4c+cr-8b)\eta_1-(rb+4b-2a)=0.
\end{equation}
We claim that \eqref{eq:solve-eta1=0} has no real solution for $r>0$; to see this, we show that its discriminant is negative.
We first simplify $rb+4b-2a$.
Using \eqref{eq:Vanka-a} and \eqref{eq:Vanka-b} gives
\begin{eqnarray*}
rb+4b-2a &=&\frac{ 4+r}{(2+r)(6+r)}- \frac{2(r^2+8r+14)}{(2+r)(4+r)(6+r)}\\
 &=&\frac{ (4+r)^2-2(r^2+8r+14)}{(2+r) (4+r)(6+r)}\\
 &=& -\frac{1}{4+r}.
\end{eqnarray*}

Using \eqref{eq:Vanka-b} and \eqref{eq:Vanka-c}, the discriminant of \eqref{eq:solve-eta1=0} is
\begin{eqnarray*}
\Phi &=& (4c+cr-8b)^2+4 \cdot 6c(rb+4b-2a)\\
 &=&\left(\frac{8+2r}{(2+r)(4+r)(6+r)} - \frac{8(4+r)}{(2+r)(4+r)(6+r)} \right)^2-\frac{48}{(2+r)(4+r)(6+r)} \frac{1}{4+r}\\
 &=&\left (\frac{-6}{(2+r)(6+r)}\right)^2-\frac{48}{(2+r)(4+r)^2(6+r)}\\
 &=& \frac{12}{(2+r)(6+r)}\left(\frac{3}{(2+r)(6+r)}-\frac{4}{(4+r)^2}\right)\\
 &=& \frac{-12r(r+8)}{(2+r)^2(4+r)^2(6+r)^2}\leq 0.
\end{eqnarray*}
The case $r=0$ has been discussed in \cite{CH2021addVanka}, where $\psi'_{\eta_1} (\eta_1,\eta_2)=\psi'_{\eta_2} (\eta_1,\eta_2)=0$ gives $(\eta_1,\eta_2)=(-1,-1)$, a boundary point of $\mathcal{D}$, and $\lambda^*_{\rm max}=\frac{4}{3}$. When $r>0$, \eqref{eq:critical-point-solu} has no real solution in $\mathcal{D}$, so $\psi(\eta_1,\eta_2)$ cannot attain extreme values in the interior of $\mathcal{D}$. This means that we only need to find the extreme values of $\psi(\eta_1,\eta_2)$ on the boundary of $\mathcal{D}$, see \eqref{eq:cos-range}. To do this, we split the boundary of $\mathcal{D}$ as follows:
\begin{eqnarray*}
\partial \mathcal{D}_1 &=&\{-1\} \times [-1,1], \\
\partial \mathcal{D}_2 &=& [-1,1] \times \{-1\}, \\
\partial \mathcal{D}_3 &=& \{1\} \times [-1,0], \\
\partial \mathcal{D}_4 &=& [0,1]\times \{0\}, \\
\partial \mathcal{D}_5 &=& \{0\} \times [0,1], \\
 \partial \mathcal{D}_6&=& [-1,0] \times \{1\}.
\end{eqnarray*}

Due to the symmetry of $\psi(\eta_1,\eta_2)$, that is, $\psi(\eta_1,\eta_2)=\psi(\eta_2,\eta_1)$, we only need to find the extreme values of $ \psi(\eta_1,\eta_2) $ on $\partial \mathcal{D}_1, \partial \mathcal{D}_3$ and $\partial \mathcal{D}_4$. We present the results below.

\begin{enumerate}
\item For $(\eta_1,\eta_2) \in \partial \mathcal{D}_1$,
\begin{equation}\label{eq:psi-case1}
\psi(\eta_1,\eta_2) =\psi(-1,\eta_2) =(a-b+b\eta_2-c\eta_2)(6+r-2\eta_2).
\end{equation}
Note that the two roots of the quadratic \eqref{eq:psi-case1} are $\frac{6+r}{2}$ and $\frac{a-b}{c-b}$.
Using \eqref{eq:Vanka}, we have
\begin{equation*}
\frac{a-b}{c-b}= -(5+r).
\end{equation*}
Thus, the axis of symmetry is $\eta_2=\frac{(6+r)/2-(5+r)}{2}=-1-\frac{r}{4}\leq -1$. Using the fact that $a>b>c$, see \eqref{eq:Vanka-a}, \eqref{eq:Vanka-b}, and \eqref{eq:Vanka-c}, the quadratic function opens downward. Therefore,
 the maximum and minimum of $\psi(-1,\eta_2) $ for $\eta_2 \in[-1,1]$ are
\begin{align}
\begin{aligned}
\psi(-1,\eta_2) _{\rm max} &=\psi(-1,-1) =(a-2b+c)(8+r),\label{eq:case1-max}\\
\psi(-1,\eta_2) _{\rm min} &=\psi(-1,1) = (a-c)(4+r).
\end{aligned}
\end{align}
\item For $(\eta_1,\eta_2) \in \partial \mathcal{D}_3$,
\begin{equation}\label{eq:psi-case2}
\psi(\eta_1,\eta_2) =\psi(1,\eta_2) =(a+b+b\eta_2+c\eta_2)(2+r-2\eta_2).
\end{equation}
The two roots of the quadratic \eqref{eq:psi-case2} are $\frac{2+r}{2}$ and $-\frac{a+b}{b+c}$.
Using \eqref{eq:Vanka}, we have
\begin{equation*}
-\frac{a+b}{b+c}= -(3+r).
\end{equation*}
Thus, the axis of symmetry is $\eta_2=\frac{(2+r)/2-(3+r)}{2}=-1-\frac{r}{4}\leq -1$. Using the fact that $a>b>c$, the quadratic function opens downward.
It follows that,
for $\eta_2 \in[-1,0]$, the maximum and minimum of $\psi(1,\eta_2) $ are given by
\begin{eqnarray*}
\psi(1,\eta_2) _{\rm max} &=&\psi(1,-1) =(a-c)(4+r),\\
\psi(1,\eta_2) _{\rm min} &=&\psi(1,0) = (a+b)(2+r).
\end{eqnarray*}

\item For $(\eta_1,\eta_2) \in \partial \mathcal{D}_4$,
\begin{equation*}
\psi(\eta_1,\eta_2) =\psi(\eta_1,0) =(a+b\eta_1)(4+r-2\eta_1).
\end{equation*}
Note that the two roots of this quadratic are $\frac{4+r}{2}$ and $-\frac{a}{b}$.
Using \eqref{eq:Vanka-a} and \eqref{eq:Vanka-b}, we have
\begin{equation*}
-\frac{a}{b}= -(4+r)+\frac{2}{4+r}.
\end{equation*}
Thus, the axis of symmetry is $\eta_1=\frac{(4+r)/2-(4+r)+\frac{2}{4+r}}{2}=-1-\frac{r}{4}+\frac{1}{4+r}< 0$, and the quadratic again opens downward. Hence, the maximum and minimum of $\psi(\eta_1,0) $ for $\eta_1 \in[0,1]$ are
\begin{eqnarray}
\psi(\eta_1,0) _{\rm max} &=&\psi(0,0) =a(4+r),\label{eq:case3-max}\\
\psi(\eta_1,0) _{\rm min} &=&\psi(1,0) = (a+b)(2+r).
\nonumber
\end{eqnarray}
\end{enumerate}

Based on the above discussion, the minimum of $\psi(\eta_1,\eta_2) $ over $\partial \mathcal{D}$ is
\begin{equation*}
\psi(\eta_1,\eta_2) _{\rm min} =\psi(1,0)=\psi(0,1) = (a+b)(2+r).
\end{equation*}
Next, we compare $\psi(-1,-1)$ (see \eqref{eq:case1-max}) and $\psi(0,0)$ (see \eqref{eq:case3-max}) to determine the maximum.
Using \eqref{eq:Vanka}, we have
\begin{eqnarray*}
\psi(-1,-1)- \psi(0,0) &=&(a-2b+c)(8+r)-a(4+r)\\
 &=& 4a+(c-2b)(8+r)\\
 & =&\frac{4(r^2+8r+14)}{(2+r)(4+r)(6+r)} + \frac{2-2(4+r)}{(2+r)(4+r)(6+r)}(8+r)\\
 &=& \frac{2r^2+10r+8}{(2+r)(4+r)(6+r)}>0.
\end{eqnarray*}
It follows that the maximum of $\psi(\eta_1,\eta_2)$ is given by
 \begin{equation*}
\psi(\eta_1,\eta_2) _{\rm max} =\psi(-1,-1) =(a-2b+c)(8+r).
\end{equation*}
Thus, for $(\eta_1,\eta_2) \in \mathcal{D}$, the maximum and minimum of $\psi(\eta_1,\eta_2)$ are $\psi(-1,-1)$ and $\psi(1,0)=\psi(0,1)$, respectively.
\end{proof}

Based on the results in Theorem \ref{thm:maxmin-and-theta}, we can further determine the range of the extreme values of $\lambda^*$, which plays an important role in determining the optimal smoothing factor for V-BSR.

\begin{theorem}\label{thm:form-d1-d2}
Suppose $r\in[0,\infty)$. Then,
\begin{equation}\label{eq:explicit-form-low-up-bound}
d_1(r)=\frac{8+r}{6+r}, \quad d_2(r)=\frac{3+r}{4+r}.
\end{equation}
Furthermore,
\begin{eqnarray*}
1< &d_1(r)& \leq \frac{4}{3}, \\
 \frac{3}{4}\leq &d_2(r)& <1.
\end{eqnarray*}
\end{theorem}
\begin{proof}
Using \eqref{eq:Vanka}, we simplify $d_1(r)$ as follows:
\begin{eqnarray*}
d_1(r) &=&(a-2b+c)(8+r) \\
 &=& \frac{r^2+8r+14-2(4+r)+2}{(2+r)(4+r)(6+r)}(8+r)\\
 &=& \frac{(2+r)(4+r)}{(2+r)(4+r)(6+r)}(8+r)\\
 &=& \frac{8+r}{6+r}.
\end{eqnarray*}
Since $d_1(r)$ is a decreasing function of $r$, $\max_r d_1(r)=d_1(0)=\frac{4}{3}$.

Using \eqref{eq:Vanka-a} and \eqref{eq:Vanka-b}, we have
\begin{eqnarray*}
d_2(r) &=&(a+b)(2+r) \\
 &=& \frac{r^2+8r+14+4+r}{(2+r)(4+r)(6+r)} (2+r)\\
 &=& \frac{(3+r)(6+r)}{(4+r)(6+r)}\\
 &=&\frac{r+3}{r+4}.
\end{eqnarray*}
Since $d_2(r)$ is an increasing function of $r$, $\min_r d_2(r)=d_2(0)=\frac{3}{4}$.
\end{proof}

Now, we are able to derive the optimal smoothing factor of V-BSR for the Stokes-Darcy Brinkman problem.
\begin{theorem}\label{thm:opt-mu-omega}
For the V-BSR relaxation scheme \eqref{eq:BSR-relax-scheme}, the optimal smoothing factor is given by
\begin{equation}\label{eq:opt-mu-form-r}
\mu_{\rm opt}(r)=\min_{\omega} \max_{\boldsymbol{\theta} \in T^{\rm H}} \{|1-\omega|, |1-\omega \lambda^*|\} = \frac{3r+14}{2r^2+21r+50},
\end{equation}
provided that
\begin{equation}\label{eq:opt-omega}
\omega =\omega_{\rm opt}= \frac{2r^2+20r+48}{2r^2+21r+50}.
\end{equation}
Moreover,
\begin{equation*}
\mu_{\rm opt}(r)\leq \frac{7}{25}=0.28.
\end{equation*}
\end{theorem}
 \begin{proof}
From Theorem \ref{thm:maxmin-and-theta} and \eqref{eq:explicit-form-low-up-bound}, we know that
 \begin{equation}\label{eq:lambda*-two-roots}
 \max_{\boldsymbol{\theta} \in T^{\rm H}} \{|1-\omega|, |1-\omega \lambda^*|\} =\max\{|1-\omega d_2(r)|, |1-\omega d_1(r)| \}.
 \end{equation}
 To minimize $\max_{\boldsymbol{\theta} \in T^{\rm H}} \{|1-\omega \lambda^*|\}$, we require
 \begin{equation*}
|1-\omega d_2(r)|= |1-\omega
d_1(r)|,\n\\end{equation*} \nwhich gives $\\omega_{\\rm opt}(r) =\\frac{2}{d_1(r)+d_2(r)}$. Using Theorem \\ref{thm:form-d1-d2}, we obtain \n\\begin{equation*}\n \\omega_{\\rm opt}(r) =\\frac{2}{d_1(r)+d_2(r)} = \\frac{2r^2+20r+48}{2r^2+21r+50}\n\\end{equation*} \nand\n\\begin{equation*}\n\\mu_{\\rm opt}(r) =\\frac{d_1(r)-d_2(r)}{d_1(r)+d_2(r)} = \\frac{3r+14}{2r^2+21r+50}\\leq \\frac{7}{25}.\n\\end{equation*} \n \\end{proof}\n \n \\begin{remark}\n When $r=0$, Theorem \\ref{thm:opt-mu-omega} is consistent with the existing results \\cite{CH2021addVanka}, which amount to applying an element-wise Vanka smoother to the Poisson equation. \n \\end{remark}\n \\begin{proposition}\\label{prop:optimal-mu-decrease}\n $\\mu_{\\rm opt}(r)$ given by \\eqref{eq:opt-mu-form-r} is a decreasing function of $r$. \n \\end{proposition}\n \\begin{proof}\nThe derivative of $\\mu_{\\rm opt}$ is given by\n \\begin{equation*}\n \\mu'_{\\rm opt} = \\frac{-2(3r^2+28r+72)}{(2r^2+21r+50)^2}<0.\n\\end{equation*} \n This shows that the optimal smoothing factor decreases as $r$ increases. \n \\end{proof}\n\n \nLet us look at the optimal parameter \\eqref{eq:opt-omega}. It can be shown that\n \\begin{equation*}\n \\omega'_{\\rm opt}(r) = \\frac{2(r^2+4r-4)}{(2r^2+21r+50)^2}.\n\\end{equation*} \nWhen $r \\in[0, 2\\sqrt{2}-2]$, $\\omega_{\\rm opt}(r)$ is decreasing, and when $r \\in[2\\sqrt{2}-2, \\infty)$, $\\omega_{\\rm opt}(r)$ is increasing. 
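As a quick numerical sanity check (illustrative only, not part of the analysis), the closed forms of $\omega_{\rm opt}$ and $\mu_{\rm opt}$ can be evaluated directly from $d_1(r)=(8+r)/(6+r)$ and $d_2(r)=(3+r)/(4+r)$; the sketch below also confirms the bound $\mu_{\rm opt}\leq 7/25$ and the monotonicity claim of the proposition.

```python
# Illustrative sanity check (not part of the proof): evaluate the closed
# forms in the theorem from d_1(r) = (8+r)/(6+r) and d_2(r) = (3+r)/(4+r),
# and confirm the bound mu_opt <= 7/25 and the monotonicity of mu_opt.
def d1(r):
    return (8.0 + r) / (6.0 + r)

def d2(r):
    return (3.0 + r) / (4.0 + r)

def omega_opt(r):
    # optimal damping: omega_opt = 2 / (d_1 + d_2)
    return 2.0 / (d1(r) + d2(r))

def mu_opt(r):
    # optimal smoothing factor: mu_opt = (d_1 - d_2) / (d_1 + d_2)
    return (d1(r) - d2(r)) / (d1(r) + d2(r))

rs = [0.0, 0.5, 1.0, 4.0, 16.0, 100.0]
for r in rs:
    # closed forms stated in the theorem
    assert abs(omega_opt(r) - (2*r**2 + 20*r + 48) / (2*r**2 + 21*r + 50)) < 1e-12
    assert abs(mu_opt(r) - (3*r + 14) / (2*r**2 + 21*r + 50)) < 1e-12
    assert mu_opt(r) <= 7.0 / 25.0 + 1e-12      # mu_opt(0) = 7/25 is the maximum
# mu_opt is strictly decreasing in r
assert all(mu_opt(a) > mu_opt(b) for a, b in zip(rs, rs[1:]))
print("closed forms verified")
```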
It follows that\n \\begin{equation}\\label{eq:estimate-opt-omega}\n(\\omega_{\\rm opt}(r))_{\\rm min}=\\omega_{\\rm opt}(2\\sqrt{2}-2) = \\frac{(\\sqrt{2}-1)(4\\sqrt{2}+16)+24}{(\\sqrt{2}-1)(4\\sqrt{2}+17)+25}\\approx 0.959 \\leq \\omega_{\\rm opt}(r) <1.\n\\end{equation} \n %\n Thus, for simplicity, if we take $\\omega=1$, then \\eqref{eq:lambda*-two-roots} gives\n \\begin{equation}\\label{eq:omega-one-mu}\n\\mu(\\omega=1)=\\max\\{|1- d_2(r)|, |1- d_1(r)| \\}=\\frac{2}{6+r}\\leq \\frac{1}{3}.\n\\end{equation} \n\nIn practice, we can simply take $\\omega=1$. In multigrid, for fixed $\\epsilon$, the relaxation scheme has a different smoothing factor on each level, which can be computed from \\eqref{eq:omega-one-mu}. Note that \\eqref{eq:omega-one-mu} is a decreasing function of $r=\\frac{h^2}{\\epsilon^2}$, and $r$ grows as $h$ grows. This means that on a coarse level the relaxation scheme has a smaller smoothing factor, that is, faster error reduction, than on a fine level. \n \n \\section{Numerical experiments}\\label{sec:numerical}\n In this section, we first compute LFA two-grid convergence factors using two choices of the damping parameter, that is, $\\omega=1$ and $\\omega=\\omega_{\\rm opt}$, and then report V-cycle multigrid results for different values of the physical parameter $\\epsilon$.\n \n\\subsection{LFA prediction}\nWe compute the two-grid LFA convergence factor \\eqref{eq:LFA-two-grid-convergence-factor} using $h=1\/64$ with $\\omega=1$ and with the optimal $\\omega$ of \\eqref{eq:opt-omega}, derived from minimizing the LFA smoothing factor, for different $\\epsilon$. From Table \\ref{tab:LFA-results-h64-omega} we see a strong agreement between the two-grid convergence factors $\\rho_h(1)$ and the LFA smoothing factors. Moreover, the convergence factors for the optimal $\\omega$ are only slightly better than those for $\\omega=1$, which is reasonable since the optimal $\\omega$, see \\eqref{eq:estimate-opt-omega}, is very close to $1$. 
From our smoothing analysis, we know that even though the smoothing factor depends on $h$ and $\\epsilon$, the upper bound on the smoothing factor is $\\frac{1}{3}$. This is also confirmed by our two-grid LFA convergence factor $\\rho_h(1)$ in Table \\ref{tab:LFA-results-h64-omega}. \n\n \\begin{table}[H]\n \\caption{Two-grid LFA convergence factors, $\\rho_h(\\nu)$ with $h=1\/64$ and different choices of $\\omega$.}\n\\centering\n\\begin{tabular}{lccccc}\n\\hline\n$\\epsilon,\\omega=1$ & $\\mu$ &$\\rho_h(1)$ &$\\rho_h(2)$ &$\\rho_h(3)$ & $\\rho_h(4)$ \\\\ \\hline\n \n$1$ & 0.333 & 0.333 & 0.119 & 0.054 &0.043 \\\\\n$ 2^{-2} $ &0.333 & 0.333 & 0.119 & 0.054 & 0.042 \\\\\n$2^{-4}$ &0.330 & 0.330 &0.115 &0.052 &0.040 \\\\\n$2^{-6}$ &0.286 & 0.286 & 0.082 & 0.023 & 0.012 \\\\\n$2^{-8}$ & 0.091 & 0.091 & 0.008 & 0.001 &0.000 \\\\\n\\hline\n$1, \\omega_{\\rm opt}$ &0.280 &0.280 & 0.096 & 0.056 & 0.044 \\\\\n\n$2^{-2}$ & 0.280 &0.280 &0.096 & 0.056 &0.044 \\\\\n\n$2^{-4}$ & 0.276 & 0.276 & 0.093 & 0.055 &0.042 \\\\\n\n\n$2^{-6}$ & 0.233 &0.233 & 0.057 & 0.026 & 0.014 \\\\\n\n$2^{-8}$ & 0.069 & 0.069 & 0.005 & 0.000 & 0.000 \\\\\n\\hline\n\\end{tabular}\\label{tab:LFA-results-h64-omega}\n\\end{table}\n\nTo illustrate how the smoothing factor changes as a function of $r$, we plot $\\mu_{\\rm opt}$ defined in \\eqref{eq:opt-mu-form-r} and $\\mu(\\omega=1)$ in \\eqref{eq:omega-one-mu} as functions of $r$ in Figure \\ref{fig: mu-vs-r}. It is evident that as $r$ increases, the smoothing factor decreases and approaches zero, and $\\mu(\\omega=1)$ tends towards $\\mu_{\\rm opt}$. 
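The smoothing-factor entries of Table \ref{tab:LFA-results-h64-omega} can be cross-checked directly from the closed forms: with $h=1/64$ and $r=h^2/\epsilon^2$, the formulas $\mu(\omega=1)=2/(6+r)$ and $\mu_{\rm opt}=(3r+14)/(2r^2+21r+50)$ reproduce the printed three-digit values. A short illustrative script:

```python
# Illustrative cross-check of the table: with r = h^2 / epsilon^2 and
# h = 1/64, the formulas mu(omega=1) = 2/(6+r) and
# mu_opt = (3r+14)/(2r^2+21r+50) reproduce the printed smoothing factors.
h = 1.0 / 64.0

def mu_omega1(r):
    return 2.0 / (6.0 + r)

def mu_opt(r):
    return (3.0*r + 14.0) / (2.0*r**2 + 21.0*r + 50.0)

# epsilon -> (mu(omega=1), mu_opt), three digits as printed in the table
table = {
    1.0:     (0.333, 0.280),
    2.0**-2: (0.333, 0.280),
    2.0**-4: (0.330, 0.276),
    2.0**-6: (0.286, 0.233),
    2.0**-8: (0.091, 0.069),
}
for eps, (m1, mo) in table.items():
    r = h**2 / eps**2
    assert abs(mu_omega1(r) - m1) < 1e-3   # matches to the printed digits
    assert abs(mu_opt(r) - mo) < 1e-3
print("table smoothing factors reproduced")
```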
\n\n \n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{.\/figures\/plot-mu-vs-r.pdf}\n \\caption{Smoothing factors with optimal $\\omega$ and $\\omega=1$.}\\label{fig: mu-vs-r}\n\\end{figure}\n\n \n \n \\subsection{Multigrid performance}\n \n Consider the model problem \\eqref{eq:SDB} on the unit square domain $[0,1]\\times[0,1]$ with an exact solution taken from \\cite[Section 5]{sun2019stability}, \n given by \n\\begin{align*}\n u(x,y) &=\\pi \\sin^2(\\pi x) \\sin(2\\pi y),\\\\\n v(x,y) &=-\\pi \\sin(2\\pi x) \\sin^2(\\pi y),\\\\\n p(x,y) &=\\sin(\\pi y) -\\frac{2}{\\pi},\n\\end{align*} \nwith $g=0$. The source term is computed via $\\boldsymbol{f}=(f_1,f_2)=- \\epsilon^2 \\Delta\\boldsymbol{u} +\\boldsymbol{u} + \\nabla p$, which gives\n\\begin{align*}\nf_1 &=(4\\pi^3\\epsilon^2+\\pi)\\sin^2(\\pi x) \\sin(2\\pi y)-2\\pi^3\\epsilon^2\\cos(2\\pi x)\\sin(2\\pi y), \\\\\nf_2& = -(4\\pi^3\\epsilon^2+\\pi)\\sin(2\\pi x)\\sin^2(\\pi y)+2\\pi^3\\epsilon^2\\sin(2\\pi x)\\cos(2\\pi y)+\\pi \\cos(\\pi y).\n\\end{align*}\n\nTo validate our theoretical LFA predictions, we compute the measured multigrid convergence factors by\n\\begin{equation*}\n\\hat{\\rho}^{(k)}_h=\\left( \\frac{||r_k||}{||r_0||}\\right)^{1\/k},\n\\end{equation*}\nwhere $r_k=b_h-\\mathcal{K}_h\\boldsymbol{z}_k$ is the residual and $\\boldsymbol{z}_k$ is the $k$-th multigrid iterate. The initial guess is chosen randomly. In our tests, we report $\\hat{\\rho}^{(k)}_h=:\\hat{\\rho}_h $ with the smallest $k$ such that $||r_k||\/||r_0||\\leq 10^{-10}$. \n\nAs mentioned before, computing the exact solution of the Schur complement system \\eqref{eq:solution-schur-complement} is expensive. For our multigrid tests, we apply a few weighted ($\\omega_J$) Jacobi iterations to the Schur complement system. We choose $\\omega_J=0.8$, which appears robust with respect to $\\epsilon$. The number of Jacobi iterations is set to three. 
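As an illustrative check (the step size and tolerances below are arbitrary choices, not from the paper), the manufactured solution can be verified numerically: the velocity $(u,v)$ is divergence-free, and the stated $f_1$ agrees with $-\epsilon^2\Delta u+u+\partial_x p$ computed by central finite differences.

```python
import math

# Illustrative verification of the manufactured solution: (u, v) is
# divergence-free, and f_1 matches -eps^2*Laplacian(u) + u + p_x computed
# with central finite differences (step d and tolerances chosen ad hoc).
def u(x, y):
    return math.pi * math.sin(math.pi*x)**2 * math.sin(2*math.pi*y)

def v(x, y):
    return -math.pi * math.sin(2*math.pi*x) * math.sin(math.pi*y)**2

def p(x, y):
    return math.sin(math.pi*y) - 2.0/math.pi

def f1_formula(x, y, eps):
    return ((4*math.pi**3*eps**2 + math.pi)*math.sin(math.pi*x)**2*math.sin(2*math.pi*y)
            - 2*math.pi**3*eps**2*math.cos(2*math.pi*x)*math.sin(2*math.pi*y))

d = 1e-4  # finite-difference step
def dx(g, x, y):  return (g(x+d, y) - g(x-d, y)) / (2*d)
def dy(g, x, y):  return (g(x, y+d) - g(x, y-d)) / (2*d)
def dxx(g, x, y): return (g(x+d, y) - 2*g(x, y) + g(x-d, y)) / d**2
def dyy(g, x, y): return (g(x, y+d) - 2*g(x, y) + g(x, y-d)) / d**2

eps = 2.0**-4
for (x, y) in [(0.3, 0.7), (0.12, 0.55), (0.81, 0.29)]:
    assert abs(dx(u, x, y) + dy(v, x, y)) < 1e-5           # div u = 0
    lhs = -eps**2*(dxx(u, x, y) + dyy(u, x, y)) + u(x, y) + dx(p, x, y)
    assert abs(lhs - f1_formula(x, y, eps)) < 1e-4         # first component of f
print("manufactured solution verified")
```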
\n\n\\subsubsection{Two-grid results}\nWe first report the measured two-grid convergence factors using $h=1\/64$ and three Jacobi iterations for solving the Schur complement system. Table \\ref{tab:TG-omega1} shows that the measured two-grid performance using $\\omega=1$ matches the LFA predictions in Table \\ref{tab:LFA-results-h64-omega}, except for a small difference at $\\epsilon =2^{-8}$, which might suggest that more iterations are needed for the Schur complement system. However, since the measured convergence factor is satisfactory, we do not explore this further. Using the optimal $\\omega$, Table \\ref{tab:TG-omega2} shows that the measured two-grid performance matches the two-grid LFA predictions reported in Table \\ref{tab:LFA-results-h64-omega}, except for $\\epsilon=2^{-6}, 2^{-8}$. Again, the measured convergence factor is satisfactory, and there is no need to consider more Jacobi iterations for the Schur complement system. \n\n \n \n \\begin{table}[H]\n \\caption{Two-grid measured convergence factor, $\\hat{\\rho}_h(\\nu)$, using three Jacobi iterations for solving the Schur complement, $h=1\/64$ and $\\omega=1$.}\n\\centering\n\\begin{tabular}{lcccc }\n\\hline\n$ \\epsilon, \\omega=1 $ &$\\hat{\\rho}_h(1)$ &$\\hat{\\rho}_h(2)$ &$\\hat{\\rho}_h(3)$ & $\\hat{\\rho}_h(4)$ \\\\ \\hline\n \n$1$ & 0.319 &0.111 &0.033 & 0.023 \\\\\n$2^{-2} $ &0.317 & 0.109 &0.033 & 0.023 \\\\\n$2^{-4} $ & 0.300 & 0.094 & 0.029 & 0.021 \\\\\n$2^{-6}$ & 0.209 &0.047 &0.023 & 0.015 \\\\\n$2^{-8} $ &0.145 & 0.035 &0.020 & 0.015 \\\\\n\\hline\n \\end{tabular}\\label{tab:TG-omega1}\n\\end{table}\n\n \n \n \\begin{table}[H]\n \\caption{Two-grid measured convergence factor, $\\hat{\\rho}_h(\\nu)$, using three Jacobi iterations for solving the Schur complement, $h=1\/64$ and $\\omega_{\\rm opt}$, see \\eqref{eq:opt-omega}.}\n\\centering\n\\begin{tabular}{lcccc }\n\\hline\n$\\epsilon, \\omega_{\\rm opt}$ &$\\hat{\\rho}_h(1)$ &$\\hat{\\rho}_h(2)$ &$\\hat{\\rho}_h(3)$ & $\\hat{\\rho}_h(4)$ 
\\\\ \\hline\n \n$1$ & 0.266 & 0.082 &0.030 & 0.023 \\\\\n$2^{-2} $ & 0.264 &0.080 & 0.030 & 0.024 \\\\\n$2^{-4}$ & 0.248 &0.068 &0.029 & 0.023 \\\\\n$2^{-6}$ &0.163 & 0.044 & 0.024 & 0.016 \\\\\n$2^{-8}$ & 0.165 &0.040 & 0.019 &0.015 \\\\\n\\hline\n \\end{tabular}\\label{tab:TG-omega2}\n\\end{table}\n\n\\subsubsection{V(1,1)-cycle results}\nA two-grid method is computationally costly since the coarse problem is solved directly, and if the initial mesh is fine, the next coarser mesh may still give rise to a large problem. In practice, deeply-nested W-cycles and V-cycles are preferred. We now explore V(1,1)-cycle multigrid methods with two choices of $\\omega$ and varying values of the physical parameter $\\epsilon$. In order to study the sensitivity to the accuracy of solving the Schur complement system, we consider one, two, and three Jacobi iterations for the Schur complement system. We consider different $n\\times n$ finest meshgrids, where $n=32, 64,128, 256$.\n\n{\\bf One iteration for the Schur complement system:} We first report the iteration counts for V(1,1)-cycle multigrid methods using one Jacobi iteration for the Schur complement system in Table \\ref{tab:Itn-Schur-Jacobi-number-one} to achieve the tolerance $||r_k||\/||r_0||\\leq 10^{-10}$. We see that $\\omega=1$ and the optimal $\\omega$ give similar performance. When $\\epsilon =2^{-6}, 2^{-8}$, the iteration counts increase dramatically.\nTo mitigate this degradation, we consider two or three Jacobi iterations for the Schur complement system. 
\n \n \n \n \\begin{table}[H]\n \\caption{Iteration counts for V(1,1)-cycle multigrid with one Jacobi iteration for solving the Schur complement.}\n\\centering\n\\begin{tabular}{lcccc }\n\\hline\n$\\epsilon, \\omega=1$ &$n=32$ &$n=64$ &$n=128$ & $n=256$ \\\\ \n \n$1$ & 13 & 13 &13 & 15 \\\\\n$2^{-2} $ &12 & 13 & 13 & 14 \\\\\n$2^{-4}$ &11 & 11 & 12 & 12 \\\\\n$2^{-6}$ &23 & 18 &13 & 11 \\\\\n$2^{-8}$ & 50 & 50 &47 & 34 \\\\\n\\hline\n$\\epsilon, \\omega_{\\rm opt}$ &$n=32$ &$n=64$ &$n=128$ & $n=256$ \\\\ \n \n$1$ & 12 &12 & 12 & 15 \\\\\n$2^{-2} $ & 12 & 12 & 12 & 14 \\\\\n$2^{-4}$ & 11 & 11 &11 & 11 \\\\\n$2^{-6}$ & 26 &19 & 14 & 12 \\\\\n$2^{-8}$ &50 &50 &50 & 38 \\\\\n\\hline\n \\end{tabular}\\label{tab:Itn-Schur-Jacobi-number-one}\n\\end{table}\n\n \n \n \n{\\bf Two iterations for the Schur complement system:} We report the convergence history of the relative residual norm $\\frac{||r_k||}{||r_0||}$ as a function of the number of V(1,1)-cycles using two Jacobi iterations for the Schur complement system. Figure \\ref{fig:V-vs-J2-eps0} reports the results for $\\epsilon=1$. We see that using the optimal $\\omega$ takes 12 V(1,1)-cycle iterations to achieve the stopping tolerance, while it takes 13 iterations for $\\omega=1$. The convergence behavior is independent of the mesh size $h$. A similar performance is seen for $\\epsilon=2^{-2}, 2^{-4}, 2^{-6}, 2^{-8}$ in Figures \\ref{fig:V-vs-J2-eps2}, \\ref{fig:V-vs-J2-eps4}, \\ref{fig:V-vs-J2-eps6} and \\ref{fig:V-vs-J2-eps8}. Observe that for smaller values of $\\epsilon$, the iteration count does not increase. Using the optimal $\\omega$ requires one fewer iteration than $\\omega=1$. Thus, it is simple and reasonable to use $\\omega=1$ in practice. \n \n\n\\begin{figure}[h!] 
\n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J2-eps0.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J2-eps0.pdf}\n \\caption{Convergence history: Number of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=1$ and two Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-J2-eps0}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J2-eps2.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J2-eps2.pdf}\n \\caption{Convergence history: Number of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=2^{-2}$ and two Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-J2-eps2}\n\\end{figure}\n\n\\begin{figure}[h!] \n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J2-eps4.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J2-eps4.pdf}\n \\caption{Convergence history: Number of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=2^{-4}$ and two Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-J2-eps4}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J2-eps6.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J2-eps6.pdf}\n \\caption{Convergence history: Number of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=2^{-6}$ and two Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-J2-eps6}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J2-eps8.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J2-eps8.pdf}\n \\caption{Convergence history: Number 
of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=2^{-8}$ and two Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-J2-eps8}\n\\end{figure}\n\n\n \n{\\bf Three iterations for the Schur complement system:} We explore V(1,1)-cycle iterations with two choices of $\\omega$ and a varying physical parameter $\\epsilon$, using three Jacobi iterations for the Schur complement system. We report the history of the relative residual $\\frac{||r_k||}{||r_0||}$ as a function of the V(1,1)-cycle iteration count for $n\\times n$ meshgrids ($n=32, 64,128, 256$). Figure \\ref{fig:V-vs-eps0} shows the results for $\\epsilon=1$. We see that using the optimal $\\omega$ takes 12 V(1,1)-cycle iterations to achieve the stopping tolerance, while it takes 13 iterations for $\\omega=1$, and that the convergence behavior is independent of the mesh size $h$. A similar performance is seen for $\\epsilon=2^{-2}, 2^{-4}, 2^{-6}, 2^{-8}$ in Figures \\ref{fig:V-vs-eps2}, \\ref{fig:V-vs-eps4}, \\ref{fig:V-vs-eps6} and \\ref{fig:V-vs-eps8}. Compared with two Jacobi iterations for solving the Schur complement system, three Jacobi iterations give slightly better results for small $\\epsilon=2^{-6}, 2^{-8}$. Again, using the optimal $\\omega$ requires one fewer iteration than $\\omega=1$. Thus, it is simple and reasonable to use $\\omega=1$ in practice. Moreover, two Jacobi iterations are enough to achieve robustness of the V(1,1)-cycle multigrid with respect to the mesh size and the physical parameter $\\epsilon$. 
\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J3-eps0.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J3-eps0.pdf}\n \\caption{Convergence history: Number of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=1$ and three Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-eps0}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J3-eps2.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J3-eps2.pdf}\n \\caption{Convergence history: Number of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=2^{-2}$ and three Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-eps2}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J3-eps4.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J3-eps4.pdf}\n \\caption{Convergence history: Number of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=2^{-4}$ and three Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-eps4}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J3-eps6.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J3-eps6.pdf}\n \\caption{Convergence history: Number of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=2^{-6}$ and three Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-eps6}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega1J3-eps8.pdf}\n\\includegraphics[width=0.49\\textwidth]{.\/figures\/V-omega2J3-eps8.pdf}\n \\caption{Convergence 
history: Number of iterations versus relative residual of V(1,1)-cycle with $\\epsilon=2^{-8}$ and three Jacobi iterations for Schur complement system (left $\\omega=1$ and right optimal $\\omega$).} \\label{fig:V-vs-eps8}\n\\end{figure}\n\n\n \n \\section{Conclusions}\\label{sec:con}\n \n \n We propose a parameter-robust multigrid method for solving the discrete system of Stokes-Darcy Brinkman problems, with the marker and cell (MAC) scheme used for the discretization. The resulting linear system is a saddle-point system. In contrast to existing Vanka smoothers, where the velocity and pressure unknowns in a grid cell are updated simultaneously, we propose a Vanka-based Braess-Sarazin relaxation scheme, where the Laplace-like term in the saddle-point system is solved by an additive Vanka algorithm. This approach decouples the velocity and pressure unknowns. Moreover, only matrix-vector products are needed in our proposed multigrid method. LFA is used to analyze the smoothing process and to choose the optimal parameter that minimizes the LFA smoothing factor. From LFA, we derive the stencil of additive Vanka for the Laplace-like operator, which can help form the global iteration matrix, avoiding solving many subproblems as in the classical additive Vanka setting. \n \n Our main contribution is that we derive the optimal algorithmic parameter and the optimal LFA smoothing factor for the Vanka-based Braess-Sarazin relaxation scheme, and show that this scheme is highly efficient and robust with respect to the physical parameter. Our theoretical results reveal that although the optimal damping parameter depends on the physical parameter and the mesh size, it is very close to one. We also present the theoretical LFA smoothing factor with damping parameter one. In the Vanka-based Braess-Sarazin relaxation, a Schur complement system has to be solved. A direct solve is often expensive. 
We propose an inexact version of the Vanka-based Braess-Sarazin relaxation, where we apply only two or three Jacobi iterations to the Schur complement system to achieve the same performance as that of an exact solve. We show that using damping parameter one achieves almost the same performance as the optimal damping parameter, and the results are close to those of the exact version. Thus, using damping parameter one is recommended. Our V-cycle multigrid results illustrate the high efficiency of our relaxation scheme and its robustness to the physical parameter. \n \nWe comment that the proposed Vanka-based Braess-Sarazin multigrid method can be used\nas a preconditioner for Krylov subspace methods. We have limited ourselves to the MAC scheme on uniform grids. However, it is possible to extend the Vanka smoother to non-uniform grids. \n \n\\bibliographystyle{siam}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAirspace is utilized today by far fewer aircraft than it can accommodate,\nespecially low altitude airspace. There are more and more applications\nfor UAVs in low altitude airspace, ranging from on-demand package\ndelivery to traffic and wildlife surveillance, inspection of infrastructure,\nsearch and rescue, agriculture, and cinematography. Moreover, since\nUAVs are usually small owing to portability requirements, it is often\nnecessary to deploy a team of UAVs to accomplish certain missions.\nAll these applications share a common need for both navigation and\nairspace management. One good starting point is NASA's Unmanned Aerial\nSystem Traffic Management (UTM) project, which organized a symposium\nto begin preparations of a solution for low altitude traffic management\nto be proposed to the Federal Aviation Administration. 
What is more,\nair traffic for UAVs is attracting more and more research \\cite{IoD(2016)},\\cite{Devasia(2016)}.\nTraditionally, the main role of air traffic management (ATM) is to\nkeep a prescribed separation among all aircraft by using centralized\ncontrol. However, this is infeasible for an increasing number of UAVs because the\ntraditional control method lacks scalability. To address\nthis problem, free flight is being developed as an air traffic control method\nthat uses no centralized control. Instead, parts of airspace are reserved\ndynamically and automatically in a distributed way using computer\ncommunication to ensure the required separation among aircraft. This\nnew system may be implemented in the U.S. air traffic control system\nin the next decade. Airspace may be allocated temporarily by an ATM\nfor a special task within a given time interval. In this airspace,\naircraft have to be managed so that they can complete their\ntasks while avoiding collision. In \\cite{IoD(2016)}, the airspace\nis structured similarly to a road network, as shown in Figure \\ref{Airspacestructure}(a).\nAircraft are only allowed inside the following three types of regions: \\emph{airways},\nplaying a similar role to roads or virtual tubes; \\emph{intersections},\nformed by at least two airways; and \\emph{nodes}, which are the points\nof interest reachable through an alternating sequence of airways and\nintersections. \n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics[scale=0.65]{Airspacesturcure} \n\\par\\end{centering}\n\\caption{Practical application scenarios of the virtual tube passing problem.}\n\\label{Airspacestructure} \n\\end{figure}\n\nIn this paper, for simplicity, we consider coordinating the motions of VTOL UAVs\nto pass an \\emph{airway}, which can be taken as a virtual\ntube or corridor in the air. 
Concretely, the main problem is to coordinate\nthe motions of VTOL UAVs so that they pass a virtual tube, avoid inter-agent\nconflicts (coming within the minimum allowed distance of each\nother, not to be confused with \\emph{collisions}), and keep\nwithin the virtual tube; we call this the \\emph{virtual tube passing\nproblem}, which is very common in practice. For example, virtual\ntubes can be paths connecting two places, designed to bypass densely\npopulated areas or to stay covered by wireless mobile networks\n(4G or 5G). Gates, corridors or windows can also be treated this way,\nbecause they can be viewed as virtual tubes, as shown in Figure\n\\ref{Airspacestructure}(b). Such problems of coordinating multiple\nagents have been partly addressed using different approaches, various\nstability criteria and numerous control techniques \\cite{Parker(2009)},\\cite{Ren(2011)},\\cite{Antonelli(2013)},\\cite{Yan(2013)},\\cite{Hoy(2015)},\\cite{Oh(2015)},\\cite{Survey}.\nA commonly-used method, namely dynamic region-following formation\ncontrol, is to organize multiple agents as a group inside a region\nand then move the group through the virtual tubes, where the size of the\nregion can vary according to the virtual tubes \\cite{Hou(2009)},\\cite{Chen(2018)},\\cite{Dutta(2018)},\\cite{Wang(2007)}.\nHowever, formation control is not well suited to the air traffic\ncontrol problem considered here. First, each UAV has its own task, while\nformation flight means that one UAV has to wait for the others. Second,\nhigher-level coordination is needed to decide which UAVs should\nbe in one group. What is more, the number of UAVs in the airspace varies\ndynamically, which increases the design difficulty of the higher-level\ncoordination. 
Another way is to plan the trajectories for UAVs \\cite{Liu(2019)},\\cite{Capt},\\cite{Ingersoll(2016)}.\nHowever, planning often depends on global information and may have\nto be updated due to uncertainties in practice, which brings more\ncomplex calculations.\n\nBased on the considerations above, we propose distributed control\nfor a VTOL UAV swarm, with every UAV following the same control protocol. Distributed\ncontrol does not use global information, so the computation\nonly depends on the number of nearby UAVs \\cite{Connectedness},\\cite{Liao(2017)},\\cite{Distributed},\\cite{Zhu(2015)}.\nThis framework is applicable to dense air traffic. With the proposed\nprotocol, every UAV can pass a virtual tube freely, not in formation,\nwhile avoiding conflicts with other UAVs and keeping within the\nvirtual tube once it enters. During the process,\nUAVs with high speed will overtake slow ones. The idea used is\nsimilar to that of artificial potential field methods, chosen for their ease of use,\nwhere designed barrier functions \\cite{Safety} are taken as artificial\npotential functions. The distributed control laws use the negative\ngradient of a mixture of attractive and repulsive potential functions\nto produce vector fields that ensure passing and conflict avoidance,\nrespectively. However, it is not easy to use such an idea, for the two\nreasons given in the following. \n\\begin{itemize}\n\\item The guidance strategy for each UAV has to be designed. An easy method is\nto set a chain of waypoints for each UAV. However, UAVs may get trapped\nwhen using this method; namely, they may have zero velocity without having arrived\nat their corresponding waypoints. Consequently, in order to avoid\nsuch traps, a higher-level decision should be made to set these waypoints.\nAs indicated by \\cite{Hernandez(2011)}, the complexity of the calculation\nof undesired equilibria remains an open problem. 
An example is proposed\nin \\cite{Ingersoll(2016)}, which models the virtual tube passing problem\nas an objective optimization problem and obtains the optimal path,\nminimizing length, time or energy, by designing a suitable optimization-based\nalgorithm and objective functions. This algorithm works well for offline\npath-planning. However, if the obstacles to avoid are dynamic, the\noptimization-based algorithm consumes a lot of time updating\nglobal information and the corresponding constraints during the online\npath-planning process, which is not suitable for dense air traffic\nbecause real-time control is lost. Compared with the optimality of paths,\nsafety, real-time performance and reliability are more necessary in\npractice. \n\\item Besides this problem, a second problem is also encountered in practice,\nespecially for UAVs outdoors. In control strategies, a conflict between two agents is often defined\nas their distance being less than a safety distance.\nThe area whose distance to an agent is less than the safety distance\nis called the \\emph{safety area} of the agent. However, a conflict\ncan still happen in practice even if conflict avoidance is proved formally,\nbecause some assumptions will be violated in practice. For example,\na UAV may enter the safety area of another due to an unpredictable\ncommunication delay. On the other hand, two UAVs will most likely\nnot have a real physical collision, because the safety distance is\noften set large to account for various uncertainties, such as estimation\nerror, communication delay, and control delay. This is a big difference\nfrom some indoor robots with highly accurate position estimation\nand control. In most of the literature, if the distance between two agents is less than the\nsafety distance, then their control schemes either do not work or\neven push the agent towards the center of the safety area rather than\nout of it. 
For example, some studies have used the following\nbarrier function terms for collision avoidance, such as $1\/\\left(\\left\\Vert \\mathbf{p}_{i}-\\mathbf{p}_{j}\\right\\Vert ^{2}-R\\right)$ \\cite[p. 323]{Quan(2017)}\nor $\\ln\\left(\\left\\Vert \\mathbf{p}_{i}-\\mathbf{p}_{j}\\right\\Vert -R\\right)$\n\\cite{Panagou(2016)}, where $\\mathbf{p}_{i}$, $\\mathbf{p}_{j}$\nare two UAVs' positions, and $R>0$ is the separation distance.\nThe principle is to design a controller that keeps the barrier function\nterms bounded, so that $\\left\\Vert \\mathbf{p}_{i}-\\mathbf{p}_{j}\\right\\Vert ^{2}>R$\nif $\\left\\Vert \\mathbf{p}_{i}\\left(0\\right)-\\mathbf{p}_{j}\\left(0\\right)\\right\\Vert ^{2}>R$;\notherwise, $\\left\\Vert \\mathbf{p}_{i}-\\mathbf{p}_{j}\\right\\Vert ^{2}=R$\nwould make the barrier function term unbounded. The separation distance\nfor indoor robots is often the sum of the two robots' physical\nradii, namely $\\left\\Vert \\mathbf{p}_{i}-\\mathbf{p}_{j}\\right\\Vert <R$ implies a physical collision.\n\\end{itemize}\n\nIn this paper, each VTOL UAV is modeled as\n\\begin{equation}\n\\mathbf{\\dot{p}}_{i} =\\mathbf{v}_{i},\\quad \\mathbf{\\dot{v}}_{i} =-{l}_{i}\\left(\\mathbf{v}_{i}-\\mathbf{v}_{\\text{c},i}\\right),\\label{positionmodel_ab_con_i}\n\\end{equation}\nwhere ${l}_{i}>0,$ $\\mathbf{p}_{i}\\in{{\\mathbb{R}}^{2}}$ and $\\mathbf{v}_{i}\\in{{\\mathbb{R}}^{2}}$\nare the position and velocity of the $i$th VTOL UAV, $\\mathbf{v}_{\\text{c},i}\\in{{\\mathbb{R}}^{2}}$\nis the velocity command of the $i$th UAV, $i=1,2,\\cdots,M.$ The\ncontrol gain $l_{i}$ depends on the $i$th UAV and the semi-autonomous\nautopilot used, and can be obtained through flight experiments.\nFrom the model (\\ref{positionmodel_ab_con_i}), $\\lim_{t\\rightarrow\\infty}\\left\\Vert \\mathbf{v}_{i}\\left(t\\right)-\\mathbf{v}_{\\text{c},i}\\right\\Vert =0$\nif $\\mathbf{v}_{\\text{c},i}$ is constant. Here, the velocity command\n$\\mathbf{v}_{\\text{c},i}$ for the $i$th VTOL UAV is subject to\na saturation defined as\n\\begin{equation*}\n\\text{sat}\\left(\\mathbf{v},{v_{\\text{m},i}}\\right)\\triangleq\\left\\{ \\begin{array}{cl}\n\\mathbf{v}, & \\left\\Vert \\mathbf{v}\\right\\Vert \\leq{v_{\\text{m},i}}\\\\\n{v_{\\text{m},i}}\\frac{\\mathbf{v}}{\\left\\Vert \\mathbf{v}\\right\\Vert }, & \\left\\Vert \\mathbf{v}\\right\\Vert >{v_{\\text{m},i}}\n\\end{array}\\right.\n\\end{equation*}\nwhere ${v_{\\text{m},i}}>0$ is the maximum\nspeed of the $i$th VTOL UAV, $i=1,2,\\cdots,M$, $\\mathbf{v}\\triangleq\\lbrack{{v}_{1}}\\ {{v}_{2}}\\rbrack{^{\\text{T}}}\\in{{\\mathbb{R}}^{2}}$. 
The saturation function\n$\\text{sat}\\left(\\mathbf{v},{v_{\\text{m},i}}\\right)$ and the vector\n$\\mathbf{v}$ are always parallel, so the flying\ndirection is kept the same even if $\\left\\Vert \\mathbf{v}\\right\\Vert >{v_{\\text{m},i}}$\n\\cite{Quan(2017)}. The saturation function can be rewritten as \n\\begin{equation}\n\\text{sat}\\left(\\mathbf{v},{v_{\\text{m},i}}\\right)={{\\kappa}_{{v_{\\text{m},i}}}}\\left(\\mathbf{v}\\right)\\mathbf{v}\\label{sat0}\n\\end{equation}\nwhere \n\\[\n{{\\kappa}_{{v_{\\text{m},i}}}}\\left(\\mathbf{v}\\right)\\triangleq\\left\\{ \\begin{array}{cl}\n1, & \\left\\Vert \\mathbf{v}\\right\\Vert \\leq{v_{\\text{m},i}}\\\\\n\\frac{{v_{\\text{m},i}}}{\\left\\Vert \\mathbf{v}\\right\\Vert }, & \\left\\Vert \\mathbf{v}\\right\\Vert >{v_{\\text{m},i}}\n\\end{array}\\right..\n\\]\nIt is obvious that $0<{{\\kappa}_{{v_{\\text{m},i}}}}\\left(\\mathbf{v}\\right)\\leq1$.\nSometimes, ${{\\kappa}_{{v_{\\text{m},i}}}}\\left(\\mathbf{v}\\right)$\nwill be written as ${{\\kappa}_{{v_{\\text{m},i}}}}$ for short. According\nto this,\n\\begin{equation}\n\\mathbf{v}^{\\text{T}}\\text{sat}\\left(\\mathbf{v},{v_{\\text{m},i}}\\right)=0\\label{saturation1}\n\\end{equation}\nif and only if $\\mathbf{v}=\\mathbf{0}$.\n\n\\textbf{Remark 1. }It is well-known that a typical multicopter is\na physical system with underactuated dynamics \\cite[pp.126-130]{Quan(2017)}.\nHowever, many organizations and companies have designed open source\nsemi-autonomous autopilots or offer semi-autonomous autopilots with\nsoftware development kits. These semi-autonomous autopilots can be used\nfor the velocity control of VTOL UAVs. For example, the A3 autopilot released\nby DJI accepts horizontal velocity commands in the range $-10$ m\/s to $10$ m\/s\n\\cite{A3}. With such an autopilot, the velocity of a VTOL UAV can\ntrack a given velocity command in a reasonable time. 
Not only does\nthis avoid the trouble of modifying the low-level source code of autopilots,\nbut it also allows commercial autopilots to be utilized for various\ntasks. Therefore, the dynamics (\\ref{positionmodel_ab_con_i}) is practical,\nespecially for higher-level control.\n\n\\subsubsection{Filtered Position Model}\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics[scale=0.9]{Intuitiveinterpretation} \n\\par\\end{centering}\n\\caption{Intuitive interpretation for filtered position}\n\\label{Intuitive} \n\\end{figure}\n\nIn this section, the motion of each VTOL UAV is transformed into a\nsingle-integrator form to simplify the controller design and analysis.\nAs shown in Figure \\ref{Intuitive}, although the position distances\nin the three cases are the same, namely a marginal avoidance distance,\nthe case in Figure \\ref{Intuitive}(b) needs to carry out avoidance\nurgently once the velocity is taken into account, whereas the case in Figure\n\\ref{Intuitive}(a) in fact does not need to perform\ncollision avoidance at all. With such an intuition, a filtered position is\ndefined as follows: \n\\begin{equation}\n\\boldsymbol{\\xi}_{i}\\triangleq{\\mathbf{p}}_{i}+\\frac{1}{{l}_{i}}\\mathbf{v}_{i}.\\label{FilteredPosition}\n\\end{equation}\nThen \n\\begin{align}\n\\boldsymbol{\\dot{\\xi}}_{i} & =\\mathbf{\\dot{p}}_{i}+\\frac{1}{{l}_{i}}\\mathbf{\\dot{v}}_{i}\\nonumber \\\\\n & =\\mathbf{v}_{i}-\\frac{1}{{l}_{i}}{l}_{i}\\left(\\mathbf{v}_{i}-\\mathbf{v}_{\\text{c},i}\\right)\\nonumber \\\\\n & =\\mathbf{v}_{\\text{c},i}\\label{filteredposdyn}\n\\end{align}\nwhere $i=1,2,\\cdots,M$. Let \n\\begin{equation}\nr_{\\text{v}}=\\max_{i}\\frac{v_{\\text{m},i}}{l_{i}}.\\label{rv}\n\\end{equation}\n\nIn the following, a relationship between the position error and the\nfiltered position error is shown.\n\n\\textbf{Proposition 1}. 
Given any $r>0,$ for the $i$th and $j$th\nVTOL UAVs, if $\\left\\Vert \\mathbf{v}_{i}\\left(0\\right)\\right\\Vert \\leq{v_{\\text{m},i}}$\nand the filtered position error satisfies $\\left\\Vert \\boldsymbol{\\xi}_{i}\\left(t\\right)-\\boldsymbol{\\xi}_{j}\\left(t\\right)\\right\\Vert \\geq r+2r_{\\text{v}},$\nthen \n\\[\n\\left\\Vert \\mathbf{p}_{i}\\left(t\\right)-{{\\mathbf{p}}_{j}}\\left(t\\right)\\right\\Vert \\geq r\n\\]\nfor all $t\\geq0,$ where $i,j=1,2,\\cdots,M,$ $i\\neq j$.\n\n\\emph{Proof}. See \\emph{Appendix}. $\\square$\n\n\\subsection{Three Types of Areas around a UAV}\n\nIn light of \\cite{Connectedness}, three types of areas used for control,\nnamely the safety area, avoidance area, and detection area, are defined.\nUnlike \\cite{Connectedness}, these areas are suited to UAVs, and the\nvelocity is further taken into account.\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics{Threeaera} \n\\par\\end{centering}\n\\caption{Safety area, avoidance area and detection area of a UAV.}\n\\label{Threeaera} \n\\end{figure}\n\n\n\\subsubsection{Safety Area}\n\nIn order to avoid a conflict, as shown in Figure \\ref{Threeaera},\nthe \\emph{safety area} of a UAV is defined\nas \n\\begin{equation}\n\\mathcal{S}_{i}=\\left\\{ \\mathbf{x}\\in{{\\mathbb{R}}^{2}}\\left\\vert \\left\\Vert \\mathbf{x}-\\boldsymbol{\\xi}_{i}\\right\\Vert \\leq r_{\\text{s}}\\right.\\right\\} \\label{Safetyaera}\n\\end{equation}\nwhere $r_{\\text{s}}>0$ is the \\emph{safety radius}, $i=1,2,\\cdots,M.$ It\nshould be noted that we consider the velocity of the $i$th UAV in\nthe definition of $\\mathcal{S}_{i}$. 
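The intuition behind \\textit{Proposition 1} is that $\\left\\Vert \\mathbf{p}_{i}-\\boldsymbol{\\xi}_{i}\\right\\Vert =\\left\\Vert \\mathbf{v}_{i}\\right\\Vert \/l_{i}\\leq r_{\\text{v}}$ whenever $\\left\\Vert \\mathbf{v}_{i}\\right\\Vert \\leq v_{\\text{m},i}$, so the triangle inequality transfers a separation of $r+2r_{\\text{v}}$ in filtered positions to a separation of $r$ in true positions. A minimal numerical illustration (all numbers are assumed for illustration):

```python
import numpy as np

def filtered_position(p, v, l):
    # xi = p + v / l, as in the filtered-position definition
    return np.asarray(p, float) + np.asarray(v, float) / l

# Two UAVs with the same (illustrative) gain and speed limit.
l, v_m = 2.0, 3.0
r_v = v_m / l                      # r_v = max_i v_m,i / l_i
p1, v1 = np.array([0.0, 0.0]), np.array([ 3.0, 0.0])
p2, v2 = np.array([9.0, 0.0]), np.array([-3.0, 0.0])
xi1 = filtered_position(p1, v1, l)
xi2 = filtered_position(p2, v2, l)

r = 3.0
# Since ||p_i - xi_i|| = ||v_i|| / l <= r_v, the triangle inequality
# gives ||p1 - p2|| >= ||xi1 - xi2|| - 2 * r_v.
assert np.linalg.norm(xi1 - xi2) >= r + 2 * r_v
assert np.linalg.norm(p1 - p2) >= r
```

Here the filtered separation is exactly $r+2r_{\\text{v}}$, while the true separation is larger, consistent with the proposition.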
For all UAVs, the absence of \\emph{conflict}\nwith each other implies \n\\[\n\\mathcal{S}_{i}\\cap\\mathcal{S}_{j}=\\varnothing\n\\]\nnamely \n\\begin{equation}\n\\left\\Vert \\boldsymbol{\\xi}_{j}-\\boldsymbol{\\xi}_{i}\\right\\Vert >2r_{\\text{s}}.\\label{sdis}\n\\end{equation}\n\\textit{Proposition 1} implies that two VTOL UAVs will be sufficiently\nseparated if (\\ref{sdis}) is satisfied with a safety radius\\emph{\n}$r_{\\text{s}}$ large enough.\n\n\\subsubsection{Avoidance Area}\n\nBesides the safety area, there exists an \\emph{avoidance area} used\nfor starting avoidance control. If another UAV is outside the avoidance\narea of the $i$th UAV, then it does not need to be avoided.\nFor the $i$th UAV, the \\emph{avoidance area }for other UAVs is\\emph{\n}defined as \n\\begin{equation}\n\\mathcal{A}_{i}=\\left\\{ \\mathbf{x}\\in{{\\mathbb{R}}^{2}}\\left\\vert \\left\\Vert \\mathbf{x}-\\boldsymbol{\\xi}_{i}\\right\\Vert \\leq r_{\\text{a}}\\right.\\right\\} \\label{Avoidancearea}\n\\end{equation}\nwhere $r_{\\text{a}}>0$ is the \\emph{avoidance radius}, $i=1,2,\\cdots,M.$\nIt should be noted that we consider the velocity of the $i$th UAV\nin the definition of $\\mathcal{A}_{i}$. If \n\\[\n\\mathcal{A}_{i}\\cap\\mathcal{S}_{j}\\neq\\varnothing,\n\\]\nnamely \n\\[\n\\left\\Vert \\boldsymbol{\\xi}_{i}-\\boldsymbol{\\xi}_{j}\\right\\Vert \\leq r_{\\text{a}}+r_{\\text{s}}\n\\]\nthen the $j$th UAV should be avoided by the $i$th UAV. Since \n\\[\n\\mathcal{A}_{i}\\cap\\mathcal{S}_{j}\\neq\\varnothing\\Leftrightarrow\\mathcal{A}_{j}\\cap\\mathcal{S}_{i}\\neq\\varnothing\n\\]\naccording to the definition of $\\mathcal{A}_{i},$ the $i$th UAV\nshould be avoided by the $j$th UAV at the same time. When the $j$th\nUAV first enters the avoidance area of the $i$th UAV, it is required\nthat the two UAVs are not yet in conflict. 
Therefore, we require \n\\[\nr_{\\text{a}}>r_{\\text{s}}.\n\\]\n\n\n\\subsubsection{Detection Area}\n\nBy cameras, radars, or Vehicle-to-Vehicle (V2V) communication, the\nUAVs can receive the positions and velocities of their neighboring\nUAVs. The \\emph{detection area }depends only\\emph{ }on the detection\nrange of the devices used, and is thus related only to the UAV's position.\nFor the $i$th UAV, this area is defined as \n\\begin{equation}\n\\mathcal{D}_{i}=\\left\\{ \\mathbf{x}\\in\\mathbb{R}^{2}\\left\\vert \\left\\Vert \\mathbf{x}-\\mathbf{p}_{i}\\right\\Vert \\leq r_{\\text{d}}\\right.\\right\\} \\label{DetectionArea}\n\\end{equation}\nwhere $r_{\\text{d}}>0$ is the \\emph{detection radius}, $i=1,2,\\cdots,M$.\nWhen another UAV is within this area, it can be detected.\n\n\\textbf{Proposition 2}. Suppose $r_{\\text{d}}>r_{\\text{s}}+r_{\\text{a}}+2r_{\\text{v}}{.}$\nThen, for any $i\\neq j,$ if $\\mathcal{A}_{i}\\cap\\mathcal{S}_{j}\\neq\\varnothing,$\nthen $\\mathbf{p}_{j}\\in\\mathcal{D}_{i},$ $i,j=1,2,\\cdots,M.$\n\n\\textit{Proof}. The proof is similar to that of \\textit{Proposition 1}. $\\square$\n\nTo simplify the following problems, we make the following assumption\nfor all VTOL UAVs.\n\n\\textbf{Assumption 1}. 
The radius of the detection area satisfies\n$r_{\\text{d}}>r_{\\text{s}}+r_{\\text{a}}+2r_{\\text{v}}{.}$\n\nAccording to \\textit{Assumption 1}, for the $i$th UAV, any\nother UAV entering its avoidance area can be detected by the\n$i$th UAV and will not conflict with the $i$th UAV initially, $i=1,2,\\cdots,M{.}$\n\n\\subsection{Virtual Tube Passing Problem Formulation}\n\nIn a horizontal plane, as shown in Figure \\ref{airwaytunnel}, a virtual\ntube (analogous to an \\emph{airway} or a \\emph{highway} on the ground)\nhere is a long horizontal band with width $2r_{\\text{t}}$ and a\ncenterline starting from $\\mathbf{p}_{\\text{t,1}}\\in\\mathbb{R}^{2}$\nand ending at $\\mathbf{p}_{\\text{t,2}}\\in\\mathbb{R}^{2},$ where $r_{\\text{t}}>L{{r}_{\\text{a}}},$\nin which $L\\in\\mathbb{Z}_{+}$ is the number of lanes allowed for UAVs in the virtual tube. \n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics{airwaytunnel} \n\\par\\end{centering}\n\\caption{Airspace and virtual tube.}\n\\label{airwaytunnel} \n\\end{figure}\n\nDefine \n\\begin{align}\n\\mathbf{A}_{\\text{t,12}}\\left(\\mathbf{p}_{\\text{t,1}},\\mathbf{p}_{\\text{t,2}}\\right) & \\triangleq\\mathbf{I}_{2}-\\frac{\\left(\\mathbf{p}_{\\text{t,1}}-\\mathbf{p}_{\\text{t,2}}\\right)\\left(\\mathbf{p}_{\\text{t,1}}-\\mathbf{p}_{\\text{t,2}}\\right){}^{\\text{T}}}{\\left\\Vert \\mathbf{p}_{\\text{t,1}}-\\mathbf{p}_{\\text{t,2}}\\right\\Vert ^{2}}\\nonumber \\\\\n\\mathbf{A}_{\\text{t,23}}\\left(\\mathbf{p}_{\\text{t,2}},\\mathbf{p}_{\\text{t,3}}\\right) & \\triangleq\\mathbf{I}{}_{2}-\\frac{\\left(\\mathbf{p}_{\\text{t,2}}-\\mathbf{p}_{\\text{t,3}}\\right)\\left(\\mathbf{p}_{\\text{t,2}}-\\mathbf{p}_{\\text{t,3}}\\right){}^{\\text{T}}}{\\left\\Vert \\mathbf{p}_{\\text{t,2}}-\\mathbf{p}_{\\text{t,3}}\\right\\Vert ^{2}}.\\label{define}\n\\end{align}\nHere, the matrices 
$\\mathbf{A}_{\\text{t,12}}=\\mathbf{A}_{\\text{t,12}}^{\\text{T}}$ and\n$\\mathbf{A}_{\\text{t,23}}=\\mathbf{A}_{\\text{t,23}}^{\\text{T}}\\in{\\mathbb{R}}^{2\\times2}$\nare positive semi-definite. According to the projection operator\n\\cite[p. 480]{Projection}, the value $\\left\\Vert \\mathbf{A}_{\\text{t,12}}\\left(\\mathbf{p}-{{\\mathbf{p}}_{\\text{t,1}}}\\right)\\right\\Vert $\nis the distance from $\\mathbf{p}\\in\\mathbb{R}^{2}$ to the straight line $\\overline{{{\\mathbf{p}}_{\\text{t,1}}{\\mathbf{p}}_{\\text{t,2}}}}$,\nas shown in Figure \\ref{Projection}. In particular, the equation $\\left\\Vert \\mathbf{A}_{\\text{t,12}}\\left(\\mathbf{p}-{{\\mathbf{p}}_{\\text{t,1}}}\\right)\\right\\Vert =0$\nimplies that $\\mathbf{p}$ is on the straight line $\\overline{{{\\mathbf{p}}_{\\text{t,1}}{\\mathbf{p}}_{\\text{t,2}}}}$.\nSimilarly, the value $\\left\\Vert \\mathbf{A}_{\\text{t,23}}\\left(\\mathbf{p}-{{\\mathbf{p}}_{\\text{t,2}}}\\right)\\right\\Vert $\nis the distance from $\\mathbf{p}$ to the finishing line $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$.\nDefine the position errors as \n\\begin{align*}\n\\mathbf{\\tilde{p}}{_{\\text{l,}i}} & \\triangleq\\mathbf{A}_{\\text{t,23}}\\left(\\mathbf{p}_{i}-{{\\mathbf{p}}_{\\text{t,2}}}\\right)\\\\\n\\mathbf{\\tilde{p}}{_{\\text{m,}ij}} & \\triangleq\\mathbf{p}_{i}-{{\\mathbf{p}}_{j}}\\\\\n\\mathbf{\\tilde{p}}{_{\\text{t,}i}} & \\triangleq\\mathbf{A}_{\\text{t,12}}\\left(\\mathbf{p}_{i}-{{\\mathbf{p}}_{\\text{t,2}}}\\right)\n\\end{align*}\nand the filtered position errors as \n\\begin{align*}\n\\boldsymbol{\\tilde{\\xi}}{}_{\\text{l,}i} & \\triangleq\\mathbf{A}_{\\text{t,23}}\\left(\\boldsymbol{\\xi}_{i}-\\mathbf{p}_{\\text{t,2}}\\right)\\\\\n\\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij} & \\triangleq\\boldsymbol{\\xi}_{i}-\\boldsymbol{\\xi}{}_{j}\\\\\n\\boldsymbol{\\tilde{\\xi}}{}_{\\text{t,}i} & 
\\triangleq\\mathbf{A}_{\\text{t,12}}\\left(\\boldsymbol{\\xi}_{i}-\\mathbf{p}_{\\text{t,2}}\\right)\n\\end{align*}\nwhere $i,j=1,2,\\cdots,M.$ With the definitions above, according to\n(\\ref{filteredposdyn}), the derivatives of the filtered errors\nare \n\\begin{align}\n\\boldsymbol{\\dot{\\tilde{\\xi}}}{_{\\text{l,}i}} & =\\mathbf{A}_{\\text{t,23}}\\mathbf{v}_{\\text{c},i}\\label{lmodel}\\\\\n\\boldsymbol{\\dot{\\tilde{\\xi}}}{_{\\text{m,}ij}} & =\\mathbf{v}_{\\text{c},i}-\\mathbf{v}_{\\text{c},j}\\label{mmodel}\\\\\n\\boldsymbol{\\dot{\\tilde{\\xi}}}{}_{\\text{t,}i} & =\\mathbf{A}_{\\text{t,12}}\\mathbf{v}_{\\text{c},i}\\label{tmodel}\n\\end{align}\nwhere $i,j=1,2,\\cdots,M.$\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics{Projection} \n\\par\\end{centering}\n\\caption{Diagram of the projective operator.}\n\\label{Projection} \n\\end{figure}\n\nWith the description above, the following assumptions are proposed.\n\n\\textbf{Assumption 2}. As shown in Figure \\ref{airwaytunnel}, the\ninitial conditions $\\mathbf{p}_{i}\\left(0\\right),\\boldsymbol{\\xi}_{i}\\left(0\\right),$\n$i=1,2,\\cdots,M$ are all within the virtual tube or its extension,\nnamely \n\\begin{align*}\n\\left(\\frac{{{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}}{\\left\\Vert {{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right\\Vert }\\right)^{\\text{T}}\\left(\\mathbf{p}_{i}\\left(0\\right)-{{\\mathbf{p}}_{\\text{t,2}}}\\right) & <0\\\\\n\\left(\\frac{{{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}}{\\left\\Vert {{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right\\Vert }\\right)^{\\text{T}}\\left(\\boldsymbol{\\xi}_{i}\\left(0\\right)-{{\\mathbf{p}}_{\\text{t,2}}}\\right) & <0\n\\end{align*}\nwhere $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$\nis perpendicular to $\\overline{{{\\mathbf{p}}_{\\text{t,1}}{\\mathbf{p}}_{\\text{t,2}}}}$\nwith $\\left\\Vert 
{{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,3}}}\\right\\Vert =r_{\\text{t}}.$\n\n\\textbf{Assumption 2}$^{\\prime}$. As shown in Figure \\ref{airwaytunnel},\nthe initial conditions $\\boldsymbol{\\xi}_{i}\\left(0\\right)$ are not\nall within the virtual tube or its extension, but are all located to the left\nof the finishing line $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$.\n\n\\textbf{Assumption 3}. The UAVs' initial conditions satisfy \n\\[\n\\left\\Vert \\boldsymbol{\\xi}_{i}\\left(0\\right)-\\boldsymbol{\\xi}_{j}\\left(0\\right)\\right\\Vert >2r_{\\text{s}},i\\neq j\n\\]\nand $\\left\\Vert \\mathbf{v}_{i}\\left(0\\right)\\right\\Vert \\leq{v_{\\text{m},i}},$\nwhere $i,j=1,2,\\cdots,M.$\n\n\\textbf{Assumption 4}. Once a UAV arrives near the finishing line\n$\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$,\nit will quit the virtual tube so as not to affect the UAVs behind.\nMathematically, given ${\\epsilon}_{\\text{0}}\\in\\mathbb{R}_{+},$ a UAV arrives near the finishing line $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$\nif \n\\begin{equation}\n{\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right){^{\\text{T}}}}\\mathbf{A}_{\\text{t,23}}\\left(\\mathbf{p}_{i}-{{\\mathbf{p}}_{\\text{t,2}}}\\right)\\geq-{\\epsilon}_{\\text{0}}.\\label{arrivialairway}\n\\end{equation}\n\n\\textbf{Neighboring Set}. 
Let the set $\\mathcal{N}_{\\text{m},i}$\nbe the collection of all mark numbers of the other VTOL UAVs whose safety\nareas enter into the avoidance\\emph{ }area of the $i$th UAV, namely\n\\[\n\\mathcal{N}_{\\text{m},i}=\\left\\{ \\left.j\\right\\vert \\mathcal{S}_{j}\\cap\\mathcal{A}_{i}\\neq\\varnothing,j=1,\\cdots,M,i\\neq j\\right\\} .\n\\]\nFor example, if the safety areas of the $1$st and $2$nd VTOL UAVs enter\ninto the avoidance\\emph{ }area of the $3$rd UAV, then $\\mathcal{N}_{\\text{m},3}=\\left\\{ 1,2\\right\\} $.\nBased on the assumptions and definitions above, two types\nof \\emph{virtual tube passing problems} are stated in the following. \n\\begin{itemize}\n\\item \\textbf{Basic virtual tube passing problem}.\\emph{ }Under \\textit{Assumptions\n1-4}, design the velocity input $\\mathbf{v}_{\\text{c},i}$ for the\n$i$th UAV with local information from $\\mathcal{N}_{\\text{m},i}$\nto guide it to fly to pass the virtual tube until it arrives near\nthe finishing line $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$,\nmeanwhile avoiding collision with other UAVs ($\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\right\\Vert >2r_{\\text{s}}$)\nand keeping within the virtual tube ($\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t,}i}}\\right\\Vert <r_{\\text{t}}-r_{\\text{s}}$),\nwhere $i,j=1,2,\\cdots,M,$ $i\\neq j$. \n\\item \\textbf{Extended virtual tube passing problem}.\\emph{ }Under \\textit{Assumptions\n1, 2}$^{\\prime}$\\textit{, 3, 4}, design the velocity input $\\mathbf{v}_{\\text{c},i}$ for the\n$i$th UAV with local information from $\\mathcal{N}_{\\text{m},i}$\nto guide it to fly to pass the virtual tube until it arrives near\nthe finishing line $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$,\nmeanwhile avoiding collision with other UAVs ($\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\right\\Vert >2r_{\\text{s}}$)\nand keeping within the virtual tube when passing it ($\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t,}i}}\\right\\Vert <r_{\\text{t}}-r_{\\text{s}}$),\nwhere $i,j=1,2,\\cdots,M,$ $i\\neq j$. \n\\end{itemize}\nAs shown in Figure \\ref{Remark}, if $\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right)^{\\text{T}}\\left(\\mathbf{p}_{i}-{{\\mathbf{p}}_{\\text{t,2}}}\\right)>{0,}$\nthen $\\mathbf{p}{_{i}}$ lies on the right side of the finishing line\n$\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$;\nif $\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right)^{\\text{T}}\\left(\\mathbf{p}_{i}-{{\\mathbf{p}}_{\\text{t,2}}}\\right){<0,}$\nthen $\\mathbf{p}{_{i}}$ lies on the left side of the finishing line\n$\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$.\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics{Remark} 
\n\\par\\end{centering}\n\\caption{Position relative to the finishing line.}\n\\label{Remark} \n\\end{figure}\n\n\\textbf{Remark 2}. Under \\textit{Assumption 2},\\textbf{ }all\\textbf{\n}UAVs are within the virtual tube (like \\textit{Place 0 }in Figure\n\\ref{airwaytunnel}) or its extension (like \\textit{Place 1 }in Figure\n\\ref{airwaytunnel}). Under \\textit{Assumption 2}$^{\\prime}$, the\nUAVs are not all within the virtual tube or its extension. This implies\nthat the UAVs may be located anywhere, for example at the\nplaces \\textit{Place 0}, ..., \\textit{Place 5} shown in Figure\n\\ref{airwaytunnel}.\n\n\\section{Preliminaries}\n\n\\subsection{Line Integral Lyapunov Function}\n\nIn the following, we will design a new type of Lyapunov function,\ncalled the \\emph{line integral Lyapunov function. }This type of Lyapunov\nfunction is inspired by its scalar form \\cite[p. 74]{Slotine(1991)}:\nif $xf\\left(x\\right)>0\\ $for $x\\neq0,$ then $V_{\\text{li}}^{\\prime}\\left(y\\right)=\\int_{0}^{y}f\\left(x\\right)$d$x>0$\nwhen $y\\neq0,$ and the derivative is $\\dot{V}_{\\text{li}}^{\\prime}=f\\left(y\\right)\\dot{y}.$\nA line integral Lyapunov function for vectors is defined as \n\\begin{equation}\nV_{\\text{li}}\\left(\\mathbf{y}\\right)=\\int_{C_{\\mathbf{y}}}\\text{sat}\\left(\\mathbf{x},a\\right)^{\\text{T}}\\text{d}\\mathbf{x}\\label{Vli0}\n\\end{equation}\nwhere $a>0,$ $\\mathbf{x}\\in\\mathbb{R}^{n},$ and $C_{\\mathbf{y}}$ is a line\nfrom $\\mathbf{0}$ to $\\mathbf{y}\\in\\mathbb{R}^{n}.$ In the following lemma, we will show its properties.\n\n\\textbf{Lemma 1}. Suppose that the line integral Lyapunov function\n$V_{\\text{li}}$ is defined as (\\ref{Vli0}). 
Then (i) $V_{\\text{li}}\\left(\\mathbf{y}\\right)>0$\nif $\\left\\Vert \\mathbf{y}\\right\\Vert \\neq0$; (ii) if $\\left\\Vert \\mathbf{y}\\right\\Vert \\rightarrow\\infty,$\nthen $V_{\\text{li}}\\left(\\mathbf{y}\\right)\\rightarrow\\infty;$ (iii)\nif $V_{\\text{li}}\\left(\\mathbf{y}\\right)$ is bounded, then $\\left\\Vert \\mathbf{y}\\right\\Vert $\nis bounded.\n\n\\textit{Proof}. Since \n\\[\n\\text{sa{t}}\\left(\\mathbf{x},{a}\\right)={{\\kappa}_{{a}}}\\left(\\mathbf{x}\\right)\\mathbf{x}\n\\]\nthe function (\\ref{Vli0}) can be written as \n\\begin{equation}\nV_{\\text{li}}\\left(\\mathbf{y}\\right)=\\int_{C_{\\mathbf{y}}}{{\\kappa}_{{a}}}\\left(\\mathbf{x}\\right)\\mathbf{x}^{\\text{T}}\\text{d}\\mathbf{x}\\label{Vli10}\n\\end{equation}\nwhere \n\\[\n{{\\kappa}_{{a}}}\\left(\\mathbf{x}\\right)\\triangleq\\left\\{ \\begin{array}{c}\n1,\\\\\n\\frac{{a}}{\\left\\Vert \\mathbf{x}\\right\\Vert },\n\\end{array}\\begin{array}{c}\n\\left\\Vert \\mathbf{x}\\right\\Vert \\leq{a}\\\\\n\\left\\Vert \\mathbf{x}\\right\\Vert >{a}\n\\end{array}\\right..\n\\]\nLet $z=\\left\\Vert \\mathbf{x}\\right\\Vert .$ Then the function (\\ref{Vli10})\nbecomes \n\\begin{align*}\nV_{\\text{li}}\\left(\\mathbf{y}\\right) & =\\int_{C_{\\mathbf{y}}}\\frac{{{\\kappa}_{{a}}}\\left(\\mathbf{x}\\right)}{2}\\text{d}z^{2}\\\\\n & =\\int_{0}^{\\left\\Vert \\mathbf{y}\\right\\Vert }{{\\kappa}_{{a}}}\\left(\\mathbf{x}\\right)z\\text{d}z.\n\\end{align*}\n\n\\begin{itemize}\n\\item If $\\left\\Vert \\mathbf{y}\\right\\Vert \\leq{a,}$ then ${{\\kappa}_{{a}}}\\left(\\mathbf{x}\\right)=1.$\nConsequently, \n\\begin{equation}\nV_{\\text{li}}\\left(\\mathbf{y}\\right)=\\frac{1}{2}\\left\\Vert \\mathbf{y}\\right\\Vert ^{2}.\\label{Vli2}\n\\end{equation}\n\\item If $\\left\\Vert \\mathbf{y}\\right\\Vert >{a,}$ then \n\\[\n\\int_{0}^{\\left\\Vert \\mathbf{y}\\right\\Vert }{{\\kappa}_{{a}}}\\left(\\mathbf{x}\\right)z\\text{d}z=\\int_{0}^{a}z\\text{d}z+\\int_{a}^{\\left\\Vert \\mathbf{y}\\right\\Vert 
}\\frac{{a}}{\\left\\Vert \\mathbf{x}\\right\\Vert }z\\text{d}z.\n\\]\nSince $z=\\left\\Vert \\mathbf{x}\\right\\Vert ,$ we have \n\\begin{equation}\nV_{\\text{li}}\\left(\\mathbf{y}\\right)\\geq\\frac{1}{2}a^{2}+{a}\\left(\\left\\Vert \\mathbf{y}\\right\\Vert -a\\right).\\label{Vli3}\n\\end{equation}\n\\end{itemize}\nTherefore, from the form of (\\ref{Vli2}) and (\\ref{Vli3}), we have\n(i) $V_{\\text{li}}\\left(\\mathbf{y}\\right)>0$ if $\\left\\Vert \\mathbf{y}\\right\\Vert \\neq0$.\n(ii) if $\\left\\Vert \\mathbf{y}\\right\\Vert \\rightarrow\\infty,$ then\n$V_{\\text{li}}\\left(\\mathbf{y}\\right)\\rightarrow\\infty;$ (iii) if\n$V_{\\text{li}}\\left(\\mathbf{y}\\right)$ is bounded, then $\\left\\Vert \\mathbf{y}\\right\\Vert $\nis bounded. $\\square$\n\n\\subsection{Two Smooth Functions}\n\nTwo smooth functions are defined for the following Lyapunov-like function\ndesign. As shown in Figure \\ref{saturationa} (upper plot), define\na second-order differentiable `bump' function as \\cite{Panagou(2016)}\n\\begin{equation}\n\\sigma\\left(x,d_{1},d_{2}\\right)=\\left\\{ \\begin{array}{c}\n1\\\\\nAx^{3}+Bx^{2}+Cx+D\\\\\n0\n\\end{array}\\right.\\begin{array}{c}\n\\text{if}\\\\\n\\text{if}\\\\\n\\text{if}\n\\end{array}\\begin{array}{c}\nx\\leq d_{1}\\\\\nd_{1}\\leq x\\leq d_{2}\\\\\nd_{2}\\leq x\n\\end{array}\\label{zerofunction}\n\\end{equation}\nwith $A=-2\\left\/\\left(d_{1}-d_{2}\\right)^{3}\\right.,$ $B=3\\left(d_{1}+d_{2}\\right)\\left\/\\left(d_{1}-d_{2}\\right)^{3}\\right.,$\n$C=-6d_{1}d_{2}\\left\/\\left(d_{1}-d_{2}\\right)^{3}\\right.$ and $D=d_{2}^{2}\\left(3d_{1}-d_{2}\\right)\\left\/\\left(d_{1}-d_{2}\\right)^{3}\\right.$.\nThe derivative of $\\sigma\\left(x,d_{1},d_{2}\\right)$ with respect\nto $x$ is \n\\[\n\\frac{\\partial\\sigma\\left(x,d_{1},d_{2}\\right)}{\\partial x}=\\left\\{ \\begin{array}{c}\n0\\\\\n3Ax^{2}+2Bx+C\\\\\n0\n\\end{array}\\right.\\begin{array}{c}\n\\text{if}\\\\\n\\text{if}\\\\\n\\text{if}\n\\end{array}\\begin{array}{c}\nx\\leq 
d_{1}\\\\\nd_{1}\\leq x\\leq d_{2}\\\\\nd_{2}\\leq x\n\\end{array}.\n\\]\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics{saturation} \n\\par\\end{centering}\n\\caption{Two smooth functions. For a smooth saturation function, $\\theta_{\\text{s}}=67.5^{\\circ}.$}\n\\label{saturationa} \n\\end{figure}\n\nDefine another smooth function as shown in Figure \\ref{saturationa}\n(lower plot) to approximate a saturation function \n\\[\n\\bar{s}\\left(x\\right)=\\min\\left(x,1\\right),x\\geq0\n\\]\nthat \n\\begin{equation}\ns\\left(x,\\epsilon_{\\text{s}}\\right)=\\left\\{ \\begin{array}{c}\nx\\\\\n\\left(1-\\epsilon_{\\text{s}}\\right)+\\sqrt{\\epsilon_{\\text{s}}^{2}-\\left(x-x_{2}\\right)^{2}}\\\\\n1\n\\end{array}\\right.\\begin{array}{c}\n0\\leq x\\leq x_{1}\\\\\nx_{1}\\leq x\\leq x_{2}\\\\\nx_{2}\\leq x\n\\end{array}\\label{sat}\n\\end{equation}\nwith $x_{2}=1+\\frac{1}{\\tan67.5^{\\circ}}\\epsilon_{\\text{s}}$ and\n$x_{1}=x_{2}-\\sin45^{\\circ}\\epsilon_{\\text{s}}.$ Since it is required\n$x_{1}\\geq0$, one has $\\epsilon_{\\text{s}}\\leq\\frac{\\tan67.5^{\\circ}}{\\tan67.5^{\\circ}\\sin45^{\\circ}-1}.$\nFor any $\\epsilon_{\\text{s}}\\in\\left[0,\\frac{\\tan67.5^{\\circ}}{\\tan67.5^{\\circ}\\sin45^{\\circ}-1}\\right],$\nit is easy to see \n\\begin{equation}\ns\\left(x,\\epsilon_{\\text{s}}\\right)\\leq\\bar{s}\\left(x\\right)\\label{satinequ}\n\\end{equation}\nand \n\\begin{equation}\n\\lim_{\\epsilon_{\\text{s}}\\rightarrow0}\\underset{x\\geq0}{\\sup}\\left\\vert \\bar{s}\\left(x\\right)-s\\left(x,\\epsilon_{\\text{s}}\\right)\\right\\vert =0.\\label{sata}\n\\end{equation}\nThe derivative of $s\\left(x,\\epsilon_{\\text{s}}\\right)$ with respect\nto $x$ is \n\\[\n\\frac{\\partial s\\left(x,\\epsilon_{\\text{s}}\\right)}{\\partial x}=\\left\\{ \\begin{array}{c}\n1\\\\\n\\frac{x_{2}-x}{\\sqrt{\\epsilon_{\\text{s}}^{2}-\\left(x-x_{2}\\right)^{2}}}\\\\\n0\n\\end{array}\\right.\\begin{array}{c}\n0\\leq x\\leq x_{1}\\\\\nx_{1}\\leq x\\leq x_{2}\\\\\nx_{2}\\leq 
x\n\\end{array}.\n\\]\nFor any $\\epsilon_{\\text{s}}>0,$ we have $\\underset{x\\geq0}{\\sup}\\left\\vert \\partial s\\left(x,\\epsilon_{\\text{s}}\\right)\\left\/\\partial x\\right.\\right\\vert \\leq1.$\n\n\\section{Basic Virtual Tube Passing Problem}\n\nIn this section, three Lyapunov-like functions for approaching the\nfinishing line, avoiding conflict, and keeping within the virtual\ntube are established. Based on them, a controller to solve the basic\nvirtual tube passing problem is derived, and then a formal analysis\nis made.\n\n\\subsection{Lyapunov-Like Function Design and Analysis}\n\nFor the basic\\textbf{ }virtual tube passing problem, three subproblems\nneed to be solved, namely approaching the finishing line $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$,\navoiding conflict with other UAVs, and keeping within the virtual\ntube. Correspondingly, three Lyapunov-like functions are proposed.\n\n\\subsubsection{Integral Lyapunov Function for Approaching Finishing Line}\n\nDefine a smooth curve $C_{\\boldsymbol{\\tilde{\\xi}}{_{\\text{l,}i}}}$\nfrom $\\mathbf{0}$ to $\\boldsymbol{\\tilde{\\xi}}{_{\\text{l,}i}}$.\nThen, the line integral of $\\text{sat}\\left(\\mathbf{x},{v_{\\text{m},i}}\\right)$\nalong $C_{\\boldsymbol{\\tilde{\\xi}}{_{\\text{l,}i}}}$ is \n\\begin{equation}\nV_{\\text{l},i}=\\int_{C_{\\boldsymbol{\\tilde{\\xi}}{_{\\text{l,}i}}}}\\text{sat}\\left(k_{1}\\mathbf{x},{v_{\\text{m},i}}\\right)^{\\text{T}}\\text{d}\\mathbf{x}\\label{Vli}\n\\end{equation}\nwhere $k_{1}>0$ is an adjustable parameter, $i=1,2,\\cdots,M$. From\nthe definition, $V_{\\text{l},i}\\geq0.$ According to Thomas' Calculus\n\\cite[p. 
911]{Thomas(2009)}, one has \n\\begin{equation}\nV_{\\text{l},i}=\\int_{0}^{t}\\text{sat}\\left(k_{1}\\boldsymbol{\\tilde{\\xi}}{}_{\\text{l,}i}\\left(\\tau\\right),v_{\\text{m},i}\\right)^{\\text{T}}\\boldsymbol{\\dot{\\tilde{\\xi}}}{}_{\\text{l,}i}\\left(\\tau\\right)\\text{d}\\tau\\mathbf{.}\\label{Vli1}\n\\end{equation}\nThe objective of the designed velocity command is to drive $V_{\\text{l},i}$\nto zero. This implies that $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{l,}i}}\\right\\Vert $\ngoes down to zero according to (\\ref{Vli}), namely the\n$i$th UAV approaches the finishing line $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$.\n\n\\subsubsection{Barrier Function for Avoiding Conflict with Other UAVs}\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics{Lyapunov} \n\\par\\end{centering}\n\\caption{Barrier functions for avoiding collision and keeping within the virtual\ntube.}\n\\label{Lyapunovfun} \n\\end{figure}\n\nDefine \n\\begin{equation}\nV_{\\text{m},ij}=\\frac{k_{2}\\sigma_{\\text{m}}\\left(\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij}\\right\\Vert \\right)}{\\left(1+\\epsilon_{\\text{m}}\\right)\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij}\\right\\Vert -2r_{\\text{s}}s\\left(\\frac{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij}\\right\\Vert }{2r_{\\text{s}}},\\epsilon_{\\text{s}}\\right)}.\\label{Vmij}\n\\end{equation}\nHere $\\sigma_{\\text{m}}\\left(x\\right)\\triangleq\\sigma\\left(x,2r_{\\text{s}},r_{\\text{a}}+r_{\\text{s}}\\right)$,\nwhere $\\sigma\\left(\\cdot\\right)$ is defined in (\\ref{zerofunction}).\\ When\n$r_{\\text{s}}=10,$ $r_{\\text{a}}=20,$ $\\epsilon_{\\text{m}}=10^{-6},$\n${{k}_{2}=1,}$ the function $V_{\\text{m},ij}$ is shown in Figure\n\\ref{Lyapunovfun} (upper plot), where $V_{\\text{m},ij}\\left(x\\right)=0$\nfor $x\\geq r_{\\text{s}}+r_{\\text{a}}=30$ and $V_{\\text{m},ij}\\left(x\\right)$\nincreases sharply as $x$ decreases from $x=30$ toward $0.$ 
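The shape of the barrier (\\ref{Vmij}) can be reproduced numerically from the definitions of $\\sigma$ in (\\ref{zerofunction}) and $s$ in (\\ref{sat}). The sketch below uses the parameter values quoted above, plus an assumed admissible value $\\epsilon_{\\text{s}}=0.1$:

```python
import math

def sigma(x, d1, d2):
    """Second-order differentiable 'bump' function."""
    if x <= d1:
        return 1.0
    if x >= d2:
        return 0.0
    c3 = (d1 - d2) ** 3
    A, B = -2.0 / c3, 3.0 * (d1 + d2) / c3
    C, D = -6.0 * d1 * d2 / c3, d2 ** 2 * (3.0 * d1 - d2) / c3
    return A * x**3 + B * x**2 + C * x + D

def s_smooth(x, eps_s):
    """Smooth approximation of min(x, 1) for x >= 0."""
    x2 = 1.0 + eps_s / math.tan(math.radians(67.5))
    x1 = x2 - math.sin(math.radians(45.0)) * eps_s
    if x <= x1:
        return x
    if x >= x2:
        return 1.0
    return (1.0 - eps_s) + math.sqrt(eps_s**2 - (x - x2) ** 2)

def V_m(x, r_s=10.0, r_a=20.0, eps_m=1e-6, k2=1.0, eps_s=0.1):
    """Barrier V_m,ij as a function of x = ||xi_m,ij||."""
    num = k2 * sigma(x, 2.0 * r_s, r_a + r_s)
    den = (1.0 + eps_m) * x - 2.0 * r_s * s_smooth(x / (2.0 * r_s), eps_s)
    return num / den
```

With these parameters, `V_m(x)` is exactly zero for `x >= 30`, is nonincreasing over the transition region, and is approximately `k2 / (eps_m * x)` for `x < 20`, matching (\\ref{Vmijd}).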
The function\n$V_{\\text{m},ij}$ has the following properties: \n\\begin{itemize}\n\\item Property (i). $\\partial V_{\\text{m},ij}\\left\/\\partial\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij}\\right\\Vert \\right.\\leq0,$\nsince $V_{\\text{m},ij}\\ $is a nonincreasing function of\n$\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\right\\Vert $; \n\\item Property (ii). If $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij}\\right\\Vert >r_{\\text{a}}+r_{\\text{s}}{,}$\nnamely $\\mathcal{A}_{i}\\cap\\mathcal{S}_{j}=\\varnothing$ and $\\mathcal{A}_{j}\\cap\\mathcal{S}_{i}=\\varnothing,$\nthen $V_{\\text{m},ij}=0$ and $\\partial V_{\\text{m},ij}\\left\/\\partial\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\right\\Vert \\right.=0$;\nconversely, if $V_{\\text{m},ij}=0,$ then $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\right\\Vert >r_{\\text{a}}+r_{\\text{s}}>2r_{\\text{s}};$ \n\\item Property (iii). If $0<\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij}\\right\\Vert <2r_{\\text{s}}{,}$\nnamely $\\mathcal{S}_{j}\\cap\\mathcal{S}_{i}\\neq\\varnothing$ (they\nmay not collide in practice), then there exists a sufficiently small\n$\\epsilon_{\\text{s}}>0$ such that \n\\begin{equation}\nV_{\\text{m},ij}\\approx\\frac{k_{2}}{\\epsilon_{\\text{m}}\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij}\\right\\Vert }\\geq\\frac{k_{2}}{2\\epsilon_{\\text{m}}r_{\\text{s}}}.\\label{Vmijd}\n\\end{equation}\n\\end{itemize}\nThe objective of the designed velocity command is to drive $V_{\\text{m},ij}$\nto zero or keep it as small as possible. 
According to property (ii), this\nimplies $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\right\\Vert >2r_{\\text{s}}{,}$\nnamely the $i$th UAV will not conflict with the $j$th UAV.\n\n\\subsubsection{Barrier Function for Keeping within the Virtual Tube}\n\nDefine \n\\[\nV_{\\text{t},i}=\\frac{k_{3}\\sigma_{\\text{t}}\\left(r_{\\text{t}}-\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert \\right)}{\\left(r_{\\text{t}}-r_{\\text{s}}\\right)-\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert s\\left(\\frac{r_{\\text{t}}-r_{\\text{s}}}{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert +\\epsilon_{\\text{t}}},\\epsilon_{\\text{s}}\\right)}\n\\]\nwhere $\\sigma_{\\text{t}}\\left(x\\right)\\triangleq\\sigma\\left(x,r_{\\text{s}},r_{\\text{a}}\\right)$.\\ When\n$r_{\\text{t}}=50,$ $r_{\\text{s}}=10,$ $r_{\\text{a}}=20,$ $\\epsilon_{\\text{t}}=10^{-6},$\n${{k}_{3}=1},$ the function $V_{\\text{t},i}\\left(x\\right)$ is shown\nin Figure \\ref{Lyapunovfun} (lower plot), where $V_{\\text{t},i}\\left(x\\right)=0$\nfor $x\\leq r_{\\text{t}}-r_{\\text{a}}=30$ and $V_{\\text{t},i}\\left(x\\right)$\nincreases sharply as $x$ increases from $x=30$ toward $x=40.$ The function\n$V_{\\text{t},i}$ has the following properties:\n\n(i) $\\partial V_{\\text{t},i}\\left\/\\partial\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert \\right.\\geq0,$\nsince $V_{\\text{t},i}\\ $is a nondecreasing function of\n$\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t},i}}\\right\\Vert $;\n\n(ii) if $r_{\\text{t}}-\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert \\geq r_{\\text{a}}{,}$\nnamely the edges of the virtual tube are out of the avoidance area\nof the $i$th UAV, then $\\sigma_{\\text{t}}\\left(r_{\\text{t}}-\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t},i}}\\right\\Vert \\right)=0;$\nconsequently, $V_{\\text{t},i}=0$ and $\\partial V_{\\text{t},i}\\left\/\\partial\\left\\Vert 
\\boldsymbol{\\tilde{\\xi}}{_{\\text{t},i}}\\right\\Vert \\right.=0$;\n\n(iii) if $r_{\\text{t}}-\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert <r_{\\text{s}}{,}$\nthen there exists a sufficiently small $\\epsilon_{\\text{s}}>0$ such\nthat \n\\[\ns\\left(\\frac{r_{\\text{t}}-r_{\\text{s}}}{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert +\\epsilon_{\\text{t}}},\\epsilon_{\\text{s}}\\right)\\approx\\frac{r_{\\text{t}}-r_{\\text{s}}}{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert +\\epsilon_{\\text{t}}}<1.\n\\]\nAs a result, \n\\[\nV_{\\text{t},i}\\approx\\frac{k_{3}\\left(\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert +\\epsilon_{\\text{t}}\\right)}{\\epsilon_{\\text{t}}\\left(r_{\\text{t}}-r_{\\text{s}}\\right)}\n\\]\nwhich will be very large if $\\epsilon_{\\text{t}}$ is very small.\n\nThe objective of the designed velocity command is to drive $V_{\\text{t},i}$\nto zero. This implies $r_{\\text{t}}-\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t},i}}\\right\\Vert \\geq r_{\\text{a}}{\\ }$according\nto property (ii), namely the $i$th UAV will keep within the virtual\ntube.\n\n\\subsection{Controller Design}\n\nThe velocity command is designed as \n\\begin{equation}\n\\mathbf{v}_{\\text{c},i}=\\mathbf{v}_{\\text{T},i}\\label{*}\n\\end{equation}\n\n\\begin{align}\n\\mathbf{v}_{\\text{T},i} & =-\\text{sat}\\Bigg(\\underset{\\text{Line Approaching}}{\\underbrace{\\mathbf{A}_{\\text{t,23}}\\text{sat}\\left(k_{1}\\boldsymbol{\\tilde{\\xi}}{}_{\\text{l,}i},v_{\\text{m},i}\\right)}}+\\underset{\\text{UAV Avoidance}}{\\underbrace{\\underset{j\\in\\mathcal{N}_{\\text{m},i}}{{\\displaystyle \\sum}}-b_{ij}\\boldsymbol{\\tilde{\\xi}}_{\\text{m,}ij}}}\\Bigg.\\nonumber \\\\\n & \\Bigg.+\\underset{\\text{Tunnel Keeping}}{\\underbrace{c_{i}\\mathbf{A}_{\\text{t,12}}\\boldsymbol{\\tilde{\\xi}}_{\\text{t},i}}},v_{\\text{m},i}\\Bigg)\\label{control_highway_dis}\n\\end{align}\nwith\\footnote{$b_{ij}\\geq0$ according to the property (i) of $V_{\\text{m},ij};$\n$c_{i}\\geq0$ 
according to property (i) of $V_{\\text{t},i}.$} \n\\begin{align}\nb_{ij} & =-\\frac{\\partial V_{\\text{m},ij}}{\\partial\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij}\\right\\Vert }\\frac{1}{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{m,}ij}\\right\\Vert }\\label{bij}\\\\\nc_{i} & =\\frac{\\partial V_{\\text{t},i}}{\\partial\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert }\\frac{1}{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{}_{\\text{t},i}\\right\\Vert }.\\label{ci}\n\\end{align}\nThis is a distributed control form. Unlike formation control,\na UAV does not require the IDs of its neighboring UAVs. Active detection\ndevices such as cameras or radars may only detect neighboring UAVs'\npositions and velocities but not their IDs, because these UAVs may look alike.\nThis implies that the proposed distributed control can work autonomously\nwithout communication.\n\n\\textbf{Remark 3}. Note that the velocity command (\\ref{control_highway_dis})\nis saturated, so its norm will not exceed ${v_{\\text{m},i}.}$ If a\ncase such as $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{i}}}\\right\\Vert <2r_{\\text{s}}$\nhappens in practice due to unpredictable uncertainties beyond the\nassumptions we make, this does not necessarily imply that the $i$th UAV has physically\ncollided with the $j_{i}$th UAV. 
In this case, the velocity command\n(\\ref{*}) degenerates to \n\\begin{align*}\n\\mathbf{v}_{\\text{c},i} & =\\mathbf{-}\\text{sat}\\Bigg(\\mathbf{A}_{\\text{t,23}}\\text{sat}\\left(k_{1}\\boldsymbol{\\tilde{\\xi}}{_{\\text{l,}i}},v_{\\text{m},i}\\right)-\\underset{j=1,j\\neq i,j_{i}}{\\overset{M}\n{\\displaystyle \\sum\n}}b_{ij}\\boldsymbol{\\tilde{\\xi}}_{\\text{m,}ij}\\Bigg.\\\\\n & \\Bigg.+c_{i}\\mathbf{A}_{\\text{t,12}}\\boldsymbol{\\tilde{\\xi}}_{\\text{t},i}-b_{ij_{i}}\\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{i}}},{v_{\\text{m},i}}\\Bigg)\n\\end{align*}\nwith $b_{ij_{i}}\\approx\\frac{{{k}_{2}}}{\\epsilon_{\\text{m}}}\\frac{1}{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{i}}}\\right\\Vert ^{3}}.$\nSince $\\epsilon_{\\text{m}}$ is chosen to be sufficiently small, the\nterm $b_{ij_{i}}\\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{i}}}$ will\ndominate\\footnote{Furthermore, we assume that the $i$th UAV does not conflict with\nany UAV other than the $j_{i}$th UAV, and is not very close to the edges\nof the virtual tube.} so that the velocity command $\\mathbf{v}_{\\text{c},i}$ becomes \n\\[\n\\mathbf{v}_{\\text{c},i}\\approx\\text{sat}\\left(\\frac{{{k}_{2}}}{\\epsilon_{\\text{m}}}\\frac{1}{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{i}}}\\right\\Vert ^{2}}\\frac{\\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{i}}}}{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{i}}}\\right\\Vert },{v_{\\text{m},i}}\\right).\n\\]\nThis implies that, by recalling (\\ref{mmodel}), $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{i}}}\\right\\Vert $\nwill increase very quickly, so that the $i$th UAV moves away from\nthe $j_{i}$th UAV immediately.\n\n\\textbf{Remark 4}. 
In practice, the case such as $r_{\\text{h}}-\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{h},i}}\\right\\Vert 0$\nin $b_{ij}$ and $\\epsilon_{\\text{t}}>0$ in $c_{i}$ such that $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\left(t\\right)\\right\\Vert >2r_{\\text{s}},$\n$\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t},i}}\\right\\Vert 2r_{\\text{s}},$\n$\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t},i}}\\right\\Vert 2r_{\\text{s}}$,\n$\\left\\Vert \\mathbf{\\tilde{\\mathbf{p}}}{_{\\text{t},i}}\\right\\Vert 0.$ According to \\textit{Lemma 2}, $V_{\\text{m},ij},$ $V_{\\text{t},i}>0.$\nTherefore, ${V}\\left(\\boldsymbol{\\xi}_{1},\\cdots,\\boldsymbol{\\xi}_{M}\\right)\\leq l$\nimplies $\\underset{i=1}{\\overset{M}\n{\\displaystyle \\sum\n}}V_{\\text{l},i}\\leq l.$ Furthermore, according to \\textit{Lemma 1(iii)}, $\\Omega$ is bounded.\nWhen $\\left\\Vert \\left[\\begin{array}{ccc}\n\\boldsymbol{\\xi}_{1} & \\cdots & \\boldsymbol{\\xi}_{M}\\end{array}\\right]\\right\\Vert \\rightarrow\\infty,$ then $\\underset{i=1}{\\overset{M}\n{\\displaystyle \\sum\n}}V_{\\text{l},i}\\rightarrow\\infty$ according to \\textit{Lemma 1(ii)}, namely ${V}\\rightarrow\\infty$.\nTherefore, the function $V$ satisfies the conditions that the invariant\nset theorem requires. \n\\item Secondly, we will find the largest invariant set, then show that all UAVs\ncan pass the finishing line. Now, recalling the property (\\ref{saturation1}),\n${\\dot{V}}={0}$ if and only if \n\\begin{equation}\n\\mathbf{A}_{\\text{t,23}}\\text{sat}\\left({{k}_{1}}\\boldsymbol{\\tilde{\\xi}}{_{\\text{l,}i}},{v_{\\text{m},i}}\\right)-\\underset{j=1,j\\neq i}{\\overset{M}\n{\\displaystyle \\sum\n}}b_{ij}\\boldsymbol{\\tilde{\\xi}}_{\\text{m,}ij}+c_{i}\\mathbf{A}_{\\text{t,12}}\\boldsymbol{\\tilde{\\xi}}_{\\text{t},i}=\\mathbf{0}\\label{equilibriumTh5_v}\n\\end{equation}\nwhere $i=1,\\cdots,M$. Then $\\mathbf{v}_{\\text{c},i}=\\mathbf{0}$ according\nto ({\\ref{*}}). 
Consequently, by (\\ref{positionmodel_ab_con_i}),\nthe system cannot get ``stuck''\\ at an equilibrium value other\nthan $\\mathbf{v}_{i}=\\mathbf{0}$. The equation (\\ref{equilibriumTh5_v})\ncan be further written as \n\\begin{equation}\n{{k}_{1}{\\kappa}_{v_{\\text{m},i}}}\\mathbf{A}_{\\text{t,23}}{{\\mathbf{\\tilde{p}}}_{\\text{l,}i}}-\\underset{j=1,j\\neq i}{\\overset{M}\n{\\displaystyle \\sum\n}}b_{ij}\\mathbf{\\tilde{p}}_{\\text{m,}ij}+c_{i}\\mathbf{A}_{\\text{t,12}}\\mathbf{\\tilde{\\mathbf{p}}}_{\\text{t},i}=\\mathbf{0}.\\label{equilibriumTh5}\n\\end{equation}\nLet the $1$st UAV be the one ahead, namely the closest to the finishing line $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$.\nLet us examine the following equation related to the 1st UAV:\n\\begin{equation}\n{{k}_{1}{\\kappa}_{v_{\\text{m},1}}}\\mathbf{A}_{\\text{t,23}}{{\\mathbf{\\tilde{p}}}_{\\text{l,}1}}-\\underset{j=2}{\\overset{M}\n{\\displaystyle \\sum\n}}b_{1j}\\mathbf{\\tilde{p}}_{\\text{m,}1j}+c_{1}\\mathbf{A}_{\\text{t,12}}\\mathbf{\\tilde{\\mathbf{p}}}_{\\text{t},1}=\\mathbf{0}.\\label{equilibriumTh5_1st}\n\\end{equation}\nSince the $1$st UAV is ahead, we have \n\\begin{equation}\n\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right)^{\\text{T}}\\mathbf{\\tilde{p}}_{\\text{m,}1j}\\geq0\\label{1st}\n\\end{equation}\nwhere ``$=$''\\ holds if and only if the $j$th UAV is as far ahead\nas the $1$st one. 
On the other hand, one has \n\\begin{equation}\n\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right)^{\\text{T}}\\mathbf{A}_{\\text{t,12}}=0.\\label{perpendicular}\n\\end{equation}\nMultiplying (\\ref{equilibriumTh5_1st}) on the left by $\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right){^{\\text{T}}}$\nresults in \n\\[\n{{k}_{1}{\\kappa}_{v_{\\text{m},1}}\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right){^{\\text{T}}}}\\mathbf{A}_{\\text{t,23}}{{\\mathbf{\\tilde{p}}}_{\\text{l,}1}}=\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right){^{\\text{T}}}\\underset{j=2}{\\overset{M}\n{\\displaystyle \\sum\n}}b_{1j}\\mathbf{\\tilde{p}}_{\\text{m,}1j}\n\\]\nwhere (\\ref{perpendicular}) is used. Since $b_{1j}\\geq0$ and\n(\\ref{1st}) holds for the 1st UAV, one has \n\\[\n{\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right){^{\\text{T}}}}\\mathbf{A}_{\\text{t,23}}{{\\mathbf{\\tilde{p}}}_{\\text{l,}1}\\geq0.}\n\\]\nSince ${\\left({{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right){^{\\text{T}}}}\\mathbf{A}_{\\text{t,23}}{{\\mathbf{\\tilde{p}}}_{\\text{l,}1}}\\left(0\\right)<{0}$ according\nto \\textit{Assumption 2}, owing to continuity, given ${\\epsilon}_{\\text{0}}\\in\\mathbb{R}_{+},$ there must exist a time $t_{11}\\in\\mathbb{R}_{+}$ such that \n\\[\n\\left(\\frac{{{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}}{\\left\\Vert {{\\mathbf{p}}_{\\text{t,2}}}-{{\\mathbf{p}}_{\\text{t,1}}}\\right\\Vert }\\right)^{\\text{T}}\\left(\\mathbf{p}_{1}\\left(t\\right)-{{\\mathbf{p}}_{\\text{t,2}}}\\right)\\geq-{\\epsilon}_{\\text{0}}\n\\]\nfor $t\\geq t_{11}.$ At time $t_{11},$ the $1$st UAV is removed\nfrom (\\ref{equilibriumTh5}) according to \\textit{Assumption 4},\nnamely it quits the virtual tube. The remaining problem is to consider\nthe other $M-1$ UAVs, namely the $2$nd, $3$rd, ..., $M$th UAVs. 
We can repeat\nthe analysis above to conclude this proof. $\\square$ \n\\end{itemize}\n\n\\section{Controller Design for the General virtual tube Passing Problem}\n\nSo far, we have solved the basic virtual tube passing problem.\nNext, we are going to solve the general virtual tube passing problem.\nFirst, we define different areas for the whole airspace. Then, the\ngeneral virtual tube passing problem is decomposed into several basic\nvirtual tube passing problems. As a result, UAVs in different\nareas have different controllers, similar to ({\\ref{*}}). Combining\nthem together, the final controller is obtained.\n\n\\subsection{Area Definition}\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics{modeswitching} \n\\par\\end{centering}\n\\caption{Area definition and flight sequence.}\n\\label{modeswitching} \n\\end{figure}\n\nAs shown in Figure \\ref{modeswitching}(a), the whole airspace is\ndivided into six areas, namely \\emph{Left} \\emph{Standby Area}, \\emph{Left}\n\\emph{Ready Area}, \\emph{Right} \\emph{Standby Area}, \\emph{Right}\n\\emph{Ready Area}, \\emph{virtual tube, }and\\emph{ virtual tube Extension}.\nMoreover, the Earth-fixed coordinate frame is built. For simplicity,\nlet ${{\\mathbf{p}}_{\\text{t,1}}}=\\mathbf{0}$ with the $x$-axis pointing\nto ${{\\mathbf{p}}_{\\text{t,2}}}$ and the $y$-axis pointing to its left\nside. 
\n\\begin{itemize}\n\\item \\emph{Left} \\emph{Standby Area }and\\emph{ Right} \\emph{Standby Area\n}are the areas on the outside of the \\emph{virtual tube} and on the right\nside of the \\emph{Starting Line} $\\overline{{{\\mathbf{p}}_{\\text{t,1}}{\\mathbf{p}}_{\\text{t,4}}}},$\nwhere ${{\\mathbf{p}}_{\\text{t,4}}}=[0\\;\\;r_{\\text{t}}]^{\\text{T}}.$ Concretely,\nif \n\\[\n\\boldsymbol{\\xi}_{i}\\left(1\\right)>0,\\boldsymbol{\\xi}_{i}\\left(2\\right)>r_{\\text{t}}\n\\]\nthen $\\boldsymbol{\\xi}_{i}$ is in \\emph{Left} \\emph{Standby Area.}\nIf \n\\[\n\\boldsymbol{\\xi}_{i}\\left(1\\right)>0,\\boldsymbol{\\xi}_{i}\\left(2\\right)<-r_{\\text{t}}\n\\]\nthen $\\boldsymbol{\\xi}_{i}$ is in \\emph{Right} \\emph{Standby Area.} \n\\item \\emph{Left Ready Area }and\\emph{ Right} \\emph{Ready Area }are the\nareas on the outside of the \\emph{virtual tube} \\emph{Extension}\\ and\non the left side of the \\emph{Starting Line} $\\overline{{{\\mathbf{p}}_{\\text{t,1}}{\\mathbf{p}}_{\\text{t,4}}}}$.\nConcretely, if \n\\[\n\\boldsymbol{\\xi}_{i}\\left(1\\right)\\leq0,\\boldsymbol{\\xi}_{i}\\left(2\\right)>r_{\\text{t}}\n\\]\nthen $\\boldsymbol{\\xi}_{i}$ is in \\emph{Left} \\emph{Ready Area.}\nIf \n\\[\n\\boldsymbol{\\xi}_{i}\\left(1\\right)\\leq0,\\boldsymbol{\\xi}_{i}\\left(2\\right)<-r_{\\text{t}}\n\\]\nthen $\\boldsymbol{\\xi}_{i}$ is in \\emph{Right} \\emph{Ready Area.} \n\\item \\emph{virtual tube }and\\emph{ virtual tube Extension }together form a\nband. Concretely, if \n\\begin{align*}\n\\boldsymbol{\\xi}_{i}\\left(1\\right) & \\leq0\\text{ \\& }\\boldsymbol{\\xi}_{i}\\left(1\\right)>-\\left\\Vert \\mathbf{p}_{\\text{t,1}}-\\mathbf{p}_{\\text{t,2}}\\right\\Vert \\\\\n-r_{\\text{t}} & \\leq\\boldsymbol{\\xi}_{i}\\left(2\\right)\\leq r_{\\text{t}}\n\\end{align*}\nthen $\\boldsymbol{\\xi}_{i}$ is in \\emph{virtual tube Extension.}\nIf \n\\begin{align*}\n0 & <\\boldsymbol{\\xi}_{i}\\left(1\\right)\\leq\\left\\Vert 
\\mathbf{p}_{\\text{t,1}}-\\mathbf{p}_{\\text{t,2}}\\right\\Vert \\\\\n-r_{\\text{t}} & \\leq\\boldsymbol{\\xi}_{i}\\left(2\\right)\\leq r_{\\text{t}}\n\\end{align*}\nthen $\\boldsymbol{\\xi}_{i}$ is in \\emph{virtual tube.} \n\\end{itemize}\n\n\\subsection{virtual tube Passing Scheme and Requirements}\n\nAs Assumption 2$^{\\prime}$ points out, at the beginning, UAVs may be located\nin any of the six areas\\emph{. }A flight sequence is shown in Figure\n\\ref{modeswitching}(b)\\emph{. }The requirements are as follows. \n\\begin{itemize}\n\\item From \\emph{Left\/Right Standby Area} to \\emph{Left\/Right} \\emph{Ready\nArea. }{{UAVs}} are required to fly into \\emph{Left\/Right} \\emph{Ready\nArea}, meanwhile avoiding conflict with other UAVs and keeping away\nfrom \\emph{virtual tube }and\\emph{ virtual tube Extension.} \n\\item From \\emph{Left\/Right Ready Area }to\\emph{ virtual tube Extension.\n}{{UAVs}} are required to fly into \\emph{virtual tube Extension},\nmeanwhile avoiding conflict with other UAVs and keeping away from\n\\emph{virtual tube.} \n\\item From \\emph{virtual tube and virtual tube Extension }to \\emph{Finishing\nLine }$\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}.$\n{{UAVs}} are required to pass the virtual tube until they arrive\nnear the finishing line $\\overline{{{\\mathbf{p}}_{\\text{t,2}}{\\mathbf{p}}_{\\text{t,3}}}}$,\nmeanwhile avoiding conflict with other UAVs and keeping within the virtual\ntube and its extension. 
\n\\end{itemize}\n\n\\subsection{Controller Design}\n\n\\subsubsection{From Standby Area to Ready Area}\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics{LS2R} \n\\par\\end{centering}\n\\caption{The LS2R virtual tube, designed for flying from the Left Standby Area to\nthe Left Ready Area.}\n\\label{LS2R} \n\\end{figure}\n\nAs shown in Figure \\ref{LS2R}, an auxiliary virtual tube, named the \\emph{LS2R\nvirtual tube}, is designed with width $2r_{\\text{sr}}$ and centerline\nstarting from ${\\mathbf{p}}_{\\text{sr,1}}\\in{{\\mathbb{R}}^{2}}$ to\n${\\mathbf{p}}_{\\text{sr,2}}\\in{{\\mathbb{R}}^{2},}$ where $r_{\\text{sr}}>0$\nis chosen sufficiently large ($10000$ in the following simulation,\nfor example) that all UAVs in the \\emph{Left Standby Area }are contained\nin this auxiliary virtual tube. Moreover, we let all UAVs in \\emph{Left Standby\nArea }approach the finishing line $\\overline{\\mathbf{p}_{\\text{sr,2}}\\mathbf{p}_{\\text{sr,3}}}.$\nHere \n\\[\n\\mathbf{p}_{\\text{sr,1}}=\\left[\\begin{array}{c}\n\\mathbf{p}_{\\text{t,2}}\\left(1\\right)\\\\\nr_{\\text{t}}+r_{\\text{sr}}\n\\end{array}\\right],\\mathbf{p}_{\\text{sr,2}}=\\left[\\begin{array}{c}\n-r_{\\text{b}}\\\\\nr_{\\text{t}}+r_{\\text{sr}}\n\\end{array}\\right],\\mathbf{p}_{\\text{sr,3}}=\\left[\\begin{array}{c}\n-r_{\\text{b}}\\\\\nr_{\\text{t}}\n\\end{array}\\right]\n\\]\nwhere $r_{\\text{b}}>0,$ $r_{\\text{b}}=r_{\\text{a}}\\ $for example.\nThe intersection of the \\emph{LS2R virtual tube }and\\emph{ Left Ready\nArea }is a buffer with length $r_{\\text{b}}$, which makes a UAV\nfly into the \\emph{Left Ready Area }rather than merely approach it.\\emph{ }According\nto (\\ref{arrivialairway}), the controller is designed as $\\mathbf{v}_{\\text{c},i}=\\mathbf{v}_{\\text{sr},i},$\nwhere \n\\begin{align}\n\\mathbf{v}_{\\text{sr},i} & 
={\\text{sat}}\\big(\\mathbf{v}_{\\text{l},i}\\left({k}_{1},{\\mathbf{p}}_{\\text{sr,2}},{\\mathbf{p}}_{\\text{sr,3}}\\right)+\\mathbf{v}_{\\text{m},i}\\left({{k}_{2}}\\right)\\big.\\nonumber \\\\\n & \\big.+\\mathbf{v}_{\\text{t},i}\\left({{k}_{3},r_{\\text{sr}},{\\mathbf{p}}}_{\\text{sr,1}},{\\mathbf{p}}_{\\text{sr,2}}\\right),{v_{\\text{m},i}}\\big)\\label{vsrl}\n\\end{align}\nwhere \n\\begin{align}\n\\mathbf{v}_{\\text{l},i}\\left({{k}_{1},{\\mathbf{p}}_{\\text{t,2}}},{{\\mathbf{p}}_{\\text{t,3}}}\\right) & \\triangleq-\\mathbf{A}_{\\text{t,23}}\\left({{\\mathbf{p}}_{\\text{t,2}}},{{\\mathbf{p}}_{\\text{t,3}}}\\right)\\text{sat}\\left({{k}_{1}}\\boldsymbol{\\tilde{\\xi}}{_{\\text{l,}i}},{v_{\\text{m},i}}\\right)\\label{control_line}\\\\\n\\mathbf{v}_{\\text{m},i}\\left({{k}_{2}}\\right) & \\triangleq\\underset{j\\in\\mathcal{N}_{\\text{m},i}}{\\overset{}\n{\\displaystyle \\sum\n}}b_{ij}\\left({{k}_{2}}\\right)\\boldsymbol{\\tilde{\\xi}}_{\\text{m,}ij}\\label{control_mul}\\\\\n\\mathbf{v}_{\\text{t},i}\\left({{k}_{3},r_{\\text{t}},{\\mathbf{p}}_{\\text{t,1}}},{{\\mathbf{p}}_{\\text{t,2}}}\\right) & \\triangleq-c_{i}\\left({{k}_{3},r_{\\text{t}}}\\right)\\mathbf{A}_{\\text{t,12}}\\left({{\\mathbf{p}}_{\\text{t,1}}},{{\\mathbf{p}}_{\\text{t,2}}}\\right)\\boldsymbol{\\tilde{\\xi}}_{\\text{t},i}.\\label{control_tun}\n\\end{align}\nFurthermore, according to \\textit{Theorem 1},\\textit{ }UAVs in the \\emph{Left\nStandby Area }will fly into the \\emph{Left} \\emph{Ready Area}, meanwhile\navoiding collisions with other UAVs and keeping within the \\emph{LS2R\nvirtual tube}, namely keeping away\\emph{ }from \\emph{virtual tube\n}and\\emph{ virtual tube Extension.}\n\nSimilarly, the controller for flying from the \\emph{Right Standby Area} to the \\emph{Right\nReady Area}\\ is designed as $\\mathbf{v}_{\\text{c},i}=\\mathbf{v}_{\\text{sr},i}^{\\prime},$\nwhere \n\\begin{align}\n\\mathbf{v}_{\\text{sr},i}^{\\prime} & 
={\\text{sat}}\\big(\\mathbf{v}_{\\text{l},i}\\left({{k}_{1},{\\mathbf{p}}}_{\\text{sr,2}}^{\\prime},{\\mathbf{p}}_{\\text{sr,3}}^{\\prime}\\right)+\\mathbf{v}_{\\text{m},i}\\left({{k}_{2}}\\right)\\big.\\nonumber \\\\\n & \\big.+\\mathbf{v}_{\\text{t},i}\\left({{k}_{3},r_{\\text{sr}},{\\mathbf{p}}}_{\\text{sr,1}}^{\\prime},{\\mathbf{p}}_{\\text{sr,2}}^{\\prime}\\right),{v_{\\text{m},i}}\\big)\\label{vsrr}\n\\end{align}\nwith \n\\[\n\\mathbf{p}_{\\text{sr,1}}^{\\prime}=\\left[\\begin{array}{c}\n\\mathbf{p}_{\\text{t,2}}\\left(1\\right)\\\\\n-r_{\\text{t}}-r_{\\text{sr}}\n\\end{array}\\right],\\mathbf{p}_{\\text{sr,2}}^{\\prime}=\\left[\\begin{array}{c}\n-r_{\\text{b}}\\\\\n-r_{\\text{t}}-r_{\\text{sr}}\n\\end{array}\\right],\n\\]\n\n\\[\n\\mathbf{p}_{\\text{sr,3}}^{\\prime}=\\left[\\begin{array}{c}\n-r_{\\text{b}}\\\\\n-r_{\\text{t}}\n\\end{array}\\right].\n\\]\n\n\n\\subsubsection{From Ready Area to virtual tube Extension}\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics{LR2T} \n\\par\\end{centering}\n\\caption{The LR2T virtual tube, designed for flying from the Left Ready Area to the virtual\ntube Extension.}\n\\label{LR2T} \n\\end{figure}\n\nAs shown in Figure \\ref{LR2T}, an auxiliary virtual tube, named the \\emph{LR2T\nvirtual tube}, is designed with width $2r_{\\text{rt}}$ and centerline\nstarting from ${\\mathbf{p}}_{\\text{rt,1}}\\in{{\\mathbb{R}}^{2}}$ to\n${\\mathbf{p}}_{\\text{rt,2}}\\in{{\\mathbb{R}}^{2},}$ where $r_{\\text{rt}}>0$\nis sufficiently large that all UAVs in the \\emph{Left Ready Area\n}are contained in this auxiliary virtual tube. 
Moreover, all UAVs in \\emph{Left\nReady Area }approach the finishing line $\\overline{\\mathbf{p}_{\\text{rt,2}}\\mathbf{p}_{\\text{rt,3}}}.$\nHere \n\\[\n\\mathbf{p}_{\\text{rt,1}}=\\left[\\begin{array}{c}\n-r_{\\text{rt}}\\\\\nr_{\\text{t}}+r_{\\text{rt}}\n\\end{array}\\right],\\mathbf{p}_{\\text{rt,2}}=\\left[\\begin{array}{c}\n-r_{\\text{rt}}\\\\\nr_{\\text{t}}-r_{\\text{b}}\n\\end{array}\\right],\\mathbf{p}_{\\text{rt,3}}=\\left[\\begin{array}{c}\n0\\\\\nr_{\\text{t}}-r_{\\text{b}}\n\\end{array}\\right].\n\\]\nThe intersection of the \\emph{LR2T virtual tube }and\\emph{ virtual tube\nExtension }is a buffer with length $r_{\\text{b}}$, which makes\na UAV fly into the \\emph{virtual tube Extension }rather than merely approach\nit. According to (\\ref{arrivialairway}), the controller is designed\nas $\\mathbf{v}_{\\text{c},i}=\\mathbf{v}_{\\text{rt},i},$ where \n\\begin{align}\n\\mathbf{v}_{\\text{rt},i} & ={\\text{sat}}\\big(\\mathbf{v}_{\\text{l},i}\\left({{k}_{1},{\\mathbf{p}}_{\\text{rt,2}},{\\mathbf{p}}_{\\text{rt,3}}}\\right)+\\mathbf{v}_{\\text{m},i}\\left({{k}_{2}}\\right)\\big.\\nonumber \\\\\n & \\big.+\\mathbf{v}_{\\text{t},i}\\left({{k}_{3},r_{\\text{rt}},{\\mathbf{p}}_{\\text{rt,1}},{\\mathbf{p}}_{\\text{rt,2}}}\\right),{v_{\\text{m},i}}\\big).\\label{vrtl}\n\\end{align}\nAccording to \\textit{Theorem 1},\\textit{ }UAVs in the \\emph{Left Ready\nArea }will fly into the \\emph{virtual tube Extension}, meanwhile avoiding\nconflict with other UAVs and keeping within the \\emph{LR2T virtual tube},\nnamely keeping away\\emph{ }from \\emph{virtual tube.} Similarly, the\ncontroller for flying from the \\emph{Right Ready Area} to the \\emph{virtual tube Extension}\\ is\ndesigned as $\\mathbf{v}_{\\text{c},i}=\\mathbf{v}_{\\text{rt},i}^{\\prime},$ where\n\\begin{align}\n\\mathbf{v}_{\\text{rt},i}^{\\prime} & 
={\\text{sat}}\\big(\\mathbf{v}_{\\text{l},i}\\left({{k}_{1},{\\mathbf{p}}_{\\text{rt,2}}^{\\prime},{\\mathbf{p}}_{\\text{rt,3}}^{\\prime}}\\right)+\\mathbf{v}_{\\text{m},i}\\left({{k}_{2}}\\right)\\big.\\nonumber \\\\\n & \\big.+\\mathbf{v}_{\\text{t},i}\\left({{k}_{3},r_{\\text{rt}},{\\mathbf{p}}_{\\text{rt,1}}^{\\prime},{\\mathbf{p}}_{\\text{rt,2}}^{\\prime}}\\right),{v_{\\text{m},i}}\\big)\\label{vrtr}\n\\end{align}\nwith \n\\[\n\\mathbf{p}_{\\text{rt,1}}^{\\prime}=\\left[\\begin{array}{c}\n-r_{\\text{rt}}\\\\\n-r_{\\text{t}}-r_{\\text{rt}}\n\\end{array}\\right],\\mathbf{p}_{\\text{rt,2}}^{\\prime}=\\left[\\begin{array}{c}\n-r_{\\text{rt}}\\\\\n-r_{\\text{t}}+r_{\\text{b}}\n\\end{array}\\right],\\mathbf{p}_{\\text{rt,3}}^{\\prime}=\\left[\\begin{array}{c}\n0\\\\\n-r_{\\text{t}}+r_{\\text{b}}\n\\end{array}\\right].\n\\]\n\n\n\\subsubsection{Final Controller}\n\nWith the design above, the final controller is designed as \n\\begin{equation}\n\\mathbf{v}_{\\text{c},i}=\\left\\{ \\begin{array}{ll}\n\\mathbf{v}_{\\text{T},i}\\text{ ({\\ref{control_highway_dis}})} & \\text{if }\\boldsymbol{\\xi}_{i}\\text{ in }\\emph{virtual\\ tube\\ and\\ virtual\\ tube\\ Extension}\\\\\n\\mathbf{v}_{\\text{sr},i}\\text{ (\\ref{vsrl})} & \\text{if }\\boldsymbol{\\xi}_{i}\\text{ in }\\emph{Left\\ Standby\\ Area}\\\\\n\\mathbf{v}_{\\text{sr},i}^{\\prime}\\text{ (\\ref{vsrr})} & \\text{if }\\boldsymbol{\\xi}_{i}\\text{ in }\\emph{Right\\ Standby\\ Area}\\\\\n\\mathbf{v}_{\\text{rt},i}\\text{ (\\ref{vrtl})} & \\text{if }\\boldsymbol{\\xi}_{i}\\text{ in }\\emph{Left\\ Ready\\ Area}\\\\\n\\mathbf{v}_{\\text{rt},i}^{\\prime}\\text{ (\\ref{vrtr})} & \\text{if }\\boldsymbol{\\xi}_{i}\\text{ in }\\emph{Right\\ Ready\\ Area}\n\\end{array}\\right.\\label{control_general}\n\\end{equation}\nwhere $i=1,2,\\cdots,M$. 
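The area-based dispatch above can be sketched in code (an illustrative sketch of ours, not part of the original design: the function names, the tube-length variable $L=\left\Vert \mathbf{p}_{\text{t,1}}-\mathbf{p}_{\text{t,2}}\right\Vert $, and the simplifying convention that any position in the band outside $0<x\leq L$ is labeled as the extension are our assumptions):

```python
def classify_area(xi, r_t, L):
    """Classify a filtered position xi = (x, y) into one of the six areas.

    r_t: tube half-width; L: tube length ||p_t1 - p_t2|| (assumed name).
    """
    x, y = xi
    if -r_t <= y <= r_t:
        # the band containing the virtual tube and its extension;
        # positions outside 0 < x <= L are treated as the extension here
        return "tube" if 0.0 < x <= L else "tube_extension"
    side = "left" if y > r_t else "right"
    # standby areas lie on the x > 0 side of the starting line
    return side + ("_standby" if x > 0.0 else "_ready")

def velocity_command(xi, controllers, r_t, L):
    # dispatch to the controller associated with the current area,
    # mirroring the piecewise definition of the final controller
    return controllers[classify_area(xi, r_t, L)](xi)
```

In this sketch, the tube and its extension would map to the same controller entry, matching the first case of the piecewise definition.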
Then, for given ${\\epsilon}_{\\text{0}}\\in\\mathbb{R}_{+}$, there exist sufficiently small $\\epsilon_{\\text{m}},r_{\\text{s}}\\in{\\mathbb{R}}_{+}$\nin $b_{ij}$, $\\epsilon_{\\text{t}}\\in{\\mathbb{R}}_{+}$ in $c_{i}$\nand $t_{1}\\in\\mathbb{R}_{+}$ such that all UAVs can satisfy (\\ref{arrivialairway}) as $t\\geq t_{1},$\nmeanwhile $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\right\\Vert >2r_{\\text{s}}$\nand $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t,}i}}\\right\\Vert <r_{\\text{t}}-r_{\\text{s}}$.\n\n\\subsection{Proof of Lemma 1}\n\nSuppose that \n\\[\n\\left\\Vert \\boldsymbol{\\xi}_{i}\\left(t\\right)-\\boldsymbol{\\xi}_{j}\\left(t\\right)\\right\\Vert >r+r_{\\text{v}}.\n\\]\nThen $\\left\\Vert \\mathbf{p}_{i}\\left(t\\right)-{{\\mathbf{p}}_{j}}\\left(t\\right)\\right\\Vert >r.$\n\n\n\\subsection{Proof of Lemma 2}\n\nThe claim that these VTOL UAVs are able to avoid conflict with each\nother will be proved by contradiction. Without loss of generality,\nassume that $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{1}}}\\left(t_{\\text{o}}\\right)\\right\\Vert =2r_{\\text{s}}$\noccurs at $t_{\\text{o}}>0$ first, i.e., a conflict between the $i$th$\\ $UAV\nand the $j_{1}$th$\\ $UAV happens. Then, $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\left(t_{\\text{o}}\\right)\\right\\Vert >2r_{\\text{s}}$\nfor $j\\neq j_{1}$. Consequently, $V_{\\text{m},ij}{\\left(t_{\\text{o}}\\right)\\geq0}$\nif $j\\neq j_{1}.$ Since ${V}\\left(0\\right)>0$ and ${{\\dot{V}}\\left(t\\right)}\\leq0$,\nthe function ${V}$ satisfies ${V}\\left(t_{\\text{o}}\\right)\\leq{V}\\left(0\\right),$\n$t\\in\\left[0,\\infty\\right)$. 
By the definition of ${{V},}$ we have\n\\[\nV{_{\\text{m,}ij_{1}}}\\left(t_{\\text{o}}\\right)\\leq{V}\\left(0\\right).\n\\]\nAccording to (\\ref{sata}), given any $\\epsilon_{rs}>0,$ there exists\na $\\epsilon_{\\text{s}}>0,$ such that \n\\[\ns\\left(1,\\epsilon_{\\text{s}}\\right)=1-\\epsilon_{rs}.\n\\]\nThen, at time $t_{\\text{o}},$ the denominator of $V_{\\text{m},ij_{1}}$\ndefined in (\\ref{Vmij}) is \n\\begin{align}\n & \\left(1+\\epsilon_{\\text{m}}\\right)\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{1}}}\\left(t_{\\text{o}}\\right)\\right\\Vert -2r_{\\text{s}}s\\left(\\frac{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{1}}}\\left(t_{\\text{o}}\\right)\\right\\Vert }{2r_{\\text{s}}},\\epsilon_{\\text{s}}\\right)\\nonumber \\\\\n & =2r_{\\text{s}}\\left(1+\\epsilon_{\\text{m}}\\right)-2r_{\\text{s}}\\left(1-\\epsilon_{rs}\\right)\\nonumber \\\\\n & =2r_{\\text{s}}\\left(\\epsilon_{\\text{m}}+\\epsilon_{rs}\\right)\\label{bound1}\n\\end{align}\nwhere $\\epsilon_{rs}>0$ can be sufficiently small if $\\epsilon_{\\text{s}}$\nis sufficiently small according to (\\ref{sata}). According to the\ndefinition in (\\ref{Vmij}), we have \n\\begin{equation}\n\\frac{1}{2r_{\\text{s}}\\left(\\epsilon_{\\text{m}}+\\epsilon_{rs}\\right)}=\\frac{V{_{\\text{m,}ij_{1}}}\\left(t_{\\text{o}}\\right)}{k_{2}}\\leq\\frac{{V}\\left(0\\right)}{k_{2}}\\label{fact}\n\\end{equation}\nwhere $\\sigma_{_{\\text{m}}}\\left(\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij_{1}}}\\right\\Vert \\right)=1$\nis used. Consequently, ${{V}\\left(0\\right)}$ is \\emph{unbounded}\nas $\\epsilon_{\\text{m}}\\rightarrow0\\ $and $\\epsilon_{rs}\\rightarrow0.$\nOn the other hand, for any $j$, we have $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\left(0\\right)\\right\\Vert >2r_{\\text{s}}$\nby \\textit{Assumption 3}. 
Let $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\left(0\\right)\\right\\Vert =2r_{\\text{s}}+{\\varepsilon_{\\text{m,}ij},}$\n${\\varepsilon_{\\text{m,}ij}}>0$. Then, at time $t=0,$ the denominator\nof $V_{\\text{m},ij}$ defined in (\\ref{Vmij}) is \n\\begin{align*}\n & \\left(1+\\epsilon_{\\text{m}}\\right)\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\left(0\\right)\\right\\Vert -2r_{\\text{s}}s\\left(\\frac{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\left(0\\right)\\right\\Vert }{2r_{\\text{s}}},\\epsilon_{\\text{s}}\\right)\\\\\n & \\geq\\left(1+\\epsilon_{\\text{m}}\\right)\\left(2r_{\\text{s}}+{\\varepsilon_{\\text{m,}ij}}\\right)-2r_{\\text{s}}\\bar{s}\\left(\\frac{\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\left(0\\right)\\right\\Vert }{2r_{\\text{s}}}\\right)\\\\\n & =2r_{\\text{s}}\\epsilon_{\\text{m}}+\\left(1+\\epsilon_{\\text{m}}\\right){\\varepsilon_{\\text{m,}ij}}.\n\\end{align*}\nThen \n\\[\nV{_{\\text{m,}ij}}\\left(0\\right)\\leq\\frac{k_{2}}{2r_{\\text{s}}\\epsilon_{\\text{m}}+\\left(1+\\epsilon_{\\text{m}}\\right){\\varepsilon_{\\text{m,}ij}}}.\n\\]\nConsequently, $V{_{\\text{m,}ij}}\\left(0\\right)$ is still bounded\nas $\\epsilon_{\\text{m}}\\rightarrow0\\ $no matter what $\\epsilon_{rs}\\ $is.\nAccording to the definition of ${V}\\left(0\\right),$ ${V}\\left(0\\right)$\nis still \\emph{bounded} as $\\epsilon_{\\text{m}}\\rightarrow0\\ $and\n$\\epsilon_{rs}\\rightarrow0.$ This is a contradiction. Thus \n\\begin{equation}\n\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{m,}ij}}\\left(t\\right)\\right\\Vert >2r_{\\text{s}},i\\neq j\\label{bounded1}\n\\end{equation}\nfor $i,j=1,2,\\cdots,M,$ $t\\in\\left[0,\\infty\\right).$ Therefore,\neach UAV can avoid the other UAVs under the velocity command (\\ref{*}).\n\nThe reason why a UAV can stay within the virtual tube is similar to\nthe above proof. It can be proved by contradiction as well. 
Without\nloss of generality, assume that $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t},i}}\\left(t_{\\text{o}}^{\\prime}\\right)\\right\\Vert =r_{\\text{t}}-{{r}_{\\text{s}}}$\noccurs at $t_{\\text{o}}^{\\prime}>0,$ i.e., such a conflict happens first,\nwhile $\\left\\Vert \\boldsymbol{\\tilde{\\xi}}{_{\\text{t},j}}\\left(t_{\\text{o}}^{\\prime}\\right)\\right\\Vert >r_{\\text{t}}-{{r}_{\\text{s}}}$\nfor $j\\neq i$. Similar to the above proof, one can also get\na contradiction. \n\n\\section{Introduction}\nSearching for faint point sources around bright objects is a challenging endeavor. The atmosphere \\citep{roddier1981,racine1999,macintosh2005}, telescope and instrument optics \\citep{marois2003,marois2005,hinkley2007} produce speckles having a range of timescales that limit the direct detection of faint companions. From previous theoretical analysis, it is known that the speckle intensity temporal distribution is a modified Rician \\citep{Goodman1968,soummer2004,fitzgerald2006}. If a large number of uncorrelated speckle realizations are coadded, then, from the central limit theorem, the final residual speckle noise follows a Gaussian intensity distribution. Since atmospheric turbulence produces random speckles that have a very short correlation time, a Gaussian distributed residual speckle noise is commonly assumed for ground-based adaptive optics (AO) long integrations and a detection threshold of 5$\\sigma$ is usually considered.\n\nHowever, careful residual noise analyses of AO images have demonstrated that long exposures are not limited by random short-lived atmospheric speckles but by quasi-static speckles \\citep{marois2003,marois2004phd,masciadri2005,marois2005,marois2006} originating from the telescope and instruments. The speckle noise currently limiting high-contrast ground-based imaging is thus very similar to that limiting space-based observations \\citep{schneider2003}. 
The typical lifetime of ground-based quasi-static speckles has been found to be several minutes to hours \\citep{marois2006,hinkley2007}; the noise in the combination of several images spanning $\\sim$1~hr is very similar to that in a single image (see Fig.~\\ref{f1} for an example; acquired with NIRI\/Altair at the Gemini telescope). In this case, since the quasi-static speckle noise is well correlated for the entire sequence, the central limit theorem does not apply and the speckle noise in the final combined image will be non-Gaussian. Sensitivity limits calculated assuming Gaussian statistics would have lower confidence levels (CL). Finding a robust technique to estimate proper sensitivity limits is fundamental to adequately analyzing the sensitivity of an exoplanet survey as a function of angular separation. The contrast limit reached by a survey plays a central role in Monte Carlo simulations to derive exoplanet frequencies around stars and constrain planet formation scenarios \\citep{metchev2006,carson2006,kasper2007,lafreniere2007a}. Understanding the residual noise statistical distribution is thus important for future dedicated surveys of next generation AO systems like NICI \\citep{ftaclas2003}, the Gemini Planet Imager (GPI, \\citealp{macintosh2006}), the VLT SPHERE \\citep{dohlen2006}, as well as future space observatories.\n\nIn this paper, a new technique is presented to estimate sensitivity limits for noise showing arbitrary statistics using a CL approach. The theory behind speckle statistics is summarized in \\S~\\ref{theo}. Then \\S~\\ref{ci} presents a technique to derive detection thresholds using the probability density function and associated CLs. The technique is applied to simulated (\\S~\\ref{app}) and observational (\\S~\\ref{obsdata}) data to confirm the theory and validate the approach. The effect of averaging a sequence of independent non-Gaussian noise realizations is discussed in \\S~\\ref{dis}. 
Concluding remarks follow in \\S~\\ref{con}.\n\n\\section{Speckle Noise Statistics \\label{theo}}\nFollowing the work of \\citet{Goodman1968,soummer2004,fitzgerald2006}, the speckle intensity probability density function (PDF) for one location in the image plane and random temporal phase errors can be shown to be a modified Rician (MR) function. At a specific location in the image plane, the MR PDF $p_{\\rm{MR}}(I)$ is a function of the local time-averaged static point spread function intensity $I_c$ and random speckle noise intensities $I_s$:\n\\begin{equation}\np_{\\rm{MR}}(I) = \\frac{1}{I_s} \\exp \\left( - \\frac{I+I_c}{I_s} \\right) I_0 \\left( \\frac{2\\sqrt{I I_c}}{I_s} \\right)\\rm{,}\n\\end{equation}\n\\noindent where $I$ is the point spread function (PSF) intensity ($I=I_c + I_s$) and $I_0 (x)$ is the zero-order modified Bessel function of the first kind. At a specific point of the PSF, if $I_c \\gg I_s$, relevant to Airy ring pinned speckles, the associated PDF is a Gaussian-like function showing a bright positive tail, while if $I_c \\ll I_s$, relevant to PSF dark rings or coronagraphic PSFs dominated by second-order halo speckles, the noise distribution is exponential. The CL $\\alpha$ for a given detection threshold $d$ is simply obtained by:\n\\begin{equation}\n\\alpha (d) = \\int_{-d}^{d} p_{\\rm{{MR}}}^{\\prime} (I) dI\\rm{,}\\label{eqCL}\n\\end{equation}\n\\noindent where $p_{\\rm{MR}}^{\\prime}$ is the mean-subtracted PDF. Fig.~\\ref{f2} illustrates the different possible regimes compared to a Gaussian intensity distribution. 
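As a numerical illustration of the two equations above (ours, not part of the original analysis; the example values of $I_c$ and $I_s$ and the integration grid are arbitrary choices), the CL can be evaluated by direct trapezoidal integration of the MR PDF over $[\langle I\rangle -d\sigma ,\langle I\rangle +d\sigma ]$, using the MR mean $I_c+I_s$ and variance $I_s^2+2I_cI_s$:

```python
import math
import numpy as np

def mr_pdf(I, Ic, Is):
    # modified Rician PDF: static intensity Ic, random speckle intensity Is
    return np.exp(-(I + Ic) / Is) * np.i0(2.0 * np.sqrt(I * Ic) / Is) / Is

def mr_confidence_level(d, Ic, Is, n=1_000_000):
    # alpha(d): probability that an MR deviate lies within +/- d sigma
    # of the mean (trapezoidal integration on a dense intensity grid,
    # clipped at I = 0 since intensity cannot be negative)
    mean, sigma = Ic + Is, math.sqrt(Is**2 + 2.0 * Ic * Is)
    I = np.linspace(max(mean - d * sigma, 0.0), mean + d * sigma, n)
    f = mr_pdf(I, Ic, Is)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(I)))

gauss_cl = math.erf(5.0 / math.sqrt(2.0))               # Gaussian CL at 5 sigma
halo_cl = mr_confidence_level(5.0, Ic=0.0, Is=1.0)      # exponential regime
pinned_cl = mr_confidence_level(5.0, Ic=100.0, Is=1.0)  # pinned-speckle regime
```

In both regimes the MR confidence level at $5\sigma$ falls below the Gaussian value, i.e., an MR-distributed residual produces more false positives at a fixed threshold.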
For a 5$\\sigma$ detection threshold, where $\\sigma$ is the standard deviation of the noise obtained using the {\\em robust\\_sigma} IDL algorithm\\footnote{The {\\em robust\\_sigma} algorithm uses the median absolute deviation as a first estimate of the standard deviation and then weights points using Tukey's biweight; this algorithm provides another step of robustness to avoid biasing the standard deviation estimate if bad pixels are present.}, a Gaussian distribution shows a $1-3\\times 10^{-7}$ CL while an MR distribution shows a $\\sim 1-10^{-2}$ to $1-10^{-3}$ CL. The MR distribution thus produces many more false-positive events. For example, consider a survey of many stars where each observation has a $500\\times 500 \\lambda\/D$ field of view (FOV, i.e. the $20^{\\prime \\prime}\\times 20^{\\prime \\prime}$ NIRI\/Gemini FOV at H-band). If a $5\\sigma$ detection threshold is selected, the Gaussian noise distribution would lead to one false positive detection every four stars, while the MR distribution would lead to $\\sim $250 to $\\sim $2,500 false positives per star. A detection threshold two to three times higher is required for the MR distribution to show the same CL as a 5$\\sigma$ Gaussian noise and the same number of false positive events.\n\nIn the previous speckle PDF analysis, it was shown that the atmospheric speckle noise PDF is obtained by analyzing the temporal variation at one location of the PSF. For quasi-static speckle noise, this approach is not adequate since the noise does not vary significantly with time. The quasi-static noise PDF can be derived using a very simple argument. If we consider a PSF produced by a circular aperture and if the PDF is obtained by analyzing pixels inside a narrow annulus centered on the PSF core, azimuthal quasi-static speckle noise variations $I_s$ are produced with the same value of $I_c$ (here, $I_c$ is the unaberrated PSF and it is azimuthally symmetric for a circular aperture). 
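The robust dispersion estimate used for these $\sigma$ values can be sketched as follows. This is a simplified stand-in for the IDL routine, not a line-for-line port: it seeds with the median absolute deviation and then applies a Tukey biweight scale estimate with the conventional tuning constant c = 9 (the IDL implementation may differ in details):

```python
import numpy as np

def robust_sigma(x, c=9.0):
    """MAD-seeded Tukey biweight scale estimate (sketch of the idea
    behind IDL's robust_sigma)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0.0:
        return 0.0
    u = (x - med) / (c * mad)
    good = np.abs(u) < 1.0            # points beyond c*MAD get zero weight
    u2 = u[good] ** 2
    num = np.sum((x[good] - med) ** 2 * (1.0 - u2) ** 4)
    den = np.abs(np.sum((1.0 - u2) * (1.0 - 5.0 * u2)))
    return np.sqrt(x.size * num) / den  # biweight midvariance scale
```

Unlike a plain standard deviation, a small fraction of bad pixels barely moves this estimate, which is the property exploited in the text.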
The speckle noise inside a narrow annulus and from a single speckle noise realization thus shows the same PDF as a temporal speckle noise variation from random phase screens at any location inside the annulus.\n\n\\section{Experimental Derivation of the PDF and CL Curves\\label{ci}}\nA robust technique to derive sensitivity limits can be developed using CLs. The pixel PDF inside a specific region of the image is first obtained, and the CL curve is then derived and extrapolated to estimate a local detection threshold. To avoid having too many false positive detections without missing possible faint companions, a CL of $1-3\\times 10^{-7}$ (5$\\sigma$ if Gaussian) is selected here. The basic steps to derive the PDF, to obtain the CL curves and to estimate the $1-3\\times 10^{-7}$ CL detection threshold are summarized in Table~\\ref{tabpdfstep}.\n\nThe local PDF is obtained by producing a histogram of the pixel intensities inside a specific region of the image after subtraction of the mean pixel intensity over the region and division by the noise RMS of the image region. The CL curve as a function of detection threshold can be easily estimated by integrating the PDF inside the interval $\\pm d$ (see Eq.~\\ref{eqCL}). Due to the limited number of resolution elements (one $\\lambda\/D$ for PSFs or a pixel for simulated noise images) in an image, the PDF and CL curves will be known only up to a certain detection threshold. In theory, for the $1-3\\times 10^{-7}$ CL detection threshold considered here, each area where the PDF needs to be estimated should have several million independent resolution elements. In practice, for images typically containing up to $500\\times 500 \\lambda\/D$ (250,000 resolution elements), the PDF will be known only up to a $\\sim 1 - 10^{-5}$ CL for Gaussian noise. A model fit using a $\\chi^2$ analysis or a polynomial fit is required to extrapolate the CL curve and obtain the detection threshold corresponding to a $1-3\\times 10^{-7}$ CL. 
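The first two steps, normalizing the pixel sample and integrating its histogram into a CL curve, amount to the following sketch (the function name is illustrative):

```python
import numpy as np

def empirical_cl_curve(pixels, thresholds):
    """Normalize a pixel sample to zero mean and unit RMS, then return
    the fraction of pixels inside +/- d for each detection threshold d
    (the empirical counterpart of the CL integral)."""
    x = np.asarray(pixels, dtype=float)
    x = (x - x.mean()) / x.std()
    return np.array([np.mean(np.abs(x) <= d) for d in thresholds])
```

With a million-element Gaussian sample this reproduces the familiar 68.3\%, 95.4\% and 99.7\% levels at 1, 2 and 3$\sigma$, but for a real image the curve is only sampled up to the CL allowed by the number of resolution elements, as discussed above.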
Since the CL curves of various distributions are nearly linear in a semi-$\\log (1-\\alpha)$ vs detection threshold plot (see Fig.~\\ref{f2}), we have chosen to use a polynomial fit due to its simplicity of implementation, its execution speed, and its accuracy. Due to non-linear effects for detection thresholds near 0$\\sigma$, a linear fit is first performed for detection thresholds above 1.5$\\sigma$. If the detection threshold for a $1-3\\times 10^{-7}$ CL is below 9$\\sigma$, a second order polynomial fit is used instead to better approximate the CL curve for quasi-Gaussian statistics. The CL extrapolation accuracy will be analyzed in the next section.\n\n\\section{Technique Validation with Simulated Data\\label{app}}\nIn this section, the PDF, the CL curve and the $1-3\\times 10^{-7}$ detection threshold of simulated data are obtained.\n\n\\subsection{Simulated PDFs}\nSimulated noise images using specific PDFs are used to test the algorithm's ability to recover the proper $1-3\\times 10^{-7}$ detection threshold for known PDFs. To test the effect of the image area size on the CL extrapolation accuracy, images of various sizes are produced following an MR distribution with $I_c\/I_s$ equal to 0.1, 1 and 10 (see Fig.~\\ref{f2}). For each size, 25 independent realizations are computed to derive the extrapolation accuracy. Fig.~\\ref{f3b} and Table~\\ref{tab1} show the CL extrapolation accuracy for simulated statistical distributions. In general, the algorithm slightly underestimates the $1-3\\times 10^{-7}$ detection threshold for exponential statistics by $\\sim$ 5\\%, but usually within the 2$\\sigma$ error calculated for each area size. Typically, the larger the area, the more accurate the detection threshold. To achieve a detection threshold accuracy of 10\\% for a $1-3\\times 10^{-7}$ CL detection threshold, each PDF needs to be known up to a $1-10^{-4}$ CL (10,000 resolution elements per area). 
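The extrapolation step can be sketched as a fit in the semi-log plane; for brevity only the linear fit above 1.5$\sigma$ is shown here, without the second-order refinement used for quasi-Gaussian cases (function names are illustrative):

```python
import numpy as np

def extrapolate_threshold(d, one_minus_alpha, target=3e-7, d_min=1.5):
    """Fit log10(1 - alpha) vs. detection threshold d above d_min with
    a straight line and solve for the d reaching the target false-alarm
    probability (e.g. 3e-7)."""
    d = np.asarray(d, dtype=float)
    oma = np.asarray(one_minus_alpha, dtype=float)
    keep = (d >= d_min) & (oma > 0.0)
    slope, intercept = np.polyfit(d[keep], np.log10(oma[keep]), 1)
    return (np.log10(target) - intercept) / slope
```

For exponential noise of unit mean, the exact tail is 1 - alpha(d) = exp(-(1 + d)), which is perfectly linear in this plane, so the extrapolated threshold matches the analytic value.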
For instruments with smaller FOVs (several hundred by several hundred $\\lambda\/D$), an area of $\\sim 50\\times 50 \\lambda\/D$ (2,500 resolution elements) would deliver a detection threshold accuracy of $\\sim $15\\% for all types of distribution. A solution to increase the detection accuracy for small FOVs would be to combine observations of several objects of similar magnitudes and observing conditions to increase the number of independent noise realizations in each area.\n\n\\subsection{Simulated PSFs\\label{simdata}}\nThe algorithm is now tested using simulated aberrated PSFs. For PSF observations, determination of the PDF is more complex. The speckle noise amplitude decreases with angular separation, and the PDF may change with angular separation due to the relative importance of random atmospheric speckle, photon, background and read noises. Since the speckle noise amplitude decreases with angular separation, a signal-to-noise ratio image is first obtained by dividing the pixel intensities, at each radius, by the standard deviation $\\sigma$ of the noise at that radius (estimated using the IDL {\\em robust\\_sigma} algorithm). Finally, since CLs are extrapolated, a compromise needs to be found between having a good radial sampling of the PDF and having sections of images big enough to adequately determine the PDF.\n\nPSF simulations are performed using Fast Fourier Transforms of complex $2048\\times 2048$ pixel images with a 512-pixel diameter pupil; the full width at half maximum (FWHM) of the PSF is 4 pixels. The pupil has uniform amplitude and includes $\\lambda\/160$ RMS of phase errors generated using a power-law of index $-2.6$. The PSF images are then trimmed to $1024\\times 1024$ pixels to avoid FFT aliasing effects. A non-aberrated reference PSF is subtracted to remove the Airy pattern and a signal-to-noise image is calculated. \n\nFor simplicity, consider the calculation of the PDF within an annulus centered on the PSF. 
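A minimal version of this aberrated-PSF simulation can be sketched as follows, with smaller arrays than the text's $2048\times 2048$ purely for speed; the power-law phase-screen recipe (filtering white noise in the Fourier domain) is a common one and the parameters are assumptions, with the RMS given in radians:

```python
import numpy as np

def aberrated_psf(n=256, pupil_diam=64, rms=0.02, index=-2.6, seed=0):
    """PSF = |FFT(pupil * exp(i*phi))|^2 for a circular pupil with a
    power-law phase screen of the given RMS (radians) over the pupil."""
    rng = np.random.default_rng(seed)
    y, x = np.indices((n, n)) - n // 2
    pupil = (np.hypot(x, y) <= pupil_diam / 2).astype(float)
    # power-law screen: shape white noise with amplitude f**(index/2)
    f = np.fft.fftshift(np.hypot(*np.meshgrid(np.fft.fftfreq(n),
                                              np.fft.fftfreq(n))))
    f[n // 2, n // 2] = f[n // 2, n // 2 - 1]  # avoid divide-by-zero at DC
    amp = f ** (index / 2.0)
    phi = np.real(np.fft.ifft2(np.fft.ifftshift(
        amp * np.exp(2j * np.pi * rng.random((n, n))))))
    phi *= rms / phi[pupil > 0].std()          # set the pupil-plane RMS
    field = np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phi)))
    return np.abs(field) ** 2
```

Subtracting `aberrated_psf(rms=0.0)` from an aberrated realization mimics the removal of the Airy pattern described above.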
Since the presence of background or companion point sources inside that annulus could bias the statistics for real data (we are assuming that the background star density is such that only one or a few background objects are detected in the field of view (FOV) around any single target; cases with a high background star density will be discussed in section~\\ref{hbsd}), we have chosen to divide the annulus into three azimuthal sections containing 50,000 pixels each ($\\sim$~10,000 resolution elements, see Fig.~\\ref{f3}). The median PDF over the three azimuthal sections is calculated. Given the area of these sections, the PDF will be known down to a $\\sim 1-10^{-4}$ CL for Gaussian statistics and the $1-3\\times 10^{-7}$ detection threshold will be known to $\\sim $10\\% accuracy (see Tab.~\\ref{tab1}).\n\nTo further avoid cases where a point source is located at the border of two sections, the entire procedure is repeated by rotating the sections by 30 and 60 degrees with respect to the PSF center, and the median PDF over the three orientations is finally obtained. This procedure is repeated at different angular separations.\n\nSince the PDF is estimated in large areas that may contain speckles with $I_c \\gg I_s$, $I_c \\sim I_s$ or $I_c \\ll I_s$, such a technique returns an average PDF weighted by the various speckle noise contributions (pinned\/unpinned speckles or Gaussian noises). Simulations with and without a coronagraph (simulated with a Gaussian pupil apodizer having a FWHM equal to a quarter of the pupil diameter) and for $\\lambda\/160$, $\\lambda\/32$ and $\\lambda\/16$ RMS phase aberration are presented (see Fig.~\\ref{f4} and \\ref{f5}). 
The algorithm clearly detects the MR distribution expected for a pinned speckle dominated PSF and the unpinned (exponential) speckle dominated coronagraphic PSF.\n\nThe non-coronagraphic $\\lambda\/160$ RMS simulations confirm that the speckle noise follows an MR distribution (required detection threshold of $\\sim 10\\sigma$ for a $1-3\\times 10^{-7}$ CL), as expected since pinned speckles are dominant for this case. As the quantity of aberrations increases, the ratio of pinned to non-pinned speckles decreases and the noise becomes exponential. For the Gaussian apodized case, since pinned speckles are strongly attenuated, the halo term dominates and the noise is more exponential. Note that none of these curves are expected to be flat as a function of angular separation since the ratio of pinned to unpinned speckles varies with angular separation, thus changing the pixel intensity distribution, and some noise is expected from the CL curve extrapolation (see Tab.~\\ref{tab1}). Another simulation was performed using the $\\lambda\/160$ RMS case to show that if a constant Gaussian noise (background or read noise) is added to the image, the algorithm correctly detects the change of intensity distribution of the pixels at wide separations (see Fig.~\\ref{f6}).\n\nIn high-contrast imaging observations, a partially correlated reference star PSF is usually subtracted to remove a fraction of the quasi-static speckle noise. Such a reference PSF can be obtained by observing a nearby target, by acquiring the same star at another wavelength (simultaneous spectral differential imaging, \\citealp{marois2005}) or polarization \\citep{potter2001}, or by building the reference using images acquired with different field angles (angular differential imaging, \\citealp{marois2006}). Such a PSF subtraction is now simulated to estimate how it affects the PDF. The observed PSF $I$ is simulated with a $\\lambda\/160$ RMS phase aberration $\\phi$, with and without a Gaussian apodizer. 
The reference PSF $I_{\\rm{ref}}$ is constructed by combining a perfectly correlated phase aberration $a\\phi$, where $a$ is a constant less than 1, with an uncorrelated part $\\Delta \\phi$ such that:\n\n\\begin{equation}\nI = | \\rm{FT}(Ae^{i\\phi})|^2\n\\end{equation}\n\\begin{equation}\nI_{\\rm{ref}} = | \\rm{FT}(Ae^{i(a\\phi+\\Delta \\phi)})|^2 \\rm{,}\n\\end{equation}\n\\noindent where the total noise RMS of $\\phi$ and $a\\phi+\\Delta \\phi$ are equal and the ratio of the noise RMS of $a\\phi$ and $\\Delta \\phi$ is equal to 0.1, 1 and 10. Unless the background, read, random atmospheric speckle, or photon noise limit is reached after the reference PSF subtraction, the residual PDF is essentially unchanged for cases with and without a coronagraph (see Fig.~\\ref{f7}).\n\n\\section{Application to Observational Data\\label{obsdata}}\nThe steps required to use the algorithm with observational data are similar to the ones described in Tab.~\\ref{tabpdfstep} and $\\S~\\ref{simdata}$. Only a few additional reduction steps are necessary. Fig.~\\ref{figexample} illustrates the various steps of the technique using Gemini data.\n\nBesides the usual data reduction, deviant pixels, like diffraction from the secondary mirror support, must first be masked. Diffraction from the secondary mirror support usually produces a bright concentrated flux emanating from the PSF core along several azimuthal directions. Since this flux is not produced by quasi-static aberrations and is very localized in the image, if we include these pixels in the PDF, they will produce a bright positive tail in the PDF and the $1-3\\times 10^{-7}$ CL detection threshold will be overestimated.\n\nSince the main science goal is to detect point sources, noise filtering is also applied to remove the noise that is not at the spatial scale of point sources. An $8\\times 8$ FWHM median filter is first subtracted from the image to reject large spatial period noises. 
Then, a $1\\times 1$ FWHM median filter is applied to reject bad\/hot pixels and smooth out the noise having spatial period below the resolution limit. The image is finally divided, at each radius, by the standard deviation $\\sigma$ of the noise (again obtained with the IDL {\\em robust\\_sigma} algorithm) to obtain a signal-to-noise ratio image.\n\nThe algorithm is first tested using data obtained at the Gemini telescope with the Altair adaptive optics system \\citep{saddlemyer1998} and the NIRI near-infrared camera \\citep{hodapp2000}. These data are part of the Gemini Deep Planet Survey \\citep{lafreniere2007a} that uses the angular differential imaging (ADI) technique \\citep{marois2006,lafreniere2007b} to detect faint companions. This technique consists in acquiring a sequence of images with continuous FOV rotation. A reference PSF that does not contain any point sources is first obtained by combining images of the sequence, and the quasi-static speckle noise is then attenuated by subtracting the reference ADI PSF. The data for the star HD97334B (program GN-2005A-Q16), acquired on April 18, 2005 with good seeing conditions (Strehl of $0.2$ at H-band), are presented. These data have been reduced, registered and processed using the pipeline described in \\citet{marois2006} with the additional steps mentioned above, i.e. pixel masking and noise filtering \\& normalizing. Given that NIRI images are $1024\\times 1024$ pixels and PSFs have 3 pixels per FWHM, we have chosen the same areas as the simulated PSFs mentioned above (see \\S~\\ref{simdata}) to calculate the PDF. For each region of the image, the pixel intensity histogram is obtained and then integrated to derive the CL curve. The CL is then extrapolated using a polynomial fit and the $1-3\\times 10^{-7}$ CL detection threshold is estimated. 
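The filtering and normalization steps applied to these images can be sketched as follows (SciPy assumed; a plain standard deviation is used as a simple stand-in for {\em robust\_sigma}, and the one-FWHM-wide annuli are an illustrative choice):

```python
import numpy as np
from scipy.ndimage import median_filter

def snr_image(img, fwhm=3):
    """Unsharp-mask with an 8x8-FWHM median filter, smooth with a
    1x1-FWHM median filter, then divide each annulus by its noise RMS
    to obtain a signal-to-noise ratio image."""
    filtered = img - median_filter(img, size=8 * fwhm)  # remove low frequencies
    filtered = median_filter(filtered, size=fwhm)       # reject hot pixels
    y, x = np.indices(img.shape)
    cy, cx = (np.array(img.shape) - 1) / 2.0
    ring = (np.hypot(x - cx, y - cy) // fwhm).astype(int)
    out = np.zeros_like(filtered)
    for r in np.unique(ring):
        sel = ring == r
        sigma = filtered[sel].std()
        out[sel] = filtered[sel] / sigma if sigma > 0 else 0.0
    return out
```

By construction each annulus of the output has unit RMS, which is the normalization assumed when the pixel histograms of the three azimuthal sections are accumulated.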
The derived detection thresholds for a $1-3\\times 10^{-7}$ CL are presented (see Fig.~\\ref{f8}) for a single PSF, a PSF minus the ADI reference PSF, and the combined ADI-subtracted images.\n\nThe algorithm is next tested using observational data obtained at the Canada-France-Hawaii telescope using the PUEO adaptive optics system \\citep{rigaut1998} and the TRIDENT near-infrared camera \\citep{marois2005}. TRIDENT is a triple beam multi-wavelength (1.58, 1.625 and 1.68$\\micron$ with 1\\% bandpass) imager built following the simultaneous spectral differential imaging technique \\citep{racine1999,marois2000}. The technique consists in acquiring several images at different wavelengths and subtracting them to attenuate the speckle noise while retaining most of the flux of nearby companions. These data have been acquired as part of a direct imaging survey of stars confirmed to possess exoplanets from radial velocity analysis. The dataset of the star Ups And, acquired on November 14, 2002, is used here. Seeing conditions were relatively good and the Strehl ratio was of the order of $0.5$. The data have been reduced by subtracting a dark, dividing by a flat field, correcting for bad\/hot pixels, and registering the PSF at the image center. An optimized reference PSF was obtained using a star (Chi And) having a similar spectral type and magnitude, acquired at the same DEC and HA to minimize PSF evolution from differential atmospheric refraction and flexure effects. The performance with and without the simultaneous reference PSF and the Chi And reference PSF subtractions is analyzed. Due to the limited FOV of TRIDENT, sections of 10,000 pixels (500 resolution elements, given that TRIDENT has 5 pixels per $\\lambda\/D$) are used to derive the PDF. Detection thresholds are thus known to $\\sim$15\\% (see Tab.~\\ref{tab1}).\n\nIt is clear that both TRIDENT and Gemini raw PSFs are limited by a non-Gaussian noise. 
For both TRIDENT\/CFHT and NIRI\/Gemini images, even after subtraction of a reference PSF, the residuals are still dominated by quasi-static speckles rather than averaged atmospheric speckle, background, read, or photon noises. Only the final combined ADI-subtracted image possesses a clear Gaussian-like noise. Fig.~\\ref{figexample2} shows a visual example of a 5$\\sigma$ detection with and without a Gaussian distributed noise after introducing artificial 5$\\sigma$ point sources. For the Gaussian distributed noise, only the artificial point sources are detected with a detection threshold at 5$\\sigma$,\\footnote{Only approximately half of the artificial point sources are detected $\\ge 5\\sigma$ since the artificial sources, being $5\\sigma$ in intensity, vary in S\/N by $1\\sigma$ RMS due to the underlying noise in the image. A $5\\sigma$ detection threshold thus misses\/detects $\\sim 50$\\% of $5\\sigma$ sources.} while for the Gemini data (an MR distributed noise), numerous false positive sources are observed for the same detection threshold. If instead we select the $1-3\\times 10^{-7}$ CL detection threshold obtained by the technique described in this paper (here equal to $10\\sigma$, see Fig.~\\ref{f8}), then only the artificial point sources are detected. It is interesting to note that the artificial source detection CLs in the left and right panels of Fig.~\\ref{figexample2} are the same.\n\n\\section{Discussion\\label{disc}}\n\n\\subsection{PDF Evolution with Quasi-Static Speckle Averaging\\label{dis}}\nIt was shown in \\S~\\ref{obsdata} that the ADI technique produces a quasi-Gaussian noise. This is mainly due to the FOV rotation that occurs during the observing sequence; the residual noise is averaged incoherently when combining the images after FOV alignment. 
From the central limit theorem, it is thus expected that the noise in the final combined image shows quasi-Gaussian statistics.\n\nIn this section, simulations are presented to estimate the number of independent speckle noise realizations required to converge to a quasi-Gaussian noise intensity distribution. Random noise images having 10$^{6}$ resolution elements are created following an MR distribution having $I_c\/I_s$ equal to 0.1, 1 and 10. The PDF and CL curves are calculated for a single realization up to the coaddition of 25 independent realizations (see Fig.~\\ref{f9}). Typically, $\\sim $20 independent realizations are required for the MR distribution to converge, to $\\sim $20\\%, to a Gaussian distribution. Fig.~\\ref{f10} shows the detection threshold $d$ for a $1-3\\times 10^{-7}$ CL as a function of $n_{\\rm{eff}}$, the number of independent noise realizations\n\\begin{equation}\nn_{\\rm{eff}} = n \\frac{t_{\\rm{exp}}}{\\tau_{\\rm{dcorr}}} \\rm{,}\\label{eqneff}\n\\end{equation}\nwhere $n$ is the number of acquired images in the sequence, $t_{\\rm{exp}}$ is the integration time per image and $\\tau_{\\rm{dcorr}}$ the speckle noise decorrelation timescale (the equation is valid if $\\tau_{\\rm{dcorr}} \\ge t_{\\rm{exp}}$; if $\\tau_{\\rm{dcorr}} < t_{\\rm{exp}}$ then $n_{\\rm{eff}} = n$). These three curves can be well fit by a simple power-law of the form\n\\begin{equation}\nd(n) = [d_1 - 5] n_{\\rm{eff}}^{-0.63} + 5 \\rm{,}\\label{eqevol}\n\\end{equation}\n\\noindent where $d_1$ is the detection threshold of a single image for a $1-3\\times 10^{-7}$ CL. This equation is valid for all types of statistical distributions studied here. Eq.~\\ref{eqevol} can be used to predict the detection threshold required for a $1-3\\times 10^{-7}$ CL and a statistical distribution with a known instantaneous PDF and speckle noise decorrelation timescale. 
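The two relations above, the effective number of independent realizations and the power-law decay of the detection threshold toward the Gaussian value of 5, translate directly into a small helper (function names are illustrative):

```python
def n_effective(n, t_exp, tau_dcorr):
    """Number of independent noise realizations in a sequence of n
    images of t_exp each, for a speckle decorrelation time tau_dcorr
    (valid branch: tau_dcorr >= t_exp; otherwise every image is
    already independent)."""
    return n * t_exp / tau_dcorr if tau_dcorr >= t_exp else n

def threshold_after_averaging(d1, n_eff):
    """Detection threshold for a 1 - 3e-7 CL after averaging n_eff
    independent realizations, given the single-image threshold d1;
    converges to 5 (Gaussian) as n_eff grows."""
    return (d1 - 5.0) * n_eff ** -0.63 + 5.0
```

For example, 70 one-minute exposures with a 1.5-minute decorrelation time give about 47 independent realizations, and a single-image threshold of 13 then decays to roughly 5.7.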
If we consider the Gemini ADI observation ($d_1 \\sim 13$ for a single ADI-subtracted image, see Fig.~\\ref{f8}), for a 70 minute observation sequence with $\\tau_{\\rm{dcorr}} \\sim 1.5$ minutes (at 2$^{\\prime \\prime}$ or 50$\\lambda\/D$, see \\citealp{marois2006}), it is expected that the detection threshold for a $1-3\\times 10^{-7}$ CL of the final combined image ($n_{\\rm{eff}} = 46.7$) will be $\\sim 5.7$ at 50$\\lambda\/D$, in good agreement with the number derived with real images ($\\sim 5.8$, see Fig.~\\ref{f8}).\n\n\\subsection{Targets with a High Background Star Density\\label{hbsd}}\nIn some cases, it is desirable to observe an interesting nearby target situated along the galactic plane or in front of the galactic bulge. In those areas, the high background stellar density implies that numerous background stars will be present in each of the areas defined to derive pixel intensity distributions. The detection threshold obtained will be affected by those stars since the algorithm would consider them as speckles. Several techniques can be used to remove the stars before estimating detection thresholds. A simple solution is to subtract these stars, using a non-saturated image of the primary, and mask any remaining contaminated areas with a not-a-number (NaN) mask. If the observations are obtained using the ADI technique then a star-free residual image can be obtained in the following manner. Prior to combining all the ADI-subtracted images together, instead of rotating them by the angle required to align their field of view, they are rotated by the negative of that angle such that all off-axis sources (the background stars) are eliminated by the median combination. As the amplitude of the rotation between the images is the same as for the ``proper'' combination, the effect of the median combination on the residual noise is expected to be the same and this star-free residual image can be used to estimate the noise distribution. 
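This negative-rotation combination can be demonstrated with a small numerical sketch (SciPy assumed; array sizes, the test pattern and the rotation angles are illustrative):

```python
import numpy as np
from scipy.ndimage import rotate

def combine(frames, angles_deg, align=True):
    """Rotate each frame to align the field (align=True) or by the
    negative of that angle (align=False) and median-combine.  The
    anti-aligned combination scatters off-axis sources to different
    position angles, so the median rejects them and returns a
    star-free noise map."""
    sign = -1.0 if align else 1.0
    rotated = [rotate(f, sign * a, reshape=False, order=1)
               for f, a in zip(frames, angles_deg)]
    return np.median(rotated, axis=0)
```

Because the amplitude of each rotation is the same in both combinations, the statistics of the residual noise are affected identically, which is why the star-free map is a fair stand-in for the noise of the proper combination.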
Of course, the proper ADI combination of images must be used to search for companions. Another approach, still within the ADI framework, is to use the final ADI residual image to subtract the off-axis sources from each non-rotated ADI-subtracted image. Then these source-free images are rotated by the negative of the angle needed to align their field of view, such that their median combination eliminates the off-axis source subtraction residuals. As for the previous technique, this source-free residual image should have the same residual noise distribution as the proper ADI residual image.\n\n\\section{Conclusion\\label{con}}\nA robust technique was developed to estimate sensitivity limits using a CL approach. This technique correctly finds the expected MR intensity distributions of simulated and real PSFs, and properly detects a change of PDF as a function of angular separation. Experiments with simulated and observational data confirm the prediction of the theory that raw PSFs obtained with high-contrast imaging instruments are limited by a non-Gaussian noise. A correction factor (up to 3) needs to be applied to detection limits found assuming Gaussian statistics to obtain the desired $1-3\\times 10^{-7}$ CL detection threshold. Properly estimating this effect is important for future high-contrast imaging instruments for both ground- and space-based dedicated missions since a loss of a factor of three in contrast results in less sensitivity to low-mass exoplanets or, if a specific contrast needs to be achieved, integration times need to be at least nine times longer. It was shown that the ADI technique is the only observing strategy currently known that generates, intrinsically, a quasi-Gaussian noise at all separations where sufficient FOV rotation has occurred. A simulation has shown that it typically takes $\\sim 20$ independent speckle noise realizations to produce an average speckle noise that shows quasi-Gaussian statistics. 
A general power-law is derived to predict the detection threshold required when averaging independent speckle noise realizations of known PDFs and decorrelation timescale.\n\n\\acknowledgments\nThe authors would like to thank R\\'{e}mi Soummer, Mike Fitzgerald, James Graham, Anand Sivaramakrishnan, Lisa Poyneer and Daniel Nadeau for discussions. This research was performed under the auspices of the US Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract W-7405-ENG-48, and also supported in part by the National Science Foundation Science and Technology Center for Adaptive Optics, managed by the University of California at Santa Cruz under cooperative agreement AST 98-76783. This work is also supported in part through grants from the Natural Sciences and Engineering Research Council, Canada and from the Fonds Qu\\'{e}b\\'{e}cois de la Recherche sur la Nature et les Technologies, Qu\\'{e}bec.\n\n\n\\section{The Probability Representation of Quantum Mechanics} \n\nThe MDF of a random variable $X$ was introduced in \\cite{cahill} as the\nFourier transform of the quantum characteristic function \n$\\chi (k) = <e^{ik\\hat X}>$, to be \n\\begin{equation} \nw(X,t)= {1\\over 2\\pi} \\int dk\\, e^{-ikX}\n<e^{ik\\hat X}>~ , \\label{mdf} \n\\end{equation} \nwhere $\\hat X$ is the operator\nassociated with $X$, $<\\hat A> = \\mbox{\\rm Tr} (\\hat\\rho \\hat\nA)$, and $\\hat\\rho$ is the time-dependent density operator. \nIt is shown in \\cite{cahill} \nthat $w(X,t)$ is positive and normalized to unity, provided $\\hat X$\nis an observable. This theorem may be easily proven by taking for \nsimplicity $\\hat\\rho$ to be the density operator for a pure state. 
\nThen, evaluating the trace in \\eqn{mdf} on eigenstates of the operator \n$\\hat X$, it can be verified that \\eqn{mdf} yields $w(X,t)= \\rho(X,X,t)$,\nwhich is positive and normalized to unity.\n\nWe recall that the quantum characteristic function is, up to factors\nof $i$, the generating function of the moments of any order for the\nprobability distribution of the operator $\\hat X$. Hence it plays in\nquantum statistical mechanics the same r\\^ole as the generating\nfunctionals for the Green's functions in quantum field theory. In ref.\n\\cite{tom1} $X$ is taken to be a variable of the form \n\\begin{equation}\n X=\\mu q + \\nu p \\, ,\\label{x}\n\\end{equation}\nwhere $\\mu, ~\\nu$ are real parameters labelling different reference\nframes in the phase space. $\\mu$ is dimensionless, while $[\\nu]=\n[m^{-1}] [t] $. Thus, $X$ represents the position coordinate taking\nvalues in an ensemble of reference frames. For such a choice of $X$ it was shown that there exists an invertible relation between the MDF\nand the density matrix, respectively in \\cite{tom1} for the 1-d case, \nand in \\cite{dar} for the 2-d case. \nThis relation was originally understood through the Wigner function: \nthe MDF was\nexpressed in terms of the Wigner function, which is in turn related to \nthe density matrix, and vice versa. The evolution equation of\nthe MDF was then found starting from an evolution equation for the\nWigner function established by Moyal in \\cite{moyal}. This intermediate\nstep in terms of the Wigner function is not necessary. We can directly\ninvert \\eqn{mdf}, when the variable $X$ and the associated operator are\ngiven by \\eqn{x}. The evolution equation is then obtained (in the\nSchr\\\"odinger representation) by means of the Liouville equation for\nthe density operator, in coordinate representation. In view of the\nsubsequent generalization to $N$ degrees of freedom and to field\ntheory, let us derive these results in some detail for a one-dimensional system. 
Equation \\eqn{mdf} is explicitly written as \n\\begin{eqnarray}\nw(X,\\mu,\\nu,t)&=& {1\\over 2\\pi} \\int dk\\, \\int dZ \\, e^{-ikX}\n<Z|\\hat\\rho \\, e^{ik(\\mu \\hat q + \\nu \\hat p)}|Z> \\nonumber\\\\ \n&=& {1\\over 2\\pi} \\int dk\\, \\int dZ \\, \\rho(Z, Z-k\\nu\\hbar,t) \ne^{-ik[X-\\mu(Z-k\\nu\\hbar\/2)]} .\n\\end{eqnarray} \nThe MDF so defined is normalized with respect to the $X$ variable: $ \n\\int dX w(X,\\mu,\\nu,t)=1$.\nPerforming the change of variables $~Z'=Z,~Z''=Z-k\\nu\\hbar$ we may reexpress \nthe MDF in the more convenient form:\n\\begin{equation}\nw(X,\\mu,\\nu,t)={1\\over 2\\pi |\\nu|\\hbar} \\int \\rho(Z',Z'',t) \\exp \n\\left[-i{Z'-Z''\\over \\nu\\hbar}\\left(X-\\mu {Z'+Z''\\over 2}\n\\right)\\right] dZ'\\, dZ'' \\label{wro}\n\\end{equation}\nwhich can be inverted to \n\\begin{equation}\n\\rho (X,X',t)=|\\alpha| \\int w(Y,\\mu,{X-X'\\over \\hbar\\alpha}) \\exp \\left[ \ni\\alpha\\left(Y-\\mu{X+X'\\over 2}\\right)\\right] d\\mu \\, dY \\label{row1}\n\\end{equation}\nwhere $\\alpha$ is a parameter with the dimension of an inverse length.\nThe density matrix is independent of $\\alpha$. In fact, using the \nhomogeneity of the MDF, $w(\\alpha X, \\alpha\\mu,\\alpha\\nu)=|\\alpha|^{-1} \nw(X, \\mu,\\nu)$, which is evident from the definition, \n\\eqn{row1} may be written as\n\\begin{equation}\n\\rho (X,X',t)=\\int w(Y,\\mu, X-X') \\exp \\left[ \n{i\\over \\hbar} \\left(Y-\\mu{X+X'\\over 2}\\right)\\right] d\\mu \\, dY \\label{row}\n\\end{equation}\nwhere the variables $Y,\\mu$ have been rescaled by $\\alpha$.\nIt is important to note that, for \\eqn{wro} to be invertible, it is \nnecessary that $X$ be a coordinate variable taking values in an \nensemble of phase \nspaces; in other words, the specific choices $\\mu=1, \\nu=0$ or any \nother fixing of the parameters $\\mu$ and $\\nu$ \nwould not allow one to reconstruct the density matrix. \nHence, the MDF contains the same amount of information on a quantum \nstate as the density matrix, only if Eq. \\eqn{x} is assumed. 
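As a sanity check on the double-integral expression for the MDF above, one can evaluate it numerically for the ground state of a harmonic oscillator (units with hbar = m = omega = 1), for which the MDF must be a Gaussian in X with variance (mu^2 + nu^2)/2. A brute-force sketch (the grid parameters are ad hoc; the trapezoidal-type sum is accurate here because the integrand decays like a Gaussian):

```python
import numpy as np

def mdf_ground_state(X, mu, nu, zmax=7.0, nz=281):
    """Evaluate w(X, mu, nu) from the double integral over Z', Z''
    with rho(Z', Z'') = psi(Z') psi(Z''), psi the oscillator ground
    state (hbar = 1)."""
    Z = np.linspace(-zmax, zmax, nz)
    dz = Z[1] - Z[0]
    psi = np.pi ** -0.25 * np.exp(-Z ** 2 / 2.0)
    Zp, Zpp = np.meshgrid(Z, Z, indexing="ij")   # Z' and Z'' grids
    rho = np.outer(psi, psi)
    w = []
    for x in np.atleast_1d(X):
        phase = -(Zp - Zpp) / nu * (x - mu * (Zp + Zpp) / 2.0)
        w.append(np.real(np.sum(rho * np.exp(1j * phase))) * dz * dz)
    return np.array(w) / (2.0 * np.pi * abs(nu))
```

The recovered marginal is normalized and Gaussian, confirming that for a state quadratic in the canonical variables the MDF is the expected Gaussian in every reference frame labelled by mu and nu.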
\n\nWe now address the problem of finding the evolution equation for the \nMDF, for Hamiltonians of the form\n\\begin{equation}\n\\hat H = {{\\hat p}^2\\over 2m} +V(\\hat q).\n\\end{equation}\nUsing the Liouville equation\n\\begin{equation}\n\\frac{\\partial \\hat\\rho}{\\partial t} + {i\\over \\hbar} \n[\\hat H,\\hat\\rho]=0 \\label{liou}\n\\end{equation}\nand substituting into Eq. \\eqn{wro}, we have \n\\begin{eqnarray}\n{\\dot w}(X,\\mu,\\nu,t)&=&-{i\\over 2\\pi |\\nu|\\hbar} \\int \\left[ -{\\hbar^2\\over \n2m}\\left(\n\\frac{\\partial^2}{\\partial Z^2} - \\frac{\\partial^2}{\\partial Z'^2} \\right) \n+ \\left( V(Z)- V(Z')\\right)\\right] \n\\rho(Z,Z',t) \\nonumber \\\\\n& &\\times\\exp \\left[-i{Z-Z'\\over \n\\nu\\hbar}\\left(X-\\mu{Z+Z'\\over 2}\\right)\\right] dZ\\, d Z' ~.\n\\end{eqnarray}\nIntegrating by parts and assuming the density matrix to be zero at \ninfinity, we finally have\n\\begin{eqnarray}\n{\\dot w}(X,\\mu,\\nu,t) &=& \\left\\{{1\\over m} \n\\mu {\\partial\\over \\partial \\nu}+{i\\over \\hbar} \\left[ \nV\\left(-({\\partial \\over \\partial X})^{-1} {\\partial \\over \\partial \\mu} \n-{i\\nu\\hbar\\over 2} {\\partial\\over \n\\partial X}\\right) \\right.\\right.\\nonumber\\\\ \n&-&\\left.\\left. V\\left(-({\\partial \\over \\partial X})^{-1} {\\partial \\over \\partial \\mu} \n+{i\\nu\\hbar\\over 2} {\\partial\\over \\partial X}\\right)\\right]\\right\\} w(X,\\mu,\\nu,t)~,\n\\label{evo}\n\\end{eqnarray}\nwhere the operator $({\\partial \\over \\partial X})^{-1}$ is defined by\n\\begin{equation}\n({\\partial \\over \\partial X})^{-1} \\int f(Z) e^{g(Z)X}dZ =\n\\int {f(Z)\\over g(Z)} e^{g(Z)X} dZ~. \n\\label{invder}\n\\end{equation}\nThis equation, which plays the r\\^ole of the Schr\\\"odinger equation in \nthe alternative scheme just outlined, has been studied and solved for \nsome quantum \nmechanical systems \\cite{noipra},\\cite{manko}. 
\nThe classical limit of \\eqn{evo} is easily seen to be \n\\begin{equation}\n{\\dot w}(X,\\mu,\\nu,t)= \\left\\{\\frac{\\mu}{m} {\\partial\\over \\partial \\nu}+\\nu \nV'\\left(-\\left({\\partial \\over \\partial X}\\right)^{-1} {\\partial \\over \\partial \n\\mu}\\right){\\partial \\over \\partial X} \\right\\} w(X,\\mu,\\nu,t)~,\n\\label{evocl}\n \\end{equation}\nwhere $V'$ is the derivative of the potential with respect to its \nargument. Equation \\eqn{evocl} \nmay be checked to be equivalent to the Boltzmann equation \nfor a classical probability distribution $f(q,p,t)$,\n\\begin{equation}\n{\\partial f\\over \\partial t} + {p\\over m} {\\partial f \\over \\partial q} - {\\partial V \\over \n\\partial q} {\\partial f \\over \\partial p} =0,\n\\end{equation}\nafter performing the change of variables\n\\begin{equation}\nw(X,\\mu,\\nu,t) = {1\\over 2\\pi} \\int f(q,p,t) e^{ik(X-\\mu q -\\nu p) } \ndk~ dq~ dp~.\n\\end{equation}\nHence, the classical and quantum evolution equations only differ by \nterms of higher order in $\\hbar$. Moreover, for potentials quadratic in \n$\\hat q$, the higher order terms cancel out and the quantum evolution \nequation coincides with the classical one. \nThis leads to the remarkable result that there is no difference between \nthe evolution of the distributions of probability for quantum and \nclassical observables, when the system is described by a Hamiltonian \nquadratic in positions and momenta. For systems of this kind, the \npropagator is the same \\cite{ovman}. Of course, what makes the difference \nis the initial condition.\n\n\\section{Generalization to $N$ degrees of freedom}\nWe consider now a system of $N$ interacting particles sitting on the \nsites of a lattice (we choose it to be one-dimensional for simplicity).\nThe Hamiltonian of the system is \n\\begin{equation}\n\\hat H = \\sum_{i=1}^N {{\\hat p_i}^2\\over 2m_i} + V(\\hat q).\\label{hamn}\n\\end{equation}\nWe assume the masses to be equal to unity. 
We take the potential \nto be of the form:\n\\begin{equation}\nV(\\hat q)= \\sum_{i=1}^N \\left ( {1\\over{2a^2}} (\\hat q_{i+1} - \\hat q_i)^2\n+ U(\\hat q_i)\\right)\n\\end{equation}\nwhere $a$ is the lattice spacing and $U(\\hat q_i)$ is the part of the \npotential which depends only on the position of the $i$-th particle.\nThis specification is not essential for the purposes of this section, \nbut it will become necessary for understanding the limit to the \ncontinuum, which will be considered in the next section.\nThe quantum characteristic function for the $N$-dimensional system \nmay be defined as \n\\begin{equation}\n\\chi (k_1,...k_N) = <e^{i\\sum_{j=1}^N k_j \\hat X_j}>~,\\label{chin}\n\\end{equation}\nwhere $<\\hat A> = \\mbox{\\rm Tr} (\\hat\\rho \\hat\nA)$, and $\\hat\\rho$ is the density operator of the system. (In case \nthere is no interaction between different sites of the lattice the \ndensity operator may be factorized and the characteristic function is \njust the product $ \\chi (k_1,...k_N) = \\Pi_{i=1}^N \\chi(k_i) $ .) \n\nPerforming the Fourier transform of \\eqn{chin} the MDF is \nthen given by\n\\begin{equation} \nw(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t)= {1\\over (2\\pi)^N} \\int dk_1 ... dk_N\\, \ne^{-i\\sum_ik_iX_i}\n<e^{i\\sum_{j=1}^N k_j \\hat X_j}>~ ; \\label{mdfn} \n\\end{equation} \nwhere $\\sigma$ is a collective index. It may be shown \nthat this is a probability distribution, namely that it is positive \ndefinite and normalized, provided $\\hat X_i$ are observables. The proof \ngoes along the same lines as in the one-dimensional case. We first suppose that there \nis no interaction between different sites at some initial time $t_0$ \nand we assume for simplicity that the system is in a pure state $|\\psi> \n= |\\psi_1>\\otimes ... \\otimes |\\psi_N>$. Using the factorization \nproperty of the quantum characteristic function Eq. \\eqn{mdfn} may be \nseen to reduce to the product $w(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t_0)= \n\\prod_i\\rho_i(X_i,X_i,t_0)=\\rho(X,X,t_0)$, which is positive and \nnormalized. 
Then the Liouville equation guarantees that this result stays \nvalid when the interaction is switched on.\nIn analogy with the one-dimensional case we now introduce the variables\n\\begin{equation}\nX_i = \\mu_i q_i + \\nu_i p_i \\label{xi}\n\\end{equation}\nwith $\\hat X_i$ accordingly defined. \n \nIntroducing the notation $|Z_\\sigma>\\equiv |Z_1>\\otimes...\\otimes|Z_N>$\nwe rewrite \\eqn{mdfn} as\n\\begin{eqnarray}\n&& w(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t)= {1\\over (2\\pi)^N} \\int \n\\prod_{i=1}^N dk_i \n\\, dZ_i \\, e^{-i\\sum_{j=1}^N k_j X_j}\n<Z_\\sigma|\\hat\\rho\\, e^{i\\sum_{j=1}^N k_j \\hat X_j}|Z_\\sigma> \\cr \n&&= {1\\over (2\\pi)^N} \\int \\Pi_{i=1}^N dk_i\\, dZ_i \\, \\rho(Z_\\sigma, \nZ_\\sigma -k_\\sigma \\nu_\\sigma\\hbar ) \ne^{-i\\sum_{j=1}^N k_j[X_j-\\mu_j(Z_j-k_j\\nu_j\\hbar\/2)]} ,\n\\end{eqnarray} \nwhere $\\rho(Z_\\sigma, Z'_\\sigma)= <Z_\\sigma|\\hat\\rho|Z'_\\sigma>$.\nPerforming the change of variables \n\\begin{equation}\nZ'_i=Z_i~,~~ Z''_i= Z_i - k_i\\nu_i\\hbar\n\\end{equation}\nwe have\n\\begin{eqnarray}\nw(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t)&=&{1\\over (2\\pi\\hbar )^N} \\int \n\\prod_{i=1}^N \\left ({dZ'_i\\, d Z''_i\\over |\\nu_i|}\\right) \n\\rho(Z'_\\sigma,Z''_\\sigma,t) \\nonumber \\\\\n&\\times&\\exp \\left[-i\\sum_{j=1}^N {Z'_j-Z''_j\\over \n\\nu_j\\hbar}\\left(X_j-\\mu_j{Z'_j+Z''_j\\over 2}\\right)\\right] \\label{wron}\n\\end{eqnarray}\nwhich can be inverted to \n\\begin{equation}\n\\rho (X_\\sigma,X'_\\sigma,t)={1\\over (2\\pi)^N} \\int \\prod_{i=1}^N d\\mu_i \\, dY_i\n~w(Y_\\sigma,\\mu_\\sigma,X_\\sigma-X'_\\sigma)\n \\exp \\left[ \n{i\\over \\hbar}\\sum_{j=1}^N \\left( Y_j-\\mu_j{X_j+X'_j\\over 2}\\right)\\right] . 
\n\\label{rown}\n\\end{equation}\nOnce again, we recall that the inversion of \\eqn{wron} is made possible by\nchoosing the variables $X_i$ as in \\eqn{xi}.\nWe now use the Liouville equation \\eqn{liou} to get \n\\begin{eqnarray}\n{\\dot w}(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t) &=& {-i\\over (2\\pi\\hbar)^N} \n\\int \\prod_{i=1}^N\\left({1\\over |\\nu_i| } dZ_i\\, d Z'_i \\right)\n\\Biggl[ \n\\sum_{j=1}^N -{\\hbar^2\\over 2m}\n\\left( \n\\frac{\\partial^2}{\\partial Z_j^2} - \\frac{\\partial^2}{\\partial Z_j^{'2}} \n\\right) \\\\\n&+& \\left(V(Z_\\sigma) - V(Z'_\\sigma)\\right)\\Biggr] \n\\rho(Z_\\sigma,Z'_\\sigma,t) \n\\exp \\left[-i\\sum_{l=1}^N {Z_l-Z'_l\\over \n\\nu_l\\hbar}\\left(X_l-\\mu_l{Z_l+Z'_l\\over 2}\\right)\\right] ~.\\nonumber\n\\end{eqnarray}\nIntegrating by parts and assuming the density matrix to be zero at \ninfinity, we finally have\n\\begin{eqnarray}\n{\\dot w}(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t) &=& \\Biggl\\{ \n\\sum_{i=1}^N\\mu_i {\\partial\\over \\partial \\nu_i}+{i\\over \\hbar} \n\\left[ \nV\\left(\n-({\\partial \\over \\partial X_\\sigma})^{-1} {\\partial \\over \\partial \\mu_\\sigma} \n-{i\\nu_\\sigma\\hbar\\over 2} {\\partial\\over \\partial X_\\sigma} \n\\right) \n\\right. \\nonumber\\\\\n&-& \\left. 
\nV\\left(\n-({\\partial \\over \\partial X_\\sigma})^{-1} {\\partial \\over \\partial \\mu_\\sigma} \n+{i\\nu_\\sigma\\hbar\\over 2} {\\partial\\over \\partial X_\\sigma}\n\\right)\n\\right] \\Biggr\\} w(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t)~,\n\\label{evon}\n\\end{eqnarray}\nwhere the inverse derivative is defined as in \\eqn{invder}.\nFor future convenience we write out the term containing the potential, \nmaking the interaction between neighbours explicit:\n\\begin{eqnarray}\n&&\\left[ \nV\\left(-({\\partial \\over \\partial X_\\sigma})^{-1} {\\partial \\over \\partial \\mu_\\sigma} \n-{i\\nu_\\sigma\\hbar\\over 2} {\\partial\\over \n\\partial X_\\sigma}\\right) - V\\left(-({\\partial \\over \\partial X_\\sigma})^{-1} \n{\\partial \\over \\partial \\mu_\\sigma} \n+{i\\nu_\\sigma\\hbar\\over 2} {\\partial\\over \\partial X_\\sigma}\\right)\\right] \\nonumber\\\\\n&&= \\left[ \nU\\left(-({\\partial \\over \\partial X_\\sigma})^{-1} {\\partial \\over \\partial \\mu_\\sigma} \n-{i\\nu_\\sigma\\hbar\\over 2} {\\partial\\over \n\\partial X_\\sigma}\\right) - U\\left(-({\\partial \\over \\partial X_\\sigma})^{-1} \n{\\partial \\over \\partial \\mu_\\sigma} \n+{i\\nu_\\sigma\\hbar \\over 2} {\\partial\\over \\partial X_\\sigma}\\right)\\right]\\nonumber\\\\\n&&- {i\\hbar\\over a^2} \\sum_{i=1}^N \\nu_i \\left[ {\\partial \\over \\partial X_i} \n\\left({\\partial \\over \\partial X_{i+1}}\\right)^{-1} {\\partial \\over \\partial \\mu_{i+1}}\n-2 {\\partial \\over \\partial \\mu_i} + {\\partial \\over \\partial X_i} \n\\left({\\partial \\over \\partial X_{i-1}}\\right)^{-1} \n{\\partial \\over \\partial \\mu_{i-1}}\\right]~. 
\\label{potenital}\n\\end{eqnarray}\nWhen considering the classical limit we have\n\\begin{eqnarray}\n{\\dot w}(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t)&=& \\left\\{\\sum_{i=1}^N\\mu_i \n{\\partial\\over \\partial \\nu_i}+ \\nu_i\n{1\\over a^2} \\left[ {\\partial \\over \\partial X_i} \n\\left({\\partial \\over \\partial X_{i+1}}\\right)^{-1} {\\partial \\over \\partial \\mu_{i+1}}\n\\right.\\right. \\nonumber\\\\\n&-& \\left. \\left. 2 {\\partial \\over \\partial \\mu_i} + {\\partial \\over \\partial X_i} \n\\left({\\partial \\over \\partial X_{i-1}}\\right)^{-1} \n{\\partial \\over \\partial \\mu_{i-1}}\\right] \\right. \\nonumber\\\\\n& & \\left. +\\nu_i U_i\\left(-({\\partial \\over \\partial X_i})^{-1} {\\partial \\over \\partial \n\\mu_i} \\right) {\\partial\\over \\partial X_i} \\right\\}\nw(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t)~,\\label{evoncl}\n\\end{eqnarray}\nwhere $U_i$ is the derivative of the self-interaction potential with \nrespect to the $i$-th variable. \nEquation \\eqn{evoncl} may be seen to be equivalent to the Boltzmann \nequation as in the one-dimensional case. Moreover, Hamiltonians which \nare quadratic in positions and momenta yield the same evolution \nequations for classical and quantum probability distributions.\n\n\\section{Generalization to Field Theory}\nWe now consider a scalar quantum field theory described by the \nHamiltonian \n\\begin{equation}\n\\hat H = \\int d^d x~ \\left[{1\\over 2} \\hat\\pi^2(x) + {1\\over 2} \n\\sum_{b=1}^d(\\partial_b\\hat\\phi(x))^2 + U(\\hat\\phi(x))\\right] ~.\n\\label{hamfi}\n\\end{equation}\n$d$ is the spatial dimension, while \n$U(\\phi(x))$ is the self-interaction potential, polynomial in the \nfield $\\hat\\phi$. 
The Hamiltonian \\eqn{hamfi} is easily seen to be \nobtained from the discrete Hamiltonian \\eqn{hamn} by taking the \ncontinuum limit ($a\\rightarrow 0$) with the following rules:\n\\begin{eqnarray}\na^{-d\/2}\\hat q_i \\rightarrow \\hat\\phi(x)~~&&~~\na^{-d\/2}\\hat p_i \\rightarrow \\hat\\pi(x)\\nonumber\\\\\na^{d}\\sum_i \\rightarrow \\int d^d x~~&&~~\na^{-(d\/2+1)}(\\hat q_{i+1} - \\hat q_i) \\rightarrow {\\partial \\hat \\phi(x)\\over \n\\partial x_b}~.\n\\end{eqnarray}\nIn analogy with the discrete case we introduce the field\n\\begin{equation}\n\\hat\\Phi(x)= \\mu(x)\\hat\\phi(x) + \\nu(x)\\hat\\pi(x)\n\\end{equation}\nwhere $\\mu(x)=\\lim_{a\\rightarrow 0} a^{-d\/2} \\mu_i$, and \n$\\nu(x)=\\lim_{a\\rightarrow 0} a^{-d\/2} \\nu_i$. \n\nThe quantum characteristic functional, which will now play the r\\^ole \nof a generating functional for correlation functions of the fields, may \nbe defined as\n\\begin{equation}\n\\chi(k(x))= <e^{i\\int d^d x~ k(x) \\hat\\Phi(x)}> =\\mbox{\\rm Tr} \\left(\\hat\\rho(t)\ne^{i\\int d^d x~ k(x) \\hat\\Phi(x)}\\right)~.\n\\end{equation}\nThe functional Fourier transform of $\\chi(k(x))$, which we will call the\nmarginal distribution functional (MD${\\cal F}$), still defines a\nprobability distribution. This can be understood by recognizing that it\nis the limit of the MDF for the discrete $N$-dimensional system\nconsidered in the previous section: \n\\begin{equation}\nw(\\Phi(x), \\mu(x), \\nu(x), t) = \\int {\\cal D}k\\; e^{-i\\int k(x) \\Phi(x) dx} \n\\chi(k)=\\lim_{a\\rightarrow 0} w(X_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t)~,\n\\label{mdfc}\n\\end{equation}\nwhere $\\prod_i {dk_i\\over (2\\pi)^N }\\rightarrow \\int {\\cal D} k$.\nAlso, the density matrix functional may be defined as the limit of Eq. 
\n\\eqn{rown} to be\n\\begin{equation}\n\\rho(\\Phi, \\Phi',t)= \\int {\\cal D} \\mu {\\cal D} \\Psi \\;\nw(\\Psi,\\mu,\\Phi-\\Phi') \\exp\\left\\{ {i\\over \\hbar} \\int dy \n\\left[\\Psi(y)-\\mu(y)\\left({\\Phi(y)+\\Phi'(y)\\over 2}\\right)\\right] \\right\\}~.\n\\end{equation}\nThen, the evolution equation for the probability distribution \nfunctional is easily obtained by taking the limit of \\eqn{evon}: \n\\begin{eqnarray}\n&&{\\dot w}(\\Phi(x), \\mu(x), \\nu(x), t) = \n\\left\\{ \\int d^d x~ \n\\left[ \\mu(x) {\\delta\\over \\delta \\nu(x)} + 2 \\nu(x){\\delta\\over \\delta\n\\Phi(x)}\\Delta \n\\left[\n\\left(\n{\\delta\\over \\delta \\Phi(x)}\n\\right)^{-1}\n{\\delta\\over \\delta \\mu(x)} \n\\right]\n\\right]\n\\right.\\nonumber\\\\\n &&+ \\left. {i\\over \\hbar} \n\\left[ U\n\\left[\n\\left( {-\\delta\\over \\delta \\Phi(x)}\n\\right)^{-1} \n{\\delta\\over\\delta \\mu(x)} - {i \\nu(x)\\hbar\\over 2} {\\delta\\over \\delta \\Phi(x)}\n\\right] \n\\right.\\right.\\nonumber\\\\\n&-& \\left. \\left. U\n\\left[\n\\left( {-\\delta\\over \\delta \\Phi(x)}\n\\right)^{-1} {\\delta\\over\n\\delta \\mu(x)} + {i \\nu(x)\\hbar\\over 2} \n{\\delta\\over \\delta \\Phi(x)}\n\\right] \n\\right] \n\\right\\}\nw(\\Phi(x), \\mu(x), \\nu(x), t)~. \\label{evoc}\n\\end{eqnarray}\nThe inverse functional derivative \n$\\left({\\delta\\over \\delta \\Phi(x)}\\right)^{-1}$ is defined by\n\\begin{equation}\n\\left({\\delta\\over \\delta \\Phi(x)}\\right)^{-1} \n\\int {\\cal D}k\\; e^{-i\\int k(y) \\Phi(y) dy} \n= \\int {\\cal D}k\\; {i\\over k(x)}e^{-i\\int k(y) \\Phi(y) dy} ~,\n\\end{equation}\nwhile the notation $\\Delta [f(x)]$ stands for $f(x+\\Delta x )- f(x)$. 
\nPerforming an expansion in powers of $\\hbar$ the classical limit may be \nobtained as in the previous sections.\n\n\\section{The Quantum Characteristic Function as a Generating Function}\nIn this section we discuss the connection between the \nprobability representation described above both for quantum mechanics \nand quantum field theory and a slightly different point of view\ndeveloped in \\cite{wet2}, where evolution equations are found for a \nsuitably defined Euclidean partition function. \nThere are two main ingredients in our\napproach: one is the probabilistic interpretation for the\ndistribution describing the observables, the other is the equivalence\nbetween the description based on the MDF and the conventional\ndescription based on the density matrix. The first aspect is guaranteed \nby the Glauber theorem which states that the Fourier transform of the\nquantum characteristic function associated with observables is a\nprobability distribution. The second aspect, namely the invertibility\nof the MDF in terms of the density matrix, is achieved by introducing\nconfiguration space variables which take values in an ensemble of\nreference frames in phase space, each labelled by the two parameters, $\\mu,\n\\nu$. \nThus, the evolution equations which we have found (\\eqn{evo}, \\eqn{evon}, \n\\eqn{evoc}), together with suitable initial conditions, completely \ncharacterize the state of the given quantum \nsystem. These equations assume a simpler form when their Fourier \ntransform is performed. 
We have\n\\begin{equation}\n\\chi(k,\\mu,\\nu,t)= \\int dX e^{ikX} w(X,\\mu,\\nu,t)~,\n\\end{equation}\nwith obvious generalizations to the $N$ dimensional case and to field \ntheory.\nFor the one-dimensional quantum systems considered in section 1, \nthe Fourier transform of Eq.\\eqn{evo} yields an evolution equation for \nthe quantum characteristic function itself:\n\\begin{equation}\n{\\dot \\chi}(k,\\mu,\\nu,t) = \\left\\{\\frac{1}{m}\n\\mu {\\partial\\over \\partial \\nu}+{i\\over \n\\hbar} \\left[ \nV\\left({1 \\over i k} {\\partial \\over \\partial \\mu} -{k\\nu \\hbar\\over2} \\right) \n- V\\left({1 \\over ik} {\\partial \\over \\partial \\mu} \n+{k \\nu \\hbar\\over2}\\right)\\right]\\right\\} \\chi(k,\\mu,\\nu,t)~.\n\\label{evotr}\n\\end{equation}\nFor the $N$-dimensional quantum systems considered in section 2, the \nFourier transform of Eq.\\eqn{evon} yields \n\\begin{eqnarray}\n{\\dot \\chi}(k_\\sigma,\\mu_\\sigma,\\nu_\\sigma,t) &=& \\Biggl\\{\n\\frac{1}{m}\\sum_{i=1}^N\\mu_i {\\partial\\over \\partial \\nu_i}+{i\\over \\hbar} \n\\left[ \nV\\left(\n{1\\over i k_\\sigma} {\\partial \\over \\partial \\mu_\\sigma} \n-{\\nu_\\sigma k_\\sigma \\hbar\\over2} \n\\right) \\right. \\nonumber\\\\\n&-&\\left. \nV\\left( \n{1 \\over i k_\\sigma} \n{\\partial \\over \\partial \\mu_\\sigma} \n+{\\nu_\\sigma k_\\sigma\\hbar\\over 2} \n\\right)\n\\right]\n\\Biggr\\} \\chi(k_\\sigma,\n\\mu_\\sigma,\\nu_\\sigma,t)~,\n\\label{evontr}\n\\end{eqnarray}\nwhile Fourier transforming Eq.\\eqn{evoc} we have, for quantum field \ntheory,\n\\begin{eqnarray}\n{\\dot \\chi}(k(x), \\mu(x), \\nu(x), t) &=& \\left\\{ \\int d^d x~ \n\\left[ \n\\frac{1}{m}\\mu(x) {\\delta\\over \\delta \\nu(x)} - 2i\\hbar k(x)\\nu(x)\\Delta \n\\left[ \n{1\\over i k(x)}{\\delta\\over \\delta \\mu(x)} \n\\right] \n\\right] \n\\right. \\nonumber \\\\\n&+& \\left. i \n\\left[\nU\\left[\n{1\\over i k(x)} {\\delta\\over\\delta \\mu(x)} - {k(x) \\nu(x)\\hbar\\over2} \n\\right] \\right.\\right. 
\\nonumber\\\\\n&-&\\left. \\left. \nU\\left[ \n{1\\over i k(x)} {\\delta\\over\\delta \\mu(x)} +{k(x) \\nu(x)\\hbar\\over2} \n\\right] \\right] \\right\\}\n\\chi(k(x), \\mu(x), \\nu(x), t) \\label{evoctr}~ .\n\\end{eqnarray}\nNow the comparison with the results of \\cite{wet2} may be easily understood.\nLet us stick to quantum field theory for definiteness.\nThe quantum characteristic functional, which is a generating functional \nfor correlation functions of the $\\Phi$ field, \ncoincides with the generating functional considered in\n\\cite{wet2}\n\\begin{equation}\nZ(\\mu', \\nu',t) = \\mbox{\\rm Tr} \\left(\n\\hat\\rho(t)\\exp\n\\left\\{ \\int d^d x~ \\left[ \\mu'(x)\\hat\\phi(x)+ \\nu'(x)\\hat\\pi(x) \\right]\n\\right\\}\n\\right)\n~, \\label{part}\n\\end{equation}\nafter rescaling the parameters $\\mu$ and $\\nu$ to $\\mu'=ik\\mu$ and\n$\\nu'=ik\\nu$ (of course, the same holds for quantum mechanics).\nConsequently, the evolution equations for the characteristic functional\nmay be seen to be equal to those found in \\cite{wet2} for the\ngenerating functional \\eqn{part} provided the parameters $\\mu'$ and\n$\\nu'$ are rescaled as specified. \n\nGoing back to the initial remark of this section, we may conclude that\nthe quantum characteristic function and its evolution equation (or the\ngenerating functional in \\eqn{part}) are more interesting from an\noperational point of view as they determine the correlation functions and\ntheir time evolution. On the other hand the introduction of the MDF is\nboth relevant and necessary from a theoretical point of view. In fact\nit allows a unified description of classical and quantum phenomena in\nterms of probability distributions obeying different evolution\nequations. 
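The same collapse seen earlier for quadratic potentials occurs in the transformed equations. For the one-dimensional harmonic oscillator V(x) = x^2/2 (units hbar = m = omega = 1, an assumption of this sketch) the potential bracket in \eqn{evotr} reduces to -nu d/dmu, so that the characteristic function obeys dchi/dt = mu dchi/dnu - nu dchi/dmu, which is solved by a Gaussian with a rotating second moment:

```python
import sympy as sp

k, mu, nu, t = sp.symbols('k mu nu t', real=True)

# Harmonic oscillator, hbar = m = omega = 1 (conventions of this sketch):
# the bracket in the transformed evolution equation collapses, leaving
#   dchi/dt = mu dchi/dnu - nu dchi/dmu.
mut = mu*sp.cos(t) - nu*sp.sin(t)
nut = nu*sp.cos(t) + mu*sp.sin(t)

# chi = exp(-k^2 <X^2>/2) for a squeezed Gaussian state with
# <q^2> = 2, <p^2> = 1/2 (hypothetical numbers):
s2 = 2*mut**2 + nut**2/2
chi = sp.exp(-k**2*s2/2)

residual = sp.diff(chi, t) - (mu*sp.diff(chi, nu) - nu*sp.diff(chi, mu))
print(residual.subs({k: 0.5, mu: 0.7, nu: -0.4, t: 0.9}).evalf())  # numerically zero
```

This mirrors, at the level of the characteristic function, the statement that quantum and classical evolutions coincide for quadratic Hamiltonians.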
Also it justifies the introduction of the $X$ variable, as a\nvariable taking values in an ensemble of reference frames \\eqn{x}, in\nview of the invertibility of the MDF in terms of the density matrix.\nThis seems to us the profound motivation for introducing such a\ncombination of phase space variables in the quantum characteristic\nfunctional and in the generating functional \\eqn{part} . We stress once\nagain that $X$ and its field analogue $\\Phi$ are, for each couple\n$(\\mu,\\nu)$, configuration space variables in the transformed reference\nframe labelled by $(\\mu,\\nu)$. \n\n\n\\section{Conclusions}\nIn this letter we have presented an extension of the probabilistic\nrepresentation of quantum mechanics to quantum field theory. In this\nframework classical and quantum phenomena, both statistically\ndescribed, only differ by the evolution equations of the distributions\nof probabilities for the relevant observables. Quantum observables are\ndescribed by a distribution of probability, the MDF, and the time\nevolution by an integro-differential equation for the MDF. We recently\naddressed the problem of finding the Green's function for the\ntime-evolution equation of the MDF \\cite{noipra}. The problem was\nsolved for quadratic Hamiltonians, and a characterization of such a\npropagator in terms of the time--dependent invariants of the system was\nfound. This propagator represents the transition {\\it probability} of\nthe system from a quantum state to another. Thus, a generalization to\nquantum field theory would be interesting in our opinion, and is\npresently under consideration. \nAnother promising application of the probabilistic point of view \n is suggested in\n\\cite{wet2} where it is used to study the approach to equilibrium\nof non equilibrium quantum field theories. 
\nAn extension to relativistic quantum field theory would also be \ninteresting, though it poses problems of interpretation which are not \nunderstood at the moment.\n\n\\section{Introduction}\nDuring the past four decades a great deal of interest has been shown in identifying and characterizing qualitative and quantitative properties of finite dimensional integrable nonlinear dynamical systems \\cite{book1,int12}. Several powerful mathematical methods have been developed to solve\/integrate nonlinear ordinary differential equations (ODEs). Some widely used methods to solve nonlinear ODEs in the contemporary literature are (i) Painlev$\\acute{e}$ singularity structure analysis \\cite{pain,pain1,mlbook}, (ii) Lie symmetry analysis \\cite{olv,hydon,stephani,blu1,blu,ibr}, (iii) the Darboux method \\cite{darb1} and (iv) the Jacobi last multiplier method \\cite{jlm1,nuci1,nuci}. Among these, the Lie group method advocated by Sophus Lie plays a vital role \\cite{olv,hydon,stephani,blu1,ibr}. The method is essentially based on the invariance of differential equations under a continuous group of point transformations; such transformations usually have the form $T=T(t,x,\\epsilon),~X=X(t,x,\\epsilon),$ where $(t,x)$ and $(T,X)$ are the old and new independent and dependent variables of the given ODE, respectively, and $\\epsilon$ denotes the group parameter. The transformations depend only on the variables $t$ and $x$ and not on the derivative $\\dot{x}$. Transformations of this type are called the point symmetry group of a differential equation when this group of transformations leaves the differential equation invariant \\cite{hydon,stephani,blu1,blu}. The Norwegian mathematician Sophus Lie, the founder of this method, developed an algorithm to determine the symmetry groups associated with a given differential equation in a systematic way. 
Once the symmetry group associated with the differential equation is explored, it can be used to analyze the differential equation in several ways. For example, the symmetry groups can be used (i) to derive new solutions from old ones \\cite{blu1,olv}, (ii) to reduce the order of the given equation \\cite{olv,blu1,hydon}, (iii) to discover whether or not a differential equation can be linearized and to construct an explicit linearization when one exists \\cite{kum1,kum2,new_leach} and (iv) to derive conserved quantities \\cite{olv}. \n\nHowever, studies have also shown that certain nonlinear ODEs which are integrable by quadratures do not admit Lie point symmetries \\cite{lopex,mur1}. To understand the integrability of these nonlinear ODEs, through Lie symmetry analysis, attempts have been made to extend Lie's theory of continuous group of point transformations in several directions. A few notable extensions which have been developed for this purpose are (i) contact symmetries \\cite{cont,ci4,ci5,ci3}, (ii) hidden and nonlocal symmetries \\cite{ci6,non_past,nonlocal,nonlocal1,nonlocal2,nonlocal3,nonlocal4,nonlocal5,nonlocal7,si11,si12,sir2}, (iii) $\\lambda$-symmetries \\cite{mur1,mur3,mur4,tel2}, (iv) adjoint symmetries \\cite{blu,blu1,blu11} and (v) telescopic vector fields \\cite{tel1,tel2}.\n\nIn the conventional Lie symmetry analysis the invariance of differential equations under one parameter Lie group of continuous transformations is investigated with point transformations alone. One may also consider the coefficient functions $\\xi$ and $\\eta$ (see Eq.(2.2)) in the infinitesimal transformations to be functions of $\\dot{x}$ besides $t$ and $x$. Such derivative included transformations are called contact transformations. In fact, Lie himself considered this extension \\cite{19}. 
The method of finding contact symmetries for certain linear oscillators (harmonic and damped harmonic oscillators) was worked out by Schwarz and by Cerver$\\acute{o}$ and Villarroel \\cite{ci4,ci7}. The integrability of a class of nonlinear oscillators through the dynamical symmetries approach was carried out by Lakshmanan and his collaborators, see for example Refs. \\cite{sir_old1,sir_old2}.\n\nInvestigations have also revealed that nonlinear ODEs do admit nonlocal symmetries. A symmetry is nonlocal if the infinitesimal transformations depend upon an integral. Initially some of these nonlocal symmetries were observed as hidden symmetries of ODEs in the following manner. Suppose an $n^{th}$-order ODE is reduced to an $(n-1)^{th}$-order one with the help of a Lie point symmetry. Now, substituting the transformation (which was used to reduce the $n^{th}$-order equation to the $(n-1)^{th}$-order one) in the other symmetry vector fields of the $n^{th}$-order equation, one observes that these symmetry vector fields turn out to be symmetry vector fields of the order-reduced ODE. In other words, all these vector fields satisfy the linearized equation of the order-reduced equation. Upon analyzing these vector fields one may observe that some of them retain their point symmetry nature while the rest of them turn out to be nonlocal vector fields of the reduced ODE. Since these nonlocal symmetry vector fields cannot be identified through Lie point symmetry analysis, they have been coined hidden symmetries. These nonlocal hidden symmetries were first observed by Olver and later they were largely investigated by Abraham-Shrauner and her collaborators \\cite{nonlocal1, nonlocal3,nonlocal5}. \n\nSubsequently it has been shown that many of these nonlocal or hidden symmetries can be connected to $\\lambda$-symmetries. The $\\lambda$-symmetries concept was introduced by Muriel and Romero \\cite{mur1}. 
These $\\lambda$-symmetries can be derived by a well-defined algorithm which includes Lie point symmetries as a specific sub-case, and they have an associated order reduction procedure which is similar to the classical Lie method of reduction \\cite{mur3}. Although $\\lambda$-symmetries are not Lie point symmetries, the unique prolongation of vector fields to the space of variables $(t, x, \\dot{x},\\ddot{x},...)$ for which the Lie reduction method applies is always a $\\lambda$-prolongation, for some function $\\lambda(t, x, \\dot{x},\\ddot{x},...)$. For more details on the $\\lambda$-symmetries approach, one may refer to the works of Muriel and Romero \\cite{mur4}. The method of finding $\\lambda$-symmetries for a second-order ODE has been discussed in depth by these authors and the advantage of finding such symmetries has also been demonstrated by them. The authors have also developed an algorithm to determine integrating factors and integrals from $\\lambda$-symmetries for second-order ODEs \\cite{mur3}.\n\n\nVery recently, Pucci and Sacomandi have generalized $\\lambda$-symmetries by introducing telescopic vector fields. Telescopic vector fields are more general vector fields which encompass Lie point symmetries, contact symmetries and $\\lambda$-symmetries as their sub-cases. For more details about these generalized vector fields we refer to the works of Pucci and Sacomandi \\cite{tel1}.\n\nThe connection between symmetries and integrating factors of higher order ODEs was investigated by several authors \\cite{ci8}. The literature is large and in this paper we discuss only one method, namely the adjoint symmetry method, which was developed by Bluman and Anco \\cite{blu,blu1}. The main observation of Bluman and Anco was that the integrating factors are the solutions of the adjoint equation of the linearized equation. 
If the adjoint equation coincides with the linearized equation, then the underlying system is self-adjoint and in this case the symmetries themselves become integrating factors. A main advantage of this method is that we can find the integrals straightaway by multiplying the ODE by the integrating factors and integrating the resultant expression \\cite{blu11}.\n\nThe symmetry methods described above are all applicable to equations of any order. Each method has its own merits and demerits. In this paper, we review the methods of finding symmetries (from Lie point symmetries to telescopic vector fields) of a differential equation and demonstrate how these symmetries are helpful in determining integrating factors and integrals of the ODEs and in establishing their integrability. We demonstrate all these symmetry methods for a second-order ODE; each of these procedures extends straightforwardly to higher-order ODEs. We illustrate all of the methods with the same example, so that the general reader can understand the advantages, disadvantages and complexity involved in each one of these methods. \n\nThe example which we have chosen to illustrate the symmetry methods is the modified Emden equation (MEE) \\cite{exboo2,int14,int15,Dix:1990,int17,chand1,carinena,ames}\n\\begin{equation}\n\\ddot{x}+3{x}\\dot{x}+x^3=0,\\qquad \\dot{ }=\\frac{d} {dt}. \\label{main} \n\\end{equation}\nIn the contemporary literature Eq. (\\ref{main}) is also called the second-order Riccati equation or the Painlev$\\acute{e}$-Ince equation. This equation has received attention from both mathematicians and physicists for a long time \\cite{int1,int12}. For example, Painlev$\\acute{e}$ studied this equation with two arbitrary parameters, $\\ddot{x}+\\alpha{x}\\dot{x}+\\beta x^3=0$, and identified Eq. (\\ref{main}) as one of the four integrable cases of it \\cite{pain}. 
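A short symbolic check, supplementary to the text, makes the integrability of the MEE concrete: the substitution x = u'/u linearizes the equation to u''' = 0, so the quadratic choice u = t^2/2 + I_1 t + I_2 yields the two-parameter solution family x(t) = (t + I_1)/(t^2/2 + I_1 t + I_2), which can be verified to satisfy the MEE identically:

```python
import sympy as sp

t = sp.symbols('t')
I1, I2 = sp.symbols('I1 I2')   # the two integration constants

# Two-parameter solution family of  xddot + 3 x xdot + x^3 = 0,
# obtained from x = u'/u with u = t^2/2 + I1*t + I2 (u''' = 0):
x = (t + I1) / (t**2/2 + I1*t + I2)

residual = sp.diff(x, t, 2) + 3*x*sp.diff(x, t) + x**3
print(sp.simplify(residual))   # 0
```

The residual cancels identically as a rational function of t, I_1, I_2, confirming that this family solves (the general solution of) the MEE.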
The differential equation (\\ref{main}) arises in a variety of mathematical problems such as univalued functions defined by second-order differential equations \\cite{exboo1} and the Riccati equation \\cite{exboo2}. On the other hand, physicists have shown that this equation arises in different contexts. For example, it arises in the study of equilibrium configurations of a spherical gas cloud acting under the mutual attraction of its molecules and subjected to the laws of thermodynamics \\cite{int15,Dix:1990}. Equation (\\ref{main}) admits time-independent nonstandard Lagrangian and Hamiltonian structures \\cite{chand1}. In the contemporary literature, this equation has been considered by several authors in different contexts. For example, Chandrasekar, Senthilvelan and Lakshmanan have studied the linearization and investigated the integrability of this equation through the extended Prelle-Singer procedure \\cite{chand1}. The Lie point symmetries of this equation were also derived by a few authors in different contexts. For example, Mahomed and Leach have studied the invariance of this equation, shown that it admits the $sl(3,R)$ algebra, and constructed a linearizing transformation from the Lie point symmetries. Pandey et al. have identified (\\ref{main}) as one of the nonlinear ODEs that admit maximal Lie point symmetries when they carried out the Lie symmetry analysis of the equation $\\ddot{x}+f(x)\\dot{x}+g(x)=0$, where $f(x)$ and $g(x)$ are functions of $x$ \\cite{pandey1}. Nucci and her group have analyzed this equation in terms of the Jacobi last multiplier \\cite{nuci}. $\\lambda$-symmetries and their associated integrating factors for this equation were investigated by Bhuvaneswari et al. \\cite{bhu1}. Noether symmetries of this equation were also studied in Ref. \\cite{arxiv}. The nonlocal symmetries were also investigated in Refs. \\cite{nonlocal3,nonlocal}. \n\nThe plan of the paper is as follows. In Sec. 
\\ref{2}, we present Lie's invariance analysis to determine the Lie point symmetries of Eq.(\\ref{main}). We also discuss a few applications of Lie point symmetries. In Sec. \\ref{noeth}, we describe the method of finding variational symmetries of this equation. In Sec. \\ref{comta}, we consider a more generalized transformation and present the method of finding contact symmetries of the given differential equation. In Sec. \\ref{sec5}, we discuss the methods that connect symmetries and integrating factors. We consider two different methods, namely, (i) the adjoint symmetries method and (ii) the $\\lambda$-symmetries approach. In Sec. \\ref{hidd}, we introduce the notion of hidden symmetries and list some hidden symmetries of the MEE which are not obtained through Lie symmetry analysis. In Sec. \\ref{nongan}, we introduce nonlocal symmetries and investigate the connection between nonlocal symmetries and $\\lambda$-symmetries. In Sec. \\ref{teles}, we consider a more general vector field, namely the telescopic vector field, and derive these generalized vector fields for the MEE. Finally, we give our conclusions in Sec. \\ref{9th}. \n \n \n\\section{Lie point symmetries}\n\\label{2}\n\nLet us consider a second-order ODE \n\\begin{eqnarray}\n\\ddot{x}=\\phi(t,x,\\dot{x}),\\label{main1}\n\\end{eqnarray}\nwhere the overdot denotes differentiation with respect to `$t$'. 
The invariance of Eq.(\\ref{main1}) under a one-parameter Lie group of infinitesimal point transformations \\cite{mahomed1,pandey1},\n\\begin{eqnarray}\n&&T=t+\\varepsilon \\,\\xi(t,x),~~~X=x+\\varepsilon \\,\\eta(t,x),\\quad \\varepsilon \\ll 1,\\label{asjkm}\n\\end{eqnarray}\nwhere $\\xi(t,x)$ and $\\eta(t,x)$ are arbitrary functions of their arguments and $\\varepsilon$ is a small parameter, is given by\n\\begin{eqnarray}\n&&\\hspace{-1.9cm}\\xi \\frac{\\partial \\phi}{\\partial t}+\\eta \\frac{\\partial \\phi}{\\partial x}+(\\eta_t+\\dot x (\\eta_x-\\xi_t)-\\dot x^2 \\xi_x)\\frac{\\partial \\phi}{ \\partial \\dot x}-(\\eta_{tt}+(2\\eta_{tx}-\\xi_{tt})\\dot x+(\\eta_{xx}-2\\xi_{tx})\\dot x^2\\nonumber \\\\&& \\hspace{4.30cm}-\\xi_{xx} \\dot x^3+(\\eta_x-2\\xi_t-3\\dot x \\xi_x)\\ddot x) =0.\\label{liec1}\n\\end{eqnarray}\nSubstituting the known expression $\\phi$ in (\\ref{liec1}) and equating the coefficients of various powers of $\\dot{x}$ to zero, one obtains a set of linear partial differential equations for the unknown functions $\\xi$ and $\\eta$. Solving them consistently, we obtain the Lie point symmetries ($\\xi$ and $\\eta$) associated with the given ODE. The associated vector field is given by $V=\\xi \\frac{\\partial} {\\partial t}+\\eta\\frac{\\partial} {\\partial x}$. \n\nOne may also introduce the characteristic $Q=\\eta-\\dot{x}\\xi$ and rewrite the invariance condition (\\ref{liec1}) in terms of the single variable $Q$ as \\cite{olv}\n\\begin{equation}\n\\frac{d^2Q} {dt^2}-\\phi_{\\dot{x}}\\frac{dQ} {dt}-\\phi_x Q=0.\n\\label{met1411}\n\\end{equation}\nSolving Eq.(\\ref{met1411}) one can obtain $Q$. From $Q$ one can recover the functions $\\xi$ and $\\eta$. 
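As a quick consistency check of Eq.(\\ref{met1411}) (a sketch assuming Python's sympy library is available, not part of the original analysis), one can verify symbolically that the characteristic $Q=-\\dot{x}$ of the time-translation symmetry ($\\xi=1$, $\\eta=0$) satisfies the determining equation for the MEE, for which $\\phi=-3x\\dot{x}-x^3$:

```python
import sympy as sp

t, X, XD = sp.symbols('t x xdot')
x = sp.Function('x')(t)
on_traj = {X: x, XD: x.diff(t)}

# MEE written as xddot = phi(t, x, xdot)
phi = -3*X*XD - X**3
phi_traj = phi.subs(on_traj)

# characteristic of time translation: Q = eta - xdot*xi = -xdot
Q = -x.diff(t)

# determining equation (met1411): Q'' - phi_xdot Q' - phi_x Q = 0 on solutions
res = (Q.diff(t, 2)
       - phi.diff(XD).subs(on_traj)*Q.diff(t)
       - phi.diff(X).subs(on_traj)*Q)

# impose the equation of motion xddot = phi (and its differential consequence)
res = res.subs(x.diff(t, 3), phi_traj.diff(t)).subs(x.diff(t, 2), phi_traj)
print(sp.simplify(res))  # 0
```

Running the same computation with an undetermined $Q(t,x,\\dot{x})$ reproduces the system of linear partial differential equations mentioned above.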
The invariants associated with the vector field $V$ can be found by solving the following characteristic equation \\cite{olv,blu1}\n\\begin{equation}\n\\frac{dt} {\\xi}=\\frac{dx} {\\eta}=\\frac{d\\dot{x}}{\\eta^{(1)}}.\\label{invaria}\n\\end{equation}\nHere $\\eta^{(1)}$ represents the first prolongation which is given by $\\eta_t+\\dot x (\\eta_x-\\xi_t)-\\dot x^2 \\xi_x$. Integrating the characteristic equation (\\ref{invaria}) we obtain two invariants, namely $u(t,x)$ and $v(t,x,\\dot{x})$. The derivative of these two invariants, \n\\begin{eqnarray}\nw=\\frac{dv} {du}=\\frac{v_t+\\dot{x}v_x+\\phi v_{\\dot{x}}} {u_t+\\dot{x}u_x},\\label{cite1}\n\\end{eqnarray} \ndefines a second-order differential invariant. Integrating the above equation (\\ref{cite1}) we can obtain the solution of the given equation (\\ref{main1}).\n\\subsection{Example: modified Emden equation}\nEq.(\\ref{main}) is invariant under the infinitesimal transformation (\\ref{asjkm}) provided the condition (\\ref{liec1}) is satisfied. 
Substituting the expression $\\phi=-3 x \\dot{x}-x^3$ in (\\ref{liec1}), we get\n\\begin{eqnarray}\n\\hspace{-1cm}\\eta(-3\\dot{x}&-3x^2)+(\\eta_t+\\dot{x}\\eta_x-\\dot{x}\\xi_t-\\dot{x}^2\\xi_x)(-3x)-(\\eta_{tt}+\\dot{x}(2\\eta_{tx}-\\xi_{tt})\\nonumber \\\\ \n&\\hspace{-0.9cm}+\\dot{x}^2(\\eta_{xx}-2\\xi_{tx})-\\xi_{xx}\\dot{x}^3+(\\eta_x-2\\xi_t-3\\dot{x}\\xi_x)(-3 x \\dot{x}-x^3))=0.\n\\end{eqnarray}\nEquating the coefficients of various powers of $\\dot{x}$ to zero and solving the resultant set of partial differential equations for $\\xi$ and $\\eta$, we obtain \\cite{bhu1,pandey1,new_leach}\n\\begin{eqnarray}\n\\xi&=&x\\bigg(a_2+a_1t-\\frac{c_2+b_1} {2}t^2-\\frac{c_1+d_2} {2}t^3+\\frac{d_1} {4}t^4\\bigg)\\nonumber \\\\&&-\\frac{d_1} {2}t^3+\\bigg(c_1+\\frac{3} {2} d_2\\bigg)t^2+b_1t+b_2,\\nonumber\\\\\n\\eta&=&-x^3\\bigg(a_1t+a_2+\\frac{d_1} {4}t^4-\\bigg(\\frac{c_1+d_2} {2}\\bigg)t^3-\\frac{c_2+b_1} {2}t^2\\bigg)\\nonumber \\\\&&+x^2\\bigg(d_1t^3-3\\bigg(\\frac{c_1+d_2} {2}\\bigg)t^2-(c_2+b_1)t+a_1\\bigg)\\nonumber \\\\&&+x\\bigg(-\\frac{3} {2} d_1t^2+c_1t+c_2\\bigg)+d_1t+d_2,\n\\end{eqnarray}\nwhere $a_i,b_i,c_i$ and $d_i,~i=1,2,$ are real arbitrary constants. 
The associated Lie vector fields are given by\n\\begin{eqnarray}\n&&V_1=\\frac{\\partial} {\\partial{t}},~~V_2=t\\bigg(1-\\frac{xt} {2}\\bigg)\\frac{\\partial} {\\partial{t}}+x^2t\\bigg(-1+\\frac{xt} {2}\\bigg)\\frac{\\partial} {\\partial{x}},\\nonumber\\\\\n&&V_3=x\\frac{\\partial} {\\partial{t}}-x^3\\frac{\\partial} {\\partial{x}},~~V_4=xt\\frac{\\partial} {\\partial{t}}+x^2\\bigg(1-xt\\bigg)\\frac{\\partial} {\\partial{x}},\\nonumber\\\\\n&&V_5=-\\frac{xt^2} {2}\\frac{\\partial} {\\partial{t}}+x\\bigg(1-xt+\\frac{x^2 t^2} {2}\\bigg)\\frac{\\partial} {\\partial{x}},\\nonumber\\\\\n&&V_6=t^2\\bigg(1-\\frac{xt} {2}\\bigg)\\frac{\\partial} {\\partial{t}}+xt\\bigg(1-\\frac{3} {2}xt+\\frac{x^2 t^2} {2}\\bigg)\\frac{\\partial} {\\partial{x}},\\nonumber\\\\\n&&V_7=\\frac{3} {2}t^2\\bigg(1-\\frac{xt} {3}\\bigg)\\frac{\\partial} {\\partial{t}}+\\bigg(1-\\frac{3} {2}x^2t^2+\\frac{x^3 t^3} {2}\\bigg)\\frac{\\partial} {\\partial{x}},\\nonumber\\\\\n&&V_8=-\\frac{t^3} {2}\\bigg(1-\\frac{xt} {2}\\bigg)\\frac{\\partial} {\\partial{t}}+t\\bigg(1-\\frac{3} {2}xt+x^2t^2-\\frac{x^3t^3} {4}\\bigg)\\frac{\\partial} {\\partial{x}}.\\label{vf8}\n\\end{eqnarray}\n\nOne can explore the algebra associated with the Lie group of infinitesimal transformations (\\ref{asjkm}) by analyzing the commutation relations between the vector fields. Since the example under consideration admits the maximal number of Lie point symmetries (eight), the underlying Lie algebra turns out to be $sl(3,R)$, which can be unambiguously verified from the vector fields (\\ref{vf8}) \\cite{new_leach}. In the following, we present a few applications of the Lie vector fields (\\ref{vf8}).\n\n\n\\subsection{Applications of Lie point symmetries}\n\\subsubsection{General solution}\n\\label{sol_lie}\nThe first and foremost application of Lie point symmetries is to explore the solution of the given equation through the order reduction procedure. 
The order reduction procedure is carried out by constructing the invariants associated with the vector fields \\cite{olv,blu1}. In the following, we illustrate the order reduction procedure by choosing the vector field $V_3$. For the remaining vector fields one can proceed and obtain the solution in the same manner.\n\nSubstituting the expressions $\\xi$, $\\eta$ and $\\eta^{(1)}$ in the characteristic equation $\\frac{dt} {\\xi}=\\frac{dx} {\\eta}=\\frac{d\\dot{x}} {\\eta^{(1)}}$, we get\n\\begin{equation}\n\\frac{dt} {x}=\\frac{dx} {-x^3}=\\frac{d\\dot{x}} {-(3\\dot{x}x^2+\\dot{x}^2)}.\\label{invar}\n\\end{equation}\nIntegrating the characteristic equation (\\ref{invar}) we find the invariants $u$ and $v$ are of the form\n\\begin{equation}\nu=t-\\frac{1} {x},~~v=\\frac{x(\\dot{x}+x^2)} {\\dot{x}}.\\label{vvv}\n\\end{equation}\nThe second-order invariant can be found from the relation $w=\\frac{dv} {du}$ (see Eq.(\\ref{cite1})). Substituting Eq. (\\ref{vvv}) and their derivatives in (\\ref{cite1}) and simplifying the resultant equation we arrive at\n\\begin{equation}\n\\frac{dv} {du}=x^2(1+2\\frac{x^2} {\\dot{x}}+\\frac{x^4} {\\dot{x}^2})=v^2.\\label{ebt}\n\\end{equation}\nIntegrating equation (\\ref{ebt}) we find $v=-\\frac{1} {I_1+u}$, where $I_1$ is an integration constant. Substituting the expressions $u$ and $v$ in this solution and rewriting the resultant equation for $\\dot{x}$, we end up with \n\\begin{eqnarray}\n\\dot{x}-\\frac{x-I_1x^2-tx^2} {I_1+t}=0.\\label{first}\n\\end{eqnarray} \nIntegrating Eq.(\\ref{first}) we obtain the general solution of the MEE equation in the following form\n\\begin{equation}\nx(t)=\\frac{2(I_1+t)} {I_2+2I_1t+t^2},\\label{lie_soln}\n\\end{equation}\nwhere $I_2$ is the integration constant. 
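The general solution (\\ref{lie_soln}) can be confirmed symbolically; the following sketch (assuming the sympy library) substitutes it back into the MEE $\\ddot{x}+3x\\dot{x}+x^3=0$:

```python
import sympy as sp

t, I1, I2 = sp.symbols('t I1 I2')

# general solution (lie_soln); I1 and I2 are the integration constants
x = 2*(I1 + t)/(I2 + 2*I1*t + t**2)

# residual of the MEE: xddot + 3 x xdot + x^3
residual = x.diff(t, 2) + 3*x*x.diff(t) + x**3
print(sp.simplify(residual))  # 0
```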
The solution exactly coincides with the one found by other methods \\cite{chand1,bhu1}.\n\n\\subsubsection{Linearization}\n\nOne can also identify a linearizing transformation from the Lie point symmetries if the given equation admits the $sl(3,R)$ algebra \\cite{new_leach}. The method of finding a linearizing transformation for the modified Emden equation was discussed in detail by Mahomed and Leach \\cite{mahomed1} and later by others \\cite{duarte,chand1}. The underlying idea is the following. One has to choose two vector fields in such a way that they constitute a two-dimensional algebra in the real plane \\cite{19} and transform them into the canonical forms $\\frac{\\partial}{\\partial \\tilde{x}}$ and $\\tilde{t}\\frac{\\partial}{\\partial \\tilde{x}}$, where $\\tilde{t}$ and $\\tilde{x}$ are the new independent and dependent variables. For the MEE one can generate this subalgebra by considering the vector fields $V_8$ and $V_9 = V_7 -2V_6$. To transform them into the canonical forms one should introduce the transformations \\cite{nuci}\n\\begin{eqnarray}\n \\tilde{t}=\\frac{tx-1}{t(tx-2)},~~\\tilde{x}=-\\frac{x}{2t(tx-2)}.\\label{ght11}\n\\end{eqnarray}\nIn these new variables, $(\\tilde{t},\\tilde{x})$, Eq.(\\ref{main}) becomes the free particle equation $\\frac{d^2\\tilde{x}}{d\\tilde{t}^2}=0$. From the free particle solution one can derive the general solution of the MEE (\\ref{main}) with the help of (\\ref{ght11}). The solution coincides with the one given in Eq.(\\ref{lie_soln}).\n\n\n\\subsubsection{Lagrangian from Lie point symmetries}\nAnother interesting application of Lie point symmetries is that one can explore Lagrangians for the given second-order differential equation through the Jacobi last multiplier \\cite{nuci}. 
The inverse of the determinant $\\Delta$ \\cite{nuci1},\n \\begin{equation}\n\\Delta = \\left| \\begin{array}{ccc}\n1 & \\dot{x} & \\ddot{x} \\\\\n\\xi_1 & \\eta_1 & \\eta_{1}^{(1)} \\\\\n\\xi_2 & \\eta_2 & \\eta_{2}^{(1)} \\end{array}\\right|,~~~~M=\\frac{1} {\\Delta},\\label{delta}\n\\end{equation}\nwhere $(\\xi_1,\\eta_1)$ and $(\\xi_2,\\eta_2)$ are two sets of Lie point symmetries of the second-order ODE (\\ref{main1}) and $\\eta_{1}^{(1)}$ and $\\eta_{2}^{(1)}$ are their corresponding first prolongations, gives the Jacobi last multiplier for the given equation. The determinant establishes the connection between the multiplier and Lie point symmetries. The Jacobi last multiplier $M$ is related to the Lagrangian $L$ through the relation \\cite{nuci1}\n\\begin{equation} \nM=\\frac{\\partial^2 L}{\\partial \\dot{x}^2}.\\label{met21}\n\\end{equation}\nOnce the multiplier is known, the Lagrangian $L$ can be derived by integrating the expression (\\ref{met21}) two times with respect to $\\dot{x}$. \n\nTo obtain the Jacobi last multiplier $M$ for the MEE, we choose the vector fields $V_3$ and $V_1$. Evaluating the determinant (\\ref{delta}) with these two vector fields, we find\n\\begin{equation}\n\\Delta =\\left|\n\\begin{array}{ccc}\n 1 & \\dot{x} & -3x\\dot{x}-x^3 \\\\\nx & -x^3 & -\\dot{x}(\\dot{x}+3x^2) \\\\\n1 & 0 & 0\n\\end{array}\\right|=-(x^2+\\dot{x})^3\n\\end{equation}\nso that\n\\begin{equation}\nM=-\\frac{1} {(x^2+\\dot{x})^3}\\label{mmm1}.\n\\end{equation}\nWe can obtain the Lagrangian by integrating the expression (\\ref{mmm1}) twice with respect to $\\dot{x}$. Doing so, we find\n\\begin{eqnarray}\nL&=&-\\frac{1} {2(\\dot{x}+x^2)}+f_1(t,x)\\dot{x}+f_2(t,x),\\label{lagra11}\n\\end{eqnarray}\nwhere $f_1$ and $f_2$ are two arbitrary functions. The Lagrangian (\\ref{lagra11}) leads to the equation of motion (\\ref{main}) with $\\frac{\\partial f_1} {\\partial t}=\\frac{\\partial f_2} {\\partial x}$. 
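That the Lagrangian (\\ref{lagra11}) (with $f_1=f_2=0$) indeed reproduces the MEE can be checked symbolically; a sketch assuming sympy, whose \\texttt{euler\\_equations} helper returns $\\partial L/\\partial x-\\frac{d}{dt}\\,\\partial L/\\partial \\dot{x}=0$:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
x = sp.Function('x')

# Lagrangian (lagra11) with the arbitrary functions f1 = f2 = 0
L = -1/(2*(x(t).diff(t) + x(t)**2))

# Euler-Lagrange equation dL/dx - d/dt(dL/dxdot) = 0
eq = euler_equations(L, x(t), t)[0]

# its left-hand side equals (xddot + 3 x xdot + x^3)/(xdot + x^2)^3,
# i.e. the MEE up to a nonvanishing factor
mee = x(t).diff(t, 2) + 3*x(t)*x(t).diff(t) + x(t)**3
check = sp.simplify(eq.lhs*(x(t).diff(t) + x(t)**2)**3 - mee)
print(check)  # 0
```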
One can also find more Lagrangians for Eq.(\\ref{main}) by considering other vector fields given in (\\ref{vf8}).\n\n\n\\section{Noether symmetries}\n\\label{noeth}\nIn the previous section, we have discussed the invariance of the equation of motion under the one-parameter Lie group of infinitesimal transformations (\\ref{asjkm}). If the given second-order equation has a variational structure, then one can also determine the symmetries which leave the action integral invariant. Such symmetries are called variational symmetries. Variational symmetries are important since they provide conservation laws via Noether's theorem \\cite{noether}. In the following, we recall the method of finding variational symmetries \\cite{olv,blu}.\n\nLet us consider a second-order dynamical system described by a Lagrangian $L(t,x,\\dot{x})$ and let\nthe action integral associated with the Lagrangian be\n\\begin{eqnarray} \nS=\\int L(t,x,\\dot{x}) dt.\n\\label {gme08}\n\\end{eqnarray}\nNoether's theorem states that whenever the action integral is invariant under\nthe one-parameter group of infinitesimal transformations (\\ref{asjkm}), then the solution of Euler's equation admits the\nconserved quantity \\cite{Gelfand:1963,Lutzky:1978}\n\\begin{eqnarray} \nI=(\\xi\\dot{x}-\\eta)\\frac{\\partial{L}}{\\partial{\\dot{x}}}-\\xi L+f,\n\\label {gme010}\n\\end{eqnarray}\nwhere $f$ is a function of $t$ and $x$. 
The functions $\\xi,\\;\\eta$ and $f$ can\nbe determined from the equation \n\\begin{eqnarray} \nE\\{L\\}=\\xi\\frac{\\partial{L}}{\\partial{t}}+\\eta\\frac{\\partial{L}}{\\partial{x}}\n+(\\dot{\\eta}-\\dot{x}\\dot{\\xi})\\frac{\\partial{L}}{\\partial{\\dot{x}}},\n\\label {gme011}\n\\end{eqnarray}\nwhere the overdot denotes differentiation with respect to time and \n\\begin{eqnarray} \nE\\{L\\}=-\\dot{\\xi}L+\\dot{f}.\n\\label {gme012}\n\\end{eqnarray}\nEquation (\\ref{gme011}) can be derived by differentiating equation\n(\\ref{gme010}) with respect to time and simplifying the resultant expression. Solving equation (\\ref{gme011}) one can obtain explicit expressions for the functions $\\xi,\\;\\eta$ and $f$. Substituting these expressions back in (\\ref{gme010}) one can get the associated integral of motion.\n\n\\subsection{Example: modified Emden equation}\nIn this sub-section, we illustrate the method of finding Noether symmetries and their associated conserved quantities for the MEE (\\ref{main}), which has a nonstandard Lagrangian of the form (see Eq.(\\ref{lagra11}), up to an overall multiplicative constant that does not affect the equation of motion)\n\\begin{eqnarray} \nL=\\frac{1}{3(\\dot{x}+x^2)},\\label{lagra}\n\\label {kps03}\n\\end{eqnarray}\nwhere we have chosen the arbitrary functions $f_1$ and $f_2$ to be zero for simplicity. 
Substituting the Lagrangian (\\ref{lagra}) and its derivatives in (\\ref{gme011}), we get\n\\begin{eqnarray} \n \\eta\\bigg(-\\frac{2x} {3(\\dot{x}+x^2)^2}\\bigg)+\n(\\eta_t+\\dot{x}\\eta_x-\\dot{x}(\\xi_t+\\dot{x}\\xi_x))\\bigg(-\\frac{1} {3(\\dot{x}+x^2)^2}\\bigg)\\nonumber\\\\\n=-(\\xi_t+\\dot{x}\\xi_x)\\bigg(\\frac{1} {3(\\dot{x}+x^2)}\\bigg)\n+f_t+\\dot{x}f_x.\n\\label {gme014}\n\\end{eqnarray}\n\nEquating the coefficients of various powers of $\\dot{x}$ to zero and solving the\nresultant equations, we find\n\\begin{eqnarray} \n\\eta&=&12D-6(C+4Dt)x+3(B+3t(C+2Dt))x^2\\nonumber\\\\\n&&-\\frac{3} {2}(A+t(2B+3Ct+4Dt^2))x^3,\\nonumber\\\\ \n\\xi&=&E-3t(C+2Dt)+\\frac{3}{2}(A+t(2B+3Ct+4Dt^2))x,\\nonumber\\\\ \nf&=&At+Bt^2+Ct^3+Dt^4,\n\\label {kps06}\n\\end{eqnarray}\nwhere $A, B, C$, $D$ and $E$ are real arbitrary constants. The associated vector fields are\n\\begin{eqnarray} \n\\hspace{-1cm}&&X_1=x\\frac{\\partial}{\\partial t}\n-x^3\\frac{\\partial}{\\partial x},~~ X_2=xt\\frac{\\partial}{\\partial t} \n\t+\\bigg(x^2-tx^3\\bigg)\\frac{\\partial}{\\partial x},\\nonumber\\\\ \n\\hspace{-1cm}&&X_3=\\bigg(t-\\frac{3t^2x} {2}\\bigg)\\frac{\\partial}{\\partial t}\n+\\bigg(2x-3tx^2+\\frac{3t^2x^3} {2}\\bigg)\\frac{\\partial}{\\partial x},\\nonumber\\\\\n\\hspace{-1cm}&&X_4=\\bigg(\\frac{t^3x} {2}-\\frac{t^2} {2}\\bigg)\\frac{\\partial}{\\partial t}\n+\\bigg(1-2tx +\\frac{3t^2x^2} {2}-\\frac{t^3x^3} {2}\\bigg)\\frac{\\partial}{\\partial x},~X_5=\\frac{\\partial}{\\partial t}. \n\\label {sgme016}\n\\end{eqnarray}\n\nThe Noether symmetries are a subset of the Lie point symmetries. In the above, while the vector fields $X_1, X_2$ and $X_5$ exactly match the Lie vector fields $V_3,V_4$ and $V_1$ (see Eq. 
(\\ref{vf8})), the remaining two Noether vector fields $X_3$ and $X_4$ can be expressed as linear combinations of other Lie point symmetries, that is $X_3=V_2+2V_5$ and $X_4=V_7-2V_6$.\n\nSubstituting each one of the vector fields separately into (\\ref{gme010}) we obtain the associated integrals of motion. They turn out to be\n\\begin{eqnarray} \n\\hspace{-1.2cm}&&I_1= t-\\frac{x}{x^2+\\dot{x}},~~I_2= \\frac{(-x+tx^2+t\\dot{x})^2}{(x^2+\\dot{x})^2},\n\\nonumber\\\\ \n\\hspace{-1.2cm}&&I_3=\\frac{-9t^2x^3+3t^3x^4-3x(2+3t^2\\dot{x})+2tx^2(6+3t^2\\dot{x})\n+t\\dot{x}(6+3t^2\\dot{x})}{(x^2+\\dot{x})^2},\n\\nonumber\\\\ \n\\hspace{-1.2cm}&&I_4=\\frac{3(2-2tx+t^2x^2+t^2\\dot{x})}{(x^2+\\dot{x})},\n\\nonumber\\\\ \n\\hspace{-1.2cm}&&I_5=\\frac{2\\dot{x}+x^2}{3(x^2+\\dot{x})^2},\\qquad\\qquad \\qquad \\qquad\\frac{dI_i}{dt}=0,~i=1,2,3,4,5.\n\\label {kps11}\n\\end{eqnarray}\n\nOne can easily verify that out of the five integrals two of them are independent and the remaining three can be expressed in terms of these two integrals, that is $I_2=I_1^2,~~ I_3=I_1I_4$ and $I_5=\\frac{1}{9}(I_4-3I_1^2)$. We can construct the general solution of (\\ref{main}) with the help of the two independent integrals $I_1$ and $I_4$. The underlying solution coincides with (\\ref{lie_soln}) after rescaling.\n\n\\section{Contact symmetries}\n\\label{comta}\nIn the previous two cases, Lie point symmetries and Noether symmetries, we have considered the functions $\\xi$ and $\\eta$ to be functions of $t$ and $x$ only. One may relax this condition by allowing the functions $\\xi$ and $\\eta$ to depend on $\\dot{x}$ besides $t$ and $x$. This generalization was considered by Sophus Lie himself \\cite{19} and later by several authors \\cite{ci4,ci5,sir_old1,sir_old2}. Such velocity-dependent infinitesimal transformations are called contact transformations and the corresponding functions $\\xi$ and $\\eta$ are called contact symmetries. 
The contact symmetries for the harmonic and damped harmonic oscillators were worked out explicitly by Schwarz and by Cerver\\'{o} and Villarroel, respectively \\cite{ci4,ci5}. Several nonlinear second-order ODEs do not admit Lie point symmetries but they were proved to be integrable by other methods. To demonstrate the integrability of these nonlinear ODEs in Lie's sense one should consider velocity-dependent transformations. In the following, we give a brief account of the theory and illustrate the underlying ideas by considering the MEE as an example.\n\\subsection{Method of Lie}\nLet a one-parameter group of contact transformations be given by \\cite{ci4}\n\\begin{eqnarray}\n\\hspace{-1cm}T=t+\\varepsilon \\,\\xi(t,x,\\dot{x}),~X=x+\\varepsilon \\,\\eta(t,x,\\dot{x}),~\\dot{X}=\\dot{x}+\\varepsilon \\, \\eta^{(1)}(t,x,\\dot{x}), \\quad \\varepsilon \\ll 1.\\label{asm}\n\\end{eqnarray}\nThe functions $\\xi$ and $\\eta$ determine an infinitesimal contact transformation if it is possible to write them in the form \\cite{ci7}\n\\begin{eqnarray}\n\\xi(t,x,\\dot{x})=-\\frac{\\partial W} {\\partial \\dot{x}},~~\\eta(t,x,\\dot{x})=W-\\dot{x}\\frac{\\partial W} {\\partial \\dot{x}},~~\\eta^{(1)}=\\frac{\\partial W} {\\partial t}+\\dot{x}\\frac{\\partial W} {\\partial x},\n\\end{eqnarray} where the characteristic function $W(t,x,\\dot{x})$ is an arbitrary function of its arguments. If $W$ is linear in $\\dot{x}$, the corresponding contact transformation is an extended point transformation and it holds that $W(t,x,\\dot{x})=\\eta(t,x)-\\dot{x}\\xi(t,x)$. A second-order differential equation (\\ref{main1}) is said to be invariant under the contact transformation (\\ref{asm}) if $\\xi \\frac{\\partial \\phi}{\\partial t}+\\eta \\frac{\\partial \\phi}{\\partial x}+\\eta^{(1)}\\frac{\\partial \\phi}{ \\partial \\dot x}-\\eta^{(2)} =0$ on the manifold $\\ddot{x}-\\phi(t,x,\\dot{x})=0$ in the space of the variables $t,x,\\dot{x}$ and $\\ddot{x}$ \\cite{ci4,ci5}. 
Here $\\eta^{(1)}$ and $\\eta^{(2)}$ are the first and second prolongations with $\\eta^{(1)}=\\dot{\\eta}-\\dot{x}\\dot{\\xi}$ and $\\eta^{(2)}=\\dot{\\eta}^{(1)}-\\ddot{x}\\dot{\\xi}$. The invariance condition provides the following linear partial differential equation for the characteristic function $W$:\n\\begin{eqnarray}\n\\hspace{-1.4cm}&&\\frac{\\partial W}{\\partial \\dot{x}} \\frac{\\partial \\phi}{\\partial t} +\\left(\\dot{x} \\frac{\\partial W}{\\partial \\dot{x}}-W\\right)\\frac{\\partial \\phi}{\\partial x} - \\left(\\frac{\\partial W}{\\partial t} +\\dot{x} \\frac{\\partial W}{\\partial x}\\right) \\frac{\\partial \\phi}{\\partial \\dot{x}} +\n\\bigg(\\phi^2 \\frac{\\partial ^2 W}{\\partial \\dot{x}^2}+2 \\phi \\frac{\\partial^2 W }{\\partial t \\partial \\dot{x}}\\nonumber \\\\ \\hspace{-1.3cm}&&\\qquad\\qquad \\quad+2 \\phi \\dot{x} \\frac{\\partial^2 W}{\\partial x \\partial \\dot{x}}+\\phi \\frac{\\partial W}{\\partial x}+\\frac{\\partial ^2W}{\\partial t^2}+2 \\dot{x} \\frac{\\partial^2 W}{\\partial t\\partial x}+\\dot{x}^2 \\frac{\\partial ^2W}{\\partial x^2}\\bigg)=0.\\label{cintchar}\n\\end{eqnarray}\nIntegrating Eq.(\\ref{cintchar}) one can obtain the characteristic function $W$. From $W$ one can recover the contact symmetries $\\xi$ and $\\eta$. 
One can also recover the necessary independent integrals from the characteristic function (see, for example, Ref.\\cite{ci4}).\n\\subsection{Example: modified Emden equation}\nTo determine the contact symmetries of the MEE, we have to find the characteristic function $W$ from (\\ref{cintchar}), that is, by solving the linear second-order partial differential equation\n\\begin{eqnarray}\n\\hspace{-1.4cm}&&-(3\\dot{x}+3x^2)\\left(\\dot{x} \\frac{\\partial W}{\\partial \\dot{x}}-W\\right) +3x \\left(\\frac{\\partial W}{\\partial t} +\\dot{x} \\frac{\\partial W}{\\partial x}\\right) +\\bigg((3x\\dot{x}+x^3)^2 \\frac{\\partial ^2 W}{\\partial \\dot{x}^2}\\nonumber \\\\ \\hspace{-1.4cm}&&\\quad\\quad \\quad-2 (3x\\dot{x}+x^3)\\frac{\\partial^2 W}{\\partial t \\partial \\dot{x}}-2 (3x\\dot{x}+x^3) \\dot{x} \\frac{\\partial^2 W}{\\partial x \\partial \\dot{x}}-(3x\\dot{x}+x^3) \\frac{\\partial W}{\\partial x}\\nonumber \\\\ \\hspace{-1.7cm}&&\\qquad\\qquad \\qquad\\qquad\\qquad\\qquad\\qquad+\\frac{\\partial ^2W}{\\partial t^2}+2 \\dot{x}\\frac{\\partial^2 W}{\\partial t \\partial x}+\\dot{x}^2 \\frac{\\partial ^2W}{\\partial x^2}\\bigg)=0.\\label{cintchar1}\n\\end{eqnarray}\n\nOne may find the general solution of the above linear partial differential equation by employing the well-known methods for solving linear partial differential equations. In general $W$ depends upon arbitrary functions and the contact Lie group has an infinite number of parameters. 
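Condition (\\ref{cintchar}) can equivalently be written in terms of the characteristic as $D_t^2W-\\phi_{\\dot{x}}D_tW-\\phi_xW=0$, where $D_t=\\partial_t+\\dot{x}\\partial_x+\\phi\\partial_{\\dot{x}}$ is the total derivative on solutions. In this form a candidate solution is easy to test symbolically; the sketch below (assuming sympy) checks the rational solution $W=\\frac{x^2(1-tx)}{x^2+\\dot{x}}-t^2\\dot{x}$, which appears as $W_1$ below:

```python
import sympy as sp

t, X, XD = sp.symbols('t x xdot')
phi = -3*X*XD - X**3            # MEE: xddot = phi

def Dt(F):
    # total time derivative with xddot replaced by phi
    return F.diff(t) + XD*F.diff(X) + phi*F.diff(XD)

# candidate characteristic function (W1 of the text)
W1 = X**2*(1 - t*X)/(X**2 + XD) - t**2*XD

# invariance condition in characteristic form:
# Dt^2 W - phi_xdot Dt W - phi_x W = 0
res = Dt(Dt(W1)) - phi.diff(XD)*Dt(W1) - phi.diff(X)*W1
print(sp.simplify(res))  # 0
```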
\n\nFor the sake of illustration, in the following, we present two particular solutions of Eq.(\\ref{cintchar1}):\n\\begin{eqnarray} \nW_1=\\frac{x^2 (1-t x)}{x^2+\\dot{x}}-t^2 \\dot{x},~~W_2=-\\frac{x \\dot{x}}{\\sqrt{x^2+2 \\dot{x}}}-\\frac{x \\left(x^2+\\dot{x}\\right)}{\\sqrt{x^2+2 \\dot{x}}}.\n\\end{eqnarray}\nThe infinitesimal vector fields read\n\\begin{eqnarray}\n\\hspace{-0.5cm}\\Omega_1=t^2\\frac{\\partial}{\\partial t}+\\frac{x^2(1-tx)} {\\dot{x}+x^2}\\frac{\\partial}{\\partial x},~~\\Omega_2=\\frac{x}{\\sqrt{2\\dot{x}+x^2}}\\frac{\\partial}{\\partial t}-x\\frac{(\\dot{x}+x^2)} {\\sqrt{2\\dot{x}+x^2}}\\frac{\\partial}{\\partial x}.\\label{kinl}\n\\end{eqnarray}\nAs one can see from (\\ref{kinl}), the infinitesimals $\\xi$ and $\\eta$ depend on the velocity terms also. One can derive the general solution from each one of the contact symmetries by solving the characteristic equation associated with the vector field. We demonstrate this procedure in Sec. \\ref{con_sub}.\n\n\\subsection{Method of Gladwin Pradeep et al.}\n\\label{con_sub}\nIn a recent paper, Gladwin Pradeep et al. proposed a new method of finding contact symmetries for a class of equations \\cite{cont}. Their method involves two steps. In the first step, one has to find a linearizing contact transformation. In the second step, with the help of the contact transformation, one proceeds to construct contact symmetries for the given equation. Once the contact symmetries are determined, the order reduction procedure can be employed to derive the general solution of the given differential equation. 
In the following, we recall this procedure with the MEE as an example.\n\n\\vspace{0.2cm}\n{\\it Step 1: Linearizing contact transformation} \n\\vspace{0.2cm}\n\nThe MEE (\\ref{main}) can be linearized to the free particle equation $\\frac{d^2u} {dt^2}=0$ by the contact transformation (for more details one may refer to Ref.\\cite{cont})\n\\begin{eqnarray}\nx=\\frac{2u\\dot{u}}{1+u^2},\\qquad\\dot{x}=\\frac{2\\dot{u}^2(1-u^2)}{(1+u^2)^2}.\\label{contscx}\n\\end{eqnarray}\nOne may also invert the above transformation (\\ref{contscx}) and obtain\n\\begin{eqnarray}\nu=\\frac{x}{\\sqrt{2\\dot{x}+x^2}},~~\n\\dot{u}=\\frac{(\\dot{x}+x^2)}{\\sqrt{2\\dot{x}+x^2}}.\\label{conct2}\n\\end{eqnarray}\n\n\n\\vspace{0.2cm}\n{\\it Step 2: Contact symmetries}\n\\vspace{0.2cm}\n\nLet the symmetry vector field of the nonlinear ODE (\\ref{main}) and its first prolongation be, respectively, of the form\n$\n\\Omega=\\lambda \\frac{\\partial}{\\partial t}\n+\\mu \\frac{\\partial}{\\partial x},\n$\nand\n\\begin{eqnarray} \n\\Omega^{1}=\\lambda \\frac{\\partial}{\\partial t}\n+\\mu \\frac{\\partial}{\\partial x}+(\\dot{\\mu}\n-\\dot{x}\\dot{\\lambda})\\frac{\\partial}{\\partial \\dot{x}},\n\\label {sym04}\n\\end{eqnarray}\nwhere $\\lambda$ and $\\mu$ are the infinitesimals associated with the variables $t$ and $x$, respectively. \n\nLet the symmetry vector field associated with the linear ODE, $\\frac{d^2u} {dt^2}=0$, be $\\Lambda=\\xi \\frac{\\partial}{\\partial t}+\\eta \\frac{\\partial}{\\partial u},$ and its first extension be $\\Lambda^{1}=\\xi \\frac{\\partial}{\\partial t}+\\eta \\frac{\\partial}{\\partial u}+(\\dot{\\eta}\n-\\dot{u}\\dot{\\xi})\\frac{\\partial}{\\partial \\dot{u}}$. 
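Before transforming the prolongation, the Step 1 claim can itself be verified symbolically: substituting a free particle solution $u=c_1t+c_2$ into (\\ref{contscx}) must yield a solution of the MEE. A sketch assuming sympy ($c_1$, $c_2$ are the free constants):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# u solves the free particle equation u'' = 0
u = c1*t + c2

# contact transformation (contscx): x = 2 u u' / (1 + u^2)
x = 2*u*u.diff(t)/(1 + u**2)

# x(t) must satisfy the MEE: xddot + 3 x xdot + x^3 = 0
residual = sp.simplify(x.diff(t, 2) + 3*x*x.diff(t) + x**3)
print(residual)  # 0
```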
Using the contact transformation (\\ref{conct2}) we can deduce the following differential identities, that is $\\frac{\\partial }{\\partial u}=\\frac{\\partial x}{\\partial u}\\frac{\\partial}{\\partial x}+\\frac{\\partial \\dot{x}}{\\partial u}\\frac{\\partial}{\\partial \\dot{x}}$ and $\\frac{\\partial }{\\partial \\dot{u}}=\\frac{\\partial x}{\\partial \\dot{u}}\\frac{\\partial}{\\partial x}+\\frac{\\partial \\dot{x}}{\\partial \\dot{u}}\\frac{\\partial}{\\partial \\dot{x}}$. Rewriting the first prolongation $\\Lambda^1$ using these relations, we find\n\\begin{eqnarray}\n\\hspace{-0.8cm}\\Lambda^1=\\xi\\frac{\\partial}{\\partial t}+\\left(\\eta\\frac{\\partial x}{\\partial u}+(\\dot{\\eta}-\\dot{u}\\dot{\\xi})\\frac{\\partial x}{\\partial \\dot{u}}\\right)\\frac{\\partial}{\\partial x}+\n\\left(\\eta\\frac{\\partial \\dot{x}}{\\partial u}+(\\dot{\\eta}-\\dot{u}\\dot{\\xi})\\frac{\\partial \\dot{x}}{\\partial \\dot{u}}\\right)\\frac{\\partial}{\\partial \\dot{x}}.\\label{lambda1}\n\\end{eqnarray}\nNow comparing the vector fields (\\ref{lambda1}) and (\\ref{sym04}), we find\n\\begin{eqnarray}\n\\hspace{-0.7cm}\\lambda=\\xi,\\qquad\n\\mu=\\eta\\frac{\\partial x}{\\partial u}+(\\dot{\\eta}-\\dot{u}\\dot{\\xi})\\frac{\\partial x}{\\partial \\dot{u}}=\\frac{\\sqrt{2\\dot{x}+x^2}}{\\dot{x}+x^2}\\left(\\eta\\dot{x}+\\dot{\\eta}x\\right)-\\dot{\\xi}x.\\label{arbitrary-sym}\n\\end{eqnarray}\nThe functions $\\xi$ and $\\eta$ are the symmetries of the free particle equation. 
They are given by \\cite{olv,hydon}\n\\begin{eqnarray} \n&&\\Lambda_1=\\frac{\\partial}{\\partial t},\\quad \n\\Lambda_2=\\frac{\\partial}{\\partial u},\\quad \n\\Lambda_3=t \\frac{\\partial}{\\partial u},\\quad \n\\Lambda_4=u\\frac{\\partial}{\\partial u},\\quad\n\\Lambda_5=u \\frac{\\partial}{\\partial t},\\nonumber\\\\ \n&&\\Lambda_6=t \\frac{\\partial}{\\partial t}\n,\\quad \n\\Lambda_7=t^2 \\frac{\\partial}{\\partial t}\n+tu \\frac{\\partial}{\\partial u},\\quad \n\\Lambda_8=tu \\frac{\\partial}{\\partial t}\n+u^2 \\frac{\\partial}{\\partial u}.\n\\label {sym11}\n\\end{eqnarray}\nSubstituting the above symmetry generators $\\Lambda_i$'s, $i=2,\\ldots,8,$ in Eq. (\\ref{arbitrary-sym}), we can determine the function $\\mu$. Substituting $\\lambda=\\xi$ and $\\mu$ in the vector field $\\Omega=\\xi \\frac{\\partial}{\\partial t}+\\mu \\frac{\\partial}{\\partial x}$, we arrive at the following contact symmetry generators of Eq. (\\ref{main}),\n\\begin{eqnarray} \n&&\\Omega_1=\\frac{\\partial}{\\partial t},~~\\Omega_2=\\frac{\\sqrt{2\\dot{x}+x^2}}{\\dot{x}+x^2}\\dot{x}\\frac{\\partial}{\\partial x},~~\\Omega_3=\\frac{\\sqrt{2\\dot{x}+x^2}}{\\dot{x}+x^2}(t\\dot{x}+x)\\frac{\\partial}{\\partial x},\\nonumber\\\\\n&&\\Omega_4=\\frac{x(2\\dot{x}+x^2)}{(\\dot{x}+x^2)}\\frac{\\partial}{\\partial x},~~\\Omega_5=\\frac{x}{\\sqrt{2\\dot{x}+x^2}}\\frac{\\partial}{\\partial t}-x\\frac{(\\dot{x}+x^2)} {\\sqrt{2\\dot{x}+x^2}}\\frac{\\partial}{\\partial x},\\nonumber\\\\&& \\Omega_6=t\\frac{\\partial}{\\partial t}-x\\frac{\\partial} {\\partial x},\n~~ \\Omega_7=t^2\\frac{\\partial}{\\partial t}+\\frac{x^2(1-tx)} {\\dot{x}+x^2}\\frac{\\partial}{\\partial x},\\nonumber\\\\\n&&\\Omega_8=\\frac{tx}{\\sqrt{(2\\dot{x}+x^2)}}\\frac{\\partial}{\\partial t}+\\bigg(\\frac{x^2\\sqrt{(2\\dot{x}+x^2)}} {\\dot{x}+x^2}- \\frac{tx(\\dot{x}+x^2)} {\\sqrt{(2\\dot{x}+x^2)}}\\bigg)\\frac{\\partial}{\\partial x}.\\label{dyna_symm}\n\\end{eqnarray}\nOne can unambiguously verify that all the symmetry 
generators are solutions of the invariance condition.\n\n\\subsubsection{General solution} \n\nTo derive the general solution of the given equation one has to integrate Lagrange's system associated with the contact symmetries given above. We demonstrate this procedure by considering the vector field $\\Omega_4$ given in Eq.(\\ref{dyna_symm}). For the other vector fields one may follow the same procedure. \n\nThe characteristic equation associated with the vector field $\\Omega_4$ turns out to be\n\\begin{equation}\n\\frac{dt}{0}=\\frac{(\\dot{x}+x^2)dx}{x(2\\dot{x}+x^2)}=-\\frac{(\\dot{x}+x^2)d\\dot{x}}{x^4+x^2\\dot{x}-2\\dot{x}^2}. \\label{inpo}\n\\end{equation}\nIntegrating Eq.(\\ref{inpo}) we find that the invariants $u$ and $v$ are of the form $u=t$ and $v=\\frac{\\dot{x}}{x}+x$. In terms of these variables one can reduce the order of the equation (\\ref{main}). The reduced equation turns out to be the Riccati equation,\n$\\frac{dv}{dt}=-v^2$, whose general solution is given by $v=\\frac{1}{I_1+u},$\nwhere $I_1$ is the integration constant. Substituting the expressions $u$ and $v$ in the above solution and rearranging the resultant expression for $\\dot{x}$, we find\n\\begin{equation}\n\\dot{x}-\\frac{x} {I_1+t}+x^2=0.\\label{riccati-eg1}\n\\end{equation}\nIntegrating (\\ref{riccati-eg1}) one can obtain the general solution of (\\ref{main}). The general solution coincides with (\\ref{lie_soln}) after appropriate rescaling.\n\n\n\n\\section{Symmetries and integrating factors}\n\\label{sec5}\nIn Sec.\\ref{2}, we have discussed only a few applications of Lie point symmetries. One can also determine integrating factors from Lie point symmetries. In fact, Lie himself found the equivalence between integrating factors and Lie point symmetries for first-order ODEs. For second-order ODEs the equivalence has been established only recently \\cite{mur1}. 
The reason is that, unlike first-order ODEs (which admit an infinite number of symmetries), several second-order ODEs do not admit Lie point symmetries although they are integrable by quadratures. Subsequently, attempts have been made to generalize the classical Lie method to yield nontrivial symmetries and integrating factors. Two such generalizations along this direction are (i) the adjoint symmetries method \\cite{blu,blu11} and (ii) the $\\lambda$-symmetries approach \\cite{mur3,mur4}. The adjoint symmetry method was developed by Bluman and Anco \\cite{blu1} and the $\\lambda$-symmetry approach was introduced by Muriel and Romero. The applicability of both methods has been demonstrated for equations which lack Lie point symmetries. In both methods one can find symmetries, integrating factors and integrals associated with the given equation in an algorithmic way.\n\nIn the following, we briefly recall these two powerful methods and demonstrate the underlying ideas by considering the MEE as an example.\n \n\\subsection{Method of Bluman and Anco}\n\nIn general, an integrating factor is a function which, upon multiplying the ODE, yields a first integral. If the given ODE is self-adjoint, then its integrating factors are necessarily solutions of its linearized system (\\ref{met1411}) \\cite{blu}. Such solutions are the symmetries of the given ODE. If a given ODE is not self-adjoint, then its integrating factors are necessarily solutions of the adjoint of its linearized system. Such solutions are known as adjoint symmetries of the given ODE \\cite{blu1}. \n\nLet us consider the second-order ODE (\\ref{main1}). The linearized ODE for Eq.(\\ref{main1}) is given in (\\ref{met1411}). 
The adjoint ODE of the linearized equation is found to be\n\\begin{equation}\nL^*[x]w=\\frac{d^2w} {dt^2}+\\frac{d} {dt}(\\phi_{\\dot{x}}w)-\\phi_xw=0.\\label{ajoin1}\n\\end{equation}\nThe solutions $w=\\Lambda(t,x,\\dot{x})$ of the above equation (\\ref{ajoin1}) holding for any $x(t)$ satisfying the given equation (\\ref{main1}) are the adjoint symmetries of (\\ref{main1}) \\cite{blu1}.\n\n\nThe adjoint symmetry of the Eq.(\\ref{main1}) becomes an integrating factor of (\\ref{main1}) if and only if $\\Lambda(t,x,\\dot{x})$ satisfies the adjoint invariance condition \\cite{blu11}\n\\begin{eqnarray}\nL^*[x]\\Lambda(t,x,\\dot{x})=-\\Lambda_{x}(\\ddot{x}-\\phi)+\\frac{d} {dt}(\\Lambda_{\\dot{x}}(\\ddot{x}-\\phi)). \\label{comp1}\n\\end{eqnarray}\nNow comparing the Eqs. (\\ref{ajoin1}) and (\\ref{comp1}) and collecting the powers of $\\ddot{x}$ and constant terms, we find\n\\numparts\n\\addtocounter{equation}{-1}\n\\label{adjon_con}\n\\addtocounter{equation}{1}\n\\begin{eqnarray}\n&&\\Lambda_{t\\dot{x}}+\\Lambda_{x\\dot{x}}\\dot{x}+2\\Lambda_{x}+\\Lambda \\phi_{\\dot{x}\\dot{x}}+2\\phi_{\\dot{x}}\\Lambda_{\\dot{x}}+\\phi \\Lambda_{\\dot{x}\\dot{x}}=0,\\label{adjo1}\\\\\n&&\\Lambda_{tt}+2\\Lambda_{tx}\\dot{x}+\\Lambda_{xx}\\dot{x}^2+\\Lambda \\phi_{t\\dot{x}}+\\Lambda \\phi_{x\\dot{x}}\\dot{x}+\\phi_{\\dot{x}}\\Lambda_{t}+\\phi_{\\dot{x}}\\Lambda_{x}\\dot{x}\\nonumber \\\\ &&-\\Lambda \\phi_{x}-\\Lambda_{x}\\phi+\\phi \\Lambda_{t\\dot{x}}+\\phi \\Lambda_{x\\dot{x}}\\dot{x}+\\Lambda_{\\dot{x}}\\phi_t+\\Lambda_{\\dot{x}}\\phi_{x}\\dot{x}=0.\\label{adjo2}\n\\end{eqnarray}\n\\endnumparts\nThe solutions of (\\ref{adjo2}) are called the adjoint symmetries. If these solutions also satisfy Eq.(\\ref{adjo1}) then they become integrating factors for the given second-order ODE (\\ref{main1}). 
\n\nThe main advantage of this method is that if the given equation is of odd order or does not have a variational structure, one can still use this method and obtain the integrals in an algorithmic way.\n\n\subsubsection{Example: modified Emden equation}\nFor the MEE, the linearized equation is given by\n\begin{equation}\n\frac{d^2Q} {dt^2}+3x\frac{dQ} {dt}+(3\dot{x}+3x^2) Q=0.\n\label{met1411ad}\n\end{equation}\nThe adjoint equation for the above linearized equation turns out to be\n\begin{equation}\n\frac{d^2w} {dt^2}+\frac{d} {dt}(-3xw)+(3\dot{x}+3x^2)w=0.\label{adj_exam}\n\end{equation}\nSince Eqs. (\ref{met1411ad}) and (\ref{adj_exam}) do not coincide, the function $Q$ is not an integrating factor for the MEE. In this case the integrating factors can be determined from the adjoint symmetry condition (\ref{adj_exam}). The adjoint symmetry determining equation (\ref{adj_exam}) for the present example reads\n\begin{eqnarray}\n&&\hspace{-1.3cm}\Lambda_{tt}+2\Lambda_{tx}\dot{x}+\Lambda_{xx}\dot{x}^2-3\Lambda\dot{x}-3x\Lambda_{t}-3 x\Lambda_{x}\dot{x}+(3\dot{x}+3x^2)\Lambda \nonumber \\ &&\hspace{-1.8cm}+(3 x\dot{x}+x^3)\Lambda_{x}-(3 x \dot{x}+x^3)\Lambda_{\dot{x}t}-(3x\dot{x}+x^3)\dot{x}\Lambda_{x\dot{x}}-(3 \dot{x}+3x^2)\Lambda_{\dot{x}}\dot{x}=0.\label{adjo3}\n\end{eqnarray} \nTwo particular solutions of (\ref{adjo3}) are given by\n\begin{equation}\n\Lambda_1= \frac{x} {(\dot{x}+x^2)^2},~~\Lambda_{2}=\frac{t(-2+tx)} {2(t\dot{x}-x+tx^2)^2}.\n\label{adm112}\n\end{equation}\n \nThese two adjoint symmetries, $\Lambda_1$ and $\Lambda_2$, also satisfy Eq.(\ref{adjo1}). So they become integrating factors for Eq.(\ref{main}). 
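These statements can be tested mechanically. The sketch below (assuming sympy is available) applies the adjoint operator of (\ref{ajoin1}), with $d/dt$ evaluated along solutions of the MEE, to both integrating factors; note that $\Lambda_1$ must carry the squared denominator $x/(\dot{x}+x^2)^2$, consistent with the integrating factor $\mu_1$ obtained later in Sec.\ref{lamb_sec}, for the adjoint condition to hold:

```python
import sympy as sp

t, x, xd = sp.symbols('t x xd')
phi = -3*x*xd - x**3                     # MEE: xddot = phi

def D(f):
    # total derivative along solutions of the MEE
    return sp.diff(f, t) + xd*sp.diff(f, x) + phi*sp.diff(f, xd)

def Lstar(w):
    # adjoint of the linearized MEE, evaluated on solutions
    return D(D(w)) + D(sp.diff(phi, xd)*w) - sp.diff(phi, x)*w

Lam1 = x/(xd + x**2)**2
Lam2 = t*(-2 + t*x)/(2*(t*xd - x + t*x**2)**2)
assert sp.simplify(Lstar(Lam1)) == 0     # Lambda_1 is an adjoint symmetry
assert sp.simplify(Lstar(Lam2)) == 0     # Lambda_2 is an adjoint symmetry
```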
Multiplying the given equation by each of these integrating factors, rewriting the resultant expression as a perfect derivative and integrating, we obtain two integrals $I_1$ and $I_2$ which are of the form\n\begin{eqnarray}\nI_1&=&-\frac{(\dot{x}t-x+tx^2)} {\dot{x}+x^2},~~I_2=-\frac{(2+\dot{x}t^2-2tx+t^2x^2)} {2(\dot{x}t-x+tx^2)}.\label{lam117}\n\end{eqnarray}\nFrom these integrals, $I_1$ and $I_2$, the general solution can be derived. The resultant expression coincides with Eq.(\ref{lie_soln}) after rescaling.\n\n\n\subsection{Method of Muriel and Romero}\nAs we have mentioned earlier, many second-order nonlinear dynamical systems lack Lie point symmetries but have proved to be integrable by other methods. To overcome this problem, efforts have been made to generalize the classical Lie algorithm and obtain integrals and the general solution of such nonlinear ODEs. One such generalization is the $\lambda$-symmetries approach. This approach was developed by Muriel and Romero \cite{mur1}. These symmetries are neither Lie point nor Lie-B\"acklund symmetries and are called $\lambda$-symmetries since they are vector fields that depend upon a function $\lambda$. If this arbitrary function is chosen to be zero, we recover the classical Lie point symmetries. The method of finding $\lambda$-symmetries for a second-order ODE has been discussed in depth by Muriel and Romero, and the advantage of finding such symmetries has also been demonstrated by them \cite{mur3}. The authors have also developed an algorithm to determine integrating factors and integrals from $\lambda$-symmetries for second-order ODEs \cite{mur4}.\n\nConsider a second-order ODE (\ref{main1}). Let $\tilde{V}=\xi(t,x)\frac{\partial}{\partial t}+\eta(t,x)\frac{\partial}{\partial x}$ be a $\lambda$-symmetry of the given ODE for some function $\lambda=\lambda(t,x,\dot{x})$. 
The invariance of the given ODE under the $\lambda$-symmetry vector field is given by \cite{mur1}\n\begin{equation}\n\xi\phi_t+\eta\phi_x+\eta^{[\lambda,(1)]}\phi_{\dot{x}}-\eta^{[\lambda,(2)]}=0,\n\label{beq1}\n\end{equation}\nwhere $\eta^{[\lambda,(1)]}$ and $\eta^{[\lambda,(2)]}$ are the first and second $\lambda$-prolongations. They are given by\n\begin{eqnarray}\n\hspace{-1cm}\eta^{[\lambda,(1)]}&=&(D_t+\lambda)\eta^{[\lambda,(0)]}(t,x)-(D_t+\lambda)(\xi(t,x))\dot{x}=\eta^{(1)}+\lambda(\eta-\dot{x}\xi),\label{beq2}\nonumber \\\n\hspace{-1cm}\eta^{[\lambda,(2)]}&=&(D_t+\lambda)\eta^{[\lambda,(1)]}(t,x,\dot{x})-(D_t+\lambda)(\xi(t,x))\ddot{x}=\eta^{(2)}+f(\lambda),\n\end{eqnarray}\nwhere $f(\lambda)$ is given by $D[\lambda](\eta-\xi \dot{x})+2\lambda(\eta^{(1)}-\xi\ddot{x})+\lambda^2(\eta-\xi \dot{x})$ and $\eta^{(0)}=\eta(t,x)$, and $D$ is the total differential operator $(D=\frac{\partial}{\partial t}+\dot{x}\frac{\partial}{\partial x}+\phi \frac{\partial}{\partial \dot{x}})$. If we put $\lambda=0$ in the above prolongation formulae, we recover the standard Lie prolongation expressions. Solving the invariance condition (\ref{beq1}) we can determine the functions $\xi$, $\eta$ and $\lambda$ for the given equation. We note here that the three unknowns $\xi$, $\eta$ and $\lambda$ have to be determined from the invariance condition (\ref{beq1}). The procedure is as follows.\n\nLet us suppose that the second-order equation (\ref{main1}) has Lie point symmetries. In that case, the $\lambda$-function can be determined in a simpler way without solving the invariance condition (\ref{beq1}).\n\nIf $V$ is a Lie point symmetry of (\ref{main1}) and $Q=\eta-\dot{x}\xi$ is its characteristic, then $v=\frac{\partial} {\partial x}$ is a $\lambda$-symmetry of (\ref{main1}) for $\lambda=\frac{D[Q]} {Q}$. 
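As a concrete illustration of this prescription, the following sketch (assuming sympy is available) uses the characteristic $Q=-x(\dot{x}+x^2)$ of the MEE symmetry $V_3$, anticipating the example worked out below:

```python
import sympy as sp

t, x, xd = sp.symbols('t x xd')
phi = -3*x*xd - x**3                     # MEE: xddot = phi

def D(f):
    # total derivative along solutions of the MEE
    return sp.diff(f, t) + xd*sp.diff(f, x) + phi*sp.diff(f, xd)

Q = -x*(xd + x**2)                       # characteristic of the symmetry V_3
lam = sp.simplify(D(Q)/Q)
assert sp.simplify(lam - (xd/x - x)) == 0   # lambda = D[Q]/Q = xdot/x - x
```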
The $\\lambda$-symmetry satisfies the invariance condition \\cite{mur3} \n\\begin{equation}\n\\phi_x+\\lambda \\phi_{\\dot{x}}=D[\\lambda]+\\lambda^2.\\label{lamd}\n\\end{equation}\n\n Once the $\\lambda$-symmetry is determined, we can obtain the first integrals in two different ways. In the first way, we can calculate the integral directly from the $\\lambda$-symmetry using the four step algorithm given below. In the second way, we can find the integrating factor $\\mu$ from $\\lambda$-symmetry. With the help of integrating factors and $\\lambda$-symmetries we can obtain the first integrals by integrating the system of Eqs.(\\ref{imueq}). In the following, we enumerate both the procedures.\n\n\\vspace{0.2cm}\n{\\it (A) Method of finding the first integral directly from $\\lambda$-symmetry \\cite{mur3}}\n\\vspace{0.2cm}\n\nThe method of finding integral\ndirectly from $\\lambda$-symmetry is as follows:\n\\begin{enumerate}[(i)]\n\\item\nFind a first integral $w(t,x,\\dot x)$ of $v^{[\\lambda,(1)]}$, that is a particular solution of the equation\n$w_x+\\lambda w_{\\dot{x}}=0,\n$\nwhere subscripts denote partial derivative with respect to that variable and $v^{[\\lambda,(1)]}$ is the first-order $\\lambda$-prolongation of the vector field $v$.\n\\item\nEvaluate $D[w]$ and express $D[w]$ in terms of $(t,w)$ as $D[w]=F(t,w)$.\n\\item\nFind a first integral G of $\\partial_t+F(t,w)\\partial_w$.\n\\item\nEvaluate $I(t,x,\\dot{x})=G(t,w(t,x,\\dot{x}))$.\n\\end{enumerate}\n\n\\vspace{0.2cm}\n{\\it (B) Method of finding integrating factors from $\\lambda $ \\cite{mur3}}\n\\vspace{0.2cm}\n\n{\\it \nIf $V$ is a Lie point symmetry of (\\ref{main1}) and $Q=\\eta-\\dot{x}\\xi$ is its characteristics, then\n$v=\\partial_{x}$ is a $\\lambda$-symmetry of (\\ref{main1}) for $\\lambda=D[Q]\/Q$ and any solution of the first-order linear system\n\\begin{eqnarray}\nD[\\mu]+(\\phi_{\\dot{x}}-\\frac{D[Q]}{Q})\\mu = 0, \\;\\;\\;\n\\mu_x+(\\frac{D[Q]}{Q}\\mu)_{{\\dot{x}}} = 
0,\n\\label{musaa}\n\\end{eqnarray}\nis an integrating factor of (\\ref{main1}). Here $D$ represents the total derivative operator and it is given by $\\frac{\\partial}{\\partial t}+\\dot{x}\\frac{\\partial}{\\partial x}+\\phi \\frac{\\partial}{\\partial \\dot{x}}$.}\n\nSolving the system of equations (\\ref{musaa}) one can get $\\mu$. Once the integrating factor $\\mu$ is known then a first integral $I$ such that $I_{\\dot{x}}=\\mu$ can be\nfound by solving the system of equations\n\\begin{eqnarray}\nI_t=\\mu(\\lambda \\dot{x}-\\phi),\\;\\; I_x=-\\lambda \\mu,\\;\\; I_{\\dot{x}}=\\mu.\n\\label{imueq}\n\\end{eqnarray}\nFrom the first integrals, we can write the general solution of the given equation.\n\n\\subsubsection{Example: modified Emden equation}\n\\label{lamb_sec}\n\nSince the second-order ODE under investigation admits Lie point symmetries one can derive the $\\lambda$-symmetries directly from Lie point symmetries \\cite{bhu1} through the relation $\\lambda=\\frac{D[Q]}{Q}$\n\nTo start with, we consider the vector field $V_3$. In this case, we have $\\xi = x$ and $\\eta = -x^3$, the $Q$ function turns out to be $Q=\\eta-\\xi \\dot{x}$ = $-x(\\dot{x}+x^2)$. Using the relation $\\lambda=\\frac{D[Q]}{Q}$ we can fix $\\lambda _3 = \\frac{\\dot{x}}{x}-x$. In a similar way one can fix the $\\lambda$-symmetries for the remaining vector fields. The resultant expressions are given in the following Table. One can verify that the functions $\\xi$, $\\eta$ and $\\lambda$ satisfy the invariance condition (\\ref{beq1}). 
\n{\\tiny\n\\begin{table}[h]\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\\hline\nVector & Q & $\\lambda$-symmetries \\\\\n\\hline\n$V_1$& $-\\dot{x}$ &$-(3 x +\\frac{x^3}{\\dot x})$ \\\\\n\\hline\n$V_2$ & $\\frac{1} {2}t(-2+tx) (\\dot{x}+x^2)$ & $\\frac{2-\\dot{x}t^2-4 t x+t^2 x^2}{t(2-t x)}$ \\\\\n\\hline\n$V_3$ & $-x(\\dot{x}+x^2)$ &$\\frac{\\dot{x}} {x}-x$ \\\\\n\\hline\n$V_4$ & $x(-\\dot{x}t+x-tx^2)$ & $\\frac{\\dot{x}} {x}-x$ \\\\\n\\hline\n$V_5$ & $\\frac{x} {2}(2+\\dot{x}t^2-2 t x+t^2 x^2)$ &$\\frac{\\dot{x}} {x}-x$ \\\\\n\\hline\n$V_6$ & $\\frac{t} {2}(-2+tx)(\\dot{x}t-x+tx^2)$ & $(\\frac{2-\\dot{x}t^2-4 t x+t^2 x^2}{t(2-t x)})$ \\\\\n\\hline\n$V_7$ & $\\frac{1} {2}(2-3t^2x^2+t^3x^3-3\\dot{x}t^2+\\dot{x}t^3x$ &$-\\frac{t(-\\dot{x}^2t^2+\\dot x(6-6tx)+x^2(6-6tx+t^2x^2)} {2-3t^2x^2+t^3x^3+\\dot{x}t^2(-3+tx)}$ \\\\\n\\hline\n$V_8$ & $\\frac{-t} {4}(-2+tx) (2+\\dot{x}t^2-2 tx+t^2x^2$ &$\\frac{2-\\dot{x}t^2-4 t x+t^2 x^2}{t(2-t x)}$ \\\\\n\\hline\n\\end{tabular}\n\\caption{$\\lambda$- symmetries for eight the vector fields admitted by Eq.(\\ref{main})}\n\\end{center}\n\\end{table}}\n\nSince we are dealing with a second-order ODE two different $\\lambda$ functions are sufficient to generate two independent first integrals and hence the general solution. In the following, we consider the vector fields $V_3$ and $V_6$ and their $\\lambda$-symmetries and demonstrate the method of finding their associated integrals.\n\n \\vspace{0.3cm}\n{\\it (i) First integrals directly from $\\lambda$-symmetry}\n\\vspace{0.3cm}\n\n\n\nSubstituting $ \\lambda _3 = \\frac{\\dot x}{x}-x$ in the equation $w_x+\\lambda w_{\\dot{x}}=0,\n$ one gets $w_x+\\left(\\frac{\\dot x}{x}-x\\right)w_{\\dot{x}}=0$. This first-order PDE admits an integral $w(t,x,\\dot{x})$ of the form $w(t,x,\\dot{x}) = \\frac{\\dot x}{x}+x$ (first step). The total differential of this function can be expressed in terms of $w$ itself, that is $D[w] =-w^2= F(t,w)$ (second step). 
Next, one has to find an integral associated with the first-order partial differential equation $\frac{\partial G}{\partial t}-w^2\frac{\partial G}{\partial w}=0$. A particular solution of this first-order partial differential equation is $G=t-\frac{1}{w}$ (third step). Finally, one has to express $G(t,w)$ in terms of $(t,x,\dot{x})$. Doing so, we find that the integral turns out to be (fourth step)\n\begin{equation}\n\hat{I}_1=t-\frac{x}{\dot{x}+x^2},~~~~~\qquad\frac{d\hat{I}_1}{dt}=0.\label{i11}\n\end{equation}\n\n\nNext we consider the function $\lambda _6$. Following the steps given above, we find that the integral associated with $\lambda_6$ turns out to be\n\begin{equation}\n\hat{I}_2=\frac{\dot{x}t-x+tx^2} {2+\dot{x}t^2-2tx+t^2x^2}\n\label{i12}\n\end{equation}\nwith $\frac{d\hat{I}_2}{dt}=0$. Using these two integrals, $\hat{I}_1$ and $\hat{I}_2$, one can construct the general solution of Eq.(\ref{main}). The resultant expression coincides with the earlier expression (see Eq.(\ref{lie_soln})) after rescaling.\n\vspace{0.5cm}\n\n{\it (ii) Integrating factors from $\lambda$-symmetries}\n\vspace{0.3cm}\n\nIn the following, we discuss the second route of obtaining the integral from the $\lambda$-symmetry. We solve the system of equations (\ref{musaa}) in the following way. We first consider the second equation in (\ref{musaa}) and obtain a solution for $\mu$. We then check whether the obtained expression satisfies the first equation or not. If it does, we treat it as a compatible solution.\n\n\nWe again consider the Lie point symmetries $V_3$ and $V_6$ and discuss the method of deriving the integrating factors associated with them. To determine the integrating factor associated with $\lambda _3$ we first solve the second equation in (\ref{musaa}), that is $\mu_x+(\frac{\dot{x}}{x}-x)\mu_{\dot{x}}+\frac{1}{x}\mu=0$. A particular solution is $\mu_1 = -\frac{x}{(\dot{x}+x^2)^2}$. 
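Both routes can be cross-checked symbolically. The sketch below (assuming sympy is available) confirms that $\hat{I}_1$ and $\hat{I}_2$ are constant along solutions of the MEE and that $\mu_1$ solves both equations of the system (\ref{musaa}) for $\lambda_3$:

```python
import sympy as sp

t, x, xd = sp.symbols('t x xd')
phi = -3*x*xd - x**3                                  # MEE: xddot = phi
s = xd + x**2

def D(f):
    # total derivative along solutions of the MEE
    return sp.diff(f, t) + xd*sp.diff(f, x) + phi*sp.diff(f, xd)

# first integrals obtained from lambda_3 and lambda_6
I1 = t - x/s
I2 = (t*xd - x + t*x**2)/(2 + t**2*xd - 2*t*x + t**2*x**2)
assert sp.simplify(D(I1)) == 0 and sp.simplify(D(I2)) == 0

# mu_1 solves both equations of the linear system (musaa) for lambda_3
lam3, mu1 = xd/x - x, -x/s**2
eq1 = D(mu1) + (sp.diff(phi, xd) - lam3)*mu1
eq2 = sp.diff(mu1, x) + sp.diff(lam3*mu1, xd)
assert sp.simplify(eq1) == 0 and sp.simplify(eq2) == 0
```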
This solution also satisfies the first equation in (\ref{musaa}). To determine the integral we substitute $\mu_1$ and $\lambda _3$ in (\ref{imueq}) and obtain the following set of equations for the unknown $I$, namely\n\begin{eqnarray}\nI_t=1,\;\; I_x=-\frac{(\dot{x}-x^2)}{(\dot{x}+x^2)^2},\;\;\nI_{\dot{x}}=\frac{x}{(\dot{x}+x^2)^2}.\n\label{mu2eq333}\n\end{eqnarray}\nIntegrating the system of equations (\ref{mu2eq333}) yields $I_1=t-\frac{x}{x^2+\dot x}$, which coincides exactly with the integral found earlier.\n \n\nLet us now consider the function $\lambda_6$. Substituting the function $\lambda _6$ in the second equation of (\ref{musaa}) we get\n\begin{eqnarray}\n\mu_x+(\frac{2-\dot{x}t^2-4tx+t^2x^2} {t(2-tx)}\mu)_{{\dot{x}}} = 0.\n\label{mu1eq11}\n\end{eqnarray}\nEq. (\ref{mu1eq11}) admits a particular solution of the form\n\begin{eqnarray}\n\mu_2=-\frac{3 t(2-tx)}\n{(2-2 tx+t^2\dot{x}+t^2x^2)^2}.\n\label{mu1eq33}\n\end{eqnarray}\nWe find that $\mu_2$ also satisfies the first equation given in (\ref{musaa}) and forms a compatible solution to the system of equations (\ref{musaa}).\n\nSubstituting the expressions $\lambda_6$ and $\mu_2$ in (\ref{imueq}) and integrating the resultant set of equations, \n\begin{eqnarray}\nI_t & = -\frac{3(2tx^3-t^2x^4+2(1+tx\n-t^2{x}^2)\dot x-t^2 \dot x^2)}\n{(2-2tx+t^2\dot{x}+t^2x^2)^2},\n\nonumber\\\nI_x & = -\frac{3(2+t^2x^2-4tx\n-t^2\dot{x})}\n{(2-2tx+t^2\dot{x}+t^2x^2)^2},\n\nonumber\\\nI_{\dot{x}} & = -\frac{3t(2-tx)}\n{(2-2tx+t^2\dot{x}+t^2x^2)^2},\n\label{me1eq3}\n\end{eqnarray}\nwe find $3\tilde{I}_2=\hat{I}_2$, where $\hat{I}_2$ is given in (\ref{i12}). With the help of $\hat{I}_1$ and $\tilde{I}_2$, we can derive the general solution for the MEE.\n\n\section{Hidden Symmetries}\n\label{hidd}\nAn important application of Lie point symmetries is that they can be used to reduce the order of the underlying ordinary differential equation. 
It has been observed that the reduced ODE may admit more or fewer symmetries than the original higher-order equation. Such symmetries are termed ``hidden symmetries". This type of symmetry was first observed by Olver and later extensively investigated by Abraham-Shrauner and her collaborators \cite{nonlocal1,nonlocal2,nonlocal3,nonlocal4,nonlocal5,hiden_not}.\n\n\nThe motivation for finding hidden symmetries of differential equations is the possibility of transforming a given ODE, which has an insufficient number of Lie point symmetries to be solved, into another ODE that has enough Lie point symmetries that it can be solved by integration. These hidden symmetries cannot be found through the classical Lie method for point symmetries of differential equations. \n\nA detailed study of the hidden symmetries of differential equations shows that there can be two types of hidden symmetries. For example, if an $n^{th}$ order ODE is reduced in order by a symmetry group then two possibilities may occur. The reduced lower order ODE may not retain another symmetry group of the $n^{th}$ order ODE; here a symmetry of the $n^{th}$ order equation is lost in the reduced equation. This lost symmetry is called a Type I hidden symmetry of the lower order ODE. Conversely, the lower order ODE may possess a symmetry group that is not shared by the $n^{th}$ order ODE. In this case, the lower order ODE has gained one symmetry. This is called a Type II hidden symmetry of the $n^{th}$ order ODE \cite{nonlocal1,nonlocal2,nonlocal3,nonlocal4,ci1,hiden_not}.\n\nSince we are focusing our attention on second-order ODEs, we again consider the MEE as an example and point out the hidden symmetries associated with this equation.\n\n\n\subsection{Example: modified Emden equation}\nLet us consider the MEE equation (\ref{main}). 
By introducing the Riccati transformation\n\begin{eqnarray}\n x=\frac{\dot{y}}{y},~~t=z,\label{hid_tra}\n\end{eqnarray}\nthe MEE can be transformed to a linear third-order ODE, namely $\frac{d^3y}{dz^3}=0$. The transformation (\ref{hid_tra}) is nothing but the set of invariants associated with the Lie point symmetry $y\frac{\partial}{\partial y}$ of the linear third-order ODE. In other words, the third-order equation $\frac{d^3y}{dz^3}=0$ has been reduced in order to the MEE by one of its point symmetry generators, $y\frac{\partial}{\partial y}$. The other Lie point symmetries of the third-order linear ODE, $\frac{d^3y}{dz^3}=0$, are \cite{olv,nonlocal,hiden_not}\n\begin{eqnarray}\n&& \chi_1=\frac{\partial}{\partial z},\,\chi_2=\frac{\partial}{\partial y},\,\chi_3=z^2\frac{\partial}{\partial y},\,\chi_4=z\frac{\partial}{\partial z},\nonumber\\\n&&\chi_5=z\frac{\partial}{\partial y},\,\chi_6=y\frac{\partial }{\partial y},\,\chi_7=\frac{z^2}{2}\frac{\partial}{\partial z}+yz\frac{\partial}{\partial y}.\label{third_free}\n\end{eqnarray}\n\n Under the transformation (\ref{hid_tra}), the remaining vector fields given in (\ref{third_free}) transform into the following forms, namely\n\begin{eqnarray} \n\hspace{-1.2cm}&&\hat{V}_1=\frac{\partial} {\partial t}=V_1,~~\hat{V}_2=-xe^{-\int x dt}\frac{\partial} {\partial x},~~\hat{V}_3=\left(\frac{t}{x}-\frac{t^2} {2}\right)xe^{-\int xdt}\n\frac{\partial}{\partial x},\nonumber\\\n\hspace{-1.2cm}&&\hat{V}_4=t\frac{\partial} {\partial t}-x\frac{\partial} {\partial x}=V_2-V_5,~~\hat{V}_5=\left(\frac{1}{x}-t\right)xe^{-\int xdt} \frac{\partial}{\partial x},\nonumber\\\n\hspace{-1.2cm}&&\n\hat{V}_7=\frac{t^2} {2}\frac{\partial}{\partial t}+(1-tx)\frac{\partial}{\partial x}=V_7-V_6,\label{hid_sym}\n\end{eqnarray}\nwhere $\hat{V}_i,~i=1,2,3,4,5,7,$ are the symmetry generators of the MEE (see 
Eqs.(\\ref{vf8}) and (\\ref{sym12})). While three of the vector fields ($\\hat{V_1}$, $\\hat{V_4}$ and $\\hat{V_7}$) retain their point-symmetry nature, the remaining three vector fields ($\\hat{V}_3, \\hat{V}_3$ and $\\hat{V}_5$) turns out to be nonlocal vector fields. All these vector fields satisfy the invariance condition and turn out to be the vector fields of the MEE. The local vector fields $\\hat{V_1}(=V_1)$, $\\hat{V_4}(=V_2-V_5)$ and $\\hat{V_7}(=V_7-V_6)$ match with the earlier ones (see Eq. (\\ref{vf8})) whereas the nonlocal ($\\hat{V}_3, \\hat{V}_3$ and $\\hat{V}_5$) vector fields emerge as new ones.\n\n\n\nNow we pick up Type-I and Type-II hidden symmetries from them. As we pointed out in the beginning of this section, Type-II hidden symmetries of third-order ODEs are nothing but the symmetries gained by the second-order ODEs. The MEE admits eight Lie point symmetries (see Sec.\\ref{2}). In the above, we obtained only three Lie point symmetries of Eq.(\\ref{main}). The remaining five Lie point symmetries are Type II hidden symmetries of the third-order ODE. These five symmetries can be gained from either non-local symmetries or contact symmetries of the third-order ODE.\n\nType-I hidden symmetries of MEE are the symmetries which may not retain the symmetry group of the third-order ODE. In the present case, they turned out to be $\\chi_3$ and $\\chi_5$ since these two vector fields cannot be found in (\\ref{hid_sym}).\n\n\n\n\n\n\n\n\\section{Nonlocal symmetries}\n\\label{nongan}\nThe study of hidden symmetries of ODEs brought out a new result. Besides point and contact symmetries, the ODEs do admit nonlocal symmetries (the symmetry is nonlocal if the coefficient functions $\\xi$ and $\\eta$ depend upon an integral). The associated vector field is of the form $V=\\xi(t,x,\\int u(t,x) dt)\\frac{\\partial}{\\partial t}+\\eta(t,x,\\int u(t,x) dt)\\frac{\\partial}{\\partial x}$. Subsequently attempts have been made to determine nonlocal symmetries of ODEs. 
However, due to the presence of nonlocal terms, these nonlocal symmetries cannot be determined completely in an algorithmic way as in the case of Lie point symmetries. The determination of nonlocal symmetries for second-order ODEs was initiated by Govinder and Leach \cite{nonlocal4}. Their approach was confined to the determination of those nonlocal symmetries that reduce to point symmetries under reduction of order by $\frac{\partial}{\partial t}$. Later, several authors studied nonlocal symmetries of nonlinear ODEs \cite{nonlocal1,nonlocal2,nonlocal3,nonlocal4,nonlocal5,nonlocal7}. Nucci and Leach introduced a way to find nonlocal symmetries \cite{nonlocal5}. In the following, we present a couple of methods which determine the nonlocal symmetries associated with a given equation. We again consider the MEE as an example in both methods and derive its nonlocal symmetries. We also discuss the connection between $\lambda$-symmetries and nonlocal symmetries.\n\n\subsection{Method of Bluman et al. \cite{bl_p}}\nIn this method one essentially introduces an auxiliary ``covering'' system with auxiliary dependent variables. A Lie symmetry of the auxiliary system, acting on the space of independent and dependent variables of the given ODE as well as the auxiliary variables, yields a nonlocal symmetry of the given ODE if it does not project to a point symmetry acting in the space of the independent and dependent variables of the ODE itself. This method was first initiated by Bluman \cite{bl_p} and later extensively investigated by Gandarias and her collaborators \cite{si11,si12, sir2}.\n\nLet the given second-order nonlinear ODE be of the form (\ref{main1}). 
To derive nonlocal symmetries of this equation, the authors introduced an auxiliary nonlocal variable $y$ through the following auxiliary system \cite{si11,si12, sir2}, \n\begin{equation}\n\ddot{x}-\phi(t,x,\dot{x})=0,\;\;\dot{y}=f(t,x,y).\n\label{eq2}\n\end{equation}\n Any Lie group of point transformations $V=\xi(t,x,y)\frac{\partial}{\partial t}+\eta(t,x,y) \frac{\partial}{\partial x}+\psi(t,x,y)\frac{\partial}{\partial y}$ admitted by (\ref{eq2}) yields a nonlocal symmetry of the given ODE (\ref{main1}) if the infinitesimals $\xi$ or $\eta$ depend\nexplicitly on the new variable $y$, that is, if the condition $\xi_y^2+\eta_y^2 \neq 0$ is satisfied.\nAs the local symmetries of (\ref{eq2}) are nonlocal symmetries of (\ref{main1}), this method provides an algorithm to derive a class of nonlocal symmetries for the given equation. These nonlocal symmetries can be profitably utilized to derive the general solution of the given\nequation. Using this procedure, Gandarias and her collaborators have constructed nonlocal symmetries for a class of equations \cite{si11,si12,sir2}.\n\nIn the following, using the ideas given above, we derive nonlocal symmetries for the MEE. \n\subsubsection{Example: modified Emden equation}\nWe introduce a nonlocal variable $y$ and rewrite Eq. (\ref{main}) in the form \cite{sir2}\n\begin{eqnarray}\n\ddot{x}+3x \dot{x}+x^3=0,~~\dot{y}=f(t,x,y),\label {sents}\n\end{eqnarray}\nwhere $f(t,x,y)$ is an arbitrary function to be determined. 
Any Lie group of point transformations $V=\xi(t,x,y)\frac{\partial}{\partial t}+\eta(t,x,y) \frac{\partial}{\partial x}+\psi(t,x,y)\frac{\partial}{\partial y}$ admitted by (\ref{sents}) yields a nonlocal symmetry of the ODE (\ref{main}), provided the infinitesimals satisfy the condition $\label{cod}\xi_y^2+\eta_y^2 \neq 0$.\n\n\nThe invariance of the system (\ref{sents}) under a one parameter Lie group of point transformations leads to the following set of determining equations for $\xi$, $\eta$ and $f$, namely\n\small\n\begin{eqnarray}\n\hspace{-2cm}\xi_{xx}= 0,\;\;\psi_x-f\xi_x = 0,\;\;\eta_{xx}-f_x \xi_y-2 \xi_{tx}-2 f \xi_{xy}+6 x \xi_{x} &=& 0,\nonumber\\\n\hspace{-2cm}\psi_t+f \psi_y-f\xi_t-f^2\xi_y -f_t \xi-f_x \eta&=& 0,\nonumber \\ \n\hspace{-2cm}2 x^{3} \xi_t+2 f x^{3} \xi_y- \eta_x x^{3}+3 \eta x^{2}+3 \eta_t x+3 f \eta_y x +\eta_{tt}+f^{2} \eta_{yy} +2 f \eta_{yt}+f_t \eta_y&=& 0,\nonumber\\ \n\hspace{-2cm}3 x \xi_t-\xi_{tt} -f^{2} \xi_{yy}-2 f \xi_{yt} +3 f x \xi_{y}\label{ed} -f_t \xi_y+3 x^{3} \xi_{x}+f_x \eta_y+2 \eta_{tx}+2 f \eta_{xy}+3 \eta &=& 0.\n\label{sym1}\n\end{eqnarray}\n\normalsize\n\nSolving the overdetermined system (\ref{sym1}) we obtain the following infinitesimal symmetry generator for Eq.(\ref{sents}):\n\begin{equation}\nV=c(t)e^{y}(x \frac{\partial}{\partial x}+ \frac{\partial}{\partial y}),\label{ght}\n\end{equation}\nwith \n\begin{equation}\label{ff}\nf(t,x)= -x-\frac{c_t}{c},\n\end{equation}\nwhere $c(t)$ is an arbitrary function of $t$. 
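The quoted generator can be checked directly against the determining equations. The sketch below (assuming sympy is available) substitutes $\xi=0$, $\eta=c(t)e^{y}x$, $\psi=c(t)e^{y}$ and $f$ from (\ref{ff}) into the system (\ref{sym1}):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
c = sp.Function('c')(t)
xi, eta, psi = sp.S(0), c*sp.exp(y)*x, c*sp.exp(y)     # generator (ght)
f = -x - sp.diff(c, t)/c                               # auxiliary function (ff)

dets = [
    xi.diff(x, 2),
    psi.diff(x) - f*xi.diff(x),
    eta.diff(x, 2) - f.diff(x)*xi.diff(y) - 2*xi.diff(t, x)
        - 2*f*xi.diff(x, y) + 6*x*xi.diff(x),
    psi.diff(t) + f*psi.diff(y) - f*xi.diff(t) - f**2*xi.diff(y)
        - f.diff(t)*xi - f.diff(x)*eta,
    2*x**3*xi.diff(t) + 2*f*x**3*xi.diff(y) - eta.diff(x)*x**3 + 3*eta*x**2
        + 3*eta.diff(t)*x + 3*f*eta.diff(y)*x + eta.diff(t, 2)
        + f**2*eta.diff(y, 2) + 2*f*eta.diff(y, t) + f.diff(t)*eta.diff(y),
    3*x*xi.diff(t) - xi.diff(t, 2) - f**2*xi.diff(y, 2) - 2*f*xi.diff(y, t)
        + 3*f*x*xi.diff(y) - f.diff(t)*xi.diff(y) + 3*x**3*xi.diff(x)
        + f.diff(x)*eta.diff(y) + 2*eta.diff(t, x) + 2*f*eta.diff(x, y) + 3*eta,
]
assert all(sp.simplify(e) == 0 for e in dets)   # all determining equations hold
```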
We note here that (\\ref{ght}) is not the only solution set for the determining equation (\\ref{sym1}).\n\nSolving the characteristic equation, we find two functionally independent invariants which are of the form\n\\begin{equation}\n\\label{inv1}\\begin{array}{ll}z=t, &\n\\zeta=\\displaystyle \\frac{\\dot{x}}{x}+x.\n\\end{array}\\end{equation} \nIn terms of these two variables, $z$ and $\\zeta$, Eq.(\\ref{main}) reads as $\\zeta_z+\\zeta^2=0$. The general solution of this first-order ODE can be given readily as $\\zeta=\\displaystyle\n\\frac{1}{t+k_1}$ with $k_1$ as an integration constant. Plugging this expression in the second equation in (\\ref{inv1}) and rewriting it, we find\n\\begin{equation}\n\\frac{\\dot{x}}{x}+x-\\frac{1}{t+k_1}=0.\n\\end{equation}\n This first-order ODE can be integrated straightforwardly to yield\n \\begin{equation}\\label{sol2}x= {{2\\,\\left(t+{k_1}\\right)}\\over{t^{2}+2 { k_1}\\,t-2\\,\n { k_2}}}, \\end{equation}\nwhere $k_2$ is the second integration constant. Replacing $k_1=I_1$ and $-2k_2=I_2$ in (\\ref{sol2}), we end up at the expression given in Eq.(\\ref{lie_soln}).\n\n\\subsection{Connection between nonlocal symmetries and $\\lambda$-symmetries}\n\\label{mjk}\nThe exponential nonlocal symmetries admitted by (\\ref{sents}) can also be derived from $\\lambda$-symmetries. To illustrate this we recall the following theorem from Ref.\\cite{tel2}. \n\\begin{thmn}\\label{teocasoparticular}\nLet us suppose that for a given second-order equation (\\ref{main1}) there exists some function $f=f(t,x,\\dot{x})$ such that the system (\\ref{eq2}) admits a Lie point symmetry $V=\\xi(t,x,y)\\frac{\\partial}{\\partial t}+\\eta(t,x,y) \\frac{\\partial}{\\partial x}+\\psi(t,x,y)\\frac{\\partial}{\\partial y}$ satisfying $\\xi_y^2+\\eta_y^2 \\neq 0$. We assume that $z=z(t,x)$, $\\zeta=\\zeta(t,x,\\dot{x})$ are two functionally independent functions that verify $V(z)=0, \\left. 
V^{(1)}(\\zeta)\\right|_\\Delta=0$ and are such that equation (\\ref{main}) can be written in terms of\n$\\{z,\\zeta,\\zeta_z\\}$ as a~first-order ODE.\nThen\n\\begin{enumerate}\\itemsep=0pt\n\\item[$1.$] The vector field $V$ has to be of the form\n\\begin{eqnarray}\\label{final0}\nV=e^{Cy}\\left(\\xi(t,x)\\partial_t+\\eta(t,x)\\partial_x+\\psi(t,x,\\dot{x})\\partial_y\\right)+C_1\\partial_y,\n\\end{eqnarray}\nwhere $C$ and $C_1$ are constants.\n\\item[$2.$] The pair\n\\begin{eqnarray}\\label{pair2}\nv=\n\\xi(t,x)\\partial_t+\\eta(t,x)\\partial_x,\\qquad\n\\lambda=C f.\n\\end{eqnarray}\ndefines a $\\lambda$-symmetry of the equation (\\ref{main}) and the set $\\{z,\\zeta,\\zeta_z\\}$ is a complete system of invariants of $v^{[\\lambda,(1)]}$.\n\\end{enumerate}\n\\end{thmn}\nWith the choice $C=1, C_1=0$ and $f=\\lambda$, the vector field (\\ref{final0}) turns out to be\n\\begin{eqnarray}\\label{expteo}\nV=e^{y}\\left(\\xi(t,x)\\partial_t+\\eta(t,x)\\partial_x+\\psi(t,x,\\dot{x})\\partial_y\\right),\n\\end{eqnarray}\nwhere $\\xi$ and $\\eta$ are the infinitesimal coefficients of $v$ and $\\psi = \\psi(t, x, \\dot{x})$ satisfy the condition $\nV^{(2)}(\\dot{y} - \\lambda)|_\\Delta = 0$. This equation provides a linear first-order partial differential equation to\ndetermine $\\psi$, that is\n\\begin{eqnarray}\\label{edppsi}\n\\psi_t+\\dot{x}\\psi_x +\\psi_{\\dot{x}}\\phi+\\psi\\lambda=D_t(\\xi)\\lambda+\\xi \\lambda^2+v^{[\\lambda,(1)]}(\\lambda).\n\\end{eqnarray}\n\nLet $v=\\xi \\partial_t+\\eta \\partial_x$ be a $\\lambda$-symmetry of (\\ref{main1}) for some $\\lambda=\\lambda(t,x,\\dot{x})$ and $\\psi=\\psi(t,x,\\dot{x})$ be a particular solution of the equation (\\ref{edppsi}). 
Then (\\ref{expteo})\nis a~nonlocal symmetry of (\\ref{main1})\nassociated to system (\\ref{eq2}) for $f=\\lambda(t,x,\\dot{x})$ \\cite{tel2} .\n\n\\subsubsection{Example: modified Emden equation}\nUsing the above, we can demonstrate that the nonlocal symmetries found by Gandarias et.al for the MEE can be extracted from the $\\lambda$-symmetries themselves. To show this let us consider the $\\lambda$-symmetry $\\frac{\\partial} {\\partial x}$ with $\\lambda_3=\\frac{\\dot{x}} {x}-x$ (from Table 1). Substituting this expression in Eq.(\\ref{edppsi}) and solving the resultant partial differential equation we can obtain an explicit expression for $\\psi(t,x,\\dot{x})$. Let us choose the simplest case $\\psi(t,x,\\dot{x})=0$. In this case the left hand side of Eq.(\\ref{edppsi}) disappears and the right hand side of it also vanishes automatically since it is nothing but the $\\lambda$-symmetry determining equation in which $\\lambda_3$ is a solution. Substituting $\\lambda_3=f$ in the second expression given in (\\ref{eq2}) and integrating it, we find\n\\begin{equation}\ny=\\log x-\\int x dt.\n\\end{equation}\nNow substituting the expressions $\\xi=0$, $\\eta=1$, $\\psi=0$ and the above expression of $y$ in (\\ref{expteo}), we obtain a nonlocal symmetry\n\\begin{eqnarray}\n\\Omega_4=xe^{-\\int xdt}\\frac{\\partial}{\\partial x}.\\label{som_new}\n\\end{eqnarray}\nOne can unambiguously verify that the vector field (\\ref{som_new}) also satisfies the determining equation and turns out to be a nonlocal symmetry of the MEE. We mention here that the nonlocal symmetry (\\ref{som_new}) had already been observed in the order reduction procedure (see Eq. (\\ref{hid_sym})). \n\nThe other choices of $\\lambda$ and\/or $\\psi(t,x,\\dot{x})$ will generate new nonlocal symmetries for the MEE. For example, the choice $\\psi=c(t),\\xi=0,\\eta=c(t)x$ and $\\lambda=-x-\\frac{c_t}{c}$, provides another nonlocal symmetry (\\ref{ght}) through the above said procedure. 
\nIn this way, one can also construct nonlocal symmetries from the $\\lambda$-symmetries. \n\n\\subsection{Method of Gladwin Pradeep et al.}\n\\label{anna_work} \nIn a recent paper, Gladwin Pradeep et al.\\ proposed yet another procedure to determine the nonlocal symmetries of a given equation \\cite{nonlocal}. In the following, we briefly recall the essential ideas behind this method with reference to the MEE.\n \n\n\nThe MEE (\\ref{main}) can be transformed to the second-order linear ODE $\\frac{d^2u} {dt^2}=0$\nthrough the nonlocal transformation $u=xe^{\\int xdt}$.\nTo explore the nonlocal symmetries associated with (\\ref{main}), the authors used the identity $\\frac{\\dot{u}} {u}=\\frac{\\dot{x}} {x}+x$ (which can be directly deduced from the nonlocal transformation $u=xe^{\\int xdt}$) \\cite{nonlocal}. This nonlocal connection between the free particle equation and the MEE allows one to deduce the nonlocal symmetries of Eq. (\\ref{main}) in the following manner.\n \nLet $\\xi$ and $\\eta$ be the infinitesimals of the point transformation $u'=u+\\epsilon \\eta(t,u)$, $T=t+\\epsilon \\xi(t,u)$, associated with the linear ODE $\\frac{d^2u} {dt^2}=0$. The symmetry vector field associated with this infinitesimal transformation reads as $V=\\xi \\frac{\\partial}{\\partial t}+\\eta \\frac{\\partial}{\\partial u}$\nand its first extension is given by $Pr^{(1)}V=\\xi \\frac{\\partial}{\\partial t}\n+\\eta \\frac{\\partial}{\\partial u}+(\\dot{\\eta}\n-\\dot{u}\\dot{\\xi})\\frac{\\partial}{\\partial \\dot{u}}$. Similarly, we denote the symmetry vector field of the MEE (\\ref{main}) and its first prolongation as $\\Omega=\\delta \\frac{\\partial}{\\partial t}+\\mu \\frac{\\partial}{\\partial x}$ and $Pr^{(1)}\\Omega=\\delta \\frac{\\partial}{\\partial t}\n+\\mu \\frac{\\partial}{\\partial x}+(\\dot{\\mu}\n-\\dot{x}\\dot{\\delta})\\frac{\\partial}{\\partial \\dot{x}}$, where $\\delta$ and $\\mu$ are the infinitesimals associated with the variables $t$ and $x$, respectively. 
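The nonlocal linearization quoted above can be checked symbolically: treating the nonlocal exponent $\int x\,dt$ as an auxiliary function $w$ with $\dot{w}=x$, and writing the MEE as $\ddot{x}+3x\dot{x}+x^3=0$, one finds $\ddot{u}=(\ddot{x}+3x\dot{x}+x^3)e^{w}$, so $\ddot{u}=0$ holds exactly on solutions of the MEE. A short sympy sketch, with names of our own choosing:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
w = sp.Function('w')(t)   # stands for the nonlocal term: w(t) = int x dt, so w' = x

u = x*sp.exp(w)           # the nonlocal transformation u = x e^{int x dt}

# Replace w'' -> x' and w' -> x in u'', then compare with the MEE expression
udd = sp.diff(u, t, 2)
udd = udd.subs(sp.Derivative(w, t, 2), sp.diff(x, t)).subs(sp.Derivative(w, t), x)
mee = sp.diff(x, t, 2) + 3*x*sp.diff(x, t) + x**3   # MEE: xdd + 3 x xd + x^3 = 0
assert sp.simplify(udd - mee*sp.exp(w)) == 0        # so u'' = 0 iff the MEE holds
```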
\n\nFrom the identity $\\frac{\\dot{u}} {u}=\\frac{\\dot{x}} {x}+x$, the authors defined a new function, say $X$,\n\\begin{eqnarray} \n\\frac{\\dot{u}}{u}=\\frac{\\dot{x}}{x}+x=X.\\label{x}\n\\label {sym05}\n\\end{eqnarray}\nIn the new variable $X$, the MEE turns out to be the Riccati equation, that is\n\\begin{eqnarray}\n\\dot{X}+X^2=0.\n\\label{reduced-riccati}\n\\end{eqnarray}\n\nThe symmetry vector field of this equation can be obtained by using the relation $X=\\frac{\\dot{u}}{u}$ and rewriting $V^{1}=\\xi \\frac{\\partial}{\\partial t}\n+\\eta \\frac{\\partial}{\\partial u}+(\\dot{\\eta}\n-\\dot{u}\\dot{\\xi})\\frac{\\partial}{\\partial \\dot{u}}$ as\n\\begin{eqnarray}\n&& V^{1}=\\xi \\frac{\\partial}{\\partial t}\n+\\bigg(\\frac{\\dot{\\eta}}{u}-\\frac{\\eta \\dot{u}}{u^2}\n-X\\dot{\\xi}\\bigg) \\frac{\\partial}{\\partial X}\\equiv\\Sigma.\n\\label {sym06}\n\\end{eqnarray}\nWe note that Eq. (\\ref{reduced-riccati}), being a first-order ODE, admits an infinite number of Lie point symmetries. These Lie point symmetries of Eq. (\\ref{reduced-riccati}) become contact symmetries of the linear second-order ODE $\\frac{d^2u} {dt^2}=0$ through the relation $X=\\frac{\\dot{u}}{u}$.\n\nSimilarly, one can rewrite $\\Omega^{1}=\\delta \\frac{\\partial}{\\partial t}\n+\\mu \\frac{\\partial}{\\partial x}+(\\dot{\\mu}\n-\\dot{x}\\dot{\\delta})\\frac{\\partial}{\\partial \\dot{x}}$, using the relation $X=\\frac{\\dot{x}}{x}+x$, as\n\\begin{eqnarray} \n&&\\Omega^{1}=\\delta \\frac{\\partial}{\\partial t}\n+\\bigg((-\\frac{1}{x^2}\\dot{x}+1)\\mu+(\\dot{\\mu}\n-\\dot{x}\\dot{\\delta})\\frac{1}{x}\\bigg)\\frac{\\partial}{\\partial X}\\equiv\\Xi.\n\\label {sym07}\n\\end{eqnarray}\n\nAs the symmetry vector fields $\\Sigma$ and $\\Xi$ correspond to the same\nequation (\\ref{reduced-riccati}), their infinitesimals must be equal. 
Therefore, comparing equations (\\ref{sym06}) and (\\ref{sym07}), one obtains\n\\begin{eqnarray} \n&& \\xi =\\delta,\\quad \n \\frac{\\dot{\\eta}}{u}-\\frac{\\eta \\dot{u}}{u^2}\n-x\\dot{\\xi}=(-\\frac{1}{x^2}\\dot{x}\n+1)\\mu+\\dot{\\mu}\\frac{1}{x}.\n\\label {sym08}\n\\end{eqnarray}\nRewriting the second equation in (\\ref{sym08}), we obtain the following first-order ODE for the unknown function $\\mu$, namely\n\\begin{eqnarray} \n \\frac{1}{x}\\dot{\\mu}+(-\\frac{1}{x^2}\\dot{x}+1)\\mu= \n \\frac{d}{dt}(\\frac{\\eta}{u})\n-x\\dot{\\xi}.\n\\label {sym09}\n\\end{eqnarray}\n\n\nThe free particle equation $\\frac{d^2u} {dt^2}=0$ admits eight Lie point symmetries which are given in Eq.(\\ref{sym11}). Substituting these symmetries $(\\xi_i,\\eta_i)$, $i=1,2,\\ldots,8$, and $u=xe^{\\int xdt}$, in Eq. (\\ref{sym09}),\nwe get the following seven first-order ODEs for $\\mu$,\n\\numparts\n\\addtocounter{equation}{-1}\n\\label{dumm}\n\\addtocounter{equation}{1}\n\\begin{eqnarray}\n&&\\frac{1}{x}\\dot{\\mu}+(1-\\frac{1}{x^2}\\dot{x})\\mu+(x^2+\\dot{x})x^{-2}e^{-\\int xdt}=0,\\\\\n&&\\frac{1}{x}\\dot{\\mu}+(1-\\frac{1}{x^2}\\dot{x})\\mu-(x-tx^2-t\\dot{x})x^{-2}e^{-\\int xdt}=0,\\\\\n&&\\frac{1}{x}\\dot{\\mu}+(1-\\frac{1}{x^2}\\dot{x})\\mu=0,\\\\\n&&\\frac{1}{x}\\dot{\\mu}+(1-\\frac{1}{x^2}\\dot{x})\\mu+(x^2+\\dot{x})xe^{-\\int xdt}=0,\\\\\n&&\\frac{1}{x}\\dot{\\mu}+(1-\\frac{1}{x^2}\\dot{x})\\mu+x=0,\\\\\n&&\\frac{1}{x}\\dot{\\mu}+(1-\\frac{1}{x^2}\\dot{x})\\mu+2tx-1=0,\\\\\n&&\\frac{1}{x}\\dot{\\mu}+(1-\\frac{1}{x^2}\\dot{x})\\mu+2(x^2+\\dot{x})x^2e^{2\\int xdt}-1=0.\n\\end{eqnarray}\n\\endnumparts\nIntegrating each of the above first-order linear ODEs, we obtain the function $\\mu$. 
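For instance, for the homogeneous equation (c) one can check directly that $\mu=xe^{-\int x\,dt}$ is a solution, which is precisely the coefficient appearing in the nonlocal symmetry $\Omega_4$. A sympy sketch, again treating the nonlocal exponent as an auxiliary function $w$ with $\dot{w}=x$ (names ours):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
w = sp.Function('w')(t)     # nonlocal exponent: w(t) = int x dt, so w' = x

mu = x*sp.exp(-w)           # candidate mu = x e^{-int x dt}
mudot = sp.diff(mu, t).subs(sp.Derivative(w, t), x)

# Homogeneous equation (c): (1/x) mu' + (1 - xdot/x^2) mu = 0
lhs = mudot/x + (1 - sp.diff(x, t)/x**2)*mu
assert sp.simplify(lhs) == 0
```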
Substituting the symmetries $\\delta(=\\xi)$ and $\\mu$ in $\\Omega=\\delta \\frac{\\partial}{\\partial t}\n+\\mu \\frac{\\partial}{\\partial x}$, we get the following nonlocal symmetries,\n\\begin{eqnarray} \n&&\\hspace{-0.5cm}\\Omega_1=\\frac{\\partial}{\\partial t},~~\\Omega_2=\\left(\\frac{1}{x}-t\\right)xe^{-\\int xdt}\n\\frac{\\partial}{\\partial x},\\nonumber \\\\ \n&&\\hspace{-0.5cm}\\Omega_3=\\left(\\frac{t}{x}-\\frac{t^2} {2}\\right)xe^{-\\int xdt}\n\\frac{\\partial}{\\partial x},~~\\Omega_4=xe^{-\\int xdt}\\frac{\\partial}{\\partial x},\n\\label{omega4-eg1}\\nonumber\\\\\n&&\\hspace{-0.5cm}\\Omega_5=xe^{\\int x dt}\\frac{\\partial}{\\partial t}\n-\\left(\\int x(\\dot{x}+x^2)e^{\\int(2x) dt}dt\\right)xe^{-\\int x dt}\n\\frac{\\partial}{\\partial x},\\nonumber\\\\\n&&\\hspace{-0.5cm}\\Omega_6=t\\frac{\\partial}{\\partial t}\n-xe^{-\\int x dt}\\left(\\int xe^{\\int x dt}dt\\right) \\frac{\\partial}{\\partial x},\\nonumber\\\\\n&&\\hspace{-0.5cm}\\Omega_7=t^2 \\frac{\\partial}{\\partial t}\n+xe^{-\\int{x}dt}\\left(\\int (1-2tx)e^{-\\int x dt}dt\\right) \\frac{\\partial}{\\partial x},\\nonumber\\\\\n&&\\hspace{-0.5cm}\\Omega_8=txe^{\\int x dt}\\frac{\\partial}{\\partial t}\n+xe^{-\\int{x}dt}\\left(\\int (tx^3+(tx-1)\\dot{x})\ne^{\\int (2x)dt}dt \\right)\n\\frac{\\partial}{\\partial x}\n\\label {sym12}\n\\end{eqnarray}\nof equation (\\ref{main}). One may observe that the nonlocal vector fields $\\Omega_2$, $\\Omega_3$ and $\\Omega_4$ have already been found as hidden symmetries earlier. It is a straightforward exercise to check that all these nonlocal symmetries satisfy the invariance condition $\\delta \\frac{\\partial \\phi}{\\partial t}+\\mu \\frac{\\partial \\phi}{\\partial x}+\\mu^{(1)}\\frac{\\partial \\phi}{\\partial \\dot{x}}-\\mu^{(2)}=0$, where $\\mu^{(1)}$ and $\\mu^{(2)}$ are the first and second prolongations of $\\mu$. We mention here that these nonlocal symmetries can also be related to $\\lambda$-symmetries through the theorem given in Sec. 
\\ref{mjk}.\n\nTo derive the general solution of the given nonlinear ODE, one has to solve the Lagrange system associated with the nonlocal symmetry. For the vector field $\\Omega_4$ (Eq. (\\ref{omega4-eg1})), the underlying system reads\n\\begin{equation}\n\\frac{dt}{0}=\\frac{dx}{x}=\\frac{d\\dot{x}}{\\dot{x}-x^2}.\\label{ghfss}\n\\end{equation}\nIntegrating Eq.(\\ref{ghfss}), we find $u=t$ and $v=\\frac{\\dot{x}}{x}+x$. Following the procedure described in Sec.\\ref{con_sub}, we can obtain the general solution of (\\ref{main}) in the form of Eq.(\\ref{lie_soln}).\n\n\n\\section{Telescopic vector fields}\n\\label{teles}\nTelescopic vector fields are more general than the vector fields discussed so far: Lie point symmetries, contact symmetries and $\\lambda$-symmetries are all sub-cases of telescopic vector fields. A telescopic vector field can be considered as a $\\lambda$-prolongation in which the first two infinitesimals may depend on the first derivative of the dependent variable \\cite{tel1,tel2}. In the following, we briefly discuss the method of finding telescopic vector fields for a second-order ODE. We then present the telescopic vector fields for the MEE.\n\nLet us consider the second-order equation (\\ref{main1}). 
The vector field \n\\begin{equation}\nv^{(2)}=\\xi \\frac{\\partial}{\\partial t}+\\eta \\frac{\\partial}{\\partial x}+\\zeta^{(1)} \\frac{\\partial}{\\partial \\dot{x}}+\\zeta^{(2)} \\frac{\\partial}{\\partial \\ddot{x}}\n\\end{equation}\nis telescopic if and only if \\cite{tel1}\n\\begin{eqnarray}\n \\xi=\\xi(t,x,\\dot{x}),~\\eta=\\eta(t,x,\\dot{x}),~\\zeta^{(1)}=\\zeta^{(1)}(t,x,\\dot{x})\n\\end{eqnarray}\nwith $\\zeta^{(2)}$ given by\n\\begin{eqnarray}\n \\zeta^{(2)}=D[\\zeta^{(1)}]-\\phi D[\\xi]+\\frac{\\zeta^{(1)}+\\dot{x}D[\\xi]-D[\\eta]}{\\eta-\\dot{x} \\xi}(\\zeta^{(1)}-\\phi \\xi).\n\\end{eqnarray}\n\nTo see that telescopic vector fields are the most general of the above vector fields, let us introduce two functions $g_1$ and $g_2$ in the following form, namely\n\\begin{eqnarray}\n \\hspace{-0.3cm}g_1(t,x,\\dot{x})=\\frac{\\zeta^{(1)}+\\dot{x}\\xi_t-\\eta_t+\\dot{x}(\\dot{x}\\xi_x-\\eta_x)}{\\eta-\\dot{x}\\xi},~~\n g_2(t,x,\\dot{x})=\\frac{\\dot{x}\\xi_{\\dot{x}}-\\eta_{\\dot{x}}}{\\eta-\\dot{x}\\xi}.\\label{telg2}\n\\end{eqnarray}\nWe can rewrite the prolongations $\\zeta^{(1)}$ and $\\zeta^{(2)}$ using the above functions $g_1$ and $g_2$ as follows:\n\\begin{eqnarray}\n \\zeta^{(1)}&=&D[\\eta]-\\dot{x}D[\\xi]+(g_1+g_2 \\phi)(\\eta-\\dot{x}\\xi),\\\\\n \\zeta^{(2)}&=&D[\\zeta^{(1)}]-\\phi D[\\xi]+(g_1+g_2 \\phi)(\\zeta^{(1)}-\\phi\\xi).\n\\end{eqnarray}\nThe relationship between telescopic vector fields and the previously considered\nvector fields is given by the following expressions \\cite{tel1,tel2}\n\\begin{eqnarray}\n \\zeta^{(1)}&=&\\eta^{(1)}+(g_1+g_2 \\phi)(\\eta-\\dot{x}\\xi),\\label{kjdf}\\\\\n \\zeta^{(2)}&=&\\eta^{(2)}+(g_1+g_2 \\phi)(\\zeta^{(1)}-\\phi\\xi).\\label{fkds}\n\\end{eqnarray}\n\nIn the above expressions, if we choose $g_1=g_2=0$ and $\\xi_{\\dot{x}}^2+\\eta_{\\dot{x}}^2 = 0$ we get the Lie point symmetries. The choice $g_1=g_2=0$ and $\\xi_{\\dot{x}}^2+\\eta_{\\dot{x}}^2 \\neq 0$ gives the contact symmetries. 
To get $\\lambda$-symmetries, we should choose $g_1\\neq 0$ and $\\xi_{\\dot{x}}^2+\\eta_{\\dot{x}}^2 = 0$. As a consequence, the telescopic vector field can be considered the most general of these vector fields.\n\\subsection{Example: modified Emden equation}\nTo find the telescopic vector fields admitted by the MEE, we have to solve the invariance condition $v^{(2)}(\\phi)=\\xi \\frac{\\partial \\phi}{\\partial t}+\\eta \\frac{\\partial \\phi}{\\partial x}+\\zeta^{(1)} \\frac{\\partial \\phi}{\\partial \\dot{x}}+\\zeta^{(2)} \\frac{\\partial \\phi}{\\partial \\ddot{x}}=0$, where $\\xi$ and $\\eta$ are functions of $(t,x,\\dot{x})$, and $\\zeta^{(1)}$ and $\\zeta^{(2)}$ are defined through (\\ref{kjdf}) and (\\ref{fkds}), respectively. Substituting Eq.(\\ref{main}) in the invariance condition, we obtain\n\\begin{eqnarray}\n-(3 \\dot{x}+3x^2)\\eta-3x\\zeta^{(1)} -\\zeta^{(2)}=0.\\label{fyuuuu}\n\\end{eqnarray}\nSolving equation (\\ref{fyuuuu}), we obtain a telescopic vector field of the form\n\\begin{eqnarray}\n\\hspace{-0.5cm}\\gamma_1&=&-\\bigg(\\frac{x}{\\left(x^2+\\dot{x}\\right)^2}\\bigg)\\frac{\\partial}{\\partial x}+\\bigg(\\frac{x^2-\\dot{x}}{\\left(x^2+\\dot{x}\\right)^2}\\bigg)\\frac{\\partial}{\\partial \\dot{x}}+\\bigg(\\frac{6 x \\dot{x}}{\\left(x^2+\\dot{x}\\right)^2}\\bigg)\\frac{\\partial}{\\partial \\ddot{x}},\\label{tel_com11}\n\\end{eqnarray}\nwhere the components are\n\\begin{eqnarray}\n\\xi_1=0,~\\eta_1=-\\frac{x}{\\left(x^2+\\dot{x}\\right)^2},~\\zeta_1^{(1)}=\\frac{x^2-\\dot{x}}{\\left(x^2+\\dot{x}\\right)^2},~\\zeta_1^{(2)}=\\frac{6 x \\dot{x}}{\\left(x^2+\\dot{x}\\right)^2}.\n\\end{eqnarray}\n\nA second telescopic vector field is found to be\n\\begin{eqnarray}\n\\hspace{-1cm}\\gamma_2&=&-\\bigg(\\frac{t (2-t x)}{\\left(t^2 \\left(x^2+\\dot{x}\\right)-2 t x+2\\right)^2}\\bigg)\\frac{\\partial}{\\partial x}+\\bigg(\\frac{t^2 \\left(\\dot{x}-x^2\\right)+4 t x-2}{\\left(t^2 \\left(x^2+\\dot{x}\\right)-2 t x+2\\right)^2}\\bigg)\\frac{\\partial}{\\partial 
\\dot{x}}\\nonumber \\\\\n\\hspace{-1cm}&&-\\bigg(\\frac{6 (t x-1) (t\\dot{x}+x)}{\\left(t^2 \\left(x^2+\\dot{x}\\right)-2 t x+2\\right)^2}\\bigg)\\frac{\\partial}{\\partial \\ddot{x}}\\label{tel_com21}\n\\end{eqnarray}\nand its components are given by\n\\begin{eqnarray}\n\\hspace{-1cm}&&\\xi_2=0,~\\eta_2=-\\frac{t (2-t x)}{\\left(t^2 \\left(x^2+\\dot{x}\\right)-2 t x+2\\right)^2},~\\zeta_2^{(1)}=\\frac{t^2 \\left(\\dot{x}-x^2\\right)+4 t x-2}{\\left(t^2 \\left(x^2+\\dot{x}\\right)-2 t x+2\\right)^2},\\nonumber \\\\\n\\hspace{-1cm}&&\\zeta_2^{(2)}=-\\frac{6 (t x-1) (t\\dot{x}+x)}{\\left(t^2 \\left(x^2+\\dot{x}\\right)-2 t x+2\\right)^2}.\\label{tel_com2}\n\\end{eqnarray}\n\nThe invariants associated with a telescopic symmetry vector field can be derived by solving the associated characteristic equation. For the vector field $\\gamma_1$, it reads\n\\begin{eqnarray}\n\\frac{dt} {0}=\\frac{dx} {-\\frac{x}{\\left(x^2+\\dot{x}\\right)^2}}=\\frac{d\\dot{x}} {\\frac{x^2-\\dot{x}}{\\left(x^2+\\dot{x}\\right)^2}}.\n\\end{eqnarray}\nUsing the procedure discussed in Sec.\\ref{sol_lie}, we can integrate the above characteristic equation to obtain the integral given in Eq.(\\ref{i11}). Repeating the procedure for the second telescopic vector field (\\ref{tel_com21}), we arrive at the second integral given in Eq.(\\ref{i12}). From these two integrals we can derive the general solution of (\\ref{main}).\n\n\n\n\n\n\\section{Conclusion}\n\\label{9th}\nIn this paper, we have reviewed continuous symmetries of second-order ODEs and elaborated the methods of finding them. To begin with, we have considered Lie point symmetries and presented Lie's invariance analysis for a second-order ODE. To illustrate the method, we have considered the modified Emden equation (MEE) as an example. We have also discussed a few applications of Lie point symmetries. We have demonstrated the connection between symmetries and conservation laws by recalling Noether's theorem. 
A few conserved quantities, including the energy, have been identified for the MEE through this theorem. We then considered velocity-dependent transformations and presented the method of finding contact symmetries for second-order ODEs. We have also pointed out the contact symmetries of the MEE, and recalled its hidden symmetries, some of which are found to be exponential nonlocal symmetries. The connection between symmetries and the integrating factors of ODEs was discussed through the $\\lambda$-symmetries approach and the adjoint symmetries method. The methods of finding $\\lambda$-symmetries, adjoint symmetries, integrating factors and their associated integrals for a second-order ODE have been discussed in detail and illustrated with the MEE as an example. We have also pointed out the connection between exponential nonlocal symmetries and $\\lambda$-symmetries. Finally, we have considered a more general vector field, namely the telescopic vector field, and discussed the method of finding such generalized vector fields. For the MEE we have also presented a couple of telescopic vector fields. We have derived the general solution of the MEE from each of these symmetries. The symmetry methods presented here are all extendable to higher-order ODEs. Through this review, we have emphasized the utility of symmetry analysis in solving ODEs.\n\n\n\\section*{Acknowledgments}\nThe authors wish to thank Professor M. Lakshmanan for suggesting that we write this review and for his interest and overall guidance in this program on symmetries. The work of MS forms part of a research project sponsored by the Department of Science and Technology, Government of India. The work of VKC forms part of a research project sponsored by an INSA Young Scientist Project. RMS acknowledges the University Grants Commission (UGC-RFSMS), Government of India, for providing a Research Fellowship.\n\n\n\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}