diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzklnq" "b/data_all_eng_slimpj/shuffled/split2/finalzzklnq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzklnq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe scope of this work is the theoretical study of the welfare function\ndescribing the economics of shallow lakes. Pollution of shallow lakes is a quite often observed phenomenon \n because of heavy use of fertilizers on\nsurrounding land and an increased inflow of waste water from human\nsettlements and industries. The shallow lake system provides conflicting services as a resource, due to the provision of ecological services of a clear lake, \nand as a waste sink, due to agricultural activities. \n\\vskip.05in\n\nThe economic analysis of the problem requires the study of an optimal control problem or a differential game in case of common \nproperty resources by various communities; see, for example, \\cite{carpenter}, \\cite{brock-starrett} and \\cite{xepapadeas}.\n\\vskip.05in\n\nTypically the model is described in terms of the amount, $x(t)$, of phosphorus in algae which is assumed to evolve according to the stochastic differential equation (sde for short)\n\\begin{equation} \\label{sldyn}\n\\begin{cases} dx(t)=\\left( u(t)-b x(t)+\\dfrac{x^{2}(t)}{x^{2}(t)+1}\\right) dt+\\sigma x(t)dW(t), & \\\\\nx(0) = x. & \\end{cases}\n\\end{equation}\n\n The first term, $u(t)$, in the drift part of the dynamics represents the exterior load of phosphorus imposed by the agricultural community, which is assumed to be positive. The second term is\n the rate of loss $bx(t)$, which consists of sedimentation, outflow and\nsequestration in other biomass. The third term is the rate of recycling of phosphorus due to \nsediment resuspension resulting, for example, from waves, or oxygen depletion. This \nrate is typically taken to be a sigmoid function; see \\cite{carpenter}. The model also assumes an uncertainty in the recycling rate driven by a linear multiplicative white noise with diffusion strenght $\\sigma$. \n\\vskip.05in\n\nThe lake has value as a waste\nsink for agriculture modeled by $\\ln u$ and it provides ecological\nservices that decrease with the amount of phosphorus $-cx^{2}$. \nThe positive parameter $c$ reflects the relative weight of this welfare\ncomponent; large $c$ gives more weight to the ecological services\nof the lake.\n\\vskip.05in\n\nAssuming an infinite horizon at a discount rate $\\rho >0$, the total benefit is\n\\begin{equation}\\label{Jxu}\nJ(x;u)=\\mathbb{E}\\left[\\int_0^\\infty e^{-\\rho t}\\big(\\ln u(t)-cx^2(t)\\big)\\, dt\\right],\n\\end{equation}\nwhere $x(\\cdot)$ is the solution to (\\ref{sldyn}), for a given $u(\\cdot)$, and $x(0)=x$.\nOptimal management requires to maximize the total benefit, over all exterior loads that act as controls by the social planner. 
Thus the welfare function is\n\\begin{equation}\\label{ihvf}\nV(x) = \\underset{u\\in \\mathfrak{U}_x} {\\sup}\\ J(x;u),\n\\end{equation}\nwhere $\\mathfrak{U}_x$ is the set of admissible controls $u: [0,\\infty) \\rightarrow \\mathbb{R}^+ $ which are specified in the next section.\n\\vskip.05in\n\nDynamic programming arguments lead, under the assumption that the welfare function is decreasing, to the Hamilton-Jacobi-Bellman equation\n$$\\rho V=\\left( \\dfrac{x^{2}}{x^{2}+1}-bx\\right)V_x-\\left( \\ln(-V_x)+x^{2}+1\\right)+\\dfrac{1}{2}\\sigma^{2}x^{2}V_{xx}, \\leqno\\mathrm{(OHJB)}$$\nand one of the aims of this paper is to provide a rigorous justification of this fact.\n\\vskip.05in\n\nIn the deterministic case $\\sigma=0$, the optimal dynamics of the problem were fully investigated by analysing the possible equilibria of the dynamics given by the Pontryagin maximum principle; see, for example, \\cite{xepapadeas} and\n\\cite{Wagener}. The possibility of steering the combined economic-ecological system\ntowards the trajectory of optimal management via optimal state-dependent taxes was also considered; see \\cite{KPXAdeZ}. \n\\vskip.05in\n\nOn the other hand, there is not much in the literature about the stochastic problem. Formal asymptotic expansions\nof the solution for small $\\sigma $ for Hamilton-Jacobi-Bellman equations like (OHJB) have been presented in \\cite{GKW}. In the same paper, the authors also give a formal phenomenological bifurcation analysis based on a geometric invariant quantity, along with some numerical computations of the stochastic bifurcations\nbased on (formal) asymptotics for small $\\sigma$.\n\\vskip.05in\n\nThe connection between stochastic control problems and the Hamilton-Jacobi-Bellman equation, which is based on the dynamic programming principle, has been studied extensively. \nThe correct mathematical framework \nis that of the Crandall-Lions viscosity solutions introduced in \\cite{lions}; see the review article \\cite{CIL}. The deterministic case leads to the study of a first order Hamilton-Jacobi equation; see, for example, \n \\cite{BCD}, \\cite{barles} and references therein. For the general stochastic \noptimal control problem we refer to \\cite{Kry}, \\cite{PLL1} - \\cite{PLL3}, \\cite{FS}, \\cite{JYXZ} and references therein.\n\\vskip.05in\n\nThe stochastic shallow lake problem has some nonstandard features and, hence, it requires some special analysis. First, the problem is formulated as a state constraint problem on a\nsemi-infinite domain. Viscosity solutions with state\nconstraint boundary conditions were introduced for first order equations by \\cite{So} and \\cite{CDL}. For second order equations one should consult \\cite{Ka},\n\\cite{LL} and \\cite{ALL}. Apart from the left boundary condition, the correct asymptotic decay of the solution at infinity is necessary to establish a comparison result; see, for example, \\cite{IL} and\n\\cite{DaLL}.\n\\vskip.05in\n\nThe unboundedness of the controls along with the logarithmic term in the cost functional leads to a logarithm of the gradient variable in (OHJB). A priori\nknowledge of the solution is required to guarantee that the Hamiltonian is well defined. Moreover, due to the presence of the logarithmic term it is necessary to construct an appropriate test function \n to establish a comparison proof. The commonly used polynomial functions, see, for example, \\cite{IL} and \\cite{DaLL}, are not useful here since they do not yield a supersolution \nof the equation. 
\n\\vskip.05in\n\nIn the present work we study the stochastic shallow lake problem for a fixed $\\sigma$. We first prove the necessary stochastic analysis estimates for the welfare function (\\ref{ihvf}). We obtain directly various crucial\nproperties for the welfare function, that is, boundary behaviour, local regularity, monotonicity and\nasymptotic estimates at infinity.\n\\vskip.05in\n\nWe prove, including the deterministic case, that (\\ref{ihvf}) is the unique decreasing constrained viscosity solution \nto (OHJB) with quadratic growth at infinity. The comparison theorem is proved by considering a linearized equation and constructing a proper supersolution. Exploiting the well-posedness of the problem within the framework of constrained viscosity \nsolutions we investigate the exact asymptotic behavior of the solutions at infinity. The latter is used to construct a monotone convergent numerical scheme that\nalong with the optimal dynamics equation can be used to reconstruct numerically the stochastic optimal dynamics.\n\n\n\n\n\\section{The general setting and the main results}\n\nWe assume that there exists a filtered probability space $( \\Omega, \\mathcal{F},\\{ \\mathcal{F}_t\\}_{t\\ge 0},\\mathbb{P} )$\nsatisfying the usual conditions, and a Brownian motion $W(\\cdot)$ defined on that space. \nAn admissible control $u(\\cdot)\\in\\mathfrak{U}_x$ is an $\\mathcal{F}_t$-adapted, $\\mathbb{P}$-a.s. locally integrable process with values in $U=(0,\\infty)$, satisfying \n\\begin{equation}\\label{ac_constraint}\n \\mathbb{E} \\left[ \\int_{0}^{\\infty}e^{-\\rho t}\\ln u(t)dt \\right] < \\infty,\n\\end{equation}\nsuch that the problem (\\ref{sldyn}) has a unique strong solution $x(\\cdot)$.\n\\vskip.05in\n\n The shallow lake problem has an infinite horizon. Standard arguments based on the dynamic programming principle (see \\cite{FS}, Section III.7) suggest that the welfare function $V$ given by (\\ref{ihvf}) satisfies the HJB equation\n\\begin{equation}\\label{HJB}\n\\rho V=\\sup_{u\\in U } G(x,u,V_x,V_{xx}),\n\\end{equation}\nwith $G $ defined by\n\\begin{equation}\\label{hG}\nG(x,u,p,P)=\\dfrac{1}{2}\\sigma^{2}x^{2}P+\\left( u-bx+\\dfrac{x^{2}}{x^{2}+1}\\right)p +\\ln u-x^{2}.\n\\end{equation}\n\nOne difficulty in the study of this problem is related to the fact that the control functions $u$ take values in the unbounded set $ \\mathbb{R}^+$ so that supremum in (\\ref{HJB}) \nmight take infinite values. Indeed, when $U=\\mathbb{R}^+$, setting \n$$ H(x,p,P)= \\sup_{u\\in \\mathbb{R}^+ } G(x,u,p,P),$$\nwe find \n\\begin{equation}\\label{HJB-unbounded}\nH(x,p,P)=\n \\left\\{\n \\begin{array}{lr} \\left( \\dfrac{x^{2}}{x^{2}+1}-bx\\right)p-\\left( \\ln(-p)+x^{2}+1\\right)+\\dfrac{1}{2}\\sigma^{2}x^{2}P & \\,\\, if \\,\\, p<0, \\\\\n + \\infty & \\,\\, if \\,\\, p \\geq 0. \\end{array} \\right.\n \\end{equation}\n \n \n It is natural to expect that since shallow lake looses its value with a higher concentration of phosphorus, the welfare function is decreasing\n as the initial state of phosphorus increases. Assuming that $V_x<0$, (\\ref{HJB}) becomes (OHJB).\n\\vskip.05in \n\nSince the problem is set in $(0, \\infty)$, it is necessary to introduce boundary conditions guaranteeing the well-posedness of the corresponding boundary value problem. 
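\n\\vskip.05in\n\nWe record here, for later use, the elementary computation behind the passage from (\\ref{HJB}) to {\\rm (OHJB)}. For fixed $p<0$ the map $u \\mapsto up+\\ln u$ is strictly concave on $(0,\\infty)$ and is maximized at $u^{*}=-1\/p$, so that\n\\begin{equation*}\n\\sup_{u>0}\\big( up+\\ln u\\big)=-\\ln(-p)-1,\n\\end{equation*}\nwhile for $p\\geq 0$ the supremum equals $+\\infty$; this is precisely the dichotomy in (\\ref{HJB-unbounded}). The maximizer is also consistent with the feedback form $u^{*}=-1\/V_x$ appearing in the optimal dynamics (\\ref{opt_dyn}) of Section 5.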
\n\\vskip.05in\n\nGiven the possible degeneracies of Hamilton-Jacobi-Bellman equations at $x=0$, the right framework \nis that of continuous viscosity solutions in which boundary conditions are considered in the weak sense.\nSince at the boundary point $x=0$\n\\begin{equation}\\label{control}\n\\underset{u\\in U} {\\inf} \\left\\{ - u+b x-\\dfrac{x^{2}}{x^{2}+1} \\right\\} < 0,\n\\end{equation}\nthat is, there always exists a control that\ncan drive the system inside $(0,\\infty)$, the problem should be considered as a state constraint one on the interval $[0,\\infty)$, meaning that $V$ is a subsolution in $[0,\\infty)$ and a supersolution in $(0,\\infty)$. \n\\vskip.05in\n\nNext we present the main results of the paper. The proofs are given in Section 4. The first is about the relationship between the welfare function and (OHJB). \n\n\\begin{thm}\\label{constrained_vs}\nIf $\\sigma^2 < \\rho +2b$, the welfare function $V$ is a continuous in $[0,\\infty) $ constrained viscosity solution of the equation {\\rm (OHJB)} in $[0,\\infty)$. \n\\end{thm}\n\\noindent\nThe second result is the following comparison principle for solutions of (OHJB). \n\\begin{thm}\\label{comparison} Assume that $u\\in C([0,\\infty))$ is a bounded from above strictly decreasing subsolution of ${\\rm (OHJB)}$ in $[0, \\infty)$ and $v\\in C([0,\\infty))$ is a bounded from above strictly decreasing supersolution of\n${\\rm (OHJB)}$ in $(0, \\infty)$ such that $v \\geq -c(1+x^2)$ and $Du\\leq -\\frac{1}{c^*}$\nin the viscosity sense,\nfor $c$, $c^*$ positive constants. Then\n$u \\leq v$ in $[0, \\infty)$. \n\\end{thm}\n\\noindent\n\nThe next theorem describes the exact asymptotic behavior of (\\ref{ihvf}) at $+\\infty$. Let \n\\begin{equation*}\\label{K}\nA=\\frac{1}{\\rho+2b-\\sigma^2}\\,\\,\\, \\mbox{ and } \\,\\,\\, K=\\frac{1}{\\rho}\\left(\\frac{2b+\\sigma^2}{2\\rho}-\\frac{A(\\rho+2b)}{(b+\\rho)^2} -1\\right).\n\\end{equation*}\n\n\\begin{thm}\\label{asymptotic}\nAs $x\\rightarrow \\infty$,\n\\begin{equation}\\label{asympto_behav}\nV(x)= -A\\left(x+\\frac{1}{b+\\rho}\\right)^2-\\frac{1}{\\rho}\\ln \\left[2A(x+\\frac{1}{b+\\rho})\\right]+K+o(1). \n\\end{equation}\n\\end{thm}\n\\vskip.05in\n\nAn important ingredient of the analysis is the following proposition which collects some key properties of $V$ that are used in \nthe proofs of Theorem \\ref{constrained_vs} and Theorem \\ref{asymptotic} and show that $V$ satisfies the assumptions of Theorem \\ref{comparison}. The proof is presented in Section 3. \n\\begin{propos}\\label{vprop}\nSuppose $\\sigma^2<\\rho+2b$.\\\\[1mm]\n(i) There exist constants $K_1, K_2>0$, such that, for any $x\\ge 0$, we have\n\\begin{equation}\\label{Vbounds}\nK_1\\ \\le\\ V(x)+A\\left(x+\\frac{1}{b+\\rho}\\right)^2+\\frac{1}{\\rho}\\ln \\left(x+\\frac{1}{b+\\rho}\\right)\\ \\le\\ K_2.\n\\end{equation}\n\\noindent\n(ii) There exist $C>0$ and $\\Phi:[0,+\\infty)\\rightarrow \\mathbb{R} $ increasing such that, for any $x_1,x_2\\in [0,+\\infty)$ with $x_1-\\infty$. \n\\vskip.05in\n\nConsider now the solutions $x_1(\\cdot),x_2(\\cdot)$ to (\\ref{sldyn}) \nwith initial conditions $x_1, x_2$ and a common control $u\\in\\mathfrak{U}$. Lemma \\ref{monotone} implies that, $\\mathbb{P}$-a.s. 
and for all $t\\geq0$ and $i=1,2,$\n\\[\nx_i(t)\\ge x_iZ_t, \\qquad\\text{and}\\qquad x_2(t)-x_1(t)\\ge (x_2-x_1)Z_t.\n\\]\n\nNote that since $u\\in\\mathfrak{U}$ and $J(x_2;u)>-\\infty$,\n\\[\n\\int_0^\\infty e^{-\\rho t}x_2^2(t)\\,dt<+\\infty\\Rightarrow\\int_0^\\infty e^{-\\rho t}x_1^2(t)\\,dt<+\\infty\\Rightarrow J(x_1;u)>-\\infty.\n\\]\n\nIn particular,\n\\begin{align*}\nJ(x_2;u)\\!-\\!J(x_1;u)&=\\mathbb{E}\\left[ \\int_{0}^{\\infty}\\!\\!\\!\\!e^{-\\rho t}\\big(\\ln u(t)-x_2^{2}(t)\\big)dt -\\!\\! \\int_{0}^{\\infty}\\!\\!\\!\\!e^{-\\rho t}\\big(\\ln u(t)-x_1^{2}(t)\\big)dt\\right]\\\\\n&=-\\mathbb{E}\\left[ \\int_{0}^{\\infty} e^{-\\rho t}\\big(x_2(t)-x_1(t)\\big)\\big(x_2(t)+x_1(t)\\big)dt \\right]\\\\\n&\\le -(x_2^2-x_1^2)\\int_0^\\infty e^{-\\rho t} \\mathbb{E}\\big[Z_t^2\\big]\\ dt= -A(x_2^2-x_1^2).\n\\end{align*}\n\\hfill$\\Box$\n\\begin{lemma}\\label{V0fin}\nThe welfare function at zero satisfies\n$\nV(0) \\le\\frac{1}{\\rho}\\ln\\left(\\frac{b+\\rho}{\\sqrt{2e}}\\right).\n$\n\\end{lemma}\n\\noindent\n{\\em Proof:} Recall that, for any $u\\in\\mathfrak{U}$, $x(t)\\ge M_t(u)$.\nUsing Jensen's inequality and part (i) of Lemma \\ref{Mest}, we find\n\\begin{align*}\n\\mathbb{E}\\left[\\int_0^\\infty e^{-\\rho t}\\ln u(t)\\ dt\\right] &\\le\\frac{1}{\\rho}\\ln\\mathbb{E}\\left[\\int_0^\\infty \\rho e^{-\\rho t}u(t)\\ dt\\right] \\\\\n&=\\frac{1}{\\rho}\\ln\\mathbb{E}\\left[\\int_0^\\infty \\rho(\\rho+b) e^{-\\rho t}M_t(u)\\ dt\\right] \\\\\n&\\le \\frac{\\ln(b+\\rho)}{\\rho}+\\frac{1}{\\rho}\\ln\\mathbb{E}\\left[\\int_0^\\infty \\rho e^{-\\rho t}x(t)\\ dt\\right] \\\\\n&\\le \\frac{\\ln(b+\\rho)}{\\rho}+\\frac{1}{2\\rho}\\ln\\mathbb{E}\\left[\\int_0^\\infty \\rho e^{-\\rho t}x^2(t)\\ dt\\right].\n\\end{align*}\n\nIn view of (\\ref{ac_constraint}), we need only consider $u\\in\\mathfrak{U}$ such that $D=\\mathbb{E}\\left[\\int_0^\\infty e^{-\\rho t}x^2(t)\\ dt\\right]<\\infty$.\nThen\n\\[\n\\mathbb{E} \\left\\{ \\int_{0}^{\\infty}e^{-\\rho t}\\big[\\ln u(t)-x^{2}(t)\\big]dt \\right\\}\\le \\frac{\\ln(b+\\rho)}{\\rho}+\\frac{\\ln(\\rho D)}{2\\rho}-D\\le\\frac{1}{\\rho}\\ln\\left(\\frac{b+\\rho}{\\sqrt{2e}}\\right),\n\\]\nand the assertion holds.\\hfill$\\Box$\\\\\n\nIt follows from Lemma \\ref{Vdec} and Lemma \\ref{V0fin} that $V<+\\infty$ in $[0,\\infty)$.\n\\noindent\nThe next result is a special case of the dynamic programming principle. \n\\begin{lemma}\\label{specialdpp}\nFix $x_1,x_2\\in[0,\\infty)$ with $x_10$, choose a control $u_\\varepsilon$ such that \n\\[\nV(x_2) \\tau_u.\\end{cases}\n\\]\n\nJust as in the proof of the upper bound we get\n\\begin{align*}\nV(x_1)&\\ge J(x_1;u_*)=\\mathbb{E}\\left[\\int_0^{\\tau_u} e^{-\\rho t}\\big(\\ln u(t)-x^2(t)\\big)\\, dt+e^{-\\rho\\tau_u}J(x_2;u_\\varepsilon)\\right]\\\\\n&>\\mathbb{E}\\left[\\int_0^{\\tau_u} e^{-\\rho t}\\big(\\ln u(t)-x^2(t)\\big)\\, dt+e^{-\\rho\\tau_u}V(x_2)\\right]-\\varepsilon,\n\\end{align*}\nwhich concludes the proof.\\hfill$\\Box$\\\\[2mm]\n\nWe next give the proof of Proposition \\ref{vprop}, which is subdivided in several parts. \\\\[1mm]\n\n\\noindent\n{\\bf Proof of Proposition \\ref{vprop}:} \n\\noindent\n{\\em Proof lower bound, part (i):}\\quad The claim will follow by choosing the control $u(t)=\\frac{1+x(t)}{1+x^2(t)}$, which is clearly admissible. 
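(Indeed, an elementary bound gives $0<u(t)\\le \\tfrac{3}{2}$ for all $t\\ge 0$, so (\\ref{ac_constraint}) holds trivially, while the corresponding feedback drift in (\\ref{sldyn}) is globally Lipschitz and the equation admits a unique strong solution.) 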
\nThen, (\\ref{implicit}) gives\n\\begin{equation}\nx(t)=xZ_t+\\int_0^t \\frac{Z_t}{Z_s}\\ \\Big(1+\\frac{x(s)}{1+x^2(s)}\\Big)ds=xZ_t+M_t(1)+\\int_0^t \\frac{Z_t}{Z_s}\\ \\frac{x(s)}{1+x^2(s)}ds\n\\label{testX}\n\\end{equation}\nand, hence,\n\\begin{align*}\nx^2(t)&=x^2Z_t^2+2xZ_tM_t(1)+M_t^2(1)+\\Big(\\int_0^t \\frac{Z_t}{Z_s}\\ \\frac{x(s)}{1+x^2(s)}ds\\Big)^2\\\\\n&\\quad+M_t(1)\\int_0^t \\frac{Z_t}{Z_s}\\ \\frac{2x(s)}{1+x^2(s)}ds+\\int_0^t \\frac{Z_t^2}{Z_s}\\ \\frac{2x x(s)}{1+x^2(s)}ds.\n\\end{align*}\n\nTo estimate the rightmost term from above, note that, in view of (\\ref{testX}), $x\\le x(s)Z_s^{-1}$, while for the third and fourth terms of the sum we use that\n$\\frac{x(s)}{1+x^2(s)}\\le \\frac{1}{2}$. It follows that\n\\begin{equation}\nx^2(t)\\le x^2Z_t^2+2xZ_tM_t(1)+\\frac{9}{4}M_t^2(1)+2\\int_0^t \\frac{Z_t^2}{Z_s^2}\\ ds,\n\\label{testX2}\n\\end{equation}\n\nIt is easy to see that\n\\begin{equation}\n\\mathbb{E}\\left[\\int_0^\\infty e^{-\\rho t}x^2Z_t^2 dt\\right]=x^2\\int_0^\\infty e^{-(\\rho+2b-\\sigma^2)t}dt=Ax^2,\n\\label{Xsq}\n\\end{equation}\n\nLemma \\ref{Mest} (ii) gives\n\\begin{align}\\label{X1}\n\\mathbb{E}\\left[\\int_0^\\infty e^{-\\rho t} 2xZ_t M_t(1) dt\\right]&=2Ax\\int_0^\\infty e^{-\\rho t}\\mathbb{E}\\big[Z_t\\big]dt= \\frac{2Ax}{\\rho+b},\n\\end{align}\nwhile Lemma \\ref{Mest} (i) and Lemma \\ref{Mest} (iii) yield\n\\begin{equation}\n\\int_0^\\infty e^{-\\rho t}\\mathbb{E}\\big[M_t^2(1)\\big]\\ dt = 2A\\int_0^\\infty e^{-\\rho t}\\mathbb{E}\\big[M_t(1)\\big]\\ dt =\\frac{2A}{\\rho(\\rho+b)}.\n\\label{Mtsq}\n\\end{equation}\n\nWe also have\n\\begin{align}\\label{lastterm}\n\\int_0^\\infty \\hspace{-2mm}e^{-\\rho t}\\int_0^t\\mathbb{E}\\left[\\frac{Z_t^2}{Z_s^2}\\right]ds\\ dt&=\\int_0^\\infty \\hspace{-2mm}e^{-\\rho t}\\int_0^t e^{(\\sigma^2-2b)(t-s)}ds\\ dt=\\frac{A}{\\rho}.\n\\end{align}\n\nUsing the last four observations in (\\ref{testX2}), we find for some constant $B$, \n\\begin{equation}\n\\int_0^\\infty e^{-\\rho t}\\mathbb{E}\\big[x^2(t)\\big]\\ dt \\le A\\left[\\big(x+\\frac{1}{\\rho+b}\\big)^2+B\\right].\n\\label{Xener}\n\\end{equation}\n\n On the other hand, using that, for all $x\\ge 0$, $(1+x)^2\\ge (1+x^2)$, and Jensen's inequality, we find\n\\begin{align*}\n\\int_0^\\infty e^{-\\rho t}\\mathbb{E}\\big[\\ln u(t)\\big]\\ dt &\\ge-\\int_0^\\infty e^{-\\rho t}\\mathbb{E}\\big[\\ln\\big(1+x(t)\\big)\\big]\\ dt \\\\\n&\\ge -\\frac{1}{\\rho}\\ln\\left(\\int_0^\\infty \\rho e^{-\\rho t} \\Big(1+\\mathbb{E}\\big[x(t)\\big]\\Big) dt\\right)\\\\\n&=-\\frac{1}{\\rho}\\ln\\left(1+\\rho\\int_0^\\infty e^{-\\rho t} \\mathbb{E}\\big[x(t)\\big] dt\\right).\n\\end{align*}\n\nBy (\\ref{testX}) it follows that\n$\\displaystyle\n\\mathbb{E}\\big[x(t)\\big]\\le x\\mathbb{E}\\big[Z_t\\big]+\\frac{3}{2}\\mathbb{E}\\big[M_t(1)\\big]=xe^{-bt}+\\frac{3}{2}\\mathbb{E}\\big[M_t(1)\\big].\n$\n\nHence, using Lemma \\ref{Mest} (i), we obtain\n\\[\n\\int_0^\\infty e^{-\\rho t}\\mathbb{E}\\big[\\ln u(t)\\big]\\ dt \\ge -\\frac{1}{\\rho}\\ln\\left(1+\\frac{\\rho x}{\\rho+b}+\\frac{3}{2(\\rho+b)}\\right).\n\\]\n\nThe preceding estimate and (\\ref{Xener}) together imply that, for some suitable constant $K_1$,\n\\begin{align*}\nV(x)&\\ge J(x;u) =\\mathbb{E}\\big[\\int_0^\\infty e^{-\\rho t}\\big(\\ln u(t)-x^2(t)\\big)\\ dt \\big]\\\\\n&\\ge -A\\left(x+\\frac{1}{b+\\rho}\\right)^2-\\frac{1}{\\rho}\\ln \\left(x+\\frac{1}{b+\\rho}\\right)+K_1.\n\\end{align*}\n\n\n\\noindent\n{\\em Proof of the upper bound, part (i) :}\\quad In view of Lemma \\ref{Vdec} and Lemma \\ref{V0fin}, 
it suffices to find $K_2>0$, such that the asserted inequality holds for $x\\ge 1$.\n\\vskip.05in\n\nFix $u\\in\\mathfrak{U}$. Then \n\\begin{align*}\nx^2(t)&\\ge x^2Z_t^2+2xZ_t^2\\int_0^t\\frac{1}{Z_s}\\left(u(s)+\\frac{x^2(s)}{1+x^2(s)}\\right) ds\\\\\n&=x^2Z_t^2+2xZ_tM_t(1+u)-\\int_0^t\\frac{Z_t^2}{Z_s^2}\\frac{2xZ_s}{1+x^2(s)} ds.\n\\end{align*}\n\nSince\n\\[\n1+x^2(t)\\ge 1+x^2Z_t^2\\ge 2xZ_t,\n\\]\nso we can further estimate $x^2(t)$ from below by\n\\begin{align}\\label{mtuest}\nx^2(t)&\\ge x^2Z_t^2+2xZ_tM_t(1)+2xZ_tM_t(u)-\\int_0^t \\frac{Z_t^2}{Z_s^2} ds.\n\\end{align}\n\nUsing the elementary inequality that $\\ln a\\le ab-\\ln b-1$, which holds for all $a,b>0$, and Lemma \\ref{Mest} (ii), we obtain, for some B,\n\\begin{align*}\n&\\int_0^\\infty e^{-\\rho t}\\mathbb{E}\\big[\\ln u(t)\\big]dt \\le \\mathbb{E}\\left[\\int_0^\\infty e^{-\\rho t} \\Big\\{2AxZ_tu(t)-\\ln\\big(2AxZ_t\\big)\\Big\\}\\ dt\\right]\\nonumber\\\\\n&\\qquad=\\mathbb{E}\\left[\\int_0^\\infty\\hspace{-2mm} e^{-\\rho t} 2xZ_tM_t(u)\\, dt\\right]-\\frac{\\ln(2Ax) }{\\rho}+\\frac{2b+\\sigma^2}{2\\rho^2}\\nonumber\\\\\n&\\qquad\\le \\mathbb{E}\\left[\\int_0^\\infty \\hspace{-2mm}e^{-\\rho t} \\Big(x^2(t)-x^2Z_t^2-2xZ_tM_t(1)+\\int_0^t\\frac{Z_t^2}{Z_s^2}\\, ds\\Big)\\, dt\\right]-\\frac{\\ln x + B}{\\rho}\\nonumber,\n\\end{align*}\nwhere in the final step we have used (\\ref{mtuest}). \n\\vskip.05in\n\nIn view of (\\ref{Xsq}), (\\ref{X1}) and (\\ref{lastterm}), for every $u\\in\\mathfrak{U}$ there exists $K_2>0$ such that\n\\[\nJ(x;u)\\le -A\\left(x+\\frac{1}{b+\\rho}\\right)^2-\\frac{1}{\\rho}\\ln \\big(x+\\frac{1}{\\rho+b}\\big)+K_2.\n\\]\n\nThe assertion now follows by taking the supremum over $u\\in\\mathfrak{U}$.\\\\[2mm]\n\\noindent\n{\\em Proof of the lower bound, part (ii):} Fix $x_1,x_2$ as in the statement. 
It follows from Lemma \\ref{specialdpp} that for any $\\epsilon>0$ there exists a control $u_\\varepsilon\\in\\mathfrak{U}$ such that \n\\begin{equation}\\label{dppe}\nV(x_1)\\le \\mathbb{E}\\left[\\int_0^{\\tau_{\\varepsilon}}e^{-\\rho t}\\ln u_\\varepsilon(t)\\ dt\\right]+\\mathbb{E}\\big[e^{-\\rho\\tau_{\\varepsilon}}\\big]V(x_2)+\\varepsilon(x_2^2-x_1^2),\n\\end{equation}\nwhere $\\tau_\\varepsilon$ is the hitting time on $[x_2,+\\infty)$ of the solution $x_\\varepsilon(\\cdot)$ to (\\ref{sldyn}) with $x(0)=x_1$ and control $u_\\varepsilon$.\n\\vskip.05in\n\nUsing the elementary inequality\n\\[\n\\ln u_\\varepsilon(t)\\le \\ln c+\\frac{u_\\varepsilon(t)}{c}-1, \\quad\\text{with } c=e^{\\rho V(x_2)+1},\n\\]\nwe find\n\\begin{align}\\label{DVcont}\nV(x_1)-V(x_2)&\\le \\frac{1}{c}\\mathbb{E}\\left[\\int_0^{\\tau_\\varepsilon} e^{-\\rho t}u_\\varepsilon(t)\\ dt\\right]+\\varepsilon(x_2^2-x_1^2).\n\\end{align}\n\nTo conclude it suffices to show that\n\\begin{equation}\n\\frac{1}{c}\\mathbb{E}\\left[\\int_0^{\\tau_\\varepsilon}e^{-\\rho t}u_\\varepsilon(t)\\ dt\\right]\\le \\Phi(x_2)(x_2-x_1).\n\\label{suffu}\n\\end{equation}\n\nTo this end, we apply It\\^{o}'s rule to the semimartingale $Y_t=e^{-\\rho t+\\gamma x_\\varepsilon(t)}$, where $\\gamma>0$ is a constant to be determined, and find\n\\begin{align*}\nY_t-e^{\\gamma x_1}&=\\int_0^t Y_s \\big(-\\rho\\, ds+ \\gamma\\, dx_\\varepsilon(s)+\\frac{\\gamma^2}{2}\\, d\\langle x_\\varepsilon\\rangle_s\\big)\\\\\n&=\\int_0^t Y_s\\Big(-\\rho+\\gamma\\big(u_\\varepsilon(s)-bx_\\varepsilon(s)+\\frac{x_\\varepsilon^2(s)}{1+x_\\varepsilon^2(s)}\\big)+\\frac{\\gamma^2\\sigma^2 x_\\varepsilon^2(s)}{2}\\Big)ds+M_t,\n\\end{align*}\nwhere $M_t$ stands for the martingale $\\gamma\\sigma \\int_0^t x_\\varepsilon(s)\\, dW(s)$. \n\\vskip.05in\nNext we apply the optional stopping theorem for the bounded stopping time $\\tau_N=\\min\\{\\tau_\\varepsilon,N\\}$, with $N\\in\\mathbb{N}$, to find \n\\begin{align*}\n\\mathbb{E}\\big[Y_{\\tau_N}\\big]-e^{\\gamma x_1}&=\\mathbb{E}\\left[\\int_0^{\\tau_N} \\hspace{-2mm}Y_s\\Big(-\\rho+\\gamma\\big(u_\\varepsilon(s)-bx_\\varepsilon(s)+\\frac{x_\\varepsilon^2(s)}{1+x_\\varepsilon^2(s)}\\big)+\\frac{\\gamma^2\\sigma^2 x_\\varepsilon^2(s)}{2}\\Big)ds\\right].\n\\end{align*}\n\nSince $0\\le x_\\varepsilon(s) \\le x_2$ in $[0,\\tau_\\varepsilon]$, \n\\begin{align*}\ne^{\\gamma x_2}\\mathbb{E}\\big[e^{-\\rho\\tau_N}\\big]-e^{\\gamma x_1}&\\ge \\mathbb{E}\\left[\\int_0^{\\tau_N} \\hspace{-2mm}Y_t\\Big(-\\rho+\\gamma\\big(u_\\varepsilon(t)-bx_\\varepsilon(t)\\big)+\\frac{\\gamma^2\\sigma^2 x_\\varepsilon^2(t)}{2}\\Big)dt\\right],\n\\end{align*}\nand\n\\begin{align*}\ne^{\\gamma x_2}-e^{\\gamma x_1}&\\ge \\gamma\\mathbb{E}\\left[\\int_0^{\\tau_N}\\hspace{-2mm}e^{-\\rho t}u_\\varepsilon(t)\\, dt\\right]+\\mathbb{E}\\left[\\int_0^{\\tau_N} \\hspace{-2mm}Y_t\\Big(-b\\gamma x_\\varepsilon(t)+\\frac{\\sigma^2\\gamma^2 x_\\varepsilon^2(t)}{2}\\Big)dt\\right].\n\\end{align*}\n\nNote that the term in the parenthesis above is nonnegative if $\\gamma x_\\varepsilon(t) \\ge 2b\/\\sigma^2$, and greater than or equal to $-b^2\/2\\sigma^2$ in any case. 
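Indeed, writing $s=\\gamma x_\\varepsilon(t)$, the term in the parenthesis reads $\\tfrac{\\sigma^2}{2}s^{2}-bs$, a quadratic with roots $s=0$ and $s=2b\/\\sigma^{2}$ and minimum value $-b^{2}\/2\\sigma^{2}$; moreover, on the set where it is negative we have $\\gamma x_\\varepsilon(t)<2b\/\\sigma^{2}$ and therefore $Y_t\\le e^{-\\rho t+2b\/\\sigma^{2}}$, which is the source of the exponential factor in the next estimate. 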
Hence, \n\\[\ne^{\\gamma x_2}-e^{\\gamma x_1}\\ge \\gamma\\mathbb{E}\\left[\\int_0^{\\tau_N}\\hspace{-2mm}e^{-\\rho t}u_\\varepsilon(t)\\, dt\\right]-\\frac{b^2e^{\\frac{2b}{\\sigma^2}}}{2\\sigma^2}\\mathbb{E}\\left[\\int_0^{\\tau_N} \\hspace{-2mm}e^{-\\rho t}dt\\right].\n\\]\n\nLetting $N\\to\\infty$ we get\n\\begin{equation}\\label{coerce}\ne^{\\gamma x_2}-e^{\\gamma x_1}\\ge \\gamma\\mathbb{E}\\left[\\int_0^{\\tau_\\varepsilon}\\hspace{-2mm}e^{-\\rho t}u_\\varepsilon(t)\\, dt\\right]-\\frac{b^2e^{\\frac{2b}{\\sigma^2}}}{2\\sigma^2}\\mathbb{E}\\left[\\int_0^{\\tau_\\varepsilon} \\hspace{-2mm}e^{-\\rho t}dt\\right].\n\\end{equation}\n\nWith $\\gamma$ still at our disposal, to show (\\ref{suffu}) it suffices to control the term $\\mathbb{E}\\left[\\int_0^{\\tau_\\varepsilon} e^{-\\rho t}dt\\right]$\nby $\\mathbb{E}\\left[\\int_0^{\\tau_\\varepsilon}e^{-\\rho t}u_\\varepsilon(t)\\, dt\\right]$. \n\\vskip.05in\n\nSince without loss of generality we may assume that $\\varepsilonN\\big]\\). \n\\vskip.05in\n\nOn the other hand, since we have assumed that $x_2\\le b$, we have $x_c(t)\\le b$ up to time $\\tau_c$. Thus, the right hand side \nof \\eqref{lastone} is bounded by \\( \\mathbb{E}\\left[\\int_0^{\\tau_N}e^{-\\rho t} c\\ dt\\right]\\). \n\\vskip.05in\n\nLetting $N\\to\\infty$ in $\\eqref{lastone}$, by the monotone convergence theorem, we have\n\\[\nx_2 \\mathbb{E}\\big[e^{-\\rho \\tau_c}\\big]-x_1\\le c\\,\\mathbb{E}\\big[\\int_0^{\\tau_c} e^{-\\rho t}dt\\big] \\Longleftrightarrow (x_2-x_1)\\mathbb{E}\\big[e^{-\\rho\\tau_c}\\big]\\le (c+\\rho x_1)\\ \\mathbb{E}\\big[\\int_0^{\\tau_c} e^{-\\rho t}dt\\big].\n\\]\n\nSubstituting this in (\\ref{ub}) and choosing $\\ln c=\\rho V(x_1)+1+x_2^2$, we find\n\\begin{equation}\\label{lowb}\nV(x_2)-V(x_1)\\le -(x_2-x_1)\\left(e^{\\rho V(x_1)+1+x_2^2}+\\rho x_1\\right)^{-1} \\!\\!\\!.\n\\end{equation}\n\nThe assertion now follows letting $C=Ab\\wedge \\left(e^{\\rho V(0)+1+b^2}+\\rho b\\right)^{-1}>0$. \\hfill$\\Box$ \\\\[2mm]\n\n\\noindent\n{\\em Proof of part (iii):} It follows from (\\ref{lowb}) that, for any $\\varepsilon\\in(0,b]$,\n\\[\n\\frac{V(\\varepsilon)-V(0)}{\\varepsilon}\\le -e^{-\\rho V(0)-1-\\varepsilon^2}.\n\\]\n\nLetting $\\varepsilon\\to 0$ we get\n\\[\n\\limsup_{\\varepsilon\\to 0}\\frac{V(\\varepsilon)-V(0)}{\\varepsilon}\\le -e^{-\\rho V(0)-1},\n\\]\nwhile (\\ref{DVlower}) gives\n\\[\n\\frac{V(\\varepsilon)-V(0)}{\\varepsilon}\\ge -\\big(1+q(\\varepsilon)\\big)e^{q(\\varepsilon)-1-\\rho V(\\varepsilon)}.\n\\]\n\nLetting $\\varepsilon\\to 0$ and noting that $q(\\varepsilon)\\to 0$, and $\\ V(\\varepsilon)\\to V(0)$, we have\n\\[\n\\liminf_{\\varepsilon\\to 0}\\frac{V(\\varepsilon)-V(0)}{\\varepsilon}\\le -e^{-\\rho V(0)-1},\n\\]\nwhich proves the claim. \\hfill$\\Box$ \\\\[2mm]\n\nWe conclude observing that since $V\\in C\\left([0,\\infty)\\right)$, the general dynamic programming principle is also satisfied. For a proof we refer to \\cite{Tou}.\n\n\n\n\n\\section{Viscosity solutions and the Hamilton--Jacobi--Bellman equation}\n\nSince the Hamiltonian (\\ref{HJB-unbounded}) can take infinite values we have a singular stochastic control problem and the welfare function (\\ref{ihvf}) should satisfy the proper variational inequality; see \\cite{FS},\nSection VIII and \\cite{pham}, Section 4. The proof of the next Lemma, except for the treatment of the boundary conditions, follows the lines of Proposition 4.3.2 of \\cite{pham}. 
\n\\begin{lemma}\\label{singular_control_problem}\nIf $\\sigma^2 < \\rho +2b$, the welfare function $V$ defined by (\\ref{ihvf}) is a continuous constrained viscosity solution of \n\\begin{equation}\\label{HJB-var}\n \\min \\Big[ \\rho V -\\sup_{u\\in \\mathbb{R}^+ } \\left( \\dfrac{1}{2}\\sigma^{2}x^{2}V_{xx}+( u-bx+\\dfrac{x^{2}}{x^{2}+1})V_x +\\ln u-x^{2}\\right), -V_x \\Big] =0, \\quad \\text{ in } [0,\\infty).\n \\end{equation}\n\\end{lemma}\n\\noindent\n{\\em Proof:} That $V$ is a viscosity solution in $(0, \\infty)$ follows as in \\cite{pham}, so we omit the details. \n\\vskip.05in\n\nHere we briefly discuss the subsolution property at $x=0$. Let $\\phi$ be a test function \nsuch that $V-\\phi$ has a maximum at $x=0$ with $V(0)-\\phi(0)=0$, and, proceeding by contradiction, we assume that \n\\begin{align}\\label{subat0}\n \\rho \\phi(0) -\\sup_{u\\in \\mathbb{R}^+ } G\\big(0,u,\\phi'(0),\\phi''(0)\\big) >0 \\,\\, \\text{ and }\\,\\, -\\phi'(0) >0. \n\\end{align}\n\nSince $-\\phi'(0)>0$, the supremum in the above inequality is finite, the Hamiltonian takes the standard form, and the first inequality in \\eqref{subat0} becomes\n\\[\n\\rho \\phi(0)+1+\\ln\\big(-\\phi'(0)\\big)>0.\n\\]\n\nOn the other hand, in view of Proposition \\ref{vprop} (iii), we have \\(\\rho V(0)+1+\\ln\\big(-V'(0)\\big)=0\\), hence \\(V'(0)>\\phi'(0)\\), contradicting that \\(V-\\phi\\) has a maximum at $x=0$.\n\\vskip.05in\n\nWe have now obtained all the necessary material for the proof of Theorem \\ref{constrained_vs}.\n\\vskip.05in\n\n\\noindent\n{\\em Proof:} \nThe fact that (\\ref{ihvf}) is a continuous constrained viscosity solution of the equation ${\\rm (OHJB)}$ is a direct consequence of the above Lemma and the fact that inequality (\\ref{DVbounds}) \nimplies that $p\\leq-C$ for any $p\\in D^\\pm V(x)$, with $x\\in (0, \\infty)$. The regularity of $V$ in $(0,\\infty)$ follows from the classical results for uniformly elliptic \noperators. \\hfill$\\Box$ \\\\\n\nDue to the extra regularity of the welfare function, the following verification equation holds in $(0, \\infty)$, for any optimal pair $(\\overline{u}(\\cdot)),\\overline{x}(\\cdot))$, \n\\begin{align*}\n\\rho V(\\overline{x}(t))=& \\sup_{u\\in U } G(\\overline{x}(t),u,V_x(\\overline{x}(t)),V_{xx}(\\overline{x}(t))) \\\\\n=&\\left( \\dfrac{\\overline{x}^{2}(t)}{\\overline{x}^{2}(t)+1}-b\\overline{x}(t)\\right)V_x(\\overline{x}(t))-\\Big( \\ln(-V_x(\\overline{x}(t)))+\\overline{x}^{2}(t)+1\\Big) \\\\\n& +\\dfrac{1}{2}\\sigma^{2}\\overline{x}^{2}(t)V_{xx}(\\overline{x}(t)), \n\\ \\ \\ \\ \\ \\ t\\in[0,\\infty)- a. e.\\ ,\\ \\ \\mathbb{P}- a.e.;\n\\end{align*}\n\\noindent\n see \\cite{FS} and \\cite{JYXZ}.\n\\vskip.05in\n\nNext we prove the proper comparison principle for ${\\rm (OHJB)}$. The proof is along the lines of the strategy in \\cite{HI}, \nwhere given a subsolution $u$ and a supersolution $v$ \nof (OHJB), $u-v$ is a subsolution of the corresponding linearized equation. Then, one concludes by comparing $u-v$ with the appropriate supersolution of the linearized equation; see also \n\\cite{DaLL} and \\cite{Za}. The difference with the existing results is that, \ndue to the presence of the logarithmic term,\nthe commonly used functions of simple polynomials do not yield a supersolution of the equation. \n\\vskip.05in\n\nHaving in mind that we are looking for a viscosity solution that is strictly decreasing \nand satisfies (\\ref{DVbounds}), we prove the following lemma. 
\n\n\\noindent\n\\begin{lemma} \\label{intermed} Suppose $u$, $v$ satisfy the assumptions of Theorem \\ref{comparison}.\nThen $\\psi=u-v$ is a subsolution of \n\\begin{equation}\\label{u-v}\n \\rho \\psi +bxD\\psi-\\left(1+c^*\\right)|D\\psi| -\\frac{1}{2}\\sigma^2x^2 D^2 \\psi = 0 \\,\\, \\mbox{ in } \\,\\, [0, \\infty).\n\\end{equation}\n\\end{lemma} \n\\noindent\n{\\em Proof:}\nLet $\\bar{x}\\ge 0$ a maximum point of $\\psi -\\phi$ for some smooth function $\\phi$ and set, following \\cite{So},\n$$\\theta(x,y)=\\phi(x)+\\frac{(x-y+\\varepsilon L)^2}{\\varepsilon}+\\delta(x-\\bar{x})^4,$$\nwhere $L,\\delta$ are positive constants.\n\nThe assumptions on $u,v$ imply that the function $(x,y) \\mapsto u(x)-v(y)-\\theta(x,y)$ is bounded from above and achieves its maximum at, say, $(x_{\\varepsilon},y_{\\varepsilon})$. It follows that $x \\mapsto u(x)-v(y_{\\varepsilon})-\\theta(x,y_{\\varepsilon})$ has a local maximum at\n$x_{\\varepsilon}$ and $y \\mapsto v(y)-u(x_{\\varepsilon})+\\theta(x_{\\varepsilon},y)$ has a local minimum at $y_{\\varepsilon}$. Moreover, (see Proposition 3.7 in \\cite{CIL}), as $\\varepsilon \\rightarrow 0$, \n\\begin{equation}\\label{doubling}\n \\frac{|x_{\\varepsilon}-y_{\\varepsilon}|^2}{\\varepsilon} \\rightarrow 0 ,\\, x_\\varepsilon\\rightarrow \\bar{x}, \\, \\mbox{ and }\nu(x_{\\varepsilon})-v(y_{\\varepsilon}) \\rightarrow \\psi(\\bar{x}).\n\\end{equation}\n\nThe inequalities\n\\[\nu(x_\\varepsilon)-v(y_\\varepsilon)-\\theta(x_\\varepsilon,y_\\varepsilon)\\le \\psi(\\bar{x})-\\phi(\\bar{x})+v(x_\\varepsilon)-v(y_\\varepsilon)-\\frac{|x_{\\varepsilon}-y_{\\varepsilon}+\\varepsilon L|^2}{\\varepsilon}-\\delta(x_\\varepsilon-\\bar{x})^4\n\\]\nand \n\\[\nu(x_\\varepsilon)-v(y_\\varepsilon)-\\theta(x_\\varepsilon,y_\\varepsilon)\\ge u(\\bar{x})-v(\\bar{x}+\\varepsilon L)-\\theta(\\bar{x},\\bar{x}+\\varepsilon L)\\ge \\psi(\\bar{x})-\\phi(\\bar{x})\n\\]\ntogether imply that \n\\[\n\\frac{|x_{\\varepsilon}-y_{\\varepsilon}+\\varepsilon L|^2}{\\varepsilon}+\\delta(x_\\varepsilon-\\bar{x})^4\\le v(x_\\varepsilon)-v(y_\\varepsilon).\n\\]\n\nSince $v$ is decreasing we must have $y_\\varepsilon>x_\\varepsilon.$ In particular, $y_\\varepsilon\\in(0,\\infty)$.\n\nTherefore, setting $p_{\\varepsilon}=2\\frac{x_{\\varepsilon}-y_{\\varepsilon}+\\varepsilon L}{\\varepsilon}$ and $q_\\varepsilon= \\phi_x(x_{\\varepsilon})+4\\delta(x_\\varepsilon-\\bar{x})^3$, Theorem 3.2 in \\cite{CIL} implies that,\nfor every $\\alpha >0$, there exist $X, \\, Y \\in \\mathbb{R}$ such that\n\\begin{equation}\\label{sub-super}\n \\rho u(x_{\\varepsilon})-H(x_{\\varepsilon},p_\\varepsilon+q_{\\varepsilon},X) \\leq 0 \\mbox{ \\,\\,and\\,\\, } \\rho v(y_{\\varepsilon})- H(y_{\\varepsilon},p_{\\varepsilon},Y) \\geq 0\n\\end{equation}\nand \n\\begin{equation}\\label{jet-inequality}\n -(\\frac{1}{\\alpha}+\\|M\\|) I \\leq \\left(\n\\begin{array}{cc}\nX & 0 \\\\\n0 & -Y \n\\end{array} \\right) \\leq M+\\alpha M^2\n\\end{equation}\nwith $M=D^2\\theta(x_{\\varepsilon},y_{\\varepsilon}).$\n\nBy subtracting the two inequalities in (\\ref{sub-super}) we obtain \n\\begin{align}\\label{comp-inequ_1}\n \\rho & u(x_{\\varepsilon})-\\rho v(y_{\\varepsilon}) +b(x_{\\varepsilon}-y_{\\varepsilon})p_{\\varepsilon}+\\Big(\\frac{y_{\\varepsilon}^2}{y_{\\varepsilon}^2+1}-\\frac{x_{\\varepsilon}^2}{x_{\\varepsilon}^2+1}\\Big)p_{\\varepsilon}\n - \\Big(\\frac{x_{\\varepsilon}^2}{x_{\\varepsilon}^2+1}-bx_{\\varepsilon} \\Big) q_\\varepsilon \\\\\n & +\\ln{(-p_{\\varepsilon}- q_\\varepsilon)}- \\ln{(-p_\\varepsilon)} 
\\nonumber + x_{\\varepsilon}^2- y_{\\varepsilon}^2 -\\frac{1}{2}\\sigma^2x_{\\varepsilon}^2 X + \\frac{1}{2}\\sigma^2y_{\\varepsilon}^2 Y \\leq 0. \\nonumber\n\\end{align}\n\nOur assumption on $u$ implies that $p_\\varepsilon+q_\\varepsilon\\le -1\/c^*$. Thus, the difference of the logarithmic terms in the above inequality can be estimated from below as\n\\[\n\\ln\\Big(\\frac{p_\\varepsilon+q_\\varepsilon}{p_\\varepsilon}\\Big)\\ge \\frac{q_\\varepsilon}{p_\\varepsilon+q_\\varepsilon}\\ge -c^*|q_\\varepsilon|,\n\\]\nand inequality (\\ref{comp-inequ_1}) gives\n\\begin{align}\\label{comp-inequ_2}\n\\frac{1}{2}\\sigma^2x_{\\varepsilon}^2 X + \\frac{1}{2}\\sigma^2y_{\\varepsilon}^2 Y&\\ge\\rho u(x_{\\varepsilon})-\\rho v(y_{\\varepsilon}) + bx_\\varepsilon q_\\varepsilon- (1+c^*)|q_\\varepsilon| +\\nonumber\\\\\n &\\qquad p_\\varepsilon(x_\\varepsilon-y_\\varepsilon)\\Big(b-\\frac{x_\\varepsilon+y_\\varepsilon}{(1+x_\\varepsilon^2)(1+y_\\varepsilon^2)}\\Big)+x_\\varepsilon^2-y_\\varepsilon^2\n\\end{align}\n\nOn the other hand, the right-hand-side in (\\ref{jet-inequality}) yields\n\\begin{equation}\\label{X-Y}\n \\frac{1}{2}\\sigma^2x_{\\varepsilon}^2 X - \\frac{1}{2}\\sigma^2y_{\\varepsilon}^2 Y \\leq \\frac{1}{2}\\sigma^2x_{\\varepsilon}^2 \\big(\\phi_{xx}(x_{\\varepsilon})+12\\delta(x_\\varepsilon-\\bar{x})^2\\big)\n +\\frac{\\sigma^2}{\\varepsilon}(x_{\\varepsilon}-y_{\\varepsilon})^2+m(\\frac{\\alpha}{\\varepsilon^2}), \n\\end{equation}\nwith $m$ a modulus of continuity independent of $\\alpha$, $\\varepsilon$. \n\\vskip.05in\n\nBy combining \\eqref{comp-inequ_2} with \\eqref{X-Y}, we conclude the proof taking first $\\alpha \\rightarrow 0$, then $\\varepsilon \\rightarrow 0$ and using \\eqref{doubling}. \\hfill$\\Box$ \\\\\n\n\\noindent\n We continue with the \\\\[1mm]\n \\noindent\n{\\it Proof of Theorem \\ref{comparison}:} The main step is the construction of a solution of the linearized equation. For this, we consider the ode \n\\begin{equation}\\label{ode1}\n\\rho w +\\big(bx-(1+c^*)\\big)w'-\\frac{1}{2}\\sigma^2x^2w''=0,\n\\end{equation}\nwhich has a solution of the form \n\\begin{equation}\\label{sol}\n w(x)=x^{-k}\\mathcal{J}(\\frac{2+2c^*}{\\sigma^2x}),\n\\end{equation}\nwhere $k$ is a root of \n\\begin{equation}\\label{root}\n k^2+\\Big(1+\\frac{2b}{\\sigma^2}\\Big)k-\\frac{2\\rho}{\\sigma^2}=0\n\\end{equation}\nand $\\mathcal{J}$ a solution of the degenerate hypergeometric equation \n\\begin{equation}\\label{confluent}\nxy''+(\\tilde{b}-x)y'-\\tilde{a}y=0\n\\end{equation}\nwith $\\tilde{a}=k$ and $\\tilde{b}=2(k+1+b\/\\sigma^2)$.\n\nSince we are looking for a solution of \\eqref{ode1} with superquadratic growth at $+\\infty$, we choose $k$ to be the negative root of \\eqref{root}. The assumption $\\sigma^2 < \\rho+2b$ implies $-k>2$.\n\nWe further choose $\\mathcal{J}$ to be the Tricomi solution of \\eqref{confluent} which satisfies\n\\[\n\\mathcal{J}(0)>0\\qquad\\text{and} \\qquad \\mathcal{J}(x)=x^{-k}\\big(1+\\frac{2\\rho}{\\sigma^2 x}+o(x^{-1})\\big)\\quad\\text{as }x\\to\\infty.\n\\]\n\nWith this choice, the function $w$ defined in \\eqref{sol} for $x>0$ and by continuity at $x=0$, satisfies $w(0), w'(0) >0$ and $w(x)\\sim \\mathcal{J}(0) x^{-k}$, as $x\\to\\infty$. \n\nNote that $w$ is increasing in $[0,\\infty)$ since it would otherwise have a positive local maximum and this is impossible by \\eqref{ode1}. In particular, $w$ satisfies \\eqref{u-v}.\n\nSet now $\\psi=u-v$ and consider $\\epsilon>0$. 
Since $\\psi-\\epsilon w<0$ in a neighborhood of infinity, there exists $x^\\epsilon \\in [0,\\infty)$ such that\n\\[\n\\max_{x\\ge 0} \\big(\\psi(x)-\\epsilon w(x)\\big)=\\psi(x^\\epsilon)-\\epsilon w(x^\\epsilon).\n\\]\n\nBy Lemma \\ref{intermed} $\\psi $ is a subsolution of \\eqref{u-v}. We now use $\\epsilon w$ as a test function to find that\n\\begin{equation*}\n0\\ge \\rho \\psi({x}^\\epsilon) +\\epsilon b{x}^\\epsilon w'({x}^\\epsilon)-\\epsilon\\left(1+c^*\\right)|w'({x}^\\epsilon)| -\\frac{1}{2}\\sigma^2{(x^\\epsilon)}^2\\,\\epsilon w''({x}^\\epsilon) = \\rho\\big(\\psi({x}^\\epsilon)-\\epsilon w(x^\\epsilon)\\big).\n\\end{equation*}\n\nHence, $\\psi(x)\\le \\epsilon w(x)$ for all $x\\in [0,\\infty)$. Since $\\epsilon$ is arbitrary, this proves the claim. \\hfill$\\Box$ \\\\[1mm]\n\nThe stability property of viscosity solutions yields the following theorem. \n\n\\begin{thm}\\label{constrained_vs_det} As $\\sigma \\rightarrow 0$, the welfare function $V$ defined by (\\ref{ihvf}) converges locally uniformly to the constrained viscosity solution $V^{(d)}$ of the deterministic \nshallow lake equation in $[0, \\infty)$,\n$$\\rho V^{(d)}=\\left( \\dfrac{x^{2}}{x^{2}+1}-bx\\right)V_{x}^{(d)}-\\Big( \\ln(-V_{x}^{(d)})+x^{2}+1\\Big). \\leqno\\mathrm{(OHJB_d)}$$\n\\end{thm}\n\\vskip.05in\nWe next prove Theorem \\ref{asymptotic} that describes the asymptotic behaviour of $V$ as $x \\rightarrow \\infty$. The proof is based on a scaling argument and the stability properties of the viscosity solutions.\\\\\n\n\\noindent\n{\\it Proof of Theorem \\ref{asymptotic}: } We write $V$ as \n\\begin{equation*}\n V(x)= -A\\left(x+\\frac{1}{b+\\rho}\\right)^2-\\frac{1}{\\rho}\\ln \\left(2A(x+\\frac{1}{b+\\rho})\\right)+K+v(x). \n\\end{equation*}\n\nStraightforward calculations yield that $v$ is a viscosity solution in $(0, \\infty)$ of the equation\n\\begin{multline}\n\\rho v+\\left(bx-\\dfrac{x^{2}}{x^{2}+1}\\right)v'+\\ln\\left(1+\\frac{1-\\rho\\big(x+\\frac{1}{b+\\rho}\\big)v'}{2A\\rho\\left(x+\\frac{1}{b+\\rho}\\right)^2} \\right)-\\dfrac{1}{2}\\sigma^{2}x^{2}v''+f=0,\n\\end{multline}\nwhere \n\\[\nf(x)=\\frac{b+\\frac{\\sigma^2}{2}+(b+\\rho)\\frac{x^2}{1+x^2}}{\\rho\\big(1+x(b+\\rho)\\big)}+\\frac{\\sigma^2x(b+\\rho)}{2\\rho\\big(1+x(b+\\rho)\\big)^2}-2A\\frac{\\big(1+x(b+\\rho)\\big)}{1+x^2}.\n\\]\n\nNote that $f$ is smooth on $[0,\\infty)$ and vanishes as $x\\to\\infty$.\n\\vskip.05in\n\n Let $v_\\lambda(y)=v(\\frac{y}{\\lambda})$ and observe that, if $v_\\lambda(1) \\rightarrow 0$ as $\\lambda \\rightarrow 0$, then $v(x) \\rightarrow 0$ as $x \n \\rightarrow \\infty$. It turns out that $v_\\lambda$ solves\n \\begin{multline*}\n\\rho v_\\lambda+ \\left(bx-\\dfrac{\\lambda x^{2}}{x^{2}+\\lambda^2}\\right)v'_\\lambda+\\ln\\left(1+\\frac{\\lambda^2\\big(1-\\rho\\big(x+\\frac{\\lambda}{b+\\rho}\\big)v'_\\lambda\\big)}{2A\\rho\\left(x+\\frac{\\lambda}{b+\\rho}\\right)^2} \\right)-\\dfrac{1}{2}\\sigma^{2}x^{2}v''_\\lambda+f\\big(\\frac{x}{\\lambda}\\big)=0.\n \\end{multline*}\n\nSince, by (\\ref{Vbounds}), $v_\\lambda $ is uniformly bounded, we consider the half-relaxed limits $v^*(y)=\\limsup_{x \\rightarrow y, \\lambda\\rightarrow 0}v_\\lambda(x)$ and $v_*(y)=\\liminf_{x \\rightarrow y, \\lambda\\rightarrow 0}v_\\lambda(x)$ \n in $(0, \\infty)$, which are (see \\cite{BP}) respectively sub- and super-solutions of \n\\begin{equation}\n \\rho w +byw'-\\frac{1}{2} \\sigma^2 y^2 w''=0.\n\\end{equation}\n\nIt is easy to check that for any $y>0$ we have $v^*(y)=\\limsup_{x\\to\\infty}v(x)$ and $v_*(y)=\\liminf_{x\\to\\infty}v(x)$. 
\n\\vskip.05in\nThe subsolution property of $v^*$ and the supersolution property of $v_*$ give \n\\[\n\\limsup_{x\\to\\infty}v(x)\\le 0\\le\\liminf_{x\\to\\infty}v(x)\\le \\limsup_{x\\to\\infty}v(x).\n\\]\\qed\n\n\n\n\\section{A numerical scheme and optimal dynamics}\n\n A general argument to prove the convergence of monotone schemes for viscosity solutions of fully nonlinear second-order elliptic or parabolic, possibly degenerate, partial differential equations has been introduced in \\cite{BS}. \n Their methodology has been extensively\n used to approximate solutions to first-order equations, see for example \\cite{RT}, \\cite{Se2}, \\cite{KMS}, \\cite{Qi}. \n \\vskip.05in\n \n On the other hand, it is not always possible to construct monotone schemes for second-order equations in their full generality. However, various types of nonlinear second-order equations have been approximated\n via monotone schemes based on \\cite{BS}; see, for example, \\cite{OF}, \\cite{BJ}, \\cite{BZ}, \\cite{FO}.\n \\vskip.05in\n \n Next, following \\cite{KosZoh} which considered the deterministic problem, we construct a monotone finite difference scheme to approximate numerically the welfare function and recover numerically the stochastic optimal dynamics. \n\\vskip.05in \n\nLet\n$\\Delta x$ denote the step size of a uniform partition $0 = x_0 < x_1 < \\ldots < x_{N-1} < x_N = l$ of $[0, l]$ for \n$l > 0$ sufficiently large. Having in mind (\\ref{DVbounds}), if $V_{i}$ is the approximation of $V$ at $x_i$, we employ a backward finite difference discretization to approximate the first derivative in the linear term of the (OHJB), \na forward finite difference discretization for the derivative in the logarithmic term and a central finite difference scheme to approximate the second derivative. \n\n\\vskip.05in\nThese considerations yield, for $i = 1,\\ldots, N-1$, the approximate equation\n\\begin{multline}\\label{DHJB}\n V_i - \\frac{1}{\\rho} \\Big(\\frac{x_i^2}{x_i^2+1} - bx_i \\Big) \\frac{ V_{i} - V_{i-1} }{ \\Delta x} + \\frac{1}{\\rho}\n\\left[x_i^2 + 1 +\\ln \\left( - \\frac{V_{i+1} - V_i}{\\Delta x}\\right)\\right] \\\\\n- \\frac{\\sigma^2}{2\\rho}\n\\frac{ V_{i+1} + V_{i-1} - 2 V_i}{(\\Delta x)^2}=0.\n\\end{multline}\n\nSetting \n\\begin{multline}\n g(x,w,c,d)=\\Big[(\\Delta x)^2 - \\frac{1}{\\rho}\\Big(\\frac{x^2}{x^2+1} - bx \\Big) \\Delta x +\\frac{\\sigma^2}{\\rho}\\Big]w+ \\\\ \n \\frac{1}{\\rho}\n(x^2 + 1 ) (\\Delta x)^2+\\frac{1}{\\rho}(\\Delta x)^2\\ln \\left( - \\frac{c - w}{\\Delta x}\\right)+\\frac{1}{\\rho}\\Delta x \\Big(\\frac{x^2}{x^2+1} - bx \\Big) d- \\frac{\\sigma^2}{2\\rho}\n( c + d ),\n\\end{multline}\n the numerical approximation of $V$ satisfies \n\\begin{equation}\\label{fdscheme}\n g(x_i,V_i,V_{i+1},V_{i-1})=0, \\mbox{\\,\\,\\,for\\,\\, \\, i = 1,\\ldots, N-1 },\n\\end{equation}\nand the consistency is immediate.\n\\vskip.05in\n\nFor the monotonicity we observe that for two different approximation grid vectors $(U_0, \\ldots, U_N)$ and $(V_0, \\ldots, V_N)$ with $U_i \\geq V_i$ and $U_i=V_i=w$, we have \n\\begin{equation}\n g(x_i,w,U_{i+1},U_{i-1})\\leq g(x_i,w,V_{i+1},V_{i-1}),\n\\end{equation}\nprovided $\\Delta x (\\frac{x^2}{x^2+1} - bx)\\leq \\sigma^2\/2$. 
This condition is satisfied if $b \\geq 0.5$ or if we take $\\Delta x\\leq\\sigma^2\/2$.\n\\vskip.05in\nSince the welfare function solves a state constraint problem the equation is satisfied on the left boundary point.\n\\vskip.05in\n\nIt follows that the numerical scheme is monotone in the sense of \\cite{BS} and converges to the unique constrained\nviscosity solution. \n\\vskip.05in\n\n Since the computational domain of the problem is finite, a boundary condition has to be imposed at $x=l$, for $l$ sufficiently large, by exploiting the asymptotic behaviour of the welfare function $V$ as $x\\to+\\infty$. The boundary condition at\nthe right endpoint $x_N$ is provided by the asymptotic estimate \\eqref{asympto_behav}. \n\\vskip.05in\n\nThe scheme above suggests a numerical algorithm for the computation of optimal dynamics governing the shallow lake problem. In this direction, the nondegeneracy of the shallow lake equation in $(0,\\infty)$ induces extra regularity for the function $V$ in $(0,\\infty)$. Hence, the optimal dynamics for the shallow lake problem are described by \n \\begin{equation}\\label{opt_dyn}\n \\left\\{ \\begin{array}{l} d\\bar{x}(t)=\\left(-\\dfrac{1}{V'(\\bar{x}(t))} -b \\bar{x}(t)+\\dfrac{\\bar{x}^{2}(t)}{\\bar{x}^{2}(t)+1}\\right)dt+\\sigma \\bar{x}(t)dW(t), \\\\\n \\bar{x}(0)=x \n \\end{array} \\right.\n\\end{equation}\n \nUsing the numerical representation of $V$ via \\eqref{fdscheme} and properly discretizing the SDE (\\ref{opt_dyn}), we can reconstruct numerically the optimal dynamics. \nThis is a direct approach to investigate numerically the stochastic properties of the optimal dynamics of the shallow lake problem for the various parameters $\\rho$, $b$, $c$, $\\sigma$ of the problem.\n\\vskip.05in\n\nThe exact numerical algorithm for the computation of the constrained viscosity solution along with the numerical study of the optimal dynamics and their stochastic properties \n for various $\\sigma$'s will be presented elsewhere. \n \n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{}\n\nWe know that in Hamiltonian systems a dynamic function $f(q,p)$ develops in time according to\n\\be\n\\label{1}\n\\dot{f} = [f,H]_{P.B} \\, .\t\t\t\t\t\t\t\n\\ee\nIn classical mechanics, we are used to studying the time development of a physical system by employing the Hamiltonian and the \nPoisson brackets $(P.B.)$. \nBut one can extend the latter, using the so-called Lie brackets $(L.B.)$ to great advantage.\nA Lie algebra is an algebraic structure in which the connections between its elements \nare determined with the help of $L.B$. \nIn order to do this, we begin with the set $\\cD$\nof all dynamic functions $A(q,p), B(q,p), \\ldots \\epsilon \\cD$ and require that $\\alpha A, \\alpha A + \\beta B, AB$ and $A^{-1}\n(\\alpha, \\beta$ constants) be defined and likewise should belong to $\\cD$. \nWhen we consider elements like $A, B, \\ldots$ as building blocks, then one can construct a large class of dynamic functions, \ni.e., polynomials, analytic functions, meromorphic functions, Fourier series, etc. One only gets a really new characteristic of \n$\\cD$ if one imposes an algebraic structure by way of the Lie bracket $[A,B] \\epsilon \\cD$, where $L.B.$\nsatisfies the following relations:\n\\begin{align}\n \\label{2}\n{[A,B]} & = -[B,A] \\, , \\\\\n \\label{3}\n [\\alpha A,B] & = \\alpha [A,B] \t\\, , \\\\\n \\label{4}\n [\\alpha A + \\beta B,C] & = \\alpha [A,C] + \\beta [B,C] \\, , \\\\\n\\label{5}\n\t\t\t\t\t[AB,C] & = A[B,C] + B[A,C] \\, . 
\t\t\n \\end{align}\n\t\t\tThe operation $L.B.$ is not associative. Instead the Jacobi identity applies:\n\\be\n\\label{6}\n\t\t\t\t\t[[A,B],C] + [[B,C],A] + [[C,A],B] = 0 \\, .\n\\ee\nA well-known example is given by a 3-dimensional vector space with cross product:\n\\begin{align} \n\t\t\t\t \\vA \\times \\vB & = - \\vB \\times \\vA \\, , \\nonumber\\\\\n\t\t\t\t(\\alpha \\vA + \\beta \\vB) \\times \\vC & = \\alpha \\vA \\times \\vC + \\beta \\vB \\times \\vC \\, , \\nonumber\\\\\n\t\t\t\t\\vA \\times (\\vB \\times \\vC) + \\vB \\times (\\vC \\times \\vA) + \\vC \\times (\\vA \\times \\vB) & = 0 \\, .\\nonumber \n\\end{align}\nThe three rules (\\ref{3}, \\ref{4}, \\ref{5}) can be realized with the help of first-order differential operators:\n\\be\n\\label{7}\n[A (q, p), B] = \\sli^N_{a = 1} \\lk \\frac{\\partial A}{\\partial q_a} \\left[ q_a, B \\right] + \\frac{\\partial A}{\\partial p_a} [p_a, B] \\right) \\, .\n\\ee\nThe proof for (\\ref{3}) and (\\ref{4}) is trivial, as is the proof for (\\ref{5}):\n\\begin{align}\n [AB, C] &= \\frac{\\partial (AB)}{\\partial q} [q, C] + \\frac{\\partial (AB)}{\\partial p} [p, C] \\nonumber\\\\\n &= \\lk \\frac{\\partial A}{\\partial q} B + A \\frac{\\partial B}{\\partial q} \\right) [q, C] + \n \\lk \\frac{\\partial A}{\\partial p} B + A \\frac{\\partial B}{\\partial p} \\right) [p, C] \\nonumber\\\\\n &= A \\lk \\frac{\\partial B}{\\partial q} [q, C] + \\frac{\\partial B}{\\partial p} [p, C] \\right) + \n B \\lk \\frac{\\partial A}{\\partial q} [q, C] + \\frac{\\partial A}{\\partial p} [p, C] \\right) \\nonumber\\\\\n &= A [B, C] + B [A, C] \\, .\\nonumber\n\n\\end{align}\nIf one iterates relation (\\ref{7}), one finds:\n \\begin{align}\n [A (q, p), B (q, p)] &= \\sli^N_{a = 1} \n \\sli^N_{b = 1} \\Bigg( \\frac{\\partial A}{\\partial q_a} \\frac{\\partial B}{\\partial q_b} \n [q_a, q_b] + \\frac{\\partial A}{\\partial q_a} \\frac{\\partial B}{\\partial p_b} [q_a, p_b] \n \\nonumber \\\\\n & + \\frac{\\partial A}{\\partial p_a} \\frac{\\partial B}{\\partial q_b} [p_a, q_b] + \\frac{\\partial A}{\\partial p_a} \n \\frac{\\partial B}{\\partial p_b} [p_a, p_b] \\Bigg) \\, . \n\\label{8}\n \\end{align}\nThe $q$'s and $p$'s here are arbitrary pairs of phase-space variables - not necessarily canonically conjugate variables as they appear \nin the Hamiltonian equations.\n\nThe quantities $[q_a, q_b], [q_a, p_b], [p_a, p_b]$\nare called fundamental $L.B$. If one knows these for all $a, b = 1, \\ldots N$, then one can calculate the Lie brackets in (\\ref{8}).\nIf, however, we define the fundamental L.B. 
according to\n\\be\n\\label{9}\n[q_i, q_j] = 0 \\, , \\quad [p_i, p_j] = 0 \\, , \\quad [q_i, p_j] = \\delta_{ij} \\, \\quad \\mbox{for all} \\, i, j \\, , \n\\ee\nthe $L.B.$ go over to the $P.B.$:\n\\be\n\\label{10}\n[A, B]_{q, p} = \\sli^N_{i = 1} \\lk \\frac{\\partial A}{\\partial q_i} \\frac{\\partial B}{\\partial p_i} - \\frac{\\partial A}{\\partial p_i}\n\\frac{\\partial B}{\\partial q_i} \\right) \\, .\n\\ee\nThe variables $q_i, p_i$ are then said to be canonically conjugate.\nFor a Hamiltonian system $H$, the time evolution equations then immediately follow from (\\ref{8}) for the variables $q$ and $p$:\n\t\t\t\t\t\t\t\t\t\t\t\t\\begin{align}\n\t\t\t\t\t\t\t\t\t\t\t\t \\label{11}\nA = q, B = H &: \\dot{q}_i = [q_i, H] = [q_i, p_j] \\frac{\\partial H}{\\partial p_j} = \\delta_{ij} \\frac{\\partial H}{\\partial p_j}\n= \\frac{\\partial H}{\\partial p_i} \\, , \\\\\n\\label{12}\nA = p, B = H &: \\dot{p}_i = [p_i, H] = [p_i, q_j] \\frac{\\partial H}{\\partial q_j} = - \\delta_{ij} \\frac{\\partial H}{\\partial q_j}\n= - \\frac{\\partial H}{\\partial q_i} \\, .\n\t\t\t\t\t\t\t\t\t\t\t\t\\end{align}\nWe now combine the canonical coordinates $q_i$ and the momenta $p_i$ into a new set of $N$ generalized coordinates:\n\\be\n\\label{13}\n z = (q_1, \\ldots, q_{N\/2}, p_1, \\ldots, p_{N\/2}) \\, .\n\t\t \t\t\t\t\t\t\t\t \\ee\n\tThen the canonical $P.B.$ from (\\ref{9}) can be very elegantly written with the aid of the Poisson tensor $\\omega$ as\n\t\\be\n\t\\label{14}\n\t\t\t[z_a, z_b] = \\omega_{ab} \\, , \\quad \\quad \\mathrm{det} (\\omega_{ab}) \\neq 0 \\quad (1 \\, \\mbox{for canonical coordinates}) \t \t\t\t\t\t\t\t\n\\ee\nwith\n\\begin{align}\n\\label{15}\n\\omega_{ab} = \\,\\,\\,\n \\begin{blockarray}{cccccc}\n & \\scriptstyle{q_1} & \\scriptstyle{p_1} & \\scriptstyle{q_2} & \\scriptstyle{p_2} & \\cdots \\\\\n \\begin{block}{c(ccccc)}\n \\scriptstyle{q_1} & [q_1,q_1] & [q_1,p_1] & \\cdots & \\cdots & \\cdots \\\\\n \\scriptstyle{p_1} & [p_1,q_1] & [p_1,p_1] & \\cdots & \\cdots & \\cdots \\\\\n \\scriptstyle{q_2} & & & \\ddots & & \\\\\n \\scriptstyle{p_2} & & & & \\ddots & \\\\\n \\vdots & & & & & \\ddots \\\\\n \\end{block} \n \\end{blockarray} \n\\,\\,\\,=\\,\\, \n\\begin{pmatrix}\n \\dboxed{\\begin{matrix} 0 & 1 \\\\ -1 & 0 \\end{matrix}} & & \\\\\n & \\dboxed{\\begin{matrix}0 & 1 \\\\ -1 & 0 \\end{matrix}} & \\\\\n & & \\ddots\n\\end{pmatrix}\n\\end{align}\n\nor\n\\be\n\\label{16}\n\\omega_{ab} = \\mathrm{diag} \\left[ \\lk \\begin{array}{cc} \n 0 & 1 \\\\\n - 1 & 0\n \\end{array} \\right) \\, , \\ldots\n \\lk\n \\begin{array}{cc} \n 0 & 1 \\\\\n - 1 & 0\n \\end{array} \\right) \\right] \\, .\n\t\t\t\t\t\t\t\t\\ee\nThe inverse of $\\omega_{ab} , \\omega^{ab} = -\\omega_{ba}$, is defined according to\n\\be\n\\omega_{ac} \\omega^{cb} = \\delta_a^b \\nonumber\n\\ee\n\\be\n\\mbox{e.g., for} \\quad N = 2, \\quad (q_1, p_1):\n\\lk \\begin{array}{cc}\n 0 & 1\\\\\n - 1 & 0\n \\end{array} \\right) \n \\lk \\begin{array}{cc}\n 0 & - 1\\\\\n 1 & 0\n \\end{array} \\right)\n = \\lk \\begin{array}{cc}\n 1 & 0\\\\\n 0 & 1\n \\end{array} \\right)\n\\, . 
\\nonumber\n\\ee\nThen our $P.B.$ (\\ref{8}) can now be written in abbreviated form as\n\\be\n\\label{17}\n\t[A (z), B (z)]_{P.B.} = \\frac{\\partial A}{\\partial z_a} [z_a, z_b] \\frac{\\partial B}{\\partial z_b} = \\omega_{ab}\n\t\\frac{\\partial A}{\\partial z_a} \\frac{\\partial B}{\\partial z_b} = \\partial^a A \\omega_{ab} \n\t\\partial^b B \\, .\n\t\\ee\n\tAccordingly, the Hamiltonian equations are given by\n\\begin{align}\n \\label{18}\n \\dot{z}_a &= [z_a, H] = \\omega_{ab} \\partial^b H \\, , \\quad \\quad \\omega_{ab} \\, \\mbox{is independent of} \\, z \\nonumber\\\\\n \\lk {\\dot{q} \\atop \\dot{p}} \\right) &= \\lk \\begin{array}{cc}\n 0 & 1 \\\\\n - 1 & 0 \n \\end{array} \\right) \n \\lk \\begin{array}{c}\n \\frac{\\partial H}{\\partial q} \\\\\n \\frac{\\partial H}{\\partial p}\n \\end{array} \\right) \\quad : \\quad \\dot{q} = \n \\frac{\\partial H}{\\partial p} \\, , \\quad \\dot{p} = - \\frac{\\partial H}{\\partial q} \\, .\n\\end{align}\nWe call a transformation from the original variables $(q_i, p_i)$ to new variables $(Q_i, P_i)$\n\\begin{align}\n \\label{19}\n Q_i &= Q_i (q_i, p_i) \\, , \\nonumber\\\\\n P_i &= P_i (q_i, p_i) \\, ,\n\\end{align}\n\\underline{canonical}, if the fundamental $P.B.$ for the new $(Q_i, P_i)$ also are of form (\\ref{9}):\n\\be\n\\label{20}\n[Q_i, Q_j] = 0 = [P_i, P_j] \\, , \\quad \\quad [Q_i, P_j] = \\delta_{ij} \\, .\n\\ee\nIt can be easily shown that as a consequence of formulas (\\ref{20}), \nthe Jacobi determinant of a canonical transformation is equal to one (''the canonicity is preserved''): \n\\be\n\\label{21}\n\t\\mathrm{det}(\\omega_{qp}) = \\mathrm{det} (\\omega_{QP}) = 1 \\, .\n\t\\ee\nTo prove (\\ref{21}), we use $N = 2$. With the help of (\\ref{10}), the following simply applies:\n\\be\n\\label{22}\n[Q, P]_{q, p} = \\lk \\frac{\\partial Q}{\\partial q} \\frac{\\partial P}{\\partial p} - \n\\frac{\\partial Q}{\\partial p} \\frac{\\partial P}{\\partial q} \\right) \\, .\n\\ee\nIf $Q$ and $P$ are to be a pair of canonical conjugate variables, then according to equations (\\ref{20}), $[Q,P] = 1$ is valid; thus\n\\be\n[Q, P] = \\left| \\begin{array}{cc}\n \\frac{\\partial Q}{\\partial q} & \\frac{\\partial Q}{\\partial p} \\\\ \n\\frac{\\partial P}{\\partial q} & \\frac{\\partial P}{\\partial p} \n \\end{array}\\right| = 1\\, . 
\\nonumber\n\\ee\nThe main characteristic of a canonical transformation is, however, that in the new variables, the canonical equations also hold:\nthe Hamiltonian canonical equations are form invariant (covariant) under (\\ref{19}), \nbut with a new Hamiltonian function $K$:\n\\be\n\\label{23}\nH (q (Q, P), p (Q, P)) = K (Q, P) \\, .\n\\ee\nWe show this again for one degree of freedom:\n\\begin{align}\n \\dot{Q} = [Q, H]_{q, p} &= \\frac{\\partial Q}{\\partial q} \\frac{\\partial H}{\\partial p} - \n \\frac{\\partial Q}{\\partial p} \\frac{\\partial H}{\\partial q} \\nonumber\\\\\n &= \\frac{\\partial Q}{\\partial q} \\lk \\frac{\\partial H}{\\partial Q} \\frac{\\partial Q}{\\partial p} + \n \\frac{\\partial H}{\\partial P} \\frac{\\partial P}{\\partial p} \\right) - \n \\frac{\\partial Q}{\\partial p} \\lk \\frac{\\partial H}{\\partial Q} \\frac{\\partial Q}{\\partial q}\n + \\frac{\\partial H}{\\partial P} \\frac{\\partial P}{\\partial q} \\right) \\nonumber\\\\\n & = \\frac{\\partial H}{\\partial P} \\lk \\frac{\\partial Q}{\\partial q} \\frac{\\partial P}{\\partial p} - \n \\frac{\\partial Q}{\\partial p} \\frac{\\partial P}{\\partial q} \\right) = \n \\frac{\\partial H}{\\partial P} [Q, P] \\equiv \\frac{\\partial K}{\\partial P} (Q, P) \\, .\n\\nonumber\n\\end{align}\nSimilarly for $\\dot{P}$.\n\\bi\n\n\\no\nThe literature on Hamiltonian dynamics is dominated by canonical transformations. But this an unfortunate situation, \nsince it is often impossible to introduce convenient variables which are also canonical. In a ``pseudocanonical''\ntransformation, the new variables must not be canonical, i.e., they need \\underline{not} satisfy (\\ref{20}). \nHowever, the transformation equations, as before, have to be invertible.\n\\bi\n\n\\no\nFor non-canonical phase-space variables, \ninstead of (\\ref{18}), we now write\n\\be\n\\dot{z}_a = [z_a, H] = \\sigma_{ab} (z) \\partial^b H \\, , \\quad \\quad a, b = 1, 2, \\ldots N \\, \\nonumber\n\\ee\nwith $\\sigma_{ab} (z)$ instead of (\\ref{14})\n\\be\n\\label{24}\n[z_a, z_b] = \\sigma_{ab} (z) \\, .\n\\ee \nProperties of $\\sigma_{ab} (z)$\n\\be\n\\label{25}\n[z_a, z_b] = - [z_b, z_a] : \\quad \\quad \\sigma_{ab} (z) = - \\sigma_{ba} (z) \\, .\n\\ee\nInverse\n\\be\n\\label{26}\n\\sigma_{ac} (z) \\sigma^{cb} (z) = \\delta^b_a \\, .\n\\ee\nFrom the Jacobi identity\n\\be\n[z_a, [z_b, z_c]] + [z_b, [z_c, z_a]] + [z_c, [z_a, z_b]] = 0 \\nonumber\n\\ee\nfollows as a condition for the $\\sigma$'s:\n\\be\n\\label{27}\n\\partial_a \\sigma_{bc} (z) + \\partial_b \\sigma_{ca} (z) + \\partial_c \\sigma_{ab} (z) = 0 \\, .\n\\ee\nIf we change the phase-space coordinates according to the transformation\n \\be\n z \\to z'(z) \\nonumber\n \\ee\nwe get as $L.B.$ for the new coordinates\n\\begin{align}\n \\sigma'_{ab} &= [z'_a, z'_b] = [z'_a (z), z'_b (z)] \\nonumber\\\\\n &=\\frac{\\partial z'_a}{\\partial z_c} [z_c, z_d] \\frac{\\partial z'_b}{\\partial z_d} = \n \\frac{\\partial z'_a}{\\partial z_c} \\sigma_{cd} (z) \\frac{\\partial z'_b}{\\partial z_d} \\, . 
\\nonumber\n\\end{align}\nTherefore\t\t\t\t\t\t\n\\be\n\\label{28}\n\\sigma'_{ab} = \\frac{\\partial z'_a}{\\partial z_c} \\frac{\\partial z'_b}{\\partial z_d} \\sigma_{cd} \\, .\n\\ee\nHere the $\\sigma_{ab}$ turn out to be components of a two-rank contravariant tensor.\n\\bi\n\n\\no\nMoreover, we already know that the $L.B.$ for two arbitrary phase-space functions $A(z)$, $B(z)$ can be written as\n\\begin{align}\n \\label{29}\n [A (z), B (z)] &= \\frac{\\partial A}{\\partial z_a} [z_a, z_b] \\frac{\\partial B}{\\partial z_b} \\, , \\nonumber\\\\\n &= \\frac{\\partial A}{\\partial z_a} \\sigma_{ab} \\frac{\\partial B}{\\partial z_b} \\, .\n\\end{align}\nIf $z_a = (q_1, \\ldots, q_{N\/2}, p_1, \\ldots, p_{N\/2})$ are canonical coordinates, then we get \nfor the transformation $z \\to z'$ ($z'$ is not necessarily canonical)\n\\be\n\\sigma'_{ab} = \\frac{\\partial {z'}_a}{\\partial z_c} \\frac{\\partial {z'}_b}{\\partial z_d} \\omega_{cd} \\, . \\nonumber\n\\ee\nHere, $\\omega_{ab}$ is again the ``flat metric'' in the canonical $q-p$ basis (\\ref{16}).\n\\bi\n\n\\no\t\t\t\t\t\t\t\t\t\t\t\t\t\nWe now want to illustrate the practicality of the noncanonical coordinates using three very important examples from \nphysics. Let us begin with the mathematical pendulum. The energy (not Hamiltonian) of a particle in the Earth's \ngravitational field is given in right-angle Cartesian coordinates by\n\\be\n\\label{30}\nH' (x, p_x, p_y) = \\frac{1}{2m} (p^2_x + p^2_y) - mgx .\n\\includegraphics[width=.2\\textwidth]{cartesian.eps}\n\\ee\nbut subject to the constraint \n\\be\n\\label{31}\n\tx^2 + y^2 = l^2 \\, . \t\t\t\t\t \\ee\nIt should be observed that the variables in (\\ref{30}) \nare \nnoncanonical, so that they do not satisfy the conventional Hamiltonian canonical equations of motion, because of the additional\nconstraint (\\ref{31}). It is, however, well known that we can express $H'$ in canonical coordinates\n$(\\varphi, p_\\varphi)$ with the resultant Hamiltonian:\n\t\t\t\\be\n\t\t\t\\label{32}\n\t\t\tH (\\varphi, p_\\varphi) = \\frac{1}{2 m l^2} p^2_\\varphi - m g l \\cos \\varphi \\, .\n\\ee\nHence we can express the equations of motion with a single degree of freedom. The associated phase space \n$(\\varphi, p_\\varphi)$ is two dimensional and endowed with a Poisson bracket structure:\n\\be\n\\label{33}\n[\\varphi, \\varphi]_{P. B} = 0 = [p_\\varphi, p_\\varphi] \\, , \\quad \\quad [\\varphi, p_\\varphi]_{P. B} = 1 \\, .\n\t\\ee\nThis can also be formulated by saying that the symplectic structure of this two-dimensional phase space is given by the symplectic matrix\n\\be\n\\label{34}\n\\omega = \n\\bordermatrix{\n & \\varphi\t& p_\\varphi \\cr\n\\varphi & 0 & 1 \\cr\np_\\varphi & - 1 & 0 \\cr\n} \\, .\n\\ee\nThe two-dimensional matrix representation is typical for every canonical pair $(q,p)$.\nOur next goal is to enlarge the so far two-dimensional phase pace into four dimensions with pseudo-canonical variables \n$(r, p_r, \\varphi, p_\\varphi)$. We call them pseudo-canonical variables because, as will be shown, $r$\nand $p_r$ satisfy certain constraints and hence cannot be canonical variables. This means that there are no Poisson brackets for the \n$(r, p_r)$ variables. Instead, the Lie brackets take over and are the building elements of the symplectic matrix.\nHere are the details. 
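\\bi\n\n\\no\nBefore spelling them out, a brief numerical aside on the canonical formulation may be useful: integrating $\\dot{z}_a = \\omega_{ab} \\partial^b H$ for $z = (\\varphi, p_\\varphi)$ with the symplectic matrix (\\ref{34}) reproduces the pendulum motion and conserves $H$. The following minimal Python sketch is purely illustrative; the choice $m = g = l = 1$ and the Runge-Kutta step are assumptions made only for the demonstration.\n\\begin{verbatim}\n# Pendulum in canonical form:  z_dot = omega . grad H(z),  z = (phi, p_phi).\n# Units m = g = l = 1 are assumed for this illustration.\nimport numpy as np\n\nomega = np.array([[0.0, 1.0],\n                  [-1.0, 0.0]])        # canonical Poisson matrix, Eq. (34)\n\ndef grad_H(z, m=1.0, g=1.0, l=1.0):\n    phi, p_phi = z\n    return np.array([m*g*l*np.sin(phi),   # dH/dphi\n                     p_phi/(m*l**2)])     # dH/dp_phi\n\ndef H(z, m=1.0, g=1.0, l=1.0):\n    phi, p_phi = z\n    return p_phi**2/(2*m*l**2) - m*g*l*np.cos(phi)\n\ndef rk4_step(z, dt):\n    f = lambda y: omega @ grad_H(y)\n    k1 = f(z); k2 = f(z + 0.5*dt*k1); k3 = f(z + 0.5*dt*k2); k4 = f(z + dt*k3)\n    return z + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0\n\nz = np.array([1.0, 0.0])               # released at 1 rad, at rest\nE0 = H(z)\nfor _ in range(10000):\n    z = rk4_step(z, 1.0e-3)\nprint('energy drift:', abs(H(z) - E0))  # stays small: H is conserved by (18)\n\\end{verbatim}\nThe same few lines work for any Hamiltonian system once $\\omega_{ab}$ is replaced by the appropriate bracket matrix, which is precisely what the noncanonical formalism exploits in the examples that follow.\n\\bi\n\n\\no\n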
We begin with the pseudo-Hamiltonian \n\\be\n\\label{35}\n\tH' (r, p_r, \\varphi, p_\\varphi)\t= \\frac{1}{2 m} \\lk p^2_r + \\frac{1}{r^2} p^2_\\varphi \\right)\t- m g r \\cos \\varphi \\, .\n\t\\ee\nThe pseudo-canonical transformation is given by\n\\be\n(x, p_x, y, p_y ) \\to (r, p_r, \\varphi, p_\\varphi) \\nonumber\n\\ee\nwith\n\\begin{align}\n x &= r \\cos \\varphi \\, , \\quad \\quad p_x = p_r \\cos \\varphi - \\frac{p_\\varphi}{r} \\sin \\varphi \\, , \\nonumber\\\\\n y &= r \\sin \\varphi \\, , \\quad \\quad p_y = p_r \\sin \\varphi + \\frac{p_\\varphi}{r} \\cos \\varphi \\, . \\nonumber\n\\end{align}\nInverting these equation yields\n\\begin{align}\n\\cos \\varphi &= \\frac{x}{r} \\, , \\quad \\quad p_r = p_x \\cos \\varphi + p_\\varphi \\sin \\varphi \\, , \\nonumber\\\\\n \\sin \\varphi &= \\frac{y}{r} \\, , \\quad \\quad p_\\varphi = - p_x r \\sin \\varphi + p_y r \\cos \\varphi \\, . \\nonumber\n\\end{align}\nHere are our two constants of motion (constraints):\n\\be\n\\label{36}\n(x^2 + y^2) = r^2 \\, , \\quad \\quad \\frac{x p_y + y p_y}{(x^2 + y^2)^{1\/2}} = p_r \\, .\n\\ee\nThese quantities are called Casimirs and are constants of motion because of the form of the bracket rather \nthan the form of the Hamiltonian. In $(\\varphi, p_\\varphi)$\nspace, of course, the brackets (as well as the bracket matrix $\\omega$) are canonical.\nBefore introducing the constraints (\\ref{36}), the four-dimensional matrix $\\omega'$ is given by\n\\begin{align}\n \\label{37}\n \\omega' = \\begin{array}{c}\n \\\\\n r \\\\\n p_r \\\\\n \\varphi \\\\\n p_\\varphi \n \\end{array} \n \\lk\n \\begin{array}{cccc}\n r & p_r & \\varphi & p_\\varphi \n \\\\\n \\left[r, r\\right] & \\left[r, p_r\\right] & \\left[r, \\varphi \\right] & \\left[r, p_\\varphi \\right] \n \\\\\n \\left[p_r, r\\right] & \\left[p_r, p_r\\right] & \\ldots \\ldots & \\ldots \\ldots \\\\\n \\vdots && \\vdots & \\\\\n \\left[p_\\varphi, r\\right] & \\ldots & \\left[p_\\varphi, \\varphi\\right] & \\left[p_\\varphi, p_\\varphi\\right] \n \\end{array}\n \\right) \n \\begin{array}{l}\n \\mbox{constraints} \\\\\n \\longrightarrow \\\\\n \\left[r, r \\right] = 0 \\\\\n \\mbox{etc.}\n \\end{array}\n \\lk \\begin{array}{cc}\n O_2 & O_2 \\\\\n O_2 & \\lk \\begin{array}{cc}\n 0 & 1 \\\\\n - 1 & 0\n \\end{array}\n\\right) \n \\end{array}\n\\right) \\, .\n\\end{align}\nAfter taking into account the two constraints in (\\ref{36}), \nthis four-dimensional $\\omega'$ structure is reduced to a two-dimensional phase subspace $(N = 2)$\nwith canonical coordinates $(\\varphi, p_\\varphi)$ and the well-known symplectic matrix $\\lk \\begin{array}{cc}\n 0 & 1 \\\\\n - 1 & 0\n \\end{array}\n\\right)$, meaning one degree of freedom, so that our dynamical \nsystem ``mathematical pendulum'' is soluble.\n\\bi\n\n\\no\nWhen we use the one-dimensional harmonic oscillator in the $x$ representation, we have for the Schr\\\"odinger Hamiltonian\n\\be\nH (x, \\partial_x) = - \\frac{\\hbar^2}{2 m} \\partial^2_x + \\frac{1}{2} m \\omega^2 x^2 \\quad \\mbox{with} \\quad\n[\\partial_x, x] = 1 \\, , \\quad [\\partial^2_x, x] = 2 \\partial_x \\, . \\nonumber\n\\ee\nTo set up the corresponding Lie algebra we define the following ``operators'':\n\\be\nL_+ = \\frac{1}{2} x^2 \\, , \\quad \\quad L_- = - \\frac{1}{2} \\partial^2_x \\, , \\qquad \\quad L_3 = \\frac{1}{2} x \\partial_x + \\frac{1}{4} \n\\nonumber\n\\ee\nso that the pseudo-Hamiltonian can be written as\n\\be\nH' = \\frac{\\hbar^2}{m} L_- + m \\omega^2 L_+ \\, . 
\\nonumber\n\\ee\nNow it is easy to prove that\n\\be\n\\left[L_+, L_- \\right] = 2 L_3 \\, , \\quad \\quad \\left[L_3, L_\\pm\\right] = L_\\pm \\, . \\nonumber\n\\ee\nThis is the $SO(3)$ Lie algebra of angular momenta, which is locally isomorphic to the $SU(2)$ Lie algebra\nand thus our problem is naturally soluble. \n\\bi\n\n\\no\nOur next example is the description of a charged particle in an external constant magnetic field, i.e., with the Hamiltonian function\n\\be\n\\label{38}\nH (\\vq, \\vp) = \\frac{1}{2 m} \\lk \\vp - \\frac{e}{c} \\vA (\\vq) \\right)^2 \\, , \\quad \\quad\t\n\\vA (\\vq) = \\frac{1}{2} \\vB \\times \\vq \\, .\n\\ee\nThis choice of vector potential (gauge) guarantees that $\\vB$ indeed is constant, because\n\\begin{align}\n \\vB &= \\vec{\\nabla}_{\\vq} \\times \\vA (\\vq) = \\frac{1}{2} \\vec{\\nabla} \\times (\\vB \\times \\vq) \\nonumber\\\\\n &= \\frac{1}{2} \n \\Big[ (\\vq \\cdot \\vec{\\nabla}) \\vB - \n (\\vB \\cdot \\vec{\\nabla}) \\vq + \\vB (\\vec{\\nabla} \\cdot \\vq) - \n \\vq (\\vec{\\nabla} \\cdot B) \\Big] \n \\nonumber\\\\\n &= \\frac{1}{2} \\left[ 0 - \\vB + 3 \\vB - 0 \\right] = \\frac{1}{2} (- \\vB + 3 \\vB) = \\vB \\, . \\nonumber\n\\end{align}\nHerewith we get as Hamiltonian equations of motion\n\\begin{align}\n \\label{39}\n \\dot{p}_i &= - \\frac{\\partial H}{\\partial q_i} = \\frac{e}{mc} \\lk p_j - \\frac{e}{c} A_j (\\vq) \\right) \n \\frac{\\partial}{\\partial q_i} A_j (\\vq) \\, , \\quad \\quad A_j (\\vq) = \\frac{1}{2} \\epsilon_{jkl} B_k q_l \\nonumber\\\\\n &= \\frac{e}{mc} \\lk p_j - \\frac{e}{c} A_j (\\vq) \\right) \\frac{1}{2} \\epsilon_{jki} B_k \\nonumber\\\\\n &= \\frac{e}{2 mc} \\left[ \\lk \\vp - \\frac{e}{c} \\vA (\\vq) \\right) \\times \\vB \\right]_i \\\\\n \\label{40}\n \\dot{q}_i &= \\frac{\\partial H}{\\partial p_i} = \\lk \\vq - \\frac{e}{c} \\vA (\\vq) \\right)_i \\, .\n \\end{align}\nNow we consider the noncanonical transformation\n\\begin{align}\n(\\vq, \\vp) & \\longrightarrow (\\vx', \\vec{v}') \\equiv \\vz'_a \\nonumber\n\\\\\n & \\mbox{noncanonical coordinates} \\nonumber\n\\end{align}\nso: \n\\begin{align}\n \\label{41}\n \\vx' &= \\vq \\nonumber\\\\\n \\vv' &= \\frac{1}{m} \\lk \\vp - \\frac{e}{c} \\vA (\\vq) \\right) \\equiv\n \\frac{1}{m} \\vec{\\Pi} \\, \\mbox{(noncanonical, gauge invariant)} \\, .\n\\end{align}\n$\n(\\vx', \\vv')$ parametrize the phase-space just as well as $(\\vq, \\vp)$.\nNow it holds for the fundamental Lie brackets:\n\\begin{align}\n \\label{42}\n {\\vv'}_i &= q_i : \\quad \\quad \\left[{\\vx'}_i, {x'}_j \\right]_{q, p} = \n \\frac{\\partial {x'}_i}{\\partial q_k} \\frac{\\partial x'_j}{\\underbrace{\\partial p_k}_{= 0}} -\n \\frac{\\partial x'_i}{\\underbrace{\\partial p_k}_{= 0}} \n \\frac{\\partial {x'}_j}{\\partial q_k} = 0\n \\nonumber\\\\\n {v'}_i &= \\frac{1}{m} \\lk p_i - \\frac{e}{c} A_i (q_j) \\right): \\quad \\quad \\left[{x'}_i, {v'}_j \\right]_{q, p} = \n \\frac{1}{m} \\delta_{ij} = \n \\left[{v'}_i, {x'}_j \\right]_{q, p} \\nonumber\\\\\n A_i (\\vq) &= \\frac{1}{2} \\epsilon_{ijk} B_j q_k\n \\nonumber\\\\\n \\left[{v'}_i, {v'}_j \\right]_{q, p} &= \\frac{e}{m^2 c} \\lk \\frac{\\partial A_j}{\\partial q_i} - \\frac{\\partial A_i}{\\partial q_j} \\right) = \n \\frac{e}{m^2c} B_{ij} = \\frac{1}{m^2} \\left[ \\Pi_i, \\Pi_j \\right] \\nonumber\\\\\n & \\hspace{5cm} \\frac{e}{c} B_{ij} = F_{ij} \\nonumber\\\\\n &= \\frac{e}{m^2 c} \\epsilon_{ijk} B_k = \\frac{1}{m} \\Omega_{ij} \\nonumber\\\\\n \\overset{\\leftrightarrow}{\\Omega} &= \\Omega_{ij} = \\frac{e}{m c} \\epsilon_{ijk} 
B_k \\, .\n\\end{align}\nSo we have found:\n\\begin{align}\n \\label{43}\n \\sigma'_{a b} &= \\frac{1}{m} \\lk \\begin{array}{cc}\n O_3 & 1_3\\\\\n - 1_3 & \\overset{\\leftrightarrow}{\\Omega}\n \\end{array}\n \\right) \\\\\n \\label{44}\n \\sigma'^{ab} &= m \n\\lk \\begin{array}{cc}\n \\overset{\\leftrightarrow}{\\Omega} & - 1_3\\\\\n 1_3 & O\n \\end{array}\n \\right) \\, ,\n\\end{align}\t\nso that indeed\n\\begin{align}\n \\sigma'_{ac} \\sigma'^{cb} &= \\lk \\begin{array}{cc}\n 0 & 1\\\\\n -1 & \\overset{\\leftrightarrow}{\\Omega}\n \\end{array}\n \\right) \n \\lk \\begin{array}{cc}\n \\overset{\\leftrightarrow}{\\Omega} & - 1\\\\\n 1 & 0\n \\end{array}\n \\right) \\nonumber\\\\\n &= \\lk \\begin{array}{cc}\n 1 & 0\\\\\n 0 & 1\n \\end{array}\n \\right) = 1 \\, . \\nonumber\n\\end{align}\nIn canonical coordinates $(\\vq, \\vp)$:\t\t\t\t\t\t\t\t\t\n\\be\n\\label{45}\nH (\\vq, \\vp) = \\frac{1}{2 m} \\lk \\vp - \\frac{e}{c} \\vA (\\vq) \\right)^2 \\, .\n\\ee\nIn non-canonical coordinates \n\\be\n\\label{46}\n(\\vx', \\vv'): H' (\\vx', \\vv') = \\frac{m}{2} \\vv'^2 = \\frac{1}{2 m} \\vec{\\Pi}^2\t\t\t\t\\, .\n\\ee\nNon-canonical equations of motion:\n\\be\n\\frac{d z'_a}{d t} = [z'_a, H'] = \\sigma'_{ab} \\frac{\\partial H'}{\\partial z'_b}\\nonumber\n\\ee\nor\n\\begin{align}\n\\label{47}\n \\frac{d}{dt} \\lk {\\vx' \\atop \\vv'} \\right) &= \\frac{1}{m} \n\\lk \\begin{array}{cc} \n 0 & 1 \\\\\n - 1 & \\overset{\\leftrightarrow}{\\Omega}\n \\end{array}\\right)\n \\lk {\\frac{\\partial H'}{\\partial\\vx'} \\atop \\frac{\\partial H'}{\\partial \\vv'}} \\right) = \n \\lk\n \\begin{array}{c}\n \\vv' \\\\\n \\frac{e}{m c} \\epsilon_{ijk} B_k v'_j\n \\end{array}\n \\right) \\nonumber\\\\\n &= \\lk\n \\begin{array}{c}\n \\vv' \\\\\n \\frac{e}{mc} \\vv' \\times \\vB\n \\end{array}\n \\right) \n\\end{align}\n\\be\n\\label{48}\n\\mbox{Newton-Lorentz equation.} \\atop (\\mbox{No} \\, \\vA \\, , \\mbox{gauge invariant!}) \\, .\n\\ee\nOne should note that in $H'(\\vx', \\vv') = \\frac{m}{2} \\vv'^2 = \\frac{1}{2m} \\vec{\\Pi}^2$, \nthe nonphysical magnetic vector potential $\\vA$ has disappeared ! ? \nThe components of the $\\sigma'_{ab}$\ntensor can be used to calculate the $L.B.$ of two phase-space functions $f (\\vx', \\vv'), g (\\vx',\\vv')$:\n\\be\n[f, g] = \\frac{\\partial f}{\\partial {z'}_a} \\sigma'_{ab} \\frac{\\partial g}{\\partial {z'}_b} \\, . 
\\nonumber\n\\ee\nThe result follows directly from (\\ref{43})\n\\begin{align}\n \\label{49}\n [f, g] &= \\frac{\\partial f}{\\partial \\vx'} \n \\mathclap{\\raisebox{3.5ex}{\\scalebox{3}[1]{$\\frown$}}}\n\\mathclap{\\raisebox{-3.5ex}{\\scalebox{3}[1]{$\\smile$}}}\n \\frac{\\partial f}{\\partial \\vv'} \\frac{1}{m} \n \\lk \\begin{array}{cc}\n 0 & 1\\\\\n - 1 & \\overset{\\leftrightarrow}{\\Omega}\n \\end{array}\n \\right)\n \\lk \\begin{array}{c} \n \\frac{\\partial g}{\\partial \\vx'}\\\\\n \\frac{\\partial g}{\\partial \\vv'}\n \\end{array}\n \\right) \\nonumber\\\\\n &= \\frac{1}{m} \n\\frac{\\partial f}{\\partial \\vx'} \n \\mathclap{\\raisebox{3.5ex}{\\scalebox{3}[1]{$\\frown$}}}\n\\mathclap{\\raisebox{-3.5ex}{\\scalebox{3}[1]{$\\smile$}}}\n\\frac{\\partial f}{\\partial \\vv'} \n\\begin{pmatrix}\n \\frac{\\partial g}{\\partial \\vv'} \\\\\n - \\frac{\\partial g}{\\partial \\vx'} \n + \\overset{\\leftrightarrow}{\\Omega} \\cdot \\frac{\\partial g}{\\partial \\vv'}\n\\end{pmatrix} \\nonumber\\\\\n&= \\frac{1}{m} \\lk\n\\frac{\\partial f}{\\partial \\vx'} \\cdot \\frac{\\partial g}{\\partial \\vv'} - \\frac{\\partial f}{\\partial \\vv'} \\cdot \n\\frac{\\partial g}{\\partial \\vx'} \\right) + \\frac{e}{m^2 c} \\vB \\cdot \n\\lk \\frac{\\partial f}{\\partial \\vv'} \\times \\frac{\\partial g}{\\partial \\vv'} \\right) \n\\, ,\n\\end{align}\nwhich for\n\\be\nf = \\Pi_i (\\vv') = m v'_i \\, , \\quad \\quad g = \\Pi_j (\\vv') = m v'_j \\, \\mbox{reproduces} \\, (\\ref{42}). \\nonumber\n\\ee\nThe $\\vB$ or, respectively, the $\\vA$ field, which disappeared in $H' = \\frac{m}{2} \\vv'^2$, \ncan be rediscovered in the altered symplectic structure $\\sigma'$ (\\ref{43}) of the gauge invariant pseudo-Hamiltonian system\n$H' = \\frac{1}{2 m} \\vec{\\Pi}^2$.\n\\bi\n\n\\no\nA further important example of the application of noncanonical coordinates can be found in force-free rigid bodies. \nHere $\\dot{\\vL} = 0$ holds in the inertial system. Relative to the fixed-body reference system, the \\underline{same}\nangular-momentum vector has the components $\\{K_i\\}_{i = 1, 2, 3} = \\vL^{body}$.\n\\bi\n\n\\no\nNow we perform a non-canonical transformation\n\\begin{align}\n\\begin{array}{lll}\n(\\phi, \\theta, \\psi, p_\\phi, p_\\theta, p_\\psi) & \\longrightarrow & (K_1, K_2, K_3) \\nonumber\\\\\n\\mbox{6-dimensional canonical phase space} & & \\mbox{3-dimensional reduced noncanonical phase space} \n\\end{array} \\nonumber\n\\end{align}\nThe\nformation equations are\n\\begin{align}\n K_1 &= \\lk p_\\phi \\frac{1}{\\sin \\theta} - p_\\psi \\cot \\theta \\right) \\sin \\psi + p_\\theta \\cos \\psi \\, , \\nonumber\\\\\n K_2 &= \\lk p_\\phi \\frac{1}{\\sin \\theta} - p_\\psi \\cot \\theta \\right) \\cos \\psi - p_\\theta \\sin \\psi \\, , \\nonumber\\\\\n K_3 &= p_\\psi \\, . 
\nonumber\n\\end{align}\nWith the help of the fundamental canonical brackets $[\\phi, p_\\phi], \\ldots$ one can verify the following fundamental $L.B.$:\n\\begin{align}\n\\label{50}\n\t[K_1, K_2] & = -K_3 \\nonumber\\\\\n\t[K_2, K_3] & = -K_1 \\quad \\quad \\mbox{Note the minus signs!} \\nonumber\\\\\n\t[K_3, K_1] & = -K_2 \n\\end{align}\n or \n \\be\n \\label{51}\n [K_\\alpha, K_\\beta] = - \\epsilon_{\\alpha \\beta \\gamma} K^\\gamma \\, ,\n \\ee\n i.e.,\n \\be\n \\sigma^{\\alpha \\beta} = - \\epsilon^{\\alpha \\beta \\gamma} K_\\gamma \\nonumber\n\\ee\n or\n\\be\n\\label{52}\n\\sigma = \\lk \\begin{array}{ccc}\n 0 & - K_3 & K_2 \\\\\n K_3 & 0 & - K_1 \\\\\n - K_2 & K_1 & 0\n \\end{array} \\right) \\, .\n\\ee\nHerewith one can form the (noncanonical) $L.B.$ of two arbitrary functions $A (\\vK), B (\\vK)$ according to\n\\begin{align}\n \\label{53}\n [A (\\vK), B (\\vK)] &= \\frac{\\partial A}{\\partial K^\\alpha} [K^\\alpha, K^\\beta] \\frac{\\partial B}{\\partial K^\\beta} \\nonumber\\\\\n &= - \\epsilon^{\\alpha \\beta \\gamma} K_\\gamma \\frac{\\partial A}{\\partial K^\\alpha} \\frac{\\partial B}{\\partial K^\\beta} \\nonumber\\\\\n &= - \\vK \\cdot \\lk \\frac{\\partial A}{\\partial \\vK} \\times \\frac{\\partial B}{\\partial \\vK} \\right) \\, .\n\\end{align}\nWith the pseudo-Hamiltonian function, where the $I_i$ denote the principal moments of inertia,\n\\be\nH' = \\frac{K^2_1}{2 I_1} + \\frac{K^2_2}{2 I_2} + \\frac{K^2_3}{2 I_3}\n\\nonumber\n\\ee\nthe equations of motion follow:\n\\be\n\\label{54}\n\\frac{d}{dt} K^\\alpha = [K^\\alpha, H'] = \\sigma^{\\alpha \\beta} \\frac{\\partial H'}{\\partial K^\\beta} \\, ,\n\\ee\ni.e.,\n\\be\n\\frac{d}{dt} \\lk \\begin{array}{c}\nK^1\\\\ K^2 \\\\ K^3\n \\end{array}\n\\right) = \\lk\n\\begin{array}{ccc}\n 0 & - K_3 & K_2 \\\\\n K_3 & 0 & - K_1 \\\\\n - K_2 & K_1 & 0\n\\end{array} \\right) \n\\lk \n\\begin{array}{c}\n \\frac{\\partial H'}{\\partial K_1} \\\\\n \\frac{\\partial H'}{\\partial K_2} \\\\\n \\frac{\\partial H'}{\\partial K_3} \n\\end{array} \\right) \\, , \\quad \\quad\n\\frac{\\partial H'}{\\partial K^\\beta} = \\frac{K_\\beta}{I_\\beta} \\, . \\nonumber \n\\ee\nFor example:\n\\be\n\\label{55}\n\\frac{d}{dt} K_1 = - K_3 \\frac{K_2}{I_2} + K_2 \\frac{K_3}{I_3} \\, ,\n\\ee\n i.e.\n \\begin{align}\n \\dot{K}_1 = & \\lk \\frac{1}{I_3} - \\frac{1}{I_2} \\right) K_2 K_3 \\, , \\nonumber\\\\\n & \\mbox{and cyclically for } \\dot{K}_2 \\mbox{ and } \\dot{K}_3 \\, . \\nonumber \n \\end{align}\nThese are Euler's equations of motion for the force-free rigid body.\nWith $K_i = I_i \\omega_i$ $(i = 1,2,3)$ we obtain the well-known formula for the force-free rigid body\n\\be\n\\label{56}\n(I_i - I_j) \\omega_i \\omega_j - \\sli^3_{k = 1} \\epsilon_{ijk} I_k \\dot{\\omega}_k = 0 \\, .\n\\ee\nIt is remarkable that without further calculation our system proves to be soluble. This follows from the Casimir\n\\be\nC = K^2_1 + K^2_2 + K^2_3 \\, , \\nonumber\n\\ee\nwhich is conserved by the form of the brackets. \nThus the motion of a free rigid body actually lives in a two-dimensional state space on the sphere $C = \\mathrm{const}$, \nequivalent to a one-degree-of-freedom system, and therefore the motion is integrable.\n\\bi\n\n\\no\nTo show how useful our former results are, let us calculate the non-relativistic propagation function for a charged particle \nin three dimensions in the presence of a constant $B$ field in the $z$ direction. 
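\\bi\n\n\\no\nBefore doing so, the rigid-body bracket itself can be put to a quick numerical test: evolving $\\dot{K}^\\alpha = \\sigma^{\\alpha \\beta} \\partial H'/\\partial K^\\beta$ with the matrix (\\ref{52}) has to preserve both $H'$ and the Casimir $C = K_1^2 + K_2^2 + K_3^2$ up to the accuracy of the integrator. The sketch below is only an illustration; the principal moments $I_1, I_2, I_3$ are sample values and the Runge-Kutta step is an arbitrary choice.\n\\begin{verbatim}\n# Free rigid body in noncanonical form:  K_dot = sigma(K) . grad H'(K),\n# Eqs. (52)-(54).  The principal moments below are sample values.\nimport numpy as np\n\nI = np.array([1.0, 2.0, 3.0])            # sample principal moments of inertia\n\ndef sigma(K):                            # Poisson matrix of Eq. (52)\n    K1, K2, K3 = K\n    return np.array([[0.0, -K3,  K2],\n                     [ K3, 0.0, -K1],\n                     [-K2,  K1, 0.0]])\n\ndef grad_H(K):\n    return K / I                         # dH'/dK_beta = K_beta / I_beta\n\ndef rk4_step(K, dt):\n    f = lambda y: sigma(y) @ grad_H(y)\n    k1 = f(K); k2 = f(K + 0.5*dt*k1); k3 = f(K + 0.5*dt*k2); k4 = f(K + dt*k3)\n    return K + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0\n\nK = np.array([1.0, 0.2, 0.1])\nC0 = K @ K\nH0 = 0.5 * K @ (K / I)\nfor _ in range(20000):\n    K = rk4_step(K, 1.0e-3)\nprint('Casimir drift:', abs(K @ K - C0))          # C is conserved by the bracket\nprint('energy  drift:', abs(0.5*K @ (K/I) - H0))  # H' conserved by antisymmetry\n\\end{verbatim}\nThe conservation of $C$ is a property of the bracket alone, in accordance with the remark above, while that of $H'$ follows from the antisymmetry of $\\sigma^{\\alpha \\beta}$. We now return to the announced propagator calculation.\n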
\n\\bi\n\n\\no\nWe rewrite our findings (\\ref{47}) in terms of the pseudo-momentum $\\vec{\\Pi} = m \\vv$:\n\\begin{align}\n \\label{57}\n \\frac{d \\vx (t)}{dt} &= \\frac{\\vec{\\Pi}}{m} (t) \\, , \\nonumber\\\\\n \\frac{d \\vec{\\Pi} (t)}{dt} &= \\frac{e}{c} \\overset{\\leftrightarrow}{\\Omega} \\cdot \\vec{\\Pi} (t) \\, , \\quad \\quad\n ( \\overset{\\leftrightarrow}{\\Omega})_{ij} = \n \\frac{e}{mc} \\epsilon_{ijk} B^k \\, .\n\\end{align}\nFrom now on we choose the axis of the magnetic field in the $z$ direction so that our pseudo-Hamiltonian $\\cH$ reads:\n\\be\n\\label{58}\n\\cH = \\frac{1}{2m} (\\Pi^2_1 + \\Pi^2_2) + \\frac{P^2_3}{2m} = \\cH_\\perp + \\frac{P^2_3}{2m} \\, .\n\\ee\nThe free-particle propagation in the third direction can be easily treated, so we drop it and introduce\n\\be\n\\vr (t) = \\lk {x_1 (t) \\atop x_2 (t)} \\right) \\, , \\quad \\quad \\vec{\\Pi} (t) = \n\\lk {\\Pi_1 (t) \\atop \\Pi_2 (t)} \\right) \\, . \\nonumber\n\\ee\nWe leave it as an exercise for the reader to show that in the now two-dimensional problem the matrix $\\sigma$ is given by\n\\begin{align}\n\\label{59}\n\\sigma_{ab} = \\frac{1}{m} \n\\lk \\begin{array}{cc}\n 0_2 & 1_2 \\\\\n - 1_2 & \\frac{e B}{mc} \\overset{\\leftrightarrow}{\\epsilon}\n \\end{array}\n \\right) \\, , \\quad \\quad \n (\\overset{\\leftrightarrow}{\\epsilon})_{ab} = \n \\lk \\begin{array}{cc}\n 0 & 1 \\\\\n - 1 & 0\n \\end{array}\n \\right) \\, , \\quad \\quad \n ( \\overset{\\leftrightarrow}{\\epsilon})^{ab} = \n \\lk \\begin{array}{cc}\n 0 & -1 \\\\\n 1 & 0\n \\end{array}\n \\right) & = - \\epsilon_{ab} \\, , \\nonumber\\\\\n & \\epsilon_{ac} \\epsilon^{cb} = \\delta_a^b \\, .\n\\end{align}\nHence the non-canonical equations of motion take the form\n\\begin{align}\n \\label{60}\n \\lk \\dot{\\bar{x}} \\atop \\dot{\\vv} \\right) &= \\frac{1}{m} \n \\lk \\begin{array}{cc}\n 0 & 1 \\\\\n -1 & \\frac{e B}{m c} \\overset{\\leftrightarrow}{\\epsilon}\n \\end{array}\n \\right) \n \\lk \\begin{array}{c}\n 0 \\\\\n \\frac{\\partial \\cH_\\perp}{\\partial \\vv}\n \\end{array}\n \\right) = \\frac{1}{m}\n \\lk \\begin{array}{cc}\n 0_2 & 1_2 \\\\\n -1 & \\frac{e B}{m c} \n \\overset{\\leftrightarrow}{\\epsilon}\n \\end{array}\n \\right) \n \\lk \\begin{array}{c}\n 0 \\\\\n m \\vv\n \\end{array}\n \\right) \n \\\\\n \\label{61}\n & \\mbox{or} \\quad \\frac{d \\vr}{d t} = \\frac{\\vec{\\Pi}}{m} \\\\\n \\label{62}\n \\frac{d \\vec{\\Pi}}{d t} & = \\frac{e B}{mc} \n \\overset{\\leftrightarrow}{\\epsilon}\n \\cdot \\vec{\\Pi} \\, , \\quad \\quad \n \\Omega = \\frac{e B}{m c} \\quad \\mbox{cyclotron frequency} \\nonumber\\\\\n & = \\Omega \n \\overset{\\leftrightarrow}{\\epsilon}\n\\cdot \\vec{\\Pi} \\, .\n\\end{align}\nThe solution of (\\ref{62}) for $\\vec{\\Pi}$ is needed for determining the explicit expression for $\\cH_\\perp (t) = \\frac{\\vec{\\Pi}^2}{2m}$. \nThis information enables us to calculate the three-dimensional non-relativistic Feynman propagation function. \nSolving equations (\\ref{61}) and (\\ref{62}) for $\\vr (t)$ and $\\vec{\\Pi} (t)$, is equivalent to solving the following Lie\nbracket relations\n\\begin{align}\n \\label{63}\n \\left[ r_i (t), \\Pi^2 (t) \\right] &= 2 \\Pi_i (t) \\, , \\nonumber\\\\\n \\left[ \\Pi_i (t), \\Pi^2 (t) \\right] &= 2 \\frac{e}{c} B_{ij} \\Pi^j (t) \\nonumber\\\\\n &= F_{ij} \\Pi^j = 2 \\frac{e}{c} \\epsilon_{ij3} B^3 \\Pi^j (t) \\, .\n\\end{align}\nOn the way to calculating the Feynman propagation function we could use Schwinger's proper time method. 
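\\bi\n\n\\no\nA quick numerical look at (\\ref{61}), (\\ref{62}) confirms the expected classical picture: $\\vec{\\Pi}$ rotates rigidly with the cyclotron frequency $\\Omega$, so that $\\cH_\\perp = \\vec{\\Pi}^2/2m$ is a constant of the motion and the orbit is a circle of radius $|\\vec{\\Pi}|/(m \\Omega)$ about the guiding centre. The sketch below is purely illustrative; the values $m = 1$ and $\\Omega = e B/(m c) = 1$ are assumptions made for the demonstration.\n\\begin{verbatim}\n# Planar cyclotron dynamics, Eqs. (61)-(62):\n#   dr/dt = Pi/m ,   dPi/dt = Omega * eps . Pi ,   eps = ((0,1),(-1,0)).\n# Sample values m = 1, Omega = e*B/(m*c) = 1 are assumed.\nimport numpy as np\n\nm, Omega = 1.0, 1.0\neps = np.array([[0.0, 1.0], [-1.0, 0.0]])\n\ndef f(z):                      # z = (x1, x2, Pi1, Pi2)\n    r, Pi = z[:2], z[2:]\n    return np.concatenate([Pi/m, Omega * eps @ Pi])\n\ndef rk4_step(z, dt):\n    k1 = f(z); k2 = f(z + 0.5*dt*k1); k3 = f(z + 0.5*dt*k2); k4 = f(z + dt*k3)\n    return z + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0\n\nz = np.array([0.0, 0.0, 1.0, 0.0])   # start at the origin, Pi along the 1-axis\nH0 = z[2:] @ z[2:] / (2*m)\nfor _ in range(20000):\n    z = rk4_step(z, 1.0e-3)\nprint('H_perp drift:', abs(z[2:] @ z[2:] / (2*m) - H0))\n# distance from the guiding centre (0, -|Pi|/(m*Omega)); stays at |Pi|/(m*Omega) = 1\nprint('orbit radius:', np.hypot(z[0], z[1] + 1.0))\n\\end{verbatim}\nWith this classical information in hand, Schwinger's proper-time technique would be one way to obtain the propagator.\n\\bi\n\n\\no\n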
But instead of doing so, we prefer\nanother method, making use of the classical action and the Van Vleck determinant.\n\\bi\n\n\\no\nTo begin with, we remember from the harmonic oscillator the classical action \\cite{1}\n\\be\nS_{el} = \\frac{m \\omega}{2 \\sin (\\omega \\tau)} \\left[ (x^2 + {x'}^2) \\cos (\\omega \\tau) - 2 x x' \\right] \\, . \\nonumber\n\\ee\nUsing the Van Vleck determinant\n\\be\nD = \\mathrm{det} \\left[ - \\lk \\frac{\\partial^2 S_{el}}{\\partial x \\partial x'} \\right) \\right] = \\frac{m \\omega}{\\sin (\\omega \\tau)} : \\quad \\quad\n\\sqrt{D} = \\sqrt{\\frac{m \\omega}{\\sin (\\omega \\tau)}} \\nonumber\n\\ee\nwe obtain for the propagation function of the linear harmonic oscillator\n\\begin{align}\n K (x, x'; \\tau) &= \\sqrt{\\frac{1}{2 \\pi i \\hbar}} \\sqrt{D} e^{\\frac{i}{\\hbar} S_{el}} \\nonumber\\\\\n &= \\lk \\frac{m \\omega}{2 \\pi i \\hbar \\sin (\\omega \\tau)} \\right)^{1\/2} \\exp \n \\left\\{ \\frac{i m \\omega}{2 \\hbar \\sin (\\omega \\tau)} \\left[ (x^2 + x'^2) \\cos (\\omega \\tau) - 2 x x' \\right] \\right\\} \\, . \\nonumber\n\\end{align}\nIf we now follow the same path for a charged particle gauge fixed for a magnetic field in $z$ direction we obtain \n \\begin{align}\n \\label{64}\n \\lk \\vr \\right. & = \\left. (x_1, x_2) \\, , \\quad {\\vr}' = ({x'}_1, {x'}_2) \\, , \\quad (\\overset{\\leftrightarrow}{\\epsilon})_{ij} = \n \\lk \\begin{array}{cc}\n 0 & 1 \\\\\n - 1 & 0\n \\end{array} \\right) \\right) \\nonumber\\\\\n S_{el} &= \\frac{m}{2} \\left\\{ \\frac{\\omega}{2} \\cot (\\omega \\tau) (\\vr - \\vr')^2 + \\omega \\vr \\cdot \n \\overset{\\leftrightarrow}{\\epsilon} \\cdot \n \\vr' \\right\\} \\, , \\quad \\omega = \\frac{e B}{m c} \\, .\n\\end{align}\nThe results of the two space derivatives we need for the Van Vleck determinant are given by\n\\begin{align}\n & - \\frac{m}{2} \\omega \\cot \\lk \\frac{\\omega \\tau}{2} \\right) 1 + \\frac{m}{2} \\omega \\overset{\\leftrightarrow}{\\epsilon} = - \\frac{m}{2}\n \\frac{\\omega}{\\sin \\lk \\frac{\\omega \\tau}{2} \\right)} \\lk \\cos \\lk \\frac{\\omega \\tau}{2} \\right) 1 - \n\\overset{\\leftrightarrow}{\\epsilon}\\sin \\lk \\frac{\\omega \\tau}{2} \\right) \\right) \\nonumber\\\\\n &= - \\frac{m}{2} \\frac{\\omega}{\\sin \\lk \\frac{\\omega \\tau}{2} \\right)} \\lk \\cos \\lk \\frac{\\omega \\tau}{2} \\right) 1 - \n i \\sigma_2 \\sin \\lk \\frac{\\omega \\tau}{2} \\right) \\right) = - \\frac{m}{2} \\frac{\\omega}{\\sin \\lk \\frac{\\omega \\tau}{2} \\right)}\n e^{- i \\sigma_2 \\lk \\frac{\\omega \\tau}{2} \\right)} \\, . \\nonumber\n\\end{align}\nFrom here we obtain the determinant\n\\be\nD = \\lk - \\frac{m}{2} \\frac{\\omega}{\\sin \\lk \\frac{\\omega \\tau}{2} \\right)} \\right)^2 : \\quad \\quad \\sqrt{D} = \n\\frac{m}{2} \\frac{\\omega}{\\sin \\lk \\frac{\\omega \\tau}{2} \\right)} \\, . 
\\nonumber\n\\ee\nThe full result for the propagation function in a gauge that fixes the magnetic field in the third direction is then given by \n($\\omega = \\frac{eB}{m c}$, cyclotron frequency)\n\\begin{align}\n \\label{65}\n K (\\vr, \\vr'; \\tau) K (x_3, x'_3; \\tau) &= \\frac{1}{2 \\pi i \\hbar} \\frac{m}{2} \\frac{\\omega}{\\sin \\lk \\frac{\\omega \\tau}{2} \\right)}\n e^{\\frac{i}{\\hbar} \\frac{m}{2} \\left[ \\frac{\\omega}{2} \\cot \\lk \\frac{\\omega \\tau}{2} \\right) \n (\\vr - \\vr')^2 + \\omega \\vr \\cdot \\vec{\\epsilon} \\cdot \\vr' \\right]} \\nonumber\\\\\n & \\cdot \\sqrt{\\frac{m}{2 \\pi i \\hbar \\tau}} e^{\\frac{i}{\\hbar} \\frac{m}{2} \\frac{(x_3 - x'_3)^2}{\\tau}} \\, .\n\\end{align}\nIt might be interesting to write in $K (\\vr, \\vr'; \\tau)$: \n($L$ indicates a straight path connecting $\\vr'$ with $\\vr$):\n \\begin{align}\n \\label{66}\n e^{\\frac{i}{\\hbar} e \\il^{\\vr}_{\\vr'} d \\vec{\\zeta} \\cdot \\vA (\\vec{\\zeta})} \n &= \n e^{\\frac{i}{\\hbar} \\frac{m}{2} \\omega {\\vr} \\cdot \\vec{\\epsilon} \\cdot {\\vr'}} \n \\nonumber\\\\ \n &= e^{\\frac{i}{\\hbar} \n \\frac{m}{2} \\frac{e B}{m c} (x_1 {x'}_2 - x_2 {x'}_1)} \n \\nonumber\\\\\n &= e^{\\frac{i}{\\hbar} \\frac{e}{2 c} \\phi} \\, , \\quad \\quad \\phi = A r e a \\cdot B \\, .\n\\end{align}\nHence we can also write\n\\be\n\\label{67}\nK (\\vr, \\vr'; \\tau) = \\frac{1}{2 \\pi i \\hbar} \\frac{m}{2} \\frac{\\omega}{\\sin \\lk \\frac{\\omega \\tau}{2} \\right)} \ne^{\\frac{1}{\\hbar} e \\il^{\\vr}_{\\vr'_L} d \\vec{\\zeta} \\cdot \\vA (\\vec{\\zeta})}\ne^{\\frac{i}{\\hbar} \\frac{m}{2} \\frac{\\omega}{2} \\cot \\lk \\frac{\\omega \\tau}{2} \\right) (\\vr - \\vr')^2} \\, , \n\\ee\nan expression we will meet again in the full relativistic case of a charged particle travelling in a \nconstant electromagnetic field. \n\\bi\n\n\\no\nSo let us turn to the calculation of the propagation function using the relativistic coordinate representation\nof $K (x', s; x'', 0)$. One can think of $s$ as of proper time introduced by V. Fock and J. Schwinger \\cite{2}. \nWe will not adopt this method but instead follow the former non-relativistic strategy, i.e., making use of the \nVan Vleck determinant in $d = 4$ dimensions. We also set $\\hbar = 1$.\n\\bi\n\n\\no\nAs before, we begin with a Lagrangian, in our case with a matrix-valued Lagrangian, set up\nthe equations of motion, and compute the classical action. So let us start with \\cite{3}\n\\be\n\\label{68}\nL = \\frac{1}{4} \\dot{x}_\\mu \\dot{x}^\\mu + e A^\\mu \\dot{x}_\\mu + \\frac{e}{2} \\sigma^{\\mu \\nu} F_{\\mu \\nu} (x (s)) \\, .\n\\ee\nThe associated (``pseudo-'') Hamiltonian is clearly $\\lk \\Pi = \\frac{\\dot{x}}{2} = (p - eA) \\right)$\n\\begin{align}\n H &= p \\dot{x} - L = \\lk \\frac{\\dot{x}}{2} + e A \\right) \\dot{x} - \\frac{1}{4} \\dot{x}^2 - e A \\dot{x} - \\frac{e}{2} \\sigma F = \n \\frac{\\dot{x}^2}{4} - \\frac{e}{2} \\sigma F \\nonumber\\\\\n \\mbox{or} \\quad H &= \\Pi^2 - \\frac{e}{2} \\sigma F \\, . 
\\nonumber\n\\end{align}\nFrom (\\ref{68}) we obtain the equations of motion:\n\\begin{align}\n \\frac{d}{ds} \\lk \\frac{\\partial L}{\\partial \\dot{x}^\\mu} \\right) - \\frac{\\partial L}{\\partial {x}^\\mu} &= 0 : \\quad \\quad\n \\frac{\\partial L}{\\partial \\dot{x}^\\mu} = \\frac{1}{2} \\dot{x}_\\mu + e A_\\mu \\, , \\nonumber\\\\\n & \\hspace{2cm} \\frac{\\partial L}{\\partial x^\\mu} = e \\frac{\\partial A_\\lambda}{\\partial x^\\mu} \\dot{x}^\\lambda + \\frac{e}{2}\n \\sigma^{\\lambda \\nu} \\frac{\\partial F_{\\lambda \\nu}}{\\partial x^\\mu} \\nonumber\\\\\n \\frac{d}{ds} \\lk \\frac{1}{2} \\dot{x}_\\mu + e A_\\mu \\right) &= e \\frac{\\partial A_\\lambda}{\\partial x^\\mu} \\dot{x}^\\lambda + \\frac{e}{2} \n \\sigma^{\\lambda \\nu} \\frac{\\partial F_{\\lambda \\nu}}{\\partial x^\\mu} \\, , \\nonumber\\\\\n \\frac{1}{2} \\ddot{x}_\\mu + e \\frac{\\partial A_\\mu}{\\partial x^\\lambda} \\dot{x}^\\lambda : \\quad \\quad \\frac{1}{2} \\ddot{x}_\\mu &= e \\lk \n \\frac{\\partial A_\\lambda}{\\partial x^\\mu } - \\frac{\\partial A_\\mu}{\\partial x^\\lambda} \\right) \\dot{x}^\\lambda + \n \\frac{e}{2} \\sigma^{\\lambda \\nu} \\frac{\\partial F_{\\lambda \\nu}}{\\partial x^\\mu} \\, . \\nonumber\n\\end{align}\nThis provides us with the equation of motion\n\\be\n\\label{69}\n\\ddot{x}_\\mu = 2 e F_{\\mu \\nu} \\dot{x}^\\nu + e \\sigma^{\\lambda \\nu} \\frac{\\partial F_{\\lambda \\nu}}{\\partial x^\\mu} \\, .\n\\ee\nUsing\n\\be\n\\dot{x}_\\mu = 2 \\Pi_\\mu \\nonumber\n\\ee\nwe have\n\\be\n\\dot{\\Pi}_\\mu = \\frac{1}{2} \\ddot{x}_\\mu = e F_{\\mu \\nu} \\dot{x}^\\nu + \\frac{e}{2} \\sigma^{\\lambda \\nu} \\frac{\\partial F_{\\lambda \\nu}}{\\partial x^\\mu}\n\\nonumber\n\\ee\nor\n\\be\n\\label{70}\n\\dot{\\Pi}_\\mu = 2 e F_{\\mu \\nu} \\Pi^\\nu + \\frac{e}{2} \\sigma^{\\lambda \\nu} \\frac{\\partial F_{\\lambda \\nu}}{\\partial x^\\mu} \\, .\n\\ee\nFrom now on we consider constant fields only, i.e., $F_{\\mu \\nu} = const.$. Since we want to calculate the classical action, i.e.,\n\\be\n\\label{71}\nS_{el} = \\il^s_0 d \\lambda L (\\dot{x} (\\lambda), x (\\lambda))\n\\ee\nwe first have to integrate (in matrix notation)\n\\be\n\\ddot{x} = 2 e F \\dot{x} \\, , \\nonumber\n\\ee\nwhich can be solved with the Ansatz\n\\be\nx (\\lambda) = e^{2 e F \\lambda} \\dot{x} (0) \\nonumber\n\\ee\nand a further integration yields\n\\be\nx (\\lambda) - x (0) = \\il^\\lambda_0 d \\lambda' e^{2 e F \\lambda'} \\dot{x} (0) = \\frac{1}{2 e F} \\left[ e^{2 e F \\lambda} - 1 \\right] \n\\dot{x} (0) \\nonumber\n\\ee\nor with the initial condition $x (\\lambda = 0) = x''$ and $x (\\lambda = s) = x'$ we obtain\n\\be\nx' - x'' = \\frac{1}{2 e F} \\left[ e^{2 e F s} - 1 \\right] \\dot{x} (0) \\nonumber\n\\ee\nor\n\\be\n\\dot{x} (0) = \\frac{1}{e^{2 e F s} - 1} 2 e F (x' - x'') \\, . \\nonumber\n\\ee\nSo far we have\n\\begin{align}\n \\label{72}\n x (\\lambda) - x (0) &= \\frac{1}{2 e F} \\left[ e^{2 e F \\lambda} - 1 \\right] \\frac{1}{e^{2 e F s} - 1} 2 e F (x' - x'') \\nonumber\\\\\n &= \\frac{e^{2 e F \\lambda} - 1}{e^{2 e Fs} - 1} (x' - x'') \\, .\n\\end{align}\nWe need\n\\be\n\\label{73}\nS = \\il^s_0 d \\lambda \\left\\{ \\frac{1}{4} \\dot{x}^T (\\lambda) \\dot{x} (\\lambda) + e \\dot{x}^T (\\lambda) A (\\lambda) \\right\\}\n+ \\frac{e}{2} s \\sigma F \\, .\n\\ee\nIn the present case ${g (F)^T} = g (F^T) = g (- F)$. 
\nTherefore with\n\\begin{align}\n \\dot{x} (\\lambda) & {=} \n e^{2 e F \\lambda} \\dot{x} (0) {=} \\frac{e^{2 e F \\lambda}}{e^{2 e Fs} - 1} \n 2 e F (x' - x'')\n \\nonumber\\\\\n \\dot{x}^T (\\lambda) \\dot{x} (\\lambda) &= (x' - x'')^T \\frac{(- 2 e F)}{e^{- 2 e Fs} - 1} \\cancel{e^{- 2 e F \\lambda}} \n \\cancel{e^{2 e F \\lambda}} \\frac{2 e F}{e^{2 e F s} - 1} 2 e F (x' - x'') \\nonumber\\\\\n&= (x' - x'')^T \\frac{(- 2 e F)}{\\cancel{e^{- e Fs}} \\lk e^{- e Fs} - e^{e Fs} \\right)} \n\\frac{(2 e F)}{\\cancel{e^{e Fs}} \\lk e^{e Fs} - e^{- eFS} \\right)} (x' - x'') \\nonumber\\\\\n&= (x' - x'')^T \\frac{e F}{\\sin h (e Fs)} \\frac{e F}{\\sin h (e Fs)} (x' - x'') \\, , \\nonumber\\\\\n\\to \\frac{1}{4} \\dot{x}^T (\\lambda) \\dot{x} (\\lambda) &= \\frac{1}{4} (x' - x'') ^T \\frac{(e F)^2}{\\sin h^2 (e Fs)} (x' - x'') \\nonumber\n\\end{align}\nwe obtain for the first term in (\\ref{73})\n\\be\n\\label{74}\n\\frac{1}{4} \\il^s_0 d \\lambda \\dot{x} (\\lambda)^T \\dot{x} (\\lambda) = \\frac{1}{4} (x' - x'')^T \\frac{(eF)^2 s}{\\sin h^2 (e Fs)} (x' - x'') \\, .\n\\ee\nNow we turn to the expression \n\\begin{align}\n \\label{75}\n \\il^s_0 d \\lambda \\{ e \\dot{x}^T A \\} &= \\il^{x'}_{x''} d x^T (e A) \\nonumber\\\\\n &= e \\il^{x'}_{x''} d x^T (e A + \\frac{1}{2} F \\cdot (x - x'')) - \n e \\frac{1}{2} \\il^{x'}_{x''} d x^T F \\cdot (x - x'') \\, .\n\\end{align}\nThe first integral on the right-hand-side of (\\ref{75}) is independent of the path of \nintegration, since the curl of the integrand \nvanishes: \n\\begin{align}\n & \\partial_\\mu \\lk A_\\nu + \\frac{1}{2} F_{\\nu \\lambda} (x - x'')^\\lambda \\right) - \n \\partial_\\nu \\lk A_\\mu + \\frac{1}{2} F_{\\mu \\lambda} (x - x'')^\\lambda \\right) \\nonumber\\\\\n &= \\partial_\\mu A_\\nu - \\partial_\\nu A_\\mu + \\frac{1}{2} F_{\\nu \\mu} - \\frac{1}{2} F_{\\mu \\nu} \\nonumber\\\\\n &= F_{\\mu \\nu} - F_{\\mu \\nu} = 0 \\, . \\nonumber\n\\end{align}\nHence instead of integrating along the classical path $x (\\lambda)$ we can integrate along the straight path\nconnecting $x'$ and $x''$:\n\\be\n\\label{76}\nq (\\lambda) = x'' + (x' - x'') \\frac{\\lambda}{s} \\, , \\quad \\quad q (0) = x'' \\, , \\quad \\quad\nq (s) = x'\n\\ee\n\\begin{align}\n & e \\il^{x'}_{x''} d x^T \\lk A + \\frac{1}{2} F \\cdot (x - x'') \\right) \\nonumber\\\\\n &= e \\il^{x'}_{x''} d q^T A + \\frac{e}{2} \\il^{x'}_{x''} d q^T F \\cdot (q - x'') \\nonumber\\\\\n &= e \\il^{x'}_{x''} d q^\\mu A_\\mu (q) + \\frac{e}{2} \\il^s_0 \\frac{d \\lambda}{s} \n \\underbrace{(x' - x'') F (x' - x'')}_{(x' - x'')^\\mu F_{\\mu \\nu} \\cdot (x' - x'')^\\nu = 0 \\atop F_{\\mu \\nu} = \n - F_{\\nu \\mu}} \\frac{\\lambda}{s} \\nonumber\\\\\n &= e \\il^{x'}_{x''} d q^\\mu A_\\mu (q) \\, . \\nonumber\n\\end{align}\nFinally we need the second term in (\\ref{75})\n\\be\n\\label{77}\n- \\frac{e}{2} \\il^{x'}_{x''} d x^T F \\cdot (x - x'') = \\frac{e}{2} \\il^{x'}_{x''} d \\lambda \n\\cdot{x}^T F \\cdot (x - x'') \\, .\n\\ee\n\\begin{align}\n \\dot{x}^T F (x - x'') &= (x - x'')^T \\frac{(- 2 e F)}{e^{- 2 e Fs} - 1} \n e^{- 2 e F \\lambda} F \\cdot (x - x'') \\nonumber\\\\\n &= (x - x'')^T \\frac{(- 2 e F)}{e^{- 2 e Fs} - 1} e^{- 2 e F \\lambda} F \n \\frac{e^{2 e F \\lambda} - 1}{e^{2 e F s} - 1} (x' - x'') \\nonumber\\\\\n &= (x' - x'')^T \\frac{(- 2 e F)}{\\cancel{e^{- e Fs}} (e^{- e Fs} - e^{e Fs})} F \n \\frac{1 - e^{- 2 e F \\lambda}}{\\cancel{e^{e Fs}} (e^{e Fs} - e^{- Fs})} (x' - x'') \\nonumber\\\\\n .\/. 
&= (x' - x'')^T \\frac{e F^2}{\\sin h e Fs} \\cdot \\frac{1}{2} \n \\frac{(1 - e^{- 2 e F \\lambda})}{\\sin h e Fs} (x' - x'') \\nonumber\\\\\n \\il^s_0 d \\lambda .\/. &= \\frac{1}{2} (x' - x'')^T \\frac{e F^2}{\\sin h^2 e Fs} \n \\left[ s + \\frac{1}{2 e F} \\lk e^{- 2e Fs} - 1 \\right) \\right] (x' - x'') \\nonumber\\\\\n &= \\frac{1}{2} (x' - x'')^T \\left[ \\frac{e F^2 s}{\\sin h^2 e Fs} + \\frac{e F^2}{\\sin h^2 e Fs} \n \\frac{1}{2 e F} e^{- e Fs} \\lk e^{- e Fs} - e^{e Fs} \\right) \\right] (x' - x'') \\nonumber\\\\\n &= \\frac{1}{2} (x' - x'')^T \\left[ \\frac{e F^2 s}{\\sin h^2 e Fs} - \\frac{e F^2}{\\sin h^2 e F s} \\frac{e^{- e FS}}{e F} \\sin h e Fs \n \\right] (x' - x'') \\nonumber\\\\\n &= \\frac{1}{2} (x' - x'')^T \\left[ \\frac{e F^2 s}{\\sin h^2 e Fs} - e^{- e Fs} \\frac{F}{\\sin h e Fs} \\right] (x' - x'') \\nonumber\n\\end{align}\nSo the contribution of the integral is given by\n\\be\n\\label{78}\n- \\frac{e}{2} \\il^{x'}_{x''} d x^T F \\cdot (x - x'') = - \\frac{1}{4} (x' - x'')^T \\left[ \\frac{(e F)^2 s}{\\sin h^2 e Fs} - \n\\frac{e^{- e Fs} (e F)}{\\sin h e Fs} \\right] (x' - x'') \\, .\n\\ee\nThe first term in (\\ref{78}) cancels the result (\\ref{74}). So we obtain for the classical action \n\\begin{align}\n S (x', x''; s) & \\underset{\\text{($\\ref{13}$)}}{=} e \\il^{x'}_{x''} d q^\\mu A_\\mu (q) + \\frac{1}{4} (x' - x'')^T\n e^{- e Fs} \\frac{eF}{\\sin h e Fs} (x' - x'') + \\frac{e}{2} s \\sigma F \\nonumber\\\\\n &= e \\il^{x'}_{x''} d q_\\mu A^\\mu (q) + \\frac{1}{4} (x' - x'')^T e F \n \\underbrace{\\frac{\\cos h e Fs - \\sin h e Fs}{\\sin h e Fs}}_{(\\cot h e F s - \\cancel{1})_{(x' - x'') F (x' - x'') = 0}} \n (x' - x'') + \\frac{e}{2} \\sigma F s \\nonumber\\\\\n &= e \\il^{x'}_{x''} d q_\\mu A^\\mu (q) + \\frac{1}{4} (x' - x'')^T e F \\cot h (e Fs) (x' - x'') + \\frac{e}{2} \\sigma_{\\mu \\nu}\n F^{\\mu \\nu} s \\, .\\nonumber\n\\end{align}\nThen the final result reads\n\\be\n\\label{79}\nS (x', x''; s) = e \\il^{x'}_{x''} d q_\\mu A^\\mu (q) + \\frac{1}{4} (x' - x'')^\\alpha e F_\\alpha^\\beta [\\cot h (e Fs)]_\\beta^\\gamma (x' - x'')_\\gamma\n+ \\frac{e}{2} \\sigma_{\\mu \\nu} F^{\\mu \\nu} s \\, .\n\\ee\nWith this information we are able to compute the propagation function \nby a classical WKB approximation which is exact for the constant-field case\n\\begin{align}\n \\label{80}\n K (x', s; x'', 0) &= \\il_{x (0) = x'' \\atop x (s) = x'} [d x (\\lambda)] e^{i S [x (\\lambda)]} \\nonumber\\\\\n S &= \\il^s_0 d \\lambda L (x, \\dot{x}) = (\\ref{79})\n\\end{align}\nor\n\\be\n\\label{81}\nK (x', s; x'', 0) = (2 \\pi i)^{- \\frac{n}{2}} \\sqrt{D} e^{i S}\n\\ee\nwith the Van Vleck determinant\n\\be\n\\label{82}\nD (x', x''; s)^{n = 4} = (- 1)^4 \\det \\lk \\frac{\\partial^2 S}{\\partial x'^\\mu \\partial x''^\\nu} \\right) \\, .\n\\ee\nWe begin with\n\\begin{align}\n \\label{83}\n & \\frac{\\partial^2}{\\partial {x'}^\\mu \\partial {x''}^\\nu} \\left[ \\frac{1}{4} (x' - x'')^\\alpha e F_\\alpha^\\beta \n ( \\cot h e F s)_\\beta^\\gamma \n (x' - x'')_\\gamma \\right] \\nonumber\\\\\n &= \\frac{\\partial}{\\partial x''^\\nu} \\left[ \\frac{1}{4} e F_\\mu^\\beta (\\cot h e Fs)_\\beta^\\gamma (x' - x'')_\\gamma \n + \\frac{1}{4} (x' - x'')^\\alpha e F_\\alpha^\\beta (\\cot h e Fs)_{\\beta \\mu} \\right] \\nonumber\\\\\n &= \\frac{\\partial}{\\partial x''^\\nu} \\left[ \\frac{1}{2} (x' - x'')^\\alpha e F_\\alpha^\\beta (\\cot h e Fs)_{\\beta \\mu} \\right] \\nonumber\\\\\n &= - \\frac{1}{2} e F_\\nu^\\beta (\\cot h e Fs)_{\\beta \\mu} \\, .\n 
\\end{align}\nFurthermore we have to compute\n\\begin{align}\n\\label{84}\n\\frac{\\partial}{\\partial {x'}^\\mu} \\il^{x'}_{x''} d q^\\alpha A_\\alpha (q) &= \\frac{\\partial}{\\partial {x'}^\\mu} \\il^s_0 d \\lambda\n\\frac{\\partial q^\\alpha}{\\partial \\lambda} A_\\alpha (q) \\nonumber\\\\\n&= \\il^s_0 d \\lambda \\left[ \\lk \\frac{\\partial}{\\partial \\lambda} \\frac{\\partial q^\\alpha}{\\partial {x'}^\\mu} \\right) A_\\alpha (q) + \n\\frac{\\partial q^\\alpha}{\\partial \\lambda} \\frac{\\partial A_\\alpha (q)}{\\partial q^\\beta} \\frac{\\partial q^\\beta}{\\partial {x'}^\\mu} \\right]\n\\nonumber\\\\\n&= \\left. \\frac{\\partial q^\\alpha}{\\partial {x'}^\\mu} A_\\alpha (q) \\right]^{\\lambda = s}_{\\lambda = 0} + \\il^s_0 d \\lambda\n\\left[ - \\frac{\\partial q^\\alpha}{\\partial {x'}^\\mu} \\frac{\\partial A_\\alpha}{\\partial q^\\beta} \\frac{\\partial q^\\beta}{\\partial \\lambda}\n+ \\frac{\\partial q^\\alpha}{\\partial \\lambda} \\frac{\\partial A_\\alpha (q)}{\\partial q^\\beta} \n\\frac{\\partial q^\\beta}{\\partial {x'}^\\mu} \\right]\\nonumber\\\\\nq (s) = x' \\atop q (0) = x'' &= A_\\mu (x') + \\il^s_0 d \\lambda \\frac{\\partial q^\\alpha}{\\partial \\lambda} \n\\left[ \\frac{\\partial A_\\alpha}{\\partial q^\\beta} - \\frac{\\partial A_\\beta}{\\partial q^\\alpha} \\right] \n\\frac{\\partial q^\\beta}{\\partial {x'}^\\mu} \n\\nonumber\\\\\nq (x', x'', \\lambda) &= x'' + (x' - x'') \\frac{\\lambda}{s} : \\quad \\frac{\\partial q^\\alpha}{\\partial \\lambda} = \\frac{(x' - x'')^\\alpha}{s}\n\\nonumber\\\\\n& \\hspace{5cm} \\frac{\\partial q^\\beta}{\\partial {x'}^\\mu} = \\delta_\\mu^\\beta \\frac{\\lambda}{s} \\nonumber\\\\\n\\frac{\\partial}{\\partial {x'}^\\mu} \\il^{x'}_{x''} d q^\\alpha A_\\alpha (q) &= A_\\mu (x') + \\il^s_0 d \\lambda\n(x' - x'')^\\alpha \\frac{1}{s} F_{\\beta \\alpha} \\delta_\\mu^\\beta \\frac{\\lambda}{s} \\nonumber\\\\\n&= A_\\mu (x') + \\frac{1}{2} F_{\\mu \\alpha} (x' - x'')^\\alpha \\, .\n \\end{align}\n Therefore\n \\be\n \\label{85}\n \\frac{\\partial^2}{\\partial x''^\\nu \\partial x'^\\mu} \\il^{x'}_{x''} d q^\\alpha A_\\alpha (q) = - \\frac{1}{2} F_{\\mu \\nu} \\, .\n \\ee\n Together with (\\ref{83}) we obtain\n \\begin{align}\n \\label{86}\n \\frac{\\partial' S}{\\partial x'^\\mu \\partial x''^\\nu} &= - \\frac{1}{2} e F_{\\mu \\nu} - \\frac{1}{2} e\n F_\\nu^\\beta (\\cot h e Fs)_{\\beta \\mu} \\nonumber\\\\\n &= - \\frac{1}{2s} \\left[ e Fs + e Fs (\\cot h e Fs) \\right]_{\\mu \\nu} \\nonumber\\\\\n &= - \\frac{1}{2s} \\left[ e Fs \\frac{\\sin h e Fs + \\cos h e Fs}{\\sin h e Fs} \\right]_{\\mu \\nu} \\nonumber\\\\\n &= - \\frac{1}{2s} \\left[ e Fs \\frac{e^{e Fs}}{\\sin h e Fs} \\right]_{\\mu \\nu} \\, .\n \\end{align}\nGiven (\\ref{86}), the Van Vleck determinant is given by\n\\begin{align}\n \\label{87}\n n = 4: \\quad D &= \\det \\left\\{ (-1) \\frac{1}{2s} e Fs \\frac{e^{e Fs}}{\\sin h e Fs} \\right\\} \\nonumber\\\\\n &= - 1 \\frac{1}{16 s^4} \\det \\lk e F s \\frac{e{Fs}}{\\sin h e F s} \\right) \\nonumber\\\\\n &= - 1 \\frac{1}{16 s^4} e^{tr \\ln \\lk \\frac{e Fs}{\\sin h e Fs} e^{e Fs} \\right)} \\nonumber\\\\\n &= - \\frac{1}{16 s^4} e^{- tr \\ln \\frac{\\sin h e Fs}{e Fs} + es \\overbrace{tr F}^{= 0}} \\nonumber\\\\\n \\sqrt{D} &= \\frac{i}{4 s^2} e^{- \\frac{1}{2} tr \\ln \\frac{\\sin h e Fs}{e Fs}} \\, .\n\\end{align}\nWe finally end up with\n\\begin{align}\n K (x', s; x'', 0) &= - \\frac{1}{4 \\pi^2} \\frac{i}{4 s^2} e^{- \\frac{1}{2} tr \\ln \\frac{\\sin h e Fs}{e Fs}} e^{i S} 
\nonumber\n\\end{align}\nor\n\\begin{align}\n \\label{88}\n K (x', s; x'', 0) &= - \\frac{i}{16 \\pi^2} \\frac{1}{s^2} e^{- \\frac{1}{2} tr \\ln \\lk \\frac{\\sin h e Fs}{e Fs} \\right)} \n \\nonumber\\\\\n & \\cdot e^{i e \\il^{x'}_{x''} d q_\\mu A^\\mu (q)} e^{i \\left[ \\frac{1}{4} (x' - x'') \\lk e F \\cot h e Fs \\right) \n (x' - x'') \\right]} e^{i \\frac{e}{2} \\sigma F s} \\, \\\\\n & \\mbox{(the first exponential is the holonomy or counter gauge factor; cf. J. Schwinger,} \\nonumber\\\\\n & \\;\\; \\mbox{Phys. Rev. {\\bf 82}, 664 (1951), Eq. (3.20))} \\, . \\nonumber\n\\end{align}\nThis result should be compared with Julian Schwinger's superb article, Physical Review {\\bf 82}, 664 (1951) [ref. 2], formula (3.20).\nA useful result is then given by (cf. (\\ref{80}))\n\\begin{align}\n \\label{89}\n & \\il_{x (0) = x'' \\atop x (s) = x'} \\left[d x (\\lambda) \\right] e^{i \\il^s_0 d \\lambda \\left[ \\frac{1}{4} \n \\dot{x}^2 (\\lambda) + e A (x (\\lambda)) \\dot{x} (\\lambda) \\right]} \\nonumber\\\\\n &= \\underbrace{e^{i e \\il^{x'}_{x''} d q_\\mu A^\\mu (q)}}_{\\mbox{gauge dependence} \\atop \\mbox{isolated!}} \n \\frac{- i}{(4 \\pi)^2} \\frac{1}{s^2} e^{- \\frac{1}{2} tr \\ln\n \\lk \\frac{\\sin h e Fs}{e Fs} \\right)} \\nonumber\\\\\n & \\cdot e^{i \\frac{1}{4} (x' - x'') (e F \\cot h e Fs) (x' - x'')} \\, .\n \\end{align}\nIt takes a little more to show that (\\ref{88}) makes its appearance when calculating the Green's function of an \nelectron in an external constant electromagnetic field (in an arbitrary gauge):\n\\begin{align}\n \\label{90}\n G (x, x' | A ) &= \\phi (x, x' | A) \\frac{1}{(4 \\pi)^2} \\il^\\infty_0 \\frac{ds}{s^2} \n \\left[ m - \\frac{1}{2} \\gamma^\\mu [f (s) + e F]_{\\mu \\nu} (x - x')^\\nu \\right] \\nonumber\\\\\n& \\cdot e^{- i m^2 s - L (s)} \\exp \\left[ \\frac{1}{4} (x - x') f (s) (x - x') \\right]e^{i \\frac{e}{2} \\sigma Fs} \\, . \n\\end{align}\nHere we have used the abbreviations\n\\be\nf (s) = e F \\cot h (e Fs) \\quad \\mbox{and} \\quad L (s) = \\frac{1}{2} tr \\ln \\frac{\\sin h (e Fs)}{e Fs} \\, , \\nonumber\n\\ee\nand\n\\be\n\\phi (x, x' | A) = \\exp \\left\\{ i e \\il^x_{x'} d \\zeta_\\mu \\left[ A^\\mu (\\zeta) + \\frac{1}{2} F^{\\mu \\nu} (\\zeta - x')_\\nu \\right]\n\\right\\} \\nonumber\n\\ee\ncarries the entire gauge dependence of the propagation function. Needless to say, with (\\ref{90}) in our hands, we can compute several \nimportant processes in QED, such as the effective Lagrangian, in particular the one-loop effective Lagrangian, the pair-production rate, the Heisenberg-Euler\nLagrangian, photon-photon scattering, the axial vector anomaly, dispersion effects for low-frequency photons, and many more \\cite{3}.\n\\bi\n\n\\no\nAlso note that in our last relativistic problem we never used operator quantum field theory.\nWe always dealt with measured physical quantities like, for instance, the charge and mass of the electron,\nand nowhere were we confronted with any kind of renormalisation procedure.\n\n\\section{Introduction and terminology}\n\n\nIn the non-Archimedean setting, at least two notions of differentiability have been defined: the classical and the strict derivative. The classical derivative has some unpleasant and strange behaviors, but it has long been known that strict differentiability in the sense of Bourbaki is the hypothesis most useful for geometric applications, such as the inverse function theorem. 
Let us recall definitions.\n\nLet $\\mathbb K$ be a valued field containing $\\mathbb{Q}_p$ such that $\\mathbb K$ is complete (as a metric space), and $X$ be a nonempty subset of $\\mathbb K$ without isolated points.\nLet $f\\colon X \\to \\mathbb K$ and $r$ be a natural number. Set\n\t$$\n\t\\nabla^{r} X:=\\begin{cases}\n\tX & \\text{if } r=1,\\\\\n\t\\left\\{\\left(x_1, x_2,\\ldots, x_r\\right)\\in X^r \\colon x_i\\neq x_j \\text{ if } i\\neq j \\right\\} & \\text{if } r\\geq 2.\n\t\\end{cases}\n\t$$\nThe $r$-th difference quotient $\\Phi_rf: \\nabla^{r+1} X \\to \\mathbb K$ of $f$, with $r\\geq 0$, is recursively given by $\\Phi_0f=f$ and, for $r\\geq 1$, $(x_1, x_2,\\ldots, x_{r+1})\\in \\nabla^{r+1} X$ by \n\t$$\n\t\\Phi_r f(x_1, x_2,\\ldots, x_{r+1})=\\frac{\\Phi_{r-1}f(x_1, x_3,\\ldots, x_{r+1})-\\Phi_{r-1}f(x_2, x_3,\\ldots, x_{r+1})}{x_1-x_2}.\n\t$$\nThen a function $f\\colon X \\to \\mathbb K$ at a point $a\\in X$ is said to be:\n\\begin{itemize}\n\t\\item \\textbf{differentiable} if $\\lim_{x\\to a}f(x)-f(a)\/(x-a)$ exists;\n\t\n\t\\item \\textbf{strictly differentiable of order $r$} if $\\Phi_r f$ can be extended to a continuous function $\\overline{\\Phi}_r f: X^{r+1} \\to \\mathbb K$. We then set $D_rf(a)=\\overline{\\Phi}_r f(a, a,\\ldots, a)$.\n\\end{itemize}\n\n\n\nOur aim in this work is to study these notions through a recently coined approach--the lineability theory. Searching for large algebraic structures in the sets of objects with a special property, we, in this approach, can get deeper understanding of the behavior of the objects under discussion. In \\cites{jms1,jms2} authors studied lineability notions in the $p$-adic analysis; see\nalso \\cites{preprint-June-2019,jms3, fmrs}. The study of lineability can be traced back to Levine\nand Milman \\cites{levinemilman1940} in 1940, and V. I. Gurariy \\cite{gur1} in 1966. These works, among\nothers, motivated the introduction of the notion of lineability in 2005 \\cite%\n{AGS2005} (notion coined by V. I. Gurariy). Since then it has been a rapidly developing trend in mathematical analysis. There are extensive works on the classical lineability theory, see e.g. \\cites{ar,AGS2005,ar2,survey,bbf,bay}, whereas some recent topics can be found in \\cites{TAMS2020,abms,bns,cgp,nak,CS2019,bo,egs,bbls,ms}. It is interesting to note that Mahler in \\cite{Ma2} stated that:\n\\begin{quote}\n``On the other hand, the behavior of continuous functions of a $p$-adic variable is quite distinct from that of real continuous functions, and many basic theorems of real analysis have no $p$-adic analogues. \n\n$\\ldots$ there exist infinitely many linearly independent non-constant functions the derivative of which vanishes identically $\\ldots$''. \n\\end{quote} \n\nBefore further going, let us recall three essential notions. \nLet $\\kappa$ be a cardinal number. 
We say that a subset $A$ of a vector space $V$ over a field $\\mathbb{K}$ is \n\\begin{itemize}\n\n\t\\item \\textbf{$\\kappa$-lineable} in $V$ if there exists a vector space $M$ of dimension $\\kappa$ and $%\n\tM\\setminus \\{0\\} \\subseteq A$;\n\\end{itemize}\t \n\tand following \\cites{ar2, ba3}, if $V$ is\n\tcontained in a (not necessarily unital) linear algebra, then $A$ is called\n\\begin{itemize}[resume*]\t\n\t\\item \\textbf{$\\kappa$-algebrable} in $V$ if there exists an algebra $M$ such that $%\n\tM\\setminus \\{0\\} \\subseteq A$ and $M$ is a $\\kappa$-dimensional vector\n\tspace;\n\t\n\t\\item \\textbf{strongly $\\kappa$-algebrable} in $V$ if there exists an $\\alpha$-generated free algebra $M$ such that $M\\setminus \\{0\\}\\subseteq A$.\n\\end{itemize}\nNote that if $V$ is also contained in a commutative algebra, then a set $B\\subset V$ is a generating set of a free algebra contained in $A$ if and only if for any $n\\in \\mathbb N$ with $n\\leq \\text{card}(B)$ (where $\\text{card}(B)$ denotes the cardinality of $B$), any nonzero polynomial $P$ in $n$ variables with coefficients in $\\mathbb K$ and without free term, and any distinct $b_1,\\ldots,b_n\\in B$, we have $P(b_1,\\ldots,b_n)\\neq 0$.\n\nNow we can give an outline of our work. In Section \\ref{bac} we recall some standard concepts and notations concerning\nnon-Archimedean analysis. Then, in the section of Main results, we \nshow, among other things, that: \n(i) the set of functions $\\mathbb{Q}_p\\rightarrow \\mathbb{Q}_p$ with continuous derivative that are not strictly differentiable is $\\mathfrak{c}$-lineable ($\\mathfrak{c}$ denotes the cardinality of the continuum), \n(ii) the set of strictly differentiable functions $\\mathbb{Q}_p\\rightarrow \\mathbb{Q}_p$ of order $r$ but not strictly differentiable of order $r+1$ is $\\mathfrak c$-lineable,\n(iii) the set of all strictly differentiable functions $\\mathbb{Z}_p\\rightarrow \\mathbb K$ with zero derivative that are not Lipschitzian of any order $\\alpha>1$ is $\\mathfrak{c}$-lineable and $1$-algebrable, \n(iv) the set of differentiable functions $\\mathbb{Q}_p\\rightarrow \\mathbb{Q}_p$ which derivative is unbounded is strongly $\\mathfrak{c}$-algebrable, \n(v) the set of continuous functions $\\mathbb{Z}_p\\rightarrow \\mathbb{Q}_p$ that are differentiable with bounded derivative on a full set for any positive real-valued Haar measure on $\\mathbb{Z}_p$ but not differentiable on its complement having cardinality $\\mathfrak{c}$ is $\\mathfrak c$-lineable.\n\n\n\\section{Preliminaries for $p$-adic analysis}\n\\label{bac}\n\nWe summarize some basic definitions of $p$-adic analysis (for a\nmore profound treatment of this topic we refer the interested reader to \\cites{go,kato,va,sc}).\n\nWe shall use standard set-theoretical notation. 
As usual, $\\mathbb{N}, \n\\mathbb{N}_0, \\mathbb{Z}, \\mathbb{Q}$, $\\mathbb{R}$ and $\\mathbb P$ denote\nthe sets of all natural, natural numbers including zero, integer, rational, real, and prime numbers, respectively.\nThe restriction of a function $f$ to a set $A$ and the characteristic function of a set $A$ will be denoted\nby $f\\restriction A$ and $1_A$, respectively.\n\nFrequently, we use a theorem due to Fichtenholz-Kantorovich-Hausdorff \\cites{fich,h}: For any set $X$ of infinite cardinality there exists a family $\\mathcal{B}\\subseteq \n\\mathcal{P}(X)$ of cardinality $2^{\\text{card}(X)}$ such that for any finite\nsequences $B_1,\\ldots , B_n\\in \\mathcal{B}$ and $\\varepsilon_1, \\ldots,\n\\varepsilon_n \\in \\{0,1\\}$ we have $B_1^{\\varepsilon_1}\\cap \\ldots \\cap\nB_n^{\\varepsilon_n}\\neq \\emptyset$, where $B^1=B$ and $B^0=X\\setminus B$. \nA family of subsets of $X$ that satisfy the latter condition is called a family of \\textit{independent subsets} of $X$.\nHere $\\mathcal{P}(X)$ denotes the power set of $X$. In what follows we fix $\\mathcal{N}$, $\\mathcal N_0$ and $\\mathcal P$ for families of independent subsets of $\\mathbb{N}$, $\\mathbb N_0$ and $\\mathbb P$, respectively,\nsuch that $\\text{card}(\\mathcal N)=\\text{card}(\\mathcal N_0)=\\text{card}(\\mathcal P)=\\mathfrak c$.\n\n\nNow let us recall that given a field $\\mathbb K$, an absolute value on $\\mathbb K$ is a function \n\t$$\n\t|\\cdot|\\colon \\mathbb K\\to [0,+\\infty)\n\t$$ \nsuch that:\n\t\\begin{itemize}\n\t\t\\item $|x|=0$ if and only if $x=0$,\n\t\t\n\t\t\\item $|x y|=|x||y|$, and\n\t\t\n\t\t\\item $|x+y|\\leq |x|+|y|$,\n\t\\end{itemize}\nfor all $x,y\\in \\mathbb{K}$. \nThe last condition is the so-called \\textit{triangle inequality}.\nFurthermore, if $(\\mathbb K,|\\cdot|)$ satisfies the condition $|x+y|\\leq \\max\\{|x|, |y|\\}$ (the \\textit{strong triangle inequality}), then $(\\mathbb K,|\\cdot|)$ is called a non-Archimedean field.\nNote that $(\\mathbb K,|\\cdot|)$ is a normed space since $\\mathbb K$ is a vector space in itself.\nFor simplicity, we will denote for the rest of the paper $(\\mathbb K,|\\cdot|)$ by $\\mathbb K$.\nClearly, $|1|=|-1|=1$ and, if $\\mathbb K$ is a non-Archimedean field, then $\\underbrace{|1+\\cdots+1|}_{n \\text{ times}}\\leq 1$ \nfor all $n\\in \\mathbb{N}$. An immediate consequence of the strong triangle\ninequality is that $|x|\\neq |y|$ implies $|x+y|=\\max\\{|x|, |y|\\}$.\nNotice that if $\\mathbb K$ is a finite field, then the only possible absolute value that can be defined on $\\mathbb K$ is the trivial absolute value, that is, $|x|=0$ if $x=0$, and $|x|=1$ otherwise.\nFurthemore, given any field $\\mathbb K$, the topology endowed by the trivial absolute value on $\\mathbb K$ is the discrete topology.\n\n\n\nLet us fix a prime number $p$ throughout this work. For any non-zero integer \n$n\\neq 0$, let $\\text{ord}_p (n)$ be the highest power of $p$ which divides $%\nn$. Then we define $|n|_p=p^{-\\text{ord}_p (n)}$, $|0|_p=0$ and $|\\frac{n}{m}%\n|_p=p^{-\\text{ord}_p (n)+\\text{ord}_p (m)}$, the $p$-adic absolute value. The completion of the field of\nrationals, $\\mathbb{Q}$, with respect to the $p$-adic absolute value\nis called the field of $p$-adic numbers $\\mathbb{Q}_p$. 
\nAn important property of $p$-adic numbers is that each nonzero $x\\in \\mathbb{Q}_p$ can be represented as\n\t$$\n\tx=\\sum_{n=m}^\\infty a_n p^n,\n\t$$\nwhere $m\\in \\mathbb Z$, $a_n\\in \\mathbb F_p$ (the finite field of $p$ elements) and $a_m\\neq 0$.\nIf $x=0$, then $a_n=0$ for all $n\\in \\mathbb Z$.\nThe $p$-adic absolute value\nsatisfies the strong triangle inequality. Ostrowski's Theorem states that every nontrivial\nabsolute value on $\\mathbb{Q}$ is equivalent (i.e., defines the same\ntopology) to an absolute value $|\\cdot|_p$, where $p$ is a prime number, or\nthe usual absolute value (see \\cite{go}). \n\nLet $a\\in \\mathbb{Q}_{p}$ and $r$ be a positive number. The set $%\nB_r(a)=\\{x\\in \\mathbb{Q}_{p}\\colon |x-a|_{p}n \\}.\n\t\t$$\n\tNow consider the sequences $\\overline x=(x_n)_{N_1^1\\cap N_2^0\\cap \\cdots \\cap N_k^0}$, $\\overline y=(y_n)_{N_1^1\\cap N_2^0\\cap \\cdots \\cap N_k^0}$ and $\\overline z=(z_n)_{n\\in N_1^1\\cap N_2^0\\cap \\cdots \\cap N_k^0}$ defined as $x_n=p^n$, $y_n=0$ and $z_n=p^{n}+p^{n_+}$ for every $n\\in N_1^1\\cap N_2^0\\cap \\cdots \\cap N_k^0$.\n\t(Notice that the sequences $\\overline{x}$, $\\overline{y}$ and $\\overline{z}$ converge to $0$.)\n\tThen, \n\t\t\\begin{align*}\n\t\t\t& \\left|(y_n-z_n)^{-1}\\right|_p \\left|\\frac{g(x_n)-g(y_n)}{x_n-y_n}-\\frac{g(x_n)-g(z_n)}{x_n-z_n} \\right|_p\\\\\n\t\t\t& = \\left|(p^n+p^{n_+})^{-1}\\right|_p \\left|\\frac{\\beta_1 p^{2n}}{p^n}-\\frac{\\beta_1 p^{2n}-\\beta_1 p^{2n}-\\beta_1 p^{2n_{+}} }{p^n-p^n-p^{n_+}} \\right|_p\\\\\n\t\t\t& = |\\beta_1|_p \\left|\\frac{p^n-p^{n_+}}{p^n+p^{n_+}} \\right|_p=|\\beta_1|_p\\neq 0,\n\t\t\\end{align*}\n\tfor every $n\\in N_1^1\\cap N_2^0\\cap \\cdots \\cap N_k^0$.\n\tHowever, $g^{\\prime\\prime}\\equiv 0$.\n\tThis finishes the proof.\n\\end{proof}\n\nLet $\\mathbb K$ be a non-Archimedean field with absolute value $|\\cdot|$ that contains $\\mathbb{Q}_p$.\nFor any $\\alpha>0$, the space of Lipschitz functions from $X$ to $\\mathbb K$ of order $\\alpha$ is defined as\n\t$$\n\t\\text{Lip}_\\alpha (X,\\mathbb K)=\\{f\\colon X\\rightarrow \\mathbb K \\colon \\exists M>0(|f(x)-f(y)|\\leq M|x-y|^\\alpha),\\forall x,y\\in X \\}.\n\t$$\nLet\n\t$$\n\tN^1(X,\\mathbb K)=\\{f\\in S^1(X,\\mathbb K)\\colon f' \\equiv 0 \\}.\n\t$$\nIn view of \\cite{sc}*{Exercise~63.C} we have\n\t$$\n\tN^1(\\mathbb{Z}_p,\\mathbb K)\\setminus \\bigcup_{\\alpha>1} \\text{Lip}_\\alpha (\\mathbb{Z}_p, \\mathbb K)\\neq \\emptyset.\n\t$$\n\t\nTo prove the next theorem, we need a characterization of the spaces $N^1(\\mathbb{Z}_p,\\mathbb K)$ and $\\text{Lip}_\\alpha (\\mathbb{Z}_p,\\mathbb K)$ from \\cite{sc}. 
\nFor this we will denote by $(e_n)_{n\\in \\mathbb N_0}$ the van der Put base of $C(\\mathbb{Z}_p,\\mathbb K)$, which is given by $e_0\\equiv 1$ and $e_n$ is the characteristic function of $\\{x\\in\\mathbb{Z}_p \\colon |x-n|_p<1\/n \\}$ for every $n\\in \\mathbb N$.\n\n\\begin{proposition}\\label{thm:1}~\\begin{itemize}\n\t\\item[(i)] \tLet $f=\\sum_{n=0}^\\infty a_n e_n \\in C(\\mathbb{Z}_p,\\mathbb K)$.\n\tThen $f\\in N^1(\\mathbb{Z}_p,\\mathbb K)$ if and only if $(|a_n| n)_{n\\in \\mathbb N_0}$ converges to $0$ (see \\cite{sc}*{Theorem~63.3}).\\label{thm:63.3}\n\t\n\t\\item[(ii)]\n\tLet $f=\\sum_{n=0}^\\infty a_n e_n \\in C(\\mathbb{Z}_p,\\mathbb K)$ and $\\alpha>0$.\n\tThen $f\\in \\text{Lip}_\\alpha (\\mathbb{Z}_p,\\mathbb K)$ if and only if \n\t$$\n\t\\sup \\{|a_n| n^\\alpha \\colon n\\in\\mathbb N_0 \\}<\\infty\n\t$$ \n\t(see \\cite{sc}*{Exercise~63.B}).\\label{ex:63.B}\n\\end{itemize}\n\\end{proposition} \n\nThe next result shows that there is a vector space of maximum dimension of strictly differentiable functions with zero derivative that are not Lipschitzian.\nOur next three results can be compared with some results obtained in \\cites{jms, bbfg,bq} for the classical case.\n\n\\begin{theorem}\n\tThe set $N^1(\\mathbb{Z}_{p},\\mathbb K)\\setminus\\bigcup_{\\alpha>1}{\\rm{Lip}}_{\\alpha }(\\mathbb{Z}_{p},\\mathbb K)$ is $\\mathfrak{c}$-lineable (as a $\\mathbb K$-vector space).\n\\end{theorem}\n\n\\begin{proof}\n\tFix $n_1\\in\\mathbb N$ and take $B_1=\\{x\\in \\mathbb Z_p\\colon |x-n_1|_p<1\/n_1 \\}$.\n\tSince $\\mathbb Z_p$ and $B_1$ are clopen sets, we have that $\\mathbb Z_p\\setminus B_1$ is open and also nonempty.\n\tTherefore there exists an open ball $D_1\\subset \\mathbb Z_p\\setminus B_1$.\n\tFurthermore, as $\\mathbb N$ is dense in $\\mathbb Z_p$, there exists $n_2\\in \\mathbb N\\setminus \\{n_1\\}$ such that $B_2=\\{ x\\in \\mathbb Z_p\\colon |x-n_2|_p<1\/n_2 \\}\\subset D_1$.\n\tBy recurrence, we can construct a set $M=\\{n_k\\colon k\\in\\mathbb N \\}\\subset \\mathbb N$ such that the balls $B_k=\\{x\\in\\mathbb Z_p\\colon |x-n_k|_p<1\/n_k \\}$ are pairwise disjoint.\n\t\n\tLet $\\sigma\\colon \\mathbb N_0\\rightarrow M$ be the increasing bijective function and let $(m_n)_{n\\in\\mathbb N}\\subset \\mathbb N$ be an increasing sequence such that $p^{-m_n}n\\rightarrow 0$ and $p^{-m_n}n^\\alpha\\rightarrow\\infty$ for every $\\alpha>1$ when $n\\rightarrow\\infty$. 
(This can be done for instance by taking $m_n=\\left\\lfloor \\ln(n \\ln(n))\/\\ln(p) \\right\\rfloor$.)\n\tFor every $N\\in\\mathcal N_0$, define $f_N\\colon \\mathbb Z_p\\rightarrow \\mathbb K$ as\n\t\t$$\n\t\tf_N=\\sum_{n=0}^\\infty 1_N(n) p^{m_{\\sigma(n)}} e_{\\sigma(n)}.\n\t\t$$\n\tSince every $N\\in\\mathcal N_0$ is infinite, we have that $|1_N(n)| p^{-m_{\\sigma(n)}} \\sigma(n)\\rightarrow 0$ when $n\\rightarrow \\infty$ and \n\t\t$$\n\t\t\\{|1_N(n)| p^{-m_{\\sigma(n)}} \\sigma(n)^\\alpha \\colon n\\in\\mathbb N_0 \\}\n\t\t$$ \n\tis unbounded for every $\\alpha>1$.\n\tHence, by Theorem~\\ref{thm:1}, we have $f_N\\in N^1(\\mathbb{Z}_{p},\\mathbb K)\\setminus\\bigcup_{\\alpha>1}{\\rm{Lip}}_{\\alpha }(\\mathbb{Z}_{p},\\mathbb K)$ for every $N\\in\\mathcal N_0$.\n\t\n\tWe will prove now that the functions in $V=\\{f_N\\colon N\\in\\mathcal N_0 \\}$ are linearly independent over $\\mathbb K$ and such that any nonzero linear combination of $V$ over $\\mathbb K$ is contained in $N^1(\\mathbb{Z}_{p},\\mathbb K)\\setminus\\bigcup_{\\alpha>1}{\\rm{Lip}}_{\\alpha }(\\mathbb{Z}_{p},\\mathbb K)$.\n\tTake $f=\\sum_{i=1}^r a_i f_{N_i}$, where $a_1,\\ldots, a_r\\in\\mathbb K$, $N_1,\\ldots, N_r\\in\\mathcal N_0$ are distinct and $r\\in\\mathbb N$.\n\tAssume that $f\\equiv 0$. Fix $n\\in N_1^1\\cap N_2^0\\cap \\cdots\\cap N_r^0$ and take $x\\in\\mathbb Z_p$ such that $x\\in B_{\\sigma(n)}$, then $0=f(x)=a_1 p^{m_{\\sigma(n)}}$, i.e., $a_1=0$.\n\tBy applying similar arguments we have $a_i=0$ for every $i\\in\\{1,\\ldots,r \\}$.\n\tAssume for the rest of the proof that $a_i\\neq 0$ for every $i\\in\\{1,\\ldots,r \\}$.\n\tNotice that $f=\\sum_{n=0}^\\infty \\left(\\sum_{i=1}^r a_i 1_{N_i}\\right)(n) p^{m_{\\sigma(n)}} e_{\\sigma(n)}$, where $\\left|\\left(\\sum_{i=1}^r a_i 1_{N_i}\\right)(n) p^{m_{\\sigma(n)}}\\right|\\leq |p^{m_{\\sigma(n)}}| \\max\\{|a_i|\\colon i\\in\\{1,\\ldots,r \\} \\}=p^{-m_{\\sigma(n)}} \\max\\{|a_i|\\colon i\\in\\{1,\\ldots,r \\} \\}$.\n\tTherefore $\\left|\\left(\\sum_{i=1}^r a_i 1_{N_i}\\right)(n) p^{m_{\\sigma(n)}}\\right| \\sigma(n)\\rightarrow 0$ when $n\\rightarrow \\infty$.\n\tFinally, as $N_1^1\\cap N_2^0\\cap \\cdots\\cap N_r^0$ is infinite, we have that \n\t\t\\begin{align*}\n\t\t\t& \\left\\{\\left|\\left(\\sum_{i=1}^r a_i 1_{N_i}\\right)(n) p^{m_{\\sigma(n)}}\\right| \\sigma(n)^\\alpha \\colon n\\in N_1^1\\cap N_2^0\\cap \\cdots\\cap N_r^0 \\right\\}\\\\\n\t\t\t& = \\{|a_1| p^{-m_{\\sigma(n)}} \\sigma(n)^\\alpha \\colon n\\in N_1^1\\cap N_2^0\\cap \\cdots\\cap N_r^0 \\}\n\t\t\\end{align*}\n\tis unbounded for every $\\alpha>1$.\n\\end{proof}\n\n\n\nThe next lineability result can be considered as a non-Archimedean counterpart of \\cite{g7}*{Theorem 6.1}.\nTo prove it we will make use of the following lemma.\n(For more information on the usage of the lemma see \\cite[Section~5]{fmrs}.)\nIn order to understand it, let us consider the functions $x\\mapsto (1+x)^\\alpha$ where $x\\in p\\mathbb Z_p$ and $\\alpha\\in \\mathbb Z_p$.\nIt is well known that $(1+x)^\\alpha$ is defined analytically by $(1+x)^\\alpha = \\sum_{i=0}^\\infty {\\alpha \\choose i} x^i$.\nMoreover $x\\mapsto (1+x)^\\alpha$ is well defined (see \\cite[Theorem~47.8]{sc}), differentiable with derivative $\\alpha(1+x)^{\\alpha-1}$, and $x\\mapsto (1+x)^\\alpha$ takes values in $\\mathbb Z_p$ (in particular $(1+x)^\\alpha=1+y$ for some $y\\in p\\mathbb Z_p$, see \\cite[Theorem~47.10]{sc}).\n\n\\begin{lemma}\\label{lem:1}\n\tIf $\\alpha_1,\\ldots,\\alpha_n\\in \\mathbb Z_p\\setminus \\{0\\}$ are distinct, 
with $n\\in \\mathbb N$, then every linear combination $\\sum_{i=1}^n \\gamma_i (1+x)^{\\alpha_i}$, with $\\gamma_i\\in \\mathbb Q_p\\setminus \\{0\\}$ for every $1\\leq i\\leq n$, is not constant.\n\\end{lemma}\n\n\n\\begin{theorem}\n\t\\label{16} The set of everywhere differentiable functions $\\mathbb{Q}_{p}\\rightarrow \\mathbb{Q}_{p}$\n\twhich derivative is unbounded is strongly $\\mathfrak{c}$-algebrable.\n\\end{theorem}\n\n\\begin{proof}\n\tLet $\\mathcal H$ be a Hamel basis of $\\mathbb Q_p$ over $\\mathbb Q$ contained in $p\\mathbb Z_p$, and for each $\\beta \\in \\mathbb Z_p\\setminus\\{0\\}$ define $f_\\beta : \\mathbb Q_p\\to \\mathbb Q_p$ by\n\t\t$$\n\t\tf_\\beta (x)=\\begin{cases}\n\t\tp^{-n} (1+y)^\\beta & \\text{if } x=\\sum_{k=-n}^0 a_k p^k +y, \\text{ where } a_n\\neq 0,\\ n\\in \\mathbb N_0 \\text{ and } y\\in p\\mathbb{Z}_p,\\\\\n\t\t0 & \\text{otherwise}.\n\t\t\\end{cases}\n\t\t$$\n\tThe function $f_\\beta$ is differentiable everywhere for any $\\beta\\in \\mathbb Z_p$.\n\tIndeed, firstly we have that $f_\\beta$ is locally constant on $p\\mathbb Z_p$ as $f_\\beta\\restriction p\\mathbb Z_p \\equiv 0$.\n\tLastly it remains to prove that $f_\\beta$ is differentiable at $x_0=\\sum_{k=-n}^0 a_k p^k +y_0$, i.e., the limit\n\t\t\\begin{equation}\\label{equ:2}\n\t\t\t\\lim_{x\\rightarrow x_0} \\frac{p^{-m}(1+y)^\\beta-p^{-n}(1+y_0)^\\beta}{x-x_0}\n\t\t\\end{equation}\n\texists, where the values $x$ are of the form $x=\\sum_{k=-m}^0 b_k p^k +y$.\n\tMoreover, as $x$ approaches $x_0$ in the limit \\eqref{equ:2}, we can assume that $x=\\sum_{k=-n}^0 a_k p^k +y$.\n\tHence, the limit in \\eqref{equ:2} can be simplified to\n\t\t\\begin{align*}\n\t\t\t\\lim_{x\\rightarrow x_0} \\frac{p^{-m}(1+y)^\\beta-p^{-n}(1+y_0)^\\beta}{x-x_0} & =\\lim_{x\\rightarrow x_0} \\frac{p^{-n}(1+y)^\\beta-p^{-n}(1+y_0)^\\beta}{x-x_0}\\\\\n\t\t\t& =p^{-n} \\lim_{y\\rightarrow y_0} \\frac{(1+y)^\\beta-(1+y_0)^\\beta}{y-y_0}=p^{-n}\\beta(1+y_0)^{\\beta-1}.\n\t\t\\end{align*}\n\tIn particular the derivative of $f_\\beta$ is given by\n\t\t$$\n\t\tf_\\beta^\\prime (x)=\\begin{cases}\n\t\tp^{-n} \\beta (1+y)^{\\beta-1} & \\text{if } x=\\sum_{k=-n}^0 a_k p^k +y, \\text{ where } a_n\\neq 0,\\ n\\in \\mathbb N_0, y\\in p\\mathbb{Z}_p,\\\\\n\t\t0 & \\text{otherwise},\n\t\t\\end{cases}\n\t\t$$\n\tand it is unbounded since $\\lim_{n\\rightarrow \\infty} |p^{-n}\\beta(1+y)^{\\beta-1}|_p=\\lim_{n\\rightarrow \\infty} p^{n}|\\beta(1+y)^{\\beta-1}|_p=|\\beta|_p\\lim_{n\\rightarrow \\infty} p^{n}=\\infty$, where we have used the fact that $\\beta\\neq 0$.\n\t\n\tLet $h_1,\\ldots,h_m \\in \\mathcal H$ be distinct and take $P$ a polynomial in $m$ variables with coefficients in $\\mathbb Q_p\\setminus \\{0\\}$ and without free term, that is, $P(x_1,\\ldots,x_m)=\\sum_{r=1}^d \\alpha_r x_1^{k_{r,1}}\\cdots x_m^{k_{r,m}}$, where $d\\in \\mathbb N$, $\\alpha_i\\in\\mathbb{Q}_p\\setminus \\{0\\}$ for every $1\\leq r\\leq d$, $k_{r,i}\\in\\mathbb N_0$ for every $1\\leq r\\leq d$ and $1\\leq i\\leq m$ with $\\overline{k}_r:=\\sum_{i=1}^m k_{r,i}\\geq 1$, and the $m$-tuples $(k_{r,1},\\ldots,k_{r,m})$ are pairwise distinct.\n\tAssume also without loss of generality that $\\overline{k}_1\\geq \\cdots\\geq \\overline{k}_d$.\n\tWe will prove first that $P(f_{h_1},\\ldots,f_{h_m})\\not\\equiv 0$, i.e., the functions in $\\{f_h \\colon h\\in \\mathcal H \\}$ are algebraically independent.\n\tNotice that $P(f_{h_1},\\ldots,f_{h_m})$ is of the form\n\t\t$$\n\t\t\\begin{cases}\n\t\t\\sum_{r=1}^d p^{-n\\overline{k}_r} \\alpha_r 
(1+y)^{\\beta_r} & \\text{if } x=\\sum_{k=-n}^0 a_k p^k +y, \\text{ with } a_n\\neq 0,\\ n\\in \\mathbb N_0, y\\in p\\mathbb{Z}_p,\\\\\n\t\t0 & \\text{otherwise},\n\t\t\\end{cases}\n\t\t$$\n\twhere the exponents $\\beta_r:=\\sum_{i=1}^m k_{r,i} h_i$ belong to $p\\mathbb{Z}_p\\setminus\\{0\\}$ and are pairwise distinct because $\\mathcal H$ is Hamel basis of $\\mathbb Q_p$ over $\\mathbb Q$ contained in $p\\mathbb{Z}_p$, $k_{r,i}\\in\\mathbb N_0$, $\\overline{k}_r\\neq 0$ and the numbers $h_1,\\ldots,h_m$ as well as the $m$-tuples $(k_{r,1},\\ldots,k_{r,m})$ are pairwise distinct.\t\n\tFix $n\\in\\mathbb N_0$.\n\tSince $p^{-n\\overline{k}_r} \\alpha_r \\neq 0$ for every $1\\leq r\\leq d$, by Lemma~\\ref{equ:2}, there is $y\\in p\\mathbb{Z}_p$ such that $\\sum_{r=1}^d p^{-n\\overline{k}_r} \\alpha_r (1+y)^{\\beta_r}\\neq 0$.\n\tHence, by taking $x=p^{-n}+y$, we have $P(f_{h_1},\\ldots,f_{h_m})(x)\\neq 0$.\n\t\n\tFinally, let us prove that $P(f_{h_1},\\ldots,f_{h_m})'$ exists and it is unbounded.\n\tClearly $P(f_{h_1},\\ldots,f_{h_m})$ is differentiable and the derivative is given by\n\t\t\\begin{equation}\\label{equ:6}\n\t\t\t\\begin{cases}\n\t\t\t\\displaystyle \\sum_{r=1}^d p^{-n\\overline{k}_r} \\alpha_r \\beta_r (1+y)^{\\beta_r-1} & \\displaystyle \\text{if } x=\\sum_{k=-n}^0 a_k p^k +y, \\text{ with } a_n\\neq 0,\\ n\\in \\mathbb N_0, y\\in p\\mathbb{Z}_p,\\\\\n\t\t\t0 & \\text{otherwise}.\n\t\t\t\\end{cases}\n\t\t\\end{equation}\n\tNotice that $\\beta_r\\neq 1$ for every $1\\leq r\\leq d$ since $\\beta_r\\in p\\mathbb{Z}_p$.\n\tWe will now rewrite formula \\eqref{equ:6} in order to simplify the proof.\n\tNotice that if some of the exponents $\\overline{k}_r$ are equal, i.e., for instance $\\overline{k_i}=\\cdots=\\overline{k}_j$ for some $1\\leq i\\cdots>\\widetilde{k}_{\\widetilde{d}}$, and $\\alpha_{q,s}$ and $\\beta_{q,s}$ are the corresponding terms $\\alpha_r$ and $\\beta_r$, respectively.\n\tNow, since $\\alpha_{1,s} \\beta_{1,s}\\neq 0$ and the exponents $\\beta_{1,s}-1$ are pairwise distinct and not equal to $0$ for every $1\\leq s\\leq m_1$, by Lemma~\\ref{equ:2}, there exists $y_1\\in p\\mathbb{Z}_p$ such that $\\sum_{s=1}^{m_1}\\alpha_{1,s}\\beta_{1,s} (1+y_1)^{\\beta_{1,s}-1}\\neq 0$.\n\tTake the sequence $(x_n)_{n=1}^\\infty$ defined by $x_n=p^{-n}+y_1$.\n\tSince $\\widetilde{k}_1>\\cdots>\\widetilde{k}_{\\widetilde{d}}$, there exists $n_0\\in \\mathbb N$ such that\n\t\t\\begin{align*}\n\t\t\t|P(f_{h_1},\\ldots,f_{h_m})^\\prime (x_n)|_p & =\\left|\\sum_{q=1}^{\\widetilde{d}} p^{-n\\widetilde{k}_q} \\sum_{s=1}^{m_q}\\alpha_{q,s}\\beta_{q,s} (1+y_1)^{\\beta_{q,s}-1} \\right|_p\\\\\n\t\t\t& =\\left|p^{-n\\widetilde{k}_1}\\sum_{s=1}^{m_1 }\\alpha_{1,s} \\beta_{1,s} (1+y_1)^{\\beta_{1,s}-1} + \\right.\\\\ \n\t\t\t&\\left. + \\sum_{q=2}^{\\widetilde{d}} p^{-n\\widetilde{k}_r} \\sum_{s=1}^{m_q}\\alpha_{q,s} \\beta_{q,s} (1+y_1)^{\\beta_{q,s}-1} \\right|_p\\\\\n\t\t\t& =\\left|p^{-n\\widetilde{k}_1}\\sum_{s=1}^{m_1 }\\alpha_{1,s} \\beta_{1,s} (1+y_1)^{\\beta_{1,s}-1}\\right|_p\\\\\n\t\t\t& =p^{n\\widetilde{k}_1}\\left|\\sum_{s=1}^{m_1 }\\alpha_{1,s} \\beta_{1,s} (1+y_1)^{\\beta_{1,s}-1}\\right|_p,\n\t\t\\end{align*}\n\tfor every $n\\geq n_0$. 
\n\tHence, $\\lim_{n\\to \\infty} |P(f_{h_1},\\ldots,f_{h_m})^\\prime (x_n)|_p=\\infty$.\n\\end{proof}\n\nThe reader may have noticed that the functions in the proof of Theorem~\\ref{16} have unbounded derivative but the derivative is bounded on each ball of $\\mathbb{Q}_p$.\nThe following result (which proof is a modification of the one in Theorem~\\ref{16}) shows that we can obtain a similar optimal result when the derivative is unbounded on each ball centered at a fixed point $a\\in \\mathbb{Q}_p$.\nThe functions will not be differentiable at $a$.\n\n\\begin{corollary}\n\\label{15} Let $a\\in \\mathbb{Q}_p$.\nThe set of continuous functions $\\mathbb{Q}_{p}\\rightarrow \\mathbb{Q}_{p}$ that are differentiable except at $a$ and which derivative is unbounded on $\\mathbb{Q}_p\\setminus (a+\\mathbb{Z}_p)$ and on $(a+\\mathbb{Z}_p)\\setminus\\{a\\}$ is strongly $\\mathfrak{c}$-algebrable.\n\\end{corollary}\n\n\\begin{proof}\n\tFix $a\\in \\mathbb{Q}_p$.\n\tLet $\\mathcal H$ be a Hamel basis of $\\mathbb Q_p$ over $\\mathbb Q$ contained in $p\\mathbb Z_p$.\n\tFor every $\\beta\\in \\mathbb{Z}_p\\setminus\\{0\\}$ take the function $f_\\beta$ defined in the proof of Theorem~\\ref{16} and also define $g_\\beta\\colon \\mathbb{Q}_p\\rightarrow \\mathbb{Q}_p$ by\n\t\t$$\n\t\tg_\\beta(x)=\\begin{cases}\n\t\tp^{n}[p^{-n^2}(x-a)]^\\beta & \\text{if } x\\in \\overline{B}_{p^{-(n^2+1)}}(a+p^{n^2}) \\text{ for some } n\\in \\mathbb N,\\\\\n\t\t0 & \\text{otherwise}.\n\t\t\\end{cases}\n\t\t$$\n\tNotice that by applying the change of variable $y=x-a$ we can assume, without loss of generality, that $a=0$.\n\tSince for any $x\\in \\overline{B}_{p^{-(n^2+1)}}(p^{n^2})$ with $n\\in\\mathbb N$, $x$ is of the form $p^{n^2}+\\sum_{k=n^2+1}^\\infty a_k p^k$ with $a_{k}\\in \\{0,1,\\ldots,p-1\\}$ for every integer $k\\geq n^2+1$, we have that $p^{-n^2}x=1+\\sum_{k=n^2+1}^\\infty a_k p^{k-n^2}\\in 1+p\\mathbb{Z}_p$.\n\tThus $g_\\beta$ is well defined.\n\tNow, for every $\\beta\\in \\mathbb{Z}_p\\setminus\\{0\\}$, let $F_{\\beta}:=f_\\beta+g_\\beta$.\n\tIt is easy to see that $F_\\beta$ is differentiable at every $x\\in \\mathbb{Q}_p\\setminus\\{0\\}$ and, in particular, continuous on $\\mathbb{Q}_p\\setminus\\{0\\}$.\n\tLet us prove now that $F_\\beta$ is continuous at $0$.\n\tFix $\\varepsilon>0$ and take $n\\in\\mathbb N$ such that $p^{-n}<\\varepsilon$. 
\n\tIf $|x|_p0$ is $\\mathfrak{c}$-lineable and 1-algebrable.\n\\end{proposition}\n\n\\begin{proof}\n\tLet us prove first the lineability part.\n\tFor any $N\\in\\mathcal N$, let $f_N\\colon \\mathbb{Q}_p\\rightarrow \\mathbb{Q}_p$ be:\n\t$$\n\tf_N(x)=\\begin{cases}\n\t\tp^{n} & \\text{if } x\\in S_{p^{-n^2}}(0) \\text{ and } n\\in N,\\\\\n\t\t0 & \\text{otherwise}.\n\t\\end{cases}\n\t$$\n\tFor every $x\\in\\mathbb{Q}_p\\setminus\\{0\\}$, it is clear that there exists a neighborhood of $x$ such that $f_N$ is constant since the spheres are open sets.\n\tThus, $f_N$ is locally constant on $\\mathbb{Q}_p\\setminus\\{0\\}$ which implies that $f_N$ is continuous, differentiable on $\\mathbb{Q}_p\\setminus\\{0\\}$ and $f_N^\\prime(x)=0$ for every $x\\in \\mathbb{Q}_p\\setminus\\{0\\}$.\n\tMoreover, it is easy to see that $f_N$ is continuous at $0$.\n\tHowever, $f_N$ is not differentiable at $0$.\n\tIndeed, take $x_n=p^{n^2}$ for every $n\\in N$.\n\tIt is clear that the sequence $(x_n)_{n\\in N}$ converges to $0$ and also, for every $\\alpha>0$,\n\t$$\n\t\\frac{|f_N(x_n)|_p}{|x_n|_p^\\alpha}=p^{\\left(-1+\\alpha n\\right) n}\\rightarrow \\infty,\n\t$$\n\twhen $n\\in N$ tends to infinity.\n\tTherefore $f_N$ is not differentiable at $0$.\n\tFurthermore, notice that for any $M>0$ there are infinitely many $x\\in \\mathbb{Z}_p$ such that $|f_N(x)|_p> M|x|_p^\\alpha$.\n\tHence $f_N$ is not Lipschitzian of order $\\alpha>0$.\n\t\n\tIt remains to prove that the functions in $V=\\{f_N\\colon N\\in\\mathcal N \\}$ are linearly independent over $\\mathbb{Q}_p$ and such that the functions in $\\text{span}(V)\\setminus\\{0\\}$ are differentiable except at $0$, with bounded derivative on $\\mathbb{Q}_p\\setminus\\{0\\}$ and not Lipschitzian of order $\\alpha>0$.\n\tLet $f=\\sum_{i=1}^m \\alpha_i f_{N_i}$, where $\\alpha_1,\\ldots, \\alpha_m\\in \\mathbb{Q}_p$, $N_1,\\ldots, N_m\\in \\mathcal N$ are distinct and $m\\in \\mathbb N$.\n\tAssume that $f\\equiv 0$ and take $n\\in N_1^1\\cap N_2^0\\cap \\cdots \\cap N_m^0$.\n\tThen $0=f(p^{n^2})=\\alpha_1 p^{n}$ which implies that $\\alpha_1=0$.\n\tApplying similar arguments we have that $\\alpha_i=0$ for every $i\\in \\{1,\\ldots, m \\}$.\n\tFinally, assume that $\\alpha_i\\neq 0$ for every $i\\in \\{1,\\ldots, m \\}$.\n\tIt is clear that $f$ is continuous on $\\mathbb{Q}_p$ and differentiable on $\\mathbb{Q}_p\\setminus\\{0\\}$ with $f_N^\\prime(x)=0$ for every $x\\in \\mathbb{Q}_p\\setminus \\{0\\}$.\n\tLet $x_n=p^{n^2}$ for every $n\\in N_1^1\\cap N_2^0\\cap \\cdots \\cap N_m^0$ and notice that $(x_n)_{n\\in N_1^1\\cap N_2^0\\cap \\cdots \\cap N_m^0}$ converges to $0$.\n\tMoreover, for every $\\alpha>0$,\n\t$$\n\t\\frac{|f(x_n)|_p}{|x_n|_p^\\alpha}=|\\alpha_1|_p p^{\\left(-1+\\alpha n\\right) n}\\rightarrow \\infty,\n\t$$\n\twhen $n\\in N_1^1\\cap N_2^0\\cap \\cdots \\cap N_m^0$ tends to infinity.\n\tHence, $f$ is not differentiable at $0$, and also for every $M>0$ there are infinitely many $x\\in \\mathbb{Z}_p$ such that $|f(x)|_p> M|x|_p^\\alpha$.\n\t\n\tFor the algebrability part, let $g\\colon \\mathbb{Q}_p\\rightarrow\\mathbb{Q}_p$ be defined as:\n\t$$\n\tg(x)=\\begin{cases}\n\t\tp^n & \\text{if } x\\in S_{p^{-n^2}}(0) \\text{ for some } n\\in\\mathbb N,\\\\\n\t\t0 & \\text{otherwise}.\n\t\\end{cases}\n\t$$\n\tBy applying similar arguments used in the first part of the proof, we have that $g$ is continuous, differentiable on $\\mathbb{Q}_p\\setminus\\{0\\}$ with $g^\\prime(x)=0$ for every $x\\in \\mathbb{Q}_p\\setminus\\{0\\}$ and not Lipschitzian of order 
$\\alpha>0$.\n\tTo finish the proof, let $G=\\beta g^k$ where $\\beta\\in\\mathbb{Q}_p\\setminus\\{0\\}$ and $k\\in\\mathbb N$.\n\tIt is obvious that $G$ is continuous, differentiable on $\\mathbb{Q}_p\\setminus\\{0\\}$ and $G^\\prime(x)=0$ for every $x\\in\\mathbb{Q}_p\\setminus\\{0\\}$.\n\tNow, let $x_n=p^{n^2}$ for every $n\\in\\mathbb N$.\n\tIt is easy to see that $(x_n)_{n\\in\\mathbb N}$ converges to $0$ and\n\t$$\n\t\\frac{|G(x_n)|_p}{|x_n|_p^\\alpha}=|\\beta|_p p^{\\left(-k+\\alpha n\\right) n}\\rightarrow \\infty,\n\t$$\n\twhen $n\\rightarrow\\infty$.\n\\end{proof}\n\nLet $\\mathcal B$ be the $\\sigma$-algebra of all Borel subsets of $\\mathbb{Z}_p$ and $\\mu$ be any non-negative real-valued Haar measure on the measurable space $(\\mathbb{Z}_p,\\mathcal B)$.\nIn particular, if $\\mu$ is normalized, then $\\mu\\left(x+p^n\\mathbb{Z}_p\\right)=p^{-n}$ for any $x\\in \\mathbb{Z}_p$ and $n\\in\\mathbb N$.\nFor the rest of the paper $\\mu$ will denote a non-negative real-valued Haar measure on $(\\mathbb{Z}_p,\\mathcal B)$.\nAs usual, a Borel subset $B$ of $\\mathbb{Z}_p$ is called a null set for $\\mu$ provided that $\\mu(B)=0$.\nWe also say that a Borel subset of $\\mathbb{Z}_p$ is a full set for $\\mu$ if $\\mathbb{Z}_p\\setminus B$ is a null set. (See \\cite[Section~2.2]{Fo} for more details on the Haar measure.)\n\n\nIt is easy to see that the singletons of $\\mathbb{Z}_p$ are null sets for any Haar measure $\\mu$ on $(\\mathbb{Z}_p,\\mathcal B)$.\nTherefore Proposition~\\ref{26} states in particular that, for any Haar measure $\\mu$ on $(\\mathbb{Z}_p,\\mathcal B)$, the set of continuous functions $\\mathbb{Z}_p\\rightarrow\\mathbb{Q}_p$ that are differentiable except on a null set for $\\mu$ of cardinality $1$, with bounded derivative elsewhere, is $\\mathfrak{c}$-lineable.\nThe following result shows that a similar version can be obtained when we consider null sets of cardinality $\\mathfrak{c}$ for any Haar measure $\\mu$ on $(\\mathbb{Z}_p,\\mathcal B)$.\nIn order to prove it, we recall the following definition and result from probability theory.\n\n\\begin{definition}\\label{def:1}\n\tLet $(\\Omega,\\mathcal F,P)$ be a probability space and $Y$ be a measurable real-valued function on $\\Omega$. 
We say that $Y$ is a random variable.\n\\end{definition}\n\n\\begin{theorem}\\normalfont{(Strong law of large numbers, see \\cite[Theorem~22.1]{Bi}).}\n\tLet $(Y_n)_{n\\in \\mathbb N_0}$ be a sequence of independent and identically distributed real-valued random variables on a probability space $(\\Omega,\\mathcal F,P)$ such that, for each $n\\in\\mathbb N_0$, $E[Y_n]=m$ for some $m\\in\\mathbb R$ (where $E$ denotes the expected value).\n\tThen\n\t$$\n\tP\\left(\\left\\{x\\in \\Omega\\colon \\exists\\lim_{n\\rightarrow \\infty} \\frac{\\sum_{k=0}^{n-1} Y_k(x)}{n}=m \\right\\} \\right)=1.\n\t$$\n\\end{theorem}\n\n\\begin{theorem}\\label{thm:2}\n\tLet $\\mu$ be a Haar measure on $(\\mathbb{Z}_p,\\mathcal B)$.\n\tThe set of continuous functions $\\mathbb{Z}_p\\rightarrow\\mathbb{Q}_p$ which are differentiable on a full set for $\\mu$ with bounded derivative but not differentiable on the complement having cardinality $\\mathfrak c$ is $\\mathfrak c$-lineable.\n\\end{theorem}\n\n\\begin{proof}\n\tWe will prove the result for $\\mu$ the normalized Haar measure on $(\\mathbb{Z}_p,\\mathcal B)$ since any null set for the normalized Haar measure is also a null set for any other non-negative real-valued Haar measure on $(\\mathbb{Z}_p,\\mathcal B)$.\n\tThis is an immediate consequence of Haar's Theorem which states that Haar measures are unique up to a positive multiplicative constant (see \\cite[Theorem~2.20]{Fo}).\n\t\n\n\t\n\tLet $f\\colon \\mathbb{Z}_p\\rightarrow \\mathbb{Z}_p$ be defined as follows: for every $x=\\sum_{i=0}^\\infty x_i p^i\\in \\mathbb{Z}_p$, we have\n\t\t\\begin{equation}\\label{equ:3}\n\t\t\tf(x)=\\begin{cases}\n\t\t\t\tx & \\text{if } (x_{2i},x_{2i+1})\\neq (0,0) \\text{ for all } i\\in \\mathbb N_0,\\\\\n\t\t\t\t\\displaystyle \\sum_{i=0}^{2n+1} x_i p^i & \\text{if } (x_{2i},x_{2i+1})\\neq (0,0) \\text{ for all } i\\leq n \\\\\n\t\t\t\t & \\text{with } n\\in \\mathbb N_0 \\text{ and } (x_{2n+2},x_{2n+3})= (0,0),\\\\\n\t\t\t\t0 & \\text{if } (x_0,x_1)=(0,0).\n\t\t\t\\end{cases}\n\t\t\\end{equation}\n\tThe function $f$ is continuous.\n\tIndeed, let $x=\\sum_{i=0}^\\infty x_i p^i\\in \\mathbb{Z}_p$ and fix $\\varepsilon>0$.\n\tTake any $m\\in \\mathbb N_0$ such that $p^{-(2m+1)}<\\varepsilon$.\n\tThen for any $y\\in \\mathbb{Z}_p$ such that $|x-y|_p0$.\n\tTake an integer $m>n$.\n\tThen for every $y\\in \\mathbb{Z}_p$ such that $|x-y|_p0$ and take $n\\in \\mathbb N$ such that $p^{-n}<\\varepsilon$.\n\tIf $x\\in \\mathbb{Z}_p$ is such that $|x|_p=p^{-n}$, then $x=x_n p^n+p^{n+1}x^\\prime$ with $x_n\\neq 0$.\n\tFurthermore,\n\t\t\\begin{align*}\n\t\t\t|g(0)-g(x)|_p & =\\begin{cases}\n\t\t\t|p^n f(x^\\prime)|_p & \\text{if } x_n=1,\\\\\n\t\t\t0 & \\text{otherwise},\n\t\t\t\\end{cases}\n\t\t\t=\\begin{cases}\n\t\t\tp^{-n}|f(x^\\prime)|_p & \\text{if } x_n=1,\\\\\n\t\t\t0 & \\text{otherwise}.\n\t\t\t\\end{cases}\n\t\t\\end{align*}\n\tHence, $|g(0)-g(x)|_p\\leq p^{-n}<\\varepsilon$.\n\tTherefore we have proven that $g$ is continuous on $\\mathbb{Z}_p$.\n\tMoreover, $g$ is differentiable also on $\\bigcup_{n=1}^\\infty (B_n\\setminus E_n)$ (with bounded derivative) as $f$ is differentiable on $\\bigcup_{n=1}^\\infty (B_n\\setminus E_n)$, where $E_n:=p^n+p^{n+1}E$; and $g$ is not differentiable on $\\bigcup_{n=1}^\\infty E_n$ since $f$ is not differentiable on $\\bigcup_{n=1}^\\infty E_n$.\n\tNotice that once again $\\text{card}(E_n)=\\mathfrak{c}$ for every $n\\in \\mathbb N_0$.\n\t\n\tLet us prove that $E_n$ is a Borel set with $\\mu(E_n)=0$ for every $n\\in \\mathbb N$.\n\tTo do 
so, let us consider the restricted measure $\\mu_n=p^{n+1}\\mu$ on the measurable space $(B_n,\\mathcal B_n)$, where $\\mathcal B_n$ is the $\\sigma$-algebra of all Borel subsets of $B_n$.\n\tNotice that $\\mathcal B_n=\\{B\\cap B_n\\colon B\\in \\mathcal B \\}$ and $(B_n,\\mathcal B_n,\\mu_n)$ is a probability space.\n\tDefine now for every $i\\in \\mathbb N_0$ the random variables $Y_{n,i}\\colon B_n\\rightarrow\\{0,1\\}$ as follows: for $x=p^n+p^{n+1}\\sum_{i=0}^\\infty x_i p^i\\in B_n$, we have\n\t$$\n\tY_{n,i}(x)=\\begin{cases}\n\t\t1 & \\text{if } (x_{2i},x_{2i+1})=(0,0),\\\\\n\t\t0 & \\text{if } (x_{2i},x_{2i+1}) \\neq (0,0).\n\t\\end{cases}\n\t$$\n\tOnce again the random variables $(Y_{n,i})_{i\\in \\mathbb N_0}$ are independent and identically distributed with $E[Y_{n,i}]=\\frac{1}{p^2}$ for every $i\\in\\mathbb N_0$.\n\tThus, the set \n\t$$\n\t\\left\\{x=p^n+p^{n+1}\\sum_{i=0}^\\infty x_i p^i\\in B_n \\colon \\exists\\lim_{m\\rightarrow\\infty} \\frac{\\sum_{i=0}^{m-1} Y_{n,i}(x)}{m}=\\frac{1}{p} \\right\\}=p^n+p^{n+1}D=:D_n\n\t$$\n\tis a full set for $\\mu_n$.\n\tBy considering for each $k\\in \\mathbb N_0$ the function $\\pi_{n,k}\\colon B_n\\to \\mathbb F_p^2$ given by $\\pi_{n,k}(x)=(x_{2k},x_{2k+1})$ for every $x=p^n+p^{n+1}\\sum_{i=0}^\\infty x_i p^i\\in B_n$ and applying similar arguments as above, we have that $\\pi_{n,k}$ is continuous.\n\tHence $E_n=\\bigcap_{k=0}^\\infty \\pi_{n,k}(\\{(x,y)\\in \\mathbb F_p^2\\colon (x,y) \\neq (0,0) \\})$ is once again a Borel set.\n\tFurthermore, since $E_n\\subset B_n\\setminus D_n$ we have that $E_n$ is a null set for $\\mu_n$.\n\tThus $\\mu(E_n)=p^{-(n+1)}\\mu_n(E_n)=0$ for every $n\\in \\mathbb N$.\n\t\n\tFinally let us prove that $g$ is not differentiable at $0$.\n\tSince every neighborhood containing $0$ on $\\mathbb{Z}_p$ contains points $x$ such that $g(x)=0$, if $g$ were differentiable at $0$ then $g^\\prime (0)=0$.\n\tAssume that $g$ is differentiable at $0$.\n\tAs $p^n+p^{n+1}\\sum_{i=0}^\\infty p^i=p^n \\sum_{i=0}^\\infty p^i\\in B_n$ for every $n\\in \\mathbb N$, we have that\n\t\t$$\n\t\t\\left|\\frac{g\\left(p^n+p^{n+1}\\sum_{i=0}^\\infty p^i \\right)-g(0)}{p^n+p^{n+1}\\sum_{i=0}^\\infty p^i} \\right|_p=\\left|\\frac{p^n f\\left(\\sum_{i=0}^\\infty p^i \\right)}{p^n \\sum_{i=0}^\\infty p^i} \\right|_p=\\left|\\frac{p^n \\sum_{i=0}^\\infty p^i}{p^n \\sum_{i=0}^\\infty p^i} \\right|_p=1,\n\t\t$$\n\twhere $\\lim_{n\\to \\infty } \\left|p^n+p^{n+1}\\sum_{i=0}^\\infty p^i \\right|_p=\\lim_{n\\to \\infty} p^{-n}=0$, a contradiction.\n\t\n\n\tFor every $N\\in \\mathcal N$, let us define $f_N\\colon \\mathbb{Z}_p\\rightarrow \\mathbb{Q}_p$ by:\n\t$$\n\tf_N(x)=g(x)\\sum_{n\\in N} 1_{B_n}(x).\n\t$$\n\tSince $B_n\\cap B_m=\\emptyset$ for every distinct $n,m\\in \\mathbb N$, the function $f_N$ is well defined.\n\tFurthermore, since each $N\\in \\mathcal N$ is infinite, we can apply the above arguments to prove that $f_N$ is continuous and differentiable on a full set for $\\mu$ with bounded derivative but not differentiable on the complement having cardinality $\\mathfrak c$.\n\t\n\tIt remains to prove that the functions in $V=\\{f_N\\colon N\\in \\mathcal N \\}$ are linearly independent over $\\mathbb{Q}_p$ and any nonzero linear combination over $\\mathbb{Q}_p$ of the functions in $V$ satisfies the necessary properties.\n\tLet $F:=\\sum_{i=1}^k a_i f_{N_i}$, where $k\\in \\mathbb N$, $a_1,\\ldots,a_k\\in \\mathbb{Q}_p$, and $N_1,\\ldots,N_k\\in \\mathcal N$ are distinct.\n\tWe begin by showing the linear independence.\n\tAssume 
that $F\\equiv 0$.\n\tFix $n\\in N_1^1\\cap N_2^0\\cap \\cdots \\cap N_k^0$ and take $x=p^n+p^{n+1}\\sum_{i=0}^\\infty p^i\\in B_n$.\n\tThen, $0=F(x)=a_1 f_{N_1}(x)=a_1 \\sum_{i=0}^\\infty p^i$ if and only if $a_1=0$.\n\tBy repeating the same argument, it is easy to see that $a_i=0$ for every $i\\in \\{1,\\ldots,k \\}$.\n\tAssume now that $a_i\\neq 0$ for every $i\\in \\{1,\\ldots,k \\}$.\n\tThen $F$ is continuous but also differentiable on\n\t\t$$\n\t\t\\Delta:=\\mathbb{Z}_p\\setminus \\left(\\{0\\}\\cup \\left(\\bigcup_{n\\in \\bigcup_{i=1}^k N_i} E_n \\right) \\right)\n\t\t$$\n\t(with bounded derivative).\n\t\tApplying similar arguments as above, we have that $F$ is not differentiable at $0$.\n\tLet $x\\in E_n$ with $n\\in \\bigcup_{i=1}^k N_i$.\n\tWe will analyze the differentiability of $F$ at $x$ depending on the values that $F$ takes on $B_n$.\n\tWe have two possible cases.\n\t\n\t\\textit{Case 1:} If $F$ is identically $0$ on $B_n$, then $F$ is differentiable at $x$.\n\t\n\t\\textit{Case 2:} If $F$ is not identically $0$ on $B_n$, then there exists $a\\in \\mathbb{Q}_p\\setminus\\{0\\}$ such that $F=ag$.\n\tHence $F$ is not differentiable at $x$ since $g$ is not differentiable at $x$.\n\tNotice that Case 2 is always satisfied.\n\t\n\tTo finish the proof, it is enough to show that $\\Delta$ is a full set for $\\mu$, but this is an immediate consequence of the fact that $\\{0\\}\\cup \\left(\\bigcup_{n\\in \\bigcup_{i=1}^k N_i} E_n \\right)$ is the countable union of null sets for $\\mu$ since it implies that\n\t\t$$\n\t\t\\mu\\left(\\{0\\}\\cup \\left(\\bigcup_{n\\in \\bigcup_{i=1}^k N_i} E_n \\right) \\right)=0.\n\t\t$$\n\\end{proof}\n\n\\begin{bibdiv}\n\\begin{biblist}\n\t\n\\bib{abms}{article}{\n\tAUTHOR = {Ara\\'{u}jo, G.},\n\tAUTHOR = {Bernal-Gonz\\'{a}lez, L.},\n\tAUTHOR = {Mu\\~{n}oz-Fern\\'{a}ndez, G. A.},\n\tAUTHOR = {Seoane-Sep\\'{u}lveda, J. B.},\n\tTITLE = {Lineability in sequence and function spaces},\n\tJOURNAL = {Studia Math.},\n\tVOLUME = {237},\n\tYEAR = {2017},\n\tNUMBER = {2},\n\tPAGES = {119--136},\n}\n\t\n\t\n\n\\bib{ar}{book}{\n\tauthor={Aron, R.M.},\n\tauthor={Bernal Gonz\\'{a}lez, L.},\n\tauthor={Pellegrino, D.M.},\n\tauthor={Seoane Sep\\'{u}lveda, J.B.},\n\ttitle={Lineability: the search for linearity in mathematics},\n\tseries={Monographs and Research Notes in Mathematics},\n\tpublisher={CRC Press, Boca Raton, FL},\n\tdate={2016},\n\tpages={xix+308},\n\tisbn={978-1-4822-9909-0},\n}\n\n\n\n\\bib{AGS2005}{article}{\n\tauthor={Aron, R. M.},\n\tauthor={Gurariy, V. I.},\n\tauthor={Seoane-Sep\\'{u}lveda, J. B.},\n\ttitle={Lineability and spaceability of sets of functions on $\\mathbb R$},\n\tjournal={Proc. Amer. Math. Soc.},\n\tvolume={133},\n\tdate={2005},\n\tnumber={3},\n\tpages={795--803},\n}\n\n\\bib{ar2}{article}{\n\tauthor={Aron, R.M.},\n\tauthor={P\\'{e}rez-Garc\\'{\\i }a, D.},\n\tauthor={Seoane-Sep\\'{u}lveda, J.B.},\n\ttitle={Algebrability of the set of non-convergent Fourier series},\n\tjournal={Studia Math.},\n\tvolume={175},\n\tdate={2006},\n\tnumber={1},\n\tpages={83--90},\n}\n\n\\bib{bbf}{article}{\n\tAUTHOR = {Balcerzak, Marek},\n\t\tAUTHOR = {Bartoszewicz, Artur},\n\t\tAUTHOR = {Filipczak, Ma\\l gorzata},\n\tTITLE = {Nonseparable spaceability and strong algebrability of sets of\n\t\tcontinuous singular functions},\n\tJOURNAL = {J. Math. Anal. 
Appl.},\n\tVOLUME = {407},\n\tYEAR = {2013},\n\tNUMBER = {2},\n\tPAGES = {263--269},\n\t}\n\\bib{bbfg}{article}{\n\tAUTHOR = {Bartoszewicz, Artur},\n\t\tAUTHOR = {Bienias, Marek},\n\t\tAUTHOR = {Filipczak, Ma\\l gorzata},\n\t\tAUTHOR = {G\\l \\c{a}b, Szymon},\n\tTITLE = {Strong {$\\germ{c}$}-algebrability of strong\n\t\t{S}ierpi\\'{n}ski-{Z}ygmund, smooth nowhere analytic and other sets\n\t\tof functions},\n\tJOURNAL = {J. Math. Anal. Appl.},\n\tVOLUME = {412},\n\tYEAR = {2014},\n\tNUMBER = {2},\n\tPAGES = {620--630},\n}\n\n\n\\bib{barglab}{article}{\n\tauthor={Bartoszewicz, A.},\n\tAUTHOR = {Bienias, M.},\n\tauthor={G\\l \\c {a}b, S.},\n\tTITLE = {Independent {B}ernstein sets and algebraic constructions},\n\tJOURNAL = {J. Math. Anal. Appl.},\n\tVOLUME = {393},\n\tYEAR = {2012},\n\tNUMBER = {1},\n\tPAGES = {138--143},\n}\n\n\n\n\\bib{bar20}{article}{\n\tAUTHOR = {Bartoszewicz, Artur},\n\tAUTHOR ={ Filipczak, Ma\\l gorzata},\n\tAUTHOR = {Terepeta, Ma\\l gorzata},\n\tTITLE = {Lineability of {L}inearly {S}ensitive {F}unctions},\n\tJOURNAL = {Results Math.},\n\tVOLUME = {75},\n\tYEAR = {2020},\n\tNUMBER = {2},\n\tPAGES = {Paper No. 64},\n}\n\n\n\\bib{ba3}{article}{\n\tauthor={Bartoszewicz, A.},\n\tauthor={G\\l \\c {a}b, S.},\n\ttitle={Strong algebrability of sets of sequences and functions},\n\tjournal={Proc. Amer. Math. Soc.},\n\tvolume={141},\n\tdate={2013},\n\tnumber={3},\n\tpages={827--835},\n}\n\n\n\n\n\n\\bib{bay}{article}{\n\tAUTHOR = {Bayart, Fr\\'{e}d\\'{e}ric},\n\tTITLE = {Linearity of sets of strange functions},\n\tJOURNAL = {Michigan Math. J.},\n\tVOLUME = {53},\n\tYEAR = {2005},\n\tNUMBER = {2},\n\tPAGES = {291--303},\n\t}\n\n\n\\bib{bq}{article}{\n\tAUTHOR = {Bayart, Fr\\'{e}d\\'{e}ric},\n\tAUTHOR = {Quarta, Lucas},\n\tTITLE = {Algebras in sets of queer functions},\n\tJOURNAL = {Israel J. Math.},\n\tVOLUME = {158},\n\tYEAR = {2007},\n\tPAGES = {285--296},\n}\n\n\\bib{bbls}{article}{\n\tAUTHOR = {Bernal-Gonz\\'{a}lez, L.},\n\tAUTHOR = {Bonilla, A.},\n\tAUTHOR = {L\\'{o}pez-Salazar, J.},\n\tAUTHOR = {Seoane-Sep\\'{u}lveda, J. B.},\n\tTITLE = {Nowhere h\\\"{o}lderian functions and {P}ringsheim singular\n\t\tfunctions in the disc algebra},\n\tJOURNAL = {Monatsh. Math.},\n\tVOLUME = {188},\n\tYEAR = {2019},\n\tNUMBER = {4},\n\tPAGES = {591--609},\n}\n\n\n\\bib{TAMS2020}{article}{\n\tauthor={Bernal-Gonz\\'{a}lez, L.},\n\tauthor={Cabana-M\\'{e}ndez, H. J.},\n\tauthor={Mu\\~{n}oz-Fern\\'{a}ndez, G. A.},\n\tauthor={Seoane-Sep\\'{u}lveda, J. B.},\n\ttitle={On the dimension of subspaces of continuous functions attaining\n\t\ttheir maximum finitely many times},\n\tjournal={Trans. Amer. Math. Soc.},\n\tvolume={373},\n\tdate={2020},\n\tnumber={5},\n\tpages={3063--3083},\n}\n\n\\bib{b20}{article}{\n\tauthor={Bernal-Gonz\\'{a}lez, L.},\n\tauthor={Mu\\~{n}oz-Fern\\'{a}ndez, G. A.},\n\tauthor={Rodr\\'{\\i }guez-Vidanes, D. L.},\n\tauthor={Seoane-Sep\\'{u}lveda, J. B.},\n\ttitle={Algebraic genericity within the class of sup-measurable functions},\n\tjournal={J. Math. Anal. Appl.},\n\tvolume={483},\n\tdate={2020},\n\tnumber={1},\n\tpages={123--576},\n}\n\n\\bib{bo}{article}{\n\tAUTHOR = {Bernal-Gonz\\'{a}lez, Luis},\n\tAUTHOR = {Ord\\'{o}\\~{n}ez Cabrera, Manuel},\n\tTITLE = {Lineability criteria, with applications},\n\tJOURNAL = {J. Funct. 
Anal.},\n\tVOLUME = {266},\n\tYEAR = {2014},\n\tNUMBER = {6},\n\tPAGES = {3997--4025},\n}\n\n\\bib{survey}{article}{\n\tauthor={Bernal-Gonz\\'{a}lez, L.},\n\tauthor={Pellegrino, D.},\n\tauthor={Seoane-Sep\\'{u}lveda, J.B.},\n\ttitle={Linear subsets of nonlinear sets in topological vector spaces},\n\tjournal={Bull. Amer. Math. Soc. (N.S.)},\n\tvolume={51},\n\tdate={2014},\n\tnumber={1},\n\tpages={71--130},\n}\n\n\n\n\n\n\\bib{bns}{article}{\n\tAUTHOR = {Biehler, N.},\n\tAUTHOR = {Nestoridis, V.},\n\tAUTHOR = {Stavrianidi, A.},\n\tTITLE = {Algebraic genericity of frequently universal harmonic\n\t\tfunctions on trees},\n\tJOURNAL = {J. Math. Anal. Appl.},\n\tVOLUME = {489},\n\tYEAR = {2020},\n\tNUMBER = {1},\n\tPAGES = {124132, 11},\n}\n\n\\bib{Bi}{book}{\n\tauthor={Billingsley, Patrick},\n\ttitle={Probability and measure},\n\tseries={Wiley Series in Probability and Mathematical Statistics},\n\tedition={3},\n\tnote={A Wiley-Interscience Publication},\n\tpublisher={John Wiley \\& Sons, Inc., New York},\n\tdate={1995},\n\tpages={xiv+593},\n}\n\n\\bib{cgp}{article}{\n\tAUTHOR = {Calder\\'{o}n-Moreno, M. C.},\n\tAUTHOR = {Gerlach-Mena, P. J.},\n\tAUTHOR = {Prado-Bassas, J. A.},\n\tTITLE = {Lineability and modes of convergence},\n\tJOURNAL = {Rev. R. Acad. Cienc. Exactas F\\'{\\i}s. Nat. Ser. A Mat. RACSAM},\n\tVOLUME = {114},\n\tYEAR = {2020},\n\tNUMBER = {1},\n\tPAGES = {Paper No. 18, 12},\n}\n\n\n\n\\bib{CS2019}{article}{\n\tauthor={Ciesielski, Krzysztof C.},\n\tauthor={Seoane-Sep\\'{u}lveda, Juan B.},\n\ttitle={Differentiability versus continuity: restriction and extension theorems and monstrous examples},\n\tjournal={Bull. Amer. Math. Soc. (N.S.)},\n\tvolume={56},\n\tdate={2019},\n\tnumber={2},\n\tpages={211--260},\n}\n\n\\bib{egs}{article}{\n\tAUTHOR = {Enflo, Per H.},\n\t\tAUTHOR = {Gurariy, Vladimir I.},\n\t\tAUTHOR = {Seoane-Sep\\'{u}lveda,\tJuan B.},\n\tTITLE = {Some results and open questions on spaceability in function\n\t\tspaces},\n\tJOURNAL = {Trans. Amer. Math. Soc.},\n\tVOLUME = {366},\n\tYEAR = {2014},\n\tNUMBER = {2},\n\tPAGES = {611--625},\n}\n\n\n\n\n\\bib{fmrs}{article}{\n\tAUTHOR = {Fern\\'{a}ndez-S\\'{a}nchez, J.}, \n\tauthor={Maghsoudi, S.},\n\tauthor={Rodr\\'{\\i}guez-Vidanes, D. L.},\n\tauthor={Seoane-Sep{\\'u}lveda, J. B.},\n\tTITLE = {Classical vs. non-Archimedean analysis. An approach via algebraic genericity},\n status={Preprint (2021)},\n}\n\n\\bib{preprint-June-2019}{article}{\n\tauthor={Fern\\'andez-S\\'anchez, J.},\n\tauthor={Mart\\'inez-G\\'omez, M.E.},\n\tauthor={Seoane-Sep\\'ulveda, J.B.},\n\ttitle={Algebraic genericity and special properties within sequence spaces and series},\n\tjournal={Preprint (2019)},\n}\n\n\\bib{fich}{article}{\n\tauthor={Fichtenholz, G.},\n\tauthor={Kantorovich, L.},\n\ttitle={Sur les op\\'{e}rations dans l'espace des functions born\\'{e}es},\n\tjournal={Studia Math.},\n\tvolume={5},\n\tdate={1934},\n\tnumber={},\n\tpages={69-98},\n\t}\n\n\\bib{Fo}{book}{\n\tauthor={Folland, Gerald B.},\n\ttitle={A course in abstract harmonic analysis},\n\tseries={Textbooks in Mathematics},\n\tedition={2},\n\tpublisher={CRC Press, Boca Raton, FL},\n\tdate={2016},\n\tpages={xiii+305 pp.+loose errata},\n}\n\n\\bib{g7}{article}{\n\tAUTHOR = {Garc\\'{\\i}a-Pacheco, F. J.},\n\tAUTHOR = {Palmberg, N.},\n\tAUTHOR = {Seoane-Sep\\'{u}lveda, J. B.},\n\tTITLE = {Lineability and algebrability of pathological phenomena in analysis},\n\tJOURNAL = {J. Math. Anal. 
Appl.},\n\tVOLUME = {326},\n\tYEAR = {2007},\n\tNUMBER = {2},\n\tPAGES = {929--939},\n}\n\n\n\n\\bib{go}{book}{\n\tauthor={Gouv\\^{e}a, F.Q.},\n\ttitle={$p$-adic numbers, An introduction},\n\tseries={Universitext},\n\tedition={2},\n\tpublisher={Springer-Verlag, Berlin},\n\tdate={1997},\n\tpages={vi+298},\n}\n\n\n\n\\bib{gur1}{article}{\n\tauthor={Gurari\\u {\\i }, V.I.},\n\ttitle={Subspaces and bases in spaces of continuous functions},\n\tlanguage={Russian},\n\tjournal={Dokl. Akad. Nauk SSSR},\n\tvolume={167},\n\tdate={1966},\n\tpages={971--973},\n}\n\n\\bib{h}{article}{\n\tauthor={Hausdorff, F.},\n\ttitle={Uber zwei Satze von G. Fichtenholz und L. Kantorovich},\n\tlanguage={German},\n\tjournal={Studia Math.},\n\tvolume={6},\n\tdate={1936},\n\tpages={18--19},\n}\n\n\\bib{jms}{article}{\n\tAUTHOR = {Jim\\'{e}nez-Rodr\\'{\\i}guez, P.},\n\t\tAUTHOR = {Mu\\~{n}oz-Fern\\'{a}ndez, G. A.},\n\t\tAUTHOR = {Seoane-Sep\\'{u}lveda, J. B.},\n\t\t\tTITLE = {Non-{L}ipschitz functions with bounded gradient and related\n\t\tproblems},\n\tJOURNAL = {Linear Algebra Appl.},\n\tVOLUME = {437},\n\tYEAR = {2012},\n\tNUMBER = {4},\n\tPAGES = {1174--1181},\n\t}\n\n\n\n\\bib{kato}{book}{\n\tAUTHOR = {Katok, S.},\n\tTITLE = {{$p$}-adic analysis compared with real},\n\tSERIES = {Student Mathematical Library},\n\tVOLUME = {37},\n\tPUBLISHER = {American Mathematical Society, Providence, RI; Mathematics\n\t\tAdvanced Study Semesters, University Park, PA},\n\tYEAR = {2007},\n\tPAGES = {xiv+152},\n}\n\n\\bib{jms1}{article}{\n\tauthor={Khodabendehlou, J.},\n\tauthor={Maghsoudi, S.},\n\tauthor={Seoane-Sep{\\'u}lveda, J.B.},\n\ttitle={Algebraic genericity and summability within the non-Archimedean setting},\n\tjournal={Rev. R. Acad. Cienc. Exactas F\\'{\\i}s. Nat. Ser. A Mat. RACSAM},\n\tvolume={115},\n\tnumber={21},\n\tdate={2021},\n}\n\n\n\n\\bib{jms2}{article}{\n\tauthor={Khodabendehlou, J.},\n\tauthor={Maghsoudi, S.},\n\tauthor={Seoane-Sep{\\'u}lveda, J.B.},\n\ttitle={Lineability and algebrability within $p$-adic function spaces},\n\tjournal={Bull. Belg. Math. Soc. Simon Stevin},\n\tvolume={27},\n\tpages={711\u2013729},\n\tdate={2020},\n}\n\n\\bib{jms3}{article}{\n\tauthor={Khodabendehlou, J.},\n\tauthor={Maghsoudi, S.},\n\tauthor={Seoane-Sep{\\'u}lveda, J.B.},\n\ttitle={Lineability, continuity, and antiderivatives in the\n\t\tnon-Archimedean setting},\n\tjournal={Canad. Math. Bull.},\n\tvolume={64},\n\tdate={2021},\n\tnumber={3},\n\tpages={638--650},\n}\n\n\n\\bib{levinemilman1940}{article}{\n\tauthor={Levine, B.},\n\tauthor={Milman, D.},\n\ttitle={On linear sets in space $C$ consisting of functions of bounded variation},\n\tlanguage={Russian, with English summary},\n\tjournal={Comm. Inst. Sci. Math. M\\'{e}c. Univ. Kharkoff [Zapiski Inst. Mat. Mech.] (4)},\n\tvolume={16},\n\tdate={1940},\n\tpages={102--105},\n}\n\n\n\n\n\\bib{mahler}{book}{\n\tAUTHOR = {Mahler, K.},\n\tTITLE = {$p$-adic numbers and their functions},\n\tSERIES = {Cambridge Tracts in Mathematics},\n\tVOLUME = {76},\n\tEDITION = {Second},\n\tPUBLISHER = {Cambridge University Press, Cambridge-New York},\n\tYEAR = {1981},\n\tPAGES = {xi+320},\n\t}\n\n\\bib{Ma2}{article}{\n\tauthor={Mahler, K.},\n\ttitle={An interpolation series for continuous functions of a $p$-adic\n\t\tvariable},\n\tjournal={J. Reine Angew. Math.},\n\tvolume={199},\n\tdate={1958},\n\tpages={23--34},\n}\n\n\\bib{ms}{article}{\n\tAUTHOR = {Moothathu, T. K. 
Subrahmonian},\n\tTITLE = {Lineability in the sets of {B}aire and continuous real\n\t\tfunctions},\n\tJOURNAL = {Topology Appl.},\n\tVOLUME = {235},\n\tYEAR = {2018},\n\tPAGES = {83--91},\n}\n\n\n\n\\bib{nak}{article}{\n\tAUTHOR = {Natkaniec, Tomasz},\n\tTITLE = {On lineability of families of non-measurable functions of two\n\t\tvariable},\n\tJOURNAL = {Rev. R. Acad. Cienc. Exactas F\\'{\\i}s. Nat. Ser. A Mat. RACSAM},\n\tVOLUME = {115},\n\tYEAR = {2021},\n\tNUMBER = {1},\n\tPAGES = {Paper No. 33, 10},\n}\n\n\n\n\\bib{ro}{book}{\n\tAUTHOR = {Robert, A. M.},\n\tTITLE = {A course in $p$-adic analysis},\n\tSERIES = {Graduate Texts in Mathematics},\n\tVOLUME = {198},\n\tPUBLISHER = {Springer-Verlag, New York},\n\tYEAR = {2000},\n\tPAGES = {xvi+437},\n}\n\n\\bib{sc}{book}{\n\tauthor={Schikhof, W.H.},\n\ttitle={Ultrametric calculus, An introduction to $p$-adic analysis},\n\tseries={Cambridge Studies in Advanced Mathematics},\n\tvolume={4},\n\tpublisher={Cambridge University Press, Cambridge},\n\tdate={1984},\n\tpages={viii+306},\n}\n\n\n\n\\bib{juanksu}{book}{\n\tauthor={Seoane-Sep\\'{u}lveda, J.B.},\n\ttitle={Chaos and lineability of pathological phenomena in analysis},\n\tnote={Thesis (Ph.D.)--Kent State University},\n\tpublisher={ProQuest LLC, Ann Arbor, MI},\n\tdate={2006},\n\tpages={139},\n}\n\n\n\n\n\\bib{va}{book}{\n\tauthor={van Rooij, A.C.M.},\n\ttitle={Non-Archimedean functional analysis},\n\tseries={Monographs and Textbooks in Pure and Applied Math.},\n\tvolume={51},\n\tpublisher={Marcel Dekker, Inc., New York},\n\tdate={1978},\n\tpages={x+404},\n}\n\\end{biblist}\n\\end{bibdiv}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec1}\n\nThe microbiome, a dynamic ecosystem of microorganisms (bacteria, archaea, fungi, and viruses) that live in and on us, plays a vital role in host-immune responses resulting in significant effects on host health (see, for example, \\cite{metwally2018review}). Dysbiosis of the microbiome has been linked to diseases including asthma, obesity, diabetes, transplant rejection, and inflammatory bowel disease~\\citep{Vatanen2016VariationHumans,pflughoeft2012human,Cho2012TheDisease,rani2016diverse}. \nThese observations suggest that modulation of the microbiome could become an important therapeutic modality for some diseases~\\citep{ sehrawat2021probiotics, gupta2020therapies, Yatsunenko2012HumanGeography}. \n\nModeling microbiome data is very challenging due to the exceeding number of zeros in the data~\\citep{gloor2017microbiome, knights2011supervised}. Dealing with zeros is one of the biggest challenges in microbiome and transcriptomics studies \\citep{xia2018statistical}. It is challenging to model those features which are skewed and zero-inflated \\citep{chen2012statistical}. Table~\\ref{countTableExample} is a toy example of taxonomic profile with a dimension of $3\\times 6$, where $3$ denotes the number of microbial features and $6$ denotes the number of metagenomic samples. The table shows the sparsity of the mirobiome data. 
Therefore, zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), Poisson hurdle (PH), and negative binomial hurdle (NBH) models are commonly used to model microbiome data \\citep{metwally2018review}.\n\\begin{table}[]\n\\caption{A Toy Example of Taxonomic Profile Count Table}\n\\label{countTableExample}\n\\begin{center}\n \\begin{tabular}{||c| c c c c c c || c||} \n\t\t\t\\hline\n\t\t\tSpecies\/Sample & $S_1$ & $S_2$ &$S_3$ &$S_4$ &$S_5$ &$S_6$ &Total \\\\ [0.5ex] \n\t\t\t\\hline\\hline\n\t\t\t$\\textit{Streptococcus pneumoniae}$ & $0$ & 0& $102$ &0 & $3$ & 0 & $105$ \\\\ \n\t\t\t\\hline\n\t\t\t$\\textit{Escherichia coli}$ & $13$&0 & 0 &75 & $0$ & $0$ & $88$ \\\\\n\t\t\t\\hline\n\t\t\t$\\textit{Staphylococcus aureus}$ & 0&$14$& 0 & $0$ &138 & $0$& $152$ \\\\ \n\t\t\t\\hline\\hline\n\t\t\tTotal& $13$ & $14$& 102 & $75$ & 141& $0$ &345 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\\vspace{1em}\n\\end{center}\n\\end{table}\n\n\nThe selection of an appropriate probabilistic model is critical for microbiome studies. For example, in order to determine if there is an association between a microbiome feature (such as a bacteria), and the disease, we may need to detect the significance of the difference between two groups of records. With appropriate probabilistic models identified successfully, we can improve the power of the statistical test significantly.\nRecently, \\cite{aldirawi2019identifying} proposed a statistical procedure for identifying the most appropriate discrete probabilistic models for zero-inflated or hurdle models based on the p-value of the discrete Kolmogorov-Smirnov (KS) test. The same procedure could be used for more general zero-inflated or hurdle models, including the ones with continuous baseline distributions. More specifically, the goal is to test if the sample ${\\mathbf X} = \\{ X_{1},X_{2},..,X_{n} \\}$ comes from a discrete or mixed distribution with cumulative distribution function (CDF) $F_{\\boldsymbol\\theta}(x)$ where the parameter(s) $\\boldsymbol\\theta$ is unknown. Algorithm~1, which is regenerated from \\cite{aldirawi2019identifying}, provides our procedures in details.\n\n\\begin{algorithm}\n\\caption{Estimating p-value of KS test}\n\\begin{algorithmic}[1]\n\\State \\emph{Given ${\\mathbf X}=(X_1, X_2, \\cdots X_n$)}\n\\State \\emph{For $b = 1, \\ldots, B$,\nresample $X$ with replacement to get a bootstrapped sample ${\\mathbf X}^{(b)} = \\{X^{(b)}_{1}, \\cdots, X_{n}^{(b)}\\}$.}\n\\State \\emph{For each b, calculate the MLE $\\hat{\\boldsymbol\\theta}^{(b)}$ of $\\boldsymbol\\theta$.}\n\\State \\emph{Simulate ${\\mathbf X}^{(c)} = \\{X^{(c)}_1, \\ldots, X^{(c)}_n\\}$ iid from $F_{\\hat{\\boldsymbol\\theta}^{(b)}}$, which is the CDF $F_{\\boldsymbol\\theta}(x)$ with parameter $\\boldsymbol\\theta = \\hat{\\boldsymbol\\theta}^{(b)}$. }\n\\State \\emph{Calculate the KS statistic $D_{n}^{(b)}={\\rm sup}_{x}\\lvert \\hat F^{(c)}_{n}(x)- F_{\\hat{\\boldsymbol\\theta}^{(b)}}(x)\\rvert$, where $\\hat F^{(c)}_{n}(x)$ is the empirical distribution function of ${\\mathbf X}^{(c)}$. }\n\\State \\emph{Estimate the p-value by \n$\\frac{\\#\\{b \\mid D_{n}^{(b)} > D_{n}\\}+1}{B+1}$\nwhere $D_{n} = {\\rm sup}_{x}\\lvert \\hat F_{n}(x)- F_{\\hat{\\boldsymbol\\theta}}(x)\\rvert$ is the KS statistic based on the original data and its MLE $\\hat{\\boldsymbol\\theta}$.}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\nAlthough the following procedure and algorithm were described, their theoretical justifications were not provided in ~\\cite{aldirawi2019identifying}. 
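To make Algorithm~1 concrete, the following Python sketch (assuming, purely for illustration, a plain Poisson working model $F_{\boldsymbol\theta}$ whose MLE is the sample mean; the helper functions are ours and are not part of \cite{aldirawi2019identifying}) estimates the bootstrap p-value of the discrete KS test.
\begin{verbatim}
import numpy as np
from scipy import stats

def ks_stat_discrete(x, cdf, support):
    # D_n = sup_k | F_n(k) - F(k) |, evaluated over a grid covering the support
    ecdf = np.array([(x <= k).mean() for k in support])
    return np.max(np.abs(ecdf - cdf(support)))

def ks_pvalue(x, B=500, rng=None):
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    n = len(x)
    grid = np.arange(0, x.max() + 20)
    lam_hat = x.mean()                                 # Poisson MLE on the original data
    d_n = ks_stat_discrete(x, stats.poisson(lam_hat).cdf, grid)
    exceed = 0
    for _ in range(B):
        xb = rng.choice(x, size=n, replace=True)       # step 2: bootstrap resample
        lam_b = xb.mean()                              # step 3: MLE on the resample
        xc = rng.poisson(lam_b, size=n)                # step 4: simulate from the fitted model
        d_b = ks_stat_discrete(xc, stats.poisson(lam_b).cdf,
                               np.arange(0, xc.max() + 20))
        exceed += d_b > d_n                            # step 5: compare KS statistics
    return (exceed + 1) / (B + 1)                      # step 6: estimated p-value

x = np.random.default_rng(0).poisson(3.0, size=200)
print(ks_pvalue(x, B=200, rng=1))
\end{verbatim}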
One major step in the above algorithm is to estimate the distribution parameters (step 3) using the maximum likelihood estimate (MLE) method. In this paper, we develop a general MLE procedure for estimating the parameters for general zero-inflated and hurdle models. In addition, we discuss the asymptotic properties and Fisher information matrix for MLEs, which can be used for building up the confidence intervals of the distribution parameters. For modeling microbiome data, we recommend zero-inflated (or hurdle) beta binomial or beta negative binomial models. \n\n \n\n\n\n\n\n\\section{Zero-altered or hurdle models and their MLEs}\\label{sec:hurdle}\n\nZero-altered models, also known as {\\it hurdle models}, have been used for modeling data with an excess or deficit of zeros (see, for example, ~\\cite{metwally2018review}, for a review). A general hurdle model consists of two components, one generating the zeros and the other generating non-zeros (or positive values for many applications). Given a baseline distribution $f_{\\boldsymbol\\theta}(y)$, which could be the probability mass function (pmf) of a discrete distribution or the probability density function (pdf) of a condinuous distribution, with parameter(s) $\\boldsymbol\\theta = (\\theta_1, \\ldots, \\theta_p)^T$, the distribution function of the corresponding hurdle model can be written as follows:\n\\begin{equation}\\label{eq:hurdle}\nf_{\\rm ZA}(y\\mid \\phi, \\boldsymbol\\theta) = \\phi {\\mathbf 1}_{\\{y=0\\}} + (1-\\phi) f_{\\rm tr}(y\\mid \\boldsymbol\\theta) {\\mathbf 1}_{\\{y\\neq 0\\}}\n\\end{equation}\nwhere $\\phi \\in [0, 1]$ is the weight parameter of zeros, $f_{\\rm tr}(y\\mid \\boldsymbol\\theta) = [1-p_0(\\boldsymbol\\theta)]^{-1} f_{\\boldsymbol\\theta} (y), y\\neq 0$ is the pmf or pdf of the zero-truncated baseline distribution, and $p_0(\\boldsymbol\\theta) = f_{\\boldsymbol\\theta}(0)$ for discrete baseline distributions or simply $0$ for continuous baseline distributions.\nExamples with discrete baseline distributions include zero-altered Poisson (ZAP) or Poisson hurdle (PH), zero-altered negative binomial (ZANB) or negative binomial hurdle (NBH) models and others, where model~\\eqref{eq:hurdle} provides a new pmf. Examples with continuous basedline distributions include zero-altered Gaussian (ZAG) or Gaussian hurdle (GH), zero-altered lognormal or lognormal hurdle models and others, where model~\\eqref{eq:hurdle} is indeed a mixture distribution with a discrete part with a probability mass $\\phi$ at $[Y=0]$ and a continuous component in $[Y \\neq 0]$ with density function $(1-\\phi) f_{\\boldsymbol\\theta}(y), y\\neq 0$.\n\nThe zero-altered models can actually be defined with a fairly general baseline distribution equiped with a cumulative distribution function (cdf) $F_{\\boldsymbol\\theta}(y) = P_{\\boldsymbol\\theta} (Y\\leq y)$. Its corresponding cdf is defined as follows:\n\\[\nF_{\\rm ZA}(y\\mid \\phi, \\boldsymbol\\theta) = P_{\\rm ZA}(Y \\leq y\\mid \\phi, \\boldsymbol\\theta) = \\phi {\\mathbf 1}_{\\{y \\geq 0\\}} + (1-\\phi) F_{\\rm tr}(y\\mid \\boldsymbol\\theta) \n\\]\nwhere $F_{\\rm tr}(y\\mid \\boldsymbol\\theta) = [F_{\\boldsymbol\\theta}(y) - P_{\\boldsymbol\\theta}(Y=0) {\\mathbf 1}_{\\{y \\geq 0\\}}]\/[1-P_{\\boldsymbol\\theta}(Y=0)]$ is a zero-truncated cdf. 
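As a concrete illustration of model~\eqref{eq:hurdle}, the following Python sketch (a minimal example that takes the Poisson distribution as the baseline, so that $p_0(\boldsymbol\theta)=e^{-\lambda}$; the function names are ours) evaluates the Poisson hurdle pmf and draws samples from it by mixing a point mass at zero with a zero-truncated Poisson component.
\begin{verbatim}
import numpy as np
from scipy import stats

def ph_pmf(y, phi, lam):
    # hurdle pmf: point mass phi at zero plus (1 - phi) times the
    # zero-truncated Poisson pmf on the positive integers
    y = np.asarray(y)
    p0 = np.exp(-lam)                         # baseline probability of zero
    f_tr = stats.poisson.pmf(y, lam) / (1.0 - p0)
    return np.where(y == 0, phi, (1.0 - phi) * f_tr)

def ph_sample(size, phi, lam, rng=None):
    # zeros with probability phi; otherwise a zero-truncated Poisson draw
    rng = np.random.default_rng(rng)
    out = np.zeros(size, dtype=int)
    nonzero = rng.random(size) >= phi
    draws = rng.poisson(lam, size=nonzero.sum())
    while np.any(draws == 0):                 # simple rejection of zero draws
        redo = draws == 0
        draws[redo] = rng.poisson(lam, size=redo.sum())
    out[nonzero] = draws
    return out

y = ph_sample(10000, phi=0.4, lam=2.5, rng=1)
print((y == 0).mean())                        # close to phi = 0.4
print(ph_pmf([0, 1, 2], 0.4, 2.5).round(4))
\end{verbatim}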
In this paper, we assume that the baseline distribution is either discrete or continuous with pmf or pdf $f_{\boldsymbol\theta}(y)$.

\subsection{Maximum likelihood estimate for zero-altered or hurdle model}

The parameters of hurdle model~\eqref{eq:hurdle} include both $\phi$ and $\boldsymbol\theta$. Let $Y_1, \ldots, Y_n$ be a random sample from model~\eqref{eq:hurdle}.
The likelihood function of $(\phi, \boldsymbol\theta)$ is
\begin{equation}\label{eq:hurdelmle}
L(\phi, \boldsymbol\theta) = \phi^{n-m} (1-\phi)^m \cdot \prod_{i:Y_i\neq 0} f_{\rm tr}(Y_i\mid \boldsymbol\theta)
\end{equation}
where $m=\#\{i:Y_i\neq 0\}$ is the number of nonzero observations. Since $\phi$ and $\boldsymbol\theta$ are separable in the likelihood function, we obtain the following theorem.

\begin{theorem}\label{thm:hurdlemle}
For model~\eqref{eq:hurdle} with zero-truncated pmf or pdf $f_{\rm tr}(y\mid \boldsymbol\theta)$, the maximum likelihood estimate (MLE) maximizing \eqref{eq:hurdelmle} is
\[
\hat{\phi}=1-\frac{m}{n},\>\>\>
\hat{\boldsymbol\theta}={\rm argmax}_{\boldsymbol\theta} \prod_{i:Y_i\neq 0} f_{\rm tr}(Y_i\mid \boldsymbol\theta)
\]
\end{theorem}
Recall that $f_{\rm tr}(y\mid \boldsymbol\theta) = [1-p_0(\boldsymbol\theta)]^{-1} f_{\boldsymbol\theta}(y), y\neq 0$ for discrete baseline distributions or simply $f_{\boldsymbol\theta}(y), y\neq 0$ for continuous baseline distributions.

\begin{example}\label{ex:ZABB}{
For the zero-altered beta binomial or beta binomial hurdle (BBH) distribution, the pmf of the baseline distribution is
\[
f_{\boldsymbol\theta} (y) = \dbinom{n}{y} \frac{{\rm beta}(y + \alpha, n - y + \beta)}{{\rm beta}(\alpha,\beta)}
\]
where $\boldsymbol\theta = (n, \alpha, \beta)$, $y=0, 1, \ldots, n$, and
\begin{equation*}
p_0(\boldsymbol\theta) = \frac{\Gamma(n + \beta) \Gamma(\alpha + \beta)}{\Gamma(n + \alpha + \beta) \Gamma(\beta)}
\end{equation*}
Let $L(n,\alpha,\beta)$ be the likelihood of the zero-truncated beta binomial distribution based on the $m$ nonzero observations. Then
\begin{eqnarray*}
	L(n,\alpha,\beta)&=& \frac{\prod_{i: y_i \neq 0}f_{\boldsymbol\theta}(y_i)}{[1-p_0(\boldsymbol\theta)]^m}=\left(\frac{\Gamma(\alpha+n+\beta)\Gamma(\beta)}{\Gamma(\alpha+n+\beta)\Gamma(\beta)-\Gamma(n+\beta)\Gamma(\alpha+\beta)}\right)^{m}\cdot\\
	& &\prod_{i=1}^{m} \left(\frac{\Gamma(n+1)\Gamma(y_i+\alpha)\Gamma(n-y_i+\beta)
		\Gamma(\alpha+\beta)}{\Gamma(y_{i}+1)\Gamma(n-y_i+1)\Gamma(\alpha+n+\beta)\Gamma(\alpha) \Gamma(\beta)}\right)
\end{eqnarray*}

The loglikelihood of the zero-truncated beta binomial distribution is given by:
\begin{eqnarray*}
l(n,\alpha,\beta)&=& m\log\Gamma(n+1)+m\log\Gamma(\alpha+\beta)-m\log\Gamma(\alpha)+\sum_{i=1}^{m}\log \Gamma(y_{i}+\alpha) \\
&-& m\log\left(\Gamma(\alpha+n+\beta)\Gamma(\beta)-\Gamma(n+\beta)\Gamma(\alpha+\beta)\right)+\sum_{i=1}^{m}\log \Gamma(n-y_{i}+\beta)\\ &-& \sum_{i=1}^{m}\log \Gamma(y_{i}+1) -\sum_{i=1}^{m}\log \Gamma(n-y_{i}+1)
\end{eqnarray*}

Let $\Psi(\cdot) = \Gamma'(\cdot)/\Gamma(\cdot)$, known as the {\it digamma} function.
In order to calculate the MLE, the following formulae are needed:\n\\begin{eqnarray*}\n\t \\frac{\\partial l(n,\\alpha,\\beta)}{\\partial n}& =& m\\left(\\frac{\\exp(\\log B-\\log A)(\\psi(n+\\beta)-\\psi(n+\\alpha+\\beta))}{1-\\exp(\\log B-\\log A)}\\right)+m\\psi(n+1)\\\\\n\t &&+\n\\sum_{i=1}^{m} \\psi(n-y_{i}+\\beta) -\\sum_{i=1}^{m} \\psi(n-y_{i}+1)-m\\psi(\\alpha+n+\\beta) \\\\\n\t \\frac{\\partial l(n,\\alpha,\\beta)}{\\partial \\alpha} &=& m\\left(\\frac{\\exp(\\log B-\\log A)(\\psi(\\alpha+\\beta)-\\psi(n+\\alpha+\\beta))}{1-\\exp(\\log B-\\log A)}\\right)\\\\\n\t &&+\\sum_{i=1}^{m} \\psi(y_{i}+\\alpha) +m\\psi(\\alpha+\\beta)-m\\psi(\\alpha+n+\\beta)-m\\psi(\\alpha) \\\\\n\t \\frac{\\partial l(n,\\alpha,\\beta)}{\\partial \\beta} &=& m\\left(\\frac{\\exp(\\log B-\\log A)(\\psi(n+\\beta)+\\psi(\\alpha+\\beta)-\\psi(\\alpha+n+\\beta)-\\psi(\\beta))}{1-\\exp(\\log B-\\log A)}\\right)\\\\\n\t &&-m\\psi(\\beta)+\\sum_{i=1}^{m} \\psi(n-y_{i}+\\beta) +m\\psi(\\alpha+\\beta)-m\\psi(\\alpha+n+\\beta) \n\\end{eqnarray*}\nwhere $A=\\Gamma(n+\\alpha+\\beta) \\Gamma(\\beta)$, $B=\\Gamma(n+\\beta) \\Gamma(\\alpha+\\beta)$, and $\\psi(\\cdot)$ is the digamma function $\\Psi(\\cdot)$ defined above.\n}\\hfill{$\\Box$}\n\\end{example}\n\n\n\n\n\n\\subsection{Asymptotic properties and Fisher information matrix of hurdle MLEs}\n\nLet $Y_1, \\ldots, Y_n$ be a random sample from hurdle model~\\eqref{eq:hurdle}. In order to find the MLE numerically, we often consider the log-likelihood function\n\\begin{eqnarray*}\nl(\\phi, \\boldsymbol\\theta) = \\log L(\\phi, \\boldsymbol\\theta) &&= \\log\\phi \\cdot \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i=0\\}} + \\log(1-\\phi) \\cdot \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i\\neq 0\\}}\\\\\n&&-\\ \\log [1-p_0(\\boldsymbol\\theta)] \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i\\neq 0\\}} + \\sum_{i=1}^n \\log f_{\\boldsymbol\\theta}(Y_i) {\\mathbf 1}_{\\{Y_i\\neq 0\\}}\n\\end{eqnarray*}\nwhose first derivatives are\n\\begin{eqnarray*}\n\\frac{\\partial l}{\\partial\\phi} &=& \\frac{1}{\\phi} \\cdot \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i=0\\}} - \\frac{1}{1-\\phi} \\cdot \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i\\neq 0\\}}\\\\\n\\frac{\\partial l}{\\partial \\boldsymbol\\theta} &=& \\frac{p_0(\\boldsymbol\\theta)}{1-p_0(\\boldsymbol\\theta)}\\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i\\neq 0\\}} + \\sum_{i=1}^n \\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i)}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{Y_i\\neq 0\\}}\n\\end{eqnarray*}\n\n\\begin{lemma}\\label{lem:hurdelmlefirst}\n\\[\nE\\left(\\frac{\\partial l}{\\partial \\phi}\\right) = 0 \\>\\>\\mbox{ and }\\>\\> E\\left(\\frac{\\partial l}{\\partial \\boldsymbol\\theta}\\right) = \\frac{n(1-\\phi)}{1-p_0(\\boldsymbol\\theta)} \\cdot E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y')}{\\partial \\boldsymbol\\theta} \\right]\n\\]\nwhere $Y'$ follows the baseline distribution $f_{\\boldsymbol\\theta}(y)$.\n\\end{lemma}\n\n\n\\medskip\\noindent\n{\\bf Proof of Lemma~\\ref{lem:hurdelmlefirst}:} \nSince $P(Y_i=0) = \\phi$, then $E(\\partial l\/\\partial\\phi) = 0$. Let $Y_1', \\ldots, Y_n'$ be iid $\\sim f_{\\boldsymbol\\theta} (y)$. 
Then\n\\begin{eqnarray*}\nE\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i)}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{Y_i\\neq 0\\}}\\right] \\ &&=\\ \\frac{1-\\phi}{1-p_0(\\boldsymbol\\theta)} \\cdot E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i')}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{Y_i'\\neq 0\\}}\\right] \\\\\n&&= \\frac{1-\\phi}{1-p_0(\\boldsymbol\\theta)} \\cdot \\left\\{ E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i')}{\\partial \\boldsymbol\\theta} \\right] - p_0(\\boldsymbol\\theta)\\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta}\\right\\}\n\\end{eqnarray*}\nThen\n\\[\nE\\left(\\frac{\\partial l}{\\partial \\boldsymbol\\theta}\\right) = \\frac{1-\\phi}{1-p_0(\\boldsymbol\\theta)} \\cdot \\sum_{i=1}^n E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i')}{\\partial \\boldsymbol\\theta} \\right] = \\frac{n(1-\\phi)}{1-p_0(\\boldsymbol\\theta)} \\cdot E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y')}{\\partial \\boldsymbol\\theta} \\right]\n\\]\n\\hfill{$\\Box$}\n\nAs a direct corollary of Theorem~17 in \\cite{ferguson1996}, the MLEs of hurdle model have strong consistency under fairly general conditions.\n\n\n\\begin{theorem}\\label{thm:hurdlemleconsistency}\nLet $Y_1, \\ldots, Y_n$ be a random sample from hurdle model~\\eqref{eq:hurdle} with true parameter value $(\\phi_0, \\boldsymbol\\theta_0) \\in (0,1)\\times \\boldsymbol\\Theta$, where $\\boldsymbol\\Theta$ is compact. Let\n$\\hat{\\phi}=n^{-1} \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i =0\\}}$ and $\\hat{\\boldsymbol\\theta}={\\rm argmax}_{\\boldsymbol\\theta \\in \\boldsymbol\\Theta} \\prod_{i:Y_i\\neq 0} f_{\\rm tr}(Y_i\\mid \\boldsymbol\\theta)$ be the MLEs.\nSuppose (1) $f_{\\boldsymbol\\theta}(y)$ is continuous in $\\boldsymbol\\theta$ for all $y$; (2) $f_{\\boldsymbol\\theta}(y) = f_{\\boldsymbol\\theta_0}(y)$ for all $y$ always implies $\\boldsymbol\\theta = \\boldsymbol\\theta_0$; and (3) there exists a nonnegative function $K(y)$ such that $E[K(Y)] < \\infty$ for $Y\\sim f_{\\rm tr}(y; \\boldsymbol\\theta_0)$ and $\\log[f_{\\rm tr}(y\\mid \\boldsymbol\\theta)\/f_{\\rm tr}(y \\mid \\boldsymbol\\theta_0)] \\leq K(y)$ for all $y\\neq 0$ and $\\boldsymbol\\theta \\in \\boldsymbol\\Theta$. Then $\\hat\\phi \\stackrel{\\rm a.s.}{\\longrightarrow} \\phi_0$ and $\\hat{\\boldsymbol\\theta}_n \\stackrel{\\rm a.s.}{\\longrightarrow} \\boldsymbol\\theta_0$ as $n$ goes to infinity.\n\\end{theorem}\n\n\nUnder regularity conditions, see for example, Chapter 18 in \\cite{ferguson1996} or Section~5f in \\cite{rao1973linear}, $E[\\partial \\log f_{\\boldsymbol\\theta}(Y')\/\\partial \\boldsymbol\\theta]=0$ and thus $E\\left(\\partial l\/\\partial \\boldsymbol\\theta\\right) =0$ according to Lemma~\\ref{lem:hurdelmlefirst}. We can further calculate the Fisher information matrix of the random sample\n\\begin{equation}\\label{eq:fisher_information}\n{\\mathbf F}(\\phi, \\boldsymbol\\theta) = \n-\\left[\n\\begin{array}{cc}\nE\\left(\\frac{\\partial^2 l}{\\partial \\phi^2}\\right) & E\\left(\\frac{\\partial^2 l}{\\partial \\phi \\partial \\boldsymbol\\theta^T}\\right) \\\\\nE\\left(\\frac{\\partial^2 l}{\\partial\\boldsymbol\\theta \\partial \\phi}\\right) & E\\left(\\frac{\\partial^2 l}{\\partial \\boldsymbol\\theta \\partial\\boldsymbol\\theta^T}\\right)\n\\end{array}\\right]\n\\end{equation}\n\n\\begin{theorem}\\label{thm:hurdle_Fisher}\nLet $Y_1, \\ldots, Y_n$ be a random sample from hurdle model~\\eqref{eq:hurdle}. 
Under regularity conditions, the Fisher information matrix of the sample is\n\\[\n{\\mathbf F}_{\\rm ZA} = \nn \\left[\\begin{array}{cc}\n\\phi^{-1} (1-\\phi)^{-1} & {\\mathbf 0}^T\\\\\n{\\mathbf 0} & {\\mathbf F}_{\\rm ZA22}\n\\end{array}\\right]\n\\]\nwhere \n\\[\n{\\mathbf F}_{\\rm ZA22} = -\\frac{1-\\phi}{1-p_0(\\boldsymbol\\theta)} \\left(E\\left[\\frac{\\partial^2 \\log f_{\\boldsymbol\\theta}(Y')}{\\partial \\boldsymbol\\theta \\partial \\boldsymbol\\theta^T} \\right] + \\frac{p_0(\\boldsymbol\\theta)}{1-p_0(\\boldsymbol\\theta)} \\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta}\\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta^T}\\right)\n\\]\nand $Y'$ follows the baseline distribution $f_{\\boldsymbol\\theta}(y)$.\n\\end{theorem}\n\n\n\\medskip\\noindent\n{\\bf Proof of Theorem~\\ref{thm:hurdle_Fisher}:}\nSince \n\\begin{eqnarray*}\n\\frac{\\partial\\log f_{\\rm ZA} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\phi} &=& \\phi^{-1} {\\mathbf 1}_{\\{y=0\\}} - (1-\\phi)^{-1} {\\mathbf 1}_{\\{y\\neq 0\\}}\\\\\n\\frac{\\partial\\log f_{\\rm ZA} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} &=& \\frac{p_0(\\boldsymbol\\theta)}{1-p_0(\\boldsymbol\\theta)} \\cdot \\frac{\\partial\\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{y\\neq 0\\}} + \\frac{\\partial \\log f_{\\boldsymbol\\theta}(y)}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{y\\neq 0\\}}\n\\end{eqnarray*}\nThen \n\\begin{eqnarray*}\n\\frac{\\partial^2\\log f_{\\rm ZA} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\phi^2} &=& -\\phi^{-2} {\\mathbf 1}_{\\{y=0\\}} - (1-\\phi)^{-2} {\\mathbf 1}_{\\{y\\neq 0\\}}\\\\\n\\frac{\\partial^2\\log f_{\\rm ZA} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\phi \\partial \\boldsymbol\\theta^T} &=& {\\mathbf 0}^T\\\\\n\\frac{\\partial^2\\log f_{\\rm ZA} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta \\partial \\phi} &=& {\\mathbf 0}\\\\\n\\frac{\\partial^2\\log f_{\\rm ZA} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta \\partial \\boldsymbol\\theta^T} &=& \\frac{p_0(\\boldsymbol\\theta)}{[1-p_0(\\boldsymbol\\theta)]^2} \\cdot \\frac{\\partial\\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} \\cdot \\frac{\\partial\\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta^T}{\\mathbf 1}_{\\{y\\neq 0\\}}\\\\\n&+&\\frac{p_0(\\boldsymbol\\theta)}{1-p_0(\\boldsymbol\\theta)} \\cdot \\frac{\\partial^2\\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta \\partial \\boldsymbol\\theta^T} {\\mathbf 1}_{\\{y\\neq 0\\}} +\\frac{\\partial^2 \\log f_{\\boldsymbol\\theta}(y)}{\\partial \\boldsymbol\\theta \\partial \\boldsymbol\\theta^T} {\\mathbf 1}_{\\{y\\neq 0\\}}\n\\end{eqnarray*}\nThe rest of the conclusions can be obtained via $l(\\phi, \\boldsymbol\\theta) = \\sum_{i=1}^n \\log f_{\\rm ZA} (Y_i \\mid \\phi, \\boldsymbol\\theta)$.\n\\hfill{$\\Box$}\n\nAs direct conclusions of Theorem~18 in \\cite{ferguson1996}: (i) $\\sqrt{n}(\\hat\\phi - \\phi_0) \\stackrel{{\\cal L}}{\\rightarrow} N(0, \\phi_0 (1-\\phi_0))$; (ii) $\\sqrt{n}(\\hat{\\boldsymbol\\theta} - \\boldsymbol\\theta_0) \\stackrel{{\\cal L}}{\\rightarrow} N\\left(0, {\\mathbf F}_{\\rm ZA22}^{-1} \\right)$; and (iii) $\\hat\\phi$ and $\\hat{\\boldsymbol\\theta}$ are asymptotically independent. 
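\n\nAs a purely illustrative sketch of the two-step estimation of Theorem~\\ref{thm:hurdlemle} and of result (i) above, the MLE and a Wald-type interval for $\\phi$ can be coded as follows; the zero-truncated negative log-likelihood is supplied by the user, and all names below are ours rather than those of an existing package.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\nfrom scipy.stats import norm\n\ndef hurdle_mle(y, negloglik_tr, theta0):\n    # two-step MLE: phi from the zero proportion, theta from the\n    # user-supplied zero-truncated negative log-likelihood\n    y = np.asarray(y)\n    n, m = y.size, int(np.count_nonzero(y))\n    phi_hat = 1.0 - m \/ n\n    res = minimize(lambda th: negloglik_tr(th, y[y != 0]), theta0,\n                   method='Nelder-Mead')\n    return phi_hat, res.x\n\ndef phi_wald_ci(phi_hat, n, level=0.95):\n    # Wald interval from sqrt(n)*(phi_hat - phi) -> N(0, phi*(1 - phi))\n    z = norm.ppf(0.5 + level \/ 2.0)\n    se = np.sqrt(phi_hat * (1.0 - phi_hat) \/ n)\n    return phi_hat - z * se, phi_hat + z * se\n\\end{verbatim}\n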
More generally, these results can be used for building up confidence intervals of $\\phi$ and $\\boldsymbol\\theta$.\n\n\\begin{example}\\label{ex:ZAP}{\\rm\nFor zero-altered Poisson or Poisson hurdle distribution, the pmf of the baseline distribution is $f_\\lambda(y)= e^{-\\lambda}\\lambda^{y}\/y!$ with $p_0(\\lambda) = e^{-\\lambda}$. It can be verified that \n\\[\nE\\left(\\frac{\\partial\\log f_\\lambda(Y')}{\\partial \\lambda}\\right) = 0\n\\]\nif $Y' \\sim f_\\lambda (y)$. The truncated pmf is\n\\[\nf_{\\rm tr}(y \\mid \\lambda) = \\frac{e^{-\\lambda}}{1-e^{-\\lambda}} \\cdot \\frac{\\lambda^y}{y!}, \\>\\>\\> y=1,2,\\ldots \n\\]\nThe loglikelihood for the zero-truncated Poisson is\n\\[\nl(\\lambda) = -m\\lambda-m\\log(1-e^{-\\lambda}) + \\sum_{i:Y_i>0} Y_i \\cdot \\log\\lambda - \\log(\\prod_{i:Y_i>0} Y_i!)\n\\] \nThe MLE $\\hat\\lambda$ of $\\lambda$ solves the likelihood equation $\\lambda = \\bar{Y} (1-e^{-\\lambda})$ with $\\bar{Y} = m^{-1} \\sum_{i:Y_i>0} Y_i$; this equation can be solved numerically. If the true value $\\lambda_0 \\in [\\lambda_1, \\lambda_2]$ for some $0 < \\lambda_1 < \\lambda_2 < \\infty$, then $K(y)$ in Theorem~\\ref{thm:hurdlemleconsistency} can be chosen as\n\\[\nK(y) = \\log\\frac{\\lambda_2}{\\lambda_1} \\cdot y + \\log\\frac{1-e^{-\\lambda_2}}{1-e^{-\\lambda_1}} + \\lambda_2 - \\lambda_1\n\\]\nSince restricting $\\lambda_0$ to such an interval makes no difference in practice as long as $0 < \\lambda_1 < \\hat\\lambda < \\lambda_2 < \\infty$, we know $\\hat\\lambda \\stackrel{\\rm a.s.}{\\rightarrow} \\lambda_0$ as $n$ goes to infinity.\n\nAccording to Theorem~\\ref{thm:hurdle_Fisher}, the Fisher information matrix of the Poisson hurdle sample is\n\\[\n{\\mathbf F}_{\\rm PH} = \\left[\\begin{array}{cc}\n\\frac{1}{\\phi(1-\\phi)} & 0 \\\\\n0 & \\frac{1-\\phi}{1-e^{-\\lambda}} \\left(\\frac{1}{\\lambda} - \\frac{e^{-\\lambda}}{1-e^{-\\lambda}}\\right)\n\\end{array}\\right]\n\\]\nNote that $\\frac{1}{\\lambda} - \\frac{e^{-\\lambda}}{1-e^{-\\lambda}} > 0$ as long as $\\lambda > 0$.\n}\\hfill{$\\Box$}\n\\end{example}\n\n\n\n\\begin{example}\\label{ex:ZANB}{\\rm\nFor zero-altered negative binomial or negative binomial hurdle distribution, the pmf of the baseline distribution with parameters $\\boldsymbol\\theta = (r, p) \\in (0, \\infty)\\times [0,1]$ is given by\n$f_{\\boldsymbol\\theta}(y)= \\frac{\\Gamma(y+r)}{\\Gamma(y+1) \\Gamma(r)} p^{y}(1-p)^{r}$, $y \\in \\{0, 1, 2, \\ldots\\}$. Then $p_0(\\boldsymbol\\theta) = (1-p)^r$. In order to apply Lemma~\\ref{lem:hurdelmlefirst}, we obtain\n\\begin{eqnarray*}\n\\log f_{\\boldsymbol\\theta}(y) &=& \\log\\Gamma(y+r) - \\log\\Gamma(y+1) - \\log\\Gamma(r) + y\\log p + r\\log(1-p)\\\\\n\\frac{\\partial\\log f_{\\boldsymbol\\theta}(y)}{\\partial r} &=& \\Psi(y+r)-\\Psi(r)+\\log(1-p)\\\\\n\\frac{\\partial\\log f_{\\boldsymbol\\theta}(y)}{\\partial p} &=& \\frac{y}{p}-\\frac{r}{1-p}\n\\end{eqnarray*}\nwhere $\\Psi(\\cdot) = \\Gamma'(\\cdot)\/\\Gamma(\\cdot)$ is known as the {\\it digamma} function.\n\nIf $Y' \\sim f_{\\boldsymbol\\theta}(y)$, then $E(Y') = pr\/(1-p)$ and $E\\left(\\partial\\log f_{\\boldsymbol\\theta}(Y')\/\\partial p\\right) = 0$. 
On the other hand,\nsince $\\Gamma(y) = \\int_0^\\infty t^{y-1} e^{-t} dt$ and $\\Gamma'(y) = \\int_0^\\infty t^{y-1} e^{-t} \\log t dt$ for $y>0$, then \n\n\\begin{eqnarray*}\nE\\left(\\Psi(Y'+r)\\right) &=& \\sum_{y=0}^\\infty \\frac{\\Gamma'(y+r)}{\\Gamma(y+r)}\\cdot \\frac{\\Gamma(y+r)}{\\Gamma(y+1)\\Gamma(r)}p^y (1-p)^r\\\\\n&=& \\frac{(1-p)^r}{\\Gamma(r)} \\sum_{y=0}^\\infty \\frac{p^y}{y!} \\Gamma'(y+r)\\\\\n&=& \\frac{(1-p)^r}{\\Gamma(r)} \\sum_{y=0}^\\infty \\frac{p^y}{y!} \\int_0^\\infty t^{y+r-1} e^{-t}\\log tdt\\\\\n&=& \\frac{(1-p)^r}{\\Gamma(r)} \\int_0^\\infty \\left(\\sum_{y=0}^\\infty \\frac{(pt)^y}{y!} e^{-pt}\\right) \\cdot t^{r-1} e^{-t(1-p)} \\log t dt\\\\ \t\n&=& \\frac{(1-p)^r}{\\Gamma(r)} \\int_0^\\infty t^{r-1} e^{-t(1-p)} \\log t dt\\>\\>\\> (\\mbox{let }s=(1-p)t)\\\\ \t\n&=& \\frac{(1-p)^r}{\\Gamma(r)} \\int_0^\\infty s^{r-1} e^{-s}\\left[\\log s - \\log(1-p)\\right] ds \\cdot (1-p)^{-r}\\\\\t\n&=& \\frac{1}{\\Gamma(r)} \\left[\\int_0^\\infty s^{r-1} e^{-s}\\log s ds - \\int_0^\\infty s^{r-1} e^{-s} \\log(1-p) ds\\right]\\\\\n&=& \\frac{1}{\\Gamma(r)} \\left[\\Gamma'(r) - \\log(1-p) \\Gamma(r)\\right]\\\\\n&=& \\Psi(r) - \\log(1-p)\n\\end{eqnarray*}\nTherefore, $E\\left(\\partial\\log f_{\\boldsymbol\\theta}(Y')\/\\partial r\\right) = E\\left(\\Psi(Y'+r)\\right) - \\Psi(r)+\\log(1-p) = 0$.\n\nAccording to Theorem~\\ref{thm:hurdle_Fisher}, the Fisher information matrix of a random sample from the negative binomial hurdle distribution is\n\\[\n{\\mathbf F}_{\\rm NBH} = \\left[\\begin{array}{cc}\n\\frac{1}{\\phi(1-\\phi)} & {\\mathbf 0}^T\\\\\n{\\mathbf 0} & {\\mathbf F}_{\\rm NBH22}\n\\end{array}\\right]\n\\]\nwith\n\\[\n{\\mathbf F}_{\\rm NBH22} = \n\\frac{1-\\phi}{1-(1-p)^r} \\left\\{\n\\left[\\begin{array}{cc}\nA(y,r) & \\frac{1}{1-p}\\\\\n\\frac{1}{1-p} & \\frac{r}{p(1-p)^2}\n\\end{array}\\right] -\n\\frac{(1-p)^r}{1-(1-p)^r}\n\\left[\\begin{array}{cc}\n\\log^2(1-p) & -\\frac{r\\log (1-p)}{1-p}\\\\\n-\\frac{r\\log(1-p)}{1-p} & \\frac{r^2}{(1-p)^2}\n\\end{array}\\right]\\right\\}\n\\]\n$A(y,r)=\\Psi_1(r) - E\\Psi_1(Y' + r)$\nwhere $\\Psi_1(\\cdot) = \\Psi'(\\cdot)$ is known as the {\\it trigamma} function. Since $E\\Psi_1(Y' + r)$ does not have a simple form for computation, a numerical solution for estimating it has been proposed by \\cite{guo2020}. \n}\\hfill{$\\Box$}\n\\end{example}\n\n\n\\section{Zero-inflated models and their MLEs}\\label{sec:zimodel}\n\nUnlike zero-altered models, a zero-inflated model always assumes an excess of zeros. Besides zeros coming from the baseline distribution, such as Poisson or negative binomial, there are additional zeros modeled by a weight parameter $\\phi \\in [0,1]$. \n\nWhen the baseline distribution is discrete with a pmf $f_{\\boldsymbol\\theta}(y)$, the corresponding zero-inflated model has a pmf $f_{\\rm ZI}(y\\mid \\phi, {\\boldsymbol\\theta}) = \\phi {\\mathbf 1}_{\\{y=0\\}} + (1-\\phi) f_{{\\boldsymbol\\theta}}(y)\n$ as well. In order to cover both pmf and pdf, we would rather write the distribution function of the zero-inflated distribution as\n\\begin{equation}\\label{eq:zimodel}\nf_{\\rm ZI}(y\\mid \\phi, {\\boldsymbol\\theta}) = [\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)] {\\mathbf 1}_{\\{y=0\\}} + (1-\\phi) f_{{\\boldsymbol\\theta}}(y) {\\mathbf 1}_{\\{y\\neq 0\\}}\n\\end{equation}\nRecall that $p_0(\\boldsymbol\\theta) = f_{\\boldsymbol\\theta}(0)$ for pmf and $0$ for pdf. 
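\n\nThe structural difference from the hurdle family \\eqref{eq:hurdle} is easy to see numerically: here the extra mass $\\phi$ is added on top of the baseline zeros rather than replacing them. A small illustrative sketch with a Poisson baseline (the code and its names are ours, for illustration only):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\ndef zip_pmf(y, phi, lam):\n    # zero-inflated Poisson: phi + (1-phi)*p0 at zero, (1-phi)*f(y) elsewhere\n    y = np.asarray(y)\n    base = stats.poisson.pmf(y, lam)\n    return np.where(y == 0, phi + (1.0 - phi) * base, (1.0 - phi) * base)\n\nphi, lam = 0.3, 2.0\nprint(zip_pmf(0, phi, lam))   # phi + (1-phi)*exp(-lam), larger than phi\nprint(phi)                    # the hurdle (ZAP) model puts exactly phi at zero\n\\end{verbatim}\n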
Similar to zero-altered models, when the baseline function $f_{\\boldsymbol\\theta}(y)$ is a pdf, the zero-inflated model is a mixture distribution with a probability mass on $[Y=0]$ and a density in $[Y\\neq 0]$. It should be noted that when the baseline distribution is either continuous with a pdf $f_{\\boldsymbol\\theta}(y)$ or discrete but with $p_0(\\boldsymbol\\theta)=0$, the zero-inflated model is essentially the same as the corresponding zero-altered model.\n\nCommonly used baseline distributions include Gaussian, half-normal, Poisson, negative binomial, beta binomial, etc. The corresponding zero-inflated models are known as zero-inflated Gaussian (ZIG), zero-inflated half-normal (ZIHN), zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), zero-inflated beta binomial (ZIBB), respectively.\n\n \n\\subsection{Maximum likelihood estimate for zero-inflated models}\\label{sec:zimle}\n\nGiven a random sample $Y_1, \\ldots, Y_n$ from the zero-inflated model~\\eqref{eq:zimodel}, we adopt the maximum likelihood estimate $\\hat\\phi$ for $\\phi$ and $\\hat{\\boldsymbol\\theta}$ for $\\boldsymbol\\theta$. Similar as in Section~\\ref{sec:hurdle}, we denote $m= \\#\\{i:Y_i \\neq 0\\}$.\n\nIf the baseline distribution satisfies $P_{\\boldsymbol\\theta}(Y=0)=0$, then the likelihood function $L(\\phi, \\boldsymbol\\theta) = \\phi^{n-m} (1-\\phi)^m \\cdot \\prod_{i: Y_i \\neq 0} f_{\\boldsymbol\\theta}(Y_i)$. Then $\\hat\\phi = 1-m\/n$ and $\\hat{\\boldsymbol\\theta} = {\\rm argmax}_{\\boldsymbol\\theta} \\prod_{i: Y_i \\neq 0} f_{\\boldsymbol\\theta}(Y_i)$, which are the same as the ones for hurdle models.\n\nFor general cases, the likelihood function for model~\\eqref{eq:zimodel} is\n\\begin{equation}\\label{eq:discreteL}\nL(\\phi, \\boldsymbol{\\theta}) \n=\\left[\\phi + (1-\\phi)p_0({\\boldsymbol\\theta}) \\right]^{n-m} (1-\\phi)^{m}\n \\left(1 - p_0(\\boldsymbol\\theta)\\right)^{m} \\prod_{i: y_i \\neq 0} \\rm f_{\\rm tr}(Y_{i} \\mid \\boldsymbol\\theta)\n\\end{equation}\nwhere $f_{\\rm tr}(y \\mid \\boldsymbol\\theta) = f_{\\boldsymbol\\theta}(y)\/[1-p_0(\\boldsymbol\\theta)], y\\neq 0$ is the pmf or pdf of the zero-truncated version of the baseline distribution. \nBy reparametrization, we let\n\\begin{equation}\n\t\\psi = 1- [\\phi + (1-\\phi) p_0({\\boldsymbol\\theta}) ] = (1-\\phi) [1-p_0({\\boldsymbol\\theta})]\n\\end{equation}\nThen $\\phi = 1 - \\psi\/[1 - p_0(\\boldsymbol\\theta)]$ and the likelihood of $\\psi$ and $\\boldsymbol\\theta$ is\n\\[\nL(\\psi, \\boldsymbol\\theta)\n=(1-\\psi)^{n-m}\\psi^{m}\\cdot \\prod_{i: Y_i \\neq 0} \\rm f_{\\rm tr}(Y_i \\mid \\boldsymbol\\theta)\n\\] \nwhich is separable for $\\psi$ and $\\boldsymbol\\theta$.\n\n\\begin{theorem}\\label{thm:zimle}\nLet $\\boldsymbol\\theta_* = {\\rm argmax}_{\\boldsymbol\\theta} \\prod_{i: Y_i \\neq 0} f_{\\rm tr}(Y_i \\mid \\boldsymbol\\theta)$. 
The maximum likelihood estimate $(\\hat\\phi, \\hat{\\boldsymbol\\theta})$ maximizing \\eqref{eq:discreteL} can be obtained as follows:\n\\begin{itemize}\n \\item[(1)] If $m\/n \\leq 1-p_0({\\boldsymbol\\theta_*})$, then $\\hat{\\boldsymbol\\theta} = \\boldsymbol\\theta_*$ and $\\hat\\phi = 1 - m\/n\\cdot (1-p_0({\\boldsymbol\\theta_*}))^{-1}$.\n\\item[(2)] Otherwise, $\\hat{\\boldsymbol\\theta} = {\\rm argmax}_{\\boldsymbol\\theta} (1-\\psi(\\boldsymbol\\theta))^{n-m} \\psi(\\boldsymbol\\theta)^m \\prod_{i: Y_i \\neq 0} f_{\\rm tr}(Y_i \\mid \\boldsymbol\\theta)$ and $\\hat\\phi = 1 - \\psi(\\hat{\\boldsymbol\\theta}) \\cdot (1-p_0(\\hat{\\boldsymbol\\theta}))^{-1}$, where $\\psi(\\boldsymbol\\theta) = \\min\\{m\/n, 1-p_0({\\boldsymbol\\theta})\\}$.\n\\end{itemize}\n\\end{theorem}\n\n\\medskip\\noindent\n{\\bf Proof of Theorem~\\ref{thm:zimle}:}\nFirst of all, we denote \n$\\psi_* = {\\rm argmax}_{\\psi} (1-\\psi)^{n-m} \\psi^m$ and $\\boldsymbol\\theta_* = {\\rm argmax}_{\\boldsymbol\\theta} \\prod_{i: Y_i \\neq 0} f_{\\rm tr}(Y_i \\mid \\boldsymbol\\theta)$. It can be verified that $\\psi_* = m\/n$.\n\nOn the other hand, $\\psi = (1-\\phi) [1-p_0({\\boldsymbol\\theta})]$ with $\\phi \\in [0,1]$, which implies $\\psi \\in [0, 1-p_0({\\boldsymbol\\theta})]$. If $m\/n \\leq 1-p_0({\\boldsymbol\\theta_*})$, then $\\hat\\psi = m\/n$ and $\\hat{\\boldsymbol\\theta} = \\boldsymbol\\theta_*$ are the MLEs. In this case, the MLE of $\\phi$ is $\\hat\\phi = 1 - \\hat\\psi (1-p_0({\\boldsymbol\\theta_*}))^{-1}$.\n\nOtherwise, we have $m\/n > 1-p_0({\\boldsymbol\\theta_*})$. Then $\\hat\\psi = \\psi(\\boldsymbol\\theta) = \\min\\{m\/n, 1-p_0({\\boldsymbol\\theta})\\}$ is the MLE of $\\psi$ given $\\boldsymbol\\theta$. In order to find the MLEs of $\\phi$ and $\\boldsymbol\\theta$, we first find $\\boldsymbol\\theta^* = {\\rm argmax}_{\\boldsymbol\\theta} L(\\psi(\\boldsymbol\\theta), \\boldsymbol\\theta)$. Then $\\hat{\\boldsymbol\\theta} = {\\boldsymbol\\theta}^*$ and $\\hat\\psi = \\psi(\\boldsymbol\\theta^*)$.\n\\hfill{$\\Box$}\n\nTheorem~\\ref{thm:zimle} establishes the connection between MLEs for zero-inflated models and zero-altered models when $m\/n \\leq 1 - p_0(\\boldsymbol\\theta_*)$. It can also be used for finding MLEs numerically.\n\n\n\\begin{example} {\\rm\\quad\nLet $\\boldsymbol\\theta = (n, \\alpha, \\beta)$. The pmf of the beta binomial distribution is\n\\[\nf_{\\boldsymbol\\theta} (y) = \\dbinom{n}{y} \\frac{{\\rm beta}(y + \\alpha, n - y + \\beta)}{{\\rm beta}(\\alpha,\\beta)}\n\\]\nwith $y=0, 1, \\ldots, n$ and\n\\begin{eqnarray*}\np_0(\\boldsymbol\\theta) &=& \\frac{\\Gamma(n + \\beta) \\Gamma(\\alpha + \\beta)}{\\Gamma(n + \\alpha + \\beta) \\Gamma(\\beta)}\\\\\n\\frac{p_0(\\boldsymbol\\theta)}{1 - p_0(\\boldsymbol\\theta)} &=& \\frac{\\Gamma(n + \\beta) \\Gamma(\\alpha + \\beta)}{\\Gamma(n + \\alpha + \\beta) \\Gamma(\\beta) - \\Gamma(n + \\beta) \\Gamma(\\alpha + \\beta)}\n\\end{eqnarray*}\nLet $\\Psi(\\cdot) = \\Gamma'(\\cdot)\/\\Gamma(\\cdot)$, known as the {\\it digamma} function. 
In order to apply Theorem~\\ref{thm:zimle}, we need the following formulas:\n\\begin{eqnarray*}\n\\frac{\\partial\\log f_{\\boldsymbol\\theta}(y)}{\\partial n} &=& \\Psi(n+1) - \\Psi(n-y+1) + \\Psi(n-y+\\beta) - \\Psi(n+\\alpha+\\beta)\\\\\n\\frac{\\partial\\log f_{\\boldsymbol\\theta}(y)}{\\partial \\alpha} &=& \\Psi(y+\\alpha) - \\Psi(n+\\alpha+\\beta) + \\Psi(\\alpha+\\beta) - \\Psi(\\alpha)\\\\\n\\frac{\\partial\\log f_{\\boldsymbol\\theta}(y)}{\\partial \\beta} &=& \\Psi(n-y+\\beta) - \\Psi(n+\\alpha+\\beta) + \\Psi(\\alpha+\\beta) - \\Psi(\\beta) \\\\\n\\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial n} &=& \\Psi(n+\\beta) - \\Psi(n+\\alpha+\\beta) \\\\\n\\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\alpha} &=& \\Psi(\\alpha+\\beta) - \\Psi(n+\\alpha+\\beta) \\\\\n\\frac{\\partial\\log p_0(\\boldsymbol\\theta)}{\\partial \\beta} &=& \\Psi(n+\\beta) + \\Psi(\\alpha+\\beta) - \\Psi(n+\\alpha+\\beta) - \\Psi(\\beta)\n\\end{eqnarray*}\n}\\hfill{$\\Box$}\n\\end{example}\n\n\\subsection{Asymptotic properties and Fisher information matrix for zero-inflated MLEs}\n\nLet $Y_1, \\ldots, Y_n$ be a random sample from the zero-inflated model~\\eqref{eq:zimodel}. The log-likelihood function of $\\phi$ and $\\boldsymbol\\theta$ is\n\\begin{eqnarray*}\nl(\\phi, \\boldsymbol\\theta) = \\log L(\\phi, \\boldsymbol\\theta) &&= \\log [\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)] \\cdot \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i=0\\}}\n+ \\log(1-\\phi) \\cdot \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i\\neq 0\\}} \\\\\n&&+ \\sum_{i=1}^n \\log f_{\\boldsymbol\\theta}(Y_i) {\\mathbf 1}_{\\{Y_i\\neq 0\\}}\n\\end{eqnarray*}\nThen \n\\begin{eqnarray*}\n\\frac{\\partial l}{\\partial\\phi} &=& \\frac{1 - p_0(\\boldsymbol\\theta)}{\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)} \\cdot \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i=0\\}} - \\frac{1}{1-\\phi} \\cdot \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i\\neq 0\\}}\\\\\n\\frac{\\partial l}{\\partial \\boldsymbol\\theta} &=& \\frac{(1-\\phi)p_0(\\boldsymbol\\theta)}{\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)}\\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} \\sum_{i=1}^n {\\mathbf 1}_{\\{Y_i = 0\\}} + \\sum_{i=1}^n \\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i)}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{Y_i\\neq 0\\}}\n\\end{eqnarray*}\n\n\n\\begin{lemma}\\label{lem:zimodelmlefirst}\nSuppose $0\\leq \\phi <1$. Then \n\\[\nE\\left(\\frac{\\partial l}{\\partial \\phi}\\right) = 0 \\>\\>\\mbox{ and }\\>\\>\nE\\left(\\frac{\\partial l}{\\partial \\boldsymbol\\theta}\\right) = n(1-\\phi) E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y')}{\\partial \\boldsymbol\\theta} \\right]\n\\]\nwhich is $0$ if and only if $E[\\partial \\log f_{\\boldsymbol\\theta}(Y')\/\\partial \\boldsymbol\\theta]=0$, where $Y'$ follows the baseline distribution $f_{\\boldsymbol\\theta}(y)$.\n\\end{lemma}\n\n\\medskip\\noindent\n{\\bf Proof of Lemma~\\ref{lem:zimodelmlefirst}:}\nSince $P(Y_i=0) = \\phi + (1-\\phi) p_0(\\boldsymbol\\theta)$ and $P(Y_i \\neq 0) = (1-\\phi) [1- p_0(\\boldsymbol\\theta)]$, then $E(\\partial l\/\\partial\\phi) = 0$. Let $Y_1', \\ldots, Y_n'$ be iid $\\sim f_{\\boldsymbol\\theta} (y)$. 
Then\n\\begin{eqnarray*}\nE\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i)}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{Y_i\\neq 0\\}}\\right] &&= (1-\\phi) \\cdot E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i')}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{Y_i'\\neq 0\\}}\\right] \\\\ \n&&= (1-\\phi) \\left\\{ E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i')}{\\partial \\boldsymbol\\theta} \\right] - p_0(\\boldsymbol\\theta)\\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta}\\right\\}\n\\end{eqnarray*}\nand\n\\[\nE\\left(\\frac{\\partial l}{\\partial \\boldsymbol\\theta}\\right) = (1-\\phi) \\sum_{i=1}^n E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y_i')}{\\partial \\boldsymbol\\theta} \\right] = n(1-\\phi) E\\left[\\frac{\\partial \\log f_{\\boldsymbol\\theta}(Y')}{\\partial \\boldsymbol\\theta} \\right]\n\\]\n\\hfill{$\\Box$}\n\n\nUnder regularity conditions, $E[\\partial \\log f_{\\boldsymbol\\theta}(Y')\/\\partial \\boldsymbol\\theta]=0$ and thus $E\\left(\\partial l\/\\partial \\boldsymbol\\theta\\right) =0$ according to Lemma~\\ref{lem:zimodelmlefirst}. \n\n\\begin{theorem}\\label{thm:zi_Fisher}\nLet $Y_1, \\ldots, Y_n$ be a random sample from zero-inflated model~\\eqref{eq:zimodel}. Under regularity conditions, the Fisher information matrix of the sample is\n\\[\n{\\mathbf F}_{\\rm ZI} = \nn \\left[\\begin{array}{cc}\n\\frac{1-p_0(\\boldsymbol\\theta)}{[\\phi + (1-\\phi)p_0(\\boldsymbol\\theta)] (1-\\phi)} & \\frac{p_0(\\boldsymbol\\theta)}{\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)} \\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta^T} \\\\\n\\frac{p_0(\\boldsymbol\\theta)}{\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)} \\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} & {\\mathbf F}_{\\rm ZI22}\n\\end{array}\\right]\n\\]\nwhere \n\\[\n{\\mathbf F}_{\\rm ZI22} = -(1-\\phi) \\left(E\\left[\\frac{\\partial^2 \\log f_{\\boldsymbol\\theta}(Y')}{\\partial \\boldsymbol\\theta \\partial \\boldsymbol\\theta^T} \\right] + \\frac{\\phi p_0(\\boldsymbol\\theta)}{\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)} \\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta}\\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta^T}\\right)\n\\]\nand $Y'$ follows the baseline distribution $f_{\\boldsymbol\\theta}(y)$.\n\\end{theorem}\n\n\n\\medskip\\noindent\n{\\bf Proof of Theorem~\\ref{thm:zi_Fisher}:}\nSince \n\\begin{eqnarray*}\n\\frac{\\partial\\log f_{\\rm ZI} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\phi} &=& \\frac{1-p_0(\\boldsymbol\\theta)}{\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)} {\\mathbf 1}_{\\{y=0\\}} - \\frac{1}{1-\\phi} {\\mathbf 1}_{\\{y\\neq 0\\}}\\\\\n\\frac{\\partial\\log f_{\\rm ZI} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} &=& \\frac{(1-\\phi) p_0(\\boldsymbol\\theta)}{\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)} \\cdot \\frac{\\partial\\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{y = 0\\}} + \\frac{\\partial \\log f_{\\boldsymbol\\theta}(y)}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{y\\neq 0\\}}\n\\end{eqnarray*}\nThen \n\\begin{eqnarray*}\n\\frac{\\partial^2\\log f_{\\rm ZI} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\phi^2} &=& -\\frac{[1-p_0(\\boldsymbol\\theta)]^2}{[\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)]^2} {\\mathbf 1}_{\\{y=0\\}} - \\frac{1}{(1-\\phi)^2} {\\mathbf 1}_{\\{y\\neq 
0\\}}\\\\\n\\frac{\\partial^2\\log f_{\\rm ZI} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\phi \\partial \\boldsymbol\\theta^T} &=& -\\frac{p_0(\\boldsymbol\\theta)}{[\\phi + (1-\\phi)p_0(\\boldsymbol\\theta)]^2} \\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta^T} {\\mathbf 1}_{\\{y=0\\}}\\\\\n\\frac{\\partial^2\\log f_{\\rm ZI} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta \\partial \\phi} &=& -\\frac{p_0(\\boldsymbol\\theta)}{[\\phi + (1-\\phi)p_0(\\boldsymbol\\theta)]^2} \\cdot \\frac{\\partial \\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} {\\mathbf 1}_{\\{y=0\\}}\\\\\n\\frac{\\partial^2\\log f_{\\rm ZI} (y \\mid \\phi, \\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta \\partial \\boldsymbol\\theta^T} &=& \n\\frac{\\phi (1-\\phi) p_0(\\boldsymbol\\theta)}{[\\phi + (1-\\phi)p_0(\\boldsymbol\\theta)]^2} \\cdot \\frac{\\partial\\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta} \\cdot \\frac{\\partial\\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta^T}{\\mathbf 1}_{\\{y = 0\\}}\\\\\n&&+\\frac{(1-\\phi) p_0(\\boldsymbol\\theta)}{\\phi + (1-\\phi) p_0(\\boldsymbol\\theta)} \\cdot \\frac{\\partial^2\\log p_0(\\boldsymbol\\theta)}{\\partial \\boldsymbol\\theta \\partial \\boldsymbol\\theta^T} {\\mathbf 1}_{\\{y = 0\\}} +\\frac{\\partial^2 \\log f_{\\boldsymbol\\theta}(y)}{\\partial \\boldsymbol\\theta \\partial \\boldsymbol\\theta^T} {\\mathbf 1}_{\\{y\\neq 0\\}}\n\\end{eqnarray*}\nThe rest of the conclusions can be obtained via $l(\\phi, \\boldsymbol\\theta) = \\sum_{i=1}^n \\log f_{\\rm ZI} (Y_i \\mid \\phi, \\boldsymbol\\theta)$, $P(Y_i = 0) = \\phi + (1-\\phi)p_0(\\boldsymbol\\theta)$ and $P(Y_i \\neq 0) = (1-\\phi) [1-p_0(\\boldsymbol\\theta)]$.\n\\hfill{$\\Box$}\n\nAs direct conclusions of Theorem~18 in \\cite{ferguson1996}, under regularity conditions, we have (i) \n\\[\n\\sqrt{n}(\\hat\\phi - \\phi_0) \\stackrel{{\\cal L}}{\\longrightarrow} N\\left(0, \\frac{[\\phi + (1-\\phi)p_0(\\boldsymbol\\theta)] (1-\\phi)}{1-p_0(\\boldsymbol\\theta)}\\right)\n\\] \nand (ii) $\\sqrt{n}(\\hat{\\boldsymbol\\theta} - \\boldsymbol\\theta_0) \\stackrel{{\\cal L}}{\\rightarrow} N\\left(0, {\\mathbf F}_{\\rm ZI22}^{-1} \\right)$. However, $\\hat\\phi$ and $\\hat{\\boldsymbol\\theta}$ are usually not asymptotically independent unless $p_0(\\boldsymbol\\theta) = 0$ or does not depend on $\\theta$. These results can be used for building up confidence intervals of $\\phi$ and $\\boldsymbol\\theta$ as well.\n\n\n\\begin{example}\\label{ex:zipFisher} {\\rm\nFor zero-inflated Poisson (ZIP) model, the pmf of the baseline distribution is $f_\\lambda(y)= e^{-\\lambda}\\lambda^{y}\/y!$ with $y=0, 1, \\ldots$, where $p_0(\\lambda)= e^{-\\lambda}$. 
Then $\\partial \\log f_\\lambda(y)\/\\partial \\lambda = y\/\\lambda -1$.\n\nAccording to Theorem~\\ref{thm:zi_Fisher}, the Fisher information matrix of the ZIP sample is\n\\[\n{\\mathbf F}_{\\rm ZIP} = \nn \\left[\\begin{array}{cc}\n\\frac{1-e^{-\\lambda}}{[\\phi + (1-\\phi)e^{-\\lambda}] (1-\\phi)} & -\\frac{e^{-\\lambda}}{\\phi + (1-\\phi) e^{-\\lambda}} \\\\\n-\\frac{e^{-\\lambda}}{\\phi + (1-\\phi) e^{-\\lambda}} & (1-\\phi) \\left(\\frac{1}{\\lambda} - \\frac{\\phi e^{-\\lambda}}{\\phi + (1-\\phi) e^{-\\lambda}} \\right)\n\\end{array}\\right]\n\\]\nNote that $\\frac{1}{\\lambda} - \\frac{\\phi e^{-\\lambda}}{\\phi + (1-\\phi) e^{-\\lambda}} > 0$ as long as $\\lambda > 0$ and $0\\leq \\phi < 1$.\n\\hfill{$\\Box$}\n}\\end{example}\n\n\n\n\n\n\n\n\n\\section{Microbiome Data Application}\nAs an application, the bootstrap KS test with unknown parameters (Algorithm 1) has been applied to a list of 229 bacterial and fungal OTUs~\\citep{aldirawi2019identifying,tipton2018fungi}. We are interested in knowing how many of the 229 OTUs follow each of the following distributions, given that the distribution parameters are unknown: Poisson, negative binomial, beta binomial, beta negative binomial, and the corresponding zero-inflated and hurdle models. Table~\\ref{ks}, which was regenerated from \\cite{aldirawi2019identifying}, summarizes the number of features that do\nnot show significant divergence (p-value $>$ 0.05). \n\n\n\n\nPoisson, negative binomial, ZIP, ZINB have been used commonly for modeling microbiome data. However, as shown in the above table, Poisson, zero-inflated Poisson, and Poisson hurdle are not appropriate distributions to model sparse microbial features, as only 0.4\\%, 2\\%, and 1\\% of the 229 features, respectively, were appropriately fitted using these distributions. On the other hand, the beta binomial and beta negative binomial families can be used to approximate sparse microbial data, with BNBH being the best distribution for such a dataset (appropriately fitting 53\\% of the 229 features) under the proposed conservative method. In addition, ZIBNB fits about half of the features, followed by BBH and ZIBB, which fit about 40\\% of the features. 
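\n\nFor readers who wish to reproduce this type of screening, the overall loop can be sketched as follows; the per-model callables are assumed to wrap the MLE procedures of Sections 2 and 3 together with the bootstrap KS test of Algorithm 1, and they are placeholders rather than functions of an existing package.\n\\begin{verbatim}\nimport numpy as np\n\ndef count_adequate_fits(features, pvalue_fns, alpha=0.05):\n    # features: list of count vectors (one OTU across samples)\n    # pvalue_fns: dict mapping a model name to a callable that fits the\n    #             model by MLE and returns the bootstrap KS p-value\n    hits = {name: 0 for name in pvalue_fns}\n    for y in features:\n        for name, pval in pvalue_fns.items():\n            if pval(np.asarray(y)) > alpha:\n                hits[name] += 1\n    return hits\n\\end{verbatim}\n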
\n\n\nBased on the above table, we conclude that zero-inflated and hurdle beta binomial or beta negative binomial are more appropriate than commonly used models.\n\n\n\n\n\\begin{table}[]\n\\caption{Number and percentage of species out of 229 species that don't show significant difference (KS p-value $>0.05$)}\n\\label{ks}\n\\begin{center}\n\\resizebox{0.85\\textwidth}{!}{\n\\begin{tabular}{|| c| c |c||}\n\\hline\n \\textbf{Distribution} & \\textbf{Number} & \\textbf{Percentage} \\\\\n \\cline{1-3}\n Poisson & $1 $ & 0.4\\% \\\\\n \\hline\n negative binomial (NB) & 23 & 10\\% \\\\\n \\hline\n beta binomial (BB) & 76 & 33\\% \\\\\n \\hline\n beta negative binomial (BNB) & 60 & 26\\% \\\\\n \\hline\n zero-inflated Poisson (ZIP)& 3 & 2\\% \\\\\n \\hline\n zero-inflated negative binomial (ZINB)& 25 & 11\\% \\\\\n \\hline\n zero-inflated beta binomial (ZIBB)& 89 & 39\\% \\\\\n \\hline\n zero-inflated beta negative binomial (ZIBNB)& 110& 48\\% \\\\\n \\hline\n Poisson hurdle (PH)& 2 & 1\\% \\\\\n \\hline\n negative binomial hurdle (NBH)& 56 & 24\\% \\\\\n \\hline\n beta binomial hurdle (BBH)& 92 & 40\\% \\\\\n \\hline\n beta negative binomial hurdle (BNBH)& 121 & 53\\% \\\\\n \\hline\n \\end{tabular}\n\\label{tab1}\n}\n\\end{center}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\\label{conclusion}\n\nUnderstanding the role of the microbiome in human health and how it can be modeled is becoming increasingly relevant for preventive medicine and for the medical management of chronic diseases \\citep{calle2019statistical}. However, the microbiome data is highly sparse and skewed. It is very challenging to select an appropriate probabilistic model. \n\nIn this paper, we use the MLE approach to estimate the parameters of general zero-inflated and hurdle models. We also derive the corresponding Fisher information matrices for exploring the estimator's asymptotic properties to build up the confidence intervals of the parameters.\n\nIn the literature, Poisson and negative binomial models have been commonly used for modeling microbiome data. Based on a real dataset analysis, we show that zero-inflated (or hurdle) beta binomial or beta negative binomial models are more appropriate.\n\n\n\\section*{Acknowledgments}\n\nThis work was partly supported by NSF grant DMS-1924859.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nKaluza-Klein (KK) compactification \\cite{KAL}\\footnote{For a recent\nreview see Ref. \\cite{D}.} in its original sense is\nthe procedure by which one unifies the pure 4-dimensional (4-d) gravity\nwith gauge theories through the dimensional reduction of\nthe higher dimensional pure gravity theories. The basic idea is that\nthe isometry symmetry of the internal space manifests itself as the\ngauge symmetry in 4-d after the compactification of the extra space.\nIn addition to gauge fields, associated with the isometry group, the\nscalar fields, corresponding to the degrees of freedom of the\ninternal metric, arise upon compactification.\n\nA class of interesting solutions for such an effective 4-d KK theory\nconstitutes configurations with a non-trivial 4-d space-time\ndependence for the additional scalar fields, gauge fields,\nas well as for the 4-d space-time metric\n\\footnote{One can view \\cite{CY} this class of configurations as a subset of\nblack holes in the heterotic string theory \\cite{Sen} compactified on a six\ntorus. 
Such configurations, therefore, constitute a\nclass of non-trivial solutions in low-energy 4-d string theory.}.\nIn particular, spherically symmetric charged configurations correspond to\ncharged black hole (BH) solutions with additional scalar fields\nvarying with the spatial radial coordinate. We shall refer to such\nconfigurations as KK BH's.\n\nExamples of KK BH's in 5-d \\cite{FIV,DM,GW} KK theories\nwith Abelian isometry have been studied in the\npast.\\footnote{See also Ref. \\cite{GM}.} Dobiasch and Maison \\cite{DM}\ndeveloped a formalism by which one can generate the most general\nstationary, spherically symmetric solutions in Abelian KK theories.\nTheir formalism was applied primarily to obtain the explicit\nsolutions for the BH's in 5-d KK theory, only.\n\nAmong a class of non-trivial configurations, those which saturate\nthe Bogomol'nyi bound on their energy (ADM mass \\cite{ADM})\ncan be viewed as non-trivial vacuum configurations -- solitons --\nand are thus of special interest.\nWhen embedded in supersymmetric theories such configurations\ncorrespond to bosonic configurations which are invariant under\n(constrained) supersymmetry transformations, {\\it i.e.}, they\nsatisfy the corresponding Killing spinor equations. One refers\nto such configurations as supersymmetric ones.\n\nSupersymmetric KK BH's in 5-d KK theory were first discussed by Gibbons\nand Perry \\cite{GIBB}. Their result was recently generalized \\cite{EX}\nto supersymmetric KK BH's in $(4+n)$-d Abelian KK theory, {\\it i.e.},\nthose with the $U(1)^n$ internal isometry group. With a diagonal internal\nmetric, such solutions exist if and only if the isometry group of\nthe internal space is broken down to the $U(1)_M \\times U(1)_E$ group.\n\nThe aim of this paper is to address static 4-d charged BH solutions,\ncompatible with the corresponding Bogomol'nyi bound,\nin $(4+n)$-d KK theory with Abelian isometry group $U(1)^n$.\nExplicit solutions with diagonal internal metric\nturn out to correspond to configurations with at most one electric\nand one magnetic charges, which can either come from the same\n$U(1)$-gauge field (corresponding to dyonic BH's in effective\n5-d KK theory) or from different ones (corresponding to BH's with\nthe $U(1)_M \\times U(1)_E$ isometry in an effective 6-d KK theory).\nTheir global space-time and thermal properties are explored.\nA class of Abelian KK BH solutions with constrained charge configurations\nis further obtained by performing global $SO(n)$ transformations on\nthe above solutions.\n\nThe paper is organized in the following way. In Section II we discuss\ndimensional reduction of the $(4+n)$-d gravity. In Section III we\nderive the constraints on charges for charged KK BH's with the diagonal\ninternal metric Ansatz. In Section IV we discuss the explicit form of\nthe non-extreme $U(1)_M \\times U(1)_E$ solutions, their global\nspace-time structure and thermal properties. 
In Section V we discuss\nthe electric-magnetic duality of a more general class of the solutions\ngenerated by global $SO(n)$ rotations on the internal metric.\nConclusions are given in Section VI.\n\\section{Dimensional Reduction of $(4+n)$-Dimensional Gravity}\nThe starting point is the pure Einstein-Poincar\\' e gravity in\n$(4+n)$-d:\n\\begin{equation}\n{\\cal L} = -{1 \\over {2\\kappa^2}} \\sqrt{-g^{(4+n)}}{\\cal R}^{(4+n)}\\ \\ ,\n\\label{highlag}\n\\end{equation}\nwhere the Ricci scalar ${\\cal R}^{(4+n)}$ and the determinant\n$g^{(4+n)}$ are defined in terms of a $(4+n)$-d metric\n$g^{(4+n)}_{\\Lambda\\Pi}$, and $\\kappa$ is the $(4+n)$-d\ngravitational constant. For the notation of space-time indices, we\nshall follow the convention of Ref. \\cite{EX}, {\\it i.e.}, the upper-\n(or lower-) case letters are for those running over $(4+n)$-d (or 4-d)\nspace-time. Those with tilde are reserved for the $n$ extra spatial\ndimensions. Latin (or greek) letters denote flat (or curved) indices.\nWe shall use the mostly positive signature convention\n$(-++\\cdot\\cdot\\cdot +)$ for the $(4+n)$-d metric.\n\nThe dimensional reduction of the higher dimensional gravity is\nachieved by splitting the $(4+n)$-d metric $g^{(4+n)}_{\\Lambda\\Pi}$\ninto the following form:\n\\begin{equation}\ng^{(4+n)}_{\\Lambda \\Pi} =\n\\left [ \\matrix{{\\rm e}^{-{1 \\over \\alpha}\\varphi}g_{\\lambda \\pi} +\n{\\rm e}^{{2\\varphi} \\over {n\\alpha}}\\rho_{\\tilde{\\lambda}\n\\tilde{\\pi}}A^{\\tilde{\\lambda}}_{\\lambda} A^{\\tilde{\\pi}}_{\\pi} &\n{\\rm e}^{{2\\varphi} \\over {n\\alpha}}\\rho_{\\tilde{\\lambda}\n\\tilde{\\pi}}A^{\\tilde{\\lambda}}_{\\lambda}\n\\cr {\\rm e}^{{2 \\varphi} \\over {n\\alpha}}\\rho_{\\tilde{\\lambda}\n\\tilde{\\pi}}A^{\\tilde{\\pi}}_{\\pi} & {\\rm e}^{{2\\varphi} \\over\n{n\\alpha}}\\rho_{\\tilde{\\lambda} \\tilde{\\pi}}} \\right ]\\ \\ ,\n\\label{ansatz}\n\\end{equation}\nwhere $\\rho_{\\tilde{\\lambda} \\tilde{\\pi}}$ is the unimodular part,\n{\\it i.e.}, ${\\hbox{det}}\\rho_{\\tilde{\\lambda} \\tilde{\\pi}}=1$,\nof the internal metric, $\\varphi$ is the determinant of the\ninternal metric, which we will refer to as dilaton, and\n$\\alpha = \\sqrt{{n+2}\\over n}$.\n\nThe effective theory in 4-d is then obtained by imposing ``the right\ninvariance'' \\cite{CHB} of the $(4+n)$-d metric $g_{\\Lambda\\Pi}$\nunder the action of an isometry of the internal space. This requirement\nfixes the dependence of the metric components on the internal coordinates.\nIt turns out that the internal coordinate dependence of the transformation\nlaws (under the general coordinate transformations) of the fields in\n(\\ref{ansatz}) factors out, and $(4+n)$-d Einstein Lagrangian density\n(\\ref{highlag}) becomes independent of the internal coordinates. Then,\n4-d effective action after a trivial integration over the internal space\nassumes the form (see for example Eq.(8) of Ref. 
\\cite{CHB}):\n\\begin{eqnarray}\n{\\cal L}= -{1 \\over 2}\\sqrt{-g}[{\\cal R} +\n{\\rm e}^{-\\alpha\\varphi}{\\cal R}_K + {1 \\over 4}{\\rm e}^{\\alpha\\varphi}\n\\rho_{ij}F^i_{\\mu \\nu}F^{j \\mu \\nu}\n+ {1 \\over 2}\\partial_{\\mu}\\varphi \\partial^{\\mu}\\varphi \\nonumber \\\\\n+ {1 \\over 4}\\rho^{ij}\\rho^{k\\ell}\n(D_{\\mu}\\rho_{ik})(D^{\\mu}\\rho_{j\\ell})\n+ \\chi (\\det\\rho_{ij} - 1)]\\ \\ ,\n\\label{efflag}\n\\end{eqnarray}\nwhere the Ricci scalar ${\\cal R}_K$ is defined in terms of the unimodular\npart $\\rho_{ij}$ $(i,j=1,2,...,n)$\n\\footnote{From now on, for the simplicity of\nnotation, we shall denote the internal space\nindices as $i\\equiv \\tilde{\\alpha} - 3, etc.$ }\nof the internal metric, $F^i_{\\mu \\nu} \\equiv \\partial_{\\mu}\nA^i_{\\nu} - \\partial_{\\nu}A^i_{\\mu} - gf^i_{jk} A^j_{\\mu} A^k_{\\nu}$,\nwhere $f^i_{jk}$ is the structure constant for the internal isometry\ngroup and $g$ is the gauge coupling constant of the isometry group,\nis the field strength of the gauge field $A^i_{\\mu}$,\n$D_{\\mu} \\rho_{ij} = \\partial_{\\mu}\\rho_{ij} - f^\\ell_{kj}A^k_{\\mu}\n\\rho_{i \\ell}$ is the corresponding gauge covariant derivative\nand $\\chi$ is the Lagrangian multiplier.\nThe 4-d gravitational constant $\\kappa_4$ has been set equal to 1.\nWhen the isometry group is Abelian, {\\it i.e.}, one compactifies on\n$n$-torus $T^n$, the structure constant $f^i_{jk}$ vanishes. In this\ncase, the gauge covariant derivatives in (\\ref{efflag}) become ordinary\nones and the Ricci scalar ${\\cal R}_K$, which in general describes\nthe self-interactions among scalar fields, vanishes.\n\\section{Diagonal Internal Metric Ansatz and Constraints on Solutions}\nWe shall first address the spherically symmetric configurations\nwith the diagonal internal metric Ansatz. Then, the unimodular\npart of the internal metric is of the form:\n\\begin{equation}\n\\rho_{ij} = {\\rm diag}(\\rho_1 ,..., \\rho_{n-1},\n\\prod^{n-1}_{k=1} \\rho^{-1}_k )\\ \\ ,\n\\label{diag}\n\\end{equation}\nthe spherically symmetric {\\it Ansatz} for the 4-d space-time metric\nis of the form:\n\\begin{equation}\n{\\rm d} s^2 = g_{\\mu\\nu} {\\rm d}x^{\\mu} {\\rm d} x^{\\nu} =\n-\\lambda (r) {\\rm d} t^2 + \\lambda^{-1} (r) {\\rm d} r^2\n+ R(r) ( {\\rm d} \\theta^2 + \\sin^2 \\theta {\\rm d} \\phi^2 )\n\\label{sph}\n\\end{equation}\nand the scalar fields $\\varphi$ and $\\rho_i$ depend on the radial\ncoordinate $r$, only. The electromagnetic vector potentials take the form:\n\\begin{equation}\nA^i_{\\phi} = P_i \\cos \\theta \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\nA^i_t = \\psi_i (r) \\ \\ ,\n\\label{vector}\n\\end{equation}\nwhere $E_i (r) = -\\partial_r \\psi_i (r) = {{\\tilde{Q}_i} \\over\n{R {\\rm e}^{\\alpha \\varphi} \\rho_i}}$\n\\footnote{The physical Electric charge $Q_i$ defined in terms of the\nasymptotic behavior $E_i \\sim {Q_i \\over r^2}$ ($r \\rightarrow \\infty$)\nof the electric field is related to $\\tilde{Q}_i$ through\n$\\tilde{Q}_i = {\\rm e}^{\\alpha\\varphi_{\\infty}}\\rho_{i \\infty} Q_i$. Here,\n$\\varphi_\\infty$ and $\\rho_\\infty$ are the constant values of the\ncorresponding scalar fields at $r\\rightarrow \\infty$.}.\n\nThe {\\it Ansatz} with all the off-diagonal components of the internal\nmetric turned off has to be consistent with the equations of motion.\nIn fact, such a consistency restricts the allowed charge\nconfigurations. 
Namely, the Euler-Lagrange equations for the components\n$\\rho_{ij}$ of the unimodular part of the metric is given by:\n\\begin{equation}\n{1 \\over 2}{\\rm e}^{\\alpha\\varphi(r)}[R(r)E_i(r) E_j(r) -\nR^{-1}(r)P_i P_j ] + \\chi R(r)\\rho^{ij}(r) =\n{1 \\over 2}{{\\rm d} \\over {{\\rm d}r}}\n[\\lambda(r)R(r){{{\\rm d}\\rho^{ij}(r)}\\over {{\\rm d}r}}]\n\\ \\ ,\n\\label{rhoij}\n\\end{equation}\nwhich implies that for the diagonal metric {\\it Ansatz} (\\ref{diag}) the\nfollowing constraints have to be satisfied:\n\\begin{equation}\nQ_iQ_j-e^{2\\alpha\\varphi}\\rho_i\\rho_j\\, P_iP_j=0\\ \\ \\\ni\\neq j\\ \\ .\n\\label{ij}\n\\end{equation}\nEqs. (\\ref{ij}) can be satisfied if and only if\n\\footnote{A more general constraint $Q_iQ_j-c_ic_jP_iP_j=0$ with\n$c_i\\equiv\\rho_ie^{\\alpha\\varphi}$ being constant would imply the equation\nof motion for $\\rho_ie^{\\alpha\\varphi}$ with $Q_i=P_i=0$. Thus, the\nconstraint $Q_iQ_j-c_ic_jP_iP_j=0$ in turn reduces to the subset of\nconstraints (\\ref{chcon}).}\n\\begin{equation}\nQ_i Q_j = 0\\ \\ \\ \\ {\\rm and}\\ \\ \\ \\ P_i P_j = 0 ,\\ \\ \\ \\ i \\neq j \\ \\ .\n\\label{chcon}\n\\end{equation}\nConstraints (\\ref{chcon}) imply that the same type of charge,\n{\\it i.e.}, either electric or magnetic one, can appear in at most\none gauge field. Consequently, the internal isometry group $U(1)^n$\nis broken down to at most $U(1)\\times U(1)$, with one electric and\none magnetic charges, only.\n\nWhen, say, the first ($n-2$) gauge fields are turned off the first\n($n-2$) components of the diagonal internal metric become constant\n(${\\rm e}^{{2\\varphi}\\over {n\\alpha}}\\rho_i = const.$\n($i=1,...n-2$)). Thus, the KK BH's are those of effective 6-d KK:\n\\begin{eqnarray}\n{\\tilde {\\cal L}} = -{1\\over 2}\\sqrt{-g}\n[{\\cal R} + {1\\over 2} \\partial_{\\mu}\\Phi \\partial^{\\mu}\\Phi +\n{1\\over 2} \\partial_{\\mu}\\chi_{n-1}\\partial^{\\mu}\\chi_{n-1} +\n{1\\over 2} \\partial_{\\mu}\\chi_n \\partial^{\\mu}\\chi_n \\nonumber \\\\\n+ {1\\over 4}{\\rm e}^{\\sqrt{2}(\\Phi + \\chi_{n-1})}F^{n-1}_{\\mu\\nu}\nF^{n-1 \\ \\mu\\nu} + {1\\over 4}{\\rm e}^{\\sqrt{2}(\\Phi +\\chi_n)}\nF^n_{\\mu\\nu} F^{n \\ \\mu\\nu}]\\ \\ ,\n\\label{effec}\n\\end{eqnarray}\nwhere $\\Phi \\equiv {\\sqrt{2} \\over \\alpha}\\varphi$ and\n$\\chi_i \\equiv {1 \\over \\sqrt{2}}[\\ln\\rho_i + {{2-n}\\over {n\\alpha}}\n\\varphi ]$ $(i = n-1, n)$ with the constraint $\\chi_{n-1} + \\chi_n =\nconst$.\n\nIn general, one can, therefore, have the following qualitatively\ndifferent classes of configurations:\n\\begin{itemize}\n\\item $Q_{n-1}=P_{n-1}=Q_n=P_n=0$, which corresponds to the ordinary 4-d\nSchwarzschild BH's.\n\\item Say, $Q_n = P_n =0$, which corresponds to KK BH's in effective\n5-d KK theory.\n\\item Say, $Q_{n-1}=P_n=0$, which corresponds to a class of\nKK BH's in effective 6-d KK theory, where electric and magnetic\ncharges arise from different $U(1)$ groups.\n\\end{itemize}\nSchwarzschild BH's are well understood, and BH's in 5-d KK theory have\nbeen extensively studied in Refs. \\cite{FIV,DM,GW}.\nWe, therefore, concentrate on the study of the last class of solutions,\nwhich correspond to the non-extremal generalization of $U(1)_M \\times\nU(1)_E$ supersymmetric solution, studied in Ref. 
\\cite{EX}.\n\\section{Non-extreme $U(1)_M \\times U(1)_E$ Solutions}\nIf only a pair of gauge fields are non-zero, the Lagrangian density\n(\\ref{efflag}) becomes that of effective 6-d KK theory \\cite{EX}.\nWithout loss of generality, one assumes that the first\n[second] gauge field to be magnetic [electric]. The Einstein's and\nEuler-Lagrange equations can be cast in following form:\n\\begin{equation}\n(\\lambda R)^{\\prime \\prime} = 2\n\\label{beqn}\n\\end{equation}\n\\begin{equation}\n2(\\lambda^{\\prime}R)^{\\prime} = R^{-1}{\\rm e}^{-\\sqrt{2}\\Phi}\n\\rho \\tilde{Q}^2 +R^{-1}{\\rm e}^{\\sqrt{2}\\Phi}\\rho P^2\n\\label{aeqn}\n\\end{equation}\n\\begin{equation}\n{1 \\over \\sqrt{2}}\n(\\lambda R \\Phi^{\\prime})^{\\prime} + (\\lambda R \\rho^{-1}\n\\rho^{\\prime})^{\\prime}=R^{-1}{\\rm e}^{\\sqrt{2}\\Phi}\\rho P^2\n\\label{ceqn}\n\\end{equation}\n\\begin{equation}\n-{1 \\over \\sqrt{2}}(\\lambda R \\Phi^{\\prime})^{\\prime} +\n(\\lambda R \\rho^{-1} \\rho^{\\prime})^{\\prime}=R^{-1}\n{\\rm e}^{-\\sqrt{2} \\Phi}\\rho \\tilde{Q}^2 \\ \\ ,\n\\label{deqn}\n\\end{equation}\nwhere $\\rho \\equiv{\\rm e}^{\\sqrt{2} \\chi_{n-1}}$ and the prime denotes\ndifferentiation with respect to $r$. Recall, ${\\tilde Q} =\n{\\rm e}^{\\sqrt{2}\\Phi_{\\infty}} \\rho_{\\infty}Q$.\nEqs.(\\ref{aeqn})-(\\ref{deqn}) exhibit manifest electric-magnetic\nduality symmetry:\n$P \\leftrightarrow \\tilde{Q}$ and $\\Phi \\rightarrow -\\Phi$.\n\nEquation (\\ref{beqn}) implies\n$\\lambda R = (r - r_+ )(r - r_+ + 2\\beta )$, while\nthe resultant equation obtained by subtracting Eqs. (\\ref{ceqn}) and\n(\\ref{deqn}) from Eq. (\\ref{aeqn}) yields\n$\\lambda = {\\rho \\over \\rho_{\\infty}}\\left({{r-r_+ }\n\\over {r-r_+ +2\\beta}} \\right )$.\nHere $r_+$ is defined to be the outermost horizon and $\\beta > 0$\nis the non-extremality parameter\n\\footnote{Note, the role of the non-extremality parameter $\\beta$\nis very similar to the one used in describing non-extreme\nsupergravity walls \\cite{CS,NON}.}.\nSubstitution of these relations into Eqs. (\\ref{ceqn}) and (\\ref{deqn}),\nyields the following ordinary differential equations\n\\footnote{These equations are reminiscent of equations for Toda molecule.}:\n\\begin{eqnarray}\n2{\\bf P}^2{{{\\rm e}^X} \\over {(r-r_{+})^2}} =\n[(r-r_{+})(r-r_{+}+2\\beta)X^{\\prime}]^{\\prime}\\ \\ \\ \\nonumber \\\\\n2{\\bf Q}^2{{{\\rm e}^Y} \\over {(r-r_{+})^2}} =\n[(r-r_{+})(r-r_{+}+2\\beta)Y^{\\prime}]^{\\prime} \\ \\ ,\n\\label{toda}\n\\end{eqnarray}\nwhere $X \\equiv \\sqrt{2}\\Phi + 2\\ln \\lambda - \\sqrt{2}\\Phi_{\\infty}$\nand $Y \\equiv -\\sqrt{2}\\Phi + 2\\ln \\lambda + \\sqrt{2} \\Phi_{\\infty}$.\nHere, ${\\bf P}$ and ${\\bf Q}$ are the ``screened'' \\cite{EX} magnetic\nand electric monopole charges, {\\it i.e.}, ${\\bf P} \\equiv\n{\\rm e}^{{1 \\over \\sqrt{2}}\\Phi_{\\infty}} \\rho^{1\\over 2}_{\\infty}P$ and\n${\\bf Q} \\equiv {\\rm e}^{{1 \\over \\sqrt{2}}\\Phi_{\\infty}}\n\\rho^{-{1\\over 2}}_{\\infty}Q$. 
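\n\nIn fact, each equation in (\\ref{toda}) admits a simple closed-form solution. For the first one, taking\n\\begin{equation}\n{\\rm e}^{X}={{(r-r_+)^2} \\over {(r-r_+ +\\hat{P})^2}}\n\\end{equation}\nwith a constant $\\hat{P}$ satisfying $\\hat{P}(\\hat{P}-2\\beta)={\\bf P}^2$, one finds $(r-r_{+})(r-r_{+}+2\\beta)X^{\\prime}=2(r-r_{+}+2\\beta)-2(r-r_{+})(r-r_{+}+2\\beta)\/(r-r_{+}+\\hat{P})$, and one more differentiation yields $2{\\bf P}^2\/(r-r_{+}+\\hat{P})^2=2{\\bf P}^2{\\rm e}^{X}\/(r-r_{+})^2$, which is precisely the left-hand side of the first equation. The second equation is solved analogously with a constant $\\hat{Q}$ satisfying $\\hat{Q}(\\hat{Q}-2\\beta)={\\bf Q}^2$; these combinations are precisely the ones entering the explicit solutions below.\n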
The two equations in (\\ref{toda})\ncan thus be solved explicitly, yielding the following form of\nthe solutions with regular horizons:\n\\begin{equation}\n\\lambda ={{r-r_+} \\over {(r-r_+ +\\hat{P})^{1\/2}(r-r_+ + \\hat{Q})^{1\/2}}}\n\\label{asol}\n\\end{equation}\n\\begin{equation}\nR = r^2 (1-{{r_+ - 2\\beta}\\over r})(1-{{r_+ - \\hat{P}}\\over r})^{1 \\over 2}\n(1-{{r_+ - \\hat{Q}} \\over r})^{1 \\over 2}\n\\label{bsol}\n\\end{equation}\n\\begin{equation}\n{\\rm e}^{\\sqrt{2}(\\Phi - \\Phi_{\\infty})} =\n{{r - r_+ + \\hat{Q}} \\over {r - r_+ + \\hat{P}}}\n\\label{csol}\n\\end{equation}\n\\begin{equation}\n\\rho = \\rho_\\infty {{r -r_+ +2\\beta} \\over {(r - r_+ + \\hat{P})^{1\/2}\n(r - r_+ + \\hat{Q})^{1\/2}}} \\ \\ ,\n\\label{dsol}\n\\end{equation}\nwhere $r_+ = \\beta + {{|{\\bf P}| \\sqrt{{\\bf P}^2 + \\beta^2} -\n|{\\bf Q}| \\sqrt{{\\bf Q}^2 + \\beta^2}} \\over {|{\\bf P}| - |{\\bf Q}|}}$,\n$\\hat{P} = \\beta + \\sqrt{{\\bf P}^2 + \\beta^2}$ and $\\hat{Q} = \\beta +\n\\sqrt{{\\bf Q}^2 + \\beta^2}$, while the ADM mass of the configurations\nis of the form:\n\\begin{equation}\nM = 2\\beta + \\sqrt{{\\bf P}^2 + \\beta^2} + \\sqrt{{\\bf Q}^2 + \\beta^2}\n\\ \\ .\\label{mass}\n\\end{equation}\nIn the limit $\\beta \\rightarrow 0$, the above expressions\nreduce to those with $r_+=r_H=|{\\bf P}|+|{\\bf Q}|$, $\\hat{P}=|{\\bf P}|$,\n$\\hat{Q}=|{\\bf Q}|$ and the ADM mass saturates the Bogomol'nyi bound\n$M_{ext}=|{\\bf P}|+|{\\bf Q}|$. These are the supersymmetric\nsolutions of Ref. \\cite{EX}.\n\nNow, we shall discuss the global space-time structure and\nthermal properties of the solution. We first describe the\nsingularity structure of 4-d space-time defined by the metric\ncoefficients (\\ref{asol}) and (\\ref{bsol}). For the non-extreme\nsolutions, {\\it i.e.}, $\\beta > 0$, there is a space-like singularity\nat $r = r_+ - 2\\beta$ which is hidden behind a horizon at\n$r =r_+ $. The global space-time of the non-extreme solutions\nis that of the Schwarzschild BH (see Fig. 1).\nIn the supersymmetric (extreme) limit ($\\beta\\rightarrow 0$)\nthe singularity becomes null, {\\it i.e.}, it coincides with the horizon\n(see Fig. 2a). In the extreme limit with either $Q$ or $P$ zero,\n{\\it i.e.}, the supersymmetric 5-d KK BH \\cite{GIBB},\nthe singularity becomes naked (see Fig. 2b).\n\\begin{figure}[p]\n\\vfill\n\\iffiginc\n\\hfill\\psfig{figure=warsaw1.eps}\\hfill\n\\fi\n\\caption{The Penrose diagram (in the ($r,t$) plane) for non-extreme\n$U(1)_M \\times U(1)_E$ 4-d Kaluza-Klein black holes.\nThe space-like singularity (jagged line) at $r=r_+ - 2\\beta$\n($\\beta$ is the non-extremality parameter) is hidden behind the\nhorizon (dashed line) at $r=r_+$.}\n\\label{Fig. 1}\n\\end{figure}\n\\begin{figure}\n\\vfill\n\\iffiginc\n\\psfig{figure=warsaw2.eps}\\hfill\n\\fi\n\\caption{The Penrose diagram (in the ($r,t$) plane) for the\nsupersymmetric (extreme) $U(1)_M \\times U(1)_E$ 4-d Kaluza-Klein black\nholes, {\\it i.e.}, those with {\\it both} $Q$ and $P$ charges\nnon-zero, is given in Fig. 2a. The Penrose diagram for the\nsupersymmetric $U(1)_E$ (or $U(1)_M$) 4-d Kaluza-Klein black\nholes, {\\it i.e.}, those with $P$ or $Q$ charge nonzero,\nis given in Fig. 2b. 
Note a null singularity (jagged line) in the\nformer case, and a naked singularity in the latter one.}\n\\label{Figure 2}\n\\end{figure}\n\nThe Hawking temperature \\cite{HAW}, which can be calculated by\nidentifying the inverse of the imaginary time period \\cite{GH} of a\nfunctional path integral, turns out to be\n$T_H = {{|\\lambda^{\\prime}(r_+ )|} \\over {4\\pi}}$.\nWith the explicit solution (\\ref{asol}) for $\\lambda$ one obtains:\n\\begin{equation}\nT_H ={1 \\over {4\\pi [\\beta + ({\\bf Q}^2 + \\beta^2 )^{1 \\over 2}]^{1\\over 2}\n[\\beta +({\\bf P}^2 +\\beta^2 )^{1\\over 2}]^{1\\over 2}}} \\ \\ .\n\\label{temp}\n\\end{equation}\nAs $\\beta$ decreases $T_H$ increases and reaches the upper bound\n$T_{H\\, ext}=1\/(4\\pi\\sqrt{|{\\bf P}{\\bf Q}|})$ in the extreme limit\n$\\beta = 0$. In the extreme limit, when either\n${\\bf Q}$ or ${\\bf P}$ becomes zero, $T_H$ is infinite.\n\nThe entropy $S$ of the system, determined as $S = {1 \\over 4}\\times$\n(the surface area of the event horizon) \\cite{BEK}, is of the\nfollowing form:\n\\begin{equation}\nS = 2\\pi \\beta [\\beta +({\\bf Q}^2 +\\beta^2)^{1\\over 2} ]^{1\\over 2}\n[\\beta +({\\bf P}^2 +\\beta^2 )^{1\\over 2}]^{1\\over 2}\\ \\ ,\n\\label{ent}\n\\end{equation}\nwhere we have used the explicit solution (\\ref{bsol}) for $R(r)$.\nIn the extreme limit the entropy becomes zero.\n\\section{Electric-Magnetic Duality Transformations}\nIn addition to the 4-d general coordinate invariance and gauge symmetry,\nwhich arise from the $(4+n)$-d general coordinate invariance of the\n$(4+n)$-d Einstein action (\\ref{highlag}), the Lagrangian density\n(\\ref{efflag}) has a global $SO(n)$ invariance:\n\\begin{equation}\n\\rho_{ij} \\rightarrow U_{ik} \\rho_{k\\ell} (U^{T})_{\\ell j} \\ \\ \\ \\ \\ \\ \\\nA^i_{\\mu} \\rightarrow U_{ij} A^j_{\\mu}\\ \\ , \\label{son}\n\\end{equation}\nwhere $U$ is an $SO(n)$ rotation matrix.\nOne convenient parameterization of $U$ is in terms of\nsuccessive rotations in all the possible planes in $\\Re^n$:\n\\begin{equation}\nU = \\hat{U}_{12} \\cdot \\cdot \\cdot \\hat{U}_{1n}\\hat{U}_{23}\n\\cdot \\cdot \\cdot \\hat{U}_{2n} \\cdot \\cdot \\cdot \\hat{U}_{i, i+1}\n\\cdot \\cdot \\cdot \\hat{U}_{in} \\cdot \\cdot \\cdot \\hat{U}_{n-1, n} \\ \\ ,\n\\label{repre}\n\\end{equation}\nwhere $\\hat{U}_{k\\ell}$ is the rotation matrix in the $(k,\\ell)$-plane\nwith a rotational angle $\\theta_{k\\ell}$.\n\nTwo new classes of solutions are obtained by performing\n$SO(n)$ transformations on the BH solutions of the effective 5-d and\n6-d KK theories, respectively. Since the 4-d space-time metric and\nthe dilaton field are not affected by the $SO(n)$ transformations,\nthe global space-time and the thermal properties in each class of the\nsolutions remain the same.\nThe above transformations also generate non-diagonal\ninternal metric coefficients; the ${{n(n-1)}\/2}$ degrees of freedom\nassociated with the rotational angles of the $SO(n)$ matrix\ngenerate the $n(n-1)\/2$ off-diagonal internal metric components.\n\nOn the other hand, the two classes of the solutions correspond to\nKK BH's with constrained charge configurations, and therefore do not\nconstitute the most general set of static 4-d Abelian KK BH solutions.\nNamely, the subset of rotational matrices, which\ntransform the charge configuration of the\n$U(1)_M\\times U(1)_E$ solution to a new type of charge configurations,\ncorresponds to the coset space $SO(n)\/SO(n-2)$. 
This subset of\ntransformations therefore provides $(2n-3)$ additional degrees of\nfreedom for the (electric and magnetic) charge configuration.\nThus, the resultant number of degrees of freedom for the charge\nconfiguration is $2n-1$. Equivalently, the $2n$ ($n$ electric and $n$ magnetic)\ncharges $(\\vec{Q},\\vec{P})$ satisfy the single constraint\n$\\vec{Q} \\cdot \\vec{P} = 0$. Similarly, for solutions generated by\nthe $SO(n)/SO(n-1)$-transformations on the charge configuration of\nthe effective 5-d KK solution, the resultant number of charge degrees\nof freedom is $n+1$.\n\nIn the following, we shall concentrate on the electric-magnetic duality of\nsolutions generated by $SO(n)$ transformations on BH's of the effective\n6-d KK theory, since they involve less trivial transformations than those\nacting on BH's of the effective 5-d KK theory. The unimodular part of\nthe internal metric and the set of $n$ gauge fields transform in\nthe following way:\n\\begin{equation}\n\\rho_{ij}^\\prime = \\rho_k {U}_{ik} {U}_{jk} \\ \\ \\ \\\nA^{\\prime i}_\\phi = U_{i \\,(n-1)} P \\cos \\theta \\ \\ \\ \\\nA^{\\prime i}_t = U_{in}\\psi (r) \\ \\ ,\n\\label{nondiag}\n\\end{equation}\nwhere the electric and magnetic charges of the new gauge fields\n$A^{\\prime i}_\\mu$ are, therefore, given by $\\tilde{Q}^{\\prime i} =\nU_{in}\\tilde{Q}$ and $P^{\\prime i} = U_{i\\,(n-1)}P$ ($i=1,\\ldots,n$).\nThe expressions for the 4-d metric components in the new configuration\ncan be obtained from (\\ref{asol})--(\\ref{bsol}) by replacing\n$\\tilde{Q}$ and $P$ by $\\sqrt{\\sum^n_{i=1} (\\tilde{Q}^{\\prime i})^2}$ and\n$\\sqrt{\\sum^n_{i=1}( P^{\\prime i})^2}$, respectively.\n\nThe $SO(n)$ transformations generate the continuous\nelectric-magnetic duality transformations which rotate the\n$U(1)_M \\times U(1)_E$ configuration, {\\it i.e.}, $P^i = \\delta_{n-1}^i P$\nand $\\tilde{Q}^i = \\delta_n^i \\tilde{Q}$, to general charge\nconfigurations $P^{\\prime i} = U_{i\\,(n-1)}P$ and $\\tilde{Q}^{\\prime i} =\nU_{in}\\tilde{Q}$, with the constraint $\\sum^n_{i=1}P^{\\prime i}\\tilde{Q}^{\\prime i}=0$.\nThis constraint is a consequence of\n$\\sum^n_{i=1} U_{i n}U_{i\\, (n-1)} = 0$.\n\nA subset of $SO(n)$ transformations corresponding to the\nrotation in the $(n-1,n)$-plane, {\\it i.e.}, $U=\\hat U_{(n-1)\\, n}$\n(see Eq. (\\ref{repre})), mixes the monopole charges in the\n$(n-1)$-th and $n$-th gauge fields and induces the corresponding\noff-diagonal terms in the internal metric of the effective 6-d KK\ntheory. In particular, the discrete change of the rotation angle\n$\\theta_{(n-1)\\, n}$ from 0 to $\\pi/2$ corresponds to a discrete\nelectric-magnetic duality transformation, which interchanges the magnetic\nand electric monopole charges in the $(n-1)$-th and $n$-th\ngauge fields \\cite{EX}.\n\n\n\\section{Conclusions}\nIn this paper, we studied a class of static, spherically\nsymmetric solutions in $(4+n)$-d KK theory with Abelian isometry\n($U(1)^n$). In particular, for a diagonal internal metric {\\it Ansatz},\nthe consistency of the equations of motion imposes strong constraints\non the possible charge configurations of such solutions; BH's\nexist only for configurations with at most one non-zero electric\nand one non-zero magnetic charge. The case of electric and magnetic\ncharges coming from the same gauge field corresponds to BH's\nin the effective 5-d KK theory.
Configurations with electric and magnetic\ncharges arising from different $U(1)$ gauge factors, {\\it i.e.}, with\nthe isometry group $U(1)_M \\times U(1)_E$, correspond to BH's\nin the effective 6-d KK theory.\n\nNon-extreme BH's with $U(1)_M\\times U(1)_E$ symmetry, which are compatible\nwith the corresponding Bogomol'nyi bound, are parameterized in terms of\nthe non-extremality parameter $\\beta>0$. They have the global\nspace-time structure of Schwarzschild BH's, with the temperature\n$T_H$ [entropy $S$] increasing [decreasing] as $\\beta$ decreases.\nIn the extreme limit $\\beta \\to 0$, the solutions correspond to\nthe supersymmetric BH's \\cite{EX}. In this limit, the corresponding\nBogomol'nyi bound for the ADM mass is saturated, $T_H$ [$S$] reaches\nthe upper [lower] limit $T_{H\\, ext}=1/(4\\pi\\sqrt{|PQ|})$ [$S=0$],\nand the space-like singularity becomes null.\nNotably, the extreme limit is reached {\\it smoothly}.\n\nA class of solutions with non-diagonal internal metrics and\ngeneral charge distributions among the gauge fields is obtained by\napplying the global $SO(n)$ transformations to the solutions with\na diagonal internal metric. These solutions correspond to a subset of\nstatic 4-d Abelian KK BH's with constrained charge configurations.\nSince the $SO(n)$ transformations act only on the unimodular part of the\ninternal metric and the gauge fields, the 4-d global space-time\nand the thermal properties of the general class of 4-d Abelian KK\nBH's are the same as those of the corresponding 4-d Abelian KK BH's\nwith the diagonal internal metric.\nThis class of solutions exhibits a continuous electric-magnetic\nduality symmetry parameterized by the angles of the $SO(n)$ rotations.\n\\acknowledgments\nThis work is supported by U.S. DOE Grant No. DOE-EY-76-02-3071\nand the NATO collaborative research grant CGR\n940870. We would like to acknowledge useful discussions with E. Kiritsis,\nC. Kounnas, J. Russo and D. Waldram.