diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzihft" "b/data_all_eng_slimpj/shuffled/split2/finalzzihft" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzihft" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nOne of the widely accepted mechanisms\nof planet formation is the core-nucleated instability theory (Perri \\& Cameron 1974; Haris 1978; Mizuno\net al. 1978; Stevenson 1982). According to this scenario, the massive gaseous atmosphere would\nbe accumulated in a runaway manner when the core mass reaches a critical value.\nIn static models, when heating balances cooling,\nrunaway accretion occurs when the planet is beyond the critical mass,\nbecause the envelope fails to hold hydrostatic equilibrium. Rafikov (2006) found\na broad range of critical mass ($0.1 M_{\\oplus} \\le M_{\\rm critical} \\le 100 M_{\\oplus}$)\ndue to various disk properties and planetesimal accretion rate.\nHowever, in dynamic or quasi-static models, the thermal disequilibrium\nrather than the hydrostatic disequilibrium plays the dominant role.\nThe runaway accretion occurs because the envelope becomes thermally unstable\nas the cooling timescale becomes catastrophically shorter.\nIn this case, the runaway accretion is driven by\nrunaway cooling (Bodenheimer \\& Pollack 1986; Lee et al. 2014; Piso \\& Youdin 2014).\nThree stages are involved in the formation process. In the first stage,\nrocky cores grow by rapid planetesimal accretion. In the second stage, the core's feeding zone is depleted of solids and the atmosphere grows gradually, regulated by the KH contraction. Finally, when the atmosphere reaches the crossover mass,\ngas runaway takes place and the planet gets inflated into a gas giant. The timescale of second stage is the longest among\nthe three and dominates whole formation process (Pollack et al. 1996).\n\n\nAbout 20\\% of Sun-like stars host super-Earths with radii of 1-4 $R_{\\oplus}$ at distance 0.05-0.3 AU (Howard et al. 2010; Batalha et al. 2013; Petigura et al. 2013).\nRadial velocity measurements (Weiss \\& Marcy 2014) and transit timing variations (Wu \\& Lithwick 2013) manifest that\n masses of these super-Earths are in the range of 2$-$20 $M_{\\oplus}$.\nThe abundance of super-Earths presents a puzzle for the core instability theory.\nThis theory indicates that when a protoplanet reaches the super-Earth size, two physical processes\nmake the survival of super-Earths difficult, leading to a planetary ``desert\" in this size range (Ida \\& Lin 2004).\nSuper-Earths would excite density waves in PPDs and give rise to rapid type I migration.\nThis type of migration would cause the planet to be engulfed by its host star if the disk\ninner edge touches the stellar surface.\nRecent studies have sought various remedies\nfor type I migration (Yu et al. 2010; Fung \\& Chiang 2017).\nPPDs are expected to have an inner edge at the stellar magnetosphere (e.g., Long et al. 2005). For planets undergoing disk-driven migration, they are expected to pile up near this edge. 
They would stay either at the edge because the gas runs out, or inside the edge down to 2:1 resonance because that's where the tidal torque will taper off, or outside the edge as the standing waves generated by wave reflection off the inner edge stall planet migration (Tsang 2011).\nIn this paper we would focus on another threat for super-Earths.\nSuper Earths have low mean density, which suggests that they must\nbe surrounded by gas envelopes (Rogers \\& Seager 2010).\nSince these observed super-Earths are in the range of critical mass,\nthey would trigger efficient gas runaway and accumulate massive gas envelope.\nThey would become gas giants. As a result, super-Earths are supposed to be rare.\nHowever, the Kepler's discovery wreck these predictions. Lee et al. (2014) has proposed\nmetallicity gradient inside the PPDs or late assembly of cores to\nresolve the puzzle of super-Earth formation. Lee \\& Chiang (2016) stressed\nthat the late core assembly in transitional PPDs is more consistent with\nobservations. In gas-poor environments, gas dynamical friction has\nweakened to allow proto-cores to stir one another and merge.\nIn addition, this formation scenario ensures that super-Earth cores accrete mass\nwith a few percent envelope mass fraction (EMF).\n\n\n\n\n\nGuillot \\& Showman (2002) argued that the dissipation of kinetic energy of atmospheric wind,\ndriven by intense irradiation, could bury heat inside the planet.\nMany studies extend this idea to explain the radius anomaly of hot Jupiters\n(Youdin \\& Mitchell 2010; Ginzburg \\& Sari 2015; Komacek \\& Youdin 2017).\nThese investigations focus on the late evolution after the disk dispersal.\nUnfortunately, this is invalid for the early evolution of super-Earths because they are still\nembedded within disks. The irradiation may not penetrate the\ndisk and is not able to bury heat in the exoplanets.\n\nHowever, we note that\ntidal interactions between the host star and planet can periodically perturb the planet\nand generate mechanical forcing of the fluid motions (Zahn 1977; Goldreich \\& Nicholson 1989).\nHeating by tidal dissipation in primordial super-Earth envelope can inhibit the gas cooling (Ginzburg \\& Sari 2017).\nThis mechanism requires the orbital eccentricity of super-Earths be continuously pumped.\nBut super-Earths may not be massive enough to clear a clean gap to excite orbital\neccentricity (Goldreich \\& Sari 2003).\nAnother important aspect about tidal interaction is that \ntidally-forced turbulent mixing would induce heat transport inside the planets.\nRecent laboratory experiment shows that turbulence could penetrate deep inside the\nplanet interior (Cabanes et al. 2017).\nBy combining laboratory measurements and high resolution simulations, Grannan et al. (2017)\nconfirmed the generation of bulk filling turbulence inside planet driven by tidal forcing.\nTurbulent mixing plays an essential role in heat transport in strongly stratified environments (Garaud \\& Kulenthirarajah 2016).\nThis motivates us to study the effects of turbulent diffusion on the planet's thermal evolution.\nPrior studies have noticed that the turbulent mixing induced by mechanical forcing\nleads to heat transport inside hot Jupiters (Youdin \\& Mitchell 2010).\nThese tides would produce appreciable thermal feedback and may lead to interior radiative zones, enhancing\ng-mode dissipations with a wide spectrum of resonances (Jermyn et al. 
2017).\nWe find that the thermal feedback associated with the externally-forced turbulent stirring\nmay greatly alter the accretion history of super-Earths.\n\n\n\nIt is well known that the timescale of gas accretion is dictated by the KH timescale.\nIn other words, accretion is determined by the planet's ability to cool (Lee \\& Chiang 2015).\nIn this paper, we note that\nthe tidally-forced turbulent diffusion influences the heat transport inside the planet's envelope.\nThermal feedback would be induced by turbulent diffusion.\nThe heat transport associated with tidally-forced turbulent diffusion\nwould reduce the cooling luminosity\nand enhance the KH timescale.\nWe find that turbulent diffusion may have significant effects\non the planet accretion history\\footnote{In our\ncalculation, turbulent diffusion coefficient $\\mu_{\\rm turb}\\sim 10^{7} - 10^{9}$ cm$^2$ s$^{-1}$,\ncomparable to typical\n$\\mu_{\\rm turb}\\sim 10^{6}-10^{10}$ cm$^2$ s$^{-1}$ in solar system planets (de Pater \\& Lissauer 2001).}.\nBased on our calculations, we propose that tidally-forced turbulent diffusion would effectively\nhelp super-Earths evade growing into gas giants.\n\n\n\nThis paper is structured as follows. In section 2, we provide a brief description of the accreting\nplanet envelope with tidally-forced turbulent diffusion.\nIn section 3, we compare the planet interior thermal\nprofile with and without turbulent diffusion, discussing the thermal feedback\ninduced by turbulent diffusion, especially the shift of RCBs.\nIn section 4, we depict the cooling luminosity variations and onset of gas runaway.\nThe quasi-static Kelvin-Helmholtz evolution and critical turbulent diffusivity\nare discussed in Section 5. In Section 6, we discuss the mass loss\nmechanisms for super-Earths and the limitation of super-Earth formation by the turbulent diffusion.\nSummary and conclusions are given\nin Section 7.\n\n\\section{Accreting Envelope with Tidally-Forced Turbulence}\nSuper-Earths are susceptible to runaway accretion (Pollack et al. 1996).\nThe ability to accrete is determined by the planet's power to cool (Lee \\& Chiang 2015).\nHow super-Earths avoid rapid gas runaway\ndepends critically on the cooling history of planet, which is closely related to\nthe thermal structure of the envelope.\nIn the convectively stable region, the turbulent diffusion would induce heat transport within the planet.\nIn this section we will concentrate on the thermal feedback caused by tidally-induced turbulent diffusion.\n\n\\subsection{Thermal structure of Gaseous Envelope}\nSince the planet's ability to cool depends on planets' thermal structure of the envelope,\nwe first study the gaseous envelope structure of planets, i.e., the distribution of pressure,\ntemperature, and mass around a protoplanetary core with mass $M_{\\rm c}$ embedded\nwithin the protoplanetary nebular.\nThe planet envelope (or, interchangeably ``atmosphere'') structure is governed by the following equations of mass\nconservation, hydrostatic equilibrium, thermal gradient, and energy conservation\n(Kippenhahn et al. 
2012) :\n\\begin{equation}\n\\frac{d M_r}{d r} = 4 \\pi \\rho r^2 \\ ,\n\\end{equation}\n\\begin{equation}\n\\frac{d P}{d r} = - \\frac{G M_r}{r^2} \\rho \\ ,\n\\end{equation}\n\\begin{equation}\n\\frac{d T}{d r} = \\nabla_{\\rm } \\frac{T}{P} \\frac{d P}{d r} \\ ,\n\\end{equation}\n\\begin{equation}\n\\frac{d L}{d r} = \\frac{d M_r}{d r} \\left( \\epsilon - T\\frac{\\partial s}{\\partial t} \\right) \\ ,\n\\end{equation}\nwhere $G$ is the gravitational constant, $P$ is the pressure, $\\rho$ is the density,\n$T$ is the temperature, $L$ is the luminosity,\nand $M_r$ is the mass, including the core mass\nand the atmosphere mass, enclosed inside the radius $r$,\n$M_r = M_{\\rm atm} + M_{\\rm c}$.\nThe symbol ``$\\nabla$\" denotes the temperature gradient\ninside the envelope.\nThe energy generation $\\epsilon$ is set to zero since there is no nuclear reaction\ninside the planet.\nThe above equations implicitly indicate\nthat the envelope quickly adjusts and dynamical timescale is shorter\nthan the accretion timescale (Rafikov 2006).\nNote that the right hand side term, $-T\\frac{\\partial s}{\\partial t}$, in the energy equation dictates the cooling process.\nReplacing the local energy equation by a global energy equation would greatly reduce the\nnumerical tasks and we need only deal with ODEs rather than PDEs\n(Piso \\& Youdin 2014; Lee et al. 2014). Details will be discussed in Section 3.\n\nThe energy transport in the convective region is very efficient and\nthe temperature gradient is\\footnote{This assumption is mainly made for simplicity\nof the models, they are not necessarily correct (Stevenson 1985; Leconte \\& Chabrier 2012).\nWe are working on including the mixing length theory (e.g Kippenhahn et al. 2012) to\nbetter quantify the issue of super-adiabaticity.}\n\\begin{equation}\n\\nabla = \\nabla_{\\rm ad} = \\left( \\frac{d \\ln T}{d \\ln P}\\right)_{\\rm ad} \\ .\n\\end{equation}\nThe convective and radiative layers of the envelope are specified by\nthe Schwarzschild criterion: the atmosphere is stable against\nconvection when $\\nabla < \\nabla_{\\rm ad} $ and convectively\nunstable when $\\nabla \\ge \\nabla_{\\rm ad} $. Since the convective energy\ntransport is efficient, $\\nabla = \\nabla_{\\rm ad}$ in the convective region.\nThe actual temperature gradient can be expressed as\n\\begin{equation}\n\\nabla_{\\rm } = \\min(\\nabla_{\\rm ad}, \\nabla_{\\rm rad}) \\ .\n\\end{equation}\nIn this paper, we adopt a polytropic index $\\gamma=7\/5$ for an ideal\ndiatomic gas and the adiabatic gradient $\\nabla_{\\rm ad} = (\\gamma-1)\/\\gamma$.\nNote that the realistic equation of state (EOS) would change the value of $\\nabla_{\\rm ad}$\nand the effects of realistic EOS will be left for future studies.\n\n\n\n\nThe radiative temperature gradient\n\\begin{equation}\n\\label{radTgrad}\n\\nabla_{\\rm rad} = \\frac{3 \\kappa L P}{64\\pi\\sigma G M_r T^4} \\ ,\n\\end{equation}\nwhere $\\kappa$ is the opacity. \nIn the upper part of the atmosphere, the exact\nvalue of $\\kappa$ is highly uncertain because the amount of dust and the dust size\ndistribution are not well constrained in PPDs.\nLee et al. 
(2014) studied both dusty and dust-free atmosphere and\nfound that the radiative-convective boundaries (RCBs) are determined by\nH$_2$ dissociation at an almost fixed temperature $\\sim$2500 K for dusty atmosphere.\nThey also found the for dust-free atmosphere, the radiative region keeps an almost\nisothermal temperature fixed by the envelope outer surface.\nTechnically, the opacity laws can be written as a power law as a function\nof pressure and temperature whether or not the total opacity is dominated\nby dust grains. For these reasons, we adopt a power law opacity\n(Rafikov 2006; Piso \\& Youdin 2014; Ginzburg et al. 2016), by assuming that\n\\begin{equation}\n\\kappa = \\kappa_0 (P\/P_0)^{\\alpha} (T\/T_0)^{\\beta} \\ .\n\\end{equation}\nHere we choose $\\kappa_0 = 0.001$cm$^{2}$g$^{-1}$, which allows our\nfiducial model without turbulent diffusion to possess properties of more\nsophisticated super-Earth models (Lee et al. 2014).\nWhat is important is the opacity near the RCB. In that sense,\nit is important to keep in mind that the power-law indices\n$\\alpha$ and $\\beta$ can change significantly within the envelope (and with distance from the star).\nWe have tried different choices of $\\alpha$ and $\\beta$. We find that, as long as the\nparameter $\\alpha$ and $\\beta$ satisfy $\\nabla_0 \\equiv \\frac{1+\\alpha}{4-\\beta} > \\nabla_{\\rm ad}$,\nour results are robust and insensitive to the choices we made\\footnote{\nIn later part of this paper, we present the results with $\\alpha=1$, $\\beta = 1$,\nwhich ensures the existence of the inner convective region and outer radiative region\ninside the planet gas envelope. For details, please refer to discussions in Rafikov (2006)\nand Youdin \\& Mitchell (2010).}.\n\n\n\n\n\n\n\nConventionally, it is believed that solid cores accrete planetesimal\nand gas simultaneously (Pollack et al. 1996; Bodenheimer et al.\n2000).\nHowever, estimation shows that the termination epoch\nof accretion of solids is well before the accretion of gas.\nThe dust coagulation timescale can be as short as\n$t_{\\rm coagulate} \\sim 10^4$ yr especially\nwhen the planet is close to the central host (Lee et al. 2014).\nThis timescale is much shorter than typical disk dispersal timescale ($\\sim$ 0.5$-$10 Myr).\nIn addition, calculations by\nLee \\& Chiang (2015) showed that planetesimal accretion does not generically prevent runaway.\nAs a result, it is physically valid to set the planetesimal\naccretion rate to zero ($L_{\\rm acc}=0$)\nwhen we study accreting super-Earths within the disk.\nIn this case, the core is free to cool and contract, and it is extremely susceptible to the gas runaway.\n\n\n\nNote that the above differential equations are essentially identical\nto the usual planet interior structure equations. The distinction is the thermal\nfeedback generated by tidally-forced turbulent mixing inside the stably stratified region.\nMore specifically, $\\nabla_{\\rm rad}$ is affected by the turbulent diffusion, which will\nbe further discussed in the next section.\n\n\n\n\\subsection{Thermal Feedback by Tidally-Forced Turbulent Mixing}\nHow do super-Earths evade becoming gas giants?\nIn this paper, we propose a robust mechanism to avoid runaway accretion.\nDue to the tidal forcing, the planet's gas envelope would be stirred and\nthe turbulent motion may be initiated.\nDetailed analyses of these processes are rather complex\nand beyond the scope of this paper (e.g., Garaud \\& Kulenthirarajah 2016; Grannan et al. 
2017).\nIn this paper, we try to constrain the turbulent diffusion\nthat is necessary to influentially affect the planet accretion timescale.\nWe find\nthat even weak turbulence would affect the planet accretion history significantly.\n\n\nSince the sound-crossing time is much shorter than\nthe time for heat to diffuse across the fluid blob, the blob conserves entropy (i.e. adiabatically) and keeps\npressure equilibrium with the ambient environments when it displaces over a radial\ndistance $\\ell$.\nThe temperature difference between the blob and its surroundings is\n\\begin{equation}\n\\delta T = \\left(\\frac{d T}{dr}\\bigg|_{\\rm ad} - \\frac{d T}{dr} \\right) \\ell = - \\frac{\\ell T}{c_p} \\frac{d s}{dr} \\ .\n\\end{equation}\nThe heat excess associated with these fluid blobs can be written\nas $\\delta q = \\rho c_p \\delta T$ and the corresponding turbulent heat flux is\n$F_{\\rm turb} = v \\delta q$, where $v$ is the characteristic speed of turbulent eddies.\nThe entropy gradient can be put down as\n\\begin{equation}\n\\frac{d s}{d r} = \\frac{ g}{T \\nabla_{\\rm ad} } (\\nabla_{\\rm ad} - \\nabla) \\ ,\n\\end{equation}\nwhere $g$ is the gravitational acceleration. This equation\nindicates that in the stably stratified region ($\\nabla < \\nabla_{\\rm ad}$),\nthe entropy gradient is positive ($ds\/dr>0$). The heat flux by turbulent mixing is\nthen\n\\begin{equation}\nF_{\\rm turb} = v\\delta q = \\rho c_p v \\delta T = - \\rho g v \\ell\n \\left( 1- \\frac{\\nabla}{\\nabla_{\\rm ad}} \\right) \\ .\n\\end{equation}\nThe flux is negative for stable stratification.\nFor a thermal\nengine without external forcing, heat always flows from hot to cold regions.\nHowever, with external mechanical forcing by tides, heat flows from cold\nto hot regions becomes feasible (Youdin \\& Mitchell 2010).\nNote that the turbulent diffusion coefficient $\\mu_{\\rm turb} \\equiv v \\ell$\\footnote{Note that $\\mu_{\\rm turb} = K_{zz}$, a symbol widely used in the community of planetary atmospheres.} and the corresponding luminosity is\n\\begin{equation}\nL_{\\rm turb} = 4 \\pi r^2 \\left[ - \\rho g \\mu_{\\rm turb} \\left( 1- \\frac{\\nabla}{\\nabla_{\\rm ad}} \\right) \\right] \\ .\n\\end{equation}\nThe total luminosity is carried by two components, the radiative and the turbulent\n\\begin{equation}\nL = L_{\\rm rad} + L_{\\rm turb} \\ .\n\\end{equation}\nWe note that the temperature gradient in the radiative region\ncan be arranged in a compact form as (see Appendix A for details),\n\\begin{equation}\n\\nabla^{\\rm }_{\\rm rad} = \\frac{1 + \\eta }{1\/\\nabla^{(0)}_{\\rm rad} + \\eta\/\\nabla_{\\rm ad}} \\ .\n\\end{equation}\nIn the above equation,\n\\begin{equation}\n\\nabla^{(0)}_{\\rm rad} \\equiv \\frac{3 \\kappa P L}{64\\pi\\sigma G M_r T^4} \\ ,\n\\end{equation}\nand\n\\begin{equation}\n\\eta \\equiv \\frac{4 \\pi \\mu_{\\rm turb} G M_r \\rho}{L} = 4 \\pi \\left( \\frac{M_{\\rm c}}{M_{\\oplus}} \\right) \\nu_{\\rm turb} \\left(\\frac{M_r}{M_{\\rm c}} \\right) \\left( \\frac{\\rho}{\\rho_{\\rm disk}} \\right) \\ ,\n\\end{equation}\nwhere the superscript ``(0)\" indicates the radiative temperature gradient without turbulence\\footnote{This equation is actually\nthe same as the equation (\\ref{radTgrad}) in this paper.} and $M_{\\rm c}$ is the mass of the solid core.\nIt can be readily shown that the following inequality holds in\nradiative region $\\nabla^{(0)}_{\\rm rad} < \\nabla < \\nabla_{\\rm ad}$ (see Figure 3 for the\npseudo-adiabatic region).\nHere we stress that it is the turbulent 
diffusion driven by external tidal forcing that makes $\\nabla$\nsteeper than $\\nabla_{\\rm rad}^{(0)}$. This inequality has significant implications for\nthe thermal feedback induced by tidally-forced turbulent diffusion.\nAn interesting issue is that radiative zones would be enlarged and the cooling luminosity\nwould be greatly reduced.\n\nHere we define two dimensionless parameters\n\\begin{equation}\n\\label{turb_def}\n\\nu_{\\rm turb} \\equiv \\frac{\\mu_{\\rm turb}}{L\/(GM_{\\oplus}\\rho_{\\rm disk})} \\ , \\ \\zeta \\equiv \\frac{\\mu_{\\rm turb}} { H_p c_s} \\ .\n\\end{equation}\nThe two parameters represent the strength of turbulence. In the definition of $\\zeta$,\n$H_p\\equiv -d r\/d\\ln P$ and $c_s$ are pressure scale height and sound speed, respectively.\nIt is obvious that, if the turbulence in the radiative region is negligible, i.e., $\\eta = 0$,\nthe temperature gradient recovers its usual definition,\n$\\nabla_{\\rm rad} \\rightarrow \\nabla^{(0)}_{\\rm rad}$.\nIn section 5.1, we will give a physical estimation of the parameter $\\zeta$ based on our calculations. We will\nsee that small value of $\\zeta \\sim 10^{-6} - 10^{-5}$ has already appreciable effects on the\nformation of super-Earths. This mechanism is robust in the sense that even weak turbulence is\nadequate for it to operate.\nWe should keep in mind that one limitation is that the turbulence strength is parameterized,\nnot physically specified. This is an important issue which still remains to be addressed, i.e.,\nforcing turbulence induced by tides\nshould be investigated in further detail (Barker 2016; Grannan et al. 2017).\n\n\n\n\n\n\n\n\n\n\n\\subsection{Boundary Conditions}\nThe density and temperature at the outer boundary of the atmosphere are given\nby the nebular density and temperature. We adopt the minimum mass extrasolar nebula (MMEN) model\nof Chiang \\& Laughlin (2013). According to MMEN, the disk structure reads,\n\\begin{equation}\n\\rho_{\\rm disk}= 6\\times 10^{-6} \\left(\\frac{a}{0.1 {\\rm AU}} \\right)^{-2.9} {\\rm g \\ cm^{-3}} \\ ,\n\\end{equation}\n\\begin{equation}\nT_{\\rm disk} = 1000 \\left( \\frac{a}{0.1 {\\rm AU}} \\right)^{-3\/7} {\\rm K} \\ .\n\\end{equation}\n\nThe inner boundary lies at the surface of the inner core.\nThe core density is assumed to be $\\rho_{\\rm core} = 7$g cm$^{-3}$, the core mass\nis 5 $M_{\\oplus}$ and the core radius is $R_{\\rm core}$ = 1.6 $R_{\\oplus}$.\nThe outer boundary condition is chosen at the smaller of the Bondi radius and Hill radius,\nwhich are\n\\begin{equation}\nR_H \\approx 40 R_{\\oplus} \\left[ \\frac{(1+{\\rm EMF}) M_{\\rm core}}{5 M_{\\oplus}}\\right]^{1\/3} \\left( \\frac{a}{0.1 {\\rm AU}}\\right) \\ ,\n\\end{equation}\n\\begin{equation}\nR_B \\approx 90 R_{\\oplus} \\left[ \\frac{(1+{\\rm EMF}) M_{\\rm core}}{5 M_{\\oplus}}\\right] \\left( \\frac{1000 {\\rm K}}{T}\\right) \\ ,\n\\end{equation}\nrespectively.\n\n\n\n\n\n\\section{Thermal Properties of Gas Envelopes}\nSince the thermal cooling timescale is intimately related to the planet interior structure,\nwe first describe the interior structure of the gaseous envelope.\nTo avoid the complication induced by sandwiched convection-radiation structure\ninside the planet interior (Ginzburg \\& Sari 2015; Jermyn et al. 
2017),\nwe simply consider a two-layer model, i.e.,\na convective interior and a radiative exterior (Piso \\& Youdin 2014).\n\nWe adopt the assumption that the luminosity, $L$, is spatially constant, which\nis valid in radiative region if the thermal relaxation timescale is shorter than thermal times in the rest\nof the atmosphere. The validation of such assumption is corroborated by\nPiso \\& Youdin (2014) and Lee et al. (2014).\nTo get thermal profiles within the envelope, a luminosity $L$\nis required to obtain $\\nabla_{\\rm rad}$ before we numerically integrate the structure\nequations.\nThe spatially constant $L$ is treated as an eigenvalue of the ODEs.\nTo get the eigenvalue numerically, we first give a guess value of $L$ and\nre-iterate the integration until the mass at the core, $m(R_{\\rm c})$,\nmatches the actual mass $M_{\\rm c}$. Note that, once the luminosity is found,\nthe location of radiative-convective boundary (RCB) can be specified accordingly.\n\n\n\\subsection{Envelopes without Heat Transport by Turbulent Mixing}\nFor the convenience of comparison, we first consider a fiducial model, i.e.,\nan envelope without turbulence ($\\nu_{\\rm turb} = 0$).\nIn Figure \\ref{AtmProfile}, we show the radial profiles of pressure, temperature, and density of\nthe envelope for a 5$M_{\\oplus}$ core with increasing envelope mass during atmospheric growth.\nThe green, cyan and yellow curves denote the envelope mass fraction (EMF) = 0.1, 0.4, 0.8, respectively.\nThe thicker and thinner parts stand for the convective and radiative region, respectively.\nThe boundaries of the thicker and thinner part are the radiative-convective boundaries (RCBs).\nThe convective region is adiabatic.\nThe radiative region connects the lower entropy interior to the higher entropy exterior.\nIn Figure \\ref{AtmProfile}, we note that the pressure in the convection zone increases with\nenvelope mass, but the temperature only varies slightly.\nSince the entropy is $\\propto \\ln(T^{1\/\\nabla_{\\rm ad}}\/P)$,\nit is clear that, with increasing envelope mass,\nthe steady-state envelopes evolve in order of decreasing entropy (Marleau \\& Cumming 2014).\nThis is consistent with the cooling process that the envelope experiences,\nwhich allows the atmosphere to accrete more gas.\n\n\nLee et al. 
(2014) found that, for dusty atmosphere, the location of RCBs lies\nat an roughly fixed temperature where H$_2$ dissociates ($\\sim$ 2500K).\nIn Figure \\ref{AtmProfile}, the RCB lies at the bottom\nof the outermost radiative region and the temperatures at the RCBs are no longer 2500K.\nThis is because we adopt a grain-free atmosphere due to efficient grain coagulation (Ormel 2014).\nAccording to the middle panel of Figure \\ref{AtmProfile},\nwe find that grain-free atmosphere behaves differently from grain-rich atmosphere.\nThe outer radiative region is nearly isothermal, which implies that $T_{\\rm RCB} \\sim T_{\\rm out}$.\nSuch features have also been identified in Lee \\& Chiang (2015, 2016) and Inamdar \\& Schlichting (2015),\nwhich can be readily understood\nin terms of the following relation (Rafikov 2006; Piso \\& Youdin 2014)\n\\begin{equation}\n\\frac{T_{\\rm RCB}}{ \\ T_{\\rm out} } \\sim \\left(1 - \\frac{\\nabla_{\\rm ad}}{\\nabla_0} \\right)^{-1\/(4-\\beta)} \\sim 1 \\ .\n\\end{equation}\nThe term on the right hand side of this equation is around the order of unity.\nThis explains why the temperature at RCB, $T_{\\rm RCB} \\sim T_{\\rm out}$.\nWe would stress that, the above relation is only valid for atmosphere without turbulence.\nWhen heat transport by turbulent mixing is taken into account, the RCB is pushed inwards,\nand the temperature at the RCB ($T_{\\rm RCB}$) becomes higher.\n\n\n\nAt the early stage of accretion, the envelope mass is small and\nthe envelope can be well treated as non-self-gravitating.\nIn this case, simple analytic results can be derived (Rafikov 2006; Piso \\& Youdin 2014).\nThough the envelope we consider in this paper is self-gravitating, these analytical results\nare still very instructive to understand atmospheric evolution and interpret our\nnumerical results. How the position of the RCBs\nvaries with envelope mass can be understood with the following relations (Piso \\& Youdin 2014),\n\\begin{equation}\n\\label{LMrelation}\n\\frac{M_{\\rm atm}}{M_{\\rm c}} = \\frac{P_{\\rm RCB}}{\\xi P_{\\rm M}} \\ , \\\n\\frac{P_{\\rm RCB}}{ \\ P_{\\rm disk} } \\sim \\ e^{R_{\\rm B}\/R_{\\rm RCB} } \\ .\n\\end{equation}\nwhere $\\xi$ is a variable on the order of unity and $P_{\\rm M}$ is the characteristic pressure that is\nrelated to the core mass (Piso \\& Youdin 2014).\nIn the early stage of planet accretion, with the increase of envelope mass, the pressure at\nRCB would increase as well.\nAccordingly, the cooling luminosity would be reduced.\nWhen the self-gravity becomes important, the above relations no longer hold.\nThe stronger luminosity is necessary to support the more massive envelope.\nWith the increase of luminosity, the RCB would be shifted outward\nas shown in Figure \\ref{AtmProfile} (Ginzburg \\& Sari 2015).\n\n\n\n\\begin{figure}\n\\includegraphics[scale=0.75]{AtmProfileRevise.eps}\n\\caption{\\label{AtmProfile}\nThermal profiles around a planet core\nwith mass $M_{\\rm c} = 5 M_{\\oplus}$ at 0.1 AU.\nTurbulence is not included, $\\nu_{\\rm turb} = 0$.\nThe pressure, temperature, and density are shown in the upper, middle, and lower panels,\nrespectively. 
In each panel,\nthe green, cyan, yellow lines stand for $M_{\\rm atm}\/M_{\\rm c} = $ 0.1, 0.4, 0.8,\nrespectively.\nWith the increase of envelope mass, the pressure at RCBs always increases.\nHowever, the position of RCBs inside the planet first decreases and then increases.\nThis non-monotonic behavior is\ndue to the effects of self-gravity (Piso \\& Youdin 2014).\nNote in particular that no pseudo-adiabatic region appears in the envelope (cf. Figure \\ref{AtmProfile_with_turb}).\n}\n\\end{figure}\n\n\n\n\\subsection{Envelopes with Heat Transport by Turbulent Mixing}\nIn this section, we explore how turbulence ($\\nu_{\\rm turb} \\neq 0$) changes the structure\nof the planet envelope.\nThe most interesting feature is that the turbulence would push the RCBs\ninwards and diminish the cooling luminosity.\nIn Figure \\ref{AtmProfile_with_turb}, we show the planet thermal profiles for envelope\nmass fraction, $M_{\\rm atm}\/M_{\\rm c} =$ 0.2, 0.4 and 0.8.\nThe core mass $M_{\\rm c} = 5 M_{\\oplus}$.\nIn Figure \\ref{AtmProfile_with_turb}, we find that the difference\nis that a pseudo-adiabatic region appears.\nExplicitly, we point out the location of the pseudo-adiabatic region\nin the middle panel of Figure \\ref{AtmProfile_with_turb}.\nIn such regions, the temperature gradient is\nvery close to adiabatic gradient, but still smaller than adiabatic gradient (see Figure \\ref{TempGradVsP}).\n\nFrom middle panel of Figure \\ref{AtmProfile}, we see that,\nwhen the heat transport by turbulent diffusion is not included,\nthe RCB lies around the isothermal radiative region, $T_{\\rm RCB}\\sim T_{\\rm out}$.\nWhen turbulent diffusion is included, the temperature gradient would deviate\nfrom the isothermal approximation, which is most obvious by comparing middle panels of Figure\n\\ref{AtmProfile} and Figure \\ref{AtmProfile_with_turb}.\nWe can identify from Figure \\ref{TempGradVsP}, that the temperature gradient near\nRCBs is approaching $\\nabla_{\\rm ad}$ and clearly deviates from isothermal temperature\ngradient. Due to this temperature gradient deviation, a pseudo-adiabatic region appears.\nAs a result, the temperature at RCB becomes higher and RCBs would penetrate deeper inside\nthe envelope.\n\n\\begin{figure}\n\\includegraphics[scale=0.75]{AtmProfile_with_turbRevise.eps}\n\\caption{\\label{AtmProfile_with_turb}\nThe same as Figure \\ref{AtmProfile}, but for a turbulent envelope with $\\nu_{\\rm turb} = 0.016$.\nThe pseudo-adiabatic region is most clearly visible when comparing the middle panel of Figure 1 and Figure 2.\nDue to the presence of pseudo-adiabatic region, the RCBs are pushed inwards. The temperature at RCBs\nbecomes higher when heat transport by tidally-forced turbulent mixing is taken into account.\n}\n\\end{figure}\n\nTo better understand the effects of heat transport by turbulent mixing, we compare the profiles of planet\nenvelope with and without turbulence. The results are shown in Figure \\ref{TempGradVsP}\nas red solid and blue dashed lines, respectively.\nThe upper panel shows the global variation of temperature with pressure\nwithin the envelope.\nIn this panel, the difference between the two cases with and without turbulence\nis hardly discernible.\nThe middle panel shows again the variation of temperature with pressure but\nfocuses on the localized region around the radiative-convective transition region.\nIt shows that the turbulent mixing smoothes\nthe transition toward the adiabat.\nThere would appear a pseudo-adiabatic region above the\nactual adiabatic region. 
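The origin of this pseudo-adiabatic layer can be made concrete with a short numerical sketch of the blended gradient of Section 2.2, in which the radiative and turbulent channels are combined through the parameter $\\eta$. The snippet below is purely illustrative and is not the code used to produce the figures; the layer values of $M_r$, $\\rho$, and $L$ are representative numbers assumed for demonstration only.
\\begin{verbatim}
import numpy as np

# Blended temperature gradient in the stably stratified layer (Section 2.2):
#   nabla = (1 + eta) / (1/nabla_rad0 + eta/nabla_ad),
#   eta   = 4*pi*mu_turb*G*M_r*rho / L
GRAV = 6.674e-8            # cgs
nabla_ad = 2.0 / 7.0       # ideal diatomic gas, gamma = 7/5

def blended_gradient(nabla_rad0, mu_turb, m_r, rho, lum):
    eta = 4.0 * np.pi * mu_turb * GRAV * m_r * rho / lum
    return (1.0 + eta) / (1.0 / nabla_rad0 + eta / nabla_ad)

# Representative (assumed) layer values for a ~5 Earth-mass planet
nabla_rad0 = 0.1                                  # turbulence-free gradient
m_r, rho, lum = 5.0 * 5.97e27, 1.0e-4, 1.0e26     # g, g/cm^3, erg/s

for mu in [0.0, 1e6, 1e7, 1e8, 1e9]:              # mu_turb in cm^2/s
    print(mu, blended_gradient(nabla_rad0, mu, m_r, rho, lum))
\\end{verbatim}
As $\\mu_{\\rm turb}$ grows, the gradient climbs from $\\nabla^{(0)}_{\\rm rad}$ toward, but never reaches, $\\nabla_{\\rm ad}$, which is exactly the pseudo-adiabatic behaviour seen in the bottom panel of Figure \\ref{TempGradVsP}.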
This pseudo-adiabatic region pushes the RCB inward to higher pressure.\nTurbulent mixing leads to a more gradual approach to adiabat.\n\n\nThe turbulent diffusion in stably stratified region provides heating, instead\nof cooling so it is natural to expect that with turbulent diffusion taken into\naccount, the total cooling rate of envelope will decrease and KH\ncontraction timescale would be prolonged (see Figure \\ref{LVsMatm} for details).\n\n\n\n\n\\begin{figure}\n\\includegraphics[scale=0.65]{TempGradVsP.eps}\n\\caption{\\label{TempGradVsP}\nThermal profiles of planet envelope. The EMF $M_{\\rm atm}\/M_{\\rm c} = 0.1 $.\n{\\it Upper panel:} The blue dashed curve represents the\nenvelope without turbulence.\nThe red solid curve denotes envelope with turbulence, $\\nu_{\\rm turb} = 0.016$.\nThe RCBs are denoted as blue and red dots.\nThe two temperature profiles are very similar and difficult to\ndistinguish.\n{\\it Middle panel:} To identify their differences,\nwe show the two profiles near the radiative-convective transition region.\nThe red curve shows a more gradual transition from the radiative\nto adiabatic region. The region between the blue dot and red dot is the\npseudo-adiabatic region.\n{\\it Bottom panel:} The ratio of temperature gradient to adiabatic gradient.\nThe region with $\\nabla\/\\nabla_{\\rm ad} = 1$ is the convection zone.\nThe region with $\\nabla\/\\nabla_{\\rm ad} < 1$ is the radiative zone.\nIn the pseudo-adiabatic region, $\\nabla$ is\nvery close to $\\nabla_{\\rm ad}$, but still smaller than $\\nabla_{\\rm ad}$.\nThe RCBs shift inwards when heat transport by turbulent mixing\nis taken into account. The RCBs penetrate deeper with stronger turbulent mixing.\n }\n\\end{figure}\n\n\n\\section{Onset of Gas Runaway and Cooling Luminosity Variations}\nSince we are interested in the planet accretion history,\nit is necessary to investigate the luminosity with increasing envelope mass.\nIn the deep atmosphere, heat is advected by convective eddies.\nNear the surface, this could be achieved by diffusion. The surface temperature\ngradients would become shallower and a radiative region shows up.\nThe variations of luminosity with envelope mass is shown in Figure \\ref{LVsMatm}.\nWith the accumulation of envelope mass, the luminosity reaches a minimum.\nBeyond this minimum, the luminosity $L$ increases. As a result, the planet begins to cool at a very\nshort timescale and the envelope mass would grow super-linearly after this epoch.\nPhysically, it is natural to adopt the epoch when the minimum $L$ is reached as the\nonset of gas runaway, $t_{\\rm run}$.\n\nOn the right hand side of luminosity minimum, the luminosity-mass relation is relatively\neasy to understand. 
At this late stage of mass growth, the self-gravity\nof envelope appears to be prominent, and greater\nluminosity is necessary to support stronger gravity.\nHowever, on the left hand side of the luminosity minimum,\nthe mass of envelope is small and the planet is at the its early stage\nof mass growth.\nAt this early stage (envelope's self-gravity can be ignored),\nthe luminosity diminishes with a thicker radiative outer layer and more massive envelope.\nThis reduction in cooling luminosity is intimately related to the shift of RCBs.\nWhen the envelope self-gravity can be ignored,\nthe luminosity at RCB can be written as (Piso \\& Youdin 2014)\n\\begin{equation}\nL_{\\rm RCB} = \\frac{64\\pi \\sigma G M_{\\rm RCB} T^4_{\\rm RCB}}{3 \\kappa P_{\\rm RCB}} \\nabla_{\\rm ad}\n\\approx \\frac{L_{\\rm disk} P_{\\rm disk}}{P_{\\rm RCB}} \\ ,\n\\end{equation}\nwhere $M_{\\rm RCB}$ and $L_{\\rm disk}$ reads\n\\begin{equation}\nM_{\\rm RCB} = \\frac{5\\pi^2}{4}\\rho_{\\rm RCB} R_{\\rm B}^{\\prime}\\sqrt{R_{\\rm RCB}} \\ , \\\nL_{\\rm disk} \\approx\n\\frac{64\\pi\\sigma G M_{\\rm RCB} T_{\\rm disk}^4}{3 \\kappa_{\\rm d} P_{\\rm disk}} \\nabla_{\\rm ad} \\ .\n\\end{equation}\nThe above equations can be written in terms of known properties if the envelope\nmass is centrally concentrated (see, e.g., Lee \\& Chiang 2015).\nThis central concentration is physically expected since in deeper layers where temperatures\nrise above $\\sim2500$K, hydrogen molecules dissociate. As energy is spent on dissociating H$_2$\nmolecules rather than heating up the gas, the adiabatic index drops below 4\/3, to approach 1.\nThe upshot is that both the densities at the RCB and the radiative luminosity\ncan be written in terms of core properties and the temperature at the RCB.\n\n\nAs RCB deepens, the RCB becomes even more\noptically thick so it is harder to radiate energy away; as a result, the\nenvelope cools more slowly.\n\n\n\n\nIn Figure \\ref{LVsMatm}, we stress that two important aspects of thermal\nevolution during the planet accretion would be\naffected by turbulent mixing. The first is that it influences the luminosity.\nIn Figure \\ref{LVsMatm}, we know that when the turbulent diffusivity ($\\nu_{\\rm turb}$) is enhanced, the\ncooling luminosity would be reduced globally.\nThat is, for any particular value of envelope mass, the cooling luminosity for\nan envelope with turbulence is always below that without turbulence.\nWhen the turbulence is stronger, the luminosity becomes even smaller.\nThe second is that it changes the EMF at which the gas runaway occurs.\nIn Figure \\ref{LVsMatm}, our calculations show that,\nwhen the turbulence becomes stronger, the onset of gas runaway takes place\nat higher envelope mass fraction (EMF).\n\n\n\n\n\n\\begin{figure}\n\\includegraphics[scale=0.75]{LVsMatm.eps}\n\\caption{\\label{LVsMatm}\nThe luminosity $L$ varies non-monotonically with envelope mass.\nThe results for $\\nu_{\\rm turb} = 0, 0.005, 0.016$ are shown in\nblue solid, green dot-dashed, and red dashed lines, respectively.\nThe luminosity minimum is reached at $M_{\\rm atm}\/M_{\\rm c}$\n= 0.86, 1.16, 1.20, respectively.\nWhen the envelope mass is small, the increase of envelope mass\ncauses the luminosity to decrease. When the envelope mass is sufficiently large,\nthe self-gravity of gas envelope become important, and bigger\nluminosity $L$ is necessary to support stronger gravity. 
We choose\nthe luminosity minimum as the epoch when the runaway accretion sets in.\nWe note that two important aspects of thermal evolution during the planet accretion would be\naffected. With the enhanced turbulence, the cooling luminosity is reduced globally.\nWhen the turbulence becomes stronger, the onset of gas runaway occurs\nat a higher envelope mass fraction.\n}\n\\end{figure}\n\n\n\n\n\n\n\n\\section{Quasi-Static KH Evolution and Critical Turbulent Diffusivity}\n\nSince we ignore the accretion luminosity from the planetesimals,\nthe gravitational KH contraction is the only source for the cooling.\nThe gas accretion is regulated by the KH timescale.\nOur time evolution model\ncan follow the envelope mass growth up to the very early epoch of\nrunaway growth around the crossover mass.\nFortunately, Pollack et al. (1996) found that the timescale spent in the runaway accretion stage\nis orders of magnitude smaller than the KH timescale. The mass growth\ntimescale is actually dominated by the KH stage. For this reason,\nour model can get rather accurate estimation of mass growth timescale of an accreting planet.\nIn this section, we will explore how the turbulent mixing affect the KH contraction timescale.\nFor strong turbulent diffusion, the heat transport may even inflate the planet (Youdin \\& Mitchell 2010).\nWe are not interested in planet inflation induced by strong turbulence in this paper. We find that even\nweak turbulence can already play an essential role to delay the KH contraction.\n\n\n\n\n\n\\subsection{ Time evolution: Temporally Connecting Snapshots }\nIn the previous section, we have obtained snapshots of envelope\nstructure for different envelope masses.\nTo estimate the accretion timescale,\nwe need to connect them temporally in order of increasing mass.\nThe gas accretion history can be followed\nby the cooling process (Piso \\& Youdin 2014).\nDetailed estimation shows the luminosity generated in the radiative\nregion can be safely ignored (Lee et al. 
2014).\nIt is physically valid to assume the luminosity of the envelope is generated\nin the convective zone and the luminosity can be treated as constant in the\nouter radiative zone (Piso \\& Youdin 2014).\nThis would greatly simplify our evolutionary calculations.\nUnder such circumstances, we only need to solve a set of ordinary differential equations and connect\nthe solutions in time.\nLee \\& Chiang (2015) shows that it is physically valid to omit\nplanetesimal heating during the gas accretion of super-Earths.\nWhen there is no planetesimal accretion to power the gas envelope,\nthe time interval between two adjacent hydrostatic snapshots is the\ntime it spends to cool between them.\nIn addition to internal energy variations, gas accretion and envelope contraction\nalso bring about changes to the global energy budget.\nSpecifically, the time interval between two steady state solutions can be written as (Piso \\& Youdin 2014)\n\\begin{equation}\\label{budget}\n\\Delta t = \\frac{-\\Delta E + \\langle e\\rangle\\Delta M - \\langle P\\rangle\\Delta V_{\\langle M\\rangle}}{\\langle L\\rangle} \\ .\n\\end{equation}\nNote that the symbol $\\Delta$ designates the difference between\nthe two adjacent states and the bracket denotes the average of them.\nThe total energy $E$ consists of the internal energy and\nthe gravitational potential energy, which reads\n\\begin{equation}\nE = \\int_{M_c}^{M_{}} u \\ d M_r - \\int_{M_c}^{M_{}} \\frac{G M_r}{r} d M_r\\ ,\n\\end{equation}\nwhere $u$ is the specific internal energy, $u = c_{\\rm v} T$. The second term in\nequation (\\ref{budget}) stands for contribution from gas accretion. The specific energy of the accreting gas\nis $e = - G M_r \/r + u$. The third term in equation (\\ref{budget}) accounts for $P dV$ work done by the envelope\ncontraction.\nAll terms are calculated at the RCB. Note in particular that the volume difference\nbetween two adjacent snapshots are performed at fixed mass.\nWe choose the fixed mass as the average of the masses at the RCB (Piso \\& Youdin 2014).\n\n\n\\begin{figure}\n\\includegraphics[scale=0.65]{timescale_new.eps}\n\\caption{\\label{timescale}\n{\\it Upper panel} :\nThe accretion history for $\\nu_{\\rm turb} =$ 0, 0.0016, 0.005, and 0.016 is shown\nas cyan dot-dashed, blue solid, green dotted, red dashed lines, respectively.\nThe initial time for the accretion is estimated as $t_0 = |E|\/L$.\nThe slightly different starting time is due\nto the luminosity decrease by the inclusion of turbulence (see Figure \\ref{LVsMatm}).\nThe initial EMF is around 6\\%, where the planet is nearly fully convective.\nDifferent color dots in the upper panel denote the epoch, $t_{\\rm run}$, when the gas runaway takes place.\nThe runaway time is $t_{\\rm run} =$ 4.04, 10, 18.4, and 48.3 Myrs,\nrespectively. The solid blue curve shows the critical solution, where $t_{\\rm run} = t_{\\rm disk}$.\nThe critical diffusivity for $M_{\\rm core} = 5 M_{\\oplus}$ is\n$\\nu_{\\rm critical}\\sim 1.6\\times10^{-3}$ if $t_{\\rm disk}=10$ Myrs.\n{\\it Lower panel} : The critical $\\nu_{\\rm critical}$ for various core mass. For higher core\nmass, the critical $\\nu_{\\rm critical}$ is higher.\nWe note that a weak turbulence with small diffusivity,\n$\\mu_{\\rm turb} \\sim 10^{7} -10^{8}$ cm$^2$ s$^{-1}$, can already enhance\nthe runaway timescale and delay the gas runaway. 
\n}\n\\end{figure}\nIn the upper panel of Figure \\ref{timescale}, we show the planet mass growth\nhistory for different turbulent diffusivities.\nIn our fiducial model without turbulence, $t_{\\rm run}\\sim$ 4.04 Myrs.\nBeyond this epoch, the gas runaway occurs.\nThe gas runaway is due to the fast increase of $L$ beyond $t_{\\rm run}$,\nwhich leads to a rapid cooling process on a shorter timescale.\nThe most intriguing feature is that\nthe runaway time is delayed and the accretion timescale is prolonged\nwhen heat transport by tidally-forced turbulent mixing is taken into account.\nFor instance, when $\\nu_{\\rm turb} = 0.0016, 0.005, \\ 0.016$, the runaway time is\n$t_{\\rm run} = 10, \\ 18.4, \\ 48.3$ Myr, respectively.\nThe stronger the turbulence, the longer the gas runaway timescale.\n\nIn our calculations, we find that a small value of\n$\\nu_{\\rm turb}$, on the order of $10^{-3}$, can already appreciably affect\nthe cooling timescale of super-Earths. Since $\\nu_{\\rm turb}$ is dimensionless,\nit is better to recover its physical value according to Equation (\\ref{turb_def}).\nTypically, luminosities for super-Earths are $L \\sim 10^{26}$erg\/s,\n$M_{\\oplus} = 5.97 \\times 10^{27}$g, and $\\rho_0 = 6\\times 10^{-6}$ g cm$^{-3}$.\nThen the term, $L\/(GM_{\\oplus}\\rho_0)$, defined in Equation (\\ref{turb_def})\nis approximately $ \\sim 4.2 \\times10^{10}$ cm$^2$ s$^{-1}$.\nFor the dimensionless diffusivity $\\nu_{\\rm turb} = 0.0016$, the physical diffusivity\nis approximately $\\mu_{\\rm turb} \\sim 4.2 \\times 10^{7}$ cm$^2$ s$^{-1}$.\nFor even larger $\\nu_{\\rm turb}$,\nthe K-H contraction timescale can be enhanced by orders of magnitude.\nAccording to Figure \\ref{timescale}, it is evident that a turbulent diffusivity on the order of\n$\\sim 10^7 - 10^{8}$ cm$^2$ s$^{-1}$ can already\nenhance the runaway timescale by an order of magnitude.\nThe pressure scale height inside the planet is $H_p \\sim 10^9$ cm and the sound speed\nis $c_s \\sim 10^5$ cm s$^{-1}$.\nWe can get a physical sense of how large the turbulent diffusivity is by estimating\nthe dimensionless parameter $\\zeta$ in Equation (\\ref{turb_def}).\nIn our calculation, the parameter $\\zeta$ is quite small, on the order\nof $10^{-7}\\sim 10^{-6}$. This means that the turbulent diffusion\nnecessary to prolong the cooling timescale need not be very strong.\n\n\n\\subsection{Critical Turbulence Diffusivity $\\nu_{\\rm critical}$ and Super-Earth Formation}\n\nA gas giant would be formed if the protoplanetary disk is still full of gas when\nthe planet enters the runaway accretion stage. However,\nif the runaway time $t_{\\rm run}$ is longer than the disk lifetime $t_{\\rm disk}$,\nthe disk gas is depleted and the planet is unable to accrete sufficient gas to become a gas giant;\na super-Earth may then be formed.\nTwo timescales, $t_{\\rm run}$\nand $t_{\\rm disk}$, determine the ultimate destiny of the planet,\ni.e., whether the planet becomes a super-Earth\nor a gas giant. If $t_{\\rm run} < t_{\\rm disk}$, gas runaway occurs within the lifetime of\nthe disk. The planet would get inflated by the runaway gas accretion and become a gas giant.\nOn the contrary, if $t_{\\rm run} > t_{\\rm disk}$, the disk disperses before the gas\nrunaway takes place. Because there is not enough gas material for the planet to accrete, the planet is\nunable to become a gas giant. 
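In practice, the critical diffusivity separating these two outcomes can be located by simple root finding once $t_{\\rm run}$ has been computed for a handful of diffusivities. The sketch below is only illustrative: it interpolates the runaway times quoted above for a 5 $M_{\\oplus}$ core at 0.1 AU (the log-log interpolation is a stand-in for the full envelope integration, and the smallest tabulated diffusivity stands in for the turbulence-free limit), and it adopts a nominal disk lifetime of 10 Myr, the value used below.
\\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Runaway times quoted in Section 5.1 for a 5 Earth-mass core at 0.1 AU.
nu_tab = np.array([1.0e-4, 1.6e-3, 5.0e-3, 1.6e-2])  # 1e-4 ~ turbulence-free
t_tab  = np.array([4.04, 10.0, 18.4, 48.3])           # Myr

def t_run(nu):
    # Log-log interpolation; a stand-in for the actual envelope calculation.
    return 10.0 ** np.interp(np.log10(nu), np.log10(nu_tab), np.log10(t_tab))

t_disk = 10.0   # Myr, nominal disk lifetime

# nu_critical solves t_run(nu) = t_disk
nu_critical = brentq(lambda nu: t_run(nu) - t_disk, nu_tab[0], nu_tab[-1])
print(nu_critical)   # ~1.6e-3 for this core mass and location
\\end{verbatim}
Cores with $\\nu_{\\rm turb}$ above this root avoid runaway within the disk lifetime; repeating the procedure for other core masses yields the trend shown in the lower panel of Figure \\ref{timescale}.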
Usually the disk life is about $5-10$ Myr.\nTo be specific, we take the disk lifetime as $t_{\\rm disk} = $ 10 Myrs throughout this paper.\n\nIn the upper panel of Figure \\ref{timescale}, the core mass fixed at $M_{\\rm core} = 5 M_{\\oplus}$.\nWe find that there exists a critical diffusivity $\\nu_{\\rm critical} = 1.6\\times 10^{-3}$.\nWhen $\\nu_{\\rm turb}> \\nu_{\\rm critical}$, the K-H contraction timescale\nbecomes longer than the disk lifetime and the core would not be able to experience the gas runaway.\nIn this case, the formation of gas giants can be avoided and the formation of super-Earths becomes viable.\nIn the lower panel of Fig. \\ref{timescale}, we show the variations of $\\nu_{\\rm critical}$\nwith $M_{\\rm core}$. The critical diffusivity becomes larger when the core mass increases.\nSpecifically, for a 10 Earth mass core, the critical dimensionless diffusivity is approximately\n$\\nu_{\\rm critical } = 3.2\\times10^{-2}$. The actual diffusivity is about $\\sim 10^{9}$ cm$^{2}$ s$^{-1}$.\n\n\n\n\n\n\n\n\n\n\\subsection{Variations of $\\nu_{\\rm critical}$ with Planet Location in PPDs }\nObservationally, the Kepler statistics show that $\\sim$20\\% of Sun-like stars harbors super-Earths\nat distance of 0.05-0.3 AU. By contrast, the occurrence rate\nfor hot Jupiters inside $\\sim 0.1$ AU is only 1\\%. To explain these observational features,\nwe consider how the turbulence affects the thermal evolution for planets\nat different locations in PPDs.\nThe turbulent mixing considered in this paper is driven by the tides raised by the host star.\nWe believe that the tidally-induced turbulent mixing inside the planet\nwould become weaker when the planet is farther away from the host star.\n\nLee et al. (2014) found that, for dusty disk, the runaway timescale is independent of\nthe orbital location. However, since dust can not persist in the envelope due to\ncoagulation and sedimentation (Ormel 2014; Mordasini 2014), \nthe runaway timescale is no longer independent of the orbital location.\nIn the upper panel of Figure \\ref{semimajor}, we show the accretion\nhistory for planets at three different locations. The core mass is $M_{\\rm c} = 6 M_{\\oplus}$.\nThe blue solid, green dot-dashed, and red dashed curves\ndesignate the temporal variations of envelope mass for $a = $ 0.1AU, 1AU, and 5AU, respectively.\nThe turbulent diffusivity is $\\nu_{\\rm turb} = 0.013$. The gas runaway occurs at\n$t_{\\rm run} $= 33.1, 3.2, and 1.7 Myrs. 
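For reference, the nebular conditions that set the outer boundary in Section 2.3 change steeply with distance in the MMEN. The short sketch below simply evaluates those scalings at the three distances considered here; the zero envelope mass fraction is a simplifying assumption for illustration, not part of the full calculation.
\\begin{verbatim}
# MMEN nebular conditions and outer boundary radii (Section 2.3).
def mmen_disk(a_au):
    rho  = 6.0e-6 * (a_au / 0.1) ** (-2.9)          # g/cm^3
    temp = 1000.0 * (a_au / 0.1) ** (-3.0 / 7.0)    # K
    return rho, temp

def outer_radius(a_au, m_core=6.0, emf=0.0):
    # Outer envelope boundary in Earth radii: min(Bondi, Hill).
    _, temp = mmen_disk(a_au)
    m_fac   = (1.0 + emf) * m_core / 5.0
    r_hill  = 40.0 * m_fac ** (1.0 / 3.0) * (a_au / 0.1)
    r_bondi = 90.0 * m_fac * (1000.0 / temp)
    return min(r_bondi, r_hill)

for a in [0.1, 1.0, 5.0]:
    print(a, mmen_disk(a), outer_radius(a))
\\end{verbatim}
At 0.1 AU the Hill radius is the smaller of the two and sets the boundary, while farther out the cooler, more dilute nebula makes the Bondi radius the limiting one; these are the outer boundary values that enter the cooling calculation at each location.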
It is clear that gas accretion\nonto cores is hastened for planets that are farther away from the central star.\nThis behaviour can be understood from the decrease in opacity\nat the RCB, which makes the envelope more transparent and enhances the rate of cooling\n(Lee \\& Chiang 2015; Inamdar \\& Schlichting 2015).\nThe planets at $a = $ 1 AU and 5 AU would become gas giants due to\nrunaway accretion ($t_{\\rm run} < t_{\\rm disk}$).\nHowever, the planet in the inner region, $a= $ 0.1 AU, would become a super-Earth ($t_{\\rm run} > t_{\\rm disk}$).\nThe fact that dust-free atmospheres cool more rapidly at large distances\nhas been used to explain the presence of extremely puffy, low-mass planets\n(Inamdar \\& Schlichting 2015; Lee \\& Chiang 2016).\n\n\nWe explore the critical diffusivity, $\\nu_{\\rm critical}$, for planets at\ndifferent locations inside the minimum mass extrasolar nebula (MMEN).\nThe results are shown in the lower panel of Figure \\ref{semimajor}.\nIt shows that the critical diffusivity increases with the semi-major axis.\nWhen the planet is farther from the central star, $\\nu_{\\rm critical}$ becomes larger.\nThis means that a more distant planet requires stronger turbulence\nto lengthen the KH timescale and avoid gas runaway.\nFor tidally-induced forcing, we believe that the turbulent diffusion $\\nu_{\\rm turb}$ is determined by\nthe tides raised inside the planet by the host star. The tides become weaker if the planet is farther away\nfrom the host star.\nOur proposed mechanism can therefore naturally explain the formation of close-in super-Earths,\nwhile still allowing gas giant formation at larger orbital distances.\nWhen the planet is near the host star, tidally-forced turbulent mixing is stronger and $\\nu_{\\rm turb}$ would be larger.\nAccording to Figure \\ref{semimajor}, the required threshold $\\nu_{\\rm critical}$ is smaller.\nAs a result, the inequality $\\nu_{\\rm turb} > \\nu_{\\rm critical}$ is more readily\nsatisfied and the formation of super-Earths becomes possible.\nOn the contrary, when the planet is far from the host star, $\\nu_{\\rm turb}$ becomes smaller as the stirring by\ntides becomes weaker, while the required threshold $\\nu_{\\rm critical}$ becomes larger.\nThe threshold to avoid gas runaway is then more difficult to satisfy.\nThis indicates that, in the in-situ planet formation scenario,\nclose-in super-Earths form more readily,\nwhile gas giants are more prone to appear in the outer regions of PPDs.\nThis implication is consistent with the occurrence rates inferred from observations.\n\n\n\\begin{figure}\n\\includegraphics[scale=0.7]{semimajor_new.eps}\n\\caption{\\label{semimajor}\n{\\it Upper panel} : Variations of envelope mass with time.\nThe core mass is $M_{\\rm c} = 6 M_{\\oplus}$. The turbulent\ndiffusivity is $\\nu_{\\rm turb} = 0.01$. 
The blue solid, green dot-dashed, red dashed lines denote\nmass growth history for planets at 0.1 AU, 1 AU, and 5 AU, respectively.\nThe critical mass ratio at the epoch of runaway decreases for more distant planet.\nThe runaway time for the three different cases are 33.1, 3.2, and 1.7 Myr, respectively.\nIt is expected that for more distant planets, larger turbulent diffusivity is required to prevent\nrunaway gas accretion within $t_{\\rm disk} \\sim$ 10 Myrs.\n{\\it Lower panel} : The critical diffusivity, $\\nu_{\\rm critical}$, for different orbital locations, required\nto prevent gas runaway for disk lifetime $t_{\\rm disk} \\sim$ 10 Myrs.\nBeyond $\\nu_{\\rm critical}$, the KH timescale is longer than the disk lifetime.\nThe formation of super-Earths becomes possible.\n }\n\\end{figure}\n\n\n\n\n\n\\section{Mass Loss Mechanisms}\nObservation shows that super-Earths possess hydrogen and helium\nenvelopes containing only several percent of the planet's mass.\nHowever, we can see in Figure \\ref{timescale} that the planets accrete\nvery massive gas envelopes.\nThe planet core with $\\nu_{\\rm turb}=0$ reaches an envelope mass fraction (EMF)\nof $\\sim 0.8$ at the epoch of gas runaway.\nThe envelope mass is considerably higher than the mass inferred from observations.\nThese primordial super-Earths may experience\nsignificant mass loss during the post-formation evolution.\n\nHow super-Earths lose their mass still remains an open question.\nHere we briefly mention some possible ways to lose the envelope mass.\nThe first possibility is that close-in planets are exposed to intense XUV (extreme UV\nand X-ray) irradiation from their host stars. Photoevaporation\ncan significantly modify the structure of their atmosphere.\nOver the timescale of $\\sim 100$ Myrs, X-rays from host stars can photoevaporate\nthe super-Earth envelopes from initial EMF $\\sim 1$ to EMF of $\\sim 0.01-0.1$,\nwhich may naturally explain the differences between\nthe theoretical predictions and observational facts (e.g., Murray-Clay et al. 2009;\nOwen \\& Wu 2013; Owen \\& Wu 2017; Gaudi et al. 2017).\n\nGiant impact is the second possible mechanism to explain the mass loss,\nwhich is expected to be common because they are needed to provide\nlong-term orbital stability of planetary systems (Cossou et al. 2014).\nHydrodynamical simulations show that a single collision between similarly sized exoplanets\ncan easily reduce the envelope-to-core-mass ratio by a factor of two.\nSuper-Earths' asymptotic mass can be achieved by one or two giant impacts.\nUnder certain circumstances, almost 90\\% of the gas envelope can be\nlost during impact process (Liu et al. 2015; Inamdar \\& Schlichting 2016).\n\n\nMass transfer between the close-in planet and host star via Roche lobe represent the third way to\nreduce the planet mass (Valsecchi et al. 2015; Jia \\& Spruit 2017; Jackson et al. 2017).\nTidal dissipation can drive orbits of these primordial super-Earths to decay toward the Roche limit.\nThe mass transfer is quite rapid, potentially leading to complete removal of the gaseous envelope in a few Gyr,\nand leaving behind a super-Earth.\nMany gaseous exoplanets in short-period orbits are on the verge or are in the process of Roche-lobe overflow (RLO).\nThe coupled processes of orbital evolution and RLO likely shape the observed distribution of close-in exoplanets and may even be responsible for producing some of the short-period rocky planets. But recent calculations by Dosopoulou et al. 
(2017) challenged this idea by claiming that, for highly eccentric or retrograde planets, self-accretion by\nthe planet would slow down the mass loss rate via Roche lobe overflow.\n\n\nSuper-Earth envelope mass fractions range from 1-10\\%, and are more typically only $\\sim$1\\%\n(see Rogers \\& Seager 2010, Lopez \\& Fortney 2014, Wolfgang \\& Lopez 2015).\nThe mechanism discussed in this paper overpredicts the envelope mass fraction of super-Earths, often beyond 80\\%.\nPhotoevaporation, even around Sun-like stars, is only effective out to $\\sim$10 days and many super-Earths lie beyond this (see, e.g., Figure 8 of Owen \\& Wu 2013). Removal of $>$90\\% of the envelope by giant impact requires an impact velocity that exceeds the escape velocity (see, e.g., Figure 3 of Inamdar \\& Schlichting 2016). Finally, Roche lobe overflow only works within $\\sim$2 stellar radii, where the Roche limit lies.\nLee \\& Chiang (2016) proposed that the late-time formation of cores ensures that super-Earth\ncores accrete a few percent envelope mass fraction, in agreement with the observations.\nThere is a clear difference in the expected final envelope mass fraction\nbetween their work and ours.\n\nVery recent works have revealed that planetary envelopes embedded within PPDs\nmay not be in hydrostatic balance, which slows down envelope growth. It\nis possible for a steady-state gas flow to enter\nthrough the poles and exit in the disc mid-plane (Lambrechts \\& Lega 2017).\nIn the presence of a magnetic field and weakly ionizing winds,\nohmic energy is dissipated more readily for lower-mass planets.\nOhmic dissipation would make super-Earths more vulnerable to atmospheric evaporation (Pu \\& Valencia 2017).\nThese findings may offer new explanations for the typical low-mass envelopes around the cores of super-Earths.\nIn addition, we also note that the turbulent\ndiffusion mechanism may still be operating in the late core assembly scenario.\nIn the late core assembly scenario without turbulent diffusion, the asymptotic EMF is about 3-5\\% (Lee \\& Chiang 2016).\nWhen turbulent diffusion is taken into account, the EMF can be further reduced to 1\\%.\n\n\n\\section{Summary and Conclusion}\n\nIn this paper, we propose a new mechanism to avoid gas runaway for planet cores\nwithin the lifetime of disks.\nThe mechanism proposed in this paper is not subject to the $\\kappa$ or $\\mu$\ncatastrophe (Lee \\& Chiang 2015). Tidal heating (Ginzburg \\& Sari 2017) requires\nthe orbital eccentricity to be continuously pumped up during super-Earth formation.\nOur mechanism does not depend on the orbital eccentricity of the super-Earth.\nIncorporating this model into a population synthesis model may better constrain our\nunderstanding of exoplanet formation (Ida \\& Lin 2004; Jin \\& Mordasini 2017).\n\nWe have explored the effects of heat transport induced by tidal stirring on the thermal\nstructure of the stably stratified, radiative layers of super-Earths,\nfocusing on their influence on the KH timescale.\nWhen we take turbulent stirring into account,\npseudo-adiabatic regions show up within the radiative zone.\nThis pushes the RCBs inwards.\nThe temperature and pressure at the RCBs become higher and the cooling luminosity is reduced.\nAs a result, the KH timescale is enhanced.\nWe find that\nthere exists a critical turbulent diffusivity $\\nu_{\\rm critical}$. When\n$\\nu_{\\rm turb} > \\nu_{\\rm critical}$, the runaway time is greater than\nthe disk lifetime ($t_{\\rm run} > t_{\\rm disk}$). 
Under such circumstances, the onset of gas runaway in the planet lags behind the depletion of the disk gas.\nSince the planet no longer has enough gas to accrete, it cannot grow into a gas giant and instead becomes a super-Earth.\nIn addition, we also investigate the variation of $\nu_{\rm critical}$ with the planet's semi-major axis in the MMEN.\nOur calculations show that the condition for turbulence-induced formation of super-Earths is more readily satisfied in the inner disk region, but is harder to satisfy in the outer disk region.\nThe occurrence rates of super-Earths and gas giants are consistent with our calculations.\n\n\nThe extent of the radiative region has important implications for the tidal dissipation inside the planet.\nThe turbulence pushes the RCBs inwards and produces enlarged radiative zones.\nSince internal gravity waves propagate inside the radiative zone, the variation of this resonant cavity would significantly influence their propagation and dissipation (Jermyn et al. 2017).\nAnother effect is that the transition from the convective zone to the radiative zone is smoothed.\nThe radiative zone is thickened, and this bears important implications for the excitation and propagation of internal gravity waves (Lecoanet \\& Quataert 2013).\nThis would have appreciable effects on the thermal tides inside the planet.\nThese issues will be addressed in a future study.\n\n\nA limitation of this work is that the turbulence strength is not specified from first principles.\nAs a compromise, we parameterize the turbulent diffusion as a free parameter and try to constrain the turbulence strength in terms of the planet's thermal evolution.\nInterestingly, we find that turbulence in the radiative region has substantial effects on the planet's accretion history.\nThe initiation of turbulence during planet formation and the strength of the turbulent diffusion involve very complicated physical processes, which are worth further investigation.\n\n\nRealistic opacities and EOS have influential effects on the planetary thermal structure and the core accretion process (e.g. Stevenson 1982; Ikoma et al. 2000; Rafikov 2006), especially on the KH timescale (Lee et al. 2014; Piso \\& Youdin 2014).\nOur simple prescription of the opacity needs to be improved.\nGuillot et al. (1994) showed that a convective layer lies between two adjacent radiative regions due to the opacity window near $\\sim$ 2000 K.\nA relevant caveat is therefore the existence of radiative zones sandwiched inside the convective interior.\nSuch radiative windows are ignored in our two-layer models.\nIt would be interesting to consider how a downward turbulent heat flux would interact with such a sandwiched region.\nIn summary, how the cooling history of super-Earth envelopes responds to more realistic opacities and EOS needs to be further investigated.\nCalculations with realistic EOS and opacities are underway and will be reported elsewhere.\n\n\nWe have found that the epoch of runaway accretion can be effectively delayed by turbulent diffusion within the stably stratified region.
But we should be cautious that\nthe envelope mass fraction predicted by this mechanism is not fully consistent with observations.\nThe envelope mass fraction for planet embedded within the gas-rich MMEN is greater than 80\\%, much higher\nthan the typical super-Earth envelope.\nIt is difficult for the turbulent diffusion alone to make the envelope mass fraction be consistent with observations.\nAdditional physical process, such as giant impact, photo-evaporation, Roche-lobe overflow may be operating to\nreduce the envelope mass fraction during\nthe formation of super-Earth. But these mass loss processes either operate on distances shorter\nthan most super-Earths or are applicable under certain circumstances. A promising mechanism for super-Earth formation\nis the late core assembly within the transitional PPDs. In this scenario, with the reduction of the\nPPD mass density, the envelope mass fraction can be as low as 3-5\\% (Lee \\& Chiang 2016). We note that the turbulent diffusion\nmay be still working in the late core assembly scenario. How turbulent diffusion affect the envelope\nmass fraction within transitional PPDs is an interesting issue worth further investigation.\n\n\n\\acknowledgments\nWe thank the anonymous referee for the thoughtful comments that greatly improve this paper.\nDiscussions about heat transport inside planet interior\nwith Yanqin Wu and Re'em Sari are highly appreciated.\nThis work has been supported by National Natural\nScience Foundation of China (Grants 11373064, 11521303, 11733010),\nYunnan Natural Science Foundation (Grant 2014HB048)\nand Yunnan Province (2017HC018).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLacunary generating functions appeared previously in a number of circumstances, including for example the treatment of Cauchy problems in partial differential equations~\\cite{babusci2017lacunary,penson2018quasi}. Here, we develop a rather general technique for the treatment of such generating functions, applicable to sequences $P=(p_n(x,y))_{n=0}^{\\infty}$ of polynomials $p_n(x,y)$, where $x$ is the generic variable and $y$ plays the role of a parameter. Such two-variable extensions of one-variable polynomials have been strongly advocated in~\\cite{babusci2010lectures}. They can be logically and consistently defined for all standard families of orthogonal polynomials such as Hermite, Laguerre, Chebyshev of first and second kind, Jacobi and Legendre polynomials~\\cite{babusci2017lacunary,babusci2010lectures,beals2016special}. Once such two-variable equivalents are properly defined, their one-variable variants are obtained by fixing the values of both variables to functions of one of the variables. 
The concrete example considered in this paper is given by the two-variable Hermite (or so-called Hermite-Kamp\\'{e} de F\\'{e}riet) polynomials $H_n(x,y)$~\\cite{kampe,dattoli1997evolution}, from which the standard one-variable Hermite polynomials $H_n(x)$ may be recovered via (see~\\eqref{eq:HPtwo} below for the definition of $H_n(x,y)$)\n\\begin{equation}\nH_n(x)=H_n(2x,-1)\\,.\n\\end{equation}\nWe will focus our particular attention onto the derivation of a general formula for the $K$-tuple $L$-shifted lacunary generating functions $\\cH_{K,L}(\\lambda;x,y)$ of the two-variable Hermite polynomials $H_n(x,y)$, which are defined (for $K=1,2,3,\\dotsc$ and $L=0,1,2,\\dotsc$) by\n\\begin{equation}\\label{eq:HLGFa}\n\\cH_{K,L}(\\lambda;x,y):=\\sum_{n=0}^{\\infty}\\frac{\\lambda^n}{n!}\\; H_{n\\cdot K+L}(x,y)\\,.\n\\end{equation}\nThe exponential generating functions of type~\\eqref{eq:HLGFa} for Hermite and other types polynomials are very sparsely known, and progress in obtaining new closed-form formulas has been painstakingly slow. A glance at standard reference tables~\\cite{prudnikov1992integrals} reveals only a few known examples. A number of results in this vein were obtained by combinatorial approaches initiated by D.~Foata and V.~Strehl in~\\cite{foataStrehl1984} supplemented by umbral methods, see~\\cite{dattoli2017operational} and references therein. This methodology culminated recently in a tandem study of various lacunary generating functions of Laguerre polynomials derived by purely umbral-type~\\cite{babusci2017lacunary} and purely combinatorial methods~\\cite{strehl2017lacunary}. Only a few results are currently available for lacunary generating functions of Hermite polynomials: the double lacunary case has been combinatorially re-derived by D.~Foata in~\\cite{foata1981some}, whereas the more challenging triple-lacunary generating function has been derived by both umbral and combinatorial methods in~\\cite{gessel2005triple}. Finally, several new lacunary generating functions for Legendre and Chebyshev polynomials were obtained recently by a combination of analytic and umbral methods in~\\cite{gorskalacunary}. 
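As a purely numerical companion to the definition~\eqref{eq:HLGFa}, the short Python sketch below evaluates $H_n(x,y)$ from the standard series representation of the Hermite-Kamp\'{e} de F\'{e}riet polynomials, $H_n(x,y)=n!\sum_{m=0}^{\lfloor n/2\rfloor}x^{n-2m}y^{m}/\left(m!\,(n-2m)!\right)$ (cf.~\eqref{eq:HPtwo} below), and approximates $\cH_{K,L}(\lambda;x,y)$ by simple truncation of the series. It is offered only as an illustration: the function names are ours, and the truncation is meaningful only for parameter values at which the series converges.
\begin{verbatim}
from math import factorial, exp

def H2(n, x, y):
    # two-variable Hermite polynomial H_n(x, y), standard series form:
    # H_n(x, y) = n! * sum_m x^(n-2m) y^m / (m! (n-2m)!)
    return sum(factorial(n) * x**(n - 2*m) * y**m
               / (factorial(m) * factorial(n - 2*m))
               for m in range(n // 2 + 1))

def lacunary_gf(K, L, lam, x, y, nmax=60):
    # brute-force truncation of the K-tuple, L-shifted generating function;
    # for K >= 2 this is reliable only within the radius of convergence
    return sum(lam**n / factorial(n) * H2(n*K + L, x, y)
               for n in range(nmax + 1))

# checks: H_n(2x, -1) is the ordinary Hermite polynomial H_n(x), and for
# K = 1, L = 0 the sum collapses to exp(lam*x + lam^2*y)
print(H2(3, 2*0.7, -1.0))                   # H_3(0.7) = -5.656
print(lacunary_gf(1, 0, 0.3, 0.5, -0.2))    # should match the next line
print(exp(0.3*0.5 + 0.3**2*(-0.2)))
\end{verbatim}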
To conclude our short survey of results known previously in the literature, let us comment that there exists a related result due to Nieto and Truax~\\cite{nieto1995arbitrary}, which (in the form adapted to the two-variable Hermite polynomials $H_n(x,y)$ as presented in~\\cite{dattoli1997evolution,dattoli1998operational}) reads for $K\\in \\bZ_{\\geq 1}$, $L\\in \\bZ_{\\geq 0}$ and $L0$), the Hamiltonian $H_{\\texttt{GRWA}}$ can be written in the matrix form as\n\\begin{widetext}\n\\begin{equation}\nH_{\\texttt{GRWA}}=\\left(\n\\begin{array}{cccc}\n\\omega (n+2)+\\mu _{1}(n+2) & \\Delta R_{n+1,n+2}^{\\prime } & 0 & 0 \\\\\n\\Delta R_{n+1,n+2}^{\\prime } & \\omega (n+1)+\\mu _{2}(n+1) & \\Delta R_{n,n+1}^{\\prime }\n& 0 \\\\\n0 & \\Delta R_{n,n+1}^{\\prime } & \\omega n+\\mu _{3}(n) & \\Delta R_{n-1,n}^{\\prime } \\\\\n0 & 0 & \\Delta R_{n-1,n}^{\\prime } & \\omega (n-1)+\\mu _{4}(n-1\n\\end{array\n\\right) ,\n\\end{equation}\n\\end{widetext}with $R_{n+1,n+2}^{\\prime }=\\frac{-\\sqrt{3}K_{2}+K_{1}(\\sqrt{3\n+2K_{2})}{C_{1}C_{2}}R_{n+1,n+2}\\sqrt{n+2}$, $R_{n,n+1}^{\\prime }=\\frac{\n\\sqrt{3}K_{3}+K_{2}(\\sqrt{3}-2K_{3})}{C_{2}C_{3}}R_{n,n+1}\\sqrt{n+1}$ and \nR_{n-1,n}^{\\prime }=\\frac{-\\sqrt{3}K_{4}+K_{3}(\\sqrt{3}+2K_{4})}{C_{3}C_{4}\nR_{n-1,n}\\sqrt{n}$.\n\nTo this end, the GRWA can be also performed analytically without more efforts than\nthose in the original Hamiltonian $H_{\\texttt{RWA}}$ in Eq.(\\ref{RWA}).\nThe displaced oscillator states $|n\\rangle _{m}$, $|n\\pm\n1\\rangle _{m}$ and $|n+2\\rangle _{m}$ depend upon the Dicke state \n|j,m\\rangle $, and are definitely different from both the RWA ones and the zeroth-order approximations where\nonly the state $|n\\rangle_{m}$ is considered. Hence, as $\\Delta\/\\omega$ increases,\nthe first-order correction provides an efficient, yet accurate analytical solution.\n\nThe ground-state energy for the ground state $|-\\frac{3}{2}\\rangle |0\\rangle\n$ is\n\\begin{equation}\nE_{0}=-\\frac{5g^2}{4\\omega}-\\frac{\\Delta }{2}e^{-\\frac{g^{2}}{2\\omega ^{2}}}-2\\chi _{1,0}.\n\\end{equation}\nThe first and second excited energies $\\{E_{0}^{k}\\}$ ($k=1,2$) can be given\nby expanding the GRWA Hamiltonian in the basis $|-\\frac{3}{2}\\rangle\n|1\\rangle$ and $|-\\frac{1}{2}\\rangle |0\\rangle$\n\\begin{equation}\nH_{\\mathtt{GRWA}}=\\left(\n\\begin{array}{cc}\n\\omega +\\mu _{1}(1) & \\Delta R_{0,1}^{\\prime } \\\\\n\\Delta R_{0,1}^{\\prime } & \\mu _{2}(0\n\\end{array\n\\right) .\n\\end{equation\nSimilarly, $H_{\\mathtt{GRWA}}$ is given in terms of $|-\\frac{3}{2}\\rangle\n|2\\rangle $, $|-\\frac{1}{2}\\rangle |1\\rangle $, $|\\frac{1}{2}\\rangle\n|0\\rangle $ as\n\\begin{equation}\nH_{\\mathtt{GRWA}}=\\left(\n\\begin{array}{ccc}\n2\\omega +\\mu _{1}(2) & \\Delta R_{1,2}^{\\prime } & 0 \\\\\n\\Delta R_{1,2}^{\\prime } & \\omega +\\mu _{2}(1) & \\Delta R_{0,1}^{\\prime } \\\\\n0 & \\Delta R_{0,1}^{\\prime } & \\mu _{3}(0\n\\end{array\n\\right) ,\n\\end{equation\nwhich provides three analytical excited energies $\\{E_{0}^{k}\\}$ ($k=3,4,5$).\n\nEnergies obtained by the GRWA are presented in dashed lines in Fig.~\\re\n{energy level}. Especially, for the resonance case $\\Delta =\\omega $, the GRWA results are much better than the zeroth-order results (blue dotted lines) in Fig.\\ref{energy level}(b). It ascribes to the effect of the coupling between states\nwith different manifolds. Our approach is basically a perturbative expansion\nin terms of $\\Delta\/\\omega$. 
As the increase of the $\\Delta \/\\omega$ ,\nthe high order terms in Eq.(5) still cannot be neglected in the intermediate and strong coupling regimes.\nSo the GRWA works reasonably well in the ultra-strong coupling regime $g\/\\omega<0.3$ at resonance.\nInterestingly, the level crossing is present in both the GRWA results\nand the exact ones. The RWA requires weak coupling due to the complete neglect of the CRW terms, which are\nqualitatively incorrect as the coupling strength increases. So the GRWA includes the dominant contribution of the CRW\nterms, exhibiting substantial improvement\nof energy levels over the RWA one. The RWA fails in particular to describe\nthe eigenstates, which should be more sensitive in the quantum entanglement\npresented in the next section.\n\n\\section{Quantum entanglement}\n\nIn the present three-qubit system, we study the GME for the multipartite entanglement and the concurrence for the bipartite\nentanglement. A fully separable three-particle state must contain no entanglement.\nIf the state is not fully separable, then it\ncontains some entanglement, but it might be still separable with respect to\ntwo-party configurations. For genuine multiparticle entangled states, all\nparticles are entangled and therefore GME is very important among various definition of entanglements.\n\n\nWe review the basic definitions of GME for the three qubits $A$, $B$, and $C$. A separable state is a mixture of product states with respect to a bipartition $A|BC$, that is $\\rho_{A|BC}^{sep}=\\sum_{j}p_j|\\varphi_A^j\\rangle\\langle\\varphi_A^j|\\otimes|\\varphi_{BC}^j\\rangle\\langle\\varphi_{BC}^j|$,\n where $p_j$ is a coefficient. Similarly, we denote other separable states for the two other bipartitions as $\\rho_{B|AC}^{sep}$ and $\\rho_{C|AB}^{sep}$. A biseparable state is a mixture of separable states, and combines the separable states $\\rho_{A|BC}^{sep}$, $\\rho_{B|AC}^{sep}$, and $\\rho_{C|AB}^{sep}$ with respect to all possible bipartitions. Any state that is not a biseparable state is called genuinely multipartite entangled.\n\nRecently, a powerful technique has been advanced to characterize multipartite entanglement using positive partial transpose (PPT) mixtures~\\cite{peres}. It is well known that a separable state is PPT, implying that its partial transpose is positive semidefinite.\nWe denote a PPT mixture of a tripartite state as a convex combination of PPT states $\\rho_{A|BC}^{PPT}$, $\\rho_{B|AC}^{PPT}$ and $\\rho_{C|AB}^{PPT}$\nwith respect to different bipartitions.\nThe set of PPT mixtures contains the set of\nbiseparable states. The advantage of using PPT mixtures instead of biseparable states is that the set of PPT mixtures\ncan be fully characterized by the linear semidefinite programming (SDP)~\\cite{boyd},\nwhich is a standard problem of constrained convex optimization theory.\n\nIn order to characterize PPT mixtures, a multipartite state which is not a PPT mixture can be detected by a decomposable entanglement witness $W$~\\cite{novo}. The witness operator is defined as $W=P_M+Q_M^{T_M}$ for all bipartitions $M|\\bar{M}$, where $P_M$, and $Q_M$ are positive semidefinite operators, and $T_M$ is the partial transpose with respect to $M$. 
This observable $W$ is positive on all PPT mixtures, but has a negative expectation value on at least one entangled state.\nTo find a fully decomposable witness for a given state $\\rho$, the convex optimization technique SDP\nbecomes important, since it allows us to optimize over all fully decomposable witnesses.\nHence, a state $\\rho$ is a PPT mixture only if the optimization problem~\\cite{novo},\n\\begin{equation} \\label{minm}\n\\textrm{minimize:} {}{} \\mathtt{Tr}(W\\rho).\n\\end{equation}\nhas a positive solution. If the minimum in Eq. (~\\ref{minm}) is negative, $\\rho$ is\nnot a PPT mixture and hence is genuinely multipartite entangled.\nWe denote the absolute value of the above minimization as $E(\\rho)$. For solving the SDP we use the programs YALMIP and SDPT3~\\cite{yalmip,program}, which are freely available.\n\n\nNow we discuss the dynamics of the GME for the three-qubit entanglement.\nThe initial entangled three-qubit state is chosen as the W state with only one excitation\n\\begin{equation}\n|W\\rangle =\\frac{1}{\\sqrt{3}}(|100\\rangle +|010\\rangle +|001\\rangle ),\n\\label{initial state}\n\\end{equation\nwhich corresponds to the Dicke state $|D_{3}\\rangle =|-\\frac{1}{2\n\\rangle $. For the Hamiltonian (~\\ref{Ham}) with respect to the rotation around the \ny$ axis by the angle $\\pi\/2$, the initial Dicke state can be written as\n\\begin{equation}\n|D_{3}\\rangle =\\frac{1}{\\sqrt{8}}(-\\sqrt{3}|-\\frac{3}{2}\\rangle -|-\\frac{1}{\n}\\rangle +|\\frac{1}{2}\\rangle +\\sqrt{3}|\\frac{3}{2}\\rangle),\n\\label{initial state1}\n\\end{equation\nand the initial cavity state is the vacuum state $|0\\rangle $. Based on the\neigenstates $\\left\\{ |\\varphi _{k,n}\\rangle\\right\\} $ and eigenvalues \n\\left\\{ E_{n}^{k}\\right\\} $ in the GRWA and the zeroth-order approximation,\nthe wavefunction evolves from the initial state as $|\\phi (t)\\rangle\n=\\sum_{n,k}e^{-iE_{n}^{k}t}|\\varphi _{k,n}\\rangle \\langle \\varphi\n_{k,n}|D_{3}\\rangle $. And the three-qubit reduced state $\\rho (t)$ can be given by\ntracing out the cavity degrees of freedom\n\\begin{equation}\n\\rho (t)=\\texttt{Tr}_{\\mathtt{cavity\n}(|\\phi (t)\\rangle \\langle \\phi (t)|).\n\\end{equation}\nWe then calculate the absolute value of the minimum $E(\\rho )$ to detect the GME by solving the minimum in Eq.(~\\ref{minm}).\n\n\\begin{figure}[tbp]\n\\includegraphics[scale=0.45]{GMEDym.eps}\n\\caption{(Color online) Dynamics of the GME for three-qubit entanglement\nwith the initial W state for the ultrastrong-coupling strength $g\/\\protec\n\\omega=0.1$ with the different detuning $\\Delta\/\\protect\\omega=0.1$ (a) and \n\\Delta\/\\protect\\omega=1$ (b) by the GRWA method (dash-dotted lines),\nnumerical method (solid lines), RWA (short-dotted\nlines), and the zeroth-order approximation (dashed lines).}\n\\label{dynamics GEM}\n\\end{figure}\n\nFig.~\\ref{dynamics GEM} shows the $E(\\rho )$ plotted against parameter $\\Delta t\/(2\\pi)$ for different detunings $\\Delta\/\\omega$\nfor the ultra-strong-coupling strength $g\/\\omega =0.1$. For comparison, results from numerical exact\ndiagonalization and RWA are also shown. We observe a quasi-periodic behavior\nof the GME dynamics. $E(\\rho )$ decays from the initial entangled W state\nand falls off to a nonzero minimum value, implying no death of the three-qubit entanglement. 
The GME dynamics obtained by the\nGRWA are consistent with the numerical results, while the RWA results are\nqualitatively incorrect for the off-resonance case $\\Delta \/\\omega =0.1$ in\nFig.~\\ref{dynamics GEM} (a). The zeroth-order approximation, where only states within\nthe same manifold are included, works well for the off-resonance case \n\\Delta =0.1$ in Fig.~\\ref{dynamics GEM} (a) but not for the on-resonance\ncase in Fig.~\\ref{dynamics GEM} (b). The validity of the GRWA ascribes to the\ninclusion of the CRW interaction $iJ_{y}F_{1}\\left( a^{\\dagger }a\\right)\n(a^{\\dagger }-a)$.\n\n\nThe onset of the decay of the multipartite\nentanglement is due to the information loss of qubits dynamics to the cavity.\nOn the other hand, it is the interaction with the cavity that leads to the\nentanglement resurrection. The lost information will be transferred back to the qubit\nsubsystem after a finite time, which is associated with the ratio between the\ncoupling strength $g\/\\omega$ and the level-splitting of qubits $\\Delta\/\\omega$.\nAs the ratio $g\/\\Delta$ increases, the contributions of the qubit-cavity interaction become dominant\nand the lost entanglement will be transferred quickly from the cavity to qubits with\nless revivals time, as shown in Fig~\\ref{dynamics GEM} (a).\n\n\nMoreover, it is significant to study the different behavior of the multipartite entanglement and the bipartite entanglement.\nThe concurrence characterizes the\nentanglement between two qubits. Due to the symmetric Dicke states in the\nthree-qubit collective model, the concurrence is evaluated in terms of the\nexpectation values of the collective spin operators as $C=\\max\n\\{0,C_{y},C_{z}\\}$, where the quantity $C_{n}$ is defined for a given\ndirection $n(=y,z)$ as $C_{n}=\\frac{1}{2N(N-1)}\\{N^{2}-4\\langle\nS_{n}^{2}\\rangle -\\sqrt{[N(N-2)+4\\langle S_{n}^{2}\\rangle\n]^{2}-[4(N-1)\\langle S_{n}\\rangle ]^{2}}\\}$~\\cite{vidal}. From the dynamical\nwavefunction $|\\phi (t)\\rangle $, we can easily evaluate the coefficients\nfor the qubit to remain in the $|j,m\\rangle $ state\n\\begin{equation}\\label{zeroprob}\nP_{m}^{0th}=\\sum_{n=0}^{\\infty\n}\\sum_{k=1}^{4}f_{n}(t)e^{-iE_{n}^{k}t},\n\\end{equation\nin the zeroth-order approximation and\n\\begin{eqnarray}\\label{probability}\nP_{m}^{\\mathtt{GRWA}} &\\approx&\\sum_{n}^{\\infty\n}\\sum_{k=1}^{4}f_{n}^{k}(t)(e^{-iE_{n-2}^{k}t}+e^{-iE_{n-1}^{k}t} \\notag \\\\\n&&+e^{-iE_{n}^{k}t}+e^{-iE_{n+1}^{k}t}),\n\\end{eqnarray\nin the GRWA. $f_{n}^{k}(t)$ is a dynamical parameter associated with the\ninitial state and the $k$-th eigenstates for each $n$. From $P_{m}^{\\mathtt{GRWA}}$ in Eq.(~\\ref{probability}), we\nobserve energy-level transitions among $E_{n-2}^{k}$, $E_{n\\pm 1}^{k}$ and \nE_{n}^{k}$ in the GRWA, which produce essential improvement of the dynamics\nover the zeroth-order ones in Eq.(~\\ref{zeroprob}). 
Since the average value of collective\nspin operators can be expressed by $P_m$, such as $4\\langle S_{y}^{2}\\rangle =4\\sqrt{3}({}_{-\\frac{3}{\n}}\\langle n-2|n\\rangle _{\\frac{1}{2}}P_{-\\frac{3}{2}}P_{\\frac{1}{2}}+{}_{\n\\frac{1}{2}}\\langle n-1|n+1\\rangle _{\\frac{3}{2}}P_{-\\frac{1}{2}}P_{\\frac{3}\n2}})-4(P_{-\\frac{1}{2}}^{2}+P_{\\frac{1}{2}}^{2})+3$, we calculate the concurrence $C$ by the zeroth-order approximation and the GRWA, respectively.\n\n\\begin{figure}[tbp]\n\\includegraphics[scale=0.7]{condyn.eps}\n\\caption{(Color online) Dynamics of the concurrence for the qubit-qubit\nentanglement with the initial W state for the ultrastrong coupling strength \ng\/\\protect\\omega=0.1$. The parameters are the same as in Fig.~\\protect\\re\n{dynamics GEM}.}\n\\label{dynamics concurrence}\n\\end{figure}\n\nWe plot the dynamics of the concurrence for different detunings $\\Delta \/\\omega =0.1$ and $1$ in\nFig.~\\ref{dynamics concurrence}. The initial W state gives the maximum\npairwise entanglement $C=2\/3$ of any Dicke states. Fig.~\\ref{dynamics\nconcurrence} (a) shows that dynamics of the concurrence by the zeroth-order\napproximation are similar to the numerical ones in the off-resonance case \n\\Delta \/\\omega =0.1$, in which the RWA results are invalid. The sudden death\nof the bipartite entanglement is observed in the resonance case in Fig.~\\re\n{dynamics concurrence} (b). The dynamics of the concurrence obtained by the\nGRWA is similar to the numerical results, exhibiting the disappearance of the\nentanglement for a period of time.\nHowever, there is no sudden death of the\nentanglement in the RWA case, indicating that the CRW terms are not negligible.\n\nVery interestingly, as shown in Fig.~\\ref{dynamics GEM}, the GME for the three-qubit entanglement never vanishes, in sharp contrast with bipartite entanglement.\nDuring the vanishment of\nconcurrence, the GME is generally small but still finite.\nIt follows that the two-qubit state is separable in the system, but the three-qubit state still contains residual entanglement. This may be one\nadvantage to using GME as a quantum information resource.\n\nFinally, it is significant to clarify why the GME of the tripartite entanglement behaves differently\nwith the concurrence of the bipartite entanglement. The well-known death of the concurrence\nis related to the disappearance of the entanglement in an arbitrary two-qubit subsystem, say A and B, while a deep understanding\nis associated with the question of whether there exists entanglement in the three-qubit system. Intuitively, we may think that entanglement is still stored in the bipartition $AB|C$. Negativity is used to detect the entanglement for this bipartition~\\cite{vidal2}, which\nfalls off to a nonzero minimum in Fig.~\\ref{entanglement}. It reveals that the state for the bipartition $AB|C$ is not a separable state. Similarly, those states with respect to other bipartitions $AC|B$ and $BC|A$ are not separable. Therefore, the three-qubit state stays in an entangled state and\nthe GME for the three-qubit entanglement never disappears during the death of the two-qubit entanglement. The theory of the multipartite entanglement is not fully developed and requires more insightful investigations into more- than two-party systems. 
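To make the partial-transpose criterion used above more tangible, the following self-contained Python\slash\texttt{numpy} sketch (our own illustration; it does not reproduce the YALMIP\slash SDPT3 computation of Eq.~(\ref{minm})) evaluates the negativity of the initial W state across the bipartition $AB|C$. The strictly positive output confirms that this cut is entangled, which is the quantity tracked for the evolved state $\rho(t)$ in Fig.~\ref{entanglement}; note that the normalization adopted here (sum of the negative eigenvalues of the partial transpose) may differ from that of~\cite{vidal2} by a constant factor.
\begin{verbatim}
import numpy as np

def partial_transpose_last(rho, d_first, d_last):
    # partial transpose over the last subsystem of a d_first x d_last bipartition
    r = rho.reshape(d_first, d_last, d_first, d_last)
    return r.transpose(0, 3, 2, 1).reshape(d_first * d_last, d_first * d_last)

def negativity(rho, d_first, d_last):
    # sum of the absolute values of the negative eigenvalues of the partial transpose
    ev = np.linalg.eigvalsh(partial_transpose_last(rho, d_first, d_last))
    return float(-ev[ev < 0.0].sum())

# three-qubit W state, basis ordering |q_A q_B q_C>
ket = np.zeros(8)
ket[[1, 2, 4]] = 1.0 / np.sqrt(3.0)     # |001>, |010>, |100>
rho_W = np.outer(ket, ket)

print(negativity(rho_W, 4, 2))          # approx 0.471 > 0: the cut AB|C is entangled
\end{verbatim}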
We highlight here the different features of the multipartite entanglement and bipartite entanglement in the more- than two-qubit system, and have found that the GME is always robust at least in the qubits and single-mode cavity system.\n\\begin{figure}[tbp]\n\\includegraphics[scale=0.45]{compare.eps}\n\\caption{(Color online) GME for the three qubits A, B,C (dash-dotted line), negativity for the entanglement with respect to the bipartition $AB|C$ (solid line),\nand concurrence between A and B qubits (dashed line) obtained by the numerical method for \ng\/\\protect\\omega=0.1$ and $\\Delta\/\\protect\\omega =1$.}\n\\label{entanglement}\n\\end{figure}\n\n\\section{Conclusion}\n\nIn this work, we have extended the original GRWA by Irish for the one-qubit Rabi model to the three-qubit Dicke model by the unitary transformation. The zeroth-order approximation, equivalent to the adiabatic approximation,\nis suited for arbitrary coupling strengths for the large detuning case. The first-order approximation, also called GRWA,\nworks well in a wide range of coupling strength even on resonance and much better than the RWA ones. In the GRWA, the effective Hamiltonian with\nthe CRW interactions is evaluated as the same form of the ordinary RWA one, which facilitates the derivation of the explicit analytic solutions. All eigenvalues and eigenstates can be approximately given.\n\nBy the proposed GRWA scheme, we have also calculated the dynamics of concurrence for the bipartite entanglement and the GME for the multipartite entanglement, which are in quantitative agreement with the numerical ones. The well-known sudden death of the two-qubit entanglement is observed by our analytic solution.\nAn interesting phenomenon of entanglement is that the GME for the three-qubit entanglement decays to the nonzero minimum during the time window in which the two-qubit entanglement disappears, implying that three qubits remain entangled when the two-qubit state is separable.\nOur results indicate that the GME is the powerful entanglement to detect quantum correlations in multipartite systems that cannot be described via bipartite entanglement in subsystems of smaller particles.\nThere still exists many open problems to the theory of entanglement for multipartite systems due to much richer structure of the entanglement in a more- than two-party system. In particular, the dynamical behaviors for two kinds of\nentanglement may be explored in the multi-qubit realized in the recent\ncircuit QED systems in the ultra-strong coupling.\n\n\n\nIn the end of the preparation of the present work, we noted a recent paper\nby Mao et al. ~\\cite{mao} for the same model. 
We should say that the approach\nused there is the adiabatic approximation of the present work, i.e., the\nzeroth-order approximation.\n\n\\section{Acknowledgements}\n\nThis work was supported by National Natural Science Foundation of China\n(Grants No.11547305, and No.11474256), Chongqing Research Program of Basic Research and\nFrontier Technology (Grant No.cstc2015jcyjA00043), and Research Fund for the Central Universities\n(Grant No.106112016CDJXY300005).\n\n$^{*}$ Email:yuyuzh@cqu.edu.cn\n\n$^{\\dagger}$ Email:qhchen@zju.edu.cn\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this paper, we are interested to solve the following unconstrained optimization problem:\n\\begin{eqnarray}\\label{general}\n\\min_{x\\in\\Bbb{R}^n}f(x),\n\\end{eqnarray}\nin which $f:\\Bbb{R}^n\\rightarrow \\Bbb{R}$ is a continuously differentiable function. There are various iterative \napproaches for solving (\\ref{general}) \\citep{Nocedal}. The Conjugate Gradient (CG) method is one such approach. The CG based methods\ndo not need any second-order information of the objective function. For a given point $x_0\\in \\Bbb{R}^n$, the iterative formula\ndescribing the CG method is:\n\\begin{equation}\\label{iter}\nx_{k+1}=x_k+\\alpha_k d_k,\n\\end{equation}\nin which $x_k$ is current iterate point, $\\alpha_k$ is the step size, and $d_k$ is the search direction determined by:\n\\begin{eqnarray}\\label{dk}\nd_k=\\left\\{\n\\begin{array}{lr}\n-g_k\\qquad\\qquad\\quad\\qquad k=0,&\\\\\n-g_k+\\beta_{k-1}d_{k-1}\\qquad k\\geq 1,&\\\\\n\\end{array} \\right.\n\\end{eqnarray}\nwhere $g_k=\\nabla f(x_k)$ is the gradient of the objective function in the current iteration. The conjugate gradient parameter is $\\beta_k$, whose choice of different values leads to various CG methods.\nThe most well-known of the CG methods are the Hestenes-Stiefel (HS) method \\citep{hestenes}, Fletcher-Reeves (FR) method \\citep{Fletcher64},\nConjugate Descent (CD) \\citep{Fletcher13}, and Polak-Ribiere-Polyak (PRP) \\citep{prp}.\n\nThere are various approaches to determining a suitable step size in each iteration such as Armijo line search, Goldstein line search, and Wolfe line search \\citep{Nocedal}. The Armijo line search finds the largest value of step size in each iteration such that the following inequality holds:\n\\begin{eqnarray}\\label{line}\nf(x_k+\\alpha_kd_k)\\leq f(x_k)+\\gamma\\alpha_kg_k^Td_k\n\\end{eqnarray}\nin which $\\gamma\\in(0,1)$ is a constant parameter.\nGrippo et al. \\citep{Grippo86} introduced a non-monotone Armijo-type line search technique as another way to compute step size.\nThe Incorporation of the non-monotone strategy into the gradient and projected gradient approaches, the conjugate gradient method, and the trust-region methods has led to significant improvements to these methods. Zhang and Hager \\citep{Zhang04} gave some conditions to improve the convergence rate of this strategy. Ahookhosh et al. 
\\citep{Ahookhosh122} built on these results and investigated a new non-monotone condition:\n\\begin{equation}\\label{amin}\nf(x_k+\\alpha_kd_k)\\leq R_k+\\gamma\\alpha_kg_k^T d_k,\n\\end{equation}\nwhere $R_k$ is defined by\n\\begin{eqnarray}\n& R_k=\\eta_k f_{l_k}+(1-\\eta_k)f_k, & \\label{rk}\\label{flk}\\\\\n& \\eta_k\\in[\\eta_{\\min},\\eta_{\\max}],~\\eta_{\\min}\\in[0, 1), \\ \\eta_{\\max}\\in[\\eta_{\\min},1], & \\notag\\\\\n& f_{l_k}=\\max_{0\\leq j\\leq m_k}\\{f_{k-j}\\}, \\nonumber \\\\\n& m_0=0, \\ \\ 0\\leq m_k\\leq \\min\\{m_{k-1}+1,N\\} \\mbox{ for some } N\\geq 0. \n\\end{eqnarray}\nNote that $\\eta_k$ is known as the non-monotone parameter and plays an essential role in the algorithm's convergence.\n\nAlthough this new non-monotone strategy in \\citep{Ahookhosh122} has some appealing properties, especially in functional performance, current algorithms based on this non-monotone strategy\nface the following challenges.\n\\begin{itemize}\n \\item The existing schemes for determining the parameter $\\eta_k$ \nmay not reduce the value of the objective function significantly in initial iterations.\nTo overcome this drawback, we propose a new scheme for choosing $\\eta_k$ \nbased on the gradient behaviour of the objective function.\nThis can reduce the total number of iterations.\n\\item Many evaluations of the objective function are needed to find \nthe step length $\\alpha_k$ in step $k$. \nTo make this step more efficient, we use an adaptive and composite step length procedure from \\citep{Li19} to determine the initial value of the step length in inner iterations.\n\\item The third issue is the global convergence for the non-monotone CG method. Most exiting CG methods use the Wolfe condition, which plays a vital role in establishing the global convergence of various CG methods \\citep{Nazareth01}. Wolfe line search is more expensive than the Armijo line search strategy. Here, we define a suitable conjugate gradient parameter so that the scheme proposed here has global convergence property.\n\n\\end{itemize}\n\n\n\n\n\n\n\n\nBy combining the outlined strategies, we propose a modification to the non-monotone line search method. Then, we incorporate this approach into the CG method and introduce a new non-monotone CG algorithm. We prove that our proposed algorithm has global convergence. Finally, we compare our algorithm and eight other algorithms on standard tests and non-negative matrix factorization instances. We utilize some criteria such as the number of objective function evaluations, the number of gradient evaluations, the number of iterations, and the CPU time to compare the performance of algorithms.\n\n\n\\section{An improved non-monotone line search algorithm} \\label{s:algorithm}\n\nThis section discusses the issues with the state of the art of non-monotone line search strategy, choice of the step sizes, and finally, the conjugate gradient parameter.\n\\subsection{A new scheme of choosing $\\eta_k$}\nRecall that the non-monotone line search strategy is determined by equation \\eqref{amin} in step $k$.\nThe parameter $\\eta_k$ is involved in the non-monotone term (\\ref{flk})\nand its choice can have a significant impact on the performance of the algorithm. There are two common approaches for calculating \n$\\eta_k$.\nThe scheme proposed by Ahookhosh et al. 
\\citep{Ahookhosh122} has been used in most of the existing non-monotone algorithms \\citep{Esmaeili,Ahookhosh_Nu,Amini_App14,Ahookhosh15}.\nThis strategy can be formulated as $\\eta_k=\\frac{1}{3}\\eta_0 (-\\frac{1}{2})^k+\\frac{2}{3}\\eta_0$ \nwhere $\\eta_0=0.15$ and the limit value of $\\eta_k$ is 0.1.\nThe other scheme proposed by Amini et al. \\citep{Amini14}, which depends on the behaviour of gradient is given by:\n\\begin{equation}\\label{Amini's_method}\n\\eta_0=0.95, \\ \\\n\\eta_{k}= \\left\\{\n\\begin{array}{ll}\n\\frac{2}{3}\\eta_{k-1} +0.01, & \\mbox{if } ~\\|g_{k} \\|_{\\infty}\\leq 10^{-3}; \\\\\n\\max\\{ 0.99\\eta_{k-1},0.5\\}, & \\mbox{otherwise}.\n\\end{array} \\right.\n\\end{equation}\nTo illustrate the behaviour of $\\eta_k$ proposed in \\citep{Ahookhosh122} and \\citep{Amini14}, we solve the problem $f(x)= (x_0-5)^2+\\sum_{i=1}^{40} (x_i-1)^2$ for $ x\\in \\Bbb{R}^{41}$. \nThe values of the parameter $\\eta_k$ corresponding to the two schemes are displayed in Fig. \\ref{muk} (Left).\n \\begin{figure}[h!]\n\\centering\n \\includegraphics[width=.45\\textwidth]{a1.jpg}\n \\includegraphics[width=.45\\textwidth]{a4.jpg}\n\\caption{(Left): Values of $\\eta_k$ proposed in \\citep{Ahookhosh122} and \\citep{Amini14}, (Right): Values of $\\eta_k$ for the new scheme.}\n\\label{muk}\n\\end{figure}\nAs shown in Fig. \\ref{muk}, for the scheme proposed by Ahookhosh et al. \\citep{Ahookhosh122}, $\\eta_k$\nis close to $0.1$ after only a few iterations. {Notice that $\\eta_k$ in each iteration does not have any connection with the behaviour of the objective function. Thus this scheme is not effective.} In addition, there are two issues with the scheme introduced by Amini et al. in \\citep{Amini14}.\nOne problem indicated by Fig. \\ref{muk} is that that $\\eta_k$ decreases relatively\nquickly for the first 65 iterations.\n{Since the algorithm requires the long iterations to solve his problem}, ideally $\\eta_k$ should be close to 1 for these initial iterations.\nThe second problem is that the value of $\\eta_k$ remains the same for a large number of iterations and it is not affected by the behaviour of the objective function.\n\nTo avoid theses challenges, we propose an adaptive strategy for calculating the value of $\\eta_k$:\n\\begin{equation} \\label{eq:etakn}\n\\eta_{k}=0.95\\sin\\left(\\frac{\\pi \\|g_{k}\\|}{1+2\\|g_{k}\\|}\\right)+0.01.\n\\end{equation}\nWhen $x_k$ is far away from the minimizer, we can reasonably assume that $\\|g_k\\|$ is large. Thus the value of $\\eta_k$ defined by \\eqref{eq:etakn} is close to 1.\nThis makes the scheme closer to the original non-monotone strategy in the initial iterations, providing a chance to reduce the value of the objective function more significantly in the initial iterations. On the other hand, when $x_k$ is close to the minimizer, $\\|g_k\\|$ is small, then the value of $\\eta_k$ is close to zero. Thus, the step length is small so that the new point stays in the neighbourhood of the optimal point. Thus the new scheme is closer to the monotone strategy. We plot the behaviour of $\\eta_k$ denoted by \\eqref{eq:etakn} in Fig. \\ref{muk} (Right), using the same values of the gradient for the optimization problem mentioned above.\n\n\\subsection{ New schemes for choosing $\\alpha_k$ }\nWe utilize a convex combination of the Barzilai-Borwein (BB) step sizes to calculate an appropriate $\\alpha_k$ in each outer iteration as in \\citep{Li19}. 
Our strategy calculates the value of $\\alpha_k$, using the following equation:\n\\begin{equation}\\label{newalpha}\n\\alpha_k^{{\\scriptscriptstyle \\textrm{CBB}}} =\\mu_k\\alpha^{(1)}_k+(1-\\mu_k)\\alpha^{(2)}_k,\n\\end{equation}\nwhere\n\\begin{eqnarray*}\n\t&\\alpha_k^{(1)}=\\frac{s_k^Ts_k}{s_k^Ty_k},\\quad \\alpha^{(2)}_k=\\frac{s_k^Ty_k}{y_k^Ty_k},\\quad s_k:=x_k-x_{k-1},\\quad y_k:=g_k-g_{k-1};&\\\\\n\t&\\mu_k=\\frac{K_2}{K_1+K_2}\\quad\n\tK_1=\\|\\alpha^{(1)}_k y_k-s_k\\|^2,\\quad K_2=\\|(\\alpha^{(2)}_k)^{-1}s_k-y_k\\|^2.&\n\\end{eqnarray*}\n\\subsection{Conjugate gradient parameter}\nHere, we propose the new conjugate gradient parameter given by:\n\\begin{eqnarray}\\label{cgpar}\n\\beta_k=\\omega \\frac{\\|g_k\\|}{\\|d_{k-1}\\|},\\quad \\omega \\in (0,1).\n\\end{eqnarray}\nThe complete algorithm is in Appendix \\ref{AppA} (see Algorithm \\ref{alg1}). The next lemma proves a key property of $\\beta_k$ which is very important in proving the algorithm's convergence. The proofs are in the Appendix \\ref{AppA}.\n\\begin{lemma}\\label{decent}\nFor the search direction $d_k$ and the constant $c>0$ we have:\n\t\\begin{eqnarray}\n\td_k^Tg_k\\leq -c\\|g_k\\|.\n\t\\end{eqnarray}\n\\end{lemma}\n The following assumptions are used to analyze the convergence properties of Algorithm \\ref{alg1}.\n\\begin{description}\n\t\\item[H1] The level set $\n\t\\mathcal{L}(x_0)=\\{x|f(x)\\leq f(x_0),~~~~x\\in \\Bbb{R}^n\\}$ is bounded set.\n\t\\item[H2] The gradient of objective function is Lipschitz continuous over an open convex set $C$ containing $\t\\mathcal{L}(x_0)$. That is:\n\t\\begin{equation*}\n\t\\|g(x)-g(y)\\|\\leq L\\|x-y\\|,\\qquad \\forall ~x,y\\in C.\n\t\\end{equation*}\n\\end{description}\nWe prove the following Theorem about the global convergence of Algorithm \\ref{alg1}, the proof of which follows from the Lemmas presented in \nthis section. Please see the appendix for the proofs.\n\n\n\\begin{theorem}\\label{glob}\n\t{Let $(H1)$, $(H2)$, and Lemmas \\ref{decent} and \\ref{aboveserch} hold. Then, for the\n\tsequence $\\{x_k\\}$ generated by Algorithm \\ref{alg1}, we have $\\lim_{k\\rightarrow \\infty} \\|g_k\\|=0.$\n}\\end{theorem}\n\n\n\\begin{lemma}\\label{aboveserch}\n\tSuppose that the search direction $d_k$ with the CG parameter $\\beta_k$ given by (\\ref{cgpar}) is generated by Algorithm \\ref{alg1}. Then, an upper bound for $d_k$ is given by $\\|d_k\\|\\leq (1+\\omega)\\|g_k\\|.$\n\\end{lemma}\n\\begin{lemma}\\label{low-bou}\n\tSuppose that $x_k$ is not a stationary point of (\\ref{general}). Then there exists a constant\n\t\\begin{equation*}\n\t{\\lambda}=\\min \\left\\{\\beta_1\\rho,\\frac{2(1-\\omega)\\rho(1-\\gamma)}{L(1+\\omega)^2}\\right\\},\n\t\\end{equation*}\n\tsuch that $\\alpha_k\\geq {\\lambda}$.\n\\end{lemma}\n\n\n\n\\section{Numerical Results}\nIn this section we test the new algorithm to solve a set of standard optimization problems and the non-negative matrix factorization problem, which is a non-convex optimization problem. The implementation level details are in Appendix \\ref{AppB}.\nTo demonstrate the efficiency of the proposed algorithm, we compare our algorithm and eight other existing algorithms introduced in \\citep{Ahookhosh122,Amini14,Jiang,Zhang} on a set of $110$ standards test problems. 
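For concreteness, the following Python sketch restates the scheme being tested here as described in Section~\ref{s:algorithm}: the search direction (\ref{dk}) with the parameter (\ref{cgpar}), the non-monotone term built from (\ref{eq:etakn}) and (\ref{rk}), the composite Barzilai-Borwein trial step (\ref{newalpha}), and backtracking until the non-monotone Armijo condition (\ref{amin}) holds. It is a simplified illustration rather than the implementation used for the experiments (see Algorithm~\ref{alg1} and Appendix~\ref{AppB} for those details); the parameter values and safeguards below are placeholders.
\begin{verbatim}
import numpy as np

def nmcg(f, grad, x0, gamma=1e-4, rho=0.5, omega=0.5, N=10,
         tol=1e-6, max_iter=1000):
    # illustrative sketch of the proposed non-monotone CG scheme
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    fvals = [f(x)]                       # history used by the non-monotone term
    for k in range(max_iter):
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        # adaptive non-monotone parameter eta_k (trigonometric rule)
        eta = 0.95 * np.sin(np.pi * gnorm / (1.0 + 2.0 * gnorm)) + 0.01
        R = eta * max(fvals[-(N + 1):]) + (1.0 - eta) * fvals[-1]
        # composite Barzilai-Borwein trial step; fall back to 1 when undefined
        if k == 0 or s @ y <= 0.0:
            alpha = 1.0
        else:
            a1, a2 = (s @ s) / (s @ y), (s @ y) / (y @ y)
            K1 = np.linalg.norm(a1 * y - s) ** 2
            K2 = np.linalg.norm(s / a2 - y) ** 2
            mu = K2 / (K1 + K2)
            alpha = mu * a1 + (1.0 - mu) * a2
        # backtracking until the non-monotone Armijo condition holds
        while f(x + alpha * d) > R + gamma * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # CG direction with beta_k = omega * ||g_k|| / ||d_{k-1}||
        beta = omega * np.linalg.norm(g_new) / np.linalg.norm(d)
        d = -g_new + beta * d
        x, g = x_new, g_new
        fvals.append(f(x))
    return x

# the small test problem used earlier: f(x) = (x_0-5)^2 + sum_{i>0}(x_i-1)^2
f = lambda x: (x[0] - 5.0) ** 2 + np.sum((x[1:] - 1.0) ** 2)
grad = lambda x: np.concatenate(([2.0 * (x[0] - 5.0)], 2.0 * (x[1:] - 1.0)))
print(nmcg(f, grad, np.zeros(41))[:2])   # approaches (5, 1, ..., 1)
\end{verbatim}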
To describe the behaviour of each strategy, we use\nperformance profiles proposed by Dolan and Mor\u00e9 \\citep{Dolan}.\nNote that the performance profile for an algorithm $p_s(\\tau): \\Bbb{R}\\mapsto [0, 1]$ is a non-decreasing, piece-wise constant function, continuous from the right at each breakpoint. Moreover, the value $p_s(1)$ denotes the probability that the algorithm will win against the rest of the algorithm. More information on the performance profile is in Appendix \\ref{AppB}. We plot the performance profile of each algorithm in terms of the total number of outer iteration and the CPU time on the set of standard test problems in Fig. \\ref{results}. \n \\begin{figure}[h!]\n\\centering\n \\includegraphics[width=.45\\textwidth]{1.jpg}\n \\includegraphics[width=.45\\textwidth]{3.jpg}\n\\caption{(Left): Performance profiles of the total number of outer iterations, (Right): Performance profiles of CPU Time.}\n\\label{results}\n\\end{figure}\n\n\nWe also apply our algorithm to solve the Non-Negative Matrix Factorization (NMF)\nwhich has several applications in image processing such as face detection problems. \nGiven a non-negative matrix $V\\in\\Bbb{R}^{m\\times n}$, a NMF\nfinds two non-negative matrices\n$W\\in\\Bbb{R}^{m\\times k}$ and $H\\in\\Bbb{R}^{k\\times n}$ with\n$k\\ll\\min(m,n)$ such that $X\\approx WH$. This problem can be formulated as\n\\begin{equation}\\label{opti-n}\n\\min_{W,H\\geq0} F(W,H)=\\frac{1}{2}\\|V-WH\\|_{F}^2.\n\\end{equation}\nEquation \\eqref{opti-n} is a non-convex optimization problem. We compare our method and Zhang's algorithm \\citep{Zhang} on some random datasets and reported these results in Appendix \\ref{AppB}. \n\n\n\\section{Conclusion} In this paper, we introduced a new non-monotone conjugate gradient algorithm based on efficient Barzilai-Borwein step size. We introduced a new non-monotone parameter based on gradient behaviour and determined by a trigonometric function. We use a convex combination of the determined method to compute the step size value in each iteration. We prove that the proposed algorithm has global convergence. We implemented and tested our algorithm on a set of standard test problems and the non-negative matrix factorization problems. The proposed algorithm can solve $98\\%$ of the test problems for a set of standard test instances. For the non-negative matrix factorization, the results indicate that our algorithm is more efficient compared to Zhang' s method \\citep{Zhang}. \n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\section{Introduction}\n\nThe difference-of-log-Normals distribution, henceforth DLN, is the distribution arising when one subtracts a log-Normal random variable (RV) from another. 
To define the DLN, consider an RV $W$ such that \n\\begin{equation} \\label{eq:DLN}\nW = Y_{p} - Y_{n} = \\text{exp}(X_{p}) - \\text{exp}(X_{n}) \\ \\ \\text{with} \\ \\ \\pmb{X} = (X_{p},X_{n})^{T} \\sim \\mathcal{N}(\\pmb{\\mu},\\pmb{\\Sigma})\n\\end{equation}\nin which $\\pmb{X}$ is a bi-variate Normal with\n\\begin{equation} \\label{eq:BVN}\n\\pmb{\\mu} = \\begin{bmatrix} \\mu_p \\\\ \\mu_n \\end{bmatrix} \\ \\ \\ \n\\pmb{\\Sigma} = \\begin{bmatrix} \\sigma_p^2 & \\sigma_p\\cdot\\sigma_n\\cdot\\rho_{pn} \\\\ \\sigma_p\\cdot\\sigma_n\\cdot\\rho_{pn} & \\sigma_n^2 \\end{bmatrix}\n\\end{equation}\nWe say $W$ follows the five-parameter DLN distribution, $W \\sim \\text{DLN}(\\mu_p,\\sigma_p,\\mu_n,\\sigma_n,\\rho_{pn})$.\n\nThe companion paper \\cite{Parham2022} makes the case that the DLN is a \\emph{fundamental distribution in nature}, in the sense that it arises naturally in a plethora of disparate natural phenomena, similar to the Normal and log-Normal distributions. It shows that firm income, return, and growth are all well-described by the DLN, it further shows that city population growth, per-county GDP growth, and the per-industry per-Metro GDP growth all show remarkable fit to the DLN. \\cite{Parham2022} describes how the emergence of the DLN is a direct result of an application of the Central Limit Theorems and ``Gibrat's Law'' when applied to various economic phenomena. As the DLN is almost completely unexplored,\\footnote{At the time of writing, I was able to find only two statistical works considering it, \\cite{Lo2012} and \\cite{GulisashviliTankov2016}. Both papers concentrate on the sum of log-Normals but show their results hold for the difference of log-Normals as well, under some conditions.} this paper aims to fill the gap.\n\nThe next section fully characterizes the DLN distribution, deriving its PDF, CDF, central moments, and estimators for the distribution parameters given data. It also introduces an extension of the DLN to the multi-variate N-dimensional case using elliptical distribution theory. A full suite of computer code is provided for future use.\n\nNext, Section~\\ref{sec:Methods} discusses the difficulty of working with the raw DLN distribution, stemming from its characteristic ``double-exponential'' heavy tails. To alleviate this difficulty, I discuss the close link between the DLN and the Hyperbolic Sine (\\emph{sinh}) function and its inverse (\\emph{asinh}) and present the ADLN distribution - the DLN under an asinh transform. The section then considers the problem of measuring growth in DLN-distributed RVs. To that end, it generalizes the concept of growth, currently defined only for strictly positive RVs, to DLN RVs that are sometimes negative. I show that the appropriate growth concept for an RV (e.g. percentage, difference in logs, or DLN-growth) intimately depends on the RV's statistical distribution.\n\nSection~\\ref{sec:MC} explores the properties of the estimators presented via extensive Monte-Carlo experiments. It: (i) reports the empirical bias and variance of the moment estimators and the MLE parameter estimators; (ii) establishes critical values for the Kolmogorov-Smirnov and Anderson-Darling distributional tests for DLN RVs; and (iii) presents the relation between the measures of growth developed in Section~\\ref{sec:Methods}. 
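Because the DLN is defined constructively by~\eqref{eq:DLN}-\eqref{eq:BVN}, simulating it is immediate. As a point of reference for the sections that follow, the short Python sketch below (our own illustration, independent of the accompanying code suite) draws DLN samples directly from the definition:
\begin{verbatim}
import numpy as np

def sample_dln(mu_p, sig_p, mu_n, sig_n, rho_pn, size, rng=None):
    # draw W = exp(X_p) - exp(X_n) with (X_p, X_n) bivariate Normal
    rng = np.random.default_rng() if rng is None else rng
    mean = np.array([mu_p, mu_n])
    cov = np.array([[sig_p**2,           sig_p*sig_n*rho_pn],
                    [sig_p*sig_n*rho_pn, sig_n**2          ]])
    X = rng.multivariate_normal(mean, cov, size=size)
    return np.exp(X[:, 0]) - np.exp(X[:, 1])

# the uncorrelated standard DLN(0,1,0,1,0); asinh makes the heavy tails visible
w = sample_dln(0.0, 1.0, 0.0, 1.0, 0.0, size=100_000)
print(np.mean(w), np.var(w), np.mean(np.arcsinh(w)))
\end{verbatim}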
\\comments{Finally, it discusses using the DLN as an approximating distribution, and presents evidence that the DLN is an excellent approximating distribution for several distributions, including the distributions arising from the sum of two DLN RVs, the multiplication of DLN by Normal RVs, and the multiplication of Normal by log-Normal RVs.}\n\n\n\n\\section{Definitions and properties}\n\\label{sec:Def}\n\nPrior to proceeding, and to fix ideas, Figure~\\ref{fig:DLNexam} presents several instances of the DLN distribution. Panel (a) presents and contrasts the standard Normal, standard DLN, and standard log-Normal. The uncorrelated standard DLN is defined as DLN(0,1,0,1,0), i.e. the difference between two exponentiated uncorrelated standard Normal RVs. Panel (b) shows the role of the correlation coefficient $\\rho_{pn}$ in the standard DLN, controlling tail-weight vs. peakedness. Panel (c) repeats the analysis of Panel (b) for a different parametrization common in practical applications, exhibiting the problem of dealing with heavy tails. Panel (d) presents the data of panel (c) in asinh space, showing how asinh resolves the problem of graphing heavy tails and why the ADLN distribution is useful in practice.\n\n\\RPprep{DLN Examples}{0}{0}{DLNexam}{%\n This figure presents examples of the DLN distribution. Panel (a) graphs the PDFs of the standard Normal, log-Normal, and DLN. Panel (b) graphs the PDFs of standard DLN with different correlation coefficients $\\rho_{pn}$. Panel (c) presents the PDFs of a DLN with parameters $(3,2,2,2)$, common in practice, and varying correlation coefficients $\\rho_{pn}$. Panel (c) presents the PDF for the range $\\pm 10$, which is a significant truncation due to the long tails of this DLN. Panel (d) presents the same PDFs as Panel (c), but the x-axis is asinh-transformed, such that it spans the range sinh(-10) $\\approx$ -11,000 to sinh(10) $\\approx$ 11,000.\n}\n\\RPfig{%\n\t\\begin{tabular}{cc} \n\t\t\\subfigure[standard DLN, N, LN]{\\includegraphics[width=3in]{Img\/DLN_N_LN.pdf}} & \n\t\t\\subfigure[Std. DLN w\/ corrs]{\\includegraphics[width=3in]{Img\/SDLN_corrs.pdf}} \\\\ \\\\\n\t\t\\subfigure[DLN w\/ corrs]{\\includegraphics[width=3in]{Img\/DLN_corrs.pdf}} &\n\t\t\\subfigure[ADLN w\/ corrs]{\\includegraphics[width=3in]{Img\/ADLN_corrs.pdf}} \\\\ \\\\\n\t\\end{tabular}\n}\n\n\n\n\\subsection{PDF and CDF}\n\nThe PDF for the bi-variate Normal (BVN) RV $\\pmb{X}$ is well-known to be\n\\begin{equation} \\label{eq:PDFBVN}\nf_{BVN}(\\pmb{x}) = \\frac{\\lvert\\pmb{\\Sigma}\\rvert^{-\\frac{1}{2}}}{2\\pi}\\cdot \\text{exp}\\left(-\\frac{1}{2} (\\pmb{x}-\\pmb{\\mu})^{T} \\pmb{\\Sigma}^{-1} (\\pmb{x}-\\pmb{\\mu})\\right) = \\frac{\\lvert\\pmb{\\Sigma}\\rvert^{-\\frac{1}{2}}}{2\\pi}\\cdot \\text{exp}\\left(-\\frac{1}{2} \\lvert\\lvert\\pmb{x}-\\pmb{\\mu}\\rvert\\rvert_{\\pmb{\\Sigma}}\\right)\n\\end{equation} \nwith $\\lvert\\pmb{\\Sigma}\\rvert$ the determinant of $\\pmb{\\Sigma}$ and $\\lvert\\lvert\\pmb{x}\\rvert\\rvert_{\\pmb{\\Sigma}}$ the Euclidean norm of $\\pmb{x}$ under the Mahalanobis distance induced by $\\pmb{\\Sigma}$.\n\nThe PDF for the bi-variate log-Normal (BVLN) RV $\\pmb{Y} = (Y_{p},Y_{n})^{T}$ can be obtained by using the multivariate change of variables theorem. 
If $\\pmb{Y}=g(\\pmb{X})$ then\n\\begin{equation}\nf_{Y}(\\pmb{y}) = f_{X}(g^{-1}(\\pmb{y})) \\cdot \\lvert\\lvert J_{g^{-1}}(\\pmb{y})\\rvert\\rvert\n\\end{equation}\nwith $J_{g^{-1}}$ the Jacobian matrix of $g^{-1}(\\cdot)$ and $\\lvert\\lvert J_{g^{-1}}\\rvert\\rvert$ the absolute value of its determinant. Applying the theorem for $\\pmb{Y} = g(\\pmb{X}) = (\\text{exp}(X_p),\\text{exp}(X_n))^{T}$ we have $g^{-1}(\\pmb{y}) = (log(y_p),log(y_n))^{T}$ and $\\lvert\\lvert J_{g^{-1}}(\\pmb{y})\\rvert\\rvert = (y_p\\cdot y_n)^{-1}$. The PDF of a BVLN RV is then\n\\begin{equation} \\label{eq:PDFBVLN}\nf_{BVLN}(\\pmb{y}) = \\frac{\\lvert\\pmb{\\Sigma}\\rvert^{-\\frac{1}{2}}}{2\\pi y_p y_n} \\text{exp}\\left(-\\frac{1}{2}\\lvert\\lvert\\log(\\pmb{y})-\\pmb{\\mu}\\rvert\\rvert_{\\pmb{\\Sigma}}\\right)\n\\end{equation}\n\nWe can now define the cumulative distribution function (CDF) of the DLN distribution using the definition of the CDF of the difference of two RV\n\\begin{equation} \\label{eq:CDFDLN1}\n\\begin{split}\nF_{DLN}(w) & = P[W\\leq w] = P[y_{p} - y_{n} \\leq w] = P[y_{p} \\leq y_{n} + w] \\\\\n & = \\int_{-\\infty}^{\\infty}\\int_{-\\infty}^{y_{n}+w}f_{BVLN}(y_{p},y_{n})dy_{p}dy_{n}\n\\end{split}\n\\end{equation}\nwhich can be differentiated w.r.t $w$ to yield the PDF\n\\begin{equation} \\label{eq:PDFDLN1}\nf_{DLN}(w) = \\int_{-\\infty}^{\\infty}f_{BVLN}(y+w,y)dy = \\int_{-\\infty}^{\\infty}f_{BVLN}(y,y-w)dy\n\\end{equation}\nbut because $f_{BVLN}(\\pmb{y})$ is non-zero only for $\\pmb{y}>0$, we limit the integration range\n\\begin{equation} \\label{eq:PDFDLN}\nf_{DLN}(w) = \\int_{\\text{max}(0,w)}^{\\infty}f_{BVLN}(y,y-w)dy\n\\end{equation}\nwhich yields the PDF of the DLN distribution.\n\nIt is well-known, however, that the integral in equation~\\ref{eq:PDFDLN} does not have a closed-form solution. The accompanying code suite evaluates it numerically, and also numerically evaluates the CDF using its definition\n\\begin{equation} \\label{eq:CDFDLN}\nF_{DLN}(w) = \\int_{-\\infty}^{w}f_{DLN}(y)dy\n\\end{equation}\n\n\\begin{sloppypar}\nFor the simpler case with difference of uncorrelated log-Normals, i.e. $\\rho_{pn}=0$, we can derive the PDF of the DLN via a characteristic function (CF) approach as well. In this case, we can write the CF of the DLN as ${\\varphi_{DLN}(t)=\\varphi_{LN}(t)\\cdot\\varphi_{LN}(-t)}$ with $\\varphi_{LN}(t)$ the CF of the log-Normal. Next, we can apply a Fourier transform to obtain the PDF,\n\\begin{equation} \\label{eq:PDFDLNCF}\nf_{DLN}(w) = \\frac{1}{2\\pi}\\int_{-\\infty}^{\\infty}e^{-i\\cdot t\\cdot w} \\cdot \\varphi_{DLN}(t)dt\n\\end{equation}\nUnfortunately, the log-Normal does not admit an analytical CF, and using Equation~\\ref{eq:PDFDLNCF} requires a numerical approximation for $\\varphi_{LN}(t)$ as well. \\cite{Gubner2006} provides a fast and accurate approximation method for the CF of the log-Normal which I use in the calculation of $f_{DLN}(w)$ when using this method.\n\\end{sloppypar}\n\n\n\\subsection{Moments}\n\\label{sec:Moms}\n\n\\subsubsection{MGF}\n\nThe moment generating function (MGF) of the DLN can be written as\n\\begin{equation} \\label{eq:MGFDLN}\nM_{W}(t) = \\mathbb{E}\\left[e^{tW}\\right] = \\int_{-\\infty}^{\\infty}\\int_{-\\infty}^{\\infty}e^{tw}f_{BVLN}(y+w,y)dydw\n\\end{equation}\nbut this formulation has limited usability due to the lack of closed-form solution for the integrals. 
Instead, it is useful to characterize the moments directly, as we can obtain them in closed-form.\n\n\n\n\\subsubsection{Mean and variance}\n\nUsing the definitions of $\\pmb{\\mu}$ and $\\pmb{\\Sigma}$ in \\ref{eq:BVN}, define the mean and covariance of the BVLN RV, $\\pmb{\\hat{\\mu}}$ and $\\pmb{\\hat{\\Sigma}}$ (element-wise) as\n\\begin{equation} \\label{eq:BVLN}\n\\begin{split}\n\\pmb{\\hat{\\mu}}_{(i)} & = \\text{exp}\\left(\\pmb{\\mu}_{(i)} + \\frac{1}{2}\\pmb{\\Sigma}_{(i,i)}\\right) \\\\\n\\pmb{\\hat{\\Sigma}}_{(i,j)} & = \\text{exp}\\left(\\pmb{\\mu}_{(i)} + \\pmb{\\mu}_{(j)} + \\frac{1}{2}\\left(\\pmb{\\Sigma}_{(i,i)} + \\pmb{\\Sigma}_{(j,j)}\\right)\\right)\\cdot \\left( \\text{exp}\\left(\\pmb{\\Sigma}_{(i,j)}\\right)-1\\right) \\\\\n\\end{split}\n\\end{equation}\nNote that if $\\pmb{\\Sigma}$ is diagonal (i.e., $X_{p}$ and $X_{n}$ are uncorrelated) then $\\pmb{\\hat{\\Sigma}}$ will be diagonal as well. We are however interested in the general form of the DLN distribution. The identities regarding the expectation and variance of a sum of RV yield\n\\begin{equation} \\label{eq:MUDLN}\n\\mathbb{E}\\left[W\\right] = \\mathbb{E}\\left[Y_p\\right] - \\mathbb{E}\\left[Y_n\\right] = \\pmb{\\hat{\\mu}}_{(1)} - \\pmb{\\hat{\\mu}}_{(2)} = \\text{exp}(\\mu_p + \\frac{\\sigma_p^2}{2}) - \\text{exp}(\\mu_n + \\frac{\\sigma_n^2}{2})\n\\end{equation}\nand\n\\begin{equation} \\label{eq:SIGDLN}\n\\begin{split}\n\\text{Var}\\left[W\\right] & =\\mathbb{C}\\left[Y_p,Y_p\\right] + \\mathbb{C}\\left[Y_n,Y_n\\right] -2\\cdot\\mathbb{C}\\left[Y_p,Y_n\\right] = \\pmb{\\hat{\\Sigma}}_{(1,1)} + \\pmb{\\hat{\\Sigma}}_{(2,2)} - 2\\cdot\\pmb{\\hat{\\Sigma}}_{(1,2)} \\\\\n & = \\text{exp}\\left(2\\mu_{p}+\\sigma_p^2\\right)\\cdot\\left(exp\\left(\\sigma_p^2\\right) - 1\\right) \n + \\text{exp}\\left(2\\mu_{n}+\\sigma_n^2\\right)\\cdot\\left(exp\\left(\\sigma_n^2\\right) - 1\\right) \\\\\n & - 2\\text{exp}\\left(\\mu_{p}+\\mu_{n}+\\frac{1}{2}(\\sigma_p^2+\\sigma_n^2)\\right)\n \\cdot\\left(\\text{exp}\\left(\\sigma_p\\sigma_n\\rho_{pn}\\right) - 1\\right)\n\\end{split}\n\\end{equation}\nwith $\\mathbb{C}$ the covariance operator of two general RV $U_{1},U_{2}$\n\\begin{equation} \\label{eq:COVAR}\n\\mathbb{C}\\left[U_{1},U_{2}\\right] = \\mathbb{E}\\left[(U_1 - \\mu_1)(U_2-\\mu_2)\\right]\n\\end{equation}\n\n\n\n\\subsubsection{Skewness and kurtosis}\n\nSkewness and kurtosis of the DLN can similarly be established using coskewness and cokurtosis (for overview, see e.g. \\cite{Miller2013}). Coskewness of three general RV $U_{1},U_{2},U_{3}$ is defined as\n\\begin{equation} \\label{eq:COSKEW}\n\\mathbb{S}\\left[U_{1},U_{2},U_{3}\\right] = \\frac{\\mathbb{E}\\left[(U_1 - \\mu_1)(U_2-\\mu_2)(U_3-\\mu_3)\\right]}{\\sigma_1\\sigma_2\\sigma_3}\n\\end{equation}\nand cokurtosis of four general RV $U_{1},U_{2},U_{3},U_{4}$ is defined as \n\\begin{equation} \\label{eq:COKURT}\n\\mathbb{K}\\left[U_{1},U_{2},U_{3},U_{4}\\right] = \\frac{\\mathbb{E}\\left[(U_1 - \\mu_1)(U_2-\\mu_2)(U_3-\\mu_3)(U_4-\\mu_4)\\right]}{\\sigma_1\\sigma_2\\sigma_3\\sigma_4}\n\\end{equation}\nwith the property that $\\mathbb{S}\\left[U,U,U\\right] = \\text{Skew}\\left[U\\right]$ and $\\mathbb{K}\\left[U,U,U,U\\right] = \\text{Kurt}\\left[U\\right]$. 
More importantly, it is simple to show that\n\\begin{equation} \\label{eq:SKEWDIFF}\n\\text{Skew}\\left[U-V\\right] = \\frac{\\sigma_U^3\\mathbb{S}\\left[U,U,U\\right] -3\\sigma_U^2\\sigma_V\\mathbb{S}\\left[U,U,V\\right]+3\\sigma_U\\sigma_V^2\\mathbb{S}\\left[U,V,V\\right] -\\sigma_V^3\\mathbb{S}\\left[V,V,V\\right]}{\\sigma_{U-V}^{3}}\n\\end{equation}\nand similarly\n\\begin{equation} \\label{eq:KURTDIFF}\n\\begin{split}\n\\text{Kurt}\\left[U-V\\right] & = \\frac{1}{\\sigma_{U-V}^{4}} [ \\sigma_U^4\\mathbb{K}\\left[U,U,U,U\\right] -4\\sigma_U^3\\sigma_V\\mathbb{K}\\left[U,U,U,V\\right] \\\\ & + 6\\sigma_U^2\\sigma_V^2\\mathbb{K}\\left[U,U,V,V\\right] -4\\sigma_U\\sigma_V^3\\mathbb{K}\\left[U,V,V,V\\right]+\\sigma_V^4\\mathbb{K}\\left[V,V,V,V\\right] ]\n\\end{split}\n\\end{equation}\nwith $\\sigma_{U-V} = \\text{Var}\\left[U-V\\right]^{\\frac{1}{2}}$ calculated using Equation~\\ref{eq:SIGDLN}. Evaluating the operators $\\mathbb{S}$ and $\\mathbb{K}$ for the case of DLN requires evaluating expressions of the general form $\\mathbb{E}\\left[Y_{p}^{i}Y_{n}^{j}\\right]$, which can be done via the MGF of the BVN distribution\n\\begin{equation} \\label{eq:EUVSimp}\n\\mathbb{E}\\left[Y_{p}^{i}Y_{n}^{j}\\right] = \\mathbb{E}\\left[e^{i X_p}e^{j X_n}\\right] = \\text{MGF}_{BVN}\\left(\\big[\\begin{smallmatrix} i \\\\ j \\end{smallmatrix}\\big] \\right) = \\mathbb{E}\\left[Y_{p}^{i}\\right]\\mathbb{E}\\left[Y_{n}^{j}\\right]e^{ij\\pmb{\\Sigma}_{(1,2)}}\n\\end{equation}\nwith $\\mathbb{E}\\left[Y_{p}^{i}\\right]=\\text{exp}\\left(i\\mu_{p} + \\frac{1}{2}i^2\\sigma_{p}^2\\right)$. This concludes the technical details of the derivation. \n\nThe method presented can be extended to higher central moments as well. The accompanying code suite includes functions that implement the equations above and use them to calculate the first five moments of the DLN given the parameters $(\\mu_p,\\sigma_p,\\mu_n,\\sigma_n,\\rho_{pn})$. Section~\\ref{sec:MC} later describes the results of Monte-Carlo experiments testing the empirical variance and bias of the moments as a function of sample size.\n\n\n\n\\subsection{Estimation}\n\\label{sec:Estim}\n\nGiven data $\\pmb{D} \\sim \\text{DLN}(\\pmb{\\Theta})$ with $\\pmb{\\Theta} = (\\mu_p,\\sigma_p,\\mu_n,\\sigma_n,\\rho_{pn})$, we would like to find an estimate $\\pmb{\\hat{\\Theta}}$ to the parameter vector $\\pmb{\\Theta}$. Experiments show that given an appropriate initial guess, the MLE estimates of $\\pmb{\\Theta}$ perform well in practice. The main parameter of difficulty is $\\rho_{pn}$. This parameter is akin to the shape parameter in the Stable distribution, which plays a similar role and is similarly difficult to estimate, see e.g. \\cite{FamaRoll1971}. It hence requires special care in the estimation.\n\nThe estimation code provided minimizes the negative log-likelihood of the data w.r.t the DLN PDF using a multi-start algorithm. 
The starting values for the first four parameters are fixed for all start points as:\n\\begin{equation} \\label{eq:ESTIM_GUESS}\n\\begin{bmatrix}\n\\mu_p \\\\ \\sigma_p \\\\ \\mu_n \\\\ \\sigma_n\n\\end{bmatrix} = \n\\begin{bmatrix}\n\\text{Median}\\left[\\text{log}\\left(\\pmb{D}\\right)\\right] \\ \\ \\text{for} \\ \\ \\pmb{D}>0 \\\\\n\\text{IQR}\\left[\\text{log}\\left(\\pmb{D}\\right)\\right]\/1.35 \\ \\ \\text{for} \\ \\ \\pmb{D}>0 \\\\\n\\text{Median}\\left[\\text{log}\\left(-\\pmb{D}\\right)\\right] \\ \\ \\text{for} \\ \\ \\pmb{D}<0 \\\\\n\\text{IQR}\\left[\\text{log}\\left(-\\pmb{D}\\right)\\right]\/1.35 \\ \\ \\text{for} \\ \\ \\pmb{D}<0 \\\\\n\\end{bmatrix}\n\\end{equation}\nwhile the initial guesses for $\\rho_{pn}$ are $(-0.8,-0.3,0,0.3,0.8)$. The estimator $\\pmb{\\hat{\\Theta}}$ is then the value which minimizes the negative log-likelihood in the multi-start algorithm. The estimator inherits asymptotic normality, consistency, and efficiency properties from the general M-estimator theory, as the dimension of $\\pmb{\\hat{\\Theta}}$ is fixed, the likelihood is smooth, and is supported on $\\mathbb{R}\\ \\forall \\pmb{\\hat{\\Theta}}$. A better estimation procedure for the parameters of the DLN might be merited, but is left for future work.\n\n\n\n\\subsection{The elliptical multi-variate DLN}\n\\label{sec:mvsdln}\n\nPractical applications of the DLN require the ability to work with multi-variate DLN RVs. I hence present an extension of the DLN to the multi-variate case using elliptical distribution theory, with the standard reference being \\cite{FangEtAl1990}.\n\n\\begin{sloppypar}\nThe method of elliptical distributions requires a symmetric baseline distribution. We will therefore focus our attention on the symmetric DLN case in which ${\\mu_p=\\mu_n\\equiv\\mu}$ and ${\\sigma_p = \\sigma_n\\equiv\\sigma}$, yielding the three parameter uni-variate symmetric distribution $\\text{SymDLN}(\\mu,\\sigma,\\rho)=\\text{DLN}(\\mu,\\sigma,\\mu,\\sigma,\\rho)$. I begin by defining a standardized N-dimensional elliptical DLN RV using SymDLN and the spherical decomposition of \\cite{CambanisEtAl1981}, and later extend it to a location-scale family of distributions.\n\\end{sloppypar}\n\nLet $\\mathbf{U}$ be an N-dimensional RV distributed uniformly on the unit hyper-sphere in $\\mathbb{R}^{N}$ and arranged as a column vector. Let $R\\geq0$ be a uni-variate RV independent of $\\mathbf{U}$ with PDF $f_{R}\\left(r\\right)$ to be derived momentarily, and let $\\mathbf{Z}=R\\cdot\\mathbf{U}$ be a standardized N-dimensional elliptical DLN RV. A common choice for $\\mathbf{U}$ is $\\widehat{\\mathbf{U}}\/\\lvert\\lvert\\widehat{\\mathbf{U}}\\rvert\\rvert_{2}$ with $\\widehat{\\mathbf{U}} \\sim MVN(\\mathbf{0}_N,\\mathbf{1}_N)$. $\\mathbf{U}$ captures a direction in $\\mathbb{R}^{N}$, and we have $\\sqrt{\\mathbf{U}^{T}\\cdot\\mathbf{U}} = \\lvert\\lvert\\mathbf{U}\\rvert\\rvert_{2} \\equiv 1$, which implies $\\sqrt{\\mathbf{Z}^{T}\\cdot\\mathbf{Z}} = \\lvert\\lvert\\mathbf{Z}\\rvert\\rvert_{2} = R$. 
We further know that the surface area of an N-sphere with radius $R$ is given by\n\begin{equation} \label{eq:Surface}\nS_{N}\left(R\right) = \frac{2\cdot\pi^{\frac{N}{2}}}{\Gamma\left(\frac{N}{2}\right)}\cdot R^{N-1}\n\end{equation}\nand can hence write the PDF of $\mathbf{Z}$ as\n\begin{equation} \label{eq:fZPDF1}\nf_{\mathbf{Z}}\left(\mathbf{z}\right) = \frac{f_{R}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right)}{S_{N}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right)} = \frac{\Gamma\left(\frac{N}{2}\right)\cdot f_{R}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right)}{2\cdot\pi^{\frac{N}{2}}\cdot\lvert\lvert\mathbf{z}\rvert\rvert_{2}^{N-1}}\n\end{equation}\n\nWe require $f_{R}\left(r\right)$ and $f_{\mathbf{Z}}\left(\mathbf{z}\right)$ to be valid PDFs, which yields the conditions\n\begin{equation} \label{eq:RZcond}\n\begin{split}\n& f_{R}\left(r\right) \geq 0\ \forall\ r\in\mathbb{R} \\\n& f_{\mathbf{Z}}\left(\mathbf{z}\right) \geq 0\ \forall\ \mathbf{z}\in\mathbb{R}^{N} \\\n& \int_{-\infty}^{\infty}f_{R}\left(r\right)\ dr = 1 \\\n& \int_{-\infty}^{\infty}\cdots \int_{-\infty}^{\infty} f_{\mathbf{Z}}\left(\mathbf{z}\right)\ d\mathbf{z}_{(N)}\cdots d\mathbf{z}_{(1)} = 1 \\\n\end{split}\n\end{equation}\nTo these, we add the condition that the properly normalized distribution of $f_{R}\left(r\right)$ will be SymDLN,\n\begin{equation} \label{eq:fRcond}\nf_{R}\left(r\right) = \widetilde{M}_{N}\left(r\right)\cdot f_{DLN}(r)\n\end{equation}\nwith $\widetilde{M}_{N}\left(r\right)$ chosen such that the conditions in Equation~\ref{eq:RZcond} hold. Solving for this set of conditions yields\n\begin{equation} \label{eq:fR}\nf_{R}\left(r\right) = \frac{r^{N-1}} {\int_{0}^{\infty}\widetilde{r}^{N-1}\cdot f_{DLN}\left(\widetilde{r}\right)\ d\widetilde{r}}\cdot f_{DLN}\left(r\right)\n\end{equation}\nand\n\begin{equation} \label{eq:fZ}\nf_{\mathbf{Z}}\left(\mathbf{z}\right) = \frac{\Gamma\left(\frac{N}{2}\right)}{2\cdot\pi^{\frac{N}{2}}\cdot \int_{0}^{\infty}\widetilde{r}^{N-1}\cdot f_{DLN}\left(\widetilde{r}\right)\ d\widetilde{r}}\cdot f_{DLN}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right) = M_{N}\cdot f_{DLN}\left(\lvert\lvert\mathbf{z}\rvert\rvert_{2}\right)\n\end{equation}\nwith $M_{N}$ a normalization constant depending only on the dimension N and the parameters of the baseline SymDLN$\left(\mu, \sigma, \rho\right)$ being used. We can further use the definition of the CDF of $\mathbf{Z}$ to write\n\begin{equation} \label{eq:FZ}\n\begin{split}\nF_{\mathbf{Z}}\left(\mathbf{z}\right) & = \int_{-\infty}^{\mathbf{z}_{(1)}}\cdots \int_{-\infty}^{\mathbf{z}_{(N)}} f_{\mathbf{Z}}\left(\mathbf{\widehat{z}}\right)\ d\mathbf{\widehat{z}}_{(N)}\cdots d\mathbf{\widehat{z}}_{(1)} \\ \n& = \int_{-\infty}^{\mathbf{z}_{(1)}}\cdots \int_{-\infty}^{\mathbf{z}_{(N)}} M_{N}\cdot f_{DLN}\left(\lvert\lvert\mathbf{\widehat{z}}\rvert\rvert_{2}\right)\ d\mathbf{\widehat{z}}_{(N)}\cdots d\mathbf{\widehat{z}}_{(1)} \\\n\end{split}\n\end{equation}\nwhich concludes the characterization of the standardized N-dimensional\nelliptical DLN RV.\n\nExtending the standardized N-dimensional DLN to a location-scale family of distributions is now straightforward.
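Before moving to that extension, a minimal numerical sketch of the normalization in Equation~\ref{eq:fZ} may be useful. It assumes a uni-variate density f_dln for the baseline SymDLN; both function names are illustrative, and, as noted further below, the radial integral can be numerically delicate for larger N:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def elliptical_dln_log_const(N, f_dln):
    # radial integral in Equation (fR): int_0^inf r^(N-1) f_DLN(r) dr
    radial, _ = quad(lambda r: r**(N - 1) * f_dln(r), 0.0, np.inf)
    # log M_N = log Gamma(N/2) - log 2 - (N/2) log pi - log(radial)
    return gammaln(0.5 * N) - np.log(2.0) - 0.5 * N * np.log(np.pi) - np.log(radial)

def elliptical_dln_pdf(z, f_dln, log_const):
    # Equation (fZ): f_Z(z) = M_N * f_DLN(||z||_2)
    return np.exp(log_const) * f_dln(np.linalg.norm(z))
\end{verbatim}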
Let $\\widetilde{\\pmb{\\mu}}=\\left(\\mu_1 , \\mu_2 , ... , \\mu_N\\right)^{T}$ be a column vector of locations and let $\\widetilde{\\pmb{\\Sigma}}$ be a positive-semidefinite scaling matrix of rank $N$. Define \n\\begin{equation} \\label{eq:MVDLN}\n\\mathbf{W} = \\widetilde{\\pmb{\\mu}} + \\widetilde{\\pmb{\\Sigma}}^{\\frac{1}{2}}\\cdot\\mathbf{Z}\n\\end{equation}\nwith $\\widetilde{\\pmb{\\Sigma}}^{\\frac{1}{2}}$ denoting the eigendecomposition of $\\widetilde{\\pmb{\\Sigma}}$. The PDF of $\\mathbf{W}$ is then given by\n\\begin{equation} \\label{eq:PDFMVDLN}\n\\begin{split}\nf_\\mathbf{W}\\left(\\mathbf{w}\\right) & = \\lvert\\widetilde{\\pmb{\\Sigma}}\\rvert^{-\\frac{1}{2}}\\cdot f_{\\mathbf{Z}}\\left(\\widetilde{\\pmb{\\Sigma}}^{-\\frac{1}{2}}\\cdot\\left(\\mathbf{w}-\\widetilde{\\pmb{\\mu}}\\right)\\right) \\\\\n& = \\lvert\\widetilde{\\pmb{\\Sigma}}\\rvert^{-\\frac{1}{2}}\\cdot M_{N}\\cdot f_{DLN}\\left(\\sqrt{\\left(\\mathbf{w}-\\widetilde{\\pmb{\\mu}}\\right)^{T}\\cdot\\widetilde{\\pmb{\\Sigma}}^{-1}\\cdot\\left(\\mathbf{w}-\\widetilde{\\pmb{\\mu}}\\right)}\\right) \\\\\n& = \\lvert\\widetilde{\\pmb{\\Sigma}}\\rvert^{-\\frac{1}{2}}\\cdot M_{N}\\cdot f_{DLN}\\left(\\lvert\\lvert\\mathbf{w}-\\widetilde{\\pmb{\\mu}}\\rvert\\rvert_{\\widetilde{\\pmb{\\Sigma}}}\\right) \\\\\n\\end{split}\n\\end{equation}\nThe CDF of $\\mathbf{W}$ can similarly be written as\n\\begin{equation} \\label{eq:CDFMVDLN}\n\\begin{split}\nF_{\\mathbf{W}}\\left(\\mathbf{w}\\right) & = \\lvert\\widetilde{\\pmb{\\Sigma}}\\rvert^{-\\frac{1}{2}}\\cdot M_{N}\\cdot \\int_{-\\infty}^{\\mathbf{w}_{(1)}}\\cdot\\cdot\\cdot \\int_{-\\infty}^{\\mathbf{w}_{(N)}} f_{\\mathbf{DLN}}\\left(\\lvert\\lvert\\mathbf{w}-\\widetilde{\\pmb{\\mu}}\\rvert\\rvert_{\\widetilde{\\pmb{\\Sigma}}}\\right)\\ d\\mathbf{\\widehat{w}}_{(N)}\\cdot\\cdot\\cdot d\\mathbf{\\widehat{w}}_{(1)} \\\\\n\\end{split}\n\\end{equation}\nwhich characterizes a general elliptical multi-variate DLN RV.\n\nFinally, note that the scaling matrix $\\widetilde{\\pmb{\\Sigma}}$ is not the covariance matrix of $\\mathbf{W}$ due to the heavy-tails of $\\mathbf{W}$, similar to other heavy-tailed elliptical distributions such as the multi-variate Stable, t, or Laplace distributions. Further note that the normalization integral in Equation~\\ref{eq:fR} is numerically unstable for high values of N (e.g., $N\\geq 5$), and care should be taken when deriving the PDF of high-dimensional DLN RVs.\n\n\n\n\\section{Methods for heavy-tailed analysis}\n\\label{sec:Methods}\n\nAs discussed above, a main difficulty of working with the DLN distribution stems from its ``double exponential'' nature, i.e. the fact it exhibits exponential tails in both the positive and negative directions. The usual mitigation for a single exponential tail, applying a log transform, fails as the log is undefined on the negatives. This section describes how to extend methods applied to one-sided exponential tails to double-exponential distributions.\n\n\n\n\\subsection{Inverse-Hyperbolic-Sine space and the ADLN}\n\nA common alternative to using log-transforms is transforming the data using the Inverse Hyperbolic Sine (asinh). For a review of the use of asinh in economic applications see \\cite{BellemareWichman2020}. 
The hyperbolic sine and its inverse are given by\n\\begin{equation} \\label{eq:ASINH}\n\\begin{split}\n\\text{sinh}(x) & = \\frac{e^{x}-e^{-x}}{2} \\\\\n\\text{asinh}(x) & = \\log\\left(x+\\sqrt{1+x^2}\\right)\n\\end{split}\n\\end{equation}\nThe asinh transform has the following useful properties:\n\\begin{enumerate}\n \\item Differentiable and strictly increasing in x.\n \\item $\\text{asinh}(x)\\approx \\text{sign}(x)(\\log\\lvert x\\rvert + \\log(2))$, with the approximation error rapidly vanishing as $\\lvert x\\rvert$ increases.\\footnote{About 1\\% approximation error at $\\lvert x\\rvert$=4, and about 0.1\\% at $\\lvert x\\rvert$=10.}\n \\item Odd function, such that $\\text{asinh}(-x) = -\\text{asinh}(x)$.\n \\item Zero based, such that $\\text{asinh}(0)=0$\n\\end{enumerate}\nI.e., asinh is a bijection similar in flavor to the neglog transform:\n\\begin{equation} \\label{eq:NEGLOG}\n\\text{neglog}(x) =\\text{sign}(x)\\log(1+\\lvert x\\rvert)\n\\end{equation}\nbut with less distortion than the neglog around 0, at the cost of the fixed bias $\\log(2)\\approx 0.7$.\n\nIt is useful to note that any difference of exponentials function can be factored into an exponential multiplied by a Hyperbolic Sine, i.e., \n\\begin{equation} \\label{eq:NEGLOG}\ny =\\exp\\left(x_1\\right) - \\exp\\left(x_2\\right) = 2\\cdot\\exp\\left(\\frac{x_1 + x_2}{2}\\right)\\cdot\\text{sinh}\\left(\\frac{x_1 - x_2}{2}\\right)\n\\end{equation}\nwhich highlights the intimate intuitive relation between the sinh function and the DLN and Laplace distributions. All three are expressed in terms of difference of exponentials, leading to their characteristic ``double exponential'' nature. Sinh's inverse, the asinh, is hence a natural transform to apply to DLN and Laplace distributed RVs.\n\nAs asinh is differentiable and strictly increasing, the method of transformation applies. If $Z=\\text{asinh}(W)$ where $W\\sim DLN$ then $Z\\sim ADLN$, $W=\\text{sinh}(Z)$, and $\\frac{dZ}{dW} = \\left(1+\\text{sinh}(Z)^{2}\\right)^{-1\/2}$. We can now write the PDF for the ADLN distribution\n\\begin{equation} \\label{eq:PDFADLN}\nf_{ADLN}(z) = \\frac{f_{DLN}(\\text{sinh}(z))}{\\text{asinh}'(\\text{sinh}(z))} = f_{DLN}(\\text{sinh}(z))\\sqrt{1+\\text{sinh}(z)^2} \n\\end{equation}\nwhich allows analysis of $Z\\sim ADLN$, the transformed DLN RVs, whose histogram is more ``compact'' and easier to present.\n\nPanels (c) and (d) of Figure~\\ref{fig:DLNexam} present typical DLN distributions encountered in practice with linear (Panel c) and asinh (Panel d) horizontal axis. Panel (c) presents a truncated segment of the distribution. Due to the asinh transform, Panel (d) is able to present the entire distribution. The approximate log-Normality of the positive and negative sides of the DLN is not visible in Panel (c), but is made clear by the asinh transform in Panel (d).\n\n\n\n\\subsection{Growth in DLN-distributed variates}\n\\label{sec:Growth}\n\nHow does one measure growth in DLN-distributed RVs? A firm that had $\\$100M$ of income in year $1$ and $\\$120M$ of income in year $2$ has certainly grown its income. One can argue whether it is preferable to say the firm grew by $\\frac{120M}{100M}-1=0.2=20\\%$ or by $\\log(120M)-\\log(100M)=0.182$ log-points, yet the question itself is well-formed. But what if the firm had $-\\$100M$ of income (i.e., loss) in year $1$, and then $\\$120M$ of income in year $2$? What was its growth? This section aims to provide a rigorous answer to that question.\n\nTo begin, we require a definition of growth. 
\\cite{BarroSala-I-Martin2003} and \\cite{StudenyMeznik2013} define instantaneous growth of a time-continuous and \\emph{strictly positive} RV $Z(t)>0$ as \n\\begin{equation} \\label{eq:pergrowth}\n\\frac{dZ(t)\/dt}{Z(t)} = \\frac{Z'(t)}{Z(t)} \\approx \\frac{Z_{t+1}-Z_t}{Z_t}\n\\end{equation}\nwith the second part of the equation using the first-difference of discrete variables as an approximation to the derivative $Z'(t)$, which yields the well-known formulation of percentage growth in discrete variables. Generalizing this definition to $Z(t)\\in \\mathbb{R}$ yields:\n\\begin{equation} \\label{eq:pergrowth2}\nd\\% \\equiv \\frac{dZ(t)\/dt}{\\lvert Z(t)\\rvert} = \\frac{Z'(t)}{\\lvert Z(t)\\rvert} \\approx \\frac{Z_{t+1}-Z_t}{\\lvert Z_t\\rvert} \\ \\ \\text{for} \\ \\ Z(t) \\neq 0\n\\end{equation}\nwhich guarantees that $Z_{t+1}>Z_t$ will imply positive growth, regardless of the sign of $Z_t$. The approximate term $\\left(Z_{t+1}-Z_t\\right)\/\\lvert Z_t\\rvert$ is \\emph{generalized percentage growth} (hereafter denoted d\\%), and is explosive if $\\lvert Z_t\\rvert\\to 0$, similar to ``traditional'' percentage growth.\n\nNext, it is instructive to consider the growth of a log-Normally distributed RV, as most measures of size encountered in firm dynamics (and elsewhere) are approximately log-Normally distributed. To that end, consider the following setting:\n\\begin{equation} \\label{eq:AR_LN}\n\\begin{split}\n& X_{t+1} = \\left(1-\\rho_X\\right)\\cdot\\mu_X + \\rho_X\\cdot X_{t} + \\epsilon^{X}_{t} \\\\\n& \\epsilon^{X}_{t} \\sim \\mathcal{N}(0,\\sigma_{X}^2) \\\\\n& Y_{t} = \\text{exp}\\left(X_{t}\\right)\n\\end{split}\n\\end{equation}\nIn which $X_{t}$ is a simple $AR(1)$ stochastic process, and hence distributes Normally, and $Y_{t}>0$ is log-Normally distributed. What is the growth in $Y_{t}$?\n\nApplying the definition, we have:\n\\begin{equation} \\label{eq:loggrowth}\n\\frac{Y'(t)}{\\lvert Y(t)\\rvert} = \\frac{Y(t)\\cdot X'(t)}{Y(t)} = X'(t) \\approx X_{t+1} - X_t = \\log(Y_{t+1}) - \\log(Y_{t}) \\equiv \\text{dlog}(Y_{t+1})\n\\end{equation}\nwhich yields the well-known formulation of growth as a difference in logs between consecutive values, denoted dlog(). The difference between Equations~\\ref{eq:pergrowth2} and~\\ref{eq:loggrowth} is in whether we differentiate before applying the first-difference approximation. Note that using percentage growth as in Equation~\\ref{eq:pergrowth2} in this case would yield:\n\\begin{equation} \\label{eq:pergrowthYt}\n\\frac{Y_{t+1}}{Y_t} - 1 = \\exp(X_{t+1} - X_t) - 1\n\\end{equation}\nor the general observation that percentage growth is a convex transform of log growth. It is further worth noting that $\\lim_{\\rho_X \\to 1} \\left(X_{t+1} - X_t\\right) = \\epsilon^{X}_{t}$. Log growth yields the innovation in the underlying AR(1) process, while percent growth yields the transformed value $\\exp(\\epsilon^{X}_{t})-1$. 
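To make the comparison concrete, the following small simulation sketch of the AR(1)-in-logs process in Equation~\ref{eq:AR_LN} (the parameter values are illustrative) computes both measures; by construction the two series satisfy $\text{d\%} = \exp(\text{dlog}) - 1$ draw by draw:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
rho_x, mu_x, sig_x, T = 0.9, 1.0, 0.3, 10_000

x = np.empty(T)
x[0] = mu_x
for t in range(T - 1):  # AR(1) in logs, Equation (AR_LN)
    x[t + 1] = (1 - rho_x) * mu_x + rho_x * x[t] + rng.normal(0.0, sig_x)
y = np.exp(x)  # log-Normally distributed level series

dlog = np.diff(np.log(y))   # log growth: x_{t+1} - x_t
dpct = np.diff(y) / y[:-1]  # percentage growth: exp(dlog) - 1
\end{verbatim}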
I.e., percent growth introduces a convexity bias relative to log growth in the case of a log-Normally distributed RV.\n\nConversely, using log growth to measure growth in a Normally distributed RV, even if said RV is strictly positive in practice, would introduce a similar but opposite ``concavity bias.'' To see that, consider the growth in $X(t)>0$, when measured in dlog terms:\n\\begin{equation} \\label{eq:loggrowthNorm}\n\\text{dlog}(X_{t+1}) = \\log(X_{t+1}) - \\log(X_t) = \\log\\left(\\frac{X_{t+1}}{X_t} -1 +1\\right) = \\log\\left(\\frac{ X'(t)}{\\lvert X(t)\\rvert} + 1\\right)\n\\end{equation}\nPut differently, using dlog() to measure growth in $X$ yields the log of percent growth, which is the appropriate measure by the definition in Equations~\\ref{eq:pergrowth} and~\\ref{eq:pergrowth2}. Hence, the concept of growth used is closely related to the distribution being considered.\n\nNext, consider a similar setting, but for a DLN RV:\n\\begin{equation} \\label{eq:AR_DLN}\n\\begin{split}\n& X^{p}_{t+1} = \\left(1-\\rho_{p}\\right)\\cdot\\mu_{p} + \\rho_{p}\\cdot X_{t}^{p} + \\epsilon^{p}_{t} \\\\\n& X^{n}_{t+1} = \\left(1-\\rho_{n}\\right)\\cdot\\mu_{n} + \\rho_{n}\\cdot X_{t}^{n} + \\epsilon^{n}_{t} \\\\\n& (\\epsilon^{p}_{t},\\epsilon^{n}_{t})^{T} \\sim \\mathcal{N}\\left(\\pmb{0},\\pmb{\\Sigma}\\right) \\\\\n& Y^{p}_{t} = \\text{exp}\\left(X^{p}_{t}\\right) \\ \\ ; \\ \\ Y^{n}_{t} = \\text{exp}\\left(X^{n}_{t}\\right) \\\\\n& W_{t} = Y^{p}_{t} - Y^{n}_{t}\n\\end{split}\n\\end{equation}\nwith $\\pmb{\\Sigma}$ as in Equation~\\ref{eq:BVN}. By applying the generalized growth definition~\\ref{eq:pergrowth2}, we have:\n\\begin{equation} \\label{eq:DLNGROWTH}\n\\begin{split}\n\\frac{W'(t)}{\\lvert W(t)\\rvert} & = \\frac{Y^{p}(t)\\cdot dX^{p}(t)\/dt - Y^{n}(t)\\cdot dX^{n}(t)\/dt}{\\lvert W(t)\\rvert} \\approx \\frac{Y^{p}_{t}\\cdot\\left(X^{p}_{t+1} - X^{p}_{t}\\right) - Y^{n}_{t}\\cdot\\left(X^{n}_{t+1} - X^{n}_{t}\\right)}{\\lvert W(t)\\rvert} \\\\\n& = \\frac{Y^{p}_{t}\\cdot\\text{dlog}\\left(Y^{p}_{t+1}\\right) - Y^{n}_{t}\\cdot\\text{dlog}\\left(Y^{n}_{t+1}\\right)}{\\lvert Y^{p}_{t} - Y^{n}_{t}\\rvert}\n\\end{split}\n\\end{equation}\nwhich implies the growth of a DLN RV can be defined as a function of the levels and growth rates of its two component log-Normal RVs. Section~\\ref{sec:MC} conducts Monte-Carlo experiments to explore the relation between the measures of growth presented above for Normal, log-Normal, and DLN distributed RVs.\n\n\\comments{\n\\begin{equation} \\label{eq:DLNGROWTH}\n\\begin{split}\ng & = \\frac{200\\cdot\\left(\\log\\left(270\\right)-\\log\\left(200\\right)\\right) - 100\\cdot\\left(\\log\\left(120\\right)-\\log\\left(100\\right)\\right)}{\\lvert200 - 100\\rvert} = 0.4179 \\ dlnp\\ (\\approx 0.4055 \\ lp) \\\\\ng & = \\frac{50\\cdot\\left(\\log\\left(270\\right)-\\log\\left(50\\right)\\right) - 100\\cdot\\left(\\log\\left(120\\right)-\\log\\left(100\\right)\\right)}{\\lvert50 - 100\\rvert} = 1.3218 \\ dlnp\n\\end{split}\n\\end{equation}\n}\n\n\n\n\\section{Monte-Carlo experiments}\n\\label{sec:MC}\n\nThis section reports the results of Monte-Carlo experiments designed to ascertain the properties of the moments, estimators, and measures discussed above. \\comments{, as well as present further results on the properties of the DLN as an approximating distribution.}\n\n\n\n\\subsection{Properties of estimators}\n\nI begin by exploring the moments and parameter estimators of Sections~\\ref{sec:Moms} and~\\ref{sec:Estim}. 
I concentrate the experiments on a region of the parameter space that arises in practical applications related to the theory of the firm:\n\\begin{equation} \\label{eq:MC_Region_1}\n\\pmb{Q}: \\ \\ \\left(\\mu_p,\\sigma_p,\\mu_n,\\sigma_n,\\rho_{pn}\\right) \\in \\left(\\left[-3,3\\right],\\left[0.5,2.5\\right],\\left[-3,3\\right],\\left[0.5,2.5\\right],\\left[-1,1\\right]\\right)\n\\end{equation}\n\n\\noindent The data collection\/creation for the Monte-Carlo analysis proceeds as follows.\\\\\n\\noindent For each $i \\in \\{1...N\\}$:\n\\begin{enumerate}\n \\item Draw a parameter vector $\\pmb{\\Theta}_i\\in\\pmb{Q}$ with Uniform probability.\n \\item Calculate the theoretical central moments based on $\\pmb{\\Theta}_i$ using the method of Section~\\ref{sec:Moms}.\n \\item Draw $K$ observations $W_{i,k}\\sim\\text{DLN}(\\pmb{\\Theta}_i)$.\n \\item Calculate the first five empirical central moments of $W_{i,k}$.\n \\item Recalculate the first five empirical moments using iteratively smaller subsets of the $K$ observations.\\footnote{Specifically, I recalculate the moments based on the first $K\/2^s$ observations for $s\\in\\{1...11\\}$.}\n \\item Estimate the parameters of $W_{i,k}$, denoted $\\pmb{\\widehat{\\Theta}}_i$, using the method of Section~\\ref{sec:Estim}.\n \\item Calculate the Kolmogorov-Smirnov (K-S), Chi-square (C-2), and Anderson-Darling (A-D) test statistics based on $\\pmb{\\widehat{\\Theta}}_i$ and $W_{i,k}$.\n\\end{enumerate}\nI repeat the data creation process $N=70,000$ times. Within each loop, I draw $K=100,000$ observations $W_{i,k}\\sim\\text{DLN}(\\pmb{\\Theta}_i)$.\n\nPanel (a) of Table~\\ref{tab:MC1} presents the Monte-Carlo results for the moment estimators of Section~\\ref{sec:Moms}. It compares the theoretical moments derived in Step 2 of the Monte-Carlo experiment to the empirical moments derived in Step 4, concentrating on the first five moments of the distribution. The analysis is done in asinh space because the moments of the DLN explode quickly due to its heavy tails (similar to moments of the log-Normal, which are similarly considered in log space). The empirical and theoretical moments show high correlation, and the odd moments (mean or $1^{st}$ moment, skewness or $3^{rd}$ moment, and $5^{th}$ moment) exhibit no significant bias. The even moments (variance or $2^{nd}$, and kurtosis or $4^{th}$) show evidence of bias, which is fairly severe for kurtosis. Small-sample bias correction to the kurtosis estimator appears warranted, but is outside the scope of this work. The IQR of the difference between the theoretical and empirical moments is increasing with the moment degree, as expected.\n\n\\RPprep{Estimator Monte Carlo Experiments}{0}{0}{MC1}{%\n This table presents results of estimator Monte-Carlo experiments with $N=70,000$ repetitions and $K=100,000$ observations drawn in each repetition. Panel (a) tests the moments estimators $\\widehat{M}_i\\ \\ i\\in\\{1...5\\}$ of Section~\\ref{sec:Moms} vs. the actual moments $M_i$, conducting all analysis in asinh space. It reports the general accuracy corr($\\text{asinh}(\\widehat{M}_i),\\text{asinh}(M_i)$); the bias median($\\text{asinh}(\\widehat{M}_i)-\\text{asinh}(M_i)$) ; and the accuracy IQR($\\text{asinh}(\\widehat{M}_i)-\\text{asinh}(M_i)$). Panel (b) reports similar statistics comparing the DLN parameter estimators of Section~\\ref{sec:Estim} $\\pmb{\\widehat{\\Theta}}$ and the actual parameters $\\pmb{\\Theta}$. 
Panel (c) reports the values of parameters a,b,c,d in the approximations $ICDF(p) = a\\cdot\\exp(b\\cdot p) + c\\cdot\\exp(d\\cdot p)$ for the ICDFs of the Kolmogorov-Smirnov, Chi-square, and Anderson-Darling test statistics for DLN RVs, as well as the approximation $R^2$.\n}\n\\RPtab{%\n \\begin{tabularx}{\\linewidth}{Frrrrr}\n \\toprule\n\t\\textit{Panel (a): Moment estimators} & $\\widehat{M}_1$ & $\\widehat{M}_2$ & $\\widehat{M}_3$ & $\\widehat{M}_4$ & $\\widehat{M}_5$ \\\\\n \\midrule\n Correlation & 0.9997 & 0.9929 & 0.9282 & 0.8238 & 0.8478 \\\\\n Bias & -0.0001 & 0.1092 & -0.0002 & 6.3410 & 0.0220 \\\\\n Accuracy & 0.0217 & 0.4785 & 3.4480 & 8.5609 & 32.0236 \\\\ \\\\\n \n\t\\textit{Panel (b): Parameter estimators} & $\\widehat{\\mu}_p$ & $\\widehat{\\sigma}_p$ & $\\widehat{\\mu}_n$ & $\\widehat{\\sigma}_n$ & $\\widehat{\\rho}_{pn}$ \\\\\n \\midrule\n Correlation & 0.9408 & 0.9619 & 0.9412 & 0.9623 & 0.9190 \\\\\n Bias & -0.0034 & 0.0019 & -0.0043 & 0.0019 & -0.0048 \\\\\n Accuracy & 0.0588 & 0.0251 & 0.0614 & 0.0259 & 0.0762 \\\\ \\\\\n\n \\textit{Panel (c): ICDF approximations} & a & b & c & d & $R^2$ \\\\\n \\midrule\n Kolmogorov-Smirnov & 6.75e-7 & 0.1553 & -6.7520 & -0.0011 & 0.9976 \\\\\n Chi-square & 1.88e-8 & 0.1955 & 1.2080 & 0.0044 & 0.9920 \\\\\n Anderson-Darling & 1.18e-5 & 0.1350 & -5.7070 & -0.0060 & 0.9900 \\\\\n\t\\bottomrule\n \\end{tabularx}\n}\n\nPanel (b) of Table~\\ref{tab:MC1} goes on to present the Monte-Carlo results for the parameter estimators of Section~\\ref{sec:Estim}. It compares the actual parameters drawn in Step 1 to the estimated parameters calculated in Step 6. The results indicate the estimation procedure is performing quite well. There is high correlation between the actual and estimated parameters, including the hard to estimate correlation parameter. The parameter estimates also exhibit no systematic bias and reasonably low estimation error IQR. These results imply the estimation procedure, while cumbersome, is able to capture the DLN parameters correctly.\n\nTo further explore the precision and small-sample bias of the moment estimators, Figure~\\ref{fig:MC1} presents the dependence of estimator quality on sample size. Panel (a) of the figure presents the dependence of the correlation between the theoretical and empirical moments on sample size. Kurtosis is even less precise than the $5^{th}$ moment, and is strongly influenced by sample size. Panel (b) of Figure~\\ref{fig:MC1} then presents the dependence of the bias on sample size. The $1^{st}$ and $3^{rd}$ moment estimators exhibit no small-sample bias. The $2^{nd}$ and $5^{th}$ exhibit small and rapidly decreasing bias. Kurtosis, again, shows high bias, only slowly decreasing with sample size.\n\n\\RPprep{Estimator Monte-Carlo experiments}{0}{1}{MC1}{%\n This figure presents results of estimator Monte-Carlo experiments. Panel (a) graphs the dependence of the correlation between the theoretical and empirical moments on sample size. Panel (b) graphs the dependence of moment bias on sample size. Panel (c) presents the distribution of (log of) the K-S statistic in the simulations. 
Panels (d)-(f) then present the ICDF of the (log) K-S, C-2, and A-D statistics, along with the fitted curves.\n}\n\RPfig{%\n\t\begin{tabular}{ccc} \n\t\t\subfigure[Corr($\text{asinh}(\widehat{M}_i),\text{asinh}(M_i)$)] {\includegraphics[width=2.5in]{Img\/Moment_Corr.pdf}} & \n\t\t\subfigure[Median($\text{asinh}(\widehat{M}_i)-\text{asinh}(M_i)$)] {\includegraphics[width=2.5in]{Img\/Moment_Bias.pdf}} & \n\t\t\subfigure[PDF of log K-S statistic] {\includegraphics[width=2.5in]{Img\/KS_PDF.pdf}} \\ \\\n\t\t\subfigure[ICDF of log K-S statistic]\n\t\t{\includegraphics[width=2.5in]{Img\/KSfit.pdf}} &\n\t\t\subfigure[ICDF of log C-2 statistic]\n\t\t{\includegraphics[width=2.5in]{Img\/C2fit.pdf}} &\n\t\t\subfigure[ICDF of log A-D statistic]\n\t\t{\includegraphics[width=2.5in]{Img\/ADfit.pdf}} \\ \\\n\t\end{tabular}\n}\n\n\n\n\subsection{Test-statistic critical values}\n\label{sec:TestStats}\n\nA second goal of the Monte-Carlo experiments is to establish critical values for test statistics of the hypothesis that some given data are drawn from a DLN distribution. This is especially important for the Anderson-Darling test statistic, whose critical values are well-known to strongly depend on the distribution being examined. See e.g. \cite{Stephens1979}, \cite{DAgostinoStephens1986} Chapter 4, and \cite{JantschiBolboaca2018}.\n\nTo that end, I calculate the K-S, C-2, and A-D test statistics for each of the $N$ draws in the sample, as described in Step 7 above. To fix ideas, Panel (c) of Figure~\ref{fig:MC1} presents the distribution of (log of) the K-S statistic in the Monte-Carlo experiment. I then calculate the inverse-CDF (ICDF) of the resulting distribution of (log of) each test statistic. Panels (d), (e), and (f) of Figure~\ref{fig:MC1} present the ICDFs of the (log) K-S, C-2, and A-D test statistics, respectively. E.g., Panel (f) indicates that one should reject the hypothesis that given data are drawn from the DLN distribution (at a 5\% confidence level) if the A-D statistic is higher than $\text{exp}(ICDF(95)) = \text{exp}(1.135) = 3.110$.\n\nTo move from calculating critical values to deriving a continuous mapping between p-values and test-statistic values, it is common in the literature discussed above to propose an ad-hoc functional form which is able to approximate the ICDF well. Once one estimates the approximating functional form using non-linear least-squares, one can use it to find the p-values associated with each test-statistic value, and vice-versa. Following experimentation, the functional form most closely able to replicate the resulting ICDFs is of the form:\n\begin{equation} \label{eq:Pvals}\nICDF(p) = a\cdot\text{exp}\left(b\cdot p\right) + c\cdot\text{exp}\left(d\cdot p\right)\n\end{equation}\nwhich is a four-parameter sum (or difference, if $c<0$) of exponentials.\n\nPanels (d), (e), and (f) of Figure~\ref{fig:MC1} include the fitted values of the functional form, and show that there is an excellent fit between the functional form and the empirical ICDFs. Panel (c) of Table~\ref{tab:MC1} presents the values of the four approximating parameters for each of the (log) test statistics' ICDFs, and further reports the $R^2$ of the fit, which is above $0.99$ for all three statistics.
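As a sketch of how the fitted coefficients can be used in practice (the function name is illustrative; a, b, c, d are the values reported in Panel (c) of Table~\ref{tab:MC1} for the chosen test statistic), one can numerically invert Equation~\ref{eq:Pvals} at the log of an observed statistic to obtain its p-value:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def dln_test_pvalue(stat, a, b, c, d):
    # invert ICDF(p) = a*exp(b*p) + c*exp(d*p), with p a percentile in [0, 100]
    gap = lambda p: a * np.exp(b * p) + c * np.exp(d * p) - np.log(stat)
    if gap(100.0) < 0.0:    # statistic above the simulated range
        return 0.0
    if gap(0.0) > 0.0:      # statistic below the simulated range
        return 1.0
    p_star = brentq(gap, 0.0, 100.0)   # percentile of the observed statistic
    return (100.0 - p_star) * 0.01     # right-tail p-value
\end{verbatim}
For instance, plugging the A-D row of Panel (c) and the critical value $3.110$ quoted above into this inversion returns a p-value of roughly $0.05$, up to the rounding of the reported coefficients.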
Hence, one can safely use these functionals to derive p-values for tests of distributional hypotheses.\n\n\n\n\n\subsection{Growth measures}\n\nA second set of Monte-Carlo experiments tests the relation between the growth measures described in Section~\ref{sec:Growth}, for RVs distributed Normal, log-Normal, and DLN. To that end, I define three stochastic processes yielding stationary distributions distributed N, LN, and DLN. For each RV type, in each Monte-Carlo iteration, I draw random parameters for the distribution, simulate it forward, measure growth per-period using the different measures discussed above, and consider the relation between the random innovations $\epsilon_t$ and the various growth measures.\n\nThe stochastic processes for $X$, $Y$, and $W$, distributed N, LN, and DLN, respectively, are as described in Equations~\ref{eq:AR_LN} and~\ref{eq:AR_DLN} above. The parameter regions are:\n\begin{equation} \label{eq:MC_Region_2}\n\begin{split}\n\pmb{Q}_{N}: & \ \ \left(\rho_{N},\mu_{N},sd_{N}\right) \in \left(\left[0.60,0.99\right],\left[-100,100\right],\left[10,100\right]\right) \\\n\pmb{Q}_{LN}: & \ \ \left(\rho_{LN},\mu_{LN},sd_{LN}\right) \in \left(\left[0.60,0.99\right],\left[-3,3\right],\left[0.5,2.5\right]\right) \\\n\pmb{Q}_{DLN}: & \ \ \left(\rho^{p,n}_{DLN},\mu^{p,n}_{DLN},sd^{p,n}_{DLN},\rho^{pn}_{DLN}\right) \in \left(\left[0.60,0.99\right],\left[-3,3\right],\left[0.5,2.5\right],\left[-1,1\right]\right) \\\n\end{split}\n\end{equation}\nwith $\sigma_{\Box} = \sqrt{sd_{\Box}^2\cdot\left(1-\rho_{\Box}^2\right)}$ for $\Box \in \{N,LN,DLN\}$.\n\n\noindent The data collection\/creation for the second Monte-Carlo analysis proceeds as follows:\\\n\noindent For each RV type $\Box \in \{N,LN,DLN\}$: \\\n\noindent For each $i \in \{1...N\}$:\n\begin{enumerate}\n \item Draw a parameter vector $\pmb{\Theta}_i\in\pmb{Q}_\Box$ with Uniform probability.\n \item Initialize the RV $Z_{\Box,0}$ to $\mu_\Box$ for N, exp($\mu_\Box$) for LN, and $Z^{p,n}_{\Box,0}$ at exp($\mu^{p,n}_{\Box,0}$) for DLN.\n \item Draw a shock vector of length $K+100$ (two correlated shock vectors for DLN).\n \item Simulate the process forward $K+100$ periods based on its laws of motion.\n \item Drop the first 100 observations as burn-in.\n \item Calculate the set of growth measures from Section~\ref{sec:Growth}.\n\end{enumerate}\nI repeat the data creation process $N=10,000$ times, each for $K=1,000$ periods, yielding a total of $10M$ growth observations to be analyzed per distribution type.\n\nPanels (a), (b), and (c) of Table~\ref{tab:MC2} present the correlations between different growth measures for N, LN, and DLN RVs, respectively. The panels also report correlations concentrating on strictly positive values (i.e., when $Z_{t}>0$ and $Z_{t+1}>0$) and when further avoiding tiny beginning values (i.e., $Z_t>1$). The appropriate concept of growth for a Normally distributed RV is $\epsilon_t\/\lvert Z_{t}\rvert$, and Panel (a) shows it is highly correlated with the generalized percentage growth measure. The panel further shows that using dlog as a measure of growth for Normal RVs is inaccurate. This fact is further highlighted by Panels (a) and (b) of Figure~\ref{fig:MC2} which present the relation between the appropriate growth measure and the generalized percent (d\%) and dlog measures, respectively.
Panel (a) shows d\\% captures growth of Normal RVs well, and Panel (b) highlights the ``concavity bias'' arising from using the dlog measure rather than the d\\% measure. The dispersion around the 45-degree line in Panel (a) is driven by the mean-reversion term of the AR(1), which the growth concept ignores.\n\n\\RPprep{Growth Monte Carlo Experiments}{0}{0}{MC2}{%\n This table presents results of growth Monte-Carlo experiments with $N=10,000$ repetitions and $K=1,000$ observations simulated forward in each repetition. Panels (a), (b), and (c) present results for N, LN, DLN, respectively. Within each panel, I report correlations between the following measures of growth: $\\epsilon_t$ the stochastic innovation underlying the growth at time $t$; $\\epsilon_t\/\\lvert Z_{t}\\rvert$ the relative stochastic innovation; d\\%($Z_{t+1}$)=$\\left(Z_{t+1} - Z_{t}\\right\/\\lvert Z_{t}\\rvert$ the generalized percentage growth; dlog($Z_{t+1}$)=log($Z_{t+1}$)-log($Z_{t}$) the log point growth; dDLN($Z_{t+1}$) the DLN growth formulation based on Equation~\\ref{eq:DLNGROWTH}.\n}\n\\RPtab{%\n \\begin{tabularx}{\\linewidth}{Flllll}\n \\toprule\n\t\\textit{Panel (a): N} & $\\epsilon_t$ & $\\epsilon_t\/\\lvert Z_{t}\\rvert$ & d\\%($Z_{t+1}$) & dlog($Z_{t+1}$) & \\\\\n \\midrule\n $\\epsilon_t$ \n & 1.000 & 0.010 & 0.009 & 0.659$^{a}$ & \\\\\n $\\epsilon_t\/\\lvert Z_{t}\\rvert$ \n & 0.380$^{b}$ & 1.000 & 0.973 & 0.031$^{a}$ & \\\\\n d\\%($Z_{t+1}$) \n & 0.357$^{b}$ & 0.960$^{b}$ & 1.000 & 0.033$^{a}$ & \\\\\n dlog($Z_{t+1}$) \n & 0.712$^{b}$ & 0.590$^{b}$ & 0.617$^{b}$ & 1.000 & \\\\ \\\\\n\n\t\\textit{Panel (b): LN} & $\\epsilon_t$ & $\\epsilon_t\/\\lvert Z_{t}\\rvert$ & d\\%($Z_{t+1}$) & dlog($Z_{t+1}$) & \\\\\n \\midrule\n $\\epsilon_t$ \n & 1.000 & 0.023$^{a}$ & 0.269$^{a}$ & 0.931$^{a}$ & \\\\\n $\\epsilon_t\/\\lvert Z_{t}\\rvert$\n & 0.644$^{b}$ & 1.000 & 0.097$^{a}$ & 0.023$^{a}$ & \\\\\n d\\%($Z_{t+1}$) \n & 0.381$^{b}$ & 0.363$^{b}$ & 1.000 & 0.295$^{a}$ & \\\\\n dlog($Z_{t+1}$) \n & 0.929$^{b}$ & 0.620$^{b}$ & 0.381$^{b}$ & 1.000 & \\\\ \\\\\n\n\t\\textit{Panel (c): DLN} & $\\widehat{\\epsilon}_t^c$ & $\\widehat{\\epsilon}_t\/\\lvert Z_{t}\\rvert^c$ & d\\%($Z_{t+1}$) & dlog($Z_{t+1}$) & dDLN($Z_{t+1}$) \\\\\n \\midrule\n $\\widehat{\\epsilon}_t^c$ \n & 1.000 & 0.000$^{a}$ & 0.000$^{a}$ & 0.038$^{a}$ & 0.000$^{a}$ \\\\\n $\\widehat{\\epsilon}_t\/\\lvert Z_{t}\\rvert^c$\n & 0.043$^{b}$ & 1.000 & 0.652 & 0.022$^{a}$ & 0.944 \\\\\n d\\%($Z_{t+1}$) \n & 0.009$^{b}$ & 0.464$^{b}$ & 1.000 & 0.016$^{a}$ & 0.645 \\\\\n dlog($Z_{t+1}$) \n & 0.057$^{b}$ & 0.739$^{b}$ & 0.397$^{b}$ & 1.000 & 0.023$^{a}$ \\\\\n dDLN($Z_{t+1}$)\n & 0.040$^{b}$ & 0.931$^{b}$ & 0.455$^{b}$ & 0.797$^{b}$ & 1.000 \\\\ \\\\\n \\bottomrule\n \\end{tabularx}\n \\begin{flushleft}\n $^a$ For strictly positive values ($Z_{t}>0$ and $Z_{t+1}>0$) \\\\\n\t$^b$ For strictly positive and non-tiny initial values ($Z_{t}>1$ and $Z_{t+1}>0$) \\\\\n\t$^c$ For DLN, I define $\\widehat{\\epsilon}_t = \\left(Z_t^p\\cdot\\epsilon_t^p - Z_t^n\\cdot\\epsilon_t^n\\right)$ and $Z_t = Z_t^p - Z_t^n$\n \\end{flushleft}\n}\n\n\\RPprep{Growth Monte-Carlo experiments}{1}{0}{MC2}{%\n This figure presents results of growth Monte-Carlo experiments. Panels (a) and (b) graph the relation between the growth of a Normal RV and (a) generalized percentage growth d\\%($Z_{t+1}$)=$\\left(Z_{t+1} - Z_{t}\\right)\/\\lvert Z_{t}\\rvert$; (b) log point growth dlog($Z_{t+1}$)=log($Z_{t+1}$)-log($Z_{t}$). 
Panels (c) and (d) graph the relation between the growth of a log-Normal RV and (c) dlog($Z_{t+1}$); (d) d\%($Z_{t+1}$). Panels (e) and (f) graph the relation between the growth of a DLN RV and (e) the DLN growth measure from Equation~\ref{eq:DLNGROWTH}, dDLN($Z_{t+1}$); (f) d\%($Z_{t+1}$).\n}\n\RPfig{%\n\t\begin{tabular}{cc} \n\t\t\subfigure[N growth vs. d\%$^a$] {\includegraphics[width=2.5in]{Img\/N_grow_per.pdf}} & \n\t\t\subfigure[N growth vs. dlog$^b$] {\includegraphics[width=2.5in]{Img\/N_grow_dlog.pdf}} \\ \n\t\t\subfigure[LN growth vs. dlog] {\includegraphics[width=2.5in]{Img\/LN_grow_dlog.pdf}} & \n\t\t\subfigure[LN growth vs. d\%] {\includegraphics[width=2.5in]{Img\/LN_grow_per.pdf}} \\ \t\t\n\t\t\subfigure[DLN growth vs. dDLN$^a$] {\includegraphics[width=2.5in]{Img\/DLN_grow_dDLN.pdf}} & \n\t\t\subfigure[DLN growth vs. d\%$^a$] {\includegraphics[width=2.5in]{Img\/DLN_grow_per.pdf}} \\ \n\t\end{tabular}\n \begin{flushleft}\n $^a$ For non-tiny initial values ($\lvert Z_{t}\rvert>1$) \\\n\t$^b$ For strictly positive and non-tiny initial values ($Z_{t}>1$ and $Z_{t+1}>0$) \\\n \end{flushleft}\t\n}\n\nPanel (b) of Table~\ref{tab:MC2} moves on to considering LN RVs. Here, the appropriate concept of growth is just $\epsilon_t$, and the panel shows that dlog measures growth well, while d\% suffers from a convexity bias and is a poor measure of growth. Panels (c) and (d) of Figure~\ref{fig:MC2} make the convexity bias clear by plotting the relation between growth and dlog and between growth and d\%, respectively.\n\nFinally, Panel (c) of Table~\ref{tab:MC2} presents correlations between growth of DLN RVs and the growth measures. For DLN, the appropriate concept of growth is $\left(Z_t^p\cdot\epsilon_t^p - Z_t^n\cdot\epsilon_t^n\right)\/\lvert Z_t^p-Z_t^n\rvert$, and the panel shows that the growth formula for DLN derived in Equation~\ref{eq:DLNGROWTH} captures it well. The panel also shows that dlog, which is only defined for positive values and hence has limited usability for measuring DLN growth, does poorly even on that restricted sample, reaching a correlation of only about 0.75 with DLN growth when limiting to positive, non-tiny values. Panels (e) and (f) of Figure~\ref{fig:MC2} show that dDLN is indeed an appropriate measure, while d\% is an unbiased but noisy measure of DLN growth.\n\n\n\n\comments{\n\subsection{DLN as an approximating distribution}\n\nA third Monte-Carlo experiment tests how well the DLN distribution approximates several ``compound'' distributions arising in practice. The test is also useful in providing evidence that our tests have power to reject ``non-DLN'' distributions. The distributions I concentrate on are: (i) sum of two DLN RVs; (ii) multiplication of DLN by log-Normal RV; (iii) division of DLN by log-Normal RV; (iv) multiplication of DLN by Normal RV; and (v) multiplication of Normal by log-Normal RV. The two distributions being compounded are independent of each other.\n\nFor all DLN RVs, I use the parameter region $\pmb{Q}$ from Equation~\ref{eq:MC_Region_1}.
For Normal and log-Normal RVs, I use the following parameter regions:\n\\begin{equation} \\label{eq:MC_Region_3}\n\\begin{split}\n\\pmb{Q}_{\\widehat{N}}: \\ \\ & \\left(\\mu_N,\\sigma_N\\right) \\in \\left(\\left[-100,100\\right],\\left[10,100\\right]\\right) \\\\\n\\pmb{Q}_{\\widehat{LN}}: \\ \\ & \\left(\\mu_{LN},\\sigma_{LN}\\right) \\in \\left(\\left[-3,3\\right],\\left[0.5,2.5\\right]\\right) \\\\\n\\end{split}\n\\end{equation}\n\nThe data collection\/creation for the Monte-Carlo analysis in this section proceeds as follows: \\\\\nFor each $i \\in \\{1...N\\}$:\n\\begin{enumerate}\n \\item Draw a first parameter vector $\\pmb{\\Theta}^1_i$, from the appropriate parameter range, with Uniform probability.\n \\item Draw a second parameter vector $\\pmb{\\Theta}^2_i$, from the appropriate parameter range, with Uniform probability.\n \\item Draw $K$ observations $X_{i,k}$ from the first distribution with parameter vector $\\pmb{\\Theta}^1_i$.\n \\item Draw $K$ observations $Y_{i,k}$ from the second distribution with parameter vector $\\pmb{\\Theta}^2_i$.\n \\item Calculate the compound RV values $W_{i,k}$ using $X_{i,k}$ and $Y_{i,k}$.\n \\item Estimate the DLN parameters of $W_{i,k}$, denoted $\\pmb{\\hat{\\Theta}}_i$, using the method of Section~\\ref{sec:Estim}.\n \\item Calculate the K-S, C-2, and A-D test statistics based on $\\pmb{\\hat{\\Theta}}_i$ and $W_{i,k}$.\n \\item Calculate the p-values of the test statistics using the ICDF approximations from Section~\\ref{sec:TestStats}.\n\\end{enumerate}\n\nTable~\\ref{tab:MC3} presents the results of the analysis.\\rp{Need to refresh the table after the fixes to the dlnfit code.} For each compound distribution, Panel (a) presents the percent of Monte-Carlo repetitions rejected as DLN by each of the three distributional tests, at the 1\\%, 5\\%, and 10\\% confidence levels. Panels (b) and (c) present the (5,10,50)$^{th}$ percentiles of the test statistics and their accompanying p-values, respectively, across the Monte-Carlo experiments.\n\n\\RPprep{Approximation Monte Carlo Experiments}{0}{0}{MC3}{%\n This table presents results of approximation Monte-Carlo experiments with $N=25,000$ repetitions and $K=100,000$ observations drawn in each repetition. For each compound distribution tested, Panel (a) reports the share of observations rejected as being DLN at the 1\\%, 5\\%, and 10\\% confidence levels using each of the three distributional tests K-S, C-2, and A-D. 
Panels (b) and (c) present the (5,10,50)$^{th}$ percentiles of the test statistics and their accompanying p-values, respectively, across all Monte-Carlo runs for each compound distribution.\n}\n\\RPtab{%\n \\begin{tabularx}{\\linewidth}{Frrrrrrrrr}\n \\toprule\n & \\multicolumn{3}{c}{K-S} & \\multicolumn{3}{c}{C-2} & \\multicolumn{3}{c}{A-D} \\\\\n\t\\textit{Panel (a): rejected} & \\multicolumn{1}{c}{1\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{1\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{1\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} \\\\\n \\midrule\n DLN + DLN & 0.007 & 0.076 & 0.337 & 0.007 & 0.104 & 0.334 & 0.009 & 0.082 & 0.355 \\\\\n DLN * LN & 0.101 & 0.377 & 0.539 & 0.111 & 0.408 & 0.538 & 0.084 & 0.378 & 0.549 \\\\\n DLN \/ LN & 0.129 & 0.375 & 0.521 & 0.148 & 0.385 & 0.522 & 0.117 & 0.370 & 0.515 \\\\\n DLN * N & 0.001 & 0.148 & 0.690 & 0.001 & 0.385 & 0.753 & 0.001 & 0.178 & 0.668 \\\\\n LN * N & 0.000 & 0.070 & 0.324 & 0.000 & 0.080 & 0.270 & 0.000 & 0.086 & 0.353 \\\\ \\\\\n \n\t\\textit{Panel (b): stats} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} \\\\\n \\midrule\n DLN + DLN & 0.002 & 0.002 & 0.004 & 3.910 & 4.279 & 9.975 & 0.011 & 0.025 & 0.209 \\\\\n DLN * LN & 0.001 & 0.002 & 0.008 & 3.349 & 3.855 & 32.59 & 0.006 & 0.009 & 1.068 \\\\\n DLN \/ LN & 0.001 & 0.001 & 0.007 & 3.441 & 3.864 & 22.28 & 0.006 & 0.009 & 0.707 \\\\\n DLN * N & 0.002 & 0.002 & 0.009 & 4.170 & 4.657 & 56.31 & 0.011 & 0.018 & 1.358 \\\\\n LN * N & 0.001 & 0.001 & 0.002 & 3.233 & 3.660 & 5.321 & 0.008 & 0.010 & 0.042 \\\\ \\\\\n \n\t\\textit{Panel (c): p-vals} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{50\\%} \\\\\n \\midrule\n DLN + DLN & 0.042 & 0.055 & 0.136 & 0.035 & 0.047 & 0.138 & 0.040 & 0.053 & 0.132 \\\\\n DLN * LN & 0.000 & 0.010 & 0.082 & 0.000 & 0.007 & 0.073 & 0.001 & 0.013 & 0.080 \\\\\n DLN \/ LN & 0.000 & 0.002 & 0.092 & 0.000 & 0.002 & 0.087 & 0.000 & 0.007 & 0.091 \\\\\n DLN * N & 0.041 & 0.046 & 0.072 & 0.030 & 0.034 & 0.057 & 0.039 & 0.046 & 0.074 \\\\\n LN * N & 0.044 & 0.058 & 0.238 & 0.047 & 0.056 & 0.315 & 0.040 & 0.055 & 0.247 \\\\\n\t\\bottomrule\n \\end{tabularx}\n}\n\nThe results in Table~\\ref{tab:MC3} first establish that the distributional tests used have sufficient power to reject distributions that are non-DLN. In Panel (a), for each of the three test methods, about 30\\%-70\\% of Monte-Carlo repetitions are rejected as being DLN at the 10\\% confidence level. Even at the 5\\% level, around 40\\% of DLN*LN and DLN\/LN repetitions are rejected.\n\nSecond, the results in Table~\\ref{tab:MC3} indicate that DLN performs well as an approximating distribution for sum of DLN (DLN+DLN), multiplication of log-Normal by Normal (LN*N), and to a lesser extent multiplication of DLN by Normal (DLN*N). E.g., for these three compound distributions, the 5$^{th}$ percentile of p-values using all three tests is around $0.03$-$0.05$. DLN however performs poorly as an approximating distribution for the other two compound distributions, DLN*LN and DLN\/LN. 
For comparison, both of these compound distributions have 5$^{th}$ percentile of p-values around $0.000$-$0.001$.\n}\n\n\n\n\section{Summary}\n\nThis paper presents the Difference-of-Log-Normals (DLN) distribution, stemming from the multiplicative CLT, and lays a methodological and quantitative foundation for the analysis of DLN-distributed phenomena. It begins by characterizing the distribution, defining its PDF and CDF, presenting estimators for its moments and parameters, and generalizing it to elliptical multi-variate RVs.\n\nIt goes on to discuss mathematical methods useful in the analysis of DLN distributions. First, it shows the close relation between the DLN distribution and the Hyperbolic Sine, and why the Inverse Hyperbolic Sine (asinh) is a useful transform when dealing with ``double exponential'' RVs such as the DLN.\n\nNext, it considers the concept of growth for DLN RVs. It extends the classical definition of growth, which applies only to positive RVs, to RVs $\in \mathbb{R}$. It then shows that the measure of growth used is dependent on the distribution of the data being measured. It makes the case that growth in Normal, log-Normal, and DLN RVs should be measured differently, and develops the appropriate measure of growth for DLN RVs.\n\nThe paper reports the results of extensive Monte-Carlo experiments, aimed at establishing the properties of the estimators and measures presented. It shows that the moment estimators have good accuracy, but highlights their small-sample bias, especially for the case of kurtosis. A small-sample bias-correction method for the kurtosis estimator is merited. It also shows that the parameter estimators proposed are reasonably accurate and unbiased. To enable accurate tests of whether some data are DLN, it establishes critical values and p-value estimators for three distributional tests: Kolmogorov-Smirnov, Chi-square, and Anderson-Darling.\n\nA second Monte-Carlo experiment verifies that the generalized growth measures discussed indeed back out the appropriate growth concept for Normal, log-Normal, and DLN distributions. It especially highlights the ``convexity\/concavity bias'' arising when applying the wrong measure of growth to an RV. Of importance here is the evidence that measuring growth of log-Normal RVs using percentage growth leads to a significant convexity bias.
\\comments{A third Monte-Carlo experiment presents evidence that DLN is also a useful approximating distribution, able to approximate several compound distributions.}\n\n\n\n\n\n\n\n\n\n\n\\comments{\n\n\\subsection{Alternative parametrization}\n\\label{sec:Alt}\n\nConsider the following bijection:\n\\begin{equation} \\label{eq:REPARAM}\n\\begin{bmatrix}\n\\alpha \\\\ \\beta \\\\ \\gamma \\\\ \\delta \\\\ \\epsilon\n\\end{bmatrix}\n= \\text{asinh}\\left(\n\\begin{bmatrix}\n\\text{exp}\\left(\\mu_p+\\frac{\\sigma_p^2}{2}\\right) - \\text{exp}\\left(\\mu_n+\\frac{\\sigma_n^2}{2}\\right) \\\\\n\\text{exp}\\left(\\mu_p+\\frac{\\sigma_p^2}{2}\\right) + \\text{exp}\\left(\\mu_n+\\frac{\\sigma_n^2}{2}\\right) \\\\\n\\left(\\text{exp}\\left(\\sigma_p^2\\right)-1\\right) - \\left(\\text{exp}\\left(\\sigma_n^2\\right)-1\\right) \\\\\n\\left(\\text{exp}\\left(\\sigma_p^2\\right)-1\\right) + \\left(\\text{exp}\\left(\\sigma_n^2\\right)-1\\right) \\\\\n\\text{exp}\\left(\\sigma_p\\cdot\\sigma_n\\cdot\\rho_{pn}\\right)-1\n\\end{bmatrix}\\right)\n\\end{equation}\nwhich maps the parameter vector $\\pmb{\\Theta}_1 = (\\mu_p,\\sigma_p,\\mu_n,\\sigma_n,\\rho_{pn})$ to the parameter vector $\\pmb{\\Theta}_2 = (\\alpha,\\beta,\\gamma,\\delta,\\epsilon)$. This parametrization stems from Equations~\\ref{eq:MUDLN} and~\\ref{eq:SIGDLN}, in which the various terms appear. It further concentrates on the sums and differences of the terms, and applies an asinh transform to the parameter space. The transformed parameters in $\\pmb{\\Theta}_2$ correlate with the (asinh of the) first four moments of the DLN distribution described by the vector $\\pmb{\\Theta}_1$, as shown at the Monte-Carlo experiments in Section~\\ref{sec:MC}. This alternative parametrization is useful for implementing method-of-moments estimators for the parameters of the DLN.\n\nPanel (c) of the same table presents an analysis of the alternative parametrization described in Section~\\ref{sec:Alt}. The correlation between the alternative parameters and the predicted and actual moments is high for the first four parameters and respective moments, but is practically zero for the fifth parameter and moment. This indicates the fifth parameter in the alternative parametrization does not capture the associated moment. There is again significant bias in the even parameters relative to their corresponding moments.\n\nPanel (c) --- Alternative parameters & $\\alpha$ & $\\beta$ & $\\gamma$ & $\\delta$ & $\\epsilon$ \\\\\n\\midrule\n$\\widehat{M}_i$ Correlation & 1.0000 & 0.9280 & 0.8568 & 0.9682 & 0.0183 \\\\\n$\\widehat{M}_i$ Bias & 0.0000 & -5.8456 & -0.0574 & -11.486 & 4.5365 \\\\\n$\\widehat{M}_i$ Accuracy & 0.0000 & 4.2309 & 4.7317 & 7.6611 & 60.5636 \\\\\n$M_i$ Correlation & 0.9994 & 0.9359 & 0.7638 & 0.7692 & 0.0081 \\\\\n$M_i$ Bias & 0.0001 & -5.4896 & 0.0160 & -3.7869 & 4.3550 \\\\\n$M_i$ Accuracy & 0.0275 & 4.0366 & 2.1522 & 1.6222 & 26.3605 \\\\ \\\\\n\nPanel (c) compares the alternative parametrization of Section~\\ref{sec:Alt} $\\pmb{\\widetilde{\\Theta}}$ with the first five moment estimators and actual moments $\\widehat{M}_i$ and $M_i$.\n\nOur last distribution of interest is the firm income growth distribution (FIGD). Dealing with growth aspects of income presents a methodological issue, however, as our measures of growth are ill-equipped to describe growth in sometimes-negative values. I hence begin by extending the growth measures to deal with values in $(-\\infty,\\infty)$ rather than $(0,\\infty)$. 
To fix ideas, consider the following seven scenarios: a firm earns (i) \\$100 in period $t$ and \\$200 in period $t+1$, (ii) \\$100 in $t$ and \\$1 in $t+1$, (iii) \\$100 in $t$ and -\\$100 in $t+1$, (iv) -\\$100 in $t$ and \\$100 in $t+1$, (v) -\u00a210,000 in $t$ and \u00a210,000 in $t+1$, (vi) \\$0 in $t$ and \\$100 in $t+1$, (vii) \\$0 in $t$ and -\\$100 in $t+1$.\n\nWhat was the growth in firm income in each scenario? The two standard ways in which to measure growth are percent change $d\%(X_{t+1}) = (X_{t+1} - X_{t})\/X_{t}$, and log-point change $\text{dlog}(X_{t+1}) = \log(X_{t+1}) - \log(X_{t})$. Using either method, the first two scenarios are well-defined. In scenario (i), income growth was (200-100)\/100 = 1 = 100\%, or it was log(200)-log(100) = 0.693 = 69.3 log-points (lp). In scenario (ii), it was (1-100)\/100 = -99\% or log(1)-log(100) = -461 lp. Note that log-point growth quickly tends to $-\infty$ as we decrease firm income during the second period in scenario (ii) from $1$ to $0$.\n\nIn scenario (iii), one could say that percent growth was ((-100)-100)\/100 = -2 = -200\%, extending the definition of percent changes. But this extension leads percent growth in scenario (iv) to be -200\% as well, even though firm income grew. A more intuitive extension of the percent concept is to use\n\begin{equation} \label{eq:gprecent}\n\widetilde{d\%}(X_{t+1}) = (X_{t+1} - X_{t})\/\lvert X_{t}\rvert\n\end{equation}\nGrowth rates then become (i) 100\%, (ii) -99\%, (iii) -200\%, (iv) 200\%, (v) 200\%, (vi) $+\infty$, and (vii) $-\infty$. This extension improves the direction (i.e., sign) of percent growth. As can be seen in scenario (v), it is also scale-invariant (i.e., not impacted by the unit of measurement). Yet growth rates from values close to zero remain explosive.\n\nCan we likewise extend the log-point growth concept? Using the inverse hyperbolic sine (asinh) again, we can define the growth in $X$ to be \n\begin{equation} \label{eq:dasinh}\n\text{dasinh}(X_{t+1}) = \text{asinh}(X_{t+1}) - \text{asinh}(X_{t})\n\end{equation}\nGrowth rates in the seven scenarios are then (i) 69.3 asinh points (ap), (ii) -442 ap, (iii) -1060 ap, (iv) +1060 ap, (v) +1980 ap, (vi) +530 ap, (vii) -530 ap. This measure has several desirable properties: (a) growth to and from zero is well-defined and non-explosive, (b) growth from -X to X is double the growth from 0 to X, (c) growth from -X to X is the opposite of growth from X to -X, (d) growth between two positive values quickly approaches the proper log-point growth (due to the quickly decreasing approximation error of asinh discussed above). One downside of the measure is that it is scale-dependent when the two values being compared have opposite signs, as can be seen in scenario (v).\n\n}\n\n\clearpage\n\bibliographystyle{JFE}\n