\section{Introduction}
\label{sec:intro}

It is well known that topologically non-trivial configurations play an essential role in
Quantum Chromodynamics (QCD).
For $J^P = 0^-$ mesons, such configurations must be present in order to split the iso-singlet $\eta'$ meson
from the octet of pions, kaons, and the $\eta$ meson \cite{Witten:1979vv,Veneziano:1979ec,tHooft:1986ooh}.
They also affect the spectrum of mesons with higher spin \cite{Giacosa:2017pos},
and contribute to the proton and photon structure functions in polarised deep-inelastic scattering
\cite{Veneziano:1989ei,Shore:1990zu,Shore:1991dv,Shore:1997tq,Narison:1998aq,Bass:2004xa,Shore:2007yn,Tarasov:2020cwl,Tarasov:2021yll}.

At weak coupling in an $SU(N)$ gauge theory, the dominant configurations are instantons,
which are self-dual solutions of the classical equations of motion.
By asymptotic freedom, instanton effects can be reliably computed
at high temperature, $T$, or quark chemical potential, $\mu_{\rm qk}$
\footnote{The computation at $T\neq 0$ is only complete to one loop order
\cite{Gross:1980br,KorthalsAltes:2014dkx,KorthalsAltes:2015zpx,Pisarski:2019upw,Boccaletti:2020mxu}.
When $\mu_{\rm qk} \neq 0$, even at one loop order
the result for arbitrary instanton scale size is lacking \cite{Pisarski:2019upw}.}.
The action for a single instanton with unit topological charge is $8 \pi^2/g^2$.
Since by asymptotic freedom the running coupling constant $g^2(T) \sim 8 \pi^2/(c \log(T))$
\footnote{For an $SU(N)$ gauge theory coupled to $N_f$ flavors of massless quarks,
$c = (11 N - 4 N_f C_f)/3$, with $C_f = (N^2-1)/(2N)$.},
the topological susceptibility falls off sharply at high temperature,
\begin{equation}
 \chi_{\rm top}(T)\sim \exp(- 8 \pi^2/g^2(T)) \sim 1/T^{c} \;\; , \;\; T \rightarrow \infty \; .
\end{equation}
Remarkably, numerical simulations in lattice
QCD find that this power law holds down to temperatures as low as $\approx 300$~MeV when $\mu_{\rm qk} = 0$,
which we assume henceforth
\cite{Borsanyi:2015cka,GrillidiCortona:2015jxo,Bonati:2015vqz,Borsanyi:2016ksw,Frison:2016vuc,Petreczky:2016vrs,Taniguchi:2016tjc,Bonati:2018blm,Bonanno:2020hht,Lombardo:2020bvn,Jahn:2021qrp,Borsanyi:2021gqg,Chen:2022fid}
\footnote{
The overall magnitude of the topological susceptibility is about an order of magnitude greater than the
one loop instanton result, but surely it is necessary to include
the corrections to the instanton density at two loop order for $T \neq 0$.
Multi-instanton configurations can also contribute at low $T$ \cite{Rennecke:2020zgb}.}.

For $T < 300$~MeV, the topological susceptibility in QCD is
not that of a dilute instanton gas.
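As a simple numerical illustration of this falloff (our own sketch; the function names are purely illustrative and not from the original analysis):

```python
# One-loop exponent c = (11 N - 4 N_f C_f)/3 from the footnote above,
# which controls the falloff chi_top(T) ~ 1/T^c at asymptotically high T.

def c_exponent(N, Nf):
    """Exponent c for SU(N) coupled to Nf massless fundamental flavors."""
    Cf = (N * N - 1) / (2 * N)
    return (11 * N - 4 * Nf * Cf) / 3

def chi_suppression(N, Nf, t_ratio):
    """Relative suppression of chi_top when T grows by the factor t_ratio."""
    return t_ratio ** (-c_exponent(N, Nf))

print(c_exponent(3, 0))            # pure SU(3) gauge theory: 11.0
print(c_exponent(3, 3))            # N = 3 with Nf = 3: 17/3
print(chi_suppression(3, 0, 2.0))  # doubling T suppresses chi_top by 2^11
```

For the pure $SU(3)$ gauge theory $c = 11$, so doubling the temperature suppresses $\chi_{\rm top}$ by $2^{11} \approx 2 \times 10^3$.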
To understand what generates the topological
susceptibility at lower temperature, and especially in vacuum, it is useful to consider an
$SU(N)$ gauge theory without dynamical quarks.
In this case, a global $\mathbb{Z}_N$ symmetry is
spontaneously broken above the temperature for deconfinement, $T_{\rm deconf}$ \cite{Gaiotto:2014kfa}.
Numerical simulations of the pure gauge theory find that the dependence on $N$ is rather weak.
Forming a dimensionless ratio between the topological susceptibility and the square of the string tension,
at zero temperature $\chi_{\rm top}(0)/\sigma^2$ only varies by $\approx 10\%$ between $N=3$
\cite{Alles:1996nm,Durr:2006ky,Luscher:2010ik,Xiong:2015dya,Jahn:2018dke,Giusti:2018cmp}
and higher $N$
\cite{Lucini:2001ej,Vicari:2008jw,GarciaPerez:2009mg,Fodor:2009nh,Fodor:2009ar,Lucini:2012gg,Bonati:2013tt,Ce:2016awn,Bonati:2016tvi,Kitano:2017jng,Itou:2018wkm,Bonanno:2018xtd,Kitano:2021jho,Bennett:2022gdz}.
These results suggest that as $N \rightarrow \infty$, the topological susceptibility does not vary with temperature
in the confined phase.
Numerical simulations find that the
deconfining phase transition is of first order for three or more colors, with
$T_{\rm deconf} \approx 270$~MeV for $N = 3$
\cite{Boyd:1996bx,Borsanyi:2012ve,Shirogane:2016zbf,Caselle:2018kap,Borsanyi:2022xml}.
For $N \geq 3$, $\chi_{\rm top}(T)$ jumps at $T_{\rm deconf}$, and then
falls off rapidly with increasing $T$, dominated by instantons above some temperature close to $T_{\rm deconf}$.

It is difficult to see how the topological susceptibility could be due to instantons in the confined phase
at large $N$
\footnote{There could be a dense liquid or crystal of instantons \cite{Carter:2001ih,Liu:2018znq}.
However, it is necessary
to show that the action of such a crystal vanishes at order $N$, and is only $\sim 1$, in order to generate
a topological susceptibility of the same order
\cite{Witten:1978bc}.}.
Holding $g^2 N \equiv \lambda$ fixed as $N \rightarrow \infty$, the contribution of a single instanton
to the partition function is
exponentially suppressed, $\sim \exp(- (8 \pi^2/\lambda) N)$.
This quandary was recognized originally by
Witten \cite{Witten:1979vv} and Veneziano \cite{Veneziano:1979ec}, who nevertheless argued that in vacuum
the topological susceptibility is significant at large $N$ \cite{Witten:1998uka}.

The most natural possibility is that there are objects with fractional topological charge $\sim 1/N$, whose
contribution directly survives at infinite $N$, $\sim \exp(-8 \pi^2/\lambda)$.
't Hooft first presented explicit configurations on tori of finite volume
\cite{tHooft:1980kjq,tHooft:1981nnx,vanBaal:1982ag,Sedlacek:1982cd,Nash:1982kp,Killingback:1984en,Alvarez:2003}.
Kraan, van Baal, Lee, and Lu
\cite{Lee:1997vp,Lee:1998vu,Lee:1998bb,Kraan:1998pm,Kraan:1998sn,Diakonov:2002fq,Bruckmann:2009nw,Diakonov:2009jq,Poppitz:2008hr,Gonzalez-Arroyo:2019wpu,Anber:2021upc}
showed that at nonzero temperature
instantons can be viewed as made of $N$ constituents, each with topological charge $1/N$; the constituents have
nontrivial holonomy, and cannot be pulled arbitrarily far apart.
Consequently, it is not apparent how these configurations survive in vacuum.

A useful limit is to study gauge theories on a femto-slab, where one spatial dimension, $L$,
is very small, with $L \Lambda_{\rm QCD} \ll 1$,
where $\Lambda_{\rm QCD}$ is the renormalization mass scale of QCD.
Over large distances the theory reduces to one in $2+1$ dimensions
\cite{Polyakov:1976fu,Dunne:2000vp,Kogan:2002au,Kovchegov:2002vi}.
On a femto-slab, semi-classical techniques demonstrate
that monopole-instantons with topological charge $\sim 1/N$ are
ubiquitous
\cite{Unsal:2006pj,Unsal:2007vu,Unsal:2007jx,Unsal:2008ch,Shifman:2008ja,Simic:2010sv,Anber:2011gn,Poppitz:2012sw,Anber:2013doa,Aitken:2017ayq,Unsal:2020yeh,sym14010180,Tanizaki:2022ngt}.
However, it is not clear what happens when the size of the slab increases to distances where $L \Lambda_{\rm QCD} \sim 1$.

In this paper we consider configurations with fractional topological charge $\sim 1/N$ in both
the $\mathbb{CP}^{N-1}$ model in $1+1$ dimensions
\cite{Gross:1977wu,Witten:1978bc,DAdda:1978vbw,DiVecchia:1979pzw,Berg:1979uq,Fateev:1979dc,Rothe:1980rp,Zhitnitsky:1988zd,Ahmad:2005dr,Lian:2006ky,Bruckmann:2007zh,Brendel:2009mp,Unsal:2020yeh},
and $SU(N)$ gauge theories without dynamical quarks in
$3+1$ dimensions. The former is useful as a toy model to illustrate our basic point. Instantons with
integral topological charge are solutions of the classical equations of motion
\cite{Atiyah:1978ri}. For both models,
the classical action is invariant under a scale symmetry, which implies that instantons have a scale size, $\rho$,
which ranges from zero to infinity. In contrast, our configurations are stationary points of a quantum
effective action.
Thus their scale size is manifestly {\it non}-perturbative, on the order of the confinement
scale.
In essence, we suggest that as a femto-slab increases in width, the size of the objects with
fractional topological charge remains fixed at the confinement scale.
In the $\mathbb{CP}^{N-1}$ model this is apparent,
and we argue that the same holds for pure $SU(N)$ gauge theories.
We give both a general mathematical analysis, and outline how to construct
$\mathbb{Z}_N$ dyons with fractional topological charge.

We discuss how to measure such objects using numerical simulations
on the lattice, following Edwards, Heller, and Narayanan \cite{Edwards:1998dj,karthik:2022}.
We stress that with periodic boundary conditions, the total topological charge is necessarily integral.
To measure a system whose total topological charge is fractional,
$\mathbb{Z}_N$ twisted boundary conditions must be used. Nevertheless,
we suggest that, as on a femto-slab
\cite{Polyakov:1976fu,Dunne:2000vp,Kogan:2002au,Kovchegov:2002vi,Unsal:2006pj,Unsal:2007vu,Unsal:2007jx,Unsal:2008ch,Shifman:2008ja,Simic:2010sv,Anber:2011gn,Poppitz:2012sw,Anber:2013doa,Aitken:2017ayq,Unsal:2020yeh,sym14010180,Tanizaki:2022ngt},
the vacuum is a condensate of $\mathbb{Z}_N$ dyons and anti-dyons, each with
fractional topological charge. This should
be measurable, assuming that there is some separation between the dyons and anti-dyons.

We conclude with a discussion of how quarks and $\mathbb{Z}_N$ dyons might interact in QCD.


\section{$\mathbb{CP}^{N-1}$ model}
\label{sec:cpn}

This model is a nonlinear sigma model in two spacetime dimensions where the target space is the complex projective space
$\mathbb{CP}^{N-1}$.
The target space can thus be described by
$N$ complex variables $z^i$, the so-called homogeneous
coordinates, with the identification
$z^i \sim w \, z^i$, $w \in \mathbb{C} - \{ 0\}$.
One can eliminate the freedom in the magnitude of
$w$ in this identification by setting $\sum_{i=1}^N {\bar{z}}^i z^i = 1$.
The phase of $w$ can be removed by gauging the $U(1)$ corresponding
to such phase transformations.
The Lagrangian density defining the theory can thus be taken to be
\begin{equation}
 {\cal L} = \frac{1}{g^2} \sum_{i = 1}^N \; |D_\mu z^i|^2 \;\; ; \;\; D_\mu = \partial_\mu - i A_\mu \; ,
 \label{lag_cpn}
\end{equation}
where it is understood that the $z^i$'s satisfy the constraint $\sum_{i=1}^N \overline{z}^i z^i = 1$ at each point.
The gauge field $A_\mu$ ensures that the $z^i$ are also invariant under local $U(1)$ transformations,
$z^i(x) \rightarrow {\rm e}^{i \alpha(x)} z^i(x)$.
(Notice that the constraint ${\bar{z}}\cdot z = 1$ defines a sphere $S^{2 N -1}$
at every point in spacetime.
The gauge symmetry removes a phase, i.e., $S^1$, so that
we get $S^{2 N -1}/ S^1$, which is another way to define
$\mathbb{CP}^{N-1}$.)
The absence of a kinetic term for the gauge field in Eq. (\ref{lag_cpn})
ensures that no new degrees of freedom are introduced via
the gauging. In fact, classically one can eliminate $A_\mu$ through its equation of motion,
\begin{equation}
A_\mu = - {i \over 2} \left( {{{\bar{z}}^i \partial_\mu z^i - \partial_\mu {\bar{z}}^i \, z^i }\over {\bar{z}} \cdot z }\right) \; ,
\label{lag-cpn2}
\end{equation}
so that ${\cal L}$ can be re-expressed entirely in terms of the $z^i$ and ${\bar{z}}^i$.
Finally, we note that the only
coupling constant in Eq. (\ref{lag_cpn}) is $g^2$, which is dimensionless.

Clearly the Lagrangian in Eq.
(\ref{lag_cpn}) is invariant under global $SU(N)$ transformations,
$z^i \rightarrow U^{i}_{~j} \, z^j$, where $U \in SU(N)$.
Elements of the center of $SU(N)$, which is ${\mathbb{Z}_N}$, are special. These are
the $U_k = {\rm e}^{2 \pi i k/N} {\bf 1}$, where $k = 0, 1, \ldots, N-1$.
Under $U_k$ the $z^i$'s transform as $z^i \rightarrow e^{2 \pi i k/N} z^i$, but since the
$z^i$ are homogeneous coordinates (with the identification $z^i \sim w\, z^i$), this is equivalent to the identity transformation.
Thus the full global symmetry is $SU(N)/{\mathbb{Z}_N}$ \cite{Witten:1978bc,DAdda:1978vbw,Unsal:2020yeh}.

The topological winding number is
\begin{equation}
 Q = \frac{1}{2 \pi} \int d^2 x \; \epsilon^{\mu \nu} \partial_\mu A_\nu \; .
\end{equation}
For fields where the $z^i$ approach a constant at infinity, $Q$ is an integer.
All classical configurations with $Q \neq 0$ are known \cite{DAdda:1978vbw}.
Keeping $g^2 N$ fixed as $N \rightarrow \infty$, the value of the classical action is uniformly $\sim N$. The fluctuations
about arbitrary instanton configurations have also been computed. While the model simplifies for
$N=2$, where it reduces to an $O(3)$ model \cite{Gross:1977wu,Berg:1979uq,Fateev:1979dc},
for $N > 2$ the integration over the collective coordinates of the instantons is not tractable. Even so,
the instanton contributions all appear to be exponentially suppressed, term by term, at large $N$.

As discussed in the seminal papers \cite{Witten:1978bc,DAdda:1978vbw},
the large $N$ analysis in the quantum theory
can be carried out by
introducing a Lagrange multiplier field $\lambda(x)$ to impose the constraint
${\bar{z}}\cdot z - 1 = 0$ and then integrating out
the $z^i$
fields.
This leads to an effective action for $A_\mu$ and $\lambda$,
\begin{equation}
 {\cal S}_{\rm eff} = N \, {\rm tr} \, \log \left( -D_\mu^2 + i \lambda \right) - i \int d^2 x \; \frac{\lambda(x)}{g^2}
 \; .
\label{eff_act_cpn}
\end{equation}
The corresponding equations of motion are
\begin{equation}
 N \; {\rm tr} \; D_\mu^{\rm cl} \frac{1}{-(D^{\rm cl}_\mu)^2 + m^2(x)} = 0 \; ,
 \label{eqn-A}
\end{equation}
and
\begin{equation}
 N \; {\rm tr} \; \frac{1}{-(D^{\rm cl}_\mu)^2 + m^2(x)} - \frac{i}{g^2} = 0 \; ,
 \label{eqn-lamb}
\end{equation}
for arbitrary solutions $A_\mu(x) = A_\mu^{\rm cl}(x)$ and $i \lambda(x) = m^2(x)$. In vacuum
$A_\mu^{\rm cl}=0$ and $m^2(x)$ is constant, with the dynamically generated mass
$m$ related to the coupling constant $g^2$ through dimensional transmutation \cite{Witten:1978bc,DAdda:1978vbw}.

The quantum dynamics of the model, defined by Eq.
(\ref{eff_act_cpn}), has significant qualitative differences compared to the classical analysis
based on Eq. (\ref{lag_cpn}). Classically, the constraint
${\bar{z}} \cdot z = 1$ is imposed {\it a priori}, or arises as the equation of motion
for $\lambda$. The solution of this equation necessarily breaks the
$SU(N)$ global symmetry. In the quantum theory,
this symmetry breaking is eliminated, in accordance with
the Mermin-Wagner-Coleman theorem. The expectation values of the
$z^i$ vanish, and they behave like massive fields. Explicitly,
in a derivative expansion, the effective action of Eq. (\ref{eff_act_cpn}),
with the inclusion of the fields $z^i$, takes the
form
\begin{equation}
{\cal S}_{\rm eff} = \int d^2x \left[ \vert (\partial_\mu -i A_\mu ) z^i \vert^2 + m^2 {\bar{z}}^i z^i
+ {N \over 48 \pi m^2} F_{\mu\nu}F^{\mu\nu} + \cdots \right]
\label{eff_act-2}
\end{equation}
where $m^2$ is the vacuum value determined by Eqs.
(\ref{eqn-A}) and (\ref{eqn-lamb}).
(This equation gives the generating functional of the one-particle irreducible vertices,
and so includes small fluctuations in the $z^i$ fields, in addition to Eq. (\ref{eff_act_cpn}).)
The terms with derivatives of $m^2(x) = i \lambda (x)$ are not displayed here,
as they do not play a significant role in the subsequent discussion.
We note that the higher terms indicated by the ellipsis in
Eq. (\ref{eff_act-2}), by virtue of gauge invariance, will involve
only covariant derivatives of the fields $z^i$, ${\bar{z}}^i$ and powers
of $F_{\mu\nu}$.

Before we proceed to consider topologically nontrivial field configurations, there is one key observation
regarding the action in Eq. (\ref{eff_act-2})
which is important for us. The global symmetry of the theory
is still $SU(N)/{\mathbb{Z}_N}$. By virtue of confinement,
the allowed states are $\mathbb{Z}_N$-invariant \cite{Witten:1978bc}.

It is now possible to consider topologically nontrivial configurations,
not as solutions to the classical equations of motion,
but as stationary points of the effective action at large $N$.
In polar coordinates $(r,\varphi)$ in two dimensions, at large $r$ the solution for the gauge field must satisfy
\begin{equation}
 A_\mu dx^\mu \sim Q d \varphi \; ; \; {\rm i.e.,} ~A_\varphi \sim {Q \over r} \; \; {\rm as~}r \rightarrow \infty \; ,
 \label{asymp_behavior_int}
\end{equation}
so that $\int F \sim Q \neq 0$.
This should be accompanied by a suitable
ansatz for $m^2(x)$ as a function of $r$, but
we do not elaborate on this since it is not important for the main thread of our arguments.
Determining the solution of the nonlocal equations of motion in
Eqs. (\ref{eqn-A}) and (\ref{eqn-lamb}) is not elementary.
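In the vacuum sector, by contrast, Eq. (\ref{eqn-lamb}) reduces to the standard large-$N$ gap equation, and the dimensional transmutation mentioned above can be made explicit. A minimal numerical sketch (ours, not from the original analysis; we assume a hard ultraviolet cutoff $\Lambda$ on the momentum integral):

```python
# Vacuum gap equation from Eq. (eqn-lamb) with A_mu = 0, regulated by a
# hard cutoff Lambda on |k|:
#   N * int d^2k/(2 pi)^2  1/(k^2 + m^2) = 1/g^2
#   =>  (N / 4 pi) * log(1 + Lambda^2/m^2) = 1/g^2 ,
# so m = Lambda / sqrt(exp(4 pi/(N g^2)) - 1) ~ Lambda * exp(-2 pi/(N g^2)).

import math

def gap_mass(N, g2, Lam):
    """Exact solution of the cutoff gap equation for the dynamical mass m."""
    return Lam / math.sqrt(math.exp(4 * math.pi / (N * g2)) - 1)

N, g2, Lam = 10, 0.25, 1.0
m = gap_mass(N, g2, Lam)
m_dt = Lam * math.exp(-2 * math.pi / (N * g2))  # dimensional transmutation

# The two agree up to corrections of order m^2/Lambda^2, and m depends on
# N and g^2 only through the 't Hooft coupling lambda = g^2 N.
print(m, m_dt)
```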
But there is one aspect of any such topologically nontrivial solution which is worthy of note, and which
in fact is a recurrent point throughout our analysis.
While the classical action is invariant under scale transformations, at large $N$ the quantum effective
action is not. Thus while Eq. (\ref{asymp_behavior_int}) fixes the behavior at infinity, the nature of
the full solution varies over a distance $\sim 1/m$.

There is one possibility, noted by Witten \cite{Witten:1978bc}, which is that the term $\sim N$ in the action
of a quantum instanton vanishes.
This is most unlikely for classical instantons, as their scale size $\rho$ runs from zero to infinity, and at one
loop order their contribution is a nontrivial function of $\rho$ times the renormalization mass scale.
In contrast, for the quantum action the coupling constant
is evaluated at a given scale through dimensional transmutation, and the size is fixed.
Even so, we do not see any general reason why the term $\sim N$ in the action of the quantum instanton should vanish.

We then turn to the possibility of configurations with fractional topological charge.
On a femto-slab, classical instantons were constructed by Unsal \cite{Unsal:2020yeh}; their
size is necessarily on the order of the width of the slab.
In contrast, we consider quantum instantons
in vacuum. As a first step, consider the rotationally symmetric configuration
\begin{equation}
F_{12} = \begin{cases}
{2 / (N a^2)} \hskip .1in&r < a\\
0& r>a
\end{cases}
\label{vortex1}
\end{equation}
This corresponds to $Q = \int (F/2\pi) = 1/N$, and the gauge potential
\begin{equation}
A_\mu dx^\mu = - {1\over N \pi a^2} \int d^2x'\, {\epsilon_{\mu\nu} (x- x')^\nu \over \vert x- x'\vert^2} \rho(x')\, dx^\mu
\label{vortex2}
\end{equation}
where $\rho (x')$ is equal to $1$ in a small disk of radius $a$, and zero elsewhere.
This configuration is a slightly thickened vortex, and
the potential (\ref{vortex2}) is consistent with the asymptotic behavior
(\ref{asymp_behavior_int}), with $Q = 1/N$.
The contribution of this configuration to the action (\ref{eff_act-2}) is
\begin{equation}
{N \over 48 \pi m^2} \int F^2 = {1\over 6 N m^2 a^2}
\label{vortex3}
\end{equation}
The contributions of the higher terms will be similarly suppressed, since they must involve higher powers of $F$.

The ansatz (\ref{vortex2}) may be written, for $\rho$ with support around the origin, as
\begin{equation}
A_\mu dx^\mu = f(r) d\varphi = {1\over N} \begin{cases}
(r^2 /a^2)\, d\varphi \hskip .1in& r\leq a\\
d\varphi& r>a\\
\end{cases}
\label{vortex3a}
\end{equation}

Turning to the $z$-dependent part of the action, the only point of subtlety
concerns the phase of $z^i$. With the background (\ref{vortex1}), (\ref{vortex2}),
the parallel transport of $z^i$ in a full circle around the origin
(or the location of the vortex) gives $z^i \rightarrow e^{2 \pi i/ N} z^i$.
(The phase may also be viewed as the Aharonov-Bohm phase acquired by
$z^i$ in a circuit around the vortex.)
Seemingly, the $z^i$ are not single-valued, but notice that the phase can be
canceled by an $SU(N)$ transformation in its center $\mathbb{Z}_N$.
Since states are $\mathbb{Z}_N$-invariant, there is no difficulty
associated with this behavior.

We can now supplement the ansatz (\ref{vortex2}) or (\ref{vortex3a})
with a suitable ansatz for $z^i$, such as
\begin{equation}
z^1 = e^{i \varphi /N} h(r) , \hskip .1in z^i = 0, \hskip .1in
i = 2, 3, \cdots, N,
\label{vortex4}
\end{equation}
or any $SU(N)/{\mathbb{Z}_N}$ transformation of this.
We have incorporated the aperiodicity in $\varphi$ mentioned above,
namely, $z^i(r,2 \pi) = {\rm e}^{2 \pi i/N} z^i(r,0)$.
This multi-valuedness
is where our ansatz differs from previous solutions of the classical equations of motion \cite{Berg:1979uq,Fateev:1979dc}.

Taking the matter part of the action as in (\ref{eff_act-2}), we find
\begin{equation}
S_z = 2 \pi \int dr\, r \left[ \left( {\partial h \over \partial r}\right)^2
+ {h^2 \over r^2} \left( f - {1\over N} \right)^2 + m^2 h^2
\right] +\cdots
\label{vortex5}
\end{equation}
The behavior of $h(r)$ for small and large values of $r$ can be inferred from the equation of motion for $h$, namely,
\begin{equation}
- {1\over r} {\partial \over \partial r} \left( r {\partial h \over \partial r} \right)
+ \left( f - {1\over N} \right)^2 {h \over r^2} + m^2 h +\cdots = 0
\label{vortex6}
\end{equation}
By examining the small $r$ and large $r$ limits of this equation, we
can see that
\begin{equation}
h(r) \sim \begin{cases}
r^{1\over N} \hskip .2in& r \rightarrow 0\\
e^{- m r} & r \rightarrow \infty\\
\end{cases}
\label{vortex7}
\end{equation}
Notice that $h$ vanishes exponentially as $r \rightarrow \infty$.
This is a significant point. While the gauge part of the configuration
(\ref{vortex3a}) is like an Abrikosov-Nielsen-Olesen vortex, the asymptotic
behavior of $z^i$ is very different.
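Two quick numerical cross-checks of this configuration (our own sketch; the parameter values are arbitrary): the flux of Eq. (\ref{vortex1}) gives $Q = 1/N$ and the gauge action of Eq. (\ref{vortex3}), while an ansatz of the limiting form in Eq. (\ref{vortex7}) indeed interpolates between $r^{1/N}$ and $e^{-mr}$:

```python
# (i)  Flux of Eq. (vortex1): Q = 1/N and gauge action 1/(6 N m^2 a^2).
# (ii) A profile h(r) with the limits of Eq. (vortex7), checked numerically.

import math

def radial_integral(func, rmax, steps=200000):
    """Midpoint rule for int d^2x f(r) = 2 pi int_0^rmax r f(r) dr."""
    dr = rmax / steps
    return 2 * math.pi * sum((i + 0.5) * dr * func((i + 0.5) * dr) * dr
                             for i in range(steps))

N, a, m = 7, 1.3, 0.9

F12 = lambda r: 2.0 / (N * a * a) if r < a else 0.0
Q = radial_integral(F12, 2 * a) / (2 * math.pi)
S_gauge = (N / (48 * math.pi * m * m)) * radial_integral(lambda r: 2 * F12(r) ** 2, 2 * a)
print(Q * N, S_gauge * 6 * N * m * m * a * a)   # both -> 1

r0 = 1.0
h = lambda r: (r / r0) ** (1 / N) / (1 + (r / r0) ** (1 / N)) * math.exp(-m * r)

# Small r: the local exponent log h / log r approaches 1/N.
print(math.log(h(1e-8)) / math.log(1e-8))
# Large r: the local decay rate -d(log h)/dr approaches m.
print(-(math.log(h(30.1)) - math.log(h(30.0))) / 0.1)
```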
We may also note that the vanishing of $z^i$
at spatial infinity is consistent with the fact that any configuration of finite action should reproduce
vacuum behavior at spatial infinity, and that, by virtue of the Mermin-Wagner-Coleman theorem, we cannot have any spontaneous
breaking of the $SU(N)/{\mathbb{Z}_N}$ symmetry.

Introducing a scale factor $r_0$, a simple ansatz consistent with
(\ref{vortex7}) is
\begin{equation}
h(r) = C \, { u^{1\over N} \over 1+ u^{1\over N}} e^{-\mu u},
\hskip .1in u = {r\over r_0}, ~\mu = m r_0
\label{vortex8}
\end{equation}
It is easy to verify that $S_z$ is finite with this ansatz, and that
the term in (\ref{vortex5}) involving both $f$ and $h$ depends on
$a$. Along with the gauge field contribution
(\ref{vortex3}), we get a nonlinear expression involving
$a$ and $C$. Treating these as variational parameters,
we can obtain values which minimize the action, at least within the
class of ans\"atze (\ref{vortex3a}), (\ref{vortex4}), (\ref{vortex8}).

A few comments are in order at this point. Notice that, even if $C =0$, we do have a vortex-like configuration (\ref{vortex3a}). Although extremization with respect to $a$ with just the term (\ref{vortex3}) will lead to
$a \rightarrow \infty$, there are
terms with higher powers of $F$ in the action,
indicated by the ellipsis in (\ref{eff_act-2}). Including them and extremizing can lead to a finite value, ultimately determined by the dimensionful
parameter $m$.
The inclusion of higher $z$-dependent terms will lead to
terms which are of order $C^4$ and higher. Thus we expect that
extremization including such terms will give finite values to
both $a$ and $C$. As noted, this is equivalent to solving the nonlocal equations of motion
in Eqs.
(\ref{eqn-A}) and (\ref{eqn-lamb}).

To frame this more generally, at large $N$ we can again consider expanding
(\ref{eff_act_cpn}) in powers of $A_\varphi$, which is of order
$1/N$ based on our ansatz. As for the solution with integral
topological charge, and as in the example above,
the size of a solution of the quantum action is $\sim 1/m$. The term linear in
$A_\varphi$ vanishes by the equation of motion for the gauge potential. Taking $m^2(x) = m^2$,
the term in the action $\sim 1$ then automatically vanishes.
The expansion of the effective action to quadratic
order in $A_\varphi$, i.e., as in (\ref{eff_act-2}),
shows that the nonzero
contribution of the $A$-part of the action will
be of order $1/N$.

Finally, turning to the dependence on the vacuum angle $\theta$,
direct computation
of the free energy demonstrates that it is of order $\sim 1$,
with moments which can be computed in a background electric field \cite{Bonati:2016tvi}.
Thus even if the vacuum is a superposition of quantum instantons with fractional topological charge,
they cannot be approximated as a dilute gas. Instead, the interaction between such configurations is
large, and dominates the action of each fractional quantum instanton.

\section{Towards fractional instantons in 4d}
\label{sec:math}
\mathchardef\mhyphen="2D

We now turn to nonabelian gauge theories in four dimensions.
One of the key steps in understanding configurations of fractional topological charge is the identification of what is meant by the gauge group. Although this question
has been analyzed before, it is useful to collect some of the basic ideas here.
We will first consider the boundary values for gauge transformations based on the Gauss law (or the nature of the test functions to be used in implementing the Gauss law) and how these are related to
charge quantization conditions.
This will clarify the nature of the configuration space
and will naturally lead to the possibility of fractional topological charges.
\setcounter{secnumdepth}{2}
\subsection{Gauss law in the $E$-representation}

We consider the gauge theory in the $A_0 =0$ gauge.
We must then impose the Gauss law on the wave functions.
Quantization conditions on the electric charge will be important for us, so it is
more appropriate to consider wave functions in the representation
where they are eigenstates of the electric field operators $E^a_i$.
In other words, the wave functions are functionals of the electric field.
The Gauss law operator is given by
\begin{equation}
G^a(x)= \nabla_i E_i^a + f^{abc} A^b_i E^c_i \label{1}
\end{equation}
where $f^{abc}$ are the structure constants of the Lie algebra of $G$.
The gauge potential and the electric field obey the usual commutation rule
$[A^a_i(x), E^b_j(y)]= i \delta^{ab}\delta_{ij}\delta (x-y)$, so that, in
the $E$-representation, $A^a_i = i (\delta / \delta E^a_i)$.
The physical wave functions $\Psi$ are selected by the condition
that the Gauss law operator annihilate them.
This condition can be written as
\begin{eqnarray}
\int_M \theta^a(x)\, G^a(x) ~\Psi
&=& \int_M \theta^a(x)\left[ \nabla_iE^a_i
-i f^{abc} E^b_i {\delta \over \delta E^c_i}\right] \Psi =0
\label{2}
\end{eqnarray}
(The integral is over the spatial manifold $M$.)
This law should be required only for test functions $\theta^a(x)$
obeying certain conditions; the nature of these conditions
will become clear from the following discussion.
Treating $\theta^a(x)$ as an infinitesimal group parameter, (\ref{2}) may be
written as
\begin{eqnarray}
\delta \Psi &\equiv& \Psi( U^{-1}EU) -\Psi(E)\nonumber\\
&=&-\left[ i \int_M \theta^a(x) \nabla_iE^a_i \right]~\Psi
\label{3}
\end{eqnarray}
where $E_i= T^a E^a_i,~ U=\exp(iT^a \theta^a )\approx 1+iT^a
\theta^a$,
$T^a$ being hermitian matrices which form a basis for the Lie algebra of
$G$, with $[T^a,T^b]= if^{abc}T^c$. For the fundamental representation, we write
$T^a = t^a$ and normalize
them by ${\rm Tr} (t^at^b) =\textstyle{1\over 2} \delta^{ab}$.
The quantity $U^{-1}E_iU$ is the gauge transform of $E_i$, and hence $\delta \Psi$
measures the change of $\Psi$ under a gauge transformation with parameter
$\theta^a(x)$. Obviously, if $\Psi$ is a solution to (\ref{3}), then so is
$\Psi\, f(E)$, where $f(E)$ is a gauge-invariant function of $E_i$.
The general solution to (\ref{3}) may therefore be written as $\Psi= \rho\,
\Phi (E)$, where $\Phi (E)$ is an arbitrary gauge-invariant function and
$\rho$ is a particular solution to
\begin{equation}
\delta \rho +\left[ i \int_M \theta^a(x) \nabla_iE^a_i \right]~\rho =0 \label{4}
\end{equation}

A finite transformation, and the corresponding variation of $\rho$, can be
obtained by composition of infinitesimal transformations. Assume that, for
an electric field $E_i$, we have started from the identity and built up a
finite transformation $U$. At this point, the electric field is given by
${\cal E}_i=U^{-1}E_iU$. A further infinitesimal transformation is
given by $iT^a \theta^a = U^{-1}\delta U$. Thus (\ref{4}), written at
an arbitrary point on the space of $U$'s, becomes
\begin{eqnarray}
\delta \rho + 2 \int_M {\rm Tr} (\nabla_i{\cal E}_i~ U^{-1}\delta U) \rho &=&0\nonumber\\
\delta (\log \rho ) = -2 \int_M {\rm Tr} (\nabla_i{\cal E}_i ~U^{-1}\delta U)
&\equiv& \Omega
\label{5}
\end{eqnarray}
One can integrate this equation along a curve in the space of $U$'s from
the identity to $U$ to obtain the change of $\rho$ under a finite
transformation.
With $\delta$ interpreted as a derivative
on the space of $U$'s, $U^{-1}\delta U$ is a covariant vector
(or one-form), and the result of the integration is generally path-dependent.
For the result to be independent of the path of integration, the curl of ${\rm Tr} (\nabla\cdot {\cal E} \,U^{-1}\delta U)$, viewed as a covariant vector or
as a one-form on the space of the $U$'s, must vanish. Thus
the integrability condition for (\ref{5}), or
the path-independence of the change in $\rho$, becomes
\begin{equation}
\delta \Omega = \delta \left[ -2 \int {\rm Tr} (\nabla_i{\cal E}_i U^{-1}\delta U)
\right] =0
\label{5a}
\end{equation}
Here we take $\delta$ to signify the exterior derivative, so that
$\delta$ acting on a one-form (or covariant vector) gives the curl.
We now write $\Omega =\Omega_1 +\Omega_2$ with
\begin{eqnarray}
\Omega_1&=& 2\int_M{\rm Tr} ({\cal E}_i \nabla_i(U^{-1}\delta U))\nonumber\\
&=& 2 \int_M{\rm Tr} \left[ E_i \left( \nabla_i (\delta U U^{-1} ) - [\nabla_i U U^{-1},
\delta U U^{-1} ] \right) \right]\nonumber\\
\nonumber\\
\Omega_2&=& -2 \oint_{\partial M}{\rm Tr} ({\cal E}_i U^{-1}\delta U )dS^i \nonumber \\
&=& -2 \oint_{\partial M}{\rm Tr} (E_i \delta U ~U^{-1})dS^i
\label{6}
\end{eqnarray}
It is easily checked, using $\delta (\delta U U^{-1} ) = (\delta U U^{-1})^2$, {\it without the need of any integration by parts on $M$}, that
$\delta \Omega_1 =0$.
For the second term, we find
\begin{equation}
\delta \Omega_2= -2 \oint_{\partial M} {\rm Tr} ( E_i \delta U U^{-1}
\delta U U^{-1} ) dS^i \label{7}
\end{equation}
This is in general not zero. Indeed, if $U$ is constant on $\partial M$,
$\delta \Omega_2 =-2 {\rm Tr}[Q(\delta UU^{-1})^2],~Q=\oint E_idS^i$.
In this case, $\delta \Omega_2$ has the form of the coadjoint orbit
two-form on $G/H$, where $H\subset
G$ is the subgroup which commutes with the charge $Q$.
This form, well known as the basis of the Borel-Weil-Bott theory of group representations,
is a nondegenerate
two-form on $G/H$. In order to have $\delta \Omega =0$, we must therefore
implement the Gauss law only for those $U$'s which obey the restriction
\begin{equation}
\oint_{\partial M} {\rm Tr} [ E_i \delta UU^{-1}] dS^i =0 \label{8a}
\end{equation}
This is basically the cocycle condition
which allows us to build up finite transformations from sequences of
infinitesimal transformations.
If $E_i$ on $\partial M$ can be arbitrary, the
condition (\ref{8a}) requires fixing $U$ to some
value, say $U_\infty$,
on $\partial M$. (If $U_\infty$ is held fixed, $\delta U_\infty =0$, so that the
requirement (\ref{8a}) is trivially satisfied.)
This clarifies the nature of the test functions $\theta^a$
in (\ref{4}) used in imposing the Gauss law: {\it the test functions must be so chosen that
they lead to $U_\infty$ on $\partial M$}.

The key question for us is then:
what are the allowed values of $U_\infty$? This will be determined by
the charge quantization conditions. But before we take up this issue, a
comment on the asymptotic behavior of $U$ is in order.
Although we argued using constant $U$ on $\partial M$, generically we cannot impose the Gauss law for $U$'s which are not constant on $\partial M$,
since $\delta \Omega_2$ will not vanish for such $U$'s.
In fact, $U$'s which are not constant
on $\partial M$ correspond to degrees of freedom which
are physical, and generate the
``edge modes'' of a gauge theory. If we consider the boundary to be at
spatial infinity, such edge modes are irrelevant.
This will be the case
for our analysis in this paper.

Returning to constant values of $U$ on $\partial M$, and
the identification of the possible values of $U_\infty$, we start with the
question:
How does $\Psi$ change under
transformations which go to a constant $U \neq U_\infty$?
It is easily seen that the action of a general infinitesimal
transformation
\begin{equation}
\delta A^a_i = -\partial_i \theta^a - f^{abc} A_i^b \theta^c, \hskip .3in
\delta E^a_i = - f^{abc} E^b_i \theta^c
\label{G1}
\end{equation}
is given by
\begin{equation}
\delta \Psi = \left[ i \int_M D_i\theta^a(x) ~E^a_i \right]~\Psi
= i\, Q^a \theta^a (r= \infty ) ~\Psi\label{G2}
\end{equation}
where $Q^a$ is the electric charge $Q^a = \oint E^a_i dS_i$; the second equality follows upon integrating by parts and using the Gauss law in the interior of $M$. Exponentiating, transformations which go to a constant $\neq U_\infty$ act as a Noether symmetry, under which the charged states undergo the phase transformation $\Psi \rightarrow \exp\left[ iQ^a \theta^a (r= \infty )\right] \Psi$. If the only charges in the theory correspond to the adjoint representation of $G$ and its products, i.e., if the states are invariant under ${\mathbb Z}_N \subset SU(N)$,
then the wave functions are {\it invariant} for those $U$'s which go to an element of the center
${\mathbb Z}_N$ at spatial infinity.
We have seen that
we can implement the Gauss law only for transformations which go to a fixed element
$U_\infty$ at spatial infinity.
Now we see that the allowed choices for
$U_\infty$ correspond to an element of the center ${\mathbb Z}_N$.

To recapitulate briefly, we have seen that the true gauge transformations of the theory, in the sense of corresponding to a redundancy of description,
are of the form $U({\vec x} )$ with:\\
a) $U \rightarrow$ a constant $U_\infty$ at spatial infinity\\
b) $U_\infty \in \mathbb{Z}_N$ for a theory with charges which are
$\mathbb{Z}_N$-invariant.
\subsection{Charge quantization and $U_\infty$: An alternate argument}

There is another way to arrive at the conclusion of the previous subsection,
namely, by a direct analysis of the charge quantization conditions.
Notice that, for $U$'s obeying (\ref{8a}), we can write $\Omega$ as
\begin{eqnarray}
\Omega &=& 2\int_M {\rm Tr} [ E_i U \nabla_i (U^{-1}\delta U) U^{-1}]\nonumber\\
&=& 2\int_M {\rm Tr} [ E_i \delta (\nabla_i U U^{-1})] \nonumber \\
&=& \delta \left(
2\int_M {\rm Tr} [ E_i \nabla_iUU^{-1}]\right)
\label{9}
\end{eqnarray}
Using this and integrating (\ref{5}) from the identity to $U$, we obtain
\begin{equation}
\rho( U^{-1}EU) = \rho (E) \exp\left(2 \int_M {\rm Tr} (E_i\nabla_iUU^{-1})\right)
\label{10}
\end{equation}
This equation will play a key role in the subsequent analysis, so another comment and an alternate derivation are appropriate
before proceeding. One concern about (\ref{10}) might be that we have used integration from the identity to $U$.
In three spatial dimensions,
since $\Pi_3 (G)={\mathbb Z}$, there are $U$'s which are not connected to the identity. Even though the derivation given above does not quite make this clear,
the result (\ref{10})
holds even for $U$'s which are
not in the connected component.
This can be seen by the following \nalternate derivation borrowed from \\cite{Jackiw:1996ec}.\n\\begin{eqnarray}\n\\Psi (E) &=&\\int d\\mu (A) \\exp\\left[{ 2\\int_M {\\rm Tr} (E_iA_i)}\\right]~ \\Psi (A)\\nonumber\\\\\n&=& \\int d\\mu (A) \\exp \\left[{2\\int_M {\\rm Tr} (E_iA_i)}\\right] \\nonumber \\\\\n&& \\;\\;\\;\\;\\;\\;\\;\\; \\times \\; \\Psi (U^{-1}AU+U^{-1}\\nabla U)\\nonumber\\\\\n&=&\\exp(-2\\int_M {\\rm Tr} (E_i\\nabla_iUU^{-1})) \\nonumber \\\\\n&& \\times \\int d\\mu (A) \n\\exp\\left[{2\\int_M {\\rm Tr} (U^{-1}E_iUA_i)}\\right]~ \\Psi (A)\\nonumber\\\\\n&=& \\exp(-2\\int_M {\\rm Tr} (E_i\\nabla_i UU^{-1}))~ \\Psi (U^{-1}EU)\n\\label{12}\n\\end{eqnarray}\nwhere we have first used the gauge invariance of the wave functions in the\n$A$-representation (i.e. $ \\Psi (A) = \\Psi (U^{-1}AU+U^{-1}\\nabla U)$) and then changed the variable of integration from\n$A$ to $U^{-1}AU +U^{-1}\\nabla U$. With $\\Psi = \\rho\\, \\Phi (E)$, (\\ref{12})\ngives (\\ref{10}). (This derivation is simpler but the earlier analysis does \nreveal some interesting aspects of imposing the Gauss law.)\n\nEquation (\\ref{10}) contains certain charge quantization requirements which \ncan be used to see why the boundary values of $U$ can be an element\nof the center, rather than strictly being the identity.\nWe can show that $\\rho (E)$ of (\\ref{10})\nwill vanish unless certain conditions are satisfied by $E_i$. 
\nFor this, it is adequate to examine some special configurations.\nThe basic strategy is to choose an electric field configuration and\na $U$ which commutes with the chosen configuration for $E_i$.\nEquation (\\ref{10}) then gives an identity of the form \n$\\rho =\\rho e^{i\\lambda}$ where the\nphase $\\lambda$ is given by the integral $2\\int_M {\\rm Tr} (E_i\\nabla_iUU^{-1})$.\nThis would imply that $\\rho$ must vanish unless the phase is an integral \nmultiple of $2\\pi$; this is the constraint for the chosen type of field \nconfiguration.\nFor simplicity, \nwe shall use $G=SU(2)$ for the example below; generalization to other \ngroups is straightforward.\n\nFor our example, we choose polar coordinates $(r,\\theta , {\\varphi} )$ and take \n\\begin{equation} \n\\begin{split}\nE_{\\theta} &=E_{{\\varphi}}=0, \\hskip .2in E_r= {\\sigma_3 \\over 2}~{q\\over 4\\pi r^2} \\\\\nU &= \\exp (i\\sigma_3 f(r))\n\\end{split}\n\\label{cq1}\n\\end{equation}\nThis field corresponds to a point charge at $r=0$.\nTo avoid the singularity, we shall remove the point $r=0$ from $M$. Thus\nthe boundary $\\partial M$ consists of a small sphere around $r=0$ and the sphere\nat spatial infinity.\nEven though $U$ is not constant in space, we have chosen it to commute with\nthe given $E_i$. Evaluating the phase factor in (\\ref{10}), we obtain\n\\begin{equation}\n\\rho = \\rho \\exp \\left( 2i~ (\\Delta f) ~q \\right)\n\\label{cq2}\n\\end{equation}\nwhere $\\Delta f = f(\\infty) - f(0)$.\nAs for the values of $f(0), ~f(\\infty )$, they should be integral\nmultiples of $\\pi$ to be consistent with the trivial action of $U$ on states at the \nboundaries. \nIf we require $U$ to go to the identity (and not just an element of the center)\nat the boundary, $\\Delta f = 2 \\pi n$,\n$n \\in {\\mathbb Z}$.\nEquation (\\ref{cq2}) then tells us that we can have nonzero $\\rho$ for\n$q = \\textstyle{1\\over 2} n$. 
The Gauss law for, say, fermion sources
may be written as
\begin{equation}
\nabla \cdot E^a + f^{abc} A^b\cdot E^c = {\bar \psi} T^a \psi
\label{cq3}
\end{equation}
For the fundamental representation, this gives, for a point source with
$T^3$-charge,
$E^3 = {1\over 2} ({1/4\pi r^2})$. This is consistent with the quantization of
$q$. On the other hand, if we allow $U$ to go to $-1$, then we only need
$\Delta f = \pi n$. Correspondingly, (\ref{cq2}) tells us that $q$ should be quantized as $q =n$. Equation (\ref{cq3}) also tells us that this is consistent with
sources transforming under ${\mathbb Z}_2$-invariant representations.

The result of the arguments presented here is that
wave functions are {\it invariant} under gauge transformations
which go to an element of the center in theories where the
charges are in $\mathbb{Z}_N$-invariant representations.
Such transformations therefore characterize the redundancy
of the variables $(A_i, E_i)$ in the theory.

The configuration
we have used for obtaining charge quantization
has a divergent kinetic energy
$T={\textstyle{1\over 2} } \int E^2$.
It is possible to find nonsingular configurations which lead to the same result; it is just that the argument will be a little more elaborate.
\subsection{Nature of the configuration space}

The $E$-representation of the wave functions was useful in elucidating the
nature of the allowed boundary values for $U$.
However, for the
analysis and formulation of ans\"atze for the configurations with fractional topological charge, the $A$-representation is more appropriate, so
this is the representation we will use for the rest of this paper.

We can now formalize the situation with the gauge transformations
as follows.
Staying within the $A_0 =0$ gauge, let
\begin{eqnarray}
{\mathcal A} &\equiv& \{ {\rm Set ~of ~all ~gauge ~potentials}~ A_i \}\nonumber\\
&\equiv&\{ {\rm Set ~of ~all ~Lie\mhyphen algebra\mhyphen valued ~vector ~fields}\nonumber\\
&&\hskip .1in {\rm on ~space}~ {\mathbb R}^3\}\nonumber
\end{eqnarray}
Further, let
\begin{eqnarray}
{\mathcal G} &\equiv&\{ {\rm Set ~of ~all} ~g({\vec x} ): {\mathbb R}^3 \rightarrow SU(N),
~{\rm such ~that}\nonumber\\
&&\hskip .2in g({\vec x} ) \longrightarrow ~{\rm constant} \in SU(N) ~{\rm as} ~\vert {\vec x} \vert \longrightarrow \infty\}
\nonumber\\
{\mathcal G}_\omega &\equiv& \{ {\rm Set ~of ~all} ~g({\vec x} ): {\mathbb R}^3 \rightarrow SU(N),
~{\rm such ~that}\nonumber\\
&&\hskip .2in g({\vec x} ) \longrightarrow \omega \in {\mathbb Z}_N ~{\rm as} ~\vert {\vec x} \vert \longrightarrow \infty\}
\nonumber
\end{eqnarray}
Evidently, ${\mathcal G}/{\mathcal G}_1 = SU(N)$, the set of rigid transformations or the set of constant boundary values for elements $g$ in ${\mathcal G}$. Our discussion of the Gauss law shows that the gauge group, namely, the
set of transformations which leave the wave functions invariant, is given by ${\mathcal G}_1$
in a theory without ${\mathbb Z}_N$-invariance. However, in a theory with ${\mathbb Z}_N$-invariance,
${\mathcal G}_\omega$ leaves $\Psi$ invariant for any $\omega$, so that the gauge group is
${\mathcal G}_* = \cup_{\omega \in {\mathbb Z}_N} {\mathcal G}_\omega$.
Since the difference
between ${\mathcal G}_1$ and ${\mathcal G}_\omega$ is in the boundary value, we may
also consider any element of ${\mathcal G}_\omega$ to be of the form
$g ({\vec x} )~ \omega$, where $g ({\vec x} )$ goes to the identity at spatial infinity.

The physical configuration space, for theories with charges in the fundamental representation, i.e., without $\mathbb{Z}_N$-invariance,
is given by ${\mathcal A}/ {\mathcal G}_1$. It is easy to see that this space is multiply connected.
Consider a sequence of configurations $A_i ({\vec x}, \tau )$ with $0\leq \tau \leq 1$ given by
\begin{eqnarray}
A_i ({\vec x}, \tau ) &=&A_i ({\vec x} ) (1-\tau ) + \tau\, A^g_i ({\vec x} ) \; , \nonumber\\
A_i^g ({\vec x} )&= & g^{-1} A_i ({\vec x} ) g + g^{-1} \partial_i g \; ,\label{G3}
\end{eqnarray}
where $g ({\vec x} ) \in {\mathcal G}_1$. Thus $g ({\vec x} ) \rightarrow 1$ at spatial infinity.
The starting point and ending point of this sequence of gauge fields are gauge-equivalent, so that (\ref{G3}) gives a closed curve in ${\mathcal A}/{\mathcal G}_1$. If this curve is contractible, then we will be able to transform the entire sequence into gauge-equivalent configurations, writing
\begin{equation}
A_i ({\vec x}, \tau ) = g^{-1}({\vec x} ,\tau) A_i ({\vec x} ) g({\vec x}, \tau) + g^{-1}({\vec x}, \tau) \partial_i g({\vec x}, \tau)
\label{G4}
\end{equation}
The transformations $g ({\vec x} , \tau )$ give a homotopy from the identity
(at $\tau =0$) to $g({\vec x} )$ at $\tau =1$.
The homotopy classes of transformations
$g \in {\mathcal G}_1$ are characterized by the winding number
\begin{equation}
Q [g] = {1\over 24\pi^2} \int {\rm Tr} (g^{-1} dg )^3\label{G5}
\end{equation}
Thus if $g$ is chosen to have nonzero winding number, then we do not have the possibility
(\ref{G4}), leading to the conclusion that there are noncontractible paths in ${\mathcal A}/{\mathcal G}_1$. In other words, if $g ({\vec x} )$ has nonzero winding number, the configuration (\ref{G3})
traces out a noncontractible path in ${\mathcal A}/{\mathcal G}_1$ as $\tau$ changes from $0$ to $1$.
The usual instanton is an example of such a path,
which, although it is not captured by the simple parametrization
given in (\ref{G3}), is deformable to
(\ref{G3}).
In general, the noncontractible paths
are topologically nontrivial configurations with nonzero instanton number,
but not necessarily self-dual (or anti-self-dual).
In fact, evaluating the instanton number on the configurations (\ref{G3}), we find
\begin{eqnarray}
\nu [A] &\equiv& -{1\over 8\pi^2}\int_{M \times [0,1]} {\rm Tr} ( F ~ F)\nonumber\\
&=& {1\over 24\pi^2} \int_{M, \tau =1}{\rm Tr} (g^{-1} dg )^3
\label{G6}
\end{eqnarray}
where we used the fact that $g$ goes to the identity at spatial infinity.
\subsection{Fractional values of $\nu$}

It is now easy to see how one may get fractional values of $\nu$. We consider a path in the space of gauge potentials ${\cal A}$ of the form (\ref{G4}), say with $g = U({\vec x}, \tau )$, where
$U({\vec x}, 1)$ is such that it goes to $\omega = \exp (2\pi i /N)$ as $\vert {\vec x} \vert \rightarrow
\infty$. In other words, $U({\vec x}, 1) \in {\cal G}_\omega$. Therefore the path
$A^U = U^{-1} A U + U^{-1} \nabla U$ is closed in the ${\mathbb Z}_N$-invariant theory.
The instanton number of this configuration can be evaluated explicitly, but before doing that, a comment is in order.
The configuration $U^{-1} A U + U^{-1} \nabla U$ looks similar to
(\ref{G4}), but there is an important difference. In (\ref{G4}),
$g({\vec x}, \tau )$ gives a homotopy between the identity and $g({\vec x} )$,
so that $A_i ({\vec x} , \tau)$ is gauge-equivalent to $A_i ({\vec x} )$ for any
value of $\tau$. Further, the value of $g({\vec x}, \tau )$ as
$\vert {\vec x} \vert \rightarrow \infty$ is
the identity. To get a noncontractible path, one needs to consider
$A_i$ which depend on $\tau$ as in (\ref{G3}) (or in the usual self-dual
instanton configurations). In the present case, the boundary value
of $U$ changes from the identity to $\omega$,
so that at $\tau \neq 0, 1$, $U$ is not an element of
${\cal G}_1$ or ${\cal G}_\omega$. This is why the configurations
$ U^{-1} A U + U^{-1} \nabla U$ can still give a nonzero $\nu$.

Turning to details, it is useful to have an explicit construction of such a $U({\vec x}, \tau )$.
\nLet $t^a$, $a = 1, 2, \\cdots, (N^2-1)$, denote a basis of hermitian $N\\times N$ matrices for the Lie algebra\nof $SU(N)$, normalized so that ${\\rm Tr} (t^a t^b) = \\textstyle{1\\over 2} \\delta^{ab}$.\nWe can take $t^{N^2-1}$ to be diagonal and given by\n\\begin{equation}\n(t^{N^2-1})_{ij} = \\sqrt{N\\over 2(N-1)} \\left\\{ \\begin{matrix}\n{1\\over N} \\delta_{ij} &~& i,j = 1, 2, \\cdots, (N-1)\\\\\n{1\\over N}- 1&~& i=j=N\\\\\n\\end{matrix}\n\\right.\n\\label{G7}\n\\end{equation}\nThis is the $SU(N)$ version of the usual hypercharge matrix.\nIt is easy to see that \n\\begin{equation}\ng = \\exp( i 2\\pi \\tau \\sqrt{2(N-1)\/N} ~ t_{N^2-1})\n\\label{G7a}\n\\end{equation}\nis a path from $g=1$ to $g =\\omega$ in $SU(N)$ as $\\tau$ varies from zero to $1$.\nThus it is a closed path in the pure $SU(N)$ gauge theory.\nKeeping in mind that instantons are essentially in an $SU(2)$ subgroup of $SU(N)$, we define\nthe $N\\times N$ matrix \n\\begin{equation}\nY_{ij} = \\left\\{ \\begin{matrix}\n{1\\over N} \\delta_{ij}& i,j = 1, 2, \\cdots, (N-2)\\\\\n{1\\over 2} (\\sigma\\cdot {\\hat x} )_{ij} + \\left( {1\\over N} - {1\\over 2} \\right)_{ij} & i,j = N-1, N\\\\\n\\end{matrix}\n\\right.\n\\label{G8}\n\\end{equation}\nWe can then define \n\\begin{equation}\nU({\\vec x}, \\tau ) = \\exp (i Y \\Theta (r,\\tau ))\\label{G9}\n\\end{equation}\nwith $\\Theta (r, 0) =0$, $\\Theta (0,\\tau ) =0$ and\n$\\Theta (\\infty, \\tau ) = 2\\pi \\tau$. \n(One example of such a function is\n$\\Theta (r, \\tau) = 2\\pi \\tau r\/(r+r_0)$. There are obviously infinitely \nmany $\\Theta$'s consistent with the required boundary behavior.) This gives a spherically symmetric ansatz\nfor an element of ${\\cal G}_\\omega$.\nIt is easy to verify that $U(\\infty, \\tau)$ traces out a path from the identity to \n$\\omega$ in $SU(N)$. 
Also,
since $U(\infty, \tau) \rightarrow 1, \omega$ at
$\tau =0, 1$, it qualifies as a gauge transformation at the two ends in the
$SU(N)/{\mathbb Z}_N$ theory.

Returning to the configurations
$A_i^U = U^{-1} A_i U + U^{-1} \partial_i U$
in the space of potentials, we see that this corresponds to a closed
path in ${\cal A}/ {\cal G}_\omega$.
Since $U$ depends on ${\vec x}, \tau$, but $A_i$ depends only on ${\vec x}$,
\begin{eqnarray}
{\cal F} &=& dA^U + A^U A^U = U^{-1} F U + d\tau {\partial \over \partial \tau} A^U\nonumber\\
&=& U^{-1} (F - Da ) U\label{G10}
\end{eqnarray}
where $F$ involves only the spatial components of the field strength tensor
and $a = d\tau~ {\dot U} U^{-1}$. (For this calculation, $\tau$ can also be viewed
as the time coordinate, so that $Da$ is essentially the electric field.)
From (\ref{G10}),
\begin{eqnarray}
\nu &=& - {1\over 8\pi^2} \int {\rm Tr} \left( (F- Da ) (F- Da)\right) = {1\over 4\pi^2} \int {\rm Tr} (Da~ F )\nonumber\\
&=& {1\over 4\pi^2} \oint {\rm Tr} (a F)\label{G11}
\end{eqnarray}
The indicated boundary integration is over spatial infinity and over all $\tau$.
This shows that we will need a nonzero magnetic flux to obtain a nonzero value
for $\nu$. Therefore, we consider monopole-like configurations with the asymptotic
behaviour
\begin{eqnarray}
F &=& {1\over 2} F_{ij} dx^i \wedge dx^j\nonumber\\
&\rightarrow& -{ i\over 2} (\sigma \cdot {\hat x} )~ {M\over 2} ~\epsilon_{ijk} {{\hat x}^k \over r^2}
dx^i\wedge dx^j
\label{G12}
\end{eqnarray}
where $\sigma_i$ are in the $2\times 2$ block of $i, j = (N-1), N$ viewed as
an $N\times N$ matrix.
$M$ is (electric charge $e$ times) the monopole charge.
We then find
\begin{equation}
\nu = M
\label{G13}
\end{equation}
$M$ must be quantized according to the Dirac quantization condition.
This condition, for a general gauge group, is the Goddard-Nuyts-Olive
(GNO) quantization condition \cite{Goddard:1976qe} and amounts to the following.
If the electric charges correspond to representations of $G$, then the magnetic charges
$M$ take values in the dual group ${\tilde G}$. For our case, we note that
the GNO dual of $SU(N)$ is $SU(N)/ {\mathbb Z}_N$. Thus if the electric charges
are ${\mathbb Z}_N$-invariant, taking values corresponding to $SU(N)/ {\mathbb Z}_N$
representations, then the fundamental charges of $SU(N)$ are allowed values for
$M$. They are thus quantized in units of $1/N$.

Thus we see that we can indeed obtain fractional values of $\nu$.
The problem, however, is that for the nonsingular 't Hooft-Polyakov ('t H-P) monopoles, the quantization condition is not quite the Dirac (or GNO) condition.
In fact, for the case of $SU(2)$,
$M$ is an integer for 't H-P monopoles, whereas the GNO condition would suggest
that it is possible to get $M = {\textstyle{1\over 2}}$.
It is, however, possible to construct nonsingular configurations of separated GNO monopoles which
have a total flux consistent with the 't Hooft-Polyakov condition.
These will look like some split versions of the 't H-P monopole.

We can construct an ansatz for the split monopole for the case of $SU(2)$ as follows.
Let $A_D$ be the Dirac form of the monopole given by
\begin{equation}
A_D = {({\hat x}_1 d{\hat x}_2 - {\hat x}_2 d{\hat x}_1 ) \over (1+ {\hat x}_3)} \label{G14}
\end{equation}
The 't Hooft-Polyakov form of the monopole is then given by
\begin{eqnarray}
A &=& (1- K(r) ) \left[ g^{-1} i \left({\sigma_3\over 2} \right) A_D~ g + g^{-1} d g \right]\nonumber\\
&=& i \left({ \sigma^a \over 2}\right) ~(1- K) ~\epsilon_{abc} {x^b\over r^2} dx^c
\label{G15}
\end{eqnarray}
where $g$ is the matrix
\begin{equation}
g = {1\over \sqrt{1+z{\bar{z}}}} \left[ \begin{matrix}
1 & z \\
-{\bar{z}} &1\\
\end{matrix}
\right] \label{G16}
\end{equation}
and $z = \tan (\theta /2)~ e^{-i{\varphi}}$ and
\begin{equation}
{\hat x}_1 = {z +{\bar{z}} \over 1+ z{\bar{z}}}, \hskip .2in
{\hat x}_2 = {i (z-{\bar{z}} ) \over 1+ z{\bar{z}}}, \hskip .2in
{\hat x}_3 = {1 - z{\bar{z}} \over 1+ z{\bar{z}}}
\label{G17}
\end{equation}
We may also note that $g^{-1} \sigma_3~ g = \sigma \cdot {\hat x}$.
The function $K(r)$ vanishes exponentially outside of the core of the monopole,
and $1 - K(r) \sim r^2$ for small $r$.
The advantage of writing it as in (\ref{G15}) is that, for large $r$, we can trivially calculate $F$ as
\begin{equation}
F = i g^{-1} {\sigma_3\over 2} g ~ dA_D =
{i\over 2} \sigma\cdot {\hat x} ~ \sin\theta ~d\theta d{\varphi}
\label{G18}
\end{equation}
Thus $F^a
= - {\hat x}^a \sin\theta ~d\theta d{\varphi}$, with
$\int F^a {\hat x}^a = - 4\pi$. We can now modify this ansatz with some of the flux piped away from the monopole by a vortex. We consider an Abelian vortex given by
\begin{equation}
A_v = {1\over 2}~ f (\rho, x_3)~ {x_1 dx_2 - x_2 dx_1 \over \rho^2}
\label{G19}
\end{equation}
where $\rho^2 = x_1^2 + x_2^2$. This is a vortex along the $x_3$-axis. We also have
$f (\rho, x_3) \rightarrow 1$ as $\rho$ becomes large, essentially
outside the core of the vortex. The factor of $\textstyle{1\over 2}$ tells us that the flux carried by this vortex is
$2\pi /2$; it is a ${\mathbb Z}_N$-vortex, for $N=2$. We will consider a vortex of finite length
$L$ by taking as an ansatz
\begin{equation}
f( \rho, x_3) = {1\over 2} \tanh \lambda \rho ~ [ \tanh {\tilde \lambda} x_3~-~
\tanh {\tilde \lambda} (x_3- L)]
\label{G20}
\end{equation}
This function vanishes exponentially for $x_3 \ll 0$ and for $x_3 \gg L$.
The core of the vortex has an extent in $\rho$ of the order of $1/\lambda$.
Our modified ansatz is now given by
\begin{equation}
A = (1- K(r) ) \left[ g^{-1} i \sigma_3 (A_D -A_v)~ g + g^{-1} d g \right]
\label{G21}
\end{equation}

Consider a large sphere of radius $R$, much larger than the core of the monopole and the core of the vortex. If $R\ll L$, then the sphere intersects the vortex. The flux may be computed by
taking $K \rightarrow 0$, so that
\begin{equation}
F = { i \over 2} \sigma \cdot {\hat x} (d A_D - 2 ~ dA_v )
\label{G22}
\end{equation}
The flux is then $-(4\pi - 2\pi) = -2\pi$. This is what we expect for a GNO monopole, and is equivalent to $M = \textstyle{1\over 2}$.
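The limits of the profile (\ref{G20}) quoted above are easy to confirm numerically; in the following sketch the parameter values $\lambda = \tilde\lambda = 2$ and $L = 10$ are arbitrary choices made only for illustration:

```python
import numpy as np

# Numerical look at the vortex profile of Eq. (G20): outside the vortex core,
# f -> 1 for 0 < x3 < L and f -> 0 otherwise, so the loop integral
# oint A_v = (1/2) f * 2*pi carries flux pi, i.e. half a flux quantum.
def f_vortex(rho, x3, lam=2.0, lamt=2.0, L=10.0):
    return 0.5 * np.tanh(lam * rho) * (np.tanh(lamt * x3)
                                       - np.tanh(lamt * (x3 - L)))

def vortex_flux(rho, x3, **kw):
    # flux through a disk of radius rho at height x3
    return 0.5 * f_vortex(rho, x3, **kw) * 2 * np.pi
```

The flux $\pi = 2\pi/2$ through a disk intersecting the vortex is exactly the amount piped away from the monopole in (\ref{G21}), reducing the total flux through a sphere of radius $R \ll L$ from $4\pi$ to $2\pi$ in magnitude.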
\nIf we consider a sphere of radius much larger than $L$, then the contribution\nfrom $A_v$ is zero, since $f$ vanishes and we get $- 4\\pi$ for the total flux.\nIn this sense, we can view the configuration (\\ref{G21}) as\na split monopole.\n\nThe relevance of the split monopole can be understood from the following question: In a calculation or simulation of the vacuum-to-vacuum transition amplitude, can we see configurations with fractional values of $\\nu$? For this it is useful to write $\\nu$ in terms of the Chern-Simons integral\n\\begin{equation}\nS_{CS}(M) = -{1\\over 8\\pi^2} \\int_M {\\rm Tr} (A ~dA + {2\\over 3} A^3 )\n\\label{G23}\n\\end{equation}\nThe topological charge $\\nu$, which is the integral of\nthe exterior derivative of the Chern-Simons over spacetime,\ncan then be written as\n\\begin{eqnarray}\n\\nu &=& S_{CS} (M, \\tau =1) - S_{CS} (M, \\tau =0)\\nonumber\\\\\n&&\\hskip .1in + {1\\over 8\\pi^2} \\oint_{\\partial M} {\\rm Tr} (A_i E_j ) dx^0\\wedge dx^i \\wedge dx^j.\n\\label{G24}\n\\end{eqnarray}\nAs the representative of the vacuum at $\\tau =0$, we may take\n$A_i =0$. The final configuration is also the vacuum, so it must be a gauge transform of $A_i =0$, say, $A_i = g^{-1} dg$.\nFurther, if we consider spatial boundary conditions (periodic, Dirichlet, etc.)\nwhich lead to vanishing of the integral with the\nelectric flux on $\\partial M$,\nwe find\n\\begin{eqnarray}\n \\nu &=& S_{CS} (M, \\tau =1) - S_{CS} (M, \\tau =0) \\nonumber \\\\\n &=& {1\\over 24\\pi^2} \\int_M {\\rm Tr} (g^{-1} dg)^3 = Q [g] \\; . \n\\label{G25}\n\\end{eqnarray}\nSince $Q [g]$ is an integer, even for $g$'s such that\n$g \\rightarrow \\omega$ on $\\partial M$,\nwe get integral values of $\\nu$ in the vacuum-to-vacuum\namplitude. \nHowever, we can have configurations like\n$A_i^U = U^{-1} A_i U + U^{-1} \\partial_i U$ where $A_i$ is a split monopole configuration as in (\\ref{G21}). 
We get separated configurations, each of which in isolation may be considered as having a fractional value of $\nu$, but the total value of $\nu$ is integral.


\subsection{Simple solution}
\label{sec:simple}

We will now illustrate the analysis given above in a related but slightly different way, and also comment on the situation at nonzero temperature $T$. It is convenient to frame this discussion in terms
of a nonzero $A_0$, by replacing
the field configuration $(A_0 =0, U^{-1} A_i U + U^{-1} \partial_i U)$
by its gauge equivalent version
$(A_0 = {\dot U} U^{-1}, A_i )$.
For $A_0$, at $r=\infty$ our choice is then
\begin{equation}
A_0 \; = \; \frac{2 \pi T}{N} \; \mathbf{k} \; .
\end{equation}
Here $\mathbf{k}$ is a traceless diagonal matrix which generates
$\mathbb{Z}_N$ transformations, so its elements are integers. There are two choices,
\begin{eqnarray}
\mathbf{k}_1 \; &=& \;
\left(
\begin{array}{cc}
{\mathbf 1}_{N-1} & 0 \\
0 & -(N-1) \\
\end{array}
\right) \; ,
\\
\mathbf{k}_2 \; &=& \;
\left(
\begin{array}{ccc}
{\mathbf 1}_{N-2} & 0 & 0\\
0 & -(N-1) & 0 \\
0 & 0 & 1 \\
\end{array}
\right) \; .
\end{eqnarray}
These are obviously related to the matrix $(t^{N^2-1})_{ij} $ in Eq.
(\ref{G7}).
For $\mathbf{k}$ equal to either $\mathbf{k}_i$, the Wilson line in the imaginary
time direction, $t$, is
\begin{equation}
\Omega \; = \; \exp \left( i \int^{1/T}_0 \; A_0 \; dt \right)
\; = \; \exp\left( \frac{2 \pi i}{N} \; \mathbf{k} \right) \; .
\end{equation}
This Wilson line has nontrivial holonomy, as these values represent degenerate $\mathbb{Z}_N$ vacua.

For the spatial components, construct a split 't H-P monopole, as in the previous section.
Divide a sphere into an upper and a lower hemisphere,
with gauge potentials on each, $A^\pm$, and take
\begin{equation}
A^\pm_\phi \; = \; \frac{1}{2N r } \; \mathbf{m} \;
\frac{\left( \pm 1 - \cos\theta \right)}{\sin \theta} \; .
\label{zn_soln}
\end{equation}
To see this is a $\mathbb{Z}_N$ monopole, compute the
Wilson line for a special closed path, $\vec{s}$.
Since the vector potential is specified by two patches, we compute
the Wilson line with $A^+$,
going around by $2 \pi$ in $\phi$; then, take the Wilson line
with $A^-$, running in the opposite direction:
\begin{eqnarray}
&&\exp \left( \; i \oint \vec{A}^+ \cdot d\vec{s} \; \right)
\; \left( \exp\left( \;
i \oint \vec{A}^- \cdot d\vec{s} \; \right) \right)^\dagger \nonumber \\
&=& \exp\left( \frac{2 \pi i}{N} \; \mathbf{m} \right) \; .
\end{eqnarray}
This is manifestly gauge invariant, and $=1$ if the configuration
is trivial, $A^+ = A^-$. For the $\mathbb{Z}_N$ monopole, instead one
obtains a non-trivial element of $\mathbb{Z}_N$. For this to be
true, $\mathbf{m}$ must be one of the two matrices, $\mathbf{k}_{1}$ or $\mathbf{k}_2$.

The above are the boundary conditions at spatial infinity, $r \rightarrow
\infty$. At the origin, $r = 0$, we require all $A_\mu$'s to vanish,
at least like $\sim r^2$, so that $F_{\mu \nu} \sim r$ as $r \rightarrow 0$.

As argued in Sec.
(\ref{sec:cpn}), in general we expect that this exists only as a quantum
instanton, with a size on the order of the confinement scale. At nonzero temperature, however,
$1/T$ provides an alternate length scale.
While the solution is approximately self-dual over
distances $\sim 1/T$, because of the presence of the Debye screening mass, it is
not self-dual over larger distances. This generates corrections $\sim \sqrt{g^2}$ to the action.

It is straightforward to compute the topological charge. For large $r$,
\begin{equation}
A_0(r) \; = \;
\frac{2 \pi T}{N} \; \mathbf{k} \; - \; \frac{1}{2 N r} \; \mathbf{m} \; + \; \ldots
\end{equation}
For a static configuration,
\begin{equation}
Q \; = \;
\frac{1}{4 \pi^2} \int d^4 x \; \partial_i \; {\rm tr} \left( A_0 \; B_i
\right) = \frac{1}{N^2} \; \mathbf{m} \cdot \mathbf{k} \; .
\end{equation}
This was first derived by 't Hooft \cite{tHooft:1980kjq}.

There are only two cases to consider. Either the $\mathbb{Z}_N$ charges
are the same, or they are different. If they are the same,
$\mathbf{m} = \mathbf{k} = \mathbf{k}_1$,
\begin{equation}
Q \; = \; \frac{N-1}{N} \; .
\end{equation}
If the charges are different, such as $\mathbf{m} = \mathbf{k}_1$ and $\mathbf{k}
= \mathbf{k}_2$, then
\begin{equation}
Q \; = \; - \; \frac{1}{N} \; .
\end{equation}

We conclude this section by discussing the relationship between the configuration above and that of
Kraan, van Baal, Lee, and Lu (KvBLL)
\cite{Lee:1997vp,Lee:1998vu,Lee:1998bb,Kraan:1998pm,Kraan:1998sn,Diakonov:2002fq,Bruckmann:2009nw,Diakonov:2009jq,Poppitz:2008hr,Gonzalez-Arroyo:2019wpu,Anber:2021upc}.
Like ours, their solution carries magnetic charge and has nontrivial holonomy. Our ansatz, however, carries
$\mathbb{Z}_N$ magnetic charge, and so must be represented by a multivalued function, Eq.
(\ref{zn_soln}), while
that of KvBLL has integral magnetic charge.
For our solution, the boundary condition for the holonomy at spatial infinity ensures that the configuration approaches a vacuum which is
degenerate with the trivial vacuum. Thus when one loop corrections are included, the action for our solution will remain
finite. In contrast, for the solution of KvBLL, the holonomy is at a maximum of the holonomous potential. When
one loop corrections are included, then, the action for a constituent with charge $1/N$
diverges as the spatial volume. The action for an instanton with integral charge remains finite, which
is why on the quantum level, these constituents cannot be pulled apart.

\section{Physics of $\mathbb{Z}_N$ dyons and the lattice}
\label{sec:lattice}

The $\mathbb{Z}_N$ dyons above represent configurations with fractional topological charge, and carry both
$\mathbb{Z}_N$ electric and magnetic charge. In the confined phase, $\mathbb{Z}_N$ magnetic charge is unconfined,
but in the deconfined phase, $\mathbb{Z}_N$ magnetic charge which
propagates in the temporal direction is confined. Thus quantum mechanically, a
$\mathbb{Z}_N$ dyon is only relevant in a narrow temperature region above the deconfining temperature, $T_d$.
As the temperature increases, so does the magnetic string tension, binding the
$\mathbb{Z}_N$ dyons with increasing strength. Eventually, at some temperature above but close to $T_d$,
there will only be instantons with integral topological charge.

We stress that unless there are boundary conditions which are twisted with respect to $\mathbb{Z}_N$,
as suggested by 't Hooft \cite{tHooft:1980kjq,tHooft:1981nnx}, it will not be possible to measure
a system with a net number of $\mathbb{Z}_N$ dyons (or anti-dyons).
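As a quick numerical cross-check, the fractional charges $Q = \mathbf{m}\cdot\mathbf{k}/N^2$ quoted in Sec. \ref{sec:simple} are easy to verify; in the sketch below $\mathbf{m}\cdot\mathbf{k}$ is read as the trace ${\rm tr}(\mathbf{m}\,\mathbf{k})$ (our reading of the convention), and we also confirm that $\exp(2\pi i\,\mathbf{k}/N)$ is the center element required for the holonomy and for the patch mismatch of Eq. (\ref{zn_soln}):

```python
import numpy as np

# Check: Q = tr(m k)/N^2 gives (N-1)/N for m = k = k_1 and -1/N for
# m = k_1, k = k_2, and exp(2*pi*i k/N) is the center element omega * 1.
def k1(N):
    d = np.ones(N)
    d[-1] = -(N - 1)
    return np.diag(d)

def k2(N):
    d = np.ones(N)
    d[-2] = -(N - 1)
    return np.diag(d)

def top_charge(m, k):
    # topological charge Q = tr(m k) / N^2 of a Z_N dyon
    N = m.shape[0]
    return np.trace(m @ k) / N**2

def center_element(k):
    # exp(2*pi*i k / N), which should equal omega times the identity
    N = k.shape[0]
    return np.diag(np.exp(2j * np.pi * np.diag(k) / N))
```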
Nevertheless, $\mathbb{Z}_N$ dyons do contribute
to the partition function, and in particular, alter the dependence of the free energy on the $\theta$-parameter.
For an especially clear discussion, see Unsal \cite{Unsal:2020yeh}.

As noted, in the confined phase $\mathbb{Z}_N$ magnetic charge can propagate freely.
Our construction of a dyon depended essentially upon the non-trivial holonomy at nonzero temperature,
but as the temperature decreases, surely the dominant dyons will not propagate in straight lines
in the direction of imaginary time. Instead, they will curl up over distances set by the confinement scale.
This gives an alternate explanation of why the dyons disappear above a given temperature, as they
get squeezed out by the extent of the direction in imaginary time.

In the continuum, an instanton in an $SU(N)$ gauge theory
with topological charge one, coupled to a single massless Dirac quark in the fundamental
representation, has two zero modes, for a quark of each chirality. In the adjoint
representation there are $2N$ zero modes. Thus to measure a $\mathbb{Z}_N$ dyon with
fractional topological charge $1/N$ it is necessary to use adjoint quarks.

On the lattice, in the pure gauge theory one can use an external quark propagator in the adjoint representation
to look for isolated zero modes. To ensure these are not lattice artifacts,
it is imperative to use a Dirac propagator with exact chiral symmetry, such
as the overlap operator
\cite{Ginsparg:1981bj,Kaplan:1992bt,Narayanan:1993zzh,Narayanan:1993sk,Narayanan:1993ss,Narayanan:1994gw,Luscher:1998pqa}.

This was done previously by Edwards, Heller, and Narayanan \cite{Edwards:1998dj}. With more modern
techniques, it should be possible to establish the existence of $\mathbb{Z}_N$ dyons with certainty very close to the continuum
limit \cite{karthik:2022}.
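The counting of zero modes scales with the Dynkin index of the representation: with the normalization in which the adjoint of $SU(N)$ has index $T = N$, a configuration of topological charge $\nu$ supports $2\,T(R)\,|\nu|$ chirality-unpaired zero modes in representation $R$. A small bookkeeping sketch under this assumed convention (the function names are ours):

```python
from fractions import Fraction

# Zero-mode counting from the index theorem, n = 2 T(R) |nu|, with T(R) the
# Dynkin index of the representation R (convention: T = N for the adjoint
# of SU(N); T = 5/2 for the sextet of SU(3)).
def n_zero_modes(T_R, nu):
    return 2 * T_R * abs(nu)

def T_adjoint(N):
    return Fraction(N)

T_SEXTET_SU3 = Fraction(5, 2)
```

In particular, an adjoint quark sees two zero modes for a dyon of charge $1/N$, while the sextet of $SU(3)$ yields a single zero mode only for objects of charge $1/5$.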
\n\nFodor {\\it et al.} performed simulations in a $SU(3)$ gauge theory\nusing an external quark propagator in the sextet representation\n\\cite{Fodor:2009nh}. As they discuss, the sextet representation is sensitive to the presence of objects\nwith topological charge $1\/5$, for which they see no evidence. However, this does not exclude\nthe appearance of objects with charge $1\/3$.\n\nWith the overlap operator, one would look for configurations with zero modes. Given that, it is then\npossible to compute the eigenvector as a function of position. This will allow an estimation of the size\nof the object with a given topological charge.\n\nFurther, especially in the vacuum, it is not clear what the space-time structure of the tangled dyon worldlines\nis, particularly at distances smaller than the confinement scale. This may help explain why\nlattice studies of the topological structure do not find evidence for a simple instanton, concentrated\nclose to a single point in spacetime\n\\cite{Horvath:2002yn,Ahmad:2005dr,Horvath:2005rv,Lian:2006ky,Thacker:2010zk,Thacker:2011sz,Alexandru:2021pap}.\n\n\\section{Quarks}\n\\label{sec:quarks}\n\nWe have concentrated exclusively on a gauge theory without dynamical quarks. In this section we discuss\nwhat might occur with their introduction.\n\nFor a Wilson loop which carries electric $\\mathbb{Z}_N$ flux, adding dynamical quarks generates quark anti-quark pairs\nwhich then screen the flux tube. In contrast, it is not possible to consistently define a $\\mathbb{Z}_N$ magnetic monopole\nwith dynamical quarks.
With dynamical quarks, the magnetic $\\mathbb{Z}_N$ flux is visible, and\nquarks generate a linear potential between $\\mathbb{Z}_N$ dyons, which confines them.\nSince the size of a quantum instanton in the pure\ngauge theory is of the order of the confinement scale, this surely remains so in the theory with dynamical quarks.\n\nFor the topological susceptibility in QCD, there are three regimes. Consider first zero quark chemical potential.\nInstantons dominate at temperatures $\\geq 300$~MeV, and $\\mathbb{Z}_N$ dyons below that.\nThe lattice finds that\nthe crossover temperature for chiral symmetry is at $T_\\chi \\approx 156 \\pm 2$~MeV\n\\cite{HotQCD:2018pds,Borsanyi:2020fev,Guenther:2022wcr}.\nThus the interaction of $\\mathbb{Z}_N$ dyons with essentially massless quarks dominates for $T_\\chi \\leq T \\leq 300$~MeV,\nwhile for $T \\leq T_\\chi$, the dyons interact with massive quarks.\nWhile this description is consistent with measurements on the lattice, it is\nadmittedly speculative, as it is not clear how to measure $\\mathbb{Z}_N$ dyons\nin the presence of dynamical quarks.\n\nAt low temperature and nonzero quark chemical potential, $\\mu$, there are at least three regions.\nBecause the number of degrees of freedom for quarks and gluons at nonzero temperature is so much greater\nthan that at $T = 0$ and $\\mu \\neq 0$,\nestimates with a dilute instanton gas indicate that at zero temperature, instantons do not dominate until {\\it very} high\nchemical potential, at least $\\mu \\sim 2$~GeV \\cite{Pisarski:2019upw}.\nThus for $\\mu > 313$~MeV, when a Fermi sea of nucleons first forms, the interaction of $\\mathbb{Z}_N$ dyons and\nquarks controls the topological susceptibility. Initially this is in a region with broken chiral symmetry,\nand then a region which is chirally symmetric.
\nThis includes both a confined, quarkyonic regime \\cite{McLerran:2007qj,Lajer:2021kcz},\nand perhaps even into the nominally perturbative regime, which may arise for\n$\\mu > 1$~GeV. In the latter, color superconductivity is dominant near the Fermi surface,\nbut the effects of the axial anomaly can still affect the possible pairing mechanisms \\cite{Pisarski:1999gq}.\nThis illustrates the general expectation that the phases of cold, dense\nquarks might be considerably more involved than at $\\mu = 0$ and $T \\neq 0$.\n\nA useful approach would be to develop a phenomenological model in ~which the effects of $\\mathbb{Z}_N$ dyons in the $SU(N)$\ntheory without quarks could be fit. Then this model could be extended to the theory with dynamical quarks.\nWe leave this for future study.\n\n\\acknowledgments\nV.P.N. was supported in part by the U.S. National Science Foundation Grants No.\nPHY-2112729 and No. PHY-1820271.\nR.D.P. was supported by the U.S. Department of Energy under contract DE-SC0012704.\nR.D.P. thanks A. Alexandru, O. Alvarez, A. Dumitru, U. Heller, I. Horvath, T. Izubuchi, J. Lenaghan,\nN. Karthik, R. Narayanan, P. Petreczky, E. Poppitz, R. Venugopalan, and M. 
Unsal for discussions.\n\n\\input{frac.bbl}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:int}Introduction}\n\nNeutrino physics has made great progress in the past decades.\nThe mass squared differences $\\Delta m^2_{\\rm sol}$, $\\Delta\nm^2_{\\rm atm}$ and the mixing angles have been measured with good\naccuracy \\cite{Strumia:2006db,Schwetz:2008er,Fogli:Indication}.\nA global fit to the current neutrino oscillation data demonstrates\nthat the observed lepton mixing matrix is remarkably compatible with\nthe tri-bimaximal (TB) mixing pattern \\cite{TBmix}, which suggests\nthe following values of the mixing angles:\n\\begin{equation}\n\\label{1}\\sin^2\\theta^{TB}_{12}=\\frac{1}{3},~~~\\sin^2\\theta^{TB}_{23}=\\frac{1}{2},~~\\sin\\theta^{TB}_{13}=0\n\\end{equation}\nThe question of how to achieve TB mixing has been the subject of\nintense theoretical speculation. Recently it has been found that\nflavor symmetries based on discrete groups are particularly suitable\nfor reproducing this specific mixing pattern at leading order (LO).\nVarious discrete flavor symmetry models have been built; see\nRefs.~\\cite{Altarelli:2010gt,Ishimori:2010au} for a review. A\ncommon feature of these models is that TB mixing is produced at leading\norder, with the leading order predictions subject to\ncorrections due to higher dimensional operators in both the driving\nsuperpotential and the Yukawa superpotentials. These models provide\nan elegant description of neutrino mixing at a very high energy scale,\nwhereas the neutrino experiments are performed at low energy scales.\nIn order to compare the model predictions with experimental data,\none has to perform a renormalization group (RG) running from the\nhigh energy scale where the theory is defined down to the electroweak\nscale $M_Z$.
Moreover, we note that RG effects have interesting\nimplications for model building: the lepton mixing angles can be\nmagnified \\cite{Balaji:2000au}, and even bimaximal mixing at high\nenergy can be compatible with low energy experiments\n\\cite{Antusch:2002hy}. Therefore, in consistent flavor model\nbuilding, we have to guarantee that the successful leading order\npredictions are not destroyed by the RG running corrections. The aim\nof this work is to analyze the RG corrections to the TB mixing\npattern in two typical $S_4$ flavor models\n\\cite{Bazzocchi:2009pv,Ding:2009iy}, in addition to the next to\nleading order corrections arising from higher dimensional operators,\nand to confront them with experimental values. We shall see that the\nrunning of the neutrino parameters is strongly constrained by the\nflavor symmetry as well, and this result holds very generally for discrete\nflavor symmetry models.\n\nThe $S_4$ flavor symmetry is very interesting. From the group theory\npoint of view \\cite{Lam:2008rs}, it is the minimal group which can\nproduce the TB mixing in a natural way, namely without ad hoc\nassumptions. It is remarkable that it offers more alternatives to\nrealize the exact TB mixing than the $A_4$ flavor models\n\\cite{Bazzocchi:2009pv,Ding:2009iy,Ma:2005pd,Meloni:2009cz,Altarelli:2009gn,Grimus:2009pg}.\nIn particular, the $\\mathbf{2}$ dimensional irreducible\nrepresentation of the $S_4$ group can be utilized to describe the quark\nsector. Moreover, the group $S_4$ as a flavor symmetry, as has been\nshown for example in Refs.\n\\cite{Dutta:2009bj,Hagedorn:2010th,Ishimori:2008fi,Toorop:2010yh,Ding:2010pc},\ncan also give a successful description of the quark and lepton\nmasses and mixing angles within the framework of grand unified\ntheory (GUT). We note that $S_4$ as a flavor symmetry was\ninvestigated long ago \\cite{Pakvasa:1978tx,Hagedorn:2006ug}, but\nwith different aims and different results.\n\nThe paper is organized as follows.
In section \\ref{sec:RGE_review},\nwe briefly review the RG equations for the type I see-saw mechanism.\nThen we give a concise introduction to the Bazzocchi-Merlo-Morisi\n(BMM) $S_4$ model \\cite{Bazzocchi:2009pv} and the $S_4$ model of\nDing \\cite{Ding:2009iy} in section \\ref{sec:models}, where the main\nfeatures of these models are shown. In section\n\\ref{sec:RGE_running}, our results for the RG effects on the neutrino\nmixing parameters in these two interesting models are presented.\nFinally we draw our conclusions in section \\ref{sec:conclusion}.\n\n\\section{\\label{sec:RGE_review}Running of neutrino parameters in type I see-saw scenario}\n\nThe running of neutrino masses and lepton mixing angles is very\nimportant and has been studied extensively in the literature\n\\cite{rge1,rge2,rge3,rge4,rge5,Chakrabortty:2008zh,Antusch:2003kp,Antusch:2005gp,Xing:2007fb,Bergstrom:2010qb,Blennow:2011mp}\nin the past years. In particular, Antusch et al. have developed the\nMathematica package REAP in Ref. \\cite{Antusch:2005gp}, which can\nsolve renormalization group equations (RGEs) and provide numerical\nvalues for the neutrino mass and mixing parameters. In this section,\nwe present the RGEs for neutrino parameters in the minimal\nsupersymmetric standard model (MSSM) extended by three singlet\n(right-handed) heavy neutrinos. The superpotential is given by\n\\begin{equation}\n\\label{2}W=D^cY_dQH_d+U^cY_uQH_u+E^cY_eLH_d+N^cY_{\\nu}LH_u+\\frac{1}{2}N^cMN^c\n\\end{equation}\nwhere $Q$ and $L$ are the left-handed quark and lepton doublet\nchiral superfields, respectively, $U^c$, $D^c$, $N^c$ and $E^c$ are\nthe right-handed up-type quark, down-type quark, heavy neutrino and\ncharged lepton singlet superfields, respectively, and $H_u$ and $H_d$ are\nthe well-known two Higgs doublets of the MSSM. The Yukawa matrices\n$Y_u$, $Y_d$, $Y_{\\nu}$ and $Y_e$ are general complex $3\\times3$\nmatrices and the $3\\times3$ heavy neutrino mass matrix $M$ is\nsymmetric.
Integrating out all the heavy singlet neutrinos, one gets\nthe usual dimension-5 effective neutrino mass operator\n\\begin{equation}\n\\label{3}{\\cal L}_{\\kappa}=-\\frac{1}{4}\\kappa_{fg}(L^{f}\\cdot H_u)(L^{g}\\cdot H_u)\n\\end{equation}\nwhere $f$ and $g$ are family indices, and the dot indicates the\n$SU(2)_L$ invariant contractions. After electroweak symmetry\nbreaking, this operator leads to the light-neutrino masses\n\\begin{equation}\n\\label{4}m_{\\nu}(\\mu)=-\\frac{1}{4}\\kappa(\\mu)v^2\\sin^2\\beta\n\\end{equation}\nwhere $\\mu$ is the renormalization scale, $v=246$ GeV and\n$\\tan\\beta=v_u\/v_d$ is the ratio of vacuum expectation values (VEV)\nof the Higgs doublets. Above the heaviest neutrino mass scale, the\nlight-neutrino mass matrix reads\n\\begin{equation}\n\\label{5}m_{\\nu}(\\mu)=-\\frac{1}{2}Y^T_{\\nu}(\\mu)M^{-1}(\\mu)Y_{\\nu}(\\mu)v^2\\sin^2\\beta\n\\end{equation}\nWhen we evolve from the high energy scale down to the low\nexperimental observation scale, the heavy singlet neutrinos involved\nin the see-saw mechanism have to be integrated out one by one, thus\none has to consider a series of effective theories\n\\cite{Antusch:2005gp}. In general, the light-neutrino mass matrix\ncan be written as\\footnote{We use the GUT charge normalization for\nthe gauge coupling $g_1$.}\n\\begin{equation}\n\\label{6}m_{\\nu}=-\\frac{1}{4}\\left(\\accentset{(n)}{\\kappa}+2\\accentset{(n)}{Y}^T_{\\nu}\\accentset{(n)}{M}^{-1}\\accentset{(n)}{Y}_{\\nu}\\right)v^2\\sin^2\\beta\n\\end{equation}\nwhere the superscript $(n)$ denotes a quantity below the $n$th mass\nthreshold.
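The tree-level content of Eq.(\\ref{5}) and Eq.(\\ref{6}) can be checked with a short numerical sketch: integrating the heavy singlets out one at a time and accumulating their contributions to the effective coupling reproduces the full see-saw mass matrix. All numbers below are arbitrary illustrative inputs, not parameters of either model.

```python
import numpy as np

v, tan_beta = 246.0, 10.0                       # GeV; illustrative tan(beta)
sin2_beta = tan_beta**2 / (1.0 + tan_beta**2)   # sin^2(beta)

# arbitrary illustrative neutrino Yukawa couplings and heavy masses (GeV)
Y_nu = np.array([[0.10, 0.02, 0.00],
                 [0.01, 0.20, 0.05],
                 [0.00, 0.03, 0.40]])
M = np.diag([1.0e12, 1.0e13, 1.0e14])

# full see-saw result, Eq. (5): m_nu = -1/2 Y^T M^{-1} Y v^2 sin^2(beta)
m_full = -0.5 * Y_nu.T @ np.linalg.inv(M) @ Y_nu * v**2 * sin2_beta

# integrate the singlets out one by one, heaviest first: each threshold
# adds 2 (Y^T)_{g n} M_n^{-1} (Y)_{n f} to the effective coupling kappa
kappa = np.zeros((3, 3))
for n in (2, 1, 0):
    kappa += 2.0 * np.outer(Y_nu[n], Y_nu[n]) / M[n, n]
m_seq = -0.25 * kappa * v**2 * sin2_beta        # Eq. (4)
```

At tree level, with no running between the thresholds, the sequential construction agrees exactly with Eq.(\\ref{5}); differences between the two only appear once the RG running between thresholds is switched on.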
In the MSSM, the two parts $\\accentset{(n)}{\\kappa}$ and\n$2\\accentset{(n)}{Y}^T_{\\nu}\\accentset{(n)}{M}^{-1}\\accentset{(n)}{Y}_{\\nu}$\nevolve in the same way\n\\begin{eqnarray}\n\\label{8}16\\pi^2\\frac{d \\accentset{(n)}{X}}{d t}=\\left(Y^{\\dagger}_eY_e+\\accentset{(n)}{Y}^{\\dagger}_{\\nu}\\accentset{(n)}{Y}_{\\nu}\\right)^{T}\\accentset{(n)}{X}+\\accentset{(n)}{X}\\left(Y^{\\dagger}_eY_e+\\accentset{(n)}{Y}^{\\dagger}_{\\nu}\\accentset{(n)}{Y}_{\\nu}\\right)+\\Big[2\\,{\\rm Tr}\\big(\\accentset{(n)}{Y}^{\\dagger}_{\\nu}\\accentset{(n)}{Y}_{\\nu}+3Y^{\\dagger}_uY_u\\big)-\\frac{6}{5}g^2_1-6g^2_2\\Big]\\accentset{(n)}{X}\n\\end{eqnarray}\nwhere $t=\\ln(\\mu\/\\mu_0)$, and $\\accentset{(n)}{X}$ stands for\n$\\accentset{(n)}{\\kappa}$ or\n$2\\accentset{(n)}{Y}^T_{\\nu}\\accentset{(n)}{M}^{-1}\\accentset{(n)}{Y}_{\\nu}$.\nThe RG equations for the Yukawa couplings $Y_u$, $Y_d$, $Y_{\\nu}$,\n$Y_e$ and the right-handed neutrino mass matrix $M$ are given by\n\\begin{eqnarray}\n\\nonumber&&16\\pi^2\\frac{d\\accentset{(n)}{Y}_{\\nu}}{dt}=\\accentset{(n)}{Y}_{\\nu}\\left[3\\accentset{(n)}{Y}^{\\dagger}_{\\nu}\\accentset{(n)}{Y}_{\\nu}+Y^{\\dagger}_eY_e+{\\rm Tr}(\\accentset{(n)}{Y}^{\\dagger}_{\\nu}\\accentset{(n)}{Y}_{\\nu})+3{\\rm Tr}(Y^{\\dagger}_uY_u)-\\frac{3}{5}g^2_1-3g^2_2\\right]\\\\\n\\nonumber&&16\\pi^2\\frac{dY_e}{dt}=Y_e\\left[3Y^{\\dagger}_eY_e+\\accentset{(n)}{Y}^{\\dagger}_{\\nu}\\accentset{(n)}{Y}_{\\nu}+3{\\rm Tr}(Y^{\\dagger}_dY_d)+{\\rm Tr}(Y^{\\dagger}_eY_e)-\\frac{9}{5}g^2_1-3g^2_2\\right]\\\\\n\\nonumber&&16\\pi^2\\frac{dY_u}{dt}=Y_u\\left[Y^{\\dagger}_dY_d+3Y^{\\dagger}_uY_u+{\\rm Tr}(\\accentset{(n)}{Y}^{\\dagger}_{\\nu}\\accentset{(n)}{Y}_{\\nu})+3{\\rm Tr}(Y^{\\dagger}_uY_u)-\\frac{13}{15}g^2_1-3g^2_2-\\frac{16}{3}g^2_3\\right]\\\\\n\\nonumber&&16\\pi^2\\frac{dY_d}{dt}=Y_d\\left[3Y^{\\dagger}_dY_d+Y^{\\dagger}_uY_u+3{\\rm Tr}(Y^{\\dagger}_dY_d)+{\\rm 
Tr}(Y^{\\dagger}_eY_e)-\\frac{7}{15}g^2_1-3g^2_2-\\frac{16}{3}g^2_3\\right]\\\\\n\\label{9}&&16\\pi^2\\frac{d\\accentset{(n)}{M}}{dt}=2(\\accentset{(n)}{Y}_{\\nu}\\accentset{(n)}{Y}^{\\dagger}_{\\nu})\\accentset{(n)}{M}+2\\accentset{(n)}{M}(\\accentset{(n)}{Y}_{\\nu}\\accentset{(n)}{Y}^{\\dagger}_{\\nu})^{T}\n\\end{eqnarray}\nIn the full theory above the highest see-saw scale, the superscript\n$(n)$ has to be omitted, and the RG equations for MSSM without\nsinglet neutrinos can be recovered by setting the neutrino Yukawa\ncouplings and the mass matrix of the singlets to be zero. Below the\nSUSY breaking scale, which is taken to be 1000 GeV in this work, we\ngo to the standard model region. Since all the heavy right-handed\nneutrinos have already been integrated out at this scale, the\nneutrino masses are described by the effective dimension-5 operator,\nand the neutrino mass matrix $m_{\\nu}$ evolves as\n\\begin{equation}\n\\label{10}16\\pi^2\\frac{dm_{\\nu}}{dt}=-\\frac{3}{2}(Y^{\\dagger}_eY_e)^{T}m_{\\nu}-\\frac{3}{2}m_{\\nu}(Y^{\\dagger}_eY_e)+\\left[2{\\rm Tr}(3Y^{\\dagger}_uY_u+3Y^{\\dagger}_dY_d+Y^{\\dagger}_eY_e)-3g^2_2+\\lambda\\right]m_{\\nu}\n\\end{equation}\nwhere $\\lambda$ is the Higgs self-interaction coupling\\footnote{We\nuse the convention that the Higgs self-interaction term in the\nLagrangian is $-\\frac{\\lambda}{4}(H^{\\dagger}H)^2$.}. In order to\ncalculate the RG evolution of the effective neutrino mass matrix, we\nhave to solve the RG equations for all the parameters of the theory\nsimultaneously\\footnote{The running of the gauge couplings has to be\ntaken into account as well, the corresponding $\\beta$ functions are\nwell-known .}. 
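As an illustration of how an equation such as Eq.(\\ref{8}) is integrated in practice, the sketch below implements its right-hand side and a naive fixed-step Euler evolution in $t=\\ln(\\mu\/\\mu_0)$. REAP solves the full coupled system with running gauge and Yukawa couplings; here everything except $X$ is frozen, purely for illustration.

```python
import numpy as np

def beta_X(X, Ye, Ynu, Yu, g1, g2):
    """Right-hand side of Eq. (8): the MSSM running of X = kappa
    (or of 2 Y^T M^{-1} Y, which runs in the same way)."""
    P = Ye.conj().T @ Ye + Ynu.conj().T @ Ynu
    alpha = 2.0 * np.trace(Ynu.conj().T @ Ynu + 3.0 * Yu.conj().T @ Yu).real \
            - 1.2 * g1**2 - 6.0 * g2**2
    return (P.T @ X + X @ P + alpha * X) / (16.0 * np.pi**2)

def run_X(X, Ye, Ynu, Yu, g1, g2, t0, t1, steps=1000):
    """Naive fixed-step Euler integration with frozen couplings."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        X = X + dt * beta_X(X, Ye, Ynu, Yu, g1, g2)
    return X
```

With all Yukawas switched off except a diagonal $Y_u$, only the flavor-blind trace term survives and $X$ is simply rescaled by $\\exp[\\alpha\\,(t_1-t_0)\/16\\pi^2]$, which provides a quick check of the integrator against the analytic solution.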
At the mass threshold, we should integrate out the\ncorresponding heavy neutrino and perform the tree-level matching\ncondition for the effective coupling constant between the effective\ntheories\n\\begin{equation}\n\\label{7}\\accentset{(n)}{\\kappa}_{gf}\\Big|_{M_n}=\\;\\;\\accentset{(n+1)}{\\kappa}_{gf}\\Big|_{M_n}+2(\\;\\;\\accentset{(n+1)}{Y}_{\\nu}^{\\,\\,T}\\,)_{gn}M^{-1}_n(\\;\\;\\accentset{(n+1)}{Y}_{\\nu}\\;)_{nf}\\Big|_{M_n}~~~({\\rm no\\,sum\\,over}\\, n)\n\\end{equation}\n\n\\section{\\label{sec:models}Variants of the Two $S_4$ Models\\label{sec:BMM}}\n\nIn this section, we recapitulate the main features of the $S_4$\nflavor model of BMM \\cite{Bazzocchi:2009pv} and Ding\n\\cite{Ding:2009iy}. Both models generate neutrino masses via type I\nsee-saw mechanism, and the neutrino TB mixing is produced at LO. For\nan introduction to the group theory of $S_4$ we refer to\nRefs.\\cite{Ding:2009iy,Ding:2010pc}, the same conventions for the\n$S_4$ representation matrix and Clebsch-Gordan coefficient are used\nin this work.\n\n\\subsection{\\label{sec:BMM}BMM $S_4$ model}\nIn this model the flavor symmetry $S_4$ is accompanied by the cyclic\ngroup $Z_5$ and the Froggatt-Nielsen symmetry $U(1)_{FN}$. The $S_4$\nflavor symmetry is spontaneously broken to the subgroup $Z_2\\times\nZ_2$ in the neutrino sector and to nothing in the charged lepton one\nat leading order. This misalignment between the flavor symmetry\nbreaking in the neutrino and charged lepton sectors is exactly the\norigin of the TB mixing. Furthermore, the auxiliary symmetry $Z_5$\neliminating some dangerous terms, with the interplay of the\ncontinuous $U(1)_{FN}$, is responsible for the hierarchy among the\ncharged lepton masses. 
The leptonic fields and the flavon fields of\nthe model and their transformation properties under the flavor\nsymmetry are shown in Table \\ref{table:BMM_transformation}.\n\n\\begin{table}[hptb]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c||c||c|c||c|c||c|}\n \\hline\\hline\n & $\\ell$ & $e^c$ & $\\mu^c$ & $\\tau^c$ & $\\nu^c$ & $H_{u,d}$ & $\\theta$ & $\\psi$ & $\\eta$ & $\\Delta$ & $\\varphi$ & $\\xi'$ \\\\\n \\hline\n $S_4$ & $3_1$ & $1_2$ & $1_2$ & $1_1$ & $3_1$ & $1_1$ & $1_1$ & $3_1$ & 2 & $3_1$ & $2$ & $1_2$ \\\\\n $Z_5$ & $\\omega^4$ & $1$ & $\\omega^2$ & $\\omega^4$ & $\\omega$ & 1 & 1 & $\\omega^2$ & $\\omega^2$ & $\\omega^3$ & $\\omega^3$ & 1 \\\\\n $U(1)_{FN}$ & 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\\\\n \\hline\\hline\n \\end{tabular}\n\\caption{\\label{table:BMM_transformation}Transformation properties\nof the leptonic fields and flavons in the BMM model\n\\cite{Bazzocchi:2009pv}. Note that $\\omega$ is the fifth root of\nunity, i.e. $\\omega=e^{i2\\pi\/5}$.}\n\\end{center}\n\\end{table}\n\nBy introducing a $U(1)_R$ symmetry, the authors in Ref.\n\\cite{Bazzocchi:2009pv} have shown that the flavon fields develop\nthe following vacuum alignment at LO\n\\begin{eqnarray}\n\\nonumber\\langle\\psi\\rangle&=&\\left(\n \\begin{array}{c}\n 0 \\\\\n 1 \\\\\n 0 \\\\\n \\end{array}\n \\right)v_{\\psi},~~~\\langle\\eta\\rangle=\\left(\n \\begin{array}{c}\n 0 \\\\\n 1 \\\\\n \\end{array}\n \\right)v_\\eta\\\\\n\\nonumber&& \\\\\n\\nonumber\\langle\\Delta\\rangle&=&\\left(\n \\begin{array}{c}\n 1 \\\\\n 1 \\\\\n 1 \\\\\n \\end{array}\n \\right)v_\\Delta,~~~\n\\langle\\varphi\\rangle=\\left(\n \\begin{array}{c}\n 1 \\\\\n 1 \\\\\n \\end{array}\n \\right)v_{\\varphi}\\\\\n\\nonumber&& \\\\\n\\label{11}\\langle\\xi'\\rangle&=&v_{\\xi'},~~~\n\\langle\\theta\\rangle=v_\\theta\n\\end{eqnarray}\nThe superpotential of the model in the lepton sector 
is\n\\begin{eqnarray}\n\\nonumber&&w_{\\ell}=\\sum^{4}_{i=1}\\frac{\\theta}{\\Lambda}\\frac{y_{e,i}}{\\Lambda^3}\\,e^c(\\ell X_i)_{1_2}H_d+\\frac{y_{\\mu}}{\\Lambda^2}\\mu^c(\\ell\\psi\\eta)_{1_2}H_d+\\frac{y_{\\tau}}{\\Lambda}\\tau^c(\\ell\\psi)_{1_1}H_d+...\\\\\n\\label{12}&&w_{\\nu}=y(\\nu^c\\ell)_{1_1}H_u+x_d(\\nu^c\\nu^c\\varphi)_{1_1}+x_t(\\nu^c\\nu^c\\Delta)_{1_1}+...\n\\end{eqnarray}\nwhere the subscripts $1_1$ and $1_2$ denote the contractions in $1_1$\nand $1_2$, respectively, and the dots stand for higher dimensional\noperators, which are suppressed by additional powers of the cutoff\n$\\Lambda$. The composite $X$ is given by\n\\begin{equation}\n\\label{13}X=\\{\\psi\\psi\\eta,\\,\\psi\\eta\\eta,\\,\\Delta\\Delta\\xi',\\,\\Delta\\varphi\\xi'\\}\n\\end{equation}\nTaking into account the vacuum alignment in Eq.(\\ref{11}), the mass matrix for the charged leptons reads\n\\begin{equation}\n\\label{14}m_{\\ell}=\\frac{v_du}{\\sqrt{2}}\\left(\\begin{array}{ccc}y^{(1)}_eu^2t&y^{(2)}_eu^2t&y^{(3)}_eu^2t\\\\\n0&y_{\\mu}u&0\\\\\n0&0&y_{\\tau}\n\\end{array}\\right)\n\\end{equation}\nwhere $y^{(i)}_e$ is the linear combination of the $y_{e,i}$\ncontributions. The parameter $u$ parameterizes the ratios\n$v_{\\psi}\/\\Lambda$, $v_{\\eta}\/\\Lambda$,\n$v_{\\Delta}\/\\Lambda$, $v_{\\varphi}\/\\Lambda$ and $v_{\\xi'}\/\\Lambda$,\nwhich should be of the same order of magnitude to produce the mass\nhierarchy among the charged fermions. The parameter $t$ denotes the\nratio $v_{\\theta}\/\\Lambda$. It has been shown that the parameters\n$u$ and $t$ lie in a range around $0.01$, and that the normal hierarchy (NH)\nand inverted hierarchy (IH) spectra are distinguished by the sign of\n$\\sin\\Omega$. We note that the Dirac CP phase is undetermined\nbecause the reactor angle is vanishing in TB mixing. The above\nsuccessful leading order results are corrected by the NLO\ncontributions, which consist of the higher dimensional operators in\nboth the driving superpotential and Yukawa superpotentials.
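The hierarchy encoded in Eq.(\\ref{14}) is easy to check numerically: for order-one Yukawa couplings, the singular values of $m_\\ell$ scale as $y_\\tau : y_\\mu u : y_e u^2 t$, i.e. $m_e\/m_\\mu\\sim ut$ and $m_\\mu\/m_\\tau\\sim u$. The values of $u$ and $t$ below are arbitrary illustrative choices, with all Yukawas set to one.

```python
import numpy as np

u, t = 0.03, 0.03              # illustrative flavon VEV ratios
vd = 246.0 / np.sqrt(101.0)    # v cos(beta) for tan(beta) = 10, in GeV

# Eq. (14) with y_e^{(i)} = y_mu = y_tau = 1 for illustration
ml = (vd * u / np.sqrt(2.0)) * np.array([[u**2 * t, u**2 * t, u**2 * t],
                                         [0.0,      u,        0.0],
                                         [0.0,      0.0,      1.0]])
m_e, m_mu, m_tau = np.sort(np.linalg.svd(ml, compute_uv=False))
# the singular values reproduce the parametric scalings of the text:
# m_mu/m_tau ~ u and m_e/m_mu ~ u*t
```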
It has\nbeen shown that all three leptonic mixing angles receive\ncorrections of order $u$ \\cite{Bazzocchi:2009pv}.\n\n\\subsection{The $S_4$ model of Ding}\nThe total flavor symmetry of this model is $S_4\\times Z_3\\times Z_4$\n\\cite{Ding:2009iy}. It is remarkable that the realistic pattern of\nfermion masses and flavor mixing in both the lepton and quark sectors\nhas been reproduced in this model, and the mass hierarchies are\ndetermined by the spontaneous breaking of the flavor symmetry\nwithout invoking a Froggatt-Nielsen $U(1)$ symmetry. The leptonic\nfields and the flavons of the model and their classifications under\nthe flavor symmetry are shown in Table \\ref{tab:S4trans}, where the\nquark fields have been omitted.\n\n\\begin{table}[hptb]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\\hline\\hline\n& $\\ell$ & $e^{c}$ & $\\mu^{c}$ & $\\tau^{c}$ & $\\nu^c$ &\n$H_{u,d}$ & $\\varphi$ & $\\chi$ & $\\zeta$ & $\\eta$ & $\\phi$ & $\\Delta$\n\\\\\\hline\n\n$\\rm{S_4}$ & $3_1$ & $1_1$ & $1_2$ & $1_1$ & $3_1$ & $1_1$ & $3_1$ &\n$3_2$ & $1_2$ & 2 & $3_1$ & $1_2$ \\\\\\hline\n\n$\\rm{Z_{3}}$ & $\\omega$ & $\\omega^2$ & $\\omega^2$ & $\\omega^2$ & $1$\n& 1 & 1 & 1 & 1 & $\\omega^2$ & $\\omega^2$ & $\\omega^2$\n\\\\\\hline\n\n$\\rm{Z_{4}}$ & 1 & i & -1 & -i & 1 & 1 & i & i & 1 & 1 & 1 & -1\n\\\\\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\label{tab:S4trans}The transformation rules of the leptonic\nfields and the flavons under the symmetry groups $S_4$, $Z_3$ and\n$Z_4$ in the $S_4$ model of Ref.
\\cite{Ding:2009iy}, where $\\omega$ is the third root of unity, i.e.\n$\\omega=e^{i\\frac{2\\pi}{3}}=(-1+i\\sqrt{3})\/2$.}\n\\end{table}\n\nIn this model the $S_4$ symmetry is broken down to the Klein four and\n$Z_3$ subgroups in the neutrino and charged lepton sectors,\nrespectively, at LO. This specific breaking scheme requires the flavon\nfields to develop the following vacuum configuration\n\\begin{eqnarray}\n\\nonumber&&\\langle\\varphi\\rangle=(0,V_{\\varphi},0),~~~\\langle\\chi\\rangle=(0,V_{\\chi},0),~~~\\langle\\zeta\\rangle=V_{\\zeta}\\\\\n\\label{24}&&\\langle\\eta\\rangle=(V_{\\eta},V_{\\eta}),~~~\\langle\\phi\\rangle=(V_{\\phi},V_{\\phi},V_{\\phi}),~~~\\langle\\Delta\\rangle=V_{\\Delta}\n\\end{eqnarray}\nWe have demonstrated that this particular vacuum alignment is a\nnatural solution to the scalar potential: all the VEVs (scaled by\nthe cutoff $\\Lambda$) $V_{\\varphi}\/\\Lambda$, $V_{\\chi}\/\\Lambda$,\n$V_{\\zeta}\/\\Lambda$, $V_{\\eta}\/\\Lambda$, $V_{\\phi}\/\\Lambda$ and\n$V_{\\Delta}\/\\Lambda$ are of the same order of magnitude, about ${\\cal\nO}(\\lambda^2_c)$, and this vacuum configuration is stable under the\nhigher order corrections; please see Ref. \\cite{Ding:2009iy} for\ndetails.
Then the most general superpotential in the lepton sector,\nwhich is compatible with the representation assignment of Table\n\\ref{tab:S4trans}, is given by\n\\begin{eqnarray}\n\\nonumber&&w_{\\ell}=\\frac{y_{e_1}}{\\Lambda^3}\\;e^{c}(\\ell\\varphi)_{1_1}(\\varphi\\varphi)_{1_1}h_d+\\frac{y_{e_2}}{\\Lambda^3}\\;e^{c}((\\ell\\varphi)_2(\\varphi\\varphi)_2)_{1_1}h_d+\\frac{y_{e_3}}{\\Lambda^3}\\;e^{c}((\\ell\\varphi)_{3_1}(\\varphi\\varphi)_{3_1})_{1_1}h_d\\\\\n\\nonumber&&~~+\\frac{y_{e_4}}{\\Lambda^3}\\;e^{c}((\\ell\\chi)_2(\\chi\\chi)_2)_{1_1}h_d+\\frac{y_{e_5}}{\\Lambda^3}\\;e^{c}((\\ell\\chi)_{3_1}(\\chi\\chi)_{3_1})_{1_1}h_d+\\frac{y_{e_6}}{\\Lambda^3}\\;e^{c}(\\ell\\varphi)_{1_1}(\\chi\\chi)_{1_1}h_d\\\\\n\\nonumber&&~~+\\frac{y_{e_7}}{\\Lambda^3}\\;e^{c}((\\ell\\varphi)_2(\\chi\\chi)_2)_{1_1}h_d+\\frac{y_{e_8}}{\\Lambda^3}\\;e^{c}((\\ell\\varphi)_{3_1}(\\chi\\chi)_{3_1})_{1_1}h_d+\\frac{y_{e_9}}{\\Lambda^3}\\;e^{c}((\\ell\\chi)_2(\\varphi\\varphi)_2)_{1_1}h_d\\\\\n\\nonumber&&~~+\\frac{y_{e_{10}}}{\\Lambda^3}\\;e^{c}((\\ell\\chi)_{3_1}(\\varphi\\varphi)_{3_1})_{1_1}h_d+\\frac{y_{\\mu}}{2\\Lambda^2}\\mu^{c}(\\ell(\\varphi\\chi)_{3_2})_{1_2}h_d+\\frac{y_{\\tau}}{\\Lambda}\\tau^{c}(\\ell\\varphi)_{1_1}h_d+...\\\\\n\\label{25}&&w_{\\nu}=\\frac{y_{\\nu_1}}{\\Lambda}((\\nu^{c}\\ell)_2\\eta)_{1_1}h_u+\\frac{y_{\\nu_2}}{\\Lambda}((\\nu^{c}\\ell)_{3_1}\\phi)_{1_1}h_u+\\frac{1}{2}M(\\nu^c\\nu^c)_{1_1}+...\n\\end{eqnarray}\nwhere $(...)_{1_1,1_2, 2, 3_1,3_2}$ stands for the $1_1$, $1_2$,\n$2$, $3_1$ and $3_2$ products, respectively. We note that one can\nalways set $M$ to be real and positive by a global phase\ntransformation of the lepton fields, and a priori $M$ should be of\nthe same order as the cutoff scale $\\Lambda$.
Taking into account\nthe vacuum alignment in Eq.(\\ref{24}), we find that the charged\nlepton mass matrix is diagonal at LO,\n\\begin{equation}\n\\label{26}m_{\\ell}=\\frac{v_d}{\\sqrt{2}}\\left(\\begin{array}{ccc}\ny_e\\frac{v^3_{\\varphi}}{\\Lambda^3}&0&0\\\\\n0&y_{\\mu}\\frac{v_{\\varphi}v_{\\chi}}{\\Lambda^2}&0\\\\\n0&0&y_{\\tau}\\frac{v_{\\varphi}}{\\Lambda}\n\\end{array}\\right)\n\\end{equation}\nwhere $y_e$ is the result of all the different contributions of\n$y_{e_i}$. The neutrino Dirac and Majorana mass matrices can be\nstraightforwardly read out as\n\\begin{equation}\n\\label{27}m^{D}_{\\nu}=\\frac{v_{u}}{\\sqrt{2}}\\left(\\begin{array}{ccc}2b&a-b&a-b\\\\\na-b&a+2b&-b\\\\\na-b&-b&a+2b\\end{array}\\right),~~~~~~~M_N=\\left(\\begin{array}{ccc}\nM&0&0\\\\\n0&0&M\\\\\n0&M&0\\end{array}\\right)\n\\end{equation}\nwhere $a=y_{\\nu_1}\\frac{v_{\\eta}}{\\Lambda}$ and\n$b=y_{\\nu_2}\\frac{v_{\\phi}}{\\Lambda}$. As a result, the light-neutrino mass matrix is given by\n\\begin{equation}\n\\label{28}m_{\\nu}=-(m^{D}_{\\nu})^{T}M^{-1}_{N}m^{D}_{\\nu}=-\\frac{v^2_u}{2M}\\left(\\begin{array}{ccc}2a^2-4ab+6b^2&a^2+2ab-3b^2&a^2+2ab-3b^2\\\\\na^2+2ab-3b^2&a^2-4ab-3b^2&2a^2+2ab+6b^2\\\\\na^2+2ab-3b^2&2a^2+2ab+6b^2&a^2-4ab-3b^2\n\\end{array}\\right)\n\\end{equation}\nWe can see that the mass matrix $m_{\\nu}$ is exactly diagonalized by the TB mixing matrix\n\\begin{equation}\n\\label{29}U^{T}_{\\nu}m_{\\nu}U_{\\nu}={\\rm diag}(m_{\\nu1},m_{\\nu2},m_{\\nu3})\n\\end{equation}\nThe unitary matrix $U_{\\nu}$ is\n\\begin{equation}\n\\label{30}U_{\\nu}=U_{TB}\\,{\\rm\ndiag}(e^{-i\\alpha_1\/2},e^{-i\\alpha_2\/2},e^{-i\\alpha_3\/2})\n\\end{equation}\nThe phases $\\alpha_{1}$, $\\alpha_2$ and $\\alpha_3$ are closely related to the Majorana phase\n\\begin{equation}\n\\label{31}\\alpha_1={\\rm arg}(-(a-3b)^2\/M),~~~~\\alpha_2={\\rm arg}(-4a^2\/M),~~~~\\alpha_3={\\rm arg}((a+3b)^2\/M)\n\\end{equation}\nand the neutrino masses are given 
by\n\\begin{equation}\n\\label{32}m_{\\nu1}=|(a-3b)^2|v^2_u\/(2M),~~~m_{\\nu2}=2|a|^2v^2_u\/M,~~~m_{\\nu3}=|(a+3b)^2|v^2_u\/(2M)\n\\end{equation}\nIt is interesting to estimate the order of magnitude of the right-handed neutrino mass $M$. Since the parameters $a$ and $b$ are\nexpected to be of order $\\lambda^2_c$, using\n$\\sqrt{|\\Delta m^2_{atm}|}\\simeq0.05$ eV as the light-neutrino mass\nscale in the see-saw formula we obtain\n\\begin{equation}\n\\label{add_EPJC_3}M\\sim 10^{12}\\!-\\!10^{13}~{\\rm GeV}\n\\end{equation}\nSimilar to the analysis of section \\ref{sec:BMM}, we define\n\\begin{equation}\n\\label{33}\\frac{b}{a}=R\\,e^{i\\Phi}\n\\end{equation}\nStraightforwardly we can express $R$ and $\\cos\\Phi$ as functions of the neutrino masses\n\\begin{eqnarray}\n\\nonumber&&R=\\frac{1}{3}\\sqrt{\\frac{2m_{\\nu1}}{m_{\\nu2}}+\\frac{2m_{\\nu3}}{m_{\\nu2}}-1}\\\\\n\\label{34}&&\\cos\\Phi=\\frac{\\frac{m_{\\nu3}}{m_{\\nu2}}-\\frac{m_{\\nu1}}{m_{\\nu2}}}{\\sqrt{\\frac{2m_{\\nu1}}{m_{\\nu2}}+\\frac{2m_{\\nu3}}{m_{\\nu2}}-1}}\n\\end{eqnarray}\nIn exactly the same way as in section \\ref{sec:BMM}, the Majorana phases in the standard parameterization are determined as\n\\begin{equation}\n\\label{add4} \\varphi_1=\\alpha_1-\\alpha_3,~~~~~\\varphi_2=\\alpha_2-\\alpha_3\n\\end{equation}\nwith\n\\begin{eqnarray}\n\\nonumber&& \\cos\\varphi_1=\\frac{-(1-9R^2)^2+36R^2\\sin^2\\Phi}{(1+9R^2)^2-36R^2\\cos^2\\Phi},~~~~~\\sin\\varphi_1=\\frac{12R(1-9R^2)\\sin\\Phi}{(1+9R^2)^2-36R^2\\cos^2\\Phi}\\\\\n\\label{add5}&&\\cos\\varphi_2=\\frac{-1-9R^2\\cos2\\Phi-6R\\cos\\Phi}{1+9R^2+6R\\cos\\Phi},~~~~\\sin\\varphi_2=\\frac{6R(1+3R\\cos\\Phi)\\sin\\Phi}{1+9R^2+6R\\cos\\Phi}\n\\end{eqnarray}\nConsequently, all the low energy parameters in the neutrino sector\ncan be expressed in terms of the lightest neutrino mass.
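The chain from Eq.(\\ref{27}) to Eq.(\\ref{34}) can be verified numerically. The sketch below, with arbitrary illustrative values of $a$, $b$ and $M$ (not fitted to data), checks that the see-saw matrix of Eq.(\\ref{28}) is diagonalized by the TB mixing matrix, that the moduli of its eigenvalues obey Eq.(\\ref{32}), and that Eq.(\\ref{34}) recovers $R=|b\/a|$ and $\\cos\\Phi$ from the mass spectrum.

```python
import numpy as np

# arbitrary illustrative inputs (a, b are of order lambda_c^2 in the model)
a, b, M = 0.04 * np.exp(0.3j), 0.02 * np.exp(-1.1j), 1.0e13   # M in GeV
vu = 246.0 * 10.0 / np.sqrt(101.0)                            # v sin(beta), tan(beta)=10

# Dirac and Majorana mass matrices of Eq. (27)
mD = (vu / np.sqrt(2.0)) * np.array([[2*b,   a - b,   a - b  ],
                                     [a - b, a + 2*b, -b     ],
                                     [a - b, -b,      a + 2*b]])
MN = M * np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
mnu = -(mD.T @ np.linalg.inv(MN) @ mD)                        # Eq. (28)

# tri-bimaximal mixing matrix
UTB = np.array([[ 2/np.sqrt(6), 1/np.sqrt(3),  0           ],
                [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
                [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])
mdiag = UTB.T @ mnu @ UTB              # diagonal up to the Majorana phases
m1, m2, m3 = np.abs(np.diag(mdiag))    # physical masses, cf. Eq. (32)

# Eq. (34): recover R = |b/a| and cos(Phi) from the mass spectrum
r = np.sqrt(2*m1/m2 + 2*m3/m2 - 1.0)
R, cosPhi = r / 3.0, (m3/m2 - m1/m2) / r
```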
Imposing\nthe condition $|\\cos\\Phi|\\leq1$, we get the following constraint on\nthe lightest neutrino mass:\n\\begin{eqnarray}\n\\nonumber&&m_{\\nu1}\\geq0.011\\;{\\rm eV},~~~~~{\\rm NH}\\\\\n\\label{35}&&m_{\\nu3}>0.0\\;{\\rm eV},~~~~~~~~~~{\\rm IH}\n\\end{eqnarray}\nThe NLO corrections have been analyzed in detail in Ref.\n\\cite{Ding:2009iy}. It is shown that both the neutrino masses and\nmixing angles receive corrections of order\n$\\varepsilon\\sim\\lambda^2_c$ with respect to the leading order result,\nwhere $\\varepsilon$ parameterizes the ratio ${\\rm VEV}\/\\Lambda$ and\n$\\lambda_c$ is the Cabibbo angle.\n\n\\section{\\label{sec:RGE_running}RG running effects in $S_4$ flavor models}\nAs has been shown, the TB mixing is achieved in both the BMM model and\nthe $S_4$ model of Ding at LO. In this section, we turn to a\nquantitative discussion of RG effects, and compare them with the NLO\ncorrections and the experimental data. For definiteness we shall\nassume a supersymmetry breaking scale of 1 TeV, below which the SM\nis valid. We note that the mass hierarchy between top and bottom is\nproduced via the spontaneous breaking of flavor symmetry in both\nmodels, and $\\tan\\beta$ should be small. As a result, we shall take\nthe parameter $\\tan\\beta$ to be 10, except where explicitly indicated\notherwise. To study the running of the neutrino mixing parameters\nfrom the GUT scale to the electroweak scale, the Mathematica package\nREAP is used \\cite{Antusch:2005gp}. This package numerically solves\nthe RG equations of the quantities relevant for neutrino mass and\nmixing, and it has been widely used for different purposes\n\\cite{Boudjemaa:2008jf}. The package can be downloaded from\nhttp:\/\/users.physik.tu-muenchen.de\/rge\/REAP\/index.html, and\nMathematica version 5 or higher is required.
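The NH bound in Eq.(\\ref{35}) follows from $|\\cos\\Phi|\\leq1$ alone and can be reproduced with a few lines of numerics. The mass squared differences below are typical best-fit values, assumed here purely for illustration; with them the smallest allowed $m_{\\nu1}$ indeed comes out near $0.011$ eV.

```python
import numpy as np

dm2_sol, dm2_atm = 7.6e-5, 2.4e-3          # eV^2; illustrative best-fit inputs

def cos_phi_NH(m1):
    """cos(Phi) of Eq. (34), evaluated along the NH spectrum."""
    m2 = np.sqrt(m1**2 + dm2_sol)
    m3 = np.sqrt(m1**2 + dm2_atm)
    return (m3 / m2 - m1 / m2) / np.sqrt(2 * m1 / m2 + 2 * m3 / m2 - 1.0)

grid = np.linspace(1e-4, 0.05, 2000)       # lightest mass m_nu1 in eV
allowed = grid[np.abs(cos_phi_NH(grid)) <= 1.0]
m1_min = allowed[0]                        # lower bound on m_nu1
```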
We note that\napproximate analytical solutions based on the leading log approximation\nto the RG equations have been derived in\nRefs.\\cite{Antusch:2005gp,Antusch:2003kp}, which allow one to\nunderstand the generic behavior of the renormalization effects.\nHowever, due to enhancement\/suppression factors and possible\ncancelations, the exact numerical solutions may differ considerably\nfrom those estimates. Therefore throughout this paper we adopt a\nnumerical approach, exploiting the convenient REAP package.\n\nAs has been demonstrated above, we generally need to introduce\nflavon fields to break the flavor symmetry in order to generate\nfermion masses and flavor mixing. In the unbroken phase of the flavor\nsymmetry, the flavons are active fields; therefore the corresponding\nRG equations should in principle be modified. However, the\nsuperpotentials of the models in Eq.(\\ref{12}) and Eq.(\\ref{25})\ncontain all the possible LO terms allowed by the symmetries, so the\ninvariance under the flavor symmetry $S_4$ is maintained until we\nmove down to the scale of the VEVs of the flavon fields, which is of\nthe order of the GUT scale. We conclude that the flavor structures of\nthe models are preserved above the scale of the flavon VEVs,\nand the contributions of the flavon fields to the RG running can\nbe absorbed by a redefinition of the model parameters\n\\cite{Lin:2009sq}. In the following, we will discuss the RG\nevolution of neutrino masses and mixing parameters in both $S_4$\nflavor models, starting from the initial conditions of the neutrino Dirac\nand Majorana mass matrices described in section \\ref{sec:models} at\nthe GUT scale. In particular, the parameter spaces are scanned.\n\n\\subsection{\\label{sec:RG_BMM}RG effects in the BMM model}\n\nIn this section we report results of the calculations of the RG\nevolution of the neutrino mixing parameters in the BMM model.\nWithout loss of generality, we choose the Yukawa coupling $y=1$ for\nour numerical analysis.
The GUT scale neutrino mass squared differences $m^2_{\\nu2}-m^2_{\\nu1}$ and
$|m^2_{\\nu3}-m^2_{\\nu1}|$ are treated as random numbers in the ranges
$3.5\\times10^{-5}{~\\rm eV^2}\\sim2.5\\times10^{-4}{~\\rm eV^2}$ and
$1.0\\times10^{-3}{~\\rm eV^2}\\sim8.3\\times10^{-3}{~\\rm eV^2}$,
respectively,\\footnote{We shall show later that the mass squared differences
at the GUT scale are about a factor of $1.2\\sim3$ larger than their
low-energy values over the whole spectrum.} and the lightest neutrino mass is
varied from the lower bound determined by Eq.(\\ref{21}) or Eq.(\\ref{34}) up
to 0.2 eV, which is the future sensitivity of the KATRIN experiment
\\cite{katrin}. The RG corrected neutrino mixing angles as functions of the
lightest neutrino mass are shown in Fig.~\\ref{fig:RGE_BMM_mass1} for both the
NH and IH spectra.\\footnote{The results are independent of the sign of
$\\sin\\Omega$; the reason is explained later.} These plots display only the
points corresponding to choices of the parameters reproducing
$\\Delta m^2_{\\rm atm}$, $\\Delta m^2_{\\rm sol}$ and the mixing angles within
the $3\\sigma$ intervals.

We see that the lightest neutrino mass is still bounded from below; the lower
bounds are about 0.0107 eV and 0.027 eV for the NH and IH spectra,
respectively, and are found to be almost independent of $\\tan\\beta$. It is
remarkable that all the mixing parameters and $J_{CP}$ are predicted to lie in
relatively narrow ranges. For both the NH and IH spectra, the RG changes of
the atmospheric and reactor angles are very small; the corresponding allowed
regions lie within the current $1\\sigma$ bounds. In particular, the RG
corrections to $\\sin^2\\theta_{23}$ and $\\sin^2\\theta_{13}$ are of the same
order as, or even smaller than, the NLO contributions. On the other hand, the
running of the solar neutrino mixing angle displays a different pattern.
The RG change of $\\sin^2\\theta_{12}$ is much larger than those of
$\\sin^2\\theta_{23}$ and $\\sin^2\\theta_{13}$, which is a general property of
the RG evolution \\cite{Antusch:2005gp,Antusch:2003kp}; consequently, the
deviation from its TB value can be large. In the case of the NH spectrum and
large $\\tan\\beta$, $\\sin^2\\theta_{12}$ is within the $3\\sigma$ limit only
for smaller values of the neutrino mass. Taking into account the lower bound
on the lightest neutrino mass, $m_{\\nu1}$ is constrained to lie in a certain
region, which shrinks as $\\tan\\beta$ increases. This point can be clearly
seen from Fig.~\\ref{fig:RGE_BMM_mass1}. For the IH spectrum, the RG effect on
$\\theta_{12}$ is even larger due to the near degeneracy of $m_{\\nu1}$ and
$m_{\\nu2}$. For example, for $\\tan\\beta=10$, $\\sin^2\\theta_{12}$ is very
close to or above the $3\\sigma$ upper bound in the allowed region of
$m_{\\nu3}$, and the value of $\\sin^2\\theta_{12}$ goes completely beyond the
$3\\sigma$ limit for larger $\\tan\\beta$. As a result, the IH spectrum is
strongly disfavored for $\\tan\\beta>10$ in the BMM model. We note that a
possibly large deviation of the solar neutrino mixing angle from its TB value,
together with small changes of the atmospheric and reactor angles under RG
running, is predicted as well in the Altarelli-Feruglio $A_4$ model
\\cite{Lin:2009sq}. In Ref.~\\cite{Lin:2009sq}, the authors perform a general
analysis of running effects on the lepton mixing parameters in flavor models
with type I see-saw; they show that, for a mass-independent mixing pattern,
the running contribution from the neutrino Yukawa coupling $Y_{\\nu}$ can be
absorbed by a small shift of the neutrino mass eigenvalues, leaving the mixing
angles unchanged; consequently, the RG change of the mixing angles is due to
the contribution coming from the charged lepton sector.
This is exactly why results similar to those of the $A_4$ model are obtained
here.

The variations of the Majorana phases $\\varphi_1$ and $\\varphi_2$, the Dirac
CP violating phase $\\delta$ and the Jarlskog invariant $J_{CP}$ with respect
to the lightest neutrino mass are also plotted in
Fig.~\\ref{fig:RGE_BMM_mass2}. We note that the Dirac phase $\\delta$ arises
from the running, even though it is undetermined at the beginning. The initial
value of the Jarlskog invariant $J_{CP}$ is zero due to the vanishing of
$\\theta_{13}$ in the TB mixing scheme, and it remains small because of the
smallness of $\\theta_{13}$, even though the value of $\\delta$ can be large.
It is remarkable that we can understand the dependence on the sign of
$\\sin\\Omega$ exactly. At the initial scale, the right-handed neutrino mass
matrices $M_N$ shown in Eq.(\\ref{15}) for $\\sin\\Omega>0$ and
$\\sin\\Omega<0$ are complex conjugates of each other, apart from an irrelevant
overall phase, and the neutrino Yukawa coupling matrix $Y_{\\nu}$ can be
chosen to be real. Therefore, in the case of $\\sin\\Omega<0$, the complex
conjugates of $Y_{\\nu}$, $Y_{e}$, $M_{N}$ and $\\kappa$ run in the same way
as the corresponding quantities for $\\sin\\Omega>0$ with the same initial
conditions. Consequently, the resulting low energy effective neutrino mass
matrix for $\\sin\\Omega<0$ is the complex conjugate of the corresponding one
for $\\sin\\Omega>0$. As a result, the RG evolution of the mixing angles and
$J_{CP}$ is independent of the sign of $\\sin\\Omega$, and the sum of each CP
phase for $\\sin\\Omega>0$ and $\\sin\\Omega<0$ is equal to $2\\pi$. These
results have been confirmed explicitly in our numerical analysis.

Concretely, the running of the neutrino masses and mixing parameters with
the energy scale is displayed in Fig.
\\ref{fig:RGE_BMM_scale} for both the NH and IH spectra with $\\tan\\beta=10$,
where the initial conditions for the NH and IH are chosen to be $m_1=0.041$
eV, $\\Delta m^2_{\\rm sol}=1.76\\times10^{-4}{\\rm eV^2}$, $\\Delta m^2_{\\rm
atm}=5.85\\times10^{-3}{\\rm eV^2}$ and $m_3=0.0538$ eV, $\\Delta m^2_{\\rm
sol}=1.87\\times10^{-4}{\\rm eV^2}$, $\\Delta m^2_{\\rm
atm}=5.58\\times10^{-3}{\\rm eV^2}$, respectively. Reasonable values for the
low energy oscillation parameters are reached. We see that the deviation of
the solar neutrino mixing angle $\\theta_{12}$ from its TB value can be
relatively large for the IH spectrum, while the mixing angles $\\theta_{23}$
and $\\theta_{13}$ and the CP phases $\\delta$, $\\varphi_1$ and $\\varphi_2$
are stable under the RG evolution; the corresponding RG corrections are small.
Since $Y^{\\dagger}_{\\nu}Y_{\\nu}=y^2{\\bf 1}$, the contribution from the
neutrino Yukawa coupling is universal above the see-saw threshold. Then only
the charged lepton part $Y^{\\dagger}_eY_e$ contributes to the change in the
mixing angles, and the evolution above the see-saw scales is essentially the
same as below. This is in contrast with the usual situation, where the
neutrino Yukawa coupling plays the dominant role in the running of the
neutrino mass matrix above the highest see-saw scale. Furthermore, we find
that the running of the neutrino masses $m_{\\nu i}$ is approximately given by
a common scaling of the mass eigenvalues; this is the same as the situation
below the see-saw scale \\cite{Antusch:2003kp,Chankowski:2001mx}. It is
remarkable that the neutrino masses are reduced by a factor of about $2.4$ at
low energy.
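This flavor-universality can be made explicit with the schematic one-loop RG
equation for the effective neutrino mass operator $\\kappa$, in the MSSM form
of Refs.~\\cite{Antusch:2005gp,Antusch:2003kp}; here $\\bar{\\alpha}$ denotes
the flavor-universal gauge and trace contributions, and the $Y_{\\nu}$ terms
are present only above the see-saw thresholds:

```latex
\\begin{equation}
16\\pi^2\\,\\frac{d\\kappa}{dt} =
  \\left(Y^{\\dagger}_e Y_e\\right)^T\\kappa
  + \\kappa\\left(Y^{\\dagger}_e Y_e\\right)
  + \\left(Y^{\\dagger}_{\\nu}Y_{\\nu}\\right)^T\\kappa
  + \\kappa\\left(Y^{\\dagger}_{\\nu}Y_{\\nu}\\right)
  + \\bar{\\alpha}\\,\\kappa \\; .
\\end{equation}
```

With $Y^{\\dagger}_{\\nu}Y_{\\nu}=y^2{\\bf 1}$, the two $Y_{\\nu}$ terms
collapse to $2y^2\\kappa$, a flavor-blind rescaling that only shifts the mass
eigenvalues, so the non-universal running of the mixing angles is driven
entirely by the $Y^{\\dagger}_e Y_e$ terms.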
We note that the above results on the running behavior of the neutrino masses
and mixing parameters are very general; they are almost independent of the
initial conditions.

\\subsection{RG effects in the $S_4$ model of Ding}

It is remarkable that the heavy right-handed neutrinos are degenerate at LO,
and the corrections to the degeneracy arising from RG running turn out to be
so small that they can be neglected; consequently, the threshold effects
should be very small in this case. In particular, we note that
$Y^{\\dagger}_{\\nu}Y_{\\nu}$ is no longer proportional to the unit matrix, so
large RG effects seem possible. As has been demonstrated in
Eq.(\\ref{add_EPJC_3}), the right-handed neutrino mass $M$ is estimated to be
of order $10^{12}\\sim10^{13}{\\rm GeV}$. Without loss of generality, we
choose $M=10^{12}$ GeV in the following numerical analysis; we have checked
that the final results change very slowly with the parameter $M$. The neutrino
mixing angles at the electroweak scale as functions of the lightest neutrino
mass are shown in Fig.~\\ref{fig:RGE_Ding_mass1}. It is obvious that the
lightest neutrino mass for the NH spectrum is bounded from below, while the
lower bound on the lightest neutrino mass in the case of the IH spectrum is
still approximately zero. We see that the RG effects on both the atmospheric
and reactor angles are rather small, whereas the running of $\\theta_{12}$ can
be large, depending on $\\tan\\beta$ and the mass degeneracy. Matching
$\\theta_{12}$ with the data already puts strong constraints on the lightest
neutrino mass and $\\tan\\beta$ at the present stage, and an upper bound on
the lightest neutrino mass is usually implied for small values of
$\\tan\\beta$, which means that the neutrino mass spectrum cannot be highly
degenerate. In the case of $\\tan\\beta=20$, the IH spectrum is ruled out,
since the value of $\\sin^2\\theta_{12}$ is much larger than its $3\\sigma$
upper bound.
For the NH spectrum, the model is within the $3\\sigma$ limit only for small
neutrino masses, as displayed in Fig.~\\ref{fig:RGE_Ding_mass1}. The
predictions for the CP phases and the Jarlskog invariant are plotted in
Fig.~\\ref{fig:RGE_Ding_mass2}. In a similar way as in section
\\ref{sec:RG_BMM}, we learn that the evolution of the mixing angles and
$J_{CP}$ does not depend on the sign of $\\sin\\Delta$, and the sum of each CP
phase for $\\sin\\Delta>0$ and $\\sin\\Delta<0$ is equal to $2\\pi$. These
points are confirmed by our detailed numerical analysis.

The running of the neutrino masses and mixing parameters with the energy
scale is plotted in Fig.~\\ref{fig:RGE_Ding_scale}. Similarly to the situation
in the BMM model, the RG corrections to the CP phases $\\delta$, $\\varphi_1$
and $\\varphi_2$ are typically small; the corresponding curves are almost
straight lines. We see that the neutrino mixing angles are rather stable under
RG evolution, except for the solar angle in the IH spectrum. The running of
the neutrino masses can be approximately described by a common scaling factor,
and the masses are reduced by a factor of about 2 at the electroweak scale. In
summary, the evolution of the neutrino parameters in Ding's $S_4$ model is
very similar to that of the BMM model, although the textures of the mass
matrices are totally different.

\\section{\\label{sec:conclusion}Conclusion}
Flavor models based on discrete flavor symmetries are particularly
interesting: they can produce the tri-bimaximal neutrino mixing (or some other
mass-independent mixing patterns) at LO in an elegant way. It is a common
feature that the LO predictions are corrected by subleading
higher-dimensional operators, and it has been shown that the subleading
corrections are under control in some consistent flavor models.
Since the tri-bimaximal mixing is predicted at a high energy scale, it is
necessary to investigate whether the RG effects push the mixing parameters
beyond the ranges currently allowed by the experimental data.

In this paper, we have analyzed the RG running of the neutrino masses and
mixing parameters in the BMM model and the $S_4$ model of Ding; both models
predict tri-bimaximal neutrino mixing at LO, but the textures of the mass
matrices are totally different. To study the running effects, we use the
Mathematica package REAP. By detailed numerical analysis, we find that the
evolution of the neutrino mixing parameters displays approximately the same
pattern in both $S_4$ models. The atmospheric and reactor neutrino mixing
angles are essentially stable under RG evolution for both the NH and IH
spectra. However, the running of the solar neutrino mixing angle depends on
the neutrino mass and the parameter $\\tan\\beta$, and the deviation from its
TB value can be large. After we take into account the RG effects, the neutrino
mass spectrum is strongly constrained by the current data on $\\theta_{12}$:
the lightest neutrino mass is bounded from both below and above, and the upper
bound decreases with $\\tan\\beta$. For large $\\tan\\beta$ ($\\tan\\beta>10$),
the value of $\\sin^2\\theta_{12}$ can be larger than its $3\\sigma$ upper
bound over the whole spectrum in the IH case. As a result, the IH neutrino
mass spectrum is disfavored for large $\\tan\\beta$. Moreover, we note that
the running of the light-neutrino masses can be approximately described by a
common scaling factor, and the masses are reduced by a factor of about
$1.2\\sim3$ at low energy. This effect is neglected in
Ref.~\\cite{Lin:2009sq}. We note that the evolution of the mixing angles and
$J_{CP}$ does not depend on the sign of $\\sin\\Omega$ or $\\sin\\Delta$, and
the sum of each CP phase for the two signs is equal to $2\\pi$.
These results are confirmed both analytically and numerically.

Finally, we note that the running of the neutrino parameters in the
Altarelli-Feruglio $A_4$ model, the BMM model and Ding's $S_4$ model is
similar, although these models produce tri-bimaximal mixing in different
ways. The reason is that the neutrino Yukawa coupling only contributes to the
running of the neutrino masses and does not affect the lepton mixing angles;
the change in the mixing angles is due to the contribution from the charged
lepton sector. We conclude that the running of the mixing parameters is also
severely constrained by the flavor symmetry in discrete flavor symmetry
models.


\\section*{Acknowledgements}
We are grateful to Prof. Zhi-Zhong Xing for stimulating discussions on RGE
running. The author Gui-Jun Ding gratefully acknowledges the pleasant
hospitality of the theory group at the University of Wisconsin. This work is
supported by the National Natural Science Foundation of China under Grant
No.10905053, Chinese Academy KJCX2-YW-N29 and the 973 project with Grant
No.2009CB825200. Dong-Mei Pan is supported in part by the National Natural
Science Foundation of China under Grant No.10775124.


\\providecommand{\\href}[2]{#2}\\begingroup\\raggedright

","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}

Early on in the history of programming, a metaphor was put forward that has
seen wide acceptance in the software community: that of programming as LEGO
(Figure~\\ref{fig:CACM}). The metaphor suggests that building large systems is
a matter of connecting small standardized bricks together, one at a time,
through their universal interfaces: the small bricks are independent of the
scale and purpose of the construction. This metaphor had a tremendous
influence in the development of OOP languages.
Inspired by the simplicity of the LEGO construction model, these languages
placed their focus on mechanisms that would allow developers to connect small
computational units together to create large software systems.


\\begin{figure}
\\centering
\\includegraphics[width=1.3in]{figures\/CACMSeptember1990.jpg}
\\includegraphics[width=1.8in]{figures\/512px-Lego_dimensions.png}
\\caption{Left: Cover of the CACM, Special issue on Object-Oriented
  Programming, September 1990~\\cite{CACM:1990}. Right: LEGO bricks
  showing standard dimensions. (Source: Wikipedia, ``Cmglee''.)}
\\label{fig:CACM}
\\end{figure}

Meanwhile, in 1975 another idea was put forward that has also seen wide
acceptance in the software community: that ``programming in the large'' has
different characteristics from ``programming in the small.'' This idea was
first formulated by DeRemer and Kron~\\cite{DeRemer:1975}, who argued that
``structuring a large collection of modules to form a ``system'' is an
essentially distinct and different intellectual activity from that of
constructing the individual modules.'' DeRemer and Kron went on to advocate a
``Module Interconnection Language'' (MIL) for large systems.

These two popular ideas aren't mutually exclusive: it is possible to imagine
system-wide directives and constraints (i.e. architecture) for large LEGO
constructions. But DeRemer and Kron's essay states a premise that puts some
pressure on the LEGO metaphor: ``Where an MIL is not available, module
interconnectivity information is usually buried partly in the modules, partly
in an often amorphous collection of linkage-editor instructions, and partly in
the informal documentation of the project.'' In LEGO terms, this might mean
that, in order to build a large castle, one needs to plumb stronger
connection material into the bricks themselves.
In short, the scale of the system would affect the internal structure of the
construction.

This paper focuses on the core of these two popular ideas by asking and
answering the following question:

\\noindent
{\\bf Does the scale of the software system affect the internal structure of
its modules or are modules scale-invariant?}

We want to find out whether there are mathematical principles related to size
in large ecosystems of software projects. Besides shedding light on the
differences between programming-in-the-small and programming-in-the-large,
this question has important implications for research. A common practice for
validating ideas in software research is to collect a number of artifacts,
either randomly or using some criteria, measure the effects of the ideas using
those artifacts, and reach conclusions from the empirical data. Even though
the size of software artifacts (projects, classes, etc.) has long been known
to be an issue in quantitative studies of software, software research
continues to be fairly oblivious to its effect in these assorted datasets.
This is particularly problematic for any studies involving software metrics,
including OO metrics. It also affects performance studies, which tend to
collect data on relatively small programs that aren't necessarily
representative of large programs. Several studies published in the literature
may have reached invalid conclusions by ignoring the effect of size or by
treating it inappropriately.

The question, as formulated above, is too ambitious to be answered in one
single step. This paper takes only the first step. We focus on
Object-Oriented software systems, since those are the most influenced by the
programming-as-LEGO metaphor; other language families should be studied for
broader conclusions. Within OOP, we focus on Java, since it is one of the most
popular OOP languages; other OOP ecosystems should be studied for broader
conclusions.
Finally, we report on a dozen metrics that illustrate the main trends, but
many more metrics could be studied.

We deconstruct the general question into five research questions for which
specific metrics can be measured:

\\begin{itemize}
\\item [RQ1] Module size: Are modules of larger systems larger than modules
  of smaller systems?
\\item [RQ2] Module Type: Is there a statistically significant variation
  in the mix of classes and interfaces for projects of different size scales?
\\item [RQ3] Internal Complexity: Are modules of larger systems more, or
  less, complex than modules of smaller systems?
\\item [RQ4] Composition via Inheritance: Does the scale of the project
  affect the use of inheritance?
\\item [RQ5] Dependencies: Do larger projects use disproportionately
  more, or fewer, types from external libraries than smaller projects?
\\end{itemize}


This study puts forward strong evidence that, as programs become larger, the
internal structure of the modules and the mixture of composition mechanisms
used are affected. As such, the paper makes the following contributions:

\\begin{enumerate}
\\item It unveils strong empirical evidence of the existence of super- and
  sublinear effects in software that have not been measured before, and it
  shows concrete parameters of many non-linear relations that underlie a
  large and important ecosystem of Java programs.
\\item It proposes more accurate definitions of popular OO metrics that
  properly normalize for size.
\\item By unveiling the characteristics of large projects, it may suggest new
  ideas for how to tame detrimental non-linear effects, both in terms of
  programming language design and project management.
\\end{enumerate}


\\section{Motivation and Related Work}
\\label{sec:relwork}

It has been almost 25 years since Chidamber and Kemerer published their
influential paper on OO metrics at OOPSLA'91~\\cite{Chidamber:1991}.
Since then, OO metrics have been used\npervasively in research and development. Here, we review and\ndiscuss the main issues with OO metrics, and the research\ncommunity's attempts to understand the empirically-based principles of\nsoftware.\n\n\n\\subsection{The Confusing Effect of Size}\n\\label{sec:confusingeffect}\n\nA large body of literature exists in analyzing how software metrics\ncorrelate with software quality. A typical study along those lines\ninvolves computing internal software metrics (e.g. coupling of\nclasses) and correlating them with external quality attributes\n(e.g. post-release bug fixes involving those classes). Many studies of\nthis kind apply simple univariate statistical analysis, and often\nconclude that there is a correlation.\n\nFor quite some time, however, size has been known to be a potential\nconfounding factor in empirical studies of software artifacts. For\nexample, in a study designed to verify whether it is possible to use a\nmultivariate logistic regression model based on OO metrics to predict\nfaults in OO programs, Briand et al.~\\cite{Briand2000245} reported\nstrong correlations between class size and several OO software\nmetrics. They then went on to compensate for that correlation by doing\npartial correlations. In another study of a large C++\nsystem~\\cite{Cartwright2000}, Cartwright et al. also reported such\ncorrelations. In 2001, El Emam et al.~\\cite{ElEmam2001} presented a\ncomprehensive analysis of the effect of class size in several OO\nmetrics, and suggested that this effect might have confounded prior\nstudies.\\footnote{We refer readers to ~\\cite{ElEmam2001} for an\n extensive list of studies that the authors suggest may have reached\n invalid conclusions by neglecting to compensate for size.} They\nthen presented their own study of a large C++ framework which showed\nthat strong correlations resulting from univariate analysis of data\nwere neutralized when multivariate analysis including class size is\nused. 
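The neutralization reported by El Emam et al. is easy to reproduce on
synthetic data. The sketch below is pure Python; all numbers are invented for
illustration and come from none of the cited studies. It generates
hypothetical class sizes, lets both coupling and defect counts grow with size,
and compares the plain Pearson correlation between coupling and defects with
the partial correlation that controls for size:

```python
import math
import random

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def partial(xs, ys, zs):
    # r(X,Y|Z) computed from the three pairwise correlations.
    rxy, rxz, ryz = pearson(xs, ys), pearson(xs, zs), pearson(ys, zs)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

random.seed(1)
# Hypothetical classes: size (LOC) is log-normally distributed, and both
# coupling and defect counts grow with size plus independent noise.
size = [math.exp(random.gauss(4, 1)) for _ in range(2000)]
coupling = [0.05 * s + random.gauss(0, 3) for s in size]
defects = [0.02 * s + random.gauss(0, 2) for s in size]

r_univ = pearson(coupling, defects)        # univariate: a strong "effect"
r_part = partial(coupling, defects, size)  # controlling for size: near zero

print(f"r(coupling, defects)      = {r_univ:.2f}")
print(f"r(coupling, defects|size) = {r_part:.2f}")
```

The univariate correlation suggests a strong coupling-defect relationship,
while the partial correlation is close to zero: size accounts for almost all
of the association. Whether partial correlation is even the right tool here
is precisely Evanco's objection.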
Another more recent study reached the same conclusions when studying the
relation between internal software attributes and component
utilization~\\cite{Sajnani2014}.

However, Briand et al. and El Emam et al.'s argument has drawn some criticism
stemming from the point of view that multivariate analysis of the kind
proposed in their papers produces ill-specified, logically inconsistent
statistical models~\\cite{Evanco2003}. Specifically, the partial correlation
of $X$ and $Y$ controlling for a third variable $Z$, written $r(X,Y|Z)$, is a
measure of the relationship between $X$ and $Y$ if statistically {\\em we hold
$Z$ constant}. But trying to predict, for example, the effect on post-release
defects $X$ of increasing the coupling value $Y$ while holding the number of
lines of code $Z$ constant doesn't make sense, because in the world from where
the data comes, increasing coupling usually requires additional lines of code
(e.g. field and variable declarations). As Evanco points
out~\\cite{Evanco2003}, this model is inconsistent with the reality of the
data. The suggestion following the criticism is that prediction models should
use either the metric in question ($Y$) or the size metric ($Z$), whichever
gives more predictive power, but not both.

Either way, these observations raise doubts about the value of the many
software metrics that are correlated with size, as they do not provide
additional statistical power beyond what is already provided by their strong
correlate -- and size is very easy to measure.
In summary, size may not be a confounding factor in statistical terminology,
but it certainly has been the source of much confusion in software research.

\\subsection{Non-Normal Data}
\\label{sec:nonnormal}

In their study of slice-based cohesion and coupling metrics over 63 C
programs, Meyers and Binkley~\\cite{Meyers:2007} include correlation
coefficients between several coupling and cohesion metrics and Lines of Code
(LOC).
They show that they are not correlated. We noted that their correlation
analysis was made for the entire dataset, which contained components of
considerably different sizes; this made the analysis prone to skewness-related
errors. In subsequent email exchanges with one of the authors, he kindly
shared the data with us; we then verified that, indeed, the distribution of
the size of the components was not normal but log-normal. Once the
transformation to log scale was performed, the data showed a
moderate-to-strong positive linear correlation between log(size) and their
coupling metric.

This exchange illustrates another source of problems when doing empirical
studies of software artifacts, and how size can drastically affect the
conclusions. Size is not just a confusing factor; because the projects' size
distribution is often skewed, the statistical analysis needs to take
non-normal data into account too.

\\subsection{Software Corpora}
\\label{sec:corpora}

In recent years, there has been an increasing number of empirical studies on
increasingly larger collections of software projects for purposes of
understanding the way that developers use programming languages in real
projects. For example, Tempero et al.~\\cite{Tempero:2008} studied the way
Java programs use inheritance in the 100 projects of the Qualitas
corpus~\\cite{QualitasCorpus:APSEC:2010}. The criteria for inclusion of
projects in that corpus are relatively strict, requiring, for example,
distribution in both source and binary forms.\\footnote{See
  https:\/\/www.cs.auckland.ac.nz\/$\\sim$ewan\/corpus\/docs\/criteria.html}
While their findings fall within the results reported here, the Qualitas
corpus contains only 100 projects. The results reported in
\\cite{Tempero:2008} show that the data does not follow a normal
distribution.
Another study on the same corpus explored the simulated use of multiple
dispatch via cascading \\texttt{instanceof}
statements~\\cite{Muschevici:2008}. Another study, by Gil and
Lenz~\\cite{Gil:2010}, examined the use of overloading in Java programs, also
using the Qualitas corpus. Some of the conclusions in these studies
(e.g. whether a project is an outlier or not) may be missing the effect of
project size.

Calla\\'u et al.~\\cite{Callau:2011} made a statistical analysis of 1,000
Smalltalk projects found in SqueakSource in order to understand the use of
certain dynamic features of Smalltalk. They do not report the distribution in
terms of project size. The study was designed to gather bulk statistics along
an existing taxonomy, so the results are reported as simple counts of feature
occurrences among the whole corpus or among a category of projects (e.g. out
of 652,990 methods, only 8,349 use dynamic features, and then a breakdown is
shown among categories). While the taxonomy is taken into account in the
analysis of the data, project size is not. It would be interesting to see
whether there is a correlation between the categories and the size of the
projects.

Collberg et al.~\\cite{Collberg:2007} randomly collected 1,132 jar files off
the Internet and analyzed them (at bytecode level) using a tool developed by
the authors. The purpose of that study was to inform Java language designers
and implementers about how developers actually use the language. That study
reports summary statistics for their entire dataset without taking the
distribution of jar size into account. Most distributions shown in the paper
aren't normal, so the summary statistics are somewhat misleading.
Some of the reported metrics in that study are the same metrics that we use
for our study; for example, they found on average 9 methods per class, with
median 5. The reported values fall within the range of ours, but particularly
close to the values for large projects, which leads us to believe that their
dataset was biased towards large projects.

In another large study, Grechanik et al.~\\cite{Grechanik:2010} have conducted
an empirical assessment of 2,080 Java projects randomly selected from
Sourceforge, and discovered several facts about the projects' use of Java. The
size of the projects is not reported, and only simple statistics are given.
For example, the reported mean and median methods per class are 3.5 and 4,
respectively. Given that the data does not follow a normal distribution on
project size, these values are, again, somewhat misleading and at odds with
the findings of Collberg et al.~\\cite{Collberg:2007}. Like so many large open
source code repositories, Sourceforge is severely skewed towards small to
medium projects; the reported summary statistics are consistent with our
findings for small projects.


In any large corpus of projects, the data rarely follows normal distributions
of size, so simple summary statistics such as averages and medians reported in
some of these papers provide only weak insights into the principles of those
ecosystems, and may hide important phenomena. Also, sample biases may have a
large influence on assumptions and conclusions.
But what exactly is the effect of size on software artifacts? Can we find
general statistical principles that explain the phenomena observed in prior
studies?
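The distributional pitfall running through these corpus studies can be
demonstrated in a few lines of pure Python (the distribution parameters are
synthetic, chosen only for illustration): when project sizes are log-normally
distributed, the mean is dragged far above the median by the heavy right tail,
so a single ``average project'' figure misrepresents the typical project,
whereas the log-transformed data is symmetric and well-behaved:

```python
import math
import random
import statistics

random.seed(7)
# Hypothetical ecosystem: project sizes (SLOC) are log-normal,
# i.e. log(size) is normally distributed -- heavily right-skewed.
sizes = [math.exp(random.gauss(7.5, 1.6)) for _ in range(10000)]

mean_size = statistics.mean(sizes)
median_size = statistics.median(sizes)
print(f"mean = {mean_size:.0f} SLOC, median = {median_size:.0f} SLOC")

# The long tail pulls the mean far above the median, so neither one
# alone characterizes the "typical" project. In log space the same
# data is roughly symmetric, and mean and median nearly coincide:
logs = [math.log(s) for s in sizes]
print(f"log-space mean = {statistics.mean(logs):.2f}, "
      f"log-space median = {statistics.median(logs):.2f}")
```

Any analysis that averages raw metrics over such a sample, or that draws a
biased sample from it, will report numbers dominated by whichever size range
happens to be over-represented.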
\n\n\\subsection{Complex Systems}\n\nOurs is not the first study to try to unveil internal mathematical\nstructures of software, and the software research community is not the\nonly one looking for mathematical principles in existing software;\ncommunities that study complex systems and networks have long found\nsoftware intriguing. One of the first studies of this kind was by\nValverde et al.~\\cite{Valverde:2002}, which analyzed the types and\ndependencies in the JDK, and noticed the existence of power laws and\nsmall world behavior. Soon after, Myers~\\cite{Myers:2003} explored\nwhat he called ``collaboration graphs'' ({\\em aka} dependencies) in\nthree C++ and three C applications. Many more studies of this kind\nfollowed. For example, \\cite{Valverde:2005}, \\cite{Zheng:2008},\n\\cite{Fortuna:2011} and \\cite{Gherardi:2013} all study the evolution\nof software networks finding evidence of known mathematical principles\nthat also exist in natural systems, and that might serve as predictive\nmodels for software evolution.\n\nCloser to our work, a study presented in 2006 by Baxter et\nal.~\\cite{Baxter:2006} also targeted the ``Lego Hypothesis,'' as\ncoined by the authors. That study, which built on an earlier one by\nthe same group~\\cite{Potanin:2005}, searched for the existence of\npower laws and other mathematical functions in a collection of 56 Java\napplications using 17 OO metrics, such as number of methods per type\nand the number of dependencies per type. For each of those 56\napplications, the study revealed whether the 17 metrics' data points\ncould fit the mathematical functions of interest. The study found that\nvery few projects, and in only very few metrics, had strict power law\ndistributions; most projects, and in most metrics, revealed reasonable\nfits at 80\\% confidence interval with several of the functions that\nthey were searching for. 
Another study, by Louridas et al.~\\cite{Louridas:2008}, examined the
existence of power laws in a variety of applications written in a variety of
languages.

All of these studies largely ignore {\\em application} size, and focus on the
modules themselves (i.e. classes, interfaces). In the study by Baxter et
al.~\\cite{Baxter:2006}, the results are ordered by application size, and even
grouped within size ranges; but no insights are given regarding the effect, if
any, that application size may have on the observations. We believe our study
is complementary to all of these prior studies in the search for mathematical
laws in software applications, because it focuses on the size of the
application as a whole, not just on the size of each OO module.


\\section{Dataset}
\\label{sec:dataset}

In this study, we use the Sourcerer 2011
dataset~\\cite{Lopes+Ossher:2012}, which contains over 150,000 projects
collected from Google Code, SourceForge and Apache as of 2011. The projects
have been processed into a relational database of entities and relations,
using the Sourcerer Tools publicly available from
Github~\\cite{Sourcerer:2015}. The database facilitates static analysis for
very large collections of source code, as it contains preprocessed static
analysis information that can be queried on demand. By issuing specific
queries on the database, we extracted the necessary numbers into a Comma
Separated Value (CSV) file, which was then used to perform the statistical
analysis described in this paper.

The database produced by the Sourcerer tools was, therefore, the basis of our
study. We present a small example that illustrates the kinds of entities and
relations that are found in the database.
Consider the\nfollowing Java program:\n\n{\\footnotesize\n\\begin{verbatim}\npackage foo;\n\npublic class FooNumber {\n private int x;\n FooNumber(int _x) { x = _x; }\n private void print() {\n System.out.println(\"It is number \" + x);\n }\n public static void main(String[] args) {\n new FooNumber(Integer.parseInt(args[0])).print();\n }\n}\n\\end{verbatim}\n}\n\n\n\\begin{table}\n\\centering\n\\caption{Entities}\n\\label{tab:entities}\n{\\small\n\\begin{tabular}{|l|l|l|} \\hline\nEntity ID & FQN & Type\\\\ \\hline\n1 & foo & PACKAGE \\\\\n2 & foo.FooNumber & CLASS \\\\\n3 & foo.FooNumber.x & FIELD \\\\\n4 & foo.FooNumber.$<$init$>$ & CONSTRUCTOR \\\\\n5 & foo.FooNumber.print & METHOD \\\\\n6 & foo.FooNumber.main & METHOD \\\\ \n... & ... & ... \\\\\\hline\n\\end{tabular}\\\\\n\n\\caption{Relations}\n\\label{tab:relations}\n\\begin{tabular}{|l|c|l|} \\hline\nSource & Relation type & Target \\\\ \\hline\n1 & CONTAINS & 2 \\\\\n2 & CONTAINS & 3 \\\\\n2 & CONTAINS & 4 \\\\\n2 & CONTAINS & 5 \\\\\n2 & CONTAINS & 6 \\\\\n3 & HOLDS & {\\em Integer\\_ID} \\\\\n4 & WRITES & 3 \\\\\n5 & READS & 3 \\\\\n5 & CALLS & {\\em println\\_ID} \\\\\n6 & INSTANTIATES & 4 \\\\\n6 & CALLS & 5 \\\\ \n... & ... & ... \\\\\\hline\n\\end{tabular}\n}\n\\end{table}\n\n\nThis program results in the entities and relations shown in\nTables~\\ref{tab:entities} and \\ref{tab:relations} (not all\nentities and relations are shown, for brevity's sake).\nGiven this database schema, with these entities and relations tables,\nwe issued several queries in order to extract all the numbers we\nneeded. 
Here is one example query that extracts the number of methods\ndeclared in classes in each of the projects:\n\n{\\footnotesize\n\\begin{verbatim}\n-- Extract number of class methods per project\nSELECT p.project_id,IFNULL(COUNT(DISTINCT m.entity_id),0) \n FROM e_methods AS m\n INNER JOIN r_contains AS r ON m.entity_id = r.rhs_eid\n INNER JOIN e_classes AS c ON c.entity_id = r.lhs_eid\n RIGHT JOIN projects AS p ON p.project_id=m.project_id\n GROUP BY p.project_id\n\\end{verbatim}\n}\n\nAlthough the complete dataset contains projects from Google Code,\nSourceforge and Apache, for this study, we restricted the analysis to\nthe projects from Google Code only. The main properties of the Google\nCode dataset are presented in\nTable~\\ref{tab:dataset}. Figure~\\ref{fig:projects-size} shows the size\nof the projects, from smallest to largest, as well as the histogram of\nproject sizes in the dataset.\n\n\\begin{table}\n\\centering\n\\caption{Main metrics of the Google Code dataset. }\n\\begin{tabular}{|l|r|}\\hline\n & Google Code \\\\ \\hline\nProjects & 30,914 \\\\ \\hline\nClasses & 3,060,853 \\\\ \\hline\nInterfaces & 274,745 \\\\ \\hline\nMethods & 19,358,490 \\\\ \\hline\nSLOC & 221,194,474 \\\\ \\hline\nMedian SLOC & 1,570 \\\\ \\hline\n\\end{tabular}\n\\label{tab:dataset}\n\\end{table}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.5in]{figures\/RProjectSize2.png}\n\\includegraphics[width=1.5in]{figures\/RHistogramSize2.png}\n\\caption{Left: Size of the projects in the Google Code dataset, in Source Lines\n of Code (SLOC) when projects are ordered by increasing size. Right:\n Histogram of the size of the projects, in log scale.}\n\\label{fig:projects-size}\n\\end{figure}\n\nThis study's granularity is a ``project.'' For the purposes of this\nstudy, a project is the collection of Java source code files that were\nfound in each Google Code Project Hosting's project pages. 
For\nexample, the project named \\texttt{1cproject} was hosted at\nhttps:\/\/code.google.com\/p\/1cproject\/, and its source code was\navailable at\n\\\\ https:\/\/code.google.com\/p\/1cproject\/source\/browse\/\\\\ \nThe ``project,'' in this case, consists of all Java source files found\nunder source control in {\\em trunk}. When the project included jar\nfiles, those were considered potential dependencies, not part of the\nproject itself.\\footnote{This paragraph is written in the past tense,\n because Google Code is slated to become unavailable soon.}\n\n\\vspace*{0.3cm}\n\\noindent\n{\\bf Availability of Data and Tools}\n\n\\noindent\nThe Sourcerer infrastructure and tools are available from\nGithub~\\cite{Sourcerer:2015}, and have been described before in our\nprior papers~\\cite{Ossher:2009,Bajracharya:2014}. Besides those two\nprior publications, a publicly available tutorial explains the\nprocessing pipeline of the Sourcerer tools with concrete\nexamples~\\cite{Lopes+Ossher:2012}. Additionally, the artifact\nassociated with this paper contains all the Sourcerer tools and a\nsmall sample repository of projects, meant to illustrate the\nprocessing pipeline by which raw source code is converted into a\nrelational database for static analysis, such as that in this\npaper. Note that only a small repository is included, because the full\nrepository is 433~GB; its processing into a relational database took\napproximately 3 weeks of computing on a 24-core server with 128~GB of\nRAM.\n\nResearchers wanting to reproduce this study, or wanting to study other\nfacets of this data, can start by downloading the artifact associated\nwith this paper, and running the Sourcerer tools installed in it on\nthe included sample repository; then, they can download the full\nrepository from our Web site~\\cite{Lopes+Ossher:2012} and run the\nSourcerer tools on it.\n\nHaving done all this processing ourselves, we are making the processed\ndatasets available to other researchers. 
The several representations\nof the Sourcerer 2011 dataset, including the full repository and the\ndatabase, are publicly available for download\nfrom our Web site~\\cite{Lopes+Ossher:2012}. Note that this dataset is immutable; it\nwas collected once in 2011, and we do not plan to collect later\nversions of the projects. The CSV file upon which the statistical analysis\nof this study was done is included in the artifact.\n\n\n\\section{Statistical Analysis Methods}\n\nThis section explains the main statistical methods that were used in\nthis study.\n\n\\subsection{Linear vs. Log Scales}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.5in]{figures\/RHistGenericSkewed.png}\n\\includegraphics[width=1.5in]{figures\/RHistGenericLogNormal.png}\n\\caption{Histograms of log-normal data when plotted in {\\em linear}\n scale (left) and {\\em log} scale (right).}\n\\label{fig:lognormal}\n\\end{figure}\n\nAs mentioned in Section \\ref{sec:relwork}, when dealing with large\necosystems of software artifacts, the data is expected to be highly\nskewed in almost every dimension. That is also the case in the Google\nCode data. Figure~\\ref{fig:lognormal} shows a generic illustration of\nskewness in the data: the left histogram shows that the vast majority\nof data points have small values of $X$, where $X$ is some measured\nfeature of the dataset; after transforming the data to log scale,\nhowever, the histogram becomes almost perfectly normal (right\nhistogram), the signature of a log-normal distribution. When this\nholds, normal statistics are ill-suited in linear space, but we can\nproceed to apply normal statistics in log space. This is a critical\nstep in analyzing these ecosystems.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.5in]{figures\/RScatterGenericLinear.png}\n\\includegraphics[width=1.5in]{figures\/RScatterGenericLog.png}\n\\caption{Scatterplots of Y against X when both X and Y are\n log-normal data. 
On the left: plot in {\\em linear} scale of\n X and Y; on the right: plot in {\\em log} scale of X and Y.}\n\\label{fig:lm}\n\\end{figure}\n\n\\subsection{Linear Regression Models}\n\nThe main statistical tool we use in this study is the linear\nregression model. Linear regression finds the best linear\nmodel (i.e. a line) that fits the data. Figure~\\ref{fig:lm}\nillustrates our use of this statistical tool. On the left, we see a\nscatterplot of some feature $X$ against some other feature $Y$ plotted\nin linear scale of both X and Y. The plot also shows the best fit line\nresulting from linear regression of the data. On the right, we see the\nscatterplot of the same features $X$ and $Y$ but plotted in log scale,\nalong with the best fit line. In both cases, the fitted line is\n$y = \\alpha + \\beta x$ in the plotted coordinates. Because the plot on\nthe right is in log scale, however, its straight line represents\n$log(y) = \\alpha + \\beta log(x)$. Transforming this back to linear\nspace gives the following non-linear (power-law) relation between $X$\nand $Y$:\n\n\\begin{equation} \n\\label{eq1}\ny=e^{\\alpha} x^{\\beta}\n\\end{equation}\n\nWhen the relation between two features is {\\em non-linear}\nand, specifically, a power law, some observations are at hand:\n\\begin{itemize}\n\\item When $\\beta=1$, the relation between $X$ and $Y$ degenerates to\n linear.\n\\item Any value of $\\beta \\neq 1$ indicates a genuinely non-linear,\n power-law relation between the two features. Small variations in\n $\\beta$ produce large variations of $Y$ against $X$ in linear space.\n\\item $\\beta > 1$ indicates a superlinear relation, i.e. $Y$ grows\n disproportionately faster than $X$.\n\\item $\\beta < 1$ indicates a sublinear relation, i.e. $Y$ grows\n disproportionately slower than $X$.\n\\end{itemize}\n\n\\subsection{Goodness of Fit}\n\nOne critical part of linear regression is the goodness of fit, that\nis, how well the line fits the data. 
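The regressions in this paper were run in R; the following minimal Python sketch (synthetic data, hypothetical coefficients) illustrates the same procedure: fit $log(y) = \alpha + \beta\, log(x)$ by least squares, recover the power-law model of equation~\ref{eq1}, and compute the $R^2$ of the fit in log space.

```python
import math
import random

def fit_log_log(xs, ys):
    """Least-squares fit of log(y) = alpha + beta*log(x).

    In linear space this is the power-law model y = e^alpha * x^beta.
    Returns (alpha, beta, r2), with r2 = 1 - SSE/SST in log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxx = sum((a - mx) ** 2 for a in lx)
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    beta = sxy / sxx
    alpha = my - beta * mx
    sse = sum((b - (alpha + beta * a)) ** 2 for a, b in zip(lx, ly))
    sst = sum((b - my) ** 2 for b in ly)
    return alpha, beta, 1 - sse / sst

# Synthetic ecosystem: y = e^2 * x^1.1 with multiplicative (log-normal) noise
rng = random.Random(0)
xs = [math.exp(rng.uniform(0, 8)) for _ in range(2000)]
ys = [math.exp(2.0) * x ** 1.1 * math.exp(rng.gauss(0, 0.3)) for x in xs]
alpha, beta, r2 = fit_log_log(xs, ys)
# alpha recovers ~2.0, beta ~1.1, and r2 is close to 1 for this sample
```

The coefficients 2.0 and 1.1 are illustrative only; the point is that a straight-line fit in log space recovers both parameters of the power law.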
$R^{2}$, pronounced R-squared, is\na statistic that measures how successful the fit is in explaining the\nvariation of the data.\\footnote{$R^{2} = 1 - \\dfrac{SSE}{SST}$, where\n $SSE$ is the residual sum of squares and $SST$ is the total sum of\n squares.} For example, $R^{2}=0.92$ means that the fit explains 92\\%\nof the total variation in the data. A value of 1 would be a perfect\nfit.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3.5in]{figures\/ResPlots.png}\n\\caption{Residuals plots. Top-left: Residuals vs Fitted. Top-right:\n Normal QQ. Bottom-left: Scale-location (aka spread). Bottom-right:\n Residuals vs Leverage. }\n\\label{fig:residuals}\n\\end{figure}\n\nHowever, due to how it is calculated, there are limits to what\n$R^{2}$ can explain. Depending on the characteristics of the data,\n$R^{2}$ can have low predictive value. In order to verify this, it is\nimportant to analyze the {\\em residuals} of the linear regression\nmodels. Figure~\\ref{fig:residuals} illustrates the kinds of residuals\nplots that we analyze to check whether the linear models are\nappropriate or not. The {\\em Residuals vs Fitted} plot (top-left) is\nthe most important one. A good fit should result in this plot showing\nrandomly distributed data around the horizontal line at the origin,\nmeaning that what's left from the fit is unbiased noise -- this\nparticular plot shows that. When this doesn't happen, the linear\nmodel may not be appropriate to explain the data, even if $R^{2}$ is\nhigh. The {\\em Normal QQ} plot (top-right) illustrates assumptions\nabout normality of the residuals in the model. When the dots all fall\non the straight diagonal, the residuals fit a normal\ndistribution exactly, which is the ideal case. This particular plot shows a\nsymmetrical light-tailed normal distribution of the residuals, which\nis acceptable. In general, some deviation from the norm is to be\nexpected, particularly near the ends. 
The {\\em Scale-Location} plot,\nalso known as {\\em spread}, illustrates the variance of the Y variable\nalong the X variable. A flat line means that the variance is constant\nalong X, which is the ideal case for linear regression. This particular\nplot shows that there is more variance for lower values of X, and then\nthe variance evens out. This kind of small deviation from the ideal is\nacceptable. Finally, {\\em Residuals vs Leverage} illustrates the {\\em\n leverage} (influence) that the data points had on the fitting\nprocess. This plot serves to identify potential outliers that may have\nhad undue influence on the model. We want the points to fall as close\nas possible to the horizontal line at the origin, and not to fall outside\nCook's distance. That is the case with this particular plot.\n\n\n\\subsection{Binned Analysis}\n\nWhen the residuals of the linear models show potential problems with\nthe model, that means that the simple linear regression models are\nmissing important characteristics of the data. In those cases, we try\nto perform binned analysis instead of analysis on the whole data. This\nanalysis is meaningful when the data in the bins shows normal\ndistributions. When that is the case, we compare the differences of\nmeans among the bins using Welch's two-sample t-test at a 95\\%\nconfidence level in order to extract more meaningful insights.\n\n\\section{Findings}\n\\label{sec:results}\n\nThis section presents the main findings of our study. It starts with\nobservations regarding the size of the modules, then their complexity,\nthe use of inheritance, and finally the kinds of dependencies the\nmodules have. 
It should be noted that all linear models and\ncorrelations presented here are statistically significant, with\np-values $<<$ 0.0001.\n\n\n\\subsection{Module Size}\n\\label{sec:modulesize}\n\nRQ1: {\\em Are modules of larger systems larger, or smaller, than modules\n of smaller systems, or are there no statistically significant\n differences?}\n\nIn Java, the modular, replaceable units are classes and interfaces, so\nwe use the word {\\em module} to mean either a class or an\ninterface. The bivariate analyses used to study the above question\nare: SLOC vs. Modules, Methods vs. Classes and Constructors\nvs. Classes. We know from several previous studies that these pairs of\nmetrics are strongly positively correlated. But what exactly is the nature\nof these relations?\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3in]{figures\/RSLOCVsModules.png} \\\\\n\\vspace*{0.5cm}\n\\includegraphics[width=3in]{figures\/RMethodsVsClasses.png} \\\\\n\\vspace*{0.5cm}\n\\includegraphics[width=3in]{figures\/RConstructorsVsClasses.png}\n\\caption{Top: SLOC vs. number of modules. Middle: Number of methods\n vs. number of classes. Bottom: Number of constructors vs. number\n of classes.}\n\\label{fig:modulesize}\n\\end{figure}\n\nFigure~\\ref{fig:modulesize} sheds some light on this question. On the\ntop, the size of each project in SLOC is plotted against its number of\nclasses and interfaces. As expected, there is a very strong linear\ncorrelation ($r=0.93$). Moreover, with an 87\\% linear fit ($R^{2}=0.87$\nin log space), it appears that as the number of modules grows, the\nlines of source code also grow, but at a superlinear pace. 
Specifically,\nusing equation~\\ref{eq1},\n\n\\begin{center}\n$SLOC=e^{3.5549} Modules^{1.0939}$\n\\end{center}\n\nFor example, a project with 10 modules is predicted to have close to\n434 SLOC; a project with 100 modules is predicted to have not just\n4,340 but close to 5,391 SLOC, so considerably more than 10 times\nwhat's expected of a project with 10 modules; a project with 1,000\nmodules is predicted to have close to 66,923 SLOC, again considerably\nmore than 10 times what's expected of a project with 100 modules;\netc. The growth of SLOC follows a power law in the number of modules:\neven though the exponent (1.0939) is close to 1, the small 0.0939\nexcess results in large differences in linear space.\n\nWhere do all these extra lines of code go? The middle plot in\nFigure~\\ref{fig:modulesize} explains it. The plot shows the number of\nmethods declared in classes vs. the number of classes. Again, as\nexpected, there is a strong positive linear correlation\n($r=0.94$). Moreover, with an 89\\% linear fit, it appears that as the\nnumber of classes grows, the number of methods grows\nsuperlinearly. Specifically, \n\n\\begin{center}\n$Methods=e^{1.0949} Classes^{1.1055}$\n\\end{center}\n\nFor example, a project with 10 classes is predicted to have close to\n38 methods; a project with 100 classes is predicted to have not just\n380 but close to 486 methods; a project with 1,000 classes is\npredicted to have close to 6,195 methods; etc. Again, the growth of\nthe number of methods is superlinear, not linear. \n\nA similar superlinear growth can be observed for constructors\nvs. classes (Figure~\\ref{fig:modulesize}, bottom). The relation in that\ncase is $Constructors = e^{0.0246} Classes^{1.0195}$. In this case,\n$\\beta$ is very close to 1, so this is almost a linear function.\n\nIn short: projects with {\\bf more modules} have disproportionately more\nlines of code than projects with fewer modules, which means that they\nhave {\\bf larger modules}. 
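The SLOC predictions quoted above can be reproduced by plugging the fitted coefficients back into equation~\ref{eq1}; a minimal sketch:

```python
import math

def predict_sloc(modules, alpha=3.5549, beta=1.0939):
    """SLOC predicted by the power-law model SLOC = e^alpha * Modules^beta."""
    return math.exp(alpha) * modules ** beta

# Superlinear effect: each 10x increase in modules yields more than
# 10x the predicted SLOC (roughly 12.4x, i.e. 10^1.0939)
p10, p100, p1000 = predict_sloc(10), predict_sloc(100), predict_sloc(1000)
```

Evaluating the model at 10, 100, and 1,000 modules gives approximately 434, 5,391, and 66,900 SLOC, matching the figures in the text up to rounding of the coefficients.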
Moreover, the extra lines of code seem to\nbe grouped in disproportionately {\\bf more methods} and, to a lesser\ndegree, constructors, {\\bf per class}.\n\nTable~\\ref{tab:modulesize} summarizes the statistical principles\ninferred from the data related to the effect of size. $\\alpha$ and\n$\\beta$ are the coefficients for equation~\\ref{eq1}; $r$ is the\nPearson correlation coefficient of the data in log scale; $R^{2}$ is\nthe fitness of the line in log space. The residuals of the linear\nmodels can be found in Figures \\ref{fig:res_SLOC_Modules},\n\\ref{fig:res_Methods_Classes} and\n\\ref{fig:res_Constructors_Classes}. All of them attest to the\nstrength of the models.\n\n\\begin{table}\n\\centering\n\\caption{Analysis of project size.}\n{\\small\n\\begin{tabular}{|l|r|r|r|r|l|}\\hline\nAnalysis & $\\alpha$ & $\\beta$& $r$ & $R^{2}$ & Space\\\\ \\hline\nSLOC vs. Modules & 3.5549 & 1.0939 & 0.93 & 0.87 & log-log \\\\ \\hline\nMeths. vs. Classes & 1.0949 & 1.1055 & 0.94 & 0.89 & log-log \\\\ \\hline\nConstrs. vs. Classes & 0.0246 & 1.0195 & 0.99 & 0.98 & log-log \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:modulesize}\n\\end{table}\n\n\nThese observations explain apparent inconsistencies in the literature\nover the past few years. As described in Section~\\ref{sec:relwork},\ndifferent studies of Java corpora have reported different average\nmethods per class. This could potentially be explained by our\nfindings: a corpus that is dominated by smaller projects will have\na lower average of methods per class than a corpus dominated by larger\nprojects.\n\n\n\\subsection{Module Types}\n\\label{sec:moduletype}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3in]{figures\/RInterfacesVsClassesLog.png}\n\\caption{Interfaces vs. 
classes in log scale.}\n\\label{fig:moduletype}\n\\end{figure}\n\nRQ2: {\\em Is there a statistically significant variation in the mix of\n classes and interfaces for projects of different scales?}\n\nThe answer to this question involves dealing with data that shows high\nvariance. We started by regressing the number of interfaces against\nthe number of classes in each project, similarly to what we did for\nthe previous question. Figure~\\ref{fig:moduletype} shows\nthe non-linear model:\n\n{\\em Interfaces} $= e^{-1.2064}Classes^{0.7035}$\n\n$R^2 =0.49$ is not a particularly good fit. Visual inspection of the plot\nand the fitted straight line exposes weaknesses of this simple linear\nmodel, particularly at the edges of the data: for projects with very\nfew and with very many classes, the model underestimates the number of\ninterfaces. Clearly, the relation between the number of classes and the\nnumber of interfaces in this ecosystem is not properly explained by\na simple power-law function.\n\nThe observations from visual inspection are confirmed in the plots of\nthe residuals in Figure~\\ref{fig:res_Interfaces_Classes} (Appendix\nB). The {\\em Residuals vs Fitted} plot, in particular, shows a bend in\nthe residual data, rather than a straight line. This is indicative\nthat the linear model is missing a non-linear component. The shape of\nthe bend suggests a parabola, so we add an additional transformation\nof the X variable, specifically $log(X)^2$, and perform a linear\nregression on that transformed space ($log(y) \\sim log(x)^2$). This\nadditional transformation introduces non-monotonicity (see Appendix \nFigure~\\ref{fig:log_squared}). Transforming this back to linear\nspace, we are establishing the relation:\n\n\\begin{equation} \n\\label{eq2}\ny=e^{\\alpha} x^{\\beta log(x)}\n\\end{equation}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3in]{figures\/RInterfacesVsClasses2.png}\n\\caption{Interfaces vs. 
classes in log$^2$ scale.}\n\\label{fig:moduletype2}\n\\end{figure}\n\nThe plot is shown in Figure~\\ref{fig:moduletype2} and the residuals\nplots are in Appendix B, Figure~\\ref{fig:res_Interfaces_Classes2}. As\ncan be seen, this is a better fit, with $R^2=0.52$. The bias in the\nfit that transpired with the bend in the residuals plot is now\npractically eliminated. Given the parameters, this model predicts \n\n{\\em Interfaces} $= e^{0.14} Classes^{0.083 log(Classes)}$\n\nNote that due to the non-monotonicity illustrated in\nFigure~\\ref{fig:log_squared}, this model establishes a variable ratio\nbetween classes and interfaces depending on project\nsize. Specifically, smaller projects have a much higher\n$Interfaces\/Classes$ ratio.\nFor example, for a project with 10 classes, the\nmodel predicts 1.79 interfaces ($\\sim$ 18\\% ratio); 50 classes $\\succ$\n4.1 interfaces ($\\sim$ 8.2\\%); 100 classes $\\succ$ 6.69 interfaces\n($\\sim$ 7\\%); 1000 classes $\\succ$ 60.4 interfaces ($\\sim$\n6\\%). According to the model, the ratio proceeds to increase again for\nvery large projects. For example, 10,000 classes $\\succ$ 1,314\ninterfaces ($\\sim$ 13\\%). Our dataset includes only 6 projects that\ncontain over 10,000 classes each, so the model may not be precise for\nthis end of the size spectrum.\n\nAnother way of analyzing this data is to perform a binned analysis. For\nthat, we divide the data into 5 bins on the number of classes: very\nlarge, large, medium, small and very small. We then compute the ratio\n$Interfaces\/Classes$ for all the projects, and compute the means of\nthe ratios in each bin. The results are shown in\nTable~\\ref{tab:bin_moduletype}.\n\n\\begin{table}\n\\centering\n\\caption{Bins for analysis of {\\em Interfaces\/Classes}. Mean\n and SD values are in log scale.} {\\small\n\\begin{tabular}{|l|l|r|r|r|}\\hline\n Bin & \\# Classes & Projects & Mean (linear\\%) & SD \\\\ \\hline\nV. 
Large & $>$ 5,000 & 17 & -2.47 (8.5) & 0.87 \\\\ \\hline\nLarge & 1,000 -- 5,000 & 419 & -2.83 (5.9) & 1.00 \\\\ \\hline\nMedium & 100 -- 1,000 & 5,762 & -2.77 (6.3) & 1.05 \\\\ \\hline\nSmall & 20 -- 100 & 11,715& -2.49 (8.3) & 0.92 \\\\ \\hline\nV. Small & $<$ 20 & 11,557& -1.68 (18.6)& 0.78 \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:bin_moduletype}\n\\end{table}\n\nFinally, we perform Welch's two-sample t-test on the differences of\nmeans to check whether the differences exist and are statistically\nsignificant. The tests show statistical significance (p $<<$ 0.0001)\nfor the mean differences between medium and small, and between small\nand very small. The other differences are not statistically\nsignificant at the 95\\% confidence level.\n\nThe numerical values of the binned analysis are consistent with those\nfrom the linear regression model that places the number of interfaces\nas a continuous function of the number of classes as given by\nequation~\\ref{eq2}. This adds strength to the result.\n\nOne possible explanation for why smaller projects have\ndisproportionately more interfaces is that they have an investment in\nmodeling entities with interfaces without having enough\nimplementations of those entities to pay off the investment. In\nmedium-to-large projects, that investment pays off, as more classes\nprovide alternative implementations of the interfaces. 
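The bin comparison can be sketched numerically. The paper's tests were run in R; the following Python sketch implements Welch's two-sample t statistic and applies it to synthetic log-ratios drawn with the Small and Very Small bin parameters from Table~\ref{tab:bin_moduletype} (the data itself is simulated, not the study's):

```python
import math
import random

def welch_t(a, b):
    """Welch's two-sample t statistic; does not assume equal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

rng = random.Random(42)
# Synthetic log(Interfaces/Classes) samples shaped like the Small bin
# (mean -2.49, SD 0.92, n=11,715) and Very Small bin (mean -1.68, SD 0.78, n=11,557)
small = [rng.gauss(-2.49, 0.92) for _ in range(11715)]
very_small = [rng.gauss(-1.68, 0.78) for _ in range(11557)]
t = welch_t(small, very_small)
# |t| is enormous at these sample sizes, hence p << 0.0001
```

With tens of thousands of observations per bin, even the modest difference in means yields a t statistic far beyond any conventional significance threshold.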
The higher\nratio in very large projects is not statistically significant, so no\nconclusions should be drawn on whether that holds in general or only in\nthis particular set of 17 very large projects.\n\n\n\\subsection{Internal Complexity}\n\\label{sec:modulecomplexity}\n\nRQ3: {\\em Do larger projects have more method calls or use more unsafe\n operations than smaller projects, or are there no statistically\n significant differences?}\n\nA recent study by Landman et al.~\\cite{Landman2014} showed that\nthere seems to be no correlation between the size of Java projects\n(measured in SLOC) and the cyclomatic complexity of their methods. We\ngo one step further to investigate other potential sources of\ncomplexity in code: the number of outgoing method calls, the\nnumber of \\texttt{instanceof} statements and the number of unsafe\ntype casts in each project. \n\n\\begin{table}\n\\centering\n\\caption{Analysis of code complexity}\n{\\small\n\\begin{tabular}{|l|r|r|r|r|l|}\\hline\nAnalysis & $\\alpha$ & $\\beta$& $r$ & $R^{2}$ & Space\\\\ \\hline\nCalls vs. Methods & 1.64 & 1.00 & 0.94 & 0.89 & log-log\\\\ \\hline\nInst.of vs. Methods& -2.77 & 0.84 & 0.70 & 0.49 & log-log\\\\ \n & -0.41 & 0.01 & 0.72 & 0.52 & log-log$^{2}$\\\\ \\hline\nCasts. vs. Methods & -1.81 & 1.00 & 0.83 & 0.68 & log-log \\\\ \n & -0.49 & 0.36 & 0.84 & 0.70 & log-log$^{1.4}$ \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:modulecomplexity}\n\\end{table}\n\nAs with module types, some of the data here also shows considerable\nvariation. Table \\ref{tab:modulecomplexity} summarizes the results.\nWe found no evidence that larger projects use more unsafe features of\nJava than smaller projects, and we found some weak evidence that the\ncontrary may happen.\n\n\n\\subsubsection{Method Calls vs. 
Methods declared in classes}\n\\label{sec:callsvsmethods}\n\nIn the case of method calls, there is a fairly strong fit of the\nlinear model ($R^{2}=0.89$), and the residuals plots show no warning\nsigns (Appendix B Figure~\\ref{fig:res_Calls_Methods}). The exponent\n$\\beta=0.9971$, however, is very close to 1, which means that the\nrelation is essentially linear at a rate of $e^{1.64}=5.1$ calls per\nmethod. For example, according to the model, a project with 50 methods\nhas 255 method calls; a project with 500 methods has 2,531 method\ncalls; a project with 5,000 methods has 25,144 method calls. \n\n\\subsubsection{Instanceof statements vs. Methods declared in classes}\n\\label{sec:instanceofvsmethods}\n\nIn the case of \\texttt{instanceof}, the linear model in log-log space\nis not that good ($R^{2}=0.49$), and the residuals plots show some\nwarning signs (Appendix B\nFigure~\\ref{fig:res_Instanceof_Methods}). Similarly to what was done\nfor the previous analysis, we transformed the X axis (methods) with an\nadditional square function, and the fit improved to $R^{2}=0.52$\n(residuals in Appendix B\nFigure~\\ref{fig:res_Instanceof_Methods2}). This yields the\nrelation\\\\ \n{\\em Instanceof} $= e^{-0.14} Methods^{0.0702 log(Methods)}$\n\nAgain, this function is not monotonic, and therefore results in a\nnon-monotonic average number of \\texttt{instanceof} statements per\nmethod, depending on the total number of methods of the projects: the\nratio starts high for projects with just a few methods (e.g. 0.21 for\nprojects with 5 methods) and decreases sharply for projects with a very\nsmall number of methods ($< 20$); it then continues to decrease, but\nmore gently, reaching a minimum of 0.025 \\texttt{instanceof}\nstatements per method for projects with around 1,000 methods\n(i.e. almost 10 times fewer than for projects with 5 methods); from\nthen on, it increases again, but slowly. 
Its predicted value is 0.06\n\\texttt{instanceof} statements per method for projects with 50,000\nmethods (of which there are 17 in the dataset).\n\n\n\\subsubsection{Type casts vs. Methods declared in classes}\n\\label{sec:castsvsmethods}\n\nIn the case of casts, the linear log-log model is also not that good\n($R^{2}=0.68$, see also Appendix B\nFigure~\\ref{fig:res_Casts_Methods}). We then tried a few\ntransformations, and found $log^{1.4}$ to produce very good residuals\nplots (Appendix B Figure~\\ref{fig:res_Casts_Methods2}) and a better\n$R^{2}=0.70$. This function has a similar behavior to the one\nexplained for \\texttt{instanceof} in terms of monotonicity, but the\nminimum (0.13 casts per method) happens a bit earlier, at around 500\nmethods. This value of casts per method is roughly 10 times less than\nthe value for projects with 5 methods.\n\n\\subsubsection{Discussion}\n\nCombined, and along with the study by Landman et al., these results\nshow that there is no evidence to support the hypothesis that larger\nprojects have more complex code. On the contrary, there seems to be a\ntrend for smaller projects to include proportionally more unsafe\nstatements of Java. \n\nThe linear model for method calls vs. methods (Section\n\\ref{sec:callsvsmethods}) is a fairly strong fit that shows that the\nnumber of method calls per declared method is roughly constant and\nindependent of the size of the projects (measured in number of\nmethods). The other two models (Sections \\ref{sec:instanceofvsmethods}\nand \\ref{sec:castsvsmethods}) have a less strong fit. That simply\nmeans that their precision as predictors is not too good, but the\ntrend showing proportionally more unsafe features of Java in small\nprojects (Section~\\ref{sec:castsvsmethods}) is interesting and\nstatistically significant. 
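The non-monotonic ratio implied by equation~\ref{eq2} is easy to probe numerically; a short sketch using the \texttt{instanceof}-vs-methods coefficients reported above (dividing the model by the number of methods gives the per-method ratio):

```python
import math

def instanceof_per_method(methods, alpha=-0.14, beta=0.0702):
    """Instanceof/Methods ratio implied by y = e^alpha * x^(beta*log(x)).

    Dividing the equation-(2)-style model by x gives a ratio that first
    falls, reaches a minimum, and then rises again."""
    u = math.log(methods)
    return math.exp(alpha + beta * u * u - u)

r5, r1000, r50000 = (instanceof_per_method(m) for m in (5, 1000, 50000))
# ~0.21 at 5 methods, ~0.025 near the minimum around 1,000 methods,
# rising again to ~0.06 at 50,000 methods
```

Evaluating the ratio at 5, 1,000, and 50,000 methods reproduces the 0.21, 0.025, and 0.06 figures quoted in the text.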
We conjecture that this may happen because\ndevelopers of non-trivial projects adhere to a stricter discipline of\navoiding these features.\n\n\\subsection{Class Composition via Inheritance}\n\\label{sec:inheritance}\n\nRQ4: {\\em Does the scale of the project affect the use of\n inheritance?}\n\nThe two linear models used to answer the above question are classes\ndefined using inheritance (DUI) vs. total classes, and classes that\nare inherited from within the project (IF) vs. total classes of each\nproject. The results are shown in Table~\\ref{tab:inheritance} (plots\nin Appendix B, Figures~\\ref{fig:res_DUI_Classes},\n\\ref{fig:res_DUI_Classes2}, \\ref{fig:res_IF_Classes} and\n\\ref{fig:res_IF_Classes2}).\n\n\\begin{table}\n\\centering\n\\caption{Analysis of inheritance}\n{\\small\n\\begin{tabular}{|l|r|r|r|r|l|}\\hline\nAnalysis & $\\alpha$ & $\\beta$& $r$ & $R^{2}$ & Space \\\\ \\hline\n{\\footnotesize DUI vs. Classes} & -1.0505 & 1.0159 & 0.92 & 0.85 & log-log \\\\\n & -0.5364 & 0.6626 & 0.93 & 0.86 & log-log$^{1.2}$ \\\\ \\hline\n{\\footnotesize IF vs. Classes} & -1.9908 & 0.8037 & 0.78 & 0.61 & log-log \\\\ \n & -0.3414 & 0.0903 & 0.80 & 0.64 & log-log$^{2}$ \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:inheritance}\n\\end{table}\n\nAs in previous analyses, the residuals plots of the initial linear\nregression models showed some warning signs that the models might not\nbe the best (Appendix B Figures~\\ref{fig:res_DUI_Classes} and\n\\ref{fig:res_IF_Classes}). As such, we compensated for the bend in\nthe residual data by adding an additional non-linear component to the\nX axis (classes). \n\n\\subsubsection{Classes Defined Using Inheritance (DUI)}\n\nIn the case of DUI classes vs. classes, the better\nmodel is\\\\ \n{\\em DUI} $= e^{-0.5364+0.6626 log(Classes)^{1.2}}$\n\nAlso here, the curve of the ratio starts high, decreases sharply, then\ndecreases slowly up to a minimum, then increases again. 
In the case of\nthese parameters, the minimum is around 10 classes, with 35\\% of them\nDUI. According to this model, projects with 2 classes have on average\n0.9 of them defined using inheritance (45\\%); 10 classes $\\succ$\n3.5 (35\\%); 100 classes $\\succ$ 37 (37\\%); 1,000 classes $\\succ$ 493\n(49\\%); 5,000 classes $\\succ$ 3,379 (68\\%); etc.\n\nA project with 100 classes, 65\\% of them DUI, is far from the norm,\nbut if the number of classes is close to 5,000, then that percentage\nof DUI is close to the norm.\n\n\\subsubsection{Classes Inherited From (IF)}\n\nIn the case of IF classes, the better model is\\\\ \n{\\em IF} $= e^{-0.3414} Classes^{0.0903 log(Classes)}$\n\nIn the case of these parameters, the minimum is around 100 classes,\nwith 5\\% of them IF. According to this model, projects with 10\nclasses have on average 1.1 inherited from (11\\%); 100 classes\n$\\succ$ 4.8 (5\\%); 1,000 classes $\\succ$ 52.8 (5\\%); 5,000 classes\n$\\succ$ 497 (10\\%); etc.\n\n\n\\subsection{Dependencies}\n\\label{sec:dependencies}\n\nRQ5: {\\em Do larger projects use disproportionately more, or fewer,\n modules than smaller projects? How does project efferent coupling\n vary with size? Are there statistically significant differences in how\n types from JDK\/internal\/external libraries are used in projects of\n varying sizes?}\n\nTo answer these questions, we first look at the growth of the number of\ndistinct types used by projects vs. the growth of the number of\nmodules\\footnote{Again, we use the term modules = types = classes +\n interfaces.} declared in the projects. In counting the number of\nmodules used, we count the number of {\\em distinct} modules, so\nmodules used multiple times in a project are counted only once. We\nthen look deeper into the origins of the used modules.\n\n\\subsubsection{Used Modules vs. 
Declared Modules}\n\nProjects use a variety of modules, some of them declared internally,\nothers provided by the JDK and others provided by external\nlibraries. Again, we know that the number of modules used in a project\nis highly correlated with the size of the project. We are interested\nin studying the underlying trend function, and whether it is linear or\nsuper-\/sub-linear. Project size here is measured by\nthe number of declared modules in it.\n\nIn analyzing the initial linear regression model in log-log space, it\nwas visible that it suffered from a small bend in the residuals (see\nAppendix B Figure~\\ref{fig:res_Used_Modules}). We then compensated\nfor it by applying an exponent of $1.2$ to the log-transformed X axis,\nwhich eliminated the bend, producing a better model. Table~\\ref{tab:moduleuse} summarizes the\nparameters.\n\n\\begin{table}\n\\centering\n\\caption{Analysis of used modules}\n{\\small\n\\begin{tabular}{|l|r|r|r|r|l|}\\hline\nAnalysis & $\\alpha$ & $\\beta$& $r$ & $R^{2}$ & Space \\\\ \\hline\n{\\footnotesize Used vs. Declared} & 2.006 & 0.7357 & 0.93 & 0.87 & log-log \\\\\n & 2.335 & 0.4863 & 0.93 & 0.87 & log-log$^{1.2}$ \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:moduleuse}\n\\end{table}\n\nThe better model establishes the following relation between the number\nof used modules and the number of declared modules in a project:\n\n{\em Used} $= e^{2.3353 + 0.4863 log(Modules)^{1.2}}$\n\nThese parameters define a sub-linear relation between the number of\nused modules and the number of declared modules in a project, meaning\nthat the number of distinct used modules increases disproportionately less than\nthe number of declared modules in projects. According to this model, \nprojects with 10 declared modules use on average 39 modules; 100\ndeclared $\succ$ 216 used; 1,000 declared $\succ$ 1,450 used; etc. The\nnumber of used modules grows more slowly than the number of declared modules.\n\nThis result is intriguing, as it was unclear what to expect.
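The sub-linear relation above can be checked numerically. A minimal Python sketch, with the table's coefficients hard-coded and natural logarithms assumed:

```python
import math

# Fitted log-log^1.2 model: distinct used modules vs. declared modules;
# coefficients from the table (natural log assumed).
def predicted_used(declared, alpha=2.3353, beta=0.4863, p=1.2):
    return math.exp(alpha + beta * math.log(declared) ** p)

# Sub-linear growth: the predicted count of used modules grows more
# slowly than the declared count, and eventually falls below it.
for n in (10, 100, 1000):
    print(n, round(predicted_used(n)))
```

Iterating the model past the dataset's range also reproduces the crossover point discussed next, where predicted used modules drop below declared modules.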
The\nresult makes sense when the addition of a dependency (external or\ninternal) is correlated with the addition of multiple modules internal\nto the project; the causal relation is unclear, and there may be\nunknown confounding factors behind this correlation. \n\nTheoretically, according to this model, there is a scale point at\nwhich the number of used modules is less than the number of declared\nmodules, which means that some declared modules would not be used,\njust declared. That point is around 50,000 declared modules. Our\ndataset does not contain any project that large, but we found 753\nprojects where the number of declared modules is larger than the\nnumber of used modules, so this situation is not rare. An analysis of\nthis set of projects shows that they are statistically larger than the\naverage of the whole dataset, and that the set contains a disproportionate\nnumber of very large projects -- 17 out of the 59 projects with more\nthan 3,000 declared modules are in this subset of projects that have a\nhigher number of declared modules than used modules. It is possible\nthat these cases correspond to utility frameworks.\n\nHowever, the opposite result, if it had been observed, could also be\nexplained. That is, one could imagine that the number of used modules\nwould grow faster than the number of declared modules. In this case,\nthe addition of a dependency (external or internal) would not be\ncorrelated with additional internal modules, and, instead, it would\nsimply correlate with the addition of methods in existing modules that\nuse that new entity. That is not the case in this ecosystem: more\nmethods seem to be added for defining additional functionality with\nexisting dependencies than for using additional\ndependencies. (Plots omitted for space reasons.)\n\n\\subsubsection{Efferent Coupling vs. 
SLOC}\n\\label{sec:coupling}\n\nThe efferent coupling of an entire project is given by the number of\nexternal modules (classes+interfaces) that the project uses. Here we\nstudy its exact relation with project size given in SLOC. This\nanalysis targets the well-known correlation between efferent coupling\nmetrics and size of artifacts, in general. Table~\\ref{tab:coupling}\nsummarizes the parameters. (Plots are in\nAppendix B Figure~\\ref{fig:res_Coupling_SLOC})\n\n\\begin{table}\n\\centering\n\\caption{Analysis of efferent coupling of projects}\n{\\small\n\\begin{tabular}{|l|r|r|r|r|l|}\\hline\nAnalysis & $\\alpha$ & $\\beta$& $r$ & $R^{2}$ & Space \\\\ \\hline\n{\\footnotesize Coupling vs. SLOC} & 0.1176 & 0.5641 & 0.91 & 0.82 & log-log \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:coupling}\n\\end{table}\n\nAccording to this model, the relation is sublinear, i.e. efferent\ncoupling grows disproportionately slower than SLOC. Also, here,\n``normality'' changes with scale, it's not a simple constant ratio.\n\n\\subsubsection{Provenance of Used Modules}\n\nIn order to find out whether there are differences in the origin of\ndependencies among projects of different sizes, we then looked at the\nprovenance of all classes and interfaces (i.e. modules) that are used\nin each project, and regressed them against size of the project, given\nby number of declared modules. Table~\\ref{tab:provenance} summarizes\nthe parameters. (All residuals plots can be found in Appendix B,\nFigures~\\ref{fig:res_Internal_Modules}, \\ref{fig:res_JDK_Modules} and\n\\ref{fig:res_External_Modules})\n\n\\begin{table}\n\\centering\n\\caption{Analysis of origin of dependencies (I)}\n{\\small\n\\begin{tabular}{|l|r|r|r|r|l|}\\hline\nAnalysis & $\\alpha$ & $\\beta$& $r$ & $R^{2}$ & Space \\\\ \\hline\n{\\footnotesize Inter. vs. Modules}& -0.5040 & 1.0037 & 0.96 & 0.92 & log-log \\\\ \\hline\n{\\footnotesize JDK vs. 
Modules} & 1.7405 & 0.5306 & 0.81 & 0.66 & log-log \\\\ \\hline\n{\\footnotesize Exter. vs. Modules}& 0.7168 & 0.7489 & 0.80 & 0.65 & log-log \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:provenance}\n\\end{table}\n\nIndeed, these parameters show that there are differences. As $\\beta$\nindicates, larger projects use disproportionately fewer modules from\nexternal sources ($\\beta=0.7489$) and even fewer from the JDK\n($\\beta=0.5306$) than smaller projects. They use slightly\ndisproportionately more internally-defined modules ($\\beta=1.0037$)\nthan smaller projects. \n\nThese numbers are highly driven by the previous result -- in general,\nthe number of used modules grows more slowly than the number of declared\nmodules. That blurs the true ratios of the origin of dependencies as\nprojects grow, so let us analyze the data in a different way. We can\ntake module use as the independent variable and module origin as the\ndependent variable. This helps us quantify the mix of dependency\nprovenance as a function of project size given by the number of total\nused modules (i.e. a slightly different size metric that is highly\ncorrelated with the number of declared modules). The results are shown\nin Table~\\ref{tab:dependencies}.\n\n\\begin{table}\n\\centering\n\\caption{Analysis of origin of dependencies (II)}\n{\\small\n\\begin{tabular}{|l|r|r|r|r|l|}\\hline\nAnalysis & $\\alpha$ & $\\beta$& $r$ & $R^{2}$ & Space \\\\ \\hline\n{\\footnotesize Inter. vs. Total}& -2.882 & 1.282 & 0.93 & 0.87 & log-log \\\\ \n & -2.821 & 1.275 & 0.93 & NA & log-log {\\tiny (RLM)} \\\\ \\hline\n{\\footnotesize JDK vs. Total} & 0.162 & 0.750 & 0.90 & 0.82 & log-log \\\\\n & 0.153 & 0.756 & 0.90 & NA & log-log {\\tiny (RLM)} \\\\ \\hline\n{\\footnotesize Exter. vs. 
Total}& -1.585 & 1.072 & 0.89 & 0.79 & log-log \\\\ \n & -1.454 & 1.059 & 0.89 & NA & log-log {\\tiny (RLM)} \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:dependencies}\n\\end{table}\n\nAn inspection of the residuals plots (Appendix B Figures\n\\ref{fig:res_Internal_Total}, \\ref{fig:res_JDK_Total} and\n\\ref{fig:res_External_Total}) suggested that the simple linear model\nmay suffer from the effect of outliers, particularly on the use of JDK\nentities. As such, we fitted a {\em robust} linear regression model\n(RLM), which excludes outliers.\footnote{Robust linear regression does\n not report $R^{2}$.} The new residuals plots (Appendix B Figures\n\\ref{fig:res_Internal_Total2}, \\ref{fig:res_JDK_Total2} and\n\\ref{fig:res_External_Total2}) still suffer from some left-skewness,\nbut the {\em Residuals vs. Fitted} plot shows an improvement. Even with\nRLM, the model may not be strong for the edges of the data, i.e. for\nextremely small and extremely large projects.\n\nThe results indicate that, as the number of total used modules grows,\nprojects use disproportionately much less of the JDK ($\\beta=0.7559$)\nand much more internal ($\\beta=1.2822$) modules. The growth in\nexternal dependencies is also disproportionately larger, but less so\nthan the use of internal modules ($\\beta=1.0589$).\n\nIn retrospect, this result makes sense: the reason why projects are\nlarger is that they define more classes and interfaces; those are\nlikely to be used internally. For large projects, and given that the\nnumber of types in the JDK is fixed, the relative importance of the\ntypes from the JDK decreases and the importance of internal types\nincreases. \n\nBut this result exposes an interesting characteristic of\nprogramming-in-the-large: larger projects use much more of their\ninternal, and potentially less stable, components.
Smaller projects\nleverage the JDK.\n\n\\section{Sampling Biases}\n\nThe linear regression analysis in the previous section was performed\nover the entire dataset without excluding any of the projects. The\ndataset is heavily right-tailed, with a bias towards small projects,\nand with only a few very large projects. Since linear regression\nlearns the parameters $\\alpha$ and $\\beta$ from the data, the data that\nwe use influences the exact values of those parameters. As such, it\ncould very well be that the non-linear effects reported\nin the previous section, given by $\\beta \neq 1$, are an artifact\nof the many small projects and the very few very large projects in the\ndataset forcing that non-linear behavior in order for the models to\nfit the data. If that were the case, there might be a simpler linear\nmodel with $\\beta=1$ that could perfectly explain the data ``in the\nmiddle,'' containing only projects above and below certain size\nthresholds. In other words, we could give up explaining what happens\nfor the many very small projects, because the variance in them\nis very large, and for very large projects, because there aren't that\nmany, and focus on finding simple models for projects in between.\n\nWe investigated this possibility by constructing alternative models\nwhere the parameters are learned from various subsets that exclude\nvery small and very large projects. We report the result on only one\nof the many bivariate analyses of the previous section, specifically\nMethods vs. Classes, which had $\\beta=1.1055$\n(Section~\\ref{sec:modulesize}). Table~\\ref{tab:alternativemodels}\nsummarizes the results. The column Subset in that table denotes the\nconditions for project inclusion in the set as a range on the number\nof classes in the project. The first row is the baseline model given\nin the previous section.
All of these models result in good residuals\nwithout any warning signs.\n\n\\begin{table}\n\\centering\n\\caption{Alternative models for Methods vs. Classes}\n{\\small\n\\begin{tabular}{|l|l||r|r|r|r|}\\hline\nModel & Subset & \\#Projects & $\\alpha$ & $\\beta$ & $R^{2}$ \\\\ \\hline\n1 & Baseline & 30,914 & 1.095 & 1.106 & 0.89 \\\\ \\hline\n2 & [10--3,000] & 22,860 & 1.283 & 1.061 & 0.84 \\\\ \\hline\n3 & [20--3,000] & 18,239 & 1.335 & 1.051 & 0.82 \\\\ \\hline\n4 & [30--3,000] & 15,030 & 1.347 & 1.049 & 0.81 \\\\ \\hline\n5 & [50--1,000] & 10,576 & 1.350 & 1.047 & 0.73 \\\\ \\hline\n6 & [100--500] & 5,167 & 1.232 & 1.068 & 0.52 \\\\ \\hline\n7 & [10--100] & 16,712 & 1.222 & 1.081 & 0.63 \\\\ \\hline\n8 & [1,000--3,000] & 386 & 0.168 & 1.218 & 0.36 \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:alternativemodels}\n\\end{table}\n\nAs shown in the table, all of the alternative models are still\nnon-linear, with $\\beta \\neq 1$, but, with the exception of model 8,\nthe value is lower than the baseline model. Model 5, which excludes\nalmost $2\/3$ of the dataset and includes the projects in the middle of\nthe dataset, has the lowest $\\beta$. But even in that subset, the\nnon-linear relation between classes and methods can be\nobserved. According to the parameters of model 5, a project with 50\nclasses is predicted to have 232 methods, and a project with 500\nclasses is predicted to have, not 2,320, but 2,583 methods. Any\nconcerns that the non-linear relation between methods and classes was\nan artifact of sampling bias are put to rest with the results shown in\nTable~\\ref{tab:alternativemodels}.\n\nGiven that there can be an unlimited number of models inferred from any\narbitrary subset of the original dataset, the question arises of which\nmodel to use. 
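The predictions quoted for model 5 follow directly from the power-law form with that row's parameters. A minimal Python sketch, hedged in that it simply hard-codes the table's coefficients:

```python
import math

# Model 5 from the table: Methods = e^alpha * Classes^beta,
# fitted on the subset of projects with 50--1,000 classes.
def predicted_methods(classes, alpha=1.350, beta=1.047):
    return math.exp(alpha) * classes ** beta

# Superlinear growth: a 10x increase in classes yields more than
# 10x the predicted methods.
print(round(predicted_methods(50)), round(predicted_methods(500)))
```

Even with the mildest exponent among models 1 through 7, the 10x jump from 50 to 500 classes predicts more than 10x the methods, which is the non-linearity the section is probing.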
\n\nIf the goal is to make predictions based on the models, the accuracy\nof each model can be tested on test datasets that are not part of the\ndata from which the models are learned.\nWe exemplify such prediction goals by measuring the Normalized Root\nMean Square Error (NRMSE) of each model on the two extremes of the whole dataset\nthat have been eliminated from the learning part: the very small\nprojects ($\#Classes < 10$) and the very large projects ($\#Classes >\n3,000$); note in Table~\\ref{tab:alternativemodels} that those\nprojects are not part of any subset. NRMSE is given by\n\n\\begin{equation}\n\\label{eq:MSRE}\n\\begin{split}\nRMSE = \\sqrt{\\frac{1}{n}\\sum\\limits_{t=1}^n{(\\hat{y}_t - y_t)^{2}}}\\\\\nNRMSE = \\frac{RMSE}{y_{max} - y_{min}}\n\\end{split}\n\\end{equation}\n\n\nThe summary of this accuracy analysis can be seen in\nTable~\\ref{tab:accuracy}. For comparison, we also show the NRMSE of\neach model on the entire dataset. Numbers in bold represent the models\nthat performed the best.\n\n\\begin{table}\n\\centering\n\\caption{Accuracy of the models, measured in NRMSE}\n{\\small\n\\begin{tabular}{|l||r|r|r|}\\hline\nModel & V.Small & V.Large & All \\\\\n & (5,472 projects) & (50 projects) & (30,914 projects) \\\\ \\hline\n1 & 0.13467 & 0.1500 & {\em {\\bf 0.05110}} \\\\ \\hline\n2 & 0.13109 & 0.1229 & 0.05155 \\\\ \\hline\n3 & 0.13083 & 0.1210 & 0.05181 \\\\ \\hline\n4 & 0.13081 & 0.1208 & 0.05188 \\\\ \\hline\n5 & {\\bf 0.13078} & {\\bf 0.1204} & 0.05189 \\\\ \\hline\n6 & 0.13171 & 0.1230 & 0.05136 \\\\ \\hline\n7 & 0.13184 & 0.1354 & {\\bf 0.05135} \\\\ \\hline\n8 & 0.21137 & 0.1530 & 0.05700 \\\\ \\hline\n\\end{tabular}\n}\n\\label{tab:accuracy}\n\\end{table}\n\nAs expected, the model that performs the best for the entire dataset\nis model 1, whose parameters were inferred from that same data. This\ncase doesn't serve to validate the model; it just confirms what was\nexpected.
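The NRMSE definition above translates into a few lines of Python; this sketch assumes, as the equation does, that predictions and observations are paired and that the normalizer is the range of the observed values:

```python
import math

def nrmse(y_true, y_pred):
    """NRMSE as defined in the equation: RMSE normalized by the range
    of the observed values, so errors are comparable across datasets."""
    n = len(y_true)
    rmse = math.sqrt(sum((p - t) ** 2 for t, p in zip(y_true, y_pred)) / n)
    return rmse / (max(y_true) - min(y_true))

# A perfect model scores 0; uniform errors of 1 over a range of 10
# score 0.1.
print(nrmse([0.0, 10.0], [1.0, 9.0]))
```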
Excluding that baseline, the model that performs the second\nbest on the entire dataset is model 7, which contains many small\nprojects. The real validation comes only on the performance of the\nmodels on the two test sets containing very small and very large\nprojects, which weren't contained in the learning data. In both cases,\nthe model that makes the best predictions is model 5, whose parameters\nare inferred from a large portion of small\/medium\/large size projects.\n\nGiven these results, model 5, which learns the parameters ignoring the\nedges of the data, should be used instead of the baseline model\n1. Similar accuracy analysis should be done for all the other\nbivariate analysis. It is likely that the best models are always the\nones that learn the parameters ignoring the projects at the edges,\nwhere there is either more variance or uncertainty. Nevertheless, the\nmost important take away from this section is that {\\bf the\n non-linearities exist in the data, independent of which \n subsets we choose.}\n\n\\section{Implications for Software Metrics}\n\\label{sec:discussion}\n\nOur study was centered around a very simple question: {\\em does the scale of\nthe software system affect the internal structure of its modules or\nare modules scale-invariant?}\n\nFor the Java ecosystem, the answer is: yes, the scale of the\nsystem affects several aspects of the internal structure of its\nmodules, and of the way the modules are put together. Among those, the\nnumber of methods per class, the number of LOCs per module, the use of\ninheritance and mix of dependencies stand out. Going back to the LEGO\nmetaphor, it is as if large Java projects have injected stronger\ncoupling material and more hooks into the [larger] software bricks.\nThese findings have profound implications for software research,\nespecially quantitative studies of software artifacts. 
We\ndiscuss them here.\n\nAs mentioned in Section \\ref{sec:confusingeffect}, size has been the\nsource of much confusion in software studies. As noted several times\nin the literature, many software metrics -- for\nexample, Weighted Methods per Class (WMC) and (efferent and afferent)\nCoupling, just to mention two -- are correlated with size, so their\nstatistical power is very weak when size metrics are\navailable. We explain how to properly normalize for size with one\nexample metric: WMC.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3in]{figures\/RWMCVsClasses.png}\n\\caption{Correlation: how WMC grows with the number of classes.}\n\\label{fig:wmc_classes}\n\\end{figure}\n\nFigure~\\ref{fig:wmc_classes} shows the regression of WMC\nvs. Classes in our dataset, a confirmation of what we already know\nabout the existence of these correlations. In our data, the Pearson\ncorrelation (in log space) is $r=0.3$, i.e. moderate.\n\n\\subsection{Linear or Log?}\n\nA first approach to normalizing the number of methods controlling for\nthe size of the project is to take a simple average $WMC = Methods \/\nClasses$. This is, in fact, how this metric is defined in the\nliterature~\\cite{Chidamber:1991}, assuming a uniform complexity of 1 (an\nassumption made in several prior studies). This gives us a number\nthat, in principle, can be used to compare projects independently of\ntheir size. If we have two projects, one with $WMC=3$ and the other\nwith $WMC=8$, that tells us that these two projects are considerably\ndifferent without needing to know any size metric. \n\nIn software ecosystems, a mean of $WMC$ can be calculated for entire\ncollections of projects by computing the $WMC$ of all the projects in\nthe collection, and then computing the mean of those values.
In our\ndataset $mean(WMC)=5.15$, which might lead us to conclude that in this\nvery large Java ecosystem, the average WMC is 5.15.\n\nThis value, however, is very misleading, because the distribution of\n$WMC$ in the dataset is not normal, but\nlog-normal. Figure~\\ref{fig:wmc} in the Appendix shows the $WMC$ distribution in\nlinear and log scales.\n\nGiven this knowledge, a second approach to normalizing for size is to\nfind the mean and SD of $WMC$ in log scale. In our dataset that is\n$mean(WMC)_{log}=1.455$ and $SD(WMC)_{log}=0.63$. This translates to\nlinear space as $4.28$, with 68\% of values falling within the\ninterval $[2.28, 8.00]$, skewed towards the lower end of the\ninterval.\n\nThe first thing to notice is that these two numbers, $mean(WMC)$ and\n$mean(WMC)_{log}$, are different, the former being larger than the\nlatter. That happens because the data is highly right-skewed,\ni.e. there are many more smaller values than larger ones. Therefore\nthe simple mean in linear scale does not capture an important aspect\nof the data, its skewness; the mean and SD in log scale do. Another way of\nlooking at this is that when drawing a data point randomly out of this\ndataset, the odds are higher around $4.28$ than around $5.15$.\n\nEven though this is basic statistics, many papers continue to report\nsummary statistics in linear scale when the data is not normally\ndistributed in that scale. In general, we must inspect what kind of\ndistribution our data has and report summary statistics accordingly,\nor the reports will be misleading.\n\n\\subsection{Non-Linearity}\n\nThe above analysis is still missing something important about the\ndata, namely the finding unveiled by this paper that the number of\nmethods in a project grows {\em disproportionately} faster with the\nnumber of classes. Therefore {\em normality} takes a different value\ndepending on the scale of the project.
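The gap between the linear-scale mean and the log-scale (geometric) mean can be reproduced on synthetic data. The sketch below draws from a log-normal with the log-scale parameters reported above; it is illustrative only and is not the paper's actual dataset:

```python
import math
import random

random.seed(1)
# Synthetic WMC-like values from a log-normal with the reported
# log-scale parameters (mean 1.455, SD 0.63) -- illustrative only.
sample = [math.exp(random.gauss(1.455, 0.63)) for _ in range(100_000)]

linear_mean = sum(sample) / len(sample)
log_mean = math.exp(sum(math.log(v) for v in sample) / len(sample))

# On right-skewed data the linear mean exceeds the geometric mean,
# mirroring 5.15 vs. 4.28 in the text.
print(round(linear_mean, 2), round(log_mean, 2))
```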
We might conclude that a\nproject with $WMC=7.9$ ($WMC_{log} = 2.067$), which is on the edge of\nthe SD interval, might need special attention, and that a project with\n$WMC=4.3$ would be perfectly ``normal.'' That may or may not be the\ncase, depending on the size of that project. In\nSection~\\ref{sec:modulesize} we found a strong non-linear model given\nby:\n\n$Methods = e^{\\alpha} Classes^{\\beta}$\n\nThis equation gives us the {\\em norm} of what to expect of $WMC$ in\nprojects of varying sizes in this dataset. For a project with 10,000\nclasses, the expectation of the model is that it will have 70,000+\nmethods, not 42,800 as a simple linear model would predict (i.e. 4.28\n* 10,000); so $WMC=7.9$ is what we would expect for a project of this\nsize. A project with 10,000 classes that shows $WMC=4.3$ would be an\noddity in this ecosystem. If, however, the project has only 25\nclasses, then $WMC=4.3$ would be expected, but $WMC=7.9$ would be\nsurprising, in the sense that it is a large deviation from what is\nexpected of projects of that size. \n\nTherefore, the proper normalization for size must take this non-linear\nrelation into account, producing an adjusted ratio that is {\\em\n truly} independent of the number of classes:\n\n\\begin{equation}\n\\label{eq4}\nWMC_{\\beta} = \\frac{Methods}{ Classes^{\\beta}}\n\\end{equation}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3in]{figures\/RWMCbetaVsClasses.png}\n\\caption{Normalization: how $WMC_{\\beta}$ grows with the number of classes.}\n\\label{fig:wmcbeta_classes}\n\\end{figure}\n\nFigure~\\ref{fig:wmcbeta_classes} shows how $WMC_{\\beta}$ and size are\n{\\bf not correlated}, using $\\beta=1.1055$, and the parameters from model\n1 in the previous section. 
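The adjusted ratio in Eq.~\ref{eq4} is straightforward to compute; the following sketch hard-codes the baseline model-1 parameters and shows that, for projects following the model exactly, plain WMC drifts with size while the $\beta$-adjusted ratio does not:

```python
import math

def wmc_beta(methods, classes, beta=1.1055):
    """Size-adjusted WMC: Methods / Classes**beta."""
    return methods / classes ** beta

# Baseline model 1: Methods = e^1.095 * Classes^1.1055.
def expected_methods(classes, alpha=1.095, beta=1.1055):
    return math.exp(alpha) * classes ** beta

# Plain WMC rises with project size; the beta-adjusted ratio stays at
# e^1.095 ~ 2.99 for projects that sit exactly on the model.
print(round(expected_methods(10_000) / 10_000, 1))  # ~7.9
print(wmc_beta(expected_methods(25), 25),
      wmc_beta(expected_methods(10_000), 10_000))
```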
Pearson correlation between the two\nvariables is $r \\ll 0.001$, and Spearman correlation is $R=-0.04$.\n\nThe parameter $\\beta$ has eluded measurement, because it can only be\nobserved on sufficiently large collections of programs written in the\nsame language and that, collectively and empirically, define what is\nto be expected of programs written in that language. We now have the\nmeans to measure it, as shown in this paper. Therefore, we now have\nthe knowledge to create updated versions of well-known software\nmetrics that are truly independent of size and that may (or may not)\ncarry additional important information about the code that is not\nalready captured by size metrics. If, for example, high coupling\nreally is ``bad'', we now have the mathematical knowledge to measure\nthe size-independent essence of coupling. We plan to investigate the\nstatistical power of this seemingly small, but critical, adjustment in\nfuture work.\n\n\n\\section{Conclusion}\n\nWe have described a quantitative study designed to answer the\nquestion: does the scale of a software system affect the internal\nstructure of its modules? We have made an important step\ntowards answering this question by performing a statistical analysis of a\nvery large and varied collection of Java projects. The statistically\nsignificant results in this dataset are strong: there are,\nindeed, superlinear effects on some aspects of the modules' internal\nstructure and composition with other modules. This reinforces the\nwidely accepted idea that programming-in-the-large carries with it\ndifferent concerns that aren't as strongly present for\nprogramming-in-the-small. More importantly, it has tremendous\nconsequences for software metrics in general. Many of the metrics\nproposed in the literature, and that are used widely in IDEs, have\nsuffered from poor information content for prediction models because\nthey correlate with the much simpler size metrics.
Our paper shows how\nthis can be corrected.\n\n\n\\acks\n\nThis work was supported by National Science Foundation\ngrants nos. CCF-0725370 and CCF-1018374, and by the DARPA MUSE\nprogram. We would like to thank Pedro Martins for his assistance in\nthe production of the artifact, and the anonymous reviewers, who made\nthis a better paper.\n\n\n\n\\bibliographystyle{abbrvnat}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Technical lemmas and proofs}\n\\label{sec:proofs1}\n\\begin{lemma}\n\t\\label{lemma:BP}\nLet $B$ be a Bernoulli random variable, $T_0$ a nonnegative random variable and let $T=T_0$ if $B=1$ and $T=\\infty$ if $B=0$. Let $X$ and $Z$ be two real-valued random vectors. Then\n\\[\nT_0\\perp (C,X)\\mid Z\\quad \\text{ and } B\\perp (C,T_0,Z)\\mid X\\quad\\Longrightarrow\\quad T\\perp C\\mid (X,Z)\n\\]\n\\end{lemma}\n\\begin{proof}\n\nThis lemma is similar to Lemma 8.1 in \\cite{BP}. We provide the proof for completeness. By elementary properties of conditional independence we have\n\\[\nB\\perp (C,T_0,Z)\\mid X \\quad\\Longleftrightarrow \\quad B\\perp C\\mid (X,Z,T_0)\\quad\\text{and}\\quad B\\perp T_0\\mid (X,Z)\\quad\\text{and}\\quad B\\perp Z\\mid X\n\\]\nand\n\\[\nT_0\\perp (C,X)\\mid Z \\quad\\Longleftrightarrow \\quad T_0\\perp C\\mid (X,Z)\\quad\\text{and}\\quad T_0\\perp X\\mid Z.\n\\]\nThen,\n\\[\nC\\perp B\\mid (X,Z,T_0) \\quad\\text{and}\\quad C\\perp T_0\\mid (X,Z)\\quad\\Longleftrightarrow \\quad (B,T_0)\\perp C\\mid (X,Z)\n\\]\nThe result follows from the fact that $T$ is completely determined by $B$ and $T_0$.\n\\end{proof}\n\n\n\n\n\n\n\\subsection{Identifiability with restricted survival times}\n\nFor any $0<\\tau^*\\leq \\tau_0$, let\n$$\nT_0^*= \\min (T_0, \\tau^*), \\qquad T^* = B T_0^*+ (1-B) \\infty \\qquad \\text{and} \\qquad \nC^* = \\min (C, \\tau^*).\n$$\nMoreover, let \n$$\nY^* = \\min (T^*, C^*)\\quad \\text{ and }\n\\Delta^* = \\mathds{1}_{\\{T^*\\leq C^*\\}}.\n$$ \n\n\nA first aspect to study is 
the identifiability of the true values of the parameter when $(Y,\\Delta)$ is replaced by $(Y^*,\\Delta^*)$. Here, identifiability means that the true values $\\beta_0$ and $\\Lambda_0$ of the parameters maximize the expectation of the criterion maximized to obtain the estimators. This issue is addressed in Lemma \\ref{theo:identif}. Let us introduce some additional notation: for any $0<\\tau^* \\leq \\tau_0$ and $\\Lambda\\in\\mathcal H$, $\\Lambda_{|\\tau^*}$ is defined as \n\\begin{equation}\\label{def:lambda_trunc}\n\\Lambda_{|\\tau^*}(t) = \\Lambda (t), \\;\\;\\forall t\\in [0,\\tau^*) \\quad \\text{ and } \\quad \\Delta\\Lambda_{|\\tau^*}(\\tau^*)=\\Lambda_{|\\tau^*}(\\{\\tau^*\\})=1.\n\\end{equation} \nThe dominating measure for the model of $T_0$ changes with such a stopped cumulative hazard measure to allow for a positive mass at $\\tau^*$. Then, $\\ell$ defined in \\eqref{def:hat_l_cox_g} becomes \n\\begin{multline}\\label{el_star}\n\\ell (y, d, x, z ; \\beta, \\Lambda_{|\\tau^*}, \\gamma) \n= \\mathds{1}_{\\{y< \\tau^*\\}} \\left[d \\log f_u(y|z;\\beta,\\Lambda) \\right. \\\\\n\\left. + (1-d)\\log\\left\\{1-\\phi(\\gamma,x)+\\phi(\\gamma,x)S_u( y |z;\\beta,\\Lambda)\\right\\} \\right]\\\\\n+ \\mathds{1}_{\\{y \\geq \\tau^*\\}} \\left[ d \\log S_u(\\tau^*|z;\\beta,\\Lambda) + (1-d)\\log\\left\\{1-\\phi(\\gamma,x)\\right\\}\\right] .\n\\end{multline}\n\n\n\\begin{lemma}\n\\label{theo:identif}\nLet $0<\\tau^* \\leq \\tau_0$. 
\nAssume that for any $\\tilde \\beta\\in B$ and $\\tilde \\Lambda\\in\\mathcal H$,\n\\begin{equation}\\label{eq:ident_T0}\nS_u ( t | z; \\tilde \\beta , \\tilde \\Lambda_{|\\tau^*}) = S_u ( t | z;\\beta_0,\\Lambda_{0|\\tau^*}), \\; \\forall t \\in [0, \\tau^*) \\quad \\Longrightarrow \\quad \\tilde \\beta=\\beta_0 \\;\\text{ and } \\; \\tilde \\Lambda_{|\\tau^*} = \\Lambda_{0|\\tau^*}.\n\\end{equation}\n Then $(\\beta_0,\\Lambda_{0|\\tau^*})$ is the unique solution of \n\\begin{equation}\n\\label{eq:ident_T0_b}\n\\max_{\\beta\\in B,\\Lambda\\in\\mathcal H }\\mathbb E \\left[ \\ell ( Y^* ,\\Delta^*,X,Z;\\beta,\\Lambda_{|\\tau^*},\\gamma_0) \\right] .\n\\end{equation}\n\\end{lemma}\n{Condition \\eqref{eq:ident_T0} is a minimal identification requirement for the true value of the parameters in the model for the uncured subjects, if the variable $T_0\\wedge C$ were observed and only the events in a subset of the support of $T_0$ were considered.} In the Cox PH model, \n\\eqref{eq:ident_T0} is guaranteed by the requirement that $Var(Z)$ has full rank.\n\n\n\\begin{proof}[Proof of Lemma \\ref{theo:identif}]\nFirst, let \n$$\nH_k([0,t] |x,z) = \\mathbb P (Y\\leq t, { \\Delta = k } | X=x,Z=z), \\quad k\\in\\{0,1\\}, \\quad t\\in[0,\\infty),\n$$\nand let $H_k(dt |x,z) $ be the associated conditional measures. These conditional measures characterize the distribution of $(Y,\\Delta)$ given $X=x$ and $Z=z$. By the model and independence assumptions, for any $t\\geq 0$,\n\\begin{equation}\\label{eq:inv1_0}\nH_1(dt | x, z) = \\phi(\\gamma_0,x) F_C([t,\\infty)|x,z) f_u(t|z;\\beta_0,\\Lambda_{0}) dt,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:inv2_0}\n H_0(dt | x, z ) = \\{1-\\phi(\\gamma_0,x)+ \\phi(\\gamma_0,x) S_u(t|z;\\beta_0,\\Lambda_{0}) \\}F_C(dt|x,z) .\n\\end{equation}\nFollowing a usual abuse of notation, herein we\ntreat $dt$ not just as the length of a small interval but also as the name of the interval itself.
Note that up to additive terms which do not depend on the parameters $\\beta,\\Lambda$,\n$$\n(y,d)\\mapsto d \\log f_u(y |z ;\\beta_0,\\Lambda_0) \n+\n(1-d )\\log\\left\\{1-\\phi(\\gamma_0,x)+\\phi(\\gamma_0,x)S_u(y|z;\\beta_0,\\Lambda_0)\\right\\},\n$$\nis the conditional log-density of $(Y,\\Delta)$ given $X=x$ and $Z=z$. From this and Kullback information inequality one can deduce that the expectation of $\\ell$ defined in \\eqref{def:hat_l_cox_g} is maximized by $\\beta_0,\\Lambda_0$ and $\\gamma_0$. \n\nLet $0<\\tau^*\\leq \\tau_0$. Note that \n$$\n H_1([\\tau^*,\\tau_0] | x, z) = H_1([\\tau^*,\\infty) | x, z) = \\phi(\\gamma_0,x) \\int_{[\\tau^*,\\tau_0]} F_C([t,\\infty)|x,z)f_u(t|z;\\beta_0,\\Lambda_{0}) dt,\n$$\nand\n\\begin{multline*}\nH_0([\\tau^*,\\infty) | x, z) = \\phi(\\gamma_0,x) \\int_{[\\tau^*,\\tau_0]} S_u(t|z;\\beta_0,\\Lambda_{0}) F_C(dt|x,z) \\\\+\\{1- \\phi(\\gamma_0,x)\\} F_C([\\tau^*,\\infty)|x,z) .\n\\end{multline*}\nMoreover, \n\\begin{multline*}\nd(x,z;\\tau^*) := \\phi(\\gamma_0,x) \\int_{[\\tau^*,\\tau_0]} F_C([t,\\infty)|x,z)f_u(t|z;\\beta_0,\\Lambda_{0}) dt \\\\ +\\phi(\\gamma_0,x) \\int_{[\\tau^*,\\tau_0]} S_u(t|z;\\beta_0,\\Lambda_{0}) F_C(dt|x,z) \\\\ = \\phi(\\gamma_0,x) F_C([\\tau^*,\\infty) |x,z) S_u(\\tau^*|z;\\beta_0,\\Lambda_{0})\\\\ = \\mathbb P (T_0 \\wedge C \\geq \\tau^*, B=1).\n\\end{multline*}\nIn the limit case of no cure, $d(x,z;\\tau^*) = H_1([\\tau^*,\\infty) | x, z)+ H_0([\\tau^*,\\infty) | x, z)$. \nBy construction we have \n$\nY^* = \\min (Y, \\tau^*),\n$\nand\n$$\n\\mathbb P (Y^*=\\tau^*, \\Delta^* =1 | X=x, Z=z) = \nd(x,z;\\tau^*).\n$$\n\n\nNext, let \n$$\nH_k^*([0,t] |x,z) = \\mathbb P (Y^*\\leq t, { \\Delta^* = k } | X=x,Z=z), \\quad k\\in\\{0,1\\}, \\quad t\\in[0,\\infty),\n$$\nand let $H_k^*(dt |x,z) $ be the associated conditional measures. 
\nThis means\nfor any $t\\in[0,\\tau^*) $, \n\\begin{equation*\nH^*_1(dt | x, z) = H_1(dt | x, z) \\quad \\text{and} \\quad H^*_0(dt | x, z) = H_0(dt | x, z) .\n\\end{equation*}\nMoreover, \n$$\nH_1^*(\\{\\tau^*\\} | x, z) = H_1^*([\\tau^*,\\infty) | x, z) = d(x,z;\\tau^*),\n$$\nand\n$$\nH_0^*( \\{\\tau^*\\} | x, z) =H_0^*([\\tau^*,\\tau_0] | x, z) = \\{1- \\phi(\\gamma_0,x)\\} F_C([\\tau^*,\\infty)|x,z).\n$$\n\nNow, according to the inversion formulae of \\cite{PK}, without any reference to a model, one can solve the set of equations \n\\begin{eqnarray}\\label{sq2}\nH^*_1(dt | x, z) &=& \\phi^*(x,z) F^*_C([t,\\infty)|x,z) F^*_u(dt|x,z),\\notag \\\\\n\\\\\nH_0^*(dt | x, z ) &=& \\{1-\\phi^*(x,z)+ \\phi^*(x,z) S^*_u(t|x,z) \\}F^*_C(dt|x,z) ,\\notag\n\\end{eqnarray}\nwhere $F_u^* = 1-S_u^*$. Solving \\eqref{sq2} for $F^*_C$, $S^*_u$ and $\\phi^*$, the functional $S^*_u$ is a proper survival function which puts mass only on sets where $H_1^*$ does. Note that solving the similar system with $H_1,H_0$ instead of $H_1^*, H_0^*$, one gets the true $F_C$, $S_u$ and $\\phi$. \nIf $\\Lambda_C^*$ denotes the cumulative hazard function associated to the solution $F^*_C$, then \n$$\n\\Lambda_C^*(dt | x,z) = \\frac{H_0^*(dt | x, z )}{H^*_1((t,\\infty)| x, z)+ H_0^*([t,\\infty) | x, z )},\\quad t\\geq 0,\n$$\nand thus, by construction, we have \n$ F_C(dt|x,z) = F^*_C(dt|x,z)$ on $[0,\\tau^*)$, for any $x,z$. 
Then, by \eqref{eq:inv2_0} and the second equation in \eqref{sq2} we deduce \n$$\n\phi^*(x,z) F^*_u(t|x,z) = \phi(\gamma_0,x)F_u(t|z;\beta_0,\Lambda_{0}), \quad \forall t\in [0,\tau^*), \forall x,z.\n$$ \nNext, taking into account that $S^*_u(t|x,z)= 0$, $\forall t\geq \tau^*$, $\forall x,z$, and integrating the second equation in \eqref{sq2} on $[\tau^*,\infty)$, \nwe obtain\n\begin{multline*}\n\{1-\phi^*(x,z)\}F^*_C([\tau^*,\infty)|x,z) = H_0^*( \{\tau^*\} | x, z ) = \{1-\phi(\gamma_0,x)\}F_C([\tau^*,\infty)|x,z) .\n\end{multline*}\nSince $F^*_C([0,\tau^*)|x,z) = F_C([0,\tau^*)|x,z)$, we deduce that $\phi^*(x,z) = \phi(\gamma_0,x)$\nand thus \n\begin{equation}\label{za1}\nF^*_u(t|x,z) = F_u(t|z;\beta_0,\Lambda_{0})= F_u(t|z;\beta_0,\Lambda_{0|\tau^*}), \quad \forall t\in [0,\tau^*), \forall x,z.\n\end{equation}\nThe second equality in the last display follows from the construction of the survival function from the cumulative hazard function: only the values of $\Lambda_0$ on $[0,t]$ contribute to $F_u(t|z;\beta_0,\Lambda_{0})$.\nSince the inversion formula necessarily yields $F^*_u([0,\tau^*]|x,z) \equiv 1$, we deduce \n\begin{equation}\label{za2}\nF^*_u(\{\tau^* \}|x,z)=S_u(\tau^*|z;\beta_0,\Lambda_{0})=S_u(\tau^*|z;\beta_0,\Lambda_{0|\tau^*}).\n\end{equation} \n\nFinally, we can write \n\begin{multline*}\nE \left[ \ell ( Y^* ,\Delta^*,X,Z;\beta,\Lambda_{|\tau^*},\gamma_0) \right] \\\n= \iiint \n\ell ( t ,1,x,z;\beta,\Lambda_{|\tau^*},\gamma_0) \nH^*_1(dt | x, z) G(dx, dz)\n\\ + \iiint \ell ( t ,0,x,z;\beta,\Lambda_{|\tau^*},\gamma_0) H_0^*(dt | x, z) G(dx, dz).\n\end{multline*}\nTo obtain the identifiability result it remains to apply the Kullback information inequality. 
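In the form needed here, the Kullback information inequality states that, if $p$ and $q$ are probability densities with respect to a common dominating measure $\mu$, then, by Jensen's inequality applied to the logarithm,\n$$\n\int \log\left(\frac{q}{p}\right) p \, d\mu \;\leq\; \log \int_{\{p>0\}} q \, d\mu \;\leq\; 0,\n$$\nwith equality if and only if $p=q$ $\mu$-almost everywhere.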
More precisely, it suffices to notice that here, up to additive terms which do not depend on the parameters, $\ell$ defined in \eqref{el_star} considered with $\beta_0,\Lambda_{0|\tau^*}$ corresponds to the log-density of the conditional law of $(Y^*,\Delta^*)$ given $X=x$ and $Z=z$. (Note that the dominating measure changes as we introduce jumps at $\tau^*$.) This follows from \eqref{za1} and \eqref{za2}. Thus $\beta_0,\Lambda_{0|\tau^*}$ is a solution of the problem \eqref{eq:ident_T0_b}. The uniqueness of the solution is guaranteed by \eqref{eq:ident_T0}.\n\end{proof}\n\n\subsection{Consistency}\n\n\begin{proof}[\textsc{Proof of Theorem~\ref{theo:consistency}.}]\n\tWe follow the idea of \cite{scharfstein}. Since we are interested in almost sure convergence, we work with fixed realizations of the data, $\omega$, that will lie in a set of probability one. Let $\Omega$ be the abstract probability space where the random vector $(B,T_0,C,X,Z)$ is defined (for example, we can take $\Omega=\{0,1\}\times[0,\tau_0]\times[0,\tau]\times\mathcal{X}\times\mathcal{Z}$ and $(B,T_0,C,X,Z)(\omega)=\omega$). Let $N\subset\Omega$ be a set of probability one, $\mathbb{P}(N)=1$, and fix $\omega\in N$. We will show that each subsequence $\hat\gamma_{n_k}$ of $\hat\gamma_n$ has a further subsequence that converges to $\gamma_0$. As a bounded sequence in $\mathbb{R}^p$, $\hat\gamma_{n_k}$ has a convergent subsequence $\hat\gamma_{m_k}\to\gamma^*$. It suffices to show that $\gamma^*=\gamma_0$. 
Since $\hat\gamma_{m_k}$ maximizes $\log \hat{L}_{m_k,1}$, we have\n\t\begin{equation}\n\t\label{eqn:consistency_1}\n\t\begin{split}\n\t0&\leq \frac{1}{m_k}\log \hat{L}_{m_k,1}(\hat\gamma_{m_k})-\frac{1}{m_k}\log \hat{L}_{m_k,1}(\gamma_0)\\\n\t&=\frac{1}{m_k}\sum_{i=1}^{m_k}\left[\left\{1-\hat\pi(X_i)\right\}\log\frac{\phi(\hat\gamma_{m_k},X_i)}{\phi(\gamma_0,X_i)}+\hat\pi(X_i)\log\frac{1-\phi(\hat\gamma_{m_k},X_i)}{1-\phi(\gamma_0,X_i)}\right]\\\n\t&=\frac{1}{m_k}\sum_{i=1}^{m_k}\left[\left\{1-\pi_0(X_i)\right\}\log\frac{\phi(\gamma^*,X_i)}{\phi(\gamma_0,X_i)}+\pi_0(X_i)\log\frac{1-\phi(\gamma^*,X_i)}{1-\phi(\gamma_0,X_i)}\right]+o(1)\n\t\end{split}\n\t\end{equation}\n\tif $N\subset\{\omega: \sup_x\left|\hat\pi(x)-\pi_0(x)\right|\to 0\}$. Note that the remainder term $o(1)$ in the previous display depends on $\omega$ and converges to zero as $\hat\pi$ converges to $\pi_0$. Next we will show that, for an appropriate choice of $N$, the first term converges to \n\t\begin{equation}\n\t\label{eqn:expectation_consistency}\n\t\mathbb{E}\left[\left\{1-\pi_0(X)\right\}\log\frac{\phi(\gamma^*,X)}{\phi(\gamma_0,X)}+\pi_0(X)\log\frac{1-\phi(\gamma^*,X)}{1-\phi(\gamma_0,X)}\right]\n\t\end{equation}\n\twhere the expectation is taken with respect to $X$, for fixed $\gamma^*\in\mathbb{R}^p$ (which depends on the fixed $\omega$). Since here we are dealing with a simple parametric model, this convergence follows easily from the uniform law of large numbers. However, we follow a longer argument to explain the idea that will also be used in the proof of Theorem~\ref{theo:consistency2} (where the model is semiparametric). 
It is obvious, by the law of large numbers, that \n\t\[\n\t\begin{split}\n\t&\frac{1}{m_k}\sum_{i=1}^{m_k}\left[\left\{1-\pi_0(X_i)\right\}\log\phi(\gamma_0,X_i)+\pi_0(X_i)\log\left(1-\phi(\gamma_0,X_i)\right)\right]\\\n\t&\to \mathbb{E}\left[\left\{1-\pi_0(X)\right\}\log \phi(\gamma_0,X)+\pi_0(X)\log\left(1-\phi(\gamma_0,X)\right)\right]\text{ a.s. }\n\t\end{split}\n\t\]\n\tand, at first sight, it seems that the same holds when $\gamma_0$ is replaced by $\gamma^*$. However, the proof is more delicate because $\gamma^*$ depends on $\omega$, and thus so does the event of probability one on which the strong law of large numbers holds for this average. To avoid this we consider a countable dense subset of $G$, $\{\tilde{\gamma}_l\}_{l\geq 1}$ (for example the subset for which all components of $\gamma$ are rational numbers). Now, consider the countable collection of probability-one sets $\{N_l\}_{l\geq 1}$ where\n\t\[\n\t\begin{split}\n\t&\frac{1}{m_k}\sum_{i=1}^{m_k}\left[\left\{1-\pi_0(X_i)\right\}\log\phi(\tilde\gamma_l,X_i)+\pi_0(X_i)\log\left(1-\phi(\tilde\gamma_l,X_i)\right)\right]\\\n\t&\to \mathbb{E}\left[\left\{1-\pi_0(X)\right\}\log \phi(\tilde\gamma_l,X)+\pi_0(X)\log\left(1-\phi(\tilde\gamma_l,X)\right)\right]\quad\forall l\geq 1.\n\t\end{split}\n\t\]\n\tIf $N\subseteq\left(\cap_{l\geq 1} N_l\right)$, we can write \n\t\[\n\t\begin{split}\n\t&\left|\frac{1}{m_k}\sum_{i=1}^{m_k}\left[\left\{1-\pi_0(X_i)\right\}\log\phi(\gamma^*,X_i)+\pi_0(X_i)\log\left(1-\phi(\gamma^*,X_i)\right)\right]\right.\\\n\t&\quad-\mathbb{E}\left[\left\{1-\pi_0(X)\right\}\log \phi(\gamma^*,X)+\pi_0(X)\log\left(1-\phi(\gamma^*,X)\right)\right]\Bigg|\\\n\t&\leq 
\left|\frac{1}{m_k}\sum_{i=1}^{m_k}\left[\left\{1-\pi_0(X_i)\right\}\log\frac{\phi(\gamma^*,X_i)}{\phi(\tilde{\gamma}_l,X_i)}+\pi_0(X_i)\log\frac{\left(1-\phi(\gamma^*,X_i)\right)}{\left(1-\phi(\tilde\gamma_l,X_i)\right)}\right]\right|\\\n\t&\quad+\left|\frac{1}{m_k}\sum_{i=1}^{m_k}\left[\left\{1-\pi_0(X_i)\right\}\log\phi(\tilde\gamma_l,X_i)+\pi_0(X_i)\log\left(1-\phi(\tilde\gamma_l,X_i)\right)\right]\right.\\ \n\t&\qquad-\mathbb{E}\left[\left\{1-\pi_0(X)\right\}\log \phi(\tilde\gamma_l,X)+\pi_0(X)\log\left(1-\phi(\tilde\gamma_l,X)\right)\right]\Bigg|\\\n\t&\quad+\left|\mathbb{E}\left[\left\{1-\pi_0(X)\right\}\log\frac{ \phi(\gamma^*,X)}{\phi(\tilde\gamma_l,X)}+\pi_0(X)\log\frac{1-\phi(\gamma^*,X)}{1-\phi(\tilde\gamma_l,X)}\right]\right|.\n\t\end{split}\n\t\]\n\tSince $\tilde\gamma_l$ can be taken arbitrarily close to $\gamma^*$, by properties of $\phi$ in assumptions (AC3)-(AC4), it can be easily derived that, for an appropriate choice of $\tilde\gamma_l$, the first and the third terms on the right hand side of the previous equation converge to zero. Moreover, the second term also converges to zero in the set of probability one that we are considering. As a result, we can conclude that \n\t\[\n\t\begin{split}\n\t0&\leq \frac{\log \hat{L}_{m_k,1}(\hat\gamma_{m_k})-\log \hat{L}_{m_k,1}(\gamma_0)}{m_k}\\\n\t&=\mathbb{E}\left[\left\{1-\pi_0(X)\right\}\log\frac{\phi(\gamma^*,X)}{\phi(\gamma_0,X)}+\pi_0(X)\log\frac{1-\phi(\gamma^*,X)}{1-\phi(\gamma_0,X)}\right]+o(1).\n\t\end{split}\n\t\]\t\n\tFor each $x\in\mathcal{X}$, consider the function\n\t\[\n\tg_x(z)=\phi(\gamma_0,x)\log\frac{z}{\phi(\gamma_0,x)}+\left\{1-\phi(\gamma_0,x)\right\}\log\frac{1-z}{1-\phi(\gamma_0,x)},\quad z\in(0,1).\n\t\]\n\tIt is easy to check that $g_x(z)\leq 0$ and the equality holds only if $z=\phi(\gamma_0,x)$. 
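Indeed, on $(0,1)$,\n\t\[\n\tg_x'(z)=\frac{\phi(\gamma_0,x)}{z}-\frac{1-\phi(\gamma_0,x)}{1-z},\qquad g_x''(z)=-\frac{\phi(\gamma_0,x)}{z^2}-\frac{1-\phi(\gamma_0,x)}{(1-z)^2}<0,\n\t\]\n\tso $g_x$ is strictly concave with unique maximizer $z=\phi(\gamma_0,x)$, where $g_x(\phi(\gamma_0,x))=0$. 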
Hence, the expectation in \eqref{eqn:expectation_consistency} is smaller than or equal to zero. Due to the inequality in \eqref{eqn:consistency_1}, it must be equal to zero, which means that $\phi(\gamma^*,X)=\phi(\gamma_0,X)$ almost surely. By the identifiability assumption \eqref{eqn:CI3}, this is possible only if $\gamma^*=\gamma_0$.\n\t\end{proof}\n\n\begin{lemma}\n\t\label{lemma:boundedness_hat_Lambda}\n\tAssume that (AC2) and (AC5)\n\thold and that $\tau^*$ is such that \eqref{eqn:no_jump_cond} is satisfied. Then $\sup_{n}\hat\Lambda_n({\tau^*})<\infty$ almost surely.\n\end{lemma}\n\begin{proof}\n\t\n\t\tBy definition\n\t\t\[\n\t\t\hat\Lambda_n({\tau^*})=\frac{1}{n}\sum_{i=1}^{n}\frac{\Delta_i\mathds{1}_{\{Y_i< {\tau^*}\}}}{\frac{1}{n}\sum_{j=1}^n\mathds{1}_{\{Y_i\leq Y_j\leq \tau_0\}}\exp(\hat\beta'_nZ_j)\left\{{\Delta_j}+(1-{\Delta_j})g_j(Y_j,\hat\Lambda_n,\hat\beta_n,\hat\gamma_n)\right\}}.\n\t\t\]\n\t\tFrom assumptions (AC2) and (AC5) we have\n\t\[\n\t\begin{split}\n\t&\frac{1}{n}\sum_{j=1}^n\mathds{1}_{\{\tau^*\leq Y_j\leq \tau_0\}}\exp(\hat\beta'_nZ_j)\left\{\Delta_j+(1-\Delta_j)g_j(Y_j,\hat\Lambda_n,\hat\beta_n,\hat\gamma_n)\right\}\\\n\t&\geq \frac{1}{n}\sum_{j=1}^n\Delta_j\mathds{1}_{\{\tau^*\leq Y_j\leq \tau_0\}}\exp(\hat\beta'_nZ_j)\\\n\t&\geq c\frac{1}{n}\sum_{j=1}^n\Delta_j\mathds{1}_{\{\tau^*\leq Y_j\leq \tau_0\}},\n\t\end{split}\n\t\]\t\n\tfor some $c>0$. Since $\frac{1}{n}\sum_{j=1}^n\Delta_j\mathds{1}_{\{\tau^*\leq Y_j\leq \tau_0\}}\xrightarrow{a.s.} \mathbb{P}\left(Y\geq \tau^*,\Delta=1\right)>0$, it follows that $\frac{1}{n}\sum_{j=1}^n\Delta_j\mathds{1}_{\{\tau^*\leq Y_j\leq \tau_0\}}$ is bounded from below away from zero almost surely. 
As a result\n\t\[\n\t\begin{split}\n\t\sup_n\hat\Lambda_n(\tau^*)&\leq\sup_n \frac{1}{n}\sum_{i=1}^{n}\frac{\Delta_i\mathds{1}_{\{Y_i< \tau^*\}}}{\frac{1}{n}\sum_{j=1}^n\mathds{1}_{\{\tau^*\leq Y_j\leq \tau_0\}}\exp(\hat\beta'_nZ_j)\left\{\Delta_j+(1-\Delta_j)g_j(Y_j,\hat\Lambda_n,\hat\beta_n,\hat\gamma_n)\right\}}\\\n\t&\leq\sup_n \frac{1}{n}\sum_{i=1}^{n}\frac{\Delta_i\mathds{1}_{\{Y_i< \tau_0\}}}{c\frac{1}{n}\sum_{j=1}^n\Delta_j\mathds{1}_{\{\tau^*\leq Y_j\leq \tau_0\}}}\\\n\t&\leq \frac{1}{c}\left(\inf_n{\frac{1}{n}\sum_{j=1}^n\Delta_j\mathds{1}_{\{\tau^*\leq Y_j\leq \tau_0\}}}\right)^{-1}\n\t\end{split}\n\t\]\n\tis bounded almost surely.\n\t%\n\tNote that, if \eqref{eqn:jump_cond} is satisfied, then we can take $\tau^*=\tau_0$.\n\end{proof}\n\n\begin{proof}[\textsc{Proof of Theorem~\ref{theo:consistency2}.}]\n\nLet $0<\tau^* \leq \tau_0$ and \n\begin{equation*}\n\hat{l}_n^*( \beta, \Lambda_{|\tau^*}, \hat\gamma_n ) = \frac{1}{n}\sum_{i=1}^n \ell (Y_i^*,\Delta_i^*,X_i,Z_i;\beta, \Lambda_{|\tau^*}, \hat\gamma_n),\n\end{equation*}\nwith $\ell$ defined in \eqref{el_star}. 
\nIf we consider the Cox PH model for the conditional law of $T_0$, then \n\begin{multline*}\n\hat{l}^*_n(\beta,\Lambda_{|\tau^*}, \hat\gamma_n) \n=\frac{1}{n}\sum_{i=1}^n \Delta_i \left[ \mathds{1}_{\{Y_i < \tau^*\}} \left\{ \log \Delta\Lambda(Y_i)+\beta'Z_i-\Lambda(Y_i)e^{\beta'Z_i} \right\} \n\right] \\\n+\frac{1}{n}\sum_{i=1}^n(1-\Delta_i ) \mathds{1}_{\{Y_i < \tau^*\}}\log\left\{1-\phi(\hat\gamma_n,X_i)+\phi(\hat\gamma_n,X_i)\exp\left(-\Lambda(Y_i)e^{\beta'Z_i}\right)\right\}\\\n- \frac{\Lambda(\tau^*-)}{n}\sum_{i=1}^n \mathds{1}_{\{Y_i \geq \tau^*\}} \mathds{1}_{\{B_i =1\}} e^{\beta'Z_i}\n+ \frac{1}{n}\sum_{i=1}^n \mathds{1}_{\{Y_i \geq \tau^*\}} \mathds{1}_{\{B_i =0\}} \log\left\{1-\phi(\hat\gamma_n,X_i)\right\},\n\end{multline*}\nwhich has to be maximized with respect to $\beta$ and $\Lambda$ in\nthe class of step functions $\Lambda$\nwith jumps of size $\Delta \Lambda$ at the event times in $[0,\tau^*) $. \nAs in \cite{Lu}, it can be shown that the maximizer $(\hat\Lambda^*_n,\hat\beta^*_n)$ of $\hat{l}^*_n$ exists and is finite. Moreover, for $t\in[0,\tau_0] $, $\hat{\Lambda}_n^* = \Lambda^*_{n,\hat\beta^*_n, \hat\gamma_n}$ where \n\begin{equation*}\n\Lambda^*_{n,\beta, \gamma} (t)=\frac{1}{n}\sum_{i=1}^{n}\frac{\Delta_i\mathds{1}_{\{Y_i{\leq t, Y_i<\tau^* }\}}}{\frac{1}{n}\sum_{j=1}^n\mathds{1}_{\{{Y_j\geq Y_i }\}}\exp(\beta' Z_j)\left\{{\Delta_j^*}+(1-{\Delta_j})\mathds{1}_{\{Y_j < \tau^*\}}g_j(Y_j, \Lambda^*_{n,\beta,\gamma},\beta,\gamma)\right\}} ,\n\end{equation*}\n$\Delta^* = \mathds{1}_{\{T_0^* \leq C^*\}}$, that is, $\Delta_i^* = \Delta_i \mathds{1}_{\{Y_i < \tau^*\}} + \mathds{1}_{\{Y_i \geq \tau^*\}} \mathds{1}_{\{B_i =1\}}$, and $g_j(t,\Lambda,\beta,\gamma)$ is defined in \eqref{def:g_j}. 
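The expression of $\Lambda^*_{n,\beta, \gamma}$ is the usual Breslow-type stationarity condition: setting to zero the derivative of $\hat{l}^*_n$ with respect to the jump $\Delta\Lambda(Y_i)$ at an uncensored $Y_i<\tau^*$, where each observation $j$ with $Y_j\geq Y_i$ contributes through $\Lambda(Y_j)$, or through $\Lambda(\tau^*-)$ when $Y_j\geq\tau^*$ and $B_j=1$, and where the derivative of the logarithm in the censored terms produces the factor $g_j$ of \eqref{def:g_j}, one gets\n\begin{equation*}\n\frac{1}{\Delta\Lambda(Y_i)} = \sum_{j:\, Y_j\geq Y_i} \exp(\beta' Z_j)\left\{\Delta_j^*+(1-\Delta_j)\mathds{1}_{\{Y_j < \tau^*\}}g_j(Y_j, \Lambda^*_{n,\beta,\gamma},\beta,\gamma)\right\},\n\end{equation*}\nan implicit equation, since $\Lambda^*_{n,\beta,\gamma}$ appears in $g_j$, which, solved for the jumps and summed over the event times in $[0,t]$, yields the displayed expression.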
\n\nLet\n\begin{equation}\n\label{def:tilde_Lambda_0}\n\tilde\Lambda_{0,n}(t)=\frac{1}{n}\sum_{i=1}^{n}\frac{\Delta_i\mathds{1}_{\{Y_i\leq t{, Y_i<\tau_0}\}}}{\frac{1}{n}\sum_{j=1}^n\mathds{1}_{\{Y_j\geq Y_i{,Y_j\leq\tau_0}\}}\exp(\beta'_0Z_j)\left\{{\Delta_j}+(1-{\Delta_j})g_j(Y_j,\Lambda_{0},\beta_0,\gamma_0)\right\}}. \n\end{equation}\n\n{We want to prove that $\hat\beta_n{\xrightarrow{a.s.}}\beta_0$, and $\sup_{t\in[0,\bar\tau]}|\hat\Lambda_n(t)-\Lambda_0(t)|{\xrightarrow{a.s.}}0$ for any $\bar\tau<\tau_0$. We suppose that the previous statement is false, i.e., $\hat\beta_n$ does not converge almost surely to $\beta_0$ or there exists $\bar\tau$ such that $\sup_{t\in[0,\bar\tau]}|\hat\Lambda_n(t)-\Lambda_0(t)|$ does not converge to zero almost surely. This means that \n\n\tthere exist $\epsilon>0$ and $\bar\tau<\tau_0$ such that\n\t\[\n\t\mathbb{P}[A_{1} (\bar\tau,\epsilon)]>0, \;\; \text{ with } \;\; A_{1} (\bar\tau,\epsilon)=\left\{\limsup_{n\rightarrow \infty }\left[ \left\|\hat\beta_n-\beta_0\right\|+\sup_{t\in[0,\bar\tau]}\left|\hat\Lambda_n(t)-\Lambda_0(t)\right|\right] >\epsilon \right\}.\n\t\]\t\n\tOn the other hand, since $(\hat\Lambda_n,\hat\beta_n)$ maximizes $\hat{l}_n(\beta,\Lambda,\hat\gamma_n)$, for any realization $\omega$ of the data we have \n\t\begin{equation}\n\t\label{eqn:inequality_1}\n\t\hat{l}_n(\hat \beta_n, \hat \Lambda_{n}, \hat \gamma_n) - \hat{l}_n( \beta_0, \tilde \Lambda_{0,n}, \hat \gamma_n) \geq 0.\n\t\end{equation}\nThe idea for obtaining a contradiction is to show that the previous inequality fails for every $\omega$ in some event of positive probability. \n\tWe argue for a fixed realization $\omega$ of the data. \n\tAs a bounded sequence in $\mathbb{R}^q$, $\hat\beta_{n}$ has a convergent subsequence $\hat{\beta}_{n_k}\to\bar\beta$. 
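For the cumulative hazard sequence we rely on the following form of Helly's selection theorem: a sequence of nondecreasing functions that is uniformly bounded on a compact interval admits a subsequence converging pointwise on that interval to a nondecreasing limit,\n\t\[\n\t\sup_{n}\sup_{t\in[a,b]}|F_n(t)|<\infty \;\Longrightarrow\; \exists\, (n_k):\; F_{n_k}(t)\to F(t),\quad \forall t\in[a,b].\n\t\]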
Let $(\tau_i)_{i\geq 1}$ be an increasing sequence such that $\lim_{i\to \infty}\tau_i=\tau_0$. Since for all $\tau<\tau_0$, $\hat\Lambda_n(\tau)<\infty$ almost surely (see Lemma~\ref{lemma:boundedness_hat_Lambda}), by Helly's selection theorem (\cite{ash}), there exists a subsequence $\hat{\Lambda}_{m_k}$ of $\hat{\Lambda}_{n_k}$, converging pointwise to a function $\bar\Lambda$ on $[0,\tau_1]$. Repeating the same argument, we can extract a further subsequence converging pointwise to a function $\bar\Lambda$ on $[0,\tau_2]$, and so on. Hence, there exists a subsequence $\hat{\Lambda}_{r_k}$ converging pointwise to a function $\bar\Lambda$ on all compact subsets of $[0,\tau_0)$. This defines a monotone function $\bar\Lambda$ on $[0,\tau_0)$, which can be extended to $\tau_0$ by taking the limit. \n\tAs in Lemma 2 of \cite{Lu}, it can be shown that $\bar\Lambda$ is absolutely continuous, and pointwise convergence of monotone functions to a continuous monotone function implies uniform convergence on compacts. Note that the chosen subsequence and the limits $\bar\beta$ and $\bar\Lambda$ depend on $\omega$. To keep the notation simple, in what follows we use the index $n$ instead of the chosen subsequence $r_k$. \t\n\tFor any $\tau^*<\tau_0$, we can write\n\n\t\begin{align}\label{lik_deco}\n\t&\!\!\! 
0\\leq \\hat{l}_n(\\hat \\beta_n, \\hat \\Lambda_{n}, \\hat \\gamma_n) - \\hat{l}_n( \\beta_0, \\tilde \\Lambda_{0,n}, \\hat \\gamma_n) \\notag \\\\ \n\t&= \\hat{l}_n^*(\\hat \\beta_n, \\hat \\Lambda_{n|\\tau^*}, \\hat \\gamma_n) + D_{1n} - \\hat{l}^*_n( \\beta_0, \\tilde \\Lambda_{0,n|\\tau^*},\\hat \\gamma_n) - D_{2n} \\notag \\\\\n\t&= \\mathbb E [\\ell (Y^*,\\Delta^*,X,Z;\\bar \\beta, \\bar\\Lambda_{|\\tau^*},\\gamma_0)] + D_{1n}+ R_{1n} \\notag\\\\\n\t&\\quad- \\mathbb E [\\ell (Y^*,\\Delta^*,X,Z;\\beta_0,\\Lambda_{0|\\tau^*} ,\\gamma_0)] - D_{2n}-R_{2n}\n\t,\n\t\\end{align}\n\n\twhere\n\t\\begin{equation}\\label{deco_d1}\n\tD_{1n} = \\hat{l}_n(\\hat \\beta_n, \\hat \\Lambda_{n}, \\hat \\gamma_n) - \\hat{l}_n^*(\\hat \\beta_n, \\hat \\Lambda_{n|\\tau^*}, \\hat \\gamma_n),\n\t\\end{equation}\n\t\\begin{equation}\\label{deco_d2}\n\tD_{2n} = \\hat{l}_n( \\beta_0, \\tilde \\Lambda_{0,n}, \\hat \\gamma_n) - \n\t\\hat{l}^*_n( \\beta_0, \\tilde \\Lambda_{0,n|\\tau^*},\\hat \\gamma_n),\n\t\\end{equation}\n\t\\begin{equation}\\label{deco_r1}\n\tR_{1n} = \\hat{l}_n^*(\\hat \\beta_n, \\hat\\Lambda_{n|\\tau^*}, \\hat \\gamma_n) - \\mathbb E [\\ell (Y^*,\\Delta^*,X,Z;\\bar\\beta, \\bar\\Lambda_{|\\tau^*} , \\gamma_0)] ,\n\t\\end{equation}\n\t\\begin{equation}\\label{deco_r2}\n\tR_{2n} =\\hat{l}^*_n( \\beta_0, \\tilde \\Lambda_{0,n|\\tau^*},\\hat \\gamma_n) - \\mathbb E [\\ell (Y^*,\\Delta^*,X,Z;\\beta_0,\\Lambda_{0|\\tau^*} ,\\gamma_0)] .\n\t\\end{equation}\nNote that the limit of $(\\hat\\beta_n,\\hat\\Lambda_n)$ depends on $\\omega$, but here the expectation is taken with respect to $(Y^*,\\Delta^*,X,Z)$ for fixed $(\\bar\\beta,\\bar\\Lambda)$.\n\tWe now define the event \n\t$\n\tA_{3}(\\tau^*)=\\left\\{|R_{1n}-R_{2n}|\\rightarrow 0\\right\\}.\n\t$\n\tBy Lemma \\ref{lem:consist_full_1}, for any $\\tau^*<\\tau_0$, we have $\\mathbb{P}[A_{1} (\\bar\\tau,\\epsilon)\\cap A_{3}(\\tau^*)]=\\mathbb{P}[A_{1} (\\bar\\tau,\\epsilon)]$.\n\tNext, for $\\bar\\tau<\\tau_0$ and 
$\epsilon>0$ such that $\mathbb{P}[A_{1} (\bar\tau,\epsilon)]>0$, by Lemma \ref{lem:consist_full_3} there exist $0<c_1<1$ and $0<\delta<\tau_0-\bar\tau$ such that we have\n\t\[\n\t\begin{split}\n\tc&=\inf\Bigg\{\mathbb E [\ell (Y^*,\Delta^*,X,Z;\beta_0,\Lambda_{0|\tau^*} ,\gamma_0)]-\t\mathbb E [\ell (Y^*,\Delta^*,X,Z; \beta, \Lambda_{|\tau^*},\gamma_0)]:\\\n\t&\left.\qquad\bar{\tau}+\delta \leq \tau^*<\tau_0,\quad \|\beta-\beta_0\|\geq c_1\epsilon\/2 \quad\text{ or}\quad \sup\limits_{t\in[0,\bar \tau]}|\Lambda(t)-\Lambda_0(t)|\geq (1-c_1) \epsilon \/2 \right\}>0.\n\t\end{split}\n\t\]\n\tNote that if $\omega \in A_{1}(\bar\tau,\epsilon)$ and $\bar\beta$ and $\bar \Lambda$ are the limits for $ \hat \beta_n$ and $\hat \Lambda_n$, respectively, then necessarily, either $\|\bar \beta-\beta_0\|\geq c_1\epsilon\/2 ,$ or $\sup\limits_{t\in[0, \bar \tau]}|\bar \Lambda(t)-\Lambda_0(t)|\geq (1-c_1) \epsilon \/2$, and consequently \n\t$$\n\t\mathbb E [\ell (Y^*,\Delta^*,X,Z;\beta_0,\Lambda_{0|\tau^*} ,\gamma_0)] - \mathbb E [\ell (Y^*,\Delta^*,X,Z;\bar \beta, \bar\Lambda_{|\tau^*},\gamma_0)] \geq c,\quad \forall \bar{\tau}{+\delta}\leq \tau^*<\tau_0.\n\t$$\n\tFinally, we define $$A_{2}(\tau^*)=\left\{ \limsup_{n\rightarrow\infty} \left|D_{1n}-D_{2n} \right| \leq c\/2\right\},$$ with $D_{1n}$ and $D_{2n} $ defined in \eqref{deco_d1} and \eqref{deco_d2}, and choose $\tau^*\in [\bar\tau+\delta,\tau_0)$ such that \n\t$$\n\tc_b \left\{ \mathbb{P}(T_0\geq \tau^*) \log \{1\/\mathbb{P} (T_0\geq \tau^*)\} + \mathbb{P}(C\in[ \tau^*,\tau_0])\right\} < c\/2,\n\t$$\n\twhich, by Lemma \ref{lem:consist_full_2}, guarantees $\mathbb{P}[A_{2}(\tau^*)]=1$, and hence $\mathbb{P}[A_{1} (\bar\tau,\epsilon)\cap A_{2}(\tau^*)\cap A_{3}(\tau^*)]=\mathbb{P}[A_{1} (\bar\tau,\epsilon)]>0$. 
Moreover, with such a suitable $\\tau^*$, \n\tfor any $\\omega\\in A_{1} (\\bar\\tau,\\epsilon)\\cap\n\tA_{2}(\\tau^*)\\cap A_{3}(\\tau^*)$, \n\twe have \n\t$$\\limsup_{n\\rightarrow \\infty} \\left[ \\hat{l}_n(\\hat \\beta_n, \\hat \\Lambda_{n}, \\hat \\gamma_n) - \\hat{l}_n( \\beta_0, \\tilde \\Lambda_{0,n}, \\hat \\gamma_n)\\right] \\leq -c\/2<0.$$ \n\tWe deduce that \\eqref{eqn:inequality_1} is violated on an event of positive probability, which by definition is impossible. Thus\n\t$\\hat\\beta_n{\\xrightarrow{a.s.}}\\beta_0$, and $\\sup_{t\\in[0,\\bar\\tau]}|\\hat\\Lambda_n(t)-\\Lambda_0(t)|{\\xrightarrow{a.s.}}0$ for any $\\bar\\tau<\\tau_0$.\n}\n\n{If condition \\eqref{eqn:jump_cond} is satisfied, we want to show in addition that $|\\hat\\Lambda_n(\\tau_0)-\\Lambda_0(\\tau_0)|{\\xrightarrow{a.s.}}0$. In that case, $\\hat\\Lambda_n(\\tau_0)<\\infty$ almost surely and as a result, for any realization $\\omega$, there exists a subsequence $\\hat\\Lambda_{r_k}$ converging to some absolutely continuous function $\\bar\\Lambda$ uniformly on $[0,\\tau_0]$. \n\tSince we already showed that \n\t$|\\hat\\Lambda_n(t)-\\Lambda_0(t)|{\\xrightarrow{a.s.}}0$ for any $t<\\tau_0$ and $\\Lambda_0(\\tau_0)=\\lim_{t\\uparrow\\tau_0}\\Lambda_0(t)$, we necessarily have $\\bar\\Lambda=\\Lambda_0$ on the whole interval $[0,\\tau_0]$. This concludes the proof of the Theorem.\n}\n\\end{proof}\n\n\n\\begin{lemma}\n\t\\label{lem:consist_full_1}\n\tConsider a realization of the data $\\omega$ and assume that $\\hat\\beta_n(\\omega)\\to\\bar\\beta$ and $\\hat\\Lambda_n(\\omega)(t)\\to \\bar\\Lambda(t)$ for any $t\\in[0,\\tau_0)$, for some absolutely continuous function $\\bar\\Lambda$. Let $0<\\tau^*<\\tau_0$ and let $R_{1n}$, $R_{2n}$ be defined as in \\eqref{deco_r1} and \\eqref{deco_r2}, respectively. 
There exists an event $A_3(\tau^*)$ of probability one such that, for any $\omega\in A_3(\tau^*)$, \n\t$$\n\tR_{1n}(\omega)-R_{2n}(\omega)\to 0.\n\t$$\n\end{lemma}\n\n\n\begin{proof} Let us consider some $0<\tau^*<\tau_0$. From Theorem \ref{theo:consistency} and Lemma 2 in \cite{Lu} it follows that the event \n\t\[\n\tA_3^1(\tau^*)=\left\{\hat\gamma_n\to\gamma_0\quad\text{ and }\quad\sup_{t\in[0,\tau^*]}|\tilde{\Lambda}_{0,n}(t)-\Lambda_0(t)|\to 0\right\}\n\t\]\n\thas probability one. Next we argue for the given realization of the data $\omega\in A_3^1(\tau^*)$ and will determine the event $A_3(\tau^*)$ appropriately. \n\tBy the triangle inequality we can write\n\t\begin{align}\n\t\label{eqn:R2}\n\n\t|R_{1n}\!-R_{2n}|&\leq\left|\left\{\hat{l}^*_n(\hat\beta_n,\hat{\Lambda}_{n|\tau^*},\hat\gamma_n)\! -\hat{l}^*_n(\beta_0,\tilde{\Lambda}_{0,n|\tau^*},\hat\gamma_n)\right\} \!-\! \left\{\hat{l}^*_n(\bar\beta,\bar{\Lambda}_{|\tau^*},\!\gamma_0) \!- \! \hat{l}^*_n(\beta_0,{\Lambda}_{0|\tau^*},\!\gamma_0)\right\}\right| \notag\\\n\t&\quad+\left|\hat{l}^*_n(\beta_0,{\Lambda}_{0|\tau^*},\!\gamma_0)-\mathbb{E}\left[\ell(Y^*,\Delta^*,X,Z;\beta_0,\Lambda_{0|\tau^*},\!\gamma_0)\right]\right|\notag\\\n\t&\quad+\left|\hat{l}^*_n(\bar\beta,\bar{\Lambda}_{|\tau^*},\gamma_0)-\mathbb{E}\left[\ell(Y^*,\Delta^*,X,Z;\bar\beta,\bar\Lambda_{|\tau^*},\gamma_0)\right]\right|.\n\n\t\end{align}\n\tSince $\bar\Lambda$ is absolutely continuous, it is differentiable almost everywhere. Let $\bar\lambda(t)=\mathrm{d}\bar\Lambda(t)\/\mathrm{d} t$. 
\n\tBy definition we have\n\t\[\n\t\begin{split}\n\t&\hat{l}^*_n(\hat\beta_n,\hat{\Lambda}_{n|\tau^*},\hat\gamma_n)-\hat{l}^*_n(\beta_0,\tilde{\Lambda}_{0,n|\tau^*},\hat\gamma_n)\\\n\t&=\frac{1}{n}\sum_{i=1}^n \Delta_i \mathds{1}_{\{Y_i < \tau^*\}} \left\{ \log \frac{\Delta\hat\Lambda_n(Y_i)}{\Delta\tilde\Lambda_{0,n}(Y_i)}+(\hat\beta_n-\beta_0)'Z_i-\hat\Lambda_n(Y_i)e^{\hat\beta'_nZ_i}+\tilde\Lambda_{0,n}(Y_i)e^{\beta'_0Z_i} \right\} \\\n\t&\qquad+\frac{1}{n}\sum_{i=1}^n(1-\Delta_i ) \mathds{1}_{\{Y_i < \tau^*\}}\log\frac{1-\phi(\hat\gamma_n,X_i)+\phi(\hat\gamma_n,X_i)\exp\left(-\hat\Lambda_n(Y_i)e^{\hat\beta'_nZ_i}\right)}{1-\phi(\hat\gamma_n,X_i)+\phi(\hat\gamma_n,X_i)\exp\left(-\tilde\Lambda_{0,n}(Y_i)e^{\beta'_0Z_i}\right)}\\\n\t&\qquad - \frac{1}{n}\sum_{i=1}^n \mathds{1}_{\{Y_i \geq \tau^*\}}\mathds{1}_{\{B_i =1\}}\left\{\n\n\t\hat\Lambda_n(\tau^*)e^{\hat\beta'_nZ_i} - \tilde\Lambda_{0,n}(\tau^*)e^{\beta'_0Z_i}\right\}. 
\n\t\end{split}\n\t\]\n\tIf $\omega\in A^1_3(\tau^*)$, we obtain\n\t\[\n\t\begin{split}\n\t&\hat{l}^*_n(\hat\beta_n,\hat{\Lambda}_{n|\tau^*},\hat\gamma_n)-\hat{l}^*_n(\beta_0,\tilde{\Lambda}_{0,n|\tau^*},\hat\gamma_n)\\\n\t&=\frac{1}{n}\sum_{i=1}^n \Delta_i \mathds{1}_{\{Y_i < \tau^*\}} \left\{ \log \frac{\bar\lambda(Y_i)}{\lambda_{0}(Y_i)}+(\bar\beta-\beta_0)'Z_i-\bar\Lambda(Y_i)e^{\bar\beta'Z_i}+\Lambda_{0}(Y_i)e^{\beta'_0Z_i} \right\} \\\n\t&\qquad+\frac{1}{n}\sum_{i=1}^n(1-\Delta_i ) \mathds{1}_{\{Y_i < \tau^*\}}\log\frac{1-\phi(\gamma_0,X_i)+\phi(\gamma_0,X_i)\exp\left(-\bar\Lambda(Y_i)e^{\bar\beta'Z_i}\right)}{1-\phi(\gamma_0,X_i)+\phi(\gamma_0,X_i)\exp\left(-\Lambda_{0}(Y_i)e^{\beta'_0Z_i}\right)}\\\n\t&\qquad - \frac{1}{n}\sum_{i=1}^n \mathds{1}_{\{Y_i \geq \tau^*\}}\mathds{1}_{\{B_i =1\}}\left\{\n\n\t\bar\Lambda(\tau^*)e^{\bar\beta'Z_i}- \Lambda_{0}(\tau^*)e^{\beta'_0Z_i}\right\} +o(1)\\\n\t&=\hat{l}^*_n(\bar\beta,\bar{\Lambda}_{|\tau^*},\gamma_0)-\hat{l}^*_n(\beta_0,{\Lambda}_{0|\tau^*},\gamma_0)+o(1),\n\t\end{split}\n\t\]\n\twhere the remainder term depends on $\omega$ and converges to zero. Hence, the first term on the right hand side of \eqref{eqn:R2} converges to zero. Let $A^2_3(\tau^*)$ be the event where \n\t\[\n\t\hat{l}^*_n(\beta_0,{\Lambda}_{0|\tau^*},\gamma_0)\to\mathbb{E}\left[\ell(Y^*,\Delta^*,X,Z;\beta_0,\Lambda_{0|\tau^*},\gamma_0)\right] \quad\text{ as }n\to\infty.\n\t\]\n\tBy the law of large numbers $\mathbb{P}[A^2_3(\tau^*)]=1$, implying that the second term on the right hand side of \eqref{eqn:R2} also converges to zero if $\omega\in A^2_3(\tau^*)$. It remains to deal with the third term. 
Note that here $(\bar\beta,\bar\Lambda)$ depend on $\omega$ and the expectation is taken with respect to $(Y^*,\Delta^*,X,Z)$ for fixed $(\bar\beta,\bar\Lambda)$.\n\tWe have the same issue as in the proof of Theorem \ref{theo:consistency} when dealing with the terms involving $\bar\beta$ and $\bar\Lambda$, so we need to consider approximations by elements of a countable dense subset of $\mathcal{B}$ and of the space of bounded, absolutely continuous, increasing functions on $[0,\tau^*]$ (this space is separable, so such a subset exists). The same reasoning is also used in \cite{murphy,Lu,scharfstein}. Hence, there exists a countable collection of probability one sets $\{N_l\}_{l\geq 1}$ where\n\t\[\n\t\hat{l}^*_n(\beta_l,{\Lambda}_{l},\gamma_0)\to\mathbb{E}\left[\ell(Y^*,\Delta^*,X,Z;\beta_l,\Lambda_{l},\gamma_0)\right] \quad\text{ as }n\to\infty\n\t\]\n\tand $(\beta_l,\Lambda_l)$ can be taken arbitrarily close to $(\bar\beta,\bar\Lambda)$. As a result, if $\omega\in A^3_3(\tau^*)=\bigcap_{l\geq 1}N_l$, then \n\t\[\n\t\left|\hat{l}^*_n(\bar\beta,\bar{\Lambda}_{|\tau^*},\gamma_0)-\mathbb{E}\left[\ell(Y^*,\Delta^*,X,Z;\bar\beta,\bar\Lambda_{|\tau^*},\gamma_0)\right]\right|\to 0.\n\t\]\n\tTo conclude, we define $A_3(\tau^*)=A^1_3(\tau^*)\cap A^2_3(\tau^*)\cap A^3_3(\tau^*)$ and we have $\mathbb{P}[A_3(\tau^*)]=1$. \n\end{proof}\n\n\n\begin{lemma}\n\t\label{lem:consist_full_2}\n\tLet $D_{1n}$ and $D_{2n}$ be defined as in \eqref{deco_d1} and \eqref{deco_d2}, respectively, for some $\tau^* <\tau_0$. 
Then there exists a constant $c_b$ independent of $\tau^*$ such that \n\t$$\n\t\mathbb P \left[ \limsup_{n\rightarrow\infty} \left|D_{1n}-D_{2n}\right| > c_b \left\{ \mathbb{P}(T_0\geq \tau^*) \log \{1\/\mathbb{P} (T_0\geq \tau^*)\} + \mathbb{P}(C\in[ \tau^*,\tau_0])\right\} \right] =0.\n\t$$\n\end{lemma}\n\n\begin{proof}\n\n\tBy definition, for any $\gamma$, $\beta$ and any cumulative hazard function $\Lambda$ that is piecewise constant with jumps at the observed event times,\n\t\begin{align*}\n\t& \hat{l}_n(\beta, \Lambda, \gamma) - \hat{l}_n^*(\beta, \Lambda_{|\tau^*}, \gamma)\n\t\\\n\t& = \frac{1}{n}\sum_{i=1}^n \mathds{1}_{\{{\tau^*\leq Y_i< \tau_0 } \}} \Delta_i \log \Lambda (\{Y_i\})\\\n\t& \quad + \frac{1}{n}\sum_{i=1}^n \mathds{1}_{\{{\tau^*\leq Y_i< \tau_0 }\}} \Delta_i \beta^\prime Z_i \n\t\\ \n\t& \quad - \frac{1}{n}\sum_{i=1}^n \mathds{1}_{\{ Y_i\geq \tau^* \}} e^{\beta^\prime Z_i} \{ \Delta_i \Lambda (Y_i) - \mathds{1}_{\{B_i=1\}}\Lambda (\tau^*-) \} \\\n\t& \quad +\frac{1}{n}\!\sum_{i=1}^n \!\mathds{1}_{\{\tau_0\geq Y_i \geq \tau^*\!\}}\!\! \left[\! (1\!-\!\Delta_i )\log\!\left\{\!1\!-\!\phi_i(\gamma)\!+\!\phi_i(\gamma)\exp\left(\!-\Lambda(Y_i)e^{\beta'Z_i}\!\right)\!\!\right\} \right. \! \\\n\t& \quad \quad \left. - \!\mathds{1}_{\{B_i=0\}}\! \log\left\{1\!-\!\phi_i(\gamma)\right\}\! \right]\\\n\t& =: r_{1n} (\Lambda;\tau^*) + r_{2n} (\beta;\tau^*) - r_{3n} (\Lambda, \beta;\tau^*) + r_{4n} (\Lambda, \beta, \gamma;\tau^*) ,\n\t\end{align*}\n\twhere $\phi_i (\gamma) $ is a short notation for $\phi(\gamma,X_i)$. To prove the lemma, we have to suitably bound $r_{1n},\ldots,r_{4n}$. For this purpose, let us notice that, by definition, all the cumulative hazard functions we have to consider ($\hat\Lambda_n$, $\tilde \Lambda_{0,n}$,...) have bounded jumps at the event times. 
More precisely, because the parameter space $\mathcal B$ and $Z$ are assumed to be bounded, there exist constants \n\t$0<c_l\leq c_u<\infty$ such that, for all these cumulative hazard functions, the jump at an uncensored $Y_i$ lies between $\{c_u\, n_i\}^{-1}$ and $\{c_l\, n_i^u\}^{-1}$, where $n_i$ and $n_i^u$ denote the number of observations and of uncensored observations at risk at $Y_i$, respectively. Splitting the sum defining $r_{1n} (\hat \Lambda_{n};\tau^*) - r_{1n} ( \tilde \Lambda_{0,n} ;\tau^*)$ over the uncensored observations in $[\tau^*,\tau_0-a_n]$ and in $(\tau_0-a_n,\tau_0]$, for a suitable sequence $a_n\downarrow 0$, we obtain a bound of the form $A_{1n}+A_{2n}$, where $\rho_n(t)$ below stands for an upper bound of the ratio of the jumps at $t$. On one hand, for any $C>0$,\n\t\begin{equation}\label{eq:up_A2}\n\t\mathbb{P} (\limsup_{n\rightarrow\infty} A_{2n} > C) = 0,\n\t\end{equation}\n\tand, on the other hand, the $\limsup$ of $A_{1n}$ could be controlled by a function of $\tau^*$ almost surely. More precisely, since \n\t$$\n\tA_{2n} \leq \frac{\log n }{n}\sum_{i=1}^n \mathds{1}_{\{ Y_i\in [\tau_0 - a_n ,\tau_0 ] \}} \Delta_i ,\n\t$$\n\twe take $a_n$ such that $p_n\log n \rightarrow 0$ and $p_n \log^2 n \rightarrow \infty$, where \n\t\begin{multline*}\n\tp_n = \mathbb{P} (Y\in [\tau_0-a_n,\tau_0] , \Delta = 1) = \mathbb E \left[ \phi(\gamma_0,X) \int_{[\tau_0-a_n,\tau_0]} F_C([t,\infty)|X,Z) F_u(dt|Z)\right].\n\t\end{multline*} \n\tThen, by Theorem 1(i) from \cite{Wellner}, we have \n\t$$\n\t\lim_{n\rightarrow \infty } \frac{1 }{p_n} \frac{1}{n}\sum_{i=1}^n \mathds{1}_{\{ Y_i\in [\tau_0 - a_n ,\tau_0 ] \}} \Delta_i = 1,\quad a.s.,\n\t$$\n\twhich implies \eqref{eq:up_A2}. 
On the other hand, we have\n\t$$\n\tA_{1n} \leq \log\left(\sup_{t\in [\tau^*,\tau_0 - a_n ] } \rho_n(t) \right) \times \frac{1}{n}\sum_{i=1}^n \mathds{1}_{\{ Y_i\in [\tau^*,\tau_0 ]\}} \Delta_i .\n\t$$\n\tBy the same Theorem 1(i) from \cite{Wellner},\n\t$$\n\t\lim_{n\rightarrow \infty } \sup_{t\in [\tau^*,\tau_0 - a_n ] }\left[ \rho_n(t) \frac{\mathbb{P} (Y\in [t,\tau_0 - a_n ] , \Delta = 1) }{\mathbb{P} (Y\in [t,\tau_0 - a_n ] ) } \right] = 1,\quad a.s.\n\t$$\n\tBy our assumptions, there exists a constant $C_r$, independent of $\tau^*$, $\beta$, $\gamma$ and $\Lambda$, such that \n\t$$\n\t1< \inf_{t\in [\tau^*,\tau_0{-a_n} ] }\frac{\mathbb{P} (Y\in [t,\tau_0 - a_n ] ) }{\mathbb{P} (Y\in [t,\tau_0 - a_n ] , \Delta = 1) } < \sup_{t\in [\tau^*,\tau_0{-a_n} ] }\frac{\mathbb{P} (Y\in [t,\tau_0 - a_n ] ) }{\mathbb{P} (Y\in [t,\tau_0 - a_n ] , \Delta = 1) } \leq C_r.\n\t$$\n\tGathering these facts, we deduce that, with probability 1, for sufficiently large $n$, \n\t$$\n\t\left| r_{1n} (\hat \Lambda_{n};\tau^*) - r_{1n} ( \tilde \Lambda_{0,n} ;\tau^*)\right| \leq c \frac{N^*}{n} ,\n\t$$\n\twhere $N^*$ is the number of uncensored observations in $[\tau^*,\tau_0]$ and $c$ is some constant (independent of $\tau^*$, $\beta$, $\gamma$ and $\Lambda$). 
Here, $N^*$ is a binomial random variable with $n$ trials and success probability \n\t\\begin{multline*}\n\tp^* = \\mathbb P ( Y\\geq\\tau^* , \\Delta = 1) = \\mathbb E \\left[ \\phi(\\gamma_0,X) \\int_{[\\tau^*,\\tau_0]} F_C([t,\\infty)|X,Z) F_u(dt|Z)\\right]\\\\ \\leq \\sup_{x} \\phi(\\gamma_0,x) \\mathbb P (Y\\geq\\tau^* ).\n\t\\end{multline*}\n\t\n\t\n\t\n\t\n\tTo bound $r_{3n}=r_{3n}(\\Lambda, \\beta;\\tau^*)$, we note that $\\mathds{1}_{\\{B_i=1\\}} = \\Delta_i + (1-\\Delta_i )\\mathds{1}_{\\{B_i=1\\}}$ and rewrite \n\t\\begin{multline*}\n\tr_{3n} = \\frac{1}{n}\\sum_{i=1}^n \\mathds{1}_{\\{ Y_i\\geq \\tau^* \\}} e^{\\beta^\\prime Z_i}\\Delta_i \\{ \\Lambda (Y_i) - \\Lambda (\\tau^*-) \\}\\\\- \\frac{\\Lambda (\\tau^*-)}{n}\\sum_{i=1}^n \\mathds{1}_{\\{ Y_i\\geq \\tau^* \\}} e^{\\beta^\\prime Z_i}\\mathds{1}_{\\{B_i=1\\}} (1- \\Delta_i) = r_{3an}-r_{3bn}.\n\t\\end{multline*}\n\tOn one hand, \n\t$$\n\tr_{3an} \\leq \\frac{c_u}{c_l} \\times \\frac{N^*}{n}.\n\t$$\n\tThe last inequality is obtained by bounding the jumps of $\\Lambda$ and using the following identity: for any integer $M\\geq 1$, \n\t$$\n\t\\sum_{k=1}^M \\sum_{j=k}^M \\frac{1}{j} = \\sum_{j,k=1}^M \\frac{\\mathds{1}_{\\{k\\leq j\\}} }{j} = M. 
\n\t$$\n\tTo bound $r_{3bn}$, let us note that \n\t$$\n\t\\Lambda (\\tau^*-) \\leq \\frac{1}{c_l} \\sum_{j=N^*+1}^N \\frac{1}{j} \\leq c_1 \\log \\frac{N}{N^*},\n\t$$\n\twith $c_1$ some constant depending only on $c_l$ and the maximal value of the convergent sequence \n\t$$\\sum_{j=1}^m \\frac{1}{j} - \\log m, \\quad m\\geq 1.$$ Here, $N =\\sum_{i=1}^n \\Delta_i$ is a binomial random variable with $n$ trials and success probability\n\t$$\n\tp = \\mathbb P ( \\Delta = 1) = \\mathbb E \\left[ \\phi(\\gamma_0,X) \\int_{[0,\\tau_0]} F_C([t,\\infty)|X,Z) F_u(dt|Z)\\right] .\n\t$$\n\tThus\n\t$$\n\tr_{3bn} \\leq c \\log \\frac{N}{N^*} \\times \\frac{1}{n}\\sum_{i=1}^n \\mathds{1}_{\\{ Y_i\\geq \\tau^* \\}} \\mathds{1}_{\\{B_i=1\\}} (1- \\Delta_i) = c \\log \\frac{N\/n}{N^*\/n} \\times \\frac{Q^*}{n},\n\t$$\n\twhere $Q^*$ is a binomial variable \n\twith $n$ trials and success probability \n\t\\begin{multline*}\n\tq^* = \\mathbb E \\left[ \\phi(\\gamma_0,X) \\int_{[\\tau^*,\\tau_0]} F_u([t,\\tau_0]|X,Z) F_C(dt|X,Z)\\right]\\\\\\leq \\mathbb E \\left[ \\phi(\\gamma_0,X) F_C([\\tau^*,\\tau_0]|X,Z) F_u([\\tau^*,\\tau_0]|Z)\\right] \\\\\n\t\\leq \\left[ \\sup_{x} \\phi(\\gamma_0,x) \\right]\n\t\\left[\\sup_{x,z} \\frac{F_C([\\tau^*,\\tau_0]|X=x,Z=z)}{\\tau_0 - \\tau^*}\\right] \n\t\\times (\\tau_0 -\\tau^*)\\times \\mathbb P (T_0\\geq \\tau^*) \\\\ \\leq c (\\tau_0 -\\tau^*)\\times \\mathbb P (T_0\\geq \\tau^*) ,\n\t\\end{multline*}\n\tand $c$ is some constant. 
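The two elementary facts used in the bounds above, the exact identity $\sum_{k=1}^M\sum_{j=k}^M 1/j = M$ and the logarithmic control of the harmonic tail $\sum_{j=N^*+1}^{N} 1/j$, can be sanity-checked numerically. The snippet below is only an illustrative sketch (the helper names are ours, not from the paper):

```python
import math

def double_sum(M):
    # sum_{k=1}^M sum_{j=k}^M 1/j; swapping the order of summation gives
    # sum_{j=1}^M (1/j) * #{k : k <= j} = sum_{j=1}^M 1 = M exactly
    return sum(1.0 / j for k in range(1, M + 1) for j in range(k, M + 1))

def harmonic(m):
    # partial sum of the harmonic series H_m = sum_{j=1}^m 1/j
    return sum(1.0 / j for j in range(1, m + 1))

# the double-sum identity holds exactly
for M in (1, 5, 50, 200):
    assert abs(double_sum(M) - M) < 1e-9

# H_N - H_{N*} <= log(N / N*), since H_m - log(m) is decreasing in m
for n_star, n in ((3, 10), (10, 1000), (50, 5000)):
    assert harmonic(n) - harmonic(n_star) <= math.log(n / n_star)
```

The second loop illustrates why $\Lambda(\tau^*-)$ is of order $\log(N/N^*)$ up to the constant $c_1$.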
By the strong Law of Large Numbers, \n\t$$\n\t\\lim_{n\\rightarrow \\infty} \\log \\frac{N}{N^*} = \\log \\frac{p}{p^*}, \\, a.s.\n\t$$\n\t\n\tNext, to bound $ r_{2n}= r_{2n} (\\beta;\\tau^*) $, we write\n\t$$\n\tr_{2n} = \\frac{1}{n}\\sum_{i=1}^n \\mathds{1}_{\\{ {\\tau^*\\leq T_0<\\tau_0} \\}} e^{\\beta^\\prime Z_i} \\Delta_i \n\t\\leq c_u \\frac{N^*}{n}.\n\t$$ \n\t\n\tFinally, to control $ r_{4n}= r_{4n} (\\Lambda, \\beta, \\gamma;\\tau^*)$, \n\tsince $\\mathds{1}_{\\{B_i=0\\}} = (1-\\Delta_i) \\mathds{1}_{\\{B_i=0\\}}$ and $\\log(1+u)\\leq u,$ $\\forall u\\geq 0$,\n\twe have \n\t\\begin{multline*}\n\tr_{4n} = \\frac{1}{n}\\sum_{i=1}^n \\mathds{1}_{\\{\\tau_0\\geq Y_i \\geq \\tau^*\\}} \\mathds{1}_{\\{B_i=0\\}} \\log\\left\\{1+\\frac{\\phi_i(\\gamma)\\exp\\left(-\\Lambda(Y_i)e^{\\beta'Z_i}\\right)}{1-\\phi_i(\\gamma)}\\right\\} \\\\\n\t+ \\frac{1}{n}\\sum_{i=1}^n \\mathds{1}_{\\{Y_i \\geq \\tau^*\\!\\}} (1-\\Delta_i ) \\mathds{1}_{\\{B_i=1\\}}\\log\\left\\{1-\\phi_i(\\gamma)+\\phi_i(\\gamma)\\exp\\left(-\\Lambda(Y_i)e^{\\beta'Z_i}\\right)\\right\\}.\n\t\\end{multline*}\n\tThus \n\t\\begin{multline*}\n\t|r_{4n}| \\leq \\sup_{\\gamma,x}\\left| \\frac{\\phi(\\gamma,x)}{1-\\phi(\\gamma,x)}\\right| \\exp\\left(-c_l \\Lambda(\\tau^*-)\\right) \\frac{1}{n}\\sum_{i=1}^n \\mathds{1}_{\\{Y_i \\geq \\tau^*\\}} \\mathds{1}_{\\{B_i=0\\}}\n\t\\\\ + \n\t\\sup_{\\gamma,x}\\left| \\log\\left\\{1-\\phi(\\gamma,x)\\right\\}\\right| \\times \\frac{1}{n}\\sum_{i=1}^n \\mathds{1}_{\\{Y_i \\geq \\tau^*\\!\\}} (1-\\Delta_i )\\mathds{1}_{\\{B_i=1\\}}\\\\\n\t= c_1\\exp\\left(-c_l \\Lambda(\\tau^*-)\\right) \\frac{R^*}{n} + c_2 \\frac{Q^*}{n},\n\t\\end{multline*}\n\twhere $R^*$ is a binomial variable \n\twith $n$ trials and success probability \n\t$$\n\tr^* = \\mathbb E \\left[ \\{1-\\phi(\\gamma_0,X)\\} F_C([\\tau^*,\\tau_0]|X,Z)\\right],\n\t$$\n\tand $c_1$ and $c_2$ are some constants. 
\n\tWe deduce that there exists a constant $c_3$ such that\n\t$$\n\t|r_{4n}| \\leq c_3 \\left( \n\t\\frac{R^*}{n} + \\frac{Q^*}{n}\\right).\n\t$$\n\t\n\tGathering these facts, there exist constants $C^*$ and \n\t$c^*$, independent of $\\tau^*$, $\\beta$, $\\gamma$ and $\\Lambda$, such that \n\t$$\n\t\\left| \\hat{l}_n(\\beta, \\Lambda, \\gamma) - \\hat{l}_n^*(\\beta, \\Lambda_{|\\tau^*}, \\gamma) \\right| \\leq C^* \\left\\{ \\frac{N^*}{n} + \\frac{Q^*}{n}\\left[1 + \\log {\\frac{n}{N^*}}\\right] + \\frac{R^*}{n} \\right\\} + o_{a.s.} (1), \n\t$$\n\twhere $N^*$, $Q^*$ and $R^*$ are binomial with $n$ trials and success probabilities $p^*$, $q^*$ and $r^*$, respectively, and\n\t$$\n\tp^* + q^* \\leq c^*\\mathbb{P} (T_0\\geq \\tau^*) \\quad \\text{and}\\quad r^* \\leq c^* \\mathbb{P} (C \\in [\\tau^*, \\tau_0]) .\n\t$$\n\t\n\\end{proof}\n\n\n\n\\begin{lemma}\n\t\\label{lem:consist_full_3}\n\tAssume that the conditional distribution of the censoring times given $X=x$ and $Z=z$\n\tis such that, for some constant $C>0$,\n\t$$\n\t\\inf_{[t_1,t_2]\\subset [0,\\tau_0]} \\inf_{x,z}\\{ F_C(t_2|x,z)- F_C(t_1|x,z) \\}>C(t_2-t_1) .\n\t$$\n\tLet $0<\\bar{\\tau}<\\tau_0$ and $\\epsilon>0$. 
There exist $c_1, c_2>0$, $\\delta>0$ such that $c_1+c_2=1$ and \n\n\n\t\\[\n\t\\begin{split}\n\t&\\inf\\Bigg\\{\\mathbb E [\\ell (Y^*,\\Delta^*,X,Z;\\beta_0,\\Lambda_{0|\\tau^*} ,\\gamma_0)]-\t\\mathbb E [\\ell (Y^*,\\Delta^*,X,Z; \\beta, \\Lambda_{|\\tau^*},\\gamma_0)]:\\\\\n\t&\\left.\\qquad\\quad \\bar\\tau+\\delta\\leq \\tau^*<\\tau_0,\\quad \\|\\beta-\\beta_0\\|\\geq c_1\\epsilon\\quad\\text{ or}\\quad \\sup\\limits_{t\\in[0,\\bar\\tau]}|\\Lambda(t)-\\Lambda_0(t)|\\geq c_2\\epsilon\\right\\}>0\n\t\\end{split}\n\t\\]\n\\end{lemma}\n\\begin{proof}\n\n\tNote that, for any $\\tau^*\\in(\\bar\\tau,\\tau_0)$,\n\t\\[\n\t\\mathbb E [\\ell (Y^*,\\Delta^*,X,Z;\\beta_0,\\Lambda_{0|\\tau^*} ,\\gamma_0)]-\t\\mathbb E [\\ell (Y^*,\\Delta^*,X,Z; \\beta, \\Lambda_{|\\tau^*},\\gamma_0)]\n\t\\]\n\tis the Kullback-Leibler divergence $KL(\\mathbb{P}|Q)$, where $\\mathbb{P}$ and $Q$ are the probability measures of $(Y^*,\\Delta^*,X,Z)$ when the true parameters are $(\\Lambda_0,\\beta_0,\\gamma_0)$ and $(\\Lambda,\\beta,\\gamma_0)$ respectively. By Pinsker's inequality, we have\n\t\\[\n\tKL(\\mathbb{P}|Q)\\geq 2\\delta(\\mathbb{P},Q)^2,\n\t\\]\n\twhere $\\delta(\\mathbb{P},Q)$ is the total variation distance between the two probability measures, defined as \n\t\\[\n\t\\delta(\\mathbb{P},Q)=\\sup_{A}|\\mathbb{P}(A)-Q(A)|,\n\t\\]\n\twhere the supremum is taken over all measurable sets $A$. We want to find a positive lower bound for $\\delta(\\mathbb{P},Q)$ independent of $\\tau^*$ and $Q$, for all $Q$ such that $\\|\\beta-\\beta_0\\|\\geq c_1\\epsilon$ or $ \\sup_{t\\in[0,\\bar\\tau]}|\\Lambda(t)-\\Lambda_0(t)|\\geq c_2\\epsilon$. Hence, it is sufficient to find $k>0$ and for each such $Q$ an event $A$, {which could depend on $Q$}, for which $|\\mathbb{P}(A)-Q(A)|>k.$ Without loss of generality we can assume that the covariate vector $Z$ has mean zero. 
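Pinsker's inequality invoked above, $KL(\mathbb P|Q)\geq 2\,\delta(\mathbb P,Q)^2$, can be checked numerically on the simplest family of measures; the snippet below (Bernoulli pairs, with helper names of ours) is purely illustrative:

```python
import math

def kl_bernoulli(p, q):
    # Kullback-Leibler divergence KL(P|Q) for P = Bernoulli(p), Q = Bernoulli(q)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def tv_bernoulli(p, q):
    # total variation distance sup_A |P(A) - Q(A)| = |p - q|
    return abs(p - q)

# Pinsker's inequality: KL(P|Q) >= 2 * TV(P,Q)^2, on a grid of Bernoulli pairs
grid = [i / 20 for i in range(1, 20)]
for p in grid:
    for q in grid:
        assert kl_bernoulli(p, q) >= 2 * tv_bernoulli(p, q) ** 2 - 1e-12
```

This is why a uniform positive lower bound on the total variation distance suffices for a uniform positive lower bound on the Kullback-Leibler divergence.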
\n\n\t\n\t\\bigskip\n\t\\textit{Case 1.} If $\\sup_{t\\in[0,\\bar\\tau]}|\\Lambda(t)-\\Lambda_0(t)|\\geq c_2\\epsilon$, there exists $\\bar{t}\\in[0,\\bar\\tau] $ such that either\n\t\\begin{equation}\\label{eq:cas1}\n\t\\Lambda(\\bar{t})\\geq\\Lambda_0(\\bar{t})+c_2\\epsilon\n\t\\end{equation}\n\tor \n\t\\begin{equation}\\label{eq:cas2}\n\t\\Lambda(\\bar{t})\\leq\\Lambda_0(\\bar{t})-c_2\\epsilon.\n\t\\end{equation} \nWe first consider \\eqref{eq:cas1} and define \n\t$$\n\t\\delta=\\min\\left\n\t\\{\\frac{\\tau_0-\\bar\\tau}{2} ,\\;\\frac{c_2\\epsilon}{2\\sup_{t\\in[0, (\\bar\\tau+\\tau_0)\/2]}\\lambda_0(t)}\\right\\}.\n\t$$ \n\tIt follows that for all \n\t$t\\in [\\bar{t},\\bar{t}+\\delta]\\subset [0, (\\bar\\tau+\\tau_0)\/2] \\subset[0,{\\tau_0}) $ we have $\\Lambda(t)\\geq\\Lambda_0(t)+\\frac12c_2\\epsilon$. Indeed, we can write \n\t\\begin{equation*}\n\t\\Lambda(t)\\geq \\Lambda(\\bar{t})\\geq\\Lambda_0(\\bar{t})+c_2\\epsilon\\geq \\Lambda_0(t)-\\delta\\sup_{u\\in[0,(\\bar\\tau+\\tau_0)\/2]}\\lambda_0(u)+c_2\\epsilon\\geq \\Lambda_0(t)+\\frac12c_2\\epsilon,\\quad \\forall t\\in [\\bar{t},\\bar{t}+\\delta] .\n\t\\end{equation*}\n\n\tSince $Z$ has mean zero, $(\\beta-\\beta_0)^\\prime Z$ also has zero mean. \n\tMoreover, since $B$ is compact and $Z$ is bounded with non-degenerate variance, we have\n\t\\begin{equation}\\label{whynot}\n\t\\inf_{\\beta\\in B} \\mathbb{P}((\\beta-\\beta_0)^\\prime Z\\geq 0)>0\\quad \\text{ and } \\quad \\inf_{\\beta\\in B} \\mathbb{P}((\\beta-\\beta_0)^\\prime Z\\leq 0)>0\n\t\\end{equation}\n\t{(see proof below)}.\n\tLet $A_\\beta$ be the event $\\{\\Delta^*=0, Y^*\\in[\\bar{t},\\bar{t}+\\delta],{(\\beta-\\beta_0)^\\prime Z}\\geq 0\\}$, which depends on $\\beta$ and thus on $Q$. However, by \\eqref{whynot} and the construction of the model, the event $A_\\beta$ has positive probability which stays bounded away from zero. 
Moreover, we have\n\t\\[\n\t\\begin{split}\n{\\mathbb{P}(A_\\beta)-Q(A_\\beta)}&=\\iint\\limits_{{(\\beta-\\beta_0)^\\prime z\\geq 0}}\\int_{\\bar{t}}^{\\bar{t}+\\delta}\\phi(\\gamma_0,x)\\\\\n\t&\\qquad \\quad\\times \\left\\{\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta ^\\prime z}\\right)\\right\\} F_C(\\mathrm{d} t|x,z) G(\\mathrm{d} x,\\mathrm{d} z).\n\t\\end{split}\n\t\\]\n\tWhenever $(\\beta-\\beta_0)^\\prime z\\geq 0$, by the mean value theorem, we obtain\n\t\\[\n\t\\begin{split}\n\t\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta ^\\prime z}\\right)\n\t&=\\left\\{\\Lambda(t)e^{\\beta^\\prime z}-\\Lambda_0(t)e^{\\beta_0 ^\\prime z}\\right\\}e^{-\\xi}\\\\\n\t&=\\left[\\{\\Lambda(t)-\\Lambda_0(t)\\}e^{\\beta_0^\\prime z}+\\Lambda(t)\\{e^{\\beta^\\prime z}-e^{\\beta_0 ^\\prime z}\\}\\right] e^{-\\xi}\\\\\n\t& { \\geq \\{\\Lambda(t)-\\Lambda_0(t)\\}e^{\\beta_0^\\prime z} e^{-\\xi},}\n\t\\end{split}\n\t\\]\n\tfor some $\\xi>0$ such that $|\\xi-\\Lambda_0(t)e^{\\beta_0^\\prime z}|\\leq |\\Lambda(t)e^{\\beta^\\prime z}-\\Lambda_0(t)e^{\\beta_0^\\prime z}|$, {$t \\in[\\bar{t}, \\bar{t}+\\delta]$. }\n\tNow, let \n\t\\[\n\tM(t)=\\Lambda_0(t)\\frac{\\sup_{\\beta,z}e^{\\beta^\\prime z}}{\\inf_{\\beta,z}e^{\\beta ^\\prime z}}+\\frac{\\log 2}{\\inf_{\\beta,z}e^{\\beta^\\prime z}},\\quad {t \\in[\\bar{t}, \\bar{t}+\\delta]. 
}\n\t\\]\n\tThen, for $(\\beta-\\beta_0)^\\prime z\\geq 0$ and $t\\in[\\bar{t},\\bar{t}+\\delta],$ such that $\\Lambda(t)\\leq M(t)$ we simply use \\eqref{eq:cas1} and write \n\t\\[\n\t\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta^\\prime z}\\right)\\geq \\frac12 c_2\\epsilon e^{\\beta_0^\\prime z}e^{-\\xi}\\geq k_1\\epsilon ,\n\t\\]\n\tfor some constant $k_1>0$ independent of $\\Lambda$, $\\beta$ and the event $A_\\beta$, because $M(t)$ is uniformly bounded on $[0,(\\bar{\\tau}+\\tau_0)\/2]$ and thus $e^{-\\xi}$ is bounded away from zero. On the other hand, for $t\\in[\\bar{t},\\bar{t}+\\delta]$ such that $\\Lambda(t)>M(t)$, we have \n\t\\[\n\t\\exp\\left(-\\Lambda(t)e^{\\beta^\\prime z}\\right)\\leq \\exp\\left(-M(t)\\inf_{\\beta,z}e^{\\beta ^\\prime z}\\right)\\leq \\frac{1}{2}\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right).\n\t\\] \n\tConsequently,\n\t\\begin{multline*}\n\t\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta ^\\prime z}\\right)\\geq \\frac{1}{2}\\exp\\left(-\\Lambda_0(t)e^{\\beta_0 ^\\prime z}\\right) \\\\ \\geq \\frac{1}{2}\\exp\\left(-\\Lambda_0((\\bar\\tau+\\tau_0)\/2)\\sup_{\\beta,z}e^{\\beta_0 ^\\prime z}\\right)=k_2(\\bar\\tau)>0.\n\t\\end{multline*}\n\tWe conclude that, for any $t\\in [\\bar{t},\\bar{t}+\\delta]$,\n\t\\[\n\t\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta ^\\prime z}\\right)\\geq {\\min\\{k_1\\epsilon , k_2(\\bar\\tau)\\}}>0.\n\t\\]\n\tIt follows that \n\t\\begin{multline*}\n\t|\\mathbb{P}(A_\\beta)\\! 
-Q(A_\\beta)|\\geq {\\min\\{k_1\\epsilon , k_2(\\bar\\tau)\\}}\\inf_x\\phi(\\gamma_0,x)\\\\\n\t\\times \\iint\\limits_{{(\\beta-\\beta_0)^\\prime z\\geq 0}} \\left\\{F_C(\\bar{t}+\\delta|x,z)\\!-F_C(\\bar{t}|x,z)\\right\\} G(\\mathrm{d} x,\\mathrm{d} z).\n\t\\end{multline*}\n\tBy assumption we have \n\t\\[\n\t\\inf_{x,z}\\{F_C(\\bar{t}+\\delta|x,z)-F_C(\\bar{t}|x,z)\\}\\geq C \\delta,\n\t\\] \n\tyielding that there exists another constant $k_3>0$ independent of $\\Lambda$, $\\beta$ and the event {$A_\\beta$ (but depending on $\\epsilon$ and $\\bar \\tau$)} such that\n\t\\[\n\t\\forall \\beta\\in B,\\quad |\\mathbb{P}(A_\\beta)-Q(A_\\beta)|\\geq k_3 \\inf_{\\beta\\in B} \\mathbb{P}((\\beta-\\beta_0)^\\prime Z\\geq 0)\n\n\t>0.\n\t\\]\n\tNote that {the uniform lower bound} holds for any choice of the constants $c_1$ and $c_2$ in the statement of the Lemma. \n\n\n\n\n\t\n We next consider \\eqref{eq:cas2}. Let\n\t$$\n\t{\\bar\\delta}=\\min\\left\\{\\frac{\\bar \\tau}{2},\\, \\frac{c_2\\epsilon}{2\\sup_{t\\in[0,\\bar\\tau]}\\lambda_0(t)} \\right\\}.\n\t$$ \n\tIt follows that for all \n\t$t\\in [\\, \\bar{t}-{\\bar\\delta},\\bar{t}\\, ] $ we have \n\t$\\Lambda(t)\\leq\\Lambda_0(t)-\\frac12c_2\\epsilon$. Indeed, we can write \n\t\\begin{multline*}\n\t\\Lambda(t)\\leq \\Lambda(\\bar{t}) \\leq \\{ \\Lambda_0(\\bar{t}) - \\Lambda_0(t) \\} + \\Lambda_0(t) - c_2\\epsilon\\\\\n\t\\leq {\\bar\\delta}\\sup_{u\\in[0,\\bar{\\tau}]}\\lambda_0(u)+ \\Lambda_0(t) - c_2\\epsilon \\leq \n\t\\Lambda_0(t)-\\frac12c_2\\epsilon,\\quad \\forall t\\in [\\, \\bar{t}-{\\bar\\delta},\\bar{t}\\, ] .\n\t\\end{multline*}\n\tNext we redefine $A_\\beta$ as the event $\\{\\Delta^*=0, Y^*\\in[\\bar{t}-{\\bar\\delta},\\bar{t}\\,],{(\\beta-\\beta_0)^\\prime Z}\\leq 0\\}$, which depends on $\\beta$ and thus on $Q$. However, by \\eqref{whynot} and the construction of the model, the event $A_\\beta$ has positive probability which stays bounded away from zero. 
Moreover, we have\n\t\\[\n\t\\begin{split}\n\t\\mathbb{P}(A_\\beta)-Q(A_\\beta)&=\\iint\\limits_{{(\\beta-\\beta_0)^\\prime z\\leq 0}}\\int_{\\bar{t}-{\\bar\\delta}}^{\\bar{t}}\\phi(\\gamma_0,x)\\\\\n\t&\\qquad \\quad\\times \\left\\{\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta ^\\prime z}\\right)\\right\\} F_C(\\mathrm{d} t|x,z) G(\\mathrm{d} x,\\mathrm{d} z).\n\t\\end{split}\n\t\\]\n\tWhenever $(\\beta-\\beta_0)^\\prime z\\leq 0$, by the mean value theorem, we obtain\n\t\\[\n\t\\begin{split}\n\t\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta ^\\prime z}\\right)\n\t\\leq \\{\\Lambda(t)-\\Lambda_0(t)\\}e^{\\beta_0^\\prime z} e^{-\\xi} \\leq - \\frac12 c_2\\epsilon e^{\\beta_0^\\prime z}e^{-\\xi},\n\t\\end{split}\n\t\\]\n\tfor some $\\xi>0$ such that $|\\xi-\\Lambda_0(t)e^{\\beta_0^\\prime z}|\\leq |\\Lambda(t)e^{\\beta^\\prime z}-\\Lambda_0(t)e^{\\beta_0^\\prime z}|\\leq 2\\Lambda_0(t)e^{\\beta_0^\\prime z} $, $t \\in[\\bar{t}-{\\bar\\delta}, \\bar{t}\\,]$. Thus necessarily $0<\\xi \\leq 2\\Lambda_0(\\bar\\tau)e^{\\beta_0^\\prime z}$, hence $e^{-\\xi}$ stays bounded away from zero. \n\tUsing the same arguments as for the case \\eqref{eq:cas1}, we deduce that $\\mathbb{P}(A_\\beta)-Q(A_\\beta)$ is negative and stays bounded away from zero. Thus we obtain the result with \n\t$\\bar\\tau \\leq \\tau^*<\\tau_0$ instead of $\\bar\\tau+\\delta \\leq \\tau^*<\\tau_0$.\n\tFinally, it remains to recall that the infimum over a subset is no smaller than the infimum over the whole set. Now the arguments for \\emph{Case 1} are complete for any choice of the constants $c_1$ and $c_2$ in the statement of the Lemma. 
\n\t\n\t\n\t\\bigskip\n\t\\textit{Case 2.} If $\\sup_{t\\in[0,\\bar\\tau]}|\\Lambda(t)-\\Lambda_0(t)|\\leq c_2\\epsilon$, then necessarily $\\|\\beta-\\beta_0\\|\\geq c_1\\epsilon.$ In particular we also have that $\\Lambda(\\bar{\\tau})\\leq\\Lambda_0(\\bar{\\tau})+c_2\\epsilon$, so all such functions $\\Lambda$ are uniformly bounded on $[0,\\bar{\\tau}]$. Without loss of generality we can also assume that\n\t$\\Lambda_0(\\bar\\tau\/2)\\geq 1$ (otherwise we can take a larger $\\bar\\tau$).\n\tNote that \n\t$$\n\tVar ((\\beta-\\beta_0)^\\prime Z) = (\\beta-\\beta_0)^\\prime Var (Z) (\\beta-\\beta_0) \\geq ( c_1\\epsilon)^2 \\lambda_{min} ,\n\t$$\n\twith $\\lambda_{min} $ the smallest eigenvalue of $Var(Z)$. From this lower bound for the variance of $(\\beta-\\beta_0)^\\prime Z$, and since $Z$ is centered and has a bounded support, we have\n\n\t\\begin{equation}\\label{whynot2}\n\t\\inf_{|\\beta-\\beta_0|\\geq c_1\\epsilon}\\left[ \\mathbb{P}((\\beta-\\beta_0)^\\prime Z\\geq z_0)+ \\mathbb{P}((\\beta-\\beta_0)^\\prime Z\\leq - z_0)\\right] >\\frac{( c_1\\epsilon)^2 \\lambda_{min}}{2\\sup \\|Z\\|^2},\n\t\\end{equation}\n\tfor $z_0= c_1\\epsilon \\lambda^{1\/2}_{min}\/2 $ {(see proof below)}.\n\tIf $$\\inf_{|\\beta-\\beta_0|\\geq c_1\\epsilon}\\mathbb{P}\\left((\\beta-\\beta_0)^\\prime Z\\geq z_0\\right)> \\frac{( c_1\\epsilon)^2\\lambda_{min}}{2\\sup \\|Z\\|^2},$$ let $A_\\beta$ be the event $\\{\\Delta^*=0,Y^*\\in[\\bar\\tau\/2,\\bar\\tau],(\\beta-\\beta_0)^\\prime Z \\geq z_0\\}$. 
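The variance lower bound above is the standard quadratic-form inequality $v^\prime \Sigma v\geq\lambda_{min}\|v\|^2$, applied with $\Sigma=Var(Z)$ and $v=\beta-\beta_0$. A small numerical illustration (the $2\times 2$ matrix below is a made-up example, not taken from the paper):

```python
import math

def quad_form(Sigma, v):
    # computes v' Sigma v for a small matrix given as a list of rows
    n = len(v)
    return sum(v[i] * Sigma[i][j] * v[j] for i in range(n) for j in range(n))

# hypothetical 2x2 covariance matrix with eigenvalues 1 and 3
Sigma = [[2.0, 1.0], [1.0, 2.0]]
lam_min = 1.0

# v' Sigma v >= lam_min * ||v||^2 in every direction; here ||v|| plays
# the role of ||beta - beta_0|| >= c1 * eps
for k in range(36):
    theta = 2 * math.pi * k / 36
    v = (0.7 * math.cos(theta), 0.7 * math.sin(theta))
    assert quad_form(Sigma, v) >= lam_min * (v[0] ** 2 + v[1] ** 2) - 1e-12
```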
\n\tBy \\eqref{whynot2} and the construction of the model, the event $A_\\beta$ has positive probability which stays bounded away from zero.\n\tNext, as in \\emph{Case 1}, we write\n\t\\[\n\t\\begin{split}\n\t\\mathbb{P}(A_\\beta) - Q(A_\\beta)&= \\iint\\limits_{(\\beta-\\beta_0)^\\prime z \\geq z_0}\\int_{\\frac12\\bar{\\tau}}^{\\bar{\\tau}}\\phi(\\gamma_0,x)\\\\\n\t&\\qquad \\qquad \\times \\left\\{\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta^\\prime z}\\right)\\right\\} F_C(\\mathrm{d} t|x,z)\\, G(\\mathrm{d} x,\\mathrm{d} z) ,\n\t\\end{split}\n\t\\]\n\tand\n\t\\[\n\t\\begin{split}\n\t\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta^\\prime z}\\right)=\\left[\\{\\Lambda(t)-\\Lambda_0(t)\\}e^{\\beta_0^\\prime z}+\\Lambda(t)\\{e^{\\beta ^\\prime z}-e^{\\beta_0 ^\\prime z}\\}\\right] e^{-\\xi},\n\t\\end{split}\n\t\\]\n\tfor some $\\xi>0$ such that $|\\xi-\\Lambda_0(t)e^{\\beta_0 ^\\prime z}|\\leq |\\Lambda(t)e^{\\beta^\\prime z}-\\Lambda_0(t)e^{\\beta_0^\\prime z}|$. From the boundedness of $\\beta$, $z$, $\\Lambda$ and $\\Lambda_0$ on $[0,\\bar\\tau]$, it follows that $e^{-\\xi}\\geq k_4>0$ for some $k_4$ independent of $\\Lambda,$ $\\beta$ and the event $A_\\beta$ (but depending on $\\bar\\tau$). 
Moreover, since for $t\\in[\\bar\\tau\/2,\\bar\\tau]$, $(\\beta-\\beta_0)^\\prime z\\geq z_0$, \n\t\\[\n\t\\left|\\Lambda(t)-\\Lambda_0(t)\\right|e^{\\beta_0^\\prime z}\\leq c_2\\epsilon e^{\\beta_0^\\prime z},\n\t\\]\n\tand \n\t\\[\n\t\\begin{split}\n\t\\Lambda(t)\\{e^{\\beta^\\prime z}-e^{\\beta_0^\\prime z}\\}\n\t\\geq \\Lambda_0 (\\bar\\tau\/2) e^{\\beta_0^\\prime z}\\{ e^{(\\beta-\\beta_0)^\\prime z} - 1\\}\\geq e^{\\beta_0^\\prime z}z_0 = e^{\\beta_0^\\prime z}\\lambda^{1\/2}_{min} \\,c_1\\epsilon \/2,\n\t\\end{split}\n\t\\]\n\twe obtain \n\t\\[\n\t\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta ^\\prime z}\\right)\\geq \\epsilon \\left[ \\lambda^{1\/2}_{min} \\,c_1\/2 - c_2\\right] e^{\\beta_0^\\prime z}e^{-\\xi} .\n\t\\]\n\tDefine\n\t\\[\n\tc_1= \\frac{4}{\\lambda_{min}^{1\/2} + 4}\\qquad\\text{and}\\qquad c_2= \\frac{\\lambda_{min}^{1\/2}}{\\lambda_{min}^{1\/2} + 4},\n\t\\]\n\tsuch that $0<c_1, c_2<1$, $c_1+c_2=1$ and \n\t$$\n\t\\lambda^{1\/2}_{min} \\,c_1\/2 - c_2 = \\frac{\\lambda_{min}^{1\/2}}{\\lambda_{min}^{1\/2} + 4}>0.\n\t$$\n\tAs in \\emph{Case 1}, it follows that \n\t\\begin{multline*}\n\t|\\mathbb{P}(A_\\beta)-Q(A_\\beta)|\\geq \\epsilon \\left[ \\lambda^{1\/2}_{min} \\,c_1\/2 - c_2\\right] k_4 \\inf_x\\phi(\\gamma_0,x)\\inf_z e^{\\beta_0^\\prime z}\\\\\n\t\\times \\iint\\limits_{{(\\beta-\\beta_0)^\\prime z\\geq z_0}} \\left\\{F_C(\\bar{\\tau}|x,z)-F_C(\\bar{\\tau}\/2|x,z)\\right\\} G(\\mathrm{d} x,\\mathrm{d} z)>0.\n\t\\end{multline*}\n\tNote that this bound does not depend on $\\delta$ in the statement of the Lemma. Finally, it is easy to see that similar arguments apply when $\\inf_{|\\beta-\\beta_0|\\geq c_1\\epsilon}\\mathbb{P}((\\beta-\\beta_0)^\\prime Z\\leq - z_0)> ( c_1\\epsilon)^2 \\lambda_{min}\/\\{4\\sup \\|Z\\|^2\\} $, for the same expression of $z_0$. In this case, we define \n\t$A_\\beta= \\{\\Delta^*=0,Y^*\\in[\\bar\\tau\/2,\\bar\\tau],(\\beta-\\beta_0)^\\prime Z \\leq -z_0\\}$ and follow the same steps as above to show that \n\t\\[\n\t\\exp\\left(-\\Lambda_0(t)e^{\\beta_0^\\prime z}\\right)-\\exp\\left(-\\Lambda(t)e^{\\beta ^\\prime z}\\right) <0,\n\t\\]\n\tand the difference of exponentials stays away from zero. Now, the proof of Lemma \\ref{lem:consist_full_3} is complete. \n\\end{proof}\n\n\n\n\\begin{proof}[\\textsc{Proof of Equation \\eqref{whynot}.}]\n\tFor $\\beta=\\beta_0$ we have $\\mathbb{P}((\\beta-\\beta_0)^\\prime Z\\geq 0)=1$ and thus we only have to study $\\beta\\neq \\beta_0$. 
Since $Var(Z)$ is non-degenerate, $\\mathbb{P}((\\beta-\\beta_0)^\\prime Z= 0)<1$ for any $\\beta\\neq \\beta_0$. Next, we can write \n\t\\begin{multline*}\n\t0 = \\mathbb E \\left( \\frac{(\\beta-\\beta_0)^\\prime Z}{\\|\\beta-\\beta_0\\|} \\right) = \\mathbb E \\left( \\frac{(\\beta-\\beta_0)^\\prime Z}{\\|\\beta-\\beta_0\\|} \\mathds{1}_{\\{ (\\beta-\\beta_0)^\\prime Z \\geq 0\\} }\\right) + \n\t\\mathbb E \\left( \\frac{(\\beta-\\beta_0)^\\prime Z}{\\|\\beta-\\beta_0\\|} \\mathds{1}_{ \\{ (\\beta-\\beta_0)^\\prime Z < 0 \\} }\\right) \\\\ \\leq \\sup\\|Z\\| \\,\\mathbb{P}((\\beta-\\beta_0)^\\prime Z\\geq 0) +\\mathbb E \\left( \\frac{(\\beta-\\beta_0)^\\prime Z}{\\|\\beta-\\beta_0\\|} \\mathds{1}_{\\{ (\\beta-\\beta_0)^\\prime Z < 0 \\} }\\right) .\n\t\\end{multline*}\n\tIt remains to notice that \n\t$$\\sup_{\\|\\tilde \\beta\\|=1} \\mathbb E \\left( \\tilde \\beta^\\prime Z\\mathds{1}_{\\{ \\tilde \\beta^\\prime Z < 0 \\} }\\right)<0.$$\n\tIndeed, suppose that $\\mathbb E \\left( \\tilde \\beta^\\prime Z\\mathds{1}_{\\{ \\tilde \\beta^\\prime Z < 0 \\} }\\right)$, which is negative, could be arbitrarily close to zero. Since $\\mathbb E \\left( \\tilde \\beta^\\prime Z\\mathds{1}_{\\{ \\tilde \\beta^\\prime Z < 0 \\} }\\right)= - \\mathbb E \\left( \\tilde \\beta^\\prime Z\\mathds{1}_{\\{ \\tilde \\beta^\\prime Z \\geq 0 \\} }\\right)$, this would imply that $\\mathbb E (| \\tilde \\beta^\\prime Z|)$ could be arbitrarily close to zero, for suitable $ \\tilde \\beta$ with unit norm. Since the support of $Z$ is bounded and \n\t$$\n\t\\lambda_{min} (Var(Z)) \\leq \\mathbb E (| \\tilde \\beta^\\prime Z|^2)\\leq \\mathbb E (| \\tilde \\beta^\\prime Z|)\\sup \\|Z\\|, \\quad \\forall \\|\\tilde \\beta\\|=1,\n\t$$\n\twe thus get a contradiction with the assumption that $\\lambda_{min} $, the smallest eigenvalue of $Var(Z)$, is positive. \n\tWe deduce that \\eqref{whynot} holds true. 
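The elementary inequalities behind the proof of \eqref{whynot} — for a centered bounded variable $U$, $Var(U)\leq \mathbb E|U|\cdot\sup|U|$ and $\mathbb E|U|=2\,\mathbb E[U\mathds{1}_{\{U\geq 0\}}]$, whence $\mathbb P(U\geq 0)\geq Var(U)/(2\sup|U|^2)$ — can be illustrated on simple discrete distributions (the distributions below are toy examples of ours):

```python
# discrete variables given as (value, probability) pairs; we center them first
def moments(dist):
    mean = sum(v * p for v, p in dist)
    var = sum((v - mean) ** 2 * p for v, p in dist)
    abs1 = sum(abs(v - mean) * p for v, p in dist)   # E|U|
    sup = max(abs(v - mean) for v, _ in dist)        # sup|U|
    p_pos = sum(p for v, p in dist if v - mean >= 0) # P(U >= 0)
    return var, abs1, sup, p_pos

dists = [
    [(-1.0, 0.5), (1.0, 0.5)],
    [(-2.0, 0.2), (0.0, 0.3), (1.0, 0.5)],
    [(-0.5, 0.8), (2.0, 0.2)],
]
for dist in dists:
    var, abs1, sup, p_pos = moments(dist)
    # Var(U) = E(U^2) <= E|U| * sup|U|, hence E|U| >= Var(U) / sup|U| > 0
    assert var <= abs1 * sup + 1e-12
    # P(U >= 0) >= E[U 1{U>=0}] / sup|U| = E|U| / (2 sup|U|) >= Var(U) / (2 sup|U|^2)
    assert p_pos >= var / (2 * sup ** 2) - 1e-12
```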
\\end{proof}\n\n\n\n\n\\quad\n\n\\begin{proof}[\\textsc{Proof of Equation \\eqref{whynot2}.}]\n\tIt suffices to prove the following property. \n\tLet $U$ be a centered variable such that $|U|\\leq M$ for some constant $M$ and $Var(U)$ is bounded from below by some constant $C>0$. Then there exists $z_0>0$ such that $\\mathbb{P}(|U|\\geq z_0)> C\/2M^2$, with $z_0$ depending on $M$ and $C$ but independent of the law of $U$. \n\t\n\tFor any $0<z_0<M$, since $U$ is centered, \n\t$$\n\tC\\leq Var(U) = \\mathbb E (U^2) \\leq z_0^2 + M^2\\, \\mathbb{P}(|U|\\geq z_0),\n\t$$\n\tso that $\\mathbb{P}(|U|\\geq z_0)\\geq (C-z_0^2)\/M^2$. Taking $z_0=\\sqrt{C}\/2$, which is admissible because $C\\leq Var(U)\\leq M^2$, we obtain $\\mathbb{P}(|U|\\geq z_0)\\geq 3C\/(4M^2)>C\/(2M^2)$. \n\\end{proof}\n\n\n\n\\begin{proof}[\\textsc{Proof of Theorem \\ref{theo:asymptotic_normality}.}]\n\tDefine, for $\\gamma\\in G$ and $\\pi\\in\\Pi$,\n\\[\nM_n(\\gamma,\\pi)=\\frac{1}{n}\\sum_{i=1}^n\\left\\{\\frac{1-\\pi(X_i)}{\\phi(\\gamma,X_i)}-\\frac{\\pi(X_i)}{1-\\phi(\\gamma,X_i)}\\right\\}\\nabla_\\gamma\\phi(\\gamma,X_i)\n\\]\nand $M(\\gamma,\\pi)=\\mathbb{E}\\left[\\left\\{\\frac{1-\\pi(X)}{\\phi(\\gamma,X)}-\\frac{\\pi(X)}{1-\\phi(\\gamma,X)}\\right\\}\\nabla_\\gamma\\phi(\\gamma,X)\\right]$, and let\n\\begin{equation}\n\\label{def:gradient_M}\n\\Gamma_1:=\\nabla_\\gamma M(\\gamma_0,\\pi_0),\n\\end{equation}\nwhich, by assumption (AN3), is negative definite, so that\n\\[\n\\det(\\Gamma_1^\\prime\\Gamma_1)>0.\n\\]\nWe will also use the Gateaux derivative of $M(\\gamma,\\pi_0)$ in a direction $[\\pi-\\pi_0]$ given by\n{\\small\\begin{equation}\n\\label{eqn:directional_derivative}\n\\begin{split}\n\\Gamma_2(\\gamma,\\pi_0)[\\pi-\\pi_0]&:=\\nabla_\\pi M(\\gamma,\\pi_0)[\\pi-\\pi_0]\\\\\n&=\\lim_{h\\to 0}\\frac{1}{h}\\left[M(\\gamma,\\pi_0+h(\\pi-\\pi_0))-M(\\gamma,\\pi_0)\\right]\\\\\n&=-\\lim_{h\\to 0}\\frac{1}{h}\\mathbb{E}\\left[\\left\\{\\frac{h\\left\\{\\pi(X)-\\pi_0(X)\\right\\}}{\\phi(\\gamma,X)}+\\frac{h\\left\\{\\pi(X)-\\pi_0(X)\\right\\}}{1-\\phi(\\gamma,X)} \\right\\}\\nabla_\\gamma\\phi(\\gamma,X)\\right]\\\\\n&=-\\mathbb{E}\\left[\\left\\{\\pi(X)-\\pi_0(X)\\right\\}\\left\\{\\frac{1}{\\phi(\\gamma,X)}+\\frac{1}{1-\\phi(\\gamma,X)} \\right\\}\\nabla_\\gamma\\phi(\\gamma,X)\\right].\n\\end{split}\n\\end{equation} } \n\tWe apply Theorem 2 in \\cite{chen} so we need to verify its conditions. Consistency of $\\hat\\gamma_n$ is shown in Theorem \\ref{theo:consistency}, while condition (2.1) in \\cite{chen} is satisfied by construction since \n\t\\[\n\t\\Vert M_n(\\hat\\gamma_n,\\hat\\pi_n)\\Vert=0=\\inf_{\\gamma\\in G}\\Vert M_n(\\gamma,\\hat\\pi_n)\\Vert.\n\t\\] \n\tNote that assumption (AC1) was needed in Theorem \\ref{theo:consistency} in order to obtain almost sure convergence. However, here we only need convergence in probability, for which (AN4)-(ii) suffices. 
\n\tFor condition (2.2) in \\cite{chen}, the derivative of $M$ with respect to $\\gamma$ is computed in \\eqref{def:gradient_M} and the matrix is negative definite (as a result also full rank) because of our assumption (AN3). Moreover the directional derivative was computed in \\eqref{eqn:directional_derivative} and for $(\\gamma,\\pi)\\in G_{\\delta_n}\\times\\Pi_{\\delta_n}$ with $G_{\\delta_n}=\\{\\gamma\\in G: \\Vert\\gamma-\\gamma_0\\Vert\\leq\\delta_n\\}$, $\\Pi_{\\delta_n}=\\{\\pi\\in \\Pi: \\Vert\\pi-\\pi_0\\Vert_{\\infty}\\leq\\delta_n\\}$, $\\delta_n=o(1)$, we have\n\t\\[\n\t\\begin{split}\n\t&\\left\\Vert M(\\gamma,\\pi)-M(\\gamma,\\pi_0)-\\Gamma_2(\\gamma,\\pi_0)[\\pi-\\pi_0]\\right\\Vert\\\\\n\t&=\\left\\Vert\\mathbb{E}\\left[\\left\\{\\frac{\\pi_0(X)-\\pi(X)}{\\phi(\\gamma,X)} +\\frac{\\pi_0(X)-\\pi(X)}{1-\\phi(\\gamma,X)} \\right\\}\\nabla_\\gamma\\phi(\\gamma,X)\\right]\\right.\\\\\n\t&\\quad+\\left.\\mathbb{E}\\left[\\left\\{\\pi(X)-\\pi_0(X)\\right\\}\\left\\{\\frac{1}{\\phi(\\gamma,X)} +\\frac{1}{1-\\phi(\\gamma,X)}\\right\\}\\nabla_\\gamma\\phi(\\gamma,X)\\right]\\right\\Vert=0,\n\t\\end{split}\n\t\\]\n\twhich means that condition (2.3i) is satisfied. For condition (2.3ii), we have \n\t\\[\n\t\\begin{split}\n\t&\\Gamma_2(\\gamma,\\pi_0)[\\pi-\\pi_0]-\\Gamma_2(\\gamma_0,\\pi_0)[\\pi-\\pi_0]\\\\\n\t&=-\\mathbb{E}\\left[\\left\\{\\pi(X)-\\pi_0(X)\\right\\}\\left\\{\\left( \\frac{1}{\\phi(\\gamma,X)} +\\frac{1}{1-\\phi(\\gamma,X)}\\right)\\nabla_\\gamma\\phi(\\gamma,X)\\right.\\right.\\\\\n\t&\\quad\\qquad-\\left.\\left. 
\\left( \\frac{1}{\\phi(\\gamma_0,X)} +\\frac{1}{1-\\phi(\\gamma_0,X)}\\right)\\nabla_\\gamma\\phi(\\gamma_0,X)\\right\\}\\right].\n\t\\end{split}\n\t\\]\n\tThen, from $\\sup_x|\\pi(x)-\\pi_0(x)|\\leq\\delta_n$, $|\\gamma-\\gamma_0|\\leq\\delta_n\\to 0$ and (AN1),\n\tit follows that \n\t\\[\n\t\\Vert\\Gamma_2(\\gamma,\\pi_0)[\\pi-\\pi_0]-\\Gamma_2(\\gamma_0,\\pi_0)[\\pi-\\pi_0]\\Vert\\leq o(1)\\delta_n.\n\t\\]\n\tConditions (2.4) and (2.6) in \\cite{chen} are satisfied thanks to our assumption (AN4) because\n\t\\begin{equation}\n\t\\label{eqn:M_n-Gamma_2}\n\t\\begin{split}\n\t&M_n(\\gamma_0,\\pi_0)+\\Gamma_2(\\gamma_0,\\pi_0)[\\hat\\pi-\\pi_0]\\\\\n\t&=\\frac{1}{n}\\sum_{i=1}^n\\left\\{\\frac{1-\\pi_0(X_i)}{\\phi(\\gamma_0,X_i)}-\\frac{\\pi_0(X_i)}{1-\\phi(\\gamma_0,X_i)}\\right\\}\\nabla_\\gamma\\phi(\\gamma_0,X_i)\\\\\n\t&\\quad+ \\mathbb{E}^*\\left[\\left\\{\\hat\\pi(X)-\\pi_0(X)\\right\\}\\left(\\frac{1}{\\phi(\\gamma_0,X)} +\\frac{1}{1-\\phi(\\gamma_0,X)}\\right)\\nabla_\\gamma\\phi(\\gamma_0,X)\\right] \\\\\n\t&=\\mathbb{E}^*\\left[\\left\\{\\hat\\pi(X)-\\pi_0(X)\\right\\}\\left(\\frac{1}{\\phi(\\gamma_0,X)} +\\frac{1}{1-\\phi(\\gamma_0,X)}\\right)\\nabla_\\gamma\\phi(\\gamma_0,X)\\right].\n\t\\end{split}\n\t\\end{equation}\n\tThen we conclude by central limit theorem that \n\t\\[\n\t\\sqrt{n}\\left(M_n(\\gamma_0,\\pi_0)+\\Gamma_2(\\gamma_0,\\pi_0)[\\hat\\pi-\\pi_0]\\right)\\xrightarrow{d} N(0,V)\n\t\\]\n\twhere $V=Var(\\Psi(Y,\\Delta,X,Z))$. 
It remains to deal with condition (2.5), which is a consequence of Theorem 3 in \\cite{chen} and assumption (AN2) because from (AN1) we have\n\t\\[\n\t\\begin{split}\n\t\\Vert m(x;\\gamma_1,\\pi_1)-m(x;\\gamma_2,\\pi_2)\\Vert\n\t&\\leq \\left\\Vert\\left(\\frac{1-\\pi_1(x)}{\\phi(\\gamma_1,x)} +\\frac{\\pi_1(x)}{1-\\phi(\\gamma_1,x)}\\right)\\nabla_\\gamma\\phi(\\gamma_1,x)\\right.\\\\\n\t&\\left.\\qquad\\qquad-\\left(\\frac{1-\\pi_2(x)}{\\phi(\\gamma_2,x)} +\\frac{\\pi_2(x)}{1-\\phi(\\gamma_2,x)}\\right)\\nabla_\\gamma\\phi(\\gamma_2,x)\\right\\Vert\\\\\n\t&\\leq C_1\\Vert\\gamma_1-\\gamma_2\\Vert+C_2\\Vert\\pi_1-\\pi_2\\Vert_\\infty.\n\t\\end{split}\n\t\\]\n\tFinally, the asymptotic normality follows from Theorem 2 in \\cite{chen} and the asymptotic covariance matrix is given by \n\t\\begin{equation}\n\t\\label{def:Sigma_gamma}\n\t\\Sigma_\\gamma=(\\Gamma'_1\\Gamma_1)^{-1}\\Gamma'_1V\\Gamma_1(\\Gamma'_1\\Gamma_1)^{-1}=\\Gamma_1^{-1}V\\Gamma_1^{-1}.\n\t\\end{equation}\n\t\\end{proof}\n\\begin{proof}[\\textsc{Proof of Theorem \\ref{theo:asymptotic_normality2}.}]\n\tWe show that conditions 1 and 4 of Theorem 4 in \\cite{Lu} are satisfied. 
Define $S_n$ as the version of $\\hat{S}_n$ where $\\hat\\gamma_n$ is replaced by $\\gamma_0$ \n\\[\n\\begin{split}\n{S}_n(\\hat\\Lambda_n,\\hat\\beta_n)(h_1,h_2)&=\\frac{1}{n}\\sum_{i=1}^n \\Delta_i\\mathds{1}_{\\{Y_i<\\tau_0\\}} \\left[h_1(Y_i)+h'_2Z_i\\right]\\\\\n&\\quad-\\frac{1}{n}\\sum_{i=1}^n \\left\\{ \\Delta_i+(1-\\Delta_i)\\mathds{1}_{\\{Y_i\\leq\\tau_0\\}}g_i(Y_i,\\hat\\Lambda_n,\\hat\\beta_n,\\gamma_0) \\right\\}\\\\\n&\\qquad\\qquad\\quad\\times\\left\\{e^{\\hat\\beta'_nZ_i}\\int_0^{Y_i}h_1(s)\\mathrm{d}\\hat\\Lambda_n(s)+e^{\\hat\\beta'_nZ_i}\\hat\\Lambda_n(Y_i)h'_2Z_i\n\\right\\}\n\\end{split}\n\\]\n\\textit{Condition 1.} We start by writing \n\\begin{equation}\n\\label{eqn:condition1}\n\\begin{split}\n\\hat{S}_n(\\Lambda_0,\\beta_0)(h_1,h_2)-{S}(\\Lambda_0,\\beta_0)(h_1,h_2)&=\\left[\\hat{S}_n(\\Lambda_0,\\beta_0)(h_1,h_2)-{S}_n(\\Lambda_0,\\beta_0)(h_1,h_2)\\right]\\\\\n&\\quad+\\left[{S}_n(\\Lambda_0,\\beta_0)(h_1,h_2)-{S}(\\Lambda_0,\\beta_0)(h_1,h_2)\\right].\n\\end{split}\n\\end{equation}\nFor the second term on the right hand side we have\n\\begin{equation}\n\\label{eqn:condition1_2}\n{S}_n(\\Lambda_0,\\beta_0)(h_1,h_2)-{S}(\\Lambda_0,\\beta_0)(h_1,h_2)=\\int f_h(y,\\delta,x,z)\\,\\mathrm{d}(\\mathbb{P}_n-\\mathbb{P})(y,\\delta,x,z)\n\\end{equation}\nwhere\n\\[\n\\begin{split}\nf_h(y,\\delta,x,z)&=h_2'z\\left\\{\\delta\\mathds{1}_{\\{y<\\tau_0\\}}-\\left[\\delta-(1-\\delta)\\mathds{1}_{\\{y\\leq\\tau_0\\}}g(y,\\Lambda_0,\\beta_0,\\gamma_0)\\right]e^{\\beta'_0z}\\Lambda_0(y) \\right\\}\\\\\n&+\\delta \\mathds{1}_{\\{y<\\tau_0\\}}h_1(y)-\\left[\\delta -(1-\\delta)\\mathds{1}_{\\{y\\leq\\tau_0\\}}g(y,\\Lambda_0,\\beta_0,\\gamma_0)\\right]e^{\\beta'_0z}\\int_0^{y}h_1(s)\\mathrm{d}\\Lambda_0(s).\n\\end{split}\n\\]\nThe classes $\\{h_2\\in\\mathbb{R}^q,\\Vert h_2\\Vert\\leq \\mathfrak{m}\\} $, $\\{h_1\\in BV[0,\\tau_0],\\, \\Vert h_1\\Vert_v\\leq \\mathfrak{m}\\}$ and $$\\left\\{\\int_0^y h_1(t)\\,\\mathrm{d}\\Lambda_0(t), \\,h_1\\in BV[0,\\tau_0], 
\\Vert h_1\\Vert_v\\leq \\mathfrak{m}\\right\\}$$ are Donsker classes (the last one because it consists of monotone bounded functions). As in \\cite{Lu}, because of the boundedness of the covariates and $\\Lambda_0$, it follows that $\\{f_h(y,\\delta,x,z),\\,h\\in\\mathcal{H}_\\mathfrak{m}\\}$ is also a Donsker class since it is a sum of products of Donsker classes with fixed uniformly bounded functions. \n\nOn the other hand, for the first term on the right hand side of \\eqref{eqn:condition1}, we have \n\\begin{equation}\n\\label{eqn:condition1_4}\n\\begin{split}\n&\\left[\\hat{S}_n(\\Lambda_0,\\beta_0)(h_1,h_2)-{S}_n(\\Lambda_0,\\beta_0)(h_1,h_2)\\right]\\\\\n&=-\\frac1n\\sum_{i=1}^n (1-\\Delta_i)\\mathds{1}_{\\{Y_i\\leq\\tau_0\\}}\\left\\{e^{\\beta'_0Z_i}\\int_0^{Y_i}h_1(s)\\mathrm{d}\\Lambda_0(s)+e^{\\beta'_0Z_i}\\Lambda_0(Y_i)h'_2Z_i\n\\right\\}\\\\\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\times\\left\\{g_i(Y_i,\\Lambda_0,\\beta_0,\\hat\\gamma_n)-g_i(Y_i,\\Lambda_0,\\beta_0,\\gamma_0)\\right\\}\\\\\n&=-\\frac1n\\sum_{i=1}^n (1-\\Delta_i)\\mathds{1}_{\\{Y_i\\leq\\tau_0\\}}e^{\\beta'_0Z_i}\\left\\{\\int_0^{Y_i}h_1(s)\\mathrm{d}\\Lambda_0(s)+\\Lambda_0(Y_i)h'_2Z_i\n\\right\\}\\\\\n&\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\times\\frac{\\partial g_i}{\\partial\\phi}(Y_i,\\Lambda_0,\\beta_0,\\gamma_0)\\left\\{\\phi(\\hat\\gamma_n,X_i)-\\phi(\\gamma_0,X_i)\\right\\}+o_P(n^{-1\/2}),\n\\end{split}\n\\end{equation}\nwhere \n\\[\n\\begin{split}\n\\frac{\\partial g_i}{\\partial\\phi}(Y_i,\\Lambda_0,\\beta_0,\\gamma_0)&=\\frac{\\exp\\left(-\\Lambda_0(Y_i) e^{\\beta'_0Z_i}\\right)}{1-\\phi(\\gamma_0,X_i)+\\phi(\\gamma_0,X_i)\\exp\\left(-\\Lambda_0(Y_i) e^{\\beta'_0Z_i}\\right)}\\\\\n&\\quad-\\frac{\\phi(\\gamma_0,X_i)\\exp\\left(-\\Lambda_0(Y_i)e^{\\beta'_0Z_i}\\right)\\left[\\exp\\left(-\\Lambda_0(Y_i) e^{\\beta'_0Z_i}\\right)-1\\right]}{\\left[1-\\phi(\\gamma_0,X_i)+\\phi(\\gamma_0,X_i)\\exp\\left(-\\Lambda_0(Y_i) e^{\\beta'_0Z_i}\\right)\\right]^2}.\n\\end{split}\n\\]\nIn order to 
conclude that the remainder term is of order $o_P(n^{-1\/2})$, we use \n\[\n\sup_x\left|\phi(\hat\gamma_n,x)-\phi(\gamma_0,x)\right|\leq \sup_{\gamma\in G, x\in\mathcal{X}}\left\Vert\nabla_\gamma\phi(\gamma,x)\right\Vert |\hat\gamma_n-\gamma_0|=O_P(n^{-1\/2})\n\]\nand the fact that $\frac{\partial^2 g_i}{\partial\phi^2}(Y_i,\Lambda_0,\beta_0,\gamma)$ and $$(1-\Delta_i)\mathds{1}_{\{Y_i\leq\tau_0\}}e^{\beta'_0Z_i}\left\{\int_0^{Y_i}h_1(s)\mathrm{d}\Lambda_0(s)+\Lambda_0(Y_i)h'_2Z_i\n\right\t\}$$ are uniformly bounded functions thanks to our assumptions on $Z,$ $\Lambda$, $\Phi$ and $h$. \nFrom the same assumptions we also obtain \n\begin{equation}\n\label{eqn:condition1_5}\n\begin{split}\n&\frac1n\sum_{i=1}^n (1-\Delta_i)\mathds{1}_{\{Y_i\leq\tau_0\}}e^{\beta'_0Z_i}\left\{\int_0^{Y_i}h_1(s)\mathrm{d}\Lambda_0(s)+\Lambda_0(Y_i)h'_2Z_i\n\right\}\\\n&\qquad\qquad\qquad\qquad\qquad\times\frac{\partial g_i}{\partial\phi}(Y_i,\Lambda_0,\beta_0,\gamma_0)\left\{\phi(\hat\gamma_n,X_i)-\phi(\gamma_0,X_i)\right\}\\\n&=\frac1n\sum_{i=1}^n (1-\Delta_i)\mathds{1}_{\{Y_i\leq\tau_0\}}e^{\beta'_0Z_i}\left\{\int_0^{Y_i}h_1(s)\mathrm{d}\Lambda_0(s)+\Lambda_0(Y_i)h'_2Z_i\n\right\}\\\n&\qquad\qquad\qquad\qquad\qquad\times\frac{\partial g_i}{\partial\phi}(Y_i,\Lambda_0,\beta_0,\gamma_0)\nabla_\gamma\phi(\gamma_0,X_i)'\left\{\hat\gamma_n-\gamma_0\right\}+o_P(n^{-1\/2})\\\n&=\mathbb{E}\left[(1-\Delta)\mathds{1}_{\{Y\leq\tau_0\}}e^{\beta'_0Z}\left\{\int_0^{Y}h_1(s)\mathrm{d}\Lambda_0(s)+\Lambda_0(Y)h'_2Z\n\right\}\right.\\\n&\left.\qquad\qquad\qquad\qquad\qquad\times\frac{\partial g}{\partial\phi}(Y,\Lambda_0,\beta_0,\gamma_0)\nabla_\gamma\phi(\gamma_0,X)'\right]\left(\hat\gamma_n-\gamma_0\right)+o_P(n^{-1\/2}).\n\end{split}\n\end{equation}\nMoreover, the expectation term is uniformly bounded. 
To prove the asymptotic normality of $\hat\gamma_n-\gamma_0$ in Theorem \ref{theo:asymptotic_normality} we used Theorem 2 in \cite{chen}. Going through the proof of Theorem 2 in \cite{chen}, we actually have \n\[\n(\hat\gamma_n-\gamma_0)=-(\Gamma_1'\Gamma_1)^{-1}\Gamma'_1\left\{M_n(\gamma_0,\pi_0)+\Gamma_2(\gamma_0,\pi_0)[\hat\pi-\pi_0]\right\}+ o_P(n^{-1\/2})\n\]\nwhere $\Gamma_1$ is defined in \eqref{def:gradient_M} and $\Gamma_2$ in \eqref{eqn:directional_derivative}. From Assumption (AN4-iii) and \eqref{eqn:M_n-Gamma_2}, it follows that \n\begin{equation}\n\label{eqn:condition_1_6}\n(\hat\gamma_n-\gamma_0)=-(\Gamma_1'\Gamma_1)^{-1}\Gamma'_1\t\int \Psi(y,\delta,x)\,\mathrm{d}(\mathbb{P}_n-\mathbb{P})(y,\delta,x,z)+ o_P(n^{-1\/2}). \n\end{equation}\nPutting together \eqref{eqn:condition1}-\eqref{eqn:condition_1_6}, we have\n\[\n\begin{split}\n&\hat{S}_n(\Lambda_0,\beta_0)(h_1,h_2)-{S}(\Lambda_0,\beta_0)(h_1,h_2)\\\n&=\int \left\{ f_h(y,\delta,x,z)-Q_h\Gamma_1^{-1} \Psi(y,\delta,x)\right\}\,\mathrm{d} (\mathbb{P}_n-\mathbb{P})(y,\delta,x,z)+o_P(n^{-1\/2})\n\end{split}\n\]\nwhere\n\[\nQ_h=\mathbb{E}\left[(1-\Delta)\mathds{1}_{\{Y\leq\tau_0\}}e^{\beta'_0Z}\left\{\int_0^{Y}h_1(s)\mathrm{d}\Lambda_0(s)+\Lambda_0(Y)h'_2Z\n\right\}\frac{\partial g}{\partial\phi}(Y,\Lambda_0,\beta_0,\gamma_0)\nabla_\gamma\phi(\gamma_0,X)'\right].\n\]\nIn order to conclude the convergence of $\sqrt{n}(\hat{S}_n(\Upsilon_0)-S(\Upsilon_0))$ to a Gaussian process $G^*$, we need to have that $\{Q_h\Gamma_1^{-1} \Psi(y,\delta,x), h\in\mathcal{H}_\mathfrak{m}\}$ is a bounded Donsker class of functions (since the sum of bounded Donsker classes is also Donsker). 
We can write \n\[\n\begin{split}\n&Q_h\Gamma_1^{-1}\Psi(y,\delta,x)\\\n&=h'_2\mathbb{E}\left[(1-\Delta)\mathds{1}_{\{Y\leq\tau_0\}}Ze^{\beta'_0Z}\Lambda_0(Y)\n\frac{\partial g}{\partial\phi}(Y,\Lambda_0,\beta_0,\gamma_0)\nabla_\gamma\phi(\gamma_0,X)'\right]\Gamma_1^{-1}\Psi(y,\delta,x)\\\n&\quad+\int_0^{\tau_0}\mathbb{E}\left[(1-\Delta)\mathds{1}_{\{Y\leq\tau_0\}}\mathds{1}_{\{Y\geq s\}}e^{\beta'_0Z}\frac{\partial g}{\partial\phi}(Y,\Lambda_0,\beta_0,\gamma_0)\nabla_\gamma\phi(\gamma_0,X)'\right]h_1(s)\mathrm{d}\Lambda_0(s) \Gamma_1^{-1} \Psi(y,\delta,x).\n\end{split}\n\]\nBy assumption (AN1), $\inf_{x}H((\tau_0,\infty)|x)>0$, $\Lambda_0(\tau_0)<\infty$ and the boundedness of the covariates, we have that $$\mathbb{E}\left[(1-\Delta)\mathds{1}_{\{Y\leq\tau_0\}}Ze^{\beta'_0Z}\Lambda_0(Y)\n\frac{\partial g}{\partial\phi}(Y,\Lambda_0,\beta_0,\gamma_0)\nabla_\gamma\phi(\gamma_0,X)'\right]\Gamma_1^{-1} \Psi(y,\delta,x)$$ is uniformly bounded. Hence \n\[\n\begin{split}\n&\left\{h'_2\mathbb{E}\left[(1-\Delta)\mathds{1}_{\{Y\leq\tau_0\}}Ze^{\beta'_0Z}\Lambda_0(Y)\n\frac{\partial g}{\partial\phi}(Y,\Lambda_0,\beta_0,\gamma_0)\nabla_\gamma\phi(\gamma_0,X)'\right]\Gamma_1^{-1} \Psi(y,\delta,x):\right.\\\n&\,\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad \qquad\qquad\qquad h_2\in\mathbb{R}^q, \Vert h_2\Vert_{L_1}\leq \mathfrak{m}\bigg \}\n\end{split}\n\]\nis a Donsker class (see Example 2.10.10 in \cite{VW}). 
It can also be shown, since $h_1$ belongs to the class of bounded functions with bounded variation and all the other terms are uniformly bounded, that \n\[\n\begin{split}\n&\left\{\int_0^{\tau_0}\mathbb{E}\left[(1-\Delta)\mathds{1}_{\{Y\leq\tau_0\}}\mathds{1}_{\{Y\geq s\}}e^{\beta'_0Z}\frac{\partial g}{\partial\phi}(Y,\Lambda_0,\beta_0,\gamma_0)\nabla_\gamma\phi(\gamma_0,X)'\right]h_1(s)\mathrm{d}\Lambda_0(s) \Gamma_1^{-1} \Psi(y,\delta,x), \right.\\\n&\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad \qquad\qquad\qquad h_1\in BV[0,\tau_0], \,\Vert h_1\Vert_v\leq \mathfrak{m}\bigg\}\n\end{split}\n\]\nis also a bounded Donsker class (covering numbers of order $\epsilon$ of $\{ h_1\in BV[0,\tau_0], \Vert h_1\Vert_v\leq \mathfrak{m}\}$ correspond to covering numbers of order $c\epsilon$ for some constant $c>0$).\n\nThe limit process $G^*$ has mean zero because\n\[\n\mathbb{E}[f_h(y,\delta,x,z)]=S(\Upsilon_0)(h)=0\qquad\text{and}\qquad\mathbb{E}[\Psi(Y,\Delta,X)]=0.\n\]\nThe covariance process of $G^*$ is \n\begin{equation}\n\label{eqn:cov_G*}\n\begin{split}\n&Cov\left(G^*(h),G^*(\tilde{h})\right)\\\n&=\mathbb{E}\left[\left\{f_h(Y,\Delta,X,Z)-Q_h\Gamma_1^{-1} \Psi(Y,\Delta,X)\right\}\left\{f_{\tilde{h}}(Y,\Delta,X,Z)-Q_{\tilde{h}}\Gamma_1^{-1} \Psi(Y,\Delta,X)\right\}\right]\\\n&=\mathbb{E}[f_h(Y,\Delta,X,Z)f_{\tilde{h}}(Y,\Delta,X,Z)]-Q_h\Gamma_1^{-1}\mathbb{E}[f_{\tilde{h}}(Y,\Delta,X,Z)\Psi(Y,\Delta,X)]\\\n&\quad-Q_{\tilde{h}}\Gamma_1^{-1}\mathbb{E}[f_h(Y,\Delta,X,Z)\Psi(Y,\Delta,X)]+Q_{\tilde{h}}\Gamma_1^{-1}\mathbb{E}[\Psi(Y,\Delta,X)\Psi(Y,\Delta,X)']\Gamma_1^{-1}Q'_h.\n\end{split}\n\end{equation}\n\textit{Condition 4 of Theorem 4 in \cite{Lu}. 
} As for condition 1, we write\n\begin{equation}\n\label{eqn:cond4}\n\begin{split}\n\sqrt{n}\left\{(\hat{S}_n-S)(\Upsilon_n)-(\hat{S}_n-S)(\Upsilon_0)\right\}&=\sqrt{n}\left\{({S}_n-S)(\Upsilon_n)-({S}_n-S)(\Upsilon_0)\right\}\\\n&\quad+\sqrt{n}\left\{(\hat{S}_n-S_n)(\Upsilon_n)-(\hat{S}_n-S_n)(\Upsilon_0)\right\}.\n\end{split}\n\end{equation}\nFor the second term on the right hand side of \eqref{eqn:cond4}, similarly to \eqref{eqn:condition1_4}-\eqref{eqn:condition1_5}, we have\n\[\n\begin{split}\n&(\hat{S}_n-S_n)(\Upsilon_n)-(\hat{S}_n-S_n)(\Upsilon_0)\\\n&=\frac1n\sum_{i=1}^n (1-\Delta_i)\mathds{1}_{\{Y_i\leq\tau_0\}}e^{\hat\beta'_nZ_i}\left\{\int_0^{Y_i}h_1(s)\mathrm{d}\hat\Lambda_n(s)+\hat\Lambda_n(Y_i)h'_2Z_i\n\right\}\\\n&\quad\qquad\qquad\qquad\qquad\qquad\times\frac{\partial g_i}{\partial\phi}(Y_i,\hat\Lambda_n,\hat\beta_n,\gamma_0)\nabla_\gamma\phi(\gamma_0,X_i)'\left\{\hat\gamma_n-\gamma_0\right\}\\\n&\quad-\frac1n\sum_{i=1}^n (1-\Delta_i)\mathds{1}_{\{Y_i\leq\tau_0\}}e^{\beta'_0Z_i}\left\{\int_0^{Y_i}h_1(s)\mathrm{d}\Lambda_0(s)+\Lambda_0(Y_i)h'_2Z_i\n\right\}\\\n&\quad\qquad\qquad\qquad\qquad\qquad\times\frac{\partial g_i}{\partial\phi}(Y_i,\Lambda_0,\beta_0,\gamma_0)\nabla_\gamma\phi(\gamma_0,X_i)'\left\{\hat\gamma_n-\gamma_0\right\}+o_P(n^{-1\/2}).\n\end{split}\n\]\nUsing the boundedness in probability of $\hat\beta_n$ and $\hat\Lambda_n(\tau_0)$, the boundedness of the covariates, $\beta_0$, $\Lambda_0(\tau_0)$, $\nabla_\gamma\phi(\gamma,x)$ and the consistency results in Theorem \ref{theo:consistency2}, it follows that \n\[\n\begin{split}\n&\left|\frac1n\sum_{i=1}^n (1-\Delta_i)\mathds{1}_{\{Y_i\leq\tau_0\}}\left\{e^{\hat\beta'_nZ_i}\left(\int_0^{Y_i}h_1(s)\mathrm{d}\hat\Lambda_n(s)+\hat\Lambda_n(Y_i)h'_2Z_i\n\right)\frac{\partial 
g_i}{\\partial\\phi}(Y_i,\\hat\\Lambda_n,\\hat\\beta_n,\\gamma_0)\\right.\\right.\\\\\n&\\left.\\left.\\quad-e^{\\beta'_0Z_i}\\left(\\int_0^{Y_i}h_1(s)\\mathrm{d}\\Lambda_0(s)+\\Lambda_0(Y_i)h'_2Z_i\n\\right)\\frac{\\partial g_i}{\\partial\\phi}(Y_i,\\Lambda_0,\\beta_0,\\gamma_0)\\right\\}\\nabla_\\gamma\\phi(\\gamma_0,X_i)\\right|=o_P(1)\n\\end{split}\n\\]\nAs a consequence, since $\\hat\\gamma_n-\\gamma_0=O_P(n^{-1\/2})$, we obtain\n\\[\n\\sqrt{n}\\left\\{(\\hat{S}_n-S_n)(\\Upsilon_n)-(\\hat{S}_n-S_n)(\\Upsilon_0)\\right\\}=o_P(1).\n\\]\nIt remains to deal with the first term on the right hand side of \\eqref{eqn:cond4}. It suffices to show that, for any sequence $\\epsilon_n\\to 0$,\n\\[\n\\sup_{|\\Lambda-\\Lambda_0|_\\infty\\leq\\epsilon_n, \\,\\Vert\\beta-\\beta_0\\Vert\\leq\\epsilon_n}\\frac{\\left|({S}_n-S)(\\Upsilon)-({S}_n-S)(\\Upsilon_0)\\right|}{n^{-1\/2}\\vee \\Vert\\beta-\\beta_0\\Vert\\vee |\\Lambda-\\Lambda_0|_\\infty }=o_P(1).\n\\]\nLet\n\\[\na_1(y,\\delta,z,x)=\\delta e^{\\beta'z}\\int_0^{y}h_1(s)\\mathrm{d}\\Lambda(s)-\\delta e^{\\beta'_0z}\\int_0^{y}h_1(s)\\mathrm{d}\\Lambda_0(s),\n\\]\n\\[\na_2(y,\\delta,z,x)=\\delta e^{\\beta'z}\\Lambda(y)h'_2z-\\delta e^{\\beta'_0z}\\Lambda_0(y)h'_2z,\n\\]\nand\n\\[\n\\begin{split}\na_3(y,\\delta,z,x)&=(1-\\delta)\\mathds{1}_{\\{y\\leq\\tau_0\\}}g(y,\\Lambda,\\beta,\\gamma_0)\\left\\{e^{\\beta'z}\\int_0^{y}h_1(s)\\mathrm{d}\\Lambda(s)+e^{\\beta'z}\\Lambda(y)h'_2z\\right\\}\\\\\n&\\quad-(1-\\delta)\\mathds{1}_{\\{y\\leq\\tau_0\\}}g(y,\\Lambda_0,\\beta_0,\\gamma_0)\\left\\{e^{\\beta'_0z}\\int_0^{y}h_1(s)\\mathrm{d}\\Lambda_0(s)+e^{\\beta'_0z}\\Lambda_0(y)h'_2z\n\\right\\}\n\\end{split}\n\\]\nThen, we have \n\\[\n\\begin{split}\n({S}_n-S)(\\Upsilon)-({S}_n-S)(\\Upsilon_0)&=-\\frac{1}{n}\\sum_{i=1}^n \\left\\{a_1(Y_i,\\Delta_i,Z_i,X_i)-\\mathbb{E}\\left[a_1(Y,\\Delta,Z,X)\\right] \\right\\}\\\\\n&\\quad-\\frac{1}{n}\\sum_{i=1}^n \\left\\{a_2(Y_i,\\Delta_i,Z_i,X_i)-\\mathbb{E}\\left[a_2(Y,\\Delta,Z,X)\\right] 
\\right\\}\\\\\n&\\quad-\\frac{1}{n}\\sum_{i=1}^n \\left\\{a_3(Y_i,\\Delta_i,Z_i,X_i)-\\mathbb{E}\\left[a_3(Y,\\Delta,Z,X)\\right] \\right\\} \n\\end{split}\n\\]\nNext we consider the first term. The other two can be handled similarly. From a Taylor expansion we have\n\\[\n\\begin{split}\n&\\delta e^{\\beta'z}\\int_0^{y}h_1(s)\\mathrm{d}\\Lambda(s)-\\delta e^{\\beta'_0z}\\int_0^{y}h_1(s)\\mathrm{d}\\Lambda_0(s)\\\\\n&=(\\beta-\\beta_0)'z\\delta e^{\\beta'_0z}\\int_0^{y}h_1(s)\\mathrm{d}\\Lambda(s)+\\delta e^{\\beta'_0z}\\int_0^{y}h_1(s)\\mathrm{d}(\\Lambda-\\Lambda_0)(s)+o(\\Vert\\beta-\\beta_0\\Vert).\n\\end{split}\n\\]Hence\n\\begin{equation}\n\\label{eqn:a1}\n\\begin{split}\n&\\frac{1}{n}\\sum_{i=1}^n \\left\\{a_1(Y_i,\\Delta_i,Z_i,X_i)-\\mathbb{E}\\left[a_1(Y,\\Delta,Z,X)\\right] \\right\\}\\\\\n&\\leq (\\beta-\\beta_0)\\left\\{\\frac{1}{n}\\sum_{i=1}^n Z_i\\Delta_i e^{\\beta'_0Z_i}\\int_0^{Y_i}h_1(s)\\mathrm{d}\\Lambda(s)-\\mathbb{E}\\left[Z\\Delta e^{\\beta'_0Z}\\int_0^{Y}h_1(s)\\mathrm{d}\\Lambda(s)\\right]+o(1)\\right\\}\\\\\n&\\quad+ \\frac{1}{n}\\sum_{i=1}^n\\left\\{\\Delta_i e^{\\beta'_0Z_i}\\int_0^{Y_i}h_1(s)\\mathrm{d}(\\Lambda-\\Lambda_0)(s)-\\mathbb{E}\\left[\\Delta e^{\\beta'_0Z}\\int_0^{Y}h_1(s)\\mathrm{d}(\\Lambda-\\Lambda_0)(s)\\right]\\right\\}.\n\\end{split}\n\\end{equation}\nBy the law of large numbers\n\\[\n\\frac{1}{n}\\sum_{i=1}^n\\left\\{ Z_i\\Delta_i e^{\\beta'_0Z_i}\\int_0^{Y_i}h_1(s)\\mathrm{d}\\Lambda(s)-\\mathbb{E}\\left[Z\\Delta e^{\\beta'_0Z}\\int_0^{Y}h_1(s)\\mathrm{d}\\Lambda(s)\\right] \\right\\}=o_P(1)\n\\]\nand as a result, the first term in the right hand side of \\eqref{eqn:a1} is $o_P(\\Vert\\beta-\\beta_0\\Vert)$. The second term can be rewritten as \n\\[\n\\int_0^{\\tau_0} D_n(s)h_1(s)\\mathrm{d}(\\Lambda-\\Lambda_0)(s)\n\\]\nwhere\n\\[\nD_n(s)=\\frac{1}{n}\\sum_{i=1}^n\\left\\{\\Delta_i\\mathds{1}_{\\{Y_i\\geq s\\}} e^{\\beta'_0Z_i}-\\mathbb{E}\\left[\\Delta\\mathds{1}_{\\{Y\\geq s\\}} e^{\\beta'_0Z}\\right]\\right\\}. 
\n\\]\nBy integration by parts and the chain rule we have\n\\[\n\\begin{split}\n&\\int_0^{\\tau_0} D_n(s)h_1(s)\\mathrm{d}(\\Lambda-\\Lambda_0)(s)\\\\\n&= D_n(\\tau_0)h_1(\\tau_0)(\\Lambda-\\Lambda_0)(\\tau_0)- \\int_0^{\\tau_0} (\\Lambda-\\Lambda_0)(s)\\mathrm{d} \\left[D_n(s)h_1(s)\\right]\\\\\n&=D_n(\\tau_0)h_1(\\tau_0)(\\Lambda-\\Lambda_0)(\\tau_0)-\\int_0^{\\tau_0} (\\Lambda-\\Lambda_0)(s) D_n(s)\\mathrm{d} h_1(s)\\\\\n&\\quad +\\frac{1}{n}\\sum_{i=1}^n\\left\\{ \\Delta_i (\\Lambda-\\Lambda_0)(Y_i)h_1(Y_i) e^{\\beta'_0Z_i}-\\mathbb{E}\\left[ (\\Lambda-\\Lambda_0)(Y)h_1(Y)\\Delta e^{\\beta'_0Z}\\right]\\right\\}\n\\end{split} \n\\]\nNote that \n\\[\n\\begin{split}\n\\mathbb{E}\\left[ (\\Lambda-\\Lambda_0)(Y)h_1(Y)\\Delta e^{\\beta'_0Z}\\right]&=\\mathbb{E}\\left[e^{\\beta'_0Z}\\int_0^{\\tau_0}(\\Lambda-\\Lambda_0)(s)h_1(s)\\mathrm{d} H_1(s|X,Z)\\bigg|X,Z\\right]\\\\\n&=-\\int_0^{\\tau_0}(\\Lambda-\\Lambda_0)(s)h_1(s)\\mathrm{d}\\mathbb{E}\\left[\\Delta\\mathds{1}_{\\{Y\\geq s\\}}e^{\\beta'_0Z}\\right].\n\\end{split}\n\\]\nIt can be shown that $\\sqrt{n}D_n$ converges weakly to a tight, mean zero Gaussian process $D$ in $l^\\infty([0,\\tau_0])$. 
Since $h_1$ is bounded, it follows that\n\[\n\frac{\left|D_n(\tau_0)h_1(\tau_0)(\Lambda-\Lambda_0)(\tau_0)\right|}{\Vert\Lambda-\Lambda_0\Vert_{\infty}}=o_P(1).\n\]\nMoreover, since $\sup_{s\in[0,\tau_0]}|D_n(s)|=o_P(1)$ and $h_1$ is of bounded variation,\n\[\n\frac{ \left|\int_0^{\tau_0} (\Lambda-\Lambda_0)(s) D_n(s)\mathrm{d} h_1(s)\right|}{\Vert\Lambda-\Lambda_0\Vert_{\infty}}\leq \sup_{s\in[0,\tau_0]}|D_n(s)| \int_0^{\tau_0}\,\left|\mathrm{d} h_1(s)\right|=o_P(1).\n\] \nFinally, since $\left\{g_{\Lambda}(y,\delta,z)=\delta(\Lambda-\Lambda_0)(y)h_1(y)e^{\beta'_0z}:\,\Vert\Lambda-\Lambda_0\Vert_{\infty}\leq \epsilon_n\right\}$ is a Donsker class (product of bounded variation functions, uniformly bounded) and \n\[\n\mathbb{E}\left[(\Lambda-\Lambda_0)(Y)^2h_1(Y)^2\Delta e^{2\beta'_0Z}\right]=O(\epsilon_n^2)=o(1),\n\] \nwe have that\n\[\n\sqrt{n}\frac{1}{n}\sum_{i=1}^n\left\{ \Delta_i (\Lambda-\Lambda_0)(Y_i)h_1(Y_i) e^{\beta'_0Z_i}-\mathbb{E}\left[ (\Lambda-\Lambda_0)(Y)h_1(Y)\Delta e^{\beta'_0Z}\right]\right\}\n\]\nconverges to zero in probability. So we obtain \n\[\n\frac{ \int_0^{\tau_0} D_n(s)h_1(s)\mathrm{d}(\Lambda-\Lambda_0)(s)}{n^{-1\/2}\vee \Vert\Lambda-\Lambda_0\Vert_{\infty} }=o_P(1).\n\] \nThe other two terms related to $a_2$ and $a_3$ can be treated similarly. \n\nThis concludes the verification of the conditions of Theorem 4 in \cite{Lu} (or Theorem 3.3.1 in \cite{VW}). Hence, the weak convergence of $\sqrt{n}(\Upsilon_n-\Upsilon_0)$ to a tight, mean zero Gaussian process $G$ follows. Next we compute the covariance process of $G$. 
\nFrom Theorem 3.3.1 in \cite{VW} we have\n\begin{equation}\n\label{eqn:asymptotic_relation}\n-\sqrt{n}\dot{S}(\Upsilon_0)(\Upsilon_n-\Upsilon_0)(h)=\sqrt{n}(\hat{S}_n(\Upsilon_0)-S(\Upsilon_0))(h)+o_P(1).\n\end{equation}\nMoreover, in \cite{Lu} it is computed that \n\begin{equation}\n\label{eqn:derivative}\n\dot{S}(\Upsilon_0)(\Upsilon_n-\Upsilon_0)(h)=\int_0^{\tau_0}\sigma_1(h)(t)\mathrm{d}(\hat\Lambda_n(t)-\Lambda_0(t))+(\hat\beta_n-\beta_0)'\sigma_2(h),\n\end{equation}\nwhere $\sigma=(\sigma_1,\sigma_2)$ is a continuous linear operator from $\mathcal{H}_\mathfrak{m}$ to $\mathcal{H}_\mathfrak{m}$ of the form \n\[\n\begin{split}\n\sigma_1(h)(t)&=\mathbb{E}\left[\mathds{1}_{\{Y\geq t\}}V(t,\Upsilon_0)(h)g(t,\Upsilon_0)e^{\beta'_0Z}\right]\\\n&\quad-\mathbb{E}\left[\int_t^{\tau_0}\mathds{1}_{\{Y\geq s\}}V(t,\Upsilon_0)(h)g(s,\Upsilon_0)\{1-g(s,\Upsilon_0)\}e^{2\beta'_0Z}\mathrm{d}\Lambda_0(s)\right]\n\end{split}\n\]\nand\n\[\n\sigma_2(h)=\mathbb{E}\left[\int_0^{\tau_0}\mathds{1}_{\{Y\geq t\}}W(t,\Upsilon_0)V(t,\Upsilon_0)(h)g(t,\Upsilon_0)e^{\beta'_0Z}\mathrm{d}\Lambda_0(t)\right]\n\]\nwhere\n\[\nV(t,\Upsilon_0)(h)=h_1(t)-\left\{1-g(t,\Upsilon_0)\right\}e^{\beta'_0Z}\int_0^th_1(s)\mathrm{d}\Lambda_0(s)+h'_2W(t,\Upsilon_0)\n\]\nand\n\[\nW(t,\Upsilon_0)=\left[1-\left\{1-g(t,\Upsilon_0)\right\}e^{\beta'_0Z}\Lambda_0(t)\right]Z.\n\]\nIn \cite{Lu}, it is also shown that $\sigma$ is invertible with inverse $ \sigma^{-1}=(\sigma_1^{-1},\sigma_2^{-1})$. Hence, for all $g\in\mathcal{H}_\mathfrak{m}$, let $h=\sigma^{-1}(g)$. 
If in \\eqref{eqn:derivative} we replace $h$ by $\\sigma^{-1}(g)$ and use \\eqref{eqn:asymptotic_relation}, we obtain\n\\[\n\\begin{split}\n&\\int_0^{\\tau_0}g_1(t)\\mathrm{d}\\sqrt{n}(\\Lambda_n(t)-\\Lambda_0(t))+\\sqrt{n}(\\hat\\beta_n-\\beta_0)'g_2\\\\&=-\\sqrt{n}(\\hat{S}_n(\\Upsilon_0)-S(\\Upsilon_0))(\\sigma^{-1}(g))+o_P(1)\\xrightarrow{d}- G^*(\\sigma^{-1}(g)).\n\\end{split}\n\\]\nSince the previous results holds for all $g\\in\\mathcal{H}_\\mathfrak{m}$, it follows that $(\\sqrt{n}(\\hat\\Lambda_n-\\Lambda_0),\\sqrt{n}(\\hat\\beta_n-\\beta_0))$ converges to a tight mean zero Gaussian process $G$ with covariance \n\\begin{equation}\n\\label{def:cov_G}\nCov(G(g),G(\\tilde{g}))=Cov\\left(G^*(\\sigma^{-1}(g)),G^*(\\sigma^{-1}(\\tilde{g}))\\right)\n\\end{equation}\nand the covariance of $G^*$ is given in \\eqref{eqn:cov_G*}.\n\\end{proof}\n\\section{Additional simulation results}\nIn this section we report the simulation results for scenario 2 of the models~{1-4}, $n=200, 400$, that were omitted from the main paper and the results for sample size $n=1000$ (all models and scenarios). In addition, Table \\ref{tab:results3_4_beta} complements Tables~\\ref{tab:results3} and~\\ref{tab:results4} containing results for $\\hat\\beta$. \n\n\n\\section{Introduction}\nThere are many situations in survival analysis problems where some of the subjects will never experience the event of interest. For instance, as significant progress is being made for treatment of different types of cancers, many of the patients get cured of the disease and do not experience recurrence or cancer-related death. 
Other examples include the study of time to natural conception,\ntime to default in finance and risk management,\ntime to early failure of integrated circuits in {engineering, and time to finding a job after a layoff.}\nHowever, because of the finite duration of the studies and censoring, the cured subjects (for which the event never takes place) cannot be distinguished from the `susceptible' ones. We can only get an indication of the presence of a cure fraction from the context of the study and a long plateau (containing many censored observations) with height greater than zero in the Kaplan-Meier estimator of the survival function. Predicting the probability of being cured given a set of characteristics is often of particular interest in order to make better decisions in terms of {treatment, management strategies or public policies.}\nThis led to the development of mixture cure models.\n\nMixture cure models were first proposed by \cite{boag49} and \cite{berkson52}. They assume that the population is a mixture of two groups: the cured and the susceptible subjects. Within this very wide class of models, various approaches have been considered in the literature for modelling and {estimating} the incidence (probability of being uncured) and the latency (survival function of the uncured subjects). Initially, fully parametric models with a logistic regression form of the incidence and various parametric distributions for the latency were used in \cite{farewell82,yamaguchi92,kuk92}. Later on, more flexible semi-parametric approaches were proposed for the latency based on the Cox proportional hazards model \cite{ST2000,peng2000} or accelerated failure time models \cite{li2002,zhang2007}. However, these approaches still maintain the logistic regression model for the incidence. More recently, nonparametric methods have been developed for one or both of the model components in \cite{XP2014,PK2019,AKL19}. 
\n{In this} wide range of models, probably the most commonly used one in practice is the {logistic\/Cox mixture} cure model \citep{stringer2016cure,wycinka2017,lee2017extinct}.\n\nThere have been different proposals for estimation in the logistic\/Cox mixture cure model. The presence of a latent variable (the unknown cure status) does not allow for a `direct' approach as in the classical Cox proportional hazards model. \cite{kuk92} adapted a marginal likelihood approach computed through Monte Carlo approximations, whereas \cite{peng2000} and \cite{ST2000} computed the maximum likelihood estimator via the Expectation-Maximization algorithm. Asymptotic properties of the latter estimators are investigated in \cite{Lu2008}, while the procedure is implemented in the package \texttt{smcure} \cite{cai_smcure}. \nOne concern about the previous estimators is that they are obtained by iterative procedures which {could} be unstable in practice. {In particular, when the sample size is small, there are situations in which the EM algorithm fails to converge (even though the \texttt{smcure} package can still provide without error the estimates obtained when the maximum number of iterations is reached). Such problems are reported, for example, in \cite{han2017statistical}. In addition, the maximum likelihood estimator for the incidence component depends on which variables are included in the latency model (see for example the illustration in Section~\ref{sec:application}) and this instability might in practice lead to unobserved effects (when the effect is not very strong). In particular, if the latency model is misspecified, even the estimators of the incidence parameters suffer from induced bias (see for example \cite{BP18}).}\n\n In this paper, we introduce an alternative {estimation method which applies {very broadly and}, in particular, to the} logistic\/Cox mixture cure model. 
{Our approach focuses on } direct estimation of the cure probability without distributional assumptions on the latency and without iterative algorithms. It relies on a preliminary nonparametric estimator for the incidence which is then `projected' onto {a parametric class of functions (like logistic functions)}. The idea of constructing a parametric estimator by nonparametric estimation has been previously proposed for the classical linear regression by \cite{cristobal1987class}. {Later on it was shown to be effective also in the context of variable selection and functional linear regression \citep{presmoothing_var_sel,ferraty2012presmoothing}. However, its }\nextension to nonlinear setups has received little attention. Here we show that in the context of mixture cure models, even when a parametric form is assumed for the incidence, the use of a presmoothed estimator as an intermediate step for obtaining the parameter estimates often leads to more accurate results. Once the cure fraction is estimated, we estimate the survival distribution of the uncured subjects. {In the case of the logistic\/Cox cure model, this is done by maximizing the Cox component of the likelihood.} In this step, {an iterative algorithm}\nis used to compute the estimators of the baseline cumulative hazard and the regression parameters. This new approach is of practical relevance given the popularity {of the semiparametric logistic\/Cox mixture cure model.} However, the method can be applied more generally to a mixture cure model with a parametric form of the incidence and {other types of models for the uncured subjects, such as the semiparametric proportional odds model or the semiparametric AFT model. Our findings} suggest that presmoothing has potential {to improve parameter estimation for small and moderate sample sizes.}
{Section \\ref{sec:Cox} focuses on the estimation method in the case of the logistic\/Cox mixture cure model.}\nConsistency and asymptotic normality of the estimators are shown in Section~\\ref{sec:asymptotics}. {Thanks to the presmoothing, we are able to present theoretical results under more reasonable assumptions and thus we contribute to fill a gap between unrealistic technical conditions and applications. \n\tThe finite sample performance of the method is investigated through a simulation study and results are reported in {Section~\\ref{sec:simulations}}. \n\t{For practical purposes, we propose to make simple and commonly used choices for the bandwidth and the kernel function in the presmoothing step, and we show that these choices provide satisfactory results.}\n\n\t The proposed estimation procedure is applied to two medical datasets about studies of patients with melanoma cancer (see Section~\\ref{sec:application}). {We conclude in Section \\ref{sec:disc} with some discussion and ideas for further research. Finally, some of the proofs can be found in Section \\ref{sec:appendix}, while the remaining proofs {and additional simulation results} are collected in the online Supplementary Material.} \n\t\n\t\n\t\n\t\n\t\\section{Model description}\n\t\\label{sec:model}\n\tIn the mixture cure model the survival time $T$ can be decomposed as \n\t\\[\n\tT=BT_0+(1-B)\\infty,\n\t\\]\n\twhere $T_0$ represents {the finite survival time} for an uncured individual and {$B$ is an unobserved $0$-$1$ random variable giving the uncured status: $B=1$ for uncured individuals and $B=0$ otherwise}. By convention $0\\cdot \\infty = 0$. Let $C$ be the censoring time and $(X',Z')'$ a $(p+q)$-dimensional vector of covariates, where $x'$ denotes the transpose of the vector $x$. Let $\\mathcal{X}$ and $\\mathcal{Z}$ be the supports of $X$ and $Z$ respectively. Observations consist {of} $n$ i.i.d. 
realizations of $(Y,\\Delta,X,Z)$, where $Y=\\min(T,C)$ is {the finite follow-up time} and $\\Delta=\\mathds{1}_{\\{T\\leq C\\}}$ is the censoring indicator. {Since $Y$ is finite, then necessarily $\\mathbb{P}(C<\\infty)=1$, that means} the censoring times are finite ({which makes sense given the} limited duration of the studies). As a result, {censored survival times of the uncured subjects} cannot be distinguished from the cured ones. \n\t\n\tThe covariates included in $X$ are those used to model the cure rate, while the ones in $Z$ affect the survival conditional on the uncured status. This allows in general to use different variables for modelling the incidence and the latency but does not exclude situations in which the two vectors $X$ and $Z$ share some components or are exactly the same. \n\tApart from the standard assumption in survival analysis that $T_0\\perp (C,X) | Z$, here we also need \n\t\\begin{equation}\\label{eqn:bb_ind}\n\t\tB\\perp (C,T_0,Z)|X.\n\t\\end{equation} \n\tThis implies in particular that \n\t\\begin{equation}\n\t\t\\label{eqn:CI1}\n\t\tT\\perp C| (X,Z)\n\t\\end{equation}\n\t(see {Lemma~1 in the Suplementary Material}). Moreover, \\eqref{eqn:bb_ind} implies \n\t\\begin{equation}\n\t\t\\label{eqn:condition_X_Z}\n\t\t\\mathbb{P}(T=\\infty|X,Z)=\\mathbb{P}(T=\\infty|X).\n\t\\end{equation}\n\tIn addition, in the cure model context we need that the event time $T_0$ has support $[0,\\tau_0]$, {i.e. $\\{T>\\tau_0\\}=\\{T=\\infty\\}$, such} that \n\t\\begin{equation}\n\t\t\\label{eqn:CI2}\n\t\t{\\inf_{x}\\mathbb{P}(C>\\tau_0|X=x)>0.}\n\t\n\t\\end{equation}\n\t{(If the support of $T_0$ given $Z=z$ depends on $z$, then we let $\\tau_0 = \\sup \\tau_0(z)$, where $\\tau_0(z)$ is the right endpoint of this support.)} This condition tells us that all the observations with $Y>\\tau_0$ are cured. 
Even if it might {seem restrictive,} it is reasonable when a cure model is justified by a `good' follow-up beyond the time when most of the events occur, and it is commonly accepted in the cure model literature in order for the mixture cure model to be identifiable and not to overestimate the cure rate. Since $T_0\perp X | Z$, we have \n\t\begin{equation*}\n\t\n\t\t\mathbb{P}(T_0\leq t |X,Z)=\mathbb{P}(T_0\leq t|Z), \quad \forall t\in [0,\tau_0].\n\t\end{equation*}\n\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\n\tWe assume a parametric model for the cure rate and we denote by $\pi_0(x)$ the cure probability of a subject with covariate $x$, i.e.\n\t\[\n\t\pi_0(x)=\mathbb{P}(T=\infty|X=x)=1-\phi(\gamma_0,x),\n\t\]\n\tfor some {parametric model $\{\phi (\gamma,x): \gamma \in G\}$ and $\gamma_0\in G$.} The first component of $X$ is equal to one and {the first component of $\gamma$} corresponds to the intercept. In order for $\gamma$ to be identifiable we need the following condition:\n\t\begin{equation}\n\t\t\label{eqn:CI3}\n\t\t\mathbb{P}\left(\phi(\gamma,X)=\phi(\tilde{\gamma},X)\right)=1\qquad\text{ implies that }\qquad \gamma=\tilde{\gamma}.\n\t\end{equation}\n{Choosing a parametric model for the incidence seems quite standard in the literature of mixture cure models (\cite{PK2019,BP18,sposto2002cure}) because of its simplicity and ease of interpretability (particularly for multiple covariates). To check the fit of this model in practice, one can compare the prediction error with that of a more flexible single-index model as done in \cite{AKL19} and for our real data application in Section~\ref{sec:application}. It is also possible to test whether this assumption is reasonable using the test proposed in \cite{muller2019goodness}, but this is currently developed only for one covariate. 
Among the parametric models for the incidence component, } the most common example is the logistic model, where \n\t\begin{equation}\n\t\t\label{eqn:logistic}\n\t\t\phi(\gamma,x)=1\/(1+{\exp(-\gamma' x)}).\n\t\end{equation}\n\n\t{We state} the results in Section \ref{sec:asymptotics} for a general parametric model for the incidence, but then we focus on the logistic function in the simulation study in Section \ref{sec:simulations} since it is the most relevant in practice. \n\tFor the uncured subjects, we {can consider a general semiparametric model defined through the survival function \n\t\t\begin{equation} \label{Su}\n\t\t\tS_u(t|z) =S_u(t|z;\beta,\Lambda) = \mathbb{P}(T_0>t| {Z=z}, B=1) \quad\text{and}\quad S_u(\tau_0|z) =0,\n\t\t\end{equation}\n\t\t{where the conditional survival function $S_u$ is allowed to depend on a finite-dimensional parameter, denoted by $\beta\in \mathcal B$, and\/or an infinite-dimensional parameter, denoted by $\Lambda\in \mathcal H$, with $\mathcal B$ and $\mathcal H$ the respective parameter sets.} Let $\beta_0\in\mathcal B$ and $\Lambda_0 \in\mathcal H$ be the true values of these parameters. As a result, the conditional survival function corresponding to $T$ is then\n\t\t$$\n\t\tS(t|x,z)=\mathbb{P}(T>t|{X=x, Z=z})=1-\phi(\gamma_0,x)+\phi(\gamma_0,x)S_u(t|z).\n\t\t$$\n\t\tThe main example we keep in mind is the Cox proportional hazards (PH) model, where $\Lambda_0$ is the baseline cumulative hazard. In this case \n\t\t\begin{equation} \label{SuCox}\n\t\t\tS_u(t|z)\n\t\t\t=S_0(t)^{\exp(\beta'_0z)} = \exp(-\Lambda_0(t)\exp(\beta'_0z)),\n\t\t\end{equation}\n\t\twhere $S_0 $ is the baseline survival and $\beta_0 $ does not contain an intercept.}\n\n\t\n\t\n\n\n\n\n\n\n\t\n\t\n\t\n\t\\section{{Presmoothing estimation approach}}\n\t\label{sec:method}\n\tThe estimation method we propose is based on a two-step procedure. 
We first estimate nonparametrically the cure probability for each observation and then compute an estimator of $\\gamma$ as the maximizer of the logistic likelihood, ignoring {the model for the uncured subjects.}\n\n\tIn the second step, we plug this estimator of $\\gamma$ into the {full likelihood of the\n\t\tmixture cure model and fit the \n\t\tlatency model \n\t\tusing maximum likelihood estimation.} \n\tIn what follows, we describe these two steps in more detail. \n\t\n\t\\textit{Step 1.} Even though a parametric model is assumed for the incidence, we start by computing a nonparametric estimator of the cure probability for each subject. One possibility is to use the method {followed by \\cite{PK2019} (see also \\cite{XP2014})}, {but other estimators are possible as well, as long as the conditions given in Section \\ref{sec:asymptotics} are satisfied. The estimator of \\cite{PK2019} is defined as follows:}\n\t\\begin{equation}\n\t\t\\label{def:hat_pi}\n\t\t\\hat\\pi(x)=\\prod_{t\\in\\mathbb{R}} \\left(1-\\frac{\\hat{H}_1(dt|x)}{\\hat{H}([t,\\infty)|x)}\\right){,}\n\t\\end{equation}\n\twhere {$\\hat{H}([t,\\infty)|x)=\\hat{H}_1([t,\\infty)|x)+\\hat{H}_0([t,\\infty)|x)$,} $\\hat{H}_1(dt|x) = \\hat{H}_1((t-dt,t]|x)$ for small $dt$ and \n\t\\[\n\t\\hat{H}_k([t,\\infty)|x)=\\sum_{i=1}^n\\frac{\\tilde{K}_b(X_i-x)}{\\sum_{j=1}^n\\tilde{K}_b(X_j-x)}\\mathds{1}_{\\{Y_i\\geq t, \\Delta_i=k\\}},\\quad k=0,1{,}\n\t\\] \n\tare estimators of\n\t\\[\n\tH_k([t,\\infty)|x)=\\mathbb{P}\\left(Y\\geq t,\\Delta=k|X=x\\right), \n\t\\] \n\t$H([t,\\infty)|x)=H_1([t,\\infty)|x)+H_0([t,\\infty)|x)$. Here $\\tilde{K}_{{b}}$ is a multidimensional kernel function defined in the following way. 
If $X$ is composed of continuous and discrete components, {$X=(X_c,X_d)\\in\\mathcal{X}_c\\times\\mathcal{X}_d\\subset\\mathbb{R}^{p_c}\\times\\mathbb{R}^{p_d}$ with $p_c+p_d=p$}, then\n\t\\[\n\t\\tilde{K}_b(X_i-x)=K_b(X_{c,i}-x_c)\\mathds{1}_{\\{X_{d,i}=x_d\\}},\n\t\\] \n\twhere $b=b_n$ is a bandwidth sequence, $K_b(\\cdot)=K(\\cdot\/b)\/b^{p_c}$ and $K(u)=\\prod_{j=1}^{p_c}k(u_j)$, {with $k$ a kernel.} \n\tNote that one can compute this estimator with any covariate, but here we only use $X$ because of our assumption~\\eqref{eqn:condition_X_Z}.\n\n\t{The estimator} $\\hat\\pi(x)$ coincides with the Beran estimator of the {conditional survival function} $S$ at the largest observed event time $Y_{(m)}$ {and does not require any specification of $\\tau_0$}. {Since $\\hat{H}_1(dt|x)$ is different from zero only at the observed event times, computation of $\\hat\\pi(x)$ requires only a product over $t$ in the set of the observed event times.} Afterwards, we consider the logistic likelihood\n\t\\[\n\t\\hat L_{n,1}(\\gamma)=\\prod_{i=1}^n \\phi(\\gamma,X_i)^{1-\\hat\\pi(X_i)}(1-\\phi(\\gamma,X_i))^{\\hat\\pi(X_i)}{,}\n\t\\]\n\tand define $\\hat\\gamma_n$ as the maximizer of \n\t\\begin{equation}\n\t\t\\label{def:hat_L_gamma}\n\t\t\\log \\hat{L}_{n,1}(\\gamma)=\\sum_{i=1}^n\\Big\\{\\left[1-\\hat\\pi(X_i)\\right] \\log \\phi(\\gamma,X_i)+\\hat\\pi(X_i)\\log \\left[1-\\phi(\\gamma,X_i)\\right]\\Big\\}.\n\t\\end{equation}\n\tExistence and uniqueness of $\\hat\\gamma_n$ hold under the same conditions as for the maximum likelihood estimator in the binary outcome regression model where $1-\\hat\\pi(X_i)$ is replaced by {the outcome $B_i$}. For example, in the logistic model, it is required that the matrix with rows $X_i'$, $i=1,\\ldots,n$, has full column rank.\n\n\t\\subsection{The parametric\/Cox mixture cure model}\n\t\\label{sec:Cox}\n\t\n\t\\textit{Step 2.} In the second step, we keep $\\hat\\gamma_n$ fixed and estimate the latency parameters by maximum likelihood. For the Cox PH model \\eqref{SuCox}, the scaled log-likelihood of the observed data is\n\t\\begin{equation}\n\t\t\\label{def:hat_l_cox}\n\t\t\\begin{split}\n\t\t\t\\hat{l}_n(\\beta,\\Lambda,\\gamma)=\\frac{1}{n}\\sum_{i=1}^{n}\\Big[&\\Delta_i\\Big\\{\\log\\phi(\\gamma,X_i)+\\mathds{1}_{\\{Y_i<\\tau_0\\}}\\left(\\log\\Delta\\Lambda(Y_i)+\\beta' Z_i\\right)-\\Lambda(Y_i)\\exp(\\beta' Z_i)\\Big\\}\\\\\\\\\n\t\t\t&+(1-\\Delta_i)\\log\\left(1-\\phi(\\gamma,X_i)+\\phi(\\gamma,X_i)\\exp\\left(-\\Lambda(Y_i)\\exp(\\beta' Z_i)\\right)\\right)\\Big],\n\t\t\\end{split}\n\t\\end{equation}\n\twhere the first term equals $\\Delta_i\\log\\{\\phi(\\gamma,X_i)f_u(Y_i|Z_i;\\beta,\\Lambda)\\}$, with $f_u(\\cdot|z;\\beta,\\Lambda)$ the conditional density of $T_0$ given $Z=z$ and $B=1$, and where the maximization with respect to $\\Lambda$ is over nondecreasing step functions (with $\\Lambda(t)=\\infty$ for $t>\\tau_0$), with jumps of size $\\Delta \\Lambda$ at the event times. {The indicator of the event $\\{Y_i<\\tau_0\\}$ in the first term is needed in case the distribution of the event times has a jump at $\\tau_0$ meaning that $\\mathbb{P}(T_0=\\tau_0|Z)>0$. 
In such a case $f_u(\\tau_0|Z;\\beta,\\Lambda)=\\exp(-\\Lambda(\\tau_0)e^{\\beta'Z})$ where $\\Lambda(\\tau_0)=\\lim_{t\\uparrow\\tau_0}\\Lambda(t)$. Otherwise, if $\\mathbb{P}(T_0=\\tau_0|Z)=0$, then for all uncensored observations we have $\\mathds{1}_{\\{Y<\\tau_0\\}}=1$ with probability one. Thus, the presence of the indicator function can be neglected. }\n\tAs in \\cite{Lu2008}, it can be shown that \n\t\\begin{equation}\\label{def_MLE}\n\t\t(\\hat\\beta_n, \\hat\\Lambda_n)= \\arg\\max_{\\beta, \\Lambda }\n\t\t\\hat{l}_n(\\beta,\\Lambda,\\hat\\gamma_n)\n\t\\end{equation}\n\texists and is finite. Moreover, for any given $\\beta$ and $\\gamma$, the ${\\Lambda}_{n,\\beta,\\gamma}$ which maximizes $\\hat{l}_n(\\beta,\\Lambda,\\gamma)$ {in \\eqref{def:hat_l_cox}}, \n\twith respect to $\\Lambda$ with jumps at the event times,\n\tcan be characterized as \n\t\\begin{equation}\n\t\t\\label{eqn:hat_lambda_n}\n\t\t{\\Lambda}_{n,\\beta,\\gamma}(t)=\\frac{1}{n}\\sum_{i=1}^{n}\\frac{\\Delta_i\\mathds{1}_{\\{Y_i\\leq t,\\, Y_i<\\tau_0\\}}}{\\frac{1}{n}\\sum_{j=1}^n\\mathds{1}_{\\{Y_i\\leq Y_j\\leq\\tau_0\\}}\\exp(\\beta'Z_j)\\left\\{{\\Delta_j}+(1-{\\Delta_j})g_j(Y_j,{\\Lambda}_{n,\\beta,\\gamma},\\beta,\\gamma)\\right\\}} ,\n\t\\end{equation}\n\twhere\n\t\\begin{equation}\n\t\t\\label{def:g_j}\n\t\tg_j(t,\\Lambda,\\beta,\\gamma)=\\frac{\\phi(\\gamma,X_j)\\exp\\left(-\\Lambda(t) \\exp\\left(\\beta' Z_j\\right)\\right)}{1-\\phi(\\gamma,X_j)+\\phi(\\gamma,X_j)\\exp\\left(-\\Lambda(t) \\exp\\left(\\beta' Z_j\\right)\\right)}.\n\t\\end{equation}\n\tNext, we define\n\t$$\n\t\\hat\\beta_n = \\arg\\max_{\\beta}\\hat{l}_n(\\beta,{\\Lambda}_{n,\\beta,\\hat\\gamma_n} ,\\hat\\gamma_n) \\quad \\text{and} \\quad \\hat\\Lambda_n = {\\Lambda}_{n,\\hat\\beta_n,\\hat\\gamma_n}.\n\t$$\n\t\n\tTo compute $(\\hat\\beta_n,\\hat\\Lambda_n)$ we use an iterative algorithm based on profiling. 
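As a concrete, purely illustrative sketch of what one round of this profiling iteration computes, the cure-adjusted weights and the Breslow-type update of $\\Lambda$ can be written in a few lines; the helper name and array-based inputs are our own assumptions, not the authors' implementation, and the $\\tau_0$ bookkeeping is omitted.

```python
import numpy as np

def profile_iteration(Y, Delta, Z, phi_hat, beta, Lam_at_Y):
    """One round of the profiling iteration with gamma_hat held fixed.

    Y, Delta: observed times and censoring indicators; Z: latency covariates;
    phi_hat[i] = phi(gamma_hat, X_i); Lam_at_Y[i] = current Lambda(Y_i).
    Returns the cure-adjusted weights w and the updated Lambda evaluated at
    each Y_i (assumed helper; the tau_0 truncation is omitted for simplicity).
    """
    risk = np.exp(Z @ beta)
    S_u = np.exp(-Lam_at_Y * risk)               # current S_u(Y_i | Z_i)
    # w_i = Delta_i + (1 - Delta_i) * g_i: expected uncured status, cf. g_j above
    w = Delta + (1 - Delta) * phi_hat * S_u / (1 - phi_hat + phi_hat * S_u)
    # Breslow-type update: jump of Lambda at each uncensored time Y_i
    jumps = np.array([
        1.0 / np.sum(w * risk * (Y >= y)) if d == 1 else 0.0
        for y, d in zip(Y, Delta)
    ])
    Lam_new = np.array([jumps[(Y <= y) & (Delta == 1)].sum() for y in Y])
    return w, Lam_new
```

In the actual algorithm, $\\beta$ is also updated at each round by maximizing the weighted partial likelihood with weights $w$, and the weights are recomputed from the updated $(\\beta,\\Lambda)$ until convergence.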
\n\tTo be precise, we start with initial values which are the maximum partial likelihood estimator and the Breslow estimator (as if there were no cure fraction) and we iterate between the next two steps until convergence:\n\t\\begin{itemize}\n\t\t\\item[{a)}] Compute the weights \n\t\t\\[\n\t\tw_j^{(m)}={\\Delta_j}+(1-{\\Delta_j})\\frac{\\phi(\\hat\\gamma_n,X_j)\\hat{S}^{(m)}_u(Y_j|Z_j)}{1-\\phi(\\hat\\gamma_n,X_j)+\\phi(\\hat\\gamma_n,X_j)\\hat{S}^{(m)}_u(Y_j|Z_j)},\n\t\t\\]\n\t\twhere\n\t\t\\[\n\t\t\\hat{S}^{(m)}_u(Y_j|Z_j)=\\exp\\left(-\\hat{\\Lambda}^{(m)}_n(Y_j) \\exp\\left(\\hat\\beta^{(m)'}_n Z_j\\right)\\right),\n\t\t\\]\n\t\tusing the estimators $\\hat{\\Lambda}^{(m)}_n$, $\\hat\\beta^{(m)}_n$ from the previous iteration. \n\t\t\\item[{b)}] Using the previous weights, update the estimators for $\\Lambda$ and $\\beta$, i.e. $\\hat\\beta^{(m+1)}_n$ is the maximizer of\n\t\t\\[\n\t\t\\prod_{i=1}^n \\left\\{\\frac{e^{\\beta'Z_i}}{\\sum_{{Y_k\\geq Y_i }}w_k^{(m)}e^{\\beta'Z_k}}\\right\\}^{\\Delta_i}\n\t\t\\]\n\t\tand\n\t\t\\begin{equation}\n\t\t\t\\label{eqn:lambda}\n\t\t\t\\hat\\Lambda_n^{(m+1)}(t)=\\sum_{i=1}^n\\frac{\\Delta_i\\mathds{1}_{\\{Y_i\\leq t,\\, Y_i<\\tau_0\\}}}{\\sum_{j=1}^n\\mathds{1}_{\\{Y_i\\leq Y_j\\leq\\tau_0\\}} w_j^{(m)}\\exp\\left(\\hat{\\beta}_n^{(m+1)'} Z_j\\right)}.\n\t\t\\end{equation}\n\t\\end{itemize}\n\t{The update of $\\Lambda$ and $\\beta$ in {Step (b)} coincides with the maximization step of the EM algorithm and the weights $w^{(m)}$ correspond to the expectation of the latent variable $B$ given the observed data and the current parameter values. However, unlike in maximum likelihood estimation \\citep{ST2000}, we are keeping $\\hat\\gamma_n$ fixed while performing this iterative algorithm.} {The estimator $\\hat\\Lambda_n$ seems to depend on the unknown $\\tau_0$. However, with data at hand, one could easily proceed without knowing $\\tau_0$. 
Indeed, if there are ties at the last uncensored observation, then $\\tau_0$ is revealed by the data. On the other hand, if there are no ties, all uncensored observations will be smaller than $\\tau_0$, hence there is no need to know $\\tau_0$.}\n\t\n\n\t\n\tAs suggested in \\cite{taylor95,ST2000}, we impose the zero-tail constraint, meaning that $\\hat{S}_u^{(m)}$ is forced to be equal to zero beyond the last event. In this way, all censored observations in the plateau are assigned to the cured group. \n\t\n\n\t\n\t\n\t\n\t\\section{Asymptotic results}\n\t\\label{sec:asymptotics}\n\t\n\tWe first explain {why} presmoothing {allows for asymptotic results under more realistic conditions} in semiparametric mixture cure models. \n\tNext, we\n\tshow consistency and asymptotic normality of the proposed estimators $\\hat{\\gamma}_n$, $\\hat\\beta_n$ and $\\hat\\Lambda_n$ \n\t{for the parametric\/Cox \n\t\tmixture cure model}\n\twhen, in Step 1, we use a general nonparametric estimator $\\hat\\pi$ of $\\pi_0$ that satisfies certain assumptions. Afterwards, we verify these conditions for the particular estimator $\\hat\\pi$ in \\eqref{def:hat_pi}. \n\tSome of the proofs can be found in Section~\\ref{sec:appendix} and the rest in the online Supplementary Material.\n\tThe assumptions mentioned in Section \\ref{sec:model} are assumed to be satisfied throughout this section. {In addition, $Var(Z)$ is assumed to have full rank. }\n\t\n\n\t\n\t\\subsection{A challenge with mixture cure models}\n\t\n\tTo derive asymptotic results, most of the existing literature has assumed that\n\t\\begin{equation}\n\t\t\\label{eqn:jump_cond}\n\t\t\\inf_{z}\\mathbb{P}(T_0\\geq\\tau_0|Z=z)>0,\n\t\\end{equation}\n\t\\citep{Lu2008,PK2019}. In nonparametric approaches such a condition keeps the denominators away from zero. 
In the parametric\/Cox \n\tmixture cure model, it guarantees that the baseline distribution stays bounded on the compact support $[0,\\tau_0]$.\n\t{However, condition \\eqref{eqn:jump_cond} implies that $\\inf_{z}\\mathbb P (Y=\\tau_0{,\\Delta=1}|Z = z) >0$, a condition which is rarely satisfied in real-data applications. \n\t\t\n\t\tOne could imagine that, instead of imposing condition \\eqref{eqn:jump_cond}, one could proceed as follows: first restrict to events on $[0,\\tau^*]$ for some \n\t\t$\\tau^* < \\tau_0$ such that \n\t\t\\begin{equation}\n\t\t\t\\label{eqn:no_jump_cond}\n\t\t\t\\inf_{z}\\mathbb{P}(Y\\geq\\tau^*{,\\Delta=1}|Z=z)>0,\n\t\t\\end{equation}\n\t\tnext derive the asymptotics, and finally let $\\tau^*$ tend to $\\tau_0$. This idea is used, for instance, in the Cox PH model; see \\cite{FleHar}, Chapter 8, or \\cite{AndGill82}. However, this idea does not seem to work for mixture cure models without suitable adaptation. This is because it implicitly requires \n\t\tthat $\\beta_0$ and $\\Lambda_0$ are identifiable from the restricted data. Here, identifiability means that the true values $\\beta_0$ and $\\Lambda_0$ of the parameters maximize the expectation of the criterion maximized to obtain the estimators.\n\t\tTwo aspects have to be taken into account when analyzing this identifiability. The first aspect is related to the parameter identifiability in the semiparametric model for $T_0$ when the events are restricted to $[0,\\tau^*]$. This property is satisfied in common models; in particular, it holds in the Cox PH model as soon as $Var(Z)$ has full rank. \n\t\tThe second aspect is the additional complexity induced by the mixture with a cure fraction. If the cure fraction is unknown and one decides to restrict to events on $[0,\\tau^*]$, the parameter identifiability is likely lost because the events $\\{T_0\\in (\\tau^*, \\tau_0]\\}$ and $\\{T=\\infty\\}$ are not distinguishable. 
The usual remedy for this is to impose \\eqref{eqn:jump_cond}, so that $\\tau^*$ can be taken equal to $\\tau_0$. \n\t\t\n\t\t\n\t\tPresmoothing allows us to avoid condition \\eqref{eqn:jump_cond} and thus fills the gap between the technical conditions and the reality of the data. This is possible because, when using presmoothing, the conditional probability of the event $\\{T=\\infty\\}$ is identified by other means. We are thus able to prove the consistency of $\\hat \\beta$ and $\\hat\\Lambda$ without imposing \\eqref{eqn:jump_cond}. Deriving the asymptotic normality without \\eqref{eqn:jump_cond} remains an open problem which will be addressed elsewhere. \n\t\t\n\t\t\n\t\t\\subsection{Consistency}\n\t\t\n\t\tWe first prove consistency of $\\hat\\gamma_n$ and then use that result to obtain consistency of $\\hat\\Lambda_n$ and $\\hat\\beta_n$. In order to proceed with our results, the following conditions will be used.\n\t\t\\begin{itemize}\n\t\t\t\\item[(AC1)] $\\sup_{x\\in\\mathcal{X}}\\left|\\hat\\pi(x)-\\pi_0(x)\\right|\\to 0$ almost surely. \n\t\t\t\\item[(AC2)] The parameters $\\beta_0$ and $\\gamma_0$ lie in the interior of compact sets $B\\subset\\mathbb{R}^q$, $G\\subset\\mathbb{R}^p$.\n\t\t\t\\item[(AC3)] There exist some constants $a>0$, $c>0$ such that\n\t\t\t\\[\n\t\t\t\\left|\\phi(\\gamma_1,x)-\\phi(\\gamma_2,x)\\right|\\leq c\\Vert\\gamma_1-\\gamma_2\\Vert^a,\\qquad\\forall\\gamma_1,\\gamma_2\\in G,\\,\\forall x\\in\\mathcal{X},\n\t\t\t\\]\n\t\t\twhere $\\Vert\\cdot\\Vert$ denotes the Euclidean distance.\n\t\t\t\\item[(AC4)] $\\inf_{\\gamma\\in G}\\inf_{x\\in\\mathcal{X}}\\phi(\\gamma,x)>0$ and $\\sup_{\\gamma\\in G}\\sup_{x\\in\\mathcal{X}}\\phi(\\gamma,x)<1$.\n\t\t\t\\item[(AC5)] The covariates are bounded: $\\mathbb{P}\\left(\\Vert Z\\Vert<{m} \\text{ and } \\Vert X\\Vert<{m}\\right)=1$ for some $m>0$.\n\t\t\t\\item[(AC6)] {The baseline hazard function $\\lambda_0(t)=\\Lambda'_0(t)$} is strictly positive and continuous on $[0,{\\tau_0)}$. 
\n\t\t\n\t\t\t\\item[(AC7)] With probability one, the conditional distribution function of the censoring times $F_C(t|x,z)$ is continuous in $t$ on $[0,{\\tau_0}]$ and \n\t\t\tthere exists a constant $C>0$ such that\n\t\t\t$$\n\t\t\t\\inf_{0\\leq t_1<t_2\\leq\\tau_0}\\frac{F_C(t_2|x,z)-F_C(t_1|x,z)}{t_2-t_1}>C.\n\t\t\t$$\n\t\t\\end{itemize}\n\t\t{(AC1) is a minimal assumption given that we want to match $\\phi(\\gamma,\\cdot)$ to $\\hat \\pi (\\cdot)$. \n\t\t\t(AC2) to (AC4) are mild conditions satisfied by usual binary regression models, such as the logistic model, and (AC5) is always satisfied in practice for large $m$. }\n\t\t\\begin{theorem}\n\t\t\t\\label{theo:consistency}\n\t\t\t{Let the estimator $\\hat\\gamma_n$ be defined as in (\\ref{def:hat_L_gamma}).}\n\t\t\tAssume that \n\t\t\t(AC1)-(AC4) hold. Then, $\\hat\\gamma_n\\to\\gamma_0$ almost surely.\n\t\t\\end{theorem}\n\t\t\\begin{theorem}\n\t\t\t\\label{theo:consistency2}\n\t\t\tLet the estimators $\\hat\\beta_n$ and $\\hat\\Lambda_n$ be defined as in Section~\\ref{sec:Cox}.\n\t\t\tAssume that\n\t\t\t(AC1)-(AC7) hold. Then, with probability one,\n\t\t\t$\\Vert\\hat\\beta_n-\\beta_0\\Vert\\to 0$,\n\t\t\twhere $\\Vert\\cdot\\Vert$ denotes the Euclidean distance. Moreover, \n\t\t\tfor any $\\tau^*\\leq \\tau_0$\n\t\t\tsatisfying \\eqref{eqn:no_jump_cond}, with probability one,\n\t\t\t\\[\n\t\t\t\\sup_{t\\in[0,{\\tau^*}]}\\left|\\hat\\Lambda_n(t)-\\Lambda_0(t)\\right|\\to 0.\n\t\t\t\\]\n\t\t\\end{theorem}\n\t\t{When condition \\eqref{eqn:jump_cond} is satisfied and $\\tau^*=\\tau_0$ in the previous theorem, we are referring to the continuous version of $\\Lambda_0$, i.e. $\\Lambda_0(\\tau_0)=\\lim_{t\\uparrow\\tau_0}\\Lambda_0(t)$. Note that, by definition, we also have $\\hat\\Lambda_n(\\tau_0)=\\lim_{t\\uparrow\\tau_0}\\hat\\Lambda_n(t)$. } \n\t\t\\subsection{Asymptotic normality}\n\t\tWe first derive asymptotic normality of $\\hat\\gamma_n$ following the approach in \\cite{chen2003}. 
Theorem 2 in that paper provides sufficient conditions for the $\\sqrt{n}$ normality of parametric estimators obtained by minimizing an objective function that depends on a preliminary infinite dimensional estimator $\\hat\\pi$. In our case, since $\\hat\\gamma_n$ solves\n\t\t\\[\n\t\t\\frac{1}{n}\\nabla_\\gamma\\log \\hat{L}_{n,1}(\\gamma)=0,\n\t\t\\] \n\t\twhere $\\nabla_\\gamma$ denotes the vector-valued partial differentiation operator with respect to the components of $\\gamma$, it follows that $\\hat\\gamma_n$ minimizes the function \n\t\t\\[\n\t\t\\left\\Vert\\frac{1}{n}\\nabla_\\gamma\\log \\hat{L}_{n,1}(\\gamma)\\right\\Vert=\\left\\Vert\\frac{1}{n}\\sum_{i=1}^n m(X_i;\\gamma,\\hat\\pi)\\right\\Vert,\n\t\t\\]\n\t\twhere\n\t\t\\begin{equation}\n\t\t\t\\label{def:m}\n\t\t\tm(x;\\gamma,\\pi)=\\left[\\frac{1-\\pi(x)}{\\phi(\\gamma,x)}-\\frac{\\pi(x)}{1-\\phi(\\gamma,x)}\\right]\\nabla_\\gamma\\phi(\\gamma,x).\n\t\t\\end{equation}\n\t\tHence, we only need to check that the conditions of Theorem 2 in \\cite{chen2003} are satisfied. 
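For the logistic model, the score in \\eqref{def:m} takes a simple closed form: since $\\nabla_\\gamma\\phi(\\gamma,x)=\\phi(\\gamma,x)(1-\\phi(\\gamma,x))x$, the bracketed factor collapses and $m(x;\\gamma,\\pi)=(1-\\pi(x)-\\phi(\\gamma,x))\\,x$. The following snippet (illustrative only, with made-up numbers) checks this identity against a numerical gradient of the summand of $\\log\\hat{L}_{n,1}$:

```python
import numpy as np

def sigmoid(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def m_score(x, gamma, pi_x):
    # m(x; gamma, pi) with logistic phi: grad_gamma phi = phi * (1 - phi) * x
    phi = sigmoid(x @ gamma)
    return ((1 - pi_x) / phi - pi_x / (1 - phi)) * phi * (1 - phi) * x

def loglik_term(x, gamma, pi_x):
    # summand of log L_{n,1}: (1 - pi) log phi + pi log(1 - phi)
    phi = sigmoid(x @ gamma)
    return (1 - pi_x) * np.log(phi) + pi_x * np.log(1 - phi)

# made-up evaluation point: first component of x is the intercept
x = np.array([1.0, 0.3, -0.7])
gamma = np.array([0.2, -0.5, 1.1])
pi_x = 0.35
eps = 1e-6
num_grad = np.array([
    (loglik_term(x, gamma + eps * e, pi_x)
     - loglik_term(x, gamma - eps * e, pi_x)) / (2 * eps)
    for e in np.eye(3)
])
analytic = m_score(x, gamma, pi_x)
assert np.allclose(analytic, num_grad, atol=1e-6)
assert np.allclose(analytic, (1 - pi_x - sigmoid(x @ gamma)) * x)
```

Zeroing the empirical average of these scores over the sample is exactly the first-order condition defining $\\hat\\gamma_n$.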
\n\t\tTo do that, we need the following assumptions which are stronger than the previous (AC1)-(AC4).\n\t\t\\begin{itemize}\n\t\t\t\\item[(AN1)] The parameter $\\gamma_0$ lies in the interior of a compact set $G\\subset\\mathbb{R}^p$ and, for each $x\\in\\mathcal{X}$, the function $\\gamma\\mapsto\\phi(\\gamma,x)$ is twice continuously differentiable with uniformly bounded derivatives in $G\\times\\mathcal{X}$ and satisfies (AC4).\n\t\t\t\\item[(AN2)] $\\pi_0(\\cdot)$ belongs to a class of functions $\\Pi$ such that \n\t\t\t\\begin{equation*}\n\t\t\t\t\\label{eqn:entropy}\n\t\t\t\t\\int_0^\\infty \\sqrt{\\log N(\\epsilon,\\Pi,\\Vert\\cdot\\Vert_{\\infty})}\\,\\mathrm{d}\\epsilon<\\infty,\n\t\t\t\\end{equation*}\n\t\t\twhere $N(\\epsilon,\\Pi,\\Vert\\cdot\\Vert_{\\infty})$ denotes the $\\epsilon$-covering number of the space $\\Pi$ with respect to $\\Vert\\pi\\Vert_{{\\infty}}=\\sup_{x\\in\\mathcal{X}}|\\pi(x)|$.\n\t\t\t\\item[(AN3)] The matrix $\\mathbb{E}\\left[\\nabla_\\gamma\\phi(\\gamma_0,X)\\nabla_\\gamma\\phi(\\gamma_0,X)'\\right]$ is positive definite.\n\t\t\t\\item[(AN4)] The estimator $\\hat\\pi(\\cdot)$ satisfies the following properties:\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item[(i)] $\\mathbb{P}\\left(\\hat\\pi(\\cdot)\\in\\Pi\\right)\\to 1$.\n\t\t\t\t\\item[(ii)] $\\left\\Vert\\hat\\pi(x)-\\pi_0(x)\\right\\Vert_{{\\infty}}=o_P(n^{-1\/4})$. \n\t\t\t\t\\item[(iii)] There exists a function $\\Psi$ such that \n\t\t\t\t\\[\n\t\t\t\t\\begin{split}\n\t\t\t\t\t&\\mathbb{E}^*\\left[\\left(\\hat\\pi(X)-\\pi_0(X)\\right)\\left(\\frac{1}{\\phi(\\gamma_0,X)}+\\frac{1}{1-\\phi(\\gamma_0,X)}\\right)\\nabla_\\gamma\\phi(\\gamma_0,X)\\right]\\qquad\\qquad\\\\\n\t\t\t\t\t&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad=\\frac{1}{n}\\sum_{i=1}^n\\Psi(Y_i,\\Delta_i,X_i)+R_n,\n\t\t\t\t\\end{split}\n\t\t\t\t\\]\n\t\t\t\twhere $\\mathbb{E}^*$ denotes the conditional expectation given the sample, taken with respect to the generic variable $X$. 
Moreover, $\\mathbb{E}[\\Psi(Y,\\Delta,X)]=0$ and $\\Vert R_n\\Vert=o_P(n^{-1\/2})$.\n\t\t\t\\end{itemize}\n\t\t\\end{itemize}\n\t\t\\begin{theorem}\n\t\t\t\\label{theo:asymptotic_normality}\n\t\t\t{Let the estimator $\\hat\\gamma_n$ be defined as in (\\ref{def:hat_L_gamma}).}\n\t\t\tAssume that\n\t\t\n\t\t\t(AN1)-(AN4) hold. Then, \\[\n\t\t\tn^{1\/2}\\left(\\hat\\gamma_n-\\gamma_0\\right)\\xrightarrow{d}N(0,\\Sigma_\\gamma)\n\t\t\t\\]\n\t\t\twith covariance matrix $\\Sigma_\\gamma$ defined { in~(A28).}\n\t\t\\end{theorem}\n\t\t{For deriving the asymptotic distribution of $\\hat\\beta_n$ and $\\hat\\Lambda_n$ we assume, for simplicity, that condition \\eqref{eqn:jump_cond} is satisfied. In such case, in Theorem~\\ref{theo:consistency2} we can take $\\tau^*=\\tau_0$ and obtain uniform strong consistency of $\\hat\\Lambda_n$ on the whole support $[0,\\tau_0]$. We believe that, at the price of additional technicalities, asymptotic distributional theory can be obtained also without imposing \\eqref{eqn:jump_cond}, as we did for the consistency in Theorem~\\ref{theo:consistency2}. This conjecture is supported by simulations but we leave the problem to be addressed by future research. }\n\t\t\\begin{theorem}\n\t\t\t\\label{theo:asymptotic_normality2}\n\t\t\tLet the estimators $\\hat\\beta_n$ and $\\hat\\Lambda_n$ be defined as in Section~\\ref{sec:Cox}. \tAssume that {condition~\\eqref{eqn:jump_cond},}\n\t\t\t(AN1)-(AN4) and {(AC2)}, (AC5)-(AC7) hold. 
Then, \n\t\t\t\\[\n\t\t\t\\left\\langle\\sqrt{n}\\left(\\hat\\Lambda_n-\\Lambda_0\\right),\\sqrt{n}\\left(\\hat\\beta_n-\\beta_0\\right)\\right\\rangle\\to G \n\t\t\t\\]\n\t\t\tweakly in $l^\\infty(\\mathcal{H}_\\mathfrak{m})$, where {$\\mathcal{H}_\\mathfrak{m}$ is a functional space defined in Section \\ref{sec:appendix_Cox}}, { $l^\\infty(\\mathcal{H}_\\mathfrak{m})$ denotes the space of bounded real-valued functions on $\\mathcal{H}_\\mathfrak{m}$,} $G$ is a tight Gaussian process in $l^\\infty(\\mathcal{H}_\\mathfrak{m})$ with mean zero and covariance process given {in~(A39)} { and for $h=(h_1,h_2)\\in \\mathcal{H}_\\mathfrak{m}$\n\t\t\t\t\\[\n\t\t\t\t\\langle \\Lambda,\\beta\\rangle(h)=\\int_0^{\\tau_0}h_1(t)\\,\\mathrm{d}\\Lambda(t)+h_2'\\beta.\n\t\t\t\t\\]\t\n\t\t\t} \t\n\t\t\\end{theorem}\n\t{The asymptotic variances of each component of $\\hat\\beta_n$ and of $\\hat\\Lambda_n(t)$ can be obtained from the covariance process in~(A39) by taking $h_1(t)=0$ for all $t$ and $h_2=e_i$ (the $i$th unit vector) or $h_2=0$ and $h_1(s)=\\mathds{1}_{\\{s\\leq t\\}}$. We leave the details about these covariance matrices in the Supplementary Material because they have quite complicated expressions that require definitions of several other quantities. Even though it could be possible in principle to estimate the asymptotic standard errors through plug-in estimators and numerical inversion, we think that this is not feasible in practice and we do not pursue it further. Instead, we use a bootstrap procedure for estimation of the standard errors in the application discussed in Section~\\ref{sec:application}. However, the maximum likelihood estimators are not more favorable in this regard. For example, in the logistic\/Cox model, the proposed estimators of the asymptotic variance in \\cite{Lu2008} also involve solving numerically complicated nonlinear equations. 
For this reason, bootstrap is used in practice to estimate the standard errors even for the maximum likelihood estimators. }\n\t\n\t{By considering a two-step procedure, where estimation of the incidence parameters is performed independently of the latency model, we expect to lose efficiency of the estimators. However, this is not a major concern because our purpose is to provide an alternative estimation method that performs better than maximum likelihood estimation with sample sizes usually encountered in practice. Efficiency is a key concept for the asymptotics of the estimators, and in general there is no particular need for another method since the MLE would be the best choice. However, in many nonlinear models, like the mixture cure models, the asymptotic approximation is poor and efficiency becomes less relevant for the sample sizes encountered with real data. Hence, we choose to trade efficiency for better performance in a wider range of applications. }\n\t\t\\subsection{Verification of assumptions for $\\hat\\pi$}\n\t\tNext, we show that our assumptions \n\t\t(AN1)-(AN4) of the asymptotic theory are satisfied for the nonparametric estimator $\\hat\\pi$ defined in \\eqref{def:hat_pi} and the logistic model in \\eqref{eqn:logistic}. For simplicity, since we use results available in the literature only for a one-dimensional covariate, we consider only cases with one continuous covariate. In order for assumption (AN4) to be satisfied we need the following conditions:\n\t\t\\begin{itemize}\n\t\t\t\\item[(C1)] The bandwidth $b$ is such that $nb^4\\to 0$ and $nb^{3+\\xi}\/(\\log b^{-1}) \\to \\infty$ for some $\\xi>0$.\n\t\t\t\\item[(C2)] The support $\\mathcal{X}$ of $X$ is a compact subset of $\\mathbb{R}$. 
The density $f_X(\\cdot)$ of $X$ is bounded away from zero and twice differentiable with bounded second derivative.\n\t\t\t\\item[(C3)] The kernel $k$ is a twice continuously differentiable, symmetric probability density function with compact support and $\\int uk(u)\\,\\mathrm{d} u=0$.\n\t\t\t\\item[(C4)] (i) The functions $H([0,t]|x)$, $H_1([0,t]|x)$ are twice differentiable with respect to $x$, with uniformly bounded derivatives for all $t\\leq \\tau_0$, $x\\in\\mathcal{X}$. Moreover, there exist continuous nondecreasing functions $L_1$, $L_2$, $L_3$ such that $L_i(0)=0$, $L_i(\\tau_0)<\\infty$ and \tfor all $t, s\\in[0,\\tau_0]$, $x\\in\\mathcal{X}$,\n\t\t\t\\[\n\t\t\t\\begin{split}\n\t\t\t\t\\left|H_c(t|x)-H_c(s|x)\\right|\\leq \\left|L_1(t)-L_1(s)\\right|,&\\quad \\left|H_{1c}(t|x)-H_{1c}(s|x)\\right|\\leq \\left|L_1(t)-L_1(s)\\right|\\\\\n\t\t\t\t\\left|\\frac{\\partial H_c(t|x)}{\\partial x}-\\frac{\\partial H_c(s|x)}{\\partial x}\\right|&\\leq \\left|L_2(t)-L_2(s)\\right|\\\\\n\t\t\t\t\\left|\\frac{\\partial H_{1c}(t|x)}{\\partial x}-\\frac{\\partial H_{1c}(s|x)}{\\partial x}\\right|&\\leq \\left|L_3(t)-L_3(s)\\right|,\n\t\t\t\\end{split}\n\t\t\t\\]\n\t\t\twhere the subscript c denotes the continuous part of a function.\n\t\t\t\n\t\t\t(ii) The jump points for the distribution function $G(t|x)$ of the censoring times given the covariate, are {finite and} the same for all $x$. The partial derivative of $G(t|x)$ with respect to $x$ exists and is uniformly bounded for all $t\\leq\\tau_0$, $x\\in\\mathcal{X}$. Moreover, \n\t\t\tthe partial derivative with respect to $x$ of $F(t|x)$ {(distribution function of the survival times $T$ given $X=x$)} exists and is uniformly bounded for all $t\\leq\\tau_0$, $x\\in\\mathcal{X}$. \n\t\t\t\\item[(C5)]{The survival time $T$ and the censoring time $C$ are independent given $X$.}\n\t\t\\end{itemize}\n\t\t{(C1) to (C5) are conditions guaranteeing the rates of convergence and the i.i.d. 
representation \\citep{DA2002}}.\n\t\tIn the case of discrete covariates, we additionally require that they have only a finite number of atoms. Assumption (C5) is needed because we are dealing with the distribution of $T$ conditional only on the covariate $X$ (since the cure rate depends only on $X$).\n\t\t\n\t\t\\begin{theorem}\n\t\t\t\\label{theo:pi_hat}\n\t\t\tUnder the conditions (C1)-(C5), the assumptions (AN1)-(AN4) hold true for the logistic model {and the estimator $\\hat\\pi(x)$ defined in (\\ref{def:hat_pi}).}\n\t\t\\end{theorem}\n\t\t\n\t\t\n\t\t\n\t\t\\section{Simulation study}\n\t\t\\label{sec:simulations}\n\t\tIn this section we focus on the logistic\/Cox mixture cure model and evaluate the finite sample performance of the proposed method. Comparison is made with the maximum likelihood estimator implemented in the package \\texttt{smcure}. \n\t\t\n\t\t{\n\tWe first illustrate through a brief example the convergence problems of the \\texttt{smcure} estimator. We consider a model where the incidence depends on four independent covariates: $X_1\\sim N(0,2)$, $X_2\\sim \\text{Uniform}(-1,1)$, $X_3\\sim\\text{Bernoulli}(0.8)$, $X_4\\sim\\text{Bernoulli}(0.2)$. The latency depends on $Z_1=X_1$, $Z_2=X_3$ and $Z_3=X_4$. We generate the cure status $B$ as a Bernoulli random variable with success probability $\\phi(\\gamma,X)$ where $\\phi$ is the logistic function and $\\gamma=(0.6,-1,1,2.5,1.2)$. The survival times for the uncured observations are generated according to a Weibull proportional hazards model\n\t\\[\n\tS_u(t|z)=\\exp\\left(-\\mu t^\\rho\\exp(\\beta'z)\\right),\n\t\\]\n\tand are truncated at $\\tau_0=14$ for $\\rho=1.75$, $\\mu=1.5$, $\\beta=(-0.8,0.9,0.5)$. The censoring times are independent of $X$ and $T$. They are generated from the exponential distribution with parameter $\\lambda_C=0.22$ and are truncated at $\\tau=16$. We generate $1000$ datasets according to this model with sample size $n=100$, and we observe that \\texttt{smcure} fails to converge in $43\\%$ of the cases. 
Convergence fails mainly in the $\\gamma$ parameter, with only $17\\%$ of the cases failing to converge also for the $\\beta$ parameter (because of the unreasonable $\\gamma$ estimators). On the other hand, there was no convergence problem in the second step of the presmoothing approach. In addition, even among the cases where smcure converged, the presmoothing approach showed significantly better behavior, as can be seen in Table~\\ref{tab:convergence}. \n\t}\n\\begin{table}[h!]\n\t\\caption{\\label{tab:convergence}Bias, variance and MSE of $\\hat\\gamma$ and $\\hat\\beta$ for \\texttt{smcure} and our approach among the iterations that converged for smcure.}\n\t\\centering\n\n\t\\scalebox{0.85}{\n\t\t\\fbox{\n\t\t\t\\begin{tabular}{rrrrrrr}\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\t&\t& & & & & \\\\[-7pt]\n\t\t&\t\\multicolumn{3}{c}{presmoothing}&\\multicolumn{3}{c}{smcure}\\\\\n\t\t\tPar. & Bias & Var. & MSE & Bias & Var. & MSE \\\\[2pt]\n\t\t\t\t\\hline\n\t\t\t\t& & & & & & \\\\[-8pt]\n\t\t\t\t $\\gamma_1 $ & $-0.113 $ & $0.620 $ & $0.633 $ & $0.200 $ & $8.318 $ & $8.358 $ \\\\\n\t\t\t$\\gamma_2$\t& $-0.073 $ & $0.156$ & $ 0.162 $ & $-0.388 $ & $3.085$ & $3.236 $\\\\\n\t\t\t $\\gamma_3 $ & $-0.071$ & $0.546$ & $0.551 $ & $ 0.280$ & $1.957 $ & $2.035 $ \\\\\n\t\t\t\t$\\gamma_4$ & $ 0.037$ & $1.326$ & $1.327 $ & $0.704$ & $14.395 $ & $14.891 $ \\\\\n\t\t\t\t\t$\\gamma_5$ & $ -0.250$ & $8.398$ & $8.461 $ & $1.621$ & $36.450 $ & $36.945 $ \\\\\n\t\t\t\t $\\beta_1 $ & $-0.014 $ & $0.011 $ & $0.012 $ & $-0.017 $ & $ 0.012$ & $ 0.012$\\\\\n\t\t\t\t$\\beta_2$ & $0.024 $ & $0.064 $ & $ 0.065 $ & $ 0.026 $ & $0.065 $ & $0.065$ \\\\\n\t\t\t\t$\\beta_3$ & $-0.053 $ & $0.165 $ & $ 0.168 $ & $- 0.053 $ & $0.166 $ & $0.169$ \\\\\t\n\t\t\t\\end{tabular}\n\t}}\n\\end{table}\n\n{Hence, in the cases in which smcure exhibits very poor behavior, the presmoothing is obviously superior. 
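For readers who wish to reproduce this illustration, the data-generating mechanism described above can be sketched as follows; the helper name simulate_dataset is ours, and we read "$X_1\\sim N(0,2)$" as standard deviation $2$, in line with the descriptions of Models 3 and 4 below.

```python
import numpy as np

def simulate_dataset(n=100, seed=0):
    """One dataset from the illustrative logistic/Cox model (assumed helper).

    Incidence: logistic with gamma = (0.6, -1, 1, 2.5, 1.2); latency:
    Weibull PH with rho = 1.75, mu = 1.5, beta = (-0.8, 0.9, 0.5);
    censoring: Exp(0.22) truncated at tau = 16; events truncated at tau_0 = 14.
    """
    rng = np.random.default_rng(seed)
    X1 = rng.normal(0.0, 2.0, n)          # read as sd = 2 (assumption)
    X2 = rng.uniform(-1.0, 1.0, n)
    X3 = rng.binomial(1, 0.8, n)
    X4 = rng.binomial(1, 0.2, n)
    X = np.column_stack([np.ones(n), X1, X2, X3, X4])
    Z = np.column_stack([X1, X3, X4])     # Z1 = X1, Z2 = X3, Z3 = X4
    gamma = np.array([0.6, -1.0, 1.0, 2.5, 1.2])
    beta = np.array([-0.8, 0.9, 0.5])
    rho, mu, tau0, tau = 1.75, 1.5, 14.0, 16.0
    phi = 1.0 / (1.0 + np.exp(-(X @ gamma)))
    B = rng.uniform(size=n) < phi         # B = 1: uncured
    # inverse-transform sampling from S_u(t|z) = exp(-mu t^rho e^{beta'z})
    T0 = (-np.log(rng.uniform(size=n)) / (mu * np.exp(Z @ beta))) ** (1.0 / rho)
    T = np.where(B, np.minimum(T0, tau0), np.inf)
    C = np.minimum(rng.exponential(1.0 / 0.22, n), tau)
    Y, Delta = np.minimum(T, C), (T <= C).astype(int)
    return X, Z, Y, Delta
```

Datasets of this form are the input for the comparison reported in Table~\\ref{tab:convergence}.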
Next, we focus on models for which \\texttt{smcure} behaves reasonably (there are convergence problems in less than $3\\%$ of the cases) and show that, even in such scenarios, presmoothing can lead to more accurate results. }\n\t\t\n\t\t\n\t\t \n\t\tWe consider four different models and for each of them various choices of the parameters in order to cover a wide range of scenarios. \n\t\tThe models are as follows. \n\t\t\n\t\t\\textit{Model 1.} Both incidence and latency depend on one covariate $X$, which is uniform on $(-1,1)$. We generate the cure status $B$ as a Bernoulli random variable with success probability $\\phi(\\gamma,X)$ where $\\phi$ is the logistic function. The survival times for the uncured observations are generated according to a Weibull proportional hazards model\n\t\t\\[\n\t\tS_u(t|x)=\\exp\\left(-\\mu t^\\rho\\exp(\\beta x)\\right),\n\t\t\\]\n\t\tand are truncated at $\\tau_0$ for $\\rho=1.75$, $\\mu=1.5$, $\\beta=1$ and $\\tau_0=4$. The censoring times are independent of $X$ and $T$. They are generated from the exponential distribution with parameter $\\lambda_C$ and are truncated at $\\tau=6$. \n\t\t\n\t\t\\textit{Model 2.} Both incidence and latency depend on one covariate $X$ with standard normal distribution. The cure status and the survival times for the uncured observations are generated as in Model 1 for $\\rho=1.75$, $\\mu=1.5$, $\\beta=1$ and $\\tau_0=10$. The censoring times are generated according to a Weibull proportional hazards model\n\t\t\\[\n\t\tS_C(t|x)=\\exp\\left(-\\nu\\mu t^{\\rho}\\exp(\\beta_Cx)\\right),\n\t\t\\] \n\t\tfor $\\beta_C=1$ and various choices of $\\nu$ and are truncated at $\\tau=15$. \n\t\t\n\t\t\\textit{Model 3.} For the incidence we consider three independent covariates: $X_1$ is normal with mean zero and standard deviation $2$, $X_2$ and $X_3$ are Bernoulli random variables with parameters $0.6$ and $0.4$, respectively. 
The latency also depends on three covariates: $Z_1=X_1$, $Z_2$ is a uniform random variable on $(-3,3)$ independent of the previous ones and $Z_3=X_2$. The cure status and the survival times for the uncured observations are generated as in Model 1 for $\\rho=1.75$, $\\mu=1.5$ and different choices of the other parameters. The censoring times are generated independently of the previous variables from an exponential distribution with parameter $\\lambda_C$ \n\t\tand are truncated at $\\tau$, for given choices of $\\lambda_C$ and $\\tau$. \n\t\t\n\t\t\\textit{Model 4.} This setting is obtained by adding an additional continuous covariate to the incidence component of Model 3. To be precise, $X_1$ is normal with mean zero and standard deviation $2$, $X_2$ is uniform on $(-1,1)$ {independent of the other variables}, $X_3$ and $X_4$ are Bernoulli random variables with parameters $0.6$ and $0.4$, respectively. As in Model 3, $Z_1=X_1$, $Z_2$ is a uniform random variable on $(-3,3)$ independent of the previous ones and $Z_3=X_3$. The event and censoring times are generated as in the previous model.\n\t\t\n\t\tFor the four models we choose the values of the unspecified parameters in such a way that the cure rate is around $20\\%$, $30\\%$, or $50\\%$ (corresponding respectively to scenarios $1$, $2$ and $3$) and the censoring rate corresponds to three levels (differing by $5\\%$ from one another). The specification of the parameters and the corresponding censoring rate and percentage of the observations in the plateau are given in Table~\\ref{tab:models}. {Note that, within each scenario, the fraction of the observations in the plateau decreases as the censoring rate increases because more cured observations are censored earlier and as a result are not observed in the plateau. 
This makes the estimation of the cure rate more difficult.} The truncation of the survival and censoring times on $[0,\\tau_0]$ and $[0,\\tau]$ is made in such a way that $\\tau_0<\\tau$ and condition \\eqref{eqn:jump_cond} is satisfied { but in practice\n\t\t\tit is unlikely to observe event times at $\\tau_0$. In this way, we try to find a compromise between theoretical assumptions and real-life scenarios.}\n\t\t\n\t\t\\begin{table}\n\t\t\t\\caption{\t\\label{tab:models}Parameter values and model characteristics for each scenario.}\n\t\t\t\\centering\n\t\t\t\\addtolength{\\tabcolsep}{-4pt}\n\t\t\t\\fbox{%\n\t\t\t\t\\begin{tabular}{ccccccc}\n\t\t\t\t\tModel & Parameters & Scenario & Cens. & Cens. & Cens. & Plateau\\\\\n\t\t\t\t\t& & & level & parameters & rate & \\\\%[2pt]\n\t\t\t\t\t\\hline\n\t\t\t\t\t& & & & & & \\\\[-10pt]\n\t\t\t\t\t& & & $1$ & $\\lambda_C=0.1 $ & $25\\%$ & $15\\% $\\\\\n\t\t\t\t\t& $\\gamma=(1.75, 2)$ & $1 $ & $2$ & $\\lambda_C=0.2 $ & $30\\%$ & $11\\% $\\\\\n\t\t\t\t\t& & & $3$ & $\\lambda_C=0.3 $ & $35\\%$ & $9\\% $\\\\\n\t\t\t\t\t\\cline{2-7}\n\t\t\t\t\t& & & $1$ & $\\lambda_C=0.1 $ & $34\\%$ & $22\\% $\\\\\n\t\t\t\t\t1 & $\\gamma=(1, 1.5)$ & $2$ & $2$ & $\\lambda_C=0.25 $ & $40\\%$ & $15\\% $\\\\\n\t\t\t\t\t& & & $3$ & $\\lambda_C=0.4 $ & $46\\%$ & $10\\% $\\\\\n\t\t\t\t\t\\cline{2-7}\n\t\t\t\t\t& & & $1$ & $\\lambda_C=0.2 $ & $54\\%$ & $32\\% $\\\\\n\t\t\t\t\t& $\\gamma=(0.1, 5)$ & $3 $ & $2$ & $\\lambda_C=0.4 $ & $59\\%$ & $23\\% $\\\\\n\t\t\t\t\t& & & $3$ & $\\lambda_C=0.7 $ & $65\\%$ & $15\\% $\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t& & & & & & \\\\[-10pt]\n\t\t\t\t\t& & & $1$ & $\\nu=1\/15 $ & $25\\%$ & $7\\% $\\\\\n\t\t\t\t\t& $\\gamma=(1.5, 0.5)$ & $1$ & $2$ & $\\nu=1\/7 $ & $30\\%$ & $4\\% $\\\\\n\t\t\t\t\t& & & $3$ & $\\nu=1\/4 $ & $35\\%$ & $2\\% $\\\\\n\t\t\t\t\t\\cline{2-7}\n\t\t\t\t\t& & & $1$ & $\\nu=1\/13 $ & $35\\%$ & $14\\% $\\\\\n\t\t\t\t\t2 & $\\gamma=(1, 1)$ & $2 $ & $2$ & $\\nu=1\/10 $ & $40\\%$ & $9\\% $\\\\\n\t\t\t\t\t& 
& & $3$ & $\\nu=5\/18 $ & $45\\%$ & $6\\% $\\\\\n\t\t\t\t\t\\cline{2-7}\n\t\t\t\t\t& & & $1$ & $\\nu=1\/9 $ & $56\\%$ & $38\\% $\\\\\n\t\t\t\t\t& $\\gamma=(-0.1, 5)$ & $3 $ & $2$ & $\\nu=1\/4$ & $60\\%$ & $30\\% $\\\\\n\t\t\t\t\t& & & $3$ & $\\nu=2\/5 $ & $65\\%$ & $25\\% $\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t& & & & & & \\\\[-10pt]\n\t\t\t\t\t& $\\gamma=(0.5, -1,2.5, 1.2)$ & & $1$ & $\\lambda_C=0.12$ & $25\\%$ & $10\\% $\\\\\n\t\t\t\t\t& $\\beta=( -1,0.5, 1.5)$ & $1$ & $2$ & $\\lambda_C=0.25 $ & $30\\%$ & $6\\% $\\\\\n\t\t\t\t\t& $\\tau_0=30$, $\\tau=35$ & & $3$ & $\\lambda_C=0.45 $ & $35\\%$ & $4\\% $\\\\\n\t\t\t\t\t\\cline{2-7}\n\t\t\t\t\t& $\\gamma=(1,2,1.8, 0.5)$& & $1$ & $\\lambda_C=0.2 $ & $35\\%$ & $16\\% $\\\\\n\t\t\t\t\t3 & $\\beta=(1,0.5, 2)$ & $2 $ & $2$ & $\\lambda_C=0.5 $ & $40\\%$ & $9\\% $\\\\\n\t\t\t\t\t& $\\tau_0=6$, $\\tau=8$ & & $3$ & $\\lambda_C=0.8 $ & $45\\%$ & $6\\% $\\\\\n\t\t\t\t\t\\cline{2-7}\n\t\t\t\t\t&$\\gamma=(-0.8,1.3,1.5,-0.2) $& & $1$ & $\\lambda_C=0.3 $ & $55\\%$ & $24\\% $\\\\\n\t\t\t\t\t& $\\beta=(1,-0.1,0.8)$ & $3 $ & $2$ & $\\lambda_C=0.7 $ & $59\\%$ & $14\\% $\\\\\n\t\t\t\t\t& $\\tau_0=5$, $\\tau=7$ & & $3$ & $\\lambda_C=1.3 $ & $65\\%$ & $8\\% $\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t& & & & & & \\\\[-10pt]\n\t\t\t\t\t& $\\gamma=(0.6,-1,1,2.5,1.2)$ & & $1$ & $\\lambda_C=0.1$ & $25\\%$ & $11\\% $\\\\\n\t\t\t\t\t& $\\beta=( -0.8,0.3,0.5)$ & $1$ & $2$ & $\\lambda_C=0.22 $ & $30\\%$ & $7\\% $\\\\\n\t\t\t\t\t& $\\tau_0=14$, $\\tau=16$ & & $3$ & $\\lambda_C=0.35 $ & $35\\%$ & $5\\% $\\\\\n\t\t\t\t\t\\cline{2-7}\n\t\t\t\t\t& $\\gamma=(0.45,0.5,2{,}1,0.5)$& & $1$ & $\\lambda_C=0.15 $ & $35\\%$ & $11\\% $\\\\\n\t\t\t\t\t4 & $\\beta=(1,0.5, 2)$ & $2 $ & $2$ & $\\lambda_C=0.35 $ & $40\\%$ & $7\\% $\\\\\n\t\t\t\t\t& $\\tau_0=18$, $\\tau=20$ & & $3$ & $\\lambda_C=0.6 $ & $45\\%$ & $5\\% $\\\\\n\t\t\t\t\t\\cline{2-7}\n\t\t\t\t\t&$\\gamma=(-0.22,0.3,-0.4,0.5,-0.2) $& & $1$ & $\\lambda_C=0.2 $ & $55\\%$ & $30\\% $\\\\\n\t\t\t\t\t& 
$\\beta=(0.4,-0.1,0.5)$ & $3 $ & $2$ & $\\lambda_C=0.4 $ & $59\\%$ & $20\\% $\\\\\n\t\t\t\t\t& $\\tau_0=6$, $\\tau=8$ & & $3$ & $\\lambda_C=0.7 $ & $65\\%$ & $12\\% $\\\\\n\t\t\t\t\n\t\t\t\t\\end{tabular}\n\t\t\t}\n\t\t\\end{table}\n\t\t\n\t\t\\begin{table}\n\t\t\t\\caption{\\label{tab:results1_2}Bias, variance and MSE of $\\hat\\gamma$ and $\\hat\\beta$ for \\texttt{smcure} (second rows) and our approach (first rows) in Model 1 and 2.}\n\t\t\t\\centering\n\t\t\n\t\t\t\\scalebox{0.85}{\n\t\t\t\t\\fbox{\n\t\t\t\t\t\\begin{tabular}{ccccrrrrrrrrr}\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-7pt]\n\t\t\t\t\t\t&\t&&&\\multicolumn{3}{c}{Cens. level 1}&\\multicolumn{3}{c}{Cens. level 2}&\\multicolumn{3}{c}{Cens. level 3}\\\\\n\t\t\t\t\t\tMod.&\tn& scen. & Par. & Bias & Var. & MSE & Bias & Var. & MSE & Bias & Var. & MSE\\\\[2pt]\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t1&\t$200$ & $1 $ & $\\gamma_1 $ & $0.001 $ & $0.060 $ & $0.060 $ & $0.020 $ & $0.065 $ & $0.065 $ & $0.005 $ & $0.078 $ & $0.078 $\\\\\n\t\t\t\t\t\t&\t& & & $0.021 $ & $0.063 $ & $ 0.063 $ & $0.050 $ & $0.068 $ & $0.071 $ & $ 0.044$ & $0.084 $ & $0.086 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $-0.034 $ & $0.164 $ & $0.165 $ & $ -0.014 $ & $0.202 $ & $0.202 $ & $-0.051 $ & $0.209 $ & $0.212 $\\\\\n\t\t\t\t\t\t&\t& & & $ 0.026$ & $0.173 $ & $0.173 $ & $ 0.067 $ & $0.222 $ & $0.226 $ & $0.044 $ & $0.229 $ & $0.230 $\\\\\n\t\t\t\t\t\t&\t& & $\\beta $ & $0.008 $ & $0.028 $ & $0.028 $ & $0.015 $ & $ 0.029$ & $ 0.029$ & $0.013 $ & $0.034 $ & $0.035 $\\\\\n\t\t\t\t\t\t&\t& & & $0.007 $ & $0.028 $ & $ 0.028 $ & $ 0.012 $ & $0.029 $ & $0.029 $ & $0.009 $ & $0.035 $ & $0.035 $\\\\\n\t\t\t\t\t\t\\cline{3-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t& $3 $ & $\\gamma_1 $ & $-0.001 $ & $0.059 $ & $0.059 $ & $0.009 $ & $ 0.065$ & $ 0.065$ & $ -0.014 $ & $0.091 $ & $ 0.092$\\\\\n\t\t\t\t\t\t&\t& & & $0.010 $ & 
$0.064 $ & $0.064 $ & $0.029 $ & $0.074 $ & $0.075 $ & $0.037 $ & $0.113 $ & $0.115 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $-0.034 $ & $0.536 $ & $0.537 $ & $-0.111 $ & $ 0.595$ & $0.608 $ & $ -0.085 $ & $ 0.809$ & $0.816 $\\\\\n\t\t\t\t\t\t&\t& & & $0.201 $ & $ 0.649$ & $ 0.689 $ & $0.218 $ & $0.768 $ & $0.816 $ & $0.400 $ & $1.146 $ & $1.306 $\\\\\n\t\t\t\t\t\t&\t& & $\\beta $ & $ 0.011$ & $0.090 $ & $ 0.090 $ & $ 0.024 $ & $ 0.109$ & $0.110 $ & $0.014 $ & $0.128 $ & $0.128 $\\\\\n\t\t\t\t\t\t&\t& & & $ 0.007$ & $0.091 $ & $ 0.091 $ & $0.014 $ & $0.110 $ & $0.110 $ & $-0.001 $ & $0.129 $ & $0.129 $\\\\\n\t\t\t\t\t\t\\cline{2-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t$400$ & $1 $ & $\\gamma_1 $ & $0.001 $ & $0.028 $ & $0.028 $ & $ 0.007 $ & $0.032 $ & $0.032 $ & $0.001 $ & $0.037 $ & $0.037 $\\\\\n\t\t\t\t\t\t&\t& & & $0.015 $ & $0.029 $ & $0.030 $ & $0.027 $ & $0.033 $ & $ 0.034$ & $0.024 $ & $0.039 $ & $0.039 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $-0.024 $ & $0.083 $ & $ 0.084 $ & $-0.004 $ & $ 0.088$ & $ 0.088$ & $ -0.018 $ & $0.107 $ & $0.107 $\\\\\n\t\t\t\t\t\t&\t& & & $ 0.021$ & $0.087 $ & $0.087 $ & $ 0.049 $ & $0.093 $ & $0.095 $ & $0.041 $ & $0.111 $ & $0.113 $\\\\\n\t\t\t\t\t\t&\t& & $\\beta $ & $0.003 $ & $0.013 $ & $0.013 $ & $0.007 $ & $0.015 $ & $0.015 $ & $0.002 $ & $ 0.016$ & $0.016$\\\\\n\t\t\t\t\t\t&\t& & & $0.002 $ & $0.013 $ & $0.013 $ & $0.005 $ & $0.015 $ & $0.015 $ & $ 0.000 $ & $ 0.016$ & $0.016 $\\\\\n\t\t\t\t\t\t\\cline{3-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t& $3 $ & $\\gamma_1 $ & $-0.004 $ & $0.029 $ & $0.029 $ & $-0.004 $ & $0.030 $ & $ 0.030$ & $-0.007 $ & $0.048 $ & $ 0.048$\\\\\n\t\t\t\t\t\t&\t& & & $ 0.002$ & $0.030 $ & $0.030 $ & $ 0.009 $ & $0.033 $ & $0.033 $ & $0.015 $ & $0.053 $ & $0.053 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $-0.050 $ & $ 0.237$ & $ 0.239 $ & $-0.080 $ & $0.312 $ & $0.318 $ & $-0.134 $ & $0.432 $ & $ 0.450$\\\\\n\t\t\t\t\t\t&\t& & & $0.111 $ & 
$0.260 $ & $0.273 $ & $ 0.142 $ & $0.361 $ & $ 0.381$ & $ 0.167 $ & $0.491 $ & $0.519 $\\\\\n\t\t\t\t\t\t&\t& & $\\beta $ & $-0.003 $ & $0.039 $ & $0.039 $ & $0.024 $ & $0.051 $ & $0.052 $ & $ 0.013 $ & $0.071 $ & $0.071 $\\\\\n\t\t\t\t\t\t&\t& & & $-0.007 $ & $0.039 $ & $ 0.039 $ & $ 0.017 $ & $0.051 $ & $0.052 $ & $0.000 $ & $0.071 $ & $0.071 $\\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t2&\t$200$ & $1 $ & $\\gamma_1 $ & $0.004 $ & $0.040 $ & $0.040 $ & $0.020 $ & $0.045 $ & $0.045 $ & $-0.016 $ & $0.060 $ & $0.060 $\\\\\n\t\t\t\t\t\t&\t& & & $0.017 $ & $ 0.040$ & $ 0.040 $ & $ 0.058 $ & $0.047 $ & $0.050 $ & $ 0.083 $ & $0.079 $ & $0.086 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $ 0.001$ & $0.039 $ & $0.039 $ & $- 0.022 $ & $0.042 $ & $0.043 $ & $-0.027 $ & $ 0.055$ & $0.056 $\\\\\n\t\t\t\t\t\t&\t& & & $0.016 $ & $0.040 $ & $0.040 $ & $ 0.008 $ & $0.047 $ & $0.047 $ & $0.029 $ & $0.072 $ & $0.073 $\\\\\n\t\t\t\t\t\t&\t& & $\\beta $ & $0.006 $ & $ 0.011$ & $0.011 $ & $ 0.000 $ & $ 0.014$ & $ 0.014$ & $0.011 $ & $0.015 $ & $0.015 $\\\\\n\t\t\t\t\t\t&\t& & & $0.005 $ & $ 0.011$ & $0.011 $ & $ -0.002 $ & $0.014 $ & $ 0.014$ & $0.004 $ & $0.016 $ & $0.016 $\\\\\n\t\t\t\t\t\t\\cline{3-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t& $3 $ & $\\gamma_1 $ & $-0.016 $ & $0.071 $ & $ 0.071 $ & $ -0.057 $ & $0.065 $ & $0.068 $ & $ -0.139 $ & $ 0.083$ & $ 0.102$\\\\\n\t\t\t\t\t\t&\t& & & $ 0.029$ & $0.092 $ & $0.092 $ & $ 0.051 $ & $0.119 $ & $0.121 $ & $0.024 $ & $0.175 $ & $0.176 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $-0.468 $ & $0.723 $ & $0.942 $ & $ -0.943 $ & $0.823 $ & $ 1.713$ & $-1.348 $ & $0.829 $ & $ 2.646$\\\\\n\t\t\t\t\t\t&\t& & & $0.364 $ & $ 0.926$ & $ 1.058 $ & $ 0.495 $ & $ 1.453$ & $ 1.698$ & $0.596 $ & $2.128 $ & $2.482 $\\\\\n\t\t\t\t\t\t&\t& & $\\beta $ & $0.017 $ & $0.035 $ & $ 0.035 $ & $0.022 $ & $0.039 $ & $0.039 $ & $ 0.036 $ & $0.052 $ & $0.054 $\\\\\n\t\t\t\t\t\t&\t& & & $ 0.014$ & $ 
0.035$ & $0.035 $ & $ 0.017 $ & $0.040 $ & $0.040 $ & $0.025 $ & $0.053 $ & $0.054 $\\\\\n\t\t\t\t\t\t\\cline{2-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t$400$ & $1 $ & $\\gamma_1 $ & $0.011 $ & $0.019 $ & $0.019 $ & $0.019 $ & $0.023 $ & $ 0.023$ & $0.002 $ & $0.032 $ & $0.032 $\\\\\n\t\t\t\t\t\t&\t& & & $0.018 $ & $0.019 $ & $0.019 $ & $ 0.037 $ & $0.023 $ & $0.025 $ & $0.047 $ & $0.034 $ & $0.036 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $-0.002 $ & $0.018 $ & $ 0.018 $ & $-0.010 $ & $0.023 $ & $ 0.023$ & $ -0.019 $ & $0.027 $ & $0.028 $\\\\\n\t\t\t\t\t\t&\t& & & $0.009 $ & $0.018 $ & $ 0.018 $ & $ 0.007 $ & $0.025 $ & $ 0.025$ & $ 0.008 $ & $ 0.032$ & $0.032 $\\\\\n\t\t\t\t\t\t&\t& & $\\beta $ & $ 0.000$ & $0.006 $ & $ 0.006 $ & $0.004 $ & $0.006 $ & $ 0.006$ & $ 0.003$ & $ 0.008$ & $0.008 $\\\\\n\t\t\t\t\t\t&\t& & & $0.000 $ & $ 0.006$ & $0.006 $ & $ 0.002 $ & $0.006 $ & $0.006 $ & $ 0.000 $ & $0.008 $ & $0.008 $\\\\\n\t\t\t\t\t\t\\cline{3-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t& $3 $ & $\\gamma_1 $ & $-0.015 $ & $0.031 $ & $0.031 $ & $-0.071 $ & $0.034 $ & $0.039 $ & $-0.086 $ & $ 0.041$ & $0.048 $\\\\\n\t\t\t\t\t\t&\t& & & $0.014 $ & $0.037 $ & $0.038 $ & $ 0.001 $ & $0.050 $ & $0.050 $ & $ 0.047 $ & $0.072 $ & $0.074 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $-0.444 $ & $ 0.330$ & $ 0.527 $ & $-0.802 $ & $0.410 $ & $ 1.053$ & $ -1.191 $ & $0.463 $ & $1.881 $\\\\\n\t\t\t\t\t\t&\t& & & $0.149 $ & $ 0.364$ & $0.386 $ & $ 0.244$ & $0.557 $ & $0.616 $ & $0.325 $ & $ 0.739$& $ 0.845 $\\\\\n\t\t\t\t\t\t&\t& & $\\beta $ & $0.007 $ & $0.016 $ & $0.016 $ & $0.015 $ & $0.019 $ & $0.020 $ & $0.017 $ & $ 0.024$ & $0.024 $\\\\\n\t\t\t\t\t\t&\t& & & $0.004 $ & $ 0.016$ & $0.016 $ & $ 0.010 $ & $0.019 $ & $0.020 $ & $0.010 $ & $ 0.024$ & $0.024 $\\\\\n\t\t\t\t\t\\end{tabular}\n\t\t\t}}\n\t\t\\end{table}\n\t\tFor each setting we consider samples of size $n=200, 400, 1000$. 
This leads to a total of $108$ settings ($4$ models, $3$ scenarios for the cure rate, $3$ censoring levels and $3$ sample sizes). In this way, we hope to address a number of issues, such as the effect of the cure proportion, the sample size, the amount and type of censoring, and the covariates (their number, their distribution and the relation between $X$ and $Z$).\n\t\tFor each configuration $1020$ datasets were generated and the estimators of $\\beta_0$ and $\\gamma_0$ were computed through \\texttt{smcure} and our method. We report the bias, variance and mean squared error (MSE) of the estimators, computed after omitting the lowest and the highest $1\\%$ of the estimators (for stability of the reported results) and rounded to three decimals. Tables~\\ref{tab:results1_2}-\\ref{tab:results4} show some of the results, while the rest can be found in the online Supplementary Material. {We aim to provide a ready-to-use method that works well in practice without requiring the user to choose the kernel function or the bandwidth. Hence, we illustrate the performance of the method for some standard and commonly used choices. } The kernel function $k$ is taken to be the Epanechnikov kernel $k(u)=(3\/4)(1-u^2)\\mathds{1}_{\\{|u|\\leq1\\}}\n\t\t$. \n\t\tWe use the cross-validation bandwidth (implemented in the R package \\texttt{np}) for kernel estimators of conditional distribution functions, in our case for the estimation of $H=H_0+H_1$ given the continuous covariates (affecting the incidence). In addition, we restrict to the interval $[0,Y_{(m)}]$, where $Y_{(m)}$ is the last observed event time, since the estimator of the cure probability $\\hat\\pi$ in \\eqref{def:hat_pi} is essentially a product over values of $t$ equal to the observed event times. This means that we use the {cross-validation} bandwidth for estimation of the conditional distribution $H(t|x)$ for $t\\leq Y_{(m)}$. 
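To make the restricted cross-validation concrete, the following is a toy Python sketch of least-squares leave-one-out cross-validation for a kernel estimator of a conditional distribution function, with the criterion evaluated only at observed times up to a cut-off. All function names are illustrative; this is a deliberately simplified analogue, not the \texttt{np} implementation.

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel k(u) = (3/4)(1 - u^2) on |u| <= 1
    return 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)

def cond_cdf(t, x0, y, x, h):
    # Kernel-weighted empirical CDF: a simple estimator of H(t|x0)
    w = epanechnikov((x - x0) / h)
    if w.sum() == 0.0:
        return float(np.mean(y <= t))    # fall back to the marginal CDF
    return float(np.sum(w * (y <= t)) / np.sum(w))

def cv_bandwidth(y, x, grid, t_max):
    # Leave-one-out least-squares CV score, evaluated only at observed
    # times up to t_max (mimicking the restriction to [0, Y_(m)])
    t_eval = np.sort(y[y <= t_max])
    idx = np.arange(len(y))
    best_h, best_score = None, np.inf
    for h in grid:
        score = 0.0
        for i in idx:
            mask = idx != i
            for t in t_eval:
                err = (y[i] <= t) - cond_cdf(t, x[i], y[mask], x[mask], h)
                score += err**2
        if score < best_score:
            best_h, best_score = h, score
    return best_h
```

In the simulations themselves the bandwidth is selected by \texttt{np} on standardized covariates; the sketch only illustrates why restricting the evaluation points matters when many observations lie in the plateau.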
This choice of bandwidth improves significantly the performance of the estimators, compared to the cross-validation bandwidth on the whole interval $[0,\\tau]$, in situations with a large percentage of observations in the plateau, while it leads to little difference otherwise. \n\t\t\\begin{table}\n\t\t\t\\caption{\t\\label{tab:results3}Bias, variance and MSE of $\\hat\\gamma$ for \\texttt{smcure} (second rows) and our approach (first rows) in Model 3.}\n\t\t\t\\centering\n\t\t\n\t\t\t\\scalebox{0.85}{\n\t\t\t\t\\fbox{\n\t\t\t\t\t\\begin{tabular}{ccccrrrrrrrrr}\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t&&&\\multicolumn{3}{c}{Cens. level 1}&\\multicolumn{3}{c}{Cens. level 2}&\\multicolumn{3}{c}{Cens. level 3}\\\\\n\t\t\t\t\t\tMod.&\tn& scen. & Par. & Bias & Var. & MSE & Bias & Var. & MSE & Bias & Var. & MSE\\\\[2pt]\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t3&\t$200$ & $1 $ & $\\gamma_1 $ & $ 0.025$ & $0.147 $ & $0.147 $ & $0.010 $ & $0.192 $ & $0.192 $ & $ -0.008 $ & $0.243 $ & $0.243 $\\\\\n\t\t\t\t\t\t&\t& & & $ 0.034$ & $0.147 $ & $0.148 $ & $0.034 $ & $0.191 $ & $ 0.192$ & $0.062 $ & $0.249 $ & $0.253 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $-0.045 $ & $0.042 $ & $ 0.044 $ & $ -0.078 $ & $0.049 $ & $ 0.055$ & $-0.085 $ & $0.059 $ & $ 0.066$\\\\\n\t\t\t\t\t\t&\t& & & $-0.077 $ & $ 0.050$ & $ 0.056 $ & $ -0.122 $ & $0.065 $ & $0.080 $ & $-0.148 $ & $0.092 $ & $0.144 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_3 $ & $0.081 $ & $0.366 $ & $ 0.373 $ & $ 0.074 $ & $ 0.485$ & $ 0.491$ & $0.029 $ & $0.536 $ & $ 0.537$\\\\\n\t\t\t\t\t\t&\t& & & $0.174 $ & $ 0.397$ & $0.427 $ & $ 0.266 $ & $0.574 $ & $0.644 $ & $0.309 $ & $ 0.799$ & $0.895 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_4 $ & $ -0.046$ & $0.326 $ & $ 0.373 $ & $-0.160 $ & $ 0.412$ & $0.437 $ & $ -0.289 $ & $ 0.453$ & $ 0.537$\\\\\n\t\t\t\t\t\t&\t& & & $0.087 $ & $0.366 $ & $ 0.374 $ & $ 0.089 $ & $0.528 $ & $0.535 $ 
& $0.186 $ & $ 0.908$ & $0.943 $\\\\\n\t\t\t\t\t\t\\cline{3-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t& $3 $ & $\\gamma_1 $ & $-0.059 $ & $ 0.161$ & $ 0.164 $ & $ -0.091$ & $ 0.258$ & $0.266 $ & $ -0.223 $ & $ 0.419$ & $0.468 $\\\\\n\t\t\t\t\t\t&\t& & & $-0.053 $ & $ 0.163$ & $ 0.166 $ & $-0.071 $ & $0.261 $ & $0.266 $ & $-0.138 $ & $0.524 $ & $ 0.543$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $0.018 $ & $0.046 $ & $ 0.046$ & $0.026 $ & $ 0.063$ & $0.064 $ & $0.086 $ & $0.088 $ & $ 0.096$\\\\\n\t\t\t\t\t\t&\t& & & $0.080 $ & $ 0.052$ & $ 0.058 $ & $0.121 $ & $0.080 $ & $0.095 $ & $0.252 $ & $0.170 $ & $0.233 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_3 $ & $0.060 $ & $0.235 $ & $0.238 $ & $0.076 $ & $ 0.366$ & $0.372 $ & $ 0.135 $ & $0.517 $ & $ 0.535$\\\\\n\t\t\t\t\t\t&\t& & & $0.091 $ & $0.242 $ & $0.251 $ & $0.135 $ & $ 0.375$ & $ 0.393$ & $0.228 $ & $0.642 $ & $0.694 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_4 $ & $-0.030 $ & $0.202 $ & $ 0.203 $ & $ -0.040 $ & $0.292 $ & $ 0.293$ & $ -0.081 $ & $0.479 $ & $0.486 $\\\\\n\t\t\t\t\t\t&\t& & & $-0.027 $ & $ 0.205$ & $0.205 $ & $ -0.017 $ & $0.277 $ & $0.277 $ & $ -0.037 $ & $0.534 $ & $ 0.535$\\\\\n\t\t\t\t\t\t\\cline{2-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t$400$ & $1 $ & $\\gamma_1 $ & $ 0.016$ & $0.074 $ & $0.074 $ & $0.021 $ & $0.091 $ & $ 0.092$ & $ 0.003 $ & $ 0.128$ & $0.128 $\\\\\n\t\t\t\t\t\t&\t& & & $0.017 $ & $ 0.072$ & $0.073 $ & $ 0.022 $ & $0.082 $ & $0.082 $ & $ 0.023 $ & $0.108 $ & $0.108 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $ -0.026$ & $0.019 $ & $0.019 $ & $ -0.039 $ & $0.023 $ & $ 0.025$ & $-0.070 $ & $0.032 $ & $ 0.037$\\\\\n\t\t\t\t\t\t&\t& & & $-0.042 $ & $0.020 $ & $0.021 $ & $ -0.049 $ & $0.025 $ & $0.027 $ & $-0.081 $ & $0.035 $ & $ 0.041$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_3 $ & $0.039 $ & $0.194 $ & $ 0.195 $ & $0.028 $ & $0.219 $ & $ 0.220$ & $ 0.026 $ & $0.298 $ & $ 0.298$\\\\\n\t\t\t\t\t\t&\t& & & $0.093 $ & $ 0.190$ & $ 0.198 $ & $ 0.097 $ & 
$0.206 $ & $0.215 $ & $0.158 $ & $0.297 $ & $ 0.322$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_4 $ & $ 0.010$ & $0.171 $ & $ 0.171 $ & $-0.091 $ & $0.193 $ & $0.201 $ & $ -0.178 $ & $0.276 $ & $0.307 $\\\\\n\t\t\t\t\t\t&\t& & & $ 0.070$ & $0.177 $ & $ 0.182 $ & $0.038 $ & $0.198 $ & $0.200 $ & $0.088 $ & $0.289 $ & $0.297 $\\\\\n\t\t\t\t\t\t\\cline{3-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t& $3 $ & $\\gamma_1 $ & $-0.023 $ & $0.089 $ & $0.089 $ & $-0.051 $ & $0.118 $ & $0.121 $ & $-0.124 $ & $0.212 $ & $0.228 $\\\\\n\t\t\t\t\t\t&\t& & & $-0.029 $ & $0.092 $ & $ 0.093 $ & $-0.032 $ & $0.112 $ & $ 0.113$ & $-0.062 $ & $0.200 $ & $0.204 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $0.003 $ & $0.023 $ & $ 0.023$ & $0.006 $ & $0.033 $ & $0.033 $ & $0.042 $ & $0.048 $ & $0.050 $\\\\\n\t\t\t\t\t\t&\t& & & $0.042 $ & $0.023 $ & $ 0.025 $ & $ 0.057 $ & $0.034 $ & $0.037 $ & $ 0.104 $ & $ 0.055$ & $0.066 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_3 $ & $0.010 $ & $0.113 $ & $0.113 $ & $ 0.042 $ & $ 0.166$ & $ 0.168$ & $0.090 $ & $0.276 $ & $0.284 $\\\\\n\t\t\t\t\t\t&\t& & & $0.039 $ & $0.111 $ & $0.111 $ & $ 0.060 $ & $0.152 $ & $0.156 $ & $ 0.108 $ & $0.250 $ & $ 0.262$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_4 $ & $0.012 $ & $0.110 $ & $ 0.110 $ & $ -0.021 $ & $ 0.131$ & $ 0.131$ & $-0.047 $ & $0.220 $ & $0.223 $\\\\\n\t\t\t\t\t\t&\t& & & $0.014 $ & $ 0.111$ & $0.111 $ & $ -0.018 $ & $0.117 $ & $ 0.118$ & $-0.020 $ & $0.183 $ & $0.183 $\\\\\n\t\t\t\t\t\\end{tabular}\n\t\t\t}\t}\n\t\t\\end{table}\n\t\t\n\t\tSimulations show that, for not large sample size, the new method performs better than \\texttt{smcure} for estimation of $\\gamma_0$, mostly because of a smaller variance.\n\t\tAs the sample size increases, they tend {to behave quite similarly.} \n\t\tOn the other hand, both methods give almost the same estimates for $\\beta_0$ and $\\Lambda$. 
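For concreteness, the data-generating scheme used in these simulations (shown here for Model 1 with the scenario-1 parameters) can be sketched in a few lines. The snippet below is an illustrative Python analogue, not the authors' simulation code; in particular, the convention that $B=1$ denotes an uncured subject and the reading of "truncation" as capping at $\tau_0$ are our assumptions.

```python
import numpy as np

def simulate_model1(n, gamma=(1.75, 2.0), rho=1.75, mu=1.5, beta=1.0,
                    tau0=4.0, lam_c=0.1, tau=6.0, seed=0):
    # Model 1: logistic incidence, Weibull PH latency, exponential censoring
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n)
    # P(uncured | x) = logistic(gamma_1 + gamma_2 * x)  (sign convention assumed)
    p_uncured = 1.0 / (1.0 + np.exp(-(gamma[0] + gamma[1] * x)))
    uncured = rng.uniform(size=n) < p_uncured
    # Invert S_u(t|x) = exp(-mu t^rho e^{beta x}):
    # t = (-log U / (mu e^{beta x}))^(1/rho) for U ~ Uniform(0,1)
    u = rng.uniform(size=n)
    t_event = (-np.log(u) / (mu * np.exp(beta * x))) ** (1.0 / rho)
    t_event = np.minimum(t_event, tau0)   # truncation at tau0 (atom at tau0)
    t_event[~uncured] = np.inf            # cured subjects never experience the event
    # Censoring: Exp(rate lam_c) truncated at tau; numpy uses scale = 1/rate
    c = np.minimum(rng.exponential(1.0 / lam_c, n), tau)
    y = np.minimum(t_event, c)            # observed follow-up time
    delta = (t_event <= c).astype(int)    # event indicator
    return x, y, delta
```

Cured subjects ($B=0$) receive an infinite event time, so they are always censored; capping $T$ at $\tau_0$ is meant to produce the atom invoked by condition \eqref{eqn:jump_cond}.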
\n\t\tThe most favorable situation for our method is when there is little censoring among uncured observations and \n\t\tthe censored uncured observations lie in the region of covariates corresponding to a higher cure rate. This comes from the fact that the nonparametric estimator in \\eqref{def:hat_pi} takes larger values when the product has more terms equal to one. This should not be a problem when subjects with a high probability of being cured have longer survival times, so that they are more likely to be censored than those with a small cure probability and shorter survival times.\n\t\t\n\t\t\\begin{table}\n\t\t\t\\caption{\t\\label{tab:results4}Bias, variance and MSE of $\\hat\\gamma$ for \\texttt{smcure} (second rows) and our approach (first rows) in Model 4.}\n\t\t\t\\centering\n\t\t\n\t\t\t\\scalebox{0.85}{\n\t\t\t\t\\fbox{\n\t\t\t\t\t\\begin{tabular}{ccccrrrrrrrrr}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t&&&\\multicolumn{3}{c}{Cens. level 1}&\\multicolumn{3}{c}{Cens. level 2}&\\multicolumn{3}{c}{Cens. level 3}\\\\\n\t\t\t\t\t\tMod.&\tn& scen. & Par. & Bias & Var. & MSE & Bias & Var. & MSE & Bias & Var. 
& MSE\\\\[2pt]\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t4&\t$200$ & $1 $ & $\\gamma_1 $ & $ 0.041$ & $ 0.157$ & $0.159 $ & $ 0.016 $ & $ 0.187 $ & $0.188 $ & $-0.010 $ & $ 0.210 $ & $0.210 $\\\\\n\t\t\t\t\t\t&\t& & & $0.077 $ & $0.178 $ & $ 0.184 $ & $ 0.096 $ & $0.228 $ & $ 0.238 $ & $ 0.127$ & $0.285 $ & $0.301 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $ -0.017$ & $0.039 $ & $ 0.039 $ & $ -0.019 $ & $ 0.042 $ & $ 0.042 $ & $ -0.015$ & $0.049 $ & $ 0.049$\\\\\n\t\t\t\t\t\t&\t& & & $-0.090 $ & $0.052 $ & $ 0.060 $ & $ -0.125 $ & $ 0.069$ & $ 0.085 $ & $ -0.164$ & $ 0.108 $ & $ 0.135$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_3 $ & $ -0.245$ & $0.159 $ & $0.219 $ & $ -0.281 $ & $ 0.165$ & $ 0.244 $ & $ -0.355$ & $0.179 $ & $0.305 $\\\\\n\t\t\t\t\t\t& & & & $0.064 $ & $0.244 $ & $ 0.249 $ & $ 0.084 $ & $0.304 $ & $ 0.311 $ & $ 0.114$ & $ 0.395 $ & $0.408 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_4 $ & $-0.068 $ & $0.331 $ & $ 0.336 $ & $-0.162 $ & $ 0.385 $ & $ 0.411 $ & $ -0.285$ & $ 0.443 $ & $0.524 $\\\\\n\t\t\t\t\t\t& & & & $0.171 $ & $0.401 $ & $ 0.430 $ & $ 0.241 $ & $0.561 $ & $ 0.619 $ & $ 0.314$ & $ 0.842 $ & $ 0.941$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_5 $ & $-0.095 $ & $0.301 $ & $ 0.310 $ & $ -0.234 $ & $0.349 $ & $ 0.404 $ & $ -0.366$ & $ 0.371 $ & $ 0.505$\\\\\n\t\t\t\t\t\t& & & & $ 0.106$ & $0.363 $ & $ 0.375 $ & $ 0.143 $ & $0.509 $ & $0.529 $ & $ 0.177$ & $ 0.693 $ & $ 0.724$\\\\\n\t\t\t\t\t\t\\cline{3-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t& $3 $ &$\\gamma_1 $ & $ -0.044$ & $0.079 $ & $ 0.081 $ & $-0.079 $ & $ 0.095$ & $ 0.101 $ & $-0.148 $ & $ 0.132$ & $0.154 $\\\\\n\t\t\t\t\t\t& & & & $0.000 $ & $ 0.079$ & $ 0.079 $ & $ 0.003 $ & $ 0.096 $ & $ 0.096 $ & $0.009 $ & $ 0.141 $ & $ 0.141$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $ 0.018$ & $0.007 $ & $ 0.008 $ & $ 0.024 $ & $ 0.008$ & $0.009 $ & $0.041 $ & $ 0.010 $ & $0.012 $\\\\\n\t\t\t\t\t\t& & & & $0.015 $ & $0.008 $ & $ 0.008 $ & $ 0.017 $ & $ 0.009 
$ & $ 0.010 $ & $0.028 $ & $ 0.013 $ & $0.014 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_3 $ & $0.034 $ & $0.066 $ & $ 0.067 $ & $ 0.046 $ & $ 0.073 $ & $ 0.075 $ & $0.067 $ & $0.087 $ & $0.091 $\\\\\n\t\t\t\t\t\t& & & & $-0.025 $ & $0.080 $ & $ 0.080$ & $ -0.033 $ & $ 0.091 $ & $ 0.092 $ & $ -0.041$ & $0.120 $ & $ 0.122$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_4 $ & $0.041 $ & $0.102 $ & $ 0.104 $ & $ 0.054$ & $ 0.125 $ & $0.128 $ & $0.082 $ & $ 0.166 $ & $0.173 $\\\\\n\t\t\t\t\t\t& & & & $0.022 $ & $0.100 $ & $0.101 $ & $ 0.023 $ & $ 0.126 $ & $ 0.126 $ & $0.026 $ & $0.179 $ & $ 0.180$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_5 $ & $-0.031 $ & $ 0.103$ & $ 0.104 $ & $ -0.034 $ & $0.120 $ & $ 0.121 $ & $ -0.054$ & $0.159 $ & $0.162 $\\\\\n\t\t\t\t\t\t& & & &$ -0.016$ & $0.099 $ & $ 0.099 $ & $ -0.013 $ & $ 0.115 $ & $ 0.115 $ & $ -0.018$ & $ 0.150 $ & $ 0.151$\\\\\n\t\t\t\t\t\t\\cline{2-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t$400$ & $1 $ & $\\gamma_1 $ & $ 0.013$ & $ 0.067$ & $ 0.067 $ & $ 0.015 $ & $ 0.079 $ & $0.080 $ & $ 0.005$ & $ 0.097 $ & $ 0.097$\\\\\n\t\t\t\t\t\t& & & & $0.024 $ & $0.067 $ & $ 0.068$ & $ 0.037 $ & $ 0.080$ & $0.082 $ & $ 0.043$ & $ 0.101 $ & $0.103 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $ -0.001$ & $ 0.017$ & $ 0.017 $ & $ -0.003$ & $ 0.020 $ & $ 0.020 $ & $-0.007 $ & $ 0.023 $ & $0.023 $\\\\\n\t\t\t\t\t\t& & & & $-0.042 $ & $0.020 $ & $ 0.021$ & $ -0.055 $ & $0.025 $ & $ 0.028 $ & $-0.079 $ & $ 0.034$ & $0.041 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_3 $ & $-0.229 $ & $0.089 $ & $ 0.141 $ & $ -0.207 $ & $ 0.090 $ & $ 0.133 $ & $ -0.275$ & $ 0.102 $ & $0.178 $\\\\\n\t\t\t\t\t\t& & & & $0.046 $ & $0.107 $ & $ 0.109 $ & $ 0.061$ & $ 0.137 $ & $ 0.141 $ & $ 0.066$ & $ 0.162 $ & $ 0.166$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_4 $ & $ -0.063$ & $ 0.161$ & $ 0.165 $ & $ -0.143 $ & $ 0.175 $ & $ 0.196$ & $-0.222 $ & $ 0.215 $ & $ 0.265$\\\\\n\t\t\t\t\t\t& & & & $ 0.085$ & $ 0.176$ & $ 0.183 $ & $ 0.107 $ & $ 0.222 $ & $ 0.234 $ & $ 0.145$ & $0.318 $ 
& $0.339 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_5 $ & $ -0.075$ & $0.146 $ & $ 0.151 $ & $-0.192 $ & $ 0.177 $ & $ 0.214$ & $-0.299 $ & $ 0.199 $ & $ 0.289$\\\\\n\t\t\t\t\t\t& & & & $0.043 $ & $0.157 $ & $ 0.159 $ & $ 0.049 $ & $ 0.194 $ & $0.196 $ & $ 0.060$ & $ 0.253 $ & $ 0.257$\\\\\n\t\t\t\t\t\t\\cline{3-13}\n\t\t\t\t\t\t&\t& & & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t\t&\t& $3 $ & $\\gamma_1 $ & $-0.024 $ & $ 0.038$ & $ 0.039 $ & $ -0.038 $ & $ 0.047 $ & $0.048 $ & $-0.092 $ & $0.064 $ & $0.073 $\\\\\n\t\t\t\t\t\t& & & & $-0.003 $ & $0.036 $ & $ 0.036 $ & $ 0.002 $ & $ 0.044 $ & $ 0.044 $ & $0.004 $ & $ 0.060 $ & $ 0.060$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_2 $ & $ 0.006$ & $ 0.003$ & $ 0.003 $ & $0.010 $ & $ 0.004 $ & $ 0.004 $ & $ 0.019$ & $ 0.005$ & $0.006 $\\\\\n\t\t\t\t\t\t& & & & $0.005 $ & $ 0.004$ & $ 0.004 $ & $ 0.005 $ & $0.004 $ & $0.004 $ & $0.006 $ & $ 0.006$ & $0.006 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_3 $ & $ 0.028$ & $0.033 $ & $ 0.034 $ & $0.045 $ & $0.038 $ & $ 0.040 $ & $0.065 $ & $ 0.045 $ & $0.049 $\\\\\n\t\t\t\t\t\t& & & & $ -0.010$ & $ 0.036$ & $ 0.036 $ & $ -0.008 $ & $ 0.042 $ & $ 0.042 $ & $ -0.008$ & $ 0.052 $ & $0.052 $\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_4 $ & $ 0.021$ & $ 0.052$ & $ 0.053 $ & $0.032 $ & $ 0.062 $ & $ 0.063 $ & $0.060 $ & $ 0.088 $ & $ 0.092$\\\\\n\t\t\t\t\t\t& & & & $0.015 $ & $ 0.050$ & $ 0.050 $ & $ 0.016 $ & $0.060 $ & $ 0.060 $ & $0.019 $ & $ 0.083 $ & $ 0.083$\\\\\n\t\t\t\t\t\t&\t& & $\\gamma_5 $ & $ -0.024$ & $0.049 $ & $ 0.050 $ & $ -0.041$ & $ 0.059 $ & $ 0.061 $ & $ -0.048$ & $ 0.077 $ & $ 0.079$\\\\\n\t\t\t\t\t\t& & & & $-0.017 $ & $0.049 $ & $ 0.049 $ & $ -0.022 $ & $ 0.056$ & $ 0.057$ & $-0.022 $ & $ 0.069 $ & $0.069 $\\\\\n\t\t\t\t\t\\end{tabular}\n\t\t\t}\t}\n\t\t\\end{table}\n\t\t\n\t\tThis is indeed the case in Model 1 and we observe that our approach outperforms \\texttt{smcure} in all the scenarios. 
The difference between the two is more marked when $n$ is small and the absolute value of the $\\gamma$ coefficient is larger. In Model 2, the situation is more difficult because censoring depends on the covariate in such a way that the non-cured subjects have the same probability of being censored regardless of their cure probability. However, for the first two scenarios the new method is still superior. The third scenario is more problematic because \n\t\tthe cure probability drops very quickly from almost one to almost zero, resulting in a large fraction of uncured observations with almost zero cure probability. The presence of censoring in this region leads to overestimation of the cure rate. If instead we take $\\beta_C=0.1$ (meaning a larger probability of being censored for a higher cure rate), then the new approach is significantly superior (see Table \\ref{tab:2_4} for $n=400$ and scenario 3). In Model 3, complications arise because of the presence of different covariates for the incidence and latency. Hence, subjects with a higher cure rate might correspond to shorter survival times. As a result, the previous problem might still occur and its effects are more visible for large sample sizes and large censoring rates. Finally, Model 4 suggests that, even though the assumptions in Section \\ref{sec:asymptotics} were shown to be satisfied only for one continuous covariate, the method {could be applied in more general cases.}\n\t\tWe noticed that, when a continuous covariate affects only the incidence and not the latency, the bandwidth selected by the \\texttt{np} package is often very large, meaning that it fails to capture the effect of this covariate on the conditional distribution function. In those cases, we truncate the selected bandwidth from above at $2$. Note that the bandwidth is chosen for standardized covariates, so the truncation level can be fixed regardless of the distribution of the covariate. 
We decided to truncate at $2$ since this value seems to mark the boundary of a `reasonable' bandwidth {with standardized covariates} (we do not want to interfere with selected bandwidths smaller than $2$; we only replace extremely large values by $2$). However, even when reasonable, the \\texttt{np} bandwidth for $X_2$ seems to be larger than it should be, resulting in more bias in the estimator of $\\gamma_3$. Nevertheless, in terms of mean squared error, the method performs well for small to moderate sample sizes. If $X_2$ also affected the latency, the selected bandwidth would be more adequate and there would be no bias problems.\n\t\t\n\t\t\\begin{table}\n\t\t\t\\caption{\\label{tab:2_4}Bias, variance and MSE of $\\hat\\gamma$ and $\\hat\\beta$ for \\texttt{smcure} and our approach in Model 2, scenario 3 when $\\beta_C=0.1$ and $n=400$.}\n\t\t\t\\centering\n\t\t\t\\scalebox{0.85}{\n\t\t\t\t\\fbox{\n\t\t\t\t\t\\begin{tabular}{crrrrrr}\n\t\t\t\t\t\n\t\t\t\t\t\t& \\multicolumn{3}{c}{} & \\multicolumn{3}{c}{} \\\\[-8pt]\n\t\t\t\t\t\t&\t \\multicolumn{3}{c}{\\texttt{smcure} package} & \\multicolumn{3}{c}{Our approach}\\\\\n\t\t\t\t\t\tParameter\t& Bias & Var & MSE & Bias &Var & MSE\\\\[2pt]\n\t\t\t\t\t\t\\cline{1-7}\n\t\t\t\t\t\t& & & & & & \\\\[-8pt]\n\t\t\t\t\t\t$\\gamma_1$ & $0.014 $ & $0.123 $ & $0.123$ & $-0.058 $ & $ 0.103$ & $0.106$\\\\\n\t\t\t\t\t\t$\\gamma_2$ & $0.418 $ & $1.243 $ & $1.418 $ & $-0.535 $ & $0.652 $ & $0.937 $\\\\\n\t\t\t\t\t\t$\\beta$ & $0.001$ & $0.025$ & $0.025 $ & $0.001 $ & $0.027 $ & $ 0.027$\\\\\n\t\t\t\t\t\n\t\t\t\\end{tabular}}}\n\t\t\\end{table}\n\t\t\n\t\tTo conclude, the new approach seems to perform significantly better than \\texttt{smcure} when the sample size is not large and the fraction of censored observations is not much higher than the expected cure proportion. In other situations, both methods are comparable. 
However, one has to be more careful when there is no reason to expect that the censored subjects correspond to higher cure probabilities.\n\t\n\t\t\n\t\t{In the previous settings, we truncated the event times at $\\tau_0$ in such a way that condition \\eqref{eqn:jump_cond} is satisfied, but in practice\n\t\t\t\tit is unlikely to observe event times at $\\tau_0$. Next, we consider one additional model for which condition \\eqref{eqn:jump_cond} is not satisfied. The covariates and the parameters are as in Model 3 described above, but the event times are generated from a Weibull distribution truncated to $[0,\\tau_0]$ with $\\tau_0=15$, i.e.\n\t\t\\[\n\t\tS_u(t|z)=\\frac{\\exp\\left\\{-\\mu t^\\rho\\exp(\\beta'z)\\right\\}-\\exp\\left\\{-\\mu\\tau_0^\\rho\\exp(\\beta'z)\\right\\}}{1-\\exp\\left\\{-\\mu\\tau_0^\\rho\\exp(\\beta'z)\\right\\}}.\n\t\t\\]\n\t\tThe censoring times are exponentially distributed as in Model 3 and truncated at $\\tau=20$. Results for sample size $n=200$ and three censoring levels are shown in Table~\\ref{tab:nojump}. Compared to Model 3 above, we observe that, when condition \\eqref{eqn:jump_cond} is not satisfied, the advantage of presmoothing over the \\texttt{smcure} estimator is even more pronounced. \n\t\t\t\t}\n\t\t\t\\begin{table}[h!]\n\t\t\t\\caption{\t\\label{tab:nojump}Bias, variance and MSE of $\\hat\\gamma$ for \\texttt{smcure} (second rows) and our approach (first rows) in Model 3 without condition \\eqref{eqn:jump_cond}.}\n\t\t\t\\centering\n\t\t\n\t\t\t\\scalebox{0.85}{\n\t\t\t\t\\fbox{\n\t\t\t\t\t\\begin{tabular}{crrrrrrrrr}\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t&\\multicolumn{3}{c}{Cens. level 1}&\\multicolumn{3}{c}{Cens. level 2}&\\multicolumn{3}{c}{Cens. level 3}\\\\\n\t\t\t\t\t\t Par. & Bias & Var. & MSE & Bias & Var. & MSE & Bias & Var. 
& MSE\\\\[2pt]\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t & & & & & & & && \\\\[-8pt]\n\t\t\t\t\t $\\gamma_1 $ & $ 0.015$ & $0.152 $ & $0.152 $ & $0.000 $ & $0.196 $ & $0.196$ & $ -0.032 $ & $0.246 $ & $0.247 $\\\\\n\t\t\t\t\t\t & $ 0.017$ & $0.150 $ & $0.151 $ & $0.027 $ & $0.193 $ & $ 0.194$ & $0.035 $ & $0.260 $ & $0.262 $\\\\\n\t\t\t\t\t $\\gamma_2 $ & $-0.054 $ & $0.044 $ & $ 0.047 $ & $ -0.077 $ & $0.052 $ & $ 0.058$ & $-0.109 $ & $0.064 $ & $ 0.076$\\\\\n\t\t\t\t\t & $-0.085$ & $ 0.050$ & $ 0.057 $ & $ -0.119 $ & $0.069 $ & $0.083 $ & $-0.171 $ & $0.101 $ & $0.130$\\\\\n\t\t\t\t\t $\\gamma_3 $ & $0.087 $ & $0.379 $ & $ 0.386 $ & $ 0.073 $ & $ 0.450$ & $ 0.456$ & $0.045 $ & $0.578 $ & $ 0.580$\\\\\n\t\t\t\t\t\t& $0.197 $ & $ 0.423$ & $0.462 $ & $ 0.249 $ & $0.561 $ & $0.623 $ & $0.343 $ & $ 0.885$ & $1.002 $\\\\\n\t\t\t\t\t $\\gamma_4 $ & $ -0.010$ & $0.339 $ & $ 0.339 $ & $-0.106 $ & $ 0.373$ & $0.385 $ & $ -0.228 $ & $ 0.498$ & $ 0.550$\\\\\n\t\t\t\t\t\t& $0.125 $ & $0.364 $ & $ 0.380 $ & $ 0.156 $ & $0.523 $ & $0.548 $ & $0.260 $ & $ 1.513$ & $1.581 $\\\\\n\t\t\t\t\t\t\t\\end{tabular}\n\t\t\t}\t}\n\t\t\\end{table}\n\t\t\n\tFinally, we conclude with a remark on the computational aspect.\t{The proposed approach is computationally more intensive than the MLE, mainly because of the bandwidth selection through a cross-validation procedure. For example, for one iteration in Model 3 with sample sizes $200$ and $400$, \\texttt{smcure} computes the estimates in $0.7$ and $0.8$ seconds, respectively, while the new approach requires $4.1$ and $23.5$ seconds (with a Core i7-8665U CPU desktop). However, this still seems reasonable since the method is not meant for much larger sample sizes.} \n\t\t\n\t\t\n\t\t\\section{Application: Melanoma study}\n\t\t\\label{sec:application}\n\t\tTo illustrate the practical performance, we apply the proposed estimation procedure to two medical datasets for patients with melanoma and compare the results with \\texttt{smcure}. 
Melanoma is the third most common type of skin cancer, with an overall incidence rate of 21.8 per 100,000 people in the US (cancer statistics from the Centers for Disease Control and Prevention), and according to the American Cancer Society, $6850$ people are expected to die of melanoma in $2020$. However, in recent years, the chances of survival for melanoma patients have increased due to earlier diagnosis and improved treatment and surgical techniques. The 5-year survival rates based on the stage of the cancer at first diagnosis are $92\\%$ for the localized, $65\\%$ for the regional and $25\\%$ for the distant stage. It is also known that this disease is more common among white people and that the death rate is higher for men than for women. Even though most melanoma patients are cured by their initial treatment, it is not possible to distinguish them from the uncured patients. Hence, accurately estimating the probability of being cured is important in order to plan further treatment and prevent recurrence in uncured patients. \n\t\t\n\t\t\\subsection{Eastern Cooperative Oncology Group (ECOG) Data}\n\t\tWe use the melanoma data (ECOG phase III clinical trial e1684) from the \\texttt{smcure} package \\cite{cai_smcure} in order to compare our results with those of \\texttt{smcure}.\n\t\tThe purpose of this study was to evaluate the effect of treatment (high-dose interferon alpha-2b regimen) as postoperative adjuvant therapy. The event time is the time from initial treatment to recurrence of melanoma, and three covariates have been considered: age (continuous variable centered at its mean), gender (0=male and 1=female) and treatment (0=control and 1=treatment).\n\t\tThe data consist of $284$ observations (after deleting missing data), out of which $196$ had a recurrence of melanoma (around $30\\%$ censoring). The Kaplan-Meier curve is shown in Figure~\\ref{fig:KM_melanoma_1}. 
\n\t\tThe parameter estimates, standard errors and corresponding p-values for the Wald test using our method and the \\texttt{smcure} package are given in Table~\\ref{tab:melanoma1}. Standard errors are computed through $500$ {naive} bootstrap samples.\n\t\t\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\makebox{\n\t\t\t\t\\includegraphics[width=0.48\\linewidth]{KM_melanoma_1.pdf}\\qquad\\includegraphics[width=0.48\\linewidth]{KM_melanoma_1_treatment.pdf}}\n\t\t\t\\caption{\\label{fig:KM_melanoma_1}Left panel: Kaplan-Meier survival curve for ECOG data. Right panel: Kaplan-Meier survival curves for the treatment group (solid) and control group (dotted) in the ECOG data.}\n\t\t\\end{figure}\n\t\t\n\t\t\\begin{table}\n\t\t\t\\caption{\\label{tab:melanoma1}Results for the incidence (logistic component) and the latency (Cox PH component) from the ECOG data.}\n\t\t\t\\centering\n\t\t\t\\scalebox{0.85}{\n\t\t\t\t\\fbox{\n\t\t\t\t\t\\begin{tabular}{c|crrrrrr}\n\t\t\t\t\n\t\t\t\t\t\t&& \\multicolumn{3}{c}{} & \\multicolumn{3}{c}{} \\\\[-8pt]\n\t\t\t\t\t\t&\t& \\multicolumn{3}{c}{\\texttt{smcure} package} & \\multicolumn{3}{c}{Our approach}\\\\\n\t\t\t\t\t\t&\tCovariates\t& Estimates & SE & p-value & Estimates & SE & p-value\\\\[2pt]\n\t\t\t\t\t\t\\cline{2-8}\n\t\t\t\t\t\t&\t& & & & & & \\\\[-8pt]\n\t\t\t\t\t\t\\multirow{4}{*}{\\STAB{\\rotatebox[origin=c]{90}{incidence}}}\t&Intercept & $1.3649 $ & $0.3457 $ & $8\\cdot10^{-5} $ & $1.6697 $ & $ 0.3415$ & $10^{-6} $\\\\\n\t\t\t\t\t\t&\tAge & $0.0203 $ & $0.0159 $ & $0.2029 $ & $0.0220 $ & $0.0104 $ & $0.0344 $\\\\\n\t\t\t\t\t\t&\tGender & $-0.0869 $ & $0.3347 $ & $0.7949 $ & $-0.3039 $ & $0.3448 $ & $ 0.3493$\\\\\n\t\t\t\t\t\t&\tTreatment & $-0.5884 $ & $ 0.3706$ & $0.1123 $ & $-0.9345 $ & $0.3603 $ & $0.0095 $\\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t&\t& & & & & & \\\\[-8pt]\n\t\t\t\t\t\t\\multirow{3}{*}{\\STAB{\\rotatebox[origin=c]{90}{latency}}}\t&\tAge & $-0.0077 $ & $0.0069 $ & $0.2663 $ & $-0.0079 $ & $0.0060 $ & $0.1861 
$\\\\\n\t\t\t\t\t\t&\tGender & $0.0994 $ & $0.1932 $ & $0.6067 $ & $0.1240 $ & $0.1653 $ & $0.4534 $\\\\\n\t\t\t\t\t\t&\tTreatment & $-0.1535 $ & $0.1715 $ & $0.3707 $ & $-0.0947 $ & $ 0.1692$ & $0.5756 $\\\\\n\t\t\t\t\t\n\t\t\t\\end{tabular}}}\n\t\t\\end{table}\n\t\t\n\t\tWe observe that, for both methods, the effects of the covariates have the same direction. Only the intercept was found significant for the incidence with \\texttt{smcure}, while our method finds that age and treatment are also significant. In particular, the probability of melanoma recurrence is higher for the control group than for the treatment group. This indeed seems to be the case if we look at the Kaplan-Meier survival curves for the two groups in Figure~\\ref{fig:KM_melanoma_1}. On the other hand, both methods agree that none of the covariates is significant for the latency. \n\t\t\n\t\t{To illustrate another advantage of the new approach, we also compute the maximum likelihood estimator with the \\texttt{smcure} package for different choices of the latency model. We see in Table~\\ref{tab:melanoma3} that the estimators of the incidence component (and their significance) change depending on which variables are included in the latency. 
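The naive bootstrap behind the reported standard errors and Wald p-values can be sketched as follows; the `fit` argument is a placeholder for refitting the cure model, illustrated here with a toy estimator rather than code from the paper:

```python
import numpy as np
from math import erf, sqrt

def naive_bootstrap_se(data, fit, B=500, seed=0):
    """Naive bootstrap: resample individuals with replacement, refit,
    and return the standard deviation of the B bootstrap estimates."""
    rng = np.random.default_rng(seed)
    n = len(data)
    estimates = np.array([fit(data[rng.integers(0, n, n)]) for _ in range(B)])
    return estimates.std(axis=0, ddof=1)

def wald_pvalue(estimate, se):
    """Two-sided Wald p-value for H0: parameter = 0, normal reference."""
    z = abs(estimate) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

# toy check: "fitting" the mean of a N(0,1) sample of size 400
data = np.random.default_rng(1).normal(size=400)
se = naive_bootstrap_se(data, fit=np.mean, B=200)
assert 0.03 < float(se) < 0.07   # close to 1/sqrt(400) = 0.05
```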
On the other hand, the new method does not suffer from this problem because it estimates the incidence independently of the latency.\n\t\t\t\\begin{table}\n\t\t\t\\caption{\\label{tab:melanoma3}Results for the incidence (logistic component) and the latency (Cox PH component) from the ECOG data for three different choices of the latency model.}\n\t\t\t\\centering\n\t\t\t\\scalebox{0.85}{\n\t\t\t\t\\fbox{\n\t\t\t\t\t\\begin{tabular}{c|c|rrr|rrr|rrr}\n\t\t\t\t\t\n\t\t\t\t\t\t&& \\multicolumn{3}{c}{} & \\multicolumn{3}{c}{} & \\multicolumn{3}{c}{}\\\\[-8pt]\n\t\t\t\t\t\t&\t& \\multicolumn{3}{c}{Model 1} & \\multicolumn{3}{c}{Model 2 }& \\multicolumn{3}{c}{Model 3 }\\\\\n\t\t\t\t\t\t&\tCovariates\t& Estimates & SE & p-value & Estimates & SE & p-value\t& Estimates & SE & p-value\\\\[2pt]\n\t\t\t\t\t\t\\cline{2-11}\n\t\t\t\t\t\t&\t& & & & & & &&&\\\\[-8pt]\n\t\t\t\t\t\t\\multirow{4}{*}{\\STAB{\\rotatebox[origin=c]{90}{incidence}}}\t&Intercept & $1.3507 $ & $0.3001 $ & $7\\cdot10^{-6} $ & $1.4148 $ & $ 0.3213$ & $10^{-5} $& $1.4181$ & $ 0.3073$ & $4\\cdot10^{-6} $\\\\\n\t\t\t\t\t\t&\tAge & $0.0164$ & $0.0125 $ & $0.1905 $ & $0.0205 $ & $0.0154 $ & $0.1803 $& $0.0209 $ & $ 0.0146$ & $0.1528 $\\\\\n\t\t\t\t\t\t&\tGender & $-0.0265 $ & $0.3113 $ & $0.9320 $ & $-0.0673 $ & $0.3352$ & $ 0.8407$& $-0.0222$ & $ 0.3130$ & $0.9432 $\\\\\n\t\t\t\t\t\t&\tTreatment & $-0.6060$ & $ 0.3509$ & $0.0842 $ & $-0.6773 $ & $0.3223 $ & $0.0415 $& $-0.6913$ & $ 0.3439$ & $0.0444 $\\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t&\t& & & & & & &&& \\\\[-8pt]\n\t\t\t\t\t\t\\multirow{3}{*}{\\STAB{\\rotatebox[origin=c]{90}{latency}}}\t&\tAge & & & & $-0.0074 $ & $0.0066 $ & $0.2568 $& $-0.0073 $ & $ 0.0064$ & $0.2579 $\\\\\n\t\t\t\t\t\t&\tGender && & & $0.0789 $ & $0.1863 $ & $0.6719$& & & \\\\\n\t\t\t\t\t\t&\tTreatment & $-0.1324 $ & $0.1561 $ & $0.3963$ & & & & & &\\\\\n\t\t\t\t\t\n\t\t\t\\end{tabular}}}\n\t\t\\end{table}\n\t}\n\t\t\\subsection{Surveillance, Epidemiology and End Results (SEER) database}\n\t\t\n\t\tThe SEER database collects cancer 
incidence data from population-based cancer registries in the US. These data consist of patient demographic characteristics, primary tumor site, tumor morphology, stage at diagnosis, length of follow-up and vital status. We select the database `Incidence - SEER 18 Regs Research Data' and extract the melanoma cancer data for the county of San Francisco in California during the period $2004-2015$. We consider only patients whose stage at diagnosis is localized, regional or distant, exclude those with unknown or zero follow-up time, and restrict the study to white people because of the very small number of cases from other races. The event of interest is death from melanoma. This cohort consists of $1445$ melanoma cases, out of which $596$ are female and $849$ male. Age ranges from $11$ to $101$ years and follow-up from $1$ to $155$ months. For most of the patients the cancer was diagnosed at an early stage (localized), while for $101$ of them the stage at diagnosis is `regional' and for only $42$ it is `distant'. We aim to evaluate how age, gender and stage at diagnosis affect the survival of melanoma patients in this cohort. The use of cure models is justified by the presence of a long plateau containing around $20\\%$ of the observations (see the Kaplan-Meier curve in Figure \\ref{fig:KM_melanoma_2}). Moreover, the Kaplan-Meier curves by gender and stage at diagnosis in Figure~\\ref{fig:KM_melanoma_2} confirm that gender and stage affect the cure rate. \n\t\t\n\t\t\\begin{figure}\n\t\t\t\\begin{center}\t\\makebox{\n\t\t\t\t\t{\\includegraphics[width=0.48\\linewidth]{KM_melanoma_2.pdf}}\n\t\t\t}\t\\end{center}\n\t\t\t\\makebox{\n\t\t\t\t{\\includegraphics[width=0.48\\linewidth]{KM_melanoma_2_gender}\\qquad \t\\includegraphics[width=0.48\\linewidth]{KM_melanoma_2_stage}}}\n\t\t\t\\caption{\\label{fig:KM_melanoma_2}Upper panel: Kaplan-Meier survival curve for the SEER data. 
Lower left panel: group division based on gender, females (solid) and males (dotted). Lower right panel: group division based on cancer stage at diagnosis, localized (solid), regional (dashed) and distant (dotted).}\n\t\t\\end{figure}\n\t\t\n\t\tWe checked the fit of the logistic model by comparing it with the single-index mixture cure model proposed in \\cite{AKL19} through the prediction error of the incidence. More precisely, as in \\cite{AKL19}, we divide the data into a training set and a test set of sizes $964$ and $481$, respectively. Using the training set, we estimate the logistic\/Cox model and the single-index\/Cox model. Afterwards, we compute the prediction error in the test set given by\n\t\t\\[\n\t\tPE=-\\sum_{j=1}^{481}\\left\\{\\hat{W}_j\\log [1-\\hat\\pi(X_j^{\\text{test}})]+(1-\\hat{W}_j)\\log \\hat\\pi(X_j^{\\text{test}})\\right\\},\n\t\t\\]\n\t\twhere $\\hat\\pi(X_j^{\\text{test}})$ and $\\hat{W}_j$ are the predicted cure probability and the predicted weight for the $j$th observation in the test set, computed based on the parameter estimates (and the link function for the single-index model) in the training set. {Specifically, for the logistic\/Cox model we have $\\hat\\pi(X_j^{\\text{test}})=\\phi(\\hat\\gamma_n,X_j^{\\text{test}})$ and \n\t\t\t\\[\n\t\t\t\\hat{W}_j=\\Delta_j^{\\text{test}}+(1-\\Delta_j^{\\text{test}})\\frac{\\hat\\pi(X_j^{\\text{test}})\\exp\\left(-\\hat\\Lambda_n(Y_j^{\\text{test}})e^{\\hat\\beta'_nZ_j^{\\text{test}}}\\right)}{1-\\hat\\pi(X_j^{\\text{test}})+\\hat\\pi(X_j^{\\text{test}})\\exp\\left(-\\hat\\Lambda_n(Y_j^{\\text{test}})e^{\\hat\\beta'_nZ_j^{\\text{test}}}\\right)},\n\t\t\t\\]\n\t\t\twhere $\\hat\\gamma_n$, $\\hat\\beta_n$ and $\\hat\\Lambda_n$ are the estimated parameters and the estimated cumulative hazard function in the training set. 
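As a minimal sketch of how the prediction error above is evaluated (the array names are hypothetical; `pi_hat` stands for the predicted probabilities and `w_hat` for the predicted weights on the test set):

```python
import numpy as np

def prediction_error(pi_hat, w_hat, eps=1e-12):
    """Cross-entropy prediction error PE from the display above.

    pi_hat : predicted probabilities on the test set
    w_hat  : predicted weights in [0, 1]
    """
    p = np.clip(np.asarray(pi_hat, dtype=float), eps, 1.0 - eps)
    w = np.asarray(w_hat, dtype=float)
    return -np.sum(w * np.log(1.0 - p) + (1.0 - w) * np.log(p))

# sanity check: two observations with maximally uncertain predictions
assert abs(prediction_error([0.5, 0.5], [1.0, 0.0]) - 2.0 * np.log(2.0)) < 1e-9
```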
For the single-index\/Cox model, the only difference is that $\\hat\\pi(X_j^{\\text{test}})=\\hat{g}_n(\\hat\\gamma_n,X_j^{\\text{test}})$ where $\\hat{g}_n$ is the estimated link function as in \\cite{AKL19}. The weights $\\hat W_j$ correspond to the conditional expectation of the cure status $B$ given the observations. } \n\t\tWe find that the prediction error for the logistic model is $98.53$, whereas for the single-index model it is $156.55$. This means that the logistic model performs better. \n\t\t\n\t\t\\begin{table}\n\t\t\t\\caption{\\label{tab:melanoma2}\n\t\t\t\tResults for the incidence (logistic component) and the latency (Cox PH component) from the SEER data.}\n\t\t\t\\centering\n\t\t\t\\scalebox{0.85}{\n\t\t\t\t\\fbox{\n\t\t\t\t\t\\begin{tabular}{c|crrrrrr}\n\t\t\t\t\t\n\t\t\t\t\t\t&& \\multicolumn{3}{c}{} & \\multicolumn{3}{c}{} \\\\[-8pt]\n\t\t\t\t\t\t&\t& \\multicolumn{3}{c}{\\texttt{smcure} package} & \\multicolumn{3}{c}{Our approach}\\\\\n\t\t\t\t\t\t&\tCovariates\t& Estimates & SE & p-value & Estimates & SE & p-value\\\\[2pt]\n\t\t\t\t\t\t\\cline{2-8}\n\t\t\t\t\t\t&\t& & & & & & \\\\[-8pt]\n\t\t\t\t\t\t\\multirow{5}{*}{\\STAB{\\rotatebox[origin=c]{90}{incidence}}}&\tIntercept & $-4.2071 $ & $0.3817 $ & $0 $ & $-4.2436 $ & $0.3980 $ & $0 $ \\\\\n\t\t\t\t\t\t&\tAge & $0.0304 $ & $0.0122 $ & $0.0124 $ & $0.0328 $ & $0.0172 $ & $ 0.0565$ \\\\\n\t\t\t\t\t\t&\tGender & $ 1.1318$ & $0.4211 $ & $0.0072 $ & $1.2341 $ & $0.4792 $ & $0.010 $ \\\\\n\t\t\t\t\t\t&\t$S_1$& $2.6738 $ & $0.3702 $ & $5\\cdot 10^{-13} $ & $ 2.4474$ & $0.4247 $ & $ 8\\cdot 10^{-9}$ \\\\\n\t\t\t\t\t\t&\t$S_2$ & $4.0763 $ & $0.5067 $ & $8\\cdot 10^{-16} $ & $3.9426 $ & $0.4536 $ & $ 0$ \\\\\n\t\t\t\t\t\t\\hline\n\t\t\t\t\t\t&\t& & & & & & \\\\[-8pt]\n\t\t\t\t\t\t\\multirow{4}{*}{\\STAB{\\rotatebox[origin=c]{90}{latency}}}&\tAge & $-0.0139 $ & $ 0.0098$ & $ 0.1577$ & $-0.0143 $ & $0.0106 $ & $0.1756 $ \\\\\n\t\t\t\t\t\t&\tGender & $-0.0549 $ & $0.4065 $ & $ 0.8925$ & $-0.0871 $ & 
$0.3687 $ & $0.8131 $ \\\\\n\t\t\t\t\t\t&\t$S_1$& $0.5176 $ & $0.3993 $ & $0.1949$ & $0.6130 $ & $0.3971 $ & $0.1226 $ \\\\\n\t\t\t\t\t\t&\t$S_2$ & $1.8039 $ & $0.4529 $ & $7\\cdot 10^{-5} $ & $1.8623 $ & $0.5072 $ & $0.0002 $ \\\\\n\t\t\t\t\t\n\t\t\t\t\t\\end{tabular}\n\t\t\t}}\n\t\t\\end{table}\n\t\t\n\t\tThe parameter estimates, standard errors and corresponding p-values for the Wald test using our method and the \\texttt{smcure} package are given in Table \\ref{tab:melanoma2}. Standard errors are computed with $500$ {naive} bootstrap samples. The covariate stage is encoded using two dummy variables $S_1$ and $S_2$, where $S_1=1$ indicates the regional stage and $S_2=1$ indicates the distant stage. The gender variable is equal to zero for females and one for males. We observe that both methods agree that all the considered covariates are significant for the incidence (with age being a borderline case for our approach). \n\t\tFor the latency, only being in the distant stage is found significant with both methods. \n\t\tMoreover, the effects of all the covariates on the latency and the incidence again have the same direction for both methods.\n\t\t\n\t\t\n\t\t\\section{Discussion} \\label{sec:disc}\n\t\tIn this paper we proposed a new estimation procedure for the mixture cure model with a parametric form of the incidence (for example logistic) and {any semiparametric model for the latency. We investigated the logistic\/Cox model in more detail given its practical relevance}. Instead of using an iterative algorithm for dealing with the unknown cure status, this method relies on a preliminary nonparametric estimator of the cure probabilities. We showed through simulations that the new approach improves upon the classical maximum likelihood estimator implemented in the package \\texttt{smcure}, mainly for smaller sample sizes. For the latency, both methods behave similarly. 
Hence, the new approach is of particular interest in situations in which the focus is on the estimation of cure probabilities. {The real data application on the ECOG clinical trial also showed that the improvement in estimation can be meaningful in practice and can help detect significant effects.}\n\t\t\n\t\t{ The proposed method has the advantage of estimating the incidence component directly, without relying on the latency, which makes it robust to latency model misspecification. On the contrary, the \\texttt{smcure} estimator strongly depends on the choice of the variables for the latency and could be biased for a misspecified Cox model. Hence, in practice, comparing the estimators obtained with the two methods is valuable for confirming the results or obtaining new insights. }\n\t\t{From the theoretical point of view, unlike standard maximum likelihood estimation, presmoothing allows us to obtain consistency and asymptotic normality without requiring the `unrealistic' assumption that the distribution of uncured subjects has a positive mass at the end point of the support. } \n\t\t\n\t\t{It might be argued that, since the proposed method relies on smoothing, it is more complex and the results can be affected by the choice of the kernel function or the bandwidth. Our purpose was to show that the user does not have to worry about these choices because the standard ones proposed in this paper perform well in practice. In addition, since the final estimator is a parametric one and the kernel estimator is only a preliminary step of the procedure, the results are in any case more stable with respect to these choices than in a purely nonparametric setting. The main challenge for this method is its extension to several continuous covariates for the incidence. \t\t}\n\t\tWe did not investigate such situations in depth since, in that case, multiple bandwidths have to be chosen, which is more problematic and computationally intensive. 
However, our approach based on presmoothing can handle these situations efficiently if the estimator $\\hat\\pi$ is constructed in a more adequate way. {One possibility would be to construct the estimator assuming a single-index model for the latency, which is reasonable since the final goal is a parametric estimator. With this approach one can avoid the choice of multiple bandwidths and perform the estimation as in the one-dimensional case.} This problem will be addressed in future research. {In this regard, even though considering only one continuous covariate might seem restrictive in practice, the proposed procedure constitutes the basis for the further development of new estimators for scenarios of general dimension that do not require multidimensional smoothing.}\n\t\t\n\t\t\n\t\t\\section{Appendix}\n\t\t\\label{sec:appendix}\n\t\t\\subsection{Proof of Theorem \\ref{theo:asymptotic_normality2}}\n\t\t\\label{sec:appendix_Cox}\n\t\tWe obtain the asymptotic normality of $\\hat\\Lambda_n$ and $\\hat\\beta_n$ following the proof of Theorem 3 in \\cite{Lu2008}. In order to work with a one-dimensional submodel, {for $d$ in a neighbourhood of the origin,} let $\\Lambda_d(t)=\\int_0^t\\{1+dh_1(s)\\}\\mathrm{d} \\hat\\Lambda_n(s)$ and $\\beta_d=dh_2+\\hat\\beta_n$, where $h_1$ is a function of bounded variation on $[0,{\\tau_0}]$ and $h_2$ is a $q$-dimensional real vector. Let $\\hat{S}_n(\\hat\\Lambda_n,\\hat\\beta_n)(h_1,h_2)$ denote the derivative of ${\\hat{l}_n}\n\t\t(\\Lambda_d,\\beta_d)$ (defined in \\eqref{def:hat_l_cox}) with respect to $d$, evaluated at $d=0$. 
We have \n\t\t\\[\n\t\t\\begin{split}\n\t\t\t\\hat{S}_n(\\hat\\Lambda_n,\\hat\\beta_n)(h_1,h_2)\n\t\t\t&=\\frac{1}{n}\\sum_{i=1}^n \\Delta_i\\mathds{1}_{\\{Y_i<\\tau_0\\}} \\left[h_1(Y_i)+h'_2Z_i\\right]\\\\\n\t\t\t&\\quad-\\frac{1}{n}\\sum_{i=1}^n \\left\\{\\Delta_i+(1-\\Delta_i){\\mathds{1}_{\\{Y_i\\leq\\tau_0\\}}}g_i(Y_i,\\hat\\Lambda_n,\\hat\\beta_n,\\hat\\gamma_n)\\right\\}\\\\\n\t\t\t&\\qquad\\qquad\\qquad\\quad\\times\\left\\{e^{\\hat\\beta'_nZ_i}\\int_0^{Y_i}h_1(s)\\mathrm{d}\\hat\\Lambda_n(s)+e^{\\hat\\beta'_nZ_i}\\hat\\Lambda_n(Y_i)h'_2Z_i\n\t\t\t\\right\\},\n\t\t\\end{split}\n\t\t\\]\n\t\twhere $g_i$ is defined in \\eqref{def:g_j} and $\\hat\\gamma_n$ is the maximizer of \\eqref{def:hat_L_gamma}. Let $\\Upsilon_n=(\\hat\\Lambda_n,\\hat\\beta_n)$ and $\\Upsilon_0=(\\Lambda_0,\\beta_0)$. Furthermore, denote by $S$ the asymptotic version of $\\hat{S}_n$:\n\t\t\\[\n\t\t\\begin{split}\n\t\t\tS(\\Lambda,\\beta)(h_1,h_2)&=\\mathbb{E}\\bigg[ \\Delta\\mathds{1}_{\\{Y<\\tau_0\\}} \\{h_1(Y)+h'_2Z\\}-\\left\\{\\Delta+(1-\\Delta){\\mathds{1}_{\\{Y\\leq\\tau_0\\}}}g(Y,\\Lambda,\\beta,\\gamma_0)\\right\\}\\\\\n\t\t\t&\\qquad\\qquad\\qquad\\qquad\\left. \\times\\left\\{e^{\\beta'Z}\\int_0^{Y}h_1(s)\\mathrm{d}\\Lambda(s)+e^{\\beta'Z}\\Lambda(Y)h'_2Z\n\t\t\t\\right\\}\\right].\n\t\t\\end{split}\n\t\t\\]\n\t\tWe have $\\hat{S}_n(\\Upsilon_n)=0$ and $S(\\Upsilon_0)=0$. \n\t\tThe score functions $\\hat{S}_n$ and $S$ are, respectively, a random and a deterministic map from $\\Xi$ to $l^{\\infty}(\\mathcal{H}_\\mathfrak{m})$ (the space of bounded real-valued functions on $\\mathcal{H}_\\mathfrak{m}$), where\n\t\t\\[\n\t\t\\Xi=\\left\\{(\\Lambda,\\beta)\\,:\\,\\sup_{h\\in\\mathcal{H}_\\mathfrak{m}}\\left|\\int_0^{{\\tau_0}}h_1(s)\\mathrm{d}\\Lambda(s)+h'_2\\beta\\right|<\\infty\\right\\}\n\t\t\\] \n\t\tand $\\mathcal{H}_\\mathfrak{m}=\\{h\\in\\mathcal{H}\\,:\\, \\Vert h\\Vert_{H}\\leq \\mathfrak{m}\\}$. 
Here $\\Vert h\\Vert_{H}=\\Vert h_1\\Vert_v+\\Vert h_2\\Vert_{L_1}$, $\\Vert h_2\\Vert_{L_1}=\\sum_{j=1}^q|h_{2,j}|$, $\\Vert h_1\\Vert_v=|h_1(0)|+V_0^{{\\tau_0}}(h_1)$ and $V_0^{{\\tau_0}}(h_1)$ denotes the total variation of $h_1$ on $[0,{\\tau_0}]$. \n\t\tThis means that $\\hat{S}_n$ is a random variable defined on the abstract probability space $(\\Omega,\\mathcal{F},\\mathbb{P})$ (on which the random vector $(B,T_0,C,X,Z)$ is defined) with values in the space of bounded functions $\\Xi\\mapsto l^\\infty(\\mathcal{H}_\\mathfrak{m})$ with respect to the supremum norm. The latter is a Banach space equipped with the Borel $\\sigma$-field.\n\t\t\n\t\tWe need to show that conditions 1-4 of Theorem 4 in \\cite{Lu2008} (or Theorem 3.3.1 in \\cite{VW96}) are satisfied. \n\t\tThe main difference between the function $S$ and the one in \\cite{Lu2008} is that here $\\gamma=\\gamma_0$ is fixed. We only consider variation with respect to $\\beta$ and not $\\gamma$, so the components of $h$ that correspond to $\\gamma$ are set to zero.\n\t\tNevertheless, conditions 2 and 3 of Theorem 4 in \\cite{Lu2008} for $S$ can be shown in the same way as in \\cite{Lu2008}. \n\t\tDetails about conditions 1 and 4 can be found in the online Supplementary Material.\n\t\t\\qed\n\t\t\n\t\t\n\t\t\\subsection{Proof of Theorem \\ref{theo:pi_hat}}\n\t\tThe logistic model for the cure probability obviously satisfies assumptions (AN1) and (AN3). Let $\\Pi$ be the space of continuously differentiable functions $f$ from $\\mathcal{X}$ to $[0,1]$ such that $\\sup_{x\\in\\mathcal{X}}|f'(x)|\\leq M$ and\n\t\t\\[\n\t\t\\sup_{x_1, x_2\\in\\mathcal{X}}\\frac{|f'(x_1)-f'(x_2)|}{|x_1-x_2|^\\xi}\\leq M\n\t\t\\]\n\t\tfor some $M>0$ and $\\xi\\in(0,1]$. 
If such space is equipped with the supremum norm, the covering numbers satisfy \n\t\t\\[\n\t\t\\log N\\left(\\epsilon,\\Pi,\\Vert\\cdot\\Vert_\\infty\\right)\\leq K\\frac{1}{\\epsilon^{1\/(1+\\xi)}}\n\t\t\\]\n\t\tfor some constant $K>0$ independent of $\\epsilon$ (see Theorem 2.7.1 in \\cite{VW96}). Obviously, for $\\epsilon>1$, $\\log N(\\epsilon,\\Pi,\\Vert\\cdot\\Vert_\\infty)=0$. Hence, assumption (AN2) is satisfied. \n\t\tIt remains to check (AN4). Recall that the estimator of the cure probability $\\hat\\pi(x)$ is the value at time $\\tau_0$ of the Beran estimator $\\hat{S}(t|x)$, while $\\pi_0(x)=S(\\tau_0|x)$. Moreover, by assumption \\eqref{eqn:CI2}, we have $\\inf_x H((\\tau_0,\\infty)|x)>0$. From Proposition 4.1 and 4.2 in \\cite{KA99} it follows that\n\t\t\\[\n\t\t\\begin{aligned}\n\t\t\t\\sup_x\\left|\\hat{\\pi}(x)-\\pi_0(x)\\right|&=O\\left((nb)^{-1\/2}(\\log b^{-1})^{1\/2}\\right)\\quad a.s., \\\\\n\t\t\t\\sup_x\\left|\\hat{\\pi}'(x)-\\pi_0'(x)\\right|&=O\\left((nb^3)^{-1\/2}(\\log b^{-1})^{1\/2}\\right)\\quad a.s.\n\t\t\\end{aligned}\n\t\t\\]\n\t\tand\n\t\t\\[\n\t\t\\sup_{x_1, x_2\\in\\mathcal{X}}\\frac{|\\hat\\pi'(x_1)-\\pi'_0(x_1)-\\hat\\pi'(x_2)+\\pi'_0(x_2)|}{|x_1-x_2|^{\\xi\/2}}=O\\left(\\left(nb^{3+\\xi}\\right)^{-1\/2}(\\log b^{-1})^{1\/2}\\right)\\quad a.s.,\n\t\t\\]\n\t\twhere $\\xi$ is as in assumption (C1). 
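For intuition about the preliminary estimator used here: the cure-probability estimator is the Beran (conditional Kaplan-Meier) estimator evaluated at tau_0, which can be sketched as follows (a Gaussian kernel is used for simplicity and ties among the observed times are ignored; this is an illustration, not the implementation used in the paper):

```python
import numpy as np

def beran_survival(t, x, Y, Delta, X, b):
    """Beran (conditional Kaplan-Meier) estimator of S(t|x), using
    Nadaraya-Watson weights with a Gaussian kernel of bandwidth b.
    Assumes no ties among the observed times Y."""
    w = np.exp(-0.5 * ((x - X) / b) ** 2)        # kernel weights at x
    order = np.argsort(Y)
    Y, Delta, w = Y[order], Delta[order], w[order]
    at_risk = np.cumsum(w[::-1])[::-1]           # total weight of subjects still at risk
    s = 1.0
    for yi, di, wi, ri in zip(Y, Delta, w, at_risk):
        if yi > t:
            break
        if di == 1:                              # only events reduce the survival
            s *= 1.0 - wi / ri
    return s

# with equal weights and no censoring this reduces to the Kaplan-Meier estimator
Y = np.array([1.0, 2.0, 3.0]); D = np.array([1, 1, 1]); X = np.zeros(3)
assert abs(beran_survival(1.5, 0.0, Y, D, X, b=1.0) - 2.0 / 3.0) < 1e-12
```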
Since $\\pi_0$ is twice continuously differentiable, from assumption (C1) it follows that $\\hat\\pi$ satisfies (i,ii) of (AN4).\n\t\tFrom Theorem 3.2 of \\cite{DA2002} (with $T=\\tau_0$) we have $\\hat{\\pi}(x)-\\pi_0(x)=\\frac{1}{n}\\sum_{i=1}^n A_i(x)+R_n(x)$, where\n\t\t\\begin{equation}\n\t\t\t\\label{eqn:iid_pi}\n\t\t\t\\begin{split}\n\t\t\t\tA_i(x)=-\\frac{1-\\phi(\\gamma_0,x)}{f_X(x)}\\frac{1}{b}k\\left(\\frac{x-X_i}{b}\\right)\n\t\t\t\t\\left\\{\\frac{\\Delta_i\\mathds{1}_{\\{Y_i\\leq\\tau_0\\}}}{H([Y_i,\\infty)|x)}-\\int_0^{Y_i\\wedge \\tau_0}\\frac{H_1(ds|x)}{H^2([s,\\infty)|x)}\\right\\}\n\t\t\t\\end{split}\n\t\t\\end{equation}\n\t\tand $\\sup_x|R_n(x)|=O\\left((nb)^{-3\/4}(\\log n)^{3\/4}\\right)$ a.s.. \n\t\tHence\n\t\t\\[\n\t\t\\begin{split}\n\t\t\t&\\mathbb{E}^*\\left[\\left(\\hat\\pi(X)-\\pi_0(X)\\right)\\left(\\frac{1}{\\phi(\\gamma_0,X)}+\\frac{1}{1-\\phi(\\gamma_0,X)}\\right)\\nabla_\\gamma\\phi(\\gamma_0,X)\\right]\\\\\n\t\t\t&=\\frac{1}{n}\\sum_{i=1}^n\\mathbb{E}^*\\left[A_i(x)\\left(\\frac{1}{\\phi(\\gamma_0,X)}+\\frac{1}{1-\\phi(\\gamma_0,X)}\\right)\\nabla_\\gamma\\phi(\\gamma_0,X)\\right] \\\\\n\t\t\t&\\quad+\\mathbb{E}^*\\left[R_n(X)\\left(\\frac{1}{\\phi(\\gamma_0,X)}+\\frac{1}{1-\\phi(\\gamma_0,X)}\\right)\\nabla_\\gamma\\phi(\\gamma_0,X)\\right].\n\t\t\\end{split}\n\t\t\\] \n\t\tThe second term on the right hand side of the previous display is bounded by $c\\sup_x|R_n(x)|=o(n^{-1\/2})$ for some $c>0$ because of assumptions (C1) and (AN1). 
Furthermore, from (AN1) and (AC4) and a Taylor expansion, it follows that the generic element of the sum in the first term is equal to \n\t\t\\[\n\t\t\\begin{split}\n\t\t\t&-\\int_{\\mathcal{X}}\\frac{1}{b}k\\left(\\frac{x-X_i}{b}\\right)\n\t\t\t\\left\\{\\frac{\\Delta_i\\mathds{1}_{\\{Y_i\\leq\\tau_0\\}}}{H([Y_i,\\infty)|x)}-\\int_0^{Y_i\\wedge \\tau_0}\\frac{H_1(ds|x)}{H^2([s,\\infty)|x)}\\right\\}\\frac{1}{\\phi(\\gamma_0,x)}\\nabla_\\gamma\\phi(\\gamma_0,x)\\,\\mathrm{d} x\\\\\n\t\t\t&=-\n\t\t\t\\left\\{\\frac{\\Delta_i\\mathds{1}_{\\{Y_i\\leq\\tau_0\\}}}{H([Y_i,\\infty)|X_i)}-\\int_0^{Y_i\\wedge \\tau_0}\\frac{H_1(ds|X_i)}{H^2([s,\\infty)|X_i)}\\right\\}\n\t\t\t\\frac{1}{\\phi(\\gamma_0,X_i)}\\nabla_\\gamma\\phi(\\gamma_0,X_i)+O(b^2).\n\t\t\\end{split}\n\t\t\\]\n\t\tSince because of (C1) we have $O(b^2)=o(n^{-1\/2})$, (AN4-iii) holds with \n\t\t\\[\n\t\t\\Psi(Y,\\Delta,X)=-\\left\\{\\frac{\\Delta\\mathds{1}_{\\{Y\\leq\\tau_0\\}}}{H([Y,\\infty)|X)}-\\int_0^{Y\\wedge \\tau_0}\\frac{H_1(ds|X)}{H^2([s,\\infty)|X)}\\right\\}\\frac{1}{\\phi(\\gamma_0,X)}\\nabla_\\gamma\\phi(\\gamma_0,X).\n\t\t\\]\n\t\t\\qed\n\t\t\\section*{Acknowledgements}\n\t\n\t\tI. Van Keilegom and E. Musta acknowledge financial support from the European Research Council (2016-2021, Horizon 2020 and grant agreement 694409). 
For the simulations we used the infrastructure of the Flemish\n\t\tSupercomputer Center (VSC).\n\t\t\n\t\t\\section*{Supplement}\n\t\tSupporting information may be found in the online appendix.\n\t\n\t\t{It contains the proofs of Theorems 1, 2 and 3} in Section~\\ref{sec:asymptotics} and additional simulation results.\n\n\n\n\n\n\\bibliographystyle{imsart-number}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nPropagation coherence is the condition for realization of\noscillations - the interference effects that lead to\ntime - distance periodic variations of observables \\cite{Kayser:1981ye,Kiers:1995zj,Giunti:1997wq,Grimus:1998uh,Giunti:2002xg,Akhmedov:2009rb,Akhmedov:2010ms,Akhmedov:2012uu,Akhmedov:2019iyt}. \nCorrespondingly, decoherence means the disappearance of the interference \nand oscillatory pattern. \n\nIn the configuration space, the decoherence is produced by \nrelative shift, and eventually, separation of the wave \npackets (WPs) that correspond to the eigenstates of propagation. The coherence length was defined as the distance at which\nthe wave packets are shifted with respect to \neach other (separated) by the size of the wave packet $\\sigma_x$, or equivalently, when the interference term in the probability is suppressed by a factor $1\/e$.\nIn the energy-momentum space, the decoherence is due to \naveraging of the oscillation phase over the energy interval of the energy uncertainty in a setup. \nIt was checked that in vacuum and uniform matter, the two considerations \nare equivalent and lead to the same value of coherence length \\cite{Mikheev:1986wj,Mikheev:1987jp,Kersten:2015kio}. \n\n\nThe loss of the propagation coherence does not lead to the\nloss of information and can be restored under certain conditions. In the configuration space, the restoration requires a long enough coherence time of detection. In the energy-momentum \nspace it requires very good energy resolution. 
Irreversible loss of coherence occurs if neutrinos are treated as open quantum systems \\cite{Benatti:2000ph}.\n\nCoherence has mostly been studied for vacuum oscillations \\cite{Kayser:1981ye,Kiers:1995zj,Giunti:1997wq,Grimus:1998uh,Giunti:2002xg,Akhmedov:2009rb,Akhmedov:2010ms,Akhmedov:2012uu,Akhmedov:2019iyt}. \nIn matter with constant (or adiabatically changing) density\nit was considered in \\cite{Mikheev:1987jp,Mikheyev:1989dy,PhysRevD.41.2379,Peltoniemi:2000nw,Kersten:2015kio}. It was noticed that \nat a certain energy (close to the MSW resonance energy), the difference of group velocities of the WPs vanishes in media with constant density and the \ncoherence length becomes infinite \\cite{Mikheev:1986wj,Mikheev:1987jp,Kersten:2015kio}. If the density varies monotonically with distance, then at a specific point, $x_0$, near the MSW resonance, the difference of group velocities changes sign, and the shift accumulated at $x < x_0$ is compensated at $x > x_0$, so that the overlap of the packets can be restored \\cite{Mikheev:1987jp,Mikheyev:1989dy,Kersten:2015kio}. Decoherence affects resonant oscillations of solar and supernova neutrinos \\cite{PhysRevD.41.2379} and, in the case of supernova, decoherence happens before the resonance region. However, the effects of collective oscillations were not taken into account in \\cite{PhysRevD.41.2379}. In ref.~\\cite{Peltoniemi:2000nw}, a formal solution of the neutrino equation of motion in a general profile, including non-adiabaticity, was found. Based on it, the equivalence between the $x$- and $E$-spaces was shown, at least for situations with $\\sigma_E \\ll E$. Limitations of the WP treatment were also discussed; in particular, as $\\sigma_E$ increases, negative-energy components of the WPs become relevant. This can be important in situations with $\\sigma_E \\sim E$.\n\n\nThe coherence in the inner parts of supernova (SN)\nis of special interest \\cite{Kersten:2013fba}. 
The reason is that the wave packets of SN neutrinos are very short, $\sigma_E \sim E$, suggesting that decoherence can occur at smaller scales than\nthe scales of possible collective oscillation phenomena \cite{Kersten:2015kio,Akhmedov:2017mcc}. \nFurthermore, effectively, the problem is non-linear \cite{Hansen:2018apu}. \nTherefore the question arises whether the loss of coherence destroys the collective phenomena or not \cite{Hansen:2019iop}. \n\nIn this paper, we consider coherence and decoherence in matter in detail. We revisit the case of the infinite coherence length in matter of constant density and give its physics interpretation. We show that in the matter-dominated regime, decoherence is determined by the vacuum parameters. We comment on the fact that oscillations of massless neutrinos do not decohere. In the adiabatic case $L_{coh} \rightarrow \infty$ is possible if a density profile has periodic modulations.\nWe then consider decoherence in the presence of adiabaticity violation. That includes decoherence in matter with a single density \njump, which can be relevant for propagation through the shock wavefront in SN \cite{Kersten:2015kio}, and in the castle-wall profile \cite{Akhmedov:1988kd,Krastev:1989ix,Akhmedov:1998ui,Akhmedov:1999ty,Akhmedov:1999va}. The latter can give some idea about the decoherence of collective oscillations in the inner parts of supernova \cite{Hansen:2018apu}. The castle-wall profile is the simplest example because it admits an analytical treatment and allows us to draw some conclusions about the relation between parametric resonances and the coherence length. \n\nDecoherence in the configuration space ($x$-space) is equivalent to averaging of the oscillation phase in energy ($E$-space) for stationary situations \cite{Stodolsky:1998tc}. Therefore, for each matter profile, we elaborate on the\nissue of the equivalence of results in the $x$- and $E$-spaces.
In particular, for the castle-wall profile, we show that the modification of the shape factors in $x$-space due to the density jumps does not alter their Fourier transform, and the initial energy spectrum of the WPs is preserved.\n\n\nThe paper is organized as follows: in section~\ref{coherence-lengths} we study the coherence length in vacuum and in matter with constant and adiabatically varying density. We emphasize the existence of energies with infinite coherence length and give an interpretation of $L_{coh} \rightarrow \infty$ in the position and energy spaces.\nIn section~\ref{coherence and adiabaticity violation}, we analyze coherence in the case of maximal adiabaticity violation.\nSection~\ref{parametric-oscillations} presents the study of coherence in the castle-wall profile.\nSection~\ref{applications} is devoted to the applications of our results to supernova neutrinos.\nIn section~\ref{conclusions} we present our conclusions.\n\n\n\section{Coherence in matter with constant and adiabatically changing density} \n\label{coherence-lengths}\n\nIn what follows, for simplicity, we consider coherence in a two-neutrino system; \nthe generalization to the case of three neutrinos is straightforward.
\n\n\\subsection{Coherence of oscillations in matter}\n\nIn the energy-momentum space, the decoherence is related \nto averaging of the oscillation phase $\\phi$ or \nthe oscillatory (interference) term in probability $P_{int}$ \nover the energy uncertainty, $\\sigma_E$, of a set up\\footnote{There is subtle aspect related to averaging \nof $\\phi$ and $P_{int}$ which we will discuss later.}.\n$\\sigma_E$ gives the width of the wave packet (WP), and \nin general, it is determined by the energy uncertainty at the production \nand detection, thus including a possibility \nof restoration of the coherence at the detection: \n$1\/\\sigma_E = 1\/\\sigma_E^{prod} + 1\/\\sigma_E^{det}$ \\cite{Kayser:1981ye,Kiers:1995zj,Giunti:1997wq,Grimus:1998uh,Giunti:2002xg,Akhmedov:2009rb,Akhmedov:2010ms,Peltoniemi:2000nw,Hollenberg:2011tc,Kersten:2015kio}.\n\nIn the case of uniform medium (vacuum, constant density matter) the \neigenstates and eigenvalues of the Hamiltonian of propagation \n$H_{im}$ ($i = 1,\\, 2$) are well defined \\cite{Wolfenstein:1977ue,Mikheev:1986wj,Mikheev:1987jp,Mikheyev:1989dy}. The difference of the eigenvalues, $\\Delta H_m \\equiv H_{2m} - H_{1m}$, determines the oscillation phase\nacquired along the distance $L$: \n\\begin{equation} \n\\label{mat-phase} \\nonumber\n\\phi_m = \\Delta H_m L.\n\\end{equation}\nFor a fixed $L$, variation of the phase with change of the neutrino \nenergy $E$ in the interval $E \\pm \\sigma_E$ equals \n\\begin{equation} \n\\label{expansion}\n\\Delta \\phi_m = \\left( 2 \\sigma_E \n\\frac{d \\Delta H_m}{d E}\n+ \\frac{\\sigma_E^3}{3}\\frac{d^3 \\Delta H_m}{d E^3} \n+... \\right) L. 
\n\\end{equation}\nIt increases linearly with $L$ and we define the coherence \nlength, $L_{coh}$, \nas the length at which the variation of oscillation phase\nbecomes $2\\pi$: \n\\begin{equation}\n\\label{cohlength}\n|\\Delta \\phi_m (L_{coh}^m)|= 2 \\pi.\n\\end{equation}\nThis condition means that averaging of the oscillatory dependence of the probability, which is a measure of interference, \nbecomes substantial. \nUsing the first term of the expansion (\\ref{expansion}) \nwe obtain from the condition (\\ref{cohlength}) \n\\begin{equation} \n\\label{const-density}\nL^m_{coh} = \\frac{\\pi}{\\sigma_E} \n\\left| \\frac{d \\Delta H_m}{d E} \\right|^{-1}. \n\\end{equation}\nAt \n\\begin{equation} \n\\label{infcoh}\n\\frac{d \\Delta H_m (E)}{d E} = 0 \n\\end{equation}\nthe coherence length becomes infinite \\cite{Mikheev:1987jp,Mikheyev:1989dy,Kersten:2015kio}. The next terms in the expansions (\\ref{expansion}) shift the pole in (\\ref{const-density})\nbut do not eliminate it. The shift due to higher order terms is suppressed if\n$\\sigma_E\/E \\ll 1$.\n\nIn vacuum \n\\begin{equation} \\nonumber\n\\Delta H = \\frac{\\Delta m^2}{2 E},\n\\end{equation}\nwhere $\\Delta m^2 \\equiv m_2^2 -m_1^2$ is the mass square difference.\nConsequently, Eq.~(\\ref{const-density}) gives\n\\begin{equation} \n\\label{L-coh}\nL_{coh} = \n\\frac{\\pi}{\\sigma_E}\\frac{2 E^2}{\\Delta m^2} = \nl_{\\nu} \\frac{E}{2 \\sigma_E}, \n\\end{equation}\nwith $l_{\\nu} = 4 \\pi E \/ \\Delta m^2$ being \nthe vacuum oscillation length. \nNotice that $L_{coh}\\rightarrow \\infty$ in the limits \n$E \\rightarrow \\infty$ or $\\Delta m^2 \\rightarrow 0$. 
\n\nIn matter with constant \ndensity \cite{Kersten:2015kio}, the difference \nof the eigenvalues for a given matter potential $V$ is given by \n\begin{equation} \n\label{split-eigenvalues}\n\Delta H_m = \frac{\Delta m^2}{2E} \n\sqrt{\left(c_{2\theta} - \frac{2EV}{\Delta m^2} \right)^2 \n+ s^2_{2\theta}} \, ,\n\end{equation}\nwhere $\theta$ is the vacuum mixing angle, $c_{2 \theta} \equiv \cos 2 \theta$, and $s_{2 \theta} \equiv \sin 2\theta$. The derivative of (\ref{split-eigenvalues}) equals \n\begin{equation}\n\label{eq:partder}\n\frac{d \Delta H_m}{d E} = \n\frac{\Delta m^2}{2E^2} \n\frac{ \left(\frac{\Delta m^2}{2 E} - \nV \cos 2 \theta \right)}{\Delta H_m}.\n\end{equation}\nConsequently, according to Eq.~(\ref{const-density}) \nthe coherence length in matter equals \n\begin{equation} \nonumber\n\label{const-density1}\nL^m_{coh}= \n\frac{L_{coh}}{l_m} \frac{2 \pi}{\left|\frac{\Delta m^2}{2 E} \n- V \cos 2 \theta \right|} = L_{coh} \n\left|1 -c_{2\theta} \frac{2VE}{\Delta m^2} \right|^{-1} \n\sqrt{\left( c_{2\theta} - \frac{2VE}{\Delta m^2} \right)^2 + s^2_{2 \theta}} ,\n\end{equation}\nwith $L_{coh}$ given in (\ref{L-coh}) and $l_m=2 \pi \/ \Delta H_m$ \nbeing the oscillation length in matter. \n\nLet us consider the dependence of $L^m_{coh}$ on energy. \nIts salient feature is the existence of a pole at \n\begin{equation} \nonumber\n\label{eq-E0}\n E_0=\frac{\Delta m^2}{2 V \cos 2 \theta}. \n\end{equation}\nThat is, the infinite coherence length \n$L^m_{coh} \rightarrow \infty$ is realized \nat a finite energy $E_0$, in contrast \nto the vacuum case. $E_0$ is related to the MSW resonance energy, $E_R =\Delta m^2 \cos 2\theta\/(2 V) $, as \n\begin{equation} \n\label{MSW}\nE_0 = \frac{E_R}{\cos^2 2\theta}. \n\end{equation}\nThus, $E_0 > E_R$ and it can be very close to $E_R$ in the case of small mixing.
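These statements can be verified numerically. The sketch below is ours (arbitrary units, a hypothetical potential $V$); it checks by finite differences that the derivative (\ref{eq:partder}) vanishes at $E_0$, and that at the resonance energy the coherence length in matter is enhanced by $1\/\sin 2\theta$ relative to vacuum:

```python
import math

theta = math.radians(8.5)                 # illustrative vacuum mixing angle
dm2, V = 1.0, 0.05                        # arbitrary units (hypothetical)
c2, s2 = math.cos(2 * theta), math.sin(2 * theta)

def dH(E):
    """Eigenvalue splitting in matter, Eq. (split-eigenvalues)."""
    x = 2.0 * E * V / dm2
    return dm2 / (2.0 * E) * math.sqrt((c2 - x) ** 2 + s2 ** 2)

def ddH_dE(E, h=1e-7):
    """Central finite-difference derivative of dH with respect to E."""
    return (dH(E * (1 + h)) - dH(E * (1 - h))) / (2.0 * E * h)

E0 = dm2 / (2.0 * V * c2)                 # infinite-coherence energy, Eq. (eq-E0)
ER = E0 * c2 ** 2                         # MSW resonance energy, Eq. (MSW)

# The derivative, and hence 1/L_coh^m, vanishes at E0:
assert abs(ddH_dE(E0)) < 1e-6 * abs(ddH_dE(ER))

# At E_R: |d(dH_vac)/dE| / |d(dH_m)/dE| = 1/sin(2 theta).
ratio = (dm2 / (2.0 * ER ** 2)) / abs(ddH_dE(ER))
assert math.isclose(ratio, 1.0 / s2, rel_tol=1e-4)
```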
Since the width of the MSW resonance peak is $2\tan 2 \theta E_R \approx 2\sin 2\theta E_R$, the pole is within the peak. \nTherefore, at $E_0$, the transition probability is large. \n\nIn terms of $E_0$, or $x \equiv E\/E_0$, the derivative (\ref{eq:partder}) \ncan be rewritten as \n\begin{equation} \nonumber\n\label{eq:intermse0}\n\frac{d \Delta H_m}{d E} = \n\frac{2\pi}{E l_\nu} \n\frac{1 - x}{\sqrt{(1 - x)^2 + x^2\tan^2{2\theta} }}. \n\end{equation}\nConsequently, the ratio of the coherence lengths \nin matter and in vacuum (\ref{const-density}) becomes \n\begin{equation} \n\label{ratio}\n \frac{L^m_{coh}}{L_{coh}}=\sqrt{1+ \n\frac{\tan^2 2 \theta}{\left(1-\frac{E_0}{E} \right)^2}}.\n\end{equation}\nThe ratio Eq.~(\ref{ratio}) as a function of energy \nis shown in fig.~\ref{fig-E0}. It\nhas a universal form which depends only on the vacuum mixing. \nThe value $L^m_{coh}\/L_{coh} = 2$ corresponds to the energies on both\nsides of the peak, $E=E_0 \left(1 \pm \tan 2 \theta\/\sqrt{3}\right)^{-1}$.\nThus, for small $\theta$ the quantity $\Delta E = (2\/\sqrt{3})\tan 2\theta E_0$\ncharacterizes the width of the peak.\n\nAt the MSW resonance, $E=E_R$, the ratio (\ref{ratio}) equals\n\begin{equation} \n\label{ratio-resonance}\n\frac{L^m_{coh}}{L_{coh}}=\frac{1}{\sin 2 \theta}.\n\end{equation}\nThat is, according to Eq.~(\ref{ratio-resonance})\nthe coherence length is enhanced in comparison with the vacuum one. \nFor example, if $\theta=\theta_{13}=8.5^\circ$, \nwe have $L^m_{coh} \approx 3.4 \, L_{coh}$. \nAt the resonance the oscillation length is enhanced \nby the same factor: $l_m^R = l_\nu\/ \sin 2\theta$.\n\n\nFor $E \ll E_0$, {\it i.e.} in the vacuum-dominated case,\nwe have $L^m_{coh} \approx L_{coh}$. \nInterestingly, for $E \gg E_0$ (matter-dominated region)\n\begin{equation} \nL^m_{coh} \approx \frac{L_{coh}}{\cos 2 \theta}.
\n\\label{matter-domination}\n\\end{equation}\nThat is, $L_{coh}^m$ is determined by the vacuum parameters, and for small $\\theta$ it approaches the vacuum $L_{coh}$ again. \nThis has important implications for supernova neutrinos. \n\nThe depth of oscillations in matter given by\n\\begin{equation} \n\\label{sine-mixing}\n\\sin^2 2 \\theta_m = \\frac{\\sin^2 2 \\theta}{\\left( \\cos 2\\theta \n- \\frac{2EV}{\\Delta m^2} \\right)^2 + \\sin^2 2\\theta}\n\\end{equation}\nbecomes at $E=E_0$\n\\begin{equation} \\nonumber\n \\sin^2 2 \\theta_m =\\cos^2 \\theta.\n\\end{equation}\nFor $\\theta=8.5^0$ the depth is close to maximum: \n$\\sin^2 2 \\theta_m \\approx 0.98$. \nTherefore, in matter, strong flavor transitions can be preserved \nfrom decoherence over large distances $L$ from the source to the detector.\n\n\n\n\n\n\n\\begin{figure} \n\\centering\n\\includegraphics[width=0.8\\linewidth]{fig-E0.png}\n\\caption{Dependence of the ratio $L^m_{coh}\/L_{coh}$ (\\ref{ratio}) \non energy for $\\theta = \\theta_{13} = 8.5^\\circ $. The ratio diverges at $E_0$, and $E_R$ is the MSW resonance energy. \n}\n\\label{fig-E0}\n\\end{figure}\n\n\n\n\\subsection{Coherence in the configuration space}\n\nIn the configuration space, the loss of propagation coherence \nis associated with relative shift and eventually separation \nof the wave packets of the eigenstates. \nThe difference of group velocities \nof the eigenstates is given by \n\\begin{equation} \n\\label{Delta-v}\n \\Delta v_m=-\\frac{d \\Delta H_m}{d E}.\n\\end{equation}\nThen spatial separation (relative shift) \n$x_{shift}$ of the packets after propagating a distance $L$ equals\n\\begin{equation} \\nonumber\n\\label{velocity}\nx_{shift}(L) =\\int_{0}^{L} \\Delta v_m dx = \n-\\int_{0}^{L} \\frac{d \\Delta H_m}{d E} dx.\n\\end{equation}\nThe coherence length $L^m_{coh}$ can be defined as the distance at which \nthe separation equals the spatial size of the packets $\\sigma_x$:\n$|x_{shift}(L^m_{coh})| \\approx \\sigma_x$. 
At this point, the overlap of the packets becomes small. \nIn the case of constant density this condition gives \n\begin{equation} \n\label{velocity1}\nL^m_{coh} = \sigma_x \n\left| \frac{d \Delta H_m}{d E} \right|^{-1}.\n\end{equation}\nSince $\sigma_x = 2\pi\/(2 \sigma_E)$ \n(recall that $\sigma_E \approx \sigma_p$ is the half-width), \nthe expression (\ref{velocity1}) coincides with the expression \nfor the coherence length in the energy representation (\ref{const-density}). \nThis shows the equivalence of the results in the \nconfiguration and the energy-momentum spaces. \n\nAccording to Eq.~(\ref{Delta-v}), at $E=E_0$ \nthe eigenstates in matter have equal group \nvelocities and therefore do not separate. As a result, \ncoherence is maintained for an infinite time: \n$L^m_{coh} \rightarrow \infty$. \nThus, in the configuration space the condition for infinite coherence \nis \n\begin{equation} \n\label{infcoh-conf}\nv_1 = v_2. \n\end{equation} \n\nThe infinite coherence energy $E_0$ coincides\nwith the resonance energy of the oscillations\nof the mass eigenstates in matter,\n$\nu_1 \leftrightarrow \nu_2$ \cite{Mikheyev:1989dy}. This is not accidental.\nThe energy dependence of the phase,\nthe difference of the group velocities,\nand therefore decoherence all originate from the mass states.\nThe resonance means that at $E_0$ the mass states $\nu_1$ and $\nu_2$\nare maximally mixed in the eigenstates $\nu_i^m$.\nThat is, $\nu_1^m$ and $\nu_2^m$\nboth contain equal admixtures of $\nu_i$.
Consequently,\nthe group velocities of $\nu_1^m$ and $\nu_2^m$ should be equal.\nIn other words, in a given flavor state\nthe components $\nu_1$ and $\nu_2$ oscillate\nwith maximal depth, $\nu_1 \rightarrow \nu_2\n\rightarrow \nu_1 \rightarrow \nu_2$,\nwhich compensates the separation.\n\nThe derivative of the oscillation length in matter equals\n$$\n\frac{d l_m}{dE} = - \frac{2\pi}{(\Delta H_m)^2}\n\frac{d\Delta H_m}{dE}.\n$$\nTherefore the condition of infinite coherence (\ref{infcoh}) coincides\nwith the extremum of the oscillation length.\nFor a given potential $V$,\n$l_m^{max} = 2\pi\/(V \sin 2\theta)$, which is larger than\nthe length in the MSW resonance:\n$l_m^{R} = 2\pi\/(V \tan 2\theta)$, and the difference\nis substantial for large vacuum mixing.\n\nNotice that decoherence is related to the energy uncertainty,\nthat is, to averaging over a certain interval of energy.\nTherefore the infinite coherence energy should refer to the\naverage energy in a wave packet.\n\nNotice that the equivalence of the results in the two representations \noriginates from the fact that \nthe same quantity, $d \Delta H_m \/d E$, \ndetermines the change of the oscillation phase with energy \n(in $E$-$p$ space) on the one hand, and the difference of the \ngroup velocities (in $x$-space) on the other.\n\n\subsection{Correlation of the phase difference and delay}\n\nThe separation between the WPs of the eigenstates, $t \approx \Delta v X$,\nis proportional to their phase difference (oscillation phase), \n$\phi = \Delta H_m X$: \n\begin{equation}\n\label{eq:phase-delay}\nt = g(E, V) \phi, \n\end{equation}\nwhere according to (\ref{eq:partder})\n\begin{equation} \nonumber\n\label{eq:gfunc}\ng(E, V) = \frac{\Delta v}{\Delta H_m} = \n\frac{1}{E} \n\frac{1 - \frac{2 V E c_{2\theta}}{\Delta m^2}} \n{\left(c_{2\theta} - \frac{2VE}{\Delta m^2}\right)^2 + s^2_{2\theta}}.\n\end{equation}\nIn terms of the infinite coherence energy, $E_0$, it can be rewritten
as\n\\begin{equation} \\nonumber\n\\label{eq:gfunc2}\ng(E, V) = \\frac{2 V c_{2\\theta}}{\\Delta m^2} \n\\frac{\\frac{E_0}{E} - 1} \n{\\left(c_{2\\theta} - \\frac{E}{E_0 c_{2\\theta}} \\right)^2 \n+ s^2_{2\\theta}}.\n\\end{equation}\nFor $E \\ll E_0$ (vacuum case)\n$$\ng(E, V) = \\frac{1}{E}.\n$$\nAt $E = E_0$ (near the MSW resonance): \n$g(E, V) = 0$ - there is no loss of coherence, \nand the oscillation phase, being non-zero, is suppressed by mixing: \n$$\n\\phi = \\frac{\\Delta m^2 X}{2E_0} \\tan 2\\theta = \n\\phi_{vac}(E_0) \\tan 2\\theta. \n$$\nFor $E \\gg E_0$ (matter dominated case):\n$$\ng(E, V) \\approx - \\frac{c_{2\\theta} \\Delta m^2}{2V E^2}= - \\frac{ E_0 c_{2 \\theta}^2}{ E^2}.\n$$\nHere also the relative separation is strongly \nsuppressed, while oscillations proceed with large phase \n$\\phi \\approx V X$. \n\nNotice also that the relative separation changes the sign at $E_0$. Therefore in layers $a$ and $b$ with \ntwo different densities one may have opposite sign \nof a delay and also \n$t^a = -t^b$ (see below). The phases do not change the sign and are\nalways positive. \n\n\n\n\\subsection{Physics of $L^m_{coh} \\rightarrow \\infty$} \n\\label{interpretation}\n\nHere we present an interpretation of the divergence of \n$L^m_{coh}$ at certain energy $E_0$. \nThe transition probability is given by\n\\begin{equation} \\nonumber\n\\label{generic-prob}\nP_{\\alpha \\beta}(\\theta_m,\\phi_m ) =\n\\frac{1}{2}\\sin^2 2 \\theta_m \\left(1 - \\cos \\phi_m \\right).\n\\end{equation}\nIt should be averaged over the energy interval $2\\sigma_E$. 
\nThe change of $\\sin^2 2 \\theta_m$ with energy in the interval \n$E_0 \\pm \\sigma_E$ equals\n\\begin{equation}\\nonumber\n \\left| 2 \\sigma_E \\frac{d}{d E}\\sin^2 2 \\theta_m \\right|=2 \\sigma_E \\frac{V c_{2 \\theta}}{\\Delta m^2}=\\frac{\\sigma_E}{E_0} \\ll 1,\n\\end{equation}\nwhere we have taken into account that at $E_0$ the mixing parameters are $\\sin^2 2 \\theta_m=\\cos^2 \\theta$ and $\\cos^2 2 \\theta_m=\\sin^2 \\theta$. Therefore \n$\\sin^2 2 \\theta_m$ can be put out of the averaging integral at $E \\sim E_0$. \nFurthermore, to simplify consideration, instead of averaging over $E$ we will \naverage $P_{\\alpha \\beta}$ immediately over the phase $\\phi_m$ \nin the interval $2\\delta \\phi_m$ determined by \n\\begin{equation}\n2 \\delta \\phi_m = \\frac{d \\phi_m }{d E} 2\\sigma_E\n= \\frac{d \\Delta H_m}{d E} L 2\\sigma_E. \n\\label{pm-int}\n\\end{equation}\nThe averaging of the oscillatory term yields\n\\begin{equation} \\nonumber\n\\label{average}\n\\frac{1}{2 \\delta \\phi_m} \\int_{\\phi_m - \\delta \\phi_m}^{\\phi_m\n+ \\delta \\phi_m} \nd\\phi_m' P_{\\alpha \\beta}(\\phi_m') = \\frac{1}{2}\n\\sin^2 2 \\theta_m (E_0) \n\\left[1 - D(\\delta \\phi_m) \\cos \\phi_m \\right], \n\\end{equation}\nwhere \n\\begin{equation} \\nonumber\n\\label{decoh}\nD(\\delta \\phi_m) \\equiv \\frac{\\sin \\delta \\phi_m}{\\delta \\phi_m}\n\\end{equation}\nis the decoherence factor which describes suppression of \nthe interference term.\n\nAccording to (\\ref{pm-int}) for $E = E_0$ the variation of phase \nis $\\delta \\phi_m = 0$, \nand consequently, $D(\\delta \\phi_m) \\rightarrow 1$.\nThe averaging effect is small in the region around $E_0$. \nIndeed, $D(\\delta \\phi_m)$ has a peak centered at $\\delta \\phi_m = 0$. \nThe half-width at nearly half of maximum, \n$D(\\delta \\phi_m) = 0.5$ corresponds to \n\\begin{equation} \\label{rounding}\n|\\delta \\phi_m| \\approx 1.89. 
\n\\end{equation}\nConsequently, the half-width of the peak in the energy scale, $\\Gamma$, \nis determined by the condition \n\\begin{equation} \\label{Gamma}\\nonumber\n|\\delta \\phi_m (E_0 + \\Gamma)| = 2\n\\end{equation}\n(rounding (\\ref{rounding}) to $2$).\nUsing this value \nand expressions in (\\ref{pm-int}) and (\\ref{eq:partder}), \nwe find the relative half-width of the peak:\n\\begin{equation} \n\\label{exact-peak}\n \\frac{\\Gamma}{E_0} = \n\\frac{ \\tan 2 \\theta }{\\sqrt{\\pi^2 d^2 - 1}},\n\\end{equation}\nwhere \n\\begin{equation}\\nonumber\nd \\equiv \\frac{\\sigma_E}{E_0} \\frac{L}{l_{\\nu}}. \n\\end{equation}\nNotice that (\\ref{exact-peak}) requires $d \\pi > 1$, \non explicitly \n$L > l_{\\nu}E\/ (\\pi \\sigma_E) \\approx L_{coh}$.\nFor $d \\pi \\rightarrow 1$, the width $\\Gamma \\rightarrow \\infty$, \nwhich means that coherence is well satisfied for all the energies. \nMoreover, the peak disappears (half a height does not exist). \n\nFor $\\pi d \\gg 1$ the Eq.~(\\ref{exact-peak}) becomes\n\\begin{equation} \n\\label{exact-peak1}\n\\frac{\\Gamma}{E_0} = \\frac{ \\tan 2 \\theta }{\\pi d} = \n\\tan 2 \\theta \\frac{E_0}{\\sigma_E} \\frac{l_{\\nu}}{\\pi L}. \n\\end{equation}\nThe larger $L$, as well as $\\sigma_E$, the narrower the peak.\nAlso, according to (\\ref{exact-peak1}) the width of the peak increases with energy:\n\\begin{equation} \\nonumber\n\\label{propto-peak}\n \\frac{\\Gamma}{E_0} \\propto E_0^2. 
\n\\end{equation}\n\n\n\n\n\n\n\\subsection{Adiabatic evolution and infinite coherence}\nIn the case of slow adiabatic density change \\cite{Mikheev:1986wj,Mikheev:1987jp,Mikheyev:1989dy}, one can\nintroduce the instantaneous eigenstates and eigenvalues\n$H_{im}(x)$ and their difference $\\Delta H_m(x)$,\nwhich are well-defined quantities.\nThe adiabatic oscillation phase is given by the integral\n\\begin{equation}\n\\label{adiab-phase}\n\\phi_m^{ad}(L) = \\int_0^{L} dx \\Delta H_m(N_e(x)).\n\\end{equation}\nA change of the phase with the energy in the interval\n$2\\sigma_E$ equals\n\\begin{equation}\n\\label{adiab-diff}\\nonumber\n\\Delta \\phi_m^{ad}(L) = 2 \\sigma_E \\int_0^{L} dx\n\\frac{d}{d E}\\Delta H_m(x),\n\\end{equation}\nwhere we permuted the intergration over $x$ and differentiation\nover $E$.\nThen the condition for decoherence is\n$\\Delta \\phi_m (L^m_{coh}) = 2\\pi$, or explicitly,\n\\begin{equation}\n\\label{cond-var-density}\\nonumber\n\\left| \\int_0^{L^m_{coh}} dx \\frac{d}{d E}\n\\Delta H_m(x)\\right| = \\frac{\\pi}{\\sigma_E},\n\\end{equation}\nand $d \\Delta H_m(x)\/d E$ is given in (\\ref{eq:partder}).\n\n\nIn the configuration space the adiabatic evolution means\nthat there are no transitions between the eigenstates.\nTherefore a propagation is described by two wave packets,\nwhich do not change shape factors as in the constant density case.\nSeparation of the WP equals\n\\begin{equation}\n\\label{adiab-sep}\n\\Delta x (L) = \\int_0^{L} dx\n\\frac{d}{d E} \\Delta H_m(x),\n\\end{equation}\nand from (\\ref{adiab-phase}), (\\ref{adiab-sep}) we obtain\n\\begin{equation}\\nonumber\n\\label{adiab-equal}\n\\frac{d\\phi_m^{ad}}{d E} = \\Delta x (L),\n\\end{equation}\nwhich is the basis of equivalence of results in the $x$- and $E$-representations. 
In particular,\nthe infinite coherence condition, $d\phi_m^{ad} \/dE = 0$, corresponds to\nzero separation.\n\nLet us consider coherence for different density (potential) profiles.\nFor a monotonic change of the potential, infinite\n$L_{coh}^m$ cannot be realized. Indeed, for a given $E$ the pole\ncondition\nis satisfied for a specific value of the potential,\n$V_0 (E) = V(E, x_0)$, at a specific point $x_0$.\nAt a point $x = x_0 + \Delta x$, where $V(x_0 + \Delta x)$\nis outside the $L_{coh}$-peak,\na significant enhancement of coherence is absent, and $L_{coh}^m < \Delta x$.\n\n\nA certain increase of the coherence length can be related\nto the fact that at $V_0$ the derivative\n$d \Delta H_m(x)\/d E$\nchanges sign, which suppresses the integral over $x$.\nThis corresponds to the change of sign of\nthe difference of the group velocities at $E_0$ ($V_0$).\n\nFor a layer of a given length $L$, according to (\ref{adiab-sep})\nthe energy of zero\nseparation (complete overlap) is realized when\n\begin{equation} \label{L_0}\n \int_0^{x_0} dx \frac{d}{d E} \n\Delta H_m(x)=-\int_{x_0}^{L} dx \frac{d}{d E} \n\Delta H_m(x) ,\n\end{equation}\nwhere $x_0 = x_0(E)$.\n\nInfinite coherence can be achieved\nin a potential with periodic modulations:\n\begin{equation}\n\label{eq:var-pot}\nV(x) = \Bar{V} (1 + h \sin 2\pi x\/X).\n\end{equation}\nFor a large $X$, the adiabaticity condition is satisfied.\nZero separation of the WPs, $\Delta x = 0$, is realized when\nthe difference of the group velocities has different signs in the\nfirst and the second parts of the period.\nSo, there is a continuous \"catch-up\" effect.\n\n\nAccording to (\ref{eq:partder}), the energy $E_0$ of infinite coherence\nis obtained from the condition\n\begin{equation}\n\label{eq:intcond}\n\lim_{L \rightarrow \infty} \int_0^L dx\n\frac{\left(\frac{\Delta m^2}{2 E} - V\cos 2\theta \right)}{\Delta\nH_m(x)}\n= 0.\n\end{equation}\nAround the zero of the numerator, the denominator changes\nmuch
slower\nand can be put out of the integral at some average value of the potential.\nThen the condition (\ref{eq:intcond}) reduces to\n\begin{equation}\nonumber\n\label{eq:intcond2}\n\frac{\Delta m^2}{2 E} L - \cos 2\theta \int_0^L dx V (x) = 0.\n\end{equation}\nPerforming the explicit integration with the potential\n(\ref{eq:var-pot}) we obtain\n\begin{equation}\n\label{eq:intcond3}\n\frac{\Delta m^2}{2 E_0} \approx \cos 2\theta \, \Bar{V} \left(1 + \frac{h}{\pi}\n\frac{X}{L}\n\sin^2 \frac{\pi L}{X}\right).\n\end{equation}\nIn the limit $L \rightarrow \infty$ the last term\nin the parentheses of (\ref{eq:intcond3}) can be neglected, and for $E_0$\nwe obtain the same expression (\ref{eq-E0}) as in the constant-density case\nwith the average potential $\Bar{V}$.\n\nIn general, if $V$ varies about some average value $\bar V$ (even irregularly), there may be an energy at which $L_{coh}^m \rightarrow \infty$. \n\n\n\subsection{Coherence for massless neutrinos}\n\n\nThe limit $L_{coh}^m \rightarrow \infty$ is also realized \nfor oscillations of massless neutrinos, \nas originally introduced by Wolfenstein \cite{Wolfenstein:1977ue}. \nHere the oscillation phase is independent of the neutrino energy.\nRecall that the propagation decoherence is related to \nthe energy dependence of the level splitting $\Delta H_m$ and \nshows up as averaging over energy. \nTherefore decoherence does not exist for oscillations \nof massless neutrinos driven by potentials. \n\n\n\section{Coherence and adiabaticity violation}\n\label{coherence and adiabaticity violation}\n\nAdiabaticity violation means transitions between the\neigenstates of propagation, $\nu_{1m} \leftrightarrow \nu_{2m}$, and therefore a change of the shape of the WPs.
Here we will consider the cases of extreme adiabaticity violation, when $dV\/dx \rightarrow \infty$, which corresponds to density jumps at certain spatial points.\n\nExtreme adiabaticity violation combined with the shift and separation of the WPs leads to splitting and catch-up of the WPs \cite{Kersten:2015kio}. We will first illustrate these effects considering a single density jump.\n\n\subsection{Coherence in the case of a single density jump}\n\nConsider a two-layer ($a$ and $b$) \nprofile with a density jump at the border between the layers. The layers have matter potentials $V_a$ and $V_b$ and lengths $L_a$ and $L_b$, \nso that the total length equals $L \equiv L_a + L_b$. \nLet $\theta_k = \theta_m(V_k)$\nbe the flavor mixing angle \nin layer $k$ \n($k = a, \, b$).\n\nSuppose at the beginning of layer $a$ the state is\n\begin{equation} \n\label{init}\n\nu (x=0,t) = c_a' f_1(t) \nu_1^a + s_a' f_2 (t) \nu_2^a, \n\end{equation}\nwhere we use the abbreviations $c_a' \equiv \cos \theta_a$, \n$s_a' \equiv \sin \theta_a$, and \n$f_i(t)$ are the shape factors of the wave packets, \nnormalized as \n$$\n\int dt |f_i (t)|^2 = 1.\n$$\nBy the border between the layers $a$ and $b$, \nthe state (\ref{init}) evolves into \n\begin{equation} \n\label{border}\n\nu (L_a,t) = c_a' f(t - t_1^a) e^{i 2\phi_1^a} \nu_1^a + \ns_a' f(t - t_2^a) e^{i 2\phi_2^a} \nu_2^a. \n\end{equation} \nHere $t_i^a$ are the times of propagation in the $a$-layer,\n$t_i^a = L_a\/ v_i^a$, $v_i^a$ are the group velocities, and $\phi_i^a$ are the phases \nacquired by the eigenstates, taken \nat the average energies of the packets. \nWe assume that at the beginning ($t=0$) the shape factors of $\nu_1^a$ \nand $\nu_2^a$ are equal.
\n\nCrossing the border between $a$ and $b$, each eigenstate $\nu_i^a$ splits into the eigenstates $\nu_j^b$ of the layer $b$:\n\begin{equation} \n\label{splitab} \n\nu_1^a = c_\Delta \nu_1^b + s_\Delta \nu_2^b, \, \, \, \, \n\nu_2^a = c_\Delta \nu_2^b - s_\Delta \nu_1^b,\n\end{equation}\nwhere $\Delta \equiv \theta_b - \theta_a$. Inserting (\ref{splitab}) into (\ref{border}) we obtain the state at the beginning of layer $b$. By the end of layer $b$, the state (\ref{border}) evolves into \n\begin{eqnarray} \n\label{border-ba}\n\nu (L,t) = \n\left[ c_a' c_\Delta f(t - t_1^a - t_1^b) e^{i 2(\phi_1^a + \n\phi_1^b)} \n- s_a' s_\Delta f(t - t_2^a - t_1^b) e^{i 2(\phi_2^a + \phi_1^b)} \n\right] \nu_1^b + \n\nonumber \\\n\left[c_a' s_\Delta f(t - t_1^a - t_2^b) \ne^{i 2(\phi_1^a + \phi_2^b)} \n+ s_a' c_\Delta f(t - t_2^a - t_2^b) e^{i 2(\phi_2^a \n+ \phi_2^b)} \right] \nu_2^b. \n\end{eqnarray} \nWe factor out the common phase factor \n$e^{i 2(\phi_1^a + \phi_1^b)}$ \nand introduce the phase differences\n\begin{equation} \label{phase-redefinition}\n \phi^a \equiv \phi_2^a - \phi_1^a \, \, \, \, \text{and} \, \, \, \, \phi^b \equiv \phi_2^b - \phi_1^b.\n\end{equation}\nThen we make the time shift $t' = t - t_1^a - t_1^b$, which allows us to express the results in terms of the relative \ntime shifts of the eigenstates 1 and 2 in the layers $a$ and $b$:\n\begin{equation} \label{time-redefinition}\n t^a \equiv t_2^a - t_1^a \, \, \, \, \text{and} \, \, \, \, t^b \equiv t_2^b - t_1^b.\n\end{equation}\n\n\nUsing these quantities and projecting (\ref{border-ba}) \nonto $\nu_e$, we obtain the amplitude \nof the $\nu_e \rightarrow \nu_e$ transition\nover the two-layer profile:\n\begin{eqnarray} \n\label{eeamp1}\nA_{ee} & = &\nc_\Delta \left[\n c_a'c_b' f(t') e^{- i\phi}\n+ s_a's_b' f(t' - t^a - t^b) e^{i\phi} \n\right] \n\nonumber \\\n& + & \ns_\Delta \left[\n c_a' s_b' f(t' - t^b)
e^{-i\\phi'}\n- s_a'c_b'f(t' - t^a) e^{i\\phi'} \n\\right]. \n\\end{eqnarray} \nIn this expression we put out the factor \n$e^{i(\\phi^a + \\phi^b)}$ and introduce\n\\begin{equation} \\label{phase-redefinition-2}\n \\phi \\equiv \\phi^a + \\phi^b \\, \\, \\, \\, \\text{and} \\, \\, \\, \\, \\phi' \\equiv \\phi^a - \\phi^b.\n\\end{equation}\n\\subsection{Specific examples}\nLet us consider specific cases. \n\n1. Suppose $|t^a| \\gg \\sigma_t$, that is, \nthe separation of the WPs in the layer $a$ \nis larger than the coherence time, which means that the \nWPs are completely separated and coherence is lost. Suppose also that \n$|t^b| \\ll \\sigma_t$, {\\it i.e.} loss \nof coherence in the layer $b$ is negligible, $|t^b| \\approx 0$. Then taking into account that the \nshape factors $f(t')$ and $f(t - t^a)$ do not overlap \nwe obtain from (\\ref{eeamp1}) \n\\begin{eqnarray}\n\\label{eeamp2}\nP_{ee} & = & \\int dt |A_{ee}|^2 = \n\\int dt |f(t')|^2 \n\\left|c_a' c_b' c_\\Delta + \nc_a' s_b' s_\\Delta e^{ i 2\\phi^b} \\right|^2\n\\nonumber \\\\\n& + & \\int dt |f(t' - t^a)|^2 \n\\left| s_a's_b' c_\\Delta e^{i 2\\phi^b} \n- s_a'c_b's_\\Delta \\right|^2. \n\\end{eqnarray} \nWe assume that change of the oscillation phase $\\phi^b (t)$ along the \nWP is negligible, so that the oscillatory factors can be\nput out of the integral and take into account that \nintegrations of the moduli squared of the shape factors give $1$ \ndue to normalization. As a result, we obtain \n\\begin{equation}\n\\label{eeamp2}\nP_{ee} = \n\\left|c_a' c_b' c_\\Delta + \nc_a' s_b' s_\\Delta e^{ i 2\\phi^b} \\right|^2\n+ \\left| s_a's_b' c_\\Delta e^{i 2\\phi^b} \n- s_a'c_b's_\\Delta \\right|^2. \n\\end{equation} \n\nComputing the amplitudes in the energy space, we need to use the plane waves so that $f(t) =1 $, and the energy\ndependent phases $\\phi^a (E)$ and $\\phi^b (E)$. 
\nThe probability is given by \n\begin{equation}\n\label{eepr1}\nP_{ee} = \int dE F(E) |A_{ee}|^2,\n\end{equation}\nwhere $F(E) = |f(E)|^2$ is the energy spectrum of the neutrinos \nthat corresponds to the WP in the $E$-$p$ space,\nand $f(E)$ is the Fourier transform of $f(t)$. \nThe amplitude (\ref{eeamp1}) can be written as \n\begin{equation} \n\label{eeamp12}\nA_{ee} = e^{-i\phi(E)} A_1 + e^{i\phi'(E)} A_2, \n\end{equation}\nwith\n\begin{equation} \n\label{a1a2}\nA_1 = c_a' \left[c_b' c_\Delta \n+ s_b' s_\Delta e^{i2\phi^b(E)} \right], \n\, \, \, \\nA_2 = s_a'\left[s_b'c_\Delta e^{i 2\phi^b(E)} \n- c_b' s_\Delta \right]. \n\end{equation} \nHere $A_i$ are the oscillation amplitudes $\nu_i^a \rightarrow \nu_e$ of the eigenstates \nof layer $a$ in layer $b$. \nInserting (\ref{eeamp12}) into (\ref{eepr1}) we have \n\begin{equation}\n\label{eepr3}\nP_{ee} = \int dE F(E) \left[ |A_1|^2 + |A_2|^2 + \n2|A_1 A_2| \cos (2\phi^a + \chi) \right], \n\end{equation}\nwhere $\chi \equiv \arg (A_1^* A_2)$ depends on the phase $\phi^b$. \nTo compare this with the consideration in the $x$-space, we assume that \n$\phi^a \gg 1$ (which corresponds to a large delay in layer $a$). In contrast, \nthe phase $\phi^b$ is relatively small, \n$\phi^b \ll 1$. In this case we can put all the terms \nbut those which depend on $\phi^a$ \nin (\ref{eepr3}) out of the integral, taking them\nat the average value of $E$ in the spectrum. Then, \ndue to the normalization $\int dE F(E) = 1$, the expression \n(\ref{eepr3}) becomes\n\begin{equation}\n\label{eepr4}\nP_{ee} = |\bar{A}_1|^2 + |\bar{A}_2|^2 + \n2|\bar{A}_1 \bar{A}_2| \int dE F(E)\cos (2\phi^a + \chi). \n\end{equation}\nHere $\bar{A}_i$ are the amplitudes at the average value of the energy. \nThe last term is suppressed as $1\/\phi^a$ and is negligible \nfor $\phi^a \gg 1$.
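As a sanity check of the decohered survival probability derived above, the sketch below (ours, with illustrative angles) verifies that for equal mixing in the two layers ($\Delta = 0$) it reduces to the fully averaged two-flavor survival probability $1 - \frac{1}{2}\sin^2 2\theta$, independently of $\phi^b$, and that it always respects the unitarity bounds:

```python
import cmath
import math

def P_ee(th_a, th_b, phi_b):
    """Decohered two-layer survival probability, Eq. (eeamp2):
    complete WP separation in layer a, negligible separation in layer b."""
    ca, sa = math.cos(th_a), math.sin(th_a)
    cb, sb = math.cos(th_b), math.sin(th_b)
    cd, sd = math.cos(th_b - th_a), math.sin(th_b - th_a)
    e = cmath.exp(2j * phi_b)
    return (abs(ca * cb * cd + ca * sb * sd * e) ** 2
            + abs(sa * sb * cd * e - sa * cb * sd) ** 2)

th = math.radians(33.0)                   # illustrative mixing angle
for phi in (0.0, 0.7, 2.1):
    # No jump (theta_a = theta_b): the averaged survival probability,
    # independent of phi_b.
    assert math.isclose(P_ee(th, th, phi), 1.0 - 0.5 * math.sin(2 * th) ** 2)
    # Unitarity bound for a generic jump:
    assert 0.0 <= P_ee(th, math.radians(12.0), phi) <= 1.0
```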
Thus, the result (\ref{eepr4}), with $A_i$ in (\ref{a1a2}), coincides with that in the $x$-representation (\ref{eeamp2}). \n\nIn the example considered above, the result does not depend\non $L_a$, and $L_a \rightarrow \infty$ is possible. In a sense,\none can consider this as the case of infinite coherence, since\noscillations\ncan be observed at an arbitrarily long distance from the source.\nThus, the density jump and the splitting of the eigenstates induce or restore\nthe interference and therefore the oscillations. Here the suppression of\npropagation\ncoherence is determined\nby the properties of the layer $b$. That is, the problem is reduced to\nthat of a single layer with constant density.\nDepending on the initial mixing and $\Delta \theta$, crossing\nthe density jump can lead to even stronger\ninterference than at the beginning of the layer $a$.\n\nThe situation described above is realized, {\it e.g.}, \nfor the solar and supernova neutrinos oscillating inside the Earth, \nwhere $a$ is the vacuum between the star and the Earth, and $b$ is the Earth. \nFor the high energy part of the solar neutrino spectrum $c_a'^2 \ll s_a'^2$.\n\n2. 
Suppose the layer $b$ has a density at which the\nvelocities of the eigenstates are equal, so that $t^b = 0$ and coherence is maintained for arbitrarily large $L_b$.\nFurthermore, suppose in the layer $a$ the wave packets\nare completely separated, $L_a \rightarrow \infty$, which means that the dependence on $\phi^a$ disappears.\nThen according to (\ref{eeamp1}),\n\begin{eqnarray}\n\label{eeampcomp}\nA_{ee} & = &\nc_a' c_b' c_\Delta f(t) e^{i \phi} - s_a'c_b' s_\Delta f(t - t^a) e^{i\phi'}+\n\nonumber \\\n& & c_a' s_b' s_\Delta f(t) e^{-i\phi'}\n+ s_a's_b' c_\Delta f(t - t^a) e^{i\phi}.\n\end{eqnarray}\nThe interference of the overlapping parts\n(they have the same argument in $f$) equals\n\begin{equation}\n\label{int1112}\nt: \, \, 2 c_a'^2 c_b' s_b' s_\Delta c_\Delta \cos 2\phi^b (t),\n\, \, \, \, \, \,\n(t - t^a): \, \, - 2 s_a'^2 c_b' s_b' s_\Delta c_\Delta\n\cos 2\phi^b (t - t^a).\n\end{equation}\nAt a detector both oscillation phases\nare equal to $\phi^b$, and the sum of the interference terms (after integration over\ntime)\nis\n\begin{equation}\n\label{intsum}\n\cos 2\theta_a \sin 2 \theta_b s_\Delta c_\Delta \cos 2\phi^b.\n\end{equation}\nThis can be compared to the depth of interference\nat the beginning of the layer $a$: $0.5\sin^2 2\theta_a$.\n\nRegions of parameters exist where the depth increases after\npropagation in the two-layer profile. \\\n\n\nApart from the restoration of the interference, there is another phenomenon\nwhich will be important for our further consideration, namely, the change of the shape of the WP at\ncrossing the border between the layers:\n\n\n\begin{itemize}\n\n\item\nEach packet $\nu_i^a$ becomes a two-component one with an overall size\nextended by the relative delay acquired in the layer $a$, $t^a$.\n\n\item\nAccording to (\ref{eeampcomp})\nthe amplitudes of the components of $\nu_1^b$ equal\n$(c_a' c_\Delta, \, \, - s_a' s_\Delta)$ for the earlier and the later\nones respectively. 
The amplitudes of the components of $\nu_2^b$ are\n$(c_a' s_\Delta, \, \, s_a' c_\Delta)$.\n\n\item\nThe shape depends on the size of the density jump.\nIf, {\it e.g.}, the jump is small, $s_\Delta \sim s_a' \sim \epsilon \ll 1$,\nwe find from the previous item the amplitudes\nfor $\nu_1^b$: $(1, \, \epsilon^2)$,\nand for $\nu_2^b$: $(\epsilon, \, \epsilon)$.\nIf $s_\Delta \approx c_\Delta$, the packets will have similar forms with similar amplitudes:\n$\nu_1^b$: $(c_a', \, - s_a')$ and $\nu_2^b$: $(c_a', \, s_a')$.\nIf $s_\Delta = 1$, $c_\Delta = \epsilon$, we have\n$\nu_1^b$: $(\epsilon, \, \epsilon)$ and\n$\nu_2^b$: $(1, \, \epsilon^2)$.\n\n\end{itemize}\n\n\n3. Consider the case $t^a = - t^b$, when the separation in the layer $b$\ncompensates the separation in $a$. In spite of this compensation,\nand in contrast to the constant density or adiabatic cases,\nthere is no complete overlap at the end of the layer $b$\ndue to the splitting of the eigenstates. Only the WPs of the $A_{11}$ and $A_{22}$\ncomponents overlap completely, where the subscripts indicate the eigenstates in which a given final component propagated in the layers $a$ and $b$ ({\it e.g.} $A_{11}$ is the amplitude of the WP $\nu_1^a \rightarrow \nu_1^b$). The component $A_{12}$\nis shifted forward by $t^a$ and $A_{21}$ backward by $t^a$ with respect\nto the\noverlapping components,\nor vice versa, depending on the sign of $t^a$.\nIf $|t^a| > \sigma_x$, only $A_{11}$ and $A_{22}$ interfere and\nthe interference term equals\n\begin{equation}\n\label{intdd}\n0.5 \sin 2\theta_a \sin 2\theta_b c_\Delta^2 \cos 2(\phi^a + \phi^b),\n\end{equation}\nwhich again can be larger than the interference at\nthe beginning of the layer $a$. \\\n\n4. In the case $t^a \approx 0$, there is no separation and no loss\nof coherence in the layer $a$. The components $A_{11}$ and $A_{21}$,\nas well as $A_{22}$ and $A_{12}$, overlap, and there is no\nsplitting of the eigenstates at the border $a \rightarrow b$. 
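The limiting cases above can be cross-checked numerically. Below is a minimal sketch (with illustrative mixing angles and phases, not tied to any specific profile) of the probability (\ref{eeamp2}) of example 1; in the two limits $\Delta\theta = 0$ (no jump) and $\phi^b = 0$ it reduces, as expected, to the averaged one-layer value $c_a'^4 + s_a'^4$.

```python
import numpy as np

# Check of the probability (eeamp2) for example 1: packets fully
# separated in the layer a, negligible separation in the layer b.
# Mixing angles and phases are illustrative; Delta = theta_b - theta_a.

def P_ee(theta_a, theta_b, phi_b):
    ca, sa = np.cos(theta_a), np.sin(theta_a)
    cb, sb = np.cos(theta_b), np.sin(theta_b)
    D = theta_b - theta_a
    cD, sD = np.cos(D), np.sin(D)
    A1 = ca * cb * cD + ca * sb * sD * np.exp(2j * phi_b)
    A2 = sa * sb * cD * np.exp(2j * phi_b) - sa * cb * sD
    return abs(A1) ** 2 + abs(A2) ** 2

th_a = 0.5
avg = np.cos(th_a) ** 4 + np.sin(th_a) ** 4   # decohered one-layer value

p_nojump = P_ee(th_a, th_a, phi_b=0.7)  # no jump: averaged one-layer value
p_phib0 = P_ee(th_a, 1.1, phi_b=0.0)    # phi^b = 0: the jump rotation is undone
p_gen = P_ee(th_a, 1.1, phi_b=0.7)      # generic point: 0 <= P <= 1
```

The $\phi^b = 0$ limit works because $c_b' c_\Delta + s_b' s_\Delta = \cos\theta_a$ and $s_b' c_\Delta - c_b' s_\Delta = \sin\theta_a$ for $\Delta = \theta_b - \theta_a$.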
\n\n\nThere is no unambiguous way of introducing the coherence length in the presence of a density jump in the profile. Recall that the equivalence of the $E$- and $x$-representations \nis established at the level of the averaging of the oscillation phase and the separation of the WPs. \nIn the case of a single jump, there are two different oscillation phases and two different delays due to the splitting of the eigenstates. The introduction of a single effective phase and effective delay is non-trivial. \nAs we will see in sect.~\ref{parametric-oscillations}, this can be done for periodic structures with density jumps.\n\n\subsection{Wave packets in the $x$- and $E$-representations}\n\nThe splitting of the eigenstates and the delays of the split components lead\nto substantial modifications of the shape factors\nof the total WPs, $\psi_i(x,t)$, in the $x$-space. \nAt the same time, the Fourier transforms of $\psi_i(x,t)$\ndetermine the neutrino energy spectrum. Let us show that despite the substantial modifications\nof $\psi_i(x,t)$, the energy spectrum remains unchanged, as it should be \cite{Stodolsky:1998tc}.\n\nSuppose $f(E)$ is the Fourier transform of $f(t)$;\nthen in the $E$-representation the initial state (\ref{init}) is\n$$\n\nu(x=0,E) = f(E)(c_a' \nu_1^a + s_a' \nu_2^a).\n$$\nThe energy spectrum of this state equals\n\begin{equation}\nF(E) = |f(E) c_a'|^2 + |f(E) s_a'|^2 = |f(E)|^2.\n\label{eq:spectr1}\n\end{equation}\n\nConsider the simplest element of the profile\nwhich leads to a modification of the shape factor:\nthe layer $a$ with constant density and a\ndensity jump at the end\n(the general case is just a repetition of this element). 
State (\\ref{border-ba}), at the beginning of layer $b$ ($t^b_i = 0$ and $\\phi^b_i = 0$) in terms of $\\psi_i(x=L_a,t)$ is\n\\begin{equation}\n\\nu(L_a,t) = \\psi_1(L_a,t) \\nu_1^b + \\psi_2(L_a,t) \\nu_2^b,\n\\label{total-WPs}\n\\end{equation}\nwhere the total WPs equal\n\\begin{eqnarray} \\label{total-WPs-2}\n\\psi_1(L_a,t) & = &\n[c_a' c_\\Delta f(t) - s_a' s_\\Delta f(t - t^a) e^{2i \\phi^a}],\n\\nonumber\\\\\n\\psi_2(L_a,t) & = &\n[c_a' s_\\Delta f(t) + s_a' c_\\Delta f(t - t^a) e^{2i \\phi^a}].\n\\label{eq:totwp12}\n\\end{eqnarray}\nHere we made time shift by $t_1^a$ and put out common phase\n$\\phi^a_1$.\nIn the beginning, $\\psi_i$ are the elementary WPs, $f(t) \\nu_i^a$, as in (\\ref{init}), but along the propagation their widths increase and their forms change.\n\nIn $E$-space, the state just after crossing the density jump, $\\nu(L_a)$, in Eq.~(\\ref{total-WPs}), is:\n\\begin{equation}\n\\nu(L_a,E) = \\psi_1(L_a,E) \\nu_1^b + \\psi_2(L_a,E) \\nu_2^b,\n\\label{eq:statexb}\n\\end{equation}\nThe Fourier transform of\n$f(t - t^a)$ is $f(E) e^{i E t^a}$, so that\n\\begin{eqnarray}\n\\psi_1(L_a,E) & = &\n[c_a' c_\\Delta - s_a' s_\\Delta e^{- i E t^a + 2i \\phi^a}] f(E) ,\n\\nonumber\\\\\n\\psi_2(L_a,E) & = &\n[c_a' s_\\Delta + s_a' c_\\Delta e^{- iEt^a + 2i \\phi^a}] f(E).\n\\nonumber\n\\end{eqnarray}\nUsing these expressions, we find the energy spectrum\n$$\nF(E) = |\\psi_1(L_a,E)|^2 + |\\psi_2(L_a,E)|^2 = |f(E)|^2,\n$$\nwhich coincides with (\\ref{eq:spectr1}).\n\n\n\\section{Coherence in the castle-wall profile}\n\\label{parametric-oscillations}\n\nOne can introduce the coherence length in the case of a periodic structure with a sharp density change. The castle-wall (CW) profile \nis multiple repetitions of the two layer structure considered sect.~\\ref{coherence and adiabaticity violation}. 
\nThis is an explicitly solvable example \nof a periodic profile, which can give an idea about the effects in \nprofiles with a sine (or cosine) type of dependence on distance \cite{Akhmedov:1988kd,Akhmedov:1999ty,Akhmedov:1999va}. \n\n\subsection{Parametric oscillations in the energy-momentum space} \label{parametric1}\n\nThe oscillation probability after crossing $n$ periods can be obtained from \nthe probability for a single period computed above. Indeed, the amplitude $A_{ee}$ (\ref{eeamp12}),(\ref{a1a2}) can be written as \n\begin{equation} \nonumber\nA_{ee} = R + i I_3, \n\end{equation}\nwhere the real and imaginary parts of the amplitude equal, respectively ($c_a \equiv \cos \phi^a$, $s_a \equiv \sin \phi^a$, and similarly for $b$), \nsee \cite{Akhmedov:1999ty,Akhmedov:1999va}, \n\begin{equation} \n\label{R}\nR=c_{a} c_{b} - s_{a} s_{b} \cos \left(2 \theta_a - 2 \theta_b\right),\n\end{equation} \nand \n\begin{equation} \nonumber\n\label{third-component}\nI_3 = - (s_a c_b \cos 2 \theta_a + s_b c_a \cos 2 \theta_b).\n\end{equation}\nThen the probability is given by\n\begin{equation} \nonumber\n\label{eeprob}\nP_{ee} = R^2 + I_3^2. \n\end{equation}\n\nSimilarly to $A_{ee}$, one can find the amplitudes \n$A_{e \mu}$, $A_{\mu e}$ and $A_{\mu \mu}$ which compose \nthe evolution matrix $U_L$ over one period. The matrix \nallows one to reconstruct the Hamiltonian integrated over the period: \n$i H_{CW} L = I - U_L$. Diagonalization of this Hamiltonian \ngives the eigenstates, {\it i.e.} the mixing and \nthe difference of the eigenvalues, \nwhich allow one to write the transition probability \nafter passing $n$ periods of size $L$, $P = |U(nL)_{\alpha \beta}|^2$, as \n\begin{equation} \n\label{P-CW}\nP(\nu_{\alpha} \rightarrow \nu_{\beta},nL) = \n\left(1-\frac{I_3^2}{1 - R^2} \right) \sin^2 \left(n \xi \right).\n\end{equation}\nHere $\xi$ is the effective phase acquired over one period:\n\begin{equation}\nonumber\n\xi \equiv \arccos R. 
\n\\end{equation}\nThe factor in front of sine gives the depth of the parametric \noscillations. The depth is maximal at $I_3 = 0$, or explicitly\n\\begin{equation} \n\\label{condition1}\n s_a c_b \\cos 2 \\theta_a +c_a s_b \\cos 2 \\theta_b = 0, \n\\end{equation}\nwhich is the condition of the parametric resonance. \n For arbitrary mixings the condition is satisfied \nfor values of phases\n\\begin{equation} \\nonumber\n\\label{condition2}\n\\phi^a = \\frac{\\pi}{2}+k \\pi \\hspace{0.5 cm} \\text{and} \n\\hspace{0.5 cm} \n\\phi^b = \\frac{\\pi}{2}+k' \\pi. \n\\end{equation}\nIn general, $I_3 = 0$ requires specific correlations between \nphases and mixings. The transition probability (\\ref{P-CW}) reaches maximum, $P = 1$, when \n\\begin{equation} \\nonumber\n\\label{condition3}\nn \\xi = \\frac{\\pi}{2} + k \\pi.\n\\end{equation}\n\nIn what follows for illustration of the results we will use \na castle-wall profile with $n = 5$ periods \nand the following parameters: \n\\begin{equation} \n\\label{eq:paramet}\nV_a = 5.3 \\times 10^{-4} \\, \\text{eV}^{2}, \\, \\, \\, \\, \\, L_a = 4 \\, {\\rm km}, \\, \\, \\, \nV_b = 2 \\times 10^{-5} \\, \\text{eV}^{2}, \\, \\, \\, L_b = 2 \\, {\\rm km}.\n\\end{equation}\nWe take the vacuum oscillation parameters $\\theta=8.5^{\\circ}$ and \n$\\Delta m^2=2.5 \\times 10^{-3}$ $\\text{eV}^2$. \n\nThe transition probability of parametric oscillations \nEq.~(\\ref{P-CW}), as function of energy, is shown in fig.~\\ref{resonance1}. 
The probability\nreaches its maximum at $E \sim 2.2$ (MSW resonance), $3.1$ and $7.7$ MeV, \nwhere condition (\ref{condition1}) is met.\n\n\begin{figure} \n\centering\n\includegraphics[width=0.7\linewidth]{new-prob.png}\n\caption{\nTransition probability of the $\nu_e \rightarrow \nu_{\mu}$ parametric oscillations as a function of neutrino energy for the castle-wall profile with $5$ periods.\nParameters of the CW profile are given in Eq.~(\ref{eq:paramet}).\nWe used $\theta=8.5^{\circ}$ and $\Delta m^2=2.5 \times 10^{-3}$ $\text{eV}^2$.}\n\label{resonance1}\n\end{figure}\n\n\n\subsection{Coherence length in a castle-wall profile}\n\label{coherence-castle-wall}\n\nIn the energy representation we can repeat the same procedure for determining the \ncoherence length as in the case of constant density. \nAccording to Eq.~(\ref{P-CW}), the phase of the parametric \noscillations equals\n\begin{equation} \nonumber\n\label{CW-phase}\n \phi_n = n \xi = n L \frac{\xi}{L} . \n\end{equation}\nTherefore the effective difference of the eigenvalues equals \n\begin{equation} \nonumber\n\label{eff-eig}\n\Delta H_{cw} = \frac{\xi}{L}. \n\end{equation}\n\nWe introduce $n_{coh}$, the number of periods over which the coherence \nis maintained, so that $L_{coh}=n_{coh} L$. \nUsing Eq.~(\ref{const-density}) \nfor the coherence length we find\n\begin{equation} \nonumber\n\label{phase-var}\nn_{coh} = \frac{\pi}{\sigma_E}\n\left|\frac{d \xi}{d E} \right|^{-1}. 
\n\\end{equation}\nAccording to (\\ref{const-density}) and (\\ref{R})\n\\begin{equation} \n\\label{D-xi}\n\\frac{d \\xi}{d E} = \n-\\frac{1}{\\sqrt{1-R^2}} \\frac{d R}{d E}, \n\\end{equation}\nand consequently, \n\\begin{equation} \n\\label{p-coh}\nn_{coh}(E)=\\frac{\\pi \\sqrt{1-R^2}}{2 \\sigma_E \\left| \n\\frac{d R}{d E} \\right|}.\n\\end{equation}\n\nThe condition for infinite coherence length, \n$L_{coh} \\propto n_{coh} \\rightarrow \\infty$, is \n\\begin{equation}\n\\label{divergency} \n\\left|\\frac{d \\xi}{d E} \\right| = 0, \\, \\, \\, \\, \n{\\rm or } \\, \\, \n\\frac{d R(E)}{d E} = 0, \\, \\, \\, (R \\neq 1).\n\\end{equation}\nSince $R(E)$ is an oscillatory function of $E$, and therefore has several \nmaxima and minima at certain energies $E_0^i$, the condition \n(\\ref{divergency}) is satisfied at these energies \n$E = E_0^i$.\n\nAs in the case of constant density, averaging of probability over the \nenergy intervals $\\sigma_E$ leads to the appearance \nof the decoherence factors \n$D_i(E)$, (\\ref{decoh}), centered at energies $E_0^i$. Let us find the widths \nof the peaks of $D_i(E)$ for fixed length of trajectory \n$ n L$. 
\nAs in the constant density case, instead of averaging over \n$E$ we perform the averaging over \nthe effective phase in the interval which corresponds to \n$2\sigma_E$: \n\begin{equation}\n\label{eq:dphi-n}\n\delta \phi_n = 2\sigma_E n \frac{d \xi}{d E}.\n\end{equation}\nThe decoherence factor \n\begin{equation}\nonumber\n D(\delta \phi_n (E)) = \frac{\sin \delta \phi_n}{\delta \phi_n}\n\end{equation}\nhas a width $\Gamma$ determined by the condition in (\ref{Gamma}).\nPlugging \n$\delta \phi_n$ from (\ref{eq:dphi-n}) into this condition we obtain\n\begin{equation} \n\label{half-max1}\n\left| \frac{d \xi(E_0^i +\Gamma)}{d E} \right| \n= \frac{1}{\sigma_E n}.\n\end{equation}\nFor narrow peaks, $\Gamma \ll E_0^i$, \nwe can expand the left hand side of Eq.~(\ref{half-max1}) \naround $E_0^i$ and take \ninto account that \n$d \xi (E_0^i)\/d E = 0$. Then Eq.~(\ref{half-max1})\nreduces to \n\begin{equation}\n\left|\frac{d^2 \xi (E)}{d E^2}\right|_{E=E_0^i} \n\times \Gamma = \frac{1}{\sigma_E n}.\n\label{eq:condigamma}\n\end{equation}\nDifferentiating $d\xi\/dE$ in (\ref{D-xi}) over $E$ we have \n\begin{equation}\n\label{eq:doubleder}\n\frac{d^2 \xi (E)}{d E^2}\bigg{|}_{E=E_0^i} = \n-\frac{1}{\sqrt{1- R^2(E_0^i)}} \frac{d^2 R(E_0^i)}{d E^2}.\n\end{equation}\nThen the insertion of this into (\ref{eq:condigamma}) gives\n\begin{equation} \nonumber\n\label{CW-peak}\n\Gamma \approx \n\frac{\sqrt{1-R^2(E_0^i)}}{ \sigma_E n \left| \n\frac{d^2 R(E_0^i)}{d E^2} \right|}. \n\end{equation}\n\n\nIn the limits $E_0^i \ll E_{a},E_{b}$ and $E_0^i \gg E_{a},E_{b}$,\nwhere $E_{a}$ and $E_{b}$ are the MSW resonance energies, we have \n$\theta_a \approx \theta_b$, and therefore \n\begin{equation} \nonumber\n\label{approx-R}\nR \approx \cos (\phi^a + \phi^b). 
\n\\end{equation}\nSince $d\\xi\/dE \\propto d(\\phi^a + \\phi^b)\/dE = 0$ at $E = E_0^i$ \nthe equation (\\ref{eq:doubleder}) becomes \n\\begin{equation}\\nonumber\n\\frac{d^2 \\xi}{d E^2} \\bigg{|}_{E=E_0} \n\\approx \\frac{d^2 (\\phi^a + \\phi^b)}{d E^2}\\bigg{|}_{E=E_0},\n\\end{equation}\nand the width equals\n\\begin{equation} \\nonumber\n\\label{approx-gamma}\n\\Gamma_i \\approx \\frac{1}{\\sigma_E n \n\\frac{d^2 (\\phi^a + \\phi^b)}{d E^2} \\big{|}_{E=E_0^i} }.\n\\end{equation}\nThe widths $\\Gamma^i$ are narrower at lower energies \ndue to faster variations of the half-phases $\\phi^a$ \nand $\\phi^b$. Similarly to the case of constant \nmatter density, the ranges of weak averaging effect \nof transition probability are smaller at lower $E_0^i$. \n\n\\subsection{Infinite coherence and parametric resonance}\n\nThe energies of infinite coherence, $E_0^i$, \nare correlated to the energies of parametric resonances, \ndetermined by the condition $I_3=0$. This correlation is analogous to the one in Eq.~(\\ref{MSW}) for constant density.\nTo show this, we consider the expressions for $d R\/d E$ and $I_3$ \nin three different regions of energies. Starting with the explicit expression for $d R\/d E$,\n\\begin{multline} \\label{dR}\n\\frac{d R}{d E}=-s_a c_b \\frac{d \\phi^a}{d E}-c_a s_b \\frac{d \\phi^b}{d E} -\nc_a s_b \\cos(2\\theta_a-2 \\theta_b) \\frac{d \\phi^a}{d E}\\\\\n-s_a c_b \\cos(2\\theta_a-2 \\theta_b) \\frac{d \\phi^b}{d E} + \ns_a s_b \\sin(2\\theta_a-2 \\theta_b) \\frac{d (2\\theta_a-2 \\theta_b)}{d E}.\n\\end{multline}\nWe obtain the following\n\\begin{enumerate}\n\n\\item Far from the MSW resonances of both layers:\n$E \\ll E_{a}, \\, E_{b}$, \nwhere $\\theta_a \\approx \\theta_b \\approx \\theta$, or \n$E \\gg E_{a}, \\, E_{b}$ where \n$\\theta_a \\approx \\theta_b \\approx \\pi\/2$. \nIn both cases $\\cos (2\\theta_a - 2\\theta_b) \\approx 1$,\nand consequently, $R \\approx \\cos (\\phi^a + \\phi^b)$. 
Therefore \n\begin{equation} \nonumber\n \frac{d R}{d E} = \n- \sin (\phi^a + \phi^b)\frac{d (\phi^a + \phi^b) }{d E}. \n\end{equation}\nOn the other hand, \n\begin{equation}\nonumber\nI_3 \approx -\sin (\phi^a + \phi^b) \cos 2\theta_a.\n\end{equation}\nThus the condition $I_3 \rightarrow 0$ requires \n$\sin (\phi^a + \phi^b) = 0$ and consequently \n$\frac{d R}{d E} \rightarrow 0$, \nwhich implies $n_{coh} \rightarrow \infty$.\n\n\item At the MSW resonances: when $E$ coincides with one of the resonance \nenergies, {\it e.g.} $E \approx E_{a}$, \nand is far enough from the other one, $E_{b}$, then\n$\theta_a=\pi\/4$, $\theta_b = \theta \approx 0$ (small mixing) or $\pi\/2$, and $d \theta_b\/d E=0$. Because $E_{a}\sim E_{0a}$, we can take $d \phi^a\/d E=0$ in (\ref{dR}), which then becomes\n\begin{equation} \label{dR1}\n \frac{d R}{d E}=(-c_a s_b \mp s_a c_b \cos 2\theta_a) \frac{d \phi^b}{d E}\pm 2s_a s_b \sin 2\theta_a \frac{d \theta_a}{d E}.\n\end{equation}\nHere the upper sign is for $\theta_b \approx 0$ and the lower one for $\theta_b \approx \pi\/2$. We can rewrite (\ref{dR1}) as\n\begin{equation} \label{dR2}\n \frac{d R}{d E}=\pm I_3 \frac{d \phi^b}{d E}\pm 2s_a s_b \sin 2\theta_a \frac{d \theta_a}{d E}.\n\end{equation}\nAssuming that $\sigma_E d \phi^b\/d E \sim 1$ and $\sigma_E d \theta_a\/d E \ll 1$, we find that the second term in (\ref{dR2}) shifts the zero of $d R\/d E$ relative to the zero of $I_3$. Nevertheless, $E_0$ is close to $E_R$, and the closer they are, the smaller $d \theta_a\/d E$ is relative to $d \phi^b\/d E$.\n\n\item Between the MSW resonances:\n$E_{a} < E< E_{b}$ or $E_{a} > E > E_{b}$. \nIf the resonances are well separated, \nwe have $|\theta_a - \theta_b| \approx \frac{\pi}{2}$ or \n$\cos (2\theta_a - 2\theta_b) \approx -1$. 
Therefore, \n\begin{equation}\nonumber\nR \approx \cos (\phi^a - \phi^b), \n\end{equation}\nand \n\begin{equation}\nonumber\n \frac{d R}{d E} = \n- \sin (\phi^a - \phi^b)\frac{d (\phi^a - \phi^b) }{d E}. \n \end{equation}\nOn the other hand, \n\begin{equation}\nonumber\nI_3 \approx -\sin (\phi^a - \phi^b) \cos 2 \theta_a.\n\end{equation}\nConsequently, $I_3 \rightarrow 0$ implies \n$d R\/ d E \rightarrow 0$, and \n$n_{coh} \rightarrow \infty$. \n\n\end{enumerate}\n\nIn the left panel of fig.~\ref{fig-p-coh} we show $d R\/d E$\nand $I_3$ as functions of neutrino energy.\n$I_3$ vanishes at $2.25$, $3.1$ and $ 7.7$ MeV, while the zeros of \n$d R\/d E$, $E_0^i$, are at $2.3$, $3.3$ and $7.7$ MeV. \nThese coincidences show the correlation between the parametric resonance and infinite coherence.\nThe MSW resonance energies are $E_{a}=2.25$ MeV \nand $E_{b}=62.7$ MeV, the latter being outside the range of the plot.\n\n\begin{figure} \n\centering\n\includegraphics[width=1\linewidth]{joint-new.png}\n\caption{Left panel: $I_3$ and $\frac{d \xi}{d E}$ as functions \nof neutrino energy. Right panel: Dependence of $n_{coh}$ in (\ref{p-coh}) \non neutrino energy with $\sigma_E\/E=0.1$. The parameters of the CW profile are the same as in fig.~\ref{resonance1}. \n}\n\label{fig-p-coh}\n\end{figure}\n\nIn the right panel of fig.~\ref{fig-p-coh} we show the dependence of $n_{coh}$ on $E$\nfor the CW profile with the parameters (\ref{eq:paramet}) \nand $\sigma_E\/E=0.1$. $n_{coh}$ diverges at the energies $E_0^i$. Notice that the shape \nof the peaks is similar to the MSW case in fig.~\ref{fig-E0}. Thus, at the parametric resonance energies \nthe averaging of the oscillations is weak, and coherence is enhanced.\n\n\n\subsection{Coherence in the castle-wall \nprofile in the configuration space}\n\nIn the $x$-representation, the picture of the evolution in the CW profile is the following. 
\nAt every border between the layers $a$ and $b$, \na given eigenstate of the layer $a$, $\nu_i^a$, splits into the two eigenstates of the layer $b$: $\nu_i^a \rightarrow \nu_1^b, \nu_2^b$. In turn, at the next border each eigenstate $\nu_j^b$ splits into the eigenstates \nof $a$: $\nu_j^b \rightarrow \nu_i^a$,\n{\it etc}. The amplitudes of the splits are determined by \nthe change of the mixing angles in matter (\ref{splitab}). \n\nAs in subsections~\ref{parametric1} and \ref{coherence-castle-wall}, we consider that a flavor neutrino enters the layer $a$ of the first CW period and \na detector is placed at the border of the $b$ layer of the last period. After crossing $n$ periods of the CW profile, \neach initially produced eigenstate $\nu_i^a$ ($i = 1, 2$) splits into \n$2^{2n -1}$ elementary components, so that at a detector \nthere are $2^{2n}$ elementary components in total. \nThe eigenstate $\nu_i^b$ ($i=1,2$) that arrives \nat a detector (last border) will be a composition \nof $2^{2n -1}$ such components. They will arrive at different moments and have different phases and amplitudes. \nWe will assume that these elementary WPs are short enough and do not spread. Therefore the oscillation phases (phase differences) are the same along the elementary WPs. Each elementary WP has its \"history\" of propagation determined by \nthe type of eigenstate it appeared as in each layer of the profile, \n{\it e.g.} $\nu_1^a \rightarrow \nu_2^b \rightarrow \nu_2^a \rightarrow \nu_1^b \rightarrow \nu_1^b \rightarrow ... \rightarrow \nu_2^b$. \n\nThe elementary WPs compose the total WPs of the eigenstates, $\psi_i$.\nThe splits of the eigenstates and the delays lead to a spread\nof the total WP after crossing the CW profile.\nThis spread affects the level of overlap\nand coherence.\nIf a detector is sensitive to the flavor\n$\nu_f = c_b' \nu^b_1 + s_b' \nu^b_2$, the detected signal is determined by \n\begin{equation}\n\label{eq:sign}\n\int dt \left| c_b' \psi^b_1 (t) + s_b' \psi^b_2 (t) \right|^2. 
\n\\end{equation}\nwhere $\\psi_i^b$ are the total WP of the eigenstates\n$\\nu_i^b$ in a layer $b$ at a detector.\nThe total WP, $\\psi^b_j (t)$, can be found summing up all $2^{2n-1}$ elementary WPs at the detector.\n\n\nA given elementary WP at the detector can be characterized by \n$k_a$ and $k_b$ - the numbers of $a$ and $b$ layers it propagates through as \nthe second eigenstate, {\\it i.e.} $\\nu_2^m$, $m=a,b$. \nCorrespondingly, the numbers of layers \nit crossed as $\\nu_{1}^m$ are $(n - k_a)$ and $(n - k_b)$. Using the same notation for phases, $\\phi_i^{a}$ and $\\phi_i^{b}$ $(i = 1,2)$, as in sect.~\\ref{coherence and adiabaticity violation} $(i = 1,2)$ we can write \nthe total phase of a given elementary packet \nat a detector: \n\\begin{equation}\n\\label{eq:totph}\n\\phi (k_a, k_b) = \\phi_1^{a} (n - k_a) + \\phi_2^{a} k_a \n+ \\phi_1^{b} (n - k_b) + \\phi^{b}_2 k_b. \n\\end{equation}\nThe phase difference between the elementary WPs characterized by \n$k_a, k_b$ and $k_a', k_b'$ equals\n\\begin{equation}\n\\label{eq:totphdif}\n\\phi (k_a, k_b, k_a', k_b') = \n\\phi^{a} (k_a - k_a') + \\phi^{b} (k_b - k_b'), \n\\end{equation}\nwhere we used definitions in (\\ref{phase-redefinition}).\n\nSimilarly one can find the relative time delays of arrival of \nthe packets characterized by $k_a, k_b$ and $k_a', k_b'$ at a detector using the definitions (\\ref{time-redefinition}): \n\\begin{equation}\n\\label{eq:totdeldif}\nt (k_a, k_b, k_a', k_b') = t^a (k_a - k_a') + t^b (k_b - k_b'). \n\\end{equation}\nAs for the case of a single layer (\\ref{eq:phase-delay}), there are the relations between the \ntotal delays (\\ref{eq:totdeldif}) and total oscillation phases (\\ref{eq:totphdif}):\n\\begin{equation}\n\\label{eq:correltp}\nt (k_a - k_a', k_b- k_b') = \n\\frac{d \\phi (k_a - k_a', k_b - k_b')}{d E}, \n\\end{equation}\nand \n\\begin{equation}\n\\label{eq:correltp}\nt (k_a - k_a', k_b- k_b') = \ng(E, V) \\phi (k_a - k_a', k_b - k_b'). 
\n\\end{equation}\n\nSuppose, for definiteness, that $t^a$ and $t^b$ are positive, \nthen maximal relative delay (separation) $t^{max}$ corresponds to \n$|k_a - k_a'| = |k_b - k_b'| = n$: \n$$\nt^{max} = n(t^a + t^b).\n$$\nThis is the time difference of arrival of the fastest \nand the slowest extreme elementary WP. \n\n$t^{max}$ separation determines the spread of whole WPs. The space between the fastest and the slowest WPs is \nfilled in by non-extreme WPs. \nThe extreme WPs are coherent if \n\\begin{equation}\n\\label{eq:nxs-coh}\nn (t^a + t^b) \\leq \\sigma_x,\n\\end{equation}\nwhich gives the coherence number of periods (length) \n\\begin{equation} \n\\label{part-1}\nn_{coh} \\leq \\frac{\\sigma_x}{t^a + t^b}.\n\\end{equation}\nUnder this condition, all other (intermediate) WP-components \nare also coherent (overlap). \nThe condition (\\ref{part-1}) is sufficient but not necessary, since \nthe coherence broken for extreme packets \nstill can hold for intermediate packets. \n\nParameters $k_a, k_b$ uniquely determine the phase and the delay of a given component. \nNumber of components with a given \n$k_a$ and $k_b$ equals\n$$\n ^n_{k_a}C \\times~ ^n_{k_b}C, \n$$\nwhere $^n _{p}C$ is number of combinations of $p$ elements from $n$ elements. Apart from \n$k_a$ and $k_b$\nthe amplitudes of individual components \nare determined by the product of various mixing parameters: the initial and final flavor mixing, {\\it e.g.} $c_a'c_b'$ for $\\nu_e \\rightarrow \\nu_e$ channel and $s_\\Delta$, $c_\\Delta$ determined by change \nof mixing at the borders between the \nlayers $\\Delta$:\n$$ \nA(n,r,h) = (-1)^h c'_a c'_b s_\\Delta^r c_\\Delta^{2n - 1 - r}. \n$$\nFactor $s_\\Delta$ originates from each border at which the eigenstate number \nchanges: $\\nu_{1}^a \\rightarrow \\nu_{2}^b$, \\textit{etc.}, while $c_\\Delta$ \nappears from the borders without change of eigenstate: \n$\\nu_{i}^a \\rightarrow \\nu_{i}^b$. 
Here $r$ is the number of borders where the eigenstate changes; it is \ndetermined by the number of merging blocks: sequences of layers without a change \nof eigenstate. $h$ is given by the number of blocks with an \neven number of sequential layers in which the WP propagates as\nthe second eigenstate. \n\nFor a large number of periods there are many elementary packets \nwith the same values of \n$k_a, k_b, r, h$. We call this number the multiplicity, $M(k_a, k_b, r, h)$. \nTherefore the total amplitude of all WPs with a given \n$k_a, k_b$ equals\n\begin{equation} \label{amp-kakb}\nA(k_a, k_b) = \sum_{r,h} A(n,r,h) M (k_a, k_b,r,h).\n\end{equation}\n\n\nAs in the one-period case, the shape of the total WP \ndepends on the mixing \nand the multiplicities. \n\n\nFor $k_a = k_a'$, $k_b = k_b'$ there is no shift, but the \nphase difference is also zero. \nFor $k_a - k_a' = 1$, $k_b - k_b' = 0$, the phase difference \nis $\phi^a$, while for $k_a - k_a' = 0$, $k_b - k_b' = 1$ the phase difference is $\phi^b$, {\it etc}. \nIn general, one finds components with phase differences varying from $0$ to $ n(\phi^a + \phi^b)$. \n\n\subsection{Resummation}\nWe can obtain a compact expression for the total WPs by summing up the effects after each crossing of a CW period, representing the state as the interference of the two (total)\nWPs of $\nu_1^a$ and $\nu_2^a$.\n\nWe consider first the summation in the basis of the eigenstates of the layer $a$:\n$\nu^a = (\nu^a_1, ~\nu^a_2)$.\nLet $\psi^{k - 1} = (\psi_1^{k - 1},~ \psi_2^{k - 1})^T$ be the vector describing\nthe state of the system after crossing\n$k - 1$ periods of the CW profile. 
$\psi_i^{k - 1}$ is the result of the resummation\nof the elementary WPs of $\nu_i^a$\nin the layer $a$ at the beginning of the $k$th period.\nThen after crossing the $k$th period, the state becomes\n\begin{equation}\n\psi^{k } = U_L^a \psi^{k - 1},\n\label{eq:state-k}\n\end{equation}\nwhere the evolution matrix over one period is\n\begin{equation}\nU_L^a = U_\Delta D^b U_\Delta^{\dagger} D^a .\n\label{eq:evmat}\n\end{equation}\nHere $U_\Delta$ is the matrix of the mixing change at crossing the border between $a$ and $b$:\n\begin{equation}\nU_\Delta = U^{a \dagger}U^b =\n\left(\begin{array}{cc}\nc_\Delta & s_\Delta \\\n- s_\Delta & c_\Delta\n\end{array}\right),\n\label{eq:udelta}\n\end{equation}\nand $U^a$, $U^b$ are the flavor mixing matrices in the\n$a$ and $b$ layers.\n$D^a $ is the evolution matrix in the layer $a$, which can be written as\n\begin{equation}\nD^a =\n\left(\begin{array}{cc}\nS(t^a_1) e^{i2 \phi_1^a} & 0 \\\n0 & S(t^a_2) e^{i2 \phi_2^a}\n\end{array}\right).\n\label{eq:dop-a}\n\end{equation}\nHere $S(t_i^a)$ is the time-shift operator acting on the shape factor of the WP\nin such a way that\n$$\nS(t_i^a) f_i (t) = f_i (t - t_i^a) , ~~~~ S(t_i^a) S(t_j^b) = S(t_j^b) S(t_i^a) = S(t_i^a + t_j^b) ~~~\n(i, j = 1, 2).\n$$\nWe can also require that $S(t)^{\dagger} = S(-t)$, so that $S$ is unitary: $S(t)^{\dagger} S(t) = I$.\n\nSimilarly one can introduce the evolution matrix for the layer $b$. 
Inserting $U_\Delta$ and $D^m$ ($m = a, b$) from\n(\ref{eq:udelta}) and (\ref{eq:dop-a}) into (\ref{eq:evmat}) we obtain\n\begin{equation}\nU_L^a =\n\left(\begin{array}{cc}\nc_\Delta^2 e^{-i\phi} + s_\Delta^2 S(t^b) e^{-i\phi'} & \n s_\Delta c_\Delta [S(t^b + t^a) e^{i\phi} - S(t^a) e^{i\phi'} ] \\\ns_\Delta c_\Delta [S(t^b) e^{- i\phi'} - e^{-i\phi} ] & \n s_\Delta^2 S(t^a) e^{i\phi' } + c_\Delta^2 S(t^b + t^a) e^{ i\phi}\n\end{array}\right),\n\label{eq:evmat3}\n\end{equation}\nwhere we have factored out the common phase factor $\exp \{i(\phi_2^b + \phi_2^a + \phi_1^b + \phi_1^a) \}$ and applied (\ref{phase-redefinition}) and (\ref{phase-redefinition-2}), \nas well as performed the time shift by \n$t_1^b + t_1^a$ and used (\ref{time-redefinition}). Using (\ref{eq:evmat3}), we can recover the results of sect.~\ref{coherence and adiabaticity violation} for one period.\n\n\nIf all $S = 1$, we obtain the matrix which leads to the parametric oscillations as in \cite{Akhmedov:1988kd,Akhmedov:1998ui,Akhmedov:1999ty}. 
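This limit is easy to verify numerically. The sketch below (with illustrative angles and phases) builds the one-period matrix (\ref{eq:evmat3}) with all $S = 1$, checks its unitarity, recovers the standard one-layer probability for $s_\Delta = 0$, and reproduces the parametric enhancement $P_{ee} = \cos^2(2n\Delta\theta)$ at $\phi^a = \phi^b = \pi\/2$.

```python
import numpy as np

# One-period evolution matrix (eq:evmat3) in the plane-wave limit S = 1.
# Matter mixing angles and half-phases are illustrative.

def U_period(Delta, phi, phip):
    cD, sD = np.cos(Delta), np.sin(Delta)
    e = np.exp
    return np.array([
        [cD**2 * e(-1j * phi) + sD**2 * e(-1j * phip),
         sD * cD * (e(1j * phi) - e(1j * phip))],
        [sD * cD * (e(-1j * phip) - e(-1j * phi)),
         sD**2 * e(1j * phip) + cD**2 * e(1j * phi)],
    ])

th_a, th_b = 0.3, 0.75
D = th_b - th_a
n = 4
nu_e = np.array([np.cos(th_a), np.sin(th_a)])  # projection on the flavor state

# unitarity of the one-period matrix
U = U_period(D, 1.3, 0.5)
unit_err = np.max(np.abs(U @ U.conj().T - np.eye(2)))

# no jump (s_Delta = 0): standard one-layer oscillation probability
phi_a, phi_b = 0.9, 0.4
U0 = U_period(0.0, phi_a + phi_b, phi_a - phi_b)
P_nojump = abs(nu_e @ np.linalg.matrix_power(U0, n) @ nu_e) ** 2
P_expected = 1 - np.sin(2 * th_a) ** 2 * np.sin(n * (phi_a + phi_b)) ** 2

# resonance phases phi^a = phi^b = pi/2: parametric enhancement
Ur = U_period(D, np.pi, 0.0)
P_res = abs(nu_e @ np.linalg.matrix_power(Ur, n) @ nu_e) ** 2
```

At the resonance phases the matrix reduces (up to a sign) to a rotation by $2\Delta\theta$ per period, which is the geometric picture behind the parametric build-up.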

The evolution matrix after $n$ periods equals
\begin{equation}
U_{nL}^a = (U_L^a)^n.
\label{eq:evmatnn}
\end{equation}
According to (\ref{eq:state-k}), the total state is
\begin{equation}
\psi^n (t) = (U^a_L)^n f(t) \psi^0,
\label{eq:eigen12}
\end{equation}
where $f(t)$ is the initial shape factor and
$\psi^0$ gives the admixtures of the eigenstates
$\nu_i^a$ at the initial moment of time.
The components of $\psi^n$ have the form
\begin{equation} \label{sum-kakb}
\psi^n= \sum_{k_a,k_b}^n A(k_a,k_b) f(t - k_a t^a - k_b t^b) e^{i (k_a \phi^a + k_b \phi^b) } \psi^0.
\end{equation}
Notice from (\ref{sum-kakb}) that oscillations in energy
are induced not only by the interference between the total WPs $\psi^n_1$ and $\psi^n_2$ but also by the interference of components within $\psi_1^n$ and $\psi_2^n$, with amplitudes (\ref{amp-kakb}).

The amplitude of the $\nu_e \rightarrow \nu_e$ transition equals
\begin{equation}
A_{ee} (t) = \nu_e^T (U_L^a)^n f(t) \nu_e, ~~~~~~~~ \nu_e^T = (c_a,~ s_a).
\label{eq:ampl}
\end{equation}
This reproduces the amplitude for the two-layer case of sect.~\ref{parametric1}. The probability can then be found as
\begin{equation}
P_{ee} = \int dt |\nu_e^T U_{nL}^a f(t) \nu_e|^2.
\label{eq:probab}
\end{equation}

Various results can be obtained using these general forms of the amplitude and probability.
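In the plane-wave limit the shape factor drops out of (\ref{eq:probab}), so that $P_{ee} = |\nu_e^T (U_L^a)^n \nu_e|^2$. A hedged numerical sketch (illustrative values of $\theta_a$, $\Delta$ and the layer phases; the half-phase convention for $D^m$ is our own):

```python
import numpy as np

def p_ee(n, theta_a, delta, phi_a, phi_b):
    # P_ee after n periods in the plane-wave limit (all S = 1),
    # with half-phase convention D^m = diag(e^{-i phi^m}, e^{+i phi^m})
    c, s = np.cos(delta), np.sin(delta)
    UD = np.array([[c, s], [-s, c]])
    Da = np.diag([np.exp(-1j * phi_a), np.exp(1j * phi_a)])
    Db = np.diag([np.exp(-1j * phi_b), np.exp(1j * phi_b)])
    UL = UD @ Db @ UD.conj().T @ Da
    nu_e = np.array([np.cos(theta_a), np.sin(theta_a)])  # nu_e^T = (c_a, s_a)
    amp = nu_e @ np.linalg.matrix_power(UL, n) @ nu_e
    return abs(amp) ** 2

probs = [p_ee(n, 0.6, 0.25, 1.2, 0.8) for n in range(5)]
assert all(0.0 <= p <= 1.0 + 1e-12 for p in probs)
assert np.isclose(probs[0], 1.0)   # n = 0: no evolution, so P_ee = 1
```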
One can also consider the evolution in the flavor basis, which corresponds to the derivation of
the parametric oscillation probability in \cite{Akhmedov:1988kd,Akhmedov:1998ui,Akhmedov:1999ty}.
In this basis the evolution matrix equals
\begin{equation}
U_L^f = U^b D^b U_\Delta^{\dagger} D^a U^{a \dagger} = (U^b D^b U^{b \dagger})(U^a D^a U^{a \dagger}),
\label{eq:evmatf}
\end{equation}
where in the last step the eigenstates of the layer $b$ are projected onto the flavor basis.
The matrices in the $a$-eigenstate basis (\ref{eq:evmat}) and in the flavor basis (\ref{eq:evmatf})
are related as
\begin{equation}
U_L^a = U^{a \dagger} U_L^f U^{a}.
\label{eq:evmatfa}
\end{equation}
After $n$ periods the matrices are related in the same way:
\begin{equation}
(U_L^f)^n = U^{a} (U_L^a )^n U^{a \dagger}.
\label{eq:evmatfan}
\end{equation}


We can determine the effective Hamiltonian $H$ for one period from the evolution matrix over the period:
$U_L = I - i H L$. Diagonalizing this Hamiltonian then gives the effective mixing and the eigenvalues,
which, in turn, determine the depth and the phase of the parametric oscillations.
Clearly, such a Hamiltonian depends on the mixing change $\Delta$ and on the phases acquired in the individual layers,
$\phi^a$ and $\phi^b$.
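The extraction of the effective Hamiltonian can be sketched numerically: writing $U_L = \exp(-i H L)$ exactly (rather than the linearized $I - iHL$), the eigenvectors of $U_L$ are those of $H$, and the splitting of the eigenphases gives the parametric oscillation phase per period. All values below are illustrative:

```python
import numpy as np

d, pa, pb = 0.3, 0.8, 0.5   # illustrative mixing change and layer half-phases
c, s = np.cos(d), np.sin(d)
UD = np.array([[c, s], [-s, c]])
UL = (UD @ np.diag([np.exp(-1j * pb), np.exp(1j * pb)]) @ UD.conj().T
         @ np.diag([np.exp(-1j * pa), np.exp(1j * pa)]))

# U_L = exp(-i H_eff L): eigenvectors of U_L are eigenvectors of H_eff,
# and the eigenphase splitting is the oscillation phase per period
lam, V = np.linalg.eig(UL)
phase_per_period = abs(np.angle(lam[0] * np.conj(lam[1])))

# eigenvalues of a unitary matrix lie on the unit circle
assert np.allclose(np.abs(lam), 1.0)
assert 0.0 <= phase_per_period <= np.pi
```

The depth of the parametric oscillations then follows from the admixture of the initial state in the two eigenvectors of $U_L$.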




\subsection{Effective group velocities and infinite coherence in the CW
profile}

In general, the condition of infinite coherence can be formulated
as the equality of the effective
group velocities of the wave packets:
\begin{equation} \label{equal-eff}
v_1^{eff} = v_2^{eff},
\end{equation}
or $\Delta v^{eff} = 0$.
The determination of $v_i^{eff}$ depends
on the properties of the density profile and turns out to be non-trivial for
the CW profile.

Recall that in the case of constant density the $v_i^{eff}$ are well defined, see (\ref{infcoh-conf}) and the related discussion.
Since the shape factor does not change, the group velocity
is the velocity of any fixed point of the shape factor, {\it e.g.}, its maximum.
The difference of the group velocities, $\Delta v =
\Delta v (E, V, \theta, \Delta m^2)$,
does not depend on the oscillation phase.
The velocities are constant and do not change in the course of
propagation.


If the density changes along the neutrino trajectory,
$v_i^{eff}$ and $\Delta v^{eff}$ do depend on the distance.
For an adiabatic, periodically varying density,
$v_i^{eff}$ can be introduced as the velocity averaged over the period:
$$
v_i^{eff} = L^{-1} \int_0^L dx \frac{d H_i}{dE}\bigg|_{\bar{E}} .
$$
Then the condition of infinite coherence is
$$
\Delta v^{eff} = L^{-1}\int_0^L dx \frac{d \Delta H
}{dE}\bigg|_{\bar{E}} = 0.
$$
It corresponds to a situation in which $\Delta v^{eff}(x) > 0$ in one part of the
period and $\Delta v^{eff}(x) < 0$ in another part (\ref{L_0}).

The situation is much more complicated if adiabaticity is broken, as in the case of a single jump between two layers discussed in sect.~\ref{coherence and adiabaticity violation}.
In the second layer, the WP has two components,
and it is non-trivial to identify the point of the WP to which the group velocity should
be ascribed.
Furthermore, just after crossing the jump, certain parts of the WP will
have opposite phases and therefore interfere
destructively,
so that the interference pattern is the same as before the crossing.
This also affects the shape factors (\ref{total-WPs}) and (\ref{total-WPs-2}). In this case, infinite
coherence has no meaning, apart from specific cases which reduce
to infinite coherence in a single layer (sect.~\ref{coherence and adiabaticity violation}).



Infinite coherence (an infinite number of periods)
does have meaning in the castle-wall profile for the
(long-range) parametric oscillations. The effective group velocities
should be defined for a single
period of the CW profile, similarly to the periodic profile
with adiabatic density change considered above. Furthermore,
one should take into
account, similarly to the single-jump case, that according to (\ref{sum-kakb}):

\begin{itemize}

\item the shape factor of the total WPs changes at the crossings,
due to the splitting and delays of the components;

\item the oscillation phase is different in different parts of the same
total WP because of the different histories of the elementary components.

\end{itemize}

As a result, the expressions for $v_i^{eff}$ and the condition for
infinite coherence depend not only on the phases acquired in the
layers $a$ and $b$ but also on the mixing in both layers (\ref{eq:evmat3}).

Indeed, rewriting the condition
(\ref{divergency}) for $n_{coh} \rightarrow \infty$ in terms of the
delays $t^a$ and $t^b$, we obtain the condition in the $x$-representation:
\begin{multline}
\label{x-space}
 [s_a c_b + c_a s_b \cos 2\Delta]\, t_a + [c_a s_b + s_a c_b
\cos 2\Delta]\, t_b =\\
s_a s_b \sin 2\Delta \frac{1}{E}
\left(\frac{V_{a} \sin 2 \theta_{a}}{\Delta H_{a}}-\frac{V_{b}
\sin 2 \theta_{b}}{\Delta H_{b}}\right),
\end{multline}
where we used
$$
\frac{d (2\theta_{m})}{dE}=\frac{V_{m} \sin 2 \theta_{m}}{E \Delta H_{m}}, ~~~~ m=a,b.
$$
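For orientation, the two sides of condition (\ref{x-space}) can be evaluated numerically; here $\Delta \equiv \theta_b - \theta_a$, and all input values are placeholders rather than the profile parameters (\ref{eq:paramet}). Scanning the mismatch over $E$ and locating its zeros is one way to find the energies $E_0$:

```python
import numpy as np

def xspace_mismatch(ta, tb, th_a, th_b, Va, Vb, dHa, dHb, E):
    # LHS minus RHS of the infinite-coherence condition, Eq. (x-space)
    sa, ca = np.sin(th_a), np.cos(th_a)
    sb, cb = np.sin(th_b), np.cos(th_b)
    D = th_b - th_a                  # change of mixing at the border
    lhs = (sa * cb + ca * sb * np.cos(2 * D)) * ta \
        + (ca * sb + sa * cb * np.cos(2 * D)) * tb
    rhs = sa * sb * np.sin(2 * D) / E * (
        Va * np.sin(2 * th_a) / dHa - Vb * np.sin(2 * th_b) / dHb)
    return lhs - rhs

# equal layers (D = 0) and opposite delays: both sides vanish identically
assert np.isclose(
    xspace_mismatch(0.1, -0.1, 0.7, 0.7, 1e-13, 1e-13, 2e-12, 2e-12, 5.0), 0.0)
```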
From (\ref{x-space})
we see that infinite coherence for parametric oscillations
is unrelated to the overlap of the elementary WPs at the
end of each period; this is the major difference from the cases
in which adiabaticity is conserved.

There is no simple interpretation of infinite coherence for parametric oscillations in terms of the characteristics of the elementary WPs. In
Table~\ref{table}, for three different $E_0$, we
show the delays of the elementary WPs as well as the phases and mixings in each layer of the CW profile with parameters (\ref{eq:paramet}). No simple correlation is observed.
\begin{table}
\begin{center} 
\begin{tabular}{ |p{1.8cm}||p{2cm}|p{2cm}|p{1.4cm}|p{1.4cm}| p{1.4cm}|p{1.4cm}| }
 \hline
 $E_0$ (MeV) & $t_a$ ($\text{MeV}^{-1}$) & $t_b$ ($\text{MeV}^{-1}$)&
$\phi^a$ (rad) & $\phi^b$ (rad) & $\theta_a$ (rad) & $\theta_b$ (rad) \\
 \hline
 $2.3$ & $-0.98$ & $-2.4$ & $1.6$ & $2.7$ &
$0.83$ & $0.15$ \\
 $3.3$ & $1.5$ & $-1.2$ & $2.1$ & $1.8$ &
$1.3$ & $0.156$ \\
 $7.7$ & $0.4$ & $-0.21$ & $3.87$ & $0.73$ &
$1.5$ & $0.2$ \\
 \hline
\end{tabular}
\caption{\label{table}Energies of enhanced coherence length, $E_0$, and the delays, phases and mixing angles in each of the layers $a$ and $b$ of a CW profile with parameters (\ref{eq:paramet}).}
\end{center}
\end{table}
\section{Applications to supernova neutrino evolution}
\label{applications}

The issues of coherence in matter are of great relevance for oscillations of supernova (SN) neutrinos. 
The reasons are the enormous distances from the production points to a detector at the Earth, 
the complicated oscillation and conversion phenomena inside the star, and the non-trivial profiles of the matter potentials. 
Here we briefly describe some of the effects, while a detailed study will be presented elsewhere \cite{yago:next}.

SN neutrinos are produced at densities $\rho \sim 10^{12}$ $\text{g/cm}^3$ and have very short 
wave packets, $\sigma_x \sim 10^{-11}$ cm \cite{Kersten:2013fba,Kersten:2015kio,Akhmedov:2017mcc}. While propagating to the surface of the star, they can undergo 
collective oscillations at $R < 100$ km \cite{Duan:2010bg,Mirizzi:2015eza,Chakraborty:2016lct,Tamborra:2020cul} and then 
resonance flavor conversion at $R > 1000$ km \cite{Wolfenstein:1979ni,Mikheev:1986if,Dighe:1999bi,Lunardini:2003eh,Tomas:2004gr,Dasgupta:2005wn}. 
The collective effects are expected to be active at certain 
phases of the neutrino burst \cite{Mirizzi:2015eza}. 

Let us first assume that collective oscillations are absent, due to damping effects \cite{Raffelt:2010za} or 
to the lack of conditions for these oscillations during certain time intervals. 
In this case, the standard resonance conversion picture is realized: 
in the production region, the mixing is strongly suppressed, so that the flavor states coincide with the eigenstates of propagation in matter. Which flavor state coincides with which eigenstate depends on the type of mass hierarchy. 
Except for special situations (time periods and locations), the adiabaticity condition is well satisfied, 
and therefore the adiabatic transformations $\nu_i^m \rightarrow \nu_i$
$(i = 1, 2, 3)$ occur as neutrinos travel to the surface. The adiabaticity can be broken at the fronts of shock waves, which affects the described picture. 


If the initial fluxes of $\nu_\mu$ and $\nu_\tau$ are equal, 
the results of the conversion depend only on the $\nu_e - \nu_e$ survival probability, $p$, in the neutrino 
channel and on the $\bar{\nu}_e - \bar{\nu}_e$ probability, $\bar{p}$, in the antineutrino channel. 
For definiteness we will consider the $\bar{\nu}_e$ channel.
In this case, the flux of $\bar{\nu}_e$ at the surface of the star, and consequently at the Earth, equals 
\begin{equation}
 F_{\bar{\nu}_e}=\bar{p} F_{\bar{\nu}_e}^0 + (1-\bar{p}) F_{\bar{\nu}_x}^0, 
\label{eq:eflux}
\end{equation}
where $F^0_{\bar{\nu}_e}$ and $F^0_{\bar{\nu}_x}$ are the initial fluxes of $\bar{\nu}_e$ and $\bar{\nu}_x$ 
(a mixture of $\bar{\nu}_\mu$ and $\bar{\nu}_\tau$). 
The second term in (\ref{eq:eflux}) corresponds to the transitions 
$\bar{\nu}_\mu, \bar{\nu}_\tau \rightarrow \bar{\nu}_e$. The derivation of Eq.~(\ref{eq:eflux}) assumes that 
the initially produced $\nu_i^m$ (which coincide with the flavor states) are incoherent and evolve independently. 
Therefore the signals produced by the $\nu_i^m$, or by the states into which the $\nu_i^m$ evolve, 
sum up incoherently. 

For $R \ll 1000$ km, the matter density is much higher than 
the density of the H-resonance associated with $\Delta m^2_{13}$. 
In this matter-dominated range, according to Eq.~(\ref{matter-domination}) in 
sect.~\ref{coherence-lengths}, the coherence length $L_{coh}^m$ is a function of the vacuum parameters 
(see also fig.~\ref{fig-E0}): 
\begin{equation}
L_{coh}^m = \frac{L_\nu}{\cos 2 \theta}\frac{E}{2\sigma_E} = 1100~ {\rm km} 
\left(\frac{E}{ 20~{\rm MeV }}\right) \left(\frac{E}{10 \sigma_E }\right).
\label{eq:cohinsn}
\end{equation}
Therefore the WPs of the eigenstates will separate before reaching the H-resonance even if they 
overlapped at production. 
Notice that with an increase of $E$ the length $L_{coh}^m$ increases, but the resonance shifts to outer layers. Still, some partial coherence may exist. 


For definiteness we assume the inverted mass hierarchy. Then, according to the level crossing scheme, 
at production $\bar{\nu}_3^m \approx \bar{\nu}_e$, $\bar{\nu}_1^m \approx \bar{\nu}_x$ 
and $\bar{\nu}_2^m = \bar{\nu}_x'$.
Furthermore, antineutrinos cross only the H-resonance. 
The dynamics is related to the $\bar{\nu}_3^m - \bar{\nu}_1^m$ subsystem, while 
$\bar{\nu}_2^m$ essentially decouples, appearing as a spectator. Therefore the $\bar{\nu}_e$ survival probability can be written as 
\begin{equation} \nonumber
\label{survival}
\bar{p}=|U_{e1}|^2 P_{31} + 
s_{13}^2 (1- P_{31}), 
\end{equation}
where $P_{31} \equiv P(\bar{\nu}_{3}^m \rightarrow \bar{\nu}_{1}^m)$ is the 
$\bar{\nu}_{3}^m \rightarrow \bar{\nu}_{1}^m$ transition probability.

Let us consider the effects of coherence loss 
and coherence enhancement under different circumstances. 


1. In the completely adiabatic case, $\bar{\nu}_3^m (\approx \bar{\nu}_e)$ evolves to $\bar{\nu}_3$, so that 
$P(\bar{\nu}_{3}^m \rightarrow \bar{\nu}_{1}^m) = 0$ and $\bar{p} = |U_{e3}|^2 = s_{13}^2$. 
In the central parts of the star, $\bar{\nu}_1^m$ has a larger group velocity than $\bar{\nu}_3^m$. Below 
the H-resonance, 
conversely, $\bar{\nu}_3^m$ moves faster, and $\bar{\nu}_3$ will arrive at the Earth first. 


2. Adiabaticity is strongly broken in a shock-wave front, which can be considered as 
an instantaneous density jump (see sect.~\ref{coherence and adiabaticity violation}). 
We will assume that before the jump (layer $a$) and after the jump (layer $b$) neutrinos propagate adiabatically. 
When crossing the jump, the eigenstates $\bar{\nu}_1^a$ and $\bar{\nu}_3^a$ split into the eigenstates $\bar{\nu}_j^b$. 
To find $\bar{p}$ we consider the evolution of $\bar{\nu}_3^a$, which becomes 
$c_\Delta \bar{\nu}_3^b - s_\Delta \bar{\nu}_1^b$ after crossing the jump. Its components $\bar{\nu}_j^b$ then 
lose coherence and evolve to $\bar{\nu}_j$ at the surface of the star. 
Consequently, we find 
\begin{equation}\nonumber
\bar{p} = c_\Delta^2 s_{13}^2 + s_\Delta^2 |U_{e1}|^2.
\label{eq:padviol}
\end{equation}
A strong effect of adiabaticity violation on $\bar{p}$ is realized if the change of mixing in matter, $\Delta$, is large. The latter occurs when the jump is located in the resonance layer 
and its size is larger than the width of the resonance layer: 
$\Delta \rho > \rho_R \tan 2\theta_{13} = 0.3 \rho_R$. That is, even a small jump can produce a strong effect. 


As we discussed before, the jump regenerates oscillations, which then disappear, 
and $\bar{p}$ does not depend on the oscillation phase. A single shock front (jump) could 
lead to a phase (coherence) effect if $\bar{\nu}_1^a$ and $\bar{\nu}_3^a$ are produced in a coherent state 
({\it e.g.}, as a result of collective oscillations) and coherence is maintained up to the jump. 
In this case, the result of the evolution depends on the phase $\phi^a$ acquired in the layer $a$, and this phase is observable. Notice that formula (\ref{eq:eflux}) is invalid in this case.

 

3. Let us consider the case of two density jumps, which appear due to the presence 
of two shock-wave fronts: the direct one and the reversed one. 
This situation is analogous to the one described 
in sect.~\ref{coherence and adiabaticity violation}. 
We denote by $a$, $b$ and $g$ the layers between the production point and the inner (reversed) shock, 
between the two shocks, and outside the outer shock, respectively.
The potential of the layer $b$, $V_b$, can be constant or change adiabatically; the potentials just before the inner shock, $V_a$, and after the second shock (jump), $V_g$, are not equal 
(in contrast to the CW profile).

The picture of the evolution of the $\bar{\nu}_{3}^m$ state is the following:

\begin{itemize}

\item 
$\bar{\nu}_{3}^m$ propagates adiabatically in the layer $a$ and splits at the border between $a$ and $b$: 
$\bar{\nu}_3^a = c_\Delta \bar{\nu}_3^b - s_\Delta \bar{\nu}_1^b$; 

\item
the latter state is coherent and oscillates in the layer $b$, acquiring the oscillation phase $\phi^b$: 
$c_\Delta e^{i\phi^b} \bar{\nu}_3^b - s_\Delta \bar{\nu}_1^b$; 

\item 
the eigenstates $\bar{\nu}_3^b$, $\bar{\nu}_1^b$ split at the second jump (the border between $b$ and $g$): 
$\bar{\nu}_3^b = c_{\Delta'} \bar{\nu}_3^g - s_{\Delta'} \bar{\nu}_1^g$, $\bar{\nu}_1^b = 
c_{\Delta'} \bar{\nu}_1^g + s_{\Delta'} \bar{\nu}_3^g$, 
where $\Delta' \equiv \theta_g - \theta_b$ is the change of the mixing angle at the second jump; 

\item 
in the layer $g$ the eigenstates $\bar{\nu}_j^g$ evolve adiabatically to the mass eigenstates, 
$\bar{\nu}_j^g \rightarrow \bar{\nu}_j$. Furthermore, $\bar{\nu}_1$ and $\bar{\nu}_3$ separate and decohere. 

\end{itemize}

According to this picture, we obtain the following expression for the probability $\bar{p}$: 
\begin{equation}
\label{transition2}
\bar{p} = |U_{e1}|^2 \sin^2(\Delta + \Delta') + s_{13}^2 \cos^2(\Delta + \Delta')
-(|U_{e1}|^2 - s_{13}^2) \sin 2\Delta \sin 2\Delta' \sin^2\frac{\phi^b}{2}. 
\end{equation}
Notice that $\Delta + \Delta' = \theta_g -\theta_a$. 


In fig.~\ref{dip} we plot $\bar{p}$
computed with Eq.~(\ref{transition2}) for the following values of the 
parameters: $V_a=6.3 \times 10^{-12}$ $\text{eV}$, $V_b=3.2 \times 10^{-11}$ $\text{eV}$, $V_g=3.2 \times 10^{-12}$ $\text{eV}$ and $L_b=1060$ km.
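Eq. (\ref{transition2}) is straightforward to evaluate numerically; in the sketch below the values of $|U_{e1}|^2$, $s_{13}^2$ and of the mixing changes are illustrative placeholders, not the parameters of the figure:

```python
import numpy as np

def pbar_two_jumps(D, Dp, phi_b, Ue1_sq=0.68, s13_sq=0.022):
    # nu_e-bar survival probability for two density jumps, Eq. (transition2)
    return (Ue1_sq * np.sin(D + Dp) ** 2
            + s13_sq * np.cos(D + Dp) ** 2
            - (Ue1_sq - s13_sq) * np.sin(2 * D) * np.sin(2 * Dp)
              * np.sin(phi_b / 2) ** 2)

D, Dp = 0.3, 0.25          # illustrative mixing changes at the two jumps
# naive averaging of the oscillatory term corresponds to sin^2(phi_b / 2) -> 0.5
avg = pbar_two_jumps(D, Dp, np.pi / 2)   # sin^2(pi/4) = 0.5
assert 0.0 <= avg <= 1.0
# at phi_b = 0 the interference term vanishes
assert np.isclose(pbar_two_jumps(D, Dp, 0.0),
                  0.68 * np.sin(D + Dp) ** 2 + 0.022 * np.cos(D + Dp) ** 2)
```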
If we assume an electron fraction $Y_e=0.5$, the corresponding densities are $\rho_a=2 \times 10^2$ $\text{g/cm}^3$, $\rho_b= 10^3$ $\text{g/cm}^3$, $\rho_g= 10^2$ $\text{g/cm}^3$.
Since the distance between the two shock fronts is 
much larger than the oscillation length in matter, $l_m < 100$ km, 
one could naively average the oscillatory term in (\ref{transition2}), {\it i.e.}, take 
$\sin^2 (\phi^b/2) = 0.5$ (the 
red curve in fig.~\ref{dip}). However, according to our analysis in sect.~\ref{coherence-lengths}, 
in layers of constant and slowly varying densities an energy range of enhanced 
coherence exists around $E_0$, where the oscillation effects might not be negligible. 
This is shown in fig.~\ref{dip} by the black curve for 
the energy resolution $\sigma_E/E=0.1$; here $E_0=41$ MeV.
This oscillation effect will survive further propagation to the Earth; it encodes information about the 
distance between the shock-wave fronts and the density profile. A similar effect of enhanced coherence also survives for adiabatic propagation between the two shocks, according to (\ref{eq:intcond3}).
As the distance between the two jumps increases, 
one needs a better energy resolution, $\sigma_E/E < 0.1$, to observe the effect of enhanced coherence.



\begin{figure} 
\centering
\includegraphics[width=0.8\linewidth]{pbar.pdf}
\caption{Dependence of the $\bar{\nu}_e$ survival probability in a supernova
on the neutrino energy (blue line).
Inverted mass hierarchy and 1-3 vacuum mixing
were assumed. The probability
averaged over the energy interval
$\sigma_E = 0.1 E$ is shown by the black line.
The red line shows the probability without the oscillatory
term.}
\label{dip}
\end{figure}




Let us now consider the possible effect of coherence loss on the collective
oscillations in the central parts of the star.
These oscillations are induced by the coherent flavor exchange in $\nu - \nu$ scattering, and the high neutrino densities at $r < 100$ km make them relevant. 



The critical issue is whether the fast loss of coherence due to the
very short WPs destroys the collective oscillations.
Indeed, the latter imply $\nu - \nu$ interactions
after neutrino production, which can be considered as a
$\nu$-detection (``observation'') process.
Loss of coherence between production and interaction,
as well as between different interactions, can be important.
Furthermore, the problem becomes effectively
non-linear, and therefore it is not clear whether the integration
over the energies that form the WP can be interchanged with the evolution.
Such an interchange is at the basis of the equivalence 
of the results in the $x$- and $E$-representations \cite{Stodolsky:1998tc}. There is no clear answer to 
this question \cite{Kersten:2013fba,Kersten:2015kio,Akhmedov:2016gzx,Akhmedov:2017mcc}.

Some new insight into the problem can be obtained using the
description of the collective oscillations as the
evolution of individual neutrinos propagating
in effective external potentials formed by the matter
and the background neutrinos \cite{Hansen:2018apu}. In this case, the problem becomes
explicitly linear. Since the background
neutrinos oscillate, the effective potentials
have a non-trivial oscillatory dependence on time (distance).
These variations of the potentials are non-adiabatic,
and therefore the strong flavor
transitions can be interpreted as parametric
oscillations and parametric enhancement effects. Within this picture,
one can apply the results of sect.~\ref{parametric-oscillations}.

The following comments are in order:

1.
Propagation decoherence is related to the energy dependence
of the Hamiltonian (and consequently of the phases). The collective (fast)
transformations are driven mainly by the potentials, which dominate
over the vacuum term $\Delta m^2/2E$ (the source of the $E$ dependence).
The vacuum term (with mixing) triggers the conversion during a
short initial stage. Therefore, apart from this initial stage,
decoherence is simply absent.
If the initial stage is short enough,
decoherence is entirely irrelevant.

2. In the region of fast collective oscillations, $r_{coll} < 100$ km,
with strong matter dominance, the coherence length
is given by the vacuum parameters (\ref{eq:cohinsn}) 
and turns out to be $L_{coh} \sim 10^3$ km. So, $r_{coll} \ll L_{coh}$, and
decoherence can be neglected for $r < r_{coll}$.

3. For low energies ($E \sim 10$ MeV) and $\sigma_E \sim E$,
the length $L_{coh} < 10^2$ km can be comparable to the
scale of the collective oscillations.
However, this estimate of $L_{coh}$ is for constant density.
As we established in sect.~\ref{parametric-oscillations}, the parametric
oscillations themselves can substantially enhance the coherence, at least
in certain energy intervals around $E_0$. So, here we deal with
coherence sustained by the oscillatory dependence of the
potentials.

Notice that the consideration of coherence for collective oscillations
has been problematic because of the strong adiabaticity violation and the
difficulty of defining the eigenstates. Using the analogy with the CW case, 
this can be done by integrating the Hamiltonian
over one period, thus eliminating the fast variations.
For
the averaged Hamiltonian one can introduce the eigenstates
and their effective group velocities.
The enhanced coherence then corresponds to the approximate equality
of these group velocities (\ref{x-space}).







\section{Conclusions} 
\label{conclusions}
Propagation decoherence occurs due to the difference
of the group velocities, which leads to a shift of
the WPs relative to each other and to their eventual separation.
In matter, due to refraction, both the group
and the phase velocities of neutrinos change
with respect to their vacuum values.
Thus, refraction affects the propagation coherence (and
decoherence).

The consideration of WPs in the $x$-space,
with average energy and momentum, produces the same results
as the consideration of plane waves in the $E$-space
with an integration of the probability (oscillation phase) over the energy
spectrum, whose width is
given by the energy uncertainty.
In particular, the same value of the coherence length
follows from both approaches.

For different density profiles, we determined the coherence lengths
and their dependence on the neutrino energy.
The salient feature in matter is the existence
of an infinite coherence length,
$L_{coh} \rightarrow \infty$, at certain energies
$E_0$, and of regions of enhanced coherence around $E_0$.
In the energy space, $E_0$ is given by the vanishing derivative
of the phase with respect to energy. In the configuration space,
it follows from the equality of the group velocities
of the eigenstates. The condition identifies
the energy of minimal averaging of the interference term of the
probability.

The fundamental notion here is that of the effective group velocities,
which depend on the density profile.

1.
For constant density, the infinite-coherence energy $E_0 = E_R/ \cos 2\theta$
coincides with the MSW resonance energy for oscillations
of the mass eigenstates $\nu_1 \leftrightarrow \nu_2$, and for small vacuum mixing it is close to the MSW flavor resonance.
The width of the region of enhanced coherence (weak averaging) is inversely proportional to the energy resolution, $\sigma_E$, and grows as $E_0^2$.
For very large densities (high energies), the coherence length
is determined by the vacuum parameters and becomes close to the coherence
length in vacuum: $L_{coh}^m = L_{coh}/ \cos 2\theta$.

For massless neutrinos and oscillations driven by matter,
the coherence is maintained infinitely for all energies.

2. In varying density profiles, infinite coherence
is realized in particular situations.
For a monotonically and adiabatically changing density,
the coherence length can increase toward a specific energy $E_0$,
corresponding to a certain $V_0$, such that the separations accumulated at $V > V_0$ and
$V < V_0$ have opposite signs and compensate each other.

Infinite coherence can also be realized
for a periodic profile with adiabatic density change. It corresponds to the
equality of the group velocities of the eigenstates averaged over
the period; $E_0$ can be found from this condition.

3. In the presence of adiabaticity violation, transitions between the
eigenstates
of propagation occur.
This leads to a modification of the WP shape
and, consequently, to a change of the effective group velocities.

We considered in detail the case of maximal adiabaticity
breaking: density jumps at certain points of space (along the neutrino
trajectory).
In this case, two new effects are realized, which are related to
the splitting of the eigenstates at the
borders between layers: (i) the spread of the total WP, and (ii) the change of the
oscillation phase
along these total WPs.

For the example of a single jump between two layers with different
constant densities,
there are two components in the first layer and four components in the
second.
Correspondingly, there are various group velocities and oscillation
phases at a detector,
and it is not possible to introduce the coherence length in the usual
way. Still, the medium
parameters can be selected so that some level of coherence can be
maintained over long
distances, because the splitting of the eigenstates regenerates interference
and oscillations.

4.
We checked that the modifications of the total
WP in the $x$-space do not change its Fourier
transform to the $E$-space,
and consequently the energy spectrum of the neutrinos.
Therefore the derivation of the infinite-coherence
conditions for complicated density profiles can be more accessible in the
$E$-representation.

In certain situations, the consideration of WPs in the $x$-space
simplifies the computation of probabilities, especially when
coherence is completely lost.
Moreover, it becomes crucial when time tagging
is introduced at production or detection, breaking
the stationarity condition.

5.
For periodic structures with density jumps, such as the castle-wall profile, the coherence length can be introduced for the parametric
oscillations
(and not for the small-scale coherence of the elementary WPs).
The coherence can then be maintained for many periods.
For parametric oscillations, there are several $E_0$, each associated with
a particular parametric resonance.


The interpretation of the condition for $E_0$ in the $x$-space in terms of the
elementary WPs propagating in the individual layers of the profile is very non-trivial.
We find that the effective group velocities for one period (and
therefore the
infinite-coherence condition) are rather complicated functions of the mixing angles, phases and delays acquired by the elementary WPs in the two parts of the period.

6. Using the elementary WPs, we reconstructed the total WPs at a detector and
studied their spread and shape change
in the course of propagation, depending on the parameters of the CW profile. The resummation of the elementary WPs at a detector can be
performed using the evolution matrix for one period
expressed in terms of the time-shift operators.



7. We outlined applications of the obtained results to supernova neutrinos.
In particular, we showed that coherence can be maintained between
two shock-wave fronts, leading to observable
oscillation effects at the Earth.

For collective oscillations, 
due to the large matter potentials,
the coherence is determined by the vacuum parameters, and the
coherence length is much larger than the scale of the
fast oscillations. Using the interpretation
of collective oscillations as parametric effects,
we find that the coherence can be further enhanced in certain
energy ranges.


\section{Acknowledgements}
YPPS acknowledges support from FAPESP funding Grants No. 2014/19164-453 6, No. 2017/05515-0 and No. 2019/22961-9.

\bibliographystyle{unsrt}