\\section{Introduction}\n\n\nHamilton introduced the Ricci flow to study the global structure and classification of Riemannian manifolds in his seminal work \\cite{H1}. It was used to solve Thurston's geometrization conjecture for $3$-manifolds \\cite{P1, P2, P3, KL, MT}. The Ricci flow preserves the K\\\"ahler condition: if $(X,g_0)$ is a compact K\\\"ahler manifold of complex dimension $n$, then any solution $g(t)$ of the Ricci flow with initial condition $g_0$ must be K\\\"ahler. This leads to the K\\\"ahler-Ricci flow, which has been very useful in K\\\"ahler geometry: \n \\begin{equation}\\label{unkrflow}\n\\left\\{\n\\begin{array}{l}\n{ \\displaystyle \\ddt{g} = -Ric(g) ,}\\\\\n\\\\\ng|_{t=0} =g_0 .\n\\end{array} \\right.\n\\end{equation}\nIn \\cite{Cao1}, adapting certain arguments in \\cite{Y1}, Cao first studied the K\\\"ahler-Ricci flow and used this to give an alternative proof of the Calabi conjecture.\n \nThe Ricci flow always has a solution for small $t$. It was proved in \\cite{TZha} that \\eqref{unkrflow} admits a maximal solution on $[0, T)$, where \n\\begin{equation} \\label{T}\nT \\,=\\, \\sup \\{ t \\in \\mathbb{R} \\ | \\ [\\omega_0] + t [K_X] >0 \\}\n\\end{equation}\nand $\\omega_0$ denotes the K\\\"ahler form of $g_0$. This gives a sharp local existence theorem. Therefore, the K\\\"ahler-Ricci flow admits a long time solution if and only if the canonical class is nef. It was also shown in \\cite{Cao1} that the K\\\"ahler-Ricci flow after normalization always converges exponentially fast to a K\\\"ahler-Einstein metric if the first Chern class is negative or zero. If the first Chern class is positive, $T$ is finite and one can study the finer behavior of $g(t)$ as $t$ tends to $T$. 
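As a simple illustration of the formula \\eqref{T}, suppose $X$ is Fano and $[\\omega_0] = \\lambda c_1(X)$ for some $\\lambda>0$; since $[K_X] = -c_1(X)$, we have $[\\omega_0] + t[K_X] = (\\lambda - t)\\, c_1(X)$, which is a K\\\"ahler class precisely when $t<\\lambda$, and so $T=\\lambda$ in this case. 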
This has been extensively studied (see \\cite{P4, SeT, TZhu1, PSSW, CS, TiZ1, B, CW} etc.). One would hope that the K\\\"ahler-Ricci flow should deform any initial K\\\"ahler metric to a K\\\"ahler-Einstein metric; however, most K\\\"ahler manifolds do not have definite or vanishing first Chern class and so the flow will in general develop singularities. An Analytic Minimal Model Program (AMMP) through Ricci flow was initiated by Song-Tian more than a decade ago to study birational classification of compact K\\\"ahler manifolds, including algebraic manifolds. We refer the readers to \\cite{SoT3} for a description of the AMMP.\nOne crucial problem in this program is to study the formation of singularities along the K\\\"ahler-Ricci flow. It is conjectured by Song-Tian in \\cite{SoT3} that the K\\\"ahler-Ricci flow will deform a projective variety $X$ of nonnegative Kodaira dimension to its minimal model via finitely many divisorial metric contractions and metric flips in the Gromov-Hausdorff topology, and then eventually converge to a unique canonical metric of Einstein type on its unique canonical model. The existence and uniqueness are proved in \\cite{SoT3} for the analytic solutions of the K\\\"ahler-Ricci flow on algebraic varieties with log terminal singularities, and the K\\\"ahler-Ricci flow can be analytically and uniquely extended through divisorial contractions and flips \\cite{SoT3}. Finite time geometric surgeries in terms of the Gromov-Hausdorff topology are introduced and established in the case of K\\\"ahler surfaces and, more generally, flips induced by Mumford quotients in \\cite{SW1, SW2, SW3, SY, S1}. 
An alternative approach to understanding the K\\\"ahler-Ricci flow through singularities was proposed in \\cite{LT} in the framework of K\\\"ahler quotients, by transforming the parabolic complex Monge-Amp\\`ere equation into an elliptic $V$-soliton equation.\n\n\n\nIn this paper, we are interested in the geometric behavior of long time solutions of the K\\\"ahler-Ricci flow. We consider the normalized K\\\"ahler-Ricci flow on an $n$-dimensional projective manifold $X$ defined by\n \\begin{equation}\\label{krflow}\n\\left\\{\n\\begin{array}{l}\n{ \\displaystyle \\ddt{g} = -Ric(g) - g,}\\\\\n\\\\\ng|_{t=0} =g_0 .\n\\end{array} \\right.\n\\end{equation}\nwith the initial K\\\"ahler metric $g_0$. The long time existence of the flow (\\ref{krflow}) is equivalent to the canonical class $K_X$ being nef. A projective manifold with nef canonical bundle is called a minimal model. The well-known abundance conjecture in birational geometry predicts that $K_X$ being nef is equivalent to $K_X$ being semi-ample, and the conjecture holds up to complex dimension $3$. In this paper we will always assume the canonical bundle $K_X$ to be semi-ample, and a uniform scalar curvature bound is established for the global solutions of the flow (\\ref{krflow}). \n\nThe Kodaira dimension is the algebraic dimension measuring the size of the pluricanonical system of the underlying K\\\"ahler manifold $X$. When the canonical bundle $K_X$ is semi-ample or nef, the Kodaira dimension of $X$ must be a nonnegative integer no greater than $\\dim_\\mathbb{C} X=n$.\nWe will discuss some of the known results by Kodaira dimension ${\\rm Kod}(X)$.\n\n When ${\\rm Kod}(X) =n$, the canonical class of $X$ is big and nef and it was proved in \\cite{Ts, TZha} that the flow (\\ref{krflow}) converges weakly to the unique K\\\"ahler-Einstein current on the canonical model $X_{can}$ of $X$, which is smooth on $X_{can}^\\circ$, the regular set of $X_{can}$. 
Recently, it was proved in \\cite{S2} that the metric completion of such a smooth K\\\"ahler-Einstein metric on $X_{can}$ is homeomorphic to the projective variety $X_{can}$ itself. This result was used in \\cite{W} to obtain a uniform diameter bound for the flow (\\ref{krflow}). In the special case when $K_X$ is ample, it was shown in \\cite{Cao1} that the flow will always converge smoothly to the unique K\\\"ahler-Einstein metric on $X$.\n\n\nWhen ${\\rm Kod}(X)=0$, $X$ must be a Calabi-Yau manifold and, as we mentioned above, the unnormalized flow (\\ref{unkrflow}) converges smoothly \\cite{Cao1} to the unique Ricci-flat K\\\"ahler metric in the initial K\\\"ahler class.\n\nWhen ${\\rm Kod} (X)=\\kappa$ and $1\\leq \\kappa \\leq n-1$, the canonical ring is finitely generated and, for sufficiently large $m$, the pluricanonical system $|mK_X|$ induces a unique projective morphism\n$$\\Phi: X \\rightarrow X_{can}$$\nwhere $X_{can}$ is the unique canonical model of $X$ and $\\dim X_{can} = \\kappa$. $X_{can}$ has canonical singularities and the map $\\Phi$ is a holomorphic fibration, with fibres $(n-\\kappa)$-dimensional manifolds of vanishing first Chern class, over $X_{can}^\\circ$, the set of smooth points of $X_{can}$ over which $\\Phi$ is a submersion. $X_{can}^\\circ$ is a Zariski open set of $X_{can}$. In \\cite{ST1,ST2}, the twisted K\\\"ahler-Einstein (possibly singular) metric $g_{can}$ on $X_{can}$ is defined to be\n\\begin{equation}\\label{twke}\nRic(g_{can}) = - g_{can} + g_{WP},\n\\end{equation}\nwhere $g_{WP}$ is the Weil-Petersson current induced from the variation of the Calabi-Yau fibres of $X$ over $X_{can}$. In fact, $g_{can}$ has bounded local potentials and it is smooth on $X_{can}^\\circ$. 
It was shown in \\cite{ST1, ST2} that the solution of (\\ref{krflow}) converges in the sense of currents to the canonical twisted K\\\"ahler-Einstein current $g_{can}$ on $X_{can}$ and the local potential converges in $C^{1,\\alpha}$ on any compact subset of $X^\\circ= \\Phi^{-1}(X_{can}^\\circ)$. The local convergence of $g(t)$ to $g_{can}$ is improved to the $C^0$-topology in \\cite{TWY}. \n\nWe now state the first result of the paper.\n\n\n\\begin{theorem} \\label{main1} Let $X$ be a projective manifold with semi-ample canonical bundle. Let $g(t)$ be the solution of the normalized K\\\"ahler-Ricci flow (\\ref{krflow}) on $X$ with any initial K\\\"ahler metric $g_0$. Suppose there exist an open domain $U$ of $X$ containing a fibre of $X$ over $X_{can}$ and a constant $\\Lambda>0$ such that \n\\begin{equation}\\label{curvature bound}\n\\sup_{U\\times [0, \\infty)} |Ric(g(t))|_{g(t)} <\\Lambda .\n\\end{equation} Then there exists $D>0$ such that for all $t\\in [0, \\infty)$ we have\n\\begin{equation}\\label{diambd}\ndiam(X, g(t)) < D ,\n\\end{equation}\nwhere $diam(X, g(t))$ is the diameter of $(X, g(t))$.\n\\end{theorem}\n\nThe assumption on the Ricci curvature in Theorem \\ref{main1} is used to apply the relative volume comparison for the Ricci flow proved in \\cite{TiZ3}. It is expected that such a Ricci bound holds on any open domain $U$ compactly contained in $X^\\circ= \\Phi^{-1}(X_{can}^\\circ)$, but this is still open at the moment. The recent work of \\cite{HT} might give some technical hints on how to bound the Ricci curvature locally in $X^\\circ$. In fact, when the general fibre of $\\Phi: X\\rightarrow X_{can}$ is a complex torus, the full curvature tensors are uniformly bounded on any compact subset of $X_{can}^\\circ$ \\cite{TWY} and the following corollary immediately follows from Theorem \\ref{main1}. 
Combined with the uniform bound for the scalar curvature\n$$\\sup_{X\\times [0, \\infty)} |R(g(t))| < \\infty$$\nfor all time (\\cite{SoT4}), the estimate in Theorem \\ref{main1} for long time collapsing solutions can be compared to Perelman's diameter and scalar curvature bounds for non-collapsed solutions of the K\\\"ahler-Ricci flow on Fano manifolds. The diameter estimate can also be viewed as a boundedness result from the viewpoint of algebraic geometry and topology. Theorem \\ref{main1} can be largely extended to the K\\\"ahler case since the semi-ampleness of the canonical bundle already implies that the canonical model is projective. \n\n\n\n\\begin{corollary}\\label{mainc1} Let $g(t)$ be the solution of the normalized K\\\"ahler-Ricci flow (\\ref{krflow}) on $X$ with any initial K\\\"ahler metric $g_0$. If ${\\rm Kod}(X)\\geq \\dim X - 1$, or more generally if the general fibre of $\\Phi: X \\rightarrow X_{can}$ is a complex torus, then there exists $D>0$ such that for all $t\\in [0, \\infty)$ we have\n$$diam(X, g(t)) < D .$$\n\n\\end{corollary}\n\n\n\nBy Theorem \\ref{main1}, one can always extract a convergent sequence along time for the solution of the normalized K\\\"ahler-Ricci flow in the Gromov-Hausdorff topology; however, more delicate analysis is required to show the uniqueness of such limits. We naturally propose the following conjecture about the convergence of the flow (\\ref{krflow}) on smooth minimal models, i.e. projective manifolds with nef canonical bundle.\n\n\\begin{conjecture} \\label{conj} Let $X$ be a smooth K\\\"ahler manifold with nef canonical bundle. Then for any initial K\\\"ahler metric $g_0$ on $X$, the solution $g(t)$ of the normalized K\\\"ahler-Ricci flow (\\ref{krflow}) has uniformly bounded diameter and scalar curvature for all time. 
Furthermore, $(X, g(t))$ converges in the Gromov-Hausdorff topology to a unique compact metric space $(\\mathcal{Z}, d_\\mathcal{Z}) $ homeomorphic to the canonical model $X_{can}$.\n\n\n\\end{conjecture}\n We confirm the conjecture in the following theorem when $K_X$ is semi-ample, the Kodaira dimension is $1$ and the general fibre of $X$ over $X_{can}$ is a complex torus. In particular, it confirms the conjecture for the K\\\"ahler-Ricci flow on minimal elliptic surfaces of Kodaira dimension $1$ in \\cite{ST1}. \n\n\n\\begin{theorem}\\label{KRF: minimal model}\nUnder the same assumptions including (\\ref{curvature bound}) in Theorem \\ref{main1}, if the projective manifold $X$ has Kodaira dimension $1$, then any solution of the normalized K\\\"ahler-Ricci flow (\\ref{krflow}) converges in the Gromov-Hausdorff topology to the metric completion of $(X_{can}^\\circ, g_{can})$, which is homeomorphic to $X_{can}$. In particular, the conclusion holds for projective manifolds of Kodaira dimension $1$ and semi-ample canonical bundle whose general fibre over its canonical model is a complex torus. \n\\end{theorem}\n\nWe remark that the result in Theorem \\ref{KRF: minimal model} still holds in the case of higher Kodaira dimension if $X_{can}\\setminus X_{can}^\\circ$ consists of finitely many isolated points.\n\nThe proof of Theorem \\ref{main1} relies on the diameter estimate for a certain family of twisted K\\\"ahler-Einstein metrics established in \\cite{FGS} and the relative volume comparison established in \\cite{TiZ3}. The main technical contribution of the paper is to prove that the evolving metrics $g(t)$ of the K\\\"ahler-Ricci flow have suitable convexity on $\\Phi^{-1}(X_{can}^\\circ)$. 
Such a convexity result is built on the almost convexity of the twisted K\\\"ahler-Einstein metric $g_{can}$ on $X_{can}^\\circ$ in the following theorem for the continuity method, which improves the results in \\cite{FGS}.\n\n\\begin{theorem} \\label{main2}Let $X$ be a projective K\\\"ahler manifold of $\\dim_{\\mathbb{C}} X =n$ with semi-ample canonical line bundle $K_X$, and let $X_{can}$ be the canonical model of $X$. Let $A$ be an ample line bundle, fix a K\\\"ahler metric $g_A$ on $X$ whose K\\\"ahler form $\\omega_A$ lies in $[A]$, and let $g_t\\in [tA + K_X]$ be the unique K\\\"ahler metrics for $t\\in (0, \\infty)$ satisfying\n$$Ric(g_t) = -g_t + t g_A. $$\nThen the following hold.\n\n\\begin{enumerate}\n\n\\item $(X, g_t)$ converges in the Gromov-Hausdorff topology to a compact metric space $(\\mathcal{Z}, d_\\mathcal{Z})$ as $t\\rightarrow 0^+$.\n\n\\item $g_t$ converges in the $C^0$-topology on $\\Phi^{-1}(X_{can}^\\circ)$ to the pullback of $g_{can}$ on $X_{can}^\\circ$ as $t\\rightarrow 0^+$.\n\n\\item The metric completion of $(X_{can}^\\circ, g_{can})$ is isomorphic to $(\\mathcal{Z}, d_\\mathcal{Z})$.\n\n\\end{enumerate}\nIn particular, if $\\dim_{\\mathbb{C}} X_{can} \\leq 2$, or more generally if $X_{can}$ has only orbifold singularities, $(\\mathcal{Z}, d_\\mathcal{Z})$ is homeomorphic to $X_{can}$.\n\n\n\n\n\\end{theorem}\n\n\n\nThe advantage of using the continuity method over the Ricci flow is that the Ricci curvature is naturally bounded below uniformly and one can apply many existing techniques in comparison geometry. A more natural adaptation of Theorem \\ref{main2} is for the collapsing behavior of Ricci-flat K\\\"ahler metrics on a Calabi-Yau manifold admitting a holomorphic fibration with Calabi-Yau fibres. This topic has been extensively studied in \\cite{Tos, GTZ, ToZh, HT}.\n\n\n\\begin{theorem} \\label{main3}Let $X$ be a projective K\\\"ahler manifold of $\\dim_{\\mathbb{C}} X =n$ with $c_1(X)=0$. Suppose $L$ is a semi-ample line bundle over $X$. 
For sufficiently large $m$, the linear system $|mL|$ induces a holomorphic map $\\Phi: X \\rightarrow Y$ from $X$ to a projective variety $Y$. Let $A$ be an ample line bundle and $g_t\\in [tA + L]$ be the unique Calabi-Yau metrics for $t\\in (0, \\infty)$. Then the following hold.\n\n\\begin{enumerate}\n\n\\item $(X, g_t)$ converges in the Gromov-Hausdorff topology to a compact metric space $(\\mathcal{Z}, d_\\mathcal{Z})$ as $t\\rightarrow 0^+$.\n\n\\item $g_t$ converges in the $C^0$-topology on $\\Phi^{-1}(Y^\\circ)$ to the pullback of a unique smooth K\\\"ahler metric $g_Y$ on $Y^\\circ$ as $t\\rightarrow 0^+$. Here $Y^\\circ$ is the set of smooth points of $Y$ over which $\\Phi$ is a submersion. \n\n\\item $g_Y$ extends to a K\\\"ahler current on $Y$ with bounded local potentials and on $Y^\\circ$, we have\n$$Ric(g_Y) = g_{WP},$$\nwhere $g_{WP}$ is the Weil-Petersson metric of the variation of the smooth Calabi-Yau fibres of $X$ over $Y^\\circ$.\n\n\\item The metric completion of $(Y^\\circ, g_Y)$ is isomorphic to $(\\mathcal{Z}, d_\\mathcal{Z})$.\n\n\\end{enumerate}\nIn particular, if $\\dim_{\\mathbb{C}} Y \\leq 2$ or more generally if $Y$ has only orbifold singularities, $(\\mathcal{Z}, d_\\mathcal{Z})$ is homeomorphic to $Y$.\n\n\n\\end{theorem}\n\n\nThe twisted Ricci-flat K\\\"ahler metric $g_Y$ in Theorem \\ref{main3} was proposed in \\cite{ST1, ST2} as a special case of the twisted K\\\"ahler-Einstein metrics and it had already been implicitly studied in \\cite{GW, Fi} in the case of complex surfaces. Statement (4) in Theorem \\ref{main3} confirms the conjecture proposed in \\cite{ToZh} (Conjecture 1.1 (a) and (b)), related to an analogous conjecture by Gross \\cite{Gr}, Kontsevich-Soibelman \\cite{KS} and Todorov \\cite{Ma} for collapsing limits of Ricci-flat K\\\"ahler metrics near complex structure limits. Statement (3) in Theorem \\ref{main3} is shown in \\cite{ST1, ST2}. 
The sequential convergence in statement (1) is proved in \\cite{Tos, GTZ} and statement (2) is proved in \\cite{TWY}. Special cases of statement (4) are proved in \\cite{GTZ, ToZh} when $\\dim_{\\mathbb{C}} Y=1$ or $X$ is hyperk\\\"ahler. The main contribution of our work in this paper is statement (4), identifying the intrinsic and extrinsic geometric limits of collapsing Calabi-Yau metrics. \n\nWe would also like to point out that the projective assumption for the K\\\"ahler manifold $X$ in this paper is for convenience and is not essential in the proof. In fact, the semi-ampleness condition already implies $X_{can}$ is projective. \n\nWe give a brief outline of the paper. In section 2, we prove Theorem \\ref{main2} and in particular, we show that $(X_{can}^\\circ, g_{can})$ is almost geodesically convex in $(\\mathcal{Z}, d_\\mathcal{Z})$. In section 3, Theorem \\ref{main3} is proved by a slight modification of the proof of Theorem \\ref{main2}. In section 4, we prove our main result, Theorem \\ref{main1}, and its corollaries by using the result and proof of section 2. Finally, we prove Theorem \\ref{KRF: minimal model} in section 5. \n\n\\section{Proof of Theorem \\ref{main2}}\n\nIn this section, we will study the deformation of a family of collapsing twisted K\\\"ahler-Einstein metrics and prove Theorem \\ref{main2}.\n\n\nLet $X$ be a projective manifold of complex dimension $n$. Suppose the canonical line bundle $K_X$ is semi-ample and the Kodaira dimension of $X$ is $\\kappa$, i.e., $\\dim_{\\mathbb{C}}X_{can} = \\kappa$. In this section we will always assume $0<\\kappa < n$. Then the pluricanonical system $|mK_X|$ induces a holomorphic morphism\n$$\\Phi: X \\rightarrow X_{can}\\hookrightarrow \\mathbb{CP}^{N_m}, $$\nfor sufficiently large $m$, where $X_{can}$ is the unique canonical model of $X$.\n\n\n\nLet $\\omega_A$ be a K\\\"ahler metric in a K\\\"ahler class $A$ on $X$. 
We will now consider a continuous family of K\\\"ahler metrics $ \\omega (t)$ defined by\n\\begin{equation}\\label{contin1}\nRic( \\omega(t)) = - \\omega(t) + t \\omega_A, ~ t\\in (0, 1].\n\\end{equation}\nWe let $\\Omega$ be a smooth volume form on $X$ such that\n\\begin{equation}\n\\chi = \\sqrt{-1}\\partial\\dbar\\log \\Omega =\\frac{1}{m} \\Phi^*\\omega_{FS}\\in [K_X],\n\\end{equation}\nwhere $\\omega_{FS}$ is the Fubini-Study metric of \n$\\mathbb{CP}^{N_m}$. \nTherefore \n$$[ \\omega(t)] = [\\chi] + t [\\omega_A].$$ Throughout the paper, we abuse the notation by identifying $\\chi$ with $\\frac{1}{m}\\omega_{FS}|_{X_{can}}$ on $X_{can}$ as well.\nIf we write $$ \\omega(t) = \\chi +t \\omega_A +\\sqrt{-1}\\partial\\dbar \\psi (t)$$\nfor some $\\psi = \\psi(t) \\in C^\\infty(X)$, then, since $Ric( \\omega(t)) = -\\sqrt{-1}\\partial\\dbar \\log \\frac{ \\omega(t)^n}{\\Omega} - \\chi$, the equation (\\ref{contin1}) becomes, after normalizing $\\psi(t)$ by a suitable constant,\n\\begin{equation}\\label{contin2}\nt^{-(n-\\kappa)} (\\chi + t \\omega_A+ \\sqrt{-1}\\partial\\dbar \\psi)^n = e^{\\psi} \\Omega.\n\\end{equation}\n\nEquation (\\ref{contin2}) has a smooth solution for all $t > 0$ by \\cite{Y1, A} since $[\\chi+t\\omega_A ]$ is a K\\\"ahler class, and we are interested in the limiting behavior of $ \\omega(t)$ as $t \\rightarrow 0$. We first state some basic estimates for $\\psi$.\n\n\n\\begin{theorem} There exists $C>0$ such that for all $t\\in (0,1]$, we have\n$$|\\psi| \\leq C. $$\n\n\\end{theorem}\n\n\\begin{proof} By the maximum principle and the fact that $\\chi^{\\kappa+1}=0$ on $X$, there exist $C_1, C_2>0$ such that for any $t\\in (0,1]$, we have\n$$\\sup_X \\psi \\leq \\sup_X \\left(\\log \\frac{ t^{-(n-\\kappa)}(\\chi + t \\omega_A )^n}{\\Omega} \\right)\\leq \\sup_X \\left(\\log \\frac{ C_1 (\\chi + \\omega_A )^n}{\\Omega} \\right)\\leq C_2. 
$$\nThe right hand side of the equation (\\ref{contin2}) is then uniformly bounded above, and the lower bound of $\\psi$ follows directly from the $L^\\infty$-estimate for degenerate complex Monge-Amp\\`ere equations by Demailly-Pali \\cite{DP} (see also \\cite{Kol1, EGZ}).\n\n\n\\end{proof}\n\n\nWe notice that for any $t\\in (0,1]$, the Ricci curvature of $ \\omega(t)$ is uniformly bounded below by $-1$. We can then apply the following diameter estimate proved in \\cite{FGS}.\n\n\n\\begin{lemma} Let $ g(t)$ be the K\\\"ahler metric associated to $ \\omega(t)$.\nThere exists $L>0$ such that for all $t\\in (0,1]$,\n$$Diam(X, g(t)) \\leq L.$$\n\n\\end{lemma}\n\n\nBy the volume comparison, we immediately have the following volume estimate.\n\n\\begin{corollary} \\label{volr} There exists $C>0$ such that for any point $p\\in X$ and $t\\in (0,1]$ and $0< r < L$,\n$$Vol(B_{ g(t)} (p, r), g(t) ) \\geq Cr^{2n} t^{n-\\kappa}.$$\n\n\n\n\\end{corollary}\n\n\n\nThe following lemma is due to \\cite{G} (also see \\cite{CC, Da}) as a consequence of the volume comparison, and it is very useful for proving geometric convexity for certain families of metric spaces.\n\n\\begin{lemma} \\label{gromov} Let $(M, g)$ be a Riemannian manifold of dimension $n$ satisfying\n$$Ric(g) \\geq - g, ~ Diam(M, g) \\leq L.$$\nLet $E\\subset M$ be any compact set with a smooth boundary. If there are two points $p_1, p_2\\in M$ with $$B_g(p_i, r) \\cap E = \\emptyset, ~ i =1, 2$$\nand every minimal geodesic from $p_1$ to points in $B_g(p_2, r)$ intersects $E$, then there exists $c=c(n, r, L)>0$ such that\n$$Vol(\\partial E, g) \\geq c Vol(B_g(p_2, r), g). $$\n\n\n\\end{lemma}\n\n\nWe will construct the set $E$ for the family of metrics $ \\omega(t)$.\nFirst, by the semi-ampleness of $K_X$, we can assume $K_X$ is the pullback of an ample line bundle $\\mathcal{L}$ on $X_{can}$. 
We can pick an effective $\\mathbb{Q}$-divisor $\\sigma$ on $X_{can}$ such that\n\n\\begin{enumerate}\n\n\\item $\\sigma$ lies in the class $[\\mathcal{L}]$,\n\n\\item $X_{can}\\setminus X_{can}^\\circ$ is contained in the support of $\\sigma$.\n\n\n\\end{enumerate}\nWe let $\\sigma'=\\Phi^*\\sigma$.\n\nSecond, we consider a log resolution of $X_{can}$ defined by\n$$\\Psi: W \\rightarrow X_{can}$$\nsuch that\n\n\\begin{enumerate}\n\n\\item $W$ is smooth and the exceptional locus of $\\Psi$ is a union of smooth divisors of simple normal crossings.\n\n\\item $\\tilde \\sigma$, the pullback of $\\sigma$, is a union of smooth divisors of simple normal crossings.\n\n\\end{enumerate} The Fubini-Study metric $\\chi$ on $X_{can}$ also lies in $[\\sigma]$.\nLet $\\tilde X$ be the blow-up of $X$ induced by $\\Psi: W \\rightarrow X_{can}$ and let $ \\Psi': \\tilde X \\rightarrow X$ be the induced map. We also pick a hermitian metric $h$ on $\\mathcal{L}$ such that $Ric(h) = \\chi$. Let $\\tilde \\sigma' = (\\Psi')^* \\sigma'$. Away from $\\tilde \\sigma'$, $\\tilde X$ can be identified with $X$, since we may assume the blow-ups take place over the support of $ \\sigma'$. We also let $ h' = \\Phi^*h$ and $\\tilde h' = (\\Psi')^* h'$.\n\n\nThe following is an analogue of the Schwarz lemma.\n\\begin{lemma} \\label{schwa} There exists $c>0$ such that for all $t\\in (0,1]$ we have on $X$,\n$$ \\omega(t) \\geq c \\chi. 
$$\n\n\\end{lemma}\n\n\\begin{proof} We can directly apply the maximum principle to the quantity\n$$\\log tr_{ \\omega}(\\chi) - K \\psi$$\nfor sufficiently large $K$, and the estimate of the lemma follows immediately since $\\psi$ is uniformly bounded.\n\n\n\\end{proof}\n\n\nFor simplicity, we assume that \n$$|\\tilde \\sigma'|^2_{\\tilde h'} \\leq 1$$\neverywhere.\nLet $F$ be a standard decreasing smooth cut-off function defined on $[0, \\infty)$ satisfying\n\n\\begin{enumerate}\n\\item $F(x)=3$, if $x\\in [0, 1\/2]$,\n\n\\item $F(x)=0$, if $x\\in [3, \\infty)$,\n\\item $F(x) = 3-x$, if $x\\in [1,2]$.\n\\end{enumerate}\nLet $$\\eta_\\epsilon = \\max \\left( \\log |\\tilde\\sigma'|^2_{\\tilde h'}, \\log \\epsilon \\right)$$\nfor some sufficiently small $\\epsilon>0$ to be determined later.\nBy the construction of $\\tilde\\sigma'$ and $h$, we have\n$$ \\sqrt{-1}\\partial\\dbar \\log |\\tilde \\sigma'|^2_{\\tilde h'} + \\chi \\geq 0,$$\ntherefore $$\\eta_\\epsilon \\in PSH(X, \\chi) \\cap C^0(X). $$\nIn particular, for sufficiently small $\\epsilon>0$, we have $$ \\log \\epsilon \\leq \\eta_\\epsilon \\leq 0. $$\nWe define $\\rho_\\epsilon$ by\n$$\\rho_\\epsilon = F\\left( \\frac{100\\eta_\\epsilon}{\\log \\epsilon} \\right). $$\n\n\nThe following estimate is based on the calculations in \\cite{S2}.\n\\begin{lemma} \\label{cutoff1} There exists $C>0$ such that for any $t\\in (0,1]$ and any $0<\\epsilon<1$, we have\n$$\\int_{ X} |\\nabla \\rho_\\epsilon|^2 \\, \\omega(t)^n \\leq C(-\\log \\epsilon)^{-1} t^{n-\\kappa}. 
$$\n\n\\end{lemma}\n\n\\begin{proof} There exist $C_1, C_2 >0$ such that\n\\begin{eqnarray*}\n&& \\sqrt{-1} \\int_X \\partial \\rho_\\epsilon \\wedge \\overline{\\partial} \\rho_\\epsilon \\wedge \\omega^{n-1}\\\\\n&=& 10000(\\log \\epsilon)^{-2}\\sqrt{-1} \\int_X (F')^2 \\partial \\eta_\\epsilon \\wedge \\overline{\\partial} \\eta_\\epsilon \\wedge \\omega^{n-1}\\\\\n&\\leq & C_1 (\\log \\epsilon)^{-2} \\int_X (- \\eta_\\epsilon) \\sqrt{-1}\\partial\\dbar \\eta_\\epsilon \\wedge \\omega^{n-1}\\\\\n&=& C_1 (\\log \\epsilon)^{-2} \\left( \\int_X (- \\eta_\\epsilon)(\\chi + \\sqrt{-1}\\partial\\dbar \\eta_\\epsilon) \\wedge \\omega^{n-1} + \\int_X \\eta_\\epsilon \\chi \\wedge \\omega^{n-1} \\right) \\\\\n&\\leq & C_1 (-\\log \\epsilon)^{-1} \\int_X (\\chi + \\sqrt{-1}\\partial\\dbar \\eta_\\epsilon) \\wedge \\omega^{n-1} \\\\\n&= & C_1 (-\\log \\epsilon)^{-1} \\int_X \\chi \\wedge \\omega^{n-1} \\\\\n&\\leq &C_2 (-\\log \\epsilon)^{-1} t^{n-\\kappa }.\n\\end{eqnarray*}\n\n\n\\end{proof}\n\nWe will pick one of the level sets of $|\\tilde\\sigma'|^2_{\\tilde h'}$ to be the hypersurface $\\partial E$ in Lemma \\ref{gromov}.\n\n\\begin{lemma} \\label{bdyarea} There exists $C>0$ such that for any $0<\\epsilon_0<1$ and any $t\\in (0,1]$, there exists $\\epsilon_0^2 \\leq \\epsilon \\leq \\epsilon_0$ such that \n$$ Vol_{ \\omega(t)}(\\{|\\tilde\\sigma'|^{200}_{\\tilde h'} = \\epsilon \\}) \\leq C(-\\log \\epsilon)^{-1\/2} t^{n-\\kappa} . $$\n\n\n\\end{lemma}\n\n\n\\begin{proof} We apply the co-area formula\n$$\\int_X H dg = \\int_{-\\infty}^\\infty \\int_{\\{G=u\\}} \\frac{ H}{|\\nabla G|} dg|_{G=u} d u $$\nby letting $H= |\\nabla G|$ and $G= \\rho_\\epsilon$. By the previous lemma and the Cauchy-Schwarz inequality, there exists $C>0$ such that for all sufficiently small $\\epsilon>0$ and $t\\in (0, 1]$,\n$$\\int_X |\\nabla \\rho_\\epsilon| \\omega^n \\leq \\left(\\int_X |\\nabla \\rho_\\epsilon|^2 \\omega^n \\right)^{1\/2} \\left(\\int_X \\omega^n \\right)^{1\/2} \\leq C(-\\log\\epsilon)^{-1\/2} t^{n-\\kappa}. 
$$\nWe consider the region $$B_{\\epsilon_0} = \\{ \\epsilon_0^2\\leq |\\tilde\\sigma'|^{200}_{\\tilde h'} \\leq \\epsilon_0 \\}.$$ In $B_{\\epsilon_0}$,\n$$1\\leq \\rho_{\\epsilon_0}=F(100\\eta_{\\epsilon_0}\/\\log \\epsilon_0) \\leq 2$$\nand\n$$\\int_1^2\nVol(\\{\\rho_{\\epsilon_0} = u\\}) d u \\leq C(-\\log\\epsilon_0)^{-1\/2} t^{n-\\kappa} .$$\nBy the mean value theorem, there is $a\\in[1,2]$ such that\n$$Vol(\\{\\rho_{\\epsilon_0} = a\\}) \\leq C(-\\log\\epsilon_0)^{-1\/2} t^{n-\\kappa} .$$\nIn other words,\n$$Vol(\\{|\\tilde\\sigma'|^{200}_{\\tilde h'}=\\epsilon_0^{3-a}\\}) \\leq C(-\\log\\epsilon_0)^{-1\/2} t^{n-\\kappa} .$$\n\\end{proof}\n\nLet \n\\begin{equation}\\label{deset}\nD_\\epsilon = X \\setminus \\{ |\\tilde\\sigma'|^{200}_{\\tilde h'} <\\epsilon\\}. \n\\end{equation}\nFor sufficiently large $N>0$,\n\\begin{equation}\\label{gradchi}\n\\sup_X \\left |\\partial |\\tilde \\sigma'|_{\\tilde h'} ^{2N}\\right|_{\\chi} <\\infty\n\\end{equation}\n because there exists $C=C(N)>0$ such that\n$$ \\chi\\geq C |\\tilde \\sigma' |^{2N}_{\\tilde h'} \\, g_A$$\nfor some fixed K\\\"ahler metric $g_A$ on $X$. Without loss of generality, we can assume $N=100$ for simplicity.\nThe previous lemma indicates that for almost every sufficiently small $\\epsilon>0$, $\\partial D_\\epsilon$ has very small volume. The following lemma also shows that $\\{ |\\tilde \\sigma'|^{200}_{\\tilde h'} < \\epsilon\\}$ has very small volume.\n\n\\begin{lemma} \\label{smvol} For any $\\delta>0$, there exists $\\epsilon>0$ such that for all $t\\in (0,1]$,\n$$\\int_{|\\tilde\\sigma'|^{200}_{\\tilde h'} \\leq \\epsilon} \\omega(t)^n \\leq \\delta t^{n-\\kappa}.$$\n\n\n\\end{lemma}\n\n\\begin{proof} First we notice that $\\rho_\\epsilon \\geq 1$ when $|\\tilde\\sigma'|^{200}_{\\tilde h'} \\leq \\epsilon$ and so\n$$\\int_{|\\tilde\\sigma'|^{200}_{\\tilde h'} \\leq \\epsilon} \\omega(t)^n \\leq \\int_X \\rho_\\epsilon \\omega(t)^n. 
$$\nAlso $$\\lim_{\\epsilon\\rightarrow 0} \\int_{|\\tilde\\sigma'|^{200}_{\\tilde h'} \\leq \\epsilon} \\Omega = 0 $$\nand if we let $\\theta(t) = \\chi+ t\\omega_A$, then for some uniform $C>0$,\n$$ \\int_X \\rho_\\epsilon \\theta^n \\leq C t^{n-\\kappa} \\int_X \\rho_\\epsilon\\Omega$$\nand\n$$\\int_X \\rho_\\epsilon ( \\omega(t)^n - \\theta^n) = \\sum_{l=0}^{n-1} \\int_X \\rho_\\epsilon \\sqrt{-1}\\partial\\dbar \\psi \\wedge \\omega(t)^l \\wedge \\theta^{n-1-l}.$$\nNow, by calculations similar to those in the proof of Lemma \\ref{cutoff1}, there exist $C_1, C_2, C_3, C_4>0$ such that\n\n\\begin{eqnarray*}\n&& \\int_X \\rho_\\epsilon \\sqrt{-1}\\partial\\dbar \\psi\\wedge \\omega^l \\wedge \\theta^{n-1-l}\\\\\n&=& \\int_X \\psi \\sqrt{-1}\\partial\\dbar \\rho_\\epsilon \\wedge \\omega^l \\wedge \\theta^{n-1-l} \\\\\n&=& \\int_X \\psi \\left( 10^2(\\log \\epsilon)^{-1}F' \\sqrt{-1}\\partial\\dbar \\eta_\\epsilon + 10^4( \\log \\epsilon)^{-2}F'' \\partial \\eta_\\epsilon \\wedge \\overline{\\partial} \\eta_\\epsilon \\right) \\wedge \\omega^l \\wedge \\theta^{n-1-l} \\\\\n&\\leq& C_1(-\\log \\epsilon)^{-1} \\int_X (\\sqrt{-1}\\partial\\dbar \\eta_\\epsilon + \\chi) \\wedge \\omega^l \\wedge \\theta^{n-1-l} + C_1(-\\log \\epsilon)^{-1} \\int_X \\chi \\wedge \\omega^l \\wedge \\theta^{n-1-l} \\\\\n&&+C_1 (\\log\\epsilon)^{-2}\\int_X \\partial \\eta_\\epsilon \\wedge \\overline{\\partial} \\eta_\\epsilon \\wedge \\omega^l \\wedge \\theta^{n-1-l} \\\\\n&\\leq& C_2(-\\log\\epsilon)^{-1} [\\chi]\\cdot[ \\omega]^{n-1}- C_1 (\\log\\epsilon)^{-2}\\int_X \\eta_\\epsilon \\sqrt{-1}\\partial\\dbar \\eta_\\epsilon \\wedge \\omega^l \\wedge \\theta^{n-1-l}\\\\\n&\\leq& C_3 (-\\log\\epsilon)^{-1} [\\chi]\\cdot [ \\omega(t)]^{n-1}\\\\\n&\\leq& C_4(-\\log\\epsilon)^{-1} t^{n-\\kappa}.\n\\end{eqnarray*}\n\nThe lemma easily follows by combining the above estimates.\n\n\\end{proof}\n\nRecall that there exists $L>0$ such that for all $t\\in (0,1]$,\n$$ diam(X, \\omega(t)) \\leq L. 
$$\n\n\\begin{lemma} \\label{distance1} For any $\\delta>0$, there exists $0<\\epsilon <\\delta$ such that for any $t\\in (0,1]$ and any two points\n$p_1, p_2 \\in D_\\delta$, \n there exists a smooth path $\\gamma_t \\subset D_ \\epsilon $ joining $p_1$ and $p_2$ satisfying\n$$\\mathcal{L}_{g(t)}(\\gamma_t) \\leq d_{g(t)}(p_1, p_2) + \\delta, $$\nwhere $\\mathcal{L}_{g(t)}(\\gamma_t) $ is the arc length of $\\gamma_t$ with respect to the metric $g(t)$.\n\\end{lemma}\n\n\\begin{proof} \nLet\n$$K_{\\epsilon_1}(t)=\\{ x\\in X ~|~ d_{g(t)}(x, \\partial D_\\delta)< \\epsilon_1\\} . $$\nFor any $x\\in K_{\\epsilon_1}(t)$, there exists $x'\\in \\partial D_\\delta$ such that\n\\begin{eqnarray*}\n|\\tilde \\sigma'|_{\\tilde h'}^{200}(x) &\\geq& |\\tilde \\sigma'|_{\\tilde h'}^{200}(x') - \\left(\\sup_X \\left|\\nabla |\\tilde\\sigma'|_{\\tilde h'}^{200}\\right|_{g(t)}\\right) d_{g(t)}(x, x') \\\\\n&\\geq& |\\tilde \\sigma'|_{\\tilde h'}^{200}(x') - C_1 \\left(\\sup_X \\left|\\nabla |\\tilde\\sigma'|_{\\tilde h'}^{200}\\right|_{\\chi}\\right) d_{g(t)}(x, x')\\\\\n&\\geq& \\delta- C_2 \\epsilon_1\n\\end{eqnarray*}\nby Lemma \\ref{schwa} and (\\ref{gradchi}), where $C_1, C_2$ do not depend on $\\delta$, $\\epsilon_1$ or $t$. 
By choosing $\\epsilon_1 \\ll \\delta$, we have for all $t\\in (0,1)$\n$$K_{\\epsilon_1} (t)\\subset D_{\\epsilon_1}.$$\nWe choose $\\epsilon< \\epsilon_1$ with \n$$K_{\\epsilon_1} (t) \\subset D_{\\epsilon_1} \\subset D_\\epsilon$$ and by Lemma \\ref{bdyarea}, there exists $C_3>0$ such that \n$$Vol_{g(t)}(\\partial D_\\epsilon ) \\leq C_3 (-\\log \\epsilon)^{-\\frac{1}{2}} t^{n-\\kappa} $$\nfor all $t\\in (0,1)$.\n\nBy Corollary \\ref{volr}, there exists $c_1>0$ such that for all $t\\in (0,1)$ and $r<1$\n$$Vol_{ g (t)} (B_{ g (t)}(p_i, r)) \\geq c_1 r^{2n} t^{n-\\kappa} $$\nand so \n$$Vol_{ g (t)} (B_{ g (t)}(p_i, \\epsilon_1)) \\geq c_1 (\\epsilon_1)^{2n} t^{n-\\kappa}, ~ i=1, 2.$$\n\nSince $B_{ g (t)}(p_i, \\epsilon_1) \\subset\\subset D_\\epsilon$, $i=1, 2$, we can apply Lemma \\ref{gromov} by choosing sufficiently small $\\epsilon>0$ with \n$$ (-\\log \\epsilon)^{\\frac{1}{2}} (\\epsilon_1)^{2n} \\gg 1$$\nand letting $E = D_\\epsilon$ and $r=\\epsilon_1$. \nHence for all $t\\in (0, 1)$, there exist $q\\in B_{g(t)}(p_2, \\epsilon)$ and a minimal geodesic $\\hat\\gamma_t \\subset D_\\epsilon$ (with respect to $g(t)$) joining $p_1$ and $q$ and\n$$ \\mathcal{L}_{g(t)}(\\hat\\gamma_t) =d_{g(t)}(p_1, q) \\leq d_{g(t)} (p_1, p_2) + \\epsilon.$$\nNow we can complete the proof of the lemma by letting $\\gamma_t$ be the curve combining $\\hat\\gamma_t$ and a minimal geodesic joining $p_2$ and $q$, because after further requiring $2\\epsilon \\leq \\delta$,\n$$\\mathcal{L}_{g(t)}(\\gamma_t) \\leq d_{g(t)}(p_1, p_2) + 2\\epsilon \\leq d_{g(t)}(p_1, p_2) + \\delta. $$\n\n\\end{proof}\n\n\nWe will also need the $C^0$ regularity of metrics. Let $\\omega_{can} = \\chi+\\sqrt{-1}\\partial\\dbar \\psi_{can}$ be the twisted K\\\"ahler-Einstein metric on $X_{can}$ satisfying\n$$Ric(\\omega_{can})= -\\omega_{can} + \\omega_{WP}. 
$$\nWe now define the semi-flat closed $(1,1)$-current on $X$ introduced in \\cite{ST1, ST2} by\n$$\\omega_{SF} = \\omega_A + \\sqrt{-1}\\partial\\dbar \\phi_{SF}$$ such that for any $z \\in X_{can}^{\\circ}$,\n$$Ric(\\omega_{SF} |_{X_z}) =0, ~~ \\int_{X_z} \\phi_{SF} \\omega_0^{n-\\kappa} |_{X_z}=0. $$\n\nThe following lemma is due to \\cite{FGS, TWY}.\n\n\n\\begin{lemma} \\label{equivalence} For any $\\epsilon>0$, there exists $h_\\epsilon (t)\\geq 0$ with $\\lim_{t\\rightarrow 0} h_\\epsilon(t) =0$ such that for all $t\\in (0,1] $, we have on $D_\\epsilon$,\n$$ (1-h_\\epsilon(t)) (\\Phi^*\\omega_{can} + t \\omega_{SF}) \\leq \\omega(t) \\leq (1+h_\\epsilon(t)) (\\Phi^*\\omega_{can} + t \\omega_{SF}) .$$\nIn particular, $\\omega(t)$ converges in $C^0$-topology on $\\Phi^{-1}(X_{can}^\\circ)$ to $\\Phi^*\\omega_{can}$ as $t\\rightarrow 0^+$.\n\n\\end{lemma}\n\nNow we are ready to prove the main result of this section.\n\n\n\\begin{proposition} \\label{convex1} For any $\\delta>0$, there exist $\\epsilon_1 > \\epsilon_2 >0$ such that for all $t\\in (0,1]$,\n\n\\begin{enumerate}\n\n\\item $Vol( X\\setminus D_{\\epsilon_1}, \\omega(t)) < \\delta$.\n\n\\item for any two points $p, q \\in D_{ \\epsilon _1}$, there exists a continuous path $\\gamma_t \\subset D_{\\epsilon _2}$ joining $p$ and $q$ such that\n$$\\mathcal{L}_{g(t)}(\\gamma_t) \\leq d_{g(t)}(p, q) + \\delta \\leq L+\\delta.$$\n\n\\end{enumerate}\n\n\\end{proposition}\n\n\n\\begin{proof} For any $\\delta>0$, we can always choose $\\epsilon_1$ sufficiently small so that the first estimate in the proposition holds by Lemma \\ref{smvol}. The second statement follows directly by Lemma \\ref{distance1} after choosing both $\\epsilon_1$ and $\\epsilon_2$ sufficiently small.\n\\end{proof}\n\n\n\nProposition \\ref{convex1} has many geometric consequences. We will use Proposition \\ref{convex1} to prove Theorem \\ref{main2}. 
The following proposition proves the first three statements in Theorem \\ref{main2}.\n\n\\begin{proposition} \\label{contpro} The following hold.\n\\begin{enumerate}\n\n\\item $(X, g(t))$ converges in Gromov-Hausdorff topology to a compact metric space $(\\mathcal{Z}, d_\\mathcal{Z})$ as $t\\rightarrow 0^+$,\n\n\n\\item $\\omega(t)$ converges in $C^0$-topology on $\\Phi^{-1}(X_{can}^\\circ)$ to the pullback of a smooth K\\\"ahler metric $g_{can}$ on $X_{can}^\\circ$ as $t\\rightarrow 0^+$,\n\n\\item the metric completion of $(X_{can}^\\circ, g_{can})$ is isomorphic to $(\\mathcal{Z}, d_\\mathcal{Z})$.\n\n\\end{enumerate}\n\n\n\n\\end{proposition}\n\n\n\\begin{proof} It is proved in \\cite{FGS} that for any sequence $t_j\\rightarrow 0$, $(X, g(t_j))$ converges in Gromov-Hausdorff topology to a compact metric space $(\\mathcal{Z}, d_{\\mathcal{Z}})$ after possibly passing to a subsequence and so (1) follows.\n\n(2) follows directly from Lemma \\ref{equivalence}.\n\nIn order to prove (3), we again apply the result of \\cite{FGS} that $(X_{can}^\\circ, g_{can})$ can be locally isometrically embedded into $(\\mathcal{Z}, d_{\\mathcal{Z}})$ as a dense open set in $\\mathcal{Z}$. For any two points $p, q \\in X_{can}^\\circ$, we choose arbitrary points $p'$ and $q'$ in the fibres $X_p=\\Phi^{-1}(p) $ and $X_q=\\Phi^{-1}(q)$. By Proposition \\ref{convex1}, for any $\\delta>0$, there exists $\\epsilon >0$ such that for all $t_j $,\n %\n %\n$$p', q' \\in D_{\\epsilon} $$\nand there exists a continuous path $\\gamma_{t_j} \\subset D_{\\epsilon}$ joining $p'$ and $q'$ such that\n$$\\mathcal{L}_{g(t_j)}(\\gamma_{t_j}) \\leq d_{ g(t_j)}(p', q') + \\delta. $$\nSince the fibre diameter uniformly tends to $0$ away from singular fibres and $g(t_j)$ converges uniformly in $C^0$ to $g_{can}$ on $D_\\epsilon$, $p'$ and $q'$ must converge in Gromov-Hausdorff distance to $p$ and $q$ in $(\\mathcal{Z}, d_\\mathcal{Z})$. 
Then there exists $J>0$ such that for all $j\\geq J$, we have \n$$\\mathcal{L}_{g(t_j)}(\\gamma_{t_j}) \\leq d_\\mathcal{Z}(p, q) + 2\\delta.$$\nWe can also assume for all $j\\geq J$, we have \n$$g_{can} \\leq (1+\\delta)g(t_j) $$\non $D_\\epsilon$ by the $C^0$-estimate and convergence of $g(t_j)$ to $g_{can}$.\nTherefore for $j\\geq J$, we have\n\\begin{eqnarray*}\nd_{g_{can}|_{X_{can}^\\circ}}(p, q) &\\leq& \\mathcal{L}_{g_{can}}(\\Phi(\\gamma_{t_j})) \\leq (1+\\delta) \\mathcal{L}_{g(t_j)}(\\gamma_{t_j}) \\\\\n&\\leq & (1+\\delta) (d_\\mathcal{Z}(p, q) + 2\\delta)\\\\\n&\\leq& d_\\mathcal{Z}(p, q) + \\delta(diam(\\mathcal{Z}, d_\\mathcal{Z}) + 2+2\\delta)\n\\end{eqnarray*}\nwhere $d_{g_{can}|_{X_{can}^\\circ}}$ is the distance function on $X_{can}^\\circ$ induced by $g_{can}|_{X_{can}^\\circ}$. By letting $\\delta \\rightarrow 0$, we have on $X_{can}^\\circ$,\n\\begin{equation}\\label{d1side}\nd_{g_{can}|_{X_{can}^\\circ}} \\leq d_\\mathcal{Z}.\n\\end{equation}\nOn the other hand, for any $\\delta>0$, by definition there exists a continuous path $\\gamma_\\delta$ in $X_{can}^\\circ$ such that\n$$ \\left| \\mathcal{L}_{g_{can}} (\\gamma_\\delta) - d_{g_{can}|_{X_{can}^\\circ}} (p, q) \\right|< \\delta, $$\nwhile for sufficiently large $j>0$,\n$$\\mathcal{L}_{g_{can}} (\\gamma_\\delta) \\geq \\mathcal{L}_{g(t_j)} (\\gamma'_\\delta)- \\delta \\geq d_{ g(t_j)}(p', q') - \\delta \\geq d_\\mathcal{Z}(p, q) - 2\\delta,$$\nwhere $\\gamma'_\\delta$ is a lift of $\\gamma_\\delta$ with $p'$ and $q'$ as the end points.\nTherefore we have on $X_{can}^\\circ$,\n$$d_{g_{can}|_{X_{can}^\\circ}} \\geq d_\\mathcal{Z}$$\nand so combined with (\\ref{d1side}), we have $$d_{g_{can}|_{X_{can}^\\circ}} = d_\\mathcal{Z}|_{X_{can}^\\circ}. 
$$\nThis immediately implies that the identity map on $X_{can}^\\circ$ induces a Lipschitz map\n$$\\mathcal{F}: (\\mathcal{Z}, d_\\mathcal{Z}) \\rightarrow (Z_{can}, d_{Z_{can}}), $$\nwhere $(Z_{can}, d_{Z_{can}})$ is the metric completion of $(X_{can}^\\circ, g_{can})$.\n\nSince $X_{can}^\\circ$ is a dense open set in $\\mathcal{Z}$, for any point $z\\in \\mathcal{Z}$, there exists a sequence $z_i \\in X_{can}^\\circ$ converging to $z$ with respect to $d_\\mathcal{Z}$. Since $d_{g_{can}|_{X_{can}^\\circ}} = d_\\mathcal{Z} $ on $X_{can}^\\circ$, $z_i$ must also converge in $(Z_{can}, d_{Z_{can}})$ and so $\\mathcal{F}$ must be injective. The same argument implies $\\mathcal{F}$ is also surjective. This completes the proof of the proposition.\n\n\\end{proof}\n\n\nWe have completed the proof for the statements (1), (2), (3) of Theorem \\ref{main2} by Proposition \\ref{contpro}. \n\n\n\n\nNow we will prove the last part of Theorem \\ref{main2}. First, we state the following H\\\"older estimate in \\cite{Kol2} for complex Monge-Amp\\`ere equations on K\\\"ahler orbifolds.\n\n\\begin{lemma}\\label{holder1} Let $X$ be an $n$-dimensional K\\\"ahler orbifold. Let $\\omega_{orb}$ be an orbifold K\\\"ahler metric on $X$ and $\\Omega$ be an orbifold volume form on $X$. We consider the following complex Monge-Amp\\`ere equation\n$$(\\omega_{orb} + \\sqrt{-1}\\partial\\dbar\\varphi)^n = F \\Omega, $$\nwhere $F$ is a non-negative function with $\\int_X F \\Omega = \\int_X \\omega_{orb}^n$.\nIf $||F||_{L^p(X, \\Omega)} <\\infty$ for some $p>1$, then there exist $\\alpha=\\alpha(X, p) \\in (0,1)$ and $C=C(X, p, \\omega_{orb}, ||F||_{L^p(X, \\Omega)})>0$ such that\n$$||\\varphi -\\sup_X \\varphi||_{C^\\alpha(X, \\omega_{orb})} \\leq C. $$\n\n\n\\end{lemma}\n\n\nThe following lemma is a slight orbifold generalization of a distance estimate obtained in \\cite{Li}.\n\n\n\\begin{lemma} \\label{holder2} Let $X$ be an $n$-dimensional K\\\"ahler orbifold. 
Let $\\omega_{orb}$ be an orbifold K\\\"ahler metric on $X$. Suppose $\\omega \\in [\\omega_{orb}]$ is an orbifold K\\\"ahler metric on $X$ satisfying\n$$\\left\\|\\frac{\\omega^n }{\\omega_{orb}^n} \\right\\|_{L^p(X, \\omega_{orb})} <\\infty. $$\nThen there exist $\\alpha=\\alpha(X, p)\\in (0,1)$ and $C=C\\left(X, p, \\omega_{orb}, \\left\\|\\omega^n \/\\omega_{orb}^n \\right\\|_{L^p(X, \\omega_{orb})} \\right)$ such that\n$$ d_{\\omega}(p, q) \\leq Cd_{\\omega_{orb}}(p, q)^\\alpha $$\nfor any two points $p, q \\in X$.\n\\end{lemma}\n\n\n\\begin{proof} The proof follows the argument of \\cite{Li}. We include the argument for the sake of completeness since it is fairly short and effective.\n\n\nFor any $p\\in X$ we consider the distance function $f(z)=d_{\\omega}(p,z)$. Let $\\pi:\\tilde B\\rightarrow B$ be a local orbifold uniformization at $p$ on a metric ball $B=B_{\\omega_{orb}}(p,4r_0)$ for some uniform radius $r_0$. Let $\\tilde{\\omega}_{orb}=\\pi^*\\omega_{orb}$ and $\\tilde{\\omega}=\\pi^*\\omega$. Denote by $\\tilde B=\\tilde B_{\\tilde{\\omega}_{orb}}(\\tilde{p},4r_0)$ the lifted metric ball and by $\\tilde f(\\tilde z)=d_{\\tilde \\omega}(\\tilde p,\\tilde z)$ the lifted distance function accordingly.\n\n\n\n\nFor any $\\tilde{q}\\in \\tilde B_{\\tilde{\\omega}_{orb}}(\\tilde{p},r_0)$ and radius $r\\le r_0$, we define a cut-off function $\\tilde\\rho_r$ via\n$$\\tilde\\rho_r = F(d_{\\tilde\\omega_{orb}}(\\tilde q, \\cdot)\/r)$$\nwhere $d_{\\tilde\\omega_{orb}}$ is the distance function with respect to $\\tilde\\omega_{orb}$ and $F$ is a smooth nonnegative cut-off function with $F(x)=1$ for $x\\in [0, 1]$ and $F(x)=0$ for $x\\geq 2.$ Then $-C_1 r^{-2} \\tilde \\omega_{orb}\\leq \\sqrt{-1}\\partial\\dbar \\tilde \\rho_r \\leq C_1 r^{-2} \\tilde \\omega_{orb}$ in $\\tilde B_{\\tilde{\\omega}_{orb}}(\\tilde{q},2r)\\subset \\tilde B$ for some fixed $C_1>0$, and for $r\\leq r_0$ there exists $C_2>0$ such that\n\\begin{eqnarray*}\n\\int_{\\tilde B_{\\tilde\\omega_{orb}}(\\tilde q, r)} tr_{\\tilde \\omega_{orb}}(\\tilde \\omega) 
\\tilde\\omega_{orb}^n &\\leq & \\int_{\\tilde B_{\\tilde{\\omega}_{orb}}(\\tilde{q},2r)} \\tilde \\rho_r tr_{\\tilde \\omega_{orb}}(\\tilde \\omega)\\tilde \\omega_{orb}^n\\\\\n&=&\\int_{\\tilde B_{\\tilde{\\omega}_{orb}}(\\tilde{q},2r)} \\tilde \\rho_r \\tilde \\omega_{orb}^n + n\\int_{\\tilde B_{\\tilde{\\omega}_{orb}}(\\tilde{q},2r)} \\tilde \\rho_r \\sqrt{-1}\\partial\\dbar \\tilde \\varphi \\wedge \\tilde \\omega_{orb}^{n-1}\\\\\n&\\leq& C_2 r^{2n} + n\\int_{\\tilde B_{\\tilde{\\omega}_{orb}}(\\tilde{q},2r)}(\\tilde\\varphi -\\tilde\\varphi(\\tilde q)) \\sqrt{-1}\\partial\\dbar \\tilde \\rho_r \\wedge \\tilde \\omega_{orb}^{n-1}\\\\\n&\\leq& C_2 r^{2n} + C_3r^{-2+\\alpha}\\int_{\\tilde B_{\\tilde\\omega_{orb}}(\\tilde q, 2r)} \\tilde \\omega_{orb}^n\\\\\n&\\leq& C_4 r^{2n-2+\\alpha}.\n\\end{eqnarray*}\nObviously, $|\\nabla\\tilde f|^2_{\\tilde \\omega} =1$ and therefore $|\\nabla\\tilde f |^2_{\\tilde \\omega_{orb}} \\leq tr_{\\tilde\\omega_{orb}}(\\tilde\\omega).$ It follows that for all $r\\leq r_0$, we have\n$$\\int_{\\tilde B_{\\tilde \\omega_{orb}}( \\tilde q, r)} |\\nabla \\tilde f|^2_{\\tilde\\omega_{orb}} \\tilde \\omega_{orb}^n \\leq C_4 r^{2n-2+\\alpha}. $$\nThen by Morrey's embedding theorem and the fact that $\\tilde f(\\tilde p)=0$, we have\n$$|\\tilde f|_{C^{\\alpha\/2}(B_{\\tilde\\omega_{orb}}(\\tilde p, \\beta r_0))} \\leq C_5$$\nfor some fixed $C_5>0$. In particular, we have\n$$d_{\\tilde{\\omega}}(\\tilde p,\\tilde q)\\le C_5 d_{\\tilde \\omega_{orb}}(\\tilde p,\\tilde q)^{\\alpha\/2}$$\nfor all $\\tilde q\\in \\tilde B_{\\tilde \\omega_{orb}}(\\tilde p,r_0). 
$\nThe corresponding distance function on $X$ satisfies\n$$d_\\omega(p,q)\\le d_{\\tilde{\\omega}}(\\tilde p,\\tilde q)\\le C_5 d_{\\tilde \\omega_{orb}}(\\tilde p,\\tilde q)^{\\alpha\/2} =C_5 d_{\\omega_{orb}}(p,q)^{\\alpha\/2}$$\nfor all $q\\in B_{\\omega_{orb}}(p,r_0).$ \nThis completes the proof for the lemma.\n\\end{proof}\n\n\n\\begin{proposition} \\label{contpro2} If $X_{can}$ has only orbifold singularities, then $(\\mathcal{Z}, d_\\mathcal{Z})$ is homeomorphic to $X_{can}$.\n\n\\end{proposition}\n\n\\begin{proof} We will break the proof into the following steps.\n\n\\noindent {\\bf Step 1.} Let $\\omega_{orb}$ be a smooth orbifold K\\\"ahler metric on $X_{can}$ in the same class as $\\omega_{can}$. Let $F=-\\log \\left( \\frac{\\omega_{can}^\\kappa}{(\\omega_{orb})^\\kappa} \\right)$. We claim that $F$ is bounded above. For simplicity, we assume $X_{can}$ is smooth. We let $\\omega(t)$, $\\chi$, $\\omega_A$ and $\\psi(t)$ be defined as in (\\ref{contin1}). We then consider the following quantity\n$$H= \\log tr_{\\omega(t)}(\\chi) - B \\psi(t) $$\nfor $t\\in (0, 1]$ and some sufficiently large $B>0$ to be determined. Straightforward calculations show that there exist $C_1, C_2>0$ such that for all $t\\in (0, 1]$\n$$\\Delta_t H \\geq (B-C_1)tr_{\\omega(t)}(\\chi) - C_2,$$\nusing the fact that $\\psi(t)$ is uniformly bounded in $L^\\infty(X)$ for $t\\in (0,1]$ and the curvature of $\\chi$ is uniformly bounded, where $\\Delta_t$ is the Laplace operator associated to $\\omega(t)$. After applying the maximum principle, we conclude that $H$ is uniformly bounded above and so there exists $C_3>0$ such that\n$$tr_{\\omega(t)}(\\chi) \\leq C_3$$\nand we immediately obtain the upper bound for $F$ by letting $t\\rightarrow 0$ and using the mean value inequality since $\\omega(t)$ converges to $\\omega_{can}$ on $X^\\circ$ and $\\chi$ is equivalent to $\\omega_{orb}$. 
\n\n\nLet $\\Omega$ be the smooth volume form on $X$ such that $$\\sqrt{-1}\\partial\\dbar\\log \\Omega = \\chi \\in [K_X]$$\nas before.\nIf we let $\\omega_{can} = \\chi + \\sqrt{-1}\\partial\\dbar \\varphi_{can}$,\n$$\\omega_{can}^\\kappa= (\\chi + \\sqrt{-1}\\partial\\dbar \\varphi_{can})^\\kappa = e^{\\varphi_{can}}\\Phi_*\\Omega= e^{-F} (\\omega_{orb})^\\kappa$$\nfor some $\\varphi_{can}\\in PSH(X_{can}, \\chi)\\cap C^0(X_{can})$, where $\\Phi_*\\Omega$ is the pushforward or fibre-integration of $\\Omega$. In particular, $\\varphi_{can}$ is smooth on $X_{can}^\\circ$.\nBy \\cite{ST2}, there exists $p>1$ such that $$||e^{-F}||_{L^p(X_{can}, \\omega_{orb})}<\\infty .$$\nAlso we have\n$$\\sqrt{-1}\\partial\\dbar F = - \\omega_{can} +\\omega_{WP} - Ric(\\omega_{orb}) ,$$\nwhere the Weil-Petersson current $\\omega_{WP}$ is a $(1,1)$-current from the variation of the complex structure of smooth Calabi-Yau fibres. Since $\\omega_{WP}$ is semi-positive on $X_{can}^\\circ$, we have\n$$Ric(\\omega_{orb})+\\omega_{can}+\\sqrt{-1}\\partial\\dbar F \\geq 0$$\n on $X_{can}^\\circ$ and $\\omega_{can}$ has bounded local potentials. This implies that $F$ is quasi-plurisubharmonic on $X_{can}^\\circ$ with respect to $Ric(\\omega_{orb})+\\omega_{can}$. By the extension theorem for quasi-plurisubharmonic functions, $F$ must also be quasi-plurisubharmonic on $X_{can}$ since $F$ is bounded above and $X_{can}\\setminus X_{can}^\\circ$ is an analytic subvariety of $X_{can}$, i.e., \n$$F\\in PSH(X_{can}, Ric(\\omega_{orb})+\\omega_{can}).$$ \nIn particular, it implies that $\\omega_{WP}$ is a global semi-positive current on $X_{can}$. In fact, the semi-positivity of $\\omega_{WP}$ holds in general for any canonical models (\\cite{GS}). By the standard approximation theory for plurisubharmonic functions, there exists a sequence of smooth orbifold functions $F_j \\in PSH(X_{can}, B\\omega_{orb}+\\omega_{can})$ for some fixed constant $B>0$ such that $F_j$ converges to $F$ decreasingly. 
Then immediately, we have\n$$||e^{-F_j}||_{L^p(X_{can}, \\omega_{orb})}\\leq ||e^{-F}||_{L^p(X_{can}, \\omega_{orb})}<\\infty. $$\nWe then consider the following complex Monge-Amp\\`ere equations\n$$(\\omega_{orb}+\\sqrt{-1}\\partial\\dbar \\varphi_j)^\\kappa = c_j e^{-F_j} (\\omega_{orb})^\\kappa, $$\nwhere $c_j$ is the normalizing constant with\n$$[\\omega_{orb}]^\\kappa = c_j \\int_{X_{can}} e^{-F_j} (\\omega_{orb})^\\kappa. $$\n$c_j$ is uniformly bounded below because $F_j$ decreases to $F$ and it is also uniformly bounded above because the $F_j$ are uniformly bounded from above. Therefore \n$$||c_j e^{-F_j}||_{L^p(X_{can}, \\omega_{orb})}$$ is uniformly bounded above for all $j$.\n\n\n\\noindent {\\bf Step 2.} By Lemma \\ref{holder1}, there exist $\\alpha$ and $C_1>0$ such that for all $j$,\n$$||\\varphi_j||_{C^\\alpha (X_{can}, \\omega_{orb})} \\leq C_1, $$\nand by Lemma \\ref{holder2} there exists $C_2>0$ such that on $X_{can}$,\n$$d_{g_j}(\\cdot, \\cdot) \\leq C_2 d_{g_{orb}}(\\cdot, \\cdot)^\\alpha,$$\nwhere $g_j$ and $g_{orb}$ are the orbifold K\\\"ahler metrics associated to $\\omega_j = \\omega_{orb}+ \\sqrt{-1}\\partial\\dbar \\varphi_j$ and $\\omega_{orb}$.\nWe then apply the maximum principle to the following quantity for sufficiently large $K>0$\n$$H = \\log tr_{\\omega_j}(\\omega_{orb}) - K \\varphi_j.$$\nThen similarly to the Schwarz lemma, there exists $C_3>0$ such that for all $j$,\n$$tr_{\\omega_j}(\\omega_{orb})\\leq C_3, $$\nor equivalently,\n$$\\omega_j \\geq (nC_3)^{-1} \\omega_{orb}$$\nsince the curvature of $\\omega_{orb}$ is bounded and $\\varphi_j$ is uniformly bounded in $L^\\infty$ for all $j$.\n\nIn particular, this implies that there exist $C_4, C_5>0$ such that\n$$Ric(\\omega_j) = \\sqrt{-1}\\partial\\dbar F_j +Ric(\\omega_{orb}) \\geq - C_4 \\omega_{orb} \\geq - C_5\\omega_j.$$\nTherefore the Ricci curvature of $g_j$ is uniformly bounded below. We can apply the result of \\cite{FGS} and there exists $D>0$ such that\n$$diam_{g_j}(X_{can}) \\leq D. 
$$\n\n\n\n\\noindent {\\bf Step 3.} Let $\\sigma$ be the defining section of an effective $\\mathbb{Q}$-divisor in the class $[\\omega_{can}]$ such that $X_{can}\\setminus X_{can}^\\circ$ is contained in the support of the divisor and $h$ is the smooth hermitian metric on $[\\sigma]$ with $Ric(h) = \\omega_{orb}$. We define $D_\\epsilon = \\{ |\\sigma|^2_h > \\epsilon \\}.$ The same argument as in Proposition \\ref{convex1} implies that for any $\\delta>0$, there exist $\\epsilon_1 > \\epsilon_2 >0$ such that for all $j\\geq 0$,\n\n\\begin{enumerate}\n\n\\item $Vol( X\\setminus D_{\\epsilon_1}, g_j) < \\delta$.\n\n\\item for any two points $p, q \\in D_{ \\epsilon _1}$, there exists a continuous path $\\gamma_j \\subset D_{\\epsilon _2}$ joining $p$ and $q$ such that\n$$\\mathcal{L}_{g_j}(\\gamma_j)\\leq d_{g_j}(p, q) + \\delta \\leq D+\\delta. $$\n\n\\end{enumerate}\n\n\nWe apply Lemma \\ref{holder2} again. There exist $\\alpha\\in (0,1)$ and $C_\\alpha>0$ such that for any $j$ and any two points $p, q\\in X_{can}$\n$$d_{g_j}(p, q) \\leq C_\\alpha d_{g_{orb}}(p, q)^\\alpha. $$\nAfter letting $j\\rightarrow \\infty$, $g_j$ converges to $g_{can}$ smoothly on $X_{can}^\\circ$\nand for any $p, q\\in X_{can}^\\circ$, we have\n$$d_{g_{can}|_{X_{can}^\\circ}}(p, q) \\leq C_\\alpha d_{g_{orb}}(p, q)^\\alpha, $$\nwhere $d_{g_{can}|_{X_{can}^\\circ}}$ is the distance function on $X_{can}^\\circ$ induced by $g_{can}$.\n\n\n\n\nSince $g_{can}\\geq cg_{orb}$ for some $c>0$ on $X_{can}^\\circ$ and $(\\mathcal{Z}, d_\\mathcal{Z})$ is the metric completion of $(X_{can}^\\circ, g_{can})$, the local isometry map from $X_{can}^\\circ$ into $\\mathcal{Z}$ extends to a surjective Lipschitz map\n$$\\mathcal{F}: (\\mathcal{Z}, d_\\mathcal{Z}) \\rightarrow (X_{can}, g_{orb}).$$\nWe claim that $\\mathcal{F}$ must be injective. For any point $z\\in X_{can}$, there exists a sequence $\\{z_i\\} \\subset X_{can}^\\circ$ converging to $z$ with respect to $g_{orb}$. 
The inequality\n$$d_{g_{can}|_{X_{can}^\\circ}}(\\cdot, \\cdot) \\leq C \\left(d_{g_{orb}}(\\cdot, \\cdot)|_{X_{can}^\\circ }\\right)^\\alpha$$ implies that\n$\\{z_i\\}$ is also a Cauchy sequence in $(X_{can}^\\circ, g_{can})$. Since $(\\mathcal{Z}, d_\\mathcal{Z})$ is the metric completion of $(X_{can}^\\circ, g_{can})$, $\\{z_i\\}$ must also be a Cauchy sequence of $(\\mathcal{Z}, d_\\mathcal{Z})$ and so $\\mathcal{F}$ must also be injective. This completes the proof of the proposition.\n\n\n\\end{proof}\n\n\n\n\\section{Proof of Theorem \\ref{main3} }\nIn this section, we will prove Theorem \\ref{main3} by the same argument as in the proof of Theorem \\ref{main2}.\nLet $X$ be an $n$-dimensional projective K\\\"ahler manifold with $c_1(X)=0$. Suppose $L$ is a semi-ample line bundle and the linear system $|mL|$ induces a holomorphic map\n$$\\Phi: X \\rightarrow Y$$\nfor sufficiently large $m$. We assume $0< \\dim_{\\mathbb{C}}Y=k<n$.\n\nThe following lemma gives the uniform estimates for the potential $\\varphi$ along the normalized K\\\"ahler-Ricci flow.\n\\begin{lemma} \\label{0est} There exists $C>0$ such that for all $t\\geq 0$, we have\n$$||\\varphi ||_{L^\\infty(X)} + \\left|\\left| \\ddt{\\varphi} \\right|\\right|_{L^\\infty(X)}\\leq C. $$\nFurthermore, for any compact set $K \\subset X^\\circ$, $\\varphi$ converges in $C^{1, \\alpha}$ to $\\varphi_{can}$ for any $\\alpha \\in [0, 1)$.\n\n\\end{lemma}\n\n\n\nThe following lemma is proved in \\cite{TWY} for the local $C^0$-convergence of the evolving collapsing metrics. \n\\begin{lemma} Let $\\omega(t)$ be the solution of the K\\\"ahler-Ricci flow (\\ref{krflow}). Then the following hold.\n\n\\begin{enumerate}\n\n\\item For any compact subset $K \\subset X^\\circ$,\n\n$$\\lim_{t\\rightarrow \\infty} ||\\omega(t) - \\omega_{can}||_{C^0(K, \\omega_0)} =0. $$\n\n\\item For any compact subset $K' \\subset X_{can}^\\circ$, the fibre metric over any point in $K'$ converges after rescaling to a Ricci-flat K\\\"ahler metric uniformly. More precisely, let $\\omega_{CY, y}$ be the unique Ricci-flat K\\\"ahler metric in the K\\\"ahler class $[\\omega_0|_{X_y}]$. 
Then\n$$\\lim_{t\\rightarrow \\infty} \\sup_{y\\in K'} || e^t \\omega(t)|_{X_y} - \\omega_{CY, y}||_{C^0(X_y, \\omega_0|_{X_y})} = 0. $$\n\n\n\n\\end{enumerate}\n\n\n\n\\end{lemma}\n\nThe uniform bound for the scalar curvature of global solutions of the normalized K\\\"ahler-Ricci flow is established in \\cite{SoT4} as in the following lemma. \n\\begin{lemma} The scalar curvature $R$ of $g(t)$ is uniformly bounded, i.e., there exists $C>0$ such that for all $t\\geq 0$,\n\n$$\\sup_{X} |R(\\cdot, t)| \\leq C. $$\n\n\n\\end{lemma}\n\nWe now state the relative volume comparison established in \\cite{TiZ3}.\n\n\\begin{lemma} \\label{rvolcom} For any $B\\geq 1$, there exists $k=k(m, B)>0$ such that the following holds. Let $g(t)$ be a solution to the Ricci flow on a compact $m$-dimensional Riemannian manifold $M$ over time $0\\leq t\\leq r_0^2$. If\n$$|Ric| \\leq r_0^{-2}, ~in~ B_{g(0)}(x_0, r_0)\\times [0, r_0^2], $$\n then for any $B_{g(r_0^2)}(x, r) \\subset B_{g(r_0^2)}(x_0, B r_0)$ satisfying\n$$R|_{t=r_0^2}\\leq r^{-2}, ~in~ B_{g(r_0^2)}(x, r), $$\nwe have\n$$\\frac{Vol(B_{g(r_0^2)}(x, r), g(r_0^2))}{r^m} \\geq k \\frac{Vol(B_{g(0)}(x_0, r_0), g(r_0^2))}{r_0^m} .$$\n\n\n\n\\end{lemma}\n\n\n\nWe let $D_\\epsilon$ be the set defined as (\\ref{deset}).\n\\begin{lemma} \\label{convex32} Let $g(t)$ be the global solution of the normalized K\\\"ahler-Ricci flow on $X$. There exists $L>0$ such that for any $\\delta>0$ there exist $\\epsilon_1, \\epsilon_2 >0$ and $T>0$ so that the following hold.\n\\begin{enumerate}\n\n\\item $Vol( X\\setminus D_{\\epsilon_1}, g(t)) < \\delta e^{-(n-\\kappa)t}$ for all $t\\geq 0$.\n\n\\item For any two points $p, q \\in D_{\\epsilon_1 }$ and $t>T$, there exists a continuous path $\\gamma_t \\subset D_{ \\epsilon_2 }$ joining $p$ and $q$ such that\n$$\\mathcal{L}_{g(t)}(\\gamma_t) \\leq L. 
$$\n\\end{enumerate}\n\n\\end{lemma}\n\n\n\\begin{proof} Using the fact that $\\varphi(t)$ is uniformly bounded, the same argument as in the proof of Proposition \\ref{convex1} shows that for any $\\delta>0$ there exists $\\epsilon_1>0$ such that\n$$Vol( X\\setminus D_{\\epsilon_1}, g(t)) < \\delta e^{-(n-\\kappa)t} $$ for all $t\\geq 0$. Let $p' = \\Phi(p)$ and $q'=\\Phi(q)$ for any two points $p, q \\in D_{\\epsilon_1}$. Since $ (X_{can}^\\circ, g_{can})$ is almost geodesically convex in $(X_{can}, g_{can})$, there exists a continuous path $\\gamma'$ in $\\Phi(D_{\\epsilon_2})$ joining $p'$ and $q'$ such that\n$$\\mathcal{L}_{g_{can}}(\\gamma') \\leq D+\\delta$$\nby choosing sufficiently small $\\epsilon_2>0$, where $D$ is the diameter of $(X_{can}, \\omega_{can})$.\nWe lift $\\gamma'$ to a continuous path $\\gamma$ in $D_{\\epsilon_2}$ joining $p$ and $q$. Since $\\omega(t)$ converges to $\\omega_{can}$ in $C^0$-topology uniformly on $D_{\\epsilon_2}$ as $t\\rightarrow \\infty$, there exists $T>0$ such that for all $t>T$,\n$$\\mathcal{L}_{g(t)}(\\gamma) \\leq\\mathcal{L}_{g_{can}}(\\gamma') + \\delta \\leq D+2\\delta. $$\nThis completes the proof of the lemma.\n\n\\end{proof}\n\n\n\nNow we can prove Theorem \\ref{main1} as a consequence of the following proposition.\n\\begin{proposition} \\label{mainprop} If there exist an open domain $U$ in $X$ and $\\Lambda>0$ such that $\\sup_{U\\times [0, \\infty)} |Ric(g(t))|_{g(t)} <\\Lambda $, then there exists $D>0$ such that for all $t\\in [0, \\infty)$ we have\n$$diam(X, g(t)) < D .$$\n\n\\end{proposition}\n\n\\begin{proof} By the result in \\cite{SoT4}, the scalar curvature is uniformly bounded along the flow and so we can apply Lemma \\ref{rvolcom} for relative volume comparison. We then apply Lemma \\ref{convex32} with the same notations. We fix a base point $P\\in X^\\circ$. 
Suppose there exist a point $Q\\in X$ and $t>0$ such that\n$$2L < d_{g(t)}(P, Q). $$\nThen by Lemma \\ref{rvolcom}, applied with a fixed ball in $U$ where the Ricci curvature is uniformly bounded, there exists $k>0$ such that\n$$Vol_{\\omega(t)}(B_{\\omega(t)}(Q, L)) \\geq k L^{2\\kappa} e^{-(n-\\kappa)t}. $$\nWe will apply Lemma \\ref{convex32} by choosing\n$$\\delta = 100^{-1}k L^{2\\kappa} $$ and $T>0$ accordingly.\nSuppose %\n$$\\limsup_{t\\rightarrow \\infty} diam(X, g(t)) = \\infty. $$\nThen there exist $t'>T$ and $Q'\\in X$ such that\n$$2L < d_{g(t')}(P, Q'). $$\nSince we may assume $P\\in D_{\\epsilon_1}$ and any point in $D_{\\epsilon_1}$ can be joined to $P$ by a path of length no greater than $L$ with respect to $g(t')$, we have $B_{g(t')}(Q', L) \\subset X\\setminus D_{\\epsilon_1}$ and so\n$$Vol_{g(t')}(B_{g(t')}(Q', L)) < \\delta e^{-(n-\\kappa)t'} = 100^{-1}k L^{2\\kappa} e^{-(n-\\kappa)t'},$$\nwhich contradicts the volume lower bound above. Therefore the diameter of $(X, g(t))$ is uniformly bounded for all $t\\geq 0$.\n\n\\end{proof}\n\nLet $p_1, p_2, ..., p_N$ be the points in $X_{can}$ over which the fibres of $\\Phi$ are singular. For each $i$ and any $\\delta>0$, \n$$B_{i, \\delta}= \\{ p\\in X_{can}~|~ d_\\chi(p, p_i) < \\delta\\}, $$\nwhere $d_\\chi$ is the distance function with respect to the smooth K\\\"ahler metric $\\chi$. \nLet $g(t)$ be the solution of the normalized K\\\"ahler-Ricci flow on $X$ and $$f_{i,\\delta} = \\int_{ \\Phi^{-1}(B_{i, \\delta})}\\Omega.$$ Then the following holds since $\\Phi^{-1}(B_{i, \\delta})$ shrinks to a single fibre as $\\delta \\rightarrow 0$.\n\\begin{lemma} \\label{smallvol}\n\n$$\n\\lim_{\\delta\\rightarrow 0} f_{i,\\delta} = 0. \n$$\n\n\\end{lemma} \nWe now estimate the volume of $\\Phi^{-1}(B_{i, \\delta})$.\n\n\\begin{lemma} \\label{smallvo} There exists $C>0$ such that \n\n$$Vol_{g(t)} \\left( \\Phi^{-1}(B_{i, \\delta}) \\right) \\leq C e^{-(n-1)t} f_{i, \\delta}. $$\n\n\\end{lemma}\n\n\\begin{proof} By the uniform $L^\\infty$-estimate of $\\varphi$ and $\\ddt{\\varphi}$ in Lemma \\ref{0est}, there exists $C>0$ such that for all $t\\geq 0$, we have \n$$\\omega(t)^n \\leq C e^{-(n-1)t} \\Omega,$$\nwhere $\\Omega$ is a smooth volume form on $X$. Then the lemma follows from the following calculation \n$$Vol_{g(t)} \\left( \\Phi^{-1}(B_{i, \\delta}) \\right) = \\int_{ \\Phi^{-1}(B_{i, \\delta})} \\omega(t)^n\n\\leq C e^{-(n-1)t}\\int_{ \\Phi^{-1}(B_{i, \\delta})}\\Omega. $$\n\n\n\n\n\\end{proof}\n\n\nThe following lemma follows immediately from Proposition \\ref{contpro2} (also see \\cite{Zh17}) since $X_{can}$ is nonsingular.\n\\begin{lemma} The metric completion of $(X_{can}^\\circ, g_{can})$ is homeomorphic to $X_{can}$. 
\n\n\\end{lemma}\n\nWe define $$\\gamma_\\delta (t)= \\sup_{p, q \\in \\partial B_{i, \\delta}} d_{g(t)}(p, q)$$ as the diameter of $\\partial B_{i, \\delta}$ with respect to $g(t)$. \n\n\\begin{lemma} \\label{bdl} For any $\\epsilon>0$, there exist $\\delta_0>0$ and $T(\\delta)>0$ for $0< \\delta <\\delta_0$ such that for all $0<\\delta<\\delta_0$ and $t>T(\\delta)$, we have \n$$ \\gamma_\\delta(t) < \\frac{\\epsilon}{8} .$$\n\\end{lemma}\n\\begin{proof} The lemma follows from the geodesic convexity of $(X_{can}^\\circ, g_{can})$ in its metric completion and the fact that $g(t)$ converges to $g_{can}$ uniformly away from singular fibres. \n\n\n\\end{proof}\n\nThe following lemma is the key estimate in this section.\n\n\n\\begin{lemma} \\label{sec5key} For any $\\epsilon>0$, there exist $\\delta>0$ and $T>0$ such that for all $t\\geq T$, we have \n$$ diam( \\Phi^{-1}(B_{i, \\delta}), g(t)) < \\epsilon, $$\nwhere the distance is calculated in $(X, g(t))$ and $i=1, 2, ..., N$. \n\n\\end{lemma}\n\n\\begin{proof} %\nLet\n %\n $$d_{i,\\delta} (t) = \\frac{1}{2} \\sup_{q \\in \\Phi^{-1} (B_{i, \\delta})} d_{g(t)}\\left(q, \\partial \\left( \\Phi^{-1} (B_{i, \\delta}) \\right)\\right),$$\nwhere $d_{g(t)}$ is the distance function with respect to $g(t)$.\nThen one can find a geodesic ball $B(t)$ of radius $d_{i,\\delta} (t)$ with respect to the metric $g(t)$ lying entirely in $\\Phi^{-1} (B_{i, \\delta})$. 
By Lemma \\ref{smallvo} there exists $C>0$ independent of $t$ and $\\delta$ such that \n$$Vol_{g(t)} (B(t)) \\leq Vol_{g(t)} (\\Phi^{-1}(B_{i,\\delta})) \\leq C e^{-(n-1)t} f_{i, \\delta}.$$\n\nSince the diameter of $g(t)$ is uniformly bounded by Theorem \\ref{main1} and the Ricci curvature of $g(t)$ is uniformly bounded away from $\\mathcal{S}$ for all $t$, we can apply the relative volume comparison in Lemma \\ref{rvolcom}: there exists $c>0$ independent of $t$ and $\\delta$ such that \n$$ce^{-(n-1)t} d_{i, \\delta}(t)^{2n} \\leq Vol_{g(t)} (B(t)) < C e^{-(n-1)t} f_{i, \\delta}.$$\nThis implies that \n$$ d_{i, \\delta}(t) \\leq (c^{-1} C f_{i, \\delta})^{\\frac{1}{2n}} \\rightarrow 0 $$\n as $\\delta \\rightarrow 0$, in other words, for any $\\epsilon>0$, there exists $\\delta>0$ such that for all $t\\geq 0$, we have \n\\begin{equation}\\label{smball}\nd_{i, \\delta} (t) < \\frac{\\epsilon}{8}.\n\\end{equation}\n\nLet $X_y = \\Phi^{-1}(y)$ be the fibre of $\\Phi$ over $y\\in X_{can}$. For fixed $\\delta$ chosen above, we can choose $T>0$ such that for all $t\\geq T$, \n\\begin{equation}\\label{fibresm}\n \\sup_{y\\notin \\cup_{i=1}^N B_{i, \\delta}} diam( X_y, g(t)) < \\frac{\\epsilon}{8}\n %\n \\end{equation}\nbecause the fibre diameter uniformly goes to $0$ away from singular fibres. Combining (\\ref{smball}), (\\ref{fibresm}) and Lemma \\ref{bdl}, we have for all $t\\geq T$, \n$$diam(\\Phi^{-1}(B_{i, \\delta}), g(t)) < \\epsilon, $$\nwhere the diameter is computed in the ambient space $(X, g(t))$. 
\n\n\n\\end{proof}\n\nThe following lemma holds since $g(t)$ converges in $C^0$ to $g_{can}$ away from singular fibres and the fibre diameter with respect to $g(t)$ uniformly tends to zero away from $\\Phi^{-1} (B_{i, \\delta} )$ as $t\\rightarrow \\infty$.\n\n\\begin{lemma} \\label{appraw} \n\n\\begin{equation}\n\\lim_{\\delta \\rightarrow 0} \\limsup_{t\\rightarrow \\infty} d_{GH}\\big((X\\setminus \\left( \\cup_{i=1}^N \\Phi^{-1} (B_{i, \\delta} )\\right) ,g(t)),~(X_{can}\\setminus \\left( \\cup_{i=1}^N B_{i, \\delta} \\right),g_{can})\\big) =0.\n\\end{equation}\n\n\n\n\\end{lemma}\n\nTheorem \\ref{KRF: minimal model} immediately follows from Lemma \\ref{sec5key} and Lemma \\ref{appraw} since the metric completion $(X_{can}^\\circ, g_{can})$ is homeomorphic to $X_{can}$. \n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Where is the particle?}\n\nGiven two wave functions, the one shown in Fig.~\\ref{fig1}(a) and the one shown in Fig.~\\ref{fig1}(b), which is the correct eigenstate for a double well potential?\n\nThe actual correct answer is that we do not have enough information. We do not have enough information because we are only given a picture of the double well potential, and not an explicit definition. An explicit definition of the potential used in Fig.~\\ref{fig1} will be provided below, and this definition will reveal an asymmetry not apparent in the figure, that the potential well on the right hand side is slightly lower than that on the left hand side. The potential is actually drawn with this asymmetry, but it is so small as to be invisible to the eye on this scale (by many orders of magnitude); \nthe net result, however, is that the actual ground state is that pictured in \nFig.~\\ref{fig1}(b), with the wave function located entirely in the right side well. 
Hence, a tiny perturbation (the `flea')\nresults in a state very different from the familiar symmetric superposition of `left' and `right' well occupancy (shown in Fig.~\\ref{fig1}(a)).\n\\par\n\\bigskip\n\n\\section{The Asymmetric Double Well Potential}\n\n\\subsection{Introduction}\n\nThe double well potential is often used in quantum mechanics to illustrate \\hypertarget{hrazavy03}{situations} \\hypertarget{hjelic12}{in} which more than one state is accessible\nin a system, with a coupling from one to the other through tunnelling.\\cite{razavy03,jelic12} For example, in the \\hypertarget{hfeynman65}{Feynman Lectures} the\nammonia molecule is used to illustrate a physical system that has a double well potential for the Nitrogen atom.\\cite{feynman65}\nHe used an effective two-state model to illustrate these ideas, and in most undergraduate textbooks a two state system is utilized\nfor a similar purpose. Here instead we will first focus on a full solution to a double well potential; the features inherent in a two state\nsystem will emerge from our calculations. Indeed, we will also present a refined two-state model to\ncapture the essence of the asymmetry in a microscopic double well potential, as we are using in Fig.~\\ref{fig1}.\n\n\\begin{figure}[here]\n\\includegraphics[width =8.0cm,angle=0]{Dauphinee_Fig01a.eps}\n\\includegraphics[width =8.0cm,angle=0]{Dauphinee_Fig01b.eps}\n\\caption{(a) State A, ``Left'' \\textbf{and} ``Right'', or (b) State B: ``Just Left'' \\textbf{or} ``Just Right''. In the first state the wave function\nhas equal amplitude to be in either well, a linear superposition of states so common in quantum mechanics, while in the second state the wave function\nis entirely in the right side well. 
Which is the ground state for the double well potential shown?}\n\\label{fig1}\n\\end{figure}\n\nThe notion that a slight asymmetry can result in a \\hypertarget{hjona-lasinio81a}{drastic} \\hypertarget{hjona-lasinio81b}{change} \\hypertarget{hsimon85}{in} \nthe wave function is not new --- it was\nfirst discussed in Refs. [\\onlinecite{jona-lasinio81a,jona-lasinio81b,simon85}], but in a manner and context inaccessible to\n\\hypertarget{hreuvers12}{undergraduate} \\hypertarget{hlandsman12}{students}. More recently the topic has been revisited \\cite{reuvers12,landsman12} to illustrate an emerging\nphenomenon in the semiclassical limit ($\\hbar \\rightarrow 0$). These authors refer to the `Flea' in reference to\nthe very minor\nperturbation in the potential (as in ours) and to the `Elephant' \\cite{simon85} (the very deep double well potential). \nSchr\\\"odinger's Cat has crept into the discussion \\cite{reuvers12,landsman12} because the `flea' disrupts the entangled\ncharacter of the usual Schr\\\"odinger cat-like double well wave function (Fig.~\\ref{fig1}(a)).\n\nThe purpose of this paper is to utilize a very simple model of an asymmetric double well potential, \\hypertarget{hmarsiglio09}{solvable} either analytically \nor through an application of matrix mechanics, \\cite{marsiglio09,jelic12} to demonstrate the potent effect of a rather tiny\nimperfection in the otherwise symmetric double well potential. Contrary to the impression one might get from the references on this subject, there\nis nothing `semiclassical' about the asymmetry of the wave function illustrated in Fig.~\\ref{fig1}(b). We will show,\nusing a slight modification of Feynman's ammonia example, \\cite{feynman65,landsman12} that the important parameter to\nwhich the asymmetry should be compared is the tunnelling probability; this latter parameter can be arbitrarily small. 
This correspondence applies to excited states as well, along with other asymmetric double well shapes.\n\n\\subsection{Square Double Well with Asymmetry}\n\nA variety of symmetric double well potentials was used in Ref. \\onlinecite{jelic12} to illustrate the universality of the\nenergy splitting described there. Here instead we will use perhaps the simplest model to exhibit the impact of asymmetry --- we have checked with other versions and the same physics applies universally. This model uses two square wells, with left and\nright wells having base levels $V_L$ and $V_R$, respectively, separated by a barrier of width $b$ and height $V_0$\nand enclosed within an infinite square well extending from $x=0$ to $x=a$. For our purposes we assign the two wells\nequal width, $w = (a-b)\/2$. Mathematically it is described by\n\\begin{equation}\nV(x) = \\begin{cases} \\infty & \\text{if $x < 0$ or $x > a$} \\\\ \nV_0 & \\text{if $(a-b)\/2 < x < (a+b)\/2$} \\\\ \nV_L & \\text{if $0 < x < (a-b)\/2$} \\\\ \nV_R & \\text{if $(a+b)\/2 < x < a$},\n\\end{cases}\n\\label{asy_doublewell_potential} \n\\end{equation}\nand is shown in Fig.~\\ref{fig2}. We can readily recover the symmetric double well by using $V_L = V_R$. Units of \nenergy are those of the ground state for the infinite square well of width $a$, $E_1^0 = \\pi^2 \\hbar^2\/(2m_0a^2)$, \nwhere $m_0$ is the mass of the particle, and units of length are expressed in terms of the width of the infinite square well, $a$.\nWe will typically use a barrier width $b = a\/5$ so that the individual wells have widths $w = 2a\/5$. The height of the barrier\nthen controls the degree to which tunnelling from one well to the other occurs, and $\\delta \\equiv (V_L - V_R)\/2$ controls\nthe asymmetry.\n\nIn the following subsection we will work through detailed solutions to this problem, first analytically, and then numerically.\nIt is important to realize that these solutions retain the full Hilbert space in the problem. 
A `toy' model is introduced in a later\nsubsection, and reduces this complex problem to a two-state problem. We then proceed to illustrate how the two-state problem\nreproduces remarkable features of the complex problem, as a function of the asymmetry in the two wells.\n\n\\begin{figure}[here]\n\\includegraphics[width=0.55\\textwidth]{Dauphinee_Fig02.eps}\n\\caption{A schematic of the generic asymmetric square double well potential. The well widths are the same but the\nleft- and right-side levels can be independently adjusted.}\n\\label{fig2}\n\\end{figure}\n\n\\subsection{Preliminary Analysis}\n\nFig.~\\ref{fig1} was produced with $v_0 \\equiv V_0\/E_1^0 = 500$, $b\/a = 0.2$, $v_L \\equiv V_L\/E_1^0 = 0$, and\n$v_R \\equiv V_R\/E_1^0 = 0$ in part (a) and $v_R \\equiv V_R\/E_1^0 = -0.00001$ in part (b). The change in potential strength\ncompared with the barrier between these two cases is $1$ part in $50,000,000$. This results in ground state energies of\n$e_1 \\equiv E_1\/E_1^0 = 5.827 034$ and $e_1 \\equiv E_1\/E_1^0 = 5.827 025$ for the symmetric and asymmetric case,\nrespectively. Needless to say, either the difference in potentials or the difference in ground state energies represent minute\nchanges compared to the tremendous qualitative change in the wave function evident in parts (a) vs (b) of Fig.~\\ref{fig1}.\n\nThe potential used is simple enough that an `analytical' solution is also possible. The word `analytical' is in quotations here\nbecause, in reality, the solution to the equation for the energy for each state must be obtained graphically, i.e. numerically. 
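For the symmetric case ($V_L = V_R = 0$) this graphical solution amounts to a one-variable root search: states that are even or odd about $x = a\/2$ satisfy the matching conditions $k\\cot(kw) = -\\kappa\\tanh(\\kappa b\/2)$ and $k\\cot(kw) = -\\kappa\\coth(\\kappa b\/2)$, respectively, with $k = \\pi\\sqrt{e}$ and $\\kappa = \\pi\\sqrt{v_0 - e}$ in our units (these follow from continuity of $\\psi$ and $\\psi'$ at the well edges). A minimal sketch, in which the brackets and iteration count are arbitrary choices:

```python
import numpy as np

# Symmetric double well, V_L = V_R = 0; energies in units of E_1^0, lengths in units of a.
v0, b = 500.0, 0.2
w = (1.0 - b) / 2.0

def f(e, parity):
    """Matching condition; a root f(e) = 0 is an allowed energy."""
    k, kap = np.pi * np.sqrt(e), np.pi * np.sqrt(v0 - e)
    th = np.tanh(kap * b / 2.0)
    return k / np.tan(k * w) + kap * (th if parity == "even" else 1.0 / th)

def bisect(g, lo, hi, n=200):
    """Simple bisection; assumes g(lo) and g(hi) have opposite signs."""
    glo = g(lo)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if glo * g(mid) <= 0.0:
            hi = mid
        else:
            lo, glo = mid, g(mid)
    return 0.5 * (lo + hi)

# The lowest even/odd pair lies below the first pole of cot(kw), at e = (1/w)^2 = 6.25:
e_even = bisect(lambda e: f(e, "even"), 0.1, 6.2499)
e_odd = bisect(lambda e: f(e, "odd"), 0.1, 6.2499)
print(f"e_even = {e_even:.6f}, splitting = {e_odd - e_even:.3e}")
```

With $v_0 = 500$ and $b = 0.2$ this reproduces the ground state energy $e_1 \\approx 5.827$ quoted above, and the even--odd splitting comes out of order $10^{-6}$, which sets the scale of the tunnelling parameter discussed later.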
\nWhile this poses no significant difficulty,\nit is sufficient \\hypertarget{hremark1}{work} that essentially all textbooks stop here, and do not examine the wave function.\\cite{remark1} \n\nAssuming that $E < V_0$, the wave function is sinusoidal in each well and exponential under the barrier; matching $\\psi$ and $\\psi'$ at the two interfaces determines the allowed energies along with the amplitudes $A$ and $D$ of the wave function in the left and right wells, respectively. For $V_R {{<}\\atop{>}} V_L$, we expect $|D| {{>}\\atop{<}} |A|$.\n\nAlternatively, we solve the original Schr\\\"odinger Equation numerically right from the start\nby expanding in the infinite square well basis [$\\phi_n(x)\n= \\sqrt{2 \\over a} \\sin{\\left({n \\pi x \\over a}\\right)}$ for $n = 1,2,3,\\ldots$], and `embed' the double well part in this basis.\nMore specifically, we write\n\\begin{equation}\n|\\psi\\rangle = \\sum_n c_n |\\phi_n\\rangle,\n\\label{expansion}\n\\end{equation}\nand insert this into the Schr\\\"odinger Equation to obtain the matrix equation,\n\\begin{equation}\n\\sum_m H_{nm} c_m = E c_n,\n\\label{matrix}\n\\end{equation}\nwhere\n\\begin{eqnarray}\n&&H_{nm} = \\delta_{nm} \\left( n^2E_1^{0} +(V_L+V_R)w\/a + V_0 b\/a \\right) \\nonumber \\\\\n&& \\phantom{aaaaa}+\\delta_{nm} [2V_0-V_L-V_R] \\ {\\rm sinc}(2n) \\nonumber \\\\\n&&+\\left(1 - \\delta_{nm} \\right) D_{nm} \\left(V_L - V_0 + [V_R - V_0] (-1)^{n+m}\\right) \\nonumber \\\\\n\\label{ham}\n\\end{eqnarray}\nwhere\n\\begin{equation}\nD_{nm} \\equiv {\\rm sinc}(n-m) - {\\rm sinc}(n+m),\n\\label{defn}\n\\end{equation}\nand\n\\begin{equation}\n{\\rm sinc}(n) \\equiv {{\\rm sin}(\\pi n w) \\over \\pi n}.\n\\label{sinc}\n\\end{equation}\nAs before, $w \\equiv (a-b)\/2$ and $b$ are the widths of the wells and barrier, respectively.\nThe general procedure is provided in Refs. [\\onlinecite{marsiglio09}] and [\\onlinecite{jelic12}], and the reader is referred \nto these papers for more details. Using either the analytical expressions or the numerical diagonalization, the results are identical. 
The advantage of the latter method is that the study is not confined to simple well geometries consisting of boxes, and\nstudents can easily explore a variety of double well potential shapes.\n\n\\subsection{Results}\nIn Fig.~3 the resulting wave function is shown for a variety of asymmetries.\n\\begin{figure}[here]\n\\includegraphics[width =0.5\\textwidth]{Dauphinee_Fig03.eps}\n\\caption{Progression of the wave function as the right well level is lowered from $0$ (same as the left well level) to\n$-10^{-5}$ (in units of $E_1^0 = \\hbar^2 \\pi^2\/(2ma^2)$). When the double well is symmetric the probability \ndensity ($\\left|\\psi(x)\\right|^2$) is symmetric (shown in red); the degree of asymmetry in the probability density increases\nmonotonically as $V_R\/E_1^0$ decreases from $0$ to $-10^{-5}$. The actual values of $V_R$ used in this plot, in units of \n$E_1^0$, are indicated in the legend. Any of these values is absolutely\nindistinguishable on the energy scale of the barrier ($V_0\/E_1^0 = 500$) or the ground state energies ($E_{\\rm GS}\/E_1^0\n\\approx 5.827$).\n}\n\\label{fig3}\n\\end{figure}\nThe result is remarkable. As $V_R\/E_1^0$ decreases from $0$ to a value of $-0.00001$, the probability density changes from a\nsymmetric profile (equal probability in left and right wells) to an entirely asymmetric profile (entire probability localized\nto the right well). The other `obvious' energy scales in the problem are the barrier height ($V_0\/E_1^0 = 500$) and the\nground state energies ($E_{\\rm GS}\/E_1^0 \\approx 5.827$), so these changes in the potential are minute in comparison. \nEven more remarkable \nis that as far as the energies are concerned, these minute changes give rise to equally minute changes in the ground state\nenergy: $E_{\\rm GS}\/E_1^0 \\approx 5.827034$ ($\\approx 5.827025$) for $V_R\/E_1^0 = 0.0$ ($V_R\/E_1^0 = -0.000 01$),\nrespectively, while the changes in the wave functions are qualitatively spectacular. 
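The matrix elements in Eq.~(\\ref{ham}) make this progression easy to reproduce. A minimal numerical sketch (units $E_1^0 = 1$, $a = 1$; the basis size $N = 200$ and the grid resolution are arbitrary choices):

```python
import numpy as np

def hamiltonian(N, v0, vL, vR, b):
    """Matrix H_nm of Eq. (ham) in the infinite-square-well sine basis.
    Energies in units of E_1^0, lengths in units of a."""
    w = (1.0 - b) / 2.0
    def sinc(k):  # sinc(k) = sin(pi k w)/(pi k), with sinc(0) = w
        k = np.asarray(k, dtype=float)
        safe = np.where(k == 0.0, 1.0, k)
        return np.where(k == 0.0, w, np.sin(np.pi * k * w) / (np.pi * safe))
    n = np.arange(1, N + 1)
    nn, mm = np.meshgrid(n, n, indexing="ij")
    # off-diagonal part: D_nm (V_L - V_0 + [V_R - V_0](-1)^{n+m})
    H = (sinc(nn - mm) - sinc(nn + mm)) * (vL - v0 + (vR - v0) * (-1.0) ** (nn + mm))
    # diagonal part: n^2 + (V_L + V_R) w + V_0 b + (2 V_0 - V_L - V_R) sinc(2n)
    np.fill_diagonal(H, n.astype(float) ** 2 + (vL + vR) * w + v0 * b
                     + (2.0 * v0 - vL - vR) * sinc(2 * n))
    return H

def ground_state(N, v0, vL, vR, b, x):
    """Lowest eigenvalue and psi(x) = sum_n c_n sqrt(2) sin(n pi x)."""
    E, C = np.linalg.eigh(hamiltonian(N, v0, vL, vR, b))
    n = np.arange(1, N + 1)
    return E[0], np.sqrt(2.0) * np.sin(np.pi * np.outer(x, n)) @ C[:, 0]

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
right = x >= 0.6  # the right-hand well occupies (a + b)/2 < x < a

E_sym, psi_sym = ground_state(200, 500.0, 0.0, 0.0, 0.2, x)
E_asy, psi_asy = ground_state(200, 500.0, 0.0, -1e-5, 0.2, x)
P_sym = dx * np.sum(psi_sym[right] ** 2)  # ~1/2: symmetric superposition
P_asy = dx * np.sum(psi_asy[right] ** 2)  # ~1: localized in the right well
print(f"E = {E_sym:.6f} vs {E_asy:.6f}; right-well probability {P_sym:.3f} vs {P_asy:.3f}")
```

The symmetric run gives the quoted $E_{\\rm GS}\/E_1^0 \\approx 5.827$ with equal weight in the two wells, while $v_R = -10^{-5}$ shifts the ground state energy only in the sixth decimal place yet moves essentially all of the probability density into the right well, as in Fig.~3.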
\n\n\\subsection{Discussion}\n\nThat such enormous qualitative changes can result from such minute asymmetries in the double well potential is of\ncourse important for experiments in this area, where it would be very difficult to control deviations from perfect symmetry in a typical double well potential\nat the $10^{-5}$ level. Why is this phenomenon not widely disseminated in textbooks? And what precisely controls the\nenergy scale for the `flea-like' perturbation that eventually results in a completely asymmetric wave function situated in only one\nof the two wells [Fig.~(1b)]? The answer to the first question is undoubtedly connected to the lack of a straightforward analytical\ndemonstration of the strong asymmetry in the wave function. Certainly Eqs. (\\ref{relamp},\\ref{relampinv}) exist, but it is difficult to coax out of \neither of these equations an explicit demonstration of the resulting asymmetry apparent in Fig.~(3) as a \nfunction of lowering (or raising) the level of the potential well on the right. To shed more light on this phenomenon, \nand to provide an answer to the second question, we resort to a `toy model' slightly modified from the one used by \nFeynman \\cite{feynman65} to explain tunnelling in a symmetric double well system, and introduced more \nrecently by Landsman and Reuvers \\cite{landsman12,reuvers12} in a perturbative way. Such a model is also \\hypertarget{hcohentannoudji77}{used} \nin standard \\hypertarget{htownsend00}{textbooks}\nto discuss `fictitious' spins interacting with a magnetic field\\cite{cohentannoudji77} and two level systems subject to an electric field.\\cite{townsend00}\n\n\\section{A Toy model for the Asymmetric Double Well}\n\nAn aid towards understanding the results of our `microscopic' calculations is provided by an `effective' model. The tack is to\nstrip the system of its complexity and focus on the essential ingredients. 
In this instance, the key features amount to whether the\nparticle is in the right well, or left well, or a combination thereof.\nFollowing Feynman, we begin with two isolated wells, each with a particular energy level, and with each coupled to the other\nthrough some matrix element, $t$:\n\\begin{eqnarray}\nH\\psi_L &=& E_L \\psi_L - t\\psi_R \\nonumber \\\\\nH\\psi_R &=& E_R \\psi_R - t\\psi_L,\n\\label{feynman}\n\\end{eqnarray}\nwhere, in the absence of coupling, the left (right) well would have a ground state energy $E_L$ ($E_R$), and $\\psi_L$\n($\\psi_R$) represents a wave function localized in the left-side (right-side) well. A straightforward solution of this two state\nsystem results in an energy splitting, as in the symmetric case:\n\\begin{equation}\nE_{\\pm} = {E_L + E_R \\over 2} \\pm \\sqrt{\\left({E_L - E_R \\over 2}\\right)^2 + t^2}.\n\\label{energy_split}\n\\end{equation}\nFor typical barriers ($t <<(E_L + E_R)\/2$) and small asymmetries ($E_L - E_R << (E_L + E_R)\/2$), very little difference occurs in the energies, in agreement with the results from our more microscopic calculations above.\nIf we define $\\delta \\equiv (E_L - E_R)\/2$ [$ \\equiv (V_L - V_R)\/2$ for the square double well potential], then the \nground state wave function becomes\n\\begin{equation}\n\\psi = {1 \\over \\sqrt{2}} \\sqrt{1 - {\\delta \\over \\sqrt{\\delta^2 + t^2}}}\\psi_L + {1 \\over \\sqrt{2}} \n\\sqrt{1 + {\\delta \\over \\sqrt{\\delta^2 + t^2}}}\\psi_R.\n\\label{wave_function_asym}\n\\end{equation}\nIn the symmetric case we recover the (symmetric) linear superposition of the state with the particle in the left well, along\nwith the state with the particle in the right well (see the remark in Ref. [\\onlinecite{remark1}]). With increasing \nasymmetry, however, say with $V_R < V_L$, i.e. 
$\\delta > 0$,\nthe amplitude for the particle being in the right well rises to unity, while that for the particle in the left well decreases to zero.\nOur toy model illustrates that the energy scale for this cross-over is the tunnelling matrix element, $t$. This energy scale must be\nclearly present in the microscopic model defined in Eq.~(\\ref{asy_doublewell_potential}), but it is not there explicitly.\n\n\\subsection{Comparison of the toy model to the microscopic model}\n\nTo see how well the toy model defined by the two-state system in Eq.~(\\ref{feynman}) reproduces properties of the\nmicroscopic calculations, we make an attempt to\ncompare the results from the two calculations. This is most readily accomplished by the following procedure. First,\nthe solid curves displayed in Fig.~4 are readily obtained by plotting the two amplitudes in Eq.~(\\ref{wave_function_asym}), \n\\begin{eqnarray}\n|c_L|^2 &\\equiv &{1 \\over 2} \\left( 1 - {\\delta \\over \\sqrt{\\delta^2 + t^2}} \\right) \\nonumber \\\\\n|c_R|^2 &\\equiv &{1 \\over 2} \\left( 1 + {\\delta \\over \\sqrt{\\delta^2 + t^2}} \\right)\n\\label{cs}\n\\end{eqnarray}\nas a function of $\\delta\/t$. Then, for one of the results shown in Fig.~(\\ref{fig3}),\n\\begin{figure}[here]\n\\includegraphics[width =0.56\\textwidth]{Dauphinee_Fig04.eps}\n\\caption{Plot of the amplitude in each well vs the asymmetry parameter, $\\delta\/t$. The circles indicate the area integrated from Fig.~3 on each side of the barrier, as indicated. The large circle shows the point used to establish a value of $t \\approx 6.84\\cdot 10^{-7} E_1^0$, so that the expression in Eq.~(\\ref{cs}) matches precisely the value determined by the more microscopic calculation of the previous subsection. With this value of $t$ the curves corresponding to the expressions in Eq.~(\\ref{cs}) are also plotted, and the agreement is excellent over the entire range of $\\delta\/t$. This indicates that the `toy model' phenomenology \nis very accurate. 
}\n\\label{fig4}\n\\end{figure}\nwe compute the area under the curve on the left; it will correspond to an amplitude $|c_L|^2$ in Fig.~4. By placing\nthis value on the appropriate curve in Fig.~4 we are able to extract a value of $\\delta\/t$ and hence an effective value\nof $t$ (since $\\delta \\equiv (V_L- V_R)\/2$ is known). This is marked by a large circle in Fig.~4.\nWe have thus identified a value of $t$, strictly only defined for the\ntoy model, with a specific barrier height and width in the more microscopic calculations connected with Eqs. (\\ref{psi_regions}-\n\\ref{relampinv}) or their numerical counterparts. We can then vary the value of $\\delta$ (as was done to generate the \ncurves shown in Fig.~3) and plot the values of the total probability density in the left and right wells as a function of $\\delta\/t$. \nThe smaller circles in Fig.~4 are the results of these calculations, and they almost perfectly lie on the curves generated\nfrom the toy model, thus showing that the asymmetric double well system indeed behaves like a two-state system\ndescribed phenomenologically by Eq.~(\\ref{feynman}). We have done this for other barrier heights and widths and similar\nvery accurate agreement between the two approaches is achieved. \nWe have also carried out such comparisons for excited states, and\nalso for other kinds of double wells (e.g. so-called Gaussian wells), with similar agreement.\n\n\\begin{figure}[here]\n\\includegraphics[width =0.5\\textwidth]{Dauphinee_Fig05.eps}\n\\caption{As in Fig.~3, progression of the wave function as the right well level is lowered from $0$ (same as the left well level) to\n$-3.0\\cdot 10^{-8}$ (in units of $E_1^0 = \\hbar^2 \\pi^2\/(2ma^2)$), but now with a barrier height of $V_0 = 1000E_1^0$. 
When the double well is symmetric the probability \ndensity ($\\left|\\psi(x)\\right|^2$) is symmetric (shown in red); the degree of asymmetry in the probability density increases\nmonotonically as $V_R\/E_1^0$ decreases from $0$ to $-3.0\\cdot 10^{-8}$. The actual values of $V_R$ used in this plot, in units of \n$E_1^0$, are indicated in the legend. Even more so than before, any of these values is absolutely\nindistinguishable on the energy scale of the barrier ($V_0\/E_1^0 = 1000$) or the ground state energies ($E_{\\rm GS}\/E_1^0\n\\approx 5.947$).}\n\\label{fig5}\n\\end{figure}\n\n\\begin{figure}[here]\n\\includegraphics[width =0.56\\textwidth]{Dauphinee_Fig06.eps}\n\\caption{As in Fig.~4, plot of the amplitude in each well vs the asymmetry parameter, $\\delta\/t$, but now for a much higher barrier potential, $V_0\/E_1^0 = 1000$. The circles indicate the area integrated from Fig.~5 on each side of the barrier, as indicated. The large circle shows the point used to establish a value of $t \\approx 1.45\\cdot 10^{-9} E_1^0$, so that the expression in Eq.~(\\ref{cs}) matches precisely the value determined by the more microscopic calculation of the previous subsection. With this value of $t$ the curves corresponding to the expressions in Eq.~(\\ref{cs}) are also plotted, and the agreement is excellent over the entire range \nof $\\delta\/t$. As long as the barrier is sufficiently high to delineate two very distinct states (`left' and `right,' or `dead' and `alive'),\nthe two state model works very well.}\n\\label{fig6}\n\\end{figure}\n\nAs an example, in Fig.~5 and Fig.~6 we show results analogous to those of Fig.~3 and Fig.~4, but for a double well with a barrier\nwith the same width, but with a significantly increased height. The sequence of probability densities in Fig.~5 is similar\nto those shown in Fig.~3 except that the changes in the potential asymmetry are orders of magnitude smaller. 
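The crossover behind Figs.~4 and 6 is contained entirely in Eq.~(\\ref{cs}). Evaluating it with the fitted values of $t$ quoted in the two captions, and the deepest asymmetries $\\delta = (V_L - V_R)\/2$ used in Figs.~3 and 5, gives the degree of localization directly; a short sketch:

```python
import numpy as np

def weights(delta_over_t):
    """|c_L|^2 and |c_R|^2 of Eq. (cs) for the two-state ground state."""
    r = delta_over_t / np.hypot(delta_over_t, 1.0)  # delta / sqrt(delta^2 + t^2)
    return 0.5 * (1.0 - r), 0.5 * (1.0 + r)

# Fitted hopping amplitudes t (units of E_1^0) and deepest |V_R| used above:
cases = {500: (6.84e-7, 1e-5), 1000: (1.45e-9, 3.0e-8)}
for v0, (t, dVR) in cases.items():
    delta = dVR / 2.0  # delta = (V_L - V_R)/2 with V_L = 0, V_R = -dVR
    cL2, cR2 = weights(delta / t)
    print(f"v0 = {v0}: delta/t = {delta / t:5.2f}, |c_L|^2 = {cL2:.3f}, |c_R|^2 = {cR2:.3f}")
```

In both cases $\\delta\/t \\gtrsim 7$, so the two-state model already puts more than $99\\%$ of the weight in the right well, consistent with the nearly saturated circles at the right-hand edges of Figs.~4 and 6; at $\\delta = 0$ the weights return to the symmetric $1\/2$--$1\/2$ superposition.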
Fig.~6 then\nconfirms that the significantly enhanced sensitivity is due to the significantly reduced effective `hopping' amplitude $t$\nbetween the two wells, such that the asymmetry in probability densities as a function of potential asymmetry in Fig.~6\nis as it is in Fig.~4 as a function of $\\delta\/t$, with both of these parameters greatly reduced.\n\n\\subsection{Origin of the coupling $t$}\n\n\nThe origin of the coupling parameter $t$ in the two-state toy model is clearly the possibility of tunnelling that exists from one well into\nthe other. It is rather involved to `derive' this parameter $t$ from parameters of the original double well potential \n\\hypertarget{hmerzbacher98}{specified} in Eq.~(\\ref{asy_doublewell_potential}). \nIn fact it suffices to provide an estimate based only on the symmetric case ($V_L = V_R = 0$), \nand we provide a brief exposition here, following Merzbacher.\\cite{merzbacher98}\n\n\nOne starts with a variational wave function, of the form \n\\begin{equation}\n\\psi_{\\pm}(x) = {N_{\\pm} \\over \\sqrt{2}}\\bigl( \\psi_L(x) \\pm \\psi_R(x) \\bigr),\n\\label{variational}\n\\end{equation}\nwhere the $\\pm$ refers to the ground state ($+$) or first excited state ($-$), respectively, and the subscript $L$ ($R$)\nrefers to the left (right) well, respectively. First taking the two wells in isolation, we obtain $\\psi_L(x)$, for example, with a solution\nsimilar to that in Eq.~(\\ref{psi_regions}),\n\\begin{align}\n \\psi_L(x) & = A\\sin{(kx)} & 0 < x < w \\nonumber \\\\\n & = A\\sin{(kw)}\\,e^{-\\kappa(x-w)} & x > w,\n\\label{psi_L_region}\n\\end{align}\nwhere $\\kappa \\equiv \\sqrt{2m_0(V_0 - E)}\/\\hbar$ governs the decay under the barrier, and similarly for the well on the right; with all coordinates displaced a distance $b$ to the right, it forms a mirror image of the one on the left. \nThis distance can be considered\nto be very large at first. In these equations we have included an unimportant normalization constant, $N_{\\pm}$ in Eq.~(\\ref{variational})\nand $A$ in Eq.~(\\ref{psi_L_region}). 
The energy splitting between the two states is determined by the `overlap' between the two\nwells. A straightforward calculation gives\n\\begin{equation}\nE_{\\rm split} = \\int \\ dx \\ \\psi_L(x) H \\psi_R(x) \\propto e^{-\\kappa b} \\approx e^{-b\\sqrt{V_0}},\n\\label{split}\n\\end{equation}\nso as $V_0$, the height of the barrier, increases for fixed width $b$, the splitting becomes less and less. Note, however, that the\nparameter $t$, first introduced in Eq.~(\\ref{feynman}) is proportional to this same quantity:\n\\begin{equation}\nt \\propto e^{-b\\sqrt{V_0}}.\n\\label{t}\n\\end{equation}\nThis factor is equal to $8.6 \\times 10^{-7}$ and $2.5 \\times 10^{-9}$ for $V_0\/E_1^0 = 500$ and $1000$, respectively. The actual values\nof $t$ obtained phenomenologically through the fitting procedure described above are $6.8 \\times 10^{-7}$ and $1.5 \\times 10^{-9}$,\nrespectively, which tracks very closely these exponentially decaying factors.\n\n\n\n\n\n\\section{Summary}\n\nWe have examined the simplest asymmetric double well potential and explored the behavior of the wave function\nas a function of asymmetry. In the symmetric case the ground state is a linear superposition of the particle in the left\nwell and the particle in the right well.\nAs the floor level of the potential on the right side ($V_R$) decreases, the probability\nfor the particle to be on the right side `slowly' increases. The remarkable result of our calculations is that the energy\nscale over which the transition from symmetric ground state to completely asymmetric ground state can be made\narbitrarily small. As our `toy model' calculation demonstrated, this energy scale is controlled by the tunnelling probability\nbetween the two wells, which is an energy scale that is not obviously present in the microscopic parameters (height and\nwidth of the barrier). In fact, the better well-defined the two well system is, the smaller this energy scale. 
Fig.~3 (or Fig.~5)\ndemonstrates this quite dramatically, where imperceptibly small asymmetries in the potential give rise to a completely asymmetric\nwave function.\nThere is very little indication of this extreme sensitivity in the ground state energy; instead, a calculation of the wave function is required to reveal it. \n\nThese calculations serve to demonstrate a number of important principles for the novice. First, the numerical calculation is\n`simpler' than the analytical calculation, and less likely to lead to error. By this we mean that solving for the wave function\nthrough Eqs.~(\\ref{allowed},\\ref{relamp},\\ref{relampinv}) is a little subtle and, for example, the wrong choice of using either\nEq.~(\\ref{relamp}) or Eq.~(\\ref{relampinv}) can lead to inaccuracies. In contrast the numerical solution is straightforward.\nSecond, the solution obtained here can be tied to perturbation theory. An accurate calculation of the energy can be achieved with\nperturbation theory, but not so with the wave function! This ties into the variational principle, and teaches the important lesson that\na (very) accurate estimate of the energy certainly does not imply an even qualitatively correct wave function.\n\n\\begin{acknowledgments}\n\nThis work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), by the Alberta\niCiNano program, and by a University of Alberta Teaching and Learning Enhancement Fund (TLEF) grant. 
\n\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nThe study of quantum walks, tracing back to \\cite{Gudder,Meyer},\nhas been accelerated from various aspects during the last decade,\nsee e.g., \\cite{Ambainis,KFCSWAMW,KonnoBook,Venegas} and references cited therein.\nFrom a mathematical viewpoint sharp contrast\nbetween quantum walks and random walks is of particular importance.\nFor example, the ballistic spreading is observed in a wide class of\nquantum walks \\cite{ABNVW,CHKS,IKS,Konno2002,Konno2005,KLS,Katori},\ni.e., the speed of a quantum walker's spreading is proportional to the time $n$\nwhile the typical scale for a random walk is $\\sqrt{n}$.\nMoreover, the limit distributions of quantum walks are obtained \\cite{CHKS,IKS,Konno2002,Konno2005,KLS,TS,Katori}\nwith a significant contrast with the normal Gaussian law in the case of random walks.\nIn this paper we focus on the phenomenon called \\textit{localization},\nwhich is also considered as a typical property of quantum walks, \nsee \\cite{CGMV2,IKS,KS} among others.\nWe introduce the Grover walk on a particular infinite graph called a \\textit{spidernet},\nconsider an isotropic initial state,\nand determine the class of spidernets which exhibits localization.\nA spidernet is not only a new example for the localization\nbut also is expected to be a clue to understand localization from graph structure.\nOur method is based on quantum probabilistic spectral analysis of graphs \\cite{HO}.\n\nA spidernet is obtained by adding large cycles to an almost regular tree,\nsee Subsection \\ref{subsec:Spidernets} for definition and\nsee Fig.~\\ref{fig:illestration of spidernets} for illustration.\nIt is expected to have intermediate properties between\ntrees and lattices,\nand its spectral properties have been studied to some extent, \nsee e.g., \\cite{IO} for the spectral distribution of the adjacency matrix\nand \\cite{Urakawa2003} 
for estimates of the Cheeger constant and Green kernel \nin terms of spectra.\nThen the standard application of the Karlin-McGregor formula\n(see e.g., \\cite{Obata2012}) \nyields an explicit formula for the $n$-step transition probability of the isotropic random walk on a spidernet,\nwhere the free Meixner law appears as the spectral distribution.\nThis argument is along a natural extension of the result on random walks \non a homogeneous tree due to Kesten \\cite{Kesten}.\nOur attempt in this paper is to establish the quantum counterpart.\n\nIn Section~\\ref{sec:Grover Walks on Graphs} we introduce the Grover walk on a general graph\nafter the standard literatures, see e.g., \\cite{Ambainis,Watrous}.\nThen we formulate two concepts of localization,\nthat is, \\textit{initial point localization} and \n\\textit{exponential localization}.\nSeveral quantum walks are known to exhibit the localization,\nsee e.g., \\cite{CGMV2,CHKS,IKS,KLS,KS,Katori}.\nFor relevant discussion see also \\cite{S}.\n\nIn Section~\\ref{sec:Main Results} we introduce the spidernet $S(a,b,c)$ and\nmention the main results.\nWe first obtain the integral representation of the $n$-step transition amplitude\nfor the Grover walk on a spidernet:\n\\begin{equation}\\label{1eqn:main expression of probability amplitude}\n\\langle \\bs{\\psi}_0^+, U^n \\bs{\\psi}_0^+\\rangle\n=\\int_{-1}^{1} \\cos n\\theta \\,\\mu(d\\lambda),\n\\quad\nn=0,\\pm1,\\pm2,\\dots,\n\\end{equation}\nwhere $\\lambda=\\cos\\theta$ and $\\mu$ is the free Meixner law of which the parameters\nare determined by $a,b,c$ of the spidernet under consideration,\nsee Theorem \\ref{mainthm:probability amplitude} for the precise statement.\nThe free Meixner law is a probability distribution on $[-1,1]$\nwhich is the sum of absolutely continuous part and at most two point masses.\nIt is then rather easy to derive from \\eqref{1eqn:main expression of probability amplitude}\nthe asymptotic behavior of the transition amplitude as 
$n\\rightarrow\\infty$.\nIn fact, only the effect of the point masses remains in the limit\nand the asymptotic results follow.\nIn particular, we prove that the initial point localization occurs\nif and only if $b>c+\\sqrt{c}$, see Theorem \\ref{3thm:localization for spidernet} for details.\n\nIt is instructive to consider the family of spidernets $S(\\kappa,\\kappa+2,\\kappa-1)$,\n$\\kappa\\ge2$.\nThese are obtained by suitably adding a large cycle to the homogeneous tree with degree $\\kappa$.\nWe see from Theorem \\ref{3thm:localization for spidernet} that\nthe initial point localization occurs for $2\\leq\\kappa< 10$\nand no initial point localization occurs for $\\kappa\\ge10$\n(Corollaries \\ref{3cor:k<10} and \\ref{3thm:estimate kappa ge10}).\nMeanwhile, Corollary \\ref{tree} asserts that no initial point localization occurs\non a homogeneous tree either.\nFrom the recent work \\cite{Katori} we know that the Grover walk on the two-dimensional lattice exhibits\nthe initial point localization.\nThese results suggest that cycles play an essential role in the localization of the Grover walk.\n\nIn Section \\ref{sec:One-dimensional reduction} we introduce the one-dimensional\nreduction of our Grover walk, called a $(p,q)$-quantum walk on $\\mathbb{Z}_+$.\nWe determine the eigenvalues of the $(p,q)$-quantum walk with space cutoff\nby extending the quantum probabilistic method together with the theory of Jacobi matrices.\n\nIn Section \\ref{sec:Proofs of main results} we obtain the integral expression of\nthe $n$-step transition amplitude of the $(p,q)$-quantum walk on $\\mathbb{Z}_+$\n(Theorem \\ref{5thm:integral representation of four amplitudes})\nand the asymptotic behavior of the transition amplitude\n(Theorem \\ref{5thm:localization criteria}).\nWith these preparations we prove the main results.\n\nIn the Appendix we recall the definition of the free Meixner law\nand derive the associated orthogonal polynomials.\nThe explicit form of the orthogonal polynomials is used to derive \nthe 
somewhat surprising result (Lemma \\ref{5lem:special value}) \nwhich plays a key role in deriving the exponential localization.\n\n\nFinally, we mention some relevant works.\nThe so-called CGMV method \\cite{CGMV,CGMV2,GVWW,KS} is also based on the spectral analysis on the unit circle\nand seems to have a close connection with our approach. \nA technique for obtaining the eigensystem of a class of quantum walks on a finite system, including the Grover walk, \nis established in \\cite{Sze}. \nOur result is an extension to an infinite system, where the orthogonal polynomials \nwith respect to the free Meixner law play a key role.\nConservation of probability is an interesting question for a quantum walk,\nsee e.g., \\cite{CHKS,IKS,KLS,SK,Katori}. \nThe quantum walks studied in \\cite{CHKS,IKS,KLS} are non-conservative\nand the ``missing probability\" is found through the weak convergence theorem\nin such a way that the limit distribution is a convex combination of \na point mass at the origin corresponding to localization and the Konno density function \n\\cite{Konno2002,Konno2005} coming from ballistic spreading, see \\cite{KLS} for details. \nIt has not yet been checked whether our Grover walks are conservative or not.\nThere is a large body of literature under the name of quantum graphs, \nsee e.g., \\cite{GS2006} and references cited therein,\nwhich is expected to have a profound relation to quantum walks, though this relation is not yet very clear.\n\n\n\\section{Grover Walks on Graphs}\n\\label{sec:Grover Walks on Graphs}\n\nLet $G$ be a graph with vertex set $V=V(G)$ and edge set $E=E(G)$,\ni.e., $V$ is a non-empty (finite or infinite) set \nand $E$ is a subset of $\\{\\{u,v\\}\\,;\\, u,v\\in V, u\\neq v\\}$.\nWe often write $u\\sim v$ for $\\{u,v\\}\\in E$. 
\nThroughout the paper a graph is always assumed to be \\textit{locally finite}, \ni.e., $\\deg(u)=|\\{v\\in V\\,;\\, v\\sim u\\}| <\\infty$ for all $u\\in V$,\nand \\textit{connected}, i.e., every pair of vertices is connected by a walk.\nAn ordered pair $(u,v)\\in V\\times V$ is called \na \\textit{half-edge} extending from $u$ to $v$ if $u\\sim v$.\nLet $A(G)$ denote the set of half-edges of $G$.\n\nThe state space of our Grover walk will be given by the Hilbert space\n$\\mathcal{H}=\\mathcal{H}(G)=\\ell^2(A(G))$ of square-summable functions on $A(G)$.\nThe inner product is defined by\n\\[\n\\langle \\bs{\\phi},\\bs{\\psi}\\rangle\n=\\sum_{(u,v)\\in A(G)} \\overline{\\bs{\\phi}(u,v)}\\,\\bs{\\psi}(u,v),\n\\qquad\n\\bs{\\phi},\\bs{\\psi}\\in \\mathcal{H}.\n\\]\nIn general, a unit vector in $\\mathcal{H}$ is called a \\textit{state}.\nThe canonical orthonormal basis is denoted by \n$\\{\\bs{\\delta}_{(u,v)}\\,;\\, (u,v)\\in A(G)\\}$.\nFor $u\\in V$ let $\\mathcal{H}_u$ be the closed subspace spanned by\n$\\{\\bs{\\delta}_{(u,v)}\\,;\\, v\\sim u\\}$.\nObviously, we have $\\dim \\mathcal{H}_u=\\deg(u)$ and\nthe orthogonal decomposition:\n\\[\n\\mathcal{H}=\\sum_{u\\in V} \\oplus \\mathcal{H}_u\\,.\n\\]\n\nWe next introduce unitary operators on $\\mathcal{H}$.\nWith each $u\\in V$ we associate a \\textit{Grover operator} $H^{(u)}$ on $\\mathcal{H}_u$\ndefined by means of the actions on the orthonormal basis\n$\\{\\bs{\\delta}_{(u,v)}\\,;\\, v\\sim u\\}$:\n\\begin{equation}\n(H^{(u)})_{vw}\n\\equiv \\langle \\bs{\\delta}_{(u,v)}, H^{(u)}\\bs{\\delta}_{(u,w)}\\rangle\n=\\frac{2}{\\deg(u)}-\\delta_{vw}\\,.\n\\end{equation} \nAs is easily verified,\nthe Grover operator $H^{(u)}$ is a real symmetric, unitary operator on $\\mathcal{H}_u$.\nThen the \\textit{coin flip operator} $C$ on $\\mathcal{H}$ is defined by\n\\begin{equation}\nC\\bs{\\delta}_{(u,v)}\n=\\sum_{w\\sim u}(H^{(u)})_{vw}\\bs{\\delta}_{(u,w)}.\n\\end{equation}\nThe \\textit{shift operator} $S$ is defined 
by\n\\[\nS\\bs{\\delta}_{(u,v)}=\\bs{\\delta}_{(v,u)}\\,.\n\\]\nNote that $C^2=S^2=I$ (the identity operator).\nSince both $C$ and $S$ are unitary operators on $\\mathcal{H}$,\nso is \n\\[\nU=SC,\n\\]\nwhich is called the \\textit{Grover walk} on the graph $G$.\n\nThe time evolution of the Grover walk with an initial state $\\bs{\\Phi}_0\\in\\mathcal{H}=\\ell^2(A(G))$\nis given by the sequence of unit vectors:\n\\[\n\\bs{\\Phi}_n=U^n\\bs{\\Phi}_0\\,,\n\\qquad n=0,1,2,\\dots.\n\\]\nSince $U^n$ is unitary, we have\n\\[\n1=\\|\\bs{\\Phi}_n\\|^2=\\sum_{u\\in V}\\sum_{v\\sim u}|\\bs{\\Phi}_n(u,v)|^2,\n\\qquad n=0,1,2,\\dots.\n\\]\nTherefore, the function\n\\[\nu\\mapsto \\sum_{v\\sim u}|\\bs{\\Phi}_n(u,v)|^2,\n\\qquad u\\in V,\n\\]\ndefines a probability distribution on $V$,\nwhich is interpreted as the probability\nof finding a Grover walker at $u\\in V$ at time $n$.\nFollowing convention, we write\n\\begin{equation}\\label{2eqn:distribution of GW}\nP(X_n=u)=\\sum_{v\\sim u}|\\bs{\\Phi}_n(u,v)|^2,\n\\qquad u\\in V.\n\\end{equation}\nNote, however, that\n$X_n$ is defined merely as a random variable for each fixed $n$.\nIt is an interesting question to construct a discrete-time stochastic process\n$\\{X_n\\,;\\, n=0,1,2,\\dots\\}$ with state space $V$ reasonably reflecting \nprobabilistic properties of the Grover walk.\nThe quantity $\\bs{\\Phi}_n(u,v)=\\langle\\bs{\\delta}_{(u,v)},U^n\\bs{\\Phi}_0\\rangle$\nappearing in \\eqref{2eqn:distribution of GW},\nor more generally $\\langle\\bs{\\Phi},U^n\\bs{\\Phi}_0\\rangle$ for \ntwo states $\\bs{\\Phi},\\bs{\\Phi}_0$ is\ncalled a \\textit{transition amplitude}.\nThis is a quantum counterpart of the transition probability of a Markov chain.\n\n\nSince the sequence $\\{P(X_n=u)\\,;\\, n=0,1,2,\\dots\\}$ defined in \n\\eqref{2eqn:distribution of GW} oscillates in general,\nit is essential to study the time average:\n\\[ \n\\overline{q}^{(\\infty)}(u)\n=\\lim_{N\\to\\infty}\\frac{1}{N}\\sum_{n=0}^{N-1}P(X_n=u),\n\\qquad u\\in 
V,\n\\]\nwhen the limit exists.\nFor a state $\\bs{\\Phi}\\in\\mathcal{H}=\\ell^2(A(G))$ we denote by\n$\\mathrm{supp\\,}\\bs{\\Phi}$ the set of vertices $u\\in V$\nsuch that $\\bs{\\Phi}(u,v)=\\langle \\bs{\\delta}_{(u,v)}, \\bs{\\Phi}\\rangle\\neq0$\nfor some $v\\sim u$.\n\n\\begin{definition}[Initial point localization]\n\\label{def:initial point localization}\n\\normalfont\\rm\nLet $o\\in V$ be a distinguished vertex\nand $\\bs{\\Phi}_0\\in\\mathcal{H}=\\ell^2(A(G))$ a state with $\\mathrm{supp\\,}\\bs{\\Phi}_0=\\{o\\}$.\nWe say that the Grover walk on $G$ with an initial state $\\bs{\\Phi}_0$ exhibits\n\\textit{initial point localization} if $\\overline{q}^{(\\infty)}(o)>0$.\n\\end{definition}\n\n\\begin{definition}[Exponential localization]\n\\label{def:exponential localization}\n\\normalfont\\rm\nLet $o\\in V$ and $\\bs{\\Phi}_0$ be the same as in Definition \\ref{def:initial point localization}.\nWe say that the Grover walk with an initial state $\\bs{\\Phi}_0$ exhibits\n\\textit{exponential localization} if there exist constant numbers $C>0$ and $0<\\rho<1$ such that\n\\[\n\\liminf_{N\\rightarrow\\infty}\\frac{1}{N}\\sum_{n=0}^{N-1}P(X_n=u)\n\\ge C\\rho^{\\partial(o,u)},\n\\qquad u\\in V,\n\\]\nwhere $\\partial(o,u)$ denotes the graph distance between $o$ and $u$.\n\\end{definition}\n\n\\begin{theorem}\\label{3thm:localization for spidernet}\nConsider the Grover walk on a spidernet $S(a,b,c)$ with the initial state $\\bs{\\psi}_0^+$\nand set\n\\[\nw=\\frac{(b-c)^2-c}{(b-c)(b-c+1)}\\,.\n\\]\nIf $b>c+\\sqrt{c}$, then the initial point localization occurs:\n\\[\n\\overline{q}^{(\\infty)}(o)\n=\\lim_{N\\rightarrow\\infty}\\frac1N\\sum_{n=0}^{N-1} P(X_n=o)\n=\\frac{w^2}{2}>0.\n\\]\nIf $b\\le c+\\sqrt{c}$, then no localization occurs:\n\\[\n\\lim_{n\\rightarrow\\infty}P(X_n=o)=0\n\\quad\\text{and}\\quad\n\\overline{q}^{(\\infty)}(o)=0.\n\\]\n\\end{theorem}\n\nIt is instructive to consider the family of spidernets $S(\\kappa,\\kappa+2,\\kappa-1)$, $\\kappa\\ge2$.\nNote that $S(\\kappa,\\kappa+2,\\kappa-1)$ is obtained by adding a large cycle to each stratum\nof $S(\\kappa,\\kappa,\\kappa-1)$, which is the homogeneous tree of degree $\\kappa$.\nBelow we list some results obtained immediately from Theorem \\ref{3thm:localization for spidernet}.\n\n\\begin{corollary}\\label{3cor:k<10}\nLet $2\\leq\\kappa< 10$.\nFor the Grover walk on a spidernet $S(\\kappa,\\kappa+2,\\kappa-1)$ with \ninitial state $\\bs{\\psi}_0^+$ it 
holds that\n\\[\nP(X_n=o)\n=|\\langle\\bs{\\psi}_0^+,U^n\\bs{\\psi}_0^+\\rangle|^2\n\\sim \n\\left(\\frac{10-\\kappa}{12}\\right)^2\\cos^2 n\\tilde\\theta,\n\\quad n\\rightarrow\\infty,\n\\]\nwhere $\\cos\\tilde\\theta=-1\/3$, $0\\le \\tilde\\theta\\le \\pi$.\nMoreover, \n\\begin{equation}\n\\overline{q}^{(\\infty)}(o)\n=\\frac12 \\left(\\frac{10-\\kappa}{12}\\right)^2,\n\\end{equation}\nwhich means that the Grover walk under consideration exhibits initial point localization.\n(An example for $\\kappa=4$ is shown in Fig.~\\ref{fig:one}.)\n\\end{corollary}\n\n\n\\begin{corollary}\\label{3thm:estimate kappa ge10}\nLet $\\kappa\\geq 10$.\nFor the Grover walk on a spidernet $S(\\kappa,\\kappa+2,\\kappa-1)$ with \nan initial state $\\bs{\\psi}_0^+$ it holds that\n\\[\n\\lim_{n\\rightarrow\\infty}P(X_n=o)=0,\n\\quad\\text{hence}\\quad\n\\overline{q}^{(\\infty)}(o)=0.\n\\]\n\\end{corollary}\n\n\\begin{corollary}\\label{tree}\nFor the Grover walk $U$ on a spidernet $S(a,b,b-1)$ with an initial state $\\bs{\\psi}^+_0$\nwe have\n\\[\n\\lim_{n\\rightarrow\\infty}P(X_n=o)=0,\n\\quad\\text{hence}\\quad\n\\overline{q}^{(\\infty)}(o)=0.\n\\]\n\\end{corollary}\n\nFrom Corollaries \\ref{3cor:k<10}--\\ref{tree} we see \nthat the localization occurs when the ``density\" of large cycles is high.\nFurther study in this direction is now in progress.\n\nCorollary \\ref{tree} follows directly from Theorem \\ref{mainthm:probability amplitude}\nas a homogeneous tree is a special case of spidernets.\nMeanwhile, quantum walks on trees have been studied from various aspects, and\nthe result in Corollary \\ref{tree} is already known \\cite{CHKS}.\nNote also that localization may occur for the Grover walk on a tree with \na non-isotropic initial state.\n\nIn relation to Theorem \\ref{3thm:localization for spidernet} we have the following.\n\n\\begin{theorem}\\label{exponential localization}\nConsider a spidernet $S(a,b,c)$ with $b>c+\\sqrt{c}$.\nThen for the Grover walk $U$ with an initial state 
$\\bs{\\psi}_0^+$ it holds that\n\\[\n\\liminf_{N\\rightarrow\\infty}\\frac{1}{N}\\sum_{n=0}^{N-1}P(X_n\\in V_l)\n\\ge \\frac{b}{2c}\\left\\{\\frac{(b-c)^2-c}{(b-c)(b-c+1)}\\right\\}^2\n \\left\\{\\frac{c}{(b-c)^2}\\right\\}^l,\n\\quad l\\ge1.\n\\]\nIf the spidernet $S(a,b,c)$ is rotationally symmetric around $o$, we have\n\\[\n\\liminf_{N\\rightarrow\\infty}\\frac{1}{N}\\sum_{n=0}^{N-1}P(X_n=u)\n\\ge \\frac{b}{2a}\\left\\{\\frac{(b-c)^2-c}{(b-c)(b-c+1)}\\right\\}^2\n \\left\\{\\frac{1}{(b-c)^2}\\right\\}^{\\partial(u,o)}\\,,\n\\]\nfor all $u\\in V$, $u\\neq o$.\nNamely, the Grover walk under consideration exhibits exponential localization.\n\\end{theorem}\n\nSpecializing the parameters in Theorem \\ref{exponential localization}, we\nobtain the following result with no difficulty.\n\n\\begin{corollary}\\label{local}\nLet $2\\leq\\kappa< 10$.\nFor the Grover walk on a spidernet $S(\\kappa,\\kappa+2,\\kappa-1)$ with \nan initial state $\\bs{\\psi}_0^+$ it holds that\n\\[\n\\liminf_{N\\rightarrow\\infty}\\frac{1}{N}\\sum_{n=0}^{N-1}P(X_n\\in V_l)\n\\ge \\frac{\\kappa+2}{2(\\kappa-1)}\\left(\\frac{10-\\kappa}{12}\\right)^2 \n \\left(\\frac{\\kappa-1}{9}\\right)^l,\n\\quad l\\ge1.\n\\]\nIf the spidernet $S(\\kappa,\\kappa+2,\\kappa-1)$ is rotationally symmetric\naround $o$, we have\n\\[\n\\liminf_{N\\rightarrow\\infty}\\frac{1}{N}\\sum_{n=0}^{N-1}P(X_n=u)\n\\ge \\frac{\\kappa+2}{2\\kappa}\\left(\\frac{10-\\kappa}{12}\\right)^2 \n \\left(\\frac{1}{9}\\right)^{\\partial(o,u)},\n\\quad u\\in V, \\,\\, u\\neq o.\n\\]\nNamely, the Grover walk under consideration exhibits exponential localization.\n\\end{corollary}\n\n\\begin{remark}\nBy the change of variable $\\lambda=\\cos\\theta$,\nthe right-hand side of \\eqref{3eqn:main expression of probability amplitude} becomes an \nintegral over $[0,\\pi]$. 
Then, using a symmetric extension, we can write\n\\[\n\\int_{-1}^{1} \\cos n\\theta \\,\\mu(d\\lambda)\n=\\int_{-\\pi}^{\\pi} \\cos n\\theta \\,\\nu(d\\theta)\n\\]\nwith a suitable probability distribution $\\nu$ on $[-\\pi,\\pi]$\nsuch that $\\nu(-d\\theta)=\\nu(d\\theta)$\nand with no point mass at $\\pm\\pi$.\nThus, we have an alternative expression for the transition amplitude:\n\\[\n\\langle \\bs{\\psi}_0^+, U^n \\bs{\\psi}_0^+\\rangle\n=\\int_{-\\pi}^{\\pi} e^{in\\theta} \\,\\nu(d\\theta),\n\\qquad n=0,\\pm1,\\pm2,\\dots,\n\\]\nwhich is directly related to the spectral decomposition of the unitary\noperator $U$.\n\\end{remark}\n\n\\begin{remark}\nLet $\\{X_n\\,;n=0,1,2,\\dots\\}$ be the isotropic random walk on $S(a,b,c)$ with\ntransition matrix $T$. It then follows from the well-established general theory that\n\\begin{equation}\\label{3eqn:RW}\nP(X_n=o|X_0=o)\n=\\langle\\bs{\\delta}_o,T^n\\bs{\\delta}_o\\rangle\n=\\int_{-1}^{1} \\lambda^n \\,\\mu(d\\lambda),\n\\quad n=0,1,2,\\dots,\n\\end{equation}\nwhere $\\mu$ is the same probability distribution as in Theorem \\ref{mainthm:probability amplitude},\nsee also \\cite{Obata2012} for a relevant discussion from the viewpoint of quantum probability.\nWe see that \\eqref{3eqn:RW} makes a sharp contrast with the transition amplitude \n\\eqref{3eqn:main expression of probability amplitude}.\n\\end{remark}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=225pt]{origin.eps}\n\\caption{The Grover walk on $S(4,6,3)$ from time $n=620$ to $n=650$\n(see Corollary \\ref{3cor:k<10}): \nThe dots stand for $P(X_n=o)$ calculated by numerical simulation. 
\nThe curve is the graph of $(1\/4)\\cos^2(t\\tilde\\theta)$ \nand the horizontal line depicts the time-averaged limit probability $\\overline{q}^{(\\infty)}(o)$.}\n\\label{fig:one} \n\\end{center}\n\\end{figure}\n\n\n\\section{One-dimensional reduction}\n\\label{sec:One-dimensional reduction}\n\n\n\\subsection{$(p,q)$-Quantum walk on $\\mathbb{Z}_+$}\n\\label{subsec:4.1}\n\nLet $U=SC$ be the Grover walk on the spidernet $G=S(a,b,c)$.\nDefine orthonormal vectors in $\\mathcal{H}=\\ell^2(A(G))$ by\n\\begin{alignat}{2}\n\\bs{\\psi}^+_n \n&= \\frac{1}{\\sqrt{ac^n}}\n\\sum_{u\\in V_n}\\sum_{\\substack{v\\in V_{n+1} \\\\ v\\sim u}} \\bs{\\delta}_{(u,v)}\\,,\n&\\quad &n\\ge0, \n\\label{3eqn:def of psi+} \\\\\n\\bs{\\psi}^\\circ_n \n&= \\frac{1}{\\sqrt{a(b-c-1)c^{n-1}}}\n\\sum_{u\\in V_n}\\sum_{\\substack{v\\in V_n \\\\ v\\sim u}} \\bs{\\delta}_{(u,v)}\\,,\n&\\quad &n\\ge1, \n\\label{3eqn:def of psio} \\\\\n\\bs{\\psi}^-_n \n&= \\frac{1}{\\sqrt{ac^{n-1}}}\n\\sum_{u\\in V_n}\\sum_{\\substack{v\\in V_{n-1} \\\\ v\\sim u}} \\bs{\\delta}_{(u,v)}\\,,\n&\\quad &n\\ge1.\n\\label{3eqn:def of psi-}\n\\end{alignat}\nWe keep the same notation as in \\eqref{3eqn:p and q from spidernet}:\n\\begin{equation}\\label{4eqn:pqr}\np=\\frac{c}{b}\\,,\n\\qquad\nq=\\frac{1}{b}\\,,\n\\qquad\nr=\\frac{b-c-1}{b}\\,,\n\\end{equation}\nand verify that\n\\[\np>0,\n\\quad q>0, \n\\quad r=1-p-q\\ge0.\n\\]\n\n\\begin{lemma}\\label{lem:actions of C on psi}\nIt holds that\n\\begin{align}\nC\\bs{\\psi}^+_n\n&=\\begin{cases}\n\\bs{\\psi}^+_0\\,, & n=0, \\\\\n(2p-1)\\bs{\\psi}^+_n+2\\sqrt{pr}\\,\\bs{\\psi}^\\circ_n+2\\sqrt{pq}\\,\\bs{\\psi}^-_n\\,, & n\\ge1,\n\\end{cases}\n\\label{eqn:def of psi +n} \\\\\nC\\bs{\\psi}^\\circ_n\n&=2\\sqrt{pr}\\,\\bs{\\psi}^+_n+(2r-1)\\bs{\\psi}^\\circ_n+2\\sqrt{qr}\\,\\bs{\\psi}^-_n\\,, \n\\,\\,\\,\\,\\quad n\\ge1,\n\\label{eqn:def of psi on} \\\\\nC\\bs{\\psi}^-_n\n&=2\\sqrt{pq}\\,\\bs{\\psi}^+_n +2\\sqrt{qr}\\, \\bs{\\psi}^\\circ_n +(2q-1)\\bs{\\psi}^-_n\\,,\n\\quad 
n\\ge1.\n\\label{eqn:def of psi -n}\n\\end{align}\n\\end{lemma}\n\n\\noindent{\\it Proof.} \nBy definition we have\n\\begin{align*}\nC\\bs{\\delta}_{(x,y)}\n&=\\sum_{w\\sim x}(H^{(x)})_{yw}\\bs{\\delta}_{(x,w)}\\,,\n\\qquad (x,y)\\in A(G), \\\\\n(H^{(x)})_{yw}\n&=\n\\begin{cases}\n\\dfrac{2}{a}-\\delta_{yw}\\,,& x=o, \\\\[8pt]\n\\dfrac{2}{b}-\\delta_{yw}\\,,& \\textit{otherwise}.\n\\end{cases}\n\\end{align*}\nWe first show \\eqref{eqn:def of psi +n} for $n=0$.\nSuppose $(o,y)\\in A(G)$.\nThen, \n\\begin{align*}\nC\\bs{\\delta}_{(o,y)}\n&=\\sum_{w\\sim o}(H^{(o)})_{yw}\\bs{\\delta}_{(o,w)} \\\\\n&=\\sum_{w\\sim o}\\left(\\dfrac{2}{a}-\\delta_{yw}\\right)\\bs{\\delta}_{(o,w)} \\\\\n&=\\frac{2}{a}\\sum_{w\\sim o}\\bs{\\delta}_{(o,w)}-\\bs{\\delta}_{(o,y)}\\,.\n\\end{align*}\nTaking the summation over $y\\sim o$, we obtain\n\\[\n\\sum_{y\\sim o}C\\bs{\\delta}_{(o,y)}\n=2\\sum_{w\\sim o}\\bs{\\delta}_{(o,w)}-\\sum_{y\\sim o}\\bs{\\delta}_{(o,y)}\n=\\sum_{w\\sim o}\\bs{\\delta}_{(o,w)}\\,,\n\\]\nfrom which the desired relation follows by\ndividing both sides by $\\sqrt{a}$.\n\nWe next prove \\eqref{eqn:def of psi +n} for $n\\ge1$.\nSuppose $x\\in V_n$ with $n\\ge1$.\nThen by definition,\n\\begin{align*}\n\\sum_{\\substack{y\\in V_{n+1} \\\\ y\\sim x}} C\\bs{\\delta}_{(x,y)}\n&=\\sum_{\\substack{y\\in V_{n+1} \\\\ y\\sim x}} \\sum_{w\\sim x}\\left(\\dfrac{2}{b}-\\delta_{yw}\\right)\\bs{\\delta}_{(x,w)} \\\\\n&=\\frac{2c}{b} \\sum_{w\\sim x}\\bs{\\delta}_{(x,w)} \n - \\sum_{\\substack{y\\in V_{n+1} \\\\ y\\sim x}} \\bs{\\delta}_{(x,y)}\\,,\n\\end{align*}\nwhere $|\\{y\\in V_{n+1}\\,;\\, y\\sim x\\}|=c$ is taken into account.\nTaking the summation over $x\\in V_n$, we obtain\n\\begin{align*}\n\\sum_{x\\in V_n}\\sum_{\\substack{y\\in V_{n+1} \\\\ y\\sim x}} C\\bs{\\delta}_{(x,y)}\n&=\\frac{2c}{b} \\sum_{x\\in V_n}\\sum_{w\\sim x}\\bs{\\delta}_{(x,w)} \n - \\sum_{x\\in V_n}\\sum_{\\substack{y\\in V_{n+1} \\\\ y\\sim x}} \\bs{\\delta}_{(x,y)} 
\\\\\n&=\\frac{2c}{b}\\Big(\\sqrt{ac^n}\\, \\bs{\\psi}^+_n\n + \\sqrt{a(b-c-1)c^{n-1}}\\,\\bs{\\psi}^\\circ_n \\\\\n&\\qquad\\qquad +\\sqrt{ac^{n-1}}\\,\\bs{\\psi}^-_n\\Big) \n - \\sqrt{ac^n}\\,\\bs{\\psi}^+_n\n\\end{align*}\nand then, dividing both sides by $\\sqrt{ac^n}$, we come to\n\\[\nC\\bs{\\psi}^+_n\n=\\left(\\frac{2c}{b}-1\\right)\\bs{\\psi}^+_n\n +\\frac{2\\sqrt{c(b-c-1)}}{b}\\,\\bs{\\psi}^\\circ_n\n +\\frac{2\\sqrt{c}}{b}\\,\\bs{\\psi}^-_n\\,,\n\\]\nwhich shows \\eqref{eqn:def of psi +n}.\nThe remaining relations are proved in a similar manner.\n\\begin{flushright} $\\square$ \\end{flushright}\n\n\\begin{lemma}\\label{docomo}\nIt holds that\n\\begin{alignat}{2}\nS\\bs{\\psi}^+_n &=\\bs{\\psi}^-_{n+1}\\,, &\\quad &n\\ge0,\n\\label{eqn:def of S psi +n} \\\\\nS\\bs{\\psi}^\\circ_n &=\\bs{\\psi}^\\circ_n\\,, &\\quad &n\\ge1, \n\\label{eqn:def of S psi on} \\\\\nS\\bs{\\psi}^-_n &= \\bs{\\psi}^+_{n-1}\\,, &\\quad &n\\ge1.\n\\label{eqn:def of S psi -n}\n\\end{alignat}\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nBy a straightforward calculation similar to the proof of Lemma \\ref{lem:actions of C on psi}.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\noindent It is convenient to study the actions of $C$ and $S$\ndescribed in Lemmas \\ref{lem:actions of C on psi} and \\ref{docomo}\nin a slightly more general context.\nWe consider the Hilbert space $\\mathcal{H}(\\mathbb{Z}_+)$ of the form:\n\\[\n\\mathcal{H}(\\mathbb{Z}_+)\n=\\mathbb{C}\\bs{\\psi}^+_0\\oplus \\sum_{n=1}^\\infty\\oplus \n (\\mathbb{C}\\bs{\\psi}^+_n\\oplus \\mathbb{C}\\bs{\\psi}^\\circ_n\\oplus \\mathbb{C}\\bs{\\psi}^-_n),\n\\]\nwhere $\\bs{\\psi}^+_0, \\bs{\\psi}^+_1,\\bs{\\psi}^\\circ_1, \\bs{\\psi}^-_1,\\dots$ form an \northonormal basis of $\\mathcal{H}(\\mathbb{Z}_+)$.\nLet $p,q,r$ be constant numbers satisfying\n\\[\np>0, \\quad q>0, \\quad r=1-p-q\\ge0.\n\\]\nWe then define the coin operator $C$ and the shift operator $S$ on $\\mathcal{H}(\\mathbb{Z}_+)$\nby \\eqref{eqn:def of psi 
+n}--\\eqref{eqn:def of psi -n} and by \\eqref{eqn:def of S psi +n}--\\eqref{eqn:def of S psi -n},\nrespectively. \nIt is easily seen that both $C$ and $S$ are unitary operators.\nHence $U=SC$ is also a unitary operator on $\\mathcal{H}(\\mathbb{Z}_+)$,\nwhich is called the \\textit{$(p,q)$-quantum walk} on $\\mathbb{Z}_+$.\n\nThus the Grover walk on a spidernet $G=S(a,b,c)$ restricted to\nthe closed subspace spanned by\n$\\{\\bs{\\psi}^+_n,\\,n\\ge0\\}\\cup\n \\{\\bs{\\psi}^\\circ_n\\,;\\, n\\ge1\\}\\cup\n \\{\\bs{\\psi}^-_n\\,;\\,n\\ge1\\}$ is a $(p,q)$-quantum walk on $\\mathbb{Z}_+$,\nwhere $p,q$ are given by \\eqref{4eqn:pqr}.\n\nWe define orthonormal vectors in $\\mathcal{H}(\\mathbb{Z}_+)$ by\n\\begin{align*}\n\\bs{\\Psi}_0 &=\\bs{\\psi}^+_0, \\\\\n\\bs{\\Psi}_n &=\\sqrt{p}\\,\\bs{\\psi}^+_n+\\sqrt{r}\\,\\bs{\\psi}^\\circ_n+\\sqrt{q}\\,\\bs{\\psi}^-_n\\,, \\quad n\\ge1,\n\\end{align*}\nand set\n\\[\n\\Gamma(\\mathbb{Z}_+)=\\sum_{n=0}^\\infty \\oplus \\mathbb{C} \\bs{\\Psi}_n\\,.\n\\]\nThen $\\Gamma(\\mathbb{Z}_+)\\subset \\mathcal{H}(\\mathbb{Z}_+)$ is a closed subspace.\nLet $\\Pi:\\mathcal{H}(\\mathbb{Z}_+)\\rightarrow\\Gamma(\\mathbb{Z}_+)$ denote the orthogonal projection.\n\n\\begin{lemma}\\label{3lem:C=2pi-I on Z}\nIt holds that\n\\[\nC=C^*=2\\Pi-I.\n\\]\nIn particular, $C$ is the reflection with respect to $\\Gamma(\\mathbb{Z}_+)$ and\nacts on $\\Gamma(\\mathbb{Z}_+)$ as the identity.\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nStraightforward by definition.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\n\\subsection{$(p,q)$-Quantum walk on a path of finite length}\n\nLet $U$ be a $(p,q)$-quantum walk on $\\mathbb{Z}_+$ as in the previous section.\nWe will introduce a $(p,q)$-quantum walk on the path of length $N\\ge2$,\nobtained from the $(p,q)$-quantum walk on $\\mathbb{Z}_+$ by cutoff.\n\nFor $N\\ge2$ we define a Hilbert space:\n\\[\n\\mathcal{H}(N)\n=\\mathbb{C}\\bs{\\psi}^+_0\\oplus \\sum_{n=1}^{N-1}\\oplus \n 
(\\mathbb{C}\\bs{\\psi}^+_n\\oplus \\mathbb{C}\\bs{\\psi}^\\circ_n\\oplus \\mathbb{C}\\bs{\\psi}^-_n)\n \\oplus \\mathbb{C}\\bs{\\psi}^-_N\n\\]\nand unitary operators $C=C_N$ and $S=S_N$ respectively \nas in \\eqref{eqn:def of psi +n}--\\eqref{eqn:def of psi -n}\nand in \\eqref{eqn:def of S psi +n}--\\eqref{eqn:def of S psi -n}, except\n\\begin{equation}\\label{3eqn:exception C}\nC\\bs{\\psi}^-_N=\\bs{\\psi}^-_N.\n\\end{equation}\nThen we obtain a unitary operator $U=U_N=S_NC_N$ on $\\mathcal{H}(N)$,\nwhich is called the \\textit{$(p,q)$-quantum walk} on the path of length $N$.\nBoth endpoints act as reflecting barriers, in analogy with random walks.\nFrom now on we omit the suffix $N$ whenever there is no danger of confusion.\n\nIn view of \\eqref{eqn:def of psi +n}--\\eqref{eqn:def of psi -n}, \\eqref{3eqn:exception C}\nand \\eqref{eqn:def of S psi +n}--\\eqref{eqn:def of S psi -n}\nthe explicit actions of $U$ on $\\bs{\\psi}^\\epsilon_j$ are easily written down as follows:\n\\begin{align}\nU\\bs{\\psi}^+_j\n&=\\begin{cases}\n\\bs{\\psi}^-_1\\,, & j=0, \\\\\n(2p-1)\\bs{\\psi}^-_{j+1}+2\\sqrt{pr}\\,\\bs{\\psi}^\\circ_j+2\\sqrt{pq}\\,\\bs{\\psi}^+_{j-1}\\,, & 1\\le j\\le N-1,\n\\end{cases}\n\\label{eqn:action of U psi +n} \\\\\nU\\bs{\\psi}^\\circ_j\n&=2\\sqrt{pr}\\,\\bs{\\psi}^-_{j+1}+(2r-1)\\bs{\\psi}^\\circ_j+2\\sqrt{qr}\\,\\bs{\\psi}^+_{j-1}\\,, \n\\,\\,\\,\\,\\quad 1\\le j\\le N-1,\n\\label{eqn:action of U psi on} \\\\\nU\\bs{\\psi}^-_j\n&=\\begin{cases}\n2\\sqrt{pq}\\,\\bs{\\psi}^-_{j+1} +2\\sqrt{qr}\\, \\bs{\\psi}^\\circ_j +(2q-1)\\bs{\\psi}^+_{j-1}\\,, & 1\\le j\\le N-1, \\\\\n\\bs{\\psi}^+_{N-1}, & j=N.\n\\end{cases}\n\\label{eqn:action of U psi -n}\n\\end{align}\n\nThe goal of this subsection is to determine the spectrum (eigenvalues) of $U$.\nWe start with the following result.\n\n\\begin{lemma}\\label{3lem:Trace of U}\n$\\mathrm{Tr\\,}U=(2r-1)(N-1)$.\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nWe see from \\eqref{eqn:action of U psi +n}--\\eqref{eqn:action 
of U psi -n} that\n\\[\n\\mathrm{Tr\\,}U=\\sum_{\\epsilon,n}\\langle \\bs{\\psi}^\\epsilon_n, U \\bs{\\psi}^\\epsilon_n\\rangle\n=\\sum_{j=1}^{N-1} \\langle \\bs{\\psi}^\\circ_j, U \\bs{\\psi}^\\circ_j\\rangle\n=(2r-1)(N-1)\n\\]\nas desired.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\noindent Define orthonormal vectors in $\\mathcal{H}(N)$ by\n\\begin{align*}\n&\\bs{\\Psi}_0 =\\bs{\\psi}^+_0, \\\\\n&\\bs{\\Psi}_j =\\sqrt{p}\\,\\bs{\\psi}^+_j+\\sqrt{r}\\,\\bs{\\psi}^\\circ_j+\\sqrt{q}\\,\\bs{\\psi}^-_j\\,, \\quad 1\\le j\\le N-1, \\\\\n&\\bs{\\Psi}_N =\\bs{\\psi}^-_N\n\\end{align*}\nand set\n\\[\n\\Gamma(N)=\\sum_{j=0}^N \\oplus\\mathbb{C} \\bs{\\Psi}_j\\,.\n\\]\nThen $\\Gamma(N)$ is a closed subspace of $\\mathcal{H}(N)$;\nwe let $\\Pi=\\Pi_N:\\mathcal{H}(N)\\rightarrow\\Gamma(N)$ denote the orthogonal projection.\nThe assertion of Lemma \\ref{3lem:C=2pi-I on Z} remains true, i.e., \nit holds that \n\\[\nC=C^*=2\\Pi-I.\n\\]\n\nFor the $(p,q)$-quantum walk $U=U_N$ we consider $\\Pi U \\Pi$ as an operator on $\\Gamma(N)$, \nwhich is denoted by $T=T_N$.\nThus,\n\\begin{equation}\\label{4eqn:T and Pi}\nT=\\Pi U \\Pi \\!\\!\\restriction_{\\Gamma(N)}\n=\\Pi SC \\Pi \\!\\!\\restriction_{\\Gamma(N)}\n=\\Pi S \\!\\!\\restriction_{\\Gamma(N)}\\,.\n\\end{equation}\nMoreover, by direct calculation we obtain its matrix expression \nwith respect to the orthonormal basis $\\{\\bs{\\Psi}_j\\,;\\,0\\le j\\le N\\}$ as follows:\n\\[\nT=T_N=\n\\begin{bmatrix}\n0 & \\sqrt{q} & \\\\\n\\sqrt{q} & r & \\sqrt{pq} & \\\\ \n & \\sqrt{pq}& r & \\sqrt{pq} & \\\\ \n & & \\ddots & \\ddots & \\ddots \\\\ \n& & & \\sqrt{pq} & r &\\sqrt{pq} \\\\ \n& & & & \\sqrt{pq} & r &\\sqrt{p} \\\\ \n& & & & & \\sqrt{p} & 0 \\\\ \n\\end{bmatrix}.\n\\]\nFor example,\n\\begin{align*}\n&T \\bs{\\Psi}_0 =\\sqrt{q}\\, \\bs{\\Psi}_1\\,, \\\\\n&T \\bs{\\Psi}_1 =\\sqrt{pq}\\, \\bs{\\Psi}_2+ r \\bs{\\Psi}_1 + \\sqrt{q}\\,\\bs{\\Psi}_0\\,,\n\\quad \\text{etc.}\n\\end{align*}\n\n\\begin{lemma}\nEvery eigenvalue 
of $T$ is simple.\nMoreover, $\\mathrm{Spec}(T)\\subset [-1,1]$ and $1\\in \\mathrm{Spec}(T)$.\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nThat every eigenvalue of $T$ is simple follows from the general theory of Jacobi matrices\n(see e.g., \\cite{HO,Deift}).\nSince $T=\\Pi S\\!\\!\\restriction_{\\Gamma(N)}$ by \\eqref{4eqn:T and Pi}, the operator norm of $T$ is bounded by one.\nHence every eigenvalue of $T$ lies in $[-1,1]$.\nFinally, it is easily verified by expansion that $\\det(T-I)=0$.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\begin{lemma}\\label{3lem:3.6}\n{\\upshape (1)} If $r>0$, there exists no non-zero $v\\in\\Gamma(N)$ such that $Sv=-v$.\n\n{\\upshape (2)} If $r=0$, there exists a non-zero $v\\in\\Gamma(N)$ such that $Sv=-v$.\nMoreover, such a non-zero vector $v$ is determined uniquely up to a constant factor.\n\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nEvery $v\\in \\Gamma(N)$ is of the form:\n\\begin{align*}\nv\n&=\\sum_{j=0}^N \\gamma_j \\bs{\\Psi}_j \\\\\n&=\\gamma_0 \\bs{\\psi}_0^+\n +\\sum_{j=1}^{N-1} \\gamma_j (\\sqrt{p}\\,\\bs{\\psi}_j^+ +\\sqrt{r}\\, \\bs{\\psi}_j^\\circ + \\sqrt{q}\\, \\bs{\\psi}_j^-)\n +\\gamma_N \\bs{\\psi}_N^-,\n\\end{align*}\nwhere $\\gamma_0,\\dots,\\gamma_N$ are constant numbers.\nThen the equation $Sv=-v$ is equivalent to the following system for these constants,\nwhich is obtained by direct calculation:\n\\begin{align}\n&\\gamma_1=-\\frac{1}{\\sqrt{q}}\\,\\gamma_0\\,,\n\\quad\n\\gamma_N=-\\sqrt{p}\\,\\gamma_{N-1}\\,, \n\\label{4eqn:in proof (000)} \\\\\n&\\gamma_j=-\\sqrt{\\frac{p}{q}}\\,\\gamma_{j-1}\\,,\n\\quad 2\\le j\\le N-1,\n\\label{4eqn:in proof (001)} \\\\\n&\\gamma_j\\sqrt{r}=-\\gamma_j \\sqrt{r}\\,,\n\\quad 1\\le j\\le N-1.\n\\label{4eqn:in proof (002)} \n\\end{align}\nIf $r>0$, it follows from \\eqref{4eqn:in proof (002)} that $\\gamma_1=\\dots=\\gamma_{N-1}=0$.\nThen in view of \\eqref{4eqn:in proof (000)} we also have $\\gamma_0=\\gamma_N=0$,\nwhich implies $v=0$.\nIf $r=0$, the recurrence relations 
\\eqref{4eqn:in proof (000)} and \\eqref{4eqn:in proof (001)} determine\nthe sequence $\\gamma_0,\\gamma_1,\\dots,\\gamma_N$ uniquely by the initial value $\\gamma_0$.\nHence $\\dim\\{v\\in\\Gamma(N)\\,;\\, Sv=-v\\}=1$ as desired.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\begin{lemma}\\label{3lem:3.7}\nIf $Tv=\\pm v$ for $v\\in\\Gamma(N)$, then $Sv=Uv=\\pm v$.\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nIt is sufficient to consider the case $v\\neq0$.\nSuppose that $Tv=\\pm v$ for $v\\in\\Gamma(N)$.\nFrom $T=\\Pi S\\!\\!\\restriction_{\\Gamma(N)}$ and $\\Pi v=v$ we obtain $\\Pi(Sv\\mp v)=0$.\nHence $\\langle Sv \\mp v,v\\rangle=0$, which implies that\n\\[\n\\langle Sv,v \\rangle\n=\\pm \\langle v,v \\rangle.\n\\]\nSince $S$ is unitary, the above relation implies equality in the Schwarz inequality,\nso that $Sv=\\alpha v$ for some constant $\\alpha\\in\\mathbb{C}$.\nIt follows by applying $\\Pi$ that $\\alpha=\\pm1$ and $Sv=\\pm v$.\nFinally, since $U=SC$ by definition and $C$ acts on $\\Gamma(N)$ as the identity,\nwe have $Uv=SCv=Sv$.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\begin{lemma}\nWe have $-1 \\not\\in\\mathrm{Spec}(T)$ for $r>0$,\nand $-1 \\in \\mathrm{Spec}(T)$ for $r=0$.\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nSuppose that $r>0$ and $Tv=-v$ for $v\\in\\Gamma(N)$.\nWe see from Lemma \\ref{3lem:3.7} that $Sv=-v$;\nthen, applying Lemma \\ref{3lem:3.6}, we conclude that $v=0$.\nThis means that $-1$ is not an eigenvalue of $T$.\n\nWe next suppose that $r=0$. 
\nBy Lemma \\ref{3lem:3.6} there exists a non-zero vector $v\\in\\Gamma(N)$ such that $Sv=-v$.\nThen $Tv=\\Pi Sv=-v$, which means that $-1$ is an eigenvalue of $T$.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\noindent Thus, the eigenvalues of $T$ are arranged in such a way that\n\\begin{gather}\n\\lambda_0=1=\\cos\\theta_0,\n\\quad\n\\lambda_1=\\cos\\theta_1\\,,\n\\quad\n\\lambda_2=\\cos\\theta_2\\,,\n\\quad \\dots,\n\\quad\n\\lambda_N=\\cos\\theta_N\\,, \n\\label{3eqn:arrangement of lambda} \\\\\n0=\\theta_0<\\theta_1<\\theta_2<\\dots<\\theta_N\\le \\pi,\n\\nonumber\n\\end{gather}\nwhere $\\theta_N<\\pi$ for $r>0$ and $\\theta_N=\\pi$ for $r=0$.\nFor each eigenvalue $\\lambda_j$ we take a normalized eigenvector $\\bs{\\Omega}_j\\in\\Gamma(N)$,\ni.e.,\n\\[\nT\\bs{\\Omega}_j=\\lambda_j \\bs{\\Omega}_j\\,,\n\\quad\n\\|\\bs{\\Omega}_j\\|=1.\n\\]\nThen we have the orthogonal decomposition of $\\Gamma(N)$ in two ways:\n\\[\n\\Gamma(N)\n=\\sum_{j=0}^N \\oplus \\mathbb{C}\\bs{\\Psi}_j\n=\\sum_{j=0}^N \\oplus \\mathbb{C}\\bs{\\Omega}_j.\n\\]\nWe next study the subspace\n\\begin{equation}\\label{4eqn:def of L(N)}\n\\mathcal{L}(N)=\\Gamma(N)+S\\Gamma(N),\n\\end{equation}\nwhich is invariant under the actions of $S$ and $U$.\nIn fact, for $\\bs{\\phi}, \\bs{\\psi}\\in \\Gamma(N)$ we have\n\\begin{align*}\nU(\\bs{\\phi}+S\\bs{\\psi})\n&=SC\\bs{\\phi}+SCS\\bs{\\psi}\n=S\\bs{\\phi}+S(2\\Pi-I)S\\bs{\\psi} \\\\\n&=S\\bs{\\phi}+2S\\Pi S\\bs{\\psi}-S^2\\bs{\\psi}\n=S(\\bs{\\phi}+2\\Pi S\\bs{\\psi})-\\bs{\\psi},\n\\end{align*}\nwhich shows that $\\mathcal{L}(N)=\\Gamma(N)+S\\Gamma(N)$ is invariant under $U$.\n\n\\begin{lemma}\\label{3lem:orthogonality}\n{\\upshape (1)} If $r>0$, then the vectors $\\bs{\\Omega}_0,\\bs{\\Omega}_1,\\dots, \\bs{\\Omega}_N,\nS\\bs{\\Omega}_1,\\dots, S\\bs{\\Omega}_N$ are linearly independent.\nMoreover,\n\\begin{equation}\\label{3eqn:inner product 3.9}\n\\langle \\bs{\\Omega}_j, S\\bs{\\Omega}_k\\rangle=\\lambda_k\\delta_{jk}\\,,\n\\quad 0\\le 
j\\le N, \n\\quad 1\\le k\\le N.\n\\end{equation}\n\n{\\upshape (2)} If $r=0$, then $S\\bs{\\Omega}_N=-\\bs{\\Omega}_N$ and \n$\\bs{\\Omega}_0,\\bs{\\Omega}_1,\\dots, \\bs{\\Omega}_N$,\n$S\\bs{\\Omega}_1,\\dots, S\\bs{\\Omega}_{N-1}$ are linearly independent.\nMoreover, \\eqref{3eqn:inner product 3.9} remains valid where\n$0\\le j\\le N$ and $1\\le k\\le N-1$.\n\\end{lemma}\n\n\\noindent{\\it Proof.}\n(1) Suppose that\n\\begin{equation}\\label{3eqn:in proof 3.9 (1)}\n\\alpha_0 \\bs{\\Omega}_0 \n+\\sum_{j=1}^N \\alpha_j \\bs{\\Omega}_j\n+\\sum_{j=1}^N \\beta_j S\\bs{\\Omega}_j=0.\n\\end{equation}\nKeeping $\\Pi S=T$ in mind, we apply $\\Pi$ to both sides to obtain\n\\begin{equation}\\label{3eqn:in proof 3.9 (2)}\n\\alpha_0 \\bs{\\Omega}_0 +\\sum_{j=1}^N (\\alpha_j+\\beta_j\\lambda_j) \\bs{\\Omega}_j=0.\n\\end{equation}\nSimilarly, applying $\\Pi S$ to both sides of \\eqref{3eqn:in proof 3.9 (1)}, we obtain\n\\begin{equation}\\label{3eqn:in proof 3.9 (3)}\n\\alpha_0 \\bs{\\Omega}_0 +\\sum_{j=1}^N (\\alpha_j\\lambda_j+\\beta_j) \\bs{\\Omega}_j=0,\n\\end{equation}\nwhere $S^2=I$ and $S\\bs{\\Omega}_0=\\bs{\\Omega}_0$ from Lemma \\ref{3lem:3.7} are taken into account.\nIt then follows from \\eqref{3eqn:in proof 3.9 (2)} and \\eqref{3eqn:in proof 3.9 (3)} that\n\\[\n\\alpha_0=0, \\quad \\alpha_j+\\beta_j\\lambda_j=\\alpha_j\\lambda_j+\\beta_j=0,\n\\quad 1\\le j\\le N.\n\\]\nSince $\\lambda_j\\neq \\pm1$ for $1\\le j\\le N$, we see that $\\alpha_j=\\beta_j=0$ for all $j$.\nThe inner product \\eqref{3eqn:inner product 3.9} is computed as follows:\n\\[\n\\langle \\bs{\\Omega}_j, S\\bs{\\Omega}_k\\rangle\n=\\langle \\bs{\\Omega}_j, \\Pi S\\bs{\\Omega}_k\\rangle\n =\\langle \\bs{\\Omega}_j, T\\bs{\\Omega}_k\\rangle\n=\\lambda_k \\langle \\bs{\\Omega}_j, \\bs{\\Omega}_k\\rangle\n=\\lambda_k\\delta_{jk}\\,.\n\\]\n\n(2) is proved similarly by\nusing Lemma \\ref{3lem:3.7} and $\\lambda_N=-1$. 
\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\begin{lemma}\\label{4lem:U on omegas}\n{\\upshape (1)} If $r>0$, we have\n\\begin{align*}\n&U \\bs{\\Omega}_0=\\bs{\\Omega}_0, \\\\\n&U \\bs{\\Omega}_j=S\\bs{\\Omega}_j, \\quad\nU S\\bs{\\Omega}_j=-\\bs{\\Omega}_j+2\\lambda_j S\\bs{\\Omega}_j, \n\\quad 1\\le j\\le N.\n\\end{align*}\n\n{\\upshape (2)} If $r=0$, the above relations hold for $1\\le j\\le N-1$; moreover, $U \\bs{\\Omega}_N=-\\bs{\\Omega}_N$.\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nSince the proofs are similar we prove only (1).\nWe first observe that\n\\[\nU\\bs{\\Omega}_j\n=SC\\bs{\\Omega}_j\n=S(2\\Pi-I)\\bs{\\Omega}_j\n=2S\\Pi\\bs{\\Omega}_j -S\\bs{\\Omega}_j\n=S\\bs{\\Omega}_j\\,,\n\\quad 0\\le j\\le N.\n\\]\nFor $j=0$ we have $T\\bs{\\Omega}_0=\\bs{\\Omega}_0$ so that \n$S\\bs{\\Omega}_0=\\bs{\\Omega}_0$ by Lemma \\ref{3lem:3.7}.\nHence\n\\[\nU\\bs{\\Omega}_0=SC\\bs{\\Omega}_0=\\bs{\\Omega}_0\\,.\n\\]\nWe next calculate $US\\bs{\\Omega}_j$ for $1\\le j\\le N$. \nUsing $C=2\\Pi-I$ and $S^2=I$ we obtain\n\\begin{align*}\nUS\\bs{\\Omega}_j\n&=SCS\\bs{\\Omega}_j\n=S(2\\Pi-I)S\\bs{\\Omega}_j \\\\\n&=2ST\\bs{\\Omega}_j-\\bs{\\Omega}_j\n=2\\lambda_j S\\bs{\\Omega}_j-\\bs{\\Omega}_j\\,.\n\\end{align*}\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\noindent Thus, we obtain the orthogonal decomposition of $\\mathcal{L}(N)$ defined\nin \\eqref{4eqn:def of L(N)}:\n\\begin{alignat}{2}\n\\mathcal{L}(N)\n&=\\mathbb{C}\\bs{\\Omega}_0 \\oplus\n \\sum_{j=1}^N \\oplus (\\mathbb{C}\\bs{\\Omega}_j+\\mathbb{C}S\\bs{\\Omega}_j),\n&\\qquad &r>0, \n\\label{4eqn:decomposition of L(N)}\\\\\n\\mathcal{L}(N)\n&=\\mathbb{C}\\bs{\\Omega}_0 \\oplus\n \\sum_{j=1}^{N-1} \\oplus (\\mathbb{C}\\bs{\\Omega}_j+\\mathbb{C}S\\bs{\\Omega}_j)\n \\oplus \\mathbb{C}\\bs{\\Omega}_N\\,,\n&\\quad &r=0,\n\\label{4eqn:decomposition of L(N)(r=0)}\n\\end{alignat}\nwhere each factor is invariant under the action of $U$.\n\n\\begin{theorem}\\label{3thmspectra of U}\n{\\upshape (1)} If $r>0$, the eigenvalues of $U$ 
are\n\\[\n1, \n\\quad e^{\\pm i\\theta_j} \\quad (1\\le j\\le N),\n\\quad -1,\n\\]\nwhere $0<\\theta_1<\\theta_2<\\dots<\\theta_N<\\pi$\nare obtained in \\eqref{3eqn:arrangement of lambda}.\nAll the eigenvalues except $-1$ are multiplicity free\nand the multiplicity of the eigenvalue $-1$ is $N-2$.\n\n{\\upshape (2)} If $r=0$, the eigenvalues of $U$ are\n\\[\n1, \n\\quad e^{\\pm i\\theta_j} \\quad (1\\le j\\le N-1),\n\\quad -1,\n\\]\nwhere $0<\\theta_1<\\theta_2<\\dots<\\theta_{N-1}<\\pi$.\nAll the eigenvalues except $-1$ are multiplicity free\nand the multiplicity of the eigenvalue $-1$ is $N$.\n\\end{theorem}\n\n\\noindent{\\it Proof.}\nSince the proofs are similar we prove only (1).\nThe orthogonal decomposition \\eqref{4eqn:decomposition of L(N)} gives rise to \na blockwise diagonalization of $U$.\nIt is obvious from Lemma \\ref{4lem:U on omegas} \nthat $U$ restricted to $\\mathbb{C}\\bs{\\Omega}_0$ is the identity operator.\nNext suppose that $1\\le j\\le N$.\nWe see from Lemma \\ref{4lem:U on omegas} that $U$\nrestricted to $\\mathbb{C}\\bs{\\Omega}_j+\\mathbb{C}S\\bs{\\Omega}_j$ \nadmits a matrix representation:\n\\[\n\\begin{bmatrix}\n0 & -1 \\\\\n1 & 2\\lambda_j\n\\end{bmatrix},\n\\]\nof which the eigenvalues are \n\\[\n\\lambda_j\\pm i\\sqrt{1-\\lambda_j^2}=e^{\\pm i\\theta_j}.\n\\]\nDenoting by $\\mathcal{M}$ the orthogonal complement of $\\mathcal{L}(N)$ in $\\mathcal{H}(N)$,\nwe have\n\\begin{align*}\n\\mathrm{Tr\\,}(U)\n&=1+\\sum_{j=1}^N2\\lambda_j + \\mathrm{Tr\\,}(U\\!\\!\\restriction_{\\mathcal{M}})\n=1+2(\\mathrm{Tr\\,}(T)-1)+\\mathrm{Tr\\,}(U\\!\\!\\restriction_{\\mathcal{M}}) \\\\\n&=2\\mathrm{Tr\\,}(T)-1+\\mathrm{Tr\\,}(U\\!\\!\\restriction_{\\mathcal{M}})\n=2r(N-1)-1+\\mathrm{Tr\\,}(U\\!\\!\\restriction_{\\mathcal{M}}).\n\\end{align*}\nOn the other hand, $\\mathrm{Tr\\,}(U)=(2r-1)(N-1)$ by Lemma \\ref{3lem:Trace of U}.\nHence\n\\[\n\\mathrm{Tr\\,}(U\\!\\!\\restriction_{\\mathcal{M}})\n=(2r-1)(N-1)-(2r(N-1)-1)\n=-(N-2).\n\\]\nSince $\\dim 
\\mathcal{M}=(3N-1)-(2N+1)=N-2$,\nwe see that $U\\!\\!\\restriction_{\\mathcal{M}}=-I$.\nTherefore $\\mathcal{M}$ is the eigenspace of $U$ with eigenvalue $-1$\nso that the multiplicity of the eigenvalue $-1$ coincides with $\\dim \\mathcal{M}=N-2$.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\begin{theorem}\n{\\upshape (1)} Let $r>0$ and set \n\\[\n\\bs{\\Omega}_j^{\\pm}=\\frac{1}{\\sqrt2\\, \\sin\\theta_j}\\,(\\bs{\\Omega}_j-e^{\\pm i\\theta_j}S\\bs{\\Omega}_j),\n\\quad 1\\le j\\le N.\n\\]\nThen $\\bs{\\Omega}_j^{\\pm}\\in (\\mathbb{C}\\bs{\\Omega}_j+\\mathbb{C}S\\bs{\\Omega}_j)$, \n$\\|\\bs{\\Omega}_j^{\\pm}\\|=1$ and\n\\[\nU\\bs{\\Omega}_j^{\\pm}=e^{\\pm i\\theta_j} \\bs{\\Omega}_j^{\\pm}.\n\\]\nIn other words,\n$\\bs{\\Omega}_j^{\\pm}$ is a normalized eigenvector of $U$ with eigenvalue $e^{\\pm i\\theta_j}$.\n\n{\\upshape (2)} If $r=0$, the above assertion remains valid for $1\\le j\\le N-1$.\n\\end{theorem}\n\n\\noindent{\\it Proof.}\nThat $\\|\\bs{\\Omega}_j^{\\pm}\\|=1$ is verified by using Lemma \\ref{3lem:orthogonality}.\nThat $U\\bs{\\Omega}_j^{\\pm}=e^{\\pm i\\theta_j} \\bs{\\Omega}_j^{\\pm}$ follows \nfrom Lemma \\ref{4lem:U on omegas}.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\n\\section{Proofs of main results}\n\\label{sec:Proofs of main results}\n\n\\subsection{Transition amplitudes of the $(p,q)$-quantum walk on $\\mathbb{Z}_+$}\n\nFor Theorem \\ref{mainthm:probability amplitude} we need to calculate\nthe transition amplitude:\n\\begin{equation}\\label{5eqn:goal amplitude}\n\\langle \\bs{\\psi}_0^+, U^n \\bs{\\psi}_0^+\\rangle\n=\\langle \\bs{\\Psi}_0, U^n \\bs{\\Psi}_0\\rangle\n\\end{equation}\nfor the $(p,q)$-quantum walk $U$ on $\\mathbb{Z}_+$.\nA key observation here is that \\eqref{5eqn:goal amplitude} is \ncalculated after cutoff.\nMore generally, if $\\bs{\\phi}, \\bs{\\psi}\\in\\mathcal{H}(\\mathbb{Z}_+)$ have\nfinite supports, \n\\begin{equation}\\label{5eqn:goal amplitude finite}\n\\langle \\bs{\\phi}, U^n 
\\bs{\\psi}\\rangle\n=\\langle \\bs{\\phi}, U_N^n \\bs{\\psi}\\rangle_{\\mathcal{L}(N)}\n\\end{equation}\nholds for all sufficiently large $N$.\nIn fact, if $\\mathrm{supp\\,}\\bs{\\phi}\\subset\\{0,1,\\dots,l\\}$ and\n$\\mathrm{supp\\,} \\bs{\\psi}\\subset\\{0,1,\\dots,m\\}$, then\n\\eqref{5eqn:goal amplitude finite} holds for $N>\\min\\{n+l,n+m\\}$.\nThe purpose of this subsection is to derive an integral formula for \\eqref{5eqn:goal amplitude finite}.\n\nNow let $N\\ge2$ be fixed and start with \nthe $(p,q)$-quantum walk $U$ on the path of length $N$.\n\n\\begin{lemma}\\label{4lem:4.33}\nFor $r>0$ it holds that\n\\begin{align}\n\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j^{\\pm} \\rangle \n&= \\frac{\\mp ie^{\\pm i\\theta_j}}{\\sqrt{2}}\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j \\rangle, \n\\qquad 1\\le j\\le N,\n\\label{4eqn:in 4.33 (1)}\\\\\n\\langle S\\bs{\\Psi}_l,\\bs{\\Omega}_0 \\rangle\n&=\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_0 \\rangle, \n\\label{4eqn:in 4.33 (3)}\\\\\n\\langle S\\bs{\\Psi}_l,\\bs{\\Omega}_j^{\\pm} \\rangle \n&= \\frac{\\mp i}{\\sqrt{2}}\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j \\rangle,\n\\qquad 1\\le j\\le N.\n\\label{4eqn:in 4.33 (2)}\n\\end{align}\nFor $r=0$ the above relations remain valid for $1\\le j\\le N-1$ and\n\\begin{equation}\\label{4eqn:in 4.33 (6)}\n\\langle S\\bs{\\Psi}_l,\\bs{\\Omega}_N \\rangle\n=-\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_N \\rangle.
\n\\end{equation}\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nBy definition we have\n\\begin{equation}\\label{4eqn:in proof 4.2 (1)}\n\\langle\\bs{\\Psi}_l,\\bs{\\Omega}_j^{\\pm}\\rangle\n=\\frac{1}{\\sqrt2\\,\\sin\\theta_j}\\,\\left(\n \\langle\\bs{\\Psi}_l,\\bs{\\Omega}_j\\rangle\n -e^{\\pm i\\theta_j}\\langle\\bs{\\Psi}_l,S\\bs{\\Omega}_j\\rangle\n \\right).\n\\end{equation}\nSince $\\Pi\\bs{\\Psi}_l=\\bs{\\Psi}_l$ we have\n\\[\n\\langle\\bs{\\Psi}_l,S\\bs{\\Omega}_j\\rangle\n= \\langle\\bs{\\Psi}_l,\\Pi S\\bs{\\Omega}_j\\rangle\n= \\langle\\bs{\\Psi}_l,T\\bs{\\Omega}_j\\rangle\n= \\lambda_j \\langle\\bs{\\Psi}_l,\\bs{\\Omega}_j\\rangle.\n\\]\nThen \\eqref{4eqn:in proof 4.2 (1)} becomes\n\\begin{equation}\\label{4eqn:in proof 4.2 (2)}\n\\langle\\bs{\\Psi}_l,\\bs{\\Omega}_j^{\\pm}\\rangle\n=\\frac{1}{\\sqrt2\\,\\sin\\theta_j}\\,(1-e^{\\pm i\\theta_j}\\lambda_j) \\langle\\bs{\\Psi}_l,\\bs{\\Omega}_j\\rangle.\n\\end{equation}\nWe see easily from $\\cos\\theta_j=\\lambda_j$ that\n\\[\n1-e^{\\pm i\\theta_j}\\lambda_j=\\mp i (\\sin \\theta_j) e^{\\pm i\\theta_j}.\n\\]\nInserting the above relation into \\eqref{4eqn:in proof 4.2 (2)}, we obtain \\eqref{4eqn:in 4.33 (1)}. \n\nWe next show \\eqref{4eqn:in 4.33 (2)}.\nIn view of the definition of $\\bs{\\Omega}_j^{\\pm}$ and using $S^2=I$ we have\n\\begin{align*}\n\\langle S\\bs{\\Psi}_l,\\bs{\\Omega}_j^{\\pm}\\rangle\n&=\\frac{1}{\\sqrt2\\,\\sin\\theta_j}\\,\\left(\n \\langle S\\bs{\\Psi}_l,\\bs{\\Omega}_j\\rangle\n -e^{\\pm i\\theta_j}\\langle S\\bs{\\Psi}_l,S\\bs{\\Omega}_j\\rangle\n \\right) \\\\\n&=\\frac{1}{\\sqrt2\\,\\sin\\theta_j}\\,\\left(\n \\langle \\bs{\\Psi}_l, S\\bs{\\Omega}_j\\rangle\n -e^{\\pm i\\theta_j}\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j\\rangle\n \\right). 
\n\\end{align*}\nThen, applying a similar consideration as above, we obtain\n\\eqref{4eqn:in 4.33 (2)} with no difficulty.\n\nFinally, since $U\\bs{\\Omega}_0=S\\bs{\\Omega}_0=\\bs{\\Omega}_0$, we have \n\\[\n\\langle S\\bs{\\Psi}_l,\\bs{\\Omega}_0 \\rangle\n=\\langle \\bs{\\Psi}_l, S\\bs{\\Omega}_0 \\rangle\n=\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_0 \\rangle, \n\\]\nwhich shows \\eqref{4eqn:in 4.33 (3)}.\nFor \\eqref{4eqn:in 4.33 (6)} we need only to note that $S\\bs{\\Omega}_N=-\\bs{\\Omega}_N$.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\n\\begin{lemma}\\label{5lem:four inner products}\nFor $0\\le l,m\\le N$ and $n=0,\\pm1,\\pm2,\\dots$ it holds that\n\\begin{align}\n\\langle \\bs{\\Psi}_l,U^n\\bs{\\Psi}_m \\rangle\n&=\\sum_{j=0}^N (\\cos n\\theta_j)\n \\langle \\bs{\\Psi}_l, \\bs{\\Omega}_j\\rangle\n \\langle \\bs{\\Omega}_j, \\bs{\\Psi}_m\\rangle, \n\\label{4eqn:5-2-01} \\\\\n\\langle S\\bs{\\Psi}_l,U^n\\bs{\\Psi}_m \\rangle\n&=\\langle \\bs{\\Psi}_l,U^{n-1}\\bs{\\Psi}_m \\rangle, \n\\label{4eqn:5-2-02} \\\\\n\\langle \\bs{\\Psi}_l,U^n S\\bs{\\Psi}_m \\rangle\n&=\\langle \\bs{\\Psi}_l,U^{n+1}\\bs{\\Psi}_m \\rangle, \n\\label{4eqn:5-2-03} \\\\\n\\langle S\\bs{\\Psi}_l, U^n S\\bs{\\Psi}_m \\rangle\n&=\\langle \\bs{\\Psi}_l,U^{n}\\bs{\\Psi}_m \\rangle.\n\\label{4eqn:5-2-04}\n\\end{align}\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nBecause the proofs are similar, we prove the assertions under $r>0$. 
\nSince $\\bs{\\Psi}_l$ and $U^n\\bs{\\Psi}_m$ are vectors in $\\mathcal{L}(N)$, \nthe left-hand side of \\eqref{4eqn:5-2-01}\nis expanded in terms of the orthonormal basis\n$\\bs{\\Omega}_0, \\bs{\\Omega}_1^\\pm,\\dots, \\bs{\\Omega}_N^\\pm$ as follows:\n\\begin{align}\n\\langle \\bs{\\Psi}_l,U^n\\bs{\\Psi}_m \\rangle\n&=\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_0\\rangle\\langle \\bs{\\Omega}_0, U^n\\bs{\\Psi}_m \\rangle\n +\\sum_{j=1}^N \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j^+\\rangle\\langle \\bs{\\Omega}_j^+, U^n\\bs{\\Psi}_m \\rangle \n\\nonumber \\\\\n&\\qquad +\\sum_{j=1}^N \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j^-\\rangle\\langle \\bs{\\Omega}_j^-, U^n\\bs{\\Psi}_m \\rangle\n\\label{5eqn: in proof 5.2}\n\\end{align}\nThe first term becomes \n\\begin{equation}\\label{5eqn:in proof 5.2 101}\n\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_0\\rangle\\langle \\bs{\\Omega}_0, U^n\\bs{\\Psi}_m \\rangle\n=\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_0\\rangle\\langle U^{-n}\\bs{\\Omega}_0, \\bs{\\Psi}_m \\rangle\n=\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_0\\rangle\\langle \\bs{\\Omega}_0, \\bs{\\Psi}_m \\rangle.\n\\end{equation}\nFor the second term of \\eqref{5eqn: in proof 5.2} we see that\n\\begin{align*}\n\\sum_{j=1}^N \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j^+\\rangle\\langle \\bs{\\Omega}_j^+, U^n\\bs{\\Psi}_m \\rangle\n&=\\sum_{j=1}^N \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j^+\\rangle\n \\langle U^{-n}\\bs{\\Omega}_j^+, \\bs{\\Psi}_m \\rangle \\\\\n&=\\sum_{j=1}^N \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j^+\\rangle\n \\langle e^{-in\\theta_j}\\bs{\\Omega}_j^+, \\bs{\\Psi}_m \\rangle \\\\\n&=\\sum_{j=1}^N e^{in\\theta_j}\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j^+\\rangle\n \\langle \\bs{\\Omega}_j^+, \\bs{\\Psi}_m \\rangle.\n\\end{align*}\nThen we apply Lemma \\ref{4lem:4.33} to have\n\\begin{align}\n\\sum_{j=1}^N \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j^+\\rangle\\langle \\bs{\\Omega}_j^+, U^n\\bs{\\Psi}_m \\rangle\n&=\\sum_{j=1}^N e^{in\\theta_j}\n \\frac{- ie^{i\\theta_j}}{\\sqrt{2}}\\langle 
\\bs{\\Psi}_l,\\bs{\\Omega}_j \\rangle\n \\frac{ie^{-i\\theta_j}}{\\sqrt{2}}\\langle\\bs{\\Omega}_j, \\bs{\\Psi}_m \\rangle \n\\nonumber \\\\\n&=\\sum_{j=1}^N \\frac{e^{in\\theta_j}}{2}\\,\n \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j \\rangle\n \\langle\\bs{\\Omega}_j, \\bs{\\Psi}_m \\rangle.\n\\label{5eqn:in proof 5.2 102}\n\\end{align}\nApplying a similar argument to the third term of \\eqref{5eqn: in proof 5.2}, we obtain \n\\begin{equation}\\label{5eqn:in proof 5.2 103}\n\\sum_{j=1}^N \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j^-\\rangle\\langle \\bs{\\Omega}_j^-, U^n\\bs{\\Psi}_m \\rangle\n=\\sum_{j=1}^N \\frac{e^{-in\\theta_j}}{2}\\,\n \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j \\rangle\n \\langle\\bs{\\Omega}_j, \\bs{\\Psi}_m \\rangle.\n\\end{equation}\nSumming up \\eqref{5eqn:in proof 5.2 101}--\\eqref{5eqn:in proof 5.2 103}, we see that\n\\eqref{5eqn: in proof 5.2} becomes\n\\begin{align*}\n\\langle \\bs{\\Psi}_l,U^n\\bs{\\Psi}_m \\rangle\n&=\\langle \\bs{\\Psi}_l,\\bs{\\Omega}_0\\rangle\\langle \\bs{\\Omega}_0, \\bs{\\Psi}_m \\rangle\n+\\sum_{j=1}^N \\frac{e^{in\\theta_j}+e^{-in\\theta_j}}{2}\\,\n \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j \\rangle\n \\langle\\bs{\\Omega}_j, \\bs{\\Psi}_m \\rangle \\\\\n&=\\sum_{j=0}^N (\\cos n\\theta_j)\n \\langle \\bs{\\Psi}_l,\\bs{\\Omega}_j \\rangle\n \\langle\\bs{\\Omega}_j, \\bs{\\Psi}_m \\rangle,\n\\end{align*}\nwhere $\\theta_0=0$ is taken into account.\nThus, \\eqref{4eqn:5-2-01} is proved.\n\nNoting that $U=SC$ and $C$ acts on $\\Gamma(N)$ as the identity, we see that\n\\begin{align*}\n\\langle S\\bs{\\Psi}_l,U^n\\bs{\\Psi}_m \\rangle\n&=\\langle S\\bs{\\Psi}_l, SCU^{n-1}\\bs{\\Psi}_m \\rangle \\\\\n&=\\langle \\bs{\\Psi}_l, CU^{n-1}\\bs{\\Psi}_m \\rangle \\\\\n&=\\langle C\\bs{\\Psi}_l, U^{n-1}\\bs{\\Psi}_m \\rangle \\\\\n&=\\langle \\bs{\\Psi}_l, U^{n-1}\\bs{\\Psi}_m \\rangle,\n\\end{align*}\nwhich proves \\eqref{4eqn:5-2-02}.\nWe next observe that\n\\[\n\\langle \\bs{\\Psi}_l,U^n S\\bs{\\Psi}_m \\rangle\n=\\langle 
\\bs{\\Psi}_l,U^n SC\\bs{\\Psi}_m \\rangle \n=\\langle \\bs{\\Psi}_l,U^{n+1}\\bs{\\Psi}_m \\rangle,\n\\]\nwhich proves \\eqref{4eqn:5-2-03}.\nFinally, we see that\n\\[\n\\langle S\\bs{\\Psi}_l, U^n S\\bs{\\Psi}_m \\rangle\n=\\langle SC\\bs{\\Psi}_l, U^n SC\\bs{\\Psi}_m \\rangle\n=\\langle U\\bs{\\Psi}_l, U^{n+1}\\bs{\\Psi}_m \\rangle\n=\\langle \\bs{\\Psi}_l,U^{n}\\bs{\\Psi}_m \\rangle,\n\\]\nwhich shows \\eqref{4eqn:5-2-04}.\n\\begin{flushright}$\\square$\\end{flushright}\n\nLet $\\{P_n\\,;\\,n=0,1,2,\\dots\\}$ be the orthogonal polynomials with respect to the free Meixner law with\nparameters $q,pq,r$, i.e.,\nthe polynomials defined by the Jacobi parameters\n\\[\n\\omega_1=q,\n\\,\\,\\,\n\\omega_2=\\omega_3=\\dots=pq;\n\\quad\n\\alpha_1=0,\n\\,\\,\\,\n\\alpha_2=\\alpha_3=\\dots=r,\n\\]\nsee also Appendix.\nWe set\n\\begin{align}\np_0(x)\n&=P_0(x)=1, \n\\nonumber \\\\\np_j(x)\n&=\\frac{P_j(x)}{\\sqrt{\\mathstrut\\omega_1\\dots \\omega_j}}\n =\\frac{P_j(x)}{\\sqrt{q(pq)^{j-1}}}\\,,\n\\quad j=1,2,\\dots,N.\n\\label{5eqn:def of p_j}\n\\end{align}\nIt is shown that $\\{p_j\\,;\\, 0\\le j\\le N\\}$ satisfies the recurrence relations\ndetermined by $T_N$.\nWe define \n\\[\n\\mu_N=\\sum_{j=0}^N \\rho(j)\\delta_{\\lambda_j}\\,,\n\\qquad\n\\rho(j)=\\rho_N(j)=\\bigg(\\sum_{n=0}^N p_n(\\lambda_j)^2\\bigg)^{-1}.\n\\]\nThe following results are known by general theory of Jacobi matrices and orthogonal polynomials. 
See \\cite{Deift,HO}.\n\n\\begin{lemma}\n$\\mu_N$ is a probability distribution uniquely determined by the Jacobi matrix $T_N$.\nMoreover, $\\{p_j\\,;\\,j=0,1,2,\\dots,N\\}$ is the sequence of orthogonal polynomials with respect to $\\mu_N$, normalized \nso as to have norm one, i.e.,\n\\[\n\\int_{-1}^1 p_j(x)p_k(x)\\mu_N(dx)=\\delta_{jk}\\,.\n\\]\n\\end{lemma}\n\n\\begin{lemma}\\label{5lem:explicit form of Omega_j}\nFor $j=0,1,\\dots,N$ \nlet $\\bs{\\Omega}_j$ be the normalized eigenvector of $T_N$ with eigenvalue $\\lambda_j$ such that\n$\\langle \\bs{\\Omega}_j, \\bs{\\Psi}_0\\rangle>0$.\nThen, \n\\[\n\\bs{\\Omega}_j=\\sqrt{\\rho_N(j)}\\sum_{n=0}^N p_n(\\lambda_j)\\bs{\\Psi}_n\\,,\n\\]\nor equivalently,\n\\[\n\\langle \\bs{\\Omega}_j, \\bs{\\Psi}_n\\rangle\n=\\sqrt{\\rho_N(j)}\\, p_n(\\lambda_j).\n\\]\n\\end{lemma}\n\nThe next result is the key to removing the cutoff.\n\n\\begin{lemma}\\label{4lem:limit free Meixner law}\nThe sequence of probability distributions $\\mu_N$ converges weakly to the\nfree Meixner law with parameters $q,pq,r$.\nIn particular, for any continuous function $f$ on $[-1,1]$ we have\n\\[\n\\lim_{N\\rightarrow\\infty} \\int_{-1}^1 f(x)\\mu_N(dx)= \\int_{-1}^1 f(x)\\mu(dx).\n\\]\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nWe first note that\n\\begin{equation}\\label{eqn:convergence in moments}\n\\lim_{N\\rightarrow\\infty} \\int_{-\\infty}^{+\\infty} x^m\\mu_N(dx)= \\int_{-1}^1 x^m\\mu(dx),\n\\qquad m=0,1,2,\\dots.\n\\end{equation}\nIn fact, the $m$-th moment of $\\mu_N$ is a polynomial in the\nfirst $m$ terms of the Jacobi coefficients of $\\mu_N$,\nwhich are identical with the first $m$ terms of the Jacobi coefficients of the free\nMeixner law $\\mu$ if $m<N$, so that \\eqref{eqn:convergence in moments} follows.\nSince $\\mu$ is supported by $[-1,1]$ and is uniquely determined by its moments, the convergence of moments implies the weak convergence.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\begin{theorem}\\label{5thm:integral representation of four amplitudes}\nFor $l,m\\ge0$ and $n=0,\\pm1,\\pm2,\\dots$ it holds that\n\\begin{equation}\\label{4eqn:5-2-11}\n\\langle \\bs{\\Psi}_l,U^n\\bs{\\Psi}_m \\rangle\n=\\int_{-1}^1 (\\cos n\\theta)\\, p_l(\\lambda)p_m(\\lambda) \\mu(d\\lambda),\n\\quad \\lambda=\\cos\\theta,\n\\end{equation}\nwhere $\\mu$ is the free Meixner law with parameters $q,pq,r$.\nSimilar representations hold for\n$\\langle S\\bs{\\Psi}_l,U^n\\bs{\\Psi}_m \\rangle$,\n$\\langle \\bs{\\Psi}_l,U^nS\\bs{\\Psi}_m \\rangle$ and\n$\\langle S\\bs{\\Psi}_l,U^nS\\bs{\\Psi}_m \\rangle$\nwith $\\cos n\\theta$ replaced by $\\cos (n-1)\\theta$, $\\cos (n+1)\\theta$ and $\\cos n\\theta$, respectively.\n\\end{theorem}\n\n\\noindent{\\it Proof.}\nNote that \\eqref{5eqn:goal amplitude finite} holds for $N>\\min\\{l+n, m+n\\}$.\nWe take such a sufficiently large $N$.\nBy Lemmas \\ref{5lem:four inner products} and \\ref{5lem:explicit form of Omega_j} we have\n\\begin{align}\n\\langle \\bs{\\Psi}_l,U^n\\bs{\\Psi}_m \\rangle\n&=\\sum_{j=0}^N (\\cos n\\theta_j)\n \\langle \\bs{\\Psi}_l, \\bs{\\Omega}_j\\rangle\n 
\\langle \\bs{\\Omega}_j, \\bs{\\Psi}_m\\rangle \n\\nonumber \\\\\n&=\\sum_{j=0}^N (\\cos n\\theta_j)\\, p_l(\\lambda_j) p_m(\\lambda_j) \\rho(j) \n\\nonumber \\\\\n&=\\int_{-1}^1 (\\cos n\\theta)\\, p_l(\\lambda)p_m(\\lambda) \\mu_N(d\\lambda),\n\\end{align}\nwhich holds for all sufficiently large $N$.\nThen, taking Lemma \\ref{4lem:limit free Meixner law} into account, we come to\n\\begin{align*}\n\\langle \\bs{\\Psi}_l,U^n\\bs{\\Psi}_m \\rangle\n&=\\lim_{N\\rightarrow\\infty}\\int_{-1}^1 (\\cos n\\theta)\\,p_l(\\lambda)p_m(\\lambda) \\mu_N(d\\lambda) \\\\\n&=\\int_{-1}^1 (\\cos n\\theta)\\, p_l(\\lambda)p_m(\\lambda) \\mu(d\\lambda).\n\\end{align*}\nThis completes the proof of \\eqref{4eqn:5-2-11}.\nThe rest is proved by combination of\nLemma \\ref{5lem:four inner products} and \\eqref{4eqn:5-2-11}.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\\begin{theorem}\\label{5thm:localization criteria}\nLet $U$ be the $(p,q)$-quantum walk on $\\mathbb{Z}_+$ with parameters satisfying\n\\begin{equation}\\label{5eqn:condition for pqr}\np+q+r=1,\n\\qquad p\\ge q>0,\n\\qquad r\\ge0.\n\\end{equation}\nThen it holds that\n\\[\n\\langle \\bs{\\Psi}_l, U^n \\bs{\\Psi}_0\\rangle\n\\sim\nw p_l(\\xi)\\cos n \\tilde\\theta,\n\\qquad\n\\text{as $n\\rightarrow\\infty$},\n\\]\nwhere\n\\[\nw=\\max\\left\\{\\frac{(1-p)^2-pq}{(1-p)(1-p+q)}\\,,0\\right\\},\n\\quad\n\\xi=-\\frac{q}{1-p}=\\cos\\tilde\\theta,\n\\quad\n0<\\tilde\\theta <\\pi.\n\\]\nTherefore,\n\\[\n\\lim_{N\\rightarrow\\infty}\\frac1N\\sum_{n=0}^{N-1}\n|\\langle \\bs{\\Psi}_0, U^n \\bs{\\Psi}_0\\rangle|^2\n=\\frac{w^2}{2}\\,.\n\\]\nIn particular, $U$ exhibits the initial point localization \nif and only if $w>0$, i.e., $(1-p)^2-pq>0$.\n\\end{theorem}\n\n\\noindent{\\it Proof.}\nBy \\eqref{4eqn:5-2-11} we have\n\\begin{equation}\\label{5eqn:in proof Theorem 5.7} \n\\langle \\bs{\\Psi}_l,U^n\\bs{\\Psi}_0 \\rangle\n=\\int_{-1}^1 (\\cos n\\theta)\\,p_l(\\lambda)\\,\\mu(d\\lambda),\n\\quad 
\\cos\\theta=\\lambda.\n\\end{equation}\nUnder the assumption \\eqref{5eqn:condition for pqr}\nthe free Meixner law with parameters $q,pq,r$ is of the form:\n\\[\n\\mu(dx)=\\rho(x)dx+w\\delta_\\xi \\,,\n\\]\nwhere $\\rho$ is a continuous function on $[r-2\\sqrt{pq}\\,, r+2\\sqrt{pq}\\,]\\subset [-1,1]$,\nwhose explicit form is deferred to the Appendix, and\n\\[\n\\xi=-\\frac{q}{1-p}\\,,\n\\quad\nw=\\max\\left\\{\\frac{(1-p)^2-pq}{(1-p)(1-p+q)}\\,,0\\right\\}.\n\\]\nSince $\\rho$ is an integrable function, the Riemann--Lebesgue lemma implies that\n\\[\n\\lim_{n\\rightarrow\\infty} \\int_{-1}^{1} (\\cos n\\theta)\\,p_l(\\lambda) \\rho(\\lambda) \\, d\\lambda=0.\n\\]\nHence in \\eqref{5eqn:in proof Theorem 5.7} only the contribution of the point mass\nremains in the limit, i.e.,\n\\[\n\\langle \\bs{\\Psi}_l, U^n \\bs{\\Psi}_0\\rangle\n\\sim\nw p_l(\\xi) \\cos n \\tilde\\theta,\n\\qquad\n\\text{as $n\\rightarrow\\infty$},\n\\]\nas desired.\nThe rest is straightforward.\n\\begin{flushright}$\\square$\\end{flushright}\n\n\n\\subsection{Proof of Theorem \\ref{mainthm:probability amplitude}}\n\nLet $U$ be the Grover walk on a spidernet $G=S(a,b,c)$ and consider\nthe initial state $\\bs{\\psi}^+_0$ defined by \\eqref{3eqn:initial state}.\nLet\n\\[\n\\Gamma(\\mathbb{Z}_+)\\subset\\mathcal{H}(\\mathbb{Z}_+)\\subset \\mathcal{H}(G)\n\\]\nbe the subspaces defined in Subsection \\ref{subsec:4.1}.\nThen ${\\mathcal{H}(\\mathbb{Z}_+)}$ is invariant under $U$ and\n$U\\!\\!\\restriction_{\\mathcal{H}(\\mathbb{Z}_+)}$ is the $(p,q)$-quantum walk on $\\mathbb{Z}_+$,\nwhere\n\\begin{equation}\\label{5eqn:pqr by abc}\np=\\frac{c}{b}\\,,\n\\qquad\nq=\\frac{1}{b}\\,,\n\\qquad\nr=\\frac{b-c-1}{b}\\,.\n\\end{equation}\nSince the initial state $\\bs{\\psi}^+_0=\\bs{\\Psi}_0$ belongs to $\\mathcal{H}(\\mathbb{Z}_+)$,\n$\\langle \\bs{\\psi}_0^+, U^n \\bs{\\psi}_0^+\\rangle$ is obtained from the\n$(p,q)$-quantum walk on $\\mathbb{Z}_+$.\nIn fact, by Theorem \\ref{5thm:integral representation of four 
amplitudes}\nwe have\n\\begin{equation}\\label{5eqn:main representation}\n\\langle \\bs{\\psi}_0^+, U^n \\bs{\\psi}_0^+\\rangle\n=\\int_{-1}^{1} (\\cos n\\theta) \\,\\mu(d\\lambda),\n\\quad\n\\lambda=\\cos\\theta,\n\\end{equation}\nwhere $\\mu$ is the free Meixner law with parameters $q,pq,r$.\nThis completes the proof of Theorem \\ref{mainthm:probability amplitude}.\n\n\n\n\\subsection{Proof of Theorem \\ref{3thm:localization for spidernet}}\n\nFor a spidernet $G=S(a,b,c)$ the parameters $p,q,r$ defined by\n\\eqref{5eqn:pqr by abc} satisfy the condition in\nTheorem \\ref{5thm:localization criteria}.\nSo it holds that\n\\[\n\\langle \\bs{\\Psi}_0, U^n \\bs{\\Psi}_0\\rangle\n\\sim\nw \\cos n \\tilde\\theta,\n\\qquad\n\\text{as $n\\rightarrow\\infty$},\n\\]\nwhere\n\\begin{align}\nw&=\\max\\left\\{\\frac{(1-p)^2-pq}{(1-p)(1-p+q)}\\,,0\\right\\},\n\\label{5eqn: theta and w} \\\\\n\\xi&=-\\frac{q}{1-p}=\\cos\\tilde\\theta,\n\\quad 0<\\tilde\\theta <\\pi.\n\\label{5eqn: c and tilde theta}\n\\end{align}\nFor the first half of Theorem \\ref{3thm:localization for spidernet} \nit is sufficient to apply the following obvious relations:\n\\[\n\\frac{(1-p)^2-pq}{(1-p)(1-p+q)}=\\frac{(b-c)^2-c}{(b-c)(b-c+1)}\\,,\n\\qquad\n-\\frac{q}{1-p}=-\\frac{1}{b-c}\\,.\n\\]\nFor the second half we need only to note that\n$(b-c)^2-c>0$ is equivalent to $b>c+\\sqrt{c}$ under the assumption\n\\eqref{3eqn:constraint abc} posed at the beginning.\n\n\\subsection{Proofs of Corollaries \\ref{3cor:k<10}--\\ref{tree}}\n\nThese follow immediately from Theorem \\ref{3thm:localization for spidernet}.\nWe need only to check the parameters.\nFor a spidernet $S(\\kappa,\\kappa+2,\\kappa-1)$ we have\n\\begin{align*}\n\\xi&=\\cos\\tilde\\theta=-\\frac{1}{b-c}=-\\frac{1}{3}\\,, \\\\\nw&=\\max\\left\\{\\frac{(b-c)^2-c}{(b-c)(b-c+1)}\\,,0\\right\\}\n=\\max\\left\\{\\frac{10-\\kappa}{12}\\,,0\\right\\}.\n\\end{align*}\nOn the other hand, for a spidernet $S(a,b,b-1)$ we have\n\\[\n(b-c)^2-c=-(b-2)\\le 0,\n\\qquad 
b\\ge2,\n\\]\nwhich implies $w=0$.\n\n\n\\subsection{Proof of Theorem \\ref{local}}\n\nIn a similar manner to the proof of Theorem \\ref{3thm:localization for spidernet}\nwe see that\n\\begin{equation}\\label{4eqn:5-2-110}\n\\langle \\bs{\\Psi}_l,U^n\\bs{\\Psi}_0 \\rangle\n\\sim w p_l(\\xi) \\cos n\\tilde\\theta,\n\\quad\\text{as $n\\rightarrow\\infty$},\n\\end{equation}\nwhere $w$, $\\xi$, $\\tilde\\theta$ are given by \\eqref{5eqn: theta and w} and \\eqref{5eqn: c and tilde theta}.\nThe value $p_l(\\xi)$ is known explicitly from Lemma \\ref{5lem:special value} below:\n\\[\np_l(\\xi)=\\frac{1}{\\sqrt{p}}\n\\bigg(-\\frac{\\sqrt{\\mathstrut pq}}{1-p}\\bigg)^l\n=\\sqrt{\\frac{b}{c}}\\left(-\\frac{\\sqrt{c}}{b-c}\\right)^l,\n\\qquad l=1,2,\\dots.\n\\]\nThen the time-averaged limit probability is given by\n\\begin{align}\n&\\lim_{N\\rightarrow\\infty}\\frac{1}{N}\\sum_{n=0}^{N-1}\n |\\langle \\bs{\\Psi}_l, U^n\\bs{\\Psi}_0 \\rangle |^2\n=\\frac{w^2}{2}p_l(\\xi)^2\n\\nonumber \\\\\n&\\qquad\n=\\frac12\\left\\{\\frac{(b-c)^2-c}{(b-c)(b-c+1)}\\right\\}^2\n \\times \\frac{b}{c} \\left\\{\\frac{c}{(b-c)^2}\\right\\}^l.\n\\label{exponential(0)} \n\\end{align}\nWe here use the following rather obvious result.\n\n\\begin{lemma}\\label{5lem:5.7}\nLet $U$ be the Grover walk on a spidernet $S(a,b,c)$ with an initial state $\\bs{\\psi}_0^+$.\nThen we have\n\\begin{equation}\\label{5eqn:in lem 5.7(0)}\nP(X_n\\in V_l)\n\\ge|\\langle \\bs{\\Psi}_l, U^n\\bs{\\Psi}_0\\rangle|^2,\n\\qquad l=1,2,\\dots.\n\\end{equation}\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nWe first note the obvious inequality:\n\\begin{align}\nP(X_n\\in V_l)\n&=\\sum_{u\\in V_l} \\sum_{v\\sim u} \n |\\langle \\bs{\\delta}_{(u,v)}, U^n\\bs{\\psi}_0^+\\rangle|^2 \n\\nonumber \\\\\n&\\ge \\left|\\left\\langle \\frac{1}{\\sqrt{b |V_l|}}\\sum_{u\\in V_l} \\sum_{v\\sim u} \n \\bs{\\delta}_{(u,v)}, U^n\\bs{\\Psi}_0\\right\\rangle\\right|^2.\n\\label{5eqn:in proof 5.7}\n\\end{align}\nOn the other hand, from the 
definitions \\eqref{3eqn:def of psi+}--\\eqref{3eqn:def of psi-} we see that\n\\[\n\\sum_{u\\in V_l} \\sum_{v\\sim u} \\bs{\\delta}_{(u,v)}\n=\\sqrt{\\mathstrut ac^{l}}\\,\\bs{\\psi}_l^+ \n +\\sqrt{\\mathstrut a(b-c-1)c^{l-1}}\\,\\bs{\\psi}_l^\\circ\n +\\sqrt{\\mathstrut ac^{l-1}}\\, \\bs{\\psi}_l^-.\n\\]\nThen, noting that $|V_l|=ac^{l-1}$ for $l\\ge1$, we obtain\n\\[\n\\frac{1}{\\sqrt{b |V_l|}}\\sum_{u\\in V_l} \\sum_{v\\sim u} \\bs{\\delta}_{(u,v)}\n=\\sqrt{p}\\,\\bs{\\psi}_l^+ +\\sqrt{r}\\,\\bs{\\psi}_l^\\circ+\\sqrt{q}\\, \\bs{\\psi}_l^-\n=\\bs{\\Psi}_l\\,.\n\\]\nInserting the above relation into \\eqref{5eqn:in proof 5.7},\nwe obtain \\eqref{5eqn:in lem 5.7(0)}.\n\\begin{flushright}$\\square$\\end{flushright}\n\nApplying Lemma \\ref{5lem:5.7} to \\eqref{exponential(0)}, we obtain\n\\begin{equation}\\label{5eqn:1 st of theorem}\n\\liminf_{N\\rightarrow\\infty}\\frac{1}{N}\\sum_{n=0}^{N-1}P(X_n\\in V_l)\n\\ge \\frac{b}{2c}\\left\\{\\frac{(b-c)^2-c}{(b-c)(b-c+1)}\\right\\}^2\n \\left\\{\\frac{c}{(b-c)^2}\\right\\}^l,\n\\end{equation}\nwhich proves the first half of Theorem \\ref{exponential localization}.\nIf the spidernet $S(a,b,c)$ is rotationally symmetric around $o$, we have\n\\[\nP(X_n=u)\n=\\frac{1}{|V_l|}\\, P(X_n\\in V_l),\n\\qquad \\partial(o,u)=l.\n\\]\nThen the second half of Theorem \\ref{exponential localization} follows by\ndividing \\eqref{5eqn:1 st of theorem} by $|V_l|=ac^{l-1}$.\n\nFinally, we calculate the value of $p_l(x)$ at $x=\\xi$.\nThe result is somewhat surprising and plays a key role in showing the exponential localization.\n\n\\begin{lemma}\\label{5lem:special value}\nLet $p,q,r$ be constants satisfying\n\\[\np>0,\\quad\nq>0, \\quad\nr=1-p-q\\ge0, \\quad\n(1-p)^2-pq>0.\n\\]\nLet $\\{p_n\\}$ be the orthogonal polynomials \nassociated with the free Meixner law with parameters $q,pq,r$,\nnormalized to have norm one as before, see \\eqref{5eqn:def of p_j}.\nThen we 
have\n\\[\np_n\\left(-\\frac{q}{1-p}\\right)=\\frac{1}{\\sqrt{p}}\n\\left(-\\frac{\\sqrt{\\mathstrut pq}}{1-p}\\right)^n,\n\\quad n=1,2,\\dots.\n\\]\n\\end{lemma}\n\n\\noindent{\\it Proof.}\nWe see from Theorem \\ref{Athm: OP for free Meixner} that\nthe orthogonal polynomials $\\{P_n\\}$ associated with the free Meixner law with parameters $q,pq,r$\nverify\n\\begin{equation}\\label{5eqn:in 5.9 (1)}\nP_n(x)=\\frac{(xR_+(x)-2q)R_+(x)^{n-1}-(xR_-(x)-2q)R_-(x)^{n-1}}{2^{n-1}(R_+(x)-R_-(x))}\\,,\n\\quad n\\ge1,\n\\end{equation}\nwhere\n\\[\nR_{\\pm}(x)=x-r \\pm\\sqrt{\\mathstrut (x-r)^2-4pq}\\,,\n\\quad\n(x-r)^2-4pq>0.\n\\]\nWe need to compute the value of $P_n(x)$ at $\\xi=-q\/(1-p)$. \nNoting first that\n\\[\n(\\xi-r)^2-4pq\n=\\left(-\\frac{q}{1-p}-r\\right)^2-4pq\n=\\left\\{\\frac{(1-p)^2-pq}{1-p}\\right\\}^2,\n\\]\nwe obtain\n\\[\nR_+(\\xi)=-\\frac{2pq}{1-p}\\,,\n\\qquad\nR_-(\\xi)=-2(1-p),\n\\]\nand hence \n\\begin{gather*}\n\\xi R_+(\\xi)-2q=\\frac{2q(pq-(1-p)^2)}{(1-p)^2}\\,,\n\\qquad\n\\xi R_-(\\xi)-2q=0, \\\\\n\\xi(R_+(\\xi)-R_-(\\xi))=\\xi R_+(\\xi)-2q.\n\\end{gather*}\nThen putting $x=\\xi$ in \\eqref{5eqn:in 5.9 (1)} we have\n\\begin{align*}\nP_n(\\xi)\n&=\\frac{(\\xi R_+(\\xi)-2q)R_+(\\xi)^{n-1}}{2^{n-1}(R_+(\\xi)-R_-(\\xi))} \\\\\n&=\\frac{\\xi}{2^{n-1}}\\,R_+(\\xi)^{n-1} \\\\\n&=\\frac{1}{p}\\left(-\\frac{pq}{1-p}\\right)^n,\n\\qquad n=1,2,\\dots.\n\\end{align*}\nFinally, in view of \\eqref{5eqn:def of p_j} we have\n\\[\np_n(x)\n=\\frac{P_n(x)}{\\sqrt{\\mathstrut q(pq)^{n-1}}}\n=\\frac{1}{\\sqrt{\\mathstrut p}}\\left(-\\frac{\\sqrt{\\mathstrut pq}}{1-p}\\right)^n,\n\\quad n=1,2,\\dots.\n\\]\nThis completes the proof.\n\\begin{flushright}$\\square$\\end{flushright}\n\\par\n\\par\\noindent\n\\noindent\n{\\bf Acknowledgments.}\nNK was partially supported by the Grant-in-Aid for Scientific Research (C) of Japan Society for the \nPromotion of Science (Grant No. 21540118). 
\nNO was partially supported by the CREST project ``A Mathematical Challenge to a New Phase of Material Sciences\" (2008--2014)\nof Japan Science and Technology Agency.\n\\par\n\\\n\\par\n\n\\begin{small}\n\\bibliographystyle{jplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nModelling insurance claim sizes is not only essential in various actuarial applications including pricing and risk management, but is also very challenging due to several peculiar characteristics of claim severity distributions. Claim size distributions often exhibit multimodality for small and moderate claims, when there exist unobserved heterogeneities possibly reflected by different claim types and accident causes, or when the observed samples come from a contaminated distribution. Also, the distribution is usually heavy-tailed in nature, where very large claims occur with a small but non-negligible probability.\nDue to the highly complex distributional characteristics, it is impossible to perfectly capture all the distributional features using a parametric model without an excessively large number of parameters (which results in over-fitting). When model misspecification is unavoidable, correct specification of the tail part is more important than finely capturing the distributional modes of smaller claims, which are rather immaterial to the insurance portfolio, because the large claims are the losses which can severely damage the portfolio. 
As a result, we need to specify an appropriate distributional model with a justifiable statistical inference approach which not only preserves sufficient flexibility to appropriately capture the whole severity distribution, but also puts a particular emphasis on robust estimation of the tail.\n\n\nThe existing actuarial loss modelling literature focuses largely on model specification, introducing various distributional classes to capture the peculiar characteristics of claim severity distributions. Notable directions include extreme value distributions (EVD, see e.g. \\cite{embrechts1999extreme}) to capture the heavy-tailedness, composite loss modelling (see e.g. \\cite{cooray2005modeling}, \\cite{scollnik2007composite}, \\cite{bakar2015modeling} and \\cite{grun2019extending}) to cater for the mismatch between the body and tail behavior of claim severity distributions, and finite mixture models (FMM, see e.g. \\cite{LEE2010modeling} and \\cite{MILJKOVIC2016387}) to capture distributional multimodality.\nIn particular, FMM is becoming an increasingly useful smooth density estimation tool from an insurance claim severity modelling perspective, due to its high versatility theoretically justified by denseness theorems (\\cite{LEE2010modeling}).\nThe mismatch between the body and tail behavior of a severity distribution can also be easily modelled by FMM by selecting varying distributional classes among the mixture component functions (see e.g. \\cite{BLOSTEIN201935} and \\cite{fung2021mixture}), including both light-tailed and heavy-tailed distributions. 
In both actuarial research and practice, statistical inference for FMM is predominantly based on maximum likelihood estimation (MLE) with the use of the Expectation-Maximization (EM) algorithm.\n\nNonetheless, MLE often causes tail-robustness issues where the tail part of the fitted model is very sensitive to model misspecifications -- when the observations are generated from a perturbed and\/or contaminated distribution.\nAs evidenced by several empirical studies including \\cite{fung2021mixture} and \\cite{wuthrich2021statistical}, the estimated tail part of the FMM obtained by MLE can be unreliable and highly unstable in most practical cases. This is mainly due to the overlapping density regions between mixture components modelling small to moderate claims (body) and those modelling large claims (tail). Hence, the estimated tail distribution will be heavily influenced by some smaller claims if FMM is not able to fully explain those small claims, which is always the case in practice since the distributional complexity of real datasets cannot be perfectly captured even by flexible density approximation tools such as FMM. Under the MLE approach, FMM may fail to extrapolate the large claims well, which has serious implications for insurance pricing and risk management. It is therefore natural to question whether or not MLE is still a plausible approach in modelling actuarial claim severity data, and whether or not there exists an alternative statistical inference tool which better addresses our modelling challenges and outperforms the MLE.\n\nRobust statistical inference methods for heavy-tailed distributions are relatively scarce in the actuarial science literature. 
\nNotable contributions include \\cite{brazauskas2000robust}, \\cite{serfling2002efficient}, \\cite{brazauskas2003favorable} and \\cite{dornheim2007robust}, who adopt various statistical inference tools, such as quantile, trimmed mean, trimmed-M and generalized median estimators, to robustly estimate Gamma, Pareto and Log-normal distributions.\nRecent actuarial works study several variations of the method of moments (MoM) for robust estimation of Pareto and Log-normal loss models. Notable contributions in this direction include \\cite{brazauskas2009robust} and \\cite{poudyal2021robust} (trimmed moments), \\cite{zhao2018robust} (winsorized moments) and \\cite{poudyal2021truncated} (truncated moments).\nNote that these works address robustness against upper outliers by reducing the influence of a few extreme observations on the estimated model parameters. This outlier-robustness issue is, however, very different from the tail-robustness issue mentioned above as the key motivation of this paper, where contamination of the body part affects the tail extrapolation. Very few research works look into this ``non-standard\" tail-robustness issue. Notable contributions are \\cite{beran2012robust} and \\cite{gong2018robust}, who propose a huberization of the MLE, which protects against perturbations and misspecifications in the body part of the distribution, to robustly estimate the tail index of Pareto and Weibull distributions. \nAll of the above existing approaches focus solely on one- or two-parameter distributions. A general robust tail estimation strategy under multi-parameter flexible models like the FMM is lacking.\n\nMotivated by the aforementioned tail-robustness issue in the insurance context, we propose a new maximum weighted likelihood estimation (MWLE) approach for robust heavy-tail modelling of the FMM. 
\nUnder the MWLE, an observation-dependent weight function is introduced into the log-likelihood, de-emphasizing the contributions of smaller claims and hence reducing their influence on the estimated tail part of the FMM. Down-weighting small claims is also natural from an insurance loss modelling perspective since, as mentioned at the beginning of this section, accurate modelling of the large claims is more important than that of the smaller claims. \nTo offset the bias caused by the weighting scheme, we also include an adjustment term in the weighted log-likelihood, which can be interpreted as the effect of randomly truncating the observations. With the bias adjustment term, we prove that the estimated parameters under the proposed MWLE are consistent and asymptotically normal for any pre-specified choice of weight function.\nAlso, under some specific choices of weight functions, we will show that the MWLE of the tail index, which determines the tail heaviness of a distribution, is consistent even under model misspecifications where the true model is not an FMM. Therefore, the MWLE can be regarded as a generalized alternative to the Hill estimator (\\cite{hill1975simple}). Furthermore, thanks to a probabilistic interpretation of the proposed MWLE approach, it is still possible to derive a Generalized EM (GEM) algorithm to efficiently estimate the parameters which maximize the weighted log-likelihood function.\n\nNote that the proposed MWLE is different from existing statistics papers which adopt weighting schemes for likelihood-based inference. The existing literature is mainly motivated by one of the following two aspects, both very different from the focus of this paper: (i) robustness against upper and lower outliers, where related research works include e.g. 
\\cite{fieldsmith1994}, \\cite{markatou1997weighted}, \\cite{markatou2000mixture}, \\cite{dupuis2002robust}, \\cite{ahmed2005robust}, \\cite{wong2014robust} and \\cite{aeberhard2021robust}; (ii) bringing in more relevant observations for statistical inference to increase precision while trading off some bias, studied by e.g. \\cite{wang2001maximum}, \\cite{hu2002weighted}, \\cite{wang2004asymptotic} and \\cite{wang2005selecting}. Note also that the proposed MWLE differs from the existing statistics literature in terms of mathematical technicality, since none of the above papers incorporates the truncation-based bias adjustment included in the proposed MWLE.\n\nThe rest of this paper is structured as follows. In Section \\ref{sec:fmm}, we briefly revisit the class of FMM for heavy-tail modelling. Section \\ref{sec:MWLE} introduces the proposed MWLE for robust heavy-tail modelling of the FMM and explains its motivation in terms of insurance claims modelling. Section \\ref{sec:theory} explores several theoretical properties to understand and justify the proposed MWLE. After that, we present in Section \\ref{sec:em} two types of GEM algorithms for efficient parameter estimation under the MWLE approach on the FMM. In Section \\ref{sec:ex}, we analyze the performance of the proposed MWLE through three empirical examples: a toy example, a simulation study and a real insurance claim severity dataset. After demonstrating the superior performance of the MWLE compared to the MLE, we summarize our findings in Section \\ref{sec:discussion} with a brief discussion of how the proposed MWLE can be extended to a regression framework.\n\n\n\n\\section{Finite mixture model} \\label{sec:fmm}\nThis section provides a very brief review of the finite mixture model (FMM), which serves as a flexible density estimation tool. Suppose that there are $n$ i.i.d. claim severities $\\bm{Y}=(Y_1,\\ldots, Y_n)$ with realizations $\\bm{y}=(y_1,\\ldots, y_n)$. 
Each $Y_i$ is generated by an unknown probability distribution $G(\\cdot)$ with density function $g(\\cdot)$. In the insurance context, claim severity distributions often exhibit multimodality, which results from the unobserved heterogeneity stemming from the amalgamation of different types of claims not observed in advance. Also, claim sizes are often heavy-tailed in nature, which can be attributed to a few large losses from a portfolio of policies, which usually represent the greatest part of the indemnities paid by the insurance company. The mismatch between body and tail behavior often makes it difficult to fit the data well using a single standard parametric distribution.\n\nMotivated by these challenges of modelling insurance claim severities, we aim to model the dataset using a finite mixture model (FMM). Define a class of finite mixture distributions $\\mathcal{H}=\\{H(\\cdot;\\bm{\\Phi}):\\bm{\\Phi}\\in\\Omega\\}$, where $\\bm{\\Phi}=(\\psi_1,\\ldots,\\psi_P)^T$ is a column vector of length $P$ representing the model parameters and $\\Omega$ is the parameter space. Its density function $h(y;\\bm{\\Phi})$ is given by the following form:\n\n\\begin{equation} \\label{eq:model}\nh(y;\\bm{\\Phi})=\\sum_{j=1}^{J}\\pi_j f_b(y;\\bm{\\varphi}_j)+\\pi_{J+1}f_t(y;\\bm{\\eta}),\\qquad y>0,\n\\end{equation}\nwhere the parameters $\\bm{\\Phi}$ can alternatively be written as $\\bm{\\Phi}=(\\bm{\\pi},\\bm{\\varphi},\\bm{\\eta})$. Here, $\\bm{\\pi}=(\\pi_1,\\ldots,\\pi_{J+1})$ are the mixture probabilities for each of the $J+1$ components with $\\sum_{j=1}^{J+1}\\pi_j=1$, while $\\bm{\\varphi}=(\\bm{\\varphi}_1,\\ldots,\\bm{\\varphi}_J)$ and $\\bm{\\eta}$ are the parameters of the mixture densities $f_b$ and $f_t$ respectively.\n\nThe $J$ mixture components with densities $f_b$ mainly serve to model the multimodality of the body part of the distribution. 
$f_b$ is naturally chosen as a light-tailed distribution such as Gamma, Weibull or Inverse-Gaussian.\nThe remaining mixture component $f_t$ is designed to capture the large observations so that the tail distribution can be properly extrapolated. Possible choices of heavy-tailed distribution for $f_t$ include Log-normal, Pareto and Inverse-Gamma.\n\n\n\\section{Maximum weighted log-likelihood estimator} \\label{sec:MWLE}\nWith the maximum likelihood estimation (MLE) approach, parameter estimation requires maximizing the log-likelihood function\n\n\\begin{equation} \\label{eq:loglik}\n\\mathcal{L}_n(\\bm{\\Phi};\\bm{y})=\\sum_{i=1}^{n}\\log h(y_i;\\bm{\\Phi})\n\\end{equation}\nwith respect to the parameters $\\bm{\\Phi}$. Under this approach, each claim has the same relative influence on the estimated parameters; from an insurance loss modelling and ratemaking perspective, however, correct specification and projection of larger claims are more important than those of smaller claims. More importantly, as explained by \\cite{fung2021mixture} and \\cite{wuthrich2021statistical}, MLE of the FMM in Equation (\\ref{eq:model}) would fail in most practical cases due to incorrect estimation of tail heaviness under model misspecification. More precisely, because of the overlapping region between the body components $f_b$ and the tail component $f_t$ of the distribution, the small claims may distort the estimated tail distribution $f_t$ if they are not fully captured by the $J$ mixture densities in the body. However, due to the highly complex multimodality characteristics of the body distribution which often appear in real insurance claim severity data, it is impossible to capture all the body distributional patterns without a prohibitively large $J$, which causes over-fitting and loss of model interpretability. 
Therefore, it is often the case in practice that MLE of the FMM results in unstable estimates of the tail distribution, causing unreliable tail extrapolation.\n\nOne way to mitigate the aforementioned model misspecification effect is to impose observation-dependent weights on the log-likelihood function, where a larger claim $y$ is assigned a larger weight. This will reduce the influence of smaller observed values on the estimated tail parameters. For parameter estimation, we instead propose maximizing the following weighted log-likelihood\n\n\\begin{equation} \\label{eq:loglik_weight}\n\\mathcal{L}^*_n(\\bm{\\Phi};\\bm{y})=\\sum_{i=1}^{n}W(y_i)\\log \\frac{h(y_i;\\bm{\\Phi})W(y_i)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du},\n\\end{equation}\nwhere $0\\leq W(\\cdot)\\leq 1$ is the weight function. We call the resulting parameters maximum weighted likelihood estimators (MWLE). To allow for greater relative influence of larger claims, we construct $W(u)$ as a monotonically non-decreasing function of $u$. In this case, we may interpret the weighted log-likelihood function as follows: First, we pretend that each claim $y_i$ is only observed $W(y_i)$ times. However, this alone would bias the estimated tail towards being heavier, because relatively more large claims are effectively included due to the weighting effect. To remove such a bias, we pretend to model $y_i$ by a random truncation model $\\tilde{h}(y_i;\\bm{\\Phi}):=h(y_i;\\bm{\\Phi})W(y_i)\/\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du$ instead of the original modelling distribution $h(y_i;\\bm{\\Phi})$.\n\n\\begin{remark}\nThe proposed MWLE can be viewed as a form of M-estimator, where the optimal parameters are determined through maximizing an objective function. We here discuss two special cases of the MWLE. 
(i) MLE: When $W(\\cdot)=1$, the MWLE reduces to the standard MLE; (ii) Truncated MLE: When $W(y)=1\\{y\\geq\\tau\\}$ for some threshold $\\tau>0$, the MWLE reduces to the truncated MLE introduced by \\cite{marazzi2004adaptively}, where a hard rejection is applied to all samples smaller than $\\tau$.\n\\end{remark}\n\n\n\n\n\\section{Theoretical Properties} \\label{sec:theory}\nThis section presents several theoretical properties that justify the use of the proposed MWLE. Unless specified otherwise, throughout this section the estimated model parameters $\\hat{\\bm{\\Phi}}$ are obtained by maximizing the proposed weighted log-likelihood function given by Equation (\\ref{eq:loglik_weight}).\n\\subsection{Asymptotic behavior with fixed weight function}\n\\subsubsection{Consistency and asymptotic normality}\nWe first show that the proposed weighted log-likelihood approach leads to correct convergence to the true model parameters as $n\\rightarrow\\infty$. The proof is presented in Section 2 of the supplementary materials.\n\\begin{theorem} \\label{thm:asym_tru}\nSuppose that $G(\\cdot)=H(\\cdot;\\bm{\\Phi}_0)\\in\\mathcal{H}$. Assume that the density function $h(y;\\bm{\\Phi})$ satisfies a set of regularity conditions outlined in Section 1 of the supplementary materials\\footnote{Note that this set of regularity conditions is equivalent to that required for consistency and asymptotic normality of the MLE.}. 
Then, there exists a local maximizer $\\hat{\\bm{\\Phi}}_n$ of the weighted log-likelihood $\\mathcal{L}^{*}_n(\\bm{\\Phi};\\bm{y})$ such that\n\\begin{equation}\n\\sqrt{n}(\\hat{\\bm{\\Phi}}_n-\\bm{\\Phi}_0)\\overset{d}{\\rightarrow}\\mathcal{N}(\\bm{0},\\bm{\\Sigma}),\n\\end{equation}\nwhere $\\bm{\\Sigma}=\\bm{\\Gamma}^{-1}\\bm{\\Lambda}\\bm{\\Gamma}^{-1}$, with $\\bm{\\Lambda}$ and $\\bm{\\Gamma}$ being $P\\times P$ matrices given by\n\\begin{align} \\label{eq:asym:lambda}\n\\bm{\\Lambda} \n&=E_{\\bm{\\Phi}_0}\\left[W(Y)^2\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\nonumber\\\\\n&\\quad-\\frac{1}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]}\\Bigg\\{E_{\\bm{\\Phi}_0}\\left[W(Y)^2\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T\\nonumber\\\\\n&\\hspace{8em}+E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)^2\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T\\Bigg\\}\\nonumber\\\\\n&\\quad+\\frac{E_{\\bm{\\Phi}_0}\\left[W(Y)^2\\right]}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]^2}E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T\n\\end{align}\nand\n\\begin{align} \\label{eq:asym:gamma}\n\\bm{\\Gamma}\n&=-E_{\\bm{\\Phi}_0}\\left[W(Y)\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log 
h(Y;\\bm{\\Phi})\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\nonumber\\\\\n&\\quad+\\frac{1}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]}E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T,\n\\end{align}\nwhere $E_{\\bm{\\Phi}_0}[Q(Y)]=\\int_{0}^{\\infty}Q(u)h(u;\\bm{\\Phi}_0)du$ represents the expectation under the density $h(\\cdot;\\bm{\\Phi}_0)$ for any function $Q$, and the derivative ${\\partial}\/{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})$ is taken as a column vector of length $P$.\n\\end{theorem}\n\n\\begin{remark}\nWhen $W(\\cdot)=1$, all except the first term on the right hand side of Equations (\\ref{eq:asym:lambda}) and (\\ref{eq:asym:gamma}) vanish. As a result, the asymptotic variance $\\bm{\\Sigma}=\\bm{\\Gamma}^{-1}\\bm{\\Lambda}\\bm{\\Gamma}^{-1}$ reduces to the inverse of the Fisher information matrix under the standard MLE approach.\n\\end{remark}\n\n\\begin{remark}\nTheorem \\ref{thm:asym_tru} only asserts the existence of a local maximizer instead of a global maximizer, because in FMM it is common that the likelihood function has multiple critical points and\/or is unbounded (\\cite{Mclachlan2004Finite}).\n\\end{remark}\n\nThe above theorem suggests that for large sample sizes, the estimated parameters are approximately unbiased, and we may approximate the variance of the estimated parameters as\n\\begin{equation} \\label{eq:asym_var}\n\\widehat{\\text{Var}}(\\hat{\\bm{\\Phi}}_{n})\\approx \\frac{1}{n}\\hat{\\bm{\\Gamma}}_n^{-1}\\hat{\\bm{\\Lambda}}_n\\hat{\\bm{\\Gamma}}_n^{-1}\n\\end{equation}\nwhere $\\hat{\\bm{\\Lambda}}_n$ and $\\hat{\\bm{\\Gamma}}_n$ are given by $\\bm{\\Lambda}$ and $\\bm{\\Gamma}$ in Equations (\\ref{eq:asym:lambda}) and (\\ref{eq:asym:gamma}) except that the expectations are changed to empirical means and 
$\\bm{\\Phi}_0$ is changed to $\\hat{\\bm{\\Phi}}_n$. Then, it is easy to construct a two-sided Wald-type confidence interval (CI) for $\\psi_p$ ($p=1,\\ldots,P$) as\n\\begin{equation} \\label{eq:asym_CI}\n\\left[\\hat{\\psi}_{n,p}-\\frac{z_{1-\\kappa\/2}}{\\sqrt{n}}\\sqrt{\\left[\\hat{\\bm{\\Gamma}}_n^{-1}\\hat{\\bm{\\Lambda}}_n\\hat{\\bm{\\Gamma}}_n^{-1}\\right]_{p,p}},\\ \\hat{\\psi}_{n,p}+\\frac{z_{1-\\kappa\/2}}{\\sqrt{n}}\\sqrt{\\left[\\hat{\\bm{\\Gamma}}_n^{-1}\\hat{\\bm{\\Lambda}}_n\\hat{\\bm{\\Gamma}}_n^{-1}\\right]_{p,p}}\\right],\n\\end{equation}\nwhere $\\hat{\\psi}_{n,p}$ is the estimate of $\\psi_p$, $z_{\\kappa}$ is the $\\kappa$-quantile of the standard normal distribution and $\\left[\\bm{M}\\right]_{p,p}$ is the $(p,p)$-th element of a matrix $\\bm{M}$. For other quantities of interest (e.g.~mean, VaR and CTE of claim amounts), one may apply the delta method or simulate parameters from ${\\cal N}(\\hat{\\bm{\\Phi}}_{n},\\widehat{\\text{Var}}(\\hat{\\bm{\\Phi}}_{n}))$ to analytically or empirically approximate their CIs.\n\n\\medskip\n\nNext, we examine the asymptotic properties of the MWLE when dropping the assumption that $G(\\cdot)\\in\\mathcal{H}$ (i.e. the model class may be misspecified).\n\n\\begin{theorem} \\label{thm:asym_mis}\nAssume that the density function $h(y;\\bm{\\Phi})$ satisfies the same set of regularity conditions as in the previous theorem. Further assume that there is a local maximizer $\\bm{\\Phi}_0^{*}$ of\n\\begin{equation}\n\\tilde{E}\\left[\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\right]:=\n\\tilde{E}\\left[W(Y)\\log \\frac{h(Y;\\bm{\\Phi})W(Y)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du}\\right],\n\\end{equation}\nwhere $\\tilde{E}\\left[Q(Y)\\right]=\\int_0^{\\infty}Q(u)dG(u)$ represents the expectation under the distribution $G$ for any function $Q$. 
Then, there exists a local maximizer $\\hat{\\bm{\\Phi}}_n$ of the weighted log-likelihood $\\mathcal{L}^{*}_n(\\bm{\\Phi};\\bm{y})$ such that\n\\begin{equation}\n\\sqrt{n}(\\hat{\\bm{\\Phi}}_n-\\bm{\\Phi}_0^{*})\\overset{d}{\\rightarrow}\\mathcal{N}(\\bm{0},\\tilde{\\bm{\\Sigma}}),\n\\end{equation}\nwhere $\\tilde{\\bm{\\Sigma}}=\\tilde{\\bm{\\Gamma}}^{-1}\\tilde{\\bm{\\Lambda}}\\tilde{\\bm{\\Gamma}}^{-1}$, with $\\tilde{\\bm{\\Lambda}}$ and $\\tilde{\\bm{\\Gamma}}$ given by\n\\begin{align} \\label{eq:asym:lambda_mis}\n\\tilde{\\bm{\\Lambda}} \n&=\\tilde{E}\\left[W(Y)^2\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]\\nonumber\\\\\n&\\quad-\\frac{1}{E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\right]}\\Bigg\\{\\tilde{E}\\left[W(Y)^2\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]^T\\nonumber\\\\\n&\\hspace{8em}+E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]\\tilde{E}\\left[W(Y)^2\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]^T\\Bigg\\}\\nonumber\\\\\n&\\quad+\\frac{\\tilde{E}\\left[W(Y)^2\\right]}{E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\right]^2}E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]^T\n\\end{align}\nand\n\\begin{align} \\label{eq:asym:gamma_mis}\n\\tilde{\\bm{\\Gamma}}\n&=\\tilde{E}\\left[W(Y)\\frac{\\partial^2}{\\partial\\bm{\\Phi}\\partial\\bm{\\Phi}^T}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]\n-\\frac{\\tilde{E}\\left[W(Y)\\right]}{E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\right]}E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\frac{\\partial^2}{\\partial\\bm{\\Phi}\\partial\\bm{\\Phi}^T}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]\\nonumber\\\\\n&\\quad-\\frac{\\tilde{E}\\left[W(Y)\\right]}{E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\right]}E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]\\nonumber\\\\\n&\\quad+\\frac{\\tilde{E}\\left[W(Y)\\right]}{E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\right]^2}E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]E_{\\bm{\\Phi}_0^{*}}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0^{*}}\\right]^T.\n\\end{align}\nHere, $E_{\\bm{\\Phi}_0^{*}}$ denotes the expectation under the density $h(\\cdot;\\bm{\\Phi}_0^{*})$.\n\\end{theorem}\n\nAs shown by the above theorem, the MWLE still converges and is asymptotically normally distributed, now around the pseudo-true parameter $\\bm{\\Phi}_0^{*}$. As a result, it is still justifiable to evaluate parameter uncertainties and CIs of parameters in the forms of Equations (\\ref{eq:asym_var}) and (\\ref{eq:asym_CI}). However, in the context of modelling heavy-tailed distributions, for example, there could be an asymptotic bias in the estimated tail index. It is therefore important to theoretically examine how the choice of weight function influences the impact of model misspecification. These questions are deferred to the next subsections on robustness studies and on asymptotics under varying weight functions.\n\n\\subsubsection{Robustness}\nIt is well known that the MLE is the most efficient estimator among all asymptotically unbiased estimators. Therefore, in attempting to reduce the bias of the estimated tail distribution under misspecified models through the MWLE approach with $W(\\cdot)\\neq 1$, there will be a trade-off between bias reduction and loss of efficiency. 
This subsection analyzes such a trade-off, which may provide guidance on choosing an appropriate weight function $W(\\cdot)$. In light of Theorem \\ref{thm:asym_tru}, the following proposition is easily shown by applying the delta method.\n\n\\begin{proposition}\nSuppose that $G(\\cdot)=H(\\cdot;\\bm{\\Phi}_0)\\in\\mathcal{H}$ with the same set of regularity conditions as in the previous theorems. Define $\\hat{\\bm{\\Phi}}_n$ and $\\hat{\\bm{\\Phi}}_n^{(0)}$ as the MWLE and MLE respectively. Then, for any differentiable function $U(\\cdot)$, the relative asymptotic efficiency (AEFF) of $U(\\hat{\\bm{\\Phi}}_n)$ is given by\n\\begin{equation}\n\\text{AEFF}(W;\\bm{\\Phi}_0):=\\lim_{n\\rightarrow\\infty}\\frac{\\text{Var}(U(\\hat{\\bm{\\Phi}}_n^{(0)}))}{\\text{Var}(U(\\hat{\\bm{\\Phi}}_n))}=\\frac{U'(\\bm{\\Phi}_0)^T\\bm{\\Sigma}^{(0)}U'(\\bm{\\Phi}_0)}{U'(\\bm{\\Phi}_0)^T\\bm{\\Sigma}U'(\\bm{\\Phi}_0)},\n\\end{equation}\nwhere $U'(\\bm{\\Phi})$ is the gradient of $U(\\bm{\\Phi})$ w.r.t. $\\bm{\\Phi}$, and $\\bm{\\Sigma}^{(0)}$ is the inverse of the Fisher information matrix under the standard MLE approach.\n\\end{proposition}\n\nNext, we quantify robustness by some statistical measures. In a theoretical setting, we follow e.g. \\cite{huber1981robust}, \\cite{beran2012robust} and \\cite{gong2018robust} to consider the case where $Y_i$ is generated by a contamination model, given by\n\n\\begin{equation} \\label{eq:asym_contam}\nG(y):=G(y;\\epsilon,M,\\bm{\\Phi}_0)=(1-\\epsilon)H(y;\\bm{\\Phi}_0)+\\epsilon M(y), \\quad y>0,\n\\end{equation}\nfor some contamination distribution function $M$. 
Then, the asymptotic bias can be analyzed through the influence function (IF), a column vector of length $P$ given by\n\n\\begin{equation}\n\\text{IF}(\\bm{\\Phi}_0; H, M)=\\lim_{\\epsilon\\rightarrow 0}\\frac{\\tilde{\\bm{\\Phi}}^{\\epsilon,M}-\\bm{\\Phi}_0}{\\epsilon},\n\\end{equation}\nwhere $\\tilde{\\bm{\\Phi}}^{\\epsilon,M}$ denotes the asymptotic estimated parameters if $G(\\cdot;\\epsilon,M,\\bm{\\Phi}_0)$ given by Equation (\\ref{eq:asym_contam}) is the distribution generating $Y_i$, in contrast to the true model parameters $\\bm{\\Phi}_0$. The IF can be interpreted as the infinitesimal asymptotic bias of the estimated parameters caused by perturbing the model generating $Y_i$. A smaller $|\\text{IF}_p(\\bm{\\Phi}_0; H, M)|$ (with $\\text{IF}_p$ being the $p^{\\text{th}}$ element of $\\text{IF}$) means a more robust estimation of $\\psi_p$ under model misspecification. Our goal is to demonstrate the potential of the proposed MWLE to reduce such a bias and hence improve robustness. The following proposition derives the IF under the MWLE approach:\n\n\\begin{proposition}\nThe IF is given by\n\\begin{align}\n\\text{IF}(\\bm{\\Phi}_0; H, M)\n&=-\\bm{\\Gamma}^{-1}\\Bigg\\{E_M\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\nonumber\\\\\n&\\hspace{5em}-\\frac{E_M\\left[W(Y)\\right]}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]}E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\Bigg\\},\n\\end{align}\nwhere $E_M[Q(Y)]=\\int_0^{\\infty}Q(u)dM(u)$ for any function $Q$, and $\\bm{\\Gamma}$ is given by Equation (\\ref{eq:asym:gamma}).\n\\end{proposition}\n\nWe will show empirically how the choice of weight function $W(\\cdot)$ affects the AEFF and IF in Section \\ref{sec:ex:toy}, which will help us understand the bias-variance tradeoff of our proposed MWLE approach.\n\n\\subsection{Asymptotic behavior of tail index with varying 
weight functions} \\label{sec:theory:tail_idx}\n\nThe tail index measures the tail-heaviness of a probability distribution. Correctly specifying the tail index is a critical task in modelling insurance data of a heavy-tailed nature, as insurance companies often care more about large claims, which are more material than small ones. In this section, we show that under some sequences of weight functions $W_n(\\cdot)$ which depend on the number of observations $n$, the estimated tail index is consistent under the proposed MWLE even if the model class is misspecified. \nThis result theoretically justifies how the proposed MWLE addresses the tail-robustness issue caused by model misspecification and distributional contamination, by showing that the reduced influence of smaller claims through down-weighting can be useful for producing a plausible tail index estimate. \nAlso, the result may provide some theoretical guidance on selecting an appropriate weight function.\n\nLet $\\mathcal{R}_{-\\gamma}$ denote the class of regularly varying distributions with tail index $\\gamma>0$, such that $\\bar{H}\\in\\mathcal{R}_{-\\gamma}$ if and only if $\\bar{H}(y)\\sim y^{-\\gamma}L_0(y)$ as $y\\rightarrow\\infty$ for some slowly varying function $L_0(y)\\in\\mathcal{R}_0$ satisfying $L_0(ty)\/L_0(y)\\rightarrow 1$ as $y\\rightarrow\\infty$ for any $t>0$. A smaller $\\gamma$ implies a heavier tail. Note that regularly varying distributions include many distributions that capture heavy-tail behaviors of loss random variables, and we refer the readers to \\citet{cooke2014fat} for more explanation of these distributions. Also define the following transformed density functions\n\\begin{equation}\n\\tilde{g}_{n}(y)=\\frac{g(y)W_n(y)}{\\int_{0}^{\\infty}g(u)W_n(u)du},\\qquad \\tilde{h}_{n}(y;\\bm{\\Phi})=\\frac{h(y;\\bm{\\Phi})W_n(y)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W_n(u)du},\n\\end{equation}\nwith $\\tilde{G}_{n}(\\cdot)$ and $\\tilde{H}_{n}(\\cdot)$ the corresponding distribution functions. 
We further put a bar on any function $Q$ to denote its survival function (i.e. $\\bar{Q}:=1-Q$). We then make the following assumptions:\n\n\\begin{enumerate}[font={\\bfseries},label={A\\arabic*.}]\n\\item $\\bar{G}\\in\\mathcal{R}_{-\\gamma_0}$ with tail index $\\gamma_0>0$.\n\\item $\\bar{H}(y;\\bm{\\Phi})=y^{-\\gamma}L(y;\\bm{\\Phi})$ for some slowly varying function $L$, so that $\\bar{H}\\in\\mathcal{R}_{-\\gamma}$. Here, $\\gamma$ is the only model parameter within $\\bm{\\Phi}$ that governs the tail index. Also, both $L(yt;\\bm{\\Phi})\/L(y;\\bm{\\Phi})$ and its derivative w.r.t. $\\bm{\\Phi}$ converge uniformly as $y\\rightarrow\\infty$ for any fixed $t>1$.\n\\item There exists a sequence of thresholds $\\{\\tau_n\\}_{n=1,2,\\ldots}$ with $\\tau_n\\rightarrow\\infty$ as $n\\rightarrow\\infty$ such that $\\tau_nW_n(\\tau_n)\\rightarrow 0$ as $n\\rightarrow\\infty$.\n\\item $\\tilde{E}_n[(\\log \\tilde{h}_{n}(Y;\\bm{\\Phi}))^2]\/(n\\tilde{E}[W_n(Y)])\\rightarrow 0$ as $n\\rightarrow\\infty$, where $\\tilde{E}_n[Q(Y)]=\\int_{0}^{\\infty}Q(u)d\\tilde{G}_{n}$ and $\\tilde{E}[Q(Y)]=\\int_{0}^{\\infty}Q(u)dG$ for any function $Q$.\n\\item The density functions $h(y;\\bm{\\Phi})$ and $g(y)$ are ultimately monotone (i.e. monotone on $y\\in (z,\\infty)$ for some $z>0$), uniformly in $\\bm{\\Phi}$.\n\\end{enumerate}\n\nAssumptions \\textbf{A1} and \\textbf{A2} ensure that both the model generating the observations and the fitted model class are heavy-tailed in nature, with tail heaviness quantified by the tail indices $\\gamma_0$ and $\\gamma$ respectively. In the finite mixture context of Section \\ref{sec:fmm}, \\textbf{A2} can be easily satisfied by choosing $f_t$ in Equation (\\ref{eq:model}) as any standard regularly varying distribution such as Pareto or Inverse-Gamma with a compact parameter space. Assumption \\textbf{A3} asserts that all observations other than the extreme ones are greatly down-weighted. 
This assumption provides theoretical guidance on choosing the weight function: small to moderate claims should be allocated only small weights, while substantial weights should be assigned only to large claims. \\textbf{A4} requires that the effective number of MWLE observations $n\\tilde{E}[W_n(Y)]\\rightarrow\\infty$ so that large-sample theory holds. The numerator $\\tilde{E}_n[(\\log \\tilde{h}_{n}(Y;\\bm{\\Phi}))^2]$ grows much more slowly than the denominator as a logarithm is involved. Assumption \\textbf{A5} is of no practical concern. Now, we have the following theorem, which asserts the consistency of the estimated tail index. The proof is deferred to Section 3 of the supplementary materials.\n\n\\begin{theorem} \\label{thm:asym:tail_idx}\nAssume \\textbf{A1} to \\textbf{A5} hold under the MWLE setting, and the regularity conditions outlined in Section 1 of the supplementary materials are satisfied. Then, there exists a local maximizer $\\hat{\\bm{\\Phi}}_n$ of the weighted log-likelihood function $\\mathcal{L}_n^{*}(\\bm{\\Phi};\\bm{y})$ with estimated tail index $\\hat{\\gamma}_n$ such that $\\hat{\\gamma}_n\\rightarrow\\gamma_0$ as $n\\rightarrow\\infty$. Further, the local maximizer $\\hat{\\gamma}_n$ is unique in probability as $n\\rightarrow\\infty$.\n\\end{theorem}\n\n\n\\begin{remark}\nConsider a special case where: (i) the weight functions $W_n(y)=1\\{y>\\tau_n\\}$ are step functions for some sequence $\\tau_n\\rightarrow\\infty$; and (ii) the fitted model class $H(y;\\bm{\\Phi})$ is chosen as a Generalized Pareto distribution (GPD), or equivalently a Lomax distribution, which will be described in Section \\ref{sec:ex} (i.e. $H(y;\\bm{\\Phi})$ is an FMM in Equation (\\ref{eq:model}) with $J=0$ and $f_t$ a GPD). 
Theorem \\ref{thm:asym:tail_idx} then asserts the consistency of the tail index obtained by the excess-over-threshold method on the GPD (\\cite{smith1987estimating}), which has a very close connection with the consistency property of the Hill estimator (\\cite{hill1975simple}) (see Section 4 of \\cite{smith1987estimating}). Therefore, we can regard the proposed MWLE approach as a generalization of the Hill-type estimator of \\cite{hill1975simple}.\n\\end{remark}\n\n\\section{Parameter estimation} \\label{sec:em}\n\\subsection{GEM algorithm} \\label{sec:em:gem}\nSince there is a probabilistic interpretation of the weighted log-likelihood given by Equation (\\ref{eq:loglik_weight}), it is feasible to construct a generalized Expectation-Maximization (GEM) algorithm for efficient parameter estimation. In this paper, we present two distinct approaches to complete-data construction, which result in two different kinds of GEM algorithms.\n\n\\subsubsection{Method 1: Hypothetical data approach}\n\\paragraph{Construction of complete data}\nTo address the challenges of directly optimizing the ``observed data\" weighted log-likelihood in Equation (\\ref{eq:loglik_weight}), we extend the ``hypothetical complete data\" method proposed by \\cite{FUNG2020MoECensTrun}, by defining the complete data\n\\begin{equation}\n\\mathcal{D}^{\\text{com}}=\\{(y_i,\\bm{z}_i,k_i,\\{\\bm{z}'_{is},y'_{is}\\}_{s=1,\\ldots,k_i})\\}_{i=1,\\ldots,n},\n\\end{equation}\nwhere $k_i$ is the number of missing sample points ``generated\" by observation $i$, due to the probabilistic interpretation that each sample $i$ is removed with probability $1-W(y_i)$. 
As an auxiliary tool for efficient computation, we assume that $k_i$ follows a geometric distribution with probability mass function\\n\\begin{equation} \\label{eq:em:k}\\np(k_i;\\bm{\\Phi})=\\left[1-\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du\\right]^{k_i}\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du, \\qquad k_i=0,1,\\ldots,\\n\\end{equation}\\nand $\\{y'_{is}\\}_{s=1,\\ldots,k_i}$ are i.i.d. variables representing the missing samples. We assume that $Y'_{is}$ (with realization $y'_{is}$) is independent of $y_i$ and $k_i$, and follows a distribution with the following density function\\n\\begin{equation} \\label{eq:em:y}\\n\\tilde{h}^{*}(y'_{is};\\bm{\\Phi})=\\frac{h(y'_{is};\\bm{\\Phi})(1-W(y'_{is}))}{\\int_0^{\\infty}h(u;\\bm{\\Phi})(1-W(u))du},\\qquad y'_{is}>0.\\n\\end{equation}\\n\\nFurther, $\\bm{z}_i=(z_{i1},\\ldots,z_{i(J+1)})$ are the latent mixture component assignment labels, where $z_{ij}=1$ if observation $i$ belongs to the $j^{\\text{th}}$ latent class and $z_{ij}=0$ otherwise. Similarly, $\\bm{z}'_{is}=(z'_{is1},\\ldots,z'_{is(J+1)})$ are the labels for the missing data, where $z'_{isj}=1$ if the $s^{\\text{th}}$ missing sample generated by observation $i$ belongs to the $j^{\\text{th}}$ latent class, and $z'_{isj}=0$ otherwise. 
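As a concrete illustration, the complete-data mechanism above can be simulated directly: draw $k_i$ from the geometric law in Equation (\\ref{eq:em:k}) and the missing points from the density in Equation (\\ref{eq:em:y}). The following is a minimal Python sketch assuming a single Lomax component for $h$ and a fixed Gamma-CDF weight $W$; all numeric values and function names are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import integrate, stats

# Illustrative sketch of the hypothetical-data construction (Method 1).
# Assumed specification: h is a Lomax density and W a Gamma-CDF weight.
GAMMA, THETA = 2.0, 1000.0                     # assumed tail index and scale

def h(y):
    """Lomax density h(y; Phi)."""
    return GAMMA * THETA**GAMMA / (y + THETA)**(GAMMA + 1)

def W(y):
    """Weight function: Gamma CDF (illustrative hyperparameters)."""
    return stats.gamma.cdf(y, a=2.0, scale=500.0)

# Success probability of the geometric law: c = int_0^inf h(u) W(u) du.
c, _ = integrate.quad(lambda u: h(u) * W(u), 0.0, np.inf)

rng = np.random.default_rng(0)

def sample_complete(y_i):
    """For one observation y_i, draw k_i ~ Geometric(c) on {0, 1, ...} and
    k_i missing points from h*(y') propto h(y')(1 - W(y')), by rejection.
    (k_i and the y'_is are independent of y_i by assumption.)"""
    k_i = rng.geometric(c) - 1                 # shift support to {0, 1, ...}
    missing = []
    while len(missing) < k_i:
        u = rng.uniform()
        y = THETA * ((1.0 - u) ** (-1.0 / GAMMA) - 1.0)  # Lomax inverse CDF
        if rng.uniform() < 1.0 - W(y):         # accept w.p. 1 - W(y)
            missing.append(y)
    return k_i, missing
```

Under this construction, the expected number of missing points per observation is (1-c)/c, the mean of the geometric distribution in Equation (\\ref{eq:em:k}).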
\n\nThe complete data weighted log-likelihood function is then given by\n\n\\begingroup\n\\allowdisplaybreaks\n\\begin{align}\n\\tilde{\\mathcal{L}}^{*}_n(\\bm{\\Phi};\\mathcal{D}^{\\text{com}})\n&=\\sum_{i=1}^{n}W(y_i)\\log\\Bigg\\{\\frac{\\left\\{\\prod_{j=1}^{J}[\\pi_jf_b(y_i;\\bm{\\varphi}_j)]^{z_{ij}}\\right\\}\\left(\\pi_{J+1}f_t(y_i;\\bm{\\eta})\\right)^{z_{i(J+1)}}W(y_i)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du} \\nonumber\\\\\n&\\hspace{8em} \\times \\left[1-\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du\\right]^{k_i}\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du \\nonumber\\\\\n&\\hspace{8em} \\times \\prod_{s=1}^{k_i}\\frac{\\left\\{\\prod_{j=1}^{J}[\\pi_jf_b(y'_{is};\\bm{\\varphi}_j)]^{z'_{isj}}\\right\\}\\left(\\pi_{J+1}f_t(y'_{is};\\bm{\\eta})\\right)^{z'_{is(J+1)}}W(y_i)}{\\int_0^{\\infty}h(u;\\bm{\\Phi})(1-W(u))du}\\Bigg\\} \\nonumber\\\\\n&=\\sum_{i=1}^nW(y_i)\\left\\{\\left[\\sum_{j=1}^{J}z_{ij}\\log \\pi_jf_b(y_i;\\bm{\\varphi}_j)\\right]+z_{i(J+1)}\\log\\pi_{J+1} f_t(y_i;\\bm{\\eta})\\right\\}\\nonumber\\\\\n&\\quad +\\sum_{i=1}^{n}\\sum_{s=1}^{k_i}W(y_i)\\left\\{\\left[\\sum_{j=1}^J z'_{ijs}\\log\\pi_j f_b(y'_{is};\\bm{\\varphi}_j)\\right]+z'_{i(J+1)s}\\log\\pi_{J+1} f_t(y'_{is};\\bm{\\eta})\\right\\}+\\text{const.},\n\\end{align}\n\\endgroup\nwhich is more computationally tractable. 
In the following, we omit the constant term, which is irrelevant to the calculations.\\n\\n\\paragraph{Iterative procedures}\\nIn the $l^{\\text{th}}$ iteration of the E-step, we compute the expectation of the complete data weighted log-likelihood as follows:\\n\\begin{align} \\label{eq:em:q}\\nQ^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})\\n&=\\sum_{i=1}^nW(y_i)\\left\\{\\left[\\sum_{j=1}^{J}z_{ij}^{(l)}\\log \\pi_jf_b(y_i;\\bm{\\varphi}_j)\\right]+z_{i(J+1)}^{(l)}\\log\\pi_{J+1} f_t(y_i;\\bm{\\eta})\\right\\}\\nonumber\\\\\n&\\quad +\\sum_{i=1}^{n}k_i^{(l)}W(y_i)\\Big\\{\\left[\\sum_{j=1}^J {z'}^{(l)}_{ij}\\left(\\log\\pi_j +E\\left[\\log f_b(Y';\\bm{\\varphi}_j)|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}\\right]\\right)\\right] \\nonumber\\\\\n& \\hspace{8em} +{z'}^{(l)}_{i(J+1)}\\left(\\log\\pi_{J+1}+ E\\left[\\log f_t(Y';\\bm{\\eta})|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}\\right]\\right)\\Big\\},\\n\\end{align}\\nwhere $z_{ij}^{(l)}=E[z_{ij}|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}]$, ${z'}^{(l)}_{ij}=E[z'_{isj}|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}]$ and $k_i^{(l)}=E[K_i|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}]$. Here, $K_i$ follows $p(\\cdot;\\bm{\\Phi}^{(l-1)})$ in Equation (\\ref{eq:em:k}) and $Y'$ follows $\\tilde{h}^{*}(\\cdot;\\bm{\\Phi}^{(l-1)})$ in Equation (\\ref{eq:em:y}). The precise expressions of the above expectations are presented in Section 4.2 of the supplementary materials, under a particular specification of a Gamma distribution for $f_b$ and a Lomax distribution for $f_t$. This specification will also be studied in the illustrating examples (Section \\ref{sec:ex}).\\n\\nIn the M-step, we attempt to find the updated parameters $\\bm{\\Phi}^{(l)}$ such that $Q^{*}(\\bm{\\Phi}^{(l)}|\\bm{\\Phi}^{(l-1)})\\geq Q^{*}(\\bm{\\Phi}^{(l-1)}|\\bm{\\Phi}^{(l-1)})$. Note in Equation (\\ref{eq:em:q}) that $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ is linearly separable w.r.t. 
parameters $(\\bm{\\pi}, \\bm{\\varphi}_1,\\ldots,\\bm{\\varphi}_J,\\bm{\\eta})$. Therefore, the optimization can be done separately w.r.t. each subset of parameters. Details are deferred to Section 4.3 of the supplementary materials. \\n\\n\\subsubsection{Method 2: Parameter transformation approach}\\n\\paragraph{Construction of complete data}\\nMotivated by the mixture probability transformation approach adopted by e.g. \\cite{lee2012algorithms} and \\cite{VERBELEN2015Censor} for truncated data, we rewrite the random truncation distribution $\\tilde{h}(y_i;\\bm{\\Phi})$ in Equation (\\ref{eq:loglik_weight}) as\\n\\begin{equation}\\n\\tilde{h}(y_i;\\bm{\\Phi})\\n=\\frac{h(y_i;\\bm{\\Phi})W(y_i)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du}\\n=\\sum_{j=1}^J\\pi_j^{*}\\frac{f_b(y_i;\\bm{\\varphi}_j)W(y_i)}{\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du}+\\pi_{J+1}^{*}\\frac{f_t(y_i;\\bm{\\eta})W(y_i)}{\\int_0^{\\infty}f_t(u;\\bm{\\eta})W(u)du},\\n\\end{equation}\\nwhere $\\bm{\\pi}^{*}:=(\\pi_1^{*},\\ldots,\\pi_{J+1}^{*})$ are the transformed mixing weight parameters given by\\n\\begin{equation} \\label{eq:em:pi_trans}\\n\\pi_j^{*}=\\frac{\\pi_j\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du},~j=1,\\ldots,J;\\qquad\\n\\pi_{J+1}^{*}=\\frac{\\pi_{J+1}\\int_0^{\\infty}f_t(u;\\bm{\\eta})W(u)du}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du}.\\n\\end{equation}\\n\\nAs a result, the problem is reduced to maximizing the weighted log-likelihood of a finite mixture of randomly truncated distributions. In this case, define the complete data\\n\\begin{equation}\\n\\mathcal{D}^{\\text{com}}=\\{(y_i,\\bm{z}_i^{*})\\}_{i=1,\\ldots,n},\\n\\end{equation}\\nwhere $\\bm{z}_i^{*}=(z_{i1}^{*},\\ldots,z_{i(J+1)}^{*})$ are the labels, with $z_{ij}^{*}=1$ if observation $i$ belongs to the $j^{\\text{th}}$ (transformed) latent mixture component and $z_{ij}^{*}=0$ otherwise. 
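The transformation in Equation (\\ref{eq:em:pi_trans}) is straightforward to compute numerically. Below is a minimal Python sketch for an assumed two-component Gamma body plus a Lomax tail, with purely illustrative parameter values; it also verifies that inverting the map recovers the original mixing weights.

```python
import numpy as np
from scipy import integrate, stats

# Illustrative sketch of the mixing-weight transformation pi -> pi*.
# Assumed specification: two Gamma body components and one Lomax tail,
# with a Gamma-CDF weight; all numeric values are hypothetical.

def W(y):
    return stats.gamma.cdf(y, a=2.0, scale=500.0)

def f_gamma(y, mu, phi):
    """Gamma density parameterized by mean mu and dispersion phi."""
    return stats.gamma.pdf(y, a=1.0 / phi, scale=phi * mu)

def f_lomax(y, theta, gamma):
    return gamma * theta**gamma / (y + theta)**(gamma + 1)

pi = np.array([0.4, 0.4, 0.2])                # original mixing weights
comps = [lambda y: f_gamma(y, 100.0, 0.25),
         lambda y: f_gamma(y, 300.0, 0.25),
         lambda y: f_lomax(y, 1000.0, 2.0)]

# c_j = int_0^inf f_j(u) W(u) du, computed by numerical quadrature.
c = np.array([integrate.quad(lambda u: f(u) * W(u), 0.0, np.inf)[0]
              for f in comps])

pi_star = pi * c / np.sum(pi * c)             # forward map (pi -> pi*)
pi_back = (pi_star / c) / np.sum(pi_star / c) # inverse map (pi* -> pi)
```

Because $W$ emphasizes large observations, the tail component's transformed weight exceeds its original weight while the body components are down-weighted, and `pi_back` recovers `pi` exactly up to floating-point error.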
The complete data weighted log-likelihood function is reduced to\\n\\begin{align}\\n\\tilde{\\mathcal{L}}^{*}_n(\\bm{\\Phi};\\mathcal{D}^{\\text{com}})\\n&=\\sum_{i=1}^{n}W(y_i)\\Bigg\\{\\left[\\sum_{j=1}^{J}z_{ij}^{*}\\left(\\log\\pi_j^{*}+\\log\\frac{f_b(y_i;\\bm{\\varphi}_j)W(y_i)}{\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du}\\right)\\right]\\nonumber\\\\\n&\\hspace{8em}+z_{i(J+1)}^{*}\\left(\\log\\pi_{J+1}^{*}+\\log\\frac{f_t(y_i;\\bm{\\eta})W(y_i)}{\\int_0^{\\infty}f_t(u;\\bm{\\eta})W(u)du}\\right)\\Bigg\\}.\\n\\end{align}\\n\\n\\paragraph{Iterative procedures}\\nIn the $l^{\\text{th}}$ iteration of the E-step, the expectation of the complete data weighted log-likelihood is:\\n\\begin{align} \\label{eq:em:q2}\\nQ^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})\\n&=\\sum_{i=1}^{n}W(y_i)\\Bigg\\{\\left[\\sum_{j=1}^{J}z_{ij}^{*(l)}\\left(\\log\\pi_j^{*}+\\log\\frac{f_b(y_i;\\bm{\\varphi}_j)W(y_i)}{\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du}\\right)\\right]\\nonumber\\\\\n&\\hspace{8em}+z_{i(J+1)}^{*(l)}\\left(\\log\\pi_{J+1}^{*}+\\log\\frac{f_t(y_i;\\bm{\\eta})W(y_i)}{\\int_0^{\\infty}f_t(u;\\bm{\\eta})W(u)du}\\right)\\Bigg\\},\\n\\end{align}\\nwhere $z_{ij}^{*(l)}=E[z_{ij}^{*}|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}]$ is provided in Section 5.2 of the supplementary materials.\\n\\nIn the M-step, similarly to Method 1, $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ is linearly separable w.r.t. the parameters $(\\bm{\\pi}^{*}, \\bm{\\varphi}_1,\\ldots,\\bm{\\varphi}_J,\\bm{\\eta})$, so we can maximize $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ sequentially w.r.t. each subset of parameters. Details are presented in Section 5.3 of the supplementary materials. 
Note that the M-step of this approach is slightly more computationally intensive than Method 1, as the target function $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ here involves numerical integrals.\\n\\nAfter completing the iterative procedures, we obtain an estimate of the transformed mixing weights $\\bm{\\pi}^{*}$ instead of $\\bm{\\pi}$. One can invert Equation (\\ref{eq:em:pi_trans}) to recover the estimated original mixing weights as follows:\\n\\begin{equation}\\n\\pi_j=\\frac{\\pi_j^{*}[\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du]^{-1}}{\\sum_{j'=1}^J\\pi_{j'}^{*}[\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_{j'})W(u)du]^{-1}+\\pi_{J+1}^{*}[\\int_{0}^{\\infty}f_t(u;\\bm{\\eta})W(u)du]^{-1}},~j=1,\\ldots,J;\\n\\end{equation}\\n\\begin{equation}\\n\\pi_{J+1}=\\frac{\\pi_{J+1}^{*}[\\int_{0}^{\\infty}f_t(u;\\bm{\\eta})W(u)du]^{-1}}{\\sum_{j'=1}^J\\pi_{j'}^{*}[\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_{j'})W(u)du]^{-1}+\\pi_{J+1}^{*}[\\int_{0}^{\\infty}f_t(u;\\bm{\\eta})W(u)du]^{-1}}.\\n\\end{equation}\\n\\n\\subsection{Ascending property of the GEM algorithm}\\nIt is well known from \\cite{DEMPSTER1977EM} that an increase of the complete data log-likelihood implies an increase of the observed data log-likelihood (Equation (\\ref{eq:loglik})). This extends analogously to the proposed weighted log-likelihood framework, yielding the following proposition. The proof is deferred to Section 6 of the supplementary material.\\n\\n\\begin{proposition} \\label{prop:ascend}\\nIf the expected complete data weighted log-likelihood is increased during the $l^{\\text{th}}$ iteration (i.e. $Q^{*}(\\bm{\\Phi}^{(l)}|\\bm{\\Phi}^{(l-1)})\\geq Q^{*}(\\bm{\\Phi}^{(l-1)}|\\bm{\\Phi}^{(l-1)})$), then the observed data weighted log-likelihood is also increased (i.e. 
$\\mathcal{L}^{*}_n(\\bm{\\Phi}^{(l)};\\bm{y})\\geq \\mathcal{L}^{*}_n(\\bm{\\Phi}^{(l-1)};\\bm{y})$).\\n\\end{proposition}\\n\\n\\subsection{Parameter initialization, convergence acceleration and stopping criterion} \\label{sec:em:init}\\nInitialization of parameters is an important issue, in the sense that poor initializations may lead to slow convergence, numerical instability and even convergence to a spurious local maximum. We suggest determining the initial parameters $\\bm{\\Phi}^{(0)}$ using a modified version of the clusterized method of moments (CMM) approach of \\cite{gui2018fit}. Under this approach, we first determine a threshold $\\tau$ which classifies observations $y_i$ into either the body ($y_i\\leq\\tau$) or the tail ($y_i>\\tau$) of the distribution. We then apply a $K$-means clustering method to assign ``body\" observations $y_i$ with $y_i\\leq\\tau$ to one of the $J$ mixture components for the body, with a moment-matching method for each mixture component to determine the initial parameters $(\\bm{\\pi}^{(0)},\\bm{\\varphi}^{(0)})$. The moment-matching technique is also applied to ``tail\" observations $y_i$ with $y_i>\\tau$ to initialize $\\bm{\\eta}^{(0)}$. For details, we direct readers to Section 7 of the supplementary material.\\n\\nAs the EM algorithm often converges slowly with small step sizes, we further apply a step lengthening procedure for every two GEM iterations to accelerate the algorithm. 
This is described by \\cite{jamshidian1997acceleration} and the references therein as a ``pure accelerator\" for the EM algorithm.\\n\\n\\sloppy The GEM algorithm is iterated until the relative change of iterated parameters $\\Delta^{\\text{rel}}\\bm{\\Phi}^{(l)}:=|\\log(\\bm{\\Phi}^{(l)}\/\\bm{\\Phi}^{(l-1)})|\/P$ is smaller than a threshold of $10^{-5}$ or a maximum of 1000 iterations is reached.\\n\\n\\subsection{Specification of weight function}\\nOur proposed MWLE is rather flexible, allowing us to pre-specify any weight function $W(\\cdot)$ prior to running the GEM algorithm. The appropriate choice of $W(\\cdot)$ depends on decision rules beyond what statistical inference alone can provide. From an insurance loss modelling perspective, such decision rules include the relative importance for an insurance company of correctly specifying the tail distribution (to evaluate tail measures such as Value-at-Risk (VaR)) compared to more accurately modelling the smaller attritional claims. If accurate extrapolation of huge claims is far more important than modelling the smaller claims, then one may consider $W(y)$ to be close to zero unless $y$ is large, aligning with Assumption \\textbf{A3} in Section \\ref{sec:theory:tail_idx} to ensure near-consistent tail index estimation (Theorem \\ref{thm:asym:tail_idx}). Otherwise, one may consider a flatter $W(y)$ across $y$.\\n\\nThroughout the entire paper, we analyze the following general form of weight function\\n\\begin{equation} \\label{eq:em:wgt_func}\\nW(y):=W(y;\\xi,\\tilde{\\mu},\\tilde{\\phi})=\\xi+(1-\\xi)\\int_0^y\\frac{(\\tilde{\\phi}\\tilde{\\mu})^{-1\/\\tilde{\\phi}}}{\\Gamma(1\/\\tilde{\\phi})}u^{1\/\\tilde{\\phi}-1}e^{-u\/(\\tilde{\\phi}\\tilde{\\mu})}du,\\quad y>0,\\n\\end{equation}\\nwhich is the distribution function of a zero-inflated Gamma distribution. 
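This zero-inflated Gamma weight can be implemented directly from its definition, since the integral term is simply the Gamma distribution function with shape $1/\\tilde{\\phi}$ and scale $\\tilde{\\phi}\\tilde{\\mu}$. Below is a minimal Python sketch; the hyperparameter values used are purely illustrative.

```python
import numpy as np
from scipy import stats

def weight(y, xi, mu, phi):
    """Zero-inflated Gamma weight W(y; xi, mu, phi).

    The integral term is the CDF of a Gamma distribution with mean mu
    and dispersion phi, i.e. shape 1/phi and scale phi*mu.
    """
    y = np.asarray(y, dtype=float)
    if xi >= 1.0 or mu == 0.0:
        return np.ones_like(y)          # degenerate case: reduces to MLE
    return xi + (1.0 - xi) * stats.gamma.cdf(y, a=1.0 / phi, scale=phi * mu)

# Illustrative hyperparameters: minimum weight 0.01, mean 1000, dispersion 0.1.
y = np.array([10.0, 500.0, 5000.0])
w = weight(y, xi=0.01, mu=1000.0, phi=0.1)
# w is non-decreasing in y: small claims receive weight near xi,
# large claims receive weight near 1.
```

Taking $\\tilde{\\phi}$ close to zero makes the Gamma CDF approach a step function at $\\tilde{\\mu}$, so with $\\xi=0$ the weight recovers the hard threshold $1\\{y\\geq\\tilde{\\mu}\\}$.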
The above weight function has the following characteristics:\\n\\begin{itemize}\\n\\item $W(y)$ is a non-decreasing function of $y$, meaning that smaller observations are down-weighted.\\n\\item $\\xi\\in[0,1]$ is the minimum weight assigned to each observation.\\n\\item $\\tilde{\\mu}$ and $\\tilde{\\phi}$ are the mean and dispersion hyperparameters of the Gamma distribution respectively. A larger $\\tilde{\\mu}$ means that more (small to moderate) claims are under-weighted to a larger extent, while $\\tilde{\\phi}$ controls the shape of the weight function, i.e. how the observations are under-weighted.\\n\\item If $\\xi=1$ or $\\tilde{\\mu}=0$, then the weight function reduces to $W(\\cdot)=1$, recovering the standard MLE approach.\\n\\item If $\\xi=0$ and $\\tilde{\\phi}\\rightarrow 0$, then $W(y)=1\\{y\\geq\\tilde{\\mu}\\}$, meaning that only observations greater than $\\tilde{\\mu}$ are informative in determining the estimated parameters.\\n\\end{itemize}\\n\\nOverall, a smaller $\\xi$, a larger $\\tilde{\\mu}$ and a smaller $\\tilde{\\phi}$ represent greater under-weighting of small claims, from which we expect more robust tail estimation at the cost of efficiency in estimating the body. In this paper, instead of quantifying decision rules to select the hyperparameters, in the subsequent sections we empirically test a wide range of combinations of $(\\xi,\\tilde{\\mu},\\tilde{\\phi})$ to study how these hyperparameters affect the trade-off between tail-robustness and estimation efficiency. This provides practical guidance and assessments for determining suitable hyperparameters.\\n\\n\\begin{remark}\\nThere are many possible ways to quantify the decision rule for selecting the ``optimal\" weight function hyperparameters. We briefly discuss two: (1) Consider a goodness-of-fit test statistic for heavy-tailed distributions, such as the modified AD test (\\cite{ahmad1988assessment}). 
Then select the weight function hyperparameters which optimize the test statistic; (2) Define an acceptable range of estimated parameter uncertainty for the tail index, e.g. twice the uncertainty obtained by the MLE. Then select the hyperparameters with the greatest distortion metric (e.g. the average downweighting factor $\\sum_{i=1}^{n}(1-W(y_i))\/n$) for which the tail index uncertainty is still within the acceptable range.\\n\\end{remark}\\n\\n\\n\\subsection{Choice of model complexity} \\label{sec:em:complex}\\nThe above GEM algorithm assumes a fixed number of mixture components $J$. However, it is important to control the model complexity by choosing an appropriate $J$ which allows enough flexibility to capture the distributional characteristics without over-fitting. \\n\\nThe first criterion is motivated by maximizing the expected weighted log-likelihood\\n\\begin{equation} \\label{eq:em:e_wgt_ll}\\nn\\times\\tilde{E}[\\mathcal{L}^{*}(\\bm{\\Phi};\\bm{Y})]=n\\times\\tilde{E}\\left[W(Y)\\log\\frac{h(Y;\\bm{\\Phi})W(Y)}{\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du}\\right],\\n\\end{equation}\\nwhere the expectation is taken over $Y$ under the true model generating the observations. Without knowing the true model (in real data applications), Equation (\\ref{eq:em:e_wgt_ll}) is approximated by $\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})$ in Equation (\\ref{eq:loglik_weight}) with fitted model parameters $\\hat{\\bm{\\Phi}}$. Note that this approximation is positively biased, with correction term $\\text{tr}(-\\bm{\\Gamma}^{-1}\\bm{\\Lambda})$ as shown by \\cite{konishi1996generalised}. 
This leads to a robustified AIC\\n\\begin{equation}\\n\\text{RAIC}=-2\\times\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})+2\\times\\text{tr}(-\\hat{\\bm{\\Gamma}}^{-1}\\hat{\\bm{\\Lambda}}).\\n\\end{equation}\\n\\nAnalogously, since AIC-type criteria often choose excessively complex models, we also consider the robustified BIC given by\\n\\begin{equation}\\n\\text{RBIC}=-2\\times\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})+(\\log n)\\times\\text{tr}(-\\hat{\\bm{\\Gamma}}^{-1}\\hat{\\bm{\\Lambda}}).\\n\\end{equation}\\n\\nWe choose the $J$ that minimizes either the RAIC or the RBIC, and the $(p,p)$-th element of $-\\hat{\\bm{\\Gamma}}^{-1}\\hat{\\bm{\\Lambda}}$ can be interpreted as the effective number of parameters attributed to the $p^{\\text{th}}$ parameter.\\n\\nInsurance loss datasets are often characterized by complicated, multimodal distributions of very small claims, yet it is not meaningful to capture all these small modes by choosing an overly complex mixture distribution with a large $J$. However, the above RAIC and RBIC cannot effectively remove such mixture components, as the effective number of parameters for those capturing the smaller claims could be very small if $W(\\cdot)$ is chosen to be very small over the region of small claims. 
To effectively remove components which excessively capture the small claims, we propose treating all parameters as ``full parameters\", which results in the following truncated AIC and BIC:\\n\\begin{equation}\\n\\text{TAIC}=-2\\times\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})+2\\times P,\\n\\end{equation}\\n\\begin{equation}\\n\\text{TBIC}=-2\\times\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})+\\left(\\log \\sum_{i=1}^{n}W(y_i)\\right)\\times P.\\n\\end{equation}\\n\\n\\begin{remark} \\label{rmk:tic}\\nThe above TAIC and TBIC are motivated by the bias of approximating $n\\times\\tilde{E}[\\mathcal{L}^{*}(\\bm{\\Phi};\\bm{Y})]$ by the empirical truncated log-likelihood $\\mathcal{L}^{**}_n(\\hat{\\bm{\\Phi}};\\bm{y}):=\\sum_{i=1}^{n}V_i(y_i)\\log \\frac{h(y_i;\\bm{\\Phi})W(y_i)}{\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du}$ instead of $\\mathcal{L}_n^{*}(\\bm{\\Phi};\\bm{y})$, where $V_i(y)\\sim\\text{Bernoulli}(W(y))$ is an indicator randomly discarding some observations. It can be easily shown (details in Section 8 of the supplementary material) that the asymptotic bias is simply $P$, with effective number of observations $\\sum_{i=1}^{n}W(y_i)$. Note also that the weighted log-likelihood $\\mathcal{L}_n^{*}(\\bm{\\Phi};\\bm{y})$ is asymptotically equivalent to the truncated log-likelihood $\\mathcal{L}^{**}_n(\\bm{\\Phi};\\bm{y})$, except that the former produces more accurate estimated parameters than the latter. This motivates our choice of evaluating $\\mathcal{L}_n^{*}(\\bm{\\Phi};\\bm{y})$ instead of $\\mathcal{L}^{**}_n(\\bm{\\Phi};\\bm{y})$ in the TAIC and TBIC.\\n\\end{remark}\\n\\n\\section{Illustrating examples} \\label{sec:ex}\\nIn this section, we analyze the performance of our proposed MWLE approach (Equation (\\ref{eq:loglik_weight})) on the FMM given by Equation (\\ref{eq:model}). 
In the following examples, we select the Gamma density for the body components $f_b$, a light-tailed distribution to capture the distributional multimodality of small to moderate claims, and the Lomax density for the tail component $f_t$ to extrapolate well the tail-heaviness of larger observations. Then, Equation (\\ref{eq:model}) becomes\\n\\begin{equation} \\label{eq:em:density_mixture}\\nh(y;\\bm{\\Phi})\\n=\\sum_{j=1}^J\\pi_jf_b(y;\\mu_j,\\phi_j)+\\pi_{J+1}f_t(y;\\theta,\\gamma),\\n\\end{equation}\\nwhere the parameter set is re-expressed as $\\bm{\\Phi}=(\\bm{\\pi},\\bm{\\mu},\\bm{\\phi},\\gamma)$ with $\\bm{\\varphi}_j=(\\mu_j,\\phi_j)$ and $\\bm{\\eta}=(\\theta,\\gamma)$, and the Gamma and Lomax densities $f_b$ and $f_t$ are respectively given by\\n\\begin{equation} \\label{eq:em:comp}\\nf_b(y;\\mu,\\phi)=\\frac{(\\phi\\mu)^{-1\/\\phi}}{\\Gamma(1\/\\phi)}y^{1\/\\phi-1}e^{-y\/(\\phi\\mu)}\\quad \\text{and} \\quad f_t(y;\\theta,\\gamma)=\\frac{\\gamma\\theta^{\\gamma}}{\\left(y+\\theta\\right)^{\\gamma+1}},\\n\\end{equation}\\nwhere $\\mu$ and $\\phi$ are the mean and dispersion parameters of the Gamma distribution, while $\\gamma$ is the tail index parameter and $\\theta$ the scale parameter of the Lomax distribution. Note that the above model is a regularly varying distribution with the tail behavior predominantly explained by the tail index $\\gamma$. As a result, tail-robustness is largely determined by how stable and accurate the estimated tail index $\\gamma$ is.\\n\\nThe specifications of the body and tail component functions are mainly motivated by the key characteristics of insurance claim severity distributions (multimodal distribution of small claims, existence of extremely large claims, mismatch between body and tail behavior, etc.) which will be illustrated in the real insurance data application section. 
While we do not preclude other specifications, such as Weibull for the body and Inverse-Gamma for the tail, that are also plausible for insurance applications, in this section we focus on the Gamma-Lomax combination to stay within the scope of this paper -- demonstrating the usefulness of the proposed MWLE rather than performing distributional comparisons under the FMM.\\n\\n\\subsection{Toy example} \\label{sec:ex:toy}\\n\\nWe demonstrate how the proposed MWLE framework works through a simple toy example of the one-parameter Lomax distribution $H(y;\\gamma)=1-(y+1)^{-\\gamma}$ ($y>0$), which is a special case of Equation (\\ref{eq:em:density_mixture}) with $J=0$ and $\\theta=1$.\\n\\nConsider first the case where the true model $G(\\cdot)$ is a Lomax with $\\gamma=\\gamma_0=1$. For the weight function of the MWLE, we consider the form of Equation (\\ref{eq:em:wgt_func}) with $\\xi=0$ for simplicity. We test across a wide range of $\\tilde{\\mu}$ and across $\\tilde{\\phi}\\in\\{0.1, 0.2, 0.5, 1\\}$. Figure \\ref{fig:thm_aeff} presents how the choices of these hyperparameters affect the AEFF. Starting from $\\text{AEFF}=1$ when $\\tilde{\\mu}=0$, which is equivalent to the standard MLE, the AEFF decreases monotonically as $\\tilde{\\mu}$ increases. This is intuitive because under-weighting smaller observations with the MWLE means effectively discarding some observed information, leading to larger parameter uncertainties compared to the MLE. 
Since the MLE of the tail index is unbiased under the true model, there is obviously no benefit in using the proposed MWLE to fit the true model.\\n\\n\\begin{figure}[!h]\\n\\begin{center}\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/thm_aeff_1.pdf}\\n\\end{subfigure}\\n\\hfill\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/thm_aeff_2.pdf}\\n\\end{subfigure}\\n\\end{center}\\n\\caption{AEFF as a function of the weight location hyperparameter $\\tilde{\\mu}$ (left panel) or the Pareto (Lomax) quantile of $\\tilde{\\mu}$ (right panel) under the Lomax true model.}\\n\\label{fig:thm_aeff}\\n\\end{figure}\\n\\nNow, consider the second case where the true model is perturbed by the contamination function $M$, as presented in Equation (\\ref{eq:asym_contam}). In this demonstration example, we consider the following two choices for the contamination function $M$:\\n\\begin{itemize}\\n\\item Degenerate perturbation: One-point distribution on $y=1\/4$\\n\\item Pareto perturbation: Lomax distribution with tail index $\\gamma=\\gamma^{*}=4>\\gamma_0$\\n\\end{itemize}\\nNote that the contamination function is relatively lighter-tailed and hence does not affect the tail behavior of the perturbed distribution. In Figure \\ref{fig:thm_if}, we present the IF as a function of the AEFF (determined as a function of the chosen $\\tilde{\\mu}$) under the two choices of $M$. We find that as the AEFF reduces (by choosing a larger $\\tilde{\\mu}$), the IF shrinks towards zero. 
This reflects that a more robust estimation of the tail index can be achieved using the proposed MWLE approach by trading off some efficiency.\\n\\n\\begin{figure}[!h]\\n\\begin{center}\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/thm_if_1.pdf}\\n\\end{subfigure}\\n\\hfill\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/thm_if_2.pdf}\\n\\end{subfigure}\\n\\end{center}\\n\\caption{IF as a function of AEFF under degenerate (left panel) and Pareto (right panel) contaminations.}\\n\\label{fig:thm_if}\\n\\end{figure}\\n\\n\\n\\n\\subsection{Simulation studies} \\label{sec:ex:sim}\\n\\subsubsection{Simulation settings}\\nWe simulate $n=10,000$ claims (the sample size is motivated by the size of a typical insurance portfolio) from the aforementioned $J$-Gamma Lomax distribution for each of the following two parameter settings with $\\theta=1000$:\\n\\begin{itemize}\\n \\item Model 1: $J=2$, $\\bm{\\pi}=(0.4,0.4,0.2)$, $\\bm{\\mu}=(100,300)$, $\\bm{\\phi}=(0.25,0.25)$ and $\\gamma=2$.\\n \\item Model 2: $J=3$, $\\bm{\\pi}=(0.4,0.3,0.1,0.2)$, $\\bm{\\mu}=(50,200,600)$, $\\bm{\\phi}=(0.2,0.2,0.2)$ and $\\gamma=2$.\\n\\end{itemize}\\n\\nWe also consider the zero-inflated Gamma distribution given by Equation (\\ref{eq:em:wgt_func}) as the weight function, with $\\tilde{\\mu}\\in\\{q_{0},q_{0.9},q_{0.95},q_{0.99},q_{0.995}\\}$, $\\tilde{\\phi}\\in\\{0.025,0.1,0.25,1\\}$ and $\\xi\\in\\{0.001,0.01,0.05,0.25\\}$, where $q_{\\alpha}$ is the empirical quantile of the data with $0\\leq \\alpha\\leq 1$. Recall that the choice of $\\tilde{\\mu}=q_0=0$ implies that $W(y;\\xi,\\tilde{\\mu},\\tilde{\\phi})= 1$ and hence the MWLE is equivalent to the standard MLE. For each combination of model and weight function hyperparameters, the simulation is repeated 100 times to enable a thorough analysis of the results using the proposed weighted log-likelihood approach under various settings. 
Each simulated sample is then fitted to the $J$-Gamma Lomax mixture in Equation (\\ref{eq:em:density_mixture}) with $J=2$. Note that for simplicity, in the simulation studies we do not examine the choice of $J$ as outlined in Section \\ref{sec:em:complex}. As a result, we have the following research goals in the simulation studies:\\n\\begin{itemize}\\n\\item Under Model 1, the data is fitted to the true class of models. Hence, we empirically verify the consistency of the estimated model parameters (Theorem \\ref{thm:asym_tru}) under the MWLE. We also study how the selection of weight function hyperparameters affects the estimated parameter uncertainties. Further, we compare the computational efficiency of the two proposed GEM algorithms.\\n\\item Under Model 2, the data is fitted to a misspecified class of models. Hence, we demonstrate how this distorts the estimation of the tail under the MLE, and study how the proposed MWLE produces a more robust tail estimation.\\n\\end{itemize}\\n\\n\\subsubsection{Results of fitting Model 1 (true model)}\\n\\nConsidering the case where we fit the true class of models to the data generated by Model 1, we first compare the computational efficiencies of the two construction methods of the GEM algorithm presented in Section \\ref{sec:em:gem}. In general, around 100 iterations are needed under the parameter transformation approach (Method 2), as compared to at least 300 iterations under the hypothetical data approach (Method 1), revealing relatively faster convergence under Method 2. Figure \\ref{fig:sim_tru_rel} plots the relative change of iterated parameters $\\Delta^{\\text{rel}}\\bm{\\Phi}^{(l)}:=|\\log(\\bm{\\Phi}^{(l)}\/\\bm{\\Phi}^{(l-1)})|\/P$ versus the GEM iteration $l$ under two example choices of weight function hyperparameters, where the division operator is applied element-wise to the vector of parameters. 
It is apparent that the curve drops much faster under Method 2 than under Method 1, confirming the faster convergence of Method 2. The main reason is that the construction of hypothetical missing observations under Method 1 generally reduces the effective learning rate of the optimization algorithm. As both methods produce very similar estimated parameters while Method 2 is more computationally efficient, from now on we only present the results produced by the GEM algorithm under Method 2.\\n\\n\\begin{figure}[!h]\\n\\begin{center}\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_rel1.pdf}\\n\\end{subfigure}\\n\\hfill\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_rel2.pdf}\\n\\end{subfigure}\\n\\end{center}\\n\\caption{The relative change of iterated parameters in the first 100 GEM iterations under two example choices of weight function hyperparameters: Left panel -- $(\\xi,\\tilde{\\mu},\\tilde{\\phi})=(0.01,q_{0.95},0.1)$; Right panel -- $(\\xi,\\tilde{\\mu},\\tilde{\\phi})=(0.05,q_{0.99},0.25)$.}\\n\\label{fig:sim_tru_rel}\\n\\end{figure}\\n\\nFigure \\ref{fig:sim_tru_gamma} demonstrates how the bias and uncertainty of the estimated tail index $\\hat{\\gamma}$ differ among various choices of weight functions and their corresponding hyperparameters $(\\xi,\\tilde{\\mu},\\tilde{\\phi})$. From the left panel, the median estimated parameters are very close to the true model parameters (differing by less than 1-2\\%) under most settings of the weight functions, except for a few extreme cases where both $\\xi$ and $\\tilde{\\phi}$ are chosen to be very small. This empirically justifies the asymptotic unbiasedness of the MWLE. As expected from the right panel, the uncertainties of the MLE parameters are the smallest, verifying that the MLE is asymptotically the most efficient estimator among all unbiased estimators if we are fitting the correct model class. 
The parameter uncertainties generally increase slightly as we choose a larger $\\tilde{\\mu}$ to de-emphasize the impacts of smaller observations. In some extreme cases where $\\xi$ and $\\tilde{\\phi}$ are very small and $\\tilde{\\mu}$ is very large, the standard error can grow dramatically, reflecting that a lot of information is effectively discarded. \\n\\nSimilarly, in Figure \\ref{fig:sim_tru_mu1}, where the bias and uncertainty of the estimated mean parameter $\\hat{\\mu}_1$ of the body distribution are displayed, we observe that the proposed MWLE approach behaves properly for fitting the body distribution unless $\\xi$ and $\\tilde{\\phi}$ are both chosen to be extremely small (in which case the estimated body parameters become unstable with inflated uncertainties). Hence, these extreme choices of hyperparameters are deemed inappropriate.\\n\\n\\begin{figure}[!h]\\n\\begin{center}\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_gamma_m.pdf}\\n\\end{subfigure}\\n\\hfill\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_gamma_sd.pdf}\\n\\end{subfigure}\\n\\end{center}\\n\\caption{Median and standard deviation of the estimated tail index $\\hat{\\gamma}$ versus various weight function hyperparameters under the true model.}\\n\\label{fig:sim_tru_gamma}\\n\\end{figure}\\n\\n\\begin{figure}[!h]\\n\\begin{center}\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_mu1_m.pdf}\\n\\end{subfigure}\\n\\hfill\\n\\begin{subfigure}[h]{0.49\\linewidth}\\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_mu1_sd.pdf}\\n\\end{subfigure}\\n\\end{center}\\n\\caption{Median and standard deviation of the estimated mean parameter of the first mixture component $\\hat{\\mu}_1$ versus various weight function hyperparameters under the true model.}\\n\\label{fig:sim_tru_mu1}\\n\\end{figure}\\n\\n\\subsubsection{Results of fitting Model 2 (misspecified 
model)}\n\nWe now turn to the case where we fit a misspecified model (with $J=2$) to the simulated data generated from Model 2 (with $J=3$). The left panels of Figures \ref{fig:sim_mis_gamma} and \ref{fig:sim_mis_tailp} examine how the robustness of the estimated tail index $\hat{\gamma}$ and tail probability $\hat{\pi}_{J+1}$ differs among different choices of hyperparameters $(\xi,\tilde{\mu},\tilde{\phi})$. From the left panel, the MLE of the tail index is around $\hat{\gamma}=2.48$, which substantially over-estimates the true tail index $\gamma=2$, indicating that the heavy-tailedness of the true distribution is under-extrapolated. On the other hand, with the incorporation of weight functions to under-weight the smaller claims, the biases of the MWLE of $\gamma$ are greatly reduced compared to those of the MLE under most choices of weight function hyperparameters. In particular, the bias reduction for the tail index is more effective using smaller $\xi$ (i.e. $\xi\leq 0.05$). This is intuitive as smaller $\xi$ means smaller claims are under-weighted to a larger extent, reducing the impacts of smaller claims on the tail index estimation. Similarly, from the right panel, the proposed MWLE approach effectively reduces the bias of the estimated tail probability $\hat{\pi}_{J+1}$. \n\nThe analysis of the bias-variance trade-off is also conducted by computing the mean-squared errors (MSE) of both the estimated tail index $\hat{\gamma}$ and tail probability $\hat{\pi}_{J+1}$. 
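For reproducibility, the summary statistics used in these comparisons (median bias, standard deviation and MSE across simulation replications) can be computed as in the following sketch; the replicate values below are synthetic placeholders, not the actual simulation output.

```python
import numpy as np

def summarize_estimates(estimates, true_value):
    """Median bias, sample standard deviation, and MSE of replicate estimates."""
    est = np.asarray(estimates, dtype=float)
    median_bias = np.median(est) - true_value
    std = est.std(ddof=1)
    mse = np.mean((est - true_value) ** 2)  # MSE = bias^2 + variance
    return median_bias, std, mse

# Synthetic placeholder replicates of the tail index (true gamma = 2):
# an MLE-like estimator with large bias, and an MWLE-like one with small bias
# but larger variance.
rng = np.random.default_rng(0)
mle_hat = rng.normal(2.48, 0.05, size=200)
mwle_hat = rng.normal(2.05, 0.12, size=200)

for name, est in [("MLE", mle_hat), ("MWLE", mwle_hat)]:
    bias, sd, mse = summarize_estimates(est, true_value=2.0)
    print(f"{name}: median bias={bias:.3f}, sd={sd:.3f}, MSE={mse:.4f}")
```

The MSE decomposition makes the trade-off explicit: a modest increase in variance under down-weighting can be more than offset by a large reduction in squared bias.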
From the right panels of Figures \ref{fig:sim_mis_gamma} and \ref{fig:sim_mis_tailp}, as evidenced by smaller MSEs under most choices of weight function hyperparameters, the MWLE is clearly preferable to the MLE approach even after accounting for the increased parameter uncertainties caused by down-weighting the importance of smaller claims.\n\n\begin{figure}[!h]\n\begin{center}\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/sim_mis_gamma_m.pdf}\n\end{subfigure}\n\hfill\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/sim_mis_gamma_mse.pdf}\n\end{subfigure}\n\end{center}\n\caption{Median and MSE of the estimated tail index $\hat{\gamma}$ versus various weight function hyperparameters under the misspecified model.}\n\label{fig:sim_mis_gamma}\n\end{figure}\n\n\begin{figure}[!h]\n\begin{center}\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/sim_mis_tailp_m.pdf}\n\end{subfigure}\n\hfill\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/sim_mis_tailp_mse.pdf}\n\end{subfigure}\n\end{center}\n\caption{Median and MSE of the estimated tail probability $\hat{\pi}_{J+1}$ versus various weight function hyperparameters under the misspecified model.}\n\label{fig:sim_mis_tailp}\n\end{figure}\n\n\subsubsection{Summary remark on the choice of weight function hyperparameters}\nFrom the above two simulation studies, we find that under a wide range of choices of weight function hyperparameters, the proposed MWLE not only produces plausible model estimates under the true model (Model 1), but is also effective in mitigating the bias of tail estimation arising from model misspecification (Model 2).\n\nAmong the three hyperparameters $(\xi,\tilde{\mu},\tilde{\phi})$, the choice of the minimum weight hyperparameter $\xi$ plays a particularly vital role in the bias-variance trade-off of the estimated parameters. 
Under the misspecified model (Model 2), smaller $\xi$ (i.e. $\xi\leq 0.05$) is more effective in reducing the biases of both the estimated tail index $\hat{\gamma}$ and tail probability $\hat{\pi}_{J+1}$. However, as evidenced by the results produced under the true model (Model 1), the estimated parameters of the body distributions (i.e. $\hat{\bm{\mu}}$ and $\hat{\bm{\phi}}$) may become prohibitively unstable if $\xi$ is chosen to be extremely small (i.e. $\xi\leq 0.001$) such that smaller observations are effectively almost fully discarded. It is therefore important to compare the parameter uncertainties of the MWLE to those of the MLE, and to consider only the weight function hyperparameters for which the corresponding MWLE parameter uncertainties are within an acceptable range (i.e. not too far from the MLE parameter uncertainties). Overall, choices of $\xi$ between 0.01 and 0.05 are deemed to be suitable.\n\n\subsection{Real data analysis} \label{sec:ex:real}\n\subsubsection{Data description and background} \label{sec:ex:real:dat}\nIn this section, we study an insurance claim severity dataset kindly provided by a major insurance company operating in Greece. It consists of 64,923 motor third-party liability (MTPL) insurance policies with non-zero property claims for underwriting years 2013 to 2017. This dataset is also analyzed by \cite{fung2021mixture} using a mixture composite model, with an emphasis on selecting various policyholder characteristics (explanatory variables) which significantly influence the claim severities. The empirical claim severity distribution exhibits several peculiar characteristics, including multimodality and tail-heaviness. The excessive number of distributional nodes for small claims reflects the possibility of distributional contamination, which cannot be and should not be perfectly captured and over-fitted by parametric models like the FMM. 
Preliminary analyses also suggest that the estimated tail index is around 1.3 to 1.4, but note that these are only rough and subjective estimates. The details of the preliminary data analysis are provided in Section 9 of the supplementary materials. The key goals of this real data analysis are as follows:\n\begin{enumerate}\n\item Illustrate that the MLE of the FMM produces highly unstable and non-robust estimates of the tail part of the claim severity distribution. This confirms that tail-robustness is an important research problem in real insurance claim severity modelling which needs to be properly addressed.\n\item Demonstrate how the proposed MWLE approach leads to superior fittings of the tail and more reliable estimates of the tail index compared to the MLE, without sacrificing much of its ability to adequately capture the body.\n\end{enumerate}\n\nTo avoid diluting the focus of this paper, in this analysis we solely examine the distributional fitting of the claim sizes without considering the explanatory variables. Note, however, that the proposed MWLE can be extended to a regression framework, with the discussion deferred to Section \ref{sec:discussion}.\n\n\n\subsubsection{Fitting results}\nThe claim severity dataset is fitted to the mixture Gamma-Lomax distribution with density given by Equation (\ref{eq:em:density_mixture}) under the proposed MWLE approach. The fitting performances are examined thoroughly across different numbers of Gamma (body) mixture components $J\in\{1,2,\ldots,10\}$ and various choices of weight function hyperparameters ($\tilde{\mu}\in\{q_{0},q_{0.9},q_{0.95},q_{0.99},q_{0.995}\}$, $\tilde{\phi}\in\{0.025,0.1,0.25,1\}$ and $\xi\in\{0.001,0.01,0.05,0.25\}$). 
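The experimental grid just described can be enumerated mechanically. The sketch below builds all combinations; the quantile labels are string placeholders for the empirical quantiles $q_{0},\ldots,q_{0.995}$ of the claim data.

```python
from itertools import product

# Placeholders for the empirical quantiles used as the location hyperparameter
mu_grid = ["q0", "q0.9", "q0.95", "q0.99", "q0.995"]
phi_grid = [0.025, 0.1, 0.25, 1]       # scale hyperparameter
xi_grid = [0.001, 0.01, 0.05, 0.25]    # minimum weight hyperparameter
J_grid = range(1, 11)                  # number of Gamma body components

# Every (J, mu, phi, xi) setting fitted in the experiment
settings = list(product(J_grid, mu_grid, phi_grid, xi_grid))
print(len(settings))  # 10 * 5 * 4 * 4 = 800 model fits
```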
The MWLE fitted parameters are also compared to the standard MLE across various $J$.\n\nWe first present in Figure \ref{fig:greek:gamma} the fitted tail index $\hat{\gamma}$ versus the number of body components $J$ under all combinations of selected weight function hyperparameters. Each of the four sub-figures corresponds to a particular choice of $\xi\in\{0.001,0.01,0.05,0.25\}$. The thick black curves in each sub-figure are the MLE estimated tail indexes, shown for comparison purposes. The MLE tail indexes are rather unstable, as evidenced by large fluctuations across different numbers of body components $J$, showing that the MLE may not be reliable in extrapolating the heavy-tailedness of complex claim distributions.\nFor instance, with a slight change of model specification from $J=5$ to $J=6$, the estimated tail index drops sharply from about 1.8 to 1.5. This is rather unnatural because the change from $J=5$ to $J=6$ should only reflect a slight change in specifying the body.\nThe large drop of the estimated tail index reflects that the Lomax tail part of the FMM is not specialized in extrapolating the tail-heaviness of the distribution, but instead is very sensitive to the small claims and the model specifications of the body part.\nTherefore, we conclude that the mixture Gamma-Lomax FMM does not achieve its modelling purpose under the MLE.\n\nOn the other hand, looking individually at each path under the MWLE, we find that the estimated $\hat{\gamma}$ is much more stable across different $J$ under most choices of weight function hyperparameters, especially when $J\geq 5$. Also, the estimated MWLE $\hat{\gamma}$ is in general smaller than the $\hat{\gamma}$ obtained by the MLE, moving closer to the values roughly determined by the preliminary data analysis in Section \ref{sec:ex:real:dat}. Note in the figure that there are a few black solid dots, which appear when the estimated $\hat{\gamma}$ under the MWLE is outside the range of the plots. 
These unstable estimates of $\hat{\gamma}$ are rare and only occur under one of the following two situations: (i) $J$ is chosen to be very small (i.e. $J\leq 2$), in the sense that the models would severely under-fit the distributional complexity of the dataset; (ii) extreme choices of weight function hyperparameters (very small $\xi$ and $\tilde{\phi}$) are used, in line with the results of the simulation studies.\n\n\begin{figure}[!h]\n\begin{center}\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/greek_gamma1.pdf}\n\end{subfigure}\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/greek_gamma2.pdf}\n\end{subfigure}\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/greek_gamma3.pdf}\n\end{subfigure}\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/greek_gamma4.pdf}\n\end{subfigure}\n\end{center}\n\caption{Estimated tail index versus the number of Gamma mixture components under MLE and MWLE with various choices of weight function hyperparameters.}\n\label{fig:greek:gamma}\n\end{figure}\n\nThe optimal choice of $J$ is tricky, as evidenced by the excessive number of small distributional nodes for very small claim sizes described in Section \ref{sec:ex:real:dat}, which should not be over-emphasized or excessively modelled as these very small claims are almost irrelevant for pricing and risk management. However, both AIC and BIC decrease slowly and steadily for MLE models as $J$ increases. The optimal $J$ in this case goes well beyond $J=10$. Under the proposed MWLE approach with various choices of weight function hyperparameters, the same model selection problem exists using RAIC and RBIC, with the reasons already explained in Section \ref{sec:em:complex}. 
On the other hand, using TAIC and TBIC (especially TBIC), a majority of the selections of weight function hyperparameters leads to an optimal $J=5$, aligning with the heuristic arguments by \cite{fung2021mixture} that $J=5$ is enough for capturing all the distributional nodes except for the very small claims, which are smoothly approximated by a single mixture component.\n\nTo better understand how the use of the proposed MWLE affects the estimates of all parameters (not just the tail index but also parameters affecting the body distributions such as $\bm{\mu}$), we showcase in Table \ref{tab:greek:est_prm} all the estimated parameters and their standard errors (based on Equation (\ref{eq:asym_var})) using MWLE under two distinct example choices of hyperparameters (MWLE 1: $(\xi,\tilde{\mu},\tilde{\phi})=(0.01,q_{0.99},0.1)$; MWLE 2: $(\xi,\tilde{\mu},\tilde{\phi})=(0.05,q_{0.995},0.25)$) as compared to the MLE parameters with $J=5$, the optimal number of body components under TBIC for both MWLE 1 and MWLE 2. Note that the two selected examples are for demonstration purposes -- generally the following findings and conclusions are also valid for other choices of weight function hyperparameters under the proposed MWLE.\n\nWe first observe that the estimated parameters influencing the body (i.e. $\bm{\pi}$, $\bm{\mu}$ and $\bm{\phi}$) under the MWLE are very close to those under the MLE, even if the smaller claims are greatly down-weighted. The MWLE generally results in larger parameter uncertainties as compared with the MLE -- reflecting a bias-variance trade-off, but these standard errors are of the same order of magnitude and are still relatively immaterial compared to the estimates as the sample size $n=64,923$ is large. \n\nComparing the above two MWLE examples, we further notice that the parameter uncertainties under MWLE 1 are greater than those under MWLE 2. 
This is expected because the influences of smaller claims are down-weighted more under MWLE 1 than under MWLE 2 (as reflected by the smaller minimum weight hyperparameter $\xi$ chosen under MWLE 1). On the other hand, the estimated tail index $\hat{\gamma}$ under MWLE 1 is slightly closer to the heuristic values (i.e. 1.3 to 1.4) than that under MWLE 2. These observations again reflect the bias-variance trade-off among various choices of weight function hyperparameters.\n\n\begin{table}[!h]\n\centering\n\begin{tabular}{lrrrrrr}\n\toprule\n & \multicolumn{2}{c}{MWLE 1} & \multicolumn{2}{c}{MWLE 2} & \multicolumn{2}{c}{MLE} \\\\\n\cmidrule(l{3pt}r{3pt}){2-3} \cmidrule(l{3pt}r{3pt}){4-5} \cmidrule(l{3pt}r{3pt}){6-7}\n & \multicolumn{1}{c}{Estimates} & \multicolumn{1}{c}{Std. Error} & \multicolumn{1}{c}{Estimates} & \multicolumn{1}{c}{Std. Error} & \multicolumn{1}{c}{Estimates} & \multicolumn{1}{c}{Std. Error} \\\\\n\midrule\n$\pi_1$ & 0.3787 & 0.0053 & 0.3829 & 0.0031 & 0.3878 & 0.0022 \\\\\n$\pi_2$ & 0.0380 & 0.0036 & 0.0404 & 0.0021 & 0.0444 & 0.0014 \\\\\n$\pi_3$ & 0.1117 & 0.0024 & 0.1134 & 0.0020 & 0.1161 & 0.0017 \\\\\n$\pi_4$ & 0.0221 & 0.0059 & 0.0192 & 0.0021 & 0.0153 & 0.0008 \\\\\n$\pi_5$ & 0.2173 & 0.0036 & 0.2163 & 0.0022 & 0.2130 & 0.0019 \\\\\n\hline\n$\mu_1$ & 1,303.21 & 50.30 & 1,322.10 & 16.70 & 1,348.68 & 11.11 \\\\\n$\mu_2$ & 9,171.42 & 145.83 & 9,165.92 & 64.64 & 9,165.36 & 49.38 \\\\\n$\mu_3$ & 27,590.46 & 125.68 & 27,571.75 & 64.36 & 27,538.88 & 52.41 \\\\\n$\mu_4$ & 317,274.90 & 2,410.93 & 323,827.70 & 2,159.37 & 322,872.40 & 2,372.68 \\\\\n$\mu_5$ & 89,007.07 & 170.41 & 88,979.12 & 112.01 & 88,895.92 & 99.20 \\\\\n\hline\n$\phi_1$ & 0.9945 & 0.0175 & 0.9996 & 0.0121 & 1.0062 & 0.0113 \\\\\n$\phi_2$ & 0.0264 & 0.0089 & 0.0284 & 0.0030 & 0.0324 & 0.0020 \\\\\n$\phi_3$ & 0.0154 & 0.0015 & 0.0158 & 0.0007 & 0.0164 & 0.0005 \\\\\n$\phi_4$ & 0.0472 & 0.0033 & 0.0333 & 0.0025 & 0.0186 & 0.0020 \\\\\n$\phi_5$ & 0.0127 & 0.0007 & 
0.0126 & 0.0003 & 0.0122 & 0.0002 \\\\\n\hline\n$\gamma$ & 1.5353 & 0.0707 & 1.6153 & 0.0586 & 1.7963 & 0.0471 \\\\\n$\theta$ & 62,637.42 & 7,829.79 & 73,604.51 & 6,630.65 & 101,107.20 & 5,088.29\\\\\n\hhline{=======}\n\end{tabular}\n\caption{\label{tab:greek:est_prm}Estimated parameters and standard errors under MLE and MWLE approaches with $J=5$.}\n\end{table}\n\nThe Q-Q plot in Figure \ref{fig:greek:qq} suggests that the fitting results are satisfactory under both MWLE and MLE except for the very immaterial claims (i.e. $y<100$). Note, however, that due to the log-scale nature of the Q-Q plot, it is hard to examine from the plot how well the fitted models extrapolate the tail-heaviness of the claim severity data. To examine the tail behavior of the fitted models, we present the log-log plot in the left panel of Figure \ref{fig:greek:loglog}, with the axis shifted to include large claims only. We observe that for extreme claims (i.e. claim amounts greater than about 0.5 million, or $\log y>13$), the logged survival probability produced by the MLE fitted model diverges quite significantly from that of the empirical observations. Such a divergence can effectively be mitigated by using the MWLE with either of the hyperparameter settings. \n\nWe further compute the value-at-risk (VaR) and conditional tail expectation (CTE) at the $100q^{\text{th}}$ security level (denoted as $\text{VaR}_q(Y;\hat{\bm{\Phi}})$ and $\text{CTE}_q(Y;\hat{\bm{\Phi}})$ respectively) from the fitted models, and compare them to the empirical values from the severity data (denoted as $\widehat{\text{VaR}}_q(Y)$ and $\widehat{\text{CTE}}_q(Y)$ respectively). The results are summarized in Table \ref{tab:greek:risk}. Both MLE and MWLE produce plausible estimates of VaR and CTE up to security levels of 95\% and 75\% respectively, reflecting the ability of both approaches in capturing the body part of the severity distribution. 
Nonetheless, the MLE fitted model shows significant divergences of VaR and CTE from the empirical data at higher security levels. In particular, the 99\%-CTE and 99.5\%-CTE are largely underestimated by the MLE approach. Such a divergence is effectively reduced by the proposed MWLE approach, where superior fittings of the tail are obtained. Further, MWLE 1 seems to perform slightly better than MWLE 2 in terms of tail fitting, as reflected by smaller underestimations of the CTEs at high security levels. This provides a plausible trade-off against the increased parameter uncertainties under MWLE 1 mentioned previously.\n\nTo visualize the results, we further plot the relative misfit of VaR, given by $\log(\text{VaR}_q(Y;\hat{\bm{\Phi}})\/\widehat{\text{VaR}}_q(Y))$, versus the log survival probability $\log (1-q)\in(-7.5,-4.7)$, equivalent to the range of security levels from 99\% to 99.95\%, in the right panel of Figure \ref{fig:greek:loglog}. We observe that the MLE fitted model over-estimates the VaR of large claims (security levels between 99\% and 99.8\%) but then largely under-extrapolates the extreme claims (security levels beyond 99.8\%). This issue is well mitigated by the MWLE, where the misfits of VaR are smaller in both regions. Therefore, we conclude that the proposed MWLE effectively improves the goodness-of-fit of the tail part of the distribution (as compared to the MLE) without sacrificing much of its flexibility to adequately capture the body part. 
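The empirical benchmarks $\widehat{\text{VaR}}_q(Y)$ and $\widehat{\text{CTE}}_q(Y)$ used above, and the relative misfit plotted in Figure \ref{fig:greek:loglog}, are simple to compute from a claim sample; a minimal sketch follows (the sample is simulated from a plain Lomax distribution with illustrative parameters, not the Greek MTPL data).

```python
import numpy as np

def empirical_var(y, q):
    """Empirical value-at-risk: the 100q-th percentile of the sample."""
    return np.quantile(y, q)

def empirical_cte(y, q):
    """Empirical conditional tail expectation: mean loss at or above VaR_q."""
    y = np.asarray(y, dtype=float)
    return y[y >= empirical_var(y, q)].mean()

# Illustrative heavy-tailed sample: Lomax(gamma=1.5, theta=60000),
# sampled by inverting the survival function (1 + y/theta)^(-gamma) = u.
rng = np.random.default_rng(1)
u = rng.uniform(size=64_923)
claims = 60_000.0 * (u ** (-1.0 / 1.5) - 1.0)

for q in (0.95, 0.99, 0.995):
    print(q, round(empirical_var(claims, q)), round(empirical_cte(claims, q)))

# Relative misfit of VaR as plotted: log(VaR_fit / VaR_emp); values illustrative.
misfit = np.log(719_000 / 676_000)  # positive => fitted model over-estimates VaR
```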
\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_qq.jpg}\n\\end{subfigure}\n\\end{center}\n\\caption{Q-Q plot under MLE and MWLE with two selected combinations of weight function hyperparameters.}\n\\label{fig:greek:qq}\n\\end{figure}\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_loglog.pdf}\n\\end{subfigure}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_misfit.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{Left panel: log-log plot of fitted models compared to empirical data; Right panel: misfit of logged claim amounts versus logged survival probabilities under three fitted models.}\n\\label{fig:greek:loglog}\n\\end{figure}\n\n\\begin{table}[!h]\n\\centering\n\\begin{tabular}{rrrrrlrrrrr}\n\\toprule\n \\multicolumn{5}{c}{VaR ('000)} & & \\multicolumn{5}{c}{CTE ('000)} \\\\\n\\cmidrule(l{3pt}r{3pt}){1-5} \\cmidrule(l{3pt}r{3pt}){7-11}\n \\multicolumn{1}{c}{Level} & \\multicolumn{1}{c}{MLE} & \\multicolumn{1}{c}{MWLE 1} & \\multicolumn{1}{c}{MWLE 2} & \\multicolumn{1}{c}{Empirical} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{Level} & \\multicolumn{1}{c}{MLE} & \\multicolumn{1}{c}{MWLE 1} & \\multicolumn{1}{c}{MWLE 2} & \\multicolumn{1}{c}{Empirical} \\\\\n\\midrule\n50\\% & 21 & 21 & 21 & 21 & & 0\\% & 109 & 112 & 111 & 116 \\\\\n75\\% & 83 & 82 & 82 & 82 & & 50\\% & 174 & 180 & 177 & 187 \\\\\n95\\% & 190 & 191 & 187 & 182 & & 75\\% & 468 & 505 & 489 & 536 \\\\\n99\\% & 461 & 450 & 445 & 452 & & 90\\% & 1,140 & 1,326 & 1,242 & 1,498 \\\\\n99.5\\% & 719 & 693 & 674 & 676 & & 95\\% & 1,711 & 2,115 & 1,948 & 2,455 \\\\\n99.75\\% & 1,149 & 1,031 & 1,046 & 1,075 & & 99\\% & 2,533 & 3,379 & 3,063 & 4,057 \\\\\n99.95\\% & 2,787 & 3,163 & 3,220 & 3,348 & & 99.5\\% & 5,956 & 10,243 & 8,572 & 13,329\\\\\n\\hhline{===========}\n\\end{tabular}\n\\caption{VaR and 
CTE (in thousands) estimated by MLE, MWLE and empirical approaches.}\n\label{tab:greek:risk}\n\end{table}\n\n\n\n\section{Discussions} \label{sec:discussion}\nIn this paper, we introduce a maximum weighted log-likelihood estimation (MWLE) approach to robustly estimate the tail part of finite mixture models (FMM) while preserving the capability of the FMM to flexibly capture the complex distributional phenomena of the body part. Asymptotic theories justify the unbiasedness and robustness of the proposed estimator. On the computational side, the applicability of an EM-based algorithm for efficient parameter estimation makes the proposed MWLE distinctive compared to the existing literature on weighted likelihood approaches. Through several simulation studies and a real data analysis, we empirically confirm that the proposed MWLE approach is more appropriate in specifying the tail part of the distribution compared to the MLE, and at the same time it still preserves the flexibility of the FMM in fitting the smaller observations.\n\nAnother advantage of the MWLE not yet mentioned in this paper is its extensibility. First, the proposed MWLE is not restricted to FMM but is also applicable to any continuous or discrete distribution. Second, the MWLE can easily be extended to regression settings, which is crucial from an insurance pricing perspective as insurance companies often determine different premiums across policyholders based on individual attributes (e.g. age, geographical location and past claim history). In regression settings, we define $\bm{x}=(\bm{x}_1,\ldots,\bm{x}_n)$ as the covariate vectors for each of the $n$ observations. 
The weighted log-likelihood function in Equation (\ref{eq:loglik_weight}) is then re-expressed as\n\begin{equation} \label{eq:loglik_weight_reg}\n\mathcal{L}^*_n(\bm{\Phi};\bm{y},\bm{x})=\sum_{i=1}^{n}W(y_i)\log \frac{h(y_i;\bm{\Phi},\bm{x}_i)W(y_i)}{\int_{0}^{\infty}h(u;\bm{\Phi},\bm{x}_i)W(u)du}\n\end{equation}\nfor some regression model with density function $h(y_i;\bm{\Phi},\bm{x}_i)$. The asymptotic properties still hold, subject to further regularity conditions on the covariates $\bm{x}_i$. For parameter estimation using the GEM algorithm, only the hypothetical data approach (Method 1, which converges more slowly than Method 2 as shown in Section \ref{sec:ex}) works, because the transformed mixing probabilities in Equation (\ref{eq:em:pi_trans}) under Method 2 are assumed to be homogeneous across all observations. We leave the theoretical details, together with more empirical studies and applications, to future research.\n\n\n\n\n\bibliographystyle{abbrvnat}\n\n\section{Regularity conditions for asymptotic theory} \label{apx:asym_reg}\nLet $h(y;\bm{\Phi})$ be the density function of $Y$ with parameter $\bm{\Phi}$ taking values in the parameter space $\bm{\Omega}$. For a more concise presentation of the regularity conditions, we here write $\bm{\Phi}=(\psi_1,\ldots,\psi_P)$, where $P$ is the total number of parameters in the model. 
The regularity conditions are:\n\n\begin{enumerate}[font={\bfseries},label={R\arabic*.}]\n\item $h(y;\bm{\Phi})$ has common support in $y$ for all $\bm{\Phi}\in\bm{\Omega}$, and $h(y;\bm{\Phi})$ is identifiable in $\bm{\Phi}$ up to a permutation of mixture components.\n\item $h(y;\bm{\Phi})$ admits third partial derivatives with respect to $\bm{\Phi}$ for each $\bm{\Phi}\in\bm{\Omega}$ and for almost all $y$.\n\item For all $j_1,j_2=1,\ldots,P$, the first two derivatives of $\log h(y;\bm{\Phi})$ satisfy\n\begin{equation}\nE\left[\frac{\partial}{\partial\psi_{j_1}}\log h(y;\bm{\Phi})\right]=0;\n\end{equation}\n\begin{equation}\nE\left[\frac{\partial}{\partial\psi_{j_1}}\log h(y;\bm{\Phi})\frac{\partial}{\partial\psi_{j_2}}\log h(y;\bm{\Phi})\right]=E\left[-\frac{\partial^2}{\partial\psi_{j_1}\partial\psi_{j_2}}\log h(y;\bm{\Phi})\right].\n\end{equation}\n\item The Fisher information matrix\n\begin{equation}\n\mathcal{I}(\bm{\Phi})=E\left[\left(\frac{\partial}{\partial\bm{\Phi}}\log h(y;\bm{\Phi})\right)\left(\frac{\partial}{\partial\bm{\Phi}}\log h(y;\bm{\Phi})\right)^T\right]\n\end{equation}\nis finite and positive definite at $\bm{\Phi}=\bm{\Phi}_0$.\n\item There exists an integrable function $\mathcal{M}(y)$ such that\n\begin{equation}\n\hspace{-1cm}\n\left|\frac{\partial}{\partial\psi_{j_1}}\log h(y;\bm{\Phi})\right|\leq \mathcal{M}(y),\quad\n\left|\frac{\partial^2}{\partial\psi_{j_1}\partial\psi_{j_2}}\log h(y;\bm{\Phi})\right|\leq \mathcal{M}(y),\quad\n\left|\frac{\partial^3}{\partial\psi_{j_1}\partial\psi_{j_2}\partial\psi_{j_3}}\log h(y;\bm{\Phi})\right|\leq \mathcal{M}(y),\n\end{equation}\n\begin{equation}\n\hspace{-1cm}\n\left|\frac{\partial}{\partial\psi_{j_1}}\log h(y;\bm{\Phi})\frac{\partial}{\partial\psi_{j_2}}\log h(y;\bm{\Phi})\right|\leq 
\mathcal{M}(y),\quad\n\left|\frac{\partial^2}{\partial\psi_{j_1}\partial\psi_{j_2}}\log h(y;\bm{\Phi})\frac{\partial}{\partial\psi_{j_3}}\log h(y;\bm{\Phi})\right|\leq \mathcal{M}(y),\n\end{equation}\n\begin{equation}\n\hspace{-1cm}\n\left|\frac{\partial}{\partial\psi_{j_1}}\log h(y;\bm{\Phi})\frac{\partial}{\partial\psi_{j_2}}\log h(y;\bm{\Phi})\frac{\partial}{\partial\psi_{j_3}}\log h(y;\bm{\Phi})\right|\leq \mathcal{M}(y).\n\end{equation}\n\end{enumerate}\n\n\section{Proof of Theorems 1 and 2} \label{apx:asym_proof1}\nWe first focus on Theorem 1. Denote the weighted log-likelihood of a single observation by\n\begin{equation}\n\mathcal{L}^{*}(\bm{\Phi};y)=W(y)\log \frac{h(y;\bm{\Phi})W(y)}{\int_{0}^{\infty}h(u;\bm{\Phi})W(u)du}.\n\end{equation}\n\nThe consistency and asymptotic normality can be proved by applying Theorems 5.41 and 5.42 of \cite{van2000asymptotic}. These theorems require the regularity conditions that $E\left[\|\partial\/\partial\bm{\Phi}\mathcal{L}^{*}(\bm{\Phi};Y)\|^2\right]<\infty$, that the matrix $E\left[\partial^2\/\partial\bm{\Phi}\partial\bm{\Phi}^T\mathcal{L}^{*}(\bm{\Phi};Y)\right]$ exists, and that $|\partial^3\/\partial\psi_{j_1}\partial\psi_{j_2}\partial\psi_{j_3}\mathcal{L}^{*}(\bm{\Phi};y)|$ is dominated by a fixed integrable function of $y$, where $j_1,j_2,j_3=1,\ldots,P$ and $\psi_j$ is the $j^{\text{th}}$ element of $\bm{\Phi}$. Through direct computation of the derivatives, the aforementioned quantities can all be expressed as functions of $\kappa(u;\bm{\Phi})$ and $\int_{0}^{\infty}\kappa(u;\bm{\Phi})h(u;\bm{\Phi})W(u)du$ only, where $\kappa(u;\bm{\Phi})$ can be any of the six terms presented in regularity condition \textbf{R5} (the left hand sides of the six inequalities under \textbf{R5} without the absolute value signs). 
Given \\textbf{R5} that $\\kappa(u;\\bm{\\Phi})$ is bounded by an integrable function and since $|\\int_{0}^{\\infty}\\kappa(u;\\bm{\\Phi})h(u;\\bm{\\Phi})W(u)du|\\leq \\int_{0}^{\\infty}|\\kappa(u;\\bm{\\Phi})|h(u;\\bm{\\Phi})du$, the aforementioned regularity conditions required by \\cite{van2000asymptotic} hold.\n\n\\medskip\n\nFor consistency, it suffices from Theorem 5.42 of \\cite{van2000asymptotic} to show that $\\bm{\\Phi}_0$ is the maximizer of\n\\begin{align}\nE_{\\bm{\\Phi}_0}\\left[\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\right]\n&=\\int_{0}^{\\infty}W(y)\\log \\frac{h(y;\\bm{\\Phi})W(y)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du}h(y;\\bm{\\Phi}_0)dy\\nonumber\\\\\n&=c_1\\int_{0}^{\\infty}\\tilde{h}(y;\\bm{\\Phi}_0)\\log\\frac{\\tilde{h}(y;\\bm{\\Phi})}{\\tilde{h}(y;\\bm{\\Phi}_0)}dy+c_2\\nonumber\\\\\n&=-c_1D_{\\text{KL}}\\left(\\tilde{h}(y;\\bm{\\Phi})\\|\\tilde{h}(y;\\bm{\\Phi}_0)\\right)+c_2,\n\\end{align}\nwhere $c_1= \\int_0^\\infty h(y; \\bm{\\Phi}_0)W(y) dy >0$ and $c_2=c_1 \\int_0^\\infty \\tilde{h}(y;\\bm{\\Phi}_0) \\log \\tilde{h}(y;\\bm{\\Phi}_0) dy$ are constants and $D_{\\text{KL}}(Q_1\\|Q_2)\\geq 0$ is the KL divergence between $Q_1$ and $Q_2$. 
Since $D_{\\text{KL}}\\left(\\tilde{h}(y;\\bm{\\Phi})\\|\\tilde{h}(y;\\bm{\\Phi}_0)\\right)=0$ as $\\bm{\\Phi}=\\bm{\\Phi}_0$, the result follows.\n\n\\medskip\n\nFor asymptotic normality, from Theorem 5.41 of \\cite{van2000asymptotic}, we have $\\sqrt{n}(\\hat{\\bm{\\Phi}}_n-\\bm{\\Phi}_0)\\overset{d}{\\rightarrow}\\mathcal{N}(\\bm{0},\\bm{\\Sigma})$ with \n$\\bm{\\Sigma}=\\bm{\\Gamma}^{-1}\\bm{\\Lambda}\\bm{\\Gamma}^{-1}$, where \n\\begin{equation} \\label{eq:asym:lambda_proof}\n\\bm{\\Lambda}=E_{\\bm{\\Phi}_0}\\left[\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\n\\end{equation}\nand\n\\begin{equation} \\label{eq:asym:gamma_proof}\n\\bm{\\Gamma}=-E_{\\bm{\\Phi}_0}\\left[\\frac{\\partial^2}{\\partial\\bm{\\Phi}\\partial\\bm{\\Phi}^T}\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right].\n\\end{equation}\n\nPerforming the derivatives and algebra manipulations from Equations (2.3) and (2.4) would result to Equations (4.2) and (4.3) respectively, which prove the asymptotic normality result.\n\n\\medskip\n\nProof idea of Theorem 2 is completely identical as the above, except that the expectations in Equations (2.3) and (2.4) are taken as $\\tilde{E}[\\cdot]$ instead of $E_{\\bm{\\Phi}_0}[\\cdot]$.\n\n\n\n\\section{Proof of Theorem 3} \\label{apx:asym_proof2}\nWe begin with the following lemmas:\n\\begin{lemma} \\label{apx:lem:asym1}\nTo prove Theorem 3, it suffices to show that\n\\begin{equation}\nT_n(\\bm{\\Phi}):=\\frac{\\partial}{\\partial\\gamma}\\tilde{E}_n[\\log \\tilde{h}_{n}(Y;\\bm{\\Phi})]\n\\end{equation}\nis asymptotically a strictly decreasing function of $\\gamma$ as $n\\rightarrow\\infty$, with $T_n(\\bm{\\Phi})|_{\\gamma=\\gamma_0}\\rightarrow 0$ as $n\\rightarrow\\infty$.\n\\end{lemma}\n\n\\begin{proof}\nIf we keep the weight function $W(\\cdot)$ fixed (independent 
of $n$), applying Theorem 5.7 of \cite{van2000asymptotic} we have that maximizing the weighted log-likelihood function $\mathcal{L}_n^{*}(\bm{\Phi};\bm{y})$ is asymptotically equivalent to maximizing $\tilde{E}_n[\log \tilde{h}_{n}(Y;\bm{\Phi})]$ (which is indeed independent of $n$).\n\nIn our setting, however, the weight function $W_n(\cdot)$ depends on $n$: as $n$ increases, the increasing distortion (more down-weighting) of the relative importance of observations reduces the effective number of observations. Heuristically, we need the number of observations $n$ to increase faster than the distortion impacts of $W_n(\cdot)$, so that the effective number of observations grows to infinity and large sample theory still applies. Quantitatively, we require that the variance of the (scaled) empirical weighted log-likelihood\n\begin{equation}\nV_n(\bm{\Phi}):=\text{Var}\left(\frac{1}{n\int_{0}^{\infty}W_n(u)g(u)du}\sum_{i=1}^{n}W_n(Y_i)\log\tilde{h}_{n}(Y_i;\bm{\Phi})\right)\rightarrow 0\n\end{equation}\nas $n\rightarrow\infty$, such that the (scaled) empirical weighted log-likelihood function converges to its expectation, which is $\tilde{E}_n[\log \tilde{h}_{n}(Y;\bm{\Phi})]$. 
Now, $V_n(\bm{\Phi})$ is evaluated as follows:\n\begin{align}\nV_n(\bm{\Phi})\n&=\frac{1}{n(\int_{0}^{\infty}W_n(u)g(u)du)^2}\text{Var}\left(W_n(Y)\log\tilde{h}_{n}(Y;\bm{\Phi})\right)\nonumber\\\\\n&\leq\frac{1}{n(\int_{0}^{\infty}W_n(u)g(u)du)^2}\tilde{E}\left[W_n(Y)(\log\tilde{h}_{n}(Y;\bm{\Phi}))^2\right]\nonumber\\\\\n&=\frac{1}{n\int_{0}^{\infty}W_n(u)g(u)du}\int_{0}^{\infty}\frac{W_n(y)g(y)}{\int_{0}^{\infty}W_n(u)g(u)du}(\log\tilde{h}_{n}(y;\bm{\Phi}))^2dy\nonumber\\\\\n&=\frac{\tilde{E}_n[(\log \tilde{h}_{n}(Y;\bm{\Phi}))^2]}{n\tilde{E}[W_n(Y)]}\rightarrow 0,\n\end{align}\nwhere the convergence is based on Assumption \textbf{A4}.\n\end{proof}\n\n\begin{lemma} \label{apx:lem:asym2}\n(Monotone density theorem -- Theorem 1.7.2 of \cite{bingham1989regular}) Let $H$ be a probability distribution function with corresponding probability density function $h$. Assume $h$ is ultimately monotone (i.e. $h$ is monotone on $(z,\infty)$ for some $z>0$). If\n\begin{equation}\n\bar{H}(y)\sim y^{-\gamma}L(y)\n\end{equation}\nas $y\rightarrow\infty$ for some $\gamma>0$ and some slowly varying function $L$, then\n\begin{equation}\nh(y)\sim \gamma y^{-\gamma-1}L(y)\n\end{equation}\nas $y\rightarrow\infty$.\n\end{lemma}\n\nWe proceed to the proof of Theorem 3 as follows. 
Using the result from Lemma \\ref{apx:lem:asym1}, it suffices to evaluate\n\\begin{align} \\label{apx:eq:proof2:Tn}\nT_n(\\bm{\\Phi})\n&=\\frac{\\partial}{\\partial\\gamma}\\int_{0}^{\\infty}\\tilde{g}_n(y)\\log \\tilde{h}_{n}(y;\\bm{\\Phi})dy\\nonumber\\\\\n&=\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\tilde{g}^{*}_n(y)\\log\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})dy + o(1)\\nonumber\\\\\n&=\\frac{\\partial}{\\partial\\gamma}\\log\\tilde{h}^{*}_{n}(\\tau_n;\\bm{\\Phi})\n+\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\left[\\frac{\\partial}{\\partial y}\\log\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})\\right]\\times\\bar{\\tilde{G}}^{*}_{n}(y)dy + o(1)\\nonumber\\\\\n&:=M_1(\\tau_n;\\bm{\\Phi})+M_2(\\tau_n;\\bm{\\Phi}) + o(1),\n\\end{align}\nwhere\n\\begin{equation}\n\\tilde{g}^{*}_{n}(y)=\\frac{g(y)W_n(y)}{\\int_{\\tau_n}^{\\infty}g(u)W_n(u)du}1\\{y\\geq\\tau_n\\},\\qquad \\tilde{h}^{*}_{n}(y;\\bm{\\Phi})=\\frac{h(y;\\bm{\\Phi})W_n(y)}{\\int_{\\tau_n}^{\\infty}h(u;\\bm{\\Phi})W_n(u)du}1\\{y\\geq\\tau_n\\},\n\\end{equation}\nare the proper transformed density functions, and $\\tilde{G}^{*}_{n}$ and $\\tilde{H}_{n}^{*}$ are the corresponding distribution functions. The second equality of Equation (\\ref{apx:eq:proof2:Tn}) results from Assumption \\textbf{A3}, while the third equality follows from integration by parts. 
Now, we evaluate $M_1(\\tau_n;\\bm{\\Phi})$ and $M_2(\\tau_n;\\bm{\\Phi})$ as follows:\n\n\\begin{align}\nM_1(\\tau_n;\\bm{\\Phi})\n&=\\frac{\\partial}{\\partial\\gamma}\\log\\tilde{h}^{*}_{n}(\\tau_n;\\bm{\\Phi})\\nonumber\\\\\n&=\\frac{\\partial}{\\partial\\gamma}\\left[\\log\\gamma-(\\gamma+1)\\log\\tau_n+\\log L(\\tau_n;\\bm{\\Phi})\\right]\\nonumber\\\\\n&\\hspace{3em}-\\int_{\\tau_n}^{\\infty}\\frac{\\partial}{\\partial\\gamma}\\left[\\log\\gamma-(\\gamma+1)\\log y+\\log L(y;\\bm{\\Phi})\\right]\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})dy+o(1)\\nonumber\\\\\n&=\\frac{1}{\\gamma}-\\log\\tau_n-\\frac{1}{\\gamma}+\\int_{\\tau_n}^{\\infty}(\\log y)\\times\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})dy\n-\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\log\\frac{L(y;\\bm{\\Phi})}{L(\\tau_n;\\bm{\\Phi})}\\times\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})dy+o(1)\\nonumber\\\\\n&=-\\log\\tau_n+\\log\\tau_n\n+\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{H}}^{*}_{n}(y;\\bm{\\Phi})dy\n-\\frac{\\partial}{\\partial\\gamma}\\int_{1}^{\\infty}\\log\\frac{L(\\tau_nt;\\bm{\\Phi})}{L(\\tau_n;\\bm{\\Phi})}\\times\\tau_n\\tilde{h}^{*}_{n}(\\tau_nt;\\bm{\\Phi})dt+o(1)\\nonumber\\\\\n&=\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{H}}^{*}_{n}(y;\\bm{\\Phi})dy + o(1),\n\\end{align}\nwhere the dominated convergence theorem and integration by parts are used repeatedly. The second equality uses the monotone density theorem (Lemma \\ref{apx:lem:asym2}), with Assumption \\textbf{A5} being satisfied. The last term of the second last equality converges to zero uniformly in $\\bm{\\Phi}$ by the dominated convergence theorem and the uniform convergence conditions in Assumption \\textbf{A2}. 
Using similar techniques as above, $M_2(\\tau_n;\\bm{\\Phi})$ can be evaluated as\n\\begin{align}\nM_2(\\tau_n;\\bm{\\Phi})\n&=-\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{G}}^{*}_{n}(y)dy\n+\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\frac{\\partial}{\\partial y}(\\log L(y;\\bm{\\Phi}))\\times\\bar{\\tilde{G}}^{*}_{n}(y)dy\\nonumber\\\\\n&=-\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{G}}^{*}_{n}(y)dy-\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\log\\frac{L(y;\\bm{\\Phi})}{L(\\tau_n;\\bm{\\Phi})}\\times\\tilde{g}^{*}_{n}(y)dy\\nonumber\\\\\n&=-\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{G}}^{*}_{n}(y)dy + o(1).\n\\end{align}\n\nTo sum up, we have\n\\begin{equation}\nT_n(\\bm{\\Phi})\n=\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\left[\\bar{\\tilde{H}}^{*}_{n}(y;\\bm{\\Phi})-\\bar{\\tilde{G}}^{*}_{n}(y)\\right]dy+o(1)\n=\\int_{1}^{\\infty}\\frac{1}{t}\\left[\\bar{\\tilde{H}}^{*}_{n}(\\tau_nt;\\bm{\\Phi})-\\bar{\\tilde{G}}^{*}_{n}(\\tau_nt)\\right]dt+o(1).\n\\end{equation}\n\nInvestigating each term inside the integrand, we have\n\\begin{align}\n\\bar{\\tilde{H}}^{*}_{n}(\\tau_nt;\\bm{\\Phi})\n&=\\frac{\\int_t^{\\infty}h(\\tau_nv;\\bm{\\Phi})W_n(\\tau_nv)dv}{\\int_1^{\\infty}h(\\tau_nv;\\bm{\\Phi})W_n(\\tau_nv)dv}\\nonumber\\\\\n&=\\frac{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)[L(\\tau_nv;\\bm{\\Phi})\/L(\\tau_n;\\bm{\\Phi})]dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)[L(\\tau_nv;\\bm{\\Phi})\/L(\\tau_n;\\bm{\\Phi})]dv} + o(1)\\nonumber\\\\\n&=\\frac{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv} + o(1),\n\\end{align}\nand\n\\begin{align}\n\\bar{\\tilde{G}}^{*}_{n}(\\tau_nt)\n&=\\frac{\\int_t^{\\infty}g(\\tau_nv)W_n(\\tau_nv)dv}{\\int_1^{\\infty}g(\\tau_nv)W_n(\\tau_nv)dv}\\nonumber\\\\\n&=\\frac{\\int_t^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)[L_0(\\tau_nv)\/L_0(\\tau_n)]dv}{\\int_1^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)[L_0(\\tau_nv)\/L_0(\\tau_n)]dv} + 
o(1)\\nonumber\\\\\n&=\\frac{\\int_t^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)dv} + o(1),\n\\end{align}\nwhere $\\tilde{W}_n(v)=W_n(\\tau_nv)$. Therefore, it is clear that\n\\begin{equation}\nT_n(\\bm{\\Phi})=\\int_{1}^{\\infty}\\frac{1}{t}\\left[\\frac{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}-\\frac{\\int_t^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)dv}\\right]dt+o(1)\n\\end{equation}\nconverges to zero for $\\gamma=\\gamma_0$ as $n\\rightarrow\\infty$. To show that $T_n(\\bm{\\Phi})$ is a strictly decreasing function of $\\gamma$ as $n\\rightarrow\\infty$, it suffices to evaluate\n\\begin{align}\n\\frac{\\partial}{\\partial\\gamma}\\log\\frac{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}\n&=-\\frac{\\int_t^{\\infty}(\\log v)v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}+\\frac{\\int_1^{\\infty}(\\log v)v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv},\n\\end{align}\nwhich is negative if and only if\n\\begin{equation}\n\\int_{1}^{t}(\\log v)k_{n,1,t}(v;\\gamma)dv<\\int_{t}^{\\infty}(\\log v)k_{n,t,\\infty}(v;\\gamma)dv,\n\\end{equation}\nwhere\n\\begin{equation}\nk_{n,t_1,t_2}(v;\\gamma)=\\frac{v^{-\\gamma-1}\\tilde{W}_n(v)}{\\int_{t_1}^{t_2}u^{-\\gamma-1}\\tilde{W}_n(u)du}1\\{t_1<v\\leq t_2\\}\n\\end{equation}\nis a proper density function supported on $(t_1,t_2]$. The inequality always holds since $\\log v<\\log t$ on $(1,t)$ while $\\log v\\geq\\log t$ on $[t,\\infty)$. Hence, $T_n(\\bm{\\Phi})$ is asymptotically strictly decreasing in $\\gamma$ and converges to zero at $\\gamma=\\gamma_0$, which completes the proof.\n\n\\section{GEM algorithm for MWLE under J-Gamma Lomax mixture model}\n\\subsection{E-step} \\label{supp:sec:em_e}\nThe E-step requires evaluating weighted integrals under the following two choices of weight function:\n\\begin{itemize}\n\\item \\textbf{Case 1}: $W(u)=1-\\exp\\{-u\/\\tilde{\\mu}\\}$.\n\\item \\textbf{Case 2}: $W(u)=(1-\\tilde{\\phi})1\\{u>\\tilde{\\mu}\\}+\\tilde{\\phi}$.\n\\end{itemize}\n\nRe-parameterizing the gamma distribution with $\\alpha=1\/\\phi_j$ and $\\beta=1\/(\\phi_j\\mu_j)$, we need to compute\n\\begin{equation}\n\\int_{0}^{\\infty}q(u)\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du\n\\end{equation}\nfor $q(u)=1$, $q(u)=u$ and $q(u)=\\log u$; and\n\\begin{equation}\n\\int_{0}^{\\infty}r(u)\\frac{\\gamma\\theta^{\\gamma}}{(u+\\theta)^{\\gamma+1}}(1-W(u))du\n\\end{equation}\nfor $r(u)=1$ and 
$r(u)=\\log(u+\\theta)$.\n\n\\textbf{Case 1}. We have the following analytical results:\n\\begin{equation}\n\\int_{0}^{\\infty}\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du=\\left(\\frac{\\beta}{\\beta+1\/\\tilde{\\mu}}\\right)^{\\alpha},\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}u\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du=\\frac{\\alpha\\beta^{\\alpha}}{(\\beta+1\/\\tilde{\\mu})^{\\alpha+1}},\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}\\log u\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du=\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}\\frac{\\partial}{\\partial\\alpha}\\frac{\\Gamma(\\alpha)}{(\\beta+1\/\\tilde{\\mu})^{\\alpha}},\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}\\frac{\\gamma\\theta^{\\gamma}}{(u+\\theta)^{\\gamma+1}}(1-W(u))du=\\gamma\\left(\\frac{\\theta}{\\tilde{\\mu}}\\right)^{\\gamma}\\exp\\left\\{\\frac{\\theta}{\\tilde{\\mu}}\\right\\}\\Gamma(-\\gamma;\\frac{\\theta}{\\tilde{\\mu}},\\infty),\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}\\log(u+\\theta)\\frac{\\gamma\\theta^{\\gamma}}{(u+\\theta)^{\\gamma+1}}(1-W(u))du=-\\gamma\\theta^{\\gamma}\\exp\\left\\{\\frac{\\theta}{\\tilde{\\mu}}\\right\\}\\frac{\\partial}{\\partial\\gamma}\\Gamma(-\\gamma;\\frac{\\theta}{\\tilde{\\mu}},\\infty),\n\\end{equation}\nwhere $\\Gamma(m;c_1,c_2)=\\int_{c_1}^{c_2}u^{m-1}\\exp\\{-u\\}du$ is an incomplete gamma function.\n\n\\textbf{Case 2}. 
We have the following analytical results:\n\\begin{equation}\n\\int_{0}^{\\infty}\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du=\\tilde{\\phi}\\frac{\\Gamma(\\alpha;\\beta\\tilde{\\mu},\\infty)}{\\Gamma(\\alpha)}+(1-\\tilde{\\phi}),\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}u\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du=\\frac{\\alpha}{\\beta}\\left[\\tilde{\\phi}\\frac{\\Gamma(\\alpha+1;\\beta\\tilde{\\mu},\\infty)}{\\Gamma(\\alpha+1)}+(1-\\tilde{\\phi})\\right],\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}\\log u\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du=\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}\\left[\\tilde{\\phi}\\frac{\\partial}{\\partial\\alpha}\\frac{\\Gamma(\\alpha;\\beta\\tilde{\\mu},\\infty)}{\\beta^{\\alpha}}+(1-\\tilde{\\phi})\\frac{\\partial}{\\partial\\alpha}\\frac{\\Gamma(\\alpha)}{\\beta^{\\alpha}}\\right],\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}\\frac{\\gamma\\theta^{\\gamma}}{(u+\\theta)^{\\gamma+1}}(1-W(u))du=\\tilde{\\phi}\\left(\\frac{\\theta}{\\tilde{\\mu}+\\theta}\\right)^{\\gamma}+(1-\\tilde{\\phi}),\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}\\log(u+\\theta)\\frac{\\gamma\\theta^{\\gamma}}{(u+\\theta)^{\\gamma+1}}(1-W(u))du=-\\gamma\\theta^{\\gamma}\\frac{\\partial}{\\partial\\gamma}\\left[\\tilde{\\phi}\\frac{1}{\\gamma(\\tilde{\\mu}+\\theta)^{\\gamma}}+(1-\\tilde{\\phi})\\frac{1}{\\gamma\\theta^{\\gamma}}\\right].\n\\end{equation}\n\n\\subsection{M-step} \\label{supp:sec:em_m}\nMaximizing $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ with respect to $\\bm{\\Phi}$ yields the following parameter updates:\n\\begin{equation}\n\\pi_j^{(l)}=\\frac{\\sum_{i=1}^{n}W(y_i)z_{ij}^{(l)}+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_j^{'(l)}}{\\sum_{j'=1}^{J+1}\\left\\{\\sum_{i=1}^{n}W(y_i)z_{ij'}^{(l)}+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_{j'}^{'(l)}\\right\\}},\\quad 
j=1,\\ldots,J+1,\n\\end{equation}\n\\begin{equation}\n\\mu_j^{(l)}=\\frac{\\sum_{i=1}^{n}W(y_i)z_{ij}^{(l)}y_i+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_j^{'(l)}\\widehat{y'}^{(l)}_j}{\\sum_{i=1}^{n}W(y_i)z_{ij}^{(l)}+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_j^{'(l)}},\\quad j=1,\\ldots,J,\n\\end{equation}\n\\begin{align}\n\\phi_j^{(l)}\n&=\\underset{\\phi_j>0}{\\text{argmax}}\\Bigg\\{\\sum_{i=1}^nW(y_i)z^{(l)}_{ij}\\left\\{-\\frac{1}{\\phi_j}\\log\\phi_j-\\frac{1}{\\phi_j}\\log\\mu_j^{(l)}-\\log\\Gamma(\\frac{1}{\\phi_j})+(\\frac{1}{\\phi_j}-1)\\log y_i-\\frac{y_i}{\\phi_j\\mu_j^{(l)}}\\right\\}\\nonumber\\\\\n&\\hspace{5em} +k^{(l)}\\left(\\sum_{i=1}^{n}W(y_i)\\right)z^{'(l)}_{j}\\left\\{-\\frac{1}{\\phi_j}\\log\\phi_j-\\frac{1}{\\phi_j}\\log\\mu_j^{(l)}-\\log\\Gamma(\\frac{1}{\\phi_j})+(\\frac{1}{\\phi_j}-1)\\widehat{\\log y'}^{(l)}_j-\\frac{\\widehat{y'}^{(l)}_j}{\\phi_j\\mu_j^{(l)}}\\right\\}\\Bigg\\},\\nonumber\n\\end{align}\n\\begin{equation}\n\\gamma^{(l)}=\\frac{\\sum_{i=1}^{n}W(y_i)z_{i(J+1)}^{(l)}+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_{J+1}^{'(l)}}{\\sum_{i=1}^{n}W(y_i)z_{i(J+1)}^{(l)}\\left[\\log(y_i+\\theta)-\\log\\theta\\right]+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_{J+1}^{'(l)}\\left[\\widehat{\\log(y'+\\theta)}^{(l)}_{J+1}-\\log\\theta\\right]}.\n\\end{equation}\n\nNote here that $\\theta$ is treated as a fixed hyperparameter not involved in the estimation procedure. To estimate $\\theta$ as a parameter, we may need to take a further step to numerically maximize the observed data weighted log-likelihood $\\mathcal{L}^{*}_n(\\bm{\\Phi};\\bm{y})$ w.r.t. 
$\\theta$.\n\n\\section{GEM algorithm for MWLE under J-Gamma Lomax mixture model: Parameter transformation approach}\n\\subsection{Construction of complete data}\nThe complete data is given by\n\\begin{equation}\n\\mathcal{D}^{\\text{com}}=\\{(y_i,\\bm{z}_i^{*})\\}_{i=1,\\ldots,n},\n\\end{equation}\nwhere $\\bm{z}_i^{*}=(z_{i1}^{*},\\ldots,z_{i(J+1)}^{*})$ are the labels, where $z_{ij}^{*}=1$ if observation $i$ belongs to the $j^{\\text{th}}$ (transformed) latent mixture component and $z_{ij}^{*}=0$ otherwise. The complete data weighted log-likelihood function is given by\n\\begin{align}\n\\tilde{\\mathcal{L}}^{*}_n(\\bm{\\Phi};\\mathcal{D}^{\\text{com}})\n&=\\sum_{i=1}^{n}W(y_i)\\Bigg\\{\\left[\\sum_{j=1}^{J}z_{ij}^{*}\\left(\\log\\pi_j^{*}+\\log f_b(y_i;\\mu_j,\\phi_j) -\\log\\int_0^{\\infty}f_b(u;\\mu_j,\\phi_j)W(u)du\\right)\\right]\\nonumber\\\\\n&\\hspace{8em}+z_{i(J+1)}^{*}\\left(\\log\\pi_{J+1}^{*}+\\log f_t(y_i;\\theta,\\gamma)-\\log\\int_0^{\\infty}f_t(u;\\theta,\\gamma)W(u)du\\right)\\Bigg\\}.\n\\end{align}\n\n\n\\subsection{E-step} \\label{supp:sec:em_e2}\nThe expectation of the complete data weighted log-likelihood is given by the following for the $l^{\\text{th}}$ iteration:\n\\begin{align}\nQ^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})\n&=\\sum_{i=1}^{n}W(y_i)\\Bigg\\{\\Bigg[\\sum_{j=1}^{J}z_{ij}^{*(l)}\\Bigg(\\log\\pi_j^{*}-\\frac{1}{\\phi_j}\\log\\phi_j-\\frac{1}{\\phi_j}\\log\\mu_j-\\log\\Gamma(\\frac{1}{\\phi_j})+(\\frac{1}{\\phi_j}-1)\\log 
y_i-\\frac{y_i}{\\phi_j\\mu_j}\\nonumber\\\\\n&\\hspace{12em}-\\log\\int_0^{\\infty}f_b(u;\\mu_j,\\phi_j)W(u)du\\Bigg)\\Bigg]\\nonumber\\\\\n&\\hspace{8em}+z_{i(J+1)}^{*(l)}\\Bigg(\\log\\pi_{J+1}^{*}+\\log\\gamma+\\gamma\\log\\theta-(\\gamma+1)\\log(y_i+\\theta)\\nonumber\\\\\n&\\hspace{13em}-\\log\\int_0^{\\infty}f_t(u;\\theta,\\gamma)W(u)du\\Bigg)\\Bigg\\},\n\\end{align}\nwhere\n\\begin{equation}\nz^{*(l)}_{ij}=P(z^{*}_{ij}=1|\\bm{y},\\bm{\\Phi}^{(l-1)})=\n\\begin{cases}\n\\dfrac{\\pi_j^{*(l-1)}f_b(y_i;\\mu_j^{(l-1)},\\phi_j^{(l-1)})W(y_i)}{\\int_0^{\\infty}f_b(u;\\mu_j^{(l-1)},\\phi_j^{(l-1)})W(u)du\\times h(y_i;\\bm{\\Phi}^{(l-1)})},\\quad j=1,\\ldots,J\\\\\n\\dfrac{\\pi^{*(l-1)}_{J+1}f_t(y_i;\\theta,\\gamma^{(l-1)})W(y_i)}{\\int_0^{\\infty}f_t(u;\\theta,\\gamma^{(l-1)})W(u)du\\times h(y_i;\\bm{\\Phi}^{(l-1)})},\\quad j=J+1.\n\\end{cases}\n\\end{equation}\n\n\\subsection{M-step} \\label{supp:sec:em_m2}\nMaximizing $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ with respect to $\\bm{\\Phi}$ yields the following parameter updates:\n\\begin{equation}\n\\pi_j^{*(l)}=\\frac{\\sum_{i=1}^{n}W(y_i)z_{ij}^{*(l)}}{\\sum_{j'=1}^{J+1}\\sum_{i=1}^{n}W(y_i)z_{ij'}^{*(l)}},\\quad j=1,\\ldots,J+1,\n\\end{equation}\nand the other parameters $(\\bm{\\mu},\\bm{\\phi},\\theta,\\gamma)$ are sequentially updated by numerically maximizing $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ w.r.t. each of the parameters.\n\n\\section{Proof of Proposition 3} \\label{supp:sec:ascend}\nWrite $\\mathcal{L}^{*}_n(\\bm{\\Phi};\\bm{y})=\\sum_{i=1}^{n}W(y_i)\\log p(y_i|\\bm{\\Phi})$ and $\\tilde{\\mathcal{L}}^{*}_n(\\bm{\\Phi};\\mathcal{D}^{\\text{com}})=\\sum_{i=1}^{n}W(y_i)\\left[\\log p(y_i|\\bm{\\Phi}) +\\log p(\\mathcal{D}^{\\text{mis}}_i|\\bm{\\Phi},y_i)\\right]$ for some probability density $p$ and missing data from sample $i$ given by $\\mathcal{D}^{\\text{mis}}_i$. 
Then, we have\n\\begin{align}\n\\mathcal{L}^{*}_n(\\bm{\\Phi};\\bm{y})\n&=\\tilde{\\mathcal{L}}^{*}_n(\\bm{\\Phi};\\mathcal{D}^{\\text{com}})-\\sum_{i=1}^{n}W(y_i)\\log p(\\mathcal{D}^{\\text{mis}}_i|\\bm{\\Phi},y_i)\\nonumber\\\\\n&=Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})-\\sum_{i=1}^{n}W(y_i)\\int p(\\bm{v}_i|\\bm{\\Phi}^{(l-1)},y_i)\\log p(\\bm{v}_i|\\bm{\\Phi},y_i)d\\bm{v}_i,\n\\end{align}\nwhere the second equality results from taking the expectation of both sides over the missing data under parameters $\\bm{\\Phi}^{(l-1)}$. It follows that\n\\begin{align}\n\\mathcal{L}^{*}_n(\\bm{\\Phi}^{(l)};\\bm{y})-\\mathcal{L}^{*}_n(\\bm{\\Phi}^{(l-1)};\\bm{y})\n&=Q^{*}(\\bm{\\Phi}^{(l)}|\\bm{\\Phi}^{(l-1)})-Q^{*}(\\bm{\\Phi}^{(l-1)}|\\bm{\\Phi}^{(l-1)})\\nonumber\\\\\n&\\quad+\\sum_{i=1}^{n}W(y_i)\\int p(\\bm{v}_i|\\bm{\\Phi}^{(l-1)},y_i)\\log\\frac{p(\\bm{v}_i|\\bm{\\Phi}^{(l-1)},y_i)}{p(\\bm{v}_i|\\bm{\\Phi}^{(l)},y_i)}d\\bm{v}_i\\geq 0.\n\\end{align}\n\n\\section{Initialization of parameters} \\label{apx:em:init}\nAs briefly described in Section 5.3 of the paper, parameter initialization $\\bm{\\Phi}^{(0)}$ is done using the CMM approach by \\cite{gui2018fit}. This consists of the following steps:\n\\begin{enumerate}\n\\item Determine a threshold $\\tau$ which classifies observations $y_i$ into either the body (when $y_i\\leq\\tau$) or the tail (when $y_i>\\tau$) part of the distribution. This can be done by plotting the log of the empirical survival function against $\\log y_i$, which is called the log-log plot. For regularly varying distributions, the log-log plot is asymptotically linear. 
$\\tau$ is approximated by the point where the curve becomes linear onwards.\n\\item Perform K-means clustering on $\\{y_i\\}_{i:y_i\\leq\\tau}$ with $J$ clusters, and obtain the clustering mean $\\{\\mu^{\\text{cluster}}_j\\}_{j=1,\\ldots,J}$, variance $\\{(\\sigma^{\\text{cluster}}_j)^2\\}_{j=1,\\ldots,J}$ and weights $\\{\\tilde{\\pi}_j^{\\text{cluster}}\\}_{j=1,\\ldots,J}$.\n\\item Set $\\mu_j^{(0)}=\\mu^{\\text{cluster}}_j$, $\\phi_j^{(0)}=({\\sigma^{\\text{cluster}}_j})^2\/{\\mu^{\\text{cluster}}_j}^2$.\n\\item Obtain $\\theta^{(0)}$ and $\\gamma^{(0)}$ by matching the first two moments of observations belonging to the tail component (i.e. $\\{y_i\\}_{i:y_i>\\tau}$).\n\\item Set $\\pi_{J+1}^{(0)}$ as the proportion of observations satisfying $y_i>\\tau$.\n\\item Set the remaining weight parameters as $\\pi_{j}^{(0)}=\\tilde{\\pi}_j^{\\text{cluster}}(1-\\pi_{J+1}^{(0)})$.\n\\end{enumerate}\n\n\\section{Truncated log-likelihood function} \\label{sec:supp:tll}\nThis section includes more details for Remark 6 in the paper. Denote $g(y)$ as the true density generating the observations and $\\tilde{h}(y;\\bm{\\Phi})=\\frac{h(y;\\bm{\\Phi})W(y)}{\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du}$ as the truncated density. The expected weighted log-likelihood can be alternatively written as\n\\begin{align}\nn\\times\\tilde{E}[\\mathcal{L}^{*}(\\bm{\\Phi};\\bm{Y})]\n&=n\\int_{0}^{\\infty}W(u)\\log \\tilde{h}(u;\\bm{\\Phi})\\times g(u)du\\nonumber\\\\\n&=n\\int_{0}^{\\infty}g(u)W(u)du\\times\\int_{0}^{\\infty}\\log \\tilde{h}(u;\\bm{\\Phi})\\times\\frac{g(u)W(u)}{\\int_{0}^{\\infty}g(t)W(t)dt}du\\nonumber\\\\\n&=n\\int_{0}^{\\infty}g(u)W(u)du\\times\\tilde{E}^*[\\log \\tilde{h}(Y;\\bm{\\Phi})],\n\\end{align}\nwhere the expectation $\\tilde{E}^*$ is taken over $Y$ under the random truncated distribution $\\frac{g(u)W(u)}{\\int_{0}^{\\infty}g(t)W(t)dt}$. 
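The random-truncation reading of this scaling can be illustrated numerically (a minimal sketch, assuming an Exp(1) true density and the exponential-CDF weight $W(u)=1-e^{-u/\tilde{\mu}}$ with $\tilde{\mu}=2$, both illustrative assumptions): keeping each observation independently with probability $W(y_i)$ retains about $n\int_0^{\infty}g(u)W(u)du \approx \sum_{i}W(y_i)$ observations.

```python
import math
import random

random.seed(1)

def W(y, mu_t=2.0):
    # assumed exponential-CDF weight function
    return 1.0 - math.exp(-y / mu_t)

n = 100_000
ys = [random.expovariate(1.0) for _ in range(n)]  # g = Exp(1), an assumed true density

# thin the sample: keep observation i with probability W(y_i)
kept = [y for y in ys if random.random() < W(y)]

frac_kept = len(kept) / n
mean_weight = sum(W(y) for y in ys) / n
exact = 1.0 / (1.0 + 2.0)  # E[W(Y)] = 1/(mu_t + 1) for this g and W

# the kept fraction, the average weight, and E[W(Y)] all agree for large n
assert abs(frac_kept - exact) < 0.01
assert abs(mean_weight - exact) < 0.01
```

The retained subsample behaves like an i.i.d. draw from the truncated density $g(u)W(u)/\int g(t)W(t)dt$, which is exactly the expectation appearing above.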
Next, denote a random set $S_n=\\{i:V_i(y_i)=1\\}$, such that $\\mathcal{L}^{**}_n(\\bm{\\Phi};\\bm{y})$ can be written as\n\\begin{equation}\n\\mathcal{L}^{**}_n(\\bm{\\Phi};\\bm{y})=\\sum_{i\\in S_n}\\log \\tilde{h}(y_i;\\bm{\\Phi}),\n\\end{equation}\nwith effective number of terms $|S_n|\\approx n\\int_{0}^{\\infty}g(u)W(u)du\\approx \\sum_{i=1}^{n}W(y_i)$ in probability as $n\\rightarrow\\infty$. Comparing the above two equations, they simply correspond to a standard MLE with bias term of $P$.\n\n\\section{Preliminary analysis of the motivating Greek dataset} \\label{apx:prelim_data}\nModelling the property damage claim size distribution is very challenging. As observed from Figures \\ref{fig:density} and \\ref{fig:loglogplot}, which are also presented by \\cite{fung2021mixture}, the claim size distribution is not only heavy-tailed but also multi-modal. The key complexity of the empirical distribution is that there are many small distributional modes for smaller claims, as evidenced by the right panel of Figure \\ref{fig:density}. On the other hand, it is undesirable to model all these modes using an excessive number of mixture components as (i) precise predictions of small claims are of less relevance to insurance pricing and risk management; (ii) this impedes model interpretability. Further, the heavy-tailedness of the empirical distribution is evidenced by the asymptotic linearity of both the log-log plot and the mean excess plot in Figure \\ref{fig:loglogplot}. The asymptotic slope of the log-log plot suggests that the estimated tail index is $\\gamma\\approx 1.3$ while the Lomax tail index obtained by \\cite{fung2021mixture} is about $\\gamma=1.38$, under a subjective choice of splicing threshold. Note however that these only provide very rough guidance on the true tail index.\n\nNote that distributional multimodality and contamination are indeed prevalent not only in the aforementioned Greek dataset, but also in many publicly available insurance data sets. 
Notable examples include the French business interruption losses (\\textbf{frebiloss}), French motor third party liability claims (\\textbf{fremotor2sev9907} and \\textbf{freMPL8}) and Norwegian car claims (\\textbf{norauto}), which can all be retrieved from the \\textbf{R} package \\textbf{CASdatasets}. This suggests that the modelling challenges emphasized in this paper are not only valid for the Greek data set we are analyzing, but are also applicable to many insurance claim severity data sets.\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/density_original.pdf}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/density_log.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{Empirical density of claim amounts (left panel) and log claim amounts (right panel); the orange vertical lines represent amounts of 10,000, 20,000, 50,000 and 100,000 respectively.}\n\\label{fig:density}\n\\end{figure}\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/loglogplot.jpg}\n\\end{subfigure}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/me_plot.jpg}\n\\end{subfigure}\n\\end{center}\n\\caption{Left panel: log-log plot of the claim amounts; right panel: mean excess plot.}\n\\label{fig:loglogplot}\n\\end{figure}\n\n\n\\bibliographystyle{abbrvnat}\n\n\\section{Introduction}\nModelling insurance claim sizes is not only essential in various actuarial applications including pricing and risk management, but is also very challenging due to several peculiar characteristics of claim severity distributions. 
Claim size distributions often exhibit multimodality for small and moderate claims, when there exist unobserved heterogeneities possibly reflected by different claim types and accident causes, or when the observed samples come from a contaminated distribution. Also, the distribution is usually heavy-tailed in nature, where very large claims occur with a small but non-negligible probability.\nDue to the highly complex distributional characteristics, we have to admit the impossibility of perfectly capturing all the distributional features using a parametric model without an excessively large number of parameters (which results in over-fitting). When model misspecification is unavoidable, correct specification of the tail part is more important than finely capturing the distributional modes of smaller claims, which are rather immaterial to the insurance portfolio, because the large claims are the losses which can severely damage the portfolio. As a result, we need to specify an appropriate distributional model with a justifiable statistical inference approach which not only preserves sufficient flexibility to appropriately capture the whole severity distribution, but also puts a particular emphasis on robust estimation of the tail.\n\n\nThe existing actuarial loss modelling literature focuses largely on model specifications, by introducing various distributional classes to capture the peculiar characteristics of claim severity distributions. Notable directions include extreme value distributions (EVD, see e.g. \\cite{embrechts1999extreme}) to capture the heavy-tailedness, composite loss modelling (see e.g. \\cite{cooray2005modeling}, \\cite{scollnik2007composite}, \\cite{bakar2015modeling} and \\cite{grun2019extending}) to cater for the mismatch between body and tail behavior of claim severity distributions, and finite mixture model (FMM, see e.g. 
\\cite{LEE2010modeling} and \\cite{MILJKOVIC2016387}) to capture distributional multimodality.\nIn particular, FMM is becoming an increasingly useful smooth density estimation tool from an insurance claim severity modelling perspective, due to its high versatility theoretically justified by denseness theorems (\\cite{LEE2010modeling}).\nThe mismatch between its body and tail behavior can also be easily modelled by FMM by selecting varying distributional classes among mixture component functions (see e.g. \\cite{BLOSTEIN201935} and \\cite{fung2021mixture}), including both light-tailed and heavy-tailed distributions. In both actuarial research and practice, statistical inference for FMM is predominantly based on maximum likelihood estimation (MLE) with the use of the Expectation-Maximization (EM) algorithm.\n\nNonetheless, MLE would often cause tail-robustness issues where the tail part of the fitted model is very sensitive to model misspecifications -- when the observations are generated from a perturbed and\/or contaminated distribution.\nAs evidenced by several empirical studies including \\cite{fung2021mixture} and \\cite{wuthrich2021statistical}, the estimated tail part of the FMM obtained by MLE can be unreliable and highly unstable in most practical cases. This is mainly due to the overlapping density regions between mixture components modelling small to moderate claims (body) and those modelling large claims (tail). Hence, the estimated tail distribution will be heavily influenced by some smaller claims if FMM is not able to fully explain those small claims, which is always the case in practice due to the distributional complexity of real datasets, which is impossible to capture perfectly even by flexible density approximation tools such as FMM. Under the MLE approach, FMM may fail to extrapolate the large claims well, and this would have serious implications for insurance pricing and risk management. 
It is therefore natural to question whether or not MLE is still a plausible approach in modelling actuarial claim severity data, and whether or not there exists an alternative statistical inference tool which better addresses our modelling challenges and outperforms the MLE.\n\nRobust statistical inference methods for heavy-tailed distributions are relatively scarce in the actuarial science literature. \nNotable contributions include \\cite{brazauskas2000robust}, \\cite{serfling2002efficient}, \\cite{brazauskas2003favorable} and \\cite{dornheim2007robust}, who adopt various kinds of statistical inference tools, such as quantile, trimmed mean, trimmed-M and generalized median estimators, to robustly estimate Gamma, Pareto and Log-normal distributions.\nRecent actuarial works study several variations of the method of moments (MoM) for robust estimation of Pareto and Log-normal loss models. Notable contributions in this direction include \\cite{brazauskas2009robust}, \\cite{poudyal2021robust} (trimmed moments), \\cite{zhao2018robust} (winsorized moments) and \\cite{poudyal2021truncated} (truncated moments).\nNote that these research works address the robustness issue against upper outliers by reducing the influence of a few extreme observations on the estimated model parameters. This outlier-robustness issue is however very different from the tail-robustness issue mentioned above as the key motivation of this paper, where contamination of the body part affects the tail extrapolation. Very few research works look into this ``non-standard\\" tail-robustness issue. Notable contributions are \\cite{beran2012robust} and \\cite{gong2018robust} who propose a huberization of the MLE, which protects against perturbations and misspecifications in the body part of the distribution, to robustly estimate the tail index of Pareto and Weibull distributions. \nAll of the above existing approaches focus solely on one- or two-parameter distributions. 
A general robust tail estimation strategy under multi-parameter flexible models like FMM is lacking.\n\nMotivated by the aforementioned tail-robustness issue in the insurance context, we propose a new maximum weighted likelihood estimation (MWLE) approach for robust heavy-tail modelling of FMM. \nUnder the MWLE, an observation-dependent weight function is introduced to the log-likelihood, de-emphasizing the contributions of smaller claims and hence reducing their influence on the estimated tail part of the FMM. Down-weighting small claims is also natural from an insurance loss modelling perspective: as mentioned at the beginning of this section, accurate modelling of the large claims is more important than that of the smaller claims. \nTo offset the bias caused by the weighting scheme, we also include an adjustment term in the weighted log-likelihood, which can be interpreted as the effect of randomly truncating the observations. With the bias adjustment term, we prove that the estimated parameters under the proposed MWLE are consistent and asymptotically normal for any pre-specified choice of weight function.\nAlso, under some specific choices of weight functions, we will show that the MWLE tail index, which determines the tail heaviness of a distribution, is consistent, even under model misspecification where the true model is not FMM. Therefore, MWLE can be regarded as a generalized alternative framework to the Hill estimator (\\cite{hill1975simple}). Furthermore, with a probabilistic interpretation of the proposed MWLE approach, it is still possible to derive a Generalized EM (GEM) algorithm to efficiently estimate parameters which maximize the weighted log-likelihood function.\n\nNote that the proposed MWLE is different from the existing statistics papers that adopt weighting schemes for likelihood-based inference. 
The existing literature is mainly motivated by one of the following two aspects, both very different from the focus of this paper: (i) Robustness against upper and lower outliers, where related research works include e.g. \\cite{fieldsmith1994}, \\cite{markatou1997weighted}, \\cite{markatou2000mixture}, \\cite{dupuis2002robust}, \\cite{ahmed2005robust}, \\cite{wong2014robust} and \\cite{aeberhard2021robust}; (ii) Bringing in more relevant observations for statistical inference to increase precision while trading off some biases, studied by e.g. \\cite{wang2001maximum}, \\cite{hu2002weighted}, \\cite{wang2004asymptotic} and \\cite{wang2005selecting}. Note also that our proposed MWLE differs from the existing statistics literature in terms of mathematical technicality, since none of the above papers incorporate the truncation-based bias adjustment as included in the proposed MWLE.\n\nThe rest of this paper is structured as follows. In Section \\ref{sec:fmm}, we briefly revisit the class of FMM for heavy-tail modelling. Section \\ref{sec:MWLE} introduces the proposed MWLE for robust heavy-tail modelling of FMM and explains its motivations in terms of insurance claims modelling. Section \\ref{sec:theory} explores several theoretical properties to understand and justify the proposed MWLE. After that, we present in Section \\ref{sec:em} two types of GEM algorithms for efficient parameter estimation under the MWLE approach on FMM. In Section \\ref{sec:ex}, we analyze the performance of the proposed MWLE through three empirical examples: a toy example, a simulation study and a real insurance claim severity dataset. 
After showing the superior performance of MWLE compared to MLE, we finally summarize our findings in Section \\ref{sec:discussion} with a brief discussion of how the proposed MWLE can be extended to a regression framework.\n\n\n\n\\section{Finite mixture model} \\label{sec:fmm}\nThis section provides a very brief review of the finite mixture model (FMM), which serves as a flexible density estimation tool. Suppose that there are $n$ i.i.d. claim severities $\\bm{Y}=(Y_1,\\ldots, Y_n)$ with realizations $\\bm{y}=(y_1,\\ldots, y_n)$. Each $Y_i$ is generated by an unknown probability distribution $G(\\cdot)$ with density function $g(\\cdot)$. In the insurance context, claim severity distribution often exhibits multimodality, which results from the unobserved heterogeneity stemming from the amalgamation of different types of claims unobserved in advance. Also, claim sizes are often heavy-tailed in nature, which can be attributed to a few large losses from a portfolio of policies which usually represent the greatest part of the indemnities paid by the insurance company. The mismatch between body and tail behavior often poses difficulties in fitting the data well using only a standard parametric distribution.\n\nMotivated by the challenges of modelling insurance claim severities, we aim to model the dataset using a finite mixture model (FMM). Define a class of finite mixture distributions $\\mathcal{H}=\\{H(\\cdot;\\bm{\\Phi}):\\bm{\\Phi}\\in\\Omega\\}$, where $\\bm{\\Phi}=(\\psi_1,\\ldots,\\psi_P)^T$ is a column vector with length $P$ representing the model parameters and $\\Omega$ is the parameter space. Its density function $h(y;\\bm{\\Phi})$ is given by the following form:\n\n\\begin{equation} \\label{eq:model}\nh(y_i;\\bm{\\Phi})=\\sum_{j=1}^{J}\\pi_j f_b(y_i;\\bm{\\varphi}_j)+\\pi_{J+1}f_t(y_i;\\bm{\\eta}),\\qquad y_i>0,\n\\end{equation}\nwhere the parameters $\\bm{\\Phi}$ can alternatively be written as $\\bm{\\Phi}=(\\bm{\\pi},\\bm{\\varphi},\\bm{\\eta})$. 
Here, $\\bm{\\pi}=(\\pi_1,\\ldots,\\pi_{J+1})$ are the mixture probabilities for each of the $J+1$ components with $\\sum_{j=1}^{J+1}\\pi_j=1$. $\\bm{\\varphi}=(\\bm{\\varphi}_1,\\ldots,\\bm{\\varphi}_J)$ and $\\bm{\\eta}$ are the parameters for the mixture densities $f_b$ and $f_t$ respectively.\n\nThe $J$ mixture components with densities $f_b$ mainly serve to model the multimodality of the body part of the distribution. $f_b$ is naturally chosen as a light-tailed distribution such as Gamma, Weibull or Inverse-Gaussian.\nThe remaining mixture component $f_t$ is designed to capture the large observations so that the tail distribution can be properly extrapolated. Possible choices of heavy-tailed distribution for $f_t$ include Log-normal, Pareto and Inverse-Gamma.\n\n\n\\section{Maximum weighted log-likelihood estimator} \\label{sec:MWLE}\nUnder the maximum likelihood estimation (MLE) approach, parameter estimation requires maximizing the log-likelihood function\n\n\\begin{equation} \\label{eq:loglik}\n\\mathcal{L}_n(\\bm{\\Phi};\\bm{y})=\\sum_{i=1}^{n}\\log h(y_i;\\bm{\\Phi})\n\\end{equation}\nwith respect to the parameters $\\bm{\\Phi}$. Under this approach, each claim has the same relative influence on the estimated parameters, but from an insurance loss modelling and ratemaking perspective, correct specification and projection of larger claims are more important than those of smaller claims. More importantly, as explained by \\cite{fung2021mixture} and \\cite{wuthrich2021statistical}, MLE of the FMM in Equation (\\ref{eq:model}) would fail in most practical cases due to incorrect estimation of tail heaviness under model misspecification. More precisely, because of the overlapping region between the body part $f_b$ and tail part $f_t$ of the distribution, the small claims may distort the estimated tail distribution $f_t$ if they are not fully captured by the $J$ mixture densities in the body. 
However, due to the highly complex multimodal characteristics of the body distribution, which often appear in real insurance claim severity data, it is impossible to capture all the body distributional patterns without a prohibitively large $J$, which causes over-fitting and loss of model interpretability. Therefore, it is often the case in practice that MLE of the FMM results in unstable estimates of the tail distribution, causing unreliable tail extrapolation.\n\nOne way to mitigate the aforementioned model misspecification effect is to impose observation-dependent weights on the log-likelihood function, where a larger claim $y$ is assigned a larger weight. This reduces the influence of smaller observed values on the estimated tail parameter. For parameter estimation, we instead propose maximizing the following weighted log-likelihood\n\n\\begin{equation} \\label{eq:loglik_weight}\n\\mathcal{L}^*_n(\\bm{\\Phi};\\bm{y})=\\sum_{i=1}^{n}W(y_i)\\log \\frac{h(y_i;\\bm{\\Phi})W(y_i)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du},\n\\end{equation}\nwhere $0\\leq W(\\cdot)\\leq 1$ is the weight of the log-likelihood function. We call the resulting estimator the maximum weighted likelihood estimator (MWLE). To allow for greater relative influence of larger claims, we construct $W(u)$ as a monotonically non-decreasing function of $u$. In this case, we may interpret the weighted log-likelihood function as follows: first, we pretend that the claims $y_1,\\ldots,y_n$ are observed only $W(y_1),\\ldots,W(y_n)$ times respectively. However, this alone will introduce a bias towards a heavier estimated tail, because it implies that more large claims are effectively included due to the weighting effect. 
To remove such a bias, we pretend to model $y_i$ by a random truncation model $\\tilde{h}(y_i;\\bm{\\Phi}):=h(y_i;\\bm{\\Phi})W(y_i)\/\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du$ instead of the original modelling distribution $h(y_i;\\bm{\\Phi})$.\n\n\\begin{remark}\nThe proposed MWLE can be viewed as a form of M-estimator, where the optimal parameters are determined by maximizing an objective function. We here discuss two special cases of the MWLE. (i) MLE: when $W(\\cdot)=1$, the MWLE reduces to the standard MLE; (ii) Truncated MLE: when $W(y)=1\\{y\\geq\\tau\\}$ for some threshold $\\tau>0$, the MWLE reduces to the truncated MLE introduced by \\cite{marazzi2004adaptively}, where a hard rejection is applied to all samples smaller than $\\tau$.\n\\end{remark}\n\n\n\n\n\\section{Theoretical Properties} \\label{sec:theory}\nThis section presents several theoretical properties of the proposed MWLE to justify its use. Unless specified otherwise, throughout this section the estimated model parameters $\\hat{\\bm{\\Phi}}$ are obtained by maximizing the proposed weighted log-likelihood function given by Equation (\\ref{eq:loglik_weight}).\n\\subsection{Asymptotic behavior with fixed weight function}\n\\subsubsection{Consistency and asymptotic normality}\nWe first show that the proposed weighted log-likelihood approach converges to the true model parameters as $n\\rightarrow\\infty$. The proof is presented in Section 2 of the supplementary materials.\n\\begin{theorem} \\label{thm:asym_tru}\nSuppose that $G(\\cdot)=H(\\cdot;\\bm{\\Phi}_0)\\in\\mathcal{H}$. Assume that the density function $h(y;\\bm{\\Phi})$ satisfies a set of regularity conditions outlined in Section 1 of the supplementary materials\\footnote{Note that this set of regularity conditions is equivalent to that required for consistent and asymptotically normal estimation under MLE.}. 
Then, there exists a local maximizer $\\hat{\\bm{\\Phi}}_n$ of the weighted log-likelihood $\\mathcal{L}^{*}_n(\\bm{\\Phi};\\bm{y})$ such that\n\\begin{equation}\n\\sqrt{n}(\\hat{\\bm{\\Phi}}_n-\\bm{\\Phi}_0)\\overset{d}{\\rightarrow}\\mathcal{N}(\\bm{0},\\bm{\\Sigma}),\n\\end{equation}\nwhere $\\bm{\\Sigma}=\\bm{\\Gamma}^{-1}\\bm{\\Lambda}\\bm{\\Gamma}^{-1}$, with $\\bm{\\Lambda}$ and $\\bm{\\Gamma}$ being $P\\times P$ matrices given by\n\\begin{align} \\label{eq:asym:lambda}\n\\bm{\\Lambda} \n&=E_{\\bm{\\Phi}_0}\\left[W(Y)^2\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\nonumber\\\\\n&\\quad-\\frac{1}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]}\\Bigg\\{E_{\\bm{\\Phi}_0}\\left[W(Y)^2\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T\\nonumber\\\\\n&\\hspace{8em}+E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)^2\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T\\Bigg\\}\\nonumber\\\\\n&\\quad+\\frac{E_{\\bm{\\Phi}_0}\\left[W(Y)^2\\right]}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]^2}E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T\n\\end{align}\nand\n\\begin{align} \\label{eq:asym:gamma}\n\\bm{\\Gamma}\n&=-E_{\\bm{\\Phi}_0}\\left[W(Y)\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log 
h(Y;\\bm{\\Phi})\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\nonumber\\\\\n&\\quad+\\frac{1}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]}E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T,\n\\end{align}\nwhere $E_{\\bm{\\Phi}_0}[Q(Y)]=\\int_{0}^{\\infty}Q(u)h(u;\\bm{\\Phi}_0)du$ represents the expectation under the density $h(\\cdot;\\bm{\\Phi}_0)$ for any function $Q$, and the derivative ${\\partial}\/{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})$ is assumed to be a column vector of length $P$.\n\\end{theorem}\n\n\\begin{remark}\nWhen $W(\\cdot)=1$, all terms except the first on the right-hand side of Equations (\\ref{eq:asym:lambda}) and (\\ref{eq:asym:gamma}) vanish. As a result, the asymptotic variance $\\bm{\\Sigma}=\\bm{\\Gamma}^{-1}\\bm{\\Lambda}\\bm{\\Gamma}^{-1}$ reduces to the inverse of the Fisher information matrix under the standard MLE approach.\n\\end{remark}\n\n\\begin{remark}\nTheorem \\ref{thm:asym_tru} only asserts the existence of a local maximizer instead of a global maximizer, because in FMM it is common that the likelihood function has multiple critical points and\/or is unbounded (\\cite{Mclachlan2004Finite}).\n\\end{remark}\n\nThe above theorem suggests that for large sample sizes, the estimated parameters are approximately unbiased and we may approximate the variance of the estimated parameters as\n\\begin{equation} \\label{eq:asym_var}\n\\widehat{\\text{Var}}(\\hat{\\bm{\\Phi}}_{n})\\approx \\frac{1}{n}\\hat{\\bm{\\Gamma}}_n^{-1}\\hat{\\bm{\\Lambda}}_n\\hat{\\bm{\\Gamma}}_n^{-1},\n\\end{equation}\nwhere $\\hat{\\bm{\\Lambda}}_n$ and $\\hat{\\bm{\\Gamma}}_n$ are given by $\\bm{\\Lambda}$ and $\\bm{\\Gamma}$ in Equations (\\ref{eq:asym:lambda}) and (\\ref{eq:asym:gamma}) except that the expectations are changed to empirical means and 
$\\bm{\\Phi}_0$ is changed to $\\hat{\\bm{\\Phi}}_n$. Then, it is easy to construct a two-sided $(1-\\kappa)$ Wald-type confidence interval (CI) for $\\psi_p$ ($p=1,\\ldots,P$) as\n\\begin{equation} \\label{eq:asym_CI}\n\\left[\\hat{\\psi}_{n,p}-\\frac{z_{1-\\kappa\/2}}{\\sqrt{n}}\\sqrt{\\left[\\hat{\\bm{\\Gamma}}_n^{-1}\\hat{\\bm{\\Lambda}}_n\\hat{\\bm{\\Gamma}}_n^{-1}\\right]_{p,p}},\\hat{\\psi}_{n,p}+\\frac{z_{1-\\kappa\/2}}{\\sqrt{n}}\\sqrt{\\left[\\hat{\\bm{\\Gamma}}_n^{-1}\\hat{\\bm{\\Lambda}}_n\\hat{\\bm{\\Gamma}}_n^{-1}\\right]_{p,p}}\\right],\n\\end{equation}\nwhere $\\hat{\\psi}_{n,p}$ is the estimated $\\psi_p$, $z_{\\kappa}$ is the $\\kappa$-quantile of the standard normal distribution and $\\left[\\bm{M}\\right]_{p,p}$ is the $(p,p)$-th element of a matrix $\\bm{M}$. For other quantities of interest (e.g.~mean, VaR and CTE of claim amounts), one may apply the delta method or simulate parameters from ${\\cal N}(\\hat{\\bm{\\Phi}}_{n},\\widehat{\\text{Var}}(\\hat{\\bm{\\Phi}}_{n}))$ to analytically or empirically approximate their CIs.\n\n\\medskip\n\nNext, we examine the asymptotic properties of the MWLE after dropping the assumption that $G(\\cdot)\\in\\mathcal{H}$ (i.e. we may misspecify the model class).\n\n\\begin{theorem} \\label{thm:asym_mis}\nAssume that the density function $h(y;\\bm{\\Phi})$ satisfies the same set of regularity conditions as in the previous theorem. Further assume that there is a local maximizer $\\bm{\\Phi}_0^{*}$ of\n\\begin{equation}\n\\tilde{E}\\left[\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\right]:=\n\\tilde{E}\\left[W(Y)\\log \\frac{h(Y;\\bm{\\Phi})W(Y)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du}\\right],\n\\end{equation}\nwhere $\\tilde{E}\\left[Q(Y)\\right]=\\int_0^{\\infty}Q(u)dG(u)$ represents the expectation under the distribution $G(y)$ for any function $Q$. 
Then, there exists a local maximizer $\\hat{\\bm{\\Phi}}_n$ of the weighted log-likelihood $\\mathcal{L}^{*}_n(\\bm{\\Phi};\\bm{y})$ such that\n\\begin{equation}\n\\sqrt{n}(\\hat{\\bm{\\Phi}}_n-\\bm{\\Phi}_0^{*})\\overset{d}{\\rightarrow}\\mathcal{N}(\\bm{0},\\tilde{\\bm{\\Sigma}}),\n\\end{equation}\nwhere $\\tilde{\\bm{\\Sigma}}=\\tilde{\\bm{\\Gamma}}^{-1}\\tilde{\\bm{\\Lambda}}\\tilde{\\bm{\\Gamma}}^{-1}$, with $\\tilde{\\bm{\\Lambda}}$ and $\\tilde{\\bm{\\Gamma}}$ given by\n\\begin{align} \\label{eq:asym:lambda_mis}\n\\tilde{\\bm{\\Lambda}} \n&=\\tilde{E}\\left[W(Y)^2\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\nonumber\\\\\n&\\quad-\\frac{1}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]}\\Bigg\\{\\tilde{E}\\left[W(Y)^2\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T\\nonumber\\\\\n&\\hspace{8em}+E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\tilde{E}\\left[W(Y)^2\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T\\Bigg\\}\\nonumber\\\\\n&\\quad+\\frac{\\tilde{E}\\left[W(Y)^2\\right]}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]^2}E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T\n\\end{align}\nand\n\\begin{align} \\label{eq:asym:gamma_mis}\n\\tilde{\\bm{\\Gamma}}\n&=\\tilde{E}\\left[W(Y)\\frac{\\partial^2}{\\partial\\bm{\\Phi}\\partial\\bm{\\Phi}^T}\\log 
h(Y;\\bm{\\Phi})\\right]\n-\\frac{\\tilde{E}\\left[W(Y)\\right]}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]}E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial^2}{\\partial\\bm{\\Phi}\\partial\\bm{\\Phi}^T}\\log h(Y;\\bm{\\Phi})\\right]\\nonumber\\\\\n&\\quad-\\frac{\\tilde{E}\\left[W(Y)\\right]}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]}E_{\\bm{\\Phi}_0}\\left[W(Y)\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\nonumber\\\\\n&\\quad+\\frac{\\tilde{E}\\left[W(Y)\\right]}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]^2}E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]^T.\n\\end{align}\n\\end{theorem}\n\nAs shown by the above theorem, the MWLE is still asymptotically convergent and normally distributed. As a result, it remains justifiable to evaluate parameter uncertainties and CIs in the forms of Equations (\\ref{eq:asym_var}) and (\\ref{eq:asym_CI}). However, in the context of modelling heavy-tailed distributions, for example, there could be an asymptotic bias in the estimated tail index. It is therefore important to theoretically examine how the choice of weight function influences the impact of model misspecification. These questions are deferred to the next subsections on robustness and on asymptotics under varying weight functions.\n\n\\subsubsection{Robustness}\nIt is well known that the MLE is the most efficient estimator among all asymptotically unbiased estimators. Therefore, in attempting to reduce the bias of the estimated tail distribution under misspecified models through the MWLE approach with $W(\\cdot)\\neq 1$, there is a trade-off between bias reduction and loss of efficiency. 
This subsection will analyze such a trade-off, which may provide guidance on choosing an appropriate weight function $W(\\cdot)$. In light of Theorem \\ref{thm:asym_tru}, it is easy to show the following proposition by applying the delta method.\n\n\\begin{proposition}\nSuppose that $G(\\cdot)=H(\\cdot;\\bm{\\Phi}_0)\\in\\mathcal{H}$ with the same set of regularity conditions as in the previous theorems. Define $\\hat{\\bm{\\Phi}}_n$ and $\\hat{\\bm{\\Phi}}_n^{(0)}$ as the MWLE and MLE respectively. Then, for any differentiable function $U(\\cdot)$, the relative asymptotic efficiency (AEFF) of $U(\\hat{\\bm{\\Phi}}_n)$ is given by\n\\begin{equation}\n\\text{AEFF}(W;\\bm{\\Phi}_0):=\\lim_{n\\rightarrow\\infty}\\frac{\\text{Var}(U(\\hat{\\bm{\\Phi}}_n^{(0)}))}{\\text{Var}(U(\\hat{\\bm{\\Phi}}_n))}=\\frac{U'(\\bm{\\Phi}_0)^T\\bm{\\Sigma}^{(0)}U'(\\bm{\\Phi}_0)}{U'(\\bm{\\Phi}_0)^T\\bm{\\Sigma}U'(\\bm{\\Phi}_0)},\n\\end{equation}\nwhere $U'(\\bm{\\Phi})$ is the gradient of $U(\\bm{\\Phi})$ w.r.t. $\\bm{\\Phi}$, and $\\bm{\\Sigma}^{(0)}$ is the inverse of the Fisher information matrix under the standard MLE approach.\n\\end{proposition}\n\nNext, we need to quantify robustness by some statistical measures. In a theoretical setting, we follow e.g. \\cite{huber1981robust}, \\cite{beran2012robust} and \\cite{gong2018robust} to consider the case where $Y_i$ is generated by a contamination model, given by\n\n\\begin{equation} \\label{eq:asym_contam}\nG(y):=G(y;\\epsilon,M,\\bm{\\Phi}_0)=(1-\\epsilon)H(y;\\bm{\\Phi}_0)+\\epsilon M(y), \\quad y>0,\n\\end{equation}\nfor some contamination distribution function $M$. 
Then, the asymptotic bias can be analyzed through evaluating the influence function (IF), a column vector of length $P$ given by\n\n\\begin{equation}\n\\text{IF}(\\bm{\\Phi}_0; H, M)=\\lim_{\\epsilon\\rightarrow 0}\\frac{\\tilde{\\bm{\\Phi}}^{\\epsilon,M}-\\bm{\\Phi}_0}{\\epsilon},\n\\end{equation}\nwhere $\\tilde{\\bm{\\Phi}}^{\\epsilon,M}$ denotes the asymptotic estimated parameters when the contaminated distribution $G(\\cdot;\\epsilon,M,\\bm{\\Phi}_0)$ given by Equation (\\ref{eq:asym_contam}) generates $Y_i$, in contrast to the true model parameters $\\bm{\\Phi}_0$. The IF can be interpreted as the infinitesimal asymptotic bias of the estimated parameters caused by perturbing the model generating $Y_i$. A smaller $|\\text{IF}_p(\\bm{\\Phi}_0; H, M)|$ (with $\\text{IF}_p$ being the $p^{\\text{th}}$ element of $\\text{IF}$) means a more robust estimation of $\\psi_p$ under model misspecification. Our goal is to demonstrate the potential of the proposed MWLE to reduce such a bias and hence improve robustness. We have the following proposition, which derives the IF under the MWLE approach:\n\n\\begin{proposition}\nThe IF is given by\n\\begin{align}\n\\text{IF}(\\bm{\\Phi}_0; H, M)\n&=-\\bm{\\Gamma}^{-1}\\Bigg\\{E_M\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\nonumber\\\\\n&\\hspace{5em}-\\frac{E_M\\left[W(Y)\\right]}{E_{\\bm{\\Phi}_0}\\left[W(Y)\\right]}E_{\\bm{\\Phi}_0}\\left[W(Y)\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(Y;\\bm{\\Phi})\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\\Bigg\\},\n\\end{align}\nwhere $E_M[Q(Y)]=\\int_0^{\\infty}Q(u)dM(u)$ for any function $Q$, and $\\bm{\\Gamma}$ is given by Equation (\\ref{eq:asym:gamma}).\n\\end{proposition}\n\nWe will show empirically how the choice of weight function $W(\\cdot)$ affects the AEFF and IF in Section \\ref{sec:ex:toy}, which will help us understand the bias-variance trade-off of our proposed MWLE approach.\n\n\\subsection{Asymptotic behavior of the tail index with varying 
weight functions} \\label{sec:theory:tail_idx}\n\nThe tail index measures the tail-heaviness of a probability distribution. Correctly specifying the tail index is a critical task in modelling insurance data of a heavy-tailed nature, as insurance companies often care more about large claims, which are more material than small ones. In this section, we show that under some sequences of weight functions $W_n(\\cdot)$ which depend on the number of observations $n$, the estimated tail index will be consistent under the proposed MWLE even if the model class is misspecified. \nThis result theoretically justifies how the proposed MWLE addresses the tail-robustness issue caused by model misspecification and distributional contamination, by showing that the reduced influence of smaller claims through down-weighting can be useful for producing a plausible tail index estimate. \nAlso, the result may provide some theoretical guidance on selecting an appropriate weight function.\n\nLet $\\mathcal{R}_{-\\gamma}$ denote the class of regularly varying distributions with tail index $\\gamma>0$, such that $\\bar{H}\\in\\mathcal{R}_{-\\gamma}$ if and only if $\\bar{H}(y)\\sim y^{-\\gamma}L_0(y)$ as $y\\rightarrow\\infty$ for some slowly varying function $L_0(y)\\in\\mathcal{R}_0$ satisfying $L_0(ty)\/L_0(y)\\rightarrow 1$ as $y\\rightarrow\\infty$ for any $t>0$. A smaller $\\gamma$ implies a heavier tail. Note that regularly varying distributions include many distributions that capture the heavy-tail behaviors of loss random variables, and we refer the readers to \\citet{cooke2014fat} for more explanation of these distributions. Also define the following transformed density functions\n\\begin{equation}\n\\tilde{g}_{n}(y)=\\frac{g(y)W_n(y)}{\\int_{0}^{\\infty}g(u)W_n(u)du},\\qquad \\tilde{h}_{n}(y;\\bm{\\Phi})=\\frac{h(y;\\bm{\\Phi})W_n(y)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W_n(u)du},\n\\end{equation}\nand $\\tilde{G}_{n}(\\cdot)$ and $\\tilde{H}_{n}(\\cdot)$ denote the corresponding distribution functions. 
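To make the reweight-and-renormalize transformation above concrete, the following sketch evaluates such a transformed density numerically. The logistic weight function, its tuning constants and the Log-normal choice of the generating density are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np
from scipy import integrate, stats

# Hypothetical logistic weight function: smoothly down-weights small
# claims (the functional form and constants tau, lam are assumptions).
def W(y, tau=5.0, lam=1.0):
    return 1.0 / (1.0 + np.exp(-(y - tau) / lam))

# Reweight-and-renormalize transform of a density g, mirroring
# g_tilde(y) = g(y) W(y) / int_0^inf g(u) W(u) du.
def transformed_pdf(g, y):
    norm, _ = integrate.quad(lambda u: g(u) * W(u), 0.0, np.inf)
    return g(y) * W(y) / norm

# Illustrative heavy-tailed generating density: Log-normal.
g = stats.lognorm(s=1.0).pdf
density_at_10 = transformed_pdf(g, 10.0)
```

Relative to the original density, the transformed density up-weights the tail region: the ratio `transformed_pdf(g, y) / g(y)` is monotonically non-decreasing in `y`, since it equals the weight function divided by a constant.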
We further put a bar over any function $Q$ to denote its survival function (i.e. $\\bar{Q}:=1-Q$). We then make the following assumptions:\n\n\\begin{enumerate}[font={\\bfseries},label={A\\arabic*.}]\n\\item $\\bar{G}\\in\\mathcal{R}_{-\\gamma_0}$ with tail index $\\gamma_0>0$.\n\\item $\\bar{H}(y;\\bm{\\Phi})=y^{-\\gamma}L(y;\\bm{\\Phi})$ for some slowly varying function $L$, so that $\\bar{H}\\in\\mathcal{R}_{-\\gamma}$. Here, $\\gamma$ is the only model parameter within $\\bm{\\Phi}$ that governs the tail index. Also, both $L(yt;\\bm{\\Phi})\/L(y;\\bm{\\Phi})$ and its derivative w.r.t. $\\bm{\\Phi}$ converge uniformly as $y\\rightarrow\\infty$ for any fixed $t>1$.\n\\item There exist some sequences of thresholds $\\{\\tau_n\\}_{n=1,2,\\ldots}$ with $\\tau_n\\rightarrow\\infty$ as $n\\rightarrow\\infty$ such that $\\tau_nW_n(\\tau_n)\\rightarrow 0$ as $n\\rightarrow\\infty$.\n\\item $\\tilde{E}_n[(\\log \\tilde{h}_{n}(Y;\\bm{\\Phi}))^2]\/(n\\tilde{E}[W_n(Y)])\\rightarrow 0$ as $n\\rightarrow\\infty$, where $\\tilde{E}_n[Q(Y)]=\\int_{0}^{\\infty}Q(u)d\\tilde{G}_{n}$ and $\\tilde{E}[Q(Y)]=\\int_{0}^{\\infty}Q(u)dG$ for any function $Q$.\n\\item The density functions $h(y;\\bm{\\Phi})$ and $g(y)$ are ultimately monotone (i.e. monotone on $y\\in (z,\\infty)$ for some $z>0$), uniformly in $\\bm{\\Phi}$.\n\\end{enumerate}\n\nAssumptions \\textbf{A1} and \\textbf{A2} ensure that both the model generating the observations and the fitted model class are heavy-tailed in nature, with tail heaviness quantified by the tail indices $\\gamma_0$ and $\\gamma$ respectively. In the finite mixture context (see Section \\ref{sec:fmm}), \\textbf{A2} can be easily satisfied by choosing $f_t$ in Equation (\\ref{eq:model}) as any standard regularly varying distribution, such as Pareto or Inverse-Gamma, with a compact parameter space. Assumption \\textbf{A3} asserts that all observations other than the extreme ones are greatly down-weighted. 
This assumption provides theoretical guidance for choosing the weight function: small to moderate claims should be allocated only small weights, while substantial weights should be assigned only to large claims. \\textbf{A4} requires that the effective number of MWLE observations $n\\tilde{E}[W_n(Y)]\\rightarrow\\infty$ so that large-sample theories hold. The numerator $\\tilde{E}_n[(\\log \\tilde{h}_{n}(Y;\\bm{\\Phi}))^2]$ grows much more slowly than the denominator since a logarithm is involved. Assumption \\textbf{A5} is of no practical concern. Now, we have the following theorem, which asserts the consistency of the estimated tail index. The proof is deferred to Section 3 of the supplementary materials.\n\n\\begin{theorem} \\label{thm:asym:tail_idx}\nAssume \\textbf{A1} to \\textbf{A5} hold for the settings under the MWLE, and the regularity conditions outlined in Section 1 of the supplementary materials are satisfied. Then, there exists a local maximizer $\\hat{\\bm{\\Phi}}_n$ of the weighted log-likelihood function $\\mathcal{L}_n^{*}(\\bm{\\Phi};\\bm{y})$ with the estimated tail index $\\hat{\\gamma}_n$ such that $\\hat{\\gamma}_n\\rightarrow\\gamma_0$ as $n\\rightarrow\\infty$. Further, the local maximizer $\\hat{\\gamma}_n$ is unique in probability as $n\\rightarrow\\infty$.\n\\end{theorem}\n\n\n\\begin{remark}\nConsider a special case where: (i) the weight functions $W_n(y)=1\\{y>\\tau_n\\}$ are step functions for some sequences of $\\tau_n\\rightarrow\\infty$; and (ii) the fitted model class $H(y;\\bm{\\Phi})$ is chosen as a Generalized Pareto distribution (GPD), or equivalently a Lomax distribution, which will be described in Section \\ref{sec:ex} (i.e. $H(y;\\bm{\\Phi})$ is an FMM in Equation (\\ref{eq:model}) with $J=0$ and $f_t$ being a GPD). 
Theorem \\ref{thm:asym:tail_idx} then asserts the consistency of the tail index obtained by the excess-over-threshold method on the GPD (\\cite{smith1987estimating}), which has a very close connection with the consistency property of the Hill estimator (\\cite{hill1975simple}) (see Section 4 of \\cite{smith1987estimating}). Therefore, we can regard the proposed MWLE approach as a generalized framework for the Hill-type estimator of \\cite{hill1975simple}.\n\\end{remark}\n\n\\section{Parameter estimation} \\label{sec:em}\n\\subsection{GEM algorithm} \\label{sec:em:gem}\nSince there is a probabilistic interpretation of the weighted log-likelihood given by Equation (\\ref{eq:loglik_weight}), it is feasible to construct a generalized Expectation-Maximization (GEM) algorithm for efficient parameter estimation. In this paper, we will present two distinct approaches to constructing the complete data, which result in two different kinds of GEM algorithms.\n\n\\subsubsection{Method 1: Hypothetical data approach}\n\\paragraph{Construction of complete data}\nTo address the challenges of directly optimizing the ``observed data\" weighted log-likelihood in Equation (\\ref{eq:loglik_weight}), we extend the ``hypothetical complete data\" method proposed by \\cite{FUNG2020MoECensTrun} by defining the complete data\n\\begin{equation}\n\\mathcal{D}^{\\text{com}}=\\{(y_i,\\bm{z}_i,k_i,\\{\\bm{z}'_{is},y'_{is}\\}_{s=1,\\ldots,k_i})\\}_{i=1,\\ldots,n},\n\\end{equation}\nwhere $k_i$ is the number of missing sample points ``generated\" by observation $i$, due to the probabilistic interpretation that each sample $i$ is removed with a probability of $1-W(y_i)$. 
As an auxiliary tool for efficient computations, we assume that $k_i$ follows a geometric distribution with probability mass function\n\\begin{equation} \\label{eq:em:k}\np(k_i;\\bm{\\Phi})=\\left[1-\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du\\right]^{k_i}\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du, \\qquad k_i=0,1,\\ldots,\n\\end{equation}\nand $\\{y'_{is}\\}_{s=1,\\ldots,k_i}$ are i.i.d. variables representing the missing samples. We assume that $Y'_{is}$ (with realization $y'_{is}$) is independent of $y_i$ and $k_i$, and follows a distribution with the following density function\n\\begin{equation} \\label{eq:em:y}\n\\tilde{h}^{*}(y'_{is};\\bm{\\Phi})=\\frac{h(y'_{is};\\bm{\\Phi})(1-W(y'_{is}))}{\\int_0^{\\infty}h(u;\\bm{\\Phi})(1-W(u))du},\\qquad y'_{is}>0.\n\\end{equation}\n\nFurther, $\\bm{z}_i=(z_{i1},\\ldots,z_{i(J+1)})$ are the latent mixture component assignment labels, where $z_{ij}=1$ if observation $i$ belongs to the $j^{\\text{th}}$ latent class and $z_{ij}=0$ otherwise. Similarly, $\\bm{z}'_{is}=(z'_{is1},\\ldots,z'_{is(J+1)})$ are the labels for the missing data, where $z'_{isj}=1$ if the $s^{\\text{th}}$ missing sample generated by observation $i$ belongs to the $j^{\\text{th}}$ latent class, and $z'_{isj}=0$ otherwise. 
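As a numerical sanity check of this construction, the sketch below evaluates the geometric probability mass function and the missing-sample density defined above. The Gamma-body\/Lomax-tail mixture, the logistic weight function and all parameter values are illustrative assumptions for the sketch, not the paper's fitted specification.

```python
import numpy as np
from scipy import integrate, stats

# Assumed logistic weight function (tau, lam are illustrative constants).
def W(y, tau=5.0, lam=1.0):
    return 1.0 / (1.0 + np.exp(-(y - tau) / lam))

# Assumed mixture density h(y; Phi) with one Gamma body component
# and a Lomax tail component.
def h(y):
    return 0.8 * stats.gamma(a=2.0, scale=1.0).pdf(y) \
         + 0.2 * stats.lomax(c=2.5, scale=3.0).pdf(y)

# q = int_0^inf h(u) W(u) du: probability that a sample is "kept".
q, _ = integrate.quad(lambda u: h(u) * W(u), 0.0, np.inf)

# Geometric pmf of the number of missing samples generated per observation.
def p_k(k):
    return (1.0 - q) ** k * q

# Density of a missing sample; since the mixture integrates to one,
# the normalizing constant int h(u)(1 - W(u)) du equals 1 - q.
def h_star(y):
    return h(y) * (1.0 - W(y)) / (1.0 - q)
```

Note that the normalizing constant of the missing-sample density coincides with the failure probability of the geometric distribution, so both objects are proper by construction.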
\n\nThe complete data weighted log-likelihood function is then given by\n\n\\begingroup\n\\allowdisplaybreaks\n\\begin{align}\n\\tilde{\\mathcal{L}}^{*}_n(\\bm{\\Phi};\\mathcal{D}^{\\text{com}})\n&=\\sum_{i=1}^{n}W(y_i)\\log\\Bigg\\{\\frac{\\left\\{\\prod_{j=1}^{J}[\\pi_jf_b(y_i;\\bm{\\varphi}_j)]^{z_{ij}}\\right\\}\\left(\\pi_{J+1}f_t(y_i;\\bm{\\eta})\\right)^{z_{i(J+1)}}W(y_i)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du} \\nonumber\\\\\n&\\hspace{8em} \\times \\left[1-\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du\\right]^{k_i}\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du \\nonumber\\\\\n&\\hspace{8em} \\times \\prod_{s=1}^{k_i}\\frac{\\left\\{\\prod_{j=1}^{J}[\\pi_jf_b(y'_{is};\\bm{\\varphi}_j)]^{z'_{isj}}\\right\\}\\left(\\pi_{J+1}f_t(y'_{is};\\bm{\\eta})\\right)^{z'_{is(J+1)}}(1-W(y'_{is}))}{\\int_0^{\\infty}h(u;\\bm{\\Phi})(1-W(u))du}\\Bigg\\} \\nonumber\\\\\n&=\\sum_{i=1}^nW(y_i)\\left\\{\\left[\\sum_{j=1}^{J}z_{ij}\\log \\pi_jf_b(y_i;\\bm{\\varphi}_j)\\right]+z_{i(J+1)}\\log\\pi_{J+1} f_t(y_i;\\bm{\\eta})\\right\\}\\nonumber\\\\\n&\\quad +\\sum_{i=1}^{n}\\sum_{s=1}^{k_i}W(y_i)\\left\\{\\left[\\sum_{j=1}^J z'_{isj}\\log\\pi_j f_b(y'_{is};\\bm{\\varphi}_j)\\right]+z'_{is(J+1)}\\log\\pi_{J+1} f_t(y'_{is};\\bm{\\eta})\\right\\}+\\text{const.},\n\\end{align}\n\\endgroup\nwhich is more computationally tractable. 
In the following, we will omit the constant term, which is irrelevant for the calculations.\n\n\n\n\n\\paragraph{Iterative procedures}\nIn the $l^{\\text{th}}$ iteration of the E-step, we compute the expectation of the complete data weighted log-likelihood as follows:\n\\begin{align} \\label{eq:em:q}\nQ^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})\n&=\\sum_{i=1}^nW(y_i)\\left\\{\\left[\\sum_{j=1}^{J}z_{ij}^{(l)}\\log \\pi_jf_b(y_i;\\bm{\\varphi}_j)\\right]+z_{i(J+1)}^{(l)}\\log\\pi_{J+1} f_t(y_i;\\bm{\\eta})\\right\\}\\nonumber\\\\\n&\\quad +\\sum_{i=1}^{n}k_i^{(l)}W(y_i)\\Big\\{\\left[\\sum_{j=1}^J {z'}^{(l)}_{ij}\\left(\\log\\pi_j +E\\left[\\log f_b(Y';\\bm{\\varphi}_j)|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}\\right]\\right)\\right] \\nonumber\\\\\n& \\hspace{8em} +{z'}^{(l)}_{i(J+1)}\\left(\\log\\pi_{J+1}+ E\\left[\\log f_t(Y';\\bm{\\eta})|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}\\right]\\right)\\Big\\},\n\\end{align}\nwhere $z_{ij}^{(l)}=E[z_{ij}|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}]$, ${z'}^{(l)}_{ij}=E[z'_{isj}|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}]$ and $k_i^{(l)}=E[K_i|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}]$. Also, $K_i$ follows $p(\\cdot;\\bm{\\Phi}^{(l-1)})$ in Equation (\\ref{eq:em:k}) and $Y'$ follows $\\tilde{h}^{*}(\\cdot;\\bm{\\Phi}^{(l-1)})$ in Equation (\\ref{eq:em:y}). The precise expressions of the above expectations are presented in Section 4.2 of the supplementary materials, under a particular specification of a Gamma distribution for $f_b$ and a Lomax distribution for $f_t$. This specification will also be studied in the illustrative examples (Section \\ref{sec:ex}).\n\nIn the M-step, we attempt to find updated parameters $\\bm{\\Phi}^{(l)}$ such that $Q^{*}(\\bm{\\Phi}^{(l)}|\\bm{\\Phi}^{(l-1)})\\geq Q^{*}(\\bm{\\Phi}^{(l-1)}|\\bm{\\Phi}^{(l-1)})$. Note in Equation (\\ref{eq:em:q}) that $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ is linearly separable w.r.t. 
parameters $(\\bm{\\pi}, \\bm{\\varphi}_1,\\ldots,\\bm{\\varphi}_J,\\bm{\\eta})$. Therefore, the optimization can be done separately w.r.t. each subset of parameters. Details are deferred to Section 4.3 of the supplementary materials. \n\n\\subsubsection{Method 2: Parameter transformation approach}\n\\paragraph{Construction of complete data}\nMotivated by the mixture probability transformation approach adopted by e.g. \\cite{lee2012algorithms} and \\cite{VERBELEN2015Censor} for truncated data, we here rewrite the random truncation distribution $\\tilde{h}(y_i;\\bm{\\Phi})$ in Equation (\\ref{eq:loglik_weight}) as\n\\begin{equation}\n\\tilde{h}(y_i;\\bm{\\Phi})\n=\\frac{h(y_i;\\bm{\\Phi})W(y_i)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du}\n=\\sum_{j=1}^J\\pi_j^{*}\\frac{f_b(y_i;\\bm{\\varphi}_j)W(y_i)}{\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du}+\\pi_{J+1}^{*}\\frac{f_t(y_i;\\bm{\\eta})W(y_i)}{\\int_0^{\\infty}f_t(u;\\bm{\\eta})W(u)du},\n\\end{equation}\nwhere $\\bm{\\pi}^{*}:=(\\pi_1^{*},\\ldots,\\pi_{J+1}^{*})$ are the transformed mixing weight parameters given by\n\\begin{equation} \\label{eq:em:pi_trans}\n\\pi_j^{*}=\\frac{\\pi_j\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du},~j=1,\\ldots,J;\\qquad\n\\pi_{J+1}^{*}=\\frac{\\pi_{J+1}\\int_0^{\\infty}f_t(u;\\bm{\\eta})W(u)du}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du}.\n\\end{equation}\n\nAs a result, the problem is reduced to maximizing the weighted log-likelihood of a finite mixture of randomly truncated distributions. In this case, define the complete data\n\\begin{equation}\n\\mathcal{D}^{\\text{com}}=\\{(y_i,\\bm{z}_i^{*})\\}_{i=1,\\ldots,n},\n\\end{equation}\nwhere $\\bm{z}_i^{*}=(z_{i1}^{*},\\ldots,z_{i(J+1)}^{*})$ are the labels such that $z_{ij}^{*}=1$ if observation $i$ belongs to the $j^{\\text{th}}$ (transformed) latent mixture component and $z_{ij}^{*}=0$ otherwise. 
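The mixing weight transformation above only requires one-dimensional numerical integration. The sketch below computes the transformed weights under an assumed Gamma-body\/Lomax-tail specification with a logistic weight function (all parameter values are illustrative), and also inverts the transformation to recover the original mixing weights.

```python
import numpy as np
from scipy import integrate, stats

# Assumed logistic weight and Gamma/Lomax components (illustrative only).
def W(y, tau=5.0, lam=1.0):
    return 1.0 / (1.0 + np.exp(-(y - tau) / lam))

pi = np.array([0.5, 0.3, 0.2])               # (pi_1, pi_2, pi_{J+1}), J = 2
comps = [stats.gamma(a=1.5, scale=0.8).pdf,  # body components f_b
         stats.gamma(a=4.0, scale=1.5).pdf,
         stats.lomax(c=2.5, scale=3.0).pdf]  # tail component f_t

def weighted_mass(f):                        # int_0^inf f(u) W(u) du
    return integrate.quad(lambda u: f(u) * W(u), 0.0, np.inf)[0]

masses = np.array([weighted_mass(f) for f in comps])
pi_star = pi * masses / np.sum(pi * masses)          # transformed weights
pi_back = (pi_star / masses) / np.sum(pi_star / masses)  # inversion
```

Since each transformed weight is proportional to the original weight times the component's weighted mass, dividing by the masses and renormalizing recovers the original weights exactly.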
The complete data weighted log-likelihood function is reduced to\n\\begin{align}\n\\tilde{\\mathcal{L}}^{*}_n(\\bm{\\Phi};\\mathcal{D}^{\\text{com}})\n&=\\sum_{i=1}^{n}W(y_i)\\Bigg\\{\\left[\\sum_{j=1}^{J}z_{ij}^{*}\\left(\\log\\pi_j^{*}+\\log\\frac{f_b(y_i;\\bm{\\varphi}_j)W(y_i)}{\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du}\\right)\\right]\\nonumber\\\\\n&\\hspace{8em}+z_{i(J+1)}^{*}\\left(\\log\\pi_{J+1}^{*}+\\log\\frac{f_t(y_i;\\bm{\\eta})W(y_i)}{\\int_0^{\\infty}f_t(u;\\bm{\\eta})W(u)du}\\right)\\Bigg\\}.\n\\end{align}\n\n\\paragraph{Iterative procedures}\nIn the $l^{\\text{th}}$ iteration of the E-step, the expectation of the complete data weighted log-likelihood is:\n\\begin{align} \\label{eq:em:q2}\nQ^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})\n&=\\sum_{i=1}^{n}W(y_i)\\Bigg\\{\\left[\\sum_{j=1}^{J}z_{ij}^{*(l)}\\left(\\log\\pi_j^{*}+\\log\\frac{f_b(y_i;\\bm{\\varphi}_j)W(y_i)}{\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du}\\right)\\right]\\nonumber\\\\\n&\\hspace{8em}+z_{i(J+1)}^{*(l)}\\left(\\log\\pi_{J+1}^{*}+\\log\\frac{f_t(y_i;\\bm{\\eta})W(y_i)}{\\int_0^{\\infty}f_t(u;\\bm{\\eta})W(u)du}\\right)\\Bigg\\},\n\\end{align}\nwhere $z_{ij}^{*(l)}=E[z_{ij}^{*}|\\mathcal{D}^{\\text{com}},\\bm{\\Phi}^{(l-1)}]$ is provided in Section 5.2 of supplementary materials.\n\nIn the M-step, similar to Method 1 that $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ is linearly separable w.r.t. parameters $(\\bm{\\pi}^{*}, \\bm{\\varphi}_1,\\ldots,\\bm{\\varphi}_J,\\bm{\\eta})$, we can maximize $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ sequentially w.r.t. each subset of parameters. Details are presented in Section 5.3 of supplementary materials. 
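Concretely, the E-step of Method 2 is the usual Bayes update for mixture responsibilities, applied to the transformed components. The sketch below is our own illustration under simplifying assumptions: `comp_density[j]` is a callable for $f_b(\cdot;\bm{\varphi}_j)$ (or $f_t(\cdot;\bm{\eta})$ for the last component) and `norm_consts[j]` is the precomputed integral $\int_0^{\infty}f_j(u)W(u)\,du$.

```python
def responsibilities(y, pi_star, comp_density, norm_consts, weight):
    """E-step sketch: posterior probabilities z*_{ij} that observation y
    belongs to each (transformed) mixture component, cf. Eq. (eq:em:q2)."""
    # Truncation-adjusted component density: f_j(y) W(y) / int f_j(u) W(u) du
    num = [p * f(y) * weight(y) / c
           for p, f, c in zip(pi_star, comp_density, norm_consts)]
    total = sum(num)
    return [v / total for v in num]
```

The common factor $W(y)$ cancels in the ratio; it is kept only for readability.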
Note that the M-step of this approach is slightly more computationally intensive than that of Method 1, as the target function $Q^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})$ here involves numerical integrals.\n\nAfter completing the iterative procedures, we will obtain an estimate of the transformed mixing weights $\\bm{\\pi}^{*}$ instead of $\\bm{\\pi}$. One can invert Equation (\\ref{eq:em:pi_trans}) to recover the estimated original mixing weights as follows:\n\\begin{equation}\n\\pi_j=\\frac{\\pi_j^{*}[\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_j)W(u)du]^{-1}}{\\sum_{j'=1}^J\\pi_{j'}^{*}[\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_{j'})W(u)du]^{-1}+\\pi_{J+1}^{*}[\\int_{0}^{\\infty}f_t(u;\\bm{\\eta})W(u)du]^{-1}},~j=1,\\ldots,J;\n\\end{equation}\n\\begin{equation}\n\\pi_{J+1}=\\frac{\\pi_{J+1}^{*}[\\int_{0}^{\\infty}f_t(u;\\bm{\\eta})W(u)du]^{-1}}{\\sum_{j'=1}^J\\pi_{j'}^{*}[\\int_0^{\\infty}f_b(u;\\bm{\\varphi}_{j'})W(u)du]^{-1}+\\pi_{J+1}^{*}[\\int_{0}^{\\infty}f_t(u;\\bm{\\eta})W(u)du]^{-1}}.\n\\end{equation}\n\n\n\\subsection{Ascending property of the GEM algorithm}\nIt is well known from \\cite{DEMPSTER1977EM} that an increase of the complete data log-likelihood implies an increase of the observed data log-likelihood (Equation (\\ref{eq:loglik})). This can be analogously extended to the proposed weighted log-likelihood framework, where we have the following proposition. The proof is deferred to Section 6 of the supplementary material.\n\n\\begin{proposition} \\label{prop:ascend}\nIf the expected complete data weighted log-likelihood is increased during the $l^{\\text{th}}$ iteration (i.e. $Q^{*}(\\bm{\\Phi}^{(l)}|\\bm{\\Phi}^{(l-1)})\\geq Q^{*}(\\bm{\\Phi}^{(l-1)}|\\bm{\\Phi}^{(l-1)})$), then the observed data weighted log-likelihood is also increased (i.e. 
$\\mathcal{L}^{*}_n(\\bm{\\Phi}^{(l)};\\bm{y})\\geq \\mathcal{L}^{*}_n(\\bm{\\Phi}^{(l-1)};\\bm{y})$).\n\\end{proposition}\n\n\\subsection{Parameter Initialization, convergence acceleration and stopping criterion} \\label{sec:em:init}\nInitialization of parameters is an important issue, in the sense that poor initializations may lead to slow convergence, numerical instability and even convergence to a spurious local maximum. We suggest determining the initial parameters $\\bm{\\Phi}^{(0)}$ using a modified version of the clusterized method of moments (CMM) approach of \\cite{gui2018fit}. Under this approach, we first determine a threshold $\\tau$ which classifies observations $y_i$ into either the body ($y_i\\leq\\tau$) or tail ($y_i>\\tau$) part of the distribution. We then apply a $K$-means clustering method to assign ``body\" observations $y_i$ with $y_i\\leq\\tau$ to one of the $J$ mixture components for the body, with a moment matching method for each mixture component to determine the initial parameters $(\\bm{\\pi}^{(0)},\\bm{\\varphi}^{(0)})$. The moment matching technique is also applied to ``tail\" observations $y_i$ with $y_i>\\tau$ to initialize $\\bm{\\eta}^{(0)}$. For details, we direct readers to Section 7 of the supplementary material.\n\nAs the EM algorithm often converges slowly with small step sizes, we further apply a step lengthening procedure for every two GEM iterations to accelerate the algorithm. 
This is described by \\cite{jamshidian1997acceleration} and its references therein as a ``pure accelerator\" for the EM algorithm.\n\n\\sloppy The GEM algorithm is iterated until the relative change of iterated parameters $\\Delta^{\\text{rel}}\\bm{\\Phi}^{(l)}:=|\\log(\\bm{\\Phi}^{(l)}\/\\bm{\\Phi}^{(l-1)})|\/P$ is smaller than a threshold of $10^{-5}$ or the maximum number of iterations of 1000 is reached.\n\n\\subsection{Specification of weight function}\nOur proposed MWLE is rather flexible in that it allows us to pre-specify any weight function $W(\\cdot)$ prior to fitting the GEM algorithm. The appropriate choice of $W(\\cdot)$ depends on decision rules beyond what statistical inference can provide. From an insurance loss modelling perspective, such decision rules include the relative importance, for an insurance company, of correctly specifying the tail distribution (to evaluate tail measures such as Value-at-Risk (VaR)) compared to more accurately modelling the smaller attritional claims. If accurate extrapolation of huge claims is far more important than modelling the smaller claims, then one may consider $W(y)$ to be close to zero unless $y$ is large, aligning with Assumption \\textbf{A3} in Section \\ref{sec:theory:tail_idx} to ensure near-consistent tail index estimation (Theorem \\ref{thm:asym:tail_idx}). Otherwise, one may consider a flatter $W(y)$ across $y$.\n\nThroughout the entire paper, we analyze the following general form of weight function\n\\begin{equation} \\label{eq:em:wgt_func}\nW(y):=W(y;\\xi,\\tilde{\\mu},\\tilde{\\phi})=\\xi+(1-\\xi)\\int_0^y\\frac{(\\tilde{\\phi}\\tilde{\\mu})^{-1\/\\tilde{\\phi}}}{\\Gamma(1\/\\tilde{\\phi})}u^{1\/\\tilde{\\phi}-1}e^{-u\/(\\tilde{\\phi}\\tilde{\\mu})}du,\\quad y>0,\n\\end{equation}\nwhich is the distribution function of a zero-inflated Gamma distribution. 
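For concreteness, this weight function can be evaluated numerically. The sketch below is ours, not the authors' code: it uses the mean/dispersion parametrization implied by the integrand (shape $1/\tilde{\phi}$, scale $\tilde{\phi}\tilde{\mu}$) and a naive trapezoidal rule in place of a library incomplete-gamma routine.

```python
import math

def gamma_cdf(y, mu, phi, n=10_000):
    """CDF of a Gamma with mean mu and dispersion phi (shape 1/phi,
    scale phi*mu), via trapezoidal integration of the density."""
    if y <= 0:
        return 0.0
    k, s = 1.0 / phi, phi * mu
    norm = math.gamma(k) * s ** k
    xs = [y * i / n for i in range(n + 1)]
    fs = [0.0 if x == 0.0 else x ** (k - 1.0) * math.exp(-x / s) / norm
          for x in xs]
    return sum(fs[i] + fs[i + 1] for i in range(n)) * (y / n) / 2.0

def W(y, xi, mu_t, phi_t):
    """Zero-inflated-Gamma weight W(y; xi, mu~, phi~) of Eq. (eq:em:wgt_func)."""
    if xi == 1.0 or mu_t == 0.0:
        return 1.0  # degenerates to the standard MLE weight
    return xi + (1.0 - xi) * gamma_cdf(y, mu_t, phi_t)
```

With $\xi=0$ and a small $\tilde{\phi}$, the weight behaves like the step function $1\{y\geq\tilde{\mu}\}$ noted in the characteristics below.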
The above weight function has the following characteristics:\n\\begin{itemize}\n\\item $W(y)$ is a non-decreasing function of $y$, meaning that smaller observations are down-weighted.\n\\item $\\xi\\in[0,1]$ is the minimum weight assigned to each observation.\n\\item $\\tilde{\\mu}$ and $\\tilde{\\phi}$ are the location and dispersion hyperparameters of the Gamma distribution, respectively. A larger $\\tilde{\\mu}$ means more (small to moderate) claims are under-weighted to a larger extent, while $\\tilde{\\phi}$ controls the shape of the weight function, i.e. how the observations are under-weighted.\n\\item If $\\xi=1$ or $\\tilde{\\mu}=0$, then the weight function reduces to $W(\\cdot)=1$, leading to the standard MLE approach.\n\\item If $\\xi=0$ and $\\tilde{\\phi}\\rightarrow 0$, then $W(y)=1\\{y\\geq\\tilde{\\mu}\\}$, meaning that only observations greater than $\\tilde{\\mu}$ are informative in determining the estimated parameters.\n\\end{itemize}\n\nOverall, a smaller $\\xi$, larger $\\tilde{\\mu}$ and smaller $\\tilde{\\phi}$ represent greater under-weighting of small claims, for which we expect more robust tail estimation at the cost of lower efficiency in estimating the body. In this paper, instead of quantifying decision rules to select the hyperparameters, in the subsequent sections we empirically test a wide range of combinations of $(\\xi,\\tilde{\\mu},\\tilde{\\phi})$ to study how these hyperparameters affect the trade-off between tail-robustness and estimation efficiency. These provide practical guidance and assessments for determining suitable hyperparameters.\n\n\\begin{remark}\nThere are many possible ways to quantify the decision rule to select the ``optimal\" weight function hyperparameters. We here briefly discuss two possible ways: (1) Consider a goodness-of-fit test statistic for heavy-tailed distributions, such as the modified AD test (\\cite{ahmad1988assessment}). 
Then select weight function hyperparameters which optimizes the test statistic; (2) Define an acceptable range of estimated parameter uncertainty of tail index, e.g. two times as the uncertainty obtained by MLE. Then select the hyperparameters with the greatest distortion metric (e.g. the average downweighting factor $\\sum_{i=1}^{n}(1-W(y_i))\/n$) where the tail index uncertainty is still within the acceptable range.\n\\end{remark}\n\n\n\\subsection{Choice of model complexity} \\label{sec:em:complex}\nThe above GEM algorithm assumes a fixed number of mixture component $J$. However, it is important to control the model complexity by choosing an appropriate $J$ which allows enough flexibility to capture the distributional characteristics without over-fitting. \n\nThe first criterion is motivated by maximizing the expected weighted log-likelihood\n\\begin{equation} \\label{eq:em:e_wgt_ll}\nn\\times\\tilde{E}[\\mathcal{L}^{*}(\\bm{\\Phi};\\bm{Y})]=n\\times\\tilde{E}\\left[W(Y)\\log\\frac{h(Y;\\bm{\\Phi})W(Y)}{\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du}\\right],\n\\end{equation}\nwhere the expectation is taken on $Y$ under the true model generating the observations. Without knowing the true model (in real data applications), Equation (\\ref{eq:em:e_wgt_ll}) is approximated by $\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})$ in Equation (\\ref{eq:loglik_weight}) with fitted model parameters $\\hat{\\bm{\\Phi}}$. Note that it is positively biased with correction term $\\text{tr}(-\\bm{\\Gamma}^{-1}\\bm{\\Lambda})$ shown by \\cite{konishi1996generalised}. 
This leads to a robustified AIC\n\\begin{equation}\n\\text{RAIC}=-2\\times\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})+2\\times\\text{tr}(-\\hat{\\bm{\\Gamma}}^{-1}\\hat{\\bm{\\Lambda}}).\n\\end{equation}\n\nAnalogously, and since AIC-type criteria often choose excessively complex models, we also consider the robustified BIC given by\n\\begin{equation}\n\\text{RBIC}=-2\\times\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})+(\\log n)\\times\\text{tr}(-\\hat{\\bm{\\Gamma}}^{-1}\\hat{\\bm{\\Lambda}}).\n\\end{equation}\n\nWe choose $J$ that minimizes either the RAIC or RBIC, and the $(p,p)$-th element of $-\\hat{\\bm{\\Gamma}}^{-1}\\hat{\\bm{\\Lambda}}$ can be interpreted as the effective number of parameters attributed to the $p^{\\text{th}}$ parameter.\n\nInsurance loss datasets are often characterized by a very complicated and multimodal distribution of very small claims, yet it is not meaningful to capture all these small nodes by choosing an overly complex mixture distribution with large $J$. However, the above RAIC and RBIC cannot effectively reduce those mixture components, as the effective number of parameters for those capturing the smaller claims could be very small if $W(\\cdot)$ is chosen very small over the region of small claims. 
To effectively remove components which excessively capture the small claims, we propose treating all parameters as ``full parameters\", which results to the following truncated AIC and BIC:\n\\begin{equation}\n\\text{TAIC}=-2\\times\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})+2\\times P,\n\\end{equation}\n\\begin{equation}\n\\text{TBIC}=-2\\times\\mathcal{L}_n^{*}(\\hat{\\bm{\\Phi}};\\bm{y})+\\left(\\log \\sum_{i=1}^{n}W(y_i)\\right)\\times P.\n\\end{equation}\n\n\\begin{remark} \\label{rmk:tic}\nThe above TAIC and TBIC are motivated by the bias of approximating $n\\times\\tilde{E}[\\mathcal{L}^{*}(\\bm{\\Phi};\\bm{Y})]$ by the empirical truncated log-likelihood $\\mathcal{L}^{**}_n(\\hat{\\bm{\\Phi}};\\bm{y}):=\\sum_{i=1}^{n}V_i(y_i)\\log \\frac{h(y_i;\\bm{\\Phi})W(y_i)}{\\int_0^{\\infty}h(u;\\bm{\\Phi})W(u)du}$ instead of $\\mathcal{L}_n^{*}(\\bm{\\Phi};\\bm{y})$, where $V_i(y)\\sim\\text{Bernoulli}(W(y))$ is an indicator randomly discarding some observations. It can be easily shown (details in Section 8 of supplementary material) that the asymptotic bias is simply $P$ with effective number of observations $\\sum_{i=1}^{n}W(y_i)$. Note also that the weighted log-likelihood $\\mathcal{L}_n^{*}(\\bm{\\Phi};\\bm{y})$ is asymptotically equivalent to the truncated log-likelihood $\\mathcal{L}^{**}_n(\\bm{\\Phi};\\bm{y})$, except that the former produces more accurate estimated parameters than the latter. This motivates why in TAIC and TBIC we choose to evaluate $\\mathcal{L}_n^{*}(\\bm{\\Phi};\\bm{y})$ instead of $\\mathcal{L}^{**}_n(\\bm{\\Phi};\\bm{y})$.\n\\end{remark}\n\n\\section{Illustrating examples} \\label{sec:ex}\nIn this section, we analyze the performance of our proposed MWLE approach (Equation (\\ref{eq:loglik_weight})) on FMM given by Equation (\\ref{eq:model}). 
In the following examples, we select the Gamma density for the body components $f_b$, a light-tailed distribution to capture the distributional multimodality of small to moderate claims, and the Lomax density for the tail component $f_t$ to extrapolate well the tail-heaviness of larger observations. Then, Equation (\\ref{eq:model}) becomes\n\\begin{equation} \\label{eq:em:density_mixture}\nh(y;\\bm{\\Phi})\n=\\sum_{j=1}^J\\pi_jf_b(y;\\mu_j,\\phi_j)+\\pi_{J+1}f_t(y;\\theta,\\gamma),\n\\end{equation}\nwhere the parameter set is re-expressed as $\\bm{\\Phi}=(\\bm{\\pi},\\bm{\\mu},\\bm{\\phi},\\gamma)$ while $\\bm{\\varphi}_j=(\\mu_j,\\phi_j)$ and $\\bm{\\eta}=(\\theta,\\gamma)$, and the Gamma and Lomax densities $f_b$ and $f_t$ are respectively given by\n\\begin{equation} \\label{eq:em:comp}\nf_b(y;\\mu,\\phi)=\\frac{(\\phi\\mu)^{-1\/\\phi}}{\\Gamma(1\/\\phi)}y^{1\/\\phi-1}e^{-y\/(\\phi\\mu)}\\quad \\text{and} \\quad f_t(y;\\theta,\\gamma)=\\frac{\\gamma\\theta^{\\gamma}}{\\left(y+\\theta\\right)^{\\gamma+1}},\n\\end{equation}\nwhere $\\mu$ and $\\phi$ are the mean and dispersion parameters of the Gamma distribution, while $\\gamma$ is the tail index parameter and $\\theta$ is the scale parameter of the Lomax distribution. Note that the above model is a regularly varying distribution with the tail behavior predominantly explained by the tail index $\\gamma$. As a result, tail-robustness is largely determined by how stable and accurate the estimated tail index $\\gamma$ is.\n\nThe specifications of body and tail component functions are mainly motivated by the key characteristics of insurance claim severity distributions (multimodal distribution of small claims, existence of extremely large claims, mismatch between body and tail behavior, etc.) which will be illustrated in the real insurance data application section. 
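The component densities in Equation (\ref{eq:em:comp}) translate directly into code. The following is a minimal sketch under the paper's parametrization (the function names are ours):

```python
import math

def f_b(y, mu, phi):
    """Gamma density with mean mu and dispersion phi, Eq. (eq:em:comp)."""
    k, s = 1.0 / phi, phi * mu  # shape and scale
    return y ** (k - 1.0) * math.exp(-y / s) / (math.gamma(k) * s ** k)

def f_t(y, theta, gamma):
    """Lomax density with scale theta and tail index gamma."""
    return gamma * theta ** gamma / (y + theta) ** (gamma + 1.0)

def h(y, pi, body_params, eta):
    """J-Gamma Lomax mixture density, Eq. (eq:em:density_mixture)."""
    body = sum(p * f_b(y, mu, phi)
               for p, (mu, phi) in zip(pi[:-1], body_params))
    return body + pi[-1] * f_t(y, *eta)
```

With $\mu=1$ and $\phi=1$ the Gamma component reduces to a unit exponential, a convenient sanity check.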
While we do not preclude the existence of other specifications plausible for insurance applications, such as Weibull for the body and Inverse-Gamma for the tail, in this section we simply study the Gamma-Lomax combination, in keeping with the scope of this paper -- demonstrating the usefulness of the proposed MWLE rather than performing distributional comparisons under FMM.\n\n\\subsection{Toy example} \\label{sec:ex:toy}\n\nWe demonstrate how the proposed MWLE framework works through a simple toy example of the one-parameter Lomax distribution $H(y;\\gamma)=1-(y+1)^{-\\gamma}$ ($y>0$), which is a special case of Equation (\\ref{eq:em:density_mixture}) with $J=0$ and $\\theta=1$.\n\nConsider the first case where the true model $G(\\cdot)$ is a Lomax with $\\gamma=\\gamma_0=1$. For the weight function for the MWLE, we consider the form of Equation (\\ref{eq:em:wgt_func}) with $\\xi=0$ for simplicity. We will test across a wide range of $\\tilde{\\mu}$ and across $\\tilde{\\phi}\\in\\{0.1, 0.2, 0.5, 1\\}$. Figure \\ref{fig:thm_aeff} presents how the choices of these hyperparameters affect the AEFF. Starting from $\\text{AEFF}=1$ when $\\tilde{\\mu}=0$, which is equivalent to standard MLE, the AEFF decreases monotonically as $\\tilde{\\mu}$ increases. This is intuitive because under-weighting smaller observations with MWLE means effectively discarding some observed information, leading to larger parameter uncertainties compared to MLE. 
Since the MLE estimated tail index is unbiased under the true model, there is obviously no benefit of using the proposed MWLE to fit the true model.\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/thm_aeff_1.pdf}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/thm_aeff_2.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{AEFF as a function of the weight location hyperparameter $\\tilde{\\mu}$ (left panel) or Pareto (Lomax) quantile of $\\tilde{\\mu}$ (right panel) under Lomax true model.}\n\\label{fig:thm_aeff}\n\\end{figure}\n\nNow, consider the second case where the true model is perturbed by the contamination function $M$, as presented in Equation (\\ref{eq:asym_contam}). In this demonstration example, we consider the following two choices for the contamination function $M$:\n\\begin{itemize}\n\\item Degenerate perturbation: One-point distribution on $y=1\/4$\n\\item Pareto perturbation: Lomax distribution with tail index $\\gamma=\\gamma^{*}=4>\\gamma_0$\n\\end{itemize}\nNote that the contamination function is relatively lighter tailed and hence would not affect the tail behavior of the perturbed distribution. In Figure \\ref{fig:thm_if}, we present the IF as a function of the AEFF (determined as a function of the chosen $\\tilde{\\mu}$) under the two choices of $M$. We find that as the AEFF reduces (by choosing a larger $\\tilde{\\mu}$), the IF shrinks towards zero. 
This reflects that a more robust estimation of the tail index can be achieved using the proposed MWLE approach by trading off some efficiency.\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/thm_if_1.pdf}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/thm_if_2.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{IF as a function of AEFF under degenerate (left panel) and Pareto (right panel) contaminations.}\n\\label{fig:thm_if}\n\\end{figure}\n\n\n\n\\subsection{Simulation studies} \\label{sec:ex:sim}\n\\subsubsection{Simulation settings}\nWe here simulate $n=10,000$ claims (the sample size is motivated by the size of a typical insurance portfolio) from the aforementioned $J$-Gamma Lomax distribution for each of the following two parameter settings with $\\theta=1000$:\n\\begin{itemize}\n \\item Model 1: $J=2$, $\\bm{\\pi}=(0.4,0.4,0.2)$, $\\bm{\\mu}=(100,300)$, $\\bm{\\phi}=(0.25,0.25)$ and $\\gamma=2$.\n \\item Model 2: $J=3$, $\\bm{\\pi}=(0.4,0.3,0.1,0.2)$, $\\bm{\\mu}=(50,200,600)$, $\\bm{\\phi}=(0.2,0.2,0.2)$ and $\\gamma=2$.\n\\end{itemize}\n\nWe also consider the zero-inflated Gamma distribution given by Equation (\\ref{eq:em:wgt_func}) as the weight function, with $\\tilde{\\mu}\\in\\{q_{0},q_{0.9},q_{0.95},q_{0.99},q_{0.995}\\}$, $\\tilde{\\phi}\\in\\{0.025,0.1,0.25,1\\}$ and $\\xi\\in\\{0.001,0.01,0.05,0.25\\}$, where $q_{\\alpha}$ is the empirical quantile of the data with $0\\leq \\alpha\\leq 1$. Recall that the choice of $\\tilde{\\mu}=q_0=0$ implies that $W(y;\\xi,\\tilde{\\mu},\\tilde{\\phi})= 1$ and hence the MWLE is equivalent to standard MLE. For each combination of model and weight function hyperparameters, the simulation is repeated 100 times to enable a thorough analysis of the results using the proposed weighted log-likelihood approach under various settings. 
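The simulation scheme above amounts to standard two-stage mixture sampling; a hedged sketch (our own helper, seeded for reproducibility) draws a component label and then samples from the corresponding Gamma or Lomax component, the latter by inverting its CDF:

```python
import random

def sample_mixture(n, pi, body_params, theta, gamma, seed=1):
    """Draw n losses from the J-Gamma Lomax mixture, e.g. Model 1."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        j = rng.choices(range(len(pi)), weights=pi)[0]
        if j < len(body_params):               # Gamma body component
            mu, phi = body_params[j]
            out.append(rng.gammavariate(1.0 / phi, phi * mu))
        else:                                  # Lomax tail via inverse CDF
            u = rng.random()
            out.append(theta * ((1.0 - u) ** (-1.0 / gamma) - 1.0))
    return out
```

For Model 1, roughly 5\% of draws exceed 1000, consistent with the Lomax tail contribution $0.2\times(1000/2000)^2$.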
Each simulated sample is then fitted to the $J$-Gamma Lomax mixture in Equation (\\ref{eq:em:density_mixture}) with $J=2$. Note that for simplicity, in the simulation studies we do not examine the choice of $J$ as outlined by Section \\ref{sec:em:complex}. As a result, we have the following research goals in the simulation studies:\n\\begin{itemize}\n\\item Under Model 1, the data is fitted to the true class of models. Hence, we empirically verify the consistencies of estimating model parameters (Theorem \\ref{thm:asym_tru}) using the MWLE. We also study how the selection of weight function hyperparameters affect the estimated parameter uncertainties. Further, we compare the computational efficiency of the two kinds of proposed GEM algorithms.\n\\item Under Model 2, the data is fitted to a misspecified class of models. Hence, we demonstrate how this would distort the estimation of the tail under the MLE, and study how the proposed MWLE produces a more robust tail estimation.\n\\end{itemize}\n\n\\subsubsection{Results of fitting Model 1 (true model)}\n\nConsidering the case where we fit the true class of model to the data generated by Model 1, we first compare the computational efficiencies between the two construction methods of the GEM algorithm as presented by Section \\ref{sec:em:gem}. In general, around 100 iterations are needed under parameter transformation approach (Method 2), as compared to at least 300 iterations under hypothetical data approach (Method 1), revealing relatively faster convergences under Method 2. Figure \\ref{fig:sim_tru_rel} plots the relative change of iterated parameters $\\Delta^{\\text{rel}}\\bm{\\Phi}^{(l)}:=|\\log(\\bm{\\Phi}^{(l)}\/\\bm{\\Phi}^{(l-1)})|\/P$ versus the GEM iteration $l$ under two example choices of weight function hyperparameters, where the division operator is applied element-wise to the vector of parameters. 
It is apparent that the curve drops much faster under Method 2 than Method 1, confirming faster convergence under Method 2. The main reason is that the construction of hypothetical missing observations under Method 1 effectively reduces the learning rate of the optimization algorithm. As both methods produce very similar estimated parameters while Method 2 is more computationally efficient, from now on we only present the results produced by the GEM algorithm under Method 2.\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_rel1.pdf}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_rel2.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{The relative change of iterated parameters in the first 100 GEM iterations under two example choices of weight function hyperparameters: Left panel -- $(\\xi,\\tilde{\\mu},\\tilde{\\phi})=(0.01,q_{0.95},0.1)$; Right panel -- $(\\xi,\\tilde{\\mu},\\tilde{\\phi})=(0.05,q_{0.99},0.25)$.}\n\\label{fig:sim_tru_rel}\n\\end{figure}\n\nFigure \\ref{fig:sim_tru_gamma} demonstrates how the bias and uncertainty of the estimated tail index $\\hat{\\gamma}$ differ among various choices of weight functions and their corresponding hyperparameters $(\\xi,\\tilde{\\mu},\\tilde{\\phi})$. From the left panel, the median estimated parameters are very close to the true model parameters (differing by less than 1-2\\%) under most settings of the weight functions, except for a few extreme cases where both $\\xi$ and $\\tilde{\\phi}$ are chosen to be very small. This empirically justifies the asymptotic unbiasedness of the MWLE. As expected from the right panel, the uncertainties of the MLE parameters are the smallest, verifying that MLE is the asymptotically most efficient estimator among all unbiased estimators if we are fitting the correct model class. 
The parameter uncertainties generally increase slightly as we choose a larger $\\tilde{\\mu}$ to de-emphasize the impacts of smaller observations. In some extreme cases where $\\xi$ and $\\tilde{\\phi}$ are very small and $\\tilde{\\mu}$ is very large, the standard error can grow dramatically, reflecting that a lot of information is effectively discarded. \n\nSimilarly, in Figure \\ref{fig:sim_tru_mu1}, where the bias and uncertainty of an estimated mean parameter $\\hat{\\mu}_1$ from the body distribution are displayed, we observe that the proposed MWLE approach behaves properly for fitting the body distribution except when $\\xi$ and $\\tilde{\\phi}$ are both chosen to be extremely small (in those cases, the estimated body parameters would become unstable with inflated uncertainties). Hence, these extreme choices of hyperparameters are deemed to be inappropriate.\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_gamma_m.pdf}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_gamma_sd.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{Median and standard deviation of the estimated tail index $\\hat{\\gamma}$ versus various weight function hyperparameters under the true model.}\n\\label{fig:sim_tru_gamma}\n\\end{figure}\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_mu1_m.pdf}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_tru_mu1_sd.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{Median and standard deviation of the estimated mean parameter of the first mixture component $\\hat{\\mu}_1$ versus various weight function hyperparameters under the true model.}\n\\label{fig:sim_tru_mu1}\n\\end{figure}\n\n\\subsubsection{Results of fitting Model 2 (misspecified 
model)}\n\nWe now turn to the case where we fit a misspecified model (with $J=2$) to the simulated data generated from Model 2 (with $J=3$). The left panels of Figures \\ref{fig:sim_mis_gamma} and \\ref{fig:sim_mis_tailp} examine how the robustness of the estimated tail index $\\hat{\\gamma}$ and tail probability $\\hat{\\pi}_{J+1}$ differs among different choices of hyperparameters $(\\xi,\\tilde{\\mu},\\tilde{\\phi})$. From the left panel, the MLE of the tail index is around $\\hat{\\gamma}=2.48$ which largely over-estimates the true tail index $\\gamma=2$, indicating that the heavy-tailedness of the true distribution is under-extrapolated. On the other hand, with the incorporation of weight functions to under-weight the smaller claims, the biases of the MWLE of $\\gamma$ are greatly reduced compared to that of the MLE under most choices of weight function hyperparameters. In particular, the bias reduction for tail index is more effective using smaller $\\xi$ (i.e. $\\xi\\leq 0.05$). This is intuitive as smaller $\\xi$ means smaller claims are under-weighted by a larger extent, reducing the impacts of smaller claims on the tail index estimations. Similarly from the right panel, the proposed MWLE approach effectively reduces the bias of the estimated tail probability $\\hat{\\pi}_{J+1}$. \n\nThe analysis of bias-variance trade-off is also conducted through computing the mean-squared errors (MSE) of both estimated tail index $\\hat{\\gamma}$ and tail probability $\\hat{\\pi}_{J+1}$. 
From the right panels of Figures \\ref{fig:sim_mis_gamma} and \\ref{fig:sim_mis_tailp}, as evidenced by smaller MSEs under most choices of weight function hyperparameters, MWLE is clearly preferable to the MLE approach even after accounting for the increased parameter uncertainties caused by down-weighting the importance of smaller claims.\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_mis_gamma_m.pdf}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_mis_gamma_mse.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{Median and MSE of the estimated tail index $\\hat{\\gamma}$ versus various weight function hyperparameters under the misspecified model.}\n\\label{fig:sim_mis_gamma}\n\\end{figure}\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_mis_tailp_m.pdf}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/sim_mis_tailp_mse.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{Median and MSE of the estimated tail probability $\\hat{\\pi}_{J+1}$ versus various weight function hyperparameters under the misspecified model.}\n\\label{fig:sim_mis_tailp}\n\\end{figure}\n\n\\subsubsection{Summary remark on the choice of weight function hyperparameters}\nFrom the above two simulation studies, we find that under a wide range of choices of weight function hyperparameters, the proposed MWLE not only produces plausible model estimates under the true model (Model 1), but is also effective in mitigating the bias of tail estimation inherited from model misspecification (Model 2).\n\nAmong the three hyperparameters $(\\xi,\\tilde{\\mu},\\tilde{\\phi})$, the choice of the minimum weight hyperparameter $\\xi$ plays a particularly vital role in the bias-variance trade-off of the estimated parameters. 
Under misspecified model (Model 2), smaller $\\xi$ (i.e. $\\xi\\leq 0.05$) is more effective in reducing the biases of both estimated tail index $\\hat{\\gamma}$ and tail probability $\\hat{\\pi}_{J+1}$. However, as evidenced by the results produced under the true model (Model 1), the estimated parameters of the body distributions (i.e. $\\hat{\\bm{\\mu}}$ and $\\hat{\\bm{\\phi}}$) may become prohibitively unstable if $\\xi$ is chosen to be extremely small (i.e. $\\xi\\leq 0.001$) such that smaller observations are effectively almost fully discarded. It is therefore important to compare parameter uncertainties of MWLE to that of MLE, and select\/ consider only the weight function hyperparameters where the corresponding MWLE parameter uncertainties are within an acceptable range (i.e. not too off from the MLE parameter uncertainties). Overall, the choices of $\\xi$ between 0.01 and 0.05 are deemed to be suitable.\n\n\\subsection{Real data analysis} \\label{sec:ex:real}\n\\subsubsection{Data description and background} \\label{sec:ex:real:dat}\nIn this section, we study an insurance claim severity dataset kindly provided by a major insurance company operating in Greece. It consists of 64,923 motor third-party liability (MTPL) insurance policies with non-zero property claims for underwriting years 2013 to 2017. This dataset is also analyzed by \\cite{fung2021mixture} using a mixture composite model, with an emphasis on selecting various policyholder characteristics (explanatory variables) which significantly influence the claim severities. The empirical claim severity distribution exhibits several peculiar characteristics including multimodality and tail-heaviness. The excessive number of distributional nodes for small claims reflects the possibility of distributional contamination, which cannot be and should not be perfectly captured and over-fitted by parametric models like FMM. 
Preliminary analyses also suggest that the estimated tail index is around 1.3 to 1.4, but note that these are only rough and subjective estimates. The details of the preliminary data analysis are provided in Section 9 of the supplementary materials. The key goals of this real data analysis are as follows:\n\\begin{enumerate}\n\\item Illustrate that MLE of the FMM would produce highly unstable and non-robust estimates of the tail part of the claim severity distribution. This confirms that tail-robustness is an important research problem in real insurance claim severity modelling which needs to be properly addressed.\n\\item Demonstrate how the proposed MWLE approach leads to superior fits to the tail and more reliable estimates of the tail index compared to MLE, without sacrificing much of its ability to adequately capture the body.\n\\end{enumerate}\n\nTo avoid diverting the focus of this paper, in this analysis we solely examine the distributional fitting of the claim sizes without considering the explanatory variables. Note, however, that the proposed MWLE can be extended to a regression framework, with the discussion deferred to Section \\ref{sec:discussion}.\n\n\n\\subsubsection{Fitting results}\nThe claim severity dataset is fitted to the mixture Gamma-Lomax distribution with density given by Equation (\\ref{eq:em:density_mixture}) under the proposed MWLE approach. The fitting performances will be examined thoroughly across different numbers of Gamma (body) mixture components $J\\in\\{1,2,\\ldots,10\\}$ and various choices of weight function hyperparameters ($\\tilde{\\mu}\\in\\{q_{0},q_{0.9},q_{0.95},q_{0.99},q_{0.995}\\}$, $\\tilde{\\phi}\\in\\{0.025,0.1,0.25,1\\}$ and $\\xi\\in\\{0.001,0.01,0.05,0.25\\}$). 
The MWLE fitted parameters are also compared to the standard MLE across various $J$.\n\nWe first present in Figure \\ref{fig:greek:gamma} the fitted tail index $\\hat{\\gamma}$ versus the number of body components $J$ under all combinations of the selected weight function hyperparameters. Each of the four sub-figures corresponds to a particular choice of $\\xi\\in\\{0.001,0.01,0.05,0.25\\}$. The thick black curves in each sub-figure show the MLE estimated tail indexes for comparison purposes. The MLE tail indexes are rather unstable, as evidenced by large fluctuations across different numbers of body components $J$, showing that MLE may not be reliable in extrapolating the heavy-tailedness of complex claim distributions.\nFor instance, with a slight change of model specification from $J=5$ to $J=6$, the estimated tail index drops sharply from about 1.8 to 1.5. This is rather unnatural because the change from $J=5$ to $J=6$ should only reflect a slight change in specifying the body.\nThe large drop of the estimated tail index reflects that the Lomax tail part of the FMM does not specialize in extrapolating the tail-heaviness of the distribution, but instead is very sensitive to the small claims and the model specifications of the body part.\nTherefore, we conclude that the mixture Gamma-Lomax FMM does not achieve its modelling purpose under MLE.\n\nOn the other hand, looking individually at each path under MWLE, we find that the estimated $\\hat{\\gamma}$ is much more stable across different $J$ under most choices of weight function hyperparameters, especially when $J\\geq 5$. Also, the estimated MWLE $\\hat{\\gamma}$ is in general smaller than the $\\hat{\\gamma}$ obtained by MLE, moving closer to the values roughly determined by the preliminary data analysis in Section \\ref{sec:ex:real:dat}. Note in the figure that there are a few black solid dots, which appear when the estimated $\\hat{\\gamma}$ under MWLE is outside the range of the plots. 
These unstable estimates of $\\hat{\\gamma}$ are rare and only occur under one of the following two situations: (i) $J$ is chosen to be very small (i.e. $J\\leq 2$), in the sense that the models would severely under-fit the distributional complexity of the dataset; (ii) extreme choices of weight function hyperparameters (very small $\\xi$ and $\\tilde{\\phi}$), in line with the results of the simulation studies.\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_gamma1.pdf}\n\\end{subfigure}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_gamma2.pdf}\n\\end{subfigure}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_gamma3.pdf}\n\\end{subfigure}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_gamma4.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{Estimated tail index versus the number of Gamma mixture components under MLE and MWLE with various choices of weight function hyperparameters.}\n\\label{fig:greek:gamma}\n\\end{figure}\n\nThe optimal choice of $J$ is tricky: as described in Section \\ref{sec:ex:real:dat}, there is an excessive number of small distributional nodes for very small claim sizes, which should not be over-emphasized or excessively modelled since these very small claims are almost irrelevant for pricing and risk management. However, both AIC and BIC decrease slowly and steadily for MLE models as $J$ increases. The optimal $J$ in this case goes well beyond $J=10$. Under the proposed MWLE approach with various choices of weight function hyperparameters, the same model selection problem exists using RAIC and RBIC, with the reasons already explained in Section \\ref{sec:em:complex}. 
On the other hand, using TAIC and TBIC (especially TBIC), a majority of weight function hyperparameter selections lead to an optimal $J=5$, aligning with the heuristic arguments by \\cite{fung2021mixture} that $J=5$ is enough for capturing all the distributional nodes except for the very small claims, which are smoothly approximated by a single mixture component.\n\nTo better understand how the use of the proposed MWLE affects the estimation of all parameters (not just the tail index but also parameters affecting the body distributions such as $\\bm{\\mu}$), we showcase in Table \\ref{tab:greek:est_prm} all the estimated parameters and their standard errors (based on Equation (\\ref{eq:asym_var})) using MWLE under two distinct example choices of hyperparameters (MWLE 1: $(\\xi,\\tilde{\\mu},\\tilde{\\phi})=(0.01,q_{0.99},0.1)$; MWLE 2: $(\\xi,\\tilde{\\mu},\\tilde{\\phi})=(0.05,q_{0.995},0.25)$) as compared to the MLE parameters with $J=5$, the optimal number of body components under TBIC for both MWLE 1 and MWLE 2. Note that the two selected examples are for demonstration purposes -- generally the following findings and conclusions are also valid for other choices of weight function hyperparameters under the proposed MWLE.\n\nWe first observe that the estimated parameters influencing the body (i.e. $\\bm{\\pi}$, $\\bm{\\mu}$ and $\\bm{\\phi}$) under MWLE are very close to those under MLE, even though the smaller claims are greatly down-weighted. MWLE generally results in larger parameter uncertainties than MLE -- reflecting a bias-variance trade-off -- but these standard errors are of the same order of magnitude and are still relatively immaterial compared to the estimates, as the sample size $n=64,923$ is large. \n\nComparing the above two MWLE examples, we further notice that the parameter uncertainties under MWLE 1 are greater than those under MWLE 2. 
This is expected because the influences of smaller claims are down-weighted more under MWLE 1 than those under MWLE 2 (as reflected by smaller minimum weight hyperparameter $\\xi$ chosen under MWLE 1). On the other hand, the estimated tail index $\\hat{\\gamma}$ under MWLE 1 is slightly closer to the heuristic values (i.e. 1.3 to 1.4) than MWLE 2. These may also reflect the bias-variance trade-off among various choices of weight function hyperparameters.\n\n\\begin{table}[!h]\n\\centering\n\\begin{tabular}{lrrrrrr}\n\\toprule\n & \\multicolumn{2}{c}{MWLE 1} & \\multicolumn{2}{c}{MWLE 2} & \\multicolumn{2}{c}{MLE} \\\\\n\\cmidrule(l{3pt}r{3pt}){2-3} \\cmidrule(l{3pt}r{3pt}){4-5} \\cmidrule(l{3pt}r{3pt}){6-7}\n & \\multicolumn{1}{c}{Estimates} & \\multicolumn{1}{c}{Std. Error} & \\multicolumn{1}{c}{Estimates} & \\multicolumn{1}{c}{Std. Error} & \\multicolumn{1}{c}{Estimates} & \\multicolumn{1}{c}{Std. Error} \\\\\n\\midrule\n$\\pi_1$ & 0.3787 & 0.0053 & 0.3829 & 0.0031 & 0.3878 & 0.0022 \\\\\n$\\pi_2$ & 0.0380 & 0.0036 & 0.0404 & 0.0021 & 0.0444 & 0.0014 \\\\\n$\\pi_3$ & 0.1117 & 0.0024 & 0.1134 & 0.0020 & 0.1161 & 0.0017 \\\\\n$\\pi_4$ & 0.0221 & 0.0059 & 0.0192 & 0.0021 & 0.0153 & 0.0008 \\\\\n$\\pi_5$ & 0.2173 & 0.0036 & 0.2163 & 0.0022 & 0.2130 & 0.0019 \\\\\n\\hline\n$\\mu_1$ & 1,303.21 & 50.30 & 1,322.10 & 16.70 & 1,348.68 & 11.11 \\\\\n$\\mu_2$ & 9,171.42 & 145.83 & 9,165.92 & 64.64 & 9,165.36 & 49.38 \\\\\n$\\mu_3$ & 27,590.46 & 125.68 & 27,571.75 & 64.36 & 27,538.88 & 52.41 \\\\\n$\\mu_4$ & 317,274.90 & 2,410.93 & 323,827.70 & 2,159.37 & 322,872.40 & 2,372.68 \\\\\n$\\mu_5$ & 89,007.07 & 170.41 & 88,979.12 & 112.01 & 88,895.92 & 99.20 \\\\\n\\hline\n$\\phi_1$ & 0.9945 & 0.0175 & 0.9996 & 0.0121 & 1.0062 & 0.0113 \\\\\n$\\phi_2$ & 0.0264 & 0.0089 & 0.0284 & 0.0030 & 0.0324 & 0.0020 \\\\\n$\\phi_3$ & 0.0154 & 0.0015 & 0.0158 & 0.0007 & 0.0164 & 0.0005 \\\\\n$\\phi_4$ & 0.0472 & 0.0033 & 0.0333 & 0.0025 & 0.0186 & 0.0020 \\\\\n$\\phi_5$ & 0.0127 & 0.0007 & 
0.0126 & 0.0003 & 0.0122 & 0.0002 \\\\\n\\hline\n$\\gamma$ & 1.5353 & 0.0707 & 1.6153 & 0.0586 & 1.7963 & 0.0471 \\\\\n$\\theta$ & 62,637.42 & 7,829.79 & 73,604.51 & 6,630.65 & 101,107.20 & 5,088.29\\\\\n\\hhline{=======}\n\\end{tabular}\n\\caption{\\label{tab:greek:est_prm}Estimated parameters and standard errors under MLE and MWLE approaches with $J=5$.}\n\\end{table}\n\nThe Q-Q plot in Figure \\ref{fig:greek:qq} suggests that the fitting results are satisfactory under both MWLE and MLE except for the very immaterial claims (i.e. $y<100$). Note however that due to the log-scale nature of the Q-Q plot, it is hard to examine from the plot how well the fitted models extrapolate the tail-heaviness of the claim severity data. To examine the tail behavior of the fitted models, we present the log-log plot in the left panel of Figure \\ref{fig:greek:loglog}, with the axis shifted to include large claims only. We observe that for extreme claims (i.e. claim amounts greater than about 0.5 million, or $\\log y>13$), the logged survival probability produced by the MLE-fitted model diverges quite significantly from the empirical observations. Such a divergence can effectively be mitigated by using MWLE with either of the hyperparameter settings. \n\nWe further compute the value-at-risk (VaR) and conditional tail expectation (CTE) at the $100q^{\\text{th}}$ security level (denoted as $\\text{VaR}_q(Y;\\hat{\\bm{\\Phi}})$ and $\\text{CTE}_q(Y;\\hat{\\bm{\\Phi}})$ respectively) from the fitted models, and compare them to the empirical values from the severity data (denoted as $\\widehat{\\text{VaR}}_q(Y)$ and $\\widehat{\\text{CTE}}_q(Y)$ respectively). The results are summarized in Table \\ref{tab:greek:risk}. Both MLE and MWLE produce plausible estimates of VaR and CTE up to security levels of 95\\% and 75\\% respectively, reflecting the ability of both approaches to capture the body part of the severity distribution. 
Nonetheless, the MLE fitted model shows significant divergences of VaR and CTE from the empirical data at higher security levels. In particular, the 99\\%-CTE and 99.5\\%-CTE are largely underestimated by the MLE approach. Such a divergence is effectively reduced by the proposed MWLE approach, where superior fittings to the tail are obtained. Further, MWLE 1 seems to perform slightly better than MWLE 2 in terms of tail fitting, as reflected by smaller underestimations of CTEs at high security levels. This plausibly offsets the increased parameter uncertainties under MWLE 1 mentioned previously.\n\nTo visualize the results, we further plot the relative misfit of VaR, given by $\\log(\\text{VaR}_q(Y;\\hat{\\bm{\\Phi}})\/\\widehat{\\text{VaR}}_q(Y))$, versus the log survival probability $\\log (1-q)\\in(-7.5,-4.7)$, equivalent to the range of security levels from 99\\% to 99.95\\%, in the right panel of Figure \\ref{fig:greek:loglog}. We observe that the MLE fitted model over-estimates the VaR of large claims (security levels between 99\\% and 99.8\\%) but then substantially under-extrapolates the extreme claims (security levels beyond 99.8\\%). This issue is well mitigated by the MWLE, where the misfits of VaR are smaller in both regions. Therefore, we conclude that the proposed MWLE effectively improves the goodness-of-fit on the tail part of the distribution (as compared to MLE) without significantly sacrificing its flexibility to adequately capture the body part. 
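The empirical benchmarks used in this comparison can be reproduced with a simple routine: the empirical VaR at security level $q$ is an order-statistic quantile, and the empirical CTE is the average of the observations exceeding it. The sketch below is illustrative only; the Lomax sample and its parameter values are hypothetical stand-ins for the claim data.

```python
import random

def var_cte(sample, q):
    # Empirical VaR at the 100q-th security level (an order statistic) and
    # empirical CTE (mean of the observations strictly above that VaR).
    xs = sorted(sample)
    var = xs[min(int(q * len(xs)), len(xs) - 1)]
    tail = [x for x in xs if x > var]
    cte = sum(tail) / len(tail) if tail else var
    return var, cte

# Hypothetical heavy-tailed sample drawn by Lomax inverse-CDF sampling:
# if U ~ Uniform(0,1), then theta * ((1 - U)^(-1/gam) - 1) is Lomax(gam, theta).
random.seed(1)
gam, theta = 1.5, 50_000.0
sample = [theta * ((1.0 - random.random()) ** (-1.0 / gam) - 1.0) for _ in range(100_000)]
v99, c99 = var_cte(sample, 0.99)
print(v99 < c99)  # the CTE always sits above the VaR at the same level
```

Because the tail mean is dominated by a few extreme observations when the tail index is below 2, the empirical CTE at high security levels is itself noisy, which is consistent with the cautious reading of Table \ref{tab:greek:risk} above.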
\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_qq.jpg}\n\\end{subfigure}\n\\end{center}\n\\caption{Q-Q plot under MLE and MWLE with two selected combinations of weight function hyperparameters.}\n\\label{fig:greek:qq}\n\\end{figure}\n\n\\begin{figure}[!h]\n\\begin{center}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_loglog.pdf}\n\\end{subfigure}\n\\begin{subfigure}[h]{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{figure\/greek_misfit.pdf}\n\\end{subfigure}\n\\end{center}\n\\caption{Left panel: log-log plot of fitted models compared to empirical data; Right panel: misfit of logged claim amounts versus logged survival probabilities under three fitted models.}\n\\label{fig:greek:loglog}\n\\end{figure}\n\n\\begin{table}[!h]\n\\centering\n\\begin{tabular}{rrrrrlrrrrr}\n\\toprule\n \\multicolumn{5}{c}{VaR ('000)} & & \\multicolumn{5}{c}{CTE ('000)} \\\\\n\\cmidrule(l{3pt}r{3pt}){1-5} \\cmidrule(l{3pt}r{3pt}){7-11}\n \\multicolumn{1}{c}{Level} & \\multicolumn{1}{c}{MLE} & \\multicolumn{1}{c}{MWLE 1} & \\multicolumn{1}{c}{MWLE 2} & \\multicolumn{1}{c}{Empirical} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{Level} & \\multicolumn{1}{c}{MLE} & \\multicolumn{1}{c}{MWLE 1} & \\multicolumn{1}{c}{MWLE 2} & \\multicolumn{1}{c}{Empirical} \\\\\n\\midrule\n50\\% & 21 & 21 & 21 & 21 & & 0\\% & 109 & 112 & 111 & 116 \\\\\n75\\% & 83 & 82 & 82 & 82 & & 50\\% & 174 & 180 & 177 & 187 \\\\\n95\\% & 190 & 191 & 187 & 182 & & 75\\% & 468 & 505 & 489 & 536 \\\\\n99\\% & 461 & 450 & 445 & 452 & & 90\\% & 1,140 & 1,326 & 1,242 & 1,498 \\\\\n99.5\\% & 719 & 693 & 674 & 676 & & 95\\% & 1,711 & 2,115 & 1,948 & 2,455 \\\\\n99.75\\% & 1,149 & 1,031 & 1,046 & 1,075 & & 99\\% & 2,533 & 3,379 & 3,063 & 4,057 \\\\\n99.95\\% & 2,787 & 3,163 & 3,220 & 3,348 & & 99.5\\% & 5,956 & 10,243 & 8,572 & 13,329\\\\\n\\hhline{===========}\n\\end{tabular}\n\\caption{VaR and 
CTE (in thousands) estimated by MLE, MWLE and empirical approaches.}\n\\label{tab:greek:risk}\n\\end{table}\n\n\n\n\\section{Discussions} \\label{sec:discussion}\nIn this paper, we introduce a maximum weighted log-likelihood estimation (MWLE) approach to robustly estimate the tail part of finite mixture models (FMM) while preserving the capability of FMM to flexibly capture the complex distributional phenomena in the body part. Asymptotic theories justify the unbiasedness and robustness of the proposed estimator. From a computational aspect, the applicability of an EM-based algorithm for efficient parameter estimation distinguishes the proposed MWLE from the existing literature on weighted likelihood approaches. Through several simulation studies and real data analyses, we empirically confirm that the proposed MWLE approach is more appropriate for specifying the tail part of the distribution than MLE, while still preserving the flexibility of FMM in fitting the smaller observations.\n\nAnother advantage of the MWLE not yet mentioned in this paper is its extensibility. First, the proposed MWLE is not restricted to FMM; it is applicable to any continuous or discrete distribution. Second, MWLE can be easily extended to regression settings, which is crucial from an insurance pricing perspective, as insurance companies often determine different premiums across policyholders based on individual attributes (e.g. age, geographical location and past claim history). In regression settings, we define $\\bm{x}=(\\bm{x}_1,\\ldots,\\bm{x}_n)$ as the covariate vectors for each of the $n$ observations. 
The weighted log-likelihood function in Equation (\\ref{eq:loglik_weight}) is then re-expressed as\n\\begin{equation} \\label{eq:loglik_weight_reg}\n\\mathcal{L}^*_n(\\bm{\\Phi};\\bm{y},\\bm{x})=\\sum_{i=1}^{n}W(y_i)\\log \\frac{h(y_i;\\bm{\\Phi},\\bm{x}_i)W(y_i)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi},\\bm{x}_i)W(u)du}\n\\end{equation}\nfor some regression model with density function $h(y_i;\\bm{\\Phi},\\bm{x}_i)$. The asymptotic properties still hold, subject to further regularity conditions on the covariates $\\bm{x}_i$. For parameter estimation using the GEM algorithm, only the hypothetical data approach (Method 1, which converges more slowly than Method 2 in Section \\ref{sec:ex}) works, because the transformed mixing probabilities in Equation (\\ref{eq:em:pi_trans}) under Method 2 are assumed to be homogeneous across all observations. We leave the theoretical details, together with further empirical studies and applications, to future research.\n\n\n\n\n\\bibliographystyle{abbrvnat}\n\n\\section{Regularity conditions for asymptotic theory} \\label{apx:asym_reg}\nLet $h(y;\\bm{\\Phi})$ be the density function of $Y$ with parameter $\\bm{\\Phi}\\in\\bm{\\Omega}$. For a more concise presentation of the regularity conditions, we write $\\bm{\\Phi}=(\\psi_1,\\ldots,\\psi_P)$, where $P$ is the total number of parameters in the model. 
The regularity conditions are:\n\n\\begin{enumerate}[font={\\bfseries},label={R\\arabic*.}]\n\\item $h(y;\\bm{\\Phi})$ has common support in $y$ for all $\\bm{\\Phi}\\in\\bm{\\Omega}$, $h(y;\\bm{\\Phi})$ is identifiable in $\\bm{\\Phi}$ up to a permutation of mixture components.\n\\item $h(y;\\bm{\\Phi})$ admits third partial derivatives with respect to $\\bm{\\Phi}$ for each $\\bm{\\Phi}\\in\\bm{\\Omega}$ and for almost all $y$.\n\\item For all $j_1,j_2=1,\\ldots,P$, the first two derivatives of $h(y;\\bm{\\Phi})$ satisfy\n\\begin{equation}\nE\\left[\\frac{\\partial}{\\partial\\psi_{j_1}}\\log h(y;\\bm{\\Phi})\\right]=0;\n\\end{equation}\n\\begin{equation}\nE\\left[\\frac{\\partial}{\\partial\\psi_{j_1}}\\log h(y;\\bm{\\Phi})\\frac{\\partial}{\\partial\\psi_{j_2}}\\log h(y;\\bm{\\Phi})\\right]=E\\left[-\\frac{\\partial^2}{\\partial\\psi_{j_1}\\partial\\psi_{j_2}}\\log h(y;\\bm{\\Phi})\\right].\n\\end{equation}\n\\item The Fisher information matrix is finite and positive definite at $\\bm{\\Phi}=\\bm{\\Phi}_0$:\n\\begin{equation}\n\\mathcal{I}(\\bm{\\Phi})=E\\left[\\left(\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(y;\\bm{\\Phi})\\right)\\left(\\frac{\\partial}{\\partial\\bm{\\Phi}}\\log h(y;\\bm{\\Phi})\\right)^T\\right].\n\\end{equation}\n\\item There exists an integrable function $\\mathcal{M}(y)$ such that\n\\begin{equation}\n\\hspace{-1cm}\n\\left|\\frac{\\partial}{\\partial\\psi_{j_1}}\\log h(y;\\bm{\\Phi})\\right|\\leq \\mathcal{M}(y),\\quad\n\\left|\\frac{\\partial^2}{\\partial\\psi_{j_1}\\partial\\psi_{j_2}}\\log h(y;\\bm{\\Phi})\\right|\\leq \\mathcal{M}(y),\\quad\n\\left|\\frac{\\partial^3}{\\partial\\psi_{j_1}\\partial\\psi_{j_2}\\partial\\psi_{j_3}}\\log h(y;\\bm{\\Phi})\\right|\\leq \\mathcal{M}(y),\n\\end{equation}\n\\begin{equation}\n\\hspace{-1cm}\n\\left|\\frac{\\partial}{\\partial\\psi_{j_1}}\\log h(y;\\bm{\\Phi})\\frac{\\partial}{\\partial\\psi_{j_2}}\\log h(y;\\bm{\\Phi})\\right|\\leq 
\\mathcal{M}(y),\\quad\n\\left|\\frac{\\partial^2}{\\partial\\psi_{j_1}\\partial\\psi_{j_2}}\\log h(y;\\bm{\\Phi})\\frac{\\partial}{\\partial\\psi_{j_3}}\\log h(y;\\bm{\\Phi})\\right|\\leq \\mathcal{M}(y),\n\\end{equation}\n\\begin{equation}\n\\hspace{-1cm}\n\\left|\\frac{\\partial}{\\partial\\psi_{j_1}}\\log h(y;\\bm{\\Phi})\\frac{\\partial}{\\partial\\psi_{j_2}}\\log h(y;\\bm{\\Phi})\\frac{\\partial}{\\partial\\psi_{j_3}}\\log h(y;\\bm{\\Phi})\\right|\\leq \\mathcal{M}(y).\n\\end{equation}\n\\end{enumerate}\n\n\\section{Proof of Theorems 1 and 2} \\label{apx:asym_proof1}\nWe first focus on Theorem 1. Denote the weighted log-likelihood of a single observation by\n\\begin{equation}\n\\mathcal{L}^{*}(\\bm{\\Phi};y)=W(y)\\log \\frac{h(y;\\bm{\\Phi})W(y)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du}.\n\\end{equation}\n\nThe consistency and asymptotic normality can be proved by applying Theorems 5.41 and 5.42 of \\cite{van2000asymptotic}. The theorems require the regularity conditions that $E\\left[\\|\\partial\/\\partial\\bm{\\Phi}\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\|^2\\right]<\\infty$, that the matrix $E\\left[\\partial^2\/\\partial\\bm{\\Phi}\\partial\\bm{\\Phi}^T\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\right]$ exists, and that $|\\partial^3\/\\partial\\psi_{j_1}\\partial\\psi_{j_2}\\partial\\psi_{j_3}\\mathcal{L}^{*}(\\bm{\\Phi};y)|$ is dominated by a fixed integrable function of $y$ for $j_1,j_2,j_3=1,\\ldots,P$, where $\\psi_j$ is the $j^{\\text{th}}$ element of $\\bm{\\Phi}$. Through direct differentiation, the aforementioned quantities can all be expressed as functions of $\\kappa(u;\\bm{\\Phi})$ and $\\int_{0}^{\\infty}\\kappa(u;\\bm{\\Phi})h(u;\\bm{\\Phi})W(u)du$ only, where $\\kappa(u;\\bm{\\Phi})$ can be any of the six terms presented in regularity condition \\textbf{R5} (the left-hand sides of the six inequalities under \\textbf{R5} without the absolute value). 
Given \\textbf{R5} that $\\kappa(u;\\bm{\\Phi})$ is bounded by an integrable function and since $|\\int_{0}^{\\infty}\\kappa(u;\\bm{\\Phi})h(u;\\bm{\\Phi})W(u)du|\\leq \\int_{0}^{\\infty}|\\kappa(u;\\bm{\\Phi})|h(u;\\bm{\\Phi})du$, the aforementioned regularity conditions required by \\cite{van2000asymptotic} hold.\n\n\\medskip\n\nFor consistency, it suffices from Theorem 5.42 of \\cite{van2000asymptotic} to show that $\\bm{\\Phi}_0$ is the maximizer of\n\\begin{align}\nE_{\\bm{\\Phi}_0}\\left[\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\right]\n&=\\int_{0}^{\\infty}W(y)\\log \\frac{h(y;\\bm{\\Phi})W(y)}{\\int_{0}^{\\infty}h(u;\\bm{\\Phi})W(u)du}h(y;\\bm{\\Phi}_0)dy\\nonumber\\\\\n&=c_1\\int_{0}^{\\infty}\\tilde{h}(y;\\bm{\\Phi}_0)\\log\\frac{\\tilde{h}(y;\\bm{\\Phi})}{\\tilde{h}(y;\\bm{\\Phi}_0)}dy+c_2\\nonumber\\\\\n&=-c_1D_{\\text{KL}}\\left(\\tilde{h}(y;\\bm{\\Phi})\\|\\tilde{h}(y;\\bm{\\Phi}_0)\\right)+c_2,\n\\end{align}\nwhere $c_1= \\int_0^\\infty h(y; \\bm{\\Phi}_0)W(y) dy >0$ and $c_2=c_1 \\int_0^\\infty \\tilde{h}(y;\\bm{\\Phi}_0) \\log \\tilde{h}(y;\\bm{\\Phi}_0) dy$ are constants and $D_{\\text{KL}}(Q_1\\|Q_2)\\geq 0$ is the KL divergence between $Q_1$ and $Q_2$. 
Since $D_{\\text{KL}}\\left(\\tilde{h}(y;\\bm{\\Phi})\\|\\tilde{h}(y;\\bm{\\Phi}_0)\\right)=0$ if and only if $\\bm{\\Phi}=\\bm{\\Phi}_0$ (up to a permutation of mixture components, by \\textbf{R1}), the result follows.\n\n\\medskip\n\nFor asymptotic normality, from Theorem 5.41 of \\cite{van2000asymptotic}, we have $\\sqrt{n}(\\hat{\\bm{\\Phi}}_n-\\bm{\\Phi}_0)\\overset{d}{\\rightarrow}\\mathcal{N}(\\bm{0},\\bm{\\Sigma})$ with \n$\\bm{\\Sigma}=\\bm{\\Gamma}^{-1}\\bm{\\Lambda}\\bm{\\Gamma}^{-1}$, where \n\\begin{equation} \\label{eq:asym:lambda_proof}\n\\bm{\\Lambda}=E_{\\bm{\\Phi}_0}\\left[\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\right]\\left[\\frac{\\partial}{\\partial\\bm{\\Phi}}\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\right]^T\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right]\n\\end{equation}\nand\n\\begin{equation} \\label{eq:asym:gamma_proof}\n\\bm{\\Gamma}=-E_{\\bm{\\Phi}_0}\\left[\\frac{\\partial^2}{\\partial\\bm{\\Phi}\\partial\\bm{\\Phi}^T}\\mathcal{L}^{*}(\\bm{\\Phi};Y)\\Bigg|_{\\bm{\\Phi}=\\bm{\\Phi}_0}\\right].\n\\end{equation}\n\nPerforming the differentiations and algebraic manipulations on Equations (2.3) and (2.4) results in Equations (4.2) and (4.3) respectively, which proves the asymptotic normality result.\n\n\\medskip\n\nThe proof idea of Theorem 2 is identical to the above, except that the expectations in Equations (2.3) and (2.4) are taken as $\\tilde{E}[\\cdot]$ instead of $E_{\\bm{\\Phi}_0}[\\cdot]$.\n\n\n\n\\section{Proof of Theorem 3} \\label{apx:asym_proof2}\nWe begin with the following lemmas:\n\\begin{lemma} \\label{apx:lem:asym1}\nTo prove Theorem 3, it suffices to show that\n\\begin{equation}\nT_n(\\bm{\\Phi}):=\\frac{\\partial}{\\partial\\gamma}\\tilde{E}_n[\\log \\tilde{h}_{n}(Y;\\bm{\\Phi})]\n\\end{equation}\nis asymptotically a strictly decreasing function of $\\gamma$ as $n\\rightarrow\\infty$, with $T_n(\\bm{\\Phi})|_{\\gamma=\\gamma_0}\\rightarrow 0$ as $n\\rightarrow\\infty$.\n\\end{lemma}\n\n\\begin{proof}\nIf we keep the weight function $W(\\cdot)$ fixed (independent 
of $n$), applying Theorem 5.7 of \\cite{van2000asymptotic} we have that maximizing the weighted log-likelihood function $\\mathcal{L}_n^{*}(\\bm{\\Phi};\\bm{y})$ is asymptotically equivalent to maximizing $\\tilde{E}_n[\\log \\tilde{h}_{n}(Y;\\bm{\\Phi})]$ (which is indeed independent of $n$).\n\nNow suppose the weight function $W_n(\\cdot)$ depends on $n$. As $n$ increases, the increasing distortion (more down-weighting) of the relative importance of observations reduces the effective number of observations. Heuristically, we need the number of observations $n$ to increase faster than the distortion impacts of $W_n(\\cdot)$, so that the effective number of observations grows to infinity and large sample theory still applies. Quantitatively, we require that the variance of the (scaled) empirical weighted log-likelihood\n\\begin{equation}\nV_n(\\bm{\\Phi}):=\\text{Var}\\left(\\frac{1}{n\\int_{0}^{\\infty}W_n(u)g(u)du}\\sum_{i=1}^{n}W_n(Y_i)\\log\\tilde{h}_{n}(Y_i;\\bm{\\Phi})\\right)\\rightarrow 0\n\\end{equation}\nas $n\\rightarrow\\infty$, such that the (scaled) empirical weighted log-likelihood function converges to its expectation, which is $\\tilde{E}_n[\\log \\tilde{h}_{n}(Y;\\bm{\\Phi})]$. 
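This "effective number of observations" heuristic can be illustrated numerically under purely illustrative assumptions (none of which appear in the paper): take $Y\sim\text{Exp}(1)$, the exponential-CDF weight $W_n(y)=1-e^{-y/\tau_n}$, and the scaling $\tau_n=\sqrt{n}$. Then $E[W_n(Y)]=1/(\tau_n+1)$ shrinks to zero, yet $n\,E[W_n(Y)]$ still diverges, so a variance bound of order $1/(n\,E[W_n(Y)])$ vanishes.

```python
import math

def effective_obs(n, tau):
    # n * E[W_n(Y)] for Y ~ Exp(1) and W_n(y) = 1 - exp(-y / tau):
    # E[exp(-Y / tau)] = tau / (tau + 1), hence E[W_n(Y)] = 1 / (tau + 1).
    return n / (tau + 1.0)

# With tau_n = sqrt(n), ever more small observations are down-weighted as n
# grows, yet the effective number of observations n * E[W_n] ~ sqrt(n) diverges.
for n in (100, 10_000, 1_000_000):
    print(n, effective_obs(n, math.sqrt(n)))
```

The printed values grow roughly like $\sqrt{n}$, mirroring the requirement that $n$ outpaces the distortion of $W_n(\cdot)$.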
Now, $V_n(\\bm{\\Phi})$ is evaluated as follows:\n\\begin{align}\nV_n(\\bm{\\Phi})\n&=\\frac{1}{n(\\int_{0}^{\\infty}W_n(u)g(u)du)^2}\\text{Var}\\left(W_n(Y)\\log\\tilde{h}_{n}(Y;\\bm{\\Phi})\\right)\\nonumber\\\\\n&\\leq\\frac{1}{n(\\int_{0}^{\\infty}W_n(u)g(u)du)^2}\\tilde{E}\\left[W_n(Y)(\\log\\tilde{h}_{n}(Y;\\bm{\\Phi}))^2\\right]\\nonumber\\\\\n&=\\frac{1}{n\\int_{0}^{\\infty}W_n(u)g(u)du}\\int_{0}^{\\infty}\\frac{W_n(y)g(y)}{\\int_{0}^{\\infty}W_n(u)g(u)du}(\\log\\tilde{h}_{n}(y;\\bm{\\Phi}))^2dy\\nonumber\\\\\n&=\\frac{\\tilde{E}_n[(\\log \\tilde{h}_{n}(Y;\\bm{\\Phi}))^2]}{n\\tilde{E}[W_n(Y)]}\\rightarrow 0,\n\\end{align}\nwhere the inequality uses $\\text{Var}(X)\\leq E[X^2]$ together with $0\\leq W_n(\\cdot)\\leq 1$, and the convergence is based on Assumption \\textbf{A4}.\n\\end{proof}\n\n\\begin{lemma} \\label{apx:lem:asym2}\n(Monotone density theorem -- Theorem 1.7.2 of \\cite{bingham1989regular}) Denote $H$ as a probability distribution function with $h$ being the corresponding probability density function. Assume $h$ is ultimately monotone (i.e. $h$ is monotone on $(z,\\infty)$ for some $z>0$). If\n\\begin{equation}\n\\bar{H}(y)\\sim y^{-\\gamma}L(y)\n\\end{equation}\nas $y\\rightarrow\\infty$ for some $\\gamma>0$ and slowly varying function $L$, then\n\\begin{equation}\nh(y)\\sim \\gamma y^{-\\gamma-1}L(y)\n\\end{equation}\nas $y\\rightarrow\\infty$.\n\\end{lemma}\n\nWe proceed to the proof of Theorem 3 as follows. 
Using the result from Lemma \\ref{apx:lem:asym1}, it suffices to evaluate\n\\begin{align} \\label{apx:eq:proof2:Tn}\nT_n(\\bm{\\Phi})\n&=\\frac{\\partial}{\\partial\\gamma}\\int_{0}^{\\infty}\\tilde{g}_n(y)\\log \\tilde{h}_{n}(y;\\bm{\\Phi})dy\\nonumber\\\\\n&=\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\tilde{g}^{*}_n(y)\\log\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})dy + o(1)\\nonumber\\\\\n&=\\frac{\\partial}{\\partial\\gamma}\\log\\tilde{h}^{*}_{n}(\\tau_n;\\bm{\\Phi})\n+\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\left[\\frac{\\partial}{\\partial y}\\log\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})\\right]\\times\\bar{\\tilde{G}}^{*}_{n}(y)dy + o(1)\\nonumber\\\\\n&:=M_1(\\tau_n;\\bm{\\Phi})+M_2(\\tau_n;\\bm{\\Phi}) + o(1),\n\\end{align}\nwhere\n\\begin{equation}\n\\tilde{g}^{*}_{n}(y)=\\frac{g(y)W_n(y)}{\\int_{\\tau_n}^{\\infty}g(u)W_n(u)du}1\\{y\\geq\\tau_n\\},\\qquad \\tilde{h}^{*}_{n}(y;\\bm{\\Phi})=\\frac{h(y;\\bm{\\Phi})W_n(y)}{\\int_{\\tau_n}^{\\infty}h(u;\\bm{\\Phi})W_n(u)du}1\\{y\\geq\\tau_n\\},\n\\end{equation}\nare the properly normalized transformed density functions, and $\\tilde{G}^{*}_{n}$ and $\\tilde{H}_{n}^{*}$ are the corresponding distribution functions. The second equality of Equation (\\ref{apx:eq:proof2:Tn}) results from Assumption \\textbf{A3}, while the third equality follows from integration by parts. 
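For clarity, the integration-by-parts identity behind the third equality can be written out explicitly: for a density $\tilde{g}^{*}_{n}$ supported on $[\tau_n,\infty)$ with survival function $\bar{\tilde{G}}^{*}_{n}$ (so that $\bar{\tilde{G}}^{*}_{n}(\tau_n)=1$), and assuming the boundary term vanishes at infinity,

```latex
\begin{equation}
\int_{\tau_n}^{\infty}\tilde{g}^{*}_{n}(y)\log\tilde{h}^{*}_{n}(y;\bm{\Phi})dy
=\log\tilde{h}^{*}_{n}(\tau_n;\bm{\Phi})
+\int_{\tau_n}^{\infty}\left[\frac{\partial}{\partial y}\log\tilde{h}^{*}_{n}(y;\bm{\Phi})\right]\bar{\tilde{G}}^{*}_{n}(y)dy,
\end{equation}
```

after which differentiating both sides with respect to $\gamma$ yields the decomposition into $M_1(\tau_n;\bm{\Phi})$ and $M_2(\tau_n;\bm{\Phi})$.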
Now, we evaluate $M_1(\\tau_n;\\bm{\\Phi})$ and $M_2(\\tau_n;\\bm{\\Phi})$ as follows:\n\n\\begin{align}\nM_1(\\tau_n;\\bm{\\Phi})\n&=\\frac{\\partial}{\\partial\\gamma}\\log\\tilde{h}^{*}_{n}(\\tau_n;\\bm{\\Phi})\\nonumber\\\\\n&=\\frac{\\partial}{\\partial\\gamma}\\left[\\log\\gamma-(\\gamma+1)\\log\\tau_n+\\log L(\\tau_n;\\bm{\\Phi})\\right]\\nonumber\\\\\n&\\hspace{3em}-\\int_{\\tau_n}^{\\infty}\\frac{\\partial}{\\partial\\gamma}\\left[\\log\\gamma-(\\gamma+1)\\log y+\\log L(y;\\bm{\\Phi})\\right]\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})dy+o(1)\\nonumber\\\\\n&=\\frac{1}{\\gamma}-\\log\\tau_n-\\frac{1}{\\gamma}+\\int_{\\tau_n}^{\\infty}(\\log y)\\times\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})dy\n-\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\log\\frac{L(y;\\bm{\\Phi})}{L(\\tau_n;\\bm{\\Phi})}\\times\\tilde{h}^{*}_{n}(y;\\bm{\\Phi})dy+o(1)\\nonumber\\\\\n&=-\\log\\tau_n+\\log\\tau_n\n+\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{H}}^{*}_{n}(y;\\bm{\\Phi})dy\n-\\frac{\\partial}{\\partial\\gamma}\\int_{1}^{\\infty}\\log\\frac{L(\\tau_nt;\\bm{\\Phi})}{L(\\tau_n;\\bm{\\Phi})}\\times\\tau_n\\tilde{h}^{*}_{n}(\\tau_nt;\\bm{\\Phi})dt+o(1)\\nonumber\\\\\n&=\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{H}}^{*}_{n}(y;\\bm{\\Phi})dy + o(1),\n\\end{align}\nwhere the dominated convergence theorem and integration by parts are used repeatedly. The second equality invokes the monotone density theorem (Lemma \\ref{apx:lem:asym2}), with Assumption \\textbf{A5} being satisfied. The last term of the second-to-last equality converges to zero uniformly in $\\bm{\\Phi}$ by the dominated convergence theorem and the uniform convergence conditions in Assumption \\textbf{A2}. 
Using similar techniques as the above, $M_2(\\tau_n;\\bm{\\Phi})$ can be evaluated as\n\\begin{align}\nM_2(\\tau_n;\\bm{\\Phi})\n&=-\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{G}}^{*}_{n}(y)dy\n+\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\frac{\\partial}{\\partial y}(\\log L(y;\\bm{\\Phi}))\\times\\bar{\\tilde{G}}^{*}_{n}(y)dy\\nonumber\\\\\n&=-\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{G}}^{*}_{n}(y)dy-\\frac{\\partial}{\\partial\\gamma}\\int_{\\tau_n}^{\\infty}\\log\\frac{L(y;\\bm{\\Phi})}{L(\\tau_n;\\bm{\\Phi})}\\times\\tilde{g}^{*}_{n}(y)dy\\nonumber\\\\\n&=-\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\bar{\\tilde{G}}^{*}_{n}(y)dy + o(1).\n\\end{align}\n\nTo sum up, we have\n\\begin{equation}\nT_n(\\bm{\\Phi})\n=\\int_{\\tau_n}^{\\infty}\\frac{1}{y}\\left[\\bar{\\tilde{H}}^{*}_{n}(y;\\bm{\\Phi})-\\bar{\\tilde{G}}^{*}_{n}(y)\\right]dy\n=\\int_{1}^{\\infty}\\frac{1}{t}\\left[\\bar{\\tilde{H}}^{*}_{n}(\\tau_nt;\\bm{\\Phi})-\\bar{\\tilde{G}}^{*}_{n}(\\tau_nt)\\right]dt.\n\\end{equation}\n\nInvestigating each term inside the integrand, we have\n\\begin{align}\n\\bar{\\tilde{H}}^{*}_{n}(\\tau_nt;\\bm{\\Phi})\n&=\\frac{\\int_t^{\\infty}h(\\tau_nv;\\bm{\\Phi})W_n(\\tau_nv)dv}{\\int_1^{\\infty}h(\\tau_nv;\\bm{\\Phi})W_n(\\tau_nv)dv}\\nonumber\\\\\n&=\\frac{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)[L(\\tau_nv;\\bm{\\Phi})\/L(\\tau_n;\\bm{\\Phi})]dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)[L(\\tau_nv;\\bm{\\Phi})\/L(\\tau_n;\\bm{\\Phi})]dv} + o(1)\\nonumber\\\\\n&=\\frac{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv} + o(1),\n\\end{align}\nand\n\\begin{align}\n\\bar{\\tilde{G}}^{*}_{n}(\\tau_nt)\n&=\\frac{\\int_t^{\\infty}g(\\tau_nv)W_n(\\tau_nv)dv}{\\int_1^{\\infty}g(\\tau_nv)W_n(\\tau_nv)dv}\\nonumber\\\\\n&=\\frac{\\int_t^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)[L_0(\\tau_nv)\/L_0(\\tau_n)]dv}{\\int_1^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)[L_0(\\tau_nv)\/L_0(\\tau_n)]dv} + 
o(1)\\nonumber\\\\\n&=\\frac{\\int_t^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)dv} + o(1),\n\\end{align}\nwhere $\\tilde{W}_n(v)=W_n(\\tau_nv)$. Therefore, it is clear that\n\\begin{equation}\nT_n(\\bm{\\Phi})=\\int_{1}^{\\infty}\\frac{1}{t}\\left[\\frac{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}-\\frac{\\int_t^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma_0-1}\\tilde{W}_n(v)dv}\\right]dt+o(1)\n\\end{equation}\nconverges to zero for $\\gamma=\\gamma_0$ as $n\\rightarrow\\infty$. To show that $T_n(\\bm{\\Phi})$ is a strictly decreasing function of $\\gamma$ as $n\\rightarrow\\infty$, it suffices to evaluate\n\\begin{align}\n\\frac{\\partial}{\\partial\\gamma}\\log\\frac{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}\n&=-\\frac{\\int_t^{\\infty}(\\log v)v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_t^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv}+\\frac{\\int_1^{\\infty}(\\log v)v^{-\\gamma-1}\\tilde{W}_n(v)dv}{\\int_1^{\\infty}v^{-\\gamma-1}\\tilde{W}_n(v)dv},\n\\end{align}\nwhich is negative if and only if\n\\begin{equation}\n\\int_{1}^{t}(\\log v)k_{n,1,t}(v;\\gamma)dv<\\int_{t}^{\\infty}(\\log v)k_{n,t,\\infty}(v;\\gamma)dv,\n\\end{equation}\nwhere\n\\begin{equation}\nk_{n,t_1,t_2}(v;\\gamma)=\\frac{v^{-\\gamma-1}\\tilde{W}_n(v)}{\\int_{t_1}^{t_2}v^{-\\gamma-1}\\tilde{W}_n(v)dv}1\\{t_1<v<t_2\\}\n\\end{equation}\nis a proper density function on $(t_1,t_2)$. The inequality holds because $\\log v<\\log t$ for $v\\in(1,t)$ while $\\log v>\\log t$ for $v\\in(t,\\infty)$, so $T_n(\\bm{\\Phi})$ is asymptotically strictly decreasing in $\\gamma$. This completes the proof.\n\n\\section{Computational details of the GEM algorithm}\n\\subsection{E-step}\nThe E-step requires several integrals involving the weight function $W(\\cdot)$; two cases of weight functions are considered below.\n\nRe-parameterizing the gamma distribution with $\\alpha=1\/\\phi_j$ and $\\beta=1\/(\\phi_j\\mu_j)$, we need to compute\n\\begin{equation}\n\\int_{0}^{\\infty}q(u)\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du\n\\end{equation}\nfor $q(u)=1$, $q(u)=u$ and $q(u)=\\log u$; and\n\\begin{equation}\n\\int_{0}^{\\infty}r(u)\\frac{\\gamma\\theta^{\\gamma}}{(u+\\theta)^{\\gamma+1}}(1-W(u))du\n\\end{equation}\nfor $r(u)=1$ and 
$r(u)=\\log(u+\\theta)$.\n\n\\textbf{Case 1}. We have the following analytical results:\n\\begin{equation}\n\\int_{0}^{\\infty}\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du=\\left(\\frac{\\beta}{\\beta+1\/\\tilde{\\mu}}\\right)^{\\alpha},\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}u\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du=\\frac{\\alpha\\beta^{\\alpha}}{(\\beta+1\/\\tilde{\\mu})^{\\alpha+1}},\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}\\log u\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}u^{\\alpha-1}\\exp\\{-\\beta u\\}(1-W(u))du=\\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)}\\frac{\\partial}{\\partial\\alpha}\\frac{\\Gamma(\\alpha)}{(\\beta+1\/\\tilde{\\mu})^{\\alpha}},\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}\\frac{\\gamma\\theta^{\\gamma}}{(u+\\theta)^{\\gamma+1}}(1-W(u))du=\\gamma\\left(\\frac{\\theta}{\\tilde{\\mu}}\\right)^{\\gamma}\\exp\\left\\{\\frac{\\theta}{\\tilde{\\mu}}\\right\\}\\Gamma(-\\gamma;\\frac{\\theta}{\\tilde{\\mu}},\\infty),\n\\end{equation}\n\\begin{equation}\n\\int_{0}^{\\infty}\\log(u+\\theta)\\frac{\\gamma\\theta^{\\gamma}}{(u+\\theta)^{\\gamma+1}}(1-W(u))du=-\\gamma\\theta^{\\gamma}\\exp\\left\\{\\frac{\\theta}{\\tilde{\\mu}}\\right\\}\\frac{\\partial}{\\partial\\gamma}\\Gamma(-\\gamma;\\frac{\\theta}{\\tilde{\\mu}},\\infty),\n\\end{equation}\nwhere $\\Gamma(m;c_1,c_2)=\\int_{c_1}^{c_2}u^{m-1}\\exp\\{-u\\}du$ is an incomplete gamma function.\n\n\\textbf{Case 2}. 
We have the following analytical results:\n\begin{equation}\n\int_{0}^{\infty}\frac{\beta^{\alpha}}{\Gamma(\alpha)}u^{\alpha-1}\exp\{-\beta u\}(1-W(u))du=\tilde{\phi}\frac{\Gamma(\alpha;\beta\tilde{\mu},\infty)}{\Gamma(\alpha)}+(1-\tilde{\phi}),\n\end{equation}\n\begin{equation}\n\int_{0}^{\infty}u\frac{\beta^{\alpha}}{\Gamma(\alpha)}u^{\alpha-1}\exp\{-\beta u\}(1-W(u))du=\frac{\alpha}{\beta}\left[\tilde{\phi}\frac{\Gamma(\alpha+1;\beta\tilde{\mu},\infty)}{\Gamma(\alpha+1)}+(1-\tilde{\phi})\right],\n\end{equation}\n\begin{equation}\n\int_{0}^{\infty}\log u\frac{\beta^{\alpha}}{\Gamma(\alpha)}u^{\alpha-1}\exp\{-\beta u\}(1-W(u))du=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\left[\tilde{\phi}\frac{\partial}{\partial\alpha}\frac{\Gamma(\alpha;\beta\tilde{\mu},\infty)}{\beta^{\alpha}}+(1-\tilde{\phi})\frac{\partial}{\partial\alpha}\frac{\Gamma(\alpha)}{\beta^{\alpha}}\right],\n\end{equation}\n\begin{equation}\n\int_{0}^{\infty}\frac{\gamma\theta^{\gamma}}{(u+\theta)^{\gamma+1}}(1-W(u))du=\tilde{\phi}\left(\frac{\theta}{\tilde{\mu}+\theta}\right)^{\gamma}+(1-\tilde{\phi}),\n\end{equation}\n\begin{equation}\n\int_{0}^{\infty}\log(u+\theta)\frac{\gamma\theta^{\gamma}}{(u+\theta)^{\gamma+1}}(1-W(u))du=-\gamma\theta^{\gamma}\frac{\partial}{\partial\gamma}\left[\tilde{\phi}\frac{1}{\gamma(\tilde{\mu}+\theta)^{\gamma}}+(1-\tilde{\phi})\frac{1}{\gamma\theta^{\gamma}}\right].\n\end{equation}\n\n\subsection{M-step} \label{supp:sec:em_m}\nMaximizing $Q^{*}(\bm{\Phi}|\bm{\Phi}^{(l-1)})$ with respect to $\bm{\Phi}$ yields the following parameter updates:\n\begin{equation}\n\pi_j^{(l)}=\frac{\sum_{i=1}^{n}W(y_i)z_{ij}^{(l)}+\left(\sum_{i=1}^{n}W(y_i)\right)k^{(l)}z_j^{'(l)}}{\sum_{j'=1}^{J+1}\left\{\sum_{i=1}^{n}W(y_i)z_{ij'}^{(l)}+\left(\sum_{i=1}^{n}W(y_i)\right)k^{(l)}z_{j'}^{'(l)}\right\}},\quad 
j=1,\\ldots,J+1,\n\\end{equation}\n\\begin{equation}\n\\mu_j^{(l)}=\\frac{\\sum_{i=1}^{n}W(y_i)z_{ij}^{(l)}y_i+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_j^{'(l)}\\widehat{y'}^{(l)}_j}{\\sum_{i=1}^{n}W(y_i)z_{ij}^{(l)}+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_j^{'(l)}},\\quad j=1,\\ldots,J,\n\\end{equation}\n\\begin{align}\n\\phi_j^{(l)}\n&=\\underset{\\phi_j>0}{\\text{argmax}}\\Bigg\\{\\sum_{i=1}^nW(y_i)z^{(l)}_{ij}\\left\\{-\\frac{1}{\\phi_j}\\log\\phi_j-\\frac{1}{\\phi_j}\\log\\mu_j^{(l)}-\\log\\Gamma(\\frac{1}{\\phi_j})+(\\frac{1}{\\phi_j}-1)\\log y_i-\\frac{y_i}{\\phi_j\\mu_j}\\right\\}\\nonumber\\\\\n&\\hspace{5em} +k^{(l)}\\left(\\sum_{i=1}^{n}W(y_i)\\right)z^{'(l)}_{j}\\left\\{-\\frac{1}{\\phi_j}\\log\\phi_j-\\frac{1}{\\phi_j}\\log\\mu_j^{(l)}-\\log\\Gamma(\\frac{1}{\\phi_j})+(\\frac{1}{\\phi_j}-1)\\widehat{\\log y'}^{(l)}_j-\\frac{\\widehat{y'}^{(l)}_j}{\\phi_j\\mu_j^{(l)}}\\right\\}\\Bigg\\},\\nonumber\\\\\n\\end{align}\n\\begin{equation}\n\\gamma^{(l)}=\\frac{\\sum_{i=1}^{n}W(y_i)z_{i(J+1)}^{(l)}+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_{J+1}^{'(l)}}{\\sum_{i=1}^{n}W(y_i)z_{i(J+1)}^{(l)}\\left[\\log(y_i+\\theta)-\\log\\theta\\right]+\\left(\\sum_{i=1}^{n}W(y_i)\\right)k^{(l)}z_{J+1}^{'(l)}\\left[\\widehat{\\log(y'+\\theta)}^{(l)}_{J+1}-\\log\\theta\\right]}.\n\\end{equation}\n\nNote here that $\\theta$ is treated as a fixed hyperparameter not involved in estimation procedure. To estimate $\\theta$ as a parameter, we may need to take a further step to numerically maximize the observed data weighted log-likelihood $\\mathcal{L}^{*}_n(\\bm{\\Phi};\\bm{y})$ w.r.t. 
$\\theta$.\n\n\\section{GEM algorithm for MWLE under J-Gamma Lomax mixture model: Parameter transformation approach}\n\\subsection{Construction of complete data}\nThe complete data is given by\n\\begin{equation}\n\\mathcal{D}^{\\text{com}}=\\{(y_i,\\bm{z}_i^{*})\\}_{i=1,\\ldots,n},\n\\end{equation}\nwhere $\\bm{z}_i^{*}=(z_{i1}^{*},\\ldots,z_{i(J+1)}^{*})$ are the labels where $z_{ij}^{*}=1$ if observation $i$ belongs to the $j^{\\text{th}}$ (transformed) latent mixture component and $z_{ij}^{*}=0$ otherwise. The complete data weighted log-likelihood function is given by\n\\begin{align}\n\\tilde{\\mathcal{L}}^{*}_n(\\bm{\\Phi};\\mathcal{D}^{\\text{com}})\n&=\\sum_{i=1}^{n}W(y_i)\\Bigg\\{\\left[\\sum_{j=1}^{J}z_{ij}^{*}\\left(\\log\\pi_j^{*}+\\log f_b(y_i;\\mu_j,\\phi_j) -\\log\\int_0^{\\infty}f_b(u;\\mu_j,\\phi_j)W(u)du\\right)\\right]\\nonumber\\\\\n&\\hspace{8em}+z_{i(J+1)}^{*}\\left(\\log\\pi_{J+1}^{*}+\\log f_t(y_i;\\theta,\\gamma)W(y_i)-\\log\\int_0^{\\infty}f_t(u;\\theta,\\gamma)W(u)du\\right)\\Bigg\\}.\n\\end{align}\n\n\n\\subsection{E-step} \\label{supp:sec:em_e2}\nThe expectation of the complete data weighted log-likelihood is given by the following for the $l^{\\text{th}}$ iteration:\n\\begin{align}\nQ^{*}(\\bm{\\Phi}|\\bm{\\Phi}^{(l-1)})\n&=\\sum_{i=1}^{n}W(y_i)\\Bigg\\{\\Bigg[\\sum_{j=1}^{J}z_{ij}^{*(l)}\\Bigg(\\log\\pi_j^{*}-\\frac{1}{\\phi_j}\\log\\phi_j-\\frac{1}{\\phi_j}\\log\\mu_j-\\log\\Gamma(\\frac{1}{\\phi_j})+(\\frac{1}{\\phi_j}-1)\\log 
y_i-\frac{y_i}{\phi_j\mu_j}\nonumber\\\n&\hspace{12em}-\log\int_0^{\infty}f_b(u;\mu_j,\phi_j)W(u)du\Bigg)\Bigg]\nonumber\\\n&\hspace{8em}+z_{i(J+1)}^{*(l)}\Bigg(\log\pi_{J+1}^{*}+\log\gamma+\gamma\log\theta-(\gamma+1)\log(y_i+\theta)\nonumber\\\n&\hspace{13em}-\log\int_0^{\infty}f_t(u;\theta,\gamma)W(u)du\Bigg)\Bigg\},\n\end{align}\nwhere\n\begin{equation}\nz^{*(l)}_{ij}=P(z^{*}_{ij}=1|\bm{y},\bm{\Phi}^{(l-1)})=\n\begin{cases}\n\dfrac{\pi_j^{*(l-1)}f_b(y_i;\mu_j^{(l-1)},\phi_j^{(l-1)})W(y_i)}{\int_0^{\infty}f_b(u;\mu_j^{(l-1)},\phi_j^{(l-1)})W(u)du\times h(y_i;\bm{\Phi}^{(l-1)})},\quad j=1,\ldots,J\\\n\dfrac{\pi^{*(l-1)}_{J+1}f_t(y_i;\theta,\gamma^{(l-1)})W(y_i)}{\int_0^{\infty}f_t(u;\theta,\gamma^{(l-1)})W(u)du\times h(y_i;\bm{\Phi}^{(l-1)})},\quad j=J+1.\n\end{cases}\n\end{equation}\n\n\subsection{M-step} \label{supp:sec:em_m2}\nMaximizing $Q^{*}(\bm{\Phi}|\bm{\Phi}^{(l-1)})$ with respect to $\bm{\Phi}$ yields the following parameter updates:\n\begin{equation}\n\pi_j^{*(l)}=\frac{\sum_{i=1}^{n}W(y_i)z_{ij}^{*(l)}}{\sum_{j'=1}^{J+1}\sum_{i=1}^{n}W(y_i)z_{ij'}^{*(l)}},\quad j=1,\ldots,J+1,\n\end{equation}\nand the other parameters $(\bm{\mu},\bm{\phi},\theta,\gamma)$ are sequentially updated by numerically maximizing $Q^{*}(\bm{\Phi}|\bm{\Phi}^{(l-1)})$ w.r.t. each of the parameters.\n\n\section{Proof of Proposition 3} \label{supp:sec:ascend}\nWrite $\mathcal{L}^{*}_n(\bm{\Phi};\bm{y})=\sum_{i=1}^{n}W(y_i)\log p(y_i|\bm{\Phi})$ and $\tilde{\mathcal{L}}^{*}_n(\bm{\Phi};\mathcal{D}^{\text{com}})=\sum_{i=1}^{n}W(y_i)\left[\log p(y_i|\bm{\Phi}) +\log p(\mathcal{D}^{\text{mis}}_i|\bm{\Phi},y_i)\right]$ for some probability density $p$ and missing data from sample $i$ given by $\mathcal{D}^{\text{mis}}_i$. 
Then, we have\n\begin{align}\n\mathcal{L}^{*}_n(\bm{\Phi};\bm{y})\n&=\tilde{\mathcal{L}}^{*}_n(\bm{\Phi};\mathcal{D}^{\text{com}})-\sum_{i=1}^{n}W(y_i)\log p(\mathcal{D}^{\text{mis}}_i|\bm{\Phi},y_i)\nonumber\\\n&=Q^{*}(\bm{\Phi}|\bm{\Phi}^{(l-1)})-\sum_{i=1}^{n}W(y_i)\int p(\bm{v}_i|\bm{\Phi}^{(l-1)},y_i)\log p(\bm{v}_i|\bm{\Phi},y_i)d\bm{v}_i,\n\end{align}\nwhere the second equality results from taking expectations of both sides over the missing data under parameters $\bm{\Phi}^{(l-1)}$. It follows that\n\begin{align}\n\mathcal{L}^{*}_n(\bm{\Phi}^{(l)};\bm{y})-\mathcal{L}^{*}_n(\bm{\Phi}^{(l-1)};\bm{y})\n&=Q^{*}(\bm{\Phi}^{(l)}|\bm{\Phi}^{(l-1)})-Q^{*}(\bm{\Phi}^{(l-1)}|\bm{\Phi}^{(l-1)})\nonumber\\\n&\quad+\sum_{i=1}^{n}W(y_i)\int p(\bm{v}_i|\bm{\Phi}^{(l-1)},y_i)\log\frac{p(\bm{v}_i|\bm{\Phi}^{(l-1)},y_i)}{p(\bm{v}_i|\bm{\Phi}^{(l)},y_i)}d\bm{v}_i\geq 0.\n\end{align}\n\n\section{Initialization of parameters} \label{apx:em:init}\nAs briefly described in Section 5.3 of the paper, parameter initialization $\bm{\Phi}^{(0)}$ is done using the CMM approach by \cite{gui2018fit}. This comes with the following steps:\n\begin{enumerate}\n\item Determine a threshold $\tau$ which classifies observations $y_i$ into either the body (when $y_i\leq\tau$) or the tail (when $y_i>\tau$) part of the distribution. This can be done by plotting the log of the empirical survival function against $\log y_i$, which is called the log-log plot. For regularly varying distributions, the log-log plot is asymptotically linear. 
$\tau$ is approximated by the point from which onwards the curve is linear.\n\item Perform K-means clustering on $\{y_i\}_{i:y_i\leq\tau}$ with $J$ clusters, and obtain the cluster means $\{\mu^{\text{cluster}}_j\}_{j=1,\ldots,J}$, variances $\{(\sigma^{\text{cluster}}_j)^2\}_{j=1,\ldots,J}$ and weights $\{\tilde{\pi}_j^{\text{cluster}}\}_{j=1,\ldots,J}$.\n\item Set $\mu_j^{(0)}=\mu^{\text{cluster}}_j$, $\phi_j^{(0)}=({\sigma^{\text{cluster}}_j})^2\/{\mu^{\text{cluster}}_j}^2$.\n\item Obtain $\theta^{(0)}$ and $\gamma^{(0)}$ by matching the first two moments of observations belonging to the tail component (i.e. $\{y_i\}_{i:y_i>\tau}$).\n\item Set $\pi_{J+1}^{(0)}$ as the proportion of observations satisfying $y_i>\tau$.\n\item Set the remaining weight parameters as $\pi_{j}^{(0)}=\tilde{\pi}_j^{\text{cluster}}(1-\pi_{J+1}^{(0)})$.\n\end{enumerate}\n\n\section{Truncated log-likelihood function} \label{sec:supp:tll}\nThis section includes more details for Remark 6 in the paper. Denote by $g(y)$ the true distribution generating the observations and by $\tilde{h}(y;\bm{\Phi})=\frac{h(y;\bm{\Phi})W(y)}{\int_0^{\infty}h(u;\bm{\Phi})W(u)du}$ the truncated distribution. The expected weighted log-likelihood can be alternatively written as\n\begin{align}\nn\times\tilde{E}[\mathcal{L}^{*}(\bm{\Phi};\bm{Y})]\n&=n\int_{0}^{\infty}W(u)\log \tilde{h}(u;\bm{\Phi})\times g(u)du\nonumber\\\n&=n\int_{0}^{\infty}g(u)W(u)du\times\int_{0}^{\infty}\log \tilde{h}(u;\bm{\Phi})\times\frac{g(u)W(u)}{\int_{0}^{\infty}g(t)W(t)dt}du\nonumber\\\n&=n\int_{0}^{\infty}g(u)W(u)du\times\tilde{E}^*[\log \tilde{h}(u;\bm{\Phi})],\n\end{align}\nwhere the expectation $\tilde{E}^*$ is taken over $Y$ under the truncated distribution $\frac{g(u)W(u)}{\int_{0}^{\infty}g(t)W(t)dt}$. 
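The factorization of the expected weighted log-likelihood displayed above can be checked numerically. The following Python sketch uses purely hypothetical choices, none of which come from the paper: an Exp(1) density for g, the weight W(u) = 1 - exp(-u), and an Exp(1/2) density standing in for the truncated model. It verifies that the integral of W * log(h_tilde) * g equals the mass (integral of g * W) times the expectation of log(h_tilde) under the tilted density g * W / mass.

```python
import math

# Hypothetical choices for a numerical sanity check (assumptions, not from the paper):
#   g(u)       : true density, taken as Exp(1)
#   W(u)       : weight function, taken as 1 - exp(-u)
#   h_tilde(u) : stand-in for the truncated model density, taken as Exp(1/2)
def g(u):
    return math.exp(-u)

def W(u):
    return 1.0 - math.exp(-u)

def h_tilde(u):
    return 0.5 * math.exp(-0.5 * u)

def integrate(f, a=0.0, b=50.0, m=100000):
    # plain trapezoidal rule on [a, b]; the exponential tails beyond b = 50 are negligible
    h = (b - a) / m
    s = 0.5 * (f(a) + f(b))
    for i in range(1, m):
        s += f(a + i * h)
    return s * h

# left-hand side: integral of W(u) * log(h_tilde(u)) * g(u)
lhs = integrate(lambda u: W(u) * math.log(h_tilde(u)) * g(u))
# right-hand side: mass times the expectation under the tilted density g W / mass
mass = integrate(lambda u: g(u) * W(u))
rhs = mass * integrate(lambda u: math.log(h_tilde(u)) * g(u) * W(u) / mass)
```

For these particular choices the left-hand side also has the closed form (1/2)log(1/2) - 3/8 and the mass equals 1/2, so both the factorization and the quadrature can be cross-checked.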
Next, denote a random set $S_n=\{i:V_i(y_i)=1\}$, such that $\mathcal{L}^{**}_n(\bm{\Phi};\bm{y})$ can be written as\n\begin{equation}\n\mathcal{L}^{**}_n(\bm{\Phi};\bm{y})=\sum_{i\in S_n}\log \tilde{h}(y_i;\bm{\Phi}),\n\end{equation}\nwith effective number of terms $|S_n|\approx n\int_{0}^{\infty}g(u)W(u)du\approx \sum_{i=1}^{n}W(y_i)$ in probability as $n\rightarrow\infty$. Comparing the above two equations, we see that they simply correspond to a standard MLE with a bias term of $P$.\n\n\section{Preliminary analysis of the motivating Greek dataset} \label{apx:prelim_data}\nModelling the property damage claim size distribution is very challenging. As Figures \ref{fig:density} and \ref{fig:loglogplot}, which are also presented by \cite{fung2021mixture}, show, the claim size distribution is not only heavy-tailed but also multi-modal. The key complexity of the empirical distribution is that there are many small distributional modes for smaller claims, as evidenced by the right panel of Figure \ref{fig:density}. On the other hand, it is undesirable to model all these modes using an excessive number of mixture components, as (i) precise predictions of small claims are of less relevance to insurance pricing and risk management; and (ii) this impedes model interpretability. Further, the heavy-tailedness of the empirical distribution is evidenced by the asymptotic linearity of both the log-log plot and the mean excess plot in Figure \ref{fig:loglogplot}. The asymptotic slope of the log-log plot suggests that the estimated tail index is $\gamma\approx 1.3$, while the Lomax tail index obtained by \cite{fung2021mixture} is about $\gamma=1.38$, under a subjective choice of splicing threshold. Note however that these only provide very rough guidance on the true tail index.\n\nNote that distributional multimodality and contamination are indeed prevalent not only in the aforementioned Greek dataset, but also in many publicly available insurance data sets. 
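The slope-based reading of the tail index from the log-log plot described above can be sketched on synthetic data. The following Python snippet is a hypothetical illustration (Pareto samples with gamma = 1.3, not the Greek dataset): it regresses the log empirical survival function on the log claim size over the upper tail, and the negative of the fitted slope recovers the tail index.

```python
import math
import random

# Synthetic illustration (assumption: Pareto data with gamma = 1.3, NOT the Greek data)
random.seed(0)
gamma_true = 1.3
n = 200000
# Pareto(gamma) samples via inverse transform: S(y) = y^(-gamma) for y >= 1
y = sorted((1.0 - random.random()) ** (-1.0 / gamma_true) for _ in range(n))

# points (log y, log S_hat(y)) evaluated at upper order statistics
pts = []
for k in range(1000, 20000, 500):
    threshold = y[n - k]                         # k-th largest observation
    pts.append((math.log(threshold), math.log(k / n)))

# least-squares slope of log S_hat(y) against log y, approximately -gamma
mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
slope = sum((p[0] - mx) * (p[1] - my) for p in pts) / sum((p[0] - mx) ** 2 for p in pts)
gamma_hat = -slope
```

As the text cautions, such a slope reading only gives rough guidance: the fitted value depends on how deep into the tail the regression window is placed, which is why the paper treats it as a preliminary diagnostic rather than an estimator.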
Notable examples include the French business interruption losses (\textbf{frebiloss}), French motor third party liability claims (\textbf{fremotor2sev9907} and \textbf{freMPL8}) and Norwegian car claims (\textbf{norauto}), which can all be retrieved from the \textbf{R} package \textbf{CASdatasets}. This suggests that the modelling challenges emphasized in this paper are not only valid for the Greek data set we are analyzing, but are also applicable to many insurance claim severity data sets.\n\n\begin{figure}[!h]\n\begin{center}\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/density_original.pdf}\n\end{subfigure}\n\hfill\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/density_log.pdf}\n\end{subfigure}\n\end{center}\n\caption{Empirical density of claim amounts (left panel) and log claim amounts (right panel); the orange vertical lines represent amounts of 10,000, 20,000, 50,000 and 100,000, respectively.}\n\label{fig:density}\n\end{figure}\n\n\begin{figure}[!h]\n\begin{center}\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/loglogplot.jpg}\n\end{subfigure}\n\begin{subfigure}[h]{0.49\linewidth}\n\includegraphics[width=\linewidth]{figure\/me_plot.jpg}\n\end{subfigure}\n\end{center}\n\caption{Left panel: log-log plot of the claim amounts; right panel: mean excess plot.}\n\label{fig:loglogplot}\n\end{figure}\n\n\n\bibliographystyle{abbrvnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\nFinding elements of high multiplicative order in a finite field is an interesting problem in computational number theory and has applications in cryptography (for instance, the Discrete Logarithm Problem). A general method to find high order elements was given in \cite{gao1999elements}, later improved in \cite{conflitti2001} and \cite{popovych2014gao}. 
Another general result in this area is an algorithmic technique for finding primitive elements, which is devised in \cite{huang2015primitive}.\nThis technique is efficient in finite fields of small characteristic. \nOther strategies which allow one to construct elements of high order usually address specific sequences of finite fields. In this regard, methods involving Gauss periods were first proposed in the results summarized in \cite{vonzurgathen_shparlinski2001gauss}. After that, an extensive literature followed with works such as \cite{ahmadi_etal2010gauss}, \cite{popovych2012}, \cite{popovych2013}, \cite{chang2013gauss} and \cite{popovych2014sharpening}. Recently, Artin-Schreier extensions were also effectively used in \cite{popovych2015artin} and \cite{brochero2016artin}. \nAnother interesting approach is to look for high order elements which arise as coordinates of points on an algebraic curve defined over a finite field (see for example \cite{voloch2007curves}, \cite{voloch2010elliptic} and \cite{chang2013curves}). One way which has been explored for generating elements of this type is through the iterative use of polynomial equations of type $f(x_{n - 1},x_n) = 0$, defining suitable towers of fields, which we address as \textit{recursive towers} in this work. Examples of this can be found in \cite{burkhart2009finite}, \cite{voloch2010elliptic}, \cite{popovych2015wiedmann} and \cite{popovych2018conway}. \n\nIn \cite{burkhart2009finite}, a recursive tower defined by $f(x_{n - 1},x_n)$ is used to produce elements $\delta_n$ with high multiplicative order in $\mathrm{GF}(q,2^n)$, for $q$ odd, and in $\mathrm{GF}(q,3^n)$, for $q \neq 3$. \nThe choice of the polynomial $f$ for the recursive process to generate high order elements in finite field extensions was limited to the equations of the modular curve towers in \cite{elkies2001explicit}. 
\n\n\nIn this work, we attempt to generalize the choice of the polynomials.\nWe illustrate in detail several interesting towers of fields defined by $x_n^2 + x_n = v(x_{n - 1})$, where $v(x) \in \mathrm{GF}(q,1)[x]$, for $q$ odd, or $x_n^3 + x_n = v(x_{n - 1})$, for $v(x) \in \mathrm{GF}(2,1)[x]$. These towers\ngenerate elements of high order in $\mathrm{GF}(q,2^n)$ and in $\mathrm{GF}(2,2 \cdot 3^n)$, for $n \geq 1$. \nWe also give a recipe for finding other towers of the same form which have similar properties. The simple algebraic conditions given in Sections \ref{sec:odd} and \ref{sec:even}, \nwhich differ \npartially from the conditions required in \cite{burkhart2009finite} (Remark \ref{BurkhartConditionsComparison} below), seem to play an important role for this purpose. In fact, in many \nof the cases we studied, these conditions are \nuseful to prove the existence of high order elements $x_n$ in the field extension. \n\nThroughout this paper, $\delta_n$ in $\mathrm{GF}(q,2^n)$ is the discriminant of the polynomial $f(x_{n},y)$ in $\mathrm{GF}(q,2^n)[y]$. In Corollary \ref{bound-odd}, we prove that the multiplicative orders of $x_n$ and $\delta_n$ grow very fast if $x_{n - j}^2$ and $\delta_{n-j}^2$ do not belong to $\mathrm{GF}(q,2^{n - j - 1})$, for all $j < n-1$. Similar results hold also in even characteristic, see Corollary \ref{bound-even}. Notably, although the bounds obtained are similar, the even characteristic case turns out to be completely new\nwith respect to \cite{burkhart2009finite}. \nIn particular, no additional conditions on the discriminant are required, and the details of the proof are worked out in a different manner. Furthermore, the numerical performance of some of our examples is better than that of \cite{burkhart2009finite} in the iterations we were able to compute. 
As already mentioned above, the polynomials used in \\cite{burkhart2009finite} are the models of certain modular curves given in \\cite{elkies2001explicit}. Despite this fact, a possible relation of the construction of high order elements with the arithmetic properties of such curves does not seem to play a role in the proof of the lower bounds. \nInstead, in one case, we do make use of some arithmetic properties of the algebraic curve considered\nby us (Lemma \\ref{lem:nonsqnear5}).\n\nA comparative study with other relevant literature has also been carried-out.\nFor example, a specific\nconstruction of high order elements in the same type of fields of odd characteristic $q$ can be found in \\cite{cohen1992explicit}, and some variations on it are in \\cite{meyn1995explicit} and \\cite{chapman1997completely}. Comparing the numerical performance of their construction with our variety of examples, we observe that the results are similar for $q \\equiv 1 \\pmod 4$, while for $q \\equiv 3 \\pmod 4$ our construction performs better (see Section \\ref{sec:numres} for examples with $q=3,11$).\n\nIn Section \\ref{sec:notation}, we introduce the notation that we use in the paper. In Sections \\ref{sec:odd} and \\ref{sec:even}, we give the main results\nwhich allow us to obtain the lower bounds on the order of $x_n$ and $\\delta_n$. \nSection \\ref{sec:odd} deals with odd characteristic and Section \\ref{sec:even} deals with even characteristic.\nThe lists of towers satisfying the properties given in Sections \\ref{sec:odd} and \\ref{sec:even} are provided in Sections \\ref{sec:list-odd} and \\ref{sec:list-even}, respectively. Finally, in Section \\ref{sec:numres}, we list numerical results obtained using MAGMA \\cite{MAGMA}, about the seven towers listed in Sections \\ref{sec:list-odd} and \\ref{sec:list-even}.\n\n\n\\section{Background and notation}\\label{sec:notation}\n\nLet $q$ be an odd prime. 
By \\emph{tower of fields}, or simply a \\emph{tower}, we mean a sequence of field extensions\n\t$$K_1\\subset K_2\\subset \\ldots \\subset K_n\\subset \\ldots.$$\nWe are interested in infinite towers, namely towers such that the degree $[K_n : K_1]$ grows to infinity. All the towers considered in this paper are actually finite, normal and separable, i.e., each extension $K_n\/K_{n - 1}$ is finite, normal and separable, for every $n > 1$. For each positive integer $n$, let $K_n = \\mathrm{GF}(q,1)(x_n)$, where the element $x_n \\in \\mathrm{GF}(q,2^n)$ is given by a recursive formula $f(x_{n - 1},x_n) = 0$, for a polynomial $f(x,y) \\in \\mathrm{GF}(q,1)[x,y]$. In this case, we say that the tower $K_1 \\subset K_2 \\subset \\ldots \\subset K_n \\subset \\ldots$ is defined by $f(x_{n - 1},x_n)$ and we address this kind of towers as \\emph{recursive towers}. We focus on towers defined by $f(x_{n - 1},x_n) = x_n^2 + x_n - v(x_{n - 1})$, for $n \\ge 2$, with $x_1 \\in \\mathrm{GF}(q,2)$, and where $v(x)$ is a polynomial in $\\mathrm{GF}(q,1)[x]$. We denote by $\\delta_n$ the discriminant $\\delta_n = 1 + 4v(x_n)$, for $n \\ge 1$. We point out that both elements $x_n$ and $\\delta_n$ belong to $\\mathrm{GF}(q,2^n)$, but they could also lie in a smaller extension $\\mathrm{GF}(q,2^k)$ for some $k < n$. Given the tower defined by $f(x_{n-1},x_n)$, we denote by $g(x,y) \\in \\mathrm{GF}(q,1)[x,y]$ a polynomial giving the relation between two consecutive discriminants $\\delta_{n-1}$ and $\\delta_{n}$, namely $g(\\delta_{n-1},\\delta_{n})=0$. 
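The starting data of such a recursive tower can be made concrete by brute force in a very small field. The following Python sketch is a toy illustration with q = 3 and the ASSUMED choice v(x) = x (a hypothetical polynomial for illustration, not necessarily one of the towers listed later): it builds GF(9) = GF(3)[i]/(i^2 + 1) by hand and searches for a starting value x_1 in GF(9) \ GF(3) such that both x_1 and delta_1 = 1 + 4 v(x_1) are non-squares.

```python
# Toy illustration: q = 3 and v(x) = x are assumptions for this sketch only.
q = 3

def mul(a, b):
    # (a0 + a1*i)(b0 + b1*i) with i^2 = -1, coefficients taken mod q
    return ((a[0] * b[0] - a[1] * b[1]) % q, (a[0] * b[1] + a[1] * b[0]) % q)

def add(a, b):
    return ((a[0] + b[0]) % q, (a[1] + b[1]) % q)

elements = [(a, b) for a in range(q) for b in range(q)]
squares = {mul(z, z) for z in elements if z != (0, 0)}  # nonzero squares of GF(9)

good = []
for x1 in elements:
    if x1[1] == 0:                # skip elements of the prime field GF(3)
        continue
    delta1 = add((1, 0), x1)      # delta_1 = 1 + 4*x_1, and 4 = 1 in GF(3)
    if x1 not in squares and delta1 not in squares:
        good.append(x1)
```

For this toy choice the list `good` turns out to be nonempty, so a valid starting element exists; the towers actually used in the paper are those listed in Section \ref{sec:list-odd}, where the analogous non-squareness of $x_1$ and $\delta_1$ is what Proposition \ref{general-odd} propagates up the tower.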
In the case of even characteristic (Sections \ref{sec:even} and \ref{sec:list-even}), we deal with towers defined by $f(x_{n-1},x_n) = x_n^3 + x_n + v(x_{n - 1})$, with $x_n \in \mathrm{GF}(2,2 \cdot 3^n)$, for $n \ge 1$, and $v(x)$ being a polynomial in $\mathrm{GF}(2,1)[x]$.\n\nGiven two positive integers $j$ and $n$, such that $j < n$, we denote the norm of the field extension $\mathrm{GF}(q,2^n)\/ \mathrm{GF}(q,2^{n - j})$ by $\mathrm{N}_{n,j} \colon \mathrm{GF}(q,2^n) \to \mathrm{GF}(q,2^{n - j})$. The norm in the odd case is $\mathrm{N}_{n,j}(x) = x^{\prod_{i = 1}^j (q^{2^{n - i}}+1)}$. In order to apply the same techniques to even characteristic, we also denote by $\mathrm{N}_{n,j}\colon\mathrm{GF}(2,2\cdot 3^n)\to \mathrm{GF}(2,2 \cdot 3^{n-j})$ the norm of the extension $\mathrm{GF}(2,2 \cdot 3^n)\/ \mathrm{GF}(2,2 \cdot 3^{n - j})$, namely $\mathrm{N}_{n,j}(x) = x^{\prod_{i = 1}^j (4^{2 \cdot 3^{n-i}} + 4^{3^{n - i}} + 1)}$. For every characteristic, we use the conventions $\mathrm{N}(x):=\mathrm{N}_{n,1}(x)$ and $\mathrm{N}_{n,0}(x)=x$.\n\n\nWe use the following lemma for estimating the order of the elements in finite fields. \n\begin{lem}\label{lem:estimate}\nLet $\ell$ be a prime and let\n$a$, $b$ and $c$ be positive integers such that $b<c$ and $a>\ell^{b+1}$. Then\n$\gcd\left(\sum_{j=1}^{\ell}a^{\ell^b(\ell-j)},\sum_{j=1}^{\ell}a^{\ell^c(\ell-j)}\right)=\ell$. 
In particular $\\frac{1}{\\ell}\\sum_{j=1}^{\\ell}a^{\\ell^b(\\ell-j)}$ and\n$\\frac{1}{\\ell}\\sum_{j=1}^{\\ell}a^{\\ell^c(\\ell-j)}$ are coprime.\n\\end{lem}\n\\begin{proof}\nSee \\cite[Lemmas 1 and 2]{burkhart2009finite}.\n\\end{proof}\nIn order to compute the Galois group of a cubic polynomial in characteristic 2, the following result is useful.\n\\begin{lem}\\label{lem:quadratic_resolvent}\nLet $f(x)=x^3+ax^2+bx+c$ be a separable\nirreducible polynomial\nover a field $K$.\nThe Galois group of the extension given by the roots of $f$ is the alternating group $A_3$ if its quadratic resolvent\n$R(x)=x^2+(ab-3c)x+a^3c+b^3+9c^2-6abc$\nis reducible over $K$ and it is the symmetric group $S_3$ otherwise.\n\\end{lem}\n\\begin{proof}\nSee \\cite[Section 1, pag.53]{kap1972fields}.\n\\end{proof}\n\n\nIn order to prove that a cubic polynomial is irreducible, we also need the following results.\n\\begin{lem}\\label{lem:cubicroots1}\nIf $u\\in \\mathrm{GF}(2,2\\cdot 3^{n})$\nand $c:=u+u^{-1}\\in \\mathrm{GF}(2,2\\cdot 3^{n-1})$, then $u\\in \\mathrm{GF}(2,2\\cdot 3^{n-1})$.\n\\end{lem}\n\\begin{proof}\nIf $u\\notin \\mathrm{GF}(2,2\\cdot 3^{n-1})$, then $x^2+cx+1$ is the minimum polynomial of\n$u$ over $\\mathrm{GF}(2,2\\cdot 3^{n-1})$. So $u\\in \\mathrm{GF}(2,2\\cdot 3^{n})\\cap\\mathrm{GF}(2,4\\cdot 3^{n-1})=\\mathrm{GF}(2,2\\cdot 3^{n-1})$ and we get a contradiction.\n\\end{proof}\n\\begin{lem}\\label{lem:cubicroots2}\nLet $u^3\\in \\mathrm{GF}(2,2\\cdot 3^{n-1})$ be a root of the quadratic polynomial $x^2+tx+1$, with $t\\in \\mathrm{GF}(2,2\\cdot 3^{n-1})$. Then $y:=u+u^{-1}\\in \\mathrm{GF}(2,2\\cdot 3^{n})$ is a root of the cubic polynomial $x^3+x+t$, and furthermore $y\\in \\mathrm{GF}(2,2\\cdot 3^{n-1})$ if and only if $u\\in \\mathrm{GF}(2,2\\cdot 3^{n-1})$.\n\\end{lem}\n\\begin{proof}\nThis is Cardano's formula for solving cubic equations in even characteristic. 
The second statement follows by Lemma \ref{lem:cubicroots1} taking $y=c=u+u^{-1}$.\n\end{proof}\n\n\n\n\n\n\n\section{Towers in odd characteristic}\label{sec:odd}\n\nIn order to find good towers we restrict our search to polynomials $f(x,y)=y^2+y-v(x)$, with $v(x)\in\mathrm{GF}(q,1)[x]$ being a non-zero polynomial, which satisfy Condition \textbf{(1)} below and at least one of the last two conditions:\n\begin{description}\n\t\item[(1)] $\frac{f(x_{n - 1},0)}{x_{n - 1}}$ is a square in $ \mathrm{GF}(q,2^{n-1})$ for $n \geq 2$;\n\t\item[(2)] $\frac{g(\delta_{n - 1},0)}{x_{n - 1}}$ is a square in $\mathrm{GF}(q,2^{n-1})$ for $n \geq 2$;\n\t\item[(2')] $\frac{g(\delta_{n - 1},0)}{\delta_{n - 1}}$ is a square in $\mathrm{GF}(q,2^{n-1})$ for $n \geq 2$.\n\end{description}\n\begin{rem}\label{BurkhartConditionsComparison}\nCondition \textbf{(2')} above is satisfied by other towers of fields in the literature, see for example \cite[Section 4, formula (5)]{burkhart2009finite}. We don't know whether the corresponding tower (see \cite[Section 2, equation (2)]{burkhart2009finite}), which does not satisfy Condition \textbf{(1)} above, satisfies a\nsuitable analog of this condition which ensures\nthat Proposition \ref{general-odd} below holds.\n\end{rem}\n\begin{rem} These conditions are not sufficient for obtaining high order elements from each tower, but, for our particular choices of $f$, they are sufficient to construct a recursive tower defined by $f(x_{n - 1},x_{n})$, as Proposition\n\ref{general-odd} below shows.\n\end{rem}\nThe following key proposition ensures that all the polynomials $f(x_{n - 1}, x_{n})$ listed in Section \ref{sec:list-odd} define infinite towers of fields. In particular, it shows that $[K_n : K_{n - 1}] = 2$, for all $n > 1$. 
The argument of the proof is the corresponding analogue of\n\\cite[Proposition~1]{burkhart2009finite}\nbut it could be applied to many different towers.\n\\begin{prop}\\label{general-odd}\nLet $v(x) \\in \\mathrm{GF}(q,1)[x]$ be a polynomial and assume that $f(x_{n - 1},x_n) = x_n^2 + x_n - v(x_{n - 1})$ satisfies Conditions \\textbf{(1)} \\textit{and} \\textbf{(2)}, or Conditions \\textbf{(1)} \\textit{and} \\textbf{(2')}. If $x_{n - 1}$ and $\\delta_{n - 1}$ are not squares in the multiplicative group $\\mathrm{GF}(q,2^{n - 1})^*$ for a suitable $n \\ge 2$, then $x_j$ and $\\delta_j$ are not squares in the multiplicative group $\\mathrm{GF}(q,2^j)^*$, for $j \\ge n$.\n\\end{prop}\n\\begin{proof}\nThe element $x_{n}$ is not in $\\mathrm{GF}(q,2^{n - 1})$ because $\\delta_{n - 1}$ is not a square in $\\mathrm{GF}(q,2^{n - 1})$, so $f(x_{n - 1},y)$ is the minimal polynomial of $x_n$. We need to ensure that $x_{n}^{(q^{2^n} - 1)\/2} = -1$. As in \\cite[Proposition 1]{burkhart2009finite}, we obtain: \n\\begin{align*}\nx_{n}^{(q^{2^n} - 1)\/2} &= (x_{n}^{q^{2^{n - 1}} + 1})^{(q^{2^{n - 1}} - 1)\/2} = \\mathrm{N}(x_n)^{(q^{2^{n - 1}} - 1)\/2} = \\\\ &= f(x_{n - 1},0)^{(q^{2^{n - 1}} - 1)\/2}= -1,\n\\end{align*}\nwhere $\\mathrm{N}(x_n) = x_n^{q^{2^{n - 1}} + 1} = f(x_{n - 1},0)$ is the norm of $x_n$ over $\\mathrm{GF}(q,2^{n - 1})$ and we use Condition \\textbf{(1)} in last equality\nto show that $f(x_{n - 1},0)$ is not a square in $\\mathrm{GF}(q,2^{n - 1})$ for $n > 1$.\n\nConsider the discriminant $\\delta_n$. Again $g(\\delta_{n - 1},y)$ is the minimal polynomial of $\\delta_n = 1 + 4v(x_n)$. Since, in $\\mathrm{GF}(q,2^{n})$, we know that $\\frac{f(x_n,0)}{x_n}$ is a square by Condition \\textbf{(1)}, $-1$ is a square and $x_n$ is not a square as above, then $v(x_n) = -f(x_n,0)$ is not a square\nin $\\mathrm{GF}(q,2^{n })$. \nHence, $\\delta_n \\notin \\mathrm{GF}(q,2^{n - 1})$. 
The same computation as above yields:\n\begin{align*}\n\delta_{n}^{(q^{2^n} - 1)\/2} &= (\delta_{n}^{q^{2^{n - 1}} + 1})^{(q^{2^{n-1}}-1)\/2} = \mathrm{N}(\delta_n)^{(q^{2^{n - 1}} - 1)\/2}= \\ \n&= g(\delta_{n - 1},0)^{(q^{2^{n - 1}} -1)\/2} = -1,\n\end{align*}\nwhere we use Condition \textbf{(2)}, respectively \textbf{(2')}, in the last equality to show that $g(\delta_{n - 1},0)$ is not a square in $\mathrm{GF}(q,2^{n - 1})$, because $x_{n - 1}$ (respectively $\delta_{n - 1}$) is a non-square by hypothesis. It follows that $x_{n}$ and $\delta_{n}$ are non-squares in $\mathrm{GF}(q,2^n)$. Repeating the same argument, we find that $x_{j}$ and $\delta_{j}$ are not squares in $\mathrm{GF}(q,2^{j})$, for all $j > n$, which completes the proof.\n\end{proof}\n\nThe importance of this proposition is evident if we consider Corollary~\ref{bound-odd} below, which is an analogue of \cite[Proposition 2]{burkhart2009finite}. We first state the following property of the norm that is used in the proof of the corollary.\n\begin{lem}\label{lem:factor_odd} Let $n\geq 2$ and $j < n$. Then\n\[\n\mathrm{N}_{n,j}(x_n)=x_{n - j} \prod_{k=1}^{j}\mathrm{N}_{n-k,j-k}\left(\frac{\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\right).\n\]\n\end{lem}\n\begin{proof}\nThe formula follows by induction on $j$, writing $\mathrm{N}_{n,j}=\mathrm{N}_{n-1,j-1}\circ\mathrm{N}_{n,1}$, factoring $\mathrm{N}_{n,1}(x_n)=x_{n-1}\cdot\frac{\mathrm{N}_{n,1}(x_n)}{x_{n-1}}$ and using the multiplicativity of the norm.\n\end{proof}\n\begin{cor}\label{bound-odd}\nUnder the hypotheses of Proposition \ref{general-odd}, the multiplicative order of $x_n$ in $\mathrm{GF}(q,2^n)^*$ is at least $2^{\frac{n(n + 1)}{2} + n + \mathrm{ord}_2(q - 1) - 2}$, provided that $x_n^2 \notin \mathrm{GF}(q,2^{n - 1})$ for all $n > 1$. The same lower bound also holds for the order of $\delta_n$ if $\delta_n^2 \notin \mathrm{GF}(q,2^{n - 1})$ for all $n > 1$.\n\end{cor}\n\begin{proof}\nWe know that $x_n\not\in \mathrm{GF}(q,2^{n-1})$ by Proposition \ref{general-odd}, therefore $x_n^2=-x_n+v(x_{n-1})\not\in \mathrm{GF}(q,2^{n-1})$ for all $n>1$. We show that the order of $x_n$ has a common factor with the odd number $\frac{q^{2^{n - j}} + 1}{2}$ by proving that $x_n^{\frac{2(q^{2^n} - 1)}{q^{2^{n - j}} + 1}} \ne 1$, for $j = 1,2,\ldots,n-1$. \nFor $j=1$, we have\n\[\nx_n^{\frac{2(q^{2^n} - 1)}{q^{2^{n - 1}} + 1}}=x_n^{2(q^{2^{n-1}}-1)} \ne 1,\n\]\nsince $x_n^2\not\in \mathrm{GF}(q,2^{n - 1})$, as we have just seen. 
For $j\\geq 2$, we get\n\\begin{align*}\nx_n^{\\frac{2(q^{2^n} - 1)}{q^{2^{n - j}} + 1}} &= \\left(x_n^{\\prod_{k = 1}^{j - 1}( q^{2^{n - k}}+1)}\\right)^{2(q^{2^{n - j}} - 1)}= \\mathrm{N}_{n,j - 1}(x_n)^{2(q^{2^{n - j}} - 1)}\n\\end{align*}\nand the last member above is 1 only if $\\mathrm{N}_{n,j - 1}(x_n)^2\\in\\mathrm{GF}(q,2^{n-j})$. We show that this is not possible. Consider $\\mathrm{N}_{n,j }(x_n)=\\mathrm{N}_{n-j+1,1 }(\\mathrm{N}_{n,j-1 }(x_n))$. \nIf $\\mathrm{N}_{n,j - 1}(x_n)^2\\in\\mathrm{GF}(q,2^{n-j})$, then either $\\mathrm{N}_{n,j }(x_n)=\\mathrm{N}_{n,j - 1}(x_n)^2$ or $\\mathrm{N}_{n,j }(x_n)=\\mathrm{N}_{n,j - 1}(x_n)$. The latter equality is not possible since $\\mathrm{N}_{n,j-1 }(x_n)$ is not a square in \n$\\mathrm{GF}(q,2^{n-j+1})$ by Lemma \\ref{lem:factor_odd} but \n$\\mathrm{N}_{n,j }(x_n)\\in \\mathrm{GF}(q,2^{n-j})$ is a square in \n$\\mathrm{GF}(q,2^{n-j+1})$. The former equality, by Lemma \\ref{lem:factor_odd}, gives:\n\\allowdisplaybreaks\n\\begin{align*}\n1&=\\frac{x_{n - j} \\prod_{k=1}^{j}\\mathrm{N}_{n-k,j-k}\\left(\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)}{x_{n - j+1}^2 \\prod_{k=1}^{j-1}\\left(\\mathrm{N}_{n-k,j-k-1}\\left(\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)\\right)^2}=\\\\\n&=\\frac{x_{n - j}\\frac{\\mathrm{N}_{n-j+1,1}(x_{n-j+1})}{x_{n - j}} \\prod_{k=1}^{j-1}\\mathrm{N}_{n-k,j-k}\\left(\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)}{x_{n - j+1}^2 \\prod_{k=1}^{j-1}\\left(\\mathrm{N}_{n-k,j-k-1}\\left(\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)\\right)^2}=\\\\\n&=\\!\\frac{\\mathrm{N}_{n-j+1,1}(x_{n-j+1}) \\!\\prod_{k=1}^{j-1}\\!\\mathrm{N}_{n-j+1,1}\\!\\left(\\!\\mathrm{N}_{n-k,j-k-1}\\!\\left(\\!\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)\\!\\right)}{x_{n - j+1}^2 \\prod_{k=1}^{j-1}\\left(\\mathrm{N}_{n-k,j-k-1}\\left(\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)\\right)^2}\\!=\\\\\n&=\\frac{(x_{n-j+1})^{q^{2^{n-j}}+1} 
\\prod_{k=1}^{j-1}\\left(\\mathrm{N}_{n-k,j-k-1}\\left(\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)\\right)^{q^{2^{n-j}}+1}}{x_{n - j+1}^2 \\prod_{k=1}^{j-1}\\left(\\mathrm{N}_{n-k,j-k-1}\\left(\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)\\right)^2}=\\\\\n& =x_{n - j+1}^{q^{2^{n-j}}-1} \\prod_{k=1}^{j-1}\\left(\\mathrm{N}_{n-k,j-k-1}\\left(\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)\\right)^{q^{2^{n-j}}-1}.\n\\end{align*}\n\\allowdisplaybreaks[0]\nSince the last term is 1, then\n$$x_{n-j+1} \\prod_{k=1}^{j-1}\\mathrm{N}_{n-k,j-k-1} \\left(\\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right) \\in\\mathrm{GF}(q,2^{n-j}),$$\nbut this is impossible because $x_{n-j+1}$ is a non-square in $\\mathrm{GF}(q,2^{n-j+1})$, by Proposition \\ref{general-odd}, while \n$$\\mathrm{N}_{n-k,j-k-1}\\left( \\frac{\\mathrm{N}_{n-k+1,1}(x_{n-k+1})}{x_{n - k}}\\right)=\\mathrm{N}_{n-k,j-k-1}\\left( \\frac{f(x_{n-k},0)}{x_{n - k}}\\right)$$ \nis a square in $\\mathrm{GF}(q,2^{n-j+1})$, for each $k < j$. This contradiction proves that $x_n^{\\frac{2(q^{2^n} - 1)}{q^{2^{n - j}} + 1}} \\ne 1$ also for $j \\geq 2$. This ensures, by Lemma \\ref{lem:estimate} with $a=q$, $b=n-j$ and $\\ell=2$, the existence of a prime divisor $p_j$ of the order of $x_n$ with $p_j>2^{n - j + 1}$, for every $j = 1,2,\\ldots,n-1$. Hence, the order is bounded below by $$2^{\\frac{n(n + 1)}{2} - 1}=\\prod_{j = 1}^{n - 1} 2^{n - j + 1} <\\prod_{j = 1}^{n - 1}p_j .$$\n\nThe remaining term $2^{n + \\mathrm{ord}_2(q - 1) - 1}$ follows as in \\cite[Proposition 2]{burkhart2009finite}. \nBy repeated application of the difference-of-squares formula, we get:\n$$\\mathrm{ord}_2 \\left(\\frac{q^{2^n} - 1}{2}\\right) = \\sum_{j = 0}^{n - 1} \\mathrm{ord}_2(q^{2^j} + 1) + \\mathrm{ord}_2(q - 1) - 1 = n + \\mathrm{ord}_2(q - 1) - 1,$$\nfor all $n\\geq 1$. It follows that $2^{n + \\mathrm{ord}_2(q - 1) - 1}$ divides the order of $x_n$ because $x_n^{\\frac{q^{2^n} - 1}{2}} = -1$ by Proposition \\ref{general-odd}. The proof for $\\delta_n$ is similar. 
\n\\end{proof}\n\n\n\n\n\n\\section{Towers in even characteristic}\\label{sec:even}\n\n\nThe even-characteristic analogue of Conditions \\textbf{(1)} and \\textbf{(2)} of the odd case, for polynomials $f(x,y)=y^3+y+v(x)$ with $v(x)\\in \\mathrm{GF}(2,1)[x]$, is:\n\\begin{description}\n\\item[(3)] There exists an integer $e\\geq 0$ such that $f(x_{n-1},0)=x_{n-1}^{2^e}$ for all $n\\geq 2$.\n\\end{description}\nThis means that we can restrict our study to polynomials of the form $f(x,y)=y^3+y+x^{2^e}$, with $e\\geq 0$, and deduce results similar to those of the previous section. In Section \\ref{sec:list-even}, we find some cases where the towers defined by polynomials $f(x_{n - 1},x_n)$ are infinite and Galois. \nThis is achieved by finding a suitable initial element $x_1\\in \\mathrm{GF}(2,6)$.\nUnder these hypotheses we have an analogue of Proposition \\ref{general-odd}.\n\\begin{prop}\\label{general-even}\nConsider an infinite normal tower defined by $f(x_{n - 1},x_n) = x_n^3 + x_n + x_{n - 1}^{2^e}$ for a certain $e\\geq 0$, for all $n>1$.\nLet $p$ be a prime divisor of $|\\mathrm{GF}(2,2\\cdot 3^{n - 1})^*|$, for a suitable $n>1$, and assume that $x_{n - 1}$ is not a $p$-th power in the multiplicative group $\\mathrm{GF}(2,2\\cdot 3^{n - 1})^*$. Then $x_j$ is not a $p$-th power in the multiplicative group $\\mathrm{GF}(2,2\\cdot 3^{j})^*$, for $j \\geq n$. \n\\end{prop}\n\\begin{proof}\nBy assumption $f(x_{n-1},y)$ is irreducible, so $x_{n}\\not\\in\\mathrm{GF}(2,2 \\cdot 3^{n - 1})$ and $f(x_{n-1},y)$ is the minimal polynomial of $x_{n}$. We need to check that $x_{n}^{(4^{3^n}-1)\/p}\\not =1$. 
As in the proof of Proposition~\\ref{general-odd}, we obtain: \n\\begin{align*}\nx_{n}^{(4^{3^n} - 1)\/p} &= (x_{n}^{4^{2 \\cdot 3^{n - 1}} + 4^{3^{n - 1}} + 1})^{(4^{3^{n - 1}} - 1)\/p}= \\\\ \n&= \\mathrm{N}(x_n)^{(4^{3^{n - 1}} - 1)\/p} = f(x_{n - 1},0)^{(4^{3^{n - 1}} - 1)\/p},\n\\end{align*}\nwhere $\\mathrm{N}(x_n) = x_{n}^{4^{2 \\cdot 3^{n - 1}} + 4^{3^{n - 1}} + 1} = f(x_{n - 1},0)$ is the norm of $x_n$ over $\\mathrm{GF}(2,2\\cdot 3^{n-1})$. The last term is not equal to $1$ because $x_{n - 1}$ is not a $p$-th power in $\\mathrm{GF}(2,2 \\cdot 3^{n - 1})$, hence, by Condition \\textbf{(3)}, $f(x_{n - 1},0)$ is not a $p$-th power either. Iterating the argument gives the claim for all $j \\geq n$.\n\\end{proof}\nThe analogue of Lemma \\ref{lem:factor_odd} in even characteristic is the following: \n\\begin{lem}\\label{lem:factor_even} Let $e\\geq 0$, $n\\geq 2$ and $j < n$. Then $\\mathrm{N}_{n,j}(x_n)=x_{n-j}^{2^{ej}}$.\n\\end{lem}\n\\begin{cor}\\label{bound-even}\nConsider an infinite normal tower defined by $f(x_{n - 1},x_n) = x_n^3 + x_n + x_{n - 1}^{2^e}$, for a certain $e\\geq 0$, for all $n>1$. If $x_1$ is not a cube in $\\mathrm{GF}(2,6)$, then $x_n^3 \\notin \\mathrm{GF}(2,2 \\cdot 3^{n - 1})$ for all $n\\geq 2$ and the order of $x_n$ in the tower defined by $f(x_{n - 1},x_n)$ is greater than $$3^{\\frac{1}{2}(n^2 + 3n)-1}.$$\n\\end{cor}\n\\begin{proof}\nThe proof is similar to the proofs of Corollary \\ref{bound-odd} and \\cite[Proposition 4]{burkhart2009finite}. \nWe know that $x_n\\not\\in \\mathrm{GF}(2,2\\cdot 3^{n - 1})$ by Proposition~\\ref{general-even}, therefore $x^3_n=x_n+v(x_{n-1})\\not \\in \\mathrm{GF}(2,2\\cdot 3^{n - 1})$.\nIt follows that $\\left(x^3_n\\right)^{2^e}$ does not belong to $\\mathrm{GF}(2,2\\cdot 3^{n - 1})$.\nIn order to show that the order of $x_n$ has a common factor with $\\textstyle \\frac{1}{3}(4^{2 \\cdot 3^{n - j}} + 4^{3^{n - j}} + 1)$, we show that $x_n^\\frac{3(4^{3^n} - 1)}{4^{2 \\cdot 3^{n - j}} + 4^{3^{n - j}} + 1} \\ne 1$, for $j = 1,2,\\ldots,n-1$. 
\nWe have:\n\\begin{align*}\nx_n^\\frac{3(4^{3^n} - 1)}{4^{2 \\cdot 3^{n - j}} + 4^{3^{n - j}} + 1} &= x_n^{\\frac{4^{3^n} - 1}{4^{3^{n - j}} - 1} \\cdot \\frac{3(4^{3^{n - j}} - 1)}{4^{2 \\cdot 3^{n - j}} + 4^{3^{n - j}} + 1}}= x_n^{\\frac{3(4^{3^n} - 1)(4^{3^{n - j}} - 1)}{4^{3^{n - j + 1}} - 1}} = \\\\\n&=x_n^{3(4^{3^{n - j}} - 1)\\prod_{i = 1}^{j - 1} (4^{2 \\cdot 3^{n - i}} + 4^{3^{n - i}} + 1)}=\n\\mathrm{N}_{n,j - 1}(x_n)^{3(4^{3^{n - j}} - 1)}.\n\\end{align*}\nBy Lemma \\ref{lem:factor_even} we have that $\\mathrm{N}_{n,j}(x_n)=x_{n-j}^{2^{ej}}$, for $j = 1,2,\\ldots,n-1$. But $\\left(x_{n - j + 1}^{2^{e(j-1)}}\\right)^3$ does not belong to $\\mathrm{GF}(2,2 \\cdot 3^{n - j})$ for all $j \\geq 1$. \nIt follows that $\\mathrm{N}_{n,j - 1}(x_n)^{3(4^{3^{n - j}} - 1)}$ cannot be equal to $1$. This ensures, by Lemma \\ref{lem:estimate} with $a=4$, $b=n-j$ and $\\ell=3$, the existence of a lower bound on the order of $x_n$, namely $p_j>3^{n - j + 1}$, for every $j = 1,2,\\ldots,n-1$. Hence, we get a lower bound for the order of $x_n$, which is $3^{\\frac{n(n + 1)}{2}-1}=\\prod_{j = 1}^{n - 1} 3^{n - j + 1}<\\prod_{j = 1}^{n - 1}p_j$.\n\nThe remaining term $3^{n}$ follows by the computation of the power of 3 dividing the order of $x_n$. By repeated application of the difference-of-cubes formula, we have:\n$$\\mathrm{ord}_3 \\left(\\frac{4^{3^n} - 1}{3}\\right) = \\sum_{j = 0}^{n - 1} \\mathrm{ord}_3(4^{2 \\cdot 3^j} + 4^{3^j} + 1) + \\mathrm{ord}_3(4 - 1) - 1 = n,$$\nfor all $n \\geq 1$. This term divides the order of $x_n$, since $x_n^{\\frac{4^{3^n} - 1}{3}} \\neq 1$, by Proposition~\\ref{general-even}.\n\\end{proof}\n\n\n\n\n\n\\section{Examples of good towers in odd characteristic}\\label{sec:list-odd}\n\nIn this section we find high order elements in $\\mathrm{GF}(q,2^n)$, for odd $q$, using five good towers. Throughout, we denote by $\\varepsilon$ the element $4^{-1}$ of $\\mathrm{GF}(q,1)$. 
We consider the polynomials $f_i(x_{n - 1}, x_n) := x_n^2 + x_n - v_i(x_{n - 1})$, for $i \\in\\{1, 2,\\ldots, 5\\}$,\nwhere $v_i(x)$ is a polynomial chosen as follows:\n\\begin{enumerate}\n \\item $v_1(x) := \\varepsilon x;$\n \\item $v_2(x) := 4x{(x + 3\\varepsilon)}^2;$\n \\item $v_3(x) := 2\\varepsilon x;$\n \\item $v_4(x) := 8x{(2x + 3\\varepsilon)}^2;$\n \\item $v_5(x) := 8x{(x + 3\\varepsilon)}^2.$\n\\end{enumerate}\n\\begin{rem}\\label{rem:conditions}\nCondition \\textbf{(1)} holds for all the previous polynomials and the relation between two consecutive discriminants is given respectively by:\n\\begin{align*}\ng_1(\\delta_{n - 1}, \\delta_n) &= \\delta_n^2 - \\delta_n- \\varepsilon \\delta_{n - 1} + \\varepsilon; \\\\\ng_2(\\delta_{n - 1}, \\delta_n) &= \\delta_n^2 - \\delta_n -4 \\delta_{n - 1}^3 + 6\\delta_{n - 1}^2 - 9\\varepsilon\\delta_{n - 1} + \\varepsilon; \\\\\ng_3(\\delta_{n - 1}, \\delta_n) &= \\delta_n^2 - \\delta_{n - 1};\n\\\\g_4(\\delta_{n - 1}, \\delta_n) &= \\delta_n^2 + 48\\delta_{n - 1}\\delta_n - 256\\delta_{n - 1}^3 + 288\\delta_{n-1}^2 - 81\\delta_{n - 1}; \\\\\ng_5(\\delta_{n-1}, \\delta_n) &= \\delta_n^2 - 16\\delta_{n - 1}^3 + 24\\delta_{n - 1}^2 - 9\\delta_{n - 1}.\n\\end{align*}\nThe first two towers satisfy Condition \\textbf{(2)}. In fact, writing $\\delta_{n-1} = 1 + 4v_i(x_{n-1})$, we get\n\\begin{align*}\ng_1(\\delta_{n-1},0)&=-\\varepsilon (1+x_{n-1})+\\varepsilon=-\\varepsilon x_{n-1};\\\\\ng_2(\\delta_{n-1},0)&=-4x_{n-1}(x_{n-1}+3\\varepsilon)^2\\left(64x_{n-1}(x_{n-1}+3\\varepsilon)^2+3\\right)^2,\n\\end{align*}\nand $-\\varepsilon$ and $-4$ are squares in $\\mathrm{GF}(q,2^{n-1})$ for $n \\geq 2$, since every element of $\\mathrm{GF}(q,1)$ is a square in $\\mathrm{GF}(q,2)$. Similarly, the last three towers satisfy Condition \\textbf{(2')}. In fact,\n\\begin{align*}\ng_3(\\delta_n, 0) &= -\\delta_{n};\\\\\ng_4(\\delta_n, 0) &= -256\\delta_n(\\delta_n - 9 \\varepsilon^2)^2; \\\\\ng_5(\\delta_n, 0) &= -16\\delta_n(\\delta_n - 3\\varepsilon)^2.\n\\end{align*}\nHence, Proposition \\ref{general-odd} applies to $f_i(x_{n - 1}, x_n)$, for $i \\in\\{ 1, 2, \\ldots, 5\\}$, once we have suitable starting points. 
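These relations can also be sanity-checked numerically. The following sketch (our own illustration in Python, independent of the proofs above and of the MAGMA experiments of Section \ref{sec:numres}) works in the prime field $\mathrm{GF}(11)$, encodes the discriminant as $\delta(x) = 1 + 4v_i(x)$, and verifies that every pair of consecutive discriminants satisfies $g_i(\delta_{n-1},\delta_n) = 0$:

```python
# Sanity check (not part of the proofs) of the discriminant relations
# g_i(delta_{n-1}, delta_n) = 0 over the prime field GF(11).
# delta(x) = 1 + 4*v_i(x) is the discriminant of y^2 + y - v_i(x),
# and x_n ranges over the roots of y^2 + y - v_i(x_{n-1}) in GF(11).

p = 11
eps = pow(4, -1, p)  # epsilon = 4^{-1} in GF(p)

v = [
    lambda x: eps * x % p,                         # v_1
    lambda x: 4 * x * (x + 3 * eps) ** 2 % p,      # v_2
    lambda x: 2 * eps * x % p,                     # v_3
    lambda x: 8 * x * (2 * x + 3 * eps) ** 2 % p,  # v_4
    lambda x: 8 * x * (x + 3 * eps) ** 2 % p,      # v_5
]

g = [
    lambda a, b: (b * b - b - eps * a + eps) % p,
    lambda a, b: (b * b - b - 4 * a**3 + 6 * a * a - 9 * eps * a + eps) % p,
    lambda a, b: (b * b - a) % p,
    lambda a, b: (b * b + 48 * a * b - 256 * a**3 + 288 * a * a - 81 * a) % p,
    lambda a, b: (b * b - 16 * a**3 + 24 * a * a - 9 * a) % p,
]

def check(i):
    """True if g_i vanishes on every consecutive pair of discriminants."""
    for x in range(p):
        d_prev = (1 + 4 * v[i](x)) % p
        for y in range(p):
            if (y * y + y - v[i](x)) % p == 0:  # y is a root of f_i(x, .)
                d_next = (1 + 4 * v[i](y)) % p
                if g[i](d_prev, d_next) != 0:
                    return False
    return True

print(all(check(i) for i in range(5)))  # prints True
```

Each relation holds for every $x_{n-1}$ and every root $x_n$ of $f_i(x_{n-1},\cdot)$ available in $\mathrm{GF}(11)$; the same check can be repeated over any odd prime field.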
\n\\end{rem}\nThe next two lemmas ensure the existence of a non-square $x_1$ such that $\\delta_1$ is a non-square in $\\mathrm{GF}(q,2)$ as well. This is the analogue of \\cite[Lemma 3]{burkhart2009finite}, but here we also need \\textit{both} $x_1$ and $\\delta_1$ to be non-squares. This requires more effort, especially for the last tower $f_5(x_{n-1},x_n)$ below, but, in return, this also gives a lower bound for the order of $\\delta_n$. \n\nThe proof of the first lemma relies mainly on elementary combinatorial arguments. \n\\begin{lem}\\label{lem:nonsqnear}\nLet $c\\in \\mathrm{GF}(q,1)$ be a non-zero element. There is at least one non-square $x_1 \\in \\mathrm{GF}(q,2)$ such that $x_1 + c$ is a non-square as well.\n\\end{lem}\n\\begin{proof}\nConsider the action $\\rho$ of $\\mathrm{GF}(q,1)$ on $\\mathrm{GF}(q,2)$ as an additive group, namely $\\rho_g(x) = x + g$, for $g \\in \\mathrm{GF}(q,1)$ and $x \\in \\mathrm{GF}(q,2)$. Then, $\\mathrm{GF}(q,2)$ is partitioned into $q$ orbits. There are exactly $\\textstyle\\frac{1}{2}(q^2 + 1)$ squares in $\\mathrm{GF}(q,2)$. Among these are all the elements of the orbit $\\mathrm{GF}(q,1)$. It follows that there are exactly $\\textstyle\\frac{1}{2}(q^2 - 2q + 1)$ square elements in the remaining $q - 1$ orbits. Hence, there is at least one orbit with at most $\\textstyle\\frac{1}{2}(q - 1)$ square elements and at least $\\textstyle\\frac{1}{2}(q + 1)$ non-square elements. We denote this orbit by $S$. Since more than half of the elements of $S$ are non-squares, the repeated action of $\\rho_c$ on $S$ must reach two consecutive non-squares, namely $a$ and $\\rho_c(a) = a + c$. The lemma follows by choosing $x_1 = a$.\n\\end{proof}\n\\begin{exa}\\label{exa1}\nConsider $q = 3$ and $c=1$. Denote by $z$ a generator of $\\mathrm{GF}(3,2)^*$ satisfying $z^2 = z + 1$. There are exactly $5$ squares in $\\mathrm{GF}(3,2)$, but $3$ of them are in the same orbit $\\mathrm{GF}(3,1)$. The remaining ones are $z^2 = z + 1$ and $z^6 = 2z + 2$. 
One can check that they belong to the orbits $S_1 = (z; \\quad z + 1 = z^2; \\quad z + 2 = z^7)$ and $S_2 = (2z = z^5; \\quad 2z + 1 = z^3; \\quad 2z + 2 = z^6)$. As $x_1$ we can choose the element $2z$ or $z + 2$. They are both roots of the polynomial $x^2 = 2x + 1$, so we use this polynomial for $q = 3$ in Table \\ref{tab:res1} in Section~\\ref{sec:numres}.\n\\end{exa}\n\nIn order to show the existence of a suitable initial element $x_1$ for the tower defined by $f_5(x_{n-1},x_n)$ we prove the following lemma. \n\\begin{lem}\\label{lem:nonsqnear5}Let $q$ be an odd prime and let $p(x)$ be a cubic polynomial in $\\mathrm{GF}(q,1)[x]$ without multiple roots, such that $p(0)\\not = 0$. Then:\n\\begin{enumerate}[i)]\\item\\label{nonsqnear5_C1} The curve $C_1: y^2 = p(x)$ has at most $q^2 + 2q$ affine $\\mathrm{GF}(q,2)$-rational points and the curve $C_2: y^2 = p(x^2)$ has at least $q^2 - 4q - 1$ affine $\\mathrm{GF}(q,2)$-rational points.\\item\\label{nonsqnear5_exists} If $q\\geq 11$, then there is at least a non-square $x_1\\in\\mathrm{GF}(q,2)$ such that $p(x_1)$ is a non-square in $\\mathrm{GF}(q,2)$ as well.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof} \\ref{nonsqnear5_C1}) We observe that $p(x^2)$ is square-free since $p(x)$ is square-free and $p(0)\\ne 0$ by hypothesis. The first statement follows by Weil bound $|N-(q^2 + 1)| \\leq 2gq$, for every smooth projective curve of genus $g$ with $N$ points over $\\mathrm{GF}(q,2)$, since $C_1$ is an elliptic curve and $C_2$ has genus at most 2, see \\cite[Propositions 6.1.3 (a) and 6.2.3 (b)]{stichtenoth2009functionfields}. It is well known that the number of points at infinity is $1$ in an elliptic curve and it is at most $2$ in a genus $2$ curve. Hence, \\ref{nonsqnear5_C1}) is proved.\n\n\\ref{nonsqnear5_exists}) By contradiction we assume that $p(\\alpha)$ is a square for all non-square $\\alpha \\in \\mathrm{GF}(q,2)$. Let $\\beta\\in \\mathrm{GF}(q,2)$ be a square root of $p(\\alpha)$. 
Since there are exactly $\\textstyle\\frac{1}{2}(q^2 - 1)$ non-squares in $\\mathrm{GF}(q,2)$ and $\\beta \\neq 0$, except at most for $3$ choices of $\\alpha$, then the pairs $(\\alpha,\\beta)$ and $(\\alpha,-\\beta)$ produce at least $q^2 - 4$ distinct points of $C_1$. We show that such points are too many. We estimate the number of squares $\\alpha$ such that $p(\\alpha)$ is also a square in $\\mathrm{GF}(q,2)$. Each point $(t,y)$ in $C_2$ corresponds to the point $(x,y)$ in $C_1$ with $x = t^2$. This correspondence is not $1-1$ because, when $t\\ne 0$, the point $(-t,y)$ determines the same point in $C_1$. Let $N$ be the number of affine $\\mathrm{GF}(q,2)$-rational points of $C_2$, then $C_1$ must have more than $\\textstyle\\frac{N}{2}$ affine $\\mathrm{GF}(q,2)$-rational points $(x,y)$ with $x$ being a square in $\\mathrm{GF}(q,2)$. By Part \\ref{nonsqnear5_C1}), we have $N\\ge q^2 - 4q -1$. Counting the points of $C_1$ we get, again by Part \\ref{nonsqnear5_C1}), $\\textstyle q^2 - 4 +\\frac{1}{2}(q^2 - 4q -1) \\le q^2 + 2q$ which yields, after a straightforward computation, $q^2 - 8q - 9 \\le 0$. It follows that $q \\le 9$, which is contrary to our assumption on $q$. Hence, there is at least one non-square $x_1 \\in \\mathrm{GF}(q,2)$ such that $p(x_1)$ is a non-square too.\n\\end{proof}\n\\begin{rem}\\label{rem5}For suitable polynomials $p(x)$, Part \\ref{nonsqnear5_exists}) of the previous lemma also holds for odd primes $q < 11$. In the sequel, we are interested in \\begin{align*}p(x)\\! &= \\!1 + 4v_5(x) \\!=\\! 1 + 32x(x + 3\\varepsilon)^2\\!=\\! 32\\!\\left(x + \\frac{1}{2}\\right)\\!\\!\\left(x^2 + x + \\frac{1}{16}\\right)\\!= \\\\ &= 32\\left(x +\\frac{1}{2}\\right)\\left(x + \\frac{1}{2} - a\\right)\\left(x + \\frac{1}{2} + a\\right),\n\\end{align*} \nwhere $a$ is a square root of $\\textstyle\\frac{3}{16}$ in $\\mathrm{GF}(q,2)$. 
We are interested in this polynomial because $\\delta_1=p(x_1)$, for $f_5$, so we need that both $x_1$ and $\\delta_1$ are non-squares. It follows that $p(x)$ is square-free for $q = 5$ and $q = 7$. For $q = 5$, if we choose $x_1$ to be a root of the polynomial $x^2 + 4x + 2$, then $p(x_1) = x_1^5$. Hence, both $x_1$ and $p(x_1)$ are non-squares in $\\mathrm{GF}(5,2)$. \nSimilarly, for $q = 7$, if we choose $x_1$ as a root of $x^2 + 5x + 5$, then both $x_1$ and $p(x_1)$ are non-squares in $\\mathrm{GF}(7,2)$. \nFinally, for $q = 3$, we have that $p(x)$ has multiple roots, but Part \\ref{nonsqnear5_exists}) of Lemma \\ref{lem:nonsqnear5} still holds. In fact, if we choose $x_1$ as a root of the polynomial $x^2 + 2x + 2$, as in Example \\ref{exa1}, then $p(x_1) = x_1$, hence $p(x_1)$ is a non-square as well. See Remark \\ref{rem_f4=f5} for further explanation. We use these examples in Table \\ref{tab:res5} in Section~\\ref{sec:numres}.\n\\end{rem}\nThe following corollary ensures the existence of towers defined by $f_i(x_{n - 1},x_n)$ generating high order elements, for $i \\in\\{ 1,2,\\ldots,5\\}$.\n\\begin{cor}\\label{bound1-4}\nThe polynomials $f_i(x_{n - 1},x_n)$, for $i \\in\\{1,2,\\ldots,5\\}$, define infinite towers of fields. Moreover, for a suitable choice of $x_1$, the order of $x_n$ in $\\mathrm{GF}(q,2^n)$ is greater than $2^{\\frac{1}{2}(n^2 + 3n) + \\mathrm{ord}_2(q - 1) - 2}$. The same bound holds for $\\delta_n$ in the towers defined by $f_1(x_{n - 1},x_n)$ and $f_2(x_{n - 1},x_n)$ and, when $q > 3$, for $\\delta_n$ in the tower defined by $f_4(x_{n - 1},x_n)$.\n\\end{cor}\n\\begin{proof}\nFirst, for each tower considered, we show the existence of a non-square starting point $x_1$ such that the discriminant $\\delta_1$ is a non-square as well. A straightforward computation shows that $\\delta_1 = x_1 + 1$ for $f_1$ and that $\\delta_1 = 16x_1^3 + 24x_1^2 + 9x_1 + 1 = (x_1 + 1)(4x_1 + 1)^2$ for $f_2$. 
Hence, for the first two polynomials, it is enough to choose $x_1$ as in Lemma \\ref{lem:nonsqnear} with $c=1$. A straightforward computation also shows that $\\textstyle \\delta_1 = 2\\left(x_1 + \\frac{1}{2}\\right)$ for $f_3$ and that $\\textstyle \\delta_1 = 128\\left(x_1 + \\frac{1}{2}\\right)(x_1 + 2\\varepsilon^2)^2$ for $f_4$. Hence, for the third and the fourth polynomial, it is enough to choose $x_1$ as in Lemma \\ref{lem:nonsqnear} with $\\textstyle c=\\frac{1}{2}$. For the last tower, we can take $x_1$ as in Remark~\\ref{rem5} for $q \\leq 7$ and as in Lemma~\\ref{lem:nonsqnear5} for $q \\geq 11$.\n\nNow, we know, by Remark \\ref{rem:conditions}, that all the considered towers satisfy Conditions \\textbf{(1)} and \\textbf{(2)}, or Conditions \\textbf{(1)} and \\textbf{(2')}. Therefore, the result for $x_n$ follows by Corollary \\ref{bound-odd}. For $\\delta_n$ we have to check that $\\delta_n^2 \\notin \\mathrm{GF}(q,2^{n-1})$ for $n > 1$, in the towers defined by $f_1(x_{n-1},x_n)$ and $f_2(x_{n-1},x_n)$, for $q\\geq 3$, and by $f_4(x_{n-1},x_n)$ for $q > 3$. This follows from the expressions of $g_1(\\delta_{n - 1},\\delta_{n})$, $g_2(\\delta_{n - 1},\\delta_{n})$ and $g_4(\\delta_{n - 1},\\delta_{n})$ in Remark~\\ref{rem:conditions}.\n\\end{proof}\nAs in \\cite{burkhart2009finite}, the bound of the previous corollary does not seem to be sharp; in fact, in many cases we were able to construct generators of the multiplicative group $\\mathrm{GF}(q,2^n)^*$, whose order is $q^{2^n} - 1$, which is much higher than $2^{\\frac{n^2}{2}}$. The interested reader can compare the tables in Section \\ref{sec:numres} with the experimental results of \\cite{burkhart2009finite}.\n\\begin{rem}\\label{rem_f4=f5} \nThe bound in Corollary \\ref{bound1-4} above does not hold for $\\delta_n$ in the towers defined by $f_3(x_{n - 1}, x_n)$ and $f_5(x_{n - 1}, x_n)$. 
\nIn fact, $\\delta_n^2 \\in \\mathrm{GF}(q,2^{n - 1})$, for all $n > 1$, which can be verified easily. The interested reader can see the numerical results of Tables \\ref{tab:res3} and \\ref{tab:res5} in Section \\ref{sec:numres}.\nA careful comparison between these two tables reveals an interesting difference when $q > 3$. In fact, the order of the discriminant $\\delta_n$ turns out to grow very slowly in Table \\ref{tab:res3} in comparison to Table \\ref{tab:res5}. The reason is that in the former tower the discriminants satisfy the relation\n$g_3(\\delta_{n - 1}, \\delta_n) = \\delta_n^2 - \\delta_{n - 1} = 0,$\nwhich yields $\\delta_n^{2^{n - 1}} = \\delta_{n - 1}^{2^{n - 2}} = \\ldots = \\delta_{1} \\in \\mathrm{GF}(q,2).$ This implies that we can estimate the order of $\\delta_n$, which turns out to be lower than $2^{n - 1 + \\mathrm{ord}_2(q^2 - 1)}$. In the tower defined by $f_5(x_{n - 1},x_n)$, we have that $\\delta_n^{2^j} \\in \\mathrm{GF}(q,2^{n - j})$ holds for $j = 1$, but not for all $j < n$. This explains why the order grows comparatively faster when $q > 3$. In the case $q = 3$ the polynomial equation $g_5(\\delta_{n - 1},\\delta_n) = \\delta_n^2 - \\delta_{n - 1}^3 = 0$ gives $\\delta_n^{2^{n - 1}} = \\delta_{1}^{3^{n - 1}} \\in \\mathrm{GF}(3,2)$. This explains why the numerical results for the order of $\\delta_n$ are similar to the tower defined by $f_3(x_{n - 1},x_n)$.\n\\end{rem}\n\\begin{rem}\nFrom the relation $g_4(\\delta_{n - 1},\\delta_n)=0$ between $\\delta_n$ and $\\delta_{n - 1}$ in the fourth tower, for $q = 3$, we get $g_4(\\delta_{n - 1}, \\delta_n) = \\delta_n^2 - \\delta_{n - 1}^3 = 0$. Hence, we observe that the proof of last corollary does not work when $q = 3$. We also point out that $f_4(x_{n - 1}, x_n) = f_5(x_{n - 1}, x_n)$ when $q = 3$. 
This fact explains why the numerical results in Tables \\ref{tab:res4} and \\ref{tab:res5} have the same values in the first two columns.\n\\end{rem}\nOf course, other towers satisfying analogues of Conditions \\textbf{(1)} and \\textbf{(2)}, or of Conditions \\textbf{(1)} and \\textbf{(2')}, could exist. \nAn extensive computer search could establish the non-existence of further examples of the form $f(x_{n - 1},x_n) = x_n^2 + x_n + v(x_{n - 1})$, with $\\deg(v(x)) \\leq 3$, at least for small prime fields.\n\n\n\n\n\n\n\n\\section{Examples of good towers in even characteristic}\\label{sec:list-even}\n\n\nIn this section we list polynomials generating high order elements, as in Section \\ref{sec:list-odd}.\nSome proofs must be adapted to even characteristic, since we now have to prove that our cubic polynomials $f(x_{n-1},y)$ are irreducible and normal in $\\mathrm{GF}(2,2\\cdot 3^{n-1})[y]$. Let $e$ be a non-negative integer. In the following results, we prove that $f(x_{n - 1},x_n):= x_n^3+x_n+x_{n-1}^{2^e}$ actually defines an infinite normal separable tower.\n\n\n\n\\begin{lem}\\label{lem:cubicroots4}\nLet $e$ and $n$ be integers such that $e\\geq 0$ and $n\\ge 2$, and let $x_{n-1}\\in \\mathrm{GF}(2,2\\cdot 3^{n-1})$. Assume that $u_n^3\\in \\mathrm{GF}(2,2\\cdot 3^{n-1})$ is a root of the quadratic polynomial $y^2+x_{n-1}^{2^e}y+1$ and that $x_n:=u_n+u^{-1}_n\\notin \\mathrm{GF}(2,2\\cdot 3^{n-1})$ is a root of the cubic polynomial $y^3+y+x_{n-1}^{2^e}$. \nLet $u_{n+1}\\in \\mathrm{GF}(2,2\\cdot 3^{n+1})$ be a cube root of $u_n^{2^e}$. 
Then:\n\\begin{enumerate}[i)] \n\\item\\label{item_cond1}\n $u_{n+1}\\notin \\mathrm{GF}(2,2\\cdot 3^{n})$;\n\n\\item\\label{item_cond2}\n$u_{n+1}^3$ and $u_{n+1}^{-3}$ are the roots of $y^2+x_n^{2^e}y+1$;\n\n\\item\\label{item_cond3}\n $x_{n+1}:=u_{n+1}+u_{n+1}^{-1}$ is a root of $y^3+y+x_n^{2^e}$ and $x_{n+1}\\notin \\mathrm{GF}(2,2\\cdot 3^{n})$.\n\\end{enumerate} \n\\end{lem}\n\\begin{proof}\nPart \\ref{item_cond1}) follows since $u_{n+1}^9=(u_{n+1}^3)^3=(u_n^{2^e})^3=(u_n^3)^{2^e}$ belongs to $\\mathrm{GF}(2,2\\cdot 3^{n-1})$ and since $\\mathrm{GF}(2,2\\cdot 3^{n})$ does not contain any 9-th root of non-cubic elements in $\\mathrm{GF}(2,2\\cdot 3^{n-1})$, because 9 does not divide\n$$\\frac{|\\mathrm{GF}(2,2\\cdot 3^{n})^*|}{|\\mathrm{GF}(2,2\\cdot 3^{n-1})^*|}=1+4^{3^{n-1}}+4^{2\\cdot 3^{n-1}},$$ for all $n\\ge 1$.\n\nPart \\ref{item_cond2}) follows by straightforward verification.\n\nThe last part follows by Lemma \\ref{lem:cubicroots2} and by Parts \\ref{item_cond1}) and \\ref{item_cond2}).\n\\end{proof}\n\nPart \\ref{item_cond3}) of the previous lemma shows by induction that if $f(x_1,y)=y^3+y+x_{1}^{2^e}$ is irreducible over $\\mathrm{GF}(2,6)$, then $f(x_n,y)=y^3+y+x_{n}^{2^e}$ is also irreducible over $\\mathrm{GF}(2,2\\cdot 3^{n})$ for all $n>1$.\nIt follows that the Galois group of the splitting field of $f(x_n,y)$ is either the cyclic group $\\mathbb{Z}\/3 \\mathbb{Z}$ or the symmetric group $\\mathrm{S}_3$, at each iteration. \nIn the same spirit as the results above, we show that if the first polynomial $f(x_1,y)$ is normal (i.e., the Galois group of the splitting field is $\\mathbb{Z}\/3 \\mathbb{Z}$) then $f(x_n,y)$ is normal at each iteration.\n\n\\begin{lem}\\label{lem:galois}\nLet $e\\geq 0$ and $x_{n - 1} \\in \\mathrm{GF}(2,2 \\cdot 3^{n - 1})$. Assume that $f(x_{n - 1},y)=y^3+y+x_{n-1}^{2^e}$ splits into linear factors in $\\mathrm{GF}(2,2 \\cdot 3^{n})[y]$. 
Then $f(x_{n},y)$ splits into linear factors in $\\mathrm{GF}(2,2 \\cdot 3^{n + 1})[y]$.\n\\end{lem}\n\\begin{proof}\n Let $r_1, r_2, r_3 \\in \\mathrm{GF}(2,2 \\cdot 3^{n})$ be the roots of $f(x_{n - 1},y)$ and choose $x_n = r_1$. Let $R(x_n,y) = y^2 + x_n^{2^e} y + x_n^{2^{e+1}} + 1$ be the quadratic resolvent of $f(x_{n},y)$. \nApplying the Frobenius automorphism, we know that the roots satisfy $r_2^{2^e} + r_3^{2^e} = r_1^{2^e}$ and $r_1^{2^e}r_2^{2^e}r_3^{2^e}=(x_{n-1}^{2^e})^{2^e}$. Hence, we get\n\\begin{align*}\nR(x_n,y)&=y^2 + r_1^{2^e}y + r_1^{2^{e+1}} + 1=y^2 + r_1^{2^e}y + \\frac{(r_1^3 + r_1)^{2^e}}{r_1^{2^e}}= \\\\\n&= y^2 + r_1^{2^e}y + \\frac{x_{n-1}^{2^{2e}}}{r_1^{2^e}}=(y + r_2^{2^e})(y + r_3^{2^e}).\n\\end{align*}\nIt follows that $R(x_n,y)$ splits in $\\mathrm{GF}(2,2 \\cdot 3^{n})[y]$. Therefore, $f(x_{n},y)$ splits into linear factors in $\\mathrm{GF}(2,2 \\cdot 3^{n + 1})[y]$ by Lemma \\ref{lem:quadratic_resolvent}.\n\\end{proof}\n\nWe summarize the results above in the following corollary, which provides a good initial choice of $x_1$ making $f(x_{n-1},x_n)$ define a normal separable recursive tower.\n\n\\begin{cor}\\label{cor:eventowerorder}\nLet $e\\geq 0$ be an integer. Then $f(x_{n-1},x_n):= x_n^3+x_n+x_{n-1}^{2^e}$ defines an infinite tower of fields and, for a suitable choice of $x_1$, the order of $x_n\\in \\mathrm{GF}(2,2\\cdot 3^n)$, for $n \\ge 2 $, is greater than $3^{\\frac{1}{2}(n^2 + 3n)-1}.$\n\\end{cor}\n\\begin{proof}\nLet $x_1$ be one of the roots of $h(x) := x^6 + x^5 + x^3 + x^2 + 1.$ The reader can verify that each root of this polynomial is not a cube in $\\mathrm{GF}(2,6)$. Moreover, the quadratic resolvent $R(x_1,y)=y^2+x_1y+x_1^2+1$ of $f(x_{1},y)= y^3+y+x_{1}$ is reducible in $\\mathrm{GF}(2,6)[y]$ and the roots of $y^2+x_1y+1$ are not cubes in $\\mathrm{GF}(2,6)$. 
Applying the Frobenius automorphism, this implies that, for all $e\\ge 0$, the roots of $y^2+x_1^{2^e}y+1$ are not cubes and the quadratic resolvent $R(x_1^{2^e},y)=y^2+x_1^{2^e}y+x_1^{2^{e+1}}+1$ of $f(x_{1},y)= y^3+y+x_{1}^{2^e}$ is reducible in $\\mathrm{GF}(2,6)[y]$.\n\nBy Lemma~\\ref{lem:cubicroots4}, Part \\ref{item_cond3}), the fact that the roots of $y^2+x_1^{2^e}y+1$ are not cubes implies that $f(x_{n},y)= y^3+y+x_{n}^{2^e}$ is irreducible for each $n\\ge 1$. Hence $f(x_{n-1},x_n)$ defines an infinite tower of fields. By Lemma~\\ref{lem:galois}, the condition on the resolvent implies that this tower is Galois.\n\nSince $f$ clearly satisfies Condition \\textbf{(3)} of Section \\ref{sec:even}, the proof follows by Corollary~\\ref{bound-even}.\n\\end{proof}\nIn Table \\ref{tab:res6} of Section~\\ref{sec:numres} we collect the numerical results for $f_6(x_{n - 1},x_n) := x_n^3 + x_n + x_{n - 1}$ and $f_7(x_{n - 1},x_n) := x_n^3 + x_n + x_{n - 1}^2$, corresponding to $e=0$ and $e=1$, respectively.\nThe initial element $x_1$ is one of the roots of $h(x) := x^6 + x^5 + x^3 + x^2 + 1$, as explained in the proof of Corollary \\ref{cor:eventowerorder}.\n\n\n\n\n\\section{Numerical results}\\label{sec:numres}\n\nIn this section, we have collated the multiplicative orders $o(x_n)$ (and $o(\\delta_n )$ for $q$ odd) for small $n$ in the towers defined by $f_i (x_{n - 1}, x_n)$, for $i = 1, 2,\\ldots,7$. In most of the cases we obtained generators of the multiplicative groups $\\mathrm{GF}(q, 2^n)^*$ and $\\mathrm{GF}(2, 2 \\cdot 3^n)^*$. We tabulate the base-2 logarithms of the orders, since they grow exponentially. The interested reader can also find the lower and upper bounds for $o(x_n)$ and $o(\\delta_n)$ listed in Tables \\ref{tab:bounds} and \\ref{tab:res6}, for odd and even characteristic respectively. 
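For reference, the two bound columns of Table \ref{tab:bounds} can be reproduced directly; the following short Python script (our own convenience sketch, independent of the MAGMA experiments) prints, for each $n$, the upper bounds $\log_2(q^{2^n}-1)$ for $q \in \{3,5,7,11\}$ and the lower bound $\log_2(2^{(n^2+3n)/2})$:

```python
import math

# Reproduce the bound columns of the odd-characteristic bounds table:
# upper bound log2(q^(2^n) - 1) and lower bound log2(2^((n^2+3n)/2)).
# math.log2 handles arbitrary-precision integers, so q**(2**n) may be huge.

def upper_bound_log2(q, n):
    return math.log2(q ** (2 ** n) - 1)

def lower_bound_log2(n):
    return (n * n + 3 * n) / 2

for n in range(1, 10):
    row = [round(upper_bound_log2(q, n), 1) for q in (3, 5, 7, 11)]
    print(n, row, round(lower_bound_log2(n), 1))
```

For instance, for $n = 1$ and $q = 3$ the script gives $\log_2(3^2 - 1) = 3.0$ and the lower bound $2.0$, matching the first row of the table.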
Finally, in Table \\ref{tab:ComparativeAnalysis}, we compare one of our example in characteristic $3$, with the constructions of \\cite{burkhart2009finite} and \\cite{cohen1992explicit}.\n\nMAGMA \\cite{MAGMA} computational algebra system was used for the experiments and a sample MAGMA code and output, for $q = 11$ can be found in \\cite{CODE}. The performance of the code depends on the efficiency of the \\textit{root} finding algorithm that one uses. We have used the standard function of MAGMA \\cite{MAGMA} for finding roots.\n\n\n\\begingroup\n\\tiny{\n\\begin{table}[ht]\n\\caption{Results for $f_1(x_{n - 1},x_n)$ for odd $q\\leq 11$.}\n\\begin{center}\n\\begin{adjustwidth}{-0.35in}{-0.35in}\n\\begin{tabular}{|c||c|c||c|c||c|c||c|c|}\n\\hline \n\n$q$ & 3 & 3 & 5 & 5 & 7 & 7 & 11 & 11 \\\\\n\\hline\n\n$x_1^2=$ & $2x_1 + 1$ & $2x_1 + 1$ & $3x_1 + 2$ & $3x_1 + 2$ & $x_1 + 4$ & $x_1 + 4$ & $4x_1 + 9$ & $4x_1 + 9$ \\\\\n\\hline\n\\hline\n$n$ & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) \\\\ \n\\hline \n\n$1$ & 3.0 & 3.0 & 4.6 & 3.0 & 5.6 & 5.6 & 6.9 & 5.3 \\\\\n\\hline\n\n$2$ & 6.3 & 6.3 & 9.3 & 9.3 & 11.2 & 11.2 & 13.8 & 13.8 \\\\\n\\hline\n\n$3$ & 12.7 & 12.7 & 18.6 & 18.6 & 22.5 & 22.5 & 27.7 & 27.7 \\\\\n\\hline\n\n$4$ & 25.4 & 25.4 & 37.2 & 37.2 & 44.9 & 44.9 & 55.4 & 55.4 \\\\\n\\hline\n\n$5$ & 50.7 & 50.7 & 74.3 & 74.3 & 89.8 & 89.8 & 110.7 & 110.7 \\\\\n\\hline\n\n$6$ & 101.4 & 101.4 & 148.6 & 148.6 & 179.7 & 179.7 & 221.4 & 221.4 \\\\\n\\hline\n\n$7$ & 202.9 & 202.9 & 297.2 & 297.2 & 359.3 & 359.3 & 442.8 & 442.8 \\\\\n\n\\hline\n$8$ & 405.8 & 405.8 & 594.4 & 594.4 & 718.7 & 718.7 & 883.3 & 885.6 \\\\ \n\\hline\n\n$9$ & 811.5 & 811.5 & 1188.8 & 1188.8 & 1437.4 & 1437.4 & 1771.2 & 1771.2 
\n\\\\\n\n\\hline\n\\end{tabular}\\label{tab:res1}\n\\end{adjustwidth}\n\\end{center}\n\\end{table}\n}\n\\endgroup\n\\begingroup\\tiny{\n\\begin{table}[ht]\n\\caption{Results for $f_2(x_{n - 1},x_n)$ for odd $q\\leq 11$.}\n\\begin{center}\n\\begin{adjustwidth}{-0.35in}{-0.35in}\n\\begin{tabular}{|c||c|c||c|c||c|c||c|c|}\\hline \n$q$ & 3 & 3 & 5 & 5 & 7 & 7 & 11 & 11 \\\\\n\\hline\n$x_1^2=$ & $2x_1+1$ & $2x_1+1$ & $3x_1+2$ & $3x_1+2$ & $x_1+4$ & $x_1+4$ & $4x_1+9$& $4x_1+9$ \\\\\n\\hline \n\\hline\n$n$ & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$))& $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$))\\\\ \n\\hline \n\n$1$ & 3.0 & 3.0 & 4.6 & 4.6 & 5.6 & 5.6 & 6.9 & 4.6 \\\\\\hline\n\n$2$ & 6.3 & 6.3 & 9.3 & 9.3 & 11.2 & 11.2 & 13.8 & 12.3 \\\\\\hline\n\n$3$ & 12.7 & 12.7 & 18.6 & 18.6 & 20.9 & 22.5 & 26.1 & 27.7 \\\\\\hline\n\n$4$ & 25.4 & 25.4 & 37.2 & 37.2 & 44.9 & 44.9 & 55.4 & 48.9 \\\\\\hline\n\n$5$ & 50.7 & 50.7 & 74.3& 74.3 & 88.3 & 89.8 & 106.6 & 110.7 \\\\\\hline\n\n$6$ & 101.4 & 101.4 & 148.6 & 148.6 & 179.7 & 179.7 & 221.4 & 219.8 \\\\\\hline\n\n$7$ & 202.9 & 202.9 & 297.2 & 297.2 & 357.8 & 359.3 & 441.2 & 442.8 \\\\\\hline\n\n$8$ & 405.8 & 405.8 & 594.4 & 594.4 & 718.7 & 718.7 & 885.6 & 879.2 \\\\ \\hline\n\n$9$ & 811.5 & 811.5 & 1188.8 & 1188.8 & 1435.8 & 1437.4 & 1767.1 & 1771.2 \\\\\\hline\n\n\\end{tabular}\\label{tab:res2}\n\\end{adjustwidth}\n\\end{center}\n\\end{table}}\n\\endgroup\n\\begingroup\n\\tiny{\n\\begin{table}[ht]\n\\caption{Results for $f_3(x_{n - 1},x_n)$ for odd $q \\leq 11$.}\n\\begin{center}\n\\begin{adjustwidth}{-0.35in}{-0.35in} \n\\begin{tabular}{|c||c|c||c|c||c|c||c|c|}\n\\hline \n\n$q$ & 3 & 3 & 5 & 5 & 7 & 7 & 11 & 11 \\\\\n\\hline\n\n$x_1^2=$ & $x_1+1$& $x_1+1$ & $2x_1+2$ & $2x_1+2$ & $3x_1+2$& $3x_1+2$ & $4x_1+9$& $4x_1+9$ \\\\\n\\hline \n\\hline\n$n$ & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & 
$\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$))& $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) \\\\ \n\\hline \n\n$1$ & 3.0 & 3.0 & 4.6 & 4.6 & 5.6 & 4.0 & 6.9 & 5.3 \\\\\n\\hline\n\n$2$ & 6.3 & 4.0 & 9.3 & 5.6 & 11.2 & 5.0 & 13.8 & 6.3 \\\\\n\\hline\n\n$3$ & 12.7 & 5.0 & 18.6 & 6.6 & 22.5 & 6.0 & 25.4 & 7.3 \\\\\n\\hline\n\n$4$ & 25.4& 6.0 & 37.2 & 7.6 & 44.9 & 7.0 & 51.3 & 8.3 \\\\\n\\hline\n\n$5$ & 50.7 & 7.0 & 74.3 & 8.6 & 89.8 & 8.0 & 106.6 & 9.3 \\\\\n\\hline\n\n$6$ & 101.4& 8.0 & 148.6 & 9.6 & 179.7 & 9.0 & 217.3 & 10.3 \\\\\n\\hline\n\n$7$ & 202.9 & 9.0 & 297.2 & 10.6 & 359.3 & 10.0 & 436.4 & 11.3 \\\\\n\\hline\n\n$8$ & 405.8 & 10.0 & 594.4& 11.6 & 718.7 & 11.0 & 881.5 & 12.3 \\\\ \n\\hline\n\n$9$ & 811.5 & 11.0 & 1188.8 & 12.6 & 1437.4 & 12.0 & 1767.1 & 13.3 \\\\\n\\hline\n\n\\end{tabular}\\label{tab:res3}\n\\end{adjustwidth}\n\\end{center}\n\\end{table}\n}\n\\endgroup\n\\begingroup \\tiny{\n\\begin{table}[ht]\n\\caption{Results for $f_4(x_{n - 1},x_n)$ for odd $q \\leq 11$.}\n\\begin{center}\n\\begin{adjustwidth}{-0.35in}{-0.35in} \\begin{tabular}{|c||c|c||c|c||c|c||c|c|}\\hline \n$q$ & 3 &3 & 5 &5 & 7 &7 &11 & 11\\\\\n\\hline\n$x_1^2=$ & $x_1+1$& $x_1+1$ & $4x_1+3$ & $4x_1+3$ & $2x_1+4$& $2x_1+4$ & $7x_1+4$& $7x_1+4$ \\\\\n\\hline \n \\hline\n$n$ & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$))& $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) \\\\ \\hline $1$ & 3.0 & 3.0 & 4.6 & 4.6 & 5.6 & 5.6 & 6.9 & 4.6 \\\\\n\\hline\n$2$ & 6.3 & 4.0 & 9.3 & 7.7 & 11.2 & 11.2 & 13.8 & 13.8 \\\\\n\\hline\n$3$ & 12.7 & 5.0 & 17.0 & 17.0 & 22.5 & 20.9 & 27.7 & 27.7 \\\\\n\\hline\n$4$ & 25.4 & 6.0 & 35.6 & 37.2 & 41.0 & 43.3 & 55.4 & 55.4 \\\\\n\\hline$5$ & 50.7 & 7.0 & 72.7 & 72.7 & 89.8 & 89.8 & 110.7 & 109.1 \\\\\n\\hline$6$ & 101.4& 8.0 & 147.0 & 148.6 & 177.3 & 179.7 & 221.4 & 219.8 \\\\\n\\hline$7$ & 202.9 & 9.0 & 295.6 & 295.6 & 359.3 & 
357.8 & 442.8 & 441.2 \\\\\n\\hline$8$ & 405.8 & 10.0 & 592.8 & 594.4 & 717.1 & 717.1 & 885.6 & 884.0 \\\\ \\hline$9$ & 811.5 & 11.0 & 1187.2 & 1188.8 & 1435.8 & 1435.8 & 1769.6 & 1771.2 \\\\\n\\hline\n\\end{tabular}\n\\label{tab:res4}\n\\end{adjustwidth}\n\\end{center}\n\\end{table}}\n\\endgroup\n\\begingroup\\tiny{\n\\begin{table}[ht]\n\\caption{Results for $f_5(x_{n - 1},x_n)$ for odd $q\\leq 11$.}\n\\begin{center}\\begin{adjustwidth}{-0.35in}{-0.35in} \\begin{tabular}{|c||c|c||c|c||c|c||c|c|}\n\\hline $q$ & 3 &3 & 5 &5 & 7 &7 &11 & 11 \\\\\n\\hline$x_1^2=$ & $x_1+1$& $x_1+1$ & $x_1+3$ & $x_1+3$ & $2x_1+2$& $2x_1+2$ & $4x_1+4$& $4x_1+4$ \\\\\n\\hline \n\\hline$n$ & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$))& $\\log_2$(o($\\delta_n$)) & $\\log_2$(o($x_n$)) & $\\log_2$(o($\\delta_n$)) \\\\ \\hline $1$ & 3.0 & 3.0 & 4.6 & 4.6 & 5.6 & 5.6 & 6.9 & 3.0 \\\\\n\\hline$2$ & 6.3& 4.0 & 9.3& 5.6 & 8.9 & 5.0 & 13.8& 5.6 \\\\\n\\hline$3$ & 12.7& 5.0 & 18.6 & 8.7 & 22.5& 12.2 &27.7& 14.8 \\\\\n\\hline$4$ & 25.4& 6.0 & 35.6 & 18.0 & 42.6 & 23.5 & 55.4 & 28.7 \\\\\n\\hline$5$ & 50.7 & 7.0 & 72.7 & 38.2 & 89.8 & 45.9 & 110.7 & 56.4 \\\\\n\\hline$6$ & 101.4& 8.0 & 147.0 & 73.7 & 179.7 & 89.3 & 221.4 & 110.1 \\\\\n\\hline$7$ & 202.9 & 9.0 & 295.6 & 149.6 & 359.3 & 179.1 & 442.8 & 220.8 \\\\\n\\hline$8$ & 405.8 & 10.0 & 592.8& 296.6 & 718.7 & 358.8 & 883.3 & 442.8 \\\\ \n\\hline$9$ & 811.5 & 11.0 & 1187.2 & 595.4 & 1437.4 & 718.1 & 1771.2 & 885.0 \\\\\n\\hline\n\\end{tabular}\n\\label{tab:res5}\n\\end{adjustwidth}\\end{center}\n\\end{table}}\n\\endgroup\n\n\\begingroup\n\\tiny{\n\\begin{table}[ht]\n\\caption{Upper bounds for odd $q\\leq 11$ and lower bound.}\n\\begin{center}\n\\begin{tabular}{|c||c||c|c|c|c|}\n\\hline \n \n$q$ & 3 & 5 &7 & 11 & Lower bound\\\\\n\\hline \n\n$n$ & $\\log_2(q^{2^n}-1)$ & $\\log_2(q^{2^n}-1)$ & $\\log_2(q^{2^n}-1)$ & $\\log_2(q^{2^n}-1)$& $\\log_2(2^{(n^2+3n)\/2})$ 
\\\\n\\hline \\n\\n$1$ & 3.0 & 4.6 & 5.6 & 6.9 & 2.0\\\\n\\hline\\n\\n$2$ & 6.3 & 9.3 & 11.2 & 13.8& 5.0 \\\\n\\hline\\n\\n$3$ & 12.7 & 18.6 & 22.5 & 27.7& 9.0 \\\\n\\hline\\n\\n$4$ & 25.4 & 37.2 & 44.9 & 55.4 & 14.0 \\\\n\\hline\\n\\n$5$ & 50.7 & 74.3 & 89.8 & 110.7 & 20.0\\\\n\\hline\\n\\n$6$ & 101.4 & 148.6 & 179.7 & 221.4 & 27.0\\\\n\\hline\\n\\n$7$ & 202.9 & 297.2 & 359.3 & 442.8 & 35.0\\\\n\\hline\\n\\n$8$ & 405.8 & 594.4 & 718.7 & 885.6 & 44.0\\\\ \\n\\hline\\n\\n$9$ & 811.5 & 1188.8 & 1437.4 & 1771.2 & 54.0\\\\n\\hline\\n\\n\\end{tabular}\\label{tab:bounds}\\n\\end{center}\\n\\end{table}\\n}\\n\\endgroup\\n\\n\\clearpage\\n\\n\\begingroup\\n\\tiny{\\n\\begin{table}[ht]\\n\\caption{Results for $f_6(x_{n - 1},x_n)$ and $f_7(x_{n - 1},x_n)$ for $q=2$ and related lower and upper bounds.}\\n\\begin{adjustwidth}{-0.50in}{-0.50in} \\n\\begin{center}\\n\\begin{tabular}{|c||c|c|c|c|}\\n\\hline \\n\\n$f(x_{n-1},x_n)=$ & $f_6(x_{n-1},x_n)$ & $f_7(x_{n-1},x_n)$ & Lower bound & Upper bound \\\\\\n\\hline \\hline \\n\\n$n$ & $\\log_2$(o($x_n$)) & $\\log_2$(o($x_n$)) & $\\log_2(3^{n(n+3)\/2})$ & $\\log_2(4^{3^n}-1)$ \\\\ \\n\\hline \\n\\n$1$ & 6.0 & 6.0 & 3.2 & 6.0 \\\\\\n\\hline \\n\\n$2$ & 18.0& 18.0 & 7.9 & 18.0 \\\\\\n\\hline\\n\\n$3$ & 54.0 & 54.0 & 14.3 & 54.0\\\\\\n\\hline\\n\\n$4$ & 162.0 & 162.0 & 22.2 & 162.0 \\\\\\n\\hline\\n\\n$5$ & 486.0 & 486.0 & 31.7 & 486.0 \\\\\\n\\hline\\n\\n$6$ & 1458.0 & 1458.0 & 42.8 & 1458.0 \\\\\\n\\hline\\n\\n\\end{tabular}\\label{tab:res6}\\n\\end{center}\\n\\end{adjustwidth}\\n\\end{table}\\n}\\n\\endgroup\\n\\n\\n\\begingroup\\n\\tiny{\\n\\begin{table}[ht]\\n\\caption{Comparative Analysis}\\n\\begin{center}\\n\\begin{tabular}{|c||c||c|c|c|c|}\\n\\hline \\n \\n$n$ & $\\log_2(|\\mathbb{F}^{*}_{3^{2^n}}|)$ & Our Model & Burkhart's Model \\cite{burkhart2009finite} & Cohen's Model \\cite{cohen1992explicit} & McNay's Model \\cite{mcnay1995topics} \\\\\\n \\n\\hline \\n\\n$1$ & 3.0 & 3.0 & 3.0 & 3.0 & 3.0 \\\\\\n\\hline\\n\\n$2$ & 6.3 & 6.3 & 5.3 & 4.0 & 4.3 \\\\\\n\\hline\\n\\n$3$ & 12.7 & 12.7 & 10.7 & 
5.0 & 7.4 \\\\n\\hline\\n\\n$4$ & 25.4 & 25.4 & 22.4 & 6.0 & 13.7 \\\\n\\hline\\n\\n$5$ & 50.7 & 50.7 & 46.8 & 7.0 & 26.4 \\\\n\\hline\\n\\n$6$ & 101.4 & 101.4 & 96.5 & 8.0 & 51.7 \\\\n\\hline\\n\\n$7$ & 202.9 & 202.9 & 197.0 & 9.0 & 102.4 \\\\n\\hline\\n\\n$8$ & 405.8 & 405.8 & 399.0 & 10.0 & 203.9 \\\\ \\n\\hline\\n\\n$9$ & 811.5 & 811.5 & 804.0 & 11.0 & 406.8 \\\\n\\hline\\n\\n\\end{tabular}\\label{tab:ComparativeAnalysis}\\n\\end{center}\\n\\end{table}\\n}\\n\\endgroup\\n\\n\\n\\n\\n\\section{Conclusion and future work}\\n\\n\\nIn~\\cite{burkhart2009finite}, the choice of polynomials for the recursive process generating high order elements in finite field extensions was limited to the equations of the modular curve towers in \\cite{elkies2001explicit}. In this work, we have generalized the choice of polynomials, which provides more examples with similar properties. A central theme of this work is to find a recipe for choosing polynomials suitable for the recursive process. There might be other equations that attain similar bounds, and it would be interesting to understand in general which equations are good and which are not. We also point out that there could be other explicit towers satisfying similar properties: we were in fact previously drawn to other interesting examples in which $v(x)$ is a polynomial of higher degree over $\\mathrm{GF}(q, 1)$; these also turned out to give high order elements, although the proof seems to be much harder. A possible relation linking these equations together could allow us to obtain other families of towers with good parameters. 
We also expect to improve our results by\nextending the construction of Section \\ref{sec:odd} to higher degree polynomials and by\nextending the construction of Section \\ref{sec:even} to odd characteristic $q > 3$.\n\nAnother question that would be interesting to explore is a possible relation with some geometric construction.\nIn fact, since the tower in \\cite{burkhart2009finite} is obtained from the equation of a modular curve, it is natural to ask whether our results have a geometric interpretation. We hope that a finer understanding of the subject might also provide a recipe for finding high order elements from towers obtained from other forms.\n\n\\section{Acknowledgements}\n\nThe second author was partially supported by the research grant ``Ing. Giorgio Schirillo'' of the Istituto Nazionale di Alta Matematica ``F. Severi'', Rome. The third author would like to thank the High Performance Computing facility of the University of L'Aquila (UAQ), which enabled us to implement the algorithm in the MAGMA \\cite{MAGMA} computer algebra system and to run the experiments validating our results. The third author is grateful for fruitful and illuminating discussions with Professor Norberto Gavioli, UAQ. The third author is thankful to Professor Kalyan Chakraborty for arranging his research visit to the Harish-Chandra Institute (HRI), Prayagraj, India, where the possible modularity aspects of this work were explored. The third author thanks Dr. Kalyan Banerjee (Post-Doctoral Fellow, HRI) for his interest in exploring the geometric meaning of such towers and the modularity connections.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}