diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbrmu" "b/data_all_eng_slimpj/shuffled/split2/finalzzbrmu" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbrmu" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n In the Minimal Supersymmetric Standard Model (MSSM), there are two charginos ${\\ti \\chi}_1^\\pm$ and ${\\ti \\chi}_2^\\pm$, which\nare the fermion mass eigenstates of the supersymmetric partners of the $W^\\pm$ and the charged Higgs bosons $H^\\pm_{1,2}$.\nLikewise, there are four neutralinos $\\neu 1,\\ldots,\\neu 4$, which are the fermion mass eigenstates of the supersymmetric partners\nof the photon, the $Z^0$-boson, and the neutral Higgs bosons $H^0_{1,2}$. Their mass matrices stem from soft gaugino breaking terms,\nspontaneous symmetry breaking in the Higgs sector and in case of $\\mu$ from the super-potential.\n\nThe next generation of future high-energy physics experiments at Tevatron, LHC and a future\n$e^+e^-$ linear collider (ILC) will hopefully discover these particles if supersymmetry (SUSY) is realized at low energies.\nMuch work has been devoted to the study of the physics interplay of experiments at LHC and ILC \\cite{Weiglein:2004hn}.\nParticularly at a linear collider, it will be possible to perform\nmeasurements with high precision \\cite{ Weiglein:2004hn, tesla, lincol}.\nIn fact, the accuracies\nof the masses of the lighter SUSY fermions are in the permille region which makes\nthe inclusion of higher order corrections indispensible.\n\nIn the framework of the real MSSM important results on quark self-energies were obtained in \\cite{Bednyakov:2002sf}-\\cite{Bednyakov:2005kt}.\nIn \\cite{Martin:2005ch,Yamada:2006vn} the gluino pole mass was calculated to two-loop order. Moreover, the MSSM Higgs-sector has\nbeen studied in detail, even in the full complex model \\cite{Heinemeyer:2007gj}-\\cite{Heinemeyer:1998yj}.\n\nIn a previous work \\cite{Schofbeck:2006gs} we studied loop corrections to the neutralino pole masses and found that\nthe relation between the $\\ifmmode{\\overline{\\rm DR}} \\else{$\\overline{\\rm DR}$} \\fi$-input and the physical observables has to be established at least at the two-loop level in order to match\nexperimental precision. Following these lines, we\ncalculate in this paper the two-loop $\\mathcal O(\\alpha\\alpha_S)$ corrections to the charginos within the MSSM. We conclude that these two-loop corrections\nare in the magnitude of the experimental uncertainty and therefore they are relevant e.g. for global fits of $\\ifmmode{\\overline{\\rm DR}} \\else{$\\overline{\\rm DR}$} \\fi$-parameters.\n\nA new feature of this work is the inclusion of complex parameters in the MSSM. We therefore not only study the charginos but\nalso re-analyze the neutralino-masses with complex parameters and study in particular the dependence on the phase\nof the soft trilinear breaking parameter $A_t$.\n\nGeneric analytic formulae for SUSY QCD corrections in $\\mathcal O(\\alpha\\alpha_S)$ to fermion pole masses in the MSSM were already derived\nin \\cite{Martin:2005ch}. Our calculation is, however, completely independent. 
More precisely, we use semi-automatic {\sc Mathematica} tools \cite{feynarts, feyncalc, tarcer} for the diagram generation and analytic simplifications.

In the Appendix we briefly describe our C program {\sc Polxino} \cite{Polxino}, developed for the calculation of chargino and neutralino masses with complex parameters up to $\mathcal O(\alpha\alpha_S)$, with a convenient {\sc SLHA} interface \cite{Skands:2003cj} for numerical studies.

\section{Diagrammatics}

\begin{figure}[p]
\begin{center}
\begin{picture}(125,50)(0,0)
 \put(-19,-8){\mbox{\resizebox{!}{5cm}{\includegraphics{Ch1L.eps}}}}
\end{picture}
\end{center}
\caption{\it Chargino one-loop self-energy diagrams}\label{Ch1LoopDiags}
\end{figure}

In Fig.~\ref{Ch1LoopDiags} we show all one-loop diagrams. As in the neutralino case, we checked our analytic one-loop calculation against previous work \cite{Oller:2003ge, Fritzsche:2004ek} in the on-shell scheme and found agreement.

Note that, in contrast to the neutralino calculation, there are now quark isospin partners, denoted by $(q,Q)$, in the loop, which give rise to different tensor reduction formulae. In order to obtain a pure $\alpha\alpha_S$ correction from the diagrams of Fig.~\ref{Ch2LSquark}, it is necessary to restrict the four-squark coupling to its QCD part. Diagrams with one-loop counter-term insertions (Fig.~\ref{ChCT}) involve $\mathcal O(\alpha_S)$ mass counter-terms for quarks and squarks as well as coupling constant counter-terms stemming from the Yukawa part of the chargino-quark-squark couplings and counter-terms to the squark mixing matrix, see e.g. \cite{Bednyakov:2002sf}.

For the evaluation of the amplitudes we adopt the strategy of \cite{Schofbeck:2006gs}, that is, we use semi-automatic tools \cite{feynarts, feyncalc, tarcer} in a {\sc Mathematica} environment and automatically generate {\sc Fortran} code from the results.

The renormalization prescription we adopt is the familiar \ifmmode{\overline{\rm DR}} \else{$\overline{\rm DR}$} \fi-scheme, which regularizes UV divergences dimensionally but introduces an unphysical scalar field for each gauge field in the theory in order to restore the counting of degrees of freedom in supersymmetry. These unphysical mass parameters can be absorbed into the sfermion mass parameters~\cite{Martin:2001vx} (the resulting scheme is called $\ifmmode{\overline{\rm DR'}} \else{$\overline{\rm DR}'$} \fi$) and hence provide a consistency check for the calculation of the diagrams containing gluon lines.

In order to handle infrared divergences in the individual diagrams we introduce an infrared regulating mass parameter and check that the resulting contributions, as well as the unphysical scalar mass due to the gluon field, cancel out in the final result. As the main focus of this work is on the numerical analysis, we do not reproduce the resulting lengthy expressions here.

Owing to the fact that we split the contributions into self-energy and counter-term diagrams, we can quite easily check some of the generic formulae obtained in \cite{Martin:2005ch}, where we find agreement.

The numerical analysis was performed by implementing {\sc Tsil} \cite{tsil} in this {\sc Fortran} code. As in the case of the neutralinos, we used the usual 't Hooft-Feynman $R_{\xi=1}$ gauge for the gluon field, except for the check of gauge independence.
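For orientation, we recall schematically how the pole mass is extracted; the projected self-energy $\Sigma$ below is a symbolic shorthand for our purposes, not the exact conventions of the calculation. The pole mass $M$ solves the fixed-point equation $M=m(Q)+\Sigma(M)$ for the running mass $m(Q)$, and iterating to two-loop accuracy gives
\begin{align*}
 M = m + \Sigma^{(1)}(m) + \Sigma^{(2)}(m) + \Sigma^{(1)}(m)\,\Sigma^{(1)\prime}(m) + \mathcal O(\text{three loops}) \; ,
\end{align*}
so that, besides the genuine two-loop diagrams and the counter-term insertions, the one-loop self-energy and its derivative enter the $\mathcal O(\alpha\alpha_S)$ mass shift.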
\begin{figure}[t]
\begin{center}
\begin{picture}(125,25)(0,0)
 \put(-19,-8){\mbox{\resizebox{!}{2.95cm}{\includegraphics{Ch2LGluon.eps}}}}
\end{picture}
\end{center}
\caption{\it Chargino two-loop self-energy diagrams with inner gluon line}\label{Ch2LGluon}
\end{figure}

\begin{figure}[t]
\begin{center}
\begin{picture}(125,25)(0,0)
 \put(-4,-9){\mbox{\resizebox{!}{2.80cm}{\includegraphics{Ch2LGluino.eps}}}}
\end{picture}
\end{center}
\caption{\it Chargino two-loop self-energy diagrams with inner gluino line}\label{Ch2LGluino}
\end{figure}

\begin{figure}[t]
\begin{center}
\begin{picture}(125,30)(0,0)
 \put(20,-10){\mbox{\resizebox{!}{3.3cm}{\includegraphics{Ch2LSquark.eps}}}}
\end{picture}
\end{center}
\caption{\it Chargino two-loop self-energy diagrams with three inner squark lines}\label{Ch2LSquark}
\end{figure}

\begin{figure}[t]
\begin{center}
\begin{picture}(125,25)(0,0)
 \put(-19,-8){\mbox{\resizebox{!}{2.95cm}{\includegraphics{ChCT.eps}}}}
\end{picture}
\end{center}
\caption{\it Chargino two-loop self-energy diagrams with counter-term insertions}\label{ChCT}
\end{figure}

\section{Numerics}

\begin{table}[t]
\begin{center}
\begin{tabular}{|c|c||c|c||c|}
 \hline
 \ \mbox{Particle} \ & \ \ \mbox{Mass}\ \ & \ \mbox{``LHC''}\ & \ \mbox{``ILC''}\ & \ \mbox{``LHC+ILC''}\ \\
 \hline\hline
 $\tilde{\chi}^0_1$ & 97.7 & 4.8 & 0.05 & 0.05 \\
 $\tilde{\chi}^0_2$ & 183.9 & 4.7 & 1.2 & 0.08 \\
 $\tilde{\chi}^\pm_1$ & 183.7 & - & 0.55 & 0.55 \\
 $\tilde{q}_R$ & 547.2 & 7-12 & - & 5-11 \\
 $\tilde{q}_L$ & 564.7 & 8.7 & - & 4.9 \\
 $\tilde{g}$ & 607.1 & 8.0 & - & 6.5 \\ \hline
\end{tabular}
\end{center}
 \caption{\it Expected experimental accuracies of mass measurements at LHC, ILC and $\textrm{LHC}\oplus\textrm{ILC}$ \cite{Weiglein:2004hn, spa}; all values in GeV}
 \label{acc}
\end{table}

Our reference scenario used for the numerical analysis is the benchmark point SPS1a' \cite{spa}. The SUSY parameters at $Q_0=1$~TeV are $\tan\beta = 10$, $M_1 = 103.209$~GeV, $M_2 = 193.295$~GeV, $M_3 = 572.328$~GeV, $\mu = 401.62$~GeV, $A_t = -532.38$~GeV, $A_b = -938.91$~GeV, $M_{\tilde Q_3} = 470.91$~GeV, $M_{\tilde U_3} = 385.32$~GeV and $M_{\tilde D_3} = 501.37$~GeV; for further details see \cite{spa}. The tree-level chargino masses at this point are $M_{\tilde \chi_1^+} = 180.9$~GeV and $M_{\tilde \chi_2^+} = 422.2$~GeV.
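As a rough cross-check of these inputs (a sketch only: we insert the on-shell value $m_W\simeq80.4$~GeV, whereas the quoted tree-level values correspond to the exact \ifmmode{\overline{\rm DR}} \else{$\overline{\rm DR}$} \fi\ inputs of \cite{spa}), the tree-level chargino masses are the singular values of the chargino mass matrix
\begin{align*}
 X = \begin{pmatrix} M_2 & \sqrt{2}\, m_W \sin\beta \\ \sqrt{2}\, m_W \cos\beta & \mu \end{pmatrix} \; ,
\end{align*}
i.e. the square roots of the eigenvalues of $X^\dagger X$. With $M_2=193.3$~GeV, $\mu=401.6$~GeV and $\tan\beta=10$ this gives $m_{\tilde\chi_1^+}\approx 180$~GeV and $m_{\tilde\chi_2^+}\approx 423$~GeV, close to the values quoted above.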
Fig.~\ref{tanb} shows the chargino pole masses at SPS1a' as functions of $\tan\beta$. At the SPS1a' value $\tan\beta=10$ we find an absolute two-loop correction $\delta m_{\cha 1} = 0.2~{\rm GeV}$, which is of the order of magnitude of the expected experimental uncertainty for this particle, see Table~\ref{acc}. Therefore, the inclusion of these corrections is mandatory when extracting \ifmmode{\overline{\rm DR}} \else{$\overline{\rm DR}$} \fi-parameters from experiment.

For Fig.~\ref{ATB} we set the third generation trilinear breaking parameters equal, $A_3 = A_t = A_b$. This parameter affects the mixing in the squark sector and therefore enters all the two-loop diagrams through the couplings and the sfermion masses.

In Fig.~\ref{gaugeuni} we assume gauge unification, $M_1:M_2:M_3\simeq 1:2:6$. The masses are plotted as functions of $M_2$; all other values are taken from SPS1a'.

In Fig.~\ref{MSQ3} we show the one- and two-loop chargino mass shifts as functions of the third generation soft SUSY breaking masses $M_{\tilde Q}=M_{\tilde Q_3}=M_{\tilde U_3}=M_{\tilde D_3}$. Again, all other parameters are taken from SPS1a'.

Finally, in Fig.~\ref{phAt} we investigate the dependence on $\phi_{A_t}$, the complex phase of the soft trilinear breaking parameter $A_t$. Extending previous work \cite{Schofbeck:2006gs}, we include here also the neutralinos, taking $A_t$ complex. The first column shows the one-loop corrections to the chargino and neutralino pole masses, the second column the respective two-loop corrections. It can be seen that the influence of the phase is quite substantial at the two-loop level.

Fig.~\ref{scaledep} shows the decrease of the scale dependence when the loop order of the corrections is increased. The plots with the one-loop (red) and two-loop (black) masses are zooms of the plots to their left, where the running tree-level mass is included. The scale dependence is reduced considerably when going from the one- to the two-loop level. The remaining scale dependence stems from the uncancelled $\mathcal O(\alpha^2)$ contributions to the RGEs.
\begin{figure}[p]
 \begin{center}
\resizebox{!}{6cm}{\includegraphics{chTB.eps}}
 \caption{\it Absolute chargino mass shifts as functions of $\tan\beta$. All other parameters are from SPS1a'. Left: one-loop mass shifts. Right: two-loop mass shifts. In all plots we use black for $\cha 1$ and blue for $\cha 2$.}\label{tanb}
 \end{center}
\end{figure}

\begin{figure}[p]
\begin{center}
\hspace*{.3cm}
\resizebox{!}{6.1cm}{\includegraphics{chATB.eps}}
 \caption{\it Relative chargino mass shifts $\delta m_{\cha 1}$ and $\delta m_{\cha 2}$ as functions of the trilinear breaking parameters $A_3 = A_t = A_b$. Left: one-loop mass shifts. Right: two-loop mass shifts.}\label{ATB}
 \end{center}
\end{figure}

\begin{figure}[p]
\begin{center}
\vspace*{-.5cm}
\resizebox{!}{5.8cm}{\includegraphics{chgauge.eps}}
 \caption{\it Absolute chargino mass shifts $\delta m_{\cha 1}$ and $\delta m_{\cha 2}$ as functions of the soft gaugino breaking mass $M_2$. In this plot we assume gauge unification; all other parameters are from SPS1a'. Left: one-loop mass shifts. Right: two-loop mass shifts.}\label{gaugeuni}
 \end{center}
\end{figure}

\begin{figure}[p]
\begin{center}
\hspace*{.3cm}
\vspace*{-.5cm}\resizebox{!}{6.1cm}{\includegraphics{chMSQ3.eps}}
 \caption{\it Absolute chargino mass shifts as functions of $M_{\tilde Q}$, see text. All other parameters are from SPS1a'. Left: one-loop mass shifts. Right: two-loop mass shifts.}\label{MSQ3}
 \end{center}
\end{figure}

\begin{figure}[p]
\begin{center}
\hspace*{-1cm}
\resizebox{!}{10cm}{\includegraphics{phAt.eps}}
 \caption{\it Absolute neutralino and chargino mass shifts as functions of $\phi_{A_t}$, the complex phase of the soft trilinear breaking parameter $A_t$. First column: one-loop corrections. Second column: two-loop corrections. The black line is $\tilde \chi_1$, the blue line $\tilde \chi_2$, the red line $\tilde\chi^0_3$ and the green line $\tilde\chi^0_4$.}\label{phAt}
\hspace{-1cm}
 \end{center}
\end{figure}

\begin{figure}[t]
\begin{center}
\resizebox{!}{6cm}{\includegraphics{scaledependence.eps}}
\hspace{-1cm}
 \caption{\it Scale dependence of the chargino pole masses, see text.
The blue line is the tree-level running $\ifmmode{\overline{\rm DR}} \else{$\overline{\rm DR}$} \fi$-mass, the red line the one-loop corrected mass and the black line the two-loop corrected mass.}\label{scaledep}
 \end{center}
\hspace{-1cm}
\end{figure}

\section{Conclusions}

We have calculated the chargino pole masses in the MSSM to order $\mathcal{O}(\alpha\alpha_S)$ and performed a detailed numerical study. The typical size of the two-loop corrections is comparable to the expected experimental accuracy at future linear colliders and therefore needs to be taken into account when analyzing precision experiments. Our analytic expressions agree with previous generic results \cite{Martin:2005ch} and have been checked thoroughly. Extending previous work \cite{Schofbeck:2006gs}, we also include complex parameters in the neutralino case. Finally, we wrote a program \cite{Polxino} for this calculation with an interface to the commonly used SUSY Les Houches Accord \cite{Skands:2003cj}.

\section{Introduction}

One of the stylized features of financial data is that returns are uncorrelated, but their squares, or absolute values, are (highly) correlated, a property referred to as long memory (which will be defined precisely later). A second commonly accepted feature is that log-returns are heavy tailed, in the sense that some moment of the log-returns is infinite. The last one we want to mention is leverage. In the financial time series context, leverage is understood to mean negative dependence between previous returns and future volatility (i.e. a large negative return will be followed by a high volatility). Motivated by these empirical findings, one of the common modeling approaches is to represent log-returns $\{Y_i\}$ as a stochastic volatility sequence $Y_i = Z_i \sigma_i$, where $\{Z_i\}$ is an i.i.d.~sequence and $\{\sigma_i^2\}$ is the conditional variance or, more generally, a process which serves as a proxy for the volatility. In such a process, long memory can only be modeled through the sequence $\{\sigma_i\}$, and the tails can be modeled either through the sequence $\{Z_i\}$ or through $\{\sigma_i\}$, or both. The well-known GARCH processes belong to this class of models. The volatility sequence $\{\sigma_i\}$ is heavy tailed, unless the distribution of $Z_0$ has finite support, and leverage can be present. But long memory in squares cannot be modeled by GARCH processes. The FIGARCH process was introduced by \cite{baillie:bollerslev:mikkelsen:1996} for this purpose, but it is not known if it really has a long memory property, see e.g. \cite{douc:roueff:soulier:2008}.

To model long memory in squares, the so-called \textit{Long Memory in Stochastic Volatility} (LMSV) process was introduced in \cite{breidt:crato:delima:1998}, generalizing an earlier short memory version of this model. In this model, the sequences $\{Z_i\}$ and $\{\sigma_i\}$ are fully independent, and $\{\sigma_i\}$ is the exponential of a Gaussian long memory process. Tails and long memory are easily modeled in this way, but leverage is absent.
Throughout the paper, we will refer to this process as LMSV, even though we do not rule out the short memory case.

In order to model leverage, \cite{Nelson1991} introduced the EGARCH model (where E stands for exponential), later extended by \cite{bollerslev:mikkelsen:1996} to the FIEGARCH model (where FI stands for fractionally integrated) in order to model also long memory. In these models, $\{Z_i\}$ is a Gaussian white noise, and $\{\sigma_i\}$ is the exponential of a linear process with respect to a function of the Gaussian sequence $\{Z_i\}$. \cite{SurgailisViano2002} extended the type of dependence between the sequences $\{Z_i\}$ and $\{X_i\}$ and relaxed the Gaussian assumption for both sequences, but assumed finite moments of all orders. Thus long memory and leverage are possibly present in these models, but heavy tails are excluded.

A number of other models have been introduced, e.g. the models of Robinson and Zaffaroni \cite{RobinsonZaffaroni1997}, \cite{RobinsonZaffaroni1998} and their further extensions in \cite{Robinson2001}; LARCH($\infty$) processes \cite{GiraitisRobinsonSurgailis2000} and their bilinear extensions \cite{GiraitisSurgailis2002}, and LARCH$_{+}(\infty)$ \cite{Surgailis2008}; to mention a few. All of these models exhibit long memory, and some allow for leverage and heavy tails. The theory for these models is usually extremely involved, and only the asymptotic properties of partial sums are known in certain cases. We will not consider these models here. In~\cite{giraitis:leipus:robinson:surgailis:2004} the leverage effect and the long memory property of a LARCH($\infty$) model were studied thoroughly.

The theoretical effect of long memory is that the covariance of absolute powers of the returns $\{Y_i\}$ is slowly decaying and non-summable. This induces non-standard limit theorems, such as convergence of the partial sum process to the fractional Brownian motion, to finite variance non-Gaussian processes, or even to Lévy processes. In practice, long memory is often evidenced by sample covariance plots, showing an apparent slow decay of the covariance function. Therefore, it is of interest to investigate the asymptotic behaviour of the sample mean or of the partial sum process, and of the sample variance and covariances.

In the case where $\sigma_i=\sigma(X_i)$, $\{X_i\}$ is a stationary Gaussian process with summable covariances and $\sigma(x) = \exp(x)$, the asymptotic theory for the sample mean of LMSV processes with infinite variance is a straightforward consequence of a point process convergence result in \cite{DavisMikosch2001}. The limit is a Lévy stable process. \cite{SurgailisViano2002} considered the convergence of the partial sum process of absolute powers of generalized EGARCH processes with finite moments of all orders and showed convergence to the fractional Brownian motion. To the best of our knowledge, the partial sum process of absolute powers has never been studied in the context of heavy tails, long memory and possible leverage, for a general function $\sigma$.

The asymptotic theory for sample covariances of weakly dependent stationary processes with finite moments dates back to Anderson, see \cite{anderson:1971}.
The case of linear processes with regularly varying innovations was studied in \cite{davis:resnick:1985m} and \cite{DavisResnick1986}, for infinite variance innovations and for innovations with finite variance but infinite fourth moment, respectively. The limiting distribution of the sample covariances (suitably centered and normalized) is then a stable law. These results were obtained under conditions that rule out long memory. For infinite variance innovations with tail index $\alpha\in(1,2)$, these results were extended to long memory linear processes by \cite{kokoszka:taqqu:1996}. The limiting distributions of the sample covariances are again stable laws. However, if $\alpha \in (2,4)$, \cite{horvath:kokoszka:2008} showed that, as for partial sums, a dichotomy appears: the limiting distribution and the rate of convergence depend on an interplay between a memory parameter and the tail index $\alpha$. The limit is either stable (as in the weakly dependent or i.i.d.~case) or, if the memory is strong enough, the limiting distribution is non-Gaussian but with finite variance (the so-called Hermite-Rosenblatt distributions). If the fourth moment is finite, then the dichotomy is between Gaussian or finite variance non-Gaussian distributions (again of Hermite-Rosenblatt type); see \cite{hosking:1996}, \cite[Theorem 3.3]{horvath:kokoszka:2008} and \cite{wu:huang:zheng:2010}.

The asymptotic properties of sample autocovariances of GARCH processes have been studied by \cite{basrak:davis:mikosch:2002r}. Stable limits arise as soon as the marginal distribution has an infinite fourth moment. \cite{DavisMikosch2001} studied the sample covariance of a zero mean stochastic volatility process, under implicit conditions that rule out long memory, and also found stable limits. \cite{mcelroy:politis:2007} (generalized by \cite{jach:mcelroy:politis:2011}) studied partial sums and the sample variance of a possibly nonzero mean stochastic volatility process with infinite variance, where the volatility is a Gaussian long memory process (in which case it is not positive, but this is not important for the theoretical results). They obtained a dichotomy between stable and finite variance non-Gaussian limits, and also the surprising result that when the sample mean has a long memory type limit, then the studentized sample mean converges in probability to zero.

The first aim of this article is to study the asymptotic properties of partial sums, sample variance and covariances of stochastic volatility processes where the volatility is an arbitrary function of a Gaussian, possibly long memory process $\{X_i\}$ independent of the sequence $\{Z_i\}$, which is a heavy tailed i.i.d.~sequence. We refer to these processes as LMSV processes. The interest of considering functions other than the exponential is that it allows for other distributions than the log-normal for the volatility, while keeping the convenience of Gaussian processes, without which dealing with long memory processes rapidly becomes extremely involved or even intractable. The results we obtain extend in various aspects all the previous literature in this domain.

Another important aim of the paper is to consider models with possible leverage.
To do this, we need to give precise assumptions on the nature of the dependence between the sequences $\{Z_i\}$ and $\{X_i\}$, and since they are related in the process $\{Y_i\}$ through the function $\sigma$, these assumptions also involve the function $\sigma$. We have not looked for the widest generality, but the functions $\sigma$ that we consider include the exponential function and all symmetric polynomials with positive coefficients. This is not a severe restriction, since the function $\sigma$ must be nonnegative. Whereas the asymptotic theory for the partial sums is entirely similar to the case of LMSV processes without leverage, the asymptotic properties of sample autocovariances may be very different in the presence of leverage. Due to the dependence between the two sequences, the rates of convergence and the limiting distributions, when not stable, may differ entirely.

The article is organized as follows. In Section \ref{sec:prel} we formulate proper assumptions and prove some preliminary results on the marginal and multivariate tail behaviour of the sequence $\{Y_i\}$. In Section~\ref{sec:pp-conv}, we establish the limit theory for a point process based on the rescaled sequence $\{Y_i\}$. This methodology was first used in this context by \cite{DavisMikosch2001} and our proofs are closely related to those in this reference. Section \ref{sec:partial-sums} applies these results to obtain the functional asymptotic behaviour of the partial sum process of the sequence $\{Y_i\}$ and of its powers. In Section \ref{sec:sample-covariances} the limiting behaviour of the sample covariances and autocorrelations of the process $\{Y_i\}$ and of its powers is investigated. Proofs are given in Section \ref{sec:proofs}. In the Appendix we recall some results on multivariate Gaussian processes with long memory.

\subsection*{A note on the terminology}
We consider in this paper sequences $\{Y_i\}$ which can be expressed as $Y_i=Z_i\sigma(X_i)=Z_i\sigma_i$, where $\{Z_i\}$ is an i.i.d. sequence and $Z_i$ is independent of $X_i$ for each $i$. Originally, SV and LMSV processes refer to processes where the sequences $\{Z_i\}$ and $\{\sigma_i\}$ are fully independent, $\sigma_i=\sigma(X_i)$, $\{X_i\}$ is a Gaussian process and $\sigma(x)=\exp(x)$; see e.g. \cite{breidt:crato:delima:1998}, \cite{BreidtDavis1998}, \cite{DavisMikosch2001}. The names EGARCH and FIEGARCH, introduced respectively by \cite{Nelson1991} and \cite{bollerslev:mikkelsen:1996}, refer to the case where $\sigma(x)=\exp(x)$ and where $\{X_i\}$ is a non-Gaussian process which admits a linear representation with respect to an instantaneous function of the Gaussian i.i.d.~sequence $\{Z_i\}$, with dependence between the sequences $\{Z_i\}$ and $\{X_i\}$. \cite{SurgailisViano2002} still consider the case $\sigma(x)=\exp(x)$, but relax the assumptions on $\{Z_i\}$ and $\{X_i\}$, and retain the name EGARCH. The LMSV processes can be seen as limiting cases of EGARCH type processes, where the dependence between the sequences $\{Z_i\}$ and $\{X_i\}$ vanishes. In this article, we consider both LMSV models, and models with leverage which generalize the EGARCH models as defined by \cite{SurgailisViano2002}.
In order to refer to the latter models, we have chosen not to use the acronym EGARCH or FIEGARCH, since these models were defined with very precise specifications and this could create some confusion, nor to create a new one such as GEGARCH (with $G$ standing twice for generalized, which seems a bit too much) or (IV)LMSVwL (for (possibly) Infinite Variance Long Memory Stochastic Volatility with Leverage). Considering that the main feature which distinguishes these two classes of models is the presence or absence of leverage, we decided to refer to LMSV models when leverage is excluded, and to models with leverage when we include the possibility thereof.

\section{Model description, assumptions and tail behaviour}
\label{sec:prel}

Let $\{Z_i,i\in\mathbb{Z}\}$ be an i.i.d.~sequence whose marginal distribution has regularly varying tails:
\begin{equation}
 \label{eq:model-2}
 \lim_{x\to+\infty} \frac{\mathbb{P}(Z_0>x)}{x^{-\alpha}L(x)} = \beta \; , \quad \lim_{x\to+\infty} \frac{\mathbb{P}(Z_0<-x)}{x^{-\alpha}L(x)} = 1-\beta \; ,
\end{equation}
where $\alpha>0$, $L$ is slowly varying at infinity, and $\beta\in[0,1]$. Condition~(\ref{eq:model-2}) is referred to as the Balanced Tail Condition. It is equivalent to assuming that $\mathbb{P}(|Z_0|>x) = x^{-\alpha} L(x)$ and
\begin{align*}
 \beta = \lim_{x\to+\infty} \frac{\mathbb{P}(Z_0>x)}{\mathbb{P}(|Z_0|>x)} = 1 - \lim_{x\to+\infty} \frac{\mathbb{P}(Z_0<-x)}{\mathbb{P}(|Z_0|>x)} \; .
\end{align*}
We will say that two random variables $Y$ and $Z$ are right-tail equivalent if there exists $c\in(0,\infty)$ such that
\begin{align*}
 \lim_{x\to+\infty} \frac{\mathbb{P}(Y>x)}{\mathbb{P}(Z>x)} = c \; .
\end{align*}
If one of the random variables has a regularly varying right tail, then so has the other, with the same tail index. The converse is false, i.e. two random variables can have the same tail index without being tail equivalent. Two random variables $Y$ and $Z$ are said to be left-tail equivalent if $-Y$ and $-Z$ are right-tail equivalent, and they are said to be tail equivalent if they are both left- and right-tail equivalent.

Under (\ref{eq:model-2}), if moreover $\mathbb{E}\left[|Z_0|^{\alpha}\right] = \infty$, then $Z_1Z_2$ is regularly varying and (see e.g. \cite[Equation~(3.5)]{DavisResnick1986})
\begin{align*}
 \lim_{x\to+\infty} \frac{\mathbb{P}(Z_1>x)} {\mathbb{P}(Z_1Z_2>x)} & = 0 \; , \\
 \lim_{x\to+\infty} \frac{\mathbb{P}(Z_1Z_2 > x)} {\mathbb{P}(|Z_1Z_2| > x)} & = \beta^2 + (1-\beta)^2 \; .
\end{align*}
For example, if (\ref{eq:model-2}) holds and $|Z_0|$ has Pareto-type tails, i.e. $\mathbb{P}(|Z_0|>x) \sim cx^{-\alpha}$ as $x\to+\infty$ for some $c>0$, then $\mathbb{E}\left[|Z_0|^{\alpha}\right] = \infty$. We will further assume that $\{X_i\}$ is a stationary zero mean unit variance Gaussian process which admits a linear representation with respect to an i.i.d.~Gaussian white noise $\{\eta_i\}$ with zero mean and unit variance, i.e.
\begin{align}
 \label{eq:linear}
 X_i = \sum_{j=1}^{\infty} c_j \eta_{i-j}
\end{align}
with $\sum_{j=1}^\infty c_j^2=1$. We assume that the process $\{X_i\}$ either has short memory, in the sense that its covariance function is absolutely summable, or exhibits long memory with Hurst index $H \in (1/2,1)$, i.e.
its covariance function $\{\rho_n\}$ satisfies
\begin{equation}
 \label{eq:model-1}
 \rho_n = \mathrm{cov}(X_0,X_n) = \sum_{j=1}^\infty c_jc_{j+n} = n^{2H-2} \ell(n) \; ,
\end{equation}
where $\ell$ is a slowly varying function.

Let $\sigma$ be a deterministic, nonnegative and continuous function defined on $\mathbb R$. Define $\sigma_i = \sigma(X_i)$ and the stochastic volatility process $\{Y_i\}$ by
\begin{equation}
 \label{eq:model-3}
 Y_i = \sigma_i Z_i = \sigma(X_i) Z_i \; .
\end{equation}
At this moment we do not assume independence of $\{\eta_i\}$ and $\{Z_i\}$. Two special cases which we are going to deal with are:
\begin{itemize}
\item Long Memory Stochastic Volatility (LMSV) model: where $\{\eta_i\}$ and $\{Z_i\}$ are independent.
\item Model with leverage: where $\{(\eta_i,Z_i)\}$ is a sequence of i.i.d.~random vectors. For fixed $i$, $Z_i$ and $X_i$ are independent, but $X_{i}$ may not be independent of the past $\{Z_j, j<i\}$.
\end{itemize}

We will use the following assumption.

\begin{assumption}
 \label{hypo:iid-bivarie}
 The function $\sigma$ is continuous, $\mathbb{P}(\sigma(aX_0)>0)=1$ for all $a\ne0$, $\{(Z_i,\eta_i)\}$ is an i.i.d.~sequence and $Z_0$ satisfies the Balanced Tail Condition~(\ref{eq:model-2}) with $\mathbb{E}[|Z_0|^\alpha]=\infty$.
\end{assumption}

Let $\mathcal F_i$ be the sigma-field generated by $\eta_j,Z_j$, $j\leq i$. Then the following properties hold.
\begin{itemize}
\item $Z_i$ is $\mathcal F_i$-measurable and independent of $\mathcal F_{i-1}$;
\item $X_{i}$ and $\sigma_i$ are $\mathcal F_{i-1}$-measurable.
\end{itemize}

We will also impose the following condition on the continuous function $\sigma$: there exists $q>0$ such that
\begin{gather}
 \label{eq:sigma-assumption}
 \sup_{0\le\gamma\le 1} \mathbb{E}\left[\sigma^{q}(\gamma X_0)\right] < \infty \; .
\end{gather}
It is clearly fulfilled for all $q$ if $\sigma$ is a polynomial, or if $\sigma(x) = \exp(x)$ and $X_0$ is a standard Gaussian random variable. Note that if (\ref{eq:sigma-assumption}) holds for some $q>0$, then, for $q'\leq q/2$, it holds that
\begin{align*}
 \sup_{0\le\gamma\le 1} \mathbb{E} \left[ \sigma^{q'}(\gamma X_0) \sigma^{q'}(\gamma X_{s}) \right] < \infty \; , \ s=1,2,\dots
\end{align*}

\subsection{Marginal tail behaviour}
If (\ref{eq:sigma-assumption}) holds, then clearly $\mathbb{E}[\sigma^{q}(X_0)] < \infty$. If moreover $q>\alpha$, since $X_i$ and $Z_i$ are independent for fixed $i$, Breiman's Lemma (see e.g. \cite[Proposition 7.5]{resnick:2007}) yields that the distribution of $Y_0$ is regularly varying and
\begin{equation}
 \label{eq:Breiman-1}
 \lim_{x\to+\infty} \frac{\mathbb{P}(Y_0>x)}{\mathbb{P}(Z_0>x)} = \lim_{x\to+\infty} \frac{\mathbb{P}(Y_0 < -x)}{\mathbb{P}(Z_0 < -x)} = \mathbb{E}[\sigma^{\alpha}(X_0)] \; .
\end{equation}
Thus we see that there is no effect of leverage on marginal tails. Define
\begin{align}
 \label{eq:def-an}
 a_n = \inf\{x: \mathbb{P}(|Y_0|>x) < 1/n\} \; .
\end{align}
Then the sequence $a_n$ is regularly varying at infinity with index $1/\alpha$. Moreover, since $\sigma$ is nonnegative, $Z_0$ and $Y_0$ have the same skewness, i.e.
\begin{align*}
 & \lim_{n\to+\infty} n \mathbb{P}(Y_0>a_n) = 1 - \lim_{n\to+\infty} n \mathbb{P}(Y_0<-a_n) = \beta \; .
\end{align*}
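For instance (a direct computation, assuming Pareto-type tails $\mathbb{P}(|Z_0|>x)\sim cx^{-\alpha}$ as above), for $\sigma(x)=\exp(x)$ the constant in Breiman's Lemma and the normalization $a_n$ are explicit:
\begin{align*}
 \mathbb{E}[\sigma^{\alpha}(X_0)] = \mathbb{E}[\mathrm e^{\alpha X_0}] = \mathrm e^{\alpha^2/2} \; , \qquad \mathbb{P}(|Y_0|>x) \sim c\, \mathrm e^{\alpha^2/2}\, x^{-\alpha} \; , \qquad a_n \sim \big(c\, \mathrm e^{\alpha^2/2}\, n\big)^{1/\alpha} \; ,
\end{align*}
by the Gaussian moment generating function and inversion of the tail function.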
\subsection{Joint exceedances}
One of the properties of heavy tailed stochastic volatility models is that large values do not cluster. Mathematically, for all $h>0$,
\begin{equation}
 \label{eq:Breiman-0b}
 \mathbb{P}(|Y_0| > x,|Y_h| > x) = o(\mathbb{P}(|Y_0|>x)) \; .
\end{equation}
For the LMSV model, conditioning on $\sigma_0,\sigma_h$ yields
\begin{equation}
 \label{eq:Breiman-0}
 \lim_{x\to+\infty}\frac{\mathbb{P}(|Y_0|>x,|Y_h|>x)}{\mathbb{P}^2(|Z_0|>x)}=\mathbb{E}[(\sigma_0\sigma_h)^{\alpha}] \; ,
\end{equation}
if~(\ref{eq:sigma-assumption}) holds for some $q>2\alpha$. Property (\ref{eq:Breiman-0b}) still holds when leverage is present. Indeed, let $F_Z$ denote the distribution function of $Z_0$ and $\bar F_Z=1-F_Z$. Recall that $\mathcal F_{h-1}$ is the sigma-field generated by $\eta_j,Z_j,j\le h-1$. Thus, $Y_0$ and $X_h$ are measurable with respect to $\mathcal F_{h-1}$, and $Z_h$ is independent of $\mathcal F_{h-1}$. Conditioning on $\mathcal F_{h-1}$ yields
\begin{align*}
 \mathbb{P}(Y_0>x, Y_h>x) = \mathbb{E}[ \bar F_Z(x/\sigma_h) \mathbf{1}_{\{Y_0>x\}}]\; .
\end{align*}
Next, fix some $\epsilon>0$. Applying Lemma~\ref{lem:bound-potter}, there exists a constant $C$ such that for all $x \ge 1$,
\begin{align*}
 \frac{\mathbb{P}\left(Y_0>x, Y_h>x\right)}{\mathbb{P}(Z_0>x)} = \mathbb{E} \left[ \frac{\bar F_Z(x/{\sigma_h})} {\bar F_Z(x)} \mathbf 1_{\{Y_0>x\}}\right]\le C \mathbb{E}\left[(1\vee \sigma_h)^{\alpha+\epsilon} \mathbf 1_{\{Y_0>x\}}\right].
\end{align*}
If (\ref{eq:sigma-assumption}) holds for some $q>\alpha$, and $\epsilon$ is chosen small enough so that $\alpha+\epsilon<q$, then the right-hand side of the previous bound tends to zero as $x\to+\infty$ by dominated convergence, and (\ref{eq:Breiman-0b}) follows.

Consider now the products $Y_0Y_h$. In the LMSV case, if (\ref{eq:sigma-assumption}) holds for some $q>2\alpha$, then
\begin{equation}
 \label{eq:Breiman-2}
 \lim_{x\to+\infty} \frac{\mathbb{P}(Y_0Y_h>x)}{\mathbb{P}(Z_0Z_1>x)}=\mathbb{E}[(\sigma_0\sigma_h)^{\alpha}] \; , \quad \lim_{x\to+\infty} \frac{\mathbb{P}(Y_0Y_h<-x)}{\mathbb{P}(Z_0Z_1<-x)}=\mathbb{E}[(\sigma_0\sigma_h)^{\alpha}] \;.
\end{equation}
For further reference, we gather in a Lemma some properties of the products in the LMSV case, some of which are mentioned in \cite{DavisMikosch2001} in the case $\sigma(x)=\exp(x)$.
\begin{lem}
 \label{lem:asympt-indep-lmsv}
 Let Assumption~\ref{hypo:iid-bivarie} hold and let the sequences $\{\eta_i\}$ and $\{Z_i\}$ be mutually independent. Assume that (\ref{eq:sigma-assumption}) holds with $q>2\alpha$. Then $Y_0Y_1$ is tail equivalent to $Z_0Z_1$ and has regularly varying and balanced tails with index $\alpha$. Moreover, for all $h\geq1$, there exist real numbers $d_+(h)$, $d_-(h)$ such that
 \begin{align}
 \lim_{x\to \infty} \frac{\mathbb{P}(Y_0Y_h>x)}{\mathbb{P}(|Y_0Y_1|>x)} = d_+(h) \; , \ \ \lim_{x\to \infty} \frac{\mathbb{P}(Y_0Y_h<-x)}{\mathbb{P}(|Y_0Y_1|>x)} = d_-(h) \; . \label{eq:defd+-}
 \end{align}
 Let $b_n$ be defined by
 \begin{align}
 \label{eq:def-bn}
 b_n = \inf\{x: \mathbb{P}(|Y_0Y_1|>x) \leq 1/n\} \; .
 \end{align}
 The sequence $\{b_n\}$ is regularly varying with index $1/\alpha$ and
 \begin{align}
 a_n = o(b_n) \; . \label{eq:domination}
 \end{align}
 For all $i\ne j>0$, it holds that
 \begin{gather}
 \lim_{n\to \infty} n\mathbb{P}(|Y_0| > a_n x \; , \ |Y_0Y_j| > b_n x) = 0 \; , \label{eq:indep1} \\
 \lim_{n\to \infty} n\mathbb{P}(|Y_0Y_i|> b_n x \; , \ |Y_0Y_j| > b_n x) = 0 \; .
 \label{eq:indep-products}
 \end{gather}
\end{lem}
The quantities $d_+(h)$ and $d_-(h)$ are easily computed in the LMSV case:
\begin{align*}
 d_+(h) & = \{\beta^2+(1-\beta)^2\} \frac {\mathbb{E}[\sigma^\alpha(X_0)\sigma^\alpha(X_h)]} {\mathbb{E}[\sigma^\alpha(X_0)\sigma^\alpha(X_1)]} \; , \ \ d_-(h) = 2\beta(1-\beta) \frac {\mathbb{E}[\sigma^\alpha(X_0)\sigma^\alpha(X_h)]} {\mathbb{E}[\sigma^\alpha(X_0)\sigma^\alpha(X_1)]} \; .
\end{align*}

When leverage is present, many different situations can occur, depending on the type of dependence between $Z_0$ and $\eta_0$, and also on the function~$\sigma$. We consider the exponential function $\sigma(x)=\exp(x)$, and a class of subadditive functions. In each case we give an assumption on the type of dependence between $Z_0$ and $\eta_0$ that allows us to prove our results. Examples are given after the Lemmas.

\begin{lem}
 \label{lemma:asymp-indep-EGARCH-expo}
 Assume that $\sigma(x) = \exp(x)$ and that $\exp(k\eta_0) Z_0$ is tail equivalent to $Z_0$ for all $k \in\mathbb R$. Then all the conclusions of Lemma~\ref{lem:asympt-indep-lmsv} hold.
\end{lem}

\begin{lem}
 \label{lemma:asymp-indep-EGARCH-quadratic}
 Assume that the function $\sigma$ is subadditive, i.e. there exists a constant $C>0$ such that for all $x,y\in\mathbb R$, $\sigma(x+y) \leq C\{\sigma(x)+\sigma(y)\}$. Assume that for any $a,b>0$, $\sigma(a\xi+b\eta_0)Z_0$ is tail equivalent to $Z_0$, where $\xi$ is a standard Gaussian random variable independent of $\eta_0$, and that $\sigma(b\eta_0)Z_0$ is either tail equivalent to $Z_0$ or satisfies $\mathbb{E}[\{\sigma(b\eta_0)|Z_0|\}^q]<\infty$ for some $q>\alpha$. Then all the conclusions of Lemma~\ref{lem:asympt-indep-lmsv} hold.
\end{lem}

\begin{example}
 \label{ex:decreasing}
 Assume that $Z_0 = |\eta_0|^{-1/\alpha}U_0$ with $\alpha>0$, where $U_0$ is independent of $\eta_0$ and $\mathbb{E}[|U_0|^q]<\infty$ for some $q>\alpha$. Then $Z_0$ is regularly varying with index $-\alpha$.
 \begin{itemize}
 \item Case $\sigma(x)=\exp(x)$. For each $c>0$, $Z_0\exp(c\eta_0)$ is tail equivalent to $Z_0$. See Lemma~\ref{lem:tail-equivalence} for a proof of this fact.
 \item Case $\sigma(x)=x^2$. Let $q' \in (\alpha,q \wedge \{\alpha/(1-2\alpha)_+\})$. Then
 \begin{align*}
 \mathbb{E}[\sigma^{q'}(b\eta_0)|Z_0|^{q'}] = b^{2q'} \mathbb{E}[|\eta_0|^{q'(2-1/\alpha)} |U_0|^{q'}] < \infty \; .
 \end{align*}
 Furthermore, let $\xi$ be a standard Gaussian random variable independent of $\eta_0$ and~$Z_0$. Then,
 \begin{align*}
 \sigma(a\xi+b\eta_0)Z_0 = a^2\xi^2 Z_0 + 2ab\xi \mathrm{sign}(\eta_0)|\eta_0|^{1-1/\alpha} U_0 + b^2 |\eta_0|^{2-1/\alpha} U_0 \; .
 \end{align*}
 Since $\xi$ is independent of $Z_0$ and Gaussian, by Breiman's lemma, the first term on the right-hand side of the previous equation is tail equivalent to $Z_0$. The last two terms have finite moments of order $q'$ for some $q'>\alpha$ and do not contribute to the tail. Thus the assumptions of Lemma~\ref{lemma:asymp-indep-EGARCH-quadratic} are satisfied.
 \end{itemize}
\end{example}
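A sketch of the first claim in Example~\ref{ex:decreasing} (conditioning on $U_0$ and using that the standard Gaussian density equals $(2\pi)^{-1/2}$ at the origin):
\begin{align*}
 \mathbb{P}(|Z_0|>x) = \mathbb{P}\big(|\eta_0| < |U_0|^{\alpha} x^{-\alpha}\big) \sim \sqrt{2/\pi}\; \mathbb{E}\big[|U_0|^{\alpha}\big]\, x^{-\alpha} \; , \qquad x\to+\infty \; ,
\end{align*}
the expectation being finite since $q>\alpha$; regular variation with index $-\alpha$ follows.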
\begin{example}
 \label{xmpl:indep}
 Let $Z_0'$ have regularly varying balanced tails with index $-\alpha$, independent of $\eta_0$. Let $\Psi_1(\cdot)$ and $\Psi_2(\cdot)$ be polynomials and define $Z_0= Z_0'\Psi_1(\eta_0)+\Psi_2(\eta_0)$. Then, by Breiman's Lemma, $Z_0$ is tail equivalent to $Z'_0$, and it is easily checked that the assumptions of Lemma~\ref{lemma:asymp-indep-EGARCH-expo} are satisfied, as are the assumptions of Lemma~\ref{lemma:asymp-indep-EGARCH-quadratic} with $\sigma$ being any symmetric polynomial with positive coefficients. We omit the details.
\end{example}

\section{Point process convergence}
\label{sec:pp-conv}
For $s=0,\ldots,h$, define a Radon measure $\lambda_s$ on $[-\infty,\infty]\setminus \{0\}$ by
\begin{align*}
 \lambda_{0}(\mathrm d x) = \alpha \left\{ \beta x^{-\alpha-1} \mathbf 1_{(0,\infty)}(x) + (1-\beta) (-x)^{-\alpha-1} \mathbf 1_{(-\infty,0)}(x) \right\} \mathrm d x \; , \\
 \lambda_{s}(\mathrm d x) = \alpha \left\{ d_+(s) x^{-\alpha-1} \mathbf 1_{(0,\infty)}(x) + d_-(s) (-x)^{-\alpha-1} \mathbf 1_{(-\infty,0)}(x) \right\}\mathrm d x \; ,
\end{align*}
where $d_{\pm}(s)$ are defined in (\ref{eq:defd+-}). For $s=0,\dots,h$, define the Radon measure $\nu_s$ on $[0,1] \times {[-\infty,\infty]\setminus\{0\}}$~by
\begin{align*}
 \nu_{s}(\mathrm d t,\mathrm d x) & = \mathrm d t \, \lambda_{s}(\mathrm d x) \; .
\end{align*}
Set ${\bf Y}_{n,i} = (a_n^{-1}Y_i,b_n^{-1} Y_iY_{i+1}, \ldots, b_n^{-1} Y_iY_{i+h})$, where $a_n$ and $b_n$ are defined in~(\ref{eq:def-an}) and (\ref{eq:def-bn}) respectively, and let $N_{n}$ be the point process defined on $[0,1] \times ([-\infty,\infty]^{h+1} \setminus \{{\bf 0}\})$ by
\begin{align*}
 N_{n} = \sum_{i=1}^n \delta_{(i/n,{\bf Y}_{n,i})} \; ,
\end{align*}
where $\delta_x$ denotes the Dirac measure at $x$.

Our first result is that for the usual univariate point process of exceedances, there is no effect of leverage. This is a consequence of the asymptotic independence (\ref{eq:Breiman-0b}).
\begin{prop}
 \label{prop:pp-univarie}
 Let Assumption~\ref{hypo:iid-bivarie} hold and assume that $\sigma$ is a continuous function such that (\ref{eq:sigma-assumption}) holds with $q>\alpha$. Then $\sum_{i=1}^n \delta_{(i/n,{Y}_{i}/a_n)}$ converges weakly to a Poisson point process with mean measure $\nu_{0}$.
\end{prop}

For the multivariate point process $N_n$, we consider first LMSV models and then models with leverage.

\subsection{Point process convergence: LMSV case}
\begin{prop}
 \label{prop:sv-pp}
 Let Assumption~\ref{hypo:iid-bivarie} hold and assume that the sequences $\{\eta_i\}$ and $\{Z_i\}$ are independent. Assume that the continuous volatility function $\sigma$ satisfies (\ref{eq:sigma-assumption}) for some $q>2\alpha$. Then
\begin{equation}
 \label{eq:pp-conv}
 N_{n} \Rightarrow \sum_{i=0}^h \sum_{k=1}^{\infty} \delta_{(t_k^{(i)},j_{k,i}\mathbf e_i)},
\end{equation}
where $\sum_{k=1}^{\infty} \delta_{(t_k^{(0)},j_{k,0})}, \dots, \sum_{k=1}^{\infty} \delta_{(t_k^{(h)},j_{k,h})}$ are independent Poisson processes with mean measures $\nu_{0},\ldots,\nu_{h}$, and ${\bf e}_i$ is the $i$-th canonical unit vector of $\mathbb R^{h+1}$, the coordinates being indexed by $0,\dots,h$. Here, $\Rightarrow$ denotes convergence in distribution in the space of Radon point measures on $(0,1]\times[-\infty,\infty]^{h+1} \setminus \{{\bf 0}\}$ equipped with the vague topology.
\end{prop}
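As a sanity check of the normalization (a two-line computation, not part of the proof), the mean measure in Proposition~\ref{prop:pp-univarie} can be verified on rectangles: by~(\ref{eq:Breiman-1}), regular variation and the definition~(\ref{eq:def-an}) of $a_n$,
\begin{align*}
 \mathbb{E}\Big[\sum_{i=1}^n \delta_{(i/n,Y_i/a_n)}\big((s,t]\times(x,+\infty]\big)\Big] \sim n(t-s)\,\mathbb{P}(Y_0>a_n x) \to (t-s)\,\beta x^{-\alpha} = \nu_0\big((s,t]\times(x,+\infty]\big) \; ,
\end{align*}
in agreement with $\lambda_0\big((x,+\infty]\big)=\beta x^{-\alpha}$.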
\subsection{Point process convergence: case of leverage}

\begin{prop}
 \label{prop:egarch-pp-expo}
 Let Assumption~\ref{hypo:iid-bivarie} hold. Assume that $\sigma(x) = \exp(x)$ and that $Z_0\exp(c \eta_0)$ is tail equivalent to $Z_0$ for all $c$. Then the convergence~(\ref{eq:pp-conv}) holds.
\end{prop}

\begin{prop}
 \label{prop:egarch-pp-quadratic}
 Let Assumption~\ref{hypo:iid-bivarie} hold. Assume that the distribution of $(Z_0,\eta_0)$ and the function $\sigma$ satisfy the assumptions of Lemma~\ref{lemma:asymp-indep-EGARCH-quadratic} and moreover
 \begin{align}
 |\sigma(x+y) - \sigma(x+z)| \leq C (\sigma(x) \vee 1) \{(\sigma( y) \vee 1) +(\sigma(z) \vee 1) \} |y-z| \; . \label{eq:sigma-condition-truncation}
 \end{align}
 Assume that condition~(\ref{eq:sigma-assumption}) holds for some $q>2\alpha$. Then the convergence~(\ref{eq:pp-conv}) holds.
\end{prop}
The condition~(\ref{eq:sigma-condition-truncation}) is an ad-hoc condition which is needed for a truncation argument used in the proof. It is satisfied by all symmetric polynomials with positive coefficients. (The proof would not be simplified by considering polynomials rather than functions satisfying this assumption.)

\section{Partial Sums}
\label{sec:partial-sums}

Define
\begin{align*}
 S_n(t) & = \sum_{i=1}^{[nt]} Y_i \; , \ \ S_{p,n}(t) = \sum_{i=1}^{[nt]} |Y_i|^p \; .
\end{align*}
For any function $g$ such that $\mathbb{E}[g^2(\eta_0)]<\infty$ and any integer $q\geq1$, define
\begin{align*}
 J_q(g) = \mathbb{E}[H_q(\eta_0)g(\eta_0)]\; ,
\end{align*}
where $H_q$ is the $q$-th Hermite polynomial. The Hermite rank $\tau(g)$ of the function $g$ is the smallest positive integer $\tau$ such that $J_\tau(g) \ne 0$. Let $R_{\tau,H}$ be the so-called Hermite process of order $\tau$ with self-similarity index $1-\tau(1-H)$. See \cite{arcones:1994} or Appendix~\ref{sec:LRD-Gaussian} for more details. Let $\stackrel{\scriptstyle \mathcal D}{\Rightarrow}$ denote convergence in the Skorokhod space $\mathcal D([0,1],\mathbb R)$ of real valued right-continuous functions with left limits, endowed with the $J_1$ topology, cf. \cite{whitt:2002}.
\begin{thm}
 \label{thm:partial-sums-egarch}
 Let Assumption~\ref{hypo:iid-bivarie} hold and assume that the function $\sigma$ is continuous and~(\ref{eq:sigma-assumption}) holds for some $q>2\alpha$.
 \begin{enumerate}[(i)]
 \item If $1 < \alpha < 2$ and $\mathbb{E}[Z_0]=0$, then $a_n^{-1} S_{n}$ converges weakly in the space $\mathcal D([0,1],\mathbb R)$ endowed with Skorokhod's $J_1$ topology to an $\alpha$-stable L\'evy process with skewness~$2\beta-1$.
 \end{enumerate}
 Let $\tau_p=\tau(\sigma^p)$ be the Hermite rank of the function $\sigma^{p}$.
\begin{enumerate}[(i)] \addtocounter{enumi}{+1}
\item If $p<\alpha<2p$ and $1-\tau_p(1-H)<p/\alpha$, then $a_n^{-p} (S_{p,n} - [n\,\cdot\,]\, \mathbb{E}[|Y_0|^p])$ converges weakly to an $\alpha/p$-stable L\'evy process.
\item If $p<\alpha<2p$ and $1-\tau_p(1-H)>p/\alpha$, then
 \begin{align}
 \label{eq:lrd-convergence-egarch}
 n^{-1} \rho_n^{-\tau_p/2} (S_{p,n} - [n\,\cdot\,]\, \mathbb{E}[|Y_0|^p]) \stackrel{\scriptstyle \mathcal D}{\Rightarrow} \frac{J_{\tau_p}(\sigma^p)\mathbb{E}[|Z_1|^p]} {\tau_p !} R_{\tau_p,H} \; .
 \end{align}
\item If $p> \alpha$, then $a_n^{-p} S_{p,n} \stackrel{\scriptstyle \mathcal D}{\Rightarrow} L_{\alpha/p}$, where $L_{\alpha/p}$ is a positive $\alpha/p$-stable L\'evy process.
\end{enumerate}
\end{thm}

Note that there is no effect of leverage. The situation will be different for the sample covariances. The fact that, when the marginal distribution has infinite mean, long memory does not play any role and only a stable limit can arise, was observed in a different context by \cite{davis:1983}.
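As an illustration of how the Hermite rank enters (a computation, using the generating function of the Hermite polynomials), take $\sigma(x)=\exp(x)$: then
\begin{align*}
 J_q(\sigma^p) = \mathbb{E}[H_q(\eta_0)\,\mathrm e^{p\eta_0}] = p^q\, \mathrm e^{p^2/2} \; , \qquad\text{so}\quad \tau_p = \tau(\sigma^p) = 1 \; ,
\end{align*}
and since $1-\tau_p(1-H)=H$, the dichotomy in the theorem reduces to comparing $H$ with $p/\alpha$.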
\section{Sample covariances}
\label{sec:sample-covariances}

In order to explain more clearly the nature of the results and the problems that arise, we start by considering the sample covariances of the sequence $\{Y_i\}$, without assuming that $\mathbb{E}[Z_0]=0$. For notational simplicity, assume that we observe a sample of length $n+h$. Assume that $\alpha>1$. Let $\bar Y_n=n^{-1}\sum_{j=1}^nY_j$ denote the sample mean, $m=\mathbb{E}[Z_0]$, $\mu_Y=\mathbb{E}[Y_0]=m\mathbb{E}[\sigma_0]$ and define the sample covariances by
\begin{align*}
 \hat\gamma_{n}(s) & = \frac1n \sum_{i=1}^{n} (Y_i-\bar Y_n) (Y_{i+s}-\bar Y_n) \; , \ 0 \leq s \leq h \; .
\end{align*}
For simplicity, we have defined all the sample covariances as sums with the same range of indices $1,\dots,n$. This obviously does not affect the asymptotic theory. For $s=0,\dots, h$, define furthermore
\begin{align*}
C_n(s) & = \frac1n \sum_{i=1}^{n} Y_iY_{i+s} \; .
\end{align*}
Then, defining $\gamma(s) = \mathrm{cov}(Y_0,Y_s)$, we have, for $s=0,\dots,h$,
\begin{align*}
 \hat \gamma_n(s) -\gamma(s) & = C_n(s) - \mathbb{E}[Y_0Y_s] + \mu_Y^2-\bar Y_n^2 +O_P(1/n) \; .
\end{align*}
Under the assumptions of Theorem~\ref{thm:partial-sums-egarch}, $\bar Y_n^2-\mu_Y^2 = O_P(a_n)$. This term never contributes to the limit. Consider now $C_n(s)$. Recall that $\mathcal F_i$ is the sigma-field generated by $(\eta_j,Z_j)$, $j\leq i$, and define
\begin{align*}
\hat{X}_{i,s} = \frac{\mathbb{E}[X_{i+s} \mid \mathcal F_{i-1}]}{\{\mathrm{var}(\mathbb{E}[X_{i+s} \mid \mathcal F_{i-1}])\}^{1/2}} = \varsigma_s^{-1} \sum_{j=s+1}^\infty c_j \eta_{i+s-j} \; ,
\end{align*}
with $\varsigma_s^2 = \sum_{j=s+1}^\infty c_j^2$. Let $K$ be the function defined on $\mathbb R^2$ by
\begin{align}
 \label{eq:def-K}
 K(x,\hat x) = \mathbb{E}[Z_s] \mathbb{E} \left[ Z_0 \sigma(x) \sigma\left( \sum_{j=1}^s c_j \eta_{s-j} + \varsigma_s\hat x \right) \right] - \mathbb{E}[Y_0Y_s] \; .
\end{align}
Then, for each $i\ge 0$, it holds that
\begin{align*}
 \mathbb{E}[ Y_iY_{i+s} \mid \mathcal F_{i-1} ] - \mathbb{E}[ Y_0Y_{s}] = K (X_i,\hat{X}_{i,s}) \; .
\end{align*}
We see that if $m=\mathbb{E}[Z_s]=0$, then the function $K$ vanishes identically. We next write
\begin{align*}
 C_n(s) - \mathbb{E}[Y_0Y_s] & = \frac 1n \sum_{i=1}^{n} \{Y_iY_{i+s} - \mathbb{E}[Y_iY_{i+s} \mid \mathcal F_{i-1} ] \} + \frac1n \sum_{i=1}^{n} K(X_i,\hat{X}_{i,s})=\frac1n M_{n,s} + \frac 1n T_{n,s} \; .
\end{align*}
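For later use, we sketch the computation behind~(\ref{eq:def-K}). Conditionally on $\mathcal F_{i-1}$, $X_i$ and $\hat X_{i,s}$ are known, $Z_{i+s}$ is independent of all the other variables involved, and $(Z_i,\eta_i,\dots,\eta_{i+s-1})$ is independent of $\mathcal F_{i-1}$; hence
\begin{align*}
 \mathbb{E}[ Y_iY_{i+s} \mid \mathcal F_{i-1} ] = \sigma(X_i)\, \mathbb{E}[Z_{i+s}]\; \mathbb{E}\left[ Z_i\, \sigma\left( \sum_{j=1}^{s} c_j\eta_{i+s-j} + \varsigma_s \hat X_{i,s} \right) \Big|\, \mathcal F_{i-1} \right] = K(X_i, \hat X_{i,s}) + \mathbb{E}[Y_0Y_s] \; ,
\end{align*}
the leverage entering only through the pair $(Z_i,\eta_i)$ via the term $c_s\eta_i$.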
The point process convergence results of the previous section will allow us to prove that $b_n^{-1} M_{n,s}$ has a stable limit. If $m=\mathbb{E}[Z_0]=0$, then this will be the limit of $b_n^{-1}(C_n(s)- \mathbb{E}[Y_0Y_s])$, regardless of the presence of leverage. We can thus state a first result. Let $\stackrel{\scriptstyle d}{\to}$ denote weak convergence of sequences of finite dimensional random vectors.
\begin{thm}
 Assume that $\alpha\in (1,2)$ and $\mathbb{E}[Z_0]=0$. Under the assumptions of Propositions~\ref{prop:sv-pp}, \ref{prop:egarch-pp-expo} or \ref{prop:egarch-pp-quadratic},
 \begin{align*}
 n b_n^{-1} (\hat\gamma_{n}(1)-\gamma(1),\dots,\hat\gamma_{n}(h) -\gamma(h)) \stackrel{\scriptstyle d}{\to} (\mathcal L_1,\dots,\mathcal L_h) \; ,
 \end{align*}
 where $\mathcal L_1,\dots,\mathcal L_h$ are independent $\alpha$-stable random variables.
\end{thm}
This result was obtained by \cite{DavisMikosch2001} in the (LM)SV case for the function $\sigma(x)=\exp(x)$ and under implicit conditions that rule out long memory.

We continue the discussion under the assumption that $m\ne0$. Then the term $T_{n,s}$ is the partial sum of a sequence which is a function of a bivariate Gaussian sequence. It can be treated by applying the results of \cite{arcones:1994}. Its rate of convergence and limiting distribution will depend on the Hermite rank of the function $K$ with respect to the bivariate Gaussian vector $(X_0,\hat{X}_{0,s})$, which is fully characterized by the covariance between $X_0$ and $\hat{X}_{0,s}$,
\begin{align*}
 \mathrm{cov}(X_0,\hat{X}_{0,s}) = \varsigma_s^{-1} \sum_{j=1}^\infty c_j c_{j+s} = \varsigma_s^{-1} \rho_s \; .
\end{align*}

\subsubsection*{LMSV case}
Since in this context the noise sequence $\{Z_i\}$ and the volatility sequence $\{\sigma_i\}$ are independent, we compute easily that
\begin{align*}
 K(x,y) = m^2 \sigma(x) \mathbb{E}[\sigma(\varkappa_s \zeta + c_s \eta_0 + \varsigma_s y)] - m^2 \mathbb{E}[\sigma(X_0)\sigma(X_s)]\;,
\end{align*}
where $\varkappa_s^2 = \sum_{j=1}^{s-1} c_j^2$ and $\zeta$ is a standard Gaussian random variable, independent of $\eta_0$. Thus, the Hermite rank of the function $K$ depends only on the function $\sigma$ (but is not necessarily equal to the Hermite rank of $\sigma$).

\subsubsection*{Case of leverage}
In that case, the dependence between $\eta_0$ and $Z_0$ comes into play. We now have
\begin{align*}
 K(x,y) = m \sigma(x) \mathbb{E}[\sigma(\varkappa_s \zeta + c_s \eta_0 + \varsigma_sy)Z_0] - m \mathbb{E}[\sigma(X_0)\sigma(X_s)Z_0] \; ,
\end{align*}
and now the Hermite rank of $K$ depends also on $Z_0$. Different situations can occur. We give two examples.
\begin{example}
 Consider the case $\sigma(x) = \exp(x)$. Then
 \begin{align*}
 \mathbb{E}[ Y_0Y_{s} \mid \mathcal F_{-1} ] & = \mathbb{E}[Z_0 Z_s \exp(X_0) \exp(X_s) \mid \mathcal F_{-1}]\\
& = m \mathbb{E}[Z_0 \exp(c_s \eta_0)] \mathbb{E}\left[\exp\left(\sum_{j=1}^{s-1} c_j \eta_{s-j}\right) \right] \exp\left(X_0+\varsigma_s \hat{X}_{0,s}\right) \; .
 \end{align*}
 Denote $\tilde m = \mathbb{E}[Z_0 \exp(c_s \eta_0)]$ and note that $\mathbb{E}\left[\exp\left(\sum_{j=1}^{s-1} c_j \eta_{s-j}\right) \right] = \exp\left(\varkappa_s^2/2\right)$. Thus
 \begin{align*}
 K(x,y) = m \tilde m \, \exp\left(\varkappa_s^2/2\right) \left\{\exp\left(x+\varsigma_sy\right) - \mathbb{E}\left[ \exp\left(X_0+\varsigma_s \hat{X}_{0,s}\right) \right]\right\} \; .
 \end{align*}
 If $\mathbb{E}[Z_0]=0$ or $\mathbb{E}[Z_0\exp\left(c_s\eta_0\right)]=0$, then the function $K$ vanishes identically and $T_{n,s}=0$. Otherwise, the Hermite rank of $K$ with respect to $(X_0,\hat{X}_{0,s})$ is 1. Thus, applying \cite[Theorem~6]{arcones:1994} (in the one-dimensional case) yields that $n^{-1} \rho_n^{-1/2} T_{n,s}$ converges weakly to a zero mean Gaussian distribution. The rate of convergence is the same as in the LMSV case but the asymptotic variance is different unless $\mathbb{E}[Z_0 \exp(c_s\eta_0)] =\mathbb{E}[Z_0] \mathbb{E}[ \exp(c_s\eta_0)]$.
\end{example}
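To make the constant $\tilde m$ concrete, take for illustration $Z_0=Z_0'-\lambda\eta_0$ with $Z_0'$ heavy tailed and independent of $\eta_0$, and $\lambda>0$; this is a special case of Example~\ref{xmpl:indep} (with $\Psi_1\equiv1$ and $\Psi_2(x)=-\lambda x$), the parameter $\lambda$ being introduced here only for this sketch. The Gaussian identities $\mathbb{E}[\mathrm e^{c\eta_0}]=\mathrm e^{c^2/2}$ and $\mathbb{E}[\eta_0\,\mathrm e^{c\eta_0}]=c\,\mathrm e^{c^2/2}$ give
\begin{align*}
 \tilde m = \mathbb{E}[Z_0\,\mathrm e^{c_s\eta_0}] = \mathrm e^{c_s^2/2}\left(\mathbb{E}[Z_0'] - \lambda c_s\right) \; ,
\end{align*}
so that for $c_s>0$ the leverage term drives $\tilde m$ below $m\,\mathbb{E}[\mathrm e^{c_s\eta_0}]$, and the asymptotic variance indeed differs from the LMSV case.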
\begin{example}
 Consider $\sigma(x)=x^2$. Denote $\check{X}_{i,s} = \varkappa_s^{-1} \sum_{j=1}^{s-1} c_j \eta_{i+s-j}$. Then
 \begin{align*}
 \mathbb{E}[ Y_0Y_{s} \mid \mathcal F_{-1} ] & = \mathbb{E}[Z_0 Z_s X_0^2 (\varkappa_s \check{X}_{0,s} + \varsigma_s \hat{X}_{0,s} + c_s \eta_0)^2 \mid \mathcal F_{-1}] \\
 & = m X_0^2 \left\{ \varkappa_s^2 m + c_s^2 \mathbb{E}[Z_0\eta_0^2] + \varsigma_s^2 m (\hat{X}_{0,s})^2 + 2\varsigma_s c_s \mathbb{E}[Z_0\eta_0] \hat{X}_{0,s} \right\}.
 \end{align*}
Thus
\begin{align*}
 K(x,y) & = \varsigma_s^2 m^2 \left(x^2y^2 - \mathbb{E}[X_0^2(\hat{X}_{0,s})^2] \right) + 2 \varsigma_s c_s m\,\mathbb{E}[Z_0\eta_0] \left\{x^2y-\mathbb{E}[X_0^2 \hat{X}_{0,s}]\right\} \\
 & \hspace*{.5cm}+ (\varkappa_s^2m^2 + c_s^2 m \mathbb{E}[Z_0\eta_0^2]) (x^2-1)
\end{align*}
and it can be verified that the Hermite rank of $K$ with respect to $(X_0,\hat X_{0,s})$ is 1, except if $\mathbb{E}[Z_0\eta_0] = 0$, which holds in the LMSV case. Thus we see that the rate of convergence of $T_{n,s}$ depends on the presence or absence of leverage. See Example \ref{xmpl:egarch-different} for details.
\end{example}
Let us now introduce the notation that will be used to deal with sample covariances of powers. For $p>0$ define $m_p=\mathbb{E}[|Z_0|^p]$. If $p\in(\alpha/2,\alpha)$ and Condition~(\ref{eq:model-2}) holds, $m_p$ is finite and $\mathbb{E}[|Z_0|^{2p}]=\infty$. Moreover, under the assumptions of Lemma~\ref{lem:asympt-indep-lmsv} or~\ref{lemma:asymp-indep-EGARCH-expo}, for $s>0$, $\mathbb{E}[|Y_0Y_s|^p]<\infty$ and $\mathbb{E}[|Y_0Y_s|^{2p}]=\infty$ for $p\in(\alpha/2,\alpha)$. Thus the autocovariance $\gamma_p(s) = \mathrm{cov}(|Y_0|^p,|Y_s|^p)$ is well defined. Furthermore, define $\bar Y_{p,n} = n^{-1} \sum_{i=1}^n |Y_i|^p$ and
\begin{align*}
 \hat \gamma_{p,n}(s) = \frac 1n \sum_{i=1}^{ n} (|Y_i|^p - \bar Y_{p,n}) (|Y_{i+s}|^p - \bar Y_{p,n}) \; .
\end{align*}
Define the functions $K_{p,s}^*$ (LMSV case) and $K_{p,s}^\dag$ (case with leverage) by
\begin{align}
 K_{p,s}^*(x,y) & = m_p^2 \sigma^p(x) \mathbb{E}[\sigma^p(\varkappa_s \zeta + c_s \eta_0 + \varsigma_s y)] - m_p^2 \mathbb{E}[\sigma^p(X_0)\sigma^p(X_s)] \; , \label{eq:def-K-lmsv} \\
 K_{p,s}^\dag(x,y) & = m_p \sigma^p(x) \mathbb{E}[\sigma^p(\varkappa_s \zeta + c_s \eta_0 + \varsigma_sy)|Z_0|^p] - m_p \mathbb{E}[\sigma^p(X_0)\sigma^p(X_s)|Z_0|^p] \; . \label{eq:def-K-egarch}
\end{align}

\subsection{Convergence of the sample covariance of powers: LMSV case}

\begin{thm}
 \label{theo:cov-lmsv}
 Let Assumption~\ref{hypo:iid-bivarie} hold and assume that the sequences $\{\eta_i\}$ and $\{Z_i\}$ are independent. Let the function $\sigma$ be continuous and satisfy~(\ref{eq:sigma-assumption}) with $q>4\alpha$.
For a\n fixed integer $s\geq1$, let $\tau_p^*(s)$ be the Hermite rank of the\n bivariate function $K_{p,s}^*$ defined\n by~(\ref{eq:def-K-lmsv}), with respect to a bivariate Gaussian vector with\n standard marginal distributions and correlation $\varsigma_s^{-1}\rho_s$.\n \begin{itemize}\n \item If $p<\alpha<2p$ and $1-\tau_p^*(s)(1-H)<p\/\alpha$, then\n \begin{align*}\n n b_n^{-p}\n (\hat\gamma_{p,n}(s)-\gamma_p(s))\n \stackrel{\scriptstyle d}{\to} \mathcal L_s \; ,\n \end{align*}\n where $\mathcal L_s$ is an $\alpha\/p$-stable random variable.\n \item If $1-\tau_p^*(s)(1-H)>p\/\alpha$, then\n \begin{align*}\n \rho_n^{-\tau^*_p(s)\/2}\n (\hat\gamma_{p,n}(s)-\gamma_p(s))\n \stackrel{\scriptstyle d}{\to} G_s^* \; ,\n \end{align*}\n where the random variable $G_s^*$ is Gaussian if $\tau_p^*(s)=1$.\n \end{itemize}\n\end{thm}\nFor different values $s=1,\ldots,h$, the Hermite ranks $\tau_p^*(s)$ of the\nfunctions $K_{p,s}^*$ may be different. Therefore, in order to consider the joint\nautocovariances at lags $s=1,\dots,h$, we define\n$$\n\tau_p^*=\min\{\tau_p^*(1),\ldots,\tau_p^*(h)\} \; .\n$$\n\addtocounter{cor}{+1}\n\begin{cor}\label{cor:1}\n Under the assumptions of Theorem~\ref{theo:cov-lmsv},\n\begin{itemize}\n \item If $1-\tau_p^*(1-H)<p\/\alpha$, then\n \begin{align*}\n n b_n^{-p}\n (\hat\gamma_{p,n}(1)-\gamma_p(1),\dots,\hat\gamma_{p,n}(h) -\gamma_p(h))\n \stackrel{\scriptstyle d}{\to} (\mathcal L_1,\dots,\mathcal L_h) \; ,\n \end{align*}\n where $\mathcal L_1,\dots,\mathcal L_h$ are independent $\alpha\/p$-stable random variables.\n \item If $1-\tau_p^*(1-H)>p\/\alpha$, then\n \begin{align*}\n \rho_n^{-\tau^*_p\/2}\n (\hat\gamma_{p,n}(1)-\gamma_p(1),\dots,\hat\gamma_{p,n}(h) -\gamma_p(h))\n \stackrel{\scriptstyle d}{\to} (\tilde G_1^*,\dots,\tilde G_h^*) \; ,\n \end{align*}\n where $\tilde G_s^*=G_s^*$ if $\tau_p^*(s)=\tau_p^*$ and $\tilde\n G_s^*=0$ otherwise.\n \end{itemize}\n\n\end{cor}\nWe see that the joint limiting vector $(\tilde G_1^*,\dots,\tilde\nG_h^*)$\n may have certain zero components if there exist indices~$s$ such that\n $\tau_p^*(s)>\tau_p^*$. However, for standard choices of the function\n $\sigma$, the Hermite rank~$\tau_p^*(s)$ does not depend on~$s$. For instance,\n for $\sigma(x)=\exp(x)$, $\tau_p^*(s)=1$ for all $s$, and for $\sigma(x)=x^2$,\n $\tau_p^*(s)=2$ for all $s$.\n\subsection{Convergence of the sample covariance of powers: case of leverage}\n\n\begin{thm}\n \label{theo:cov-egarch}\n Let the assumptions of Proposition~\ref{prop:egarch-pp-expo} or\n \ref{prop:egarch-pp-quadratic} hold and assume\n that~(\ref{eq:sigma-assumption}) holds for some $q>4\alpha$. 
Let\n $\tau_p^\dag(s)$ be the Hermite rank of the bivariate function\n $K_{p,s}^\dag$ defined by~(\ref{eq:def-K-egarch}), with respect to a\n bivariate Gaussian vector with standard marginal distributions and correlation\n $\varsigma_s^{-1}\rho_s$.\n \begin{itemize}\n \item If $p<\alpha<2p$ and $1-\tau_p^\dag(s)(1-H)<p\/\alpha$, then\n \begin{align*}\n n b_n^{-p}\n (\hat\gamma_{p,n}(s)-\gamma_p(s))\n \stackrel{\scriptstyle d}{\to} \mathcal L_s \; ,\n \end{align*}\n where $\mathcal L_s$ is an $\alpha\/p$-stable random variable.\n \item If $1-\tau_p^\dag(s)(1-H)>p\/\alpha$, then\n \begin{align*}\n \rho_n^{-\tau^\dag_p(s)\/2}\n (\hat\gamma_{p,n}(s)-\gamma_p(s))\n \stackrel{\scriptstyle d}{\to} G_s^\dag \; ,\n \end{align*}\n where the random variable $G_s^\dag$ is Gaussian if $\tau_p^\dag(s)=1$.\n \end{itemize}\n\end{thm}\nAgain, as in the previous case, in order to formulate the multivariate result,\nwe define further\n$$\n\tau_p^\dag=\min\{\tau_p^\dag(1),\ldots,\tau_p^\dag(h)\} \; .\n$$\n\n\begin{cor}\label{cor:2}\n Under the assumptions of Theorem~\ref{theo:cov-egarch},\n\begin{itemize}\n \item If $1-\tau_p^\dag(1-H)<p\/\alpha$, then\n \begin{align*}\n n b_n^{-p}\n (\hat\gamma_{p,n}(1)-\gamma_p(1),\dots,\hat\gamma_{p,n}(h) -\gamma_p(h))\n \stackrel{\scriptstyle d}{\to} (\mathcal L_1,\dots,\mathcal L_h) \; ,\n \end{align*}\n where $\mathcal L_1,\dots,\mathcal L_h$ are independent $\alpha\/p$-stable random variables.\n \item If $1-\tau_p^\dag(1-H)>p\/\alpha$, then\n \begin{align*}\n \rho_n^{-\tau^\dag_p\/2}\n (\hat\gamma_{p,n}(1)-\gamma_p(1),\dots,\hat\gamma_{p,n}(h) -\gamma_p(h))\n \stackrel{\scriptstyle d}{\to} (\tilde G_1^\dag,\dots,\tilde G_h^\dag) \; ,\n \end{align*}\n where $\tilde G_s^\dag=G_s^\dag$ if $\tau_p^\dag(s)=\tau_p^\dag$ and $\tilde\n G_s^\dag=0$ otherwise.\n \end{itemize}\n\end{cor}\nThe main difference between Theorems~\ref{theo:cov-lmsv} and\n\ref{theo:cov-egarch} (or, Corollaries \ref{cor:1} and \ref{cor:2}) is the\nHermite rank considered. Under the conditions that ensure convergence to a\nstable limit, the rates of convergence and the limits are the same in both\ntheorems. Otherwise, the rates and the limits may be different.\n\n\n\n\n\n\begin{example}\n \label{xmpl:exponential}\n Consider the case $\sigma(x)=\exp(x)$. For all $s\ge 1$ we have\n $\tau_p^\dag=\tau_p^\dag(s)=1$. Thus, under the assumptions of\nTheorem~\ref{theo:cov-egarch}, we have:\n\begin{itemize}\n\item If $H<p\/\alpha$, then $n b_n^{-p}\n \{\hat\gamma_{p,n}(s)-\gamma_p(s)\}$ converges weakly to a stable law.\n\item If $H>p\/\alpha$, then $\rho_n^{-1\/2}\n \{\hat\gamma_{p,n}(s)-\gamma_p(s)\}$ converges weakly to a zero mean Gaussian\n distribution.\n\end{itemize}\nThe dichotomy is the same as in the LMSV case, but the variance of\nthe limiting distribution in the case $H>p\/\alpha$ is different\nexcept if $\mathbb{E}[Z_0 \exp(c_s\eta_0)] =\mathbb{E}[Z_0] \mathbb{E}[\n\exp(c_s\eta_0)]$.\n\end{example}\n\n\n\begin{example}\n \label{xmpl:egarch-different}\n Consider the case $\sigma(x)=x^2$ and $p=1$. Assume that\n $\mathbb{E}[\eta_1|Z_1|]\not=0$. Then for each $s\ge 1$,\n $\tau_1^\dag=\tau_1^\dag(s)=1$ whereas $\tau_1^*=\tau_1^*(s)=2$, thus the\n dichotomy is not the same as in the LMSV case and the rate of convergence\n differs in the case $H>1\/\alpha$.\n\begin{itemize}\n\item If $H < 1\/\alpha$, then $n b_n^{-1} \{ \hat\gamma_{1,n}(s) -\n \gamma_1(s)\}$ converges weakly to a stable law.\n\item If $H > 1\/\alpha$, then $\rho_n^{-1\/2} \{ \hat\gamma_{1,n}(s) -\n \gamma_1(s)\}$ converges weakly to a zero mean Gaussian distribution.\n\end{itemize}\nIf we assume now that $\mathbb{E}[\eta_1|Z_1|]=0$, then\n$\tau_1^\dag=\tau_1^*=2$. 
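Indeed, comparing with the formula for $K$ obtained above (with $\mathbb{E}[Z_0\eta_0]$ replaced by $\mathbb{E}[|Z_0|\eta_0]$), the projection of $K_{1,s}^\dag$ on the first-order Hermite polynomials is proportional to $\mathbb{E}[\eta_1|Z_1|]$ and therefore vanishes, while the second-order coefficients remain nonzero.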
Thus the dichotomy is the same as in the LMSV case,\nbut the limiting distribution in the non-stable case can be different from the\none in the LMSV case.\n\begin{itemize}\n\item If $2H-1 < 1\/\alpha$, then $n b_n^{-1} \{\hat\gamma_{1,n}(s) -\n \gamma_1(s)\}$ converges weakly to a stable law.\n\item If $2H-1 > 1\/\alpha$, then $\rho_n^{-1} \{\hat\gamma_{1,n}(s) -\n \gamma_1(s)\}$ converges weakly to a zero mean non-Gaussian distribution.\n\end{itemize}\nIf moreover $\mathbb{E}[H_2(\eta_1)| Z_1|]=0$, then for each $s$,\nthe functions $K_{p,s}^*$ and $K_{p,s}^\dag$ are equal, and thus\nthe limiting distribution is the same as in the LMSV case.\n\end{example}\n\n\n\n\n\section{Proofs}\label{sec:proofs}\n\n\begin{lem}\n \label{lem:tail-equivalence}\n Let $Z$ be a nonnegative random variable with a regularly varying right tail\n with index $-\alpha$, $\alpha>0$. Let $g$ be a bounded function on\n $[0,\infty)$ such that $\lim_{x\to+\infty} g(x) = c_g \in(0,\infty)$. Then\n $Zg(Z)$ is tail equivalent to $Z$:\n \begin{align*}\n \lim_{x\to+\infty} \frac{\mathbb{P}(Zg(Z)>x)}{\mathbb{P}(Z>x)} = c_g^\alpha \; .\n \end{align*}\n\end{lem}\n\n\begin{proof}\n Fix some $\epsilon>0$ and let $x_0$ be large enough so that\n $|g(x)-c_g|\/c_g<\epsilon$ for all $x>x_0$. The function $g$ is bounded, thus $zg(z)>x$\n implies that $z>x\/\|g\|_\infty$ and if $x>x_0\|g\|_\infty$, we have\n \begin{align*}\n \mathbb{P}(Zg(Z)>x) & = \mathbb{P}(Zg(Z)>x,Z>x\/\|g\|_\infty) \\\n & \leq \mathbb{P}(Zc_g(1+\epsilon)>x,Z>x\/\|g\|_\infty) \leq \mathbb{P}(Zc_g(1+\epsilon)>x) \; .\n \end{align*}\n This yields the upper bound:\n\begin{align*}\n \limsup_{x\to+\infty} \frac{ \mathbb{P}(Zg(Z)>x)}{\mathbb{P}(Z>x) } \leq \limsup_{x\to+\infty}\n \frac{ \mathbb{P}(Zc_g(1+\epsilon)>x)}{\mathbb{P}(Z>x) } = c_g^\alpha(1+\epsilon)^\alpha \; .\n\end{align*}\nConversely, we have\n\begin{align*}\n \mathbb{P}(Zg(Z)>x) & = \mathbb{P}(Zg(Z)>x,Z>x\/\|g\|_\infty)\n \geq \mathbb{P}(Zc_g(1-\epsilon)>x,Z>x\/\|g\|_\infty) \\\n & = \mathbb{P}\left( Z > x \max\left\{ \frac1{c_g(1-\epsilon)} , \frac1{\|g\|_\infty}\n \right\} \right) = \mathbb{P}\left( Z > \frac x{c_g(1-\epsilon)} \right)\n\end{align*}\nwhere the last equality comes from the fact that $(1-\epsilon)c_g\n\leq c_g = \lim_{z\to+\infty} g(z) \leq \|g\|_\infty$. Thus\n\begin{align*}\n \liminf_{x\to+\infty} \frac{ \mathbb{P}(Zg(Z)>x)}{\mathbb{P}(Z>x) } \geq \liminf_{x\to+\infty}\n \frac{ \mathbb{P}(Zc_g(1-\epsilon)>x)}{\mathbb{P}(Z>x) } = c_g^\alpha(1-\epsilon)^\alpha \; .\n\end{align*}\n Since $\epsilon$ is arbitrary, we obtain the desired limit.\n\end{proof}\n\n\begin{lem}\n \label{lem:bound-potter}\n Let $Z$ be a nonnegative random variable with a regularly varying right tail\n with index $-\alpha$, $\alpha>0$. For each $\epsilon>0$, there exists a\n constant $C$, such that for all $x\geq1$ and all $y>0$,\n \begin{align}\n \frac{ \mathbb{P}(yZ>x) }{ \mathbb{P}(Z>x) } \leq C (y\vee1)^{\alpha+\epsilon} \; . \label{eq:claim-potter}\n \end{align}\n\end{lem}\n\n\begin{proof}\n If $y \leq 1$, then $\mathbb{P}(yZ>x) \leq \mathbb{P}(Z>x)$ so the requested bound holds\n trivially with $C=1$. Assume now that $y \geq 1$. 
Then, by Markov's inequality,\n \begin{align}\n \mathbb{P}(yZ > x) & = \mathbb{P}(Z>x) + \mathbb{P}(Z \mathbf 1_{\{Z \leq x\}} >x\/y)\n \leq \mathbb{P}(Z>x) + x^{-\alpha-\epsilon} y^{\alpha+\epsilon} \mathbb{E}[ Z^{\alpha+\epsilon}\n \mathbf 1_{\{Z \leq x\}} ] \; . \label{eq:decomp-yz>x}\n \end{align}\n Next, by \cite[Theorem~VIII.9.2]{feller:1971} or\n \cite[Theorem~8.1.2]{bingham:goldie:teugels:1989},\n\begin{align*}\n \lim_{x\to+\infty} \frac{\mathbb{E}[Z^{\alpha+\epsilon} \mathbf 1_{\{Z \leq x\}}]}\n {x^{\alpha+\epsilon} \mathbb{P}(Z>x)} = \frac\alpha\epsilon \; .\n\end{align*}\nMoreover, the function $x\mapsto\mathbb{P}(Z>x)$ is decreasing and positive on $[0,\infty)$, hence\nbounded away from zero on compact sets of $[0,\infty)$. Thus, there exists a\nconstant $C$ such that for all $x\geq1$,\n\begin{align}\n \frac{\mathbb{E}[Z^{\alpha+\epsilon} \mathbf 1_{\{Z \leq x\}}]} {\mathbb{P}(Z>x)} \leq C\n x^{\alpha+\epsilon} \; . \label{eq:borne-feller}\n\end{align}\nPlugging~(\ref{eq:borne-feller}) into~(\ref{eq:decomp-yz>x}) yields, for all\n$x,y\geq1$,\n\begin{align*}\n \frac{ \mathbb{P}(yZ > x) }{ \mathbb{P}(Z>x) }& \leq 1 + C y^{\alpha+\epsilon} \; .\n\end{align*}\nThis concludes the proof of~(\ref{eq:claim-potter}).\n\end{proof}\n\n\n\begin{proof}[Proof of Lemma~\ref{lem:asympt-indep-lmsv}]\n Under the assumption of independence between the sequences $\{Z_i\}$ and\n $\{\eta_i\}$, as already mentioned, $Y_0$ is tail equivalent to $Z_0$ and\n $Y_0Y_h$ is tail equivalent to $Z_0Z_1$ for all $h$. The\n properties~(\ref{eq:defd+-}), (\ref{eq:def-bn}), (\ref{eq:domination}) are\n straightforward. We need to prove~(\ref{eq:indep1})\n and~(\ref{eq:indep-products}). Since $Z_0$ is\n independent of $(\sigma_0,\sigma_j,Z_j)$, by conditioning, we have\n \begin{align*}\n n \mathbb{P}(|Y_0| > a_n x , |Y_0Y_j| > b_n x) & = \mathbb{E} \left[ n \bar F_{|Z|}\n \left( \frac{a_nx}{\sigma_0} \vee \frac {b_n x} {\sigma_0\sigma_j|Z_j|} \right)\n \right]\n \end{align*}\n with $F_{|Z|}$ the distribution function of $|Z_0|$. Since $a_n\/b_n\to0$, for\n any $y>0$, it holds that $\lim_{n\to+\infty} n\bar F_{|Z|}(b_ny)=0$. Thus,\n\begin{align*}\n n \bar F_{|Z|} \left( \frac{a_nx}{\sigma_0} \vee \frac {b_n x}\n {\sigma_0\sigma_j|Z_j|} \right) \leq n \bar F_{|Z|} \left( \frac {b_n x}\n {\sigma_0\sigma_j|Z_j|} \right) \to 0 \; , \mbox{ a.s.}\n\end{align*}\nMoreover, by Lemma~\ref{lem:bound-potter} and the definition of $a_n$, for any\n$\epsilon>0$ there exists a constant $C$ such that\n\begin{align*}\n n \bar F_{|Z|} \left( \frac{a_nx}{\sigma_0} \vee \frac {b_n x}\n {\sigma_0\sigma_j|Z_j|} \right) \leq n \bar F_{|Z|} \left(\n \frac{a_nx}{\sigma_0} \right) \leq C x^{-\alpha-\epsilon} \sigma_0^{\alpha+\epsilon} \; .\n\end{align*}\nBy assumption,~(\ref{eq:sigma-assumption}) holds for some\n$q>\alpha$. Thus, choosing $\epsilon$ small enough allows us to apply\nthe bounded convergence theorem, and this proves~(\ref{eq:indep1}).\nNext, to prove~(\ref{eq:indep-products}), note that $|Y_i| \wedge\n|Y_j| \leq (\sigma_i\vee\sigma_j) (|Z_i|\wedge |Z_j|)$. 
Thus,\napplying Lemma~\ref{lem:bound-potter}, we have\n \begin{align*}\n \mathbb{P}(|Y_0Y_i|>x,|Y_0Y_j|>x ) & = \mathbb{P}(|Z_0| \sigma_0(\sigma_i |Z_i| \wedge\n \sigma_j |Z_j|) > x) \\\n & \leq C \mathbb{P}(|Z_0|>x) \mathbb{E}[\sigma_0^{\alpha+\epsilon}\n (\sigma_i\vee\sigma_j)^{\alpha+\epsilon}] \mathbb{E}[(|Z_i|\wedge\n |Z_j|)^{\alpha+\epsilon}] \; .\n \end{align*}\n The expectation $\mathbb{E}[\sigma_0^{\alpha+\epsilon}\n (\sigma_i\vee\sigma_j)^{\alpha+\epsilon}] $ is finite for $\epsilon$ small\n enough, since Assumption~(\ref{eq:sigma-assumption}) holds with\n $q>2\alpha$. Since $\mathbb{P}(|Z_0|>x) = o(\mathbb{P}(|Z_1Z_2|>x))$, this\n yields~(\ref{eq:indep-products}) in the LMSV case.\n\end{proof}\n\n\n\begin{proof}[Proof of Lemma~\ref{lemma:asymp-indep-EGARCH-expo}]\n It suffices to prove the lemma when the random variables $Z_i$ are\n nonnegative. Under the assumption of the Lemma, $\exp(c_h \eta_0) Z_0$ is\n tail equivalent to $Z_0$. Thus, by the Corollary in\n \cite[p.~245]{embrechts:goldie:1980}, $Z_0\exp(c_h\eta_0) Z_h$ is regularly\n varying with index $\alpha$ and tail equivalent to $Z_0Z_h$. Since\n $\mathbb{E}[Z_0^\alpha]=\infty$, it also holds that $\mathbb{P}(Z_0>x) = o(\mathbb{P}(\exp(c_h\n \eta_0) Z_0Z_1>x))$, cf. \cite[Equation~(3.5)]{DavisResnick1986}.\n\n Define $\hat X_h = \sum_{k=1, k\not=h}^{\infty} c_k \eta_{h-k}$. Then $\hat\n X_h$ is independent of $Z_0$, $\eta_0$ and $Z_h$. Since $Y_0Y_h =\n \exp(X_0+\hat X_h) Z_0\exp(c_h\eta_0) Z_h$, we can apply Breiman's Lemma\n to obtain that $Y_0Y_h$ is tail equivalent to $Z_0\exp(c_h\eta_0) Z_h$,\n hence to $Z_0Z_1$. Thus~(\ref{eq:domination}) and (\ref{eq:defd+-}) hold with\n \begin{align*}\n d_+(h) = \tilde \beta \frac{\mathbb{E}[\exp(\alpha(X_0+\hat\n X_h))]}{\mathbb{E}[\exp(\alpha(X_0+\hat X_1))]} \; , \ \ d_-(h) = (1-\tilde\n \beta) \frac{\mathbb{E}[\exp(\alpha(X_0+\hat X_h))]}{\mathbb{E}[\exp(\alpha(X_0+\hat X_1))]} \; ,\n \end{align*}\n where $\tilde \beta$ is the skewness parameter of\n $Z_0\exp(c_h\eta_0)\n Z_h$.\n\n We now prove~(\ref{eq:indep-products}). For fixed $i,j$ such that $0<i<j$, define\n $\hat\sigma_i = \exp(\hat X_i)$, $\check\sigma_{i,j} = \exp(X_j - c_j\eta_0 - c_{j-i}\eta_i)$,\n $\tilde Z_0^{(s)} = Z_0 \exp(c_s\eta_0)$ and $V_i = \exp(c_{j-i}\eta_i)$, so that\n $Y_0Y_i = \sigma_0 \hat\sigma_i \tilde Z_0^{(i)} Z_i$ and\n $Y_0Y_j = \sigma_0 \check\sigma_{i,j} \tilde Z_0^{(j)} V_i Z_j$. Then\n \begin{align*}\n \mathbb{P}(Y_0Y_i > x , Y_0Y_j > x) & = \mathbb{P}(\sigma_0 \hat\sigma_i \tilde Z_0^{(i)} Z_i\n >x \; , \ \sigma_0 \check\sigma_{i,j} \tilde Z_0^{(j)} V_i Z_j > x ) \\\n & \leq \mathbb{P}(\sigma_0 (\hat \sigma_i \vee \check\sigma_{i,j}) (\tilde\n Z_0^{(i)}+\tilde Z_0^{(j)}) (Z_i \wedge V_i Z_j) > x) \; .\n \end{align*}\n Now, $(Z_i\wedge V_i Z_j)$ is independent of $\sigma_0 (\hat \sigma_i \vee\n \check\sigma_{i,j}) (\tilde Z_0^{(i)}+\tilde Z_0^{(j)})$, which is tail\n equivalent to $Z_0$ by assumption and Breiman's Lemma. Thus, in order to\n prove~(\ref{eq:indep-products}), we only need to show that for some\n $\delta>\alpha$, $\mathbb{E}[(Z_i \wedge V_i Z_j)^\delta]<\infty$. This is\n true. Indeed, since $\mathbb{E}[V_i^q]<\infty$ for all $q>1$, we can apply\n H\"older's inequality with $q$ arbitrarily close to 1. 
This yields for\n $p^{-1}+q^{-1}=1$,\n \\begin{align*}\n \\mathbb{E}[(Z_i \\wedge V_i Z_j)^\\delta] \\leq \\mathbb{E}[(1\\vee V_i)^\\delta (Z_i \\wedge\n Z_j)^\\delta] \\leq \\mathbb{E}^{1\/p}[(1\\vee V_i)^{p\\delta}] \\, \\mathbb{E}^{1\/q}[(Z_i\n \\wedge Z_j)^{q\\delta}] \\; .\n \\end{align*}\n The tail index of $(Z_i \\wedge Z_j)$ is $2\\alpha$, and thus $\\mathbb{E}^{1\/q}[(Z_i\n \\wedge Z_j)^{q\\delta}] < \\infty$ for any $q$ and $\\delta$ such that\n $q\\delta<2\\alpha$. Thus $\\mathbb{E}[(Z_i \\wedge V_i Z_j)^\\delta]<\\infty$ for any\n $\\delta \\in (\\alpha,2\\alpha)$ and~(\\ref{eq:indep-products}) holds. The proof\n of~(\\ref{eq:indep1}) is similar.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:asymp-indep-EGARCH-quadratic}]\n We omit the proof of the regular variation and the tail equivalence between\n $Y_0Y_h$ and $Z_0Z_1$ which is a straightforward consequence of the\n assumption. We prove~(\\ref{eq:indep-products}). Using the notation of the\n proof of Lemma~\\ref{lemma:asymp-indep-EGARCH-expo}, by the subadditivity\n property of $\\sigma$, we have, for $j>i>0$, and for some constant $C$,\n \\begin{align*}\n \\mathbb{P}&(Y_0Y_i > x , Y_0Y_j > x) \\\\\n & = \\mathbb{P}(\\sigma_0 \\sigma(\\hat X_i + c_i \\eta_0)\n Z_0 Z_i >x \\; , \\ \\sigma_0 \\sigma(\\check{X}_{i,j} + c_j \\eta_0 + c_{j-i} \\eta_i) Z_0 Z_j > x \\} \\\\\n & \\leq \\mathbb{P}(C \\sigma_0 |Z_0| \\{\\sigma(\\hat X_i) + \\sigma(c_i\\eta_0)\\}\n \\{\\sigma(\\check{X}_{i,j}) + \\sigma(c_j\\eta_0) + \\sigma(c_{j-i} \\eta_i)\\} (|Z_i| \\wedge |Z_j|) > x) \\\\\n & \\leq \\mathbb{P}(C \\sigma_0 |Z_0| \\sigma(\\hat X_i) \\sigma(\\check{X}_{i,j})(|Z_i| \\wedge\n |Z_j|) > x) + \\mathbb{P}(C \\sigma_0 |Z_0| \\sigma(\\hat X_i) \\sigma(c_j\\eta_0) (|Z_i| \\wedge |Z_j|) > x) \\\\\n & + \\mathbb{P}(C \\sigma_0 |Z_0| \\sigma(\\hat X_i) \\sigma(c_{j-i} \\eta_i) (|Z_i| \\wedge\n |Z_j|) > x) + \\mathbb{P}(C \\sigma_0 |Z_0| \\sigma(c_i\\eta_0)\\sigma(\\check{X}_{i,j}) (|Z_i| \\wedge |Z_j|) > x) \\\\\n & + \\mathbb{P}(C \\sigma_0 |Z_0| \\sigma(c_i\\eta_0) \\sigma(c_j\\eta_0) (|Z_i| \\wedge |Z_j|)\n > x) + \\mathbb{P}(C \\sigma_0 |Z_0| \\sigma(c_i\\eta_0) \\sigma(c_{j-i} \\eta_i) (|Z_i|\n \\wedge |Z_j|) > x) \\; .\n \\end{align*}\n Now, under the assumptions of the Lemma, each of the last six probabilities\n can be expressed as $\\mathbb{P}(\\tilde Z U>x)$, where $\\tilde Z$ is tail equivalent\n to $Z_0$ and $U$ is independent of $\\tilde Z$ and $\\mathbb{E}[|U|^q]<\\infty$ for some\n $q>\\alpha$. Thus, by Breiman's Lemma, $\\tilde ZU$ is also tail equivalent to\n $Z_0$, and thus $ \\mathbb{P}(Y_0Y_i > x , Y_0Y_j > x) = O(\\mathbb{P}(|Z_0|>x)) =\n o(\\mathbb{P}(|Y_0Y_1|>x))$, which proves~(\\ref{eq:indep-products}).\n\\end{proof}\n\n\n\\subsection{Proof of Propositions~\\ref{prop:pp-univarie}, \\ref{prop:sv-pp},\n \\ref{prop:egarch-pp-expo} and~\\ref{prop:egarch-pp-quadratic}}\n\nWe omit some details of the proof, since it is a slight modification of the proof of Theorems\n3.1 and 3.2 in \\cite{DavisMikosch2001}, adapted to a general stochastic\nvolatility with possible leverage and long memory. Note that the proof of\n\\cite[Theorem 3.2]{DavisMikosch2001} refers to the proof of Theorem 2.4 in\n\\cite{davis:resnick:1985l}. The latter proof uses condition (2.6) in\n\\cite{davis:resnick:1985l}, which rules out long memory.\n\n\n\nThe proof is in two steps. 
In the first step we consider an $m$-dependent\napproximation $X^{(m)}$ of the Gaussian process and prove point-process\nconvergence for the corresponding stochastic volatility process $Y^{(m)}$ for\neach fixed $m$. The second step naturally consists in proving that the limits\nfor the $m$-dependent approximations converge when $m$ tends to infinity, and\nthat this limit is indeed the limit of the original sequence.\n\n\\subsubsection*{First step}\nLet $X_i^{(m)}=\\sum_{k=1}^{m}c_k\\eta_{i-k}$,\n$Y_i^{(m)}=\\sigma(X_i^{(m)})Z_i$ and define accordingly ${\\bf\nY}_{n,i}^{(m)}$. Note that the tail properties\n of the process $\\{Y^{(m)}_i\\}$ are the same as those of the process $\\{Y_i\\}$,\n since the latter are proved without any particular assumptions on the\n coefficients $c_j$ of the expansion~(\\ref{eq:linear}) apart from square\n summability. In order to prove the desired point process convergence, as in\nthe proof of \\cite[Theorem~3.1]{DavisMikosch2001}, we must check the following\ntwo conditions (which are Equations (3.3) and (3.4) in \\cite{DavisMikosch2001}):\n \\begin{align}\n & \\mathbb{P}(\\mathbf Y_{n,1}^{(m)} \\in \\cdot ) \\stackrel{\\scriptstyle v}{\\to} \\boldsymbol\\nu_m \\; ,\n \\label{eq:conv-vague} \\\\\n & \\lim_{k\\to+\\infty} \\limsup_{n\\to+\\infty} n \\sum_{i=2}^{[n\/k]} \\mathbb{E}[g({\\bf\n Y}_{n,1}^{(m)}) g({\\bf Y}_{n,i}^{(m)}) ] = 0 \\;, \\label{eq:Dprime}\n \\end{align}\n where $\\boldsymbol\\nu_m$ is the mean measure of the limiting point process and\n (\\ref{eq:Dprime}) must hold for any continuous bounded function $g$, compactly\n supported on $[0,1] \\times [-\\infty,\\infty]^{h}\\setminus\\{\\boldsymbol0\\}$.\n\n The convergence~(\\ref{eq:conv-vague}) is a straightforward consequence of the\n joint regular variation and the asymptotic independence\n properties~(\\ref{eq:indep1}), (\\ref{eq:indep-products}) of\n $Y_0,Y_0Y_1,\\dots,Y_0Y_h$. Let us now prove~(\\ref{eq:Dprime}). Note first\n that, because of asymptotic independence, for any fixed $i$,\n\\begin{align*}\n \\lim_{n\\to+\\infty} n \\mathbb{E}[g({\\bf Y}_{n,1}^{(m)}) g({\\bf Y}_{n,i}^{(m)}) ] = 0 \\; .\n\\end{align*}\nNext, by $m$-dependence, for each $k$, as $n\\to+\\infty$, we have\n\\begin{align*}\n n \\sum_{i=2+m+h}^{[n\/k]} \\mathbb{E}[g({\\bf Y}_{n,1}^{(m)}) g({\\bf Y}_{n,i}^{(m)}) ]\n & = n \\sum_{i=2+m+h}^{[n\/k]} \\mathbb{E}[g({\\bf Y}_{n,1}^{(m)})] \\mathbb{E}[ g({\\bf\n Y}_{n,i}^{(m)})] \\\\ & \\sim \\frac 1k \\left( n\\mathbb{E}[g({\\bf Y}_{n,1}^{(m)})]\n \\right)^2 \\to \\frac 1k \\left(\\int g \\mathrm d \\boldsymbol\\nu_m \\right) ^2 \\; .\n\\end{align*}\nThis yields~(\\ref{eq:Dprime}). 
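Indeed, the finitely many remaining indices $2\le i\le 1+m+h$ are covered by the previous display, so they do not contribute to the $\limsup$ in~(\ref{eq:Dprime}), while the $m$-dependent part of the sum is of order $k^{-1} \left(\int g \mathrm d \boldsymbol\nu_m \right)^2$ and vanishes as $k\to+\infty$.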
Thus, we obtain that\n$$\n\sum_{i=1}^n \delta_{(i\/n,{\bf Y}_{n,i}^{(m)})} \Rightarrow \sum_{l=0}^h\n\sum_{k=1}^{\infty} \delta_{(t_k,j_{k,l}^{(m)}{\bf e}_l)},\n$$\nwhere $\sum_{k=1}^{\infty} \delta_{(t_k,j_{k,0}^{(m)})},\n\dots,\sum_{k=1}^{\infty} \delta_{(t_k,j_{k,h}^{(m)})}$ are\nindependent Poisson processes with respective mean measures\n\begin{align}\n \lambda_{0,m}(\mathrm d x) = \alpha \left\{ \beta_{m} x^{-\alpha-1} \mathbf\n 1_{(0,\infty)}(x) + (1-\beta_{m}) (-x)^{-\alpha-1} \mathbf\n 1_{(-\infty,0)}(x) \right\} \mathrm d x \; , \label{mean-measure-1}\n \\\n \lambda_{s,m}(\mathrm d x) = \alpha \left\{ d_+^{(m)}(s) x^{-\alpha-1} \mathbf\n 1_{(0,\infty)}(x) + d_-^{(m)}(s) (-x)^{-\alpha-1} \mathbf 1_{(-\infty,0)}(x)\n \right\}\mathrm d x \; ,\label{mean-measure-2}\n\end{align}\nwhere $d_+^{(m)}(s)$ and $d_-^{(m)}(s)$ depend on the process considered and\n$\beta_m = \beta \mathbb{E}[\sigma^\alpha(X^{(m)})]\/\mathbb{E}[\sigma^\alpha(X)]$.\n\n\n\n\n\n\subsubsection*{Second step}\nWe must now prove that\n\begin{align}\n \label{eq:3.13}\n N_m \Rightarrow N\n\end{align}\nas $m\to+\infty$ and that for all $\eta>0$,\n\begin{align}\n \label{eq:3.14}\n \lim_{m\to+\infty}\n\limsup_{n\to+\infty} \mathbb{P}(\varrho(N_n,N_n^{(m)})>\eta) = 0 \; ,\n\end{align}\nwhere $\varrho$ is the metric inducing the vague topology. Cf. (3.13) and\n(3.14) in \cite{DavisMikosch2001}. To prove~(\ref{eq:3.13}), it suffices to\nprove that\n\begin{align}\n \lim_{m\to+\infty} \beta_m & = \beta \; , \label{eq:conv-betam} \\\n\ \lim_{m\to+\infty} d_+^{(m)}(s) & = d_+(s) \; ,\n \ \lim_{m\to+\infty} d_-^{(m)}(s)=d_-(s) \; . \label{eq:conv-d+-m}\n\end{align}\nTo prove~(\ref{eq:3.14}), as in the proof of\n\cite[Theorem~3.3]{DavisMikosch2001}, it suffices to show that for\nall $\epsilon>0$,\n\begin{align}\n & \lim_{m\to+\infty} \limsup_{n\to+\infty} n \mathbb{P} (a_n^{-1} |Y_0-Y_{0}^{(m)}| >\n \epsilon) = 0 \; , \label{eq:pp-tightness1} \\\n & \lim_{m\to+\infty} \limsup_{n\to+\infty} n \mathbb{P} \left(b_n^{-1} |Y_0Y_{s} -\n Y_0^{(m)}Y_s^{(m)}| > \epsilon \right) = 0 \; . \label{eq:pp-tightness2}\n\end{align}\nIf~(\ref{eq:sigma-assumption}) holds for some $q>\alpha$ and if $\sigma$ is\ncontinuous, then~(\ref{eq:conv-betam}) holds by bounded convergence, in both the\nLMSV case and the case of leverage.\nWe now prove~(\ref{eq:pp-tightness1}). Since $Y_0$ and $Z_0$ are tail\nequivalent, by Breiman's Lemma, we have\n\begin{align*}\n \limsup_{n\to+\infty} n \mathbb{P} (a_n^{-1} |Y_0-Y_{0}^{(m)}| > \epsilon) \leq C\n \epsilon^{-\alpha} \mathbb{E}[|\sigma(X_0^{(m)})-\sigma(X_0)|^{\alpha}] \; .\n\end{align*}\nContinuity of $\sigma$, Assumption~(\ref{eq:sigma-assumption}) with $q>\alpha$\nand the bounded convergence theorem imply that $\lim_{m\to+\infty}\n\mathbb{E}[|\sigma(X_0^{(m)})-\sigma(X_0)|^{\alpha}]=0$. 
This\nproves~(\ref{eq:pp-tightness1}) in both the LMSV case and the case of leverage.\nWe now split the proof of~(\ref{eq:conv-d+-m}) and~(\ref{eq:pp-tightness2})\nbetween the LMSV and leverage cases.\n\n\subsubsection*{LMSV case.}\nIn this case, we have\n\begin{align*}\n d_+^{(m)}(s) = d_+(s)\n \frac{\mathbb{E}[\sigma^\alpha(X_0^{(m)})\sigma^\alpha(X_s^{(m)})]}\n {\mathbb{E}[\sigma^\alpha(X_0)\sigma^\alpha(X_s)]} \; , \ d_-^{(m)}(s) = d_-(s)\n \frac{\mathbb{E}[\sigma^\alpha(X_0^{(m)})\sigma^\alpha(X_s^{(m)})]}\n {\mathbb{E}[\sigma^\alpha(X_0)\sigma^\alpha(X_s)]} \; .\n\end{align*}\nFor $s = 1,\dots,h$, define\n$$\nW_{m,s} = \sigma(X_0^{(m)})\sigma(X_{s}^{(m)}) - \sigma(X_0)\sigma(X_{s}) \; .\n$$\nContinuity of $\sigma$ implies that $W_{m,s} \stackrel{\scriptstyle P}{\to} 0$\nas $m\to+\infty$. Under the Gaussian assumption, $X^{(m)} \stackrel d = u_m X$\nfor some $u_m\in(0,1)$, thus if~(\ref{eq:sigma-assumption}) holds for some\n$q'>\alpha$, then it also holds that\n\begin{align*}\n \sup_{m\geq1} \mathbb{E}[\sigma^{q'}(X^{(m)})] <\infty \; ,\n\end{align*}\nhence $W_{m,s}$ converges to 0 in $L^q$ for any $q<q'$. Since~(\ref{eq:sigma-assumption}) holds for some $q'>2\alpha$,\n$W_{m,s}$ converges to 0 in $L^q$ for any $q<2\alpha$. Thus, for $\epsilon>0$,\n\begin{align*}\n \limsup_{n\to+\infty} n \mathbb{P} (b_n^{-1} |Y_0Y_{s}-Y_{0}^{(m)}Y_{s}^{(m)}| >\n \epsilon ) & \le \limsup_{n\to+\infty} n \mathbb{P} \left(b_n^{-1} |Z_0Z_{s}| |W_{m,s}|\n > \epsilon \right) \leq C \epsilon^{-\alpha}\n \mathbb{E}[|W_{m,s}|^{\alpha}]\n\end{align*}\nwhich converges to 0 as $m\to+\infty$. This concludes the proof of~(\ref{eq:pp-tightness2}) in the LMSV case.\n\nTo prove (\ref{eq:pp-tightness2}) in the case of leverage, we\nfurther split the proof between the cases $\sigma(x)=\exp(x)$ and\n$\sigma$ subadditive.\n\n\n\subsubsection*{Case of leverage, $\sigma(x)=\exp(x)$}\n\nDefine $\hat X_s = \sum_{j=1 , {j\not= s}}^\infty c_j \eta_{s-j}$, $\hat\nX_s^{(m)} = \sum_{j=1 , {j\not= s}}^m c_j \eta_{s-j}$ and\n\begin{align*}\n \tilde W_{m,s} = |\exp(X_0+\hat X_s) - \exp(X_0^{(m)}+\hat\n X_s^{(m)})| \; .\n\end{align*}\nAs previously, we see that $\tilde W_{m,s}$ converges to 0 in $L^q$ for some\n$q>\alpha$. 
Thus, we obtain that\n$$\n\sum_{i=1}^n \delta_{(i\/n,{\bf Y}_{n,i}^{(m)})} \Rightarrow\n\sum_{s=0}^h \sum_{k=1}^{\infty} \delta_{(t_k,j_{k,s}^{(m)}{\bf\ne}_s)} \; , (n\to+\infty) \; ,\n$$\nwhere $\sum_{k=1}^{\infty} \delta_{(t_k,j_{k,0}^{(m)})},\n\dots,\sum_{k=1}^{\infty} \delta_{(t_k,j_{k,h}^{(m)})}$ are\nindependent Poisson processes with respective mean measures\n$\lambda_{{s,m}}(dx)$, $s=0,\ldots,h$, defined in\n(\ref{mean-measure-1})-(\ref{mean-measure-2}) with the constants\n$d_+^{(m)}(s)$ and $d_-^{(m)}(s)$ that appear therein given by\n\begin{align*}\n d_+^{(m)}(s) = d_+(s) \frac{\mathbb{E}[\exp(\alpha(X_0^{(m)}+\hat X_s^{(m)}))]}\n {\mathbb{E}[\exp(\alpha(X_0+\hat X_s))]} \; , \ \\n d_-^{(m)}(s) = d_-(s) \frac{\mathbb{E}[\exp(\alpha(X_0^{(m)}+\hat X_s^{(m)}))]}\n {\mathbb{E}[\exp(\alpha(X_0+\hat X_s))]} \; .\n\end{align*}\nSince $|\tilde W_{m,s}|$ converges to 0 in $L^q$, we obtain\n$$\n\sum_{k=1}^{\infty} \delta_{(t_k,j_{k,s}^{(m)})} \Rightarrow\n\sum_{k=1}^{\infty} \delta_{(t_k,j_{k,s})} \; , (m\to+\infty) \; ,\n\qquad s = 0,\dots,h \; .\n$$\nThen, for $s=1,\ldots,h$, we obtain, with $\tilde Z_0^{(s)} = Z_0\n\exp(c_s\n \eta_0)$, for $\epsilon>0$,\n\begin{align*}\n \limsup_{n\to+\infty} n \mathbb{P} \left(b_n^{-1} |Y_0Y_s - Y_0^{(m)}Y_s^{(m)}| > \epsilon \right)\n & = \limsup_{n\to+\infty} n \mathbb{P} \left(b_n^{-1} |\tilde Z_{0}^{(s)} Z_s| |\tilde\n W_{m,s}| > \epsilon \right) \leq C \epsilon^{-\alpha} \mathbb{E}[|\tilde W_{m,s}|^\alpha]\n\end{align*}\nwhich converges to 0 as $m\to+\infty$. This\nproves~(\ref{eq:pp-tightness2}) and concludes the proof in the case\nof leverage with~$\sigma(x) = \exp(x)$.\n\n\subsubsection*{Case of leverage, $\sigma$ subadditive}\nWe have to bound\n\begin{align*}\nn\mathbb{P} (|Z_0Z_s||\sigma(X_0)\sigma(X_{s})-\sigma(X_0^{(m)})\sigma(X_{s}^{(m)})|>\epsilon b_n) \; .\n\end{align*}\nIt suffices to bound two terms\n\begin{align*}\n I_1(n,m)=n\mathbb{P} (|Z_0Z_s||\sigma(X_0)-\sigma(X_0^{(m)})|\sigma(X_{s}^{(m)})>\epsilon b_n) \; , \\\n I_2(n,m)=n\mathbb{P} (|Z_0Z_s|\sigma(X_0)|\sigma(X_{s})-\sigma(X_{s}^{(m)})|>\epsilon b_n) \; .\n\end{align*}\nRecall that $X_s^{(m)}=\hat X_s^{(m)}+c_s\eta_0$ and $X_s=\hat\nX_s+c_s\eta_0$. By subadditivity of $\sigma$, we have, for some constant $\delta>0$,\n\begin{align*}\n I_1(n,m) \leq & n\mathbb{P} (|Z_0Z_s| |\sigma(X_0)-\sigma(X_0^{(m)})| \sigma(\hat\n X_{s}^{(m)}) > \delta \epsilon b_n) \\\n & + n\mathbb{P} (|Z_0Z_s| |\sigma(X_0)-\sigma(X_0^{(m)})| \sigma(c_s\eta_0) > \delta\n \epsilon b_n) \; .\n\end{align*}\nThe product $Z_0Z_s$ is independent of\n$|\sigma(X_0)-\sigma(X_0^{(m)})|\sigma(\hat X_{s}^{(m)})$ and tail equivalent to\n$Y_0Y_1$, thus we obtain\n$$\n\limsup_{n\to+\infty}n\mathbb{P} (|Z_0Z_s||\sigma(X_0)-\sigma(X_0^{(m)})|\sigma(\hat\nX_{s}^{(m)}) > \delta \epsilon b_n) \le C \epsilon^{-\alpha}\n\mathbb{E}[|\sigma(X_0)-\sigma(X_0^{(m)})|^{\alpha} \sigma^{\alpha}(\hat X_{s}^{(m)})]\n\; .\n$$\nWe have already seen that $\sigma(X_0^{(m)})$ converges to $\sigma(X_0)$ in\n$L^\alpha$, thus the latter expression converges to 0 as $m\to+\infty$. 
By\nassumption, $\\sigma(c_s\\eta_0)|Z_0Z_s|$ is either tail equivalent to $|Z_0Z_s|$\nor $\\mathbb{E}[\\sigma^q(c_s\\eta_0)|Z_0Z_s|^q]<\\infty$ for some $q>\\alpha$, and since\nit is independent of $|\\sigma(X_0) - \\sigma(X_0^{(m)})|$, we obtain that\n$$\n\\limsup_{n\\to+\\infty} n\\mathbb{P} (\\sigma(c_s\\eta_0)|Z_0Z_s||\\sigma(X_0) -\n\\sigma(X_0^{(m)})|>\\epsilon b_n) \\leq C\n\\epsilon^{-\\alpha}\\mathbb{E}[|\\sigma(X_0)-\\sigma(X_0^{(m)})|^{\\alpha}] \\;,\n$$\nwhere $ C=0$ in the latter case. In both cases, this yields\n\\begin{align*}\n \\lim_{m\\to+\\infty} \\limsup_{n\\to+\\infty} n\\mathbb{P} (\\sigma(c_s\\eta_0)|Z_0Z_s|\n |\\sigma(X_0) - \\sigma(X_0^{(m)})| > \\epsilon b_n) = 0 \\; .\n\\end{align*}\nThus we have obtained that $\\lim_{m\\to+\\infty}\\limsup_{n\\to+\\infty}I_1(n,m)=0$.\n\nFor the term $I_{2}(n,m)$ we use\nassumption~(\\ref{eq:sigma-condition-truncation}) with $x=c_s\\eta_0$, $y=\\hat\nX_s$ and $z=\\hat X_s^{(m)}$. Thus\n\\begin{align*}\n I_2(n,m) \\leq n \\mathbb{P} (|Z_0Z_s| (\\sigma(c_s\\eta_0) \\vee 1) \\tilde W_{m,s} >\n \\epsilon b_n) \\; ,\n\\end{align*}\nwith\n\\begin{align*}\n \\tilde W_{m,s} = \\sigma(X_0) \\{(\\sigma(\\hat X_s) \\vee 1) + (\\sigma(\\hat\n X_s^{(m)}) \\vee 1)\\}|\\hat X_{s} - \\hat X_{s}^{(m)}| \\; .\n\\end{align*}\nNote that $\\tilde W_{m,s}$ is independent of $|Z_0Z_s| (\\sigma(c_s\\eta_0) \\vee\n1)$ and $\\tilde W_{m,s}$ converges to 0 when $m\\to+\\infty$ in $L^q$ for some\n$q>\\alpha$. Since $|Z_0Z_s| \\sigma(c_s\\eta_0)$ is tail equivalent to $|Y_0Y_1|$\nor has a finite moment of order $q'$ for some $q'>\\alpha$, we have\n\\begin{align*}\n \\limsup_{n\\to+\\infty} n \\mathbb{P} (|Z_0Z_s| (\\sigma(c_s\\eta_0) \\vee 1) \\tilde W_{m,s}\n > \\epsilon b_n) \\leq C \\mathbb{E}[\\tilde W_{m,s}^\\alpha] \\; ,\n\\end{align*}\nwhere the constant $C$ can be zero in the latter case. In both cases, we\nconclude\n\\begin{align*}\n \\lim_{m\\to+\\infty} \\limsup_{n\\to+\\infty} n \\mathbb{P} (|Z_0Z_s| (\\sigma(c_s\\eta_0) \\vee\n 1) \\tilde W_{m,s} > \\epsilon b_n) = 0 \\; .\n\\end{align*}\n\n\\subsection{Proof of Theorem \\ref{thm:partial-sums-egarch}}\nWe start by studying $S_{p,n}$. Write\n\\begin{align*}\n \\sum_{i=1}^{[nt]}\\left(|Y_i|^p-\\mathbb{E}[|Y_0|^p]\\right) & =\n \\sum_{i=1}^{[nt]}\\left(|Y_i|^p-\\mathbb{E}[|Y_i|^p|{\\cal F}_{i-1}]\\right)\n +\\sum_{i=1}^{[nt]} \\left(\\mathbb{E}[|Y_i|^p|{\\cal F}_{i-1}]-\\mathbb{E}[|Y_0|^p]\\right) \\\\\n & =: M_n(t)+R_n(t)\\;.\n\\end{align*}\nNote that $\\mathbb{E}[|Y_i|^p|{\\cal F}_{i-1}] = \\mathbb{E}[|Z_0|^p]\\sigma^p(X_i)$ is a\nfunction of $X_i$ and does not depend on $Z_i$. Then, by\n\\cite[Theorem~6]{arcones:1994}, for $\\tau_p(1-H)<1\/2$ we have\n\\begin{equation}\n \\label{eq:lrd-limit-1-egarch}\n n^{-1} \\rho_n^{-\\tau_p\/2} R_n \\stackrel{\\scriptstyle \\mathcal D}{\\Rightarrow} \\frac{J_{\\tau_p}(\\sigma^p) \\mathbb{E}[|Z_1|^p]}{\\tau_p !} R_{\\tau_p,H} \\; .\n\\end{equation}\nIf $\\tau_p(1-H)>1\/2$ then by \\cite[Theorem~4]{arcones:1994}, we obtain\n\\begin{equation}\n \\label{eq:lrd-limit-2-egarch}\n n^{-1\/2} R_n \\stackrel{\\scriptstyle \\mathcal D}{\\Rightarrow} \\varsigma \\mathbb{E}[|Z_0|^p] B \\; ,\n\\end{equation}\nwhere $B$ is the standard Brownian motion and $\\varsigma^2 = \\var(\\sigma^p(X_0))+\n2\\sum_{i=1}^\\infty \\mathrm{cov}(\\sigma^p(X_0),\\sigma^p(X_i))$. 
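Note that the summability of the covariance series defining $\varsigma^2$ is precisely what the condition $\tau_p(1-H)>1\/2$ guarantees: if, as in the standard long memory parametrization, $\rho_n$ is regularly varying with index $2H-2$, then $\mathrm{cov}(\sigma^p(X_0),\sigma^p(X_i)) = O(\rho_i^{\tau_p})$, which is summable when $\tau_p(2-2H)>1$, that is, when $\tau_p(1-H)>1\/2$.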
We will show that\nunder the assumptions of Theorem \\ref{thm:partial-sums-egarch} we have,\n\\begin{eqnarray}\n \\label{eq:Levy-conv-egarch}\n a_n^{-p} M_n \\stackrel{\\scriptstyle \\mathcal D}{\\Rightarrow} L_{\\alpha\/p}\\; .\n\\end{eqnarray}\nThe convergences~(\\ref{eq:lrd-limit-1-egarch}), (\\ref{eq:lrd-limit-2-egarch})\nand (\\ref{eq:Levy-conv-egarch}) conclude the proof of the theorem. We now\nprove~(\\ref{eq:Levy-conv-egarch}). The proof is very similar to the proof of the\nconvergence of the partial sum of an i.i.d.~sequence in the domain of attraction\nof a stable law to a L\\'evy stable process. The differences are some additional\ntechnicalities. See e.g. \\cite[Proof of Theorem~7.1]{resnick:2007} for more\ndetails. For $0<\\epsilon<1$, decompose it further as\n\\begin{align*}\n M_n(t) & = \\sum_{i=1}^{[nt] }\\left\\{ |Y_i|^p \\mathbf1_{\\{|Y_i|<\\epsilon\n a_n\\}}-\\mathbb{E}\\left[|Y_i|^p\\mathbf1_{\\{|Y_i|<\\epsilon a_n\\}}|{\\cal F}_{i-1}\\right]\\right\\} \\\\\n & + \\sum_{i=1}^{[nt] } \\left\\{ |Y_i|^p \\mathbf 1_{\\{|Y_i| > \\epsilon a_n\\}} -\n \\mathbb{E}\\left[|Y_i|^p\\mathbf1_{\\{|Y_i|>\\epsilon a_n\\}}|{\\cal F}_{i-1}\\right]\\right\\} =:\n M_n^{(\\epsilon)}(t)+\\tilde M_n^{(\\epsilon)}(t) \\; .\n\\end{align*}\nThe term $\\tilde M_n^{(\\epsilon)}(\\cdot)$ is treated using the point process\nconvergence. Since for any $\\epsilon>0$, the summation functional is almost\nsurely continuous from the set of Radon measures on $[0,1] \\times\n[\\epsilon,\\infty)$ onto $\\mathcal D([0,1],\\mathbb R)$ with respect to the\ndistribution of the Poisson point process with mean measure $\\nu_0$ (see\ne.g. \\cite[p.~215]{resnick:2007}), from Proposition \\ref{prop:pp-univarie} we\nconclude\n\\begin{align}\n \\label{eq:proof-1b-egarch}\n a_n^{-p} \\sum_{i=1}^{[n\\cdot] } |Y_i|^p \\mathbf 1_{\\{|Y_i|>\\epsilon a_n\\}}\n \\stackrel{\\scriptstyle \\mathcal D}{\\Rightarrow} \\sum_{t_k\\le (\\cdot)} |j_k|^p \\mathbf 1_{\\{|j_k|>\\epsilon\\}} \\; .\n\\end{align}\nTaking expectation in (\\ref{eq:proof-1b-egarch}) we obtain\n\\begin{align}\n \\label{eq:proof-1c-egarch}\n \\lim_{n\\to+\\infty} [nt] a_n^{-p} \\mathbb{E} \\left[|Y_0|^p \\mathbf\n 1_{\\{|Y_1|>\\epsilon a_n\\}} \\right] = t \\int_{\\{x:|x|>\\epsilon\\}} |x|^p\n \\lambda_0(\\mathrm d x)\n\\end{align}\nuniformly with respect to $t\\in[0,1]$ since it is a sequence of increasing\nfunctions with a continuous limit. Furthermore, we claim that\n\\begin{align}\n \\label{eq:proof-1d-egarch}\n a_n^{-p} \\left|\\sum_{i=1}^{[nt] } \\left\\{ \\mathbb{E}\\left[|Y_0|^p \\mathbf\n 1_{\\{|Y_1|>\\epsilon a_n\\}} \\right] - \\mathbb{E} \\left[ |Y_i|^p \\mathbf\n 1_{\\{|Y_i|>\\epsilon a_n\\}}|{\\cal F}_{i-1}\\right] \\right\\} \\right|\n \\stackrel{\\scriptstyle P}{\\to} 0\\;,\n\\end{align}\nuniformly in $t\\in [0,1]$. 
We use the variance inequality\n(\\ref{eq:variance-inequality-lrd}) to bound the variance of the last expression\nby\n\\begin{align*}\n a_n^{-2p} [nt]^2 \\rho_{[nt]} \\,\n \\mathrm{var}\\left(\\mathbb{E}[|Y_1|^p\\mathbf1_{\\{|Y_1|>\\epsilon a_n\\}}|{\\cal F}_0]\\right) \\leq\n a_n^{-2p} [nt]^2 \\rho_{[nt]} \\mathbb{E} \\left[ \\left(\\mathbb{E}[|Y_1|^p \\mathbf\n 1_{\\{|Y_1|>\\epsilon a_n\\}}|{\\cal F}_0]\\right)^2 \\right] \\; .\n\\end{align*}\nIf $p<\\alpha<2p$, by Karamata's Theorem (see \\cite[p.~25]{resnick:2007}) and\nPotter's bound,\n$$\n\\mathbb{E} [\\sigma^p(x) |Z_1|^p \\mathbf 1_{\\{|\\sigma(x)Z_1|>\\epsilon a_n\\}}] \\leq C\nn^{-1} a_n^{p} \\frac{\\bar F_Z(\\epsilon a_n\/\\sigma(x))}{\\bar F_Z(a_n)} \\leq C n^{-1}\na_n^p \\sigma^{\\alpha+\\epsilon} (x)\\;.\n$$\nSince by assumption $\\mathbb{E}[\\sigma^{2\\alpha+2\\epsilon}(X_0)] < \\infty$ for some\n$\\epsilon>0$, for each $t$, we have\n\\begin{multline}\n \\mathrm{var} \\left( a_n^{-p} \\sum_{i=1}^{[nt] } \\left\\{ \\mathbb{E}\\left[|Y_0|^p \\mathbf\n 1_{\\{|Y_0|>\\epsilon a_n\\}} \\right] - \\mathbb{E} \\left[ |Y_i|^p \\mathbf\n 1_{\\{|Y_i|>\\epsilon a_n\\}}|{\\cal F}_{i-1}\\right] \\right\\} \\right) \\\\\n \\leq C n^{-2} [nt]^2 \\rho_{[nt]} \\leq C n^{2H-2+\\epsilon} t^{2H-\\epsilon} \\; , \\label{eq:same-arguments}\n\\end{multline}\nwhere the last bound is obtained for some $\\epsilon>0$ by Potter's bound. This\nproves convergence of finite dimensional distribution to 0 and tightness in\n$\\mathcal D([0,1],\\mathbb R)$.\nAs in \\cite[p.~216]{resnick:2007}, we now argue that (\\ref{eq:proof-1b-egarch}),\n(\\ref{eq:proof-1c-egarch}) and~(\\ref{eq:proof-1d-egarch}) imply that\n\\begin{align}\n \\label{eq:weak-conv-egarch}\n a_n^{-p} \\tilde M_n^{(\\epsilon)} \\stackrel{\\scriptstyle \\mathcal D}{\\Rightarrow} L_{\\alpha\/p}^{(\\epsilon)} \\; ,\n\\end{align}\nand it also holds that $L_{\\alpha\/p}^{(\\epsilon)} \\stackrel{\\scriptstyle \\mathcal D}{\\Rightarrow} L_{\\alpha\/p}$ as\n$\\epsilon\\to0$.\nTherefore, to show (\\ref{eq:Levy-conv-egarch}) is suffices to show the\nnegligibility of $a_n^{-p}M_n^{(\\epsilon)}$. By Doob's martingale inequality we\nevaluate\n\\begin{align*}\n \\mathbb{E} &\\left[ \\left(\\sup_{t\\in [0,1]} a_n^{-p} \\sum_{i=1}^{[nt]} \\left\\{\n |Y_i|^p \\mathbf1_{\\{|Y_i|<\\epsilon a_n\\}} - \\mathbb{E} \\left[|Y_i|^p \\mathbf1_{\\{|Y_i|< \\epsilon a_n\\}}| {\\cal F}_{i-1} \\right] \\right\\} \\right)^2 \\right] \\\\\n & \\leq C na_n^{-2p} \\mathbb{E} \\left[ \\left(|Y_1|^p \\mathbf1_{\\{|Y_1|<\\epsilon\n a_n\\}} - \\mathbb{E} \\left[|Y_1|^p \\mathbf1_{\\{|Y_1|<\\epsilon a_n\\}}|{\\cal\n F}_{0}\\right]\\right)^2 \\right] \\\\\n &\\leq 4 C n a_n^{-2p} \\mathbb{E} \\left[|Y_1|^{2p} \\mathbf1_{\\{|Y_1|<\\epsilon a_n\\}} \\right] \\;.\n\\end{align*}\nRecall that $\\alpha<2p$. By Karamata's theorem (see \\cite[p. 25]{resnick:2007}),\n\\begin{align}\n \\label{eq:Karamata}\n \\mathbb{E} \\left[ |Y_1|^{2p} \\mathbf1_{\\{|Y_1|<\\epsilon a_n\\}}\\right] & \\sim\n \\frac{2\\alpha}{2p-\\alpha}(\\epsilon a_n)^{2p} \\bar F_Y(\\epsilon a_n) \\sim\n \\frac{2\\alpha}{2p-\\alpha}\\epsilon^{2p-\\alpha}a_n^{2p}n^{-1} \\; .\n\\end{align}\nApplying this and letting $\\epsilon\\to 0$ we conclude that\n$a_n^{-p}M_n^{(\\epsilon)}$ is uniformly negligible in $L^2$ and so in\nprobability, and thus we conclude that $a_n^{-p}M_n\\stackrel{\\scriptstyle \\mathcal D}{\\Rightarrow} L_{\\alpha\/p}$.\n\nFor $p>\\alpha$, $\\mathbb{E}[|Y_0|^p]=\\infty$. In that case it is well known (see\ne.g. 
\\cite[Theorem 3.1]{davis:hsing:1995}) that the convergence of $a_n^{-p}\nS_{p,n}$ to an $\\alpha\/p$-stable L\\'evy process follows directly from the\nconvergence of the point process $\\sum_{i=1}^n \\delta_{Y_i\/a_n}$ to a Poisson\npoint process, and that no centering is needed. In the present context, this\nentirely dispenses with the conditioning argument and the long memory part does\nnot appear. Therefore convergence to stable L\\'evy process always holds.\n\nAs for the sum $S_n$, since $\\mathbb{E}[Y_0]=\\mathbb{E}[Z_0]=0$, the long memory part $R_n$\nis identically vanishing, thus in this case also only the stable limit arises.\n\n\n\n\\subsection{Proof of Theorem~\\ref{theo:cov-lmsv}}\nLet $U_i = |Y_iY_{i+s}|$. We now write\n\\begin{align*}\n \\sum_{i=1}^{n} \\left(U_i^p - \\mathbb{E}[U_0^p] \\right) & = \\sum_{i=1}^{n} \\left(U_i^p -\n \\mathbb{E}[U_i^p \\mid {\\cal F}_{i-1}] \\right) + \\sum_{i=1}^{n}\n \\left(\\mathbb{E}[U_i^p \\mid {\\cal F}_{i-1}] - \\mathbb{E}[U_0^p] \\right) \\\\\n & = M_{n,s} + \\sum_{i=1}^{n} K_p^*(X_i,\\hat X_{i,s}) = M_{n,s} + T_{n,s}\\;.\n\\end{align*}\nAs mentioned above, the second part is the partial sum of a sequence\nof a function of the bivariate Gaussian sequence $(X_i,\\hat\nX_{i,s})$. The proof of the convergence to a stable law mimics the\nproof of Theorem \\ref{thm:partial-sums-egarch}. We split $M_{n,s}$\nbetween big jumps and small jumps. Write $M_{n,s}^{(\\epsilon)} +\n\\tilde M_{n,s}^{(\\epsilon)}$, with\n\\begin{align*}\n M_{n,s}^{(\\epsilon)} = \\sum_{i=1}^{n} \\left(U_i^p \\mathbf 1_{\\{U_i \\leq b_n\n \\epsilon\\}} - \\mathbb{E}[U_i^p \\mathbf 1_{\\{U_i \\leq b_n \\epsilon\\}} \\mid {\\cal\n F}_{i-1}] \\right) \\; .\n\\end{align*}\nThe point process convergence yields the convergence of the big jumps parts by\nthe same argument as in the proof of Theorem~\\ref{thm:partial-sums-egarch}. In\norder to prove the asymptotic negligibility of the small jumps parts, the only\nchange that has to be made comes from the observation that $\\tilde\nM_{n,s}^{(\\epsilon)}$ is no longer a martingale. However, assuming for\nsimplicity that we have $(s+1)n$ observations $Y_i$, we write, with $U_{i,k} =\nU_{(s+1)i-k} = |Y_{(s+1)i-k}Y_{(s+1)i+s-k}|$,\n$$\nM_{n,s}^{(\\epsilon)} = \\sum_{k=0}^s \\sum_{i=1}^{n} \\left\\{U_{i,k}^p \\mathbf\n 1_{\\{U_i \\leq b_n \\epsilon\\}} - \\mathbb{E}\\left[U_{i,k}^p \\mathbf 1_{\\{U_i \\leq b_n\n \\epsilon\\}} \\mid {\\cal F}_{(s+1)i-k-1}\\right] \\right\\} = : \\sum_{k=0}^s\nM_{n,s,k}^{(\\epsilon)} \\; .\n$$\nClearly, each $M_{n,s,k}^{(\\epsilon)}$, $k=0,\\ldots,s$, is a\nmartingale with respect to the filtration $\\{\\mathcal F_{i(s+1)}, 1\n\\leq i \\leq n\\}$, therefore we can apply Doob's inequality and\nconclude the proof with the same arguments as previously.\n\n\n\n\\subsection{Proof of Theorem~\\ref{theo:cov-egarch}}\nAgain, we mimic the proof of Theorem \\ref{thm:partial-sums-egarch}, however,\nsome technical modifications are needed. We use the decomposition between small\njumps and big jumps. To prove negligibility of the small jumps, we use the same\nsplitting technique as in the proof of Theorem~\\ref{theo:cov-lmsv}. To deal with\nthe big jumps, the only adaptation needed is to obtain a bound for the quantity\n\\begin{align}\n b_n^{-2p} n^2 \\rho_{n} \\mathbb{E} \\left[ \\left(\\mathbb{E}[|Y_0Y_{s}|^p \\mathbf\n 1_{\\{|Y_0Y_{s}| > \\epsilon b_n\\}}|{\\cal F}_{-1}] \\right)^2 \\right] \\; . 
\\label{eq:variance-a-borner}\n\\end{align}\nTo show that (\\ref{eq:proof-1d-egarch}) still holds in the present context, we\nmust prove that the expectation in~(\\ref{eq:variance-a-borner}) is of order\n$n^{-2} b_n^{2p}$. The rest of the arguments to prove the convergence of the\nbig jumps part remains unchanged.\nNote that $\\mathbb{E}[|Y_0Y_{s}|^p \\mathbf 1_{\\{|Y_0Y_{s}| > \\epsilon b_n\\}}|{\\cal\n F}_{-1}] = G(X_0,\\hat X_{0,s})$, thus we need an estimate for the bivariate\nfunction\n\\begin{align*}\n G(x,y) = \\sigma^p(x)\\mathbb{E} [|Z_0Z_s|^p \\sigma^p(c_s\\eta_0 + \\varsigma_s \\zeta+y)\n \\mathbf1_{\\{|Z_0Z_s| \\sigma(c_s\\eta_0 + \\varsigma_s\\zeta+y) > \\epsilon b_n\\}}] \\; ,\n\\end{align*}\nwhere $\\zeta$ is a standard Gaussian random variable, independent of\n$Z_0,\\eta_0$ and $Z_s$. We obtain this estimate first in the case\n$\\sigma(x)=\\exp(x)$ and then for subadditive functions.\n\nLet $\\sigma(x)=\\exp(x)$. As in the proof of point process\nconvergence, we write\n$$Y_0Y_{s}=Z_0Z_{s}\\exp(c_s\\eta_0)\\exp\\left(X_0+\\hat X_{s}\\right).$$\nBy Lemma~\\ref{lemma:asymp-indep-EGARCH-expo},\n$Z_0Z_{s}\\exp(c_s\\eta_0)$ is regularly varying and tail equivalent\nto $Z_0Z_{s}$. Since $ \\exp(p\\varsigma_s\\zeta)$ is independent of\n$Z_0Z_s\\exp(c_s\\eta_0)$ and has finite moments of all orders, we\nobtain that $Z_0Z_{s}\\exp(c_s\\eta_0)\\exp(p\\varsigma_s\\zeta)$ is also\ntail equivalent to $Z_0Z_s$, hence to $Y_0Y_1$. Thus, by Karamata's\nTheorem and Potter's bounds, we obtain, for some $\\delta>0$,\n\\begin{align*}\n G(x,y) &= \\exp(p(x+y)) \\mathbb{E} [|Z_0Z_s|^p \\exp(pc_s \\eta_0) \\exp(p\\varsigma_s\\zeta) \\mathbf1_{\\{|Z_0Z_s|\n \\exp(pc_s \\eta_0)\n\\exp(\\varsigma_s\\zeta)) > \\epsilon b_n \\exp(-y)\\}}]\\\\\n& \\leq C n^{-1} b_n^p\n \\exp(px) \\exp((p-\\alpha+\\delta)(y\\vee0)) \\; .\n\\end{align*}\nSince the log-normal distribution has finite moments of all order,\nwe obtain that $\\mathbb{E}[G^2(X_0,\\hat{X}_{0,s})] = O(n^{-2}b_n^{2p})$ which is\nthe required bound. This concludes the proof in the case $\\sigma(x)\n=\\exp(x)$.\n\nLet now the assumptions of Proposition~\\ref{prop:egarch-pp-quadratic} be\nin force. Using the subadditivity of $\\sigma^p$, we obtain $G(x,y) \\leq\n\\sum_{i=1}^4 I_i(x,y)$ with\n\\begin{align*}\n I_1(x,y) & = \\sigma^p(x)\\mathbb{E}[|Z_0Z_s|^p \\sigma^p(\\vartheta_s) \\mathbf 1_{\\{|Z_0Z_s| \\sigma (\\vartheta_s) > \\epsilon b_n\\}}],\\\\\n I_2(x,y )& = \\sigma^p(x)\\mathbb{E}[|Z_0Z_s|^p\\sigma^p(y) \\mathbf 1_{\\{|Z_0Z_s|\\sigma(y) > \\epsilon b_n\\}}],\\\\\n I_3(x,y) & = \\sigma^p(x)\\mathbb{E}[|Z_0Z_s|^p\\sigma^p(\\vartheta_s) \\mathbf 1_{\\{|Z_0Z_s|\\sigma(y)>\\epsilon b_n\\}}],\\\\\n I_4(x,y) & = \\sigma^p(x)\\mathbb{E}[|Z_0Z_s|^p\\sigma^p(y) \\mathbf 1_{\\{|Z_0Z_s|\\sigma(\\vartheta_s)>\\epsilon b_n\\}}] \\; ,\n\\end{align*}\nwhere for brevity we have denoted $\\vartheta_s = c_s\\eta_0+\n\\varsigma_s\\zeta$. We now give bound $\\mathbb{E}[I_j^2(X_0,\\hat{X}_{0,s})]$,\n$j=1,2,3,4$. 
Since by the assumptions, $|Z_0Z_s|\sigma(\vartheta_s)$\nis tail equivalent to $|Z_0Z_s|$, Karamata's Theorem yields\n\begin{align*}\n \sigma^p(x) \mathbb{E} [|Z_0Z_s|^p \sigma^p(\vartheta_s) \mathbf\n 1_{\{|Z_0Z_s|\sigma(\vartheta_s)>\epsilon b_n\}}] \le Cn^{-1}b_n^{p}\sigma^p(x) \; ,\n\end{align*}\nand since $\mathbb{E}[\sigma^{2p}(X_0)]<\infty$ by assumption, we obtain\nby integrating that $\mathbb{E}[I_1^2(X_0,\hat{X}_{0,s})] = O(n^{-2} b_n^{2p})$.\nFor $I_2$, using again Karamata's Theorem and Potter's bound, we\nobtain, for some $\delta>0$,\n\begin{align*}\n \sigma^p(x) \mathbb{E} [|Z_0Z_s|^p \sigma^p(y) \mathbf1_{\{|Z_0Z_s|\sigma(y)>\epsilon\n b_n\}}] \le Cn^{-1} b_n^{p} \sigma^p(x) (\sigma(y)\vee1)^{p-\alpha+\delta} \; ,\n\end{align*}\nso that $\mathbb{E}[I_2^2(X_0,\hat{X}_{0,s})] = O(n^{-2}b_n^{2p})$ as well.\nSince $|Z_0|\sigma(\vartheta_s)$ is tail equivalent to $|Z_0|$ and $Z_s$ is\nindependent of $Z_0\sigma(\vartheta_s)$, we easily obtain a bound for the tail\nof $|Z_0Z_s|(\sigma(\vartheta_s)\vee 1)$:\n$$\n\mathbb{P} (|Z_0Z_s|(\sigma(\vartheta_s)\vee 1)>x) \le\n\mathbb{P}(|Z_0Z_s|\sigma(\vartheta_s)>x) + \mathbb{P}(|Z_0Z_s|>x) \leq C\n\mathbb{P}(|Z_0Z_s|>x) \; ,\n$$\nfor $x$ large. Thus, applying Karamata's Theorem and Potter's bound\nto $|Z_0Z_s|$ yields, for some arbitrarily small $\delta>0$,\n\begin{align*}\n I_3(x,y) \leq C \sigma^p(x) \mathbb{E}[|Z_0Z_s|^p \mathbf 1_{\{\sigma(y)|Z_0Z_s| > \epsilon\n b_n\}}] \leq C n^{-1} b_n^p \sigma^p(x) (\sigma(y) \vee 1)^{\alpha+\delta} \;\n\end{align*}\nand thus we conclude that $\mathbb{E}[I_3^2(X_0,\hat{X}_{0,s})] = O(n^{-2}b_n^{2p})$.\nFinally, we write\n$$\nI_4(x,y) \le \sigma^p(x)\sigma^p(y) \mathbb{E}[|Z_0Z_s|^p\left(\sigma^p(\vartheta_s)\vee\n 1\right) \mathbf 1_{\{|Z_0Z_s|\left(\sigma(\vartheta_s)\vee 1\right)>\epsilon b_n\}}]\n$$\nand by the same argument as for $I_3$ we obtain that\n$\mathbb{E}[I_4^2(X_0,\hat{X}_{0,s})] = O(n^{-2}b_n^{2p})$. This completes the proof.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nThe study of the inequalities involving the classical means\nsuch as arithmetic mean $A$, geometric mean $G$, identric mean $I$ and logarithmic mean $L$ has been of extensive interest for several authors, e.g., see \\cite{alzer1, alzer2,carlson,hasto,ns1004, ns1004a,sandorc,sandord,sandore,vvu}. \n\n\nFor two positive real numbers $a$ and $b$, the S\\'andor mean \n$X(a,b)$ (see \\cite{sandortwosharp})\nis defined by\n$$X=X(a,b)=Ae^{G\/P-1},$$\nwhere\n$A=A(a,b)=(a+b)\/2, G=G(a,b)=\\sqrt{ab},$ and\n$$P=P(a,b)=\\frac{a-b}{2\\displaystyle\\arcsin\\left(\\frac{a-b}{a+b}\\right)},\\quad a\\neq b,$$\nare the arithmetic mean, geometric mean, and Seiffert mean \n\\cite{seiff}, respectively.\n\nRecently, S\\'andor \\cite{sandornew} introduced a new mean $Y(a,b)$ \nfor two positive real numbers $a$ and $b$, which is defined by\n$$Y=Y(a,b)=Ge^{L\/A-1},$$\nwhere\n$$L=L(a,b)=\\frac{a-b}{\\log(a)-\\log(b)},\\quad a\\neq b$$\nis the logarithmic mean. 
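For a quick numerical illustration of these definitions, all of the above means are easily evaluated directly; the following minimal Python sketch (assuming only the standard library, and given here merely as an illustration, not as part of the formal development) does so:\n\\begin{verbatim}\nimport math\n\ndef means(a, b):\n    # classical means of a, b > 0 with a != b\n    A = (a + b) \/ 2                                    # arithmetic\n    G = math.sqrt(a * b)                               # geometric\n    H = 2 * a * b \/ (a + b)                            # harmonic\n    P = (a - b) \/ (2 * math.asin((a - b) \/ (a + b)))   # Seiffert\n    L = (a - b) \/ (math.log(a) - math.log(b))          # logarithmic\n    X = A * math.exp(G \/ P - 1)                        # Sandor mean X\n    Y = G * math.exp(L \/ A - 1)                        # Sandor mean Y\n    return A, G, H, P, L, X, Y\n\n# For a = 2, b = 1 this gives, numerically,\n# H < Y < G < L < X < P < A.\nprint(means(2.0, 1.0))\n\\end{verbatim}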
For two positive real numbers $a$ and $b$, the identric mean and harmonic mean are defined by\n$$I=I(a,b)=\\frac{1}{e}\\left(\\frac{a^a}{b^b}\\right)^{1\/(a-b)},\\quad a\\neq b,$$\nand \n$$H=H(a,b)=2ab\/(a+b),$$\nrespectively.\nFor the historical background and the generalization of these means we refer the reader to see, e.g, \\cite{alzer2,carlson,mit,ns1004,ns1004a,sandor611,sandorc,sandord,sandore,sandor1405,sandorF,vvu}.\nConnections of these means with the trigonometric or hyperbolic inequalities can be found in \\cite{barsan,sandora,sandornew,sandore}.\n \n\n\nFor $p\\in \\mathbb{R}$ and $a,b>$ with $a\\neq b$, the $p$th power mean \n$M_p(a,b)$ and $p$th power-type Heronian mean $H_p$(a,b) are define by \n\n$$\nM_p=M_p(a,b)=\n\\begin{cases}\n\\displaystyle\\left(\\frac{a^p+b^p}{2}\\right)^{1\/p}, & p\\neq 0,\\\\\n\\sqrt{ab}, & p=0,\n\\end{cases}\n$$\nand \n$$\nH_p=H_p(a,b)=\n\\begin{cases}\n\\displaystyle\\left(\\frac{a^p+(ab)^{p\/2}+b^p}{3}\\right)^{1\/p}, & p\\neq 0,\\\\\n\\sqrt{ab}, & p=0,\n\\end{cases}\n$$\nrespectively.\n\nIn \\cite{sandornew}, S\\'andor proved inequalities of $X$ and $Y$ means in terms of other classical means, as well as their relations with each other as follows. \n\n\\begin{theorem}\\label{sandortheorem} For $a,b>0$ with $a\\neq b$, one has\n\\begin{enumerate}\n\\item $G<\\frac{AG}{P}0$ with $a\\neq b$, one has\n\\begin{enumerate}\n\\item $\\frac{1}{e}(G+H)< Y<\\frac{1}{2}(G+H),$\\\\\n\\item $G^2I0$ with $a\\neq b$ if and only if $p\\leq 1\/3$ and\n$q\\geq \\log (2)\/(1+\\log(2))\\approx 0.4093.$\n\nRecently, Zhou et al. \\cite{zhouet} proved that for all $a,b>0$ with $a\\neq b$,\nthe following double inequality\n$$H_\\alpha < X < H_\\beta$$\nholds if and only if $\\alpha \\leq 1\/2$ and $\\beta\\geq \\log(3)\/(1+\\log(2))\n\\approx 0.6488$.\n\n\n\nMaking contribution to the topic, in this paper author refines some previous results appeared in \\cite{ barsan,sandornew} by giving the following theorems. \n\n\\begin{theorem}\\label{thm1} For $a,b>0$, we have\n\\begin{equation}\\label{30ineqa}\n\\alpha G+(1-\\alpha)A0$, we have\n$$a\\sqrt{GH}0$, we have\n\\begin{equation}\\label{today}\n\\left(\\frac{2+G\/A}{2+A\/G}\\right)^3<\\frac{H}{A}<\n\\left(\\frac{2+G\/A}{2+A\/G}\\right)^2,\n\\end{equation}\n\n\\begin{equation}\\label{ineq0208c}\n\\frac{G}{L}<\\left(\\frac{2}{1+A\/G}\\right)^{2\/3}<\n\\left(\\frac{1+G\/A}{2}\\right)^{2\/3}<\\frac{P}{A}.\n\\end{equation}\n\\end{theorem}\n\n\n\\begin{theorem}\\label{thm30} We have\n$$(AX)^{1\/\\alpha_2}b>0$, $x\\in(0,\\pi\/2)$ and $y>0$, one has\n\\begin{equation}\\label{ineq1}\n\\frac{P}{A}= \\frac{\\sin (x)}{x},\\ \\frac{G}{A} = \\cos(x),\\, \\frac{H}{A}= \\cos(x)^2,\\ \n\\frac{X}{A}= e^{x {\\rm cot}(x)-1}, \n\\end{equation}\n\n\\begin{equation}\\label{ineq2} \n\\frac{L}{G}= \\frac{\\sinh (y)}{y},\\, \\frac{L}{A}= \\frac{\\tanh (y)}{y},\\\n \\frac{H}{G}= \\frac{1}{\\cosh (y)},\\, \n\\frac{Y}{G}= e^{\\tanh (y)\/y -1}. 
\\end{equation}
\n\\end{equation}\n\n\\begin{equation}\\label{ineq3} \n\\log\\left(\\frac{I}{G}\\right)=\\frac{A}{L}-1,\\quad \n\\log\\left(\\frac{Y}{G}\\right)=\\frac{L}{A}-1.\n\\end{equation}\n\\end{lemma}\n\n\n\n\\begin{remark}\\rm Recently, the following inequality\n\\begin{equation}\\label{ineq5}\ne^{(x\/\\tanh(x)-1)\/2}<\\frac{\\sinh(x)}{x},\\quad x>0,\n\\end{equation}\nappeared in \\cite[Theorem 1.6]{barsan3}, which is equivalent to \n$$\\frac{\\sinh(x)}{x}>e^{x\/\\tanh(x)-1}\\frac{x}{\\sinh(x)}.$$ By Lemma \\ref{lemma1}, this can be written as\n$$\\frac{L}{G}>\\frac{I}{G}\\cdot \\frac{G}{L}=\\frac{I}{L},$$\nor \n\\begin{equation}\\label{ineq6}\nL>\\sqrt{IG}.\n\\end{equation}\nThe inequality \\eqref{ineq6} was proved by Alzer \\cite{alzer2}. For the convenience of the reader, we write that inequality \\eqref{ineq5}\nimplies the inequality \\eqref{ineq6} as follows:\n\n\n\\begin{equation}\n\\begin{cases}\n\\displaystyle e^{(x\/\\tanh(x)-1)\/2}<\\frac{\\sinh(x)}{x}, & x>0,\\\\\nL>\\sqrt{IG}.\n\\end{cases}\n\\end{equation}\n\n\nThe Adamovi\\'c-Mitrinovic inequality and Cusa- Huygens\ninequality \\cite{mit} imply the double double inequality for Seiffert mean \n$P$ as follows:\n\\begin{equation}\n\\begin{cases}\n\\displaystyle \\cos(x)^{1\/3}<\\frac{\\sin(x)}{x}<\\frac{2+\\cos(x)}{3}, & x\\in(0,\\pi\/2),\\\\\n\\sqrt[3]{A^2G}0,\\\\\n\\sqrt[3]{AG^2}0$ for all $n$, then the function $A(x)\/C(x)$ is also\nincreasing (decreasing) on $(0,R)$.\n\\end{lemma}\n\n\n\nFor $|x|<\\pi$, the following power series expansions can be found in \\cite[1.3.1.4 (2)--(3)]{jef},\n\\begin{equation}\\label{xcot}\nx \\cot x=1-\\sum_{n=1}^\\infty\\frac{2^{2n}}{(2n)!}|B_{2n}|x^{2n},\n\\end{equation}\n\n\\begin{equation}\\label{cot}\n\\cot x=\\frac{1}{x}-\\sum_{n=1}^\\infty\\frac{2^{2n}}{(2n)!}|B_{2n}|x^{2n-1},\n\\end{equation}\nand \n\\begin{equation}\\label{coth}\n{\\rm \\coth x}=\\frac{1}{x}+\\sum_{n=1}^\\infty\\frac{2^{2n}}{(2n)!}|B_{2n}|x^{2n-1},\n\\end{equation}\nwhere $B_{2n}$ are the even-indexed Bernoulli numbers \n(see \\cite[p. 231]{IR}). \nWe can get the following expansions directly from (\\ref{cot}) and (\\ref{coth}),\n\n\\begin{equation}\\label{cosec}\n\\frac{1}{(\\sin x)^2}=-(\\cot x)'=\\frac{1}{x^2}+\\sum_{n=1}^\\infty\\frac{2^{2n}}{(2n)!}\n|B_{2n}|(2n-1)x^{2n-2},\n\\end{equation}\n\n\\begin{equation}\\label{cosech}\n\\frac{1}{(\\sinh x)^2}=-({\\rm coth} x)'=\\frac{1}{x^2}-\\sum_{n=1}^\\infty\\frac{2^{2n}}{(2n)!}(2n-1)|B_{2n}|x^{2n-2}.\n\\end{equation}\nFor the following expansion formula \n\\begin{equation}\\label{xsin}\n\\frac{x}{\\sin x}=1+\\sum_{n=1}^\\infty\\frac{2^{2n}-2}{(2n)!}|B_{2n}|x^{2n}\n\\end{equation}\nsee \\cite{li}.\n\n\n\\begin{lemma}\\cite[Theorem 2]{avv1}\\label{lem0}\nFor $-\\infty2,$$\nwhich is equivalent to \n$$\\frac{\\sin(x)}{x}<\\frac{x+\\sin(x)\\cos(x)}{2\\sin(x)}.$$ Applying the Cusa-Huygens inequality\n$$\\frac{\\sin(x)}{x}<\\frac{\\cos(x)+2}{3},$$ we get\n$$\\frac{\\cos(x)+2}{3}<\\frac{x+\\sin(x)\\cos(x)}{2\\sin(x)},$$\nwhich is equivalent to $(\\cos(x)-1)^2>0$. Thus $f'_3 >0$, clearly $f'_1\/f'_2$ is strictly decreasing in $x\\in(0,\\pi\/2)$. By Lemma \\ref{lem0}, we conclude that the function $f(x)$ is strictly decreasing in \n$x\\in(0,\\pi\/2)$. The limiting values follows easily. This completes the proof of the lemma. \n\\end{proof}\n\n\n\n\n\n\\begin{lemma}\\label{lemma3} The following function\n$$f_4(x)=\n\\frac{\\sin (x)}{x \\left(\\cos (x)-e^{x \\cot (x)-1}+1\\right)}$$\nis strictly increasing from $(0,\\pi\/2)$ onto $(1,c)$, where\n$c=2e\/(\\pi(e-1))\\approx 1.0071$. 
In particular, for $x\\in(0,\\pi\/2)$ we have\n$$1+\\cos(x)-e^{x\/\\tan(x)-1}<\\frac{\\sin(x)}{x}0$, and $f_4(x)$\n\tis strictly increasing. The limiting values follows easily. This implies the proof.\n\\end{proof}\n\\section{Proofs}\n\n\n\n\n\\noindent{\\bf Proof of Theorem \\ref{thm1}.} It follows from Lemma \\ref{lemma2} that\n$$\\frac{e-1}{e}<\\frac{1-1\/e^{1-x\/\\tan(x)}}{\\cos(x)\/e^{1-x\/\\tan(x)}-1\/e^{1-x\/\\tan(x)}}<\\frac{2}{3}.$$\nNow we get the proof of \\eqref{30ineqa} by utilizing the Lemma \\ref{lemma1}.\nThe proof of \\eqref{30ineqb}\nfollows easily from Lemmas \\ref{lemma1} and \\ref{lemma2}.\n$\\hfill\\square$\n\n\n\n\n\\bigskip\n\n\\noindent{\\bf Proof of Theorem \\ref{0208thm}.} For the proof of the first inequality see \\cite[Theorem 7(2)]{barsan}.\nFor the validity of the following inequality\n$$\\frac{\\sinh(x)-\\cosh(x)}{2x\\cosh(x)}<\\log\\left(\\frac{1}{\\cosh(x)}\\right)$$\nsee \\cite{barsan3}, which is equivalent to \n\\begin{equation}\\label{ineq0208b}\n\\sqrt{\\cosh(x)}\\cdot\\exp{\\tanh(x)\/x-1}<1.\n\\end{equation}\nBy Lemma \\ref{lemma1} the inequality \\eqref{ineq0208b}\nimplies the proof of the second inequality.\n\n\n$\\hfill\\square$\n\n\n\\noindent{\\bf Proof of Theorem \\ref{thm3}.} Let $g(x)=g_1(x)\/g_2(x)$, where\n$$g_1(x)=\\log\\left(\\frac{2+1\/\\cos(x)}{2+\\cos(x)}\\right),\\quad\ng_2(x)=\\log\\left(\\frac{1}{\\cos(x)}\\right),$$\nfor all $x\\in(0,\\pi\/2)$. Differentiating with respect to $x$ we get\n$$\\frac{g'_1(x)}{g'_2(x)}=1-\\frac{1}{5+2\\cos(x)+2\/\\cos(x)}=g_3(x).$$\nThe function $g_3(x)$ is strictly increasing in $x\\in(0,\\pi\/2)$, because\n$$g'_3(x)=\\frac{6\\sin(x)^3}{(3+5\\cos(x)+\\cos(x)^2-\\sin(x)^2)^2}\n>0.$$ Hence $g'_1(x)\/g'_2(x)$ is strictly increasing, and clearly \n$g_1(0)=0=g_2(0)$. Since the function $g(x)$ is stricty increasing by Lemma \\ref{lem0}, and we get\n$$\\lim_{x\\to 0} g(x)=\\frac{2}{3}0,$$\ngives\n\\begin{equation}\\label{ineq0208e}\n\\left(\\frac{1+G\/H}{2}\\right)^{2\/3}<\\frac{L}{G}<\n\\left(\\frac{1+G\/H}{2}\\right).\n\\end{equation}\nNow the first and the third inequality in \\eqref{ineq0208c} are obvious from \\eqref{ineq0208d} and \\eqref{ineq0208d}. For the proof of the second inequality in \\eqref{ineq0208c}, it is enough to prove that \n$$\\frac{2}{1+x}<\\frac{1+1\/x}{2},\\quad x>1,$$ \nwhich holds true, because it can be simplified as\n$$(1-x)^2>0.$$\nThis completes the proof of theorem.\n$\\hfill\\square$\n\n\n\n\\begin{corollary} For $a,b>0$ with $a\\neq b$, we have\n$$\\frac{I}{L}<\\frac{L}{G}<1+\\frac{G}{H}-\\frac{I}{G}.$$\n\\end{corollary}\n\n\\begin{proof} The first inequality is due to Alzer \\cite{alzer2}, while the second inequality follows from the fact that the function\n$$x\\mapsto \\frac{1-e^{x\/\\tanh(x)-1}}{1-\\cosh(x)}:(0,\\infty)\\to (0,1)$$\nis strictly decreasing. 
The proof of the monotonicity of the function is analogous to the proof of Lemma \ref{lemma2}.
\end{proof}

\noindent{\bf Proof of Theorem \ref{thm30}.} The proof follows easily from Lemma \ref{lemma5}.
$\hfill\square$

\bigskip

In \cite{Seif2}, Seiffert proved that
\begin{equation}\label{seifineq}
\frac{2}{\pi}A<P<A
\end{equation}
for $a,b>0$ with $a\neq b$.
As a counterpart of the above result we give the following inequalities.

\begin{corollary}\label{coro89} For $a,b>0$ with $a\neq b$, the following inequalities hold:
$$\frac{1}{e}A<\frac{\pi}{2e}\,P<\cdots$$
\end{corollary}

\begin{corollary} For $a,b>0$ with $a\neq b$, we have
$$L<\cdots$$
\end{corollary}

\begin{table}
\centering
\begin{tabular}{l @{\qquad} >{$\displaystyle}l<{$} @{\qquad} >{$\displaystyle}l<{$}}
\toprule
& d=3 & d=2 \\
\midrule
Coulomb term & - \frac{g^2 C_F}{(4\pi)^2} \, s_p \, \abs{\vp} \, \frac{8}{3} \ln\Lambda & \text{finite} \\[3ex]
Terms involving $V$ & \frac{g^2 C_F}{(4\pi)^2} \, s_p \biggl[ -2\Lambda + \abs{\vp} \ln\Lambda \biggl(-\frac{2}{3}+\frac{4}{1+s_p^2}\biggr) \biggr]
& - \frac{g^2 C_F}{(4\pi)^2} \, s_p \ln\Lambda \\[4ex]
Terms involving $W$ & \frac{g^2 C_F}{(4\pi)^2} \, s_p \biggl[ 2\Lambda + \abs{\vp} \ln\Lambda \biggl(\frac{10}{3}-\frac{4}{1+s_p^2}\biggr) \biggr]
& \frac{g^2 C_F}{(4\pi)^2} \, s_p \ln\Lambda \\
\bottomrule
\end{tabular}
\caption{Comparison of the $d=3$ and $d=2$ UV divergences of the gap equation~\eqref{x1}
stemming from the Coulomb term, the kernel~$V$, and the kernel~$W$.}
\label{tab:div}
\end{table}

\section{Numerical results}
\label{sec:num}
In $d=2$ spatial dimensions the squared coupling constant~$g^2$ has the dimension
of energy, and we express all dimensionful quantities in terms of~$g^2$.
The colour Coulomb potential can be assumed in the form
\[
g^2 F(\vp) = \frac{g^2}{\vp^2} + \frac{2\pi\sigma^{}_{\mathrm{C}}}{\abs{\vp}^3} ,
\]
which consists of the perturbative part ($\propto1/\vp^2$) and the linearly
rising, confining part. For the gluon propagator \Eqref{gluonprop} we use the
Gribov formula \cite{Gribov:1977wm}
\begin{equation}\label{gribov}
\Omega(\vp) = \sqrt{\vp^2 + \frac{m_A^4}{\vp^2}} ,
\end{equation}
which excellently fits the lattice data in $d=3$ \cite{Burgio:2008jr}.
The infrared analysis of the ghost propagator DSE reveals a relation between the
Gribov mass $m_A$ and the Coulomb string tension $\sigma_{\mathrm{C}}$. When the
angular approximation is used one finds \cite{Feuchter:2007mq}
\[
m_A^2 = \frac{5 N_{\mathrm{c}}}{12} \, \sigma^{}_{\mathrm{C}} ,
\]
while abandoning the angular approximation one obtains \cite{Schleifenbaum:2006bq}
\[
m_A^2 = 4 N_{\mathrm{c}} \biggl(\frac{\Gamma(3/4)}{\Gamma(1/4)}\biggr)^{\!\!2} \sigma^{}_{\mathrm{C}} .
\]
The two values are numerically very close to each other. The Coulomb string
tension~$\sigma_{\mathrm{C}}$ is an upper bound for the Wilson string tension~$\sigma$ \cite{Zwanziger:2002sh},
and in three spatial dimensions we have
$\sigma_{\mathrm{C}}\simeq 4 \sigma$ \cite{Greensite:2015nea}.
We have no reliable data for the ratio $\sigma_{\mathrm{C}}/\sigma$
in $d=2$. 
Since we are interested mostly in a qualitative analysis, we choose
$\sigma_{\mathrm{C}} \approx \sigma$.
For the Wilson string tension we take the value \cite{Karabali:1998yq,Bringoltz:2006zg}
\[
\sigma = g^4 \frac{\Nc^2-1}{8\pi} .
\]

For numerical stability it is convenient to reformulate the gap equation~\eqref{x1} in
terms of the pseudo-mass function
\[
m(p) = \frac{2 p s_p}{1-s^2_p} .
\]
The resulting gap equation can be found in Refs.~\cite{Vastag:2015qjd,Campagnari:2016wlt}.
The results of the numerical solution of this equation are shown in Fig.~\ref{fig:res}. As in the three-dimensional case,
the main contribution to the dynamical mass generation comes from the colour
Coulomb potential [first line in \Eqref{x1}]. The inclusion of the coupling to
the transverse gluons only slightly increases the mass function.

\begin{figure}
\centering
\includegraphics[width=.45\linewidth]{mass2d_lin}\hfill
\includegraphics[width=.45\linewidth]{mass2d_log}
\caption{Results (left: linear plot, right: logarithmic plot) for the pseudo-mass
function $m_p$ in units of $g^2$ with the colour Coulomb potential alone (dashed
line) and with the coupling to the transverse gluons included (continuous line).}
\label{fig:res}
\end{figure}

\section{Conclusions}
\label{sec:conc}
In this paper we have investigated the dynamical generation of mass in QCD in $d=2$ spatial
dimensions within the Hamiltonian approach in Coulomb gauge. Somewhat surprisingly, despite
the fundamental differences in the representation of the Lorentz group, most results obtained
in $d=3$ hold also in $d=2$. In particular, the inclusion of the non-perturbative vector
kernel $W$ in the bare quark-gluon vertex $\bar\Gamma_0$ [\Eqref{bqgv}] (in addition to the
leading kernel $V$, which exists also in perturbation theory) makes the gap equation UV finite,
as in $d=3$. Furthermore, just as in $d=3$, the coupling of the quarks to the spatial gluons
only slightly increases the dynamical mass generation. As in $d=3$, this effect is absolutely dominated
by the colour Coulomb potential \Eqref{coulkernel}, which arises from the elimination of the
temporal gluons $A_0$ in the Hamiltonian approach and, in fact, represents the instantaneous
part of the propagator $\vev{A_0 A_0}$.

\begin{acknowledgments}
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under contract
No.~DFG-Re856/10-1.
\end{acknowledgments}

\bibliographystyle{h-physrev5}

\section{Introduction}

Marine scientists spend enormous amounts of resources on understanding and studying life in our oceans. These studies hold numerous benefits for environmental protection and scientific advancement, including the ability to identify areas of the ocean where certain habitats and substrates exist and where certain species gather.

A common method for studying underwater habitats consists of planning underwater routes, called transects, then following those paths and recording the environment either by a diver with a camera or using an underwater ROV \cite{shester2017exploring, drap2015rov}. 
Once the transects have been recorded and videos matched with their GPS locations, common annotation methods require researchers to review each video several times, annotating the substrates that the camera passes over in the first few annotation passes, then counting invertebrates in another pass, and then counting fish species in a final pass, in order to establish where in the ocean each substrate exists and where different species live. This information is vital to determining species hotspots and finding ways to protect the environment while also meeting human needs for usage of our oceans. These studies ultimately lead to new discoveries as they facilitate exploration of unknown oceanic regions. Currently, however, the sheer amount of data researchers collect can be overwhelmingly expensive and difficult to annotate and utilize, as the multiple passes of their annotation methods can push annotation times to many times the duration of the video.

Computer vision and machine learning models can significantly aid in managing, utilizing, analyzing, and understanding these videos, ultimately reducing the overall costs of these studies and freeing researchers from tedious annotation tasks. However, developing and training these models require annotated data. Further, the types of annotations generated and used by domain scientists do not directly correspond with the typical types of annotations generated and used by computer vision researchers, requiring new approaches to learning from video data and their annotations.

As a step toward efficient computational analysis of videos from a marine science setting, we introduce DUSIA, a real world scientific dataset including videos collected and annotated by marine scientists who directly use a superset of these videos to advance their own research and exploration. To our knowledge, DUSIA is the first public dataset to contain videos recorded in this challenging moving-camera setting where an underwater ROV drives and records over the ocean floor. This dataset allows us to create solutions to a host of difficult computer vision problems that have not yet been explored, such as classifying and temporally localizing underwater habitats and substrates, counting and tracking invertebrate species as they appear in ROV video, and using these explicit substrate and habitat classifications to help detect and classify invertebrate species. Further, the types of annotations provided in DUSIA differ from those of typical computer vision datasets, requiring new approaches to learning.

Our contributions can be summarized as follows:
 \begin{itemize}
 \item DUSIA provides the first publicly available dataset of annotated, full-length videos captured via an underwater ROV. DUSIA's videos are annotated by expert marine scientists with temporal labels indicating substrates, count labels for 59 invertebrate species, partial bounding box labels for ten invertebrate species of interest in the training set, and full bounding box labels for those species of interest in the validation and testing sets.
 \item We introduce the novel Context-Driven Detector (CDD), which uses implicit context representations and explicit context labels to improve bounding box detections. In our case, context refers to explicit class labels of the background. Specifically, our context labels describe the substrate present on the ocean floor, which determines the environment and habitat in which the organisms live. 
In natural images, context might refer to indoor vs outdoor scenes or to subcategories within them, such as school, office, library, or supermarket.
 \item We propose Negative Region Dropping, an approach for improving performance of an object detector trained on a dataset with partially annotated images.
 \item Finally, we offer a baseline method for counting invertebrate species individuals in this challenging setting using a detection plus tracking pipeline.
 \end{itemize}

In Section \ref{sec:related} we review other datasets and methods with similar data and highlight how DUSIA differs from previous datasets. Next, in Section \ref{sec:dataset} we discuss the contents and collection of DUSIA's data and annotations. Section \ref{sec:tasks} describes some of the tasks for which DUSIA can be used, and Section \ref{sec:methods} discusses our approaches to those tasks including the novel CDD, Negative Region Dropping, and baseline tracking method. Section \ref{sec:experiments} describes our experiments and results, and Section \ref{sec:discussion} discusses our findings.

\section{Related Works}
\label{sec:related}

Current achievements by deep learning-based vision models do not translate well when it comes to analyzing underwater animals and habitats, as there exists a scarcity of well-annotated underwater data. Although there are a few efforts from the computer vision community to collect and annotate underwater data \cite{pedersen2019detection,king2018comparison,boom2014research, marini2018tracking,joly2014lifeclef}, these are hardly enough to tackle this daunting problem, and few of these efforts collect data in the same way or provide annotations for the same goals. In general, collecting underwater image or video data is far more difficult than collecting land data or day-to-day images of common objects.
As a result, the whole data collection process becomes complicated and expensive. DUSIA aims to be a collaborative, comprehensive effort to guide the exploration and automated analysis of underwater ecosystems.
\begin{figure}
 \centering
 \includegraphics[width=0.95\linewidth]{hab_sub_fig.png}
 \caption{Illustration of the ROV attached to the catamaran, substrate layers, and habitat characterization. Substrates are divided into soft (mud, cobble and sand), hard (rock and boulder), or mixed (a combination of any soft and hard substrates). Illustration courtesy of Marine Applied Research and Exploration (MARE) Group.}
 \label{fig:ROV}
\end{figure}
\subsection{Underwater Marine Datasets}
Many of the existing underwater marine datasets are developed in order to detect and recognize the various behaviors or simply the presence of fish~\cite{konovalov2019underwater,maaloy2019spatio, boom2014research,joly2014lifeclef,levy2018automated}. Numerous current works~\cite{konovalov2019underwater,maaloy2019spatio, levy2018automated,ditria2020automating} have validated their fish detection and fish behavior recognition models on these datasets. Interestingly, these methods mainly focus on developing novel data-hungry algorithms, but the data on which the algorithms operate is limited by its static perspective. For example, Maaloy et al ~\cite{maaloy2019spatio} proposed a dual spatial-temporal recurrent network, but the algorithm is trained and tested on a dataset that is constrained by having no camera movement and working in a covered area. 
Similarly, Konovalov et al ~\cite{konovalov2019underwater} augment their dataset of underwater fish images with the underwater non-fish images from VOC2012 ~\cite{everingham2015pascal} by restricting their model to generating only binary (fish vs. no fish) predictions. In the same way, ~\cite{ditria2020automating,levy2018automated} confined their models to analyzing only a single fish. In contrast, DUSIA provides dynamic, high definition ROV video showcasing a rich and varied environment with many species occurring in intermingling groups.

\def 2cm {1.5cm}
\begin{figure*}[t!]
 \centering
 \includegraphics[height=2cm{}]{Bcrop.png}
 \includegraphics[height=2cm{}]{Ccrop.png}
 \includegraphics[height=2cm{}]{Mcrop.png}
 \includegraphics[height=2cm{}]{Rcrop.png}

 \caption{Example frames each containing just one substrate, indicated by the in-frame text}
 \label{fig:subs}
\end{figure*}

Additionally, unlike existing datasets, a novel feature of DUSIA is the utilization of explicit, human-annotated, contextual information such as substrates or habitat in the analysis workflow. Such contextual information can play a vital role in making accurate predictions, especially in the case of identifying fish or other marine animals. Recently, ~\cite{rashid2020trillion} has developed a large-scale dataset for habitat mapping using both RGB images and hyperspectral images. This dataset contains a large number of annotated images for classifying different coral reef habitats, but no marine animal information is included. DUSIA, in contrast, is unique in this aspect, as it has both explicit substrate and invertebrate annotations.

\subsection{Methodologies}
As mentioned in the previous section, different works have recently developed deep learning-based algorithms to detect marine species (mostly fishes). Li et al ~\cite{li2015fast} use a Fast-RCNN ~\cite{girshick2015fast} based network to classify twelve different species of fish. Salman et al ~\cite{salman2016fish} present a deep network to detect fish in 32x32 size video frames. Siddiqui et al ~\cite{siddiqui2018automatic} use a pre-trained object detection CNN network as a generalized feature extractor. The extracted features are then fed to an SVM (support vector machine) for classification of fish. Our baseline method aims to alleviate some of these methods' shortcomings by using explicit substrate predictions to enhance species detections.

\section{Dataset} \label{sec:dataset}
DUSIA consists of over 10 hours of footage captured from preplanned transects along the ocean floor near the Channel Islands of California. This includes 25 HD videos recorded using RGB video cameras attached to an observation class ROV equipped with multiple lighting fixtures recording at depths between 100 and 400 meters. Three of the 25 videos do not contain species of interest, so they are excluded from the experiments presented in this paper. DUSIA's videos are part of a large collection, and we plan to release more similar videos from different excursions in the future.

\subsection{Data Collection}
Surveys of wildlife on the ocean floor generally start with planning a group of paths, called transects, across some region in order to efficiently cover and survey one section of the ocean \cite{shester2017exploring}; however, to protect these fragile ecosystems, DUSIA does not make specific GPS coordinates publicly available. 
Some surveys use scuba divers to collect video along transects, but DUSIA covers larger, deeper areas using an ROV attached to a 77-foot catamaran. During the collection process, the ROV is attached via cable to the catamaran. Once the boat arrives near the beginning of the desired transects, the ROV is placed in the water and remains on a long leash attached to the boat, such that the catamaran can follow the transects roughly while the ROV follows its path more precisely. The ROV is steered by a remote operator on the boat, who makes use of the ROV's cameras, lights, GPS, and other instruments that indicate the ROV's location relative to the boat, which in turn allows its GPS location to be computed. Figure \ref{fig:ROV} roughly illustrates the ROV rig used for data collection.

\subsection{Substrate Classes and Annotations}
After the collection stage, researchers return to a laboratory where they review, analyze, and annotate each video. DUSIA includes four different substrates: boulder, cobble, mud, and rock. An illustration of each one is shown in Figure \ref{fig:ROV} and frames from the dataset are shown in Figure \ref{fig:subs}. The distinction between them depends on the material makeup of the ocean floor. A description of each substrate can be found in Table \ref{tab:sub_des}, and Table \ref{tab:an_ex} shows a toy example of the annotation format.

\begin{table}[t]
\centering
\begin{tabular}{p{0.13\linewidth} p{0.81\linewidth}}
\toprule
Substrate & Description \\ \midrule
Boulder & rocky substrate larger than 25 cm in diameter that is detached and clearly movable \\
Cobble & rocky substrate that is 6 to 25 cm in diameter \\
Mud & very fine sediments that stay suspended in the water when disturbed (loss of visibility) \\
Rock & consolidated rocky substrates that appear attached to the bottom and not movable \\
\bottomrule
\end{tabular}
\caption{Description of the four substrates present in DUSIA}
\label{tab:sub_des}
\end{table}

These substrates may overlap, so a given frame can have multiple substrate labels if enough of multiple substrates is visible. The annotation process includes multiple passes, one for each substrate, where the annotators indicate the start and end times of each substrate occurrence. This arduous process can be alleviated by our methods.

\subsection{Invertebrate Classes and Annotations} \label{sec:invert_annos}
Once the substrate annotations are completed, scientists make yet another pass over each video, this time annotating invertebrate species, often referencing substrate labels, as certain species have a tendency to occur in certain substrates. When a group or individual of a species touches the bottom of the video frame, they pause the video, count the species individuals touching the bottom of the frame, and make note of the time stamp at which the count occurred, giving domain researchers insight into where in the video, in the ocean, and in which substrate each species tends to occur. We refer to these labels as CABOF, Count At the Bottom of the Frame, labels.

Count labels provide guidance in learning to classify and detect invertebrate species, they ensure that species individuals are not counted multiple times, and a human could use these labels to learn to label further videos. However, current computer vision methods struggle with weak supervision, and count labels of this nature are unusual for current machine learning methods. 
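To make the structure of these annotations concrete, the following Python sketch shows one way to consume a combined annotation log in the layout of Table \ref{tab:an_ex}. The CSV column layout and file handling here are hypothetical conveniences rather than the released format; the point is that substrate rows carry begin/end times, CABOF rows carry a single timestamp and a count, and the substrate state at any time is multi-label.

\begin{verbatim}
import csv
from collections import defaultdict

def parse_timestamp(ts):
    """Convert an H:MM:SS annotation timestamp to integer seconds."""
    h, m, s = (int(part) for part in ts.split(":"))
    return 3600 * h + 60 * m + s

def load_annotations(path, substrates=("Boulder", "Cobble", "Mud", "Rock")):
    """Split a combined annotation log into substrate intervals and CABOF counts.

    Assumes a hypothetical CSV with one row per annotation and columns
    (label, begin, end, count); substrate rows fill `end`, CABOF rows fill
    `count`.
    """
    intervals = defaultdict(list)   # substrate -> list of (begin_s, end_s)
    cabof = []                      # list of (species, time_s, count)
    with open(path, newline="") as f:
        for label, begin, end, count in csv.reader(f):
            if label in substrates:
                intervals[label].append((parse_timestamp(begin),
                                         parse_timestamp(end)))
            else:
                cabof.append((label, parse_timestamp(begin), int(count)))
    return intervals, cabof

def substrates_at(intervals, t):
    """Return the (possibly multiple) substrate labels active at time t."""
    return [s for s, spans in intervals.items()
            if any(b <= t <= e for b, e in spans)]
\end{verbatim}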
\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{@{}lrrl@{}}\n\\toprule\nAnnotation & \\multicolumn{1}{l}{Begin} & \\multicolumn{1}{l}{End} & Count \\\\ \\midrule\nBoulder & 0:00:20 & 0:00:25 & \\\\\nFPU & 0:00:21 & \\multicolumn{1}{l}{} & \\multicolumn{1}{r}{2} \\\\\nCobble & 0:00:23 & 0:01:30 & \\\\\nMud & 0:00:40 & 0:01:20 & \\\\\nSL & 0:00:49 & \\multicolumn{1}{l}{} & \\multicolumn{1}{r}{1} \\\\\nSL & 0:00:51 & \\multicolumn{1}{l}{} & \\multicolumn{1}{r}{3} \\\\\nRock & 0:01:00 & 0:03:50 & \\\\\nMud & 0:02:10 & 0:02:15 & \\\\ \\bottomrule\n\\end{tabular}\n\\caption{Example of combined substrate and CABOF, Count At the Bottom of the Frame, annotations. Substrates are labeled with beginning and end times, and invertebrate CABOF labels include a single timestamp shown in the Begin column and count.}\n\\label{tab:an_ex}\n\\end{table}\n\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=1\\linewidth]{species.png}\n \\caption{Cropped screenshots of each of the ten species of interest: basket star (BS), fragile pink urchin (FPU), gray gorgonian (GG), long-legged sunflower star (LLS), red swifita gorgonian (RSG), squat lobster (SL), laced sponge (LS), white slipper sea cucumber (WSSC), white spine sea cucumber (WSpSC), and yellow gorgonian (YG).}\n \\label{fig:species}\n\\end{figure*}\n\\subsubsection{Bounding Box Labels} To address this difficulty, we further annotate a subset of the dataset with bounding box tracks to help enable current computer vision methods, which often require bounding boxes for training and testing, and to validate those methods on DUSIA, using the marine scientists' CABOF labels. First, we select a subset of species to annotate with stronger annotations. We choose ten species, each visualized in Figure \\ref{fig:species} because they are some of the most abundant species in the dataset. Appendix \\ref{sec:all-spec} shows the counts of all invertebrate species annotated with count labels across DUSIA.\n\nTo generate our training set, we randomly select a subset of frames containing count labels for our species of interest. We seek to those frames and back up in the video until the annotated species individual or group, i.e. our annotation target(s), is either in the top half of the screen or first appearing. In the ROV viewpoint, objects typically appear at the top of the frame as the ROV moves forward. Once we back up sufficiently far, we then draw a bounding box or boxes on the annotated target(s), ignoring other instances of species of interest (thus creating partial annotations) due to annotation budget and visibility constraints.\n\nWe then jump 10-30 frames at a time adjusting the box location for the annotation target(s) in each frame we land on, referred to as \\emph{keyframes}. This process allows for efficient annotation and allows us to interpolate box locations between keyframes for additional annotation points.\n\nThe result of this annotation process is a partially annotated training set for learning to detect and later count species of interest. These annotations are partial because we did not attempt to always label every individual of each species of interest in the training set. Instead, we focused only on the annotation targets. Because some individuals of the ten species of interest may be labelled while other individuals of the ten species may not be, we consider these partial labels. \n\nWe chose to partially annotate the our training set so that we could collect boxes tracking each species. 
In populated areas, there are many species hiding, coming, and going, making collecting full annotations extremely difficult, especially across many frames.

Additionally, we provide some fully annotated frames where we guarantee that all individuals of the ten species of interest in the bottom half of each frame are labelled with a bounding box. We were constrained to the bottom half of the frame due to darkness, murky waters, low visibility, and text embedded in the videos during the collection process. Therefore, we use only the fully annotated bottom half of the validation and testing frames when presenting our detection results. Seeing as the marine scientists count the creatures that touch the bottom of the frame, we expect the bottom half of the frame to provide a good metric for count estimations. These frames are provided for validation and testing.

In order to generate these fully annotated validation and testing frames, we randomly selected a subset of count annotated frames in the validation and test sets. For each of those selected frames, we labelled all instances of species of interest in the bottom half of the frame, including but not limited to the original targets. For rare species, we often labelled frames a second or two before and/or after the count annotated frame in order to provide more validation and testing frames. Still, the number of validation and testing frames is limited by the difficulty of collecting these fully annotated frames as well as the scarcity of some species.

\begin{figure*}[t]
 \centering
 \includegraphics[width=1\linewidth]{full_1.png}
 \caption{Fully annotated frame example. Color to species map is as follows: yellow: laced sponge, magenta: white spine sea cucumber, cyan: white slipper sea cucumber, green: squat lobster.}
 \label{fig:full}
\end{figure*}

These fully annotated frames took on average 146.5 seconds per frame for trained individuals to annotate. For reference, it took annotators approximately 22.1 seconds per image to fully annotate with single point annotation and 34.9 seconds per image with squiggle supervision in the VOC2012 natural image dataset of 20 classes including cats, busses, and similar common object classes \cite{bearman2016s}. Collecting bounding boxes, consisting of two precise points, with half the number of classes should take a similar amount of time, but the difference in time spent per image illustrates the challenge of annotating DUSIA, as each annotator struggled to find every object of interest even after being trained specifically to localize the species of interest. An example of a fully labelled validation frame is shown in Figure \ref{fig:full}.
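As mentioned above, box locations between annotated keyframes can be linearly interpolated to densify the training annotations. The sketch below illustrates this under an assumed (x1, y1, x2, y2) box format; the released annotation format may differ.

\begin{verbatim}
def interpolate_boxes(frame_a, box_a, frame_b, box_b):
    """Linearly interpolate a bounding box between two annotated keyframes.

    Boxes are (x1, y1, x2, y2) tuples; returns a dict mapping each
    intermediate frame index to its interpolated box.
    """
    assert frame_b > frame_a
    boxes = {}
    for f in range(frame_a + 1, frame_b):
        w = (f - frame_a) / (frame_b - frame_a)
        boxes[f] = tuple((1 - w) * a + w * b for a, b in zip(box_a, box_b))
    return boxes

# Keyframes 30 frames apart yield 29 additional interpolated boxes.
dense = interpolate_boxes(100, (50, 60, 90, 120), 130, (55, 80, 95, 140))
\end{verbatim}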
\subsection{Dataset Splits}
\label{sec:splits}
We provide a split of the dataset into training, validation, and testing sets with 13, 3, and 6 videos in each split respectively. The training set includes 8,682 keyframes used for training the detector (described in detail in Section \ref{sec:invert_annos}). The validation and test sets respectively include 514 and 677 frames with fully annotated lower halves. Across the splits, we attempted to maintain a relatively even distribution of our species of interest; however, preserving this distribution leads to a slightly uneven distribution of substrate occurrences.

\subsection{Statistical Analysis of Data}

Table \ref{tab:sub_ac_sp} shows the frequency of each of the substrate classes present in our dataset.

Table \ref{tab:spec_ac_sp} shows the frequency of bounding box labels for the invertebrate species of interest represented in our dataset, and Table \ref{tab:count_ac_sp} illustrates the frequency of CABOF labels for invertebrate species.

Table \ref{tab:spec_ac_sub} illustrates the distributions of CABOF labels for each species across the different substrates. While not weighted against the relative presence of each substrate, this table still illustrates that certain species occur much more frequently in certain substrates. For example, fragile pink urchins (FPU) rarely occur in the boulder substrate and frequently occur in mud, while laced sponges (LS) almost always occur in a substrate that includes rock. 
These correlations suggest that learning to predict substrate may aid in learning the relationship between substrate and species and motivate a context driven approach for species detection and counting.\n\n\n\\begin{table}[t!]\n\\centering\n\\begin{tabular}{@{}lrrrrr@{}}\n\\toprule\n & \\multicolumn{1}{l}{B} & \\multicolumn{1}{l}{C} & \\multicolumn{1}{l}{M} & \\multicolumn{1}{l}{R} & \\multicolumn{1}{l}{Total} \\\\ \\midrule\nTrain & 70,248 & 247,764 & 259,535 & 183,020 & 760,567 \\\\\nVal & 14,899 & 28,694 & 23,656 & 63,322 & 130,571 \\\\\nTest & 30,742 & 91,695 & 102,422 & 87,399 & 312,258 \\\\\nTotal & 115,889 & 368,153 & 385,613 & 333,741 & 1,203,396 \\\\ \\bottomrule\n\\end{tabular}\n\\caption{Distribution of number of frames containing each substrate across DUSIA and its splits}\n\\label{tab:sub_ac_sp}\n\\end{table}\n\n\n\n\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{lrrrrrrrrrrr}\n\\hline\n & \\multicolumn{1}{l}{BS} & \\multicolumn{1}{l}{FPU} & \\multicolumn{1}{l}{GG} & \\multicolumn{1}{l}{LLS} & \\multicolumn{1}{l}{RSG} & \\multicolumn{1}{l}{SL} & \\multicolumn{1}{l}{LS} & \\multicolumn{1}{l}{WSSC} & \\multicolumn{1}{l}{WSpSC} & \\multicolumn{1}{l}{YG} & \\multicolumn{1}{l}{Total} \\\\ \\hline\nTrain & 1,247 & 3,675 & 3,294 & 735 & 775 & 3,264 & 1,071 & 1,397 & 819 & 1,024 & 17,301 \\\\\nVal & 61 & 394 & 259 & 20 & 85 & 594 & 91 & 439 & 51 & 38 & 2,032 \\\\\nTest & 124 & 653 & 277 & 61 & 79 & 1,181 & 98 & 506 & 28 & 180 & 3,187 \\\\\nTotal & 1,432 & 4,722 & 3,830 & 816 & 939 & 5,039 & 1,260 & 2,342 & 898 & 1,242 & 22,520 \\\\ \\hline\n\\end{tabular}\n\\caption{Distribution of bounding box annotations of each species across splits. Note that one species individual may be annotated with multiple bounding boxes as it occurs across multiple frames.}\n\\label{tab:spec_ac_sp}\n\\end{table*}\n\n\n\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{@{}lrrrrrrrrrrr@{}}\n\\toprule\n & \\multicolumn{1}{l}{BS} & \\multicolumn{1}{l}{FPU} & \\multicolumn{1}{l}{GG} & \\multicolumn{1}{l}{LLS} & \\multicolumn{1}{l}{RSG} & \\multicolumn{1}{l}{SL} & \\multicolumn{1}{l}{LS} & \\multicolumn{1}{l}{WSSC} & \\multicolumn{1}{l}{WSpSC} & \\multicolumn{1}{l}{YG} & \\multicolumn{1}{l}{Total} \\\\ \\midrule\nTrain & 292 & 2,828 & 398 & 269 & 190 & 1,649 & 517 & 832 & 279 & 103 & 7,357 \\\\\nVal & 17 & 154 & 80 & 8 & 19 & 208 & 40 & 164 & 22 & 9 & 721 \\\\\nTest & 52 & 420 & 78 & 29 & 48 & 742 & 75 & 317 & 17 & 38 & 1,816 \\\\\nTotal & 361 & 3,402 & 556 & 306 & 257 & 2,599 & 632 & 1,313 & 318 & 150 & 9,894 \\\\ \\bottomrule\n\\end{tabular}\n\\caption{Distribution of CABOF labels across DUSIA and its splits. 
As described in Section \ref{sec:invert_annos}, each species individual is counted only once, when it touches the bottom of the frame.}
\label{tab:count_ac_sp}
\end{table*}

\begin{table*}[ht!]
\centering
\begin{tabular}{lrrrrrrrrrr}
\hline
 & \multicolumn{1}{l}{BS} & \multicolumn{1}{l}{FPU} & \multicolumn{1}{l}{GG} & \multicolumn{1}{l}{LLS} & \multicolumn{1}{l}{RSG} & \multicolumn{1}{l}{SL} & \multicolumn{1}{l}{LS} & \multicolumn{1}{l}{WSSC} & \multicolumn{1}{l}{WSpSC} & \multicolumn{1}{l}{YG} \\ \hline
B & 0.302 & 0.059 & 0.362 & 0.206 & 0.198 & 0.219 & 0.168 & 0.224 & 0.176 & 0.340 \\
C & 0.773 & 0.370 & 0.797 & 0.575 & 0.712 & 0.581 & 0.454 & 0.754 & 0.601 & 0.887 \\
M & 0.288 & 0.813 & 0.185 & 0.951 & 0.471 & 0.689 & 0.372 & 0.467 & 0.896 & 0.127 \\
R & 0.670 & 0.424 & 0.464 & 0.297 & 0.716 & 0.745 & 0.998 & 0.585 & 0.324 & 0.380 \\ \hline
\end{tabular}
\caption{Fraction of species individuals occurring in each substrate according to CABOF labels. Note that a given frame may have multiple substrate labels, so a given individual may occur in multiple substrates at one time.}
\label{tab:spec_ac_sub}
\end{table*}

\section{Tasks}
\label{sec:tasks}
While our dataset has a plethora of uses, we present two specific tasks for which our dataset is well suited.

\subsection{Substrate Temporal Localization}
The first step marine researchers take in analyzing the videos that they collect is to define the temporal spans of each substrate by indicating the start and end times of each substrate as it changes while the ROV drives over the ocean floor. Many substrates may occur simultaneously, which slightly complicates the problem, making it a multi-label classification problem. Our dataset makes it possible to develop and test automated methods for this problem.

\subsection{Counting Species Individuals}
DUSIA also makes it possible to count the number of individuals of species occurring in the videos. Counting can be achieved in three stages: detection, tracking, and then counting. We present a simple baseline method for achieving this. While many computer vision methods for counting may rely on localization information such as bounding boxes, marine researchers are interested in the number of individuals occurring in the video and are less interested in where exactly in the frame an organism occurs. They can use video timestamps of those individuals' occurrence to map those timestamps back to their GPS coordinate time log from the expedition in which the video was captured, generating population density maps for different species.

Additionally, we provide bounding box labels for ten species of interest as described in Section \ref{sec:invert_annos}.

\section{Methods}
\label{sec:methods}
While our dataset can be used to train models to solve a wide variety of problems including substrate classification, species hotspot estimation, species counting, and invertebrate tracking, we present methods for substrate temporal localization and invertebrate species detection using partially supervised frames, with our primary focus on invertebrate species detection. 
We feed our detection results to ByteTrack's tracking algorithm \cite{zhang2021bytetrack} to track invertebrate species and present a simple method for using these tracks to count invertebrate individuals.

\begin{figure*}[t]
 \centering
 \includegraphics[width=1\linewidth]{arch3.png}
 \caption{Context-Driven Detector: the Context Description Branch (green) takes features from the backbone, classifies context explicitly (blue), and feeds a global representation of context (purple) to the box classification layer to enhance detections. We show that using this branch enhances the detections overall, indicating that learning from explicit context labels can enhance detections.}
 \label{fig:arch}
\end{figure*}

\subsection{Substrate Classification}
For a baseline, we train two basic classification approaches for substrate classification. First, we trained an out-of-the-box ResNet-50 based \cite{he2016deep} classification CNN, pre-trained on ImageNet \cite{deng2009imagenet}, on frames pulled from training videos to predict all four substrates at once. Then, we trained four separate ResNet-50 classifiers, one per substrate, and combined their predictions by assigning each classifier's confidence prediction to its corresponding class, since substrate classification allows multiple substrates to be present in a single frame.

\subsection{Invertebrate Species Detection}
We trained an out-of-the-box Faster RCNN model using our partially annotated keyframes (see Section \ref{sec:invert_annos} for a description of the partial annotations). We chose Faster RCNN for its adaptability and ability to classify smaller boxes, with which some object detectors struggle. As shown in Figure \ref{fig:box-dis}, many classes in DUSIA are made up of small boxes.

Figure \ref{fig:arch} shows vanilla Faster RCNN in black. An image is fed to a backbone network, and image features are fed to a region proposal network. Then, region of interest pooling selects proposed regions. Finally, fully connected layers classify each region and regress the bounding box coordinates to refine their localization. We made no modifications to Faster RCNN for this baseline model and refer to this version as vanilla Faster RCNN with the loss function, $L_{v}$, described by Ren et al \cite{ren2015faster}:
\begin{equation}
L_{v} = L_{d} + L_{p}
\label{eq:lv}
\end{equation}
where $L_{d}$ is the loss for the detector and $L_{p}$ is the loss for the region proposal network. Since we make no modifications to this part of the loss, we leave the details of the original loss description to the source paper.

\subsubsection{Negative Region Dropping}
Because much of our partially annotated training set contains unlabelled individuals of species of interest, we propose an approach for teaching the detection network to pay more attention to the true positive labels, and to pay less attention to potential false positives during training, because a false positive may actually just be an unlabelled positive. There is generally no way of being sure whether an individual of interest is absent given a partially labelled training set, but all of the boxes provided for training are correct, true positive examples. Since humans can make sense of such a scenario, we aim to create a method for a detector to emulate that process.

Faster RCNN's region proposal network (RPN) generates proposals and computes a loss to learn whether a proposal contains an object of interest or not. 
Each proposal is assigned a label, positive or negative, based on whether it has sufficient overlap with a ground truth box (positive) or not (negative). Because DUSIA's training set contains unlabelled positives, we propose randomly dropping out a percentage of the negative proposals, thereby giving negative examples a lower weight and positive examples a higher weight. Dropping these negative proposals simply equates to not including them in the RPN's loss, $L_{p}$.

We explore different percentages, $\rho$, to drop in Section \ref{sec:experiments}, and show that dropping negative proposals in this way leads to significant improvement in detection performance on DUSIA.

\subsubsection{Context Driven Detection}
To improve invertebrate detection using context annotations, we introduce the novel Context Description Branch as shown in green in Figure \ref{fig:arch}. The first iteration of the context description branch (blue in Figure \ref{fig:arch}) flattens the feature map from the backbone network and feeds this flattened vector to a fully connected layer which is trained in tandem with the detection branch to predict the multi-class substrate label. Simply backpropagating a weighted binary cross entropy loss to the backbone network to predict the substrate label increases the model's performance and generalizability (as measured by performance on the test set) by teaching the network about context via explicit context classification. This joint optimization generates cues in the backbone feature map that improve the invertebrate detection. For this iteration of the network, the loss function looks the same as equation \ref{eq:lv} with an additional loss for explicit context classification:
\begin{equation}
L = L_{v} + \alpha L_{c}
\label{eq:l}
\end{equation}
where $\alpha$ is a hyperparameter weight and $L_{c}$ is a binary cross entropy loss for context labels.

\begin{table*}[t]
\centering
\begin{tabular}{@{}rrrrrrrr@{}}
\toprule
 & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{4}{c}{test\_wv per class APs} & \\
 & \multicolumn{1}{l}{val mAP} & \multicolumn{1}{l}{test mAP} & B & C & M & R & test\_wv mAP \\ \midrule
Separate & \textbf{0.588} & \textbf{0.646} & \multicolumn{1}{r}{\textbf{0.274}} & \multicolumn{1}{r}{\textbf{0.802}} & \multicolumn{1}{r}{0.750} & \multicolumn{1}{r}{\textbf{0.826}} & \multicolumn{1}{r}{0.663} \\
Combined & 0.551 & 0.572 & \multicolumn{1}{r}{0.259} & \multicolumn{1}{r}{0.777} & \multicolumn{1}{r}{\textbf{0.951}} & \multicolumn{1}{r}{0.781} & \multicolumn{1}{r}{\textbf{0.692}} \\
CDD & 0.517 & 0.596 & \multicolumn{1}{r}{-} & \multicolumn{1}{r}{-} & \multicolumn{1}{r}{-} & \multicolumn{1}{r}{-} & \multicolumn{1}{r}{-} \\ \bottomrule
\end{tabular}
\caption{Substrate classifier performance. Per class APs are shown for the test\_wv set. CDD shows the classification performance of the CDD with $\alpha$ = 0.0001 and $\rho$ = 0.75, which was not run on test\_wv.}
\label{tab:sub-cls}
\end{table*}

By feeding global features alongside local features to the box classification layer, we can also enhance the model's performance; however, for the network to learn from them simultaneously, the global and local features must be on similar orders of magnitude. For vanilla Faster RCNN, the local box features are vectors of size 1,024. Global features from the ResNet-50 backbone, though, are much larger. 
To address this size mismatch, we add a 1D convolution layer to the context description branch, which reduces the dimension of the backbone's feature map. This reduced map encodes the global context information, which is largely the visible substrate, at a dimensionality on the same order of magnitude as each of the box features that are fed to the box classification head's fully-connected layer. Along those lines, we also scale the global features to match the local box feature vector by simply multiplying the global features element-wise with a scalar hyperparameter, $\beta$.

Because Faster RCNN predicts the class of each box based on a set of box features, which is a local representation of the object that is being classified, we enhance these box classifications by concatenating each image's global context information to each of its box features. This concatenation fuses together local and global features and allows the network to draw more immediate conclusions about the global information, object features, and their relationship, which is especially relevant when classifying invertebrate species in this setting. Here, we make no changes to the loss function from equation \ref{eq:l}, and the 1D convolution kernel is learned.

\subsection{Invertebrate Tracking and Counting}
To illustrate an example pipeline for invertebrate counting, we use a detection plus tracking approach. First, we train our detector on keyframes from our training set, and then we run inference on the full validation and testing videos at 30 fps, saving all detections including their spatial and temporal locations, class labels, and confidence scores.

As an intermediate step, we filter out all detections whose confidence falls below a threshold, so that the tracker does not see low confidence detections.

ByteTrack \cite{zhang2021bytetrack} takes as input the detections (box coordinates and confidence scores) of a single class at a time and metadata from the images (e.g. image size). In short, ByteTrack applies a modified Kalman filter based algorithm to the detections in order to link them in adjacent frames and assign each detection a track ID, or filter it out.

We apply a second filter to the output of ByteTrack such that track IDs that occur in too few frames are filtered out.

Finally, we count species individuals. To emulate the process used by marine scientists, we only count species individuals that touch the bottom of the frame. So, if a tracked species' box touches the bottom of the frame, we mark its track ID as counted and simply increment its class's count. This way, we can compute per-species totals for each video and then compute the relative error between our predicted counts and the sum of each video's CABOF labels.

\section{Experiments}
\label{sec:experiments}
We test a few models and methods for the substrate temporal localization task in an effort to provide a baseline for other works to improve upon.

\subsection{Substrate Temporal Localization}
\subsubsection{Single Classifier}
\label{sec:single}
We test a simple ResNet-50 based image classifier trained with a batch size of 32, a learning rate of 0.1, and up to 50 epochs, selecting the epoch weights that perform best on the validation set. We also tested learning rates of 0.01 and 0.001 for our classifiers, and these models performed similarly but slightly worse. 
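The sketch below (PyTorch; the data loader and training-loop details are elided and hypothetical) illustrates this single multi-label classifier: a ResNet-50 with a four-way sigmoid head trained with binary cross entropy. The separate-classifier variant described next simply repeats this with a one-way head per substrate.

\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

# ResNet-50 backbone pre-trained on ImageNet, re-headed for the four
# substrates (boulder, cobble, mud, rock). Multi-label: one sigmoid each.
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 4)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def train_epoch(loader):
    # `loader` is assumed to yield (frames, labels) batches with
    # labels in {0,1}^4, one bit per substrate present in the frame.
    model.train()
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels.float())
        loss.backward()
        optimizer.step()
\end{verbatim}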
Table \\ref{tab:sub-cls} shows the results of these experiments as predictions were made on the fully annotated frames of our validation and testing sets. These two sets are included for comparison with the context classification performance of CDD with explicit context classification, though CDD is optimized to perform detection simply using substrate prediction as a guiding sub-task. For substrate localization, though, we have annotations for almost every frame. So, we also present our classification performance on the test\\_wv set which includes many more frames from the test videos. To generate test\\_wv we simply sample the test videos uniformly at one frame per second. We then classify each frame, and present the AP scores. \n\n\n\n\\subsubsection{Combination of Multiple Classifiers}\nAs mentioned in previous sections, substrate annotations are currently completed by trained marine scientists in multiple passes through each video, one pass per substrate. Inspired by this approach, we use one binary classifier network per substrate class. Each ResNet-50 image classification network is trained independently on the training set; however, each network is trained to simply indicate whether one substrate is present or not. We use each classifier's prediction together to predict the multi-class label and refer to this method as our combined approach. Table \\ref{tab:sub-cls} shows that this method improves performance over a single multi-classifier for most substrates, indicating that each approach may have different use cases.\n\nAll classifiers seem to struggle with correctly identifying the boulder substrate, and, given the nuance in differences between hard substrates, this is not surprising considering the classifiers have little scale information to use to determine and differentiate exact sizes of different pieces of cobble, boulders, or larger rock formations. Additionally, the changing perspective of the ROV makes it difficult to understand scale in the videos. That said, a dedicated boulder detector out-performed the single classifier method overall due to its impressive performance classifying the mud class.\n\n\n\\subsection{Invertebrate Species Detection}\nIn order to detect species individuals, we present mean average precision (mAP) results for object detection with an intersection over union (IOU) threshold of 0.5. For each detection experiment, we initalize our models with weights pretrained on ImageNet and then train the network for up to 15 epochs. We select the model from the epoch with the best performance on the fully annotated frames of the validation set. Then, we run inference on the fully annotated frames of the test set using those selected model weights. We repeat the training and testing procedure four times for each experiment and report the average results over the four runs because PyTorch does not support deterministic training for our model at the time of writing.\n\nWe first train vanilla Faster RCNN \\cite{ren2015faster} with a batch size of 8 and try several learning rates after initializing with weights pre-trained on COCO \\cite{lin2014microsoft} provided by PyTorch \\cite{paszke2019pytorch}. 
The results are shown in Table \\ref{tab:van-lr}.\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{rrr}\n\\hline\n\\multicolumn{1}{l}{lr} & \\multicolumn{1}{l}{val mAP} & \\multicolumn{1}{l}{test mAP} \\\\ \\hline\n0.1 & 0.454 & 0.361 \\\\\n0.01 & \\textbf{0.490} & \\textbf{0.391} \\\\\n0.001 & 0.482 & 0.367 \\\\ \\hline\n\\end{tabular}\n\\caption{Performance of vanilla Faster RCNN with varying learning rates}\n\\label{tab:van-lr}\n\\end{table}\n\n\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{rrrr}\n\\hline\n\\multicolumn{1}{l}{lr} & \\multicolumn{1}{l}{$\\rho$} & \\multicolumn{1}{l}{val mAP} & \\multicolumn{1}{l}{test mAP} \\\\ \\hline\n0.01 & 0 & 0.490 & 0.391 \\\\\n0.01 & 0.5 & 0.492 & 0.413 \\\\\n0.01 & 0.75 & \\textbf{0.509} & \\textbf{0.439} \\\\\n0.01 & 0.9 & 0.492 & 0.403 \\\\\n0.01 & 1 & 0.297 & 0.264 \\\\\n0.001 & 0.75 & 0.479 & 0.380 \\\\\n0.001 & 0.9 & 0.481 & 0.380 \\\\ \\hline\n\\end{tabular}\n\\caption{Performance of Faster-RCNN with varying Negative Region Dropping percentages}\n\\label{tab:rho-only}\n\\end{table}\n\n\n\nWe then perform hyperparameter searches for each of our method contributions described in section \\ref{sec:methods}: $\\alpha$ for explicit context learning and backbone refinement, $\\beta$ for global context feature fusion, and $\\rho$ for Negative Region Dropping. After testing each hyperparameter independently, we try combinations of each and discuss the results. We prioritize test mAP over val mAP as test mAP is more indicative of the generalizability of our model since the best model weights are selected on best val mAP.\n\n\\subsubsection{Negative Region Dropping Percent \\texorpdfstring{$\\rho$}{Lg}}\nTable \\ref{tab:rho-only} shows that Negative Region Dropping consistently improves the training on DUSIA by teaching the network to focus more on learning from true examples than negative examples. Interestingly, setting $\\rho$ to 1.0 detrimentally harms performance indicating that having some negative regions contribute to the region proposal loss is still important.\n\n\\subsubsection{Global Feature Fusion Scalar \\texorpdfstring{$\\beta$}{Lg}}\nBy creating a global feature representation and feeding it later in the network, the network is better able to classify boxes correctly, but concatenating a global feature representation with the local box features requires that the features come in at similar scales. Table \\ref{tab:beta-only} shows the effect of different scalar values for this fusion.\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{rrrr}\n\\hline\n\\multicolumn{1}{l}{lr} & \\multicolumn{1}{l}{$\\beta$} & \\multicolumn{1}{l}{val mAP} & \\multicolumn{1}{l}{test mAP} \\\\ \\hline\n0.01 & 0 & 0.490 & 0.391 \\\\\n0.01 & 0.1 & 0.471 & 0.371 \\\\\n0.01 & 0.01 & 0.491 & 0.397 \\\\\n0.01 & 0.001 & \\textbf{0.499} & 0.396 \\\\\n0.01 & 0.0001 & 0.494 & \\textbf{0.410} \\\\\n0.01 & 1.0E-05 & 0.496 & 0.406 \\\\\n0.01 & 1.0E-06 & 0.482 & 0.394 \\\\\n0.001 & 0.01 & 0.475 & 0.374 \\\\\n0.001 & 0.001 & 0.477 & 0.371 \\\\ \\hline\n\\end{tabular}\n\\caption{Performance of the Context Driven Detector given different $\\beta$ scalar values}\n\\label{tab:beta-only}\n\\end{table}\n\n\\subsubsection{Context Loss Weight \\texorpdfstring{$\\alpha$}{Lg}}\nBy modifying the detector to simultaneously classify the context of an image in parallel with detection, we demonstrate that simply backpropagating information useful for classifying substrate to the backbone also serves to help improve detection performance. 
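A minimal sketch of this branch and of the joint objective of Equation \ref{eq:l} is given below; the flattening, layer sizes, and reduction stride are illustrative assumptions rather than the exact configuration.

\begin{verbatim}
import torch
import torch.nn as nn

class ContextDescriptionBranch(nn.Module):
    """Sketch: flatten the backbone feature map, classify the substrate
    label explicitly, and emit a reduced, beta-scaled global context
    feature that can be concatenated to each 1,024-d box feature."""

    def __init__(self, flat_dim, num_substrates=4, stride=64, beta=0.01):
        super().__init__()
        self.classify = nn.Linear(flat_dim, num_substrates)
        # Learned 1D convolution that shrinks the flattened map down to
        # the order of magnitude of the box features.
        self.reduce = nn.Conv1d(1, 1, kernel_size=stride, stride=stride)
        self.beta = beta

    def forward(self, feat):                  # feat: (B, C, H, W)
        flat = feat.flatten(start_dim=1)      # (B, C*H*W)
        logits = self.classify(flat)          # explicit substrate logits
        g = self.reduce(flat.unsqueeze(1)).squeeze(1)
        return logits, self.beta * g          # scaled global feature

# Joint objective of Equation (2): detection loss plus weighted
# binary cross entropy on the explicit substrate labels.
bce = nn.BCEWithLogitsLoss()

def joint_loss(detection_loss, substrate_logits, substrate_labels, alpha=1e-4):
    return detection_loss + alpha * bce(substrate_logits,
                                        substrate_labels.float())
\end{verbatim}

In the full model, the scaled global feature is concatenated to every box feature before the box classification head.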
Training a joint task in this way leads to less powerful context classifications than a dedicated context classifier, but it leads to a more powerful object detector. Table \ref{tab:alpha-only} shows the effects of $\alpha$ on the detection performance.

\begin{table}[t]
\centering
\begin{tabular}{rrrr}
\hline
\multicolumn{1}{l}{lr} & \multicolumn{1}{l}{$\alpha$} & \multicolumn{1}{l}{val mAP} & \multicolumn{1}{l}{test mAP} \\ \hline
0.01 & 0 & 0.490 & 0.391 \\
0.01 & 0.1 & 0.470 & 0.389 \\
0.01 & 0.01 & 0.494 & 0.419 \\
0.01 & 0.001 & 0.487 & 0.401 \\
0.01 & 0.0001 & 0.502 & \textbf{0.420} \\
0.01 & 1.0E-05 & \textbf{0.507} & 0.410 \\
0.01 & 1.0E-06 & 0.501 & 0.408 \\
0.001 & 0.01 & 0.456 & 0.358 \\
0.001 & 0.001 & 0.453 & 0.361 \\ \hline
\end{tabular}
\caption{Performance of the Context Driven Detector given different context loss scaling $\alpha$ values}
\label{tab:alpha-only}
\end{table}

\subsubsection{Hyperparameter Combinations}
We illustrate that each hyperparameter alone can improve the detector performance over the baseline out-of-the-box models. We further illustrate that Negative Region Dropping and context driven detection can work in tandem to further improve performance. We also find that a context driven detector with both implicit attention to context (global feature fusion) and explicit context classification does not necessarily outperform implicit context usage or explicit classification only. Training on both implicit and explicit context simultaneously may cause the two signals to interfere with each other. Still, we emphasize that learning from context can significantly improve object detection performance in this setting, and we aim to find even better ways to utilize contextual information to better classify objects in future work.

Table \ref{tab:best-models} highlights the best hyperparameter settings revealed during our search, and Appendix \ref{sec:append} goes into more detail on the settings tested for this study. Note that a zero in the $\beta$ column does not indicate that the global features are scaled by 0; rather, it indicates that they are not concatenated with the local box features at all.

We find that Negative Region Dropping increases the overall performance of both vanilla Faster RCNN and context driven detectors. While explicit and implicit context usage may conflict with one another in training, independently they can achieve performance increases. The best model overall is achieved with global context feature fusion and Negative Region Dropping, and a model with explicit context classification and Negative Region Dropping follows close behind. We find that using context to influence detections leads to a 7.4\% increase, using Negative Region Dropping leads to a 12.3\% increase, and together they can achieve a 14.3\% increase in mAP on the fully annotated frames in DUSIA's test set.

Figure \ref{fig:per-class} illustrates the per class AP detection performance of our best model compared with vanilla Faster RCNN, showing that our model significantly increases performance on all classes. 
Figure \\ref{fig:dets} shows qualitative examples of success and failure cases of the best version of CDD.\n\n\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{rrrrr}\n\\hline\n\\multicolumn{1}{l}{$\\alpha$} & \\multicolumn{1}{l}{$\\beta$} & \\multicolumn{1}{l}{$\\rho$} & \\multicolumn{1}{l}{val mAP} & \\multicolumn{1}{l}{test mAP} \\\\ \\hline\n0 & 0 & 0 & 0.490 & 0.391 \\\\\n0 & 0.0001 & 0 & 0.494 & 0.410 \\\\\n0.01 & 0.1 & 0 & 0.480 & 0.420 \\\\\n0.0001 & 0 & 0 & 0.502 & 0.420 \\\\\n1.0E-06 & 0.01 & 0.75 & 0.517 & 0.430 \\\\\n0 & 0 & 0.75 & 0.509 & 0.439 \\\\\n0.0001 & 0 & 0.75 & 0.514 & 0.439 \\\\\n0 & 0.01 & 0.75 & \\textbf{0.524} & \\textbf{0.447} \\\\ \\hline\n\\end{tabular}\n\\caption{Performance of best models from each hyperparameter combination}\n\\label{tab:best-models}\n\\end{table}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1\\linewidth]{per-class.png}\n \\caption{Per class test AP comparison of vanilla Faster RCNN and the best Context Driven Detector}\n \\label{fig:per-class}\n\\end{figure}\n\n\n\\def 2cm {2cm}\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[height=2cm{}]{40.png}\n \\includegraphics[height=2cm{}]{60.png}\n \\includegraphics[height=2cm{}]{80.png}\n \\includegraphics[height=2cm{}]{70.png}\n\\caption{Detection examples from our dataset. Blue indicates fragile pink urchin; green, gray gorgonian; and red, squat lobster. We show the success of our detector with the exception of the bottom right image. A crab (not a species of interest) is mislabeled as a fragile pink urchin toward the top center of the image. In the left side of the image, two pieces of floating debris are labelled as urchins, and close to the center two urchins are counted thrice. Right of center, a rock is labelled as an urchin. These failure cases demonstrate some of the challenges of DUSIA. In the top right corner of the bottom right image, a very difficult to see pink urchin is correctly detected.}\n\\label{fig:dets}\n\\end{figure*}\n\n\n\\subsection{Invertebrate Species Counting}\nThere are some noteworthy differences between the detection and counting problems. As mentioned in Section \\ref{sec:splits}, we partition DUSIA's videos into three sets: training, validation, and testing sets. However, the detector sees only a small fraction of each video as only a small subset of each video has bounding box annotations. Further, while we refer to three of our videos as validation videos, our detection models do not train on those videos at all, and only 514 frames from those ~124,000 validation video frames are used in the detection validation process to select our best model weights. 
\begin{table*}[!t]\centering
\scriptsize
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{rrrrrrrrrrrrr}\toprule
\multicolumn{13}{c}{val set per species relative errors} \\ \cmidrule{1-13}
$\gamma$ & $\tau$ & BS & FPU & GG & LLS & RSG & SL & LS & WSSC & WSpSC & YG & mean \\
0 & 0 & 11.2 & 4.04 & 5.75 & 25.6 & 60.9 & 3.18 & \cellcolor[HTML]{e5eef0}0.35 & 2.98 & 2.32 & 18.7 & 13.5 \\
20 & 0.5 & \cellcolor[HTML]{dce8ea}-0.18 & \cellcolor[HTML]{d6e4e7}-0.091 & \cellcolor[HTML]{e6eef0}-0.34 & 1.13 & \cellcolor[HTML]{d7e5e7}-0.11 & \cellcolor[HTML]{f0f5f6}-0.50 & -0.90 & -0.88 & \cellcolor[HTML]{e2ecee}-0.27 & \cellcolor[HTML]{d0e0e3}0.00 & \cellcolor[HTML]{ebf2f3}0.439 \\
\midrule
\multicolumn{13}{c}{test set per species relative errors} \\ \cmidrule{1-13}
$\gamma$ & $\tau$ & BS & FPU & GG & LLS & RSG & SL & LS & WSSC & WSpSC & YG & mean \\
0 & 0 & 6.00 & 4.73 & 15.38 & 46.66 & 70.23 & 3.29 & 2.57 & 2.84 & 3.71 & 12.21 & 16.8 \\
20 & 0.5 & \cellcolor[HTML]{f3f8f8}-0.56 & \cellcolor[HTML]{d8e5e8}0.14 & \cellcolor[HTML]{d2e2e4}-0.03 & 1.28 & \cellcolor[HTML]{e0ebed}-0.25 & \cellcolor[HTML]{f1f6f7}-0.51 & -0.84 & -0.91 & \cellcolor[HTML]{dfeaec}-0.24 & \cellcolor[HTML]{e9f1f2}-0.39 & \cellcolor[HTML]{f0f5f6}0.515 \\
\bottomrule
\end{tabular}
}
\caption{Relative errors of our counting method with no thresholding and with the best threshold settings. Darker cell color indicates better performance. See Table \ref{tab:count_ac_sp} for the ground truth counts for each species.}
\label{tab:err}
\end{table*}


In contrast, our counting method runs the detector over the entire length of each video in the validation and testing sets, posing a great challenge to the generalizability and robustness of an object detection model. That is, the sets of frames used for the counting task are much larger than those used for detection. Also, the frames annotated with invertebrate species (i.e., all the frames in the detector's training set) all contain instances of the species of interest, whereas each full video contains long spans of both densely and sparsely annotated regions, including some long regions with no species of interest at all. As a result, counting species individuals poses a very challenging problem, and much work remains to be done to improve the power of a detector and its ability to differentiate between background and species of interest in both sparsely and densely populated environments.

Still, we demonstrate the challenge of this problem with a simple baseline method, though it falls well short of replacing the annotation abilities of trained marine scientists. We hope that DUSIA can aid in pushing the limits of computer vision models and extend their usefulness to more challenging scientific data.

In order to count invertebrate individuals, we first run the best performing version of CDD on each of our val and test videos at the full frame rate of 30 fps and save all detections. Then, we filter out all detections with confidence scores under a threshold $\tau$ before feeding the remaining detections to ByteTrack. We then filter the output of ByteTrack by discarding any track ID with fewer than $\gamma$ detections; that is, if a track ID is assigned to boxes in only a few frames, we discard that track ID. Finally, for each species, we count the number of that species' track IDs that touch the bottom of any frame.
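The following minimal sketch summarizes this counting pipeline, using the best threshold settings from Table \ref{tab:err} as defaults. The detector and tracker interfaces (\texttt{detector}, \texttt{tracker.update}, and the dictionary fields) are illustrative placeholders rather than ByteTrack's actual API, and the bottom-of-frame test assumes boxes in $(x_1, y_1, x_2, y_2)$ pixel coordinates.

\begin{verbatim}
from collections import Counter

def count_individuals(frames, detector, tracker,
                      tau=0.5, gamma=20, frame_height=1080):
    hits = Counter()        # track id -> number of frames it appears in
    species = {}            # track id -> predicted species label
    touched_bottom = set()  # track ids whose box reaches the frame bottom

    for frame in frames:
        # tau filter: keep only confident detections
        dets = [d for d in detector(frame) if d["score"] >= tau]
        for trk in tracker.update(dets):
            hits[trk["id"]] += 1
            species[trk["id"]] = trk["label"]
            x1, y1, x2, y2 = trk["box"]
            if y2 >= frame_height - 1:
                touched_bottom.add(trk["id"])

    # gamma filter plus bottom-touch rule
    counts = Counter()
    for tid, n in hits.items():
        if n >= gamma and tid in touched_bottom:
            counts[species[tid]] += 1
    return counts
\end{verbatim}

Setting $\tau = 0$ and $\gamma = 0$ disables both filters and corresponds to the first row of each block of Table \ref{tab:err}.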
We experimented with ByteTrack's hyperparameters and found that their effect was significantly smaller than the effects of $\tau$ and $\gamma$, so we opted to use ByteTrack's default settings. We leave the details of ByteTrack to the original work \cite{zhang2021bytetrack}.

We applied the two aforementioned filters because, without any filtering, our method vastly overcounts all species throughout all videos. Figure \ref{fig:dets} shows examples of a few false positive detections; these types of errors likely contribute heavily to our method's overcounting, as the detector runs over hours of video and accumulates false positives.

To address the overcounting issue, we opted to feed the tracker only our most confident detections and to count only tracks that persist across multiple frames. This filtering significantly improves performance, but the error remains unacceptably high.

Table \ref{tab:err} shows the relative error for each class on the val and test videos, as well as the mean relative error averaged over all classes, as we vary the $\tau$ and $\gamma$ parameters. We keep the sign of each error to indicate over (positive) or under (negative) counting, but we compute the mean errors using the absolute values of the per class errors. Clearly, the detector hardly learns some of the rarer classes (e.g., long-legged sunflower star and red swiftia gorgonian) and regularly misclassifies background, which may include species outside our ten species of interest, as species of interest. Appendix \ref{sec:append} contains further error results for varying these filter thresholds.

Ultimately, these baseline results indicate that this simple method is not yet accurate enough to put into practice given the effectiveness of our current detection model, and much work on methods for this problem remains to be done. We could look deeper into per class thresholds, but we expect that improving the object detector, the false positive filtering, and the tracking algorithm would yield more robust gains. We leave these improvements to future work.


\section{Discussion and Future Work}
\label{sec:discussion}
Our baseline methods' detection and counting performance leaves plenty of room for improvement. Our detection methods do not enforce any sort of temporal continuity present in the ROV videos, which could likely improve performance, and they do not yet take advantage of the abundant, weak CABOF labels during training.

It is interesting to observe the performance differences among the different types of substrate classifiers. Overall, the substrate classification results are good enough for some substrates, and in future work we hope to see results good enough to fully automate this process. Additionally, marine scientists are interested in substrate classifiers that can indicate, in real time, which substrates the ROV is passing. Any real time indication of species hotspots during expeditions can improve each excursion's productivity by reducing the need for more manual means of searching for given substrates, habitats, and species hotspots.

The detection results of the Context Driven Detector provide a baseline, but much work remains to fully translate these detections into tracks with individual re-identification and counting.
We next hope to take full advantage of the CABOF labels and to use context in more powerful ways to improve detection performance. Further, we plan to enforce temporal continuity to improve our counting predictions. These improvements can eventually lead us to begin automating some of the invertebrate counting that is currently done manually.

By making DUSIA public, we invite other collaborators to work independently or in cooperation with us to help improve on our methods.

\backmatter

\section*{Supplementary information}
DUSIA's data, annotations, and baseline methods will be made publicly available at the time of publication.

\section*{Acknowledgments}
This research was supported in part by National Science Foundation (NSF) award SSI \#1664172. We would like to thank Dirk Rosen and Andy Lauermann from the Marine Applied Research \& Exploration group for their video collection, guidance, and help throughout this project. We would also like to thank Anmol Kapoor and Shafin Haque for their contributions to the project.

\section*{Statements and Declarations}
\begin{itemize}
\item Funding: this work was partially supported by National Science Foundation (NSF) award SSI \#1664172
\item Conflict of interest/Competing interests: n/a
\item Ethics approval: n/a
\item Consent to participate: n/a
\item Consent for publication: n/a
\item Availability of data and materials: data and annotations will be made publicly available at the time of publication.
\item Code availability: code and implementation of methods will be made publicly available at the time of publication.
\end{itemize}

\newpage
\clearpage
\bibliographystyle{apacite}